The problem is, if AI does a good job 99% of the time, what about that 1% when it messes up? And who will know how to fix it?
I just asked Perplexity.ai which NFC division the AFC West will play in 2026. It actually claimed that the AFC West played the NFC West in 2024, when it played the NFC South. That was a VERY straightforward question to botch.
At least for what we do in engineering, it’s amazing how much AI can assist when used in the right context. That said, we take a “use but review” approach - no different than if an engineer had produced the work without the tool. Everything must be reviewed by people. There’s so much busy work, especially writing requirements and test cases, where AI is far faster than any human.
We’re also using it to clean up documentation from engineers whose English is weaker (it’s their second language), basically rewriting it so everything reads at a consistent level.
I suppose my argument is: yes, it’ll mess up - but when do humans not? What is the error rate in any job done by a human? For us, it’s not replacing people, it’s making them more productive. I still need dozens more people. Although there are definitely clerical positions that might be at risk.