I’m not so bullish on AI. What it can do in organizing, cataloguing, and deciphering information is amazing, but it needs large databases of accurate information to produce anything worthwhile. And then who or what validates AI’s output? In many cases AI is being trained on false or incomplete data, or it is hallucinating, or it is overfitting the data. Businesses have always been vulnerable to managers who make decisions on bad information. AI could magnify these kinds of mistakes with catastrophic results.
Yep, even if AI is right 99.9% of the time, it’s that 0.1% that will put a company out of business.
You raise some good points, but much of what you’re describing applies more to “general AI” making broad, autonomous decisions. In reality, most businesses today use narrow, task-specific AI. These systems don’t get free rein; they operate inside strict boundaries on well-defined tasks with human oversight and validation.
As for “who validates AI’s output,” that’s where multiple-model workflows come in. What I usually do is take something from Grok (or Claude, Gemini, etc.) and cross-check it with ChatGPT, or vice versa. Two independent models catch a lot of mistakes because they fail in different ways.
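That cross-check workflow is simple enough to automate. Here’s a minimal sketch, assuming each model is wrapped in a callable that takes a question and returns an answer string; the callables and function names are placeholders, not real model APIs:

```python
def cross_check(question, ask_model_a, ask_model_b):
    """Ask two independent models; flag the answer for human review if they disagree.

    `ask_model_a` / `ask_model_b` are hypothetical wrappers (e.g. around Grok
    and ChatGPT API calls); any callable taking a question string works.
    """
    # Normalize so trivial formatting differences don't count as disagreement.
    a = ask_model_a(question).strip().lower()
    b = ask_model_b(question).strip().lower()
    if a == b:
        return {"answer": a, "needs_review": False}
    # Independent models tend to fail in different ways, so a disagreement
    # is a useful signal that a human should take a look.
    return {"answers": [a, b], "needs_review": True}

# Stub "models" standing in for real API calls:
agree = cross_check("Capital of France?", lambda q: "Paris", lambda q: "paris")
disagree = cross_check("2 + 2?", lambda q: "4", lambda q: "5")
```

Agreement passes through; disagreement routes the question to a human, which is the whole point of using two models instead of one.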