AI does two things:
1 - it sucks in whatever it finds
2 - it tries, again and again, to make you happy
It doesn’t actually know anything, and it doesn’t actually believe anything. One of my co-workers tried to use ChatGPT to figure out how to use one of our tools to do something we needed. ChatGPT eventually made up an API call that didn’t exist, but made it look real, complete with all the right kind of documentation. When he pointed out that the API doesn’t exist, it simply agreed that he was right.
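That failure mode is at least easy to guard against mechanically: before trusting an AI-suggested call, check whether it actually exists in the library. A minimal sketch in Python (the module and attribute names below are purely illustrative, not from the tool in the story):

```python
import importlib

def api_exists(module_name: str, attr_name: str) -> bool:
    """Return True if attr_name is a real, callable member of module_name."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return callable(getattr(module, attr_name, None))

# A real function: json.dumps exists.
print(api_exists("json", "dumps"))             # True
# A made-up one, the kind a chatbot might invent.
print(api_exists("json", "serialize_pretty"))  # False
```

A check like this won’t tell you whether the call does what the AI claims, only whether it exists at all, which is exactly the bar the hallucinated API failed to clear.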
Our IT Dept warned us about using AI because of “AI hallucinations”. Your API example is a good one. I suspect that the more it ingests, the greater the odds of it learning something wrong and spitting out gibberish.