I went to this “Chat AI” site and saw it’s an aggregator that claims to use “AI-assisted journalism with RIGOROUS human editorial oversight to deliver accurate reporting.” Of course I totally understand what you’re saying... AI is nothing more than a search engine that scours a database that can be manipulated by humans, so any information it puts out is susceptible to error, as is anything human. God is the only omnipotent perfection.
It does what humans tell it to do.
All an LLM does is take input and output probabilities for what the next token will be, based on how it was trained.
And depending on the sampling settings you specify, it won’t even necessarily choose the most probable token.
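A minimal sketch of what that looks like, using a made-up toy distribution of three candidate tokens (the scores are invented for illustration, not from any real model):

```python
import math
import random

# Toy "logits": raw scores a model might assign to candidate next tokens.
# These values are hypothetical, purely for illustration.
logits = {"dog": 2.0, "cat": 1.5, "banana": -1.0}

def softmax(scores, temperature=1.0):
    """Convert raw scores into probabilities that sum to 1.
    Higher temperature flattens the distribution; lower sharpens it."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample_token(scores, temperature=1.0):
    """Pick a token at random, weighted by its probability —
    so the most probable token is likely, but not guaranteed."""
    probs = softmax(scores, temperature)
    tokens = list(probs)
    return random.choices(tokens, weights=[probs[t] for t in tokens])[0]

probs = softmax(logits)
greedy = max(probs, key=probs.get)   # greedy decoding always takes the top token
sampled = sample_token(logits)       # sampling can return any token
print(greedy)   # "dog" every time
print(sampled)  # usually "dog", sometimes "cat", rarely "banana"
```

Greedy decoding (always taking the argmax) is what you get when you turn sampling off; with a temperature above zero, the output varies from run to run.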