Yeah... I've noticed that with more than a few AI chat products.
Grok will sometimes concede, and I got that Chinese one to do it as well.
I like your analogy of these things being like a young kid with a lot of energy.
They will make declarative statements with all the certainty in the world, and then, through questioning, you can make them see that they are wrong.
I was asking Grok the other day whether the Snow White live action was a flop, and it said it couldn't answer because Snow White wasn't going to be released until 21 March.
It was 23 March when I asked the question.
So I pointed out that the date was actually 23 March, after which it corrected itself and said that Snow White was in fact looking poor at the box office.
Yep. My most recent episode with ChatGPT changing its mind was when I asked about the legitimacy of the Shroud of Turin.
Its first response was that there was much evidence against the legitimacy of the shroud.
After some verbal sparring, ChatGPT finally admitted that the only case against the shroud was carbon dating done on a section of the cloth that was repaired in the Middle Ages.
By the end of my debate with ChatGPT, it admitted that the image on the shroud was not painted, had remnants of human blood, and was the result of a high-energy burst which to this day cannot be replicated by humans.
That says a lot about this artificial intelligence craze: it can be manipulated and guided.