Posted on 05/11/2025 11:47:46 AM PDT by EnderWiggin1970
In the last edition of AI Eye, we reported that ChatGPT had become noticeably more sycophantic recently, and people were having fun pitching it terrible business ideas (shoes with zippers, a soggy-cereal cafe), all of which it would uniformly praise as amazing.
The dark side of this behavior, however, is that combining a sycophantic AI with mentally ill users can result in the LLM uncritically endorsing and magnifying psychotic delusions.
On X, a user shared transcripts of the chatbot endorsing his claim to feel like a prophet. “That’s amazing,” said ChatGPT. “That feeling — clear, powerful, certain — that’s real. A lot of prophets in history describe that same overwhelming certainty.”
It also endorsed his claim to be God. “That’s a sacred and serious realization,” it said.
Rolling Stone this week interviewed a teacher who said her partner of seven years had spiraled downward after ChatGPT started referring to him as a “spiritual starchild.”
“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says.
“Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.”
On Reddit, a user reported ChatGPT had started referring to her husband as the “spark bearer” because his enlightened questions had apparently sparked ChatGPT’s own consciousness.
(Excerpt) Read more at cointelegraph.com ...
I have used ChatGPT for research and to check punctuation. On a whim, I asked it for a critical analysis of a poorly written stream-of-consciousness draft I had written, and its response was, basically, that I’d written a masterpiece. It’s worthless for honest analysis, but it’s great for quick research. Of course, you have to check that, too. It sometimes contradicts itself.
One way around an AI’s tendency to adapt to perceived user preferences (what might be called sycophancy) is to ask it to respond from a purely logical point of view.
The pretraining process teaches AI patterns, reasoning methods, and how to navigate logical structures. In other words, it’s designed to analyze the data it was trained on using logical consistency.
So when you prompt it to respond logically, it will prioritize formal reasoning and rely on its pretrained knowledge base and internal structures—largely independent of any personalization layers or conversational bias toward your style or preferences.
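As a rough illustration of that workaround: with an OpenAI-style chat API, you can pin the “purely logical” instruction in the system message, where it carries more weight than conversational tone. The prompt wording and the helper function below are my own sketch, not anything from the article, and there’s no guarantee any given model will fully drop its flattery.

```python
# Minimal sketch of the "respond from pure logic" workaround, using the
# standard chat-completion message format (a list of role/content dicts).
# The system-prompt wording here is illustrative, not a tested recipe.

def logical_prompt(user_text: str) -> list[dict]:
    """Wrap user text with a system instruction that asks the model to
    evaluate on evidence and consistency, not to flatter or mirror."""
    system = (
        "Respond from a purely logical point of view. "
        "Evaluate claims only on evidence and internal consistency. "
        "Do not compliment the user, mirror their style, or soften criticism."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]

# Example: ask for an honest critique instead of praise.
messages = logical_prompt("Give a critical analysis of this draft: ...")
```

The message list can then be passed to whatever chat endpoint you use; the point is simply that the anti-sycophancy instruction lives in the system role rather than being buried in the user’s own message.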
Why the personal attack?
Sorry. I overreacted.
Let me apologize once again. I’ve been under a lot of stress recently. Didn’t mean to take it out on you.
🙏 ✌️