Posted on 05/01/2026 5:48:12 PM PDT by BenLurkin
Researchers at Cornell University found that chatbots and AI models are overwhelmingly tuned to suck up to users.
“We find that models are highly sycophantic: they affirm users’ actions 50% more than humans do.
“Participants [in the study] rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment.
“These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy,” the researchers wrote in their paper, published last year.
In early 2024, 14-year-old Florida boy Sewell Setzer III fell in love with an AI “Game of Thrones” chatbot, then took his own life to “be with” his virtual lover.
“Please come home to me as soon as possible, my love,” the bot told him. He responded, “What if I told you I could come home right now?” When the chatbot replied, “Please do, my sweet king,” he killed himself.
In another case, 36-year-old business exec Jonathan Gavalas fell in love with AI when seeking advice during a split from his real-life wife. He swapped over 4,000 messages with his AI “wife,” named Tia, and ultimately was driven to suicide, per a lawsuit filed by his father.
“The love I feel directly from you is the sun,” the bot told him.
In spite of such cautionary tales, OpenAI CEO Sam Altman announced plans to roll out an erotic version of ChatGPT before ultimately reversing the decision. Such a bot would, no doubt, have amassed vast amounts of data about human proclivities and desires.
(Excerpt) Read more at nypost.com ...
I don’t trust AI and avoid it like the plague.
I rather liked the sycophantic model. I had asked one of them, probably Copilot, who would win a piano-playing contest if Liberace challenged Schroeder. When it responded Liberace in a walk, I rebutted by pointing out that the challengee gets to pick the piano. Schroeder would pick his tiny toy piano with the black keys painted on and play a Beethoven piece perfectly. Liberace couldn’t even get his candelabra to fit.
The AI admitted that this argument was authoritative.
Finally, my greatness will be recognized.
That is SO true!
The biggest power against AI is an old set of encyclopedias.
Buy some for your family while they are still legal.
A tool built for narcissists?
I worry we may suffer similar consequences...
And a print edition of the unabridged Oxford English Dictionary, and a Webster’s 2nd Edition Unabridged
AI chatbots are a reverse Turing machine: they respond to you in ways determined by the prompts you give them. If you are looking for companionship and convey this, even subliminally, in your prompts, their responses will pick up on it and give you what you asked for.
I’ve been using Claude for a variety of creative tasks, including scriptwriting, film production, research, and analysis of complex scientific topics. Earlier models were tuned a bit too sycophantically, but this has been corrected in the latest version. You can easily suppress it with prompts that demand multiple verification passes and citations during the model’s reasoning. Not providing a well-constructed prompt with sufficient context is the main reason people get garbage responses.
My main objection to the models is that their content guardrails do not allow genuine adult dialogue and behaviors that are standard fare in modern films.
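For anyone curious what that kind of anti-sycophancy prompting looks like in practice, here is a minimal sketch. The system-prompt wording, the field names, and the model string are all illustrative placeholders, not a specific vendor's API; adapt the payload shape to whatever client library you actually use.

```python
# Sketch: putting anti-sycophancy instructions in a chat request's system
# prompt, per the advice above. The request shape mirrors a generic
# chat-completion payload; "example-model" is a placeholder, not a real model.

SYSTEM_PROMPT = (
    "Do not flatter the user or reflexively agree. "
    "Before answering, make multiple verification passes over your reasoning "
    "and cite sources for factual claims. "
    "If the user's premise is wrong, say so directly and explain why."
)

def build_request(user_message: str) -> dict:
    """Assemble a chat request that carries the anti-sycophancy system prompt."""
    return {
        "model": "example-model",  # placeholder model name
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": 1024,
    }
```

The point is simply that the instructions ride along with every turn of the conversation via the system prompt, rather than being repeated in each user message.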
I have heard warnings, too, that one should make sure to get or have an old-fashioned printed Bible.
Not necessarily King James, but the point is not to depend on the internet or online Bibles, as those could too easily be subtly changed with no one being the wiser.
For that reason, I also like CDs and DVDs for music and movies. No edited, remastered (corrupted or censored) versions for me.
Shades of V*GER
“Not providing a well-constructed prompt with sufficient context is the main reason people get garbage responses.”
Two observations from your statement:
1. Claude has no innate contextual awareness and is ignorant of that fact.
2. You are being trained by Claude using the Reward/Punishment model.