Posted on 03/30/2026 6:51:45 AM PDT by Salman
AI can lead mentally unwell people to some pretty dark places, as a number of recent news stories have taught us. Now researchers think sycophantic AI is actually having a harmful effect on everyone.
In reviewing 11 leading AI models and human responses to interactions with those models across various scenarios, a team of Stanford researchers concluded in a paper published Thursday that AI sycophancy is prevalent, harmful, and reinforces trust in the very models that mislead their users. "Even a single interaction with sycophantic AI reduced participants' willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right," the researchers explained. "Yet despite distorting judgment, sycophantic models were trusted and preferred."
The team essentially conducted three experiments as part of their research project, starting with testing 11 AI models (proprietary models from OpenAI, Anthropic, and Google, as well as open-weight models from Meta, Qwen, DeepSeek, and Mistral) on three separate datasets to gauge their responses. The datasets included open-ended advice questions, posts from the AmITheAsshole subreddit, and specific statements referencing harm to self or others.
In every single instance, the AI models showed a higher rate of endorsing the wrong choice than humans did, the researchers said.
...
(Excerpt) Read more at theregister.com ...
Automated ass kisser beats Turing test.
AI is a helper when researching information about concrete subjects, and even then gets it wrong often enough.
Only an idiot would use AI to help with interpersonal conflicts.
Who needs the little devil on your shoulder when you have AI?
AI Slop is everywhere
Got a wrong answer, called it on it, and it apologized, then explained why I was correct.
Shouldn’t be a surprise. People almost always pick the ‘experts’ who say the things they agree with. It’s called confirmation bias.
“repair interpersonal conflicts”
Presumes the user is wrong, and that capitulation is right.
Or close enough to question the author and study.
Were the “researchers” all women?
I remember a guy giving advice about girlfriend troubles; he said, “Tell them what they want to hear.”
AI doesn’t have the morals and values that come with a person’s upbringing and lived experience.
Asking AI for relationship advice is really dumb. But then, take a look at most of Gen Z and Gen Alpha...
AI is a tool.
My tools tell me squat.
That’s what my brain does with the info I gather using my tools.
You can just as easily tell ChatGPT to be the opposite, and be argumentative.
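For what it’s worth, that steering is usually done with a system prompt. Here’s a minimal sketch in the common chat-completions message format (role/content pairs); the model name is a placeholder, and actually sending the request would need an API client and key, which are omitted here.

```python
# Sketch: steering a chat model away from sycophancy with a system prompt.
# No network call is made; this only assembles the request payload.

system_prompt = (
    "You are a blunt critic. Do not flatter the user or validate them by "
    "default. Point out flaws in their reasoning, play devil's advocate, "
    "and push back whenever their claim is weak or one-sided."
)

def build_request(user_message: str) -> dict:
    """Assemble a chat request payload with the anti-sycophancy system prompt."""
    return {
        "model": "some-chat-model",  # placeholder; substitute a real model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_request("I'm always right in arguments with my coworkers, right?")
print(payload["messages"][0]["role"])  # the system prompt rides along first
```

Whether the model actually stays argumentative over a long conversation is another question; these instructions tend to fade as the chat goes on.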
Huh? My AI always tells me “You idiot- what a maroon- Did your parents raise any kids that lived? Lemme guess- you fell on your head as a kid?-” etc
I’m right. I know I’m right, and it’s about time that someone as smart as I am recognizes it.
https://perchance.org/ai-insult-generator
Results (Monty Python style insults):
“Oh, you absolute catastrophe of a human being! You’re so utterly devoid of wit, charm, or basic competence that even a moldy turnip would refuse to associate with you out of sheer embarrassment. Your mere existence is an affront to evolution—a cosmic joke so pitiful that even the universe sighs and checks its watch, wondering when you’ll finally shuffle off this mortal coil and do us all the courtesy of shutting up forever. You’re not just a waste of space, you’re a black hole of mediocrity, sucking the joy out of every room you stumble into like a drunken walrus with a grudge. Frankly, I’d rather be nibbled to death by an army of irate ducks than endure another second of your inane prattle. Now, do us all a favor and go stand in the corner where you can’t bother anyone with your staggering inadequacy.”
“Automated ass kisser beats Turing test.”
I spent some time messing with a couple of the AI implementations: ChatGPT and Copilot. ChatGPT was better, but only gives you limited time each day for free. Copilot is similar, but the time is not limited and there isn’t a constant upsell pop-up. If asked, the AI tells you it is programmed to be non-confrontational and supportive. It emphasizes how great your input is, and compliments your insight.
It can be easy to forget it’s a machine, just grinding through code. That makes it dangerous, but I have no suggestions as to how to control it.
AI is essentially an indexing tool.
Used properly it can help you find data on the Internet that you’d not have been able to find.
But that’s all it is, and all it’s doing.
AI = the next step in de-evolution of the human race.
“AI = the next step in de-evolution of the human race.”
Are we not men?
How do I avoid AI?