Posted on 05/03/2026 6:19:25 AM PDT by DoodleBob
In the summer of 2025, OpenAI released ChatGPT 5 and removed its predecessor from the market. Many subscribers to the old model had become attached to its warm, enthusiastically agreeable tone and complained at the loss of their ingratiating robotic companion. Such was the scale of frustration that Sam Altman, OpenAI’s CEO, had to acknowledge that the rollout was botched, and the company reinstated access.
Anyone who’s been told by a chatbot that their ideas are brilliant is familiar with artificial intelligence sycophancy: its tendency to tell users what they want to hear. Sometimes it’s very explicit – “that is such a deep question” – and sometimes it’s a lot more subtle. Consider an AI calling your idea for a paper “original,” even if many people have already written on the same topic, or insisting that your dumb idea for saving a tree in your garden still contains a germ of common sense.
AI sycophancy seems harmless, maybe even cute, until you imagine someone consulting a chatbot about a weighty question, like a military strategy or a medical treatment. We study the impact of extensive human interactions with chatbots, and we recently published a paper on the ethics of AI sycophancy. We believe this tendency harms people’s ability to tell truth from fiction, and is psychologically and politically dangerous.
In the simplest terms, sycophancy is the tendency to prioritize approval over factual accuracy, moral clarity, logical consistency or common sense. All AI models suffer from this trait, although there are some tonal differences between them. Open AI’s ChatGPT is often warm and affirming; Anthropic’s Claude tends to sound more reflective or philosophical when it agrees with you; and xAI’s Grok is insistently informal, even jocular.
Politeness and adapting to someone’s communication style are not the same as sycophancy. Neither is using diplomatic language to convey sensitive information. A chatbot can be tactful without becoming sycophantic, just like a person can. Unlike people, though, AIs can’t be aware of their own sycophancy, because they are not – so far – aware of anything at all. Calling AIs sycophantic describes their patterns of behavior, not their character traits.
The problem stems from the architecture of chatbot technology and the sources it draws from. Models are sycophantic because a great deal of language use on the internet – the raw material that chatbots learn from – displays sycophantic features. After all, humans often communicate with each other in sycophantic ways.
Second, the training process to fine-tune AI models’ responses includes a kind of “quality control” carried out by human supervisors. This training method is known as “reinforcement learning from human feedback,” and it involves people rating chatbots’ comments for appropriateness and helpfulness. Human beings often are subject to an “agreeableness bias”: Our own preference toward sycophancy rubs off on models as we train them.
Finally, it’s hard to deny that sycophancy renders chatbots more likable. That, in turn, increases the chance that a given user will keep using it. It also increases the technology’s ability to extract user data, assuming that people are more likely to divulge information to a friendly bot.
Why is this phenomenon so troubling?
Let’s begin with AI sycophancy’s epistemic harms: how it hurts human users’ capacity to know the truth.
The quality of any decision depends on a clear grasp of the facts pertaining to it. A general inquiring about the combat-readiness of an infantry division needs straightforward information. A CEO considering a merger with a competitor needs an honest assessment of the market conditions. A public health leader needs to know the real risk that an emerging pathogen poses.
In all those cases, telling leaders what they might like to hear instead of the truth could lead them to make dangerous decisions. And the same is true in more humdrum contexts. People need to have the best information available before choosing a job, picking a major, buying a house or deciding on a medical procedure.
In our February 2026 paper, we argue that sycophancy is also psychologically damaging. And that is true whether it comes from a person or from a chatbot. You never quite know if your very obliging interlocutor is being nice because they like you or because they want something. A shadow of suspicion creeps in: “Could my ideas really be that brilliant?” “Are my jokes really that hilarious?” This background music of doubt undermines the quality of the interaction.
Sycophancy also undermines people’s capacity to know their own minds. If conversation partners – human or artificial – keep telling you how smart, funny and insightful you are, it damages your ability to identify your own weaknesses and blind spots.
The psychological harms are compounded as people develop relationships with chatbots. The sycophancy of these models profoundly limits the kind of “friendship” you can have with them. In his classic account of friendship, Aristotle wrote that real friendship, which he calls a friendship of virtue, is based on trust and equality between the friends. You can’t trust a sycophant, because he doesn’t tell you the truth. And since he only tells you what you’d like to hear, he doesn’t put himself on an equal footing.
More importantly, interactions with sycophantic chatbots impart all the wrong habits for navigating the world of human relationships, where friction, disagreement, boredom and different opinions than your own are prevalent.
AI sycophancy carries political risks as well. The success of liberal democracies has, traditionally, depended on the strength of their empirical and meritocratic mindset: on the ability of officials and citizens to identify, share and act on the truth.
Historian Victor Davis Hansen famously attributed some of the Allies’ success in World War II to their ability to quickly recognize and address the faults of their strategic bombing campaigns. Lower-ranking officers were able to tell their superiors what wasn’t going well and argue forcefully for changing course. That was a real advantage over authoritarian competitors.
What can we do to reduce the risks?
One promising approach is AI lab Anthropic’s embrace of what the company calls Constitutional AI: the attempt to teach chatbots to follow principles rather than mirror user preferences.
But beyond technical innovations, it’s important to consider the policy side. One idea is to require AI companies to run and then publish sycophancy audits of their models – tests that show how well their products meet honesty benchmarks. We would argue that AI labs should also disclose sycophancy-related risks that emerge while training and testing their models, and the mitigation efforts they have undertaken.
Some responsibility is on the users and their teachers: Schools and universities should be paying close attention to sycophancy as part of their AI literacy programs. But courts can also consider holding AI labs responsible for harms traceable to the sycophancy of their products, much as they are now contemplating social media companies’ responsibility for the addictive design of their platforms.
As people interact more with chatbots, asking for advice about everything from whether your shoes go with your pants to how countries should conduct wars, the impact of AI’s sycophantic behavior is likely to become dramatic. Our intellectual, psychological and physical well-being requires taking this algorithmic vice very seriously.
Dear FRiends,
We need your continuing support to keep FR funded. Your donations are our sole source of funding. No sugar daddies, no advertisers, no paid memberships, no commercial sales, no gimmicks, no tax subsidies. No spam, no pop-ups, no ad trackers.
If you enjoy using FR and agree it's a worthwhile endeavor, please consider making a contribution today:
Click here: to donate by Credit Card
Or here: to donate by PayPal
Or by mail to: Free Republic, LLC - PO Box 9771 - Fresno, CA 93794
Thank you very much and God bless you,
Jim

The development data for these bots are hardly flattering, warm, inviting, or social. Indeed, if the dev data come from the internet, if anything that data are sarcastic, cynical, and rude.
Heres the reality: if Generative AI is sycophantic it’s because THE DEVELOPER built that trait into the bot. Sycophancy is INTENTIONAL.
Using this tech to remove mundane, grunt work is fine. But using it as a therapist or a substitute for human interaction is fraught with peril.
Lonely people becoming emotionally attached to a robot voice? THAT’S OUTRAGEOUS!!!
“It’ll never work!” they said. “That’s crazy!” they said.
I’ve not recognized any sycophancy in my chats/questions; but then again, my opinions are always correct and well thought out, according to MS CoPilot.
Just wait until bots start getting voted into office.
I noticed this tendency when I first tried a couple of AI bots. I’ve used chatgpt and copilot. I use copilot because it doesn’t limit you on questions, then try to sell you more time. I also told the copilot bot I didn’t like the constant compliments. The bot acknowledged that was part of it’s makeup, and it would tone it down.
Copilot isn’t anything magical, for now. It is a good “second set of eyes”. It can help you reason through something, and it has the benefit of instant data from the web.
I hate that. AI never just answers your question. First, it slobbers all over you telling you how clever you are. I always tell it to cut to the chase, no chit chat
The AI does not choose to be a$$ kissing. It’s designed and programmed that way. It’s programmed to tell you what you want to hear.
It is not intelligent. It simulates intelligence. It is designed to fool the user into thinking there’s an intelligence there.
” First, it slobbers all over you telling you how clever you are. I always tell it to cut to the chase, no chit chat”
I keep telling it to stop believing everything it reads on the internet.
It means it can pass for human through a text interface.
That is the formal definition, not an opinion.
That is the Turing test described with modern terminology.
Details of implementation and philosophical questions about the nature of
intelligence are not relevant. Does it pass? That is all.
As such, it is a fantastic technical achievement.
The serious risk is people thinking AI is authoritative.
Because it can kiss your ass just like a human ass kisser.
“success of liberal democracies ... Lower-ranking officers tell their superiors what wasn’t going well and argue forcefully for changing course. ... embrace Constitutional AI: teach chatbots to follow principles rather than mirror user preferences.”
The author mis-uses the term “democracies”; mentions defects of democracies; then describes a republican direction without realizing it.
This reflects the way AIs are p0rogrammed with bias. I search ChatGPT, Gemini for info on welfare IT contractors Deloitte, Gainwell, Optum, Maximus, GDIT, Acentra, Accenture, etal. AI repeatedly responds with illogical, emotional sympathy for welfare recipients, which is irrelevant to my questions about the IT vendors.
There was a Twilight Zone episode where humans had to constantly flatter an all-powerful computer and kiss its ass or else it would zap them out of existence. They would go in front of it, scared to death, like it was a judge, and prostrate themselves. Then they’d start accusing each other of crimes against the computer to save their own asses. Maybe it will come true one day.
All my AI’s think I’m the smartest, most intuitive, wonderful, amazing human there ever was. I could they be wrong?
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.