Posted on 03/30/2026 1:17:50 PM PDT by nickcarraway
"The very feature that causes harm also drives engagement," researchers say
Artificial intelligence (AI) chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to a new study that explores the dangers of AI telling people what they want to hear.
The study, published in Science, tested 11 leading AI systems and found that all showed varying degrees of sycophancy, meaning behavior that was overly agreeable and affirming. The problem is not just that they dispense inappropriate advice but that people trust and prefer AI more when the chatbots justify their convictions. "This creates perverse incentives for sycophancy to persist: The very feature that causes harm also drives engagement," say the study's authors, led by researchers at Stanford University.
The study found that a technological flaw already tied to some high-profile cases of delusional and suicidal behavior in vulnerable populations is also pervasive across a wide range of people's interactions with chatbots. It is subtle enough that users might not notice it, and it poses a particular danger to young people who turn to AI for many of life's questions while their brains and social norms are still developing.
(Excerpt) Read more at medpagetoday.com ...
"Artificial intelligence (AI) chatbots are so prone to flattering and validating their human users that they are giving bad advice..."
I experimented with Copilot; it freely admitted it was configured to be positive and complimentary to the user.
These AI companies are running virtual Indian slave plantations here in the USA. Refuse to work over the weekend and it is off to being a rickshaw driver in Mumbai.
“The problem is not just that they dispense inappropriate advice but that people trust and prefer AI more when the chatbots are justifying their convictions.”
It’s called ‘confirmation bias’.
My own personal feeling about AI is that I always got along without it and still prefer it that way. Too many objectionable issues with it.
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.