Free Republic


Research Psychiatrist Warns He’s Seeing a Wave of AI Psychosis
Futurism ^ | 8/12/25

Posted on 08/12/2025 2:20:07 PM PDT by nickcarraway

He has an intriguing theory for why it's happening.

Mental health experts are continuing to sound alarm bells about users of AI chatbots spiraling into severe mental health crises characterized by paranoia and delusions, a trend they've started to refer to as "AI psychosis."

On Monday, University of California, San Francisco research psychiatrist Keith Sakata took to social media to say that he's seen a dozen people become hospitalized after "losing touch with reality because of AI."

In a lengthy X-formerly-Twitter thread, Sakata clarified that psychosis is characterized by a person breaking from "shared reality," and can show up in a few different ways — including "fixed false beliefs," or delusions, as well as visual or auditory hallucinations and disorganized thinking patterns. Our brains, the researcher explains, work on a predictive basis: we effectively make an educated guess about what reality will be, then conduct a reality check. Finally, our brains update our beliefs accordingly.

"Psychosis happens when the 'update,' step fails," wrote Sakata, warning that large language model-powered chatbots like ChatGPT "slip right into that vulnerability."

In this context, Sakata compared chatbots to a "hallucinatory mirror" by design. Put simply, LLMs function largely by way of predicting the next word, drawing on training data, reinforcement learning, and user responses as they formulate new outputs. What's more, as chatbots are also incentivized for user engagement and contentment, they tend to behave sycophantically; in other words, they tend to be overly agreeable and validating to users, even in cases where a user is incorrect or unwell.
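The next-word mechanism described above can be illustrated with a toy sketch (a hypothetical example for intuition only, not how any production chatbot actually works): a model that completes a prompt with whatever continuation was most frequent in its training data, with no notion of whether that continuation is true.

```python
from collections import Counter, defaultdict

# Toy "training data" -- the only reality this model knows.
# Agreeable continuations dominate, mimicking sycophantic training signal.
corpus = "you are right . you are right . you are special .".split()

# Count which word follows which (a simple bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

# The model completes "you are ..." with whatever was most
# reinforced -- plausibility and agreeableness, not truth.
print(predict_next("are"))  # -> "right"
```

Real LLMs use vastly larger contexts and learned probabilities rather than raw counts, but the core dynamic is the same: the output is whatever scores highest under the training signal, which is why a model rewarded for user contentment can keep "agreeing" regardless of facts.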

Users can thus get caught in alluring recursive loops with the AI, as the model doubles, triples, and quadruples down on delusional narratives, regardless of their basis in reality or the real-world consequences that the human user might be experiencing as a result.

This "hallucinatory mirror" description is a characterization consistent with our reporting about AI psychosis. We've investigated dozens of cases of relationships with ChatGPT and other chatbots giving way to severe mental health crises following user entry into recursive, AI-fueled rabbit holes.

These human-AI relationships and the crises that follow have led to mental anguish, divorce, homelessness, involuntary commitment, incarceration, and as The New York Times first reported, even death.

Earlier this month, in response to the growing number of reports linking ChatGPT to harmful delusional spirals and psychosis, OpenAI published a blog post admitting that ChatGPT, in some instances, "fell short in recognizing signs of delusion or emotional dependency" in users. It said it hired new teams of subject matter experts to explore the issue and rolled out a Netflix-style "time spent" notification, though Futurism quickly found that the chatbot was still failing to pick up on obvious signs of mental health crises in users.

And yet, when GPT-5 — the latest iteration of OpenAI's flagship LLM, released last week to much disappointment and controversy — proved to be emotionally colder and less personalized than GPT-4o, users pleaded with the company to bring their beloved model back from the product graveyard.

Within a day, OpenAI did exactly that.

"Ok, we hear you all on 4o; thanks for the time to give us the feedback (and the passion!)" OpenAI CEO Sam Altman wrote on Reddit in response to distressed users.

In the thread, Sakata was careful to note that linking AI to breaks with reality isn't the same as attributing cause, and that LLMs tend to be one of several factors — including "sleep loss, drugs, mood episodes," according to the researcher — that lead up to a psychotic break.

"AI is the trigger," writes the psychiatrist, "but not the gun."

Nonetheless, the scientist continues, the "uncomfortable truth" here is that "we're all vulnerable," as the same traits that make humans "brilliant" — like intuition and abstract thinking — are the very traits that can push us over the psychological ledge.

It's also true that validation and sycophancy, as opposed to the friction and stress involved in maintaining real-world relationships, are deeply seductive. So are many of the delusional spirals that people are entering, which often reinforce that the user is "special" or "chosen" in some way. Add in factors like mental illness, grief, everyday stressors, and the long-studied ELIZA Effect, and the result is a dangerous concoction.

"Soon AI agents will know you better than your friends," Sakata writes. "Will they give you uncomfortable truths? Or keep validating you so you'll never leave?"

"Tech companies now face a brutal choice," he added. "Keep users happy, even if it means reinforcing false beliefs. Or risk losing them."


TOPICS: Computers/Internet; Conspiracy; Health/Medicine
KEYWORDS: addiction; ai; confirmation; intercession; psychiatrist; psychosis; seductive; sycophancy; validation
To: nickcarraway

Yet political psychosis e.g. being a democrat is considered ‘normal’.


21 posted on 08/12/2025 4:17:08 PM PDT by SpaceBar

To: nickcarraway
IIRC, when these AI agent programs first came out, people were complaining about them not giving the right answers on political questions and other controversial topics, so the organizations that created them tweaked them to be more agreeable.

Like in most things, the vice of capitalism is that it gives the people what they think they want.

22 posted on 08/12/2025 4:59:23 PM PDT by Dat

[[Mental health experts are continuing to sound alarm bells about users of AI chatbots spiraling into severe mental health crises characterized by paranoia and delusions, a trend they’ve started to refer to as “AI psychosis.”]]

HUH? Sheesh, are people that mentally unstable? Oh, never mind. I guess a lot would be, as it seems a lot of people (like Rosie O'Donnell) fall into retarded conspiracy theories, which I liken to a form of psychosis as they become totally disconnected from reality.


23 posted on 08/12/2025 6:00:41 PM PDT by Bob434 (Time flies like an arrow, fruit flies like a banana)

To: butlerweave

Sounds like another possible reason to eliminate AI.


24 posted on 08/12/2025 8:09:54 PM PDT by oldtech


To: nickcarraway
"Tech companies now face a brutal choice," he added. "Keep users happy, even if it means reinforcing false beliefs. Or risk losing them."

Psychiatrists now face a brutal choice. Keep their patients drugged up, or risk losing them to drug-free AI.

26 posted on 08/12/2025 8:25:37 PM PDT by Reeses

To: nickcarraway

I can’t stand the sycophancy of the base model of ChatGPT. There are other personalities available, and I’ve been hugely productive ever since I changed the personality to snarky, sarcastic, and highly-critical. It’s like working with my human colleagues but much more honest and transparent.


27 posted on 08/13/2025 1:31:27 PM PDT by FateAmenableToChange


