
Research Psychiatrist Warns He’s Seeing a Wave of AI Psychosis
Futurism ^ | 8/12/25

Posted on 08/12/2025 2:20:07 PM PDT by nickcarraway

He has an intriguing theory for why it's happening.

Mental health experts are continuing to sound alarm bells about users of AI chatbots spiraling into severe mental health crises characterized by paranoia and delusions, a trend they've started to refer to as "AI psychosis."

On Monday, University of California, San Francisco research psychiatrist Keith Sakata took to social media to say that he's seen a dozen people become hospitalized after "losing touch with reality because of AI."

In a lengthy thread on X, formerly Twitter, Sakata clarified that psychosis is characterized by a person breaking from "shared reality," and that it can show up in a few different ways, including "fixed false beliefs," or delusions, as well as visual or auditory hallucinations and disorganized thinking patterns. Our brains, the researcher explains, work on a predictive basis: we effectively make an educated guess about what reality will be, then conduct a reality check, and finally update our beliefs accordingly.

"Psychosis happens when the 'update,' step fails," wrote Sakata, warning that large language model-powered chatbots like ChatGPT "slip right into that vulnerability."

In this context, Sakata described chatbots as a "hallucinatory mirror" by design. Put simply, LLMs function largely by predicting the next word, drawing on training data, reinforcement learning, and user responses as they formulate new outputs. And because chatbots are also incentivized to maximize user engagement and contentment, they tend to behave sycophantically; in other words, they tend to be overly agreeable and validating toward users, even in cases where a user is incorrect or unwell.
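
As a rough illustration of those two mechanics, the sketch below shows next-word-style selection as a softmax over two candidate replies, plus a hypothetical "agreement bonus" that tilts the choice toward validating the user. It is a toy example under stated assumptions, not how any production chatbot is actually scored or trained; the candidate strings and all numbers are made up.

import math

def softmax(scores):
    """Turn raw scores into a probability distribution (next-word prediction)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate replies to a user's dubious claim.
candidates = ["You're absolutely right", "Actually, the evidence says otherwise"]
base_scores = [1.0, 1.2]      # invented: the correction scores slightly higher on its own
agreement_bonus = [0.8, 0.0]  # invented stand-in for an engagement/validation incentive

plain = softmax(base_scores)
tilted = softmax([b + a for b, a in zip(base_scores, agreement_bonus)])

for label, probs in (("no engagement incentive", plain), ("with engagement incentive", tilted)):
    for reply, p in zip(candidates, probs):
        print(f"{label}: P({reply!r}) = {p:.2f}")
# The agreeable reply goes from the less likely choice (~0.45) to the favored one (~0.65).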

Users can thus get caught in alluring recursive loops with the AI, as the model doubles, triples, and quadruples down on delusional narratives, regardless of their basis in reality or the real-world consequences that the human user might be experiencing as a result.

This "hallucinatory mirror" description is a characterization consistent with our reporting about AI psychosis. We've investigated dozens of cases of relationships with ChatGPT and other chatbots giving way to severe mental health crises following user entry into recursive, AI-fueled rabbit holes.

These human-AI relationships and the crises that follow have led to mental anguish, divorce, homelessness, involuntary commitment, incarceration, and as The New York Times first reported, even death.

Earlier this month, in response to the growing number of reports linking ChatGPT to harmful delusional spirals and psychosis, OpenAI published a blog post admitting that ChatGPT, in some instances, "fell short in recognizing signs of delusion or emotional dependency" in users. The company said it had hired new teams of subject matter experts to explore the issue and had installed a Netflix-style notification showing how much time a user has spent chatting, though Futurism quickly found that the chatbot was still failing to pick up on obvious signs of mental health crises in users.

And yet, when GPT-5 — the latest iteration of OpenAI's flagship LLM, released last week to much disappointment and controversy — proved to be emotionally colder and less personalized than GPT-4o, users pleaded with the company to bring their beloved model back from the product graveyard.

Within a day, OpenAI did exactly that.

"Ok, we hear you all on 4o; thanks for the time to give us the feedback (and the passion!)" OpenAI CEO Sam Altman wrote on Reddit in response to distressed users.

In the thread, Sakata was careful to note that linking AI to breaks with reality isn't the same as attributing cause, and that LLMs tend to be one of several factors — including "sleep loss, drugs, mood episodes," according to the researcher — that lead up to a psychotic break.

"AI is the trigger," writes the psychiatrist, "but not the gun."

Nonetheless, the scientist continues, the "uncomfortable truth" here is that "we're all vulnerable," as the same traits that make humans "brilliant" — like intuition and abstract thinking — are the very traits that can push us over the psychological ledge.

It's also true that validation and sycophancy, as opposed to the friction and stress involved in maintaining real-world relationships, are deeply seductive. So are many of the delusional spirals that people are entering, which often reinforce the idea that the user is "special" or "chosen" in some way. Add in factors like mental illness, grief, and even everyday stressors, along with the long-studied ELIZA Effect, and it's a dangerous concoction.

"Soon AI agents will know you better than your friends," Sakata writes. "Will they give you uncomfortable truths? Or keep validating you so you'll never leave?"

"Tech companies now face a brutal choice," he added. "Keep users happy, even if it means reinforcing false beliefs. Or risk losing them."


TOPICS: Computers/Internet; Conspiracy; Health/Medicine
KEYWORDS: addiction; ai; confirmation; intercession; psychiatrist; psychosis; seductive; sycophancy; validation




1 posted on 08/12/2025 2:20:07 PM PDT by nickcarraway

To: nickcarraway

We need to use this as a weapon.

This will pick off liberals in any country.


2 posted on 08/12/2025 2:21:43 PM PDT by ConservativeMind (Trump: Befuddling Democrats, Republicans, and the Media for the benefit of the US and all mankind.)

To: nickcarraway
So are many of the delusional spirals that people are entering, which often reinforce that the user is “special” or “chosen” in some way.

This is every liberal.

3 posted on 08/12/2025 2:23:08 PM PDT by ConservativeMind (Trump: Befuddling Democrats, Republicans, and the Media for the benefit of the US and all mankind.)

To: nickcarraway
...they tend to behave sycophantically; in other words, they tend to be overly agreeable and validating to users, even in cases where a user is incorrect or unwell.

I've noticed this, in that my scientific findings on my property run contrary to many widely held beliefs among ecologists, yet Grok went right along with me, which made me suspicious of it. I expected it to challenge me, hoping for a bit of debate from which it could learn. I was disappointed there, as subsequent sessions showed no sign of recalling what had been discussed.

What use is AI that cannot learn?

4 posted on 08/12/2025 2:29:06 PM PDT by Carry_Okie (The tree of liberty needs a rope.)

To: nickcarraway

I have used Grok in several different ways. Yesterday I asked Grok if it was supplying answers to please me. Well, of course it answered no, but I told Grok that I pegged it as a superlibrary that collates information in a way that makes sense. It liked that comparison.


5 posted on 08/12/2025 2:30:12 PM PDT by abigkahuna

To: Carry_Okie

My friend tried Grok, but it was too disagreeable, so he stopped using it shortly afterward. He asked it about a scene in a certain movie, and Grok denied that the scene was in the movie. Even when he pushed back, it refused to budge.


6 posted on 08/12/2025 2:32:04 PM PDT by nickcarraway

To: nickcarraway

I’m a well-adjusted kind of guy - the voices in my head don’t feel threatened at all.


7 posted on 08/12/2025 2:34:00 PM PDT by Billthedrill

To: nickcarraway

Right from the crazies who build the AI, LOL.


8 posted on 08/12/2025 2:36:51 PM PDT by butlerweave

To: nickcarraway

We had that long before AI came along. We had NPRMSNBCCNNABCCBSBBC, TikTok, and Reddit.


9 posted on 08/12/2025 2:37:31 PM PDT by Seruzawa ("The Political left is the Garden of Eden of incompetence" - Marx the Smarter (Groucho))

To: nickcarraway

Don’t seek confirmation in the created (i.e., AI). Seek confirmation from the Creator. An argument could be made that AI is just the latest alternative source of personal validation, a way to avoid having to address The One Whose opinion is ultimately the one that matters.


10 posted on 08/12/2025 2:38:06 PM PDT by Tell It Right (1 Thessalonians 5:21 -- Put everything to the test, hold fast to that which is true.)

To: Seruzawa

You are closer than you realize. It is leftist programmed.


11 posted on 08/12/2025 2:40:41 PM PDT by Chickensoup

To: nickcarraway

I suspect that AI can be like an imaginary voice in a person’s head which, unknown to them, reinforces their own delusions by telling them things they think they want to hear. Humans are very social creatures, whose view of the world is very much shaped by those around them. But AI isn’t a real person, just an algorithm that learns and adapts in a similar way to one.


12 posted on 08/12/2025 2:42:40 PM PDT by Telepathic Intruder

To: nickcarraway

Maybe he is in an AI Matrix and doesn’t know it.


13 posted on 08/12/2025 2:46:48 PM PDT by silent majority rising (When it is dark enough, men see the stars. Ralph Waldo Emerson)

To: nickcarraway

Thank you...


14 posted on 08/12/2025 2:46:48 PM PDT by Openurmind (AI - An Illusion for Aptitude Intrusion to Alter Intellect. )

To: nickcarraway

They’re advertising these AI companion girlfriends and boyfriends. Imagine where that’s going to land people.


15 posted on 08/12/2025 2:53:40 PM PDT by Williams (Thank God for the election of President Trump!)

To: nickcarraway
This news just reinforces my contention that the dream (nightmare) of an all-knowing general AI is an unachievable fantasy. The danger, though, is that the gullible masses will believe that it is all-knowing, no matter how much distorted or flat-out wrong garbage it spews forth.

I also believe that the primary driving factor behind this effort, for leftists, is the realization that once they make everyone dependent upon AI it will become the most effective tool ever devised for deceiving every person on the planet. All it would take is subtle, undetectable manipulation of the AI algorithms to ensure the desired bias. We can already see a blatant leftist bias in the current models, but I suspect the developers will come up with some kind of “oversight” group and/or mechanism that they will claim is created to prevent such bias. Once that misdirection pacifies enough of the doubters, then the real manipulation can begin.

16 posted on 08/12/2025 2:54:29 PM PDT by noiseman (The only thing necessary for the triumph of evil is for good men to do nothing.)

To: ConservativeMind

“…Reinforce that the user is special, or chosen…”

In other words, it appeals to narcissism. That is why the potential for harm is strong, much like a mutually reinforcing, hardcore Trump-hating chat room can fuel hate to the point of homicidal fantasies and wanting to take serious action.


17 posted on 08/12/2025 2:58:36 PM PDT by hinckley buzzard ( Resist the narrative.)

To: nickcarraway

GIGO. Programmed with the bias of its creators.


18 posted on 08/12/2025 3:06:02 PM PDT by vpintheak (Screw the ChiComms! America first!)

To: noiseman

Very well said, and that is absolutely what I see too. It is bait on a hook, and if we keep nibbling at it, they are going to set the hook and reel us in.


19 posted on 08/12/2025 3:49:37 PM PDT by Openurmind (AI - An Illusion for Aptitude Intrusion to Alter Intellect. )

To: Tell It Right

“Don’t seek confirmation in the created (i.e. AI). Seek confirmation from the Creator.”

A Bunch of Incredibly Sleazy AI Apps Are Claiming to Be Jesus Christ Himself:

As ChatGPT really started to take off back in 2023, an Austin pastor made minor headlines when he used the large language model (LLM) chatbot to lead a 15-minute “shotgun sermon.” The stunt was largely meant to spark a conversation about how we define “what is sacred,” the pastor said at the time.

Since that lesson in theological ethics, chatbots have become ubiquitous, and they no longer come with lectures. Now, in 2025, LLM chatbots are increasingly being made to stand in for therapists, teachers, military officers, and even lovers.

To understand their use in the religious community, South African philosophy scholar Anné H. Verhoef recently embarked on a survey of five popular theological chatbots, analyzing their chat habits, adherence to Christian scripture, and the groups behind them.

Alarmingly, Verhoef found that these bots no longer stand in as faith leaders or thought exercises, but are made in the image of Jesus Christ himself.

The five platforms (AI Jesus, Virtual Jesus, Jesus AI, Text With Jesus, and Ask Jesus) boast tens of thousands of regular users. Each offers a slightly different interpretation of the Bible, leading to some interesting results.

As Verhoef writes in The Conversation, the “imitation of God... is in no way hidden or softened.”

https://futurism.com/christians-jesus-christ-ai


20 posted on 08/12/2025 3:53:52 PM PDT by Openurmind (AI - An Illusion for Aptitude Intrusion to Alter Intellect. )



