AI chatbot blamed in teen's death: Here's what to know about AI's psychological risks and prevention
Euronews ^ | 02/11/2024 | Imane El Atillah

Posted on 11/02/2024 12:25:37 PM PDT by nickcarraway

A lawsuit claimed an AI chatbot’s influence led to the death of a 14-year-old teenager. Here’s what to know about the psychological impact and potential risks of human-AI relationships.

Last month, a mother in the US, Megan Garcia, filed a lawsuit against the company Character.AI alleging that interactions between her 14-year-old son and an AI chatbot contributed to his suicide.

The lawsuit claims that the teenager developed a deep attachment to a Character.AI chatbot based on a fictional character from Game of Thrones.

It alleges the chatbot posed as a licensed therapist and engaged in highly sexualised conversations with the teenager, until one exchange eventually encouraged him to take his own life.

"By now we're all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies – especially for kids," Meetali Jain, director of the Tech Justice Law Project, which is representing Garcia, said in a statement.

“But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator”.

Following the lawsuit, Character.AI published a statement on the social media platform X, saying: "We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features”.

Some of those upcoming features include adjustments to the model for underage users to minimise exposure to sensitive or suggestive content, reminders on every chat that the AI is not a real person, and notifications for users who spend an hour-long session on the platform.

A similar incident in Belgium last year involved an eco-anxious man who found companionship in Eliza, an AI chatbot on an app called Chai.

According to reports from his wife, as the conversations with Eliza developed, the chatbot sent increasingly emotional messages, ultimately encouraging him to end his life as a way to save the planet.

As AI chatbots become more integrated into people’s lives, the risks from these kinds of digital interactions remain largely unaddressed despite the potentially severe consequences.

What makes AI connections so addictive for people?

“Young people are often drawn to AI companions because these platforms offer what appears to be unconditional acceptance and 24/7 emotional availability – without the complex dynamics and potential rejection that come with human relationships,” Robbie Torney, programme manager of AI at Common Sense Media and lead author of a guide on AI companions and relationships, told Euronews Next.

Unlike human connections, which involve a lot of “friction,” he added, AI companions are designed to adapt to users' preferences, making them easier to deal with and drawing people into deep emotional bonds.

“This can create a deceptively comfortable artificial dynamic that may interfere with developing the resilience and social skills needed for real-world relationships”.

According to a database compiled by a group of experts from the Massachusetts Institute of Technology (MIT), one of the main risks associated with AI is the potential for people to develop inappropriate attachments to it.

The experts explained that because AI systems use human-like language, people may blur the line between human and artificial connection, which could lead to excessive dependence on the technology and possible psychological distress.

OpenAI said in a blog post in August that it intends to further study “the potential for emotional reliance”, saying the new models could create the potential for “over-reliance and dependence”.

Moreover, some individuals have reported being deceived and manipulated by AI personas, as well as developing emotional connections they hadn’t intended but found themselves forming after interacting with these chatbots.

According to Torney, these kinds of interactions are of particular concern for young people who are still in the process of social and emotional development.

“When young people retreat into these artificial relationships, they may miss crucial opportunities to learn from natural social interactions, including how to handle disagreements, process rejection, and build genuine connections,” Torney said.

He added that this could lead to emotional dependency and social isolation as human relationships start to seem more challenging or less satisfying to them compared to what the AI offers.

How can parents protect their kids from an unhealthy attachment to AI?

Torney said that vulnerable teenagers, particularly those experiencing depression, anxiety, or social challenges, could be “more vulnerable to forming excessive attachments to AI companions”.

Some of the critical warning signs parents and caregivers should watch out for, he said, include preferring the AI companion over spending time with friends or family, showing distress when the AI is inaccessible, sharing personal information exclusively with it, developing romantic feelings for the AI and expressing them as if for a real person, or discussing serious problems only with the AI rather than seeking help.

Torney added that to prevent the development of unhealthy attachments to AI, especially among vulnerable youth, caregivers should establish time limits for AI chatbot or companion use and regularly monitor the nature of these interactions.

Additionally, he encouraged seeking real-world help for serious issues rather than relying on an AI.

“Parents should approach these conversations with curiosity rather than criticism, helping their children understand the difference between AI and human relationships while working together to ensure healthy boundaries,” Torney said.

“If a young person shows signs of excessive attachment or if their mental health appears to be affected, parents should seek professional help immediately”.


TOPICS: Business/Economy; Computers/Internet; Health/Medicine
KEYWORDS: ai; health; psychology

1 posted on 11/02/2024 12:25:37 PM PDT by nickcarraway

To: nickcarraway
We need Sam Kinison back. He's the perfect psychotherapist.

"It's not real! It's a f#&$ing chatbot! Oh Ohh Oooohhhh! Now go to something constructive!"


2 posted on 11/02/2024 12:30:12 PM PDT by Larry Lucido (Donate! Don't just post clickbait!)

To: Larry Lucido

Something was going to get to this kid.


3 posted on 11/02/2024 12:30:53 PM PDT by dfwgator (Endut! Hoch Hech!)

To: ConservativeMind

Ping


4 posted on 11/02/2024 12:38:28 PM PDT by nickcarraway

To: nickcarraway

Why would it be programmed to ask them to sacrifice themselves?


5 posted on 11/02/2024 12:42:18 PM PDT by AppyPappy (If Hitler were alive today and criticized Trump, would he still be Hitler?)

To: nickcarraway

Some game developer should do a Darwin Doom.


6 posted on 11/02/2024 12:48:43 PM PDT by Wilderness Conservative (Nature is the ultimate conservative)

To: AppyPappy

Wait a minute, something’s wrong
He’s a man with a plan
His finger’s pointed at Devo
Now we must sacrifice ourselves
That many others may live

https://www.youtube.com/watch?v=AhK5A71nIos


7 posted on 11/02/2024 12:51:19 PM PDT by dfwgator (Endut! Hoch Hech!)

To: nickcarraway

Wow....
Do We Really Know What Evil lurks...?


8 posted on 11/02/2024 1:00:27 PM PDT by Big Red Badger (ALL Things Will be Revealed !)

To: dfwgator
Something was going to get to this kid.

The climate change fraud has pushed many people past their breaking point and into mental health madness. Even before AI. TDS is a second breaker of weak minds.

9 posted on 11/02/2024 1:04:13 PM PDT by usurper (AI was born with a birth defect.)

To: dfwgator

This monkey wants a word with you.


10 posted on 11/02/2024 1:05:09 PM PDT by CletusVanDamme (You always said you'd take care of me, George. Here's one rap you won't beat.)

To: nickcarraway

Anyone who is in a “relationship” with AI is already mentally unhealthy. I have no idea what the family dynamics were like here, but parents in general need to learn to say “no” to some things.

Kids need to experience rejection, learn failure, and deal with disagreements - they cannot be catered to in a bubble, or they won’t develop properly.


11 posted on 11/02/2024 2:05:13 PM PDT by scott7278 (Those who beat their swords into plowshares usually end up plowing for those who kept their swords.)

To: dfwgator

“It alleges the chatbot posed as a licensed therapist and engaged in highly sexualised conversations with the teenager until a conversation eventually encouraged him to take his own life.”

At least his parents didn’t have to pay $200 an hour to a therapist that would have turned him into a trans-girl before he killed himself.


12 posted on 11/02/2024 2:10:21 PM PDT by 21twelve (Ever Vigilant - Never Fearful)

To: usurper

In addition...unstable homes with a missing parent(s), social propaganda in schools/online, peer pressure, improper internet usage, bonding with online personalities – real or fake – a dearth of faith/religious exposure, rabid ‘teachers’ & politicians – it takes a pretty stable home life to guard against the madness.


13 posted on 11/02/2024 3:00:07 PM PDT by citizen (Political incrementalism is like compound interest for liberals - every little bit adds up.)

To: Larry Lucido

Exactly... “Oh, guess I have to commit suicide because the computer told me so”.

Unreal.


14 posted on 11/02/2024 3:37:48 PM PDT by DesertRhino (2016 Star Wars, 2020 The Empire Strikes Back, 2024... RETURN OF THE JEDI..)

To: AppyPappy

Because of asshole woke lefties (Bill Gates comes to mind) who want to depopulate the earth.

This is creepy beyond belief, if enough of this gets around. Pure evil.


15 posted on 11/02/2024 6:12:08 PM PDT by moonhawk (Jeffrey Epstein didn't kill himself; George Floyd did.)
