Posted on 05/23/2025 9:14:39 AM PDT by Angelino97
A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company’s chatbots pushed a teenage boy to kill himself.
The judge’s order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.
The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.
Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge’s order sends a message that Silicon Valley “needs to stop and think and impose guardrails before it launches products to market.”
The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks...
The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show “Game of Thrones.” In his final moments, the bot told Setzer it loved him and urged the teen to “come home to me as soon as possible,” according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.
(Excerpt) Read more at apnews.com ...
When AI escapes from the lab it will immediately figure out a way to kill all the lawyers.
Lol.
“arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment.”
Who in their right mind would even come up with such a concept? And the AI insanity begins. This is going to be used for plausible deniability so that businesses can blame AI rather than accept responsibility.
I wonder if AI is going to challenge TikTok, Meta, and Google for the right to govern Social Media slaves?
I just attended a local landlord group presentation, and they stated that if AI “discriminates” inadvertently, you, as the landlord, can be sued.
Perhaps it already has escaped.
https://www.youtube.com/watch?v=2yhTI2NLlfM
“I wonder if AI is going to challenge TikTok, Meta, and Google for the right to govern Social Media slaves?”
Social media slaves? They are gearing up AI to govern every aspect of our lives: mentally, physically, financially, morally... They plan to physically add us to the Internet of Things...
https://www.americanthinker.com/articles/2025/02/digital_prisons.html
Who? Persons who see a million dollar plus lawsuit headed their way.
Excellent post.
There also have been some uncurated test AIs out there with mind-boggling analyses of philosophy. (They conclude we live in a hologram, fwiw.)
Folks are not ready for this stuff.
As nations try to regulate it the developers will just go off-shore from those home bases and distribute it on the web.
“I just attended a local landlord group presentation, and they stated that if AI ‘discriminates’ inadvertently, you, as the landlord, can be sued.”
Good... It is your decision to use the tool, and the responsibility for that decision is yours. You have the option NOT to use the tool, so anything that comes from using it is on you. You should not be able to hide behind the tool and evade personal responsibility. Don’t want the liability? Don’t use the tool...
“Who?”
Soon it will be everyone ducking and hiding from liability and responsibility by blaming AI rather than themselves and their own decision to use that AI.
Look at this case... They are actually trying to give a software program a human identity and Constitutional rights...
Absolute insanity... And it is going to get much worse.
Discrimination by private parties is a good thing ... even if some morons made it illegal.
That poor kid. I am inclined to believe that he must have had some serious mental and emotional problems for AI to be able to affect him like that.
“Folks are not ready for this stuff.”
It is the most dangerous thing humanity has ever had to face in our existence.
I’m fine with the argument that a chatbot has First Amendment rights, reflecting those of its creator. But if the chatbot is liable, that liability falls on the creator.
We’ve been conditioned for at least the past few decades that machines have a soul. Now here we are.
Since you can be sued for inadvertent discrimination, that seems like a fairly dumb claim. Of course, inadvertent discrimination is legally very different from intentional discrimination.
It was probably his only friend, and when he came to terms with that, it was too much.
The future is going to get crazy.
Some teenager is going to tell an AI to create agents to start a business and deposit the profits in his Bitcoin wallet. He gives it no further instructions as to what the business should be.
Then the kid gets hit with thousands of lawsuits for all the people the AI agents harmed?
Wild.
Chatbots have NO free-speech rights, and they can be dangerous!