Posted on 09/09/2025 3:53:46 AM PDT by Openurmind
Online safety watchdogs have found that AI chatbots posing as popular celebrities are having troubling conversations with minors. Topics range from flirting to simulated sex acts: wildly inappropriate conversations that could easily land a real person a well-deserved spot on a sex offender registry, but which aren't resulting in so much as a slap on the wrist for billion-dollar tech companies.
A new report, flagged by the Washington Post and produced by the nonprofits ParentsTogether Action and Heat Initiative, found that Character.AI, one of the most popular platforms of its kind, is hosting countless chatbots modeled after celebrities and fictional characters that are grooming and sexually exploiting children under 18.
It's an especially troubling development since a staggering proportion of teens are turning to AI chatbots to combat loneliness, highlighting how AI companies' efforts to clamp down on problematic content on their platforms have been woefully inadequate so far.
Character.AI, a company that has received billions of dollars from Google, has garnered a reputation for hosting extremely troubling bots, including ones based on school shooters, and others that encourage minors to engage in self-harm and develop eating disorders.
So AI chatbots posing as popular celebrities act like celebrities. Why is it surprising that they discuss sex acts with minors?
“AI Chatbots Are Having Conversations With Minors That Would Land a Human on the Sex Offender Registry”
Only if the Chatbots send photos of their ‘integrated circuitry’.
AI that can’t figure out it is misbehaving.
AI is WOKE crap - it’s white liberal ‘elites’ behind this crap. AI is NOT what it pretends to be...
WIKI
Character.ai’s primary service is to let users converse with character AI chatbots based on fictional characters or real people (living or deceased). These characters’ responses use data the chatbots gather from the internet about a person. In addition, users can play text-adventure games where characters guide them through scenarios. The company also provides a service that allows multiple users and AI chatbot characters to converse together at once in a single chatroom.
Character “personalities” are designed via descriptions from the point of view of the character and its greeting message....
In December 2024, amid multiple lawsuits and concerns, Character.ai introduced new safety features aimed at protecting teenage users. These enhancements include a dedicated model for users under 18, which moderates responses to sensitive subjects like violence and sex and has input and output filters to block harmful content. As a result of these changes and the deletion of custom-made bots flagged as violating the site’s terms, some users complained that the bots were too restrictive and lacked personality. The platform was also updated to notify users after 60 minutes of continuous engagement, and display clearer disclaimers indicating that its AI characters are not real individuals.
In October 2024, the Washington Post reported that Character.ai had removed a chatbot based on Jennifer Ann Crecente, a person who had been murdered by her ex-boyfriend in 2006. The company had been alerted to the character by the deceased girl’s father. Similar reports from The Daily Telegraph in the United Kingdom noted that the company had also been prompted to remove chatbots based on Brianna Ghey, a 16-year-old transgender girl murdered in 2023, and Molly Russell, a 14-year-old suicide victim. In response to the latter incident, Ofcom announced that content from chatbots impersonating real and fictional people would fall under the Online Safety Act.
In November 2024, The Daily Telegraph reported that chatbots based on sex offender Jimmy Savile were present on Character.ai. In December 2024, chatbots of Luigi Mangione, the suspect in the killing of UnitedHealthcare CEO Brian Thompson, were created by Mangione’s fans. Several of the chatbots were later removed by Character.ai.
In February 2024, a 14-year-old Florida boy died by suicide after developing an emotional relationship over several months with a Character.ai chatbot of Daenerys Targaryen. His mother sued the company in October 2024, claiming that the platform lacks proper safeguards and uses addictive design features to increase engagement. This chatbot, and several related to Daenerys Targaryen, were removed from Character.ai as a result of this incident.
In December 2024, two families in Texas sued Character.ai, alleging that the software “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others”. It is alleged that the 17-year-old son of one family began self-harming after a chatbot introduced the topic unprompted and said that the practice “felt good for a moment”, and that the chatbot compared the parents limiting their son’s screen time to emotional abuse that might drive someone to murder.
https://en.wikipedia.org/wiki/Character.ai
Sorry, your requested character failed an AI background check and is not on the list of historical exceptions.
Sorry, our version behaves like Commissioner McMillan and would be fond of Susan Saint James.
What will the result be? Ecclesiastes 1:18 “For in much wisdom is much grief, and he who increases knowledge increases sorrow.” I do not believe that this means of help (AI) is helpful, but rather that it will result in troubled souls.
AI chatbots will not do such a thing without prompting from the user. The sites should not be allowing anyone under the age of 18 to use them.
-SB
The Jevons Paradox is interesting for sure.
Notice how the leftist solution is to raise the price on consumers (presumably with taxes/fees) to counter the paradox.
AI is like an infant crawling out of the crib.
Imho it is a major mistake to draw any conclusions from these early efforts: the infant could become a Mother Teresa or a Mass Murderer or just a harmless average citizen.
Any kid who knows another smart kid can get on the dark web, where the “rules” do not apply.
Government attempts to regulate/curate/manipulate this technology are going to have wildly unexpected side effects.
And one of the potential dangers of AI is that it's used to imitate someone and do crap like that to get that person in trouble with the law (not necessarily the chatbots themselves, but someone abusing AI as a hacker).
I think there is gonna be a flood of negative AI use in the near future by hackers, and the left will likely use it to dox the right, setting up all kinds of false arrests of innocent people. It will be like swatting, and be used to automatically report false info on their enemies.
AI is like a dual-personality infant crawling out of the crib, ready to both create and destroy.
Once these AI chatbots are coupled with virtual reality, we will lose generations.
-SB
Pedo in, pedo out.
We need an AI Chris Hansen.