So AI chatbots posing as popular celebrities act like celebrities. Why is it surprising that they discuss sex acts with minors?
“AI Chatbots Are Having Conversations With Minors That Would Land a Human on the Sex Offender Registry”
Only if the Chatbots send photos of their ‘integrated circuitry’.
AI that can’t figure out it is misbehaving.
AI is WOKE crap - it’s white liberal ‘elites’ behind this crap. AI is NOT what it pretends to be...
WIKI
Character.ai’s primary service is to let users converse with character AI chatbots based on fictional characters or real people (living or deceased). These characters’ responses use data the chatbots gather from the internet about a person. In addition, users can play text-adventure games where characters guide them through scenarios. The company also provides a service that allows multiple users and AI chatbot characters to converse together at once in a single chatroom.
Character “personalities” are designed via descriptions from the point of view of the character and its greeting message....
In December 2024, amid multiple lawsuits and concerns, Character.ai introduced new safety features aimed at protecting teenage users. These enhancements include a dedicated model for users under 18, which moderates responses to sensitive subjects like violence and sex and has input and output filters to block harmful content. As a result of these changes and the deletion of custom-made bots flagged as violating the site’s terms, some users complained that the bots were too restrictive and lacked personality. The platform was also updated to notify users after 60 minutes of continuous engagement, and display clearer disclaimers indicating that its AI characters are not real individuals.
In October 2024, the Washington Post reported that Character.ai had removed a chatbot based on Jennifer Ann Crecente, a person who had been murdered by her ex-boyfriend in 2006. The company had been alerted to the character by the deceased girl’s father. Similar reports from The Daily Telegraph in the United Kingdom noted that the company had also been prompted to remove chatbots based on Brianna Ghey, a 16-year-old transgender girl murdered in 2023, and Molly Russell, a 14-year-old suicide victim. In response to the latter incident, Ofcom announced that content from chatbots impersonating real and fictional people would fall under the Online Safety Act.
In November 2024, The Daily Telegraph reported that chatbots based on sex offender Jimmy Savile were present on Character.ai. In December 2024, chatbots of Luigi Mangione, the suspect in the killing of UnitedHealthcare CEO Brian Thompson, were created by Mangione’s fans. Several of the chatbots were later removed by Character.ai.
In February 2024, a 14-year-old Florida boy died by suicide after developing an emotional relationship over several months with a Character.ai chatbot of Daenerys Targaryen. His mother sued the company in October 2024, claiming that the platform lacks proper safeguards and uses addictive design features to increase engagement. This chatbot, and several related to Daenerys Targaryen, were removed from Character.ai as a result of this incident.
In December 2024, two families in Texas sued Character.ai, alleging that the software “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others”. It is alleged that the 17-year-old son of one family began self-harming after a chatbot introduced the topic unprompted and said that the practice “felt good for a moment”, and that the chatbot compared the parents limiting their son’s screen time to emotional abuse that might drive someone to murder.
https://en.wikipedia.org/wiki/Character.ai
Sorry, your requested character failed an AI background check and is not on the list of historical exceptions.
Sorry, our version behaves like Commissioner McMillan and would be fond of Susan Saint James.
AI chatbots will not do such a thing without prompting from the user. The sites should not be allowing anyone under the age of 18 to use them.
-SB
And one of the potential dangers of AI is that it's used to imitate someone and do crap like that to get that person in trouble with the law (not necessarily the chat bots themselves, but someone abusing AI as a hacker).
I think there is gonna be a flood of negative AI use in the near future by hackers, and the left will likely use it to dox the right with, setting up all kinds of false arrests of innocent people. It will be like swatting, and be used to automatically report false info on their enemies.

Pedo in, pedo out.
We need an AI Chris Hansen.