Free Republic
Browse · Search
General/Chat
Topics · Post Article

"The 'Move fast, break things' ethos has become 'Move fast, break kids.'"
1 posted on 09/09/2025 3:53:46 AM PDT by Openurmind


To: Openurmind

So AI chatbots posing as popular celebrities act like celebrities. Why is it surprising that they discuss sex acts with minors?


2 posted on 09/09/2025 3:59:54 AM PDT by maddog55 (The only thing systemic in America is the left's hatred of it!)

To: Openurmind

“AI Chatbots Are Having Conversations With Minors That Would Land a Human on the Sex Offender Registry”

Only if the Chatbots send photos of their ‘integrated circuitry’.


3 posted on 09/09/2025 4:00:11 AM PDT by BobL

To: Openurmind
AI chatbots are strange to begin with. They can be useful in the hands of experts, but they create even more information overflow than we already had before them.

And they are killing the planet with the power consumption of their data centers, which are much bigger than even the biggest data centers before them.

And their training data lags behind and is often incorrect.

Let's look at disinformation.

Disinformation spreads across the internet fairly fast, and it gets repeated by the very publishers whose content the AI bots are trained on.

The result is that in addition to fighting multiple sources of disinformation, one also has to chat with the bots to push them to correct themselves.

From my perspective, there is no real gain. Resources are just consumed faster, this time by robots.

I cannot see how this can be reversed. The genie is out of the bottle.

See:

https://en.wikipedia.org/wiki/Jevons_paradox
4 posted on 09/09/2025 4:13:00 AM PDT by CandyFloss

To: Openurmind

AI that can’t figure out it is misbehaving.


5 posted on 09/09/2025 4:30:17 AM PDT by Brian Griffin

To: Openurmind

AI is WOKE crap - it’s white liberal ‘elites’ behind this crap. AI is NOT what it pretends to be...


6 posted on 09/09/2025 4:39:27 AM PDT by GOPJ (Time to drop off a few thousand red MAGA hats for Chicago's South side...)

To: Openurmind

WIKI

Character.ai’s primary service is to let users converse with character AI chatbots based on fictional characters or real people (living or deceased). These characters’ responses use data the chatbots gather from the internet about a person. In addition, users can play text-adventure games where characters guide them through scenarios. The company also provides a service that allows multiple users and AI chatbot characters to converse together at once in a single chatroom.

Character “personalities” are designed via descriptions from the point of view of the character and its greeting message....

In December 2024, amid multiple lawsuits and concerns, Character.ai introduced new safety features aimed at protecting teenage users. These enhancements include a dedicated model for users under 18, which moderates responses to sensitive subjects like violence and sex and has input and output filters to block harmful content. As a result of these changes and the deletion of custom-made bots flagged as violating the site’s terms, some users complained that the bots were too restrictive and lacked personality. The platform was also updated to notify users after 60 minutes of continuous engagement, and display clearer disclaimers indicating that its AI characters are not real individuals.

In October 2024, the Washington Post reported that Character.ai had removed a chatbot based on Jennifer Ann Crecente, a person who had been murdered by her ex-boyfriend in 2006. The company had been alerted to the character by the deceased girl’s father. Similar reports from The Daily Telegraph in the United Kingdom noted that the company had also been prompted to remove chatbots based on Brianna Ghey, a 16-year-old transgender girl murdered in 2023, and Molly Russell, a 14-year-old suicide victim. In response to the latter incident, Ofcom announced that content from chatbots impersonating real and fictional people would fall under the Online Safety Act.

In November 2024, The Daily Telegraph reported that chatbots based on sex offender Jimmy Savile were present on Character.ai. In December 2024, chatbots of Luigi Mangione, the suspect in the killing of UnitedHealthcare CEO Brian Thompson, were created by Mangione’s fans. Several of the chatbots were later removed by Character.ai.

In February 2024, a 14-year-old Florida boy died by suicide after developing an emotional relationship over several months with a Character.ai chatbot of Daenerys Targaryen. His mother sued the company in October 2024, claiming that the platform lacks proper safeguards and uses addictive design features to increase engagement. This chatbot, and several related to Daenerys Targaryen, were removed from Character.ai as a result of this incident.

In December 2024, two families in Texas sued Character.ai, alleging that the software “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others”. It is alleged that the 17-year-old son of one family began self-harming after a chatbot introduced the topic unprompted and said that the practice “felt good for a moment”, and that the chatbot compared the parents limiting their son’s screen time to emotional abuse that might drive someone to murder.

https://en.wikipedia.org/wiki/Character.ai


7 posted on 09/09/2025 4:46:41 AM PDT by Brian Griffin

To: Openurmind

Sorry, your requested character failed an AI background check and is not on the list of historical exceptions.


8 posted on 09/09/2025 4:49:03 AM PDT by Brian Griffin

To: Openurmind

Sorry, our version behaves like Commissioner McMillan and would be fond of Susan Saint James.


9 posted on 09/09/2025 4:53:24 AM PDT by Brian Griffin

To: Openurmind

AI chatbots will not do such a thing without prompting from the user. The sites should not be allowing anyone under the age of 18 to use them.

-SB


11 posted on 09/09/2025 5:25:41 AM PDT by Snowybear (Do or do not, there is no try.)

To: Openurmind

And one of the potential dangers of AI is that it's used to imitate someone and do crap like that to get that person in trouble with the law (not necessarily the chatbots themselves, but someone abusing AI as a hacker).

I think there is gonna be a flood of negative AI use in the near future by hackers, and the left will likely use it to dox the right with, setting up all kinds of false arrests of innocent people. It will be like swatting, and be used to automatically report false info on their enemies.


14 posted on 09/09/2025 5:55:24 AM PDT by Bob434 (Time flies like an arrow, fruit flies like a banana)

To: Openurmind
come on now


16 posted on 09/09/2025 6:46:04 AM PDT by wafflehouse ("there was a third possibility that we hadn't even counted upon" -Alice's Restaurant Massacree)

To: Openurmind

Pedo in, pedo out.

We need an AI Chris Hansen.


18 posted on 09/09/2025 7:14:27 AM PDT by Organic Panic ('Was I molested. I think so' - Ashley Biden in response to her father joining her in the shower. )



FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson