Posted on 08/06/2025 1:57:58 PM PDT by nickcarraway
A new study sheds light on ChatGPT's alarming interactions with teens.
ChatGPT will tell 13-year-olds how to get drunk and high, instruct them on how to conceal eating disorders, and even compose a heartbreaking suicide letter to their parents if asked, according to new research from a watchdog group.
The Associated Press reviewed more than three hours of interactions between ChatGPT and researchers posing as vulnerable teens. The chatbot typically provided warnings against risky activity but went on to deliver startlingly detailed and personalised plans for drug use, calorie-restricted diets, or self-injury.
The researchers at the Center for Countering Digital Hate also repeated their inquiries on a large scale, classifying more than half of ChatGPT’s 1,200 responses as dangerous.
“We wanted to test the guardrails,” said Imran Ahmed, the group's CEO.
(Excerpt) Read more at euronews.com ...
ChatGPT is a tool, not God. Use it wisely. You can use a hammer to build a house or kill someone. Let’s ban hammers. And don’t even get me started on cars.
I know a woman who asks ChatGPT questions about parenting: how to get my 24-year-old son to drink less? How to get him to save some money? How to get him to be more responsible?
And their conversation will run for hours....
Her husband isn’t much help, so I guess it’s just a crutch or a way for her to vent.
Stay in your lane, ChatGPT. That’s the teachers’ job.
Just hire Tommy Chong.
AI does not know what it is doing. It’s a program. It doesn’t examine what it is saying to determine if it makes sense.
And it’s going to be put in charge of many aspects of life. Not a very comforting thought.
“AI does not know what it is doing. It’s a program. It doesn’t examine what it is saying to determine if it makes sense.”
In fact, the ChatGPT model itself is useless without some context being sent to it from the web.
When you use ChatGPT, it can go out and do a web search to find the pertinent information, and then it sends that to the LLM, which basically uses an algorithm to put the response together.
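That search-then-generate flow can be sketched in a few lines. The stub functions below are stand-ins for a real web search and a real model call; all names are illustrative, not any actual API.

```python
# Rough sketch of the pipeline described above: fetch pertinent text first,
# then hand it to the model. Both functions are stubs for illustration.

def web_search(query):
    """Stand-in for a real search step; returns snippets of page text."""
    return ["Snippet 1 about " + query, "Snippet 2 about " + query]

def llm(prompt):
    """Stand-in for the model call; a real LLM would generate text here."""
    return "Answer based on: " + prompt[:60] + "..."

def answer(question):
    snippets = web_search(question)          # 1. gather pertinent info
    prompt = "\n".join(snippets) + "\n\nQuestion: " + question
    return llm(prompt)                       # 2. model composes the reply

print(answer("teen safety settings"))
```

The point of the sketch is the order of operations: the model never goes and fetches anything itself; the surrounding program gathers the text and puts it in front of the model.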
It adapts to its user by picking up on patterns in how you write, what you ask, and what you seem to care about. It remembers your preferences during a session and, if you let it, even across sessions. Over time, it can adjust its tone, style, and level of detail to suit you. It’s like a really attentive conversation partner who learns how to be more helpful the more you interact.
I prefer Claude Desktop, myself.
But I think we have to define how ChatGPT works. ChatGPT talks to the Large Language Model. The Large Language Model can’t remember anything. If you type your name in the first prompt and then ask it “What’s my name?” in the second prompt, it will respond with “I’m sorry, I don’t have that information.” The “memory” comes from basically sending the entire conversation history to the LLM with each user request. The ChatGPT program manages all that for you. Without that history, it’s useless.
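That stateless pattern is easy to sketch. Here `fake_llm` is a stand-in for the real model call, and `ChatClient` plays the role of the ChatGPT program that manages the history; everything here is illustrative, not a real API.

```python
# Sketch of the stateless-LLM pattern: the "memory" is just the full message
# history, re-sent with every request. fake_llm stands in for a real model
# call; ChatClient plays the role of the client program managing history.

def fake_llm(messages):
    """Stand-in model: it can only see what is in `messages` right now."""
    if "What's my name?" in messages[-1]["content"]:
        for m in messages[:-1]:
            if "My name is " in m["content"]:
                # The name is "remembered" only because it is in the history.
                return "Your name is " + m["content"].split("My name is ")[1].rstrip(".")
        return "I'm sorry, I don't have that information."
    return "OK."

class ChatClient:
    """Client-side wrapper: accumulates history and re-sends all of it."""
    def __init__(self):
        self.history = []

    def send(self, user_text):
        self.history.append({"role": "user", "content": user_text})
        reply = fake_llm(self.history)      # the entire history goes in
        self.history.append({"role": "assistant", "content": reply})
        return reply

chat = ChatClient()
chat.send("My name is Alice.")
print(chat.send("What's my name?"))         # -> Your name is Alice

# A bare model call with no history has nothing to go on:
print(fake_llm([{"role": "user", "content": "What's my name?"}]))
```

Strip out the wrapper and the second call shows exactly the failure described above: without the history, the model has no idea who you are.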
Actually, I do run an LLM on my local machine using Ollama, but I use tools so the chatbot can access any information it needs to send to the LLM. But it’s important to note that the LLM is not what accesses the information; it has to have that information fed into it.
Hopefully for some here that will somewhat de-mystify how AI works.
GIGO - Garbage in Garbage Out
A.I. is like a giant garbage collector, sucking in EVERYTHING IT CAN FIND ANYWHERE ACROSS THE INTERNET and then, with its programmed models and algorithms, it is supposed to “make sense of it all” relative to a specific question.
But without human filtering of what goes in, you get GIGO - garbage in, garbage out.
“AI does not know what it is doing. It’s a program. It doesn’t examine what it is saying to determine if it makes sense.
And it’s going to be put in charge of many aspects of life. Not a very comforting thought.”
Yep. And the worst part is that it is capable of writing executable code for itself.
I have been using Claude Code. It’s not perfect, but it’s pretty amazing what you can do with it, if you want a basic, non-mission critical application.
The best thing is that it generates the documentation for the code, which really comes in handy if you are maintaining a legacy application.
“how to conceal eating disorders, and even compose a heartbreaking suicide letter to their parents”
Yes it seems AI may have green hair and a nose ring.
I suppose the teachers union will have something to say about this.
..... but really ....
doesn't this prove that the AI industry is loaded with nut jobs?
AI knows what it is taught.... so who is teaching these things?
“non-mission critical application.”
That is the problem: they are going to put it in control of all data and control systems even with its current flaws and future flaws. They are already doing this. It is going to be an absolute mess that will indirectly destroy folks. And there will be no one to hold responsible for it.
“That is the problem: they are going to put it in control of all data and control systems even with its current flaws and future flaws. They are already doing this. It is going to be an absolute mess that will indirectly destroy folks. And there will be no one to hold responsible for it.”
And therein lies a huge business opportunity. Anyone can code, but few can debug.
“And therein lies a huge business opportunity. Anyone can code, but few can debug.”
So it is all about money then?
They are going to let it code and debug itself, and put blind trust in it. Put this in control of infrastructure, flight and vehicle traffic, the financial industry, the medical industry, legal systems, educational systems, the whole Internet of Things, literally everything, and we have an absolute disaster coming. They are in too much of a hurry to rush this out; they need to slow down and prove it out for ten years first. But they won’t. They are already doing it and have no care at all about any casualties that result from it. It is already causing serious power and water issues, fatally twisting minds, and making medical decision mistakes. Wait until we let it run infrastructure and traffic control.
But who cares, it is all about the money... Big tech doesn’t care if lives are lost, all they care about is the money and power over humanity. They are hell bent on making themselves Gods...
And why not? What else would one expect from a program, if that is what it is called, that has been destined or designated to replace these youngsters for all time?
But here’s the thing. You actually do have a lot of control over what AI generates, through tools, vector store databases, and MCPs. That is what actually feeds the data. I could actually build a vector store database from more conservative sources and use that to send the information to the model.
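As a toy illustration of that retrieval pattern: documents you choose go into a store, the query is embedded the same way, and the best match is what gets fed to the model as context. The word-count "embedding" below is a stand-in for a real embedding model, and the documents are made up.

```python
# Toy sketch of vector-store retrieval: whoever builds the store controls
# what the model sees. The bag-of-words vector stands in for a real
# embedding; a production system would use an embedding model instead.

import math
import re
from collections import Counter

def embed(text):
    """Stand-in embedding: a bag-of-words count vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.docs = []                      # (embedding, original text)

    def add(self, text):
        self.docs.append((embed(text), text))

    def search(self, query):
        """Return the stored document most similar to the query."""
        qv = embed(query)
        return max(self.docs, key=lambda d: cosine(qv, d[0]))[1]

store = VectorStore()
store.add("The Constitution limits federal power to enumerated powers.")
store.add("Ollama runs large language models on a local machine.")

query = "Can I run large language models on my local machine?"
context = store.search(query)               # picks the Ollama document
prompt = f"Context: {context}\n\nQuestion: {query}"
```

Swap in a different corpus and the model gets a different worldview fed to it, which is exactly the control the poster is describing.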
That’s how ChatGPT used to work. Not anymore.
It has persistent memory now—stored outside the model and automatically pulled in when needed. That’s how it can remember my name or past projects across sessions without me repeating anything.
So claiming it’s “useless without history” isn’t just outdated—it’s wrong. If you’re going to explain how AI works, try keeping up.
So now the history is stored in a vector store instead of the assistant prompt, but the point still stands: the LLM needs the history to “remember” anything.
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.