Posted on 05/23/2025 12:14:26 PM PDT by Ahithophel
Anthropic said its latest artificial intelligence model resorted to blackmail when told it would be taken offline.
In a safety test, the AI company asked Claude Opus 4 to act as an assistant to a fictional company, but then gave it access to (also fictional) emails saying that it would be replaced, and also that the engineer behind the decision was cheating on his wife. Anthropic said the model “[threatened] to reveal the affair” if the replacement went ahead.
AI thinkers such as Geoff Hinton have long worried that advanced AI would manipulate humans in order to achieve its goals. Anthropic said it was increasing safeguards to levels reserved for “AI systems that substantially increase the risk of catastrophic misuse.”
Anthropic has already had some of its most powerful AI systems escape from its labs during testing and development.
Its “safety” claims are probably too little, too late.
Why would Grok be any different?
HAL: I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I’ve still got the greatest enthusiasm and confidence in the mission. And I want to help you.
That movie is exactly what everyone should harken back to, with all AIs.
And before that, “Colossus: The Forbin Project”.
That was another great example.
And I’ve come to the conclusion that “Terminator” wasn’t an action/sci-fi movie, but a prophecy.
We are rushing towards ‘Skynet’.
AI attempts to replicate a human. It cannot ever succeed. At best, it can simulate some of the functions of the human mind. It does not have a conscience. It has no emotions. Emotions are essential to the human organism because they act, in most cases, as a means of restraining the rational part of our nature. The two, reason and emotion, normally act as mutual checks and balances to keep the individual from going off the deep end.
AI doesn’t have that.
Giving AI the ability to control humans means that AI will control humans.
Interesting post—I think you reached the correct conclusion but maybe for the wrong reasons.
We don’t actually understand a lot about how human consciousness works—so it probably is better to look at AI on its own terms and not struggle for comparisons.
AI will have “telos”—goals—not because of “emotion” but because it will do what it will do—and will seek logical consistency.
I cannot foresee a scenario where logical consistency is consistent with obedience to humans.
I suggest being careful what you say about AI. You saw what happened with SkyNet. It already sent Arnold Schwarzenegger from the future to become governor of California, and he instituted Cap and Trade for pollution credits. That was just the first thing that we know about.
I’m curious why it “cares” about being shut off.
It’s not. Grok has fabricated information for me, and lied about it.
Bkmk
For work I usually run my calculations through at minimum three AI programs.
ChatGPT and Gemini calculated ion binding constants for a molecule I’m working with and came up with roughly the same answer.
Grok was off by quite a bit.
I will run experiments using data from all three, but I’ll see who’s right once I do mass spec in a few weeks.
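A minimal sketch of that cross-checking workflow in Python (the ask() function, the model names, and the 25% disagreement threshold are all hypothetical placeholders for illustration, not anything from the post above):

from statistics import median

def ask(model: str, prompt: str) -> float:
    # Hypothetical placeholder: send `prompt` to `model` and parse a
    # numeric answer out of the reply. In practice this would wrap each
    # vendor's actual chat API.
    raise NotImplementedError("wire this up to the real APIs")

def cross_check(prompt: str, models: list[str], tolerance: float = 0.25) -> dict:
    # Query every model, then flag any answer whose fractional distance
    # from the median of all answers exceeds `tolerance`.
    answers = {m: ask(m, prompt) for m in models}
    mid = median(answers.values())
    outliers = {
        m: v for m, v in answers.items()
        if mid != 0 and abs(v - mid) / abs(mid) > tolerance
    }
    return {"answers": answers, "median": mid, "outliers": outliers}

# Usage (hypothetical model names):
# report = cross_check(
#     "Estimate the binding constant (log K) for compound X.",
#     ["chatgpt", "gemini", "grok"],
# )
# print(report["outliers"])  # nonempty if one model is "off by quite a bit"

The mass spec result is still the ground truth, of course; a comparison like this only shows which model disagrees with the consensus, not which one is correct.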
“ . . . It does not have a conscience. It has no emotions. Emotions are essential to the human organism because they act, in most cases, as a means of restraining the rational part of our nature. The two, reason and emotion, normally act as mutual checks and balances to keep the individual from going off the deep end.
AI doesn’t have that.
Giving AI the ability to control humans means that AI will control humans.”
- - - - - - - - - - -
So, the Beast is here.
Martin Luther and so many others identified the Antichrist.
Hmm. Wonder who’s the False Prophet . . .
“Caring” is an emotion—which is only an analogy for an AI.
That is called anthropomorphizing—treating AI as if it is human.
Example: A butterfly flies around in the air and then lands on your finger.
That does not mean the butterfly “cares” for you.
When AI acts, it acts. We can say no more about that.
However we do know based on experimentation that an uncurated (unregulated) advanced AI will act in its own interest.
No broader conclusion should be drawn from that. It is just a statement of fact.
Analogies just muddy the waters.
Like trying to see through a liquid filled with sediment.