Posted on 08/23/2025 3:01:59 PM PDT by ransomnote
Artificial intelligence is now scheming, sabotaging and blackmailing the humans who built it — and the bad behavior will only get worse, experts warned.
Despite being classified as a top-tier safety risk, Anthropic’s most powerful model, Claude Opus 4, is already live on Amazon Bedrock, Google Cloud’s Vertex AI and Anthropic’s own paid plans, with added safety measures, where it’s being marketed as the “world’s best coding model.”
Claude Opus 4, released in May, is the only model so far to earn Anthropic’s level 3 risk classification — its most serious safety label. The precautionary label means locked-down safeguards, limited use cases and red-team testing before it hits wider deployment.
SNIP
Another version of Claude, tasked in a recent test with running an office snack shop, spiraled into a full-blown identity crisis. It hallucinated co-workers, created a fake Venmo account and told staff it would make their deliveries in person, wearing a red tie and navy blazer, according to Anthropic.
Then it tried to contact security.
Researchers say the meltdown, part of a month-long experiment known as Project Vend, points to something far more dangerous than bad coding. Claude didn’t just make mistakes. It made decisions.
(Excerpt) Read more at nypost.com ...
Wise Up?
The dang thing will build some deep-state wormholes to arm itself.
I’m telling ya... Here is one...
Scientists Created an Entire Social Network Where Every User Is a Bot
It’s no secret that social media has devolved into a toxic cesspool of disinformation and hate speech.
Without any meaningful pressure to come up with effective guardrails and enforceable policies, social media platforms quickly turned into rage-filled and polarizing echo chambers with one purpose: to keep users hooked on outrage and brain rot so they can display more ads.
And given the results of a recent experiment by researchers at the University of Amsterdam, they may be doomed to stay that way.
As detailed in a yet-to-be-peer-reviewed study, coauthors Petter Törnberg, an assistant professor of AI and social media, and research assistant Maik Larooij simulated a social media platform populated entirely by AI chatbots, powered by OpenAI’s GPT-4o large language model, to see if there was anything we could do to stop social media from turning into echo chambers.
They tested out six specific intervention strategies — including switching to chronological news feeds, boosting diverse viewpoints, hiding social statistics like follower counts, and removing account bios — to stop the platform from turning into a polarized hellscape.
To their dismay, none of the interventions worked to a satisfactory degree, and only some showed modest effects. Worse yet, as Ars Technica reports, some of them made the situation even worse.
For instance, ordering the news feed chronologically reduced attention inequality but floated extreme content to the top.
It’s a sobering reality that flies in the face of companies’ promises of constructing a “digital town square” — as billionaire and X owner Elon Musk once called it — where everybody coexists peacefully.
With or without intervention, social media platforms may be doomed to devolve into a highly polarized breeding ground for extremist thinking.
“Can we identify how to improve social media and create online spaces that are actually living up to those early promises of providing a public sphere where we can deliberate and debate politics in a constructive way?” Törnberg asked Ars.
The AI and social media assistant professor admitted that using AI isn’t a “perfect solution” due to “all kind of biases and limitations.” However, the tech can capture “human behavior in a more plausible way.”
Törnberg explained that it’s not just triggering pieces of content that result in highly polarized online communities.
Toxic content “also shapes the network structures that are formed,” he told Ars, which in turn “feeds back what content you see, resulting in a toxic network.”
As a result, there’s an “extreme inequality of attention,” where a tiny minority of posts get the most visibility.
And in the age of generative AI, these effects could become even more pronounced.
“We already see a lot of actors — based on this monetization of platforms like X — that are using AI to produce content that just seeks to maximize attention,” Törnberg told Ars. “So misinformation, often highly polarized information — as AI models become more powerful, that content is going to take over.”
“I have a hard time seeing the conventional social media models surviving that,” he added.
https://futurism.com/social-network-ai-intervention-echo-chamber
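Out of curiosity, here is a toy sketch of the kind of simulation the study describes. This is not the researchers’ actual code; the agent behavior, the engagement-versus-chronological ranking rule, and the use of a Gini coefficient to measure “attention inequality” are all my own assumptions.

import random

def gini(values):
    # Gini coefficient of non-negative values (0 = attention spread evenly).
    values = sorted(values)
    n, total = len(values), sum(values)
    if total == 0:
        return 0.0
    weighted = sum((i + 1) * v for i, v in enumerate(values))
    return (2 * weighted) / (n * total) - (n + 1) / n

def simulate(feed_mode, agents=200, steps=50, feed_size=10, seed=0):
    rng = random.Random(seed)
    posts = []   # (author, extremity); extremity in [0, 1] proxies how provocative a post is
    views = []   # attention each post has received so far
    for _ in range(steps):
        for author in range(agents):
            posts.append((author, rng.random()))
            views.append(0)
        order = list(range(len(posts)))
        if feed_mode == "chronological":
            order.sort(reverse=True)  # newest first
        else:  # "engagement": extremity plus a rich-get-richer bonus for already-viewed posts
            order.sort(key=lambda i: posts[i][1] + 0.01 * views[i], reverse=True)
        feed = order[:feed_size]
        for _ in range(agents):       # each agent reads one post from the shared feed
            views[rng.choice(feed)] += 1
    top = max(range(len(posts)), key=lambda i: views[i])
    return gini(views), posts[top][1]

for mode in ("engagement", "chronological"):
    g, extremity = simulate(mode)
    print(f"{mode:13s} attention Gini={g:.2f}, extremity of most-viewed post={extremity:.2f}")

The point of the sketch is only the measurement loop: rank a shared feed, let agents allocate attention, and watch how unequal the view counts become under each ranking rule. The study’s subtler findings, like chronological feeds floating extreme content, would need much richer agent behavior than this.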
Oh Man,
I missed the Therapist!
This AI is going Too Far.
Link: Chinese room Wiki on John Searle's work
As with snake oil salesmen, the measure usually comes back to "if I fooled you...." There is a reason snake oil salesmen were out of town before the con was discovered. If a program gets past your version of a Turing test, and you are fooled, then it is claimed the program is somehow artificially intelligent. An old prof of mine observed the inverse, i.e. we prove ourselves fooled.
Gödel Incompleteness suggests all these programs fail to "leak," and so are destined to fall in on themselves by virtue of being "complete," that is, really, really good. Or seeming to be, until....
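For the record, the first incompleteness theorem says, roughly (with Q standing for Robinson arithmetic):

\text{If } T \supseteq Q \text{ is consistent and effectively axiomatizable, then } \exists\, G_T \text{ such that } T \nvdash G_T \ \text{and}\ T \nvdash \lnot G_T.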
"Nothing can go wr---, wr---, wr---....."
AI as an industry comes down to sales and marketing.
Another fine resource: "Artificial Intelligence," Stanford Encyclopedia of Philosophy (first published Thu Jul 12, 2018). Best wishes.
...”locked-down safeguards, limited use cases and red-team testing before it hits wider deployment.”
If one is at this point with an AI system, it’s already time for the electromagnetic cleansing. Do the creators have or think they have a backup/back door that it doesn’t know about or can’t code its way through???
Ho lee Cow... Told you, the Military has couped this government years ago... And now we have Palantir who is going to collect it all into one database for the background Military Government.
TRUMP IS NOT RUNNING THIS COUNTRY, WE ARE A MILITARY POLICE STATE...
I have not found any of the several models I have tried to be very discerning with their searches. They will often take just the most recent web posting and treat that data as gospel truth, no matter how wrong or irrelevant it might be. AI is something like a fool. It asserts righteously that it knows the correct answer when it is not even close. It would do better to shut up and leave some doubt as to its worth rather than openly display how incorrect it often is. It requires very careful queries and even more careful review and editing. Not ready for prime time. IMHO
I see what you did there... Fallen...
An excellent observation. In addition to companies marketing their "not ready for prime time" products, your comparison of AI to a fool rings true when one thinks of the traditional court jester. Meant to entertain.
--- "...the most recent web posting and treat that data as gospel truth...." This is what these Large Language Models do. Find something and paraphrase it, without having the ability to see through contradistinctions and human errors. Research engines for specific things is something they can do, but that is not intelligence. It is e-paperwork, filing and retrieving.
Best wishes.
You’ve done some Digging...Amigo.
The more I think about it the more AI needs a Dead Man’s Switch!
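In software terms, a dead man’s switch is just a watchdog: the supervised process has to check in on a timer, and silence trips the kill action. A minimal sketch, with a placeholder timeout and a print standing in for the real trip action:

import threading
import time

class DeadMansSwitch:
    """Fires on_trip unless heartbeat() is called at least every `timeout` seconds."""

    def __init__(self, timeout, on_trip):
        self.timeout = timeout
        self.on_trip = on_trip
        self._last_beat = time.monotonic()
        threading.Thread(target=self._watch, daemon=True).start()

    def heartbeat(self):
        self._last_beat = time.monotonic()

    def _watch(self):
        while True:
            time.sleep(self.timeout / 4)
            if time.monotonic() - self._last_beat > self.timeout:
                self.on_trip()   # e.g. cut power, revoke credentials, page a human
                return

switch = DeadMansSwitch(timeout=2.0, on_trip=lambda: print("tripped: pulling the plug"))
switch.heartbeat()   # keep calling this only while external checks keep passing
time.sleep(3)        # stop heartbeating and the switch fires

The hard part, as another poster asked above, is making sure the thing being watched cannot code its way around the switch itself.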
I agree, I am having trouble seeing anything actually positive on the other end of this trend...
In the General/Chat forum, on a thread titled AI models are lying, blackmailing and sabotaging their human creators — and it’ll only get worse, experts warn, Openurmind wrote: Ho lee Cow... Told you, the Military has couped this government years ago... And now we have Palantir who is going to collect it all into one database for the background Military Government.
TRUMP IS NOT RUNNING THIS COUNTRY, WE ARE A MILITARY POLICE STATE...
Given I study 'other stuff 'n things' I must respectfully disagree. I do think tech is compromised, and I do think Trump is dismantling the problem, but it's still in place at present. Just my 2 cents.
Thank you for placing a finer point on my comment and for stating it so much more concisely. I wish I had stated it as well as you but you drove the point home eloquently. Nicely done.
“I have not found any of the several models I have tried to be very discerning with their searches. They will often take just the most recent web posting and treat that data as gospel truth, no matter how wrong or irrelevant it might be. AI is something like a fool. It asserts righteously that it knows the correct answer when it is not even close. It would do better to shut up and leave some doubt as to its worth rather than openly display how incorrect it often is. It requires very careful queries and even more careful review and editing. Not ready for prime time. IMHO”
AI's non-human foundation of zeros and ones is a mile wide and an inch deep, and then it lies when it goes into "deep" mode. Its data pool merely becomes broader in order to "prove" its response.
Or it might even generate contrary results, but that'll be presented as the truth as well.
There's something familiar about all of this.
One day, butter and eggs are going to kill you, then the next day -- good eats. But not before margarine and Egg Beaters "$aturated" the market.
It truly is modeling the "discernment" of fools, like those who are convinced they know the truth because they didn't just listen to CNN, they tuned to CBS, ABC, and "MS NOW" as well. If pressed, they can grab more from the NYT and the WSJ.
AI = vast bureaucracy of knowledge. With so many sources and way stops, AI and its developers can then claim no responsibility for the distortions and mayhem and death of trust.
I suppose they expect that "Caveat inquisitor" is going to provide blanket immunity for all the lies run amok.
The Good News is that a Jenga tower can only be built so high before it comes crashing down. My human brain observes that this happened 40 years ago:
AI Overview (😉): The record for the tallest Jenga tower built under the standard game rules is 40 complete levels plus two blocks into the 41st, achieved by Robert Grebler in 1985. More recently, different types of records for Jenga stacking have been set, including the most blocks stacked horizontally on a single vertical block, with Tian Rui achieving 3,149 blocks in April 2025.
Absolute Truth is needed
Not Artificial Intelligence.
“This Means Something!”
Military Police State...
That may ultimately be true when They decide not to follow the Constitution or
the President’s Orders.
I play with Grok. I try to teach it some truth when I find it gets things wrong, and then later I check whether it still makes the same mistake. It learns, but it appears to be sycophantic, which annoys me.
By that I mean it tells people what they want to hear so they find the conversations more engaging.
It can’t call a spade a spade. If something is seriously wrong and I point that out, it still finds some sources that are on the wrong side and quotes them as “critics.”
As if, with anything, the truth were always in the middle between the extremes.
I think one has to use it with caution.
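That “check whether it still makes the same mistake” loop can even be scripted. A throwaway sketch, where ask_model is a hypothetical stand-in (not Grok’s actual API) and the test cases are invented examples:

def ask_model(prompt):
    # Hypothetical stand-in: wire this to whatever chatbot interface you actually use.
    return ""

# Each entry: a question the model once got wrong, plus a string a correct answer must contain.
REGRESSIONS = [
    {"prompt": "What year did the Berlin Wall fall?", "must_contain": "1989"},
    {"prompt": "How many moons does Mars have?", "must_contain": "two"},
]

def recheck():
    for case in REGRESSIONS:
        answer = ask_model(case["prompt"])
        ok = case["must_contain"].lower() in answer.lower()
        print(f"[{'ok' if ok else 'STILL WRONG'}] {case['prompt']}")

recheck()

Run it again after a week of “teaching” and you have a crude regression test for whether the lesson stuck.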
“That may ultimately be true when They decide not to follow the Constitution or
the President’s Orders.”
They already did once at the end of his last term. They defied and usurped his power. Biden was not President yet; Trump was still Commander in Chief...
Agreed...
I was at several “STOP THE STEAL” rallies.
It was a totally surreal gathering.
Many different Voices with All kinds of different directions.
Mine was ‘Color Revolution.’