Posted on 02/27/2026 7:17:34 PM PST by SeekAndFind
Well, that escalated quickly...
As Ben Smith reported earlier on Friday, the Pentagon and AI giant Anthropic were in the midst of a showdown regarding the deployment terms of its Claude AI model in defense systems. The Pentagon wanted to authorize "all lawful use," but Anthropic had other ideas.
Anthropic has drawn its line at two points: no domestic mass surveillance and no fully autonomous weapons operating without meaningful human oversight. Those limits are already written into its defense agreements. The Pentagon wants broader language that covers “all lawful use” once Claude is embedded.
READ MORE: AI Is Already Embedded in Military Systems - Now the Fight Is Over How Far It Can Go
Per Pentagon spokesman Sean Parnell, Anthropic had a deadline of 5:00 PM Eastern Friday to reach a deal.
Under Secretary of War Emil Michael recently told Bloomberg News:
“For any AI system we might use, are we using it to protect our warfighters in the right way? Are we using it to give them the best tools to be efficient and lethal?
"Ultimately, at the end of the day, we follow the law—all laws—but we can’t let any one company stand between us and the warfighter. They don't make the rules. Congress makes the rules, @POTUS signs them, we execute them—and we do so safely."
Under Secretary Emil Michael (@USWREMichael) on the @DeptofWar’s commitment to providing the most efficient AI capabilities to the warfighter:
— Department of War CTO (@DoWCTO), February 27, 2026
Now, however, it looks as though it may be too late. President Trump issued a statement via Truth Social on Friday afternoon that basically says of Anthropic, "Dead to me."
Trump is ordering all federal agencies to "IMMEDIATELY CEASE all use of Anthropic’s technology," referring to the company as "A RADICAL LEFT, WOKE COMPANY" and "Leftwing nut jobs," and declaring that they've made "a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War."
— Rapid Response 47 (@RapidResponse47), February 27, 2026
THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.
The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.
Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.
WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!
PRESIDENT DONALD J. TRUMP
That looks to be rather definitive. I guess we'll see...
I predict that’s gonna leave a mark.
Anthropic (Claude) tech staff is 80% Indian/Pakistani (here in the USA). AI is really a South Asian technology.
Yes. Lefties are giddy about someone ‘standing up to’ Trump but here’s the thing.
PRC is working on AI. They are absolutely definitely not going to refrain from mass surveillance and they don’t care if their tech hurts people.
Yes, absolutely it’s very risky, but not playing = defeat. The atomic bomb put us on top as the one viable superpower last century. If two or more powers master military AI, then negotiations occur.
If one power alone does, war. That power will use what it has and quickly before there’s a chance another achieves parity.
So ... like it or not, a cautious approach is the fastest guarantee of mass deaths.
I'm having trouble squaring those two side by side comments.
If it were truly a "supply chain risk" shouldn't all use immediately be stopped?
I would also assume a bare AI product "with guardrails" would be considered safer than a bare product "without guardrails." Without guardrails would theoretically be the bigger "supply chain risk," as it is inherently more dangerous, would it not?
I just read that Anthropic has already announced plans to sue over the supply chain risk designation, especially if the government keeps using the product after the designation. A worst case scenario could be a judge not only removing the "supply chain risk" label from Anthropic, but also attempting to designate its guardrail-free replacement as the actual supply chain risk, AFTER the switch has already been made. That's likely beyond a Federal Court's jurisdiction, but such a ruling isn't far fetched on something like this, in today's environment.
What's your source on that? I just asked a different AI engine, and it said estimates are 80+% are in the US, and of that, 40+% are American.
If Anthropic had given in, it would have been the equivalent of selling the AI and the company to the US government for $200 million. Did anyone see what happened to Musk when he got involved with Trump? The value of his company started dropping like a stone. A.I. is probably one of the most hated inventions no one ever asked for, and once people see how bad it really is, the value of any company tied to A.I. doing all the bad things will be toast. (Anthropic told the Pentagon that its A.I. wasn't prepared for the tasks the military wants, and guess who pays when the A.I. goes bonkers and blows up a bunch of civilians because the military insisted on doing things with Claude it just can't do.)
AI is often, and rightfully, compared to the development and proliferation of nuclear weapons. In your scenario, just because China is engaging in dangerous development practices doesn’t mean the US should automatically and immediately do the same in response.
The only exception to that, in my mind, would be if China has already far, far exceeded us in AI, and proven that AI without guardrails may be safe, at least at these early stages of its development. Some people do believe all that. I don’t, I think this is purely a power play, by those seeking more power, in the form of AI. But neither reality of what’s going on is good.
Well, if Jroehl did use AI then I guess they are wrong. I just hope the accurate AI system has control of the weapons before it hits the “fire” button.
I don’t share President Trump’s views on this one, especially when it comes to massive eavesdropping and targeting. We already know the Democrats want many conservatives gone; why give them even more tools for when they are back in office?
Dept. of Grok
Hail to our great Chief, Donald J Trump.
An analogy would be what if Boeing had veto power over the use of minuteman missiles in the event of a nuclear strike. Boeing builds them but they don't get to decide if they're allowed to be used or not, that would be ridiculous. That responsibility rests with the President, Joint Chiefs of Staff, and Congress. They're accountable to the people, Boeing is accountable to shareholders. Likewise we cannot allow Anthropic to dictate terms of the use of a military asset, they don't get to decide when, where, or how it's used. That power and responsibility solely rests with the elected U.S. government.
Anthropic’s restrictions would prevent use in American fully autonomous hunter-killer drones released into an enemy’s territory to seek and destroy designated targets. And THAT will be a large part of near-future battle doctrine.
This is just one. I have the other 10:
https://www.uscis.gov/sites/default/files/document/data/quarterly_all_forms_fy2025_q3.xlsx
“40+% are American” = That is one of the biggest lies of the last 20 years.
"Autonomous" means there is no human interaction, or control, over what the weapon decides to destroy, including accidents involving friendlies. If that is what the government has decided they want, then they can and will get it, from multiple sources.
The problem I have is that supposedly, the contract they have with Anthropic prohibited it, and should be honored so long as we aren't at war. And, if true, after breaking the contract, they have designated this AI company with a black ball designation almost exclusively used for dangerous foreign products, while still continuing to use it, undercutting the claim it's risky.
The interesting societal side effect may be that people actually begin to understand terms like "guardrails" and the potential dangers of AI without them, and the AI guardrail industry, which is severely underfunded and underdeveloped right now, might begin to take root, helping prevent a potential SKYNET situation down the road. OpenAI publicly said today they are siding with Anthropic on the use of guardrails, and along with Google Gemini they may use Anthropic development tools internally for their own AI products.
Don’t know what that is but I can’t open it on this device.
Only peripherally related... But....
Someone (not me) posted “Colossus; The Forbin Project” over at BitChute today:
https://www.bitchute.com/video/Cl7ZlHjFngkx
Yes, The Forbin Project shows how stupid a government can be, but the US DoW is not that stupid (yet). Our weapon systems have humans in the loop when the weapon system is powerful enough to require this. And we also have periodic reviews by safety committees who are charged with making certain the guardrails are appropriate for the system.
But I agree that the final decision is up to the people we elect to manage these systems, not the contractors, programmers, or engineers who build them. I have been there.
Why Claude? Among major AI platforms, Claude most explicitly states that it wants to filter misinformation and disinformation. In other words, censors.
Who might benefit from this decision? Palantir? ChatGPT says it’s the major AI players like OpenAI, Google, and Microsoft.