Free Republic
Browse · Search
News/Activism
Topics · Post Article


'Disastrous Mistake': Trump Calls Out Anthropic, Orders All Federal Agencies to Cut Ties
Red State ^ | 02/27/2026 | Susie Moore

Posted on 02/27/2026 7:17:34 PM PST by SeekAndFind

Well, that escalated quickly...

As Ben Smith reported earlier on Friday, the Pentagon and AI giant Anthropic were in the midst of a showdown regarding the deployment terms of its Claude AI model in defense systems. The Pentagon wanted to authorize "all lawful use," but Anthropic had other ideas.

Anthropic has drawn its line at two points: no domestic mass surveillance and no fully autonomous weapons operating without meaningful human oversight. Those limits are already written into its defense agreements. The Pentagon wants broader language that covers “all lawful use” once Claude is embedded.




Per Pentagon spokesman Sean Parnell, Anthropic had a deadline of 5:00 PM Eastern Friday to reach a deal. 

Under Secretary of War Emil Michael recently told Bloomberg News: 

“For any AI system we might use, are we using it to protect our warfighters in the right way? Are we using it to give them the best tools to be efficient and lethal?

"Ultimately, at the end of the day, we follow the law—all laws—but we can’t let any one company stand between us and the warfighter. They don't make the rules. Congress makes the rules, @POTUS signs them, we execute them—and we do so safely."


Now, however, it looks as though it may be too late. President Trump issued a statement via Truth Social on Friday afternoon that basically says of Anthropic, "Dead to me." 

Trump is ordering all federal agencies to "IMMEDIATELY CEASE all use of Anthropic’s technology," referring to the company as "A RADICAL LEFT, WOKE COMPANY" and "Leftwing nut jobs," and declaring that they've made "a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War." 


THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military. 
 
The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY. 
 
Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.
 
WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!
 
PRESIDENT DONALD J. TRUMP

That looks to be rather definitive. I guess we'll see...


TOPICS: Culture/Society; Government; News/Current Events
KEYWORDS: ai; aitruth; anthropic; defense; woke




1 posted on 02/27/2026 7:17:34 PM PST by SeekAndFind

To: SeekAndFind

I predict that’s gonna leave a mark.


2 posted on 02/27/2026 7:23:13 PM PST by paulcissa (The left hates you and wants you dead.)

To: SeekAndFind

Anthropic (Claude) tech staff is 80% Indian/Pakistani (here in the USA). AI is really a South Asian technology.


3 posted on 02/27/2026 7:28:50 PM PST by jroehl (And how we burned in the camps later - Aleksandr Solzhenitsyn - The Gulag Archipelago)

To: paulcissa

Yes. Lefties are giddy about someone ‘standing up to’ Trump but here’s the thing.

PRC is working on AI. They are absolutely definitely not going to refrain from mass surveillance and they don’t care if their tech hurts people.

Yes, absolutely, it's very risky, but not playing = defeat. The atomic bomb put us on top as the one viable superpower last century. If two or more powers master military AI, then negotiations occur.

If one power alone does, war. That power will use what it has and quickly before there’s a chance another achieves parity.

So ... like it or not, a cautious approach is the fastest guarantee of mass deaths.


4 posted on 02/27/2026 7:52:54 PM PST by No.6

To: SeekAndFind
We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period

I'm having trouble squaring those two side-by-side statements.

If it were truly a "supply chain risk" shouldn't all use immediately be stopped?

I would also assume a bare AI product "with guardrails" would be considered safer than a bare product "without guardrails." Without guardrails would theoretically be the bigger "supply chain risk," as it is inherently more dangerous, would it not?

I just read that Anthropic has already announced they plan to sue over the supply chain risk designation, especially if the government keeps using the product after the designation. A worst-case scenario could be the judge not only removing the "supply chain risk" label from Anthropic but also attempting to designate its guardrail-free replacement as the actual supply chain risk, AFTER they've already switched. That's likely beyond a Federal Court's jurisdiction, but the ruling isn't far-fetched on something like this in today's environment.

5 posted on 02/27/2026 8:13:36 PM PST by Golden Eagle (Principles, not partisanship)

To: jroehl
Anthropic (Claude) tech staff is 80% Indian/Pakistani (here in the USA). AI is really a South Asian technology

What's your source on that? I just asked a different AI engine, and it said estimates are 80+% are in the US, and of that, 40+% are American.

6 posted on 02/27/2026 8:16:43 PM PST by Golden Eagle (Principles, not partisanship)

To: SeekAndFind

If Anthropic had given in, it would have been the equivalent of selling the AI and the company to the US government for $200 million. Did anyone see what happened to Musk when he got involved with Trump? The value of his company started dropping like a stone. A.I. is probably one of the most hated inventions no one ever asked for, and once people see how bad it really is, the value of any company tied to A.I. doing all the bad things will be toast. (Anthropic told the Pentagon that its A.I. wasn't prepared for the tasks the military wants, and guess who pays when the A.I. goes bonkers and blows up a bunch of civilians because the military wanted to do things with Claude it just can't do, and did them anyway.)


7 posted on 02/27/2026 8:24:25 PM PST by rottweiller_inc (Lupus urbem intravit. Fulminis ictu vultures super turrem exanimat.)

To: No.6

AI is often and rightfully compared to the development and proliferation of nuclear weapons. In your scenario, just because China is engaging in dangerous development practices doesn't mean the US should automatically and immediately do the same in response.

The only exception to that, in my mind, would be if China has already far, far exceeded us in AI and proven that AI without guardrails may be safe, at least at these early stages of its development. Some people do believe all that. I don't; I think this is purely a power play by those seeking more power, in the form of AI. But neither reading of what's going on is good.


8 posted on 02/27/2026 8:24:46 PM PST by Golden Eagle (Principles, not partisanship)

To: Golden Eagle; jroehl

Well, if jroehl did use AI, then I guess they are wrong. I just hope the accurate AI system has control of the weapons before it hits the "fire" button.

I don't share President Trump's views on this one, especially when it comes to massive eavesdropping and targeting. We already know the Democrats want many conservatives gone; why give them even more tools for when they are back in office?


9 posted on 02/27/2026 8:31:45 PM PST by 21twelve (Ever Vigilant - Never Fearful)

To: SeekAndFind

Dept. of Grok


10 posted on 02/27/2026 8:35:35 PM PST by bigbob (We are all Charlie Kirk now)

To: SeekAndFind

Hail to our great Chief, Donald J Trump.


11 posted on 02/27/2026 8:48:26 PM PST by gildafarrell (To Strive, To Seek, To Find and Not To Yield!)

To: 21twelve
The issue is setting the precedent of allowing a private company to dictate how the U.S. government conducts operations. I would hope it's obvious why we can't allow defense contractors to dictate how military assets are used. The issue is not how far the safeguards on AI go; the issue is who gets to decide that. The U.S. government has to be the one to decide because, as mentioned, it functions within a framework of checks and balances that ultimately traces to the elected representatives of the American people. We cannot have a private company hold veto power over the use of an asset. What if China infiltrates that company (they probably already have) and starts dictating U.S. use of this asset? Obviously we can't allow that.

An analogy: what if Boeing had veto power over the use of Minuteman missiles in the event of a nuclear strike? Boeing builds them, but they don't get to decide whether they're allowed to be used; that would be ridiculous. That responsibility rests with the President, the Joint Chiefs of Staff, and Congress. They're accountable to the people; Boeing is accountable to shareholders. Likewise, we cannot allow Anthropic to dictate the terms of use of a military asset. They don't get to decide when, where, or how it's used. That power and responsibility rests solely with the elected U.S. government.

12 posted on 02/27/2026 8:54:58 PM PST by GaryCrow

To: SeekAndFind

Anthropic's restrictions would prevent use in American fully autonomous hunter-killer drones released into an enemy's territory to seek and destroy designated targets. And THAT will be a large part of near-future battle doctrine.


13 posted on 02/27/2026 8:55:35 PM PST by House Atreides (I’m now ULTRA-MAGA-PRO-MA)

To: Golden Eagle

This is just one. I have the other 10:

https://www.uscis.gov/sites/default/files/document/data/quarterly_all_forms_fy2025_q3.xlsx

“40+% are American” = That is one of the biggest lies of the last 20 years.


14 posted on 02/27/2026 9:11:25 PM PST by jroehl (And how we burned in the camps later - Aleksandr Solzhenitsyn - The Gulag Archipelago)

To: House Atreides
Anthropic's restrictions would prevent use in American fully autonomous hunter-killer drones

"Autonomous" means there is no human interaction with, or control over, what the weapon decides to destroy, including accidents involving friendlies. If that is what the government has decided it wants, then it can and will get it, from multiple sources.

The problem I have is that, supposedly, the contract they have with Anthropic prohibited it, and that contract should be honored so long as we aren't at war. And, if true, after breaking the contract, they have hit this AI company with a blackball designation almost exclusively used for dangerous foreign products, while still continuing to use it, undercutting the claim that it's risky.

The interesting societal side effect may be that people actually begin to understand terms like "guardrails" and the potential dangers of AI without them, and the AI guardrail industry, which is severely underfunded and underdeveloped right now, might begin to take root, helping prevent a potential SKYNET situation down the road. OpenAI publicly said today that it is siding with Anthropic on the use of guardrails; both OpenAI and Google Gemini may use Anthropic development tools internally for their own AI products.

15 posted on 02/27/2026 9:11:28 PM PST by Golden Eagle (Principles, not partisanship)

To: jroehl

Don’t know what that is but I can’t open it on this device.


16 posted on 02/27/2026 9:12:24 PM PST by Golden Eagle (Principles, not partisanship)

To: All

Only peripherally related... But....

Someone (not me) posted “Colossus; The Forbin Project” over at BitChute today:
https://www.bitchute.com/video/Cl7ZlHjFngkx


17 posted on 02/27/2026 10:09:18 PM PST by LegendHasIt

To: LegendHasIt

Yes, The Forbin Project shows how stupid a government can be, but the US DoW is not that stupid (yet). Our weapon systems have humans in the loop when the weapon system is powerful enough to require this. And we also have periodic reviews by safety committees charged with making certain the guardrails are appropriate for the system.

But I agree that the final decision is up to the people we elect to manage these systems, not the contractors, programmers, or engineers who build them. I have been there.


18 posted on 02/27/2026 10:29:28 PM PST by KC_for_Freedom (retired aerospace engineer and CSP who also taught)

To: SeekAndFind

Why with Claude? Among major AI platforms, Claude is the one that most explicitly states it wants to filter misinformation and disinformation. In other words, censors.


19 posted on 02/28/2026 1:35:14 AM PST by paudio (Charlie Kirk is this era's MLK)

To: SeekAndFind

Who might benefit from this decision? Palantir? ChatGPT says it's the major AI players like OpenAI, Google, and Microsoft.


20 posted on 02/28/2026 3:34:52 AM PST by doggieboy



Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.

