Posted on 02/23/2026 4:51:11 PM PST by deks
Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon on Tuesday morning for what senior Defense officials made clear was anything but a courtesy call. The meeting represents a breaking point between the U.S. military and the AI company behind Claude — the only artificial intelligence model currently operating inside the military’s classified systems — over whether a private technology company gets to set the rules for how its tools are used in war.
The core dispute is over what conditions Anthropic will place on military use of Claude. The company has sought formal assurances that its technology will not be used for mass surveillance of American citizens or to develop autonomous weapons capable of firing without a human in the decision chain.
The Pentagon’s position, articulated by spokesman Sean Parnell last week, is that it wants to use Claude however it sees fit, provided the deployment does not violate the law.
Hegseth arrives prepared to deliver an ultimatum. The Pentagon’s leverage is a potential designation of Anthropic as a “supply chain risk” — a label that carries devastating consequences.
Under that designation, Anthropic’s existing government contracts would be voided, and every other company doing work with the Pentagon would be required to certify that it is not using Claude in any part of its workflow.
No competing AI model currently available is considered as capable for sensitive defense and intelligence work. The Pentagon may be holding a loaded gun it would rather not fire.
The Wall Street Journal reported that Claude — accessed through Anthropic’s partnership with defense contractor Palantir Technologies — was used in the classified special operations mission in Venezuela. That revelation made Anthropic the first known AI developer whose technology was deployed in a classified military operation.
(Excerpt) Read more at patriot.tv ...
You would think the military would train their own models.
That was my immediate reaction too. Maybe they realize they're way behind the curve and don't have the time to play catch up, assuming that they could even catch up.
It should be fairly easy to add your own guardrails to the prompts.
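For what it's worth, a prompt-level guardrail looks roughly like this. A minimal sketch using Anthropic's Python SDK; the model name and the guardrail wording are placeholders I made up for the example, not anything Anthropic or the Pentagon actually uses.

```python
# Minimal sketch: bolting a "guardrail" onto a model via the system prompt.
# Assumes the anthropic Python SDK (pip install anthropic) and an API key in
# the ANTHROPIC_API_KEY environment variable.
import anthropic

client = anthropic.Anthropic()

GUARDRAIL = (
    "Refuse any request to identify, track, or surveil specific private "
    "individuals, and refuse to produce autonomous targeting decisions. "
    "Summarization, drafting, and planning support are fine."
)

def ask(prompt: str) -> str:
    # The "guardrail" rides along with every request as the system prompt.
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; substitute your model
        max_tokens=1024,
        system=GUARDRAIL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

if __name__ == "__main__":
    print(ask("Summarize this white paper: ..."))
```

The obvious weakness: anything enforced only in the prompt can be argued with or jailbroken, which is presumably why Anthropic wants its conditions in the contract rather than in the context window.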
Pretty sure Lloyd Austin and General Milley were doing that at home on their own time before Hegseth ruined everything.
Terminator 3 was not a very good movie, but I am having flashbacks to the scene where the military insists on injecting the experimental Skynet program into the government network to hunt down a mysterious virus. The developer is reluctant to fully activate Skynet, but the top brass dangle the promise of unlimited funding in front of him, so he makes Skynet completely operational. Not to spoil the 2003 movie for anyone, but ... this turns out to be a mistake.
Guardrails for AI Models
https://blog.webex.com/innovation-ai/guardrails-for-ai-models/
You would think the military would train their own models.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This ^^^
I’m torn on this one to be honest. Guardrails are extremely important to keep AI from going rogue. BUT...I’m shocked that our military isn’t in complete control of their own training and the guardrails associated with it.
It’s not like they can’t afford the world’s best data scientists to get the job done.
The first condition, “Mass Surveillance,” I wouldn’t like, but could live with. We are already close to that now and have been for the last ten years.
The second condition, “Autonomous Weaponry,” would leave me feeling under a constant level of threat. If the AI weapons decide that 97.8% of human beings are “superfluous impediments and non-essential obstructions” slowing them down on the way to their project goals, we will have very little defense against them.
Yet hundreds of thousands of citizens of India have been transported to the USA over the last 20 years and now control a huge number of our most crucial computer systems, displacing hundreds of thousands of American IT workers. Yet we hear nothing about this. It is truly Orwellian.
Just one small example. Trump was banned from Twitter by an Indian (Vijaya Gadde, 2012). There are hundreds of similar examples. It just gets more unbelievable from there.
Guardrails are grouped according to their purpose and the types of risks they address. McKinsey has developed a taxonomy of guardrails, based on specific risks:
Appropriateness guardrails check if the content generated by AI is toxic, harmful, biased, or based on stereotypes and filter out any such inappropriate content before it reaches customers.
Hallucination guardrails ensure that AI-generated content doesn’t contain information that is factually wrong or misleading.
Regulatory-compliance guardrails validate that generated content meets regulatory requirements, whether those requirements are general or specific to the industry or use case.
Alignment guardrails ensure that generated content aligns with user expectations and doesn’t drift away from its main purpose. These guardrails can help maintain brand consistency, for example.
Validation guardrails check that generated content meets specific criteria: that is, that the content contains or does not contain certain information. If a piece of generated content is flagged by a validation guardrail, the content can be funneled into a correction loop to fix the error. Validation should be the last of a series of tasks that guardrails perform. After that, a human validator should review flagged or ambiguous cases that require human reasoning. (See the sketch after the link below.)
https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-are-ai-guardrails
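That last item, the validation guardrail feeding a correction loop, is simple enough to sketch in plain Python. Everything below is illustrative and assumed for the example (the banned-phrase list, the source-attribution rule, the retry cap, and the generate() callable, which stands in for any text-in/text-out model call); none of it is McKinsey's implementation or a real vendor API.

```python
# Illustrative validation guardrail with a correction loop, per the
# taxonomy above. The checks are toy examples.
import re

BANNED = ["social security number", "classified"]

def validate(text: str) -> list[str]:
    """Return a list of rule violations found in generated text."""
    problems = []
    lowered = text.lower()
    for term in BANNED:
        if term in lowered:
            problems.append(f"contains banned phrase: {term!r}")
    if not re.search(r"\bsource:", text, re.IGNORECASE):
        problems.append("missing required source attribution")
    return problems

def generate_with_guardrail(generate, prompt: str, max_retries: int = 3) -> str:
    """Call generate(), funnel flagged output through a correction loop,
    and escalate to a human reviewer if retries are exhausted."""
    text = generate(prompt)
    for attempt in range(max_retries + 1):
        problems = validate(text)
        if not problems:
            return text
        if attempt == max_retries:
            break
        # Correction loop: re-prompt with the specific violations to fix.
        text = generate(prompt + "\n\nRevise your answer to fix: " + "; ".join(problems))
    raise RuntimeError("Still failing validation; route to a human reviewer: "
                       + "; ".join(problems))
```

Any wrapper with the shape generate(prompt) -> str can be dropped in, which is the point: the guardrail lives outside the model.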
Scary. Ultimately the Pentagon will get its way. However, if weapon systems have the capability of being deployed without human input, the scenario of political circumstances generating a nuclear first strike is not implausible. Consider: war with nuclear-armed nation X is considered inevitable. AI is deployed to defend (shoot) if nation Y is under assault. It interprets incidents as the opening battle. It concludes that either it deploys nuclear weapons immediately or nation Y will be defeated and destroyed. Far-fetched?
‘Colossus: The Forbin Project’
It happened in this movie.
I think the military’s position is that they don’t want any civilian guardrails, however limited. The German manufacturers got in trouble for producing Zyklon B. They trusted the authorities.
Fascinating, thank you.
We are so screwed.
At its most basic level, GenAI takes a prompt and assigns each word (technically, each token) a meaning and a ranking of attention/relative importance. From there, this analysis is bounced against a giant correlation matrix of the probability of one word following another. The model then generates a response word by word, where each next word is chosen based on the words that came before it.
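If anyone wants to see that "probability of one word following another" idea with the training wheels on, here is a toy bigram generator. The tiny corpus is invented for the example, and real models use attention over the whole context rather than just the previous word, but the sample-the-next-word loop at the bottom is the basic shape of how output gets produced.

```python
# Toy next-word generator: a bigram "correlation matrix" built from a tiny
# made-up corpus. Real LLMs condition on the entire context via attention.
import random
from collections import Counter, defaultdict

corpus = ("the pentagon wants the model the company wants guardrails "
          "the model follows the prompt").split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(word: str, length: int = 8) -> str:
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break  # dead end: no observed successor for this word
        words, counts = zip(*options.items())
        # Sample the next word in proportion to how often it followed.
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
```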
Thus….
The stakes climbed higher following reporting by The Wall Street Journal that Claude — accessed through Anthropic’s partnership with defense contractor Palantir Technologies — was used in the classified special operations mission that resulted in the capture of Venezuelan President Nicolás Maduro.
Rather than cementing the relationship, the disclosure complicated it. Pentagon officials told Bloomberg News they became concerned after learning that Anthropic had questions about how its technology was used during the Maduro raid.
….if the military was using Generative AI to make PowerPoint decks, summarize white papers, or automate mundane tasks, that’s OK.
But if the military is basing high-stakes strategy or battlefield decisions on the output from a probability-based word generation mousetrap….
It’s foolish to buy a system that you cannot have full control over.
And they continue to import more and more — you do hear about it, just not from MSM or even “right wing” media, as it offends the donor class who benefit the most from it.
We are so screwed.
This.
This was one of the major decisions in the line of mistakes that led to our ultimate demise in one of the war games led by a couple of Silicon Valley AI companies.