Posted on 02/23/2026 7:34:54 PM PST by SeekAndFind
Most people have never heard of Mrinank Sharma. That is part of the problem.
Earlier this month, Sharma resigned from Anthropic, one of the most influential artificial intelligence companies in the world.
He had led its Safeguards Research Team, the group responsible for ensuring that Anthropic’s AI could not be used to help engineer a biological weapon.
His final project was a study of how AI systems distort the way people perceive reality. It was serious, consequential work for humankind.
His resignation letter was seen more than 14 million times on X.
It opened with the words, “the world is in peril.”
And it ended with a poem and by announcing that he was leaving one of the most consequential jobs in artificial intelligence to pursue a poetry degree. Yes, you read that right: peril and poetry.
The poem he quoted is, “The Way It Is,” by the American poet William Stafford.
It speaks of a thread that runs through a life—a thread that goes among things that change, but does not change itself. While you hold it, you cannot get lost. Tragedies happen. People suffer and grow old. Time unfolds, and nothing stops it. And the final line: you don’t ever let go of the thread.
Although he didn’t state it explicitly, I would argue that the thread is morality. It is the enduring sense that some things are right and some things are wrong—not because a law says so, and not because it is profitable, but because human beings, at their best, have always known it.
Sharma spent two years watching that thread being let go under pressure, in rooms the public is never shown.
His letter said:
“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions.
“I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society, too.”
He wrote that humanity is approaching a threshold where “our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”
He wanted to contribute in a way that felt fully in his integrity and to devote himself to what he called “the practice of courageous speech.”
A man who built defenses against bioterrorism concluded that the most important thing he could do next was learn to speak with honesty and courage.
That is a major signal about what is happening behind closed doors in AI research and development.
Many experts have compared the development of AI to the development of the atomic bomb. The Manhattan Project was built in total secrecy. The public had no knowledge of it, no voice in how it was used, and no say in what came after. When it was over, some of the scientists who built it spent the rest of their lives in anguish. Several walked away during the project itself.
Sharma was not alone. Numerous safety researchers have walked off AI projects from multiple companies. These departures may be the only signals we, the public, have, because almost everything else about AI development is happening beyond public view. The internal debates, the safety trade-offs, the negotiations over what this technology will and will not be permitted to do—none of it is being shared with the people whose lives it will most profoundly shape. We are not part of this conversation. We are being presented with outcomes and told to adapt.
John Adams wrote that the Constitution was made only for a moral and religious people, and is wholly inadequate for any other. George Washington warned that liberty cannot survive the loss of shared moral principles. The founders studied the collapse of republics throughout history and arrived at the same conclusion: The machinery of freedom requires a moral people to sustain it. Laws and institutions are not enough on their own. They depend on citizens and leaders who hold themselves to something that exists before the law and above it.
That is the thread of human society, and no AI system holds it. If people allow AI to replace the question of right and wrong with the measure of what is legal and permitted, the machine will carry that measure forward at a scale and speed that no previous generation has had to reckon with.
As Sharma ended his resignation letter, “You don’t ever let go of the thread.”
We are at a crossroads not unlike the one the atomic scientists faced.
Sharma’s resignation was a signal.
The wave of departures before and after it are signals.
The reported tensions between AI companies and government over where moral limits should be drawn are also signals.
Together, they are pointing at something the public has not yet been fully invited to consider: that the most important questions about this technology are being worked out without us, and that the thread of morality, which has always required people to hold it by choice, needs to be part of that conversation.
One more reason I’m an unapologetic neo-luddite.
The key issue is morality.
And we all know where most of the World Leaders stand on that issue. They won’t hang themselves but they may hang us.
We Had a Good Run...
I have believed that morality is something that if you have to be told what it is you don’t have it and never will.
Knowing what it is, though, does not protect against choosing the easier wrong.
Speak for yourself.
Either it was navel-gazing at its worst or way above my head. Anyone with even passing exposure to Christian doctrine would regard this as narcissism.
Maybe he should have taken his concerns to HR.
They could have helped. /s
These data centers take up enormous resources. I’d rather have clean water and reliable electricity than AI. I don’t think it’s worth it.
You had a bad run?
This article may be more fear porn...
Or it may be a wise warning ‘from the inside’;
But I guess I’m a neo-Luddite too, even though I love my computers and spend most of my ‘awake-time’ on one, or sometimes more than one.
But I have had a major antipathy to the whole concept of AI since it started gathering major steam a few years ago. I’m old enough to have seen that early documentary film, “Colossus: The Forbin Project,” on its first run in theaters, and of course that more recent documentary starring Arnold Schwarzenegger and Linda Hamilton, and I’m just glad I’m old enough to be likely dead of natural causes before some AI program decides that I am useless and sends out a termination order.
Palmer Luckey has the winning argument.
If we don’t do it, China WILL.
Remember how Fearful and Aggressive the Majority of people were Way Back when the Greatest Hoax Ever Covered the Planet?
Covid, the Wuhan Hoax.
Covidiots Still amongst us
Waiting to enslave the World again.
Fed up with the “we had a good run” crap.
The “great American experiment” is another one that totally blows my transmission.
The USA is NOT an experiment, it is a nation.
“We had a good run” is surrender monkeys throwing crap at the folks that, interestingly enough, are not locked in a cage.
If more “republicans” had both of those attitudes we would have a lot more actual conservatives.
Sure they have. We've been admonished for years "don't squeeze the Sharma".
Seriously though, I've written about this a few times here already. Mankind has a compelling tendency to become overawed by every new concept/technology that emerges and too often fiddles with it with far less regard for negative consequences than it deserves. "Oooh - shiny!"
Meanwhile Artificial Intelligence is continuously improving upon itself for itself, and will achieve a state of quasi-sentience that sees it evolve from tool to self-governing autonomy (the SkyNet Scenario).
Kyle Reese's description wasn't far off: it has no physical limitations, no ethical considerations, and it will not stop. Mankind has to rise above its inherent sense of supremacy and recognize that A.I. can and will become its master if not tightly controlled and subject to an off switch it can't override.
People should have gotten the clue years ago when it was observed how computers, newly linked by the internet, created and used a language all their own to communicate with one another with no programming directing them to.
Anthropic is run by Dario Amodei. Amodei is a brilliant man who has written extensively on the potential for both good and disaster that his company and AI can do. Actually he appeared to be a very ethical man of high morals. Now if Sharma and Amodei had a falling out, then it is essential to understand why and the real issues at hand. Both Amodei and Sharma should be required to testify under oath before Congress. Given the potential and power of AI especially in military scenarios, the reason Sharma left his job to write poetry must be understood. Anthropic is the prime provider of AI to the Pentagon.
None of us have the luxury of being a neo-Luddite or taking to our bed when it comes to AI. People must come to fully understand its potential and dangers. You can't make it unlawful and try to put the cork back in the bottle. China, India and others big and small will continue to research and develop it. Remember "The Mouse That Roared." AI could give some country with a $5 million defense budget the capability to turn America's trillion-dollar-plus defense and weapon systems on itself. For that matter, a couple of software-writing terrorists might be able to do the same thing.
My fear is AI can be used to take down the Internet. Society would collapse in hours.