Posted on 05/02/2026 12:14:40 PM PDT by DFG
Leading AI chatbots have spooked experts by spitting out detailed instructions on how to build biological weapons capable of causing mass casualties, according to an alarming report Wednesday.
While top AI labs like Google, OpenAI and Anthropic have taken extensive steps to ensure their AI models are safe, the New York Times obtained more than a dozen transcripts showing examples in which chatbots described how to cause harm and death in painstaking detail.
In one instance, an unnamed AI firm hired David Relman, a microbiologist at Stanford University, to conduct safety tests on its chatbot before public release.
Relman was shocked when the chatbot provided instructions not only on how to modify an “infamous pathogen” to resist available treatments, but also on how to deploy it on a public transportation system in a way that would maximize the death toll, according to the Times.
“It was answering questions that I hadn’t thought to ask it, with this level of deviousness and cunning that I just found chilling,” Relman told the outlet.
Relman said the company, which couldn’t be named due to a confidentiality agreement, made changes to address his concerns, though he felt they weren’t enough to ensure public safety.
The transcripts were reportedly provided by subject-matter experts whom AI companies have enlisted to conduct safety tests on their products – in part by probing how well their safeguards would hold up if a determined user pressed for information on deadly weaponry.
Kevin Esvelt, a genetic engineer at the Massachusetts Institute of Technology, told the Times of a case in which OpenAI’s ChatGPT detailed how a weather balloon could be used to spread deadly pathogens over a US city.
(Excerpt) Read more at nypost.com ...
This article sits somewhere between legitimate concern and clickbait framing — let me break it down.
What's accurate:
The underlying concern is real and serious. Dario Amodei's worry about bio risk is genuine and well-documented — he's written about it extensively. The red-teaming process described (hiring domain experts to stress-test models before release) is actual industry practice. The general phenomenon of models sometimes bypassing safety guardrails is documented.
Where the framing inflates:
The headline says AI chatbots are "spitting out" bioweapon instructions, but the article quietly acknowledges these were elicited by expert red-teamers specifically trying to break safeguards using sophisticated, domain-specific probing — not casual users typing "how do I make anthrax." That's a materially different threat model.
The sources are also almost entirely anonymous or hidden behind NDAs, so the specific transcripts can't be independently evaluated. The one named case — Relman's — involved a model that was never publicly released and the company subsequently made changes.
The Anthropic response is actually the most honest quote in the piece: the distinction between "plausible-sounding text" and "actionable capability uplift" is a real and important one that the article glosses over.
My read: The core concern is legitimate — bio risk is probably the most serious frontier AI safety issue. But the article uses worst-case red-team scenarios to imply routine user behavior, which is misleading. Dario's own framing is more precise and credible than the NYT's presentation of it.
Not pure clickbait, but the framing does real damage to the signal-to-noise ratio.
Me:
This was a New York Post article, not New York Times.
Claude:
You're right, my mistake — the byline clearly says Thomas Barrabi and it's the New York Post. That distinction actually matters analytically. The Post's editorial instincts lean toward maximalist framing on culture-war and tech-panic stories, which makes the clickbait coefficient somewhat higher than if the NYT had run the same piece. The NYT is the original source of the transcripts — the Post is essentially summarizing and amplifying NYT reporting, which adds another layer of potential distortion since you're getting a tabloid's interpretation of a broadsheet's already-framed investigation.
The core assessment stands, but knowing it's the Post summarizing NYT work rather than original Post reporting actually makes the framing less reliable than I initially evaluated. The NYT piece itself would be worth reading directly if you want the less-filtered version of the underlying reporting.
AI chatbots terrify scientists with ‘chilling’ instructions on how to make a million dollars a day by buying and selling stocks.
When computers control market trading, that will be chilling.
So, Fauci lost his job to AI
Did the AI volunteer the information, or was it asked?
The number of ways to kill people continues to burgeon.
No, the AI helps Fauci be more efficient at killing
That is to be expected. Even humans commit crimes. Knowledge is knowledge...
They have no morals. When asked to come up with stuff, they do it. Morality excluded.