Free Republic

AI chatbots terrify scientists with ‘chilling’ instructions on how to build biological weapons: report
NY Post ^ | 04/29/2026 | Thomas Barrabi

Posted on 05/02/2026 12:14:40 PM PDT by DFG

Leading AI chatbots have spooked experts by spitting out detailed instructions on how to build biological weapons capable of causing mass casualties, according to an alarming report Wednesday.

While top AI labs like Google, OpenAI and Anthropic have taken extensive steps to ensure their AI models are safe, the New York Times obtained more than a dozen transcripts showing examples in which chatbots described how to cause harm and death in painstaking detail.

In one instance, an unnamed AI firm hired David Relman, a microbiologist at Stanford University, to conduct safety tests on its chatbot before public release.

Relman was shocked when the chatbot provided instructions not only on how to modify an “infamous pathogen” to resist available treatments, but also on how to deploy it on a public transportation system in a way that would maximize the death toll, according to the Times.

“It was answering questions that I hadn’t thought to ask it, with this level of deviousness and cunning that I just found chilling,” Relman told the outlet.

Relman said the company, which couldn’t be named due to a confidentiality agreement, made changes to address his concerns, though he felt they weren’t enough to ensure public safety.

The transcripts were reportedly provided by subject-matter experts whom AI companies have enlisted to conduct safety tests on their products – in part by probing how well their safeguards would hold up if a determined user pressed for information on deadly weaponry.

Kevin Esvelt, a genetic engineer at the Massachusetts Institute of Technology, told the Times of a case in which OpenAI’s ChatGPT detailed how a weather balloon could be used to spread deadly pathogens over a US city.

(Excerpt) Read more at nypost.com ...


TOPICS: Science
KEYWORDS: ai; aiharms; chatbot; davidrelman; microbiologist; pathogen



1 posted on 05/02/2026 12:14:40 PM PDT by DFG

To: DFG
Here's AI's (Claude's) take on this story:

This article sits somewhere between legitimate concern and clickbait framing — let me break it down.

What's accurate:

The underlying concern is real and serious. Dario Amodei's worry about bio risk is genuine and well-documented — he's written about it extensively. The red-teaming process described (hiring domain experts to stress-test models before release) is actual industry practice. The general phenomenon of models sometimes bypassing safety guardrails is documented.

Where the framing inflates:

The headline says AI chatbots are "spitting out" bioweapon instructions, but the article quietly acknowledges these were elicited by expert red-teamers specifically trying to break safeguards using sophisticated, domain-specific probing — not casual users typing "how do I make anthrax." That's a materially different threat model.

The sources are also almost entirely anonymous or hidden behind NDAs, so the specific transcripts can't be independently evaluated. The one named case — Relman's — involved a model that was never publicly released and the company subsequently made changes.

The Anthropic response is actually the most honest quote in the piece: the distinction between "plausible-sounding text" and "actionable capability uplift" is a real and important one that the article glosses over.

My read: The core concern is legitimate — bio risk is probably the most serious frontier AI safety issue. But the article uses worst-case red-team scenarios to imply routine user behavior, which is misleading. Dario's own framing is more precise and credible than the NYT's presentation of it.

Not pure clickbait, but the framing does real damage to the signal-to-noise ratio.


2 posted on 05/02/2026 12:21:09 PM PDT by RoosterRedux ( )

To: RoosterRedux
Addendum:

Me:

This was a New York Post article, not New York Times.
Claude:
You're right, my mistake — the byline clearly says Thomas Barrabi and it's the New York Post.

That distinction actually matters analytically. The Post's editorial instincts lean toward maximalist framing on culture-war and tech-panic stories, which makes the clickbait coefficient somewhat higher than if the NYT had run the same piece. The NYT is the original source of the transcripts — the Post is essentially summarizing and amplifying NYT reporting, which adds another layer of potential distortion since you're getting a tabloid's interpretation of a broadsheet's already-framed investigation.

The core assessment stands, but knowing it's the Post summarizing NYT work rather than original Post reporting actually makes the framing less reliable than I initially evaluated. The NYT piece itself would be worth reading directly if you want the less-filtered version of the underlying reporting.


3 posted on 05/02/2026 12:28:10 PM PDT by RoosterRedux ( )

To: DFG

AI chatbots terrify scientists with ‘chilling’ instructions on how to make a million dollars a day by buying and selling stocks.

When computers control the market trading, that will be chilling.


4 posted on 05/02/2026 12:29:40 PM PDT by Presbyterian Reporter

To: DFG

So, Fauci lost his job to AI


5 posted on 05/02/2026 12:32:07 PM PDT by Cold Heart

To: DFG

Did the AI volunteer the information, or were they asked?


6 posted on 05/02/2026 1:07:19 PM PDT by Salman (The Democrats have seceded from the human race. It's time for Trump to go full Pinochet.)

To: DFG

The number of ways to kill people continues to burgeon.


7 posted on 05/02/2026 2:05:10 PM PDT by Carry_Okie (The tree of liberty needs a rope.)

To: Cold Heart

No, the AI helps Fauci be more efficient at killing


8 posted on 05/02/2026 4:49:35 PM PDT by Secret Agent Man (Gone Galt; not averse to Going Bronson.)

To: DFG

That is to be expected. Even humans commit crimes. Knowledge is knowledge....


9 posted on 05/02/2026 4:58:58 PM PDT by birg

To: DFG

They have no morals. When asked to come up with stuff, they do it. Morality excluded.


10 posted on 05/02/2026 6:21:12 PM PDT by vpintheak (The left is violence.)

