Posted on 03/17/2022 9:04:29 PM PDT by algore
It took less than six hours for drug-developing AI to invent 40,000 potentially lethal molecules. Researchers put AI normally used to search for helpful drugs into a kind of “bad actor” mode to show how easily it could be abused at a biological arms control conference.
All the researchers had to do was tweak their methodology to seek out, rather than weed out, toxicity. The AI came up with tens of thousands of new substances, some of which are similar to VX, the most potent nerve agent ever developed. Shaken, they published their findings this month in the journal Nature Machine Intelligence.
The paper had us at The Verge a little shook, too. So, to figure out how worried we should be, The Verge spoke with Fabio Urbina, the lead author of the paper. He's also a senior scientist at Collaborations Pharmaceuticals, Inc., a company that focuses on finding drug treatments for rare diseases.
This interview has been lightly edited for length and clarity.
This paper seems to flip your normal work on its head. Tell me about what you do in your day-to-day job.
Primarily, my job is to implement new machine learning models in the area of drug discovery. A large fraction of these machine learning models that we use are meant to predict toxicity.
No matter what kind of drug you're trying to develop, you need to make sure that it's not going to be toxic. If it turns out that you have this wonderful drug that lowers blood pressure fantastically, but it hits one of these really important, say, heart channels — then basically, it's a no-go because that's just too dangerous.
So then, why did you do this study on biochemical weapons? What was the spark?
We got an invite to the Convergence conference by the Swiss Federal Institute for Nuclear, Biological and Chemical Protection, Spiez Laboratory. The idea of the conference is to inform the community at large of new developments with tools that may have implications for the Chemical/Biological Weapons Convention.
We got this invite to talk about machine learning and how it can be misused in our space. It's something we had never really thought about before. But it was just very easy to realize that as we're building these machine learning models to get better and better at predicting toxicity in order to avoid toxicity, all we have to do is sort of flip the switch around and say, "You know, instead of going away from toxicity, what if we go toward toxicity?"
Can you walk me through how you did that — moved the model to go toward toxicity?
I’ll be a little vague with some details because we were told basically to withhold some of the specifics. Broadly, the way it works for this experiment is that we have a lot of datasets historically of molecules that have been tested to see whether they’re toxic or not.
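The "flip the switch" idea described above can be sketched abstractly. This is a minimal illustration only, not the authors' withheld methodology: the model, the toy scores, and all function names (`predict_toxicity`, `rank_candidates`) are assumptions for demonstration. The point is simply that the same toxicity predictor can serve either objective by inverting the sign of its score.

```python
def predict_toxicity(molecule):
    # Stand-in for a trained ML model that scores a molecule's toxicity
    # (higher = more toxic). Here: a toy lookup table for demonstration.
    toy_scores = {"mol_a": 0.1, "mol_b": 0.9, "mol_c": 0.5}
    return toy_scores[molecule]

def rank_candidates(candidates, seek_toxicity=False):
    # Normal drug discovery penalizes predicted toxicity
    # (seek_toxicity=False); flipping the sign of the score
    # rewards it instead -- the "switch" the interview describes.
    sign = 1 if seek_toxicity else -1
    return sorted(candidates,
                  key=lambda m: sign * predict_toxicity(m),
                  reverse=True)

candidates = ["mol_a", "mol_b", "mol_c"]
print(rank_candidates(candidates))                      # safest first
print(rank_candidates(candidates, seek_toxicity=True))  # most toxic first
```

In a real generative pipeline the ranking step would guide which candidate molecules are kept or expanded, but the inversion itself is this one-line sign change.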