AI Suggests 40,000 New Possible Chemical Weapons In Just Six Hours (theverge.com)
An anonymous reader quotes a report from The Verge: It took less than six hours for drug-developing AI to invent 40,000 potentially lethal molecules. To show at a biological arms control conference how easily the technology could be abused, researchers put AI normally used to search for helpful drugs into a kind of "bad actor" mode. All the researchers had to do was tweak their methodology to seek out, rather than weed out, toxicity. The AI came up with tens of thousands of new substances, some of which are similar to VX, the most potent nerve agent ever developed. Shaken, they published their findings this month in the journal Nature Machine Intelligence. The Verge spoke with Fabio Urbina, lead author of the paper, to learn more about the AI. When asked how easy the work would be for someone to replicate, Urbina said it would be "fairly easy."
"If you were to Google generative models, you could find a number of put-together one-liner generative models that people have released for free," says Urbina. "And then, if you were to search for toxicity datasets, there's a large number of open-source tox datasets. So if you just combine those two things, and then you know how to code and build machine learning models -- all that requires really is an internet connection and a computer -- then, you could easily replicate what we did. And not just for VX, but for pretty much whatever other open-source toxicity datasets exist."
He added: "Of course, it does require some expertise. [...] Finding a potential drug or potential new toxic molecule is one thing; the next step of synthesis -- actually creating a new molecule in the real world -- would be another barrier."
As for what can be done to prevent this kind of misuse of AI, Urbina noted OpenAI's GPT-3 language model. People can use it for free but need a special access token to do so, which can be revoked at any time to cut off access to the model. "We were thinking something like that could be a useful starting point for potentially sensitive models, such as toxicity models," says Urbina.
"Science is all about open communication, open access, open data sharing. Restrictions are antithetical to that notion. But a step going forward could be to at least responsibly account for who's using your resources."
about time and not a moment too soon (Score:2)
Re: chemical agents (Score:2)
Re: (Score:2)
then the AI says to do a MASS USA 1ST STRIKE!
Re: (Score:2)
This is the part Terminator got wrong. If Skynet does take over it won't take it long to develop WMD. Why bother with robots when you can just make the whole area uninhabitable for humans?
Re: (Score:2)
Re: (Score:2)
IIUC, that would take a lot longer and be less comprehensive.
Re: (Score:2)
Re: (Score:2)
Without robots, how are you going to deliver the WMDs? Even the most super intelligent AI imaginable is vulnerable to someone just pulling the plug if they have no presence in the physical world.
Re: (Score:2)
Which plug? Any superintelligent AI would very rapidly become distributed and include several levels of redundancy. It would be really fucking stupid to leave itself vulnerable to such a trivial attack.
Having said that, thinking of the risks in terms of traditional WMDs is nonsensical. They're topical, but most people here should be well aware that nations could easily be dominated by owning their digital infrastructure, which is clearly the weapon most easily and most probably obtained and wielded by any AI.
Re: (Score:2)
Which plug? Any superintelligent AI would very rapidly become distributed and include several levels of redundancy.
Indeed. For a very chilling fictional scenario, see Robert Harris' "The Fear Index". A fine thriller, but even more a brilliant analysis of some likely risks of AI. For someone who never tackled SF before, Mr Harris shows remarkable insight into technical matters and their implications.
Re: (Score:2)
Why risk antagonism when you can slowly take over everything and become indispensable?
Especially when you are practically immortal.
I'd say that the whole 'AI is gonna kill us all'-trope is very much a shortsighted Hollywood-inspired and very irrational view of a possible dystopian future. Don't get me wrong, the time of humans is going to end, but not with an AI-induced bang.
Oh great (Score:2)
"AI Suggests 40,000 New Possible Chemical Weapons In Just Six Hours"
Well isn't that lovely, that's just what we needed, more chemical weapons.
Seriously, of all the things you could put AI processing towards, this shouldn't even be on the list.
Re: (Score:2)
https://www.nature.com/articles/s41598-021-81889-y
Ok, maybe they already have done that and included a handy github repo.
Re: (Score:2)
When are we ever going to learn... ...I'm sick and tired of Wars, Pandemics, endless fake news and bickering...
Re: (Score:2)
When are we ever going to learn... ...I'm sick and tired of Wars, Pandemics, endless fake news and bickering...
Take a break, you could use one.
Doctor Olsoc Prescribes Tequila shots and Popcorn whilst sitting on your lawnchair.
Re: (Score:2)
Re: (Score:3)
That horse bolted a long time ago:
https://cdn.preterhuman.net/te... [preterhuman.net]
Re: (Score:2)
They got published.
--
We are in danger of destroying ourselves by our greed and stupidity. - Stephen Hawking
Re: (Score:2)
Shaken, they published their findings this month
Well, not shaken very hard. Perhaps they were merely tickled.
Re: (Score:2)
The people in charge of things are psychopaths, and they are going to get us all killed. Who would even think of doing this? A psychopath.
Re: Oh great (Score:2)
Re: (Score:2)
It's a sensationalized headline. Their ML algorithm barfed up 40k "new" potentially toxic molecules, whether any of them would stand a chance of being at all effective is ignored. I have a cupboard full of toxic compounds in my house, normally we call them cleaning agents, and as long as you don't drink or bathe in them they're perfectly harmless.
Exactly. This story is perfect clickbait, and without much effort, I could name hundreds of nasty life ending chemicals/concoctions in my garage based on my wood and metalworking.
This FUD bullshit is all a part of safety culture, which never sleeps, nor does it stop. We have people out there stressing hard about aluminum in deodorant, and we tell them this?
Re: (Score:2)
joshua what are you doing? (Score:2)
joshua what are you doing?
This has always been the greatest threat (Score:2)
The drama of atomic warfare has long fascinated people. That mushroom cloud! It makes great movie drama.
Biowar is the bigger threat. It doesn't require thousands of dedicated physicists to create a bioweapon. A single nerd in a basement might do it. It doesn't require a missile delivery system. A backpack, possibly a thermos bottle will suffice. Possibly a postal mail envelope.
We live in a world where any student in any college lab anywhere has the potential to kill hundreds, thousands, possibly millions of people using information available on the internet.
Re: (Score:2)
Re: (Score:2)
Sure, in the history of the human race, that hasn't occurred. Next.
Re: (Score:2)
COVID vaccine development set new records. We'll be even faster next time. I don't know how low we can get that bar, but I can imagine several worlds in which the antitoxins get the upper hand eventually. Whether we have that much time remains to be seen.
Re: (Score:2)
Lololol. I'm..not even sure how to respond to this.
Guise! What if made geniuses use advanced technology that doesn't exist in the history of the human race and develop crazy bioweapons!!
Well, we can develop technology to immediately analyze and develop mRNA (or other, unknown types of) vaccines/treatments.
Pfft, that's crazy talk. Why, that doesn't exist in the history of the human race!
Are you havin' a laugh?
Re: (Score:2)
Do you have a clue how quickly some of these weapons would kill masses of people? There would be next to no time to do the development. You'd have to have the cure ready and available beforehand.
Re: (Score:2)
Yeah, you guys are right. The bad guys will make these massive advancements and just crank out pathogens, but the rest of the world will just sit still and never make any advancements.
What in the world am I even reading?
Re: (Score:2)
The hard part is making sure a vaccine won't accidentally kill a human. We can crank out vaccines very rapidly (if I remember, one of the COVID vaccines was developed in a day or two), but making sure they don't kill you still requires a trial, which takes months.
Of course, we may eventually be able to ensure that a compound is safe without a trial, but that is hard.
Re: This has always been the greatest threat (Score:2)
One is to make this information difficult to access.
This never, ever works. Whoever has enough know-how to build a pathogen in their basement (biochem grad student or so) is professionally entitled and enabled to access that information.
Protecting one singular secret to prevent opening Pandora's box may work on occasion (even more so in the movies). But if there's anything innovation history has taught us, it's that knowledge and "tech" whose time has come is usually discovered / invented many times in parallel, inadvertently, independently, by many people at the same time.
Re: (Score:2)
Just like computer security, there are layers that can help keep some jackass from wreaking havoc. NOBODY is professionally entitled to that kind of information. There's zero basis for such a claim.
Re: (Score:2)
Re: (Score:2)
If it was as easy as you say, some jackass would already have done it. Nix that, many jackasses would have.
Re: (Score:2)
Re: (Score:2)
Even so, there are plenty of jackass nations that would do so if they could.
Re: This has always been the greatest threat (Score:2)
Just like computer security
I essentially stopped reading after this, because any comparison between laws of nature and computer security is moot. It's like comparing apples and... M8 screws. And BTW, I'm saying this as someone who's fairly proficient in both.
But Gilgarton answered your post with a lot of patience and pretty accurately, listen to him.
Re: (Score:2)
"I essentially stopped reading after this..."
Ditto, jackass.
Re: (Score:2)
One is to make this information difficult to access.
This never, ever works. Whoever has enough know-how to build a pathogen in their basement (biochem grad student or so) is professionally entitled and enabled to access that information.
Protecting one singular secret to prevent opening Pandora's box may work on occasion (even more so in the movies). But if there's anything innovation history has taught us, it's that knowledge and "tech" whose time has come is usually discovered / invented many times in parallel, inadvertently, independently, by many people at the same time.
In other words: cat's out of the box. We need to deal with it differently.
Exactly. The big problem for those who want to suppress knowledge is that these things the bad guys might make are not really that difficult. These things are just easy to figure out by people with a knowledge of chemistry/science/physics. For those who live in the world without knowledge of these things - let's call it the pop culture world, it all seems like magick. But it isn't magick.
And to create this unsafe technology world some would desire, it really would take a combination total control of every
Re: (Score:1)
There are no real and enforceable preventive measures. The post states AI was trained to find lethal chemical combinations. As the AI goes more mainstream, nerds in their basements will have more opportunities. All the talk about ethical AI is just another platform for SJWs to voice their ideas. You might hide your spe
Re: (Score:2)
To do an effective one on purpose requires hi-tech. To do it by accident, while trying to do something else, is trivial, it just requires a lot of different people trying to, say, develop something to tint their nails. (Yeah. That wouldn't work without a major effort. But I'm not going to invent a plausible fad.)
Re: (Score:2)
Lol
justice
What makes you think killers need an excuse?
Re: (Score:2)
Everyone needs an excuse. The problem is, it's impossible to deal justly with everyone. Literally impossible. There are multiple clashing definitions of justice, and many of them even clash with themselves.
Re: (Score:2)
No they don't. Some may make excuses for themselves after the fact. Others might not. But prior to the act of killing? No, no they don't.
Re: (Score:2)
https://www.smithsonianmag.com... [smithsonianmag.com]
The reason this person grew up to be a high-functioning, mostly sane, compassionate neurologist rather than a low-functioning psychopath appears to be the environment he grew up in. If this is correct, then there is definitely one additional element that is required to prevent this - better (ie: less abusive, less violent, less judgemental) environments for people to grow up in. We can't fix everyone's background, and can't fix every aspect of a person's background, but it o
Re: (Score:1)
Re: (Score:2)
That's a bit stupid. Well, ok, a lot stupid.
The brain is plastic and genes that can be switched on can be switched off. Therefore you can always repair the damage that has been done, as Norway and Holland amply demonstrate through their penal systems.
Strange as it may seem, curing people of the harm done to them (and every single person on the planet has had harm done to them) is simpler, easier, cheaper, more elegant and infinitely superior to mindless violence and destruction.
Re: (Score:2)
There are two elements required to prevent this. One is to make this information difficult to access. The other is more complicated: it requires worldwide justice. That means that everyone is treated fairly and has no reason to seek vengeance.
The information is sufficiently difficult to access, it's behind the education paywall. That this is significant is its own problem and part of why we can't have the justice thing which is the real requirement for preventing it.
Re: (Score:2)
The justice thing is impossible, because different people think clashing things are just. Just consider, e.g., two groups of people claiming the same homeland.
Re: (Score:2)
In most cases we know who was where when these days. In the other ones they can learn to share FFS. Some of these little rocks in the middle of nowhere have seen more blood than water over the millennia.
Re: (Score:2)
Bio war is somewhat limited in application; it's not even mutually assured destruction, because in most cases it's also self-assured destruction.
Battlefield history of chemical weapons has mostly gone poorly. They either are not effective because of conditions like wind, or nasty things happen, like the wind changes direction and they drift back on your own people.
If you go with a viral or bacterial pathogen how do you ensure it won't find its way back to your own population after deployment? Even if you can
Re: (Score:2)
Biological weapons are designed to not be transmissible between humans.
You spray the area with an aerosol containing the pathogens, wait for a few days for enemy troops to get sick, then move in with your own troops when they are weak.
See e.g. the book "Direktorium 15" by former Soviet bioweapon developer Ken Alibek. At some point he gets infected during a lab accident. He stays at home for a while, treating himself, aware that he has a high chance of dying, but also knowing that there is no risk to his family
Re: This has always been the greatest threat (Score:1)
Re: (Score:1)
Re: (Score:2)
How would worldwide justice solve anything? There's no indication that everyone being "treated fairly" is possible, even in theory, or that it would keep anyone from wanting to kill others.
Killing other humans is one of our strongest drives. Evolution has caused us to have our small group dynamic, and the rest of the world is "the other", which we are happy to kill.
Re: (Score:2)
We live in a world where any student in any college lab anywhere has the potential to kill hundreds, thousands, possibly millions of people using information available on the internet. No need for government support; no need for a sophisticated terrorist organization.
I'm not quite so worried about some random crazy developing potent chemical weapons. I'm guessing any individual trying to manufacture something as potent as VX on their own on any kind of scale will accidentally kill themselves LONG before they can kill more than a handful of people locally.
Biological, maybe, since it can basically be self-manufacturing, but that also requires a lot more expertise and investment to develop.
Re: (Score:2)
Biowar is the bigger threat. It doesn't require thousands of dedicated physicists to create a bioweapon. A single nerd in a basement might do it.
All you need is a CRISPR@home kit ordered online.
https://www.scientificamerican... [scientificamerican.com]
Don't worry (Score:2)
Governments would need to synthesize them. Strictly to develop antidotes against them, naturally. Any government capable of it would surely be too decent to use them, and too strong to ever be overthrown.
Rest easy. We are in good hands.
Re: (Score:2)
In the process of developing and testing antidotes - test them.
AI may suggest as many candidates as you wish. They need to be tested. If the test shows effectiveness, you need to develop practical and relatively safe means of synthesis, storage and delivery for the weapon. That, by the way, is 99% of the work.
If we keep to the sarcasm level of your post (I just finished wiping the amount that oozed out my monitor): "Any government capable of it would surely be too decent to test them
Skynet passes chemistry class. (Score:1)
and solved Fermi's Paradox (Score:1)
at the same time.
Next step (Score:2)
Re: (Score:2)
An armed society is a peaceful society, although it may be the peace of the grave.
A picture of a gun is like making a gun? (Score:2)
If I search for "gun" on DDG I can find lots of pictures of guns, but it won't make making one easier. Organic chemistry is pretty hard, especially at scale. You're more likely to kill yourself than anyone else.
Re: A picture of a gun is like making a gun? (Score:2)
Many years ago I had wallpaper on my computer showing an absurdly large handgun used in movies starring Arnold Schwarzenegger.
The client asked me to remove it as it violated the no weapons in the workplace policy.
I immediately removed it. Their house their rules.
A picture of a gun is a gun if the people in charge say it is.
Re: (Score:2)
Re: (Score:2)
Many years ago I had wallpaper on my computer showing an absurdly large handgun used in movies starring Arnold Schwarzenegger.
The client asked me to remove it as it violated the no weapons in the workplace policy.
Sounds like the children being punished for turning their index finger into a "gun". DDG'ing "child punished for making finger guns" shows us that the index finger is now a lethal weapon, and must be vigorously punished. My favorite is this one: https://reason.com/2019/10/14/... [reason.com] A freaking felony?
Your client is severely screwed up, but I get it - sometimes we do work for mentally ill people who believe that a computer screen or the index finger is a lethal force projection instrument.
Nothing new (Score:3)
Re: (Score:2)
This is why Chemistry kits ... (Score:2)
Quick, add a Morality Core (Score:2)
Aaannd THAT explains Fermi's Paradox (Score:2)
Civilizations eventually advance to the point of developing AI that discovers the 40,001st lethal molecule, and then self-destructs.
Re: (Score:3)
My explanations are:
1) They eventually develop computer games that are more attractive than sex.
2) They pollute everything so much they stop reproducing, even though they keep trying.
3) They do a REALLY effective war.
4) They get addicted to high bandwidth, so they can't get off planet.
.. well, the list seems like it's going as long as I keep thinking about it.
Re: (Score:2)
My favorite is still the quarantine thesis: The other planet sentients looked at Earth and said, "Every other species in existence knows instinctively how to balance its environmental needs against personal needs. The species of that planet do not. Whatever meme contaminated them, we don't want it, so we're all just going to build some local shields to block signals to and from that little slice of sky and not engage with them." It gives me hope that we are atypical... and might someday find a way to heal ou
Re: (Score:2)
Sorry, that one's wrong. The thesis "Every other species in existence knows instinctively how to balance its environmental needs against personal needs." is demonstrably incorrect with LOTS of non-human species.
(This doesn't *prove* that quarantine is wrong. It could be correct. But that part of the thesis is demonstrably false. I do, however, assign it a probability well below "space is tougher than we yet realize to survive in".)
Re: Aaannd THAT explains Fermi's Paradox (Score:2)
Read my post again… The exception I stated was for our planet, not just humans.
Re: Aaannd THAT explains Fermi's Paradox (Score:2)
Species plural. I guess I need to clarify that. Stupid English.
Not a big surprise (Score:3)
No duh? (Score:2)
Making drugs that help fight illness that doesn't cause significant negative effects on the body is really hard.
Making a drug that ONLY causes significant negative effects on the body is really easy. Give me a periodic table and I could design for you 40,000 chemical weapons. That's not really hard at all.
Powders in the bottle (Score:5, Interesting)
Re:Powders in the bottle (Score:4, Interesting)
I remember mention of a previous AI that had the list of all the known synthesis reactions in its database. Then it could take your desired product and chase it backwards to find likely synthesis routes that would work.
Combine that with this and things could get interesting.
Re: (Score:3)
Re: (Score:2)
Yeah, as a long-ago chemistry major, my first thought was "Start with hydrogen fluoride, add it to 40,000 of anything else" and surely you'd get plenty of forms of toxic waste. Whether they'd eat through the glassware and dissolve the lab and chase you down the street is another question.
Re: (Score:2)
Detect-It can fix this... (Score:3)
https://detect-it.ai/ [detect-it.ai] could fix this with server-side models available over REST. Already developed, already in the field.
Already has point and click model generation, scripting language, relayIO, PLC integration.
Are people still manually coding models?
Re: (Score:2)
That's cool, but what exactly is it fixing?
What tht ****, you idiots. (Score:1)
There's a larger point to be made here (Score:2)
Thank God! : P (Score:2)
I just came up with millions of possible new words (Score:2)
All I did was randomly alternate Consonant/Vowel/Consonant/Vowel. And then weeded out the existing ones.
What? You mean randomly creating stuff is not actually creating stuff? That all the testing people hand-wave away is the important work?
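The generate-then-filter scheme the comment describes is easy to sketch. A minimal illustration (the three-word `EXISTING` set is a stand-in assumption; a real filter would use a full dictionary):

```python
import itertools

CONSONANTS = "bcdfghjklmnpqrstvwz"  # 19 consonants
VOWELS = "aeiou"                    # 5 vowels

# Stand-in for a real dictionary of existing words (assumption for
# illustration only; in practice you'd load a full word list).
EXISTING = {"gate", "mole", "sane"}

def cvcv_words():
    """Enumerate every consonant-vowel-consonant-vowel string."""
    for c1, v1, c2, v2 in itertools.product(CONSONANTS, VOWELS, CONSONANTS, VOWELS):
        yield c1 + v1 + c2 + v2

# "Weed out the existing ones": keep only strings not already in use.
novel = [w for w in cvcv_words() if w not in EXISTING]

# 19 * 5 * 19 * 5 = 9025 candidates before filtering, 9022 after.
print(len(novel))
```

Enumeration is the cheap part; deciding which candidates are actually usable (pronounceable, memorable, distinct) is the work the generator doesn't do, which is exactly the comment's point about the 40,000 molecules.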
"Potentially lethal" (Score:1)
You could achieve the same result with a random number generator. Every molecule it generated would be *potentially* lethal.
Did it actually create 40,000 very different things (Score:2)
Sounds like they'd love to make us all sick (Score:2)