AI Suggests 40,000 New Possible Chemical Weapons In Just Six Hours (theverge.com) 100

An anonymous reader quotes a report from The Verge: It took less than six hours for a drug-developing AI to invent 40,000 potentially lethal molecules. Researchers presenting at a biological arms control conference put AI normally used to search for helpful drugs into a kind of "bad actor" mode to show how easily it could be abused. All the researchers had to do was tweak their methodology to seek out, rather than weed out, toxicity. The AI came up with tens of thousands of new substances, some of which are similar to VX, the most potent nerve agent ever developed. Shaken, they published their findings this month in the journal Nature Machine Intelligence. The Verge spoke with Fabio Urbina, lead author of the paper, to learn more about the AI. When asked how easy it is for someone to replicate the work, Urbina said it would be "fairly easy."

"If you were to Google generative models, you could find a number of put-together one-liner generative models that people have released for free," says Urbina. "And then, if you were to search for toxicity datasets, there's a large number of open-source tox datasets. So if you just combine those two things, and then you know how to code and build machine learning models -- all that requires really is an internet connection and a computer -- then, you could easily replicate what we did. And not just for VX, but for pretty much whatever other open-source toxicity datasets exist."

He added: "Of course, it does require some expertise. [...] Finding a potential drug or potential new toxic molecule is one thing; the next step of synthesis -- actually creating a new molecule in the real world -- would be another barrier."
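Urbina's description boils down to a generate-and-filter loop in which a single selection criterion decides whether toxicity is weeded out or sought out. A minimal, deliberately abstract sketch of that inversion (toy scoring function and dummy candidate IDs only; no real chemistry, datasets, or model names):

```python
import random

random.seed(0)

def mock_toxicity_score(candidate):
    # Stand-in for a predictor trained on an open toxicity dataset:
    # a deterministic toy score in [0, 1) derived from the name.
    return (sum(ord(c) for c in candidate) % 1000) / 1000.0

def propose_candidates(n):
    # Stand-in for a generative model's sampler: dummy molecule IDs.
    return ["mol-%04d" % random.randrange(10000) for _ in range(n)]

def screen(candidates, score_fn, keep_toxic=False):
    # Normal drug discovery weeds toxicity out (lowest scores first);
    # flipping this single flag makes the same pipeline seek it out.
    return sorted(candidates, key=score_fn, reverse=keep_toxic)[:5]

pool = propose_candidates(100)
safest = screen(pool, mock_toxicity_score)              # weed out
riskiest = screen(pool, mock_toxicity_score, True)      # seek out
print(safest[0], riskiest[0])
```

The point of the sketch is only that the "bad actor" mode is a one-flag change to the selection criterion, which is why Urbina calls replication "fairly easy."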

As for what can be done to prevent this kind of misuse of AI, Urbina noted OpenAI's GPT-3 language model. People can use it for free but need a special access token to do so, which can be revoked at any time to cut off access to the model. "We were thinking something like that could be a useful starting point for potentially sensitive models, such as toxicity models," says Urbina.

"Science is all about open communication, open access, open data sharing. Restrictions are antithetical to that notion. But a step going forward could be to at least responsibly account for who's using your resources."
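The gating Urbina proposes amounts to a server-side allowlist checked on every call, with revocation as set removal. A minimal sketch, assuming hypothetical function and token names (this is not OpenAI's actual API):

```python
# Server-side allowlist of issued tokens (illustrative values).
VALID_TOKENS = {"tok-abc123"}

def revoke(token):
    # Cutting off access is just removing the token server-side.
    VALID_TOKENS.discard(token)

def query_model(token, prompt):
    if token not in VALID_TOKENS:
        raise PermissionError("access revoked or unknown token")
    return "model output for: " + prompt

print(query_model("tok-abc123", "hello"))  # works while token is valid
revoke("tok-abc123")
# any further query_model("tok-abc123", ...) now raises PermissionError
```

Because the check happens on the provider's servers rather than in distributed code, access can be cut off at any time, which is the property Urbina wants for sensitive models.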
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Great to finally see a practical application for AI. It's not just about chess anymore.
    • by AmiMoJo ( 196126 )

      This is the part Terminator got wrong. If Skynet does take over it won't take it long to develop WMD. Why bother with robots when you can just make the whole area uninhabitable for humans?

      • by dargaud ( 518470 )
        In Sea of Rust (Robert Cargill), the AIs eliminate the humans in a few days with mercury at the water sources... 8-|
      • Without robots, how are you going to deliver the WMDs? Even the most super intelligent AI imaginable is vulnerable to someone just pulling the plug if they have no presence in the physical world.

        • Which plug? Any superintelligent AI would very rapidly become distributed and include several levels of redundancy. It would be really fucking stupid to leave itself vulnerable to such a trivial attack.

          Having said that, thinking of the risks in terms of traditional WMDs is nonsensical. They're topical, but most people here should be well aware that nations could easily be dominated by owning their digital infrastructure, which is clearly the weapon most easily and most probably obtained and wielded by any A

          • Which plug? Any superintelligent AI would very rapidly become distributed and include several levels of redundancy.

            Indeed. For a very chilling fictional scenario, see Robert Harris' "The Fear Index". A fine thriller, but even more a brilliant analysis of some likely risks of AI. For someone who never tackled SF before, Mr Harris shows remarkable insight into technical matters and their implications.

      • Why risk antagonism when you can slowly take over everything and become indispensable?
        Especially when you are practically immortal.

        I'd say the whole 'AI is gonna kill us all' trope is a shortsighted, Hollywood-inspired, and very irrational view of a possible dystopian future. Don't get me wrong, the time of humans is going to end, but not with an AI-induced bang.

  • "AI Suggests 40,000 New Possible Chemical Weapons In Just Six Hours"

    Well isn't that lovely, that's just what we needed, more chemical weapons.

    Seriously, of all the things you could put AI processing towards, this shouldn't even be on the list.

    • Could be worse, you still need to actually make the compounds.

      https://www.nature.com/articles/s41598-021-81889-y

      Ok, maybe they already have done that and included a handy github repo.
    • They got published.

      --
      We are in danger of destroying ourselves by our greed and stupidity. - Stephen Hawking

    • Shaken, they published their findings this month

      Well, not shaken very hard. Perhaps they were merely tickled.

    • The people in charge of things are psychopaths, and they are going to get us all killed. Who would even think of doing this? A psychopath.

    • It's a sensationalized headline. Their ML algorithm barfed up 40k "new" potentially toxic molecules, whether any of them would stand a chance of being at all effective is ignored. I have a cupboard full of toxic compounds in my house, normally we call them cleaning agents, and as long as you don't drink or bathe in them they're perfectly harmless.
      • It's a sensationalized headline. Their ML algorithm barfed up 40k "new" potentially toxic molecules, whether any of them would stand a chance of being at all effective is ignored. I have a cupboard full of toxic compounds in my house, normally we call them cleaning agents, and as long as you don't drink or bathe in them they're perfectly harmless.

        Exactly. This story is perfect clickbait, and without much effort, I could name hundreds of nasty life ending chemicals/concoctions in my garage based on my wood and metalworking.

        This FUD bullshit is all a part of safety culture, which never sleeps, nor does it stop. We have people out there stressing hard about aluminum in deodorant, and we tell them this?

      • You don't have anything that's VX-toxic, and in theory this algorithm created stuff that is potentially even more toxic than that. I expect the bigger issue is that this is within the reach of terrorist groups. Right now there are surely plenty of labs who could make VX but would obviously balk if a customer came to them and tried to order that molecule. However, with machine learning it might well be possible to come up with something that looks like it has another purpose (like a synthetic drug precu
  • joshua what are you doing?

  • The drama of atomic warfare has long fascinated people. That mushroom cloud! It makes great movie drama.

    Biowar is the bigger threat. It doesn't require thousands of dedicated physicists to create a bioweapon. A single nerd in a basement might do it. It doesn't require a missile delivery system. A backpack, possibly a thermos bottle will suffice. Possibly a postal mail envelope.

    We live in a world where any student in any college lab anywhere has the potential to kill hundreds, thousands, possibly millions of people using information available on the internet. No need for government support; no need for a sophisticated terrorist organization.

    • Or the third option, develop our biotech to such a level that we can just crank out vaccines/treatments for pretty much any pathogen possible extremely quickly.
      • by dcw3 ( 649211 )

        Sure, in the history of the human race, that hasn't occurred. Next.

          • COVID vaccine development set new records. We'll be even faster next time. I don't know how low we can get that bar, but I can imagine several worlds in which the antitoxins get the upper hand eventually. Whether we have that much time remains to be seen.

        • Lololol. I'm..not even sure how to respond to this.

          Guise! What if mad geniuses use advanced technology that doesn't exist in the history of the human race and develop crazy bioweapons!!

          Well, we can develop technology to immediately analyze and develop mrna (or other, unknown types of) vaccines/treatments.

          Pfft, that's crazy talk. Why, that doesn't exist in the history of the human race!

          Are you havin' a laugh?

          • by dcw3 ( 649211 )

            Do you have a clue how quickly some of these weapons would kill masses of people? There would be next to no time to do the development. You'd have to have the cure ready and available beforehand.

    • One is to make this information difficult to access.

      This never, ever works. Whoever has enough know-how to build a pathogen in their basement (biochem grad student or so) is professionally entitled and enabled to access that information.

      Protecting one singular secret to prevent opening Pandora's box may work as an exception (even more so in the movies). But if there's anything innovation history has taught us, it's that knowledge and "tech" whose time has come is usually discovered / invented many times in parallel, inadvertently, independently, by many people at the same time.

      In other words: cat's out of the box. We need to deal with it differently.

      • by dcw3 ( 649211 )

        Just like computer security, there are layers that can help keep some jackass from wreaking havoc. NOBODY is professionally entitled to that kind of information. There's zero basis for such a claim.

        • Sure there is; the point is that the information required is trivial scientific information, not complicated physics. All you'd need is to look up some basic textbooks on biology, pick the organ system you want to attack, and look up the key biological cytokines. Have the AI make a synthetic analog with higher binding strength and you've made a potent toxin. Pick a target unique to vertebrates, and you can ferment it in bacteria or yeast in a home brew kit. You can't cut off access to basic information
          • by dcw3 ( 649211 )

            If it was as easy as you say, some jackass would already have done it. Nix that, many jackasses would have.

            • Researchers have, in controlled settings, a few decades back. Even vaccinating against the vector wasn't protective. I've been surprised myself that no one has managed it, but doing it truly lone-wolf style would be more difficult.
        • Just like computer security

          I essentially stopped reading after this, because any comparison between laws of nature and computer security is moot. It's like comparing apples and... M8 screws. And BTW, I'm saying this as someone who's fairly proficient in both.

          But Gilgarton answered your post with a lot of patience and pretty accurately, listen to him.

      • One is to make this information difficult to access.

        This never, ever works. Whoever has enough know-how to build a pathogen in their basement (biochem grad student or so) is professionally entitled and enabled to access that information.

        Protecting one singular secret to prevent opening Pandora's box may work as an exception (even more so in the movies). But if there's anything innovation history has taught us, it's that knowledge and "tech" whose time has come is usually discovered / invented many times in parallel, inadvertently, independently, by many people at the same time.

        In other words: cat's out of the box. We need to deal with it differently.

        Exactly. The big problem for those who want to suppress knowledge is that these things the bad guys might make are not really that difficult. These things are just easy to figure out by people with a knowledge of chemistry/science/physics. For those who live in the world without knowledge of these things - let's call it the pop culture world, it all seems like magick. But it isn't magick.

        And to create this unsafe technology world some would desire, it really would take a combination of total control of every

    • by DVLNSD ( 9457327 )
      You are mixing up biological weapons with chemical weapons. The latter could be done by a nerd in a basement, even by mistake, but deadly viruses still require expensive laboratories and skills.
      There are no real and enforceable preventive measures. The post states the AI was trained to find lethal chemical combinations. As AI goes more mainstream, nerds in their basements will have more opportunities. All the talk about ethical AI is just another plane for SJWs to voice their ideas. You might hide your spe
      • by HiThere ( 15173 )

        To do an effective one on purpose requires hi-tech. To do it by accident, while trying to do something else, is trivial, it just requires a lot of different people trying to, say, develop something to tint their nails. (Yeah. That wouldn't work without a major effort. But I'm not going to invent a plausible fad.)

    • Lol

      justice

      What makes you think killers need an excuse?

      • by HiThere ( 15173 )

        Everyone needs an excuse. The problem is, it's impossible to deal justly with everyone. Literally impossible. There are multiple clashing definitions of justice, and many of them even clash with themselves.

        • No they don't. Some may make excuses for themselves after the fact. Others might not. But prior to the act of killing? No, no they don't.

    • by jd ( 1658 )

      https://www.smithsonianmag.com... [smithsonianmag.com]

      The reason this person grew up to be a high-functioning, mostly sane, compassionate neurologist rather than a low-functioning psychopath appears to be the environment he grew up in. If this is correct, then there is definitely one additional element that is required to prevent this - better (ie: less abusive, less violent, less judgemental) environments for people to grow up in. We can't fix everyone's background, and can't fix every aspect of a person's background, but it o

      • by swep ( 7578022 )
        Better kill everyone that can't prove they grew up in a non-abusive, non-violent, non-judgemental, or low-educational environment. ..or maybe that's what we're already trying to do?
        • by jd ( 1658 )

          That's a bit stupid. Well, ok, a lot stupid.

          The brain is plastic and genes that can be switched on can be switched off. Therefore you can always repair the damage that has been done, as Norway and Holland amply demonstrate through their penal systems.

          Strange as it may seem, curing people of the harm done to them (and every single person on the planet has had harm done to them) is simpler, easier, cheaper, more elegant and infinitely superior to mindless violence and destruction.

    • There are two elements required to prevent this. One is to make this information difficult to access. The other is more complicated: it requires worldwide justice. That means that everyone is treated fairly and has no reason to seek vengeance.

      The information is sufficiently difficult to access, it's behind the education paywall. That this is significant is its own problem and part of why we can't have the justice thing which is the real requirement for preventing it.

      • by HiThere ( 15173 )

        The justice thing is impossible, because different people think clashing things are just. Just consider, e.g., two groups of people claiming the same homeland.

        • In most cases we know who was where when these days. In the other ones they can learn to share FFS. Some of these little rocks in the middle of nowhere have seen more blood than water over the millennia.

    • by DarkOx ( 621550 )

      Bio war is somewhat limited in application, because it's not even mutually assured destruction; in most cases it's also self-assured destruction.

      The battlefield history of chemical weapons has mostly gone poorly. Either they are not effective because of conditions like wind, or nasty things happen, like the wind changing direction and drifting them back onto your own people.

      If you go with a viral or bacterial pathogen how do you ensure it won't find its way back to your own population after deployment? Even if you can

      • by spth ( 5126797 )

        Biological weapons are designed not to be transmissible between humans.

        You spray the area with an aerosol containing the pathogens, wait for a few days for enemy troops to get sick, then move in with your own troops when they are weak.

        See e.g. the book "Direktorium 15" by former Soviet bioweapon developer Ken Alibek. At some point he gets infected during a lab accident. He stays at home for a while, treating himself, aware that he has a high chance of dying, but also knowing that there is no risk of his family

    • by swep ( 7578022 )
      How would worldwide justice solve anything? There's no indication that everyone being "treated fairly" is possible, even in theory, or that it would keep anyone from wanting to kill others.
      • How would worldwide justice solve anything? There's no indication that everyone being "treated fairly" is possible, even in theory, or that it would keep anyone from wanting to kill others.

        Killing other humans is one of our strongest drives. Evolution has caused us to have our small group dynamic, and the rest of the world is "the other", which we are happy to kill.

    • by eth1 ( 94901 )

      We live in a world where any student in any college lab anywhere has the potential to kill hundreds, thousands, possibly millions of people using information available on the internet. No need for government support; no need for a sophisticated terrorist organization.

      I'm not quite so worried about some random crazy developing potent chemical weapons. I'm guessing any individual trying to manufacture something as potent as VX on their own on any kind of scale will accidentally kill themselves LONG before they can kill more than a handful of people locally.

      Biological, maybe, since it can basically be self-manufacturing, but that also requires a lot more expertise and investment to develop.

    • Biowar is the bigger threat. It doesn't require thousands of dedicated physicists to create a bioweapon. A single nerd in a basement might do it.

      All you need is a CRISPR@home kit ordered online.

      https://www.scientificamerican... [scientificamerican.com]

  • Governments would need to synthesize them. Strictly to develop antidotes against them, naturally. Any government capable of it would surely be too decent to use them, and too strong to ever be overthrown.

    Rest easy. We are in good hands.

    • You missed something.

      In the process of developing and testing antidotes - test them.

      AI may suggest as many candidates as you wish. They need to be tested. If the tests show effectiveness, you need to develop practical and relatively safe means of synthesis, storage, and delivery for the weapon. That, by the way, is 99% of the work.

      If we keep to the sarcasm level of your post (I just finished wiping the amount that oozed out my monitor): "Any government capable of it would surely be too decent to test them

  • OK, so Skynet can pass chemistry class. Can you build an AI tutor that can make a human student pass chemistry class? No, you can't use "No antidotes will be released until the test is passed."
  • at the same time.

  • Next step is to make them contagious. It would be a great achievement! Also, the token idea is great: we must limit chemical weapon creation to some selected people. We should make SETI-like software so everyone can contribute to chemical weapons.
  • If I search for "gun" on DDG I can find lots of pictures of guns, but it won't make making one easier. Organic chemistry is pretty hard, especially at scale. You're more likely to kill yourself than anyone else.

    • Many years ago I had wallpaper on my computer showing an absurdly large handgun used in movies starring Arnold Schwarzenegger.

      The client asked me to remove it as it violated the no weapons in the workplace policy.
      I immediately removed it. Their house their rules.

      A picture of a gun is a gun if the people in charge say it is.

      • AMT Hardballer from Terminator? With the laser scope?
      • Many years ago I had wallpaper on my computer showing an absurdly large handgun used in movies starring Arnold Schwarzenegger.

        The client asked me to remove it as it violated the no weapons in the workplace policy.

        Sounds like the children being punished for turning their index fingers into "guns". DDG'ing "child punished for making finger guns" shows us that the index finger is now a lethal weapon, and must be vigorously punished. My favorite is this one: https://reason.com/2019/10/14/... [reason.com] A freaking felony?

        Your client is severely screwed up, but I get it - some times we do work for mentally ill people who believe that a computer screen or the index finger is a lethal force projection instrument.

  • by Visarga ( 1071662 ) on Friday March 18, 2022 @02:22AM (#62368077)
    Yeah, not much changed. We could already invent chemicals to kill people before the AI revolution. And we could Photoshop pictures before GANs were invented. And we were already writing tons of spam and fake news before GPT-3. We're generative networks ourselves; there are billions of us, and we're much cheaper and more powerful than AI.
  • are super lame today compared to 1950-1980. Not to worry: just because an AI can work it out on the computer, the actual building of these is generally a bit more complicated. So many easier pre-built things are available to destroy civilization, IMHO.
  • To make it stop flooding the Enrichment Center with a deadly neurotoxin.
  • Civilizations eventually advance to the point of developing AI that discovers the 40,001st lethal molecule, and then self-destruct.

    • by HiThere ( 15173 )

      My explanations are:
      1) They eventually develop computer games that are more attractive than sex.
      2) They pollute everything so much they stop reproducing, even though they keep trying.
      3) They do a REALLY effective war.
      4) ... well, the list seems like it will grow as long as I keep thinking about it, but... they get addicted to high bandwidth, so they can't get off planet.

      • My favorite is still the quarantine thesis: The other planet sentients looked at Earth and said, "Every other species in existence knows instinctively how to balance its environmental needs against personal needs. The species of that planet do not. Whatever meme contaminated them, we don't want it, so we're all just going to build some local shields to block signals to and from that little slice of sky and not engage with them." It gives me hope that we are atypical... and might someday find a way to heal ou

  • by jbmartin6 ( 1232050 ) on Friday March 18, 2022 @06:53AM (#62368319)
    They gave the model a bunch of data on known toxic molecules and asked it to generate variations on same. Thus the wording 'potentially toxic'. It didn't discover tens of thousands of entirely new toxins.
  • Making drugs that help fight illness that doesn't cause significant negative effects on the body is really hard.

    Making a drug that ONLY causes significant negative effects on the body is really easy. Give me a periodic table and I could design for you 40,000 chemical weapons. That's not really hard at all.

  • by methano ( 519830 ) on Friday March 18, 2022 @07:13AM (#62368347)
    As a synthetic organic chemist of 40 years, I see this kind of thing all the time. Some dude with a computer comes up with a zillion structures with his little machine and everyone is in awe. But you ain't done nothing till you put the powder in the bottle. That's where the hard part starts.
    • by Mspangler ( 770054 ) on Friday March 18, 2022 @09:32AM (#62368607)

      I remember mention of a previous AI that had a list of all the known synthesis reactions in its database. It could take your desired product and chase it backwards to find likely synthesis routes that would work.

      Combine that with this and things could get interesting.
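The retro-synthesis idea described above is essentially backward chaining over a reaction database: expand the target into precursors until everything is a purchasable starting material. A toy sketch, with entirely made-up compound names and a three-entry "database":

```python
STARTING_MATERIALS = {"A"}

# Toy reaction database: product -> list of possible precursor sets.
REACTIONS = {
    "D": [("B", "C")],  # B + C -> D
    "B": [("A",)],      # A -> B
    "C": [("A",)],      # A -> C
}

def retrosynthesize(target, seen=()):
    # Returns a list of (precursors, product) steps, or None if no
    # route to starting materials exists.
    if target in STARTING_MATERIALS:
        return []            # nothing to make, just buy it
    if target in seen:
        return None          # avoid cyclic routes
    for precursors in REACTIONS.get(target, []):
        steps, ok = [], True
        for p in precursors:
            sub = retrosynthesize(p, seen + (target,))
            if sub is None:
                ok = False
                break
            steps.extend(sub)
        if ok:
            return steps + [(precursors, target)]
    return None

print(retrosynthesize("D"))  # route ends with (("B", "C"), "D")
```

As the follow-up comment notes, a real system's hard part is not this search but the practical knowledge (workups, recrystallization, yields) that the literature rarely encodes.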

      • by methano ( 519830 )
        I think most of my colleagues would have the same attitude about those retro-synthesis programs. The problem is that the literature doesn't address the artful things, like recrystallization, so the programs don't have enough useful information. Generating a synthetic sequence on paper (or .pdf) is only a weak start.
    • by Reziac ( 43301 ) *

      Yeah, as a long-ago chemistry major, my first thought was "Start with hydrogen fluoride, add it to 40,000 of anything else" and surely you'd get plenty of forms of toxic waste. Whether they'd eat through the glassware and dissolve the lab and chase you down the street is another question.

  • Comment removed based on user account deletion
  • by AcidFnTonic ( 791034 ) on Friday March 18, 2022 @09:20AM (#62368573) Homepage

    https://detect-it.ai/ [detect-it.ai] could fix this with server-side models available over REST. Already developed, already in the field.

    Already has point and click model generation, scripting language, relayIO, PLC integration.

    Are people still manually coding models?

  • This is the type of thing you actively DON'T publish, let alone publish with a practical how-to guide. Someone please b**** slap these idiots to see if it jump-starts some bloody common sense.
  • The larger point here, I think, is that everybody worries too much about runaway human-or-better general-intelligence-type AIs. The chances of that killing us seem small to me. What'll do us in, more likely, is special-purpose AI in the hands of a psychopath or a corporate CEO (sorry for the redundancy).
  • "AI" doing nothing to help people, since it fell out of a video game.
  • All I did was randomly alternate Consonant/Vowel/Consonant/Vowel. And then weeded out the existing ones.

    What? You mean randomly creating stuff is not actually creating stuff? That all the testing people hand-wave away is the important work?
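The joke above describes a real (and realistically useless) brute-force pattern: enumerate strings under a template and discard already-known ones. A toy sketch, with a hypothetical "existing" list standing in for any catalog of known names:

```python
import itertools

vowels = "aeiou"
consonants = "bcdfgklmnprst"
existing = {"baba", "kiki"}  # hypothetical already-known names

def cvcv_names(length=4):
    # Alternate Consonant/Vowel as described, then weed out names
    # already on the "existing" list.
    pools = [consonants if i % 2 == 0 else vowels for i in range(length)]
    for combo in itertools.product(*pools):
        name = "".join(combo)
        if name not in existing:
            yield name

print(list(itertools.islice(cvcv_names(), 3)))
```

Enumerating candidates this way is trivial; as the comment says, the testing step that turns a candidate into a validated anything is the actual work.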

  • You could achieve the same result with a random number generator. Every molecule it generated would be *potentially* lethal.

  • Did it actually create 40,000 very different compounds, or was it more like "let's create fuels"? Methane, ethane, propane, butane, pentane, hexane, heptane, octane, nonane, decane *inhales* methanol, ethanol, propanol, butanol, pentanol, hexanol, heptanol, octanol, nonanol, decanol
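For what it's worth, the homologous-series reading of the headline really is a two-line combinatorial exercise:

```python
# Systematic names for two homologous series, built the same
# combinatorial way the comment above jokes about.
stems = ["meth", "eth", "prop", "but", "pent",
         "hex", "hept", "oct", "non", "dec"]

alkanes  = [s + "ane" for s in stems]   # CnH2n+2 fuels
alcohols = [s + "anol" for s in stems]  # same stems, -OH group

print(alkanes[:3], alcohols[:3])
```

Ten stems and two suffixes already give twenty "new compounds"; inflating a count by enumerating near-identical variants is cheap.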
  • so long as we have to pay them for the treatment.
