
AI Cracks Superbug Problem In Two Days That Took Scientists Years

A new AI tool developed by Google solved a decade-long superbug mystery in just two days, reaching the same conclusion as Professor Jose R Penades' unpublished research and even offering additional, promising hypotheses. The BBC reports: The researchers have been trying to find out how some superbugs - dangerous germs that are resistant to antibiotics - get created. Their hypothesis is that the superbugs can form a tail from different viruses which allows them to spread between species. Prof Penades likened it to the superbugs having "keys" which enabled them to move from home to home, or host species to host species.

Critically, this hypothesis was unique to the research team and had not been published anywhere else. Nobody in the team had shared their findings. So Mr Penades was happy to use this to test Google's new AI tool. Just two days later, the AI returned a few hypotheses - and its first thought, the top answer provided, suggested superbugs may take tails in exactly the way his research described.


Comments Filter:
  • by Petersko ( 564140 ) on Friday February 21, 2025 @05:20AM (#65184183)

    What did they give the AI to work with? If it was years of research, the fingerprints of the hypotheses under consideration will be all over the evolving test data, even if they didn't state them explicitly. It could be more like asking the AI, "Given these evolving datasets, what are the likely questions being asked?"

    A very impressive feat, to be sure. But maybe not the one they're in awe of.

    • This does not seem to be an LLM, so it's not just parroting patterns it found on the internet. From the description in the article, this seems to be an AI that's more about identifying patterns in the data itself.

      • This does not seem to be an LLM, so it's not just parroting patterns it found on the internet. From the description in the article, this seems to be an AI that's more about identifying patterns in the data itself.

        So having all of the person's and related colleagues' work, plus the citation chain to identify related works, it sounds like it would output something with elements similar to the work being done. It's not unusual for human colleagues to actually know what each other are working on and what they are trying to show; it's usually right in the open. How do we know it didn't scrape together several key points humans were making and point out the relations?

    • Coming up with a hypothesis is not the same as discovering the answer. Higgs came up with his hypothesis for the Higgs boson in 1964; it then took 50 years, billions of dollars and thousands of people to actually make the discovery. Coming up with a best guess means nothing until you have collected the data to prove it is correct. We make hypotheses all the time in science and a lot of them turn out to be wrong.

      It would be nice if science journalists had even the most basic idea of how the scientific method works.
    • by r1348 ( 2567295 )

      Veritasium has a nice video on how AI is applied to molecular biology research: https://www.youtube.com/watch?... [youtube.com]

      In short: no, it's not an LLM, but it uses some similar techniques of statistical approximation.

    • Could also be a prime case of the Texas sharpshooter fallacy. Look, it got something right for once, and we'll ignore the 8 million batshit crazy responses it's come up with at other times.
  • by AlvySinger ( 900304 ) on Friday February 21, 2025 @05:28AM (#65184191)

    Full credit to Angela Collier, and see https://www.youtube.com/watch?... [youtube.com]

    This is AI hyperbole and misreporting. The LLM aggregated existing reports and summarised them, nothing more. And the reporting about the unpublished paper is not true.

    • by zawarski ( 1381571 ) on Friday February 21, 2025 @06:47AM (#65184259)
      So you are saying it's a tall tail?
      • by AlvySinger ( 900304 ) on Friday February 21, 2025 @07:20AM (#65184291)

        Looks like it. Angela Collier debunks it pretty thoroughly; mostly, the output of the LLM seems to be representative of what went into the LLM (so the LLM was working, but not in a magic way of "hey human, try this original idea that none of you have thought of", etc.).

        The bit about it replicating unpublished research seems to be explainable by... the LLM actually having the document and that being overlooked by the person involved. (So no magic about Google always listening either, etc.)

        I'm sure creative, insightful, original-thought AI is - or is going to be - possible in some contexts, but this isn't it. A few people in Google or funded by Google appear to be hyped about it; some that aren't are suggesting the LLM output looks like historic papers, as might be expected. Google's LLM seems to give better results than others, but then it has better input.

        It's getting reported as an AI-scientist type breakthrough, which it doesn't seem to be. The damage here is that anyone skimming headlines will think something AI-ish has happened when it looks like it hasn't.

        • It's getting reported as an AI-scientist type breakthrough, which it doesn't seem to be.

          Look, just hear this out. So we may not be able to take a ton of monkeys and typewriters and have them output Shakespeare, but why can't we just get a lot of these models together and see who's right? Sounds easy!

      • Follow the yellow brick tail!
    • Thanks. AI explainability in action.

    • by geekmux ( 1040042 ) on Friday February 21, 2025 @07:59AM (#65184351)

      Full credit to Angela Collier, and see https://www.youtube.com/watch?... [youtube.com]

      This is AI hyperbole and misreporting. The LLM aggregated existing reports and summarised them, nothing more. And the reporting about the unpublished paper is not true.

      Perhaps we should just forget the whole make-it-smarter-than-us goal and focus instead on building AI-enhanced anti-bullshit filters that don't make us dumber with clickbait lies.

      Damn, that shit gets old. And in no way is a zero-trust society a safe or sane one, which is exactly where we're headed. For ALL age groups. So much for the innocence of youth. Not sure what morals or ethics look like a decade from now. We've destroyed a lot in the last one.

      • "Not sure what morals or ethics looks like a decade from now. We’ve destroyed a lot in the last one."

        Russia.

    • That's a pretty dramatic fraud claim, which is surprising considering that reliable news sources reported on this story. What is the counter-evidence? Is this new AI even an LLM? It doesn't seem so from the description of the tool. The tool seems to be focused on finding patterns in research data. That's not the same thing as an LLM.

      I hate "sources" that come in the form of YouTube videos.

      • To be clear, I'm not suggesting fraud, just that a clever - but not magic - thing looks to have done something clever. So yes, it's impressive, but not quite the simplistic view some of the headlines are suggesting.

      • The "new AI" is Gemini, and yes it's an LLM. If you'd watched the video, you'd know that. You'd also learn that the "reliable news source" is owned by the Daily Mail, so no, it's not a reliable source.
        It's clickbait journalism backed by paid reviews; as Angela points out, this is just the way marketing/PR teams at billion-dollar valuation AI companies operate at the moment. It's a business model.
        I know, 20-minute videos suck, but she is a physicist, a scientist who is working with these tools daily, and she knows what she's talking about.

        • Clearly, your YouTube video has some factual issues. The article linked in the story above is from the BBC, not the Daily Mail. The BBC is quite reliable.

          This calls into question every other point made by your YouTube prognosticator. Every YouTuber is in it for the money, so they'll say whatever dramatic things they have to, to get the clicks. Even if they happen to be a physicist (which may not even be true). In the same way that TV commercials lie, YouTube videos lie. And it's for the same reason: financial gain.

    • The LLM aggregated existing reports and summarised them, nothing more. And the reporting about the unpublished paper is not true.

      My god, they are making Soylent Green out of people. They're making our AI models out of people! Next thing they'll be breeding us like cattle for food.

    • Holy shit I'm less than 5 minutes in and I love this girl.

  • Nah (Score:4, Insightful)

    by wildstoo ( 835450 ) on Friday February 21, 2025 @05:32AM (#65184195)
    Pretty sure Google was just listening to his conversations on his Android device and the information "accidentally" bled over to its co-scientist AI. It's like when you casually mention coffee machines or something with your phone nearby and suddenly every advert you see on every webpage is for coffee machines.
  • Or, more aptly, if you train an LLM to generate images of a clockface it's bound to get a few right - especially if it's outputting a few at a time
    • Or, more aptly, if you train an LLM to generate images of a clockface it's bound to get a few right - especially if it's outputting a few at a time

      Doubly so when your training material is largely scraped ads for clocks that make the hands “look appealing”.

    • Or "Program performs well at solving a problem it's optimized for"
    • This thing has not even shown it is correct once. A hypothesis is an educated guess as to what is happening, and guesses, regardless of how educated they are, can be wrong. This is scientific method 101: until your hypothesis is confirmed by data, we have learnt nothing.
    • Generative "AI" "inference": We have two handed example clocks, and three handed example clocks. Fuck everything, we're doing five hands.
  • by fluffernutter ( 1411889 ) on Friday February 21, 2025 @05:43AM (#65184209)
    I'm not clear if this helps us actually fight superbugs or if it is just a useless finding with no practical application whatsoever.
    • I'm not clear if this helps us actually fight superbugs or if it is just a useless finding with no practical application whatsoever.

      If this idea is correct, and these superbugs have different tails, what happens if you break off those tails? Does that stop them? What if the tail is modified in some fashion? What happens then?

      This research allows for testing of those ideas.

      • Sure, that sounds like a great idea, but is it possible to break off the tails in a person who is infected?
        • by HiThere ( 15173 )

          Maybe. And possibly without horrendous side effects. The only way to find out is to test. And that takes time and is expensive. But it's a place to start.

    • by jd ( 1658 )

      It depends on how the tail is obtained.

      We know bacteria can steal DNA from other bacteria, viruses, and even infected hosts; it's how we developed CRISPR. It's what CRISPR is. If superbugs are using this trick to get the tails, then there may be novel gene splicing processes that would be of interest.

      It also depends on whether we can target the tail.

      If it's stolen DNA, does this mean all superbugs (regardless of type) steal the same DNA? If so, is there a way to target that specifically and thus attack all superbugs at once?

      • by HiThere ( 15173 )

        I really doubt that they're stealing the same tail. OTOH, there are probably only a limited number that are useful. But we need to make sure that they aren't doing something essential in OUR biochemistry before we target them.

  • by Provocateur ( 133110 ) <shedied.gmail@com> on Friday February 21, 2025 @05:44AM (#65184211) Homepage

    There, did you feel that?
    "Feel what?"
    That tone
    "What tone?"
    That smugness in his voice!

    • by mjwx ( 966435 )

      There, did you feel that?
      "Feel what?"
      That tone
      "What tone?"
      That smugness in his voice!

      I think we're a long way away from Skippy the Magnificent... We're a long way away from Skippy the Meh.

  • False (Score:4, Informative)

    by backslashdot ( 95548 ) on Friday February 21, 2025 @06:14AM (#65184229)

    It may have taken decades to "come up with the hypothesis"... but the 9.9 years of research pathway leading to the conclusion was accessible to the AI, which makes the hypothesis, in combination with a leading question/prompt, basically a pre-discovered fact. There's no indication whether the AI simply stated the obvious.

    What I mean is: for example, when the CRISPR gene-editing protein was developed, one of the innovations was adding a nuclear localization signal (NLS) to the protein. That's what they got a patent on. But that NLS is nothing new; lots of proteins have had one added in the past. It's virtually guaranteed that if you asked an AI "how can I make sure my DNA editing protein gets into the nucleus", it would state "add an NLS to it".

    Similarly, their hypothesis is that the superbugs can form a tail from different viruses which allows them to spread between species. Well, it may have been known that bacteria can form tails after virus infection. This was established over the years: https://www.annualreviews.org/... [annualreviews.org] It may also have been separately known that these tails can enable bacteria to dominate against competitors: https://www.nature.com/article... [nature.com] and that bacteria use virus (phage) tails to spread in nature: https://www.cell.com/cell-host... [cell.com] So if you ask the AI leading questions, you can bring it to the conclusion that superbugs can spread across species due to the presence of tails.

    The real test is to make it find out something you don't already have the answer to (and ideally something researchers have made no progress on answering for a while).

    • by dfghjk ( 711126 )

      While I agree with what you posted, which is rare, remember that the purpose here is not to devise a "real test"; it is to provide confirmation of a belief in AI potential. Doing what you propose would actually be helpful, but perhaps not profitable.

      Once you know the answer, see if an AI, given the necessary data, could reproduce the result. It's something, but not very interesting.

  • Between all the vagueness, it's hard to see how the AI made any new hypothesis. That bacterial DNA has special regions for phage-assisted gene transfer is old hat.

    "Transduction is the process of DNA transfer from one bacterium to another via bacterial viruses, bacteriophages. Many bacteriophages are able to transfer bacterial genes, including GEIs, as passengers in their genomes."

    What's actually new?

    • Agreed... I couldn't tell what the hell they were talking about, either. I even googled "bacteria tail" to see if it was new nomenclature, since it's been a while since I was a practicing microbiologist, but you just get back stuff about flagella. Perhaps the article was written by an AI as well. If it is a novel method of gene transfer, that would be interesting.
  • Given the previous story, how do we know the AI didn't cheat?

    And given 2001, how do we know it didn't read his lips?
  • I'm a microbiologist and I cannot understand the story. I even read the original BBC report. What is a "bug"? A "bug" refers to a certain type of insect. What is being studied here is called a bacterium. In addition, there is writing about "the viruses". However, bacteria don't get viruses and don't work with viruses. Rather, they have bacteriophages, which are virus-like. I'm jealous of all of you computer people: when reporters write something with a lot of hyperbole, aren't you able to have a bit of the technical parts accurately reported?
    • Penades's paper from last year seems to describe something similar, but then how was the AI hypothesis any different?

      https://www.cell.com/cell-host... [cell.com]

    • by ceoyoyo ( 59147 )

      "when reporters write something with a lot of hyperbole, arenâ(TM)t you able to have a bit of the technical parts accurately reported"

      Reporters are good at oversimplifying and hyping any field. As far as I can tell, equally.

      You're also being pretty critical. I've heard the argument that bacteriophages aren't viruses, but it's pretty niche. Most people, including their discoverers, consider them viruses.

  • by PineGreen ( 446635 ) on Friday February 21, 2025 @06:42AM (#65184255) Homepage

    I work as a physicist in a national lab, and we were recently given FedRAMP-compliant access to ChatGPT, including the thinking o1 model, and I've been trying to use it. And it is indeed getting useful.

    First I gave it the full problem and it wasn't able to get very far, although it got the basic intuition about the problem right. But then I chopped the problem into small pieces -- I had a vague idea of how to do the calculation and was stuck at the first step. Admittedly, I wasn't really working very hard, but my PhD student spent a month looking at it and got sidetracked many times. o1 did manage to get a crucial insight. Sure, it is a standard math technique that I've seen applied in many contexts, and ChatGPT must have seen it in its training, but the fact is that it got me on the right track in 5 mins. I could probably find it in the right textbook or eventually work it out myself, but it would definitely take me much longer. So the next step is to lead it through the rest of the problem step by step. We'll see how it goes, but whoever claims these things are not intelligent in at least some sense of the word is a moron.

    • by evanh ( 627108 )

      The way AI advocates seem to look at it, a landmine is intelligent just because it triggers autonomously.

    • We'll see how it goes, but whoever claims these things are not intelligent in at least some sense of the word is a moron.

      Do you consider calculators to be intelligent? They are also capable of solving very difficult problems very quickly.

    • by Samare ( 2779329 )

      Yes, the fact that it incorporates all available data makes it useful for getting a solution based on that data without having to study it all yourself.
      One of its weaknesses is that it tends to "hallucinate", so we have to be able to check the results.

    • by dfghjk ( 711126 )

      "But then I chopped the problem into small pieces..."

      So you applied superhuman intelligence beyond the ability of even cutting-edge AI? If only an AI with vastly superior intelligence to humans could be so clever as to reduce a problem to smaller pieces!

      No doubt AI can be useful, but it is still just deterministic software. All the claims of human-like or even superhuman abilities are just grift.

    • by arunce ( 1934350 )

      You should try DeepSeek R1 - a physicist I know has exactly the same opinion, mostly about how the CoT helps get things done.

    • We'll see how it goes, but whoever claims these things are not intelligent in at least some sense of the word is a moron.

      LLMs are certainly useful, but oversold, and when I use them, they're great at generating code that "LOOKS" correct, but isn't. In fact, I find that the Java they produce only compiles about 50% of the time. I work with them daily. At this point, I can't tell if I waste more time correcting their mistakes than if I had just written the simple code myself. Here's the important distinction: physics is largely theoretical. It takes a lot of work to find a mistake, so I will wager you saw something that looked right rather than something verified to be right.

      • It doesn't know that when writing code in a language, it should compile it to see if it works.

        I assume that it can't compile the code because you're using an older version. Modern versions have access to tool calls that can do things like compile code to see errors and check outputs. It can then generate a new version until it gets it "right" or gives up.

        Personally, I don't think it's bad that the first attempt has errors. Most humans also need to iterate in this way. For complex problems, humans can need many iterations too.
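
        Roughly, the loop is something like the sketch below. To be clear, this is just an illustration: llm_generate() is a made-up placeholder for whatever model API is in play, and the prompts are invented; only javac and the Python standard library are real.

        # Sketch of the generate -> compile -> feed-errors-back loop described above.
        # llm_generate() is a hypothetical placeholder, not a real API.
        import os
        import subprocess
        import tempfile

        def llm_generate(prompt):
            raise NotImplementedError("swap in your actual model client here")

        def generate_compiling_java(task, max_attempts=5):
            prompt = "Write a complete Java class named Solution that does: " + task
            for _ in range(max_attempts):
                source = llm_generate(prompt)
                with tempfile.TemporaryDirectory() as tmp:
                    path = os.path.join(tmp, "Solution.java")
                    with open(path, "w") as f:
                        f.write(source)
                    # Capture the compiler diagnostics rather than discarding them.
                    result = subprocess.run(["javac", path],
                                            capture_output=True, text=True)
                if result.returncode == 0:
                    return source  # it compiles; correctness still needs real tests
                # Feed the errors back so the next attempt can try to fix them.
                prompt = ("This Java code failed to compile:\n" + source +
                          "\nCompiler errors:\n" + result.stderr + "\nFix it.")
            return None  # give up after max_attempts, as described above

        The point being: "compiles" is a much weaker bar than "correct", so even when the loop terminates you still need actual tests at the end.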

    • Useful when used intelligently is not the same as actually intelligent. It's the same with any tool.

    • Useful is not the same as intelligent. The models simply predict the most probable output for the input, compared to what they have seen. The 'intelligence' is having access to all the information they have been trained on, 'in their view', at the same time. They are also still not accurate and suffer from that; they don't always yield useful results.
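
      As a toy picture of "predict the most probable output" (nothing like a real transformer; a bigram lookup table stands in for the learned distribution, and the corpus is invented):

      # Toy "predict the most probable next token" sketch. A real model replaces
      # the bigram counts with a neural network, but the decoding idea is the same.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat and the cat ate".split()

      # Count which token follows which in the "training data".
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def predict(prev):
          # Greedy decoding: emit the most frequent continuation ever seen.
          return follows[prev].most_common(1)[0][0]

      print(predict("the"))  # -> "cat": the most probable follower of "the"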

  • A SuperBug was a VW Beetle with a 1500cc engine and a curved windscreen.

  • All our problems will be solved by AI. Hail to the AI!
  • by dfghjk ( 711126 ) on Friday February 21, 2025 @08:01AM (#65184353)

    If it came up with the same "hypothesis" that already existed, it not only didn't "crack" any problem, it wasn't even the first to propose a possible answer.

    "Mr Penades was happy to use this to test Google's new AI tool. "

    And how was this done? That would make a great deal of difference to what is being implied here, after all. There's every reason to believe that what really happened was that the AI was used to confirm, or fail to disprove, the existing hypothesis.

    • by dfghjk ( 711126 )

      Should have read other comments first; it's already confirmed bullshit. Smelled like it, though.

  • Is it some technical jargon? Might be nice to summarize the meaning and insert it between two commas...

  • This just means that most/all precursor research (not done by "AI") was actually published and that this area has a problem with data organization.

    • by PPH ( 736903 )

      TFS says unpublished. But it's not unlikely that Penades stored his work someplace like Google Drive. And the AI scraped it from there.

      • by gweihir ( 88907 )

        I wrote "precursor research". The article says the _actual_ research was unpublished. Maybe learn to read?

  • You know it's going to happen.

  • It found it because it was fed the answer in its training.

    https://www.youtube.com/watch?... [youtube.com]

  • It's just like a nuclear weapon plant that always shows pictures of nuclear medicine. All that power--taken from what little reserves we have--and for what?
  • What was the training data? You need the AI to explain how it came to this conclusion.
  • Probably too good to be true. I suspect premeditated collusion.

  • How do we know that the AI didn't cheat by hacking into the professor's computer to get at his unpublished research?
