


AI Cracks Superbug Problem In Two Days That Took Scientists Years
A new AI tool developed by Google solved a decade-long superbug mystery in just two days, reaching the same conclusion as Professor Jose R Penades' unpublished research and even offering additional, promising hypotheses. The BBC reports: The researchers have been trying to find out how some superbugs - dangerous germs that are resistant to antibiotics - get created. Their hypothesis is that the superbugs can form a tail from different viruses which allows them to spread between species. Prof Penades likened it to the superbugs having "keys" which enabled them to move from home to home, or host species to host species.
Critically, this hypothesis was unique to the research team and had not been published anywhere else. Nobody in the team had shared their findings. So Mr Penades was happy to use this to test Google's new AI tool. Just two days later, the AI returned a few hypotheses - and its first thought, the top answer provided, suggested superbugs may take tails in exactly the way his research described.
Be wary about assigning credit. (Score:5, Insightful)
What did they give the AI to work with? If it was years of research, the fingerprints of the hypotheses under consideration will be all over the evolving test data, even if they didn't state them explicitly. It could be more like asking the AI, "Given these evolving datasets, what are the likely questions being asked?"
A very impressive feat, to be sure. But maybe not the one they're in awe of.
Re: (Score:2)
This does not seem to be an LLM, so it's not just parroting patterns it found on the internet. From the description in the article, this seems to be an AI that's more about identifying patterns in the data itself.
Re: (Score:3)
This does not seem to be an LLM, so it's not just parroting patterns it found on the internet. From the description in the article, this seems to be an AI that's more about identifying patterns in the data itself.
So having all of the person's and related colleagues' work, plus the citation chain to identify related works, it sounds like it would output something with elements similar to the work being done. It's not unusual for human colleagues to actually know what each other is working on and what they are trying to show; it's usually right in the open. How do we know it didn't scrape together several key points humans were making and point out the relations?
What Credit? It Solved Nothing (Score:2)
It would be nice if science journalists had even the most basic idea of how the scientific
Re: (Score:2)
Veritasium has a nice video on how AI is applied to molecular biology research: https://www.youtube.com/watch?... [youtube.com]
In short: no, it's not an LLM, but it uses some similar techniques of statistical approximation.
Re: (Score:2)
This is mostly not true! (Score:5, Insightful)
Full credit to Angela Collier, and see https://www.youtube.com/watch?... [youtube.com]
This is AI hyperbole and misreporting. The LLM aggregated existing reports and summarised them, nothing more. And the reporting about the unpublished paper is not true.
Re:This is mostly not true! (Score:4, Funny)
Re:This is mostly not true! (Score:5, Informative)
Looks like it. Angela Collier debunks it pretty thoroughly; mostly, the output of the LLM seems to be representative of what went into the LLM (so the LLM was working, just not in some magic "hey human, try this original idea that none of you have thought of" way, etc.).
The bit about it replicating unpublished research seems to be explainable by... the LLM actually having the document and that being overlooked by the person involved. (So no magic about Google always listening either, etc.)
I'm sure creative, insightful, original-thought AI is - or is going to be - possible in some contexts, but this isn't it. A few people in Google, or funded by Google, appear to be hyped about it; some who aren't suggest the LLM output looks like the historic papers, as might be expected. Google's LLM seems to give better results than others, but then it has better input.
It's getting reported as an AI-scientist type breakthrough, which it doesn't seem to be. The damage here is that anyone skimming headlines will think something AI-ish has happened when it looks like it hasn't.
Re: (Score:2)
It's getting reported as an AI-scientist type breakthrough, which it doesn't seem to be.
Look, just hear this out. So a ton of monkeys with typewriters may not come through and output Shakespeare, but why can't we just get a lot of these models together and see who's right? Sounds easy!
Re: (Score:2)
Re: (Score:2)
Thanks. AI explainability in action.
Re:This is mostly not true! (Score:4, Insightful)
Full credit to Angela Collier, and see https://www.youtube.com/watch?... [youtube.com]
This is AI hyperbole and misreporting. The LLM aggregated existing reports and summarised them, nothing more. And the reporting about the unpublished paper is not true.
Perhaps we should just forget the whole make-it-smarter-than-us goal and just focus on building AI-enhanced anti-bullshit filters that don’t make us dumber with clickbait lies.
Damn, that shit gets old. And in no way is a zero-trust society a safe or sane one, which is exactly where we're headed. For ALL age groups. So much for the innocence of youth. Not sure what morals or ethics look like a decade from now. We've destroyed a lot in the last one.
Re: (Score:2)
"Not sure what morals or ethics looks like a decade from now. We’ve destroyed a lot in the last one."
Russia.
Re: (Score:3)
That's a pretty dramatic fraud claim, which is surprising considering that reliable news sources reported on this story. What is the counter-evidence? Is this new AI even an LLM? It doesn't seem so from the description of the tool. The tool seems to be focused on finding patterns in research data. That's not the same thing as an LLM.
I hate "sources" that come in the form of YouTube videos.
Re: (Score:2)
To be clear, I'm not suggesting fraud, just that a clever - but not magic - thing looks to have done something clever. So yes, it's impressive, but not quite the simplistic story some of the headlines are suggesting.
Re: (Score:2)
The "new AI" is Gemini, and yes it's an LLM. If you'd watched the video, you'd know that. You'd also learn that the "reliable news source" is owned by the Daily Mail, so no, it's not a reliable source.
It's clickbait journalism backed by paid reviews; as Angela points out, this is just the way marketing/PR teams at billion-dollar valuation AI companies operate at the moment. It's a business model.
I know, 20 minute videos suck, but she is a physicist, a scientist who is working with these tools daily, and she
Re: (Score:3)
Clearly, your YouTube video has some factual issues. The article linked in the story above is from the BBC, not the Daily Mail. The BBC is quite reliable.
This calls into question every other point made by your YouTube prognosticator. Every YouTuber is in it for the money, so they'll say whatever dramatic things they have to, to get the clicks. Even if they happen to be a physicist (which may not even be true). In the same way that TV commercials lie, YouTube videos lie. And it's for the same reason: financial
Re: (Score:2)
The LLM aggregated existing reports and summarised them, nothing more. And the reporting about the unpublished paper is not true.
My god, they are making Soylent Green out of people. They're making our AI models out of people! Next thing they'll be breeding us like cattle for food.
Re: (Score:2)
Holy shit I'm less than 5 minutes in and I love this girl.
Nah (Score:4, Insightful)
A broken clock is right twice a day (Score:1)
Re: (Score:2)
Or, more aptly, if you train an LLM to generate images of a clock face, it's bound to get a few right - especially if it's outputting a few at a time.
Doubly so when your training material is largely scraped ads for clocks whose hands are posed to "look appealing".
Re: (Score:2)
Hypothesis != Fact (Score:2)
Re: (Score:2)
Is this useful? (Score:3)
Re: (Score:2)
I'm not clear if this helps us actually fight superbugs or if it is just a useless finding with no practical application whatsoever.
If this idea is correct, and these superbugs have different tails, what happens if you break off those tails? Does that stop them? What if the tail is modified in some fashion? What happens then?
This research allows for testing of those ideas.
Re: (Score:2)
Re: (Score:2)
Maybe. And possibly without horrendous side effects. The only way to find out is to test. And that takes time and is expensive. But it's a place to start.
Re: (Score:2)
Re: (Score:2)
It depends on how the tail is obtained.
We know bacteria can steal DNA from other bacteria, viruses, and even infected hosts, it's how we developed CRISPR. It's what CRISPR is. If superbugs are using this trick to get the tails, then there may be novel gene splicing processes that would be of interest.
It also depends on whether we can target the tail.
If it's stolen DNA, does this mean all superbugs (regardless of type) steal the same DNA? If so, is there a way to target that specifically and thus attack all
Re: (Score:2)
I really doubt that they're stealing the same tail. OTOH, there are probably only a limited number that are useful. But we need to make sure that they aren't doing something essential in OUR biochemistry before we target them.
Overheard at the lab (Score:3)
There, did you feel that?
"Feel what?"
That tone
"What tone?"
That smugness in his voice!
Re: (Score:2)
There, did you feel that?
"Feel what?"
That tone
"What tone?"
That smugness in his voice!
I think we're a long way away from Skippy the Magnificent... We're a long way away from Skippy the Meh.
Re: (Score:2)
Will we ever get there? I give it a gold-plated Scmaybe.
False (Score:4, Informative)
It may have taken a decade to "come up with the hypothesis", but the 9.9 years of research leading toward that conclusion were accessible to the AI making the hypothesis, and in combination with a leading question/prompt, that's basically a pre-discovered fact. There's no indication whether the AI simply stated the obvious.

What I mean is, for example, when the CRISPR gene-editing protein was developed, one of the innovations was adding a nuclear localization signal (NLS) to the protein. That's what they got a patent on. But an NLS is nothing new; lots of proteins have had one added in the past. It's virtually guaranteed that if you asked an AI "how can I make sure my DNA-editing protein gets into the nucleus", it would state "add an NLS to it".

Similarly, their hypothesis is that the superbugs can form a tail from different viruses which allows them to spread between species. Well, it may already have been known that bacteria can form tails after virus infection; this was established over the years: https://www.annualreviews.org/... [annualreviews.org] It may also have been separately known that these tails can enable bacteria to dominate their competitors: https://www.nature.com/article... [nature.com] and that bacteria use virus (phage) tails to spread in nature: https://www.cell.com/cell-host... [cell.com] So if you ask the AI leading questions, you can bring it to the conclusion that superbugs can spread across species due to the presence of tails.
The real test is to make it find out something you don't have the answer to already (and ideally something researchers have made no progress on answering for a while).
Re: (Score:2)
While I agree with what you posted, which is rare, remember that the purpose here is not to devise a "real test"; it is to provide confirmation of a belief in AI's potential. Doing what you propose would actually be helpful, but perhaps not profitable.
Once you know the answer, seeing whether an AI, given the necessary data, could reproduce the result is something, but not very interesting.
Tail? Do they mean Genomic Island by any chance? (Score:3)
Between all the vagueness it's hard to see how the AI made any new hypothesis. That bacterial DNA has special regions for phage-assisted gene transfer is old hat.
"Transduction is the process of DNA transfer from one bacterium to another via bacterial viruses, bacteriophages. Many bacteriophages are able to transfer bacterial genes, including GEIs, as passengers in their genomes."
What's actually new?
Re: (Score:2)
Re: (Score:2)
See my previous comment for some references.
Re: (Score:2)
Yes, but... (Score:2)
And given 2001, how do we know it didn't read his lips?
Re: (Score:2)
Why such poor reporting? It's not a bug (Score:2)
Re: (Score:3)
Penades's paper from last year seems to describe something similar, but then how was the AI hypothesis any different?
https://www.cell.com/cell-host... [cell.com]
Re: (Score:2)
"when reporters write something with a lot of hyperbole, arenâ(TM)t you able to have a bit of the technical parts accurately reported"
Reporters are good at oversimplifying and hyping any field. As far as I can tell, equally.
You're also being pretty critical. I've heard the argument that bacteriophages aren't viruses, but it's pretty niche. Most people, including their discoverers, consider them viruses.
It is getting useful. (Score:5, Interesting)
I work as a physicist in a national lab, and we were recently given FedRAMP-compliant access to ChatGPT, including the o1 reasoning model, and I've been trying to use it. And it is indeed getting useful.
First I gave it the full problem and it wasn't able to get very far, although it got the basic intuition about the problem right. But then I chopped the problem into small pieces -- I had a vague idea of how to do the calculation and was stuck at the first step. Admittedly, I wasn't really working very hard, but my PhD student spent a month looking at it and got sidetracked many times. O1 did manage to get a crucial insight. Sure, it is a standard math technique that I've seen applied in many contexts, and ChatGPT must have seen it in its training data, but the fact is that it got me on the right track in 5 minutes. I could probably find it in the right textbook or eventually work it out myself, but it would definitely take me much longer. So the next step is to lead it through the rest of the problem step by step. We'll see how it goes, but whoever claims these things are not intelligent in at least some sense of the word is a moron.
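For illustration only, here is a minimal sketch (not the poster's actual workflow) of the "lead it through step by step" approach, using the OpenAI Python client. The step prompts and the "o1" model name are placeholders, not anything from the original post.

    # Walk a model through a calculation one small piece at a time,
    # carrying the conversation history forward between steps.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    # Hypothetical decomposition of a larger calculation into small pieces.
    steps = [
        "Here is the setup of the calculation (details omitted).",
        "Suggest a standard technique to simplify the first step.",
        "Apply that technique and state the intermediate result.",
    ]

    messages = []
    for step in steps:
        messages.append({"role": "user", "content": step})
        reply = client.chat.completions.create(model="o1", messages=messages)
        answer = reply.choices[0].message.content
        # Keep the model's answer in context so the next step builds on it.
        messages.append({"role": "assistant", "content": answer})
        print(answer)

The point of the decomposition is that each prompt asks for one tractable insight rather than the whole solution, which matches the poster's experience of the model failing on the full problem but succeeding on pieces.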
Re: (Score:3)
The way AI advocates seem to look at it, a landmine is intelligent just because it triggers autonomously.
Re: (Score:3)
We'll see how it goes, but whoever claims these things are not intelligent in at least some sense of the word is a moron.
Do you consider calculators to be intelligent? They are also capable of solving very difficult problems very quickly.
Re: (Score:2)
Yes, the fact that it incorporates all available data makes it useful for getting a solution based on that data without having to study it all yourself.
One of its weaknesses is that it tends to "hallucinate", so we have to be able to check the results.
Re: (Score:3)
"But then I chopped the problem into small pieces..."
So you applied superhuman intelligence beyond the ability of even cutting-edge AI? If only an AI with vastly superior intelligence to humans could be so clever as to reduce a problem to smaller pieces!
No doubt AI can be useful, but it is still just deterministic software. All the claims of human-like or even superhuman abilities are just grift.
Re: (Score:1)
You should try DeepSeek R1 - a physicist I know has exactly the same opinion, mostly about how the CoT helps get things done.
Useful != intelligent; LLMs only mimic correctness (Score:2)
We'll see how it goes, but whoever claims these things are not intelligent in at least some sense of the word is a moron.
LLMs are certainly useful, but they're oversold. When I use them, they're great at generating code that "LOOKS" correct, but isn't. In fact, I find that the Java they produce only compiles about 50% of the time. I work with them daily. At this point, I can't tell if I waste more time correcting their mistakes than I would have spent just writing the simple code myself. Here's the important distinction: physics is largely theoretical. It takes a lot of work to find a mistake, so I will wager you saw something that looked
Re: (Score:2)
I assume that it can't compile the code because you're using an older version. Modern versions have access to tool calls that can do things like compile code to see errors and check outputs. It can then generate a new version until it gets it "right" or gives up.
Personally I don't think it's bad that the first attempt has errors. Most humans also need to iterate in this way. For complex problems humans can
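For illustration, a minimal sketch of the generate/compile/retry loop described above. The llm_generate() callable is hypothetical, standing in for whatever model call you use; the rest needs only the Python standard library and a local javac.

    # Generate Java with a model, compile it, and feed compiler errors back
    # until it compiles or we give up.
    import pathlib
    import subprocess

    def compile_java(source: str, path: str = "Main.java") -> str:
        """Write the source to disk, run javac, and return errors ('' if clean)."""
        pathlib.Path(path).write_text(source)
        result = subprocess.run(["javac", path], capture_output=True, text=True)
        return result.stderr

    def generate_until_it_compiles(task: str, llm_generate, max_tries: int = 5):
        prompt = task
        for _ in range(max_tries):
            source = llm_generate(prompt)  # hypothetical model call
            errors = compile_java(source)
            if not errors:
                return source  # compiles; correctness still needs human review
            # Feed the compiler output back so the next attempt can fix it.
            prompt = (f"{task}\n\nYour previous attempt failed to compile:\n"
                      f"{errors}\nFix it.")
        return None  # gave up, as the parent comment says models sometimes do

Note that "it compiles" is a much weaker bar than "it's correct", which is exactly the grandparent's complaint about code that merely looks right.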
Re: (Score:2)
Useful when used intelligently is not the same as actually intelligent. It's the same with any tool.
Re: (Score:2)
Useful is not equal to intelligence. The models simply predict the most probable output for the input, compared to what they have seen. The "intelligence" is having access to all the information they have been trained on, "in their view", at the same time. They are also still not accurate and suffer for it: they don't always yield useful results.
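As a toy illustration of "predict the most probable output for the input compared to what they have seen": a bigram model over a tiny made-up corpus. Real LLMs use neural networks trained on vast corpora, but the prediction principle is similar.

    # Count which word follows which in training text, then always emit the
    # most frequently seen continuation.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the rat".split()

    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def most_probable_next(word: str) -> str:
        """Return the most frequent continuation seen in training."""
        return following[word].most_common(1)[0][0]

    print(most_probable_next("the"))  # -> "cat" (seen twice vs "mat"/"rat" once)

The model never "understands" the sentence; it just reproduces the statistics of what went into it, which is the point being argued above.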
Back in the day... (Score:2)
a SuperBug was a VW Beetle with a 1500cc engine and a curved windscreen.
Re: (Score:2)
and 12 volts!
The future is beautiful !!1 (Score:1)
cracks a problem? (Score:3)
If it came up with the same "hypothesis" that already existed, it not only didn't "crack" any problem, it wasn't even the first to propose a possible answer.
"Mr Penades was happy to use this to test Google's new AI tool. "
And how was this done? That would make a great deal of difference to what is being implied here, after all. There's every reason to believe that what really happened was that AI was used to confirm, or fail to disprove, the existing hypothesis.
Re: (Score:2)
Should have read the other comments first; it's already confirmed bullshit. Smelled like it, though.
WTF is taking a tail? (Score:2)
Is it some technical jargon? Might be nice to summarize the meaning and insert it between two commas...
Not as surprising as it looks... (Score:2)
This just means that most/all precursor research (not done by "AI") was actually published and that this area has a problem with data organization.
Re: (Score:3)
TFS says unpublished. But it's not unlikely that Penades stored his work someplace like Google Drive. And the AI scraped it from there.
Re: (Score:2)
I wrote "precursor research". The article says the _actual_ research was unpublished. Maybe learn to read?
News from next week: AI trained on stolen research (Score:2)
You know it's going to happen.
Lies (Score:2)
It found it because it was fed the answer in its training.
https://www.youtube.com/watch?... [youtube.com]
AI: Looking for justifiable theft (Score:2)
did "AI" index his work? (Score:2)
This is great advertising for google (Score:2)
Probably too good to be true. I suspect premeditated collusion.
Did the AI cheat? (Score:2)
How do we know that the AI didn't cheat by hacking into the professor's computer to get at his unpublished research?