Software Beats Animal Tests at Predicting Toxicity of Chemicals (nature.com) 70
Machine-learning software trained on masses of chemical-safety data is so good at predicting some kinds of toxicity that it now rivals -- and sometimes outperforms -- expensive animal studies, researchers report. From a report: Computer models could replace some standard safety studies conducted on millions of animals each year, such as dropping compounds into rabbits' eyes to check if they are irritants, or feeding chemicals to rats to work out lethal doses, says Thomas Hartung, a toxicologist at Johns Hopkins University in Baltimore, Maryland. "The power of big data means we can produce a tool more predictive than many animal tests."
In a paper published in Toxicological Sciences on 11 July, Hartung's team reports that its algorithm can accurately predict toxicity for tens of thousands of chemicals -- a range much broader than other published models achieve -- across nine kinds of test, from inhalation damage to harm to aquatic ecosystems. The paper "draws attention to the new possibilities of big data," says Bennard van Ravenzwaay, a toxicologist at the chemicals firm BASF in Ludwigshafen, Germany. "I am 100% convinced this will be a pillar of toxicology in the future." Still, it could be many years before government regulators accept computer results in place of animal studies, he adds. And animal tests are harder to replace when it comes to assessing more complex harms, such as whether a chemical will cause cancer or interfere with fertility.
Amazing stuff (Score:3, Insightful)
Re: (Score:3)
Further proof that machine learning and AI have real-world uses. This is replacing the suffering of millions of animals today. Truly useful.
Don't worry, soon we'll start seeing AI rights activists, and companies that use chemicals will have to have disclaimers stating "No software was harmed in the testing of this product".
Re: (Score:2)
I'm... I'm not sure if this should be modded "Funny" or "Insightful"...
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Of course, it never really did; the rabbit always died, pregnant or not, when they were relevant to the testing.
Re:Amazing stuff (Score:4, Informative)
"Practical use of structure activity relationships has therefore been largely limited to so-called read-across, i.e. the pragmatic comparison to one or few similar chemicals...This subjective expert-driven approach cannot be quickly applied to large numbers of chemicals. Read-across dependence on human opinion makes evaluation of the technique difficult and prevents reliable estimates of method reproducibility."
Do you know what that means? It means using statistical analysis, on large amounts of data, to compare the chemical structures of different chemicals to determine the biological properties of each substance. This means that a new chemical can be compared to an existing chemical and if found similar we would know the biological properties and effects of that chemical. Thus animal testing on a new chemical that is comparable to an existing chemical, which has already been tested on animals, would not be necessary.
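For concreteness, here is a minimal sketch of that kind of similarity comparison, assuming RDKit is available; the SMILES strings and irritant labels are made-up placeholders, and this is not necessarily the pipeline the Hopkins team actually used:

```python
# Read-across sketch: predict a property of a new chemical from its most
# similar already-tested neighbour. Data below are illustrative only.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Tiny "already tested" set: SMILES -> known irritant flag (made up).
tested = {
    "CCO": False,       # ethanol
    "CC(=O)O": True,    # acetic acid
    "c1ccccc1O": True,  # phenol
}

def fingerprint(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

def read_across(new_smiles):
    """Return (similarity, label) of the most similar tested chemical."""
    new_fp = fingerprint(new_smiles)
    return max(
        (DataStructs.TanimotoSimilarity(new_fp, fingerprint(s)), label)
        for s, label in tested.items()
    )

print(read_across("CCCO"))  # propanol: nearest neighbour is ethanol
```

If the closest match is similar enough, its measured result is carried over to the new chemical; if nothing in the tested set is similar, read-across has nothing to say.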
That's not AI. That's not machine learning. It's statistical analysis on a large scale.
It does not do away with animal testing. One needs to know the effects. Without animal testing you have no understanding of what the chemical will do. The only reason you would not need to perform animal testing is if testing has already been done and the new chemical has similar properties to previously tested chemicals.
Jackass.
Re: (Score:3)
That's not AI. That's not machine learning. It's statistical analysis on a large scale.
Haven't you heard?
AI is the new Serverless Quantum Internet of Agile Blockchain Mobile Things Architecture.
Re: (Score:2)
SQIABMTA? Doesn't roll off the tongue.
Re: (Score:2)
SQIBAMTA (pronounced squib-amta) should do just fine as well.
Re: (Score:1)
Then use people to find out what the chemicals will do. People are animals.
Re: (Score:2)
Then use people to find out what the chemicals will do. People are animals.
If you're doing lethality studies, you'll have to pay your human volunteers MUCH more than the price of a rat.
Re: (Score:2)
I imagine that you would still want to do animal studies.
Until you are really sure of the AI/ML, you don't prescribe it to people just because the software says "3 mg per kg of body mass" is fine.
Re: (Score:2)
That's not machine learning. It's statistical analysis on a large scale.
And that is exactly what machine learning is: discovering relationships in very large data sets with many attributes, using statistical methods.
Molecules of interest generally have many parts ("functional groups") and many structural patterns. So many in fact that coming up with a consistent way of simply naming the molecules has been a challenge for chemistry for generations. And each of these groups and structural components interact with biological systems in different ways. After m
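In that spirit, what "machine learning" amounts to here is fitting a statistical model over a large matrix of chemical descriptors. A toy sketch with scikit-learn, using random placeholder descriptors and labels rather than real chemistry:

```python
# Fitting a statistical model over many chemicals x many descriptors.
# The descriptors and labels are random placeholders, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 200))            # 5000 chemicals x 200 descriptors
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # toy "toxic / not toxic" labels

model = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", model.score(X, y))
```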
Re: (Score:2)
This article is probably not the best example of AI, but lets talk about this in general, because it keeps coming up here on Slashdot.
That's not AI. That's not machine learning. It's statistical analysis on a large scale.
Much of this "that's not AI" talk is really just moving the goal posts. We thought only an AI could play chess. Now we have machines that play chess, but they are not AI. We thought only AI could differentiate and integrate equations, now software can do that. We thought only AI could win at Jeopardy, diagnose diseases, play Go, beat a human at Quake 3, and drive cars...
Re: (Score:2)
This is replacing the suffering of millions of animals today. Truly useful.
There is one thing you are forgetting... this is also putting millions of animals out of a job! ;)
Same argument (Score:2)
There is one thing you are forgetting... this is also putting millions of animals out of a job! ;)
Funny but the ironic bit is how that is the EXACT same argument people use when we replace jobs that involve handling or emitting chemicals that kill people. We can't stop mining coal despite it killing people because people might lose their jobs! We can't use autonomous vehicles because truck drivers might lose their jobs.
Re: (Score:1)
This is replacing the suffering of millions of animals today. Truly useful.
Now liberals will have to think fast to come up with another reason for hating science.
Re: (Score:3)
Liberals do not hate science. Conservatives tend much more to hate it and to believe in literal interpretations of the Bible instead.
Liberal haters of science: Anti-vax, No GMO, No Nukes, No TMT. Feelings expressed as endless protests, lawsuits, food labeling propositions and other actions designed to stop science and its applications in its tracks.
Conservative haters of science are the creationists. Their feelings are expressed as... feelings, which you occasionally see expressed on websites. Have you ever seen an instance - just a single instance - of creationists preventing public infrastructure from being built? If you have, please let me know.
Re: (Score:2)
That's a wash because climate activists are peculiarly uninterested in putting technology to work on fixing the problem. They don't support the largest carbon-free energy sources (even hydro, which is still by far the most important renewable worldwide) and they don't support geoengineering to sequester carbon already in the environment. The latest sequestration effort heading for the leftist dumper is a chickpea that has been genetically modified to pull carbon from the air into the soil, where it becomes
Re: (Score:2)
That's the catch - it's learned. For it to learn correctly, the initial data set must be based on materials actually tested on animals (or people).
I'm not saying this isn't useful. But understand that it's basically just interpolating within regimes already covered by empirical test data; it can't extrapolate outside the bounds of that empirical data set. It's inherently limited to biological reactions to known substances and chemically similar ones.
Re: (Score:2)
It won't be able to accurately predict how biology will react to a new material
Except that the paper shows that it does. Not 100% accuracy, but 78-96%, quite good enough to do compound screening.
What sorts of "new material" do you think they will encounter? Chemicals made out of new elements?
Chemicals are put together in well-defined ways, based on the small number of types of bonds that exist, though combinatorics leads to very large numbers of structures. Most organic molecules are made of just four types of atoms. A few others (phosphorus and sulfur, for example) are fairly common.
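To put the 78-96% figure in rough perspective, here is some back-of-the-envelope arithmetic; the 10% prevalence and the assumption that sensitivity and specificity both sit near the middle of that range are illustrative guesses, not numbers from the paper:

```python
# Rough screening arithmetic: what an ~87% accurate test implies when
# screening a large library. Prevalence and the assumption that sensitivity
# and specificity both equal the headline accuracy are illustrative only.
n_compounds = 10_000
prevalence = 0.10          # assume 10% of the library is actually toxic
sens = spec = 0.87         # middle of the 78-96% range reported above

toxic = n_compounds * prevalence
safe = n_compounds - toxic

missed_toxics = toxic * (1 - sens)   # toxic compounds the model calls safe
false_alarms = safe * (1 - spec)     # safe compounds the model flags

print(f"missed toxics: {missed_toxics:.0f}, false alarms: {false_alarms:.0f}")
# The ~130 missed toxics are why a confirmatory in vitro or animal step
# may still be wanted for compounds the model passes.
```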
Re: (Score:2)
Right up till it fails. (Score:1)
Yeah, but the SOFTWARE said it DIDN'T cause cancer!
Who do we blame if it fails? (Score:4, Insightful)
If we take a toxin that kills us but had passed animal testing, then it is just God playing a trick on us. But if it is something an algorithm didn't think to check, then it is the fault of man, and some poor grad student will get hit with a multi-billion-dollar lawsuit for not realizing such a chemical is harmful.
This is a tongue-in-cheek response, but also a reflection of our culture and its intolerance for mistakes, to the point where we are held back from progressing because new mistakes could be made, even though overall it is a much better solution.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Avoiding overfitting to your training data is easy.
Models generalize to at least some data they weren't trained on - that's the whole point. If they don't, they get thrown out.
But if the new compound is really dissimilar, enough that it can't be said to look like the data in the test set, then all bets are off.
I don't know enough about chemistry to know if that's likely to happen often. Hopefully, chemists will know if the compound they have an idea for is widely different from existing ones. Humans aren't out of the loop yet.
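A minimal sketch of that held-out check with scikit-learn, on synthetic placeholder data rather than real toxicity measurements:

```python
# Held-out evaluation: check the model on compounds it never saw in training.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 200))          # placeholder descriptors
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)

# A big gap between these two numbers means the model overfit and,
# as the parent post says, it gets thrown out.
print("train:", balanced_accuracy_score(y_tr, model.predict(X_tr)))
print("test: ", balanced_accuracy_score(y_te, model.predict(X_te)))
```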
Re: (Score:1)
Both chemists and the AI hopefully.
Count me skeptic (Score:5, Interesting)
Re: (Score:2)
However, that does not mean animal testing is unnecessary if neither chemical has been tested. To compare two chemicals, at least one of them needs to have been tested in order to infer the effects of the other.
Re: (Score:2)
If two chemicals are found to be similar, it is reasonable to assume that both would have the same biological properties.
Exactly how similar? Like H2O & H2O2, for instance? (Note: I tried to use <sub> tags for the 2's, but they don't appear to work here...).
Re: (Score:2)
Maybe I'm old-fashioned but it seems to me that confirming that a substance is not toxic and predicting how toxic it may be are two very different things.
Chemists have been able to make a good guess at whether a heretofore-uninvented chemical compound would be toxic for quite some time now. Finding out precisely how toxic something is will require actual testing for the foreseeable future, but finding out whether something is toxic should be easy by now.
It won't replace animal testing soon, but it may reduce it.
The Poison Is In The Dose (Score:2)
No, you're not old-fashioned, because the old-fashioned people were well aware of the phrase: "The poison is in the dose." :)
--
.nosig
This will only benefit patent holders. (Score:1)
Those poor animals (Score:3, Funny)
Re: (Score:2)
Re:Those poor animals (Score:4, Interesting)
Those poor animals. I say, rather than torture the animals, let us get rid of these government regulations and let the people who want these stupid products test them out.
The problem is that here, instead of well-cared-for (poor) animals, you would be testing on literally poor humans. Exploiting the poor isn't what I call an improvement.
Re: (Score:2)
source data (Score:2)
Where's the source data going to come from if we were to stop all animal tests (which you know, sociologically speaking, is where this is leading)?
The software predicts toxicity based on existing data, right?
Re: (Score:2)
You'll still sometimes need to do experiments to get new data, but hopefully a lot less often. This model will work best if you already have data for lots of similar compounds. And in that case, there's not much benefit to doing experiments on the new compound. Any data they produced wouldn't improve the model much.
When you start working on a really novel chemical that's very different from anything you have training data on, this model won't work well, so you'll need to do experiments. Then you can add the new results to the training data.
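One way to notice that situation automatically is an applicability-domain check: measure how similar the new compound is to anything in the training set before trusting the prediction. A sketch assuming RDKit, with a tiny made-up training set and an arbitrary example threshold:

```python
# Applicability-domain sketch: flag compounds that look nothing like the
# training data. The 0.3 threshold and the tiny training set are illustrative.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fp(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

training_fps = [fp(s) for s in ["CCO", "CC(=O)O", "c1ccccc1O", "CCN(CC)CC"]]

def in_domain(new_smiles, threshold=0.3):
    """Return (usable?, best similarity to any training compound)."""
    new_fp = fp(new_smiles)
    best = max(DataStructs.TanimotoSimilarity(new_fp, t) for t in training_fps)
    return best >= threshold, best

print(in_domain("CCCO"))              # close to ethanol: prediction usable
print(in_domain("FC(F)(F)C(F)(F)F"))  # nothing similar: send it to the lab
```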
A good first round solution? (Score:2)
I'm not particularly familiar with the industry, but I was presuming this could be a first line of defense, with a smaller round of animal testing as the last line of defense to confirm the safety of the chemicals as attested by the predictions. Trust when the model says something is toxic or an irritant, but verify when the model proclaims something to be safe.
If the models work, then the animal testing should be relatively humane (they should just be getting safe doses at that point) and cheaper/quicker (fewer animals may be needed).
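Something like the following triage sketch, where predict_prob_toxic stands in for whatever trained model is used and the cutoff is an arbitrary, deliberately cautious choice:

```python
# Sketch of the "trust toxic, verify safe" triage in the parent post.
def triage(compounds, predict_prob_toxic, safe_cutoff=0.1):
    rejected, confirm_in_vivo = [], []
    for c in compounds:
        p = predict_prob_toxic(c)
        if p >= safe_cutoff:
            rejected.append((c, p))         # treat as toxic; no animal test
        else:
            confirm_in_vivo.append((c, p))  # model says safe; confirm with a
                                            # small, low-dose animal study
    return rejected, confirm_in_vivo

# Example with a dummy model:
print(triage(["A", "B"], lambda c: 0.8 if c == "A" else 0.02))
```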
on 350 to 700+ chemicals each? (Score:2)
Re: (Score:2)
All the past testing (Score:2)
wrong approach (Score:2)
That's not how life science works. When biological or environmental differences lead to variations in test results, those variations are not "errors," they are data. Averaging them out, o