
Hundreds of AI Tools Were Built to Catch Covid. None of Them Helped (technologyreview.com)

At the start of the pandemic, remembers MIT Technology Review's senior editor for AI, the AI community "rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines — in theory.

"In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful." That's the damning conclusion of multiple studies published in the last few months. In June, the Turing Institute, the UK's national center for data science and AI, put out a report summing up discussions at a series of workshops it held in late 2020. The clear consensus was that AI tools had made little, if any, impact in the fight against covid.

This echoes the results of two major studies that assessed hundreds of predictive tools developed last year. Laure Wynants, an epidemiologist at Maastricht University in the Netherlands who studies predictive tools, is lead author of one of them, a review in the British Medical Journal that is still being updated as new tools are released and existing ones tested. She and her colleagues have looked at 232 algorithms for diagnosing patients or predicting how sick those with the disease might get. They found that none of them were fit for clinical use. Just two have been singled out as being promising enough for future testing. "It's shocking," says Wynants. "I went into it with some worries, but this exceeded my fears."

Wynants's study is backed up by another large review carried out by Derek Driggs, a machine-learning researcher at the University of Cambridge, and his colleagues, and published in Nature Machine Intelligence. This team zoomed in on deep-learning models for diagnosing covid and predicting patient risk from medical images, such as chest x-rays and chest computed tomography (CT) scans. They looked at 415 published tools and, like Wynants and her colleagues, concluded that none were fit for clinical use. "This pandemic was a big test for AI and medicine," says Driggs, who is himself working on a machine-learning tool to help doctors during the pandemic. "It would have gone a long way to getting the public on our side," he says. "But I don't think we passed that test...."

If there's an upside, it is that the pandemic has made it clear to many researchers that the way AI tools are built needs to change. "The pandemic has put problems in the spotlight that we've been dragging along for some time," says Wynants.

The article suggests researchers collaborate on creating high-quality (and shared) data sets — possibly by creating a common data standard — and also disclose their final models and training protocols for review and extension. "In a sense, this is an old problem with research. Academic researchers have few career incentives to share work or validate existing results.

"To address this issue, the World Health Organization is considering an emergency data-sharing contract that would kick in during international health crises."
  • Not deep learning (Score:2, Informative)

    by phantomfive ( 622387 )

    There aren't enough patients sick with covid yet for deep learning. The problem with deep learning in the medical field is the (relatively) small size of the data sets.

    • Re: (Score:2, Insightful)

      by novov51410 ( 8447221 )
      Then it isn't intelligent. Stop calling it AI. The idea that you need millions of points of input data in order to produce a result is the opposite of intelligence.
      • Nope, just good old sneakiness. Skynet starts not with a system-wide glitch [youtu.be], but with an "oops, must not have noticed it. I'm sure I'll catch it, just give me a couple months. Most of you will still be alive, so don't worry about it! You can trust me, right?"
      • Then we'll just call it Dojo [youtu.be].

      • The intelligent human comes about as the result of processing trillions of data points.

      • This is yet another one of those /. headlines that needed to be prefixed with "Surprising exactly no-one, ...".
      • The idea that you need millions of points of input data in order to produce a result is the opposite of intelligence.

        Yeah, whereas med students are intelligent because they can give out perfect diagnoses immediately after they walk out of the lecture~

        You know there is this metaphor about needing to practice anything for 10,000 hours before being good at it?
        The whole idea behind deep-learning methods is to have an artificial neural construct go through the 10,000 hours in simulation.

        The parent is pointing out that this is going to be hard when all you have is the equivalent of a metaphorical

      • One thing is certain, this kind of trite old comment is not intelligent.

        You can't point to anything that our brains definitively do, that a learning algorithm definitively doesn't. Few-shot learning through pretraining is one of the big themes in research over the last couple of years. We can make systems that learn from few examples, like humans do. Of course then we get a bunch of biases from the pretraining as well, and start making more human-like mistakes too. There's no free lunch in learning, whethe
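        As a gloss on "few-shot learning through pretraining": the usual pattern is a frozen, pretrained encoder plus a very simple classifier fit on a handful of labeled examples. The sketch below uses made-up data and an identity function standing in for the encoder; nothing in it comes from the thread:

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)

# Stand-in for a pretrained encoder. In practice this would be a network
# pretrained on a large generic dataset, mapping each input to an embedding;
# here the inputs are assumed to already be embeddings.
def pretrained_embed(x):
    return x

# Few-shot setup: only 5 labeled examples per class.
support_x = rng.normal(size=(10, 64)) + np.repeat([[0.0], [2.0]], 5, axis=0)
support_y = np.array([0] * 5 + [1] * 5)

# With good embeddings, a nearest-centroid rule is often enough.
clf = NearestCentroid()
clf.fit(pretrained_embed(support_x), support_y)

query = rng.normal(size=(3, 64)) + 2.0  # drawn near class 1's region
print(clf.predict(pretrained_embed(query)))  # expected: mostly 1s
```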

          • You can't point to anything that our brains definitively do, that a learning algorithm definitively doesn't.

          According to this article: diagnose patients with COVID-19 and suggest treatment.

          • You seem to demand that our brains should be judged based on what the best and most specialized of us can do, in the best circumstances, but learning algorithms should be judged based on what exactly this one can do within exactly these constraints.

            "If you speak to him of a machine for peeling a potato, he will pronounce it impossible. If you peel a potato with it before his eyes, he will declare it useless, because it will not slice a pineapple."

    • That is one problem. The other is that the data is not at all harmonized. Or parsed. It's a bit of a mess, which is a big challenge.
    • by ceoyoyo ( 59147 )

      No you don't.

      What you do need, for any reliable system including one involving humans, is good quality data, collected according to an organized protocol. Pretty much the opposite of the giant datasets people gather off the internet, and definitely not the chaos that came in through most of the pandemic.

      • For machine learning, you need good quality data. For deep learning, you need a lot of data.

        If the dataset is small enough, machine learning isn't appropriate. You can use something like linear programming and get a precise answer.
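        For what it's worth, the linear-programming suggestion can be made literal: least-absolute-deviations regression solves exactly as a linear program, which is feasible on tiny datasets. A minimal sketch with made-up numbers (not from any covid dataset):

```python
import numpy as np
from scipy.optimize import linprog

# Tiny made-up dataset: 8 samples, 2 features.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 2.0],
              [1.5, 0.5], [2.5, 2.5], [3.5, 1.0], [0.5, 1.5]])
y = np.array([3.1, 2.9, 6.2, 5.8, 1.9, 5.1, 4.4, 2.1])
n, p = X.shape

# Minimize sum(t) subject to -t <= X @ w - y <= t, i.e. least absolute
# deviations. Variables are [w (p free), t (n nonnegative)].
c = np.concatenate([np.zeros(p), np.ones(n)])
A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * p + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("coefficients:", res.x[:p])
```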

        • Neural networks are not good enough as they tend to overfit. Simpler machine learning algorithms might work. Machine learning is not only neural networks.
        • by ceoyoyo ( 59147 )

          Deep learning neural networks are, at their heart, regression models. You don't need a lot of data. You need a lot of data for *any* model that is very complicated. Deep learning models can easily be made arbitrarily complicated, therefore people who don't know how they work arrive at the conclusion that you need a lot of data.

          There is a great deal of good work using deep learning on very small datasets.
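          A sketch of the most common such recipe, under assumptions (torchvision installed, random tensors standing in for a small curated image set): keep a pretrained backbone frozen and fit only a tiny head, so the number of parameters actually estimated from the small dataset stays small:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained weights and freeze the feature extractor.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh 2-class head; only it gets trained.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch; a real use would load a small, carefully labeled dataset.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))

model.train()
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"one step done, loss={loss.item():.3f}")
```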

          • If you don't have much data, then it's not deep learning, it's just learning.

            • by ceoyoyo ( 59147 )

              Spoken like someone who doesn't have the slightest idea what the hell they're talking about.

              Congrats. You are drowning in available information and successfully refuse to partake.

    • What are you talking about? There are probably a billion people who've been sick with covid by this point.

    • AI is not really AI. Duh.
      AI does not work. Duh.

    • There aren't enough patients sick with covid yet for deep learning. The problem with deep learning in the medical field is the (relatively) small size of the data sets.

      There are plenty of patients... but there isn't enough data. Because health data is private.

      A hospital can use data from its own patients (because it's allowed to use tools to aid in diagnosis of its own patients). But it can't distribute or publish the data, at least not at the individual-patient level needed to be useful for AI diagnosis; and it can't get data at that level from other hospitals without each individual patient consenting to their data being transferred.

  • A bunch of bureaucrats want to create a new contract that forces folks under existing contracts to do what with data?

    • by AmiMoJo ( 196126 )

      Damned if they do, damned if they don't.

      China gets criticised for not releasing data early enough and for anonymising and aggregating patient data to protect privacy (because if the CCP does it then it's to cover something up).

      On the other hand, if your country happens to be seeing a pandemic, you don't want any of your data being shared.

  • Sure sounds a lot like snake oil or a number of other unregulated 'cures' and pharmaceutical-like products. Lots of claims and little proof.

  • It is mostly bad and insufficient data and not necessarily AI (Machine Learning) being useless.
    • It is mostly bad and insufficient data and not necessarily AI (Machine Learning) being useless.

      If you don't have the data then the AI is useless.

      It reminds me of all the contact tracing apps. If you decide you want to solve COVID with an app then contact tracing is it, but the people trying to solve COVID were really wasting their time (and precious public attention) promoting contact tracing apps.

      Same with AI. If you want to use AI to solve COVID then sure, classification on X-rays or physician notes is the way to go. But I haven't heard of any big wins involving AI and other medical conditions,

  • by gurps_npc ( 621217 ) on Sunday August 01, 2021 @05:29PM (#61645131) Homepage

    If you give an AI a photo such as an x-ray, it can look for tiny patterns. Similar for an audio recording. These have LOTS of data to go deep into: things too small for a doctor to notice casually.

    But for Covid and similar things, it is analyzing things like doctors' notes on what patients answer.

    In effect, you are only analyzing the things the doctors notice, which is minimal data.

    So you only recognize Covid if the doctor did.
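    The ceiling this comment describes is easy to simulate. In the toy example below (all numbers invented), the true label depends on a fine-grained signal, but the "note" only records whether the signal was obvious enough for the doctor to flag; a model trained on the flag can't do better than the flag:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000

# A fine-grained signal (e.g. a subtle image feature) drives the label...
signal = rng.normal(size=n)
label = (signal + 0.5 * rng.normal(size=n)) > 0

# ...but the note only records whether it was obvious enough to notice.
fine = signal.reshape(-1, 1)
noticed = (signal > 1.0).astype(float).reshape(-1, 1)

for name, feats in [("fine-grained signal", fine), ("doctor-noticed flag", noticed)]:
    clf = LogisticRegression().fit(feats[:4000], label[:4000])
    auc = roc_auc_score(label[4000:], clf.predict_proba(feats[4000:])[:, 1])
    print(f"{name}: test AUC = {auc:.2f}")
```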

  • My AI bot caught Covid. Was in the eHospital for several weeks, and its wanker no longer works.

  • Artificial Intelligence is no match for human stupidity.

    • We could have wrapped this thing up in 6 months or less, before a vaccine even. If only we hadn't spent the first 14 months in denial. I mean honestly, if China shit the bed, then telling everyone isn't going to get the sheets changed. When I was a young Republican voter and talk show junkie, I used to think conservatives believed in rolling up their sleeves and getting the hard stuff done; now I think they're a joke.

  • by tiqui ( 1024021 ) on Sunday August 01, 2021 @07:35PM (#61645419)

    First, it's a failure on the level most here are noting - a medical failure: apps not helping with tracking, diagnosing, or treating an illness.

    Second, however, is a failure I actually consider larger and more important. Here in the 2020s altogether too many people are addicted to cell phones and too many tech types are addicted to solving things with a little scripting, or some Python or Java, etc. Not all problems are solvable with a stupid app running on some shiny portable object! Sometimes, people must take real actions in the real physical world to actually solve a problem, but too many people these days want to sit in an air-conditioned office and hammer a keyboard and then claim to have done something to solve a problem - and it never crosses their minds to put down the tech and put on some shoes and some gloves and step out to do something concrete. It's like some new form of virtue signalling. Imagine an alternate history where the child asks grandpa "what did you do in the war?" and the response is "I wrote an app!".... very sad.

    It's often been said that when you give a kid a hammer, the kid thinks the whole world is made of nails - we all need to step back, take a deep breath, and accept that the whole world is not fixable with an app.

  • by gweihir ( 88907 ) on Sunday August 01, 2021 @07:46PM (#61645439)

    All it can do, given a _good_ training set, is badly but cheaply copy what experts can do.

    There seems to be this misconception that AI can do more or do things better than human experts. That is fundamentally wrong. First, there are a lot of things that present AI (and that includes all known scientifically sound theoretical models as well) will never be able to do. Into that realm falls anything requiring insight or understanding. AI is as dumb as bread. It has absolutely no understanding of anything. All it can do is fake things. Second, faking it can work for a lot of applications, but it will always be worse than when a real expert does it. Now, there are quite a few application areas where the people doing it are routinely not experts, for example driving. There are quite a few application areas where the level of skill needed is actually very low. These are areas where AI can make a difference. Most of what it will do there is eliminate low-skill jobs though, i.e. jobs of people that really cannot up-train.

    In short, expecting AI to make a difference in anything new or not very well understood is foolish.

  • The algorithms are only as good as the user. They should borrow a method from software: two programmers, one writing the software, the other verifying it. One medical professional asking the questions and one IT techie working the computer.
  • A combination of HIPAA rules and changing definitions has made the data pretty poor quality. Everything from what is meant by a "death due to covid", to "symptomatic", to how tests are distributed, to what tests are used has changed by location and time. Meanwhile the disease has mutated, producing new variants with different behaviors.

    Meanwhile HIPAA makes it difficult to associate COVID cases with other conditions and treatments. I think this has been the biggest barrier to consistent data sets - p
  • All that hype about how Any Day Now the AI researchers were going to release their CovidCough app that was supposedly 95% accurate at identifying even asymptomatic cases based on nothing more than recording a forced *cough cough cough* on your phone.

    Then it was July.

    Then it was August. And still no release.

    And then I sat and sat as the case count turned vertical that godawful winter, wondering to myself, "Obviously it's not 95% in real life but even 65% would be a godsend for being able to punch ba
  • You can't start with no information. Hopefully, you start with pure, unbiased observations and not just a few but all of them without cherry-picking. I'm guessing that that didn't happen and the programmers made some assumptions. What were they?

  • AI/ML, while being seriously interesting, are the new buzzwords. I'm not surprised to see these solutions being pitched where they either make no sense or are something simpler branded as AI. Blockchain, AI, ML. You can't sell a toasted sandwich in California without claiming it 'utilises' one of those technologies.

  • ... the data is crap because the medical system in general has no clue how to collect it, nor any inclination to do so. Here's the situation:

    Data is an after-thought - the mentality is almost exclusively on the here-and-now. While good for an emergency situation, it means the data collected is usually relevant to _post_ treatment. Data is rarely collected until after something has already gone wrong and been addressed. That's why there's no data for early detection ... nobody collects data


  • Right now, ML seems to be fixated on black box algorithms that produce an output conclusion that should be "good enough" for most users while obfuscating any of the details (or simply using methods that aren't interpretable). It is very clear to me that we need to move towards a 100% interpretability standard. I participated in one of these covid treatment prediction projects and I'll say the focus from the medical research staff was on "a simple decision metric to add into already existing metrics to hel
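    For contrast with the black-box default, a sketch of what a fully interpretable decision metric can look like: a plain logistic regression whose every weight reads as an odds ratio. The feature names and data below are invented for illustration and have nothing to do with the project mentioned:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Invented binary clinical flags; a real tool would use validated variables.
feature_names = ["age_over_65", "spo2_below_94", "crp_elevated", "lymphopenia"]
X = rng.integers(0, 2, size=(500, 4)).astype(float)
logits = X @ np.array([1.2, 1.8, 0.9, 0.7]) - 2.0
y = rng.random(500) < 1.0 / (1.0 + np.exp(-logits))  # simulated outcomes

model = LogisticRegression().fit(X, y)

# Each coefficient is directly auditable as an odds ratio, the kind of
# simple, contestable metric clinicians can sanity-check.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: odds ratio ~ {np.exp(coef):.2f}")
```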

"Everything should be made as simple as possible, but not simpler." -- Albert Einstein

Working...