Statisticians Warn That AI Is Still Not Ready To Diagnose COVID-19 (discovermagazine.com) 38

Discover magazine reports: For years, many artificial intelligence enthusiasts and researchers have promised that machine learning will change modern medicine. Thousands of algorithms have been developed to diagnose conditions like cancer, heart disease and psychiatric disorders. Now, algorithms are being trained to detect COVID-19 by recognizing patterns in CT scans and X-ray images of the lungs.

Many of these models aim to predict which patients will have the most severe outcomes and who will need a ventilator. The excitement is palpable; if these models are accurate, they could offer doctors a huge leg up in testing and treating patients with the coronavirus. But the allure of AI-aided medicine for the treatment of real COVID-19 patients appears far off. A group of statisticians around the world is concerned about the quality of the vast majority of machine learning models and the harm they may cause if hospitals adopt them any time soon.

So far, their reviews of COVID-19 machine learning models aren't good: They suffer from a serious lack of data and necessary expertise from a wide array of research fields. But the issues facing new COVID-19 algorithms aren't new at all: AI models in medical research have been deeply flawed for years...

"The main problems appear to be (perhaps unsurprisingly) a lack of necessary data, and not enough domain expertise," writes Slashdot reader shirappu: The leader of the above group of statisticians, Maarten van Smeden, pointed to a lack of collaboration between researchers as a roadblock to developing truly accurate models. "You need expertise not only of the modeler," he said, "but you need statisticians, epidemiologists and clinicians to work together to make something that is actually useful."
One biostatistician has even been arguing that with many current AI models, medical researchers are using machine learning to "torture their data until it spits out a confession."
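The "torture the data until it confesses" worry can be made concrete with a toy simulation. Everything below is illustrative and assumed, not drawn from any of the studies under review: it just shows that if you screen enough purely random "biomarkers" against an outcome, some will look predictive by chance alone.

```python
# Toy demonstration of data torture: screen many random, meaningless
# features against a random outcome and keep whichever "predicts" best.
# All names and numbers here are illustrative assumptions.
import random

random.seed(0)

N_PATIENTS = 50
N_FEATURES = 200  # candidate "biomarkers", all pure noise

# Random binary outcome (e.g. "severe" vs. "mild") and random noise features.
outcome = [random.random() < 0.5 for _ in range(N_PATIENTS)]
features = [[random.random() for _ in range(N_PATIENTS)]
            for _ in range(N_FEATURES)]

def threshold_accuracy(feature):
    """In-sample accuracy of the rule 'feature > 0.5 => severe',
    taking whichever direction of the rule scores better."""
    preds = [x > 0.5 for x in feature]
    correct = sum(p == y for p, y in zip(preds, outcome))
    return max(correct, N_PATIENTS - correct) / N_PATIENTS

best = max(threshold_accuracy(f) for f in features)
print(f"best in-sample accuracy from pure noise: {best:.2f}")
```

With 200 noise features and only 50 patients, the best in-sample "classifier" typically scores well above chance — exactly the kind of confession tortured data produces, and one that held-out validation data would make vanish.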
  • by account_deleted ( 4530225 ) on Sunday July 19, 2020 @11:39AM (#60307437)
    Comment removed based on user account deletion
    • Did you even read the summary?

      Many of these models aim to predict which patients will have the most severe outcomes and who will need a ventilator.

      • The data is already there: are you over the age of 55, and have a few significant, pulmonary-related issues? Then you're at high risk and if infected would need the most help. Pretty darn simple. Unless you're trying to use "AI" to go all Minority Report on #RandomPerson...
        • by Cylix ( 55374 )

          I think the article is really saying we didn’t get the answers we wanted.

          “Why isn’t everyone dying under this model? It’s all wrong! This AI doesn’t ship until Windows doesn’t run on OS2!”

        • by jd ( 1658 )

          Still not what the article is about. RTFA and get back to us.

          • Yep, the article is BS. AI doesn't exist today, and you don't need it anyway to determine who is at risk: people who are over the age of 55, and those with pulmonary risks.
            • by jd ( 1658 )

              It's not about who is at risk, in the general sense. Besides, the virus lacerates the insides of all age groups and that kills more than the lung infection.

              It's about who is going to get which effect, which is totally different.

              • Besides, the virus lacerates the insides of all age groups and that kills more than the lung infection.

                Would need an actual citation for that, because the CDC only has one rigorous tiny study [cdc.gov] of 8 patients that showed that. And it was for people from a nursing home, with a median age of 73.5 years old.

    • by ptaff ( 165113 )

      Antibody tests for COVID-19 that can be assessed in hours already exist. What's the point of this?

      Funding.

      The inevitable financial depression due to COVID-19 fast-forwards the arrival of the AI winter [wikipedia.org], so many AI shops that have been surfing on subsidies and promises of unicorns are now grasping at straws, as the clueless people giving them money out of fear of missing out realize the results are just not there, and won't be there in a year or five either.

      • I've been saying a version of what you just said for a long time now.
        So many companies have invested millions and millions into developing so-called 'AI' thinking it was Just Another Development Cycle, only to find out that it's more-or-less a dead end. Now they have investors and stockholders breathing down their necks, ready to chop their heads off if there isn't a Return on Investment. So their marketing departments are throwing the thing at every wall they can, seeing if it 'sticks' anywhere, and hyping...
    • There's some young businessman wanting to make a name for himself who thinks, "AI is the answer to everything!"

      And what none of these people get is that AI is only as good as the data that's fed to it. A core component of any mathematical learning algorithm is its ability to compare its results against reference table data, then adjust its calculations to better match the results. But if your reference table data is junk, so too shall be your "AI". And there's the problem. To my knowledge, doctors have not...

    • This isn't about diagnosing if the person has COVID. Although with a 40% failure rate for most existing tests, that would be an improvement.

      This is about determining how severe it'll get. There are seven different categories patients can fall into, and knowing which in advance is a definite benefit.
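One comment above argues that a learning algorithm just compares its results against reference data and adjusts, so junk reference data yields a junk model. That loop can be sketched in a few lines; this is a toy illustration with made-up numbers, not any real medical model:

```python
# Toy learning loop: fit a single parameter w so that w * x matches the
# reference labels, by repeatedly comparing predictions to the reference
# data and adjusting. All data here is illustrative.

# Reference ("training") data consistent with y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0    # the single model parameter
lr = 0.01  # learning rate

for _ in range(1000):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # adjust the model toward the reference data

print(f"learned w = {w:.3f}")  # converges near 2.0 on this clean data
```

Nothing in the loop ever questions the reference values themselves: if the ys were mislabeled, the same procedure would converge just as confidently to a wrong w.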

  • There is nothing terribly specific about CT or radiograph findings of COVID pneumonitis, and the findings vary significantly from patient to patient. The imaging findings also lag behind the clinical picture - O2 sat, dyspnea, etc. This project sounds more like a grab for research funding; anything with buzzwords combining medical imaging, AI, and COVID probably gets drowned in money before its authors can state their hypotheses.
  • Dear domain experts, please help us with our data. We know it might put you out of a job. (See the problem?) Weird that people can be greedy even in the middle of a pandemic, I know...
  • by Rosco P. Coltrane ( 209368 ) on Sunday July 19, 2020 @11:59AM (#60307525)

    AIs who sort of guess you might be sick - maybe - and POTUSes who "feel" a remedy will work.

    I think we should just go back to leeches and perfume: we'll have a better chance of getting cured.

  • This is a confusing article (the first one linked to). First, COVID-19 is detected using a chemical test. Why is there a need to replace it? Also, if there is a need to replace it, and if it's done with data, what is the data that can replace a chemical test? Second, if it's done using data, is the result of the analysis inherently qualitative? The last point involves the difference between the methods used in AI in industry and in scientific research. Deep learning methods, which have become the norm in AI, are commonly...
  • One biostatistician has even been arguing that with many current AI models, medical researchers are using machine learning to "torture their data until it spits out a confession."

    You know what we call torturing data to create a model when we don't use machine learning? Science.

    Machine learning (ML) models are simply a new kind of model. Yes, they are often opaque. Yes, they can occasionally give bad results. But they can also provide brilliant results that are otherwise inaccessible. Discounting these results...

    • You know what we call torturing data to create a model when we don't use machine learning? Science.

      While that approach is so common today, it is, in fact - not science. Richard Feynman [youtube.com] lays it out quite nicely. You make a guess, you calculate the results of your guess, you run an experiment, then you use the results to determine if your guess was wrong. If it's wrong - then you refine your guess and repeat.

      Letting data lead you results in the failed models we see so often, and is quite intellectually lazy. It takes more work and experience to do it the right way, but you end up with the proper answer.

  • You can't 'see under the hood' with these so-called half-assed excuses for 'AI'; you can't see how it came to the output you get from it. So how the ever-loving fuck can you even think of using it to diagnose medical conditions in humans? It's shoe-on-head ridiculous, right up there with letting some shitty AI decide whether to launch nuclear missiles. It's a solution being pushed by marketing departments yet again, searching desperately for a problem, and it's just not appropriate or even remotely...
  • That sums it up. How is this news?
  • That seems unfortunately very common. Experts in a buzzword field (e.g. AI) slap their expertise on a field or product they have little expertise in and call it a day. After all, the buzzword does all the magic and it's usually good enough to attract investors/grants.

    The opposite is very common too where buzzwords get slapped on otherwise functional products retroactively, e.g. all that IoT junk with abysmal security or "AI" products where the AI is a glorified if-then statement.

    Buzzwords ruin everything.

  • We'll have AI about 20 years after we have sustainable fusion. AI doesn't exist today, in any remote stretch of the imagination.
  • THAT is the problem. They want to tweak the software so it will give them the statistical results THEY want. Which is more lockdowns, more canceling this or that, more masks, and more control of humans' lives by the globalists.
  • If they are accurate enough to say with a reasonably high probability somebody has the Covid, then followup tests can be done with slower but more accurate techniques.
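The screen-then-confirm idea in that comment is just Bayes' rule. A quick sketch with illustrative sensitivity, specificity, and prevalence numbers (assumed for the example, not measured values for any real test) shows why the confirmatory step matters:

```python
# Toy triage math: how trustworthy is a positive result from a fast,
# imperfect screening test? The numbers below are illustrative assumptions.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(actually infected | screen positive), by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(sensitivity=0.90,
                                specificity=0.80,
                                prevalence=0.05)
print(f"PPV of the fast screen: {ppv:.2f}")
```

At 5% prevalence, even a 90%-sensitive, 80%-specific screen yields a PPV of roughly 0.19 — most positives are false, so the slower, more accurate follow-up test is doing the real diagnostic work.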

  • Quite frankly, AI is nothing but hype: it can't even be relied upon to direct your call to the right department in a company, let alone to make an important diagnosis. AI has a place for very well-defined problems, but with anything that involves the vagaries of humans it will always fail.
