AI Medicine Science

AI Equal With Human Experts in Medical Diagnosis, Study Finds (theguardian.com)

Artificial intelligence is on a par with human experts when it comes to making medical diagnoses based on images, a review has found. From a report: The potential for artificial intelligence in healthcare has caused excitement, with advocates saying it will ease the strain on resources, free up time for doctor-patient interactions and even aid the development of tailored treatment. Last month the government announced $305 million of funding for a new NHS artificial intelligence laboratory. However, experts have warned the latest findings are based on a small number of studies, since the field is littered with poor-quality research. One burgeoning application is the use of AI in interpreting medical images -- a field that relies on deep learning, a sophisticated form of machine learning in which a series of labelled images are fed into algorithms that pick out features within them and learn how to classify similar images. This approach has shown promise in diagnosis of diseases from cancers to eye conditions.
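The pipeline the article describes, labelled images fed to an algorithm that learns features and classifies similar images, can be sketched minimally. This toy logistic model on synthetic 4x4 "images" stands in for the deep CNNs real diagnostic systems use; every dataset, parameter, and number here is illustrative, not from any clinical system.

```python
# Toy supervised image classification: labelled images in, classifier out.
# Synthetic data only; a real system would train a deep CNN on thousands
# of clinical images.
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, lesion):
    """Synthetic 4x4 grayscale images; the 'lesion' class is brighter."""
    base = 0.7 if lesion else 0.3
    return base + 0.1 * rng.standard_normal((n, 16))

# 50 'lesion' and 50 'normal' images, centered around zero
X = np.vstack([make_images(50, True), make_images(50, False)]) - 0.5
y = np.array([1] * 50 + [0] * 50)  # 1 = lesion, 0 = normal

# Plain gradient descent on logistic loss
w, b = np.zeros(16), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / len(y)    # logistic-loss gradient step
    b -= 0.1 * (p - y).mean()

accuracy = (((X @ w + b) > 0) == (y == 1)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The "learning" is just fitting weights that separate the labelled classes; the review's point is that this kind of fitted classifier now matches human readers on the benchmarks tested.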

However questions remain about how such deep learning systems measure up to human skills. Now researchers say they have conducted the first comprehensive review of published studies on the issue, and found humans and machines are on a par. Prof Alastair Denniston, at the University Hospitals Birmingham NHS foundation trust and a co-author of the study, said the results were encouraging but the study was a reality check for some of the hype about AI. Dr Xiaoxuan Liu, the lead author of the study and from the same NHS trust, agreed. "There are a lot of headlines about AI outperforming humans, but our message is that it can at best be equivalent," she said.

  • by Anonymous Coward

    The sooner we can de-symbolize the medical profession as being these infallible super-geniuses, the better. I have never dealt with a group of sloppier thinkers and workers than doctors. They are dangerous people and can change your life forever for the worse with one wave of their magic wand.
    For a profession whose main symbol is the stethoscope, they sure don't seem to be able to listen. They are filled with biases, erroneous information, and logical fallacies.
    Please, automate as much as possible.

    • Beating doctors at diagnoses is not hard. A nurse with a flowchart can do better. Of course, a doctor with a flowchart could do even better, but they are too pompous to use them.

       

      • Re:Good (Score:4, Informative)

        by Areyoukiddingme ( 1289470 ) on Friday September 27, 2019 @03:45PM (#59244126)

        Beating doctors at diagnoses is not hard. A nurse with a flowchart can do better. Of course, a doctor with a flowchart could do even better, but they are too pompous to use them.

        Indeed.

        This latest craze looks new to Millennials, but the "AI" of the late '80s could already beat doctors. They were called expert systems, and they were a goddamned flowchart. They reliably beat doctors in 1988. Doctors are terrifyingly bad, statistically. The subject they are attempting to specialize in is too big for the human mind to hold all at once but they refuse to accept this.

        Those expert systems were carefully buried by the AMA, along with the studies that showed how much better they were than people. Several million people have died since then who would not have died if the expert system had been in use.
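The "flowchart" style of an '80s rule-based expert system amounts to ordered if/then rules over findings, first match wins. This is a toy illustration with made-up rules, not any real clinical protocol or historical system.

```python
# Toy rule-based diagnostic "flowchart": ordered condition/diagnosis
# pairs, evaluated top to bottom. All rules are illustrative only.
RULES = [
    (lambda f: f.get("fever") and f.get("stiff_neck"),
     "suspect meningitis: urgent referral"),
    (lambda f: f.get("fever") and f.get("cough"),
     "suspect respiratory infection"),
    (lambda f: f.get("cough"),
     "likely common cold"),
]

def diagnose(findings):
    """Walk the flowchart top to bottom; the first matching rule wins."""
    for condition, diagnosis in RULES:
        if condition(findings):
            return diagnosis
    return "no rule matched: refer to clinician"

print(diagnose({"fever": True, "stiff_neck": False, "cough": True}))
# → suspect respiratory infection
```

The appeal of such systems was exactly this transparency: every conclusion traces to an explicit rule, unlike a trained CNN.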

        • by jma05 ( 897351 )

          Buried? Quite conspiratorial of you.

          No one figured out how to properly integrate them into routine practice in an automated way.
          The doctors ended up being swamped by alerts that were generated either wrongly or from inadequate information.

          Earlier "AI" was rule-based and applied to clean data. Doctors often had more patient context than the data fed into the systems, so they ended up overriding them; in that frustration, they often overrode correct recommendations as well.
          Now doctors do use "algorithms", except

    • "The sooner we can de-symbolize the medical profession as being these infallible super-geniuses, "

      Agreed; as in every profession, 80% are bad at their jobs.
      Lots of doctors are doctors just because their father was one, a bit like the village idiot.

    • That is actually a reflection of how abysmal doctors are at correct diagnosis. I sometimes wonder about people who were cured of cancer: whether they even had cancer in the first place, or were misdiagnosed and had to suffer horrible treatment while they actually had a lesser illness which their bodies cured despite all the poisonous treatments.
  • With cheap, ubiquitous healthcare in the USA and the current issue with a surplus of physicians, I cannot understand why anyone would waste time and energy pursuing this.

    Oh, wait....
    • by AHuxley ( 892839 )
      Re "pursuing this."
      To sell to governments that can't educate enough of their own population as doctors/experts, and for use by NGOs and charity groups.
      A lab full of computers that works 24/7 for third- and fourth-world nations.
      The "money" is in the long-term sale and support of such networked systems.
  • Not ready yet (Score:5, Insightful)

    by Nidi62 ( 1525137 ) on Friday September 27, 2019 @03:35PM (#59244086)

    Considering how often doctors misdiagnose things I wouldn't really call this an impressive feat.

    • Considering how often doctors misdiagnose things even when they actually examine the patient, having the AI merely match a doctor when both are working from just a picture is even less impressive than it sounds, once you realize the comparison isn't against the results a doctor gets in real life.

    • by AHuxley ( 892839 )
      Then put in peer review and find the experts who make mistakes.
  • by mykepredko ( 40154 ) on Friday September 27, 2019 @03:40PM (#59244106) Homepage

    As we've seen in discussions of self-driving cars and autonomous flying vehicles, machines have basically reached the same level as humans, but the expectation is that they must be perfect. There is an implicit requirement that machines cannot make mistakes, even a mistake that a human would make.

    But we have a wrinkle here: a human doctor who makes mistakes is protected against litigation by malpractice insurance. If the AI is truly as good as a human doctor, would its malpractice insurance cost just as much on a per-patient basis? I add that final qualifier because I would expect the AI to access test data faster and to complete more diagnoses in the same amount of time a human doctor would take.

  • Will an AI have any empathy? Any emotional reaction?

    While technical excellence is fantastic, there is a human element people still tend to expect. At least for now.
    • by Anonymous Coward

      "Will an AI have any empathy? Any emotional reaction? "

      LOL if you expect that from doctors. At best they present you with rehearsed theatrics. There's no more empathy there than when you bumped into a stranger and shot them a dirty look.

    • by Shotgun ( 30919 )

      No. And when was the last time you got "empathy" from a doctor? If they cared that damn much about my feelings, they'd drop the fees a little bit.

      Besides, due to the AMA, there won't be a Dr. Kiosk that you walk up to and get a diagnosis. You'll have to talk to the doctor, who will then type what you said into a program. Kind of like how they type what you said into Google now. They will be there to tell you to put one probe in your mouth and the other in your anus, or maybe the other way around depending

    • Will an AI have any empathy? Any emotional reaction?

      Computer programs do what they are programmed to do.

      If a computer is programmed to show empathy, then it will show empathy.

      Human-level empathy is not hard to fake.

      Doctor-level empathy is even easier.

    • by AHuxley ( 892839 )
      Think of the negative side: government-rationed care, straight out of a dystopian book/movie :)
      Factor in a person's productive ability when the computer decides whether to approve treatment.
      An average, fit, younger citizen with good academic results? Treatment always approved.
      An older person on a national government pension plan? Too old for treatment; the computer is set to say nothing more can be done. No approval.
      Over 60 and the new treatment is set to never be approved. A decade later with a better per patient pr
  • by Shotgun ( 30919 ) on Friday September 27, 2019 @04:12PM (#59244260)

    Equal at diagnosis. Currently. The AI will continue to improve, while it is a hit or miss situation with individual doctors. The improvements will be shared universally, which is hit or miss with individual doctors.

    Not equal at focus. The computer will do one thing, and one thing only, and will be tireless at it. It will not be distracted by hunger, drowsiness, sexual desire, or prejudice about the outcome ("I don't think this patient has cancer, so I overlook the hard-to-see cancer").

    Not equal on work time. The computer will even make a diagnosis while everyone else is asleep.

    Not equal on cost. Hardware is cheap. More so when compared to a doctor's salary.

  • by rsilvergun ( 571051 ) on Friday September 27, 2019 @04:24PM (#59244334)
    A family member had medical complications that were more than likely due to a doctor not ordering needed tests, which, had they come back negative, the insurance company wouldn't have paid for.

    I'm not even sure the doctor knew they were doing it. Implicit biases are real, and a bitch.
  • Until we have real AI that can think, you can't trust it for something as important as this. Good thing doctors would perform their due diligence and verify any of its findings... right? RIGHT, DOCTORS? You do this, RIGHT?
    • "you can't trust it for something as important as this"

      As what? Determining that you broke a leg? Have cancer? A cold?

      Sure, some diagnoses may always be consequential enough to require human analysis to be sure they're accurate, but testing for athlete's foot may be fine with a computer. We already trust nurse practitioners and regular nurses for some ailments. This would just be an extension of that.

    • "Thinking" is a general-purpose tool. Because people can think, doctors can learn how to diagnose illnesses. Mathematicians can learn how to do math. These are not jobs that people are especially good at, without specialized training.

      Medical AI, on the other hand, is specifically tailored to diagnose illnesses. It does so because that is what it is designed to do. Just as a calculator is better than a human at math, a computer will soon be far better at diagnosing illnesses than humans.

  • The function of state medical boards is to keep medical procedures old-fashioned, and lucrative, for as long as possible, so expect them to oppose this maniacally. But what if, beside every Indian casino, there were a medical building staffed by personnel who could competitively apply these AIs to your problems?

  • As someone who lives in a rural area, finding a physician can be a real schlep! Add to that the fact that doctor error is the number 3 cause of death in the US, and you can easily make a case for taking the doctors (except maybe surgeons) OUT of medical care. A doctor is nothing but a walking database, and they really SUCK at their jobs. With AI replacing humans you get: 1) better, more accurate care, 2) MUCH lower costs, 3) more access in areas where doctors don't want to live (inner cities and states with
  • The trick isn't to make a machine that's better than a human at performing a given task. That's never been at all difficult. Hammers hit harder than hands. Screw drivers turn better than fingers. Drills, pens, buttons, anvils, hoists, cars, boats, desks.

    Just about any machine we've ever built does some task better than humans do. That's why we build them. That's what a tool is.

    What makes each and every one of these machines nowhere near as good as a human is that it needs its environment to be perfect

  • Automated general image interpretation is currently a fantasy. Diagnosis systems are where we should be focusing. AI folk treat radiological interpretation as some massive image-recognition task, a giant CIFAR challenge, when in reality it is nothing like that. Even the simplest medical imaging, the humble chest x-ray (CXR), requires near-human-level AI for useful interpretation. We can train CNNs etc. to recognise possible lesions, yet it's much harder to know which ones are important. Sensitivity
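The sensitivity point that comment trails off on comes down to confusion-matrix arithmetic: a lesion detector is judged not only on raw accuracy but on sensitivity (how many real lesions it catches) versus specificity (how many normals it clears). A minimal sketch, with entirely hypothetical numbers:

```python
# Confusion-matrix arithmetic for a diagnostic test.
# tp/fn/tn/fp counts below are made up for illustration.
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # fraction of real lesions caught
    specificity = tn / (tn + fp)   # fraction of normals correctly cleared
    return sensitivity, specificity

# Hypothetical CXR reader: 90 lesions found, 10 missed,
# 800 normals cleared, 100 false alarms.
sens, spec = sens_spec(tp=90, fn=10, tn=800, fp=100)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
# → sensitivity=0.90 specificity=0.89
```

A screening tool usually trades specificity for sensitivity (missing a lesion is costlier than a false alarm), which is why "which lesions are important" matters more than raw recognition scores.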
