AI IBM Medicine Technology

IBM Says Watson Health's AI Is Getting Really Good at Diagnosing Cancer (51 comments)

An anonymous reader shares a report: In deciding on cancer treatment, doctors often get together in a "tumor board" to go over the options. IBM's Watson now sits in on those meetings in a few hospitals, such as in South Korea and India -- and it generally makes the same calls that a human expert would. So says IBM in a series of studies it's presenting this weekend at the ASCO cancer treatment conference in Chicago. "It's not making a diagnosis. That's not what we set out to do," says Andrew Norden of IBM's Watson Health division. "They will run Watson Oncology in a tumor board and sort of get another external opinion." Watson's "concordance rate" -- the degree to which it agrees with human doctors -- ranged from 73% to 96%, depending on the type of cancer (such as colon cancer) and the particular hospital where the study was done (in India, South Korea, and Thailand).
This discussion has been archived. No new comments can be posted.

  • Mixed Messages (Score:4, Informative)

    by Anonymous Coward on Thursday June 01, 2017 @07:28PM (#54530299)

    Title: "IBM Says Watson Health's AI Is Getting Really Good at Diagnosing Cancer"
    Summary: "'It's not making a diagnosis. That's not what we set out to do,' says Andrew Norden of IBM's Watson Health division."

    • Re:Mixed Messages (Score:5, Informative)

      by hey! ( 33014 ) on Thursday June 01, 2017 @08:29PM (#54530621) Homepage Journal

      Well, that's the headline writer's fault. My understanding is that a tumor board isn't about making a diagnosis; it's about deciding between alternative modes of treatment. If your doctor happens to be a surgical oncologist, a multidisciplinary board is less likely to have a systematic bias toward surgery over chemotherapy.

  • Where does it end? (Score:3, Interesting)

    by UPZ ( 947916 ) on Thursday June 01, 2017 @07:30PM (#54530327)
    And soon it will put doctors out of business. Too bad for the young graduates who finish 10+ years of training with staggering student loans, only to find no jobs.
    • Star Trek had doctors, and they had better tech than we do now. Someone still has to make treatment decisions.

      • by Anonymous Coward

        Didn't Voyager use an AI doctor who presented as a hologram?

      • That's one future. Another is the autodoc from Ringworld, Elysium and Passengers. Just climb your sick self in, shut the lid, and the machine fixes you right up.

        • Just climb your sick self in, shut the lid, and the machine fixes you right up.

          . . . and if it can't fix you up, you are already right there in your coffin, ready to be buried . . . or shipped to the Soylent Green factory.

          This definitely would streamline the whole process.

    • And soon it will put doctors out of business. They have a union to stop that.

    • by Anonymous Coward

      Diagnosis requires knowing what questions to ask. If you rely on patient-reported symptoms and lifestyle questions, you're going to have a lot of misdiagnoses. Even if you run a battery of tests, you'll have so many false positives that it's hard to make out what's going on. A standard blood panel has more than 20 tests, so at least one of them is likely to be outside the 95%-confidence "normal" range even in a healthy patient.

      AI isn't going to help with surgery; as yet robotics is only useful in very controlled
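The blood-panel point in the comment above can be checked with a quick back-of-the-envelope calculation. This is a sketch that assumes the 20 tests are statistically independent and that each "normal" reference range covers 95% of healthy patients, as the comment states:

```python
# Chance that at least one of n independent tests falls outside its
# 95% "normal" reference range in a perfectly healthy patient.
n = 20                          # tests in a typical blood panel (figure from the comment)
p_all_normal = 0.95 ** n        # probability every single test lands in range
p_any_flagged = 1 - p_all_normal
print(f"{p_any_flagged:.0%}")   # roughly 64%
```

So even under these idealized assumptions, roughly two out of three healthy patients get at least one "abnormal" result, which supports the commenter's false-positive worry.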

  • Who is right? (Score:5, Interesting)

    by manu0601 ( 2221348 ) on Thursday June 01, 2017 @07:41PM (#54530379)
    Watson agrees with humans in 73% to 96% of the cases. But who is right when it disagrees, the human doctor or Watson?
    • AlphaGo.
    • by SharpFang ( 651121 ) on Thursday June 01, 2017 @08:04PM (#54530511) Homepage Journal


    • And maybe more importantly, if it disagrees with the humans, can we figure out why? This is a problem with a lot of machine learning applications, but there aren't many where understanding the decision-making process is more vital than it is here.

      • by AHuxley ( 892839 )
        Few nations want to fund or do the real epidemiology. It usually reveals pollution, problems with military/government or private-sector production lines, poor maintenance, substances once passed off as safe, missing filters, and a lack of expected government inspections going back decades.
        No government wants heavy-metal issues discussed in public and traced back to some weapons production line, or aircraft repair problems that resulted in sick workers.
        Unions and employers did nothing as workers got sick. Governments wanted jobs and a p
      • by ceoyoyo ( 59147 )

        This is a bigger problem with non-machine-learning algorithms. People have a great deal of difficulty figuring out how they know what they know. Sometimes they're honest and say they don't know; sometimes they're happy to make something up.

        There have been studies on physicians specifically. Students tend to follow diagnostic criteria; once you get good at it, you don't anymore.

        With the machine learning algorithms you can crack them open and poke around to your heart's content. And the "it's a black

    • by AHuxley ( 892839 )
      Depends on a nation's skill at producing experts who can read results, and on the people they educate over the decades.
      If a nation has poor-quality staff, poor tracking of results per doctor, and no peer review, a system will be sold as being better.
      A human who writes the books and book chapters and teaches decades of new staff will have the skills, along with their colleagues.
      Teaching hospital vs. a lab that has limits and finds staff who can do the job to some standard.
    • Watson agrees with humans in 73% to 96% of the cases. But who is right when it disagrees, the human doctor or Watson?

      There are plenty of cancers where there is a philosophical difference in how to approach them surgically, and there are questions that make a particular study less applicable in a given circumstance. For example, I know of a stomach cancer case where one doctor advocated strongly for open surgery to excise the tumor, repair the stomach, and perform a radical lymphadenectomy followed by chemo, while another surgeon advocated strongly for chemo followed by endoscopic surgery and a more limited lymphadenectomy.

  • by Roger W Moore ( 538166 ) on Thursday June 01, 2017 @07:49PM (#54530417) Journal
    Why are they worried about making the same calls as doctors? What they should be doing is (obviously after the fact) comparing it to whether the patient actually had cancer. Nobody cares that Watson might only agree with doctors 73-96% of the time if, overall, it catches more cancers.

    In fact, even if it has a comparable success rate but disagrees with doctors that's great because it means it is catching cancers that doctors are likely to miss.
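The parent's argument can be made concrete with hypothetical numbers. Purely for illustration, assume both the doctors and Watson catch 90% of true cancers and that their misses are independent (both figures are assumptions, not from the article):

```python
# Hypothetical illustration: two screeners with equal sensitivity whose
# misses are independent catch more cancers together than either alone.
p_doctor = 0.90             # assumed fraction of true cancers a doctor catches
p_watson = 0.90             # assumed fraction Watson catches
p_both_miss = (1 - p_doctor) * (1 - p_watson)   # relies on the independence assumption
p_either_catches = 1 - p_both_miss
print(f"{p_either_catches:.2f}")                # 0.99
```

Under these assumptions, a second opinion that sometimes disagrees raises the combined catch rate from 90% to 99%; if the two always agreed, the second opinion would add nothing.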
  • it agrees with those who control the power to disconnect it.
  • Despite the inevitable point that a computer surpasses human ability to diagnose and treat ailments and disease,

    might there not still be a place for humans as middlemen to broker the information to a fellow human?

  • *types in "I have a headache"*

    Google: You have cancer!

  • by Gravis Zero ( 934156 ) on Thursday June 01, 2017 @08:04PM (#54530509)

    IBM needs to up their game because WebMD has been diagnosing me with cancer for years. ;)

    • by Anonymous Coward

      I'll say, wake me up when it cures MS (not the software giant).

    • Maybe Watson can cure IBM's stagnating business and stock price . . . ?

      Dr. Warren Buffett already threw in the towel on that patient.

  • Is Watson just giving a yes/no answer, or is it actually understanding the reports and suggesting courses of action? Can Watson catch an X-ray or other report that is total BS, or one suspicious enough to suggest human error? Still, this is definitely progress.
  • A doctor makes 20x what a retail worker makes and is likely easier to automate. Retail workers aren't paid much and can stock shelves, fix broken things, and do a multitude of tasks. They won't all be replaced by online shopping; I still need my immediate gratification. Doctors, though, are very highly paid, and my family doctor only orders tests and prescribes things. Evidently in Canada she spends over half her time doing soul-crushing paperwork. There is no reason a computer can't 1) listen to symptoms,
  • Isn't it reasonable to ask the Slashdot editors to update the title and add an "EDIT: ..." disclaimer to the summary? The title outright contradicts the summary.

    It's not like Slashdot has printed millions of copies of a newspaper, it's not an epic task to change this.

    I lament that there don't seem to be any online news sources that do the minimum to notice or correct their mistakes, whether directly in-place or as a retraction / correction after the fact.

    Please prove me wrong Slashdot!

We can found no scientific discipline, nor a healthy profession on the technical mistakes of the Department of Defense and IBM. -- Edsger Dijkstra