IBM Promised Its AI Platform Watson Would Be a Big Step Forward in Treating Cancer. But After Pouring Billions Into the Project, the Diagnosis Is Gloomy. (wsj.com)

Can Watson cure cancer? That's what IBM asked soon after its AI system beat humans at the quiz show "Jeopardy!" in 2011. Watson could read documents quickly and find patterns in data. Could it match patient information with the latest in medical studies to deliver personalized treatment recommendations? "Watson represents a technology breakthrough that can help physicians improve patient outcomes," said Herbert Chase, a professor of biomedical informatics at Columbia University, in a 2012 IBM press release. Six years and billions of dollars later, the diagnosis for Watson is gloomy [Editor's note: the link may be paywalled; alternative source]. WSJ: More than a dozen IBM partners and clients have halted or shrunk Watson's oncology-related projects. Watson cancer applications have had limited impact on patients, according to dozens of interviews with medical centers, companies and doctors who have used it, as well as documents reviewed by The Wall Street Journal. In many cases, the tools didn't add much value. In some cases, Watson wasn't accurate. Watson can be tripped up by a lack of data in rare or recurring cancers, and treatments are evolving faster than Watson's human trainers can update the system. Dr. Chase of Columbia said he withdrew as an adviser after he grew disappointed in IBM's direction for marketing the technology. No published research shows Watson improving patient outcomes. IBM said Watson has important cancer-care benefits, like helping doctors keep up with medical knowledge.
  • by Anonymous Coward on Monday August 13, 2018 @02:37PM (#57118028)

    after they replaced the HR system, no one could figure out how.

  • >> Six years and billions of dollars later

    If it was IBM's gamble, then nothing of value was lost. If, however, the billions were invested by medical teams duped by impossible promises, then that's a different story.

    (Story is paywalled and I'm too lazy to read TFA before commenting.)
  • I doubt IBM poured billions into oncology Watson. All it was doing was creating recommendations for doctors from reading patient reports and treatment research reports.

    IBM has surely poured billions into Watson overall, and it's their biggest future product. The oncology work seems like a small side project.

    • WATSON is able to access existing knowledge. A.I. in general is good at searching and applying known problem-solving methods. But can it think outside the box? It does not look good.
  • by mwvdlee ( 775178 ) on Monday August 13, 2018 @03:06PM (#57118228) Homepage

    Judging from the other comments, IBM's AI system has improved human vision to a perfect 20-20 hindsight.

  • watson's finger (Score:4, Informative)

    by trb ( 8509 ) on Monday August 13, 2018 @03:08PM (#57118232)

    Watson beat people at Jeopardy because it always got to answer first: its button-pressing finger was faster than a human's button-pressing response. A fairer assessment of Watson's Jeopardy-playing abilities vs. humans would have Watson respond with the same button-mashing-delay profile as its competitors. Beyond that, the relevant question is not whether Watson can beat humans at Jeopardy, or mash a button faster than a human, but whether it can analyze data better than a human to detect cancer (or solve whatever medical problem). And for the most part, it doesn't matter whether the answer comes back in 10 ms, 300 ms, a minute, or an hour. Like with any other tool, the question is whether it can help get the job done better for a reasonable price.
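    (As a toy illustration of that buzzer point, a Python simulation of the buzz-in race; both latency profiles below are pure assumptions, not measured values:)

        import random

        def watson_buzz():
            # assumed near-constant actuator latency, ~10 ms
            return random.gauss(0.010, 0.002)

        def human_buzz():
            # assumed human reaction time, ~150 ms and far more variable
            return random.gauss(0.150, 0.050)

        trials = 100_000
        watson_wins = sum(watson_buzz() < human_buzz() for _ in range(trials))
        print(f"Watson buzzes first in {watson_wins / trials:.1%} of trials")

    Under those assumptions Watson wins essentially every race, which is exactly the "same delay profile" objection.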

    • also, nobody is expecting Ken Jennings to go around diagnosing cancer.
    • Re: watson's finger (Score:2, Informative)

      by Anonymous Coward

      Watch the PBS NOVA episode on Watson. They accounted for the delay for humans to buzz in and intentionally made the computer wait to give a "fair" chance. The rest of your post is spot on about fast turnaround not being applicable in all use cases.

      • by Anonymous Coward

        I saw that special, and they mostly just uncritically reported what IBM was saying, which was a tad one-sided. Ken Jennings talks in more detail about the buzzer advantage (or not) in this blog post [ken-jennings.com]. While he starts off by saying "[s]ome have called this an unfair advantage; I’m inclined to think of it as a fair one", he goes on to say this:

        [I]t’s certainly true that Watson needed that speed advantage to hang with top human players.

        I think that's a key quote. He writes quite a bit to defend IBM, but that quote undercuts the defense.

  • because all AI can do is manage the information it is given. Could it make a leap? Yes, if the right information is present and the right questions are asked.
    Today's AI is not really there yet; it is just a big automated filter-and-matching machine. There really is not any intelligence behind it ATM.

    No one has figured out how to program a concept into data. No one knows yet how to program thought or consciousness into an algorithm. At least none that I know of.

    Just my 2 cents ;)
  • by Anonymous Coward

    I know the Washington Post does it but that doesn't mean that putting periods in headlines looks any less fucking retarded. And don't go starting with Huffington Post "This Is The Most Scandalous Detail Of What Happened, And Here's How You Should Feel About It" headlines either. If I wanted entertainment/news I would go to Maddox's site because even though he sucks, he's still better at it than any of you ever will be.

  • One of the problems with a field like oncology (or medicine in general) is that the AI has to rely on training from humans, using source material generated by humans. Which leaves it with the same problem humans have: research is fast-evolving, sometimes biased, incomplete, or experimentally flawed, and oftentimes contradictory from study to study. Seriously, just go look up any complex biomedical subject on PubMed and start reading studies. You will find results all over the place. This is why meta-analyses exist.
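    As a quick illustration of why meta-analyses exist: a fixed-effect meta-analysis pools contradictory studies by weighting each effect estimate by its inverse variance. A hedged Python sketch, with effect sizes and standard errors invented purely for illustration:

        # fixed-effect (inverse-variance) pooling of noisy study results
        studies = [  # (effect_estimate, standard_error), made-up numbers
            (0.30, 0.10),
            (-0.05, 0.20),
            (0.18, 0.08),
        ]

        weights = [1 / se ** 2 for _, se in studies]
        pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
        pooled_se = (1 / sum(weights)) ** 0.5
        print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")

    Noisy, biased, contradictory studies are exactly the raw material Watson's trainers had to work with.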
    • by ceoyoyo ( 59147 )

      It's not spitting out nonsense (generally), it's just not doing better than a person would.

      The reason is that going off datamining isn't guaranteed to give you better results. You have to have a situation where all that data is actually meaningful.

      The problem with oncology is that there are some cancers where a specific mechanism can be targeted, there's a test for that situation, and a specific drug. A human can read a + on a piece of paper and prescribe the appropriate drug just as well as Watson can.

      The other cancers aren't so clear-cut, and there Watson does about the same as a person would.

      • by EvilSS ( 557649 )

        So again, Watson does about the same as a person would.

        Except if you read up on it, it didn't. It did worse, in some cases giving out completely inappropriate treatment suggestions that a newly minted oncology resident would know are bullshit. Much of this has been attributed to mistakes and biases made in its training.

          • If the overall performance is similar to a human's, but some of the machine's mistakes are as easily identified by a human as you presume, then having a human review the machine's output would give a better result than either the human or the computer alone.

          Alas, it was just bullshit, and the humans also make mistakes that look stupid to other humans.
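            (A toy version of that arithmetic, with error rates that are pure assumptions:)

                # assume machine and human each err 10% of the time, and a
                # reviewing human catches half of the machine's errors (the
                # "obviously bullshit" ones); all numbers are made up
                machine_error = 0.10
                human_error = 0.10
                caught_fraction = 0.5

                reviewed_error = machine_error * (1 - caught_fraction)
                print(f"machine alone: {machine_error:.0%}, "
                      f"human alone: {human_error:.0%}, "
                      f"machine + human review: {reviewed_error:.0%}")

            Under those made-up numbers the hybrid halves the error rate -- but only if the machine's mistakes really are that easy to spot.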

          • by EvilSS ( 557649 )
            I'm not presuming, I'm stating what has been reported from the people using it. If the results are no better, and sometimes worse, than a human then it is wasting time and money. Having an under-performing co-worker doesn't make the rest of the team better. It makes them less efficient.
  • For AI to shine with anything this complicated, you really need detailed and consistent testing. This is where tech needs to go first in medicine. In very many areas, it would help to have a massive increase in testing and fidelity of the tests.

    After the success of the genome project, we should have moved to a massive effort to develop cost efficient full system scanning and testing instead of starting the brain project.

    We should launch a project to figure out how to measure every aspect of health imaginable.

  • by Rick Schumann ( 4662797 ) on Monday August 13, 2018 @03:35PM (#57118416) Journal
    I've said it before, I'll keep saying it: until we actually understand how a biological brain produces the phenomena we call 'thinking', we will not be able to create 'machine intelligences' that match or exceed human beings. Period. It's 'magical thinking' to keep hooking up more and more processors and throw more and more data at the same half-assed software and expect it to suddenly be smart and cognitive like a human brain. 'Deep learning algorithms' are just a very small part of the total answer, and that's all they've been obsessively focusing on.

    Now, what they should be investing 'billions and billions of dollars' in is research and development of newer, better instrumentation for observing a living brain in action (and I do NOT mean 'a better fMRI'; I mean invent something that's a new and different approach). Only when we can see the total system in action will we even have a chance to understand how it works, the problem being that once it's dead, it's dead, and dissecting it isn't going to show you what you need to see.
    • by ljw1004 ( 764174 )

      I've said it before, I'll keep saying it: until we actually understand how a biological brain produces the phenomena we call 'thinking', we will not be able to create 'machine intelligences' that match or exceed human beings. Period. Now, what they should be investing 'billions and billions of dollars' in is research and development of newer, better instrumentation for observing a living brain in action (and I do NOT mean 'a better fMRI'; I mean invent something that's a new and different approach).

      There's a HUGE market today for the kinds of things that the current machine-learning approach already does plenty well enough at, and it looks like there is a lot more room for growth: license-plate recognition, facial recognition, image recognition, medical imaging recognition in the case of DeepMind. It looks like the billions that are being invested today are a good investment.

      And you propose switching that investment to a speculative thing that might bear fruit in 50-100 years' time, and if it did then the result would be a general-purpose intelligence that replaces a lowly-paid human being? Why should someone invest billions in that?

      • That's the point: it DOESN'T 'do the job plenty well enough'. It always falls short of the mark because it has ZERO capacity to actually THINK; your dog or cat has better cognitive ability. And people will trust this half-assed excuse for AI too much, and disasters will happen.

        And you propose switching that investment to a speculative thing that might bear fruit in 50-100 years time, and if it did then the result would be a general-purpose intelligence that replaces a lowly-paid human being? Why should someone invest billions in that?

        They're putting short-term profits ahead of something that isn't garbage. Face it: the so-called 'AI' they keep trotting out has had billions invested in it, thinking it's going to be Just Another Design Cycle, and it turns out that it falls short.

        • by ljw1004 ( 764174 )

          That's the point: it DOESN'T 'do the job plenty well enough'. It always falls short of the mark because it has ZERO capacity to actually THINK; your dog or cat has better cognitive ability. And people will trust this half-assed excuse for AI too much, and disasters will happen.

          I think what we've discovered is that the "capacity to actually think" is by and large unimportant for most of the needs we have -- good-enough large scale image recognition, good-enough medical imaging assessments, and others that I listed.

          Disasters? You'll have to spell out why you say the "ability to think" or general-purpose intelligence will lead to fewer disasters rather than more. Personally, I reckon it would lead to more disasters just through sheer complexity and unpredictability and un-debuggability.

          • I think what we've discovered is that the "capacity to actually think" is by and large unimportant for most of the needs we have

            Yeah? Who the hell is this 'we' you're referring to? Not anyone I've ever talked to. I think you're making that up, and the 'we' is actually just 'you'.

            If a "thinking" industrial robot kills a human, how the heck will we debug that or fix it?

            At least you could then ask it why it did what it did, instead of the programmers who wrote it telling you "We have no idea why it did that," which is the current 'state of the art' in AI; even the programmers have no idea what's going on 'under the hood' when it's running.

            • by ljw1004 ( 764174 )

              [I think what we've discovered is that the "capacity to actually think" is by and large unimportant for most of the needs we have] ... Yeah? Who the hell is this 'we' you're referring to? Not anyone I've ever talked to. I think you're making that up, and the 'we' is actually just 'you'.

              As you wrote, "that's all they've been obsessively focusing on". The "they" in your sentence are the ones who presumably think that deep learning is good enough for their purposes, else they wouldn't be pouring their billions into it. I'm just describing to you what the market has perceived, which you yourself observed too.

              • Sure, and what I'm saying is that they know it's garbage and they're selling it anyway because otherwise they know investors and stockholders will crucify them. Meanwhile if it's anything involving possible harm to or loss of human life (e.g., self-driving cars) their legal departments have assured them that the projected losses from paying out settlements will be trivial compared to the profits.
      • I never understood why people think image recognition is new "AI". It isn't. License plate readers (and image recognition) have been around for decades.
        • by ljw1004 ( 764174 )

          I never understood why people think image recognition is new "AI". It isn't. License plate readers (and image recognition) have been around for decades.

          Image recognition was indeed around for decades. It was based on convolutions for edge detection and Haar cascades for face recognition. You'll have used these if you had a camera with face detection up to a few years ago. If you've coded with OpenCV you'll have used these APIs, e.g. https://docs.opencv.org/3.4.1/... [opencv.org]

          It was an old technology that had run its course and really was a dead end. It wasn't making progress. It required too much custom human coding for things you wanted to recognize, and it was hit-or-miss.
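          (For the curious, the classic pre-deep-learning API looks roughly like this -- a minimal sketch assuming the opencv-python package and a stand-in "photo.jpg":)

              import cv2

              # the frontal-face Haar cascade that ships with OpenCV
              cascade = cv2.CascadeClassifier(
                  cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

              img = cv2.imread("photo.jpg")  # stand-in image path
              gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
              faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                               minNeighbors=5)
              print(f"found {len(faces)} face(s)")

          Everything you want to recognize needs its own hand-built cascade file -- the "too much custom human coding" complaint above.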

    • > until we actually understand how a biological brain produces the phenomena we call 'thinking', we will not be able to create 'machine intelligences' that match or exceed human beings.

      I don't know; depends on your definitions. To me, an intelligent machine is defined by its behavior, not by its internal design or building materials. We can probably build something that behaves very closely to a human, even though internally it's built of switching silicon, or ants running through tubes.

      Historically, our

    • by Kjella ( 173770 )

      It's a dead end if you want to build Lt. Cmdr. Data, but honestly I just want a glorified Roomba. I mean, we struggle to make a burger-flipping robot; imagine having a chef in your kitchen 24x7x365, and for bonus points it'll set the table, be your waiter and clean the dishes. And that can't just vacuum the floors but scrub the toilet, dust the furniture, rinse the sink, clean the windows and so on. And that can take my dirty laundry, sort it, wash it, dry it, iron my shirts and hang them in my closet. I don't need it to think, just to do the chores.

  • Not yet there (Score:5, Informative)

    by Artem S. Tashkinov ( 764309 ) on Monday August 13, 2018 @03:42PM (#57118470) Homepage

    Maybe because finding patterns without actually understanding anything is not really "intelligence". The AI hype is slowly dying, and even non-IT/non-science people have finally come to the realization that:
    1) AI is not a magical pill that can solve all the problems in the world
    2) There isn't much "intelligence" in AI
    3) Coding real intelligence is a lot harder than throwing reinforced convolutional neural networks at everything
    4) We do not understand how these trained networks operate, which turns them into black boxes you cannot really trust and which are bound to sometimes give absolutely wrong results.

    It's not like we understand how the human brain operates but we have certain reasons to believe it's mostly rational, intelligent and infallible (with exceptions, of course) since it has got us here - the age of technology and an improved quality and increased length of life which no other animal has been able to achieve.

    I'm not against reinventing the biological intelligence that the human beings possess but it surely looks like we haven't come close to it.

  • "No published research shows Watson improving patient outcomes."

    Would a doctor want to publish research showing that the need for their expensive services is diminishing?
    • You know who gets cancer? Doctors. Their kids. Pharma execs. Government regulators and their wives and husbands. Billionaires. Mob bosses. Everyone. Nobody is sitting on a cure.
      • Citation, please.... for all of your assertions.

        • Seriously? Ain't nobody got time for that. All I can cite is from personal experience, and I've been around cancer for quite a few years. Billionaires? Did Jobs not pay the cancer mafia maybe?
        • Please provide a citation for your claim that conversation about technical subjects is restricted to published academic materials.

  • by Jodka ( 520060 ) on Monday August 13, 2018 @04:00PM (#57118582)

    Well, professional bioinformaticians had already been working on the problems of personalized medicine and medical diagnosis before IBM and Watson got involved. If you listen to them, there is a clear consensus on how this is going to work in the future.

    Part 1: Because of a dependence of both disease and the effectiveness of treatments upon personal genetics, every person will get sequenced at birth. That will do at least three things: reduce what otherwise appears as statistical noise in assessing treatment efficacy by resolving interdependencies between the treatment and personal genetics, improve estimates of the likelihood of any individual developing a disease or disorder, and help to identify the best treatments for specific individuals.

    Part 2: Every patient treatment and its outcome become a trail logged into a massive database along with the patient's medical history and genetics. Currently, massive amounts of information about the effectiveness of treatments are discarded because records of treatment after a drug is released are not accumulated. Now, before a drug is introduced to the market, there are clinical trials on a subpopulation, and that becomes the authoritative record of the drug's effectiveness. That is a tiny fraction of the potential information out there and insufficient to assess interactions of drugs with other factors such as genetics.
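    (A minimal sketch of the kind of record Part 2 imagines logging, in Python; every field name here is hypothetical, invented for illustration, not any real standard:)

        from dataclasses import dataclass, field

        @dataclass
        class TreatmentRecord:
            patient_id: str    # pseudonymized identifier
            genome_ref: str    # pointer to the patient's sequence data
            diagnosis: str
            treatment: str
            outcome: str       # e.g. "remission", "progression"
            history: list[str] = field(default_factory=list)  # prior conditions/treatments

        record = TreatmentRecord(
            patient_id="p-001", genome_ref="seq/p-001.vcf",
            diagnosis="NSCLC", treatment="carboplatin+pemetrexed",
            outcome="partial response", history=["hypertension"])

    The hard parts, as noted below, are not the data structure but the sequencing cost and the incentives against filling it in.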

    One of the barriers to implementing that system is the price of sequencing, about $1000.00/person. Prices are projected to fall until sequencing becomes ubiquitous.

    The other barrier is privacy legislation (HIPAA) and financial incentives acting on institutions against information sharing. Despite endless government-funded initiatives to implement sharable electronic medical records, patient medical information remains siloed within provider and insurance networks. Rather than work to share information, those institutions are competing to build the largest silo. (This circumstance exemplifies a typical type of government ineptitude, which is to continuously and futilely throw enormous sums of money at a problem rather than simply and cheaply reforming the legislation and regulation giving rise to the perverse incentives which created the problem. Information sharing for medical research use to benefit personalized medicine was the main driver behind the U.K. slackening medical-records privacy, demonstrating that in the U.K. not all government officials are complete idiots.)

    Finally, the main point of this post: The bioinformaticians have wished for that future because they knew that the problem of personalized medicine was information-starved before IBM threw billions at the problem. Given adequate information, the computational solutions of personalized medicine are already known by those humans with domain-specific expertise.

    If IBM had instead invested those billions in reducing the cost of sequencing further and in lobbying government to fix the stupid incentives and restrictions acting against medical information sharing, the problem could have been solved by now. Another case of someone with a hammer looking for nails by pounding on things to see if they move.

    Had Watson been genuinely intelligent, it would have explained all that to IBM.

  • Over-hyped products and deceitful marketing failed us again.
  • The difference between this and Theranos is that I hope IBM can give back some of the invested money. They probably don't care that much, since any day now we'll see another "Watson AI will solve problem X" story, and people will put money into it again.
  • by OneHundredAndTen ( 1523865 ) on Monday August 13, 2018 @04:35PM (#57118764)
    The traditional modus operandi of the AI community remains the same as it was from its inception: some problems are solved initially with spectacular results, and optimistic extrapolations are made on the basis of such successes to other problems - which, invariably, turn out to be far more difficult to tackle, with the ensuing disappointing results. The AI community seems to have forgotten its past, and is therefore condemned to repeat it, as we are seeing with Watson and with the digital assistants, the usefulness of which remains extremely limited.
    • by sphealey ( 2855 )

      Apologies - a mouse slip resulted in bad moderation. Please mod this up - it is a good comment.

  • "A 1.6 Billion-Year-Old Accident Waiting to Happen" http://bit.ly/18a3ul5 [bit.ly]
  • by Scarletdown ( 886459 ) on Monday August 13, 2018 @05:23PM (#57119030) Journal

    They are missing out on a major opportunity here.

    IBM has this AI thingie called Watson. Right now they are tasking it with cancer related programming.

    Don't they have enough nerds there to convince their bosses to give it proctology programming?

    Then after they feed the patient a laxative that they can call No Shit Sherlock, the AI's controller can put it to work with the command, "Dig Deeper Watson."

  • He's just not as good as a young experienced one. Old doctors assume they've seen it all and 70% of the customers are diagnosed as 'stomach flu' anyway, just on general principle.
    And nobody is as old as IBM.

  • by martinX ( 672498 ) on Monday August 13, 2018 @07:04PM (#57119476)

    If "treatments are evolving faster than Watson's human trainers can update the system", wouldn't the same be true for oncologists trying to keep up with the latest and greatest?

    • by urusan ( 1755332 )

      All sophisticated modern AI systems (except possibly deep learning) suffer from the knowledge acquisition problem https://en.wikipedia.org/wiki/... [wikipedia.org] Deep learning has its own, different problems (e.g., how did it get that answer? Nobody knows!), but if things change you can basically just retrain it with the newest data.

      As your system/model gets bigger and bigger with more and more moving parts (dimensions, sub-models, rules, algorithms, data, etc.), it becomes more and more brittle, because all these different parts have to be kept consistent with one another.
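      (The "just retrain it" loop the parent mentions, sketched with scikit-learn's incremental-learning API; the features and labels are synthetic stand-ins, not real clinical data:)

          import numpy as np
          from sklearn.linear_model import SGDClassifier

          model = SGDClassifier(loss="log_loss")
          classes = np.array([0, 1])

          for month in range(12):  # fold in new evidence as it arrives
              X_new = np.random.rand(100, 20)       # stand-in case features
              y_new = np.random.randint(0, 2, 100)  # stand-in outcomes
              model.partial_fit(X_new, y_new, classes=classes)

      Contrast that with a rule-based system, where every change means a human expert editing the knowledge base by hand.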

  • by Sqreater ( 895148 ) on Tuesday August 14, 2018 @06:23AM (#57121602)

    "...treatments are evolving faster than Watson's human trainers can update the system"

    And if you do away with the mass of humans doing a particular area of expertise and turn it over to "AI" you will freeze that area at that level of AI expertise. The AI has no motivation array. It can't look for ways to "do it better," as humans constantly do. It cannot advance the field. AI is potentially a human disaster if too much trust is given it. Remember, Watson was built, programmed, turned on to play Jeopardy by humans. Then it was turned off when it had satisfied the motivations of its creators. It did not WANT to play Jeopardy, or anything else. It is a rock, a tool. Nothing else. Unmotivated intelligence is not intelligence. We must not rely on it.
