
Google's Lung Cancer Detection AI Outperforms 6 Human Radiologists

Google AI researchers working with Northwestern Medicine created an AI model capable of detecting lung cancer from screening tests better than human radiologists with an average of eight years' experience. VentureBeat reports: When analyzing a single CT scan, the model detected cancer 5% more often on average than a group of six human experts and was 11% more likely to reduce false positives. Humans and AI achieved similar results when radiologists were able to view prior CT scans. When it came to predicting the risk of cancer two years after a screening, the model was able to find cancer 9.5% more often compared to estimated radiologist performance laid out in the National Lung Screening Trial (NLST).

Detailed in research published today in Nature Medicine, the end-to-end deep learning model was used to predict whether a patient has lung cancer, generating a patient lung cancer malignancy risk score and identifying the location of the malignant tissue in the lungs. The model will be made available through the Google Cloud Healthcare API as Google continues trials and additional tests with partner organizations. The model was trained using more than 42,000 chest CT screening images taken from nearly 15,000 patients, 578 of whom developed cancer within a year, during a low-dose computed tomography (LDCT) study the National Institutes of Health (NIH) conducted in 2002. Results were then validated with data sets from Northwestern Medicine.
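
To make that description concrete, here is a minimal, hypothetical Python sketch of what an inference wrapper around such a model might look like. None of the names below come from Google's actual model or the Cloud Healthcare API; they are placeholders illustrating the inputs described above (a CT volume, optionally a prior scan) and the outputs (a patient-level malignancy risk score plus a lesion location).

    # Hypothetical sketch only -- not Google's model or the Cloud Healthcare API.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    import numpy as np

    @dataclass
    class LungScreeningResult:
        malignancy_risk: float                   # patient-level probability in [0, 1]
        lesion_bbox: Optional[Tuple[int, ...]]   # (z, y, x, depth, height, width), or None

    def predict_lung_cancer(ct_volume: np.ndarray,
                            prior_volume: Optional[np.ndarray] = None,
                            threshold: float = 0.5) -> LungScreeningResult:
        """Placeholder for an end-to-end model call: CT volume(s) in, risk score out."""
        # A real system would run a 3D deep learning model here; we derive a fake
        # score from simple image statistics purely so the example runs.
        risk = float(np.clip(ct_volume.mean() / (ct_volume.max() + 1e-8), 0.0, 1.0))
        bbox = (0, 0, 0, 32, 64, 64) if risk >= threshold else None
        return LungScreeningResult(malignancy_risk=risk, lesion_bbox=bbox)

    if __name__ == "__main__":
        scan = np.random.rand(128, 256, 256).astype(np.float32)  # stand-in LDCT volume
        result = predict_lung_cancer(scan)
        print(f"risk={result.malignancy_risk:.2f}, lesion={result.lesion_bbox}")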
Comments:
  • Very likely not true (Score:3, Interesting)

    by gweihir ( 88907 ) on Tuesday May 21, 2019 @02:03AM (#58628258)

    It is probably more like Watson in the medical field: Has somewhat better accuracy, but occasionally is so far off that it would kill the patient. They stopped using it for that as a result, I believe.

    • by Freischutz ( 4776131 ) on Tuesday May 21, 2019 @03:52AM (#58628618)

      It is probably more like Watson in the medical field: Has somewhat better accuracy, but occasionally is so far off that it would kill the patient. They stopped using it for that as a result, I believe.

      My cousin died of colon cancer because a human doctor made that same mistake.

      • My cousin died of colon cancer because a human doctor made that same mistake.

        A human doctor who sees an AI as an assistant, not as a rival, would probably not have made that mistake.

      • I lost my grandmother to colon cancer, so I can sympathize. Colon cancer is not found using radiology though. It's found using a colonoscopy.

        • by carlcmc ( 322350 )
          CT colonography is one of the accepted, equivalent alternatives to colonoscopy.
        • I lost my grandmother to colon cancer, so I can sympathize. Colon cancer is not found using radiology though. It's found using a colonoscopy.

          The doctor told him not to worry when he went in and reported rectal bleeding. He was told it was totally normal. I still wish I had been intolerably annoying and insisted he go to another doctor for a second opinion. It always gives me the shivers to think that there must be as many incompetent doctors out there as the incompetent programmers I have come across. I always get a second opinion on potentially serious medical issues.

    • by K. S. Kyosuke ( 729550 ) on Tuesday May 21, 2019 @05:38AM (#58628824)

      but occasionally is so far off that it would kill the patient

      So, just like human physicians, then?

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      I'll settle for an AI that can quickly sort:

      1. The ones that are definitely NOT cancer with 100% accuracy
      2. The ones that definitely ARE cancer with 100% accuracy
      3. Not 100% sure... hand to a trained radiologist for a deeper dive

      If 1 and 2 are 80% of cases... that leaves an awful lot more time to focus on 3.
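
      A minimal Python sketch of that triage idea, assuming the AI exposes a calibrated per-scan risk score; the cutoffs and scan IDs below are made up purely for illustration:

      # Hypothetical triage sketch: route scans into three buckets by an AI risk score.
      # The thresholds are illustrative only; real clinical cutoffs would need validation.
      from typing import Dict, List, Tuple

      def triage(scored_scans: List[Tuple[str, float]],
                 low: float = 0.05, high: float = 0.95) -> Dict[str, List[str]]:
          buckets: Dict[str, List[str]] = {"not_cancer": [], "cancer": [], "radiologist_review": []}
          for scan_id, risk in scored_scans:
              if risk <= low:
                  buckets["not_cancer"].append(scan_id)          # 1. confidently negative
              elif risk >= high:
                  buckets["cancer"].append(scan_id)              # 2. confidently positive
              else:
                  buckets["radiologist_review"].append(scan_id)  # 3. hand off for a deeper dive
          return buckets

      print(triage([("scan-001", 0.01), ("scan-002", 0.98), ("scan-003", 0.40)]))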

    • Second set of eyes (Score:4, Interesting)

      by sjbe ( 173966 ) on Tuesday May 21, 2019 @06:09AM (#58628898)

      It is probably more like Watson in the medical field: Has somewhat better accuracy, but occasionally is so far off that it would kill the patient. They stopped using it for that as a result, I believe.

      Back around 2003 I spent some time working in and around a radiology lab for some Six Sigma work I was doing while my wife was doing her residency. They were using computers to assist the doctors in scanning images back then, so this sort of tech is nothing new to radiology, and presumably it has improved in the last 15 years. But they used it as basically a second set of eyes. No doctor would or should entirely trust these systems, even if they are on average better than the physician himself. Likewise, these systems can provide a really nice backup for a doctor who might miss something. People are imperfect and doctors do make mistakes. These systems are made by people and are imperfect as well. The best practice is to use systems like this in conjunction with a doctor to have the best chance of correct diagnosis and treatment. It's sort of a medical version of the "with enough eyes, all bugs are shallow" approach.

      Bear in mind that when they are doing radiology or pathology, they are essentially making a prediction. Predictions by their very nature are wrong some percent of the time. It's like playing poker: even if you play the hand perfectly you are going to lose some of the time. They see an image or some tissue on a slide and they are being asked to predict what this group of cells is going to do in the future, often with very imperfect information about what the cells are and with missing data about the disease process. Some diseases are deadly and look almost indistinguishable from something benign, and vice versa. Clinical doctors and/or medical staff routinely fail to provide relevant, complete, and/or correct data on the imaging or pathology requisition. My wife is a pathologist and she daily gets pathology requisitions with bad or misleading or incorrect clinical data.

      • by HiThere ( 15173 )

        Don't they do a biopsy if it's suspicious enough? Surely they'd do a biopsy on much less evidence than a full operation.

        • Don't they do a biopsy if it's suspicious enough?

          I'm not really sure if you are asking a question about radiology or pathology. If a pathologist is seeing it, they have already taken a biopsy or tissue sample. If a radiologist is suspicious about something, they might take a biopsy if that is possible, but not all conditions permit biopsies. It's easy to take a skin biopsy, but something in the liver might be rather difficult. And even if they do a biopsy, it doesn't necessarily clarify matters. There are lots of diseases we have a poor understanding of, or...

      • by gweihir ( 88907 )

        The thing is not the failure rate. The problem is that this "Artificial Stupidity" makes extreme mistakes on occasion that no actual doctor would make. Sure, if you keep the radiologist in the loop and if you give it simple enough tasks, it can perform to some degree. But that is not really helpful or useful.

    • Watson sucked.
      Because it was fed bad data to process. In terms of cancer treatment, Watson was fed a bunch of preliminary research, weighted nearly equally with the normal proven treatments and diagnostic measures. Also, the data that is fed in is often very limited: the financial data and appointment data are ignored in favor of just the clinical data, even though many treatment options have to fit the person's budget, lessen undue pressure on taxpayers, and ensure that they can be seen...

    • It's actually very likely true.
      I worked in medical imaging years ago. We were approaching parity with even the best radiologists, in terms of detecting breast anomalies in my case.

      Also, keep in mind here: the medical profession doesn't move very fast and is very regulated. It is highly unlikely that they'd perform surgery simply trusting the system.

      The 'accuracy' and 'savings' will most likely come in the form of screenings. If you have to scan 1000 patients, you feed them through the AI. The ones with positive hits...

    • But isn't this different from Watson in that this AI isn't making recommendations on treatment but rather is narrowly focused on detection? Certainly in the case of cancer, a patient normally gets multiple confirmations before treatment. This could be a good aid as it could lighten the load a bit: instead of needing a doctor to look for cancer in every scan, all scans are routinely checked, and any scans that the AI flags are subject to further review and additional testing.
  • Danger (Score:3, Insightful)

    by Pieroxy ( 222434 ) on Tuesday May 21, 2019 @02:07AM (#58628264) Homepage

    This is extremely worrisome to me. It's great that some AI is able to do so, but where did it learn how to do so? From radiologists, who gave it a nice training dataset.

    If this AI is used 'en masse', over time it will more or less replace radiologists' diagnoses. As a result, fewer and fewer radiologists will be able to do the job, and no more training datasets will be created. We will blindly rely on an AI with no one left who understands how it works.

    Remember the AI needs humans first to train it. It cannot function outside of this and is in no way autonomous.

    • Re:Danger (Score:5, Insightful)

      by ShanghaiBill ( 739463 ) on Tuesday May 21, 2019 @02:37AM (#58628372)

      but where did it learn how to do so? From radiologists, who gave it a nice training dataset.

      Nope. The dataset didn't come from radiologists making predictions. It came from retrospective analysis: whether the anomaly actually turned out to be a cyst, a benign growth, or a malignant tumor, after a biopsy and/or after the disease manifested.

      If it relied on the predictions of the radiologists, the best it could ever do would be to equal their accuracy. But it exceeds them.

      As a result, fewer and fewer radiologists will be able to do the job, and no more training datasets will be created.

      Nope. It doesn't work that way.

      We will blindly rely on an AI with no one left who understands how it works.

      If it gives more accurate results, and saves lives, does it really matter if we understand the mechanism? We certainly don't understand how human brains work, yet we rely on them every day.
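
      To make the retrospective-analysis point above concrete, here is a toy Python sketch in which training labels come from the confirmed outcome of each case rather than from the radiologist's call at screening time; the record fields and values are invented for illustration:

      # Hypothetical sketch: ground truth is the retrospective outcome, not the
      # radiologist's prediction, so a model can learn from cases the radiologist missed.
      records = [
          {"scan_id": "A", "radiologist_call": "suspicious", "confirmed_outcome": "benign_cyst"},
          {"scan_id": "B", "radiologist_call": "clear", "confirmed_outcome": "malignant_tumor"},
      ]

      # Label each scan 1 if the follow-up (biopsy/disease course) confirmed malignancy.
      training_labels = {r["scan_id"]: int(r["confirmed_outcome"] == "malignant_tumor")
                         for r in records}
      print(training_labels)  # {'A': 0, 'B': 1}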

      • Re: (Score:2, Interesting)

        by geekmux ( 1040042 )

        but where did it learn how to do so? From radiologists, who gave it a nice training dataset.

        Nope. The dataset didn't come from radiologists making predictions. It came from retrospective analysis: whether the anomaly actually turned out to be a cyst, a benign growth, or a malignant tumor, after a biopsy and/or after the disease manifested.

        If it relied on the predictions of the radiologists, the best it could ever do would be to equal their accuracy. But it exceeds them.

        As a result, fewer and fewer radiologists will be able to do the job, and no more training datasets will be created.

        Nope. It doesn't work that way.

        Why do you assume a capitalist society won't "work that way"? Radiologists cost money. They're humans, so of course they're a pain in the ass to deal with compared to a machine that can replace dozens of them. Regardless of how AI is getting the data or how it's being analyzed, all it will take is consistency from AI (which will ultimately "save lives" as you've stated), and the human radiologist career will be eradicated.

        Sorry, fellow meatsacks, but we will eventually be deemed not to be worth the risk by...

        • Re:Danger (Score:4, Insightful)

          by K. S. Kyosuke ( 729550 ) on Tuesday May 21, 2019 @05:41AM (#58628830)

          Why do you assume a capitalist society won't "work that way"? Radiologists cost money. They're humans, so of course they're a pain in the ass to deal with compared to a machine that can replace dozens of them.

          He didn't say that society doesn't work that way; he said the software training doesn't work that way. If the training data sets were not created by radiologists, then a lack of radiologists can't lead to a lack of training data sets.

          • Re: (Score:2, Insightful)

            Lack of radiologists won't lead to a lack of training data sets; however, it will lead to a lack of humans who can validate the results of the AI, which in turn will lead to diminishing quality of results.
            • The validation of the results, however, comes from biopsies and post-mortem analysis. Not from some human checker looking over the AI's shoulder.

              Lack of radiologists won't lead to a lack of training data sets; however, it will lead to a lack of humans who can validate the results of the AI, which in turn will lead to diminishing quality of results.

              The humans who validate the results of the AI aren't radiologists. They're surgeons and oncologists. This AI will in no way replace them, so your argument makes no sense.

          • Why do you assume a capitalist society won't "work that way"? Radiologists cost money. They're humans, so of course they're a pain in the ass to deal with compared to a machine that can replace dozens of them.

            He didn't say that society doesn't work that way; he said the software training doesn't work that way. If the training data sets were not created by radiologists, then a lack of radiologists can't lead to a lack of training data sets.

            From your description, I'm struggling to understand why we need radiologists at all, which only solidifies my point. It doesn't really matter whether the AI goes through the front door to learn it or reverse-engineers the answers from retrospective analysis. Once AI radiology is proven to be as accurate or better, hospitals won't want to carry the liability of a human radiologist. It's that simple.

            • I'm struggling to understand why we need radiologists at all

              We don't.

              We don't need them to make diagnoses, since software is superior in every way.

              We don't need them to train the software, since that is not how the software is trained.

              Radiologists are medical doctors, and should retrain for other specialties.

              • I'm struggling to understand why we need radiologists at all

                We don't.

                We don't need them to make diagnoses, since software is superior in every way.

                We don't need them to train the software, since that is not how the software is trained.

                Radiologists are medical doctors, and should retrain for other specialties.

                So radiology will soon be a dead career, and we'll no longer see colleges offering overpriced degrees in this specialty. Hospitals will stop hiring for these positions now that iDiagnose is available.

                Let's sit back and see how long it takes Greed N. Corruption to agree with that. In the meantime, I'll send an email to Hades and schedule that snowball fight.

                • Let's sit back and see how long it takes Greed N. Corruption to agree with that.

                  No greed or corruption is needed. Just common sense.

                  Radiology is a dead end career. It will fade away over the next decade or so.

                  Medical costs will go down. Outcomes will improve. Resources will be better allocated.

        • We know the more procedures a doctor does, the better they are at it. If this reduces due to automation, they won't be as good at it.

          That is a fair trade if the net lives saved increases, but there is no point denying it is happening.

          I expect that in 100 years the quality of human drivers will have plummeted even as the number of lives saved increases.

          • We know the more procedures a doctor does, the better they are at it. If this reduces due to automation, they won't be as good at it.

            Memory loss, physical decline, lack of sleep the night before, stress, failing eyesight, addiction... there are a dozen or more reasons why a surgeon may not be on their "A game" any particular morning before surgery, and all it takes is one major fuck-up to prove that experience doesn't always equal "better".

            And according to the local "investigative" news team, anything more than two major fuck-ups raises the question of why you still have a career.

        • by bigpat ( 158134 )

          and the human radiologist career will be eradicated.

          Which would be progress and I hope the AMA won't stand in the way of killing the specialty off with great haste.

          If a $20,000 computer can do a better job than a $400,000 per year radiologist then it should help save lives even more by reducing costs of health care and making it affordable for more people.

          Radiologists are still doctors so they should be able to find another job.

          This should be a 5-year transition to zero radiologists graduating from medical school, with retraining for the rest of them.

      • by Anonymous Coward

        The problem is, if it is not continually improved it will become obsolete and start making mistakes that kill people. It will also become a huge target for foreign or sinister actors, and it will not be easy to diagnose a problem like the AI killing off a racial group because someone who figured it out decided to add some things to accomplish a broader agenda. We must always understand it and must always babysit it.

        Another problem this causes is setting civilization up for failure. We have already dumbed...

        • by HiThere ( 15173 )

          WRT the first paragraph:
          While true, that's nearly an irrelevant argument. What it is an argument for is the design of a learning algorithm that doesn't get set in stone. Since it is taught by retrospective analysis ("You predicted cancer, but when we went in you were wrong" vs. "You predicted no cancer, but the patient died of it", to pick two extreme positions out of a large number that would need to be handled), continual learning (with, admittedly, a bit of a time lag) is possible. It does, of course...

      • You must be too young to remember the world before GPS. Unless you never had directional and map-reading skills to begin with - then you wouldn't notice everyone else has lost them. Radiology is of course more highly specialized and possibly more insulated from the effect. But people's minds definitely go "that way" (downward) as far as skills being lost. When looking case by case, you may get higher accuracy from the AI. But to discount the impact of institutional brain drain is to discount the utility of...
        • Many companies, entire nations, have foundered on that mistake.

          Please provide a few examples of companies and entire nations that foundered because they automated a process and suffered from brain atrophy as a result.

        • by HiThere ( 15173 )

          Sorry, but I remember my mother getting lost in a parking lot. My wife had an equally bad sense of direction, and my sister is not too much better. The ability to navigate has always been highly variable, and incredibly poor among some otherwise intelligent people. It seems to more often be quite poor among women, for obvious evolutionary reasons, but it's by no means restricted to them.

          Map-reading skills, I admit, are different. They need to be learned. But they need a good sense of direction underlying...

      • by AHuxley ( 892839 )
        Re "If it gives more accurate results, and saves lives, does it really matter if we understand the mechanism?"
        Nations with toxic pollution problems will command their nation's AI never to report some results.
        The nation can stay at "international" averages every year for every condition reported.
        Reduce spending on expensive reports and tests.
        An AI can shape who is "worthy" of care in a nation.
        No pollution problems get found, as no human experts get to see the actual data sets again and again. That's only...
    • Re: (Score:3, Interesting)

      by AmiMoJo ( 196126 )

      From radiologists, who gave it a nice training dataset.

      That's where new radiologists come from too.

      Sure, they learn the biology and the medical side, but for diagnosis they use training data sets: test results, scan images.

      I don't really see a problem with the AI being widely used for diagnosis if it proves reliable. If the AI says that it isn't a tumour then a human will have to step in to figure out what is causing the symptoms that the patient is experiencing, so false negatives should be no worse than they are now.

    • by sjbe ( 173966 ) on Tuesday May 21, 2019 @06:28AM (#58628958)

      This is extremely worrisome to me. It's great that some AI is able to do so, but where did it learn how to do so? From radiologists, who gave it a nice training dataset.

      This is correct and will remain correct.

      If this AI is used 'en masse', over time it will more or less replace radiologists' diagnoses.

      No it will not. It will supplement even if it ultimately does a lot of the heavy lifting. Radiologists aren't going away. What they probably will do over time is become more like clinical pathologists [wikipedia.org] where machines do a lot of the grunt work and the doctor spends their time reviewing the results and dealing with corner cases. You don't really need or want a doctor to deal with the routine stuff. You want the expert to be dealing with the difficult problems at the limits of our tools.

      As a result, fewer and fewer radiologists will be able to do the job, and no more training datasets will be created. We will blindly rely on an AI with no one left who understands how it works.

      I understand why you might think that, but that's not what will happen. Human radiologists will always be needed and available - they will just have more powerful tools at their disposal to get the right answer, and the nature of their job will shift as those tools become better. Let's use clinical pathology as an example. When you have blood drawn it is sent to a lab. It is then run through some very sophisticated machines that often can figure out the diagnosis without human help. But it's not a lights-out factory. A clinical pathologist has to oversee everything, and they often need a big staff of highly trained technicians to run and maintain the machines. Humans have to carefully tend the machines, and to do that they have to understand them and where their limits are. Humans have to interpret the reports from the machines and know what they mean. Humans have to work with the other medical staff for treatment. Humans have to deal with the corner cases the machines aren't equipped to handle. Clinical pathologists didn't go away just because they got some fancy new technology - they just didn't have to do a lot of wasteful grunt work anymore.

      Arguments that tools will replace humans are almost always wrong. What they almost always do is enable the human to do more powerful tasks that were previously difficult or too expensive. Computers replaced typewriters, but we still need people to use them. Getting a more capable tool lets us stop wasting time on tasks that don't add value.

      • "No it will not. It will supplement even if it ultimately does a lot of the heavy lifting. Radiologists aren't going away. What they probably will do over time is become more like clinical pathologists [wikipedia.org] where machines do a lot of the grunt work and the doctor spends their time reviewing the results and dealing with corner cases."

        Yeah, a bit like the mandatory driver in a self-driving car.
        They are supposed to be alert, but instead they fall asleep at the wheel (or stop button) because there's no...

      • I have been told since before medical school that AI would kill radiology, yet here we are 13+ years later and it's nowhere close. In fact, it is a struggle for us to hire new radiologists because demand is high right now. Of course that doesn't mean it will never happen, but more than likely I will be retired before the need for radiologists is significantly impacted. We have been using AI in certain parts of radiology for a while (breast and lung cancer screening being the biggest examples). I welcome...
      • by HiThere ( 15173 )

        It all depends on your time frame. I'll agree that they're unlikely to replace radiologists this decade. Talk about the next decade, and I'll say you're whistling past the graveyard. You might be correct, but that's not the way to bet. If you're talking about a decade or two after that, I'll say you're being silly. There MIGHT still be some job that's called radiologist, but it will resemble the current job about as much as using a word processor resembles setting type for a Linotype.

  • Plus all the photos it pulled up from their Picasa data store. "I wasn't *consulting* the photos -- for me, it's just like remembering all this person's selfies from when they were smoking at a barbecue. Plus those lungs look familiar. Don't ask me how I know that, it's probably just something I remember as well."

  • by Anonymous Coward

    John works in a dusty environment without protective gear; Joe smokes 3 packs of cigarettes daily; Susan grew up in a family of smokers with a track record of lung cancer. All are from mid-to-low-income families and cannot afford health insurance that covers an annual CT scan.

    Now, tell me: how useful is Google's AI analyzing these early CT scans compared to digging through massive medical records to develop a preventive program?

    • by bickerdyke ( 670000 ) on Tuesday May 21, 2019 @04:21AM (#58628674)

      Assuming that the largest cost factor of an annual CT scan is the radiologist who is going to bill 3 or 4 hours to look through ~200 CT slices, the benefit for them may be that they finally can afford annual CT scans.

      Joe could already pay for it from the money he could save by quitting the 3 packs a day but that is a different challenge...

      • Re: (Score:3, Informative)

        by ixneme ( 1838374 )
        Except it's not. The professional fee (i.e., the fee that the radiologist gets paid) is about 10% of the cost billed. The rest goes to the hospital or whoever owns the CT scanner.
        • by Anonymous Coward
          A fine example of the Invisible Hand always making sure the worker gets his due. Good thing we're not like all those socialist countries that have to pay zero point five TIMES as much as we do on healthcare!
        • by ceoyoyo ( 59147 )

          The radiology review is by far the most expensive part of most medical imaging. Radiologists are usually the top income specialty. The scanners are a bit of a capital cost, but are nearly free to operate after that, and the tech to run them costs about $50 an hour.

          The actual amount billed over and above that is a function of your particular jurisdiction.

  • Soon Big Brother Google can feed the data from AI powered CT scans to its AI powered death panels, which in turn will communicate with its AI powered autonomous terminators. And of course we all know beforehand, the Google AI will decide us deplorables don't deserve to live.

  • by Anonymous Coward

    Even pigeons can be better than radiologists...

    https://www.scientificamerican.com/article/using-pigeons-to-diagnose-cancer/?redirect=1

    • by ceoyoyo ( 59147 )

      That's pathologists, but yes, it does emphasize the point that there's nothing particularly special about our visual system.

  • It determined that treatment was pointless and should be halted immediately. The AI will save time and money spent on worthless humans.
  • This argues for using computers to train human doctors to spot cancers, not for the replacement of human doctors for diagnosing cancers. Humans push things forward, not machines. Humans are motivated to invent and advance things, not algorithms.
  • Does it overdiagnose clinically insignificant cancers?

    Not all cancer screening is about *finding all cancer*. In medicine the more important goal is to find clinically significant cancers. Not all lung cancers are clinically significant (think of slowly changing GGNs, ground-glass nodules that are low-grade, indolent lung cancers).

    This concept is a hot topic and contentious across multiple cancer disciplines. While it is initially appealing to *find moaaar cancer*, there is a cost associated with that. Direct cost to the...
  • pigeons?
