Medicine

Computer Detection Effective In Spotting Cancer

Anti-Globalism notes a large study out of the UK indicating that computer-aided detection can be as effective at spotting breast cancer as two experts reading the X-rays. Mammograms in Britain are routinely checked by two radiologists or technicians, which is thought to be better than a single review (in the US only a single radiologist reads each mammogram). In a randomized study of 31,000 women, researchers found that a single expert aided by a computer does as well as two pairs of eyes. CAD spotted nearly the same number of cancers, 198 out of 227, compared to 199 for the two readers. "In places like the United States, where single reading is standard practice, computer-aided detection has the potential to improve cancer-detection rates to the level achieved by double reading," the researchers said.
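As a quick sanity check on those figures, the detection counts quoted in the summary translate into per-reader sensitivities of roughly 87%, which lines up with the rates discussed in the comments below. A minimal sketch in plain Python, using only the counts from the summary:

```python
# Detection counts quoted in the summary: 227 cancers in the study arm,
# 198 flagged by a single reader with CAD, 199 by the double-reading pair.
cancers_total = 227
detected_cad = 198
detected_double = 199

sensitivity_cad = detected_cad / cancers_total        # ~0.872
sensitivity_double = detected_double / cancers_total  # ~0.877

print(f"CAD-assisted single reader: {sensitivity_cad:.1%}")
print(f"Double reading:             {sensitivity_double:.1%}")
```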
  • Re:WTF? just WTF? (Score:4, Informative)

    by ColdWetDog ( 752185 ) * on Saturday October 04, 2008 @09:57PM (#25260603) Homepage

    Why is this news and NOT standard practice already?

    Because the computer systems are expensive and it hasn't been clear that they work as well as or better than humans. It's a very complex issue and has been studied for quite some time. The particular concern is false positives, which cause anxiety and often prompt additional, invasive, expensive testing. From a rather quick [breast-can...search.com] Google Review of Available Information and Literature (GRAIL):

    Although there is some evidence to support the view that prompting can significantly improve an individual radiologist's detection performance [6,7], there are still many unanswered questions about the way in which prompting will affect radiologists in the National Health Service Breast Screening Programme. In particular, studies have shown that the number and distribution of false prompts are critical to the success of the process [5,7]. In most commercial systems, in which sensitivity to abnormalities is seen as an important selling point, false prompts are the price one has to pay. Some systems produce, on average, nearly one false prompt per mammogram. Published research studies on commercial prompting systems [8,9] have not to date demonstrated any statistically significant improvement in radiologists' detection performance within a screening programme in which the vast majority of films are normal. However, such studies have shown that systems can detect a large proportion of subtle lesions using a retrospective review of prior screening films of patients with cancer. Because it is not yet known whether prompting has any detrimental effects on radiologists' performance in population screening mammography, these systems cannot yet be used in a population screening programme.

    TFA doesn't even mention the false positive rate, just the fact that the system found as many cancers as the double-radiologist method. So keep your pantyhose on. It's something that should get better with time and experience, but it's hard to say the system is ready for universal application.

  • Re:WTF? just WTF? (Score:5, Informative)

    by Metasquares ( 555685 ) <{moc.derauqsatem} {ta} {todhsals}> on Saturday October 04, 2008 @10:20PM (#25260705) Homepage

    Most results are presented as an ROC curve (for the uninitiated, a curve that plots the true positive rate against the false positive rate as the threshold for classifying a lesion is varied), so the FPR can in principle be reduced if you're willing to give up some sensitivity as well (see the sketch after this comment).

    The thing is, the outcomes are not balanced. The risk of missing a cancer is considered far greater than the risk of returning a false positive, so the algorithms are usually created with sensitivity rather than specificity in mind. In my opinion (and since I work on some of these algorithms, my opinion is important :)), this is as it should be, and we should worry about specificity only if we can keep a comparable level of sensitivity.

    In any case, the article Yahoo is sourcing from does mention the specificity (which is 1 minus the false positive rate), and it is encouraging: with CAD, specificity was 96.9%, vs. 97.4% for double reading. Given that sensitivity was also similar (87.2% vs. 87.7%), the article paints CAD in a very favorable light.
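A minimal sketch of the threshold trade-off described in the comment above, using hypothetical lesion scores rather than any real CAD output: sweeping the decision threshold traces out points on the ROC curve, and loosening the threshold buys sensitivity at the cost of more false positives.

```python
# Hypothetical classifier scores for a handful of regions, paired with
# ground truth (1 = cancer, 0 = benign). Each threshold yields one
# (sensitivity, false positive rate) point on the ROC curve.
scores = [0.95, 0.80, 0.75, 0.60, 0.40, 0.30, 0.20, 0.10]
truth  = [1,    1,    0,    1,    0,    0,    1,    0]

def roc_point(threshold):
    tp = sum(1 for s, t in zip(scores, truth) if s >= threshold and t == 1)
    fp = sum(1 for s, t in zip(scores, truth) if s >= threshold and t == 0)
    fn = sum(1 for s, t in zip(scores, truth) if s < threshold and t == 1)
    tn = sum(1 for s, t in zip(scores, truth) if s < threshold and t == 0)
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # 1 - false positive rate
    return sensitivity, 1 - specificity   # (TPR, FPR)

for thr in (0.9, 0.5, 0.15):
    tpr, fpr = roc_point(thr)
    print(f"threshold {thr:.2f}: sensitivity {tpr:.0%}, false positive rate {fpr:.0%}")
```

Lowering the threshold from 0.9 to 0.15 in this toy example raises sensitivity from 25% to 100% while the false positive rate climbs from 0% to 75%, which is exactly the trade-off the tuning decision comes down to.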

  • Re:WTF? just WTF? (Score:2, Informative)

    by Anonymous Coward on Saturday October 04, 2008 @10:46PM (#25260851)

    I worked on the first clinically useful mammo CAD system (also the first to have FDA approval in the US). Unlike some of the smaller-scale (often academic) programs I saw at the time, there was a large degree of domain knowledge (i.e., breast-specific) in our code.

    This is pretty typical in other pattern recognition domains as well. You can get a certain distance with fairly generic algorithms, but to really push the performance boundaries you need domain-specific information and approaches as well.

    This paper doesn't actually say anything particularly new relative to our results 10 years ago, but it's a broader study than was available then.

  • by Robert1 ( 513674 ) on Saturday October 04, 2008 @11:07PM (#25260951) Homepage

    Because mammography is an extremely insensitive test.

    http://mammography.ucsf.edu/inform/html/graphics/graph2a5.gif [ucsf.edu]

    This shows how few women the test can actually benefit: 37 out of 10,000 over their lifetimes. Even worse, the number of women who receive a false positive diagnosis outstrips the number who actually have cancer by orders of magnitude. That places a harmful burden on the falsely diagnosed women, creating morbidity and even mortality.

    You can make a machine that catches 99.9% of all women who do have breast cancer. Unfortunately, out of the 9740 that never will/don't have breast cancer, 9700 will be falsely diagnosed as having it. (A rough back-of-the-envelope illustration of this base-rate problem follows below.)
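To make the base-rate point concrete without endorsing the parent's exact false-positive figures, here is a rough calculation using the lifetime prevalence cited above (37 cancers per 10,000 women), the parent's hypothetical 99.9% sensitivity, and, purely for illustration, the 96.9% CAD specificity quoted earlier in the thread:

```python
# Back-of-the-envelope positive predictive value at low prevalence.
# Prevalence is the 37-per-10,000 lifetime figure from the parent comment;
# sensitivity is the parent's hypothetical near-perfect machine; specificity
# is borrowed from the CAD study discussed upthread for illustration only.
population = 10_000
cancers = 37
healthy = population - cancers

sensitivity = 0.999
specificity = 0.969

true_positives = cancers * sensitivity          # ~37
false_positives = healthy * (1 - specificity)   # ~309

ppv = true_positives / (true_positives + false_positives)
print(f"True positives:  {true_positives:.0f}")
print(f"False positives: {false_positives:.0f}")
print(f"Chance a positive result is actually cancer: {ppv:.0%}")
```

Even with those fairly generous numbers, false positives outnumber true positives by roughly eight to one, so a woman with a positive screen has only about an 11% chance of actually having cancer.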

  • Not true (Score:3, Informative)

    by Kibblet ( 754565 ) on Saturday October 04, 2008 @11:39PM (#25261125) Homepage
    Where do you get your stats? I've seen otherwise (the ACS site, for starters, and school too, in my community health class; I'm in nursing). Furthermore, of all the inequities in research and healthcare, this is one of the few that favors women. Take, for example, cardiovascular health: women are treated differently when it comes to suspected heart attacks and other cardiovascular issues, and it usually winds up killing them.
  • Ah yes (Score:2, Informative)

    by XanC ( 644172 ) on Saturday October 04, 2008 @11:49PM (#25261183)

    Because government-funded research is inherently free of any and all bias. It is never politically motivated, and areas to research and not to research are chosen purely on scientific merit by a government bureaucrat, whose #1 goal is not to increase and extend his own power. </sarcasm>

    Seriously though, there are a lot of people who believe exactly that, and even if the commercial research may be biased, at least that's known and out in the open.

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...