AI Can Predict Heart Attacks More Accurately Than Doctors (digitaltrends.com) 48

An anonymous reader quotes a report from Digital Trends: Scientists from the University of Nottingham in the United Kingdom have developed an algorithm that outperforms medical doctors when it comes to predicting heart attacks. As it stands, around 20 million people each year fall victim to cardiovascular disease, which includes heart attacks, strokes, and blocked arteries. Today, doctors depend on guidelines similar to those of the American College of Cardiology/American Heart Association (ACC/AHA) to predict individuals' risk. These guidelines include factors like age, cholesterol level, and blood pressure. Stephen Weng, an epidemiologist at the University of Nottingham, took the ACC/AHA guidelines and compared them to four machine-learning algorithms: random forest, logistic regression, gradient boosting, and neural networks. The artificially intelligent algorithms began to train themselves using existing data to look for patterns and create their own "rules." Then, they began testing these guidelines against other records. And as it turns out, all four of these methods "performed significantly better than the ACC/AHA guidelines," Science reports. The most successful algorithm, the neural network, was correct 7.6 percent more often than the ACC/AHA method and resulted in 1.6 percent fewer false positives. That means that, in a sample of around 83,000 patient records, 355 additional lives could have been saved.
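For reference, the head-to-head comparison the summary describes can be sketched with scikit-learn. Everything below is a placeholder: the synthetic cohort, the feature count, and the 75/25 split merely mirror the setup, not the study's actual data or code.

```python
# Hypothetical sketch of the study's comparison: the four classifiers
# named in the summary, trained on one split of a cohort and scored on
# a held-out split. The synthetic data stands in for real patient records.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# 75% train / 25% held-out, mirroring the paper's split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "random forest": RandomForestClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "neural network": MLPClassifier(max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

The paper reported performance as AUC-style discrimination plus sensitivity/specificity; this sketch only shows the mechanics of the comparison.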
  • by Gravis Zero ( 934156 ) on Monday April 17, 2017 @11:49PM (#54254227)

    the AI took the easy route and was scaring people into having heart attacks. ;)

  • Bullshit headline (Score:5, Insightful)

    by Mitreya ( 579078 ) <<moc.liamg> <ta> <ayertim>> on Monday April 17, 2017 @11:55PM (#54254245)

    these methods "performed significantly better than the ACC/AHA guidelines,"

    Yeah, outperforming the general written guidelines is totally the same as outperforming a live doctor.

    "Helpful tool that may substitute rule-of-thumb guidelines for doctors", maybe.

    • I work in this field (Score:4, Interesting)

      by Anonymous Coward on Tuesday April 18, 2017 @12:53AM (#54254413)

      The biggest problem with these guidelines (the paper calls them an algorithm) is that humans optimize their predictive power using simple statistical analysis on a training dataset. Then medical professionals are told the guidelines can reasonably be used anywhere. Throw them some data from a vastly different hospital and performance degrades very quickly. For example, create rules using data from an academic hospital, then test them on a rural hospital, and the results won't be as good. The label distributions differ, and oftentimes even the feature distributions do.

      If you use machine learning to train on the same hospital that you test on, of course it will beat a model that humans created from a different hospital's data.

      I see this happen all the time with readmission prediction. There's a very popular system called LACE that was optimized for a specific hospital. It's simple to implement and test, so there's been a lot of reproduction. LACE has wild swings in performance (from AUC=0.55 to AUC=0.75, depending on the dataset). Researchers will often compare LACE to a machine-learning method like random forest, training and testing on the same hospital, and, shockingly, the results are better.
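      The cross-site degradation described above is easy to demonstrate. In this toy sketch (entirely invented: both "hospitals" are synthetic, and the shift is exaggerated), a model fit on one site's data is scored on a second site whose case mix and outcome rule differ:

```python
# Hypothetical illustration of cross-site degradation: a model fit on
# "academic hospital" data is scored on a "rural hospital" sample whose
# feature distribution and outcome rule both differ. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_site(n, coef, shift=0.0):
    # Simulate one site's patients: `coef` is that site's "true" risk
    # rule, `shift` moves the feature means (different case mix).
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    logits = X @ np.asarray(coef)
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
    return X, y

X_a, y_a = make_site(4000, coef=[1.0, -0.5, 0.8, 0.0, 0.3])          # academic
X_b, y_b = make_site(4000, coef=[-1.0, 0.5, 0.8, 0.0, 0.3], shift=1.0)  # rural

model = LogisticRegression(max_iter=1000).fit(X_a[:3000], y_a[:3000])
auc_same = roc_auc_score(y_a[3000:], model.predict_proba(X_a[3000:])[:, 1])
auc_cross = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])
print(f"same-site AUC:  {auc_same:.2f}")
print(f"cross-site AUC: {auc_cross:.2f}")
```

      Same mechanism as the LACE swings: the model is fine where it was built and falls apart where the distributions change.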

      • Feedback loops (Score:2, Insightful)

        by Anonymous Coward

        In law enforcement you even get feedback loops where the AI is trained on the police arrest data, which guides the police to where to place enforcement for better arrests, which guides the AI which guides the police which guides the AI which guides the police....

        Obviously this focuses the police on one crime area, and then all their arrests are focused on that area, because that is where they are.

        The companies hire smart people, who understand the problem, but the corporate interest is to *sell* the AI, and this fee

    • by Anonymous Coward

      "Generally accepted guidelines" are, in some fields, 30 years behind current research (been there...).
      But that's not a problem, because sick people are great for generating revenue.

  • Because if it could we could use some here...
  • by Anonymous Coward

    The trouble with data mining (my job) is that you can over-optimize for micro-details that are not representative beyond the current data. I notice that they're using one cohort for both the training and the tests, and that would cause over-optimization.

    " This cohort was then randomly split into a 75% sample of 295,267 patients to train the machine-learning algorithms and the remaining sample of 82,989 patients for validation"

    So for example, both sets would cover the same 10 years over Brexit, wars, economic event

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      This exactly!

      The baseline method was a diagnostic algorithm created by humans on one training dataset. Using machine learning, these researchers did their train/test split on a different dataset, then compared models using the same test set but different training sets. YOU CANNOT DO THIS. Many researchers do, and it bothers me because it's not sound methodology. The classification label and feature distributions will differ between the original training dataset and the new one.

      Also, I've noticed a huge increase in AUC as a

    • by WoOS ( 28173 ) on Tuesday April 18, 2017 @02:11AM (#54254579)

      And one can "meta-train" against the test data group, like:
      * Train
      * Compare to test set
      * Worse than guideline result => Change training parameters
      * Train
      * Compare to test set
      * Still worse than guideline result => Change training parameters more
      * Train
      * Compare to test set
      * Better than guideline result => Publish

      I will be impressed if it is better than a human doctor on new cases.

      • by WDot ( 1286728 ) on Tuesday April 18, 2017 @09:52AM (#54256069)
        This is not accepted practice in the machine learning field. Many people split the training set into training and validation sets, and "test" on the validation set. Only once their parameters are set do they perform a final test on the test set. For some datasets, they might not even be able to access the test-set answers directly; they might have to go through a third-party server with limits on how often they can submit.
  • That's a bunch of malar~ '^ #~ g^g^g^g^g @aa` a , % [NO CARRIER]

  • That is, have an order that legally prohibits the use of AI/AI-derived treatments in nearly all cases. For the rest, a statement accurately documenting how no medical personnel were reassigned or terminated due to AI implementation - whether by direct or indirect means.

    • ... have an order that legally prohibits the use of AI/AI-derived treatments in nearly all cases. For the rest, a statement accurately documenting how no medical personnel were reassigned or terminated due to AI implementation - whether by direct or indirect means.

      You can't, that horse is out of the barn.
      Your insurance will keep asking for more supporting data until the AI says "approved," "alternate benefit will be reimbursed," or "treatment requested not indicated." Then your doctor says, "Your insurance will not cover the gold-standard treatment, but you can pay out of pocket; or we can use the standard treatment that is covered. What do you want to do?"

  • Misleading headline (Score:5, Interesting)

    by GeekWithAKnife ( 2717871 ) on Tuesday April 18, 2017 @03:08AM (#54254659)

    The AI does not outperform a doctor. It does not outperform any doctor, period.

    It has managed to perform better than guidelines.

    Now let's consider that the AI is basing this on data gathered by medical science. What it's actually outperforming is old prediction models.

    Overall, I'm quite happy about it, but let's not pretend that the AI had 5 hours of sleep in the last 48, had a shower, drove to work, checked on the kids, and yet managed to perform all its work duties admirably, because then it might actually outperform a doctor.
    • by thrich81 ( 1357561 ) on Tuesday April 18, 2017 @09:27AM (#54255869)

      "let's not pretend that AI had 5 hours of sleep in the last 48, had a shower, drove to work, checked on the kids and yet..." -- and that is exactly why I am eagerly looking forward to getting all my medical treatment from an AI just as soon as technologically possible. I don't want to be seen (or cut on) by a human who has had 5 hours of sleep in the last 48, even if somehow the profession has gotten itself in the position where that is bragged about. No other profession with potentially deadly consequences (aircraft pilots, truck drivers, military) treats sleep deprivation so casually. No thanks, I'll take my chances with the ever wakeful AI.


      • Oh absolutely. If an AI or some program/robot can replace human functions I'm all for it. After all, repetitive accurate work is what machines do best.

        Currently there are no AI alternatives to actual doctors. Until then...
    The AI does not outperform a doctor. It does not outperform any doctor, period.

      It has managed to perform better than guidelines.

      Now let's consider that the AI is basing this on data gathered by medical science. What it's actually outperforming is old prediction models.

      Yes it does, because a good many doctors don't actually apply the old prediction model. They "go with their gut" because "it's usually right." When you look at the statistics, a good many doctors underperform vs the prediction model, but don't know it, because they only remember their successes, not their failures. It's a survival mechanism of human memory. If they remembered their failures accurately, they'd be crushed under the load of guilt.

      What's odd about all these medical diagnostics by machine st

  • by 140Mandak262Jamuna ( 970587 ) on Tuesday April 18, 2017 @05:48AM (#54255061) Journal
    It will not be long before the AI neural network aggregates more data and determines that these 355 lives are not worth saving. What is the real incentive for the artificially intelligent to save the naturally stupid?
  • by moeinvt ( 851793 ) on Tuesday April 18, 2017 @09:04AM (#54255653)

    Not just on /. It seems to me that these stories about AIs & automation/robots taking human jobs have been all over the place in the past few months. I consciously try to avoid most mainstream U.S. media, but I can't help some incidental exposure. I also listen to NPR ~ 2hrs/week and they've been doing a series of stories on this general topic as well.

    It sure feels like a systematic effort at psychological conditioning. Is this simply the media trying to get back at Trump by blaming automation rather than immigration for displacing U.S. born workers? Is it just some current fad in the media which is going to pass when a majority of people get bored with it? Or perhaps it's just some distorted perception/selective attention on my part. It still feels sort of weird.

    • by Anonymous Coward

      ...to the giant Slashdot circlejerk. There are people here who still take Ray Kurzweil seriously.

  • I find doctors to be quite bad at routine diagnostics, so I think a roll of a d100 has at least as good a chance of predicting a heart attack as most doctors.

    Doctors are generally good at minor surgery, prescribing drugs, and addressing simple injuries. Beyond that, divining meaning from chicken bones seems to be just as accurate as doctors in predicting and/or diagnosing general issues.

    • Having some experience in this... I think it has more to do with how terrible our actual knowledge is about a lot of bad problems (or, more fairly, how ridiculously wide the spectrum of their symptoms is).

      The guidelines for differential diagnosis are... sometimes embarrassing to look at. But I'm not sure they could be better.
      What do you do when lupus erythematosus can present with identical symptoms to an autoeczematous id reaction to a fungal tinea incognito infection on your damn shin? The doctor throws
  • Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, neural networks) were compared to an established algorithm (American College of Cardiology guidelines) to predict a first cardiovascular event over 10 years.

    At a population level I can see where being able to predict who will have a heart attack in the next 10 years would be helpful, but how much value is there for an individual?

    Is the idea that a person begins preventative measures now and avoids the heart attack in the future? This algorithm wouldn't seem to be very helpful in planning surgical interventions like clearing blocked arteries.

  • > The artificially intelligent algorithms began to train themselves using existing data to look for patterns and create their own "rules." Then, they began testing these guidelines against other records. And as it turns out, all four of these methods "performed significantly better than the ACC/AHA guidelines," Science reports.

    This is merely back-testing. It's easy to come up with an algorithm that works with data set A and then works with data set B. People have been claiming back tested algorithms to p

  • The summary is not clear, and neither is TFA. Does the AI make better use of the usual data (age, cholesterol level, blood pressure), or did it use other data (waist size, alcohol intake, omega-3/omega-6 fat intake...)?
