Medicine

A Hospital Algorithm Designed To Predict a Deadly Condition Misses Most Cases

Epic Systems' algorithm for identifying signs of sepsis, an often deadly complication from infections that can lead to organ failure, doesn't work as well as advertised, according to a new study published in JAMA Internal Medicine. The Verge reports: Epic says its alert system can correctly differentiate patients who do and don't have sepsis 76 percent of the time. The new study found it was only right 63 percent of the time. An Epic spokesperson disputed the findings in a statement to Stat News, saying that other research showed the algorithm was accurate. Sepsis is hard to spot early, but starting treatment as soon as possible can improve patients' chances of survival. The Epic system, and other automated warning tools like it, scan patient test results for signals that someone could be developing the condition. Around a quarter of US hospitals use Epic's electronic medical records, and hundreds of hospitals use its sepsis prediction tool, including the health center at the University of Michigan, where study author Karandeep Singh is an assistant professor.

The study examined data from nearly 40,000 hospitalizations at Michigan Medicine in 2018 and 2019. Patients developed sepsis in 2,552 of those hospitalizations. Epic's sepsis tool missed 1,709 of those cases, around two-thirds of which were still identified and treated quickly. It only identified 7 percent of sepsis cases that were missed by a physician. The analysis also found a high rate of false positives: when an alert went off for a patient, there was only a 12 percent chance that the patient actually would develop sepsis. Part of the problem, Singh told Stat News, seemed to be in the way the Epic algorithm was developed. It defined sepsis based on when a doctor would submit a bill for treatment, not necessarily when a patient first developed symptoms. That means it's catching cases where the doctor already thinks there's an issue. "It's essentially trying to predict what physicians are already doing," Singh said. It's also not the measure of sepsis that researchers would ordinarily use.
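
The arithmetic behind those figures is worth making explicit. Below is a back-of-the-envelope reconstruction in Python; the derived counts come from the rounded numbers quoted above rather than the study's own tables, and note that the 63 and 76 percent figures are reported as area under the ROC curve, a different measure from the naive accuracy computed here.

# Approximate confusion matrix implied by the figures quoted above.
# All derived counts are estimates from rounded numbers, not study data.
total = 40_000           # "nearly 40,000 hospitalizations"
sepsis = 2_552           # hospitalizations in which sepsis developed
missed = 1_709           # sepsis cases the tool never flagged
ppv = 0.12               # 12% of alerts preceded actual sepsis

tp = sepsis - missed             # cases the tool caught: 843
alerts = round(tp / ppv)         # implied total alerts: ~7,025
fp = alerts - tp                 # implied false alarms: ~6,182
tn = (total - sepsis) - fp       # non-sepsis charts with no alert

sensitivity = tp / sepsis        # ~33%: two-thirds of cases missed
accuracy = (tp + tn) / total     # ~80% despite all of the above
print(f"sensitivity {sensitivity:.0%}, PPV {ppv:.0%}, accuracy {accuracy:.0%}")

The gap between a respectable-looking overall number and a 33 percent sensitivity is exactly what the comments below argue about.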

Comments Filter:
  • by Ostracus ( 1354233 ) on Wednesday June 23, 2021 @08:15AM (#61512674) Journal

    Wonder how many systems would benefit from a sense of smell?

  • This glitch gives new meaning to Epic... but it also highlights what happens when one software company "owns" a particular market.

    Back in the 1980s, when expert systems were the fad of the day, this was one of the examples showcased... an expert system could diagnose patients more accurately than a single medical doctor.

    Of course, there was another term from that era: GIGO -- garbage in, garbage out. :)

    JoshK.

  • Obligatory (Score:5, Funny)

    by Random361 ( 6742804 ) on Wednesday June 23, 2021 @08:17AM (#61512684)
    EPIC fail.
    • One thing Epic does really well is protect its market share. When hospitals say they need something (e.g. sepsis detection), Epic will respond with a new feature for their salesmen to sell. Whether that feature works or not is less important than whether it blocks other vendors from invading their space ("Don't buy a third-party product, it's already built into your Epic system").
      • by dmr001 ( 103373 )
        I don't work for Epic; I work for a hospital that is a customer. When we need something, we'll typically try to build it ourselves. Some things we plead for Epic to build for us if they aren't doable using their software. I think there's typically no charge for that beyond routine maintenance. We will sometimes also write our own browser-based apps that can grab data out of the EMR.

        I don't speak for my employer, but I've never seen Epic try to sell us new features.
  • by shabble ( 90296 ) <metnysr_slashdot@shabble.co.uk> on Wednesday June 23, 2021 @08:41AM (#61512730)

    That means it's catching cases where the doctor already thinks there's an issue

    Compare with the machine-learning exercise back in '17, I think it was, that attempted to identify cancers from photos, only to have its "AI" learn that photos with rulers in them were more likely to have cancers identified later (because a ruler was only used for lesions that were concerning to begin with, before the photo was taken).

    • by Luckyo ( 1726890 )

      There will be quite a few errors like these in the early days of machine learning going mainstream.

      But the fact remains that if you want to identify specific patterns in images and act on them, machine-learning systems with sufficient training are getting close, and it's only been a few years since the machine-learning breakthrough.

      The case you're using as an example is exactly why. These systems see patterns in images. If the images share a clear pattern, like the ruler, they identify it as a marker. If there is

      • There's also the danger of over-training, which you haven't mentioned, where important patterns are erased by new training data, and of mis-training, where the neural nets learn not causal patterns but mere correlations. Once they get good enough and people start depending on them blindly (as they do with other people), the corner cases will sting like a bee those unfortunate enough to be involved.

        • by Luckyo ( 1726890 )

          This is a common problem for people who don't quite understand statistics. "But the edge cases" is true. But the entire system isn't compared to "perfection"; it's compared to "human performance in the same scenario". We don't need an AI that is perfect and has no edge-case failures. We need an AI that has fewer edge-case failures than humans do.

          And humans fail a LOT in this specific field.

          • I think you're mixing up AI with statistical pattern search - two different things. AI is based on a simplified neuron and networks built upon it. There is no theory or understanding of what kind of information such a neural network stores (none I know of) - everything is done through training. Additionally, there is no theory which would tell us that such a neural network has reached its capacity and that with new patterns it will start corrupting existing ones - the examples are numerous: a close, across-the-street truck recogn

            • by Luckyo ( 1726890 )

              I'm in agreement on the theory, but I think you misunderstand the target of the endeavour in this case. We're not looking for something that is exceedingly multivariate and requires understanding of things like "intention" (i.e. driving) to generate a proper model of "what should be done". An example case would be "what is this car on my right going to do?" You can do neural-network data crunching and learn from millions of interactions of that kind. Or you can do it like humans do and understand intent

              • I agree about AI's usefulness; my only point is that for a long time it cannot be fully autonomous (and should not be treated as such), because we don't understand, and don't have the mathematical tools to check, what patterns get associated with real-life models through training.

                I agree that large data sets will drive the error toward a minimum - I just don't want to be in that minimum event. Overall, even with today's technology, AI may on average be better than humans; the problem is that humans will rely on

                • by Luckyo ( 1726890 )

                  You're once again making a comparison to an ideal situation that is impossible. Sure, no one wants to die because an AI makes a mistake. But does it make you feel any better that the same mistake would have been made by a human doctor? Would you still take a human doctor instead of an AI if you knew that human doctors are less accurate and make more mistakes?

                  Because that's how good AI needs to be to displace human doctors. It needs to be better than human doctors. It doesn't need to be 100% correct. And considering error rates in t

        • by sjames ( 1099 )

          And there's the major concern for AI systems. The AI tends to act as a black box. We don't know if it is looking at relevant features, non-causal correlations, or accidental coincidences in the training data.

    • by ceoyoyo ( 59147 )

      Convenience sampling is a bitch. Unfortunately, people are lazy and cheap. TANSTAAFL.

  • Does using the system increase profits for the hospitals?
    By increasing the number of medical procedures?
    By increasing the number of conditions the doctor needs to check for?
    By reducing lawsuits?

    • I would argue that last one is a win-win. Sure, reduced lawsuits are good for the hospital, but presuming they aren't achieved through some kind of suppression or fear, fewer lawsuits from patients is also a good thing.

    • By reducing lawsuits?

      If it works, then yes, because sepsis is one of the biggest killers of people in the hospital.

      There have been a bunch of medical startups in the last 5 years trying to catch sepsis early (because it's low-hanging fruit for saving lives and for getting into hospitals).

    • Does using the system increase profits for the hospitals? For the most part, such systems are cost-savings instruments rather than profit generators.
      By increasing the number of medical procedures? Increasing the number of procedures on a patient isn't optimal for the organization's profit; there is more money in treating the patient and getting them out of the building so they can treat another one. Doing a checkup and diagnosing a condition has higher margins per unit of time than a procedure.
      By increasing the num

  • It defined sepsis based on when a doctor would submit a bill for treatment, not necessarily when a patient first developed symptoms.

    This sounds like one of those "clever" answers to "clever" interview questions. And those never tell you anything useful about a person's problem-solving ability.

  • And are now trying to fake it. Not good.

  • Good, more data for the model. Add those 1,709 cases where the model failed and retrain. The "data engine" will fix all problems, but that means you need to get close to the data itself.
  • The new study found it was only right 63 percent of the time. [....] The analysis also found a high rate of false positives: when an alert went off for a patient, there was only a 12 percent chance that the patient actually would develop sepsis.

    How can an algorithm with an 88% false-positive rate be rated as right 63 percent of the time? By the same study, no less.

    • I'm assuming it's 88% of positives that are false, not 88% of all results that are false.

      If negative results are both more accurate and more common, that brings up the overall accuracy.

      • by ranton ( 36917 )

        I'm assuming it's 88% of positives that are false, not 88% of all results that are false.

        If negative results are both more accurate and more common, that brings up the overall accuracy.

        This exactly highlights why the statistics used are a complete mess. If getting a lot of the negative results correct raises my success rate, I could get a 94% success rate just by saying no one will get sepsis.

        The only two numbers that should matter are the percentage of positive results that are correct and the percentage of sepsis cases that are missed. 88% of positives were false, and 67% of sepsis cases were missed. With failure rates that high, how they came up with a success rate of between 63-76%
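
        (That 94 percent figure checks out against the summary's rounded numbers; a quick sketch in Python:)

        # A "model" that never predicts sepsis is right about 94% of the
        # time on this data, with zero clinical value. Counts are the
        # rounded figures from the summary, not the study's exact tables.
        total, sepsis = 40_000, 2_552
        always_no_accuracy = (total - sepsis) / total
        print(f"{always_no_accuracy:.1%}")   # 93.6%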

  • by jimbrooking ( 1909170 ) on Wednesday June 23, 2021 @09:32AM (#61512846)

    If one patient in 16 is getting sepsis, it sounds like Michigan Medicine has a problem.

  • Man, I'm going to undercut Epic with my sepsis-sensing invention. It only works 50 percent of the time, but at a fraction of the cost. Question: is it possible to patent coin flipping, if it's used in a medical capacity?
    • Yes. It is easy to patent completely useless inventions. In fact, it is generally easier to patent a useless invention than a useful one (much less prior art). All that is required is a well-paid patent lawyer. The lawyer will use lots of language such as "implemented on a computer, with a storage device, with a display device, and optionally with an internet connection." Somewhere around Claim Z a stand-alone coin-flipping device will be described. The lawyer will possibly also include remark

    • FYI: if enough trials are conducted, it is possible to get the coin-flipping device to accurately predict medical conditions for small groups of patients.

      For instance, in 1000 trials of 10 patients each, only about 25% of the trials will come out 50-50, while about 37% will show a marked success. By eliminating the non-promising results (bias), a coin toss can be shown to be statistically accurate.

      Unfortunately, as pointed out by a statistics professor, a medical student needs a successful result to get a
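
      (Those percentages hold up under exact binomial arithmetic. A quick sketch in Python; reading "marked success" as 6 or more correct calls out of 10 is an assumption, chosen because it matches the 37% figure:)

      # Probabilities for trials of 10 patients, each "predicted" by a
      # fair coin flip.
      from math import comb

      n = 10
      p_exact_half = comb(n, 5) / 2**n                             # ~24.6%
      p_marked = sum(comb(n, k) for k in range(6, n + 1)) / 2**n   # ~37.7%
      print(f"exactly 50-50: {p_exact_half:.1%}, 6+ correct: {p_marked:.1%}")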

    • by sjames ( 1099 )

      If you want the system to maximize profit, it needs to scan a picture of the patient and be able to tell "Rolex" from "Rollx".

  • Tech has to have better results than humans, because the lazy (most people) will end up using it as a replacement for actually doing their jobs.

    Look at cops using facial-recognition tech. They're told not to rely on it to the exclusion of doing 'real' detective work, yet they often do anyway, leading to false fines and actions taken against innocent people.

    So, knowing people are lazy, the tech needs to be as good as or better than real doctors doing evaluations before I'd feel comfortable deploy

  • Epic fail (Score:4, Interesting)

    by groobly ( 6155920 ) on Wednesday June 23, 2021 @10:57AM (#61513068)

    Epic makes one of the patient portal systems I use. Its programming and UI are complete garbage. I wouldn't trust them to detect whether AC power is even on.

    • Please let the world know of a good EHR system. They ALL SUCK! The good ones are so focused on one specialty that when the practice grows or gets acquired by another organization, it becomes a problem, because the system is too specialized and won't communicate well with the other systems.

      • This is why standards are so important in the computer world. Interoperability is a nightmare otherwise. I shudder when thinking back to my DOS days and how janky hardware used to be then. Now it's all USB. How far we have come.
  • "he Epic algorithm was developed. It defined sepsis based on when a doctor would submit a bill for treatment, not necessarily when a patient first developed symptoms."
    • by dvice ( 6309704 )

      Compare this to the Google-style approach (sketched below):
      1. Collect data from patients.
      2. Wait a year.
      3. Mark the patients who turned out to have cancer.
      4. Train the AI on the year-old data, now tagged with those outcomes.
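
      (A minimal sketch of that labeling scheme in Python; the record fields and structure are hypothetical, for illustration only:)

      # Label training examples by what actually happened to the patient,
      # known only after a follow-up window -- not by when a clinician
      # billed for treatment. Field names here are made up.
      from datetime import datetime, timedelta

      FOLLOW_UP = timedelta(days=365)

      def label_examples(records, now):
          examples = []
          for r in records:
              if now - r["admitted"] >= FOLLOW_UP:       # 2. wait a year
                  features = r["labs_at_admission"]      # 1. collected data
                  label = r["confirmed_diagnosis"]       # 3. ground truth
                  examples.append((features, label))
          return examples                                # 4. train on these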

  • The problem is that medical doctors, much like the rest of the population, tend to take an all-or-nothing approach toward computer assistance.

    A tool that predicts conditions is based on general statistics and math to find correlations between sepsis and test results. The reason we have doctors actually go and talk to patients and do exams, rather than sitting in a back room all day just looking at lab results, is that a trained doctor can often find things that such lab results may n

    • by tomhath ( 637240 )
      Sepsis is a difficult problem because the patient seems fine when you talk to them; then the symptoms appear suddenly and the condition worsens rapidly. Putting all of the patient data together is important so you know which patients to keep a close eye on, but you really need nurses on the floor who are alert to changes in a patient's condition. The doctors can't be hovering over the beds all day.
  • by thomn8r ( 635504 ) on Wednesday June 23, 2021 @01:38PM (#61513712)
    It defined sepsis based on when a doctor would submit a bill for treatment, not necessarily when a patient first developed symptoms.
