AI Tool Cuts Unexpected Deaths In Hospital By 26%, Canadian Study Finds (www.cbc.ca)

An anonymous reader quotes a report from CBC News: Inside a bustling unit at St. Michael's Hospital in downtown Toronto, one of Shirley Bell's patients was suffering from a cat bite and a fever, but otherwise appeared fine -- until an alert from an AI-based early warning system showed he was sicker than he seemed. While the nursing team usually checked blood work around noon, the technology flagged incoming results several hours beforehand. That warning showed the patient's white blood cell count was "really, really high," recalled Bell, the clinical nurse educator for the hospital's general medicine program. The cause turned out to be cellulitis, a bacterial skin infection. Without prompt treatment, it can lead to extensive tissue damage, amputations and even death. Bell said the patient was given antibiotics quickly to avoid those worst-case scenarios, in large part thanks to the team's in-house AI technology, dubbed Chartwatch. "There's lots and lots of other scenarios where patients' conditions are flagged earlier, and the nurse is alerted earlier, and interventions are put in earlier," she said. "It's not replacing the nurse at the bedside; it's actually enhancing your nursing care."

A year-and-a-half-long study on Chartwatch, published Monday in the Canadian Medical Association Journal, found that use of the AI system led to a striking 26 percent drop in the number of unexpected deaths among hospitalized patients. The research team looked at more than 13,000 admissions to St. Michael's general internal medicine ward -- an 84-bed unit caring for some of the hospital's most complex patients -- to compare the impact of the tool among that patient population to thousands of admissions into other subspecialty units. "At the same time period in the other units in our hospital that were not using Chartwatch, we did not see a change in these unexpected deaths," said lead author Dr. Amol Verma, a clinician-scientist at St. Michael's, one of three Unity Health Toronto hospital network sites, and Temerty professor of AI research and education in medicine at University of Toronto. "That was a promising sign."

The Unity Health AI team started developing Chartwatch back in 2017, based on suggestions from staff that predicting deaths or serious illness could be key areas where machine learning could make a positive difference. The technology underwent several years of rigorous development and testing before it was deployed in October 2020, Verma said. "Chartwatch measures about 100 inputs from [a patient's] medical record that are currently routinely gathered in the process of delivering care," he explained. "So a patient's vital signs, their heart rate, their blood pressure ... all of the lab test results that are done every day." Working in the background alongside clinical teams, the tool monitors any changes in someone's medical record "and makes a dynamic prediction every hour about whether that patient is likely to deteriorate in the future," Verma told CBC News.
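To make that description concrete, here is a minimal sketch of an hourly deterioration-scoring loop. It is purely illustrative: the inputs, the scoring rule, and the alert threshold are assumptions, not details of the actual Chartwatch model.

```python
# Illustrative sketch only, not the actual Chartwatch implementation.
# A real system feeds ~100 routinely collected inputs into a trained model;
# this toy version hand-scores a few of them.
from dataclasses import dataclass

@dataclass
class ChartSnapshot:
    heart_rate: float     # beats per minute
    systolic_bp: float    # mmHg
    wbc_count: float      # x10^9 cells/L
    temperature_c: float  # degrees Celsius

def deterioration_risk(s: ChartSnapshot) -> float:
    """Toy stand-in for a trained model's hourly risk prediction."""
    score = 0.0
    if s.wbc_count > 11.0:      # above a typical reference range
        score += 0.4
    if s.temperature_c > 38.0:  # febrile
        score += 0.3
    if s.heart_rate > 100:      # tachycardic
        score += 0.2
    if s.systolic_bp < 90:      # hypotensive
        score += 0.3
    return min(score, 1.0)

ALERT_THRESHOLD = 0.5  # assumed cut-off, chosen for illustration

def hourly_check(s: ChartSnapshot) -> None:
    risk = deterioration_risk(s)
    if risk >= ALERT_THRESHOLD:
        print(f"ALERT: predicted deterioration risk {risk:.2f}, page the team")

# The cat-bite patient from the story: high WBC plus fever trips the alert.
hourly_check(ChartSnapshot(heart_rate=112, systolic_bp=118,
                           wbc_count=18.5, temperature_c=38.6))
```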

  • by hashish16 ( 1817982 ) on Wednesday September 18, 2024 @08:10AM (#64795671)
    and it's ripe for AI intervention. I work in medical diagnostics and I've been telling folks for years that AI will have the biggest impact augmenting doctors and nurses. What I've realized is that your average doctor is just that, average. In some areas AI is better, others it's worse. By bringing down the cost of testing, we can essentially replace many early stage medical professionals and achieve much better outcomes.
    • by Baron_Yam ( 643147 ) on Wednesday September 18, 2024 @08:20AM (#64795701)

      It'll be a fight - AI will be superior at finding non-obvious but common indicators of common issues. In other words, things humans might easily miss. Statistically it will be awesome and have a huge positive effect on outcomes.

      What it will also do is enable reliance on it and reduce humans detecting unusual things. "The AI says it's nothing". So if you're an outlier medically, this might not be great news.

      • by Luckyo ( 1726890 ) on Wednesday September 18, 2024 @10:45AM (#64796163)

        This is the "we need to keep radar operators" narrative. For those not in the know, the main human problem in scenarios that require constant vigilance is human adaptability. We adapt: "this is just like all the cases before it, so it must be the same."

        And the brain stops noticing the incoming German bomber attack on the radar screen, even though it's clearly visible to a commanding officer who walks in and glances at the very screen the operator has been watching without raising the alarm. It's a biologically hard-coded feature of the human mind.

        This is AI correcting for human flaws. As with all such corrections, in the beginning there will be some edge cases that automation will miss. Such events will be found and added to the library of things that automation must react to. And eventually we'll get to today's situation, where people generally aren't even trusted to look at raw sensor output at all. They get a collated view instead, fully pre-processed by automation. And so one radar operator does the job of a thousand or more operators from early WW2, and he does it far, far better.

      • by 0xG ( 712423 )

        Exactly this. It's a great adjunct, but the danger is in *relying* on it.

      • What it will also do is enable reliance on it and reduce humans detecting unusual things. "The AI says it's nothing". So if you're an outlier medically, this might not be great news.

        When AI becomes proficient at a task, the main thing we will rely on is consistency. A consistency we will never achieve with error-prone humans. Ever. Eventually, insuring or even employing humans in any AI-proven profession will increasingly be viewed as a liability.

        And quite frankly if you’re a medical outlier today, you’re already at risk of misdiagnosis. Humans aren’t really any better when they’re just as inexperienced with the rare and unknown.

      • by dvice ( 6309704 )

        In other words, you are saying that a human doctor is like picking 1 number out of 100 and hoping it's the winner, while with AI you get to pick 99 out of 100. You can still lose; it's just much less likely.

        So what exactly is the downside? It's not like your average doctor is going to get that rare disease right either.

      • It'll be a fight - AI will be superior at finding non-obvious but common indicators of common issues. In other words, things humans might easily miss. Statistically it will be awesome and have a huge positive effect on outcomes.

        What it will also do is enable reliance on it and reduce humans detecting unusual things. "The AI says it's nothing". So if you're an outlier medically, this might not be great news.

        The right way to use AI is as a tool, not as a replacement for a human expert. Expecting human replacement is a strawman that will only encourage skeptics and short-sellers; it's not necessary and not the right goal. This is true for medical diagnosis, for video creation, etc.

      • "The AI says it's nothing".

        The medical malpractice system will ensure that doesn't happen at any significant scale. Doctors still need to defend their decisions frequently, and a doctor that defers decisions to a computer rather than performing a diagnosis themselves will quickly find themselves stripped of their medical license, or at the very least sued out of existence.

        AI is a tool to identify things to go look at, not a decision maker. Even in this case it identified an abnormality, the diagnosis went to traditional humans to be

      • What it will also do is enable reliance on it and reduce humans detecting unusual things. "The AI says it's nothing". So if you're an outlier medically, this might not be great news.

        Most of the doctors I have seen had the exact same result as an AI. In other words, things would not get any worse if we switched to AI doctors. Running across a doctor who is willing to think and is not overfilled with arrogance is an almost miraculous event. At least Dr House was willing to think... even if it is all just stories.

    • Well of course, that is what science is.

      any science consists of:

      • Observation
      • Trying to see a pattern in the observations
      • Formulating a theory
      • Testing that theory
      • Repeat

      This was true for medicine in Roman days and medieval times, and it's equally true for modern medicine, music theory, physics, etc.

      So basically, any science is "one big decision tree" if you apply the existing theory.

      • Medicine is applying pre-existing solutions, not formulating theories and testing them. Insurance companies wouldn't allow it. This is why learning models are so helpful in this area. Diagnostic tests are cheap relative to an actual doctor. MD-PhD's are a different story, but the average patient is seeing an average doctor. Average doctors are very average at their job.
        • by GoTeam ( 5042081 )
          I heard about this really cool device that only needs a drop of your blood to run all kinds of diagnostic tests in a short period of time. The company is run by a dynamic young go-getter. I see great things in their future!!! I think the company is called Theranos...
          • by Hasaf ( 3744357 )
            I heard that the person leading the company that was working on this was sent to prison in order to keep her from disrupting the entire medical industry. Billions of dollars were at risk, there is no way "Big Medicine" could have allowed her to be successful!

            Or, just possibly she was just a fraud. . . I'll take conspiracy theory #1. It sounds better on late-night talk radio than the possibility that she was "just a fraud."
        • .... and 95% of the time that is all that is necessary. Your GP looks at your symptoms, applies their experience/training to them and sees that 95% of the time it points to XXXXXX issue, then he/she/it proposes YYYYY treatment.

          There do exist research hospitals that do indeed formulate new theories, but for that 95% of the time, your garden variety GP is just fine for you.

          When you have done the YYYYY treatment and it isn't working as expected, then your GP will look at that 5% of possibilities and prob

      • by gtall ( 79522 )

        "formulating a theory". Eh? It is the formulating a theory step that not part of the decision tree in that mechanizing it with AI won't be straightforward if at all doable.

        Think of AI formulating Einstein's relativity. Where's the training data that replaces Newton's equations with Einstein's? How would that training data indicate what form the new equations should take? What changes in the actual conception of the problem are simply hidden such that AI can find them? Even if it could, which conceptio

    • It's ripe for classical procedural intervention. No need for fuzzy matching or hallucinations. It can really just be threshold numbers preprogrammed in.

      Not to say AI can't help. But until AI became a buzzword, nobody seemed to bother with any of this automation, even though it could have been done more cheaply.
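      The threshold-only version really is just a few lines. A minimal sketch, with reference ranges that are illustrative placeholders rather than clinical values:

      ```python
      # Minimal threshold-only alerting, as described above.
      # The ranges are illustrative placeholders, not clinical guidance.
      REFERENCE_RANGES = {
          "wbc_count": (4.0, 11.0),       # x10^9 cells/L
          "heart_rate": (60.0, 100.0),    # beats per minute
          "temperature_c": (36.1, 37.2),  # degrees Celsius
      }

      def out_of_range(results: dict[str, float]) -> list[str]:
          """Return the names of any measurements outside their reference range."""
          return [name for name, value in results.items()
                  if not REFERENCE_RANGES[name][0] <= value <= REFERENCE_RANGES[name][1]]

      print(out_of_range({"wbc_count": 18.5, "heart_rate": 72.0, "temperature_c": 38.4}))
      # ['wbc_count', 'temperature_c']
      ```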

      • by jenningsthecat ( 1525947 ) on Wednesday September 18, 2024 @09:59AM (#64795987)

        No need for fuzzy matching or hallucinations. It can really just be threshold numbers preprogrammed in.

        I suspect that's very much not true. It's not just about "threshold numbers"; it's about correlating the numbers and evaluating relationships among them. For example, borderline high or low blood sugar may not be significant except in the context of some other measurement, symptom, or condition.

        As the number of combinations and permutations increases, it's about analysis and inference, not just thresholds. It seems to me that various flavours of (what we really shouldn't be calling) AI excel at that kind of thing.
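        A toy illustration of that point: the same borderline reading can be benign or alarming depending on context. The rule below is invented purely for illustration, not real clinical logic.

        ```python
        # Invented contextual rule: the same glucose value means different
        # things depending on other findings. All numbers are placeholders.
        def flag_glucose(glucose_mmol_l: float, on_steroids: bool, has_infection: bool) -> bool:
            if glucose_mmol_l > 11.0:
                return True   # clearly high on its own
            if glucose_mmol_l > 7.8 and (on_steroids or has_infection):
                return True   # borderline, but context makes it significant
            return False

        print(flag_glucose(8.5, on_steroids=False, has_infection=False))  # False
        print(flag_glucose(8.5, on_steroids=False, has_infection=True))   # True
        ```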

      • That isn't true for analyzing (volumetric) imagery like MRI scans, to find tumors for example. Recent "AI" deep nets do this job much, much better than any previous algorithms. That is more important than whether the appellation "AI" is a buzzword.
    • by Hadlock ( 143607 )

      Banfield vet/pet hospitals have already had this, at least since 2014. They're required to use the decision tree.

      And yeah human doctors aren't especially good at making decisions or thinking creatively. The medical industry optimizes for

      1. People with families rich enough to support them for 8-12 years of education and
      2. People who are really good at memorizing huge amounts of data for short periods of time
      3. Mental/physical fortitude/stamina to do 8-12 years of rigorous schooling

      None of

      • I already didn't like doctors much before CV19, but seeing their total lack of professionalism, authoritarian instincts toward censorship, poor advice based on political biases, conformity, and other terrible non-skeptical non-scientific practices, I'd much much much rather see an AI hooked up to some imaging gear. I have basically zero trust for doctors based on my experience with them. I'll take just about anything else if it will screw them over and put them in the same position as a sysadmin staring do
    • by tragedy ( 27079 )

      Based on my experiences with the medical system, one of the biggest problems is that the left hand never knows what the right hand is doing, and the typical patient has to deal with dozens of "left hands" with very little coordination in care. So patients get passed from nurse to nurse, doctor to doctor, with every one of them essentially starting from scratch and having no idea about the big picture for the patient. They rely on their protocols to handle each patient in what's essentially a vacuum. There a

    • The vast majority of doctors that I have visited were checklist doctors. They would follow the decision tree all the way down to "we don't know, so we will call it something generic like IBS." Of course, the test to check for C. diff was dropped by the lab and an erroneous result was returned. AI would have performed just as well as most of my human doctors.

      Of course, I do have a doctor now who went, "gee, that's strange" and reordered the test and hurray! I am cured. AI would have replaced the dozens of oth

  • ...AI tool increases MAID deaths 26%.

  • So is there now a 26% increase in expected deaths?
    • It would be a fairly small decrease in overall deaths, because a lot of them are late stage pneumonia, metastatic cancer, trauma wounds bleeding out, etc.

      So it makes sense to look only at how big a piece of that small number it is. Because unexpected often means preventable.

  • by evanh ( 627108 ) on Wednesday September 18, 2024 @08:23AM (#64795705)

    "The technology underwent several years of rigorous development and testing before it was deployed ..."

      Not only does it do just one narrow job, matching known alert conditions; it has also been carefully trained at that one job. No pillaging of unfiltered datasets.

    Basically, it's like any other computer program in that it is infinitely repeatable and relentless.

    • Re: (Score:3, Informative)

      by gweihir ( 88907 )

      Indeed. It also is not an LLM but an actually proven technology.

      • It also is not an LLM but an actually proven technology.

        Literally no one said it's an LLM. ... Wait. ... Do you think AI = LLM? I have a very low opinion of you but even I thought more highly of you than that.

  • AI? (Score:4, Insightful)

    by RobinH ( 124750 ) on Wednesday September 18, 2024 @08:37AM (#64795751) Homepage
    Is it really using AI or is it just a typical program looking for specific signals?
    • It's traditional AI, an expert system. Not generative garbage.

    • by unrtst ( 777550 )

      I wondered the same thing! And if it is making use of AI, I'd extend that to ask if the use of AI is in any way necessary to its operation?

      TFS says it "measures about 100 inputs" and alerts on changes or values that exceed expected norms. Those norms are things we already have codified (e.g., every time I get a blood test, the acceptable ranges are provided right along with the data), and detecting and alerting on changes is not difficult to codify. I'm quite curious about *how* they've made use of AI here,

      • I wondered the same thing! And if it is making use of AI, I'd extend that to ask if the use of AI is in any way necessary to its operation?

        TFS says it "measures about 100 inputs" and alerts on changes or values that exceed expected norms. Those norms are things we already have codified (e.g., every time I get a blood test, the acceptable ranges are provided right along with the data), and detecting and alerting on changes is not difficult to codify. I'm quite curious about *how* they've made use of AI here, and whether it's actually a better option than a manually coded analysis program would be.

        Also, if all that personal data is fed into an AI system for it to do this monitoring and prediction work, how secure is that data? I imagine that a big portion of this problem is getting all the data points in one place and correlating them. Makes for a good target.

        I think it's a combination.

        They're probably only training on data from that one hospital unit, so the raw readings alone probably aren't enough data for the model to predict outcomes.

        At the same time, various thresholds probably get exceeded a lot, so if you alert every time a threshold is exceeded, it's just spam.

        But if you do a bunch of feature engineering on the raw data (thresholds plus various relationships between readings), then the model has enough signal to make useful predictions. Something like the sketch below.

        But I also ag
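        A rough sketch of the kind of feature engineering described above. Every feature choice here is an assumption made for illustration; Chartwatch's actual inputs and model aren't public at this level of detail.

        ```python
        # Illustrative feature engineering: raw vitals -> derived features a
        # small model could learn from. All names and choices are assumptions.
        import numpy as np

        def build_features(history: np.ndarray) -> np.ndarray:
            """history: (n_hours, 3) array of [heart_rate, systolic_bp, wbc_count]."""
            hr, sbp, wbc = history[-1]
            return np.array([
                hr,
                sbp,
                wbc,
                hr / sbp,                                    # shock index: a relationship between readings
                history[:, 0].max() - history[:, 0].min(),   # heart-rate swing over the window
                float(wbc > 11.0),                           # threshold crossing as a binary feature
                np.diff(history[:, 2]).mean(),               # WBC trend across readings
            ])

        # Three hourly readings; the last one shows rising WBC and heart rate.
        hours = np.array([[88.0, 120.0, 9.5], [95.0, 118.0, 12.0], [112.0, 110.0, 18.5]])
        print(build_features(hours))
        ```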

      • I'm guessing that they used AI to help come up with the alerting thresholds but the actual implementation is just a lookup table. I have no direct involvement with the project and don't know. But that's at least the first approach that one should consider for this type of situation.
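        If that guess is right, the offline step might fit a cutoff per measurement and ship only the resulting table. A speculative toy version of "learn offline, deploy a lookup table":

        ```python
        # Speculative sketch: pick the per-measurement cutoff that best separates
        # patients who later deteriorated, then ship only the cutoffs.
        import numpy as np

        def learn_cutoff(values: np.ndarray, deteriorated: np.ndarray) -> float:
            """Brute-force the cutoff that best predicts deterioration (toy)."""
            best_cut, best_acc = float(values.min()), 0.0
            for cut in np.unique(values):
                acc = ((values > cut) == deteriorated).mean()
                if acc > best_acc:
                    best_cut, best_acc = float(cut), acc
            return best_cut

        # Made-up training data: WBC counts and whether the patient deteriorated.
        wbc = np.array([6.0, 7.5, 9.0, 12.5, 14.0, 18.5])
        outcome = np.array([False, False, False, True, True, True])
        RUNTIME_TABLE = {"wbc_count": learn_cutoff(wbc, outcome)}
        print(RUNTIME_TABLE)  # {'wbc_count': 9.0}: flag anything above this
        ```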
    • by spiryt ( 229185 )

      Is it really using AI or is it just a typical program looking for specific signals?

      I suppose there's more to AI than just the LLMs that are getting all the press these days (although I do hate the term 'AI', which has sort of become meaningless).

    • This sounds like something I worked on in the '90s, where a doc would get a page if an inpatient's metrics exceeded a sigma threshold.

      Where AI (ML) could be good is in looking at all of these data points together and finding patterns where a moving set of metrics leads to poor outcomes even while each individual metric stays inside its sigma band.

      That is to say, AI should be used here, but TFS doesn't want us to know whether it is or not.
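      For the curious, a '90s-style sigma pager like the one described might look like this; the three-sigma cut-off and per-patient baseline are assumptions:

      ```python
      # Sketch of a sigma pager: alert when a reading drifts more than
      # k standard deviations from the patient's own recent baseline.
      import statistics

      def sigma_alert(baseline: list[float], new_reading: float, k: float = 3.0) -> bool:
          mean = statistics.mean(baseline)
          sd = statistics.stdev(baseline)
          return abs(new_reading - mean) / sd > k

      heart_rates = [72, 75, 70, 74, 73, 71]  # patient's recent baseline
      print(sigma_alert(heart_rates, 76))     # False: within normal variation
      print(sigma_alert(heart_rates, 112))    # True: page the doc
      ```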

    • Is it really using AI or is it just a typical program looking for specific signals?

      It's AI in that it is built using machine learning. It's literally right there in the summary. We've been doing AI for close to a decade now; the whole drawing stupid pictures and telling people to glue cheese on pizza is just one tiny and massively over-hyped application of an ML system. That doesn't mean all AI = ChatGPT.

  • now the Private Equity companies that own all our hospitals will be able to cut nursing staff by 26%, adding much needed shareholder value!
    • by Anonymous Coward

      The only way to solve the issue is to do like Quebec did; all hospitals and healthcare treatments are owned and managed by the government without any private intervention. It's really the most efficient solution for good healthcare, and the best way to not get ripped off by those capitalist private companies.

      • The only way to solve the issue is to do like Quebec did; all hospitals and healthcare treatments are owned and managed by the government without any private intervention.

        Your government must somehow be MUCH better at managing things than our govt in the US....I'd not put so much faith in our govt making medical decisions for me.

        • The only way to solve the issue is to do like Quebec did; all hospitals and healthcare treatments are owned and managed by the government without any private intervention.

          Your government must somehow be MUCH better at managing things than our govt in the US....I'd not put so much faith in our govt making medical decisions for me.

          As opposed to a rent-seeking money-grubbing corporation doing it?

          • As opposed to a rent-seeking money-grubbing corporation doing it?

            As much as I deride the corps running it now....I'd still have to answer YES....still better than having the US federal govt in charge of anything involving my healthcare.

            There are things the govt must do, as that there is no other way. Say national defense....

            Anything the feds do is slow, inefficient, wasteful....so I'm only for letting them do what cannot be done any other way.

            So far, I'm happy with my healthcare...I've gotten timely

        • by zlives ( 2009072 )

          yes, we trust the insurance companies to deny treatments, and poor staff can be blamed for any deaths, including those caused by poor staffing.

          • yes we trust the insurance companies to deny treatments

            Never had that happen for anything that I've needed to have examined, tested or fixed.

  • by Arrogant-Bastard ( 141720 ) on Wednesday September 18, 2024 @09:41AM (#64795943)
    Before I get into this, it'd be instructive to study the history of Mycin [wikipedia.org], a simple inference engine from the 1970s specifically designed to diagnose infectious diseases. Mycin was quite a bit simpler than today's AI/LLM diagnostic models, and its rulesets were predetermined based on simple clinical rules, e.g., "if patient is exhibiting symptom X, then possible infections are A, B, and C". It was designed to mimic the diagnostic process (i.e., decision tree) used by clinicians. Valid critiques of Mycin included its ad hoc ruleset, but it's worth noting that, unlike human clinicians, Mycin never got tired, stressed, overworked, or forgetful. So in that sense, systems like Mycin could be a useful backstop, e.g., a second opinion that might, in some circumstances, catch something that slipped by a human.

    Now to this present case. Sure, this was done by an AI, but did it need to be? And when it was done, was that actually an AI function or just trend line analysis that could have been accomplished with far simpler software? Certainly if a patient's temperature increases (beyond normal diurnal variations) then that should be flagged for human scrutiny; if it increases rapidly or displays other unexpected behavior, that flag should be marked "urgent". And that's just one parameter: if it's examined in concert with others (e.g. blood pressure, heart rate, respiration rate, VO2) then a set of simple rules would suffice to catch most (but not all) deteriorating conditions that might get by overworked hospital staff.

    What I'm getting at is that this particular task shouldn't require AI for the most part. Yes, there are edge cases that won't get picked up by simplistic rulesets used in an inference engine, but then again it's questionable whether they'd be picked up by AI either. Realtime analysis of vital signs (supplemented by other monitoring as appropriate) should suffice to catch a heck of a lot of things and frankly should have already been in place. It's not computationally complex, it's repeatable, and it's not difficult to customize to the patient (age, weight, height, underlying conditions, etc.)
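    A sketch of the simple rule-based trend flagging described above; the rates and limits are placeholders, not clinical values:

    ```python
    # Rule-based vital-sign trend flagging, no ML. Placeholder limits only.
    from typing import Optional

    def flag_temperature(readings_c: list[float], hours_apart: float = 1.0) -> Optional[str]:
        """readings_c: temperatures ordered oldest to newest, evenly spaced."""
        if len(readings_c) >= 2:
            rate = (readings_c[-1] - readings_c[-2]) / hours_apart
            if rate > 0.5:            # rising fast, in degrees C per hour
                return "urgent"
        if readings_c[-1] > 38.0:     # beyond normal diurnal variation
            return "review"
        return None

    print(flag_temperature([36.9, 37.1, 37.0, 38.2]))  # 'urgent': rose 1.2 C in an hour
    ```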
  • Comment removed based on user account deletion
    • by Luckyo ( 1726890 )

      You've been told that AI will replace humans insofar as it will make humans much more efficient at their jobs. The job of countless WW2 radar operators is now handled by a single radar operator because of systems like this. Did it replace radar operators? No, you still need someone to operate the radars. But one operator can now do a job that used to require an army of them.

      Same will apply in medical field. Instead of an army of diagnosticians we run today, we'll have maybe one or two. Who will operate the

    • by ukoda ( 537183 )
      Does it really matter if this kind of AI can outperform humans? As long as the humans are not taken out of the loop, what you really have is a good tool that complements the humans. This example shows the right way to use such systems: notify the humans that something needs their review. The real risk is humans relying on AI too much and no longer doing what humans do best.
  • I can well believe that the analytical abilities of Chartwatch are, on their own, worth the cost and effort. But I wonder how much of this improvement in survival rate is down to that analysis, and how much of it is the result of having another set of 'eyes' on patients' charts.

    THESE eyes never need to sleep, and their efficiency isn't compromised by emotions, distractions, forgetfulness, or any number of other human characteristics which may lead to less-than-optimal attention and analysis. So they aren't

  • Google "wiring room experiment" to find out why this result is bogus.

  • I'm not a medical professional, but if someone has a "really, really high" white blood cell count, it shouldn't take an AI to figure out something is very wrong. Is this article basically telling us that people working in US hospitals are so bad at their jobs that a few simple rules (because this AI could be substituted by just a few "if" statements) applied to test results reduce mortality by 26%? I knew the US is just a 3rd world country pretending to be a superpower, but this is a new low, even for y
  • AI? Give me a break. It sounds like a program even I could write: If the white blood cell count is very high, alert a human immediately.

    I'm waiting for toothpaste that 'Now has AI in it!' to make people smarter. Snake oil Y2K...
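    Taken literally, the commenter's one-rule program; the 11.0 cut-off is an assumed placeholder for "very high":

    ```python
    # The one-rule program described above, taken literally.
    def check_wbc(wbc_count: float) -> None:
        if wbc_count > 11.0:  # assumed placeholder for "very high", x10^9 cells/L
            print("ALERT: white blood cell count very high; notify a human now")

    check_wbc(18.5)  # prints the alert
    ```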
  • They pretty much run a CBC if you walk in the door. Cheryl highlights anything outside of normal parameters. No AI required, just good old tech from 1995, a lab tech born in 1965, and a highlighter invented in 1955.

  • Can this tool catch things that are difficult to detect or that get misdiagnosed? My partner is a nurse in Ontario, Canada, and the single biggest issue is charts/reports not getting reviewed. There are many, many other issues, none of which are Ford's fault (for the haters), but that one is the major issue.

    This leads me to wonder if this tool is just a reviewer, and if it is, cool. Using my medical history as a gauge, the number of times X is listed in a test, but I don't find out for MONTHS, is annoyi
  • The AI purveyor can proclaim anything they want--but lying doesn't make it true.
  • What kinds of stats are we talking about? What number of unexpected deaths is 26%? One? Two? More? Certainly not all of their 13,000 admissions, or even everyone in the 84-bed unit of complex cases. What if staff simply introduced a system of double-checking each other's charts & diagnoses? How far would that decrease mortality? & how would the cost of that compare to systematically implementing this AI double-checking system?

    Although it has to be said, if it's true in medicine, as it is in other domains, expertise & qual
