AI Tool Cuts Unexpected Deaths In Hospital By 26%, Canadian Study Finds (www.cbc.ca)
An anonymous reader quotes a report from CBC News: Inside a bustling unit at St. Michael's Hospital in downtown Toronto, one of Shirley Bell's patients was suffering from a cat bite and a fever, but otherwise appeared fine -- until an alert from an AI-based early warning system showed he was sicker than he seemed. While the nursing team usually checked blood work around noon, the technology flagged incoming results several hours beforehand. That warning showed the patient's white blood cell count was "really, really high," recalled Bell, the clinical nurse educator for the hospital's general medicine program. The cause turned out to be cellulitis, a bacterial skin infection. Without prompt treatment, it can lead to extensive tissue damage, amputations and even death. Bell said the patient was given antibiotics quickly to avoid those worst-case scenarios, in large part thanks to the team's in-house AI technology, dubbed Chartwatch. "There's lots and lots of other scenarios where patients' conditions are flagged earlier, and the nurse is alerted earlier, and interventions are put in earlier," she said. "It's not replacing the nurse at the bedside; it's actually enhancing your nursing care."
A year-and-a-half-long study on Chartwatch, published Monday in the Canadian Medical Association Journal, found that use of the AI system led to a striking 26 percent drop in the number of unexpected deaths among hospitalized patients. The research team looked at more than 13,000 admissions to St. Michael's general internal medicine ward -- an 84-bed unit caring for some of the hospital's most complex patients -- to compare the impact of the tool among that patient population to thousands of admissions into other subspecialty units. "At the same time period in the other units in our hospital that were not using Chartwatch, we did not see a change in these unexpected deaths," said lead author Dr. Amol Verma, a clinician-scientist at St. Michael's, one of three Unity Health Toronto hospital network sites, and Temerty professor of AI research and education in medicine at University of Toronto. "That was a promising sign."
The Unity Health AI team started developing Chartwatch back in 2017, based on suggestions from staff that predicting deaths or serious illness could be key areas where machine learning could make a positive difference. The technology underwent several years of rigorous development and testing before it was deployed in October 2020, Verma said. "Chartwatch measures about 100 inputs from [a patient's] medical record that are currently routinely gathered in the process of delivering care," he explained. "So a patient's vital signs, their heart rate, their blood pressure ... all of the lab test results that are done every day." Working in the background alongside clinical teams, the tool monitors any changes in someone's medical record "and makes a dynamic prediction every hour about whether that patient is likely to deteriorate in the future," Verma told CBC News.
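Very roughly, the hourly monitoring loop the summary describes might look like the sketch below. Everything here is illustrative: the names (`PatientRecord`, `risk_score`, `hourly_check`), the handful of inputs, and the cutoffs are invented for the example, since Chartwatch's actual model and inputs are not public.

```python
# Hypothetical sketch of an hourly deterioration-prediction loop.
# Chartwatch uses a learned model over ~100 inputs; this stand-in uses
# three made-up inputs and hand-picked cutoffs purely for illustration.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    heart_rate: float    # beats per minute
    systolic_bp: float   # mmHg
    wbc_count: float     # white blood cell count, 10^9 cells/L

def risk_score(rec: PatientRecord) -> float:
    """Stand-in for the learned model: combine inputs into a 0-1 risk."""
    score = 0.0
    if rec.wbc_count > 11.0:   # above a typical reference range
        score += 0.4
    if rec.heart_rate > 100:   # tachycardia
        score += 0.3
    if rec.systolic_bp < 90:   # hypotension
        score += 0.3
    return min(score, 1.0)

ALERT_THRESHOLD = 0.5

def hourly_check(rec: PatientRecord) -> bool:
    """Run every hour against the latest chart; True means page the care team."""
    return risk_score(rec) >= ALERT_THRESHOLD
```

The key property is the last function: a fresh prediction is made every hour from whatever is newest in the chart, rather than waiting for a scheduled review of the blood work.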
Modern Medicine is just a glorified Decision Tree. (Score:5, Interesting)
Re:Modern Medicine is just a glorified Decision Tr (Score:5, Interesting)
It'll be a fight - AI will be superior at finding non-obvious but common indicators of common issues. In other words, things humans might easily miss. Statistically it will be awesome and have a huge positive effect on outcomes.
What it will also do is enable reliance on it and reduce humans detecting unusual things. "The AI says it's nothing". So if you're an outlier medically, this might not be great news.
Re:Modern Medicine is just a glorified Decision Tr (Score:4, Interesting)
This is the "we need to keep radar operators" narrative. For those not in the know, the main human problem in scenarios which require constant vigilance is human adaptability. We adapt: "this is just like all the cases before it, so it must be the same."
And the brain stops noticing an incoming German bomber attack on the radar screen, one that is clearly visible to a commanding officer who walks in and takes a look at the screen already being watched by an operator who isn't raising the alarm. It's a biologically hard-coded feature of the human mind.
This is AI correcting for human flaws. As with all such corrections, in the beginning there will be some edge cases that automation will miss. Such events will be found and added to the library of things that automation must react to. And eventually we'll get to today's situation, where people generally aren't even trusted to look at raw output of sensors at all. They get a collated view instead, one that is fully pre-processed by automation. And so one radar operator does the job of a thousand or more operators during early WW2, and he does it far, far better.
Re: (Score:2)
Exactly this. It's a great adjunct, but the danger is in *relying* on it.
Re: (Score:2)
What it will also do is enable reliance on it and reduce humans detecting unusual things. "The AI says it's nothing". So if you're an outlier medically, this might not be great news.
When AI becomes proficient in a task, the main thing we will rely on is consistency. A consistency we will never achieve with error-prone humans. Ever. Eventually, insuring or even employing humans in any AI-proven profession will be viewed as a liability.
And quite frankly if you’re a medical outlier today, you’re already at risk of misdiagnosis. Humans aren’t really any better when they’re just as inexperienced with the rare and unknown.
Re: (Score:2)
In other words, you're saying that a human doctor is like picking 1 number out of 100 and hoping it's the winner, while with AI you get to pick 99 out of 100. You can still lose, but it's much less likely.
So what exactly is the downside? It's not like your average doctor is going to get that rare disease right either.
Re: (Score:2)
It'll be a fight - AI will be superior at finding non-obvious but common indicators of common issues. In other words, things humans might easily miss. Statistically it will be awesome and have a huge positive effect on outcomes.
What it will also do is enable reliance on it and reduce humans detecting unusual things. "The AI says it's nothing". So if you're an outlier medically, this might not be great news.
The right way to use AI is as a tool and not as a replacement for a human expert. Expecting the human replacement is a strawman that will only encourage skeptics and short-sellers, but it's not necessary and not the right goal. This is true for medical diagnosis, for video creation, etc.
Re: (Score:2)
"The AI says it's nothing".
The medical malpractice system will ensure that doesn't happen at any significant scale. Doctors still need to defend their decisions frequently, and a doctor that defers decisions to a computer rather than performing a diagnosis themselves will quickly find themselves stripped of their medical license, or at the very least sued out of existence.
AI is a tool to identify things to go look at, not a decision maker. Even in this case, it identified an abnormality; the diagnosis went to traditional humans to be
Re: (Score:2)
What it will also do is enable reliance on it and reduce humans detecting unusual things. "The AI says it's nothing". So if you're an outlier medically, this might not be great news.
Most of the doctors I have seen had the exact same result as an AI. In other words, things would not get any worse if we switched to AI doctors. Running across a doctor who is willing to think and is not overfilled with arrogance is an almost miraculous event. At least Dr House was willing to think... even if it is all just stories.
Re: (Score:2)
Well, of course, that is what science is.
Any science consists of: formulating a theory, then applying the existing theory.
This is true for medicine in Roman days, in medieval times, modern medicine, music theory, physics, etc.
So basically, any science is "one big decision tree" if you apply the existing theory.
Re: (Score:3)
Or, just possibly, she was just a fraud... I'll take conspiracy theory #1. It sounds better on late-night talk radio than the possibility that she was "just a fraud."
Re: (Score:2)
There do exist research hospitals that do indeed formulate new theories, but for that 95% of the time, your garden variety GP is just fine for you.
When you have done the YYYYY treatment and it isn't working as expected, then your GP will look at that 5% of possibilities and prob
Re: (Score:2)
"formulating a theory". Eh? It is the formulating a theory step that not part of the decision tree in that mechanizing it with AI won't be straightforward if at all doable.
Think of AI formulating Einstein's relativity. Where's the training data that replaces Newton's equations with Einstein's equations? How does that training data indicate what form those new equations should take? What changes in the actual conception of the problem are simply hiding there for AI to find? Even if it could, which conceptio
Re: (Score:2)
It's ripe for classical procedural intervention. No need for fuzzy matching or hallucinations. It can really just be threshold numbers preprogrammed in.
Not to say AI can't help. But until AI became a buzzword, nobody seemed to bother doing any of this automation, even though it could have been done more cheaply.
Re:Modern Medicine is just a glorified Decision Tr (Score:4, Interesting)
No need for fuzzy matching or hallucinations. It can really just be threshold numbers preprogrammed in.
I suspect that's very much not true. It's not just about "threshold numbers" - it's about correlating the numbers and evaluating relationships among them. For example, borderline high (or low) blood sugar may not be significant except in the context of some other measurement, symptom, or condition.
As the number of combinations and permutations increases, it's about analysis and inference, not just thresholds. It seems to me that various flavours of (what we really shouldn't be calling) AI excel at that kind of thing.
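The parent's point can be shown with a toy contrast between a fixed cutoff and a context-dependent rule. The function names, the units (mmol/L), and the cutoffs here are all invented for illustration; a learned model would capture many such interactions without hand-coding each one.

```python
# Illustrative only: a preprogrammed threshold vs. a context-dependent rule.
# Cutoffs are made up for the example and are not clinical guidance.

def glucose_alert_fixed(glucose: float) -> bool:
    """The naive 'threshold numbers preprogrammed in' approach."""
    return glucose > 11.0  # mmol/L, a single hard cutoff

def glucose_alert_contextual(glucose: float, on_steroids: bool,
                             temp_c: float) -> bool:
    """A borderline value becomes significant only in context
    (e.g. fever or steroid treatment)."""
    if glucose > 11.0:
        return True
    return glucose > 8.5 and (on_steroids or temp_c > 38.0)
```

With ~100 inputs, the number of such pairwise and higher-order interactions explodes, which is exactly where a learned model beats a hand-maintained rule list.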
Re: (Score:2)
No matter how you slice it, the end result is ultimately going to be just combinations of thresholds.
I'm just speculating here, so it's quite possible that what I'm about to say is wrong. That said, I assume that the AI has ongoing access to new information which may not have been available the day before, never mind when the system was last updated. So some of those thresholds may be associated with criteria which previously hadn't even existed. Maybe they were even synthesized by the AI on the fly. They can get away with letting it do that because the system's output isn't prescriptive - it's just warnin
Re: (Score:2)
Banfield vet/pet hospitals have already had this, at least since 2014. They're required to use the decision tree.
And yeah, human doctors aren't especially good at making decisions or thinking creatively. The medical industry optimizes for:
1. People with families rich enough to support them for 8-12 years of education
2. People who are really good at memorizing huge amounts of data for short periods of time
3. Mental/physical fortitude/stamina to do 8-12 years of rigorous schooling
None of
Re: (Score:2)
Based on my experiences with the medical system, one of the biggest problems is that the left hand never knows what the right hand is doing, and the typical patient has to deal with dozens of "left hands" with very little coordination in care. So patients get passed from nurse to nurse, doctor to doctor, with every one of them essentially starting from scratch and having no idea about the big picture for the patient. They rely on their protocols to handle each patient in what's essentially a vacuum. There a
Re: (Score:2)
The vast majority of doctors that I have visited were checklist doctors. They would follow the decision tree all the way down to, "we don't know, so we will call it something generic like IBS". Of course, the test to check for C. diff was dropped by the lab and an erroneous result was returned. AI would have performed just as well as most of my human doctors.
Of course, I do have a doctor now who went, "gee, that's strange" and reordered the test and hurray! I am cured. AI would have replaced the dozens of oth
In unrelated news... (Score:1)
...AI tool increases MAID deaths 26%.
Unexpected Deaths? (Score:2)
Re: (Score:2)
It would be a fairly small decrease in overall deaths, because a lot of them are late stage pneumonia, metastatic cancer, trauma wounds bleeding out, etc.
So it makes sense to only look at how big a piece of the small number it is. Because unexpected often means preventable.
Re:mRNA vaccine deaths = unexpected deaths. (Score:5, Informative)
You morons are still around? Well, at least some of you died off from being non-vaccinated. But apparently not enough.
Incidentally, this has _never_ been controversial. It _always_ was insane FUD and nothing else. You probably take bleach, believe in blood-letting and think the world is flat as well.
Re: (Score:3)
vampires
Re: (Score:2)
Useless FUD. A spike protein binding to an ACE2 receptor can kill susceptible people if there's too much activity. Actual SARS-CoV-2 infections do that in greater quantities. An infection that is more common than influenza. By what other mechanism do you propose that it's dangerous?
Re: (Score:2)
This is no longer controversial.
It was never controversial. It was stupid. And it identified the people who repeated it as stupid.
Good use but nothing general about it (Score:5, Insightful)
"The technology underwent several years of rigorous development and testing before it was deployed ..."
Not only does it only do a single narrow job, of matching known alert conditions, it has also been carefully trained at that one job. No pillaging of unfiltered datasets.
Basically, it's like any other computer program in that it is infinitely repeatable and relentless.
Re: (Score:3, Informative)
Indeed. It also is not an LLM but an actually proven technology.
Re: (Score:2)
It also is not an LLM but an actually proven technology.
Literally no one said it's an LLM. ... Wait. ... Do you think AI = LLM? I have a very low opinion of you but even I thought more highly of you than that.
AI? (Score:4, Insightful)
Re: (Score:3)
It's traditional AI, an expert system. Not generative garbage.
Re: (Score:3)
I wondered the same thing! And if it is making use of AI, I'd extend that to ask if the use of AI is in any way necessary to its operation?
TFS says it, "measures about 100 inputs," and alerts on changes or those that exceed expected norms. Those norms are things we already have codified (ex. every time I get a blood test, the acceptable ranges are provided right along with the data), and detecting and alerting on changes is not difficult to codify. I'm quite curious about *how* they've made use of AI here,
Re: (Score:2)
I wondered the same thing! And if it is making use of AI, I'd extend that to ask if the use of AI is in any way necessary to its operation?
TFS says it, "measures about 100 inputs," and alerts on changes or those that exceed expected norms. Those norms are things we already have codified (ex. every time I get a blood test, the acceptable ranges are provided right along with the data), and detecting and alerting on changes is not difficult to codify. I'm quite curious about *how* they've made use of AI here, and if it's actually a better option than a manually coded analysis program would be.
Also, if all that personal data is fed into an AI system for it to do this monitoring and prediction work, how secure is that data? I imagine that a big portion of this problem is getting all the data points in one place and correlating them. Makes for a good target.
I think it's a combination.
They're probably only training on data from that one hospital unit, so just the raw readings probably isn't enough data for the model to predict outcomes.
At the same time various thresholds probably get exceeded a lot, so if you alert every time there's a threshold exceeded then it's just spam.
But if you do a bunch of feature engineering on the raw data (thresholds + various relationships between readings) then the model now has enough data to make useful predictions.
But I also ag
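The feature-engineering step the parent speculates about might look something like this. The feature names and the choice of derived quantities (deltas, means, the "shock index" ratio) are my own illustrative guesses, not anything documented about Chartwatch.

```python
# Sketch of turning raw vitals time series into derived model inputs:
# not just the latest level, but trends and relationships between readings.
# All feature names here are invented for illustration.
from statistics import mean

def engineer_features(hr_series: list[float],
                      sbp_series: list[float]) -> dict[str, float]:
    """Build model inputs from hourly heart-rate and systolic-BP readings."""
    return {
        "hr_last": hr_series[-1],
        "hr_trend": hr_series[-1] - hr_series[0],      # change over the window
        "hr_mean": mean(hr_series),
        "sbp_last": sbp_series[-1],
        "shock_index": hr_series[-1] / sbp_series[-1], # relationship between readings
    }
```

A rising heart rate with falling blood pressure can look unremarkable reading-by-reading while the derived trend and ratio features move sharply, which is the parent's point about raw thresholds alone producing either spam or silence.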
Re: (Score:1)
Is it really using AI or is it just a typical program looking for specific signals?
I suppose there's more to AI than just LLMs, which are getting all the press these days (although I do hate the term "AI", which has become sort of meaningless these days).
Re: (Score:2)
This sounds like something I worked on in the 90's where a doc would get a page if an inpatient's metrics exceeded a sigma threshold.
Where AI (ML) could be good is to look at all of these data points together and find patterns where a moving set of metrics would lead to poor outcomes inside of those individual sigmas.
That is to say, AI should be used here, but TFS doesn't tell us whether it is or not.
Re: (Score:2)
Is it really using AI or is it just a typical program looking for specific signals?
It's AI in that it is built using machine learning; it's literally right there in the summary. We've been doing AI like this for close to a decade now; the whole drawing-stupid-pictures and telling-people-to-glue-cheese-on-pizza thing is just one tiny and massively over-hyped application of an ML system. That doesn't mean all AI = ChatGPT.
This is great news (Score:1)
Re: (Score:1)
The only way to solve the issue is to do like Quebec did; all hospitals and healthcare treatments are owned and managed by the government without any private intervention. It's really the only efficient and best solution to have good healthcare and to not get ripped off by those capitalist private companies.
Re: (Score:1)
Your government must somehow be MUCH better at managing things than our govt in the US....I'd not put so much faith in our govt making medical decisions for me.
Re: (Score:3)
Your government must somehow be MUCH better at managing things than our govt in the US....I'd not put so much faith in our govt making medical decisions for me.
As opposed to a rent-seeking money-grubbing corporation doing it?
Re: (Score:1)
As much as I deride the corps running it now....I'd still have to answer YES....still better than having the US federal govt in charge of anything involving my healthcare.
There are things the govt must do, as that there is no other way. Say national defense....
Anything the feds do is slow, inefficient, wasteful....so, I'm only for letting them do what cannot be done any other way.
So far, I'm happy with my healthcare...I've gotten timely
Re: (Score:2)
Yes, we trust the insurance companies to deny treatments, and poor staff can be blamed for any deaths, including those caused by poor staffing.
Re: (Score:1)
Never had that happen for anything that I've needed to have examined, tested or fixed.
Insurers already lining up to buy it... (Score:2)
As the title says...
AI...but not really AI (Score:3)
Now to this present case. Sure, this was done by an AI, but did it need to be? And when it was done, was that actually an AI function or just trend line analysis that could have been accomplished with far simpler software? Certainly if a patient's temperature increases (beyond normal diurnal variations) then that should be flagged for human scrutiny; if it increases rapidly or displays other unexpected behavior, that flag should be marked "urgent". And that's just one parameter: if it's examined in concert with others (e.g. blood pressure, heart rate, respiration rate, VO2) then a set of simple rules would suffice to catch most (but not all) deteriorating conditions that might get by overworked hospital staff.
What I'm getting at is that this particular task shouldn't require AI for the most part. Yes, there are edge cases that won't get picked up by simplistic rulesets used in an inference engine, but then again it's questionable whether they'd be picked up by AI either. Realtime analysis of vital signs (supplemented by other monitoring as appropriate) should suffice to catch a heck of a lot of things and frankly should have already been in place. It's not computationally complex, it's repeatable, and it's not difficult to customize to the patient (age, weight, height, underlying conditions, etc.)
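A minimal version of the rule-based trend analysis proposed above, for a single parameter, might look like the following. The cutoffs (38.0 C, 0.5 C/hour, the 0.8 C diurnal allowance) are invented for illustration, not clinical guidance, and a real implementation would customize them per patient as the parent suggests.

```python
# Simple rule-based trend analysis for one vital sign (temperature).
# Cutoffs are illustrative placeholders, not clinical values.

def classify_temperature_trend(temps_c: list[float],
                               hours_apart: float = 1.0) -> str:
    """Classify a series of evenly spaced temperature readings.

    Returns "urgent" for a rapid rise into fever, "flag" for a rise
    beyond normal diurnal variation, "ok" otherwise.
    """
    rise = temps_c[-1] - temps_c[0]
    rate = rise / (hours_apart * (len(temps_c) - 1))  # degrees C per hour
    if temps_c[-1] >= 38.0 and rate > 0.5:
        return "urgent"
    if rise > 0.8:  # beyond a typical diurnal swing (~0.5 C, assumed)
        return "flag"
    return "ok"
```

This is computationally trivial and repeatable, which supports the parent's point; the open question is how much extra lift the learned multi-parameter model adds over a battery of such rules.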
Re: (Score:2)
You've been told that AI will replace humans only insofar as it makes humans much more efficient at their jobs. The job of countless radar operators of WW2 is now handled by a single radar operator because of systems like this. Did it replace radar operators? No, you still need someone to operate the radars. But one operator can now do the job of what used to require an army of them.
The same will apply in the medical field. Instead of the army of diagnosticians we run today, we'll have maybe one or two. Who will operate the
More than one mechanism? (Score:2)
I can well believe that the analytical abilities of Chartwatch are, on their own, worth the cost and effort. But I wonder how much of this improvement in survival rate is down to that analysis, and how much of it is the result of having another set of 'eyes' on patients' charts.
THESE eyes never need to sleep, and their efficiency isn't compromised by emotions, distractions, forgetfulness, or any number of other human characteristics which may lead to less-than-optimal attention and analysis. So they aren't
wiring room (Score:2)
Google "wiring room experiment" to find out why this result is bogus.
How bad is the situation in US? (Score:1)
AI? My eye... (Score:2)
I'm waiting for toothpaste that 'Now has AI in it!' to make people smarter. Snake oil Y2K...
Weird. My doctor just has Cheryl. (Score:2)
They pretty much run a CBC if you walk in the door. Cheryl highlights anything outside of normal parameters. No AI required, just good old tech from 1995, a lab tech born in 1965, and a highlighter invented in 1955.
Cool, but isn't this just a review tool? (Score:2)
This leads me to wonder if this tool is just a reviewer, and if it is, cool. Using my medical history as a gauge, the number of times X is listed in a test, but I don't find out for MONTHS, is annoyi
Nonsense (Score:2)
"unexpected deaths" (Score:2)
Although it has to be said, if it's true in medicine, as it is in other domains, expertise & qual