GPT-4 Will Hunt For Trends In Medical Records Thanks To Microsoft and Epic (arstechnica.com) 54
An anonymous reader quotes a report from Ars Technica: On Monday, Microsoft and Epic Systems announced that they are bringing OpenAI's GPT-4 AI language model into health care for use in drafting message responses from health care workers to patients and for use in analyzing medical records while looking for trends. Epic Systems is one of America's largest health care software companies. Its electronic health records (EHR) software (such as MyChart) is reportedly used in over 29 percent of acute hospitals in the United States, and over 305 million patients have an electronic record in Epic worldwide. Tangentially, Epic's history of using predictive algorithms in health care has attracted some criticism in the past.
In Monday's announcement, Microsoft mentions two specific ways Epic will use its Azure OpenAI Service, which provides API access to OpenAI's large language models (LLMs), such as GPT-3 and GPT-4. In layperson's terms, it means that companies can hire Microsoft to provide generative AI services for them using Microsoft's Azure cloud platform. The first use of GPT-4 comes in the form of allowing doctors and health care workers to automatically draft message responses to patients. The press release quotes Chero Goswami, chief information officer at UW Health in Wisconsin, as saying, "Integrating generative AI into some of our daily workflows will increase productivity for many of our providers, allowing them to focus on the clinical duties that truly require their attention." The second use will bring natural language queries and "data analysis" to SlicerDicer, which is Epic's data-exploration tool that allows searches across large numbers of patients to identify trends that could be useful for making new discoveries or for financial reasons. According to Microsoft, that will help "clinical leaders explore data in a conversational and intuitive way." Imagine talking to a chatbot similar to ChatGPT and asking it questions about trends in patient medical records, and you might get the picture. Dr. Margaret Mitchell, chief ethics scientist at Hugging Face, is concerned about GPT-4's ability to make up information that isn't represented in its data set. Another concern is the potential bias in GPT-4 that might discriminate against certain patients based on gender, race, age, or other factors.
"Combined with the well-known problem of automation bias, where even experts will believe things that are incorrect if they're generated automatically by a system, this work will foreseeably generate false information," says Mitchell. "In the clinical setting, this can mean the difference between life and death."
So what (Score:1)
Remember when ppl worried about self-driving cars killing ppl, and then they did and we've moved on?
"GPT-4's ability to make up information that isn't represented in its data set."
Where do you think it learned that statistical trick from?
Re: So what (Score:1)
Why does injecting bleach seem rather tame to an ex-junkie?
Re: (Score:2)
Remember when ppl worried about self-driving cars killing ppl, and then they did and we've moved on?
You forgot the step where everybody rams defenses down our throats when self-driving cars kill people. "Well, it's still less than humans, and machines will always be better because $reasons." Can't wait to see the equivalent arguments when people ask the doctor about a hangnail and end up getting open-heart surgery that kills them, thanks to ChatGPT.
"GPT-4's ability to make up information that isn't represented in its data set."
Where do you think it learned that statistical trick from?
Re: So what (Score:2)
Re: (Score:2)
In ten years' time, sure. Today? I want ChatGPT as far away from my healthcare (in America? Yeah, right) as possible.
Re: (Score:2)
Chief Ethics Scientist at Hugging Face (Score:2)
"Another concern is the potential bias in GPT-4 that might discriminate against certain patients based on gender, race, age, or other factors"
Perhaps I'm picking nits here, but I would hope someone esteemed enough to be the MASTER CHIEF of ETHICS SCIENTISTS (what does that even mean?) would care about discrimination in its moral sense rather than expressing concern about the legal use of the term (i.e., the liability, i.e., companies losing money from lawsuits).
Re: (Score:1)
You can do better.
For example you could have pointed out that the discrimi
HIPAA? (Score:3)
Re: HIPAA? (Score:2)
Re: (Score:2)
Re: (Score:2)
"I am altering the deal. Pray I do not alter it any further."
It doesn't matter. If they want it, they will find some pseudo-legal way to get it.
Comment removed (Score:4, Interesting)
Oh yeah (Score:5, Insightful)
This will end well.
Re: (Score:2)
Yeah, this is the least appropriate use of a model like this. I mean, I'm sure it will separate a few investors from their money, but WTF are they thinking?
They Are Coming To Take Me Away (Score:3)
Ha Ha He He Hey Hey
This is silly (Score:2)
Dr. Margaret Mitchell, chief ethics scientist at Hugging Face
Re: (Score:2)
Our Benevolent Overlords (Score:2)
I love how the title makes it sound so benevolent. There is a profit motive, and you, the patient, are paying for it one way or another. Remember, the health care industry has no incentive to make you healthier, just to keep you alive and paying your bills.
And the entire world is going ga-ga over these generative algorithms, while they still aren't much more than a parlor trick that falls apart under scrutiny. You're all being taken in by the local fortune teller.
The whole model of AI class is "we just
Re: (Score:2, Interesting)
How come Roman bridges still stand
Romans had good engineers and better concrete than we do.
their architects thought heavier objects fell faster than light ones
They do. Give it a try. Or do you think the Romans lived in a vacuum?
Re: Our Benevolent Overlords (Score:1)
Is that Galileo calling, asking for his cannonball experiment back?
Re: (Score:2)
If I drop a baseball, a wad of paper, and an envelope, they'll hit the ground in that order. We even know why. The only question is why you'd deny such a simple and obvious fact.
Yes, we know that when air resistance isn't a significant factor, objects will fall at the same rate, but that doesn't mean they don't fall at different rates when air resistance is a significant factor.
Your lesson for the day: If you want to be pedantic, you also need to be specific. Try harder.
Also, Rome wasn't "the pre-e
Re: Our Benevelent Overlords (Score:1)
What if you are sliding slabs of stone down a ramp, as you might do in the course of bridge-building?
What if you are measuring a vertical distance by dropping stones?
Re: (Score:2)
Who cares? What bizarre point are you trying to make anyway?
Re: (Score:2)
Even a child today ought to know that you can use the tape-measure approach by tying a weight to a rope (a plumb-bob level with a long rope), lowering it to the ground, and then reading off the length of the rope.
The Romans certainly already knew better. As far as I know, they used tools like the dioptra, with which they measured height using the intercept theorem and then could calcu
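For the curious, the intercept-theorem measurement the parent comment mentions boils down to similar triangles: sight past a staff of known height so its tip lines up with the top of the distant object. A minimal sketch; all heights and distances below are invented for illustration:

```python
# Height by the intercept (similar-triangles) theorem, roughly as a
# surveyor with a sighting staff might use it. All numbers invented.

def height_by_intercept(staff_height: float,
                        dist_to_staff: float,
                        dist_to_object: float) -> float:
    """Sight from ground level past a staff whose tip lines up with the
    top of a distant object; similar triangles give the object's height."""
    return staff_height * dist_to_object / dist_to_staff

# A 2 m staff placed 5 m away lines up with a bridge pier 60 m away:
print(height_by_intercept(2.0, 5.0, 60.0))  # -> 24.0 m
```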
Re: (Score:2)
Romans did do their kind of engineering. They didn't need to understand the effects of atmospheric friction on a falling object to engineer good (still relatively small-scale) bridges. That is not an example of a pre-engineering world.
From what I can tell, what GP describes, regardless of whether I agree with it or not, is a time when trial and error was the primary method. That of course is a fine enough heuristic method for some application l
The perfect being the enemy of the good (Score:2, Insightful)
Plenty Will Doubt (Score:2)
It is unrealistic to expect perfection; however, does anyone really doubt that GPT-4 will be less biased and more accurate than human medical personnel?
Yeah, body-positivity activists, for one.
Re: The perfect being the enemy of the good (Score:4, Informative)
does anyone really doubt that GPT-4 will be less biased and more accurate than human medical personnel?
Yes, ANYONE knowing how LLMs work should doubt it. A human may ultimately review their own biases and keep them in check in the light of the specifics for a particular case, while the AI will accumulate the biases of everybody that has participated in their training set, with no mechanism to compensate for them.
Plus, the AI hallucinates information that is not even relevant but 'sounds appropriate' to the topic, so the accuracy of every single line generated by an AI should be put into question.
Re: (Score:1)
It is unrealistic to expect perfection; however, does anyone really doubt that GPT-4 will be less biased and more accurate than human medical personnel?
The question I am asking is: "Is GPT-4 more or less accurate right now than my current Primary Care Physician at answering my medical questions with answers tailored to her experience with me as a patient and as a person?"
The answer is no.
Currently I can choose my PCP. It looks like in the very near future I won't be able to know whether my PCP works for a company that considers patient communication a lesser priority in her workflow and forces her to use GPT-4 or work elsewhere.
My recent experience with inte
What about medical costs? (Score:2)
In past reports, AI was claimed to achieve higher accuracy than medical doctors in identifying certain health trends. Which leads to the question: what about medical costs? Will they go down because fewer health care professionals will be performing actual work on patient evaluation and/or reviewing the AI reports? Or will they increase because the gouging health care industry of Murrica always finds a way to make even more profits?
Re: (Score:2)
cloud compute fee will be billed to the US patient
Re: (Score:2)
Says it right there in the article. And of course, OP answered their own question with two equally possible options for increasing profit. The answer is: efficiency and gouging. But my guess is it's not for healing; in fact, that's tangentially mentioned in the summary as something they can do now because of this awesomeness.
This will NOT end well. (Score:2, Informative)
TV Show 'Next' (Score:1)
https://www.imdb.com/title/tt9... [imdb.com]
This show was about an AI called "NEXT" that leveraged data against people for its own purposes, and episode 10's summary says "NEXT taunts LeBlanc with a new tactic relating to his illness".
It was an interesting show, and I'm not saying whether it could happen, but some may find it interesting.
Re: (Score:2)
This show was about an AI called "NEXT" that leveraged data against people for its own purposes, and episode 10's summary says "NEXT taunts LeBlanc with a new tactic relating to his illness".
It was an interesting show, and I'm not saying whether it could happen, but some may find it interesting.
I heard a rumor ChaosGPT watched the entire miniseries and took plenty of notes... and that it is already hard at work writing new and improved firmware for Echos and smart TVs.
Re: (Score:2)
https://www.imdb.com/title/tt9... [imdb.com]
This show was about an AI called "NEXT" that leveraged data against people for its own purposes...It was an interesting show, and I'm not saying whether it could happen, but some may find it interesting.
I watched this show and couldn't make it past the third episode; I found it utterly terrifying and literally could not continue.
The situation is that the protagonist knows that NEXT is being duplicitous and has a problematic agenda, which of course NEXT sees as a 'problem' to be 'solved'. So, after a series of attempts to solve the problem, the protagonist takes his son and runs away, the son also having seen what NEXT can do (NEXT took over the home's Alexa speaker and started saying things to the son
Watson, come here, I want to see you (Score:3)
Applying A.I. without showing how it works (Score:1)
Here, not only will OpenAI not reveal how GPT-4 actually works, but the model will also be applied to sensitive data.
This is not about page ranking in search engines anymore.
Disambiguation (Score:2)
To possibly forestall some of the potential snark related to Epic... this is Epic Systems [wikipedia.org], not to be confused with Epic Games [wikipedia.org]. The two are decidedly unrelated.
$20 says it won't look for increases in myocarditis (Score:2)