
GPT-4 Will Hunt For Trends In Medical Records Thanks To Microsoft and Epic (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: On Monday, Microsoft and Epic Systems announced that they are bringing OpenAI's GPT-4 AI language model into health care, both to draft message responses from health care workers to patients and to analyze medical records for trends. Epic Systems is one of America's largest health care software companies. Its electronic health records (EHR) software (such as MyChart) is reportedly used in over 29 percent of acute care hospitals in the United States, and over 305 million patients worldwide have an electronic record in Epic. Tangentially, Epic's history of using predictive algorithms in health care has attracted some criticism in the past.

In Monday's announcement, Microsoft mentions two specific ways Epic will use its Azure OpenAI Service, which provides API access to OpenAI's large language models (LLMs), such as GPT-3 and GPT-4. In layperson's terms, it means that companies can hire Microsoft to provide generative AI services for them using Microsoft's Azure cloud platform. The first use of GPT-4 comes in the form of allowing doctors and health care workers to automatically draft message responses to patients. The press release quotes Chero Goswami, chief information officer at UW Health in Wisconsin, as saying, "Integrating generative AI into some of our daily workflows will increase productivity for many of our providers, allowing them to focus on the clinical duties that truly require their attention." The second use will bring natural language queries and "data analysis" to SlicerDicer, which is Epic's data-exploration tool that allows searches across large numbers of patients to identify trends that could be useful for making new discoveries or for financial reasons. According to Microsoft, that will help "clinical leaders explore data in a conversational and intuitive way." Imagine talking to a chatbot similar to ChatGPT and asking it questions about trends in patient medical records, and you might get the picture.
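For a concrete sense of what hiring Microsoft's Azure OpenAI Service looks like, here is a minimal sketch of the kind of request an EHR integration might make to draft a patient reply. The resource name, deployment name, and API version below are illustrative assumptions, not Epic's actual integration; the request shape follows Azure's chat completions REST endpoint.

```python
# Minimal sketch of drafting a patient-message reply through the Azure
# OpenAI Service REST API. The resource name, deployment name, and
# api-version are illustrative assumptions, not Epic's integration.
import os
import requests

AZURE_RESOURCE = "contoso-health"      # hypothetical Azure OpenAI resource
DEPLOYMENT = "gpt-4-drafts"            # hypothetical GPT-4 deployment name
API_VERSION = "2023-03-15-preview"     # chat completions preview, spring 2023

URL = (
    f"https://{AZURE_RESOURCE}.openai.azure.com/openai/"
    f"deployments/{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
)

def draft_reply(patient_message: str) -> str:
    """Return a draft reply for a clinician to review; never auto-send."""
    resp = requests.post(
        URL,
        headers={
            "api-key": os.environ["AZURE_OPENAI_KEY"],
            "Content-Type": "application/json",
        },
        json={
            "messages": [
                {"role": "system",
                 "content": "Draft a brief, plain-language reply to a "
                            "patient portal message. A clinician will "
                            "review and edit before anything is sent."},
                {"role": "user", "content": patient_message},
            ],
            "temperature": 0.2,  # keep drafts conservative
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(draft_reply("Can I take ibuprofen with my new prescription?"))
```

The key design point in the announcement is the word "draft": the output is meant to land in a clinician's review queue, not to be sent automatically.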
Dr. Margaret Mitchell, chief ethics scientist at Hugging Face, is concerned about GPT-4's ability to make up information that isn't represented in its data set. Another concern is the potential bias in GPT-4 that might discriminate against certain patients based on gender, race, age, or other factors.

"Combined with the well-known problem of automation bias, where even experts will believe things that are incorrect if they're generated automatically by a system, this work will foreseeably generate false information," says Mitchell. "In the clinical setting, this can mean the difference between life and death."
Comments Filter:
  • Remember when ppl worried about self-driving cars killing ppl, and then they did and we've moved on?

    "GPT-4's ability to make up information that isn't represented in its data set."

    Where do you think it learned that statistical trick from?

    • Remember when ppl worried about self-driving cars killing ppl, and then they did and we've moved on?

You forgot the step where everybody rams defenses down our throats for self-driving cars killing people. "Well, it's still less than humans, and machines will always be better because $reasons." Can't wait to see the equivalent arguments when people ask the doctor about a hangnail and end up getting open-heart surgery that kills them thanks to ChatGPT.

      "GPT-4's ability to make up information that isn't represented in its data set."

      Where do you think it learned that statistical trick from?

      • "Can't wait to see the equivalent arguments when people ask the doctor about a hangnail and end up getting open heart surgery that kills them thanks to ChatGPT." Well, stuff like that happens everyday with human doctors, the amount of mistakes that are made by healthcare people is staggering. In 10 years time, I'll probably rather have an AI doctor as a real human.
    • I'm sure its results, based on images of people with six to seven fingers on each hand and weirdly distorted facial features, will be perfectly cromulent when it comes to making medical decisions.
  • "Another concern is the potential bias in GPT-4 that might discriminate against certain patients based on gender, race, age, or other factors"

Perhaps I'm picking nits here, but I would hope someone esteemed enough to be the MASTER CHIEF of ETHICS SCIENTISTS (what does that even mean?) would care about discrimination in its moral sense rather than expressing concern about the legal use of the term (i.e., the liability, i.e., companies losing money from lawsuits).

  • by sinij ( 911942 ) on Tuesday April 18, 2023 @09:11PM (#63460502)
I consider using AI to process health data to be a HIPAA violation, as there is no guarantee it will remain anonymous.
    • Unfortunately, if you read the fine print, you invariably give the medical provider permission to use your data for medical research.
    • What, you mean you didn't grant permission to use your private data?

      "I am altering the deal. Pray I do not alter it any further."

      It doesn't matter. If they want it, they will find some pseudo-legal way to get it.

      • Re:HIPAA? (Score:4, Interesting)

        by Glyphn ( 652286 ) on Wednesday April 19, 2023 @05:30AM (#63460998)

        This is a complicated topic and I'm not going to be able to respond both concisely and with nuance. So, first, three concessions: Yes, people do or try to do illegal things. Yes, currently there are also legal provisions for accessing your health data in certain forms (e.g., de-identified, or grossly aggregated) and for certain purposes. And yes, I would concede that as health data pools grow the risks of misuse that exist will get higher and new ones will arise.

        However, when it comes to human data there are also a variety of legal controls, including but not limited to HIPAA, and the assertion that "they" will find some "pseudo-legal" way is, in my experience, often false.

        I say this because I've worked in this space for years. Someone will want to analyze data that they have access to. Or someone will want to sell data they have access to. In every instance I've been involved in, someone else with authority has spiked those inclinations hard. I've seen this at multiple companies, large and small.

        But yes, please fight for your data rights. We aren’t as well protected as I would like, and I believe it’s only going to get uglier. But we are also better protected than you state.
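For context on the "de-identified" pathway mentioned above: HIPAA's Safe Harbor method requires removing 18 categories of identifiers before data counts as de-identified. The toy sketch below only shows where such a scrubbing step would sit between an EHR and any cloud-hosted model; real pipelines use vetted de-identification tools and human review, not a handful of regexes.

```python
# Toy illustration of Safe Harbor-style scrubbing before records leave
# the hospital. Real de-identification uses vetted tools and review,
# not three regexes; this only shows where the step sits in a pipeline.
import re

PATTERNS = {
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[DATE]":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(note: str) -> str:
    """Replace a few identifier patterns with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        note = pattern.sub(token, note)
    return note

note = "Pt seen 4/18/2023, callback 608-555-0142. SSN 123-45-6789 on file."
print(scrub(note))
# -> "Pt seen [DATE], callback [PHONE]. SSN [SSN] on file."
```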

  • Oh yeah (Score:5, Insightful)

    by 93 Escort Wagon ( 326346 ) on Tuesday April 18, 2023 @09:12PM (#63460508)

    This will end well.

    • by narcc ( 412956 )

      Yeah, this is the least appropriate use of a model like this. I mean, I'm sure it will separate a few investors from their money, but WTF are they thinking?

  • by zenlessyank ( 748553 ) on Tuesday April 18, 2023 @09:18PM (#63460514)

    Ha Ha He He Hey Hey

  • Dr. Margaret Mitchell, chief ethics scientist at Hugging Face

• I love how the title makes it sound so benevolent. There is a profit motive, and you, the patient, are paying for it, in one way or another. Remember, the health care industry has no incentive to make you healthier, just to keep you alive and paying your bills.

    And the entire world is going ga-ga over these generative algorithms, while they still aren't much more than a parlor trick that falls apart under scrutiny. You're all being taken in by the local fortune teller.

    The whole model of AI class is "we just

• It is unrealistic to expect perfection, but does anyone really doubt that GPT-4 will be less biased and more accurate than human medical personnel?
• It is unrealistic to expect perfection, but does anyone really doubt that GPT-4 will be less biased and more accurate than human medical personnel?

      Yeah, body-positivity activists, for one.

    • by TuringTest ( 533084 ) on Wednesday April 19, 2023 @01:41AM (#63460812) Journal

      does anyone really doubt that GPT-4 will be less biased and more accurate than human medical personnel?

Yes, ANYONE knowing how LLMs work should doubt it. A human may ultimately review their own biases and keep them in check in light of the specifics of a particular case, while the AI will accumulate the biases of everyone represented in its training set, with no mechanism to compensate for them.

Plus, the AI hallucinates information that is not even relevant but "sounds appropriate" to the topic, so the accuracy of every single line generated by an AI should be called into question.
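One mitigation that follows from this point: treat every generated line as unverified until it can be traced back to the source record. Below is a deliberately crude sketch of a line-by-line grounding check using word overlap; production systems would use an entailment or citation-checking model, and the helper names and threshold here are invented.

```python
# Toy grounding check: flag draft sentences whose content words do not
# appear in the source record. A crude overlap heuristic for
# illustration only; flag_unsupported and its threshold are invented.
import re

def _words(text: str) -> list:
    """Lowercase content words longer than three characters."""
    return [w for w in re.findall(r"[a-z0-9]+", text.lower()) if len(w) > 3]

def flag_unsupported(draft: str, record: str, threshold: float = 0.5) -> list:
    """Return draft sentences that share too few words with the record."""
    source = set(_words(record))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", draft):
        content = _words(sentence)
        if not content:
            continue
        support = sum(w in source for w in content) / len(content)
        if support < threshold:
            flagged.append(sentence)  # route to a human for verification
    return flagged

record = "Patient prescribed lisinopril 10mg daily for hypertension."
draft = ("You are taking lisinopril for hypertension. "
         "Your last MRI showed improvement.")
print(flag_unsupported(draft, record))
# -> ['Your last MRI showed improvement.']
```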

• It is unrealistic to expect perfection, but does anyone really doubt that GPT-4 will be less biased and more accurate than human medical personnel?

The question I am asking is: "Is GPT-4 more or less accurate right now than my current Primary Care Physician at answering my medical questions with answers tailored to her experience with me as a patient and as a person?"

The answer is no.

Currently I can choose my PCP. It looks like, in the very near future, I won't be able to know whether my PCP works for a company that considers patient communication a lesser priority in her workflow and forces her to either use GPT-4 or work elsewhere.

      My recent experience with inte

• Past reports claimed AI achieved higher accuracy than medical doctors at identifying certain health trends. Which leads to the question: what about medical costs? Will they decrease because fewer health care professionals will be performing actual work on patient evaluation and/or reviewing the AI reports? Or will they increase because the gouging health care industry of Murrica always finds a way to make even more profit?

• The cloud compute fees will be billed to the US patient.

      • "or for financial reasons"
        Says it right there in the article. Since, of course, op answered own question with two equally possible options for increasing profit. The answer is: Efficiency and gauging. But my guess is not for healing, in fact that's tangentially mentioned in the summary as something they can do now because of this awesomeness.
  • by jasnw ( 1913892 )
Holy freaking hell, people, we're talking about things that are basically jumped-up neural networks here; there is absolutely NO intelligence involved in ANY of this. One of the big problems is that when one of these beasties has to deal with something outside what mathematicians call the span of its knowledge, meaning anything different from what it was trained on, nobody knows what gibberish the A"I" might come up with. This is where it gets scary. I can very clearly see a scenario wher
  • https://www.imdb.com/title/tt9... [imdb.com]

    This show was about an AI called "NEXT" that leveraged data against people for its own purposes, and episode 10's summary says "NEXT taunts LeBlanc with a new tactic relating to his illness".

    It was an interesting show, and I'm not saying whether it could happen, but some may find it interesting.

    • This show was about an AI called "NEXT" that leveraged data against people for its own purposes, and episode 10's summary says "NEXT taunts LeBlanc with a new tactic relating to his illness".

      It was an interesting show, and I'm not saying whether it could happen, but some may find it interesting.

I heard a rumor ChaosGPT watched the entire miniseries and took plenty of notes... and that it is already hard at work writing new and improved firmware for Echos and smart TVs.

    • https://www.imdb.com/title/tt9... [imdb.com]

      This show was about an AI called "NEXT" that leveraged data against people for its own purposes...It was an interesting show, and I'm not saying whether it could happen, but some may find it interesting.

I watched this show and couldn't make it past the third episode; I found it utterly terrifying and literally could not continue.

      The situation is that Protagonist knows that Next is being duplicitous and has a problematic agenda, which of course Next sees as a 'problem' to be 'solved'. So, after a series of attempts to solve the problem, Protagonist takes his son and runs away, with the son also having seen what Next can do (Next took over the home's Alexa speaker and started saying things to the son

  • by buzz_mccool ( 549976 ) on Wednesday April 19, 2023 @02:16AM (#63460832)
    What happened to the IBM health care AI?
  • With GPT-3 and GPT-3.5 there was at least an understanding of how it was implemented.

Here, not only won't OpenAI reveal how GPT-4 actually works, but it will also be applied to sensitive data.

    This is not about page ranking in search engines anymore.
  • To possibly forestall some of the potential snark related to Epic... this is Epic Systems [wikipedia.org], not to be confused with Epic Games [wikipedia.org]. The two are decidedly unrelated.

• $20 says it will not look for increases in myocarditis rates. That will be off-limits.
