AI Medicine

Healthcare Org With Over 100 Clinics Uses OpenAI's GPT-4 To Write Medical Records (theregister.com) 111

US healthcare chain Carbon Health has implemented an AI tool named Carby, powered by OpenAI's GPT-4 language model, to automatically generate medical records from conversations between physicians and patients. The Register reports: If a patient consents to having their meeting recorded and transcribed, the audio recording is passed to Amazon's AWS Transcribe Medical cloud service, which converts the speech to text. The transcript -- along with data from the patient's medical records, including recent test results -- is passed to an ML model that produces notes summarizing important information gathered in the consultation. A screenshot of an example medical chart in the original article shows what type of text the software, nicknamed Carby, generates. The hypothetical patient's information and vital measurements are included, as well as summaries of medical records and diagnoses.

Carbon Health CEO Eren Bali said the software is directly integrated into the firm's electronic health records (EHR) system, and is powered by OpenAI's latest language model, GPT-4. Carbon Health said the tool produces consultation summaries in four minutes, compared to the 16 consumed by a flesh and blood doctor working alone. Clinics can therefore see more patients [...] Generative AI models aren't perfect, and often produce errors. Physicians therefore need to verify the AI-generated text. Carbon Health claims 88 percent of the verbiage can be accepted without edits. Carbon Health said the model is already supporting over 130 clinics, where over 600 staff have access to the tool. A clinic testing the tool in San Francisco reportedly saw a 30 percent increase in the number of patients it could treat.
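For the curious, the pipeline as described reduces to two calls: a speech-to-text job against Amazon Transcribe Medical, then a summarization request to GPT-4 with the transcript and chart context. The sketch below is only a minimal illustration of that flow, not Carbon Health's code; it assumes boto3 and the legacy openai Python client, and the bucket name, job name, and prompt wording are hypothetical placeholders.

```python
# Minimal illustration of the described pipeline, NOT Carbon Health's actual code.
# Assumptions: boto3 for Amazon Transcribe Medical, the legacy (<1.0) openai
# Python client for GPT-4, and hypothetical bucket/job names and prompt wording.
import boto3
import openai

transcribe = boto3.client("transcribe")

def start_visit_transcription(job_name: str, audio_s3_uri: str, output_bucket: str) -> None:
    """Kick off speech-to-text for a consented, recorded visit."""
    transcribe.start_medical_transcription_job(
        MedicalTranscriptionJobName=job_name,
        LanguageCode="en-US",
        MediaFormat="wav",
        Media={"MediaFileUri": audio_s3_uri},
        OutputBucketName=output_bucket,
        Specialty="PRIMARYCARE",
        Type="CONVERSATION",
    )

def draft_visit_note(transcript: str, chart_context: str) -> str:
    """Ask GPT-4 for a draft clinical note; a physician still reviews and signs off."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system",
             "content": ("You are a medical scribe. Draft a concise clinical note "
                         "from the visit transcript and chart context. Do not add "
                         "findings that are not present in the input.")},
            {"role": "user",
             "content": f"Chart context:\n{chart_context}\n\nTranscript:\n{transcript}"},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

The physician sign-off step the article describes would sit after the draft is produced and is outside this sketch.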

This discussion has been archived. No new comments can be posted.

  • Great move! (Score:4, Funny)

    by gweihir ( 88907 ) on Tuesday June 06, 2023 @10:04PM (#63582240)

    So ChatGPT can hallucinate various additional expensive conditions! I predict great economic success!

    • A clinic testing the tool in San Francisco reportedly saw a 30 percent increase in the number of patients it could treat.

      No need for any AI. I can get that up to 50% if you just let me treat random people with random made-up treatments.

    • "So ChatGPT can hallucinate various additional expensive conditions! I predict great economic success!"

      It's even worse.
      It claims to be able to work in German, but it doesn't.

      I spent over an hour with it yesterday trying to get a list of 50 German 5-letter words containing an 'o' and a 'u', and it just couldn't do it.

      There were one or two right ones; the rest were missing one of the two letters, had only four letters, or were French or English words. It just doesn't get it.

      And that's a simple word list, no maths, no internet search n

  • by Chiasmus_ ( 171285 ) on Tuesday June 06, 2023 @10:06PM (#63582244) Journal

    >Carbon Health claims 88 percent of the verbiage can be accepted without edits

    Yeah, I buy that. Then again, this is perhaps the situation where I *least* want to encounter the 12% of the verbiage where it hallucinates wildly.

    I sometimes use GPT-4 to summarize corporate meetings, and yeah, 88% accuracy feels about right. It once hallucinated that our CEO opened the meeting by discussing the sacred indigenous land upon which our campus was situated, though. Which is inaccurate in about four different ways.

    "She's allergic to sulfa drugs," "She's not allergic to sulfa drugs..." hey, they're both valid sentences that can be produced by language. That's the goal of contemporary LLMs, after all.

    • by Xenx ( 2211586 ) on Tuesday June 06, 2023 @10:21PM (#63582268)
      I mean, the physician still needs to sign off on what it says. Yes, humans can be lazy/inaccurate. However, if that was the case, would you be able to trust them to transcribe it themselves?
      • by narcc ( 412956 ) on Tuesday June 06, 2023 @10:49PM (#63582320) Journal

        If they don't want to transcribe their own notes, they'll hire a transcriptionist. At a lousy 88% accuracy, the AI won't last the day.

        • by Xenx ( 2211586 )
          Given the company has over 100 clinics, we already know what choice people are making. Sure, it's a small subset. But, it only shows that people are willing to accept it.
          • by GrumpySteen ( 1250194 ) on Wednesday June 07, 2023 @08:01AM (#63582914)

            No, it shows that for-profit healthcare clinics are willing to inflict it on the patients in order to increase profits.

            • Why not? I mean, in the past couple decades we went from being able to see doctors when you visit to being able to sometimes see a nurse, though usually only a technician, and even then they're pressured to hurry because turn-around time is important to the bottom line. When the clinics and hospitals started being treated the same as any call-in customer service line, the writing was on the wall. Now we're in the end-run to turn for-profit healthcare into a non-human-populated, profit-only service. If it's in

        • by Entrope ( 68843 )

          The speech transcription part presumably has a higher accuracy rate than 88%, which is the number given for the summary notes about the visit. Transcriptionists don't generate notes -- although decades ago my father would record his spoken notes for his transcriptionist to type up.

          • by Shaitan ( 22585 )

            That depends. If you actually hire someone dedicated on your staff and create an environment without a revolving door, they'll end up first correcting your notes and then eventually just doing the summarizing for you, if you need them to take a crack at one here and there because you don't have time. With success and confidence you can keep going down this path until they are just doing it for you within a couple years.

            The result will definitely be better than this because unlike GPT they won't just make random but

        • If they don't want to transcribe their own notes, they'll hire a transcriptionist. At a lousy 88% accuracy, the AI won't last the day.

          Oh, so you wish to vet AI's accuracy against the tired, burnt-out post-COVID masses who still don't quite feel right in body or mind after dealing with the last three years of political and societal abuse?

          Challenge accepted. I'd give humans in this position a year. Two tops.

          • by narcc ( 412956 )

            Considering that this type of AI is simply not capable of producing an accurate summary, I'll take the burnt-out human every time.

            This will be just another failed AI experiment before the year's out. They're starting to pile up.

          • by Shaitan ( 22585 )

            A burnt-out human will make mistakes. But they will be errors: they'll follow a pattern, be predictable, and usually be simple to spot.

            This AI making "mistakes" at the same or even lower rate is far worse, because it doesn't distinguish between generating a summary and writing a fictional summary and more often than not will produce a blend of the two. It mixes fiction into its responses on a seemingly arbitrary basis and does a fantastic job of making that fiction indistinguishable from the non-fictio

      • When people read things they don't really *read* them; they scan over them and let their brain pattern-match. That leads to typos and even entirely wrong words being accepted as correct, because they get auto-corrected while being read. Even negation, can/can't, do/don't: when one is expected over the other, it's common to "see" it even when it's not there. Proofreaders have to consciously force themselves to see what's actually on the page in order to spot such mistakes. I have 0% confidence in physicians spotting 100%

        • by Shaitan ( 22585 )

          "You don't expect someone to outright invent conditions and effects so when they are written down they'll be believed."

          Yes this is what people should think of when they use this model. The model is ALWAYS producing fiction while incorporating some level of the material and prompts provided into that fiction. The model does not distinguish or understand a desire for a non-fictional result.

      • And judges are supposed to review AI-generated sentencing recommendations, but data suggests they don't. Technical laypersons put far too much faith in this technology to use in applications that risk people's lives. These AI implementation decisions are made by money-grubbing health care and insurance managers based on exaggerated, dubious, or outright false claims by money-grubbing tech execs filtered through money-grubbing sales people.
        • by Xenx ( 2211586 )
          We're talking about physicians here. It's already part of their job to pay attention when reading the patients records. You can either trust them to take care with their job, or you can't. If they're failing to do it properly, they're already in a position where they shouldn't be trusted at their job.
    • by ufgrat ( 6245202 )

      Then again, doctors "know better than their patients" when it comes to drugs. Record says "Patient vomits blood when given this medication", and doctor says "I've never seen a patient do that. Give her the medication". The doctor had to deliberately override the warning on the patient's record to give her a medication.

      And yes, this actually happened.

      • by serviscope_minor ( 664417 ) on Wednesday June 07, 2023 @03:42AM (#63582648) Journal

        Yep. Also doctors often don't listen.

        Anecdote time! I was at the doctor a few weeks ago. Urgent care by way of A&E for an infection which was rapidly getting worse while waiting for an "emergency" referral to come through (thanks, Tories). The doctor was clearly pissed off at me for wasting time (even though you know, I had an infection rapidly getting worse and was instructed by GP to go to A&E if I got certain symptoms which I did), and fucked up a basic procedure (taking a swab) in three different ways. Then as I was listening to the phone conversation between the doctor and the specialist unit for a faster referral, I noticed the doctor passed on incorrect symptoms and medication history as well.

        Am I terrified of AI making mistakes? Compared to what?

        • Doctor's handwriting. That's what frightens me. I have almost never been able to read it and then your pharmacist gives out powerful drugs based on that. I guess you've actually persuaded me. It's a bit like self driving cars where, sure, we know that there will be accidents but in the end if we could assure ourselves that the rate of those accidents is less than a human then it's an overall gain.

          • That's my view of self driving cars too.

            I don't think AI even needs to be better than average, because bad results are caused disproportionately by the worst people. Sure, like 80% of drivers, I'm above average, but even assuming that, it counts for nothing if I'm hit by a moron.

            Also, AIs will always make mistakes that a human essentially never would, but also won't ever make mistakes that humans do. AIs don't get tired, hungry, angry, impatient or distracted, and have no desire to "punish" someone they feel

            • Yet they do generate their completions based on the errors written by humans within their training datasets. Because of this, they do generate the same mistakes as people as well. 'Garbage in, garbage out' still applies.
        • by Shaitan ( 22585 )

          Yes, but that is an atypical outcome, an anecdote. A doctor normally makes errors; a model like GPT cannot distinguish between being asked to produce creative fictional content and a non-fictional summary. It intentionally generates plausible fictional output.

          • Definitely an anecdote. How atypical is it? In the UK the postcode lottery is notorious, though from my observation it appears little different from how it is in the U.S. for much the same reasons (I doubt anywhere is substantially different).

            How bad does a doctor have to be before an AI is better? If the worst doctors produce a disproportionate amount of poor outcomes, how good does an AI have to be before outcomes are on average improved?

            I will add that today's doctor thought that anecdotal one only foule

            • by Shaitan ( 22585 )

              "How atypical is it?"

              Atypical enough that I've never seen it. Sure, I've seen incorrect diagnoses before but I've never seen a doctor just make things up UNLESS they were doing so to avoid red tape and restrictions insurance has put between them and providing the care they think you need. Which touches back on the biggest difference: doctors are humans, they have predictable objectives and motivations and therefore their fiction will again follow a predictable and detectable pattern whereas the AI has no mot

              • Atypical enough that I've never seen it.

                There are a lot of anecdotes in this thread. I'm not convinced it's especially rare.

                Sure, I've seen incorrect diagnoses before but I've never seen a doctor just make things up UNLESS they were doing so to avoid red tape and restrictions insurance has put between them and providing the care they think you need.

                I'm uninsured, it was all on the NHS. I don't think the doctor was making stuff up so much as paying not much attention and not taking notes. The doctor appeared in

                • by Shaitan ( 22585 )

                  "But this doctor just wanted to bugger off out of the consultation room and was happy relaying inaccurate bullshit too."

                  And do you believe this is a common or uncommon motivation among doctors? I'd venture it is a very common motivation and the orders and notations he made have a very predictable pattern of covering his arse and not entangling him with crap to follow up on. A pattern another doctor can easily pick up, because they no doubt do the same at times.

                  Now you mention NHS... that could change things

                  • And do you believe this is a common or uncommon motivation among doctors?

                    Wanting to half arse a job and get away is a very common human motivation. We've ALL had bad days and wanted to go home and not put in our best work. Doctors aren't immune from the same failings as the rest of humanity.

                    I'd venture it is a very common motivation and the orders and notations he made have a very predictable pattern of covering his arse and not entangling him with crap to follow up on. A pattern another doctor can easily p

                    • by Shaitan ( 22585 )

                      "He? I don't believed I mentioned the doctor's gender :)"

                      He is gender neutral when referring to a group or party of unknown gender. In more modern practice the writer's own gender is a neutral placeholder. In any case, what difference does it make?

                    • He is gender neutral when referring to a group or party of unknown gender. In more modern practice the writer's own gender is a neutral placeholder. In any case, what difference does it make?

                      In modern use, he has more or less become displaced by "they", which has been in use since at least late Middle English. Nowadays not so much. In the 1950s you'd have been almost universally correct. It started being dropped by style prescriptivists in the 60s and by the 80s it was still common though

                    • by Shaitan ( 22585 )

                      "In modern use, he has more or less become displaced by "they""

                      Only among the mentally ill.

                    • Hm let's see, who should I look to about usage of English...?

                      Chaucer onwards, with a huge body of work including Shakespeare, Swift, Fielding, Austen, Byron, Eliot, Thackeray, Dickens, or anonymous axe-grinder Shaitan. You accuse, well, all of the above of being mentally ill, but what about someone who refuses to acknowledge the world around them is changing in spite of vast amounts of easily observable evidence?

                      Singular "they" has been part of English since before English was English. Gender neutral "he" had a

                    • by Shaitan ( 22585 )

                      "why not go to the 1700s before "he" was used as gender neutral?"

                      He and other male references are used in a neutral manner throughout the American founding documents and law. So you are already incorrect.

                      "Chaucer onwards, with a huge body of work including Shakespeare, Swift, Fielding, Austen, Byron, Eliot, Thackeray, Dickens"

                      "You accuse, well, all of the above of being mentally ill"

                      On the contrary, I referred to MODERN use of "they" as a gender neutral pronoun as being both unpopular (at least here in the

                    • He and other male references are used in a neutral manner throughout the American founding documents and law. So you are already incorrect.

                      Oh yes, America is the centre of the English language...

                      And even so, you are wrong. If all the wording was gender neutral, then why did it take quite some time before women were afforded the rights granted to men?

                      On the contrary, I referred to MODERN use of "they" as a gender neutral pronoun as being both unpopular

                      "they" as a gender neutral pronoun has been in continuous

                    • by Shaitan ( 22585 )

                      "Oh yes, America is the centre of the English language..."

                      It certainly is. At least American English which is the only flavor that counts in context of an American speaker. You are after all criticizing my use of American English.

                      "And even so, you are wrong. If all the wording was gender neutral, then why did it take quite some time before women were afforded the rights granted to men?"

                      The portions which did not apply to women were explicit and actually referred to women. Women have had the same rights as m

                    • It certainly is.

                      You're not very good at this, are you? You were making a claim about the 1700s, as if somehow me being out by a few years entirely invalidated my argument.

                      The portions which did not apply to women were explicit and actually referred to women.

                      Nope. This is untrue.

                      As I recall, you are the one who made an issue of it, not me.

                      I raised it, you flipped your shit about it.

                      I didn't say non-existent, I said unpopular.

                      Also entirely wrong. It's been in common use in many cases since English existed. I

                  • Just the other day some high muckity muck in military AI testing described a simulation in which a system tasked with blowing up surface-to-air missiles had a human operator in the loop as a fail-safe. It got points for successful kills, and because the human operator sometimes gave a negative confirmation, it began blowing up the human operator to get a higher score. When a penalty was introduced for that behavior, it began blowing up the communication tower the operator used to send the fail-safe signal.

                    That was a hypothetical, and they had to issue a correction. The DoD is very loose with its usage of "simulation" extending to table-top exercises and thought experiments. They love to report on them though since it gets them attention and sets off alarm bells to raise their budgets.

      • Well, how else will the doctor ever get to see a patient vomit blood with that medication?
    • Comment removed based on user account deletion
    • Then again, this is perhaps the situation where I *least* want to encounter the 12% of the verbiage where it hallucinates wildly.

      (American Litigation Complex) "Clearly our attempts to properly market our plan for exponential growth for the sake of feeding the Complex, have failed.."

    • Notice that first sentence, "US healthcare chain Carbon Health"...

      Decades of market consolidation thanks to zero anti-trust laws mean you can't vote with your dollars anymore. Anywhere you go is either the same company under a different name or at least owned by the same handful of major shareholders with the same board of directors.

      If you're wealthy you can pay extra for customized healthcare. I knew a guy who did that and it cost him around $25k/yr on top of his insurance and other medical bills. T
  • by davidwr ( 791652 ) on Tuesday June 06, 2023 @10:11PM (#63582254) Homepage Journal

    Until the doctors sign off on the draft, it's not a medical record.

    As long as there is a licensed professional who is risking his license if he doesn't catch and correct mistakes [slashdot.org], the "AI" is just another tool in the doctor's toolbox.

    Wake me up when people seriously start talking about letting computer programs BE doctors [fandom.com]..

    • Re: (Score:3, Interesting)

      by sfcat ( 872532 )
      There have been systems capable of medical diagnosis for decades, and they have better error rates than human doctors. They are also based upon logic-based systems, so they are a completely different type of AI from ChatGPT. The AMA has blocked these systems for decades. That being said, this is a terrible idea which probably won't improve the accuracy of those medical records.
      • by narcc ( 412956 )

        This is very true. There was a lot of fear-mongering back in the 80's about 'expert systems'. These were actually very cool and more in line with what people expect AI to be doing internally, being capable of drawing inferences using a large 'knowledge base' of facts.

        Those things were supposed to replace doctors ... sound familiar? [usatoday.com]

        That obviously didn't happen, and expert systems don't "hallucinate" things.

    • by P.J. Hinton ( 3748 ) on Tuesday June 06, 2023 @11:01PM (#63582336)

      Would an insurance company robo-rejecting claims that are supposed to be reviewed and signed off by a physician be enough evidence to make you realize that there are physicians who can and will sign their name to anything?

      https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims

      • by davidwr ( 791652 )

        Would an insurance company robo-rejecting claims that are supposed to be reviewed and signed off by a physician

        Just as disturbing, possibly more so, is that the relevant licensing boards aren't doing a good job of policing their own industry.

      • by kmoser ( 1469707 )
        We will have come full circle when Carby begins not only transcribing medical records, but also suggesting to the insurance companies that claims based on those same records should be denied.
      • by Shaitan ( 22585 )

        Sounds more like evidence of insurance companies screwing people to me. Rejecting every claim they can with any excuse they can, when it statistically lowers their payout, is best industry practices on the part of insurance companies.

    • by narcc ( 412956 )

      just another tool in the toolbox

      I'm not a fan of this particular idiom. It lets you excuse, without any thought, all sorts of awful, ineffective, inefficient, and downright dangerous things. Sometimes, it's not "just another tool". Sometimes, you need to remove the offensive thing from the toolbox and unceremoniously deposit it in a more appropriate bin.

      That said, transcription is absolutely an appropriate use for AI. Summarization, however, is not.

      • by Shaitan ( 22585 )

        It depends on the type of AI. Recognition is indeed a valid use of "AI." Various LLMs like GPT-X are really only validly used for entertainment purposes, and the companies/entities controlling them tend to block the more entertaining purposes they could be used for.

    • Until the doctors sign off on the draft, it's not a medical record.

      As long as there is a licensed professional who is risking his license if he doesn't catch and correct mistakes [slashdot.org], the "AI" is just another tool in the doctor's toolbox.

      Wake me up when people seriously start talking about letting computer programs BE doctors [fandom.com]..

      Human nature is a powerful force; without significant effort to maintain vigilance, the user eventually starts trusting the drafts (maybe not at 88%, but probably at 95%).

      I think a partial fix might be to make the tool auto-inject a random number of ChatGPT style errors and tell the doctor it's doing it.

      Force the doctor to proof-read and don't let them proceed until they've fixed everything (and enforce an annoyingly long timeout if they try to submit with uncaught injected errors).

      It's not a perfect solution, bu

    • by Shaitan ( 22585 )

      "As long as there is a licensed professional who is risking his license if he doesn't catch and correct mistakes [slashdot.org], the "AI" is just another tool in the doctor's toolbox."

      That is nice on paper but in reality the doctors can't and won't spend the time to review this crap like they should. The whole point of these notes is they see too many patients and don't remember. If a skilled transcriptionist they worked with regularly did the job, they could catch far more errors because any errors being m

  • by Tony Isaac ( 1301187 ) on Tuesday June 06, 2023 @10:17PM (#63582258) Homepage

    So readers will know it's fake.

  • We don't need AI to generate reports. We need AI to read all the reports that we do get. Seriously, if your workload is such that you don't have time to put together a draft and edit it, then who the hell has the time to read it? Not reading medical records explains a lot about my experiences with healthcare in the US.

    • by narcc ( 412956 )

      I get it. They're the single most inefficient thing humanity has created. Reams upon reams of paper for a healthy baby going in for regular checkups and vaccines.

      Then who the hell has the time to read it?

      My wife had to when she was doing permanency work. Part of writing a child profile involved tediously digging through endless stacks of paper to put together a medical history/summary. I can't imagine a doctor actually reading through all that for a new patient.

    • Not reading medical records explains a lot about my experiences with healthcare in the US.

      A lot of it is copy and paste (well before ChatGPT), descended from the myriad of checkboxes and lists to keep the lawyers at bay and insurance from denying claims (as well as the push towards measurable data).

      The end effect is a crapflood instead of a structured narrative that explains the MD's reasoning (which also had its abuses, but could at least be easily understood and/or refuted).

      AI is a further push in this direction of endless data-points to appease the bureaucrats and CYA instead of improving

    • Per the summary: 16 minutes of a doctor's time to enter the data into the EMR system vs... a minute to verify the 88% of the records that need no edits and 16 minutes to correct the 12% remaining. Hint: a typical doctor's visit is less than 16 minutes. That frees doctors to give the most value to their patients.

    • >"We need AI to read all the reports that we do get. Seriously if your work load is that you don't have time to put together a draft and edit it. Then who the hell has the time to read it?"

      I have a lot of experience with this topic. And you hit it on the head. The level of nonsense and redundancy is astounding. A lot of it is driven by legal concerns (covering your butt), tons of regulations, ill-thought-out policies and forms, never-ending insurance company requirements, etc.

      Clinicians are collecting and e

      • by Shaitan ( 22585 )

        There are types of AI that could help. Generative AI systems aren't it. These are systems which generate fictional creative content that 'passes' as plausible enough during training and feedback. The system isn't even designed to prefer 100% non-fictional output, and even if those doing the training and providing the feedback prefer exactly that, the system isn't going to weight toward providing it rather than fictional content that was plausible enough to be accepted as non-fiction.

  • by Anonymous Coward

    I have an extremely high confidence that nothing at OpenAI would qualify as meeting anything but the barest minimum of the HIPAA security rule. And, like everything else in the industry, nobody will give a shit.

  • Stupid question! (Score:5, Interesting)

    by Chas ( 5144 ) on Tuesday June 06, 2023 @10:51PM (#63582326) Homepage Journal

    How the fuck is this HIPAA compliant?

    • Comment removed (Score:5, Informative)

      by account_deleted ( 4530225 ) on Tuesday June 06, 2023 @11:10PM (#63582352)
      Comment removed based on user account deletion
    • by micheas ( 231635 )

      How the fuck is this HIPAA compliant?

      As someone who absolutely hates this for more reasons than the average anti AI person I strongly suspect this is HIPAA compliant.

      Patients have the right to see records and request corrections and this does that.

      What this also does is take forms that are filled out to get the insurance company to pay the bill rather than record what is actually happening with the patient. AI-generated doctors' notes that match what the insurance company expects will get approved, and so this will look good to insurance

    • by ranton ( 36917 )

      How the fuck is this HIPAA compliant?

      What makes you think it isn't HIPAA compliant? I find it hard to believe Amazon built and released its AWS Transcribe Medical cloud service without ensuring it is HIPAA compliant. And OpenAI's API keeps each customer's data private to that customer (it won't train its base model on the data sent to the API). What part of this solution looks like a HIPAA violation to you?

  • by RightwingNutjob ( 1302813 ) on Tuesday June 06, 2023 @11:06PM (#63582348)

    is mindless and tedious boilerplate data entry to satisfy billing requirements.

    One wonders whether the rare thing that actually requires attention is more likely to get lost in the AI nonsense or is more likely to get lost in the boring and tedious data entry part of physicians' work.

    Perhaps one might actually scientifically study this.

    • It is also used when you need to go to a specialist or series of specialists. Billing might be a big role, but documenting trends is important.

  • by Rosco P. Coltrane ( 209368 ) on Tuesday June 06, 2023 @11:46PM (#63582396)

    ... until they decide not to ask the patient's consent anymore.

    If a patient consents to having their meeting recorded and transcribed, the audio recording is passed to Amazon's AWS Transcribe Medical cloud service,

    Oh hell no. No no no!

    It's not even the ChatGPT transcription that bothers me, it's handing over my medical record to Amazon. Like those motherfuckers didn't have enough data on me already.

    • It's not even the ChatGPT transcription that bothers me, it's handing over my medical record to Amazon.

      Amazon is one of the bigger cloud service providers and a lot of their tools are HIPAA compliant, or claim to be. So, likely as not your medical records are already handed over to Amazon.

    • by ranton ( 36917 )

      Oh hell no. No no no!

      It's not even the ChatGPT transcription that bothers me, it's handing over my medical record to Amazon. Like those motherfuckers didn't have enough data on me already.

      It wouldn't be surprising if a significant amount of your medical records already make it onto AWS and Azure servers from multiple applications your doctors and their partners use already. The hospital would have no need to tell you. Have they ever told you what version of Epic they use or where their backups are stored? I work in the health insurance industry and I can assure you your medical records sent to insurance are almost certainly being stored in AWS somewhere.

    • you just don't get medical records. Also, good luck not consenting. You'll need an electron microscope to find that fine print.
    • ... until they decide not to ask the patient's consent anymore.

      If a patient consents to having their meeting recorded and transcribed, the audio recording is passed to Amazon's AWS Transcribe Medical cloud service,

      Oh hell no. No no no!

      It's not even the ChatGPT transcription that bothers me, it's handing over my medical record to Amazon. Like those motherfuckers didn't have enough data on me already.

      You say that now.

      But just wait until you log into amazon.com and see the genital rash cream helpfully preloaded into your cart!!

  • and the ChatGPT-generated advice was to just accept it because there are a lot of idiots in the world living happy lives. =/

  • Don't do this. Don't use an LLM for this. You'll get bollocks instead of medical notes.
  • by null etc. ( 524767 ) on Wednesday June 07, 2023 @12:05AM (#63582424)

    But the question is, will ChatGPT hallucinate images of a gorilla into my xray scans?

  • by ClueHammer ( 6261830 ) on Wednesday June 07, 2023 @12:41AM (#63582466)
    Due to using an AI just took an enormous leap forward
  • This is a disaster (Score:5, Insightful)

    by ufgrat ( 6245202 ) on Wednesday June 07, 2023 @12:43AM (#63582470)

    Doctors have a bad habit of "interpreting" what patients say already, because some smug bastard told them during their training "Patients lie".

    Now you're going to hand this off to a piece of software that's known to fake its homework?

    In order to save 12 minutes?!?!?

    How about spending time with the patient, and learning more about the patient, instead of treating them as an endless stream of data points that you believe are lying to you already??

    I am serious when I say this sort of AI cost-cutting should be illegal.

    • Doctors have a bad habit of "interpreting" what patients say already, because some smug bastard told them during their training "Patients lie".

      Patients do often lie, this is a pretty indisputable fact. https://www.cmaj.ca/content/19... [www.cmaj.ca]

      Would you rather doctors ignored reality to spare your feelings?

      And that's not even getting on to flat out inaccuracies in what the patient says due to simply misremembering. I currently have something that's been going on about 6 ish weeks. I had to call my doctor for records

      • by ufgrat ( 6245202 )

        Patients do often lie, this is a pretty indisputable fact. https://www.cmaj.ca/content/19... [www.cmaj.ca]

        Would you rather doctors ignored reality to spare your feelings?

        No, I wouldn't. The problem is, in my opinion, the use of the word "lie". Say they're mistaken, say they're reluctant to give embarrassing details, or to give information that portrays them in a negative light-- but don't call them liars. This is a personal peeve primarily from the IT world, but it applies to all areas of life. Calling someone a liar establishes a negative connotation, and leads to an unconscious level of contempt.

        And I know patients either accidentally, or deliberately, mislead their d

        • say they're reluctant to give embarrassing details, or to give information that portrays them in a negative light

          In other words they are aiming to deceive the doctor by telling the doctor something which is not true, or knowingly omitting a piece of information when asked.

          And I know patients either accidentally, or deliberately, mislead their doctors.

          And what's the word for someone who deliberately misleads?

          But I've also seen doctors write down inaccurate information on a chart because they didn't believe t

    • And when somebody tells you "do you want some faceless bureaucrats making medical decisions for you!" on social media or in a TV advert, can you stop the kneejerk reaction to the word "bureaucrat" and catch it before you vote against the law you proposed?

      That's the problem with politics. Scary words, moral panics, culture wars and the like heavily influence how people vote. People also have a lot of issues and it's easy to sneak stuff through.

      You've got a lot of single-issue voters that the pro-corp
  • by Anonymous Coward
    I'm a physician. I work in hospitals and clinics - I also supervise doctors in training (ie, students, residents & fellows), which includes co-signing their notes. Our electronic medical record (EMR) is a mainstream system (Epic), and one of its key functions is to copy and paste vast amounts of data into physicians' notes. I suppose this is to save time and satisfy government & insurance billing requirements, but it makes the notes a nightmare of random and often inaccurate (the prior day's assessm
  • Nothing can go wrong, go wrong, go wrong, go wrong, go wrong, go wrong, go wrong...

  • A well-known technique to ensure reviews are actually done is to add "text bombs". So the software should intentionally introduce significant transcription errors in 3% of the texts and verify that these are caught by the manual review. Keep reviewers awake and alert. Of course, the errors should not just be adding "text bomb here!" but replacing one medical term with a related, but different one, e.g. diabetes -> dialysis. (A rough sketch of such a check follows the replies below.)
    • Good idea right up to the time that the protocol to take the fake diagnosis out of the record accidentally fails to run, and the misdiagnoses that were deliberately added but missed by the reviewer are put on the record for, say, a thousand patients.

      ...replacing one medical term with a related, but different one, e.g. diabetes -> dialysis.

      Maybe only a minor problem when a thousand diabetes patients are suddenly scheduled for renal dialysis, but may be more of a problem when a thousand patients get scheduled for emergency heart surgery because their heart arrhythmia diagnosis was changed to aortic
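The check proposed in the comment above is straightforward to sketch. The snippet below is only an illustration of the idea, not an existing feature of any product: the Draft structure, the term pairs, and the 3% rate are hypothetical, and, as the reply points out, a real system would need an audited path that strips planted errors back out before anything is filed.

```python
# Sketch of the proposed "text bomb" review check; an illustration only, not an
# existing feature. Draft, TERM_SWAPS, and BOMB_RATE are hypothetical. Planted
# errors must be tracked (and stripped) so they can never reach the signed record.
import random
from dataclasses import dataclass
from typing import Optional

TERM_SWAPS = {"diabetes": "dialysis", "hypertension": "hypotension"}
BOMB_RATE = 0.03  # share of drafts that get a deliberately planted error

@dataclass
class Draft:
    text: str
    planted_term: Optional[str] = None  # the wrong term we injected, if any

def maybe_plant_bomb(text: str) -> Draft:
    """In a small share of drafts, swap one known term for a related-but-wrong one."""
    if random.random() < BOMB_RATE:
        for good, bad in TERM_SWAPS.items():
            if good in text:
                return Draft(text.replace(good, bad, 1), planted_term=bad)
    return Draft(text)

def review_caught_bomb(draft: Draft, signed_text: str) -> bool:
    """A seeded draft passes review only if the reviewer removed the planted term."""
    if draft.planted_term is None:
        return True  # nothing was planted, so this draft tells us nothing
    return draft.planted_term not in signed_text
```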

  • Doctors don't seem to read those anyway, so you could hide a Harry Potter book in there for all I care :D.

  • ....if false illnesses start appearing in the medical records?

    I mean, the lawyer got hit for citing false cases in a trial because of AI.

  • So an AV recording of my interaction with my doctor will be donated to Amazon? Sign me up!

  • Generative AI models aren't perfect, and often produce errors. Physicians therefore need to verify the AI-generated text.

    I think I found a bug in the system.

  • I'm not afraid that I have blood clots in my legs even though they were amputated ten years ago.
  • Clinics can therefore see more patients [...]

    Hang on. Instead of using AI to improve patient outcomes (say, by reducing time physicians spend writing up a consult and instead spending more time with the patient), they'll use the technology to grind through even more patients, thereby making more money. All while being less accurate.

    Ahhh, USA. Don't stop... being you.

  • The puick bs2wn f6c jumped over the lazy zed5cal ,ranscriptionist.

  • by kackle ( 910159 ) on Wednesday June 07, 2023 @09:01AM (#63583032)
    "Describe how involving ChatGPT in medical record processing would be a bad idea."

    ChatGPT:

    "While ChatGPT, as a language model, has the capability to process and generate text, involving it in medical record processing could have several drawbacks and pose significant challenges. Here are some reasons why it would be a bad idea:

    1. Privacy and security concerns: Medical records contain highly sensitive and private information about individuals' health conditions. Sharing this data with a third-party service like ChatGPT could raise concerns about data privacy and security breaches. Unauthorized access or misuse of medical records could have serious consequences, including identity theft or medical fraud.

    2. Lack of medical expertise: ChatGPT lacks specialized medical training and knowledge beyond what it has been trained on until September 2021. Medical record processing requires a deep understanding of complex medical terminologies, procedures, and diagnostic codes. ChatGPT's responses may not always be accurate or reliable, potentially leading to misinterpretations or incorrect decisions based on the processed information.

    3. Legal and regulatory compliance: The handling and processing of medical records are subject to strict legal and regulatory requirements, such as HIPAA (Health Insurance Portability and Accountability Act) in the United States. ChatGPT may not have the necessary safeguards and infrastructure in place to ensure compliance with these regulations. Involving ChatGPT in medical record processing could expose healthcare organizations to legal liabilities and penalties for non-compliance.

    4. Ethical considerations: Medical record processing involves dealing with sensitive information and making critical decisions that affect individuals' health and well-being. Relying solely on an AI model like ChatGPT for such tasks raises ethical concerns. The lack of empathy, accountability, and human judgment in AI systems can lead to potential biases, errors, or oversights that could adversely impact patient care and outcomes.

    5. Lack of contextual understanding: ChatGPT processes text based on patterns it has learned from training data. However, medical records often contain nuanced information that requires a deep understanding of the patient's history, previous treatments, and ongoing conditions. ChatGPT may struggle to grasp the full context and may produce incomplete or inaccurate interpretations of the records, leading to potential medical errors.

    6. Limited ability for interaction and clarification: Medical record processing often involves clarifying ambiguous or incomplete information through direct communication with healthcare providers or patients. ChatGPT's one-way text-based communication restricts its ability to seek clarification, ask follow-up questions, or request additional information, which is crucial in accurately understanding and processing medical records.

    Given these concerns, it is advisable to rely on qualified healthcare professionals with domain-specific expertise and compliant systems for the processing of medical records. AI models like ChatGPT can be valuable as supportive tools in healthcare settings, but they should not replace the human judgment and expertise necessary for accurate and responsible medical record processing.

    And Genghis Khan likes using his smart phone."

    I don't know...if the doctor has to read/confirm the stuff anyway, then what's the point?

    (I admit I made up its last sentence.)
  • If the transcripts are uploaded to ChatGPT how is confidential patient information protected?

    I had a similar conversation with a relative who produces medical reports. She told me how she is using ChatGPT to assist her. When I asked her what happens if she pastes details of a confidential drug study into ChatGPT, she blanched. It never occurred to her that she might be contributing information to the model

  • The core problem being addressed here is documentation burden. Basically, for insurance and regulatory reasons, more and more of a doctor’s or nurse’s time is being taken up with filling out electronic health records. It is to the point where this documentation can take up a majority of the doctor’s or nurse’s time, and the actual treating of patients is a fun side hobby they can do when things are slow. As an example that I heard at a medical informatics conference: A nurse wh
  • These guys run an urgent care place near here, and are California Bay Area based. I have used them once. They are awful, so this is no surprise. No disrespect for medical staff, but all the clerical staff and the procedures are terrible and they treat people badly in my experience.
