
In Real-World Test, an AI Model Did Better Than ER Doctors At Diagnosing Patients

A new study from Harvard Medical School and Beth Israel Deaconess found that an OpenAI reasoning model outperformed experienced ER doctors at diagnosing and managing patient cases using messy, real-world emergency department records. Researchers say the results don't support replacing doctors, but they do suggest AI could meaningfully reshape clinical workflows if tested carefully in prospective trials. NPR reports: The researchers ran a series of experiments on the AI model to test its clinical acumen -- including actual cases like the lupus patient who'd been previously treated at the emergency department at Beth Israel in Boston. The team graded how well the AI model could provide an accurate diagnosis at three moments in time, from the triage stage in the ER, up to being admitted into the hospital. Overall, AI outperformed two experienced physicians -- and did so with only the electronic health records and the limited information that had been available to the physicians at the time. "This is the big conclusion for me -- it works with the messy real-world data of the emergency department," said Dr. Adam Rodman, a clinical researcher at Beth Israel and one of the study authors. "It works for making diagnoses in the real world."

Other parts of the study focused on case reports published in the New England Journal of Medicine and clinical vignettes to suss out whether the AI model could meet well-established "benchmarks" and game out thorny diagnostic questions. "The model outperformed our very large physician baseline," said Raj Manrai, assistant professor of Biomedical Informatics at Harvard Medical School who was also part of the study. The authors emphasize the AI relied on text alone, while in real life, clinicians need to attend to many other inputs like images, sounds and nonverbal cues when diagnosing and treating a patient.
The findings were published Thursday in the journal Science.

Comments Filter:
  • by bryanandaimee ( 2454338 ) on Thursday April 30, 2026 @06:10PM (#66121184) Homepage
    It is a common misconception that the doctor's name was Frankenstein. Actually the AI was named Frankenstein. It created the doctor from spare parts.
  • by fahrbot-bot ( 874524 ) on Thursday April 30, 2026 @06:18PM (#66121198)

    Presumably the "AI" has a wider pool of data to pull from than any single doctor, or even a small group of doctors, on scene so I could see this sort of thing used to double-check and/or offer alternative diagnoses, perhaps with percentages. Use tools if/when they're helpful, but remember that they're just tools.

    • Double-check, yes ("That looks like cancer, Doctor Moe" and the doc gets a little piece of it, and the oncologist verifies the piece is cancer)... but the surgeon is the one(s) running the show.

      I would never, ever put an AI in full automated control of a DaVinci surgical robot... and, the AI might be wrong "that that looks like cancer", and the condition is something else. (But, you know for sure, someone is gonna attach an AI to a surgical robot, and the hospital can get rid of the surgeons, and still

      • AI models are better than our ability to trust them. Already. They are certainly better than the average person in most knowledge professions, and for the few where they aren't, it's just a matter of time. Computers are better thinkers than we are. That is just obvious.
        • by HiThere ( 15173 ) <charleshixsn@NOSpAM.earthlink.net> on Thursday April 30, 2026 @08:27PM (#66121388)

          You're making MUCH too wide an assertion. There are areas where the AI is better, but its competence is "jagged". If you say it's better at guessing protein folding, I'll agree. If you say it's a better surgeon... not this month. Probably not this year. But it's better in certain specific areas, and those areas are increasing.

          • A surgeon is a skilled profession, not a knowledge profession. I won a court case with AI, when my lawyer was pushing us to settle. AI found an argument, and we got a positive result.
            • by thegarbz ( 1787294 ) on Friday May 01, 2026 @04:43AM (#66121820)

              A surgeon is a skilled profession, not a knowledge profession. I won a court case with AI, when my lawyer was pushing us to settle. AI found an argument, and we got a positive result.

              The difference between knowledge and skill is bridged by robotics. The use of robotics in surgery is increasing precisely because this skill is better augmented by machines. I have little doubt that AI will eventually bridge this gap but it does require the participation of two different fields of work.

        • Computers don't think. We don't even understand how animals think.

          • Re: (Score:2, Insightful)

            by saloomy ( 2817221 )
            Computers do think. Computers do not do consciousness. Consciousness is not a prerequisite to thinking. Thinking is a prerequisite to consciousness.
          • I agree.
            They have never thought, and they won't... I (as, y'know, a human) can decide if I should cross a street (and I can sing song lyrics as I do it).
            A computer cannot, by its definition, "think" like us humans can. It can analyze data fed to it, and data that it sees through whatever cameras it has.

            Me or you on the street can tell if a car is speeding through "that light", and we can make a decision split second... pull over or slam on the brakes or any number of things.

            • by tragedy ( 27079 )

              A computer cannot, by it's definition, "think" like us humans can. It can analyze data fed to it, and data that it sees through whatever cameras it has.

              If it's just a semantic argument about definitions of "thinking" as something that humans do, then you're technically right, but the argument is meaningless.

              The real question is if human brains are, ultimately, a subset of Turing machines. If they are, then computers can definitely think if we don't use a definition of thinking that leads to a circular argument. If they are not, then that means that they can solve classes of problems that a Turing machine cannot. Of course, as it stands, any system that can

              • Basically, people speculate that maybe there's something special about the brain that a computer can't do

                That something special is the ability - or inherent tendency - to want. To form objectives. To decide how one wants to spend one's time. To take joy in the unexpected. To hope, to dream, to feel. Yeah, these are hard, or even impossible, to test objectively for, but that doesn't make them not a real distinction.

                • by tragedy ( 27079 )

                  Yeah, these are hard, or even impossible, to test objectively for, but that doesn't make them not a real distinction.

                  The inability to test for them objectively quite literally does make them not a real distinction. You're just delving into the realm of spirituality. Even there your argument doesn't hold water when you consider concepts like the Shintoist (or Shintoist-adjacent) tsukumogami, spirits granted to made things, or even the Yaoyorozu no Kami. There are quite a few religions and spiritual beliefs that grant a spirit to things that aren't alive.

                  Wanting, forming objectives, deciding how to spend one's time, taking joy i

            • "Me or you on the street can tell if a car is speeding through "that light", and we can make a decision split second... pull over or slam on the brakes or any number of things."

              First of all, it's "You or I on the street", not "Me or you on the street." Second of all, someone should invent self-driving cars!

              You are trying to play a semantic war of wits but you are unarmed. You want to define "thinking" as "something only a human can do", then claim computers can't "think." It's a strawman-based tautology.

          • I don't understand how you think either. Does that mean that you don't think? I mean you are clearly thought and inference challenged, so maybe you can't see how stupid your argument is?
        • The computer isn't thinking... it's poring over its database of downloaded training data and finding the most relevant thing to tell you in regards to your query, nothing more.

      • by tragedy ( 27079 )

        On the other hand, an AI to scream at the doctor: "Why are you removing that liver, you're supposed to be removing a spleen!" might not be a bad thing. That isn't a random example of course. That's something that recently happened and the patient bled to death and the doctor insisted that the kidney was actually a spleen swollen to several times the size it should be and on the wrong side due to disease. This turned out to not be the first time the same doctor had removed the wrong thing. He had previously

        • On the other hand. An AI to scream at the doctor: "Why are you removing that liver, you're supposed to be removing a spleen!" might not be a bad thing. That isn't a random example of course. That's something that recently happened and the patient bled to death and the doctor insisted that the kidney was actually a spleen swollen to several times the size it should be and on the wrong side due to disease.

          I assume you're referring to this incident [nytimes.com] of negative patient outcome.

          There is the concern about the system making a mistake, but with competent surgeons, they should be able to tell when the AI itself is the one making the mistake.

          Aye, there's the rub, innit? Competent surgeons also don't remove the wrong organ in the first place. To be more than fair, minimally-invasive surgery does require special skills, and it can be hard to find one's way around without seeing a full surgical field. Still, surgeons should at least know left from right.

          on the wrong side due to disease

          Organs are occasionally found in surprising places. (Like Einstein's brain?) Rarely. But you can usually find them with CT o

          • by tragedy ( 27079 )

            I assume you're referring to this incident [nytimes.com] of negative patient outcome.

            That appears to be the one, yes. Of course, it is really one example among many.

            Aye, there's the rub, innit? Competent surgeons also don't remove the wrong organ in the first place.

            True, but then you need to break down the set of incompetent surgeons into subsets. One of those subsets might be the total Dunning-Kruger narcissists who won't accept any possibility that they could make a mistake and will go ahead and boldly and proudly pull the wrong organ. However, another subset is all the various kinds who will just quietly correct themselves when the error is identified. So, if you take the competent ones who

    • These are not controlled experiments. They are merely reprocessing of existing cases with simplified data sources that happen to correlate with the outcomes.

      Anyone can make a diagnosis, even without looking at a patient. Do it enough times, argue the failures were anomalies or unfair comparisons or just don't mention them, hype up the successes. Post on TikTok.

      (and for the experts: always look at who is paying for a reasonable looking study, and list all the biases you can think of up front)

      • by dvice ( 6309704 )

        Here is a study about triage where AI fails to beat humans:

        "Results from a randomized, interface-blinded, crossover simulation study involving 120 hypothetical telemedical encounters performed by real primary care physicians, the AI co-clinician or GPT-realtime. "
        https://deepmind.google/blog/a... [deepmind.google]

    • by tragedy ( 27079 )

      Also, while this trial was just about getting data from records, an AI might be able to do very well interviewing patients if properly designed. As it stands, doctors do not always do a great job at that. The typical experience for a patient is that they come in, sit in a waiting room for a while. Get called in, then a nurse takes some vitals. Then they sit there waiting for the doctor for a while. Then the doctor comes in, asks a few questions while trying to pretend to not be in hurry, usually pretty badl

      • Or, a human doctor could come to 'that' conclusion on their own.
        Free from biases? An AI is as biased as its training dataset; and the medical system gets payouts from drug companies to push their pills, so of course the medical system that bought the thing wants those payouts to continue, so they tinker with its dataset and have it push pills.

        And, if a surgeon can't tell the difference between a kidney and the liver, they probably shouldn't have a scalpel.
        A good, competent, well trained doctor doe

        • by tragedy ( 27079 )

          Or, a human doctor could come to 'that' conclusion on their own.

          Sorry, what conclusion? You replied to my fairly long post without a quote or anything and used a demonstrative pronoun without mentioning what it was a pronoun for. The safest bet seems to be you meant that they would have come to the conclusion about Chagas disease? I specifically provided that example because it is exactly the sort of thing that doctors often miss. Tropical diseases often get misdiagnosed outside the tropics because they simply get ruled out as likely causes early on and patients often m

    • by jythie ( 914043 )

      I wonder how well they handle less common cases though. The classic problem with these systems, while they are always improving, is they follow the 'most of the time it is the most common case' line of reasoning, so most of the time they are right.
       
      Though the recurring problem with these tools is that since they tend to give the most common answers, they can easily become a source of bias, making changing course even more difficult.

  • accountability sinks.

    1. What is a Reverse Centaur?

    The Reverse Centaur: The AI acts as the "head" or decision-maker, and the human is the "body" or worker, forced to keep up with an impossible, algorithmic pace.

    2. What is an Accountability Sink?

      A "moral crumple zone" is a human who is present only to take the blame when an AI system fails.

    Who wants to work in such an environment?

    Some of us would refuse.

  • "AI" (Score:4, Insightful)

    by darkain ( 749283 ) on Thursday April 30, 2026 @06:26PM (#66121212) Homepage

    The biggest problem with "AI" is the marketing. There is no single "AI"; there are a ton of various semi-related and entirely unrelated tools that use various bits of machine learning. From what this reads like, they have a more advanced fuzzy-logic search engine in place, which is one of the absolute best use cases for AI/ML workloads. But in the end, it's no different than any other search engine. It doesn't do the critical work for you; it simply searches vast repositories of knowledge and suggests possible outcomes. How many people instantly trust the top result on Google without even seeing the rest of them? This is the same thing.

    • by dvice ( 6309704 )

      I think that the blind spot for many is the AI models used for science (AlphaFold, AlphaEvolve, AlphaGenome, WeatherNext, AlphaEarth, AlphaZero, ...). They already have multiple AI models that have made new scientific discoveries, some of which are Nobel-level, ground-breaking discoveries. People are underestimating the potential impact of this. Demis Hassabis has already said that finding a cure for all known diseases within 10 years is a possible scenario.

  • If you know anything about the shit show that is emergency rooms right now you know that private equity has bought them all and is slashing patient time and staff. A couple of states have banned this but the private equity firms just used freaky corporate structures to get around the bans and they are currently in the courts being challenged.

    So yeah your AI can outperform a doctor that gets 5 minutes with the patient before having to move on to the next one in order to keep their private equity Masters
    • by hwstar ( 35834 )

      They haven't bought the ones owned by a nonprofit healthcare group (Think Kaiser, et. al). The problem is they have bought a lot of the anesthesiologists and other specialties which operate as contractors in the emergency room environment. This was a big problem up until a few years ago when the Congress passed an act to prohibit third party balance billing from contractors in the emergency room. One of the few places left where they can get away with third party balance billing is Ambulance Services. Altho

    • If you know anything about the shit show that is emergency rooms right now you know that private equity has bought them all and is slashing patient time and staff. A couple of states have banned this but the private equity firms just used freaky corporate structures to get around the bans and they are currently in the courts being challenged. So yeah your AI can outperform a doctor that gets 5 minutes with the patient before having to move on to the next one in order to keep their private equity Masters satisfied. Although honestly I wouldn't be surprised if the study was still bumpkis. I can't be bothered sitting down and really reading through it.

      Yeah, my first thought was, "This would be a really cool advisory tool in a civilized society. It'll be used as a doctor / physician's assistant replacement in America's for profit healthcare system."

    • Although honestly I wouldn't be surprised if the study was still bumpkis. I can't be bothered sitting down and really reading through it.

      That's OK. We are all just forever grateful that you at least took the time to pretend to be an expert anyway!

  • by couchslug ( 175151 ) on Thursday April 30, 2026 @06:31PM (#66121218)

    Doctors are tired, stressed and multitasking. They diagnose by pattern matching, which is ideal for AI.

    • by evanh ( 627108 )

      Problem is, if you repeat the question the LLM will give a different answer each time.

      • by ranton ( 36917 ) on Thursday April 30, 2026 @11:00PM (#66121562)

        Problem is, if you repeat the question the LLM will give a different answer each time.

        No it won't. It may change the wording, but not the answer. I just asked Gemini what the capital of the US was five times using Google, and got five unique responses. All of them said it was Washington DC though.

      • by dvice ( 6309704 )

        LLM can change the answer, but you can ask it x times and use majority vote to pick the best answer to get rid of random errors. This is an actual scientific strategy that is used with AI models to get more accurate results. There was one experiment where they managed to get over a million right answers without a single error using this strategy.
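
        The majority-vote strategy described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular lab's method; `ask_model` is a hypothetical stand-in for a real LLM call, and the flaky toy model simulates a model that answers wrong one time in five:

```python
from collections import Counter

def majority_vote(ask_model, question, n=5):
    """Query the model n times and return the most common answer.

    Repeated sampling plus a majority vote filters out occasional
    random errors, at the cost of n times as many queries.
    """
    answers = [ask_model(question) for _ in range(n)]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / n  # answer plus its vote share

# Toy stand-in for a real model call: right 4 times out of 5.
def flaky_model(question, _state=[0]):
    _state[0] += 1
    return "Washington DC" if _state[0] % 5 else "New York"

answer, share = majority_vote(flaky_model, "US capital?", n=5)
# answer == "Washington DC", share == 0.8
```

        The error rate falls quickly with n as long as the wrong answers are scattered rather than systematic; a consistently biased model just gets its bias confirmed by the vote.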

    • by dlasley ( 221447 ) on Thursday April 30, 2026 @07:58PM (#66121346) Homepage
      agreed, this is one of the truly useful and beneficial uses of AI so far. give doctors wherever they are an incredibly powerful tool for patient diagnosis so they can focus on triage and care. alleviate some of the pressure from being a walking Gray's Anatomy and let doctors be empathetic healers instead.
    • by Tablizer ( 95088 )

      Doctors are tired, stressed and multitasking...

      AI-slop is thus competitive with human-slop.

  • from Open AI to Harvard.

  • "This is the big conclusion for me -- it works with the messy real-world data of the emergency department..It works for making diagnoses in the real world."

    To be as precise as we should be when discussing departments handling life-saving measures to the extreme, it works for making diagnoses in an overtaxed understaffed environment known for earning a ranking of "messy" when it comes to annual audits.

    Messy is how I would describe the masses trying to swallow the death-by-medical-error statistics that currently reek of dismissal and profit. Let's hope this can ultimately improve on that issue. IMHO, the conclusion more proves we grossly understaff ER depart

  • A real world test? (Score:5, Interesting)

    by lucifuge31337 ( 529072 ) <daryl AT introspect DOT net> on Thursday April 30, 2026 @06:47PM (#66121246) Homepage
    That was in fact not a real world test at all. It was using records to ex-post-facto relitigate decisions. And yes, this is an important step towards something that may be considered for "real world testing" but it's not that at all.
    • by thegarbz ( 1787294 ) on Friday May 01, 2026 @04:45AM (#66121822)

      It used real data to come up with a solution as it would have been applied. The only thing not "real world" about this is that no one made a decision based on this data.

    • by ufgrat ( 6245202 )

      OK-- how about this real world example:

      For 20 years, every doctor I saw made assumptions about my health, and completely missed a simple diagnosis. I noticed something important in 2019, and mentioned to every doctor I saw since, from primary care, to emergent care, to cardiologists, endocrinologists, urologists. None of them listened. They worked rather hard, in fact, to explain away this one fact.

      One endocrinologist in 2025 listened, asked a simple question, and made an accurate diagnosis. Probably sa

  • Doctors (Score:5, Insightful)

    by CAIMLAS ( 41445 ) on Thursday April 30, 2026 @07:11PM (#66121278)

    In my experience, doctors make some of the worst diagnosticians. They're the human equivalent of a narrowly trained, highly optimized model: they focus on a very narrow corpus of doctrinal, prescribed medicine (regardless of their subdiscipline). This leads to significant cognitive bias and the tendency to overly generalize. You see this quite a bit with even things as simple as blood work and iron levels: "your iron levels are fine" - meanwhile, ferritin is low, which has a whole slew of symptoms which get passed off as hysteria, particularly with women. There are entire communities of people suffering from low ferritin where they struggled for years getting a proper diagnosis when the tests themselves told the story, had the doctors not been prone to an overly generalized prognosis and ivory tower thinking.

    It would make sense that AI would supersede them in capabilities: the corpus is larger and they aren't as prone to the kind of cognitive problems doctors are, at least to as high a degree.

    • In my experience, doctors make some of the worst diagnosticians.

      You're declaring success based on an assumption of a flawed study. But we have time and time again shown that AI outperforms dedicated diagnosticians in their field of practice too. Like here: https://pmc.ncbi.nlm.nih.gov/a... [nih.gov] and that study was over a year ago in the oncology field. It's gotten better since.

      But really I think you're generalising something that wasn't generalised in the first place:

      They're the human equivalent of a narrowly trained, highly optimized model: they focus on a very narrow corpus of doctrinal, prescribed medicine (regardless of their subdiscipline). This leads to significant cognitive bias and the tendency to overly generalize.

      The study was about ER doctors. They are not narrowly trained, quite the opposite, they are broadly trained as

      • by ufgrat ( 6245202 )

        Yes, and no. Most ER doctors are trained to solve the emergency-- get the patient to a non-emergent status, and possibly admit them, but beyond that, their remit is not to cure the patient, but to resolve the current crisis.

      • by CAIMLAS ( 41445 )

        I've experienced the same BS in the ER, too.

        Got hit in the head (eye) with someone's shoulder. Felt "fine", no concussion, but my eye felt weird. Figured I'd pulled a muscle, and my vision got increasingly fatigued over the course of an hour. Went to the ER. The pain got worse and got to the point where I was just laying down with my eyes shut. Did some imaging. The ER doctor came in, asked me general questions, and left after a couple hours. They kept me there waiting for 5 hours, and were in the process o

  • by gurps_npc ( 621217 ) on Thursday April 30, 2026 @07:12PM (#66121280) Homepage

    The AI relied on the text records. Which were things the NURSE noticed and entered into the chart. The nurse did the hard part, examining the patient, asking the right questions.

    You cannot diagnose just on blood pressure, heart rate, oxygen rate. You need to notice things like:

    slurred speech
    dilated eyes
    excessive sweat
    pale
    red skin
    rash
    bruised

    The thing is, it was a trained nurse that noticed these symptoms and WROTE THEM DOWN. And she usually knew exactly what it was, but waited for the doctor to say.

    Anyone can diagnose correctly 90% of the time if you have the right information. Also note, diagnosing a problem is not like on House or Watson. 80+% of the time the answer is blindingly obvious.

    Bleeding profusely from a jagged wound = knife attack
    Patient comes in acting exactly like the 9 other drug addicts you got last month = using whatever the new/most common drug is.
    Patient smelling of alcohol is not a big mystery
    blood tests indicating high sugar = Diabetes
    Long time diabetic with blood in urine = kidney failure
    Immense pain in big toe from an overweight person = Gout

    For most cases, any EMT can tell what the problem is. The problem is not the common cases, but the problematic ones.

    For those "mysteries", do we want an AI diagnosing without a human confirming it? No. But we can probably save a bit of money by having the AI do it before a doctor confirms.

    • I'm still digging through the article, but do they explain how they account for a simple "more symptoms = problem" model? How much noise is in the diagnosis information that could influence the results? What am I trying to say? "If we're sure there is something wrong with you, we can find out with pretty good accuracy. If we're not sure, then performance goes way down" sounds right.
    • What does it even mean to "diagnose without images, sounds and nonverbal cues"? How can you taste without using your taste-buds, or do you pretend mapping paper/rock/scissors to crossword puzzles?
    • by thegarbz ( 1787294 ) on Friday May 01, 2026 @05:16AM (#66121852)

      The thing is, it was a trained nurse that noticed these symptoms and WROTE THEM DOWN. And she usually knew exactly what it was, but waited for the doctor to say.

      Anyone can diagnose correctly 90% of the time if you have the right information.

      Sorry but that is horseshit. There's a difference between gathering data, analysing data, and making knowledge-based decisions on data. You have no basis to say the nurse knew anything other than a list of symptoms she was trained to record. Diagnosis (in general, even when we're talking about your car not starting) is a multifaceted approach. No one here was diagnosing anything without the work of all parties. It is true the AI replaced the doctor and not the nurse, but your assertion that the doctor was meaningless in this is just a baseless assumption (and a very dangerous one, as the knowledge-based decision relies on a wide array of knowledge that a typical nurse is not trained to have, and not using all that knowledge can quickly lead to incorrect diagnosis).

      There's a reason we have nurses do nursing jobs, doctors do doctor jobs, and pharmacists do pharmacy jobs. The world is full of examples of people reaching outside their field of training. You can find them by doing a google search along with the words "botched", "sued", or "fatal".

      • You have never been a nurse. It is true that we need doctors to do doctor stuff and nurses to do nursing stuff. But nurse stuff is a lot bigger than you think it is.

        Most of the time when you go to an ER, an Intake Nurse decides who gets seen first and which doctor to send you to. They do the triage, and in order to do that the nurse has to have a very good idea of what your diagnosis is.

        Like I said before - anyone can diagnose correctly 90% of the time. But 90% is not 100%.

        The reason the doctor does the

    • by ufgrat ( 6245202 )

      I had a lengthy conversation with a doctor once, and it revolved around the "90% problem". 90% of the time, a patient who presents with symptom "X" has condition "Y". I said OK, what if the patient has something else? What if they're part of the 10%?

      Well, according to the doctor, it would take an overwhelming amount of evidence to convince the doctor it *wasn't* condition "Y", before they'd even look at other options. If you happen to have a condition that might only affect 1% of the population, chances

  • IBM Watson did something similar: Usually better than an experienced MD, just occasionally killing a patient via hallucination. The same is likely true here.

    • Yeah was just going to say this.

      How many times did they make a claim similar to this one about diagnostic parity only to fall foul of the kinda obvious fact that medicine and sales have somewhat different metrics for success? I certainly lost count.

  • Read the end of the study - it's mostly funded by AI companies: Funding: We gratefully acknowledge support from NIH/NIEHS award R01ES032470 (A.K.M.), the Harvard Medical School Dean's Innovation Award for Artificial Intelligence (A.K.M.), Macy Foundation awards B25-15 and P25-04 (A.R. and J.C.), Moore Foundation award 12409 (A.R., J.C., and Z.K.), NIH/NIAID 1R01AI17812101 (J.C.), NIH-NCATS UM1TR004921 (J.C.), the Stanford Bio-X Interdisciplinary Initiatives Seed Grants Program (J.C.), NIH U01 NS
  • So the work had already been done.

  • I think I remember seeing her over on /b/.

  • This isn't really a new result, nor tied to genAI. Machine-learning models have a long track record of being able to identify medical problems better than humans based on records. Not really a surprise, the problem is essentially one of pattern matching and machine learning is _really_ good at extracting patterns from large volumes of data and then matching new data against those patterns. I wouldn't apply genAI to the problem, though, the established ML systems do a better job using fewer resources.

  • by LostMyBeaver ( 1226054 ) on Friday May 01, 2026 @12:54AM (#66121666)
    I have spent time with 9 neurosurgeons, 2 neurologists, 1 orthopedic surgeon, 8 general practitioners and over 50 nurses in 2026. There have also been 3 or more radiologists/neuroradiologists involved.

    They all disagree on what is wrong with me or throw their hands in the air and say "I don't know" and another said "it'll heal in a few days, ignore it".

    Here's the catch: the doctors all lack the ability to understand what is wrong with me. But as I collect information from each one of them and upload the CTs and MRIs into the AIs, I finally have a doctor who isn't very good, but isn't a specialist either. As a result, I have a list of questions to ask the doctors to see if we can work on the problems.

    As someone who takes 16 pain killers every day for months now, I want to get past this. I don't believe the doctors will figure it out. I believe it will be the AIs
    • Well this is just classic medicine. If you go to an internist with an issue, they will try to address it with medications. If you go to a surgeon, they will try to address it with surgery (or do exploratory surgery to try and figure it out). Each works within their own area of expertise. My 94 year old grandma had a lump that was breast cancer. The surgeon wanted to do a full mastectomy at her age, which was ridiculous. Meanwhile the oncologist said, no, let's do a less invasive lumpectomy because I better

    • by BranMan ( 29917 )

      I hope that some of your questions point out that there may be 2 or even 3 distinct things wrong with you. As a few decades in my industry have shown, some of the most intractable problems are actually multiple problems that affect each other - once we identified that, things got a whole lot easier.

      It is rare for multiple things to go kerfluey (that's the technical term) at once, but it does happen.

      Good luck! Hope you get past it.

    • Good luck.
      Hope you find your beaver.

  • by noshellswill ( 598066 ) on Friday May 01, 2026 @09:48AM (#66122132) Homepage
    Don't be misled by emotion or class agendas. Medical care by *.AI is coming fast. Even confirmed Luddites find quality, accessibility and relatability in current AI responses to discussion of symptoms & underlying illness. Came as a factual and aesthetic shock to me when GOOG.ai addressed a personal medical issue multiple physicians "refused" to engage. Not even considering year+ wait-times for appointments. I'm not making any broad claim or theoretical prediction ... in fact I have suggested Gödel's Theorem precludes various novel behaviors beyond formal systems ... but I do observe quality medical screening and threat evaluation is even now within the ability of AI models. The very wealthy can worry about & optimize "trust"; for everyone else that extravagance plays second fiddle to concrete results.
    • Does it hurt when you write like that?
      All non-proportional and everything?
      Seems like it would hurt.

  • These studies need to be taken with a big grain of salt. Essentially what this did was provide to AI and two test doctors the clinical data entered by a nurse who examined the patient, and based off that limited text information the AI tended to do better. The thing is... doctors see patients themselves, even in the ER. They ask their own questions, physically examine the patient, and collect their own data. So it is no surprise a doctor, given only limited 3rd party data, didn't make as good of a diagnosis

  • Eliminating gender/color discrimination. Women get treated so badly in medical situations, even by women doctors, as do people of color, being accused of drug-seeking and such.

    There's also the problem of doctors not being as widely educated as a computer can be: case in point my wife. My wife has constant shortness of breath problems. Oncologist: you have cancer, get over it (she does, but that's not the problem). Lung doctor: you have asthma, use your inhaler (it doesn't help. Cardiologist: lose we
  • I use AI constantly in IT support, solutions engineering, and automation. Am I an insecure train wreck with no qualifications? No, I've been working in IT for 22 years. I can vet the answers. When it spits out a powershell script, I read it, and say "That's not right. That's not how you do that. That's the wrong part of the registry." I'm not some idiot 20 year old who did their homework with AI and has no idea what they're looking at. Now translate that to doctors. I don't see a reason not to roll it out.
  • Benchmarking AI performance in medicine is an active area of research: what to measure, how to measure it, and how to decide which AI model is best for a particular application. This is made more challenging when new models are released frequently.
