
When AI Botches Your Medical Diagnosis, Who's To Blame? (qz.com) 200

Robert Hart has posed an interesting question in his report on Quartz: When artificial intelligence botches your medical diagnosis, who's to blame? Do you blame the AI, the designer, or the organization? It's just one of many questions popping up and starting to be seriously pondered by experts as artificial intelligence and automation become more entwined with our daily lives. From the report: The prospect of being diagnosed by an AI might feel foreign and impersonal at first, but what if you were told that a robot physician was more likely to give you a correct diagnosis? Medical error is currently the third leading cause of death in the U.S., and as many as one in six patients in the British NHS receive incorrect diagnoses. With statistics like these, it's unsurprising that researchers at Johns Hopkins University believe diagnostic errors to be "the next frontier for patient safety." Of course, there are downsides. AI raises profound questions regarding medical responsibility. Usually when something goes wrong, it is a fairly straightforward matter to determine blame. A misdiagnosis, for instance, would likely be the responsibility of the presiding physician. A faulty machine or medical device that harms a patient would likely see the manufacturer or operator held to account. What would this mean for an AI?
  • by Psychofreak ( 17440 ) on Tuesday May 23, 2017 @08:35PM (#54474177) Journal

So the computer produces a list of possible diagnoses. This list, I understand, is called a "differential diagnosis" and may have as few as 2 or 3 items or as many as several hundred.

I would expect this diagnosis list, as well as a management plan, to then be put in the hands of a human. After a series of tests, I would expect the AI to be consulted again if necessary.

In today's world, and probably for the near future (say the next decade), I doubt that medicine will become the "autodoc," the "robotic physician," the holographic "Doctor," or some "magic cryokit." There will be a human with a powerful tool to aid in diagnosis of the patient.

    Now, what will happen in 50 years, that is to be seen.

    Phil
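The ranked list the parent describes can be illustrated with a toy sketch. Everything here is invented for illustration: the condition names, symptoms, and weights are not real clinical data, and a real system would use actual likelihoods, not hand-picked scores.

```python
# Toy sketch of a ranked differential diagnosis.
# All condition names, symptoms, and weights are invented for illustration.

SYMPTOM_WEIGHTS = {
    "influenza":   {"fever": 0.9, "cough": 0.8, "fatigue": 0.6},
    "common_cold": {"cough": 0.7, "congestion": 0.9, "fatigue": 0.3},
    "pneumonia":   {"fever": 0.8, "cough": 0.6, "chest_pain": 0.7},
}

def differential(symptoms):
    """Return conditions ranked by the sum of weights for matching symptoms."""
    scores = {}
    for condition, weights in SYMPTOM_WEIGHTS.items():
        score = sum(weights.get(s, 0.0) for s in symptoms)
        if score > 0:
            scores[condition] = score
    # Highest-scoring condition first, as a list of (condition, score) pairs.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranked = differential({"fever", "cough"})
```

As the parent notes, the list can be short or very long depending on how many conditions score above zero; a human would then order tests to narrow it down.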

    • I'm more interested in knowing who gets sued if the AI is hacked to deliberately misdiagnose...

      • who ever makes money from it

        • "who ever makes money from it"

          More like whoever has money that can potentially be liberated for some noble cause like enriching lawyers.

          Another factor is that a doctor generally has to screw up substantially to be sued successfully. I suspect the same isn't true of an AI computer program which is likely to be held to a much higher standard. That may cause AI to be relegated to an advisory role. "Alexa, I think this dude has cancer, what do you think?" "I think it's just a hangover. Tell him to take two

      • by Trogre ( 513942 )

        Umm.

        I'm going to guess whoever hacked it?

Anything that benefits the guy with the most money will become the law, or be protected from it with legal loopholes, unless the guy suing can get a competitor on board, at which point the cycle just keeps repeating itself. Our technology is evolving faster than our ethics. Hacking a medical AI, which we all know will be remote cloud computing and not an in-house local server, will be the equivalent of hacking a modern 9th grader's graphing calculator during a final exam with an already questionable "grade" for most
    • by ceoyoyo ( 59147 )

      I expect in 10 years, 20 max, it will be illegal in civilized places to diagnose and prescribe treatment without consulting a computer, just like it is now without consulting a medically trained person.

      This is going to change fast, as soon as it becomes widely apparent how bad the medical profession is at doing this stuff.

      • "I expect in 10 years, 20 max, it will be illegal in civilized places to diagnose and prescribe treatment without consulting a computer, just like it is now without consulting a medically trained person."

        "Windows just crashed again. You'll have to stop that damn bleeding until IT can fix this thing ... ???

    • the operator may sue the manufacturer, but you should sue whoever you bought the service from(doctor/hospital).

      look, it's pretty much the same already now if you go get your eyes lasered and messed around with - the machine does 100% of the actual operation and the doctor is there just to press stop. but it's still his fault if something is fucked up.

The institution of the medical doctor is a very embedded one. Medical schools are selective not because they will weed out the bad doctor, but to keep the supply of doctors down to prevent the institution from getting over saturated and lowering the paycheck amount. Then there are rules and regulations for what mid-level providers can and can't do. So you may go to a nurse practitioner (NP); it may appear that they are doing the same job as an MD, but a doctor will need to sign off on the work and recommend

      • Medical schools are selective not because they will weed out the bad doctor, but to keep the supply of doctors down to prevent the institution from getting over saturated and lowering the paycheck amount.

        No, medical schools are still very selective here in the UK where no one but an idiot would go into medicine for the money.

        • by jedidiah ( 1196 )

          Even in the US, where things can be quite lucrative, you would be an idiot to get into it just for the money. It's an unbelievable grind that would take the most macho workaholic in Silicon Valley and spit him out into little pieces.

          If you've never asked a doctor about this and let him rant, you really have no idea.

    • Aside from a human likely remaining in the loop; TFA seems to overstate the urgency of the 'who screwed up?' question.

In practice (literally and figuratively in this case), mistakes are considered a bad thing; but getting medical diagnosis error-free (especially if you include things like "patient has multiple conditions contributing to their reported symptoms; doctor correctly diagnoses one of them; but that correct diagnosis delays treatment for the other one") is sufficiently difficult that mere error i
      • is sufficiently difficult that mere error isn't generally considered a matter of culpability unless it's accompanied by negligence or recklessness

        In other words: medicine is based on imperfect information, it's impossible to correctly diagnose and treat anything, and people should stop expecting doctors to get everything right and focus more on getting things less-wrong.

        I'm a fan of exploratory pharmacology, although I don't know if that's considered ethical. I don't particularly care, so long as it's safe.

        I went in for psychiatric care due to ADHD, and eventually discovered my insomnia was severe (I thought I was getting 6.5 hours; I was getti

      • Aside from a human likely remaining in the loop; TFA seems to overstate the urgency of the 'who screwed up?' question.

        ...

        There's also the fact that, unless a doctor more or less straight up murders a patient; the 'blame' in a financial sense tends to get kicked up the chain to whoever allowed the doctor to practice...

        Indeed.

        Doctors probably get most diagnoses correct, mainly because most people are sick with common problems. When it comes to non-common causes, doctors get it wrong, probably every day. I would bet that doctors are basically making wild arsed guesses for 10% of the people who walk in the door, really nothing better than even a very crappy AI could do, and those guesses are often wrong.

Does it matter? To the patient, yes, obviously, but the majority of patients end up getting at least a bit better r

A real-world example: electrocardiograms (EKGs, traces of the heart's activity).
They are a useful tool to help diagnose heart rhythm problems.

For a couple of decades already, given how simple the data is (fewer than a dozen 1D signals), we have managed to do automatic recognition.
To the point that any modern EKG machine will print a diagnosis after the traces themselves on the report.

How do doctors use it?
We are trained to first look at the traces, see if they seem obviously wrong or not,
then apply
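The kind of automatic read the parent describes can be caricatured with a toy rhythm check on R-R intervals (the time between heartbeats). The thresholds below are rough textbook-style values chosen for illustration; this is nothing like real EKG interpretation logic, which analyzes the waveform morphology itself.

```python
def rhythm_flag(rr_intervals_s):
    """Toy rhythm classifier from R-R intervals in seconds.
    Thresholds are rough illustrative values, not clinical criteria."""
    mean_rr = sum(rr_intervals_s) / len(rr_intervals_s)
    bpm = 60.0 / mean_rr  # average heart rate in beats per minute
    # Irregularity: spread of the intervals relative to their mean.
    spread = (max(rr_intervals_s) - min(rr_intervals_s)) / mean_rr
    if spread > 0.25:
        return "irregular rhythm -- review traces"
    if bpm > 100:
        return "tachycardia -- review traces"
    if bpm < 60:
        return "bradycardia -- review traces"
    return "normal sinus rhythm (machine read; confirm on traces)"

print(rhythm_flag([0.8, 0.82, 0.79, 0.81]))  # regular, ~75 bpm
```

Note that even this toy appends "review traces" to every output, mirroring the workflow the parent describes: the machine's printed diagnosis is a prompt for the doctor, not a verdict.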

    • by AmiMoJo ( 196126 )

Human doctors are pretty bad at getting the right diagnosis and treatment, in my experience. AI will probably get better than a human relatively quickly (in the next couple of decades), especially if it has better access to sensor data. Humans already rely a lot on computers to interpret medical sensors for them.

    • by mjwx ( 966435 )

      Now, what will happen in 50 years, that is to be seen.

Ultimately it is not doctors who have to deal with misdiagnosis, it's medical insurers, specifically liability insurers. The same will be true in 50 years: get misdiagnosed by Dr Robot MD, get a payout, same as if you're misdiagnosed by Dr Meatbag today.

You have it wrong. The list is not to be put in human hands. The human is responsible for wrong diagnoses much more often than the software. This has been known for a long time. It was already the case 40 years ago for some specific diseases, where the software (an expert system) was better than a human at diagnosing the disease. Now the spectrum has broadened because data is available to take advantage of machine learning. It is known that such systems are better than the expert physician at diagnosis. It is beaten onl

So, why go back to a physician once you have the best diagnosis possible?

        Because humans are able to access and interpret a wider array of information. We see patterns in how people walk, how they breathe, and how the scientific analysis of data provides a particular diagnosis in detectable situations where a different diagnosis is correct.

        Sometimes it's 0.01% likely that other diagnosis is correct, given the data you understand; and some available data you can't yet name or quantify identifies when it's almost-guaranteed the alternate option is the correct one. We call that

    • by clodney ( 778910 )

So the computer produces a list of possible diagnoses. This list, I understand, is called a "differential diagnosis" and may have as few as 2 or 3 items or as many as several hundred.

I would expect this diagnosis list, as well as a management plan, to then be put in the hands of a human. After a series of tests, I would expect the AI to be consulted again if necessary.

      The scenario you describe is the easy one, because the AI generates a (potentially large) list of potential diagnoses, and the human doctor uses that as input to their decision making process. In that case, the human doctor is still in charge and ultimately responsible.

      For a more troublesome case, consider the article (https://techcrunch.com/2017/05/08/chinese-startup-infervision-emerges-from-stealth-with-an-ai-tool-for-diagnosing-lung-cancer/) I read recently about automated lung cancer detection from CT

    • It should actually be the opposite right now

      The human's diagnosis and management plan should be double checked by the AI, not the other way around

      AI diagnostics is very good at catching all the "routine" findings, not so much at outliers. Human experts are better at catching outliers and novel cases, but will sometimes miss the "routine" ones for various reasons (e.g. fatigue etc...)

When you have the AI go first, its results *further* constrain the human's diagnosis, leading to the outliers and novel cases being missed
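The "AI as second reader" ordering described above could be sketched as follows. Here `ai_model` stands in for any model returning a diagnosis and a confidence; the stub, the field names, and the threshold are all invented for illustration, not a real system's API.

```python
# Sketch of "human first, AI second": the physician commits a diagnosis,
# then a hypothetical model flags confident disagreements for review.
# This avoids the anchoring problem of showing the AI's answer first.

def second_read(human_dx, findings, ai_model, threshold=0.8):
    """Return a review flag when the AI confidently disagrees with the human."""
    ai_dx, confidence = ai_model(findings)
    if ai_dx != human_dx and confidence >= threshold:
        return {"status": "flag for review", "human": human_dx, "ai": ai_dx}
    return {"status": "no flag", "human": human_dx, "ai": ai_dx}

# Stub model for demonstration only; a real one would look at the findings.
stub = lambda findings: ("pneumonia", 0.9)
result = second_read("bronchitis", {"infiltrate": True}, stub)
```

The key design choice is that the human's answer is recorded before the model's is revealed, so the AI can catch a missed "routine" finding without constraining the human's initial read.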

      • AI diagnostics is very good at catching all the "routine" findings, not so much at outliers.

        I'm not so sure about that. Humans will generally tend to think a disease is something they're already familiar with. Given hoofbeats, they look for horses. An AI can be programmed with a very large amount of data, which will generally have good information on outliers, so it might be better at spotting the occasional zebra.

        (This depends on the doctor, of course. There are places where doctors will be prone t

  • Canada? (Score:5, Insightful)

    by AthanasiusKircher ( 1333179 ) on Tuesday May 23, 2017 @08:38PM (#54474197)

    They're not even a real country anyway....

    But to be more serious, this is going to become a serious problem soon. Whether it's cars or medical diagnosis or some other AI application. I think promoters of AI underestimate just how outraged the public will be when someone is "killed by an evil robot." Human error we can understand and sometimes condemn. But I think there's the potential for a lot more backlash even for minor incidents with AI -- and even if they likely wouldn't have been preventable by a human doctor/driver/whatever. At that point, it won't matter that the stats say it actually saves more lives overall, if the error or the death is egregious enough.

    • Re:Canada? (Score:5, Informative)

      by mentil ( 1748130 ) on Tuesday May 23, 2017 @10:39PM (#54474773)

      Remember the story a few months back on Slashdot about the girl who was held hostage in a hospital, and separated from her parents, because they said she was being given the wrong treatment by her parents due to a misdiagnosis? It'd be pretty difficult for an AI to one-up the evils intentionally committed by humans in the medical industry. Sure it might've been incompetence at first (child abuse is more likely than a super-rare disease) but after a point it was all CYA. Medical malpractice happens all the time, and there are tons of cases of "small-town doctor misdiagnoses rare affliction he'd never seen, patient suffers for it" that never hit the news, yet an AI might never make that mistake. This is why assistants following a flowchart are less likely to misdiagnose than a doctor; we're just replacing the "human reading a flowchart" with a computer program that speeds up the process.

      • It's not quite like that...they insisted on removing her from a treatment that was working, to a treatment that hadn't worked and was actually hurting their child.

        Sometimes parents and doctors (or even an AI) are going to give a different diagnosis, and it will always be an ethical quandary what to do. However, just as I don't believe parents have the right for their child to skip vaccinations or rely on faith healers, I think this had progressed into a point where the parents clearly were not acting in th

    • Robots have killed people. I haven't seen any great outcry about it. I don't agree with you, and won't until I see some actual evidence.

  • by El Cubano ( 631386 ) on Tuesday May 23, 2017 @08:42PM (#54474229)

    A misdiagnosis by a human physician can only be analyzed and argued about. A misdiagnosis by an AI physician can be forensically investigated. It can even be perfectly reenacted, both with the same and different inputs. That would allow, for example, a determination of whether the fault was a design flaw or a problem with the supplied inputs.

This would allow for very precise determination of responsibility. Today, if the patient omits some medically relevant detail and a misdiagnosis occurs, the human physician can only argue that he could have possibly come to a different conclusion with the additional information. With an AI, we can feed the updated parameters to it and actually see whether the result would or would not have been different. If the result would have in fact been different and correct, then the fault lies with the patient, or possibly with whoever was responsible for collecting the input data. If the result would not have changed, then there is a possible design flaw for which the developer/manufacturer may be held liable.

    In my mind, this can only mean an improvement from where things currently stand.
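The counterfactual re-run described above can be sketched directly, assuming the model is deterministic (the same inputs always produce the same output). The toy model, field names, and verdict strings below are invented for illustration.

```python
# Sketch of forensic fault attribution by counterfactual re-run:
# with a deterministic model, re-running on corrected inputs separates
# "the inputs were bad" from "the model was bad".

def attribute_fault(model, original_inputs, corrected_inputs, true_dx):
    if model(original_inputs) == true_dx:
        return "no misdiagnosis on the recorded inputs"
    if model(corrected_inputs) == true_dx:
        # Right answer once the inputs are complete: the fault lies with
        # the patient or whoever collected the input data.
        return "input fault"
    # Still wrong even with complete inputs: points at the model itself.
    return "model fault"

# Toy deterministic model: reports "condition_x" only if a marker is present.
toy = lambda inputs: "condition_x" if inputs.get("marker") else "condition_y"
verdict = attribute_fault(toy, {"marker": False}, {"marker": True}, "condition_x")
```

This only works because the re-run is a perfect reenactment; for models that are retrained continuously or have nondeterministic inference, the original version and inputs would have to be archived for this kind of audit.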

    • That would allow, for example, a determination of whether the fault was a design flaw or a problem with the supplied inputs.

      But what if it's neither?

I think that's what TFA is really getting at. Many AI algorithms are opaque nowadays -- you create whatever combination of "neural networks" or whatever the buzzword du jour is for the adaptive algorithms, and then you just feed them a bunch of data. And you wait for the results to seem "good." At that point, you end up with a sort of "black box" that consists of an "algorithm" with a bunch of numerical weightings, etc. that don't have clear meaning. If things don't turn out s

      • by ghoul ( 157158 )

        Edge cases happen in the real world too. In fact an AI will have fewer edge cases than a human doctor because an AI can be programmed with multiple doctors' knowledge

      • The problem with such systems is that we have absolutely no idea how they'll behave at various edge cases.

        I fail to see how this is a problem, unless these "edge cases" are things that doctors get correct far more often. My suspicion is that they will not be. As a good example, see the recent study that the older the doctor, the higher the patient mortality rate [arstechnica.com]. There was no causation in the study, only correlation, but regardless of the causation, it's pretty darn likely that AI will soon be doing a better job with diagnoses. AI can be updated with modern understanding and recent studies in a way that a brain

    • The question then becomes "who is the expert?"

The programmer may have some knowledge of the medical field(s) that the A.I. will be used in, but likely will not have the in-depth understanding that a PhD with a decade of experience would have. On the flip side, whoever is consulted for accurate diagnoses of the dozens/hundreds/thousands of ailments may not be the one actively using the A.I.

      Is it a design flaw if the A.I. misdiagnoses a rare disease about which very little is known? If so, where does the bl

    • You're over-complicating this.

      Doctors bury their mistakes. Why should it be any different for an AI?

  • Easy to answer (Score:4, Insightful)

    by 140Mandak262Jamuna ( 970587 ) on Tuesday May 23, 2017 @08:47PM (#54474255) Journal
    The one with the deepest pocket is the one to blame. If others have any resources the ambulance chasers will go after them too.
  • by Trogre ( 513942 ) on Tuesday May 23, 2017 @08:54PM (#54474299) Homepage

    Obviously whoever hired him is to blame.

    After seeing his performance in "Like a Surgeon" I'm surprised anyone would hire him to diagnose anything.

  • Medical Error? (Score:4, Insightful)

    by methano ( 519830 ) on Tuesday May 23, 2017 @08:57PM (#54474309)
I'm sorry, I just don't believe that medical error is the third greatest cause of death. That's just stupid. Nobody in his right mind would ever go to a doctor if the odds were that high. Does anybody ever question the stats people toss around these days?
    • Re:Medical Error? (Score:5, Insightful)

      by ceoyoyo ( 59147 ) on Tuesday May 23, 2017 @09:54PM (#54474591)

      A lot of people questioned that statistic. Vigorously. It held up.

      It's also supported by a lot of other research, ranging from what happens if you force a surgeon to use something as simple as a checklist (it prevents at least one potentially serious error in nearly every single surgery) to what happens if a community loses access to a hospital for some reason (death rates in that community decrease).

      Medicine is the only profession where life-critical decisions are made based on personal expertise and opinion, rather than carefully specified standard operating procedure. It's a weird historical holdover.

      If people knew the real statistics they wouldn't go to hospitals as much as they do, and they'd be much more skeptical about what doctors told them to do. The general public doesn't appreciate those statistics because they have a weird hero worship for physicians, and because the guild of physicians actively suppresses such information. Medical errors happen constantly, but are not routinely monitored or reported, unlike in virtually every other similar profession.

      • Really? Because it's presented as an estimate in the paper.

        It's probably about right in the US, and would explain a lot of the unusually adverse outcomes if you're unlucky enough to be admitted into a hospital there, but it's still not a statistic.

        • Re: (Score:3, Insightful)

          by jedidiah ( 1196 )

          > if you're unlucky enough to be admitted into a hospital there

It's much easier than anywhere else. All of that money we "waste" means we have more capacity. Because of a Republican president, hospitals have to take you for life-threatening conditions regardless of your ability to pay.

          Lay off the media narrative.

      • Re: (Score:2, Insightful)

        Medicine is the only profession where life-critical decisions are made based on personal expertise and opinion, rather than carefully specified standard operating procedure. It's a weird historical holdover.

There are lots of professions like this: engineering, airline pilots, police, firemen, military, etc. Just like medicine, there are standard procedures for "standard" situations, and it is up to the people involved to determine which "standard" situation is most applicable and to adapt the procedures for it to the particular situation. This is not a "weird historical holdover" but the best way we have of doing things: train for standard situations and use your experience, knowledge and intelligence to cope wit

    • Nobody in his right mind would ever go to a doctor if the odds were that high.

That depends on the nature of the "medical error". I expect that the vast majority of these cases are people who have a serious condition which is misdiagnosed or incorrectly treated, and because the condition is not treated it eventually kills them, i.e. they die from the condition. If true, then going to the doctor results in a 67% chance of proper treatment and possible survival vs. not going, which will be 100% fatal.

      To be worse off going to the doctor there has to be a serious risk that the treatment of

      • by cryptolemur ( 1247988 ) on Wednesday May 24, 2017 @05:05AM (#54475755)

If true, then going to the doctor results in a 67% chance of proper treatment and possible survival vs. not going, which will be 100% fatal.

Otherwise you're correct, but "third biggest killer" in this case doesn't mean 33%, but something between 0.3% and 0.6% of yearly hospital visits. In other words, one patient out of 200-300 dies because of misdiagnosis; the rest actually survive the ordeal, and some may even regain their health...

    • by Kiuas ( 1084567 )

I just don't believe that medical error is the third greatest cause of death. That's just stupid. Nobody in his right mind would ever go to a doctor if the odds were that high.

Introduction to game theory. Suppose you have cancer that needs to be operated on. The chances of you dying from complications of the operation can be quite high, in the double digits even. However, the chance of you dying from untreated cancer is 100%. It's not a difficult choice to make.

      Same for every other scenario: you get into a car cr

    • Depends on how you measure.

      Car accident, head on collision, drunk driver with a BAC of 0.09 fell asleep at the wheel, driver of the car died in hospital from his injuries and someone in the medical profession thinks he may have been able to save him.

      Was this:
      a) Death by accident?
      b) Death due to drowsy driver?
      c) Death due to drunk driving?
      d) Death due to medical error?

      I'll go with e) all of the above.

      • by methano ( 519830 )
        I figured it out.

        Causes of death:
        1. Heart stops beating
        2. Stops breathing
        3. 1 or 2 happens with a Lawyer present
        4. Profit
      • Reminds me of a Raymond Smullyan murder mystery.

A, B, and C are traveling in the desert and rest at an oasis. B and C both hate A, so B puts poison in A's water bottle and C arranges for it to start leaking badly a little while after being filled. A fills his water bottle and rides off on his camel. He finds it empty and dies of thirst. Who killed him?

        Did B kill him? A never touched the poison. Did C kill him? All C did was remove poison, not drinkable water.

I'm sorry, I just don't believe that medical error is the third greatest cause of death. That's just stupid. Nobody in his right mind would ever go to a doctor if the odds were that high. Does anybody ever question the stats people toss around these days?

      Speaking of statistics, there's damn near a 100% chance you will not treat yourself properly with no medical training or equipment.

      If a situation dictates you need to see a doctor, then anyone in their right mind would likely accept the risk.

      If you're healthy, then the overall risk is fairly low, because you may only see a doctor once or twice a year. If you really were worried about death, stop getting into moving vehicles. Shit you do every day is far more likely to harm you.

  • Most medical mistakes that result in death are not caused by misdiagnosis, though that does occur. Most of them are from a combination of surgical mistakes and human error while dispensing/choosing medication. There are a lot of much easier ways to reduce medical mistakes without going so far as to replace doctors with a computer.

    Even if the only thing we did was require that every packet of medication was tagged with a barcode and require that appropriate people scan the patient's chart and the barcode
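The barcode check the parent describes could be sketched as follows. The record layout, patient IDs, and drug codes are all invented for illustration; a real system would verify drug, dose, route, time, and patient identity against the order.

```python
# Sketch of a dispensing check: scan the patient's chart and the medication
# barcode, and block a mismatch before the dose is given.
# All identifiers and record layouts below are invented for illustration.

CHARTS = {
    "patient-001": {"prescribed_code": "MED-1433"},
}

def verify_dispense(patient_id, scanned_code):
    """Compare the scanned medication code against the patient's chart."""
    chart = CHARTS.get(patient_id)
    if chart is None:
        return "block: unknown patient"
    if chart["prescribed_code"] != scanned_code:
        return "block: medication does not match chart"
    return "ok to dispense"
```

The point of the parent's suggestion is exactly this kind of dumb, mechanical cross-check: it catches wrong-drug and wrong-patient errors without anyone having to replace the doctor's judgment.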

    • by guruevi ( 827432 )

Most hospitals have e-records these days. I would blame Epic and its ilk for the screw-ups; e-record systems seem to have less than 95% uptime.

  • had to say it
  • Is it too early to be disappointed in Slashdot again? Maybe someone will post a funny joke that actually gets some funny mod points? Ditto insightful, eh? Even an actually interesting or informative comment? Not holding my breath. Short summary: No such luck yet (and including keyword searches).

    Ever hear the old philosophy joke: "If a tree falls in the forest and there is no one to hear it, does it still make a sound?"

    The equivalent question for today's feeble article is: "If a corporation's AI botches your diagnosis and there is no one to sue, does your death still matter?"

    You may safely anticipate that the EULA will protect the AI from liability much more than it protects the patient from mistakes or software glitches, no matter how egregious and flagrant. Actually, the hierarchy of protection will probably go something like (1) Corporation that created the AI, (2) Corporation that is licensing the AI, (3) The hospital corporation, (4) The doctors who use the AI, and so on. They may remember to include the patient somewhere in there, or maybe not.

    Compare to Dr Mayo's motto: "The best interest of the patient is the only interest to be considered..." The current incorporated Mayo Clinic still mentions patients on the website, but I couldn't find such a strong form.

Not sure what the trigger was, but I recently realized that individuals don't count now. It's only the biggest corporations and political parties that matter. I had still been clinging to some delusions from my "respect for the individual" days, but now the individual is just a cog, and the only question is which cog can do the job most cheaply before being discarded. Trigger might have been the book Hitlerland, which has NO relation to #PresidentTweety, since it was published some years ago. Not even sure if I want to recommend it, though it's still bothering me...

  • Stupid (Score:5, Insightful)

    by ceoyoyo ( 59147 ) on Tuesday May 23, 2017 @09:30PM (#54474471)

    It's a stupid question that illustrates a misunderstanding about what diagnosis is.

If a fortune teller fails to predict you're going to get hit by a bus tomorrow, who's to blame: her, her crystal ball, or its manufacturer?

    Physicians misdiagnose patients all the time because diagnosis depends on a variety of imperfect information and very often cannot be done accurately. That is nobody's fault.

    Physicians also misdiagnose patients all the time because they aren't very good at keeping up with new developments in medicine or are otherwise negligent. That's their fault.

    An AI could be wrong for the first reason. If so, nobody is at fault. If the AI is wrong because of a manufacturing defect, the manufacturer is at fault, or the supervising physician is, if they insist on being in that position.

    • by llZENll ( 545605 )

IMO the entity you get the diagnosis from is to blame, the doctor/hospital most likely. You sue them, they have insurance and are covered, and they go after the hospital, diagnostic AI, manufacturer, or who/whatever else is in the pipeline. Ultimately it doesn't matter though, as you can sue any of these yourself.

    • Physicians also misdiagnose patients all the time because they aren't very good at keeping up with new developments in medicine or are otherwise negligent. That's their fault.

      They also misdiagnose patients because there are some extremely rare disorders that look a lot like common ones. And instead of remembering to eliminate that rare one with further testing, they apply Occam's Razor.

      This is probably where the biggest benefit of AI comes in - a larger library of knowledge than a human can keep in their head. You still need a doctor to weigh the answers at this point, but a computer can't "forget" about things that are unlikely.

  • by frank_adrian314159 ( 469671 ) on Tuesday May 23, 2017 @09:39PM (#54474519) Homepage

    The doctor. All of these systems are marketed and sold with the proviso that these provide only advice for the physician. This is to make sure that liability is clearly allocated. And notice that it's not the software company accepting this liability.

  • Whoever thought the AI was accurate enough is to blame, that is, the person who approved its use for that patient and then signed off on the diagnosis. I mean, people are verifying this stuff right? It's not like the AI has any authority itself.

  • Wasn't there a scene in a movie about this, where poorly-behaved robots were tortured with hot irons on the feet and similar? I mean, if the AI messes up, then I can only assume the AI is to blame; don't we just slow down its power cycles or something like that?

  • The AI vendor and/or user could be required to carry liability insurance. The better the AI (fewer catastrophic errors), the lower the premium.
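The premium scaling suggested above is just expected-loss pricing, which can be sketched in a few lines. Every number here (error rate, volume, claim size, loading factor) is invented for illustration.

```python
# Back-of-envelope liability premium for an AI vendor: expected yearly
# payout is error rate x diagnoses x average claim, times a loading
# factor for the insurer's costs and margin. All figures are illustrative.

def yearly_premium(error_rate, diagnoses_per_year, avg_claim, loading=1.25):
    expected_payout = error_rate * diagnoses_per_year * avg_claim
    return expected_payout * loading

# An AI with half the catastrophic-error rate pays half the premium,
# all else being equal -- the market incentive the parent describes.
better = yearly_premium(0.001, 100_000, 50_000)
worse = yearly_premium(0.002, 100_000, 50_000)
```

Under this pricing, improving the model directly lowers the vendor's operating cost, so the insurance market itself rewards safer AI.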
  • Blame game. (Score:2, Insightful)

    by WolfgangVL ( 3494585 )

    If we could all just stop looking for the whipping-boy every time something goes wrong, that would be great.

Who has the medical license, the AI or the doctor using it? You don't sue the gun manufacturer or the stethoscope company; they are just tools used by the doctor licensed to 'practice' medicine.

Who has the medical license, the AI or the doctor using it? You don't sue the gun manufacturer or the stethoscope company; they are just tools used by the doctor licensed to 'practice' medicine.

      I know you Yanks love your guns, but how the fuck does a doctor use one to practise medicine?

  • Safety net (Score:5, Insightful)

    by Sigma 7 ( 266129 ) on Tuesday May 23, 2017 @11:35PM (#54474987)

    A faulty machine or medical device that harms a patient would likely see the manufacturer or operator held to account.

    If only there were some organization that provides a safety net in case something like this happens. In exchange for a small fee (whether through taxes or however else it's implemented), patients would be economically protected should a misdiagnosis cause problems.

    Maybe the safety net could be called "insurance". Perhaps it would even be possible to run a for-profit organization under this concept.

    What would this mean for an AI?

    If the AI turns out to be more accurate, then that would make insurance payouts less frequent.

    If you want to minimize problems, you can have the AI provide the most likely issues, and a competent human doctor make sure that the diagnosis is sane.
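    The workflow this comment describes — the AI proposes a ranked list of likely issues, and a human signs off — can be sketched as follows. Everything here is a hypothetical illustration: the function, threshold, and the example probabilities are all invented, not taken from any real diagnostic system.

    ```python
    def triage(candidates, review_threshold=0.90):
        """Rank candidate diagnoses by model probability and flag the
        case for physician review when no candidate is dominant.

        candidates: dict mapping diagnosis name -> estimated probability.
        Returns (ranked list of (name, prob), needs_human_review flag).
        """
        ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
        top_prob = ranked[0][1] if ranked else 0.0
        needs_review = top_prob < review_threshold
        return ranked, needs_review

    # Example: a differential diagnosis with no dominant candidate,
    # so a competent human doctor must make sure the diagnosis is sane.
    differential = {"influenza": 0.55, "pneumonia": 0.30, "rare disorder X": 0.15}
    ranked, needs_review = triage(differential)
    ```

    Note that the rare disorder stays on the list instead of being "forgotten" — which is the advantage over human Occam's-Razor shortcuts raised earlier in the thread.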

  • But who can get you unlisted from the pre-existing conditions list?

  • Easy (Score:4, Funny)

    by nospam007 ( 722110 ) * on Wednesday May 24, 2017 @01:53AM (#54475397)

    The IT guy's malpractice insurance naturally.

  • You blame the doctor. Then you lead the priestesses of Kubebe on a jihad against Richese and then on to destroy the thinking machines.
  • So, yes, the blame lies with the designers and trainers of the AI.

    Oh, and a botched diagnosis should also go in the pool of training cases for future AIs.

  • First, remove the AI and have an actual human do it. Human intuition will pick up things that even IBM's betrayal of humanity won't.

  • The medical licensing authority that licensed a machine to practice medicine.
  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Wednesday May 24, 2017 @04:35AM (#54475699)

    This type of question is a non-existent dilemma.

    The concept of blame should disappear if you are diagnosed by a system that is orders of magnitude more precise than any human could ever be. AFAIK that is exactly the point of medical AIs like Watson. If maintained well, a system like Watson can "know" things a human or an entire army of human medical experts could never know, and can cross-reference cases, drug interactions and genetic information at a speed, scale and accuracy that will make the last 500 years of advancement in medicine look like a pre-school exercise in comparison. The rate of mis- or non-diagnosis by human medical experts is high, and we aren't any happier about it just because we have someone to blame. Doctors can only operate because there are catch-alls in place that keep a doctor who screwed up from going to jail. Given the choice between an 80% chance of dying in the next 5 years and an 80% chance of being cured, with a 20% chance that an operation done by a human still fails and kills me fast, I would probably still take my chances. It's always a trade-off, and capable AIs driving for us or doing 95% of all medical diagnostic work will tip the odds so far in favour of humans that playing the blame game if something at some point does go wrong would be nothing short of stupid and/or silly.

    The same goes for "Who's the AI-driven car going to kill? The young handicapped kid on life support, or the old grandma 5 years from the grave but with 4 grandchildren who love her?".

    This type of question entirely misses the point. AI will be let onto the streets when it drives way, way better than a human ever could, always and everywhere. Deaths in traffic will plummet by orders of magnitude, and the occasional situation where an AI can't prevent someone from dying will be so rare that society will shrug it off. Experts even expect an extreme organ donor shortage once AI hits the streets: fewer idiots killing themselves and others. ... On second thought, maybe we should keep a subset of roads for those who insist on racing around.

    • You're assuming that misdiagnosis will be a thing of the past, and I don't see that. People are complicated. Some stuff is simple and definite, and doctors don't misdiagnose (e.g., finding staph aureus in a culture). Other things are more complicated, and no system is going to get everything right. The AI is going to misdiagnose, and the survivor's relatives will argue that the AI should have prescribed a particular test (whether or not its chance of eliminating a misdiagnosis is significant enough for

  • The law tends to protect doctors from simple mistakes. And with AI I think the same would be true. If the software was diligently created and is known as a good product there is no expectation of perfection. The same is true for your surgeon. He can do great harm. But as long as he was sober, in a proper state of mind, and diligent in trying to render aid the law will not tend to land on him like a bag of bricks. Did the doctor or software do what other doctors or AI programs would have done? Is the b
  • ... was already settled in the context of insurance companies and health maintenance organizations. Corporations and their agents are already immune from malpractice liability in misdiagnoses or mistreatment of patients' medical conditions. This will be extended to corporate-owned AI systems.
