When AI Botches Your Medical Diagnosis, Who's To Blame? (qz.com)
Robert Hart has posed an interesting question in his report on Quartz: When artificial intelligence botches your medical diagnosis, who's to blame? Do you blame the AI, its designer, or the organization that deployed it? It's just one of many questions popping up and starting to be seriously pondered by experts as artificial intelligence and automation become ever more entwined in our daily lives. From the report: The prospect of being diagnosed by an AI might feel foreign and impersonal at first, but what if you were told that a robot physician was more likely to give you a correct diagnosis? Medical error is currently the third leading cause of death in the U.S., and as many as one in six patients in the British NHS receive incorrect diagnoses. With statistics like these, it's unsurprising that researchers at Johns Hopkins University believe diagnostic errors to be "the next frontier for patient safety." Of course, there are downsides. AI raises profound questions regarding medical responsibility. Usually, when something goes wrong, it is a fairly straightforward matter to determine blame. A misdiagnosis, for instance, would likely be the responsibility of the presiding physician. A faulty machine or medical device that harms a patient would likely see the manufacturer or operator held to account. What would this mean for an AI?
Differential and management are not the same. (Score:5, Insightful)
So the computer produces a list of possible diagnoses. This list, I understand, is called a "differential diagnosis" and may have as few as 2 or 3 items or as many as several hundred.
I would expect this diagnosis list, as well as a management plan, to then be put in the hands of a human. After a series of tests, I would expect the AI to be consulted again if necessary.
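To make the idea concrete, here's a minimal sketch of how such a ranked differential might be produced; the diseases, priors, and likelihoods below are invented for illustration, not real clinical data.

    # Toy differential-diagnosis ranker (illustrative only; all numbers invented).
    from math import prod

    PRIORS = {"flu": 0.05, "strep": 0.02, "mono": 0.005}  # P(disease)
    LIKELIHOODS = {                                       # P(symptom | disease)
        "flu":   {"fever": 0.9, "sore_throat": 0.5, "fatigue": 0.8},
        "strep": {"fever": 0.7, "sore_throat": 0.95, "fatigue": 0.4},
        "mono":  {"fever": 0.6, "sore_throat": 0.7, "fatigue": 0.95},
    }

    def differential(symptoms):
        """Rank diseases by unnormalized naive-Bayes posterior score."""
        scores = {
            d: PRIORS[d] * prod(LIKELIHOODS[d].get(s, 0.01) for s in symptoms)
            for d in PRIORS
        }
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(differential(["fever", "sore_throat"]))  # flu first, then strep, then mono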
In today's world, and probably the near future, say the next decade, I doubt that medicine will become the "autodoc," the "robotic physician," the holographic "Doctor," or some "magic cryokit." There will be a human with a powerful tool to aid in diagnosing the patient.
Now, what will happen in 50 years remains to be seen.
Phil
Re: (Score:2)
I'm more interested in knowing who gets sued if the AI is hacked to deliberately misdiagnose...
Re: (Score:2)
Whoever makes money from it.
Re: (Score:2)
"who ever makes money from it"
More like whoever has money that can potentially be liberated for some noble cause like enriching lawyers.
Another factor is that a doctor generally has to screw up substantially to be sued successfully. I suspect the same isn't true of an AI computer program, which is likely to be held to a much higher standard. That may cause AI to be relegated to an advisory role. "Alexa, I think this dude has cancer, what do you think?" "I think it's just a hangover. Tell him to take two
Re: (Score:2)
Umm.
I'm going to guess whoever hacked it?
Re: Differential and management are not the same. (Score:2)
Re: (Score:2)
The AI runs on data. Change the data and you get new results. Clear the flag that restricts a particular diagnosis to one sex and you will have women being diagnosed with prostate problems. Flood it with one-sided data and that rare problem will seem common.
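As a toy illustration of that point (a hypothetical rule table, not any real system), clearing one applicability flag silently changes who a rule fires for:

    # Hypothetical rule table: each rule carries an applicability restriction.
    RULES = [
        {"dx": "prostate_hyperplasia", "symptom": "urinary_trouble", "sex": "M"},
    ]

    def candidates(symptom, sex):
        return [r["dx"] for r in RULES
                if r["symptom"] == symptom and (r["sex"] is None or r["sex"] == sex)]

    print(candidates("urinary_trouble", "F"))  # [] -- restriction intact
    RULES[0]["sex"] = None                     # the "hacked" data change
    print(candidates("urinary_trouble", "F"))  # ['prostate_hyperplasia']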
Re: (Score:2)
I expect in 10 years, 20 max, it will be illegal in civilized places to diagnose and prescribe treatment without consulting a computer, just like it is now without consulting a medically trained person.
This is going to change fast, as soon as it becomes widely apparent how bad the medical profession is at doing this stuff.
Re: (Score:2)
"I expect in 10 years, 20 max, it will be illegal in civilized places to diagnose and prescribe treatment without consulting a computer, just like it is now without consulting a medically trained person."
"Windows just crashed again. You'll have to stop that damn bleeding until IT can fix this thing ... ???
You sue the operator (Score:2)
The operator may sue the manufacturer, but you should sue whoever you bought the service from (doctor/hospital).
Look, it's pretty much the same now if you get your eyes lasered and messed around with: the machine does 100% of the actual operation and the doctor is there just to press stop. But it's still his fault if something is fucked up.
Re: (Score:2)
The institution of the medical doctor is a deeply embedded one. Medical schools are selective not because they weed out bad doctors, but to keep the supply of doctors down, to prevent the profession from getting oversaturated and lowering the paycheck amount. Then there are rules and regulations for what mid-level providers can and can't do. So you may go to a nurse practitioner (NP); it may appear that they are doing the same job as an MD, but a doctor will need to sign off on the work and recommend
Re: (Score:2)
Medical schools are selective not because they weed out bad doctors, but to keep the supply of doctors down, to prevent the profession from getting oversaturated and lowering the paycheck amount.
No, medical schools are still very selective here in the UK where no one but an idiot would go into medicine for the money.
Re: (Score:2)
Even in the US, where things can be quite lucrative, you would be an idiot to get into it just for the money. It's an unbelievable grind that would take the most macho workaholic in Silicon Valley and spit him out into little pieces.
If you've never asked a doctor about this and let him rant, you really have no idea.
Re: (Score:2)
In practice (literally and figuratively in this case), mistakes are considered a bad thing; but getting medical diagnosis error-free (especially if you include things like "patient has multiple conditions contributing to their reported symptoms; doctor correctly diagnoses one of them; but that correct diagnosis delays treatment for the other one") is sufficiently difficult that mere error i
Re: (Score:2)
is sufficiently difficult that mere error isn't generally considered a matter of culpability unless it's accompanied by negligence or recklessness
In other words: medicine is based on imperfect information, it's impossible to correctly diagnose and treat everything, and people should stop expecting doctors to get everything right and focus more on getting things less wrong.
I'm a fan of exploratory pharmacology, although I don't know if that's considered ethical. I don't particularly care, so long as it's safe.
I went in for psychiatric care due to ADHD, and eventually discovered my insomnia was severe (I thought I was getting 6.5 hours; I was getti
Re: (Score:2)
Aside from a human likely remaining in the loop, TFA seems to overstate the urgency of the 'who screwed up?' question.
...
There's also the fact that, unless a doctor more or less straight up murders a patient, the 'blame' in a financial sense tends to get kicked up the chain to whoever allowed the doctor to practice...
Indeed.
Doctors probably get most diagnoses correct, mainly because most people are sick with common problems. When it comes to uncommon causes, doctors get it wrong, probably every day. I would bet that doctors are basically making wild-arsed guesses for 10% of the people who walk in the door, nothing better than even a very crappy AI could do, and those guesses are often wrong.
Does it matter? To the patient, yes, obviously, but the majority of patients end up getting at least a bit better r
Real world example: EKG (Score:2)
A real world example: electrocardiograms (EKGs), traces of the heart's activity.
They are a useful tool to help diagnose heart rhythm problems.
For a couple of decades already, given how simple the data is (less than a dozen 1-D signals), we have managed to do automatic recognition.
To the point that any modern EKG machine will print a diagnosis after the traces themselves on the report.
How do doctors use it?
We are trained to first look at the traces, see if they seem obviously wrong or not,
then apply
Re: (Score:2)
Human doctors are pretty bad at getting the right diagnosis and treatment, in my experience. AI will probably get better than a human relatively quickly (in the next couple of decades), especially if it has better access to sensor data. Humans already rely a lot on computers to interpret medical sensors for them.
Re: (Score:2)
Now, what will happen in 50 years remains to be seen.
Ultimately it is not doctors who have to deal with misdiagnosis, it's medical insurers, specifically liability insurance. The same will be true in 50 years: get misdiagnosed by Dr. Robot MD, get a payout, same as if you were misdiagnosed by Dr. Meatbag today.
Re: (Score:2)
You have it wrong. The list is not to be put in human hands. The human is responsible for wrong diagnoses much more often than the software. This has been known for a long time. It was already the case 40 years ago for some specific diseases, where the software (an expert system) was better than a human at diagnosing the disease. Now the spectrum has broadened because data is available to take advantage of machine learning. Such systems are known to be better than the expert physician at diagnosis. It is beaten onl
Re: (Score:2)
So, why go back to a physician once you have the best diagnosis possible?
Because humans are able to access and interpret a wider array of information. We see patterns in how people walk and how they breathe, and we notice situations where the scientific analysis of the data yields one diagnosis but a different diagnosis is actually correct.
Sometimes it's 0.01% likely that the other diagnosis is correct, given the data you understand; and some available data you can't yet name or quantify identifies when it's almost guaranteed the alternate option is the correct one. We call that
Re: (Score:2)
So the computer produces a list of possible diagnoses. This list, I understand, is called a "differential diagnosis" and may have as few as 2 or 3 items or as many as several hundred.
I would expect this diagnosis list, as well as a management plan, to then be put in the hands of a human. After a series of tests, I would expect the AI to be consulted again if necessary.
The scenario you describe is the easy one, because the AI generates a (potentially large) list of potential diagnoses, and the human doctor uses that as input to their decision making process. In that case, the human doctor is still in charge and ultimately responsible.
For a more troublesome case, consider the article (https://techcrunch.com/2017/05/08/chinese-startup-infervision-emerges-from-stealth-with-an-ai-tool-for-diagnosing-lung-cancer/) I read recently about automated lung cancer detection from CT
Re: (Score:2)
It should actually be the opposite right now
The human's diagnosis and management plan should be double checked by the AI, not the other way around
AI diagnostics is very good at catching all the "routine" findings, not so much at outliers. Human experts are better at catching outliers and novel cases, but will sometimes miss the "routine" ones for various reasons (e.g. fatigue etc...)
When you have the AI go first, its results *further* constrain the human's diagnosis, leading to the outliers and novel cases
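A sketch of that ordering (the function and names are placeholders, not any real product's API): the AI reads the case independently and only escalates on disagreement, so it never constrains the human's initial read.

    # "AI as second reader" sketch (hypothetical names, not a real API).
    def human_first_workflow(case, human_dx, ai_model):
        ai_dx = ai_model(case)  # AI sees the case, not the human's answer
        if ai_dx != human_dx:
            return ("flag_for_review", human_dx, ai_dx)  # disagreement escalates
        return ("confirmed", human_dx)

    print(human_first_workflow({"age": 50}, "flu", lambda case: "pneumonia"))
    # ('flag_for_review', 'flu', 'pneumonia')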
Re: (Score:2)
I'm not so sure about that. Humans will generally tend to think a disease is something they're already familiar with. Given hoofbeats, they look for horses. An AI can be programmed with a very large amount of data, which will generally have good information on outliers, so it might be better at spotting the occasional zebra.
(This depends on the doctor, of course. There are places where doctors will be prone t
Re: (Score:2)
I pay my doctor not the AI. I will hold my doctor responsible not the AI.
And ultimately some insurance company pays you regardless of who is at fault.
Re: (Score:2)
I pay my doctor not the AI. I will hold my doctor responsible not the AI.
And ultimately some insurance company pays you regardless of who is at fault.
Perhaps!
Re: (Score:2)
Of course the deep learning systems are not yet at a point that they could be even considered for widespread use, but that's a long way from never.
Medical diagnoses involve high amounts o
Re: (Score:2)
One thing an opaque deep-learning system could do is suggest possible diagnoses for the doctor's consideration. Sometimes doctors just overlook possibilities.
Re: (Score:2)
"AI needs to be trained just like graduating students. ..."
Yes, but my impression is that (much of?) the "training" already exists in the form of guidelines that doctors are supposed to, and often do, follow. E.g., age 77, male, BP 138/80 = OK; but BP 141/80 = needs medication.
I could be wrong, of course.
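If that impression is right, the codified part really is just threshold logic. A minimal sketch using the comment's numbers (not a real clinical guideline):

    # Toy guideline check mirroring the comment's example; the threshold is
    # illustrative, and a real guideline would also use age, sex, etc.
    def needs_bp_medication(age, sex, systolic, diastolic):
        return systolic >= 140

    print(needs_bp_medication(77, "M", 138, 80))  # False -- OK
    print(needs_bp_medication(77, "M", 141, 80))  # True  -- needs medication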
Re: (Score:2)
Those few "domain experts" are the only ones with enough of a clue. The fact that this is the case is precisely the sort of problem these systems are trying to solve.
Your average doctor doesn't have the breadth of knowledge to handle every obscure diagnosis. Bridging that knowledge gap is really the value of an overpriced, overhyped version of PubMed.
The current training regimen is part of the problem.
Re: (Score:2)
Yeah. You poor victim. Being forced to pay a highly trained and educated professional the same as what you might pay for mindless luxuries or consumer goods.
Basic medical care is only "expensive" if you're on food stamps.
Spoiled entitled ingrate brats.
Re: (Score:2)
I can put off luxuries. When I've got an infection or heart attack, I need medical assistance then, not after my next paycheck.
Also, basic medical care is expensive, at least in the US. It's not expensive if you don't get sick, but you can rapidly come down with something expensive. I wasn't actually budgeting for a heart attack, for example. Many drugs that you might wind up needing are quite expensive. Many people will have trouble affording another $100/week.
Re: (Score:2)
You must be joking. Even well trained nurses will go to pieces over situations that are clearly not a problem. They simply don't have sufficient knowledge.
The nurses that don't do this are already fulfilling the role of quasi-doctor and they are nearly as well trained as one.
Your average nurse has enough trouble with basic nursing duties without trying to pretend to be a doctor.
Canada? (Score:5, Insightful)
They're not even a real country anyway....
But to be more serious, this is going to become a serious problem soon. Whether it's cars or medical diagnosis or some other AI application. I think promoters of AI underestimate just how outraged the public will be when someone is "killed by an evil robot." Human error we can understand and sometimes condemn. But I think there's the potential for a lot more backlash even for minor incidents with AI -- and even if they likely wouldn't have been preventable by a human doctor/driver/whatever. At that point, it won't matter that the stats say it actually saves more lives overall, if the error or the death is egregious enough.
Re:Canada? (Score:5, Informative)
Remember the story a few months back on Slashdot about the girl who was held hostage in a hospital, and separated from her parents, because they said she was being given the wrong treatment by her parents due to a misdiagnosis? It'd be pretty difficult for an AI to one-up the evils intentionally committed by humans in the medical industry. Sure it might've been incompetence at first (child abuse is more likely than a super-rare disease) but after a point it was all CYA. Medical malpractice happens all the time, and there are tons of cases of "small-town doctor misdiagnoses rare affliction he'd never seen, patient suffers for it" that never hit the news, yet an AI might never make that mistake. This is why assistants following a flowchart are less likely to misdiagnose than a doctor; we're just replacing the "human reading a flowchart" with a computer program that speeds up the process.
Re: (Score:3)
It's not quite like that... they insisted on removing her from a treatment that was working, to a treatment that hadn't worked and was actually hurting their child.
Sometimes parents and doctors (or even an AI) are going to give a different diagnosis, and it will always be an ethical quandary what to do. However, just as I don't believe parents have the right for their child to skip vaccinations or rely on faith healers, I think this had progressed into a point where the parents clearly were not acting in th
Re: (Score:2)
Robots have killed people. I haven't seen any great outcry about it. I don't agree with you, and won't until I see some actual evidence.
AI diagnosis can be forensically investigated (Score:5, Insightful)
A misdiagnosis by a human physician can only be analyzed and argued about. A misdiagnosis by an AI physician can be forensically investigated. It can even be perfectly reenacted, both with the same and different inputs. That would allow, for example, a determination of whether the fault was a design flaw or a problem with the supplied inputs.
This would allow for very precise determination of responsibility. Today, if the patient omits some medically relevant detail and a misdiagnosis occurs, the human physician can only argue that he could possibly have come to a different conclusion with the additional information. With an AI, we can feed the updated parameters to it and actually see whether the result would or would not have been different. If the result would in fact have been different and correct, then the fault lies with the patient, or possibly whoever was responsible for collecting the input data. If the result would not have changed, then there is a possible design flaw for which the developer/manufacturer may be held liable.
In my mind, this can only mean an improvement from where things currently stand.
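A sketch of that replay idea, assuming the model is deterministic given its inputs (all names here are hypothetical):

    # Counterfactual replay: re-run the same frozen model on corrected
    # inputs and compare outcomes to apportion responsibility.
    def attribute_fault(model, recorded_inputs, corrected_inputs, truth):
        original = model(recorded_inputs)
        corrected = model(corrected_inputs)
        if original != truth and corrected == truth:
            return "fault lies with the inputs (patient/intake)"
        if corrected != truth:
            return "possible design flaw (developer/manufacturer)"
        return "no misdiagnosis: original inputs already gave the right answer"

    toy_model = lambda inputs: "flu" if inputs.get("fever") else "cold"
    print(attribute_fault(toy_model, {"fever": False}, {"fever": True}, "flu"))
    # fault lies with the inputs (patient/intake)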
Re: (Score:3)
That would allow, for example, a determination of whether the fault was a design flaw or a problem with the supplied inputs.
But what if it's neither?
I think that's what TFA is really getting at. Many AI algorithms are opaque nowadays -- you create whatever combination of "neural networks" or whatever the buzzword du jour is for the adaptive algorithms, and then you just feed them a bunch of data. And you wait for the results to seem "good." At that point, you end up with a sort of "black box" that consists of an "algorithm" with a bunch of numerical weightings, etc. that don't have clear meaning. If things don't turn out s
Re: (Score:2)
Edge cases happen in the real world too. In fact, an AI will have fewer edge cases than a human doctor, because an AI can be programmed with multiple doctors' knowledge.
Re: (Score:3)
The problem with such systems is that we have absolutely no idea how they'll behave at various edge cases.
I fail to see how this is a problem, unless these "edge cases" are things that doctors get correct far more often. My suspicion is that they will not be. As a good example, see the recent study that the older the doctor, the higher the patient mortality rate [arstechnica.com]. There was no causation in the study, only correlation, but regardless of the causation, it's pretty darn likely that AI will soon be doing a better job with diagnoses. AI can be updated with modern understanding and recent studies in a way that a brain
Re: (Score:2)
Algorithms are only as good as their programming, and doctors are only as good as their education.
You don't sue Harvard when a doctor screws up, do you? The primary blame would be on the hospitals, who would likely be leasing the AI systems from the programmers. Then if the system continues making the same mistakes (see above, forensic investigation) the hospital starts blaming the programmers.
Re: (Score:2)
The question then becomes "who is the expert?"
The programmer may have some knowledge of the medical field(s) that the A.I. will be used in, but likely will not have the in-depth understanding that a PhD with a decade of experience would have. On the flip side, whoever is consulted for accurate diagnoses of the dozens/hundreds/thousands of ailments may not be the one actively using the A.I.
Is it a design flaw if the A.I. misdiagnoses a rare disease about which very little is known? If so, where does the bl
Re: (Score:2)
You're over-complicating this.
Doctors bury their mistakes. Why should it be any different for an AI?
Easy to answer (Score:4, Insightful)
Whoever hired him (Score:4, Funny)
Obviously whoever hired him is to blame.
After seeing his performance in "Like a Surgeon" I'm surprised anyone would hire him to diagnose anything.
Medical Error? (Score:4, Insightful)
Re:Medical Error? (Score:5, Insightful)
A lot of people questioned that statistic. Vigorously. It held up.
It's also supported by a lot of other research, ranging from what happens if you force a surgeon to use something as simple as a checklist (it prevents at least one potentially serious error in nearly every single surgery) to what happens if a community loses access to a hospital for some reason (death rates in that community decrease).
Medicine is the only profession where life-critical decisions are made based on personal expertise and opinion, rather than carefully specified standard operating procedure. It's a weird historical holdover.
If people knew the real statistics they wouldn't go to hospitals as much as they do, and they'd be much more skeptical about what doctors told them to do. The general public doesn't appreciate those statistics because they have a weird hero worship for physicians, and because the guild of physicians actively suppresses such information. Medical errors happen constantly, but are not routinely monitored or reported, unlike in virtually every other similar profession.
Re: (Score:2)
Really? Because it's presented as an estimate in the paper.
It's probably about right in the US, and would explain a lot of the unusually adverse outcomes if you're unlucky enough to be admitted into a hospital there, but it's still not a statistic.
Re: (Score:3, Insightful)
> if you're unlucky enough to be admitted into a hospital there
It's much easier than anywhere else. All of that money we "waste" means we have more capacity. Because of a Republican president, hospitals have to take you for life-threatening conditions regardless of your ability to pay.
Lay off the media narrative.
Re: (Score:2, Insightful)
Medicine is the only profession where life-critical decisions are made based on personal expertise and opinion, rather than carefully specified standard operating procedure. It's a weird historical holdover.
There are lots of professions like this: engineering, airline pilots, police, firemen, the military, etc. Just like medicine, there are standard procedures for "standard" situations, and it is up to the people involved to determine which "standard" situation is most applicable and to adapt the procedures for it to the particular situation. This is not a "weird historical holdover" but the best way we have of doing things: train for standard situations and use your experience, knowledge, and intelligence to cope wit
Depends on Type of Error (Score:3)
Nobody in his right mind would ever go to a doctor if the odds were that high.
That depends on the nature of the "medical error". I expect that the vast majority of these cases are people who have a serious condition which is misdiagnosed or incorrectly treated, and because the condition is not treated it eventually kills them, i.e. they die from the condition. If true, then going to the doctor results in a 67% chance of proper treatment and possible survival vs. not going, which will be 100% fatal.
To be worse off going to the doctor there has to be a serious risk that the treatment of
Re:Depends on Type of Error (Score:5, Informative)
If true, then going to the doctor results in a 67% chance of proper treatment and possible survival vs. not going, which will be 100% fatal.
Otherwise you're correct, but "third biggest killer" in this case doesn't mean 33%, but something between 0.3% and 0.6% of yearly hospital visits. In other words, one patient out of 200-300 dies because of misdiagnosis; the rest actually survive the ordeal, and some may even regain their health...
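Rough arithmetic behind that ballpark (figures approximate: the Johns Hopkins estimate is on the order of 250,000 deaths per year, against roughly 35 million US hospital admissions per year; a broader denominator that counts other hospital visits pushes the rate down toward the quoted range):

    deaths_from_error = 250_000       # Johns Hopkins estimate, approximate
    hospital_admissions = 35_000_000  # rough US admissions per year
    print(f"{deaths_from_error / hospital_admissions:.2%}")  # ~0.71%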
Re: (Score:2)
Introduction to game theory. Suppose you have cancer that needs to be operated on. The chances of you dying from complications of the operation can be quite high, in the double digits even. However, the chance of you dying from untreated cancer is 100%. It's not a difficult choice to make.
Same for every other scenario: you get into a car cr
Re: (Score:2)
Depends on how you measure.
Car accident, head-on collision: a drunk driver with a BAC of 0.09 fell asleep at the wheel; the driver of the car died in hospital from his injuries, and someone in the medical profession thinks he may have been able to save him.
Was this:
a) Death by accident?
b) Death due to drowsy driver?
c) Death due to drunk driving?
d) Death due to medical error?
I'll go with e) all of the above.
Re: (Score:2)
Causes of death:
1. Heart stops beating
2. Stops breathing
3. 1 or 2 happens with a Lawyer present
4. Profit
Re: (Score:2)
Reminds me of a Raymond Smullyan murder mystery.
A, B, and C are traveling in the desert and rest at an oasis. B and C both hate A, so B puts poison in A's water bottle and C arranges that it will start leaking badly a little while after being filled. A fills his canteen and rides off on his camel. He finds his water bottle is empty and dies of thirst. Who killed him?
Did B kill him? A never touched the poison. Did C kill him? All C did was remove poisoned water, not drinkable water.
Re: (Score:2)
I'm sorry, I just don't believe that medical error is the third greatest cause of death. That's just stupid. Nobody in his right mind would ever go to a doctor if the odds were that high. Does anybody ever question the stats people toss around these days?
Speaking of statistics, there's damn near a 100% chance you will not treat yourself properly with no medical training or equipment.
If a situation dictates you need to see a doctor, then anyone in their right mind would likely accept the risk.
If you're healthy, then the overall risk is fairly low, because you may only see a doctor once or twice a year. If you really were worried about death, stop getting into moving vehicles. Shit you do every day is far more likely to harm you.
Re: (Score:2)
Speaking of statistics, there's damn near a 100% chance you will not treat yourself properly with no medical training or equipment.
If a situation dictates you need to see a doctor, then anyone in their right mind would likely accept the risk.
If you're healthy, then the overall risk is fairly low, because you may only see a doctor once or twice a year. If you really were worried about death, stop getting into moving vehicles. Shit you do every day is far more likely to harm you.
There is a 100% chance that your life will end in death at this point, so why bother going to a doctor anyway? None of them have a cure for the 100% fatal disease called life.
Chances are we do not want to solve for the disease of life. Not only would our fragile planet not be able to handle it, but it would tend to destroy the concept of humanity. Social media has done enough to bolster global narcissism. I can't imagine the God complex that would ultimately be created with immortality.
Re: (Score:2)
"Sometimes patients still die even when they receive the best of care."
I've been told that there's a substantial chance some of us might die sooner or later.
Re: (Score:2)
By that definition, everyone that dies of cancer while being under treatment would be counted as an error.
Medical mistakes? (Score:2)
Most medical mistakes that result in death are not caused by misdiagnosis, though that does occur. Most of them are from a combination of surgical mistakes and human error while dispensing/choosing medication. There are a lot of much easier ways to reduce medical mistakes without going so far as to replace doctors with a computer.
Even if the only thing we did was require that every packet of medication was tagged with a barcode and require that appropriate people scan the patient's chart and the barcode
Re: (Score:2)
Most hospitals have e-records these days. I would blame Epic and its ilk for screw-ups these days; e-record systems seem to have less than 95% uptime.
Re: (Score:2)
Y'know, 15 years or so ago, I had a part-time job managing a network of a hundred or so Windows 9x PCs and a handful of WFWG PCs too incapable to even run Win95, with a Novell server. It didn't crash.
Sometimes (about every third thunderstorm) we lost the internet for a few hours. Every now and then a PC died. But the network didn't crash... until expert help tried to modernize it.
Then it crashed.
A lot.
Maybe there is something to be said for simple and reliable.
this is retarded (Score:2)
If a tree falls in the forest, is there a sound? (Score:3)
Is it too early to be disappointed in Slashdot again? Maybe someone will post a funny joke that actually gets some funny mod points? Ditto insightful, eh? Even an actually interesting or informative comment? Not holding my breath. Short summary: No such luck yet (even including keyword searches).
Ever hear the old philosophy joke: "If a tree falls in the forest and there is no one to hear it, does it still make a sound?"
The equivalent question for today's feeble article is: "If a corporation's AI botches your diagnosis and there is no one to sue, does your death still matter?"
You may safely anticipate that the EULA will protect the AI from liability much more than it protects the patient from mistakes or software glitches, no matter how egregious and flagrant. Actually, the hierarchy of protection will probably go something like (1) Corporation that created the AI, (2) Corporation that is licensing the AI, (3) The hospital corporation, (4) The doctors who use the AI, and so on. They may remember to include the patient somewhere in there, or maybe not.
Compare to Dr. Mayo's motto: "The best interest of the patient is the only interest to be considered..." The current incorporated Mayo Clinic still mentions patients on its website, but I couldn't find such a strong form.
Not sure what the trigger was, but I recently realized that individuals don't count now. It's only the biggest corporations and political parties that matter. I had still been clinging to some delusions from my "respect for the individual" days, but now the individual is just a cog, and the only question is which cog can do the job most cheaply before being discarded. Trigger might have been the book Hitlerland, which has NO relation to #PresidentTweety, since it was published some years ago. Not even sure if I want to recommend it, though it's still bothering me...
Stupid (Score:5, Insightful)
It's a stupid question that illustrates a misunderstanding about what diagnosis is.
If a fortune teller fails to predict you're going to get hit by a bus tomorrow, who's to blame: her, her crystal ball, or its manufacturer?
Physicians misdiagnose patients all the time because diagnosis depends on a variety of imperfect information and very often cannot be done accurately. That is nobody's fault.
Physicians also misdiagnose patients all the time because they aren't very good at keeping up with new developments in medicine or are otherwise negligent. That's their fault.
An AI could be wrong for the first reason. If so, nobody is at fault. If the AI is wrong because of a manufacturing defect, the manufacturer is at fault, or the supervising physician is, if they insist on being in that position.
Re: (Score:2)
IMO the entity you get the diagnosis from is to blame, most likely the doctor/hospital. You sue them; they have insurance and are covered, and they go after the hospital, the diagnostic AI's manufacturer, or who/whatever else is in the pipeline. Ultimately it doesn't matter, though, as you can sue any of these yourself.
Re: (Score:2)
Physicians also misdiagnose patients all the time because they aren't very good at keeping up with new developments in medicine or are otherwise negligent. That's their fault.
They also misdiagnose patients because there are some extremely rare disorders that look a lot like common ones. And instead of remembering to eliminate that rare one with further testing, they apply Occam's Razor.
This is probably where the biggest benefit of AI comes in - a larger library of knowledge than a human can keep in their head. You still need a doctor to weigh the answers at this point, but a computer can't "forget" about things that are unlikely.
Who's to blame? (Score:3)
The doctor. All of these systems are marketed and sold with the proviso that these provide only advice for the physician. This is to make sure that liability is clearly allocated. And notice that it's not the software company accepting this liability.
Those who approved the AI (Score:2)
Whoever thought the AI was accurate enough is to blame, that is, the person who approved its use for that patient and then signed off on the diagnosis. I mean, people are verifying this stuff right? It's not like the AI has any authority itself.
Movie about this? (Score:2)
Wasn't there a scene in a movie about this, where poorly-behaved robots were tortured with hot irons on the feet and similar? I mean, if the AI messes up, then I can only assume the AI is to blame; don't we just slow down its power cycles or something like that?
Liability Insurance? (Score:2)
Blame game. (Score:2, Insightful)
If we could all just stop looking for the whipping-boy every time something goes wrong, that would be great.
Re: (Score:2)
This sounds better coming from Sean Connery.
https://www.youtube.com/watch?... [youtube.com]
Medical License ? (Score:2)
Who has the medical license, the AI or the Dr. using it? You don't sue the gun manufacturer or the stethoscope company; they are just tools used by the doctor licensed to 'practice' medicine.
Re: (Score:2)
Who has the medical license, the AI or the Dr. using it? You don't sue the gun manufacturer or the stethoscope company; they are just tools used by the doctor licensed to 'practice' medicine.
I know you Yanks love your guns, but how the fuck does a doctor use one to practise medicine?
Re: (Score:2)
That sounds wrong. Do you have example cases?
It's the doctor's stethoscope, and if it isn't up to snuff that's the doctor's problem. There's no reason why the doctor or the malpractice insurance company couldn't sue the stethoscope manufacturer for making bad stethoscopes.
Safety net (Score:5, Insightful)
If only there were some organization that provided a safety net in case something like this happens. In exchange for a small fee (whether through taxes or however else it's implemented), patients would be economically protected should a misdiagnosis cause problems.
Maybe the safety net could be called "insurance". Perhaps it would even be possible to run a for-profit organization under this concept.
If the AI turns out to be more accurate, then that would make insurance payouts less frequent.
If you want to minimize problems, you can have the AI provide the most likely issues, and a competent human doctor make sure that the diagnosis is sane.
But who can get you unlisted from the (Score:2)
But who can get you unlisted from the pre-existing conditions list?
Easy (Score:4, Funny)
The IT guy's malpractice insurance naturally.
Already answered (Score:2)
It's just a matter of insufficient training. (Score:2)
Oh, and a botched diagnosis should also go in the pool of training cases for future AIs.
Kill off the AI. (Score:2)
First, remove the AI and have an actual human do it. Human intuition will pick up things that even IBM's betrayal of humanity won't.
Who is to blame? (Score:2)
Non-existent dilemma. (Score:5, Interesting)
This type of question is a non-existent dilemma.
The concept of blame should disappear if you are diagnosed by a system that is orders of magnitude more precise than any human could ever be. AFAIK that is exactly the point of medical AIs like Watson. If maintained well, a system like Watson can "know" things a human or an entire army of human medical experts could never know, and can process and cross-reference cases, drug interactions, and genetic information at a speed, scale, and accuracy that will make the last 500 years of advancement in medicine look like a pre-school exercise in comparison. Mis- or non-diagnosis by human medical experts is high, and we wouldn't be happier about it if we had someone to blame. Doctors can only operate because there are catch-alls in place that keep a doctor who screwed up from going to jail. Given an 80% chance of dying in the next 5 years, or an 80% chance of being cured with a 20% chance that an operation done by a human still fails and kills me really fast, I probably would still take my chances. It's always a trade-off, and capable AIs driving for us or doing 95% of all medical diagnostic work will tip the odds so far in favour of humans that playing the blame game if something at some point does go wrong would be nothing short of stupid and/or silly.
The same goes for "Who's the AI-driven car going to kill? The young handicapped kid on life support, or the old grandma 5 years from the grave but with 4 grandchildren who love her?"
This type of question entirely misses the point. AI will be let onto the streets when it drives way, way better than a human ever could, always and everywhere. Deaths in traffic will plummet by orders of magnitude, and the occasional situation where an AI can't prevent someone from dying will be so rare that society will shrug it off. Experts even expect an extreme organ donor shortage once AI hits the streets. Fewer idiots killing themselves and others. ... On second thought, maybe we should keep a subset of roads for those who insist on racing around.
Re: (Score:2)
You're assuming that misdiagnosis will be a thing of the past, and I don't see that. People are complicated. Some stuff is simple and definite, and doctors don't misdiagnose (e.g., finding staph aureus in a culture). Other things are more complicated, and no system is going to get everything right. The AI is going to misdiagnose, and the survivor's relatives will argue that the AI should have prescribed a particular test (whether or not its chance of eliminating a misdiagnosis is significant enough for
Not Liable (Score:2)
The blame issue .... (Score:2)
Re: Zuck! (Score:2)
Re: (Score:3)
Christ, you watch porn in 360p? Is this 1994?
Re: (Score:2)
Christ, you watch porn in 360p? Is this 1994?
When you're cranking one off in the company toilets watching Adult Entertainment on your phone, you really don't need high definition. Headphones are a good idea though. So I've been told.
Re: (Score:2)
Wrong forum. Go here: https://www.reddit.com/r/tipof... [reddit.com]
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Exactly. It also depends a bit on what you mean by 'AI'. If you mean 'correlation engine' (which seems to be what the media means most of the time it says AI), then whoever decided that it's safe to use it without checking that there's any causal relationship and deployed it without human oversight. If you mean 'expert system', then it's just another tool and depends heavily on the input from the doctor or nurse to work towards a diagnosis. If it goes wrong, either the tool was faulty or the operator was incompetent.
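For contrast with a learned "correlation engine," here is a minimal expert-system fragment in the classic rule-based sense (toy rules only, not medical advice):

    # Minimal forward-chaining expert-system fragment (toy rules only).
    RULES = [
        (lambda f: f["fever"] and f["stiff_neck"], "consider meningitis workup"),
        (lambda f: f["fever"] and not f["stiff_neck"], "likely viral; re-check in 48h"),
    ]

    def advise(findings):
        return [advice for condition, advice in RULES if condition(findings)]

    print(advise({"fever": True, "stiff_neck": False}))  # ['likely viral; re-check in 48h']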
My theory is that no one should be allowed to use the term "AI" until an actual AI can convincingly argue for it.
Re: (Score:2)
X is pretty damn ambiguous. I've had two cornea transplants. In each case, the surgeon wrote "yes" on my forehead over the eye to be operated on. And that was after three nurses and the anesthesiologist asked me which eye and verified that my answer matched the chart.
Re: (Score:2)
I had laser surgery on my cornea when I was suffering from Anterior Basement Membrane Dystrophy, which seems to mean something like cornea delamination. Anyway, my right eye was taped shut, and I was asked to confirm that I needed the surgery for the left.
Re: (Score:2)
Interesting story.
She had to use this one doctor but insurance still didn't pay for it.
Sounds like she could have just insisted on a better doctor if she was paying for it herself. Probably could have negotiated a cash rate comparable to the insurance discount.
These days, there are entire surgical centers that just take the "out of pocket" amount. You end up paying about the same as you would otherwise.
Re: (Score:2)
> Why does someone need to be blamed?
Some errors are actually gross incompetence and the offending party needs to lose his license.
"Perfection" is not required. Some degree of competence is. Any doctor should be better than a random layman. Any new treatment should be superior to alternatives. That goes for the schmuck pretending to be a physician.