AI Can Predict When Patients Will Die From Heart Failure 'With 80% Accuracy' (ibtimes.co.uk) 153
New submitter drunkdrone quotes a report from International Business Times: Scientists say they have developed an artificial intelligence (AI) program that can predict, with an 80% accuracy rate, when patients with a serious heart disorder will die. Researchers from the MRC London Institute of Medical Sciences (LMS) believe the software will allow doctors to better treat patients with pulmonary hypertension by determining how aggressive their treatment needs to be. The researchers' program assessed the outlook of 250 patients based on blood test results and MRI scans of their hearts. It then used the data to create a virtual 3D heart for each patient which, combined with the health records of "hundreds" of previous patients, allowed it to learn which characteristics indicated fatal heart failure within five years. The LMS scientists claim that the software was able to accurately predict patients who would still be alive after a year around 80% of the time. The computer was able to analyze patients "in seconds," promising to dramatically reduce the time it takes doctors to identify the most at-risk individuals and ensure they "give the right treatment to the right patients, at the right time." Dr Declan O'Regan, one of the lead researchers from LMS, said: "This is the first time computers have interpreted heart scans to accurately predict how long patients will live. It could transform the way doctors treat heart patients." The researchers now hope to field-test the technology in hospitals in London in order to verify the data obtained from their trials, which have been published in the medical journal Radiology.
Re:Can it beat the doctors (Score:5, Funny)
Re: (Score:2)
...but you need the autonomous auto-loading shotgun mod...
Oddly enough, guns are one of the only things covered in the new US health plan.
Re: (Score:1)
That is the key. Is 80% a good number? Are doctors better? Is the 20% random?
Is it useful as a pre-screen for doctors?
There is not a lot of useful information here.
Re: (Score:1)
It also depends on how it fails vs people.
If a doctor is 90% correct, but this only gives false negatives, this + doctor could be used to save lives by using an OR-style process in determining more aggressive treatment.
If it's 80% accurate when never giving an uncertain answer, but very accurate when allowed a certainty interval and passing on making a determination in 20% of the cases, it could at the very least act as a check against doctors who missed something.
Additionally, if only false positiv
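The OR-style combination described above can be sketched with toy numbers (all flags invented for illustration, not data from the study): if the model's errors are only false negatives, OR-ing it with the doctor can only add catches, never remove them.

```python
# Toy sketch of the OR-style process described above (all numbers
# hypothetical): flag a patient for aggressive treatment if EITHER the
# doctor OR the model flags them as high-risk.

def sensitivity(truth, flags):
    """Fraction of truly high-risk patients that were flagged."""
    caught = sum(1 for t, f in zip(truth, flags) if t == 1 and f == 1)
    return caught / sum(truth)

truth  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # 1 = truly high-risk
doctor = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]  # misses patient 3, one false alarm
model  = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]  # misses patient 2, no false alarms

combined = [d or m for d, m in zip(doctor, model)]

print(sensitivity(truth, doctor))    # 0.75
print(sensitivity(truth, model))     # 0.75
print(sensitivity(truth, combined))  # 1.0 -- each covers the other's miss
```

Since each catches the case the other misses, the combination flags every high-risk patient in this toy cohort, at the cost of inheriting both sources of false alarms.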
Re: (Score:2)
Indeed. Health care is wasted on sick people, just like education is wasted on the ignorant and compassion is wasted on the rich.
Way ahead of you there pal, I waste no compassion on the rich.
Re: (Score:1)
I hope you get that 20% misdiagnosis and die due to lack of preventable treatment, asshole.
Re: (Score:2)
Yes, and we already have that. There are people who die every day waiting for a transplant organ. There's a limited amount available so they must be rationed and someone (or a panel of people most likely) has to determine where the limited supply will do the most good.
That's what other western countries do. The US goes by who has the best health insurance, like how big of a bill can we justify sending...
Re: (Score:3)
So you're OK with death panels and health care rationing.
Just to add to what alvinrod said. We also have them due to economic disparities. Those who choose to be poor (by being lazy and not working 36 hours a day, or stupidly choosing to be born to poor parents) get significantly worse health care. Thus we have self-selection of health care, which is rationing in other words.
Re: (Score:3, Informative)
The point, as in TFA, is to suggest which patients should receive more intensive treatment. If there is a higher chance a patient is going to die, you would be more willing to apply riskier treatments with more potential to cause harmful side effects. If the chances are low, you then are more careful with treatments, because sometimes the treatments can do more damage if there is too low of a risk from the actual condition.
Re: (Score:1)
Hell, I can do that for you today:
int wants_coffee()
{
return 1;
}
Re:Can it beat the doctors (Score:5, Informative)
Yet another monolithic wall of text (Score:2)
"developed an artificial intelligence(AI) program" (Score:4, Insightful)
No. It's just a fucking program. All of these "AI" claims are just programs. AI doesn't exist. Just stop with the AI already, the word has lost all meaning.
Re: (Score:2)
Calling it a machine learning program is probably accurate. These are self-taught predictors, but there is still a lot of human involvement to calibrate the learning.
My personal feeling is that AI should refer to something that learns independently or in a black-box manner. I think some neural networks can do this at a simple level, but I don't know of any complex independent learning done by any computer programs.
Re: (Score:2)
The only reason the term doesn't have any meaning is because everyone's definition of "intelligence" is different in the first place.
I disagree that that is the only reason. Another reason is that we don't agree on the term "artificial." In most cases I would say that all the intelligence, regardless of how that part is defined, came from the programmer and not the software.
We actually can't agree on the meaning of any part of the term!
Re: (Score:1)
What this usually means is that the 'fucking program' uses neural nets which emulate aspects of how the brain works, and that these neural nets were trained on the problem data at hand. The neural net thus gains the information and can apply this to predict some situation.
Anything in the 'artificial intelligence' department will be a 'fucking program'. AI has not lost its meaning; rather, it has been rather poorly defined to begin with. Some think it means sentient, others think it means sapient, others think it must be huma
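A minimal sketch of what "training a neural net on the problem data" means, using a single logistic unit and invented toy data (nothing here is from the study):

```python
# Minimal sketch of the idea described above: a single artificial neuron
# (logistic unit) trained by gradient descent on invented toy data. The
# program's behavior comes from learned weights, not hand-written rules.
import math

# Toy data: label is 1 when the two features are both "large".
data = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.2, 0.1), 0), ((0.7, 0.9), 1)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 1.0        # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Stochastic gradient descent on the logistic loss.
for _ in range(1000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print(predict((0.8, 0.9)) > 0.5)  # generalizes to a new "large" input
print(predict((0.1, 0.1)) > 0.5)  # rejects a new "small" input
```

The real systems use far larger networks and losses, but the shape is the same: the net is shown examples and its weights are nudged until its predictions fit them.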
Re:"developed an artificial intelligence(AI) progr (Score:5, Insightful)
This has always weirded me out as an argument. Intelligence is simply information processing in physical systems. What humans do when they make intelligent decisions is take in data from the environment and process it to make predictions. So when a doctor looks at the same data as the machine and makes a conclusion about treatment, he is making an intelligent decision, but when the machine does the same it's not intelligence it's 'just a program'? When a human being operates a motor vehicle and adapts his/her behavior to react to oncoming traffic he/she is using intelligence but when a machine does it it's 'just a program'? What?
This whole approach to me reeks of substance dualism; the human brain is a computer, a very advanced one at that, but it's just a computer. There is no 'soul' that somehow makes the human brain the only thing capable of intelligent operations. Right now brains are still able to cross-reference data better than computers, making humans as a whole more intelligent than computers, but we're already at a point where computer programs can take in data and adapt their behavior to meet a goal, whether it's driving a car or anything else, with better results than your standard human. Yet somehow the platform the program runs on determines which of these two actions is categorized as 'intelligent'. This is nonsensical.
Intelligence is a scale, not a binary thing. The confusion about AI these days is people read 'AI' and they immediately equate that to either 'human level intelligence' and/or 'consciousness', neither of which are required for a system to be intelligent. A dog is more intelligent than a rat, a monkey is more intelligent than a dog, a human is more intelligent than a monkey and so far overall humans are also more intelligent than computers, but it doesn't mean that computers don't have a level of intelligence already, even though you cannot (yet) have a discussion/debate with the computer.
Imagine a few decades into the future wherein these systems are able to recognize speech so that a physician is able to consult with it during an operation. Or when they get to the point that the computers themselves are able to perform autonomous surgeries on people and react to complications on the fly. This is the direction we're headed to and getting there does not require the computers to become self-aware.
You can call it 'just a program' all you want, but using that definition the years of training and practice going on in the surgeon's head as he's trying to figure out the best way to cut the tumor out without causing a hemorrhage is also 'just a program'. The platform on which the program is being run may be wetware or hardware but it does not affect the intelligence of the program.
Human brain is NOT a computer (Score:4, Interesting)
This whole approach to me reeks of substance dualism; the human brain is a computer, a very advanced one at that, but it's just a computer. There is no 'soul' that somehow makes the human brain the only thing capable of intelligent operations.
Start here...https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer [aeon.co]
Re: (Score:2)
The entire point of that essay is to point out that the functioning of (human) brains and electronic computers is fundamentally different. I never claimed otherwise. The core point of the essay is summed up by the phrase: "Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They real
Re:Human brain is NOT a computer (Score:4, Insightful)
I read through the paper, but couldn't find any description of what the brain does which couldn't be considered information processing. It may not be digital processing, and it may not resemble how computers process instructions and retrieve data, but even if the physical architecture is different the brain still seems to be processing information.
He uses an example of a dollar bill, and how a person cannot recall every detail of a dollar bill from memory if asked to draw one. And that is somehow proof that the brain does not store data about the dollar bill? The person was still able to draw some details about the dollar bill, such as a person in the center and the numbers in the corners, so that data was stored somewhere. The author also makes a silly distinction between "storing data" and "changing the brain", as if the way the brain is changed isn't how it stores the data.
But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions.
Sounds a lot like the brain stored the song or poem somewhere in a way it could be retrieved later, under certain conditions. Just because the brain stores information in a less precise way than a computer doesn't mean it isn't storing anything. The rest of the article continues to make similarly odd claims without backing them up. The author makes some very valid points about how many researchers rely too heavily on computer/human-brain metaphors, but then he makes a lot of wild statements himself without backing them up either.
Re: (Score:1)
I read through the paper... and it may not resemble how computers process instructions and retrieve data...
OK so you did understand the parts that were in context of this thread!
"The brain is just a computer" is hand-wavy and objectively incorrect. It is not necessary to prove it is some other thing to demonstrate that it isn't a computer, for all useful values of "computer" including common English usage.
"The brain is just a computer" is entirely subjective, and like other opinions when it is claimed as objective fact it is simply false. If you want it to be true, you have to leave it as an opinion, where it is
Re: (Score:3)
"The brain is just a computer" is hand-wavy and objectively incorrect.
No it is not. In the context of the rest of the paragraph it is obvious he doesn't mean a digital computer with an x86 instruction set. He was referring to the brain as a machine which accepts inputs, processes them based on its current physical state, and produces various outputs to initiate action throughout the body. He mentioned substance dualism at the beginning of the paragraph you are quoting in order to clear up any confusion about what he meant by calling the brain a comput
Re: (Score:2)
In the context of the rest of my comment, it is also obvious that I wasn't talking about an x86 instruction set either!
The trick is to understand what was said first, before you decide it was wrong. ;)
And for the record, we are using English here.
You don't even attempt to prove that it is objective. My point was that it is subjective, and that is why a claim that the answer is this or that are both wrong and hand-wavy. Anything subjective, if you claim that it is __[opinion]__ then that is hand-wavy. For me
Re: (Score:2)
Interestingly both spellings, computer and computor, mean either an electronic computational device, or a human who does computations.
I was taught in CS that for most algorithms it is useful to picture the pseudocode being implemented by humans passing around slips of paper and doing the computations by hand, because it helps to visualize scale, organization, communication, etc. So if humans were implementing the same predictive algorithm by hand, would it still be accused of being artificial, or intelligen
Re: (Score:2)
That article does a lot of assertion without much in the way of persuasion or offering an alternative model. It's got an interesting premise - that describing the brain in terms and metaphors that come from computing may obscure rather than illuminate its functioning - but the few examples given weren't persuasive.
Re: (Score:2)
Wouldn't it be ironic if we are ultimately incapable of understanding the complete function of the brain, and it required an AI to comprehend it all? :)
Yeah, and to top that off, the AI won't be able to comprehend itself so it will need to build an AI that can understand it... It will be turtles all the way down dear AC.
Re: (Score:2)
Honestly, if you told someone to just "write a program" to recognize whether an image has a cat in it or not, what do you think they would come up with? Probably 20,000 lines
Re: (Score:2)
The only thing the 1950s needed to obtain recent results in convolutional neural networks, was the planar process [wikipedia.org] of 1959 and a suitably accelerated coefficient of Moore's law. We can get there by applying the inverse Hackermann function.
Dividing 18 by 2 and shifting to a lower unit gives us a doubling time of nine weeks. Probably we're recognizing cats by 1967. Be
Re: (Score:2)
Firstly, even if one agrees that this is true, the human brain is still a computer, even if it's not entirely a computer. We're able to perform calculations and predictions based on learned experience, which amounts to computing. You can argue that on top of this the human brain has some 'extra' abilities that make it 'more than just a computer' but I'm not so sure of that.
The brain is in charge of not only
Re: (Score:2)
This whole approach to me reeks of substance dualism; the human brain is a computer, a very advanced one at that, but it's just a computer. There is no 'soul' that somehow makes the human brain the only thing capable of intelligent operations.
I'd suggest that it has less to do with dualism and more to do with general intelligence vs. domain intelligence. Most of these AIs have decent domain intelligence, but very poor or nonexistent general intelligence. It's hard for most people to think of an autonomous car as being "intelligent" when it has no means at its disposal for answering trivial questions such as, "Can an alligator run the hundred-meter hurdles?".
When people today dismiss AIs as being "mere programs", what they're really doing is dismi
Re: (Score:2)
I for one am not being dismissive of the usefulness of the algorithms at all when I say they're no more or less artificial, nor more or less intelligent, than other programs.
That's the problem: it is just regular software. Like calling software an "app," the label doesn't change anything. The problem with the AI label is that none of the words are accurate descriptions of how it differs from other software. The problem isn't that programs are "merely" programs, but that software programs are not tree
Re: (Score:2)
We're perfectly capable of making systems we can't really understand, or which develop uses we hadn't thought of earlier. Lots of software does things that the developers didn't have in mind when they wrote it. We can't understand the internal values in a complex artificial neural net; all
Re: (Score:2)
Nor can a baseball player understand all the complex quantum effects that are required for a baseball to bounce off the bat instead of just mixing together with it! That doesn't keep them from knowing how to hit the ball, though.
Waving your hands at complexity doesn't keep intent from being simple; an engineer intends to create a complex system, and didn't even intend it to be one that required calculation of all the details. It doesn't really mean they didn't understand what they were doing, it just means
Re: (Score:2)
Although I agree that "intelligence" is a scale rather than a switch, and I totally agree on the point about people's confusion. Really, people's fear isn't AI but automation. But I disagree that your examples should be on that scale. Maybe on a "program complexity" scale, but not intelligence. Maybe these two meet at one point or blur together a little.
But "intelligence" should be more than applying past algorithms to current parameters. Intelligence may start at "learning" new knowledge which is a bit more
Re: (Score:2)
10 input "What is your name";A$
20 print "Hello ";A$
30 goto 20
Artificial intelligence
Re: (Score:2)
So is a computer. Error-checking and adapting is something that computers are still not very good at, but the whole point of machine learning and adaptive algorithms is to achieve this same ability.
Humans are able to error-check their own decisions and calculations only so far as we've learned to do it. If I've performed some task 10 000 times I know what kind of errors can occur in the process an
Re:"developed an artificial intelligence(AI) progr (Score:4, Interesting)
Quoting the article, "as soon as AI successfully solves a problem, the problem is no longer a part of AI."
Re: (Score:2)
For some reason, in reporting, they prefer to use the term "AI" rather than "fucking program".
Unless you are planning sexual relationships, "AI" is actually the more accurate term.
Re: (Score:1)
But it looks like an opportunity for the return of Microsoft Bob in Windows 10.
"Hi! Looks like there is a 77.2% chance that you'll die from a heart attack within the next five minutes. Shall I call an ambulance for you?"
Re: (Score:2)
And all of those have been branches of the AI field. Since the field of AI was created, arguably during the 1956 Dartmouth workshop, it has included topics such as expert systems and statistical methods being used to make decisions based on both simple and complex input. Whether you want to accept it or not, even those Pacman ghosts' behavior can be accurately referred to as AI.
If you want a term which only refers to human level intelligence, perhaps you should use either Artificial Consciousness, Machine C
I ran the algorithm and here's what I got (Score:2)
1) Eventually.
2) Eventually.
3) Eventually.
4) Never.
5) Eventually.
Doctor, how long have I got? (Score:2)
You know the joke...
"10"
"10 months?"
"...9...8...7..."
Re: (Score:2)
A doctor examined his patient and asked, “Do you want the good news or the bad news?"
The patient replied, "Don't beat about the bush; give me the bad."
Doctor: "You've got six weeks to live"
Patient: "Ah shit. So what's the good news?"
Doctor: "I've won the lottery!"
Don't let the GOP have this any flagged will be bl (Score:1)
Don't let the GOP have this; anyone flagged will be blacklisted.
Just like the Death Clock (Score:2)
It can occasionally be off by a few seconds, what with free will and all.
not the headline (Score:3)
the software was able to accurately predict patients who would still be alive after a year around 80% of the time
Look at the doctors (Score:3)
Find good pathologists to report on every case. Link all past complex work with a pathologist's reports over years.
Who ordered what procedures? What did a specialist do? What did an average doctor on duty do?
Over time the really good professionals who have the skills to save lives will get listed and the average doctors who do things wrong will really stand out.
Work out who your best specialist are, support them and get them working with the next generation of experts.
No new AI needed. Just track all your doctors and have pathologist's report on every interesting case.
Why some doctors just can't get the same average results as a specialist on duty is a question that can be answered by looking at results.
If a hospital wants good results, stop funding average doctors and having average doctors try to deal with the same very sick people every shift.
Move the average doctors to other areas of medicine and ensure only the best staff get supported.
How to stop the flow of very average doctors into the wider medical profession? Stop accepting very average students into universities to study medicine. Make sure every doctor sits the same national exams and passes well.
This is so depressing (Score:5, Insightful)
I can remember when /. attracted some of the best-informed and most intelligent people you could find anywhere on the Internet. Now, all we have are members who inform us that we do not need computers and their ability to find correlations in huge data sets; all we need is the intuition of smart people.
Look back at the early proposed tests for artificial intelligence. When supervised deep learning systems can use the immense processing capabilities of modern computers, to not only match, but to exceed the capabilities of humans in a wide range of problem spaces, it is appropriate to describe the result as "artificial intelligence". We do not mean literally that we have an intelligent bunch of integrated circuits and harddrives. But, the overall system can produce results that we would until recently have considered only achievable by human experts. Indeed, our AIs, in many situations, exceed the capabilities of the best human minds.
I am used to the idea of the general public feeling threatened by the capabilities of modern technology. I just wish sites supposedly intended for intelligent, scientifically-informed individuals could be exempted from such lack of reason.
Re: (Score:2)
For some reason, Slashdot attracts a lot of armchair experts who read an article summary, immediately think up a trivial reason that a new bit of science or engineering can't possibly work, assume that the scientists/engineers never thought of this obvious flaw, and dismiss the whole project out of hand as a waste of time and money. But this has been the case for some years--I remember an article in 2010 about my then desk-neighbor's NASA-funded PhD work that the majority of posters just shat on without kn
Re: (Score:2)
I guess your point is that artificial intelligence can never exist. Even when, as now, the final trained system operates in ways we do not fully understand, we still know the algorithms that underlie the learning process. Only something mysterious and not properly understood in its entirety could be regarded as "intelligent". As a corollary, scientists who rely on computer simulations to predict climate change are not showing intelligence. Only climate deniers who use their intuition, unbiased by stupid com
Re: (Score:2)
Seriously, you come at us with anonymous elitism? Wow, this place has gone downhill! Even the academic snobs are fucking morons now.
So.... are the other 20% (Score:2)
Confusion matrix (Score:2)
self-fulfilling prophecy (Score:2)
Researchers from the MRC London Institute of Medical Sciences (LMS) believe the software will allow doctors to better treat patients with pulmonary hypertension by determining how aggressive their treatment needs to be. ... The LMS scientists claim that the software was able to accurately predict patients who would still be alive after a year around 80% of the time.
Hmm... Software predicts patient probably won't be alive after a year, doctors don't treat as aggressively, patient dies shortly thereafter. Sounds like researchers have discovered the self-fulfilling prophecy.
Of course, anyone could make those accurate predictions by simply killing off patients -- of course, not 100% accurate as that would be suspicious.
80% accuracy (Score:2)
does not have anywhere near the same meaning as
Re: (Score:2)
You don't even need to find out what their claim was to know that the headline is horse shit. The word "when" sets things up grammatically to offer a claim, but by itself it doesn't mean anything. Death has meaning, and a prediction having 80% accuracy has meaning, but "when" could mean anything: the minute, second, picosecond, day, year, decade, millennium, etc. It can't mean all of those at once, so at any given precision the claim is false.
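The underspecification of a bare "80% accuracy" claim can be shown with a confusion matrix (all counts invented for illustration, not from the study): under class imbalance, 80% accuracy is compatible with catching very few of the actual deaths.

```python
# Invented counts for illustration -- not from the study. With class
# imbalance, 80% accuracy can hide very poor detection of actual deaths.

def metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "accuracy":    (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # of actual deaths, fraction caught
        "specificity": tn / (tn + fp),  # of survivors, fraction cleared
    }

# 100 patients, 20 of whom die in the period. A model that catches only
# 5 of the 20 deaths can still score 80% accuracy overall.
m = metrics(tp=5, fp=5, fn=15, tn=75)
print(m["accuracy"])     # 0.8
print(m["sensitivity"])  # 0.25
print(m["specificity"])  # 0.9375
```

Which of these numbers the reported "80%" actually refers to is exactly what the headline leaves out.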
Predictive Modeling, not AI (Score:1)
This is predictive modeling as it has been done for at least the last 20 years. For example, SPSS Modeler (formerly Clementine) was released in 1994, and similar stuff was done with custom solutions long before that.
Re: (Score:2)
Indeed. The AI fans are trying to make AI happen by misclassifying a lot of things as AI which are not. Fact of the matter is, even with a wider area as "AI", there are not many AI things that work, and if you go closer and require actual intelligence (sometimes called "true" or "strong" AI), there is absolutely nothing, not even credible theories.
The only thing this shows is that the AI fans are not very intelligent.
This is not AI (Score:2)
First, this is not AI; it is statistical classification. No intelligence involved at all. Second, at 80% accuracy (and given what it actually predicts...), it sucks badly. You will probably have to search a while to find an experienced expert who does this badly on this classification task.
So while this may have some value as triage, that is about it. And even as triage, it may kill people because of the 20% where it is dead wrong. So maybe this is useful and maybe not. It will of course get misused as the
Re: (Score:2)
Second, at 80% accuracy (and given what it actually predicts...), it sucks badly. You will probably have to search a while to find an experienced expert who does this badly on this classification task.
What stands out to me is that there is no control. They mention in the limitations of the study that they're comparing the output of the program to predictions based on the somewhat arbitrary reporting categories for heart disease, when the disease is known to have multiple components in real patients. The categories are not a viable stand-in for human predictive performance; they are intentionally over-simplified. If the software was given data segmented by reporting category it would have lower performanc
Re: (Score:2)
Excellent point.
I see that you are about to die (Score:2)
Can I haz ur toyz?
Yeah, I've been told my odds are bad. (Score:4, Insightful)
Funny thing - she didn't bother telling me what I might accomplish if I start eating right, or exercising more, or even if I quit smoking - in fact, she seemed rather dubious that it would have any real effect at all (except the smoking part, for which she was happy to suggest several types of help if I wanted it). She didn't even tell me what my odds would be if I did start spending money on these drugs. I'm sure that insurance will pick up almost all of the cost - and I'm also sure that some pharmaceutical company somewhere would make a fair chunk of change off me for the rest of my unnatural life, sort of an annuity for big pharma. Problem is, I couldn't be sure I'd always be able to afford the drugs, and I'm told "once you start, you can't stop".
Yeah. I think I'd rather die living my life than clutching for more days.
Re: (Score:2)
Once the damage is done, it's often irreversible, short of something like a heart transplant. Diet and exercise can help you lose weight and strengthen the heart and lungs, but only if the system isn't already broken.
Re: (Score:2)
Once the damage is done, it's often irreversible, short of something like a heart transplant. Diet and exercise can help you lose weight and strengthen the heart and lungs, but only if the system isn't already broken.
That is complete nonsense, my advice to you is: don't give out medical advice.
Re: (Score:2)
After my heart attack, I was told to eat better, exercise more, and, here, take these drugs. My cardiologist told me that, whatever my cholesterol level was, it's now officially too high. Admittedly, taking the drugs is the easiest of those three, but I can't imagine being told just to take them without any lifestyle changes.
Re:Yeah, I've been told my odds are bad. (Score:5, Informative)
Sounds like you need to get a better doctor.
Yeah. Generic Lipitor is $9 at Walmart. Generic Crestor is $15. KMart has generic Zocor for $3. That's $0.10 to $0.50 a day for the three dominant statins. That's not making any pharmaceutical companies rich.
Re: (Score:2)
That's not making any pharmaceutical companies rich.
In 2012 it was making them $29 billion per year [mercola.com].
By 2014 it was $100 billion. [healthimpactnews.com]
If that's not making anyone rich, they need better accountants.
Re: (Score:2)
You're selectively quoting me and taking it out of the original context. Lipitor name brand is $372 for 30 days. The generic equivalent, Atorvastatin, is the $9 I stated. The other drugs are similarly priced.
The name brand drugs are making pharmaceutical companies rich. The generic equivalent is making the generic manufacturers money, but it's not making them rich like the original patent holder.
Re: (Score:2)
I'm middle-aged (mid fifties). Slightly overweight at 185 pounds (I stand 5'7"). I only drink alcohol occasionally. Incidentally, I can point to three generations of males in my bloodline that have all died at the age of 72 - some nonsense about "threescore years and twelve".
Re: (Score:2)
Yeah. I think I'd rather die living my life than clutching for more days.
There is a great Pearls Before Swine comic strip from one Sunday. I *CAN'T* FIND IT even though I've looked on and off for years. It was a color two-row cartoon back in like 2010; the strip had been running for like 3 years.
Rat's at the doctor's office. Not good news, too fat, out of shape, the standard bit. Rat says "So I should start eating
Re: (Score:2)
Exactly! If somebody wants to be healthy, they want to do it already. If they're in the doctor's office with those problems, most likely they already don't value their health. These are usually easy decisions for everybody; people already know if they value themselves more or less than fried foods!
Can he really? (Score:2)
I'd sure like to meet him. Good old Al.
Not that difficult. (Score:2)
I'm pretty sure a good cardiologist with a table of all aggregated research on heart conditions and life expectancy can do pretty much the same.
Health insurances have been doing this sort of thing for decades.
No big deal really.
Re: (Score:1)
I'm pretty sure a good cardiologist with a table of all aggregated research on heart conditions and life expectancy can do pretty much the same.
Health insurances have been doing this sort of thing for decades.
No big deal really.
I would argue your first sentence is indicative of just how big of a deal this could be.
Good cardiologists are in short supply, and the demands on their time and attention are significant.
Now we potentially have software that can do this particular task "pretty much the same."
If such software could be made widely available to hospitals it could vastly improve scheduling and triage potentially giving some a better quality of life than if treatment is delayed months to years.
Yes I do realize with our current h
Input parameters (Score:2)
Closer examination revealed that the main parameters for decision making are the answers to "Patient is insured" and "Patient is a donor".
Dilbert (Score:2)
http://dilbert.com/strip/2015-... [dilbert.com]
http://dilbert.com/strip/2015-... [dilbert.com]
Not something I'd want to know (Score:2)
This would probably make me obsess over my own demise for the remainder of my 80% accurate lifespan. Not something I'd want to know with such accuracy.
Outliers (Score:2)
Far more interesting than how good it is at spotting the average patient is how close it was on its wrong results. How many people did it give a life expectancy of 30 years who croaked the next week? BMI is a decent measurement when used on an average human, but is laughably inaccurate when used on anyone outside of normal. Using an AI to decide when and how to do treatment might seem like a good idea when you look at the averages, but could be pretty bad for individuals.
Take for example the BMI again. I c
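The BMI point can be made concrete with the formula itself (heights and weights invented for illustration):

```python
# BMI = weight_kg / height_m**2. Fine for population averages, but it
# cannot tell muscle from fat -- the figures below are invented examples.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

print(round(bmi(75, 1.75), 1))   # 24.5: "normal" range for an average build
print(round(bmi(100, 1.80), 1))  # 30.9: "obese" on paper, even for a
                                 # heavily muscled athlete
```

A single summary number applied outside the population it was calibrated on is exactly the failure mode being described.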
Still would rather have the human do it. (Score:2)
No thank you; trusting the job-destroying AI is not exactly the best of ideas.
Its Machine Learning, not AI (Score:1)