
AI Can Predict When Patients Will Die From Heart Failure 'With 80% Accuracy' (ibtimes.co.uk)

New submitter drunkdrone quotes a report from International Business Times: Scientists say they have developed an artificial intelligence (AI) program that is capable of predicting when patients with a serious heart disorder will die, with an 80% accuracy rate. Researchers from the MRC London Institute of Medical Sciences (LMS) believe the software will allow doctors to better treat patients with pulmonary hypertension by determining how aggressive their treatment needs to be. The researchers' program assessed the outlook of 250 patients based on blood test results and MRI scans of their hearts. It then used the data to create a virtual 3D heart for each patient which, combined with the health records of "hundreds" of previous patients, allowed it to learn which characteristics indicated fatal heart failure within five years.

The LMS scientists claim that the software was able to accurately predict patients who would still be alive after a year around 80% of the time. The computer was able to analyze patients "in seconds," promising to dramatically reduce the time it takes doctors to identify the most at-risk individuals and ensure they "give the right treatment to the right patients, at the right time." Dr Declan O'Regan, one of the lead researchers from LMS, said: "This is the first time computers have interpreted heart scans to accurately predict how long patients will live. It could transform the way doctors treat heart patients." The researchers now hope to field-test the technology in hospitals in London in order to verify the data obtained from their trials, which have been published in the medical journal Radiology.
This discussion has been archived. No new comments can be posted.

  • A /. editor who does understand what paragraphs are for....
  • by Anonymous Coward on Tuesday January 17, 2017 @11:50PM (#53687191)

    No. It's just a fucking program. All of these "AI" claims are just programs. AI doesn't exist. Just stop with the AI already, the word has lost all meaning.

    • Yep! It sounds like a statistics program - not really AI.
    • by mark-t ( 151149 )
      The only reason the term doesn't have any meaning is because everyone's definition of "intelligence" is different in the first place. If you can define an unambiguous metric for intelligence, then it becomes pretty obvious what AI has to be: intelligence that is artificial, rather than natural.
      • Calling it a machine learning program is probably accurate. These are self-taught predictors, but there is still a lot of human involvement to calibrate the learning.
         
        My personal feeling is that AI should refer to something that learns independently or in a black-box manner. I think some neural networks can do this at a simple level, but I don't know of any complex independent learning done by any computer programs.
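
The distinction drawn here, self-taught predictors with human calibration, can be sketched with a toy logistic-regression learner. Everything below (the data, the learning rate, the epoch count) is a hypothetical illustration; those knobs are exactly the human calibration in question:

```python
import math

def train_logistic(xs, ys, lr=0.1, epochs=200):
    """Fit 1-D logistic regression by stochastic gradient descent.

    The model 'teaches itself' the weights from the data, but a human still
    chooses the features, learning rate and epoch count -- the calibration.
    """
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            w += lr * (y - p) * x                     # gradient step on weight
            b += lr * (y - p)                         # gradient step on bias
    return w, b

# Toy data: a single risk score against a binary outcome (made-up numbers).
scores   = [0.1, 0.4, 0.35, 0.8, 0.9, 0.7]
outcomes = [0,   0,   0,    1,   1,   1]
w, b = train_logistic(scores, outcomes)

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5
```

Change `lr` or `epochs` and the learned boundary moves: the "self-teaching" only works inside limits a human set.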

      • The only reason the term doesn't have any meaning is because everyone's definition of "intelligence" is different in the first place.

        I disagree that that is the only reason. Another reason is that we don't agree on the term "artificial." In most cases I would say that all the intelligence, regardless of how that part is defined, came from the programmer and not the software.

        We actually can't agree on the meaning of any part of the term!

        • by mark-t ( 151149 )
          The definition of artificial could be obvious... it is anything that is not natural, where "natural" means not made or caused by mankind. Artificial intelligence would therefore be intelligence that *has* been made or caused by mankind. However, since "intelligence" is so poorly defined, the expression still cannot mean anything, despite half of it having an unambiguous meaning.
    • by Anonymous Coward

      What this usually means is that the 'fucking program' uses neural nets, which emulate aspects of how the brain works, and trains these neural nets on the problem data at hand. The neural net thus encodes the information and can apply it to predict some situation.

      Anything in the 'artificial intelligence' department will be a 'fucking program'.

      AI has not lost its meaning; rather, it was poorly defined to begin with. Some think it means sentient, others think it means sapient, others think it must be human.

    • You mean like back in the day when it was called the less sexy Multivariate Analysis? [wikipedia.org] Ahhh, 1958 had such unimaginative people.
    • by Kiuas ( 1084567 ) on Wednesday January 18, 2017 @04:41AM (#53687957)

      All of these "AI" claims are just programs. AI doesn't exist.

      This has always weirded me out as an argument. Intelligence is simply information processing in physical systems. What humans do when they make intelligent decisions is take in data from the environment and process it to make predictions. So when a doctor looks at the same data as the machine and makes a conclusion about treatment, he is making an intelligent decision, but when the machine does the same it's not intelligence, it's 'just a program'? When a human being operates a motor vehicle and adapts his/her behavior to react to oncoming traffic, he/she is using intelligence, but when a machine does it, it's 'just a program'? What?

      This whole approach to me reeks of substance dualism; the human brain is a computer, a very advanced one at that, but it's just a computer. There is no 'soul' that somehow makes the human brain the only thing that's capable of intelligent operations. Right now brains are still able to cross-reference data better than computers, making humans as a whole more intelligent than computers, but we're already at a point at which computer programs are able to take in data and adapt their behavior to meet a goal, whether it's driving a car or anything else, with better results than your standard human. Yet somehow the platform that the program is being run on determines which of these two actions is categorized as 'intelligent'. This is nonsensical.

      Intelligence is a scale, not a binary thing. The confusion about AI these days is people read 'AI' and they immediately equate that to either 'human level intelligence' and/or 'consciousness', neither of which are required for a system to be intelligent. A dog is more intelligent than a rat, a monkey is more intelligent than a dog, a human is more intelligent than a monkey and so far overall humans are also more intelligent than computers, but it doesn't mean that computers don't have a level of intelligence already, even though you cannot (yet) have a discussion/debate with the computer.

      Imagine a few decades into the future wherein these systems are able to recognize speech so that a physician is able to consult with one during an operation. Or when the computers themselves are able to perform autonomous surgeries on people and react to complications on the fly. This is the direction we're headed in, and getting there does not require the computers to become self-aware.

      You can call it 'just a program' all you want, but using that definition the years of training and practice going on in the surgeon's head as he's trying to figure out the best way to cut the tumor out without causing a hemorrhage is also 'just a program'. The platform on which the program is being run may be wetware or hardware but it does not affect the intelligence of the program.

      • by bayankaran ( 446245 ) on Wednesday January 18, 2017 @09:57AM (#53688735)

        This whole approach to me reeks of substance dualism; the human brain is a computer, a very advanced one at that, but it's just a computer. There is no 'soul' that somehow makes the human brain the only thing that's capable of intelligent operations.

        Start here: https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer [aeon.co]

        • by Kiuas ( 1084567 )

          Start here: https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

          The entire point of that essay is to point out that the functioning of (human) brains and electronic computers is fundamentally different. I never claimed otherwise. The core point of the essay is summed up by the phrase: "Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They real

        • by ranton ( 36917 ) on Wednesday January 18, 2017 @12:13PM (#53689455)

          I read through the paper, but couldn't find any description of what the brain does which couldn't be considered information processing. It may not be digital processing, and it may not resemble how computers process instructions and retrieve data, but even if the physical architecture is different the brain still seems to be processing information.

          He uses an example of a dollar bill, and how a person cannot recall every detail of a dollar bill from memory if asked to draw one. And that is somehow proof that the brain does not store data about the dollar bill? The person was still able to draw some details about the dollar bill, such as a person in the center and the numbers in the corners, so that data was stored somewhere. The author also makes a silly distinction between "storing data" and "changing the brain", as if the way the brain is changed isn't how it stores the data.

          But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions.

          Sounds a lot like the brain stored the song or poem somewhere in a way it could be retrieved later, under certain conditions. Just because the brain stores information in a less precise way than a computer doesn't mean it isn't storing anything. The rest of the article continues to make similarly odd claims without backing them up. The author makes some very valid arguments about how many researchers rely too heavily on computer / human brain metaphors, but then he makes a lot of wild statements himself without backing them up either.

          • I read through the paper... and it may not resemble how computers process instructions and retrieve data...

            OK so you did understand the parts that were in context of this thread!

            "The brain is just a computer" is hand-wavy and objectively incorrect. It is not necessary to prove it is some other thing to demonstrate that it isn't a computer, for all useful values of "computer" including common English usage.

            "The brain is just a computer" is entirely subjective, and like other opinions when it is claimed as objective fact it is simply false. If you want it to be true, you have to leave it as an opinion, where it is

            • by ranton ( 36917 )

              "The brain is just a computer" is hand-wavy and objectively incorrect.

              No it is not. In the context of the rest of the paragraph it is obvious he doesn't mean a digital computer with an x86 instruction set. He was referring to the brain as a machine which accepts inputs, processes the inputs based on its current physical state, and produces various outputs to initiate action throughout the body. He mentioned Substance Dualism at the beginning of the paragraph in which the line you are quoting is found, in order to clear up any confusion about what he meant by calling the brain a computer.

              • In the context of the rest of my comment, it is also obvious that I wasn't talking about an x86 instruction set either!

                The trick is to understand what was said first, before you decide it was wrong. ;)

                And for the record, we are using English here.

                You don't even attempt to prove that it is objective. My point was that it is subjective, and that is why claims that the answer is this or that are both wrong and hand-wavy. Anything subjective, if you claim that it is __[opinion]__, then that is hand-wavy. For me

            • Bear in mind that, prior to 1940, "the computor" was, in fact, a person!
              • Interestingly both spellings, computer and computor, mean either an electronic computational device, or a human who does computations.

                I was taught in CS that for most algorithms it is useful to picture the pseudocode being implemented by humans passing around slips of paper and doing the computations by hand, because it helps to visualize scale, organization, communication, etc. So if humans were implementing the same predictive algorithm by hand, would it still be accused of being artificial, or intelligent?

        • That article does a lot of assertion without much in the way of persuasion or offering an alternative model. It's got an interesting premise - that describing the brain in terms and metaphors that come from computing may obscure rather than illuminate its functioning - but the few examples given weren't persuasive.

      • by WDot ( 1286728 )
        In addition, it's such an ignorant devaluation of AI's incredible achievements. The field of AI, a term everyone except Slashdot naysayers has agreed upon, has performed incredible feats of image and language analysis (among other things) that the average software engineer stringing together API calls could never have come up with.

        Honestly, if you told someone to just "write a program" to recognize whether an image has a cat in it or not, what do you think they would come up with? Probably 20,000 lines
        • by epine ( 68316 )

          The only thing the 1950s needed to obtain recent results in convolutional neural networks was the planar process [wikipedia.org] of 1959 and a suitably accelerated coefficient of Moore's law. We can get there by applying the inverse Hackermann function.

          When planning a project, increase the amount of time that you estimate it will take by doubling the number and going up to the next time unit.

          Dividing 18 by 2 and shifting to a lower unit gives us a doubling time of nine weeks. Probably we're recognizing cats by 1967. Be

      • The human brain is not a computer because computing is a very small part of thinking and intelligence.
        • by Kiuas ( 1084567 )

          The human brain is not a computer because computing is a very small part of thinking and intelligence.

          Firstly, even if one agrees that this is true, the human brain is still a computer, even if it's not entirely a computer. We're able to perform calculations and predictions based on learned experience, which amounts to computing. You can argue that on top of this the human brain has some 'extra' abilities that make it 'more than just a computer' but I'm not so sure of that.

          The brain is in charge of not only

      This whole approach to me reeks of substance dualism; the human brain is a computer, a very advanced one at that, but it's just a computer. There is no 'soul' that somehow makes the human brain the only thing that's capable of intelligent operations.

        I'd suggest that it has less to do with dualism and more to do with general intelligence vs. domain intelligence. Most of these AIs have decent domain intelligence, but very poor or nonexistent general intelligence. It's hard for most people to think of an autonomous car as being "intelligent" when it has no means at its disposal for answering trivial questions such as, "Can an alligator run the hundred-meter hurdle?".

        When people today dismiss AIs as being "mere programs", what they're really doing is dismi

        • I for one am not being dismissive of the usefulness of the algorithms at all when I say they're no more or less artificial, nor more or less intelligent, than other programs.

          That's the problem: it is just regular software. Calling software an "app" doesn't change anything; it is just a label. The problem with the AI label is that none of the words are accurate descriptions of how it differs from other software. The problem isn't that programs are "merely" programs, but that software programs are not tree

          • But still, unless it is buggy all the intelligence in the system was engineered by the programmers; a "self-learning" algorithm only learns what it was engineered to learn, and what it learned accidentally due to bugs

            We're perfectly capable of making systems we can't really understand, or which develop uses we hadn't thought of earlier. Lots of software does things that the developers didn't have in mind when they wrote it. We can't understand the internal values in a complex artificial neural net; all

            • Nor can a baseball player understand all the complex quantum effects that are required for a baseball to bounce off the bat instead of just mixing together with it! That doesn't keep them from knowing how to hit the ball, though.

              Waving your hands at complexity doesn't keep intent from being simple; an engineer intends to create a complex system, and didn't even intend it to be one that required calculation of all the details. It doesn't really mean they didn't understand what they were doing, it just means

      • by orlanz ( 882574 )

        I agree that "intelligence" is a scale rather than a switch, and I totally agree on the confusion point about people. Really, people's fear isn't AI but automation. But I disagree that your examples should be on that scale. Maybe on a "program complexity" scale, but not intelligence. Maybe these two meet at one point or blur together a little.

        But "intelligence" should be more than applying past algorithms to current parameters. Intelligence may start at "learning" new knowledge which is a bit more

      • 5 dim a(32)

        10 input "What is your name";A$

        20 print "Hello ";a$

        30 goto 20

        Artificial intelligence

    • by Place a name here ( 4508093 ) on Wednesday January 18, 2017 @05:53AM (#53688077)
      This is called the AI effect [wikipedia.org].

      Quoting the article, "as soon as AI successfully solves a problem, the problem is no longer a part of AI."
    • by GuB-42 ( 2483988 )

      For some reason, in reporting, they prefer to use the term "AI" rather than "fucking program".
      Unless you are planning sexual relationships, "AI" is actually the more accurate term.

    • Looks like AI is the buzzword of 2017, like "Big Data" back in the day.
    • But it looks like an opportunity for the return of Microsoft Bob in Windows 10.

      "Hi! Looks like there is a 77.2% chance that you'll die from a heart attack within the next five minutes. Shall I call an ambulance for you?"

  • 1) Eventually.
    2) Eventually.
    3) Eventually.
    4) Never.
    5) Eventually.

  • You know the joke...

    "10"
    "10 months?"
    "...9...8...7..."

    • A doctor examined his patient and asked, “Do you want the good news or the bad news?"
      The patient replied, "Don't beat about the bush; give me the bad."
      Doctor: "You've got six weeks to live"
      Patient: "Ah shit. So what's the good news?"
      Doctor: "I've won the lottery!"

  • Don't let the GOP have this; anyone flagged will be blacklisted.

  • It can occasionally be off by a few seconds, what with free will and all.

  • by phantomfive ( 622387 ) on Wednesday January 18, 2017 @12:25AM (#53687337) Journal
    This quote is not the same as the headline, but it's what the computer actually did:

    the software was able to accurately predict patients who would still be alive after a year around 80% of the time

  • by AHuxley ( 892839 ) on Wednesday January 18, 2017 @12:26AM (#53687345) Journal
    Start a better database on doctors and all their medical procedures.
    Find good pathologists to report on every case. Link all past complex work with pathologists' reports over the years.
    Who ordered what procedures? What did a specialist do? What did an average doctor on duty do?
    Over time the really good professionals who have the skills to save lives will get listed, and the average doctors who do things wrong will really stand out.
    Work out who your best specialists are, support them and get them working with the next generation of experts.
    No new AI needed. Just track all your doctors and have pathologists report on every interesting case.
    Why some doctors just can't get the same average results as a specialist on duty is a question that can be answered by looking at results.
    If a hospital wants good results, stop funding average doctors and having average doctors try to deal with the same very sick people every shift.
    Move the average doctors to other areas of medicine and ensure only the best staff get supported.
    How to stop the flow of very average doctors into the wider medical profession? Stop accepting very average students into universities to study medicine. Make sure every doctor sits the same national exams and passes well.
    • by Mostly a lurker ( 634878 ) on Wednesday January 18, 2017 @04:33AM (#53687943)

      I can remember when /. attracted some of the best informed and intelligent people you could find anywhere on the Internet. Now all we have are members who inform us that we do not need computers and their ability to find correlations in huge data sets; all we need is the intuition of smart people.

      Look back at the early proposed tests for artificial intelligence. When supervised deep learning systems can use the immense processing capabilities of modern computers to not only match but exceed the capabilities of humans in a wide range of problem spaces, it is appropriate to describe the result as "artificial intelligence". We do not mean literally that we have an intelligent bunch of integrated circuits and hard drives. But the overall system can produce results that we would until recently have considered only achievable by human experts. Indeed, our AIs, in many situations, exceed the capabilities of the best human minds.

      I am used to the idea of the general public feeling threatened by the capabilities of modern technology. I just wish sites supposedly intended for intelligent, scientifically-informed individuals could be exempted from such lack of reason.

      • by imidan ( 559239 )

        For some reason, Slashdot attracts a lot of armchair experts who read an article summary, immediately think up a trivial reason that a new bit of science or engineering can't possibly work, assume that the scientists/engineers never thought of this obvious flaw, and dismiss the whole project out of hand as a waste of time and money. But this has been the case for some years--I remember an article in 2010 about my then desk-neighbor's NASA-funded PhD work that the majority of posters just shat on without kn

  • false positives or false negatives or what mix of those?
    • by AHuxley ( 892839 )
      A specialist, an expert who has years of experience. The kind of person expected to be on duty.
  • This is pretty meaningless unless the full confusion matrix (with false positive and false negative percentages) is provided.
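
To make that concrete, here is a hypothetical sketch of why a bare accuracy figure is ambiguous: two classifiers with identical 80% accuracy can have completely different false-positive/false-negative mixes, which matter very differently to a patient. All the cohort numbers below are invented for illustration:

```python
from collections import Counter

def confusion(preds, actuals):
    """Tally the confusion matrix for binary labels (1 = predicted/actual death)."""
    cells = {(1, 1): "TP", (0, 0): "TN", (1, 0): "FP", (0, 1): "FN"}
    c = Counter()
    for p, a in zip(preds, actuals):
        c[cells[(p, a)]] += 1
    return c

# Hypothetical 10-patient cohort; both models below are 80% accurate.
actual  = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
model_a = [1, 1, 1, 1, 1, 0, 0, 0, 1, 1]  # catches every death, two false alarms
model_b = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # no false alarms, but misses two deaths

print(confusion(model_a, actual))  # 2 FP, 0 FN
print(confusion(model_b, actual))  # 0 FP, 2 FN -- same accuracy, different stakes
```

Without the FP/FN split, "80% accurate" could describe either of these very different instruments.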
  • Researchers from the MRC London Institute of Medical Sciences (LMS) believe the software will allow doctors to better treat patients with pulmonary hypertension by determining how aggressive their treatment needs to be. ... The LMS scientists claim that the software was able to accurately predict patients who would still be alive after a year around 80% of the time.

    Hmm... Software predicts patient probably won't be alive after a year, doctors don't treat as aggressively, patient dies shortly thereafter. Sounds like researchers have discovered the self-fulfilling prophecy.

    Of course, anyone could make those accurate predictions by simply killing off patients -- though not with 100% accuracy, as that would be suspicious.

    • I'm sorry, but the statement

      The LMS scientists claim that the software was able to accurately predict patients who would still be alive after a year around 80% of the time

      does not have anywhere near the same meaning as

      ...capable of predicting when patients with a serious heart disorder will die with an 80% accuracy rate

      • You don't even need to find out what their claim was to know that the headline is horse shit. The word "when" sets things up grammatically to offer a claim, but by itself it doesn't mean anything. Death has meaning, and a prediction having 80% accuracy has meaning, so it combines into a false claim. "When" could mean anything: the minute, the second, the picosecond, the day, the year, the decade, the millennium, etc. But it can't mean all of those, so the claim is false at any given precision.
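
The gap between the two claims is easy to demonstrate: if roughly 80% of such patients survive their first year anyway, a "classifier" that always answers "alive" matches the reported figure while predicting nothing about when anyone dies. The cohort numbers here are hypothetical:

```python
# Hypothetical cohort: 80 of 100 patients survive the first year (1 = alive).
cohort = [1] * 80 + [0] * 20

# Trivial baseline: predict "alive" for everyone.
always_alive = [1] * len(cohort)

accuracy = sum(p == a for p, a in zip(always_alive, cohort)) / len(cohort)
print(accuracy)  # 0.8 -- matches the headline figure without predicting anything
```

This is why an accuracy claim means little without the base rate and the full error breakdown alongside it.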

  • by Anonymous Coward

    This is predictive modeling as it has been done for at least the last 20 years. For example, SPSS Modeler (formerly Clementine) was released in 1994, and similar stuff was done with custom solutions long before that.

    • by gweihir ( 88907 )

      Indeed. The AI fans are trying to make AI happen by misclassifying a lot of things as AI which are not. Fact of the matter is, even with a wider definition of "AI", there are not many AI things that work, and if you look closer and require actual intelligence (sometimes called "true" or "strong" AI), there is absolutely nothing, not even credible theories.

      The only thing this shows is that the AI fans are not very intelligent.

  • First, this is not AI, it is statistical classification. No intelligence involved at all. Second, at 80% accuracy (and the thing it actually predicts....), it sucks badly. You will probably have to search a while to find an experienced expert that does this badly on this classification task.

    So while this may have some value as triage, that is about it. And even as triage, it may kill people because of the 20% where it is dead wrong. So maybe this is useful and maybe not. It will of course get misused as the

    • Second, at 80% accuracy (and the thing it actually predicts....), it sucks badly. You will probably have to search a while to find an experienced expert that does this badly on this classification task.

      What stands out to me is that there is no control. They mention in the limitations of the study that they're comparing the output of the program to predictions based on the somewhat arbitrary reporting categories for heart disease, when the disease is known to have multiple components in real patients. The categories are not a viable stand-in for human predictive performance; they are intentionally over-simplified. If the software was given data segmented by reporting category it would have lower performance.

  • by mmell ( 832646 ) on Wednesday January 18, 2017 @04:45AM (#53687963)
    Doctor ran this sort of actuarial bit against me - told me convincingly and with great conviction that I have an 80% chance of suffering a severe cardiac event in the next decade. Says I need to start doing statins if I want to change that.

    Funny thing - she didn't bother telling me what I might accomplish if I start eating right, or exercising more, or even if I quit smoking - in fact, she seemed rather dubious that it would have any real effect at all (except the smoking part, for which she was happy to suggest several types of help if I wanted it). She didn't even tell me what my odds would be if I did start spending money on these drugs. I'm sure that insurance will pick up almost all of the cost - and I'm also sure that some pharmaceutical company somewhere would make a fair chunk of change off me for the rest of my unnatural life, sort of an annuity for big pharma. Problem is, I couldn't be sure I'd always be able to afford the drugs, and I'm told "once you start, you can't stop".

    Yeah. I think I'd rather die living my life than clutching for more days.

    • Once the damage is done, it's often irreversible, short of something like a heart transplant. Diet and exercise can help you lose weight and strengthen the heart and lungs, but only if the system isn't already broken.

      • Once the damage is done, it's often irreversible, short of something like a heart transplant. Diet and exercise can help you lose weight and strengthen the heart and lungs, but only if the system isn't already broken.

        That is complete nonsense, my advice to you is: don't give out medical advice.

      • After my heart attack, I was told to eat better, exercise more, and, here, take these drugs. My cardiologist told me that, whatever my cholesterol level was, it's now officially too high. Admittedly, taking the drugs is the easiest of those three, but I can't imagine being told just to take them without any lifestyle changes.

    • by cdrudge ( 68377 ) on Wednesday January 18, 2017 @09:02AM (#53688491) Homepage

      she didn't bother telling me what I might accomplish if I start eating right, or exercising more, or even if I quit smoking - in fact, she seemed rather dubious that it would have any real effect at all

      Sounds like you need to get a better doctor.

      I'm also sure that some pharmaceutical company somewhere would make a fair chunk of change off me for the rest of my unnatural life

      Yeah. Generic Lipitor is $9 at Walmart. Generic Crestor is $15. Kmart has generic Zocor for $3. That's $0.10-$0.50 a day for the three dominant statins. That's not making any pharmaceutical companies rich.

      • That's not making any pharmaceutical companies rich.

        In 2012 it was making them $29 billion per year [mercola.com].

        By 2014 it was $100 billion. [healthimpactnews.com]

        If that's not making anyone rich, they need better accountants.

        • by cdrudge ( 68377 )

          You're selectively quoting me and taking it out of the original context. Lipitor name brand is $372 for 30 days. The generic equivalent, Atorvastatin, is the $9 I stated. The other drugs are similarly priced.

          The name brand drugs are making pharmaceutical companies rich. The generic equivalent is making the generic manufacturers money, but it's not making them rich like the original patent holder.

      • by mmell ( 832646 )
        Wanna multiply that by 365 days per year? Multiply that by a decade (or maybe two)? Multiply that by tens of thousands of men and women just like me?

        I'm middle-aged (mid fifties). Slightly overweight at 185 pounds (I stand 5'7"). I only drink alcohol occasionally. Incidentally, I can point to three generations of males in my bloodline that have all died at the age of 72 - some nonsense about "threescore years and twelve".

    • I'm sorry to hear about your health problems. Good luck -- I'd try the "eat somewhat healthier and exercise more" bit, but *I* wouldn't (I _don't_!) overdo it.

      Yeah. I think I'd rather die living my life than clutching for more days.

      There is a great Pearls Before Swine comic strip from one Sunday. I *CAN'T* FIND IT even though I've looked on and off for years. It was a color two-row cartoon back in like 2010; the strip had been running for like 3 years.

      Rat's at the doctor's office. Not good news, too fat, out of shape, the standard bit. Rat says "So I should start eating

      • Exactly! If somebody wants to be healthy, they want to do it already. If they're in the doctor's office with those problems, most likely they already don't value their health. These are usually easy decisions for everybody; people already know if they value themselves more or less than fried foods!

  • I'd sure like to meet him. Good old Al.

  • I'm pretty sure a good cardiologist with a table of all aggregated research on heart conditions and life expectancy can do pretty much the same.
    Health insurers have been doing this sort of thing for decades.

    No big deal really.

    • by Anonymous Coward

      I'm pretty sure a good cardiologist with a table of all aggregated research on heart conditions and life expectancy can do pretty much the same.
      Health insurers have been doing this sort of thing for decades.

      No big deal really.

      I would argue your first sentence is indicative of just how big of a deal this could be.

      Good cardiologists are in short supply, and the demands on their time and attention are significant.
      Now we potentially have software that can do this particular task "pretty much the same".

      If such software could be made widely available to hospitals, it could vastly improve scheduling and triage, potentially giving some patients a better quality of life than if treatment were delayed by months or years.

      Yes I do realize with our current h

  • Closer examination revealed that the main parameters for decision making are the answers to "Patient is insured" and "Patient is a donor".

  • This would probably make me obsess over my own demise for the remainder of my 80% accurate lifespan. Not something I'd want to know with such accuracy.

  • Far more interesting than how good it is at spotting the average patient is how close it was on its wrong results. How many people did it give a life expectancy of 30 years who croaked the next week? BMI is a decent measurement when used on an average human, but is laughably bad when used on anyone outside the norm. Using an AI to decide when and how to do treatment might seem like a good idea when you look at the averages, but could be pretty bad for individuals.

    Take for example the BMI again. I c

  • No thank you, but trusting the job-destroying AI is not exactly the best of ideas.

  • Calling it an AI is misleading. The paper itself reports it as machine learning: "Machine Learning of Three-Dimensional Right Ventricular Motion Enables Outcome Prediction in Pulmonary Hypertension".
