Medicine Stats Science

Predicting the Risk of Suicide By Analyzing the Text of Clinical Notes

First time accepted submitter J05H writes "Soldier and veteran suicide rates are increasing due to various factors. Critically, the rates have jumped in recent years. Now, Bayesian search experts are using anonymized Veterans Administration notes to predict suicide risk. A related effort by Chris Poulin is the Durkheim Project, which uses opt-in social media data for similar purposes."
  • With significant development, I think detection could become at least 50% successful. It would probably be more cost-effective, though, to just not lose the quarters when you flip them.
    • That would make some sense if the suicide rate was around 50%. Thankfully it's much lower.

      • by Fwipp ( 1473271 )

        Well, if the suicide rate's 10%, just say "no soldiers commit suicide," and bam, you're 90% successful.

        • Clever and true, but here is the difference between statistics and the engineers who need statistics.

          With that rule, all of your mistakes are in the wrong direction. For different applications you will err on the side of either false alarms or missed detections. In this case, a false alarm means you stage an unnecessary intervention (slightly costly and embarrassing, but at least they know you care); a missed detection literally means someone died.

          Basically, this is the application of risk/reward to probability.
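
          To make the risk/reward point concrete, here is a minimal sketch; every cost and rate below is a made-up assumption for illustration, not a figure from the study:

          ```python
          # Expected cost per person screened, for a detector with the given
          # sensitivity/specificity and a given prevalence of actual risk.
          # All numbers are illustrative assumptions.

          def expected_cost(sensitivity, specificity, base_rate,
                            cost_miss=1000.0, cost_false_alarm=1.0):
              p_miss = base_rate * (1 - sensitivity)               # at risk, not flagged
              p_false_alarm = (1 - base_rate) * (1 - specificity)  # not at risk, flagged
              return p_miss * cost_miss + p_false_alarm * cost_false_alarm

          base_rate = 0.001  # assume 0.1% of those screened are genuinely at risk

          # "Say no soldiers commit suicide": 99.9% accurate, but every error is a miss.
          print(expected_cost(sensitivity=0.0, specificity=1.0, base_rate=base_rate))  # 1.0

          # A mediocre detector that errs toward false alarms can still cost far less.
          print(expected_cost(sensitivity=0.9, specificity=0.7, base_rate=base_rate))  # ~0.4
          ```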

      • Um, this isn't about predicting the suicide rate, or how likely someone in the general population is to commit suicide.

        It's about how likely a veteran who writes a suicide note and gives it to someone else is to follow through and actually attempt suicide.

        That rate is probably closer to the flip of a coin than it is to 10%.

  • SOP on intake for decades now. Does the 'note' say otherwise?

  • False positives (Score:4, Insightful)

    by dala1 ( 1842368 ) on Wednesday January 29, 2014 @10:04PM (#46106465)

    According to the study, this is 67% effective. But once this is applied to the general population you have a problem, because the vast majority of people are not suicidal. In the US, about 122 in 100,000 people attempt suicide each year, and about one in ten attempts is fatal. Even with a test that is 99% accurate, you are going to end up with somewhere around seven million false positives every year if you screen everyone.
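
    The arithmetic, as a minimal sketch (assuming "99% accurate" means 99% sensitivity and 99% specificity; the headline count scales with whatever population and screening frequency you assume):

    ```python
    # Base-rate arithmetic for screening an entire population.
    # The population figure, and the reading of "99% accurate" as both
    # sensitivity and specificity, are assumptions rather than study numbers.

    population = 320_000_000       # assumed US population
    attempt_rate = 122 / 100_000   # annual attempt rate cited above
    sensitivity = 0.99             # flags 99% of actual attempters
    specificity = 0.99             # clears 99% of everyone else

    attempters = population * attempt_rate
    true_positives = attempters * sensitivity
    false_positives = (population - attempters) * (1 - specificity)
    ppv = true_positives / (true_positives + false_positives)

    print(f"false positives: {false_positives:,.0f}")                 # millions, even at 99%
    print(f"chance a flagged person is actually at risk: {ppv:.1%}")  # ~11%
    ```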

    • The question, though, is how harmful a false positive is. If a false positive means "lock the person up for their own safety," then obviously you don't want many false positives. But if a false positive just means "bring them back in a week earlier for an extra session," then this might not be too bad. Even the "false positives" --- people who don't get as far as suicide attempts --- might be folks who would benefit from a little extra help. If the seven million most-likely-to-be-psychologically-vulnerable people get a little extra attention, that might not be a bad outcome.

      • Re:False positives (Score:4, Insightful)

        by dala1 ( 1842368 ) on Thursday January 30, 2014 @12:49AM (#46107275)

        Seven million extra doctors' visits are hardly inconsequential, especially considering that only about 1 in 175 of those flagged would actually be suicidal. Consider the time and money spent, the extra doctors who would have to be trained and hired (I'm assuming psychiatrists, since a GP is hardly qualified to assess a potentially suicidal patient), and the days of work missed over false alarms. That's all before the psychological trauma and loss of trust that comes from your doctor telling you, out of the blue, that they think you might be mentally ill when you're not.

        Besides, that's not how it would work, because once the tool is out there and used to profile everyone, someone who is suicidal will eventually commit suicide before that 'extra session.' After that, it will be considered negligent not to 'do something' immediately once someone is flagged, and that something would likely be intrusive and expensive.

        Also, their accuracy rate is 67%, not 99%; I used 99% to demonstrate a best-case scenario. As it stands, they would flag around 83 million people while correctly flagging only around 200,000. Good luck with 99.75% false positives.
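
        For what it's worth, those figures check out if you read "67% effective" as both sensitivity and specificity applied to roughly the US adult population (both readings are my assumptions):

        ```python
        # Reproducing the 67% scenario, assuming "67% effective" means 67%
        # sensitivity and 67% specificity over ~250M adults (my assumptions).

        adults = 250_000_000
        attempt_rate = 122 / 100_000

        attempters = adults * attempt_rate                    # ~305,000
        true_positives = attempters * 0.67                    # ~204,000
        false_positives = (adults - attempters) * (1 - 0.67)  # ~82.4 million

        flagged = true_positives + false_positives
        print(f"flagged: {flagged:,.0f}")                                # ~83 million
        print(f"false positive share: {false_positives / flagged:.2%}")  # 99.75%
        ```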

        • Seven million extra doctors' visits are hardly inconsequential, especially considering that only about 1 in 175 would actually be suicidal.

          An interesting attitude. Compare this to Foxconn, which reduced the suicide rate among its workforce from 1 in 60,000 to 1 in 400,000 in three years.

          • Re: (Score:3, Informative)

            Seven million extra doctors' visits are hardly inconsequential, especially considering that only about 1 in 175 would actually be suicidal.

            An interesting attitude. Compare this to Foxconn, which reduced the suicide rate among its workforce from 1 in 60,000 to 1 in 400,000 in three years.

            All things considered, I think they did it by making it harder to commit suicide, and possibly also by improving labor conditions.

            The usual process is to place somebody thought to be suicidal on a suicide watch. That can actually be very intrusive, and a test like this is certainly less than ideal if you're applying it at scale: the reported accuracy is for this particular population, and it is already rather close to chance. In a wider population of a different makeup, its accuracy will be different, and probably lower.

            More impo

  • by SigmoidCurve ( 188795 ) on Wednesday January 29, 2014 @10:12PM (#46106509) Homepage Journal
    It's refreshing to see predictive data analysis used for positive ends, rather than simply to sell more ads. Here's a call to action for all you data scientists at Twitter, FB, and other SV startups who think you're changing the world when all you're doing is putting money in your advertisers' pockets. News flash: statistics can be used to benefit society for a change.
    • by Zynder ( 2773551 )
      They'll fuck that up like they do everything. At first it's finding suicide patterns, but then it's predicting future criminality à la Minority Report. Yeah, I'm pessimistic, but I have good reasons.
      • I read something about predicting criminality not long ago, but from a legal-ethics angle. Since the human genome has been mapped, and every criminal (in my country, at least) provides a DNA sample, and DNA sequencing is relatively cheap, there is now a wealth of data to mine. Behavioural geneticists are starting to look for 'criminal' behaviour markers that distinguish, and to some extent explain, criminal behaviour. Some creative defence lawyers are already using this knowledge to help their cases. The fli
    • It might be an attempt at using data mining for a good cause, but it is a case where the source data itself is questionable. If you have ever wondered what it is like to be in a loony bin, it is a bunch of babysitters who hope to become doctors, playing the telephone game with the doctor-of-the-day, who reports to the actual head doctor. The observation skills of these babysitters are like a 4-year-old's at a gun and ammo show. Then, as information is relayed twice over and statistical analytics lump you
  • by mark-t ( 151149 ) <markt.nerdflat@com> on Wednesday January 29, 2014 @10:47PM (#46106713) Journal
    Let's say that they diagnose somebody as "mentally unfit"... what happens then? Do they get locked up "for their own protection" or something?
  • Risk of suicide? You make it sound like suicide is a bad thing ... so negative ...
  • by wonkey_monkey ( 2592601 ) on Thursday January 30, 2014 @04:12AM (#46107883) Homepage

    Critically, the rates have jumped in recent years.

    The rates aren't the only thing that've ah screw it.

  • What they did was this: they identified 100 VA patients who had committed suicide, and then two "matched cohorts" of 70 patients each who hadn't (one cohort had been hospitalized for psych reasons, the other hadn't). Then they gathered up all the doctors' notes and examined the frequency of every word and phrase occurring in them. Certain words and phrases occurred more frequently in the notes of patients who had committed suicide.

    The single word which appeared to predict suicide most strongly was "agitation". Want to know which word was the second-strongest predictor of suicide? "Adequately". That's right, "adequately". Here are some of the other "predictor" words: "swab", "integrated", "Lipitor".

    I guess the finding that "agitation" appears more frequently in the suicide cohort is of mild interest. (As the authors themselves point out, it simply confirms a piece of information that has already been well documented -- namely, that agitated affect is a risk factor for suicide.) The rest of it is obviously statistical noise. I don't know much about genetic algorithms or neural-net learning, but it seems to me that these techniques are being used as an end-run around any reasonable test for statistical significance; the quick simulation at the end of this comment makes the point.

    One thing that the authors didn't comment on-- was the identity of the clinician a predictor for suicide? Maybe there were one or two clinicians who, for whatever reason, experienced a significantly higher suicide rate among their patients. (This would explain why "adequately" showed up so often-- every doctor has their own writing style with their own collection of pet phrases/words, and my guess is that certain doctors like to use the word "adequately" more often than others).
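
    The promised simulation, as a sketch: with a few thousand candidate words and cohorts of this size, plenty of words will separate the groups by chance alone. All of the data below is synthetic; the vocabulary size and per-word rates are assumptions, and nothing is drawn from the actual study.

    ```python
    # Multiple-comparisons sketch: every word has the SAME true frequency in
    # both cohorts, yet an uncorrected p < 0.05 test still flags ~5% of them
    # as "predictors". Cohort sizes mirror the study (100 cases, 2 x 70 controls).

    import random

    random.seed(0)
    n_words = 3000                   # assumed candidate vocabulary size
    n_cases, n_controls = 100, 140

    spurious = 0
    for _ in range(n_words):
        p = random.uniform(0.01, 0.3)  # the word's true rate, identical in both groups
        cases = sum(random.random() < p for _ in range(n_cases))
        controls = sum(random.random() < p for _ in range(n_controls))
        # naive two-proportion z-test, no multiple-comparisons correction
        p1, p2 = cases / n_cases, controls / n_controls
        pooled = (cases + controls) / (n_cases + n_controls)
        se = (pooled * (1 - pooled) * (1 / n_cases + 1 / n_controls)) ** 0.5
        if se > 0 and abs(p1 - p2) / se > 1.96:
            spurious += 1

    print(f"{spurious} of {n_words} words look 'predictive' by chance alone")
    # expect on the order of 5% of 3000, i.e. ~150 spurious "predictors"
    ```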

    • by Guppy ( 12314 )

      This would explain why "adequately" showed up so often-- every doctor has their own writing style with their own collection of pet phrases/words, and my guess is that certain doctors like to use the word "adequately" more often than others.

      I would have to know the context in which the word "adequately" was used, but one possibility is clinical butt-covering. Sometimes a physician gets a bad feeling about potential adverse outcomes, yet there's nothing directly actionable, and you end up with a note written in guarded terms in preparation for legal or disciplinary review -- including, perhaps, careful descriptions of things that were "adequately" evaluated or performed.

      Lipitor

      Patients on Lipit

      • I would have to know the context in which the word "adequately" was used, but one possibility is clinical butt-covering. Sometimes a physician gets a bad feeling about potential adverse outcomes, yet there's nothing directly actionable, and you end up with a note written in guarded terms in preparation for legal or disciplinary review -- including, perhaps, careful descriptions of things that were "adequately" evaluated or performed.

        Yeah, I actually had the same thought. It's a butt-covering sort of word, and it's not generally a word that leaps to mind when you're describing someone who is doing *well*. "Lipitor" -- sure, it correlates with cardiovascular disease, but it's also something half the world takes, so I doubt it's predictive of very much on its own (maybe it's a proxy for advanced age, which increases suicide risk). "Integrated" -- the authors make the same point as you do: it suggests someone with lots of problems and lots of
