When They Warn of Rare Disorders, These Prenatal Tests Are Usually Wrong (nytimes.com)
The New York Times: After a year of fertility treatments, Yael Geller was thrilled when she found out she was pregnant in November 2020. Following a normal ultrasound, she was confident enough to tell her 3-year-old son his "brother or sister" was in her belly. But a few weeks later, as she was driving her son home from school, her doctor's office called. A prenatal blood test indicated her fetus might be missing part of a chromosome, which could lead to serious ailments and mental illness. Sitting on the couch that evening with her husband, she cried as she explained they might be facing a decision on terminating the pregnancy. He sat quietly with the news. "How is this happening to me?" Ms. Geller, 32, recalled thinking. The next day, doctors used a long, painful needle to retrieve a small piece of her placenta. It was tested and showed the initial result was wrong. She now has a 6-month-old, Emmanuel, who shows no signs of the condition he screened positive for. Ms. Geller had been misled by a wondrous promise that Silicon Valley has made to expectant mothers: that a few vials of their blood, drawn in the first trimester, can allow companies to detect serious developmental problems in the DNA of the fetus with remarkable accuracy.
In just over a decade, the tests have gone from laboratory experiments to an industry that serves more than a third of the pregnant women in America, luring major companies like Labcorp and Quest Diagnostics into the business, alongside many start-ups. The tests initially looked for Down syndrome and worked very well. But as manufacturers tried to outsell each other, they began offering additional screenings for increasingly rare conditions. The grave predictions made by those newer tests are usually wrong, an examination by The New York Times has found. That includes the screening that came back positive for Ms. Geller, which looks for Prader-Willi syndrome, a condition that offers little chance of living independently as an adult. Studies have found its positive results are incorrect more than 90 percent of the time. Nonetheless, on product brochures and test result sheets, companies describe the tests to pregnant women and their doctors as near certain. They advertise their findings as "reliable" and "highly accurate," offering "total confidence" and "peace of mind" for patients who want to know as much as possible.
Some of the companies offer tests without publishing any data on how well they perform, or point to numbers for their best screenings while leaving out weaker ones. Others base their claims on studies in which only one or two pregnancies actually had the condition in question. These aren't the first Silicon Valley firms to try to build a business around blood tests. Years before the first prenatal testing company opened, another start-up, Theranos, made claims that it could run more than a thousand tests on a tiny blood sample, before it collapsed amid allegations of fraud. In contrast with Theranos, the science behind these companies' ability to test blood for common disorders is not in question. Experts say it has revolutionized Down syndrome screening, significantly reducing the need for riskier tests. However, the same technology -- known as noninvasive prenatal testing, or NIPT -- performs much worse when it looks for less common conditions. Most are caused by small missing pieces of chromosomes called microdeletions. Others stem from missing or extra copies of entire chromosomes. They can have a wide range of symptoms, including intellectual disability, heart defects, a shortened life span or a high infant mortality rate.
Not unusual (Score:4, Informative)
Most screening tests for rare disorders have more false positives than actual positives. Lots more. People, including the ones administering and interpreting the tests, are shit at accounting for priors.
XKCD (Score:2)
https://xkcd.com/2545/ [xkcd.com]
https://xkcd.com/1132/ [xkcd.com]
Re: (Score:3)
Most screening tests for rare disorders have more false positives than actual positives. Lots more. People, including the ones administering and interpreting the tests, are shit at accounting for priors.
Yeah, but that's completely normal. If you have a 0.1% false positive rate and a one-in-a-million disease, then well over 99% of your positive tests will be false positives. Nothing wrong with that - you just have to have a second, independent, probably more expensive and more accurate test that confirms the diagnosis. You don't trust the administrators with that, you just make it part of the testing procedure.
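To make that concrete, here is a minimal sketch in Python (the sensitivity, false positive rate and prevalence values are made-up illustrations, not any particular test's published figures) of how the share of correct positives collapses as a condition gets rarer:

    # Positive predictive value (PPV): the chance a positive result is real.
    def ppv(sensitivity, false_positive_rate, prevalence):
        true_pos = sensitivity * prevalence
        false_pos = false_positive_rate * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Same test (99% sensitive, 0.1% false positive rate), three prevalences.
    for prevalence in (1/100, 1/10_000, 1/1_000_000):
        print(f"prevalence 1 in {round(1/prevalence):>9,}: "
              f"PPV = {ppv(0.99, 0.001, prevalence):.1%}")
    # prevalence 1 in       100: PPV = 90.9%
    # prevalence 1 in    10,000: PPV = 9.0%
    # prevalence 1 in 1,000,000: PPV = 0.1%

The test itself never changes; only the rarity of the condition does.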
Re: (Score:2)
Sure, somebody who knows what they're doing designs the procedures. Problem is, typically somebody who *doesn't* know what they're doing talks to the patient. Which is what this story is about.
Anybody over a certain age has probably had a female friend who got a positive mammogram and spent a few weeks worrying about it because their GP neglected to mention that a positive mammogram means you probably don't have cancer.
Re: (Score:2)
Sure, somebody who knows what they're doing designs the procedures. Problem is, typically somebody who *doesn't* know what they're doing talks to the patient. Which is what this story is about.
Anybody over a certain age has probably had a female friend who got a positive mammogram and spent a few weeks worrying about it because their GP neglected to mention that a positive mammogram means you probably don't have cancer.
You are totally right, but the story is wrong.
She hadn't been misled by the test producers. She had been poorly prepared by her doctor. Her doctor should have told her that "if you get a positive on this test it doesn't mean you have a problem, it means you need a better test". The "highly accurate" that the test producers should care about is primarily a lack of false negatives. I'm guessing this is a sick prop
Re: (Score:2)
Well, I doubt the test manufacturers are up front about the limits of their tests, either in the briefs they prepare for the physicians, or in their consumer ads, in countries where such a thing is legal.
That's a bit over simplified. The optimum balance between sensitivity and specificity is determined by the costs of either type of error, and can be complicated. In this case a positive test is likely to b
Re: Not unusual (Score:2)
Re: (Score:2)
Sure, it has to be in the paper. Depending on your jurisdiction, it might have to be on the sheet of small print they put in the package too. Also depending on your jurisdiction, the hot blonde who shows up at your office once a week to keep you up to date on the latest developments in Pharma Co's product line may be required to mention it at some point.
Do you know the PPV for any test you've ever received?
Re: (Score:2)
Sure, it has to be in the paper. Depending on your jurisdiction, it might have to be on the sheet of small print they put in the package too.
If doctors are using things without reading the information first, that's a wider issue. And it seems it's not small print. But you were certainly wrong on the idea that the information isn't provided: it is.
Re: (Score:2)
You don't know many physicians do you?
Are you not a native English speaker? Perhaps you don't know what the idiom "are not up front with" means. Not being up front doesn't mean you lie or omit the information. It means you don't go out of your way to convey it.
Re: (Score:2)
You don't know many physicians do you?
Au contraire (that's French, by the way), two of my friends are doctors, and I have three other friends that are healthcare professionals. I have also worked with healthcare providers professionally. So you are wrong.
Are you not a native English speaker?
Oh, but I am.
Perhaps you don't know what the idiom "are not up front with" means.
I do, but you don't seem to understand what this means in the context of pharmaceuticals. As the other poster noted (although the post seems to have vanished), the corroborating evidence is that the information is right there with the packaging. It's not hidden; it's right there. How m
Re: (Score:2)
Most screening tests for rare disorders have more false positives than actual positives.
There are two things I didn't find in an initial scan of the article: the rate of false negatives, and the availability of more reliable (but more invasive and/or expensive) tests to confirm. The summary does mention an instance where an invasive follow-up test showed the screening result to be a false positive. This could be considered a success story. If the test needs to have 90% false positives to ensure a very low rate of false negatives, and there is an accurate follow-up test, the in
Re: (Score:2)
IIRC (and the summary also implies), the prenatal blood tests are quite useful. You draw a bit of the mother's blood and sort through it to find chunks of fetal DNA. Naturally it's a lot safer than drawing a sample of the actual fetus, or even the amniotic fluid, which are usually the more accurate followup tests. A lot more pleasant too. You should see the needle they use for that.
We have quite a few very useful minimally invasive screening tests and we're going to get a *lot* more in the next few year
Re: (Score:2)
The explanation is, of course, in Bayes' Theorem.
Re: (Score:2)
Bayes' theorem is way too complicated for people who often have difficulty with addition. In medicine you bundle everything up into two numbers: positive and negative predictive value. And still it's too complicated, involving numbers and all.
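For reference, those two numbers are just Bayes' theorem rearranged (a quick sketch in standard notation; sens, spec and prev stand for the test's sensitivity, its specificity and the condition's prevalence):

    \mathrm{PPV} = P(\text{disease} \mid +) = \frac{\text{sens} \cdot \text{prev}}{\text{sens} \cdot \text{prev} + (1 - \text{spec})(1 - \text{prev})}

    \mathrm{NPV} = P(\text{no disease} \mid -) = \frac{\text{spec} \cdot (1 - \text{prev})}{\text{spec} \cdot (1 - \text{prev}) + (1 - \text{sens}) \cdot \text{prev}}

For a rare condition, prev is tiny, so the PPV numerator is tiny no matter how good the test is, while the NPV stays close to 1.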
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
And yours.
Re: (Score:2)
There's still time.
Re: (Score:2)
You first.
false positive rate is not the only factor (Score:2, Insightful)
Sheesh. Basic statistics of tests. I thought we'd all learned about that by now after, oh, HIV, Ebola, Zika, SARS, COVID-19, etc.
You get a result -- what's the percentage of time that result is correct if it's negative (that is, what's the false negative rate), and what's the percentage of time it's correct if it's positive (the false positive rate)? Lots and lots of tests are nearly 100% accurate for negative results, so the false negative rate is nearly zero, but not so good for positive results. While 100% acc
Re: (Score:2)
Exactly. That's why it's called a screening test. The problem comes in since too many in the medical profession either completely fail to explain that or don't actually understand it themselves.
That's how we get things like nurses terminated for popping positive on an opiate screening after eating poppy seed bagels. Indeed, the more expensive comprehensive test can discriminate poppy seed eating from taking drugs.
It's also how we end up with motorists locked up for a month for Krispy Kreme crumbs that happe
Re: (Score:2)
Re: (Score:2)
I've had Krispy Kreme doughnuts, I'm pretty sure they put crack in them.
That explains why they taste so bad.
Re: (Score:2)
Re: false positive rate is not the only factor (Score:1)
As it was, our doctors were amazing and took the time to have in-depth conversations with us about the data (or in this case, the lack of adequate data,) but if any of them pulled a number out of their rear like "99.8%," we would have left immediately and filed a c
Doctors can also be wrong and frequently are (Score:5, Interesting)
When my wife got the standard ultrasound to determine the sex of our second child, the local office called and told her we needed to see a specialist. They said they had never seen this before and didn't know what it was. So we went to the "specialist" in high-risk pregnancies in the Johns Hopkins network and he told us our son had a CCAM, he would likely die before being born, and do we want to schedule an abortion. He did not refer us to any other specialists or doctors, or give us any other information about the condition.
My wife took it upon herself to research what a CCAM was and what could be done about it. We found that there were 2 hospitals in the US that are considered experts in CCAM diagnosis and treatment. One in California, and one in Philadelphia - 3 hours away from us. We called and told the staff at Children's Hospital of Philadelphia what had happened. They scheduled us for an entire day of evaluations and meetings with their specialists in less than a week. When we saw the doctors at CHOP they evaluated my wife and the baby and told us he needed to be monitored by ultrasound twice a week and maybe delivered at CHOP. BUT they said, most of the cases they have turn out fine, with the baby being delivered normally and without issues. If there is a need for intervention during the pregnancy, they can do a procedure where the doctors do surgery on mom, take the baby out halfway, do surgery on the baby, and then put the baby back and close mom up and let the baby continue to grow in utero. Amazing stuff.
CHOP specializes in lots of different childhood issues including cancers. The nurses told us- if you have to be here for something, a CCAM is what you want to be here for, because most of the time it turns out well. Which is a far cry from what the JH "specialist" told us. He basically said the baby was as good as dead. This was not the last time doctors gave us bad information or advice. The problem is that so many doctors speak with such conviction and authority that many people just take them at their word and don't get a second opinion or research for themselves.
Bad tests suck for sure, but the moral of the story here is to get a second opinion (or more) any time you have a major medical issue.
Our son is now 11 years old and was delivered at our local hospital with a higher APGAR score than our other 2 boys. If we had not found the CCAM at the ultrasound we would never know he had any issue at all. He has not had any interventions before or since being born.
Re: (Score:2)
I wish we could get to a point where a doctor not suggesting a second opinion on significant procedures (especially where life and death are involved) is considered medical malpractice.
I work in insurance now (for a short period so far) and one thing I didn't realize health insurance companies were useful for is monitoring the health plans of their participants, and in many cases offering a second opinion. This is often portrayed in only a negative way, such as denying a specific procedure or drug, but in ca
Re: (Score:2)
One of the big difficulties is that doctors don't get feedback. They do a treatment, and then never see the patient again, so they make an assumption about what the result was.
Re: (Score:1)
This is why it is so important to get a second and third opinion on medical issues. I have heard and experienced many stories like this.
On four separate occasions spanning a decade our local hospital nearly killed my mother through malpractice. One time they were getting ready to needlessly amputate her leg. More than once they gave her IV medication that she was clearly marked as being allergic to.
Another hospital several counties away almost left me with a lifelong disability.
I know a woman who, after giving birth, h
Re: Doctors can also be wrong and frequently are (Score:1)
This story is very heartening. Best to all of you!
That's math and statistics (Score:4, Interesting)
Most people do not understand that even with a highly accurate test, if you test in high enough numbers and the likelihood of an actual positive is low enough, the false positive cases will dominate and render the test pretty much useless, or worse, cause more harm than good.
That's the reason to stop screening young women for breast cancer in the general population; you only test young women with a history of breast cancer in the family, because the number of false positive tests and biopsies was causing more harm, through stress, unnecessary procedures and even deaths, than the missed cases.
Re: (Score:2)
Or, here's an idea - use (and talk about) screening for what it's for: screening, and stop pretending it tests to see if you have X. It doesn't.
Screening does not test to see if you have X. It only tests to see if you definitely don't have X, and can thus skip taking a more accurate (and usually much riskier, more expensive, and/or invasive) test.
In all but the most unusual cases, a positive screening test still means you *almost* definitely don't have it, but the cheap and easy test couldn't rule it out.
not in question? (Score:1)
So it seems the science is in question then, at least when it comes to looking for "less common conditions".
Re: (Score:2)
No, it is simply your lack of mathematical ability.
A test that is wrong once in 10,000 times is usually considered incredibly accurate. But if the disease happens once in 50,000 births, then out of 100,000 tests you would most likely get 10 false positives, 2 true positives, no false negatives and 99,988 true negatives.
That is simple math, nothing more.
The real issue here is that most people can not do anything more complicated than multiplication.
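A quick check of that arithmetic, as a Python sketch using the same illustrative numbers:

    tests = 100_000
    error_rate = 1 / 10_000      # test wrong once in 10,000 on an unaffected pregnancy
    prevalence = 1 / 50_000      # disease frequency

    true_positives = tests * prevalence                        # 2
    false_positives = tests * (1 - prevalence) * error_rate    # ~10
    share_false = false_positives / (true_positives + false_positives)
    print(round(true_positives), round(false_positives), f"{share_false:.0%}")
    # -> 2 10 83%

So even a 99.99%-accurate test gives mostly false positives here, which is exactly the point.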
Re: (Score:1)
Is the way you speak to people face to face?
I'd argue it is the Theronos effect (Score:1)
Re: (Score:2)
Wow, total ignorance. The test was not vaporware; the tests work. The problem is not the results, but instead how idiots use the results. You get a positive test on something that is right 999 times out of a thousand, but for something that happens only once in a million times, simple math says you most likely got a false positive and need to do the test again. Or, best of all, a better test if possible.
As for Theranos, 3 deadlocks out of 11 charges sounds to me like that woman is going to jail. The US
Re: (Score:3)
Lizzy did get convicted of a few charges. Hopefully the judge is not sympathetic and goes for maximum jail time.
Usually wrong isn't the problem. (Score:3)
The problem is that the tests are marketed as reliable. From TFS/A:
Nonetheless, on product brochures and test result sheets, companies describe the tests to pregnant women and their doctors as near certain. They advertise their findings as "reliable" and "highly accurate," offering "total confidence" and "peace of mind" for patients who want to know as much as possible.
If these tests were marketed honestly, there would be no (or less of an) issue. Of course, if the box claimed 2% accuracy, validated across only 5 patients, no one would use them, and that would affect the companies' bottom lines. What's needed is a rule from the FDA requiring accurate statistics about the tests instead of allowing marketing blather in the doctor/patient literature.
Re: (Score:3)
The problem is that the tests are marketed as reliable.
The problem goes far deeper than we can discuss on a blog. The ten thousand foot view, which just scratches the surface, is that drug companies prop up the medical establishment, and they all lie through their asses in search of profits -- with nearly no consequences (even when people die from the obvious lies). There is also rampant, but somehow legal, bribery of doctors through the drug companies to tell people they need medications they really don't need. And Congress and the Federal agencies charged wit
Some basic stuff here (Score:3, Interesting)
If pre-screening is horribly inaccurate and stress is likely to cause health complications of its own, then you absolutely don't use pre-screening. You want to use the most accurate test you can that has an acceptably low risk of causing harm when performed (after factoring in stress from false positives).
In this case, doing a follow-up test rather than assuming a correct result was good, but why bother with the pre-screening if a lot of the time it doesn't avoid you having to do the direct test? Well, of course, in a private system, that's a great way to bilk people of money and thus inflate profits without improving outcomes. It also gives you double the number of people through the doors by doing lots of quick but pointless things, which creates the illusion of double the efficiency, whereas in fact you have half the efficiency.
Health organizations should be liable for psychological trauma/distress from inept testing protocols, to the point where it's more economical to develop protocols that actually work or to use those that do. If you're going to do private healthcare rather than a national system, then those organizations should face meaningful consequences for poor practices. It wouldn't be a bad idea to tax health organizations as a function of test accuracy * frequency of use; cleaning up the damage afterwards is a necessary strategy, but preventing stupidity in the first place is usually going to be better.
Re: (Score:2)
In this case, doing a follow-up test rather than assuming a correct result was good, but why bother with the pre-screening if a lot of the time it doesn't avoid you having to do the direct test?
Because most of the time it does avoid you having to do it.
Re: (Score:2)
"Some of the companies offer tests without publishing any data on how well they perform, or point to numbers for their best screenings while leaving out weaker ones. Others base their claims on studies in which only one or two pregnancies actually had the condition in question."
This doesn't sound like the testing companies themselves believe that the test is any good.
"The analysis showed that positive results on those tests are incorrect about 85 percent of the time."
An 85% false positive rate most definite
Re: (Score:3)
"The analysis showed that positive results on those tests are incorrect about 85 percent of the time." An 85% false positive rate most definitely falls in the Not Good category.
Disease X has a frequency of 1 per 1,000,000. Test Y screens for it, with a false positive rate of 0.1%. You test 10,000,000 people. You get about 10,000 positives. Only 10 are expected to be true positives. About 99.9% of your positive results are 'wrong' and get corrected by a more accurate, more invasive and risky test, but you successfully saved 9,990,000 people from needing to do the more invasive and risky test by ruling it out.
Re: (Score:2)
Are you Donald Trump's accountant? A false positive rate of 85% is not the same as 0.1%.
Re: (Score:2)
So fafalone's numbers were wrong. Let's fix them:
Disease X has a frequency of 1 per 1,000,000. Test Y screens for it, with a false positive rate of 85%. You test 10,000,000 people. You get 67 positives. Only 10 are expected to be true positives. 85% of your positive results are 'wrong' and changed with a more accurate, more invasive and risky test, but you successfully saved 9,999,933 people from needing to do the more invasive and risky test by ruling it out.
For a rare condition, a high false positive rate
Re: (Score:2)
It's a matter of how good the screening test is and how it is presented to the patient. A good screening test might have a 10% false positive rate and no false negatives. If it is quick, cheap, and no risk, it eliminates the need for the more expensive/difficult/risky test for 90% of patients. If it comes back positive, DON'T tell the patient they have the condition. Tell them the test was "inconclusive" or "just doesn't work for everyone, lucky you" and do the gold standard test.
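As a rough sketch of that trade-off in Python (the 10% false positive rate and zero false negatives come from the comment above; the 1-in-20,000 prevalence is a made-up stand-in for a rare condition):

    patients = 100_000
    prevalence = 1 / 20_000   # hypothetical rare condition: 5 affected in 100,000
    fpr = 0.10                # screen flags 10% of unaffected patients
    # assume the screen misses nobody (no false negatives)

    affected = patients * prevalence
    flagged = affected + (patients - affected) * fpr
    avoided = patients - flagged
    print(f"{avoided / patients:.0%} of patients avoid the gold-standard test; "
          f"{affected / flagged:.2%} of positives are real")
    # -> 90% of patients avoid the gold-standard test; 0.05% of positives are real

The screen earns its keep by ruling people out, not by telling anyone they have the condition.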
Re: (Score:1)
You have it backwards, IMHO.
In the USA, private health organizations can be sued but the USA Federal Government cannot.
This is one important factor & distinction in healthcare. Having lived under the UK national healthcare system, I saw patients who had no ability to complain about the healthcare personnel and facilities such as we have
Re: (Score:2)
Your opinion is, of course, your opinion. Firstly, I said meaningful consequences, so being sued is of no interest to me. The consequences are swallowed by insurance, not by the organization responsible, and tend to be negligible and easily classed as the cost of doing business.
Second, people in the UK do have the ability to complain. That's part of why the UK has half the number of medical errors per capita as the US.
Third, I said nothing about a national system being sued. I specifically discussed private
Had a fight with our doctor about rare conditions (Score:5, Interesting)
I will accept someone's opinion being different if we are off by 50%. Maybe you could argue 15 miscarriages are worth the cost of preventing one child with Down Syndrome but when you are off by a factor of 100 and you still think you are right maybe this man should not have been a doctor. In the end we ended up not getting a doctor till she was 7 months pregnant.
Re: (Score:2)
There are already things called "wrongful birth" malpractice lawsuits in the US.
I expect that doctors are going to push all these tests more to avoid trials with malpractice penalties from juries that are sympathetic to tragic circumstances.
Re: (Score:2)
Your doctor should have known that because the American Academy of Family Physicians only recommends Down Syndrome screening for mothers over 35 [aafp.org].
However, it looks like the Down syndrome number is closer to 1 in 1,000, not 1 in 150,000. [google.com] So that changes the calculus.
Re: (Score:2)
When my ex wife was pregnant with our first child he wanted to do a blood test for Down Syndrome..
It's "Down's Syndrome", not "Down Syndrome".
Re: (Score:2)
Re: (Score:2)
The preferred spelling in the US has been Down Syndrome for quite some time:
The preferred spelling in the US has been wrong for quite some time, then.
Re: (Score:2)
I'm like... wait a minute, WTF? What do you mean what do I want to do? My two options here are keep the baby or abort the baby; there's no cure or procedure. And frankly, humans make emotional decisions and relati
Re: (Score:2)
Scam (Score:3)
They sold my wife on one of these bullshit tests for ~$500. You can sell anything to a mom-to-be who has high anxiety. It came back that there was a 27.22% chance our baby had fartknocker syndrome (or something), but it was recessive so they needed to test me too.
Yeah. Another ~$500, and I had no choice.
Guess what, everything was fine.
Fuck all those scammers.
Re: (Score:2)
What does the mother having High Anxiety [wikipedia.org] have to do with the decision to have these tests?
Let's be honest (Score:3)
Re: (Score:2)
probabilities (Score:2)
I remember when my wife was pregnant with our first, there was a Down's test that came back as a 1 in 4000 probability. The doctor seemed excited and looked like she wanted to know why I was not excited - I said I'm not sure what to make of that number; there are a lot of people in the world. WTF difference does 1 in 4000 vs 1 in 10000 make? You still do not actually know anything. I did not find the Down's testing helpful at all; if you are not 100% positive then you still don't know anything.
My state r
Re: (Score:1)
Re: (Score:2)
How would you interpret the difference between 1 in 4000 vs 1 in 10000? We were 1 in 4000 for Down's, and the congenital condition my kid was born with happens something like 1 in 15000 in the population, even if it's 1 in 4 in our family. Based on that I should have been more worried about Down's, but I wasn't at all. After having 3 kids, anything that wasn't 100% was useless and can only drive decisions made in fear.
Garbage Article from NYT (Score:2)
Where do you start? First of all, let's start with "the wondrous promises that Silicon Valley has made to expectant mothers." SV has its share of ethics issues for sure, but talking about the NIPT market as though it's a Silicon Valley thing is ludicrous. Ariosa (now owned by Roche) is in SV. Sequenom (now owned by LabCorp) is a San
Mod parent UP! (Score:2)
Re: (Score:2)
They're never going to describe it that way publicly, but that's how it's thought of o
Similar false positive results for us (Score:1)
A decade ago we had some similar test results that said our baby had an extra chromosome fragment.
After some agony we decided to do nothing. Kid was just fine.
Snake oil.... (Score:1)
False positives (Score:1)
Winning The Sh*t Lottery (Score:1)
We would have seen the physical markers of something wrong in a later ultrasound, although it took a prenatal EKG specialist to locate and catalog the various heart defects she had, so who *knows* what the poor med tech would have missed.
I'm not sure we'll NIPT test again, but it's good to know it's there.