Study Shows One Third of All Studies Are Nonsense 391
SydShamino writes "CNN has a report on new research to confirm claims made in initial, well-publicized studies. According to the new study, about a third of all major studies from the last 15 years were subsequently shown to be inaccurate or overblown. The study abstract is available."
Mathematically Challenged (Score:2, Insightful)
This is just silly! Far too silly! (Score:4, Funny)
Accurate studies, let's say 100 (I know nothing is 100% accurate, and I know most studies, even if they are somewhat accurate, probably don't exceed 70% probability even in the specific environments they are enacted in, but let's just be over-generous since this whole thing is rather ridiculous anyway)
Right?
Rats.
Re:Mathematically Challenged (Score:2, Insightful)
The text of the article does not support the 1/3-bad claim exactly. Instead, it reports that 1/6 of initial reports are subsequently contradicted and another 1/6 are subsequently only weakly supported.
Estimating from this range, the true number is probably somewhere in between: say 22.2% (= 2/9), which is between 16.7% and 33.3%, or 25%, which is the average of these?
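For anyone who wants to check the arithmetic, here's a throwaway sketch (the two 1/6 fractions are from TFA's summary; the "midpoint" is just this comment's guess, not anything the study itself reports):

```python
from fractions import Fraction

contradicted = Fraction(1, 6)   # initial reports later contradicted outright
weak_support = Fraction(1, 6)   # initial reports later only weakly supported

total_questionable = contradicted + weak_support      # 1/3
midpoint = (contradicted + total_questionable) / 2    # halfway between 1/6 and 1/3

print(total_questionable, float(total_questionable))  # 1/3 ≈ 0.333
print(midpoint, float(midpoint))                      # 1/4 = 0.25
```

(For what it's worth, the 2/9 ≈ 22.2% figure happens to be the harmonic mean of 1/6 and 1/3.)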
Irony meter! (Score:5, Funny)
I think it is possible this is the most amusingly ridiculous piece of "legitimate" news I've read in a while...
Anyone got anything to beat it? Post it, I need to shock my brain a little more
Re:Irony meter! (Score:5, Funny)
Yahoo link [yahoo.com]
Re:Irony meter! (Score:2)
Re:Irony meter! (Score:4, Interesting)
Nonsense! (Score:3, Insightful)
Re:Nonsense! (Score:3, Insightful)
Re:Nonsense! (Score:2)
I recall my church minister, about ten years ago, having a real problem with this; he saw "trust" as a zero-sum game. People were trusting science more than they were trusting the church.
While I can't say what the motivation behind this study was, I wouldn't doubt for a moment if the same people who fight about evolution having no scientif
Re:Nonsense! (Score:5, Insightful)
Re:Nonsense! (Score:2)
Re:Nonsense! (Score:5, Insightful)
The Slashdot summary of this particular article is more than a little misleading and sensationalist.
Re:Nonsense! (Score:5, Insightful)
At one time the NEJM was considered a "gold standard" in publishing medical research. IMO, that's no longer true. I do believe that journals are no longer as careful about what they publish. [Rhetorical question: why did Marcia Angell leave as editor of the NEJM?]
While we do expect "science" to eventually verify or refute claims made in scientific studies, I do not remember such a high percentage of studies being "overturned", as the lawyers would say. [Of course, I am susceptible to "recall bias."]
Along with journals more willing to print research, I do believe that the quality of research itself has worsened. [I've seen first hand some of the "fun and games" that take place in academia. That's one reason I'm no longer there!]
In addition, the public's hunger for news about health and lifestyle and the media's need for sensationalism have fueled a news feeding frenzy. I doubt that the public or the news media are really capable of judging the worth of clinical studies. I don't think the public or the media understand the medicine or science involved nor do they actually know how to evaluate research on scientific, methodological or statistical grounds.
I get really peeved when I see some study touted on TV that reports a 10% reduction in some disease supposedly caused by modifying some risk factor. Without seeing the confidence interval bounding this estimate of risk, I'm not willing to say the effect is real. As a rule of thumb, I was taught to view with skepticism any study that does not halve or double the relative risk attributed to a risk factor.
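For readers who haven't seen it done, here's a minimal sketch of the kind of check the parent is describing: a relative risk and its rough 95% confidence interval from a 2x2 table. The counts below are made up purely for illustration, not taken from any study mentioned here.

```python
import math

# Hypothetical 2x2 table: exposure status vs. disease status (made-up counts).
a, b = 90, 910     # exposed:   90 cases, 910 non-cases
c, d = 100, 900    # unexposed: 100 cases, 900 non-cases

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed     # 0.90, i.e. the headline "10% reduction"

# Standard large-sample 95% confidence interval, computed on the log scale.
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")   # ≈ (0.69, 1.18)
```

With counts like these, the interval straddles 1.0, so the touted "10% reduction" can't be distinguished from no effect at all, which is exactly the parent's point about demanding the interval and not just the point estimate.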
ERROR 0x381F (Score:4, Funny)
Re:ERROR 0x381F (Score:2)
Everyone is getting the irony and stating the same obvious joke
This study is actually alarming (Score:5, Interesting)
This is about the accuracy of clinical trial research. This is not about market research studies in the latest clothes fashions. Medicine is an extremely lucrative and risky field -- being associated with the group that pushes through the next Viagra can ensure that your family becomes the next Rockefellers. Your only opposition is the FDA (and the politicians that influence it, which are always hungry for money, which you have lots of).
There is a tremendous amount of pressure on pharmaceutical researchers to produce favorable results. Let's say that you're a new, idealistic researcher who runs some tests on a new drug that your employer wants to market. Your tests show that our drug produces an increased rate of cancer? Well, been nice having you work here...bye. Bob down the hall has consistently gotten us much better results to feed to the FDA for approval. We really don't know how or why he gets better results, but he's definitely the man we want on the job. Sure, maybe twenty years down the road there will be some complaining, but *we didn't know*...*we did all our due diligence and somehow our results just wound up showing that our drug was okay*.
And even the more innocent "conclusive results" become suspect. A pharmaceutical company doesn't want "inconclusive results", where "further tests are recommended". They have a bloody lifetime on the product ticking away, and competition breathing down their neck. They want some scientist to sign off on this thing with a nice firm "Okay" or "Not Okay", or else what are they paying the guy for? He's not here to do ivory tower work -- he's here to serve the company, which is in the business of extracting savings from aging and achy baby boomers and subsidies paid for by their tax-paying children.
What is being said is that a full third of examined clinical trials were essentially horseshit. This is really not a laughing matter.
Oblig. Futurama quote (Score:5, Funny)
Studies inaccurate but not completely bogus (Score:2, Insightful)
Nonsense (Score:2, Insightful)
Comment removed (Score:3, Funny)
Falsifiability. (Score:2, Insightful)
According to a recent study involving 100 clones based on DNA fragments of Karl Popper, a statistically significant number of the clones agree that this is a pretty goddamn good result, considering that that's how science is supposed to work.
You know - that silly process whereby you make a falsifiable claim, run an experiment, report your results, and encourage
Re:Falsifiability. (Score:5, Insightful)
This isn't about hypotheses turning out to be false; it's about experiments which produce bad data that, at their release, seemingly supported bad hypotheses.
While even a good scientist can come up with wrong hypotheses, no good experimental scientist should be creating experiments which don't have proper controls to prevent them from drawing the wrong conclusions, nor should they be deriving conclusions based on a statistically insignificant sample.
Arguably, the ability to design and implement properly controlled experiments and derive statistically significant results is what makes an experimental scientist an experimental scientist.
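To put a number on the "statistically insignificant sample" point, here's a hedged little simulation (pure illustration; the 55%-vs-50% response rates and the sample sizes are invented, not from any real trial). It estimates how often a modest but real effect actually reaches p < 0.05 at different sample sizes.

```python
import math
import random

def two_prop_pvalue(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail

def power(n, p_treat=0.55, p_ctrl=0.50, trials=2000):
    """Fraction of simulated trials (n subjects per arm) reaching p < 0.05."""
    hits = 0
    for _ in range(trials):
        x1 = sum(random.random() < p_treat for _ in range(n))
        x2 = sum(random.random() < p_ctrl for _ in range(n))
        if two_prop_pvalue(x1, n, x2, n) < 0.05:
            hits += 1
    return hits / trials

random.seed(0)
for n in (50, 400, 1600):
    print(f"n = {n:>4} per arm: power ≈ {power(n):.2f}")
```

With 50 subjects per arm, the real 5-point effect shows up as "significant" less than one time in ten; it takes on the order of 1,500 per arm to detect it reliably. Underpowered studies are basically coin flips dressed up as conclusions.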
Re:Falsifiability. (Score:3, Insightful)
I'm not sure that there's a way to ever
Re:Falsifiability. (Score:3, Insightful)
To claim that science is false or that scientists are the same as priests is to completely ignore a history of socially powerful, yet materially impotent priests and a contrasting history of socially impotent, yet materially powerful scientists.
I'm playing with you a little bit now, but you get the point.
Re:Falsifiability. (Score:5, Insightful)
Obviously errors are not completely avoidable because people are fallible; that's why we try to reproduce results and practice peer review. But I should think we ought to do better than having 33% of our supposedly "proven" hypotheses eventually disproved by subsequent experiments.
Note that I'm not talking here about trivial things like Newton's laws of motion being "disproved" by relativity. Relativity is more like a generalization of Newton's laws than a refutation, and that *is* a part of the normal scientific process. I'm talking here about medical studies which come up with conflicting results or the innumerable global warming studies that the scientific community can't make up its mind on (for example).
Re:Falsifiability. (Score:3, Informative)
It indicates either an incorrectly formed hypothesis or errors in experimental methods.
Or limitations of methodology. I think a lot of cutting-edge science tends to wander along the edge of this problem - there may be an effect, but the available data is only barely sufficient to see it, and obtaining a statistically sound sample size would be uneconomical. Lots of good research ends up exploiting clever tricks to get around this kind of limitation - and sometimes falls prey to unforeseen effects and influ
Re:Falsifiability. (Score:3, Insightful)
Nonsense. Science is carried out by human beings, and human beings make mistakes. Healthy processes accommodate that fact.
Publishing the results of those mistakes, honestly and fully for the critique of others, is part of the scientific process. Having those mistakes corrected by later
Re:Falsifiability. (Score:3, Insightful)
Most importantly, the prescription-drug companies.
I dare you to look it up and prove different. Such is the basis of science, as mentioned earlier.
Money motivates science just as much as it does anything else. Look at asbestos: it used to be approved by the FDA. Decades later, it was proved harmful by too many studies to ignore.
Just a suggestion: give ANY "scientific fact" at least a decade before you believe it to hold any water.
Politics and money drives the FUD. (Score:4, Interesting)
Better science education required. (Score:5, Insightful)
Lots of people can't think of a good reason to do science, maths and statistics at school. Well, a bloody good reason is so you can prevent the wool being pulled over your eyes.
Re:Better science education required. (Score:2)
Pareto or Sturgeon (Score:2)
No doubt the study of studies is itself of dubious quality.
Personally I would have expected that the Pareto principle (20% of anything is the important part) or Sturgeon's law (90% of everything is crap) would have been the operational forces here.
Slashdot Paradox (Score:2, Funny)
The topic is stated toooo broadly! (Score:3, Interesting)
it isn't, really
It is about the effectiveness of interventions... if you skipped over it, its worth a perusal to a skim... at the very least... but it would seem to me that the whole thing has lead to almost no positive conclusion itself... with 44% of the experiments being replications and 24% unchallenged... the 66% really don't seem to have much value...
Ahhh, academic research... only there can you get paid well to tell us absolutely nothing...
Re:The topic is stated toooo broadly! (Score:4, Informative)
GAH! Someone needs to look at the definition for "perusal"
Perusal [reference.com] - To read or examine, typically with great care.
And don't get me started on your use of "its" (its = possessive, it's = contraction of "it is")
I know, I know... Mod me +5 Pedantic. I've had a long day, and that felt good.
Studies are wrong because... (Score:5, Funny)
More nonsense studies: (Score:2)
Re:More nonsense studies: (Score:5, Funny)
Depends. One would have to calculate the water content of beer versus the rate of dehydration produced by the alcohol content. Following through, one would conclude that 8 glasses of beer would fall short of the goal of 8 glasses of pure water, with the only recourse being to drink more beer.
This, kids, is a practical demonstration of how to make science work for you.
Re:More nonsense studies: (Score:2)
If you drink beer with 6% alcohol, you will only have drunk 7.52 glasses of water. So you would have to drink another half glass to compensate for the alcohol.
Cheers!
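For the truly committed, the beer math checks out as a back-of-the-envelope calculation (treating 6% ABV beer as simply 94% water, which is of course a cheerful oversimplification):

```python
GLASSES_TARGET = 8        # the proverbial eight glasses of water a day
WATER_FRACTION = 0.94     # 6% alcohol beer, naively treated as 94% water

water_from_beer = GLASSES_TARGET * WATER_FRACTION   # 8 beers -> 7.52 glasses of water
beers_needed = GLASSES_TARGET / WATER_FRACTION      # ~8.51 beers to hit the target

print(f"{GLASSES_TARGET} beers ≈ {water_from_beer:.2f} glasses of water")
print(f"need ≈ {beers_needed:.2f} beers to reach {GLASSES_TARGET} glasses of water")
```

Which is where the "another half glass" comes from.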
Obligatory... (Score:2)
Overblown (Score:3, Funny)
The actual figure turns out to have been 26.4% - much closer to 1/4 than 1/3.
Re:Overblown (Score:4, Informative)
Oh yeah! (Score:2, Funny)
All real facts contained in emails from your aunt (Score:2)
Aspartame causes MS. Andy Rooney has definitive proof that the founding fathers wanted prayer in the classroom. Microwaving food in plastic causes cancer. George Carlin lays out why librals are dumb. Bill Gates wants to buy you a car for forwarding this email.
All true!
Stefan
The Statistical Methods in Most Studies are Flawed (Score:2)
Re:The Statistical Methods in Most Studies are Fla (Score:3, Insightful)
Re:The Statistical Methods in Most Studies are Fla (Score:2)
Presumably the "true value" is outside the "confidence interval" for 1 in 3 of the studies investigated, which is a whole lot worse than the 1 in 20 implied by a 95% confidence level.
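To make that concrete, here's a throwaway simulation (toy numbers, nothing to do with the actual clinical data) of what honest 95% intervals are supposed to deliver: roughly 1 miss in 20, not 1 in 3.

```python
import math
import random

random.seed(1)

TRUE_MEAN, SIGMA, N, STUDIES = 10.0, 2.0, 30, 10_000
misses = 0
for _ in range(STUDIES):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    mean = sum(sample) / N
    half_width = 1.96 * SIGMA / math.sqrt(N)   # known-sigma 95% interval
    if not (mean - half_width <= TRUE_MEAN <= mean + half_width):
        misses += 1

print(f"miss rate ≈ {misses / STUDIES:.3f}")   # ≈ 0.05, i.e. about 1 in 20
```

If real studies are missing three times in ten instead of once in twenty, sampling error alone isn't the explanation; the gap has to come from bias, bad design, or selective publication.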
saving throw... (Score:2)
7
didn't make the saving throw, I don't believe the story.
Re:saving throw... (Score:2)
Physician, Heal Thyself (Score:2)
No shit (Score:3, Insightful)
Who's surprised by this? Seriously.
Science by press conference (Score:5, Insightful)
Even worse are the lazy journalists who report it. After a New York Times piece last week claimed bisexual males were "lying" [nytimes.com] based on results from a highly questionable study, I reminded their editors of this excellent piece Blinded by Science [cjr.org] in Columbia Journalism Review.
This kind of sloppy reporting is perfect for lazy journalists-- it's a three-for-one deal. They get to break the news, and then later they have a second story when real experts point out the flaws, and a third when the people finally get discredited. More evidence of the shameful state of journalism in this country.
Re:Science by press conference (Score:4, Informative)
nonsense (Score:2)
Well now we know... (Score:2)
Reminds me of a study I did myself (Score:2)
It's a well known fact (Score:2)
Physicist 3, to herself: He's making that up...
From the scholarly journal... (Score:2)
With stuff like THIS... (Score:2)
Sooooooo 1/3 of all studies? (Score:2)
Is this study in the 1/3 that are nonsense? (Score:2)
My head hurts.
Global Warming? (Score:2)
The study proves itself! (Score:2, Funny)
Missing the point (Score:5, Informative)
First, notwithstanding the many good jokes about a self-referential study that will itself be proven to be exaggerated, this study specifically checked whether highly cited clinical studies had claims that were later contradicted or softened due to other research. This study was not claiming that 1/3 of all scientific studies published were wrong in some way. It's worth noting that doing clinical research is very difficult, and that the error bars will always be quite large. It's also important to keep in mind that sometimes clinical research may be unduly influenced by financial pressures... and that clinical research undergoes very heavy scrutiny.
So having 1/3 of these highly cited clinical studies be later contradicted should not make us worry that clinical research is being done wrong. We should be happy that so much verification occurs, and that any erroneous conclusions will (probably) be checked again. One line from the CNN article rings true:
Experts say the report is a reminder to doctors and patients that they should not put too much stock in a single study and understand that treatments often become obsolete with medical advances.
I think that should be the take-home message for the casual reader. Science is doing its job of verification, but people need to stop jumping to conclusions (or worse, changing their life habits) based on the results of a single study. The results need to be double-checked. The study may have been a fluke, or have flaws, or the data may have been manipulated. Whatever the reason, we should not trust single experiments, especially where human lives are at stake!
Having (partially) read the JAMA article, I think their result is sobering and useful. It really shows how intense the competition is in that field (which leads both to people making exaggerated claims and to a lot of pressure to disprove others' claims and get at the "right answer").
concluding.. (Score:3, Funny)
And another third is inaccurate or overblown.
winner-take-all vs. long-tail effects (Score:5, Insightful)
At the same time, I wonder if the long tail effect [wikipedia.org] means that an increasing number of once-obscure, high-quality studies are being discovered, read, and used by an increasing number of people. Those that do create unbiased studies may not get much popular press, but they do become more widely read due to Google.
Ultimately, we seem to be floating in a rising tide of both good and bad studies. Perhaps the mix of studies is being biased toward the bad (winner take all), but the share of impressions -- the number of times that good studies have been accessed -- has actually improved due to long-tail effects.
Nonsense? (Score:2)
AP Statistics (Score:2, Interesting)
I am willing to bet that the CNN study is correct in its assumption that most studies are incor
Obligatory Monty Python quote (Score:3, Funny)
Post is misleading (Score:3, Insightful)
Word to the wise: don't trust the press at face value. Expect sources, preferably cited and available for you to review, and check your facts before you buy into whatever the press happens to be reporting today.
Parent Title Is Complete Crap (Score:3, Informative)
And who's to say the replications aren't the ones that missed the mark?
1/3 right
1/3 wrong
1/3 we have no idea what the answer means.
That's what I was told to expect from research in my first semester of grad school. Not from reading it -- from DOING it.
They really should teach science in school. Not just the areas of science, but the subject of science itself.
Re:And this study is not nonsense... (Score:2, Funny)
Re:And this study is not nonsense... (Score:3, Funny)
Re:more like (Score:3, Funny)
all studies are biased and show exactly what the person who is doing them wants. Much like stats, you can make anything prove anything by working the numbers and asking the right questions.
Can you prove this with a study? If not, I'll not believe you. I asked 100 people if they thought studies were overblown, and 33 of them said 'yes.' Where's
Re:more like (Score:5, Insightful)
The fact is, yes, statistics can be misused. So can every other field of study. But used right, statistics are a tremendously powerful way to understand our world, and often reveal information that can't be obtained any other way. And believe me, nobody gets more peeved at statistics abuse than statisticians do.
But that's okay, pal. Just keep on making fun of things you don't understand. The smart people of the world will keep on working, keep doing things that make your and everyone else's life better, whether you know it or not.
Re:more like (Score:2)
Re:more like (Score:2)
Re:more like (Score:5, Insightful)
However, statistics are not determinative. Treating them as if they were is a mistake I've heard from both laymen and experts. The fact that, according to what's known (and factored into the calculation), an event is 99.999999999% likely to happen... well, that doesn't mean it will happen. Really, it doesn't matter how unlikely your statistics demonstrate something to be, it won't prevent the unlikely from happening.
In fact, it's demonstrable from statistical analysis that we should expect tremendously improbable events to happen quite often, and that the chance of the most probable outcome occurring at every single instance becomes incredibly small as time stretches on.
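A two-minute illustration of that point (pure arithmetic, not anything from the article):

```python
# The chance that a 99%-likely outcome happens on every one of n occasions
# shrinks fast, even though each individual occasion is nearly a sure thing.
for n in (10, 100, 1000):
    print(f"0.99 ** {n:>4} = {0.99 ** n:.3g}")   # 0.904, 0.366, 4.32e-05

# Meanwhile, a one-in-a-million event given ten million chances is all but certain.
p, chances = 1e-6, 10_000_000
print(f"P(at least once) = {1 - (1 - p) ** chances:.5f}")   # ≈ 0.99995
```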
So statistics are an interpretive tool, not an answer. Statistics alone cannot tell you what will happen, they can't tell you what has happened, and they certainly cannot tell you what should happen. And all these comments you're talking about, I think they come from a valid frustration borne from sloppy reporting telling us "scientists have discovered that 75% of" this and "they now know that 25% of" that outside of any meaningful context.
And what's the likelihood that all these percentages are correct? What's the margin of error, and what's the margin of error's margin of error? Certainly the people telling us these "facts" (reporters) have no idea.
Re:more like (Score:3, Interesting)
While I agree that some form of bias infects essentially all research, I don't think that's the whole picture here. Publishing new scientific results has two components. One is to put the data out into the literature where other researchers can read it, comment on it, build on it and improve on it. The other is to
Re:Next story (Score:3, Funny)
Re:Next story (Score:2)
Re:nice (Score:5, Interesting)
"This is not a very good time for something like this to happen."
So my question is this: when is a good time for an airplane full of people to crash into a residential neighborhood? This guy should designate a day for us so we can make sure all the airlines and pilots know when the good day for crashing is. Morons.
Re:nice (Score:2)
Assuming, of course, you aren't in the neighborhood designated to enjoy 15 minutes of fame.
Re:nice (Score:4, Funny)
When the passengers of that airplane are terrorists and that residential neighborhood is al-Qaeda's? Or maybe when the passengers are kittens and that neighborhood is actually a big pile of yarn that cushions the fall. awwww, so cute.
Re:nice (Score:2)
Re:nice (Score:5, Funny)
"Public split on whether Bush is a divider."
Re:nice (Score:3, Funny)
If you're a suicide bomber long enough to become a veteran I think there's a good chance you're doing something wrong.
Re:Obviously flawed (Score:5, Interesting)
Think about it: they are measuring highly cited studies that get a "stronger" result than other subsequent studies. However, they simply say "stronger" (at least in the abstract). Whenever you measure something other than by a census, you take a sample. Therefore, as anyone familiar with elementary statistics should know, you have sampling variation. Researchers then usually appeal to the 'central limit theorem' to assume that the mean comes from a normal distribution, and then, knowing the distribution of the mean, make a statement like "We are 95% confident that x is between a and b". The later experiment will make a similar such claim.
Assume the experiment to measure x was performed correctly and identically both times, and there is no change in the effect with respect to time. Each time the experiment is performed, a different mean value of x will be obtained due to sampling error. Since both measurements of x come from an identically distributed population, by symmetry the earlier one will be "stronger" than the later one 50% of the time. However, not all highly cited studies are repeated after publication, and not all "me too" studies are published, hence the observed figure of less than 50%.
Clearly, what they should be doing is comparing the confidence intervals and looking for a statistically significant percentage of studies whose intervals do not overlap. That would be relevant, as it would tell us about non-sampling errors (experiments designed, performed, or interpreted incorrectly) rather than sampling errors.
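Here's a quick simulation of the symmetry argument above (invented effect size and sample size, purely to show the mechanism): run the "same" study twice with nothing but sampling noise between them, and the original comes out "stronger" than the replication about half the time.

```python
import random

random.seed(2)

TRUE_EFFECT, SIGMA, N, PAIRS = 0.5, 1.0, 50, 20_000

def one_study():
    """Mean effect estimate from a sample of N subjects, noise and all."""
    return sum(random.gauss(TRUE_EFFECT, SIGMA) for _ in range(N)) / N

stronger_first = sum(one_study() > one_study() for _ in range(PAIRS))
print(f"original 'stronger' than replication: {stronger_first / PAIRS:.3f}")   # ≈ 0.50
```

Which is exactly the point above: even with flawless methodology, a naive "was the first result stronger?" comparison flags half of all study pairs, so the interesting question is how far the observed rate departs from what sampling error alone would produce.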
Study shows 66.6% chance that 1/3 of studies .... (Score:5, Funny)
How Many Statisticians does it take to .... (Score:3, Funny)
A: 3.215 ± 3%
. . . 95% of the time.
Re:Obviously flawed (Score:2)
Re:Obviously flawed (Score:5, Funny)
Re:Obviously flawed (Score:2)
I agree 100%.
Re:Obviously flawed (Score:3, Insightful)
This appears to be particularly frequent in the more abstract (non-maths) sciences like environmental science. (I once had lectures on the topic where the speaker cited stats that did not match the notes and were inconsistent across presentations.)
Re:Obviously flawed (Score:2, Insightful)
It is!
No, please don't call my bluff!
What are those cuffs [telegraph.co.uk] about?
Screw you all! SCREW YOU ALL!
Re:Obviously flawed (Score:2, Redundant)
Re:/. posting shows 99% of /. postings are bullshi (Score:3, Insightful)
Re:ObSimpsons (Score:2, Funny)
Another one C/o Homer:
Facts are meaningless. You could use facts to prove anything that's even remotely true!
Please define religious nut (Score:3, Insightful)
While there are a large number of people who reject facts and reason due to their a priori commitment to religious beliefs, there are a great number who do the same, whose religion is science itself.
That is to say, preconceived notions and personal bias prevent many so-called scientists from acknowledging facts and realizing that their pet theories are baseless.
As an example, I offer Carl Sagan. Here was a man who made a nice living talking ab