Science

Why Published Research Findings Are Often False

Hugh Pickens writes "Jonah Lehrer has an interesting article in the New Yorker reporting that all sorts of well-established, multiply confirmed findings in science have started to look increasingly uncertain because they cannot be replicated. This phenomenon doesn't yet have an official name, but it's occurring across a wide range of fields, from psychology to ecology. In medicine the phenomenon seems extremely widespread, affecting not only anti-psychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants. 'One of my mentors told me that my real mistake was trying to replicate my work,' says researcher Jonathon Schooler. 'He told me doing that was just setting myself up for disappointment.' For many scientists, the effect is especially troubling because of what it exposes about the scientific process. 'If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved?' writes Lehrer. 'Which results should we believe?' Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential because they allowed us to 'put nature to the question,' but it now appears that nature often gives us different answers. According to John Ioannidis, author of Why Most Published Research Findings Are False, the main problem is that too many researchers engage in what he calls 'significance chasing': finding ways to interpret the data so that it passes the statistical test of significance, the ninety-five-per-cent boundary invented by Ronald Fisher. 'The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy.'"
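The 'significance chasing' Ioannidis describes can be illustrated with a short simulation. The sketch below is my own, not code from Lehrer's article or Ioannidis's paper; it assumes a researcher who tries twenty different outcome measures on data containing no real effect at all and reports whichever one clears p < 0.05. The sample sizes, the number of "looks," and the normal approximation to the t-test are all illustrative assumptions.

```python
# Sketch: if you test enough null hypotheses, some pass p < 0.05 by chance alone.
import random
import statistics
import math

random.seed(1)

def t_test_pvalue(a, b):
    """Two-sample test p-value via a normal approximation to Welch's t."""
    na, nb = len(a), len(b)
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / na + vb / nb)
    z = abs(ma - mb) / se
    # two-sided p-value from the normal tail (close enough for n = 30)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

N_OUTCOMES = 20          # how many different measures get "snooped" per study
N_STUDIES = 1000
false_positives = 0

for _ in range(N_STUDIES):
    found = False
    for _ in range(N_OUTCOMES):
        # treatment and control come from the SAME distribution: no real effect
        control = [random.gauss(0, 1) for _ in range(30)]
        treated = [random.gauss(0, 1) for _ in range(30)]
        if t_test_pvalue(control, treated) < 0.05:
            found = True
            break
    false_positives += found

print(f"Studies reporting a 'significant' result despite no true effect: "
      f"{false_positives / N_STUDIES:.0%}")
# With 20 looks per study this lands near 1 - 0.95**20, about 64%, not 5%.
```

With twenty looks at null data, the chance of at least one spurious "hit" is roughly 1 - 0.95^20, or about 64 per cent, which is the arithmetic behind the magical-test complaint.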
  • by IICV ( 652597 ) on Sunday January 02, 2011 @12:40PM (#34737738)

    This article has already been taken apart by P.Z. Myers in a blog post [scienceblogs.com] on Pharyngula. Here's his conclusion:

    But those last few sentences, where Lehrer dribbles off into a delusion of subjectivity and essentially throws up his hands and surrenders himself to ignorance, is unjustifiable. Early in any scientific career, one should learn a couple of general rules: science is never about absolute certainty, and the absence of black & white binary results is not evidence against it; you don't get to choose what you want to believe, but instead only accept provisionally a result; and when you've got a positive result, the proper response is not to claim that you've proved something, but instead to focus more tightly, scrutinize more strictly, and test, test, test ever more deeply. It's unfortunate that Lehrer has tainted his story with all that unwarranted breast-beating, because as a summary of why science can be hard to do, and of the institutional flaws in doing science, it's quite good.

    Basically, it's not like anyone's surprised at this.

  • Re:It's simple. (Score:4, Informative)

    by Anonymous Coward on Sunday January 02, 2011 @01:00PM (#34737906)

    Having worked in multiple academic establishments, I have never seen that. I have seen people argue their point, and respected figures get their way otherwise (offices, positions, work hours, vacation). But when it came to papers, no one was sitting around rejecting them because they conflicted with a "respected figure." Oftentimes, staff would have disagreements that ended in an agreement to disagree for lack of data. Is this your personal experience? Because while I don't disagree that this may occur in some places, I just haven't seen it, and I want to be sure you have and are not just spreading an urban legend.

  • by bit trollent ( 824666 ) on Sunday January 02, 2011 @01:01PM (#34737910) Homepage

    If you had bothered to read the fucking article instead of jumping to some half-assed conclusion, you would see that the article has nothing to do with lying.

    It's not "the oil companies have paid scientists to lie about science"

    It's "I'm fascinated that trends I detected early in my research seem to fall apart as I continue to investigate"

    Anyway... thanks for lowering the level of discussion on /. even further, douche.

  • Not that simple. (Score:5, Informative)

    by fyngyrz ( 762201 ) on Sunday January 02, 2011 @01:01PM (#34737916) Homepage Journal

    It's called "lying".

    That's not a given. Particularly in the soft sciences - psychology, for instance - it is extremely difficult to control for all factors (I'm more inclined to say nearly impossible) and so replication of results can be subsumed by other effects, or even simply not work at all. You know that whole generation gap thing? That's a good example of groups of people who are different enough that the reactions they will have to certain subject matter can be polar opposites. So something that was "definitively determined" in 1960 may be statistically irrelevant among the current generation.

    That's just one example of how squishy this all is. Without having to bring lying into it at all. And then, there will be liars; and there will be people who draw conclusions without scientific rigor at all, simply because it's just too difficult, expensive or time-consuming to attempt to confirm the ideas at hand. And there is the outlier personality; the one who accounts for those other few percent -- all the declarations of "this is how it is" are false for them right out of the gate.

    Hard sciences simply lend themselves a lot better to repeatability. Where I think we go wrong is assigning the same certainties to the claims of the soft scientists. I have personally seen psychiatrists, best intent not in doubt, completely err in characterizing a situation to the great detriment of the people involved, because the court took the psychiatrist's word as gospel truth.

    All science is an exercise in metaphor, but soft science is an exercise of metaphor that is almost always far too flexible. One place you can see this happening is the trendy / cyclic adherence to Freud, Jung, Maslow, Rogers and so forth... the "correct" way to raise babies... Ferberizing, etc. This stuff isn't generally lies at all, but it also generally isn't "right." Good intentions do not automatically make good science.

    Serious medicine is another good example. Something that might work very well for you might not work at all for me; get the wrong group of test subjects, and your results will skew or worse. This is an area that I think is fair to call a hard science, but where we just don't know enough about the systems involved. Generally speaking, I don't think our oncologist lies to us; further, I think he's pretty well aware of the limitations of his practice and the state of knowledge that informs it; but they just don't know enough. To which I hopefully add, "yet."

    On a personal level - since that's all I can really affect - I treat soft science about the same way I do astrology. If you believe it, you'll probably attempt to modify your behavior because of the predictions, which in turn may, or may not, affect your actual outcome. If you don't, it's either irrelevant or too uncertain to trust anyway. So it's low confidence all the way.

    I do, however, still place very high confidence in Boyle's law [wikipedia.org] for gases. Hard science works very well. :)

  • Re:Hmmmmm (Score:5, Informative)

    by wanax ( 46819 ) on Sunday January 02, 2011 @01:59PM (#34738342)

    Well, passing over for the moment the likes of determinism and ecological psychology, I think you're mistaken in direct studies of human behavior. There are a number of very subtle effects that, when run through the non-linear recurrent processes of the brain, can lead to significant behavioral changes (i.e. the demand effect [wikipedia.org]). While some of these were touched on lightly in the New Yorker article (blinding protocols and so on), there are second-order effects that are impossible to control. A psychologist who runs a large-subject-pool experiment needs funding, and getting it generally requires pilot results. Those results are exciting to the psychologist and their lab: they're more motivated, have higher energy, and are probably interacting better with the subjects (or with the technicians running the subjects, if they have enough money to blind it that deeply), and more motivated people are going to produce different behaviors than less motivated people. If the blinded study is positive and appears significant, it may become a big thing, but by the nth repetition of the original experiment by another lab trying to verify that they understand the protocol, the lab might be completely bored with the initial testing and the result disappears, essentially a variation of the Hawthorne effect [wikipedia.org] (which has itself been disappearing). That may well mean that the effect exists in certain environments but not others, which is an ultimately frustrating thing to classify in systems as complex as human society.

    It essentially boils down to the fact that we're all fallible, social beings who interact with the environment rather than merely observing it. Whether you want to say that this adds correlated but unpredictable noise to any data analysis that is not being appropriately controlled for (but can be), or that it is a fundamental limit on our ability to understand certain aspects of the world, at our current level of understanding it does rather seem that there is a class of experiments in which a scientist's mental state affects the (objective) results.
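A toy simulation can make wanax's last point concrete: if an effect is real in some testing environments and absent in others, the originating lab (which happens to sit in a favourable environment) reports a large result, while replicating labs that sample environments at random mostly watch it shrink or vanish. The effect size, sample size, and the 50/50 split of environments below are arbitrary assumptions of mine, not figures from the article.

```python
# Sketch: a context-dependent effect looks like a "disappearing" effect on replication.
import random
import statistics

random.seed(4)

def lab_estimate(effect_is_real, n=40, true_size=0.5):
    """One lab's measured effect size: true_size where the effect exists, else 0."""
    mean = true_size if effect_is_real else 0.0
    se = (2 / n) ** 0.5          # rough standard error of a difference in means
    return random.gauss(mean, se)

# The originating lab happens to sit in an environment where the effect is real.
original = lab_estimate(True)

# Replicating labs sample environments at random; assume (arbitrarily) that the
# effect exists in only half of them.
replications = [lab_estimate(random.random() < 0.5) for _ in range(50)]

print(f"Original lab's estimate:         {original:.2f}")
print(f"Mean of 50 replication labs:     {statistics.mean(replications):.2f}")
print(f"Replications finding ~nothing:   {sum(abs(r) < 0.2 for r in replications)} of 50")
```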

  • Re:Hmmmmm (Score:5, Informative)

    by nedlohs ( 1335013 ) on Sunday January 02, 2011 @02:52PM (#34738710)

    No, scientists in many fields (and some of which you would expect the opposite) do not understand statistics well.

    If you dig through your well-gathered data you will find correlations that are purely chance, which is why you are supposed to be looking for the predetermined correlation, not just any correlation. But when you've spent a lot of time and effort gathering a set of data, digging into it to find other things seems like a reasonable plan, and it is, as long as you do another, completely separate data-gathering study to check what you find. But there's great pressure to publish something now, since you just spent a huge wad of cash and your performance is measured by what you publish, not by actual scientific progress.

    Scientists do this. Traders at investment banks (and elsewhere) do this. People just do this.

    "Fooled by Randomness" by Taleb is a good look into this from the trading perspective. Assuming you don't mind his writing style, "ego-centric and pompous" is a common description (though I don't find it so).

    I'm pretty sure investment banking is dominated by "rightoids" which nullifies your ridiculous injection of politics into the universal human bias to see patterns in randomness.
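The data-dredging nedlohs describes above is easy to reproduce: generate a dataset of mutually independent variables, hunt for any pairwise correlation that clears the usual p < 0.05 cutoff, then see how many survive an independent replication. This is my own rough sketch with made-up sizes (15 variables, 50 observations, |r| > 0.28 as an approximate cutoff), not code from any study under discussion.

```python
# Sketch: dredge null data for "significant" correlations, then re-test on fresh data.
import math
import random

random.seed(2)

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def make_data(n_vars=15, n_obs=50):
    # every variable is independent noise: there are no real relationships
    return [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

CUTOFF = 0.28   # |r| > 0.28 is roughly the p < 0.05 threshold for n = 50

data = make_data()
pairs = [(i, j) for i in range(len(data)) for j in range(i + 1, len(data))]
hits = [p for p in pairs if abs(pearson_r(data[p[0]], data[p[1]])) > CUTOFF]
print(f"'Significant' correlations found by dredging: {len(hits)} of {len(pairs)} pairs")

# The honest follow-up: gather a completely separate dataset and re-test the hits.
fresh = make_data()
survivors = [p for p in hits if abs(pearson_r(fresh[p[0]], fresh[p[1]])) > CUTOFF]
print(f"Of those, surviving an independent replication: {len(survivors)}")
```

The dredged "findings" mostly fail the second pass, which is the point of pre-specifying the correlation you are looking for.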

  • by tgibbs ( 83782 ) on Sunday January 02, 2011 @03:25PM (#34738946)

    I think that people tend to underestimate the pervasive impact of regression toward the mean.

    Even without "data snooping" (improperly reanalyzing your data post-hoc in multiple ways to find something that appears to be statistically significant), there is still going to be bias. If I do an experiment and I happen to "luck out" and get a large (i.e. larger than the "true" mean of an infinite number of observations) effect size just by chance, I am far more likely to do follow-up experiments than if I am unlucky and the effect size is small or the result is not statistically significant. If subsequent experiments asking the same question in different ways also give a statistically significant result, my belief in the phenomenon is reinforced even if the effect size is a bit smaller.

    So I am far more likely to identify a real phenomenon if, because of a statistical fluctuation, I initially observe a larger effect size or a smaller standard error than the "true" value. And my figures from that initial study, showing a nice big effect and a small error bar, are far more likely to pass peer review than if the effect size were smaller and the error bars larger, even if the criterion for statistical significance is satisfied.

    If I am unlucky, and I get a lot of variation and/or a small effect size (again, compared to the "true" value from an infinite number of experiments), there is a good chance that the experiment will go into a drawer. Perhaps I'll give up on the idea, or perhaps I'll try it again, but I'll improve the experimental design in a way that I hope will reduce the statistical variability or give me a larger effect size. Of course, if it "works," I'll pat myself on the back for solving the technical problem and go on to do follow-up studies, even though statistically speaking it may well be the case that the prettier result from the new design is itself just a statistical fluctuation.

    Part of the problem is that by convention, we report a single value for effect size. Yes, some sort of estimate of standard deviation is appended, but what people remember is that single value. It simply is very hard for human beings to think in terms of statistical distributions. We tend to forget (even though we know it to be true in theory) that a statistically significant result does not show that our estimate of effect size is correct--all it tells us is that the effect size is unlikely to be zero.

    Thus, we can predict, just on statistical grounds, that effect sizes will tend to decline ("regress" toward the "true" mean) over time with follow-up studies, based on the simple fact that those follow-up studies are far more likely to happen if the measured effect size was initially larger than the "true" value than if it was smaller. And as far as I know, nobody has been able to come up with any statistically rigorous way of estimating the magnitude of this unavoidable bias.
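tgibbs's selection argument can be checked with a quick simulation: give every initial study the same small true effect, follow up only the ones whose first estimate looked "exciting," and compare the published estimates with their replications. The true effect size, sample size, and the twice-the-standard-error follow-up rule below are assumptions I picked purely for illustration.

```python
# Sketch: follow-up selection makes published effect sizes overshoot, so
# replications appear to "decline" toward the true value.
import random
import statistics

random.seed(3)

TRUE_EFFECT = 0.2               # "true" mean difference, in SD units (assumed)
N_PER_GROUP = 30
SE = (2 / N_PER_GROUP) ** 0.5   # standard error of a difference in means

def run_study():
    """One noisy estimate of the effect size."""
    return random.gauss(TRUE_EFFECT, SE)

initial = [run_study() for _ in range(10_000)]

# Assumed selection rule: only "exciting" results (estimate at least twice its
# standard error) get followed up and published.
followed_up = [e for e in initial if e > 2 * SE]
replications = [run_study() for _ in followed_up]

print(f"True effect size:                  {TRUE_EFFECT:.2f}")
print(f"Mean initial (published) estimate: {statistics.mean(followed_up):.2f}")
print(f"Mean replication estimate:         {statistics.mean(replications):.2f}")
# Published estimates overshoot; replications regress back toward the truth,
# which from the outside looks like the effect is "declining" over time.
```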

  • Re:Hmmmmm (Score:5, Informative)

    by ShakaUVM ( 157947 ) on Sunday January 02, 2011 @04:58PM (#34739472) Homepage Journal

    >>so you cannot hunt for a statistical significance just somewhere in the data and then re-formulate your hypothesis

    Cannot? Or should not?

    I work as an external evaluator on federal projects, and have been told by one group I worked with, after I delivered a negative result on their data, that "we know that the stats can say anything - why don't you take another look at the stats and find something that makes us look better?" I refused, saying it would be dishonest to change the analysis. They fired me, saying "most evaluators make us look better than the data, but you're making us look worse."

    The entire point of an external evaluator is to have a third party looking at your data, so as to prevent this kind of analysis fudging, but when I reported it to the federal case officer overseeing the grant, they just shrugged and didn't care. They don't want any drama to crop up in the grants they oversee. Makes them look bad to *their* bosses.

  • Re:Hmmmmm (Score:5, Informative)

    by LurkerXXX ( 667952 ) on Sunday January 02, 2011 @05:43PM (#34739660)

    So let me guess, you are one of those 'The pharmaceutical industry is hiding all the cures from us so they can sell us drugs' nuts. Here's the deal. The bulk of basic research (towards finding cures) is done by university researchers throughout the country. The majority of it is funded by the NIH, one of the U.S.'s best contributions to the world, which spends ~$30 billion a year on it. The big pharma companies? Yes, they spend some money on basic research. But they spend the bulk of their research dollars on clinical trials. Putting a single drug through the trials process can cost $10-100 million before they find out the drug doesn't work, is too dangerous to use, or actually works and will be a viable product. So the pharmaceutical companies aren't hiding the cures, because they aren't the ones doing the bulk of the looking for them. The university researchers are. And as a university researcher working off NIH funds, let me tell you, if I find a cure you will find out about it. I'll get it published, be cited in the journals about a zillion times, likely get a Nobel prize, get pretty much guaranteed funding for the rest of my career because of my success record, and be invited to give talks at universities and organizations around the world (on their dime, often with a nice little check).

    Scientists like to talk about their work. We have post-docs and Ph.D. students working in our labs who love to talk about our work. We have research techs who do a lot of the work and all know what's going on in the lab. All of these folks have friends and family, some of whom might be directly impacted if we find a cure for a disease. You think all those folks are going to keep quiet about it if we find something? What's the incentive? So take off the tin-foil hat, my friend. There are a lot of scientists out there working to find cures for diseases. You might not realize it, but it's kind of a tough thing to do.
