Medicine Science

Why Most Published Research Findings Are False 259

Hugh Pickens writes "Researchers have found that the winner's curse may apply to the publication of scientific papers and that incorrect findings are more likely to end up in print than correct ones. Dr John Ioannidis bases his argument about incorrect research partly on a study of 49 papers on the effectiveness of medical interventions, published in leading journals and each cited by more than 1,000 other scientists, and on his finding that, within only a few years, almost a third of the papers had been refuted by other studies. Ioannidis argues that scientific research is so difficult — the sample sizes must be big and the analysis rigorous — that most research may end up being wrong, and that the 'hotter' the field and the greater the competition, the more likely it is that published research in top journals will be wrong. Another study earlier this year found that among the studies submitted to the FDA on the effectiveness of antidepressants, almost all of those with positive results were published, whereas very few of those with negative results saw print, although negative results are potentially just as informative as positive ones (if less exciting)."
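The winner's curse described above is easy to see in a toy simulation: if journals mostly print the trials that cross a significance threshold, the published estimates systematically overstate the truth. The sketch below is purely illustrative; the true effect size, trial size, and cutoff are invented numbers, not figures from any of the studies discussed.

```python
# Toy model of publication bias (illustrative numbers only, not from the article).
import math
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.1   # assumed small real benefit, in standard-deviation units
N_PER_TRIAL = 50    # assumed patients per trial
published = []

for _ in range(10_000):
    outcomes = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_TRIAL)]
    mean = statistics.mean(outcomes)
    se = statistics.stdev(outcomes) / math.sqrt(N_PER_TRIAL)
    if mean / se > 1.96:          # only "positive, significant" trials see print
        published.append(mean)

print(f"true effect: {TRUE_EFFECT}")
print(f"average effect among published trials: {statistics.mean(published):.2f}")
# Typically around 0.3: the literature looks roughly three times better than reality.
```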
This discussion has been archived. No new comments can be posted.

Why Most Published Research Findings Are False

Comments Filter:
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Sunday October 19, 2008 @01:25PM (#25432787)
    Comment removed based on user account deletion
    • by symes ( 835608 ) on Sunday October 19, 2008 @02:03PM (#25433133) Journal

      Peer review can both help and hinder - there's the reputation effect of guest authorship where having a well-known, senior, academic's name on the paper helps it through no matter how absurd the findings.

      Then there are reviewers who review papers they do not have the expertise to review. And to be frank I've seen some pretty bloody ludicrous comments from supposedly expert reviewers - the sort of stuff 1st year students wouldn't make.

      But I do think that the majority of researchers are diligent and believe in what they submit. And let's face it - if it is an emerging area and you have a neat result that either refutes someone else's grand theory or is just really novel, you're going to want to see that in print. It is because we seek to replicate research that findings are later falsified. This isn't evidence that the system is broken; it is precisely how it should work. It is the work that can't be falsified that stands the test of time and contributes to our knowledge.

      If there are people who think that falsifying published research is somehow a bad thing - that it shows there's a problem in research standards - then they really, really need to go back to school and read some Karl Popper.

      • by pjt33 ( 739471 )

        If there are people who think that falsifying published research is somehow a bad thing - that it shows there's a problem in research standards - then they really, really need to go back to school and read some Karl Popper.

        I think this could be phrased more carefully to make explicit the difference between theoretical and experimental research. Theoretical research being falsified is the scientific method at work. Experimental research being falsified is less cut and dried. Sometimes it's due to previously unknown effects and the result is an increase in knowledge: sometimes it's due to poor analysis of the results - failure to account for systematic errors, or statistical incompetence, and it would be better that it not be p

      • by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Sunday October 19, 2008 @06:44PM (#25435515)

        Indeed, in my field (a sub-area of computer science) people are usually highly skeptical of any supposedly important new result that was first published in one of the highly prestigious but generalist journals, like Nature or Science. These often end up being, if not outright wrong, at the very least seriously over-extended in their claims or in the importance of their claims, in a way that would never get them past a specialized journal whose editorial board actually consists of experts in the specific area in question.

        This is only exacerbated by the fact that, because generalist publications know they don't have expertise in every specialized area on staff, they often ask the authors to suggest potential reviewers of their own papers. Of course, authors are likely to suggest reviewers who they think will like the paper, not the ones who would give it a grilling.

        I think the interest of this particular study is not so much that a lot of science turns out to be wrong, but that a lot of the most prestigious publication venues turned out to be wrong more often.

      • Peer review can both help and hinder - there's the reputation effect of guest authorship where having a well-known, senior, academic's name on the paper helps it through no matter how absurd the findings.

        Then there are reviewers who review papers they do not have the expertise to review. And to be frank I've seen some pretty bloody ludicrous comments from supposedly expert reviewers - the sort of stuff 1st year students wouldn't make.

        So... you might say peer review is not a perfect system, and if there were a perfect system, we should switch to that.

        Let's see here... Divination! That is what we should do! Anyone have an ouija board?

        (Kidding. I know you were explaining the problem, not saying we should abandon peer review.)

    • Re:Peer review helps (Score:4, Informative)

      by phantomfive ( 622387 ) on Sunday October 19, 2008 @02:29PM (#25433347) Journal
      It isn't so much a problem of peer review. Peer review has limitations, of course: to thoroughly review an article, one would have to repeat the experiment, which most reviewers (for good reasons) do not do. Peer review is good for what it does, giving feedback to the authors of the paper, and as you said, it does have a filtering effect.

      The other issue is that a lot of papers aren't really worth much at all. Nature might get its share of interesting articles, but in smaller journals a lot of research ends up being something like, "I had this idea, and I did a small little experiment to see if it was worth anything. Maybe it is." But of course, with a small little experiment, your chances of being wrong are greater: it's just an entry point for someone else to maybe continue research in an interesting direction. And I have done peer review, FWIW (and if you trust random guys you meet on the internet).
      • Re: (Score:2, Offtopic)

        by ShieldW0lf ( 601553 )

        The problem is money. If we lived in a world where you gained power through the trust and esteem of your fellow man, no one would publish things they weren't confident in. But we don't. We live in a world where you gain power through the accumulation of leverage that allows you to dominate your fellow man despite their wishes and opinions. In this world, who gives a shit if you were wrong or not? Who gives a shit if anyone knows? It doesn't matter, as long as you got paid before they find out. Then y

        • by Surt ( 22457 )

          I don't think there's any evidence that either economics or politics forms a good basis for encouraging right behavior. Both seem to have failed miserably, and what is clearly needed is a third system no one has apparently thought of yet. Or perhaps such a system cannot exist; it is not within the fundamental nature of man or the universe he occupies.

      • Re: (Score:3, Insightful)

        by wisty ( 1335733 )

        "I had this idea, and I did a small little experiment to see if it was worth anything. Maybe it is."

        If you just had an idea and did a small little experiment, you would get knocked back for "methodological issues". You are better off having an idea, then using a big simulation output and data mining (with a boutique distance metric) to try and bluster the readers into thinking you have done something original. Just testing ideas with experimentation is soooo 1950s. Besides, it blows the budget and doesn't attract funding.

    • Re:Peer review helps (Score:4, Informative)

      by Amenacier ( 1386995 ) on Sunday October 19, 2008 @05:43PM (#25435135) Journal
      Peer review doesn't always help. I've studied papers that have the most detailed, thoroughly tested research but end up relegated to some obscure journal because the people peer reviewing the topic don't agree with it. In one case, the pioneers of the field of siRNA lambasted a study which showed that short RNAs can enhance transcription, as well as negatively regulate it. Because this was so far outside of their model for how siRNAs work, they dismissed the work as nonsense, despite the paper showing five replicates of every experiment, and practically putting their entire work, step by step, into the supplementary materials. Papers get into good journals with far less than what I saw for this one, but peer review condemned it to obscurity. I think peer review works, but only if the people reviewing keep an open mind and don't get piqued if the findings disagree with their own views.
  • by 2muchcoffeeman ( 573484 ) on Sunday October 19, 2008 @01:25PM (#25432791) Journal

    How long until some researcher releases a study showing that Dr. Ioannidis' research findings are themselves wrong?

    • by CaptainPatent ( 1087643 ) on Sunday October 19, 2008 @01:36PM (#25432913) Journal
      Actually, someone already has. [slashdot.org]
    • by Roger W Moore ( 538166 ) on Sunday October 19, 2008 @02:30PM (#25433353) Journal

      How long until some researcher releases a study showing that Dr. Ioannidis' research findings are themselves wrong?

      Who needs a study? Simply reading the article shows that he has fallen precisely into the trap that he is complaining about i.e. overstating his results. He forgets one very simple point: not all science is medicine/biology.

      As a particle physicist I would strongly disagree with his conclusions, at least as applied to experimental particle physics. It is certainly true that some papers turn out to be wrong, but this is rare and usually ends up as a 'big thing' in the field. Outside my field, I'd be very surprised if the majority of physics or even chemistry papers turn out to be wrong (but I am certainly not a chemist, so this is just my impression).

      As for medicine, I can certainly see that they have a problem. After all, how many times have we been told "don't eat X/do Y it is bad for you" only later to find out that actually it isn't half as bad as they thought and may even have benefits? Just because a lot of medical research is flawed does not mean that all of science has the problem on the same scale.

      So, Dr. Ioannidis, either show us some data from chemistry, maths and physics, or stop complaining that all of science has a problem on this scale. From where I stand, your evidence points to a problem with bioscience/medical research only.

      • I would argue that this problem is not only pretty much non-existent in chemistry and physics, but that even biology, at least cell and molecular biology, does not have this issue either. Typically, when a biologist publishes a protein structure or sequences an organism's DNA, no one shows up later and says it is wrong. In fact, it's rather big news when someone does.

        For example, there was a bit of a controversy among protein crystallographers recently. A person had published a paper on a protein structure that seemed to contradict all previously proposed functions for the protein. It turned out that they had used the wrong parameters in their phasing program. However, this doesn't happen to most papers, and certainly not a majority of them.

        I would say that this problem is mostly specific to medical research. By its very nature, medical research is a good deal more prone to human fallibility since both subjects and researchers are human beings.

        • by eli pabst ( 948845 ) on Sunday October 19, 2008 @09:30PM (#25436705)
          No, you are exactly right. His paper was really only intended for the fields of population genetics and genetic epidemiology (his fields), where people have been using the standard p < 0.05 statistical cutoff as their metric for whether a given analysis is significant. So if you have 20 research groups analyze the same question (like whether a mutation in gene X is responsible for disease Y), by that methodology roughly 1 of the 20 groups will find a statistically significant result simply due to chance alone. This is old news, and journals stopped accepting papers that *only* had that statistical analysis about 3-5 years ago. Almost without exception, they now require you to show some kind of biological verification (show that mutant protein X actually is defective and has reduced activity) OR do a replication in a completely independent sample, which is unlikely (again 1/20) to be significant by chance. Unfortunately people have misinterpreted his paper and are applying his point to other fields like chemistry, astrophysics, or even areas of biology where it doesn't apply.
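The one-in-twenty point above is the familiar multiple-comparisons problem. A purely illustrative simulation (made-up group counts and sample sizes, not anything from Ioannidis' data) shows how often at least one of 20 groups studying a non-existent association will clear a bare p < 0.05 cutoff.

```python
# Multiple comparisons, illustrative only: 20 groups test an association that
# does not exist; with a bare p < 0.05 cutoff, someone "finds" it most of the time.
import math
import random
import statistics

random.seed(1)
N_GROUPS, N_SAMPLES, SIMS = 20, 100, 2_000
fields_with_a_hit = 0

for _ in range(SIMS):
    hits = 0
    for _ in range(N_GROUPS):
        data = [random.gauss(0.0, 1.0) for _ in range(N_SAMPLES)]  # no real effect
        z = statistics.mean(data) / (statistics.stdev(data) / math.sqrt(N_SAMPLES))
        if abs(z) > 1.96:          # nominal p < 0.05
            hits += 1
    if hits:
        fields_with_a_hit += 1

print(f"fraction of runs where at least one group got a 'significant' result: "
      f"{fields_with_a_hit / SIMS:.2f}")
# About 0.64 (= 1 - 0.95**20): chance alone hands somebody a publishable finding.
```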
      • Re: (Score:3, Insightful)

        by extrasolar ( 28341 )

        After all, how many times have we been told "don't eat X/do Y it is bad for you" only later to find out that actually it isn't half as bad as they thought and may even have benefits?

        But how often do you actually see "don't eat X/do Y it is bad for you" in a legitimate research paper? Usually I see those kinds of statements on the morning news or in newspaper headlines or in populist nutrition books (always a bad place to look for advice). Usually some new research comes out that scientists think is interesting; the mass media picks up on it and wants to dumb it down for their readers/listeners. But the next time you see such a statement in the mass media try to take a look at the ac

      • by Capsaicin ( 412918 ) on Sunday October 19, 2008 @07:07PM (#25435649)

        After all, how many times have we been told "don't eat X/do Y it is bad for you" only later to find out that actually it isn't half as bad as they thought and may even have benefits? Just because a lot of medical research is flawed does not mean that all of science has the problem on the same scale.

        The problem here is that the popular press always reports the very latest 'finding' in what is a complex field. Yet we should know that, not only in medicine but in virtually all experimental sciences, a single paper is not sufficient to establish some new profound truth.

        Dr Ioannidis' largest problem is that he thinks he has identified a problem. There isn't one. This is how science is supposed to work! We publish methodologies so that the work can be replicated by other teams. Some findings survive further scrutiny, some don't. The "hotter" the field, the less you are going to rely on the latest single study, no?

        So he's found that 1/3 of studies were refuted by later work. Great, they were refuted; what's the problem? And how do we move from that to the conclusion that "most" scientific papers (even outside the hotter fields of bio-medical research) are wrong? And what about looking at outcomes? The advances of medicine even in my lifetime are astounding; this is hardly the result of a system that isn't working!

      • by ceoyoyo ( 59147 )

        Published articles are also not necessarily all supposed to be correct in every way. Some are openly speculative, MANY show an interesting result and call for further investigation. Do those count as incorrect too?

      • by Futurepower(R) ( 558542 ) on Sunday October 19, 2008 @09:38PM (#25436753) Homepage
        You said, "So, Dr. Ioannidis either show us some data from chemistry, maths and physics or stop complaining that all of science has a problem on this scale."

        I'm sympathetic to the direction you are going, but I don't agree completely.

        The problem is due to being able to get extra money by exaggerating claims. The problem is in every area of science, in my experience. If there is no chance to get more money by exaggerating claims, then I agree, the problem seems minimal.

        In computing, claims about "Artificial Intelligence" have been extremely exaggerated.

        In physics, there are those who claim they may have found a method of cold nuclear fusion. Search for Sonofusion [google.com], for example: fusion said to be caused by extremely intense ultrasonic sound. Some of those claims are exaggerated, or omit the limitations.
    • by ceoyoyo ( 59147 )

      "... incorrect findings are more likely to end up in print than correct findings."

      "... his finding that, within only a few years, almost a third of the papers had been refuted by other studies."

      Last I checked, almost one third doesn't count as "more likely," much less "most."

  • Of course there is absolutely no chance that this particular piece of research is also wrong...
    • by Kneo24 ( 688412 )
      Yes, but to what extent? Is it the entire notion? Or just some of the numbers in his data? And how far off are those numbers, if that's the case?
  • Misleading (Score:5, Insightful)

    by Idiomatick ( 976696 ) on Sunday October 19, 2008 @01:28PM (#25432819)

    Title is wrong. It says that the FDA is corrupt, and that published papers take around three years to get peer reviewed, during which the bad ones are weeded out. What a blatant attack on science generally. Sure, paper publishing needs to be reviewed, but 'most published research is false' is an outright LIE. 'Most published research' includes the whole basis of our scientific knowledge. If most of our theories in biology were wrong, we realistically wouldn't have been able to move forward into working with genes; we wouldn't even have known what a cell does.

  • by sumdumass ( 711423 ) on Sunday October 19, 2008 @01:31PM (#25432851) Journal

    At the risk of being modded down to oblivion, I am still curious as to how this affects popular theories like global warming. We already have people claiming that the science is wrong, and they are generally mocked and ignored because their works are published in major journals. Well, this story seems to indicate that publishing those claims gives them a larger chance of being incorrect.

    Anyway, it seems that if you don't toe the line on climate change, there is no room for you anywhere. So where does this leave the accuracy of the claims, in light of how common it seems to be for them to be wrong even when published in a respectable scientific journal? I know the IPCC looked at them, but they didn't validate any of the claims; they only looked at whether or not humans were the cause (that was their charter, and they acknowledged this in their reporting).

    • by Samantha Wright ( 1324923 ) on Sunday October 19, 2008 @01:49PM (#25433023) Homepage Journal
      The idea behind this is that pharmaceutical studies are difficult and expensive to perform, so they rarely get challenged for some time, and when they are, the challenges more often appear in more obscure journals: so when people go to cite statistics and findings, they don't notice that what they're quoting has been invalidated. Climate change has been studied over and over again and subjected to extensive analysis by many minds for many years, particularly because so many people have questioned it. To suggest what you are saying now is a little behind the times.
      • by sumdumass ( 711423 ) on Sunday October 19, 2008 @04:05PM (#25434261) Journal

        But the concept isn't unique to pharmaceutical studies. With the general attitude towards dissenters of the Faith that has grown up around global warming, I don't see why it wouldn't be true here either. I mean, the IPCC used faulty temperature data in their evaluations, Al Gore exaggerated quite a bit and used outdated charts because they proved his point better, and Hansen, the guy who pretty much brought global warming into the limelight, admitted to exaggerating claims and justified it by saying it was necessary to make people aware of the problems.

        I mean, it is probably even more prevalent when the data sets used in studies aren't available to people; the temperature data that was proven to be wrong had to be reverse engineered because Hansen refused to disclose it. People wanting to review these studies have been mocked and denied access to the data, or had the data set obfuscated to make it even more difficult to work with. There was even one instance where someone was told that he couldn't have the data because he was going to pick the work apart and the author didn't want to help him do that. Not very scientific, if you ask me. There is definitely room to question what is being said. Most people in disagreement today are in contention over the causes and the proposed solutions, which, to date, don't seem to be helping out in Europe.

      • Re: (Score:3, Insightful)

        by budgenator ( 254554 )

        Nobody is arguing that the climate isn't changing; the argument is about the cause. The AGW premise is plausible and now even looks very likely, yet you have to consider the following: the Earth has warmed and cooled before without human intervention, and establishing cause based on evolving and unproven computer modeling that eliminates other causes, while we are still discovering potential causes, is a bit of a stretch. Why would you think that climatology is easier, cheaper and more certain than pharmacology and bio-m

    • Re: (Score:2, Interesting)

      by wormBait ( 1358529 )
      The article is focused largely on medical studies, which are attempting to move toward curing people. Therefore, studies that cure people are interesting; studies that don't cure people aren't interesting. Global warming is different. Disproving global warming would be VERY interesting, and would get published more readily than something supporting global warming.
    • Re: (Score:3, Insightful)

      by shma ( 863063 )
      I fail to see how you can draw any conclusions about the reliability of atmospheric physics papers from a study of biomedical research papers.
      • If X=Y then there is no reason that X or Y can equal Z either. The process is just as transparent in other areas and just because the limit in this particular article is with pharmaceutical studies, it doesn't mean that the same garbage in garbage out rules don't apply.

        In fact, the IPCC did their reports using flawed temperature data, and people are still pulling up the exaggerated Mann hockey stick graph as their proof even when there is a more accurate one available. An Inconvenient Truth by Al Gore did t

      • by jmorris42 ( 1458 ) * <jmorris@[ ]u.org ['bea' in gap]> on Sunday October 19, 2008 @03:12PM (#25433757)

        > I fail to see how you can draw any conclusions about the reliability of atmospheric physics papers from a study of biomedical research papers.

        Biomedical research is a lot more amenable to verification and falsification, thus an argument can be made that errors are getting corrected. Global Warming is faith based; its predictions aren't made in anything resembling a controlled scientific environment, and the only way to test its predictions is to do nothing for twenty years and see if the disasters predicted come to pass. Now consider that rerunning a medical test and proving the original paper wrong will get a researcher rewarded, while writing anything whatsoever questioning human-caused global warming gets a researcher labeled a whore of the oil companies, and the argument that the science on GW might be at least as flawed as these biomedical papers grows.

        • FAITH based??? (Score:3, Informative)

          by mangu ( 126918 )

          Global Warming is faith based; its predictions aren't made in anything resembling a controlled scientific environment, and the only way to test its predictions is to do nothing for twenty years and see if the disasters predicted come to pass.

          You seem to have NO idea at all about what you are saying. Global warming is based on very exact scientific studies, where "faith" is needed only in that one believes that there exists a reality around us that follows some self-consistent laws.

          The studies that present

        Now consider that rerunning a medical test and proving the original paper wrong will get a researcher rewarded, while writing anything whatsoever questioning human-caused global warming gets a researcher labeled a whore of the oil companies, and the argument that the science on GW might be at least as flawed as these biomedical papers grows.

          I think it's a mistake to imply that biologists are better people or scientists than atmospheric physicists. The main difference is not in scientific integrity, ethics, or bias, but that there is one major split in the atmospheric physics field: whether or not global warming is going on. In biology, there are still divisions where either side starts adopting the "If you disagree with me you're a (insult here)" attitude, but they are many smaller ones. You don't have 30 labs entrenched on either side, you have one

      • by dkf ( 304284 )

        I fail to see how you can draw any conclusions about the reliability of atmospheric physics papers from a study of biomedical research papers.

        It's not much more difficult than extrapolating it to high-energy physics. (It doesn't.)

        In other news, researchers have discovered that in the medical sciences, 1/3 == "most". When asked for a comment, a leading mathematician who refused to be named described this as "a crock of shit", and added that the old saw about it being dangerous to let a medic near a calculator remains true.

    • His study merely observes the obvious.

      Scientific consensus in any field is reached by research, which is then published and subsequently challenged by other scientists.

      At the point of initial research, the idea being tested is merely a hypothesis, otherwise known as "educated conjecture".

      In the case of medicine, there are numerous uncontrollable variables which lead to a high degree of error in small case studies. (the placebo effect, environmental factors, the variable resilience of each patient, etc)

      Glo

      • Re: (Score:3, Insightful)

        by Daimanta ( 1140543 )

        "Global warming does not fall under this. It has been researched, and retested, and re-challenged numerous times.'

        On this point you are walking on loose sand. What do you mean, retested? You cannot test global warming. You can only make observations about it and then form your own opinion based on those facts. You cannot create two identical planets, mess with their CO2 levels and then compare the results. All your data is based on your measurements and the conclusions you draw from them. That is where the controve

        • Re: (Score:2, Insightful)

          by bjourne ( 1034822 )
          That's about as smart as saying that you can't test the theory of evolution because we didn't have two identical earths from 2 million years ago, so we can't verify that species evolve. Most of science works using models, and any basic chemistry lab can verify that increased levels of CO2 absorb more radiation at certain wavelengths, which leads to higher temperatures. Any more professional biology lab can verify that organisms evolve by growing bacteria for a few days.
          The issues on global warming that are in dispute are numerous... we keep hearing predictions from models that have not been proven to have any predictive power, and they keep getting more alarmist, along with increasingly ridiculous claims that every example of bad weather is a function of global warming. The issue is the "hockey stick" part of the forward feedback loop... that's the claim that because events will create forward feedback, we will hit a point in a few years where it isn't preventable, becaus

            • Re: (Score:3, Interesting)

              They do have predictive powers. They predicted a decade ago the trend in markedly increased storm severity and frequency.

              They predicted the droughts and climate changes which are afflicting numerous regions worldwide, including France, Spain, and my area of the US.

              The issue with global warming is we know that the current models show this forward feedback, but we KNOW that the models are incomplete

              This statement is intellectually dishonest

              we also know the model of the universe (including most of our scientific theory) is incomplete. This doesn't stop us from applying the current model to everything from the production of your computer to

        • by Waffle Iron ( 339739 ) on Sunday October 19, 2008 @05:19PM (#25434943)

          In order to test the Global Warming theory you need 2 carbon copies of 1900 earth

          That wouldn't work, as the copies would be comprised mostly of carbon powder, so the geophysics would be completely different.

        • Yes, you can.

          Numerous independent climatology models (we're talking virtually every accredited university and think tank on earth with enough resources) based on hundreds of thousands of years of data from geological, oceanographic, and ice core samples, run through supercomputers millions of times.

          The only ones that "disagree" are tied to oil companies.

    • Re: (Score:3, Insightful)

      by kisak ( 524062 )

      At the risk of being modded down to oblivion, I am still curious as to how this affects popular theories like global warming.

      Global warming is not popular; it is downright scary. If a group of scientists disproves that our use of hydrocarbons has a significant effect on global warming, those scientists will be extremely popular and will probably share a Nobel Prize.

    • by guanxi ( 216397 )

      These are just baseless rumors, repeated by people who are politically opposed to action on climate change. Here's an old political trick: When the facts aren't going your way, change the subject:

      1) Politicize the issue: Change it from an issue of fact (is there anthropogenically forced climate change?) to an issue of politics (don't let the liberals get their way on climate change).

      2) Attack the credibility of the messenger: The {liberal media/elite/scientific community/whistle blower/evil corporation} h

  • by istartedi ( 132515 ) on Sunday October 19, 2008 @01:32PM (#25432859) Journal

    I would think that "Publish or Perish" must contribute to a lot of crappy papers getting published. Shovel it out the door, somebody else says it's wrong, write another grant for a study to verify that, shovel that one out the door, rinse, lather, repeat...

    • by Seakip18 ( 1106315 ) on Sunday October 19, 2008 @02:05PM (#25433149) Journal

      How true that is.

      My significant other is quitting grad school as soon as she gets her Master's in Neuroscience (she's in the PhD/Master's program). She can't stand the constant pressure to publish, nor the need to constantly justify grant writing. She's not the best researcher, but the pressure is enough to drive her to not caring anymore. She'll get her consolation prize and get on with her life.

      Maybe she's just not cut out for academia, though it's losing out on the great potential she has.

    • by timholman ( 71886 ) on Sunday October 19, 2008 @03:33PM (#25433969)

      I would think that "Publish or Perish" must contribute to a lot of crappy papers getting published. Shovel it out the door, somebody else says it's wrong, write another grant for a study to verify that, shovel that one out the door, rinse, lather, repeat...

      It does indeed. Thirty years ago an assistant professor could get tenure by publishing one good paper per year in an archival journal. Nowadays an assistant professor is expected to publish four or more journal papers per year. This leads to the well-known academic concept of the "MPU", i.e. the minimum publishable unit, or "just how many papers can I squeeze out of this one good idea?". This also leads to the backwards situation where a senior professor sitting on a Promotion & Tenure Committee may have fewer published papers (and fewer awarded research dollars) over his entire career than the assistant professor whose tenure he is voting on. Believe me when I say that the hypocrisy of this double standard is not lost on the junior faculty.

      There's no doubt in my mind that the signal-to-noise ratio in archival journal papers has plummeted in the past two decades. 90% of all journal papers are superfluous, repetitive, or lacking in any significant advancement of the art, and I'll plainly admit that includes my own papers. Everyone in academia realizes what's going on, and knows it isn't good for the students or the faculty, but unfortunately that's the way the beans get counted in the academic world.

    • by kisak ( 524062 )
      Crappy papers don't get many citations or much attention in general. Also, a scientific article does not need to be crap even though it is later shown not to be correct. Even the greatest scientists have proposed wrong theories and connections.
  • by ArbitraryDescriptor ( 1257752 ) on Sunday October 19, 2008 @01:40PM (#25432945)
    Those responsible for refuting the research of the people who have just been refuted, have been refuted.
  • by Badge 17 ( 613974 ) on Sunday October 19, 2008 @01:41PM (#25432953)

    Basic idea: high-profile journals want papers that are new and exciting. This means that scientists have an incentive to 1) rush their work, 2) choose fields that are popular, and 3) claim that their papers solve more than they actually do. This leads to sloppy, dishonest papers.

    I'm not going to judge this paper - I haven't read it thoroughly - but to pair a title like "Why most published research findings are false" with a pretty well-known problem seems itself like an example of problem 3!

    • by syousef ( 465911 )

      I'm not going to judge this paper - I haven't read it thoroughly

      Chances are it's false.

      If it's right the chance of it being wrong and claiming to solve more than it can is high, since surely the finding applies to itself.

      If it's wrong, it's wrong.

      Therefore chances are it's wrong.

      All this circular logic is making me thirsty.

  • Context (Score:2, Interesting)

    by edcheevy ( 1160545 )
    News flash! What works in one situation (or for one person) might not work so well in another. Too little research takes the context into account, particularly regarding any research that is human-related, and so it becomes easy to "disprove" prior findings.
  • Who's to say that the papers refuting this research are correct? It seems to be taken for granted that the dissenting papers are correct, and thus the original papers are wrong. It seems likely that the refuting papers may be wrong, or that there are complex situations in which both papers are correct (to differing degrees).
  • by MosesJones ( 55544 ) on Sunday October 19, 2008 @01:53PM (#25433047) Homepage

    My prediction for this thread:

    Several people will post about how this validates the TINY, TINY, TINY number of scientists, the LARGE number of completely uneducated "opinion" formers, and the MASSIVE number of people who think that "belief" is the same as fact.

    What they will miss:

    This article talks about how things are put out there then invalidated by SUBSEQUENT PUBLISHED RESEARCH, not about how there is a great conspiracy around something being "right" and everyone shouting down those who dare to disagree. Global Warming is something that has consistently been found to be happening and while certain bits have been revised due to subsequent research, most of that research has found that previously incorrect models were in fact too optimistic in their view.

    This article doesn't strengthen your misguided, and uneducated, belief that global warming isn't real. When even the Republican candidate says it's real, then it's time to let go and become part of the solution.

    • Yes, I agree - this paper is not about "Most Published Papers" in Science. It is about published papers in the area of therapy effectiveness, especially those where we do not have a good model. Thus of course about half should be wrong, I would guess, as established by later studies. This is statistics in action. When you are looking for high correlations and selecting for the positive, you will get false ones. As long as this paper's authors could find LATER PUBLISHED RESEARCH showing this stuff was wro

  • Obvious (Score:3, Insightful)

    by serviscope_minor ( 664417 ) on Sunday October 19, 2008 @02:00PM (#25433117) Journal

    Many are wrong. This is pretty obvious, and it's also why science works. Eventually, the wrong ones will be replaced with something less wrong, and so on.

  • This is in large part because the empirical method is flawed for humans. It requires complete rationality and unaffected views, two things you just aren't going to find in human beings. Even our currently taught scientific method promotes going into an experiment with expectations. Because these experiments cost money, time, etc. (many times in the form of grants that will be pulled if you don't show significant progress), results are often overblown or outright false. It's not that the lofty ideals
  • by Ranger ( 1783 ) on Sunday October 19, 2008 @02:02PM (#25433131) Homepage
    that proves most published research findings are true.
  • by Anonymous Coward

    Why is it that a large portion of scientific research today is garbage? Well, one very powerful reason: money. I saw this firsthand working at a major university medical center on large-scale behavior research projects. The outcomes of our studies directly affected existing and experimental drugs, and the drug company representatives were right there alongside the researchers at all levels of the process. Professors received "gifts" and other unofficial incentives from them regularly. I saw at least one study

  • I agree, but not for the same reasons (although the author's reasoning sounds plausible for big-name journals).

    I've found that peer reviewers very seldom give good critiques of the methodology. Rather, most of the comments appear to be on the scope of the paper - comments of the variety "You're doing X. Smith et al. has done Y (which is tangentially and usually very weakly related to X) yet you don't mention this." I suspect that this is because most reviewers don't know enough about the research methods be

  • by PCM2 ( 4486 ) on Sunday October 19, 2008 @02:10PM (#25433211) Homepage

    Scientific research is just that -- research. If it were as easy as doing a couple of experiments, revealing the "truth" and moving on to the next thing, we'd all be living around Alpha Centauri by now. But science is hard and therefore a lot of conclusions are naturally going to be wrong. If that weren't the case then we wouldn't even need any scientific journals -- all we'd need would be newspapers.

    Remember the whole "theory of evolution" issue that the creationists keep harping on? "They call it a theory so it must not really be true?" We all know that evolution is just about as "true" as any science gets -- and yet surely there are some portions of the current body of knowledge about evolution that will one day be falsified by later research. That's not a bad thing.

    Notable research that has since been thought to be flawed or insufficient: Newtonian physics. Niels Bohr's model of the atom. Gregor Mendel's research into genetics. Einstein's theory of general relativity. Koch's postulates for determining disease causation. Quantum mechanics. And so on.

    • We all know that evolution is just about as "true" as any science gets -- and yet surely there are some portions of the current body of knowledge about evolution that will one day be falsified by later research. That's not a bad thing.

      It's not bad from a scientific perspective. But your life doesn't depend on one particular facet of "Evolution" (or Relativity, or Quantum Mechanics) being accurate.

      When doctors are prescribing antidepressants to you, based on the latest medical research, the resilience of th

    • by EmbeddedJanitor ( 597831 ) on Sunday October 19, 2008 @05:14PM (#25434899)
      There is a spectrum. Some sciences are very robust and some are highly speculative. If you include mathematics as a science, then it is the most robust and requires a rigorous proof for anything to be accepted as a "truth". At the other end of the spectrum are the "wishy-washy" sciences which have no proofs, just uncontrolled observations: climatology, paleontology, social sciences, etc.

      This latter group is far more prone to differing interpretations by different people.

      • by dkf ( 304284 )

        At the other end of the spectrum are the "wishy-washy" sciences which have no proofs, just uncontrolled observations: climatology, paleontology, social sciences, etc.

        Don't forget to add astronomy to that list. After all, nobody's ever managed to conduct an experiment on a galaxy under laboratory conditions...

  • Please replace "false" with "incorrect". The word "false" implies deliberate fraud, and while that undoubtedly happens the cited articles do not suggest that fraudulent papers are in the majority.

  • New information supplants old ideas, but how does this process stay sensible? I often run into manuals on the web that are a few versions out of date, and they never get removed. Until somebody can come up with a way for everyone to share information in a common database that is version controlled, the problem will persist. Wikis and Wikipedia are good ideas, but the concept needs some kind of extension so that information and its corrections are connected in some way.

    The fact that government / companie

  • by Sapphon ( 214287 ) on Sunday October 19, 2008 @02:19PM (#25433273) Journal

    Even after reading the article, I'm still not sure if the authors are saying:

    A) Given that research has been published, it is more likely to be false than not; or
    B) Given that research is false, it is more likely to be published than is the case for true research.

    I mean, it says:

    Dr Ioannidis made a splash three years ago by arguing, quite convincingly, that most published scientific research is wrong.

    So, (Wrong Articles)/(Total Articles) >= 0.5, right?
    But the only figures I can find in the same article are:

    Dr Ioannidis based his earlier argument ... on a study of 49 papers ... (H)e found that, within only a few years, almost a third of the papers had been refuted by other studies.

    So.. "most" is now "less than one third"?

    I'm somewhat alarmed that The Economist lets people who don't seem to grasp basic statistics write their articles.
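For what it's worth, reading (A) is a claim about P(false | published), and Ioannidis' argument turns on the prior odds that a tested hypothesis is true in the first place. A rough Bayes sketch with invented numbers (my own illustration, not figures from his paper) shows how a low prior can make most 'significant' findings false even when each individual test is run properly.

```python
# Back-of-the-envelope Bayes calculation; the priors, power and alpha below are
# invented for illustration, not taken from Ioannidis' paper.
def prob_true_given_significant(prior: float, power: float = 0.8,
                                alpha: float = 0.05) -> float:
    """P(hypothesis is true | the test came out significant), by Bayes' rule."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# The rarer true hypotheses are in a field, the less a "significant" result means.
for prior in (0.5, 0.1, 0.01):
    print(f"prior = {prior:<4} -> P(true | significant) = "
          f"{prob_true_given_significant(prior):.2f}")
# prior = 0.5 -> 0.94, prior = 0.1 -> 0.64, prior = 0.01 -> 0.14:
# with long-shot hypotheses, most "positive" findings really are false.
```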

  • I think most of the mess is caused by statistics forming the conclusion of the research.

    "If your experiment needs statistics, you ought to have done a better experiment."

    -- Ernest Rutherford (1871-1937)
    [In N. T. J. Bailey, The Mathematical Approach to Biology and Medicine]

  • "most"? (Score:2, Insightful)

    by jwilty ( 1048206 )
    When did "almost a third..." become "most?"
  • by l2718 ( 514756 ) on Sunday October 19, 2008 @02:43PM (#25433477)
    So, a paper claiming that small sample sizes lead to wrong conclusions is based on analysing a sample of only 49 other papers? The mind boggles at the self-applicability...
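As a rough check on how much precision a sample of 49 papers actually buys, here is a back-of-the-envelope confidence interval; the refuted count of 16 is assumed from "almost a third" and is not a figure from the study.

```python
# Approximate 95% confidence interval for "about a third refuted" out of 49 papers
# (my own rough check; the count of 16 is assumed, not taken from the paper).
import math

n, refuted = 49, 16
p = refuted / n
half_width = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"refuted fraction: {p:.2f} +/- {half_width:.2f}")
# Roughly 0.33 +/- 0.13: with n = 49 the estimate itself is fairly imprecise.
```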
  • what research findings were published to promote medicines in order to further the corporate profits (greed)...
  • Ioannidis argues that scientific research is so difficult -- the sample sizes must be big

    Dr John Ioannidis bases his argument about incorrect research partly on a study of 49 papers

    Oh, the irony!

  • Wow, headline grabbing, potential break-through scientific theories get a lot of scrutiny and citations from fellow scientists, and many of the theories fail the test of peer review. Daring theories are important for the further advances of science, but by definition most of them will fail.

  • Fixable? (Score:3, Insightful)

    by philspear ( 1142299 ) on Sunday October 19, 2008 @04:02PM (#25434229)

    It's important to identify a problem no matter what, but are any of these biases fixable? I would argue that at least one of them, specifically the bias toward positive results, is not fixable and is inherent to how science works.

    To quote one of the articles: "...negative results are potentially just as informative as positive results, if not as exciting." But negative results often require much, much more verification than positive results, if they can be verified at all, and are limited in how much they can tell you. In the antidepressant studies mentioned, a negative result (that the antidepressants did nothing) only tells you that, in the patients tested, the doses tested did not produce a noticeable positive result. Publishing a negative result like that would support only very limited conclusions. The next year, they could find that doubling the dose was actually effective, making the write-up of the earlier negative result pointless and even more trivial. A waste of time, plus then you've published saying your own product doesn't work.

    Negative results get even more pointless in other fields. If someone does a mutagenesis screen for a particular defect in C. elegans, and doesn't find any mutants affecting that, it could be noteworthy, indicating that any genes affecting that process were so vital that when you took one away you didn't get a worm at all, or it could just be luck that genes affecting the process were never mutated, or the researcher didn't do it correctly, or all genes involved were redundant, or some combination. What conclusions could you draw from that? It would be a negative result that would be nigh impossible to tell anything from. Without any positive hits, you could go to the trouble of making sure you did it correctly, but you're not going to make sure every gene got hit at least once, that would be impossible.

    In still other cases, a negative result is often retrospectively found to be the fault of the researcher. Who wants to publish something that is basically telling your peers how dumb you are?

    There's also the fact that it requires a lot of extra work to make sure it's a negative rather than a null result. Usually when I hit a negative result, my inclination is to see if I did it wrong by repeating the experiment if possible; if it comes up negative again I usually take a different approach, and if that also gets a negative result I re-evaluate. I don't ever do all the other supporting experiments that would be needed to convince a reviewer it's a real negative result. If I use an RNAi construct to knock down a gene, and it doesn't do what I'm expecting or anything else interesting, I don't verify that the gene is actually knocked down, since that's more effort that would probably be a waste. I'm definitely taking a risk that it's a real result, but it's hard to prove a negative and there's also less motivation to do so.

    The limited ability to make positive conclusions about negative results also limits where they could be published. There is a journal for negative results, but a publication there is not something I personally would put on a CV.

    So while it is interesting that a bias against negative results may be throwing us off, it's not very useful knowledge, because I don't see us being able to do anything about it.
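The point about negative results needing far more verification is, at bottom, a statistical-power argument. A purely illustrative simulation (invented effect size and sample size) shows how often a small study comes back "negative" even when a real effect exists, which is why a single null finding says so little on its own.

```python
# Illustrative power calculation with made-up numbers: a modest real effect,
# a small study, and the "negative" outcome that usually results.
import math
import random
import statistics

random.seed(1)
TRUE_EFFECT, N, SIMS = 0.3, 20, 5_000   # assumed effect size and sample size
misses = 0

for _ in range(SIMS):
    data = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    z = statistics.mean(data) / (statistics.stdev(data) / math.sqrt(N))
    if z < 1.96:                  # the experiment comes back "negative"
        misses += 1

print(f"real effect missed in {misses / SIMS:.0%} of the small studies")
# Around 70-75%: the negative result says more about the study's power than the drug.
```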

  • So the real news here is that Wikipedia is about as accurate as science journals.
  • If medical journals are publishing bad research, that's a problem with medicine. Real science doesn't take three years to peer review, and real science is usually (iteratively) correct. Medical research these days is more similar to social science than to physical science. The reliance on large samples of different people to smear out inconsistencies in the data is a mark of poor understanding of the system being studied. It's fine to use this method, but it needs to be differentiated from science in general

  • Alan Goldhammer, deputy vice president for regulatory affairs at the Pharmaceutical Research and Manufacturers of America, said the new study neglected to mention that industry and government had already taken steps to make clinical trial information more transparent. "This is all based on data from before 2004, and since then we've put to rest the myth that companies have anything to hide," he said.

    All I can say to that is :

    ROFL

    LMAO

    Hilarious!

  • While Ioannidis, the author of the original work that was discussed here [slashdot.org], may be correct that a large fraction of a very specific type of clinical research finding is incorrect, there is no reason to believe (from his published work) that "most published research findings are false". Most of the ones he looked at were not reproduced, but they had well understood limitations. My papers, I can assure you, are only incorrect about 10% of the time.

  • by xPsi ( 851544 ) * on Sunday October 19, 2008 @06:30PM (#25435431)
    Science in general isn't about "publishing what is right" but rather about creating a network of accountability in the form of methods, ideas, data, procedures, etc., so that others can try to reproduce and critique the results. Even if the published results are shown to be incorrect by other studies, this does not mean the system is broken. The scientific process is an iterative, self-correcting one. However, if after many years and many studies a particular field fails to converge on accepted baseline conclusions, there is a good chance something is wrong (you may even be doing pseudoscience).
  • by drfireman ( 101623 ) <dan&kimberg,com> on Sunday October 19, 2008 @07:07PM (#25435647) Homepage

    Ioannidis wrote a previous article, titled "Why Most Published Research Findings Are False." A very provocative title, one that practically begged the scientific community to read it just to accumulate a laundry list of holes in his argument. A good marketing maneuver. It may or may not be true that most published research findings are false, but the article certainly didn't demonstrate it. Within the context of the discourse he initiated, that would have to be viewed as a kind of willful stupidity, or perhaps marketing brilliance. After all, if the same journal received ten equally well-argued articles with titles like "Why Most Research Is Pretty Good," it would still of course prefer to publish his. This truism segues nicely into the new article (on which he is not first author), which is titled "Why Current Publication Practices May Distort Science." Use of the word "may" is quite helpful here. Does the article live up to its title? It may not be convincing, but it would be even less so without the word "may."

  • Dental Studies (Score:4, Informative)

    by Sharkeys-Day ( 25335 ) on Sunday October 19, 2008 @11:15PM (#25437331) Homepage

    I read a bunch of dental papers recently, and discovered something rather disturbing. A good 90% or more of studies of dental procedures do NOT use any control group. They all say, "we did X and got the expected result." There is no checking whether the procedure is better than other procedures, or better than doing nothing at all.

    Something to think about next time someone you know is told they need wisdom teeth extracted or some orthodontic appliance.

  • by jandersen ( 462034 ) on Monday October 20, 2008 @05:05AM (#25438803)

    I can't see what is so surprising here. Basically when you do research, you are groping in darkness - after all, it wouldn't really be worth doing if the results were already known, would it? Approaching a new problem is a bit like looking at the notorious elephant through a keyhole; different people will have different guesses as to what it is and most will be wrong, until at some point enough observations are made and you can construct a more complete picture.

    When you publish scientific articles you don't claim that "THIS IS THE TRUTH" - you are merely putting forward your opinion and then somebody else comes along and says "No, because ...". And even the articles with the "wrong" results are valuable, because they tell us that this particular interpretation is not the right one. It can take a lot of false turns before you find the right way through a maze, and in fact it tells us something about the generally high quality of research that we are not seeing about 90% wrong results.