Science

Most Scientists 'Can't Replicate Studies By Their Peers' (bbc.com) 331

Science is facing a "reproducibility crisis" where more than two-thirds of researchers have tried and failed to reproduce another scientist's experiments, research suggests. From a report: This is frustrating clinicians and drug developers who want solid foundations of pre-clinical research to build upon. From his lab at the University of Virginia's Centre for Open Science, immunologist Dr Tim Errington runs The Reproducibility Project, which attempted to repeat the findings reported in five landmark cancer studies. "The idea here is to take a bunch of experiments and to try and do the exact same thing to see if we can get the same results." You could be forgiven for thinking that should be easy. Experiments are supposed to be replicable. The authors should have done it themselves before publication, and all you have to do is read the methods section in the paper and follow the instructions. Sadly nothing, it seems, could be further from the truth.


  • by Dave Marney ( 2977859 ) on Thursday February 23, 2017 @10:27AM (#53917557)
    If you can't reproduce it, it's either fake or you were just being sloppy. Either way, it's no wonder ordinary civilians have doubts.
    • by Anonymous Coward on Thursday February 23, 2017 @10:32AM (#53917587)

      If you can't reproduce it, it's either fake or you were just being sloppy. Either way, it's no wonder ordinary civilians have doubts.

      Or it took years to perfect the experimental technique. It took me 2.5 years to get my injury model and staining protocol optimized for my PhD research. A fair amount of success comes down to technique, not the written protocol.

      • by borcharc ( 56372 ) * on Thursday February 23, 2017 @10:40AM (#53917633)

        Then why not describe the novel techniques you developed to complete the research in the paper? Any process that is claimed to require special abilities is actually one that needs training.

        • by Dissenter ( 16782 ) on Thursday February 23, 2017 @11:57AM (#53918223)

          I suppose that's a key element of the issue this article is discussing. If only the standard methods are in the publication and some novel augmentation of a process is necessary to produce those results, there is missing data and the work cannot be reproduced. Too many people are anxious to publish simply because it is part of their job to do so, but if some novel component is being pursued through a patent or other non-disclosed intellectual property, the publication should probably be either postponed or not submitted. It's an odd catch-22 for folks in this area of research. I tend to agree, however, that publishing something incomplete simply extends ignorance rather than contributing to the education of your peers.

          • If you haven't described a technique so that "someone skilled in the art" can understand it, your patent protection may be voided.

      • by Baron_Yam ( 643147 ) on Thursday February 23, 2017 @10:41AM (#53917651)

        I have to disagree. To me, "a fair amount of success comes down to technique, not the written protocol" means you're not documenting your protocol adequately.

        It would certainly be fair to say that some manual actions could take a lot of practice before the experimenter would likely be skilled enough to perform them, but there shouldn't be anything missing from the protocol documentation that someone attempting to reproduce the results would have to learn from scratch.

        Excepting well-established standard practices of the field, of course. You don't have to teach from kindergarten up to post-grad.

        I'm no bio researcher, but I am an IT guy and we could fill a library with books on substandard documentation making it difficult for others to follow in our footsteps.

        • by Dthief ( 1700318 ) on Thursday February 23, 2017 @11:18AM (#53917923)
          Sometimes that's done purposefully, so that you can get to the next paper before the other person.

          Not a good system, but I know plenty of researchers who put in enough to explain what they did, yet not enough that you could reproduce it without a bit of effort/research of your own.

          Definitely not ideal, but people care about getting the next paper first so they can advance their career. Publish or perish.
        • "If the code was hard to write it should be hard to understand!"
        • "You didn't use the right technique" is the first excuse used by researchers when their results don't hold up.
          In bioscience, this reproducibility problem is, at heart, a problem of having an experimental system that is under control, well defined, and "stable".
          There are plenty of very precise measurements made that are not accurate because there is something about the experiment that is not under control.
          In biology, even if you do your best to account for statistical variation, it can often be the case tha

        • I agree, documentation of protocols needs to be improved; however, it's hard to document everything you did for a paper when the journal doesn't give you very many words at all to actually explain what you did, and many don't support video sections for online papers.
        • by elgatozorbas ( 783538 ) on Thursday February 23, 2017 @02:23PM (#53919329)

          we could fill a library with books on substandard documentation

          There is some irony in there somewhere...

      • by sjames ( 1099 ) on Thursday February 23, 2017 @11:01AM (#53917803) Homepage Journal

        If it's not a standard protocol, why isn't it documented?

    • by Thud457 ( 234763 ) on Thursday February 23, 2017 @10:48AM (#53917707) Homepage Journal
      This pretty much sums up my experiences reproducing experiments in the lab: ob. Electron Band Structure In Germanium, My Ass [wisc.edu]
      • What a fucking GEM!!!! You had me at:

        Abstract: The exponential dependence of resistivity on temperature in germanium is found to be a great big lie. My careful theoretical modeling and painstaking experimentation reveal 1) that my equipment is crap, as are all the available texts on the subject and 2) that this whole exercise was a complete waste of my time.

        And kept delivering right through to the end! Thank you sir!

        Also:

        The diagram on the right side, labeled "Fig. 1: Check this shit out."

        Awesome!!

      • Your results on the graph are very, very funny :-D
    • by ranton ( 36917 ) on Thursday February 23, 2017 @10:55AM (#53917771)

      If you can't reproduce it, it's either fake or you were just being sloppy. Either way, it's no wonder ordinary civilians have doubts.

      Or perhaps this is the reason why you need to build a consensus among many scientists with similar results before you put much validity in any cutting edge research. All research is going to include assumptions, specialized techniques, biases, random variability of data, and many other factors which can reduce the validity of its conclusions.

      Your comment reeks of a double standard where if scientists cannot come to a consensus they are being fake or sloppy, but if they point out an overwhelming consensus they are not being scientific because a consensus shouldn't matter.

      • How many studies? If a published peer-reviewed study states 95% confidence in its results but has, on average, a 60% chance of being wrong, how many studies must be conducted before we have significant evidence? If we have no tools with which to measure the confidence of an individual study, how do you suggest we come up with the confidence of an aggregate of studies?

        A Bogosort will eventually get to the right answer, because it has a way of measuring the correctness of an answer. But without any method of
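
        To make the arithmetic concrete, here is a minimal Python sketch, taking the hypothetical 60% per-study error rate above at face value and assuming (unrealistically) that studies fail independently:

          p_wrong = 0.60  # hypothetical chance any single study is wrong

          for k in range(1, 8):
              # chance that k independent, concordant studies are ALL wrong
              print(f"{k} studies agree -> {p_wrong ** k:.3f} chance they are all wrong")

        Real studies share methods, reagents, and biases, so they are far from independent; that shared dependence is exactly why the aggregate confidence is so hard to pin down.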

      • by Dread_ed ( 260158 ) on Thursday February 23, 2017 @01:06PM (#53918767) Homepage

        Woah, I am confused by what you just wrote.

        You seem to be saying that if a bunch of scientists agree that *this* (whatever *this* is) is the way it is, but none of the experiments that prove that *this* is the way it is are reproducible, then we should just go with the consensus?

        Soooo, Galileo had some experiments he could back up, but the other scientists at the time had a consensus view of the cosmos. Their results were not reproducible; his were. By your logic, we should have sided with geocentrism based on consensus.

        Essentially, your introduction of the concept of consensus, without any comment on how reproducible the underlying results are, leaves me wondering just what you think the scientific method is predicated on.

        I don't care how "cutting edge" the research is. If it can be successfully reproduced and is based on sound principles, I would consider it first, before any "consensus" based on what a bunch of people think but can't back up with reproducible results.

      • by CharlieG ( 34950 )

        The problem is, there is often consensus without duplication, or actual peer review. "If X says it is so, then I agree." The RIGHT answer is "I don't know, but X says it is so."
        Consensus without duplication of the experiment, or at LEAST running your OWN models on the other person's RAW data (if the data is too hard to duplicate), is herd-following.

      • Science is not about consensus building. What you're describing is a political process not a scientific process.
    • by Dread_ed ( 260158 ) on Thursday February 23, 2017 @12:49PM (#53918625) Homepage

      Exactly right. If it is not reproducible, it is not "Science," period.

      If, as others have written, your experiment is so finely predicated on the experimental set-up (using certain equipment, preparing samples or data in a certain way, etc.), then you need to document that so specifically that anyone can repeat it. Why, you ask, with a dumbfounded and incredulous look on your face? Because you could be introducing very specific bias in the way you set up your experiment. If it only works when you do it *just like this,* maybe the reason is that the experimental result is a direct consequence of your set-up and not an actual measurement of naturally occurring phenomena. I like to call it "measuring your equipment."

      If you cannot, through your extensive documentation, re-create the experiment in another lab with completely different humans, what you have published is essentially science fiction.
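
      A toy illustration of "measuring your equipment" (a sketch; the bias value is made up): a systematically biased setup produces beautifully repeatable numbers that describe the rig, not nature.

        import random

        random.seed(7)
        true_value = 0.0   # the natural quantity being measured
        setup_bias = 0.5   # hypothetical systematic offset from the apparatus

        # five "replications" in the same lab, on the same rig
        runs = [true_value + setup_bias + random.gauss(0.0, 0.01) for _ in range(5)]
        print("same-lab repeats:", [f"{r:.3f}" for r in runs])  # tightly clustered
        print("every run is off by the setup bias of", setup_bias)

      Only a second lab with different equipment, i.e. a different bias, stands a chance of exposing the offset.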

      • Kind of like the science fair project I judged one time. It concluded that Elmer's glue was the same strength as Gorilla Glue (and a few other glues). Looking at the experiments, I informed the middle-school student that their setup had sadly determined the tensile strength of popsicle sticks, not the shear strength of glue. You also have to make sure your experiment's end point isn't influenced by limited equipment or the testing setup.

    • If *someone else* can't replicate it, then they might be doing it differently than you did, even if they think it's the same way, because journals don't give you enough space to actually explain what you did in much detail.
  • by Anonymous Coward

    As per the subject, this comes from the collision of two things that are completely counter to the process of science.

    1) Patent theory. Since many more nations have access to patents than actually respect patents, it is self-destructive to put enough detail in a patent to actually build what the patent is for. For research papers, this has the benefit of being informed of any attempts to replicate the study, because the other labs will call in and ask questions to find out what was left out. This lets th

  • From the article, it seems like people are trying to write things in a way to make them prettier... and less accurate. Quote: "The trouble is that gives you a rose-tinted view of the evidence because the results that get published tend to be the most interesting, the most exciting, novel, eye-catching, unexpected results. "

    This is slightly on topic... take the wording from wikipedia that seems to be designed to appeal to the masses and probably has misinformation (looks like big pharmacy got their hands in

  • by FellowConspirator ( 882908 ) on Thursday February 23, 2017 @10:37AM (#53917623)

    In the experimental protocols listed in a paper, it is not unusual to have a methods section that is more or less an executive summary rather than a very detailed account of the underlying protocol. This is for two reasons: first, too great a level of detail leads to a methods section as big as the publication the paper appears in; and second, many protocols more or less boil down to using a particular kit or out-sourced lab service. Most journals require data supplements where an author must share their datasets in electronic form as an online addendum to the publication. I would support a similar requirement for a long-form protocol for reproduction of the study.

    That said, some protocols necessarily take a lot of money, special equipment, a carefully selected population of volunteers, and time. Reproducing some studies can be outright impractical.

    In computational biology and other computational extensions of the physical sciences, reproducibility basically comes in the form of requirements to provide the software and raw data for a study. It's easy for the individual who compiles this information to verify that they get the same result as the one they report in the article. The concern there boils down to the provenance of the source data, which may be from registries, public data sets, or some combination of public and private data.
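
    One lightweight way to nail down that provenance is to publish a manifest of content hashes alongside the code. A minimal Python sketch (the file names are hypothetical, and it assumes the analysis lives in a git checkout):

      import hashlib, json, subprocess, sys

      def sha256(path):
          # Hash the file in chunks so large datasets don't exhaust memory.
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      inputs = ["raw_counts.csv", "sample_metadata.csv"]  # hypothetical input files
      record = {
          "python": sys.version,
          "git_commit": subprocess.run(["git", "rev-parse", "HEAD"],
                                       capture_output=True, text=True).stdout.strip(),
          "input_hashes": {p: sha256(p) for p in inputs},
      }
      with open("provenance.json", "w") as f:
          json.dump(record, f, indent=2)

    Anyone re-running the study can then confirm they hold byte-identical inputs before arguing about the results.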

    • by RyanFenton ( 230700 ) on Thursday February 23, 2017 @11:05AM (#53917831)

      Why are we still using printed journals?

      Why is the amount of space a report takes up still an issue?

      Details are important. If you want a short version, then make a summary, but don't cut out the detail available to do that.

      In terms of ASCII/Unicode text, we're not going to run out of bytes to explain important scientific details.

      Heck - make videos of the processes, mention part numbers, and even show mistakes that you encountered along the way in your notes! Video hosting is free, and shouldn't be going away anytime soon. Making a process replication video should be a normal thing.

      If you're spending so much time anyway, so much of your life in these studies, what's the value in holding back important information?

      Ryan Fenton

      • In practice, we are not using printed journals. Almost every journal has an on-line supplement section where authors can include all of the information you just mentioned and much more.
    • by ooloorie ( 4394035 ) on Thursday February 23, 2017 @11:13AM (#53917887)

      In computational biology and other computational extensions of the physical sciences, reproducibility basically comes in the form of requirements to provide the software and raw data for a study. It's easy for the individual who compiles this information to verify that they get the same result as the one they report in the article. The concern there boils down to the provenance of the source data, which may be from registries, public data sets, or some combination of public and private data.

      The purpose of reproduction is to guard against statistical accidents, bad assumptions, and fraud, and running the same software over the same data doesn't do that. Reproduction in the scientific sense means that someone who collects their own data and writes their own software using just the assumptions stated in your paper gets the same scientific result.

      Reproducibility in computational and data intensive disciplines is actually a bigger problem than in experimental sciences, because it's so easy to share code and data; that means that lots of people run basically the same code over basically the same data and seem to be "reproducing" a result, when in fact, they are adding no new information.
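
      A toy model of that distinction (a sketch; the true effect is zero by construction, so any "significant" mean is a statistical accident):

        import random

        random.seed(0)
        n = 30
        threshold = 2.0 / (n ** 0.5)  # rough 2-sigma cut for a unit-variance mean

        def mean(xs):
            return sum(xs) / len(xs)

        # Dredge null datasets until one happens to cross the threshold.
        while True:
            data = [random.gauss(0.0, 1.0) for _ in range(n)]
            if abs(mean(data)) > threshold:
                break

        print(f"'discovered' effect: {mean(data):+.3f}")
        print(f"same code, same data: {mean(data):+.3f} (identical, adds nothing)")

        # Independent replications: freshly collected data, same analysis.
        fresh_ok = sum(
            abs(mean([random.gauss(0.0, 1.0) for _ in range(n)])) > threshold
            for _ in range(1000)
        )
        print(f"fresh data crosses the bar in {fresh_ok}/1000 runs (~5%)")

      Re-running the shared pipeline reproduces the accident every time; only fresh data can kill it.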

  • by known_coward_69 ( 4151743 ) on Thursday February 23, 2017 @10:39AM (#53917631)

    Getting the results you need means you can push a drug through trials and make a lot of money.

    If you hurt or kill someone, it won't happen for 20-30 years, and by that time you will be retired; the person in charge at the time will be legally responsible while you chill in your nice house.

    • Drug tests occur in an area where mistakes can ruin a company through lawsuits. I think the reliability there is far above average; that is, reliability where health hazards are concerned. I would not be so sure about efficacy testing, but I still suspect that this more visible area of science is one where experimental rigor is way above average. I may be a sceptic about what average means.
      Finding out whether your medicine will hurt someone in thirty years time can be pretty hard. Failing to do so doesn't

      • That's the point. Look at Vioxx. It was first developed in the '80s and the lawsuits didn't happen until 20 years later. The people who originally developed it and oversaw its release to market were long gone by then. And during the whole lawsuit hype there were old scientists on TV who had read the original research at the time and said it would most likely cause problems due to the way it worked.

  • The authors should have done it themselves before publication

    In the rush to "publish or perish," you don't have time to re-run your experiment.

    A "solution" would be "split publication" - publish results after the first experiment but call it "unverified." Then when you or another researcher reproduces the experiment, publish again.

    The first researcher would receive the primary "credit" but only if the results held up under scrutiny.

    Over time, researchers who accumulated a lot of "unverified" initial publications would see their reputations suffer.

  • by Anonymous Coward on Thursday February 23, 2017 @10:50AM (#53917727)

    Scientists are not rewarded for reproducing/debunking previous work. You can't easily get it published, because it is not regarded as new. Honestly, I think grad student projects should be almost entirely reproducing other results. It would ensure that every important result is reproduced, and increase the emphasis on doing the science correctly rather than finding some novel result (which is usually a 2-sigma result that again can't be reproduced).
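
    As a rough illustration of why those marginal results melt away (a sketch with made-up numbers): if the true effect sits right at the 2-sigma detection threshold, an exact repeat of the experiment clears the bar only about half the time.

      import random

      random.seed(1)
      n = 25
      threshold = 2.0 / (n ** 0.5)  # 2-sigma cut for the mean of n unit-variance points
      true_effect = threshold       # effect sized exactly at the threshold

      successes = 0
      trials = 10_000
      for _ in range(trials):
          # one exact replication: n fresh measurements, same analysis
          sample_mean = sum(random.gauss(true_effect, 1.0) for _ in range(n)) / n
          if sample_mean > threshold:
              successes += 1
      print(f"replication 'succeeded' in {successes / trials:.1%} of runs")  # ~50%

    And that is the optimistic case, where the original effect was real rather than noise.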

    • That's a really good point, and I would have to agree. Letting grad students do original research before they have any practical experience applying their education is like letting a 10-year-old try to drive a car just because he was able to read the manual. Giving them the task of reproducing experiments and proving or disproving their validity is an excellent way to get hands-on experience without adding elements of risk to an already challenging field.

      • by WrongMonkey ( 1027334 ) on Thursday February 23, 2017 @12:50PM (#53918635)
        You are vastly underestimating grad students. All grad students already have a bachelor's degree, had high enough grades to get accepted to grad school, and had previous undergraduate research experience. Most incoming grad students have already published research papers. Many of them have had industry experience between undergrad and grad school. The average age of grad students is 33 years old. We're talking about mid-career professionals, not wet-behind-the-ears newbies.
    • I did this once as a grad student. I pointed out an error to the authors of a published study. They were nice, but some of the email exchanges were a little tense. Understandably, emotions get high in a situation like this. However, I would not recommend grad students do this as a general project, because they can be easily attacked by more senior researchers with better standing in their fields.
  • by Roger W Moore ( 538166 ) on Thursday February 23, 2017 @10:57AM (#53917779) Journal
    This is medical research, not science. Medicine uses science because often the best way to cure something is to understand it, but, very importantly, it has a very different motivation from science. Finding a "magic" pill which cures disease X without side effects but whose mechanism is completely unknown is great medicine but appalling science. Science is all about understanding how things work; medicine is all about treating human ailments.

    This leads to a different approach to using the tools of science. Medical researchers tend to focus far more on correlation than causation because that is what matters most to them. Unfortunately this approach leaves them open to random statistical effects which require a very good understanding of statistics to avoid, and even then it can still be very easy to fool yourself, e.g. the Monty Hall problem [wikipedia.org].

    So let's call this problem what it is: a problem with medical research.
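
    (The Monty Hall problem is a nice example of how checkable these intuition traps are; a quick simulation, with an arbitrary trial count, settles it in seconds:

      import random

      def play(switch, trials=100_000):
          wins = 0
          for _ in range(trials):
              car = random.randrange(3)
              pick = random.randrange(3)
              # Host opens a goat door that is neither the pick nor the car.
              opened = next(d for d in range(3) if d != pick and d != car)
              if switch:
                  # Switch to the one remaining closed door.
                  pick = next(d for d in range(3) if d != pick and d != opened)
              wins += pick == car
          return wins / trials

      print(f"stay:   {play(switch=False):.3f}")  # ~1/3
      print(f"switch: {play(switch=True):.3f}")   # ~2/3

    Plenty of trained statisticians initially got this one wrong, which is rather the point.)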
    • Medical researchers tend to focus far more on correlation than causation because that is what matters most to them

      No, they focus on correlation because causation is frequently so complex that it cannot be deciphered with our current level of knowledge. It can often take decades after we find the correlation before we can nail down the causation.

  • by Artem S. Tashkinov ( 764309 ) on Thursday February 23, 2017 @11:09AM (#53917875) Homepage

    The human body is the most complex organism in the known universe, so this is nothing to be sneezed at or surprised by. For instance, recent studies have shown that for a lot of people a placebo works even when they know perfectly well [harvard.edu] that they are being given a placebo.

    As another confirmation, the brain can directly affect chemical processes in the body, as demonstrated by Wim Hof [wikipedia.org], who can regulate his body temperature at will.

    • > The human body is the most complex organism in the known universe ...

      Found the anthropocentrist.

  • by dark.nebulae ( 3950923 ) on Thursday February 23, 2017 @11:16AM (#53917909)

    If I were publishing a paper on something that could lead to a serious pile of greenbacks, you can be damn sure my paper is going to exclude some details, to prevent others from monetizing my work...

    After all, science these days is not solely for the pursuit of truth and knowledge. Research is bought and paid for, and like any venture capital, the investments are expected to pay off.

    • by pipingguy ( 566974 ) on Thursday February 23, 2017 @12:28PM (#53918451)
      The part of Eisenhower's speech that is rarely quoted, for some reason: "Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.

      The prospect of domination of the nation's scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded.

      Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite."
  • Can two thirds of the results not be reproduced? That is a huge problem with our academic credibility and what we consider science.

    Can two thirds of the scientists not reproduce results? That's a huge problem with our academic standards and what we consider scientists.

    • Most published papers are so condensed, with so many steps in the process brushed over, that it is not straightforward to actually recreate them. "The devil is in the details" is the apt description of the process of actually implementing something. Insignificant details which are not important in describing the concepts and results can be essential for implementation. Sometimes it's luck, sometimes skill, and sometimes there is a bit of trial and error which does not get described. Sure, there is an assumption of skill level b

  • If at first you don't succeed, try try again. Then if you succeed, try try again. Carry on until you have constructed a body of results you can evaluate as a whole.

    There is a reproducibility problem for those who have a model of the universe that works like this: if A is true, then investigation will uncover evidence supporting A, and no evidence supporting not-A. If this is your worldview, then the instant you have any contradictory data you have a worldview crisis.

    It is perfectly normal for science to yield c

  • If you are unable to reproduce the results of a certain study, it appears to me that there may just not be enough knowledge of all the factors that affect the end result. For example, if you study something believing the main factors that determine the outcome are 'A', 'B', and 'C', but do not have the insight (yet) that factor 'D' is also very influential, then factor 'D' may have value '1' for the original study but value '2' for the reproduction study, influencing the end result and resulting in the diff
  • A lot of times stuff is not replicatable (suck it spellchecker, i just invented the word) because it's fucking difficult. I mean, I have spent thousands of dollars and, even worse, wasted many hours in the lab on getting something I thought should be straightforward, obvious, and simple to work. Sometimes you want things to work so badly, you might even see things (usually fluorescence) where there are none. It's like how Percival Lowell saw canals on Mars. As a scientist you have to fight hard against your o

  • and doesn't need no stinking experiments.

  • Honestly? In my experience, this is more a result of materials sections in papers being incomplete than it is evidence of poor science. Oftentimes, procedures just aren't described in enough detail to repeat something unless you already know how to do it, and even then, small differences in the way you do things can add up. Journals need to raise the word limits on materials and methods sections, because those limits are usually too low.
