
Independent Labs To Verify High-Profile Research Papers

ananyo writes "Scientific publishers are backing an initiative to encourage authors of high-profile research papers to get their results replicated by independent labs. Validation studies will earn authors a certificate and a second publication, and will save other researchers from basing their work on faulty results. The problem of irreproducible results has gained prominence in recent months. In March, a cancer researcher at the pharmaceutical company Amgen reported that its scientists had repeated experiments from 53 'landmark' papers but managed to confirm findings from only six of the studies. And last year, an internal survey at Bayer HealthCare found that inconsistencies between published findings and the company's own results caused delays or cancellations in about two-thirds of its projects. Now the 'Reproducibility Initiative,' a commercial online portal, is offering authors the chance to get their results validated (albeit for a price). Once the validation studies are complete, the original authors will have the option of publishing the results in the open-access journal PLoS ONE, linked to the original publication."
  • Re:cool! (Score:4, Insightful)

    by Geoffrey.landis ( 926948 ) on Wednesday August 15, 2012 @01:36PM (#40999013) Homepage

Negative results are sometimes just as interesting as positive ones, as you usually learn something.

    You would think.

In the ideal world, that would be true, but in the real world, what you most often learn is that there are many different ways to screw up a delicate measurement, from which you learn little or nothing.

  • by Vario ( 120611 ) on Wednesday August 15, 2012 @01:44PM (#40999091)

The idea of reproducing important results is sound and is part of the scientific method. In practice, it is much harder to accomplish because of several constraints. I can only speak for my own field, but I suspect it applies to other fields as well: reproduction is hard in itself.

This leads us to a bunch of problems. If it takes a graduate student a year to collect a data set on a custom-made machine, which is expensive and time-consuming, who has the resources to reproduce the results? In most branches we are limited by the available personnel. It is hard to imagine giving someone the task of 'just' reproducing someone else's result, as this neither generates high-impact publications nor can be used for grant applications.

The thought behind this would benefit scientific progress, especially by weeding out questionable results that can lead you far off track, but someone needs to do it. And it had better not be me, as I need the time for my own research to publish new results. Every reviewer asks whether a submission is an achievement worth publishing, so which reviewer would accept a paper stating "We reproduced X and did not find any deviations from the already published results"?

  • by Joe Torres ( 939784 ) on Wednesday August 15, 2012 @01:55PM (#40999201)
The article says that the "authors will pay for validation studies themselves" at first. This is a nice idea, but it is not practical in an academic setting. Academic labs would rather spend money on more people or supplies than pay an independent lab to replicate data for them. New ideas are barely able to get funding these days, so why would extra grant money be spent doing the exact same studies over again? There could be a use for this in industry, but companies would probably pay their own employees to do it instead, if it were worth it.
  • Well now (Score:4, Insightful)

    by symes ( 835608 ) on Wednesday August 15, 2012 @02:00PM (#40999235) Journal

I can see this happening for some fairly small studies, but many very big studies simply can't be replicated. For example, a replication of a big public health study would likely draw on a different sampling population. And what about the LHC? How could anyone realistically replicate that work? The trouble is that replication isn't really replication when you can't exactly copy what someone has already done. This idea seems more like profiteering than anything else. What we really need are options for research groups to publish studies that failed but say something interesting about why they failed. That would be much more useful; that way we all learn. Plus, big labs aren't always free from suspicion themselves.

  • by Geoffrey.landis ( 926948 ) on Wednesday August 15, 2012 @02:03PM (#40999259) Homepage

    Yes, the more I look at this, the more I see little chance for it to work.

A graduate student will typically spend ~2 years of dedicated study in a narrowly specialized field learning enough lab technique to do a difficult experiment, often either building their own custom-made equipment or using one-of-a-kind equipment hand-built by a previous graduate student, and do so at absurdly low pay, in order to produce new results. You can't just buy that; they're working for peanuts only because they are looking for the credit for an original contribution to the field. And then you're going to say, "Oh, by the way, the original authors get the publication credit for your work if it reproduces their results, and we won't publish at all if you don't."

And, frankly, why would the original researchers go for this? You're really asking institutions to pay to set up a laboratory at a rival institution, and then to spend time and effort painstakingly tutoring their rivals in technique so as to bring them up to the cutting edge of research. And even if you can bring a team from zero up to cutting-edge enough to duplicate your work, all you get out of it is a publication in a database saying you were right in the first place.

  • by Geoffrey.landis ( 926948 ) on Wednesday August 15, 2012 @03:14PM (#41000163) Homepage

In the ideal world, that would be true, but in the real world, what you most often learn is that there are many different ways to screw up a delicate measurement, from which you learn little or nothing.

What are you talking about? Negative results don't mean someone screwed up a measurement. Negative results mean the experiment ran correctly but the results went counter to the hypothesis.

    That would be nice if things were that simple.

    Negative results are the fruit of good science just as much as positive results are. Screwing up the measurements in an experiment is simply bad science, or not science at all.

    What planet are you from? I want to move to your planet, where science is so easy, and stuff always works unless it's "bad science," which apparently comes with a label so anybody can tell which is which.

On my planet, stuff doesn't always work. When it doesn't work, it's not always easy to figure out why. When you don't get a result, it's hard to be confident that you didn't do something wrong, and the people who are confident that they didn't do something wrong are often wrong. It's not always trivial to say whether you made a mistake, whether the experimental setup had a flaw, whether something you didn't realize was important turned out to matter, or whether the result you're trying to replicate was just wrong to start with.

"If it ain't broke, don't fix it." - Bert Lantz

Working...