Science

How To Better Verify Scientific Research

Hugh Pickens DOT Com writes "Michael Hiltzik writes in the LA Times that you'd think the one place you can depend on for verifiable facts is science. But a few years ago, scientists at Amgen set out to double-check the results of 53 landmark papers in their fields of cancer research and blood biology, and found that only six could be proved valid. 'The thing that should scare people is that so many of these important published studies turn out to be wrong when they're investigated further,' says Michael Eisen, who adds that the drive to land a paper in a top journal encourages researchers to hype their results, especially in the life sciences. Peer review, in which a paper is checked out by eminent scientists before publication, isn't a safeguard, because the unpaid reviewers seldom have the time or inclination to examine a study closely enough to unearth errors or flaws. 'The journals want the papers that make the sexiest claims,' Eisen says. 'And scientists believe that the way you succeed is having splashy papers in Science or Nature — it's not bad for them if a paper turns out to be wrong, if it's gotten a lot of attention.' That's why the National Institutes of Health has launched a project to remake its researchers' approach to publication. Its new PubMed Commons system allows qualified scientists to post ongoing comments about published papers. The goal is to wean scientists from the idea that a cursory, one-time peer review is enough to validate a research study, and to substitute a process of continuing scrutiny, so that poor research can be identified quickly and good research can be picked out of the crowd and find a wider audience. 'The demand for sexy results, combined with indifferent follow-up, means that billions of dollars in worldwide resources devoted to finding and developing remedies for the diseases that afflict us all is being thrown down a rathole,' says Hiltzik. 'NIH and the rest of the scientific community are just now waking up to the realization that science has lost its way, and it may take years to get back on the right path.'"
  • Half right (Score:5, Interesting)

    by SirGarlon ( 845873 ) on Monday October 28, 2013 @09:10AM (#45257625)
    Well, it's good to see a major scientific institution waking up to a phenomenon Richard Feynman warned about in the 1970s [columbia.edu]. Yet it seems to me the proposed solution is a little ad hoc. If scientists want to restore integrity to their field(s) -- and I applaud their efforts to do so -- why aren't they using an experimental approach to do so? I think they should try several things and collect data to find out what actually works.
  • Re:Eyeballs and Bugs (Score:5, Interesting)

    by Sattwic ( 545957 ) on Monday October 28, 2013 @09:41AM (#45257839) Homepage Journal

    The problem with the present method is that each paper is scrutinized before publication only by a very small, select cohort of experts. And once this decision is taken, it's 'published and stays published forever' (in most cases, discounting the outright fraudulent ones that are retracted).

    I am a professor of pharmacology, and we do critical appraisal of scientific papers in our department all the time for symposiums. You wouldn't believe the kinds of mistakes my undergrads pick up in journal clubs in papers published in prestigious journals.

    By enough eyeballs, I do mean qualified eyeballs. Not just eyeballs.

  • Re:problems (Score:5, Interesting)

    by Opportunist ( 166417 ) on Monday October 28, 2013 @09:47AM (#45257883)

    Can someone tack an Insightful onto that guy? Why don't I get mod points when I need them?

    Because this is exactly the source of the problem: all the credit, all the fame, all the limelight, and of course in turn all the grant money go to whoever or whatever organization publishes first. Adding insult to injury, they also get the patents, of course.

    We need to put more emphasis on proof. I keep overusing the 'assertion without proof' meme lately, but it fits science as much as it fits the NSA: just because you say so doesn't make it so. Unless someone who has an interest in debunking your claims confirms that you're right, your assertion is essentially worthless.

    And yes, a confirmation from a buddy isn't much better either.

  • by voislav98 ( 1004117 ) on Monday October 28, 2013 @10:01AM (#45257997)
    What is not discussed is that in science, as in life, it's all about incentives. All you have to do is look at who is paying for these studies, directly (through research grants) or indirectly (speaking or consulting fees), and things become much clearer. The biomedical and life sciences are most vulnerable to corruption because the incentives are very high: a successful drug or treatment is worth a lot of money. Even unsuccessful ones, given the proper appearance of effectiveness, are worth money.

    Other sciences are less susceptible because there is no incentive to hype the results, not because those scientists are more ethical. There are two solutions to the problem. One is to remove the incentives, which would mean overhauling the whole system of scientific funding. The other is to mandate raw data sharing, which would make it easier for people to reanalyze the data without actually redoing the experimental parts.

    A good example of this is the Reinhart-Rogoff controversy in economics. They claimed one thing in their widely publicized 2010 paper (that high debt levels impede growth), but their statistical analysis was shown to be riddled with errors that skewed the data toward the desired conclusion. This was discovered when they shared their raw data with a University of Massachusetts grad student. While data sharing would not eliminate these issues, it would make it harder to perform "statistical" analysis that introduces biases; the sketch below shows the kind of thing a reanalysis of shared data can catch.
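
    To make that concrete, here is a minimal sketch in Python (the country names and growth numbers are entirely made up, not the actual Reinhart-Rogoff dataset) of how two defensible-looking ways of averaging the same raw data can produce very different headline numbers. Without the raw data, a reader has no way to check which choice was made.

        # Hypothetical country-year growth observations (percent) for
        # countries in a "high debt" bucket. Invented numbers for illustration.
        observations = {
            "Country A": [2.1, 2.3, 1.9, 2.0, 2.2],  # five years of data
            "Country B": [-7.6],                      # a single bad year
        }

        # Method 1: pool every country-year observation equally.
        all_years = [g for years in observations.values() for g in years]
        pooled_mean = sum(all_years) / len(all_years)

        # Method 2: average within each country first, then across countries,
        # so Country B's lone year weighs as much as Country A's five years.
        country_means = [sum(ys) / len(ys) for ys in observations.values()]
        per_country_mean = sum(country_means) / len(country_means)

        print(f"pooled country-year mean:  {pooled_mean:+.2f}%")      # +0.48%
        print(f"equal-weight country mean: {per_country_mean:+.2f}%") # -2.75%

    Neither weighting is wrong on its face, but they answer different questions, and only shared raw data lets a reader tell which one produced the published figure.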
  • Re:problems (Score:5, Interesting)

    by SirGarlon ( 845873 ) on Monday October 28, 2013 @10:22AM (#45258173)
    That is the journals' problem. One could imagine devoting a quarter of each issue to one-page papers confirming previously published results. In fact, that could be a great way for graduate students to break into prestigious journals. In my not-so-humble opinion, the fact that most or all journals make no effort to publish corroborating studies is a biting criticism of the journals and of their role in undermining the proper scientific method.
  • by Vanderhoth ( 1582661 ) on Monday October 28, 2013 @10:41AM (#45258411)
    Completely agree. I have a friend who's getting a doctorate in Sociology with a concentration in women's studies. Some of the crap she's made me read is ridiculous. She stopped talking to me for a while after I said what she does isn't science. She can't even replicate an "experiment" from one group to another, let alone across a generational, cultural, or geographic gap, and yet some of these "studies" are used to set employment policies that discriminate against majorities and have created the "we don't care if you're qualified to do this if you don't help us meet our quota" environment.
  • by Anonymous Coward on Monday October 28, 2013 @11:21AM (#45258855)

    The problems that plague science today are much deeper than the simple, solvable problem of peer review. If you actually listen to the critics who have been speaking out on the issue for decades now, the problems start in grad school. See Jeff Schmidt's book, Disciplined Minds, which exposes the details of how consensus actually forms in science today. The public likes to imagine that consensus is decided by individuals who are aware of alternative options for belief. The truth is that consensus is simply manufactured in the grad schools, through an over-reliance upon memorization (as opposed to checking for actual conceptual comprehension, as with force concept inventory tests) and the weeding out of students who stray from the technical details of the problems they are assigned. The features we desire in professionals -- obedient thinkers who can fit into large organizations without "getting political" -- are really quite different from the values we associate with thinking like a scientist (which necessarily include open-mindedness and skepticism). The notion of a "professional scientist" is actually an idea with internal conflicts, a contradiction out in the open that apparently few have put any thought into. But once you look at the way we train professionals today, it becomes apparent that we are not training them to actually think like scientists.

    We actually had an incredible chance to have this debate back in May of 2000 when Noam Chomsky stood up with around 700 researchers in support of Jeff Schmidt. Schmidt even won his case against the American Institute of Physics, but the AIP's purpose has always been to obscure this debate from national discourse.

    The AIP realizes that the credibility of much of science is basically on the line. If consensus is largely manufactured, then the public cannot rely upon it as a guide in the more empirically challenged domains.

  • by WrongMonkey ( 1027334 ) on Monday October 28, 2013 @11:45AM (#45259121)
    The follow-up papers aren't just repeating the previous experiment; they're building on it. If I publish a paper claiming a method that accelerates stem cell development, that might get a splashy publication. But if other people try the method and their stem cells die, they're not going to cite my paper. The next time I submit a paper on stem cell development, someone who got burned using my previous method might be on the panel of reviewers, and they won't take a favorable view.

    There's never a point where someone officially stamps the work as "wrong", but unreproducible results gradually end up in the dust bin.

  • by Anonymous Coward on Monday October 28, 2013 @12:31PM (#45259647)

    I can't speak for the life sciences and social sciences, but as a physicist, I reproduce results all the time. You don't usually have follow-up papers that only reproduce results because pretty much *any* follow-up paper will have to do this as a minimum.

    If a paper is interesting (relevant), others will want to do research that builds on and extends it. The first step in this is usually to ... reproduce the original results. This is not necessarily because you are skeptical of them, but so you can make sure you understand what they did and that you are capable of performing the same experiment or calculation. In fact, I can't imagine how you could proceed without doing so. This is why we don't worry about peer review being only cursory: if the results are interesting (i.e., worth reproducing), they will get reproduced many times in due course as part of subsequent research.

  • by Anonymous Coward on Monday October 28, 2013 @04:54PM (#45262643)

    "Basically, the current system of peer review and replication is failing. Peer reviewers actually miss many errors, rarely check statistics, and almost never re-run any software. The current publishing system has little interest in printing replication, and spending time replicating experiments is a dead-end career path. The existing system doesn't work well in the era of 'big science' and 'big data'."

    No, it's not. Peer review isn't meant to catch all errors, just errors in logic. You're assuming that the review process that goes on in mathematics is the same as the review process in all other fields of science, which just isn't true.

    Replication is given to grad students. If they succeed, you know they learned the method, and if they fail, it goes into a chapter of their thesis. If they're a crappy grad student and can't get anything to work, they wash out; and if they're a good grad student with other good results, then you go around for the next twenty years saying "so-and-so tried to redo that guy's work and couldn't figure it out; it's probably crap, so don't work for that guy." And that's how replication is done.

    Sure, I don't want to spend my time replicating experiments, but why should I? And what is the point to publishing it? If I'm any good you don't want me wasting my time with that. I'll have students and then they can do it, and in the process they'll learn, and so life goes on. Science is a community process... you can't break it down into something so cut and dried as "it must all be on paper, published and formatted nicely" -- nobody has time for that crap.
