How To Better Verify Scientific Research

Hugh Pickens DOT Com writes "Michael Hiltzik writes in the LA Times that you'd think the one place you can depend on for verifiable facts is science, but a few years ago scientists at Amgen set out to double-check the results of 53 landmark papers in their fields of cancer research and blood biology and found that only six could be proved valid. 'The thing that should scare people is that so many of these important published studies turn out to be wrong when they're investigated further,' says Michael Eisen, who adds that the drive to land a paper in a top journal encourages researchers to hype their results, especially in the life sciences. Peer review, in which a paper is checked out by eminent scientists before publication, isn't a safeguard because the unpaid reviewers seldom have the time or inclination to examine a study closely enough to unearth errors or flaws. 'The journals want the papers that make the sexiest claims,' Eisen says. 'And scientists believe that the way you succeed is having splashy papers in Science or Nature — it's not bad for them if a paper turns out to be wrong, if it's gotten a lot of attention.' That's why the National Institutes of Health has launched a project to remake its researchers' approach to publication. Its new PubMed Commons system allows qualified scientists to post ongoing comments about published papers. The goal is to wean scientists from the idea that a cursory, one-time peer review is enough to validate a research study, and to substitute a process of continuing scrutiny, so that poor research can be identified quickly and good research can be picked out of the crowd and find a wider audience. 'The demand for sexy results, combined with indifferent follow-up, means that billions of dollars in worldwide resources devoted to finding and developing remedies for the diseases that afflict us all is being thrown down a rathole,' says Hiltzik. 'NIH and the rest of the scientific community are just now waking up to the realization that science has lost its way, and it may take years to get back on the right path.'"
  • So basically they want to introduce a Slashdot for scientists...

    Prepare for a brand new style of flame-wars!

  • problems (Score:5, Insightful)

    by Anonymous Coward on Monday October 28, 2013 @08:03AM (#45257557)

    I think the real problem is that scientists aren't lending any prestige to reproducing experiments so nobody bothers. Journals want to publish new results, not confirmation. Advisors discourage students from reproducing experiments, which makes sense since they won't be published.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      This. No one wants to read: "We've confirmed this."
      Actually, I might want to read it, and even write it, but good luck getting it into IEEE or ACM.

      • Re:problems (Score:5, Interesting)

        by SirGarlon ( 845873 ) on Monday October 28, 2013 @09:22AM (#45258173)
        That is the journals' problem. One could imagine devoting a quarter of each issue to one-page papers confirming previously published results. In fact, that could be a great way for graduate students to break into prestigious journals. In my not-so-humble opinion, the fact that most or all journals make no effort to publish corroborating studies is a biting criticism of the journals and their role in undermining the proper scientific method.
        • Re: (Score:3, Insightful)

          by Anonymous Coward

          No, it's not the journals' problem. It's a funding problem.

          Nobody wants to pay for scientists to reproduce and verify each other's work.

        • Re:problems (Score:4, Insightful)

          by celticryan ( 887773 ) on Monday October 28, 2013 @09:44AM (#45258445)
          As AC said below - there is no funding to do this. In addition to no funding, there is no incentive. Speaking in generalized terms, scientists are judged on their research record. That is a combination of:
          1. How much money they have brought in through grants.
          2. How many papers they have published.
          3. The prestige of the journals they have published in.
          4. How many times their papers have been cited by other researchers.
          So, if keeping your job depends on those four things, where is the incentive to check the work of someone else? Especially for large, difficult, and expensive experiments. At best, you get a quick "Comment on XYZ" paper that questions some findings, and the authors reply with a "Reply to Comment on XYZ" telling you why your comment is rubbish and you didn't understand what they were saying.
    • Re:problems (Score:5, Interesting)

      by Opportunist ( 166417 ) on Monday October 28, 2013 @08:47AM (#45257883)

      Someone tack an Insightful onto that guy? Why don't I get mod points when I need them?

      Because this is exactly the source of the problem: all the credit, all the fame, all the limelight, and of course in turn all the grant money goes to whoever or whatever organization publishes it first. Adding insult to injury, they also get the patents, of course.

      We need to put more emphasis on proof. I keep overusing the "assertion without proof" meme lately, but it fits science as much as it fits the NSA: just because you say so doesn't make it so. Unless someone else, someone with an interest in debunking your claims, confirms that you're right, your assertion is essentially worthless.

      And yes, a confirmation from a buddy isn't much better either.

    • There's more than one problem at work here. You've identified one, but I think the lack of quality review is also serious. The reviewers' names and their recommendations (accept/reject) should be published along with the paper when it is accepted, to give them an incentive to reject garbage instead of rubber-stamping it.
      • I like the idea. My only concern is that it might cause the number of people willing to do reviews to drop... quality goes up but quantity of reviewed material goes down - which isn't good for anybody. Not sure which is worse though.
        • I think an ineffective review is worse than no review, because an ineffective review imparts a false sense of confidence in the paper's methods and findings. So a sharp decline in the number of reviewers could be a good thing, if it weeds out the liars who say they have rigorously criticized the manuscript when they haven't.
        • Actually, I think that would be good. I'm pretty sure the number of papers would drop as well, because shoddy work would get you the wrong kind of headlines, so those submissions would dwindle. You could ensure that by publishing, in synopsis form, the papers that were rejected and the reasons why.
      • The problem that comes along with that approach is that if you give someone's paper a negative review, and your name is attached, they will then see it, and when they get your latest paper to review, they may give you a negative review as retribution.

        That's why the current anonymous reviewer system exists for many journals. Your 'solution' may lead to more rubber stamping, instead of less, for fear of reprisals.

          The problem that comes along with that approach is that if you give someone's paper a negative review, and your name is attached, they will then see it, and when they get your latest paper to review, they may give you a negative review as retribution.

          I think there are two ways to write a negative review: without scientific rigor and logical arguments, or with them. In the first case, the reviewer would just be making an ass of him/herself and casting doubt on both his/her scientific and personal integrity.

          • I'm inclined to interpret that as: humans are involved, with their own motivations and biases, and there is no real way around that except in some unrealistic idealized world. The world is not ideal, and neither is any review process I've seen suggested.

            There are always additional experiments that could be done by the primary researcher. The cut-off between what is reasonable to publish as a single paper and what could be put aside as part of a second paper can often largely be up for debate. Antagonistic reviewers could often quite easily argue that additional experiments are needed which could push back publication, allowing the antagonist to scoop you, while looking like a simple well-argued and substantial review.

            • Antagonistic reviewers could often quite easily argue that additional experiments are needed which could push back publication, allowing the antagonist to scoop you, while looking like a simple well-argued and substantial review.

              The whole idea of "scooping" someone in science is sickening. If there is competitiveness, it should be competitiveness for greater thoroughness and rigor, not quicker results and headline-grabbing. But the type of predatory behavior you describe is not prevented by the anonymous system.

              • #1) Yes, scooping is bad, but until you can fix the whole grant/citation/patent-system/publish-or-perish issues, that's going to be an issue.

                #2) I never claimed scooping was prevented by the current system.

                I was pointing out that there are ways for reviewers with a grudge to give a negative and damaging review without looking malicious. Your ideal system does not prevent this.

                The current anonymous system doesn't give them a name to hold a grudge against for a previous bad review.

      • That would likely reduce the number of negative reviews, as the (probably famous) paper authors would seek vengeance against those reviewers who "make them look bad".
    • by LihTox ( 754597 )

      I think the real problem is that scientists aren't lending any prestige to reproducing experiments so nobody bothers. Journals want to publish new results, not confirmation. Advisors discourage students from reproducing experiments, which makes sense since they won't be published.

      It's not just about prestige, it's about cash. The NIH (etc.) should offer grants for reproducing results, not just coming up with new ones.

      Then again, it would help if the NIH offered more grant money, period. The sequester is killing American science.

    • I think the real problem is that scientists aren't lending any prestige to reproducing experiments so nobody bothers. Journals want to publish new results, not confirmation. Advisors discourage students from reproducing experiments, which makes sense since they won't be published.

      Which is sad, because that's the exact kind of study that any graduate student should have for a first project. Only after you've proven your abilities should you be given the resources for original work.

    • Re:problems (Score:4, Informative)

      by RespekMyAthorati ( 798091 ) on Monday October 28, 2013 @02:14PM (#45261603)
      It's also true that many experiments are extremely expensive to perform, and getting the grant $$ to repeat a published experiment may be all but impossible. This is especially true in medicine, where clinical trials can run into millions of dollars.
  • by Anonymous Coward on Monday October 28, 2013 @08:06AM (#45257581)

    Follow-up studies are where the validation/replication/testing happens. This is not new. Any decent scientist knows this. Peer review is a filter, but it's a pretty basic sanity check, not a comprehensive evaluation of the work. Once published, a paper and the ideas within it are open to critique by ALL readers, not only the reviewers. Thus, post-publication is when the real scientific review happens. Peer review merely removes the stuff that isn't formulated, measured, and organized well enough to be worth reading in the first place (i.e. it gets rejected). It's an imperfect process, so sometimes stuff slips through anyway. That's what the follow-up papers are for.

    • by Chris Mattern ( 191822 ) on Monday October 28, 2013 @08:25AM (#45257713)

      Which is exactly the problem. Nobody's doing follow-up papers. Follow-up papers aren't sexy, don't get published in top-line journals, and don't get a lot of cites. In short, they don't advance your career. So scientists don't do them.

      • by dkf ( 304284 )

        Which is exactly the problem. Nobody's doing follow-up papers. Follow-up papers aren't sexy, don't get published in top-line journals, and don't get a lot of cites. In short, they don't advance your career. So scientists don't do them.

        Follow-up papers are the usual things that a doctoral student starts out their career writing. Sometimes even masters students (depending on the discipline and the difficulty of conducting the experiments). The grad-school grunts don't have a lot of expectation of being heavily cited, so the risk to them is much lower. Once they've duplicated someone else's results, they can start thinking about what they'd do differently...

        • But the problem with this model is that there's no way for a grad student to publish a negative result if they fail to replicate the results. To compound the problem, if a student starts getting negative results, they will quickly change their course of research to something that may produce results. PhDs are not granted for negative results - there is little incentive to pursue research paths that aren't fruitful.

          In the end, the student will know the original result is questionable, but the scientific community will not.

          • But the problem with this model is that there's no way for a grad student to publish a negative result if they fail to replicate the results. To compound the problem, if a student starts getting negative results, they will quickly change their course of research to something that may produce results. PhDs are not granted for negative results - there is little incentive to pursue research paths that aren't fruitful.

            In the end, the student will know the original result is questionable, but the scientific community will not.

            Since being first to publish would not come into play here, can't they keep the results and write them up afterwards?

            For the sake of completeness or furthering science :-] , or to pad their C.V. :-[ down the line?

      • Re: (Score:3, Interesting)

        The follow-up papers aren't just repeating the previous experiment, but building on it. If I publish a paper that claims a method that accelerates stem cell development, that might get a splashy publication. But if other people try the method and their stem cells die, they're not going to cite my paper. Next time I submit a paper on stem cell development, someone who got burned using my previous method might be on the panel of reviewers and they won't take a favorable view.

        There's never a point where some

      • Re: (Score:3, Interesting)

        by Anonymous Coward

        I can't speak for the life sciences and social sciences, but as a physicist, I reproduce results all the time. You don't usually have follow-up papers that only reproduce results because pretty much *any* follow-up paper will have to do this as a minimum.

        If a paper is interesting (relevant), others will want to do research that builds off of and extends it. The first step in this is usually to ... reproduce the original results. This is not necessarily because you are skeptical of them, but so you can ma

    • And don't you think that's a tad bit dangerous? Imagine you're building your research based on the findings of antigravity, only to find out (after investing a lot of dough into the whole process) that your foundation is completely bogus?

      Now imagine the danger inherent to such an approach when it comes to human medicine.

      When you build upon a foundation, you don't test the foundation beyond the obvious points, i.e. whether it can hold your building; you don't really stress-test it. You neither have the time

  • Replication (Score:2, Insightful)

    by Hognoxious ( 631665 )

    The best way of checking spurious, biased, or erroneous results is for someone else to independently do the same experiment. However there's no money or glory in replication. So nobody does it.

    I wonder which will be most amusing, Fox's interpretation of this story or the tardbaggers' interpretation of that. I've already assigned "herp", "derp" and "6,000 years" to hotkeys.

    • by gtall ( 79522 )

      Yes, there's no money in replication. More importantly, there is no money for replication. Who's going to fund replication studies? And it wouldn't be a small amount of money.

      Maybe what is needed is a tax on research to fund replication studies. That opens up another can of worms: is one replication study enough? Who decides? Whether the Tea Baggers like it or not, this seems like an area that will require government intervention. Taxing research is unlikely to bring in enough money. Taxes will have to be rai

    • The *best* way would be to do a different experiment with the expectation of getting the same results if the original research was valid and the understanding of the studied phenomena good. Then, regardless of whether the second study validates the first one or not, we would actually have more data and a better understanding of the issue and the problems regarding its study.

      Invalidating shoddy research would be a bonus.
    • Indeed, replication is good. One should always look askance at "hot new science" until it's repeated enough. But then you just go off the rails and make it political.

      Fox's interpretation of this story or the tardbaggers' interpretation of that. I've already assigned "herp", "derp" and "6,000 years" to hotkeys.

      Right... so believing that the federal government is too big and out of control, equates, in your mind, to a complete lack of scientific understanding to the point of mental retardation.

      Got it.

      Just about everyone believes in something nonsensical and unscientific. Whether it's the 6,000 year nonsense of the religious wingnuts, or th

    • by jythie ( 914043 )
      There is some money in replication, but funding does tend to be harder to get. It can be hard to explain to politicians and board members why one wants money to do what has already been done. Which is a pity given how important it is.
    • There's a lot of money in replicating results that big business has a vested interest in disagreeing with. Chew on that the next time somebody tells you that climate scientists only support global warming to get grant funding.

  • by methano ( 519830 ) on Monday October 28, 2013 @08:09AM (#45257615)
    It's important to remember that in vivo biology is not all of science. It's a lot harder to know what you're doing in biology. If you want excellent reproducible science, let's just roll balls down inclines, measure that and hope we don't get sick.
  • Half right (Score:5, Interesting)

    by SirGarlon ( 845873 ) on Monday October 28, 2013 @08:10AM (#45257625)
    Well, it's good to see a major scientific institution waking up to a phenomenon Richard Feynman warned about in the 1970s [columbia.edu]. Yet it seems to me the proposed solution is a little ad hoc. If scientists want to restore integrity to their field(s) -- and I applaud their efforts to do so -- why aren't they using an experimental approach to do so? I think they should try several things and collect data to find out what actually works.
  • Whenever one of these stories is posted about inaccurate and falsified research papers, it's always a field related to biology. This doesn't seem to be nearly as much of a problem with the hard sciences (physics, chemistry). We should avoid rhetoric like "science has lost its way" since the problem is mostly isolated to one branch of science and such statements only serve as ammo for the anti-science crowd. Disclaimer: I'm a physicist.
    • Physics is not immune to parasitic and mercenary research phenomena either, especially in more exotic areas with great funding potential, such as quantum computing & crypto, where exaggerations and self-puffery are common. One might say the whole field is of that kind, since their whole theorizing (which is all they've got) rests on the speculative aspects of quantum measurement theory, the foundations of which are still awaiting unambiguous experimental demonstration (such as the "loophole free" violation [arxiv.org]

  • by voislav98 ( 1004117 ) on Monday October 28, 2013 @09:01AM (#45257997)
    What is not discussed is that in science, as in life, it's all about incentives. All you have to do is look at who is paying for these studies, directly (through research grants) or indirectly (speaking or consulting fees), and things will become much clearer. The biomedical and life sciences are most vulnerable to corruption because the incentives are very high: successful drugs/treatments are worth a lot of money. Even unsuccessful ones, given the proper appearance of effectiveness, are worth money.

    Other sciences are less susceptible because there is no incentive to hype the results, not because those scientists are more ethical. There are two solutions to the problem. One is to remove the incentives, which would mean overhauling the whole system of scientific funding. The other is to mandate raw data sharing. This would make it easier for people to reanalyze the data without actually redoing the experimental parts.

    A good example of this is the Reinhart-Rogoff controversy in economics, where they claimed one thing in their widely publicized 2010 paper (high debt levels impede growth), but their statistical analysis was shown to be riddled with errors, skewing the data toward the desired conclusion. This was discovered when they shared their raw data with a University of Massachusetts grad student. While data sharing would not eliminate these issues, it would make it harder to perform "statistical" analysis that introduces biases. (A toy sketch of such a reanalysis follows this thread.)
    • Insightful, but I think your two solutions aren't. Solution 1 wouldn't work well - you can't remove incentives, only change them (which is admittedly what you really suggested by a funding overhaul). The trouble with this approach is you trade one set of problems for another set, which could be worse or just different. It's not a solution if it doesn't actually make things better, and that's hard. Solution 2 wouldn't work because you gave an example of a very powerful disincentive: the risk of being exposed.
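    To make the data-sharing point above concrete, here is a minimal sketch, assuming invented numbers: the country growth figures below are hypothetical, not the actual Reinhart-Rogoff dataset. It shows how, once raw data is shared, anyone can recompute a headline average and see what an accidental row exclusion does to it.

    ```python
    # Hypothetical figures only, NOT the real Reinhart-Rogoff data:
    # invented average GDP growth (%) for countries with debt/GDP above 90%.
    growth_high_debt = [-7.6, -1.8, 0.3, 1.0, 2.4, 2.5, 2.9, 2.6, 2.2, 1.8]

    def mean(xs):
        return sum(xs) / len(xs)

    full = mean(growth_high_debt)         # all rows included
    botched = mean(growth_high_debt[:5])  # spreadsheet-style slip: last 5 rows dropped

    print(f"average growth, full data:    {full:+.2f}%")     # +0.63%
    print(f"average growth, rows dropped: {botched:+.2f}%")  # -1.14%
    # The sign of the headline number flips on a simple coding error,
    # which is exactly what independent reanalysis of shared data catches.
    ```

    The numbers are made up, but the mechanism matches the controversy described above: the error is invisible in a published summary statistic and obvious to anyone who can re-run the computation on the raw data.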

  • Really there is a simple reason for this. (Of course, it's not the only one but presumably the primary one.)

    Tenure-track positions and funding are to a large extent determined on the basis of the number of publications weighted by the reputation of the journals, not by the quality of publications.

    The idea is that good journals will reject bad papers, which doesn't work as well as is desirable due to the extreme number of submissions the journals receive, which have to be reviewed by relatively small numbers of reviewers.

    • In my experience, good journals aren't about rejecting bad papers - once you get above a certain level in a subject, each journal is sending its papers to the same reviewers - but about rejecting unimportant papers. That's why things like publications in big-name journals are significant: they imply important work. Not necessarily correct work, but anyone who's unwilling to publish research that turns out to be wrong shouldn't be a scientist in the first place.

  • by Chalnoth ( 1334923 ) on Monday October 28, 2013 @09:35AM (#45258333)

    The correct take-away from this kind of study is not that a specific field of science is "broken" (also, cancer research is not all of science), but rather that there is room for improvement.

    There is no question whatsoever that cancer research has made leaps and bounds over the last few decades in terms of improving the lives of many people with cancer, both by helping them to live longer, and by helping them to live better. What this kind of study shows is that we can do even better still, if we can find ways to fix the flaws that remain in cancer research.

  • It has been said in other papers too that a lot of the literature is wrong (http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124) and that this is more likely in higher-impact journals and for papers with lower sample sizes (http://www.nature.com/nrn/journal/vaop/ncurrent/full/nrn3475-c6.html). The idea is that a smaller sample size is more likely to lead to a Type I error (incorrectly finding a statistically significant result) or to over-estimating the size of an effect. Consequently, these smaller-sample studies find what looks like a stunning effect, but what they're really seeing is an outlier. The paper looks awesome, so it gets published somewhere high-impact, where it is sensationalised. This effect is exacerbated by the "publish or perish" mentality, where researchers are pressured to produce many high-impact papers in order to get grants. It's also a function of the fact that a lot of research is being done, so the high volume increases the odds of this shit happening.

    Cancer biology is particularly prone to this sort of effect because it's very competitive, there's a lot of interest in it and so it generates high-impact papers, and there are a lot of big screening studies that depend heavily on statistics to confirm effects. In some branches of biology you hardly need a stats test, because variables are few and significance is obvious. However, when you're screening vast numbers of drug targets you have all sorts of problems with multiple comparisons and the like. You need elaborate stats tests and they have to be done right. (A toy simulation of this small-sample screening effect follows below.)

    Overall, however, whether the community as a whole believes something is determined by the state of the literature in general and not just a single study. What we consider true or false is influenced by the politics of science as well as the data. This is nicely reviewed in the controversial book, "The Golem", by Collins and Pinch (http://www.amazon.com/The-Golem-Should-Science-Classics/dp/1107604656).
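    To illustrate the small-sample screening problem described above, here is a toy simulation sketch; the sample size, significance threshold, and number of targets are invented for illustration, and every true effect is deliberately zero:

    ```python
    # Toy simulation (invented parameters): screen many truly-null drug
    # targets with small samples and see what the "significant" hits look like.
    import numpy as np

    rng = np.random.default_rng(0)

    n_targets = 1000  # hypothetical targets screened; all have zero true effect
    n = 10            # small sample size per arm
    t_crit = 2.1      # approx. two-sided 5% threshold for ~18 degrees of freedom

    hits = []
    for _ in range(n_targets):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(0.0, 1.0, n)  # true effect is exactly zero
        diff = treated.mean() - control.mean()
        se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
        if abs(diff / se) > t_crit:        # passes the significance test
            hits.append(abs(diff))

    print(f"false positives: {len(hits)} / {n_targets}")  # expect roughly 5%
    if hits:
        print(f"mean |effect size| among hits: {np.mean(hits):.2f}")
    # Every "hit" is a Type I error, and the effect sizes among hits are
    # large even though the truth is zero: exactly the outlier-chasing
    # dynamic that makes small-sample results look stunning.
    ```

    Around 5% of purely null targets come out "significant" by chance, and the winners look impressive; multiply that by thousands of screens and the literature fills with splashy results that won't replicate.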
  • by Anonymous Coward

    The problems that plague science today are much deeper than the simple, solvable problem of peer review. If you actually listen to the critics who have been speaking out on the issue for decades now, the problems start in grad school. See Jeff Schmidt's book, Disciplined Minds, which exposes the details of how consensus actually forms in science today. The public likes to imagine that consensus is decided by individuals who are aware of alternative options for belief. The truth is that the consensus is

    • The notion of "professional scientist" is actually an idea with internal conflicts. It's a contradiction out in the open which apparently few have put any thought into. But, once you look at the way we train professionals today, it becomes apparent that we are not training them to actually think like scientists.

      But science has never been done the "pure" way. It's been decided by consensus all along; this isn't a new thing related to training in grad school. You can see this is so because the phenomenon (science by consensus) is world-wide, yet grad school works differently in different countries, e.g. in the UK you don't even have "grad school." A graduate student enters a PhD lab after their first degree (an MSc is not necessary in most cases) and gets on with it. In a good university, the first year or two are de

  • by WrongMonkey ( 1027334 ) on Monday October 28, 2013 @11:15AM (#45259459)
    A single paper with a novel result is just the beginning of the scientific process. If someone publishes a paper that claims X kills cancer cells in vitro, then the next step is to check whether X kills cancer cells in mice. If the original paper is bogus, then follow-up research is unlikely to yield any results. So the original paper doesn't get any citations, and the next time that researcher makes a similar claim, they will be met with more skepticism.

    It's true that the system can be gamed in the short run. And sometimes someone can game it enough to get tenure. But without follow-up and citations, they'll just end up in the academic limbo of being an associate professor with no funding.

  • Get rid of the big money and political interests funding it.
    (e.g. Exxon, BP Global, Al Gore Carbon Exchanges.)

    These people are not interested in climate change, man-made or otherwise; they are interested in profits.

    To avoid this, government in the past was usually employed to carry out science. In the golden age of scientific discovery (the '50s and '60s), gigantic strides in scientific and technological progress were made, not because it was profitable to do so, but because one country in the world, the United States dec

  • Not a bad idea, just be sure to post lots of pictures to keep people interested :-P

  • You verify research by reproducing the results. If you can get them, it's science. If you can't get them, it's bullshit. It's as easy as that.
