Majority of Landmark Cancer Studies Cannot Be Replicated 233

Posted by Unknown Lamer
from the scientists-at-work dept.
New submitter Beeftopia writes with perhaps distressing news about cancer research. From the article: "During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 'landmark' publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development. Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature (paywalled). ... But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers." As is the fashion at Nature, you can only read the actual article if you are a subscriber or willing to fork over $32. Anyone with access care to provide more insight? Update: 04/06 14:00 GMT by U L : Naffer pointed us toward informative commentary at In the Pipeline. Thanks!
This discussion has been archived. No new comments can be posted.

  • by Naffer (720686) on Friday April 06, 2012 @09:27AM (#39596611) Journal
    See this discussion of the same paper on In the Pipeline, a blog devoted to organic chemistry and drug discovery.
  • by Anonymous Coward on Friday April 06, 2012 @09:37AM (#39596719)

    The incentive to fake results is always present in academia, as is the incentive to believe faked results. I recommend reading "Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World" by Eugenie Samuel Reich, which details the case of Jan Hendrik Schön. Reading this book, it isn't hard to see how so many people could fall into the trap of chasing the numbers they think they should see, along with the academic prestige that comes with the cooked numbers.

    Amazon link to the book

  • by Hatta (162192) on Friday April 06, 2012 @09:40AM (#39596747) Journal

    Full text available here.

  • by kahizonaki (1226692) on Friday April 06, 2012 @09:58AM (#39596909) Homepage
    A recent PLoS article (free to view!) analyses modern research and concludes that most published research findings are false.
    TL;DR: it comes down to the nature of the statistics used and the fact that only positive results get reported.
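    For the curious, the core of that argument can be sketched with a few lines of arithmetic. The numbers below are illustrative assumptions (not taken from the paper): suppose only 1% of tested hypotheses are actually true, tests have 80% power, and "significant" means p < 0.05. Then the chance that a published significant result is true is:

```python
# Positive predictive value of a "significant" finding, per the
# standard pre-study-odds argument. All three numbers below are
# illustrative assumptions, not figures from the PLoS paper.
prior = 0.01   # fraction of tested hypotheses that are actually true
power = 0.80   # P(significant | hypothesis true)
alpha = 0.05   # P(significant | hypothesis false)

true_hits = power * prior          # true findings that reach significance
false_hits = alpha * (1 - prior)   # false findings that reach significance
ppv = true_hits / (true_hits + false_hits)

print(f"P(true | significant result) = {ppv:.2f}")
```

    With those assumptions only about 14% of significant findings are true, and if journals publish mostly the significant ones, most of what gets published is false — which is the paper's point.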
  • by Anonymous Coward on Friday April 06, 2012 @10:16AM (#39597069)

    These findings are no surprise to those who have been following medical science and research for the past decades. See for example what Dr John Ioannidis has to say about the consistency, accuracy and honesty (or lack thereof) of medical science in general: "as much as 90 percent of the published medical information that doctors rely on is flawed", "There was plenty of published research, but much of it was remarkably unscientific, based largely on observations of a small number of cases", "he was struck by how many findings of all types were refuted by later findings" - and not just in epidemiological (statistical) studies, but also in randomized, double-blind clinical trials: "Baffled, he started looking for the specific ways in which studies were going wrong. And before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals ... 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials."

    Gary Taubes too denounced this accumulation of bias back in 2007: compliance bias, information bias, confirmation bias, etc. routinely introduce non-uniform effects that can be bigger than what you try to measure. And you cannot compensate for them because you cannot quantify them.

    As Sander Greenland, one of the editor/authors of Modern Epidemiology, wrote in chapter 19 "Bias Analysis":

    Conventional methods assume all errors are random and that any modeling assumptions (such as homogeneity) are correct. With these assumptions, all uncertainty about the impact of errors on estimates is subsumed within conventional standard deviations for the estimates (standard errors), such as those given in earlier chapters (which assume no measurement error), and any discrepancy between an observed association and the target effect may be attributed to chance alone. When the assumptions are incorrect, however, the logical foundation for conventional statistical methods is absent, and those methods may yield highly misleading inferences. Epidemiologists recognize the possibility of incorrect assumptions in conventional analyses when they talk of residual confounding (from nonrandom exposure assignment), selection bias (from nonrandom subject selection), and information bias (from imperfect measurement). These biases rarely receive quantitative analysis, a situation that is understandable given that the analysis requires specifying values (such as amount of selection bias) for which little or no data may be available. An unfortunate consequence of this lack of quantification is the switch in focus to those aspects of error that are more readily quantified, namely the random components.

    Systematic errors can be and often are larger than random errors, and failure to appreciate their impact is potentially disastrous. The problem is magnified in large studies and pooling projects, for in those studies the large size reduces the amount of random error, and as a result the random error may be but a small component of total error. In such studies, a focus on “statistical significance” or even on confidence limits may amount to nothing more than a decision to focus on artifacts of systematic error as if they reflected a real causal effect.
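    Greenland's point about large studies is easy to see in a toy calculation (my own numbers, purely illustrative): random error shrinks like 1/sqrt(n), but a fixed systematic bias does not, so a big study just produces a very narrow confidence interval around the wrong answer:

```python
import math

# Toy model (illustrative numbers): an estimate carrying a fixed
# systematic bias on top of random noise with per-subject SD sigma.
true_effect = 0.0
bias = 0.5    # systematic error, e.g. residual confounding
sigma = 1.0   # random noise per subject

for n in (25, 2500, 250000):
    se = sigma / math.sqrt(n)        # random error shrinks with n...
    estimate = true_effect + bias    # ...but the bias does not
    ci = (estimate - 1.96 * se, estimate + 1.96 * se)
    excludes_truth = not (ci[0] <= true_effect <= ci[1])
    print(f"n={n:6d}  95% CI = ({ci[0]:+.3f}, {ci[1]:+.3f})  "
          f"excludes true effect: {excludes_truth}")
```

    At n = 250,000 the interval is a few thousandths wide and "statistically significant", yet it confidently brackets the bias instead of the true effect — exactly the "artifacts of systematic error" Greenland warns about.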

  • by pepty (1976012) on Friday April 06, 2012 @10:27AM (#39597151)
    Can't blame all of this on grants-whores. There are a lot of reasons for published results to "revert to the mean": honest mistakes like cell lines that drift from their normal phenotype, lack of budget necessary to run additional or larger experiments, and publication bias at the journals. One of the biggest problems is brought up in TFA: the observed effect was only seen under extremely specific conditions. Often that means that the company had to adapt the experiment to a model (say a different animal or cell line) that was relevant to their own work, and they couldn't reproduce the result in that model. In which case the result is still true, but much less likely to be useful for identifying a drug target.

    On the other hand, this isn't really news, or limited to oncology. Bayer published a report last year covering its analysis of targets in CV, women's health, and oncology, and overall could only verify ~25% of targets in-house.

  • Re:I call bullshit (Score:4, Informative)

    by Black Parrot (19622) on Friday April 06, 2012 @10:33AM (#39597217)

    well, you need to sign confidentiality agreements to reproduce the studies..

    No you don't. You just need to read the published paper and attempt to reproduce what the paper reports. (A good scientific paper includes enough information to make the work it reports on reproducible.)

    *However*, I suspect my post was over-reactive in a couple of regards:

    a) They might have asked the authors for their unpublished raw data, in which case a confidentiality agreement becomes plausible.

    b) When I read "landmark studies", I think longitudinal studies or clinical trials. However, it appears that they were using their own notions of "landmark", and included things like the effect of a chemical on cell biology. That sort of thing can be reproduced at a cost a private company would undertake.

    However, the "I can't tell you" criticism still stands. Among the posts at the on-line article in the Slashdot update, someone points out that the Nature article is a complaint about irreproducible results, but is not itself reproducible. Basically, from what I'm reading, Nature published an anecdote.

    Maybe it was a letter rather than a paper?

  • Re:I call bullshit (Score:5, Informative)

    by Anonymous Coward on Friday April 06, 2012 @10:50AM (#39597345)

    And look at many of the complaints he has about academic research in this 'paper'.

    "In addition, preclinical testing rarely includes predictive biomarkers that, when advanced to clinical trials, will help to distinguish those patients who are likely to benefit from a drug."

    When I publish a paper, I'm trying to present the interesting findings with regard to the molecular system I'm studying. Finding a bunch of associated biomarkers is an entirely different study. If you expect me to write multiple studies/papers rolled into one and replicate the experiments over and over in dozens of cell lines and animal models, you had better be paying me to do it, because I don't have the funding for that in the small budget I got for the proposed study, funded by taxpayers through the NIH. When I wrote the grant to do the study, I didn't even know whether I would find anything, so I didn't ask for a budget 20x the size so I could replicate it in every model out there to look for biomarkers as well, just in case I found something. Asking for that is a good way to have your grant rejected in the first place. My paper was to find the interesting thing. By publishing it I'm saying, 'we found something interesting, please fund us or other researchers to take the next step and replicate, test in other systems, etc.' Publishing my results does not mean 'Big Pharma should go ahead and spend $10 million trying this in humans right now!'

    "Given the inherent difficulties of mimicking the human micro-environment in preclinical research, reviewers and editors should demand greater thoroughness."

    It's tough to do. That's why we typically do small, bite-size chunks; the size of our grants only allows for bite-size chunks. Want greater thoroughness? Increase our funding. Oh, but that's from taxpayers, and you want them to spend all the money doing research so you don't have to.

    "Studies should not be published using a single cell line or model, but should include a number of well-characterized cancer cell lines that are representative of the intended patient population. Cancer researchers must commit to making the difficult, time-consuming and costly transition towards new research tools, as well as adopting more robust, predictive tumour models and improved validation strategies. "

    Cancer researchers must commit to the costly transition? Yes, yes, research is being held up because all those academics, with all their mega-millions in earnings each year, just aren't willing to pony up the cash to do their experiments right. We live off grants. If there isn't funding to do a huge study, we can't. Simple. No 'not willing to commit' involved.

    "Similarly, efforts to identify patient-selection biomarkers should be mandatory at the outset of drug development."

    Once again, the budget for that wasn't in my grant because I didn't know I would even find anything, let alone need to find every associated biomarker. Want to know the biomarkers? Then pay for it, or wait for me to publish this first paper, then write another grant asking for funding to look for biomarkers now that I've got a very good reason to look for them and spend the money. In a rush? Either pony up the cash or stop whining about taxpayer-funded academics not providing everything to you on a silver platter in record time.

  • by Missing.Matter (1845576) on Friday April 06, 2012 @11:54AM (#39597995)
    This isn't a problem in my field. In fact, it's standard procedure to present work first at a conference, build on it, and then expand on it in a journal paper. You're expected, of course, to cite your previous work in your new paper and explain explicitly what the new results are. One of the biggest journals actually suggests this before submitting. In the journals I review for, usually, several reviewers look at a paper and recommend a decision to an editor, who makes the final call based on those reviews, so the bias or agenda of one reviewer is diluted.
  • by hey! (33014) on Friday April 06, 2012 @02:15PM (#39599923) Homepage Journal

    Eugenics is junk science for four reasons:

    1) It makes invalid extrapolations from valid science.
    2) There is little empirical evidence for it, and what there is turns out to be tortuously misinterpreted.
    3) The validity or definition of some of its core concepts is questionable, so many of its ideas are untestable.
    4) It is statistically naive, ignoring things like regression to the mean.
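    On point 4, regression to the mean falls straight out of the classic bivariate model (the correlation below is an assumed number for illustration, not real heritability data): if the parent-offspring correlation is r, a parent k standard deviations from the mean has offspring expected only r*k standard deviations out:

```python
# Regression to the mean under the standard bivariate model:
# E[offspring | parent] = mean + r * (parent - mean).
# r here is an assumed correlation, not a measured one.
mean = 100.0   # population mean of some trait
sd = 15.0
r = 0.5        # assumed parent-offspring correlation

def expected_offspring(parent_value):
    """Expected offspring trait value given the parent's value."""
    return mean + r * (parent_value - mean)

parent = mean + 2 * sd               # an exceptional parent, 2 SD out
child = expected_offspring(parent)   # expected child: only 1 SD out
print(parent, child)
```

    So a breeding program that just selects extreme individuals keeps fighting this pull back toward the mean — which naive eugenics arguments ignore.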

    You can't extrapolate from dogs to people. At the risk of making another list, there are two problems with doing that:

    (A) Dogs are domestic animals; people are wild animals with far greater genetic diversity. Sexual reproduction is supposed to increase genetic diversity; only by starting with an artificially uniform genetic population can you breed for specific traits with any reliability. Look at several large human families and while you'll see some resemblance, it's far less than the resemblance of purebred puppies born in the same litter.

    (B) A dog reaches sexual maturity in 6-12 months; humans in 12-14 years. In the course of a 40-year career, a dog breeder gets to work with 30-50 generations; a human breeder would only get 2-3. Even if you start by inbreeding a single human family, it'll take you centuries to get the kind of results a dog breeder can get in 20 years.

    I'll leave off debunking eugenics here, because I'll have to start using roman numerals.

  • Re:Indeed (Score:2, Informative)

    by Anonymous Coward on Friday April 06, 2012 @03:07PM (#39600627)

    Some readers might not be familiar with p-values.
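    Short version: a p-value is the probability, assuming the null hypothesis is true, of seeing data at least as extreme as what you observed. A minimal worked example with a coin, using an exact binomial calculation (no libraries beyond the standard library):

```python
from math import comb

# One-sided exact binomial test: how surprising is getting 60 or
# more heads in 100 flips if the coin is actually fair?
n, observed = 100, 60
p_value = sum(comb(n, k) for k in range(observed, n + 1)) / 2**n
print(f"P(>= {observed} heads | fair coin) = {p_value:.4f}")
```

    This comes out just under 0.03, so it clears the conventional 0.05 bar — and that threshold is exactly the kind of thing the posts above complain gets gamed when only "significant" results reach print.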

    - T
