Majority of Landmark Cancer Studies Cannot Be Replicated
New submitter Beeftopia writes with perhaps distressing news about cancer research. From the article: "During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 'landmark' publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development. Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature (paywalled). ... But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers."
As is the fashion at Nature, you can only read the actual article if you are a subscriber or want to fork over $32. Anyone with access care to provide more insight?
Update: 04/06 14:00 GMT by U L: Naffer pointed us toward informative commentary at In the Pipeline. Thanks!
For detail and commentary... (Score:5, Informative)
Re:Grants-whores and publicists in academia?!?!? (Score:2, Informative)
The incentive to fake results is always present in academia, as is the incentive to believe faked results. I recommend reading "Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World" by Eugenie Samuel Reich, which details the case of Jan Hendrik Schön. Reading this book, it isn't hard to see how so many people could fall into the trap of chasing the numbers they think they should see, along with the academic prestige that comes with the cooked numbers.
Amazon link to the book [amazon.com]
Re:For detail and commentary... (Score:4, Informative)
Full text available here [davidrasnick.com].
Most published research findings are false (Score:4, Informative)
TLDR: Because of the nature of the statistics used and the fact that only positive results are reported.
http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124 [plosmedicine.org]
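The arithmetic behind that TLDR is easy to sketch. A rough illustration (mine, not from the linked paper; the function name and all numbers are illustrative) of why a literature that reports only positive results can end up mostly wrong:

```python
# Sketch of the base-rate arithmetic behind "most published research
# findings are false". Numbers below are illustrative assumptions,
# not figures from the paper.

def false_positive_share(prior_true, alpha=0.05, power=0.8):
    """Fraction of statistically significant findings that are false.

    prior_true: fraction of tested hypotheses that are actually true.
    alpha:      false-positive rate (the significance threshold).
    power:      probability a true effect reaches significance.
    """
    true_hits = prior_true * power           # real effects detected
    false_hits = (1 - prior_true) * alpha    # noise crossing p < 0.05
    return false_hits / (true_hits + false_hits)

# If only 1 in 10 tested hypotheses is true, over a third of the
# "significant" results are spurious -- before any bias or fraud
# enters the picture.
print(round(false_positive_share(0.10), 2))  # 0.36
```

Since journals mostly publish the "hits", the published record is drawn from exactly the pool this ratio describes.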
Ioannidis said as much for years (Score:2, Informative)
These findings are no surprise to those who have been following medical science and research for the past few decades. See for example what Dr. John Ioannidis has to say about the consistency, accuracy and honesty (or lack thereof) of medical science in general [theatlantic.com]: "as much as 90 percent of the published medical information that doctors rely on is flawed", "There was plenty of published research, but much of it was remarkably unscientific, based largely on observations of a small number of cases", "he was struck by how many findings of all types were refuted by later findings" - and not just in epidemiological (statistical) studies, but also in randomized, double-blind clinical trials: "Baffled, he started looking for the specific ways in which studies were going wrong. And before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals ... 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials."
Gary Taubes, too, denounced this accumulation of bias back in 2007 [nytimes.com]: compliance bias, information bias, confirmation bias, and the like routinely introduce non-uniform effects that can be bigger than the effect you are trying to measure. And you cannot compensate for them, because you cannot quantify them.
As Sander Greenland, one of the editors and authors of Modern Epidemiology, wrote in Chapter 19, "Bias Analysis":
Re:Grants-whores and publicists in academia?!?!? (Score:4, Informative)
On the other hand, this isn't really news, nor is it limited to oncology. Bayer published a report last year covering its analysis of targets in cardiovascular disease, women's health, and oncology, and overall could only verify ~25% of targets in-house.
Re:I call bullshit (Score:4, Informative)
well, you need to sign confidentiality agreements to reproduce the studies...
No you don't. You just need to read the published paper and attempt to reproduce what the paper reports. (A good scientific paper includes enough information to make the work it reports on reproducible.)
*However*, I suspect my post was over-reactive in a couple of regards:
a) They might have asked the authors for their unpublished raw data, in which case a confidentiality agreement becomes plausible.
b) When I read "landmark studies", I think longitudinal studies or clinical trials. However, it appears that they were using their own notions of "landmark", and included things like the effect of a chemical on cell biology. That sort of thing can be reproduced at a cost a private company would undertake.
However, the "I can't tell you" criticism still stands. Among the comments on the article linked in the Slashdot update, someone points out that the Nature article is a complaint about irreproducible results, but is not itself reproducible. Basically, from what I'm reading, Nature published an anecdote.
Maybe it was a letter rather than a paper?
Re:I call bullshit (Score:5, Informative)
And look at many of the complaints he has about academic research in this 'paper'.
"In addition, preclinical testing rarely includes predictive biomarkers that, when advanced to clinical trials, will help to distinguish those patients who are likely to benefit from a drug."
When I publish a paper, I'm trying to present the interesting findings in regards to the molecular system I'm studying. Finding a bunch of associated biomarkers is an entirely different study. If you expect me to write multiple studies/papers rolled into one and replicate the experiments over and over in dozens of cell lines and animal models, you had better be paying me to do it, because I don't have the funding for that in the small budget I got for the proposed study, funded by taxpayers through the NIH. When I wrote the grant to do the study, I didn't even know I would find anything for sure, so I didn't ask for a budget 20x the size so I could replicate it in every model out there to look for biomarkers as well, just in case I found something. Asking for that is a good way to have your grant rejected in the first place. My paper was to find the interesting thing. By publishing it I'm saying, 'we found something interesting; please fund us or other researchers to take the next step and replicate, test in other systems, etc.' Publishing my results does not mean 'Big Pharma should go ahead and spend $10 million trying this in humans right now!'
"Given the inherent difficulties of mimicking the human micro-environment in preclinical research, reviewers and editors should demand greater thoroughness."
It's tough to do. That's why we typically do small bite-size chunks. That, and the size of our grants only allows us to do bite-size chunks. Want greater thoroughness? Increase our funding. Oh, but that's from taxpayers, and you want them to spend all the money doing research so you don't have to.
"Studies should not be published using a single cell line or model, but should include a number of well-characterized cancer cell lines that are representative of the intended patient population. Cancer researchers must commit to making the difficult, time-consuming and costly transition towards new research tools, as well as adopting more robust, predictive tumour models and improved validation strategies. "
Cancer researchers must commit to the costly transition? Yes, yes, research is being held up because all those academics, with all their mega-millions in earnings each year, just aren't willing to pony up the cash to do their experiments right. We live off grants. If there isn't funding to do a huge study, we can't. Simple. No 'not willing to commit' involved.
"Similarly, efforts to identify patient-selection biomarkers should be mandatory at the outset of drug development."
Once again, the budget for that wasn't in my grant because I didn't know I would even find anything, let alone need to find every associated biomarker. Want to know the biomarkers? Then pay for it, or wait for me to publish this first paper, then write another grant asking for funding to look for biomarkers now that I've got a very good reason to look for them and spend the money. In a rush? Either pony up the cash or stop whining about taxpayer-funded academics not providing everything to you on a silver platter in record time.
Re:Grants-whores and publicists in academia?!?!? (Score:5, Informative)
Eugenics is junk science for four reasons:
1) It makes invalid extrapolations from valid science.
2) There is little empirical evidence for it, and what there is turns out to be tortuously misinterpreted.
3) The validity or definition of some of its core concepts is questionable, so many of its ideas are untestable.
4) It is statistically naive, ignoring things like regression to the mean.
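Point 4 is easy to demonstrate. A quick simulation (my addition, with made-up numbers) of regression to the mean: select individuals for an extreme measured trait, and a remeasurement falls back toward the average whenever the measurement mixes a stable component with noise.

```python
# Illustration (assumed numbers) of regression to the mean: pick the
# top 1% by a noisy measurement, then measure them again with fresh
# noise. Half the original extremity was noise, so it does not persist.

import random

random.seed(1)
N = 100_000
signal = [random.gauss(0, 1) for _ in range(N)]       # stable component
measured = [s + random.gauss(0, 1) for s in signal]   # plus noise

# Top 1% by the noisy first measurement...
top = sorted(range(N), key=lambda i: measured[i], reverse=True)[:N // 100]

# ...remeasured with independent noise.
remeasured = [signal[i] + random.gauss(0, 1) for i in top]

first = sum(measured[i] for i in top) / len(top)
second = sum(remeasured) / len(top)
print(f"selected mean: {first:.2f}, on remeasurement: {second:.2f}")
# The second number comes out well below the first.
```

A breeding program that ignores this will credit itself with "gains" that are partly just noise washing out.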
You can't extrapolate between dogs and people. At the risk of making another list, there are two problems with doing that:
(A) Dogs are domestic animals; people are wild animals with far greater genetic diversity. Sexual reproduction is supposed to increase genetic diversity; only by starting with an artificially uniform genetic population can you breed for specific traits with any reliability. Look at several large human families and while you'll see some resemblance, it's far less than the resemblance of purebred puppies born in the same litter.
(B) A dog reaches sexual maturity in 6-12 months; humans in 12-14 years. In the course of a 40-year career, a dog breeder gets to work with 30-50 generations; a human breeder would get only 2-3. Even if you started by inbreeding a single human family, it would take centuries to get the kind of results a dog breeder can get in 20 years.
I'll leave off debunking eugenics here, because otherwise I'd have to start using Roman numerals.
Re:Indeed (Score:2, Informative)
Some readers might not be familiar with p-value [wikipedia.org].
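For the impatient, a toy illustration (mine, not from the linked page): a p-value is the probability of seeing data at least as extreme as what you observed, assuming the null hypothesis is true. Null here: a coin is fair.

```python
# p-value for "is this coin biased toward heads?" -- the probability
# of getting at least this many heads from a genuinely fair coin.

from math import comb

def p_value_at_least(heads, flips):
    """P(X >= heads) for X ~ Binomial(flips, 0.5)."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2**flips

# 60 heads in 100 flips gives p of roughly 0.028, "significant" at the
# usual 0.05 cutoff -- yet about 1 fair coin in 35 would pass this test
# by pure chance, which is how a positive-results-only literature
# fills up with flukes.
print(round(p_value_at_least(60, 100), 3))
```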
- T