
Majority of Landmark Cancer Studies Cannot Be Replicated

New submitter Beeftopia writes with perhaps distressing news about cancer research. From the article: "During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 'landmark' publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development. Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature (paywalled). ... But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers." As is the fashion at Nature, you can only read the actual article if you are a subscriber or want to fork over $32. Anyone with access care to provide more insight? Update: 04/06 14:00 GMT by U L: Naffer pointed us toward informative commentary in Pipeline. Thanks!
  • by hey! ( 33014 ) on Friday April 06, 2012 @09:36AM (#39596709) Homepage Journal

    I inevitably run into those who say "Science wouldn't allow that" (like my friend who's still in the field).

    Well, science is rather like democracy in that regard. It doesn't prevent mistakes, but what makes it better than other things people have tried is that it has a mechanism for correcting them.

  • by Black Parrot ( 19622 ) on Friday April 06, 2012 @09:42AM (#39596763)

    Same way when I was a student it was always the premeds who did all the cheating.

    During my orientation at a university, the Dean of my college said that's exactly what they found among the people who get busted.

  • by peter303 ( 12292 ) on Friday April 06, 2012 @09:42AM (#39596767)
    Two weeks ago it was reported that social science studies are usually not replicated, either because they are not true or because replication is too expensive. They were trying to explain the rise in psychology paper retractions and job firings as poor science.
  • I call bullshit (Score:5, Interesting)

    by Black Parrot ( 19622 ) on Friday April 06, 2012 @09:52AM (#39596841)

    Given the expense, I flat don't believe that a private company just decided to replicate 53 studies.

    And he claims that authors "made" his team sign confidentiality agreements. How do authors force that?

    So, he claims, he can't even tell us which studies failed.

    Now he works at a different cancer research company. Conflict of interest?

    I don't doubt that we've got problems in the "medical industry", but the linked article reeks of bullshit.

    Has anyone looked at Nature?

  • by AtomicDevice ( 926814 ) on Friday April 06, 2012 @09:53AM (#39596847)

    While certainly there are those who will publish findings they know to be false, that's not really the big issue I see here. Good science demands that studies be replicated so they can be upheld or refuted. Sure, there's confirmation bias in science all over the place - the bigger problem I see is that there's very little incentive to publish a paper that simply refutes another. Busting existing studies should be a glorious field, but it's not. If big-name scientist A publishes a result in nature, and no-name scientist B publishes a paper in the journal-of-no-one-reads-it, everyone just assumes scientist B is just a bad scientist (assuming he even managed to actually get published at all).

    Another major issue is that the null hypothesis is a very un-enticing story. No one wants to publish the paper: "New Drug does nothing to cure cancer". If you spent a year and a ton of money researching New Drug, you're damn sure going to try and make it work. It's unfortunate, because often the null hypothesis is very informative, but it doesn't get you paid or published. Or how about the psychology paper: "Brain does not respond to stimulus A in any meaningful way", don't remember that paper? That's because it never got published.

    I think this is less about malicious behavior, and more about a lack of interest (which can somewhat be blamed on the way universities/journals/grants handle funding, notoriety, etc) in replicating and refuting studies.

    Do you want to be the guy who cured cancer, or the guy who disproved a bunch of studies?
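
    The file-drawer effect described above can be sketched with a small simulation. All numbers here are made up for illustration (a drug with zero true effect, 500 hypothetical studies): if only the "significant" positive results get written up, the published record shows nothing but sizable benefits.

```python
import random
import statistics

def file_drawer(n=20, studies=500, z_cut=1.96):
    """Simulate many studies of a drug with zero true effect; 'publish'
    only those whose positive result clears the significance bar."""
    all_effects, published = [], []
    se = (2 / n) ** 0.5          # standard error of a difference of two means
    for _ in range(studies):
        control = [random.gauss(0, 1) for _ in range(n)]
        treated = [random.gauss(0, 1) for _ in range(n)]  # no real effect
        diff = statistics.mean(treated) - statistics.mean(control)
        all_effects.append(diff)
        if diff / se > z_cut:    # only "the drug worked" gets written up
            published.append(diff)
    return statistics.mean(all_effects), published

mean_all, pub = file_drawer()
print(round(mean_all, 3))  # near zero: across ALL studies the drug does nothing
print(len(pub))            # but a handful of 'publishable' studies show a big benefit
```

    Each published effect must exceed z_cut * se (about 0.62 here), so the surviving studies all look impressive even though the true effect is exactly zero.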

  • Medical scientists (Score:3, Interesting)

    by nedlohs ( 1335013 ) on Friday April 06, 2012 @09:55AM (#39596863)

    are famously bad at science and statistics.

    And there's no benefit (to the researcher) in replicating a study that's already been done which makes for an obvious problem.

    Medical science isn't alone in this of course, it just seems to be worse than most.

  • by tbannist ( 230135 ) on Friday April 06, 2012 @10:11AM (#39597019)

    Not exactly. A drug that works no better than the placebo could be used for years or decades before anyone figures out that it doesn't do anything but create side effects. As long as there is no evidence of intentional malfeasance and there isn't a bunch of corpses linked to the drug, it would probably have little impact on profits even if it was exposed as useless.

  • by crmarvin42 ( 652893 ) on Friday April 06, 2012 @10:31AM (#39597195)
    Your post makes me think of two recent instances in my field (I am a non-ruminant nutritionist).

    1st had to do with a Professor down in Texas who is pushing the feeding of supplemental L-arginine to sows and who is a "consultant" for an arginine manufacturer. He's been pushing it based on (frequently) contradictory reports of improved litter sizes and reduced piglet mortalities. However, he's never had sufficient statistical power. You need at least 100 sows per treatment because of the high standard deviations involved, but he frequently uses fewer than 10 sows per treatment. At the Midwest American Society of Animal Science meeting in Des Moines, IA this year there were two presentations from industry where they EACH used over 100 sows per treatment and found no positive effects of feeding supplemental L-arginine. They never mentioned the Texas professor directly, but you could tell that both studies were intended as a rebuttal of what they considered bad and self-serving science.

    2nd has to do with an article I read critiquing the use of what is called "Nutritional Epidemiology," which can be found here []. It is an incredibly long but very insightful critique of a field that is given far too much credence simply because of where the scientists work, and of how free they are with Chicken-Little-esque proclamations about how meat is going to increase your chances of dying by 30%!! (everyone has an exactly 100% chance of dying).
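
    The statistical-power point — a small trial routinely missing a real effect that a large one catches — can be sketched with a quick simulation. The baseline litter size, effect size, and standard deviation below are illustrative values, not figures from the studies described.

```python
import random
import statistics

def detects_effect(n_per_group, true_diff=0.5, sd=1.5, trials=2000):
    """Fraction of simulated trials in which a two-sample test (normal
    approximation, alpha = 0.05 two-sided) detects the true effect."""
    hits = 0
    for _ in range(trials):
        control = [random.gauss(12.0, sd) for _ in range(n_per_group)]
        treated = [random.gauss(12.0 + true_diff, sd) for _ in range(n_per_group)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (statistics.variance(control) / n_per_group
              + statistics.variance(treated) / n_per_group) ** 0.5
        if abs(diff) / se > 1.96:
            hits += 1
    return hits / trials

print(detects_effect(10))    # ~10 sows per treatment: the real effect is usually missed
print(detects_effect(100))   # ~100 sows per treatment: detected far more often
```

    With underpowered trials, the occasional study that does reach significance will also tend to overstate the effect — which is one way contradictory small-n reports keep a marginal treatment alive.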
  • by Anonymous Coward on Friday April 06, 2012 @11:04AM (#39597467)

    These "scientists" are ranked and compensated by the number of publications they produce, so they publish one piece of research and try to pass it off in as many periodicals as possible, essentially representing old research as brand new. This compensation system has obscured the true purpose of publication: what was once a means to disseminate your work to the general population is now a means to get you and your lab more money.

    The incentives do cause quantity to be more important than quality, but as a grad student who does understand the concept of self-plagiarism, I gotta tell you that every single academic (not other students, actual professors) I've encountered will try to walk that line. It's not that we republish the same paper, it's that when we get results we're encouraged to think, "how many papers can we divide this up into?" So you publish one part in a journal, then an incremental improvement that you already had in a second journal (while citing the paper in the first one, but they are still fairly similar work). Depending on the nature of those incremental improvements, I can see someone trying to publish a paper that crosses the line.

    We need to change the culture to have journals publish papers that are just verification of data from other papers. That's the ideal work for grad students that are just starting to get in the field anyway, and it helps peer review immensely. Once you have a degree and a job doing research, you can start working on publishing new work, but then you'll be more worried about publishing half-baked ideas because you know there's an army of grad students just waiting to see if they can replicate your results.

  • Indeed (Score:5, Interesting)

    by Benfea ( 1365845 ) on Friday April 06, 2012 @12:18PM (#39598361)
    Physicists get cranky if you show them research with p values higher than one in ten thousand, but medical research routinely produces p values as high as 0.2. On top of that, you have a large number of wealthy corporations who have powerful financial motivations to influence the results of medical research fraudulently, which probably doesn't come up as often in other fields. There has been a lot of news about problems with medical research lately, mostly because the medical research community itself is discussing these issues, which is a promising sign.
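
    The gap between those two significance cultures can be made concrete with a quick noise simulation (the p thresholds from the comment converted to two-sided z values; the data is pure noise, so every "pass" is a false positive).

```python
import random
import statistics

def false_positive_rate(z_threshold, n=30, trials=5000):
    """Fraction of pure-noise experiments (no real effect at all) whose
    two-sample z statistic still clears the given threshold."""
    se = (2 / n) ** 0.5
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        if abs(statistics.mean(a) - statistics.mean(b)) / se > z_threshold:
            hits += 1
    return hits / trials

print(false_positive_rate(1.28))  # z for p ~= 0.2: roughly a fifth of pure noise "passes"
print(false_positive_rate(3.89))  # z for p ~= 1e-4: noise almost never passes
```

    At a p = 0.2 bar, about one in five no-effect comparisons looks like a finding; at the physics-style bar, essentially none do.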
  • by MozeeToby ( 1163751 ) on Friday April 06, 2012 @12:47PM (#39598789)

    Also conflating things, the placebo effect is stronger if you tell the recipient that there are side effects and list them off. Basically, whenever the patient becomes aware of experiencing one of the side effects it serves as a reminder that "I'm on that drug for my X", which increases the effect. In other words, drugs with more side effects are more likely to erroneously be shown effective.
