Medicine

Majority of Landmark Cancer Studies Cannot Be Replicated

New submitter Beeftopia writes with perhaps distressing news about cancer research. From the article: "During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 'landmark' publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development. Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature (paywalled). ... But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers." As is the fashion at Nature, you can only read the actual article if you are a subscriber or want to fork over $32. Anyone with access care to provide more insight? Update: 04/06 14:00 GMT by U L: Naffer pointed us toward informative commentary in Pipeline. Thanks!
  • by crazyjj ( 2598719 ) * on Friday April 06, 2012 @08:21AM (#39596543)

    But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers.

    As I've said before [slashdot.org], back when I was in academia there were always grant-whores around, and academics more interested in their own advancement than in science. However, too many people have come to treat science with a reverence more appropriate to a religion than to a system of knowledge and discovery. And so when I point out that there are scientists out there willing to cook the numbers, exaggerate, play to politics and/or public opinion, etc., I inevitably run into those who say "Science wouldn't allow that" (like my friend who's still in the field). But science is only as good as the people practicing it. And, in any field, there are always those willing to put their own personal interests ahead of the greater good.

    I just hope this doesn't cast a shadow over those out there who *are* doing good work and *are* trying to do it honestly. Sadly, some of the best researchers out there are the ones who make the least noise, get the least attention, get the fewest grants, and are least likely to get tenure.

    • by Anonymous Coward on Friday April 06, 2012 @08:28AM (#39596625)

      Those of us in other fields of science tend to hold up biomed as an example of how not to run a science. They tend to have a shoddy idea of experiment design and statistics. Same way when I was a student it was always the premeds who did all the cheating.

      • by crazyjj ( 2598719 ) * on Friday April 06, 2012 @08:34AM (#39596683)

        premeds

        I'll thank you not to use that kind of language on a family forum, sir!

      • by Black Parrot ( 19622 ) on Friday April 06, 2012 @08:42AM (#39596763)

        Same way when I was a student it was always the premeds who did all the cheating.

        During my orientation at a university, the Dean of my college said that's exactly what they found among the people who get busted.

        • Most pre-meds go on to become doctors. To be relevant, your comment would have to focus on those pre-med students who go on to become medical researchers. I'm as concerned about the current state of medical research as anyone, but I can't let this argument slide. I believe scientists would call this "bad sample selection" or somesuch. ;)
      • Mod this AC up.

        Medicine is not your typical science; it's a beast all of its own, with a huge political, financial, and industrial apparatus. Look up the share of public research grants taken up by the NIH, for instance - NASA's budget is a mere mosquito bite in comparison.

        Throw in Big Pharma, the insurance industry, the AMA, and the for-profit (and nonprofit) hospital industry, and it's no wonder scientific integrity is a mere footnote in medicine.

      • Indeed (Score:5, Interesting)

        by Benfea ( 1365845 ) on Friday April 06, 2012 @11:18AM (#39598361)
        Physicists get cranky if you show them research with p values higher than one in ten thousand, but medical research routinely produces p values as high as 0.2. On top of that, you have a large number of wealthy corporations who have powerful financial motivations to influence the results of medical research fraudulently, which probably doesn't come up as often in other fields. There has been a lot of news about problems with medical research lately, mostly because the medical research community itself is discussing these issues, which is a promising sign.
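
        To put rough numbers on that (a minimal Python sketch, not from TFA; the sample size and thresholds are just illustrative): under the null hypothesis p-values are uniformly distributed, so a p < 0.2 bar waves through roughly one no-effect result in five, while a physics-style 1e-4 bar admits one in ten thousand.

            import random
            from statistics import NormalDist, mean, variance

            random.seed(0)

            def null_experiment(n=50):
                # Two samples drawn from the SAME distribution, i.e. no real effect.
                a = [random.gauss(0, 1) for _ in range(n)]
                b = [random.gauss(0, 1) for _ in range(n)]
                se = (variance(a) / n + variance(b) / n) ** 0.5
                z = (mean(a) - mean(b)) / se
                return 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

            pvals = [null_experiment() for _ in range(10_000)]
            for alpha in (0.2, 0.05, 1e-4):
                rate = sum(p < alpha for p in pvals) / len(pvals)
                print(f"threshold {alpha}: {rate:.2%} of no-effect experiments pass")

        Stack publication bias on top of that base rate, and a literature full of p ~ 0.2 results will inevitably be littered with false positives.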
    • by hey! ( 33014 ) on Friday April 06, 2012 @08:36AM (#39596709) Homepage Journal

      I inevitably run into those who say "Science wouldn't allow that" (like my friend who's still in the field).

      Well, science is rather like democracy in that regard. It doesn't prevent mistakes, but what makes it better than other things people have tried is that it has a mechanism for correcting them.

      • Like Democracy, the mechanisms take a long time, often decades or even centuries.

        • And like Democracy, far too many people are enamoured of Science and completely shut off their minds, because Science cannot be wrong.
          "A guy in a lab coat said it, and mentioned something about a study; this is obviously the absolute and proven truth."

      • by crazyjj ( 2598719 ) * on Friday April 06, 2012 @10:07AM (#39597495)

        Yes, but unless that system is made as efficient as possible, it can take a very long time to correct itself. Eugenics [wikipedia.org] is the classic example. Sure, it was eventually shown to be so much junk science, but not before it contributed to millions being killed/lobotomized/institutionalized. Even though there were skeptics of it almost from the beginning, the biology and medical fields did a piss-poor job at self-correcting, and people suffered for decades after this should have been laughed away as humbug.

        Simply saying "Well, it will eventually sort itself out" is not an excuse to avoid reform.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      The incentive to fake results is always present in academia, as is the incentive to believe faked results. I recommend reading "Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World" by Eugenie Samuel Reich, which details the case of Jan Hendrik Schön. Reading this book, it isn't hard to see how so many people could fall into the trap of trying to get the numbers you think you should see, along with the academic prestige that comes with the cooked numbers.

      Amazon link to the book [amazon.com]

    • by Black Parrot ( 19622 ) on Friday April 06, 2012 @08:41AM (#39596749)

      But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers.

      A few years back, one of the USA's leading medical journals changed its rules to allow doctors who are receiving money from pharmaceutical companies to publish reviews of the drugs sold by those same companies. We may have a problem that runs deeper than "cutting corners".

      • Still, a pharmaceutical company would have much deeper problems with a false drug than a scholar would with a false paper.
        • by tbannist ( 230135 ) on Friday April 06, 2012 @09:11AM (#39597019)

          Not exactly. A drug that works no better than the placebo could be used for years or decades before anyone figures out that it doesn't do anything but create side effects. As long as there is no evidence of intentional malfeasance and there isn't a bunch of corpses linked to the drug, it would probably have little impact on profits even if it was exposed as useless.

          • by MozeeToby ( 1163751 ) on Friday April 06, 2012 @11:47AM (#39598789)

            Further confounding things, the placebo effect is stronger if you tell the recipient that there are side effects and list them off. Basically, whenever the patient becomes aware of experiencing one of the side effects, it serves as a reminder that "I'm on that drug for my X", which increases the effect. In other words, drugs with more side effects are more likely to be erroneously shown effective.

          • A drug that works no better than the placebo could be used for years or decades before anyone figures out that it doesn't do anything but create side effects.

            Not years, more like decades, maybe even a century....

            As a child, I was subjected to almost every known type of prescription cough syrup/suppressant to almost no effect (even the heavy-duty codeine-laced stuff). Now, ~30 years later, they are only just admitting that the non-prescription cough stuff is no better than placebo. Next thing you know they'll say the same thing about the prescription stuff.

            By the time they take this stuff off the market I'm sure it'll have been sold for 100 years and t

      • Do you have a problem with, say, pharma-employed scientists publishing their phase I and phase II results in biochemical journals? Because it's the same thing: nobody is going to work for free, and the doctors are paid to conduct Phase III trials.
    • by ATMAvatar ( 648864 ) on Friday April 06, 2012 @08:41AM (#39596753) Journal
      It is not that this cannot happen in science - more that the bad science will always eventually be revealed. TFA only serves to reinforce this idea. Though it is a tragedy that these particular problem studies were so lacking in scientific rigor, it is reassuring that the peer review system ultimately brought them to light, even if it took some time to do so.
      • by Nadaka ( 224565 ) on Friday April 06, 2012 @08:59AM (#39596917)

        Failure to replicate an experiment is not a certain indication that the original experiment was flawed or manipulated. But it does wink suggestively in that direction.

          Failure to replicate an experiment is not a certain indication that the original experiment was flawed or manipulated. But it does wink suggestively in that direction.

          To clarify: failure to replicate a single experiment in a particular field might just point to some error in the original experiment.

          However, failure to replicate 47 out of 53 experiments strongly suggests that we should question ALL of the results. That's a hell of a lot more than a mere "wink" in the wrong direction.

      • Yes, but it can take decades to correct even the simplest of errors. Take the Millikan oil-drop experiment, a brilliantly simple experiment to measure the charge on the electron. Unfortunately, Millikan and Fletcher made a small error in their analysis which led to an incorrect result, and it took a long time before the published value of the charge on the electron was correct.

        Researchers would unintentionally fudge their results to match Millikan's, because the idea that the published figure could be wrong wasn'

      • by Empiric ( 675968 )

        And, as it stands, for every one instance of bad science that is revealed, two new ones have already taken its place.

        Here we have the inevitable result of the trend of advocating "memes" as the overarching model of human cognition.

        If you don't like theism, Pirsig's "Quality", for one, is a much better guiding metaphysics.

    • by Missing.Matter ( 1845576 ) on Friday April 06, 2012 @09:10AM (#39597009)

      I've actually been on a ranting spree the past couple of days due to terrible journal manuscript submissions I've had to review recently. I can't tell you the number of times I've read a submission that had already been published outright in another periodical. Many foreign submitters don't understand the concept of plagiarism, let alone self-plagiarism. These "scientists" are ranked and compensated by the number of publications they produce, so they publish one piece of research and try to pass it off in as many periodicals as possible, essentially representing old research as brand new. This compensation system has obscured the true purpose of publication: what was once a means to disseminate your work to the general population is now a means to get you and your lab more money.

      I take my responsibility as a reviewer very seriously; the job of a scientist is not only to create new research, but to critique and evaluate the research of others. But many academics who have been in the field longer than I have approach reviewing as a chore, and focus only on the interesting half of their job, the research. I don't know how many of these terrible publications slip through the cracks due to lazy reviewers, but I'm sure it's more than I'm comfortable admitting to myself.

      • by Anonymous Coward on Friday April 06, 2012 @10:04AM (#39597467)

        These "scientists" are ranked and compensated by the number of publications they produce, so they publish one piece of research and try to pass it off in as many periodicals as possible, essentially representing old research as brand new. This compensation system has obscured the true purpose of publication: what was once a means to disseminate your work to the general population is now a means to get you and your lab more money.

        The incentives do make quantity more important than quality, but as a grad student who does understand the concept of self-plagiarism, I gotta tell you that every single academic (not other students, actual professors) I've encountered will try to walk that line. It's not that we republish the same paper; it's that when we get results we're encouraged to think, "how many papers can we divide this up into?" So you publish one part in a journal, then an incremental improvement that you already had in a second journal (citing the paper in the first one, though they are still fairly similar work). Depending on the nature of those incremental improvements, I can see someone trying to publish a paper that crosses the line.

        We need to change the culture so that journals publish papers that are just verifications of results from other papers. That's ideal work for grad students who are just getting into the field anyway, and it helps peer review immensely. Once you have a degree and a job doing research, you can start publishing new work, but then you'll be more wary of publishing half-baked ideas, because you know there's an army of grad students just waiting to see if they can replicate your results.

      • by pavon ( 30274 ) on Friday April 06, 2012 @10:13AM (#39597545)

        These "scientists" are ranked and compensated by the number of publications they produce, so they publish one piece of research and try to pass it off in as many periodicals as possible, essentially representing old research as brand new.

        This problem has also created a backlash that affects genuine researchers. My adviser had been working on some new research and was invited to present at a conference. So he wrote up his work in progress (limited to 4 pages) and presented there. When the work was completed he tried to submit it to a journal, and one of the reviewers rejected it because it was "just a copy of this prior work." This is despite the fact that the 12-page journal paper went into far more detail, provided proofs for what were conjectures at the time of the conference, and corrected significant errors in that preliminary work. So now the only scientific record of this work is an incomplete, incorrect account.

        • by Missing.Matter ( 1845576 ) on Friday April 06, 2012 @10:54AM (#39597995)
          This isn't a problem in my field. In fact, it's standard procedure to present work first at a conference, build on it, and then expand on it in a Journal paper. You're expected of course to cite your previous work in your new paper and explain explicitly what the new results are. One of the biggest journals actually suggests this before submitting. In the journals I review for, usually, several reviewers look at a paper, and recommend a decision to an editor, who makes the final call based on those reviews, so the bias or agenda of one reviewer is diluted.
        • by sunhou ( 238795 )

          When that happens, you revise the paper to make it more clear how the material differs from the earlier, smaller paper, and submit to another journal (if that journal won't consider a re-submission), and also in the cover letter to the editor/reviewers emphasize why this new paper is worthy of being published even with the existence of the prior one.

          At least with journal articles you have the opportunity to provide these extra out-of-band communications, and there is room for back-and-forth exchanges betwee

    • by crmarvin42 ( 652893 ) on Friday April 06, 2012 @09:31AM (#39597195)
      Your post makes me think of two recent instances in my field (I am a non-ruminant nutritionist).

      1st has to do with a professor down in Texas who is pushing the feeding of supplemental L-arginine to sows while acting as a "consultant" for an arginine manufacturer. He's been pushing it based on frequently contradictory reports of improved litter sizes and reduced piglet mortalities. However, he's never had sufficient statistical power: you need at least 100 sows per treatment because of the high standard deviations involved, but he frequently uses fewer than 10 sows per treatment (see the back-of-the-envelope power calculation at the end of this post). At the Midwest American Society of Animal Science meeting in Des Moines, IA this year there were two presentations from industry where they EACH used over 100 sows per treatment and found no positive effects of feeding supplemental L-arginine. They never mentioned the Texas professor directly, but you could tell that both studies were intended as a rebuttal of what they considered bad, self-serving science.

      2nd has to do with an article I read critiquing the use of what is called "nutritional epidemiology", which can be found here [garytaubes.com]. It is an incredibly long but very insightful critique of a field that is given far too much credence simply because of where the scientists work, and of how free they are with chicken-little-esque proclamations about how meat is going to increase your chances of dying by 30%!! (Everyone has exactly a 100% chance of dying.)
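
      For the curious, here's the arithmetic behind "you need at least 100 sows per treatment" (a back-of-the-envelope normal-approximation power calculation in Python; the SD and effect sizes are made-up illustrative numbers, not data from any of these studies):

          from statistics import NormalDist

          def n_per_group(sd, delta, alpha=0.05, power=0.8):
              # Approximate sample size per group to detect a difference in
              # means of size delta, given standard deviation sd.
              z_a = NormalDist().inv_cdf(1 - alpha / 2)
              z_b = NormalDist().inv_cdf(power)
              return 2 * ((z_a + z_b) * sd / delta) ** 2

          # E.g. litter-size SD of ~3 piglets, hoping to detect a 1-piglet gain:
          print(round(n_per_group(sd=3.0, delta=1.0)))  # ~141 sows per treatment
          # With only 10 sows per group, a detectable effect would have to be huge:
          print(round(n_per_group(sd=3.0, delta=4.0)))  # ~9

      With 10 sows per treatment you can only reliably detect effects so large they'd be obvious without a study, which is why small-n positive results in this area deserve suspicion.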
    • ...when I point out that there are scientists out there willing to cook the numbers, exaggerate, play to politics and/or public opinion, etc. ...

      Of course, it could be simple error/sloppiness on the part of the researcher(s) in the original experiment (they didn't document something), or an issue with the people trying to replicate the experiment (i.e., they're not that good). See Hanlon's Razor [wikipedia.org]: "Never ascribe to malice that which is adequately explained by incompetence." That said, 47/53 non-reproducible results seems suspicious.

    • by tibit ( 1762298 )

      Yeah -- just look at this little gem:

      Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.
      "We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

      That's exactly what drove Feynman up the wall, what made him speak so loudly against pseudoscience. Sigh.

    • Some thirty years ago, I was doing software development for a number of research outfits (neurological data acquisition, that sort of thing.) I recall one conversation between a lead research scientist and his division chief. They were working on a rather large grant proposal at the time. The dialog revolved around a key dataset that had some points that didn't support their conclusions. The chief was suggesting that they simply remove the (ahem!) "bad" data from the proposal. The scientist, who was becomin
    • And so when I point out that there are scientists out there willing to cook the numbers, exaggerate, play to politics and/or public opinion, etc. I inevitably run into those who say "Science wouldn't allow that" (like my friend who's still in the field). But science is only as good as the people practicing it. And, in any field, there are always those willing to put their own personal interests ahead of the greater good.

      The field of 'biomedical engineering' has done just that to secure lucrative funding. You might be on the cutting edge of signal processing and I might be looking at something garden-variety, but we're both doing it under the guise of building a better prosthetic hand.

  • by Naffer ( 720686 ) on Friday April 06, 2012 @08:27AM (#39596611) Journal
    See this discussion of the same paper on In the Pipeline, a blog devoted to organic chemistry and drug discovery. http://pipeline.corante.com/archives/2012/03/29/sloppy_science.php [corante.com]
  • by concealment ( 2447304 ) on Friday April 06, 2012 @08:35AM (#39596699) Homepage Journal

    There is no greater "absolute power" than knowing that if you say or write something that others will like, they will pay you lots of money and make you famous.

    It's not that money corrupts. Money is not the root of all evil; the full quote is "the love of money is the root of all evil." When our society decided that money was more important than truth, we surrendered truth to the void.

    A research scientist thinks about his day. He can slightly fudge his cancer study, make big headlines, get a ton of grant money and get appointed chair at the university. Or he can go down the long hall to his boss and say, "Nope, this one didn't work either, and while I'd like to start a religion based on false hope, this isn't the false hope you're looking for." If he does that, he can then watch one of his subordinates fudge the cancer study, make big headlines, and be his boss at the same time next year.

    Which choice would you make?

    • by WhiplashII ( 542766 ) on Friday April 06, 2012 @10:04AM (#39597471) Homepage Journal

      No, it's a little worse than that. The honest guy doesn't get tenure, and is eventually fired. The dishonest guy remains a "scientist" for life. So in the steady state, there will be many, many more dishonest "scientists" than honest ones.

      • It has very little to do with "dishonesty". No one is fabricating results. In fact, there are very few cases of genuine scientific fraud. What this article is referring to is the unfortunate reality that you can't prove a negative, but you do need to publish something. If you have six trials and one works, then you can publish that (quick arithmetic on those odds below). And it's not dishonest - 1 trial did work. And you're free to speculate about why that should be. And at no point have you actually falsified any results.

        Conversely, no one's go
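
        The "six trials, one worked" arithmetic is worth spelling out (a minimal sketch; the p < 0.05 cutoff is just the conventional one, not something from the paper). Even for a drug that does nothing, each trial has a roughly 5% false-positive chance, so the odds that at least one of six clears the bar are surprisingly good:

            # Chance that at least one of k independent null trials is "significant"
            alpha = 0.05
            for k in (1, 6, 20):
                p_hit = 1 - (1 - alpha) ** k
                print(f"{k} trials: {p_hit:.1%} chance of at least one publishable 'hit'")

        Six trials gives about a 26% chance of a hit; report only the hit and the literature looks positive without anyone fabricating a single data point.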

    • by crazyjj ( 2598719 ) * on Friday April 06, 2012 @10:21AM (#39597605)

      Or he can go down the long hall to his boss and say, "Nope, this one didn't work either, and while I'd like to start a religion based on false hope, this isn't the false hope you're looking for."

      And that's what *really* pissed me off about academia. Guys like that never get tenure, never get thanked. With so many of the people I worked with, "hypothesis" was synonymous with "foregone conclusion." The standard practice was to come up with your hypothesis, cook up a bunch of data to support it (dismissing any evidence that contradicted it with a little intellectual sleight-of-hand), publish, and then get your promotions and tenure. The guys who treated their hypotheses as ACTUAL hypotheses (that they might actually find to be wrong) were treated like bad researchers, when in fact, they were the *good* researchers. With so many people cooking the numbers, it began to be assumed that if your hypotheses weren't always proven right, it meant you were somehow flawed.

    • by geekoid ( 135745 )

      Since I actually know a lot of scientists, I can say you are full of bullshit.

      A) proving something wrong is worth more money and recognition.

      I would make the same choice most scientists make, the honest one.

      Sure, you would cheat to get ahead, but stop seeing that in everyone else.

      The problem here is that for some reason he expected single studies to be 100% accurate.

      Oh, and if what you said were true (it isn't), then we would be reading about it.

      And finally: It's an article, don't expect it to be a very good re

      • A) proving something wrong is worth more money and recognition.

        Proving someone else's claims wrong is worth more money and recognition. Proving your own theories wrong, not so much.
        Even without money, there are incentives to find that your hypothesis is correct. There is really no good way to fix this problem.

    • by fermion ( 181285 )
      There is basic, what I will call real, research, and then there is the handwaving applied stuff. One big issue we have now in science is the confusing of the two. Increasingly, medical research has become the bane of science. Paid for by firms that want a specific result rather than by governments that profit from the increase of actual knowledge, research has become a means to increase personal wealth rather than a means to increase the wealth of all humans. We see this with the desire to patent life. Can y
  • by danbuter ( 2019760 ) on Friday April 06, 2012 @08:37AM (#39596717)
    Many of these scientists are getting big grants to do their research. At least a few of them might be skewing their results, even just a little, to give the answers their backers want, in order to keep the money flow open. This goes for a lot of scientific research. (Not to mention the politics of getting published if your research contradicts a heavyweight in your field).
    • So... we should only trust rich people?

      Good call, I'm sure they are all fair, honest, and equitable in their decision making. After all, that's how they got to be rich, right?
      • So... we should only trust rich people?

        No, never trust the rich, the last thing they want is more company at the top. The rich cartoonist Scott Adams said so...

  • by peter303 ( 12292 ) on Friday April 06, 2012 @08:42AM (#39596767)
    Two weeks ago it was reported that social science studies are usually not replicated, either because they are not true or because replication is too expensive. They were trying to explain the rise in psychology paper retractions and job firings as the result of poor science.
  • I call bullshit (Score:5, Interesting)

    by Black Parrot ( 19622 ) on Friday April 06, 2012 @08:52AM (#39596841)

    Given the expense, I flat don't believe that a private company just decided to replicate 53 studies.

    And he claims that authors "made" his team sign confidentiality agreements. How do authors force that?

    So, he claims, he can't even tell us which studies failed.

    Now he works at a different cancer research company. Conflict of interest?

    I don't doubt that we've got problems in the "medical industry", but the linked article reeks of bullshit.

    Has anyone looked at Nature?

    • by gl4ss ( 559668 )

      well, you need to sign confidentiality agreements to reproduce the studies..

      what, you think the scientific papers have all that you need for duplication, implying real science? you think peer review is nowadays actually trying to reproduce any of the findings? fuck no. it's much easier to provide vague results from tests the reader can't verify - that way you're not a fraud but are still "working".

      • Re:I call bullshit (Score:4, Informative)

        by Black Parrot ( 19622 ) on Friday April 06, 2012 @09:33AM (#39597217)

        well, you need to sign confidentiality agreements to reproduce the studies..

        No you don't. You just need to read the published paper and attempt to reproduce what the paper reports. (A good scientific paper includes enough information to make the work it reports on reproducible.)

        *However*, I suspect my post was over-reactive in a couple of regards:

        a) They might have asked the authors for their unpublished raw data, in which case a confidentiality agreement becomes plausible.

        b) When I read "landmark studies", I think longitudinal studies or clinical trials. However, it appears that they were using their own notions of "landmark", and included things like the effect of a chemical on cell biology. That sort of thing can be reproduced at a cost a private company would undertake.

        However, the "I can't tell you" criticism still stands. Among the posts at the on-line article in the Slashdot update, someone points out that the Nature article is a complaint about irreproducible results, but is not itself reproducible. Basically, from what I'm reading, Nature published an anecdote.

        Maybe it was a letter rather than a paper?

    • A few things here:

      His team replicated 53 studies? I haven't read through the details, but I would imagine these were long-term, difficult studies to complete. I smell BS.

      His employees could very well be inept. This would explain why they were unable to replicate experiments. Science is hard.
    • Re:I call bullshit (Score:5, Informative)

      by Anonymous Coward on Friday April 06, 2012 @09:50AM (#39597345)

      And look at many of the complaints he has about academic research in this 'paper'.

      "In addition, preclinical testing rarely includes predictive biomarkers that, when advanced to clinical trials, will help to distinguish those patients who are likely to benefit from a drug."

      When I publish a paper, I'm trying to present the interesting findings in regards to the molecular system I'm studying. Finding a bunch of associated biomarkers is an entirely different study. If you expect me to write multiple studies/papers rolled into one and replicate the experiments over and over in dozens of cell lines and animal models, you better be paying me to do it, because I don't have the funding for that in the small budget I got for the proposed study funded by taxpayers through the NIH. When I wrote the grant to do the study, I didn't even know I would find anything for sure, so I didn't ask for a budget 20x the size so I could replicate it in every model out there to look for biomarkers as well, just in case I found something. Asking for that is a good way to have your grant rejected in the first place. My paper was to find the interesting thing. By publishing it I'm saying, 'we found something interesting, please fund us or other researchers to take the next step and replicate, test in other systems, etc.' Publishing my results does not mean 'Big Pharma should go ahead and spend $10 million trying this in humans right now!'

      "Given the inherent difficulties of mimicking the human micro-environment in preclinical research, reviewers and editors should demand greater thoroughness."

      It's tough to do. That's why we typically do small, bite-size chunks. That, and the size of our grants only allows for bite-size chunks. Want greater thoroughness? Increase our funding. Ohh, but that's from taxpayers, and you want them to spend all the money doing research so you don't have to.

      "Studies should not be published using a single cell line or model, but should include a number of well-characterized cancer cell lines that are representative of the intended patient population. Cancer researchers must commit to making the difficult, time-consuming and costly transition towards new research tools, as well as adopting more robust, predictive tumour models and improved validation strategies. "

      Cancer researchers must commit to the costly transition? Yes, yes, research is being held up because all those academics, with all their mega-millions in earnings each year, just aren't willing to pony up the cash to do their experiments right. We live off grants. If there isn't funding to do a huge study, we can't. Simple. No 'not willing to commit' involved.

      "Similarly, efforts to identify patient-selection biomarkers should be mandatory at the outset of drug development."

      Once again, the budget for that wasn't in my grant because I didn't know I would even find anything, let alone need to find every associated biomarker. Want to know the biomarkers? Then pay for it, or wait for me to publish this first paper, then write another grant asking for funding to look for biomarkers now that I've got a very good reason to look for them and spend the money. In a rush? Either pony up the cash or stop whining about taxpayer-funded academics not providing everything to you on a silver platter in record time.

    • Amgen is a big company--they got $15.6 billion in revenue last year and spent $3.2 billion on research in 2011 according to their fact sheet [amgen.com]. Presumably they have some money to spend trying to replicate published studies, or at least their main findings. I would think that replicating results would be part of their due diligence; if they're going to invest time, money, and resources developing a product based on the results of a research paper, they need to have some confidence that that investment is base

    • This smelled funny to me as well. Aside from whether they really attempted to replicate that many studies - which would be a very major undertaking - they are saying that "the studies can't be replicated" rather than "we couldn't replicate the studies." On top of that, rather than reporting these failures as they occurred and subjecting them to peer review, they come out after they are finished with precious few details, so that their own replications cannot be replicated. Good Grief.

      For me, that casts mor

    • These replication studies were a cost-saving measure. Doing a clinical trial on people is more expensive than doing a preclinical trial on cell cultures. Before committing to the more expensive trial, they tried to repeat the research the trial was based on. Over 10 years they did this 53 times and only got positive results 6 times. They then focused their money on those results.
  • by AtomicDevice ( 926814 ) on Friday April 06, 2012 @08:53AM (#39596847)

    While certainly there are those who will publish findings they know to be false, that's not really the big issue I see here. Good science demands that studies be replicated so they can be upheld or refuted. Sure, there's confirmation bias in science all over the place - the bigger problem I see is that there's very little incentive to publish a paper that simply refutes another. Busting existing studies should be a glorious field, but it's not. If big-name scientist A publishes a result in Nature, and no-name scientist B publishes a refutation in the journal-of-no-one-reads-it, everyone just assumes scientist B is a bad scientist (assuming he even managed to get published at all).

    Another major issue is that the null hypothesis is a very un-enticing story. No one wants to publish the paper: "New Drug does nothing to cure cancer". If you spent a year and a ton of money researching New Drug, you're damn sure going to try and make it work. It's unfortunate, because often the null hypothesis is very informative, but it doesn't get you paid or published. Or how about the psychology paper: "Brain does not respond to stimulus A in any meaningful way", don't remember that paper? That's because it never got published.

    I think this is less about malicious behavior, and more about a lack of interest (which can somewhat be blamed on the way universities/journals/grants handle funding, notoriety, etc) in replicating and refuting studies.

    Do you want to be the guy who cured cancer, or the guy who disproved a bunch of studies?

  • Medical scientists (Score:3, Interesting)

    by nedlohs ( 1335013 ) on Friday April 06, 2012 @08:55AM (#39596863)

    are famously bad at science and statistics.

    And there's no benefit (to the researcher) in replicating a study that's already been done, which makes for an obvious problem.

    Medical science isn't alone in this of course, it just seems to be worse than most.

  • by kahizonaki ( 1226692 ) on Friday April 06, 2012 @08:58AM (#39596909) Homepage
    A recent PLOS article (free to view!) analyses modern research, coming to the conclusion that most research findings are false.
    TLDR: because of the nature of the statistics used and the fact that only positive results are reported (a sketch of the argument below).
    http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124 [plosmedicine.org]
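
    The core of Ioannidis' argument fits in a few lines (a sketch of the positive-predictive-value formula from that paper; the example priors are illustrative assumptions, not his numbers):

        def ppv(prior, power=0.8, alpha=0.05):
            # P(finding is true | study reports a positive result), where
            # prior = fraction of tested hypotheses that are actually true.
            true_pos = power * prior
            false_pos = alpha * (1 - prior)
            return true_pos / (true_pos + false_pos)

        # If 1 in 10 tested hypotheses is true, a "significant" finding is right
        # only about 64% of the time, and that's before any bias:
        print(f"{ppv(prior=0.10):.0%}")
        # In an exploratory field where 1 in 100 is true, most findings are false:
        print(f"{ppv(prior=0.01):.0%}")  # ~14%

    Add even modest researcher bias to the model and the numbers get worse, which is the paper's headline claim.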
  • by golden age villain ( 1607173 ) on Friday April 06, 2012 @09:04AM (#39596965)
    Let's publish it quickly, then get a tenured job and change topic before others find out it's all wrong!
  • by Anonymous Coward

    These findings are no surprise to those who have been following medical science and research for the past decades. See for example what Dr John Ioannidis has to say about the consistency, accuracy and honesty (or lack thereof) of medical science in general [theatlantic.com]: "as much as 90 percent of the published medical information that doctors rely on is flawed","There was plenty of published research, but much of it was remarkably unscientific, based largely on observations of a small number of cases", "he was struck by

  • John Ioannidis, a medical statistics researcher on a small island in the Aegean, leads a group that has done significant work in this area. Here [theatlantic.com] is an article in The Atlantic about his work.

    From the article: ". . . Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simp
  • ...re: the double-slit experiment and what happens when you try to observe the experiment.

    If they want the same results, then don't observe... as apparently they didn't the first time.

  • It seems to be affecting all branches of science - http://wmbriggs.com/blog/?p=4832 [wmbriggs.com]

  • When drug companies force you to sign NDAs to conduct studies, what they're really doing is putting a failsafe in so that if the study goes bad they can sit on it and not let the bad news get out.

  • Well, it's time to start a new journal (among the 100,000 already existing journals). Create a huge lab and get lots of funding to pay the staff for their work, and tell them to reproduce every result from every submitted paper. Publish only if the result was reproducible. Expensive as hell, but soon that would be the only journal people bother to spend time reading.

  • I assume many here like me engage in conversations with people who are... even less up on science than we are (and most of us here are very far from being actual scientists). This is a good opportunity to clear up a very common misconception I find among the less scientifically literate. Too many people think that once something has been published in a peer-reviewed journal that you can take it as fact, and this is simply false. Getting published in a peer-reviewed journal is the beginning of the peer revie
