Medicine Stats Science

Reanalysis of Clinical Trials Finds Misleading Results

sciencehabit writes: Clinical trials rarely get a second look — and when they do, their findings are not always what the original authors reported. That's the conclusion of a new study (abstract), which compared 37 reanalyses of clinical trials against the original reports. In 13 cases, the reanalysis reached a different conclusion, a finding that suggests many clinical trials may not accurately report the effect of a new drug or intervention. Moreover, only five of the reanalyses were done by an entirely different set of authors, meaning most trials never got a neutral second look.

In one of the trials, which examined the efficacy of the drug methotrexate in treating systemic sclerosis—an autoimmune disease that causes scarring of the skin and internal organs—the original researchers found the drug to be not much more effective than the placebo, as they reported in a 2001 paper. However, in a 2009 reanalysis of the same trial, another group of researchers, including one of the original authors, used Bayesian analysis, a statistical technique that can compensate for the small data sets that plague clinical trials of rare diseases such as systemic sclerosis. The reanalysis found that the drug was more effective than the placebo and had a good chance of benefiting sclerosis patients.
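
To make the statistical disagreement concrete, here is a minimal sketch of this kind of frequentist-versus-Bayesian comparison in Python. The response counts, the Beta-Binomial model, and the uniform priors are illustrative assumptions, not the actual data or model from the 2009 reanalysis.

```python
# Minimal sketch of a small two-arm trial analyzed two ways.
# NOTE: counts and model are illustrative assumptions, not the
# methotrexate trial's actual data or the 2009 paper's analysis.
import numpy as np
from scipy import stats

# Hypothetical outcomes: responders / total patients per arm
drug_resp, drug_n = 12, 35
placebo_resp, placebo_n = 7, 36

# Frequentist view: Fisher's exact test on the 2x2 table
table = [[drug_resp, drug_n - drug_resp],
         [placebo_resp, placebo_n - placebo_resp]]
_, p_value = stats.fisher_exact(table)
print(f"Fisher exact p = {p_value:.2f}")  # likely not significant at this n

# Bayesian view: uniform Beta(1,1) priors give conjugate Beta posteriors
drug_post = stats.beta(1 + drug_resp, 1 + drug_n - drug_resp)
placebo_post = stats.beta(1 + placebo_resp, 1 + placebo_n - placebo_resp)

# Monte Carlo estimate of P(drug response rate > placebo response rate)
rng = np.random.default_rng(0)
draws = 100_000
p_better = np.mean(drug_post.rvs(size=draws, random_state=rng)
                   > placebo_post.rvs(size=draws, random_state=rng))
print(f"P(drug beats placebo) ~= {p_better:.2f}")
```

Even when the p-value fails to clear 0.05, the posterior probability that the drug beats the placebo can be high; that is the flavor of disagreement described above.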
  • Decline Effect (Score:5, Informative)

    by Pino Grigio ( 2232472 ) on Wednesday September 10, 2014 @08:46AM (#47870923)
    Isn't this generally known as The Decline Effect [newyorker.com]? It's not just clinical trials; it applies to almost everything, to varying degrees. It's also been interpreted as The Half-Life of Knowledge [wikipedia.org].
    • by tomhath ( 637240 )
      That article is a very long-winded way of saying that, yes, a small sample size can give results that are far from the mean. Flip a coin five times and you might get heads all five times (a 1-in-32 chance); does that mean the coin will always come up heads if you repeat the experiment? No.
        • I don't think that's what it means, no. It means there are difficult-to-discover sources of bias in the design, implementation, and interpretation of studies. Over time those biases change, and so (remarkably) do the results.
  • by Archtech ( 159117 ) on Wednesday September 10, 2014 @08:50AM (#47870945)

    Now that is an interesting observation! Mostly, in science, when someone does an experiment that supposedly proves a theory, the next step is to document and publish every detailed step. Only when a number of peers have replicated the results can they be accepted with any confidence.

    Yet in clinical trials of new drugs, it seems, only a single trial is ever done. How did that ever get accepted as proper scientific evidence?

    • by Archtech ( 159117 ) on Wednesday September 10, 2014 @08:56AM (#47870999)

      Whoops, I misunderstood the article for a moment there. If it's a matter of incorrect or misleading statistical analysis, that seems to be rife in studies of nutrition at least. Part of the problem may be that the same people develop a theory, conduct studies to test it, and do the statistical analysis on their numbers. Naturally, the numbers usually turn out to support their theory!

      It might be safer if the three different activities were done by separate teams, with a "blind" system so that no team knows who the other teams are. The theory is developed by Team A, the studies/experiments to test it are designed by Team B, and the numbers are analyzed by Team C. Team C would then have no idea what theory it was analyzing, or what any correlations it found might mean.

      • by oh_my_080980980 ( 773867 ) on Wednesday September 10, 2014 @09:05AM (#47871075)
        Jesus Christ, people, read the article, not the title. Hell, even the abstract pointed out the problem: SMALL DATA SETS! There was nothing wrong with the original reporting; based on the sample size, that was the proper conclusion to reach. I would not jump to the conclusion that the Bayesian analysis overturned the original conclusion. What the Bayesian analysis points to is that a new trial should be conducted with a larger sample size.

        FYI, there's already a blind system. Again: read, people.
    • Drugs go through four phases of clinical trials [nih.gov], as required by the FDA.

      That being said, the whole clinical trial process to get a drug approved by the FDA is pretty messed up. From how pharmaceutical companies are involved, to how patients are (or are not) qualified to participate, to how adverse events [wikipedia.org] aren't necessarily properly documented: the list goes on.

    • by Shimbo ( 100005 )

      Yet in clinical trials of new drugs, it seems, only a single trial is ever done.

      That's not true at all. Generally, multiple trials are done and the most favourable results published. http://www.alltrials.net/ [alltrials.net]
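
As a toy illustration of why publishing only the most favourable of several trials misleads, here is a short simulation in Python; all numbers are invented, and the simulated drug has zero true effect.

```python
# Toy model of selective publication. All numbers are invented;
# the simulated drug has ZERO true effect.
import random
import statistics

random.seed(42)

def run_trial(n=30):
    """One two-arm trial of a null drug: both arms draw from N(0, 1).
    Returns the observed drug-minus-placebo difference in means."""
    drug = [random.gauss(0, 1) for _ in range(n)]
    placebo = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(drug) - statistics.mean(placebo)

published = []
for _ in range(1000):                         # 1000 drug programs
    trials = [run_trial() for _ in range(5)]  # 5 trials per drug
    published.append(max(trials))             # publish only the best

print("true effect: 0.00")
print(f"mean 'published' effect: {statistics.mean(published):+.2f}")
```

Averaged over many repetitions, the "published" effect sits well above zero even though the drug does nothing.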

    • by Rich0 ( 548339 )

      There are lots of things that are messed up about clinical trials, and the main reason is that they are VERY expensive to run. The problem is that you can't give somebody a drug unless their doctor is involved. Doctors make a lot of money, and clinical trials take a lot of their time. So, if you want somebody who makes $500k/yr to spend 10 hours per week on a clinical trial, what is the only way to get them to cooperate? You have to pay them a LOT of money. Multiply this b…

    • by Anonymous Coward

      I think there's an underlying assumption in your comment that is incorrect, which is about the purpose of clinical trials. That's OK, though; most people do not understand the purpose of the FDA and clinical trials.

      The FDA's purpose is not to validate your science; that mission is covered by journals, publishing, and peer review. The FDA is, in fact, a marketing clearance organization.

      The FDA was created because it was recognized that health is so complex that many pe…

    • by Altrag ( 195300 )

      It didn't. Clinical trials are performed for exactly one reason: to get FDA approval. Not for scientific integrity, not for human safety. Purely for political purposes.

      It's a crappy system. You hear all kinds of stories about companies just discarding the results of any trial that didn't prove their drug "worked" (because they're only required to provide a certain number of trial results; they're not required to provide the results of every trial they performed, or even note that those trials happened…

  • Selection bias? (Score:5, Insightful)

    by Chris Mattern ( 191822 ) on Wednesday September 10, 2014 @09:15AM (#47871165)

    They looked at reanalyses that had already been done for other reasons, rather than doing their own reanalyses on randomly selected trials. It occurs to me that these trials may have been subjected to reanalysis precisely *because* there were doubts about the initial analysis.

    • So does that mean a re-analysis of the article on re-analysis leads to different conclusions than the original article?! HA!

      But I have the sneaking suspicion that this re-analysis won't be published, which opens a whole other can of selection-bias worms.

  • This seems to highlight the reasons behind the All Trials movement: http://www.alltrials.net/ [alltrials.net]
  • by Rich0 ( 548339 ) on Wednesday September 10, 2014 @09:35AM (#47871351) Homepage

    Anytime you re-analyze data you run into this. [xkcd.com]

    Think about it. There are a million ways you can analyze any dataset. There are millions of datasets out there to analyze. There are millions of people who can independently decide to go back and do a re-analysis.

    So, the issue is that if somebody goes back and does a re-analysis and the results are boring, nobody publishes. However, if the results are controversial, it gets published. Since there are so many permutations, you're guaranteed to find something exciting.

    This is why you're supposed to establish your methods BEFORE you collect the data, and then stick to the methods you established to analyze the data. Otherwise your 95% confidence turns into a more realistic 1% confidence.

    In practice, though, I'm sure the initial analyses are just as prone to this kind of problem. It just gets REALLY bad when you look backwards.
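
The parent's "million ways to analyze" point is easy to demonstrate: take data with no real effect, make 40 comparisons, and a nominally significant result is nearly guaranteed. A toy simulation in Python, with all parameters illustrative:

```python
# Toy demo of multiple comparisons: no real effect anywhere, yet
# 40 looks at the data almost always yield a nominal p < 0.05.
# All parameters are illustrative.
import random
from math import sqrt
from statistics import mean, stdev

random.seed(1)

def z_stat(a, b):
    """Crude two-sample z statistic; fine for a toy simulation."""
    return (mean(a) - mean(b)) / sqrt(stdev(a) ** 2 / len(a)
                                      + stdev(b) ** 2 / len(b))

experiments, hits = 1000, 0
for _ in range(experiments):
    found = False
    for _ in range(40):  # 40 subgroup comparisons, each on its own patients
        drug = [random.gauss(0, 1) for _ in range(20)]
        placebo = [random.gauss(0, 1) for _ in range(20)]
        if abs(z_stat(drug, placebo)) > 1.96:  # nominal two-sided p < 0.05
            found = True
    hits += found

# Far above the nominal 5% you'd see with one pre-registered test
print(f"chance of >= 1 'significant' finding: {hits / experiments:.0%}")
```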

  • by silfen ( 3720385 ) on Wednesday September 10, 2014 @09:51AM (#47871465)

    There are many things wrong with clinical trials, but this isn't one of them. Both the original article and the reanalysis use valid statistical procedures, and they do not contradict each other. The original analysis didn't prove the absence of an effect; it merely failed to show the existence of one. The new analysis shows that the drug is, in fact, more effective under some (weak, reasonable) a priori assumptions.

    Whether to use statistical hypothesis testing (frequentist methods) or Bayesian analysis is a long-running debate in statistics and medicine. Both techniques are mathematically valid. Statistical hypothesis testing makes fewer a priori assumptions, which is why people have traditionally trusted it more and why it is widely taught and used in science. But over the years, people have come to realize that overly pessimistic assumptions can be harmful, such as when clinical trials continue too long or life-saving drugs are rejected. Although I personally think Bayesian methods are a better way of analyzing the data, I think the debate over which methods to use is the way scientific debate and change should happen: slowly, with careful re-analysis and re-examination of data and experimental results.
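
To see how the a priori assumptions mentioned above change the answer, here is a small sketch: the same hypothetical trial data scored under three different Beta priors. The counts, the null rate, and the priors are all invented for illustration.

```python
# Sketch: same hypothetical data, three priors, three posteriors.
# Counts, null rate, and priors are all invented for illustration.
from scipy import stats

resp, n = 9, 20     # hypothetical: 9 of 20 patients respond
null_rate = 0.30    # assumed response rate without the drug

priors = {
    "uniform    Beta(1,1)": (1, 1),
    "skeptical  Beta(3,7)": (3, 7),  # centered near the null rate
    "optimistic Beta(7,3)": (7, 3),
}
for name, (a, b) in priors.items():
    post = stats.beta(a + resp, b + n - resp)  # conjugate Beta posterior
    # Posterior probability that the true response rate beats the null
    print(f"{name}: P(rate > {null_rate}) = {1 - post.cdf(null_rate):.2f}")
```

A skeptical prior pulls the posterior toward the null rate; an optimistic one pulls it away. Stating the prior before seeing the data is what keeps this honest.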

  • by Anonymous Coward

    This is why the meta-research regarding the safety and efficacy of GMO food organisms is so important. We are constantly told by the industry and their online astroturf army that there are "thousands" of studies showing the safety of GMOs, but it turns out they're basically the same shallow study repeated 2,000 times. The most disturbing findings have come from the kind of reanalysis this story describes. But now, those studies get shouted down because supposedly it's "settled science".

    Whenever there's money…

  • But with other studies, they say, researchers don't have much incentive to redo trials/experiments. You don't make a name for yourself by just confirming someone else's work.
  • by Dimwit ( 36756 ) on Wednesday September 10, 2014 @10:18AM (#47871693)

    Let's compare two companies that depend on science - IBM and GlaxoSmithKline.

    Let's say IBM discovers a new method of lithography for building microchips. They publish their results, and their results are replicated. More importantly, IBM gets a new, presumably better way of making microchips.

    GlaxoSmithKline makes a new drug that treats a psychological illness. To some degree, because there are no objective physical tests for most psychological illnesses, the determination of effectiveness is made subjectively.

    Both companies want the science to turn out right, because it makes them money. One of them has a much easier time massaging the results of any studies.

    • by Rich0 ( 548339 )

      Well, in your analogy there is another big difference between IBM and Glaxo. In your example, if IBM does a faulty study showing that a new way of making microchips is better, the result is that IBM spends a billion dollars on a new fab and produces faulty microchips that nobody buys. In the other case, it is others who rely on the results of the study to decide whether to buy Glaxo's products. This is why it is much more important that drug trials be regulated. If…

  • A real shocker. As basically no profession has a good grip on statistics except specialized mathematicians, it is no surprise that so many results are wrong or misleading.

    • ... As basically no profession has a good grip on statistics except [statisticians], ...

      There I fixed it for you.

      • by gweihir ( 88907 )

        You fixed exactly nothing. A statistician can also work in applied statistics, where that understanding is not required. So you basically BROKE the statement. Pathetic.

  • Please shove a serious statistics course down the throats of medical students; that would benefit everyone's health!
  • There have been recent cries for reproducible results in science.
    The scope is too limited.
    There should be a cry for reproducible results in any research prior to its publication.
    Long and short of it for researchers: if only you can get the results and conclusions, then the results and conclusions are not publishable.
