
Registered Clinical Trials Make Positive Findings Vanish

schwit1 writes: The requirement that medical researchers register in detail the methods they intend to use in their clinical trials, both how they will record their data and how they will document their outcomes, has caused a significant drop in trials producing positive results. From Nature: "The study found that in a sample of 55 large trials testing heart-disease treatments, 57% of those published before 2000 reported positive effects from the treatments. But that figure plunged to just 8% in studies that were conducted after 2000. Study author Veronica Irvin, a health scientist at Oregon State University in Corvallis, says this suggests that registering clinical studies is leading to more rigorous research. Writing on his NeuroLogica Blog, neurologist Steven Novella of Yale University in New Haven, Connecticut, called the study "encouraging" but also "a bit frightening" because it casts doubt on previous positive results."

In other words, before they were required to document their methods, research into new drugs or treatments would prove the success of those drugs or treatments more than half the time. Once they had to document their research methods, however, the drugs or treatments being tested almost never worked. The article also reveals a failure of the medical research community to confirm its earlier positive results. It appears the medical research field has forgotten this basic tenet of science: A result has to be proven by a second independent study before you can take it seriously. Instead, they would do one study, get the results they wanted, and then declare success.
  • It appears the medical research field has forgotten this basic tenet of science: A result has to be proven by a second independent study before you can take it seriously. Instead, they would do one study, get the results they wanted, and then declare success.

    Uhh... that's not even wrong. It's either something that started out OK and has been edited down into nonsense, or it shows such a fundamental misunderstanding of how controlled experiments work that, well, it's not even wrong.

    • Re:Not even wrong (Score:5, Insightful)

      by JoshuaZ ( 1134087 ) on Sunday August 16, 2015 @07:40AM (#50325983) Homepage
No, it is a pretty accurate summary of what is happening. You focus post hoc on where you got a good result. Say, for example, you want to test a new anti-diabetes drug. Does the drug work in the general population? Well, the data doesn't support that. So then you look at subgroups. Does your data show success in, say, just men or just women? What about black men? Black women? White women? Etc. This isn't the only serious problem; sometimes one can choose which statistical tests to do or how to compensate for complicating factors. If you have enough choices you can make anything look successful.
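The subgroup fishing described above is easy to simulate. In the sketch below the "drug" has zero real effect, yet testing eight disjoint subgroups at the usual 5% level declares the trial "positive" roughly a third of the time. The subgroup count, sample sizes, and test are illustrative assumptions, not details from the study:

```python
import math
import random

def two_prop_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-statistic; |z| > 1.96 is 'significant at p < 0.05'."""
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0
    return (success_a / n_a - success_b / n_b) / se

def fishing_trial(rng, n_per_arm=100, n_subgroups=8):
    """The drug has NO effect (both arms improve 50% of the time), but we
    test each disjoint subgroup separately and declare success if ANY
    subgroup clears the 5% significance bar."""
    for _ in range(n_subgroups):
        treated = sum(rng.random() < 0.5 for _ in range(n_per_arm))
        control = sum(rng.random() < 0.5 for _ in range(n_per_arm))
        if abs(two_prop_z(treated, n_per_arm, control, n_per_arm)) > 1.96:
            return True
    return False

rng = random.Random(42)
trials = 2000
hits = sum(fishing_trial(rng) for _ in range(trials))
print(f"'Positive' trials for a useless drug: {hits / trials:.0%}")
```

With eight independent looks at pure noise, the familywise false-positive rate is about 1 - 0.95^8, around 30%, which is exactly why pre-registering the single primary endpoint matters.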
      • So what you're saying is that clinical trials totally win!

        ...at failing.

        • by mspohr ( 589790 )

What it says is that the drug companies were:
          Step 1: Gaming the clinical trials to get the results they wanted.
          Step 2: Profit!
          It didn't really matter to them that the results were invalid or in some cases may have caused harm. It typically takes years to sort out bad results and in the meantime, the profits are real.

          • by pepty ( 1976012 )
            I don't think it was entirely due to drug companies. A lot of studies are proposed and led by academics with public funding. For them, the reward is the publication, so there's a big disincentive to complete studies if it looks like they aren't leading to publishable results. A couple of years ago there was a review of how the compulsory registration rules were affecting research. It found that academics were actually less likely to follow the registration rules than pharmas.
      • Re:Not even wrong (Score:5, Informative)

        by drooling-dog ( 189103 ) on Sunday August 16, 2015 @08:58AM (#50326139)

        Correct. Even if you specify your subgroups beforehand in the experimental design, you still need to modify your interpretation of statistical significance (downward) to account for the consideration of multiple hypotheses. If you're going on a fishing expedition by identifying subgroups post hoc, then you ideally need to base this correction on the potentially large number of conceivable subgroups that are available to be drawn. It's very hard to achieve real significance under those circumstances. On the other hand, you might find a subgroup result suggestive and conduct a separate follow-on study to test it independently; that's perfectly legitimate.
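The correction the parent describes can be sketched numerically. The eight-subgroup figure below is an assumed example, not a number from the study; it just shows how the simple Bonferroni adjustment (test each subgroup at alpha / m) pulls the familywise error rate back under the nominal level:

```python
# Nominal per-test significance level and number of pre-specified subgroups
# (m = 8 is an illustrative assumption).
alpha = 0.05
m = 8

# Chance of at least one false positive if each disjoint subgroup is
# tested at the uncorrected 5% level (the tests are independent):
familywise_uncorrected = 1 - (1 - alpha) ** m       # about 0.34

# Bonferroni correction: test each subgroup at alpha / m instead.
familywise_bonferroni = 1 - (1 - alpha / m) ** m    # about 0.049

print(f"uncorrected: {familywise_uncorrected:.3f}")
print(f"bonferroni:  {familywise_bonferroni:.3f}")
```

The catch, as the parent notes, is that for post-hoc fishing m is effectively the number of *conceivable* subgroups, which can be enormous, so almost nothing survives an honest correction.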

There's more to it still: you can actually exploit the uncertainty in the measurements you're using to categorize your subjects into subgroups, in a way that ensures your drug WILL report positive effects even if it has zero real effect. It's very well explained in this short article by Tom Naughton [fathead-movie.com], complete with a numerical demonstration.

          To put it shortly: you can design the subgroups' criterion in a way that overrepresents false positives and underrepresents the false-negatives that would otherwise counterb

      • 'There was a derisive laugh from Alexandrov.
        "Bloody argument," he asserted.
        "What d'you mean 'bloody argument'?"
        "Invent bloody argument, like this. Golfer hits ball. Ball lands on tuft of grass - so. Probability ball landed on tuft very small, very very small. Millions other tufts for ball to land on. Probability very small, very very very small. So golfer did not hit ball, but deliberately guided on tuft. Is bloody argument. Yes? Like Weichart's argument"'.

        - "The Black Cloud" by Fred Hoyle

        • by sl149q ( 1537343 )

          Hit golf ball.

          Go to where it landed, paint a circle around it. Take picture. It landed in the circle, exactly what you wanted.

      • The trouble is that when a pharmaceutical corporation carries out research on a new drug, it very much wants that drug to be found useful, safe, and fit for use. (Although it's in the corporation's interest that any really serious side-effects should be identified, as it would lose even more money if it marketed something that turned out to be a new thalidomide).

        Unlike a proper scientific team, the corporation undertakes its research wanting a particular outcome. That alone is almost enough to guarantee tha

        • I have heard that a popular method of drug research is to do clinical studies of a large number of people, but only report on the people that had positive results. You had 500 people in the trial, 10 got better, you report your 10 person drug test. Doctors should be required by law to report negative drug effects to the FDA from all their patients, even for clinical trials.
          • Yes, that's exactly what is wrong.

          • by nbauman ( 624611 )

            I have heard that a popular method of drug research is to do clinical studies of a large number of people, but only report on the people that had positive results. You had 500 people in the trial, 10 got better, you report your 10 person drug test. Doctors should be required by law to report negative drug effects to the FDA from all their patients, even for clinical trials.

            The only people who do that are quacks like Burzynski, who published a book describing his "successes" who may not have actually had cancer in the first place, and whom he didn't follow more than 6 months.

            You couldn't publish something like that in a major peer-reviewed medical journal today.

            What the drug companies did do until recently was commission a lot of studies, and publish only the studies with good results. The journals put an end to that by requiring every investigator to report the start of a stu

            • Re: (Score:3, Interesting)

              by pepty ( 1976012 )

              You couldn't publish something like that in a major peer-reviewed medical journal today.

              Anil Potti, Sheng Wang, and many others managed to do it just fine. Where drug companies get into trouble it is usually when they hire people at hospitals/companies/universities to run sites for a clinical trial. The contracts often create huge conflicts of interest that can skew the results of the trial.

        • by nbauman ( 624611 )

          The trouble is that when a pharmaceutical corporation carries out research on a new drug, it very much wants that drug to be found useful, safe, and fit for use. (Although it's in the corporation's interest that any really serious side-effects should be identified, as it would lose even more money if it marketed something that turned out to be a new thalidomide).

          Unlike a proper scientific team, the corporation undertakes its research wanting a particular outcome. That alone is almost enough to guarantee that any results reported will be unreliable and unsafe.

          That's why some of the best studies, such as the ones on cardiology drugs, were done by the Veterans Administration. You go to a medical conference and you can hear doctors referring to "the VA study".

          Kaiser Permanente and Blue Cross/Blue Shield did some good studies too, but their economic model doesn't encourage it.

      • No, it is a pretty accurate summary of what is happening. You focus post-hoc on where you got a good result. Say for example you want to test a new anti-diabetes drug. Does the drug work in the general population? Well, data doesn't support that. So then you look at subgroups. Does your data show success in say just men or just women? What about black men? Black women? White women? Etc. This isn't the only serious problem, sometimes one can choose which statistical tests to do or how to compensate for complicating factors. If you have enough choices you can make anything looks successful.

Now here's an interesting thought experiment. Assume the researchers were doing a perfectly good job of analyzing results but were just terrible at predicting what those results would be. For instance, drug X doesn't work for the general population but it does actually work for black men, or this drug failed at treating angina, but it's great for erectile dysfunction [wikipedia.org].

        Now this is a testable hypothesis because a second trial could confirm the surprise result from the first trial, ie Trial A: Test for angina but

        • by pepty ( 1976012 )

          does anyone know if they're redoing those trials with the new goalposts?

          When the new goalpost looks like it might address an unmet medical need: Pretty much as soon as the patent application has been filed. If it looks like revenue, the company will either develop it or sell the IP to a company that can.

      • Sometimes you actually find something you didn't expect. There is serendipity.

        Many times of course it is p-hacking, trying to make something out of nothing.

The problem is that if this result holds (an 86% reduction in positive results for clinical trials, from 17/30 to 2/25) then research will simply become too expensive.

        If you feel that no further research is needed, great. If you look at your own body and that of the people you know and if you see how much lower our quality of living has become (obesity epidem
      • No, it is a pretty accurate summary of what is happening.

        It's actually subject to some misinterpretation. What the OP seemed to be saying was that an experiment consists of running a trial ("I want to determine car colours, the first car I see is red, experiment concluded") and taking a conclusion from that. Obviously any subsequent "experiment" can prove the first one wrong, which is why an experiment should be about falsifying the null hypothesis. If performed correctly (which is why I used the term "controlled experiment"), you only need to do it once (alth

    • Re:Not even wrong (Score:4, Interesting)

      by TheRaven64 ( 641858 ) on Sunday August 16, 2015 @09:07AM (#50326165) Journal
It's not even a good description of what was happening. Positive clinical trials were published, but if a company had invested in a drug then it would often run a dozen clinical trials. Of these, a few (due to normal statistical variation) would show some correlation with whatever they were looking for, and those are the ones they'd publish. It's similar to the stock market scam where you create a dozen funds, trade them at random, and then cherry-pick the one that happened to do a lot better than the market, claim that it's due to your brilliance, and then ask for investors (charging a hefty administration fee to each one).
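That cherry-picking scam is simple to demonstrate. Everything in the sketch below (daily coin-flip returns of plus or minus 1%, a dozen funds, one trading year) is an illustrative assumption; the point is only that the best of twelve skill-free funds usually looks impressive:

```python
import random

rng = random.Random(7)

def fund_return(rng, days=250):
    """One 'fund' trading at random: each day's return is a pure coin
    flip (+1% or -1%), so skill plays no role at all."""
    value = 1.0
    for _ in range(days):
        value *= 1.01 if rng.random() < 0.5 else 0.99
    return value

# Launch a dozen random funds and advertise only the best performer.
funds = [fund_return(rng) for _ in range(12)]
best = max(funds)
print(f"best of 12 random funds returned {best - 1:+.1%}")
print(f"average fund returned            {sum(funds) / 12 - 1:+.1%}")
```

The average fund hovers near zero (slightly negative, since a +1%/-1% pair loses 0.01%), while the advertised "winner" is routinely up double digits purely by chance: the same selection effect as publishing only the favorable trial.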
When I first saw the article I said "Just follow the money." Well duh, most trials proved the newest "wonder drug" from some big company worked when there was no need to document everything that went into the research. Only drugs that couldn't manage to produce significant positive results in any way would fail. That makes perfect sense when the medical scientists in question know which side is paying the bills; pleasing it is always in their best interest.

      • It's similar to the stock market scam, where you create a dozen funds, trade them at random, and then cherry-pick the one that had done a lot better than the market, claim that it's due to your brilliance and then ask for investors (charging a hefty administration fee to each one).

        You meant "mutual fund scam".

    • by pepty ( 1976012 )

      A result has to be proven by a second independent study before you can take it seriously. Instead, they would do one study, get the results they wanted, and then declare success.

      If by "taken seriously" one means "approved by the FDA", then two trials is pretty much the minimum (one phase II and one phase III).

  • by JoshuaZ ( 1134087 ) on Sunday August 16, 2015 @07:37AM (#50325975) Homepage
Similar issues have shown up in other fields. Psychology has had serious failures to replicate many major studies http://www.slate.com/articles/health_and_science/science/2014/07/replication_controversy_in_psychology_bullying_file_drawer_effect_blog_posts.html [slate.com], and when there have been attempts to replicate them they have often not gotten the same results. And there are very similar problems in education https://www.insidehighered.com/news/2014/08/14/almost-no-education-research-replicated-new-article-shows [insidehighered.com]. Pre-registration of experiments is important, but it would also help a lot if there were journals dedicated to replication, and if academia took replication more seriously: I know people on the tenure track who haven't tried to replicate some studies because just replicating something doesn't look as good for tenure promotion as doing something new. There are serious cultural issues that need to change.
    • by Halo1 ( 136547 )

It's the same in computer science. Many measurement methodologies are plain wrong or misleading [acm.org]. And in many cases, the source code of what people did is not available, so independent evaluation is not possible (someone also published a paper about that, where the author couldn't get the code in many cases even after explicitly asking for it, but I forgot the title). It's not just a problem in the "soft" sciences.

      • by Morpf ( 2683099 )

I also had this problem. And when I asked a PhD student about something that seemed suspicious in his paper's experimental setup, he admitted that it's actually not as great as described.

Another big problem is research into development processes like Scrum, XP, Waterfall, etc. I argued quite a bit with my professor because all the experiments cited were like "take a bunch of students and let them work on a toy project, which never has changing requirements and is tiny in code size." Well, this of course can't be used to

        • by Khyber ( 864651 )

          "The problems in software projects arise after _months_ not hours."

          I'm doing some game coding right now. I get problems cropping up every COMPILE. What the hell do you mean by MONTHS?

          • by henni16 ( 586412 )

            Uh, "not sure if serious"!?

            Forget compile time bugs or errors in algorithm, think more towards project management, development processes, maintainability, impact of requirement changes or new feature requests to a large project after it's basically done etc.

          • Any problem the compiler can find is easy.

    • by AthanasiusKircher ( 1333179 ) on Sunday August 16, 2015 @10:01AM (#50326363)

      Similar issues have shown up in other fields.

      Indeed. The biggest issue is statistical ignorance, but even people with a decent amount of training in stats can be fooled if they want to find a particular result. Anyhow, whenever things like this come out, everyone always thinks it's about scientists who manipulate data deliberately. While that happens, it's more often just researchers who "try things out" after collecting data and notice a pattern (unintentionally skewing things). If they have to declare methods and statistical tests beforehand, it's harder to make these errors.

      A few months back, I happened upon a very useful guide to the problems in modern scientific publication, which can be found partly online here [statisticsdonewrong.com]. I ended up buying the print edition, and the sheer number of examples of completely bogus research ending up being accepted in various scientific fields due to erroneous stats and various biases that creep into the publication process... well, it's just shocking. Seriously.

      As the book notes, the other problem is that even finding these errors is incredibly time-consuming and labor-intensive. I specifically remember one case where a new oncology test was proposed by Duke researchers and seemed to have great results. This case eventually became so infamous that it was reported on [nytimes.com] in the popular media [economist.com].

Anyhow, basically they had a couple of independent statisticians analyze the work (where they found HUGE numbers of problems: mislabeled data, mistakes in analysis and basic computation, etc., which appear to be par for the course in many labs, if you believe the studies on this stuff in the book). Ultimately, estimates are that it took TWO THOUSAND HOURS of work for these independent statisticians to complete their analysis and render a verdict.

And once they did this, the statisticians tried to publish it -- but major journals didn't want it. Groundbreaking results are much more interesting than tedious statistical analysis. The National Cancer Institute caught wind of the problems and initiated an independent review, which found no errors (probably because the review was done by cancer experts, not stats experts, and they hadn't been given the stats analysis done by the other researchers).

      The only reason any of this ever really got much attention is because one of the lead researchers was accused of falsifying some aspects of his resume, which led to people actually going back and questioning his papers.

      The book is full of stories like this, though, as well as citations of analyses of how many journal articles in various fields suffer from serious statistical problems.

      It's all really scary when you start realizing how much bogus research is out there... most of it completely unintentional, and most of it passing peer review because it follows the field's "standard methodologies."

    • by Nemyst ( 1383049 )
      Problem is, I doubt the actual researchers could do anything about it. If I, as a researcher, decided tomorrow morning that I'd attempt to reproduce results from a bunch of other papers, I'd most likely lose all of my funding and would have no support from my university or research institution. You'd need a full top-down rework of science for something like that to pan out.

      It's ironic too because we tend to waste a lot of time trying things that don't work, concluding that they don't work, and moving on t
  • by Todd Knarr ( 15451 ) on Sunday August 16, 2015 @07:52AM (#50326001) Homepage

    The basic flaw is worse. They didn't just run one test, find the results they wanted and go with it. They ran a test with only an idea of what they wanted, then took all the results they got and picked out ones that were positive for conditions or treatments they could go with. It's like going into a test for a drug to treat heart attacks, finding that it doesn't do anything for heart attacks but does seem to lower cholesterol levels, and announcing that the trials of your new cholesterol medication were positive.

Having to declare up front what their goals are destroys the ability to cherry-pick like this. What we're seeing with the drop in positive results isn't so much the difference in clinical effectiveness of the drugs but the dragging into the spotlight of the pharma companies' ability to predict what their drugs will do and how well they'll do them. There's a very interesting blog here [sciencemag.org] that covers a lot of this, and one conclusion that keeps coming up again and again is that medical biochemists and researchers don't really have a good way of predicting from lab results what a compound will do in a live human. It also highlights fairly often how the drug companies will keep pushing a drug through trials even though the results aren't encouraging.

    It's a common attitude in business and finance, that now that you've invested this much money in something you have to get some return out of it to justify the cost. It's also a common failing in gambling, the belief that now that you're in the hole you have to dig yourself out somehow. But in gambling, if you're holding a bad hand your best bet is to fold. Don't worry about how much you've already got in the pot, it's already lost. Fold and cut your losses before you throw any more money away. Drug companies are notoriously bad at making that decision to walk away.

    They're also notoriously bad at dealing with a field where there aren't many good rules you can follow to get results. MBAs like process and procedure and predictable results, and right now biochemical research is in a situation where the new stuff is all likely out in areas where there isn't a lot of research, there isn't a good map of the territory and you're going to be doing a lot of "poke it with a pointy stick and let's see what it does" work.

    • You hit the nail on the head. By forcing researchers to declare, a priori, what they were looking for and how they were going to measure the effect they were looking for, the registry prevented cherry-picking. A researcher could not change his or her mind after the results came in, and choose another effect that had shown up in the data. Cherry picking is like staring at random noise and accidentally seeing patterns that aren't there.
  • It appears the medical research field has forgotten this basic tenet of science: A result has to be proven by a second independent study before you can take it seriously. Instead, they would do one study, get the results they wanted, and then declare success.

    Nobody "forgot" that. It's just that clinical trials often take years and involve thousands of people in the later stages, making them very expensive. Funding to replicate results is always lacking, particularly if it might disconfirm a positive find

    • That is why I believe funding of trials should not be done by pharmaceutical companies. It is a conflict of interest from beginning. Somethings by their very nature should be funded by taxes, because it is too important to simply let the organization with the biggest wallet win.

  • by Anonymous Coward

    no, no, no...

    it's not about the 'basic tenet of science',
    it's about the 'basic tenet of profit'.

  • Science? (Score:4, Interesting)

    by John.Banister ( 1291556 ) * on Sunday August 16, 2015 @08:35AM (#50326095) Homepage

    It appears the medical research field has forgotten this basic tenet of science:

It's almost as if it isn't science at all, but rather advertising, where the target audience is a government agency that gives the company permission to transfer the product to its other advertising division, which then advertises it to doctors [youtube.com] and the public. What percentage of these clinical trials are trials of something not destined to produce wealth for an organization if the results of the trial are positive? When a wealth-generating organization approves the large expenditure for a clinical trial, I'm sure it also adds extra management, or transfers management of the research, to help make the trial's results more profitable than just doing science ever could. I'm thankful that this requirement to register these trials exists.

    • It's almost as if it isn't science at all, but rather advertising

      It's also a massive barrier to entry for competitors. At fault, either way, is the FDA and how it operates.

      • While I have no doubt that there's plenty of fault to be found in the operation methods of the FDA, so long as corporate entities have corporate personhood, they get to have fault for their moral failings and criminal prosecution for the impact those failings have on others, just like everyone else with personhood.
        • so long as corporate entities have corporate personhood, they get to have fault for their moral failings and criminal prosecution for the impact those failings have on others, just like everyone else with personhood.

          "Just like everybody else"? You're saying that if I donate money to my city councilor and then ask him to adopt a policy that benefits me, I should be criminally prosecuted? If that's the rule, then about 90% of teachers should be thrown in jail, because they all donate to, and lobby and petitio

          • If you lobby the government saying "help, these people are starving" when in fact, no one is starving, but you own the only freight service capable of reaching those people and you plan is to make a nice profit from the government paying you to transport food to people who don't need it, then you committed fraud, and you merit criminal prosecution. When a drug company tells the government "we tested this drug, and it does this and this and this," they also need to be telling the truth. There's a difference
            • When a drug company tells the government "we tested this drug, and it does this and this and this," they also need to be telling the truth

              They are telling the truth: they (generally) complied with the old FDA requirements and reported their results truthfully, and now they are complying with the new ones. The problem isn't with companies committing fraud, it's with regulations not achieving what you want them to achieve. That is, no matter what regulations you come up with, people and companies will figure

I want to show that all cars are red, so I go out and start counting cars. It turns out that not all cars are red, but by choosing every seventeenth car, except if the previous one had a number-plate ending with an odd number OR a number x such that the x-th Fibonacci number has a 4 as the second digit, THEN all those cars are red! Dang, that's actually what I meant; forget that thing about all cars being red. Publish!

    Another serious issue is that it is hard publishing negative results. So even though every re
  • This basic idea should be applied much more broadly, and given that most publishing is electronic now, all experiments should be published and evaluated on the validity of their methodology and how interesting or insightful the hypothesis being tested is, not their result.
    How much money and time is wasted trying the same negative result experiments at different labs because they don't know that someone else has tried it and gotten a negative because it was not published?

  • This is what happens when they intermingle...

  • by mbeckman ( 645148 ) on Sunday August 16, 2015 @10:08AM (#50326377)
    Where are the experimental controls? This brazenly silly meta study sports all the symptoms of a classic research pathology: confirmation bias.
  • So far as I'm concerned they hadn't been testing new drugs thoroughly enough, and people were getting hurt or dying because of it.
    • So far as I'm concerned they hadn't been testing new drugs thoroughly enough, and people were getting hurt or dying because of it.

      A lot of people also get hurt and killed by not making new drugs available that could help them.

      Ultimately, "they" should do the testing, and each patient should decide for themselves what risks to take. Unfortunately, in our paternalistic system, people are denied those choices.

      • Prove to me that the average person who might need a new, untested medication is intelligent enough, can do and understand the research competently enough to make an informed decision, and I'll agree with you. As is, however, I don't believe that to be the case. Therefore there must be rules and procedures to protect people from something that might potentially harm or kill them.
  • A result has to be proven by a second independent study before you can take it seriously.

    That's not sufficient. In many experiments, it is easy for multiple independent researchers to share incorrect assumptions or to make the same mistake.

    Trusting scientific results is something that should take decades, many repeated experiments, and an understanding of how those results fit in with the rest of science.

    • by ihtoit ( 3393327 )

      many things need to be taken into account:

      - experimental brief
      - detailed experimental method
      - margins for error
      - depth of reporting

unfortunately none of this is available when it comes to e.g. pharmaceutical studies. All we end up with is "Clinically proven!". Great. Which clinic, and how was the proof arrived at?

  • ... if they work really hard at curbing egos and huge financial incentives. As it is, quite a bit of medical "research" is utterly pathetic and counter-productive. Unfortunately, more and more of that is happening in other fields too. For example in CS, not a lot of real scientific progress has been made in the last 30 years, due to the same stupidity, arrogance and perverted incentives.

  • The problem is fairly simple actually.

    Say you want to find out if certain candies act as a libido increaser.

You do a study on 20 different candies. One of them seems to act like a weak form of Viagra, with 95% certainty that it isn't random chance. Except that 95% certainty means a one-in-20 false-positive rate, and you tested 20 candies...

    If you didn't start out saying you were testing if M&M's were the viagra, you can claim that you have a positive effect. But if you were forced to specify the candy you were testing,
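The arithmetic behind that one-in-20 point is worth making explicit; this two-line check uses only the numbers from the candy example above:

```python
# Probability that at least one of 20 independent tests at the 5%
# significance level comes up "significant" purely by chance.
alpha = 0.05
tests = 20
p_at_least_one = 1 - (1 - alpha) ** tests
print(f"{p_at_least_one:.0%}")  # prints 64%
```

So with 20 candies and no pre-specified hypothesis, the odds favor finding a "Viagra candy" even when none of them does anything.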

This is great news; it shows that money paying for results spoils the process. When it's opened up for peer review, one leg of any review being the ability to repeat the experiment using the exact same conditions and expecting the exact same results, corporate bias in reporting results shows up like Fat Waldo in a colony of Adélie penguins.

    The bias being brought about by the sponsored research simply dismissing negative results as experimental error rather than seeing it as legitimate reading as far as varianc

  • Comment removed based on user account deletion
