
 



Science

Medical Journals Fight Burying of Inconvenient Research 32

A dozen leading medical journals have announced that they plan to refuse publication of clinical studies unless those studies were publicly registered ahead of time. Part of the intention is to prevent researchers from privately doing multiple studies and then selectively releasing for publication only those which yield favorable results. There are many other journals which have not signed on to this plan, however, and it remains to be seen what will happen. Personally, I'm surprised it's taken this long; as Karl Popper wrote, "what distinguishes the scientific approach and method from the prescientific approach is the method of attempted falsification."


  • Great idea! (Score:4, Insightful)

    by keiferb ( 267153 ) on Thursday September 09, 2004 @12:08PM (#10201666) Homepage
    Now, if only this could be extended to include IT studies. I wonder how many studies people run that show that their product isn't that hot.

    On second thought, they're all funding their own studies, so it's probably a non-issue.
  • Finally (Score:2, Insightful)

    by hubs99 ( 318852 )
    This is great news. If JAMA can do it, then so should everyone else. Between this and the NIH announcing that it hopes to force researchers to publish into the public domain, research and the reporting of research are moving in a positive direction.
    • Re:Finally (Score:3, Interesting)

      by wetlettuce ( 765604 )
      Exactly. All research should be published, whether the results are good or bad, and let the reviewers decide whether the results are valid, the methodology is sound, and the work is worth publishing. That way researchers will probably gain from the constructive criticism, allowing them to do even better research the next time around.
      If they don't know why things are failing, how can they improve?
      • Re:Finally (Score:5, Insightful)

        by bitingduck ( 810730 ) on Thursday September 09, 2004 @10:02PM (#10208788) Homepage
        I've always wanted to start a journal called "Journal of Null Results" where people can publish research that's well done, but came up with a result that doesn't qualify as earthshaking or "sexy".

        Publishing null results would help people avoid repeating things that have been done already, as well as help refine research to see if there is a positive result hidden in the null.

        In the case of big medical studies it could provide a great source for data mining, where metastudies of a bunch of null results might suggest something that would be hard to see without a lot of overlapping data.
        • Back in the day when I was still working directly in research as a pure physicist, I got a lot of null results. People used to tell me that old lie, "A null result is just as good as a positive result."

          I got fed up with this after a while, and when other people got positive results, I started telling them, "Don't worry, a positive result is just as good as a null result." It got me a lot of funny looks.

          Later, working as an applied physicist, where null results are far less common, I tried to include a s
  • Meanwhile (Score:5, Informative)

    by Otter ( 3800 ) on Thursday September 09, 2004 @12:11PM (#10201701) Journal
    Derek Lowe was just posting on this topic [corante.com] and on the general inability of the pharmaceutical industry to get basic facts to the public's attention.

    Please note that despite Jamie's spin (and the Times') all studies have to be reported to the FDA. I'm not quite sure of all the differences between this new registry and the existing registration, but the idea that drug companies can perform trials in complete secrecy is as wrong as the idea that NIH-funded research does all (or even much) of the work of drug development -- a point Lowe addresses elsewhere.

    • Re:Meanwhile (Score:5, Interesting)

      by hung_himself ( 774451 ) on Thursday September 09, 2004 @02:12PM (#10203354)
      as wrong as the idea that NIH-funded research does all (or even much) of the work of drug development

      No, they just do the basic research that results in the drug leads. The companies then do the expensive but scientifically easy trials and rake in all the money (and now, it seems, the credit as well).

      And since when is an industry spokesman considered a reliable source of information..?
      • Re:Meanwhile (Score:5, Interesting)

        by Otter ( 3800 ) on Thursday September 09, 2004 @03:09PM (#10204285) Journal
        No, they just do the basic research that results in the drug leads. The companies then do the expensive but scientifically easy trials and rake in all the money (and now, it seems, the credit as well).

        This nicely illustrates Lowe's point: what you're saying is widely believed, but is absolutely, utterly, entirely, absurdly false. (Except for the non-sequitur at the end about "credit" which I don't understand at all.)

        And since when is an industry spokesman considered a reliable source of information..?

        First, he's a chemist, not a "spokesman". Second, he (and I) do precisely the work you claim doesn't exist and might be thought to have something to say on the matter. But, if you want to limit your "reliable sources of information" to people who don't know what they're talking about, that's certainly your right.

        • I never claimed that no research was done at the pharmaceuticals - they do have huge compound libraries - but it is all very applied, not to say trivial - it is just not my cup of tea. I also do not doubt that many drugs come out of this work. I have no doubt, however, that this is an extremely small percentage of the overall "research" costs that the companies like to quote.

          My point was that to claim that no drug development is done with public funding is misleading and mendacious - because while it
        • If you work at a major pharmco, what do you think of so much of the yearly budget being spent on ads rather than research?
    • I'm pretty sure that the FDA has to keep silent unless they perceive that the drug in question is unnecessarily dangerous, in which case all they can do is pull it off the market. They can't force any of the studies to be released or disseminate their contents.
  • Partial solution (Score:4, Insightful)

    by crow ( 16139 ) on Thursday September 09, 2004 @12:12PM (#10201706) Homepage Journal
    So what stops someone who registers a study from still deciding not to publish the results if they aren't favorable to the funding drug company? It does mean that there is public information that a study was started, which can mean some pressure to publish.

    And nothing stops a drug company from funding a bunch of unregistered studies, then registering duplicates of only the ones it expects to turn out most favorably. Of course, the registration process would then add expense and delay to getting that sort of slanted result out.

    At least, even with the current system, I expect the peer-reviewed journals are much better than the sort of "studies" that get published regarding the computer industry (e.g., TCO of Windows vs. Linux).
    • Re:Partial solution (Score:4, Informative)

      by KWTm ( 808824 ) on Thursday September 09, 2004 @12:37PM (#10202032) Journal
      And nothing stops a drug company from funding a bunch of studies that aren't registered, and then registering duplicate studies that they then expect to be most favorable.

      The results of the studies are random, or contain randomness. If you fund a bunch of studies (say, 20 studies) and one of them shows positive results, you can't just duplicate the results. You'd have to duplicate the study, again asking a hospital or two to recruit 300 patients to test the drug, and this time the results might not be positive. You can't even recruit the same patients over again, because the standard is to recruit the first consecutive 300 patients to walk in the door who fit the criteria (and consent to the study, of course), to prevent selection bias. Any other way of recruiting would raise red flags that any medical student could spot a mile away.

      (For the record --yes, IAAD.)

      • I think the point is that the company would do a study using methodology A to determine the efficacy of their drug. If the results were favorable, then they would register a study using methodology A and publish the results. If the results from using methodology A the first time were bad, then they would move on to methodology B and give it a whirl. This way they could do many studies using different methodologies but only register and publish the favorable ones.
        • Re:Partial solution (Score:3, Interesting)

          by Ayaress ( 662020 )
          When the article was talking about companies doing multiple studies and only publishing the best one, though, they were talking about this sort of thing. The companies weren't changing the experiment to make it work. Doing that would raise red flags - maybe not so obvious as creative methods for patient recruiting, but something that other doctors would likely catch. They're just doing the studies several times and publishing the best one (or just doing one big study and only using part of the data set for
        • by KWTm ( 808824 ) on Thursday September 09, 2004 @05:03PM (#10205953) Journal
          If the results from using methodology A the first time were bad, then they would move on to methodology B and give it a whirl. This way they could do many studies using different methodologies but only register and publish the favorable ones.

          It's much harder to set your own study methodology (and still get the same respect). The family doc who makes an effort to practice evidence-based medicine (as opposed to "this is so because my doddering old professor back in medical school said that his teacher said it was so") looks for several key points in the abstract of a journal article. (What, you think s/he has time to read the whole thing?) You want a randomized, double-blind, placebo-controlled, multi-centre study with a large sample of the general population. If those keywords are there, the doc jumps to the "conclusion" heading and generally acknowledges it to be valid. The main text of the article is just for medical students who have to do their presentations. :)

          • randomized: what/who determines which patient goes into which comparison group? If you ask for patients to volunteer to "try the experimental drug" or "try the placebo", you get selection bias: the patients willing to try the drug may be sicker (this could mean that the drug works better, or doesn't work as well). If you say, "Okay, everyone at Dr. Smith's clinic gets the real drug, and everyone at Dr. Jones' gets placebo," there may be other biases. If it ain't randomized, people get purty suspishus.
          • double-blind: not only do the patients not know whether they're taking the real drug (that would be single-blind), but the doctors who treat the patients also don't know (double-blind). This prevents a doctor from saying, "I'm not telling you whether you're taking the real drug, but do you feel better? Are you sure? Don't you feel just a teensy weensy bit better?" All patients are assigned a number, and at the end of the trial, some oversight committee reveals which patients got which drug.
          • placebo-controlled: you have to compare it with something, even if it's just placebo. Got a group where 60% of people improved on the drug? If 50% of the people improved on placebo, it certainly puts a different light on things.
          • multi-centre: this means that the study took place at various locations, hopefully drawing on different groups. If it were one inner city hospital, that's one thing; if it's five different urban centres in five different countries, that's another.
          • large sample size: Having a thousand patients (500 per group) would be nice. I generally ignore anything less than N=200. (Multiple low-budget studies can be combined by meta-analysis to achieve the same effect, *if* the experimental conditions are similar.)
          • general population: some studies target certain segments of the population, such as "diabetics over age 65", which would narrow the market. The wider the population to whom the study is applicable, the better.
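
          For what it's worth, the comparison a placebo-controlled trial boils down to can be sketched in a few lines of Python (stdlib only). The numbers here are the hypothetical 60%-improved-on-drug vs. 50%-improved-on-placebo example from the list above, with an assumed 300 patients per arm; the two-proportion z-test shown is just one common way to make the comparison, not anything the parent post specifies.

```python
from math import sqrt, erf

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided p-value for H0: both groups improve at the same underlying rate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                        # common rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error of p1 - p2
    z = (p1 - p2) / se
    # Normal CDF via erf: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical trial: 300 patients per arm, 60% improved on drug, 50% on placebo.
p = two_proportion_p_value(180, 300, 150, 300)
print(round(p, 3))  # ≈ 0.014
```

          Note how a result that sounds great in isolation ("60% of patients improved!") only earns its significance once you compare it against the placebo arm - and even then it looks much less dramatic than the raw 60% figure suggests.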

          There are other study methodologies with names like "cohort study", "case-control", and "retrospective" which get some respect but are generally viewed as a stepping stone to getting funding for the ultimate gold standard, the randomized clinical trial. So if a company's drug trial only works as a case-control study, that might not be enough to get FDA approval for whatever labeled use they're after.

  • by KWTm ( 808824 ) on Thursday September 09, 2004 @12:32PM (#10201961) Journal
    Funny that a colleague and I were talking about this the other day. Clinical studies and general statistics in (peer-reviewed) medical journals generally use a p < 0.05 significance threshold (a 95% confidence level).

    What this means is that, if left to chance, the experiment/trial/study would be positive in only 1 in 20 times. Example: you give a bunch of sick people an experimental drug, and you give another bunch of sick people a placebo (or a known standard of treatment). The people on the experimental drug get better. Was it really because of the drug? Maybe it was just random chance --but if that chance is calculated to be only 1 in 20 or less, then we say, "Yeah, it's probably because of the drug, and not just chance."

    The overwhelming majority of drug trials are corporate-funded. A company that's desperate enough to get its drug to market could easily fund 20 studies, and even if the drug were just placebo, chances are good that 1 in 20 of those studies would turn out positive. (Yes, yes, I'm just approximating.) Without the "negative results registry", you'd think that the drug was working.
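
    The 1-in-20 arithmetic above is easy to check with a quick simulation (my own sketch, assuming each study is independent and tested at the usual p < 0.05 threshold):

```python
import random

def run_placebo_trials(n_trials=20, alpha=0.05, n_runs=10_000, seed=42):
    """Fraction of funding rounds where at least one of n_trials is falsely positive."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        # Each trial of a do-nothing drug comes up "positive" with
        # probability alpha, by definition of the significance threshold.
        if any(rng.random() < alpha for _ in range(n_trials)):
            hits += 1
    return hits / n_runs

# Analytically: 1 - 0.95**20 ≈ 0.642, so funding 20 studies of a placebo
# gives roughly a 2-in-3 chance that at least one "shows" the drug works.
print(round(1 - 0.95 ** 20, 3))
print(run_placebo_trials())
```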

    Would companies be desperate enough to do this? You bet. I'm not saying that any particular company did this, but consider what happens to get a drug to market. Someone invents a molecule (typically a lab with 12 employees or something), gets bought by a biotech firm with 4 employees (they subcontract everything out), some other lab tests it on animals, some other firm develops a formulation (tablets, capsules, makes sure it doesn't melt inside the bottle or degrade, etc.), and then a big drug company buys the formulation (or the entire firm) and starts gearing up for clinical trials while submitting for FDA approval. This all takes about five years.

    What if it doesn't work out? As a backup, the Big Pharma company also invests in about five backup compounds, and each of those compounds has five backup compounds. We're talking about, after ten years and researching thirty compounds, maybe getting ONE drug out to the market. (Btw, my wife is the project manager for a bunch of these drug research pipelines at one such Big Pharma company.) But, boy, will that drug make it big! What if the drug didn't really work? Well, let's make it look like it did! (I can see Big Pharma CEOs rationalizing this as "let's put it in the best light possible.")

    Example of drug research being biased? Ever heard of celecoxib (Celebrex)? Wow, anti-inflammatory pain reliever that, unlike ASA (Aspirin) or ibuprofen (Advil, Motrin), does NOT irritate your stomach! No stomach bleeding (uncommon but serious side effect of ASA/ibuprofen)! They did the research and showed that people actually did (statistically) significantly better than ASA after six months. JAMA (Journal of the American Medical Association) published the study and even sang its praises in the editorial. They get it out and market it to all the physicians all over the place.

    And then we find out that the study went on for more than 6 months. We find out that beyond 6 months, the people using Celebrex got WORSE, and deteriorated until at 12 months, they were no better than ASA users. Boy was the JAMA editor mad! (If I recall, he even publicly lambasted Searle for this in the New York Times.)

    But you know what? It didn't matter. Celebrex was everywhere, on American TV ads, and people asked for it. Docs who don't really have time to delve into the medical literature already had established in their mind that "Celebrex is better". (My colleagues certainly continued to use it even when ASA was sufficient.) And the drug reps, who ooze snake oil from their skin pores, keep pushing it. One drug rep even questioned my choice of medication when I was getting it from our drug sample closet. I lit into her like you wouldn't believe. Is she the doctor or am I?? (whew, catharsis, feel better now)

    So, yes, I think the companies are perfectly capable of doing this (stacking the studies). The benefits are just too great. I welcome the use of the clinical trial pre-registry.

    • The problem with statistics is that they can often be made to say what you want. There was a recent advertisement around here that said 80% of accidents happen within a five mile radius of your home. The ad was pushing seatbelt use even for short trips. However, if 90% of driving is done within that same five mile radius, then you're actually less likely to get into an accident when you're driving close to home. The message is still good, but the statistics may not actually support what the message
    • The overwhelming majority of drug trials are corporate-funded. A company that's desperate enough to get its drug to market could easily fund 20 studies, and even if the drug were just placebo, chances are good that 1 in 20 of those studies would turn out positive. ...

      Would companies be desperate enough to do this? You bet.


      I bet not. These studies can cost millions. Sure, BigPharma has BigMoney, but you're suggesting that they can easily soak a 2000% increase in their phase 4 clinical trial budget!? N
      • True, but how about this: you have money to run 1-4 studies. With 4, you get an 18.5% chance it "works" and little chance of a false negative.

        1: drug works and you wasted a few mill, but you're about to make 100's of mill, so not a big deal.
        2: drug works and one study would have failed to detect that... HUGE WIN!
        3: drug fails and random chance favors you. Make back most / all your research money.
        4: drug fails on all studies... Oh well, you lost all your money anyway.

        Now I can easily see this being worth i
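
        The parent's 18.5% figure checks out, assuming independent studies and a 5% false-positive rate apiece:

```python
def p_at_least_one_positive(n_studies, alpha=0.05):
    """Chance that >= 1 of n_studies of a do-nothing drug comes up 'positive'."""
    return 1 - (1 - alpha) ** n_studies

print(round(p_at_least_one_positive(4), 3))   # 0.185 -- the 4-study case
print(round(p_at_least_one_positive(20), 3))  # 0.642 -- the 20-study case elsewhere in the thread
```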
      • Would companies be desperate enough to do this? You bet.

        I bet not. These studies can cost millions. Sure, BigPharma has BigMoney, but you're suggesting that they can easily soak a 2000% increase in their phase 4 clinical trial budget!? No way.

        If you could take a placebo and fund 20 studies to show that it worked, wouldn't you? That's like saying, I can sink millions and billions into this research to prove that potato chips cure cancer. And I hold the patent on the potato chip. I gain the abili

  • that given enough time, you are able to prove anything using statistics. Not only that, but depending on how the 'facts' are presented, they may be interpreted differently as well. We will never be able to completely eradicate ambiguous or even downright fraudulently presented research, but at least this is a start.

