Medicine

Study Can't Confirm Lab Results For Many Cancer Experiments (apnews.com) 126

Hmmmmmm shares a report from the Associated Press: Eight years ago, a team of researchers launched a project to carefully repeat early but influential lab experiments in cancer research. They recreated 50 experiments, the type of preliminary research with mice and test tubes that sets the stage for new cancer drugs. The results reported Tuesday: About half the scientific claims didn't hold up. [...] For the project, the researchers tried to repeat experiments from cancer biology papers published from 2010 to 2012 in major journals such as Cell, Science and Nature. Overall, 54% of the original findings failed to measure up to statistical criteria set ahead of time by the Reproducibility Project, according to the team's [two studies] published online Tuesday by eLife. The nonprofit eLife receives funding from the Howard Hughes Medical Institute, which also supports The Associated Press Health and Science Department.

Among the studies that did not hold up was one that found a certain gut bacterium was tied to colon cancer in humans. Another involved a type of drug that shrank breast tumors in mice. A third was a mouse study of a potential prostate cancer drug. The researchers tried to minimize differences in how the cancer experiments were conducted. Often, they couldn't get help from the scientists who did the original work when they had questions about which strain of mice to use or where to find specially engineered tumor cells. "I wasn't surprised, but it is concerning that about a third of scientists were not helpful, and, in some cases, were beyond not helpful," said Michael Lauer, deputy director of extramural research at the National Institutes of Health. NIH will try to improve data sharing among scientists by requiring it of grant-funded institutions in 2023, Lauer said.


Comments Filter:
  • Pretty Common (Score:5, Insightful)

    by LKM ( 227954 ) on Wednesday December 08, 2021 @06:05AM (#62058229)
    This is, unfortunately, very common across all scientific fields. It's incredibly important that preregistration of studies becomes the norm. Also, journals need to be more open to publishing negative results and replication studies. This decreases the incentives to work a study until the result is positive, and incentivizes more replications.
    • Re: Pretty Common (Score:5, Insightful)

      by VeryFluffyBunny ( 5037285 ) on Wednesday December 08, 2021 @06:12AM (#62058235)
      Yes. This. Please mod parent up. Another thing would be to reduce perverse incentives such as the competitive 'publish or perish' conditions under which many researchers work & upon which their careers & livelihoods depend. High quality science is difficult enough to do under the best of conditions!
      • Re: (Score:3, Insightful)

        Another thing would be to reduce perverse incentives such as the competitive 'publish or perish' conditions

        They're largely the result of the mistrust of science by the general populace, who elect politicians that demand taxpayer-funded universities produce results at all times, while giving free rein to the military-industrial complex to waste billions with no repercussions.

        A large share of researchers' time is spent preparing grant applications instead of doing research.

        • It's a result of a high demand for professionals given lab space, staff, equipment, and authority over their research to produce something verifiable. It can be very burdensome for a competent theoretical or practical scientist to also negotiate for funding, but that is a distinct requirement from publishing their work frequently.

          • Re: Pretty Common (Score:5, Insightful)

            by The Evil Atheist ( 2484676 ) on Wednesday December 08, 2021 @07:37AM (#62058375)
            They're part of the same package. You can't publish your work frequently if you don't get grants to do research in the first place. You don't get grants unless you can show that there's a high chance of getting published frequently.
            • I was at a session once that was mostly for bioinformatics PhD students, and the advice from the older professor - stated as common practise - was that you write your grants based on work that's already mostly done. That way you build up a reputation for consistently doing publishable work, while getting funding for all the exploratory work that mostly won't work out but that's needed before you come up with your next publishable experiments. And when your next round of experiments is almost done, you write the next grant based on that work.
        • The incentive of publish or perish tends to produce quantity over quality. This means reviewing the research on a particular topic can be particularly burdensome when reviewers have to wade through piles of low-quality crap to find a few decent quality papers. If you read meta-studies, you'll see that they usually pointedly enumerate how many papers they evaluated & how many they had to reject because they were of such poor quality. Sometimes, what should be single coherent, cohesive research papers get

          • Re: Pretty Common (Score:4, Insightful)

            by ShanghaiBill ( 739463 ) on Wednesday December 08, 2021 @04:00PM (#62060305)

            Basically, we're describing Goodhart's law: When a metric becomes a target, it ceases to be a good metric.

            It is a good metric if it is what you are actually trying to accomplish.

            For years, DARPA funded robotics research the traditional way: By giving out money based on grant proposals. The result was money going to people who were good at writing grant proposals but not much meaningful research.

            Then DARPA changed course and instead issued a "grand challenge" with a series of contests. The money went to the team with the best result, not the best proposal.

            A similar "money for results" approach may also produce better batteries, solar panels, and high-temperature superconductors.

            I'm not sure how competitive funding would apply to cancer treatments.

      • If they published more repeat studies, and more repudiations, that would count. You could have someone with a reputation as a study buster publishing often.
        • Re: Pretty Common (Score:4, Informative)

          by crmarvin42 ( 652893 ) on Wednesday December 08, 2021 @11:21AM (#62058993)
          That only works when the money for the studies is not coming from a funding agency with a vested interest in the outcome of the results. I should know: I work for a company that funds research in my field, and if we think there is any chance of the results turning out negative, we do them in-house or with contract research organizations with no interest in publishing. Only after we know it works will we take it to universities for them to conduct, and publish.

          Biomedicine has that to some extent, due to the massive budget of NIH. However, that massive budget doesn't go as far as you might think, considering that NIH has a hard-on for larger, more expensive studies and doesn't like to look at smaller grants. In my experience, it's $1 million and up. Smaller studies, if they can't be linked together into a larger study, are unlikely to get funded by NIH. There is also the novelty criterion. Replication work, if it adds nothing new, is also of little interest to NIH.

          In other fields, like mine, the vast majority of research dollars come from private industry funding research they think they can profit off of in the next 2 to 5 years. It works out OK when there are two groups working at cross purposes, so that pissing off one group doesn't matter because you win over the other in the process. But in narrower areas, where no one stands to gain from proving something does/doesn't work, it is much harder to find funding.
          • "if we think there is any chance of the results turning out negative, we do them in-house or with contract research organizations with no interest in publishing. Only after we know it works will we take it to universities for them to conduct, and publish"

            "Replication work, if it adds nothing new, is also of little interest to NIH."

            These are great examples of some of the problems in science biasing the outcomes. There is also political bias interfering in some areas. Showing a failure to replicate or strong
          • I think something similar to speeding tickets would be in order here. It would be nice if we re-routed a few hundred billion dollars from say defense into things that actually save and improve many lives like science, health & medicine, heck even education research. And in doing so could provide perhaps 1-3 follow-up studies for some of these things to prove validity and discourage cheating. Since that's not going to happen perhaps the various institutions, govt orgs, or publishers could fund a set of

      • This is a very difficult problem with no easy solution. The root problem is that non-experts cannot judge the quality of scientific work, so there is an attempt to create "quantifiable metrics" like the number of papers published in high-impact journals. Initially these are valid, because they correlate well with real scientific output. Soon, though, you get a peacock effect where scientists are motivated to optimize the metrics rather than the science. Those that continue to concentrate on actual science fi
      • This is really key. And it's not just being open to publishing negative results. Rather the publish or perish model has led to a type of thinking among scientists that I find toxic. I have the good fortune to work with many scientists of different fields, but I am shocked at how little actual logic most scientists have. This results in 2 major things:

        1) poor design of experiment so the results don't really correlate to the hypothesis

        2) assuming that data that shows the hypothesis to be true is "proof".

    • Re:Pretty Common (Score:5, Insightful)

      by Ambassador Kosh ( 18352 ) on Wednesday December 08, 2021 @07:05AM (#62058303)

      Publishing negative studies is EXTREMELY important. A lot of this work is done by PhD students and normally you need 2-3 publications to graduate. If you don't get them, you don't get your PhD. You don't know ahead of time if the project you are working on for years is going to work or not. Putting in years of effort and then not getting the degree because the process didn't work but you did learn why is really bad. It is also wrong because it sends the message that science has to always work. Even worse, PhD students don't normally choose their exact topics, so something your advisor decided on determines whether you can even get the PhD.

      Being able to publish negative results more easily would remove a lot of these screwed-up incentives. I also think it would be easier to get scientists to cooperate, because they probably know the results are false but can't say it.

      Publish or perish creates some really nasty problems. If I told you that at the end of 5 years you would either get to move on to another job or waste those 5 years and leave with nothing of value, and the entire determinant was whether your project worked, you would make sure it looked like it worked regardless of whether it actually did. This has created a lot of safe science, where people take something they already know works and make a minor improvement they are sure will work ahead of time, or they take more risks and find a way to make it look like it worked even if it failed.

      • Publishing negative studies is EXTREMELY important. A lot of this work is done by PhD students and normally you need 2-3 publications to graduate... Putting in years of effort and then not getting the degree because the process didn't work but you did learn why is really bad. It is also wrong because it sends the message that science has to always work.

        I totally agree, and I think we need to find ways to convince the average person of the value of negative studies. First, eliminating that which doesn't work increases the likelihood of finding something that does work. Second, negative studies can discover approaches that may actually be harmful, and do so under controlled conditions where the potential harm can be limited and mitigated. Success is almost built on a foundation of necessary failures - that's why good science is often tedious and always pain

      • Re:Pretty Common (Score:4, Informative)

        by apoc.famine ( 621563 ) <apoc.famine@g m a i l . com> on Wednesday December 08, 2021 @11:13AM (#62058953) Journal

        Publishing negative studies is EXTREMELY important.

        Yep.

        My thesis didn't get published because it was a negative result. I demonstrated that a "novel" mathematical software package a major player in the field was using had no computational benefit, and produced no better results than not using it. However, it was flashy and hard to understand, and had a couple of good papers backing it up. Those papers work because they carefully constructed the situations in which they demonstrated it to work. When applied to almost any other situation, it's garbage.

        Lots of people are still using that software package today, and there's no reason to. There's no reason to spend the time tweaking it and configuring it and trying to figure out how it does its magic. The time and effort put into fucking with it would be far, FAR better spent on other things.

        The papers they put out make it look like a fucking sonic screwdriver, and in reality it's more like a lockpick. Handy in very limited circumstances, but not useful for most applications.

        • This is something I sympathize with. I have worked with methods that are supposed to be amazing and found they only worked under REALLY narrow circumstances that are basically identical to what the paper shows.

          The ones I have found more annoying are dealing with optimization algorithms. So many papers show off their amazing algorithm but if you look really carefully often the problem is carefully chosen to look hard but not to actually be hard.

      • Tenure was supposed to be the balancing force -- once you had your degree, then you could take the big chances. But it hasn't worked in practice.

    • by RobinH ( 124750 )
      To be fair, physics hasn't had the same scale of reproducibility problem that the social sciences and medicine have had.
      • No profit motive. Biosciences is dominated by pharmaceutical bros looking to cash in on specious research.

      • To be fair, physics hasn't had the same scale of reproducibility problem that the social sciences and medicine have had.

        There's a significant difference between physics and social sciences/medicine: humans.

        Physics gives you precise results. Human responses to questions (in psychology, sociology, etc.) are very imprecise.

        Experiments in medicine are difficult because most people consider it unethical to inject drugs or cut up humans in whatever way you feel like.

      • To be fair, the scientific method specifically breaks a few rules of sound logic, with the understanding that it is constantly and reproducibly tested experimentally against objective physical reality. Physics and the physical sciences are the ideal targets for this framework. Even within the physical sciences, some phenomena occur over such long time scales that we can't properly experiment.

        It is highly questionable to expand this framework to many dimensional higher order topics which may have no underlying
    • And more people need to drill down and look at the actual studies, not a media summary of the summary of the abstract. One study that has informed a lot of Covid policy in the US covered a total of 39 people. In the study, they say this needs more study. And there was no more study, but lots of media hype!
    • I want to start a science journal called Null that only publishes studies that prove the null hypothesis. With a focus on studies debunking older studies. Give researchers an incentive to dismantle bad science, and a much needed avenue to publication of results that haven't been doctored to show "interesting" results.

    • I think the physical sciences do a lot better on this. The criteria for statistical significance (usually >4 sigma) are much stronger. Of course, there is less financial incentive to reach some particular conclusion about the mass of the Higgs boson than there is about determining whether a particular cancer drug works.
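      As a rough illustration of how much stricter those sigma thresholds are, here is a minimal sketch (my own, assuming a standard normal null distribution and that scipy is available) mapping them to two-sided p-values:

```python
# Two-sided p-values for common "sigma" thresholds, assuming a standard normal
# null distribution (illustrative sketch only; scipy assumed available).
from scipy.stats import norm

for sigma in (2, 3, 4, 5):
    p = 2 * norm.sf(sigma)  # two-sided tail probability beyond +/- sigma standard deviations
    print(f"{sigma} sigma  ->  p ~ {p:.1e}")

# Approximate output:
# 2 sigma  ->  p ~ 4.6e-02   (about the p < 0.05 bar common in biomedicine)
# 3 sigma  ->  p ~ 2.7e-03
# 4 sigma  ->  p ~ 6.3e-05
# 5 sigma  ->  p ~ 5.7e-07   (5 sigma is the usual particle-physics discovery bar)
```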
    • by ceoyoyo ( 59147 )

      It's not unfortunate. It's the way it's supposed to work. Journal papers are really the modern equivalent of writing to your colleagues and saying "hey, I did this and I saw this. How about you?"

      What's unfortunate is that we've somehow decided a published paper in a journal is somehow "true."

      What we *do* need to get better at is actually replicating experiments, and doing so in such a way that we produce negative results. The vast majority of those "negative results" that are never published are not negativ

  • by dcw3 ( 649211 ) on Wednesday December 08, 2021 @06:17AM (#62058237) Journal

    People often talk about peer review as if it's a panacea, but this and many other reports have shown that the current state of peer review is an utter failure. The reasons span from things like there's no money aimed at peer review, to there's no glory in peer review...nobody makes a name for themselves by confirming study results.

    • by Ambassador Kosh ( 18352 ) on Wednesday December 08, 2021 @07:09AM (#62058309)

      There are other incentives than that. The people reviewing the paper also know that the ability of the PhD students involved to graduate is often tied to the paper being published. They know that the person working on it didn't get to choose whether reality would work out, and that an honest negative paper that covers exactly what did not work and why will probably not be publishable.

      Would you want to lose all the work you put into a PhD and leave with nothing just because something didn't work? You still learned a lot and expanded our understanding of a subject, and there was no way to know ahead of time if the idea would work or not. That encourages either zero-risk science or science that falsifies results when something does not work.

      • by dcw3 ( 649211 )

        "There are other incentives than that. "

        Yes, I believe I alluded to that when I said that the reasons span...certainly there are others I didn't itemize.

    • by houstonbofh ( 602064 ) on Wednesday December 08, 2021 @10:07AM (#62058693)

      ...nobody makes a name for themselves by confirming study results.

      I do not think that has to be the case. Someone could make a hell of a name for themselves debunking bad studies. Especially right now.

    • Agree with the failure of peer review. Scientists used to do peer review as a public service, but as science budgets have become more constrained, this happens less. Good peer review takes a LOT of time. I used to peer review papers, but no longer do - my laboratory provides no support, I would have to charge project budgets, and the projects have no interest in supporting general science.
      • by dcw3 ( 649211 )

        Exactly. Maybe at some point we should require that the budget for any study include funds for an independent peer review.

    • by ceoyoyo ( 59147 )

      Peer review mostly works as intended. The purpose is to filter out obvious garbage. It's *important* to publish things that are probably wrong. Journals are a conversation, not a bible containing absolute truth.

      • by dcw3 ( 649211 )

        Then why are there countless articles about it not working? Feel free to google.

        • by ceoyoyo ( 59147 )

          There are countless articles about Elvis being sighted too.

          Peer review has problems for sure. This isn't one of them.

    • People often talk about peer review as if it's a panacea, but this and many other reports have shown that the current state of peer review is an utter failure

      Peer review is a low-pass filter. It ensures that the really bad stuff has been filtered out. It improves the quality of papers that get published (basically by proofreading). It doesn't ensure that a study is reproducible.

  • by gosso920 ( 6330142 ) on Wednesday December 08, 2021 @06:28AM (#62058249)
    ... for putting Elizabeth Holmes in charge of data collection.
  • Others mention the issues at work here. However, if those issues lead to scientists not helping each other reproduce results, that will be the death of meaningful science.

    I wonder if someone can summarize why they failed, giving a breakdown. I am sure it's covered in the paper.

    • Only two likely answers: they are too busy and do not have time, or there was fraud and they cannot assist without admitting it.
      • And in both cases something is dreadfully wrong with science... If you're the head of a lab and you are too busy, fine, but consider delegating. This person posted results showing that a bunch of experiments failed to be reproducible, which in itself seems like quite the achievement. Having my lab "called out" in this way would be embarrassing, though I do not think the publication goes about it in this manner. The lack of support for reproducing your research is like admitting it's horse shit.

        Btw, I say this as a

  • "NIH will try to improve data sharing among scientists by requiring it of grant-funded institutions in 2023.."

    Stock tip: digital storage manufacturers and resellers
  • by Ubi_NL ( 313657 ) <joris.benschop@g ... Ecom minus punct> on Wednesday December 08, 2021 @07:23AM (#62058345) Journal

    As a professional scientist in industry, I find this number not surprising at all. However, putting this in a bigger picture, the current attempts to apply machine learning to biology keep on failing miserably because of exactly this point. The ML people just feed literature into their algo, not understanding that the signal-to-noise ration is so high nothing reliable ever comes out

    • The ML people just feed literature into their algo, not understanding that the signal-to-noise ration is so high nothing reliable ever comes out

      I think you meant to say "the signal-to-noise ratio is so low". Personally, I always strive for as much signal and as little noise as possible... ;-)

    • That failure of Watson could be used as an algorithmic test of whether the problem is fixed in the future: when a machine chewing on all the collected results starts returning consistent results, then we could believe that there's enough signal to trust the aggregate body of knowledge.

  • by bradley13 ( 1118935 ) on Wednesday December 08, 2021 @08:23AM (#62058469) Homepage

    "It is concerning that about a third of scientists were not helpful, and, in some cases, were beyond not helpful"

    "Publish or perish" at its best. If you spend months on a study, you must publish some significant result. "We didn't find anything" is not publishable, so the scientists fudge their data and twist the statistics into pretzels. It's no surprise that they aren't helpful, when someone is trying to reproduce their results, because they know what's going to happen.

    Reproducing studies is important work. Not only because of academic malfeasance, but because studies can also simply be mistaken, through a variety of legitimate error sources.

    No paper should be published without full and complete data. What went into the study? How was it conducted? What was the raw data produced? What computer code was used? Etc. That should all be permanently available, and checked by the reviewers, so that anyone can attempt to reproduce the study. Incomplete or inconsistent information means no publication.

    • Sometimes you find something negative that is really important. I have seen people do some really amazing work that proved something we had long thought was true was wrong and exactly why it was wrong. It really added a lot of useful information but that stuff is very hard to get published. If you want to graduate you better turn that into a success in some way or you are screwed.

  • by JoeRobe ( 207552 ) on Wednesday December 08, 2021 @09:00AM (#62058533) Homepage

    There's non-reproducible and then there's bad science. Those are frequently one and the same, but not always. Data from the LHC, for example, can't be reproduced anywhere else, but that doesn't make it bad science. That's an obvious example, but the same issue appears more subtly in other fields. Almost all experiments have a set of variables that are under control, and ones that aren't under control but may or may not make a difference in your final result. Many labs have spent years finding those uncontrolled variables and getting them under control, allowing them to get data with a higher signal-to-noise ratio than if they let those variables go uncontrolled. Some of that is "institutional knowledge" passed from grad student to grad student (e.g. "we found that we need to autoclave this spatula between uses to prevent bacterial crossover, even if you don't think it needs it"; or "we found that the hplc-grade water from supplier A is higher purity, so we use that"). Ideally this is all laid out in papers, but not always.

    So the question is whether lab #2 that tried to reproduce the result had the same level of expertise and institutional knowledge, or if their data was noisier due to lack of that knowledge.

    Either way, there's also the possibility that Lab #1 had great data, or Lab #2 had bad data because there was an uncontrolled variable in both cases that biased their results.

    My takeaway from this sort of study is that there's either a lot of finesse to getting these results right, or there are unknown variables that need to get controlled to determine if the original results are correct.

    I'm dismayed by the lack of support from previous researchers and hope they are explicitly called out.

    • by chiguy ( 522222 )

      I've been told "reproducibility" is a requirement of good "science". Otherwise it's anecdotal.

      What you're saying is similar to the Vienna Beef story about moving buildings. But the goal of science is understanding the why. So if an experiment is not reproducible, then the results are not valid because you haven't ascertained the cause.

      If the second lab doesn't have the institutional knowledge to reproduce the first lab's results, then it's up to the first lab to help the second lab gain the knowledge, becau

      • by JoeRobe ( 207552 )

        Most experiments do have reproducibility built in, i.e. they do the same experiment on 10 mice instead of just one. Or they will repeat the same experiment over and over. So in that regard it's usually reproduced. The problem is when the same experiment is reproduced in a different lab. Unfortunately writing something like that into a grant proposal (having two labs do nominally the same thing) would probably be a non-starter, albeit admirable.

        I'm reminded of a photoelectron spectroscopy battle I had in

      • This is not actually always true. For instance, if you are trying to breed yeast to produce and survive in higher concentrations of ethanol, it is common to use random mutation and selective breeding. Even if the odds are extremely low, it only has to work once for you to make more of that organism. Anyone can take your sample, grow it, and verify it works, but if they try the same experiment they probably won't end up with the same result.

  • If I remember correctly, a similar study carried out by Pfizer, found that 80% of the results published in the top biomedical journals were not reproducible.

    And it's not the same across all fields of science. Results in the harder sciences like chemistry and physics are much more reproducible. Of course, with the proliferation of journals, and publications, most of them aren't worth repeating anyhow.
    • Amgen and Bayer have done the same things. Many companies have at this point.

      I find that engineering journals have a FAR higher rate of being correct. Usually above 95% in my experience. Many scientists don't like publishing in engineering journals because those papers get few citations and thus don't help their careers very much. There is a lot that goes into getting hired: journals have scores, your points get added up, and they become part of your resume for getting jobs.

  • try searching for a developer exit! (old Mario Maker wisdom). ;)

  • This is nowhere near as bad as in social science, which over the last decade or so went into complete quackery territory. A few years back a group of scientists had multiple hoax papers [economist.com] published, showing that peer review is dead in that field. So what happened next? Nothing changed, but they all got pushed out, including people with tenure.
  • If I'm a cancer patient seeking a cure, the last thing I want to hear is that the treatment they're prescribing may not work because they skipped a few steps.
    Oh wait, we're doing that with a pandemic right now so I guess it'll just be the new normal.

    • by ceoyoyo ( 59147 )

      Don't confuse basic science results in cancer research with approved therapies. Randomized clinical trials are very expensive, but they're also the only way to be sure a therapy actually works. We do not want to set the bar that high for more basic results (i.e. the ones that don't get injected into cancer patients) because nothing would ever get done.

      • Disclosure is disclosure, and it's up to the patient to decide what risks they're willing to take vs. the potential benefits of a therapy. What's astonishing from the TFA is the lack of reproducibility; to me that means a lack of professional standards. Medicine isn't an exact science, but at least you should be assured that a therapeutic will actually help the condition, not just placebo it.

        • Research tests on very few people, and researchers can't control for a lot of factors because of those small numbers. If you hold research to the same standards as a clinical trial, almost none of it will get done anymore and most new medicines will stop being created.

        • by ceoyoyo ( 59147 )

          This isn't medicine, it's biology. As I said in my post, don't confuse basic science results in cancer research with approved therapies.

          I don't know what you think disclosure is or what it has to do with this topic.

  • is another way of saying that bullshit on the edge of statistical significance is being published.

    Let's make a useful analogy. Picture a bell curve representing the natural noise in any measurement. If the bell curve is dead center on the threshold called "statistical significance" - a threshold derived from a prior noise model built on some assumption about underlying reality - then exactly half the stuff will be "right" and half the stuff will be "wrong" just from experimental error.

    How can the bell curve of experimental er
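    A quick toy simulation illustrates that point (a hypothetical sketch of my own, assuming the true effect in standard-error units sits exactly on the usual two-sided 5% cutoff; numpy assumed available):

```python
# Toy simulation (hypothetical numbers, not from the study): when the true effect
# sits exactly at the significance threshold, roughly half of independent
# replications come out "significant" purely from sampling noise.
import numpy as np

rng = np.random.default_rng(0)
n_replications = 100_000
true_effect = 1.96   # true effect in standard-error units, placed right on the cutoff
threshold = 1.96     # the usual two-sided p < 0.05 cutoff for a normal test statistic

# Each replication observes the true effect plus unit-variance sampling noise.
observed = true_effect + rng.standard_normal(n_replications)
print("fraction significant:", (observed > threshold).mean())  # ~0.50
```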

  • People can fabricate their research, tweak the results as they want, and publish. There is no penalty for this. The reasons why they do it are the same as for any other type of fraud: money, success, prestige, etc. There needs to be a risk to breaking the public trust in science before it will stop at all. People who do this act with humility and grace, while at the same time being con artists with a skill for acquiring the nomenclature of a field of science. How did they get their PhD? Same method likel
    • Actually this is false. If you are caught committing academic fraud, your career is normally over. If that fraud was used to get your degree, then your degree is normally also revoked. The penalties are quite harsh.

  • Sorry but I'm confused. It could be any, several, or all of the following:
    - An inability to write good articles that explain how to reproduce the experiment.
    - Shabby/poor peer review standards of modern journals.
    - Deliberately skewed interpretation.
    - Unpublished manipulation of the experiment.
    - Deliberate falsification of the experiment.
    - Deliberate falsification of the methodology.
    - Deliberate falsification of the steps needed to reproduce.
    ...and many others - all of which point to bad science, done badly, pee
