Science

Four Ways To Reduce Bad Scientific Research (nature.com)

An experimental psychologist at the University of Oxford argues that in two decades, "we will look back on the past 60 years -- particularly in biomedical science -- and marvel at how much time and money has been wasted on flawed research." [M]any researchers persist in working in a way almost guaranteed not to deliver meaningful results. They ride with what I refer to as the four horsemen of the reproducibility apocalypse: publication bias, low statistical power, P-value hacking and HARKing (hypothesizing after results are known). My generation and the one before us have done little to rein these in. In 1975, psychologist Anthony Greenwald noted that science is prejudiced against null hypotheses; we even refer to sound work supporting such conclusions as 'failed experiments'...

The problems are older than most junior faculty members, but new forces are reining in these four horsemen. First, the field of meta-science is blossoming, and with it, documentation and awareness of the issues. We can no longer dismiss concerns as purely theoretical. Second, social media enables criticisms to be raised and explored soon after publication. Third, more journals are adopting the 'registered report' format, in which editors evaluate the experimental question and study design before results are collected -- a strategy that thwarts publication bias, P-hacking and HARKing. Finally, and most importantly, those who fund research have become more concerned, and more strict. They have introduced requirements that data and scripts be made open and methods be described fully.

I anticipate that these forces will soon gain the upper hand, and the four horsemen might finally be slain.
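
The statistics behind two of those horsemen are easy to demonstrate. Below is a minimal simulation sketch in Python (illustrative, not from the article): if a study with no real effect measures 20 independent outcomes and reports whichever one clears p < 0.05, roughly 64% of such studies will find "something", versus the nominal 5%.

    # Sketch: P-hacking via multiple outcomes. No real effect exists,
    # yet reporting only the best outcome yields "significance" ~64% of the time.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n_studies, n_outcomes, n_subjects, alpha = 10_000, 20, 30, 0.05

    hits = 0
    for _ in range(n_studies):
        # Two groups drawn from the SAME distribution: any "effect" is noise.
        a = rng.normal(size=(n_outcomes, n_subjects))
        b = rng.normal(size=(n_outcomes, n_subjects))
        # Welch-style t statistic per outcome, normal approximation for p.
        t = (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(
            a.var(axis=1, ddof=1) / n_subjects + b.var(axis=1, ddof=1) / n_subjects)
        p = 2 * norm.sf(np.abs(t))
        if (p < alpha).any():  # report only the "significant" outcome
            hits += 1

    print(f"Studies with at least one 'significant' result: {hits / n_studies:.1%}")
    # Roughly 1 - 0.95**20 ~ 64%, against a nominal false-positive rate of 5%.

A registered report blocks exactly this move: the outcome of interest has to be named before the data exist.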

  • Not to my awareness (Score:4, Informative)

    by TimothyHollins ( 4720957 ) on Saturday April 27, 2019 @10:45AM (#58500864)

    They have introduced requirements that data and scripts be made open and methods be described fully.

    This is the opposite of what I have experienced. Several of the journals are getting less interested in the methods and code. My reviews have instead come back asking me for *less* detail on code and methods, and more focus on the results and conclusions.

    Admittedly, I mostly publish bioinformatics-oriented research in medical journals. The biology-oriented papers have gotten slightly better responses (though those journals are also mostly interested in results/conclusions).

    • by guruevi ( 827432 )

      Grants (especially >$250k R grants) require open methodology and data exchange. In practice that is purely theoretical: I haven't seen many papers/results that fulfill those requirements, generally because the open source tools require a programmer and the closed source stuff is point-and-click. The closed source stuff doesn't scale, though.

      • I do supply code to anyone who wants it, but I don't submit the code to the journal; there is simply no process for doing that. As for open data, that is impossible in the medical field. If I were to publish the medical records of 1.8 million patients, I would end up in jail. I also report everything to the grant committee, but again excluding the data.

        • by guruevi ( 827432 ) on Saturday April 27, 2019 @01:05PM (#58501340)

          You are probably one of the few who does supply code. Most people in science still use closed-source shrink-wrapped software because it's simpler or it's what they're used to. If you wanted to replicate their results, you'd have to fork over $50k for licensing.

          You can publish medical data de-identified; I've dealt with that many times, and there is plenty of open source software for that as well. The sharing requirement from NSF/NIH is a "making available" clause, but rarely (if ever) is there sufficient funding included for the long-term storage and publishing of massive datasets.
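
          For what it's worth, the mechanics of basic de-identification are simple even before you reach for dedicated tooling. A rough Python sketch (field names are hypothetical; this is the idea, not an IRB-approved protocol): drop direct identifiers, generalize dates to years, truncate ZIP codes.

            # De-identification sketch; hypothetical field names, not a vetted tool.
            DIRECT_IDENTIFIERS = {"name", "ssn", "mrn", "address", "phone"}

            def deidentify(record: dict) -> dict:
                out = {}
                for key, value in record.items():
                    if key in DIRECT_IDENTIFIERS:
                        continue                       # drop direct identifiers outright
                    elif key == "birth_date":
                        out["birth_year"] = value[:4]  # generalize full date to year
                    elif key == "zip":
                        out["zip3"] = value[:3]        # truncate ZIP to first three digits
                    else:
                        out[key] = value
                return out

            patient = {"name": "Jane Doe", "mrn": "12345", "birth_date": "1961-07-04",
                       "zip": "90210", "diagnosis": "E11.9"}
            print(deidentify(patient))
            # {'birth_year': '1961', 'zip3': '902', 'diagnosis': 'E11.9'}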

          • The sharing requirement from NSF/NIH is a "making available" clause

            It can take years to get the data transferred even when it is available.

          • I'm afraid I can't reasonably publish de-identified data, as it is quite often a sequence of diagnostic entries over time (diabetes, various types of cancer, Alzheimer's, etc.), and it is deemed fully plausible that such data could be linked back to its source. Even aggregated individual data is considered a touchy subject, since there are some unique sequences.
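
            That risk can at least be measured before deciding. A quick k-anonymity-style check (hypothetical data layout) counts how many patients share each exact diagnosis sequence; any sequence with a count of 1 is unique and re-identifiable in principle.

              # k-anonymity-style uniqueness check on diagnosis sequences
              # (hypothetical data layout).
              from collections import Counter

              # Each patient: diagnosis codes in chronological order.
              patients = [
                  ("E11.9", "I10"),                # diabetes, then hypertension
                  ("E11.9", "I10"),
                  ("E11.9", "C50.9", "G30.9"),     # diabetes, cancer, Alzheimer's
              ]

              counts = Counter(patients)
              unique = [seq for seq, k in counts.items() if k == 1]
              print(f"{len(unique)} of {len(counts)} distinct sequences occur once; "
                    f"those patients are unique in the dataset")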

  • by sinij ( 911942 ) on Saturday April 27, 2019 @11:10AM (#58500940)
    I worked in science. The article is correct in pointing out flaws, but I would say it understates their magnitude in many fields. It is much easier to publish a positive finding based on a completely BS model than a negative finding about an already existing and accepted model. As such, BS models multiply and propagate.

    All journals need to introduce mandatory quotas for replication studies and negative findings, or science will drown in bullshit the way the social sciences and psychology did. Peer review is just not enough, especially since you don't generally get to see the data.
    • by rtb61 ( 674572 )

      The problem is that new science is getting really complex. It's not so much running an experiment as setting up a simulation, letting it run to the desired state, and then monitoring changes in state; you end up pushing the bounds of what can be measured, and what cannot be measured directly gets measured through its impact on what can be. The fields of quantum science, the infinitely small and the infinitely fast (relative to normal space), can only be measured by their impact upon normal space. So running whole series

  • by joe_frisch ( 1366229 ) on Saturday April 27, 2019 @11:25AM (#58500980)

    It sounds reasonable at first glance - high level decisions are made on what projects should be funded, and then the groups that succeed continue to be funded.

    The problem is that this provides a strong incentive for scientists to interpret and publish their results with a positive bias. I think actual fraud is quite rare, but it's very easy to mislead readers and reviewers without actually lying. The groups that make problems clear don't get funded next time, so there is a sort of evolutionary pressure toward deception.

    Peer review is supposed to fix that, but IMHO it is weakening. I'm a scientist at a national lab, and we are all extremely busy, working on very tight budgets; that is seen as a way to improve efficiency. The effect, though, is that we are not specifically funded to review papers, and many of us have to decline reviews because we simply don't have time and would essentially be stealing money from project budgets if we did. A good review takes a lot of time and effort, especially if the reviewer has to look for things the writers omitted. This shrinks the pool of available reviewers.

    Without an overall change in funding schemes, I don't see how to avoid this decline. Scientists are human, and while generally honest, they are subject to the same sorts of pressures as anyone else. Set up a system that rewards deception, and that is what you will get.

    • I think it's both a change in funding and a change in publishing that's needed.

      My master's thesis did not get published because it found that the best thing since sliced bread was not statistically better than sliced bread. Since then, a number of people in the field have published work based on said next best thing, because "everyone knows" it's better. $100 says another grad student before me did similar research and likewise didn't get published because it wasn't interesting or notable, and another $100 says at least a couple since then have done the same.

      It's a criminal shame that we're not publishing our dead ends. That's setting everyone back, and there's a ton of repeated work that never sees the light of day, and we keep going around and around in a vicious cycle because we can't learn from the failures of others.

      I think we need to move to decade-long funding rather than annual funding, with the requirement that everything done with the money get published. And tie into that a percentage dedicated to publishing and reviewing, so that there's no issue of lack of either. Then, when we're deciding the next decade of funding, look at the overall research of that lab, and gauge how much they're getting accomplished with that funding. That way everyone has time to create a robust research group, do a deep dive into the topic, and share their work with the world. No, not everything will be a positive finding, but we can learn from all of it.

      • Your points are right on the money.

        Ten-year funding makes sense from a research, academic, healthy-project, get-the-job-done-well point of view, but it would never fly. The funders are either short-sighted, or budgeted by sponsors with a yearly budget process, or they would get harangued by grant applicants who object to funds being tied up. But 4-6 years at a time has precedent; that's how the government itself runs.

        Government-funded studies ought by law to be published, negative or null results as well as positive ones.

      • Velociraptor = Distiraptor / Timeraptor

        You should cancel out the two raptors if you want to get published :P

    • by pesho ( 843750 )

      Exactly that!

      Everything else is a consequence of the incentives generated by the current funding mechanisms. Academic careers are tied to receiving government grants (no promotion or tenure without extramural funding); receiving grants is tied to publishing papers; publishing papers requires "positive" results.

      Large overhead costs are allowed by organizations like NIH (money the institution receives on top of every grant; currently 40-60% on top of the direct research costs for an average public university).

      • I agree with a lot of what you said, but "overhead" is more complicated. People tend to conflate "overhead" and "waste", but at least for us it's completely different. While there is wasteful overhead for bureaucracy, for us overhead also covers pre-proposal work. I work on instrumentation (mostly for radio telescopes), and often quite a bit of work is needed to get an idea to the state where it can be funded.

        If I just say I want to use electronically cooled, physically room temperature amplifiers as some 2

  • From TFA:

    "Second, social media enables criticisms to be raised and explored soon after publication."

    We really need to know what Kim Kardashian thinks of the research. As the undisputed queen of social media, she will keep us pointed in the right direction.

    Jebuz farkin Kryste. Why don't we just go back to getting our science from the Bible? At least stoning people who have the wrong ideas will be entertaining, amirite?

    • A lot of scientists today are on Twitter, and once a paper is published you can expect huge numbers of comments on it from other scientists via Twitter. That is probably what he is referring to.
      • A lot of scientists today are on Twitter, and once a paper is published you can expect huge numbers of comments on it from other scientists via Twitter. That is probably what he is referring to.

        Twitter isn't a scientific tool. It is a tool for destroying your career and for narcissistic mental masturbation, and any response that can be contained within a tweet belongs there.

        Twitter is more emblematic of the problem the writer whines about than a solution. Regardless, Kim needs to weigh in on this, because her response carries the same weight, perhaps more. All coming from the same place.

        I like to drink water. I don't get it from the commode.

        • Regardless, there are tons of scientific discussions happening on Twitter right now. I don't know how it is in other sciences, but in nutrition and exercise science this is very prevalent.
          • Regardless, there are tons of scientific discussions happening on Twitter right now. I don't know how it is in other sciences, but in nutrition and exercise science this is very prevalent.

            If a billion people believe something that is wrong, it is still wrong. It must be wonderful and groundbreaking scientific discussion that fits within Twitter's text limits.

            Looks like we're going to have to have others thinking Twitter is part of the answer, and me thinking it's a symptom of the problem.

      • That is probably what he is referring to.

        Author of TFA is one Dorothy Bishop, an experimental psychologist at the University of Oxford, UK.
        But yeah, that is what she is referring to.

  • by zkiwi34 ( 974563 ) on Saturday April 27, 2019 @11:41AM (#58501014)

    Egos. The expectation of monetization. Pressure to publish, to produce "something." Complete incompetence in the application of statistics: ignoring data, flagrant changing of data. Pathetic experimental design that would shame a 10-year-old.
    Has it ever been so? Probably, but not on the 20th/21st-century scale.
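
    The statistics point is plain arithmetic, and worth making concrete. A normal-approximation sketch (illustrative numbers, standard textbook formula) of the sample size a two-group comparison needs for 80% power at alpha = 0.05:

      # n per group ~ 2 * (z_{1-alpha/2} + z_{power})^2 / d^2  (normal approximation)
      from math import ceil
      from scipy.stats import norm

      def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
          z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05
          z_beta = norm.ppf(power)           # 0.84 for 80% power
          return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

      for d in (0.8, 0.5, 0.2):              # Cohen's large / medium / small effects
          print(f"d = {d}: ~{n_per_group(d)} subjects per group")
      # d = 0.8: ~25, d = 0.5: ~63, d = 0.2: ~393

    Plenty of published studies run far fewer subjects than that for small effects, which is the "low statistical power" horseman in a nutshell.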

  • hypothesizing after results are known

    Do an experiment, find results, make a hypothesis that explains THESE results, then hand the hypothesis off to someone independent of you and your team to conduct ANOTHER study to see whether the hypothesis is sound.

    Have yet a third person or group make sure the new experiment isn't so similar to yours that it fails to be a good independent test of the hypothesis.

    If the 2nd study supports the hypothesis, then do a joint publication and invite the discipline to come up with other, dissimilar experiments to support or refute the hypothesis.
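
    The independent second study is the step that actually kills HARKing, and a toy simulation (illustrative, pure noise throughout) shows why: pick the predictor that best "explains" one dataset, then re-test that same hypothesis on fresh data.

      # HARKing in miniature: 100 noise predictors, 50 subjects, no real effect.
      import numpy as np
      from scipy.stats import pearsonr

      rng = np.random.default_rng(42)
      n, n_predictors = 50, 100
      outcome = rng.normal(size=n)
      predictors = rng.normal(size=(n_predictors, n))

      # Exploratory phase: hypothesize AFTER seeing the results.
      pvals = [pearsonr(x, outcome)[1] for x in predictors]
      best = int(np.argmin(pvals))
      print(f"Post-hoc hypothesis: predictor {best}, p = {pvals[best]:.4f}")  # looks impressive

      # Independent replication: measure that predictor and the outcome on
      # new subjects. Since no effect exists, both are fresh noise.
      new_outcome, new_predictor = rng.normal(size=n), rng.normal(size=n)
      print(f"Replication: p = {pearsonr(new_predictor, new_outcome)[1]:.4f}")

    The minimum p out of 100 null tests is typically well under 0.01; the replication p is just uniform on (0, 1).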

  • Perhaps this assessment is too harsh and the scientists are just leaning on their college experience. I found it easier in college to do the experiment (or not) when I knew what the expected results should be.
