Four Ways To Reduce Bad Scientific Research (nature.com)
An experimental psychologist at the University of Oxford argues that in two decades, "we will look back on the past 60 years -- particularly in biomedical science -- and marvel at how much time and money has been wasted on flawed research."
[M]any researchers persist in working in a way almost guaranteed not to deliver meaningful results. They ride with what I refer to as the four horsemen of the reproducibility apocalypse: publication bias, low statistical power, P-value hacking and HARKing (hypothesizing after results are known). My generation and the one before us have done little to rein these in. In 1975, psychologist Anthony Greenwald noted that science is prejudiced against null hypotheses; we even refer to sound work supporting such conclusions as 'failed experiments'...
The problems are older than most junior faculty members, but new forces are reining in these four horsemen. First, the field of meta-science is blossoming, and with it, documentation and awareness of the issues. We can no longer dismiss concerns as purely theoretical. Second, social media enables criticisms to be raised and explored soon after publication. Third, more journals are adopting the 'registered report' format, in which editors evaluate the experimental question and study design before results are collected -- a strategy that thwarts publication bias, P-hacking and HARKing. Finally, and most importantly, those who fund research have become more concerned, and more strict. They have introduced requirements that data and scripts be made open and methods be described fully.
I anticipate that these forces will soon gain the upper hand, and the four horsemen might finally be slain.
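To make two of the horsemen above concrete, here is a small simulation (my own illustration, not from the article) of "optional stopping", a common form of P-hacking: peek at the data every few subjects and stop as soon as p < 0.05. Both groups are drawn from the same distribution, so every "significant" result is a false positive.

```python
# Illustrative sketch only. Both samples come from the SAME distribution,
# so the null hypothesis is true by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(max_n=100, peek_every=10, alpha=0.05):
    a, b = [], []
    for _ in range(max_n // peek_every):
        a.extend(rng.normal(0, 1, peek_every))
        b.extend(rng.normal(0, 1, peek_every))  # same distribution as group a
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True   # "significant" -- stop collecting and write it up
    return False

false_positives = sum(one_study() for _ in range(2000))
print(f"false-positive rate with optional stopping: {false_positives / 2000:.1%}")
```

Run it and the false-positive rate lands well above the nominal 5%: the nominal alpha only holds if the stopping rule is fixed in advance, which is exactly what registered reports enforce.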
Re: (Score:2, Interesting)
We know that sex in mammals, including humans, is primarily determined by chromosome pair 23. If both copies of chromosome 23 are of type X, chances are good that the phenotype of the individual will be female. If one copy is of type Y and the other is of type X, there is a good chance that the phenotype of the individual will be male. There are some exceptions to the rule. For instance, there is Turner syndrome, where the individual only has a single
If you *believe*, you're already doing it wrong! (Score:2, Insightful)
This is something that bugs the hell out of me about US discussions of science.
The concept of *believing* in science. As if they just lost their faith and are unable to stop thinking in religious terms.
The *whole point* of science is that you do not have to believe it! Hell, it does not even concern itself with truth!
It is *useful* for predicting reality, no matter your beliefs or lack thereof!
Even when we know that quantum physics and relativity are directly incompatible with each other, and hence cannot pos
Re:Vote for Parties that Believe in Science (Score:4, Insightful)
" the past 60 years -- particularly in biomedical science -- and marvel at how much time and money has been wasted on flawed research" is also the time when idolization of science seems to never have been greater.
As Nassim Taleb said, we have managed to transfer religious belief into gullibility for whatever masquerades as science.
Re: Vote for Parties that Believe in Science (Score:2)
That's perhaps the stupidest Slashdot comment I've read. In response to an article talking about how online discussions speed up criticism, how each scientific paper's methods are being formalized (all the way down to code and data) and shared for scrutiny, and how research designs are being registered for publication consideration before the work is even done, you think that we're getting dogmatic....
Yeah. Dumbest in a generation.
Re: (Score:2)
You clearly don't mean ANY political party then. They only believe in their own narrative, and not even that, when they think they've come up with another vote catching idea.
Not to my awareness (Score:4, Informative)
They have introduced requirements that data and scripts be made open and methods be described fully.
This is the opposite of what I have experienced. Several of the journals are getting less interested in the methods and code. My reviews have instead come back asking me for *less* detail on code and methods, and more focus on the results and conclusions.
Admittedly, I do publish mostly bioinformatics oriented research in medical journals. The biology oriented papers have gotten slightly better responses (though they are also mostly interested in results/conclusions).
Re: (Score:2)
Grants (especially >$250k R grants) require open methodology and data exchange, although in practice that is largely theoretical: I haven't seen many papers or results that fulfill those requirements, generally because the open source tools require a programmer and the closed source stuff is point-and-click. The closed source stuff doesn't scale, though.
Re: (Score:1)
I do supply code to anyone who wants it, but I don't submit the code to the journal; there is simply no process for doing that. As for open data, that is impossible in the medical field. If I were to publish the medical records of 1.8 million patients, I would end up in jail. I also report everything to the grant committee, but again excluding the data.
Re:Not to my awareness (Score:4, Interesting)
You are probably one of the few who does supply code. Most people in science still use closed source shrink-wrapped software because it's simpler or what they're used to. If you wanted to replicate their results, you'd have to fork over $50k for licensing.
You can publish medical data de-identified; I've dealt with that many times, and there is plenty of open source software for it as well. The sharing requirement from NSF/NIH is a "making available" clause, but rarely (if ever) is there sufficient funding included for long-term storage and publishing of massive datasets.
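For readers who haven't seen it, here is a minimal sketch of the kind of de-identification being described (hypothetical record layout; real regimes such as HIPAA Safe Harbor remove many more fields than this):

```python
# Toy de-identification: pseudonymize the ID, drop direct identifiers,
# coarsen quasi-identifiers. Hypothetical fields, for illustration only.
import hashlib
import secrets

SALT = secrets.token_hex(16)   # kept private, never released with the data

def deidentify(record):
    out = dict(record)
    # One-way salted pseudonym, so records can still be linked to each other.
    out["patient_id"] = hashlib.sha256(
        (SALT + str(record["patient_id"])).encode()).hexdigest()[:16]
    # Drop fields that identify a person on their own.
    for field in ("name", "address", "phone", "birth_date"):
        out.pop(field, None)
    # Keep only year of birth, pooling the oldest patients into one bucket.
    out["birth_year"] = max(record["birth_year"], 1933)
    return out

print(deidentify({"patient_id": 42, "name": "Jane Doe",
                  "birth_date": "1925-03-01", "birth_year": 1925,
                  "diagnosis": "diabetes"}))
```

As the reply below points out, this alone does not protect longitudinal diagnosis sequences, which can remain unique even with every direct identifier removed.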
Re: (Score:2)
The sharing requirement from NSF/NIH is a "making available" clause
It can take years to get the data transferred even when it is available.
Re: (Score:1)
I'm afraid I can't reasonably publish de-identified data, as it is quite often a sequence of diagnostic entries over time (diabetes, various types of cancer, Alzheimer's, etc.), and it is deemed fully plausible that such data could be linked back to its source. Even aggregated individual data is considered a touchy subject, since there are some unique sequences.
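That uniqueness worry is easy to check mechanically. A toy sketch (hypothetical data) that counts how many patients share each diagnosis sequence; any sequence seen only once is a k-anonymity failure and a potential linkage risk even after IDs are stripped:

```python
# Count how many patients share each ordered diagnosis history (toy data).
from collections import Counter

histories = {
    "p1": ("diabetes", "nephropathy"),
    "p2": ("diabetes", "nephropathy"),
    "p3": ("melanoma", "diabetes", "alzheimers"),   # unique -> linkage risk
}

counts = Counter(histories.values())
for patient, seq in histories.items():
    if counts[seq] < 2:   # fails even k=2 anonymity
        print(f"{patient}: sequence {seq} is unique in this dataset")
```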
It will get even worse (Score:3)
All journals need to introduce mandatory quotas for replication studies and negative findings or science will drown in bullshit the way social sciences and psychology did. Peer review is just not enough, especially since you don't generally get to see the data.
Re: (Score:2)
The problem is that new science is getting really complex. It's not so much running an experiment as setting up a simulation, letting it run, and getting it to the desired state so you can monitor changes in state; we are pushing the bounds of what can be measured, and what cannot be measured directly is measured through its impact on what can be. The fields of quantum science, the infinitely small and infinitely fast (well, relative to normal space), can only be measured by their impact upon normal space. So running whole series
Juniors reviewing seniors face to face works too (Score:4, Interesting)
> At the conference, the committee found a reviewer who decided to sit one-on-one with me so that he could better understand what I had done. This non-anonymous peer review was a very good experience, since this senior scientist suggested some improvements to a paper by a novice scientist.
I've been surprised how well it can work when a smart junior person reviews the work of a senior person. I've been working in my field for 20 years, and I seek out review by certain much more junior people because they look carefully to understand all of the details. Sometimes they find a detail I got wrong.
At my first flying lesson, my instructor was showing me how to preflight the plane (inspect it before flight). At the end of the process, not knowing much, I asked if the bubbles in the fuel line were normal. It turned out they indicated a problem that could have killed us both. The carb bowl was full of fuel, but it would have been empty after about 60 seconds of full throttle -- when we were 300 feet off the ground after takeoff, nose up.
Funding mechanism is part of the problem (Score:5, Interesting)
It sounds reasonable at first glance - high level decisions are made on what projects should be funded, and then the groups that succeed continue to be funded.
The problem is that this provides a strong incentive for scientists to interpret and publish their results with a positive bias. I think actual fraud is quite rare -- but it's very easy to mislead readers and reviewers without actually lying. The groups that make problems clear don't get funded next time, so there is a sort of evolutionary pressure toward deception.
Peer review is supposed to fix that, but IMHO it is weakening. I'm a scientist at a national lab, and we are all extremely busy, working on very tight budgets -- tight budgets being seen as a way to improve efficiency. The effect, though, is that we are not specifically funded to review papers, and many of us have to decline reviews because we simply don't have time; we would essentially be stealing money from project budgets if we did. A good review takes a lot of time and effort, especially if the reviewer has to look for things the writers omitted. This reduces the pool of available reviewers.
Without an overall change in funding schemes, I don't see how to avoid this decline. Scientists are humans and while in general honest, they are subject to the same sort of pressures as anyone else. Set up a system that rewards deception and that is what you will get.
Re:Funding mechanism is part of the problem (Score:5, Interesting)
I think it's both a change in funding and a change in publishing that's needed.
My master's thesis did not get published because it found that the best thing since sliced bread was not statistically better than sliced bread. Since then a number of people in the field have published work based on said next best thing, because "everyone knows" that it's better. $100 says another grad student before me did similar research and likewise didn't get published because it wasn't interesting or notable. And $100 says at least a couple since then have done likewise.
It's a criminal shame that we're not publishing our dead ends. That's setting everyone back, and there's a ton of repeated work that never sees the light of day, and we keep going around and around in a vicious cycle because we can't learn from the failures of others.
I think we need to move to decade-long funding rather than annual funding, with the requirement that everything done with the money get published. And tie into that a percentage dedicated to publishing and reviewing, so that there's no issue of lack of either. Then, when we're deciding the next decade of funding, look at the overall research of that lab, and gauge how much they're getting accomplished with that funding. That way everyone has time to create a robust research group, do a deep dive into the topic, and share their work with the world. No, not everything will be a positive finding, but we can learn from all of it.
Re: (Score:3)
Your points are right on the money.
Ten-year funding makes sense from a research, academic, healthy-project, get-the-job-done-well point of view, but that would never fly. The funders are either short-sighted, or budgeted by sponsors with a yearly budget process, or they would get harangued by grant applicants who object to funds being tied up. But 4-6 years at a time has precedent -- that's how the government itself runs.
Government funded studies ought by law to be published, negative or null results as well as
Re: (Score:3)
Velociraptor = Distiraptor / Timeraptor
You should cancel out the two raptors if you want to get published :P
Re: (Score:2)
Exactly that!
Everything else is a consequence of the incentives generated by the current funding mechanisms. Academic careers are tied to receiving government grants (no promotion or tenure without extramural funding); receiving grants is tied to publishing papers; publishing papers requires "positive" results.
Large overhead costs that are allowed by organizations like NIH (money the institution receives on top of every grant; currently 40-60% on top of the direct research costs for an average public uni
Re: (Score:3)
I agree with a lot of what you said, but "overhead" is more complicated. People tend to conflate "overhead" and "waste", but at least for us it's completely different. While there is wasteful overhead for bureaucracy, for us overhead also covers pre-proposal work. I work on instrumentation (mostly for radio telescopes), and often quite a bit of work is needed to get an idea to the state where it can be funded.
If I just say I want to use electronically cooled, physically room temperature amplifiers as some 2
Social media? (Score:2)
"Second, social media enables criticisms to be raised and explored soon after publication."
We really need to know what Kim Kardashian thinks of the research. As the undisputed queen of social media, she will keep us pointed in the right direction.
Jebuz farkin Kryste. Why don't we just go back to getting our science from the Bible. At least stoning people who have the wrong ideas will be entertaining, amirite?
Re: (Score:1)
A lot of scientists today are on Twitter, and once a paper is published you can expect huge numbers of comments on it from other scientists via Twitter. That is probably what he is referring to.
Twitter isn't a scientific tool. It is a tool for destroying your career and for narcissistic mental masturbation, and any response that can be contained within a tweet belongs there.
Twitter is more emblematic of the problem the writer whines about than a solution. Regardless, Kim needs to weigh in on this, because her response carries the same weight, perhaps more. All coming from the same place.
I like to drink water. I don't get it from the commode.
Re: (Score:2)
Regardless, there are right now tons of scientific discussions happening on Twitter. I don't know how it is in other fields, but in nutrition and exercise science this is very prevalent.
If a billion people believe something that is wrong, it is still wrong. It must be wonderful and groundbreaking scientific discussion that fits within Twitter's text limits.
Looks like we're going to end up with others thinking Twitter is part of the answer, and me thinking it's a symptom of the problem.
You're close. (Score:2)
That is probably what he is referring to.
Author of TFA is one Dorothy Bishop, an experimental psychologist at the University of Oxford, UK.
But yeah, that is what she is referring to.
Well, the problems in science are manifold... (Score:3)
Egos. The expectation of monetization. Pressure - to publish, to produce "something." Complete incompetence in the application of statistics - ignoring data, flagrant changing of data. Pathetic experimental design that would shame a 10-year-old.
Has it ever been so? Probably, but not on the 20th/21st century scale.
Re: (Score:2)
This is driven by the funding model. Funding agencies want a quantitative way to determine success, so they use publications (generally weighted in some way by journal impact) as a measure of success.
Of course, it's not obvious what would be a better measure.
HARKing is a good "first step" but incomplete (Score:2)
hypothesizing after results are known
Do an experiment, find results, make a hypothesis that explains THESE results, then hand the hypothesis off to someone independent of you and your team to conduct ANOTHER study to see if the hypothesis is sound.
Have yet a third person or group make sure the new experiment isn't so similar to yours that it fails to be a genuinely independent test of the hypothesis.
If the 2nd study supports the hypothesis, then do a joint publication and invite the discipline to come up with other, dissimilar experiments to support or refute th
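What the grandparent describes is essentially the split between exploratory and confirmatory analysis. A minimal single-dataset sketch of that idea (toy data; the stronger version above hands the confirmatory step to an independent team): generate the hypothesis on one half of the data, then run exactly one pre-specified test on the untouched half.

```python
# Toy exploratory/confirmatory split; the null is true by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))     # 200 subjects, 20 measured variables
y = rng.normal(size=200)           # outcome, unrelated to X by design

explore, confirm = slice(0, 100), slice(100, 200)

# Exploratory half: screen everything and pick the strongest correlate.
# HARKing would stop here and present this variable's p-value as confirmatory.
best = max(range(X.shape[1]),
           key=lambda j: abs(stats.pearsonr(X[explore, j], y[explore])[0]))

# Confirmatory half: one pre-specified test on data the hypothesis never saw.
r, p = stats.pearsonr(X[confirm, best], y[confirm])
print(f"variable {best}: confirmatory r = {r:.2f}, p = {p:.3f}")
```

The exploratory p-value is biased by the selection; the confirmatory one is not.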