Poor Scientific Research Is Disproportionately Rewarded (economist.com) 81
A new study calculates a low probability that real effects are actually being detected in psychology, neuroscience and medicine research papers -- and then explains why.
Slashdot reader ananyo writes:
The average statistical power of papers culled from 44 reviews published between 1960 and 2011 was about 24%. The authors built an evolutionary computer model to suggest why and show that poor methods that get "results" will inevitably prosper. They also show that replication efforts cannot stop the degradation of the scientific record as long as science continues to reward the volume of a researcher's publications -- rather than their quality.
The article notes that in a 2015 sample of 100 psychological studies, only 36% of the results could actually be reproduced. Yet the researchers conclude that in the Darwin-esque hunt for funding, "top-performing laboratories will always be those who are able to cut corners." And the article's larger argument is that until universities stop rewarding bad science, even subsequent attempts to invalidate those bogus results will be "incapable of correcting the situation no matter how rigorously it is pursued."
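The "evolutionary computer model" mentioned in the summary can be caricatured in a few lines. The following is my own toy sketch, not the authors' actual model: assume funding rewards raw paper counts, and that lower methodological effort yields more (lower-quality) papers; selection then drives average effort down, exactly the "cut corners" dynamic described above.

```python
# Toy selection model (my own sketch, NOT the study's model): labs have an
# "effort" level in [0, 1]; paper output is inversely related to effort, and
# the reward system copies the most prolific lab while the least prolific dies.
import random

random.seed(1)

def step(efforts, mutation=0.05):
    # Cutting corners (low effort) yields more papers on average.
    papers = [random.random() * (2.0 - e) for e in efforts]
    best = papers.index(max(papers))
    worst = papers.index(min(papers))
    # The most prolific lab is copied (with small mutation); the least dies.
    child = efforts[best] + random.uniform(-mutation, mutation)
    efforts[worst] = min(1.0, max(0.0, child))
    return efforts

labs = [random.uniform(0.2, 1.0) for _ in range(100)]
start = sum(labs) / len(labs)
for _ in range(500):
    step(labs)
end = sum(labs) / len(labs)
# Mean effort declines: volume-based selection favors corner-cutting labs,
# regardless of how often their "results" are real.
```

Note the mechanism never needs deliberate fraud: selection on volume alone is enough to erode methodological effort, which is the study's central point.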
In other news... (Score:1)
Poorly written Economist articles are Disproportionately Rewarded with attention and discussion.
Re: (Score:2)
Indeed. The words "yo" and "dawg" spring to mind.
management (Score:1)
Re: management (Score:3, Insightful)
No, a lot of this work is done in academic institutions that rely on grants to fund research. The funding cycles tend to be about three years long, and there's pressure to generate lots of publications rather than do good work. Institutions also tend to skim a lot of money off the top through F&A costs, and there's a lot of corruption involved. As a result, money isn't spent well and there's not enough to go around.
Medical research (Score:3)
In medical research, when we are comparing groups it is normal to specify the power / do a power calculation
power is the probability of detecting a real effect when one exists (correctly rejecting a false null hypothesis); the risk of "finding" a result when none exists is the Type I error rate (alpha), not power
the null hypothesis is that your two treatments are equal
more here:
http://powerandsamplesize.com/... [powerandsamplesize.com]
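As a concrete illustration (my own sketch using the standard normal-approximation formulas, not anything taken from the linked calculator): power and per-group sample size for comparing two group means with a standardized effect size (Cohen's d).

```python
# Normal-approximation power / sample-size calculations for a two-sided,
# two-sample comparison of means. Pure stdlib: statistics.NormalDist
# provides the standard normal CDF and quantile function.
from math import ceil, sqrt
from statistics import NormalDist

_Z = NormalDist().inv_cdf   # standard normal quantile function
_PHI = NormalDist().cdf     # standard normal CDF

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Smallest n per group to detect a standardized effect (Cohen's d)."""
    z = _Z(1 - alpha / 2) + _Z(power)
    return ceil(2 * (z / effect_size) ** 2)

def achieved_power(n_per_group, effect_size, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test (small lower-tail
    term ignored)."""
    delta = effect_size * sqrt(n_per_group / 2)   # noncentrality parameter
    return _PHI(delta - _Z(1 - alpha / 2))

# A "medium" effect (d = 0.5) at the conventional alpha = 0.05, power = 0.80
# needs about 63 subjects per group; with only 24 per group, power falls
# below 50% -- in the neighborhood of the 24% average power in the summary.
n = sample_size_per_group(0.5)
p = achieved_power(24, 0.5)
```

This is the z-approximation; exact t-based calculations (as on the linked site) give slightly larger sample sizes, but the qualitative picture is the same.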
Re: Medical research (Score:3)
With the neuroscience papers involved in this analysis, though, I would want to know what they're looking at. Some papers oughtn't be counted simply because the research relies on peop
Re: (Score:2)
there's pressure to generate lots of publications rather than do good work.
True, but this is only part of the problem. There is also a huge problem of researchers producing trivial results (whether shoddy or not) because they are afraid to ask the big questions and pursue revolutionary results. If they go long, and fail, their career may be over since nobody publishes negative results. If they fake successful results, they will draw intense scrutiny and be exposed. So they play it safe and do research that nobody cares about.
Top tier publications like Science and Nature have
Re: (Score:1)
Top tier publications like Science and Nature have some good papers.
Science and Nature publish exciting papers with comparatively little data. They're the place you publish things that might be groundbreaking, so they get wide exposure. After a couple of years, most of the revolutionary stuff fails to pan out, but whatever does will always cite Science as the first report.
Serious science is mostly not revolutionary
Re: (Score:2)
Re: management (Score:1)
The cure then is to shut down approx 85% of 'research' so the funding ratio returns to 30%.
Re: (Score:3)
Couldn't have anything to do with short term outlook by poor management in companies?
Very few companies do any published research. This is about academia and government funded labs that seek grants, not industry.
The "short term outlook" in companies actually improves the situation, because it puts pressure on researchers to come up with real results that can be put into products, rather than bogus research papers.
Re: (Score:2)
It really doesn't help. There's all sorts of research you can't do if you're interested in turning it into a product within 2 years.
Re: (Score:2)
Re: (Score:1)
The vast majority of universities in the US and Europe are either publicly funded or non-profits; when they are non-profit, almost all their research funding still comes from the government.
When publicly funded or non-profit organizations tell you that they are "run like businesses", they are lying to you. Businesses need to be run such that they make a profit from what th
Re: (Score:2)
Ah, yes. Academia is trying to copy that "success story". In companies, at some time so much trash will have accumulated from this strategically utterly demented approach that they go down the drain or at least into a major crisis.
Re: (Score:1)
Companies generally deliver cheaper and better products year after year. In contrast, the cost of public education grows faster than inflation while the quality is either stagnant or actually declining.
The "utterly demented approach" is that we keep shoving more and more money int
Re: (Score:1)
Most published scientific research comes from academic institutions and public funding. Therefore, these problems have mostly to do with the way the US government awards research grants to academic researchers, and the perverse incentives that creates for public and non-profit research universities in how they hire and promote.
Corporate researchers are generally
Re:Change the funding cycles (Score:5, Insightful)
Are you serious?
"Then you fund graduate students, who in my experience tend to rush their work at the end and don't produce research anywhere close to the value of what they are paid."
Grad students are paid barely above minimum wage, if that. They actually aren't expected to produce *any* research output, and anything they get out of their project is regarded as a bonus. Remember, a PhD is a *training* exercise and students are *learning* how to become scientists, no matter how "good" they may seem. This doesn't stop many grad students from being exploited. You'd be hard-pressed to find a smarter, more "capable" (I put that in scare quotes since some grads can't even tie their shoes) group of people being treated like dirt and generally undervalued. They only tolerate it because they're clueless or they just want to tough it out, get their qualification and move on. For yourself, if you are running your research group on the output of grad students (and yes, I know many are) then you're bound to be sunk sooner or later. Remember: pay peanuts, get monkeys!!
It's a strange claim to make, since hardly anyone in science is overpaid. The discrepancies become apparent once you scale income against level of responsibility, perhaps crudely converted to dollar terms based on the equipment they are using/responsible for. It's not uncommon to find a post-doc managing $2-5 million worth of equipment while being paid $40-60 per year. In the private sector such a management policy would be viewed as fascicle at best and negligent at worst.
I do agree with you entirely on one point: the administrative overheads charged against grants are disgustingly inflated by parasitic policies.
Re: (Score:2)
*farcical
Re: (Score:2)
*$40-60k
Yeah, you're right, that's the second correction to my post. There are probably many others to be made as well...
What would a McDonalds store manager make these days?
Re: (Score:2)
Here's one answer [payscale.com]. (Looks like it tops out at around 55K/year.)
Re: (Score:3)
Grad students are paid barely above minimum wage, if that. They actually aren't expected to produce *any* research output, and anything they get out of their project is regarded as a bonus.
I don't know what field you're coming from, but that's not the case in neuroscience. Anyone coming out of a PhD in this field with no publications isn't going to be happy with their performance and it will likely count against them in looking for a good Post Doc.
Re:Change the funding cycles (Score:5, Informative)
You're not quite on the mark there. PhD students are indeed learning how to become research scientists, and the way they practice and prove they have learned is by doing original research. A thesis has to have original research in it or it's not a thesis. In almost all cases that is published somewhere peer reviewed as well.
Re:Change the funding cycles (Score:5, Informative)
There are also "taught" graduate degrees opposed to research degrees.
Yes, but you're talking about graduate degrees, not PhDs. A PhD is a research course. Some have a taught component, sometimes even a whole year, but that's to bring the student up to speed, and so is simply pass/fail with no further effect after a pass. I've examined/viva'd a couple of PhDs, and original research was a major part of the criteria for examination.
Re: (Score:2)
Disagree. Sort of.
A couple points.
First academic science isn't alone. IT folks have managed and are responsible for systems/databases/applications worth in the tens of millions for similar pay. Probably both in public and private sector.
Second, much of this has as much to do with the glut of PhDs, those hoping to go into academia, and the actual amount of that kind of work available. Much of the glut is probably due to a number of trends, everyone going to university for one (and not all University studie
Re: (Score:1)
Nein. Consider how many scales are "validated" by Psychology post-docs, either with PCA that they call factor analysis or confirmatory factor analysis. In both cases the measurement structures are wrong (non-continuous data --> a Hessian that's overly inflated, and therefore presumed to be more informative than reality holds): PCA only works for formative measurement structures (i.e., the items cause the latent structure, like the Hollingshead SES scale), whereas nearly all social science measurement ass
Re: But not climate change research (Score:4, Funny)
If your paper confirms that GMOs are as safe as mother's milk, you are also more likely to get funding. Also, if your study shows that vaccines are safe, you are more likely to get funding.
Are those examples of confirmation bias too?
Re: (Score:3)
If your paper confirms climate change, you are more likely to get funding.
If your paper disproves it you get a Nobel prize.
Re: (Score:2)
</thread>
(Already posted in this discussion else I'd mod you up.)
Re: But not climate change research (Score:5, Informative)
The "massive consensus" has been going down every year; more and more scientists are pulling out of the consensus. You will rarely hear about that because politicians and news organizations make a lot of money in making people think it is real.
Citation please?
All of the climate change data sets are made by computer models which always get out the results desired, and the desired result is confirming climate change, because if it does not, their funding is cut. So politicians, news organizations AND scientists benefit from lying, the ones that disprove it are shouted down. And the results? Billions of tax payer money (all of it that our children will have to pay) get sent over to other countries.
You have it backwards. Models are constructed from data, not the other way around. To paraphrase plasma physicist Kenneth Birdsall, the purpose of models is to generate insight, not data.
36%? Yea, there is a reason why I don't believe in any science study unless it makes sense.
Strawman, and a sloppy one at that. The 36% in TFS refers to reproducibility of psychological studies, not climate studies.
Re: But not climate change research (Score:2)
The fact that a computer program produces the same results when executed again may be science, but it's computer science, not climate science.
The reproducibility of a computer program proves jack shit about the environment.
Re: (Score:1)
Re: (Score:2)
How the hell do you reproduce a climate study anyway? Where are the controls?
John Christy has been doing that for over 30 years. For example, he wanted to know if the temperature record was accurate or not. So he developed a secondary way to measure temperature (with satellites). That is one example for you to get the feel, he's done the same thing in other areas of climate science.
Re: (Score:2)
Not Kenneth ... In the literature, he was typically listed as CK Birdsall ... for Charles Kennedy ... went by Ned.
(Quick correction from a former phd student of his.)
Oops. Thanks for the correction. I have his plasma physics simulation book, but obviously haven't looked at it recently. :-|
Re: (Score:2)
Other exceptions to this rule are studies that show GMOs are safe. Those scientists are impeccable, their studies well-designed and their research should never be questioned. Climate scientists, however, are the bunk.
Re: (Score:2)
I see the sarcasm, but I think you're playing with fire doing it like that.
(I'll STFU now and continue to enjoy your posts quietly. Cheers.)
Re: (Score:3)
Please be self-destructive _without_ dragging the rest of the human race into it. While there surely is some bogus climate-change research, the whole field is not broken and the whole field has a consensus that it is going to be at least pretty bad and may well get catastrophic.
Re: (Score:2)
There is no such consensus.
Please prove your assertion that " has a consensus that it is going to be at least pretty bad and may well get catastrophic".
You see, you lie or repeat lies without even knowing it, which makes it much worse, because that's called ignorance.
It's people who argue/debate like you do, without knowledge, who are the most dangerous.
Re: (Score:2)
There is such a consensus and it is a strong one. It may need an actual scientist to see it though. You obviously do not qualify.
Re: (Score:2)
So you are basically saying you have nothing to prove your assertion.
Because I do not believe your blind assertion, I do not qualify?
The supposed consensus is about the earth warming and it "possibly" being caused by humans. And even that consensus is shaky at best.
There is NO consensus at all about your assertion of things being really bad or catastrophic, except in the media and young brainwashed eco greenies.
So again, show me the consensus that proves your assertion. Your word does not count.
Re: (Score:1)
oh well done (Score:2)
This formalizes what is already known (Score:2, Insightful)
This original study is here [royalsocie...ishing.org].
The study presents an accurate description of how research is funded in the US (biomedical sciences in particular). I can't speak in detail about other countries, but the major issues seem to be the same in other developed nations.
The problem is how do you decide which study to fund. You have 100 scientist asking for money but you can fund only 10 of them. So you must come with some criteria that will allow you to decide which studies are worth pursuing and of these which ones ha
MOD PARENT UP (Score:1)
The original research describes the problem in the area of psychology, but the problems are the same there that they are in the bio-sciences. There is a positive glut of bio- and psych- science applicants, which is the real cause. Competition is incredibly fierce, which is what you are seeing in the above problems. "Produce papers or we'll show you the door" && "Get funding or we'll show you the door" && "We'll pay you nothing anyways" are really the byproducts of high competition.
In the
Peer Review (Score:2)
Clearly, peer review is working as intended in academia.
I'd rather the government fund its own labs more and fund academic less.
Re: (Score:2)
Well, I'll start by ignoring your idiotic glib comment at the beginning.
The government research labs have, or had, a great reputation. Naturally, though, not enough people were getting rich, so the government decided to privatise the easy, potentially lucrative management part without exposing the private entities to the risk that the research might not yield anything useful.
That didn't help of course.
But that aside, apart from taking ill informed digs at academia (government researchers are peers to academic
Re: (Score:2)
Ill informed? I work in academia, thank you very much. You don't have to take my word on it. Slashdot has had many stories on the horrendous failings of peer review in the past couple of years.
Re: (Score:2)
Ill informed?
Then what would you call it? Firstly you say this:
Clearly, peer review is working as intended in academia.
Now you say this:
Slashdot has had many stories on the horrendous failings of peer review in the past couple of years.
So which is it? Peer review working as intended or a failing of peer review?
I work in academia, thank you very much.
So, as an academic, you intend peer review to reward bad research? I'm not currently in academia, though I was for a while. I can't recall ever meeting anyone
Also happens in CS research (Score:5, Insightful)
I have seen quite a bit of it and know of several CS PhDs that are based on bogus results. The tragedy is that people doing their research properly will take significantly longer and have much diminished chances at an academic career. And this effect propagates: first PhD students advance on bogus results, then they become professors on the strength of that fraud, and finally the whole research field is broken.
Re: (Score:2)
Donald Knuth later said, "That these wouldn't have passed if I had been around."
Re: (Score:2)
Nice one.
Scientific Research (Score:1)
Indeed. But I think there is another angle to this that, while in some ways less obviously sloppy, is in some ways worse.
The need to publish at a furious pace might not always result in cutting corners - indeed, a lab that has done the same thing for the last 10 years might well have refined its techniques to a high level. But in order to do this, they need to remain within rigid boundaries, always using essentially the same methods. Every paper becomes a minor variation on the same old theme. It's the on
CS/machine learning perspective (Score:1)
There are a few well-known issues: not publishing negative results, overinflating the importance of your new method, and a drive to publish like crazy that discourages collaboration to some extent.
In the first issue, most well-known conferences and journals have so many positive results to choose from (partly because of the second problem), that they just don't care about the negative results. Negative results also don't draw a lot of attention in CS, as they'd pertain to someone trying their own random idea. Som