'Pay Researchers To Spot Errors in Published Papers'
Borrowing the idea of "bug bounties" from the technology industry could provide a systematic way to detect and correct the errors that litter the scientific literature. Malte Elson, writing at Nature: Just as many industries devote hefty funding to incentivizing people to find and report bugs and glitches, so the science community should reward the detection and correction of errors in the scientific literature. In our industry, too, the costs of undetected errors are staggering. That's why I have joined with meta-scientist Ian Hussey at the University of Bern and psychologist Ruben Arslan at Leipzig University in Germany to pilot a bug-bounty programme for science, funded by the University of Bern. Our project, Estimating the Reliability and Robustness of Research (ERROR), pays specialists to check highly cited published papers, starting with the social and behavioural sciences (see go.nature.com/4bmlvkj). Our reviewers are paid a base rate of up to 1,000 Swiss francs (around US$1,100) for each paper they check, and a bonus for any errors they find. The bigger the error, the greater the reward -- up to a maximum of 2,500 francs.
Authors who let us scrutinize their papers are compensated, too: 250 francs to cover the work needed to prepare files or answer reviewer queries, and a bonus of 250 francs if no errors (or only minor ones) are found in their work. ERROR launched in February and will run for at least four years. So far, we have sent out almost 60 invitations, and 13 sets of authors have agreed to have their papers assessed. One review has been completed, revealing minor errors. I hope that the project will demonstrate the value of systematic processes to detect errors in published research. I am convinced that such systems are needed, because current checks are insufficient. Unpaid peer reviewers are overburdened, and have little incentive to painstakingly examine survey responses, comb through lists of DNA sequences or cell lines, or go through computer code line by line. Mistakes frequently slip through. And researchers have little to gain personally from sifting through published papers looking for errors. There is no financial compensation for highlighting errors, and doing so can see people marked out as troublemakers.
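The fee structure described in the summary is simple enough to model. Below is a minimal sketch in Python; the franc amounts come from the article, while the function names and the reading of the 2,500-franc maximum as a cap on the error bonus are assumptions for illustration, not the ERROR project's actual rules.

```python
# Hypothetical model of the ERROR payment scheme described above.
# Franc amounts are from the article; function names and the bonus-cap
# interpretation are illustrative assumptions.

REVIEWER_BASE_FEE = 1_000   # up to 1,000 Swiss francs per paper checked
MAX_ERROR_BONUS = 2_500     # "the bigger the error, the greater the reward"
AUTHOR_PREP_FEE = 250       # for preparing files and answering reviewer queries
AUTHOR_CLEAN_BONUS = 250    # if no errors (or only minor ones) are found

def reviewer_payout(base_fee: int, error_bonus: int) -> int:
    """Base rate plus an error bonus, each capped at its maximum."""
    return min(base_fee, REVIEWER_BASE_FEE) + min(error_bonus, MAX_ERROR_BONUS)

def author_payout(only_minor_errors_found: bool) -> int:
    """Preparation fee, plus a bonus when the paper largely holds up."""
    return AUTHOR_PREP_FEE + (AUTHOR_CLEAN_BONUS if only_minor_errors_found else 0)

print(reviewer_payout(1_000, 3_000))  # 3500: the bonus is clipped to the cap
print(author_payout(True))            # 500: a clean paper earns both fees
```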
There’s absolutely no way that (Score:3)
Increase the profit motive... (Score:2)
...that'll ensure scientific integrity!
It's hysterical that this guy thinks that bug bounties are what provide technical quality.
Troublemakers? (Score:5, Insightful)
'There is no financial compensation for highlighting errors, and doing so can see people marked out as troublemakers.'
We've got to a sad situation if this is the case. The core point of research is to find out what is true, and the person who spots a mistake in a scientific paper should be praised, not rejected.
Perhaps a solution lies in penalising the institutions of authors whose papers prove seriously flawed. This might incentivise faculties to encourage their PhD students to review proposed articles before publication, and to reward them for spotting mistakes, since each error caught would spare the institution a fine for submitting naff data.
Re:Troublemakers? (Score:4, Insightful)
'There is no financial compensation for highlighting errors, and doing so can see people marked out as troublemakers.'
It's been a while since I did serious research. Even back then, in any particular niche (and all research is in some niche or other), there were only a few well-known researchers. I was fortunate to be in one of the top labs in my area. We ignored the great mass of irrelevant stuff and concentrated on the results coming out of the other top labs. You tried to get them to reference your work, and you certainly referenced theirs.
In my particular niche, one of those important labs published great-looking papers. They weren't quite believable, but no one looked too closely. Certainly I, as a young researcher, was not going to go out on a limb and criticize their results. And my supervisor? Why stir up trouble? Just concentrate on our work and publish our results. As it happened, my supervisor found me my first post-doc, and it was in exactly that lab. (And, yes, their work really was crap.)
So, yeah: highlighting errors of important researchers doesn't happen. And for the 90% (or 95% or 99%) of papers that are just publish-or-perish crap? Who is going to bother even reading them, much less examining their results?
Fascinating - and very depressing (Score:2)
It's always interesting to hear from someone with real experience; thank you for your informative comments. Is there a solution? Clearly one element could be far more public praise for those who do take down deceptive reports; the emergence of a 'fact-checking' industry within science, offering a career path for the curmudgeonly, might help. Otherwise, it's going to be hard to break what is an arms race of producing ever more 'published papers'. Arms races tend not to end well...
Re: Fascinating - and very depressing (Score:2)
Honestly, I don't know of a solution. It would help somewhat to get rid of low-quality programs producing so much publish-or-perish trash.
For the actually good labs, ultimately they do self-regulate. In our lab, we pretty much knew which papers and authors to pay attention to and which to ignore.
Re: (Score:2)
A lot of published work is crap. Reviewers don't care, except reviewer #3, who hates your work and has done it better in their own works (cited).
Re: (Score:3)
'There is no financial compensation for highlighting errors, and doing so can see people marked out as troublemakers.'
We've got to a sad situation if this is the case. The core point of research is to find out what is true, and the person who spots a mistake in a scientific paper should be praised, not rejected.
Perhaps we wouldn’t have such a large demand for correcting errors if there wasn’t a corrupt amount of motivation to simply shit out “research” by the metric fuckton in order to gain funding by any half-truth-means necessary. Greed has unfortunately become a “core” point of research.
Perhaps a solution lies in penalising the institutions of authors whose papers prove seriously flawed. This might incentivise faculties to encourage their PhD students to review proposed articles before publication, and to reward them for spotting mistakes, since each error caught would spare the institution a fine for submitting naff data.
I find it both ironic and sad that a profession centered on the concept of the Hippocratic Oath needs to be reminded of the importance of teaching integrity and ethics.
If they’re
Re: (Score:2)
> The core point of research is to find out what is true
No it's not. The core of it now is to get grants from agencies, pass along overhead, juice the H-index, and ensure job security.
You describe an ideal world, not Big Science.
The journals have been shown to be complicit in corruption, especially during the lockdowns.
We need distributed p2p blockchain science publication, with delayed attribution, probably using zksnarks to underpin it.
Reputation should come from merit - right now the merit is largely
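For what it's worth, the delayed-attribution piece of that proposal doesn't strictly need zk-SNARKs: a plain hash commitment gets you most of the way. A minimal commit-reveal sketch in Python, with all names hypothetical rather than any existing publication system's API:

```python
import hashlib
import os

def commit(paper_text: str, author_identity: str) -> tuple[str, bytes]:
    """Publish the digest alongside the anonymous paper; keep the salt secret."""
    salt = os.urandom(16)
    digest = hashlib.sha256(
        salt + paper_text.encode() + author_identity.encode()
    ).hexdigest()
    return digest, salt

def verify(digest: str, salt: bytes, paper_text: str, author_identity: str) -> bool:
    """After the reveal, anyone can check the claimed author against the commitment."""
    expected = hashlib.sha256(
        salt + paper_text.encode() + author_identity.encode()
    ).hexdigest()
    return digest == expected

# Usage: publish the digest now; reveal (salt, author) once review is done.
d, s = commit("Our groundbreaking result...", "A. Researcher")
assert verify(d, s, "Our groundbreaking result...", "A. Researcher")
```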
Ouch (Score:2)
I wish I could disagree. Sadly I suspect you're spot on.
Re: (Score:1)
Perhaps a solution lies in penalising the institutions of authors whose papers prove seriously flawed. This might incentivise faculties to encourage their PhD students to review proposed articles before publication, and to reward them for spotting mistakes, since each error caught would spare the institution a fine for submitting naff data.
That's a brilliant idea. It's the institutions that mandate "publish or perish", and yet the responsibility for reviews, and any fallout from fabrication, falls entirely on the publishers. Institutions therefore have little concern for how their employees scam the system by publishing garbage. By scoring the institutions that employ the authors on the integrity, as well as the quantity, of their scholarly output, responsibility for quality becomes shared by all involved. It becomes an ecosystem in which ev
You got modded down (Score:2)
I wonder why... ;)
Bug bounties (Score:3)
Re: (Score:2)
This is a test. This is only a test. We are giving away money to see if people will take it. This are serious research.
They will lose their shirt (Score:4, Informative)
There is no pot of bug bounty money large enough to solve this problem.
So sad (Score:2)
pays specialists to check highly cited published papers, starting with the social and behavioural sciences
That's so sad; I wish they'd had a chance to review scientific papers before going bankrupt.
I would rather see reproducibility (Score:4, Interesting)
One thing to say (Score:2)
"I'm gonna research me a minivan this afternoon!"
Incentivizing Referees (Score:1)