Science

'Pay Researchers To Spot Errors in Published Papers'

Borrowing the idea of "bug bounties" from the technology industry could provide a systematic way to detect and correct the errors that litter the scientific literature. Malte Elson, writing at Nature: Just as many industries devote hefty funding to incentivizing people to find and report bugs and glitches, so the science community should reward the detection and correction of errors in the scientific literature. In our industry, too, the costs of undetected errors are staggering. That's why I have joined with meta-scientist Ian Hussey at the University of Bern and psychologist Ruben Arslan at Leipzig University in Germany to pilot a bug-bounty programme for science, funded by the University of Bern. Our project, Estimating the Reliability and Robustness of Research (ERROR), pays specialists to check highly cited published papers, starting with the social and behavioural sciences (see go.nature.com/4bmlvkj). Our reviewers are paid a base rate of up to 1,000 Swiss francs (around US$1,100) for each paper they check, and a bonus for any errors they find. The bigger the error, the greater the reward -- up to a maximum of 2,500 francs.

Authors who let us scrutinize their papers are compensated, too: 250 francs to cover the work needed to prepare files or answer reviewer queries, and a bonus of 250 francs if no errors (or only minor ones) are found in their work. ERROR launched in February and will run for at least four years. So far, we have sent out almost 60 invitations, and 13 sets of authors have agreed to have their papers assessed. One review has been completed, revealing minor errors. I hope that the project will demonstrate the value of systematic processes to detect errors in published research. I am convinced that such systems are needed, because current checks are insufficient. Unpaid peer reviewers are overburdened, and have little incentive to painstakingly examine survey responses, comb through lists of DNA sequences or cell lines, or go through computer code line by line. Mistakes frequently slip through. And researchers have little to gain personally from sifting through published papers looking for errors. There is no financial compensation for highlighting errors, and doing so can see people marked out as troublemakers.
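For readers who want the incentive scheme spelled out concretely, here is a minimal Python sketch of the payout rules described above. The severity tiers and per-tier bonus amounts are assumptions made purely for illustration; the article states only that bigger errors earn bigger rewards, up to a 2,500-franc maximum.

```python
# Minimal sketch of the ERROR payout rules described above.
# NOTE: the severity tiers and per-tier bonuses are illustrative assumptions;
# the article only says that bigger errors earn bigger rewards.

REVIEWER_BASE_MAX = 1_000    # francs: base rate is "up to" this per paper checked
REVIEWER_REWARD_CAP = 2_500  # francs: stated maximum reward per paper
AUTHOR_PREP_FEE = 250        # francs: for preparing files / answering reviewer queries
AUTHOR_CLEAN_BONUS = 250     # francs: if no (or only minor) errors are found

# Hypothetical severity-to-bonus mapping (not specified in the article).
SEVERITY_BONUS = {"minor": 100, "moderate": 500, "major": 1_500}


def reviewer_payout(base_rate: int, errors_found: list[str]) -> int:
    """Base rate plus severity-scaled bonuses, capped at the stated maximum."""
    bonus = sum(SEVERITY_BONUS[severity] for severity in errors_found)
    return min(base_rate + bonus, REVIEWER_REWARD_CAP)


def author_payout(errors_found: list[str]) -> int:
    """Preparation fee, plus a bonus when nothing worse than minor errors turns up."""
    only_minor = all(severity == "minor" for severity in errors_found)
    return AUTHOR_PREP_FEE + (AUTHOR_CLEAN_BONUS if only_minor else 0)


if __name__ == "__main__":
    found = ["minor", "major"]
    print(reviewer_payout(REVIEWER_BASE_MAX, found))  # 2500 -- hits the cap
    print(author_payout(found))                       # 250 -- a major error, so no bonus
```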

Comments Filter:
  • by hdyoung ( 5182939 ) on Tuesday May 21, 2024 @07:08PM (#64489075)
    Could be gamed. Eyeroll.
  • ...that'll ensure scientific integrity!

    It's hysterical that this guy thinks that bug bounties are what provide technical quality.

  • Troublemakers? (Score:5, Insightful)

    by Bruce66423 ( 1678196 ) on Tuesday May 21, 2024 @07:25PM (#64489093)

    'There is no financial compensation for highlighting errors, and doing so can see people marked out as troublemakers.'

    We've got to a sad situation if this is the case. The core point of research is to find out what is true, and the person who spots a mistake in a scientific paper should be praised, not rejected.

    Perhaps a solution lies in penalising the institution of the authors of a paper that proves seriously flawed. This might incentivise faculties to encourage their PhD students to review proposed articles before publication and to reward them for spotting mistakes, since catching those mistakes early would spare the institution a fine for submitting naff data.

    • Re:Troublemakers? (Score:4, Insightful)

      by bradley13 ( 1118935 ) on Wednesday May 22, 2024 @01:36AM (#64489551) Homepage

      'There is no financial compensation for highlighting errors, and doing so can see people marked out as troublemakers.'

      It's been a while since I did serious research. Even back then, in any particular niche (and all research is in some niche or other), there were only a few well-known researchers. I was fortunate to be in one of the top labs in my area. We ignored the great mass of irrelevant stuff and concentrated on the results coming out of the other top labs. You tried to get them to reference your work, and you certainly referenced theirs.

      In my particular niche, one of those important labs published great looking papers. They weren't quite believable, but no one looked too closely. Certainly I, as a young researcher, was not going to go out on a limb and criticize their results. And my supervisor? Why stir up trouble? Just concentrate on our work and publish our results. As it happened, my supervisor found me my first post-doc, and it was in exactly that lab. (And, yes, their work really was crap).

      So, yeah: highlighting errors of important researchers doesn't happen. And for the 90% (or 95% or 99%) of papers that are just publish-or-perish crap? Who is going to bother even reading them, much less examining their results?

      • It's always interesting to hear from someone with real experience; thank you for your informative comments. Is there a solution? Clearly one element could be far more public praise for those who do take down deceptive reports; the emergence of a 'fact checking' industry within science, offering a career path for the curmudgeonly, might help. Otherwise, it's going to be hard to break what is an arms race of creating ever more 'published papers'. Arms races tend not to end well...

        • Honestly, I don't know of a solution. It would help somewhat to get rid of the low-quality programs producing so much publish-or-perish trash.

          For the actually good labs, ultimately they do self-regulate. In our lab, we pretty much knew which papers and authors to pay attention to and which to ignore.

      • by Dr. Tom ( 23206 )

        A lot of published work is crap. Reviewers don't care, except reviewer #3 who hates your work and has done it better in their own works (cited)

    • The way academic peer review works is that many fields are so specialized that the "anonymous" reviewers (professors, for example) will know exactly whose work they are judging. And sometimes they will pick up the phone to get questions or concerns addressed. So it's a kind of closed ecosystem in some ways, with resulting bias, or sometimes overlooking the "meat" of a paper. I signed up with my areas of expertise. Thanks, Slashdot.
    • 'There is no financial compensation for highlighting errors, and doing so can see people marked out as troublemakers.'

      We've got to a sad situation if this is the case. The core point of research is to find out what is true, and the person who spots a mistake in a scientific paper should be praised, not rejected.

      Perhaps we wouldn’t have such a large demand for correcting errors if there wasn’t a corrupt amount of motivation to simply shit out “research” by the metric fuckton in order to gain funding by any half-truth-means necessary. Greed has unfortunately become a “core” point of research.

      Perhaps a solution lies in penalising the institution of the authors of a paper that proves seriously flawed. This might incentivise faculties to encourage their PhD students to review proposed articles before publication and to reward them for spotting mistakes, since catching those mistakes early would spare the institution a fine for submitting naff data.

      I find it both ironic and sad that a profession centered around the concept of the Hippocratic Oath needs to be reminded of the importance of teaching integrity and ethics.

      If they’re

    • > The core point of research is to find out what is true

      No it's not. The core of it now is to get grants from agencies, pass along overhead, juice the H-index, and ensure job security.

      You describe an ideal world, not Big Science.

      The journals have been shown to be complicit in corruption, especially during the lockdowns.

      We need distributed p2p blockchain science publication, with delayed attribution, probably using zk-SNARKs to underpin it.

      Reputation should come from merit - right now the merit is largely

    • by tstex ( 3917293 )

      Perhaps a solution lies in penalising the institution of the authors of a paper that proves seriously flawed. This might incentivise faculties to encourage their PhD students to review proposed articles before publication and to reward them for spotting mistakes, since catching those mistakes early would spare the institution a fine for submitting naff data.

      That's a brilliant idea. It's the institutions that mandate "publish or perish", and yet the responsibility for reviews, and any fallout from fabrication, falls entirely on the publishers. Institutions therefore have little concern for how their employees scam the system by publishing garbage. By scoring the institutions that employ the authors based on the integrity, as well as the quantity, of their scholarly output, responsibility for quality becomes shared by all involved. It becomes an ecosystem in which ev

  • by timeOday ( 582209 ) on Tuesday May 21, 2024 @07:37PM (#64489111)
    I won't say they don't work, but they've hardly become prevalent in the industry.
    • by Dr. Tom ( 23206 )

      This is a test. This is only a test. We are giving away money to see if people will take it. This are serious research.

  • by laughingskeptic ( 1004414 ) on Tuesday May 21, 2024 @08:31PM (#64489195)
    In fields like social and behavioral sciences, the reproducibility of published papers has been found to be less than 50%. See for instance https://royalsocietypublishing... [royalsocie...ishing.org]

    There is no pot of bug bounty money large enough to solve this problem.
  • pays specialists to check highly cited published papers, starting with the social and behavioural sciences

    That's so sad, I wish they'd had a chance to review scientific papers before going bankrupt.

  • by Skinkie ( 815924 ) on Wednesday May 22, 2024 @02:44AM (#64489641) Homepage
    Recently I read some computer vision papers about animal orientation, and I was very surprised that only the Chinese papers actually offered the option to look at the code and reproduce the results. If a paper makes comparisons with previous algorithms, but everyone has to implement their own reimagining of it, that does not speed up the research. I would say it is more important to actually have the data, code, etc. to check whether the results match what was published than to check for reasoning errors in the first place.
    • What has the world come to? In physics, the norm used to be to define both the experimental setup and your underlying logic precisely, so that your experiment could be replicated elsewhere, if it was an experimental physics paper. If it was a theoretical physics paper, then of course you expounded on your logic and showed your equations, so that others in the field could work through your hypothesis and either agree or disagree and, if disagreeing, show their math and logic behind
  • "I'm gonna research me a minivan this afternoon!"

  • I wonder if requiring the names of the referees to be included on all papers would change their behavior in a good way. Signing off on a paper that was later found to be bogus would detract from the reputations of the referees as well as the authors.
