Peer Review Highly Sensitive To Poor Refereeing 233
$RANDOMLUSER writes "A new study described at Physicsworld.com claims that a small percentage of shoddy or self-interested referees can have a drastic effect on published article quality. The research shows that article quality can drop as much as one standard deviation when just 10% of referees do not behave 'correctly.' At high levels of self-serving or random behavior, 'the peer-review system will not perform much better than by accepting papers by throwing (an unbiased) coin.' The model also includes calculations for 'friendship networks' (nepotism) between authors and reviewers. The original paper, by a pair of complex systems researchers, is available at arXiv.org. No word on when we can expect it to be peer reviewed."
Review content matters (Score:4, Informative)
When you're talking about scientific papers, a "bad apple" reviewer may be able to skew the record in terms of 1-10 scales, but reviewers also do a qualitative write-up of the material. That's really the only important part and if one or two people fall outside the line of general consensus, they'll just be ignored.
Re:Highly political subjects? (Score:3, Informative)
Re:Highly political subjects? (Score:5, Informative)
It is done as you've guessed, but it's still often obvious who the author is. Don't forget that sometimes a bad review has nothing to do with knowing who the author is. If you come across a paper that's done almost exactly the same work as you have done, or criticises your work, you could choose to give it a false bad review to try to prevent it from being published. I've seen papers that have received three reviews, two that say it's good, and one that says it's nowhere near worthy of being published. You often question the outliers.
Re:Highly political subjects? (Score:3, Informative)
Disclaimer: I both write and review journal articles in a few fairly narrow Computer Science sub-disciplines.
Bit of an arbitrary model (Score:4, Informative)
Each reviewer produces a binary recommendation within the same timestep: 'accept' or 'reject'. If a paper gets two 'accept' votes it is accepted; if it gets two 'reject' votes it is rejected; if there is a tie (one 'accept' and one 'reject') it gets accepted with a probability of 0.5.
If a single 'bad' reviewer (i.e. one that gives the 'wrong' answer, as determined by the 'correct' method of reviewing used as a control in the paper) can give a paper a 50:50 chance of acceptance or rejection, it doesn't seem too surprising to me that a relatively small number of them could make the process '[not] much better than by accepting papers by throwing (an unbiased) coin' - because in their model, whenever the reviewers disagree, that's exactly what happens!
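The tie-breaking rule is easy to simulate. Here is a minimal Monte Carlo sketch of the two-referee decision rule described above - not the authors' code; the uniform quality scale, the 0.5 acceptance threshold, and modelling a 'bad' referee as a pure coin-flipper are my own assumptions - showing how decision accuracy degrades as the fraction of random referees grows:

```python
import random

def review(paper_quality, bad_fraction, rng):
    """One referee's verdict. A 'good' referee accepts iff true quality
    exceeds the 0.5 threshold; a 'bad' referee answers at random
    (an assumed stand-in for the paper's misbehaving referees)."""
    if rng.random() < bad_fraction:
        return rng.random() < 0.5          # random referee
    return paper_quality > 0.5             # correct referee

def decide(paper_quality, bad_fraction, rng):
    """Two referees; a 1-1 tie is broken by a fair coin, as in the model."""
    votes = sum(review(paper_quality, bad_fraction, rng) for _ in range(2))
    if votes == 2:
        return True                        # unanimous accept
    if votes == 0:
        return False                       # unanimous reject
    return rng.random() < 0.5              # tie: coin flip

def accuracy(bad_fraction, trials=100_000, seed=1):
    """Fraction of papers whose fate matches the 'correct' decision."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        q = rng.random()                   # true quality, uniform on [0, 1)
        should_accept = q > 0.5
        if decide(q, bad_fraction, rng) == should_accept:
            correct += 1
    return correct / trials

for f in (0.0, 0.1, 0.5, 1.0):
    print(f"bad fraction {f:.1f}: decision accuracy {accuracy(f):.3f}")
```

With no bad referees the rule is perfectly accurate under these assumptions; as the bad fraction approaches 1, accuracy converges to 0.5 - exactly the unbiased coin the paper describes.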
Re:Climategate for example (Score:5, Informative)
Broken record time, but yes. Such subversion of the peer review process did show up. The culprits weren't the ones you expect. [csicop.org]
In general however, I think that this study is rather pessimistic. And anyway, it hasn't been peer reviewed, so who knows... ;-)
(yes, I did read TFA, but not the paper)
Re:Highly political subjects? (Score:3, Informative)
Re:The Social Text Affair (Score:5, Informative)
Social Text is emphatically not a prominent journal, nor was it peer-reviewed at the time of the affair.
http://en.wikipedia.org/wiki/Sokal_affair [wikipedia.org]
While the Sokal affair is interesting, it has nothing to do with the matter at hand.
Re:change the system (Score:3, Informative)
And it would also help readers understand the article; a good referee report is quite illuminating. However, this has already been tried out by Nature in 2006
http://www.nature.com/nature/peerreview/ [nature.com]
and didn't work so well. Apparently, scientists are somewhat reluctant to openly criticise each other's work. But there's PLoS ONE that is alive and well, giving us some hope.
Michael Nielsen has a fine essay about this in his blog:
http://michaelnielsen.org/blog/the-future-of-science-2/ [michaelnielsen.org]
Re:Highly political subjects? (Score:3, Informative)
Whether the authors are revealed to the reviewers or not varies from journal to journal. In all of the handful of reviews that I've done, the author information was presented to all of the reviewers; I've not reviewed for really big-name journals though (at least not yet). The reviewers' identities are not made known to the authors, though. It is often, however, rather easy to identify the reviewers, because my field is not that large, and personalities can shine right through unedited writing like reviewers' comments. Similarly, even if the authors were anonymized, it's normally pretty easy to identify the laboratory the work came from based on the references cited: most labs build on their own previous work, and so cite their own papers more than others'.
Re:Highly political subjects? (Score:3, Informative)
Of course it is. But that doesn't mean that peer review is worthless.
Remember, a large enough (I'd say a majority, but I haven't actually done the numbers to claim that) number of people who get into science are doing it because they care passionately about their field. Eventually, the best of the breed floats to the top, and is distilled to give us things like PageRank and better safety in automobiles (see, I worked in a car analogy too!)...
Re:Just like the Slashdot moderation system (Score:3, Informative)
A few things:
1. Where is this money to hire 10,000 technical writers going to come from when, excluding tenured faculty, all academic scientists are threatened with losing their jobs due to lack of funds at least one year in three?
2. What exactly is a technical writer going to be doing? They will never have the necessary background to write a review article (seriously--a review article can have anywhere from 150 to well over 300 referenced papers, selected from an even larger pool). A research paper is the description of what 1-30 people have spent the last couple of years doing, plus a small literature review, plus discussion and future directions. Is the technical writer going to go through a few to a few dozen notebooks from all of these people and assemble it, despite not knowing what's going on and somehow predict where it's going to go? Or are they there to check over a rough draft and polish it? If they're the polisher, how much time do I have to spend getting them to understand the terminology, which can be and often is extremely precise? How much time do I have to spend going over the paper, which will have my name on it and not theirs, and which could (especially in the case of a very poorly written or wrong paper) have a huge impact on my career, checking to make sure that the wording is correct? Nuance can be critical in a scientific paper, especially if you're attacking somebody else's results. Communities are small and egos can be large.
3. Your "technical writers write it and stick it on Google model" sounds like science reporting to me. I've had research described by the local paper. Thankfully the reporter was extremely diligent and emailed us what was about to be final copy. The boss was livid with the changes introduced by the reporter's editor to make the work "punchier." Had it gone out as it was it would have been a major embarrassment to the lab but we managed to do triage on it. You know all those stories that get written up on
4. Google-based moderation will increase the incidence of posturing. Ever seen a website pushed to the top of the search heap by artificial means?
5. Lots of scientists read slashdot. We're well aware of the crapfest that is the slashdot moderation system. See any post having anything to do with global warming or evolution. Hell see any post having anything to do with biology and there's some asshole tagging it "whatcouldpossiblygowrong" and at least a dozen highly modded comments demanding that the scientist should be prevented from playing God...when it's poking about in a few systems in a highly benign fashion.
6. There are barriers to publication. Some of them are a good thing. Any ignorant crank can spew crap online, as is their right. A paper in a peer-reviewed journal ideally, and in fact normally, means more. It means that it has been read by peers of the authors who should, and usually do, spot outright bullshit. It means that it has gone through a process of criticism--not necessarily 100% constructive, but papers are usually made a little bit better; think add/remove/modify figure X or do this one extra experiment. Is peer review perfect? Hell no! Anyone who's published more than two papers has gotten back comments that are useless and we're all familiar with the occasional 100% bull
Re:You mean whine when a POS paper is printed (Score:3, Informative)
Average temperature between two different (widely separated) points might be meaningless, but the average of a continuous measurement is definitely significant. Even spatially, average temperature has a physical meaning. For example, the average surface temperature of the Sun is about 5,800 K, though if you measure at various points, you may get more or less than that.
In any case, that's only one of the many "interesting" ideas in that paper...
Re:This is not news to scientists (Score:3, Informative)
Yep, old news.
http://www.genomeweb.com/peer-review-broken [genomeweb.com]
http://www.slate.com/id/2116244/ [slate.com]
All it takes is one bad reviewer who doesn't know what he's talking about, or who only skimmed the paper, to get it rejected.
I call shenanigans on this one (Score:5, Informative)
I have several publications that were significantly improved through the peer review process. When I review papers my goal is not to shoot down the work; rather, I try to find ways to improve it. Of course there are 'bad' reviewers, who think that reviewing a paper means shredding it to pieces. These are actually easy to spot, because they rarely suggest anything useful, and they are often ignored by the journal editors. Speaking of which, journal editors are yet another part of the peer review process that is missing from their model.
Re:The climate skeptics will have a field day (Score:4, Informative)
Speaking as someone who's at least read up on this stuff (not an expert, but definitely an informed layman), such large-scale adoption of nuclear power comes with its own problems. For one, building such plants is going to be extremely costly, and probably can't be done in time to make a useful difference.
You talk about reprocessing, but even after that, you eventually end up with some radioactive waste products, to say nothing about radiation leakage into the environment.
Finally, I think it's just yet another "all our eggs in one (radioactive) basket" solution. I'd rather have a wide range of options, from renewables like wind, solar or geothermal, to, yes, nuclear power where that's appropriate.
It's difficult to comprehend why a place with ample local generation capability (say, solar power in the Thar desert in India) should go with an expensive nuclear power plant, when the alternative is cheaper and makes more efficient use of resources readily available on site (as opposed to resources mined from the ground a few thousand kilometers away on another continent), as the "nuclear only" types keep proposing.
Re:The climate skeptics will have a field day (Score:4, Informative)
Re:The climate skeptics will have a field day (Score:4, Informative)
As usual, the strong caveat at the end of the article goes unnoticed:
But Tim Smith, senior publisher for New Journal of Physics at IOP Publishing, which also publishes physicsworld.com, feels that the study overlooks the role of journal editors. "Peer review is certainly not flawless and alternatives to the current process will continue to be proposed. In relation to this study however, one shouldn't ignore the role played by journal editors and Boards in accounting for potential conflicts of interest, and preserving the integrity of the referee selection and decision-making processes."
IRL the reviewers are not chosen at random, which burns the straw men built by the summary, most of the article, and the skeptics.
Re:The climate skeptics will have a field day (Score:3, Informative)
Re:just like /.? (Score:3, Informative)
Note also that ideas don't tend to be repeatedly suppressed unless they are truly out-there radical. I sometimes see posts promoting things that to me smell of the crank-shafting kookery that is regularly debunked as crap, and yet it doesn't get downmodded. Why? Probably because the opinion-modders simply couldn't be bothered then. I also tend to see posters loudly complaining that their opinions are being systematically downmodded, when it's really their arseholyness that is being systematically downmodded, with the dissenting opinion being the final straw.
Sure, that is a kind of suppression of opinion, but polite, clear and coherent posts should also be promoted (that's how I mod).