Science

Peer Review Highly Sensitive To Poor Refereeing 233

$RANDOMLUSER writes "A new study described at Physicsworld.com claims that a small percentage of shoddy or self-interested referees can have a drastic effect on published article quality. The research shows that article quality can drop as much as one standard deviation when just 10% of referees do not behave 'correctly.' At high levels of self-serving or random behavior, 'the peer-review system will not perform much better than by accepting papers by throwing (an unbiased) coin.' The model also includes calculations for 'friendship networks' (nepotism) between authors and reviewers. The original paper, by a pair of complex systems researchers, is available at arXiv.org. No word on when we can expect it to be peer reviewed."
This discussion has been archived. No new comments can be posted.

  • by lymond01 ( 314120 ) on Friday September 17, 2010 @11:10AM (#33611282)

    When you're talking about scientific papers, a "bad apple" reviewer may be able to skew the record on a 1-10 scale, but reviewers also do a qualitative write-up of the material. That's really the only important part, and if one or two people fall outside the general consensus, they'll simply be ignored.

  • by ElektronSpinRezonans ( 1397787 ) on Friday September 17, 2010 @11:17AM (#33611344)
    A referee's rejection can be overruled by the editor. It's the editor's job to choose referees who will understand the research, and to make sure they are fair.
  • by prefect42 ( 141309 ) on Friday September 17, 2010 @11:20AM (#33611380)

    It is done as you've guessed, but it's still often obvious who the author is. Don't forget that sometimes a bad review has nothing to do with knowing who the author is. If you come across a paper that's done almost exactly the same work as you have done, or criticises your work, you could choose to give it a false bad review to try to prevent it from being published. I've seen papers that have received three reviews, two that say it's good, and one that says it's nowhere near worthy of being published. You often question the outliers.

  • by Shrike82 ( 1471633 ) on Friday September 17, 2010 @11:21AM (#33611388)
    If you're a reasonably active researcher in a specific discipline (even more so if you work in a small sub-field) then you'll likely get to know your peers when you meet them at conferences and when you collaborate with other groups in projects. These same people will be the first to be asked to review a paper in their (and your) field, and will either recognise your work or simply see your name at the top. Now if they have no specific involvement in the work then ethically they're not in the wrong for reviewing it. They of course must be unbiased, but that's a subjective term in the world of paper reviewing.

    Disclaimer: I both write and review journal articles in a few fairly narrow Computer Science sub-disciplines.
  • by gnutrino ( 1720394 ) on Friday September 17, 2010 @11:40AM (#33611590)
    First off, in case anyone is in doubt: this study uses a model of peer review; no experiment or observation of an actual peer review process was done. That's not to say interesting and enlightening things can't come from modelling, but in this case the model they use seems very questionable and highly arbitrary. This part in particular is highly dubious:

    Each reviewer produces a binary recommendation within the same timestep: ’accept’ or ’reject’. If a paper gets 2 ’accept’ it is accepted, if it gets 2 ’reject’, it is rejected, if there is a tie (1 ’accept’ and 1 ’reject’) it gets accepted with a probability of 0.5.

    If a single 'bad' reviewer (i.e. one that gives the 'wrong' answer, as determined by the 'correct' method of reviewing used as a control in the paper) can give a paper a 50:50 chance of acceptance or rejection, it doesn't seem too surprising to me that a relatively small number of them could make the process '[not] much better than by accepting papers by throwing (an unbiased) coin', because in their model, in the case of a reviewer disagreement, that's exactly what is happening!
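    The parent's point is easy to check numerically. Below is a toy Monte Carlo sketch of the two-referee rule exactly as quoted (it is not the authors' code; the 50/50 paper quality and the "correct referee accepts good papers" rule are simplifying assumptions for illustration):

```python
import random

def review(paper_good, p_random, rng):
    """One referee: a 'correct' referee accepts exactly the good papers;
    a 'random' referee (drawn with probability p_random) flips a fair coin."""
    if rng.random() < p_random:
        return rng.random() < 0.5
    return paper_good

def decide(paper_good, p_random, rng):
    """Two referees; a 1-1 tie is resolved by a fair coin, as in the quoted model."""
    votes = sum(review(paper_good, p_random, rng) for _ in range(2))
    if votes == 2:
        return True   # two 'accept' votes: accepted
    if votes == 0:
        return False  # two 'reject' votes: rejected
    return rng.random() < 0.5  # tie: coin toss

def error_rate(p_random, trials=100_000, seed=1):
    """Fraction of papers where the decision disagrees with the paper's true quality."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        good = rng.random() < 0.5  # assume half of submissions are 'good'
        if decide(good, p_random, rng) != good:
            wrong += 1
    return wrong / trials
```

    With no random referees the decision is always right; with all-random referees the process is literally a coin toss; and even 10% random referees already produces a noticeable error rate, which is the mechanism behind the headline result.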

  • by oiron ( 697563 ) on Friday September 17, 2010 @11:43AM (#33611618) Homepage

    Broken record time, but yes. Such subversion of the peer review process has shown up before, and the culprits weren't the ones you'd expect. [csicop.org]

    In general, however, I think this study is rather pessimistic. And anyway, it hasn't been peer reviewed, so who knows... ;-)

    (Yes, I did read TFA, but not the paper.)

  • by TheRaven64 ( 641858 ) on Friday September 17, 2010 @11:52AM (#33611704) Journal
    It's often difficult to hide it. Someone qualified as a referee has to be familiar with the state of the art in a subject, and when it comes to journals the field is usually a very specialised subset of a broader field. The people qualified as a referee will generally recognise the work of their colleagues, and also of their competitors. People can also communicate out-of-band. If you say to the top dozen people in your field 'I submitted a paper about X to this journal / conference this year' then the odds are that one or two of them will be reviewers.
  • by iris-n ( 1276146 ) on Friday September 17, 2010 @12:01PM (#33611810)

    Social Text was emphatically not prominent, nor was it peer-reviewed at the time of the affair.

    http://en.wikipedia.org/wiki/Sokal_affair [wikipedia.org]

    While the Sokal affair is interesting, it has nothing to do with the matter at hand.

  • Re:change the system (Score:3, Informative)

    by iris-n ( 1276146 ) on Friday September 17, 2010 @12:07PM (#33611884)

    It would also help readers understand the article; a good referee report is quite illuminating. However, this has already been tried by Nature, in 2006:

    http://www.nature.com/nature/peerreview/ [nature.com]

    and it didn't work so well. Apparently, scientists are somewhat reluctant to openly criticise each other's work. But PLoS ONE is alive and well, which gives us some hope.

    Michael Nielsen has a fine essay about this in his blog:

    http://michaelnielsen.org/blog/the-future-of-science-2/ [michaelnielsen.org]

  • by pz ( 113803 ) on Friday September 17, 2010 @12:43PM (#33612268) Journal

    It is done as you've guessed, but it's still often obvious who the author is. Don't forget that sometimes a bad review has nothing to do with knowing who the author is. If you come across a paper that's done almost exactly the same work as you have done, or criticises your work, you could choose to give it a false bad review to try to prevent it from being published. I've seen papers that have received three reviews, two that say it's good, and one that says it's nowhere near worthy of being published. You often question the outliers.

    Whether the authors are revealed to the reviewers varies from journal to journal. All of the large handful of reviews that I've done had the author information presented to all of the reviewers; I've not reviewed for the really big-name journals, though (at least not yet). The reviewers' identities are not made known to the authors. Even so, it is often rather easy to identify the reviewers, because my field is not that large, and personalities can shine right through unedited writing like reviewers' comments. Similarly, even if the authors were anonymised, it's normally pretty easy to identify the laboratory the work came from based on the references cited: most labs build on their own previous work, and so cite their own papers more than others'.

  • by oiron ( 697563 ) on Friday September 17, 2010 @01:37PM (#33612960) Homepage

    Of course it is. But that doesn't mean that peer review is worthless.

    Remember, a large enough (I'd say a majority, but I haven't actually done the numbers to claim that) number of people who get into science are doing it because they care passionately about their field. Eventually, the best of the breed floats to the top, and is distilled to give us things like PageRank and better safety in automobiles (see, I worked in a car analogy too!)...

  • by Bowling Moses ( 591924 ) on Friday September 17, 2010 @01:52PM (#33613166) Journal
    "Imagine how much faster science could advance if we had a system that actually let scientists focus on research, let people trained in technical writing do the reporting, and let Google design a post-publication moderation system to sort out the useful advances from the career posturing."

    A few things:
    1. Where is this money to hire 10,000 technical writers going to come from when, excluding tenured faculty, all academic scientists are threatened with losing their jobs due to lack of funds at least one year in three?

    2. What exactly is a technical writer going to be doing? They will never have the necessary background to write a review article (seriously--a review article can have anywhere from 150 to well over 300 referenced papers, selected from an even larger pool). A research paper is the description of what 1-30 people have spent the last couple of years doing, plus a small literature review, plus discussion and future directions. Is the technical writer going to go through a few to a few dozen notebooks from all of these people and assemble it, despite not knowing what's going on and somehow predict where it's going to go? Or are they there to check over a rough draft and polish it? If they're the polisher, how much time do I have to spend getting them to understand the terminology, which can be and often is extremely precise? How much time do I have to spend going over the paper, which will have my name on it and not theirs, and which could (especially in the case of a very poorly written or wrong paper) have a huge impact on my career, checking to make sure that the wording is correct? Nuance can be critical in a scientific paper, especially if you're attacking somebody else's results. Communities are small and egos can be large.

    3. Your "technical writers write it and stick it on Google" model sounds like science reporting to me. I've had research described by the local paper. Thankfully the reporter was extremely diligent and emailed us what was about to be final copy. The boss was livid with the changes introduced by the reporter's editor to make the work "punchier." Had it gone out as it was, it would have been a major embarrassment to the lab, but we managed to do triage on it. You know all those stories that get written up on /. about some massive new breakthrough that is going to cure cancer and halitosis, but which turns out to be just another small-to-medium technical advance, where everyone demands to know who the jackass scientists were that overinflated their results, how dare those overfunded elitist blowhard frauds? Yeah, that's what almost happened to me. How exactly will your system avoid massively amplifying this?

    4. Google-based moderation will increase the incidence of posturing. Ever seen a website pushed to the top of the search heap by artificial means?

    5. Lots of scientists read slashdot. We're well aware of the crapfest that is the slashdot moderation system. See any post having anything to do with global warming or evolution. Hell see any post having anything to do with biology and there's some asshole tagging it "whatcouldpossiblygowrong" and at least a dozen highly modded comments demanding that the scientist should be prevented from playing God...when it's poking about in a few systems in a highly benign fashion.

    6. There are barriers to publication. Some of them are a good thing. Any ignorant crank can spew crap online, as is their right. A paper in a peer-reviewed journal ideally, and in fact normally, means more. It means that it has been read by peers of the authors who should, and usually do, spot outright bullshit. It means that it has gone through a process of criticism--not necessarily 100% constructive, but papers are usually made a little bit better; think add/remove/modify figure X, or do this one extra experiment. Is peer review perfect? Hell no! Anyone who's published more than two papers has gotten back comments that are useless, and we're all familiar with the occasional 100% bull
  • by oiron ( 697563 ) on Friday September 17, 2010 @01:56PM (#33613216) Homepage

    Average temperature between two different (widely separated) points might be meaningless, but the average of a continuous measurement is definitely significant. Even spatially, average temperature has a physical meaning. For example, the average surface temperature of the sun is about 5,800 K, though if you measure at various points, you may get more or less than that.

    In any case, that's only one of the many "interesting" ideas in that paper...

  • by societyofrobots ( 1396043 ) on Friday September 17, 2010 @02:00PM (#33613248)

    Yep, old news.

    http://www.genomeweb.com/peer-review-broken [genomeweb.com]
    http://www.slate.com/id/2116244/ [slate.com]

    All it takes is one bad reviewer who doesn't know what he's talking about, or who only skimmed the paper, to get it rejected.

  • by pesho ( 843750 ) on Friday September 17, 2010 @02:09PM (#33613342)
    Chances are that this paper is not going to pass peer review. A brief read of the paper shows that they don't even attempt to validate the model against real data (too lazy for real research, I guess). Their model is also overly simplistic, to the point of stacking the deck towards proving that peer review is bad. The reviewer's role is reduced to an accept/reject decision, which has little to do with reality. They completely eliminate the revision step of the peer review process, in which authors address the reviewers' comments, either through new experiments or through argument. If you look at the 'characters', what they call a 'rational' reviewer looks more like a 'bastard' reviewer. They completely ignore the possibility that a reviewer can make suggestions that improve the paper.

    I have several publications that were significantly improved through the peer review process. When I review papers, my goal is not to shoot down the work; rather, I try to find ways to improve it. Of course there are 'bad' reviewers, who think that reviewing a paper means shredding it to pieces. These are actually easy to spot, because they rarely suggest anything useful, and they are often ignored by the journal editors. Speaking of which, journal editors are yet another part of the peer review process that is missing from their model.

  • by oiron ( 697563 ) on Friday September 17, 2010 @02:21PM (#33613458) Homepage

    Speaking as someone who's at least read up on this stuff (not an expert, but definitely an informed layman), such large-scale adoption of nuclear power comes with its own problems. For one, building such plants is going to be extremely costly, and probably can't be done in time to make a useful difference.

    You talk about reprocessing, but even after that, you eventually end up with some radioactive waste products, to say nothing about radiation leakage into the environment.

    Finally, I think it's just yet another "all our eggs in one (radioactive) basket" solution. I'd rather have a wide range of options, from renewables like wind, solar or geothermal, to, yes, nuclear power where that's appropriate.

    It's difficult to comprehend why a place with ample local generation capability (say, solar power in the Thar desert in India) should go with an expensive nuclear power plant, as you "nuclear only" types keep proposing, when the alternative is cheaper and makes more efficient use of resources that are readily available (as opposed to resources mined from the ground a few thousand kilometres away, on another continent).

  • by techno-vampire ( 666512 ) on Friday September 17, 2010 @03:00PM (#33613876) Homepage
    You left one out: ignorant fools who conflate highly radioactive waste products with wastes that have long half-lives. If you listen to them, you come away with the impression that the wastes from a reactor stay Highly Radioactive for thousands of years, ignoring the fact (if they're even aware of it) that unstable isotopes are either highly active or long-lived, never both, because the two qualities are mutually incompatible.
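    The inverse relationship the parent describes follows directly from the decay law: activity A = λN with λ = ln 2 / t½, so for a fixed number of atoms, a longer half-life means proportionally lower activity. A back-of-envelope sketch (rounded constants; half-lives are the commonly quoted approximate values):

```python
import math

AVOGADRO = 6.022e23        # atoms per mole
SECONDS_PER_YEAR = 3.156e7

def specific_activity(half_life_years, molar_mass_g):
    """Decays per second per gram: A = (ln 2 / t_half) * (N_A / M)."""
    decay_const = math.log(2) / (half_life_years * SECONDS_PER_YEAR)  # 1/s
    return decay_const * AVOGADRO / molar_mass_g

# Iodine-131 (fission product, ~8-day half-life)
i131 = specific_activity(8.02 / 365.25, 131)
# Uranium-238 (~4.5-billion-year half-life)
u238 = specific_activity(4.468e9, 238)
```

    Running this shows I-131 is more than eleven orders of magnitude more active per gram than U-238, which is exactly why "dangerously hot" and "radioactive for billions of years" describe different isotopes.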
  • by P0ltergeist333 ( 1473899 ) on Friday September 17, 2010 @03:59PM (#33614518)

    As usual, the strong caveat at the end of the article goes unnoticed:

    But Tim Smith, senior publisher for New Journal of Physics at IOP Publishing, which also publishes physicsworld.com, feels that the study overlooks the role of journal editors. "Peer-review is certainly not flawless and alternatives to the current process will continue to be proposed. In relation to this study however, one shouldn't ignore the role played by journal editors and Boards in accounting for potential conflicts of interest, and preserving the integrity of the referee selection and decision-making processes."

    In real life, reviewers are not chosen at random, which undercuts the straw men built by the summary, most of the article, and the sceptics.

  • by hkmwbz ( 531650 ) on Friday September 17, 2010 @05:16PM (#33615352) Journal
    Lindzen is actively publishing. Unfortunately for him, his research falls apart on closer scrutiny. He might be skeptical, but his research has failed to support his skeptical position.
  • Re:just like /.? (Score:3, Informative)

    by fbjon ( 692006 ) on Friday September 17, 2010 @05:45PM (#33615644) Homepage Journal
    If it is abolished, you get a flood of BS and trolling, hiding every interesting post. There is already enough of that on every other popular site on the entire internet. To put it differently: this system serves to promote ideas, because it suppresses them less than almost any other system I've encountered. Though smaller sites can obviously get away with more freedom.

    Note also that ideas don't tend to be repeatedly suppressed unless they are truly out-there radical. I sometimes see posts promoting things that to me smell of the crank-shafting kookery that is regularly debunked as crap, and yet it doesn't get downmodded. Why? Probably because the opinion-modders simply couldn't be bothered then. I also tend to see posters loudly complaining that their opinions are being systematically downmodded, when it's really their arseholyness that is being systematically downmodded, with the dissenting opinion being the final straw.

    Sure, that is a kind of suppression of opinion, but polite, clear and coherent posts should also be promoted (that's how I mod).
