
 




Rejected Papers Get More Citations When Eventually Published

scibri writes "In a study of more than 80,000 bioscience papers, researchers have illuminated the usually hidden flows of papers from journal to journal before publication. Surprisingly, they found that papers published after having first been rejected elsewhere receive significantly more citations on average than ones accepted on first submission. There were a few other surprises as well... Nature and Science publish more papers that were initially rejected elsewhere than lower-impact journals do. So there is apparently some reason to be patient with your paper's critics — they will do you good in the end."


  • by Anonymous Coward on Friday October 12, 2012 @06:44PM (#41637237)

    Peer review *does* work. Yes, part of its job is to filter out the poor papers that don't deserve publication. That's the obvious part. But I've gotten plenty of papers back with comments like "deserves publication, but X, Y, and Z need to be fixed", or even "rejected, but if X were addressed, should be reconsidered", and so on. So you go off and do X, Y, and Z, resubmit, and you've got a better paper because you've addressed the critical comments. Good papers are ones that incorporate constructive criticism, so it makes sense that those might eventually get cited more.

    Also, if a paper was rejected somewhere, it might be something controversial that people want to argue about. Publish a paper that makes a claim some people don't agree with (hence the rejection), and those critics will publish their own papers slagging the original one. Put another way, in order to say someone else's paper is full of crap, you have to cite it, and if a lot of people are saying it's crap, then you'll get a lot of citations :-)

    Peer review isn't perfect, but the described pattern makes sense. What I'm surprised at is their ability to statistically detect these patterns given all the other variables involved, but I guess a sample size of 80,000 helps.
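    A minimal back-of-the-envelope sketch of why a sample that large helps (all numbers below are made up for illustration, not taken from the study): the standard error of a difference in mean citation counts shrinks with n, so even a one-citation gap stands out against very noisy counts.

        # Hypothetical sketch: how a large n makes a small citation gap detectable.
        # Group sizes, means, and the standard deviation are invented; they are
        # not the study's actual figures.
        import math

        n_accepted, n_resubmitted = 60_000, 20_000    # hypothetical split of ~80,000 papers
        mean_accepted, mean_resubmitted = 10.0, 11.0  # hypothetical mean citation counts
        sd = 15.0                                     # citation counts are very noisy

        # Standard error of the difference between the two group means.
        se = math.sqrt(sd**2 / n_accepted + sd**2 / n_resubmitted)
        z = (mean_resubmitted - mean_accepted) / se
        print(f"standard error = {se:.3f}, z = {z:.1f}")
        # With these made-up values z comes out around 8, i.e. a one-citation
        # difference is easily distinguishable from noise at this sample size.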

  • by Brett Buck ( 811747 ) on Friday October 12, 2012 @06:45PM (#41637245)

    I haven't done a lot of publishing in the open literature, but many times the papers that fly through the vetting process with little effort are on topics that are somewhat straightforward or trivial, and would thus not be as likely to be useful as a citation. An interesting topic raises many more questions and is more likely to require multiple tries to get through review, but it is ultimately more useful and more likely to get a citation.

        Brett

  • maybe not editing? (Score:2, Insightful)

    by Anonymous Coward on Friday October 12, 2012 @06:47PM (#41637273)

    The summary seems to suggest that when a paper is rejected, the author edits it in the hope that it will be less rejection-worthy the second time around.

    I don't think the data provided are adequate to show that. An alternative hypothesis is that papers vary in risk, and "risky" papers are more likely both to be rejected and, once accepted, to be cited.
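    A toy simulation of that alternative (everything here is hypothetical, not the study's data): give each paper a latent "risk" that raises both its chance of a first-round rejection and its eventual citation count, with no editing between submissions, and the rejected-then-published group still averages more citations purely through selection.

        # Toy simulation of the selection hypothesis; all numbers are invented.
        import random

        random.seed(0)

        def simulate_paper():
            risk = random.random()                       # latent riskiness in [0, 1)
            rejected_first = random.random() < risk      # riskier -> more likely rejected
            citations = random.gauss(10 + 10 * risk, 5)  # riskier -> more citations
            return rejected_first, max(0.0, citations)

        papers = [simulate_paper() for _ in range(80_000)]
        for label, flag in (("accepted first time", False), ("rejected first", True)):
            cites = [c for rejected, c in papers if rejected == flag]
            print(f"{label}: mean citations = {sum(cites) / len(cites):.2f}")
        # Rejected papers average roughly 16.7 citations here versus roughly 13.3
        # for first-time acceptances, even though no paper was ever revised.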

  • by merlinokos ( 892352 ) on Friday October 12, 2012 @06:53PM (#41637347)
    "So there is apparently some reason to be patient with your paper's critics — they will do you good in the end." I have a different possible viewpoint. The papers that are most likely to be rejected are the ones that are controversial because they challenge the status quo. But once they're accepted, they're game changers. And since they're game changers, and the first publications with the new viewpoint, they're cited disproportionately frequently by follow up work.
  • Re:Surprisingly? (Score:5, Insightful)

    by Arker ( 91948 ) on Friday October 12, 2012 @09:59PM (#41638687) Homepage

    Not at all. Papers that were previously rejected benefit from additional, careful revision by their authors, and therefore end up being of higher quality than they otherwise would have been.

    That's the conclusion the journals would like you to reach, and it may explain some of the effect. It seems to me the larger effect is simply that papers which break new ground tend to be controversial to the old guard and thus accumulate rejections, but if and when they are finally published, they also tend to accumulate more citations, since even the old guard will then have to cite them in their rebuttals.

  • by Chuckstar ( 799005 ) on Saturday October 13, 2012 @12:27AM (#41639289)

    This is one of those "I came on here to say this but you said it better" posts.

    The only study they cite about how much editing is done between submissions seems to indicate "not much at all".

    Also, this could explain the prevalence of Nature and Science in the study. Risky papers may be rejected by the orthodoxy of more specialized journals. Nature and Science may avoid that kind of orthodoxy simply by having a broader array of reviewers than a more specialized journal might.

    Pick a completely zany example in a field I know nothing about: you've done cutting-edge work on the transport of lipids in palm fronds. There aren't that many people in the field, and most of them have been following a line of reasoning that lipids are transported by osmosis, as there has been some evidence of that. You have the hypothesis that lipids are transported by tiny aphids. You do some interesting lab work that seems to support the hypothesis. The results are supportable and publishable, but not entirely definitive -- you know, statistically significant, but there's a lot more work to be done before anyone can say you've really proven your hypothesis correct.

    You submit your paper to a botany journal. They have it reviewed by a bunch of palm frond experts. All of them have studied palm fronds very closely. They mostly adhere to the unproven pet theory of osmosis. They do not consciously reject your paper because it contradicts their pet theory, but between the kinda shaky results and their unconscious bias, it gets rejected.

    Now you submit to Nature. They only have a couple palm frond experts in their stable, so they have a fern expert and a deciduous-leaf expert review it. Neither the fern expert nor the deciduous-leaf expert has any unconscious bias. They think the whole thing seems pretty interesting and worthy of broader discussion, so it gets approved. Now that the whole palm frond field has seen your work, a couple guys start trying to replicate it, adding their own twists. They come up with supporting evidence and publish, with a reference to your work. A couple more guys come up with evidence that supports a different hypothesis. Even if you end up having been wrong at this point, your work gets referenced because it is what stimulated them to do their work. Let's say they discovered a third mechanism that seems to explain transport even better. This third mechanism is unrelated to osmosis, so few of the osmosis studies are being referenced. But because your work stimulated the whole line of inquiry, your wrong study still gets lots of references.

    (I added the part at the end about the work ending up being wrong just to illustrate that risky scientific investigation doesn't have to end up being right in order to get referenced a lot. It just has to stimulate inquiry in a different direction such that people keep referencing the original study. A paper that just advances the field in an orthodox direction may still be great science, but may get lost in a sheaf of other similar studies advancing the field in that same direction.)

  • Re:Surprisingly? (Score:4, Insightful)

    by Sir_Sri ( 199544 ) on Saturday October 13, 2012 @03:10AM (#41639841)

    Reviews, in my experience in physics and computer science anyway, tend to be more about process than results. Did your work cite whomever the reviewer knows about in your field (usually not)? Is the process you used valid, and more importantly, is it clear that it is valid? Are your results understandable to someone who didn't do your particular experiment? Is your paper clear enough that an expert in the field, but not in your research, can build on it? Admittedly I might be biased because I'm bad at explaining myself.

    Reviewers aren't out to screw submitters for the fun of it, because they don't want to be screwed themselves, and the whole point of research is to find new stuff. But internally, when you're preparing a paper, your supervisor or your grad students know what the fuck you're talking about, and someone on the outside can say something to the effect of 'this makes no sense'. 'This makes no sense', by the way, doesn't mean the work isn't valid, just that you might have done a terrible job writing about it within whatever constraints your target journal has.

    Journals are also becoming a bit overspecialized, and when you do something really high-impact that isn't very narrow, it's hard to know where to put it. My particular corner of the academic universe is broadly under the field of game development, but we really combine work in AI, computational social science, strategic studies and economics. An AI journal may look at our work and feel (correctly) that it's not enough of an AI problem for them, the computational social science people will say the same thing, and so on. The big dogs of Nature and Science publish things that can be cross-disciplinary and that aren't (and shouldn't be) pigeonholed into a particular basket.

    The place I'm at has, or had at least, some really stellar computer vision and computer algebra researchers, but odds are that if you aren't specifically in those fields you couldn't care less what they do. That can be good work with few citations just because it's really, really important to one very tiny problem. A journal rejecting you because you aren't doing enough pathfinding for their pathfinding edition doesn't mean you didn't do good work; it just means the work you did isn't as applicable to what they're doing.

    I expect as time goes on we'll see more of this. New researchers aren't as biased by picking up a physical journal and reading it; they do internet and database searches (which are becoming much easier and much higher quality) and glue together concepts from multiple journals, but then their work may not really fit with what the previous ones did. It can be broadly interesting and broadly useful, but it doesn't sit squarely in any of the feeder disciplines.

    If you want a good example, take quantum computing. The quantum mechanics side of the research fits in easily a dozen different journals (e.g. MRI, laser spectroscopy, semiconductors, etc.), but very little of the work is meaningfully original; it's a new application of an old problem. The theory-of-computing side probably only belongs in one or two theory-of-computation journals, but again, it's re-examining some of the fundamentals of computing theory in a different way, and there's not much to do after the first guy does it. But together they are very interesting.
