Science

Researchers Feel Pressure To Cite Superfluous Papers

ananyo writes "One in five academics in a variety of social science and business fields say they have been asked to pad their papers with superfluous references in order to get published. The figures, from a survey published in the journal Science (abstract), also suggest that journal editors strategically target junior faculty, who in turn are more willing to acquiesce. The controversial practice is not new: those studying publication ethics have for many years noted that some editors encourage extra references in order to boost a journal's impact factor (a measure of the average number of citations an article in the journal receives over two years). But the survey is the first to try to quantify what it calls 'coercive citation,' and shows that this is 'uncomfortably common.' Perhaps the most striking finding of the survey was that although 86% of the respondents said that coercion was inappropriate, and 81% thought it damaged a journal's prestige, 57% said they would add superfluous citations to a paper before submitting it to a journal known to coerce. However, figures from Thomson Reuters suggest that social-science journals tend to have more self-citations than basic-science journals."
  • by Anonymous Coward on Saturday February 04, 2012 @01:11AM (#38924477)

    The surest way to get something on Wikipedia is to get something published, then cite it. Accuracy notwithstanding.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Yes. I remember a lot of articles or sections of articles citing Medical Hypotheses and other junk journals. They were eventually challenged and killed, but new ones would always crop up. It felt like a losing battle :\

    • Some of my work is referenced by Wikipedia (just like a lot of other researchers' work), but it doesn't mean much of anything to me professionally. A Wikipedia reference doesn't add to the total number of citations a paper I wrote gets; only citations by other peer-reviewed papers do that. I doubt references on Wikipedia get me much in the way of other researchers' eyeballs on my papers either.

      Maybe more in terms of TFA: I'm a biochemist, and on the last paper I wrote I got the opposite pushback from the…
  • by KBehemoth ( 2519358 ) on Saturday February 04, 2012 @01:13AM (#38924481) Homepage
    I think we know the number one culprit here, ahem [citation needed] ahem.
    • Who modded this flamebait? It's right on point.

    • by jhoegl ( 638955 )
      Here is the Internet's citation.
      http://www.youtube.com/watch?v=dQw4w9WgXcQ [youtube.com]
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Not necessarily ... Wikipedia is an encyclopedia, not a research platform. Also, decent normal encyclopedias have citations for where they get their knowledge from, or will tell you on request. So for an encyclopedia, citations are everything (and the number of citations could be a decent measure).

      However, the number of citations is not a good measure for articles that produce original research, as the field they research might be rather narrow, or the article might be written as an answer to another article, verifying o…

    • by Hentes ( 2461350 )

      Now come on, Wikipedia (like all encyclopedias) is not a peer-reviewed journal; you can't expect them to check the scientific value of their sources.

    • Nobody's career is riding on getting their Wikipedia edits accepted.

  • Metrics (Score:5, Insightful)

    by rknop ( 240417 ) on Saturday February 04, 2012 @01:32AM (#38924551) Homepage

    This is what happens when you have metrics. You create a metric like "impact factor", and before long people will figure out ways to maximize "impact factor" that have nothing to do with what the metric was originally supposed to measure. Hyperfocusing on metrics like that ends up undermining the things you really value in favor of increasing your scores.

    This happens all over the place. Players in every game find ways to increase their score in ways that the game designers wouldn't really consider valid. Universities do things simply to make their "US News" ratings go up, not because those things make the university better. Students figure out ways to raise their grades that have nothing to do with mastering the material of the course. Heck, the entire US (and world?) economy suffers from this; the most reliably rich people are the ones who manipulate money transactions, and do absolutely nothing with the underlying reality that money is supposed to be an abstract representation of.

    People strive to improve the things that they are rewarded for and that they are evaluated on. When you focus too much on the wrong thing, people will do the wrong things in response.
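
    (To make the impact-factor arithmetic concrete, here is a minimal sketch in Python, with invented numbers: the two-year impact factor described in the summary, and how a round of coerced citations moves it.)

      # Two-year impact factor: citations received this year to articles the
      # journal published in the previous two years, divided by the number of
      # those articles. All figures below are made up for illustration.
      citable_items = 200    # articles published over the two prior years
      citations = 450        # citations received this year to those articles

      print(f"honest IF:  {citations / citable_items:.2f}")   # 2.25

      # Now suppose the editor coerces 2 extra self-citations from each of
      # the 150 papers the journal publishes this year:
      coerced = 150 * 2
      print(f"coerced IF: {(citations + coerced) / citable_items:.2f}")  # 3.75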

    • by bytesex ( 112972 )

      It's like Google ranking. Academics have found a way to hack their own system.

      • Re: (Score:2, Insightful)

        by mapkinase ( 958129 )

        "their own system" nope.

        Perceived word of mouth reputation of journals preceded impact factor. I do not know who decided to use impact factor as a main measure of a journal, but I take an insult in the fact that you called them "academics"

    • This is one of the most truly insightful comments I have read on /. in 10 years.

      As usual with great insights and discoveries, it strikes you with the triviality of a great thought that nevertheless never crossed your mind (that's the sign of greatness I learned from my scientific advisor).

    • Re:Metrics (Score:4, Interesting)

      by TheTurtlesMoves ( 1442727 ) on Saturday February 04, 2012 @08:32AM (#38926093)
      Impact factor is such an awful and quite unscientific metric. I hate it. However, after 2 postdocs I need to move on, and the first screening of applicants is often done with the impact factor of your papers. Even worse, some journals' impact factors can jump by large amounts from year to year.

      As a reviewer I have often suggested citations that IMO were missing. In fact, some scientists deliberately leave out citations that carry inconvenient viewpoints/data/results; I have such a paper that one side of the debate just pretends does not exist. Out of all the times I have reviewed and suggested citations, I have suggested one of my own papers only once. Also, I typically don't require anonymous review, i.e. I give my name when permitted.
      • by rmstar ( 114746 )

        Well, after a couple of years as a postdoc I can tell you there is no good metric for success. Except looking like you were appreciated and considered good by your senior peers. Maybe you published only four papers in crappy journals, but it so happens that somebody who matters thinks you are a genius. And there you go. You can win against highly cited people who are considered morons by the powers that be.

        That is the way of the scientist. Never forget you are in fact a monk. There might be sacrifices.

        I usu…

    • by eh2o ( 471262 )

      I am skeptical of the article's claim.

      When an editor reads a paper and feels that the author is misinformed or making incorrect assumptions, they may respond by requesting the author to cite certain papers, with the intention that by forcing the author to read the papers, they will become better informed and correct their erroneous assumptions.

      Meanwhile, the author, not believing themselves to be wrong, refuses to acknowledge the suggested references and considers the material "superfluous" to the work. As…

  • Not exactly on topic but nevertheless an interesting read:

    http://100rsns.blogspot.com/ [blogspot.com]

  • To ignore previous authors might be disrespectful. But isn't the citing of authors used for extrapolation?
    • by Sir_Sri ( 199544 ) on Saturday February 04, 2012 @02:12AM (#38924719)

      There's a tricky balance between saying "we build on this work, and so necessarily reference it" and "it is related to this other work (or may incorporate results from it, even if I'm not directly involved in that research)". The aim is to give credit where it is due, but not too much.

      Science struggles with how to cite widely dispersed facts. What is the speed of light? Right. You *can* cite the people who precisely measured it, but most research that relies on that figure doesn't really need to cite it, because it isn't building on it. If I'm looking at spectra from a gas or a star, the speed of light is really, really, really important to my work, but how the actual number was arrived at isn't that important. If you're a young researcher, you want your work to look like it is related to a lot of things (think resume padding), so you cite more than you need to; and if you're new to the field, you want your work cited as a measure of how much impact it has. But judging what counts as impact is not always clear, and you're pretty easy to persuade that it is better to over-cite than to under-cite and risk being called out for plagiarism.

      Another example. I'm working on a current project. One of the relevant facts for the work is the history of the Bismarck battleship (the Nazi one), because the documentation about its history is relevant to how to quantify the statistics of the ship. But what is a valid reference for that, if anything? How about the mere existence of the ship? What factual information am I digging for that isn't sufficiently well known to be on Wikipedia? Do I cite comparable information about the dozen or so other ships and aircraft she actually fought with, or do I just sort of take for granted that the ship had 38 cm guns (which directly maps to the problem I'm talking about, which is balancing the relative combat power of the ship)? If it were a history paper it would be sort of obvious that the historical work is to be cited: discovering (or rediscovering) that information would be a worthwhile historical paper. But what about something that is only tangentially related, which is trying to define those statistics in a game?

      It hasn't been uncommon for scientists to base research on 3 or 4 papers (possibly one or two of which were from their own group), and then, when they're getting ready to publish, to look for papers in the target journal that are related and can be cited as well (this is like self-censorship, or self-coercion: rather than being asked to do it, you do it on your own first). It's not really a good practice, but I'm not sure it's as bad as the article tries to make it out to be. You really are legitimately looking for work that might be related to what you did, especially if you didn't do your literature search very well (which is harder than it sounds sometimes), and you are, as you say, citing authors you're extrapolating from (or at the very least doing related work to).

      • There will also be some bias in terms of what makes a citation "superfluous." A good editor is concerned with ensuring that the submitter's statements are accurate and well-sourced. On good legal journals, editors actually go and look at every cited source (hundreds per article) to see if it contains the proposition it is cited for. When a journal is well-sourced, it is more reliable as a source for practitioners and as a research tool for academics, as well as being a better stepping-off point for furt…

  • by oheso ( 898435 ) on Saturday February 04, 2012 @01:44AM (#38924601)
    ... in the last sentence of the summary. I think the word you're looking for is "Naturally."
  • by bky1701 ( 979071 ) on Saturday February 04, 2012 @02:07AM (#38924687) Homepage
    What goes around, comes around. Considering this is basically the norm for student essays, it was only a matter of time until the students became the professors, and the professors fully saturated the journal editorial boards. It is just a promotion of the status quo to a level where it is visibly a bad thing, really.
  • by forkfail ( 228161 ) on Saturday February 04, 2012 @02:13AM (#38924723)

    Shocked [3][12][21], I tell you! [4][7]! Studies [14][17][31] have shown [11][15] that this [26] never [21][22] happens [25] with reputable [5][14][24] papers [19]! How could this [32] have happened? [12][16]

  • Happened to Me (Score:4, Insightful)

    by Bob9113 ( 14996 ) on Saturday February 04, 2012 @02:13AM (#38924727) Homepage

    I put together an economics paper and sent it around to a few PhDs I know. Two of them came back with the exact response that this article describes: "It needs more references if you want to get published." I asked if the math, logic, or conclusions were off; both responded they were not, but that was not the point. They made it clear that to get published it had to have more references to existing work, regardless of the content.

    I can come up with arguments why such a policy has some merit -- keeping wacky stuff like modern monetary theory's hypothesis that there is no such thing as too much debt from distracting researchers, for example -- but good, bad, or indifferent, the fact that there is a barrier against papers which do not pay homage to existing academics is very real.

    • Re:Happened to Me (Score:4, Insightful)

      by Anonymous Coward on Saturday February 04, 2012 @04:41AM (#38925207)

      Yes, but that's not what the problem is. It's not about "paying homage", it's about being honest about the novelty of your work. Academic publishing is not a no-holds-barred debate. Every paper is expected to present a balanced view of the subject at hand, even if the author has a particular point of view they want to get across. This is why we have a peer review process. There is a reason for accurately representing previous work.

      The problem that the article discusses is specifically *irrelevant* citations added for suspicious reasons. That is a different problem.

      • It's discussing *superfluous* citations, not irrelevant ones. The authors are the ones deciding if they are superfluous. Authors will always think that having to add a citation is superfluous. That doesn't mean they should be able to ramble on for a forty-page paper with fewer than an absolute minimum of, say, 120 footnotes. Is it possible that the paper is perfectly correct without them? Sure. But if I'm putting a journal's name on them, and I'm responsible for the journal's reputation, I'd like…

    • There is a difference between randomly sprinkling a paper with references in a superficial effort to make it look "serious" and conform to the usual academic mold; and actively researching, citing and discussing earlier relevant references in comparison to your own work in a balanced way. The latter is how good quality academic writing should be done. The former tends to give rise to papers with pointless laundry lists of citations. I hope your friends were suggesting the latter way. Even if they were not a…

    • Re:Happened to Me (Score:4, Interesting)

      by zachie ( 2491880 ) on Saturday February 04, 2012 @05:22AM (#38925375)

      Including copious references is not only a way to "pay homage to existing academics". It makes sure that you went through the literature to see if your contribution is really a novelty, and forces you to compare your work against others', which is great both for the expert in the field to better understand your contribution, and for the non-expert, who gets pointers to better grasp some parts or to navigate towards the important papers of a field of research. These are still very important, even if you think your work is technically sound.

      I'm talking out of my ass now, and it depends on the research area and the paper, but "It needs more references if you want to get published" might be a polite way for your acquaintances to say: have you provided sufficient motivation for the problem you are solving, have you thoroughly explored the literature for related proposals, you should compare your ideas against other papers', etc.

      • by Rich0 ( 548339 )

        So, suppose the guy adds 47 references to prove to the reviewers that the material is a genuine contribution (as if that even made sense - any number of citations does not prove that you didn't miss a complete previous duplication)? The reviewer is satisfied that the guy has done his homework and gives the OK. Why not then remove the references before publication? If the purpose is to ensure that due diligence is done then removing the references would have no negative effect at all.

        No, references are pr…

        • by zachie ( 2491880 )

          You are off with this idea that citations are just a way to pay homage. They are very important. And you want to remove citations after the author has gone through the great effort of reviewing other people's work, comparing their ideas, etc? That makes no sense.

          The very particular case you point to, of citations to old well-known papers, is pretty irrelevant, and unrelated to the problem that TFA is discussing. While that might be purely homage, what is the big problem with giving credit to a given researcher if…

          • by Rich0 ( 548339 )

            The suggestion that was raised was that it was important to have citations to prove that the author did their homework and was aware of related work. My point was that if that were REALLY the goal then it wasn't important to publish the citations. Obviously it is silly to add them and then remove them, but the obvious solution to that is to never add them in the first place.

            What in your opinion is the true importance of citations? I certainly don't think they should be used to test the author's knowledge…

            • by zachie ( 2491880 )
              Well yes, publishing is some sort of test. That's the point of peer review.
              • by Rich0 ( 548339 )

                It isn't supposed to be a test of the author's abilities - just of the accuracy of the paper. The author could be an idiot but if the paper accurately describes the experimental results and their validity then the paper should be published.

  • Sample Size Errors (Score:5, Informative)

    by CycleMan ( 638982 ) on Saturday February 04, 2012 @02:31AM (#38924801)
    I did RTFA. The authors of the paper surveyed 54,000 academics, and about 1,300 responded to say, "Yes we felt pressured." That's 2.5%. Only 1/3 of those named a single journal that pressured them. Another 2.5% said, "We've heard that others have been pressured, but never us." 7.5% said, "We've never heard of it." And 87.5% didn't respond. The survey shows extreme self-selection as 7 of 8 academics did not respond. So before someone gets excited that 20% of academics are pressured, note that under 13% of academics responded.
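
    (The parent's arithmetic checks out; a quick back-of-the-envelope in Python, using the rounded figures quoted above:)

      # Back-of-the-envelope check of the parent's figures (all rounded).
      surveyed = 54_000
      pressured = 1_300                      # "yes, we felt pressured"
      print(f"{pressured / surveyed:.1%}")   # ~2.4%, the "2.5%" quoted

      # 2.5% pressured + 2.5% heard of it + 7.5% never heard of it:
      print(f"{0.025 + 0.025 + 0.075:.1%}")  # 12.5%, i.e. "under 13%"
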
    • by znerk ( 1162519 ) on Saturday February 04, 2012 @02:49AM (#38924865)

      I did RTFA. The authors of the paper surveyed 54,000 academics, and about 1,300 responded to say, "Yes we felt pressured." That's 2.5%. Only 1/3 of those named a single journal that pressured them. Another 2.5% said, "We've heard that others have been pressured, but never us." 7.5% said, "We've never heard of it." And 87.5% didn't respond. The survey shows extreme self-selection as 7 of 8 academics did not respond. So before someone gets excited that 20% of academics are pressured, note that under 13% of academics responded.

      ... because the other ~87% were pressured to keep silent?

    • I've already posted - someone else mod up please.

    • by EdIII ( 1114411 ) on Saturday February 04, 2012 @03:04AM (#38924907)

      I'm sorry, but it's hard for me to take this post seriously without at least one citation.

    • by cvtan ( 752695 )
      Put me down as one of the academics that "never heard of it".
  • by Karmashock ( 2415832 ) on Saturday February 04, 2012 @02:45AM (#38924853)

    They're being asked to pad their papers because the actual evidence being cited might not look that convincing in its own right, and many of the conclusions aren't properly supported. Come on, we've all had that experience writing papers. You've got a deadline, you're trying to get from point A to point B, and you just don't have enough to make it all the way. So you make a statement you don't have support for and then link it to source material you know no one will read. It looks like your conclusion is supported when in fact it isn't. You don't care, though, because the point is to get from point A to point B... and the only person you have to fool is the teacher, or in this case the peer reviewers, who probably don't care that much anyway.

    Also... everyone else is doing it... and for the teacher to actually verify all those citations would be pretty much impossible. The only thing you have to be careful of is not to say something the teacher knows is false or will think is false. If you do that, they might check the citation. But if you go outside of their knowledge, forcing them to basically check everything or nothing... or stick closely to whatever the teacher is likely to believe anyway... you can get away with it about 99 times out of a hundred. And the time you're caught... a slap on the wrist or a small hit to your grade.

    Now I have no experience with what happens when you actually start publishing things. I fully admit my ignorance here. But I'd be surprised if an academic history conditioned by this environment didn't predispose graduates to try the same thing. And really, who is going to stop them? They've had their whole academic career to perfect the best ways to scam the system. All those years they weren't just learning the subject but they were also learning how the subject is taught, how it is graded, the social characteristics of their judges, human psychology as it relates to auditing, etc. We learn all this stuff naturally.

    Anyway, that there is fraud in academia isn't shocking. All human interactions involve fraud. If there's a benefit in deceiving someone then we probably do it and we get very good at it. This is indifferent to morality. It has more to do with intelligence. If you're clever whether you're a good person or a bad person... you learn to lie. Even if you don't use it for evil it's just a skill you acquire.

    If there is anything I find bothersome here, it is the conspiratorial aspect where someone is encouraged to deceive others. This sort of thing is marginally less offensive when it's kept isolated to individuals, even if everyone is doing it. And really, what people SHOULD be doing, rather than finding bogus sources, is finding ACTUAL sources.

    It's actually not useful to anyone if it's fifty percent bullshit. I don't care if it's half brilliant and half bullshit. Even ten percent bullshit isn't acceptable. Strip out everything that isn't backed by real sources. If you can't get from point A to point B without using bullshit sources, then maybe those two dots don't actually connect. I know you need to make a connection, and maybe you are even required to make that specific connection because your peers won't tolerate anything short of it. But that isn't science and it isn't academically useful. Sure, you get your grade or you get your degree or you get your job or you get paid. You get what you want. But you do it at the expense of the system's integrity.

    I don't know... it's hard to audit this stuff without investing unreasonable numbers of man hours.

  • by PolygamousRanchKid ( 1290638 ) on Saturday February 04, 2012 @03:23AM (#38924959)

    Maybe there is some sort of Internet Reference Counter worming its way through the Web. It looks at papers, and gives points to people who get referenced a lot. People who reference you are tallied as your friends, so you will know to reference them. People who publish in your area, but don't reference you, are foes, and get negative points. You can buy or sell references or points on eBay and pay for them with Bitcoins. People with lots of points are "Blue Chip" in the points futures markets. Points can be used to suppress rival research.

    Hey, doesn't Facebook or Google do this already . . . ? . . . for an extra fee ? . . . ?

    • by dkf ( 304284 )

      Hey, doesn't Facebook or Google do this already . . . ? . . . for an extra fee ? . . . ?

      Google does a lot of it for free via Google Scholar. Almost all the rest is done through various funding agencies, who have long used bibliometrics to determine scientific relevance; the core algorithm is the same as Google's ranking algorithm. The exception is the friends/foes part, which is left to you to organize.
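
      (For anyone curious, here is a toy sketch in Python of that core algorithm: PageRank-style power iteration over a citation graph. The four-paper graph is invented, and this is not Google Scholar's actual implementation.)

        # Papers are nodes; a citation from paper p to paper q is an edge p -> q.
        cites = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
        papers = list(cites)
        rank = {p: 1 / len(papers) for p in papers}
        damping = 0.85

        for _ in range(50):  # power iteration until the ranks settle
            new = {p: (1 - damping) / len(papers) for p in papers}
            for p, refs in cites.items():
                for q in refs:
                    new[q] += damping * rank[p] / len(refs)
            rank = new

        for p in sorted(rank, key=rank.get, reverse=True):
            print(p, round(rank[p], 3))  # "c", the most-cited paper, ranks first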

  • by jopet ( 538074 ) on Saturday February 04, 2012 @04:10AM (#38925099) Journal

    Money and career depend on what a brainless bean-counter adds up in a spreadsheet.

  • by drolli ( 522659 ) on Saturday February 04, 2012 @06:44AM (#38925651) Journal

    In my opinion/experience (as a former researcher who left for industry), it's not the editors who apply the pressure, but the possibility that you ignored the work of somebody who is important enough to referee for Nature or Science. There are several components to this phenomenon:

    a) Maybe the work really is important, and you did not know it because it's from too long ago. There is usually nothing wrong with a referee saying "hey, that is similar to what [xyz] did", even if they are on the list of authors of the reference in question.

    b) Some referees don't react positively to not getting cited, and will shoot down, for other stated reasons, any paper not referring to *their* theory (I believe that happened to me once).

    c) In the abstract (which is the part the editors really read before the refereeing process) you compare your paper to previous publications. Authors are under the impression that comparing your work to previous important papers makes a better impression; how far this is true I can't judge. I found the editorial stage *before* the refereeing at Nature and Science the most opaque thing I have experienced as an author. Unlike the refereeing process, there is no way to appeal, and no information on what the editors disliked so much that they rejected the paper outright. (There is a saying that once you have had Nature/Science papers it becomes more likely you will pass this stage, and I have seen at least one example of a paper being passed along at Nature which for sure would have been rejected by the editors had it come from a less important group in the field.)

  • In fact, these kids got their research published without any references *at all*. http://rsbl.royalsocietypublishing.org/site/misc/BlackawtonBees.xhtml [royalsocie...ishing.org]

    I particularly like the section headings "once upon a time" and "the puzzle duh duh duhhhhh". In the context of this article, however, I think the following excerpt from the background section puts the corruption highlighted by TFA to shame:

    "So what follows is a novel study (scientifically and conceptually) in ‘kids speak’ without references

  • In the researcher world, it's all about publishing, publishing, publishing. 90% of the papers are crap, and at least 75% of the conferences are crap. People fake results, or publish even before they have the actual results, only to get noticed. I'm in the middle of it, and I sometimes feel so ashamed by the quality of what I'm forced to publish that I feel like stepping out.

  • In my field, I usually end up struggling prior to submission to cut references, because many of the journals I submit to have limits on the number of citations. Often this means citing a review rather than the primary literature (because one review can take the place of multiple primary papers), or citing a recent work using the most current methods and dropping citations of the earlier ground-breaking work in the field.

  • I am an author and an editor of a journal that could use a higher impact factor to get noticed. But I have never been "encouraged" to add a reference that was not clearly missing (there have been one or two of those, due to inadequate research on my part), and as an editor I have never asked for additional references except in cases where there was clearly prior work that the authors should have been aware of and should have cited, usually because the missing references actually showed the results the autho…
