Science

Researchers Feel Pressure To Cite Superfluous Papers

ananyo writes "One in five academics in a variety of social science and business fields say they have been asked to pad their papers with superfluous references in order to get published. The figures, from a survey published in the journal Science (abstract), also suggest that journal editors strategically target junior faculty, who in turn were more willing to acquiesce. The controversial practice is not new: those studying publication ethics have for many years noted that some editors encourage extra references in order to boost a journal's impact factor (a measure of the average number of citations an article in the journal receives over two years). But the survey is the first to try to quantify what it calls 'coercive citation,' and shows that this is 'uncomfortably common.' Perhaps the most striking finding of the survey was that although 86% of the respondents said that coercion was inappropriate, and 81% thought it damaged a journal's prestige, 57% said they would add superfluous citations to a paper before submitting it to a journal known to coerce. However, figures from Thomson Reuters suggest that social-science journals tend to have more self-citations than basic-science journals."
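The impact factor mentioned in the summary follows the standard two-year formula, which also shows why an editor has an incentive to solicit extra citations to their own journal. A minimal sketch of that arithmetic, with invented numbers purely for illustration:

```python
# Standard two-year impact factor: citations received this year to articles
# the journal published in the previous two years, divided by the number of
# such articles.  All figures below are invented for illustration.
def impact_factor(citations_this_year: int, articles_prev_two_years: int) -> float:
    return citations_this_year / articles_prev_two_years

print(impact_factor(400, 160))        # -> 2.5
# 40 extra (possibly coerced) citations to the same back-catalogue:
print(impact_factor(400 + 40, 160))   # -> 2.75
```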
  • by Sir_Sri ( 199544 ) on Saturday February 04, 2012 @02:12AM (#38924719)

    There's a tricky balance between saying "we build on this work, and so necessarily reference it" and "this is related to that other work (or may incorporate results from it, even if I'm not directly involved in that research)". The aim is to give credit where it is due, but not more than that.

    Science struggles with how to cite widely dispersed facts. What is the speed of light? Right. You *can* cite the people who precisely measured it, but most research that relies on that number doesn't really need to, because it isn't building on the measurement. If I'm looking at spectra from a gas or a star, the speed of light is really, really important to my work, but how the actual number was arrived at isn't. If you're a young researcher you want your work to look like it is related to a lot of things (think resume padding), so you cite more than you need to; and if you're new to the field you want your work cited as a measure of how much impact it has. But judging what counts as impact is not always clear, and you're pretty easy to persuade that it is better to over-cite than to under-cite and risk being called out for plagiarism.

    Another example: I'm working on a current project, and one of the facts relevant to the work is the history of the battleship Bismarck (the Nazi one), because the documentation of that history bears on how to quantify the ship's statistics. But what is a valid reference for that, if anything? How about the mere existence of the ship? What factual information am I digging for that isn't sufficiently well known to be on Wikipedia? Do I cite comparable information about the dozen or so other ships and aircraft she actually fought, or do I just take for granted that the ship had 38 cm guns (which maps directly onto the problem I'm talking about: balancing the relative combat power of the ship)? If it were a history paper, it would be fairly obvious that the historical work should be cited - discovering (or rediscovering) that information would make a worthwhile historical paper - but what about something only tangentially related, which is trying to define those statistics in a game?

    It hasn't been uncommon for scientists to base research on 3 or 4 papers (possibly one or two of which were from their own group) and then, when they're getting ready to publish, to look for related papers in the target journal that can be cited as well (this is like self-censorship, or self-coercion: rather than being asked to do it, you do it on your own first). It's not really a good practice, but I'm not sure it's as bad as the article tries to make it out to be. You really are legitimately looking for work that might be related to what you did, especially if you didn't do your literature search very well (which is harder than it sounds sometimes), and you are, as you say, citing authors you're extrapolating from (or at the very least doing related work to).

  • Sample Size Errors (Score:5, Informative)

    by CycleMan ( 638982 ) on Saturday February 04, 2012 @02:31AM (#38924801)
    I did RTFA. The authors of the paper surveyed 54,000 academics, and about 1,300 responded to say, "Yes we felt pressured." That's 2.5%. Only 1/3 of those named a single journal that pressured them. Another 2.5% said, "We've heard that others have been pressured, but never us." 7.5% said, "We've never heard of it." And 87.5% didn't respond. The survey shows extreme self-selection as 7 of 8 academics did not respond. So before someone gets excited that 20% of academics are pressured, note that under 13% of academics responded.
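A quick back-of-the-envelope check of the parent's figures (the approximate numbers quoted in the comment, not taken from the paper itself) shows how the "one in five" headline figure and the low overall response rate fit together:

```python
# Approximate figures from the parent comment, used only as a sanity check.
surveyed    = 54_000
pressured   = 1_300                    # responded "yes, we felt pressured"
heard_of_it = int(0.025 * surveyed)    # "others were pressured, but never us"
never_heard = int(0.075 * surveyed)    # "we've never heard of it"

responded = pressured + heard_of_it + never_heard
print(f"response rate:         {responded / surveyed:.1%}")   # ~12.4% of those surveyed
print(f"pressured / surveyed:  {pressured / surveyed:.1%}")    # ~2.4%
print(f"pressured / responded: {pressured / responded:.1%}")   # ~19.4%, i.e. 'one in five'
```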
  • by korean.ian ( 1264578 ) on Saturday February 04, 2012 @02:54AM (#38924879)

    Not all economists habitually claim that wealth inequality and deregulation are good things. There is also a strong movement to move economics away from purely math-based models and toward more empirical data. For example, if you take any decent class on international trade, you will find countless examples that disprove the basic models that have been around for ages. The Heckscher-Ohlin and Ricardian models (two of the most basic models on which trade theory is built) have been tested and found wanting, especially H-O. The Ricardian model has been modified and updated somewhat, but it still contains flaws. Newer models such as New Trade Theory (inventive naming schemes abound! there's also a New New Trade Theory) and the Gravity Model have come into vogue, especially with the advent of FTAs over the past 20 years and the WTO.
    As for monetary policy, it has certainly been the unfortunate case that in America and the UK (and to a certain degree Canada - although with Harper and his cronies holding their goddamned majority it's sure to get worse) the Austrian school has been prominent. However, people are starting to see that regulation needs to be enforced. If we look at the differences in banking between Canada and the US, for example, Canada has fewer regulations, but enforcement is stricter and penalties more severe. I'm not, by the way, claiming that the Canadian banking system is without flaws, just that it tends to be less volatile and its practices are less risky because the regulations are actually observed. Obviously the US is not going to adopt the Canadian banking system wholesale, but there are lessons to be learned.
    There is a growing understanding that economists need to look at bigger pictures, and so political economy and comparative governance are big fields of study now.
    I know that, on a board dominated by engineers and "hard" scientists, this post will probably get downmodded, but it's important to recognize that there is great diversity in the opinions of professional economists, and many of the younger generation will not buy into the simple ideas of "deregulate, lower taxes, and eliminate the minimum wage".
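The Gravity Model mentioned in the comment above is one of the more empirically driven trade models: bilateral trade scales with the two economies' sizes and shrinks with distance, roughly T_ij ≈ G · GDP_i · GDP_j / D_ij. A minimal, purely illustrative sketch of how its log-linear form is typically estimated (synthetic data, not a real result):

```python
# Gravity model of trade, estimated in log-linear form on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
gdp_i = rng.lognormal(3, 1, n)     # exporter GDP (arbitrary units)
gdp_j = rng.lognormal(3, 1, n)     # importer GDP
dist  = rng.lognormal(7, 0.5, n)   # distance between the pair

# Generate trade flows from the gravity relationship plus multiplicative noise.
trade = 0.5 * gdp_i * gdp_j / dist * rng.lognormal(0, 0.3, n)

# Estimate ln T = a + b1*ln GDP_i + b2*ln GDP_j + b3*ln D by least squares.
X = np.column_stack([np.ones(n), np.log(gdp_i), np.log(gdp_j), np.log(dist)])
coef, *_ = np.linalg.lstsq(X, np.log(trade), rcond=None)
print("estimated elasticities:", np.round(coef[1:], 2))  # roughly [1.0, 1.0, -1.0]
```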

  • by G3ckoG33k ( 647276 ) on Saturday February 04, 2012 @03:33AM (#38924993)

    Researchers Misunderstand Confidence Intervals and Standard Error Bars.
    Belia, Sarah; Fidler, Fiona; Williams, Jennifer; Cumming, Geoff
    Psychological Methods, Vol 10(4), Dec 2005, 389-396.

    Little is known about researchers' understanding of confidence intervals (CIs) and standard error (SE) bars. Authors of journal articles in psychology, behavioral neuroscience, and medicine were invited to visit a Web site where they adjusted a figure until they judged 2 means, with error bars, to be just statistically significantly different (p < .05). Results from 473 respondents suggest that many leading researchers have severe misconceptions about how error bars relate to statistical significance, do not adequately distinguish CIs and SE bars, and do not appreciate the importance of whether the 2 means are independent or come from a repeated measures design. Better guidelines for researchers and less ambiguous graphical conventions are needed before the advantages of CIs for research communication can be realized.

    (http://psycnet.apa.org/journals/met/10/4/389/)
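A minimal sketch of the distinction the abstract is pointing at (assuming SciPy is available; this is not the Belia et al. web task, and the numbers are made up): whether two error bars overlap is not the same question as whether two independent means differ at p < .05.

```python
# Compare "do the bars overlap?" with an actual two-sample t-test.
import numpy as np
from scipy import stats

m1, sd1, n1 = 10.0, 4.0, 30   # group 1: mean, SD, sample size (hypothetical)
m2, sd2, n2 = 12.0, 4.0, 30   # group 2

se1, se2 = sd1 / np.sqrt(n1), sd2 / np.sqrt(n2)

# 95% CI half-widths for each independent mean
ci1 = stats.t.ppf(0.975, n1 - 1) * se1
ci2 = stats.t.ppf(0.975, n2 - 1) * se2

gap = abs(m1 - m2)
print("SE bars overlap:     ", gap < se1 + se2)   # False here
print("95% CI bars overlap: ", gap < ci1 + ci2)   # True here

# The actual significance test for two independent means
t, p = stats.ttest_ind_from_stats(m1, sd1, n1, m2, sd2, n2)
print(f"two-sample t-test: t = {t:.2f}, p = {p:.4f}")  # p ~ 0.057, not significant
```

With these invented numbers the SE bars do not overlap, yet the difference is not significant at the .05 level, which is exactly the kind of misreading the paper reports.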
