Science

The Science That's Never Been Cited (nature.com)

An anonymous reader quotes a report from Nature: One widely repeated estimate, reported in a controversial article in Science in 1990, suggests that more than half of all academic articles remain uncited five years after their publication. Scientists genuinely fret about this issue, says Jevin West, an information scientist at the University of Washington in Seattle who studies large-scale patterns in research literature. After all, citations are widely recognized as a standard measure of academic influence: a marker that work not only has been read, but also has proved useful to later studies. Researchers worry that high rates of uncitedness point to a heap of useless or irrelevant research. In reality, uncited research isn't always useless. What's more, there isn't really that much of it, says Vincent Lariviere, an information scientist at the University of Montreal in Canada.

To get a better handle on this dark and forgotten corner of published research, Nature dug into the figures to find out how many papers actually do go uncited (explore the full data set and methods). It is impossible to know for sure, because citation databases are incomplete. But it's clear that, at least for the core group of 12,000 or so journals in the Web of Science -- a large database owned by Clarivate Analytics in Philadelphia, Pennsylvania -- zero-citation papers are much less prevalent than is widely believed. Web of Science records suggest that fewer than 10% of scientific articles are likely to remain uncited. But the true figure is probably even lower, because large numbers of papers that the database records as uncited have actually been cited somewhere by someone.
"The new figures [...] suggest that in most disciplines, the proportion of papers attracting zero citations levels off between five and ten year after publication, although the proportion is different in each discipline," the report adds. "Of all biomedical-sciences papers published in 2006, just 4% are uncited today; in chemistry, that number is 8% and in physics, it is closer to 11%. In engineering and technology, the uncitedness rate of the 2006 cohort of Web of Science-indexed papers is 24%, much higher than in the natural sciences."
  • Citations are abused (Score:5, Interesting)

    by Anonymous Coward on Friday December 15, 2017 @10:36PM (#55749807)

    One main purpose of citations is to use prior observations and experiments to build the case for a hypothesis that is then tested in the remainder of the paper. The other main purpose is to provide support for portions of the methodology that aren't intuitive. There are other reasons for citations, but these are the main ones.

However, this is frequently abused by reviewers and editors. A comprehensive literature review is often expected at the start of papers, even though it really isn't necessary to support the hypothesis. Reviewers and editors often use this expectation to insist that their own work be cited, raising the profile of their own papers.

A comprehensive literature review shouldn't be necessary at the start of papers, yet it's frequently expected in the peer-review process. Citing prior literature is important, but only to the extent necessary to support the hypothesis that the paper intends to examine.

I'm afraid it's worse. Certain classes of paper, such as meta-analyses, cite other papers only to run statistical analyses on them. The result is a churn of analyses of analyses, with no actual experimentation and no analysis other than statistics on the other papers. It's no longer science, because the underlying hypotheses are not falsifiable.

      I'm afraid that the result has been stunning skew in the results of the meta-analysis by tuning the analysis to the pre-disposed des

      • by Shadow of Eternity ( 795165 ) on Saturday December 16, 2017 @04:00AM (#55750413)

You have no idea how bad this has gotten in the social sciences. These days even the most trivial MA coursework-level paper is expected to have upwards of 50 citations and be several thousand words long. The only real place you have to pad your work out to meet the ever more obscene word-count and citation requirements is your lit review, which has resulted in pretty much every single MA candidate cranking out multi-thousand-word, multi-decade lit reviews as a critical part of every single paper they write.

Something I noticed while completing my own MA was that the further back I went, the shorter papers and bibliographies got, to the point that the original paper that first proposed the "Democratic Peace Theory" was a mere handful of pages and had somewhere around 12 citations. For all people talk of grade inflation, my experience has been the opposite: almost no professor at a university today would be able to pass their own program or classes using their own work from when they originally got their graduate degrees. It's not grades that are inflating, it's expectations.

        It's the bastard spawn of the tenure-journal-complex and the modern idea that we can quantify and over-manage every little thing by just making numbers go up or down.

        • by Anonymous Coward

These days even the most trivial MA coursework-level paper is expected to have upwards of 50 citations and be several thousand words long.

          How is that bad? I have presented undergrad information theory papers that are 13k words and 45 citations just for midterm projects. The publishable novel bits take 4k and 30 citations.

          This is typical and necessary to explain without plagiarism.

Something I noticed while completing my own MA was that the further back I went, the shorter papers and bibliographies got, to the point that the original paper that first proposed the "Democratic Peace Theory" was a mere handful of pages and had somewhere around 12 citations. For all people talk of grade inflation, my experience has been the opposite: almost no professor at a university today would be able to pass their own program or classes using their own work from when they originally got their graduate degrees. It's not grades that are inflating, it's expectations.

This is terribly true. It's really bad in technical classes where professors with PhDs need to be brought up to speed by the students on what they are teaching, on top of the students setting up the lab and doing the work without any assistance from the professor and no fewer expe

Maybe someone needs to write a paper with a statistical analysis of how the length of the lit review relates to the usefulness and citability of a paper...
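A hedged sketch of what that analysis might look like: a plain Pearson correlation between lit-review length and later citation counts. The function is standard; the paired numbers below are invented purely for illustration and show no real finding:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented numbers: lit-review length (words) vs. citations received.
review_words = [800, 1500, 3200, 5000, 7500]
citations = [14, 9, 11, 3, 5]
print(pearson(review_words, citations))  # negative here, by construction
```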

      • by Uecker ( 1842596 ) on Saturday December 16, 2017 @06:06AM (#55750579)

Huh? There is nothing fundamentally wrong with meta-analysis. I do not understand why you think "it's no longer science because the underlying hypotheses are not falsifiable". Most meta-analysis papers I have seen are about hypotheses which are falsifiable.

There are several common difficulties with meta-analysis. Political or social whims can, and do, profoundly skew the data. So do commonplace procedural errors in the experiments being analyzed. So does the tendency of analyzers to discount outlying cases that may contradict the expected outcome as "obvious errors", even excluding them from mention in their meta-analyses.

I'm afraid that meta-analysis is a tool that can be, and often is, misused to confuse correlation with causation.

      • It's most visible in the publications of the "fake" sciences, such as political analysis, sociology, and economics

        FTFY.

    • by slew ( 2918 )

I wonder how the citation numbers would change if you subtract the citations where authors cite their earlier works and the work of others in their own research groups...

Actually, the numbers for Engineering don't surprise me. Many of the IEEE journals I read seem to be filled with reams of research that does not appear to be cite-worthy (too specific, and/or you are left wondering how it got past peer review)... ACM journals are a bit better (by a little). Often it appears that the lead-time to appear in s

      • by Anonymous Coward

Most of the time, a researcher or their group will publish multiple papers that are related. That's because they're funded by the same grant or a follow-up grant from the original. When you publish a paper in a journal, you sign over the copyright to your work. Using your previously published work is then considered plagiarism. If you need to use the prior work in a new paper, the only option is to cite yourself or whoever in your group published that paper. It's a consequence of how journals and copyrights work. I suspect most people who cite themselves aren't really doing it to inflate their citations but because they have to do so.

        • by slew ( 2918 )

          Most of the time, a researcher or their group will publish multiple papers that are related. That's because they're funded by the same grant or a follow-up grant from the original. When you publish a paper in a journal, you sign over the copyright to your work. Using your previously published work is then considered plagiarism. If you need to use the prior work in a new paper, the only option is to cite yourself or whoever in your group published that paper. It's a consequence of how journals and copyrights work. I suspect most people who cite themselves aren't really doing it to inflate their citations but because they have to do so.

Well, if the author (or others in their research group) are citing the paper, I'd argue that whether it is "necessary" or not, they are effectively inflating their citations. One might also rightly argue that if these are the *only* citations of their research, their research effectively falls into the uncited category, since their "necessary" citations push their number above zero (which was the point I was trying to make).
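That "effectively uncited" idea is easy to make concrete: drop every citation where the citing and cited papers share an author, then see which papers have nothing left. A rough sketch under assumed inputs (the paper IDs, author sets, and citation pairs below are all hypothetical):

```python
def effectively_uncited(papers, citations):
    """IDs of papers with no citations from non-overlapping author sets.

    papers: dict mapping paper id -> set of author names (hypothetical)
    citations: iterable of (citing_id, cited_id) pairs
    """
    independent = set()
    for citing, cited in citations:
        if papers[citing].isdisjoint(papers[cited]):
            independent.add(cited)  # at least one non-self citation
    return set(papers) - independent

papers = {
    "A": {"alice", "bob"},
    "B": {"bob"},    # cited only by co-author bob's other paper
    "C": {"carol"},  # cited by A: a genuinely independent citation
}
citations = [("A", "B"), ("A", "C")]
print(effectively_uncited(papers, citations))  # {'A', 'B'} (order may vary)
```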

        • by Anonymous Coward

          WTF? Plagiarism is not about copyright, it is about authorship or inventorship. And those can't be sold away.

          • I thought the same; the former's an academic infraction (can get your degree torn up) and the latter is a civil matter (can cost many dollarpounds).

I wonder how the citation numbers would change if you subtract the citations where authors cite their earlier works and the work of others in their own research groups...

        You also need to subtract "cycles": I cite your papers, you cite mine. Sometimes citation cycles have 3 or more participants: A cites B, B cites C, C cites A.
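Cycles like that are just directed cycles in the citation graph, so a standard depth-first search finds them. A minimal sketch on a made-up three-paper graph matching the A/B/C example above:

```python
def find_cycle(graph):
    """Return one directed cycle in a citation graph, or None.

    graph: dict mapping a paper/group to the papers/groups it cites.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:  # back edge: cycle found
                return stack[stack.index(nxt):] + [nxt]
            if color.get(nxt, WHITE) == WHITE:
                found = dfs(nxt)
                if found:
                    return found
        stack.pop()
        color[node] = BLACK
        return None

    for node in graph:
        if color[node] == WHITE:
            found = dfs(node)
            if found:
                return found
    return None

# A cites B, B cites C, C cites A -- the 3-participant cycle above.
graph = {"A": ["B"], "B": ["C"], "C": ["A"]}
print(find_cycle(graph))  # ['A', 'B', 'C', 'A']
```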

Most papers in my industry fall along the lines of "We tested our proprietary weld filler / spray-on coating against some others, and ours is superior for this limited and specific application". These are more for marketing than scientific advancement, and are not useful for citing (especially if you are a competitor playing the same game!)

There are entities in my field that are "independent" and aren't in the business of actually using the product. Their papers tend to b
    • by Kjella ( 173770 )

A comprehensive literature review shouldn't be necessary at the start of papers, yet it's frequently expected in the peer-review process. Citing prior literature is important, but only to the extent necessary to support the hypothesis that the paper intends to examine.

      I think "support" is too narrow, very often you want to cite papers that are "sideways" to your own and explain how your research is different from theirs, like for example how it's not fully applicable, limitations of the model, choice of methodology, data quality issues and so on. I think it's also important to remember that citations was science's way of linking things together before you had hyperlinks and search engines, it's what set your research in context with other work in the field. It should be

    • Not-Invented-Here is a major cultural component in engineering.

A comprehensive literature review shouldn't be necessary at the start of papers, yet it's frequently expected in the peer review process. Citing prior literature is important, but only to the extent necessary to support the hypothesis that the paper intends to examine.

Nope. It's an important part of academic research, and it serves multiple purposes. It puts the work in perspective for the reader, who isn't necessarily up to speed on that particular area. It demonstrates that you've done your homework when you claim that this work is new and an improvement on previous work (that's why it's called "previous work"), and it helps to place your work: is it a major step, or a small incremental one? None of these are strictly necessary to "support your hypothesis

  • Ob (Score:5, Funny)

    by Anonymous Coward on Friday December 15, 2017 @10:42PM (#55749829)

    Researchers worry that high rates of uncitedness point to a heap of useless or irrelevant research.

    [citation needed]

  • by laughingskeptic ( 1004414 ) on Friday December 15, 2017 @11:28PM (#55749933)
    I was once the director of a university lab. I would expect completely uncited papers to be rare, perhaps the last in a series of useless papers. Most academics cite their own papers and the papers of a small circle of peers. The citation web has to be full of these self-scratching cliques. The papers that are cited across multiple cliques are the real influencers. These are much less common. Rather than debunking the uncited myth, they should be debunking the myth that cited papers are influential. Most are not.
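One crude way to quantify "cited across multiple cliques" is to credit a paper only for citations arriving from outside its own group. A sketch assuming each paper carries a hypothetical group label (research group, clique, however you partition the citation web); the labels and pairs are invented for illustration:

```python
from collections import Counter

def cross_group_citations(group_of, citations):
    """Per cited paper, count citations arriving from a different group.

    group_of: dict mapping paper id -> group label (hypothetical)
    citations: iterable of (citing_id, cited_id) pairs
    """
    counts = Counter()
    for citing, cited in citations:
        if group_of[citing] != group_of[cited]:
            counts[cited] += 1
    return counts

# Invented labels and pairs, purely for illustration.
group_of = {"p1": "lab_x", "p2": "lab_x", "p3": "lab_y", "p4": "lab_z"}
citations = [("p2", "p1"),  # same lab: ignored
             ("p3", "p1"),  # cross-group: counted
             ("p4", "p1"),  # cross-group: counted
             ("p1", "p3")]  # cross-group: counted
print(cross_group_citations(group_of, citations))
# Counter({'p1': 2, 'p3': 1})
```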
    • by Anonymous Coward

      As I posted elsewhere, you often have to cite yourself. It's reasonable that you need to refer to prior work when building upon it. When you publish a paper, you're signing over the copyright to that journal. If you simply go and reuse that, it's considered plagiarism. For that reason, if you need to reference work you've published previously, you have to cite yourself.

    • by Brett Buck ( 811747 ) on Saturday December 16, 2017 @12:01AM (#55750019)

It's perfectly clear why this is. You *have* to publish something, whether it has wider merit or not. So you end up with a large number of probably correct and probably original papers that nonetheless don't advance the state of the art and don't get any cites. There's a very strong disincentive to wait until you have something genuinely unique and innovative.

      • by Anonymous Coward on Saturday December 16, 2017 @12:15AM (#55750049)

        Except the modern academic is, in many ways, rated on the number of publications. So rather than wait until a new paper, with some result that is unique and innovative, is written, you get a long series of papers.
        Here's a paper describing my idea. Here's a paper (citing the first) describing a hypothesis that might validate my idea. Here's a paper (citing the second) that describes an experiment that would test the hypothesis. Here's a paper...

        And then if you don't get the result you wanted (and you don't p-hack to get it) you can write a paper describing what went wrong, and restart the cycle again.

      • Conferences have a similar problem. Most labs require attendees to present something. So if you want to go to a conference you need to put together something to show - even if you know it isn't very interesting.

It's a classic problem of using the wrong metric of success.

      • by Uecker ( 1842596 )

I don't agree with the sentiment that only genuinely unique and innovative things should be published. Science is incremental, and less important reports are useful for various reasons. For example, as many have pointed out, reporting even unsuccessful or failed experiments is useful. Publishing intermediate results also helps prevent duplicate work, or provides technical details helpful to others. For a scientist, the primary output to society is the publications. A scientist w

    • by Anonymous Coward

      You have to realize you are in some niche of science and can't speak for the whole. For me, I'm not really in "science" but engineering. Most of my papers are read by folks in industry, not in academia, and as a consequence my citation scores are quite low. However as an engineer I don't care since this isn't a metric I am judged by. Some very important papers go uncited and unnoticed for years before their relevance is observed. For example, read this article: https://www.scientificamerican... [scientificamerican.com]

      I have also w

Yes. I've been in the field for 30 years. Yes.

  • by lkcl ( 517947 ) <lkcl@lkcl.net> on Friday December 15, 2017 @11:36PM (#55749963) Homepage

    well... if the research papers weren't in PAYWALLED journals then it would be possible for people to get at them and read them, wouldn't it? *sigh*...

    • by tepples ( 727027 )

      The featured article covers that, though briefly:

      (It’s possible that a drive to make articles open-access is also helping.)

    • by Agripa ( 139780 )

      well... if the research papers weren't in PAYWALLED journals then it would be possible for people to get at them and read them, wouldn't it? *sigh*...

      This happens all the time with me. I either follow citations to papers which are unavailable due to being locked behind an exclusive paywall (or for other reasons are just unavailable) or exclude paywalled citations in my own articles in favor of ones which are at least nominally available through other means. Sometimes this results in me reinventing the wheel so to speak but that is the way it goes.

      If the authors of those papers wanted them cited, then they would have made them more easily available. My

  • by Anonymous Coward

    Proof: Suppose, by way of contradiction, that uncited papers exist. Then one could be cited as an example. But then it would be cited, contrary to the assumption that it is not. This is a contradiction. Therefore there are no uncited papers.

    • by vux984 ( 928602 )

      Nice try. Here's an equivalent formulation...

Proof: Suppose, by way of contradiction, that untouched gummybears exist. Then one could be picked up as an example. But then it would have been touched, contrary to the assumption that it is not. This is a contradiction. Therefore there are no untouched gummybears.

The problem with the proof? There is no contradiction. It was untouched. Then we touched it. That is not a contradiction, that is a state change. For it to be a contradiction it needs to remain untouched

  • by Nemyst ( 1383049 ) on Saturday December 16, 2017 @12:29AM (#55750101) Homepage
I think the problem of uncited papers isn't that big of a deal: it's quite rare, and it doesn't necessarily mean that the paper was entirely useless (e.g. industry will often use academic papers but rarely cites them, since industry does not publish, or does so very rarely).

    What I find much more concerning is that modern peer-reviewed journals only care about successful hypotheses. Doing something interesting isn't enough, it also has to be demonstrably better, stronger, faster or something else along those lines. Failure is brushed aside and quickly forgotten, even though having access to all of the failed attempts of thousands of scientists would be an absolute treasure trove.

    How many hours, days, weeks of work could be avoided by knowing that someone else has already traveled down your current path and figured out that it wasn't working? How many ideas have been lost due to a minor issue that the original would-be author didn't catch? How much more efficient would our science be if we also documented legitimate failure (as opposed to failure from sloppiness, outright bad ideas, and so on)?
    • by gweihir ( 88907 )

Indeed. That is an utter fail. Well-founded and well-reasoned negative results are immensely important. At the same time, they are almost impossible to publish. This is damaging science as a whole to a large degree, because everybody has to repeat the failures.

    • Try to publish a paper disproving someone else's hypothesis. I have two of that kind - both were largely dismissed, despite one of them being in a prominent journal.

  • by gweihir ( 88907 ) on Saturday December 16, 2017 @02:24AM (#55750283)

    An entirely predictable result from "publish or perish". People publish a lot of irrelevant, marginally incremental and generally boring and worthless papers.

My best article has only one citation. I have no idea why this is the case, except maybe because it's a very multidisciplinary work in which all the disciplines tackle a complex scientific problem. It's an article that's difficult to read for someone versed only in his/her narrow field of interest.

What percentage of articles contain fallacies or improper experimentation?

Citations only mean given research was used in further scientific research. But researchers aren't the only people who read this stuff. Engineers do too - and they don't publish articles, they make projects - they directly utilize results to create useful real-world creations. Never-ending tables of material properties, new chemical processes, new methods of simulation and analysis for implementation in software, this all comes from white papers. Engineers do a lot of their own research, and use a lot of methods t
