The Science That's Never Been Cited (nature.com) 91
An anonymous reader quotes a report from Nature: One widely repeated estimate, reported in a controversial article in Science in 1990, suggests that more than half of all academic articles remain uncited five years after their publication. Scientists genuinely fret about this issue, says Jevin West, an information scientist at the University of Washington in Seattle who studies large-scale patterns in research literature. After all, citations are widely recognized as a standard measure of academic influence: a marker that work not only has been read, but also has proved useful to later studies. Researchers worry that high rates of uncitedness point to a heap of useless or irrelevant research. In reality, uncited research isn't always useless. What's more, there isn't really that much of it, says Vincent Lariviere, an information scientist at the University of Montreal in Canada.
To get a better handle on this dark and forgotten corner of published research, Nature dug into the figures to find out how many papers actually do go uncited (explore the full data set and methods). It is impossible to know for sure, because citation databases are incomplete. But it's clear that, at least for the core group of 12,000 or so journals in the Web of Science -- a large database owned by Clarivate Analytics in Philadelphia, Pennsylvania -- zero-citation papers are much less prevalent than is widely believed. Web of Science records suggest that fewer than 10% of scientific articles are likely to remain uncited. But the true figure is probably even lower, because large numbers of papers that the database records as uncited have actually been cited somewhere by someone. "The new figures [...] suggest that in most disciplines, the proportion of papers attracting zero citations levels off between five and ten years after publication, although the proportion is different in each discipline," the report adds. "Of all biomedical-sciences papers published in 2006, just 4% are uncited today; in chemistry, that number is 8% and in physics, it is closer to 11%. In engineering and technology, the uncitedness rate of the 2006 cohort of Web of Science-indexed papers is 24%, much higher than in the natural sciences."
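For readers who want to poke at numbers like these themselves, here is a minimal sketch of the cohort calculation, assuming a hypothetical CSV of papers with per-paper citation counts. The file and column names are illustrative placeholders, not the actual Nature or Web of Science data or methods:

```python
# A minimal sketch of how cohort uncitedness rates like those above might be
# computed. The input file and column names are hypothetical placeholders,
# not Nature's or Web of Science's actual data or methods.
import pandas as pd

papers = pd.read_csv("papers.csv")  # columns: id, discipline, year, citation_count

cohort = papers[papers["year"] == 2006]
uncited_rate = (
    cohort.assign(uncited=cohort["citation_count"] == 0)
          .groupby("discipline")["uncited"]
          .mean()          # fraction of the cohort with zero recorded citations
          .sort_values()
)
print(uncited_rate.mul(100).round(1))  # percent uncited per discipline
```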
Citations are abused (Score:5, Interesting)
One main purpose of citations is to use prior observations and experiments to build the case for a hypothesis that is then tested in the remainder of the paper. The other main purpose is to provide support for portions of the methodology that aren't intuitive. There are other reasons for citations, but these are the main ones.
However, this is frequently abused by reviewers and editors. A comprehensive literature review is often expected at the start of papers, which really isn't necessary to support the hypothesis. Many times this is used by reviewers and editors to insist that their own works be cited and increase the profile of their own papers.
A comprehensive literature review shouldn't be necessary at the start of papers, yet it's frequently expected in the peer review process. Citing prior literature is important, but just to the extent necessary to support the hypothesis that the paper intends to examine.
Re: (Score:3)
I'm afraid it's worse. Certain classes of paper, such as meta-analyses, cite other papers only to run statistical analyses on them. The result is a churn of analyses of analyses, with no actual experimentation, just statistics performed on other papers. It's no longer science because the underlying hypotheses are not falsifiable.
I'm afraid that the result has been stunning skew in the results of the meta-analysis by tuning the analysis to the pre-disposed desired outcome.
Re:Citations are abused (Score:5, Interesting)
You have no idea how bad this has gotten in the social sciences. These days even the most trivial MA coursework level paper is expected to have upwards of 50 citations and be several thousand words long. The only real place you have to pad your work out to meet the ever more obscene wordcount and citation requirements is your lit review, which has resulted in pretty much every single MA candidate cranking out multi-thousand word multi-decade lit reviews as a critical part of every single paper they write.
Something I noticed while completing my own MA was that the further back I went the shorter papers and bibliographies got, to the point the original paper that first discovered the "Democratic Peace Theory" was a mere handful of pages and had somewhere around 12 citations. For all people talk of grade inflation my experience has been the opposite, almost no professor at a university today would be able to pass their own program or classes using their own work from when they originally got their graduate degrees. It's not grades that are inflating, it's expectations.
It's the bastard spawn of the tenure-journal-complex and the modern idea that we can quantify and over-manage every little thing by just making numbers go up or down.
Re: (Score:1)
These days even the most trivial MA coursework level paper is expected to have upwards of 50 citations and be several thousand words long.
How is that bad? I have presented undergrad information theory papers that are 13k words and 45 citations just for midterm projects. The publishable novel bits take 4k and 30 citations.
This is typical and necessary to explain without plagiarism.
Something I noticed while completing my own MA was that the further back I went the shorter papers and bibliographies got, to the point the original paper that first discovered the "Democratic Peace Theory" was a mere handful of pages and had somewhere around 12 citations. For all people talk of grade inflation my experience has been the opposite, almost no professor at a university today would be able to pass their own program or classes using their own work from when they originally got their graduate degrees. It's not grades that are inflating, it's expectations.
This is terribly true. It's really bad in technical classes where students have to bring the PhD-holding professor up to speed on the material they're being taught, on top of setting up the lab and doing the work without any assistance from the professor, with no fewer expectations placed on them.
Re: (Score:2)
Maybe someone needs to write a paper with a statistical analysis of how the length of the lit review relates to the usefulness and citability of a paper...
Re: (Score:2)
I can do it with two words: "Inverse Correlation".
Re:Citations are abused (Score:4, Insightful)
Huh? There is nothing fundamentally wrong with meta analysis. I do not understand why you think "it's no longer science because the underlying hypotheses are not falsifiable". Most meta-analysis papers I have seen are about hypotheses which are falsifiable.
Re: (Score:2)
I'm afraid this is precisely backwards. The analysts explore the data with a hypothesis in mind. Discovering that your hypothesis is mistaken can lead to a paper, but a much less _publishable_ one. The analysts, and especially the students involved, have strong motivations to skew their findings; it's a risk that legitimate analysts try to remain aware of as a matter of course. It's a problem in all fields, but the purely mathematical nature of the analysis makes testing the hypothesis more difficult.
Re: (Score:2)
There are several common difficulties with meta analysis. Political or social whims can, and do, profoundly skew the data. So do commonplace procedural errors in the experiments that are being analyzed. So does the tendency of analyzers to discount outlying cases that may contradict the expected outcome as "obvious errors", even excluding them from mention in their meta-analyses.
I'm afraid that meta analysis is a tool that can be and often is misused to confuse correlation with causation.
Re: (Score:2)
It's most visible in the publications of the "fake" sciences, such as political analysis, sociology, and economics
FTFY.
Re: (Score:3)
I wonder how the citation numbers would change if you subtract the citations with authors citing their earlier works and work of others in their own research groups...
Actually the numbers for Engineering don't surprise me. Many of the IEEE journals I read seem to be filled with reams of research that do not appear to be cite-worthy (too specific, and/or you are left wondering how it got past peer review)... ACM journals are a bit better (by a little). Often it appears that the lead-time to appear in s
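One rough way to answer the self-citation question above, as a minimal sketch with made-up data structures (no real citation-database schema is assumed): drop any citation where the citing and cited papers share an author, then recount.

```python
# Sketch: recount citations while excluding self-citations, defined here as
# any citation whose citing and cited papers share at least one author.
# All identifiers below are invented for illustration.
from collections import defaultdict

# paper id -> set of author identifiers
authors = {
    "p1": {"alice", "bob"},
    "p2": {"bob"},
    "p3": {"carol"},
}
# (citing paper, cited paper) edges
citations = [("p2", "p1"), ("p3", "p1")]

counts = defaultdict(int)
for citing, cited in citations:
    if authors[citing] & authors[cited]:
        continue  # shared author: treat as a self-citation and skip it
    counts[cited] += 1

print(dict(counts))  # {'p1': 1} -- only the citation from p3 survives
```

Extending the shared-author test to whole research groups would also catch the "work of others in their own research groups" case.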
Re: Citations are abused (Score:1)
Most of the time, a researcher or their group will publish multiple papers that are related. That's because they're funded by the same grant or a follow-up grant from the original. When you publish a paper in a journal, you sign over the copyright to your work. Using your previously published work is then considered plagiarism. If you need to use the prior work in a new paper, the only option is to cite yourself or whoever in your group published that paper. It's a consequence of how journals and copyrights work. I suspect most people who cite themselves aren't really doing it to inflate their citations but because they have to do so.
Re: (Score:2)
Most of the time, a researcher or their group will publish multiple papers that are related. That's because they're funded by the same grant or a follow-up grant from the original. When you publish a paper in a journal, you sign over the copyright to your work. Using your previously published work is then considered plagiarism. If you need to use the prior work in a new paper, the only option is to cite yourself or whoever in your group published that paper. It's a consequence of how journals and copyrights work. I suspect most people who cite themselves aren't really doing it to inflate their citations but because they have to do so.
Well, if the author (or others in their research group) are citing the paper, I'd argue that, "necessary" or not, they are effectively inflating their citations. One might also rightly argue that if these are the *only* citations of their research, the work effectively falls into the uncited category, even though the "necessary" self-citations push its count above zero (which was the point I was trying to make).
Re: Citations are abused (Score:1)
WTF? Plagiarism is not about copyright, it is about authorship or inventorship. And those can't be sold away.
Re: (Score:2)
I thought the same; the former's an academic infraction (can get your degree torn up) and the latter is a civil matter (can cost many dollarpounds).
Re: (Score:2)
I wonder how the citation numbers would change if you subtract the citations with authors citing their earlier works and work of others in their own research groups...
You also need to subtract "cycles": I cite your papers, you cite mine. Sometimes citation cycles have 3 or more participants: A cites B, B cites C, C cites A.
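Spotting such cycles is a small graph exercise. Here is a minimal sketch using networkx on an author-level citation graph; the edges are invented for illustration, and the graph is modeled at the author level because paper-level citation graphs rarely contain cycles (citations mostly point backward in time):

```python
# Sketch: find citation cycles (A cites B, B cites A, or longer loops) in a
# directed author-level citation graph. The edge list is illustrative.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("A", "B"), ("B", "A"),              # two-author mutual citation
    ("C", "D"), ("D", "E"), ("E", "C"),  # three-author cycle
    ("F", "A"),                          # ordinary one-way citation
])

for cycle in nx.simple_cycles(g):
    print(cycle)  # e.g. ['B', 'A'] and ['E', 'C', 'D']
```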
Re: Citations are abused (Score:2)
There are entities in my field that are "independent" and aren't in the business of actually using the product. Their papers tend to b
Re: (Score:2)
A comprehensive literature review shouldn't be necessary at the start of papers, yet it's frequently expected in the peer review process. Citing prior literature is important, but just to the extent necessary to support the hypothesis that the paper intends to examine.
I think "support" is too narrow, very often you want to cite papers that are "sideways" to your own and explain how your research is different from theirs, like for example how it's not fully applicable, limitations of the model, choice of methodology, data quality issues and so on. I think it's also important to remember that citations was science's way of linking things together before you had hyperlinks and search engines, it's what set your research in context with other work in the field. It should be
Engineers are frustrated inventors (Score:2)
Not-Invented-Here is a major cultural component in engineering.
Re: (Score:2)
A comprehensive literature review shouldn't be necessary at the start of papers, yet it's frequently expected in the peer review process. Citing prior literature is important, but just to the extent necessary to support the hypothesis that the paper intends to examine.
Nope. It's an important part of academic research. It serves multiple purposes. It puts the work in perspective for the reader, who isn't necessarily up to speed on that particular area. It demonstrates that you've done your homework when you claim that this work is new and an improvement on previous work (that's why it's called "previous work"), and it helps to situate your work; i.e., is it a major step, or a small incremental one? None of these are strictly necessary to "support your hypothesis
Ob (Score:5, Funny)
[citation needed]
Re: (Score:2)
This.
Also classified research papers. They may in fact be cited, but we will never know.
Citation cliques shouldn't be counted (Score:5, Interesting)
Re: Citation cliques shouldn't be counted (Score:1)
As I posted elsewhere, you often have to cite yourself. It's reasonable that you need to refer to prior work when building upon it. When you publish a paper, you're signing over the copyright to that journal. If you simply go and reuse that, it's considered plagiarism. For that reason, if you need to reference work you've published previously, you have to cite yourself.
Re: (Score:1)
You obviously don't know how peer reviewed journals work. If you don't sign over your copyright, you won't get published. If you don't publish your results, you're not going to be able to get more grants, and that certainly won't help you advance in academia.
Re:Citation cliques shouldn't be counted (Score:5, Insightful)
It's perfectly clear why this is. You *have* to publish something, whether it has wider merit or not. So you end up with a large number of probably correct and probably original papers that nonetheless don't advance the state of the art and don't get any cites. There's a very strong disincentive to wait until you have something genuinely unique and innovative.
Re:Citation cliques shouldn't be counted (Score:4, Interesting)
Except the modern academic is, in many ways, rated on the number of publications. So rather than wait until a new paper, with some result that is unique and innovative, is written, you get a long series of papers.
Here's a paper describing my idea. Here's a paper (citing the first) describing a hypothesis that might validate my idea. Here's a paper (citing the second) that describes an experiment that would test the hypothesis. Here's a paper...
And then if you don't get the result you wanted (and you don't p-hack to get it) you can write a paper describing what went wrong, and restart the cycle again.
Re: (Score:2)
Conferences have a similar problem. Most labs require attendees to present something. So if you want to go to a conference you need to put together something to show - even if you know it isn't very interesting.
It's a classic problem of using the wrong metric of success.
Visa denial (Score:3)
The conferences fight this by requiring the author to commit to presenting in person themselves.
Which breaks when nationalist governments get elected and enact travel policies making presentation in person impractical or impossible.
Re: (Score:2)
I don't agree with the sentiment that only genuinely unique and innovative things should be published. Science is incremental, and less important reports are useful for a variety of reasons. For example, as many have pointed out, reporting even unsuccessful or failed experiments is useful. Publishing intermediate results also helps prevent duplicate work and provides technical details helpful to others. For a scientist, the primary output to society is the publications. A scientist w
Re: (Score:1)
You have to realize you are in some niche of science and can't speak for the whole. For me, I'm not really in "science" but engineering. Most of my papers are read by folks in industry, not in academia, and as a consequence my citation scores are quite low. However as an engineer I don't care since this isn't a metric I am judged by. Some very important papers go uncited and unnoticed for years before their relevance is observed. For example, read this article: https://www.scientificamerican... [scientificamerican.com]
I have also w
Re: Citation cliques shouldn't be counted (Score:2)
Yes. Has been in the field for 30 years. Yes.
paywalled (Score:3)
well... if the research papers weren't in PAYWALLED journals then it would be possible for people to get at them and read them, wouldn't it? *sigh*...
Re: (Score:2)
The featured article covers that, though briefly.
Re: (Score:2)
well... if the research papers weren't in PAYWALLED journals then it would be possible for people to get at them and read them, wouldn't it? *sigh*...
This happens all the time with me. I either follow citations to papers which are unavailable due to being locked behind an exclusive paywall (or for other reasons are just unavailable) or exclude paywalled citations in my own articles in favor of ones which are at least nominally available through other means. Sometimes this results in me reinventing the wheel so to speak but that is the way it goes.
If the authors of those papers wanted them cited, then they would have made them more easily available. My
There are no uncited papers (Score:1)
Proof: Suppose, by way of contradiction, that uncited papers exist. Then one could be cited as an example. But then it would be cited, contrary to the assumption that it is not. This is a contradiction. Therefore there are no uncited papers.
Re: (Score:2)
Nice try. Here's an equivalent formulation...
Proof: Suppose, by way of contradiction, that untouched gummybears exist. Then one could be picked up as an example. But then it would have been touched, contrary to the assumption that it is not. This is a contradiction. Therefore there are no untouched gummybears.
The problem with the proof? There is no contradiction. It was untouched. Then we touched it. That is not a contradiction, that is a state change. For it to be a contradiction it needs to remain untouched.
Re: (Score:2)
But ... but ...
I did not touch your gummybear! I swear!
Re: (Score:3)
Oh my god, I feel another #metoo coming...
Re: (Score:2)
Schrodinger's gummybear?
More worrisome is science that isn't published (Score:5, Insightful)
What I find much more concerning is that modern peer-reviewed journals only care about successful hypotheses. Doing something interesting isn't enough, it also has to be demonstrably better, stronger, faster or something else along those lines. Failure is brushed aside and quickly forgotten, even though having access to all of the failed attempts of thousands of scientists would be an absolute treasure trove.
How many hours, days, weeks of work could be avoided by knowing that someone else has already traveled down your current path and figured out that it wasn't working? How many ideas have been lost due to a minor issue that the original would-be author didn't catch? How much more efficient would our science be if we also documented legitimate failure (as opposed to failure from sloppiness, outright bad ideas, and so on)?
Re: (Score:2)
Indeed. That is an utter fail. Well founded and reasoned negative results are immensely important. At the same time, they are almost impossible to publish. This is damaging science as a whole to a large degree, because everybody has to repeat the failures.
Cite negative results just before methodology (Score:3)
No one is going to write about how a paper reporting a negative result spared them from hours of wasted time.
I disagree with this. The right place to cite a negative result, as I see it, is at the end of the literature review, just before the methodology. This way, the cited negative result helps to justify the choice of one methodology over another seen not to work.
Re: (Score:2)
Define "published". Is distribution to the public through non-peer-reviewed channels such as arXiv considered "publication"? 17 USC 101 says yes [cornell.edu].
In any case, a culture of citing negative results, even if said results are not peer-reviewed, would increase the measurable impact of negative results. This would in turn encourage authors to put more effort into "Don't Bother With This" articles showing significant negative results and journals to start accepting them more often to get their impact factors up.
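For reference, the two-year journal impact factor being gamed here is conventionally defined as:

\[
\mathrm{IF}_y \;=\; \frac{\text{citations received in year } y \text{ by items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2}
\]

so a journal's incentive to accept negative-result articles hinges entirely on how quickly it expects them to be cited.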
Re: (Score:2)
While I agree, this has the limit that you also need some positive results. How do you publish an important negative result when you have nothing else?
Re: (Score:2)
Would multiple related negative results become a "publishable unit"? Pitch it as the scholarly counterpart to a listicle: "Three techniques for XYZ that failed".
Re: (Score:2)
You may be able to sell that as a "survey" paper, but it would probably still be difficult to get it published.
Re: (Score:2)
It depends on how you define "published". It may take publication in less formal channels to bootstrap a culture of citing negative results. See my reply to Anonymous Coward [slashdot.org].
Re: (Score:2)
At that time, publications were judged on importance and soundness. Not this "all must be positive" bullshit that currently rules scientific publication.
Re: (Score:2)
Indeed. These are utterly perverted incentives.
Re: More worrisome is science that isn't published (Score:2)
Try to publish a paper disproving someone else's hypothesis. I have two of that kind - both were largely dismissed, despite one of them being in a prominent journal.
Re: (Score:2)
Nah. Loathing is way more emotional input than she warrants.
Most papers are just not very good (Score:3)
An entirely predictable result from "publish or perish". People publish a lot of irrelevant, marginally incremental and generally boring and worthless papers.
Today's science is random (Score:2)
My best article has only one citation. I have no idea why, except maybe because it's a very multidisciplinary work in which several disciplines are brought together to tackle a complex scientific problem. It's a difficult article to read for anyone versed only in their own narrow field of interest.
could be a good thing (Score:2)
what percentage of articles contain fallacies or improper experimentation?
Engineers don't cite. (Score:2)
Citations only mean given research was used in further scientific research. But researchers aren't the only people who read this stuff. Engineers do too - and they don't publish articles, they make projects - they directly utilize results to create useful real-world things. Never-ending tables of material properties, new chemical processes, new methods of simulation and analysis for implementation in software: this all comes from white papers. Engineers do a lot of their own research, and use a lot of methods t