Scientific Study Finds There Are Too Many Scientific Studies
HughPickens.com writes: Chris Matyszczyk reports at Cnet that a new scientific study concludes there are too many scientific studies — scientists simply can't keep track of all the studies in their field. The paper, titled "Attention Decay in Science," looked at all publications (articles and reviews) written in English through the end of 2010 within the Thomson Reuters (TR) Web of Science database. For each publication the researchers extracted its year of publication, the subject category of the journal in which it was published, and the citations to that publication. The 'decay' they investigated is how quickly a piece of research is discarded, measured by tracking the initial publication, the peak in its popularity and, ultimately, its disappearance from citations in subsequent publications.
"Nowadays papers are forgotten more quickly. Attention, measured by the number and lifetime of citations, is the main currency of the scientific community, and along with other forms of recognition forms the basis for promotions and the reputation of scientists," says the study. "Typically, the citation rate of a paper increases up to a few years after its publication, reaches a peak and then decreases rapidly. This decay can be described by an exponential or a power law behavior, as in ultradiffusive processes, with exponential fitting better than power law for the majority of cases (PDF). The decay is also becoming faster over the years, signaling that nowadays papers are forgotten more quickly." Matyszczyk says, "If publication has become too easy, there will be more and more of it."
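The quoted decay models can be sketched numerically. Below is a minimal pure-Python illustration, using made-up yearly citation counts (not the study's data), of deciding whether exponential decay or a power law fits a paper's citation history better: both become straight lines after a log transform, so a simple least-squares fit suffices.

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def sse(xs, ys, a, b):
    """Sum of squared residuals for the fit y = a + b*x."""
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

# Hypothetical citation counts per year since publication
# (illustrative numbers only, decaying roughly exponentially).
years = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
cites = [40, 33, 27, 22, 18, 15, 12, 10, 8, 7]
logc = [math.log(c) for c in cites]

# Exponential decay c(t) = C*exp(-k*t) is linear in (t, log c);
# a power law c(t) = C*t^(-k) is linear in (log t, log c).
a_exp, b_exp = linfit(years, logc)
logt = [math.log(t) for t in years]
a_pow, b_pow = linfit(logt, logc)

err_exp = sse(years, logc, a_exp, b_exp)
err_pow = sse(logt, logc, a_pow, b_pow)
print("exponential fit error:", round(err_exp, 4))
print("power-law fit error:  ", round(err_pow, 4))
```

For this synthetic series the exponential fit wins, matching the study's claim that exponential fits better in the majority of cases; real analyses would of course use the actual Web of Science citation histories.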
"Publish or die" killed the science star (Score:5, Insightful)
Re:"Publish or die" killed the science star (Score:4, Insightful)
Or maybe the fact that there are more scientists publishing today than there were 20 years ago.
The article, not the study, completely ignores one fact: the world population has tripled in 70 years. More papers being published is a result of more people existing. While not the sole reason, it is something to remember.
Or even more... (Score:2)
Having so many publishing venues available right now, with (thankfully) every day more of them available under open access licensing schemes, we can get to much more research in our field.
That, however, means that when I start reading on a subject related to my area of study, there are too many documents fighting for my attention. And I will undoubtedly miss many among them, just because of sheer probability.
Of course, the same will happen to my published works: They will no longer be _so_ unique, they will
Re:Or even more... (Score:5, Funny)
Re: (Score:2)
Or maybe the fact that there are more scientists publishing today than there were 20 years ago.
Add to that the expansion of science. The more we know the more there is to study which would naturally produce more studies.
Re: (Score:3)
And I'll add to that science is progressing faster than in the past. Computers help speed up analysis. More researchers help speed the process.
Regarding cites, as science progresses the people getting the cites now are the ones who cited the earlier paper before. It would be interesting to see a cite tree to see how people who cited you are getting cited and so on. That might be the real measure of the strength of a study.
(Sorry to reply to myself but I had to add that.)
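The cite-tree idea above can be sketched as a graph traversal: from a given paper, walk "who cited me" edges outward and count every paper reached, so transitive influence is measured rather than raw citation counts. A minimal sketch follows; the graph, paper names, and the `downstream_citations` helper are all hypothetical.

```python
from collections import deque

def downstream_citations(graph, paper):
    """Count papers that cite `paper` directly or transitively.

    `graph` maps each paper to the list of papers citing it.
    Breadth-first search ensures each citer is counted once,
    even when it is reachable along several citation paths.
    """
    seen = set()
    queue = deque([paper])
    while queue:
        current = queue.popleft()
        for citer in graph.get(current, []):
            if citer not in seen:
                seen.add(citer)
                queue.append(citer)
    return len(seen)

# Toy citation graph: A is cited by B and C; B by D; C by D and E.
citers = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
}
print(downstream_citations(citers, "A"))  # prints 4: D counted once
```

A real "strength of a study" metric would need the full citation database and probably some damping (as in PageRank), but the traversal above is the core of it.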
Re: (Score:2)
It does make me wonder if we're hitting some sort of plateau where it's just too hard to be a generalist in any field anymore. Like, is physics too specialized now that we won't see another Einstein
Re:"Publish or die" killed the science star (Score:5, Insightful)
And even worse: publish positive results or die. As a consequence, failed experiments get repeated all the time, because nobody else knew they failed, and there are strong incentives to lie or at least overstate success. The root cause, IMO, is the bean-counters that allocate funding. They do not understand that science is exploration, that mostly it will fail, and that well-documented failure is just as important as success and does not in any way reflect negatively on the scientists involved. But the bean-counters only want to see "success", and by that they make it much, much harder to obtain.
Scientific culture needs to re-invent itself. As it is, it is mostly a problem and does not benefit society much anymore.
Re: (Score:3)
19th century system to a 21st century world.
Science today is far more complex than it was a hundred years ago. Back then it was easy to become a superstar scientist: experiment with a few hundred dollars of equipment, find a new principle, publish it, and you are big news.
Most of the easy stuff has been found. We get some rare finds, such as the discovery of graphene, but most of today's work involves expensive equipment and needs larger teams of scientists. That publish-or-perish methodology is antiquated.
Re: (Score:3)
19th century system to a 21st century world. Science today is far more complex than it was a hundred years ago. Back then it was easy to become a superstar scientist: experiment with a few hundred dollars of equipment, find a new principle, publish it, and you are big news. Most of the easy stuff has been found. We get some rare finds, such as the discovery of graphene, but most of today's work involves expensive equipment and needs larger teams of scientists. That publish-or-perish methodology is antiquated. The better approach would be open and accessible sharing of data and results in real time, where more people can work on your work in progress, and less trying to be the Mr. Know-it-all scientist who will get the Nobel prize for stumbling on the best answer.
There is plenty of easy science still to be done in taboo subjects. The possibilities for illegal drugs alone are huge. Can't get funding? Crowdfund it. There are plenty of people who will contribute to good science in these areas, like this one [walacea.com], which essentially is just putting people on LSD in an fMRI machine and looking at the results. I donated some money and it looks like 1279 other people did too. They are currently at 177% of their funding goal with 34 days remaining.
Right now, there are
Re: (Score:2)
Let's take politics out of the issue, because there will always be people who will fight the science on both sides of the spectrum. Liberals have a history of not believing science that points out that something is actually safe, because their nature is to try to change things. Conservatives have a history of not believing science that points out that something is dangerous, because it is their nature to keep things the same.
But to the point when we think scientist we think of a person who is master
Too many studies to keep track of? (Score:2)
The solution is simple. Throw out studies that sound "too meta".
Re:Too many studies to keep track of? (Score:5, Insightful)
The solution is simple. Throw out studies that sound "too meta".
Another solution would be to shoot idiotic journalists that misrepresent what studies say. The actual study does not say there are "too many" studies. What it says is that, since there are more studies, individual studies are cited less frequently, and may be read by fewer people. But nobody expects every scientist to read every paper published in their field. I probably read less than 1% of the papers published in my field, but if there is a specific topic I need to research, I often can't find enough papers that focus on what I need. So, from my point of view, there aren't enough studies.
Also, there are too many books published. Proof: Amazon lists over a million titles, and there is no way that one person can read them all.
Re: (Score:2)
If only we had the technology to be able to search the available research for specific items of interest, so we wouldn't have to rely upon poorly-written study titles and could narrow down the available research to the items that apply to
Re: (Score:2, Insightful)
If only we had the technology to be able to search the available research for specific items of interest [...]
Actually, the technology is here; the problem is the paywalls. Probably no scientific institution in the world has access to all the journals that cover the institution's relevant fields.
Big problem.
Re: (Score:2)
>the technology is here, the problem is paywalls.
Whilst both indexing and accurate bibliographic citation behind those paywalls have improved, for fields of study that are way off the mainstream the only way to ensure that all relevant articles from a journal are correctly indexed for that non-mainstream field is to go through each article, in each volume of the journal, doing the appropriate indexing yourself.
> Probably No scientific institution in the world has access to all the journals t
Re: (Score:2)
I feel like this is indicative of the internet age, rather than isolated to academia, and the solution has evolved to become a penchant for sorting through the noise.
Perhaps true insight is to be realized by a more talented eye for sifting through the chaff
Re: (Score:2)
If only we had the technology to be able to search the available research for specific items of interest, so we wouldn't have to rely upon poorly-written study titles and could narrow down the available research to the items that apply to our own narrow subject of interest.
Ironically, I used to do research in this very area. It's a little easier in the biomedical field because most papers are tagged in ontologies like MeSH and ICD, but if you're trying to find the latest research on algorithms to solve a problem that you have, you're pretty much screwed.
Re: (Score:1)
"I probably read less than 1% of the papers published in my field, but if there is a specific topic I need to research, I often can't find enough papers that focus on what I need. So, from my point of view, there aren't enough studies."
Exactly. As a biomedical engineer I don't read (and don't want to read) every single thing that is published in the biomedical engineering field (which can be anything from drug infusion devices, dumb implants, smart implants, prosthetics, clinical devices etc). But when I'm
Re: (Score:3)
I probably read less than 1% of the papers published in my field, but if there is a specific topic I need to research, I often can't find enough papers that focus on what I need. So, from my point of view, there aren't enough studies.
I took a writing class where I had to write 'scientific' papers. On my chosen topic, admittedly very narrow(but that's kind of what the teacher wanted), I found myself having to kind of 'circle' my chosen target with tangential studies. I ended up writing into the review paper that here's my hypothesis, it's supported, at least in theory, by the results of these studies(and the same 3 names popped up in quite a few of them), but that the topic itself doesn't appear to be directly studied, so it would be u
Re: (Score:2)
There ARE too many studies being published, or rather too many piss poor studies. I remember when I used to be in research (plastic solar cells) and would read tens of journals all studying the same phenomenon, all trying to modify the same variables (take polymer and anneal, see boost in efficiency) and basically competing for the best sounding paper (we got the bestestest efficiency so far).
Meanwhile none of them actually tried to come up with anything ground-breaking.
This just sounds like a problem which could be solved by opening up research to the search industry instead of keeping it behind paywalls. I have used IEEE and ACM recently and find their search capabilities to be horrible. Give a company like Google full access to all research papers from both organizations and research productivity would be greatly increased.
Re: (Score:2)
There are lots of studies on studies, and in general they are a good idea. Here's my take on that (from SoylentNews [soylentnews.org]), slightly paraphrased to hopefully demonstrate why meta-studies can be good:
Keeping track of information is difficult, and journals generally don't like people to pepper their articles with too many citations. If the same information gets spread around, then the chance of citation drops for any particular article that contains that information. This is a problem, even with Watson-level recall
Re: (Score:2)
Not at all. One of the few areas where science still works is well done meta-studies.
Re: (Score:2)
If it's "too meta", it's not well-done.
Re: (Score:2)
Or alternatively, the reviewer was too stupid or too lazy to understand what it is about.
Re: (Score:2)
P.S. More like 90% or more. You just have to keep calling.
Wow (Score:5, Funny)
Where's the study which examines studies about studies and finds that 50% of them are fueled by irony?
Further study is needed to confirm that number.
Re:Wow (Score:4, Interesting)
Try digging into this list [wikipedia.org], you might be able to find something relevant through there.
Re: (Score:2)
Awesome!
I love Wikipedia's lists, but lists of lists of lists is definitely new to me!
Re: (Score:1)
I do believe we will need a study about studies of studies about studies. And if that study about studies of studies about studies gets repeated we will need to do a study of studies about studies of studies about studies..........
Obvious question... (Score:2)
Does that include this one?
I know what to do. (Score:2)
Not necessarily a bad thing (Score:5, Informative)
As Slashdot patrons are eager to point out every time this sort of story gets published, the phenomenon described is not necessarily a bad thing.
Many physical chemists these days are investigating ways to build nanostructures that can demonstrate interesting phenomena. For example, chemists have known for a long time that certain molecules will scatter light in the visible range but decrease its frequency by a molecule-specific constant. This process is called Raman scattering. These molecules are often dissolved in water, and it was recently shown that adding metal nanoparticles to the solution will dramatically increase the amount of observed Raman scattering. Suddenly there's a lot of new research to do: How does the increase depend on the nanoparticles' sizes? On their shapes? On the particular metal of which they're made? On whether their surfaces are smooth or rough? What if the nanoparticles are hollow, or composed of layers of different materials? What are the theoretical explanations for the observed behaviors? And do any phenomena *other* than Raman scattering benefit from the presence of these nanoparticles?
Many papers have been (and are still being!) published on all the clever things people have tried with these nanoparticles. Ten years from now, we'll have a pretty good understanding of all the properties of surface-enhanced Raman scattering, and most of these papers will be "forgotten" as researchers consolidate their knowledge into a couple of good textbooks. But that's perfectly fine; in fact, that's the whole point of scientific progress. Science is the process of observing a lot of complicated stuff and finding the most compact explanations for everything that was observed. It's nice that eighty years ago one researcher could sometimes discover a new phenomenon and provide a complete explanation for it before publishing his knowledge to the world. Today we have more researchers exploring a larger space of possible experiments, and the things they're studying are much more complicated. So they publish more papers as "scratch work" to help other researchers who are investigating the same phenomena, and eventually these papers are replaced by books. Again, that's perfectly fine.
Duh (Score:2)
Re: (Score:2)
Re: (Score:1)
Even scientists want their 15 minutes of fame (Score:2)
Everyone wants their 15 minutes of fame, even scientists. Perhaps more so than others, because their pay scales and tenure often depend on being published and cited as often as possible.
The sad thing is that even a plethora of citations does not demonstrate the quality of a given paper. It just means it had one or a few quotable paragraphs, not that its methodology or conclusions were necessarily stellar.
When I worked on some research back in the university days, the prof in charge of publishing the
While publish or perish has problems... (Score:2)
... we do need some way to separate the good scientists that are working really hard from the shit ones that are slacking off.
So... do we have another method besides demanding that they be in various journals at some interval?
Why do these studies need to be in journals at all? Why not just have publications put out by every university, where they internally audit every paper and, if it is valid, publish it.
Sure, you're going to have a lot of boring studies, but so what? Science doesn't have to be exciting to be useful
Re: (Score:3)
Of course we are publishing more. We are pushed to publish, so obviously papers get forgotten. But it is not clear to me that it is a bad thing. What publish or perish accomplished is that we are communicating more. So clearly we are communicating smaller ideas, smaller experiments, smaller contributions, but we are also communicating earlier in the process.
It is frequent nowadays that one idea is spun into 3 papers: one preliminary workshop paper, one conference paper and one journal paper. Clearly once the journal is publ
Re: (Score:2)
If publish or perish were about communication, we could just post all our preliminary results on a personal blog or in a forum and discuss them there. Instead we write papers which represent a minimal increment of knowledge. We then evaluate that minimal increment with often non-standardized case studies and get through with it. For every topic I have to enter there are tons of publications, but most are rubbish. Some rubbish even gets published at ICSE, ASE or Models. The metric is ruining the scient
Re: (Score:2)
That's good to hear, laymen like myself only hear this expressed in negative terms.
At some level we have to take the scientist's word for it. Though of course... they're people and people lie. Just as Dr House said "everybody lies". So people issuing grants or auditing or whatever... they have to rely on metrics and independent reviews and independent reviews of independent reviews.
There are fishy things that go on in any organization. And the health of that organization is dependent on review, lest the whole
Re: (Score:2)
Any metric you use which is not directly linked to the quality of the publication will cause side effects, as people optimize towards the metric. The problem with scientific quality is that it is almost impossible to come up with a solid metric. It is even impossible to come up with an ordinal ranking. Many brilliant scientists had their work rejected the first time, but later it was a breakthrough. So obviously the reviewers were in error when they rejected those papers. In some fields people reject papers when
Re: (Score:2)
There aren't enough academic jobs to accept all applicants. Therefore some sort of selection process has to be in effect.
I guess I could just break a pool cue in half, throw each one a sharp cue shard, and tell them there is one opening.
https://www.youtube.com/watch?... [youtube.com]
eh?
We have to have something.
Re: (Score:3)
No, there are just the right number of first posts. But there are way too many posts proclaiming "First Post!!!!!" or variants that aren't.
AKA Database Syndrome (Score:3)
The crop of PhDs from the last 10 or so years are either unable or unwilling to 'hit the books'. If they can't find it in an electronic database AND easily download a PDF, they will ignore the existence of the work.
Such work often includes seminal publications, REVIEW articles of a field, and things like conference proceedings before 'everything-PDF' – all of which contain a wealth of information.
It really bugs me when I see cited references from "whoever did something like that most recently," rather than drilling down to the original source. Unfortunately, there seems little we can do about it, aside from good scientists not referencing lazy scientists.
Re: (Score:2)
Re: (Score:2)
It helps to get to know the guys who did the original work. They often have books they will share – books that had tiny print runs.
And yes, good scientists trace back via references in articles. Lazy ones don't.
Re: (Score:3)
One of the primary reasons we even have computers is to help organize and locate information. Meanwhile, because computers are so good at it and we now have so much information to process, information that is not available to a computer in 2015 is not useful information.
Re: (Score:3)
Since in CS every invention gets reinvented every 10-20 years, it is important that the old stuff cannot be found anymore. Besides that, for key ideas I start with any publication I can find, use it to locate the most recent publications in the field, and then try to find the original contributions again, backward in time. However, this is a time-consuming process, and if it is only to document a minor argument I will stop much earlier, for instance with a survey on the topic.
Re: (Score:3)
This is extremely and wildly not true. The most basic part of doing literature review is following original sources and everyone I know does this. You have to, because reviewers pick this stuff up. Even when I couldn't find a pdf or physical copy of an original source, I'd still cite it. Also, you're fooling yourself if you think that just because something was done 30 years ago, there's no point in citing more recent sources. A lot of more recent work is nothing more than just repeating old ideas but with
Re: (Score:2)
I shouldn't feed the trolls, but just for the record:
Actually, what I said is true. As a reviewer I DO pick this stuff up. And manuscripts with inadequate citations are rejected. Many submissions come in lacking any citation to a source (say, from 25 years ago). They will instead cite one of their buddies w
Re:AKA as Database Syndrome (Score:5, Insightful)
For citations central to your argument, sure, you need to track down the main papers. It's not that difficult - just look at what papers everybody else is citing. But most citations are just fulfilling the [citation needed] reqs for facts you use in your work. Any one of dozens, sometimes hundreds, of papers would easily fill in for that role.
You find two references about the same thing. As far as citing the fact you need they're essentially equivalent. One will take three weeks and thirty dollars - and half a day of arguing to make the lab pay those thirty dollars - to get, and half the time your thirty bucks will give you a badly printed paper copy. The other you can download into your paper manager and read right now. Guess which one almost everybody will use?
Re: (Score:2)
For citations central to your argument, sure, you need to track down the main papers. It's not that difficult - just look at what papers everybody else is citing. But most citations are just fulfilling the [citation needed] reqs for facts you use in your work. Any one of dozens, sometimes hundreds, of papers would easily fill in for that role.
You find two references about the same thing. As far as citing the fact you need they're essentially equivalent. One will take three weeks and thirty dollars - and half a day of arguing to make the lab pay those thirty dollars - to get, and half the time your thirty bucks will give you a badly printed paper copy. The other you can download into your paper manager and read right now. Guess which one almost everybody will use?
The most frequently cited papers are not great steps forward, but method papers; somebody does some doofy study of fish farts, but it includes a great method for analyzing exhaust gases, so every gas analysis paper ever afterwards references it.
I read a study that showed that, ironically.
Re: (Score:2)
>until a study gets obsoleted by newer, superior studies and thus gets shorter
In my field, most of the research from the last two decades is pure unmitigated crap. Basic errors in research protocol are so common that the few people who read and review everything remark when there are no research protocol errors.
(It is pathetic to see an author of a university textbook on research protocol do a study that fails to adhere to what is in the book that has their name on it.)
So many terrible posts (Score:5, Insightful)
It saddens me to see so many sarcastic and cynical posts which fail to demonstrate that the poster has given the issue any thought whatsoever. Does the Slashdot community really consider it self-evident that scientific research is a failed enterprise? And does the Slashdot community really have no idea how scientific research works?
Scientific papers aren't published for your benefit, you silly Slashdot reader. They're published for the benefit of other researchers. Suppose that some meta-researcher studied email patterns at your place of employment and found that this year a smaller percentage of your emails are replies to other messages [as compared to last year]; that is, a higher percentage of this year's emails are about new subjects. Then this paper gets referenced on Slashdot and someone (the author of the original article, the Slashdot submitter, or the editor) suggests that the lower reply percentage implies that intelligent discussion must be on the decline at your workplace because discussion requires people talking back-and-forth about the same topics. Then imagine that a bunch of people make short sarcastic posts that agree with that interpretation and variously lament about the decline of society as a whole or of your workplace in particular.
Let us now make the biggest assumption of all and suppose that you have enough self-respect to be offended by this challenge to your intelligence. What would be the most mature contribution you could make to this discussion? I suppose it could be something like, "Your statistical analysis of my company's email habits is interesting, but your interpretation seems a bit misguided; it seems like a pretty big jump to go from 'percentage of emails which are replies to other emails' to 'abundance of intelligent discussion.' "
So too it is with research papers. A statistical analysis has shown that researchers in various fields are more likely to cite recent papers than older papers, and the "half-life" of the typical paper (in the author's own words) has decreased somewhat over the last couple of decades. What conclusion should we draw from this? If scientists are less likely to cite a ten-year-old paper today than they were a decade or two ago, does that mean that there are "too many papers" and they're just swamped with recent stuff? Or does it mean that they're sufficiently well-organized that problems that used to take fifteen years to work out now only take five, and the investigations are moving on to new things?
To paraphrase an old joke: I don't go to where you work and statistically analyze all the dicks in your mouth. So stop doing the same to scientists.
Re: (Score:1)
Maybe because most studies are put out before they're even verified. It seems like the studies are either stupid and later disproved, or paid to produce the desired results. (See climate change.)
How do you verify a study before it is published? Ask the same guy to do it again, making all the same mistakes, and see if it comes out the same? Or make peer review include repeating the study?
A good thing? (Score:3)
Although this could be due to the "publish or perish" mentality, which often forces researchers to break their work down into several publications of lesser impact rather than making a single publication of larger impact, the fact that the "lifetime" of publications is getting shorter may also mean that research is speeding up. Knowledge moves from papers, to books, to being "common", and before you know it you don't really have to cite someone every freaking time anymore (I'm talking about things that are considered "common knowledge" here; you surely don't cite Newton every time you mention that white light can be broken up using a prism). More commonly, somebody will sum up the "state of the art" in a book or in a good introductory chapter of a doctoral dissertation, and people will cite that instead of all the papers. Also, books keep getting cited for decades after their publication, so maybe a follow-up study could check whether there is a similar trend in the citation of books?
While the plurality of journals has made publishing quite easy nowadays, I don't think this is the reason for the observation that papers get forgotten faster. A bad paper will not even get noticed and will probably get cited only by its own authors in subsequent publications. Since we are talking about papers that do get cited here, this means that they have managed to attract some attention, and can therefore not be too crappy.
Re: (Score:1)
Has science gone too far?
Stupid GPS.
Article progression (Score:2)
For a change, this is something that appeared on SoylentNews before Slashdot. It has been interesting tracking this article through the social media sites that I frequent:
Reddit [reddit.com] — Submitted Wed, Mar 11; 211 comments at the time of writing this comment
SoylentNews [soylentnews.org] — Submitted Sunday, Mar 15; 16 comments at the time of writing this comment
Slashdot [slashdot.org] — Posted Monday, Mar 16; 30 comments at the time of writing this comment
We need more meta science (Score:2)
These authors are a shoo-in for the (Score:2)
Re: (Score:1)
data mining? (Score:2)
It seems like this (too many scientific papers) is a problem that could be solved by data mining. I know that concept is considered evil these days, but it does have its practical, non-evil uses.
It was inevitable, really, that at some point there would be more science going on than could conveniently be published.
Obvious from just looking at your own network (Score:1)
Of course there are too many (useless or only marginally useful) scientific studies. Just look at the people working as scientists whom you know personally or otherwise vaguely know, how smart they are (not everyone can be an Einstein), your estimate of the quality of the knowledge artifacts they would produce, and the research they do, not just limited to your own field of schooling or expertise. And what do your friends and connections who are researchers have to say about the
Faster rate of scientific progress (Score:2)
Meetings at work (Score:2)
filtration is key (Score:2)
Eliminate biased studies and the rest can see the light of day.
'Scientific Studies' today are a creation of a Marketing department in many cases. There is a product to be sold and it needs support and affirmative publicity. A company may do several studies in hope that one or two will be useful in their advertising. The others tend to disappear.
The US government (and other governments and non-profits) conducted studies for many years with the intention of proving that smoking and second hand smoke were dangerous
Re: (Score:2)
>Eliminate biased studies and the rest can see the light of day.
If it wasn't for some researchers fudging data and breaking every rule in the book about research design, there never would have been any pilots from Tuskegee during WW2.
In this case, the bias of the researchers, and the funders, was a goodness.
Re: (Score:3)
Seems to me that Einstein was rather big on proving his ideas correct, and therefore was biased.
yo dawg... (Score:2)
Salvageable through open science? (Score:2)
If everyone must make their data available, then a paper will be judged on the strength of its:
a) academic contribution; and
b) quality/usefulness of the data.
So you might not be the author of the greatest paper, yet your impact might be the quality of the experimentation and resulting data.
Right now, papers appear and the data is just hearsay. In that environment, anybody can publish anything ... and today, there is a strong incentive to do just that.
Chris who? (Score:1)
dimly remembered cartoon strip from grad school (Score:1)
"My brother is doing his on the letter G".
Is it also possible... (Score:2)
That rather than shorter attention spans, or more useless papers, papers are not staying relevant as long simply because the rate of technological progress continues to increase?
For example, a paper on VHS would have been cited over a longer period than a paper on DVD, which would have been cited longer than a paper on Blu-Ray... The rate of innovation has increased, and thus the duration of the usefulness of the discoveries, as compared to updated versions of the same, has gone down.