Scientists Don't Read the Papers They Cite
WatertonMan writes "Very interesting and sure-to-be-controversial study suggesting that most scientists don't read the papers they cite. This means that if one paper misreads a work, the misreading propagates. It's a very interesting study with big implications for science, in my opinion. New Scientist has a good overview of the work. Given that most attention so far has gone to sloppy work on the experimental side (poor methodology or outright fraud), this suggests a whole other problem. A lot of the ultimate problem is that many in research are concerned more with publishing than with solving the issues they investigate. Ideally the point, both in science and in academics in general, is to understand the ideas. Yet those of you who've looked up footnotes realize that actually engaging the ideas of other researchers typically falls by the wayside. Often footnotes are there simply because references are needed; engaging others' works is secondary. I've always thought that the hard sciences were more immune to that effect than the humanities. I guess not."
Well duh (Score:5, Funny)
lol (Score:2, Interesting)
The article is total bullshit (Score:5, Insightful)
Formatting citations is fussy, tedious, and annoying. You have to look up the page numbers in the journal (which you may not even have in these days of online papers), and figure out who the publisher was and the issue or volume number.
I read every single one of the papers I've ever cited. But it was rare that I ever typed in a citation from scratch. Usually you get them either from an on-line citation database, from the bibtex entry helpfully supplied on the cited author's web page (scientists like being cited!) or, yes, by typing out a citation from a printed paper.
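For anyone who hasn't dealt with BibTeX: the entry a cited author supplies on his web page is just a short plain-text record, and your bibliography gets generated from it automatically. A sketch of what one looks like (the paper and all its details below are invented purely for illustration):

```bibtex
@article{smith1998foo,
  author  = {Smith, J. Q. and Jones, A. B.},
  title   = {Some Illustrative Result},
  journal = {Journal of Foo},
  volume  = {42},
  number  = {3},
  pages   = {286--301},
  year    = {1998}
}
```

Copy that block into your .bib file once and all the fussy formatting above (page numbers, publisher, issue number) is handled for you, which is exactly why nobody retypes these from scratch.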
In any given field, usually some kind-hearted soul starts collecting a database of citations for others to use. For instance, here's one here:
http://www.helios32.com/resources.htm#Bibliograph
Have a look; you'll soon twig to why people don't type these in from scratch.
Creating the citation all over from scratch when it's right there in front of you is about as pointless as adding a link to a web page by retyping some monstrous 200-character URL. Just because you copied & pasted a link doesn't mean you didn't read the article, does it? (I guess slashdot is the wrong place for that particular piece of rhetoric.)
I'm disappointed in New Scientist. The pissy little diatribe about science in the story submission is par for the course. Please, leave the pontificating to people who have a clue.
In fact, how about a retraction? (Ha ha ha ha!)
A.
Re:The article is total bullshit (Score:3, Insightful)
Another problem is that they seem to be estimating the frequency with which people read papers they cite from the frequency of repeating errors. The underlying assumption of this analysis is that scientists who read all the papers they cite and those who don't are equally likely to make errors. But it seems quite likely to me that scientists who are careful not to cite papers they have not read are also extra careful to get their citations correct. So sloppy scientists will tend to be overrepresented in the authors' statistics.
Re:The article is total bullshit (Score:3, Insightful)
So it's not the famous papers you have to worry about, it's the low-profile papers covering recent developments or still poorly understood areas of nature. I'd like to see those crackers at New Scientist research that, because everyone I know reads those ones very carefully.
first post? (Score:5, Funny)
Re:first post? (Score:2)
People know how to bend the rules and make it seem valid. These principles carry on (apparently) into the rest of their careers.
Re:first post? (Score:5, Informative)
When I was a postdoc at Argonne National Lab, my group's policy was that every group member got his name on every paper. On this [arxiv.org] paper, one of my coauthors refused to read the paper before publication. He said he was busy, and it was a long paper. I wanted to take his name off, but he insisted on having it on. This is not uncommon at all. I'm just quoting an example that I know of from personal experience.
There was a recent scandal where a group at Berkeley claimed to have discovered a new element. Later on, it turned out that the evidence had been fabricated. However, the group claimed it was only one guy who was a loose cannon who had invented the data. In other words, if it was right, they were ready to take credit. But if it turned out to be a fraud, they had plausible deniability.
There is huge pressure on young people in the sciences to establish a long list of publications, because permanent jobs are so hard to get. Another common phenomenon is where you have a senior administrator at a lab who "blesses" every paper done at the lab, and gets his name on every single one of them, even though he never actually makes any scientific contribution.
Re:first post? (Score:5, Informative)
I'm now in a position where I don't let that happen any more; I've asked to be taken off of papers because my contribution was minimal, and the last time I wrote something I stated up front who would be on the author list. I do not foresee that this will be a permanent solution, however; I'm still too junior a scientist to have much sway. I've seen other cases where someone asked to be added to a project because he "needed more publications", or where a senior investigator did not realize he was a co-author until well after the paper had been published.
Re:first post? (Score:3, Insightful)
Watched presentation; corrected spelling in three places.
Re:first post? (Score:2)
Re:first post? (Score:3, Insightful)
Re:first post? (Score:3, Informative)
The people who insist on such models tend to have a very parochial view of science.
If I spend 5 years designing an experimental apparatus and gathering data, and then a colleague (or more likely his grad students) takes that data and produces an analysis, I have the right to have my name on the paper; the analysis is only a part of the work.
Science is based on trust. Consider the case where a physicist wants to measure some effect but does not know how to build the apparatus to test the theory. I might well design the apparatus to test his theory even though I don't fully understand the theory he is testing; ultimately I have to trust him on that point. Equally, if I provide him with a bunch of experimental data, he trusts me not to have fabricated it.
On the large experiments (500+ authors) I have worked on, there has been a review committee of 30 or so people that checked over the paper in detail.
um... (Score:2, Insightful)
You're assuming the paper which mis-cites another gets read when it gets cited.
Just like /. (Score:5, Funny)
...where no one reads the articles they cite. We are in good company!
DOOM3! (Score:2, Funny)
Slashdotters are interested in Science, Slashdotters don't RTFA
Scientists are interested in Science, Scientists don't RTFA
Therefore, People who read Slashdot must be Scientists!
Now if I can only convince my employer that I need a dual-processor box with a GeForce4 for research in "Hand-eye coordination and its development as influenced by realistically rendered 3D environments: A study in virtual ordnance trajectories and avoidance" [idsoftware.com]
Re:Just like /. (Score:3, Funny)
Damn, great minds think alike. Maybe that is why scientists don't need to read others' papers: they already know what they are going to say...
shhhhh!!! (Score:2)
Ironic (Score:2, Funny)
Re:Ironic (Score:2, Informative)
They cover that.
even if they do read other's work... (Score:2, Insightful)
Most people twist previous research to fit what they are trying to say anyway (that's the nature of it). AFAIAC it's all bullshit anyway.
Unless they are showing HARD evidence (which in recent months they have been making up as well) and others have reproduced the same results, it's all about money/greed/profit.
Yes.
1. Research
2. Make up shit.
3. Lie.
4. ???
5. Profit.
Re:even if they do read other's work... (Score:5, Interesting)
As for twisting evidence, yes, this does happen, but most definitely not 100% of the time. How was the solar neutrino problem ever discovered in the first place? How was a re-evaluation of the cosmological constant initiated? These (and many other ideas) were put forward not because someone wanted to push a pet theory, but because the hypotheses did not match the experimental data! It most definitely is not bullshit. AFAIAC, science is still the most altruistic of professions, not to mention one of the most self-sacrificing.
Re:even if they do read other's work... (Score:2)
Who's profitting? (Score:2)
Companies can truly profit from research. Their research tends to be more application-oriented, and thus profit-oriented. They are doing research for a market. Academia is doing research for knowledge (for the most part). There is a huge difference.
You seem to think that there is no difference between academic research and industrial research. Nothing could be further from the truth. They are very distinct, and have completely different motives. If a scientist wanted to make money based on productivity and profit, he would work for industry, where the pay is substantially higher than in academia, along with the added pressure of marketing departments, supervisors, human resources, etc. In academia, you are pretty much your own boss. They are completely different worlds. I've seen it with my own eyes at Bell Labs, TRW, several universities, and NASA. I think your assessment is a little misguided.
Re:even if they do read other's work... (Score:2)
Being a research professor has its advantages, and prestige is definitely one of those. If you really want to go the altruistic way, do like the Chinese (from the olden days) and publish your work anonymously.
Would you be willing to publish your work anonymously?
Re:even if they do read other's work... (Score:5, Insightful)
1. Read
2. Form angry, uninformed opinion.
3. Post
4. ????
5. Karma!
Doing science for the money is like having sex for the exercise. There are many other ways to make considerably more money that require far less work. The raison d'etre of science is the joy of discovery; no one spends 6-8 years in higher education getting a PhD just for the paycheck. People do it because they love it.

As far as scientists faking results, yes, it happens. However, the beauty of the scientific method is that it is self-policing. Anyone can read the journals; anyone can write the editors of said journals and report anything that's not above board. As for papers not being read in the first place, well, let's hop on the Magic School Bus and take a quick tour of the scientific publishing process.

First, write the paper. Then, submit it to either a journal or a conference. In either case, the pool of available papers will be divided over the number of people on the review board of the respective journal/conference, so a bunch of people read a few papers. Once here, the aforementioned paper is either rejected or accepted. If accepted, it is published.

After the paper is published, other scientists read the paper. If it is useful for their work, they may incorporate some of the ideas into their own work, at which point they'll test the idea that they're borrowing to see if it makes sense.

If it does make sense, they'll use it. If not, they'll tell the whole world, discrediting the work and embarrassing the original author. Thus there is plenty of pressure to do good science. The people doing legitimate work far outnumber the charlatans just submitting gibberish.
Matt
This is ludicrous. (Score:4, Funny)
What about referencing one's own stuff? (Score:3, Informative)
Re:What about referencing one's own stuff? (Score:5, Interesting)
The root problem is that papers are a form of scientific social capital, and when people think you are well read, your paper is worth more. I worked in a research facility where grad students were literally held hostage so they could produce more papers for the professors to take credit for. One student came to us with his master's and was held *7* years for his PhD. It was getting so bad the graduate department was *forcing* the director to graduate students by saying, "So-and-so has to leave by the end of the year -- with or without his degree." (And after 7 years, who could blame them?)
Add this to an already paper obsessed culture, and you have a serious problem.
Re:What about referencing one's own stuff? (Score:5, Interesting)
This is a serious problem in a lot of disciplines, though I've heard of a rather elegant solution that's now become common (if informal) practice. The solution is that when a student thinks he's done enough to justify getting a PhD, he starts applying for jobs that require a PhD. When somebody is willing to offer him one, that's proof that an outsider views his accomplishments as being worth a degree, and his advisor has to let him write up his dissertation. It serves as a very effective independent outside check on the system.
Re:What about referencing one's own stuff? (Score:2)
I've been in situations where I was basically done writing and then was told "um, okay, you should cite all these papers. just figure something out."
Not necessarily... (Score:5, Insightful)
Re:Not necessarily... (Score:5, Insightful)
Proof: on one occasion, I misquoted a paper that I had written myself, just because I copied its title from the preliminary version, forgetting that the title had (slightly) changed in the published version.
IMHO, this is cheap sensationalism. On the other hand, it is true that the academic profession is too loaded with the "publish or perish" thing, which leads researchers to do (and eventually publish) sloppy research.
Re:Not necessarily... (Score:2)
These days, there are a number of programs, like ProCite and EndNote, that manage your citations. If you were to (as I do) type in new references as soon as you hear about them, you could propagate errors inadvertently.
This doesn't mean that the authors didn't read the papers they cited. It is an interesting finding, as it does show the spread of ideas -- but it doesn't indicate intellectual dishonesty.
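That failure mode can be sketched in a few lines (a toy model: the entry contents below are invented, and this is not how any particular reference manager actually stores data). Once a typo is entered into a personal citation database, every bibliography generated from it reproduces the error, regardless of whether the author read the paper:

```python
# A shared reference database, as a reference manager might keep one.
# The page number below was mistyped ONCE, on first entry: it should be 286.
database = {
    "smith1998": {
        "author": "Smith, J. Q.",
        "journal": "Journal of Foo",
        "volume": 42,
        "page": 826,  # typo: the correct value is 286
    },
}

def bibliography(cited_keys):
    """Render formatted references straight from the database."""
    return [
        f"{e['author']}, {e['journal']} {e['volume']}, p. {e['page']}"
        for e in (database[k] for k in cited_keys)
    ]

# Two different papers, both generated from the same database,
# both reproduce the identical wrong page number.
paper_one = bibliography(["smith1998"])
paper_two = bibliography(["smith1998"])
```

The typo spreads through every paper built on the database, yet says nothing about whether either author read Smith's article.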
Re:Not necessarily... (Score:5, Insightful)
I think that the article itself is making a huge leap here, and it's not one I'm about to believe.
They go even further... Copying the reference format from a paper does not mean that the scientist has not read the original. When writing my papers [research conferences, not just assignments] I'd often grab citations out of papers that were in the proper conference format. I had the paper in my hands -- but sometimes the citation information gets separated from the paper, and you need to rely on someone else's citation. That doesn't mean I didn't read the paper, nor does it mean that I was using another author's interpretation of the original work. It's an absurd leap.

While I do believe that authors do skimp on what they've read and what they just pretend to have read, I'm not sure that using typos in references is the best way to determine the degree to which this occurs. What it *does* show is that people are clearly lazy when it comes to references, and many will copy a reference without checking to see if it's correct. I'd be interested to see if they've looked into lesser-known papers -- a popular paper is more likely to withstand an error, since everyone knows what it is anyway...
Re:Not necessarily... (Score:2)
Mod parent up! (Score:2)
I have had to fix the spelling of my own name in a reference I copied... other than that, it's a good thing. I try to be really careful when I have to type in the reference myself, usually for a recent paper or something from another field. I have to decide whether to list all the authors or put an "et al" after the main ones, and whether to write out the complete name of the journal/conference or its common short name. I do try to proofread the reference, but sometimes I've read the paper and used it, yet can't find my copy as the deadline approaches; it would be more wrong to leave it out than to use a possibly slightly-off reference, especially if I used that reference to find the paper in the first place. If it had been far enough off to cause me trouble, I would have remembered. Of course you want them to be correct, since the editor will look at your references when looking for people to review your paper...
It happened to me (Score:2)
Looking into it, what happened was this: she had found some interesting citations at the end of a paper, looked up the articles, and read them. Then, when she was preparing our manuscript, rather than track down the papers in a file drawer, she just copied the citations as they had appeared in the original citing paper. Thus she preserved the exact same typos, word for word.
I have, however, seen another way that typos propagate. Say I was a graduate student on a series of papers where my advisor often trotted out the same citations to make the same points about the same seminal pieces of work. Other papers I write with him as co-author get the same boilerplate. Eventually I write a paper on which he is not on the author list, but I put in the same boilerplate. Oops: I just cited a paper I never read myself. Believe me, it happens, though it is not necessarily inaccurate.
Re:Not necessarily... (Score:5, Informative)
Many citations now are copied from ONLINE SOURCES. We read the papers, but we hate typing in our bibliography from scratch. I can just go to ADS (http://adsabs.harvard.edu) and have it print out the reference in BibTeX format for me. Now, there are quite a few typos in that database. I know because we're finding them while creating a bibliography for the upcoming Jupiter book.
None of this implies that we're not reading papers we cite. In fact, from experience, people in my field (astronomy) know the papers they reference pretty well, as a rule. There are times when we don't read an entire paper because we're just taking a few numbers or an equation, but as a whole, it pays to know the papers. If you don't, you'll usually find yourself under fire from the colleague who wrote the misquoted paper. Quite often, that colleague will be the journal referee who is reviewing the paper before publication. This is *not* a position you want to be in, so people generally work to avoid it.
Reasons (Score:3, Interesting)
Not the only problem (Score:5, Interesting)
As many of you know, the Internet is a great research tool these days. But unfortunately, it's too dynamic for the research world. "Most URL references [stand] more than a 50 percent chance of not existing after only six months." (from a Cornell study at http://www.news.cornell.edu/chronicle/00/12.14.00 )
I don't care as much if some researcher only reads bits and pieces of papers that they cite, but when the entire papers disappear, that's a much bigger problem.
"The study, using term papers between 1996 and 1999, found that after four years the URL reference cited in a term paper stood an 80 percent chance of no longer existing."
Re:Not the only problem (Score:3, Insightful)
A research library used to serve a double role, both providing access to resources and, in some sense, backing them up. But with many libraries moving their journal subscriptions from paper to web-based electronic ones, should the journal go away for some reason, these resources have a much greater chance of simply disappearing.
Electronic papers are great---they allow for better searches, easier distribution, and let me avoid peeling my butt out of my chair to go to the library. However, libraries really must endeavor to keep local copies of as much of their inventory as possible.
Slashdot's catching up... (Score:3, Informative)
Charles Darwin is known to have cited other people's work that he hadn't read (I forget the name of the author involved - not being in the field myself). Then there was the entire field of molecular biology in the 1990s, which suffered more scandals than a dyslexic shoe factory.
Slightly more relevant (though still stretching back decades) is that some authors don't read the papers they co-author - look at all the people who co-authored papers with Jan Schoen, the team who, with Ninov, "discovered" Ununoctium, etc.
Next you'll be telling us that (shock! horror!) some scientists pass off other people's work as their own, with a fascinating NEW revelation about Rosalind Franklin's work in the discovery of the DNA structure.
That's usual... (Score:5, Interesting)
What I call "academical science" is full of huge problems, which sometimes reach the level of flagrant falsifications and demagogic manipulation of facts. While not a scientist per se, I have seen how these things pass the limits of ethics and morals in a field like Mars research. There is one scientist who tragically died in very strange circumstances. Apart from the circumstances of the tragedy, there was one big "authority" on Mars who lied through his teeth about the work of his deceased colleague. Frankly, it was shocking to see how this guy flagrantly and demagogically "reinterpreted" the intentions of his colleague's scientific work. One should note that both men were highly regarded in the community. However, they were adversaries. One died; the other became a big scientific authority on Mars, in part because he did a lot to dismiss the works that went against his theories.
Re:That's usual... (Score:2)
I'll take this even further. If you are writing a paper on a particular topic and you fail to cite another researcher's paper(s) who has done similar work (but whose paper you have not read!), then you can run into some embarrassing situations. I'll give an anecdote.
A fellow grad student was at a conference a couple of years ago. A researcher had just finished giving a talk on his paper when a man sitting in the audience started asking question after question. This is normal, but the questions started taking on a more accusatory tone: "Didn't you know that this has already been done?" "Why didn't you cite XXXX?", etc. It turns out that this guy was either XXXX himself or a close friend of his. On the surface, it appears that the presenter should have been more thorough in finding papers and articles, but on the other hand, it's quite possible that he just took another path to developing his results.
There's a paranoia among researchers (and especially grad students) that if they don't cite every single work, no matter how minutely relevant, then this sort of thing will happen to them.
Peer Review (Score:2)
And what about the brothers who were awarded PhDs in physics for what looks like a hoax à la the Social Text incident?
http://www.thepoorman.net/archives/001517.html [thepoorman.net]
Humanities vs. Hard Sciences (Score:2)
Spoken like someone who hasn't done science (Score:2, Informative)
Not reading != misreading (Score:5, Insightful)
This is because heavily cited papers become very widely known and understood. Not everybody who's ever cited "The Origin of the Species" has read the whole thing, but it certainly then does not follow that they took their understandings of its conclusions from a single other citing paper.
They end their article with a smug admonition to "read before you cite." These guys sound like the guy with a clean desk who never gets anything done complaining about all the clutter on your desk. Smug social scientists criticizing physicists for their lack of citation rigor does not impress me. There are plenty of better reasons to criticize physicists this year (e.g., Ninov and Schoen). This one seems a bit silly.
Re:Not reading != misreading (Score:4, Funny)
Of course, I'd hope if you cited it, you'd list the title correctly as The Origin of Species, not The Origin of the Species.
Not really a big problem. (Score:3, Informative)
As a student starting my PhD studies, I once asked a researcher at the department about a paper. He told me he hadn't read it.
The next day, I saw that he had indeed quoted that paper in one of his.
However, it usually isn't such a big problem when papers are cited without being read, since it usually only happens with papers peripheral to the subject (for example, to justify a certain method or procedure that is common practice). Also, sometimes the relevant portion of a paper can be summed up in one sentence, or in the abstract.
Re:Not really a big problem. (Score:4, Insightful)
Of course, which is why the article sort of misses the point. For instance, if I were to mention offhand in an introduction that protein synthesis by the ribosome is done by catalytic RNA, there is an obvious reference to cite [Nissen et al. (2000) Science etc.]. I know this is correct, it's been extensively covered, and I have a copy lying around somewhere, but I've never actually read it all the way through. You can just look at the abstract and that's plenty for these purposes; if I were extensively discussing the mechanism I'd need to thoroughly read the paper, but for an introduction I just need to mention the proper source.

Now, I could be making an error: what if they just pulled something out of their ass, or used sloppy methodology? Usually, people will just say "if it's good enough for the editors (and peer reviewers) of Science, who am I to argue?"
Scientists were students once (Score:2, Interesting)
I had one professor who would randomly check up on various references in papers submitted to him (the joys of statistical sampling).
Knowing that there's some (realistic) chance of being found out, presumably most people would be careful -- but nonetheless, since he cannot possibly check EVERYTHING, some people will be tempted to try their luck. At least some (most?) will get away with it. And for those who know their profs won't be hunting down everything... what's to stop them?
And like most crimes, you just keep doing it and doing it... until (if) you get caught (I don't think Winona Ryder's *only* "shoplifting experience" was the one she went to trial for). So presumably at least a percentage of those doing scientific research had had undergrad + postgrad experiences of "getting away with it". Could get to be a habit.
Heck, if even reusing *faked* graphs in multiple papers can be gotten-away-with...
This happens a lot...especially in Academia (Score:2, Interesting)
The funny part is that I received the largest portion of help from a couple of Sun engineers who were able to get me through some code which my advisor could not, and except for the acknowledgement, their contribution was poorly documented (at least in my mind, if not the advisor's).
So, if you read my paper, you would think at times that I am an idiot, because some of the referenced work is so basic, and at other times a super genius, because the code was assisted by some great programmers (after all, how many people read the acknowledgements?).
Did not prove causality (Score:3, Insightful)
Idiots. (Score:5, Interesting)
1. Find a reference to a paper which looks interesting.
2. Walk down to the library, remembering that you're looking for Bob's paper about bars in the Journal of Foo.
3. Arrive in the library, find the paper, read it, decide it is important.
4. Walk back to computer, copy out reference string.
It's quite easy to look up a paper from a slightly-wrong reference, and as long as the reference is close to correct, it's fairly easy to not realize that the reference was wrong in the first place.
Re:Idiots. (Score:2)
Yet, I don't think it is wise to disregard this. I think it may well be a problem. But then, I didn't RTFA... :-)
No one has time to do anything but skim (Score:3, Funny)
Warning: (Score:2)
Piled Higher and Deeper (Score:3, Interesting)
Anyway, this comic [stanford.edu] seems appropriate.
My Own Research... (Score:2)
Flawed logic (Score:4, Informative)
Gee ... most scientists use a program (like Endnote) to format bibliographies, using data downloaded from a database (like PubMed). I suspect that this is more a deficiency in proofreading reference lists and assuming that databases are correct, rather than a lack of reading the original material. Whether people read articles carefully is another matter, of course.
In fact, a blatant miscitation of a given reference would often get caught during the peer review process. This happened to me once when I rewrote part of a paper and forgot to remove one of the references that no longer applied ...
Idealism vs. Reality (Score:2, Insightful)
This, of course, does not justify in any way the falsification of work (which, I think, is extremely uncommon - it's just that we've recently heard about a particularly egregious case of this), nor does it justify propagating misinformation as a result of improper literature citation. I'm just pointing out that the ideal mentioned by the submitter is just that - an ideal.
It's not a new thing (Score:5, Interesting)
To support the view that observations got better and better, requiring more and more circles, you'll probably find most of these sources citing a book by J.L.E. Dreyer, written at the beginning of the previous century (though it exists in a few later editions).
But Dreyer says the opposite:
Basically, if these people had actually read Dreyer, we wouldn't have had to struggle with this myth any longer. Of course, there's a lot more to this story than this, but I don't have time to write it now... :-)
Peer review (Score:4, Interesting)
Depends on the papers (Score:2)
Who Actually Wants to Read Articles? (Score:4, Informative)
The issue here is that people expect articles to have a certain shape, form, and style, including a literature review. And a lit review can be a pain. You don't want to read an article more than is required to get the basic gist of its relevance to your work. Sometimes, that can be done by reading just the abstract.
The suggested rate of non-reading articles is also possibly overstated. That one has mis-cited a work does not necessarily mean that one has not read it. I can, for example, read an article ten years ago and remember the basic meaning I need to take out of it, and include it in my own references upon seeing it in the references of another's work without refreshing my knowledge of the work. Or I could just use another work's references as a reading checklist and not bother to correct it (or be unaware of the mistake if I sent a poor grad student or some other lackey to the library to copy the journal for me).
I assume the full article by Simkin and Roychowdhury probably states the likely sources of commonly copied errors. I'm a tad curious to see whether the authors of those progenitor articles propagated their own mistakes in future articles or corrected them.
While the article claims that "a billion different versions of erroneous reference are possible," in practice that may not be as true. With the errors being volume, page, or year, the most likely errors are transposition of two adjacent digits, deletion of a digit, insertion of a digit, or replacement of a digit. In the latter two cases, the erroneous digit will most likely be a neighboring number on the keyboard: a one is much less likely to be replaced by a nine than by a two. That is unlikely to lower the probable number of copied citations to below 50%, but it is still a possible source of error that may or may not be accounted for.
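That back-of-the-envelope argument is easy to check with a short script (a toy model of my own, not code from the study): enumerating every single-keystroke typo of a three-digit page number yields only a few dozen variants, and weighting replacements toward keyboard neighbors would shrink the plausible set even further.

```python
def likely_typos(s: str) -> set[str]:
    """All strings reachable from s by one plausible keystroke error:
    transposing adjacent digits, or deleting, inserting, or replacing one digit."""
    digits = "0123456789"
    out = set()
    for i in range(len(s) - 1):                      # adjacent transposition
        out.add(s[:i] + s[i + 1] + s[i] + s[i + 2:])
    for i in range(len(s)):                          # single deletion
        out.add(s[:i] + s[i + 1:])
    for i in range(len(s) + 1):                      # single insertion
        for d in digits:
            out.add(s[:i] + d + s[i:])
    for i in range(len(s)):                          # single replacement
        for d in digits:
            if d != s[i]:
                out.add(s[:i] + d + s[i + 1:])
    out.discard(s)                                   # a "typo" must differ
    return out

variants = likely_typos("286")   # "286" is an arbitrary example page number
print(len(variants))             # dozens of variants, nowhere near a billion
```

For "286" this gives 69 distinct variants (2 transpositions, 3 deletions, 37 insertions, 27 replacements), so the realistic error space per field is tiny compared to the quoted billion.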
In university settings.. (Score:3, Interesting)
In university settings, it is all about how many papers you have published. When a professor is first accepted to the faculty of a university, he/she must "publish or perish" for the first 5(+[?]) years. If you do not publish often enough in those first years, you are not retained. Things get better after you get tenure; you are not required to publish as often. So, it should not come as too great a surprise if people are more interested in publishing than solving the issues.
I personally think the requirements of universities should change so that we are not searching through a glut of papers, all saying many of the same things (or close enough). I am more concerned with the falsification of data, which totally throws everything off, than with a tendency to publish papers that don't necessarily solve the issues, which makes finding relevant research difficult but shouldn't substantially hurt the future of the field.
That explains everything (Score:2)
I can't speak for others, but I always read the papers I cite in mine. That's because I try to limit myself to citing papers that are actually relevant to what I'm talking about and have exerted some kind of influence on the contents of my paper. Now it's becoming clear to me why my papers always seem to have so many fewer references than the other papers I read.
In other news.... (Score:2)
Seriously, you should qualify your statements before you go creating new negative stereotypes. I've known my share of publish-aholics, but I've also known several scientists with deep personal integrity who only care about results.
QUALIFY!
Duh! (Score:4, Insightful)
Sometimes all someone wants is a certain result from a paper. Reading and understanding the full reasoning behind a result rather than the result itself may mean the difference between an afternoon of work and 3 weeks of work. Multiply that by the number of citations a paper has, and a hapless but well-meaning scientist would spend all their time digesting their citations rather than publishing papers and would soon be relieved of their position.
Understanding the details behind cited results is certainly very important, but in the real world there are real tradeoffs that researchers constantly have to evaluate professionally regarding how much time they spend understanding and in how much detail they understand any given result.
This posting is interesting, certainly, but it is not news.
Anyone For a Tall Glass Of Irony? (Score:2)
LOL. An obvious case of a submitter with ZERO sense of irony.
"There's a new study which suggests scientists don't read the studies which they cite."
Huh? (Score:3, Insightful)
If they're not reading the papers, why would the misreading propagate?
So, to summarize... (Score:2)
With so many similar topics appearing all across the IP landscape, here's the trend I'm seeing:
The simple capitalistic need to own, and be given various forms of credit for, ideas has taken precedence over the need to actually solve and understand problems.
That's not to say that capitalism is at all bad, but this aspect of our modern version of it can lead to eventual deadlock in society's and individuals' ability to get anything done. Scientists need to work on something they can own, so many ignore otherwise important topics. Inventors need to avoid anything in the commercial market, so many find their ability to improve things greatly hampered. Writers and archivists must carefully steer around sometimes broad concepts claimed by powerful interests, and so must limit their imagination as important ideas rot en masse.
Completely new ideas are a powerful thing, and should of course be encouraged - but the ideas that are actually useful to people are not often completely new. Our encouragement of new ideas should not come at the cost of the usefulness of ideas in general! Exploitation of ideas is the rationale behind copyright and the like, but one does not have to own the core concepts themselves to exploit an idea - owning the core concepts ends up exploiting people rather than ideas, keeping everyone else from bringing new ideas to society. New ideas don't often spring from nowhere - people have to be able to combine existing concepts.
Ryan Fenton
How research is done in nutrition 'science' (Score:4, Insightful)
I've grown tired of hearing members of the so-called 'medical' profession lecture me on how 'risky' my 'high-protein' diet is (seems most doctors are functionally deaf and/or immune to learning anything at all from a non-doctor). I gotta wonder how much more 'risky' my MODERATE protein is than being more than 100 lbs overweight. Seems doctors only read the conclusions of studies, and not the actual studies. I have come to the conclusion (based on my personal experience, and comparing notes with several dozen others in the same situation) that the typical 'research' paper follows these steps:
1: Write down a conclusion
2: Write a paper supporting that conclusion
3: Do some 'research', carefully structured to support that conclusion
4: Discount or discard any data that doesn't support that conclusion
5: Get the paper reviewed by a group of associates that agree with your conclusion
6: Publish the paper in some mutual-admiration society journal
My favorite along these lines is one entitled "Type 2 Diabetics Benefit From Reducing Intake Of Animal Protein" [pslgroup.com]. If you read the summary very carefully, you will see that the 'researchers' removed the SUGAR from the diet, and then concluded, from the resulting health improvements, that animal protein causes type II diabetes. (!!) This is, unfortunately, typical of what passes for 'science' in the study of diet.
Re:How research is done in nutrition 'science' (Score:2)
Of course, we're arguing about a paper which neither of us has read... seems a bit amusing considering the topic of the post.
Re:How research is done in nutrition 'science' (Score:2)
When I met my wife, she was a medical wreck. A sufferer of Type II diabetes since she was 8, her blood sugars were always off, she was underweight, listless, etc.
Now my diet is one that makes most GPs cringe. I live off of cheesesteaks, hot dogs, etc. High fat, medium carb, high protein. Add to that that I am very sensitive to antibiotics, so I cannot eat farm-raised poultry or fish. So, it's red meat for me.
Of course, my wife ended up adopting my diet over time, and her cholesterol is down, her blood sugars are normal, and she has achieved her target weight. Her doctor asked what she had been doing, and when she explained "Eating a lot of cheeseburgers" he refused to believe it.
The problem with modern science is that it is based on too many antique fallacies. Much like the 10th-century monk who never thought about germs because GOD caused disease... when the devil didn't. Most of Einstein's later work was highly speculative musing on extra dimensions and trans-dimensional physics. Most of his conclusions cannot be verified, and those that can have shown anomalies.
This however, does not mean that super-string theory isn't still the basis for most high level physics research. Indeed, disagreeing with super-string theory is enough to convince many universities that you don't belong in their program. Gee, I always thought it was religion that placed so much importance on blind faith.
Cut the strings!
Physics doesn't demand
Any vibrating band
Of string.
I won't step in your noose
I don't believe in your loops
Of string.
Demenchuk "Cut the strings" - From "A 5th dimension of Beethoven"
~Hammy
In college... (Score:2, Insightful)
I blame it on (Score:2)
1. Write paper and cite other papers I haven't read.
2. Publish paper.
3. Profit!
Alternative?
work for greater good of humanity
and starve.
What would you do?
this proves little (Score:2, Insightful)
Pervasive Problem (Score:4, Informative)
Take a look at the ACM or IEEE and the number of journals they support, then toss in folks like Springer Verlag. Figure out how many articles are published in these each year. Just from counting you might determine that many of these are pretty meaningless. Try reading a few at random and see if you change your mind.
Now remember that the folks on a tenure/promotion committee know nothing about what a researcher might do - they're even more ignorant of someone else's research field than they are of their own. So, how do they determine how good a researcher might be? They're sure as hell not going to wade through yet another meaningless paper. It's simple. They count. How many publications? How many grants? How many citations from other papers to the researcher's papers?
And it's an interesting feedback loop: even getting a publication or grant can depend on your publication and grant history. And if you suspect that someone who works in the same area might be reviewing your paper/proposal, you might want to make sure a couple of citations (always positive, naturally) of that person's work are included.
So, we know someone wants publications/grants/citations and they need p./g./c. to get p./g./c.. They do some research, it depends heavily on two or three other bits of research. But two or three citations aren't enough. So they might want to use the citations they find in the work they cite. OK. This citation looks good perhaps, but the original article isn't available in the local library and inter-library-loan will take a month to get it and the deadline is next week. Oh well. Cite away - the original author isn't likely to complain (after all this is another citation to his/her work).
And so it goes.
Cost of journals, pride of reviewers, contribute (Score:5, Insightful)
1) Cost of journals -- often there is an article that ought to be cited in your work (because it was published before yours, and is related), but is in a journal unavailable at your university's library. There are thousands of journals, and their high costs (often thousands of dollars a year each) means that no library can have them all. But why not simply ignore an article you haven't read? Read on.
2) Pride of Reviewers -- When a scientific article is sent to a journal, it is passed on for peer review to several researchers who are doing similar work. While it would be nice to think that reviewers are not so petty, the fact is, if you haven't cited their work, they might get angry and reject the paper. So, authors feel it is better safe than sorry and cite freely.
Re:Cost of journals, pride of reviewers, contribut (Score:2)
True, and I'm not saying it's a wise idea to do if you can possibly avoid it. But often you can get a pretty good idea of what the paper says from reading the abstract (which is generally freely available on-line, even if the paper itself isn't).
The Real Problem (Score:3, Interesting)
What needs to be done is to reform the way merit is assigned in academia. Research funding and tenure need to be allocated based not only on the quantity of publications but on other factors which may be harder to measure, factors that would be better indicators of the value of their research.
A somewhat related issue is that more and more private sector funding is flowing into universities and along with that funding comes the expectation of a quick return on investment. This creates more pressure to pursue short-term goals with little long-term impact on the field of study.
Taken together, US scientific research is destined to fall behind and stop making new breakthroughs. The only apparent solution is to increase the amount of public funding available for basic research, though that seems unlikely to happen given the current regime in Washington. A more likely outcome is that our scientific institutions will all be doing R&D for the big corporations in the near future.
Publication number. (Score:5, Insightful)
A lot of the ultimate problem is that many in research are concerned more about publishing than in solving the issues they investigate.
The problem is that the higher-ups in the university system essentially mandate a certain number of peer-reviewed publications for promotions, hell, even to keep your job if you're not tenured. This, I feel, is part of the problem in that we're pushed so hard to get X number of publications per year. In a sense it's necessary to weed out the schmucks (anyone can get a Ph.D. nowadays), but it also can cause the quality of the research to decline. The whole quality vs. quantity argument.
Just my $0.02.
Highly dubious! (Score:2)
*removing tongue from cheek*... I hope everyone got that.
It's easy to do! (Score:2)
Plus even if you read and really do understand the main part of a paper you might make a simple mistake like missing a key assumption in the introduction (which I don't imagine too many people pay much attention to) and then end up stating a result without that assumption, and then anybody who uses your paper is getting bad information. It probably doesn't happen often, but even just a few times could cause huge problems.
It's probably excusable when a novice like myself doesn't look too closely. After all, we're still learning. Plus I know it's a bad habit and I'm trying not to continue it. However, if this is happening with more experienced academics, it is quite scary!
Pertinence (Score:2)
No other option, no time (Score:2)
I'm writing right now... (Score:2)
Now you reap the benefits of dumb assignments (Score:2)
Totally naturally, we go out, find 1.5*X citations, winnow out the obvious losers, and randomly cite them at the end of our papers, having read maybe one of them. Because we all know the teacher/prof doesn't have time to check even one of them from each of our papers, let alone check them all. How many of us have completely manufactured a citation from whole cloth for one of these things and totally gotten away with it? (I haven't myself, but I certainly thought about it; the only reason I didn't is it was generally easier to just go get likely looking citations on the Internet. Teacher never realizes you "used the Internet" if you cite paper journals....)
Certainly you don't think this habit is going to go away just because they got a degree, when the stakes are even higher? Everybody else's six-page research papers have 40 citations at the end, if yours don't you'll stick out, and that's bad.
It would probably be better to require that students cite as appropriate, and require at least a spot check of the citations for at least one random assignment at some point in a student's career.
I'm writing something in my spare time that might in some sense be considered an academic paper, but I just use footnotes as appropriate. Citations are often overrated when they are used as a cover for "We've known this and endlessly debated this in the field for the past 50 years, but I can squeeze seven pointless, information-free citations out of this" sorts of things.
Note I'm not saying that citations are unimportant or that they should be abolished; they are legitimately important and useful. I'm just saying that the stupid way they are handled in school has natural consequences in the resulting academics, and their value is unnecessarily diminished as a result.
The Hard Sciences (Score:3, Interesting)
What gives the Hard Sciences the right to that title is that, eventually, someone will root out the bull that someone else has published, brand it as such, other people will check it and agree, and it dies. You can prove someone WRONG. Try that in the social sciences - has anyone ever heard of a huge scandal where someone faked results in the social sciences? They would get in trouble if they didn't do the studies and were found out, but can you prove that they cheated just by taking their conclusions, working with them, and crying foul when something doesn't work? In the Hard Sciences, you can. That's what makes them so strong and practical.
Not that the Social Sciences are worthless, mind you. It's just that BS seems to be a lot easier to get away with there. Sort of like in English class, when we were supposed to get the meaning out of a book. I never got the meaning the author was trying to convey (or at least what they later said they were trying to convey), but I wrote down something and got a good grade. Because how could they prove my thinking about the book wrong? I think the social sciences have a little of that problem in them somewhere. Controlled experiments are really tough to do, so you run into problems.
Chuck Darwin will take care of it. (Score:2)
But, the Scientific Method is clearly not going anywhere, and reproducibility is a very strict standard.
Now, I agree that there would be some difference between the experimental sciences and the evidential sciences (archaeology, etc...) in this regard, where the temptation to "promote" an idea is not as tempered by the fear of immediate embarrassment.
However, certainly in experimental science, the lifespan of any unsupportable idea is inversely proportional to the degree of interest in that idea, which is the perfect governor.
Pay for results, not process. (Score:2)
Universities, governments, and corporate science divisions have been paying for raw output without validating the quality of that output. The result is a vast sea of crap masquerading as the truth.
How often is a scientist given the job of vetting another's work? So how often do you suppose it happens? And how much do you suppose it's worth to a scientist to participate in validating the truth, and how much to participate in publishing over validating?
the Humanities (Score:3, Insightful)
Incidentally, it seems to me that the peer review process that exists in both the humanities and the sciences ought to catch these people who are completely misreading their source material. If neither the people writing the papers nor the reviewers are familiar with secondary materials, a real problem exists.
Bad Article, Bad Science (Score:3, Informative)
During the research stage of a project, you read the papers. An error in the citation? No sweat; you know the authors and title, and a search engine will give it to you in nothing flat.
Weeks or months later, it is write-up time. Open the first paper to cite it. And there are all the other references you followed (any little trouble in the lookup is long forgotten) and dutifully read. And - get this - it is easier to cut-and-paste the citation than to go back to the paper and assemble - separately - the publication, title, authors, and page numbers.
The only thing the research quoted proves is that papers are overwhelmingly circulated electronically and the dead-tree format is, for scientific papers, obsolete.
Errors in cites don't mean you didn't read papers (Score:3, Interesting)
Yes, that is the tinfoil hat explanation.
Now try this one: authors are human beings who make typos. They cut and paste erroneous references because they don't want to waste time retyping the reference. They read articles from the online versions of journals, and sometimes the citation info provided online is incorrect or altogether absent.
One thing that does disgust me is the explosion in the number of footnotes associated with a typical academic paper these days. I recently submitted a paper with a not-particularly-important result to a not-very-important journal, and the paper had forty-one footnotes. (Most were added by my coauthor.) If you visit a mature university library, pull out a copy of an older periodical. Copies of Philosophical Transactions from the nineteenth century are a delight to read. I read a paper by Kelvin from (IIRC) 1807, and it had seven references. Seven!
The growth of massive, searchable databases of papers (e.g. Medline) has led to many more footnotes per paper, and many more potential typos. For the record, the paper I mentioned above contained at least three errors in the footnotes that were noted and corrected by the journal publisher. Perhaps New Scientist should be writing a scathing exposé on the decline of proofreading and the rise of profligate namedropping in footnotes.
Lawyers don't either (Score:4, Interesting)
This wasn't a hick lawyer either.. She was senior partner in one of the largest law firms in BC, had a reputation for never losing a case, and became a judge a year or so later (Judgeship is more of a peer-review process in Canada than it appears to be in the US).
This left me with a feeling that lawyers don't pay as much attention to their authorities as they could. Probably more so than scientists do with their citations.
Re:exactly... (Score:3, Informative)
Their claim is that this exact thing happened in the early 80's, and that instead of actually reading the research that said that HIV may cause AIDS (which was inconclusive) they simply took the ball and ran with it, causing years of research to be based on the same incorrectly cited source.
Who knows what the answer is, but it's a fascinating subject to read up on.
Re:exactly... (Score:2)
The Relationship between HIV and AIDS [nih.gov]
The Evidence That HIV Causes AIDS [nih.gov]
As for the "prominent scientists", some of them are operating waaaay outside their area of expertise. Kary Mullis had one brilliant discovery and doesn't seem to have done much else, aside from taking acid and surfing. One particularly loud denier that I know of is actually a math professor known for his aggressive crusades against anyone holding opinions contrary to him. He was once caught making a bold claim about media coverage of AIDS that was quickly proved wrong by a reporter with LEXIS/NEXIS access.