Misconduct, Not Error, Is the Main Cause of Scientific Retractions 123
ananyo writes "One of the largest-ever studies of retractions has found that two-thirds of retracted life-sciences papers were stricken from the scientific record because of misconduct such as fraud or suspected fraud — and that journals sometimes soft-pedal the reason. The study contradicts the conventional view that most retractions of papers in scientific journals are triggered by unintentional errors. The survey examined all 2,047 articles in the PubMed database that had been marked as retracted by 3 May this year. But rather than taking journals' retraction notices at face value, as previous analyses have done, the study used secondary sources to pin down the reasons for retraction if the notices were incomplete or vague. The analysis revealed that fraud or suspected fraud was responsible for 43% of the retractions. Other types of misconduct — duplicate publication and plagiarism — accounted for 14% and 10% of retractions, respectively. Only 21% of the papers were retracted because of error (abstract)."
Publish or perish (Score:5, Interesting)
"Get only positive results or never get tenure" is a policy that dooms us to this exact course. Publishing is no longer a consequence of having a brilliant idea, but rather a means to an end (keeping your job). The academic community needs to find a metric for researcher quality other than papers published. It's costing everyone the truth.
Re:Publish or perish (Score:4, Insightful)
...The academic community needs to find a metric for researcher quality other than papers published...
such as?
Number of citations? No, it would take a 30-year probationary period before the trend was reliable.
Have experts evaluate your efforts? No, that would require extra effort on the part of expensive tenured experts.
Roll some dice? Hmm, maybe that could work.
Re: (Score:1)
Roll some dice? Hmm, maybe that could work.
I have the sudden urge to make a D20 modern "Tenure of Educational Evil" (If you don't get it, look up Temple of Elemental Evil) and propose that any researcher must be able to take their level 1 researcher through the module to get any funding. (and funding based on how many objectives they complete along the way)
Bloodsport (Score:2)
Have an annual fight to the death for tenure.
As an added byproduct I bet A) your average tenured professor would start to look a bit different, and B) you gotta bet they would be taking way less shit from students and TAs...
Seriously though, I know in some circles it has been discussed that not every university should be structured in the same way. For the most part, most are more or less training centres rather than places of deep discovery.
Tenure and papers, might make sense if your primary goal is the
Re: (Score:2)
Two Scientists enter!
Eight Scientists leave!
(This might not work out the way you had planned it ... )
Re: (Score:3)
This implies that an extra -5 scientists entered.
Re: (Score:2)
Electrolytes! (Score:2)
It's what Scientists Crave!
Re: (Score:2)
How about calculated gross earnings of the students you have taught?
Of course that puts a value on teaching, which is something being discouraged for tenured faculty (which I obviously don't agree with).
Or you could measure net income from licensing of IP for creation of new technology. Notice I said licensing and creation, I wouldn't want to encourage our universities to continue acting like asses pursuing IP lawsuits as a means to make money.
Re:Publish or perish (Score:4, Insightful)
How about calculated gross earnings of the students you have taught? Of course that puts a value on teaching, which is something being discouraged for tenured faculty (which I obviously don't agree with).
That would also make it in new professors' best interests to not teach the intro level courses where much of the class will change majors and doesn't want to be there in the first place. They'll instead focus on the upper level courses where the weak links have been weeded out.
Guess which courses new faculty get stuck doing now? That would be rewarding the really weaselly ones who were able to skip the hard work.
Furthermore, the students don't care about quality teachers, or else they'd be going to smaller schools known more for teaching than for research grants. They're voting with their wallets for schools where research is valued more than instruction. So your solution is lacking a problem, at least according to the teachers and students of such schools.
Re: (Score:2)
Furthermore, the students don't care about quality teachers, or else they'd be going to smaller schools known more for teaching than for research grants.
Why? The respective research school selling point is that the teaching is better because the faculty is top notch.
Re: (Score:2)
Why? The respective research school selling point is that the teaching is better because the faculty is top notch.
Huh? Where did you get the idea that a good research faculty means a good teaching faculty? There's too much pressure on research faculty to do research to expect them to spend a lot of their time concentrating on teaching. (Yeah, some research faculty are good teachers, but there is no causation.)
You go to a research school if you want to get involved in research, because that's where the student jobs in research are. You go to a teaching school if you want to learn, because as an undergraduate you aren't g
Re: (Score:3)
Where did you get the idea that a good research faculty means a good teaching faculty?
Note the use of the phrase, "selling point" in my original post. Where does anyone get that impression that research means better teaching and/or better education? From research schools marketing that angle.
Fundamentally, your assertion that "the students don't care about quality teachers" is based on flawed premises. Prospective college students aren't well known for understanding the nuances of a college and an education. So why expect them to "know" that "smaller schools known more for teaching" have
Re: (Score:1)
Most students are going to choose institutions where the certificate they get at the end will have the highest prestige attached. Now if certificates from universities with better teaching would provide higher prestige (which would make sense, because the students from there should be better educated, after all), then students would select the universities with the best teaching, and thus universities in turn would have a higher interest in improving their teaching in order to get more students.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Interesting)
With the public retreat from education, universities have to take their funding from more private sources. As a result, there is outside pressure to do research to favor these outside sources of funding, and you get a recipe for fraud and misconduct. Of course, the universities won't admit that they have had to make a deal with the devil to keep the doors open - and a large part of our (United States) political system is dead-set on taking us backward in terms of scientific progress to appease their less-
Re: (Score:2, Interesting)
Well, you've got the devils on the corporate side, who may be trying to avoid bad press, say large organic potato farmers who don't want to see studies that show the deleterious effects of carbohydrate intake on obesity, diabetes, heart disease and other chronic diseases. Fewer carbs sold means less profit to the company.
But then, you've got the devils on the government side, who also may be trying to avoid bad press, say the USDA regulators who don't want to see studies that show the deleterious effects of
Re: (Score:1)
Most of the organic potatoes I've seen were small to middlin'.
Re: (Score:3)
With the public retreat from education, universities have to take their funding from more private sources.
Last I checked, there was no such retreat from education. There's been a remarkable decline in the quality of education and what public funds buy. But it's a dangerous illusion to claim that there has been a retreat from education when the problem is elsewhere.
Re: (Score:2, Interesting)
Re:Publish or perish (Score:5, Insightful)
Journals don't only publish papers reporting "positive results," whatever that may be. Even if your study comes out a way you didn't expect, if you did it right, you should still be able to get it published. There's something beyond publish or perish that is at work here.
That's what you might think, but getting (most) journals to publish negative results is very difficult.
Re:Publish or perish (Score:5, Insightful)
Journals don't only publish papers reporting "positive results," whatever that may be
HAHAHAHAHAHAHAHAhahahaha... ha... oh, oh I'm sorry, but that's funny.
Yes. Journals have a very long history of not publishing 'negative results'. (id est: "We tested to see if X happened under situation Y, but no it doesn't.") Mostly because it's not 'interesting'.
If you want a good example of this, check out the medical field, where the studies which don't pan out aren't published, the ones which do are, leading to severely misleading clinical data [ted.com], and it leads to problematic results.
Re: (Score:2, Interesting)
The problem with biomed research is that the field is rife with people who don't understand models. Biomed research is not really science in that we are not yet at the point where we can express mathematical models to make predictions which are then falsified or not.
All too often, it is a case of "I knock down/over-express a gene, find that it does something, and then make up some bullshit where I pretend it'll cure cancer". In many cases, articles get published because the reviewers don't say "this claim i
Re:Publish or perish (Score:5, Interesting)
A positive result is the rejection of a null hypothesis. In the frequentist statistical paradigm, a failure to reject the null hypothesis is simply not significant. Insignificant results are not usually considered worthy of publication. "If your study comes out a way you didn't expect," then the way you expected your study to come out is a null hypothesis which can supposedly be rejected with some measurable degree of significance. This way you can explain the significance of what you learned from the "failure" of your experiment, and there is no reason you should not be able to publish it.
That's the statistical paradigm. Results just aren't significant unless you can state them in a positive way.
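To make the frequentist logic above concrete, here's a minimal stdlib-only sketch. The data, the null mean of 5.0, and the critical value 2.776 (two-sided, df = 4, the conventional alpha = 0.05) are illustrative assumptions, not anything from the study:

```python
import math
import statistics

def one_sample_t(data, mu0):
    """Return the t statistic for H0: the population mean equals mu0."""
    n = len(data)
    se = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean
    return (statistics.mean(data) - mu0) / se

# Critical |t| for df = 4 at the conventional alpha = 0.05 (two-sided).
T_CRIT = 2.776

# Measurements centred on the null value: t is ~0, "not significant".
null_true = [5.1, 4.9, 5.0, 5.2, 4.8]
# Measurements shifted upward: |t| just exceeds the critical value.
shifted = [5.3, 5.1, 5.2, 5.4, 5.0]

for label, data in [("null_true", null_true), ("shifted", shifted)]:
    t = one_sample_t(data, mu0=5.0)
    verdict = "reject H0" if abs(t) > T_CRIT else "fail to reject H0"
    print(f"{label}: t = {t:.3f} -> {verdict}")
```

The point of the sketch is exactly the parent's: the first dataset is a perfectly honest result, but under this paradigm it produces nothing "significant" to publish.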
Re: (Score:3)
then the way you expected your study to come out is a null hypothesis which can supposedly be rejected with some measurable degree of significance.
You have to be very careful here. In serious studies, you don't get to choose your null hypothesis or how you're going to analyse the data after collecting it. That's a textbook example of introducing confirmation bias.
Re: (Score:2)
Yes you do.
There is also the danger of making an unjustified assumption of objectivity. In preliminary studies, scientists will have gathered data, analyzed it, looked for patterns, and tried to come up with all kinds of hypotheses that could be tested. Even the most final, definitively ob
Re: (Score:2)
often enough the scientist will have a pretty good idea of the expected nature of the data to be collected.
Sure. I was clarifying that you can't just change to a different method of analysis according to the data you got, just because your original analysis didn't give you the result you expected.
In other words, you simply can't change your plan once you've seen the data. In this lecture [lhup.edu], Feynman gives a very clear and real example of what can go wrong even if you're trying to be completely honest (look for the part where he talks about Millikan). That's a neat example because it's a very controlled experiment m
Re: (Score:2)
I think the OP was referring more to the fact that if your hypothesis is that giving drug X to rats will turn their hair green, and instead giving drug X to rats causes them, with a reasonable degree of statistical confidence, to grow a third ear, you might still have a paper. What you thought was the case was wrong, but you still got an interesting result.
On the other hand the result "nothing happened" or "the rats all died" isn't necessarily all that interesting unless there was an exist
Re: (Score:1)
In biomed, there are no models to invalidate. The Michelson-Morley experiment was significant because it proved that a model of the universe was wrong. To do the equivalent in biomed would require someone to have a model in the first place...
Re: (Score:3)
In biomed, there are no models to invalidate.
That statement is one of the stupidest things I've ever read on Slashdot ... which is pretty impressive.
Okay, I'm going back to building models of developmental gene regulation now. You're welcome.
Re: (Score:2)
These are not models in the sense that the Ether Theory was a model. You build gene regulation networks by accumulating data on what gene acts as a promoter/repressor of another and what are the activation cascades.
No one will ever invalidate that work through an experiment -- some of the network might be revised, but it is not the case that someone will come up with some experimental proof that there is no such thing as a gene expression network, which would be the equivalent of the MM experiment.
You could
Re: (Score:2)
The point I think you're missing is that simple phenomena are amenable to simple models, while more complicated phenomena require more complicated models. The propagation of light is, in and of itself, a simple thing, and the luminiferous ether model and the photon model which replaced it are pretty simple too--which is why it was possible to choose one over the other based on simple experiments.
The interaction of gene regulation within a living organism, on the other hand, is tremendously complicated, and
Re: (Score:2)
Listen. Learn. Positive, sensational results get published. Negative results only get published if they contradict a well known study/result/notion. Of course there are exceptions, as always.
Re: (Score:2)
Positive results are interesting results. Let's say you prove that watermelons cause 80% of all cancers; that is headline-stealing, front-page news. The report detailing how you proved watermelons have no link to cancer is not.
Similarly, your scientific paper on how to cure cancer is worth billions and extremely interesting news, while your paper on how NOT to cure cancer is neither.
Re: (Score:2)
It's also a great way to keep the grant money flowing in even after you have tenure, particularly if you're publishing findings that are likely to get you grants. And no one gets paid a nice bonus for finding inconclusive or negative results.
Re: (Score:3, Interesting)
It's not the academic community that is at fault. It is our society.
I've long held the view that science only gained the credibility it has because it was free from politics and power.
But since science has gained such credibility, people think we should now *trust* it with power. Which of course destroys the very thing that earned it that trust. Ye olde saying: 'power corrupts and absolute power corrupts absolutely'.
For one thing, we now have government funding for science. Sounds like a good idea... except of
Re: (Score:3)
"Get only positive results or never get tenure" is a policy that dooms us to this exact course. Publishing is no longer a consequence of having a brilliant idea, but rather a means to an end (keeping your job). The academic community needs to find a metric for researcher quality other than papers published. It's costing everyone the truth.
I think the issue is not that they need a new metric for researcher quality but to realize that not every professor needs to be an active researcher their whole career.
Re: (Score:2)
Re: (Score:2)
Aye. The result isn't surprising at all. In fact it is one of the big reasons why science demands that results be reproducible and methods be published.
Re: (Score:2)
"Get only positive results or never get tenure" is a policy that dooms us to this exact course. Publishing is no longer a consequence of having a brilliant idea, but rather a means to an end (keeping your job). The academic community needs to find a metric for researcher quality other than papers published. It's costing everyone the truth.
This sounds an awful lot like something teachers told me in grade school:
"Show me you're busy working on something or I'll send you to the office."
Then, an awful lot like something I heard when working during the dotcom era:
"We need to look like we're doing busy work or we'll all get canned."
The numbers (Score:3)
Re: (Score:3)
That's not the point. The point is that journals need to be clearer about why a paper is retracted. Fraudsters shouldn't be able to hide behind the assumption that a vague retraction notice means someone made an honest error. The authors specifically state that they cannot make statements about the fraud rate because they don't have a good measure of the total number of papers published.
Re: (Score:1)
Before I judge this paper, I'll first wait some time whether it will get retracted because of misconduct. :-)
Re: (Score:2)
Re: (Score:2)
Percentage-wise, we're talking about a very small number of papers. They quote one of the authors:
large "culture of cheating" in school now (Score:3)
The mystery has been how one progresses from a cheating culture in grade school and then loses it by the time one reaches grad school and a professorship. Apparently, few escape this culture. Significant science will eventually face replication attempts, and the fraud will be discovered.
Just stupid (Score:3, Insightful)
Re: (Score:2)
Fair point - but where are these peer reviewers hiding?
Re: (Score:1)
Fair point - but where are these peer reviewers hiding?
In tenure.
Re: (Score:2)
What do you think peer reviewers do? They sure as hell aren't replicating your work! The feedback you get also varies greatly in quality and importance.
From what I've seen, the average reviewer doesn't spend more than an hour or two of their time on you. Even if they set aside a whole week for you, they can't guard against a completely fabricated experiment. Short of some gross or obvious error, I can see how such a thing could easily slip past.
Conference papers are worse -- it doesn't surprise me in the
Re: (Score:2)
Re: (Score:3)
Well, FWIW, if Hendrik Schön [wikipedia.org] hadn't gotten stupid and made some pretty massive (and physics-defying) claims in his papers, and had instead stuck with semi-muddy results that looked pretty (as opposed to sexy) but were harder to replicate, his career would likely have lasted years, if not decades, before he got caught.
It all depends, from the fraudster's point-of-view, whether he wants rockstar status, or to make a comfortable living...
Re: (Score:2)
Hoping to not see a retraction of this (Score:2, Funny)
That would be too ironic.
Misconduct! Fraud! Please ... (Score:5, Informative)
In context -- PubMed has more than 22 million documents and accepts 500,000 a year, according to Wikipedia.
So, to do the math: Number of fraudulent articles, total, = vanishingly small percentage of the total articles.
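Using the figures quoted above (2,047 retractions, and Wikipedia's figures of roughly 22 million PubMed documents and 500,000 new ones per year), the back-of-the-envelope math is a one-liner:

```python
retracted = 2_047        # total retracted articles examined by the study
per_year = 500_000       # approximate new PubMed entries per year (Wikipedia figure)
total_docs = 22_000_000  # approximate total PubMed documents (Wikipedia figure)

# Absurdly pessimistic case: all retractions drawn from a single year's output.
worst_case = 100 * retracted / per_year
# Against the whole database, the share is smaller still.
overall = 100 * retracted / total_docs

print(f"worst case: {worst_case:.2f}% of one year's papers")  # ~0.41%
print(f"overall:    {overall:.4f}% of all papers")            # ~0.0093%
```

Either way you slice it, retractions of any kind are a vanishingly small fraction of the literature.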
Re: (Score:2, Interesting)
Yep. That's a very important, and very *missing* bit of information. Even if *ALL* of the retracted articles were for *blatant* and *intentional* misconduct (not duplicate publication), and all of them were published in the same year, and all of them were in PubMed, that would be a whopping 0.4% fraud rate.
It boggles my mind that this number wasn't asked for by the article's author.
Well, it *should*, but instead I'm just getting more cynical and assuming either incompetence (the author is writing about so
Re: (Score:1)
It boggles my mind that this number wasn't asked for by the article's author.
Not me. There's a blatant and obvious movement going on to discredit science in general. No one mentions how much different medical science is from physical science when they talk about this either. Find one bad scientist and they think they've won. Guilt by association.
Re: (Score:3)
It boggles my mind that this number wasn't asked for by the article's author.
Not me. There's a blatant and obvious movement going on to discredit science in general. No one mentions how much different medical science is from physical science when they talk about this either. Find one bad scientist and they think they've won. Guilt by association.
Here's a case in point. Someone does research on fraud in science and the first thing that you and the parent poster think, "What is the ulterior motive?" That's just another anti-scientific attitude.
Re: (Score:2, Informative)
> So, to do the math: Number of fraudulent articles, total, = vanishingly small percentage of the total articles.
Those are only the ones that get discovered. I roll my eyes often when I read medical papers. The statistics are frequently hopelessly muddled (and occasionally just plain wrong on their face), the studies are set up poorly (as in, I would expect better study designs from first-year students), or they are obvious cases of plagiarism.
EX: does early fertility predict future fertility. "We div
Re: (Score:2)
That assumes all fraudsters get caught, which, given the situation in medical science, is very far from the truth. The paper doesn't talk about the number of erroneous articles, only the ratio between the number of frauds and the number of genuine mistakes.
Re: (Score:2)
I didn't say that fraud does not exist, or that there isn't pressure to produce publishable results that might affect accuracy or ethics (on occasion.) I said that this is a much smaller proble
Re: (Score:2)
http://pmretract.heroku.com/byyear [heroku.com]
Seems like despite the small percentage (say, like CO2 ppm in the atmosphere), the trend is alarming.
Re: (Score:2)
Re: (Score:1)
Except that my point about the article -- that it implies that there is lots of fraud in science -- has already been made by the fact that a fair number of commenters jumped right to that unproven implication.
And it would be quite reasonable to complain about such a study of cancer deaths if the article implied that the deaths were substantially greater than might be expected in the general population, without offering evidence.
And that's why.... (Score:4, Insightful)
this [slashdot.org] is a bad idea.
Re: (Score:2)
Why? Are you suggesting that the scientific studies are suddenly somehow not living up to standards of repeatability and peer-review?
We already have ways of dealing with these issues...
Are the retracted articles localized? (Score:1)
It would be fun to know how many of those were made in China (a country with a record of forged and fraudulent papers) and how many are from the US.
FTFY (Score:3)
Other than Hendrik Schön, are there researchers in math or physics as likely to commit misconduct?
http://www.nature.com/nature/journal/v418/n6894/full/418120a.html [nature.com]
Re: (Score:2)
Life sciences. Not medical. That was in the first sentence of the summary.
Yes, math and physics have issues with misconduct. The article you link to mentions several physical scientists who think it's a problem. You identified a famous one. Retraction Watch lists others, quite a few in chemistry for some reason. Complete fabrication might be a bit less common for the reasons mentioned in your article, but I have no doubt that there's data pruning, faking extra results because you don't have time to do
Re: (Score:2)
They were unintentional errors (Score:2)
It's not like they *intended* to get caught.
So what? (Score:2)
If a researcher follows proper procedure but ends up with an incorrect result, it's still valid science. Perhaps it's the exception to some theory that will lead to later breakthroughs in the future. Simply being incorrect is not a reason to retract. Rather, a retraction is wiping the slate clean, hoping to forget that the research was ever done. The only reason to do that is if the research itself was unethical.
Suspected fraud? (Score:2)
An honest journalist would have separated "demonstrated fraud" from "suspected fraud".
Study has since been retracted (Score:1)
Turns out they made up all the retraction numbers
Because ordinary errors don't lead to retractions (Score:5, Informative)
You might be tempted to think that this means ordinary errors aren't as common as we thought. Lots of papers - actually most papers, at least in medicine - are wrong for reasons like the author being confused, doing the statistics wrong, or using a type of experiment that can't support the conclusions drawn. But merely publishing a paper that's bullshit? That usually isn't enough to trigger a retraction, because retracting papers looks bad for the journals. Only an accusation of Serious Willful Misconduct can reliably force a retraction.
Re: (Score:2)
You might be tempted to think that this means ordinary errors aren't as common as we thought. Lots of papers - actually most papers, at least in medicine - are wrong for reasons like the author being confused, doing the statistics wrong, or using a type of experiment that can't support the conclusions drawn. But merely publishing a paper that's bullshit? That usually isn't enough to trigger a retraction, because retracting papers looks bad for the journals. Only an accusation of Serious Willful Misconduct can reliably force a retraction.
Bingo. Mod to +5.
Re: (Score:3)
Parent is right. Small errors which don't affect the outcome are published as short "correction" notes. Larger, more subtle errors are corrected when the author, or whoever noticed the error, writes a new paper which critiques the old one. But the original paper remains, because it's a useful part of the dialogue.
(And *that* is why you should always do a reverse bibliography search on any paper you read.)
We apologize for the misconduct. (Score:3)
The researchers hired to write papers after the other papers had been retracted, wish it to be known that their papers have just been retracted.
The new papers have been completed in an entirely different style at great expense and at the last minute.
---
Mynd you, m00se bites Kan be pretty nasti ...
academic tenure is an elitist system (Score:1)
The whole idea of an academic ecosystem distinct from the reality that the rest of the world operates in is an elitist adaptation of medieval socio-political structures. Granting someone an insulated job from which they cannot be removed is ridiculous under any conditions. Whether someone publishes a peer-reviewed article on something is irrelevant to whether they know what they are talking about in the current model.
The REAL peers are the folks doing work in the profession day in and day out. As a rule
Re:academic tenure is an elitist system (Score:4, Insightful)
The REAL peers are the folks doing work in the profession day in and day out.
As an astrophysicist in a research University, I'd like to know where these REAL peers are. I thought I was the expert, but now you tell me there's someone working hard at an astrophysics day job — so hard, in fact, they're too busy to review the papers I write while quaffing champagne by the bucket-load in the penthouse suite of my ivory tower.
I'm all ears.
Re: (Score:3)
The REAL peers are the folks doing work in the profession day in and day out. As a rule most peer reviews are conducted by people with a decidedly academic focus - the experts in the field are working day jobs that don't afford them time to participate in silly self congratulatory exercises.
And in most scientific fields, those folks are overwhelmingly to be found at academic institutions, and most of those who aren't in academia are in government. Corporate R&D is almost all "D" these days. There used to be a lot more research and publication, and peer review, by people outside academia--in light of your username, you might want to consider the history of Bell Labs, and how sad that history's been in recent years.
The way it should be. (Score:2)
Getting it wrong is an important part of doing science. Papers with errors should be corrected by new publications, not retracted. The incorrect paper inspired the correct one, and so is a useful part of the dialogue. Also, anyone else who has the same wrong idea can follow the paper trail, see the correction, and avoid making the same mistake again.
Classic but extreme example: the Bohr model of the atom, with the electrons orbiting the nucleus like planets around a star. It's wrong. Very wrong. But we s
Re: (Score:2)
We also teach it because it works. Sure it's wrong. Sure it gives the wrong answers in a lot of cases. But it gives the right answers for some useful cases.
And it isn't a classical description. It's a quantum theory (not quantum mechanics though).
Scientific Fraud or MEDICAL Fraud? (Score:2)
Re: (Score:2)
Because http://www.ncbi.nlm.nih.gov/pubmed/23019611 [nih.gov] is a medical science paper?
Re: (Score:2)
Re: (Score:2)
Yeah, my day job involves processing pubmed data. It's a true PITA when irrelevant (for the stuff we care about) stuff like that comes in and messes with the author lists. And it's not uncommon...
Peer review design (Score:2)
One part of the problem is that peer review is set up to catch mistakes, not really to vet for misconduct. I have no idea what would be required to properly vet for misconduct, but I'm guessing it would be a good idea to statistically analyse any numbers presented; that should catch the most blatant cheaters.
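One standard example of "statistically analysing the numbers" (my suggestion, not something from the article) is checking leading digits against Benford's law: many naturally occurring datasets follow it, while naively fabricated figures often don't. A rough stdlib-only sketch, with the critical value 15.51 (chi-square, df = 8, alpha = 0.05) hard-coded as an assumption:

```python
import math
from collections import Counter

def benford_chi2(values):
    """Chi-square statistic of observed leading digits vs Benford's law."""
    counts = Counter(int(str(v)[0]) for v in values)  # leading digit of each positive int
    n = len(values)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)  # Benford: P(d) = log10(1 + 1/d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2

CHI2_CRIT = 15.51  # df = 8, alpha = 0.05

natural = [2**k for k in range(1, 101)]  # powers of 2 famously follow Benford
fabricated = list(range(1, 10)) * 10     # uniform leading digits: a classic red flag

print(f"powers of 2: chi2 = {benford_chi2(natural):.2f}")   # well below the threshold
print(f"uniform:     chi2 = {benford_chi2(fabricated):.2f}")  # well above it
```

It's only a screen, of course; it flags suspiciously uniform numbers for a human to look at, it doesn't prove fraud.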
Re: (Score:3, Funny)
You've discovered representative traits of different societies. In many Asian societies, individual achievement is valued highly, so each individual must work the hardest to be outstanding. In many Indian societies, the collective effort is what's valued, so a team gathering bits and pieces from myriad sources and reassembling them into a new product is the respectable path to success. In many European and American societies, slacking off and blaming others for the consequences is a venerated tradition.