
How Common Is Scientific Misconduct?

Posted by Soulskill
from the don't-steal-plutonium-from-terrorists dept.
Hugh Pickens writes "The image of scientists as objective seekers of truth is periodically jeopardized by the discovery of a major scientific fraud. Recent scandals like Hwang Woo-Suk's fake stem-cell lines or Jan Hendrik Schön's duplicated graphs showed how easy it can be for a scientist to publish fabricated data in the most prestigious journals. Daniele Fanelli has an interesting paper on PLoS ONE in which he performs a meta-analysis synthesizing previous surveys to determine how frequently scientists fabricate and falsify data, or commit other forms of scientific misconduct. A pooled, weighted average of 1.97% of scientists admitted to having fabricated, falsified or modified data or results at least once — a serious form of misconduct by any standard — and up to 33.7% admitted other questionable research practices. In surveys asking about the behavior of colleagues, admission rates were 14.12% for falsification, and up to 72% for other questionable research practices. Misconduct was reported more frequently by medical/pharmacological researchers than by others. 'Considering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct,' writes Fanelli. 'It is likely that, if on average 2% of scientists admit to have falsified research at least once and up to 34% admit other questionable research practices, the actual frequencies of misconduct could be higher than this.'"
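For readers unfamiliar with the "pooled, weighted average" in the summary: a minimal sketch of the idea is to weight each survey's admission rate by its sample size. The survey figures below are invented for illustration, and Fanelli's paper actually uses a more sophisticated random-effects meta-analysis, so this only conveys the gist:

```python
# Sketch of a sample-size-weighted pooled proportion across surveys.
# NOTE: the numbers below are made up; they are NOT from Fanelli's paper.

surveys = [
    # (respondents admitting fabrication/falsification, survey sample size)
    (4, 300),
    (11, 500),
    (2, 150),
]

admitted = sum(k for k, n in surveys)
total = sum(n for k, n in surveys)
pooled = admitted / total  # larger surveys pull the estimate harder

print(f"pooled admission rate: {pooled:.2%}")
```

A true random-effects model would also account for between-survey variance rather than treating all surveys as draws from one population.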

  • Yeah... (Score:3, Interesting)

    by dword (735428) on Saturday May 30, 2009 @11:45AM (#28149731)

    ... and 78% of the surveys are made up on the spot.

  • by JanneM (7445) on Saturday May 30, 2009 @12:00PM (#28149855) Homepage

    2% - one in 50 - committing fraud to get ahead (or simply to keep their job) in a very competitive, volatile career environment. Sounds like it's in the right ballpark, and probably comparable to other professions. Some people are so career and status driven, and so unconcerned with the effects of their actions on other people, that they will break rules and cut corners no matter what the field.

    I do question the other figures, though, simply because "questionable research conduct" is such a nebulous category. You can delimit it in very different ways, all perfectly reasonable. You could even decide which number you want first and then define the term so that you reach it (a practice that would itself most likely fall under the term). Notably, the author excludes plagiarism, even though that is a serious offense in research for good reason, and one I'd expect most surveys to include, not drop.

    Also, the numbers for incidents witnessed among colleagues are rather pointless, since there is no indication of how many colleagues each respondent has. If each participant has had a total of at least eight colleagues over their career so far, then the 14% rate fits well with the self-reported 2% above. But of course, participants do not know how many incidents they missed, and the number of times they mistakenly thought fraud was taking place is also unknown. I would be very hesitant to read anything at all into the numbers about witnessed incidents.
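The "eight colleagues" figure in the comment above is the commenter's assumption, but the arithmetic behind it is easy to check: if 2% of scientists have falsified data and offenders are distributed independently, the chance that a respondent has witnessed at least one among eight colleagues is:

```python
# Chance of at least one offender among n colleagues, assuming independence.
# p_fraud comes from the survey summary; n_colleagues is the commenter's guess.

p_fraud = 0.02
n_colleagues = 8

p_at_least_one = 1 - (1 - p_fraud) ** n_colleagues

print(f"P(witnessed >= 1 offender): {p_at_least_one:.1%}")  # roughly 15%
```

That lands close to the reported 14.12% colleague-admission rate, which is the commenter's point: the two numbers are not independent evidence.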

  • by Jawn98685 (687784) on Saturday May 30, 2009 @12:16PM (#28149971)
    The funding for the "research" is provided by an entity with an agenda other than pure research, e.g. one with a vested interest in a particular outcome or finding. Nowhere is this more common than in the U.S. pharmaceutical industry, where entire ersatz journals have been published to provide the appearance of well-documented and peer-reviewed research.
    Beyond jailing those involved in such grand misconduct, I don't know where to draw the line, but I believe that separating profit from research, as far as possible, is a good first step. And yes, I am indeed advocating that medical research be "socialized". I have nothing against corporate profits, but when truth, not to mention the public good, takes a back seat to profits, the system is broken when viewed from any impartial perspective.
  • Re:It's quite common (Score:5, Interesting)

    by speculatrix (678524) on Saturday May 30, 2009 @12:19PM (#28149999)
    When I did chemistry at sixth-form college (a UK term; in the US I suppose you'd call it senior high?), I recall doing a practical test in chemistry (titration) where you had some mystery chemicals and a colour change. The experiment was rigged so that it looked somewhat like a reaction we'd already seen, but was in fact something quite different. The instructions were to make accurate measurements first, draw the appropriate graphs and *then* speculate on the mystery ingredients.

    It turned out that we'd never encountered those particular reagents before, and if you did the test accurately you'd have realised it wasn't the old familiar reaction but had to be something new: the figures simply would not add up. However, a significant number of people rejigged their results to match the known reaction and failed the test outright, for two reasons: first for failing to make accurate measurements, and second for faking the results.
  • gray area (Score:5, Interesting)

    by bcrowell (177657) on Saturday May 30, 2009 @12:26PM (#28150053) Homepage
    There's a big gray area. For instance, the Millikan oil drop experiment [wikipedia.org], which established quantization of charge, was arguably fraudulent. Millikan threw out all the data he didn't like, and then stated in his paper that he had never thrown out any data. His result was correct, but the way he went about proving it was ethically suspect.
  • by Paul Fernhout (109597) on Saturday May 30, 2009 @12:38PM (#28150135) Homepage

    From:
        http://www.its.caltech.edu/~dg/crunch_art.html [caltech.edu]
    """
    The crises that face science are not limited to jobs and research funds. Those are bad enough, but they are just the beginning. Under stress from those problems, other parts of the scientific enterprise have started showing signs of distress. One of the most essential is the matter of honesty and ethical behavior among scientists.

    The public and the scientific community have both been shocked in recent years by an increasing number of cases of fraud committed by scientists. There is little doubt that the perpetrators in these cases felt themselves under intense pressure to compete for scarce resources, even by cheating if necessary. As the pressure increases, this kind of dishonesty is almost sure to become more common.

    Other kinds of dishonesty will also become more common. For example, peer review, one of the crucial pillars of the whole edifice, is in critical danger. Peer review is used by scientific journals to decide what papers to publish, and by granting agencies such as the National Science Foundation to decide what research to support. Journals in most cases, and agencies in some cases operate by sending manuscripts or research proposals to referees who are recognized experts on the scientific issues in question, and whose identity will not be revealed to the authors of the papers or proposals. Obviously, good decisions on what research should be supported and what results should be published are crucial to the proper functioning of science.

    Peer review is usually quite a good way to identify valid science. Of course, a referee will occasionally fail to appreciate a truly visionary or revolutionary idea, but by and large, peer review works pretty well so long as scientific validity is the only issue at stake. However, it is not at all suited to arbitrate an intense competition for research funds or for editorial space in prestigious journals. There are many reasons for this, not the least being the fact that the referees have an obvious conflict of interest, since they are themselves competitors for the same resources. This point seems to be another one of those relativistic anomalies, obvious to any outside observer, but invisible to those of us who are falling into the black hole. It would take impossibly high ethical standards for referees to avoid taking advantage of their privileged anonymity to advance their own interests, but as time goes on, more and more referees have their ethical standards eroded as a consequence of having themselves been victimized by unfair reviews when they were authors. Peer review is thus one among many examples of practices that were well suited to the time of exponential expansion, but will become increasingly dysfunctional in the difficult future we face.

    We must find a radically different social structure to organize research and education in science after The Big Crunch. That is not meant to be an exhortation. It is meant simply to be a statement of a fact known to be true with mathematical certainty, if science is to survive at all. The new structure will come about by evolution rather than design, because, for one thing, neither I nor anyone else has the faintest idea of what it will turn out to be, and for another, even if we did know where we are going to end up, we scientists have never been very good at guiding our own destiny. Only this much is sure: the era of exponential expansion will be replaced by an era of constraint. Because it will be unplanned, the transition is likely to be messy and painful for the participants. In fact, as we have seen, it already is. Ignoring the pain for the moment, however, I would like to look ahead and speculate on some conditions that must be met if science is to have a future as well as a past.
    """

  • by dbrutus (71639) on Saturday May 30, 2009 @10:25PM (#28154837) Homepage

    Here's a sensible requirement: if you submit a paper, you submit your data and enough information that anybody can rerun your analysis. The whole MBH 98 idiocy was largely about how climate scientists would dance around releasing their data and methods. In the UK, if you take the public's money, you can't do that. In the US you can. The US should follow the UK on this one.
