Japan Medicine Science

A New Record For Scientific Retractions? 84

sciencehabit writes "An investigating committee in Japan has concluded that a Japanese anesthesiologist, Yoshitaka Fujii, fabricated a whopping 172 papers over the past 19 years. Among other problems, the panel, set up by the Japanese Society of Anesthesiologists, could find no records of patients and no evidence medication was ever administered. 'It is as if someone sat at a desk and wrote a novel about a research idea,' the committee wrote in a 29 June summary report."
  • by OzPeter ( 195038 ) on Tuesday July 03, 2012 @10:00AM (#40527117)

    From TFA

    German anesthesiologist Joachim Boldt is believed to hold the dubious distinction of having the most retractions—about 90. Boldt's scientific record also came under fire several years ago by some of the same journal editors questioning Fujii's work.

    Is this coincidence or a pattern? I have no idea how journal publishing is supposed to work, but being the "victim" of the two most prolific forgers leaves me a little suspicious of the quality of publishing in general.

  • by AB3A ( 192265 ) on Tuesday July 03, 2012 @10:07AM (#40527177) Homepage Journal

    So this guy was writing, what, approximately nine or ten papers a year on average? Was anyone paying attention? Didn't anyone notice something strange about his "discoveries?"

    What does that say about the field of academic medical research?

  • by MadKeithV ( 102058 ) on Tuesday July 03, 2012 @10:38AM (#40527627)

    Is this coincidence or a pattern? I have no idea how journal publishing is supposed to work, but being the "victim" of the two most prolific forgers leaves me a little suspicious of the quality of publishing in general.

    This could easily be a case of "fool me once, shame on you; fool me twice, shame on me." These editors seem to be taking themselves seriously (Slashdot editors, take note :) ). After being caught out by a cheater once, they are probably now going over their past record with a fine-toothed comb, looking for more fakers.
    Looks like their new review process is working better, as they've found another one.
    Maybe other journals should now be giving suspiciously prolific submitters a closer look as well.

  • Re:Um (Score:4, Interesting)

    by slew ( 2918 ) on Tuesday July 03, 2012 @11:04AM (#40528101)

    I'm not so sure that fraud is more or less prevalent among academic scientists than commercial scientists. As you alluded to, peer review is not designed to catch fraud. If a researcher publishes in an obscure backwater field that no one is likely to try to replicate, he could go on for a long while without being caught or even considered incompetent. The same is true for a commercial scientist (witness the cosmetic and nutritional supplement industries).

    On the other hand, scientists in "hot" fields will have hordes of researchers around them, and if you do research in an uninteresting niche of such a field, you might also escape detection for a while (or not: as Bayer and Pfizer have recently let slip, most of the research they internally attempted to replicate while looking for new drug avenues gave non-replicable results).

    People are people, whether in academia or in industry...

  • by Cassini2 ( 956052 ) on Tuesday July 03, 2012 @11:10AM (#40528201)

    A professor at my local university periodically gets undergraduate and starting graduate students to try reproducing the work of interesting research papers.

    One engineering professor at my local institution figures about one-third of the papers can be reproduced to demonstrate the effect in question.

    At most schools, graduate students are required to publish a paper on a "new" idea in an academic journal in order to receive their degree. Journals must therefore exist to collect all the ideas students generate, and this is the driver behind the modern academic journal system. Huge pressure is put on students to describe "new" ideas, so the paper must sell itself as being "new".

    Complicating this effort is the reality that most students are working on student projects. These projects don't have the resources (time and money) needed to develop ideas into fully effective and reproducible results, so the results from these projects are fundamentally suspect. Also, the students working on a project may not fully understand all the effects relevant to their research (because they are students). In particular, many students do not understand statistics. As a result, students deliberately or inadvertently conduct biased experiments to show the desired effect, because the academic requirement is a "new" idea and not a "new and reproducible" idea.

    The result is a collection of papers that all describe themselves as having "new" and "brilliant" ideas on topics that cannot be easily reproduced. When the ideas are reproduced, practicing engineers quickly discover they are reproducing a marginal student project. It is actually really tough to find reproducible, inventive and commercializable research ideas in academic journals, because of all the noise.

  • Re:Um (Score:5, Interesting)

    by rgbatduke ( 1231380 ) <rgb@@@phy...duke...edu> on Tuesday July 03, 2012 @11:11AM (#40528239) Homepage
    I disagree about the rarity, on the basis of empirical evidence -- for example the recent paper in Science (IIRC, sorry I don't have the reference handy) in which a cancer researcher failed to replicate 46 out of 53 published papers -- all of them peer reviewed -- before embarking on new research in the field. Similar meta-studies have turned up astounding rates of non-reproducible results in other fields (some more than others -- sociology and IIRC social psychology topping the non-medical list).

    One problem is that we have constructed a system that rewards the publication of positive results and punishes negative results, published or unpublished. Punishes, as in it makes or breaks the entire career of a young researcher if the negative result occurs when they are up for tenure. Rewards, as in it ensures research funding and professional advancement as long as positive results keep flowing out.

    Another fundamental problem that peer review has a terrible time with is confirmation bias; science in general has a serious problem with it. If one embarks on a study seeking evidence for some causal linkage associated with a phenomenon in a general population where that phenomenon occurs, one can always find exemplars that support the hypothesis. Lacking actual work to replicate the results using sound methodology (e.g. double-blinded and/or conducted with competent statistical analysis, still as rare as hen's teeth in science in general, because it is difficult to do statistics correctly in a complex problem -- not easy, and certainly not "easy" as in covered by the one or two undergrad stats courses the researcher has probably ever taken), confirmation bias can not only worm its way into the literature, it can come to dominate entire fields when a significant fraction of the scientists who do the reviewing for both publication and grants are "descended" from one or two original researchers and their papers. It can take decades for this to be discovered and worked out in the wash.

    Peer review works better in some disciplines than others. In math it works well, because there is literally nothing up a publisher's sleeve -- fraudulent publication is all but impossible, and even mistaken publication is relatively rare and usually involves math so difficult that even the reviewers have a hard time following it. Physics and the very hard sciences are also fortunate in that it works decently (although less perfectly), at least where there is competition and a proper critical/skeptical eye applied to results new and old. At least there, a mix of laboratory replication and strong consistency requirements usually keeps one out of the worst trouble.

    A simple rule of thumb: the more a result relies on population studies, especially ones conducted with any kind of selection process (or, worse, a selection process plus actual modification of the data according to some heuristic or correction process), where the study itself is conducted from the beginning to confirm a given hypothesis, the more likely it is that the published result is bullshit that will eventually, possibly decades later, turn out to be completely wrong. If there are enough places for a thumb to be subtly placed on the scales, and the owner of the thumb has any sort of vested or open interest in the outcome, it is even odds or better that a teensy bit of pressure will be applied, quite possibly without the researcher even intending it. Confirmation bias is not necessarily "fraud" -- it is just bad science, science poorly done.
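
    As a toy illustration of that rule of thumb, here is a minimal Python sketch with entirely made-up numbers (not data from any of the studies discussed here). It generates a population in which the "exposure" has no effect at all, then compares the honest estimate against the one you get after quietly dropping the subjects that don't fit the hypothesis.

        import random
        import statistics

        random.seed(42)

        # Simulated population: the "exposure" has NO real effect;
        # outcomes are pure noise, independent of exposure.
        N = 10000
        exposure = [random.random() for _ in range(N)]
        outcome = [random.gauss(0.0, 1.0) for _ in range(N)]

        def corr(xs, ys):
            mx, my = statistics.mean(xs), statistics.mean(ys)
            sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
            return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

        # Honest analysis: use everybody. The correlation comes out near zero.
        print("full sample correlation: %.3f" % corr(exposure, outcome))

        # Thumb on the scale: keep only subjects whose data "fit" the hypothesis
        # (high exposure with high outcome, low exposure with low outcome) and
        # explain the rest away as unreliable reporting.
        kept = [(x, y) for x, y in zip(exposure, outcome)
                if (x > 0.5 and y > 0) or (x <= 0.5 and y <= 0)]
        kx, ky = zip(*kept)
        print("kept %d of %d subjects" % (len(kept), N))
        print("cherry-picked correlation: %.3f" % corr(kx, ky))

    No individual data point is altered; the selection rule alone manufactures a strong positive "correlation".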

    There is a move afoot to do something about this. We know that it happens. We know why it happens. We know a number of things that we can do to reduce the probability of it happening -- for example requiring the open publication of all data and methods contemporary with any paper produced from them, permitting absolutely anybody to look at them and see if t

  • Re:Um (Score:5, Interesting)

    by Blahah ( 1444607 ) on Tuesday July 03, 2012 @11:55AM (#40529017)

    Great summary here. As a young scientist I see this as a serious problem for the credibility of science in general. The gross fraud cases seem to be mostly limited to a few fields -- anaesthesiology in particular has had a few major bouts of retractions recently.

    As you point out, medicine and other fields involving population studies are much more prone to confirmation bias. In a similar vein, any field where the cutting edge involves extremely expensive experiments is open to direct abuse, or to failures of scrutiny, because it's prohibitively expensive to replicate the experiments. Open data is one part of the solution, but to understand the data you need a good, precise methodology published along with it, and often methods are so lacking in detail that they could never be accurately replicated. I think openness with data and openness with methods need to go hand in hand.

    The major lesson I've taken from all this is not to allow myself space for confirmation bias. In my field that means always performing the complete set of experiments needed to confirm the causative link being explored, not just gathering a fat load of correlations. That needs to go hand in hand with a thorough understanding of the relevant statistics, not just blindly working with standard confidence intervals.
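
    As a concrete (and entirely hypothetical) illustration of why a fat load of correlations is dangerous on its own: screen enough unrelated variables against an outcome at the usual 95% level and a few of them will look "significant" by chance alone. A rough Python sketch:

        import random

        random.seed(1)

        # 100 candidate "predictors", none of which has any real relationship
        # to the outcome; everything here is independent noise.
        n_subjects, n_predictors = 50, 100
        outcome = [random.gauss(0, 1) for _ in range(n_subjects)]

        def abs_corr(xs, ys):
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
            sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
            return abs(sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n * sx * sy))

        # For n = 50, |r| above roughly 0.28 passes a two-tailed test at p < 0.05.
        threshold = 0.28
        hits = sum(1 for _ in range(n_predictors)
                   if abs_corr([random.gauss(0, 1) for _ in range(n_subjects)], outcome) > threshold)

        # Expect around 5 spurious "discoveries" out of 100 pure-noise predictors.
        print("%d of %d noise predictors look significant" % (hits, n_predictors))

    Report only the hits and they read exactly like real findings; the fix is to plan the full set of tests up front and account for the multiple comparisons.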

  • by Cute Fuzzy Bunny ( 2234232 ) on Tuesday July 03, 2012 @12:43PM (#40529877)

    I saw a study done recently in which the author found that the results of many studies are quite difficult to reproduce, and that the more you tried to reproduce them, and the more you talked publicly about your results, the harder reproducing them became.

    The problem is that researchers usually aren't approaching a study as "let's do xxx and see what happens, then write about that." They've been funded by someone who has a particular result or proof point they'd like to see, or the person running the study has a vested interest in the outcome, or at the very least an expectation of what they think they'll find.

    Our happy little brains then lead us to that conclusion or desired outcome, and we'll gleefully ignore the things that detract from the results.

    And yes, the guy who did this study of study results also found that reproducing his own results became more difficult as time went by.

    For a couple of good examples of how this works, see the studies on salt and saturated fats in our diet. The Intersalt study folks threw out 40% of the data, which showed that salt had no effect on health, suggesting that since it's "well known" that salt affects your health, the people who weren't affected must have been lying about their salt consumption. So almost half the data suggested no effect, but it was discarded because it didn't fit the desired conclusion. The same thing happened with saturated fats. The original researcher took 21 countries' worth of data, but only 5 of the 21 showed health issues that were allegedly correlated with the consumption of saturated fats; the other 16 showed no correlation at all. In fact, the real correlation was with high-calorie, high-sugar/carb, highly processed foods, not with saturated fats. There are cultures that eat 50-70% of their food intake as fat and have little to no cancer, obesity or diabetes. Take one of those people, move them to the US or England, and put them on our diet? They get fat and sick.
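
    To put rough numbers on what discarding the null 40% does to an estimate (hypothetical figures in Python, not the actual Intersalt data):

        # Hypothetical illustration only; the numbers are invented, not Intersalt's.
        n_total = 1000
        n_null = int(0.4 * n_total)        # 40% of subjects: no measurable response
        n_resp = n_total - n_null          # 60% of subjects: some response

        effect_resp = 5.0                  # assumed +5 mmHg effect in responders
        effect_null = 0.0                  # 0 mmHg in the discarded group

        honest = (n_resp * effect_resp + n_null * effect_null) / n_total
        after_discard = effect_resp        # the null 40% is thrown out as "liars"

        print("honest estimate:  %.1f mmHg" % honest)         # 3.0
        print("after discarding: %.1f mmHg" % after_discard)  # 5.0

    Nothing about the kept data changed; the estimate jumps by two-thirds purely because the inconvenient 40% went in the bin.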

    Of course, even when the study obviously sucks, the press can be counted on to come to conclusions that the study didn't even address.
