
A New Record For Scientific Retractions?

Posted by timothy
from the my-study-on-this-record-is-forthcoming dept.
sciencehabit writes "An investigating committee in Japan has concluded that a Japanese anesthesiologist, Yoshitaka Fujii, fabricated a whopping 172 papers over the past 19 years. Among other problems, the panel, set up by the Japanese Society of Anesthesiologists, could find no records of patients and no evidence medication was ever administered. 'It is as if someone sat at a desk and wrote a novel about a research idea,' the committee wrote in a 29 June summary report."

  • Is this Yoshitaka Fujii a worthy novelist? If his writing style is appealing, I see a second career chance :)

  • by crash123 (2523388)
    Why weren't these papers peer reviewed?
    • Re:Um (Score:5, Insightful)

      by Anonymous Coward on Tuesday July 03, 2012 @09:03AM (#40527147)

      Peer review is not designed to catch fraud, although it sometimes does; working on the assumption that every submission may be fraudulent would cost too much, and it still wouldn't stop the cleverer frauds. The only way to catch a clever fraud is to try to replicate the work, and that can only happen after publication, usually when another researcher tries to build on the original result. Do this consistently and all fraud is eventually caught; the best a scientist committing fraud can hope for in the long run is to be remembered as critically incompetent rather than dishonest. For this reason fraud is rare among academic scientists, but is unfortunately more common among their commercial counterparts.

      • by Anonymous Coward

        Ugh -- should have read TFA: "The blog notes that three recently retracted papers only garnered six, four, and three citations. Despite the low impact of the work, ..." So he was not caught because, despite the quantity, few people actually used his work.

      • Re: (Score:3, Insightful)

        by J'raxis (248192)

        Unfortunately, it is upon publication that a study is picked up by the media and exposed to the general public. By the time other scientists try to replicate the experiments and find they're bullshit, it's "too late" in a sense.

        Some sort of independent verification needs to be worked into the process before a new study is put out there for general consumption. This either means verification before "publication", or it means the media needs to learn (hah) not to cite studies that haven't been independently verified, no matter how

        • Re:Um (Score:4, Insightful)

          by ceoyoyo (59147) on Tuesday July 03, 2012 @09:42AM (#40527707)

          "Some sort of independent verification needs to be worked into the process before a new study is put out there for general consumption."

          The media, and the public, need to learn. Publishing and dissemination are a critical part of science and shouldn't be compromised to make some reporters' jobs easier. Fraud isn't even the big problem with jumping to conclusions based on unverified studies - FAR more studies will be incorrect simply due to honest false positives than to fraud.
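
          To put rough numbers on the false-positive point, here is a back-of-the-envelope Python sketch; the 10% prior on true hypotheses and the 80% power are invented, illustrative values, not figures from the article:

              # How many "positive" findings are wrong even with zero fraud?
              alpha = 0.05   # conventional significance threshold (false-positive rate)
              power = 0.80   # assumed chance of detecting a real effect
              prior = 0.10   # assumed fraction of tested hypotheses that are true

              false_pos = alpha * (1 - prior)   # honest false positives
              true_pos = power * prior          # genuine discoveries

              print(f"Wrong positives: {false_pos / (false_pos + true_pos):.0%}")
              # ~36% of positive results are wrong under these assumptions,
              # dwarfing any plausible rate of outright fraud.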

        • Obviously none of these papers sounded sensationally important, judging from the number of times they were cited. If the papers had attracted a lot of attention, no doubt they would have been discredited quickly by people trying to reproduce the results.

        • by DragonWriter (970822) on Tuesday July 03, 2012 @10:47AM (#40528863)

          Unfortunately, it is upon publication that a study is picked up by the media and exposed to the general public. By the time other scientists try to replicate the experiments and find they're bullshit, it's "too late" in a sense.

          Some sort of independent verification needs to be worked into the process before a new study is put out there for general consumption.

          The whole point of the scientific method is that putting work out for general consumption is the best avenue for independent verification (to adapt a phrase familiar to this audience, one might think of it as "with many eyes, all non-reproducible results are shallow".)

          The fact that reporters covering science in the popular media lack a basic understanding of the scientific method is a reason to change something, but the thing that needs change isn't scientific publishing.

          • by wrook (134116)

            They also have no incentive to begin to understand the scientific method. Reports of amazing discoveries followed up by scandals and retractions lead to more sales than waiting to see if anyone is able to duplicate the results.

            Having said that, there are occasional media figures who have a very solid grasp of science. It's unfair to paint everyone with the same brush. But the general state of affairs is somewhat grim and I don't see it getting much better.

      • Re:Um (Score:4, Interesting)

        by slew (2918) on Tuesday July 03, 2012 @10:04AM (#40528101)

        I'm not so sure that fraud is any more or less prevalent among academic scientists than commercial ones. As you alluded to, peer review is not designed to catch fraud. A researcher publishing in an obscure backwater field that no one would likely try to replicate could go on for a long while without being caught, or even being considered incompetent. The same is true for a commercial scientist (witness the cosmetic and nutritional supplement industries).

        On the other hand, scientists in "hot" fields will have hordes of researchers in that field, and if you do research in an uninteresting niche of that field, you might also escape detection for a while (or not: as Bayer and Pfizer have recently let slip, most research that they internally attempted to replicate, looking for new avenues for drugs, had non-replicable results).

        People are people, whether in academia or in industry...

        • I expect fraud will be most common where there is the most motivation and the most opportunity. Motivation can be a direct profit motive - so drug tests need special scrutiny. Motivation can also be for career / academic success - so universities might want to carefully consider policies that promote scientists based heavily on their number of publications.

          Fields where small groups of researchers work on expensive-to-reproduce projects are more suspect. Medical tests requiring large numbers of subjects are

      • Re:Um (Score:5, Interesting)

        by rgbatduke (1231380) <rgb@[ ].duke.edu ['phy' in gap]> on Tuesday July 03, 2012 @10:11AM (#40528239) Homepage
        I disagree about the rarity, on the basis of empirical evidence -- for example, the recent paper in Science (IIRC; sorry, I don't have the reference handy) in which a cancer researcher, prior to embarking on new research in the field, failed to replicate 46 out of 53 published papers -- all of them peer reviewed. Similar meta-studies have turned up astounding rates of non-reproducible results in other fields (some more than others -- sociology and, IIRC, social psychology topping the non-medical list).

        One problem is that we have constructed a system that rewards the publication of positive results and punishes negative results, published or unpublished. Punishes as in makes or breaks the entire career of a young researcher, if the negative result occurs when they are up for tenure. Rewards as in ensures research funding and professional advancement as long as positive results keep flowing out.

        Another fundamental problem that peer review has a terrible time with is confirmation bias. Science in general has a serious problem with confirmation bias. If one embarks on a study seeking evidence for some causal linkage in a general population where the phenomenon occurs, one can always find exemplars that support one's hypothesis. Lacking actual work to replicate the results using sound methodology (e.g. double-blinded and/or conducted with competent statistical analysis, something still as rare as hen's teeth in science in general, because doing statistics correctly in a complex problem is difficult -- not easy, and certainly not as easy as what is covered in the one or two undergrad stats courses the researcher has probably ever taken), confirmation bias can not only worm its way into the literature, it can come to dominate entire fields, as a significant fraction of the scientists who do the reviewing for both publication and grants are "descended" from one or two original researchers and their papers. It can take decades for this to be discovered and to work out in the wash.

        Peer review works better in some disciplines than others. In math it works well, because there is literally nothing up a publisher's sleeve -- fraudulent publication is indeed impossible, and even mistaken publication is relatively rare, conditional on the math being so difficult that even the reviewers have a hard time following it. Physics and the very hard sciences are also fortunate in that it works decently (although less perfectly), at least where there is competition and a proper critical/skeptical eye applied to results new and old. There, a mix of laboratory replication and strong requirements of consistency usually keeps one out of the worst trouble.

        A simple rule of thumb is: the more a result relies on population studies -- especially ones conducted with any kind of selection process, or worse, a selection process plus actual modification of the data according to some heuristic or correction process, where the study itself is conducted from the beginning to confirm some given hypothesis -- the more likely it is that the published result is bullshit that will eventually, possibly decades later, turn out to be completely wrong. If there are enough places for a thumb to be subtly placed on the scales, and the owner of the thumb has any sort of vested or open interest in the outcome, it is even odds or better that a teensy bit of pressure will be applied, quite possibly without the researcher even intending it. Confirmation bias is not necessarily "fraud" -- it is just bad science, science poorly done.
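
        You can watch confirmation bias do its work in a toy Monte Carlo. This Python sketch uses entirely invented data -- a random outcome and twenty meaningless binary traits -- to show how fishing through subgroups yields publishable-looking "exemplars" from pure noise:

            import random

            # Pure noise: 1000 "patients" with a random yes/no outcome and
            # 20 meaningless binary traits (all numbers invented).
            N, TRAITS = 1000, 20
            patients = [{"outcome": random.random() < 0.5,
                         "traits": [random.random() < 0.5 for _ in range(TRAITS)]}
                        for _ in range(N)]

            # Go fishing: compare the outcome rate with and without each trait.
            for t in range(TRAITS):
                with_t = [p for p in patients if p["traits"][t]]
                without = [p for p in patients if not p["traits"][t]]
                rate_w = sum(p["outcome"] for p in with_t) / len(with_t)
                rate_wo = sum(p["outcome"] for p in without) / len(without)
                if abs(rate_w - rate_wo) > 0.05:  # crude "interesting difference" cutoff
                    print(f"trait {t}: {rate_w:.2f} with vs {rate_wo:.2f} without")

            # A couple of traits typically clear the cutoff on any given run,
            # even though the data contain no effect at all: test enough
            # subgroups and the thumb never has to touch the scale deliberately.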

        There is a move afoot to do something about this. We know that it happens. We know why it happens. We know a number of things that we can do to reduce the probability of it happening -- for example requiring the open publication of all data and methods contemporary with any paper produced from them, permitting absolutely anybody to look at them and see if t
        • Re:Um (Score:5, Interesting)

          by Blahah (1444607) on Tuesday July 03, 2012 @10:55AM (#40529017)

          Great summary here. As a young scientist I see this as a serious problem for the credibility of science in general. The gross fraud cases seem to be mostly limited to a few fields -- anaesthesiology in particular has had a few major bouts of retractions recently.

          As you point out, medicine and other fields involving population studies are much more prone to confirmation bias. In a similar vein, any field where the cutting edge involves extremely expensive experiments is open to direct abuse, or to failures of scrutiny to catch mistakes, because it's prohibitively expensive to replicate experiments. Open data is one part of the solution to that problem, but to understand the data you need a good, precise methodology published along with it, and often methods are so lacking in detail that they could never be accurately replicated. I think openness with data and openness with methods need to go hand in hand.

          The major lesson I've taken from all this is not to allow myself space for confirmation bias. In my field that means always performing the complete set of experiments to confirm the causative link you are exploring, not just getting a fat load of correlations. That needs to go hand in hand with a thorough understanding of the relevant statistics, not just blindly working with standard confidence intervals.

          • Precisely. Indeed, a major part of the solution is to make us scientists our own greatest skeptics: to mistrust our own pet ideas, to hesitate to a fault before claiming "proof", even when evidence or model computations seem to support it.

            The latter are an entire category in and of themselves. I "do" predictive modelling and moderately advanced statistics on a professional basis, and even have a patent pending in the field. I've done Monte Carlo computations in physics for well over a decade, and know a lot about
            • by Belial6 (794905)
              Nice link. It is kind of funny that while dismissing witch doctors as obviously fake, he discusses parapsychology as a legitimate field of study. I don't mean that in a bad way: the fact that it was considered a legitimate field of study at the time drives his point home that much better.
              • It is a legitimate field of study. One where negative results should have been (and eventually were) published. And also one where confirmation bias produced "surprising" results indeed -- until you looked for the man behind the curtain.
                • by Belial6 (794905)
                  Yes, in the same way that studying the magic of witch doctors is also a legitimate field of study. They are equivalent, but at the time of the writing, he clearly did not see this.
        • by Kergan (780543)

          A simple rule of thumb is: the more a result relies on population studies -- especially ones conducted with any kind of selection process, or worse, a selection process plus actual modification of the data according to some heuristic or correction process, where the study itself is conducted from the beginning to confirm some given hypothesis -- the more likely it is that the published result is bullshit that will eventually, possibly decades later, turn out to be completely wrong. If there are enough places for a thumb to be subtly placed on the scales, and the owner of the thumb has any sort of vested or open interest in the outcome, it is even odds or better that a teensy bit of pressure will be applied, quite possibly without the researcher even intending it. Confirmation bias is not necessarily "fraud" -- it is just bad science, science poorly done.

          The more interesting aspect of this is how economic facts [debunkingeconomics.com] have become so rooted, mainstream, and rehashed, through this very same process, that they harden into political ideology... Sad world...

          • Economics is one of the worst offenders, agreed, and the primary target of The Black Swan. But a number of other fields, including some that are supposedly mainstream science, are almost as bad -- and not necessarily intentionally. The saddest thing is that the researchers themselves are often totally oblivious to just how biased and/or weakly founded their own results are, because they always get them "using statistics" in a traditional way, not realizing that they are using Statistics 101 (freshma
    • It seems they were, but catching fabricated results like these isn't exactly easy and it won't happen in the review process.

      In order to catch fabricated results you'd either have to repeat the experiment, which nobody wanted to do since the research was low impact, or catch discrepancies in the data, which was how he was caught out.

  • by starworks5 (139327) on Tuesday July 03, 2012 @08:52AM (#40527017) Homepage

    news at 11pm

    • by devitto (230479)

      Except Americans. Unless it matches their beliefs in religion, politics, nature, and economics.

      • by Bob-taro (996889)

        Except Americans. Unless it matches their beliefs in religion, politics, nature, and economics.

        Given the context of this discussion, is it necessarily a bad thing not to automatically accept as fact anything called "science"?

    • Parent is modded +5 Insightful?! Since when did Slashdot moderators develop an anti-science bent?

      • by ceoyoyo (59147)

        You must be new here.

        The average Slashdotter, or at least the ones who moderate and post, seems to have the "I know all about science/statistics/whatever" and "stupid scientists don't know as much as I do" attitudes. Although you can really replace "science" with just about anything and that statement would probably be true.

      • by digitig (1056110)
        That wasn't "an anti-science bent". It was opposition to using the word "science" as a magical invocation over anything at all, scientific or not, to make people suspend their critical judgement. Using the word "science" as an invocation in that way is all too effective in some circles, and I for one am against it.
      • Re: (Score:2, Insightful)

        Technically atheism is a "belief", since the absence of certain supernatural forces, and of parallel universes purported to be accessible upon death, isn't completely proven.

        When people are prevented from carrying out experiments themselves (nuclear tests are banned by international treaty), cannot carry them out (because they lack the means, or large equipment like the LHC), or simply do not carry them out (out of sheer laziness, or having dropped out of school), then they must take the ones who actually DO carry out scien
        • by Belial6 (794905)
          Holy men don't claim to have performed experiments on your behalf. In religion, no one ever claims to have actually performed an experiment.
          • Right... right... but what about when Scientists indeed CLAIM to have performed an experiment, even though they never did? So instead of demanding belief in fiction with no supporting evidence, we have people demanding acknowledgement of fabricated evidence in support of "a more reasonable" fiction.

            This article points out how a lack of integrity within the scientific community threatens to sabotage the very trust that the public and the 24-hour news cycle would like to place in Science. (even though
        • Sure, everything is a "belief" just like everything is a "theory," but that's just playing with semantics. I agree that you CAN'T verify ANY empirical fact beyond a shadow of a doubt, but that doesn't mean you imagine everything will lose its mass tomorrow and you'll just float off into space, right? (Among a multitude of other possible scenarios)

          You're not making the distinction between reasonable belief and unfounded or poorly founded belief. Reasonable belief follows a chain of reasoning that is at least

          • I'm not arguing in favor of equating Religion and Science. But there's an interesting side effect that retractions and fraud like this can have. It's not so much that "Science" is ruined by these incidents; really, they sabotage the innate credibility of Scientists and, in the minds of some, might reduce them to the same level as holy men by introducing that seed of doubt. When Scientists play the "let's not and say we did" game when it comes to experimentation, and get caught, the tangible evid
  • Not the First Time (Score:4, Informative)

    by Quantum_Infinity (2038086) on Tuesday July 03, 2012 @08:54AM (#40527037)
    During World War II, Americans were very keen and excited to get their hands on scientific data from the Japanese after nuking them, especially all the data from human experiments which were not feasible in the US. When they got the data, they realized most of it was nonsense. The Japanese had been randomly doing experiments on humans without any clear hypothesis or theory, and most of the data did not make much sense.
    • by gl4ss (559668) on Tuesday July 03, 2012 @09:01AM (#40527127) Homepage Journal

      ... and then they gave the data to the CIA to use as a basis for strategy.

  • by Bronster (13157) <slashdot@brong.net> on Tuesday July 03, 2012 @08:55AM (#40527055) Homepage

    And this, ladies and gentlemen, is why real science is done not only by performing the experiment and recording the results, but by writing up your method with sufficient clarity that your results can be replicated by independent researchers.

    Once that has been done enough times, if your method itself is sound, then the results are valid.

    • by digitig (1056110) on Tuesday July 03, 2012 @10:51AM (#40528953)

      And this, ladies and gentlemen, is why real science is done not only by performing the experiment and recording the results, but by writing up your method with sufficient clarity that your results can be replicated by independent researchers.

      Once that has been done enough times, if your method itself is sound, then the results are valid.

      Nope, not good enough. It's not enough to write it up in such a way that it can be replicated by independent researchers. You should only start to trust the results when it has been replicated by independent researchers. Unfortunately that hardly ever happens, because it's all but impossible to get funding for replication of work that's already been done.

    • And this, ladies and gentlemen, is why real science is done not only by performing the experiment and recording the results, but by writing up your method with sufficient clarity that your results can be replicated by independent researchers.

      I had a referee reject my paper because, I shit you not, my description of the experiment and apparatus was too detailed. But I had not engaged in any pedantry; I described the bare minimum necessary to reproduce the results, and didn't use or describe any exotic chemicals or mixtures -- nothing that a person in the same research community wouldn't have found normal and necessary to repeat my experiment.

      I thought of writing a strongly worded e-mail to the editor, but my adviser... advised against it.

  • by OzPeter (195038) on Tuesday July 03, 2012 @09:00AM (#40527117)

    From TFA

    German anesthesiologist Joachim Boldt is believed to hold the dubious distinction of having the most retractions—about 90. Boldt's scientific record also came under fire several years ago by some of the same journal editors questioning Fujii's work.

    Is this coincidence or a pattern? I have no idea how the journal publishing is supposed to work, but being the "victim" of the two most prolific forgers leaves me a little suspicious of the quality of the publishing in general.

    • Re: (Score:3, Interesting)

      by MadKeithV (102058)

      Is this coincidence or a pattern? I have no idea how the journal publishing is supposed to work, but being the "victim" of the two most prolific forgers leaves me a little suspicious of the quality of the publishing in general.

      This could easily be a case of "fool me once, shame on you; fool me twice, shame on me". These editors seem to take themselves seriously (Slashdot editors, take note :) ). After being caught out by a cheater once, they are probably now going over their past record with a fine-toothed comb, looking for more fakers.
      Looks like their new review process is working better, as they've found another one.
      Maybe other journals should now be giving their suspiciously prolific submitters a new look

    • From TFA

      German anesthesiologist Joachim Boldt is believed to hold the dubious distinction of having the most retractions—about 90. Boldt's scientific record also came under fire several years ago by some of the same journal editors questioning Fujii's work.

      Is this coincidence or a pattern? I have no idea how the journal publishing is supposed to work, but being the "victim" of the two most prolific forgers leaves me a little suspicious of the quality of the publishing in general.

      Also, what is it about anesthesiology and its practitioners that makes them succumb to the lure of academic forgery? Something about people who enjoy putting other people at the edge of death being power-mad? Or it could be that these journal editors just decided to crack down, and if a bunch of editors of, say, cardiology journals did the same, they'd find just as much fakery in their field.

      • by godel_56 (1287256)

        Is this coincidence or a pattern? I have no idea how the journal publishing is supposed to work, but being the "victim" of the two most prolific forgers leaves me a little suspicious of the quality of the publishing in general.

        Also, what is it about anesthesiology and its practitioners that makes them succumb to the lure of academic forgery? Something about people who enjoy putting other people at the edge of death being power-mad?

        Maybe some of their gases are leaking?

  • by AB3A (192265) on Tuesday July 03, 2012 @09:07AM (#40527177) Homepage Journal

    So this guy was writing, what, approximately nine or ten papers a year on average? Was anyone paying attention? Didn't anyone notice something strange about his "discoveries?"

    What does that say about the field of academic medical research?

    • by ceoyoyo (59147)

      Nine or ten papers a year isn't terribly unusual for someone with a decent sized lab.

    • by Cassini2 (956052) on Tuesday July 03, 2012 @10:10AM (#40528201)

      A professor at my local university periodically gets undergraduate and starting graduate students to try reproducing the work of interesting research papers.

      That engineering professor figures about one-third of the papers can be reproduced well enough to demonstrate the effect in question.

      At most schools, graduate students are required to publish a paper on a "new" idea in an academic journal in order to receive their degree. As such, journals must exist to collect all the ideas students generate, and this is the driver behind the modern academic journal system. Huge pressure is put on the students to describe "new" ideas, and so each paper must sell itself as being "new".

      Complicating this effort is the reality that most students are working on student projects. These projects don't have the necessary resources (time and money) to be developed into fully effective and reproducible ideas, so the results from these projects are fundamentally suspect. Also, the students working on a project may not fully understand all the relevant effects on their research (because they are students); in particular, many students do not understand statistics. As such, students deliberately or inadvertently conduct biased experiments that show the desired effect, because the academic requirement is a "new" idea and not a "new and reproducible" idea.

      The result is a collection of papers that all describe themselves as having "new" and "brilliant" ideas on topics that cannot be easily reproduced. When the ideas are reproduced, practicing engineers quickly discover they are reproducing a marginal student project. It is actually really tough to find reproducible, inventive and commercializable research ideas in academic journals, because of all the noise.

  • So much for frauds being caught by peer-review, huh?

    • by exploder (196936)

      Your comment has already been made [slashdot.org], and addressed [slashdot.org], in this discussion.

    • So much for frauds being caught by peer-review, huh?

      That's an odd comment for an article about the peer-review system catching a fraudster. The system will find it eventually, though the time scale can be quite large, especially if you don't publish in a "hot" field. You can't expect the peer-review system to catch every bit of fraud as it comes in; it doesn't appear like a glowing fireball in the sky. This is likely a small amount of fabricated data about medical procedures that didn't happen. Moreover, the author is probably quite bright (just lazy) and ha

      • by fatphil (181876)
        Faked studies are only detected if someone attempts to reproduce them. People will only try to reproduce them if journals adopt a policy of publishing papers that are either confirmations or refutations of prior studies. On the whole this isn't the case.

        I'd like to know how many of the studies could have been detected as fake through thorough enough statistical analysis of the results -- humans are notoriously bad at faking data, even when they're trying their hardest to make it believable (as they then make it to
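
        As one crude example of such an analysis, a last-digit uniformity check is easy to write. This Python sketch is illustrative only, and the "faked" numbers in it are invented: genuinely measured values tend to have roughly uniform last digits, while hand-invented ones overuse round numbers:

            from collections import Counter

            def last_digit_chi2(values):
                # Chi-square statistic for uniformity of final digits (df = 9).
                digits = [int(str(int(v))[-1]) for v in values]
                expected = len(digits) / 10
                counts = Counter(digits)
                return sum((counts.get(d, 0) - expected) ** 2 / expected
                           for d in range(10))

            # Invented "measurements" showing the classic round-number habit:
            faked = [120, 125, 130, 115, 120, 135, 125, 110, 120, 125] * 20
            print(f"chi-square = {last_digit_chi2(faked):.0f}")
            # Values much above ~16.9 (the 5% critical value for df = 9) are
            # suspicious; these all-0s-and-5s numbers score in the hundreds.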
    • by Sique (173459)

      He was caught in the end. Sometimes it takes a while, especially if no one really cares about the papers you publish.

  • Made in Japan.
  • Science fiction, a popular genre, always needs an element to suspend disbelief. Lying that it is real science seems as good as any, literarily speaking.
  • It is important to shine a light on fraudulent work in science, for sure. As others have already pointed out in this discussion, some work is impractical to reproduce, and that is not the purpose of peer review anyway.

    In science, just like every other occupation on earth, there are people doing shoddy and/or fraudulent work. It is a function of humanity in general and no occupation is immune to it. The important thing is that this person has been exposed as a fake, and his identity and record are well
  • Well, yes. (Score:4, Funny)

    by Black Parrot (19622) on Tuesday July 03, 2012 @11:33AM (#40529669)

    If you're going to fabricate 1 paper, you might as well fabricate 172.

  • by Cute Fuzzy Bunny (2234232) on Tuesday July 03, 2012 @11:43AM (#40529877)

    I saw a study recently in which the author found that the results of many studies are quite difficult to reproduce, and that the more you tried to reproduce them, and the more you talked publicly about your results, the more difficult it became to reproduce them.

    The problem is that researchers usually aren't approaching a study as "Let's do xxx and see what happens, then write about that". They've been funded by someone who has a particular result or proof point they'd like to see, or the study operator has a vested interest in the study outcome -- at the very least, an expectation of what they think they'll find.

    Our happy little brains then lead us to that conclusion or desired outcome, and we'll gleefully ignore the things that detract from the results.

    And yes, the guy who did this study of study results also found that his ability to reproduce his own results became more difficult as time went by.

    For a couple of good examples of how this works, see the studies on salt and saturated fats in our diet. The Intersalt study folks threw out 40% of the data -- the part that said salt had no effect on health -- suggesting that since it's well known that salt affects your health, the people who weren't affected must be lying about their salt consumption. So almost half the data suggested no result, but it was discarded because it didn't fit the desired determination. The same thing happened with saturated fats. The original researcher took 21 countries' worth of data, but only 5 of the 21 showed health issues that were allegedly correlated with the consumption of saturated fats; the other 16 showed no correlation at all. In fact, the real correlation was between high-calorie, high-sugar/carb, highly processed foods and health issues, not anything to do with saturated fats. There are cultures that eat 50-70% of their food intake as fat and have little to no cancer, obesity, or diabetes. Take one of those people and move them to the US or England and put them on our diet? They get fat and sick.
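
    To see how much damage that kind of trimming does, here is a toy Python simulation (all data invented; "salt" and "bp" are just labels on independent noise). Discarding the 40% of subjects who least fit the hypothesis manufactures a strong correlation out of nothing:

        import random

        n = 500
        salt = [random.gauss(0, 1) for _ in range(n)]  # independent noise
        bp = [random.gauss(0, 1) for _ in range(n)]    # independent noise

        def corr(xs, ys):
            mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            vx = sum((x - mx) ** 2 for x in xs)
            vy = sum((y - my) ** 2 for y in ys)
            return cov / (vx * vy) ** 0.5

        print(f"all {n} subjects: r = {corr(salt, bp):+.2f}")  # hovers near zero

        # "Correct" the data: keep only the 60% of subjects who best fit
        # the hypothesis that more salt goes with higher blood pressure.
        keep = sorted(range(n), key=lambda i: salt[i] * bp[i], reverse=True)
        keep = keep[:int(n * 0.6)]
        trimmed_r = corr([salt[i] for i in keep], [bp[i] for i in keep])
        print(f"after trimming:  r = {trimmed_r:+.2f}")
        # The trimmed sample reliably shows a large positive "effect"
        # that was never in the data.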

    Of course, even when the study obviously sucks, the press can be counted on to come to conclusions that the study didn't even address.

    • by guises (2423402)

      The problem is that researchers usually aren't approaching a study as "Let's do xxx and see what happens, then write about that".

      This is not a problem; this is part of the scientific method. You always start with a hypothesis, from that create a theory about how it would apply to the natural world, and design your experiment to disprove that theory in every way possible. Doing it the other way around leads to that correlation/causation mantra people like to throw around: given a data set, it's easy to draw tons of conclusions which may or may not be true but sound great when you come up with them.

      Maintaining your objectivity is o

      • That was my point... the study I mentioned demonstrated that a lack of objectivity led to results that couldn't be reproduced as easily by someone without the same lack of objectivity. Even that guy's study turned out to lack some objectivity.

        It's in the breed. School beatings can't and won't help that.

        The problem here isn't the flaky science. Part of the problem is that the media and the public think that if someone ran a study with vague correlation and very little causation, that the results are facts.

  • Are you pro-atheism guys going to use this example as a vector to bash all scientists the same way these guys [slashdot.org] were used to bash all religious groups?
