
Study Linking GM Maize To Rat Tumors Is Retracted

Posted by Soulskill
from the arguing-over-food dept.
ananyo writes "Bowing to scientists' near-universal scorn, the journal Food and Chemical Toxicology has fulfilled its threat to retract a controversial paper claiming that a genetically modified (GM) maize causes serious disease in rats, after the authors refused to withdraw it. The paper, from a research group led by Gilles-Eric Séralini, a molecular biologist at the University of Caen, France, and published in 2012, showed 'no evidence of fraud or intentional misrepresentation of the data,' according to a statement from Elsevier, which publishes the journal. But the small number and type of animals used in the study mean that 'no definitive conclusions can be reached.' The known high incidence of tumors in the Sprague-Dawley rat 'cannot be excluded as the cause of the higher mortality and incidence observed in the treated groups,' it added. Today's move came as no surprise. Earlier this month, the journal's editor-in-chief, Wallace Hayes, threatened retraction if Séralini refused to withdraw the paper, and that is exactly what Hayes announced at a press conference in Brussels this morning. Séralini and his team remain unrepentant, and allege that the retraction stems from the journal's editorial appointment of biologist Richard Goodman, who previously worked for biotechnology giant Monsanto for seven years."
  • by Trepidity (597) <delirium-slashdot AT hackish DOT org> on Friday November 29, 2013 @12:20PM (#45555741)

Imo, withdrawing papers makes sense mainly if there is indeed "evidence of fraud or intentional misrepresentation of the data". Faked data doesn't help advance science, and should be purged from the record.

    But merely questionable conclusions are another story. Science is a back-and-forth process: someone publishes a study purporting to show X, and then someone else criticizes their conclusions, re-analyzes their data, attempts to replicate it, etc. Then they publish their own conclusions, purporting to show not-X. Withdrawing the original study in this case doesn't make sense to me, if it was not fraudulent: we don't typically retroactively go into old journals and blank out the articles that have subsequently turned out to be wrong. We just write new articles with better analysis.

    • by cranky_chemist (1592441) on Friday November 29, 2013 @12:34PM (#45555839)

      You are correct.

      Weak science and insufficient sample sizes are matters for the journal's referees to suss out and, if necessary, recommend that the journal not publish the paper. The fact that the paper passed peer review should have the journal re-examining their editorial/peer-review policies.

      Ultimately, the decision to publish (and responsibility for publishing) a paper lies with the journal's editor in chief.

    • Re: (Score:3, Insightful)

      I agree. If Elsevier thought the study was too weak, they shouldn't have published it.

      Asking the authors to retract it makes it look like they just wanted to save face by not doing it themselves. Didn't work.

The "Nature" post says Elsevier bowed to "scientists' near-universal scorn"; I have no idea what that means. It suggests perhaps that the study was unconvincing. But it's Elsevier's job to screen for that. It's not their job to retroactively delete honest experiments with honest data which have been honestly reproduced.

      • "have been honestly reproduced"
        [citation needed]

        (There seems to be a not-uncommon misconception that reproduction of the results by other groups is part of the pre-publication "peer review" -- this is simply not the case. If you're not under that delusion, but think some group has reproduced these results, do share.)

      • by Chalnoth (1334923) on Friday November 29, 2013 @02:32PM (#45556663)

        The study wasn't just unconvincing. It was riddled with serious flaws. The first and clearest complaint: they didn't do any statistical analysis. At all. Plus, some of the GMO and pesticide groups lived noticeably longer than the control group. The highest-dose pesticide or GMO group rarely did the worst, and sometimes did the best among the groups.

But perhaps the most damning problem of all is that the very design of the study guaranteed that they would be able to find something wrong with the GMO/pesticide groups (at least superficially). This is by virtue of having, in effect, 20 different experimental groups of 10 rats each (10 male and 10 female for each of 10 different dosages of GMOs or pesticides). And they measured dozens of different things over the course of the study. In essence, if the rats in the GMO/pesticide groups hadn't had (superficially) more tumors, they would have had something else wrong with them more often, just due to random chance.
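This multiple-comparisons effect is easy to quantify. A minimal sketch, assuming (purely for illustration) 20 independent group-vs-control comparisons each tested at the conventional 5% significance level:

```python
# Family-wise error rate: the chance of at least one "significant"
# finding by pure luck, across k independent tests at level alpha.
def family_wise_error_rate(k: int, alpha: float = 0.05) -> float:
    return 1.0 - (1.0 - alpha) ** k

# With 20 groups each compared to a control on even a single
# endpoint, a spurious hit is more likely than not:
print(f"{family_wise_error_rate(20):.3f}")  # ~0.642
```

Standard corrections (Bonferroni, Holm, etc.) exist precisely to pull this rate back down toward 5%; the paper applied none.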

        Whether this execrable excuse of a paper is so terrible due to abject incompetence or outright fraud, it deserves to be retracted. It should never have been published in the first place, but I'm glad the journal has decided to retract it in the end.

        • by ceoyoyo (59147)

          You had the most damning problem of all with your first paragraph. If it ain't got stats, it ain't science. The only science you can do (temporarily) without stats is theory, and that's only science if you're going to test it with experiment... using stats.

          If they'd done proper stats it would have taken into account their plethora of experimental groups and ensured that they didn't get any positive results at all.

          I didn't believe you that they hadn't done any stats so I looked up the paper. The only one

    • by gregor-e (136142) on Friday November 29, 2013 @12:49PM (#45555963) Homepage
      The case made for withdrawal bases its objections on bad science. The response from the authors was an ad-hominem attack against one of the editors.
      • And bad it was. Fractal badness.

        I mean, really bad. []

        It shouldn't have been published in the first place, but at least they're admitting their mistake.
    • by phantomfive (622387) on Friday November 29, 2013 @01:02PM (#45556091) Journal
      Think of it this way, imagine someone did a study, where a single kid was vaccinated and later got autism. The authors of this study drew the conclusion that vaccines cause autism.

      Would you consider that to be poor science? Because that is essentially what happened here, there were obvious problems with the experiment, and the science was badly done. Elsevier was being kind by saying there was no evidence of fraud, because either it was fraud or incompetence that motivated these scientists to publish.

      What they should do is repeat the experiment with a better sample size.
      • " a single kid... that is essentially what happened here"

        Really? They "essentially" had a sample size of 1 with no control group?

        "either it was fraud or incompetence that motivated these scientists to publish"

        How do you know their motive?

        • Really? They "essentially" had a sample size of 1 with no control group?

          Yes. The sample sizes weren't large enough to draw any conclusions. At least read the summary, please.

          How do you know their motive?

          I don't, which, as I said, means it's possible they are incompetent.

        • by plover (150551) on Friday November 29, 2013 @02:02PM (#45556467) Homepage Journal

          No, it wasn't a sample size of 1 with no control group. But according to one expert, the control group was way too small to derive statistically valid results from. According to UCD researcher Martina Newell–McGloughlin, quoted in the Discovery article [] (from 2012), here's what they did wrong:

          • They had a control group of 10 or 20 rats in an overall population of 200 rats (Discovery claimed the study should have had a control group that was two or three times the size of the experimental rat population.)
          • The breed of rat is tumor-prone (I assume this is a problem because the researchers were pre-supposing the outcome would be tumors.)
          • The rats were studied out to two years of age (very old for such a study; at two years old they are likely to develop tumors spontaneously regardless.)
          • The rats were allowed to eat unlimited quantities of the food (which is known to lead to tumors even with untainted food.)
          • They found no dose-dependent correlation between the quantity of food consumed and the tumor rate (expected in toxicology studies.)
          • They performed no independent confirmation analysis to determine if the outcome they saw could have been arrived at by chance.

          So yeah, while it's not as bad as the vaccine hoaxers, it was apparently not good research.
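The dose-response check in the fifth bullet is exactly what a standard trend test looks for. A minimal sketch of a Cochran-Armitage trend test on tumor counts; the dose levels and counts below are hypothetical illustrations, not figures from the paper:

```python
import math

def cochran_armitage_trend(doses, tumors, group_sizes):
    """Z statistic and two-sided p-value for a linear trend in
    tumor incidence across ordered dose groups."""
    N = sum(group_sizes)
    p_bar = sum(tumors) / N  # pooled tumor rate across all groups
    # Numerator: dose-weighted deviations from the pooled rate.
    t = sum(d * (r - n * p_bar)
            for d, r, n in zip(doses, tumors, group_sizes))
    # Variance under the null hypothesis of no trend.
    s1 = sum(n * d * d for d, n in zip(doses, group_sizes))
    s2 = sum(n * d for d, n in zip(doses, group_sizes))
    var = p_bar * (1 - p_bar) * (s1 - s2 * s2 / N)
    z = t / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p

# Hypothetical counts: 10 rats per group at 0/11/22/33% GM maize.
z, p = cochran_armitage_trend([0, 11, 22, 33],
                              [3, 5, 5, 8],
                              [10, 10, 10, 10])
```

With the study's extra Roundup-only arms and dozens of endpoints a single trend test would not settle anything on its own, but its absence from a toxicology paper is exactly the kind of gap the critics flagged.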

      • Re: (Score:3, Informative)

        by tlhIngan (30335)

        Think of it this way, imagine someone did a study, where a single kid was vaccinated and later got autism. The authors of this study drew the conclusion that vaccines cause autism.

        Would you consider that to be poor science? Because that is essentially what happened here, there were obvious problems with the experiment, and the science was badly done. Elsevier was being kind by saying there was no evidence of fraud, because either it was fraud or incompetence that motivated these scientists to publis

        • by ceoyoyo (59147)

          "But no, you don't withdraw published papers for bad science - you release another one proving the original was bad. (Unlike the original Lancet paper, which was discovered to be fraudulent which does demand removal)."

          You absolutely do retract published papers for bad science. You don't retract them for incorrect conclusions, but you DO retract them for things like fraud, misrepresentation, unjustified conclusions, etc. I read the paper. It looks like these guys played some fancy analysis games to get so

      • by Rutulian (171771)

        Um, no, not really.

        You are right, they should do more work on the study and get more data. But any questions about statistical significance and/or experimental design should have been addressed at the peer review stage. And then after that, there is even a final decision by the editor-in-chief to actually publish the study. Bad science or no, the study made it through. It happens all the time. See the arsenic in DNA controversy, or the huge argument over a generalized mechanism for antibiotic killing by rea

        • Calling for retraction in the absence of any kind of experimental evidence is not the way to handle this. I am not surprised the authors refused. Retraction has a huge stigma associated with it, and if they weren't deliberately fraudulent, they don't deserve it. The scientific community can scorn all they want, but it means nothing without experimental evidence to back it up.

          It's more comparable to the cold fusion of Fleischmann and Pons. Not necessarily scientific malpractice, but the poorly done experiments followed by attempts to cash in on the results drew down the wrath of the scientific community (in this case the authors of the study are trying to make money off a book and movie about the study).

    • by Blakey Rat (99501)

      No, they made a conclusion not supported by the data available. What they *should* have done is expand the study to include more and more diverse test animals to firm up the conclusion. They could have retracted their study and re-done it while retaining some dignity.

      What they did instead is throw a hissy-fit and then blame a new editor, which strikes me as extreme paranoia at best.

    • Can we say arsenic in DNA? []

      It was only a few years ago, but I guess it has already left the public memory. A group of scientists rush to a hasty conclusion because they want to make a big splash. Science publishes it because they like controversy. A large flurry of criticism from the scientific community, but ultimately a number of papers get published refuting the original findings. We can ask the question...should it have been published? A lot of people think

  • Recent History (Score:4, Informative)

    by Jah-Wren Ryel (80510) on Friday November 29, 2013 @12:22PM (#45555753)
  • by jellomizer (103300) on Friday November 29, 2013 @12:31PM (#45555815)

    Corn is a major export crop of the United States.

    European governments want to promote food that is grown within the Union, and the US agriculture system is very efficient at making low-cost food. So it makes sense that a European scientist would feel pressured to find evidence against a primary US import.

    I know it is trendy to be anti-American, as if it must be some conspiracy from big US companies to hide the truth, like with Big Tobacco.
    But what if GM food is actually perfectly safe, like the science says it is?

    • by AlecC (512609)

      In this case, I really don't think it is anti-American but anti-GM. There is a very widespread fear of GM. Which, as it happens, I disagree with. But, right or wrong, people are afraid of GM and shouting at their politicians about it.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Fears about GM being somehow more unhealthy or poisonous than regular food are pretty irrational.
        However, there are other fears that are more sensible, that have gotten conflated with the health fears, and for some people it's now impossible to separate them:

        - Fear that untested modified genes will escape into the environment and mess up the local ecosystem
        - Fear that GM crops (Roundup Ready!) will increase the use of herbicides, which is a whole other barrel of worms
        - Fear that GM foods will give well-co

    • by phantomfive (622387) on Friday November 29, 2013 @12:53PM (#45556021) Journal
      Fortunately, in this case, it's ok to be annoyed by both sides: Elsevier and the guys who did the study. Both are bad.
    • I have nothing against the US selling GM food in Europe, as long as every customer can decide for himself, but this is in reality not the case. The problem is that in European countries the labelling requirements for GM food are generally inadequate and do not cover all cases, e.g. there is no labelling for ingredients below a certain percentage, some pre-processed ingredients or meat from animals fed with GM plants. In fact, most of the labelling is so fine-print that it cannot even be read through a looki

    • We don't want to risk the 'if'.
      You know, in the USA you can do what you want; usually there is no law hindering you, and you just wait till you get sued into oblivion when something goes wrong.
      In Europe we usually have laws. Regarding GM food, most of the people, and luckily also most of the politicians, are against it.
      I don't want to wait till we see 'if it is dangerous' or till we see 'if it is harmless'.
      I simply don't want it at all.
      That is my choice AND my right!

  • Study Linking GM Maize To Rat Tumors Is Retracted

    Thank heaven for that! Somebody pass the corn please.

  • by presidenteloco (659168) on Friday November 29, 2013 @01:16PM (#45556175)

    Direct health effects of GMO foods are IMHO only the third most important potential concern with GMOs.

    The first concern is that whatever you have engineered is self-reproducing and could potentially take over a niche in a whole ecosystem, displacing other species or naturally adapted varieties, and in general you could not stop this if it happened. Ecosystems would then become fully the responsibility of human biology tweakers.
    This seems generally unwise. The consequences of such ecosystem shifts are too complex to be predicted.

    A second concern is that each genetic engineering modification needs to be fully assessed separately from all others, due to the complexity of the systems into which they are being inserted. Or at least, very narrow equivalence classes of modifications each need to be re-tested, individually and in combination, for long-term effects, viability, and the effects of likely mutations of the tweak, each time they are tweaked.
    The cost of such repeated and long term safety testing is well beyond the capability of the companies producing the products, so we can be sure that such rigorous, long term, and repeated (when product is varied) testing is not being done.
    Instead, smaller numbers of specific tests on a subset of engineered varieties are generalized in alleged applicability and conclusion, to save money.

    So there are still a lot of known unknowns and unknown unknowns out there, and this is the kind of product that, in general, self-reproduces and also expands in range.

  • Science wins and political extremists lose today, and with that, progress for humanity is made. Any time political extremists try to hijack science to push an agenda, they should be subject to the greatest of scrutiny. Science can and must rise above politics for the greater good of humanity, and in this case it did. Here's hoping science can do so in other realms as well.

  • Rats! (Score:4, Informative)

    by westlake (615356) on Friday November 29, 2013 @01:38PM (#45556319)

    When does ambition or the will to believe begin to look more like fraud?

    The biggest criticism from both reviews is that Seralini and his team used only ten rats of each sex in their treatment groups. That is a similar number of rats per group to that used in most previous toxicity tests of GM foods, including Missouri-based Monsanto's own tests of NK603 maize. Such regulatory tests monitor rats for 90 days, and guidelines from the Organization for Economic Co-operation and Development (OECD) state that ten rats of each sex per group over that time span is sufficient because the rats are relatively young. But Seralini's study ran for two years, almost a rat's lifespan, and for tests of this duration the OECD recommends at least 20 rats of each sex per group for chemical-toxicity studies, and at least 50 for carcinogenicity studies.

    Moreover, the study used Sprague-Dawley rats, which both reviews note are prone to developing spontaneous tumors. Data provided to Nature by Harlan Laboratories, which supplied the rats in the study, show that only one-third of males, and less than one-half of females, live to 104 weeks. By comparison, its Han Wistar rats have greater than 70% survival at 104 weeks, and fewer tumors. OECD guidelines state that for two-year experiments, rats should have a survival rate of at least 50% at 104 weeks. If they do not, each treatment group should include even more animals: 65 or more of each sex.

    "There is a high probability that the findings in relation to the tumor incidence are due to chance, given the low number of animals and the spontaneous occurrence of tumors in Sprague-Dawley rats," concludes the EFSA report. In response to the EFSA's assessment, the European Federation of Biotechnology, an umbrella body in Barcelona, Spain, that represents biotech researchers, institutes and companies across Europe, called for the study to be retracted, describing its publication as a "dangerous case of failure of the peer-review system."

    Yet Seralini has promoted the cancer results as the study's major finding, through a tightly orchestrated media offensive that began last month and included the release of a book and a film about the work. Only a select group of journalists (not including Nature) was given access to the embargoed paper, and each writer was required to sign a highly unusual confidentiality agreement, seen by Nature, which prevented them from discussing the paper with other scientists before the embargo expired.

    Hyped GM maize study faces growing scrutiny [] [Oct 2012]

  • Gilles-Eric Séralini has published a whole series of journal articles purporting to expose the dangers of GMOs, glyphosate, etc.

    They are all lapped up and given great exposure by the mainstream media. They are all pointed at with great glee by the anti-GMO crowd as evidence that GMOs are really really bad for you.

    They are all junk science that should have never been published.

    The source of most of the funding for this work is Greenpeace.

    No doubt there will be more crap like this in the future. Hope

  • scientific progress never fails to amaize me.

    take that karma!

  • by queazocotal (915608) on Friday November 29, 2013 @01:55PM (#45556431) []

    The study involved 200 rats, half female, split into 10 groups.
    As I understand it, the greatest 'statistical significance' comes from the female rats.

    Taking one part and closely analysing it:
    'Up to 14 months, no animals in the control groups showed any signs of tumors whilst 10–30% of treated females per group developed tumors, with the exception of one group (33% GMO + R). By the beginning of the 24th month, 50–80% of female animals had developed tumors in all treated groups, with up to 3 tumors per animal, whereas only 30% of controls were affected.'

    Starting with the first statement. 'up to 14 months, 1-3 rats in some of the groups developed tumors, whereas no rats in the control group or the group fed GMO + roundup did' So, of 7 groups, 2 groups were cancer free.

    Going onto the next part.
    3 rats got cancer in the control group.
    5-8 in the other 6 groups.
    But, half of those 6 groups were also fed roundup.

    So, a total of between 9 and 15 extra rats got cancer, apparently, if you extrapolate from the control group.

    But - the whole basis of this paper now rests on two rats.
    If in the control group at the 24th month, 5 rats would normally have gotten cancer, and 2 happened to get lucky, the paper largely becomes non-statistically significant.

    I am not a statistician.

    If normally, half of rats get cancer at 24 months, then you would expect 5 rats, not 3 in the control group to have it.
    How likely is it that only three rats would die?
    Only if this chance is under 5% does the rest of the paper have any weight whatsoever.
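    The "how likely is it" question has an exact answer under a simple binomial model. Assuming (as the parent does, purely for illustration) that each control rat independently has a 50% chance of developing a tumor by 24 months:

    ```python
    from math import comb

    def binom_cdf(k: int, n: int, p: float) -> float:
        """P(X <= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k + 1))

    # Probability that 3 or fewer of 10 control rats develop tumors,
    # when the assumed baseline rate is 50%:
    print(binom_cdf(3, 10, 0.5))  # 176/1024 = 0.171875
    ```

    At roughly 17%, a control group this "lucky" is far above the 5% threshold proposed above, which supports the worry that the headline comparison could rest on chance.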

  • by Uncle_Meataxe (702474) on Friday November 29, 2013 @10:35PM (#45558983)

    Here's the Seralini team response to FCT. Basically, Seralini is challenging them to also retract the Monsanto study (e.g., Hammond et al. 2004): []

    Professor Seralini replies to FCT journal over study retraction

    Professor Gilles-Eric Séralini and his team have responded to the letter from A. Wallace Hayes, editor of Food and Chemical Toxicology (FCT), telling Prof Séralini that he intended to retract his study on NK603 maize and Roundup.

    Here’s the retraction notice from Elsevier, the publisher of FCT: []
    Response by Prof GE Seralini and colleagues to A. Wallace Hayes, editor of Food and Chemical Toxicology
    28 Nov 2013

    We, authors of the paper published in FCT more than one year ago on the effects of Roundup and a Roundup-tolerant GMO (Séralini et al., 2012), and having answered to critics in the same journal (Séralini et al., 2013), do not accept as scientifically sound the debate on the fact that these papers are inconclusive because of the rat strain or the number of rats used. We maintain our conclusions. We already published some answers to the same critics in your Journal, which have not been answered (Séralini et al., 2013).

    Rat strain

    The same strain is used by the US national toxicology program to study the carcinogenicity and the chronic toxicity of chemicals (King-Herbert et al., 2010). Sprague Dawley rats are used routinely in such studies for toxicological and tumour-inducing effects, including those 90-day studies by Monsanto as basis for the approval of NK603 maize and other GM crops (Sprague Dawley rats did not come from Harlan but from Charles-River) (Hammond et al., 2004; Hammond et al., 2006a; Hammond et al., 2006b).

    A brief, quick and still preliminary literature search of peer-reviewed journals revealed that Sprague Dawley rats were used in 36-month studies by (Voss et al., 2005) or in 24-month studies by (Hack et al., 1995), (Minardi et al., 2002), (Klimisch et al., 1997), (Gamez et al., 2007). Some of these studies have been published in Food and Chemical Toxicology.

    Number of rats, OECD guidelines

    OECD guidelines (408 for 90 day study, 452 chronic toxicity and 453 combined carcinogenicity/chronic toxicity study) always asked for 20 animals per group (both in 1981 and 2009 guidelines) although the measurement of biochemical parameters can be performed on 10 rats, as indicated. We did not perform a carcinogenesis study, which would not have been adopted at first, but a long-term chronic full study, 10 rats are sufficient for that at a biochemical level according to norms and we have measured such a number of parameters! The disturbance of sexual hormones or other parameters are sufficient in themselves in our case to interpret a serious effect after one year. The OPLS-DA statistical method we published is one of the best adapted. For tumours and deaths, the chronology and number of tumours per animal have to be taken into account. Any sign should be regarded as important for a real risk study. Monsanto itself measured only 10 rats of the same strain per group on 20 to conclude that the same GM maize was safe after 3 months (Hammond et al., 2004).

    The statistical analysis should not be done with historical data first, the comparison is falsified, thus 50 rats per group is useless

    The use of historical data falsifies health risk assessments because the diet is contaminated by dibenzo-p-dioxins and dibenzofurans (Schecter et al., 1996), mercury (Weiss et al., 2005), cadmium and chromium among other heavy metals in a range of doses that altered mouse liver and lung gene expression and confounds genomic analyses (Kozul et al., 2008). They also contained pesticides or plasticizers released by cages or from water sources (Howdeshell et al., 2003). Historical
