
Over and over again the claim has been made that Seralini's recently published study, which found high rates of tumours in rats given GM feed and tiny amounts of Roundup, can be safely ignored because he didn't use sufficient numbers of experimental animals. But in the following article Prof. Peter Saunders, like the renowned French statistician Paul Deheuvels, points out that the smaller numbers actually make Seralini's findings MORE, not less, significant.

Prof. Peter Saunders is Emeritus Professor of Mathematics at King's College London and a leading expert in Mathematical Biology. His recent work has focused on modelling physiological control and finding the cause of Type II diabetes. He is a Vice-President of the UK Parliamentary and Scientific Committee.

Excess Cancers and Deaths with GM Feed: the Stats Stand Up

Prof. Peter Saunders
ISIS Report, October 16 2012
http://www.i-sis.org.uk/Excess_cancers_and_deaths_from_GM_feed_stats_stand_up.php

In September 2012, the research team led by Gilles-Eric Seralini at the University of Caen published the findings of their feeding trial on rats to test for toxicity of Monsanto's genetically modified (GM) maize NK603 and/or Roundup herbicide in the online edition of Food and Chemical Toxicology [1].

Seralini and his colleagues had previously found evidence for toxicity of GM feed in data from Monsanto's own experiments, which they had obtained through a Freedom of Information demand [2]. Monsanto challenged their conclusions and, to no one's great surprise, the European Food Safety Authority (EFSA) supported Monsanto [3]. So the team decided to run their own experiment, using an unusually large number of animals and running it over a period of about two years, roughly the life expectancy of the rats, rather than the usual 90 days required in toxicity trials, including Monsanto's.

What Seralini and his colleagues found was that NK603 and Roundup are not only both toxic, as expected, but also carcinogenic, which was unexpected. The proportion of treated rats that died during the experiments was much greater than among the controls; moreover, in almost all treated groups a higher proportion developed tumours, and the tumours appeared earlier.

As soon as the paper appeared, the GM lobby swung into action. In particular, the Science Media Centre (SMC), a London-based organisation partly funded by industry, quickly obtained quotes from a number of pro-GM scientists and distributed them to the media [4]. According to a report in Times Higher Education [5], the SMC succeeded in influencing the coverage of the story in the UK press and largely kept it off the television news.

Seralini has rebutted the pro-GM critics point by point on the CRIIGEN website [6]. The statistician Paul Deheuvels, a professor at the Université Pierre et Marie Curie in Paris and a member of the French Académie des sciences, has now drawn attention to another serious error in the criticisms [7]: the complaint that Seralini used only 10 rats per group when the OECD guidelines [8] recommend 50 for investigations on carcinogenesis. Because the experiments did not follow the accepted protocol, the critics argue, the results can be safely ignored.

In the first place, this was not a wilful disregard of the guidelines. The experiment was designed to test for toxicity, and for that the recommended group size is 10.

But Deheuvels pointed out that the fact that Seralini and his colleagues had used smaller groups than recommended makes the results, if anything, more convincing, not less. That is because using a smaller number of rats made it less likely that any effect would be observed at all. The fact that an effect was observed despite the small number of animals makes the result all the more serious.

To see why, we have to look carefully at how common statistical tests are carried out. We begin with a null hypothesis, which as the name suggests is essentially the hypothesis that nothing unusual has happened. Here it is the hypothesis that rats fed on GMOs and/or herbicide are no more likely to develop cancer than the controls. Clearly, we would like to reject the null hypothesis if it is false and accept it if it is true. But statistics is about taking decisions in the face of uncertainty (if there were no uncertainty, there would be no need for statistics), and so however careful we are, we may come to the wrong conclusion.
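
To make this concrete, here is a minimal sketch, in Python, of how such a null hypothesis might be tested on a small feeding trial, using Fisher's exact test on a two-by-two table of tumour counts. The counts are invented for the example; they are not figures from the study.

    # A minimal sketch of testing the null hypothesis described above.
    # All tumour counts are hypothetical, NOT data from the study.
    from scipy.stats import fisher_exact

    treated = [7, 3]  # assumed: 7 of 10 treated rats with tumours, 3 without
    control = [2, 8]  # assumed: 2 of 10 control rats with tumours, 8 without

    # One-sided test: is the tumour rate higher in the treated group?
    odds_ratio, p_value = fisher_exact([treated, control], alternative="greater")
    print(f"p-value = {p_value:.3f}")  # reject the null hypothesis if p < 0.05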

There are two ways in which we can go wrong. On the one hand, we can make a "Type 1 error" by rejecting the null hypothesis when it is true. Here that would mean reporting that the GMO and/or herbicide are carcinogenic when they are not. On the other, we can make a "Type 2 error" by accepting the null hypothesis when it is false. Here that would mean reporting that the GMO and/or herbicide are not carcinogenic when in fact they are.

Naturally we would like to design experiments to make both of those probabilities as small as possible, but there is a problem: the two types of error are linked. We can reduce the probability of making a Type 1 error by requiring stronger evidence before we reject the null hypothesis. But if we do that, we make it correspondingly easier to accept the null hypothesis, and that increases the probability of making a Type 2 error. We have to find a balance, and usually what we do is insist that the probability of a Type 1 error must be very small, conventionally 0.05. That is the origin of the "significant at the 5 percent level" criterion.
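
The trade-off can be seen directly in a small simulation, sketched below under invented assumptions (the tumour rates, group size and choice of Fisher's test are all made up for illustration, not taken from the study). Tightening the threshold from 0.05 to 0.01 lowers the Type 1 error rate but raises the Type 2 error rate.

    # A rough simulation of the Type 1 / Type 2 trade-off, with invented rates.
    import numpy as np
    from scipy.stats import fisher_exact

    rng = np.random.default_rng(0)
    n = 10         # rats per group (the toxicity-protocol size)
    trials = 2000  # simulated experiments per scenario

    def rejection_rate(p_treated, p_control, alpha):
        """Fraction of simulated experiments that reject the null hypothesis."""
        rejections = 0
        for _ in range(trials):
            t = rng.binomial(n, p_treated)  # tumours in the treated group
            c = rng.binomial(n, p_control)  # tumours in the control group
            _, p = fisher_exact([[t, n - t], [c, n - c]], alternative="greater")
            if p < alpha:
                rejections += 1
        return rejections / trials

    for alpha in (0.05, 0.01):
        type1 = rejection_rate(0.3, 0.3, alpha)      # null true: no real effect
        type2 = 1 - rejection_rate(0.7, 0.3, alpha)  # null false: a real effect
        print(f"alpha={alpha}: Type 1 ~ {type1:.3f}, Type 2 ~ {type2:.3f}")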

A probability of 0.05 is very small, so what we are saying is that we will only accept that the effect is real if we can be convinced "beyond reasonable doubt"; and most of the time that makes sense. If you're thinking of installing a new manufacturing process or a new way of running your farm, you want to be very confident that it really is better before you make a major investment.

It is not so obviously sensible when safety is concerned. If there is scientific evidence that a product is hazardous, then it is hardly surprising if the manufacturer would not want to withdraw it unless the evidence is very strong indeed. The rest of us, however, might take a different view. Are we really willing to accept NK603 maize, or Roundup herbicide, unless and until they have been shown beyond reasonable doubt to be carcinogenic?

The standard statistical test does seem to be the wrong way around, but that's partly because so far we have only been considering the Type 1 error, the false positive. But as Deheuvels reminds us, there is also the Type 2 error, the false negative. If NK603 and/or the herbicide are actually carcinogenic, what is the probability that we will fail to observe that?

The way to reduce the probability of a Type 2 error is to use larger groups. Because we would expect carcinogenicity to be slower to appear and harder to detect than toxicity, the group size for experiments on carcinogenicity should be larger than for toxicity, and this is precisely what the OECD Guidelines require.
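
A quick power calculation makes the point concrete. The sketch below uses statsmodels' normal-approximation power for comparing two proportions, and assumes for illustration a 30 percent tumour rate in controls against 60 percent in treated animals; these figures are invented, not the study's.

    # A sketch of why group size matters, with assumed (not observed) rates.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    effect = proportion_effectsize(0.6, 0.3)  # Cohen's h for the assumed rates
    analysis = NormalIndPower()

    for n in (10, 50):  # toxicity-protocol vs carcinogenicity-protocol sizes
        power = analysis.power(effect_size=effect, nobs1=n, alpha=0.05,
                               alternative="larger")
        print(f"power with {n} rats per group: {power:.2f}")

With these assumed rates, 10 animals per group give well under an even chance of detecting even this sizeable effect, while 50 per group give a very good chance. That is exactly Deheuvels' point: an effect that shows up anyway, despite the low power, is all the more striking.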

If the experiment had not detected carcinogenicity, that might have been because the groups were too small. Since the experiment did detect it, the small group size is not an issue. The scientists who were asked to supply sound bites for the Science Media Centre were quick to object that Seralini and his group had used the protocol for testing toxicity rather than the one for carcinogenesis. Had they taken a moment to ask themselves why the two protocols differ, they would have realised that in using the toxicity protocol (and remember, that was because toxicity was what the experiment was designed to test) Seralini and his group made it less likely that they would detect carcinogenesis. To criticise a result because the experiment was conducted in a way that was more conservative than required is totally unjustifiable.

References

  1. Seralini G-E, Mesnage R, Gress S, Defarge N, Malatesta M, Hennequin D and de Vendômois JS (2012). Long term toxicity of a Roundup herbicide and a Roundup-tolerant genetically modified maize. Food and Chemical Toxicology. http://dx.doi.org/10.1016/j.fct.2012.08.005
  2. Seralini G-E, Cellier D and de Vendômois JS (2007). New analysis of a rat feeding study with a genetically modified maize reveals signs of hepatorenal toxicity. Archives of Environmental Contamination and Toxicology 52, 596-602.
  3. EFSA review of statistical analyses conducted for the assessment of the MON863 90-day rat feeding study, 2007. http://www.efsa.europa.eu/en/efsajournal/doc/19r.pdf
  4. Science Media Centre press release: Expert Reaction to GM maize causing tumours in rats. 19 September 2012,
    http://www.sciencemediacentre.org/pages/press_releases/12-09-19_gm_maize_rats_tumours.htm 
  5. "Shock troops check 'poor' GM study", Paul Jump, Times Higher Education, 4 October 2012.
  6. Criigen Research Team FAQs, accessed 12 October 2012. http://www.criigen.org/SiteEn/index.php?option=com_content&task=view&id=368&Itemid=1
  7. Deheuvels P. Étude de Seralini sur les OGM : pourquoi sa méthodologie est statistiquement bonne. Le nouvel observateur Le Plus, 2012, accessed 12 October 2012. http://leplus.nouvelobs.com/contribution/646458-etude-de-seralini-sur-les-ogm-pourquoi-sa-methodologie-est-statistiquement-bonne.html?utm_source=outbrain&utm_medium=widget&utm_campaign=obclick&obref=obinsource
  8. OECD Guidelines for the Testing of Chemicals 451: Carcinogenicity Studies, 2009.
    http://www.oecd-ilibrary.org/docserver/download/fulltext/9745101e.pdf?expires=1350053297&id=id&accname=freeContent&checksum=BB6C78E3268AD83DB887899FF18E8147