
NOTE: Below is a further comment on the Seralini study by the former government research analyst (30+ years' experience) who previously supplied two other comments on why Seralini's findings likely show real toxic effects and must be taken seriously:
 http://www.gmwatch.org/component/content/article/14249
 http://www.gmwatch.org/index.php?option=com_content&view=article&id=14260

The Hammond research study to which the analyst refers is the 90-day feeding trial commissioned by Monsanto, on the basis of which NK603 maize was approved in the EU:
Hammond, B., R. Dudek, et al. (2004). "Results of a 13 week safety assurance study with rats fed grain from glyphosate tolerant corn." Food Chem Toxicol 42(6): 1003-1014.
---
Comment from former government research analyst on Seralini's findings (3)
Published by GMWatch 26 October 2012

This controversy is not about science as much as the industry and industry-captured regulator politics and self-protection. I would like to know where you could even begin to mount a challenge to GMOs that had any chance of a fair hearing.

Seralini challenged the Hammond research before [de Vendomois, J. S., F. Roullier, et al. (2009). "A comparison of the effects of three GM corn varieties on mammalian health." Int J Biol Sci 5(7): 706-726.] and received treatment very similar to what he is getting now. I read that he obtained Monsanto's raw data from Swedish sources and argued that it showed evidence of early liver (and possibly kidney) toxicity. The 90-day study period cut off any further development of this effect, if it was real. The biochemistry results in the 2012 Seralini study claim to show liver and kidney markers of toxicity.

I accept that the Seralini 2012 study has problems, and I agree with some of the EFSA critique, but not enough to dismiss the study without further analysis of the results, or to agree with simply sweeping it away.

Most of those who are capable of evaluating such studies work for the industry or are on the industry tab. Those who are not on the industry tab, and who challenge, get the full force of these armies brought to bear on them, no matter what the merits of their evidence or arguments.

Seralini's critics are trying to destroy the study on simple, result-specific statistical-significance grounds: the number of rats used and the number affected. In fact, three results in the pathology data (Table 2) are significant (p<0.05) using Fisher's Exact Test, and two more are near-significant (p<0.10). All but one (male liver, GMO 22%) are due to the Roundup exposure. This test is easy to apply because the results are binary and the sample size is known.
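For readers who want to check this kind of binary, small-sample comparison themselves, here is a minimal sketch of Fisher's Exact Test on a 2x2 table. The counts below are invented for illustration only; they are not Seralini's actual Table 2 data.

```python
# Illustrative only: hypothetical counts, not Seralini's published data.
from scipy.stats import fisher_exact

n_per_group = 10  # rats per group, as in the study designs discussed

# 2x2 contingency table: columns = treated / control,
# rows = affected / not affected.
affected_treated, affected_control = 7, 1  # hypothetical
table = [
    [affected_treated, affected_control],
    [n_per_group - affected_treated, n_per_group - affected_control],
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```

With counts like 7/10 affected versus 1/10, the exact two-sided p-value comes out below 0.05, which illustrates why the test is usable even at these small sample sizes, as the analyst says.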

Hammond et al (2004) used the same rat strain, and the same number of rats (10), in many of the results they reported, even though they say they dosed 20 rats. That is, only 10 rats were selected from the 20 for detailed evaluation.

The problem, as I see it, is a refusal to consider the statistical properties of all the results taken together: differences in biochemistry, timing, number of rats affected, number of pathologies, severity of cancers, and so on. I think the reviewers need to pose and answer the question: what is the probability that the joint occurrence of all these differences, taken together, is due to chance? Seralini's raw data need to be analysed from this hypothesis-testing viewpoint, among other things.
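One standard way to pose the "joint occurrence" question is Fisher's combined probability test, which aggregates several per-endpoint p-values into a single joint p-value. The sketch below uses invented p-values, and the method assumes the individual tests are independent, an assumption that correlated endpoints measured on the same animals would strain, so this is only a rough illustration of the idea.

```python
# A sketch of Fisher's combined probability test for the "taken together"
# question. The p-values are hypothetical, not from either study, and
# independence of the endpoints is assumed for illustration.
from scipy.stats import combine_pvalues

# Hypothetical per-endpoint p-values: some individually significant,
# some only near-significant.
p_values = [0.04, 0.03, 0.02, 0.08, 0.09]

statistic, combined_p = combine_pvalues(p_values, method="fisher")
print(f"chi-squared = {statistic:.2f}, combined p = {combined_p:.5f}")
```

The point of the example: several results that are individually marginal can be jointly very unlikely under the null, which is exactly the analysis the analyst argues is missing.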

There is also something to look for in the Hammond et al (2004) paper that pertains to Seralini's claim (de Vendomois, 2009, above) that he saw liver toxicity in the detailed data of the 2004 study (noted above). There were several instances of significant differences between treatments and controls in the Hammond study. However, the Reference Controls, fed only at the high dose of 33% of the diet and added on top of the experimental controls, contributed a great deal of extra variation, which, once averaged in, produced a variation larger than that of the two experimental controls at 11% and 33%. So there was only one experimental control at 33%, set against six extra Reference Controls using different corn varieties grown at different locations. I think these were used as a bucket of noise to argue that the statistical differences seen between the two treatments and the two experimental controls (11% and 33%) were not significant in comparison with the average of the six References.

The point is that there were significant experimental differences between the test control and the treatments, but then the six Reference Controls entered, with all their added variation and noise, and were used to flush away the significance. These differences were not elaborated on in any detail in Hammond et al (2004). It looked like sleight of hand to me. It may be legitimate, but I want to see an argument for the practice before I can accept it. EFSA just accepted it.
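The mechanism the analyst describes can be sketched with back-of-envelope arithmetic. All numbers below are invented for illustration: six reference groups whose means differ by variety and growing site add between-group variance on top of the within-group noise, so any treatment shift looks smaller when measured against the pooled reference spread than against a single matched control.

```python
# Toy illustration of how pooling heterogeneous reference controls inflates
# the variation a treatment difference is judged against. All values are
# hypothetical; none come from Hammond et al (2004).
import numpy as np

within_sd = 5.0  # assumed within-group standard deviation
# Hypothetical group means for six reference controls (different corn
# varieties grown at different locations).
ref_means = np.array([90.0, 95.0, 100.0, 105.0, 110.0, 115.0])

between_var = ref_means.var()                    # variance of the group means
pooled_sd = np.sqrt(within_sd**2 + between_var)  # spread of the pooled pool

shift = 8.0  # hypothetical treatment effect
print(f"pooled reference SD: {pooled_sd:.2f} (within-group SD {within_sd})")
print(f"effect in within-group SDs: {shift / within_sd:.2f}")
print(f"effect in pooled-reference SDs: {shift / pooled_sd:.2f}")
```

Under these made-up numbers the pooled reference spread is roughly twice the within-group spread, so a shift of 1.6 within-group standard deviations shrinks to under one pooled standard deviation: this is the "bucket of noise" effect in miniature.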

Also used was the assertion, or assumption, of a linear monotonic dose-response, which is not certain in the case of endocrine-related effects. Seralini notes this in his 2012 paper. The EFSA review repeats the same criterion to criticise Seralini et al (2012).

It's easy to see what a swamp this is. Where do I send this comment and criticism and have it fairly heard and answered with science?

One thing is for sure: 90 days is not long enough to test the safety of something consumed by humans for their lifetime and over generations.