
Regulators rely on poor-quality studies, and often no detailed studies at all, to assess the safety of GMOs, writes Prof Jack Heinemann

GMWatch reported on the review on which this article is based here:
http://gmwatch.org/index.php/news/archive/2014/15707

EXCERPT: The authors identified 47 GM crop plants that were approved by at least one food safety regulator somewhere… For these 47 approved products, only 18 published, peer-reviewed studies could be found. These studies were restricted to only 9 of the 47 approved GM food crops. The lack of studies isn’t the only interesting finding. Critically, many of this small number of studies also failed to adequately describe the methodology, other basic information needed to determine the level of confidence in the results, or even the results!

Ultimate experts

Jack Heinemann
Rightbiotech, 20 Oct 2014
http://rightbiotech.tumblr.com/post/100437995195/ultimate-experts

Do regulators rely on quality scientific information when they assess the safety of genetically engineered plants intended for use as food or animal feed?

I addressed this question in a recent blog on The Conversation. The short answer is that they don’t routinely rely upon sources of evidence that have been through a process of blind peer-review at the time that they make their conclusions about the safety of these products. This isn’t to say that their conclusions are necessarily wrong as a result. However, where the ultimate product is trust, it is relevant how society views sources of information.

It was a choice by governments to have regulators rely on industry-sourced - and frequently secret - information, rather than on data generated by disinterested scientists that could be contested through the free exchange of materials and protocols.

"It was a political decision to make the person or organisation wanting to place a product, such as a GMO, on the market legally responsible for demonstrating that it is safe. This has proved to be contentious, because the company or organisation given this responsibility also has an interest in the product being found safe." - COGEM

A new open access article by Zdziarski et al. in the leading risk assessment journal Environment International reports for the first time a breakdown of the availability of peer-reviewed articles that use rat feeding studies to assess adverse effects of GM products. The research considered both the total number of peer-reviewed articles and the number that had been through at least this level of quality assurance at the time the product was being reviewed by a government regulator.

The authors identified 47 GM crop plants that were approved by at least one food safety regulator somewhere. More than this number have been approved, but the authors limited their search to GM plants carrying one of three modifications: a kind of herbicide tolerance (glyphosate tolerance) or a kind of insect resistance (expression of cry1Ab or cry3Bb1). Herbicide tolerance and insect resistance are, however, overwhelmingly the two most common traits in commercialised crops, making this paper broadly relevant.

For these 47 approved products, only 18 published, peer-reviewed studies could be found. These studies were restricted to only 9 of the 47 approved GM food crops. The lack of studies isn’t the only interesting finding. Critically, many of this small number of studies also failed to adequately describe the methodology, other basic information needed to determine the level of confidence in the results, or even the results!

Study limitations

There are two limitations to the Zdziarski et al. (2014) study. The first is that it considers only one of the kinds of studies that may be used to assess safety, namely histopathological evidence from rats fed the product. Other kinds of published studies (e.g., compositional analyses, or studies using different animals) may have existed and been available to the regulator. In my experience, however, that too is unlikely at the time of the assessment. In any case, animal feeding studies are routinely done by the parties (public or private) that produce these products for food safety approval, and as a consequence the information is routinely available to regulators. The information should therefore be capable of being published. Rat feeding studies and histopathology, the focus adopted by Zdziarski et al., were thus a reasonable endpoint to survey when asking the general question of how much of the data provided to regulators is ultimately blind peer-reviewed.

Second, not all regulators require data from animal feeding studies, much less rat histopathology specifically. Food Standards Australia New Zealand (FSANZ), for example, does not. It believes that such studies do not add to its confidence when determining the safety of food derived from GM plants. At other times, however, FSANZ does recognise the value of histology. For example, FSANZ viewed histological evidence as useful for investigating the causes of stomach inflammation observed in an earlier study, saying: “The presence of “inflammation” was determined by visual appearance (reddening) only, without any microscopic (histological) confirmation. This is not considered a reliable method for establishing the presence of true inflammation.” FSANZ appears to believe that histology should be part of proving harm, but is unnecessary for establishing safety. Such a mixed message from regulators might reduce the priority manufacturers place on histology as part of a safety assessment.

What the public hears

The chief scientist of FSANZ, Dr. Paul Brent, said on Radio New Zealand National: “It isn’t the case that all of the information we get from industry isn’t published. In fact, much of it is published in peer-reviewed journals” (Nine-to-Noon, 28 March 2013). When statements such as this are confronted with peer-reviewed research suggesting that the very opposite is true, who is the public supposed to believe?

Even if much of the research Dr. Brent refers to were eventually published – and this is a big IF – the defensive stance by the regulator only raises additional concerns. First, as the Zdziarski et al. study shows, even less research has been through a quality assurance process independent of the regulator by the time the regulator recommends that the product be approved for use in food. The often multi-year gap between approval and publication is not reassuring. Moreover, papers published after regulatory approval might contain different information from that used by the regulator. It would be interesting to compare the proprietary studies given to the regulator with those eventually published.

Second, some kinds of studies are more important than others for food safety. Which those are may be debated, or may differ depending on the product, its use or the consumer. But the upshot is that not all products may receive the same kind of testing. We don’t have a solid idea of how comprehensively these products are tested by the same or similar methodologies, or how uniformly these tests are applied to all products. For example, were the other 38 relevant GM products not tested for histopathology, or tested but the data not published? Finally, assuming that food safety studies would be at least as rigorous as, if not more rigorous than, environmental safety studies, one can only wonder how a survey of this type, with a focus on environmentally relevant endpoints, would turn out.

It has been argued that the regulator is the peer reviewer. Some might say that it would be a nonsense to suggest that a journal’s peer-review system is better at evaluating safety studies than a regulator is. However, when a regulator such as FSANZ issues opinions on safety, or critiques of actual peer-reviewed science, but fails to disclose the names and qualifications of its own authors, it only undermines public confidence. For example, I’ve been repeatedly ignored when I’ve asked FSANZ, consistent with its policy of transparency, to reveal the names and qualifications of those authoring particular opinions in the agency’s name.

[see original article for extracts from Twitter illustrating this]

A general problem

The issues raised by the new study and my own experience are not unique to the safety testing of biotechnology products. A study by Boone et al. (2014) looking at the pesticide industry found similar problems there:

"Pesticide use results in the widespread distribution of chemical contaminants, which necessities [sic] regulatory agencies to assess the risks to environmental and human health. However, risk assessment is compromised when relatively few studies are used to determine impacts, particularly if most of the data used in an assessment are produced by a pesticide’s manufacturer, which constitutes a conflict of interest." – Boone et al.

Not only are the problems the same, but so are the solutions.

"Although manufacturers who directly profit from chemical sales should continue to bear the costs of testing, this can be accomplished without COIs by an independent party with no potential for financial gain from the outcome and with no direct ties to the manufacturer." – Boone et al.

Despite the constraints placed on regulators by their legislation, resources and political masters, I think that they routinely do a good job. I certainly would not trade the existing regulators in my country for no regulation. If we want better outcomes from regulation, we need better laws and society needs to be a more active participant.

Still, sometimes the regulator does not serve itself well. Hyping the quantity and quality of evidence it uses and understating conflicts of interest creates cracks that serve as footholds for climbing doubts.