CATCH THE BUZZ – Syngenta’s experiments were on such a small scale that little useful could be concluded from them

   Statisticians at the University of St Andrews have debunked a study by Swiss agrochemical company Syngenta that concluded there was only a low risk to honey bees from widely used neonicotinoid pesticides.

   The Syngenta study examined the effects of the neonic thiamethoxam on honey bees in the field.

   New research conducted at the Scottish university’s Centre for Research into Ecological and Environmental Modelling (CREEM) shows that even large and important effects could have been missed because the Syngenta study was too small to have adequate statistical power.

   The research by Dr. Robert Schick, Prof. Jeremy Greenwood and Prof. Steve Buckland is published today in the journal Environmental Sciences Europe.

   The Syngenta study involved two experiments: an oilseed rape experiment conducted at two locations and a maize experiment at three locations.

   At each location, the experiments used a pair of fields: in one field the crop was treated with thiamethoxam at levels normally used by farmers; in the other, the crop was left untreated.

   The Syngenta study concluded that because the experiments involved so little replication (two cases for oilseed rape and three for maize) a formal analysis of the data “would lack the power to detect anything other than very large treatment effects, and it is clear from a simple inspection of the results that no large treatment effects were present. Therefore, a formal statistical analysis was not conducted because this would be potentially misleading”.
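   To give a rough sense of what “lack the power” means with so few replicates, the sketch below computes the power of a two-sided paired t-test with only two or three field pairs. The effect sizes are hypothetical, chosen only for illustration; they are not taken from the Syngenta data.

```python
# Illustrative sketch only: hypothetical effect sizes, not the Syngenta data.
# Approximate power of a two-sided paired t-test with very few field pairs,
# computed from the noncentral t distribution.
from scipy import stats


def paired_t_power(effect_size, n_pairs, alpha=0.05):
    """Power to detect a standardised mean difference (Cohen's d) with n_pairs pairs."""
    df = n_pairs - 1
    ncp = effect_size * n_pairs ** 0.5           # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)      # two-sided critical value
    # Probability that |t| exceeds the critical value when the effect is real
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)


for n in (2, 3):
    for d in (0.5, 1.0, 2.0):                    # medium, large, very large effects
        print(f"n_pairs={n}, effect size d={d}: power = {paired_t_power(d, n):.2f}")
```

   Even for effects a statistician would call very large, the chance of detection with two or three pairs is small, which is consistent with the study’s own admission.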

   The St Andrews team says this reasoning is fundamentally wrong: formal statistical analysis is potentially misleading only if the wrong method is used, whereas mere inspection of the results is always potentially misleading because it is an entirely subjective procedure.

   “In order to reach valid conclusions about the results of an experiment such as this, one needs not just to estimate the effect of the treatment but also to measure the precision of the estimate,” Greenwood says. “That is what we have done, using standard statistical techniques.

   “What we found was that the estimates of the treatment effects were so imprecise that one could not tell whether the effects were either too small to pose a problem or, in contrast, so large as to be of serious concern.

   “In effect, the experiments were on such a small scale that little useful could be concluded from them.”
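   As a rough illustration of that point, the sketch below computes a 95% confidence interval for the mean treated-minus-control difference from only two or three paired fields. The numbers are made up purely for illustration and are not the Syngenta results; the point is how wide the interval becomes with so few pairs.

```python
# Illustrative sketch only: hypothetical field-pair differences, not the Syngenta data.
# Shows why an effect estimated from 2-3 paired fields is so imprecise that its
# 95% confidence interval spans both negligible and severe effects.
import numpy as np
from scipy import stats


def ci_for_mean_difference(diffs, confidence=0.95):
    """t-based confidence interval for the mean treated-minus-control difference."""
    diffs = np.asarray(diffs, dtype=float)
    n = diffs.size
    mean = diffs.mean()
    sem = diffs.std(ddof=1) / np.sqrt(n)                  # standard error of the mean
    t_crit = stats.t.ppf((1 + confidence) / 2, df=n - 1)  # two-sided critical value
    return mean, (mean - t_crit * sem, mean + t_crit * sem)


# Hypothetical % change in colony strength (treated field minus control field)
oilseed_rape_diffs = [-5.0, 3.0]        # two location pairs
maize_diffs = [-2.0, 4.0, -6.0]         # three location pairs

for label, diffs in [("oilseed rape (n=2)", oilseed_rape_diffs),
                     ("maize (n=3)", maize_diffs)]:
    mean, (lo, hi) = ci_for_mean_difference(diffs)
    print(f"{label}: mean effect {mean:+.1f}%, 95% CI ({lo:+.1f}%, {hi:+.1f}%)")
```

   With only two pairs the interval runs from a large apparent benefit to a large apparent harm, which is the imprecision the St Andrews team describes: the data simply cannot distinguish a harmless effect from a seriously damaging one.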