Quality matters: applying healthcare best practice to environmental policy-making

Honey Bee Over Clover by Jason Milch, CC BY-ND 2.0

A guest post from Gary Bilotta, a senior lecturer at the University of Brighton, in which he discusses his recent article published in the journal Environmental Evidence.

From badgers and bovine tuberculosis, to pesticides and pollinators, to shale gas and pollution, environmental policies can attract a lot of attention from the public and from experts on all sides. When even the experts hold differing views on environmental topics, policy-makers need mechanisms to take these views into account. Policy implementation is multidimensional and rightly includes electoral, ethical, cultural, practical, legal and economic considerations alongside scientific evidence. But if policy-makers wish to discover what the evidence base is on a given topic, they must attempt to navigate their own personal biases. They must also attempt to identify bias in the scientific evidence itself, as part of assessing the quality of that evidence.

Though assessments of study quality can be made informally using expert judgement, we know that experts in a particular area frequently have pre-formed opinions that can bias their assessments (Burgman et al., 2011; Oxman and Guyatt, 1993). To reduce the potential for reviewer bias, and to ensure that the findings of an evidence synthesis are transparent and reproducible, organisations such as the Cochrane Collaboration, the Campbell Collaboration, and the Collaboration for Environmental Evidence recommend the use of formal quality assessment tools, recognising that the merits of a formal approach outweigh its drawbacks. Could similar assessment tools be used for environmental policy-making?

Around 300 formal study quality assessment tools have been identified in the literature. They are designed to provide a means of objectively assessing the overall quality of a study using itemised criteria, either qualitatively in the case of checklists or quantitatively in the case of scales.

Image by Killianwoods (University Observer) [Public domain], via Wikimedia Commons
Perhaps unsurprisingly, given the diverse range of criteria included within quality assessment tools, it has been empirically demonstrated that different tools produce different estimates of quality for the same studies. This could reverse the conclusions of an evidence synthesis and potentially lead to misinformed policies (Colle et al., 2002; Herbison et al., 2006).

In the healthcare field, a meta-analysis of 17 trials compared the effectiveness of low-molecular-weight heparin (LMWH) with standard heparin, two treatments for the prevention of post-operative thrombosis. Trials identified as ‘high quality’ by some of the 25 quality scales interrogated indicated that LMWH was not superior to standard heparin, but trials identified as ‘high quality’ by other scales led to the opposite conclusion. It is therefore very important to consider carefully the choice of quality assessment tool used in evidence syntheses.

In my article, co-authored with Defra’s Chief Scientific Advisor, Professor Ian Boyd, and University College London’s Dr Alice Milner, and published this week in Environmental Evidence, we argue that quality assessment tools should have a demonstrable link with what they purport to measure, facilitate inter-reviewer agreement, be applicable across study designs, and be quick and easy to use.

The Cochrane Collaboration, who are internationally renowned for their systematic reviews in healthcare, recently developed an approach to quality assessment that satisfies these four criteria. Before 2008, the Cochrane Collaboration used a variety of quality assessment tools, mainly checklists, in their systematic reviews. Acknowledging the inconsistency of using different tools to assess the same studies, and growing criticisms of many of these tools, in 2005 the Cochrane Collaboration’s Methods Groups, including statisticians, epidemiologists, and review authors, embarked on developing a new evidence-based strategy for assessing the quality of studies. The resultant approach is now used by the WHO, among twenty other organisations internationally.

Shale Gas Exploration by K A, CC-BY SA ND 2.0

Our article investigates the extent to which this best practice approach could be useful for assessing the quality of evidence from environmental science. We believe that the feasibility of this has been demonstrated in a number of existing systematic reviews on environmental topics published by the Collaboration for Environmental Evidence. It is not difficult to imagine that the approach could be adapted and applied routinely, as part of the quality assessment of environmental science studies cited in evidence syntheses, and we propose a pilot version of the modified approach for this purpose.

‘Learning by doing’ with pilot versions of these tools is exactly how healthcare refined its own. Better formal quality assessment will improve how science is used to inform policy decisions, helping policy-makers to navigate personal biases while identifying the biases and weaknesses in primary studies that affect their reliability. It should also improve the quality of policy-orientated environmental science in the future.
