
Widening the definition of ‘gold standard’

Michael O’Donnell at Bond, the UK membership body for NGOs working in international development, explains why commissioners and managers should always consider the ‘design triangle’ when selecting an evaluation method – and introduces a new guide that gives them the information they need.

Despite what some guidance occasionally still says, debates among evaluators have long since moved past the idea of which evaluation methods are most rigorous, ‘best’ or the ‘gold standard’. The energy expended on those methodological arguments has been a distraction. It’s about as sensible as debating which item of clothing is better than another: the answer is inevitably ‘it depends…’. Fortunately, the focus of discussion has shifted to the more practical question of which evaluation methods and designs are most appropriate for different circumstances.

A new paper by Elliot Stern, commissioned by the Big Lottery Fund, Bond, Comic Relief and the Department for International Development (DFID), makes current debates about the appropriateness of different impact evaluation methods more accessible. ‘Impact Evaluation: A Guide for Commissioners and Managers’ aims to translate the detailed 2012 report by Stern et al. for DFID into a more digestible form that narrows the knowledge and information gap between evaluators and commissioners regarding evaluation methods.

As Professor Stern said when launching the guide at the UK Evaluation Society Conference in May, you can’t produce a “Brain Surgery for Cab Drivers” guide, and this one doesn’t try to make everyone an expert on evaluation methods. But it does equip commissioners and managers with the information to have more sensible discussions and make better decisions when faced with a choice of evaluators and possible methods.

The paper focuses on the idea of the ‘design triangle’. There is a range of evaluation methods and designs out there. There are many different evaluation questions you may want to ask. And there are many different attributes or design elements of interventions that affect how best they can be evaluated. Choosing the right evaluation methods involves seeing how those three elements fit together.

If your programme involves a consistently implemented set of activities intended to produce a specific, defined impact, and you want to know how much impact you achieved, then a Randomised Controlled Trial might indeed be ideal. If you have been experimenting in different places with a range of capacity-building and influencing approaches to improving various aspects of good governance, and want to understand under what institutional circumstances various interventions seemed to make a difference, then maybe try Qualitative Comparative Analysis.

Commissioners of evaluations don’t need to be experts on all evaluation methods to do their job well. But we live in a world where donors have biases for and against different evaluation designs and methods, where evaluators may offer their preferred methods even when they are not the most appropriate choice, and where (as Bob Picciotto recently stressed) the increasing complexity of development problems demands creative evaluation solutions. In that context, it is important to know enough to ask the right questions and push back against choices that would generate useless evidence.

 

Views expressed are the author’s own and do not necessarily represent those of the Alliance for Useful Evidence. Join us (it’s free and open to all) and find out more about how we champion the use of evidence in social policy and practice.