Dr Stefanie Ettelt and Professor Nicholas Mays (London School of Hygiene and Tropical Medicine) share their simple tips for ensuring that policy pilots generate useful evidence.
Piloting of public policy is almost universally seen as a good idea. If done well, piloting can minimise the risks arising from implementing novel initiatives and generate much needed knowledge to inform future policy roll-out.
While this rationale seems straightforward, our research on piloting national initiatives in health and social care has shown that policy pilots can be initiated for a variety of purposes, with evaluation often being an afterthought rather than the main objective.
One of the main purposes we identified in our study was to treat pilots as the first phase of national implementation rather than as a ‘trial’ phase in which the effectiveness (and hence desirability) of a policy is rigorously tested.
We have also been involved in evaluations of pilots that were so difficult to implement that there was no chance of generating the amount of data required for robust outcome evaluation (let alone economic evaluation) simply because there was not enough of the programme in existence.
To improve the odds that policy piloting generates useful evidence, we have published guidance for policy-makers in a discussion paper. The guidance is largely based on our experience as evaluators of national health and social care policy, but, judging by the comments we have received so far, it has wider resonance across government departments. Here are three of our ‘top tips’:
- Organise the pilot programme in such a way that it is suitable for evaluation. This means that you should involve evaluators as early as you possibly can and involve them in decisions about selecting pilot sites. If you wish for a rigorous outcome evaluation (because you know that evidence of ‘what works’ matters), make sure that there is enough activity in pilot sites so that sufficient data can be generated. If no-one enrols in your scheme, there will be no outcomes to measure. Local enthusiasm for the scheme may be harder to sustain than you expect.
- Describe in detail the intervention ‘logic’ of the policy that is to be piloted. In some government departments this is done routinely, while others expect the evaluators to work out what the mechanisms of cause and effect might be. However, thinking through the steps that are required for policy to be put into action can be a good reality check. If it is difficult to describe how the policy is expected to produce the results that Ministers aspire to, this may be because the causal link between intervention and outcomes is unclear or non-existent and hence not amenable to evaluation. Such pilots may benefit from a more descriptive, exploratory approach to evaluation in the first instance before entertaining ideas of outcome evaluation, including randomised controlled trials.
- Keep in mind that most independent evaluators, in academia and elsewhere, expect to publish their findings. The obvious advantage of publishing, especially in peer-reviewed journals, is that the evaluation you commissioned will contribute to the (national and international) repository of knowledge that governments aspire to build and use to underpin policy decisions. However, this also means that you have to be able to live with the results of the independent assessment – even if they do not support the policy initiative that you worked so hard to bring to life. Before commissioning the evaluation, do a thought experiment: imagine what you (and your Ministers) will do if the findings show little or no positive effect, which is more likely than not when implementing novel policy in existing complex systems.
Views expressed are the authors’ own and do not necessarily represent those of the Alliance for Useful Evidence. Join us (it’s free and open to all) and find out more about how we champion the use of evidence in social policy and practice.