
Day 9: evidence in the real world


“You say ‘evidence’. Well, there may be evidence. But evidence, you know, can generally be taken two ways.” – Dostoevsky, Crime & Punishment, 1866

The blog posts over the past two weeks have demonstrated that embedding rigorous evidence in decision making is not always a straightforward task. As the quote above suggests, this is further complicated by the fact that the data do not always point to a single course of action for decision makers to take.

Earlier in the week we discussed how decision making is influenced by politics, values, ideology and objectives. The interpretation of data is influenced in much the same way. Take, for instance, the widespread debate surrounding climate change or the value of herbal medicine. Or the well-documented case of the MMR jab, when parents were caught between doctors and the scientific community – the supposed ‘experts’ – who disagreed over whether there was a potential link to autism. When situations like these arise, how do decision makers weigh the evidence? What course of action should they take?

There may also be instances when the evidence is not yet available to provide specific solutions or guidance. Systematic reviews are seen as an effective way of putting research studies in a scientific context. However, a key criticism is that they “often conclude that little evidence exists to allow the question to be answered” (although there is a counter-argument that this is a useful finding in its own right). If further analysis is then needed, “How can the need for rigour be balanced with the need for timely findings of practical relevance?”

Identifying effective programmes and policies is not, of course, the end of the process. We have already noted the need to drive demand for such evidence, but we also need to improve our ability to ensure that programmes are implemented with fidelity to the original model, to increase their chances of success when they are rolled out. For instance, if a programme is evaluated and found to be effective, can we rely on these findings to implement it in a different area or context? Improving our understanding of implementation science is crucial.

Even when a programme has been successfully identified, implemented and scaled, the need for evaluation does not end. We need to ensure that the programme or policy continues to be effective. The most intensive work is arguably over, but we should still check that its impact remains as strong as it can be. We also need to recognise what types of evaluation are needed at different stages. The National Institute for Health and Clinical Excellence (NICE) has a grading system that runs from ‘very low’, indicating that any estimate of effect is very uncertain, up to ‘high’, indicating that further research is unlikely to change confidence in the estimate of effect. Do we need similar systems in other areas to ensure that the most appropriate evaluations are undertaken at different stages, and to avoid spending effort where further research is unlikely to reveal anything new?

What has become clear over this blog series is that we need to strengthen the supply and generation of research, as well as the demand for it. Tomorrow’s blog post will discuss the UK Alliance for Useful Evidence, a new initiative being developed to play this role. The Alliance for Useful Evidence will explore the infrastructural improvements and changes that are needed to embed rigorous evidence in decision making across social policy and practice. To hear more about the Alliance for Useful Evidence, please join us at an event on Monday 24 October at NESTA.

As with all the challenges we have outlined over the past few days, those described here should not be an excuse to avoid rigorous testing and evaluation. Instead, we hope they help set the need for rigorous research within its real-world context.