As an agency tasked with stimulating innovation in the UK, a question we’re frequently asked is: “How can you both stimulate innovation and have an evidence agenda?” We would argue that evidence is a vital part of a functioning innovation system. Research and development is, after all, a traditional cornerstone of innovation systems. If we fail to test and experiment with new innovations, how do we know whether they work?
Yet we recognise the need to balance the drive for better evidence of effectiveness against the risk of creating insurmountable barriers for those developing innovative new approaches. Our research to date suggests a number of potential barriers. For instance, many providers developing potentially effective approaches lack the skills, capabilities or willingness to evaluate their own work. Providers can also find the prospect of evaluation daunting: would unfavourable findings mean they lose their future funding?
When identifying effective programmes, many organisations rely on academic literature as the primary source of information and evidence. Academic literature can provide a robust and reliable evidence base, but the “lag” between new practice emerging and research on it being published can lead to potentially better approaches not gaining the recognition they deserve. This means many approaches can remain below the radar. Then there are the well-documented instances of academic publication bias, which complicate reliance on academic literature even further.
To gain academic attention, an intervention may need to have undergone a randomised controlled trial (RCT). As we will discuss tomorrow (in Day 3: Debunking the myths about Randomised Control Trials), the “gold standard” of RCTs should be the level of ambition we aim for where this is appropriate, but we need to recognise that it could be a long way off for many providers, especially if the intervention is at an early stage of development. We don’t want a situation where an intervention is selected because it reaches perceived “top tier” standards of evaluation when there could be (potentially) better solutions which have a “lower standard” of evidence but need more investment and support. To ensure innovation and evidence can coexist, we need to understand what an appropriate scale of evaluation is for programmes of different sizes at different stages of development.
This is where programmes like the Greater London Authority’s Project Oracle are so important. Oracle builds the evidence behind the interventions and approaches being developed by community groups and charities, many of which are very small and struggle to evaluate their work. Oracle clearly demonstrates that it is possible to develop the capacity and capabilities of providers to move up academically rigorous “standards of evidence” at a speed that suits them as their approach develops and matures. The other interesting element of the Oracle approach is that taking part is not obligatory for those receiving GLA funding; instead, they willingly sign up in recognition that evaluation can be a useful tool for improving their approach and attracting additional funding and support.
Evidence is vital to the innovation system. A lack of evidence can lead to a lack of confidence in new approaches, and at a time when we need to develop effective solutions to many of our long-term challenges, we must ensure that the best approaches don’t remain marginal.