In this very first blog series from the Alliance, we outline, over ten days, ten of the challenges and barriers frequently encountered when trying to improve the use of evidence in decision-making.
Day 1: moving beyond discussing “evidence-based”
Since the 1990s, the term “evidence-based” has become a central part of public policy discourse in the UK. Yet despite the term becoming common parlance, we lack an agreed understanding of what it actually means. What evidence do we need in order to know that a programme “works”? Who does it work for? When, and in what situations?
Day 2: enabling evidence and innovation to co-exist
As an agency tasked with stimulating innovation in the UK, a question we’re frequently asked is “How can you both stimulate innovation and have an evidence agenda?” We would argue that evidence is a vital part of a functioning innovation system. Research and development is, after all, a traditional cornerstone of innovation systems. If we fail to test and experiment with new innovations, how do we know whether they work?
Day 3: debunking the myths about randomised controlled trials (RCTs)
There is much contention around the use of randomised controlled trials (RCTs), with examples of their use being blocked or vetoed. Yet, if used correctly, they can be one of the most powerful tools for testing whether a service you receive is effective, or indeed harmful.
Day 4: institutionalising the demand for evidence
Despite decades of excellent research being produced, its use in decision-making remains limited in many areas of social policy and practice. The answer may not always be to generate and gather more evidence; rather, we should focus on stimulating the demand for it.
Day 5: dealing with negative findings
Most people would recognise that we need to improve how we measure the impact of services and programmes. Yet what do we do when an evaluation brings back negative findings? In the quest for “what works”, do we shy away from discussing what doesn’t?
Day 6: managing the politics of decision-making
Research, evidence and data do not exist in a vacuum. To influence decision-making, sources of information have to compete with a myriad of other factors, including political pressure, lobbying, public opinion, ideology and personal values. If the research findings clash with the dominant view, how can these factors be managed so that evidence is embedded in decision-making?
Day 7: making the debate relevant
Not everybody thinks that evidence is the most important thing in the world. But most would recognise that knowing whether a programme or intervention is going to be harmful to them, their family or their friends is a big deal.
Day 8: opening up data for better, cheaper evidence and the new army of armchair evaluators
We have talked about the need for more and better use of evidence, but this does not always mean commissioning costly academic research. Instead, we can find new ways of utilising the information already available and empower wider society to make use of it. This means that, as well as innovating with new programmes and policies, we also need to innovate with the tools we use to evaluate them.
Day 9: evidence in the real world
“You say ‘evidence’. Well, there may be evidence. But evidence, you know, can generally be taken two ways” – Dostoevsky, Crime and Punishment, 1866
The blogs over the past two weeks have demonstrated that embedding rigorous evidence in decision-making is not always a straightforward task. As the quote above suggests, this is further complicated by the fact that data does not always point decision-makers towards a single course of action.
Day 10: developing a UK Alliance for Useful Evidence
We are delighted to announce that we are working with the ESRC – and others – to create an Alliance for Useful Evidence.
The whole series of blogs can be downloaded as a PDF: http://www.nesta.org.uk/library/documents/TenStepsBlog.pdf