
Day 5: dealing with negative findings


Most people would recognise that we need to improve how we measure the impact of services and programmes. Yet what do we do when an evaluation brings back negative findings? In the quest for ‘what works’ do we shy away from discussing what doesn’t?

It is commonly acknowledged that testing is essential to see what is effective. But to truly learn about effectiveness, we also need to know what hasn't been successful. How, then, do we deal with negative findings?

For a programme developer, could negative findings mean the termination of funding and reputational damage? For a politician, could admitting that a particular policy has been less than successful give the opposition ammunition to discredit their programme of work? Do these pressures lead to negative findings being hidden, or even actively disincentivise evaluations from being undertaken at all?

Then there are differing degrees of “failure”. There are well-known examples of ineffective programmes, such as Scared Straight or DARE, but how do we respond when the findings are less clear cut? When do we decide that negative findings point to areas for improvement rather than to total failure? How do we decide when to make improvements and when to pull the plug entirely? How much latitude do funders give providers to amend or adapt their approach if they find they aren’t meeting certain outcomes? Should evaluation be about pass or fail, or can we see it as a tool for continual improvement?

Alongside providers feeling that they need to give success stories to funders, the same can be true of some evaluators, researchers and academics, who prefer to discuss the positive outcomes of experiments and evaluations. For instance, evaluators commissioned to undertake studies may feel compelled to tell the provider funding the work what they want to hear, toning down negative findings about a programme in the hope of securing future contracts. Then there are the widely discussed issues of publication bias, which mean that the “boring” findings of unsuccessful studies are less likely to be written up or published in an academic journal. To counter this, one journal has created a ‘negative results’ section. While it is a positive move that these studies are made available, not giving them the same prominence as studies in mainstream publications could lead to them being overlooked.

To advance the evidence agenda we need to emphasise what doesn’t work as strongly as we strive to find what does. For programmes and policies to improve, there needs to be a move towards being more open and frank about negative findings, treating them as experiments to learn from. This will only happen if honesty is encouraged and evaluations are used for improvement, not as a pass-or-fail test.