
DRIVE: An opportunity to strengthen the evidence base

Howard White (Campbell Collaboration) examines whether DRIVE, an Intimate Partner Violence programme being piloted in the UK, will be conducted in a way that will generate useful and much-needed evidence.

BBC News recently carried a story on a new programme, called DRIVE, to tackle Intimate Partner Violence, being piloted in Essex, West Sussex and South Wales. I wondered what the evidence is for such programmes, and whether the pilot will rigorously test the model.

By chance, the Campbell Collaboration has just published a study on programmes to help women deal with domestic abuse. DRIVE has a different focus, since it works with perpetrators rather than victims, but it is still instructive to consider the findings. The review finds some, but rather limited, benefits from such interventions. The main conclusion, however, is that the evidence base is weak. Although 13 trials were found, covering 1,241 women, the strength of the designs varied, as did the interventions being assessed and the measures used in each study.

So my second question, about how DRIVE is being evaluated, becomes even more relevant. There is scant evidence to support most interventions being implemented around the world by both governments and NGOs. So every programme is an opportunity to collect new evidence. Is DRIVE taking this opportunity? And, if so, is it a rigorous evaluation, or one which systematic reviewers will consider weak?

In training sessions for programme managers I say ‘if you only remember one thing from this workshop, remember that it is never too early to start an impact evaluation’. Just because we evaluate impact toward the end of a programme doesn’t mean that we can leave the design until then.

There are several reasons why we want to design the evaluation at the same time as the programme. The first is that the evaluation design may have implications for programme design, most notably if we decide it is possible to conduct a randomized controlled trial. Such an approach is clearly possible for DRIVE. It would be good to have the academic team on hand to derive the protocol for random assignment. And what happens to the control group: nothing, an alternative treatment, or treatment as usual? And will this be the same for the different implementing agencies in different programme areas? What happens to the control matters for the study findings, as highlighted in my last blog for the Alliance.
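To make that concrete, here is a minimal sketch of what an assignment protocol might look like, assuming a blocked design so the two arms stay balanced as perpetrators are recruited over time. The arm labels and block size are illustrative assumptions, not the actual DRIVE protocol.

```python
import random

def block_randomiser(block_size=4, seed=None):
    """Yield arm assignments in shuffled blocks so group sizes stay balanced."""
    rng = random.Random(seed)
    block = []
    while True:
        if not block:
            # Refill with an equal number of each arm, then shuffle the block.
            block = ["DRIVE", "treatment-as-usual"] * (block_size // 2)
            rng.shuffle(block)
        yield block.pop()

# Assign arms as participants are recruited over time.
assigner = block_randomiser(seed=1)
for participant in ["P001", "P002", "P003", "P004"]:
    print(participant, next(assigner))
```

Blocking is one common way to guarantee roughly equal arm sizes under rolling recruitment, and a seeded generator keeps the allocation sequence reproducible and auditable.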

We also want to start early so we can collect baseline data. In this case, assignment into the programme happens over time, so again academic input is needed to design a data instrument which will collect the required pre-intervention data from both treatment and control groups at recruitment into the programme. The outcomes need to be agreed and measured in a way consistent with existing academic best practice, to enhance learning from the pilot. It also has to be decided whether to collect endline data at graduation for each participant, or at a set time after graduation (or both), rather than a single endline with varying times since graduation. Power calculations are needed to determine the required sample size. Finally, a gold-standard impact evaluation embeds the analysis of effectiveness in a broader analysis of the causal chain, assessing the factors influencing programme success and failure. That won't happen, or not so well, if the academics are brought in at the end. They should come in at the start to be engaged in mapping out the evaluation questions which emerge from the programme's theory of change.
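As an illustration of the power calculation step, the sketch below uses statsmodels to solve for the sample size per arm needed to detect a given standardised effect. The effect size, significance level and power shown here are assumptions chosen for illustration, not figures from the DRIVE evaluation.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative inputs (assumptions, not DRIVE's actual parameters):
# detect a standardised effect of d = 0.4 with a two-sided 5% test
# at 80% power, comparing two equal-sized groups.
analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.4, alpha=0.05,
                                 power=0.8, alternative="two-sided")
print(f"Required sample size per arm: {n_per_arm:.0f}")
```

For these illustrative inputs the answer comes out at roughly 100 per arm; a real calculation would ground the expected effect size in the prior literature, such as the trials in the Campbell review.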

So what about DRIVE? I found online the bidding documents to implement DRIVE in Wales. It is indeed a randomized controlled trial. The implementing agency is expected to assign 100 perpetrators to the programme, and a similar number to the control. The evaluation design – which includes a process evaluation and cost effectiveness analysis – has been developed in association with a team of researchers from the University of Bristol.

So the real story of DRIVE is that it is an exciting opportunity to learn about programme effectiveness, and that the opportunity is being taken through close collaboration between researchers and the partner organisations behind the programme: SafeLives, Respect and Social Finance. Not to have done so would have been a waste of an opportunity to truly help victims of domestic violence. I hope that others learn of, and from, this example.

This blog was amended on 4 April 2016 to correct details about how the DRIVE evaluation is being conducted.

Views expressed are the author’s own and do not necessarily represent those of the Alliance for Useful Evidence. Join our network (it’s free and open to all) and find out more about how we champion the use of evidence in social policy and practice.