The results of one of the most complex evaluations ever attempted in social policy were published this week. Based on a specially built dataset with over a million cases and over 3,000 variables, its focus was one of the most ambitious and, in some people’s eyes, controversial programmes of recent decades: the UK’s Troubled Families programme.
It’s a landmark study in terms of its methodological sophistication – covering what is itself a landmark programme – and certainly one that any self-respecting policy wonk or behavioural scientist should know about. Jonathan Portes, former Cabinet Office chief economist and critic of previous attempted evaluations of the programme, has welcomed the new analysis.
For those in a real hurry, it looks like the programme worked, though not for all outcomes, and was cost effective. Which is good, since about £1bn has been spent on it…
How the programme began
The idea behind the programme is a simple one, with origins that date back more than a decade. When you pick apart government expenditure – or just spend any time in front-line public services – you find that there are a small number of families who have far more than their share of problems. The adults in such families are generally struggling to get or hold a job, and are on benefits. There are often problems with mental illness, drugs or criminal offending in the household. The kids are skipping school and getting into trouble with the police. The neighbours are complaining. An array of public services gets involved – social workers, housing officers, the police, health workers, concerned teachers – and yet with seemingly little positive effect. Why not instead provide an integrated package of support, built around a single caseworker who could see the bigger picture and help support the family towards better outcomes?
Like many ambitious policy programmes, this one has a history that stretches across administrations. Its origins date back to the Blair government, and a line of policy that became known as ‘tough love’ – providing extra help and support but with strong expectations about what the person themselves needed to do – or what psychologists might recognise as echoing ‘authoritative parenting’ (love and limits).
If there was one intervention that sparked it off, it was the Dundee Family Intervention Project. This involved bringing a whole family into an intensive intervention, providing highly structured support, guidance and limits on their behaviour. For many such families, it was the ‘last chance saloon’ before their disruptive behaviour triggered even more dramatic action by the authorities, such as loss of housing or kids being taken into care.
As it happens, I led the work for Tony Blair’s No10 Social Exclusion Task Force to shape the Dundee project into something that could be tried on a more ambitious scale. The force of nature who was asked to deliver it was Louise Casey, then based in the Home Office. But this was 2006, Blair’s last year in office. The then Prime Minister was probably more knowledgeable and confident than he had ever been about what might work on entrenched social policy issues, but his power was waning fast, and we struggled to scrape together more than a few million to get the programme going.
Taking the programme to scale
With Louise behind it, the programme carried on at a modest level through the Brown years. But it was the incoming coalition Government of 2010 that took the decision to massively scale it up. Steve Hilton was a particular champion within No10, and made the case – even against the background of very tough spending cuts – to find more than £400m for the programme. It went from helping a few thousand families to hundreds of thousands.
Here, I should make a personal admission of failure. Having failed to get the original programme set up as a randomised controlled trial (RCT), I again failed in 2010. That might be a story for another day, but it is one of the reasons why I’d especially like to call out the ingenuity and skill behind the evaluation released this week, and to recognise the extraordinary effort and work of the Ministry of Housing, Communities and Local Government’s Chief Economist (and my old colleague) Stephen Aldridge and his team in getting it done. Behind this evaluation lies five years of genuinely extraordinary work to painstakingly piece together vast arrays of separate administrative datasets, from across multiple Local Authorities and years, while at the same time negotiating the complex legal and ethical safeguards that such data linking involves. With no RCT, stepped-wedge or obvious discontinuity built into the programme design, Stephen’s team had to instead work their way through a complex, multi-layered propensity score matching design. If that wasn’t hard enough, they also had to figure out how to handle the possible effects of other large-scale cross-cutting programmes such as the roll-out of Universal Credit. Kudos!
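For readers less familiar with the technique: propensity score matching estimates each family’s probability of entering the programme from observed characteristics, then compares programme families with statistically similar non-programme families. The toy sketch below (in Python, on synthetic data with made-up covariate names – nothing here comes from the real MHCLG dataset, and the actual evaluation design was far more complex and multi-layered) shows the basic idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: 2,000 families with two hypothetical covariates
# that influence both programme entry and the outcome.
n = 2000
deprivation = rng.normal(0, 1, n)
prior_contacts = rng.normal(0, 1, n)
X = np.column_stack([deprivation, prior_contacts])

# Programme entry depends on the covariates, so a naive comparison is biased.
entry_logit = 0.8 * deprivation + 0.5 * prior_contacts - 0.5
treated = rng.random(n) < 1 / (1 + np.exp(-entry_logit))

# Simulated outcome: deprivation hurts; the programme adds a true +0.5 benefit.
outcome = -1.0 * deprivation + 0.5 * treated + rng.normal(0, 1, n)

# 1. Estimate propensity scores with a simple logistic regression
#    (fitted by gradient ascent to keep the sketch dependency-free).
def fit_propensity(X, y, lr=0.1, steps=2000):
    Xb = np.column_stack([np.ones(len(X)), X])  # add intercept
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)       # log-likelihood gradient step
    return 1 / (1 + np.exp(-Xb @ w))

ps = fit_propensity(X, treated.astype(float))

# 2. Match each treated family to the control family with the
#    nearest propensity score (nearest neighbour, with replacement).
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# 3. Average treatment effect on the treated: mean outcome gap across pairs.
att = (outcome[t_idx] - outcome[matches]).mean()
naive = outcome[treated].mean() - outcome[~treated].mean()
print(f"naive difference:  {naive:.2f}")
print(f"matched estimate:  {att:.2f}  (true simulated effect is 0.5)")
```

In this simulation the naive treated-versus-untreated comparison is dragged down by the fact that more deprived families are more likely to enter, while matching on the propensity score recovers something close to the simulated effect.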
Evaluating the programme
What did they find? Arguably the most important result, from both a human and a fiscal perspective, was the reduction in the number of children who ended up being brought into care (see graph below). In essence, the Troubled Families key workers seemed to help the families do a better job, and hence keep the kids in the family rather than in care. It clearly wasn’t easy: a tell-tale sign is the surge in child protection plans put in place for the troubled families group in the first year of the programme – plans that ultimately seem to bear fruit.
Similarly, both adults and (older) children in the programme group were found to be significantly less likely to end up in prison or custody – a 25% reduction in adult custodial sentences, and a 37% reduction in youth custodial sentences. These reductions drive the research team’s estimates of savings: every £1 spent on the programme delivers £2.28 of public value benefits over five years. More importantly for Treasury colleagues, they estimated that every £1 spent delivers £1.51 of fiscal benefits over five years (though not all of these are necessarily cashable).
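A back-of-envelope scaling puts those ratios in context against the roughly £1bn spend mentioned earlier (purely illustrative – the evaluation derives its ratios per family, not as a single aggregate):

```python
# Illustrative scaling of the reported benefit-cost ratios to the
# approximate total spend; all figures are from the article.
spend = 1.0e9               # approximate total programme spend, £
public_value_ratio = 2.28   # £ of public value benefit per £1 spent, over five years
fiscal_ratio = 1.51         # £ of fiscal benefit per £1 spent, over five years

public_value = spend * public_value_ratio
fiscal = spend * fiscal_ratio
net_fiscal = fiscal - spend  # fiscal benefits over and above the spend itself

print(f"public value benefits: £{public_value / 1e9:.2f}bn")
print(f"fiscal benefits:       £{fiscal / 1e9:.2f}bn (net £{net_fiscal / 1e9:.2f}bn)")
```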
There are other areas where the results were less clear cut, or generally non-significant. Employment outcomes did not seem to improve and nor, rather disappointingly, did missed school days. There are also lots of questions that this analysis does not (yet) answer – in particular, which of the various versions of the Troubled Families programme work best, or for whom.
This evaluation is a remarkable piece of work. It’s testament not just to the slog and skill of Stephen Aldridge and his MHCLG team, but also to the dedication of the families’ key workers who have been doing all the hard work over the last few years. In the current media environment, its findings aren’t likely to get much attention, let alone the ingenuity and methods that lie behind them, but it has truly set a new benchmark in data linking. It offers the potential to unlock insights into the cost effectiveness of a host of other social policy programmes, and to help the most disadvantaged better than we do today. Congratulations to everyone involved.
Finally, while we should admire the incredible work that went into this study, it should also remind us all, especially policymakers, just how hard it is to evaluate a programme that wasn’t set up to be evaluated. We really, really should design such programmes so that evaluation is simple to do, maximising our ability to optimise and improve them in future.
Views expressed are the author’s own and do not necessarily represent those of the Alliance for Useful Evidence. Remember you can join us (it’s free and open to all) and find out more about how we champion the use of evidence in social policy and practice.