Not all research is equally important to social programmes. This isn’t only a matter of the standard of evidence, but of understanding the limits of different research strands and what they can, and cannot, tell us.
I gave evidence to the Public Administration and Constitutional Affairs Committee’s Kids Company inquiry. I gave evidence about evidence: the kind that Kids Company used in its annual reports and in its public communications, a topic on which I first wrote in February 2015.
Kids Company funded research with respected academics at the University of Cambridge and UCL. The charity welcomed individual researchers who were interested in its cohorts and gave them generous access for varied projects, from handwriting analysis to gang research. It sponsored PhD students. It facilitated access to its clients for Great Ormond Street Hospital. There was, in short, a relatively complicated set of research relationships ongoing at any one time.
This flurry of activity and the way it was reported in the annual accounts managed to convey the impression that Kids Company had a relatively robust approach to the use of evidence. This was, however, an impression rather than a substantive reality.
The vast majority of the charity’s research spending, both internal and external, was concentrated in two areas: need and neuroscience.
The first type of research was useful for both fundraising and advocacy. The second was a particular interest of the charity’s CEO and provided a small amount of evidence in support of Kids Company’s model.
There was, by contrast, a minimal allocation of resources to assessing the effectiveness of the charity’s own delivery of services: whether it was consistent, reliable and made a tangible difference.
This is the heart of the problem.
The neuroscience partnerships were a vanity project for the charity. This is not to denigrate the research they funded, but to underline its marginal applicability to Kids Company’s work. The needs research was important but needed to be bolstered by systematic measurement of the charity’s delivery across its various sites. This was non-existent.
The only evidence of monitoring – of outputs – came from the small number of young people covered directly by the government grant overseen by Methods Consulting and paid for by the Cabinet Office. In general, however, data is conspicuous by its absence across all the charity’s reporting.
The lesson here is that before we even get to considering standards of evidence, we need to give careful consideration to the allocation of resources to research, and to remember that only systematic internal monitoring will help us to assess effectiveness. It may not be as illustrious as a Cambridge post-doc, and its usefulness for fundraising may not be immediately apparent, but this is what will tell us if we are really doing our best for those we want to help.
My full submission to PACAC is available online and gives recommendations on implementing an effective evidence cycle. This is the approach that we use at Osca. More information on some of our current projects can be viewed at www.osca.co.
Dr Genevieve Maitland Hudson is Director of Research at Osca, the social impact lab. She writes about measurement for Civil Society News and works for a wide range of clients developing useful social metrics that help to improve services.
Views expressed are the author’s own and do not necessarily represent those of the Alliance for Useful Evidence. Join us (it’s free and open to all) and find out more about how we champion the use of evidence in social policy and practice.