
Your chance to vet the Department for Education’s evidence

An open invitation has gone out for comments on the ‘strength’ of the Department for Education’s evidence. Jonathan Breckon, Head of the Alliance for Useful Evidence, explores how the ‘strength’ of evidence can be judged and how you can submit your response – as an individual or by contributing to the Alliance’s. Deadline for online submissions to the Department: 12th Dec 2014.

So how good is the evidence used by the Department for Education? You, the public (or, more likely, evidence geeks, if you are reading this), are cordially invited by a parliamentary committee to vet the ‘strength of evidence’ behind education policy.

Is the research they use reliable and robust enough to back government policies, such as teaching children to read with synthetic phonics, or spending billions of pounds on teaching assistants?

If you have little idea what the government evidence is, the Department for Education (DfE) has helpfully provided pithy two-page ‘evidence-check memorandums’ in nine policy areas, from big-ticket political commitments like free school meals to narrower topics like music education.

The deadline for online responses is Friday 12 December 2014. We will also be submitting a response to them, so feel free to share with us any thoughts, reports or other items you would like us to include. If you would like to add your voice to our response, you can do so by emailing us. The deadline for contributions to our response is Wednesday 10th December 2014.

A fertile evidence ‘ecosystem’ in education

This evidence review by the House of Commons Education Select Committee follows a report last year on DfE evidence by Bad Science author and randomista Ben Goldacre. He was invited into Whitehall to cast a scientist’s eye over the research used by the Department for Education. He argued that the real priority was that ‘Teaching needs an ecosystem that supports evidence-based practice’.

Some of that ecosystem is falling into place, and the Department for Education has skin in the game, with significant cash behind evidence initiatives. They are, for instance, co-funders of ‘what works centres’ like the Education Endowment Foundation. The department should also be applauded for publicising its research needs, so we know its priorities and where the gaps are. Establishing a ‘college of teaching’ has also been given tacit support; the college would play a role in disseminating the evidence for best practice. The department hosts research champions too: in February 2014, LSE lecturer Dr Tim Leunig started a job share as the department’s ‘Chief Analyst and Chief Scientific Adviser’ (Tim was once maliciously branded a ‘barmy academic’ in the Liverpool Echo, but his research CV is long). The other part of the job share is Donna Ward. With all these brains and all this back-up, it must surely be possible for the DfE to deliver strong evidence?

How do you decide evidence ‘strength’?

It’s one thing to check whether education policy contains embarrassing statistical and factual errors, as the brilliant Full Fact do, but quite another to check the ‘strength’ of evidence. What criteria could you use? The DfE evidence-check memorandums refer to ‘sound evidence’ to back policies such as ‘systematic phonics instruction’. But what is ‘sound’? They mention reviews of randomised controlled trials that have produced ‘sound evidence’, according to the department, such as the review by Professor Carole Torgerson. Yet when Carole was interviewed by Mark Henderson for The Geek Manifesto: Why Science Matters (p. 17), she cast doubt on the research. Although there was some promising evidence from the US, and a small-scale study in Clackmannanshire, Scotland, suggesting that the approach worked, the evidence was relatively weak: her review found only a dozen small trials, the biggest of which involved 120 children. She urged caution in making national policy.

They also mention 12 Ofsted reports on primary schools that showed strong performance related to phonics teaching. But there are around 16,000 state-funded primaries in England, so 12 is a tiny sample. And these are Ofsted inspections, not research studies.

My point here is that just listing evidence is not enough. We have to dig deeper and make judgements on the relevance, replicability and rigour of the evidence. Some organisations use formal hierarchies to help decide what counts as good evidence, such as Nesta’s, below.

 

[Image: Nesta’s levels of evidence]

 

The quality of qualitative research

The well-seasoned evaluation expert Stephen Morris at NatCen says the most persuasive evidence is ‘rigorous, social science’. He argues that we should not privilege quantitative over qualitative evidence, or vice versa; qualitative evidence can be just as scientifically credible as quant. What we should privilege is:

‘evidence that involves the collection and analysis of data, through the application of defined, systematic and replicable methodologies, producing transparent data, testing predefined evaluation questions, and with results made publicly available’ (Evaluating service transformation, NatCen blog, 19 November 2014).

As well as the hierarchies of evidence, there are other useful guides and toolkits to help make judgements on ‘sound evidence’. The international development network BOND has an Evidence Principles checklist, used by many NGOs to help commission, design or review evidence; it is particularly relevant to qualitative research. It’s quite radical in some ways, as it includes issues such as ‘voice and inclusion’ (have you properly engaged your research ‘subjects’?). The BOND principles also include appropriateness: does your method match your research question? The former Cabinet Office Strategy Unit also produced a guide to the quality of qualitative evaluation, still as relevant today as when it was published in 2003. And the HM Treasury Magenta Book, a vast tome offering guidance on evaluation, has a special supplement on checking the quality of qualitative evaluation.

Whatever the downsides of these hierarchies or principles – and there are certainly many critics – they do at least give you a structure against which to check evidence claims. Without one, we may fall into the worrying habit of just quoting our own pet piece of research or personal opinion. The evidence-check of education policy is an excellent move. We should applaud Graham Stuart MP, chair of the Education Select Committee, for taking this on, and we should see more such evidence-checks, following the model of the Science and Technology Committee.

Showing the evidence behind policies should be normal practice for all departments, perhaps through a ‘red book for evidence’. But we also need to make judgements on the credibility of the research that is revealed. We hope that Alliance members will send us their best available research to inform the select committee.