Research, evidence and data do not exist in a vacuum. To influence decision making, sources of information have to compete with a myriad of other factors, from political pressure and lobbying to public opinion, ideology and personal values. If the research findings clash with the dominant view, how can these factors be managed to embed evidence into decision making?
It can be difficult to challenge received wisdom, especially when it seems at first glance rather harmless. Take Patz, a doctor working in post-war America, for instance. Patz observed a link between the use of pure oxygen to treat premature babies and sight loss, and proposed to investigate it with a clinical trial. However, the National Institutes of Health refused to fund it, fearing the study would “kill a lot of babies by anoxia [lack of oxygen] to test a wild idea”. This isn’t surprising; most people would assume that giving oxygen to babies is a natural thing to do. Undeterred, Patz borrowed money from his brother and undertook what is believed to be the first randomised controlled trial in ophthalmology. The findings overturned the common-sense thinking of the time and reduced childhood blindness in the USA by 60%.
Letting the evidence “speak for itself” is not easy, especially in instances where the “politics of electoral anxiety” conflict with the potential controversy arising from the research findings. The walkout by scientists at the Advisory Council on the Misuse of Drugs (ACMD) is an interesting example of scientific research and politics colliding. Nutt, a pharmacologist at Bristol University and Imperial College London, was sacked after he criticised the Government’s decision to upgrade the legal classification of cannabis, arguing that research indicates it is less harmful than cigarettes and alcohol. The conflict between research and political ideology was evident in Nutt’s comments that politicians were “distorting” and “devaluing” the research evidence in the debate over illicit drugs, whilst the then Home Secretary, Alan Johnson, said Nutt had “crossed a line” into politics, amounting to “lobbying against government policy”.
Then there are other instances when findings are simply ignored. The US-based Scared Straight programme is a good example. Scared Straight involves young people visiting prisons and talking to inmates, with the experience supposed to prompt them to think twice about offending. This may sound sensible, but rigorous evaluation shows that it is not only ineffective but actually damaging to the young people involved. Despite this evidence, Scared Straight remains in use worldwide.
Alongside policy makers and governments, it can also be hard to sell evidence to the general public when they are “emotionally invested” in a particular approach. Take prisons, for instance. Many fundamentally see prisons as a necessary place of punishment, and it could be political suicide for a government to challenge the status quo, yet research shows a lack of understanding of the justice system in much of the population. Then there are the debates about smaller class sizes, seemingly a common-sense way of increasing attainment, yet research into their impact is inconclusive, and arguably the money spent on reducing class sizes could be more usefully spent improving pupils’ learning experiences in other ways.
As we’ve said before, generating evidence is only one piece of the puzzle. Too often findings are toned down or, worse still, ignored entirely. How do we overcome and manage these tensions? What needs to change for the interface between research and decision making to be less influenced by values, opinions and politics? Indeed, can we ever achieve this?