Day 3: debunking the myths about randomised control trials (RCTs)

There is much contention around the use of Randomised Control Trials (RCTs), with examples of their use being blocked or vetoed. Yet, used correctly, they can be one of the most powerful tools for testing whether a service you receive is effective, or indeed harmful.

Yesterday we discussed the need for evidence not to trample on innovation, yet we also need to ensure that, where appropriate, “top tier” evaluation methodologies are used. Randomised Control Trials test the efficacy of an intervention by randomly assigning members of the target or user population either to receive the intervention or to act as a control. Although no one would argue that randomised trials are the only form of evidence, or that they are always appropriate or able to answer every question, it is clear that even when they should and could be undertaken, they face a number of barriers.
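For readers unfamiliar with the mechanics, the core of an RCT is simply chance-based allocation. The short sketch below is a hypothetical illustration in Python (not drawn from any of the studies mentioned in this post, and with invented participant names and group sizes) of how a pool of people might be split at random into a treatment group and a control group.

```python
import random

# Hypothetical illustration of random assignment, the core mechanic of an RCT.
# Participant identifiers and group sizes are invented for the example.
participants = [f"participant_{i}" for i in range(1, 101)]

random.seed(42)              # fix the seed so the allocation is reproducible
random.shuffle(participants)

midpoint = len(participants) // 2
treatment_group = participants[:midpoint]  # receives the new intervention
control_group = participants[midpoint:]    # continues with current provision

print(f"Treatment: {len(treatment_group)}, control: {len(control_group)}")
```

Because chance alone decides who falls into each group, any systematic difference in outcomes between the two groups can be attributed to the intervention rather than to how participants were selected.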

The idea of subjecting people to experiments can have extremely negative connotations. Take a homelessness project in the US, for instance. When it was announced that the intervention was going to be evaluated with random assignment, there was huge controversy, with people saying it was unethical to deny the control group access to the programme, in effect treating the vulnerable as “lab rats”. Of course, there are strong counter-arguments. First, the ‘unethical’ stance presupposes that the intervention is beneficial, when testing that assumption is the whole point of the experiment. Second, often we are testing a new intervention: the ‘control group’ may simply continue with current provision. Third, many programmes are not sufficiently funded to treat everyone: access is already restricted to a select group.

Another reason why random assignment is often shunned is that it is perceived to be a difficult, time-consuming and expensive methodology. Yes, it can be a difficult technique to administer in some circumstances, but if a large-scale randomised field experiment can be undertaken to assess impacts on counterinsurgency in war-torn Afghanistan, then there should be plenty of scope in more domestic circumstances. RCTs can also be done reasonably cheaply: one reading programme was found to increase achievement scores by 35-40%, with the main expenditure being the cost of books. Then there are studies where existing data can be drawn upon to greatly reduce costs, such as when researchers analysed the guidance given to potential US college applicants using a sample of over 22,000 families, or a study which drew upon college enrolment data to analyse the impact of coaching as an eight-site randomised trial for $15,000. With the UK’s move to open data, this kind of analysis could be undertaken here.

Another misconception is that RCTs only involve quantitative analysis, that practitioner and service-user perspectives are lost, and that RCTs are somehow an overtly “centralist” approach. Again, this need not be the case. It has been argued before that RCTs are not the opposite of qualitative research, with further calls to develop rigorous mixed methods for effective evaluation. At a conference earlier in the year discussing evaluations in Europe, it was clear that the findings generated from the qualitative element of an RCT were just as useful as the quantitative data, if not more so.

As well as overturning these ‘myths’ to enable more and better RCTs to be undertaken, we need to build the capacity of the research community to carry out such analysis. As Ben Goldacre has noted, RCTs are an underused tool in social policy evaluation. For instance, an RCT into the effectiveness of different forms of outreach from Sure Start centres was the first time local authorities in Greater Manchester had commissioned such analysis of social policy.

Although a well-conducted RCT should rightly be considered a gold standard in demonstrating the effectiveness of an intervention, we don’t believe that RCTs are the answer to every research question or that they are the only methodology which should be used. Indeed, we believe alternative, rigorous approaches need to be further explored to stimulate innovative research techniques. Yet the rejection of random assignment techniques because of misunderstandings is a huge missed opportunity for improving the quality of our public services.