Rapid Evidence Assessments: a bright idea or a false dawn?


Research synthesis will be at the heart of the Government’s new What Works centres. However, conducting systematic research reviews is often a lengthy, time-consuming process, and consequently difficult to juggle with policy development. In this guest blog, Dr James Thomas, Assistant Director for Health and Wellbeing at the Institute of Education, explores whether Rapid Evidence Assessments are a practical solution.

Given the gap between the speed at which policy development moves and the time it takes to conduct research (including reviews), being able to pull research together quickly, in time for policy deadlines, is clearly desirable. The idea of Rapid Evidence Assessments (REAs) is therefore seductive: the rigour of a systematic review, but cheaper and quicker to complete. A few short-cuts are taken, the promise goes, but the quality of the review is not compromised. On this view, the only difference between an REA and a systematic review is speed.

Does the reality of REAs meet their promise, though? If it were that easy to review the literature systematically, why do systematic reviews take so long? We have conducted numerous systematic reviews and rapid evidence assessments across many areas of social policy, and the answer, unfortunately, is that ‘it depends’. Sometimes it is possible to conduct what is essentially a systematic review in a short timescale; sometimes this is challenging to the point of impossibility.

A few years ago we conducted two rapid reviews for HM Treasury which had identical resources and were completed to the same timeframe, but which were quite different in nature. One was on the effectiveness of interventions to support people with common mental health problems into employment[1]; the other was on interventions to improve the coordination of service delivery for High Cost High Harm Household Units (HCHHHU)[2].

The first REA was fairly straightforward: it ‘mapped’ easily onto an existing academic field of study, there were pre-existing definitions (e.g. of ‘common mental health problem’), and there were studies which directly aimed to answer the same questions as the review. Because of the psychological and clinical roots of the existing research, the studies used fairly established designs (controlled trials), for which there are highly developed review methods. In the event, this REA was effectively a systematic review.

The second review, on High Cost High Harm Household Units, was far more challenging. In this case the concept of the ‘HCHHHU’ had developed in Whitehall and did not map precisely onto any existing field of research. In fact, if you search for the term on Google now, five years later, the only reference you’ll find is this rapid review. The lack of existing research on the subject meant that the team had to define, in conjunction with the policy team, what a HCHHHU might be, and then go out and find applicable literature. While there were relevant studies, they conceptualised the issue in different ways, so the team also needed to determine the extent to which each study actually did address a HCHHHU as defined by the review. Finally, the types of study design identified were quite heterogeneous and difficult to appraise, which made synthesis more challenging still. While this review was still a rigorous piece of work, more time would have changed it significantly, allowing the team to consult on and define the field in more depth, locate a greater range of studies, and consider how ‘neighbouring’ literatures might help to answer their highly complex question.

From these experiences, we have identified some of the conditions which can facilitate rapid reviewing (though see also the REA Toolkit[3]):

  • Having a focused question;
  • Having a question which ‘maps’ against an existing field of research; and
  • Including study designs for which there are established methods of appraisal and synthesis.

However, the above apply equally to systematic reviews.

Even reviews which are not branded as ‘rapid’ make trade-offs between sensitivity and precision in their searches; they do not guarantee to find every single relevant paper; they do not contact every author or insist on obtaining the primary data of every study they contain to ensure nothing is missed or misinterpreted. All reviews are compromises between what is desirable and what is possible. Rather than conceptualising REAs as a new method, it might be more useful to see them as one of many options for understanding what research is telling us in a given area, recognising that the choices reviewers make when undertaking them may give us greater, or lesser, confidence in their conclusions.[4]
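To make the sensitivity/precision trade-off concrete, here is a minimal sketch in Python using hypothetical figures (the function and the numbers are illustrative assumptions, not data from any review discussed here). Sensitivity is the share of all relevant studies that a search retrieves; precision is the share of retrieved records that turn out to be relevant.

    # Illustrative only: hypothetical figures for the sensitivity/precision
    # trade-off in literature searching (not data from the reviews above).
    def search_performance(retrieved_relevant, total_retrieved, total_relevant):
        """Return (sensitivity, precision) for a search strategy."""
        sensitivity = retrieved_relevant / total_relevant   # share of all relevant studies found
        precision = retrieved_relevant / total_retrieved    # share of retrieved records that are relevant
        return sensitivity, precision

    # A broad search finds more of the relevant literature but buries it in noise;
    # a narrower, quicker-to-screen search misses more.
    broad = search_performance(retrieved_relevant=95, total_retrieved=10000, total_relevant=100)
    narrow = search_performance(retrieved_relevant=60, total_retrieved=500, total_relevant=100)
    print("broad search:  sensitivity %.0f%%, precision %.1f%%" % (broad[0] * 100, broad[1] * 100))
    print("narrow search: sensitivity %.0f%%, precision %.1f%%" % (narrow[0] * 100, narrow[1] * 100))

Neither strategy is ‘wrong’: the broad search buys a modest gain in coverage at the cost of far more screening time, which is exactly the kind of compromise every review, rapid or otherwise, has to make.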


Dr James Thomas
James is Director of the EPPI-Centre’s Reviews Facility for the Department of Health, England, which undertakes systematic reviews across a range of policy areas to support the department. He specialises in developing methods for research synthesis, in particular for qualitative and mixed-methods reviews, and in using emerging information technologies such as text mining in research. James leads a module on synthesis and critical appraisal on the EPPI-Centre’s MSc in Research for Public Policy and Practice, and leads development of the Centre’s in-house reviewing software, EPPI-Reviewer. James is also Assistant Director of the Social Science Research Unit, and Assistant Director for Health and Wellbeing, at the Institute of Education, University of London.

(The views are the author’s own and do not necessarily represent those of the Alliance for Useful Evidence)


[1] http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=2315

[2] http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=2313

[4] Thomas J, Newman M, Oliver S (2013) Rapid evidence assessments of research to inform social policy: taking stock and moving forward. Evidence and Policy, 9(1): 5-27