Systematic reviews provide a way of finding out everything we know and don’t know. They have proved mainstream in medical and health research, but common myths are preventing greater uptake of the process. Jonathan Breckon and David Gough debunk these myths in light of the new Alliance publication ‘Learning from Research’.
A natural starting point for any Whitehall mandarin embarking on a new policy is to ask: what has worked before? What has failed? Traditionally this might mean a literature review, a bit of digital digging with Google, perhaps spliced with a chat with the currently favoured experts.
But a comprehensive trawl of all the published and hard-to-reach research is a much better approach, we argue in a new short guide to systematic reviews.
It is no longer acceptable to ‘just mooch through the research literature’, according to Ben Goldacre in his book Bad Pharma, ‘consciously or unconsciously picking out papers here and there that support [our] pre-existing beliefs’.
Systematic reviews provide a way of finding everything we know and don’t know. They are mainstream in medicine and health through organisations like the Cochrane Collaboration, and there is growing interest in international development, social policy and science via organisations like the Campbell Collaboration.
But many myths hinder the wider take-up of this approach:
Myth 1: Systematic reviews only look at randomised controlled trials. No, reviews can answer all types of research questions. Causal questions are not the only type, or even necessarily the most important. However, if the review question concerns the extent of a programme’s impact, then you do need to take seriously possible counterfactuals (other explanations for causal attribution), and RCTs are particularly good at dealing with this type of problem.
But if you are asking a different question – for example, how something works or how an issue can be conceptualised – then you need a different type of systematic review, with a different method of review and different types of study reviewed.
Myth 2. Systematic reviews are only about statistical analysis. Not true. Some reviews do aggregate numbers statistically, but other reviews configure and synthesise themes and concepts. These are all ‘systematic’ reviews because, unlike many traditional literature reviews, they have explicit and thus accountable methods.
Myth 3. They are too expensive. We can’t avoid the fact that reviews need to be properly resourced. If you are serious about finding out what we do and don’t know, then it is a big task. But if you are making big government spending decisions, isn’t it a good investment to make sure you haven’t missed something? For primary research we expect rigorous and transparent methods and accept that this requires proper funding. We need to expect (and demand) the same of reviews of the literature.
Myth 4. Reviews take too long. It is true that the process can be time consuming. But once completed, a review can be updated so that it remains relevant. The most enduring and useful systematic reviews, notably those undertaken by the Cochrane and Campbell Collaborations, are regularly updated to incorporate new evidence.
If you are in a policy rush, one answer is to find reviews that have already been completed, such as those in the free Campbell Collaboration library. Another option is a ‘rapid evidence assessment’ to meet tight policy deadlines, although the need for speed may be ‘challenging to the point of impossibility’ according to two experienced reviewers.
Myth 5. Reviews ignore ‘grey literature’ such as research by think-tanks and other material outside peer-reviewed journals. Wrong. Although this material may be harder to find on databases, it is not excluded. Even experienced reviewers have fallen into the trap of thinking that grey literature is not part of reviews.
Myth 6. Review methods are complex, obscure and hidden, using impenetrable statistical tools. No, one of the chief benefits of reviews is that they are transparent about how the data were harvested and used. Being open about your methods also helps to pre-empt future criticism.
The transparent method also means that reviews can be repeated and updated. This is important. If research is not replicable, we may doubt the validity of the original research.
Myth 7. It’s not really proper research, just a way of communicating what others have done. No, systematic reviews are a form of secondary research. They use rigorous methods to bring together the results of individual primary studies. A review is the first thing to do before undertaking any new primary research, so that you are sure you are not simply duplicating prior work. It also means that the new research is informed by the lessons of previous studies.
We do not deny that there are practical challenges in doing reviews in the real world. For instance, however thoroughly you search databases, you may still need to ‘hand-search’ institutional websites for relevant material.
But their purpose is hard to argue against: to gather all the evidence in a way that is as comprehensive and as transparent as you possibly can. We think it is a shame that there is so much resistance to them amongst researchers. We have tried to counter some of the most common misunderstandings, and we hope this encourages you to use these reviews to make better-informed decisions.
The views are the author’s own and do not necessarily represent those of the Alliance for Useful Evidence.