Social science and replication


Hesitation to reproduce and replicate analyses inhibits clarity and can obscure research results. Alex Sutherland and Nicole Janz at the University of Cambridge argue for a cultural shift that demands more replication of results, which would bring greater legitimacy and relevance to research for policymakers and academics alike. 

Social science is broken. The far-too-prevalent reporting of ‘statistically significant’ results, the creep of results ‘bordering on significance’, a focus on statistical significance rather than effect size, undeclared conflicts of interest, weak or non-existent peer-review processes, and an unwillingness to retract articles all create headaches for anyone trying to make sense of research output in social science (and science in general).

But all is not lost. There are concerted efforts to push for greater reproducibility and replication of research findings. In fact, replication may be the tool that helps empirical social science avoid the pernicious problems set out above.

How? Well, if there’s a “knock-out-once-in-a-lifetime-would-you-believe-it” result from a single study that everyone pays attention to (especially if it influences policy), then at the very least researchers should seek to: (a) reproduce the result (significance and effect size) using the same dataset; and/or (b) replicate the analysis using a different dataset (i.e. verify the result); and/or (c) replicate and improve on the original using updated data or methods.

Policy makers, on the other hand, who may not be in a position to replicate, should be asking ‘what other evidence is there for this finding?’ or commissioning a replication, rather than relying on one-off results from a single study, no matter how high the quality of that piece of research.

Why does all this matter? The Reinhart and Rogoff replication scandal earlier this year, in which a student found that two Harvard economists had made mistakes and omitted data (since corrected but still disputed), showed that when datasets and analytical steps are held back, errors may only be discovered years later – if at all. (The need for replication doesn’t stop with the economy. How about the effects of nuclear proliferation? Or thirteen major effects in psychology?)

Despite the importance of ‘getting it right’, particularly for policy-related research, replication is uncommon. The problem is cultural: Gherghina and Katsanidou found that only 18 of 120 political science journals have a replication policy stating that authors should upload the data behind their papers.

But journals are not the only driver. In an age where ‘publish or perish’ looms larger in academia than ever, the fear may be that replication papers will not see the light of day because they are not ‘new’, or that such studies will remain undone because they are not ‘glamorous’.

How do we change this? By making replication something that is accepted as routine, whether done by other academics or by analysts within government organisations if data are sensitive. A key lever for such change is to include replication courses as core teaching for (under)graduate students at universities. Replication is an unparalleled tool for teaching students about the trials and tribulations of real-world research – including all the shortcuts an author might have taken to get ‘that result’. If enough students are exposed to this idea, some will go on to become academics, editors and policy-makers themselves, and will then be in a position to influence those around them and the wider research community.

The consequences of not reproducing research findings are not just poorer quality control and less transparency, which may seem of concern mainly to academics; they can (and do) affect our everyday lives – from ‘being in it together’ in terms of austerity (a lot of credence was given to the 2010 Reinhart and Rogoff paper when it came to how countries handled their recessions) to how we might raise our children.


Dr Alex Sutherland
Alex worked for several years as a researcher at the Centre for Criminology at the University of Oxford before completing his D.Phil. in Sociology at Oxford. From 2010 until November 2013, he coordinated the Social Sciences Research Methods Centre (SSRMC) at the University of Cambridge and taught numerous courses on quantitative methods and research design. Alex currently holds a joint appointment at RAND Europe and the University of Cambridge. At RAND, he is a Research Leader in Communities, Safety & Justice. At the Institute of Criminology, he is a Research Associate working on the London Education and Inclusion Project (LEIP), a multi-site trial of a new intervention aimed at children at risk of school exclusion.

Nicole Janz
Nicole teaches statistical methods at the Social Sciences Research Methods Centre at the University of Cambridge. She has developed the Cambridge Replication Course, in which students replicate a published paper and write up the results in a publishable format. Nicole is actively involved in the reproducibility debate in political science and publishes a blog on the topic.

(The views are the authors’ own and do not necessarily represent those of the Alliance for Useful Evidence)