
Towards a Learning Culture of Evidence Use: Lessons Learned from the U.S. Evidence Act

The U.S. recently passed bipartisan legislation, the Foundations for Evidence-Based Policymaking Act of 2018 (the 'Evidence Act'), which embeds and systematises the way evidence is used in decision-making. The Act passed with overwhelming support, indicating the power that rigorous, credible research evidence has to traverse a polarised political context. The Act strengthens data privacy protections, improves secure access to data, and enhances the federal government's capacity for producing and using evidence. While this increased push for transparency and accountability via evidence is promising, there is a risk that it becomes another 'check-box' activity for government agencies already inundated with responsibilities.

To discuss the way forward, the Bipartisan Policy Center, in collaboration with the Urban Institute, hosted a panel discussion on implementing the policy. Panel discussants from a range of federal agencies discussed the importance of embedding evidence use within their institutional fabric, the inherent risks that come with it, and implementation actions that can be taken to mitigate those risks. The panel also highlighted a requisite ingredient for evaluation: data. Not only does the policy call for government data to be machine-readable, it also requires agencies to create and maintain data inventories and to publish details about those datasets. Researchers working with the government can now access these data with ease, making collaboration more efficient (see the sketch below). So, we think it's great, and there's a lot for the UK and further afield to learn from the implementation of a flagship evidence policy. We summarise the key lessons below:
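First, a quick illustration of what 'machine-readable' looks like in practice. Below is a minimal Python sketch, not part of the Act itself, showing how a researcher might read one of these published inventories. It assumes an agency exposes its catalogue as JSON at a /data.json endpoint, following the DCAT-US metadata convention used across data.gov; the URL shown is purely illustrative.

```python
import json
from urllib.request import urlopen

# Illustrative endpoint only: under the Act's open data provisions, agencies
# publish their data inventories as JSON catalogues, commonly at /data.json.
INVENTORY_URL = "https://www.example-agency.gov/data.json"

with urlopen(INVENTORY_URL) as response:
    catalogue = json.load(response)

# Each entry in "dataset" describes one dataset in the inventory:
# a title, a description, keywords, and links to downloadable distributions.
for dataset in catalogue.get("dataset", [])[:5]:
    print(dataset.get("title"))
    for distribution in dataset.get("distribution", []):
        print("  ->", distribution.get("downloadURL") or distribution.get("accessURL"))
```

Because agencies publish to a common schema, the same few lines work across departments; that consistency is the collaboration dividend the panel pointed to.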

First, get everyone on the same page

Panel members warned of the danger of the policy getting lost in confusion. For example, the Act encourages agencies to develop evidence plans – do people know what these are? Or are they likely to be confused with strategic plans? Successful implementation and system-wide change are dependent on clear messages – people need to be aware of the policy, what it entails, and how it affects strategy and workloads, now and in the future.

Assess team capacity

Team capacity is one of the key drivers of implementation. Realising such a policy requires both expertise and motivation. Critically, staff working in a government evaluation role need 1) knowledge of research methods and their applicability to different research questions in social and economic policy or thematic areas, and 2) the ability to synthesise and communicate evidence. For example, the Department of Education needs evaluation staff who are experienced in developing research questions to address issues in education, and who are also capable of explaining and presenting results to policymakers in a meaningful way. This calls for a pause for reflection: what is the current in-house capacity, and is there a need to recruit?

Get a team ready for evidence

When upskilling current employees, it's important to first assess their existing evidence literacy and capacity. To fill observed skills gaps, training programmes should be tailored to meet staff where they are on the evidence journey. Training also needs to target attitudes and mindsets about evidence: if people can see the benefits that evidence brings to their work, and feel ownership over the processes of producing and engaging with it, their motivation to carry the evidence agenda forward improves. New staff may also need to be recruited to build the capacity to comply with the policy. Yet finding employees capable of generating, compiling, assessing, and brokering evidence can be tricky. Agencies can tap into proven tools and techniques for hiring highly capable, highly motivated candidates, such as signalling the value-add of evidence generation and translation within the posted role.

Say no to silos

This Act brings together a range of actors under the evidence banner. Its success is thus reliant on relationships across sectors and agencies. When agencies are developing evidence plans and research questions, leadership, evaluation, and implementation personnel should collaborate to ensure research and evaluation align with strategic aims. Agencies also need strong connections outside their teams, such as with the software developers designing fit-for-purpose data collection tools, or with grantees such as service deliverers, to ensure the data collected are usable. But how is this done? Our blog piece Dissemination is Dead: so Do This Instead identifies mechanisms that can be built within and across organisations to encourage collaboration and sharing, such as roundtables or the Delphi technique.


The U.S. Evidence Act is a bright beacon of hope for evidence in a post-truth era, but ensuring it lives up to its promise is complex. We'll be keeping an eye on how implementation unfolds to see what we, policymakers, and government agencies can learn.