Like many of my colleagues at NPC, I like to dream of a world where government policy is “evidence-led”, rather than “evidence-informed”. I am more geek than campaigner. Thus, I love to think back to 1 April 2012, when I had the delightful experience of successfully persuading the government to implement an idea of NPC’s.
As the official across the table was telling us what a good idea our proposal was and how keen he was to adopt it, a wild thought darted through my mind: was this an elaborate April fool's day joke? We had braced ourselves for the death knell reaction, namely, "this is an interesting idea that we shall consider". After all, we had only submitted our idea to the Ministry of Justice (MoJ) a few weeks earlier; they had had little warning that our proposal was coming; they had not been involved in the research that lay behind it; and the idea would cost the government money in the age of austerity. But he wasn't joking. The official was Iain Bell, then chief statistician for the MoJ, and our proposal was for the Justice Data Lab, which the MoJ implemented under Iain's leadership about a year later.
In NPC’s recent State of the Sector 2020 research, two thirds of the 300 charity leaders we spoke to said they work with government to influence policy. Influencing work is a growing activity, with one third of charities planning to do more in the next three years and only two percent expecting to do less. But how can we do so?
Policy making seems like a sealed box of sausage-making
From the outside, and I suspect from the inside too, policy making seems like a sealed box of sausage-making. But when I reflect on how we won the MoJ over, I think it is because four stars aligned: we had a great idea backed up by research and evidence; we were talking to the right person; we were there at the right time; and we helped solve a problem for the government. Each of these was necessary, some were lucky, but none were sufficient on their own.
We had done our research on the problems charities face when trying to assess their impact, so we knew the benefits of what we were proposing. But what counts as good research and evidence from the perspective of government is not always clear.
The less geeky will be forgiven for not noticing that HM Treasury has recently updated the Magenta Book, the government’s guide to good evaluation practices. It is the sister of the better-known Green Book, the guide to cost-benefit analysis. So, if you are using evidence to tell the government about your good idea or to back up claims about the impact of your programme, it would be a good idea to review it, or at a minimum make sure your impact and evaluation team are familiar with it.
Government's evaluation manual
The newly updated Magenta Book is an improvement on the 2011 version, which was a bit of a hodge-podge of evaluation methodologies. This update is more structured and easier to navigate. It is a manual for government officials on how to evaluate, whilst also providing some general guidance on evaluation methods. This combination makes it a bit light on methods in general (sampling is explained in less than a page for example) and some things from the 2011 version have been dropped, such as the chapter on Action Research. Personally, I was disappointed that “bias” is never defined despite being mentioned seventeen times.
But what does the new Magenta book tell us about how the government approaches evidence and evaluation?
The book reveals how evaluation is conceptualised among government evaluators. For example, page six notes "There are three main types of evaluation: process, impact and value-for-money evaluations." This is a classic distinction, if a bit too narrow given recent developments in the field. But later chapters do mention other types of evaluation, such as systematic reviews, participatory evaluation, and developmental evaluation.
We learn what civil servants are told to consider when planning an evaluation, and how evaluations are managed in government (covered in Table 1.1 on page twenty and in chapter five). I think it would have been better to separate out management of evaluations from methodologies into two parts, as the Magenta Book manages to dedicate only ten pages to methodologies. If you are looking for deeper guidance on evaluation methodologies, I suggest the Better Evaluation website as a starting point.
Finally, we can see the four different approaches to assessing impact that government researchers see as valid. This is based on a paper by Elliot Stern, Nicoletta Stame, John Mayne, Kim Forss, Rick Davies, and Barbara Befani, and was noted in NPC’s Four Pillars approach (published in 2014, since updated by Understanding Impact), so we are pleased to see the Magenta Book has caught up! Knowing this typology will help you talk with government researchers and may give you some ideas.
You need to see how your plan looks from the other side
Key to influencing is to argue from the perspective of the person you are trying to persuade. We’re all familiar with charities who are so passionate about their cause that they’ve convinced themselves that everyone will line up to follow their recommendations. The reality is not like that: you need to see how your plan looks from the other side, so being familiar with how the government defines evidence and success is a good place to start.
David Pritchard is NPC’s Associate Director in the USA and was previously NPC’s Head of Measurement and Evaluation. David is an economist with over 25 years of experience in the public and charity sectors on both sides of the pond.