The devil's in the detail for impact audits

07 Mar 2016 Voices

Rosie McLeod says it's important that impact audits are tailored to the size and type of charity.

An external impact audit for charities? Something which could stretch across the voluntary sector, in all its shapes and sizes, and allow meaningful comparisons of what charities are achieving? It’s a great idea, an answer to so many questions…which would be precisely its problem, unless we are clear what it can and can’t do.

It’s easy to see why there is more demand than ever for this sort of idea. David Ainsworth makes a compelling case for why a single impact audit process could have saved charities and donors from recent chaos. The Kids Company debacle has shown how easily big ideas and very canny PR can translate into sackfuls of public money, without anyone official pausing to ask whether the grants and contracts were actually delivering what was promised.

For some charities, an impact audit would be inappropriate. Imagine a small start-up charity, just launched to provide groundbreaking new training to secondary teachers: its ultimate impact is to improve how children are educated and the results those children gain. But that is a long way in the future. First the charity needs to evaluate how its training is going, talk to the teachers about its potential and usefulness, and tweak its basic product according to feedback. This charity would probably fare poorly in an impact audit, which would try to measure something it isn't trying to achieve yet. Such an audit could kill charities which are barely getting started.

What are we auditing?

David mentions the traditional dependence on financial measures in judging charities, and there is a reason this method, with all its flaws, is popular: it is easy to look at two numbers side-by-side. But you can't assess and compare social outcomes that way. There is no single unit that captures social impact across a wide-ranging sector.

Yet you can audit a process, which is what accountants do. You can look at whether the charity is doing the right thing on its evaluation practice. This could happen quite broadly across the sector by, for example, examining the quality of research conducted, using a common method for assessing how well research methodology was suited to the question it set out to answer. This would be fine-tuned according to other factors: how new is the planned intervention, what type of change is it trying to produce, and crucially the amount of evidence that is already available on the approach a charity is taking. You can then know if you can trust the figures, and interpret and compare them where possible.

Should that be mandatory? We would need to ensure that the requirements are proportionate, taking the size and stage of development of the charity into account. You could reasonably expect a charity of Kids Company's size to face much more stringent requirements than a small start-up, given the massive funds involved. A start-up, however, needs space to learn without being weighed down by reporting requirements.

Audits could potentially extend beyond process if outcomes are assessed issue by issue (or to put this in charity jargon, we would need “shared measurement frameworks at a sub-sector level”). With enough nuance, that could form part of a regulatory framework, if designed to encourage charities to use common outcome measures where possible: a comply-or-explain requirement, for example.

NPC is active in pushing for issue-by-issue measurement, as well as more scrutiny of process, both in the charity world and social investment. For example, the recent Impact Assurance Framework we developed for social investors in the US looked at the best way to measure how strong impact measurement processes were at charities, and extrapolate likely impact from that. It’s a starting point for an audit system.

The domestic violence charity SafeLives, formerly known as CAADA, offers a good example of shared measurement. Its Insights programme was developed to share data across the sector, providing tools for domestic abuse organisations to collect relevant data. SafeLives collates and analyses the data, which is fed back to each organisation. This allows organisations to identify who uses their services, see where there are gaps, and measure the impact of their work on the lives of the people they support. The information is then used to tailor the support available.

More than forty other organisations now use the same proven tool to collect and analyse impact data. It exists so that organisations tackling the same issues can compare programmes. It serves the interests of beneficiaries and can mitigate the need to produce data in myriad formats for different funders.

This is making a difference to work on domestic violence, which is a relatively small slice of the voluntary sector. Apply the same approach to, say, all the activities of the UK’s 60,000 children’s charities, and you can see how widely best practice can spread.

This sort of thing is precisely why NPC was established: so that charities and donors better understood how they were performing, and how whole chunks of the sector could improve. But as we also know, good ideas are often the most complex to turn into a reality.

Rosie McLeod is a senior consultant at NPC 

This article first appeared in Charity Finance
