All charities should have impact audits

15 Feb 2016 Voices

David Ainsworth says agreed impact standards are the best solution for evaluating charities.

In the absence of standardised impact frameworks, the most common metric for assessing charities’ progress tends to be annual income. It is simple and can be applied to every charity. But it has its flaws.

Take the largest charity in the UK, the Lloyd’s Register Foundation. Last year it had an income of over £1bn, making it the only charity ever to achieve this. But the charity – a grant-giver in the fields of technology and safety – is the owner of Lloyd’s Register, a risk measurement business, and its funding is merely a proportion of the company’s profit. As a result, it gave grants of just £18m.

Meanwhile the Wellcome Trust had an income of £337m – a third as much – but capital appreciation of its assets meant it could spend £728m on grants – 40 times as much as the Lloyd’s Register Foundation gave.

So if we wanted to develop a more useful metric, what factors would we need to consider?

Funds available for charitable application

One place to look is at the funds the charity has once all money has been spent on income generation – what some charities refer to as funds available for charitable application. Indeed, this calculation formed the basis of Gina Miller’s controversial report late last year and is writ large on the Charity Commission’s new beta register of charities.

Yet this measure can skew how different organisations are perceived, because it depends heavily on how they raise their money. In particular, a charity which makes all its money from trading would typically have a margin of around 25 pence in every pound, while a charity which makes all its money from fundraising would keep three times as much.

A good comparison is between the British Heart Foundation (BHF) and Barnardo’s. They are almost identical in terms of income, and both get their money mostly from trading and voluntary sources. Yet BHF – funded mostly by shops – has an income of £288m but funds available for charitable application of about £114m, while Barnardo’s has an income of £285m, but funds available of £213m – almost twice as much.

The situation becomes more complex when it comes to grant funding. If you win a grant which requires you to deliver a service, and spend all the money you’ve been given delivering that service, what is your surplus? Often a grant is a loss-making endeavour, once support costs and application costs are taken into account.

The same questions are even more troublesome for contracts. The value of a contract lies less in any surplus it generates than in the benefit delivered by the service itself.

Overall, surplus is certainly a better measure of a charity’s power to do good than income. However, it presents problems when used as a comparator. Social enterprises and public service delivery charities are very different beasts to the rest of the sector, and need to be quantified in completely different ways.

Income and expenditure

Why measure funds available, you might ask, and not spending? After all, that measures the good the charity actually did.

The trouble is, a charity can spend whatever it likes on beneficiaries, but if its spending consistently outstrips its income, it quickly loses the power to do good. As Alan Yentob can attest.

Impact audits

The above suggests that any measure simple enough for public consumption will remain quite rough-and-ready, leaving us with the status quo of income and expenditure acting as the headline statistics.

Yet this is odd. After all, charities exist to generate good, not money.

It’s certainly important to assess whether you have the money you need to do good. That is why we have auditors. But the real question is whether that money is spent effectively. And that is a question an auditor cannot answer.

There has been a lot of resistance to the idea that we should measure whether a charity is effective. Perhaps this is because charities’ cash is already too scarce for the job in hand. Perhaps it’s because donors hate charities spending on anything except “the cause”. Perhaps it’s because measuring usefulness has become wrapped up in all sorts of technical jargon – theory of change, deadweight loss and, of course, impact. Or perhaps it’s just because it’s sometimes devilishly hard to tell whether a charity’s intervention actually is effective.

Lack of evaluation

But there are reasons why such things are needed. The first is that it’s clear that many charities aren’t evaluating their work effectively. Kids Company can go 19 years without useful evaluation, and no one realises. A charity like the Cup Trust – an out-and-out tax avoidance scheme – can go equally unremarked.

Second, if the sector had its own system of impact audits and kitemarks, it would help those donors who are minded to make donations based on data, rather than instinct and emotion. Even if such donors are few and far between, that does not mean they should be discouraged.

Third, traditional metrics such as annual income distort the importance we attach to charities. And fourth, if the sector has its own approved quality controls and assessments of effectiveness, this will act as an effective counterbalance to simplistic reports such as the one by Gina Miller.

Charities will argue that metrics have both a significant cost and a potentially distortive effect on charitable activities, and both arguments carry weight. However, the current situation is not acceptable. Charities exist to provide public benefit, but their work is not effectively quantified by their annual accounts or the Charity Commission. Or by anyone else.

The truth is that charities’ public benefit is only measured if they themselves choose to measure it. It is time to audit their benefit in the same way we check their finances.

Renewed focus

The fall of Kids Company has put this back on the table. For years Camila Batmanghelidjh produced all sorts of documents purporting to demonstrate the good she was doing, and repeatedly trotted out the line that her charity was regularly audited by the government. It was one of her key claims that there was effective oversight of her work.

Late last year, though, in evidence sessions before MPs, the limitations of that oversight were brutally laid bare. The auditors had just checked whether the numbers added up, not whether the money was well spent. It was miles beyond their ken to identify whether the charity was providing useful services.

Meanwhile, the various other reports were not worth the paper they were written on. Despite Batmanghelidjh’s many and varied assertions, no one was really checking whether the charity delivered any value at all.

For many charities, of course, exactly the opposite problem applies. Many charities have a dozen funders, and spend much of their time writing reports for them. All of them want different figures, at different times, presented in different ways. Those figures are often not representative of what the charity wants to do, and many of them simply count beans: children in classes, number of leaflets delivered, total value of money disbursed. They do not assess the added value the charity delivers. And charities’ own annual reports are often equally flawed.

‘Independent oversight’

Nevertheless, the need to root out bad practice remains. Shortly after the Kids Company evidence sessions an idea was put forward by Genevieve Maitland Hudson – an expert in measuring effectiveness, a former Kids Company employee, and one of the first people to smell a rat.

It was no use relying on auditors, she said, to identify whether charities were doing what they were supposed to be doing. Nor could we rely on reports commissioned by the charity itself. There must be some kind of independent oversight.

Maitland Hudson did not prescribe exactly what was required, but this idea – a standard impact audit for all large charities, basically – is one which seems to have legs. As well as getting in the auditors to check that the finance department does what it’s supposed to, why not have a similar process where each year an independent assessor, with some knowledge of your charitable activities, examines whether your charity is delivering against its stated objects?

The objections are obvious. It’s expensive. It’s too complicated. Charities are all very different. Do we really want more well-paid consultants running around hoovering up charity cash?

Well, my view is yes. Anyone who’s committed to improvement should welcome an assessment of their effectiveness. Anyone who isn’t probably needs one. And while charities are all very different, I think too much is made of that. Experts in a given field should be able to spot effective interventions.

This process has the potential to save the sector money, as well. Instead of producing multiple duplicate reports for the government, the EU, the NHS, grant-givers and private philanthropists, a charity could simply bang down one annual, audited assessment of its impact and say: “There you go. Here’s what we achieved. Stop wasting our time.”

David Ainsworth is editor of Civil Society News.
