Genevieve Maitland Hudson: Is criticism of impact measurement fair?

29 Nov 2016 Voices

Genevieve Maitland Hudson assesses attempts to track the effectiveness of impact measurement, and finds that it isn’t quite living up to the hype. 

It’s beginning to seem as if impact measurement, after a relatively easy run, is coming under increased pressure to prove itself as a necessary part of the business of doing good. 

For a couple of years there have been murmurings about its distorting effects as part of outcomes-based performance management (OBPM) systems. A recent paper by Julia Morley of the LSE pointed the finger squarely at the social investment market as the background instigator of social impact reporting and, much like the critics of OBPM, suggested that impact measurement creates perverse incentives that distort frontline practice.

David Floyd has been more sceptical of the distorting power of social impact measurement, not because he believes in the beneficent intent of the theory, but because he doubts that its implementation in practice is effective enough to do much distorting.

Meanwhile, David Ainsworth pointed out with forthright vigour that there is markedly little evidence of the effectiveness of impact measurement.

Does it do what it says?

So, does it do what its advocates suggest? In the words of NPC, does it allow charities and funders to make the greatest difference possible, help to attract funding and motivate staff and volunteers?

NPC doesn’t provide much evidence of these effects. Its 2012 report suggests that charities and social enterprises were picking up on measurement because they were expected to, not to improve services, attract funding or motivate their members. 

Twenty-five per cent of its survey respondents did perceive benefits, saying measurement helped them improve services, but fewer than 10 per cent pointed to increased funding, and motivation isn't mentioned at all.

Is there any evidence that goes beyond surveys? Not much. But there are ways in which we could try to test whether impact measurement is generating results or affecting a charity's ability to access funding.

Is impact measurement affecting grantmaking?

A systematic trawl of GrantNav, which launched a couple of months ago, gives us useful data. 

If grantmakers were funding on the basis of reliable results, you would expect to see cycles of funding that leave sufficient time for interventions to be established, data to be collected and analysed, and results to be published. That would mean funding cycles like those of social research, with grants available for up to five years.

This isn’t how funding operates. Instead, grants are distributed within relatively short periods and for continually evolving programmes. 
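That grant-length claim can be checked against GrantNav's own data. Here is a rough sketch in Python using pandas; it assumes a CSV exported from GrantNav, saved under the illustrative name grantnav_export.csv, with a "Planned Dates:Duration (months)" column, a 360Giving standard field that not every funder publishes.

```python
# Rough check on grant lengths. Assumptions: a GrantNav CSV export
# named "grantnav_export.csv" (illustrative) with a
# "Planned Dates:Duration (months)" column, a 360Giving standard
# field that not every funder populates.
import pandas as pd

grants = pd.read_csv("grantnav_export.csv")

# Keep only grants with a usable published duration.
durations = pd.to_numeric(
    grants["Planned Dates:Duration (months)"], errors="coerce"
).dropna()

print(f"Grants with a published duration: {len(durations)}")
print(f"Median planned duration: {durations.median():.0f} months")

# Share of grants planned to run five years (60 months) or longer,
# i.e. long enough for a social-research-style funding cycle.
share_long = (durations >= 60).mean() * 100
print(f"Planned to run five years or more: {share_long:.1f}%")
```

If the published durations cluster well below 60 months, that bears out the observation above: funding operates on far shorter cycles than social research.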

It also appears to be common for funders to pick up each other’s grantees. 

The small cohort of funders who have uploaded data to GrantNav regularly fund the same organisations. This might, of course, be based on evidence of effectiveness, but given the time scales it looks more like good reputation and effective networking.

Those are conclusions based on a couple of hours of sifting. A more comprehensive review of GrantNav would find clearer patterns.
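That more comprehensive review could start with a simple count of how many distinct funders each recipient appears under. A minimal sketch, again assuming a GrantNav CSV export with the 360Giving-style "Recipient Org:Name" and "Funding Org:Name" columns (the file name is illustrative):

```python
# Minimal sketch of the co-funding check. Assumptions as before: a
# GrantNav CSV export with 360Giving-style "Recipient Org:Name" and
# "Funding Org:Name" columns; the file name is illustrative.
import pandas as pd

grants = pd.read_csv("grantnav_export.csv")

# For each recipient, count how many distinct funders have made awards.
funders_per_recipient = (
    grants.groupby("Recipient Org:Name")["Funding Org:Name"]
    .nunique()
    .sort_values(ascending=False)
)

# Recipients picked up by three or more of the funders in the dataset.
shared = funders_per_recipient[funders_per_recipient >= 3]
print(f"Recipients funded by three or more funders: {len(shared)}")
print(shared.head(10))
```

Organisation names vary between funders' records, so where a recipient identifier field is published it would be a more robust grouping key than the name.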

Not living up to the hype

Based on the available data, though, it looks like the critics are right: impact measurement isn't living up to its hype.

Those of us who have a stake in it have to recognise where it is failing, and do a better job of introducing measures and systems that support frontline work and help to influence good decision-making by providers and grantmakers. 

Part of that means taking our own medicine and checking our own effectiveness. 

Genevieve Maitland Hudson is head of evaluation and impact assessment at Power to Change

 
