
Charities are best placed to assess their own effectiveness

07 Jan 2016 Voices

It is charities themselves who can best assess how effective they are, not the government or anyone else, says Genevieve Maitland Hudson.

The journalist Harriet Sergeant recently wrote a piece for the Spectator on the ‘5 questions’ that would tell us whether charities are doing a good job. It will have made frustrating reading for many.

Here are three reasons why the article doesn’t provide an effective model for assessing the effectiveness of charities:

  1. Three of the ‘questions’ are already common practice across social programmes
  2. One won’t work
  3. The last fails to grasp the ways in which social programmes are fundamentally different to customer services

The first question “who uses the charity and in what numbers?” is the basis of all programme monitoring. The problem does not lie with asking this question, it lies in answering it.

I have written about this before. The problem with so much monitoring data lies in the use of aggregate figures that encourage exaggeration. There are better measures, such as caseload averages, that would steer us away from misleading cumulative numbers of the kind used by Kids Company.

The third question in the article, “how effective is the charity over the long term?” is interpreted as an outcomes question. Outcomes go some way to answering questions about long term effectiveness, but they aren’t sufficient on their own. I have written about this before too, once again in relation to Kids Company. Outcomes measurement is also rife with difficulties in practice, and has too often been used by commissioners to ensure compliance rather than improve service delivery.

The fourth question “is the charity actually required?” is presented as a question about duplication of effort. It is already common practice across social programme commissioning to ask whether there is duplication and whether organisations could work in partnership to better support their beneficiary groups. There may be a need for more of this mapping, or for it to be done differently or better, but to suggest it doesn’t exist already is simply incorrect.

These are the three questions that are already common practice. So much for them.

The second question in the article points towards the need to collect first hand feedback from service users. Feedback is crucial, certainly, and should directly influence the continual improvement of services within social programmes. There is still too little of it, although it is an area of growth and development particularly within the Feedback Labs collective.

It should not, however, be based on the casual interview of two service users as the article suggests. It should be a systematic, embedded and transparent part of ordinary service delivery. The casual interview of two detractors provides no more reliable evidence of effectiveness than the casual interview of two promoters.

The last question in the article is the hardest to follow. It suggests that charities should be “person-centred” but does not clearly define what this means in practice. There is a fundamental point to be made here about how social programmes build relationships with their service users.

Once again, I’ve touched on this elsewhere. I disagree with Harriet Sergeant when she suggests that the solution lies in “government” understanding the “problem” in advance of commissioning an “imaginative solution”.

Complex social problems don’t work like this. No problem can be simply understood and there is rarely if ever one appropriate “solution”, however imaginative. Nor can relationships between service users and the programmes they access be reduced to customer satisfaction ratings, not least because so much behaviour change is uncomfortable, difficult and takes time.

Most frustrating of all, for those of us who work with social programmes, is the suggestion that questions about effectiveness should come from without. They should not. They are best asked, and answered, from within social programmes, where systematic and open monitoring and evaluation can improve services and better meet the needs of those we want to support.

Genevieve Maitland Hudson is a director of impact measurement consultancy OSCA
