If the government is truly committed to standardising its relationship with the charity sector it must put its money where its mouth is, says Genevieve Maitland Hudson.
The Public Administration and Constitutional Affairs Committee report into Kids Company’s relationship with Whitehall sets ambitious targets for shared measurement. It recommends a framework for the social sector that uses standardised tools and introduces benchmarked outcomes across comparable organisations.
I’m all for it. But then I would be. I gave evidence to the committee drawing attention to the absence of comparable indicators in the bidding and monitoring processes surrounding the grants to Kids Company.
The government is likely to take the committee’s recommendations seriously.
All the recommendations of the Public Accounts Committee inquiry into Kids Company have been accepted. A fundamental review of central government funding of charities has already been announced, as has a central register of charitable grants. In a Treasury response to the PAC report, there was a promise to introduce tougher scrutiny of charities’ internal monitoring of their effectiveness.
The PACAC recommendations are in the same vein. Comparable standard measures are a step up from scrutiny of internal monitoring, but they’re on the same continuum, so we can reasonably expect that this recommendation will be accepted.
If it is, what would a good government response look like?
Well, it wouldn’t look like a standardised set of outcome targets.
It is important that measurement frameworks, tools and comparable benchmarks should not dwindle into a set of compliance measures unrelated to service delivery. That merely introduces another set of tick boxes and does nothing to support and develop excellence in frontline support.
At its best, systematic measurement within social programmes allows us to monitor progress, identify what works, learn from what doesn’t and continually improve. It is as much about reflexivity and learning as it is about rigour and accuracy. It tells us about our capacity, our effectiveness and how our constituents feel about the support we offer and what it does for them.
This isn’t a matter of quasi-experimental studies and randomised controlled trials – although these are important too – it is about continuous, systematic and integrated questioning of how we go about our work.
This is where the PAC’s scrutiny of effectiveness and the PACAC’s shared measurement go hand in hand.
We must define a set of principles and benchmarks for internal scrutiny of effectiveness that sets out proportionate, appropriate and sufficiently rigorous methods for assessing what charities do. There are some excellent standards already in existence. Last week’s publication by NPC of its report on the systematic collection of qualitative evidence, for instance, outlined some admirable first principles. Standards for quantitative data collection can be, and already have been, defined with equal accessibility.
What is needed is some sensible integration of existing standards into a single set of principles and associated benchmarks for all charities. This part of the work could be agreed quickly, if we set our minds to it.
To work, it would of course need to be widely taken up. Government use of these principles across its grantmaking would make that more likely, which would be a very good thing. Other methods of consolidating their use are also worth exploring.
Once agreed principles for measuring effectiveness are in place, and are well understood, applied and audited (by trustees or others), shared frameworks, measures and data (at appropriate sub-sector levels) will really come into their own.
The data produced by charities will be far more reliable than it is at present. Robust standardised methods for collecting it will ensure that shared indicators can be agreed with minimal time and effort, and the applicability of that data will not need to be endlessly debated.
Even in advance of this point, though, there are useful comparable measures that we could explore.
I’ve made the case elsewhere for the systematic use of caseload as a capacity measure across one-to-one support programmes. The work of the Safe Lives network shows how shared measurement can work across a sub-sector, in this case in domestic violence. Current work on a shared framework for Looked After Children by South East Together is also encouraging.
More, however, still needs to be done to ensure that agreed frameworks come with shared indicators across different providers in different parts of the country.
Even here, though, there are promising developments: the emergent networks of the Centre for Youth Impact could provide the infrastructure through which to define and test new shared standards and benchmarks in youth work and services for young people.
This is an area ripe for sustained investment and support, and for a thoroughgoing sub-sector trial.
If the government does accept the PACAC’s recommendation, it must put its money where its mouth is and invest in the effective development of good quality standardised measurement, tested within an appropriate area, such as youth services. It must then commit to its use.
A sustained commitment to assessing effectiveness through agreed principles for internal assessment and shared benchmarks will ensure that the kind of lobbying that saw Kids Company receive special treatment for so many years will never happen again.
Genevieve Maitland Hudson is a director of impact measurement consultancy OSCA