Foundation Practice Rating: Under the bonnet of charitable trusts and foundations

17 Mar 2022 Research

For the first time ever, a new research project lifts the lid on the practices of the UK’s charitable trusts and foundations in relation to transparency, accountability and diversity. Tania Mason rounds up the findings.

Download the full report as a PDF

Privately funded trusts and foundations are almost uniquely unaccountable. They are independent from government, and most do not need to fundraise, so they don’t rely on anyone else. This has its strengths, in that they are not swayed by political agendas, for example, and can support causes that may be unpopular. It allows them to speak truth to power, if they choose. But in terms of the balance of power between them and those they fund, they hold all the cards. Apart from their duty to comply with charity law, they can operate largely with impunity. As Danielle Walker-Palmour, director of Friends Provident Foundation (FPF), put it: “Only in philanthropy is it still common practice to use family membership as the sole qualification for inclusion in decisions on the deployment of large sums of capital.”

Foundations are also perceived as somewhat opaque, with only 218 UK funders publishing their grants data through 360Giving. The sector has not devised any common standards for reporting on grants or investments beyond the regulatory minimum.

The sector also falls short on diversity. The 2018 research for the Association of Charitable Foundations (ACF) by Bayes Business School threw this into sharp relief, showing that 99% of foundation trustees are white, two-thirds are men, and three in five are aged 64 or over. Anecdotally, similar patterns are believed to exist across foundation staff teams, too.

For a sector with assets of more than £62bn, one that gives out more than £4.7bn a year even when there is no pandemic, this is not good enough. Many philanthropic grantmakers exist to tackle inequalities and disadvantage, to support those on the margins, and to promote social justice. But if they do not represent the communities they seek to serve, refuse to open themselves up to scrutiny, and are unwilling to learn and better themselves, then they cannot possibly make best use of their resources and maximise their impact.

In recent years, awareness of these shortcomings has grown within the sector, and they have become increasingly hard to ignore. Now a group of progressive foundations, led by FPF, has come together to try to address these issues. Realising that their own objectives to improve society cannot properly be met unless they lead by example, 10 forward-thinking grantmakers (see Figure 1 below) have launched a project to measure the performance of themselves and their peers on their approaches to diversity, accountability and transparency. What’s more, they’ve put their money where their mouths are and stumped up £45,000 apiece to carry out and publish the research over three years. The aim of the work is to identify and promote good practice and, where failings are uncovered, to encourage those trusts and foundations to improve. It is hoped that the work will inspire a new culture of openness and accountability across the sector.

In this, the first year of the project, the Foundation Practice Rating assessed 100 grantmaking foundations – all but two of which are charities – on 90 questions grouped under the three pillars of diversity, accountability and transparency. Using only information that is already in the public domain, the researchers analysed the practices of the selected trusts and gave each one a score on each question, with the total score used to calculate a rating for each of the pillars. Where possible, they assessed the foundations against standards and ratings that already exist, such as the Social Mobility Employer Index, Glasspockets’ Transparency Standard and the Racial Equality Index. All three pillars were given equal weight, as were all the criteria that fed into them. The three pillar scores were then combined into an overall rating for each participating trust (see Figure 4 below).

How it worked

The Rating researchers scoured publicly available information, such as websites and annual reports, to assess each grantmaker. They deliberately mimicked a grant-seeker examining a potential new funder. Each researcher spent up to 90 minutes looking at each foundation, as that was the maximum time they estimated that a prospective grant applicant would take. The research was done without foundations’ permission and the foundations assessed had no influence over the findings.

The questions were published before the research began, and a public consultation in May and June 2021 invited feedback. Once the final criteria were determined, these were published and widely promoted, along with guidance on how to do well against them. Data on each trust was gathered from September to December 2021. Once the research was complete, all the trusts were sent the data about themselves to check before the final report was compiled.

Each foundation was assigned a score of A, B, C or D for each pillar (with A being highest), plus an overall score (see Figure 4 below).

Disclosure, not activity

The team also emphasised that because they relied wholly on publicly available information, the research relates only to what a foundation discloses, which could be different to what it actually does. Walker-Palmour explained: “For example, if a foundation does an excellent job involving a diverse group of stakeholders but does not talk about that in its published material, it gets no credit for that in our scoring – even if we know about that excellent work from our wider work in the sector.”

The team was also at pains to stress that the results on diversity of staff and trustees are based on disclosure, not performance. Scores were assigned based on whether foundations publish their diversity breakdown, not on the diversity breakdown itself. This means that points were awarded for disclosing the gender, ethnicity and disability of staff and trustees, even if those staff and trustees are all non-disabled white men.

Similarly, if a foundation had a genuine reason for not having a particular policy covered by the criteria, and explained why in publicly available material, it was given credit for that rather than marked down.

Top-line results: three As overall

Just three foundations achieved the top rating of A: Wellcome, the largest foundation in Europe; the Blagrave Trust, an endowed funder of youth organisations and young people, with assets of around £42m – in the fourth of five quintiles by giving budget; and the County Durham Community Foundation, a fundraising charity. Walker-Palmour said: “A first observation is how varied those three are. This suggests that good practice is not dependent on any one structure or size.”

Some 41 foundations scored B overall; 28 scored C; and 28 scored D (see Figure 5).

As a rule, the sampled foundations did best on transparency, and worst on diversity. The average score for transparency across the whole sample was B, for accountability it was C and for diversity it was D.

Some 53 foundations scored at least one A on the pillars, with 17 scoring two. But low scores were just as evident: 47 trusts scored at least one D, and 22 of the sample had Ds across the board.

Best practice

Community foundations scored better than average, as did the group of 10 foundations funding the project; all scored A or B overall. Of the five community foundations that were included, one scored A overall and the other four were rated B.

Across the whole sample, the best collective scores related to: publishing an investment policy; having a website; stating who the staff are; and publishing details of funding priorities and past/existing grantees.

The researchers highlighted several examples of excellent practice, including:

  • An appeals process for rejected applicants (County Durham Community Foundation).
  • Easily visible buttons across the top of a website which enlarge the text on all pages (Cumbria Community Foundation).
  • Extensive information on contact options for disabled users, including assistive technology and a £500 bursary for those needing help with applications (Paul Hamlyn Foundation).
  • Clear presentation of funding priorities in various formats – PDF, video and slideshow (Lloyd’s Register Foundation).
  • Clear evidence of how the type and range of grants was broadened to address concerns arising from the foundation’s own impact analysis (Clergy Support Trust).

Biggest room for improvement

The worst collective scores related to: publishing a breakdown of the diversity of trustees and staff, and any plan to improve that; publishing in Welsh; and providing contact mechanisms for disabled people.

Some 28 foundations scored D overall, and 22 of these scored a D on all three pillars. These lowest performers span the size range by giving budget. None of them had a website, and around two in five did not provide an email address.

The researchers identified several instances where foundations require something from their grantees which they do not do themselves. Examples of such requirements included paying the living wage, consulting with beneficiary communities about priorities, and having complaints or whistleblowing procedures.

Another anomaly they highlighted was the insistence that successful grant applicants produce evidence of their impact, when the foundations did not provide any analysis of their own success. Caroline Fiennes, director of Giving Evidence, said: “A standard reason that foundations give is that analysing a grantmaker’s impact can be difficult because its effects are vicarious through its grantees. In our view, this is a poor excuse.”

She contended that there is plenty a foundation can do to measure its own success, such as analysing the proportion of grants which meet its primary goals, against the proportion that don’t. “They can then compare that to the characteristics of the grants/grantees. It will show whether they succeed most often with grants in (say) Wales or Scotland, or small grants vs larger grants, or small grantees vs larger grantees. Almost all foundations’ work could be analysed in that way, and it would give great insight into how they can be most effective.”
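As a rough illustration of the kind of self-analysis Fiennes describes, the sketch below groups a handful of grant records by a single characteristic and computes the share that met their primary goal. The field names and data are made up for illustration and are not drawn from the report.

```python
# Minimal sketch of grant-success analysis by characteristic (illustrative data only).
from collections import defaultdict

grants = [
    {"region": "Wales", "size": "small", "met_primary_goal": True},
    {"region": "Wales", "size": "large", "met_primary_goal": False},
    {"region": "Scotland", "size": "small", "met_primary_goal": True},
    {"region": "Scotland", "size": "large", "met_primary_goal": True},
]

def success_rate_by(grants, characteristic):
    """Share of grants meeting their primary goal, grouped by one characteristic."""
    totals, successes = defaultdict(int), defaultdict(int)
    for g in grants:
        key = g[characteristic]
        totals[key] += 1
        successes[key] += g["met_primary_goal"]
    return {key: successes[key] / totals[key] for key in totals}

print(success_rate_by(grants, "region"))  # {'Wales': 0.5, 'Scotland': 1.0}
print(success_rate_by(grants, "size"))    # {'small': 1.0, 'large': 0.5}
```

Comparing such rates across regions, grant sizes or grantee types is the kind of simple breakdown Fiennes argues almost any foundation could produce from its own records.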

Does size matter?

Performance did not appear to rely on financial clout; no clear trends emerged according to either giving budget or net assets. Good and bad practice could be found in all sizes of organisation, proving that you don’t need to be large or wealthy to set high standards – or to sink to low ones.

Patterns did emerge in relation to the size of teams and boards, however. Foundations with no staff tended to score lower than those with some employees. The very largest (teams of 100+) scored best but interestingly, those with 11-50 employees had a higher proportion of Bs than those with 51-99 staff.

It was a similar picture with regard to the number of trustees: foundations with five or fewer tended to perform worse, with over half scoring a D overall. No foundation with 10 or more trustees scored a D overall. All three of the top scorers have six to 10 trustees (see Figure 6). There was a correlation between having more trustees and scoring better, particularly on accountability.

Diversity

All but three foundations scored C or D on diversity (which includes accessibility). None scored an A. Sixteen foundations scored zero on diversity. This was easily the weakest pillar across the sample – there is considerable room for improvement across the sector.

(It should be noted that foundations with 10 or fewer staff were exempted from the criteria on staff diversity and those with five or fewer trustees were exempted from the trustee diversity questions.)

At the outset of the project, the researchers attempted to measure the diversity of the foundations’ staff and trustees in relation to gender, ethnicity and disability, but there was so little data available that they abandoned this ambition – very few foundations publish this information.

They found that on staff, only four foundations – Barrow Cadbury, Power to Change, Wellcome and Comic Relief – published a diversity breakdown; on trustees, only the Rhodes Trust published any breakdown, and only of ethnicity. Only 14 trusts had a diversity plan for staff and of these, only Wellcome provided any targets within its plan. Ten foundations had a diversity plan for their boards, with Esmée Fairbairn the sole foundation to include targets for improvement within this. By contrast, 48% of FTSE 250 firms publish a board diversity policy. The researchers said: “Though many foundations publicly affirm their commitment to equality, diversity and inclusion and provide statements indicating a willingness to improve, few of those statements contain clear targets or goals about how a foundation intends to improve its diversity over time. A statement is not a plan.”

The diversity pillar also included a number of questions relating to accessibility, such as whether a foundation’s website met international accessibility guidelines and whether the organisation offered different ways for people to get in touch or to apply for grants. The study found that standards generally fell below those adhered to by the public sector and suggested that more foundations should proactively work with disabled people to review their websites and practices to ensure they are more inclusive.

Examples of criteria used in the assessment, each with a foundation that meets it:

  • Ability to zoom in 400% on any page on the foundation’s website and still read all the text in a single column – The Maitri Foundation.
  • Ability to submit proposals in a range of formats – Lankelly Chase.
  • Foundation publishes information on any pay gaps (gender, ethnicity, disability) – Barrow Cadbury Trust.

Transparency

A total of 51 foundations scored an A for transparency, but despite some very good practice in this area, there was also some disappointingly poor practice. Just over a quarter of foundations didn’t even have a website. And among some that did, the site was either too sparsely populated or too cluttered, making information hard to find.

The project team acknowledged that some foundations do not have websites, or do not disclose certain information, because of the nature of their work. “For instance, some foundations which fund human rights work want to avoid attracting attention, particularly to their grantees, because that may imperil them.”

Rachel Hicks, head of marketing and communications at UK Community Foundations, said that one of the community foundations assessed for the Rating was in the midst of relaunching its website, and told her that the criteria questions had helped enormously to inform its design and content. “So the Rating is already meeting its targets of encouraging people to think about their practice,” she said.

Examples of criteria used in the assessment, each with a foundation that meets it:

  • Contact information is provided on the trust’s website – James Dyson Foundation.
  • Information on what the foundation will not fund is made explicit on its website – Charles Hayward Foundation.
  • Information is published on grant reporting requirements for grantees – Eveson Charitable Trust.

Accountability

Eighteen foundations scored an A for accountability, but far more participants scored Cs and Ds than As or Bs. The researchers found that few trusts offered an obvious complaints mechanism and some provided no email address or phone number.

Only around a third published any analysis of their own effectiveness, even though most ask this of the organisations they fund. The report said: “Perhaps this should be addressed considering that foundations routinely ask grant-seekers for precisely this kind of information.”

Examples of criteria used in the assessment, each with a foundation that meets it:

  • Foundation provides a mechanism to report malpractice concerns (whistleblowing) – Baron Davenport’s Charity.
  • Publication of any feedback that the foundation receives from grant-seekers and/or grantees – Comic Relief.
  • Evidence that the foundation, in determining its funding priorities, has consulted the communities it seeks to support – The Blagrave Trust.

Looking to the future

For the next two years of the project at least, FPF expects that the sample will comprise:

  • Again, the 10-strong funders group.
  • Again, the five largest foundations.
  • A fresh random but stratified sample of other foundations. This may or may not include those that were assessed this year. The project team acknowledged concerns raised by some foundations that if they are not reassessed in future years, they will not have the opportunity to demonstrate improvement, and said they were considering how to tackle this. But they felt it was important that all foundations realise that they could be included, so that there is an incentive for all of them to improve. Rating a wider range of trusts would also provide a more faithful picture of progress across the whole sector.

The criteria will likely be exactly the same as for this first year, for continuity.

Website, report and resources

The project team has created a website for the Foundation Practice Rating, which contains more information about the project and the research, as well as the full report. Find it at www.foundationpracticerating.org.uk.

As the motivation behind the Rating is to drive up standards, the project team has also compiled a list of resources and organisations that can provide advice and support to assist those that wish to improve their practices across the pillars. Find this at: www.foundationpracticerating.org.uk/resources/.

Impact of the project

The project team conceded that accurately assessing the impact of their work will be impossible, as changes in practice cannot be attributed to any one piece of work. However, they added: “Many foundations have said they find value in this process and our criteria. We will continue to track these anecdotes and hope that the process continues to create value for the sector.”


Figure 1: Project funders

The 10 project funders, each giving £45,000 to the Rating over the first three years, are:

  • Friends Provident Foundation
  • Barrow Cadbury Trust
  • The Blagrave Trust
  • Esmée Fairbairn Foundation
  • John Ellerman Foundation
  • Joseph Rowntree Reform Trust
  • Joseph Rowntree Charitable Trust
  • Lankelly Chase Foundation
  • Paul Hamlyn Foundation
  • Power to Change

Figure 2: The criteria used for assessment

Broadly, the principles set out for the three pillars were:

Diversity: The extent to which a foundation reports on the diversity of its staff and trustees; the extent to which a foundation reports on its plans to improve its diversity; and how well it caters for people who prefer or need to communicate in different ways, ie how accessible it is.

Accountability: Is it possible to examine the work or decisions of a foundation after the event, and to communicate with that foundation about these?

Transparency: Does a potential grantee have access to the information that it needs in order to contact the foundation, decide whether to apply for funding, or learn about it more generally in advance of any grant?

The final criteria had to meet both of the following requirements:

In scope: The criteria must relate to the three pillars: diversity, accountability and transparency. Therefore, criteria only about sustainability or relating to an assessment of a foundation’s impact or its strategy were out of scope.

Observable and measurable: The rating process only used data in the public domain. So, the evidence of whether a foundation meets a criterion must be measurable from the outside, and not require (for instance) interviews with staff or insider knowledge.

Every criterion used is met by at least one foundation in the sample – ensuring that none of the criteria is impossible to meet.

Exemptions

Some foundations were exempted from certain criteria because not all questions were relevant to every foundation. For example, a foundation that funds only by invitation does not need to publish eligibility criteria, and foundations with fewer than 49 staff were exempted from publishing gender pay gap data. A full list of exemptions and details of how these are dealt with in the scoring is published in the appendix to the full report (see p28).

Limits of scope

It is also important to note that the research did not examine what the foundations actually fund. It did not look at issues such as how well foundations capture views from a diverse set of stakeholders to inform their work, nor the diversity of the work they fund. Walker-Palmour said: “This is because examining foundation practices is difficult enough, at least for the first year. So, there could be a foundation with poor disclosure and undiverse staff, which actually funds very diverse organisations and activities. We recognise this as a potential issue and may return to it in future years.”

Investment policies

On investment policies, the analysts used Glasspockets’ criterion for whether foundations should have one, plus the criteria from the Charity Commission on what the investment policies should contain.

Figure 3: How the sample of 100 foundations was chosen

The sample comprised:

  1. All the foundations that funded the project (See Figure 1). These foundations had no control over the detail of the assessment or of the ratings assigned to each trust.
  2. The five largest foundations by grant budget.
  3. A stratified subset of other foundations, chosen at random from those foundations that featured in the Foundation Giving Trends report 2019 published by the Association of Charitable Foundations plus the UK’s community foundations – a total of 383 organisations. This group was divided into five quintiles by annual giving budget and an equal number was selected at random from each quintile (see the sketch below). The results table on pp28-29 identifies foundations in each quintile by colour coding.
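A minimal sketch of the stratified sampling step described in item 3, assuming a list of foundations with known annual giving budgets. The draw of 17 per quintile is an assumption, chosen only because it yields the 85 “other” foundations implied by a sample of 100 minus the funders group and the five largest.

```python
# Stratified random sampling by giving-budget quintile (illustrative sketch).
import random

def stratified_sample(foundations, n_per_quintile, seed=0):
    """Sort by giving budget, split into five quintiles, draw equally from each."""
    ordered = sorted(foundations, key=lambda f: f["giving_budget"])
    size = len(ordered) // 5
    quintiles = [ordered[i * size:(i + 1) * size] for i in range(5)]
    quintiles[-1].extend(ordered[5 * size:])  # fold any remainder into the top quintile
    rng = random.Random(seed)
    return [f for q in quintiles for f in rng.sample(q, n_per_quintile)]

# Illustrative only: 383 dummy foundations, 17 drawn per quintile.
population = [{"name": f"Foundation {i}", "giving_budget": i * 10_000} for i in range(383)]
print(len(stratified_sample(population, n_per_quintile=17)))  # 85
```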

Other than identifying the funders group foundations and community foundations, the selected organisations have not been categorised by type in the report. This is because distinguishing clear categories is very difficult; there is no clear, accepted definition within the sector of what constitutes family, corporate and other types of foundation.

Figure 4: How ratings were calculated

Equal weight was given to all criteria and to each pillar. Each criterion was allocated one point and a foundation’s actual score in each pillar was divided by the maximum possible score for it on that pillar, with allowances made for exemptions. This produced a percentage, which was the foundation’s score on a pillar, and this was then converted into a grade (A-D).

The project team explained that a natural way to generate a foundation’s overall rating would simply be to take an average of its scores of the three pillars. But they decided that this would not be fair, as an excellent performance requires a certain level of achievement in all three areas, rather than just an outstanding score in one or two. So they decided on a policy whereby a foundation’s overall rating could be, at most, one band higher than its lowest pillar score.  
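The report sets out the principle – equal weighting, a percentage per pillar converted to a grade, and an overall grade capped by the lowest pillar – but not the exact arithmetic. The sketch below is one possible reading: the band boundaries and the averaging-then-capping rule for the overall grade are assumptions for illustration, not the published methodology.

```python
# Illustrative reading of the Figure 4 scoring logic; thresholds are assumed.
GRADES = "ABCD"

def pillar_percentage(points_scored, applicable_criteria):
    """Each criterion is worth one point; exempted criteria shrink the denominator."""
    return points_scored / applicable_criteria

def to_grade(percentage, bands=(0.75, 0.5, 0.25)):
    """Convert a pillar percentage into A-D using assumed band boundaries."""
    for grade, threshold in zip("ABC", bands):
        if percentage >= threshold:
            return grade
    return "D"

def overall_grade(pillar_grades):
    """Average the pillar grades, then cap at one band above the lowest pillar."""
    avg_index = round(sum(GRADES.index(g) for g in pillar_grades) / len(pillar_grades))
    lowest_index = max(GRADES.index(g) for g in pillar_grades)  # D is the highest index
    return GRADES[max(avg_index, lowest_index - 1)]             # at most one band above the lowest

print(to_grade(pillar_percentage(20, 25)))  # 0.8 -> 'A'
print(overall_grade(["A", "A", "D"]))       # averaging alone would give 'B'; capping gives 'C'
```

The example shows the intent of the cap: strong transparency and accountability scores cannot fully offset a weak diversity score.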
