Ian Allsop: Onwards and upwards – Foundation Practice Rating

14 Mar 2024 In-depth

Reporting on the third year of the Foundation Practice Rating, Ian Allsop finds that charitable grantmaking foundations are becoming more accountable and transparent, and even making progress on diversity.


Three years ago, Friends Provident Foundation spearheaded the launch of a groundbreaking initiative that sought to assess the performance of UK charitable grantmaking foundations on their approach to diversity, accountability and transparency. The ultimate aim of the Foundation Practice Rating (FPR) project was to drive change in these three areas in the foundation sector, by assessing foundations against a common set of criteria and highlighting beacons of best practice, as well as identifying where improvements could be made.

The results from year three of the project are now in, and it appears that these objectives are being met: overall, there is clear improvement in the ratings and practices of the foundations over the period. More than three times as many foundations scored the top mark overall this year compared with three years ago, while the number scoring the lowest overall rating has halved.

Diversity remains the weakest area, but even there we are seeing signs of progress. This year for the first time, one foundation scored the top mark, an A, on diversity (the Community Foundation for Tyne & Wear and Northumberland), and 11 scored B, compared with eight last year. Moreover, fewer than a third of the foundations assessed scored a D on diversity, whereas it was almost half in year two.

However, even though the general curve is distinctly upward, plenty of room for improvement remains. The project will continue into year four and beyond in the hope that even more progress will be made.

Background to the project

Charitable grantmaking foundations are highly unusual in that most do not need to raise funds, and so do not depend on any other entity for their income. They can and do fund a broad array of charitable work. Most foundations are at liberty to take a long-term view and can be nimble in responding to situations such as the pandemic and the cost-of-living crisis, increasing their giving even when their income falls.

They value their independence from government highly. But that independence also enables them to operate with little transparency about what they do and how they do it. This can be a strength: it allows foundations to fund important but possibly unpopular causes, and can unlock charitable funding from people who wish to give but are not comfortable with publicity. It also gives them the option to speak truth to power, regardless of fashions or political interests.

However, the sector has clearly lacked diversity in the past, and plenty of research shows that less diverse groups make poorer decisions than more diverse ones. We all have a stake in how well foundations perform, both because of their ability to do good and because, as registered charities, they are in effect supported by the taxpayer. Yet foundations can lack accountability to donors or the public, other than through charity law and regulators. Trusts are accountable to their boards, which do not always reflect the population as a whole or the communities they serve. And a lack of transparency about what foundations do can leave other charities and members of the public in the dark about how foundations work, making dealings with foundations unnecessarily costly and wasting scarce resources.

At the heart of these issues is power. Organisations seeking funds are rarely able to question the source of funds, or the legitimacy and practices of the funder. However, in the era of big data and increasing democratisation of information these traditional relationships are shifting. Foundations are beginning to recognise that their effectiveness and responsibility depend upon their activities being diverse and inclusive, demonstrating results, being accountable to the organisations that they seek to support and to society more widely, and increasing their transparency.

The Foundation Practice Rating was designed to encourage grantmaking charitable foundations to examine how they appear to the outside world on issues of diversity, accountability and transparency. Because foundations do not opt in and cannot opt out of the main cohort, it gives a more representative view of the sector’s performance than many other existing benchmarking-type initiatives. It uses only publicly available information, and the included foundations have no influence over the findings.

About the research

The research and assessment are carried out each year by Giving Evidence, a consultancy specialising in the production and use of rigorous evidence in charitable giving.

The FPR’s most recent report covers the ratings based on data gathered in autumn 2023. The report is designed to be self-standing so, as with the previous two reports, it explains for new readers the development of the rating and the principles by which it operates. In most respects, FPR operates in year three as it did in the first two years.

A fresh sample of foundations is drawn each year, so the cohort changes from year to year. In this respect, FPR’s sampling is similar to that used in political polling, and is a demonstrably robust method despite the changing cohorts. In addition to the selected cohort, any foundation can opt in to be assessed; opt-in foundations are researched in the same way as the main cohort of 100 but reported separately. This year, three foundations opted in.

The research involves answering 98 questions about each of 100 foundations: 56 of the questions are criteria which contribute to the foundation’s score and rating. The FPR uses only publicly available information, because this is all that is visible to outsiders such as prospective applicants for grants or work. As the authors colourfully explain: “Just as astronomers have to infer what is happening inside a distant star based only on the light that emanates from it, outsiders can only infer how a foundation works and what it values from the information that emerges from it.” The criteria are determined as objectively as possible, drawing largely on other rating systems (in the voluntary sector and also beyond), and the researchers run an annual public consultation to inform the criteria and process.

Changes since last year

The FPR method is described in detail in appendix A of the full report. The method has been deliberately kept stable from year to year to enable year-on-year comparisons. However, there have been some changes in criteria that may affect scores, which follow from the annual consultations.

In year three, for example, the researchers have only given credit for information published within the three years before the research period, which was autumn 2023. They have also taken a more robust approach to the evidence required to score points around how foundations measure their own effectiveness as funders.

Lastly, they have changed how they assess whether programmes are transparent about their eligibility criteria, decision-makers and timeframes for funding, basing the score on the proportion of a foundation’s funding that has these features rather than on the proportion of its programmes. This avoids marking down a foundation that has, for example, one very large and transparently run programme alongside several much smaller, less transparent ones. This change could slightly increase or decrease scores.

General overview of the findings

As FPR is now in its third year, the team hope to see evidence of real change in behaviour and practice, and overall, there is clear improvement in the ratings and practices of the cohorts over time. For instance, the number of foundations scoring A overall has more than trebled since FPR started three years ago (from three in year one to 11 this time), and the number scoring D (the bottom grade) overall has halved (from 28 in year one to 14 this time).

As with both previous years, the foundations scoring A overall are diverse in size and structure: that means, crucially, that FPR is clearly not a tacit measure of a foundation’s size or structure. The A-rated foundations include: eight endowed foundations – some large, some much smaller; some long-established, one with a living settlor; a corporate foundation; and two community foundations.

Moreover, some small foundations score well overall, and some large ones score poorly – three of the UK’s largest foundations (by giving budget) scored C overall, for example.

For the first time, this year one foundation – the Community Foundation for Tyne & Wear and Northumberland – scored A on all three domains; the report authors declare: “Kudos to them.” And fewer foundations (nine) rated D on every pillar, compared to 23 in year two.

As in year two, the strongest domain was transparency, while the weakest by far continues to be diversity. However, there are improved scores in all three domains compared with previous years – and these changes are statistically significant.

Among the foundations included in all three years other than by random selection (a group comprising some of the five largest foundations by giving budget, most of the foundations that fund the project and are thus automatically included, and a couple of others), there are improvements in all three domains, with diversity scores improving by the largest proportion. Of course, it might be expected that the group funding FPR would show particular commitment to these issues, given their support for it.

There is very little difference between the performance of randomly selected foundations that were assessed for the first time this year and those that were included for the second or third time this year. Again, the sample numbers here are relatively small, but the authors note that “this might suggest that if there is a sector-wide change, it is more likely to result from sector-wide effects (which could be linked to FPR and/or other initiatives or influences) than from foundations changing practice after the experience of being assessed by FPR”.

Best and worst practice

Collectively, the criteria on which the 100 included foundations scored best were:

  • Whether the foundation gave any information on who or what it has funded (99% of eligible foundations did so).
  • Whether the foundation had an investment policy (91% did; note that the Charity Commission for England and Wales expects all charities that invest to have a written investment policy).
  • Giving information on who makes the funding decisions, for most of the foundation’s funding (87% of eligible foundations).
  • Whether the foundation has a website (87%).

The criteria on which they collectively scored worst all relate to diversity/accessibility:

  • Having ways to contact the foundation for people who have disabilities (2% of points scored by non-exempt foundations).
  • Having a plan to improve the diversity of trustees or board members, with numerical targets (3%).
  • Having a plan to improve the diversity of staff, with numerical targets (4% of possible points scored).
  • Having various ways for contacting the foundation concerning malpractice (5% of possible points scored).

Caroline Fiennes, director of Giving Evidence, says: “The results this year show material progress by foundations on these important issues of diversity, accountability and transparency in the three years since we started this work. For instance, the number of foundations scoring A overall has more than tripled and the number scoring D overall has halved.”

But there is still plenty to do.

“Interestingly, it is often the top-performing foundations which seem (from what they say to us) most aware of where they still need to improve,” Fiennes says. She also mentions that as FPR only assesses the UK’s largest 300 or so foundations, it is disappointing that so many (13) still do not even have websites.

Danielle Walker Palmour, director of Friends Provident Foundation, comments that the “steady improvement in results suggests that the conversation around philanthropy is shifting toward the ‘how’ we give and not just describing the ‘how much’ or ‘to what’”.

Progress on diversity

As noted above, diversity continues to be the weakest domain; however, there are signs of progress. This year for the first time, a foundation scored an A on diversity (the Community Foundation for Tyne & Wear and Northumberland), with 11 achieving B, compared to eight last year. Fewer than a third of the foundations scored a D on diversity, where it was almost half in year two.

For the first time, researchers also collected data about whether foundations reported on the diversity of their grantees. They found that 13 foundations did, five of them community foundations.

The foundations varied in whether they reported the diversity of their applicants as well as of their grantees, and in the characteristics on which they reported. For example, the Sports Aid Trust reported the breakdown of people receiving a SportsAid award by gender, whether they had “a disability” and whether they were “from ethnically diverse communities”, whereas the Lloyds Bank Foundation for England & Wales reported a breakdown by race. The United Utilities Trust Funds reported the breakdown of individuals applying for funding by age, but no other characteristics.

There was also disparity in whether the foundations assessed stated what diversity-related definitions they used and where these had come from. Some foundations had used the DEI Data Standard, and some had undergone a Race Equality Audit. Others did not state the definition or source: for instance, one reported the breakdown of funding which “went to charities led by and for Black, Asian and minority ethnic communities”, but it was unclear how that is defined or who determines whether a particular grantee meets that definition.

In short, the data provided by foundations about diversity of their applicants and grantees could be a great deal better, and as such is currently insufficient to create any reliable picture of the combined funding flows.

Website woes

In general, foundations with websites were good about publishing their funding priorities, eligibility information and data about who and what they have funded. Having a website is essential to performing well, both in the domains and overall: no foundation without a website scored above a D overall.

However, as in both previous years, many foundation websites could be much better. For example, 13 foundations in the year three cohort had no website at all. Many sites were hard to navigate: on some, the hamburger menu button froze when the page was zoomed to 400%, hindering navigation. Many had no working search function, and when navigating by keyboard, some did not highlight the cursor’s position, leaving users to rely on the small navigation text in the bottom-left corner.

Some websites were very busy, impeding finding information quickly and easily, while others shared only very limited information. Some of the foundations included this year had just a single webpage.

All of this is a concern because it means that prospective applicants might be unable to find information that they need to determine if their work fits the funding aims. This wastes charity resources.

Evidence of effectiveness

One notable area where improvement is needed is around evidence and analysis of foundations’ own effectiveness. Despite many foundations requiring grantees and applicants to produce evidence of their effectiveness, few foundations publish such analysis of their own. Indeed, Fiennes highlights the fact that only 16 of the 100 foundations assessed did so. “That is surprising, to say the least, given the emphasis which foundations have placed over the last 20 or so years on asking other non-profits about their performance. It also means that there is very little from which other foundations or private donors can learn about how to do grantmaking well. More introspection and self-analysis would surely help to make foundations more effective.”

The researchers acknowledge that as grantmakers’ effects are clearly mainly vicarious through their grantees, identifying their effects is complicated. But they say that it is possible to gain a line of sight through various types of analysis published, including: views of grantees and/or applicants, collected systematically; analysis of the proportion of grants which (at some level) succeeded versus those which did not; and analysis of the costs created by the foundation’s funding processes and borne by grantees/applicants – ideally this would be expressed as a proportion of the amount given, ie the net grant.

While researchers were open to counting other relevant analyses if they found them, points were not awarded for simple breakdowns of the grant portfolio – for example by grant size, geography or sector – because these do not relate to effectiveness.

Some foundations claimed some benefit/s but did not explain the input data or calculation method. For instance, one foundation cited a figure for its social return on investment, but it was not clear who did the calculation or how, or what dataset or time period it referred to. No points were awarded in such cases.

Other evidence deemed to be insufficient included: stories of grantees’ effects (grantees might have achieved that impact despite their funders); citing activities/outputs, eg “76 volunteers have received training to help them provide support within their organisations”; and describing or counting changes created by grantees, because it is unclear whether the funder(s) contributed to those changes. As the report authors observe, “sometimes grantees achieve things despite their meddlesome funders!”.

While 84 foundations published no analysis of their own effectiveness, the analyses of the 16 that did varied considerably. Surveys of grantees were the most common, some of which also included applicants.

In summary, the authors continue to feel that foundations could do much more to analyse and understand their own effects – as opposed to those achieved by their grantees – and to publish the methods and findings of those analyses. “If a new funder were to read all the impact reports published by the cohort of 100 foundations, we doubt that they would learn much which is backed by data about how to give well.”

Contact difficulties

In all of FPR’s three years, each of the assessed foundations was sent the data about it, for it to check. The researchers used the contact details that the funder provides. “For a surprisingly large number, that is not email but a postal address – so we send the information by post. For an also-surprising number, the email address is a generic one – such as info@ or enquiries@ – and sometimes is for a lawyer. We quite often hear from foundations that those emails are not received; presumably they go to spam and are not checked. That is, for many foundations the contact details which a prospective applicant might use, go to some place which is not checked.”

The obvious learning point from this is that, as all charitable foundations operate in the public interest and are subsidised by the taxpayer, “it seems not unreasonable that outsiders should be able to contact them”.

Community spirit

In general, community foundations continue to outperform their peers, and by an appreciable margin. As in year two, most of the 16 community foundations included this year were assessed for the first time, and over the three years of the FPR, 26 community foundations have been examined. Their scores are noticeably higher in all three domains. This may be because, unlike most foundations (endowed ones, or family ones, or foundations funded by a company), community foundations must compete for most of their resources, and therefore have strong incentives to perform well. The difference in scores is statistically significant.

Natalie Smith, director of grants and impact and deputy chief executive at Kent Community Foundation (which scored B overall), says that a critical but supportive eye on its grantmaking strategy, processes, and presentation to the world is always useful. “We were surprised how fairly minor individual insights can negatively impact applicants’ impression and experience of us. For example, lack of clear information on decision timelines for all funds, payment timings and decision-making process, plus specific support to people with access issues, may inadvertently create a negative impression.”

She says this is now on the agenda for every grants team meeting. “I set a culture of feedback and suggestions from all grants staff. Ideas for change are always taken seriously, and welcomed and considered.”

For Tamas Haydu, CEO at Cornwall Community Foundation, which scored C overall, it was very helpful to be independently assessed by experts. “We are always interested in adopting best practices and striving to provide the best service for our grant applicants and funders. This will be a great help for improving the way we provide information on our website. Our current website was developed seven years ago and we are in the process of developing a new one. The FPR has meant we are considering improving accessibility on it, as well as making information on our processes and policies more readily available. We will discuss the final results with our team and trustees and prepare an action plan to improve these areas.”

Other reflections

One of the aims of the FPR is to continually raise standards across the foundation sector, and even for those which are already demonstrating good practice, and obtaining high ratings, there is always more to do. Louise Snelders, head of funding and partnerships at the Co-op Foundation, which scored B overall, states: “We have discussed it as a team and while we see lots of areas for improvement, either in practice or in terms of transparency or clarity of explaining how we work, we do not see any areas where correction is required. We recognise there’s so much we’re doing that we haven’t made visible – but this is an action for us to take away.”

Mubin Haq, CEO of abrdn Financial Fairness Trust, which scored B overall, also highlights how the process can identify some omissions which can easily be fixed. On the criterion “Does the foundation provide its data on awarded grants in a downloadable (open) format that doesn’t require payment to access?”, he says: “The information is provided on our website and free to access but is not currently available in the formats indicated. We will now be exploring these options.” On being a Living Wage employer, he confirms that the Trust is one, as well as a principal partner of the Living Wage Foundation. “This was highlighted on our website but appears to have fallen off during our website refresh, so thank you for highlighting this. This information has now been added again.”

Looking ahead

The FPR will run again in 2024-25, which will be year four. The team envisages that the cohort of foundations will be defined in the same way. As with year three, there is a good chance that foundations in previous cohorts will be included again, simply by weight of numbers. It is also likely that the criteria for year four will be largely the same, for continuity and enabling direct comparison. That said, the researchers may continue to refine the questions in light of experience and feedback. “At some point – perhaps after year four – we may completely overhaul the criteria, reviewing them from the ground up.”

The grade boundaries are likely to remain unchanged in year four, although an alternative would be to raise the bar for the rating bands, on the basis that by year four foundations have had time to improve their practice and disclosure, and expectations should accordingly be higher.

Measuring the project’s impact

As set out last year, accurately and comprehensively identifying the entire effect of this project will be impossible. The authors explain that this is because there is no counterfactual. “The FPR operates on the whole UK foundation sector – and does so quite deliberately – for instance, by publishing the FPR criteria and stating publicly that the rating is being carried out, and that any foundation might be included in any year. There are therefore no foundations that are outside what researchers call the ‘treatment group’ (ie who are not affected by the project). This precludes any comparison of changes in performance of foundations which are treated (ie assessed) with changes in performance of foundations which are not – everybody is treated.”

Furthermore, there is no baseline data. “The FPR year one data in effect are the baseline, but they were gathered after the criteria and guidance on how to do well were published: that is, after the intervention started. As a result, it is possible that some foundations may have changed practices and public documents in response to the criteria and guidance but before the formal data gathering. And that is great! The FPR team and funders are more interested in encouraging change than in documenting and attributing it.”

Consequently, it is not possible to rigorously distinguish between the effects of this rating and the effects of the many other factors that affect foundations.

Any observed changes could be due to factors that affect all foundations. That said, there are encouraging signals and examples from various foundations that they are changing their practices in response to FPR. As mentioned, ratings are rising, indicating better practices. Many foundations have said that they find value in the process and the criteria. The FPR researchers will continue to track these anecdotes. They may also commission some systematic qualitative work to hear from foundations about whether they are aware of FPR, their experiences of it, and whether/where/how it has affected their practices. This might illuminate both the kinds of effects that FPR is having, and how it could be amended to be more consequential.

Danielle Walker Palmour concludes: “The perspective of applicants is at the heart of the FPR and, we think, modern grantmaking practice. We hope that next year will see further improvement in our sector.”

Ian Allsop is a freelance journalist and editor

Figure 1: how it works

Each included foundation is assigned a rating of A, B, C or D on each of the three domains of diversity, accountability and transparency (with A being top), and is also given an overall rating. The researchers stress that it is a rating and not a ranking.

The research team looked at 100 UK-based charitable grantmaking foundations. The 2024 sample comprised:

  • The 13 foundations funding the work;
  • The five largest UK foundations by giving budget; and
  • A random sample of charitable foundations and community foundations, as listed respectively in the Association of Charitable Foundations’ most recent Giving Trends report (which this year was the 2022 edition) and the UK Community Foundations network (UKCF). The former covers the top 300 or so largest UK charitable grantmaking foundations.

The 100 foundations included in year three collectively had:

  • Net assets of £61.6bn, compared to £68.1bn in year two and £44.4bn in year one;
  • Annual giving of £2bn, compared to £1.8bn in year two and £1.25bn in year one; and
  • An average payout rate (ie the amount given annually as a proportion of assets) of 3.2% (year two 2.6% and year one 3%).

A fresh sample of foundations is drawn each year. Some 17 foundations were included in all three years: 10 from the year one funder group; one which was selected randomly in year one and joined the group in year two; three large foundations that were in the top five by giving budget in all three years; and three which were randomly selected for inclusion in all three years. Twenty-eight foundations that were included by random selection in year three had been randomly included in one of the two previous years. The remaining 55 were included in year three for the first time.

In total over its three years, FPR has assessed 227 foundations. This means that by this point, more than half of the eligible foundations have been assessed at least once.

Figure 2: project funders

  • Friends Provident Foundation
  • Barrow Cadbury Trust
  • The Blagrave Trust
  • Esmée Fairbairn Foundation
  • John Ellerman Foundation
  • Joseph Rowntree Reform Trust
  • Joseph Rowntree Charitable Trust
  • Lankelly Chase Foundation
  • Paul Hamlyn Foundation
  • Power to Change
  • Indigo Trust
  • City Bridge Foundation
  • John Lyon’s Charity

Figure 4: criteria used for assessment

Broadly, the principles set out for the three pillars were:

Diversity: The extent to which a foundation reports on the diversity of its staff and trustees; the extent to which a foundation reports on its plans to improve its diversity; and how well it caters for people who prefer or need to communicate in different ways, ie how accessible it is.

Accountability: Is it possible to examine the work or decisions of a foundation after the event, and to communicate with that foundation about these?

Transparency: Does a potential grantee have access to the information that it needs in order to contact the foundation, decide whether to apply for funding, or learn about it more generally in advance of any grant?

Criteria also have to be:

In scope: The criteria must relate to the three pillars: diversity, accountability and transparency. For example, criteria only about environmental sustainability or relating to an assessment of a foundation’s impact or its strategy were out of scope.

Observable and measurable: The rating process only used data in the public domain. So, the evidence of whether a foundation meets a criterion must be measurable from the outside, and not require (for instance) interviews with staff or insider knowledge.


Foundations are exempted from criteria which are not relevant to them. For example, a foundation that funds only by invitation does not need to publish eligibility criteria, and foundations with fewer than 49 staff are exempted from publishing gender pay gap data.

Limits of scope

The research did not examine what the foundations actually fund. It did not look at issues such as how well foundations capture views from a diverse set of stakeholders to inform their work, nor the diversity of the work they fund.

Investment policies

On investment policies, the analysts used Glasspockets’ criterion for whether foundations should have one, plus the criteria from the Charity Commission on what the investment policies should contain.

Figure 6: key findings in 2024

Overall, there is improvement in the ratings and practices of the cohorts over time. Whereas in year one only three foundations scored an A overall, in year two seven foundations did, and in year three that has grown to 11.

As well as six foundations which also scored an A overall in year two (Wellcome, the Blagrave Trust, John Ellerman Foundation, Paul Hamlyn Foundation, Esmée Fairbairn Foundation, Walcot Educational Foundation and Oxfordshire Community Foundation), five further ones achieved the top grade. They are: Friends Provident Foundation, Indigo Trust, Lloyds Bank Foundation, Gloucestershire Community Foundation, and the Community Foundation for Tyne & Wear and Northumberland.

  • Every criterion was achieved by at least one foundation in the cohort.
  • As with both previous years, the foundations scoring A overall are diverse in income size and structure.
  • For the first time, this year one foundation scored A on all three domains: the Community Foundation for Tyne & Wear and Northumberland.
  • Conversely, fewer foundations rated D in all three domains than in previous years; 14 foundations rated D overall this year, compared to 23 in year two, of which 17 were rated D on all three domains.
  • Diversity was again the weakest domain.
  • A foundation can score quite differently on one domain from on the others.
  • Overall performance does seem to correlate weakly with the number of trustees.
  • Community foundations continue to outperform the broader sector.
  • The paucity of foundations’ websites remains striking.
  • Few foundations publish quantitative analysis of their own effectiveness.

Figure 7: opting in

The aim of the FPR is to encourage all trusts and foundations to make improvements in how they operate in the areas of diversity, accountability and transparency. However, it isn’t possible to rate all foundations, which is why a sampling method is used to select those for inclusion. After year one, feedback from a number of foundations indicated that some would like to ensure they were included in subsequent samples so they could track their progress against the three pillars over time.

For years two and three, therefore, foundations not included in the sample could opt in by paying a small fee to cover research costs. These results are made public in the final report, separate from the random overall sample. Intuitively, foundations which opt in are likely to be unusually motivated to have good practices. Therefore the results for ‘opt-in foundations’ are reported separately from the results of the main cohort in order to avoid biasing the dataset. If a foundation which wants to opt in happens to be selected through the random process for inclusion in the main cohort, then it stays in the main cohort (in order to preserve the randomness). In that case, it does not pay the fee to be assessed.

This year, three foundations opted in: KPMG Foundation, the Mercers Charitable Foundation, and Masonic Charitable Foundation.

Figure 8: examples of great practice

As in previous years, the research encountered some practices that seem particularly strong. Some are cited here to inspire other foundations and to show what is possible:

  • Has a grants application wizard that guides the applicant to select the appropriate fund/grant to apply for – South Yorkshire Community Foundation.
  • Alongside diversity data for its staff and trustees, provides comparative diversity data for the local community – Leeds Community Foundation.
  • Has a grants and giving booklet that is published every year – Leicestershire and Rutland Community Foundation.
  • Has an illustration of the entire grantmaking process on one page – Lincolnshire Community Foundation.
  • Provides detailed instructions on whistleblowing, with information for each type of concern, eg misuse of funds, bullying and harassment, sexual exploitation and abuse – Leprosy Mission International.
  • Details a minicom-enabled phone number for text relay services on its contact page – Joseph Rowntree Foundation.
  • Provides details on the application criteria and process even though unsolicited applications are not accepted – KPMG Foundation.
  • Has multi-language support enabled on the UserWay accessibility widget, which can translate the entire website into Welsh or other languages – Lloyds Bank Foundation.

View from Paul Hamlyn Foundation

Generally, I would say that the FPR is proving useful when we are considering transparency. It is not just a case of compliance, but it helps our thinking about what we post and how we frame our output or our communication.

The progress of the work over the three years since it started really shows that thoughtful use of measures of this kind is a helpful addition to driving change. Lots of the practices that the FPR recommends are quite straightforward and having them laid out in this way is really useful. Something like the focus on having a website as the basis for transparency seems very obvious but before the FPR, were people talking about that? It is actually a crucial building block for transparency and accountability and accessibility for the whole sector.

The rating has provoked useful discussion for us. This year in particular it made us think about our process for managing enquiries from people who wanted to raise an issue with us about an organisation we fund. This is quite a complex subject when you start to unpack it, especially around the extent of our responsibility when there are disputes, among other things. It is an example of how a single item on a checklist can actually push a really important strategic discussion.

I think we feel this is deep work that needs to be sustained over time. Getting an A overall is great and reflects a lot of work by staff and trustees, but it risks implying we’ve reached the best position, when in fact we know there is a lot more we need to do. Therefore, we will try to keep learning and listening; we really value this as an organisation.

The rating can be a useful proxy for understanding an organisation’s progress, but we know the measures that matter most today are certain to evolve over time and as the sector shifts in its practice, we might want to be more ambitious in what good looks like too. We hope to see the FPR continue to use consultations within the sector to evolve its criteria in response to what feels most important.

Holly Donagh is director, strategic learning, insight and influence at Paul Hamlyn Foundation  

The full report will be launched at an online event starting at 2pm today, 14 March. Sign up to attend at https://www.eventbrite.co.uk/e/foundation-practice-rating-report-launch-202324-tickets-765568184047. The full report can be downloaded from www.foundationpracticerating.org.uk.

Governance & Leadership is a bi-monthly publication which helps charity leaders and trustees on their journey from good practice to best practice. Written by leading sector experts, each issue is packed with news, in-depth analysis and real-life case studies of best practice in charitable endeavour and charity governance, plus advice and guidance straight from the regulator. Find out more and subscribe today!
