Jonathan Cook: Is computer analysis the future of assessing fundraising applications?

05 Aug 2020 Voices

Jonathan Cook explains how the Martin Lewis Coronavirus Emergency Fund used keyword analysis to speed up the review of applications, and why this technique needs refining

In my 20 years as a fundraiser, I have never seen anything hit the UK in the way Covid-19 has. The sheer volume of people and organisations needing financial help and needing that help urgently highlighted an issue with the way we have traditionally awarded grant funding.

The standard model – periodic application windows, long-form written applications, panel discussions to review them, in-person pitches and site visits – simply couldn’t work in March and April of this year.

Speed was of the essence, but our traditional processes are designed for in-depth analysis and review. What if – like Martin Lewis’ Coronavirus Emergency Fund – you receive 7,000 applications in six days, and your boss is passionate about getting the first batch of grants out within seven days of the fund opening? How do you select the right projects to fund, and quickly?

Using keyword analysis to rank applications

One step was to enlist the help of a small army of volunteers (I’ll come onto those amazing people later). The other was to use computer analysis to sift through the applications in the first instance.

Over the past few years, I’ve become really fascinated by text and word analysis and how to sift through mountains of free text to discover people’s motivations for giving. I’ve also been studying how organisations can use the free text data at their disposal to make decisions about their marketing and fundraising. So I jumped at the opportunity to try a bit of this type of analysis to help prioritise the applications we should personally assess first.

With 7,000 applications and an urgent need to get funding out of the door, we initially used word analysis in Excel to scan through all of the applications. The idea was that this would highlight those that featured key words and indicate which met our criteria of alleviating poverty.

We looked for key phrases such as “free school meals” or “emergency food” which we felt would indicate that the project was addressing the fund’s criteria. We also looked at the government’s Index of Multiple Deprivation to see if a project was in a more deprived geographic area.

This allowed us to give every application a numeric score, which in turn allowed us to rank them and then assess them one by one with a human eye, starting with the highest-scoring project.
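The scoring idea can be sketched in code. The fund actually did this with word analysis in Excel, so the snippet below is only an illustrative equivalent: the keyword list, the weights, and the deprivation bonus are all hypothetical, not the fund’s actual model.

```python
# Hypothetical sketch of keyword-based scoring of grant applications.
# The real work was done in Excel; phrases and weights here are illustrative.

KEYWORDS = {
    "free school meals": 3,
    "emergency food": 3,
    "food bank": 2,
    "poverty": 1,
}

def score_application(text: str, imd_decile: int) -> int:
    """Score one application: keyword hits plus a deprivation bonus.

    imd_decile: 1 = most deprived area, 10 = least deprived
    (per the government's Index of Multiple Deprivation).
    """
    text = text.lower()
    score = sum(weight for phrase, weight in KEYWORDS.items() if phrase in text)
    # Boost projects located in more deprived geographic areas.
    score += max(0, 6 - imd_decile)
    return score

applications = [
    ("A", "We deliver emergency food parcels and free school meals.", 2),
    ("B", "We run a community choir for local residents.", 8),
]

# Rank highest-scoring first, then review each with a human eye.
ranked = sorted(applications, key=lambda a: score_application(a[1], a[2]), reverse=True)
print([name for name, _, _ in ranked])  # application A outranks B
```

The ranking only decides the order of human review, not the funding decision itself – which is exactly why, as described below, an application that matches the criteria without using the expected phrases can score poorly.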

It was an interesting technique that gave us focus in the first few weeks, while we were trying to get money out quickly. But ultimately some applications that fitted our criteria very well scored very low under this system, so we decided to have volunteers assess every application. The approach has the potential to work in the future, but the process needs refining.

Algorithms and discrimination

One question this has brought to mind: if more organisations use these processes in the future, how can we ensure that computer analysis and algorithms do not discriminate against minority communities?

At the Martin Lewis fund, we tracked the different religious and ethnic groups, and the geographical areas, we were funding to ensure we were not neglecting any group or any one area of the UK. Throughout the process we tried to ensure a balanced distribution of grants across these groupings.

Computer algorithms do offer an opportunity to reduce unconscious bias in the distribution of funding. But they are only as unbiased as the people who build them, so using an algorithm alone cannot solve this entirely. There was a report in recent weeks about how machine learning, when used to assess job applications, can inadvertently become biased against women and minority communities.

This year, I have met with and sought advice from a number of people from different minority groups to try to work out how any use of this kind of analysis in fundraising should be applied. What words can we use when building data models? Who would be the best people to help with the analysis? And what can we as a sector do to influence this method of assessment from the very start?

If we use it, we need to get it right

There is an immense amount of good that can come from this technique. When speed is of the essence, it can help with the assessment of large numbers of applications and reduce the time it takes to get money out of the door.

But these techniques are in their infancy and mistakes will happen. Decent applications could be overlooked, and fundraisers may learn which key words and phrases to use to game the process (in the way web content developers try to beat the Google algorithm to appear at the top of search results). It was for this reason that the use of volunteers at the Martin Lewis emergency fund was so essential.

If more and more organisations start to use these processes, we as a sector have a responsibility to ensure they are used well and transparently.

I would encourage the sector to think about this issue and to take the steps needed to ensure that, if this technology is used more widely by fundraisers or funders, it incorporates the views of all communities and actively engages with them when building these models.

This is a call to people from across the sector – of all genders, races and communities – to get involved in shaping this right now. The technology is very basic today, but in the future it will be much more advanced, so we need to make sure we get it right from the outset.

Jonathan Cook is a fundraising consultant and charity lead for the Martin Lewis Coronavirus Emergency Fund.

