In-depth: Ethical considerations for charities using AI

08 Nov 2023

As the use of artificial intelligence becomes more prevalent in the charity sector, Civil Society looks at the ethical considerations for charities…

Artificial intelligence (AI) has entered the mainstream this year, with King Charles mentioning it at the State Opening of Parliament yesterday, following the prime minister’s recent AI summit at Bletchley Park.

With the recent rise of ChatGPT, conversations around how organisations make use of AI are rising up the agenda across all sectors in the UK and worldwide.

Indeed, the government last week published a report on the capabilities and risks of AI. A key area of focus is generative AI development, which the report says has the potential to bring significant global benefits but will also increase risks to safety and security.

But while many experts are keen to highlight the technology’s opportunities, there are also important considerations around issues such as bias, misinformation, and transparency.

Though there are ethical elements to scrutinise, many charities are already using AI in their work. Some 27% of charities polled for the Charity Digital Skills Report 2023 said they were currently using it in their day-to-day operations, while a further 26% said they were planning to do so.

Speaking at this year’s Charity Law Association annual conference, chair of the Charity Commission Orlando Fraser said the regulator was thinking about benefits and risks for trustees concerning AI.

“We are also in the early stages of work to better understand how developments in artificial intelligence, AI, will impact charities. As a regulator, we will never be, nor is it our mission to be, the world’s leading experts on ever-changing applications of AI, but we must develop an ongoing understanding of the benefits and risks that using AI may present to charities,” he said.

“This will allow us to ensure that in this as in other areas of charity activity, trustees are meeting their legal duties and requirements. It is already clear that AI presents particularly interesting questions around trustee responsibility and accountability. And so we are working actively to better understand the issues.”

Generative AI has ‘brought more awareness’

In a speech at Charity Finance Summit 2023, James Plunkett, chief practices officer at innovation foundation Nesta, encouraged charities to adopt AI faster given the sector was “very slow” to embrace the “first wave of the internet and innovation” and missed out on potential benefits as a result.

Sector consultant Zoe Amar says “the majority of charities are at early stages with AI”, with 73% feeling unprepared to respond to its opportunities and challenges.

“As such, the questions we are getting from charities reflect their stage of adoption. Most of these conversations centre on how to use the tools, what other charities are doing with AI and what happens with their data when they put it into tools such as ChatGPT. Some charities are looking at how they can adopt AI in an inclusive way,” she says.

“I haven't heard as many charities exploring ethical considerations and would love to see more taking a values-first approach to adopting AI, and using their values in key decision-making criteria for why and how they use these tools. 

“In that context, it is even more important now to do due diligence on the values and ethics of your tech suppliers and to consider if their use of AI, and the products and services they offer you, align with your values as a charity.”

Rhodri Davies, director of Why Philanthropy Matters, agrees that it has not so much been ethical considerations holding charities back as a “general concern that they didn't have the resources or the knowledge” to engage with AI.

He says this has probably changed quite a lot within the last year, because generative AI has made it clearer how at least some of these tools can be used.

“As a result, that has brought more awareness of some of the ethical issues,” he says.

‘Charities need to ask about the biases inherent’

Mark Suzman, chief executive of the Bill & Melinda Gates Foundation, wrote in a recent blog that advances in AI will likely be “transformative across societies and economies, with the potential to fundamentally alter the way people communicate, work, learn, and improve their well-being”.

Nonetheless, he warned that the potential gains will only be realised if the technology is implemented with beneficiaries participating in its development.

“It is vital to approach the potential uses of AI with care and caution, particularly from the perspective of populations who have historically been left behind in realising the benefits of innovations.”

Amar says there are many ethical questions for charities to consider when using AI.

“Charities need to ask about the biases inherent in these tools. There are famous examples of ChatGPT assuming that doctors are male and nurses are female. I’ve encountered a similar issue when asking Midjourney to create images and I was surprised at the stereotypical images it created. 

“Staff need to be aware of this when using generative AI tools and to question whether they should use the outputs if they reflect biases.

“The first step is educating your team about the possibility of bias and the consequences where it could arise.

“For example, if your charity starts using an AI tool designed to help with recruitment, you need to ask about the bias that may be built into its parameters and the impact this could have on creating a less inclusive pool of candidates, and how to mitigate that.”
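
To make the risk concrete, here is a minimal sketch in Python of the kind of check a charity might run on a screening tool’s output: comparing shortlisting rates across groups against the common “four-fifths” rule of thumb for adverse impact. The candidate data and threshold are illustrative assumptions, not details of any tool mentioned in this article.

from collections import defaultdict

# Hypothetical (group, shortlisted?) outcomes from an AI screening tool
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, shortlisted = defaultdict(int), defaultdict(int)
for group, selected in decisions:
    totals[group] += 1
    shortlisted[group] += selected  # True counts as 1

rates = {g: shortlisted[g] / totals[g] for g in totals}
benchmark = max(rates.values())
for group, rate in rates.items():
    # Flag any group shortlisted at under 80% of the best-treated group's rate
    if rate < 0.8 * benchmark:
        print(f"Possible adverse impact: {group} shortlisted at {rate:.0%} "
              f"vs benchmark {benchmark:.0%}")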

Protecting public trust

Davies says charities need to consider anything that could potentially undermine public trust in them. 

One particular area in which there may be ethical issues to consider, Davies says, is around generative AI for images and, in the future, for video too.

He says there is an interesting question about what the potential risks are in terms of trust and authenticity in an online environment. 

“There’s an argument for using these tools to generate imagery because it’s cheaper and easier”, he says, but people might ask: “Why couldn’t you find an image of this actually happening rather than just generating one?”. 

“The use of AI generated imagery potentially reduces people’s willingness to give if they’re aware of it, because it does prompt that sense of, hang on a minute, is somebody pulling the wool over my eyes? Even if it’s done entirely in open and transparent ways, that might become counterproductive, which is more of a practical rather than ethical issue.

“One of the most valuable things that charities have is the trust of people, so they need to think long and hard about doing anything that can potentially undermine that. It is difficult when the tools become widely available and everyone else is using them to not get left behind but maybe there’s an additional reason for charities to be careful about thinking through what the consequences might be.

“If misinformation becomes more commonplace as a result of deep fakes and generated imagery and things, then charities need to be careful in terms of not wading into things.

“I do think there's an issue around using a tool like ChatGPT naively, because I think hopefully most people are aware that you need to have a pretty good system of oversight to make sure that you're not regurgitating false information, or anything like that.”

Amar says charities need to allow additional time for checking any outputs created by AI for accuracy. 

If you are using certain tools to write your marketing content, “you need to allow extra time for this in your content planning, and to include it in your social media policy for staff, volunteers and trustees”, she says.

Amar says there are also questions about the supply chain, such as reports that OpenAI used Kenyan workers paid less than $2 an hour to make ChatGPT less toxic. There are also ethical issues around the amount of energy the technology uses.

“There is also a question about how open charities need to be about their use of AI generated content,” says Amar. “For example, if you use it to write a funding bid, should you say that?”

Case study: Parkinson’s UK 

In 2020, Parkinson’s UK ran a test to see whether using machine learning to guide its data selections for cash appeal mailings could increase the revenue generated compared with its existing method. It worked with a company called Dataro to build a model to help achieve this.

Using Dataro’s AI-driven supporter predictions, the charity increased revenue in its September 2020 appeal by over £15,000. It also reduced its mailing list size, bringing costs down.

“Previous approaches were no longer up to standard, resulting in missed gifts and increased costs,” a spokesperson for the charity says.

“By implementing better data selection methods like AI-driven scoring, the charity can automatically generate lists of warm supporters. These build better connections with donors and ultimately raise more money for their cause.”
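
Dataro’s models are proprietary, so the details are not public, but the general propensity-scoring approach the charity describes can be sketched in a few lines of Python. The features, training data and selection threshold below are illustrative assumptions only.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed features per supporter: gifts in last 12 months,
# months since last gift, charity membership (0/1)
X_train = np.array([
    [3, 2, 1],
    [0, 30, 0],
    [1, 6, 1],
    [0, 18, 0],
])
y_train = np.array([1, 0, 1, 0])  # 1 = gave to a previous cash appeal

model = LogisticRegression().fit(X_train, y_train)

# Score the current supporter base and mail only the warmest segment,
# shrinking the list (and cost) relative to mailing everyone
supporters = np.array([[2, 3, 1], [0, 24, 0], [1, 12, 1]])
scores = model.predict_proba(supporters)[:, 1]
mailing_list = [i for i, score in enumerate(scores) if score >= 0.5]
print("Supporter rows selected for the appeal:", mailing_list)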

This model has helped Parkinson’s UK to adapt to changes in how supporters are engaging with the charity, it says, and “in turn, it allows us to send supporters communications that may be of more interest to them”.

The charity is also starting to use its own version of ChatGPT, Pbot, which was created using the Azure OpenAI Service from Microsoft.

“It gives us the full benefit of OpenAI’s technology but keeps all our sensitive information within our own system to protect confidentiality and comply with data protection regulations.”
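
Pbot’s implementation has not been published, but calling a private Azure OpenAI deployment from Python typically looks something like the sketch below. The endpoint, environment variables and deployment name are hypothetical; the point is that requests go to the organisation’s own Azure resource rather than the public ChatGPT service.

import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    # The organisation's own Azure resource, so prompts and data stay
    # within its tenant (hypothetical configuration)
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="pbot-gpt-deployment",  # hypothetical deployment name
    messages=[
        {"role": "system", "content": "You are a helpful internal assistant."},
        {"role": "user", "content": "Summarise our volunteer FAQ."},
    ],
)
print(response.choices[0].message.content)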

When signing up to use Dataro's services, the charity looked over the terms and conditions to ensure that these were compliant with GDPR, and also in line with the charity's privacy policy.

“The Dataro model only uses existing supporter data to make its predictions. For example, past donation history or whether someone is a member of the charity. We do not gather any outside information. 

“Whilst Dataro's model makes recommendations on who to select, the actual data selections are still done by our own in-house data operations team.

“We work with our supporter care staff to monitor the feedback received about supporter communications. So if they receive calls from supporters on a similar theme, we can use these insights to change how we select our data or put additional exclusions and safeguards in place.”
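
That human-in-the-loop step might look something like the following sketch, in which the model only recommends and in-house staff apply exclusions before any mailing goes out. The supporter IDs and exclusion reasons are invented for illustration.

# Recommendations produced by the scoring model (illustrative IDs)
model_recommendations = ["S001", "S002", "S003", "S004"]

# Safeguards maintained by the data operations team, e.g. supporters
# who opted out or whose feedback prompted a supporter care flag
exclusions = {
    "S002": "opted out of fundraising mail",
    "S004": "supporter care flag following feedback call",
}

final_selection = [s for s in model_recommendations if s not in exclusions]
print("Final mailing list:", final_selection)
for supporter, reason in exclusions.items():
    print(f"Excluded {supporter}: {reason}")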

‘We need to prepare for this now’

Ultimately, charities have learnt that digital advancements can be of huge benefit to the people they work with, and AI will also be key in the future. Nonetheless, there are ethical questions for charities to consider, and blanket approaches may not work for all charities.

Amar says: “The sector is at early stages with AI but it is inevitable that these tools will be used more as society adopts them further.

“The roll-out of Microsoft’s Copilot tool, and the other tools that will come on stream over the next year, will be game-changing and we need to prepare for this now, by getting our skills, data, policies and information governance in order in our charities.

“Yet above all we need to be asking how we can adopt these tools safely and responsibly in a way that doesn’t harm our staff or our beneficiaries, and that requires careful thought from leaders.”
