Emma Bracegirdle: Charities must lead on ethical use of AI imagery

03 Dec 2025 Voices

The sector should be cautious in its use of AI-generated imagery and prioritise transparency, writes Emma Bracegirdle…


During Movember this year, the charity launched a clever campaign with Google Gemini: upload your photo, get an AI moustache, post it, and Google donates £11 to Movember. Great cause, seemingly simple activation.

Except nowhere on the campaign page did it mention what happens to your photo. According to Google’s privacy policy for the free Gemini app, uploaded photos are stored for three years, potentially reviewed by human reviewers, and used to train AI models. When I asked Movember about this, their response was essentially that the privacy policy is available if anyone wants to look.

The campaign raises money for men’s health. It also provides Google with facial data to train their AI. Only the first part is mentioned.

Sadly, Movember is far from alone. This is a sector-wide issue that we’re not talking about openly enough. We’re at a crossroads with AI images, and the charity sector has an opportunity to lead the way. We could be the sector that gets this right.

The transparency problem

I’ve spent years in the charity sector talking about dignity, consent, and not exploiting people’s images. The burden of informed consent shouldn’t be on participants to hunt down privacy policies buried in app settings.

We can support important causes and be transparent about data use. These aren’t mutually exclusive. A single line on a campaign page. A tick box. Something that lets people know their photo serves both purposes and gives them the choice to opt in.

This matters because charities are held to a different standard. We exist because we care about people and justice. When we fail at basic transparency, we undermine the trust that makes our work possible.

A deeper problem with AI-generated imagery

The transparency issue is just the surface. What most charities don’t understand about AI-generated images goes much deeper.

Not all charities are using AI-generated imagery to create exploitative content. Many are using it for abstract concepts, design elements, or illustrations where no real person is represented. That’s a different conversation.

What is concerning is the growing number of charities using AI to generate evocative, stereotypical images of people in distress. A homeless person on a street corner. A child in poverty. A person struggling with mental health. A common excuse is that they’re “protecting identities” or avoiding the complications of working with real people. That excuse isn’t good enough.

These tools are trained on massive datasets of existing images. When you generate an AI image of a “homeless person” or “child in poverty”, you’re pulling from decades of stereotypical, often racist imagery. The AI isn’t creating neutral images. It’s regurgitating and reinforcing the worst stereotypes we’ve spent years trying to move away from.

We don’t actually know what’s in these datasets. How many of the images used to train these tools were scraped without consent? How many photographers, illustrators, and creatives had their work taken without permission or payment?

No matter how careful you are with your prompts, you’re working with biased datasets. And you, as the human inputting those prompts, bring your own biases too. The combination can create images that reflect existing prejudices, even when you think you’re being thoughtful about it.

There’s also the environmental cost. AI image generation consumes large amounts of energy and carries a significant carbon footprint. For a sector that cares about climate justice, that’s worth considering. And what about the livelihoods of photographers, illustrators, and other creatives who’ve built careers telling stories with integrity? It all has an impact.

The trust crisis and the path forward

When charities use AI-generated images without disclosure, they normalise the practice. This erodes the already fragile trust between charities and the public.

We’re operating in an environment where many people don’t fully understand how charities work and already view the sector with scepticism. Adding undisclosed AI-generated content to the mix only widens that gap.

However, charities have a genuine opportunity to lead the way on ethical AI use. We could be the sector that demonstrates how to embrace this technology while staying true to our values.

If you’re using AI-generated images in your charity content without labelling them, ask yourself why. If you can’t be transparent about it, that’s probably your answer.

This is also an opportunity for charities to embrace authenticity. The organisations that will emerge strongest aren’t the ones with the most polished, algorithmically perfect content. They’re the ones building genuine connections with real people and real stories.

Charities are going to use AI. That’s the reality. If you’re going to use it, be transparent about data use, as the Movember example shows we should be. Label AI-generated images as such. Ask yourself why you’re using it, and whether it’s worth it. Or would authenticity serve the story better? The charity sector has to lead with values. Now’s our chance to prove it.

I’m conducting research into AI ethics in the charity sector because I believe we need sector-wide conversations about this and clear guidelines. I’m running a survey to understand how charities are currently thinking about AI-generated imagery and I’d welcome your input.

Emma Bracegirdle is founder of the Saltways, a UK-based video production company specialising in ethical filmmaking for charities and not-for-profits

Civil Society Voices is the place for informed opinion, and debate about the big issues affecting charities today. We’re always keen to hear from anyone, working or volunteering at a charity, who has something to say. Find out more about contributing and how to get in touch.
