
Efficiency or ethics? 

A new report from the University of East Anglia (UEA), UK, warns that the reputational risks charities run by using AI-generated images in their campaigns are more complex than many organisations realise. 


It comes as humanitarian budgets tighten and production pressures increase, with many charities and NGOs turning to AI, tempted by its promise of speed, cost efficiency and creative flexibility. 

The study suggested the charity and development sector’s ‘high-tech shortcut’ to empathy is backfiring. While AI offers a cheaper, faster way to produce campaign visuals, it risks breaking the fundamental bond of trust between charities and the public, say the authors. 

The report, Artificial Authenticity, analysed 171 AI-generated images and more than 400 public comments surrounding campaigns from 17 organisations, including Amnesty International, Plan International, the World Health Organization (WHO) and WWF.  

The findings revealed a worrying shift: when AI images are used, the humanitarian cause effectively disappears from the conversation. The researchers found the introduction of AI fundamentally reshapes how the public engages with charity.  

Co-author David Girling, from UEA’s School of Global Development, said: “Charities exist because people care about other people. The moment when audiences start questioning whether what they are seeing is real, the emotional connection that drives support is put at risk.” 

He added: “The debate about the ethics of AI is increasingly polarised. AI is not inherently wrong, but if it begins to overshadow the human story at the heart of charitable work, organisations could lose far more in trust than they gain in efficiency.” 

The study found that nearly 70 per cent of the AI images analysed were designed to appear photorealistic. Poverty was the dominant theme, accounting for around a third of the images (51 of 171) and often featuring children, followed by environment-themed (35) and human rights-themed (32) images. Moreover, while 85 per cent of the images were transparently captioned as AI-generated, this disclosure did not shield the cause or the organisations from backlash.  

In undisclosed campaigns, the audience adopted an ‘investigative tone,’ according to the study. Instead of evaluating the charity's work, commenters focused entirely on whether the images were artificial or not. 

The report also found significant public backlash against ‘message-medium misalignment’. For example, environmental organisations such as WWF Denmark faced criticism for using energy-intensive AI tools to promote sustainability – an irony not lost on a climate-conscious public who labelled the move ‘ecocidal’. 

For some organisations, mock visuals are seen as a way to balance storytelling with safeguarding and dignity: AI-generated imagery could spare people who might otherwise be re-traumatised by being photographed or filmed for campaign purposes. However, the study showed that donors often reject these ‘fake’ images, prioritising their own need for an ‘authentic witness’ over the beneficiary’s right to privacy. 

The researchers found the public response was far from simple. In some cases, people welcomed AI as a way to protect vulnerable individuals from exploitation. In others, they criticised it as a distraction from real solutions, particularly in emotionally sensitive campaigns such as cancer or famine. 

When AI is used, discussion often shifts away from the cause and towards debates about technology and trust. Of the comments analysed, 141 focused on AI ethics and authenticity concerns, not the charitable cause; 122 critiqued technical execution and visual quality; only 80 (less than 20 per cent) actually engaged with the humanitarian issue itself. 

Co-author Deborah Adesina, a former Master’s student in the School of Global Development and now a media, communications and development consultant, stated: “Ultimately, the future of charity storytelling will not hinge on technological capability alone. It will depend on whether organisations can maintain legitimacy, transparency and moral coherence in an environment where audiences are increasingly media literate and increasingly sceptical.”

“For communications teams who opt to include generative AI in their workflow, proper training in ethical prompt engineering will be crucial to avoid reputational harm and unintended bias.” 

The full report and the database of AI-generated charity images are available here.
