The ethics of global health communication in the artificial intelligence era: avoiding poverty porn 2.0
Following widespread criticism, so-called poverty porn, which typically takes the form of photographs of people in extreme states of suffering, is now largely discouraged in contemporary ethical communication guidelines because it reduces people to decontextualised, suffering, and racialised bodies. Does this discouragement mean that such biased imagery is a thing of the past? Alarmingly, this seems not to be the case.
Generative artificial intelligence (AI) tools allow people to create images in seconds, and at far lower cost than hiring a photographer or artist. In an age of budget cuts, organisations are therefore increasingly experimenting with AI-generated imagery. In 2023, for example, WHO published an AI-generated anti-tobacco campaign depicting a suffering, hungry child of presumed African heritage in dusty clothing, standing alone in a field, with the caption “When you smoke, I starve”. In the same year, Plan International released two videos of AI-generated images depicting pregnant and abused adolescent girls forced into marriage, which gained more than 300 000 views. In 2024, the official UN YouTube channel, which has 3 million subscribers, posted a video featuring AI-generated avatars re-enacting testimonies from survivors of conflict-related sexual violence, accompanied by the hashtag “#EndRapeInWar”. These same organisations would probably not create such depictions of actual people because of their internal ethical policies. Is the artificiality of the image thus being used as a justification for reintroducing much-contested visual tropes under a new guise?
A dispassionate argument grounded in marketing strategy could be made that these images are ethical: they supposedly protect the anonymity of real, suffering people while still yielding the sought-after engagement.
A similar use of AI imagery for communication seems to be rippling across the global health industry. From social media platforms such as LinkedIn and X (formerly Twitter), we collected a sample of more than 100 AI-generated images posted between Jan 1 and July 1, 2025, by individuals and small-scale organisations, often based in low-income and middle-income countries. Many of these images replicate the emotional intensity and visual grammar of poverty porn and dated fundraising imagery.
