The growing use of deceptive AI-generated images raises unprecedented ethical issues. Humanitarian agencies, in search of visual impact, face a *new scourge*: the biased representation of poverty. These synthetic visuals amplify stereotypes and degrade the image of vulnerable populations.
Communications about poverty are becoming playgrounds for a *distorted narrative* that undermines the dignity of the people portrayed. The emergence of this phenomenon prompts deep reflection on the ethics of imagery and the need for a *respectful representation* of victims.
Deceptive images created by AI
In the digital age, the proliferation of AI-generated images raises alarming ethical questions. Humanitarian agencies face a flood of images depicting poverty, often sourced from AI generators, with profound implications. According to Noah Arnold of Fairpicture, a Swiss organization, some NGOs actively use these visuals while others are still experimenting with the tools.
The emergence of “poverty porn 2.0”
This new form of visual representation, described by some as “poverty porn 2.0”, does little more than reproduce stereotypes. These images, typically featuring children with empty plates and cracked earth, stand in the way of a dignified portrayal of lived realities. Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp, has identified more than a hundred AI-generated images used by NGOs in campaigns against hunger or sexual violence.
A growing use of AI-generated visuals
The use of AI images is increasing, driven largely by budget pressures and concerns over consent. NGOs are turning to these synthetic alternatives, which they see as cheaper and free of the complications of obtaining permission from the individuals represented. Budget cuts in the United States have exacerbated this trend, encouraging the substitution of authentic photographs with virtual creations.
Stereotypes reinforced by AI
Stock photo sites such as Adobe Stock and Freepik are filled with these clichés, and many of the images perpetuate biased racial representations. Titles such as “Photo-realistic child in a refugee camp” or “Caucasian volunteer consulting black children in Africa” reveal a stereotypical narrative, often rooted in prejudice. Alenichev questions why such images are published at all, arguing that they offer a distorted view of reality.
The responsibility of platforms
Leaders of platforms like Freepik argue that responsibility for how these images are used lies with the media that consume them. Joaquín Abela, CEO of Freepik, explains that the generated photos come from the platform's global user community, whose members are compensated when clients license their works. Abela nonetheless acknowledges the difficulty of countering biases and market demand.
Notable examples in the humanitarian sector
Communication campaigns by major charities began incorporating AI visuals early on. In 2023, the Dutch branch of the British charity Plan International released a video against child marriage that included generated images of a girl with a black eye, an initiative many observers found shocking.
Risks and ethical challenges
In 2022, a United Nations video used AI-generated reconstructions of sexual violence in conflict. Following negative reactions, the video was withdrawn; a spokesperson stated that this type of content could compromise the integrity of information by mixing real footage with generated material. In the face of these issues, the sustainability of charitable organizations’ commitment to supporting victims remains uncertain.
A reminder about the issues of ethical imaging
The growing concerns around AI-generated images are part of a broader debate about the ethical representation of poverty and violence. Kate Kardol, a communications consultant for NGOs, fears that these new practices compromise the dignity of the individuals portrayed. Debates around ethical representation find new resonance with the advent of advanced technologies.
The repercussions of this practice are significant. Biased images could feed back into future machine learning models, further entrenching existing prejudices in society. This cycle, from image creation to dissemination, fuels growing mistrust toward content presented as factual.
Faced with these challenges, Plan International has established a guidance framework discouraging the use of AI to represent children. This initiative aims to preserve both the privacy and dignity of individuals in vulnerable situations. The question of ethics in technology use remains at the forefront of concerns.
Frequently Asked Questions
What are the concerns related to the use of AI-generated images to illustrate poverty?
Concerns include the reproduction of stereotypes, the absence of consent from the represented individuals, and the risk of increasing stigma related to poverty and violence.
Why do some humanitarian agencies choose to use AI-generated images of poverty?
These agencies may be motivated by budget cuts, the lower cost of synthetic images, and the ease of use without needing consent from the subjects.
How do AI-generated images create biases in representations of poverty?
These images tend to replicate stereotyped and caricatured visuals that reinforce existing biases, which can harm public perception of vulnerable populations.
What impact do these images have on the visibility of the true issues related to poverty?
These AI-generated images may distort reality and divert attention from real issues, making authentic empathy difficult and undermining awareness of the root causes of poverty.
How can NGOs use these technologies while respecting ethics?
NGOs should adopt strict guidelines to ensure that the images used are respectful, avoid stereotypical representations, and include authentic testimonials and visuals when possible.
Can AI-generated images be used for purposes other than those related to poverty?
Yes, these images can be used in various awareness campaigns, but it is crucial that their use is sensitive to context and does not reproduce harmful stereotypes.
What are the risks of misinformation associated with the use of AI images in communications about poverty?
There is a risk that these images perpetuate distorted narratives about poverty, thereby creating misunderstandings among the public and affecting policies and funding decisions surrounding humanitarian aid.
How do intellectuals and researchers react to the use of AI-generated images?
Many researchers express concern about their use, labeling it “poverty porn” that dehumanizes the subjects portrayed and compromises the integrity of humanitarian communications.