Google faces backlash over 'Nano Banana Pro' image bias
Users report the model generates stereotypical 'white savior' imagery in humanitarian contexts.
Ethical safeguards tested
Google’s newly released image generation model, Nano Banana Pro (internally Gemini 3 Pro Image), faced significant controversy on Thursday following reports of biased output. The Guardian reported that the model, when prompted to depict humanitarian or aid scenarios, frequently generated images fitting "white savior" tropes, depicting white subjects as rescuers in stereotypical developing world settings.
Furthermore, users discovered that the model was hallucinating the logos of real-world charities onto these fabricated images, raising potential legal and reputational issues for those organizations.
Technical friction
The incident highlights the persistent challenge of "alignment" in generative AI. …