AI News

Google faces backlash over 'Nano Banana Pro' image bias

Users report the model generates stereotypical 'white savior' imagery in humanitarian contexts.

Olivia Sharp
Google's new Nano Banana Pro image model draws criticism for generating stereotypical "white savior" images and hallucinating charity logos.

Ethical safeguards tested

Google’s newly released image generation model, Nano Banana Pro (internally Gemini 3 Pro Image), faced significant controversy on Thursday following reports of biased output. The Guardian reported that when prompted to depict humanitarian or aid scenarios, the model frequently generated images fitting "white savior" tropes, casting white subjects as rescuers in stereotypical developing-world settings.

Furthermore, users discovered that the model was hallucinating the logos of real-world charities onto these fabricated images, raising potential legal and reputational risks for the organizations involved.

Technical friction

The incident highlights the persistent challenge of "alignment" in generative AI. …


