AI News

Google faces continued backlash over 'Nano Banana Pro' bias

The model's tendency to generate 'white savior' imagery prompts criticism of safety filters.

Olivia Sharp
Google's Nano Banana Pro model faces criticism for generating biased imagery and fake charity logos, highlighting ongoing alignment challenges in generative AI.

Alignment failures persist

Fallout continued Friday regarding Google’s new image generation model, Nano Banana Pro (Gemini 3 Pro Image). Reports from The Guardian and user testing highlighted a persistent bias in the model’s output, specifically its tendency to generate "white savior" imagery when prompted with humanitarian or aid-related scenarios. Users also documented instances where the model hallucinated the logos of real-world charities onto fabricated images of poverty.

Technical trade-offs

The controversy illustrates the ongoing struggle to balance high-fidelity generation with safety alignment.

* Over-correction: The model appears to rely on blunt safety filters that struggle with nuance, leading …

