Google to fix the Gemini model that’s been generating pics of “diverse” Nazis, but open source AI models are the only real solution: AI Eye.
After days of getting dragged online over its Gemini model generating wildly inaccurate pictures of racially diverse Nazis and black medieval English kings, Google has announced it will partially address the issue.
Google Gemini Experiences product lead Jack Krawczyk tweeted a few hours ago: “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.”
Social media platform X has been flooded with countless examples of Gemini producing images with diversity dialed up to the maximum: black Roman emperors, Native American rabbis, Albert Einstein as a small Indian woman, Google’s Asian founders Larry Pang and Sergey Bing, a diverse Mount Rushmore, President Arabian Lincoln, a female Apollo 11 crew, and a Hindu woman tucking into a beef steak to represent a Bitcoiner.
It also refuses to create pictures of Caucasians (which it suggests would be harmful and offensive), churches in San Francisco (due to the sensitivities of the indigenous Ohlone people) or images of Tiananmen Square in 1989 (when the Chinese government brutally crushed pro-democracy protests). One Google engineer posted in response to the deluge of bad PR that he’s never been so embarrassed to work for a company.
To be fair, Google is trying to address a genuine problem here, as diffusion models often fail to produce even real-world levels of diversity (that is, they produce too many pics of white middle-class people). But rather than retrain the model, Google has massively overcorrected with its aggressive hidden system prompt and inadvertently created a parody of an AI so borked by ideology that it’s practically useless.