Google says Gemini AI glitches were product of effort to address 'traps'

Google apologized Friday for a series of public mishaps by its artificial intelligence tool Gemini, which was denounced by some users this week after it generated historically inaccurate images such as nonwhite Nazi soldiers.

The company said in a blog post that it was still working on a fix for the app and was continuing to temporarily block the creation of new images of people until a solution is in place.

“It’s clear that this feature missed the mark,” Prabhakar Raghavan, a senior vice president at Google, wrote in the blog post.

“Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn’t work well,” he wrote.

Gemini is primarily a conversational AI app competing with OpenAI’s ChatGPT to explore the possibilities of generative AI, including using text prompts to create images. Google, under pressure from investors and others, had worked on similar ideas internally for years and released its app, initially called Bard, after ChatGPT unexpectedly took off in popularity starting in late 2022.

But images from Gemini became the subject of mockery on social media after people posted examples of ahistorical images. They included illustrations of World War II German soldiers who were Black or Asian, despite the racist ideology of the Nazi military and government. The app also created images of nonwhite American Founding Fathers, when in reality they were all white men.

The criticism came especially from conservative figures who accused Google of embracing political correctness. Tech figures including Elon Musk, who has a competing AI chatbot as part of his app X, have singled out individual Google employees for criticism.

Google acknowledged Wednesday that Gemini was producing inaccurate images, and a day later it paused the generation of images that include people.

Google said Friday the intent had been to avoid falling into “some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images.” It also said that Gemini is targeted to a worldwide audience, so the diversity of people depicted is important.

But prompts for a specific type of person or people in a particular historical context “should absolutely get a response that accurately reflects what you ask for,” Google said.

“Over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive,” the company said.

Google did not give a timeline for turning back on the ability to generate images of people, and it said the process of building a fix “will include extensive testing.”
