Google is under attack in the wake of its 'woke' AI disaster

  • Since day one, Google has been on a mission to organize the world's information.

  • Concern that its AI could disrupt that mission is mounting.

  • Mistakes made by its AI model Gemini have raised fears of censorship.

Since its inception, Google has had a mission statement that is now practically enshrined as lore: "To organize the world's information and make it universally accessible and useful."

Google appears to be as committed to that mission statement today as it was in 1998 when cofounders Larry Page and Sergey Brin were working out of a garage in Menlo Park, California.

But as Google hurtles into a new AI era, fears are growing that it could disrupt that core mission with its rollout of the technology.

Critics say Gemini, its AI chatbot, risks suppressing information by being too "woke."

Google's AI troubles

Google controls more than 90% of the search market, giving it dominance over the world's information flow online.

As AI becomes an increasingly important tool in helping users find information, the company plays an outsize role in ensuring facts are surfaced accurately.

But there are mounting concerns that Google's AI model has been neutered in a way that leads it to generate inaccuracies and withhold information.

The first major signs of this emerged last week, when Gemini users reported issues with its image-generation feature, which failed to accurately depict some of the scenes requested of it.

One user said he asked Gemini to generate images of America's founding fathers. He said it produced "historically inaccurate" images of 18th-century leaders, ones that showcased a false sense of gender and ethnic diversity. Google has suspended the feature while it works on a fix.

The issues aren't just limited to image generation, however.

As my colleague Peter Kafka notes, Gemini has struggled to directly answer questions about whether Adolf Hitler or Elon Musk has caused more harm to society. Gemini responded that Musk's tweets are "insensitive and harmful" while "Hitler's actions led to the deaths of millions of people," according to a post on X from Nate Silver, the data journalist who founded FiveThirtyEight.

David Sacks, the cofounder of Craft Ventures and cohost of the "All-In" podcast, blames Google's company culture for Gemini's issues.

"The original mission was to index all the world's information. Now they are suppressing information. The culture is the problem," he said on his podcast last week.

Critics have said that company culture can play a role in how AI models are built.

Models like Gemini typically absorb the biases of the humans and the data used to train them. Those biases can then determine AI models' perceptions of cultural issues such as race and gender.

Other companies' AI bots have had similar problems. Sam Altman, the CEO of OpenAI, acknowledged last year that ChatGPT "has shortcomings around bias" after reports that it was generating racist and sexist responses to user prompts.

Google has been at the heart of the debate about how these biases should be addressed. Its rollout of AI, slower than that of rivals, reflected a work culture focused on testing products before releasing them.

But as last week's Gemini saga showed, that process can lead to situations where accurate information is withheld.

"People are (rightly) incensed at Google censorship/bias," Bilal Zuberi, a general partner at Lux Capital, wrote in an X post on Sunday. "Doesn't take a genius to realize such biases can go in all sorts of directions, and can hurt a lot of people along the way."

Brad Gerstner, the founder of Altimeter Capital — a tech investment firm that has a stake in Google rival Microsoft — also described the problem as a "cultural mess." Musk has labeled it a "woke bureaucratic blob."

Google's explanation for why some of Gemini's issues occurred could give weight to some of the criticisms leveled at it.

In a blog post published Friday, Prabhakar Raghavan, a senior vice president at the company, wrote that some of the images Gemini generated were "inaccurate or even offensive."

This happened, he said, because the model was tuned to avoid the mistakes existing AI image generators have made, such as "creating violent or sexually explicit images, or depictions of real people." But in that tuning process, Gemini sometimes overcorrected.

Raghavan added that Gemini also became "way more cautious" than had been intended.

If Google wants to stay true to its mission statement, these are mistakes it simply can't afford to make.
