Leaked Google Memo Shows Fear of Losing the AI Race, But Not to the Foe You'd Think

In a leaked internal memo, a Google exec expressed serious fears about losing the ongoing AI arms race — but the competition that the exec fears most, according to an NBC report, might be a little unexpected.

"We've done a lot of looking over our shoulders at OpenAI," reads the memo, which a Google spokesperson confirmed as authentic to NBC but cautioned reflected only the views of one person at the company. "But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI."

"I'm talking, of course, about open source. Plainly put, they are lapping us," it continued. "While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly."

In other words, according to this exec, though Google and its Silicon Valley competitors like Microsoft-slash-OpenAI and Meta still hold a narrow edge, open-source models are quickly catching up.

And that, per the memo, is reason for concern.

The Google exec makes a pretty good case. Aided in large part by a major leak of Meta's advanced language model, LLaMA, small and scrappy open-source projects like AutoGPT have made major strides in recent months.

And to that end, it's one thing to have visible, tangible competition like Meta and Microsoft-slash-OpenAI. Fighting an arms race against what's effectively the open web, where users can learn and borrow from each other and tailor development to their personal needs, is another beast entirely.

"I don't think I need something as powerful as GPT-4 for a lot of things that I want to do," Simon Willison, a programmer, tech analyst, and blogger, told NBC.

"The open question I have right now is, how small can the model be while still being useful?" he added. "That's something which the open source community is figuring out really, really quickly."

"Largely, I think people are trying to do good with these things, make people more productive or are making experiences better," added Mark Riedl, a computer scientist and professor at Georgia Tech. "You don't want a monopoly, or even a small set of companies kind of controlling everything. And I think you’ll see a greater level of creativity by putting these tools into the hands of more people."

But while there might be some merits to a decentralized approach to AI, there are also some dangers. Beyond the more philosophical ethical questions, AI poses a lot of very real threats, and in the hands of a theoretically unlimited pool of bad actors, systems built through open-source channels may well do a lot of harm.

"It really now becomes the question of what are people going to use these things for," Riedl told NBC. "There's really no restrictions on making specialized versions of models that are designed specifically to create toxic material, or misinformation, or to spread hate on the internet."

More on Google: Google Staff Warned Its AI Was a "Pathological Liar" Before They Released It Anyway