Sam Altman’s Scarlett Johansson Blunder Just Made AI a Harder Sell in DC


Sam Altman’s convincing turn as a Silicon Valley visionary just ran up against one of the few forces in American life still powerful enough to rival it: a rich, famous and beautiful movie star.

As the CEO of OpenAI, the hottest company in the hottest sector of technology, Altman wasn’t just a typical West Coast wunderkind. He became central to Washington politics and the national conversation almost immediately after ChatGPT caught fire with the public.

It seemed, and still seems, like his company was poised to change the world, and his appealing personal modesty contrasted favorably with Elon Musk’s culture-war pugilism or the icy superiority of Social Network-era Mark Zuckerberg. He was a welcome, cooperative emissary from Silicon Valley who arrived just in time for Washington to prepare for yet another wave of tech-driven social transformation.

Suddenly, he’s kind of a bad guy.

On Monday night, Scarlett Johansson — who famously voiced an AI in the movie Her — alleged that OpenAI had appropriated her voice without permission for a new AI assistant tool. Altman and OpenAI say the voice belonged to a different actress, but that didn’t stop a massive backlash that threatens to undo the goodwill they’ve painstakingly worked to build since the advent of ChatGPT. Tech analyst Brian Merchant posted on X that the move represents Altman’s vision of AI as “the ultimate engine of entitlement.” At Slate, Nitish Pahwa found it in keeping with “the underlying record of how he appears to treat his own employees, run his company, and keep his secrets.”

This is not the kind of tech figure Washington thought it was inviting in last year.

“I’ve seen a lot of policymakers personally enamored with Sam. You could see it in how they talked to him in the Senate when I was there,” said Gary Marcus, an AI critic who participated in the Senate’s May 2023 AI hearing along with Altman. “There’s almost a cult of personality around him.”

Altman now appears to be speedrunning the arc of charismatic tech figureheads before him like Zuckerberg and Bill Gates — burning through the goodwill and awe his products inspire just as governments across the world are poised to set hard regulatory boundaries around them.

“If people suddenly have questions about him,” Marcus said, “that could actually have a material impact on how policy gets made.”

The episode is an ill omen for both Altman’s world-reshaping ambitions and Washington’s tenuous relationship with Silicon Valley.

Big things are happening with AI in the nation’s capital. The Biden administration’s executive order directed all federal agencies to put AI plans and rules in place; the bipartisan “roadmap” in the Senate set out a slate of policy issues for the relevant committees to take up; various bills regulating everything from deepfakes to a potential AI licensing regime have been proposed.

All of these moves are driven, at least in part, by public sentiment, which remains largely skeptical about AI and open to regulating it.

For the past few years, the backlash to Big Tech has been driven by a dawning awareness that tech companies have taken vast swaths of what, in the first, more open iteration of the internet, were in the public domain — words, photos, commerce — and used them to reap massive profits from ad-driven platforms.

The new wave of generative AI, trained on vast oceans of data gathered without permission from across the internet, threatens to escalate that trend by absorbing the whole of human effort and then forcing users to agree to the tech companies’ terms to access it. (Although, as with social media, they might not have to pay outright: GPT-4o is free to use.)

“I don’t see Congress rallying to the aid of a movie star, but there’s already a big conversation happening about generative AI, creativity and artists,” said Adam Kovacevich, founder of the pro-tech Chamber of Progress, who said he hopes the courts and policymakers uphold the tech companies’ belief that AI training falls under fair use law.

Protecting consumer rights is a longtime cause célèbre on the American left that dates back to “Nader’s Raiders” and the progressive movement of the early 20th century. On the right, the tech backlash is built on a growing interest in a renewed conception of the “common good” positioned vis-a-vis the desires of faceless global corporations. Sen. Josh Hawley (R-Mo.) stared down Zuckerberg in a Senate hearing earlier this year, pressuring him to apologize to families harmed by his platforms. MAGA standard-bearer Sen. J.D. Vance (R-Ohio) has offered fervent praise of Federal Trade Commission Chair Lina Khan, arguably the most fire-breathingly ideological member of President Joe Biden’s administration, for her rough treatment of Big Tech.

There’s still quite a bit of gloss around the industry writ large, to be fair. It’s enjoying a massive economic boom driven by innovation in AI — and also the newly formed bipartisan consensus around “on-shoring” tech development to rival China.

While chagrined by the personal nature of OpenAI’s voice controversy, not to mention its poor timing, tech boosters remain optimistic the technology will continue to grow and keep its hold on the public imagination.

Will it? Polling released yesterday by the pro-regulation AI Policy Institute found people more worried about AI’s harms than excited about its potential — at least when you remind them what those harms might actually be. Sixty-six percent of respondents, prompted with the threat of deepfakes, prioritized “reducing harms from misuse of [AI] models from bad actors” over promoting their growth, and 52 percent had an unfavorable opinion of OpenAI specifically.

The Johansson incident is another unwelcome blow to Silicon Valley’s public image, coming immediately in the wake of the bad press over Apple’s recent, not particularly self-aware iPad ad, which led the company to admit it “missed the mark” in its relationship with artists and creators.

On the legal front, there isn’t a lot of new regulation around AI, but there might be just enough to get OpenAI in trouble.

“This will get unique attention,” said Matt Mittelsteadt, an AI fellow at George Mason University’s Mercatus Center.

Mittelsteadt pointed out that if Johansson could somehow bring a lawsuit in Tennessee, OpenAI might face serious problems — that state’s governor recently signed a bill outlawing the AI-assisted voice impersonation of performers.

The company has denied it simulated Johansson’s voice. Still, it’s paused the use of the chatbot while continuing to take flak from official critics like Sen. Chris Coons (D-Del.), co-sponsor of a bill targeting deepfakes, who wrote to POLITICO’s Mohar Chatterjee yesterday that the move was a “frankly disturbing threat.”

“It’s possible nothing comes of this discrete incident,” Mittelsteadt said. “But the PR problems are piling up and something is eventually going to break the camel's back.”

This scandal could hardly have come at a more fraught time for OpenAI internally. Amid the unveiling of its flashy new GPT-4o model, Wired reported last week that the company’s “superalignment” safety team disbanded after the resignations of two key leaders. While the definition of AI “safety” is highly contested, the departure of leaders focused at least nominally on AI’s social impact over the bottom line was likely not the press OpenAI wanted in Washington as debates over how to responsibly govern AI are still raging.

Appropriating the voice of one of the world’s most famous movie stars — and doing so, it should be noted, in a reference Altman himself made to a film that serves as a cautionary tale about overreliance on AI — is unlikely to help shift the public back into his corner anytime soon.

“Everyone goes after Elon Musk’s weirdness, but Silicon Valley as a whole needs to get out of its bubble and into reality,” said GMU’s Mittelsteadt. “Tweeting ‘her’ isn’t going to win friends.”