An ambitious San Francisco lawmaker is in the middle of a battle for AI’s future

California’s latest effort to rein in AI is dividing the very companies and researchers developing the technology — and exposing deeper anxieties about its potential dangers.

The main point of contention is a bill proposed by state Sen. Scott Wiener, an ambitious San Francisco Democrat who is positioning himself to succeed Rep. Nancy Pelosi. The measure would require companies to vet their largest AI models for safety risks, such as enabling cyberattacks or the development of bioweapons, and would allow the attorney general to sue for unspecified damages if their products cause harm.

It’s one of dozens of AI bills working their way through the state Legislature this year. The legislation is pitting major players in California’s homegrown tech world against one another at a time when Sacramento is increasingly seen as a standard-setter for tech regulation nationwide, especially as Congress remains bitterly divided.

Companies are also vying to shape laws that could dictate whether they prosper as the AI industry matures into a multibillion-dollar behemoth.

On one side are industry lobbyists representing prominent companies like Google and Meta that have assailed the bill as an unwieldy damper on innovation, reflecting a broader wariness of over-regulation. Others in this camp align with the “effective accelerationist” vision of unconstrained technological advancement, publicly espoused by influential venture capitalist Marc Andreessen.

On the other side is a hodgepodge of nonprofits, often funded by billionaire former Facebook executive Dustin Moskovitz, joined by some smaller AI developers and venture capital firms. Many fall within the so-called effective altruism movement, which counts Moskovitz among its adherents and aims to use data and logic to maximize public good, yet views AI as a paramount threat on par with nuclear war.

The schism highlights the complicated fault lines within the tech world over the future of AI and its threats.

“There are various opinions within the tech community — some quite responsible, and some insane,” said Chris Larsen, a cryptocurrency mogul who has spent heavily in recent years to push San Francisco’s politics to the center. Larsen signed an open letter last year, along with big names like Elon Musk, urging a halt to large-scale experimentation in AI, citing safety concerns. “I’m disappointed how much pushback any AI safety gets.”

Opponents argue Wiener’s bill would stymie a promising industry, particularly newer companies trying to break into the field, which would have a harder time clearing such regulatory hurdles.

“Regardless of whether that’s the intent, the ultimate outcome will be to make it much more difficult for other companies to compete with OpenAI,” said Alliance for the Future Executive Director Brian Chau, whose new Washington-based group aims to prevent lawmakers from overregulating the technology. Chau sees his group, which has ties to the effective accelerationists, as a counterweight to what it calls “the escalating panic around AI.”

Wiener and his allies reject that assertion, pointing to support from some smaller firms like Imbue, which recently raised $200 million to endow computers with what it called “human values.”

Those backers call the measure a commonsense safeguard against some of AI’s most dire safety risks — for example, extreme scenarios like hostile attacks on energy grids or a rogue algorithm targeting humans for extinction. They warn that the government must step in now to constrain companies that could otherwise pursue profit at the expense of the public good.

“We’re building an industry now. We’re not making some app that delivers dinner,” said Matt Boulos, Imbue’s head of policy and safety. “If we take the risks of AI seriously, we need to put in place the incentives that keep them from being realized.”

Gov. Gavin Newsom, who has historically embraced the tech industry’s warnings against overregulation, will have to weigh these diverging views if the bill lands on his desk. Newsom’s office declined to comment, as is his general policy on pending legislation.

Major companies like Google have adopted internal safeguards as they hone their products and say it’s better to allow such self-regulation. Lawmakers like Wiener argue the government must intervene by dictating a responsible balance that nurtures AI’s development without letting it spin out of control.

“I’ve been around the tech sector for many, many years as a policymaker, and what I’ve seen is that there’s always an array of opinions about how to address a problem,” Wiener said. “It’s a bad look to oppose safety legislation, particularly when so many prominent voices in the AI space have been vociferous in inviting regulation,” he added, nodding to leaders like OpenAI’s Sam Altman.

Some of the groups advocating for the bill have also been integral to its development. Last year, a nonprofit called the Center for Artificial Intelligence Safety spearheaded a public, one-sentence call to action: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Signatories included Democratic politicians like California Rep. Ted Lieu and titans of industry like Bill Gates, as well as Moskovitz, who has funded the Center for AI Safety and other AI researchers through his nonprofit Open Philanthropy.

“The letter basically solved a bit of a collective action problem of: a lot of people believe this but they’re afraid to say it,” said CAIS Director Dan Hendrycks, “and then we ended up finding out that basically, a lot more people are concerned about it than almost anybody expected.”

Now the Center for AI Safety’s political affiliate is sponsoring Wiener’s bill after helping to draft its language. The affiliate has hired a lobbying firm in Sacramento, paralleling the group’s increased political engagement in Washington.

“Companies are deciding right now whether they have to invest in these safety precautions,” said Nathan Calvin, senior counsel for CAIS’ political action fund. “And if California does mandate that, then at least some of the big players will be like: ‘We do need to do that, it’s inevitable.’”

Open Philanthropy — founded by Moskovitz and his spouse Cari Tuna in 2017 — has spent more than $330 million in the last few years to research AI risks, including support for CAIS and other groups that back Wiener’s bill (the Center for AI Safety’s lobbying arm is funded separately). It has built out an influence network by funding fellowships of key congressional and think tank staffers shaping federal policy on Capitol Hill. Calvin also previously worked there.

Those overlaps have drawn accusations that the funders of safety research are backing versions of regulation that would cement their own companies’ advantages over AI rivals. Critics point to Moskovitz’s investments in the AI startup and ChatGPT competitor Anthropic as well as Open Philanthropy's early support of ChatGPT creator OpenAI. OpenAI’s CEO Altman has in turn invested in Moskovitz’s company Asana and emerged as a prominent industry voice for AI caution.

Open Philanthropy spokesman Mike Levine emphatically disputed that the organization was spending money to secure financial benefits for anyone. He argued the pro-safety research it supports is dwarfed by the clout of tech firms that are pouring money into lobbying for lighter regulations that would protect their bottom lines.

"There will be by default a lot of power and resources on the sides that want to go quickly and not have a lot of guardrails,” Levine said. “If at the end of the day we’re getting to really transformative AI and we’re relying on individual CEOs to make responsible decisions for the rest of society, I don’t think that’s where we want to be.”

Individual companies that oppose California’s AI bills have sought to influence lawmakers behind the scenes rather than speak out publicly, according to five staffers and lobbyists familiar with the dynamics.

Instead, public pushback has come from industry organizations like TechNet, which represents companies including Meta, Google and Amazon, and the pro-business California Chamber of Commerce.

They have framed their criticism in diplomatic terms, saying some regulation is appropriate but arguing Wiener’s bill is untenably broad and would hinder development by holding companies accountable for a sweeping array of potential harms.

“Such uncertainty, the cost of compliance and significant liability exposure … inevitably discourage technological innovation in our economy,” CalChamber Executive Vice President Ben Golombek testified at a recent Senate hearing.

A Google spokesperson referred questions to CalChamber, while Meta declined to comment.

“We completely understand the role of the Legislature is to decide between two competing sets of experts and determine what the best path forward is to achieving technically feasible solutions,” TechNet lobbyist Dylan Hoffman said. “I think you see differences of opinion on this issue because it’s such a complex and fast-moving technology that there’s subjectivity about how to regulate it.”

Wiener has pushed back against criticism of his bill, noting the contradiction of lobbyists for major tech firms arguing that small startups could be hurt even as a number of smaller firms support the measure. His bill also includes provisions to create a state-run AI research hub, which he argues could dull larger companies’ edge in private funding.

“I sort of rolled my eyes,” Wiener said in an interview. “I’m like, why are the small startups supporting it then, and why does TechNet, which supports the big tech companies, oppose it?”

Hendrycks, the head of the Center for AI Safety, said many AI companies and funders have operated on a “move fast and break things” model espoused by Facebook founder Mark Zuckerberg in the social media giant’s nascent days.

But he said that’s no longer appropriate as the industry matures and produces ever more powerful technology. He argued platforms like Facebook — owned by Meta, whose AI expert Yann LeCun believes safety fears are wildly overblown — presented a cautionary tale of what can happen when a transformative technology advances far beyond regulation.

“The safety stuff is generally an afterthought: That’s something that you need to deal with, eventually,” Hendrycks said.

“I don’t think we’ll need to wait half a decade to determine: should we test AI systems before they’re deployed on society?” he added. “I mean, we took that same sort of slow approach with social media and that really didn’t work out.”