The AI safety movement is waning — it might even be dead already


Semafor Signals

Supported by Microsoft

Insights from Fast Company, Bloomberg, and Wired


The News

Sixteen of the world’s top AI companies made fresh commitments to develop safe AI systems at a summit in Seoul organized by the governments of South Korea and the United Kingdom.

The gathering comes at a critical juncture. Tech giants like OpenAI and Microsoft have released a flurry of updates to their AI models and core products, while governments pursue a patchwork of regulations to contain the nascent industry: European Union ministers on Tuesday approved a new law governing AI’s use, particularly in sensitive sectors like policing, and US Senate leaders released a proposed AI regulation roadmap last week.


SIGNALS


Size of Seoul summit shows diminishing interest in safety


Source: Fast Company

The Seoul summit drew far fewer attendees than the AI safety summit held in the UK six months ago, which featured a star-studded guest list. The absence of so many countries and officials suggests “the broader movement toward a global agreement on how to handle the rise of AI is faltering,” tech journalist Chris Stokel-Walker wrote for Fast Company. There is no global consensus on how to handle fast-developing AI systems, and what policy exists is fragmented. AI researchers want action: A recent letter published in the journal Science called for transitioning “from vague proposals to concrete commitments.”

Safety movement is ‘dead,’ but safety work is continuing


Source: Bloomberg

The broad “AI safety” movement — which peaked when AI experts and researchers called for a six-month pause in AI development — is dead, Bloomberg columnist Tyler Cowen argued, but the work of “actually making artificial intelligence safer” has just begun. Many of AI’s past skeptics now take a more optimistic view, and innovation is seen as the best way to make AI safer, he wrote. This was perhaps inevitable, he said: “When humans invented computers, or for that matter the printing press, very few safety issues were worked out beforehand, nor could they have been.”

OpenAI’s approach is in the spotlight


Sources: Wired, The Verge

ChatGPT creator OpenAI is in the spotlight over its safety efforts after its co-founder and chief scientist quit last week, along with other members of its “superalignment” team, which was tasked with making extremely powerful AI safe. The team has since been disbanded. OpenAI CEO Sam Altman defended the company’s safety record but acknowledged there is work to do; ahead of the Seoul summit, the company outlined 10 fresh safety commitments. The turmoil highlights “an increasing tension within OpenAI,” The Verge reported, as the company races to release new products and deprioritizes employees’ concerns about moving too fast.
