AI Seoul Summit: Big Tech companies make commitment to AI safety as second world summit begins

Tech companies have agreed on a set of safety commitments on artificial intelligence (AI) as a second global safety summit led by South Korea and the UK gets underway.

The voluntary commitments were agreed by 16 major tech companies, including OpenAI, Mistral, Amazon, Anthropic, Google, Meta, Microsoft, and IBM.

Signatories also included a Chinese AI company and the United Arab Emirates' Technology Innovation Institute.

The commitments include identifying the possible risks posed by AI, setting thresholds at which those risks would be deemed too high, and being transparent about them, according to a statement from the UK government, which called the agreement a historic first.

The companies also committed to pausing AI models or systems if the risks could not be reduced.

The agreement was announced as the two-day AI Seoul Summit began on Tuesday. The summit is a follow-up to last year's AI Safety Summit at Bletchley Park in the UK.

'World-first agreement' by Big Tech

France will host the next in-person summit on the topic.

That gathering, held last November, ended with a declaration signed by nearly 30 countries stating they would ensure AI is "human-centric, trustworthy and responsible".

The current online summit, hosted jointly by the UK and South Korea, is expected to result in further agreements from world leaders.

"It's a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety," UK Prime Minister Rishi Sunak said in a statement after the agreement was signed.

Sunak and South Korean President Yoon Suk Yeol will co-chair a virtual meeting with world leaders on Tuesday. The second day of the summit will include a discussion among digital ministers.

Mark Brakel, the director of policy at the non-profit Future of Life Institute, said in a statement provided to Euronews Next that while he welcomed the companies' commitments, "it is crucial that the guardrails, standards, and oversight necessary to keep people safe are codified in law".

"Goodwill alone is not sufficient. If AI corporations are truly sincere in their commitments to safety, they should support having those commitments enshrined in regulation," Brakel said.

The agreement comes as tech companies rush to release new AI models, with the development of the technology accelerating globally.

Promoting "safe, secure, and trustworthy" AI

Last week, OpenAI announced its faster new model, GPT-4o, while Google unveiled new AI features for search and updates to its AI model Gemini.

Google even had its model count how many times the company said "AI" during its developer conference: 120 mentions in total.

Yet last week's resignations of the co-leaders of OpenAI's team dedicated to preventing AI systems from going rogue have heightened concerns about safety.

Jan Leike, the former co-leader of that team, said he resigned in part because "safety culture and processes have taken a backseat to shiny products," adding that OpenAI must become a "safety-first AI company".

There are multiple efforts to start regulating the technology, including the EU's world-first AI Act, which is expected to come into effect next month.

The UN General Assembly also adopted a resolution earlier this year on the promotion of "safe, secure and trustworthy" AI.

Last month, the US and UK signed an agreement to develop safety tests for the most advanced AI models.