World has no ‘strong protection against dangers of AI’

Prof Yoshua Bengio says some officials are 'too often being swayed by companies which would rather avoid the overhead of regulation and are racing ahead without sufficient care'

The world has no strong protections in place against the dangers of artificial intelligence (AI), a major report announced at the Prime Minister’s AI summit last year has warned.

Experts from 30 countries authored the inaugural International Scientific Report on Advanced AI Safety, which focuses on general-purpose AI such as ChatGPT.

“Its capabilities have been improving particularly rapidly,” the report says.

“General-purpose AI is different from so-called ‘narrow AI’, a kind of AI that is specialised to perform one specific task or a few very similar tasks.”

The report says that the current approach to managing risks from forms of AI like ChatGPT relies on the technology’s developers being able to assess the potential harms.

However, it warns that current assessment methods “have limitations and cannot provide strong assurances against most harms”.

The world-leading experts also admitted that there is “very limited” understanding of the inner workings, societal impacts and capabilities of general-purpose AI.

The interim report has not yet involved major input from technology and AI companies.

The research team was led by Professor Yoshua Bengio, an AI expert, and its publication comes as debate around AI regulation has intensified around the world.

Prof Bengio told The Telegraph he was concerned most of the world’s governments were “sleepwalking and not sufficiently conscious of the magnitude of the risks”.

He said he believed some officials are “too often being swayed by companies which would rather avoid the overhead of regulation and are racing ahead without sufficient care”.

Three main categories of risk

The report identified three main categories of risk around AI: malicious use, risks from malfunctions, and systemic risks.

Malicious use could involve large-scale scams and fraud, deepfakes, misinformation, assisting in cyber attacks or the development of biological weapons, the report suggests.

The potential risks from AI malfunctions included concerns around bias that has been unintentionally built into systems, causing them to affect different groups of people disproportionately, as well as fears about losing control of AI systems, in particular autonomous ones.

Among the concerns about potential systemic risks were AI’s possible impact on the job market; the concentration of its development in the West and China, creating unequal access to the technology; and the risks AI poses to privacy through models being trained on personal or sensitive data.

The impact on copyright law and the climate impact of AI development – an energy-heavy process – were also highlighted.

It concludes that the future of general-purpose AI is “uncertain”, with both “very good” and “very bad” outcomes possible, and that further research and discussion are needed.

The report is set to be a starting point for discussions at the next AI summit – the AI Seoul Summit – taking place virtually and in person in South Korea next week.

Michelle Donelan, the Technology Secretary, who will co-host the second day of the summit with Lee Jong-Ho, the Korean Minister of Science and ICT, said: “When I commissioned Professor Bengio to produce this report last year, I was clear it had to reflect the enormous importance of international cooperation to build a scientific evidence-based understanding of advanced AI risks. This is exactly what the report does.

“Building on the momentum we created with our historic talks at Bletchley Park, this report will ensure we can capture AI’s incredible opportunities safely and responsibly for decades to come.

“The work of Yoshua Bengio and his team will play a substantial role in informing our discussions at the AI Seoul Summit next week, as we continue to build on the legacy of Bletchley Park by bringing the best available scientific evidence to bear in advancing the global conversation on AI safety.

“This interim publication is focused on advanced ‘general-purpose’ AI. This includes state-of-the-art AI systems which can produce text, images and make automated decisions.”
