Next general election may be 'testing ground' for AI's 'harmful capabilities', Sajid Javid says

Sajid Javid has warned of the 'harmful capabilities' of AI in next year's general election. (PA)

Former chancellor Sajid Javid has said the UK’s next general election could be a “testing ground” for some of the “most harmful capabilities” of artificial intelligence (AI).

In an article for the Sunday Times, Javid also said the Electoral Commission, which regulates elections in the UK with the aim of maintaining their democratic integrity, is “clearly not ready for AI”.

His warnings come amid growing concern about how AI - technology whereby machines perform functions usually associated with humans - will disrupt everyday life.

The most dramatic warning so far came this week, when dozens of experts warned the technology could lead to the extinction of humanity.

With the UK’s next general election expected to be held next year, Javid, who has previously advised US AI firm C3 AI, said: “In the wrong hands, future AI will be able to conduct offensive cyber operations to enable mass fraud; manipulate and generate content to undermine trust and democracy, and imitate deep and personal relationships to facilitate abuse.


“As a former home secretary it is also easy to see the capabilities AI brings to serious crime and hostile state activity.

“I am especially worried that the UK and US elections next year will become testing grounds for some of the most harmful capabilities. The Electoral Commission, which is only now getting to grips with social media, is clearly not ready for AI.”

The Electoral Commission, responding to Javid's comment in a statement to Yahoo News UK, said "wholesale reform of electoral law" is needed from the government.

Louise Edwards, director of regulation and digital transformation, said: "Voters deserve to know that elections in the UK are always free and fair and that laws are in place to safeguard them from unlawful influence. The Electoral Commission plays a central role in delivering this, ensuring political funding is transparent and preventing foreign money from entering UK politics.

"New requirements for digital campaign material to include an imprint will go some way to ensuring voters can see who has paid for an ad and is trying to influence them online. But we need wholesale reform of electoral law to ensure it keeps pace with the digital age. We are ready to work with other regulators and the UK’s governments to ensure our laws are fit for a modern age."

Last month, Sam Altman, CEO of OpenAI, the creator of ChatGPT - the AI chatbot which rose to prominence this year - called for regulation as he told a US congressional hearing of risks during election campaigns: “It’s one of my areas of greatest concern: the more general ability of these models to manipulate, to persuade, to provide one-on-one interactive disinformation.

Open AI CEO Sam Altman testifies at a congressional hearing last month. (Getty Images)

“Given we’re going to face an election next year [in the US] and these models are getting better, I think this is a significant area of concern.”

In the UK, Rishi Sunak has spoken about the importance of ensuring the right “guard rails” are in place to protect against potential dangers, ranging from disinformation and national security to “existential threats”.

Sunak is also considering setting up a global AI watchdog in London, it was reported on Friday.

AI can also perform life-saving tasks: algorithms can analyse medical images such as X-rays, scans and ultrasounds, helping doctors identify and diagnose diseases including cancer and heart conditions more quickly and accurately.


On the flipside, the San Francisco-based Center for AI Safety - the organisation which released the statement about the threat to humanity - has warned AI could be weaponised, for example to develop new chemical weapons or enhance aerial combat.

The centre lists other risks on its website, including AI potentially becoming dangerous if it is not aligned with human values.

It also says humans could become dependent on machines if important tasks are increasingly delegated to them.