AI could offer ‘enhanced opportunities’ to interfere in 2024 election, DHS warns

Artificial intelligence tools for creating fake video, audio and other content will likely give foreign operatives and domestic extremists “enhanced opportunities for interference” as the 2024 US election cycle progresses, the Department of Homeland Security said in a recent bulletin distributed to state and local officials and obtained by CNN.

A variety of “threat actors” will likely try to use generative AI — or AI that creates fake content — to influence or “sow discord” during the US election cycle, says the May 17 bulletin, which was produced by the DHS Office of Intelligence and Analysis.

While a “large-scale attack on the election is less likely” because of the “higher risk of detection and the decentralized and diverse nature of the voting system,” the bulletin says, “threat actors might still attempt to conduct limited operations leading to disruptions in key election battleground areas.”

Foreign or domestic operatives could, for example, try to “confuse or overwhelm” voters and election staff by distributing fake or altered video or audio clips claiming that a polling station is closed or that polling times have changed, according to the bulletin.

CBS News first reported on the bulletin.

As in 2020, election officials in 2024 will be operating in a chaotic and sometimes hostile information environment, one in which large portions of the electorate falsely believe there is widespread fraud in the system. Former President Donald Trump recently declined to commit to accepting the results of the election.

But unlike in 2020, AI tools present an easy way to greatly amplify false claims of electoral fraud and election conspiracy theories, according to US officials and private experts.

US officials focused on election security are concerned AI tools will exacerbate this already-fraught threat environment, pointing to an incident during the Democratic primary in New Hampshire in January as an example. An AI-made robocall imitating President Joe Biden targeted voters in the primary, urging them not to vote. A New Orleans magician made the robocall at the behest of a political consultant working for Minnesota Rep. Dean Phillips, a long-shot Democratic challenger to Biden, the magician told CNN.

AI becoming more sophisticated

AI-generated content, however, is effective only if it gains traction with audiences.

Operatives working for the Chinese and Iranian governments prepared fake, AI-generated content as part of a campaign to influence US voters in the closing weeks of the 2020 election campaign, CNN reported last week. The Chinese and Iranian operatives never disseminated the deepfake audio or video publicly, current and former officials briefed on the intelligence said.

At the time, some US officials who reviewed the intelligence were unimpressed, believing it showed China and Iran lacked the capability to deploy deepfakes in a way that would seriously impact the 2020 presidential election.

AI tools have progressed considerably in the past four years: they are easier to use and need fewer samples to convincingly imitate someone’s voice or face. As a result, the technology is proliferating among bad actors.

Foreign government-backed operatives have increasingly used generative AI in their influence operations aimed at Americans, according to the new DHS bulletin. Some violent extremists, meanwhile, have “increasingly expressed interest in generative AI to create violent ideological content online” and have promoted guides on how to exploit AI, the document says.

American pollsters are hard at work trying to determine voter preferences ahead of the November election. So, too, are Chinese trolls, according to research published last month by Microsoft. A series of accounts on the social media platform X that Microsoft analysts “assess with moderate confidence are run by” the Chinese government have asked their followers how they feel about US aid to Ukraine and other hot-button topics.

“They could have been doing a mix of audience reconnaissance … [and] to demonstrate some ability to engage in a covert capacity,” Clint Watts, head of Microsoft’s Threat Analysis Center, said in a recent interview. “I don’t think they have a good handle … of what audiences in the United States are interested in.”

Chinese actors who target US audiences with influence operations are “expanding capability, they’re trying new things,” Watts said, “but at the same point, they’re not where the Russians were in terms of cultural context and understanding US audiences in, let’s say, 2015 or 2016.”
