UK Election Threats from AI

As Britain heads to the polls in 2024, it is expected to face a barrage of state-backed cyber-attacks and disinformation campaigns.

The key risk behind these threats, British cybersecurity experts told CNBC, is artificial intelligence.

Britons are set to cast their votes in local elections on the 2nd of May, with a national election anticipated later in the year, though Prime Minister Rishi Sunak has yet to announce a date. The votes come as the country confronts a range of problems, including a cost-of-living crisis and sharp divisions over asylum and immigration.

“With most U.K. citizens voting at polling stations on the day of the election, I expect the majority of cybersecurity risks to emerge in the months leading up to the day itself,” Todd McKinnon, CEO of identity security firm Okta, told CNBC via email.

Cybercriminals Using AI

Cybersecurity experts anticipate various forms of interference in the upcoming elections, particularly disinformation, exacerbated by the widespread availability of AI.

“Deepfakes,” synthetic media created with AI, will become more prevalent, experts say.

State-backed cyber assaults, including AI-powered attacks like phishing and ransomware, are escalating, notes Okta’s McKinnon.

The cybersecurity sector urges vigilance against AI-generated misinformation, stressing global collaboration to combat malicious activities.

Risks to Britain’s Elections

Adam Meyers, head of counter adversary operations at CrowdStrike, expressed concern about AI-driven disinformation in the 2024 elections.

“Currently, generative AI serves both constructive and destructive purposes,” Meyers conveyed to CNBC.

CrowdStrike’s latest threat report suggests China, Russia, and Iran are likely to exploit generative AI in misinformation campaigns targeting elections worldwide.

“The democratic process is delicate,” Meyers emphasized. “Hostile nations leverage generative AI to shape compelling narratives, exploiting confirmation bias.”

The accessibility of AI poses a significant problem, enabling cybercriminals to craft convincing, deceptive emails, noted Dan Holmes of fraud prevention firm Feedzai.

Holmes highlighted hackers’ use of social media data to train sophisticated voice AI models for personalized attacks.

In one instance, a fabricated AI audio clip of Labour Party leader Keir Starmer circulated on social media in 2023, raising concerns among cybersecurity experts as the U.K. nears its elections.

The local elections will serve as a key test for tech giants like Meta, Google, and TikTok in keeping their platforms free from misinformation.

Meta has already begun embedding a “watermark” in AI-generated content, signaling its artificial nature to users.
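To illustrate the idea, here is a minimal sketch of what checking for such provenance labels might look like, assuming the content carries C2PA or IPTC metadata markers. The file name, marker list, and naive byte-scan approach are illustrative assumptions, not Meta’s actual implementation:

```python
# Illustrative sketch only: scans a media file for byte patterns commonly
# associated with AI-provenance labeling. A real verifier would parse and
# cryptographically validate C2PA manifests rather than grep raw bytes.

from pathlib import Path

# Assumed markers: "c2pa" appears in embedded C2PA manifests, and the IPTC
# DigitalSourceType term "trainedAlgorithmicMedia" denotes synthetic media.
AI_PROVENANCE_MARKERS = [b"c2pa", b"trainedAlgorithmicMedia"]

def has_ai_provenance_marker(path: str) -> bool:
    """Return True if the file contains any known AI-provenance marker."""
    data = Path(path).read_bytes()
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)

if __name__ == "__main__":
    # "downloaded_image.jpg" is a hypothetical file name for illustration.
    print(has_ai_provenance_marker("downloaded_image.jpg"))
```

A byte scan like this can only suggest that a provenance label is present; it cannot prove authenticity, which is why platforms pair visible labels with cryptographically signed metadata.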

Similar efforts to combat cyber threats are expected to be deployed ahead of the national election, according to analysts.

The sophistication of deep fake technology is on the rise. Tech firms are now engaged in a race against time to counter these developments.

“Deepfakes have transitioned from theory to practical application,” remarked Mike Tuchen, CEO of Onfido, in an interview with CNBC last year.

He described the current scenario as a “cat and mouse game” between competing AI technologies, with the focus on identifying and mitigating deepfake impacts.

Ascertaining the authenticity of digital content is increasingly challenging, note cyber experts.

While AI excels at generating text, images, and video, it is prone to telltale errors. An example is an AI-generated video in which a spoon suddenly disappears during a dinner scene; glitches like these can help viewers spot synthetic content.

“During the election period, we anticipate a surge in deepfakes. However, a simple precaution we can all take is to verify the legitimacy of content before sharing,” added Okta’s McKinnon.
