NCSC Warns Against AI Tools in the Next Election
The National Cyber Security Centre (NCSC) has warned that artificial intelligence poses a serious risk to the next national election. Cyberattacks carried out by hostile entities and their proxies are rising and becoming harder to identify.
Following the first AI Safety Summit, held at Bletchley Park, industry executives, academics, and representatives of civil society gathered to discuss the risks posed by artificial intelligence. In particular, they examined how AI could help hostile actors execute more sophisticated cyberattacks.
The NCSC also warned that countries such as Russia are likely to attempt to interfere with the next general election, which must be held by January 2025. Similar interference may target other major elections in Western democracies, including the US.
The cyber threat landscape has changed over the past year, according to the NCSC’s Annual Review. The Centre, which operates within GCHQ, has identified a new class of state-aligned cyber attackers motivated more by ideology than by money.
As stated in the NCSC Annual Review:
"While the UK’s use of paper voting in general elections makes it significantly harder to interfere with our elections, the next election will be the first to take place against the backdrop of significant advances in AI. But rather than presenting entirely new risks, it is AI’s ability to enable existing techniques which poses the biggest threat."
Voting in British general elections is conducted manually with pencil and paper, which offers strong protection against direct interference with the count. However, the NCSC raised concerns about the vulnerability of election campaigns to misinformation, citing deepfake videos and hyper-realistic bots as possible risks.
These risks include deepfakes spreading rapidly across social media platforms and the fast distribution of other fabricated online content. Notably, a deepfake recording was circulated during the recent Slovakian election with the intention of disrupting the democratic process.
Large language models (LLMs), the technology behind applications such as ChatGPT, are a particular concern. Malicious actors could exploit them to create misleading information at scale, influence public opinion, and endanger democratic processes.
Interfering in elections or seeking to suppress free speech is illegal. The UK government has pledged to strengthen its capabilities for preventing and responding to such attacks, notably disinformation campaigns.
The security landscape has changed significantly since the 2019 UK general election, driven largely by Russia’s war in Ukraine. The conflict has intensified efforts by hostile actors to influence political discourse in democratic countries.
The NCSC advises the UK government and political parties to take a proactive approach to mitigating the risks posed by artificial intelligence. This entails strengthening security measures, improving public awareness of AI-generated misinformation, and enforcing stricter regulations on digital advertising.
The UK has taken significant steps to counter threats to democracy by forming the Safeguarding Democracy Taskforce and the Joint Election Security Assessment team.
The NCSC recognises that elections and other democratic events are likely to attract the attention of attackers. As a result, organisations and individuals must remain vigilant and prepared to deal with both known and emerging risks.