Sumsub puts numbers to deepfakes preceding elections; govts take action

An internal analysis conducted by Sumsub shows a substantial rise in deepfake content leading up to the 2024 elections in a handful of the world’s largest countries, with an increase of more than 245 percent compared to the previous year. Lawmakers around the world are taking note.
Sumsub’s study underscores the importance of ongoing efforts to combat misinformation in the public domain, highlighting the United States, with 303 percent year-over-year growth, as the only one of the countries examined to rank among the top 10 for deepfake incidents in the first quarter of 2024.
“The number and quality of deepfakes is increasing and evolving daily worldwide. Even with the most progressive technology, it’s getting much harder to differentiate between a deepfake and reality,” says Pavel Goldman-Kalaydin, head of AI/ML at Sumsub.
Sumsub launched a biometric deepfake detection tool last year, followed by a set of free models to help identify deepfakes and fraud involving synthetic data.
According to a report from The Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS), the influence of AI on specific election outcomes is minimal. However, the UK-based organization’s report highlights the potential harm to the overall democratic system.
While there is no clear evidence that AI has substantially changed any election result, the report points to signs of a deteriorating and increasingly polarized information environment, which creates secondary risks.
The CETaS report recommends enhancing strategic communication and public guidance. It suggests that the Electoral Commission collaborate with Ofcom and the Independent Press Standards Organisation (IPSO) to release new guidelines for media reporting on content that is alleged or confirmed to be AI-generated.
The report also outlines a timeline identifying potential AI election threats and the corresponding countermeasures. This timeline is divided into pre-election (distrust), polling period (disrupt), and post-election (discredit) phases.
“While we shouldn’t overplay the idea that our elections are no longer secure, particularly as worldwide evidence demonstrates no clear evidence of a result being changed by AI, we nevertheless must use this moment to act and make our elections resilient to the threats we face,” says Dr. Alexander Babuta, director of CETaS at The Alan Turing Institute.
US states introduce laws on AI-generated media
California is taking steps to tackle misinformation and discrimination associated with AI technologies, particularly deepfakes in the context of elections. The legislative initiatives focus on creating regulations to establish public trust in AI.
AP reports that there is currently a legal gap around AI-generated robocalls. Following an incident involving a deepfake mimicking President Joe Biden’s voice, legislators are considering banning materially deceptive deepfakes related to elections.
Voting Rights Lab, a nonpartisan organization, is tracking more than 100 bills introduced or passed in 40 state legislatures this year that aim to regulate the use of artificial intelligence to generate election disinformation.
In addition, numerous nonprofit organizations are initiating public awareness campaigns, such as the AIandYou Campaign, to educate voters about the potential impact of AI, including deepfakes, on elections.
Nearby Colorado has enacted a new law that requires campaign ads that contain AI-generated content to include clear disclosures. These disclosures are required within 60 days of a primary and 90 days of a general election.
“AI is a threat to American elections and may supercharge election disinformation through the use of deepfakes. This new law will help ensure Coloradans know when communications featuring candidates or officeholders are deepfaked and will increase transparency,” says Secretary of State Jena Griswold.
Philippines considers a ban on deepfakes during the campaign period
In Southeast Asia, the Commission on Elections (Comelec) in the Philippines is considering banning the use of AI and deepfakes during the campaign period for the May 2025 midterm elections.
Comelec Chairman George Garcia expressed concerns that the use of deepfakes could result in confusion, misrepresentation, and the dissemination of falsehoods, all of which could compromise the integrity of the electoral process. Garcia has requested that the Comelec en banc develop guidelines that would prohibit the use of deepfakes in campaign materials.
In India, deepfakes are being widely used to target specific political parties. For instance, an AI-generated video featuring the late actor-turned-politician Muthuvel Karunanidhi surfaced in January, ahead of the general elections. The video shows him endorsing the current leadership of the South Indian state of Tamil Nadu, even though Karunanidhi died in 2018.
The prevalence of deepfake content underscores the necessity for reliable detection tools. Many AI-based tools and algorithms are in development to identify deepfakes by analyzing inconsistencies in media, such as facial expressions and voice modulations. For instance, Pindrop Security has developed biometric technology that examines audio streams to authenticate whether they originate from a real human voice.
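As a rough illustration of the kind of inconsistency analysis such tools rely on (a minimal sketch, not Pindrop’s or Sumsub’s actual method), the snippet below flags audio whose frame-to-frame spectral variation is suspiciously uniform, a pattern sometimes associated with synthesized speech. The feature choice and threshold here are assumptions for demonstration only.

```python
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Ratio of geometric to arithmetic mean of the magnitude spectrum (0..1)."""
    spectrum = np.abs(np.fft.rfft(frame)) + 1e-12  # avoid log(0)
    return float(np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum))

def flatness_variability(audio: np.ndarray, frame_len: int = 1024, hop: int = 512) -> float:
    """Variance of frame-level spectral flatness across the clip.

    Assumed intuition: natural speech fluctuates between voiced and unvoiced
    sounds, while overly smooth synthetic audio can vary less.
    """
    frames = [audio[i:i + frame_len] for i in range(0, len(audio) - frame_len + 1, hop)]
    flatness = np.array([spectral_flatness(f) for f in frames])
    return float(np.var(flatness))

def looks_synthetic(audio: np.ndarray, threshold: float = 1e-4) -> bool:
    # Hypothetical threshold; a real detector would learn this from labeled data
    # and combine many cues (pitch, phase, breathing artifacts, etc.).
    return flatness_variability(audio) < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy signal alternating noisy and tonal segments, standing in for speech.
    clip = np.concatenate([rng.normal(size=8000),
                           np.sin(2 * np.pi * 220 * np.arange(8000) / 16000)])
    print("flagged as synthetic:", looks_synthetic(clip))
```

Production systems combine many such signals with machine-learned models rather than relying on a single hand-tuned heuristic.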