Nearly three quarters of U.S. adults worry deepfakes could sway election: Jumio
The hour is ripe for political deepfakes. The U.S. presidential election is still four months away, and the campaign has already seen an attempted assassination of one candidate and the withdrawal of another amid concerns about mental acuity. The basic assumptions of a normal American electoral process have been shredded – and, according to new research from Jumio, many voters see biometric deepfakes lurking in the fissures.
A release from Jumio says its 2024 Online Identity Study reveals a deep unease among U.S. adults about the potential for AI deepfakes to influence the upcoming elections. Among some 2,000 Americans queried for the survey, 72 percent express concern that AI and deepfakes could come into play, and 70 percent say they trust online political content less than they did during the last election cycle. Even grimmer, just 30 percent say they trust political news they find online.
Other trends are consistent with general global attitudes about AI and deepfake technology: people want more from their governments on AI regulation, even as they overestimate their ability to detect a deepfake with their own eyes.
Disclosure requirements do not match scale of concern
Alarms are sounding across the U.S. about the threat generative AI content poses to free and fair elections. The topic came up at the recent Republican National Convention, where Microsoft experts issued warnings about the potential for disruption in the electoral process. And CNBC has coverage of a new Moody’s report identifying AI-generated deepfake content as potential “election integrity issues that could present a risk to U.S. institutional credibility.”
“The election is likely to be closely contested, increasing concerns that AI deepfakes could be deployed to mislead voters, exacerbate division and sow discord,” says the report (which was published before Joe Biden withdrew his candidacy and endorsed Vice President Kamala Harris as his replacement). “If successful, agents of disinformation could sway voters, impact the outcome of elections, and ultimately influence policymaking.”
The Federal Communications Commission (FCC) has moved to require political TV, video and radio ads to disclose whether they incorporate AI-generated content. The rule does not yet cover social media, although the Federal Election Commission is weighing broader measures on AI disclosure.
“2024 is going to be the first cycle where the issue of deepfakes is going to play a more essential role,” says Ashley O’Rourke, Microsoft’s business development lead for political campaigns, in a Cronkite News report. Her assessment is already proving true. In January, an audio deepfake of Joe Biden was used in robocalls encouraging voters to stay home during the New Hampshire primary. More recently, media outlet Arizona Agenda showcased a video deepfake of GOP Senate nominee Kari Lake as a warning to voters about the AI technology’s potential.
CNBC notes in a report that some social media platforms have introduced selective, proactive AI disclosure measures in an attempt to stay ahead of regulations. They have largely succeeded in identifying the most explicitly harmful types of content. But the unceasing firehose of information means AI content occasionally slips past the gates, and political content may not raise flags with the same urgency as, for instance, child sexual abuse material. And while a titan like Google requires disclosures for political ads with content that “inauthentically depicts real or realistic-looking people or events,” its policy does not specifically require AI disclosures.
The specter of deepfakes is haunting America: Moody’s
The increasing ease with which fraudsters can create convincing deepfakes is, in fact, a two-pronged threat to political stability. Moody’s Ratings assistant vice president Abhi Srivastava says that, “with the advent of readily accessible, affordable Gen AI tools, generating a sophisticated deepfake can be done in minutes.”
But aside from the documented deepfakes, there is also the great deepfake boogeyman: according to Moody’s, just the widespread perception that deepfakes have the ability to influence political outcomes, even without concrete examples, “is enough to undermine public confidence in the electoral process and the credibility of government institutions.”
Conflicting regulations across states are often cited as a hurdle for technology, but Moody’s says in this case, the decentralized nature of the U.S. election system could be an advantage. Some states already have advanced cybersecurity policies and knowledge of the threat landscape; as of February, 50 pieces of AI-related legislation were being introduced per week in state legislatures. Thirteen states already have deepfake laws targeting election interference on the books, eight of which came into effect this year.
Among outgoing President Joe Biden’s accomplishments was the issuance of the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” He has little time left to issue formal federal regulation on AI – but also a window in which to codify the projects he most wants to stand as his legacy. Could a slapdash U.S. AI Act modeled on the EU version be spun as a final feather in the president’s cap? In this election cycle, anything is possible.
Article Topics
biometrics | deepfake detection | deepfakes | elections | fraud prevention | generative AI | Jumio | New Hampshire | United States