Deepfakes are a global election issue and a major threat to global stability: WEF
Deepfakes are turning from a tech buzzword into a matter of significant international concern, as nations begin to understand the threat that misinformation, disinformation and biometric hacking pose to fair elections and the global economy. Biometrics and cybersecurity firms are mounting a defense with new tools and approaches to authentication. But with 2024 set to see several elections that could have massive ramifications for a wobbly global order – according to CNBC, more than half the world's adult population will be eligible to cast a vote this year – there is fresh urgency to the question of what generative AI-driven fraud means for democracy, and how to address it.
Risk report warns about AI, predicts need for new rules and norms
According to the World Economic Forum’s new Global Risks Report, the threat of misinformation and disinformation is the number one risk facing humankind in the next two years. The WEF report, produced with Zurich Insurance Group and Marsh McLennan, tells a story of corrupted information ecosystems leading into an era of extreme weather and critical planetary change, with two-thirds of the 1,400 experts and industry leaders surveyed anticipating global catastrophe within a decade.
In a release, WEF Managing Director Saadia Zahidi says “an unstable global order characterized by polarizing narratives and insecurity, the worsening impacts of extreme weather, and economic uncertainty are causing accelerating risks – including misinformation and disinformation – to propagate.” The report draws connections between AI-driven misinformation and disinformation, social polarization and the persistent global cost-of-living crisis. While AI does not appear among the top risks for the next two years, the ten-year forecast puts “adverse outcomes of AI technologies” at number six.
McAfee launches audio deepfake detector as survey shows growing distrust
The WEF’s call for new approaches to risk confirms what is happening in real-time across electoral systems globally – in tandem with ripostes from the tech world. At CES 2024, antivirus mainstay and biometric security provider McAfee unveiled a new deepfake audio detection tool. Called Project Mockingbird, the consumer-targeted product uses AI models to analyze contextual, behavioral, and categorical cues to identify AI-generated audio.
In a blog post covered by Interesting Engineering, Steve Grobman, chief technology officer at McAfee, says Project Mockingbird delivers over 90 percent accuracy in detecting AI-generated audio deepfakes. “It’s akin to a weather forecast that aids in informed decisions regarding the credibility of online content,” he says.
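McAfee has not published Project Mockingbird’s internals, but the general approach it describes – fusing several families of cues into a single confidence score, presented like a forecast probability – can be sketched in a few lines. Everything below (the cue names, weights, bias and threshold) is hypothetical and purely illustrative, not McAfee’s model:

```python
import math

def deepfake_confidence(contextual: float, behavioral: float,
                        categorical: float) -> float:
    """Fuse per-cue scores (each in [0, 1]) into one probability
    using a weighted logistic model. Weights and bias are made up
    for illustration; a real detector would learn them from data."""
    weights = {"contextual": 2.0, "behavioral": 1.5, "categorical": 1.0}
    bias = -2.5  # shifts the default toward "probably authentic"
    z = (bias
         + weights["contextual"] * contextual
         + weights["behavioral"] * behavioral
         + weights["categorical"] * categorical)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash to (0, 1)

def flag_as_deepfake(scores: tuple[float, float, float],
                     threshold: float = 0.5) -> bool:
    """Binary verdict: flag audio whose fused confidence clears the bar."""
    return deepfake_confidence(*scores) >= threshold
```

The point of the sketch is the “weather forecast” framing: the output is a calibrated probability a user can weigh, not a hard yes/no baked into the model.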
If disinformation and misinformation are weather, the world is in the middle of a megastorm; a December 2023 survey from McAfee found that 84 percent of Americans worry about deepfakes and their potential misuse. Meanwhile, government agencies are preparing for a deluge of foreign interference in elections. In the U.S., the FBI and NSA are concerned that, in a highly polarized societal climate, deepfake attacks by foreign intelligence agencies could be the lightning strike that ignites smoldering tensions.
Speaking on a CNBC panel, FBI Director Christopher Wray says sharing information – with election officials, the public, other agencies and key stakeholders in the private sector – is the best defense. “What we’re focused on is hostile foreign intelligence services creating fake personas, attempting to persuade people that they’re something they’re not,” he says. “Appearing to be American, when they’re actually the Chinese, the Russian, or Iranian intelligence services. And so what we try to do together is ferret that out and then share it with people who need to know about it.”
Media giant taps blockchain to ensure fair and balanced journalism
An unlikely private-sector partner in the quest for media authenticity has also taken note of McAfee’s survey, and responded with a move aimed at establishing the origin and history of original journalism through cryptography. TechCrunch reports that Rupert Murdoch’s Fox Corp. has partnered with Polygon Labs, the company behind the layer-2 blockchain network that scales Ethereum, to launch Verify – an open source protocol for media companies to register articles, photographs and other content.
“The Verify protocol establishes the origin and history of original journalism by cryptographically signing individual pieces of content on the blockchain,” explains Melody Hildebrandt, CTO of Fox. “It’s powered by a content graph, binding content to its verified publisher.”
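The core idea Hildebrandt describes – hashing a piece of content and binding that hash to a publisher with a signature – can be sketched briefly. The real Verify protocol uses public-key signatures recorded on the Polygon blockchain; the sketch below substitutes a stdlib HMAC with a hypothetical publisher secret so it stays self-contained, which changes the trust model but preserves the sign-then-verify flow:

```python
import hashlib
import hmac

# Hypothetical publisher key, standing in for a real signing keypair.
PUBLISHER_KEY = b"example-publisher-secret"

def content_id(article: bytes) -> str:
    """Stable identifier for a piece of content: its SHA-256 hash."""
    return hashlib.sha256(article).hexdigest()

def sign(article: bytes, key: bytes = PUBLISHER_KEY) -> str:
    """Bind the content hash to the publisher (HMAC as a stand-in
    for the on-chain public-key signature the protocol would use)."""
    return hmac.new(key, content_id(article).encode(), hashlib.sha256).hexdigest()

def verify(article: bytes, signature: str,
           key: bytes = PUBLISHER_KEY) -> bool:
    """Recompute and compare: any tampering with the content bytes
    changes the hash, so the signature no longer matches."""
    return hmac.compare_digest(sign(article, key), signature)
```

A reader (or downstream platform) checking provenance would recompute the hash of the article they received and verify it against the registered signature; an altered copy fails the check.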
The frontline is everywhere: UK and Taiwan face deepfake disinfo campaigns
The U.S. election may be the most visible (and widely consequential) vote in 2024, but disinformation is everywhere, and officials in the UK, Taiwan, Singapore and India are all bracing for it to reach new levels of sophistication and intensity over the next twelve months.
In an article from The Standard, the former head of the British civil service, Lord Gus O’Donnell, sums up the essence of the problem. The technology to create and distribute convincing fake audio and video content, he says, has already come to maturity – but response is lagging, with the UK vote expected to happen in the latter half of 2024. “A lot of people are working on technical solutions to this, which might in time be able to sort out what’s fake and what’s not,” says O’Donnell. “But they’re not there yet and they won’t be there in time for our election.”
Even more urgent is the case of Taiwan, which will hold elections this week, on January 13. In a video dispatch, WION reports on ominous words from China’s leader, Xi Jinping, regarding Beijing’s stance on the island nation’s sovereignty – and on how Xi’s government has been using deepfakes to sway public opinion in favor of candidates who support reunification with China. Xi has renewed his vow never to compromise on Taiwan, a sovereign nation that Beijing wants back under Chinese control. Deepfake videos that skew in China’s favor have been a particular issue on TikTok, the hugely popular social media platform, which has been scrutinized for its ties to Beijing’s repressive regime.
New tools help youth face the coming storm
Elsewhere in Asia, governments are taking note of the risks of AI deepfakes, and designing countermeasures. The Straits Times reports that Singapore is considering watermarking identified deepfake content, as part of a SG$20 million (US$15 million) campaign to bolster online trust and safety. And, according to MENAFN and India Today, former President of India Ram Nath Kovind’s recent speech to the graduating class of the Indian Institute of Mass Communication in Delhi included a warning and a call to arms in the face of a major threat to trust in elections – and, possibly, trust in general.
“Any mischief-maker sitting in any corner of the world can spread fake news in the social media space,” Kovind told the tech students. “By the time we realize that certain information is incorrect and spread with ill intention, the damage has been done to society. You need to be adequately prepared to tackle the misuse of rapidly advancing technologies.”
Article Topics
biometric authentication | biometrics | deepfake detection | deepfakes | elections | fraud prevention | McAfee | World Economic Forum