Nations and companies line up defenses against AI deepfake fraud
Regulatory alarms are sounding over the new risks presented by deepfake technology and the concurrent spike in AI-driven fraud. New security measures in Singapore exemplify how nations are working to keep up with the first wave of deepfake fraud attacks leveraging generative AI. But startups promising near-perfect deepfake detection tools are not always what they seem, in contrast to established biometric digital ID providers with proven results.
Singapore funds research on deepfake detection to back up law
A post on Pindrop’s blog outlines ways in which Singapore is responding to the threat of deepfake fraud – which, experts say, is dire. “If we can no longer easily discern what is real and what is not, a functioning democracy becomes impossible,” says Dr. Tan Wu Meng, a member of Singapore’s parliament. “No government, irrespective of political affiliation, can effectively govern without this essential foundation for democratic discourse.” His warning comes in the wake of a recent incident involving Prime Minister Lee Hsien Loong, whose likeness was used in deepfaked videos promoting investment products.
Author Laura Fitzgerald, Pindrop’s head of brand and digital experience, says the main goal of the government’s $20 million online safety initiative is “to grow new capabilities to keep pace with deepfakes and prevent misuse.” She points to the Online Criminal Harms Act (OCHA), passed in July 2023, which mandates public education on deepfakes, AI and other digital media, and could require online service providers to beef up their security measures.
Finally, funding will support research efforts of groups including the Centre for Advanced Technologies in Online Safety (CATOS), which aims to facilitate industry collaboration and knowledge exchange in deepfake detection. Technologies such as watermarking and content authentication have been identified as particular areas of interest.
Pindrop will be on hand at CATOS’ annual Online Trust and Safety (OTS) Forum in Singapore, where it will showcase real-time audio deepfake detection.
Entrust provides summary of AI threat, anticipates increase in 2024
In a blog post for cybersecurity and ID authentication firm Entrust, Tony Ball, president of the company’s payments and identity portfolio, provides a good breakdown of the emergent threat and the variety of attack vectors employed by fraudsters. Ball says it is now much easier for lone amateurs to access the tech needed to execute more scalable and sophisticated fraud.
“Fraudsters have started using deepfakes to try and bypass biometric verification and authentication methods,” writes Ball. “These videos can be pre-recorded or generated in real time with a GPU and fake webcam, and typically involve superimposing one person’s face onto another’s.” Ball cites a 3,000 percent increase in attempted deepfake attacks between 2022 and 2023. And, he says, the growing popularity of “fraud-as-a-service” offerings means volume will likely increase in 2024. “This is particularly concerning in the realm of digital onboarding and identity verification, where the integrity of personal identification is paramount.”
Ball also notes how phishing, synthetic identity fraud and document forgeries complicate the picture.
He says that “keeping fraudsters from entering in the first place with a reliable identity verification solution at onboarding is the foundational element” in any detection framework.
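To make that layered approach concrete, here is a minimal sketch of what identity verification at onboarding typically chains together: document checks, liveness/presentation attack detection, and a face match. It does not reflect Entrust’s actual products; every function below is a hypothetical stub standing in for a real vendor SDK or service call.

```python
# Generic sketch of layered identity verification at onboarding.
# All check functions are hypothetical stubs, not any vendor's real API.


def check_document_authenticity(document_img: bytes) -> bool:
    return bool(document_img)  # stub: real checks inspect security features, MRZ, etc.


def check_liveness(selfie_video: bytes) -> bool:
    return bool(selfie_video)  # stub: real PAD flags replays, masks, deepfake injection


def match_face(document_img: bytes, selfie_video: bytes) -> bool:
    return True                # stub: real systems compare selfie and ID-photo templates


def onboard(document_img: bytes, selfie_video: bytes) -> tuple[bool, str]:
    """Gate account creation on layered identity checks, failing fast."""
    if not check_document_authenticity(document_img):
        return False, "document failed authenticity checks"
    if not check_liveness(selfie_video):
        return False, "presentation attack suspected"
    if not match_face(document_img, selfie_video):
        return False, "selfie does not match document photo"
    return True, "identity verified at onboarding"


if __name__ == "__main__":
    print(onboard(b"id-front.jpg bytes", b"selfie.mp4 bytes"))
```

The design point is the ordering: each layer rejects a different class of fraud before an account ever exists, which is what makes onboarding the “foundational element” of the framework.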
Entrust announced its acquisition of identity verification provider Onfido in February 2024.
Startups score contracts but lack experience of established PAD firms
Amid the deepfake panic, it is prudent to remember that hooey flies in every direction – and to look to experienced providers. The Washington Post reports on a surge of firms promising to offer effective deepfake detection – which may themselves be fairly flimsy outfits, in terms of delivering on their promises.
“Fears that synthetic media could disrupt elections and threaten national security have left institutions such as Congress, the military and the media desperately searching for a trustworthy technical fix to identify fake content,” says the piece by Nitasha Tiku and Tatum Hunter. It points to California startup Deep Media, which has won “at least five military contracts worth nearly $2 million since late 2022” despite being an apparently bare-bones operation: it began life as an AI-based synthetic media generator, has a single machine learning engineer on staff, and faces pending lawsuits. Other firms named in the article include Originality.ai, AI Voice Detector, GPTZero and Kroop AI. The Post says about 40 companies now offer deepfake detection services, many of them claiming near-perfect accuracy for their software.
Missing from the Post’s report is mention of relatively established deepfake detection tools from developers with a background in presentation attack detection (PAD). Pindrop, ID R&D, Veridas and many others all have track records in providing standards-based identity security tools. But the dazzle of San Francisco Bay can be blinding, even for military and intelligence outfits.
Having made deepfakes possible, OpenAI can now help detect them
One high-profile Silicon Valley company is also entering the deepfake detection sweepstakes. The New York Times reports that OpenAI is launching a tool “designed to detect content created by its own popular image generator, DALL-E.” While the company claims the tool is 98.8 percent accurate at detecting DALL-E’s content, it is not designed to detect images made with other AI-based image generators.
However, even as it warns, once again, that the world is not sufficiently prepared to handle what AI will unleash, one of the technology’s prime movers is also pursuing other ways to mitigate its effects. OpenAI is joining the steering committee of the Coalition for Content Provenance and Authenticity, or C2PA, which the Times says provides “a kind of ‘nutrition label’ for images, videos, audio clips and other files that shows when and how they were produced or altered.” It is also exploring the potential of watermarking.
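To make the “nutrition label” idea concrete, here is a minimal sketch of hash-bound provenance metadata. It is a deliberate simplification: real C2PA manifests are cryptographically signed JUMBF/CBOR structures embedded in the asset itself, and the JSON layout and field names below are hypothetical stand-ins.

```python
# Simplified illustration of C2PA-style provenance: bind a manifest to a
# file via a content hash, then verify the binding later.
# NOTE: the real C2PA format is a signed, embedded binary structure; this
# plain-JSON layout is an assumption made for illustration only.
import hashlib
import json
from pathlib import Path


def make_manifest(path: str, generator: str, actions: list[str]) -> dict:
    """Build a provenance record bound to the file's current bytes."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {
        "claim_generator": generator,  # the tool that produced the asset
        "actions": actions,            # e.g. ["created"], ["resized", "cropped"]
        "content_hash": {"alg": "sha256", "value": digest},
    }


def verify_manifest(path: str, manifest: dict) -> bool:
    """True only if the file's bytes still match the recorded hash."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == manifest["content_hash"]["value"]


if __name__ == "__main__":
    Path("sample.png").write_bytes(b"\x89PNG stand-in image bytes")
    manifest = make_manifest("sample.png", "dall-e", ["created"])
    print(json.dumps(manifest, indent=2))
    print("intact:", verify_manifest("sample.png", manifest))  # True
    Path("sample.png").write_bytes(b"tampered bytes")           # simulate editing
    print("intact:", verify_manifest("sample.png", manifest))  # False
```

The hash binding is what gives the label its value: any edit to the file after the manifest was issued makes verification fail, which is why the real standard pairs it with digital signatures to prevent forged labels.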
The Times cites recent incidents of audio and video content being used to try to influence elections in Slovakia, Taiwan and India. The remainder of 2024 will see votes in the U.S., UK and EU, each of which is likely to have significant socio-political consequences.
A microphone to detect audio deepfakes
Finally, a team of researchers at Arizona State University has developed a prototype of a microphone that would authenticate voice recordings as human speech. KJZZ reports that the microphone identifies biosignals associated with human speech in a recording, then adds a watermark to flag the recording as authentic.
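The researchers’ implementation has not been published, but the watermark-at-capture concept can be illustrated with a short sketch: embed a low-amplitude pseudo-random sequence keyed by a shared secret into the signal, then detect it later by correlation. The key, amplitude and threshold below are arbitrary assumptions, and a real device would presumably embed the mark only after its biosignal check passes.

```python
# Rough sketch of watermarking at capture (the ASU prototype's internals
# are not public; all parameters here are assumptions for illustration).
import numpy as np

KEY = 42       # shared secret seeding the watermark sequence (assumed)
ALPHA = 0.01   # watermark amplitude, small relative to speech (assumed)


def _chips(n: int, key: int) -> np.ndarray:
    """Deterministic +/-1 sequence derived from the secret key."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=n)


def embed_watermark(audio: np.ndarray, key: int = KEY) -> np.ndarray:
    """Add the keyed sequence to the signal at low amplitude."""
    return audio + ALPHA * _chips(audio.size, key)


def is_watermarked(audio: np.ndarray, key: int = KEY) -> bool:
    """Correlate with the keyed sequence: score is ~ALPHA if marked, ~0 if not."""
    score = float(np.mean(audio * _chips(audio.size, key)))
    return score > 0.5 * ALPHA


if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 16_000)               # 1 s of audio at 16 kHz
    speech = 0.2 * np.sin(2 * np.pi * 220.0 * t)    # stand-in for a recording
    print(is_watermarked(speech))                   # False: unmarked audio
    print(is_watermarked(embed_watermark(speech)))  # True: marked at capture
```

Flipping the usual problem this way is the appeal of the approach: rather than trying to spot ever-better fakes, the hardware vouches for genuine recordings at the moment of capture.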
Article Topics
biometric liveness detection | biometrics | deepfake detection | Entrust | generative AI | OpenAI | Pindrop | presentation attack detection | synthetic data