Trust Stamp face biometrics layer addresses voice vulnerability to deepfakes

NPR shows audio tools are not always up to the task of deepfake detection
Trust Stamp has launched a program that aims to help financial institutions fast-track their deepfake detection capabilities with multi-factor biometric authentication. A press release from the face biometrics firm positions Trust Stamp’s biometric face authentication product “as an alternative for, or supplement to, voice-based systems” that are vulnerable to deepfake voice attacks.

Audio and video deepfakes have become a pressing problem, as fraudsters exploit cheap and easily accessible voice cloning and generative AI software. Political figures and celebrities have had their likenesses cloned, leading to a spike in the market for biometrics software, verification tools and preventative ID authentication.

For businesses, however, perhaps no case has rattled the nerves as much as the infamous deepfake fraud story out of Hong Kong.

“In February of this year we saw a widely publicized example where a finance worker in Hong Kong paid out $25,000,000 based on a video call that included a deep fake representation of his company’s CFO,” says Trust Stamp President Andrew Gowasack, describing what he calls “CEO Fraud.”

“Although there should be significant focus on attacks on the interaction between the customer and the financial institution, deep fake technology can also be used for attacks within the customer enterprise resulting in the financial institution receiving instructions that have every appearance of being legitimate, having been initiated based upon a fraudulent communication within the enterprise,” says Gowasack. “Fraud of this type is typically commissioned by email via a spear phishing attack, but with voice and video deepfakes it can now be used for instructions given by Zoom or other video technologies.”

Trust Stamp says banks, credit unions and other enterprises that use voice instructions or voice-based authentication are particularly vulnerable to synthetic audio deepfakes, and should consider integrating the company’s multi-factor biometric authentication tools for a robust defense.

“We have never offered voice-based authentication because it appeared probable that it would be spoofed by fast advancing AI-technology,” Gowasack says. “Although OpenAI have stated that they are not currently releasing their Voice Engine for public use there are many alternative generative AI engines available including open-source models. Our multimodal authentication tool using facial authentication with proof of life, paired with optional device authentication, can quickly be integrated into current authentication systems as an alternative for, or supplement to, voice-based systems and can also be initiated as a stand alone service for high-risk transactions.”

NPR tests audio deepfake detection providers

The social and political implications of AI-generated deepfakes are becoming clearer. In an article for NPR, Sarah Barrington, an AI and forensics researcher at the University of California, Berkeley, describes how convincing deepfakes can rattle foundational trust.

“If we label a real audio as fake, let’s say, in a political context, what does that mean for the world?” Barrington says. “We lose trust in everything. And if we label fake audio as real, then the same thing applies. We can get anyone to do or say anything and completely distort the discourse of what the truth is.”

In an informal experiment, Wisconsin Public Radio, an NPR member station, tested its reporters’ cloned voices against audio deepfake detection tools from three providers: Pindrop Security, AI or Not and AI Voice Detector.

A summary says that “NPR submitted 84 clips of five to eight seconds to each provider. About half of the clips were snippets of real radio stories from three NPR reporters.” NPR generated the rest using technology from the company PlayHT, which cloned the voices of the same reporters saying the same words as in the real clips.

Of the three, only Pindrop, which is available to businesses but not individuals, performed well, detecting all but three deepfake samples.

Barrington says this shows the need to continue developing novel approaches to deepfake detection, as fraud threatens not just the arenas of global politics and finance, but also interpersonal interactions through tactics like telephone fraud. Machine learning algorithms for detection are unlikely to have been trained on the voices of someone’s non-famous family members, so determining whether cash-strapped Uncle Jimmy’s call is coming from a fraudster is even more complex than detecting a deepfake of a Joe Biden or Taylor Swift – which is already difficult enough.

But it’s worth the effort, when the stakes are nothing less than our fundamental trust in reality.
