Deepfake voice attacks are here to put detection to the real-world test


It’s put up or shut up time for biometric software companies and public researchers claiming they can detect deepfake voices.

Someone sent robocalls in the United States as part of a disinformation campaign, purporting to be President Joe Biden. The voice sounded like Biden telling people not to vote in a primary election, but it could have been AI-generated. No one, not even the vendors selling deepfake detection software, can agree on which it was.

Software maker ID R&D, a unit of Mitek, is stepping into the market. It responded to the previous big voice-cloning scandal in the U.S., involving pop star Taylor Swift, with a video showing that its voice biometrics liveness code can distinguish real recordings from digital impersonations.

The electoral fraud attempt poses a different kind of challenge.

A Bloomberg article this week looked at what might have been the first deepfake audio dirty trick played on Biden, but no one knows whether the voice was an actor or AI.

Bloomberg cited two other detector makers, ElevenLabs and Clarity, and still could not reach a definitive answer.

ElevenLabs’ software found it unlikely that the misinformation attack was the result of voice cloning. Clarity disagreed, apparently finding it 80 percent likely to be a deepfake.

(ElevenLabs, which focuses on creating voices, became a unicorn this month. The company raised an $80 million series B, and executives said it is valued at more than $1 billion, according to Crunchbase.)

As is often the case, some hope springs from research, and in this case, it’s qualified.

A team of students and alums from the University of California, Berkeley say they have developed a detection method that, in their testing, makes few to no errors.

Of course, that result comes from a lab setting, and the research team cautions that the method’s output requires “proper context” to be understood.

The team gave a deep-learning model raw audio to process, extracting multi-dimensional representations. The model uses these so-called embeddings to separate real voices from fakes.
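The general shape of such a pipeline can be sketched as follows. This is a toy illustration, not the Berkeley team's actual model: the "embedding" here is a few hand-computed summary statistics standing in for what a deep network would learn, and the classifier weights are made up for demonstration.

```python
# Hypothetical sketch of embedding-based deepfake voice detection.
# A real system would learn the embedding with a deep network trained
# on labeled real/fake audio; all names and weights here are invented.
import math

def embed(audio):
    """Map raw audio samples to a small fixed-size embedding.
    Stand-in features: mean amplitude, standard deviation, and
    zero-crossing rate (a crude proxy for spectral content)."""
    n = len(audio)
    mean = sum(audio) / n
    var = sum((x - mean) ** 2 for x in audio) / n
    zcr = sum(1 for a, b in zip(audio, audio[1:]) if a * b < 0) / (n - 1)
    return [mean, math.sqrt(var), zcr]

def classify(embedding, weights, bias):
    """Logistic classifier over the embedding; scores near 1.0
    would indicate 'likely fake' under this toy convention."""
    z = sum(w * e for w, e in zip(weights, embedding)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy usage: a synthetic tone in place of a real recording,
# with illustrative (untrained) classifier parameters.
samples = [math.sin(0.1 * t) for t in range(1000)]
score = classify(embed(samples), weights=[0.2, -1.5, 3.0], bias=-0.5)
```

In a production detector, the fixed statistics in `embed` would be replaced by the network's learned representation, and the decision threshold would be calibrated on held-out real and synthetic speech.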
