
Deepfakes: Some progress in video detection, but it’s back to the basics for faked audio

A pair of developments are being reported in efforts to thwart deepfake video and audio scams. Unfortunately, in the case of digitally mimicked voice attacks, the advice is old school.

An open-access paper published by SPIE, the international professional association for optics and photonics, reports a new algorithm that is said to have achieved a precision of 99.62 percent in detecting deepfake video, with an overall accuracy of 98.21 percent.
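Precision and accuracy are different measurements, which is why the paper reports two numbers. A minimal sketch of how the two metrics are computed from a detector's confusion-matrix counts follows; the counts below are invented for illustration and chosen only so the ratios land near the reported figures, not taken from the paper.

```python
# Hypothetical illustration of the two metrics reported for the detector.
# The counts (tp, tn, fp, fn) are made up, not from the SPIE paper.

def precision(tp, fp):
    """Share of clips flagged as deepfakes that really are deepfakes."""
    return tp / (tp + fp)

def accuracy(tp, tn, fp, fn):
    """Share of all clips, real and fake, that were classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative counts: 520 fakes caught, 460 real clips passed,
# 2 false alarms, 16 fakes missed.
tp, tn, fp, fn = 520, 460, 2, 16
print(f"precision: {precision(tp, fp):.2%}")
print(f"accuracy:  {accuracy(tp, tn, fp, fn):.2%}")
```

A detector can have very high precision (almost no false alarms) while still missing some fakes, which is what separates the two percentages.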

It has been three years since the threat of deepfakes broke big in the global media, and during that time, efforts have quickly grown more sophisticated.

Fear about misuse (beyond simply grafting the faces of celebrities on those of porn actors) has sometimes been breathless, with some observers warning that a key military figure in a nuclear-armed nation could appear to issue emergency orders to launch missiles.

The paper’s authors, two from Thapar Institute of Engineering and Technology and the third from Indraprastha Institute of Information Technology (both in India), claim a research milestone.

They say they are the first to make publicly available a database of deepfakes of famous politicians manipulated by generative adversarial networks. The database comprises 100 source and 100 destination videos.

What is more, they claim to be the first with an algorithm that can spot deepfakes of politicians within two seconds of the start of a clip. The team says it used temporal sequential frames taken from clips to pull off the feat.

Biometrics providers ID R&D and NtechLab finished among the leaders in a recent Deepfake Detection Challenge for video.

Voice fraud detection efforts continue apace, too.

Until the pandemic, when people from all walks of life began routinely participating in video calls, deepfake audio attacks looked more menacing over the medium term.

Comparing the two threats, it just seemed more likely that a convincing faked call could rattle a key mid-level staff member into helping the boss out in an emergency. The odds have evened a bit.

A white paper published by cybersecurity firm Nisos sketches five incidents involving deepfake audio attacks.

Nisos writes in the marketing document that it investigated one such attack, including the original synthetic audio: the faked voice of a company’s CEO “asking an employee to call back to ‘finalize an urgent business deal.’ ”

Wisely, the employee immediately called the legal department. The number the would-be victim was asked to call turned out to be a burner line from a VoIP service.

Nisos engineers studied the recording with Spectrum3d, a spectrogram tool. That analysis, along with simply listening to the message and comparing it to a known human voice, yielded some data but apparently no smoking gun.
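Nisos has not published its exact workflow, but the general technique of inspecting a suspect recording’s spectrogram can be sketched in a few lines. The sketch below uses SciPy rather than the Spectrum3d tool named above, and the file name "suspect.wav" is a placeholder; it simply shows where the recording’s energy is concentrated, the kind of first-pass look an analyst might take before comparing against a known human voice.

```python
# A minimal spectrogram-inspection sketch (assumes numpy and scipy;
# this is NOT the Spectrum3d tool, and "suspect.wav" is a placeholder).
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def summarize_spectrogram(path):
    """Return the sample rate and dominant frequency of a WAV recording."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:               # mix stereo down to mono
        samples = samples.mean(axis=1)
    freqs, times, power = spectrogram(samples, fs=rate)
    # Average the power over time, then find where most energy sits.
    # Synthetic speech can show unnaturally smooth or band-limited energy,
    # so looking at the energy distribution is a reasonable starting point.
    mean_power = power.mean(axis=1)
    peak_freq = freqs[np.argmax(mean_power)]
    return rate, peak_freq

# Example usage (path is hypothetical):
# rate, peak = summarize_spectrogram("suspect.wav")
# print(f"sample rate {rate} Hz, dominant frequency ~{peak:.0f} Hz")
```

As the incident suggests, this kind of signal analysis is suggestive rather than conclusive: a clean spectrogram does not prove a voice is real.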

Ultimately, the best advice that Nisos or anyone else in the industry can offer is to stress common sense: if something about a call smells fishy, call legal.

