New deepfake generation and detection methods signal new AI arms race

Categories Biometric R&D  |  Biometrics News
New methods of creating deepfakes are being developed with advanced artificial intelligence techniques, alongside promising detection solutions. Fake people and fake text are the latest fronts in the deepfake debate, while a blockchain-based video authentication tool could expose deepfake footage as having been altered.

NVIDIA Software Engineer Phillip Wang has created a website demonstrating the ability of a technique using generative adversarial networks (GANs) to generate images of fake people, My Modern Met reports. Thispersondoesnotexist.com generates images with StyleGAN, a new method developed by NVIDIA that makes it possible to train a system to build high-quality artificial images at up to 1024 x 1024 resolution. LyrnAI describes the system as independently modifying its input at different levels to control coarser facial features, such as pose and face shape, and finer features, such as hair color.

The new GPT2 text-generating tool from OpenAI predicts what should come next based on text input, The Guardian reports. OpenAI, which is backed by Elon Musk and others, has departed from its usual research release practice for GPT2 due to the realistic results it produces, and the potential for misuse as “deepfakes for text.”
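The core task GPT2 performs, predicting likely continuations of input text, can be illustrated with a deliberately simplified bigram model. This toy counts which word follows which in a tiny corpus; it is only a stand-in for the large transformer network GPT2 actually uses, and the corpus here is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for GPT2's 40 GB training set.
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words have followed it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen after `word`, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" twice, "mat" only once
```

Where this toy can only echo patterns it has literally seen, GPT2 generalizes from its training data, which is what makes its output realistic enough to raise the "deepfakes for text" concern.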

OpenAI research director Dario Amodei says the GPT2 models are 12 times bigger, and the data set 15 times bigger and broader, than previous state-of-the-art systems. The data set was collected by crawling Reddit for links with more than three votes, and totals 40 GB of text.

“We need to perform experimentation to find out what they can and can’t do,” OpenAI Head of Policy Jack Clark told the Guardian. “If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”

Preventing video deepfakes

As new deepfake risks are discovered, startup Amber has developed Authenticate, a tool that marks video with cryptographic hashes that can be used to ascertain whether it has been tampered with, according to Wired.

The app runs in the background on a device while video is captured, generating hashes at user-specified intervals and storing them on a public blockchain built on Ethereum. Any change to the file’s audio or video will cause the algorithm to produce a different hash the next time the footage is checked.
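The verification step Allibhai describes as "binary" reduces to recomputing hashes and comparing them against the published record. The sketch below shows the idea with SHA-256 over fixed-size segments; the segment size is illustrative, and a plain Python list stands in for the public Ethereum-based ledger Amber actually uses:

```python
import hashlib

def segment_hashes(data: bytes, interval: int = 1024) -> list:
    """Hash fixed-size segments of a recording; interval size is illustrative,
    standing in for Authenticate's user-specified hashing intervals."""
    return [hashlib.sha256(data[i:i + interval]).hexdigest()
            for i in range(0, len(data), interval)]

# At capture time: compute segment hashes and publish them
# (a list stands in for the public blockchain here).
original = b"frame-data-" * 500
ledger = segment_hashes(original)

# At playback time: rehash and compare. Editing even one byte of one
# segment changes that segment's hash, so the comparison fails.
tampered = bytearray(original)
tampered[2048] ^= 0xFF  # flip one byte in the third segment
print(segment_hashes(bytes(tampered)) == ledger)  # False: tampering detected
print(segment_hashes(original) == ledger)         # True: untouched footage
```

Because the hashes are anchored on a public blockchain at capture time, a verifier does not need to trust the holder of the footage, only the ledger: either every recomputed hash matches the published one or it does not.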

“There’s a systemic risk with police body cameras across many manufacturers and models,” says Amber CEO Shamir Allibhai. “What we’re worried about is that, when you couple that with deep fakes, you can not only add or delete evidence but what happens when you can manipulate it? Once it’s entered into evidence it’s really hard to say what’s a fake. Detection is always one step behind. With this approach it’s binary: Either the hash matches or it doesn’t, and it’s all publicly verifiable.”

Allibhai will present the technology to Department of Defense and Homeland Security officials at a Defense Advanced Research Projects Agency (DARPA) showcase this week, and Wired reports that DHS has already expressed interest in a blockchain-based video authentication technology from Factom.

Amber research consultant Josh Mitchell has reportedly found vulnerabilities in five models of mainstream body cameras, and says there is no authentication mechanism on any of them. Allibhai is self-financing Amber, and Mitchell says Authenticate is compatible with at least some mainstream body camera brands.

Pindrop CEO Vijay Balasubramaniyan recently told Biometric Update that while current technology can detect most audio and video fakes, the problem remains a threat to public discourse.
