
New deepfake generation and detection methods signal new AI arms race

New methods of creating deepfakes are being developed with advanced artificial intelligence techniques, alongside promising detection solutions. Fake people and fake text are the latest potential fronts in the deepfake debate, while a blockchain-based video authentication tool may make it possible to expose deepfake video footage as having been altered.

NVIDIA Software Engineer Phillip Wang has created a website demonstrating the ability of generative adversarial networks (GANs) to produce images of fake people, My Modern Met reports. Thispersondoesnotexist.com generates images with NVIDIA's new StyleGAN method, which makes it possible to train a system to build high-quality artificial images at resolutions up to 1024 x 1024. LyrnAI describes the system as independently modifying its input at different levels to control coarser facial features, such as pose and face shape, as well as finer features, such as hair color.

The new GPT-2 text generation tool from OpenAI predicts what should come next based on text input, The Guardian reports. OpenAI, which is backed by Elon Musk and others, has departed from its usual research release practice for GPT-2 due to the realistic results it produces and the potential for misuse as “deepfakes for text.”

OpenAI research director Dario Amodei says the GPT-2 models are 12 times bigger, and the data set is 15 times bigger and broader, than previous state-of-the-art systems. The data set was collected by crawling Reddit for links with more than three votes, and totals 40 GB of text.
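The “predict what comes next” idea behind GPT-2 can be illustrated at toy scale with a bigram model: count which word follows which in a training text, then emit the most frequent continuation. GPT-2 itself uses a large transformer network trained on 40 GB of text, not word counts; the function names and sample text below are purely illustrative.

```python
from collections import Counter, defaultdict


def train_bigram(text: str) -> dict:
    """Count which word follows which: a toy stand-in for language modeling."""
    model = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model


def predict_next(model: dict, word: str) -> str:
    """Return the most frequent continuation seen in training, or '' if unseen."""
    if word not in model:
        return ""
    return model[word].most_common(1)[0][0]
```

Repeatedly feeding each prediction back in as the new input yields generated text, which is the same autoregressive loop GPT-2 runs, only with a vastly more capable model at each step.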

“We need to perform experimentation to find out what they can and can’t do,” OpenAI Head of Policy Jack Clark told the Guardian. “If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”

Preventing video deepfakes

As new deepfake risks are discovered, startup Amber Authenticate has developed a way to mark video with a cryptographic hash that can be used to ascertain whether it has been tampered with or not, according to Wired.

The app runs in the background on a device while video is captured, and at user-specified intervals generates hashes that are stored on a public blockchain built on Ethereum. Any change to the file’s audio or video will cause the algorithm to generate a different hash the next time the file is checked.
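Amber’s implementation details are not public, but the hash-then-compare workflow described above can be sketched in a few lines. The following is a minimal illustration assuming fixed-size segments, with an in-memory list standing in for the Ethereum ledger; the segment size, constants, and function names are all hypothetical.

```python
import hashlib

SEGMENT_BYTES = 1024  # hypothetical stand-in for a user-specified hashing interval


def segment_hashes(data: bytes, segment_bytes: int = SEGMENT_BYTES) -> list:
    """Hash fixed-size segments of a recording with SHA-256."""
    return [
        hashlib.sha256(data[i:i + segment_bytes]).hexdigest()
        for i in range(0, len(data), segment_bytes)
    ]


# Stand-in for the public Ethereum ledger: an append-only list of hashes.
ledger = []


def record(data: bytes) -> None:
    """Anchor the recording's segment hashes on the (simulated) ledger."""
    ledger.extend(segment_hashes(data))


def verify(data: bytes) -> bool:
    """Binary check: every recomputed segment hash must match the ledger."""
    return segment_hashes(data) == ledger
```

This captures the binary property Allibhai describes: any single-byte change to the file produces a different SHA-256 digest for that segment, so the recomputed hashes no longer match the anchored ones, and the check is verifiable by anyone with read access to the ledger.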

“There’s a systemic risk with police body cameras across many manufacturers and models,” says Amber CEO Shamir Allibhai. “What we’re worried about is that, when you couple that with deep fakes, you can not only add or delete evidence but what happens when you can manipulate it? Once it’s entered into evidence it’s really hard to say what’s a fake. Detection is always one step behind. With this approach it’s binary: Either the hash matches or it doesn’t, and it’s all publicly verifiable.”

Allibhai will present the technology to Department of Defense and Homeland Security officials at a Defense Advanced Research Projects Agency (DARPA) showcase this week, and Wired reports that DHS has already expressed interest in a blockchain-based video authentication technology from Factom.

Amber research consultant Josh Mitchell has reportedly found vulnerabilities in five models of mainstream body cameras, none of which has any authentication mechanism. Allibhai is self-financing Amber, and Mitchell says Amber Authenticate is compatible with at least some mainstream body camera brands.

Pindrop CEO Vijay Balasubramaniyan recently told Biometric Update that while current technology can detect most audio and video fakes, the problem remains a threat to public discourse.
