Threat of deepfakes draws legislator and biometrics industry attention

A U.S. senator plans to reintroduce legislation against “deepfake” media in the coming year as legislators join members of the artificial intelligence and biometrics industries in efforts to curb complex spoofs, Axios reports.

“Deepfakes — video of things that were never done with audio of things that were never said — can be tailor-made to drive Americans apart and pour gasoline on just about any culture war fire,” Senator Ben Sasse told Axios. “Even though this is something that keeps our intelligence community up at night, Washington isn’t really discussing the problem.”

University of Maryland law professor Danielle Citron, who co-authored an influential report on deepfakes, tells Axios that a system to automatically detect forgeries is more important than legal changes, but that no such system is close at hand.

Pindrop CEO Vijay Balasubramaniyan suggests that this claim reflects a disconnect between the tech industry and legal experts. He told Biometric Update in an interview that the problem is very real, and a threat both to public discourse and to identity verification and authentication services, particularly given the frequency of data breaches and the increasing likelihood that audio samples will be included in breached data. However, he says, current technology can identify this kind of fake audio content with accuracy above 90 percent, and fake video is even easier to detect due to the challenge of syncing sound and picture.

“The human voice, with millions of years of evolution, has certain characteristics,” he explains. “Machines don’t care about all of that, they just want it to sound like you. You can use that dichotomy to detect deep fakes.”
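
To make that dichotomy concrete, here is a minimal sketch of the general approach Balasubramaniyan describes: summarize each clip with spectral features that reflect vocal-tract constraints, then train a classifier to separate genuine recordings from machine-generated ones. This is illustrative only, not Pindrop's system; the file paths, the librosa/scikit-learn feature choices, and the logistic-regression model are all assumptions.

```python
# Toy real-vs-synthetic speech classifier. Sketch under stated assumptions;
# paths and feature choices are hypothetical, not Pindrop's pipeline.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def clip_features(path, sr=8000):
    """Fixed-length summary of a clip, using features tied to vocal-tract physics."""
    y, _ = librosa.load(path, sr=sr)                    # resample to telephony rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # (20, n_frames)
    flatness = librosa.feature.spectral_flatness(y=y)   # (1, n_frames)
    feats = np.vstack([mfcc, flatness])                 # (21, n_frames)
    # Mean and std over time give one fixed-length vector per clip.
    return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])

# Hypothetical corpus: genuine recordings vs. TTS/voice-clone output.
real_paths = ["real/clip001.wav"]   # placeholder paths
fake_paths = ["fake/clip001.wav"]
X = np.array([clip_features(p) for p in real_paths + fake_paths])
labels = np.array([0] * len(real_paths) + [1] * len(fake_paths))

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2%}")
```

Production systems use far richer features and deep networks, but the principle is the same: speech that sounds right to a human ear can still fall outside the feature distribution a real vocal tract produces.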

Efforts are ongoing, however, as deepfakes become more sophisticated. Google is making a large dataset of synthetic speech, generated by the company’s text-to-speech (TTS) deep learning models, available for use in the 2019 ASVspoof challenge. Balasubramaniyan credits Google for recognizing the importance of the challenge and the need for more synthetic data to ensure that testing is robust and that technologies capable of detecting deepfakes continue to improve. Google explains its reasoning in a blog post announcing the decision, noting that among other potential risks, deepfakes allow bad actors to more credibly claim that real content is fake.

Balasubramaniyan notes that fakes made by stitching together samples can be more difficult to detect than speech synthesized by a computer, but that variables such as the quality and quantity of the target’s source audio make a significant difference in how easily a fake is detected. Even so, the fact that deepfakes are made to fool human ears makes them susceptible to detection by deep neural networks, he says.

“When you take 8,000 samples every second, because it’s a human that’s producing it, based on millions of years of evolution, there are only certain configurations that can happen,” Balasubramaniyan points out. “But when we look at these machines and what they produce, they’re optimizing for making your ear hear a certain thing.”
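
The “8,000 samples every second” figure is the standard telephony sampling rate, which bounds what any listener, human or machine, can actually hear. A minimal sketch of the arithmetic follows; the frame sizes are typical signal-processing values, not anything Pindrop has disclosed.

```python
# Generic signal-processing arithmetic behind the quote; not Pindrop specifics.
sample_rate = 8000              # telephony-standard samples per second
nyquist_hz = sample_rate / 2    # Nyquist limit: only content below 4,000 Hz survives

clip_seconds = 3.0
n_samples = int(sample_rate * clip_seconds)      # 24,000 raw samples in a 3 s clip
frame_len, hop = 200, 80                         # 25 ms analysis frames, 10 ms hop
n_frames = 1 + (n_samples - frame_len) // hop    # ~298 frames for a detector to score
print(nyquist_hz, n_samples, n_frames)
```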

Multiple members of Congress other than Sasse are consulting with legal scholars and state policymakers to gain a better understanding of the issue, and laws against deepfakes are also included in a controversial bill in the New York state legislature, according to Axios.

Electronic Frontier Foundation Civil Liberties Director David Greene expressed concern that the legislation could harm free speech. Balasubramaniyan agrees.

“I’m of the opinion that what you will prevent will be low-hanging fruit, like people creating parody videos,” he says. “A motivated attacker doesn’t live in the U.S. If someone really wants to do damage, they don’t care about your laws.”

Sasse’s bill would target both the creation and distribution of deepfakes, but University of Miami law professor and Cyber Civil Rights Initiative President Mary Anne Franks says proving that people knowingly circulated deepfakes may be nearly impossible.

The problem is not intractable, Balasubramaniyan argues. With continued industry innovation and collaboration, liveness detection can keep ahead of the best efforts of fraudulent actors.

“There’s one great thing,” he says. “We’ve seen this with the open source community. When a bunch of good people get together, the number of good people hopefully is greater than the number of bad people.”
