Innovations in deepfake detection can’t come fast enough
From new studies of voice deepfakes to innovative ways of telling a fake face by the reflections in a person’s eyes, scientists are building tools to fight fraud and scams. Despite these advances, industry research shows many businesses are not confident in their ability to detect deepfakes.
The cost of deepfake fraud
While nearly 60 percent of businesses consider video and audio deepfakes a serious threat, 44 percent say they aren’t very confident in their ability to detect them, a new survey from identity verification company Regula has shown.
The company’s Deepfake Trends 2024 study also shows that 92 percent of businesses have dealt with identity fraud, while half have experienced audio or video deepfake fraud. That is a big jump from 2022, when only 37 percent had experienced identity fraud and 29 percent deepfake fraud.
Regula conducted the survey by interviewing 575 decision-makers from businesses in Germany, Mexico, the UAE, Singapore and the U.S. The research covered the aviation, crypto, financial services (both traditional and fintech), healthcare, law enforcement, technology and telecommunications industries.
Across industries, businesses have lost an average of nearly US$450,000 to deepfakes, with 28 percent reporting losses exceeding $500,000, says Regula. Losses in the financial services sector were higher, averaging a little over $600,000, while fintech businesses lost an average of more than $630,000.
Companies are most concerned about deepfakes bypassing biometric security systems, with 42 percent identifying identity theft as the greatest risk. The identity verification company, however, warns that tackling deepfakes in-house is not always the best choice.
Companies that built their own identity verification systems had higher average losses ($515,000) compared to those using ready-made solutions ($444,000).
Help is on the way
While deepfakes are spreading faster than expected, detection tools and research into AI-generated images are also booming.
Datasets are getting better
Datasets are crucial for digital forensics and deepfake research.
A group of researchers from China’s prestigious Peking University and tech giant Tencent’s AI research department Youtu Lab has published a paper proposing a new benchmark for deepfake detection.
The DF40 deepfake detection dataset comprises data generated by 40 distinct deepfake techniques, aiming to aid detection of current state-of-the-art (SOTA) deepfakes and AI-generated content (AIGC). The dataset includes realistic deepfake data created by popular generation tools and methods such as HeyGen, MidJourney and DeepFaceLab, and offers million-scale deepfake data for both images and videos. Finally, DF40 provides alignment between fake methods and data domains, the researchers explain on their GitHub page.
One limitation is the lack of comprehensive analysis for video-level detectors, which is planned for future work, they add.
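A benchmark like DF40 is ultimately used to score real/fake detectors. The sketch below is hypothetical and not part of the DF40 API: it assumes a detector that outputs a probability that a sample is fake, and computes the accuracy and error rates such benchmarks typically report.

```python
# Hypothetical evaluation loop for a deepfake detection benchmark.
# The scores and labels below are toy placeholders, not DF40 data.

def evaluate(scores_and_labels, threshold=0.5):
    """Compute accuracy, false positive and false negative rates.

    scores_and_labels: iterable of (score, label) pairs, where score is
    the detector's probability that a sample is fake, and label is
    1 for fake, 0 for real.
    """
    tp = tn = fp = fn = 0
    for score, label in scores_and_labels:
        predicted_fake = score >= threshold
        if label == 1 and predicted_fake:
            tp += 1          # fake correctly flagged
        elif label == 0 and not predicted_fake:
            tn += 1          # real correctly passed
        elif label == 0 and predicted_fake:
            fp += 1          # real wrongly flagged as fake
        else:
            fn += 1          # fake that slipped through
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "false_positive_rate": fp / max(fp + tn, 1),
        "false_negative_rate": fn / max(fn + tp, 1),
    }

# Toy scores: four fakes (label 1) and four reals (label 0).
results = evaluate([
    (0.9, 1), (0.8, 1), (0.4, 1), (0.7, 1),
    (0.1, 0), (0.2, 0), (0.6, 0), (0.3, 0),
])
# One missed fake and one false alarm: accuracy 0.75, FPR and FNR 0.25.
```

In practice, benchmark papers also report threshold-free metrics such as AUC, but the accuracy/FPR/FNR split above is the part that maps directly onto the human-versus-model comparisons discussed below.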
Listen to the research
Researchers from the University of Florida have concluded what they describe as the largest study on audio deepfakes.
The study recruited 1,200 people, who were presented with samples from the three most widely cited deepfake datasets. The results showed that humans were more likely than machine learning models to correctly guess whether an audio clip was real or a digital fake, scoring an accuracy of 73 percent.
“Machine learning models suffer from significantly higher false positive rates, and experience false negatives that humans correctly classify when issues of quality or robotic characteristics are reported,” the study says.
The study also delved into how humans make their classification decisions, finding that they are often fooled by details generated by machines, like British accents and background noises.
The tiniest details
Other researchers are diving into the tiniest details to detect deepfakes.
Sejun Song, a professor at Augusta University in Georgia, has developed a unique method to determine whether a person is real or a deepfake by analyzing the reflections in an individual’s eyes. The solution, named EyeDentity, is designed to be used during facial recognition checks and uses AI to analyze the shape of the reflections and details such as the environment, color and light.
The research won Song third prize at an innovation competition hosted by the National Security Innovation Network, a U.S. Department of Defense (DoD) program.
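The intuition behind reflection-based checks like this can be illustrated with a toy example. In a genuine photo, both corneas reflect the same scene, so their highlight patterns tend to agree; generative models often render each eye independently, breaking that consistency. The patches and comparison below are an assumption-laden sketch, not Song's actual method:

```python
# Toy illustration of checking reflection consistency between eyes.
# Each "eye" is a tiny made-up brightness patch; real systems work on
# image crops and use learned models rather than a single correlation.

def reflection_similarity(left_eye, right_eye):
    """Normalized cross-correlation between two equal-size patches.

    Returns a value in [-1, 1]; values near 1 mean the highlight
    patterns agree, values near -1 mean they are inverted.
    """
    n = len(left_eye)
    mean_l = sum(left_eye) / n
    mean_r = sum(right_eye) / n
    num = sum((l - mean_l) * (r - mean_r)
              for l, r in zip(left_eye, right_eye))
    den_l = sum((l - mean_l) ** 2 for l in left_eye) ** 0.5
    den_r = sum((r - mean_r) ** 2 for r in right_eye) ** 0.5
    return num / (den_l * den_r)

# Consistent highlights (plausible real photo) vs. mismatched ones.
real_left  = [0.1, 0.9, 0.8, 0.1]
real_right = [0.1, 0.8, 0.9, 0.1]
fake_right = [0.9, 0.1, 0.1, 0.8]

score_real = reflection_similarity(real_left, real_right)  # near 1
score_fake = reflection_similarity(real_left, fake_right)  # negative
```

A detector built on this idea would flag a face when the similarity between the two eyes' reflections falls below some calibrated threshold, alongside the color and lighting cues the article describes.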
Cheaper to detect than generate
Even as the field of research examining the threat of deepfakes grows, the number of tools for making AI-generated content is also rising.
At the end of last year there were 120 deepfake tools, but by March of this year there were 358. New attacks are also appearing constantly, according to voice biometrics company Pindrop. The company made headlines this year after identifying the original AI engine used to create the Joe Biden deepfake audio.
The good news is that it is four orders of magnitude cheaper to detect a deepfake than to generate one, the company’s co-founder and CEO Vijay Balasubramaniyan said in a recent interview with business news site IPO Edge.
“That cost asymmetry allows us to stay ahead,” he says.