Deepfake detection is a continuous process of keeping up with AI-driven fraud: BioID

The latest Lunch Talk from the European Association for Biometrics (EAB) focuses on the threat of deepfakes and what can be done to reliably detect their misuse. BioID has a solution, and the German biometrics firm’s VP for Business Development Ann-Kathrin Freiberg explains how it came to be, and why deepfake detection is an essential tool to fight political disinformation, identity theft, romance scams and video call CEO fraud.
BioID is a “research-focused company,” Freiberg says – so it knows well the extent of what can be done with deepfake technology. While pornography remains the driving force for deepfakes, political interference is a looming threat, as adversarial nations aim to meddle in the elections of their rivals.
“U.S. elections have strongly been influenced by other nations,” Freiberg says. She notes how far fake media can spread in a short time. “The issue is, even if you can find out later that the material is not genuine, the damage has already been done.”
Technological development, she says, is outpacing media literacy. Adults tend to believe they know what’s what, but new AI tools for deception have made a heightened level of awareness necessary to avoid being tricked into sharing disinformation. The result is an increased risk of identity theft, reputational damage and financial losses – but also, on a societal level, a threat to public security and free speech.
The human eye, Freiberg says, is not trained to detect deepfakes, and viewers will have difficulty identifying them unless they are alert to the threat. Even with training, it can be hard to spot deepfake images.
For now, there are tells. Deformed shapes, blurred textures, a lack of blinking and sketchy teeth are all indicators that media has been generated using AI or deep learning. But the pace of development means these artifacts will likely disappear sooner rather than later.
“If we wait a few more months, then those unrealistic parts of a deepfake probably won’t be there anymore,” Freiberg says, “because the technology develops so quickly and gets better and better.”
At the moment, the deepfake arsenal contains face swaps, face reenactment, face attribute manipulation (all of which modify existing images) and generative AI that synthesizes entirely fake images. Text-to-video platforms such as Sora and Pika Labs enable the creation of deepfake videos through prompts.
But more tactics are already looming, with tools like Picsi.AI offering hybrid versions of the deepfake craft. And the current limitations on time and capability will not last. Freiberg cites a statement from Netflix, which is “planning to have the possibility for individuals to wish for their movie within the next night, and then within one day a full movie will be created for them.”
Social media is a particular minefield. AI models already garner scads of followers on Instagram, pulling in thousands of euros a month for the agencies that create them. It may be a lucrative business proposition, but it is also a potential goldmine for fraudsters. Freiberg notes that “there are no checks right now on many of the platforms that your identity is real.”
BioID is part of the growing ecosystem of firms offering algorithmic defenses to algorithmic attacks. It provides an automated, real-time deepfake detection tool for photos and videos that analyzes individual frames and video sequences, looking for inter-frame or video codec anomalies. Its algorithm is the product of a German research initiative that brought together a number of institutions across sectors to collaborate on deepfake detection strategy. But it is also continuing to refine its neural network to keep up with the relentless pace of AI fraud.
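The inter-frame analysis described above can be illustrated with a toy sketch. BioID’s actual neural network is proprietary and not described in the talk; the feature vectors, distance metric and threshold below are purely hypothetical, standing in for whatever learned representations a real detector would use.

```python
# Toy sketch of inter-frame consistency checking, the general idea behind
# video deepfake detection described above. All features, metrics and the
# threshold are hypothetical illustrations, not BioID's actual method.

def frame_distance(a, b):
    """Euclidean distance between two per-frame feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def inter_frame_anomaly_score(frames):
    """Average jump between consecutive frames' features.

    Genuine video tends to change smoothly from frame to frame; face
    swaps often introduce abrupt, inconsistent jumps.
    """
    if len(frames) < 2:
        return 0.0
    jumps = [frame_distance(frames[i], frames[i + 1])
             for i in range(len(frames) - 1)]
    return sum(jumps) / len(jumps)

def looks_manipulated(frames, threshold=1.0):
    """Flag a clip whose average inter-frame jump exceeds a (made-up) threshold."""
    return inter_frame_anomaly_score(frames) > threshold
```

A smoothly varying sequence of features would score low and pass, while a sequence with sudden discontinuities between frames would be flagged; a production system would learn both the features and the decision boundary from data rather than hard-coding them.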
“We are in an ongoing fight of AI against AI,” Freiberg says. “We can’t just lean back and relax and sell what we have. We’re continuously working on increasing the accuracy of our algorithms.”
That said, Freiberg is not only offering doom and gloom. She points to the Ukrainian Ministry of Foreign Affairs AI ambassador, Victoria Shi, as an example of deepfake technology used with non-fraudulent intention.
The silver lining is reflected in the branding of BioID’s “playground” for AI deepfake testing. At playground.bioid.com, users can upload media to have BioID judge whether or not it is genuine.
Ultimately, Freiberg advocates for a holistic approach that combines media literacy with effective digital regulation and technical support, including watermarking or digital credentialing. Her quest to raise awareness continues with a free online Deepfake Detection Workshop hosted on October 22. Registration is through the EAB’s website.
Article Topics
BioID | biometric liveness detection | biometrics | biometrics research | deepfake detection | deepfakes | EAB | EAB 2024