New Persona release helps businesses detect AI-based face spoofs

Multi-signal approach crucial as visual quality of deepfakes continues to improve
Persona has announced “significant enhancements” to its AI-based face spoof detection capabilities, which a company blog says apply a “more holistic approach” to detecting spoofs created or enhanced with generative AI.

The enhancements include more powerful detection of visual artifacts left behind by AI models, improved detection for compromised hardware and for similarities across face spoof submissions, and increased monitoring of threat channels with AI advancements.

Persona has also integrated more than 25 fraud detection micromodels into its government ID and biometric selfie verifications in the past two months.

“To catch more types of face spoofs, we’ve increased the recall of the models powering our government ID and selfie verifications, and we’ve refined their precision to reduce false positives during automated analysis,” says the post.

Non-visual signals, such as indicators of compromised hardware, are also key, given the increasing sophistication of deepfakes, synthetic identities and other types of face spoofs. Suspicious patterns can flag fraudsters probing IDV systems to test new techniques.

Generative AI makes it tough for fraud detection to keep pace

Whether one is excited about generative AI’s possibilities or afraid of its potential for disruption, it is here and must be addressed as an amplifier of fraud. Persona says it has observed deepfake attacks increase by 50 times over the past few years.

“Given the 50x increase in deepfakes over the past few years, it’s evident that generative AI will continue to transform the fraud landscape,” says Rick Song, CEO of Persona, in a press release. “In 2024 alone, we helped customers detect and block over 75 million fraud attempts leveraging AI-based face spoofs.”

In short, it’s getting more difficult to keep pace with the evolution of AI-based face spoofs. The tech has been around long enough that different classes of spoofs have arisen, and the arsenal is quickly becoming more diverse.

Persona says that, “over the years, our micromodels and ensemble models have identified over 50 distinct classes of AI-based face spoofs – including face swaps, synthetic faces, and face morphs – that fraudsters used in their (unsuccessful) attempts to bypass our fraud detection capabilities.”

Moreover, “fraudsters leverage a variety of techniques – such as presentation attacks and injection attacks – to deploy AI-based face spoofs against identity verification systems.”

Meanwhile, humans are becoming less effective at detecting hyper-realistic AI-based face spoofs.

Fraud strategies need to be able to adapt to evolving attacks

That means it’s important to have a strategy that incorporates different classes of signals – including visual and non-visual, as well as larger patterns – and adapts over time. Persona has “observed fraudsters using both AI-based face spoofs and stolen selfies in injection attacks.”

“Without our non-visual signals, there’s a chance instances like these might have bypassed visual-based inspection.”
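To make the layered-signal idea concrete, here is a minimal, purely illustrative sketch of how visual and non-visual scores might be combined so that a visually convincing spoof can still be flagged. This is not Persona’s actual system; the class, weights and thresholds below are invented for illustration.

```python
# Hypothetical illustration only; not Persona's implementation.
# Combines a visual artifact score with non-visual signals so a spoof
# that passes visual inspection can still be escalated or rejected.

from dataclasses import dataclass

@dataclass
class SelfieSubmission:
    visual_artifact_score: float      # 0-1, output of a (hypothetical) deepfake-artifact model
    device_integrity_score: float     # 0-1, lower suggests compromised hardware
    similarity_to_known_spoofs: float # 0-1, resemblance to prior spoof submissions

def assess(sub: SelfieSubmission) -> str:
    # Reject outright if any single signal is a strong indicator on its own.
    if sub.visual_artifact_score > 0.9 or sub.similarity_to_known_spoofs > 0.95:
        return "reject"
    # Otherwise weigh the signals together; weights and cutoffs are arbitrary.
    risk = (0.5 * sub.visual_artifact_score
            + 0.3 * (1.0 - sub.device_integrity_score)
            + 0.2 * sub.similarity_to_known_spoofs)
    if risk > 0.6:
        return "reject"
    if risk > 0.3:
        return "manual_review"
    return "approve"

if __name__ == "__main__":
    # A visually clean injection attack caught by non-visual signals.
    print(assess(SelfieSubmission(0.2, 0.1, 0.4)))  # -> manual_review (risk ~0.45)
```

The point of the sketch is the design choice the article describes: no single detector is trusted in isolation, and non-visual evidence can override a clean visual result.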

A Gartner prediction says that by 2026, attacks using AI-generated face deepfakes could push up to 30 percent of enterprises to adopt AI-driven multi-channel identity verification and authentication solutions as a necessary defense. And rightly so: fraudsters continue to refine and improve their techniques. Persona notes that “just because your approach works today doesn’t mean it’ll still work tomorrow.”

Fighting AI face spoofs requires what Song calls “a proactive, adaptable approach.”

Persona’s holistic strategy promises a comprehensive signal library, flexibility, and security. “Our data analysis, engineering, and threat monitoring teams are continually curating new data sources, fraud signals, and detection models that businesses can apply either broadly or more strategically during active attacks.”
