Au10tix identifies new deepfake fraud tactic that probes for security vulnerabilities

The repeaters are here. I repeat: the repeaters are here.
The Q1 2025 Global Identity Fraud Report from Au10tix has identified and named a new AI-enabled fraud technique. A release defines “repeaters” as “minor variations of a single digital asset (face picture, image background, document number, etc.) that bad actors deploy in small numbers to test detection systems before launching cross-industry mega-attacks.”
Au10tix’s data shows the use of repeaters rising 33 percent between Q1 2024 and Q1 2025, enabled by the proliferation of Fraud-as-a-Service (FaaS) networks. Working something like minor, automated fraud minions, repeaters are low-profile, “designed to evade both KYC checks and biometric defenses by mimicking genuine facial behavior and spoofing liveness checks,” as they probe for security vulnerabilities to exploit.
The report calls repeaters “an early warning signal for businesses,” presaging the staging of a large-scale attack: once a few repeaters have beaten a firm’s KYC and identity verification checks, “those same assets can be deployed across multiple platforms in coordinated mega-attacks with minimal risk of detection.”
“Repeaters are the fingerprint of a new class of fraud: automated, AI-enhanced attacks that reuse synthetic identities and digital assets at scale,” says Yair Tal, CEO of Au10tix. “We’re proud to be at the forefront of detecting and blocking these attacks through advanced pattern recognition and real-time consortium validation.”
New forms of biometric fraud require adaptable fraud prevention strategies. So far, the only proven way to detect repeaters is by cross-checking through consortium validation: “once an ID has been received by any organization in the consortium, it is easily flagged if another consortium member receives it in any permutation.”
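The consortium cross-check described above can be pictured as a shared registry of asset fingerprints: a submission is flagged when it closely matches an asset already registered by another member, even after minor “permutations.” The sketch below is purely illustrative. The 64-bit fingerprints, the Hamming-distance threshold, and all names are this article’s assumptions, not Au10tix’s actual design.

```python
from dataclasses import dataclass, field

def hamming(a: int, b: int) -> int:
    """Count the bits that differ between two 64-bit fingerprints."""
    return bin(a ^ b).count("1")

@dataclass
class Consortium:
    # A "permutation" of the same asset should flip only a few bits of
    # its fingerprint; the threshold of 6 is an illustrative assumption.
    threshold: int = 6
    registry: list = field(default_factory=list)  # (member, fingerprint)

    def check_and_register(self, member: str, fp: int):
        """Return the earlier (member, fingerprint) pair if this asset
        was already seen at another consortium member, else None."""
        match = next(
            ((m, f) for m, f in self.registry
             if m != member and hamming(f, fp) <= self.threshold),
            None,
        )
        self.registry.append((member, fp))
        return match

consortium = Consortium()
original = 0x5A5A5A5A5A5A5A5A
consortium.check_and_register("bank_a", original)   # first sighting: no match
variant = original ^ 0b101                           # minor "permutation"
hit = consortium.check_and_register("lender_b", variant)
print(hit[0] if hit else None)  # bank_a
```

The point of the design is that no single member ever sees enough repeats to notice a pattern; only the shared registry does.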
To help organizations get into a better position to catch repeaters, Au10tix offers three actionable insights. The firm recommends shifting from static to behavioral detection, to track repetition across sessions, devices, and onboarding events. It says to embed consortium signals into fraud defenses: “fraud rings operate across industries, so protection should too.” And businesses should “audit for synthetic identity vulnerability” – in other words, get with the program, and find out just how exposed you are to deepfake fraud and its legion of manipulated image template invaders.
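The first recommendation, shifting from static to behavioral detection, amounts to judging assets by how often they recur rather than one submission at a time. A minimal sketch, assuming a per-organization tracker in which the fingerprint field, session and device identifiers, and the alert threshold are all hypothetical:

```python
from collections import defaultdict

class RepetitionTracker:
    """Flag an asset fingerprint once it recurs across too many
    distinct (session, device) onboarding contexts."""

    def __init__(self, max_sightings: int = 2):
        self.max_sightings = max_sightings          # illustrative threshold
        self.seen = defaultdict(set)                # fingerprint -> {(session, device)}

    def observe(self, fingerprint: str, session_id: str, device_id: str) -> bool:
        self.seen[fingerprint].add((session_id, device_id))
        # A static check would evaluate each event alone; here the signal
        # is the repetition itself.
        return len(self.seen[fingerprint]) > self.max_sightings

tracker = RepetitionTracker()
events = [
    ("face_93ab", "s1", "dev_A"),
    ("face_93ab", "s2", "dev_B"),
    ("face_93ab", "s3", "dev_C"),  # third distinct context -> alert
]
print([tracker.observe(*e) for e in events])  # [False, False, True]
```

Each onboarding event passes in isolation; only the accumulated repetition across sessions and devices trips the alert, which is the behavior-over-snapshot shift the report argues for.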