
Arsenal for deepfakes and injection attacks continues to grow

New tools and tactics are rewriting the fraud prevention playbook

Developers of generative AI like to promise endless possibilities, but at the moment, the tech has seemingly gotten just good enough to make deepfake fraud extremely easy. Indeed, while Meta’s would-be AI innovations fail some very public tests, the means to create synthetic media that looks and sounds enough like a real person to commit biometric identity fraud is accelerating – as illustrated by recent news from iProov, Regula and Reality Defender.

iProov discovers new deepfake injection tool for iOS

It’s one thing to sound the alarm about deepfakes and injection attacks, but actually finding and identifying the weapons is another. This is what makes iProov’s latest discovery so intriguing. In a new blog, the UK biometrics firm says it has uncovered a “highly specialized tool designed to perform advanced video injection attacks,” which works on modified iOS 15 devices.

“The tool is deployed via jailbroken iOS 15 or later devices and is engineered to bypass weak biometric verification systems – and crucially, to exploit identity verification processes that lack biometric safeguards altogether.” This, says iProov, signals “a shift toward more programmatic and scalable attack methods,” and marks a significant escalation in identity fraud.

And the plot thickens further: iProov says the tool has “Chinese origins,” which makes the appearance of a sophisticated new injection attack tool “a matter of national security interest.”

Andrew Newell, Chief Scientific Officer at iProov, says “the tool’s suspected origin is especially concerning and proves that it is essential to use a liveness detection capability that can rapidly adapt.”

The iOS video injection attack tool relies on hacked phones that have had native Apple security restrictions removed. The attacker uses a Remote Presentation Transfer Mechanism (RPTM) server to connect their computer to the compromised iOS device. The tool is then ready to inject deepfake content directly into the device’s video stream.

“These can include face swaps, where a victim’s face is superimposed over another video, or motion re-enactments, where a static image is animated using another person’s movements,” says iProov’s post. The process completely bypasses the physical camera by fooling the streaming application into believing the fraudulent video is a genuine feed.

All it takes then is for an injected deepfake to pass identity verification, opening the door to identity theft and fraud.

“To combat these advanced threats, organizations need multilayered cybersecurity controls informed by real-world threat intelligence,” says Newell. The company believes the best protection combines identity verification, liveness detection, a real-time passive challenge-response interaction “to ensure the verification is happening live and is not a replay attack,” and advanced technologies paired with human expertise.
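iProov’s production protocol is proprietary, but the general challenge-response idea can be sketched in a few lines: the verifier issues a fresh, unpredictable challenge, and the client’s response must be bound to that exact challenge and arrive within a tight time window, so a pre-recorded response replayed against a new challenge fails. Everything below – the session key, the two-second window, the HMAC binding – is an illustrative assumption, not iProov’s design.

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical sketch of a challenge-response liveness check.
# A fresh nonce binds each verification attempt; a response
# recorded against an old nonce will not verify later.

SESSION_KEY = secrets.token_bytes(32)  # illustrative per-session secret
MAX_RESPONSE_SECONDS = 2.0             # responses must arrive quickly

def issue_challenge() -> tuple[bytes, float]:
    """Server side: generate a fresh, unpredictable challenge."""
    return secrets.token_bytes(16), time.monotonic()

def respond(challenge: bytes) -> bytes:
    """Client side: bind the response to this exact challenge.
    In a real system this would be derived from the live video,
    e.g. the subject's reaction to an on-screen pattern."""
    return hmac.new(SESSION_KEY, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, issued_at: float, response: bytes) -> bool:
    """Server side: response must match the challenge and be timely."""
    expected = hmac.new(SESSION_KEY, challenge, hashlib.sha256).digest()
    timely = (time.monotonic() - issued_at) <= MAX_RESPONSE_SECONDS
    return timely and hmac.compare_digest(expected, response)

# A live flow succeeds...
challenge, t0 = issue_challenge()
assert verify(challenge, t0, respond(challenge))

# ...but replaying an earlier response against a new challenge fails.
new_challenge, t1 = issue_challenge()
assert not verify(new_challenge, t1, respond(challenge))
```

The point of the sketch is the binding: because every verification session carries its own nonce, an injected replay of yesterday’s “successful” response is useless today.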

Regula says identity spoofing, deepfakes top fraud threats in UAE

New survey data from Regula shows the United Arab Emirates facing maturing fraud threats, often in the form of impersonation attacks – which, says the firm, now affect more organizations than traditional threats like forged documents or synthetic identities.

A release says Regula’s survey shows identity spoofing, wherein criminals use photos, replays and screen images to impersonate legitimate users, has already hit 36 percent of UAE businesses, while deepfakes have impacted 35 percent. That makes impersonation attacks the UAE’s most common fraud tactic.

Traditional fraud methods have not gone away. But, according to Regula’s Chief Technology Officer Ihar Kliashchou, “the key shift is that fraudsters are no longer breaking in through the back door – they’re walking straight through the front.”

Kliashchou says the verification step itself has become the primary target. “Criminals create fake but ‘clean’ identities that look legitimate from day one, making downstream fraud detection nearly powerless. Onboarding is now the battleground.”

For traditional fraud methods, 28 percent of organizations report document fraud such as counterfeit and altered IDs, 27 percent report seeing synthetic identity fraud, and 30 percent say social engineering and human manipulation remain threats. Thirty-four percent report biometric fraud in the form of fake or stolen biometrics, including face morphing and masks.

Regula says the UAE’s rapid digital transformation and heavy reliance on remote onboarding and biometric checks have reshaped its fraud landscape.

The release adds: “To protect customers and comply with regulatory expectations, UAE’s businesses need layered defenses that combine flexible identity workflow orchestration with the ability to adapt to business needs and evolving threats. By uniting multi-layered verification with a liveness-first strategy, businesses can build strong, lasting protection against increasingly sophisticated fraud.”

Regula plans to release a complete survey analysis later this month.

Reality Defender says deepfake arms race is lopsided

Reality Defender has long sounded alarms about the evolving deepfake threat. A blog from CTO Alex Lisle explores how “what began as a niche research curiosity (on Reddit, of all places) has evolved into a sophisticated threat ecosystem where bad actors leverage increasingly powerful generative AI tools to impersonate and defraud at scale.”

The traditional fraud prevention playbook, Lisle says, now belongs in the trash, given how quickly generative AI techniques are developing. “Unlike conventional threats that evolve incrementally, deepfake technology advances in quantum leaps. When a new model like Sora or Imagen 3 launches, threat actors gain access to capabilities that can bypass detection systems built for yesterday’s technology overnight.”

The story is not new, but remains urgent: fraudsters stay two steps ahead of organizations trying to defend themselves, leading to consequences ranging from financial losses to national security threats. The answer, says Lisle, is to build in “predictive resilience” by future-proofing your technology as much as possible.

For Reality Defender, that means staying on top of the latest research, and working together with generative AI companies like ElevenLabs and Respeecher to build responsible deployment frameworks – ensuring that as new capabilities launch, detection mechanisms evolve in parallel. While regulations are starting to come into play with legislation like the EU’s AI Act, Lisle says that compliance can’t be the only driver: “Organizations that wait for regulatory mandates will find themselves perpetually behind the curve.”

“The deepfake arms race isn’t slowing down, but we can change the rules of engagement. By building detection that evolves faster than the threats it faces, we can restore trust in digital communications and protect the foundations of human interaction in an AI-powered world.”

Behavioral Signals launches voice deepfake detection tool

Behavioral Signals has launched a deepfake speech detection offering, which a release says “identifies synthetic voice with or without prior knowledge of the speaker” by combining signal analysis with “emotion and behavioral intelligence.”

Technically, it monitors “vocal micro features” along with prosody, rhythm, timing and behavioral consistency – signals the company says add up to emotional and behavioral analysis for detecting audio deepfakes.
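Behavioral Signals has not published its feature set, but two of the low-level signals a system like this plausibly tracks are per-frame energy (loudness contour) and zero-crossing rate (a rough pitch/noisiness proxy). Human speech varies from frame to frame; naively generated audio is often suspiciously steady. A minimal, stdlib-only sketch of that intuition – the signals, frame size and thresholds here are illustrative assumptions, not the vendor’s method:

```python
import math

SAMPLE_RATE = 16_000
FRAME = 400  # 25 ms frames at 16 kHz

def frames(signal, size=FRAME):
    """Split a signal into non-overlapping fixed-size frames."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

def rms(frame):
    """Frame energy: root-mean-square amplitude."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / len(frame)

def variability(values):
    """Standard deviation of a per-frame feature track."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

# A perfectly steady tone (stand-in for flat synthetic audio)...
steady = [math.sin(2 * math.pi * 220 * t / SAMPLE_RATE)
          for t in range(SAMPLE_RATE)]
# ...versus a signal whose loudness and pitch drift like natural speech.
varied = [(0.5 + 0.5 * math.sin(2 * math.pi * 3 * t / SAMPLE_RATE))
          * math.sin(2 * math.pi * (180 + 80 * t / SAMPLE_RATE) * t / SAMPLE_RATE)
          for t in range(SAMPLE_RATE)]

steady_var = variability([rms(f) for f in frames(steady)])
varied_var = variability([rms(f) for f in frames(varied)])
assert varied_var > steady_var  # energy varies more in the "natural" signal
```

A real detector would layer many such features – plus learned embeddings – but the frame-level variability contrast is the basic shape of prosody-based analysis.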

Rana Gujral, CEO of Behavioral Signals, says voice is the most human interface, and is rapidly becoming the most exploited. “Our approach brings behavioral truth into the detection loop so organizations can know not only what was said but whether a real human actually said it.”

The product comes as an API and a forensic user interface, with deployment options for cloud, on-premises and edge environments. It offers two operating modes: pure detection, or voice matching against an existing sample. It works across “many languages” and is “explainable by design.”

FARx gets UK government funding for ‘AI fused-biometrics’ tech  

UK startup FARx, which bills itself as “the world’s first and only AI fused-biometrics company,” has secured £250,000 (about US$337,000) of seed investment through the UK government’s Seed Enterprise Investment Scheme (SEIS), which gives tax relief to investors who fund small, early-stage startups.

According to a press release, “FARx’s next-generation multi-factor authentication technology fuses, for the first time in history, speaker, speech and face recognition.” A patented proprietary machine learning algorithm adapts to each user, “detecting subtle biometric shifts such as emotion, tone, or behavioural anomalies that could signal a threat.”
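FARx’s patented algorithm is not public, but score-level fusion is the textbook way to combine speaker, speech and face recognition: each modality produces a match score, and a weighted sum is compared against an acceptance threshold. The weights, threshold and names below are illustrative assumptions only, not FARx’s implementation.

```python
# Hypothetical score-level fusion sketch. Per-modality scores are
# assumed to lie in [0, 1]; weights and threshold are made up for
# illustration and would be tuned on real data in practice.

WEIGHTS = {"face": 0.4, "speaker": 0.35, "speech": 0.25}
ACCEPT_THRESHOLD = 0.70

def fuse(scores: dict[str, float]) -> float:
    """Weighted sum of per-modality match scores."""
    return sum(WEIGHTS[m] * scores[m] for m in WEIGHTS)

def authenticate(scores: dict[str, float]) -> bool:
    """Accept only if the fused score clears the threshold."""
    return fuse(scores) >= ACCEPT_THRESHOLD

# A strong match across all three modalities passes...
assert authenticate({"face": 0.9, "speaker": 0.85, "speech": 0.8})
# ...while a convincing face paired with a mismatched voice does not.
assert not authenticate({"face": 0.95, "speaker": 0.2, "speech": 0.3})
```

The appeal of fusion is exactly the second case: an attacker must defeat all modalities at once, since a high score in one channel cannot compensate for failures in the others.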

“In addition, it captures biometric data from suspected fraudsters, matching them against internal or shared databases to track repeat offenders, and flag suspicious activity.”

The company, which claims a pre-money valuation (PMV) of £4 million (about US$5.4 million), says it will use the funding to accelerate R&D and continue bringing its technology to market. Clive Summerfield, CEO of FARx, calls the investment through SEIS “an enabler that will help us roll out FARx across a wider range of applications and industries, while delivering strong returns for our investors.”
