AI fraud threat continues to spur deepfake detection integration, investment, development

Strategic partnership for Reality Defender, 1Kosmos headlines list of deals, launches
Reality Defender and 1Kosmos have announced a strategic partnership that will see the deepfake detection firm integrate its real-time deepfake defenses into 1Kosmos’ blockchain-based biometric authentication platform.

A release says the integration enhances the platform’s existing presentation attack detection (PAD) capabilities with new signals on both live and pre-recorded AI-generated image and video impersonations: “multimodal deepfake detection” that “enhances ISO/IEC 30107-3 PAD Level 2 performance to counter the growing share of synthetic-media attacks.”

“Deepfake attacks are evolving faster than most organizations can adapt, and detecting them requires specialized, continuously updated models,” says Ben Colman, CEO of Reality Defender. The company says its models keep pace with regulatory changes such as the EU AI Act and the upcoming ISO 25456 standard, to ensure seamless compliance.

“Advances in AI-generated impersonations are rewriting the rules of identity assurance and ratcheting up fraud losses,” says Mike Engle, chief strategy officer for 1Kosmos. “By adding Reality Defender as an embedded detection layer, we’re enabling enterprises to verify identity with greater certainty and stop AI-driven impersonation attacks before they result in financial loss, brand damage, or regulatory consequences.”

The product deploys without friction, integrating natively into existing 1Kosmos workflows, identity stacks, and user experiences with no new licenses, retraining or rearchitecture.

Resemble AI heads straight to market with new funding

Resemble AI has raised $13 million in strategic funding to accelerate development of its voice generation and deepfake detection products. An announcement says the investment round includes Google’s AI Future Fund, Okta Ventures, Taiwania Capital, Gentree Fund, IAG Capital Partners, Berkeley Frontier Fund and KDDI – partnerships, the firm says, that “create immediate paths to market distribution and embed our technology within key identity and security ecosystems.”

The company says its deepfake detection model achieves 98 percent accuracy across more than 40 languages, with multimodal threat detection across audio, video, images and text, and enhanced explainability for content analysis. It intends to use the investment to “fuel our global expansion and accelerate the development of our AI detection platform.”

FARx releases new multimodal deepfake detection tool

FARx, which bills itself as “the world’s only fused-biometrics company,” has announced the launch of FARx 2.0, its new generation of biometric software which fuses speaker, speech and face recognition for added capabilities in synthetic and cloned voice detection. The software can be integrated into browsers, apps and communication systems, and operates in the background to provide continuous multi-factor authentication without disruption.

A post on the firm’s LinkedIn page says FARx 2.0 “identifies not just what is being said but who is speaking, enabling it to detect and block attempts to spoof someone’s identity using synthetic voices, deepfakes, or cloned audio and video.” Trained on some 55,000 synthetic voices from real telephony environments, it can “reliably distinguish between real and AI-generated voices.”

The Malvern Gazette quotes Clive Summerfield, CEO of the Worcestershire, UK-based firm, who says the new product aims to deliver “an even more sophisticated, flexible biometric multi-factor authentication technology to users across a broad range of industries and applications.”

“Legacy voice biometrics and traditional MFA systems are simply no longer enough to outsmart the new era of AI-powered threats.”

The launch follows a recent investment of 250,000 pounds (about US$337,000) through the Seed Enterprise Investment Scheme (SEIS).

Grant funds deepfake tool tailored to South Korea, Singapore

A team from Singapore Management University (SMU) has won a grant to develop a deepfake detection tool. A release says the project will produce “the first multilingual deepfake data set that includes dialectal variants such as Singlish and Korean dialects.”

“Many existing tools don’t perform well on Asian languages, accents, or content,” says Professor He Shengfeng, who leads the team. “We’re focused on building something that fits the specific needs of our region.”

Understanding different linguistic, socio-cultural and environmental characteristics was a key requirement for the grant, which comes from AI Singapore (AISG) and South Korea’s Institute for Information & Communication Technology Planning & Evaluation (IITP).

The team has dubbed its tool DeepShield, which its proposal paper calls “the first unified interpretable detection system capable of handling diverse and multi-modal manipulations – including object insertions, lighting alterations, background swaps, and voice dubbing – within a single, explainable pipeline.”

Overall, DeepShield is positioned as “not merely a detection tool, but a next-generation AI governance layer for digital media integrity – setting it apart from commercial offerings in both ambition and design.” The ultimate goal is a spin-off startup that He imagines licensing services such as deepfake forensics, media authenticity verification, enterprise compliance and digital governance to public and private enterprise.

Work commences in January 2026 with the scouring of large-scale, publicly available datasets such as YouTube-8M.

Ant International’s winning deepfake detector looks to eliminate bias

Ant International has won first place at the NeurIPS Competition of Fairness in AI Face Detection. A release from the company says its entry beat out more than 2,100 submissions from 162 teams globally.

The deepfake detection contest challenged participants to develop AI models that “not only achieve high-utility performance but also demonstrate fairness across demographic subgroups such as gender, age, and skin tone.”

“A biased AI is an insecure AI,” says Dr. Tianyi Zhang, general manager of risk management and cybersecurity at Ant International. “Our model’s fairness prevents exploitation from deepfakes and ensures reliable identity verification for all users, supporting our mission to deliver secure and inclusive financial services worldwide.”
