Deepfakes can duck dated detection tools, need new layers of protection

Need the cloned voice of a Finnish goose? There’s probably a tool for that
A sinister impostor sits in the trusted circle, biding his time. With luck, he will quietly waddle his way into a fortune, nabbing millions in fraud out from under the bills of so many unsuspecting mallards. Alas! Here comes DuckDuckGoose on its deepfake detecting mission. Suddenly, the fake duck is sweating. The jig is up: everyone’s going to find out he’s a goose. He’s cooked.

One of Latin America’s largest identity and fraud infrastructure providers has implemented DuckDuckGoose’s real-time deepfake detection technology across its onboarding infrastructure, to weed out scheming geese and other perpetrators of synthetic identity fraud. A release says the platform “embedded DuckDuckGoose’s real-time deepfake detection layer directly into its onboarding systems without adding user friction or redesigning onboarding flows.”

The detection process happens at identity verification to catch manipulated biometric media before an account is activated.
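The gating logic described above can be sketched in a few lines. This is a minimal illustration, not DuckDuckGoose's actual API; the field names and threshold are hypothetical stand-ins.

```python
from dataclasses import dataclass

# Illustrative cut-off only; a real deployment tunes this per risk appetite.
DEEPFAKE_THRESHOLD = 0.5

@dataclass
class OnboardingCheck:
    kyc_passed: bool        # document verification, biometric match, liveness
    deepfake_score: float   # 0.0 = likely genuine media, 1.0 = likely synthetic

def should_activate(check: OnboardingCheck) -> bool:
    """Activate the account only if traditional KYC passes AND the upstream
    deepfake check clears the submitted biometric media first."""
    return check.kyc_passed and check.deepfake_score < DEEPFAKE_THRESHOLD
```

The point of the design is the ordering: the deepfake check runs before activation, so a synthetic identity that would sail through conventional KYC never becomes a live account.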

For the customer, catching manipulation at that stage was critical: its traditional KYC controls were working as designed. “The gap was elsewhere. AI-generated synthetic identities were passing onboarding as legitimate customers,” the release says, fueling a 350 percent year-over-year surge in deepfake and generative AI-driven synthetic identity attacks. These identities were then used as mule accounts to coordinate fraud across payment and financial systems.

“Deepfake identities are no longer failing onboarding. They are completing it,” says Parya Lotfi, CEO of DuckDuckGoose. “By the time manipulation is discovered, those accounts are already active across payments and financial ecosystems. Trust must be established at identity creation. That is the next layer of the identity stack.”

Per the release, existing document verification, biometric matching, and liveness controls remain in place, with deepfake detection strengthening the trust layer upstream. The deployment has prevented more than 500,000 AI-generated synthetic identities from entering the financial ecosystem, reduced downstream exposure to mule accounts, account takeovers, and coordinated payment fraud, and seen a “significant reduction in manual fraud investigations.”

Pindrop expands into healthcare sector

Pindrop is expanding its deepfake detection efforts into the healthcare industry. A release says the firm now brings deepfake detection and continuous identity verification to HIPAA-regulated environments, securing voice, video, and digital communications in real time.

Per the release, Pindrop research shows more than half of fraud attempts in healthcare contact centers now involve AI-generated elements, including synthetic voice, automated bots and IVR reconnaissance. Healthcare is particularly vulnerable to data breaches, which enable fraudsters to pair stolen data with AI voice cloning tools.

Pindrop’s continuous monitoring platform, Real Human + Right Human, operates in the background to analyze voice, device intelligence, behavioral signals and AI integrity indicators. It seeks answers to three questions: Is it a machine, given away by synthetic or bot-generated speech? Is it a bad human – i.e. a fraudster? And is it the right patient, provider or caregiver?
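The three questions amount to an ordered triage. The sketch below is a hypothetical illustration of that decision order, not Pindrop's implementation; the boolean signal names are invented for clarity.

```python
def triage_call(synthetic_speech: bool,
                fraud_indicators: bool,
                matches_enrolled_voice: bool) -> str:
    """Ordered triage: rule out machines first, then bad actors,
    then verify the caller is the enrolled person."""
    if synthetic_speech:            # Q1: is it a machine?
        return "block: synthetic or bot-generated speech"
    if fraud_indicators:            # Q2: is it a bad human (fraudster)?
        return "block: known fraud signals"
    if not matches_enrolled_voice:  # Q3: is it the right human?
        return "step-up: real human, but not the enrolled patient or provider"
    return "allow"
```

The ordering matters: a synthetic voice should never reach the identity-matching step, since a good clone might otherwise pass it.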

“Human judgment, knowledge-based questions, and one-time passcodes are vulnerable security layers,” says Dr. Vijay Balasubramaniyan, CEO of Pindrop. “Healthcare organizations operate under HIPAA, CMS oversight and intense reputational risk. We help healthcare CIOs and CISOs restore confidence in the most human channel – by determining in real time whether the caller is a real human and the right human before critical account actions are taken.”

An early collaboration with health savings account (HSA) administrator HealthEquity shows promising results, including a 90 percent reduction in voice fraud year-over-year, authentication rates over 91 percent, improved operational efficiency, stronger fraud defense and enhanced member experience.

Traceability, model generalization key to future of deepfake detection

Not even Finns are safe from voice deepfakes. So says new research from the School of Computing at the University of Eastern Finland, which finds that the generative AI technology used to create voice deepfakes has developed to the point that it is now freely available to anyone, and increasingly effective.

Professor Tomi Kinnunen calls for novel countermeasures. “For instance, speech deepfake detection and deepfake source tracing, that is, identifying the voice cloning or synthesis software used to create the deepfake. In the case of biometric authentication, the aim is to improve the robustness of systems against various attacks.”

“Neural networks and artificial intelligence are widely used in research in this field. Personally, however, I’ve felt it important to move on to more interpretable methods in which the detection method can ‘justify’ its decisions.”

“Machine learning is based on fitting models to large sets of training data, and models can easily overfit to the training data used. As a result, the detection of speech deepfakes created with previously unseen synthesis techniques becomes difficult.”
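The generalization problem Kinnunen describes is conventionally measured by scoring a detector on spoofing methods that were absent from its training data, with the equal error rate (EER) as the standard summary metric. A minimal sketch of EER computation over made-up detection scores (higher score = more likely genuine):

```python
import numpy as np

def equal_error_rate(genuine_scores, spoof_scores) -> float:
    """EER: the operating point where the false acceptance rate (spoofed
    audio passed as genuine) equals the false rejection rate (genuine
    audio flagged as fake)."""
    genuine = np.asarray(genuine_scores, dtype=float)
    spoof = np.asarray(spoof_scores, dtype=float)
    best_gap, eer = float("inf"), 1.0
    for t in np.sort(np.concatenate([genuine, spoof])):
        far = float(np.mean(spoof >= t))    # spoofed clips accepted
        frr = float(np.mean(genuine < t))   # genuine clips rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

An overfitted detector typically shows a low EER on synthesis methods seen in training and a much higher EER on unseen ones, which is exactly the gap Kinnunen's interpretability and source-tracing work targets.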

The ongoing SPEECHFAKES project funded by the Research Council of Finland has researchers developing “methods for identifying the sub-components of synthesis methods used to create speech deepfakes.” It has also developed new metrics for assessing accuracy.

“When biometric authentication is combined with deepfake detection, something as self-evident as accuracy assessment becomes less straightforward,” Kinnunen says.
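One reason assessment becomes less straightforward: a genuine user must now clear two gates, the speaker verifier and the deepfake countermeasure, so rejection errors compound. A back-of-envelope sketch, under the simplifying assumption that the two subsystems' errors are independent:

```python
def tandem_false_rejection(frr_asv: float, frr_cm: float) -> float:
    """Probability that a genuine user is wrongly rejected when they must
    pass both the speaker verifier (ASV) and the deepfake countermeasure
    (CM), assuming independent errors in the two subsystems."""
    return 1.0 - (1.0 - frr_asv) * (1.0 - frr_cm)
```

Two subsystems that look fine in isolation (say, 1 and 2 percent false rejection) turn away nearly 3 percent of genuine users in tandem, which is why researchers in this area report joint metrics for the combined system rather than each component's accuracy separately.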
