Aurigin.ai integration to boost audio analysis for deepfake detection collaboration

Synthetic or manipulated audio is now a potent tool for misinformation
Aurigin.ai has announced an integration with the Deepfakes Analysis Unit (DAU) at India’s Misinformation Combat Alliance (MCA), aimed at strengthening protection against the growing problem of AI-generated and manipulated audio content.

According to a blog post, integrating Aurigin.ai’s audio deepfake detection engine into the DAU’s verification workflow enhances its multi-layered approach, which combines machine learning and human analysis. The partnership will improve the speed and reliability of real-time detection, address scale by using automation to increase throughput without sacrificing analytical depth, and reinforce conclusions through probabilistic scoring and transparent evidence.

Swiss-based Aurigin.ai says its audio deepfake detection model, nicknamed Apollo, has a latency of less than 50 milliseconds and delivers 97.7 percent accuracy. As profiled in the 2025 Deepfake Detection Market Report and Buyers Guide from Biometric Update and Goode Intelligence, the system requires only three seconds of audio and works across 40 languages.

It aims for a future-proof approach that combines multiple layers, including deceptive content analysis, source identifiers, voice clustering, voiceprint matching, watermarks and style fingerprinting.

Pamposh Raina, a New Delhi-based journalist who serves as head of the DAU, says that “audio verification has been especially challenging given that there are no visual markers of manipulation.” She calls AI-based audio deepfake detection software an essential tool for journalists, investigators and policymakers faced with an influx of synthetic audio content, often spreading misinformation or disinformation.

The Misinformation Combat Alliance, now the Trusted Information Alliance, is a non-profit alliance of cross-sector organizations set up in 2022. According to its website, its DAU currently includes 11 fact-checking organizations, six detection and forensics partners and one technical and research partner (Tattle). It is supported by Meta.
