
Advanced deepfake defenses mustering in India, US, South Korea

Deepfake detection desperately needed but testing suggests tech isn’t battle-ready

Digital threats are global threats. As deepfakes generated with generative AI algorithms flood the online space, governments and private companies around the world are shoring up defenses.

In India, the Institute of Science (IISc) Bengaluru and the fintech firm Infibeam Avenues Ltd. have announced a strategic partnership to develop real-time deepfake detection systems. Indian Startup News (ISN) says Infibeam’s AI unit, Phronetic.AI, and IISc’s Vision and AI Lab (VAL) will collaborate on anti-deepfake tech tailored for real-time video communication – “an advanced video AI agent that actively monitors ongoing video calls, alerting users if the other party is identified as a deepfake.”
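
Phronetic.AI has not published the agent's internals, but the general shape of such a monitor is straightforward: sample frames from the live call, score each with a trained detector, and alert when a smoothed score crosses a threshold. The Python sketch below illustrates that loop only; the score_frame stub and the window and threshold values are illustrative assumptions, not the company's algorithm.

```python
from collections import deque

import cv2  # pip install opencv-python
import numpy as np


def score_frame(frame: np.ndarray) -> float:
    """Return a fake-probability in [0, 1] for a single video frame.

    Placeholder: a production system would run a trained detector here,
    e.g. a CNN over aligned face crops. Returning 0.0 keeps the sketch
    runnable end to end.
    """
    return 0.0


def monitor_call(stream: int = 0, window: int = 30, threshold: float = 0.7) -> None:
    """Score incoming frames and warn when the rolling average looks synthetic."""
    cap = cv2.VideoCapture(stream)
    scores = deque(maxlen=window)  # rolling window smooths single-frame noise
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            scores.append(score_frame(frame))
            if len(scores) == window and sum(scores) / window > threshold:
                print("Alert: the other party may be a deepfake")
    finally:
        cap.release()


if __name__ == "__main__":
    monitor_call()
```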

Phronetic.AI has already filed a patent for its algorithm, which IISc will help to refine through research and updates. Vishal Mehta, chairman and managing director of Infibeam Avenues Ltd., says the partnership is a “crucial step toward enhancing cybersecurity and preventing the misuse of deepfake technology for fraudulent activities.”

The objective is to deliver a cost-efficient, user-friendly deepfake detection platform that allows non-experts to verify the authenticity of live video and audio, and that can operate at scale without compromising speed or accuracy. Potential use cases include banking, healthcare, finance, human resources, government organizations and law enforcement.

“As generative AI continues to advance at an unprecedented pace, the rise of deepfakes poses a significant challenge,” said Prof. Venkatesh Babu, professor and chair of the Department of Computational and Data Sciences (CDS) at IISc. “Addressing this requires ongoing efforts from AI researchers to monitor emerging generative models and develop robust techniques to detect deepfakes effectively.” Only in doing so can public trust in digital communication be maintained.

Startup Neural Defend explores agentic AI deepfake detection

ISN also reports on Neural Defend, a cybersecurity startup that has raised over $600,000 for its deepfake detection product in a pre-seed funding round led by Gurugram-based angel investment firm Inflection Point Ventures (IPV), with participation from MIT SBXI, Techstars San Francisco, and Soonicorn Ventures.

Piyush Verma, CEO of Neural Defend, says the company’s goal is “to protect real identities against digital deception through innovative AI agentic technology.” The startup’s proprietary AI models detect deepfakes across multiple data formats, including video, audio and real-time streams. It is running pilot projects in New York and Singapore, and looking to scale its operations with global enterprises, fintech companies and financial institutions.

DARPA partners with Digital Safety Research Institute

The Defense Advanced Research Projects Agency (DARPA) has had deepfakes on its radar, launching several initiatives to detect, analyze, and mitigate the effects of deepfake technologies.

Now, the U.S. Department of Defense (DOD) agency has entered a cooperative research and development agreement with the Digital Safety Research Institute (DSRI) of UL Research Institutes to “continue advancing the research of detection, attribution, and characterization of AI-generated media.”

A blog from the agency says that, “beginning with the Media Forensics program in 2016 and continuing with the Semantic Forensics (SemaFor) program in 2020, DARPA has produced comprehensive forensic technologies to help mitigate these online threats.” Now, the agency is “actively transitioning resulting technologies to the U.S. government and working with industry to commercialize these tools.”

The new agreement will see DSRI take over SemaFor’s ongoing open competition, the AI Forensics Open Research Challenge Evaluations (AI FORCE), announce challenge results, and award research grants at academic conferences.

“Innovation does not occur in a vacuum, so it’s important for us to communicate about the work we’re doing to engage with industry, academia and potential transition partners to develop the technology for practical applications,” says Wil Corvey, DARPA’s SemaFor program manager. “DSRI’s mission of product testing and evaluation, specifically with respect to the complex and evolving socio-technical environment in which products will be deployed, makes them an ideal fit for this area of transition.”

South Korean, Australian researchers call foul on current deepfake detection

Researchers from the Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO) and Sungkyunkwan University (SKKU) in South Korea have analyzed 51 leading deepfake detectors and tested 16 against various deepfakes – and found them sorely wanting.

Information Age says the CSIRO team tested the detectors against three types of deepfake content – synthesis, face swaps and reenactment – generated with tools including DeepFaceLab, Dfaker, Faceswap, LightWeight, FOM-Animation, FOM-Faceswap and FSGAN, and evaluated them on the third-party test sets DFDC and Celeb-DF.

All of the deepfake detectors failed when tested against “real-world” content.
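
CSIRO has not released its evaluation harness, but the scoring logic behind this kind of benchmark is simple to sketch: run each detector over labeled clips from a dataset it was not trained on and measure ranking quality, typically as AUC, where in-domain and out-of-domain scores can then be compared. The Python below is a minimal, hypothetical version; the evaluate helper, the detector callable and the clip stand-ins are illustrative assumptions, not the study's code.

```python
import random

from sklearn.metrics import roc_auc_score  # pip install scikit-learn


def evaluate(detector, clips, labels):
    """AUC of a detector over labeled clips (label 1 = fake, 0 = real).

    `detector` is any callable mapping a clip to a fake-probability;
    0.5 AUC means the detector does no better than chance.
    """
    scores = [detector(clip) for clip in clips]
    return roc_auc_score(labels, scores)


# Toy usage: a detector that guesses at random scores ~0.5 AUC, i.e. useless.
clips = list(range(200))              # stand-ins for video clips
labels = [i % 2 for i in range(200)]  # alternating real/fake labels
print(evaluate(lambda clip: random.random(), clips, labels))
```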

The lackluster performances are cause for concern, given the pace at which deepfake technology is evolving. Photorealistic deepfake videos, copycat voices and injection attacks have already led to notorious fraud cases, such as the $25 million deepfake CEO swindle that targeted a Hong Kong employee of British engineering firm Arup. (Rob Greig, the firm’s chief information officer, insists “this happens more frequently than a lot of people realize.”) The IA piece even cites claims from security firm Trend Micro that we’ll soon see “malicious ‘digital twins’ of real people, trained on their knowledge, personality and writing style.”

The team from CSIRO and SKKU advises businesses to explore techniques like spectral artefact analysis, generative adversarial networks (GANs) and liveness detection. Better deepfake detectors will need to “incorporate a range of data sets including audio, text, images, and metadata, as well as using synthetic data and contextual analysis.”
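
Of those techniques, spectral artefact analysis is the easiest to illustrate: generative up-samplers tend to leave periodic high-frequency artifacts that show up in an image's power spectrum. The sketch below computes the azimuthally averaged spectrum that a downstream classifier could inspect; it is a generic illustration of the technique under that assumption, not the researchers' method.

```python
import numpy as np


def radial_power_spectrum(gray: np.ndarray) -> np.ndarray:
    """Azimuthally averaged log power spectrum of a grayscale image.

    GAN up-sampling often leaves periodic high-frequency artifacts; real
    and generated images can separate in the tail of this 1-D spectrum,
    which a simple classifier can then learn to flag.
    """
    f = np.fft.fftshift(np.fft.fft2(gray))            # center zero frequency
    power = np.log1p(np.abs(f) ** 2)                  # log power per pixel
    h, w = gray.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)  # radius = spatial frequency
    totals = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return totals / np.maximum(counts, 1)             # mean power per band


# Toy usage: white noise has a roughly flat spectrum; GAN output often doesn't.
spectrum = radial_power_spectrum(np.random.rand(256, 256))
print(spectrum[:5], spectrum[-5:])
```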

“By breaking down detection methods into their fundamental components and subjecting them to rigorous testing with real-world deepfakes,” says CSIRO cybersecurity expert Dr Sharif Abuadbba, “we’re enabling the development of tools better equipped to counter a range of scenarios.”
