UK government declares deepfakes ‘greatest challenge of the online age’

Efforts to combat AI-driven fraud span public initiatives, private sector providers

A new case study published by the UK government does not mince words, or numbers: “The rise in deepfakes generated by artificial intelligence (AI) has been scarily rapid – a projected eight million will be shared in 2025, up from 500,000 in 2023.”

Yet fraud is a crime that can explode while still going relatively unnoticed. With deepfake fraud and other sneaky ways to subvert reality, that’s the point. Generative AI has quickly and quietly made accessing the tools for sophisticated deepfake fraud cheaper and easier than ever.

As such, AI deepfake detection is becoming increasingly necessary. From those who have been on the wrong end of costly deepfake video injection fraud, to those on a mission to prevent it, the face-off between technologically enabled fraudsters and fraud prevention defenses is already heated, and gets more complex by the day.

Deepfake fraud an ‘urgent national priority’ for UK

Per the UK case study, “finding ways to quickly detect and mitigate this ever-growing threat is an increasingly urgent priority.” The government’s Accelerated Capability Environment (ACE), which connects frontline government and law enforcement with innovative tech, is “at the heart of this ramp-up in activity designed to find practical solutions to arguably the greatest challenge of the online age.”

ACE was involved in the Deepfake Detection Challenge, in cooperation with the Home Office, the Department for Science, Innovation and Technology and the Alan Turing Institute. Designed as a workshop for cutting-edge practical solutions to the surge in deepfake attacks, the challenge invited representatives from academia, industry and government to collaborate on responses to five challenge statements pushing the boundaries of current deepfake detection capabilities.

Over the course of eight weeks, participants used a custom platform hosting about two million assets of real and synthetic biometric data for training. Of the 17 resulting submissions, several were highlighted as strong proofs of concept with potential operational value and are now in benchmark testing and user trials; these include submissions from Frazer-Nash, Oxford Wave, the University of Southampton and Naimuri.

The challenge yielded two key takeaways. First, the most effective and efficient deepfake detection depends on curated training datasets that reflect real-world use cases. Second, collaboration and data sharing are critically important to the larger effort.

ACE works with EVITA, CSAM commission

Following the Deepfake Detection Challenge, ACE has taken on a project for the Defence Science and Technology Laboratory (DSTL) and the Office of the Chief Scientific Adviser (OCSA).

ACE’s role is to make recommendations on how to “mature” a tool called EVITA – an AI content detection tool for video, text and audio. Per the case study, “ACE leveraged its expertise from the Deepfake Detection Challenge to create a reusable ‘gold standard’ dataset. This dataset was designed to effectively test detection models, including those targeting child sexual abuse material (CSAM).”

The final result is a repeatable testing and evaluation approach for deepfake detection.

ACE has also collaborated with companies Blueprint, Camera Forensics and TRMG on a deepfake detection strategy for digital forensics in policing.

“Deepfakes are both a growing menace and an evolving threat,” the organization says, “but bridging the gap between models and reality will be critical to tackling them at scale and at pace. ACE, its customers and suppliers remain laser-focused on this evolution from the theoretical to the practical.”

iProov: to prevent attacks on selfie biometrics, use liveness detection

Among firms that have taken up their own version of the deepfake challenge, iProov and Paravision have recently published content examining specific challenges and use cases.

A recent webinar from iProov aims to look “beyond the selfie” – which, the UK company says, is popular as a tool for biometric identity verification but vulnerable to sophisticated generative AI-powered threats. Face swaps, spoofs, injection attacks and other techniques make it easier to hijack an identity transaction that relies on a static facial image.

In iProov’s webinar, Chief Product Officer Peter James discusses how organizations can future-proof their technology against evolving AI deepfake threats using liveness detection.

The webinar breaks down what’s required for an effective liveness detection system in the context of evolving AI threats. James says high quality capture, multi-layered analysis and rapid updates for effective threat monitoring are key elements. Systems must be accessible, free of bias and accurate. And certification by global standards bodies can help ensure compliance, inclusivity and security.

Paravision: liveness detection, deepfake detection separate tools

Paravision’s latest contribution to the deepfake discourse is a white paper outlining its approach to deepfake detection. It makes an important point of delineating between deepfake detection and liveness detection.

Paravision’s Liveness product, it says, “checks for the presence of physical presentation attacks like masks or high resolution displays,” whereas its Deepfake Detection product “adds an essential layer of protection by helping identify and mitigate the growing threat of synthetic imagery or digitally altered faces.”

As such, liveness detection and deepfake detection should be considered complementary technologies. “Together, these technologies enable a comprehensive defense strategy, ensuring that every verified identity is both live and authentic. This dual-layer approach is not only a significant leap forward in fraud prevention but also a critical step toward building trust in digital identity systems.”
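The dual-layer idea can be sketched in a few lines of code. This is a minimal illustration of the decision rule only; the detector scores, thresholds and interfaces below are hypothetical assumptions for the sketch, not Paravision’s actual API:

```python
# Illustrative sketch of a dual-layer identity check: a liveness layer for
# physical presentation attacks plus a deepfake layer for synthetic imagery.
# Score semantics and thresholds here are assumptions, not a vendor's API.
from dataclasses import dataclass


@dataclass
class VerificationResult:
    live: bool       # passed the physical presentation-attack (liveness) layer
    authentic: bool  # passed the synthetic-imagery (deepfake) layer

    @property
    def verified(self) -> bool:
        # A verified identity must be BOTH live and authentic.
        return self.live and self.authentic


def check_identity(liveness_score: float, deepfake_score: float,
                   liveness_threshold: float = 0.8,
                   deepfake_threshold: float = 0.2) -> VerificationResult:
    """Combine a liveness score (higher = more likely a live capture) with a
    deepfake score (higher = more likely synthetic) into one decision."""
    return VerificationResult(
        live=liveness_score >= liveness_threshold,
        authentic=deepfake_score <= deepfake_threshold,
    )
```

The point of the combination is that each layer catches what the other misses: a live person replaying a face-swapped feed would pass the liveness layer but fail the deepfake layer, so the combined check still rejects the transaction.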

The white paper also runs through deepfake types and detection use cases, as well as Paravision’s technical processes, system architecture, accuracy benchmarking and use of ethically sourced datasets.

Learnings from the $25 million Hong Kong deepfake CEO case

Perhaps the most infamous deepfake case to date is the Hong Kong deepfake CEO incident, wherein an employee of Arup was fooled by deepfake video avatars of his company’s executives into transferring US$25 million to foreign bank accounts.

In an interview with the World Economic Forum, Arup’s Chief Information Officer Rob Greig says the Hong Kong attack “wasn’t even a cyberattack in the purest sense. None of our systems were compromised and there was no data affected.” Greig classifies it as “technology-enhanced social engineering.”

“People were deceived into believing they were carrying out genuine transactions that resulted in money leaving the organization,” he says, noting that “this happens more frequently than a lot of people realize.”

Specifically, it is not so much the tip of the iceberg as the tip of a bayonet mounted to a submachine gun. “If cyberattacks were bullets,” Greig says, “we would all be crawling around on the floor because they would be coming through the window, thousands of rounds a second.”

Even more concerning is the ease with which deepfakes can be generated; following the Hong Kong attack, Greig discovered he could create one with freely available generative AI tech in about 45 minutes.

Greig says that an effective strategy to defend against deepfake attacks needs to be based on an awareness of both technological capabilities, and the status of data security: “Who has access to what and when? What data is moving around your organization? Who is trusted and what is not trusted? And what sort of erroneous activity is happening within the organization?”

Deepfake fraud an international threat prompting global response

Deepfakes, alas, are everywhere: new research from fraud prevention firm Trustpair shows that, in 2024, the use of generative AI-based deepfakes and deep audio increased by 118 percent, and that 90 percent of U.S. companies experienced cyber fraud.

Baptiste Collot, CEO of Trustpair, says the research “shows that cyber fraud is an inescapable reality. While many executives express confidence in their organizations’ ability to identify sophisticated fraudsters, nearly the same percentage said their organizations experienced successful attacks, indicating the confidence is misplaced. The fraud landscape is constantly shifting. Companies need to stay vigilant and can’t afford to be complacent with their defenses.”

The same goes for law enforcement agencies. Police in South Korea are investing 9.1 billion won (US$6.2 million) on a deepfake detection system, in response to a surge in AI-assisted deepfake fraud. Yonhap reports that the project, which uses a multimodal algorithm and analyzes noise and sound frequency to detect deepfake videos and AI-generated voices, is scheduled to be finished by December 2027.

Per the report, in the first 10 months of 2024, the police apprehended 573 suspects in relation to 1,094 deepfake sex crimes that also involved teens and minors.
