New UK deepfake detection testing framework, challenge aim to meet crisis head-on

Efforts highlight threat to women and girls as gov’t looks to hold Big Tech to account
Having declared deepfakes the greatest challenge of the online age, the UK government is set to take the lead on doing something about it. After fast-tracking legislation making it illegal to create or request deepfake intimate images of adults without consent, the Home Office says it will develop and implement what a release calls a “world-first deepfake detection evaluation framework, establishing consistent standards for assessing all types of detection tools and technologies.”

To do so, it is partnering with large tech firms including Microsoft, as well as various academics and experts. The idea is to build a testing architecture for deepfake detection that is equivalent (or complementary) to the testing for liveness detection or facial age estimation tools carried out by the U.S. National Institute of Standards and Technology.

“The framework will evaluate how technology can be used to assess, understand and detect harmful deepfake materials, no matter where they come from,” the government says. “By testing leading deepfake detection technologies against real world threats like sexual abuse, fraud and impersonation, the government and law enforcement will have better knowledge than ever before on where gaps in detection remain.”

Once established, the testing framework will be “used to set clear expectations for industries on deepfake detection standards.”

While the deepfake crisis is often framed as a fraud problem, the UK’s messaging centers the risks to individual girls and women. But Minister for Safeguarding and Violence Against Women and Girls Jess Phillips says the technology does not discriminate.

“The devastation of being deepfaked without consent or knowledge is unmatched, and I have experienced it firsthand. For the first time, this framework will take the injustice faced by millions to seek out the tactics of vile criminals, and close loopholes to stop them in their tracks so they have nowhere to hide.”

“Ultimately, it is time to hold the technology industry to account, and protect our public, who should not be living in fear.”

Technology Secretary Liz Kendall took the opportunity to note that, in addition to developing the deepfake detection framework, the government has “criminalized the creation of non-consensual intimate images,” and intends to pursue a ban on so-called nudification tools.

“Deepfakes are being weaponised by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear. The UK is leading the global fight against deepfake abuse, and those who seek to deceive and harm others will have nowhere to hide.”

The government says an estimated eight million deepfakes were shared in 2025, up from 500,000 in 2023.

Deepfake Detection Challenge runs live scenario hacks at Microsoft London

Also part of the UK government’s deepfake crackdown is the new Home Office Deepfake Detection Challenge. The challenge incorporates benchmark testing and a four-day live hack event to accelerate collaboration and knowledge sharing on deepfake detection and outline effective approaches for public and private stakeholders.

Andrew Tyeloo, programme lead for the UK Home Office with a focus on data, AI, and innovation, oversaw the live event, which took place last week at Microsoft London. Posting on LinkedIn, Tyeloo calls it “a first‑of‑its‑kind global hackathon,” which put teams in high-pressure scenarios reflecting real-world national security and public safety risks.

Teams were challenged to identify real, fake and partially manipulated audiovisual media. Among those participating were members of INTERPOL, the Five Eyes community and Big Tech, as well as smaller entities like Ingenium Biometric Laboratories.

“Despite our different backgrounds, beliefs, and perspectives,” Tyeloo says, “everyone came together with a shared purpose: to push boundaries and make a meaningful difference.”

The account of Accelerated Capability Environment (ACE), a Home Office unit coordinating on digital challenges with the private sector and academia, has a note celebrating “hands-on, rigorous, and genuinely cutting-edge work happening in real time.

“Over 450 people, across four days, coming together to push the boundaries of how we detect and respond to deepfakes. Sixteen teams, 5 live scenarios, each with bespoke datasets, dropped at different moments to test real-world response, adaptability and performance.”

Nobody wants another Grok: lawmakers

From the get-go, tackling deepfakes has meant tackling nonconsensual deepfake porn. The issue, however, has been much inflamed by the mass creation and distribution of such content by Grok, the large language model (LLM) chatbot on social media platform X.

A report from mLex says a group of 48 lawmakers has written to Minister Kendall regarding perceived gaps in UK law, calling for “legislation to prevent another situation similar to Grok producing sexualized images on the X platform.”

“The scale and rate of production of sexualized images by Grok was shocking – reportedly three million in just eleven days,” the letter says.

At present, the Online Safety Act has limited coverage of AI chatbots, and UK regulator Ofcom has found itself stymied in its investigations – and facing allegations that its powers are too feeble. While its probe into X remains open, Ofcom has announced that the investigation won’t include xAI, another of Elon Musk’s cluster of related companies.

Kendall has promised to fast-track legislation designating the creation and distribution of nonconsensual deepfakes as a priority offense under the OSA. But the larger call is to encode safety and privacy by design into law. mLex quotes William Malcolm of the UK Information Commissioner’s Office, which believes “privacy by design and by default, as the GDPR provides for, is absolutely foundational to the development and deployment of new technology the public are entitled to expect.”
