Ada Lovelace analysis suggests EU AI Act could curtail biometrics regulation

A ban on facial recognition by the EU AI Act could actually reduce protections against biometrics surveillance afforded by existing national laws, the General Data Protection Regulation (GDPR) and the Law Enforcement Directive, according to an expert analysis from the Ada Lovelace Institute.

Written by Lilian Edwards, Newcastle University Professor of Law, Innovation and Society, the explainer notes that a push for maximum harmonization, combined with the Act's limited coverage of private spaces, law enforcement and online spaces, could result in less stringent regulation in practice.

The analysis is accompanied by a policy briefing and an expert opinion from Edwards, titled ‘Regulating AI in Europe: four problems and four solutions.’

The explainer makes nine key points about the Act, including the need to understand it in the context of other EU legislation like the Digital Services Act (DSA), the Digital Markets Act (DMA) and the Digital Governance Act (DGA). The Act is aimed primarily at public sector and law enforcement uses of AI, Edwards notes, and includes expansive territorial jurisdiction, like GDPR.

Biometrics implications

The explainer delves into the impact of the AI Act on biometrics, and facial recognition in particular.

Whether to ban facial recognition use is identified as an area of controversy around the Act, but the restrictions it imposes are “very limited,” and make no reference to forensic, or retrospective, applications.

“The ‘ban’ imposed by the Act may sometimes be less stringent than existing data protection controls under GDPR and the Law Enforcement Directive (LED),” Edwards writes. “Thus if the maximum harmonisation argument (above) operates, the Act might in fact reduce protection against biometric surveillance already given by existing national laws.”

The document also notes that biometrics-based facial analysis or categorization algorithms are classed as ‘limited risk,’ a lower risk category than biometric identification and verification systems.

The analysis goes on to describe the difference between the designation of biometrics as ‘high risk’ and biometrics-based categorization as ‘limited risk,’ and the requirements that go along with these categories and conformity assessments.


