
Ada Lovelace analysis suggests EU AI Act could curtail biometrics regulation

A ban on facial recognition by the EU AI Act could actually reduce protections against biometrics surveillance afforded by existing national laws, the General Data Protection Regulation (GDPR) and the Law Enforcement Directive, according to an expert analysis from the Ada Lovelace Institute.

Written by Newcastle University Professor of Law, Innovation and Society Lilian Edwards, the explainer notes that a push for maximum harmonisation, combined with the Act's limited coverage of private spaces, law enforcement and online spaces, could result in less-stringent regulation in practice.

The analysis is accompanied by a policy briefing and an expert opinion from Edwards, titled ‘Regulating AI in Europe: four problems and four solutions.’

The explainer makes nine key points about the Act, including the need to understand it in the context of other EU legislation like the Digital Services Act (DSA), the Digital Markets Act (DMA) and the Digital Governance Act (DGA). The Act is aimed primarily at public sector and law enforcement uses of AI, Edwards notes, and includes expansive territorial jurisdiction, like GDPR.

Biometrics implications

The explainer delves into the impact of the AI Act on biometrics, and facial recognition in particular.

Whether to include a ban on facial recognition use is identified as an area of controversy around the Act, but the restrictions it imposes are "very limited," making no reference to forensic, or retrospective, applications.

“The ‘ban’ imposed by the Act may sometimes be less stringent than existing data protection controls under GDPR and the Law Enforcement Directive (LED),” Edwards writes. “Thus if the maximum harmonisation argument (above) operates, the Act might in fact reduce protection against biometric surveillance already given by existing national laws.”

The document also notes that biometrics-based facial analysis or categorization algorithms are classed as ‘limited risk,’ a lower risk category than biometric identification and verification systems.

The analysis goes on to describe the difference between the designation of biometrics as ‘high risk’ and biometrics-based categorization as ‘limited risk,’ and the requirements that go along with these categories and conformity assessments.


