Rights groups urge priority to human rights in AI Act implementation


A group of more than 20 civil rights organizations has signed a letter urging the European Commission to prioritize human rights in formulating the upcoming guidelines for implementing the AI Act, including better measures for biometric systems – one of the most controversial issues within the legislation.

The AI Act guidelines are designed to specify the practical implementation of the rulebook, which entered into force in August last year. Rights groups have highlighted guidelines related to remote biometric identification, biometric categorization according to race, gender, and other markers, scraping facial images from the internet, and emotion recognition.

All of these AI use cases are considered to pose an “unacceptable risk” to fundamental rights and are banned, according to the AI Act. The law, however, makes exceptions for specific circumstances, including for law enforcement purposes.

The organizations argue that the guidelines should specify that developing remote biometric identification systems for export falls under the ban. They also say it should not be enough for police forces to put up a sign or distribute flyers announcing that an area is under surveillance in order to legalize biometric surveillance there. Finally, the groups call for a ban on retrospective biometric identification (RBI).

“While we continue to call for a full ban on retrospective RBI by private and public actors, we urge that the ‘significant delay’ clause should be at a minimum of 24 hours after capture,” the groups say.

The groups also warn that the current ban on untargeted scraping of facial images leaves problematic loopholes. Systems like Clearview AI or PimEyes, which claim to store only biographical information or URLs rather than the facial images themselves, currently fall outside the prohibition. To close this gap, they argue, the Commission should consider deleting the proposed definition of a facial image database.

The biometric categorization ban should be expanded to include categories such as “ethnicity” and “gender identity.” The civil rights groups also expect that companies will try to masquerade emotion recognition products as health and safety tools to escape the ban and urge EU lawmakers to clearly define the difference between these systems.

These specifications should be included in the AI Act guidelines to prevent the weaponization of technology against marginalized groups and the unlawful use of mass biometric surveillance. The EU should also ensure that future consultations related to the implementation of the rulebook give a meaningful voice to civil society and impacted communities, the groups conclude.

The signatories include Privacy International, Access Now, European Digital Rights (EDRi), AlgorithmWatch and Amnesty International, among others.

The European Commission’s guidelines for national authorities and for AI providers and deployers are set to be released in early 2025. In December, the European AI Office concluded a consultation aimed at defining AI systems and prohibited AI practices, the results of which will feed into the guidelines.
