Rights groups criticize EU AI Act for inadequate protections against potential abuse

The EU’s AI Act is done, and no one is happy. Adopted by the European Parliament in March and set to take effect in May, the Act now faces scrutiny from rights groups, which variously describe it as ineffective, undemocratic, imprecise, and “far from a golden standard for a rights-based AI regulation.”

“While EU policymakers are hailing the AI Act as a global paragon for AI regulation, the legislation fails to take basic human rights principles on board,” says Mher Hakobyan, advocacy advisor on AI for Amnesty International, in a news release from the human rights organization. “Even though adopting the world’s first rules on the development and deployment of AI technologies is a milestone, it is disappointing that the EU and its 27 member states chose to prioritize the interest of industry and law enforcement agencies over protecting people and their human rights.”

The European Center for Non-Profit Law (ECNL) agrees wholeheartedly. “Our overall assessment is that the AI Act fails to effectively protect the rule of law and civic space, instead prioritizing industry interests, security services and law enforcement bodies,” says a statement from the organization. “While the Act requires AI developers to maintain high standards for the technical development of AI systems, measures intended to protect fundamental rights, including key civic rights and freedoms, are insufficient to prevent abuses.”

Although the AI Act places restrictions on use cases that pose a high risk to fundamental rights, such as healthcare, education, and policing, a core objection is what critics call “far-reaching exceptions” for use in law enforcement, migration control and national security. The blanket exemption for the latter, in particular, is an issue for activists, with ECNL worried that use of AI for national security will create “a rights-free zone.”

Furthermore, says the ECNL, gaps and loopholes make for toothless regulation, and AI companies are not fit to self-assess the risks their technology poses to basic human rights. “The legislative process surrounding the AI Act was marred by a significant lack of civil dialogue,” says its statement. “The Act was negotiated and finalized in a rush, leaving significant gaps and legal uncertainty, which the European Commission will have to clarify in the next months and years by issuing delegated acts and guidelines.”

Act introduces new rules on transparency, plus an AI complaints department

So what, exactly, will the AI Act prohibit? According to an article in MIT Technology Review, bans will apply to real-time facial recognition, scraping the internet to build image databases, and systems that exploit vulnerable people or infer sensitive characteristics such as someone’s political opinions or sexual orientation. But, writes Melissa Heikkilä, “there are some pretty huge caveats. Law enforcement agencies are still allowed to use sensitive biometric data, as well as facial recognition software in public places to fight serious crime, such as terrorism or kidnappings.”

Transparency is also a big part of the legislation, on both the distribution and corporate side. Companies must clearly identify situations in which people are interacting with AI, and put labels on deepfakes and AI-generated content. Significant for authentication, the Act also requires companies to make their AI-generated content detectable. There are questions, however, about whether these safeguards can be implemented on a technical level.

Another big change requires companies developing “general purpose AI models” (rather than AI for use in high-risk applications) to maintain public technical documentation on how they built the model and what data was used to train it. This means firms like OpenAI will be forced to disclose which datasets they use to train generative AI and large language models such as ChatGPT. Some firms will be required to undergo additional evaluations and risk assessments. Heikkilä says “companies that fail to comply will face huge fines or their products could be banned from the EU.”

Furthermore, the AI Act sets up a process for citizens to submit complaints about AI systems and request justification for their use. The question is, will people know when their rights are being violated? And even then, will the alleged vagueness of the Act’s language make meaningful enforcement impossible? And finally, will anyone care enough to protest? Even the critics, after all, seem unable to resist AI’s allure; last year, Amnesty International was criticized for using misleading AI-generated images in its reports.
