AI Act stumbles over provisional deal, with biometrics among the main culprits
Biometric surveillance and generative AI are again proving to be stumbling blocks as lawmakers hammer out a provisional deal for the European Union’s Artificial Intelligence Act in a marathon negotiation.
Talks between EU member states and lawmakers started on Wednesday afternoon and are still ongoing. Results are expected imminently.
Once it is reached, the provisional deal will pave the way for the final deal, which should be agreed before the end of the year, giving the AI Act a chance to become law before the European parliamentary elections in June. If no deal is reached in time, however, the legislation is likely to be shelved, and the 27-member bloc risks losing its first-mover advantage in regulating the technology, according to Reuters.
French Digital Minister Jean-Noel Barrot warned that despite this week’s agreements, EU countries and lawmakers may have to meet for further rounds of talks to settle details, including on live biometric monitoring such as facial recognition and on general-purpose AI.
“If we have a further trialogue it is fine. And from where I am looking, it seems to me that there are too many significant points to cover in one night for tomorrow to be the last trialogue,” says Barrot.
The regulation is currently being debated in trilogues between the EU Parliament, the EU Council of Ministers, representing European governments, and the European Commission, with the law expected to take effect sometime after 2025.
Biometrics are among the major sticking points in the negotiations, as law enforcement agencies lobby to expand facial recognition surveillance provisions, citing national security concerns. Some European lawmakers, on the other hand, are seeking to ban the technology altogether.
Warnings from NGOs
While EU lawmakers argue late into the night over AI risks to human rights, non-governmental organizations are warning that the current legislation does not prohibit the export of these same technologies to other parts of the world, including biometric surveillance systems.
“Companies based in EU countries have been known to provide rights-violating technologies to governments who use them to target and oppress marginalized communities,” says Mher Hakobyan, Amnesty International’s AI advocacy advisor.
Hakobyan cited examples of surveillance systems produced by French, Swedish and Dutch companies being used in China to target the Uyghur minority, as well as the use of Dutch cameras by the Israeli police force to monitor Palestinians.
Washington, D.C.-based nonprofit Center for Democracy and Technology said that legalizing the untargeted use of facial recognition by law enforcement, especially in situations such as protests, may place core rights at risk.
“Untargeted facial recognition technology poses, by its very nature, an unacceptable risk to human rights,” the group says.
Companies reminded of compliance
Although the final outcome of the AI Act trilogue is still unknown, companies are preparing for new regulatory compliance efforts. Technology and IT consulting company Wipro has issued a guide on the AI Act, recommending that organizations operating in the EU market prepare for the risk-based approach the legislation will bring.
Wipro’s first recommendation is to classify the AI systems in a company’s inventory according to the risk categories in the Act. High-risk systems will need an assessment to confirm they comply with the regulation.
AI development and management frameworks should be refined to meet the regulatory requirement of periodic assessments. Companies should enable automatic logging of events during the operation of high-risk AI systems, with a retention period of at least 10 years.
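For teams starting an inventory exercise, the classification step above can be sketched as a toy data model. This is purely illustrative: the tier names, system names and the flagging rule are assumptions for the example, not the Act’s legal definitions, and real classification requires legal review of each use case.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative risk tiers mirroring a risk-based approach;
# these labels are placeholders, not the Act's legal categories.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

# Hypothetical inventory of deployed AI systems.
inventory = [
    AISystem("cv-screening", "recruitment triage", RiskTier.HIGH),
    AISystem("chat-helper", "customer FAQ bot", RiskTier.LIMITED),
]

# Flag high-risk systems for compliance assessment and
# long-term event logging, per the guidance described above.
needs_assessment = [s.name for s in inventory if s.tier is RiskTier.HIGH]
print(needs_assessment)
```

In practice, a record like this would also carry the evidence needed for assessments (training data provenance, logging configuration, responsible owner), but the core idea is simply a tagged inventory that makes the high-risk subset easy to query.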
Companies are advised to take other steps, including training staff, reviewing technical documents and setting up privacy and security safeguards.