With long road ahead, experts suggest EU AI Act could stifle or enable biometrics adoption
The European Union’s AI Act could dramatically increase the compliance burden on biometrics developers even before their systems reach the product stage, according to a panel of privacy and AI experts convened by the International Association of Privacy Professionals (IAPP). Conversely, it could enable member states to establish frameworks for real-time remote facial recognition use.
How exactly the legislation will treat things like algorithm training, or what carve-outs will remain on the real-time remote biometrics ban when it reaches a final vote, however, is still largely undetermined.
Panelists met in a webinar to discuss the ‘EU Artificial Intelligence Act Proposal: What could it change?’
The IAPP’s Isabelle Roccia moderated the event, with panelists Jetty Tielemans, IAPP senior Westin research fellow, and Kai Zenner, an advisor to MEP Axel Voss. Nathalie Laneret, a VP at open commerce media platform Criteo, which she says runs the biggest industrial AI lab in Europe, represented the private sector.
A pair of committees are acting as co-rapporteurs on the European Parliament side, Roccia explained while bringing viewers up to speed on the current state of the proposal.
It is “the beginning of a long journey,” given the complexity of the topic and the level of interest, Roccia points out.
Tielemans outlined the scope of the proposed AI Act, which includes entities producing output which is used in the EU, even if they have no other ties to the bloc.
The ban on real-time remote facial recognition and other biometric identification modalities by law enforcement in public spaces is not only subject to exceptions, Tielemans points out, but can also be deviated from by member states with national laws.
High-risk systems, the category where most biometric systems sit, form the bulk of the proposal and therefore received the most attention during the discussion.
Organizations putting these systems in place have a lot of obligations, due to the Commission’s “cradle-to-grave approach,” Tielemans says.
Leaving enforcement to member states has raised concerns that it will be applied inconsistently across the bloc, and the potential fines are higher than those under the GDPR.
Zenner described a political divergence among European parliamentarians, with opposing camps unable even to agree on whether increased AI use is likely to increase energy consumption. Instead of productive debates, he suggested the discussions are often sidetracked with opposing viewpoints repeatedly restated.
There is, however, some high-level agreement.
Estimates of how many AI systems would fall into the high-risk category (which includes most biometrics), Zenner notes, range from 5 to 25 percent and higher.
Laneret noted the need for a sufficiently broad research exemption to support continued innovation, and argued that the proposal’s data governance section sets an unrealistic expectation that datasets be free of errors.
Applying the concept of secure and ethical supply chains to something like AI could also touch off an interminable series of obligations that are impossible for startups to meet.
Many questions remain to be answered for the AI Act, but numerous other regulations, including sector-specific measures, are also in place or being crafted. Aligning these efforts adds a further layer of complexity, the panelists agreed.
New red tape, in the form of duplicate reporting obligations, can already be found in the AI Act, Zenner warns.
The roles of data protection officers (DPOs) in ensuring compliance, and of regulatory sandboxes in encouraging innovation, were also discussed.
While these issues are sorted out, the industry will rely on portions of the GDPR, the Digital Services Act, and other regulations to guide its processes and deployments.
The European Association for Biometrics (EAB) also recently examined the implications of the AI Act for biometrics developers and providers.