Biometrics providers unready for AI Act requirements risk market exclusion: EAB lunch talk
The impact of Europe’s new artificial intelligence regulation on vendors and users of biometrics could be quite severe, and even determine the success or failure of some biometric technology providers in the years ahead, an expert said during the latest European Association for Biometrics (EAB) lunch talk.
The lunch talk asking ‘What does the EU AI regulation mean for the biometrics community?’ featured a presentation by Validate ML Co-founder and regulatory expert Professor Dr. Oliver Haase.
The regulatory framework for biometrics in Europe, Haase says, is similar to regulatory frameworks applied to medical devices.
“The AI Act will fundamentally change the way high-risk AI systems will be developed, placed into the market, and run. And biometric identification systems will be the most affected market.”
The term “biometric” appears 61 times in the Act, Haase notes, and biometric identification is the only application with its own specific rules. The Act has not yet reached final approval, however, and it could yet become more restrictive for biometrics.
Haase also suggests that, as with the GDPR, the Act’s eventual passage could have a critical impact on unprepared businesses.
The talk reviewed the AI Act, including the European Commission’s intentions in introducing it and the four risk categories it defines. As previously reported, the AI Act severely restricts the use of real-time remote biometric identification systems for law enforcement.
Some AI systems that might use biometrics are already subject to regulation based on their domain of use, such as migration, while other biometric systems remain as yet unregulated.
Manufacturers of high-risk AI systems will take on requirements for quality management with a risk management component, and training, validation and test data are subject to a series of rules. The manufacturer must also document compliance with the other seven requirements listed in the Act with a series of technical descriptions and instructions for use.
Users must keep detailed records, including on accuracy. The Act also requires explicit recognition of the limitations of the AI system, and oversight by a human who can stop the system if non-compliance with the requirements is determined.
Conformity assessments must be completed before any high-risk AI system reaches the market. They are performed by manufacturers themselves, except in the case of certain biometric systems, which require external assessment. Internal assessments are only permitted where harmonized standards are applied, but such standards will not be available for many areas of AI and biometric technology. Haase calls the enforcement penalties proposed by the EC, which could result in fines of up to €30 million (US$34.6 million), “draconian.”
Putting an appropriate quality management system in place in time to meet the deadline could be challenging for many biometrics providers, Haase warns.
Audience members raised questions about the definition of AI, cybersecurity requirements, the level of human oversight required, and the scope of the AI Act regarding the use of datasets provided by suppliers in third countries. Haase clarified that the AI Act applies in all situations where the lives of EU citizens are impacted, meaning it applies to any systems used within the bloc.
There remains, however, “a high degree of legal uncertainty as to what exactly is expected from manufacturers” of biometric identification systems, according to Haase.
Addressing that uncertainty could be a matter of survival for biometrics vendors in the EU.