Biometrics community breaks down details of AI Act proposals in EAB workshop
One interesting point of consensus arising at the European Association for Biometrics’ (EAB’s) Artificial Intelligence Act Workshop 2022 is that discussions of whether to allow exceptions to a ban on remote biometric systems in public would benefit from knowing precisely what is meant by “remote.”
The workshop was held by Zoom over two days last week.
The beginning of a three-way dialogue between the three EU institutions on the AI Act is expected around Spring of 2023, according to Catherine Jasserand of CiTIP at KU Leuven.
Jasserand notes that compromise text from the European Council removed the terms “remote” and “at a distance” from the definition provided in the Act, and she interrogated what is meant by “remote” in this context.
Even defining “biometrics” in a way that covers the intended use cases is somewhat contentious, leading the European Parliament to write in a category of “biometrics-based data.”
Katharina Senhert of German standards organization DIN explained how stakeholders can get involved in the process of developing AI and biometrics standards at the European and international levels.
The JTC1 working group was mandated by the EC to work on the standard for ‘high-risk’ use cases, and originally proposed a sector-specific management system standard, Gacon says. Now, operators are directed to refer to ISO/IEC 42001 for system management, while the standard in development deals with technical aspects.
A lack of clarity around the term “remote” has resulted in the title of the standard being changed to ‘Biometric identification systems involving uninformed, passive capture subjects,’ and the subsequent removal of “uninformed.”
Julien Chiaroni of the European Innovation Council presented work being done to support trustworthy systems compliant with the incoming regulations.
An unfinished process
European Commission Head of Sector AI Policy Irina Orssich reviewed the regional government’s attempts to stand up an ecosystem around the AI Act to promote both trust and excellence.
Seven committees of EU Parliament are working on the Act, and a vote is scheduled for November. On the Council side, the Czech Presidency (for the second half of 2022) is aiming to have reached a common position by December.
Kai Zenner of the European Parliament set out the major points of debate within parliament, and the outcomes lawmakers are trying to reach.
Wiebke Hutiri of the Delft University of Technology suggests that on-device processing and applications are overlooked in the AI Act as currently proposed, as are the complex systems in which voice biometrics are often deployed.
Some voice-based biometric systems are used in both low and high-risk applications, Hutiri notes.
Further, biometric data used for verification can be repurposed for identification, and systems can be relatively easily adapted to the latter from the former, leading to questions about the legitimacy of potential dual-use systems.
Even evaluating the fairness and effectiveness of voice systems is highly dependent on the context the application will be used in, Hutiri argues, as for instance voices change with age, and voice systems are more likely to seem effective if tested on younger speaker cohorts.
Independent audits are difficult to carry out on proprietary systems, and self-assessments may not produce the requisite level of accountability.
Work on auditable AI was presented on day 2 of the event by Arndt von Twickel of German cyber authority BSI.
As referred to by Hutiri, the task is as complex as the subject, taking in the entire “Connectionist AI Process Chain.”
Even the ability to audit systems must itself be audited. BSI’s attempt to do so shows that a lot of work remains to improve auditability.
Proponents of a full ban on remote biometrics in public spaces
Ella Jakubowska of rights group EDRi presented the organization’s view on the growing presence and further risk of biometric mass surveillance. The group believes that a ban can not only be consistent with, but enhance, the GDPR.
Quick deletion of biometric data, or processing it at the edge, does not adequately mitigate the risk of mass surveillance, according to Jakubowska.
Therefore, all remote biometric identification in publicly accessible spaces should be prohibited, according to EDRi, including both real-time and retrospective biometric applications, and without exceptions.
Remoteness, on EDRi’s reading, is defined by a potential lack of awareness and control on the part of the subject.
European Data Protection Supervisor Xabier Lareo reviewed the updates made to the proposed Act along the way, including some he approves of and others he believes miss the mark.
Positive changes include the ban on predictive policing systems, balancing of the responsibilities of AI vendors and users, and making the Chair of the AI Board an elected position.
Less encouraging, in his view, is that while the parliament recognized that completely error-free datasets are not possible, its relaxed requirements give AI developers too much leeway, and should instead mandate data quality assessments and minimum thresholds. Lareo is also unclear on what scenario parliament has in mind with the proposed change that AI users could be responsible for data governance issues when in “exclusive” control of the system, and he believes automation bias should be further considered.
Lareo would like to see bans on biometric identification in publicly accessible spaces, as suggested by EDRi, along with bans on emotion recognition and discriminatory categorization with biometrics. The EDPS should have a vote on the AI Board, third-party conformity assessments should be prioritized, and the scope should be expanded to include the use of AI by public authorities for international law enforcement and judicial processes.
The varying perspectives set up panel discussions at the end of each day, but those discussions were not recorded or shared with the public.