New EU AI strategy puts remote biometric identification in “high-risk” category
The EU will treat biometrics for authentication differently from remote biometric identification as it develops regulation under its five-year, human-centric digital strategy, which aims to set clear global standards for technological advancement and for AI in particular. The Commission is looking to provide a transparent framework that benefits both citizens and companies operating in the region, allowing them to thrive, innovate and trust that their data is safe online.
The focus will be on endorsing solutions that are human-centric, support a “fair and competitive economy” and promote “an open, democratic and sustainable society.” The first step in shaping the European data strategy is the recently published white paper on Artificial Intelligence, which includes guidance specific to facial recognition and other biometrics.
Based on guidelines provided by the High-Level Expert Group, the Commission has outlined key features that high-risk AI applications need to address, including training data, data and record-keeping, robustness and accuracy, human oversight, and specific requirements for applications used, for example, in remote biometric identification.
“Bias and discrimination are inherent risks of any societal or economic activity. Human decision making is not immune to mistakes and biases,” reads the report. “However, the same bias when present in AI could have a much larger effect, affecting and discriminating many people without the social control mechanisms that govern human behavior. This can also happen when the AI system ‘learns’ while in operation.”
The EU argues remote biometric identification should be treated separately from biometric authentication. While biometric authentication is described as a security mechanism that leverages unique biological characteristics to verify identity, remote biometric identification uses biometric modalities such as fingerprints, facial images, iris or vascular patterns to establish the identities of multiple persons “at a distance, in a public space and in a continuous or ongoing manner by checking them against data stored in a database.”
However, the paper warns that the collection and use of biometric data for facial recognition in a public space “carries specific risks for fundamental rights.” With some exceptions, processing biometric data to identify a person is forbidden by data protection regulation. GDPR, for example, clearly says biometric data processing can happen “on a limited number of grounds […] for reasons of substantial public interest.” The report also recognizes that the rights implications of remote biometric identification systems can vary considerably according to their purpose, context, and scope of use.
In order to avoid market fragmentation, the EC plans to launch a broad debate on the circumstances and safeguards necessary for the appropriate use of biometric identification in public spaces.
Ultimately, the EC suggests a risk-based approach, in which the use of AI for remote biometric identification is one of two examples provided of systems that “would always be considered ‘high-risk,’” triggering a set of mandatory legal requirements related to training data, record-keeping, transparency, accuracy, oversight, and application-specific rules.
The investment in the digital framework strategy with a focus on AI will be supported by the Digital Europe Programme (DEP), the Connecting Europe Facility 2 and Horizon Europe. The Commission has suggested an investment of €15 billion in ‘Digital, Industry and Space’ under Horizon Europe and some €2.5 billion in data platforms and AI applications under DEP, out of which €2 billion will be invested in a European High Impact project.
Digital technologies will be an important pillar of the EU’s Green Deal strategy, which targets climate neutrality by 2050. Europe is actively focusing on introducing standards for advanced technologies such as blockchain, high-performance and quantum computing, AI and data sharing.
The Commission is hoping to get industry and expert feedback on its AI white paper so it can capitalize on the technology’s benefits while properly addressing issues and roadblocks.
By putting citizens first, the EU aims to provide an AI system framework that is compliant with current legislation and does not compromise fundamental rights through algorithmic bias. The Commission emphasizes that “biases in algorithms or training data used for recruitment AI systems could lead to unjust and discriminatory outcomes, which would be illegal under EU non-discrimination laws.” Should breaches occur, national authorities will need to step in and address them.
Another point the Commission makes is that high-risk AI systems will require extensive testing and certification to ensure they abide by EU standards. While it acknowledges the opportunities and benefits facial recognition can deliver for user authentication, the Commission warns that remote biometric identification is “the most intrusive form of facial recognition and in principle prohibited in the EU.” The white paper opens the door to a debate on the scenarios that could justify the use of facial recognition for remote biometric identification.
The Civil Liberties Committee is holding a hearing this week at the European Parliament to analyze the pros and cons of AI use, specifically biometric facial recognition, in police investigations. The speaker lineup includes members from the Council of Europe, the United Nations Interregional Crime and Justice Research Institute (UNICRI), the European Union Agency for Fundamental Rights (FRA) and the European Data Protection Supervisor (EDPS) as well as academia, think tanks and civil society.