Defining remote biometric identification technically and legally
For a clearer discussion of what remote biometric identification is, and of the current and potential laws around it, the association of civil and human rights organizations European Digital Rights (EDRi) has created a highly accessible guide to the technologies and the legal landscape.
The guide is prompted by the December 2022 compromise among the European Union digital ministers on the EU’s upcoming AI Act, which EDRi claims will water down the Act’s proposed ban. The European Parliament’s co-rapporteurs countered by proposing a tenth set of amendments to the AI Act to ensure that the risk criteria would classify any AI system handling biometric data as high-risk.
Germany appears to be positioning itself against the December agreement and to be aligning more closely with the European Parliament, which wants a ban on mass biometric surveillance, ahead of the next rounds of dialogue on the AI Act, which seek an agreement between the Council, Commission and Parliament.
EDRi’s guide also comes within a context of civil society organizations claiming they have been excluded from participating in the drafting of an international AI treaty handled by the Council of Europe. The United States is reportedly behind the decision.
Acceptability and consent
The guide is split into three sections covering the technologies and the laws and legal proposals that surround them, followed by a further section on EDRi’s recommendations.
The first section addresses which biometrics use cases are deemed legally acceptable, such as unlocking a smartphone, and which use cases are unacceptable to EDRi in terms of potential harm, such as being surveilled in public. In these areas, EDRi argues, the law needs to be strengthened.
As it involves biometrics processing, even the example of face scans or fingerprints to biometrically unlock one’s smartphone is only lawful within the bloc’s General Data Protection Regulation (GDPR) if it is done with “informed consent, the data are processed in a privacy-preserving and secure manner and not shared with unauthorized third parties, and all other data protection requirements are met,” notes the guide.
EDRi asks how being biometrically surveilled in public can be deemed to have received informed consent. Entering a public area covered by facial recognition cameras forces someone to undergo biometric processing. This is “coercive and not compatible with the aims of the GDPR, nor the EU’s human rights regime (in particular rights to privacy and data protection, freedom of expression and freedom of assembly and in many cases non-discrimination).”
The guide finds that providers are using exceptions within GDPR to effectively conduct mass surveillance, which data protection authorities are pursuing. EDRi argues that the law around RBI (remote biometric identification) in public places needs to be made more explicit.
The draft AI Act only banned live RBI, not post or retrospective RBI. It also only banned police use, not use by central or local government or by private companies. EDRi argues that in human rights terms there is no real difference between being identified in real time and after the fact, and that retrospective processing can prove even more harmful.
The current Law Enforcement Directive does not clearly establish, through criteria specifying what counts as sensitive, which types of biometric processing by police should be blocked.
Biometric identification vs verification
What happens when biometrics are taken needs to be clearly understood, finds the guide, as the difference between biometric identification and biometric verification is legally significant.
The guide outlines the 1:1 matching used in verification, such as phone unlocking, versus the 1:N matching used for identification, such as surveillance. Verification generally does not require comparison against a database of people, and the sensitive data does not leave the device.
It argues that the use of biometric identification for authentication purposes is growing, such as through pre-enrollment in systems for travel or for entering venues. EDRi warns that providers use language such as ‘validation’ and ‘authentication’ when what they are really doing is identification rather than verification.
Identification carries further risk as it requires a database prone to hacking and commercial exploitation. It can also introduce more risks around misidentification and empower biometric mass surveillance.
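The 1:1 versus 1:N distinction above can be sketched in code. This is a minimal illustration, not any real biometric system: the cosine-similarity comparison, the toy three-dimensional "templates," and the threshold value are all hypothetical stand-ins for the high-dimensional embeddings and tuned matchers real systems use.

```python
import math

def similarity(a, b):
    # Cosine similarity between two feature vectors -- an illustrative
    # stand-in for a real biometric template comparison.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def verify(probe, enrolled_template, threshold=0.9):
    # 1:1 verification: the probe is compared against a single stored
    # template (e.g. one kept locally on a phone). No database is consulted.
    return similarity(probe, enrolled_template) >= threshold

def identify(probe, database, threshold=0.9):
    # 1:N identification: the probe is compared against EVERY template in
    # a central database; the best match above the threshold is returned.
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy templates (hypothetical; real embeddings have hundreds of dimensions)
alice = [0.9, 0.1, 0.3]
bob = [0.1, 0.8, 0.4]
db = {"alice": alice, "bob": bob}

print(verify([0.88, 0.12, 0.31], alice))  # True: one probe vs one template
print(identify([0.11, 0.79, 0.41], db))   # bob: one probe vs the whole database
```

Note how `identify` requires the central `db` of templates that EDRi flags as the extra risk: a store of sensitive data that can be hacked or commercially exploited, and which makes misidentification across many people possible.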
The addition of ‘remoteness’
Biometric verification, such as unlocking a phone or laptop or going through a passport gate, is an active move and not remote. CCTV cameras and sensors that can identify people are remote, even unseen.
“Although biometric identification is often referred to as 1:n (1-to-many matching), it’s actually more accurate to think of remote biometric identification as n:n (many-to-many matching).
“That’s because – even if only one person is being searched for – every single person gets scanned.”
This is biometric mass surveillance.
Tighten and define
The guide concludes with a series of proposed amendments to the AI Act to ensure people are protected from biometric mass surveillance. Remote biometric identification should be banned – live and post – in all publicly accessible areas, by all actors, with no exceptions.
Specific rewordings of articles of the AI Act are included, to remove exceptions for police use and to ensure the law applies online. The guide includes suggested wording to define an RBI system and ‘remote’ as well as add new prohibitions for RBI in the draft.
Biometric verification would not be affected.
Article Topics
AI Act | biometric identification | biometrics | data protection | EU | European Digital Rights (EDRi) | facial recognition | privacy | surveillance