Manipulated faces threaten trust in remote biometrics, new legal battlegrounds

The European Association for Biometrics addressed different potential harms related to face biometrics and technologies for digital manipulation of faces in multiple areas in its most recent workshop. The risks and challenges for remote onboarding systems around face morphing and manipulation are also accompanied by ethical and social considerations.

The workshop was largely inspired by the ‘Handbook of Digital Face Manipulation and Detection,’ published by a quartet of EAB-linked experts earlier this year.

The EAB’s Workshop on ‘Digital Face Manipulation & Detection’ featured presentations on progress being made in the technology for matching videos of faces to audio samples and similar applications.

The impact of face manipulation on remote biometric onboarding systems and the law were also explored in expert talks.

Clear benefits and positive early experience build trust

Coming at the issue from another angle, Katrin Laas-Miko delivered a presentation on social and ethical challenges with biometrics in remote onboarding, based on a paper in the handbook.

Sociologists refer to our current environment as the “post-digital world,” Laas-Miko notes, due to the disruption of processes and consumer expectations. In this environment, concerns about risks and possible harms from digital identity are created or heightened.

The article in the handbook analyzes the use of face biometrics for remote identity onboarding. The technology’s use is increasing rapidly, but as of 2020 the main remote onboarding methods were still operator-assisted synchronous video calls, or flows based on existing digital identities.

New ETSI and European Banking Authority standards are likely to shift things towards automated, biometric systems.

Laas-Miko reviewed the various methods used, and noted the additional burdens, such as liveness controls, necessary when automation is increased.

“Ethical considerations and social concerns really depend on risk analysis and the possible risks in this concrete user context,” Laas-Miko says. This means that the biometric systems should not be analyzed generally, but rather in terms of specific applications and implementations.

The main risk groups for remote biometric onboarding are identified as falsified evidence, identity theft, phishing, and matching errors (false acceptance or false rejection).
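The trade-off behind those matching errors can be made concrete. The following is an illustrative sketch, not material from the workshop: the scores and threshold values are invented, and real systems compute similarity scores from face templates rather than using hard-coded numbers. It shows how raising a decision threshold lowers false acceptance at the cost of more false rejection.

```python
# Illustrative sketch: how a match-score threshold trades off false
# acceptance (impostors let in) against false rejection (genuine users
# turned away). All scores below are invented example values.

genuine_scores = [0.91, 0.88, 0.76, 0.95, 0.69]   # same-person comparisons
impostor_scores = [0.32, 0.55, 0.71, 0.18, 0.44]  # different-person comparisons

def error_rates(threshold):
    """Return (false acceptance rate, false rejection rate) at a threshold."""
    # False rejection: a genuine comparison scores below the threshold.
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # False acceptance: an impostor comparison reaches the threshold.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return far, frr

for t in (0.5, 0.7, 0.9):
    far, frr = error_rates(t)
    print(f"threshold={t}: FAR={far:.0%}, FRR={frr:.0%}")
```

A stricter threshold pushes the two error rates in opposite directions, which is why the acceptable operating point depends on the risk analysis of the concrete use context, as Laas-Miko argues.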

One central issue of concern for Laas-Miko is the integrity of “practical identity.” This concept is based on a link between a person and their biometric information, which asserts not only identity but also their rights, ownership, responsibilities and entitlements.

The main concern from a social perspective is the protection of privacy and the avoidance of function creep. Privacy, however, is recognized as an instrumental value rather than an absolute one. Trade-offs that sacrifice some privacy for a benefit, along with asymmetric power relations, can lead people to behave in ways inconsistent with their stated positions on privacy.

Bias built into biometric matching algorithms, along with public acceptance of and barriers to new technologies, were also reviewed. Unclear reasoning behind the implementation of new systems creates distrust, as shown by recent studies on Europe’s adoption of biometric passports, Laas-Miko says. Higher awareness by itself does not increase trust; awareness of benefits does.

Positive early experiences also establish trust that can be extended to other applications, according to Laas-Miko, who cites the example of facial recognition on mobile devices.

An emerging legal battleground

Catherine Jasserand of the ‘Biometrics Law Lab’ led by Els Kindt at KU Leuven followed with a presentation on ‘Deepfakes: Emerging Legal, Ethical, and Social Challenges.’

Jasserand, who also contributed a paper to the handbook in collaboration with Kindt, began with the observation that while an image of a synthetic person may not fool technical experts, the state of the art can easily fool non-experts.

The benefits of deepfakes, Jasserand says, go beyond entertainment into more serious domains, such as training medical image analysis systems to identify tumors and other maladies more effectively.

The risks of deepfakes are well established. Jasserand delves further into the possibilities, however, noting for instance that the “digital resurrection” of deceased people with deepfakes does not involve the subject’s consent, and may go against the wishes they expressed while living.

Challenges include the lack of unified legislation, and lack of a single way to protect individuals from the harms of deepfakes.

Personal data protection regulations make some deepfake applications and processes illegal, but other areas, such as synthetic identities and deceased people, appear to fall outside existing rules.

Image rights also cover some issues related to deepfakes, but vary significantly between jurisdictions. Copyright and IP rights can provide some protection against deepfakes, but Jasserand points out that the victims of deepfakes often do not hold copyright of the manipulated material.

Criminal law may be the area of the greatest impact on deepfakes, but procedural law is also impacted by the risk of deepfakes being introduced into court cases as evidence. Jasserand warns that this may be an issue with more importance than commonly recognized so far.

The European Commission’s draft of the AI Act imposes only minimal transparency rules requiring content providers to inform people when they are interacting with manipulated material, which Jasserand finds “a bit mild,” and it does not address malicious content.

Jasserand also addressed the AI Act’s implications for biometrics in a recent workshop specifically on that legislation.

The way forward, Jasserand suggests, is to get a firm handle on the technical possibilities, so that laws can be built to purpose.
