Is the EU AI Act leaving a backdoor for emotion recognition?

As the European Union prepares to adopt the Artificial Intelligence Act, the landmark legislation is still attracting criticism, this time for allowing the use of emotion recognition. An opinion piece published in the EU Observer argues that the AI Act still leaves the door open to the technology's use by law enforcement and migration officials, which could lead to rights abuses.

The anonymous writer, who describes themselves as an EU civil servant, claims that this gap leaves space for emotion recognition in law enforcement and migration control, pointing in particular to the controversial EU-funded iBorderCtrl project, an AI lie detector built on facial and emotion recognition technologies and designed for use in migrant interviews. In 2018, the bloc announced it would fund the project and conduct pilots in Hungary, Greece and Latvia.

iBorderCtrl has been a target of heavy scrutiny among rights groups and lawmakers. Despite efforts to shed light on its use, the Court of Justice of the European Union (CJEU) ruled in September 2023 to deny access to documentation related to the project, citing the protection of commercial interests.

Facial emotion recognition is the process of identifying human sentiment using face biometrics, a relatively new field of AI that has been met with mixed results. The AI Act itself highlights the technology’s shortcomings and its potential for “discriminatory outcomes” and intrusiveness to rights and freedoms.

“There are serious concerns about the scientific basis of AI systems aiming to identify or infer emotions, particularly as expression of emotions vary considerably across cultures and situations, and even within a single individual,” the document states.

Despite these concerns, emotion recognition has not been included in Article 5 of the AI Act, which defines Prohibited Artificial Intelligence Practices. Instead, emotion recognition is classified as a high-risk AI system, and its use is prohibited in the workplace and in educational institutions, with exceptions for safety and medical reasons. Deployers of emotion recognition systems are not required to inform people about the operation of such systems when the systems are used to "detect, prevent and investigate criminal offenses," according to the legislation.

The language of the AI Act adds to the confusion over the legality of emotion recognition: The law states that the fact that an AI system is classified as high-risk "should not be interpreted as indicating that the use of the system is lawful under other acts of Union law or under national law compatible with Union law."

At the beginning of February, European lawmakers reached an agreement on the technical details of the AI Act, opening the path to European Parliament committees’ approval in April. Although significant opposition is not expected, lawmakers may still introduce changes that could slow down the timeline of its implementation.


