Emotion AI excites some businesses, but the legal landscape feels fickle

Categories Biometric R&D  |  Biometrics News
Businesses deploying emotion recognition algorithms need to be aware of the legal risks that they could incur in a regulatory environment that is inconsistent across the United States, and becoming more so as states pass data privacy laws governing biometrics and other personal information. This is the view advanced by Lena Kempe of LK Lawfirm in an article in the American Bar Association’s Business Law Today.

Emotion AI comes with some of the same concerns as biometric technologies, such as risks to data privacy and bias. It also introduces the possibility that individuals whose emotions are understood by automated systems could also be manipulated by them.

Kempe suggests that the market is growing. The article cites a forecast from market analyst Valuates, which sets revenues in the field at $1.8 billion in 2022, and predicts rapid growth to $13.8 billion by 2032 as businesses attempt to improve online user experiences and organizations address mental health and wellbeing.

Kempe also notes that as of 2019, Affectiva was performing advertising research for a quarter of the companies in the Fortune 500. A year and a half later, the company said the figure had risen to 28 percent; today it stands at 26 percent, somewhat undercutting the claim of rapid growth.

Emotion AI uses data such as the text and emojis contained in social media posts, facial expressions, body language and eye movements captured by cameras, and the tone, pitch and pace of voices captured by microphones and shared over the internet. Biometric data such as heart rate can also be used to detect and identify emotions, as can behavioral data like gestures.

If this data or its output can directly identify a person, or if it can be reasonably linked to an individual, it falls under the category of personal information. This, in turn, brings it into the scope of the European Union’s General Data Protection Regulation and a raft of diverse U.S. state data privacy laws. In some cases outlined by Kempe, the information can qualify as sensitive personal data, triggering further restrictions under the GDPR and state law.

The frequent use of biometric data for emotion AI also introduces regulatory risk from Illinois’ Biometric Information Privacy Act (BIPA) and similar laws being passed or considered elsewhere around the country.

Kempe advises businesses to include any emotion data in comprehensive privacy notices; to minimize the data they collect and store, anonymizing it where possible; and to review and update policies so that data handling is limited to the specific purpose for which it is collected. They should also implement opt-in consent where sensitive personal data is involved, along with robust security measures.
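The minimization and anonymization steps above can be sketched in code. This is a minimal, hypothetical illustration, not a compliance implementation: the field names, the allow-list, and the salted-hash pseudonymization scheme are all assumptions for the example, and truncated salted hashes are pseudonymization rather than true anonymization under the GDPR.

```python
import hashlib

# Illustrative allow-list: keep only fields needed for the stated purpose.
ALLOWED_FIELDS = {"emotion_label", "confidence", "timestamp"}
SALT = "rotate-me-per-deployment"  # in practice, a secret, regularly rotated salt

def minimize_record(record: dict) -> dict:
    """Drop fields outside the stated purpose and pseudonymize the user ID."""
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "user_id" in record:
        # Replace the direct identifier with a truncated salted hash so
        # records can be grouped without storing the raw identity.
        digest = hashlib.sha256((SALT + str(record["user_id"])).encode()).hexdigest()
        minimized["user_ref"] = digest[:16]
    return minimized

raw = {
    "user_id": "alice@example.com",
    "emotion_label": "joy",
    "confidence": 0.87,
    "timestamp": "2024-05-01T12:00:00Z",
    "raw_face_landmarks": [0.1, 0.2],  # sensitive biometric detail: dropped
}
clean = minimize_record(raw)
```

A real deployment would also need retention limits and opt-in consent checks before any record is created at all.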

She also sets out legal strategies for avoiding bias and manipulation, which are largely related to transparency and risk management.

The unsettled regulatory environment and market for emotion AI and affective computing force companies using the technologies to keep abreast of ongoing changes, Kempe says, lest their excitement over a deeper understanding of their users lead to feelings of violation or betrayal, and to lawsuits.
