
Fujitsu, Carnegie Mellon develop AI-based facial expression recognition technology

 


Japanese company Fujitsu, in collaboration with the Carnegie Mellon University School of Computer Science, has developed an AI-based facial expression recognition technology that instantly recognizes subtle changes in facial expression, such as uncomfortable or nervous laughter, or confusion, the company announced.

To date, technologies that read human emotion have typically detected only clear changes in facial expression, and have been used to automatically extract highlight scenes from videos and to enhance robots’ reactions. Subtle human emotions are hard for computers to read, so to identify small facial changes, Fujitsu has built the technology around as many as 30 Action Units (AUs), each based on the movement of a specific facial muscle, including those of the cheeks and eyebrows.
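Action Units come from the Facial Action Coding System (FACS), which assigns a code to each distinct facial muscle movement. The sketch below illustrates the general idea of AU-based detection; the AU names are standard FACS definitions, but the scoring function and threshold are illustrative assumptions, not details of Fujitsu's system.

```python
# A few Facial Action Coding System (FACS) Action Units, the kind of
# muscle-movement codes the article says Fujitsu's technology detects.
# Descriptions are standard FACS; the scores below are made-up examples.
ACTION_UNITS = {
    "AU1": "inner brow raiser",
    "AU4": "brow lowerer",
    "AU6": "cheek raiser",
    "AU9": "nose wrinkler",
    "AU12": "lip corner puller",
    "AU15": "lip corner depressor",
}

def active_units(scores, threshold=0.5):
    """Return the descriptions of AUs whose detector score exceeds the threshold."""
    return [ACTION_UNITS[au] for au, s in scores.items() if s > threshold]

print(active_units({"AU6": 0.9, "AU12": 0.8, "AU4": 0.1}))
# ['cheek raiser', 'lip corner puller']
```

AU6 and AU12 firing together is the classic marker of a genuine (Duchenne) smile, whereas AU12 without AU6 suggests a posed or nervous one — the kind of subtle distinction the article describes.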

To achieve high accuracy, deep learning techniques must leverage large amounts of data, yet this is not always possible in real-life situations because faces are captured at different angles, sizes, and positions, according to the announcement.

The technology developed by Fujitsu adapts a different normalization process to each facial image, overcoming the difficulty of obtaining large troves of data to train detection models for each facial expression as it would appear in real-world applications. Fujitsu explains that even if the subject’s face is at a different angle and not facing the system, the technology adjusts the image and can still train the model with limited data. The technology could be used in a number of real-world scenarios, such as employee engagement and workplace safety.

Fujitsu and the Carnegie Mellon University School of Computer Science developed this normalization process to adjust the captured face to more closely resemble a frontal image, and to analyze, for each AU, the regions that most affect its detection. Fujitsu says the technology has reached an accuracy rate of 81 percent, which it claims is higher than others on the market. Future plans include applying the technology to teleconferencing support, employee engagement measurement, and driver monitoring.
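The article does not disclose how the normalization works, but a common way to map an off-angle face toward a canonical frontal layout is to fit a similarity transform from detected landmarks (e.g. eye centers) onto template positions. The sketch below is a minimal illustration of that general approach under those assumptions; the landmark indices and the canonical template are invented for the example and are not Fujitsu's method.

```python
import numpy as np

# Illustrative canonical positions for the two eye centers in a
# normalized frontal frame (x, y in [0, 1]) -- an assumed template.
CANONICAL_EYES = np.array([[0.35, 0.40], [0.65, 0.40]])

def similarity_transform(src, dst):
    """Least-squares similarity (scale + rotation + translation)
    mapping 2D src points onto dst points, as row vectors."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    s, d = src - src_c, dst - dst_c
    # Complex-number trick: solve d = a * s, with a = scale * e^{i*theta}
    sc = s[:, 0] + 1j * s[:, 1]
    dc = d[:, 0] + 1j * d[:, 1]
    a = (np.conj(sc) @ dc) / (np.conj(sc) @ sc)
    scale, theta = abs(a), np.angle(a)
    R = scale * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
    t = dst_c - src_c @ R.T
    return R, t

def normalize_landmarks(landmarks, eye_idx=(0, 1)):
    """Map all landmarks into the canonical frame using the eye points."""
    R, t = similarity_transform(landmarks[list(eye_idx)], CANONICAL_EYES)
    return landmarks @ R.T + t
```

Aligning every face to the same template in this way is one reason a model can be trained with limited data: the detector sees expressions in a consistent pose rather than at every capture angle.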

Facial behavior analysis of tics such as raised brows, nose wrinkles, jaw movement, and pressed lips has been used by researchers at the University of California, Berkeley and the University of Southern California to detect deepfake videos.
