
Fujitsu, Carnegie Mellon develop AI-based facial expression recognition technology

 


Japanese company Fujitsu, in collaboration with the Carnegie Mellon University School of Computer Science, has developed an AI facial expression recognition technology that instantly recognizes subtle changes in facial expression, such as nervous or uncomfortable laughter, or confusion, the company announced.

Until now, technologies that read human emotion have typically detected only clear changes in facial expression, and have been used to automatically extract highlight scenes from videos and to enhance robots’ reactions. Because subtle facial changes are hard for computers to read, Fujitsu built its technology on Action Units (AUs): as many as 30 categories of movement, each tied to specific facial muscles, including those of the cheeks and eyebrows.

To achieve high accuracy, deep learning techniques must leverage large amounts of training data, yet this is not always possible in real-life situations, because faces are captured at different angles, sizes, and positions, according to the announcement.

The technology developed by Fujitsu applies a different normalization process to each facial image, overcoming the difficulty of obtaining the large data troves that would otherwise be needed to train a detection model for each facial expression as it appears in real-world applications. Even if a subject’s face is at an angle and not directly facing the system, Fujitsu explains, the technology adjusts the image so the model can still be trained with limited data. The technology could be used in a number of real-world scenarios, such as measuring employee engagement and monitoring workplace safety.

Fujitsu and the Carnegie Mellon University School of Computer Science developed this normalization process to transform each captured face to more closely resemble a frontal image, and to analyze, for each AU, the facial regions that most affect its detection. Fujitsu says the technology has achieved a detection accuracy of 81 percent and claims it is more accurate than others on the market. Future plans include applying the technology to teleconferencing support, employee engagement measurement, and driver monitoring.
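Fujitsu has not published the details of its normalization pipeline, but the general idea it describes, warping an off-angle face so it more closely resembles a frontal image before AU detection, is commonly done by mapping detected facial landmarks onto canonical frontal positions. A minimal sketch of that step, assuming only that two eye centers have been detected (the landmark names and target coordinates here are illustrative, not Fujitsu's):

```python
import numpy as np

def similarity_transform(src_eyes, dst_eyes):
    """Compute a similarity transform (rotation + uniform scale + translation)
    that maps the two detected eye centers onto two canonical frontal
    eye positions. Returns the 2x2 linear part R and translation t."""
    (x1, y1), (x2, y2) = src_eyes
    (u1, v1), (u2, v2) = dst_eyes
    src_vec = np.array([x2 - x1, y2 - y1])
    dst_vec = np.array([u2 - u1, v2 - v1])
    # Scale: ratio of inter-eye distances; angle: difference of eye-line angles.
    scale = np.linalg.norm(dst_vec) / np.linalg.norm(src_vec)
    angle = np.arctan2(dst_vec[1], dst_vec[0]) - np.arctan2(src_vec[1], src_vec[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    # Translation chosen so the first eye lands exactly on its target.
    t = np.array([u1, v1]) - R @ np.array([x1, y1])
    return R, t

def normalize_landmarks(landmarks, src_eyes, dst_eyes):
    """Apply the eye-based similarity transform to every detected landmark,
    producing coordinates in the canonical frontal frame."""
    R, t = similarity_transform(src_eyes, dst_eyes)
    return (np.asarray(landmarks, dtype=float) @ R.T) + t
```

In practice this is the simplest member of a family of alignment methods; published AU-detection systems often fit affine or piecewise warps over many more landmarks, and Fujitsu's per-image normalization may differ substantially.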

Facial behavior analysis of tics such as raised brows, nose wrinkles, jaw movements, and pressed lips has also been used by researchers at the University of California, Berkeley and the University of Southern California to detect deepfake videos.



