AI improving at sentiment recognition, already good at rash judgement, studies show
When a person demonstrates a low emotional quotient (EQ), they are sometimes described as reacting like a robot. Normal human reactions, however, also include inaccurate judgements, like the superficial first impressions people form based on the way someone’s face looks.
Understanding how someone will be perceived by others is a function of EQ, and a team of researchers from Stevens Institute of Technology says it has trained an AI algorithm to predict those perceptions with high accuracy.
A study published in IEEE Transactions on Affective Computing by researchers from a pair of Japanese academic institutions, and reported by Psychology Today, suggests that applying artificial intelligence to normally unobservable physiological signals can significantly improve speech-based sentiment analysis.
Psychology Today frames the paper within the broader movement to build human-like EQ into automated services like chatbots.
The abstract of ‘Effects of Physiological Signals in Different Types of Multimodal Sentiment Estimation’ describes fusing linguistic and physiological data to achieve more accurate sentiment recognition.
“Our results suggest that physiological features are effective in the unimodal model and that the fusion of linguistic representations with physiological features provides the best results for estimating self-sentiment labels as annotated by the users themselves,” the researchers write.
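The paper’s exact architecture is not described here, but the general idea of fusing linguistic and physiological features can be sketched in a few lines. The following is a minimal, hypothetical late-fusion model in PyTorch; the feature dimensions, layer sizes and three-class sentiment output are illustrative assumptions, not the authors’ design.

```python
# Minimal late-fusion sketch (hypothetical, not the paper's architecture).
import torch
import torch.nn as nn

class FusionSentimentModel(nn.Module):
    def __init__(self, text_dim=768, physio_dim=32, hidden=128, n_classes=3):
        super().__init__()
        # Project each modality to a common size before combining them.
        self.text_proj = nn.Linear(text_dim, hidden)
        self.physio_proj = nn.Linear(physio_dim, hidden)
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden, n_classes),  # e.g. negative / neutral / positive
        )

    def forward(self, text_feats, physio_feats):
        # Fusion step: concatenate the projected modality features.
        fused = torch.cat(
            [self.text_proj(text_feats), self.physio_proj(physio_feats)], dim=-1
        )
        return self.classifier(fused)

model = FusionSentimentModel()
text_batch = torch.randn(4, 768)   # e.g. sentence embeddings for 4 utterances
physio_batch = torch.randn(4, 32)  # e.g. heart-rate / skin-conductance features
print(model(text_batch, physio_batch).shape)  # torch.Size([4, 3])
```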
Sentiment analysis is often differentiated from emotion recognition based on the type of data used, and both fall under the more general umbrella of affective computing.
A forecast from MarketsandMarkets predicts the global market for emotion detection and recognition will grow at a robust 12.9 percent compound annual growth rate to reach $43.3 billion by 2027. Speech-based systems are expected to take the largest market share, and in addition to chatbots, automotive applications are seen as among the market’s likely main drivers.
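As a quick sanity check on those numbers, compounding backwards from the 2027 target gives the implied size of the market today. The snippet below assumes a five-year forecast window ending in 2027, since the report’s base year is not stated here.

```python
# Back-of-the-envelope check of the forecast figures.
target_2027 = 43.3  # USD billions
cagr = 0.129        # 12.9 percent compound annual growth rate
years = 5           # assumption: a five-year window ending in 2027
implied_base = target_2027 / (1 + cagr) ** years
print(f"Implied base-year market size: ${implied_base:.1f}B")  # ~$23.6B
```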
Do I look judgemental?
An article contributed to Tech Xplore by Stevens Institute of Technology outlines the research, published in the Proceedings of the National Academy of Sciences, into predicting how people’s faces will be judged.
‘Deep models of superficial face judgements’ describes the identification of 34 “perceived social and physical attributes,” such as trustworthiness and age, based on first impressions of computer-generated facial images. These impressions were used to label images and train a neural network to make similar judgements, with significant success, according to the paper.
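At a high level, that setup amounts to regressing averaged human ratings from face images. The sketch below illustrates the idea in PyTorch; the toy backbone, 64x64 input size and hyperparameters are assumptions for illustration, not the published model.

```python
# Hypothetical training sketch: predict first-impression ratings from images.
import torch
import torch.nn as nn

N_ATTRIBUTES = 34  # perceived attributes such as trustworthiness and age

model = nn.Sequential(             # stand-in for a real image backbone
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256),
    nn.ReLU(),
    nn.Linear(256, N_ATTRIBUTES),  # one predicted rating per attribute
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 64, 64)     # batch of synthetic face images
ratings = torch.rand(8, N_ATTRIBUTES)  # averaged human first-impression labels

optimizer.zero_grad()
loss = loss_fn(model(images), ratings)  # teach the network to mimic the judgements
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```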
“The algorithm doesn’t provide targeted feedback or explain why a given image evokes a particular judgment,” lead researcher Jordan Suchow says. “But even so it can help us to understand how we’re seen—we could rank a series of photos according to which one makes you look most trustworthy, for instance, allowing you to make choices about how you present yourself.”
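Suchow’s ranking example is straightforward to picture in code. The sketch below is purely hypothetical: predict_attributes() stands in for a trained first-impression model and returns deterministic dummy scores so the example runs.

```python
# Hypothetical photo-ranking sketch; not the published system.
import random

ATTRIBUTES = ("trustworthy", "dominant", "age")  # illustrative names, 3 of 34

def predict_attributes(image_path: str) -> dict:
    """Placeholder: a real model would score the image on each attribute."""
    rng = random.Random(image_path)  # deterministic dummy scores in [0, 1)
    return {attr: rng.random() for attr in ATTRIBUTES}

def rank_by_attribute(image_paths, attribute="trustworthy"):
    """Order photos from highest to lowest predicted score on one attribute."""
    return sorted(image_paths, key=lambda p: predict_attributes(p)[attribute],
                  reverse=True)

print(rank_by_attribute(["headshot_a.jpg", "headshot_b.jpg", "headshot_c.jpg"]))
```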