
Southeast Asian group develops voice-based emotion recognition

Thai researchers say they have created an emotion-recognition dataset and models for speech biometrics.

The data, which is free to download, enables AI applications both to recognize emotions as people talk and to imbue artificial speech with seemingly authentic emotion. Interesting as the news is, the models are only up to 70 percent accurate. The researchers plan to work further on the models’ accuracy and to expand them to cover people of all ages.

They say the dataset and models can be used to improve call center systems and AI-powered robots.

Researchers from the arts and engineering faculties of Chulalongkorn University, or Chula, created the tools in part to build a new category of biometric data that can be collected and analyzed by consumer-facing organizations.

Two hundred actors spoke in Thai with angry, sad, frustrated, happy and neutral tones in monologues and in dialogues, according to a Chula release.
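To make the idea concrete, the sketch below shows how a five-emotion speech classifier could be trained on a corpus organized by emotion label. The directory layout (data/<emotion>/*.wav), the MFCC features and the logistic-regression model are illustrative assumptions, not details of the Chula team's pipeline.

"""
Minimal sketch of a speech emotion classifier for a five-emotion corpus
(angry, sad, frustrated, happy, neutral). File layout and feature choices
are assumptions for illustration only.
"""
from pathlib import Path

import numpy as np
import librosa  # audio loading and MFCC features
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical layout: data/<emotion>/<clip>.wav
EMOTIONS = ["angry", "sad", "frustrated", "happy", "neutral"]
DATA_DIR = Path("data")


def clip_features(path: Path, sr: int = 16000) -> np.ndarray:
    """Summarize one clip as the mean and std of its MFCCs."""
    audio, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def load_dataset():
    """Walk the emotion folders and build feature/label arrays."""
    features, labels = [], []
    for label, emotion in enumerate(EMOTIONS):
        for wav in sorted((DATA_DIR / emotion).glob("*.wav")):
            features.append(clip_features(wav))
            labels.append(label)
    return np.array(features), np.array(labels)


if __name__ == "__main__":
    X, y = load_dataset()
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"held-out accuracy: {acc:.2%}")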

Developers of facial recognition applications are striving to increase the accuracy of emotion recognition, which is a potentially big market. Emotion recognition from voice is less developed, but it will likely face similar privacy concerns.

The datasets can be downloaded from this Thai-language site.
