Future Siri could read human emotion through voice, facial recognition
Apple is looking into making Siri more aware of user emotions. The company is developing a way for the personal assistant to interpret human emotions through facial analysis, reports Apple Insider.
Future versions of Siri may no longer rely on voice recognition alone for accuracy. According to a new patent application, Siri would combine voice recognition with FaceTime video to analyze its interactions with the user, reduce the number of misinterpreted requests, and deliver customized actions.
“Intelligent software agents can perform actions on behalf of a user,” says Apple in US Patent Application Number 20190348037. “Actions can be performed in response to a natural-language user input, such as a sentence spoken by the user. In some circumstances, an action taken by an intelligent software agent may not match the action that the user intended.”
“As an example, the face image in the video input… may be analyzed to determine whether particular muscles or muscle groups are activated by identifying shapes or motions,” it reads.
The upgraded Siri would interpret a user's emotional state as either pleased or annoyed to gauge whether it had fulfilled or misinterpreted the voice request. To read human emotions, the technology would analyze both audio input and images, which would require full access to the microphone and camera. It would then classify facial expressions based on the Facial Action Coding System (FACS), the industry standard for facial taxonomy, and assign scores to each facial expression interpretation.
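The patent does not disclose an implementation, but the scoring idea it describes can be sketched roughly as follows. This is a minimal, hypothetical illustration: the action-unit numbers come from the public FACS taxonomy, while the weights, emotion labels, and function names are invented here for clarity and do not come from Apple's filing.

```python
# Hypothetical sketch of FACS-based emotion scoring (not Apple's actual code).
# FACS describes facial movement as numbered Action Units (AUs), e.g.:
#   AU6  (cheek raiser) and AU12 (lip corner puller) together form a smile;
#   AU4  (brow lowerer) and AU9 (nose wrinkler) are linked to displeasure.
# The weights below are illustrative assumptions, not values from the patent.
EMOTION_WEIGHTS = {
    "pleased": {6: 0.5, 12: 0.5},   # smile-related AUs
    "annoyed": {4: 0.7, 9: 0.3},    # frown-related AUs
}

def score_emotions(active_aus: dict[int, float]) -> dict[str, float]:
    """Assign a score to each emotion from detected AU intensities (0.0-1.0)."""
    return {
        emotion: sum(weight * active_aus.get(au, 0.0)
                     for au, weight in weights.items())
        for emotion, weights in EMOTION_WEIGHTS.items()
    }

def classify(active_aus: dict[int, float]) -> str:
    """Return the highest-scoring emotion label."""
    scores = score_emotions(active_aus)
    return max(scores, key=scores.get)

# A full smile (AU6 and AU12 fully activated) scores as "pleased".
print(classify({6: 1.0, 12: 1.0}))  # pleased
```

In a real system the AU intensities would come from analyzing the FaceTime video frames, as the patent's reference to identifying activated "muscles or muscle groups" suggests; the scoring step here simply shows how detected activations could be mapped to the pleased/annoyed interpretations the filing describes.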
Whether tapping into digital credentials for mobile driver’s licenses (mDLs), pushing for passwordless authentication, or exploring the Apple Ring smart wearable with biometric authentication, Apple has been investigating a number of innovations and projects. The company has not commented on which of them will make it into production.