Research shows human face recognition aided by speech movement
Cognitive psychologist and speech scientist Alexandra Jesse of the University of Massachusetts Amherst and her undergraduate linguistics student Michael Bartoli conducted a series of experiments showing that adult listeners can recognize previously unfamiliar speakers solely from the motion of their faces while talking, ScienceDaily reports.
According to Jesse, academics studying face perception have argued that individuals recognize other people’s faces based on static features, such as shape, size, and skin colour. Speech perception scientists counter that dynamic features, such as how the face moves during speech, also matter.
“The missing link, and the reason for this study, was to show that listeners can use visual dynamic features to learn to recognize who is talking,” Jesse says.
One of the experiments used point-light displays of faces and showed that people use the motion “signatures” of others to identify individuals. The study appears online in the journal Cognition and is expected in the July print edition.
Jesse says the findings could have significant practical implications for facial recognition software and other kinds of recognition technology. Systems requiring people to speak a short phrase could be more reliable and harder to spoof if they leverage both static features and dynamic motion data.
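The idea of combining the two kinds of evidence can be illustrated with a minimal score-fusion sketch. Everything here is hypothetical: the function names, weights, and threshold are illustrative assumptions, not part of the study or of any real recognition system.

```python
# Hypothetical sketch: fusing a static similarity score (face shape/texture)
# with a dynamic one (talking-motion "signature") for identity verification.
# Weights and threshold are made-up illustrative values.

def fused_score(static_sim: float, dynamic_sim: float, w_dynamic: float = 0.4) -> float:
    """Weighted combination of two similarity scores, each in [0, 1]."""
    return (1 - w_dynamic) * static_sim + w_dynamic * dynamic_sim

def verify(static_sim: float, dynamic_sim: float, threshold: float = 0.7) -> bool:
    """Accept an identity claim only if the fused score clears the threshold."""
    return fused_score(static_sim, dynamic_sim) >= threshold

# A photo held to the camera might match static features well (0.9) but
# produce no plausible talking motion (0.1), so the claim is rejected;
# a genuine speaker scores well on both channels and is accepted.
print(verify(0.9, 0.1))  # spoof attempt -> False
print(verify(0.9, 0.8))  # genuine speaker -> True
```

The point of the sketch is that requiring the dynamic channel to agree makes a static-only spoof (a still photograph) insufficient on its own.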
Researchers at the University of Bradford recently developed a system for gender recognition based on the dynamics of smiles, rather than static features.