Research volunteers throw humans under the facial recognition bus
Which is inherently more trustworthy: facial recognition performed by biometric AI, or by people? According to a new government study, people are more willing to go along with the judgment of an algorithm than with that of another human.
That finding is all the more surprising given the (not always unwarranted) opposition people show to AI face recognition in practice. It has implications for scenarios in which humans work with algorithms on biometric tasks that must be completed quickly.
A paper on human-algorithm teaming, published in the peer-reviewed journal PLOS ONE, says that people can be cognitively biased simply by knowing whether facial recognition results came from a fellow human or from an AI biometric system.
A group of 376 paid volunteers was asked whether they trusted themselves, a computer, or another human to make accurate identity decisions. Three-quarters trusted themselves, 56 percent said they would trust computers, and 53 percent said they trusted other humans.
The big differences in trust came when the volunteers were asked which they distrusted to make good identity judgments.
Eighteen percent threw other humans under the bus, according to the paper. Eight percent had no faith in computers in this context, and nine percent said they could not trust themselves to perform optimally.
As part of the project, the researchers also found that the certainty volunteers felt about whether two photos showed the same person could be influenced simply by the labels “same” and “different” placed above the two columns of images. The labels did not, however, prevent volunteers from correctly telling same and different people apart in the image pairs.
The research was funded by the U.S. Department of Homeland Security’s Science and Technology Directorate. It was performed by John J. Howard, Laura R. Rabbitt and Yevgeniy B. Sirotin at DHS’ test facility in Maryland.
Humans are paired with facial recognition software for several reasons. People are better at judging when automated face biometrics are not appropriate, and they can readily step in when the software fails.
But operations could produce errors well beyond what is generally considered nominal if the human member of the team gives the AI, which is also fallible, the benefit of the doubt when sharper judgment is required.
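To see how that failure mode plays out, consider a minimal sketch of one possible review policy in which a human examiner only genuinely scrutinizes borderline cases and defers to the algorithm whenever it is confident. The function name, threshold, and policy here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of an automation-bias failure mode in a
# human-algorithm face matching team. The names, thresholds, and
# review policy are illustrative assumptions, not from the study.

def team_decision(algo_score: float, human_says_match: bool,
                  defer_threshold: float = 0.7) -> bool:
    """Return the team's match decision for one image pair.

    If the algorithm's similarity score is confident (>= threshold
    for a match, <= 1 - threshold for a non-match), the human defers
    to it. Only ambiguous scores receive genuine human scrutiny.
    """
    if algo_score >= defer_threshold:
        return True               # human rubber-stamps a confident "match"
    if algo_score <= 1 - defer_threshold:
        return False              # human rubber-stamps a confident "non-match"
    return human_says_match       # only borderline cases get real review


# A confident-but-wrong algorithm score passes through unchecked,
# even though the human examiner disagreed:
print(team_decision(algo_score=0.92, human_says_match=False))  # True
```

Under such a policy, every algorithm error above the deference threshold becomes a team error, which is exactly the risk of extending the AI too much benefit of the doubt.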