Research volunteers throw humans under the facial recognition bus

Which is inherently more trustworthy – facial recognition performed by biometric AI or people? According to a new government study, people are more willing to go along with the judgment of an algorithm than of another human.

That finding is all the more surprising given the (not always unwarranted) opposition people show to AI face recognition in practice. It has implications for scenarios in which humans work alongside algorithms on biometric tasks that must be completed quickly.

A paper on human-algorithm teaming, published in the peer-reviewed journal PLOS ONE, says that people can be cognitively biased simply by knowing whether facial recognition results came from another member of their species or from an AI biometric system.

A group of 376 paid volunteers was asked whether they trusted themselves, a computer or another human to make accurate identity decisions. Three-quarters trusted themselves. Fifty-six percent said they would trust computers, and 53 percent said they would trust other humans.

The big differences in trust came when the volunteers were asked which they distrusted to make good identity judgments.

Eighteen percent threw other humans under the bus, according to the paper. Eight percent had no faith in computers in this context, and nine percent said they could not trust themselves to perform optimally.

As part of this project, researchers found the certainty that volunteers felt about whether two photos were of the same person could be influenced by the labels “same” and “different” over two columns of the images.

The labels did not, however, prevent volunteers from correctly distinguishing same and different people in the pairs of images.

The research was funded by the U.S. Department of Homeland Security’s Science and Technology Directorate. It was performed by John J. Howard, Laura R. Rabbitt and Yevgeniy B. Sirotin at DHS’ test facility in Maryland.

Humans are often paired with software for various reasons. In this context, people are better at judging when automated face biometrics are not appropriate, and they can readily step in when the software fails.

But if the human member of a team gives AI, which is also fallible, the benefit of the doubt when sharper judgment is required, operations could produce errors beyond what would generally be considered nominal.
