April 6, 2017 -
The National Institute of Standards and Technology (NIST) has released an interagency report detailing the results of its Face in Video Evaluation (FIVE), a public test designed to advance video facial identification for a range of applications.
Authored by Patrick Grother, George Quinn and Mei Ngan, the “Face in Video Evaluation (FIVE) Face Recognition of Non-Cooperative Subjects” (PDF) report is intended to provide guidance to technology developers and inform policymakers’ decisions regarding the use of these systems.
The report shows that achieving the most accurate results for each intended application requires good algorithms, a dedicated design effort, a multidisciplinary team of experts, limited-size image databases and field tests to properly calibrate and optimize video facial recognition technology.
FIVE ran 36 prototype algorithms from 16 commercial suppliers over 109 hours of video imagery, captured in a range of settings, to match faces in the video against databases containing photographs of up to 48,000 individuals.
The video images included hard-to-match pictures of individuals looking at smartphones, wearing hats or just looking away from the camera.
Lighting was also an issue, and some faces never appeared on screen at all because another individual blocked them.
For the test, the people in the videos were not required to look at the camera, so the technology had to compensate for large changes in the appearance of a face, which often leads to less successful results.
The more accurate algorithms successfully identified subjects between 60 percent and more than 99 percent of the time, depending on video and image quality and the algorithm’s ability to handle the given scenario.
“Our research revealed that the video images’ quality and other properties can highly influence the accuracy of facial identification,” said lead author Patrick Grother, adding that while accuracy is important, it should not be the only factor to analyze when considering the deployment of video face recognition.
Other key factors include the costs of computer processing time and access to trained facial recognition experts to ensure that the matches are accurate.
In video, many faces are small, unevenly lit or not forward-facing, and these three factors critically undermine identification because the algorithms are not very effective at compensating for them.
For more traditional face-matching evaluations, NIST tests algorithms that compare a photograph of a person’s face against a database of millions of portrait photographs; in some applications, these algorithms achieve match rates of more than 99 percent.
However, NIST limited galleries to just 48,000 images for the new study because the lower face quality in video undermines recognition accuracy.
The organization also measured “false positive” outcomes in which an algorithm incorrectly matches a video face image with an image in the gallery.
The report emphasizes that face identification technology providers must consider this issue, particularly in crowded settings in which most people in the video may be absent from the gallery.
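This open-set, or “watchlist,” setting can be illustrated with a minimal sketch: a match is reported only when the best similarity score clears a threshold, and a false positive occurs when a person who is not enrolled in the gallery clears it anyway. All names, scores and the threshold below are hypothetical, chosen only to show the mechanics, and are not taken from the report.

```python
# Hypothetical open-set identification sketch (not from the NIST report).
# A probe is matched to the gallery only if its best score clears a threshold;
# a non-enrolled probe that still clears it counts as a false positive.

def identify(probe_scores, threshold):
    """Return the best-scoring gallery ID, or None if below threshold."""
    best_id = max(probe_scores, key=probe_scores.get)
    return best_id if probe_scores[best_id] >= threshold else None

# Probes: (name, similarity scores against gallery IDs, enrolled-in-gallery?)
probes = [
    ("alice", {"alice": 0.92, "bob": 0.40}, True),   # enrolled, strong match
    ("carol", {"alice": 0.55, "bob": 0.61}, False),  # not enrolled
    ("dave",  {"alice": 0.30, "bob": 0.35}, False),  # not enrolled
]

threshold = 0.6
false_positives = sum(
    1 for _, scores, enrolled in probes
    if not enrolled and identify(scores, threshold) is not None
)
print(false_positives)  # carol's 0.61 clears the threshold: one false positive
```

Raising the threshold suppresses such false positives but also rejects more genuine matches, which is the trade-off a deployment in a crowded setting must tune.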
NIST says these video-based applications may deliver the same accuracy as still-photo face recognition, but only if image collection can be improved.
To this end, the report provides guidance to a wide group of individuals involved with the technology, from algorithm developers to system designers.
As previously reported, NIST’s Trusted Identities Group (TIG) said it will invest $750,000 to assess the benefits of five state and local government identity management pilot projects it funded in 2016.