Deep Glint and VisionLabs lead latest NIST biometric facial verification report
The new Face Recognition Vendor Test (FRVT) 1:1 Verification track report and leaderboard published by the U.S. National Institute of Standards and Technology (NIST) show steady improvement in facial biometric verification accuracy.
The leaderboard includes many companies that are less well-known than many of the 1:N leaders. The report adds six new developers since the July 27 edition, along with 13 new algorithms from previously participating developers. The leaderboard shows the most accurate algorithms in the visa and mugshot categories, listed by developer.
Deep Glint tops the overall leaderboard as of August 25, finishing third in several categories and seventh or better in each. VisionLabs is second, and scores first in the VISA, MUGSHOT (without an extended time lapse between image captures), VISABORDER, and BORDER photo categories, and second in MUGSHOT photos over time. Dahua, Canon Information Technology (Beijing), and Su Zhou NaZhi-TianDi Intelligent Technology round out the top five.
All of the entries in the top 10 are Chinese, except Netherlands-based VisionLabs, Russia's Vocord, and U.S.-based Paravision. Major providers SenseTime, Idemia, NtechLab, CyberLink, Yitu, Neurotechnology, and Innovatrics also appear among the top 30 of the 146 total entries.
A bill was proposed in U.S. Congress last year to bar Chinese and Russian companies from the industry-leading NIST benchmarking test, and several of the Chinese companies above are restricted from certain business relationships with American companies and government institutions over alleged participation in human rights violations.
The algorithms with the highest accuracy in each visa and mugshot category achieved false non-match rates (FNMR) between 0.0025 and 0.0035, measured at a false match rate (FMR) of 0.000001 for visas and 0.00001 for mugshots.
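To make the two error rates concrete, the sketch below (not NIST's code; scores and threshold logic are illustrative assumptions) shows how FNMR at a fixed FMR is typically computed: pick the comparison-score threshold that yields at most the target FMR on impostor pairs, then report the miss rate on genuine pairs at that threshold.

```python
# Illustrative sketch, assuming similarity scores where higher = more alike.
# The synthetic score lists below are made up for demonstration.

def fnmr_at_fmr(genuine_scores, impostor_scores, target_fmr):
    """Return the FNMR at the threshold giving at most target_fmr
    false matches on the impostor comparisons."""
    impostors = sorted(impostor_scores, reverse=True)
    # How many impostor comparisons may match at the target FMR.
    allowed = int(target_fmr * len(impostors))
    # Accept a pair only if its score exceeds this threshold.
    threshold = impostors[allowed] if allowed < len(impostors) else impostors[-1]
    misses = sum(1 for s in genuine_scores if s <= threshold)
    return misses / len(genuine_scores)

genuine = [0.9, 0.85, 0.6, 0.95, 0.88, 0.7, 0.92, 0.3, 0.87, 0.91]
impostor = [0.1, 0.2, 0.15, 0.5, 0.05, 0.25, 0.3, 0.4, 0.12, 0.18]

print(fnmr_at_fmr(genuine, impostor, target_fmr=0.1))  # → 0.1
```

Lowering the target FMR forces a stricter threshold, which generally raises the FNMR; that trade-off is why NIST reports FNMR at fixed, very low FMR operating points.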
“These photos are all photos from standards-compliant sources (i.e. mugshot photos), where lighting, blur, and camera angle conditions were carefully controlled…,” wrote AIH Technology Co-founder Ben Su in a LinkedIn comment. “What we are seeing is that when these ‘99% accurate’ algorithms are tested with ‘in-the-wild’ photos, accuracy rates drop significantly. In addition, standards compliant photo databases don’t really show the full picture on racial bias either – racial bias could be way worse with in-the-wild photos.”
NIST also plans to test the algorithms for effectiveness with face masks.