
Facial recognition & NIST – Fake news or old views?

By Tony Porter, Chief Privacy Officer at Corsight AI

The focus on tests conducted in 2019 skews the debate: facial recognition algorithms have advanced rapidly since then, and public commentators, civil rights groups and lawmakers need to reflect that.

The National Institute of Standards and Technology (NIST) Face Recognition Vendor Test (FRVT) program has for many years been the most respected independent and authoritative evaluator of facial recognition algorithms in the world. Developers voluntarily submit their FR technologies to the FRVT program for independent testing of performance and accuracy, and the results are published in the public domain. NIST produced its first FRVT report on demographic effects back in 2019. That four-year-old report is significant in the modern context for one particular reason: it is frequently cited, and misrepresented, in policy debates and media coverage as evidence of alarming levels of bias within the modern facial recognition algorithms in use, or under consideration for use, today.

For clarity, that report does not do that.

That particular 2019 NIST study evaluated 189 software algorithms from 99 developers to determine disparities in performance, or 'bias', across a diverse set of images. The obvious conclusion in 2019 was much the same as you would expect today: the performance of any given FRT system depends on the quality of the algorithm at its heart, the application that uses it, and the data it is fed and trained upon.

Back in 2019, the algorithms in the systems tested on that occasion did indeed show disparities in performance, varying by a factor of 10 to 100 depending on the algorithm. It is the latter figure, achieved by the poorest algorithms NIST tested, that is most often quoted by critics of FRT today, particularly those seeking to deny the use of this technology to law enforcement agencies. A key message that seems to have been lost, conveniently or otherwise, is that different algorithms perform differently.

FR technology has of course developed significantly in the four years since the first FRVT report, but the narrative of the naysayers has not always evolved beyond an entrenched, selective reading of the 2019 report.

The highly respected FRVT is an ongoing assessment process, yet its regularly published, up-to-date reports on the performance of modern FRT algorithms are largely ignored in contemporary policy debates and media reporting. Were the outcomes of current FRVT assessments given at least equal standing with out-of-date representations of the 2019 report, policymakers and the public alike would have factual, current information on which to form their own views and make better-informed decisions.

Modern, better-performing FRT systems, such as those produced by Corsight AI, deliver an accuracy score of 99.88 percent when tested across 12 million images. According to the independent testers, any disparity in performance across race, gender and age is so negligible as to be statistically insignificant. This level of performance far exceeds human capability, and at those levels of accuracy the application of proper risk management controls by human operators can remove such risk altogether.
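To put that accuracy figure in context, a simple back-of-the-envelope calculation is useful. The sketch below assumes "accuracy" is a plain fraction of correct outcomes over the test set (the actual FRVT metrics, such as false match and false non-match rates at fixed thresholds, are more nuanced), and it illustrates why the human risk-management controls mentioned above still matter even at very high accuracy:

```python
# Hedged sketch: what a 99.88% accuracy figure could imply at scale.
# Assumption: "accuracy" is treated as a simple fraction of correct
# outcomes; real FRVT metrics (FMR/FNMR) are defined differently.
test_set_size = 12_000_000
accuracy = 0.9988

expected_errors = round(test_set_size * (1 - accuracy))
print(expected_errors)  # 14400 -- a small fraction, but enough to warrant human review
```

Even a 0.12 percent residual error rate across millions of comparisons yields thousands of candidate errors, which is precisely the gap that operator review and risk controls are designed to close.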

Across the European Union, many reasoned and well-considered arguments have been put forward, particularly by civil liberty groups, against the use of FRT. Indeed, many of these arguments are being advanced now, whilst the EU Trilogue discusses the future of live biometric identification. The unquestionable value in those arguments has challenged law enforcement and has been considered by the courts. Paradoxically, those arguments have probably contributed to higher standards of use by organisations and better guidance from regulators and stakeholders alike where FRT is concerned.

The value of FRT to law enforcement and national security agencies is significant and undeniable; the risks are understood, acknowledged and manageable. Indeed, Interpol recently declared that since 2016 there have been 1,500 arrests of serious and organised criminals across the EU through the use of FRT. The debate is no longer whether to deny law enforcement the capabilities that can effectively protect our densely populated, digitally enabled and increasingly dangerous societies from an array of serious harms (EU to note). The debate to be had now is how the regulation of those capabilities should evolve so as to continue to shape and harness their use as a force for good, to the benefit of society, and to hold such use to account.

That particular debate requires common sense, balance and grounding in modern fact, untainted by old information, misinformation or indeed disinformation; to do otherwise would be akin to being… dystopian!

About the author

Tony Porter is Corsight AI’s Chief Privacy Officer and the former UK Surveillance Camera Commissioner. Corsight AI is a leading provider of facial recognition solutions for corporations, law enforcement and other government agencies.

DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are those of the author, and don’t necessarily reflect the views of Biometric Update.
