Face recognition systems need to instill trust, but they do not: NIST researchers
Not enough is being done to engender trust in decisions made by facial recognition and biometrics systems, according to a pair of U.S. government researchers.
Even though experts are routinely enlisted to explain to stakeholders, including the public, how these algorithms work, there are far fewer explanations for why software made a particular decision, write Jonathon Phillips and Mark Przybocki of the National Institute of Standards and Technology (NIST) in an arXiv paper.
It is unusual for anyone in face recognition to even discuss this, according to the authors.
There is already ample societal skepticism and distrust of artificial intelligence generally and of face recognition specifically. If nothing else, explainability is going to be a key factor in the acceptance of these technologies as evidence in courts.
Solving this problem, the authors argue, will mean treating algorithms as experts. That is to say, algorithms need to defend their judgments just as human experts do. To make that possible, the creators of these systems have to hold to four principles.
The first is, simply, to explain. Systems must “supply evidence, support or reasoning for each decision.”
The second is to make decision processes interpretable. People have to be able to understand decisions and to then perform a task based on what they have understood.
Third, explanations have to be accurate. The software has to be able to demonstrate how it came to a decision, whether the decision was right or wrong.
Last is making sure an algorithm is operating within its knowledge limits. Systems have to alert humans to tasks that are outside their scope of training.
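The four principles above can be pictured in code. The sketch below is purely illustrative: the class names, the similarity score, and the pose-angle "knowledge limit" are all hypothetical stand-ins for a real recognizer, chosen only to show how a decision could be packaged with its evidence, an interpretable rationale, a confidence measure, and an in-scope flag.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """A match decision bundled with its supporting explanation."""
    is_match: bool      # the decision itself
    evidence: str       # principle 1: evidence/reasoning for the decision
    confidence: float   # principle 3: how firmly the evidence supports it
    in_scope: bool      # principle 4: whether the input lay within training limits

class ExplainableMatcher:
    """Hypothetical wrapper illustrating the four principles.
    The threshold comparison stands in for a real face recognizer."""

    def __init__(self, threshold: float = 0.8, max_pose_degrees: float = 30.0):
        self.threshold = threshold
        self.max_pose_degrees = max_pose_degrees  # assumed knowledge limit

    def compare(self, similarity: float, pose_degrees: float) -> Decision:
        # Principle 4: flag inputs outside the system's training conditions.
        in_scope = abs(pose_degrees) <= self.max_pose_degrees
        is_match = similarity >= self.threshold
        # Principles 1 and 2: evidence stated in terms a human can act on.
        evidence = (
            f"similarity {similarity:.2f} vs. threshold {self.threshold:.2f}; "
            f"pose {pose_degrees:.0f} deg "
            f"({'within' if in_scope else 'outside'} the "
            f"{self.max_pose_degrees:.0f}-deg training range)"
        )
        # Principle 3: confidence scales with distance from the decision boundary.
        confidence = min(1.0, abs(similarity - self.threshold) / (1.0 - self.threshold))
        return Decision(is_match, evidence, confidence, in_scope)
```

In this sketch, a high-similarity pair photographed at an extreme pose would still be reported as a match, but the `in_scope` flag would tell a human reviewer that the system is operating outside its training conditions, exactly the alert the fourth principle calls for.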