Stop waiting for society to wake up on biometrics. Here’s a plan to build trust
A new research paper finds that the capabilities of facial recognition and other biometric systems have vastly outpaced how law and society understand the technology.
Developers need to look to their own processes to fundamentally improve that reality, according to the report published last month on IEEE.org.
Authors of the lengthy paper have proposed five concepts that proponents need to address if they want to avoid development missteps.
The stakes may be higher than the paper assumes: A backlash to the technology, spurred by ignorance, fear and resentment, could have large negative impacts on governments and even businesses. Right now, facial recognition is closely associated with law enforcement, which is under public scrutiny unlike any it has experienced since the last days of the Vietnam War.
What is interesting is that the paper does not urge biometrics proponents to humanize the products by extolling their societal or personal benefits. That is the go-to advice for anyone pushing new technology, but it is not gaining traction because biometrics has no smiling Mac face, so to speak.
In fact, it has no warmth at all. Facial recognition in particular feels like a one-way transaction. One's likeness is taken, analyzed and stored, typically in the cloud, where seemingly everyone has access to it except the person behind the face.
The paper covers a great deal of ground, but zeroes in on how “the legal and societal scrutiny of the technologies utilizing automated decision systems seem to be insufficient.” Its authors lay out five “social and technological provisions” for developers to consider.
First, watch for and ruthlessly root out biases when selecting training data for algorithms. And view data sets the way medical researchers view treatment trials — the bigger the better.
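A first pass at that kind of bias check can be automated. The sketch below is illustrative only, assuming each training record carries a demographic label; the field names and the flagging threshold are not from the paper:

```python
from collections import Counter

# Hypothetical training records; the "group" label and its values
# are assumptions for illustration, not the paper's schema.
records = [
    {"id": 1, "group": "A"},
    {"id": 2, "group": "A"},
    {"id": 3, "group": "A"},
    {"id": 4, "group": "B"},
]

def representation_report(records, min_share=0.2):
    """Return each group's share of the training set and whether it
    falls below a minimum share -- a crude first check for sampling bias."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    return {g: (n / total, n / total < min_share) for g, n in counts.items()}

report = representation_report(records)
# Here group A holds 75% of the set and group B 25%; tightening
# min_share, or growing the data set, changes what gets flagged.
```

A real audit would slice by many attributes at once and measure per-group error rates, not just head counts, but even this crude tally catches the grossest sampling skews before training starts.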
Second, try to build public trust in facial recognition systems by finding the next higher level of development transparency. Better still would be independent insight into training data and algorithms. A check-and-balance strategy can, over time, lessen skepticism.
Third, remember that the best projects start with a definition of success and failure. If nothing else, this will tell developers when the job is done. In a highly related vein, the paper says proponents have to agree on “thresholds for acceptable accuracy.”
The authors go a step further, suggesting that the thresholds can be made legally binding, “as well as reviewed and validated periodically.”
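Periodic review of such a threshold is simple to mechanize. The following is a minimal sketch, assuming a binary match/no-match evaluation; the 0.95 figure and the function name are illustrative, not values from the paper:

```python
# Hypothetical agreed-upon accuracy floor; the paper suggests such
# thresholds could be made legally binding and reviewed periodically.
AGREED_THRESHOLD = 0.95

def audit(predictions, labels, threshold=AGREED_THRESHOLD):
    """Return the measured accuracy on a review set and whether it
    clears the agreed threshold; run at each review interval."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must align")
    correct = sum(p == l for p, l in zip(predictions, labels))
    accuracy = correct / len(labels)
    return accuracy, accuracy >= threshold

acc, passed = audit([1, 1, 0, 1], [1, 1, 0, 0])
# 3 of 4 predictions match the labels -> 0.75, below a 0.95 floor
```

In practice the review set would be large, refreshed each cycle, and stratified by demographic group so the threshold is met for everyone, not just on average.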
Next, provide training for systems personnel, educating them about key concepts in fairness, potential pitfalls and, just as important, how to solve novel problems in ways that balance product quality and business requirements.
Last, make vendor due diligence a contractual obligation. Vendors feeding systems into one's own products should follow the same steps, and they should be on the hook for fixing or updating systems to meet these requirements.