Can ‘human touch’ become part of the AI biometrics zeitgeist?

There is no avalanche of biometrics bills being passed around Washington, but there is a flurry of proposals from Capitol Hill and the White House. Among all the reports and discussion, one phrase stands out: human touch.
People are also talking about a human approach, human centricity and human alternatives.
Human alternatives is one of the key points in President Joe Biden’s proposed “Blueprint for an AI Bill of Rights,” a document written by the White House Office of Science and Technology Policy.
“You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.” That is the first sentence in the blueprint’s chapter on “Human Alternatives, Consideration, and Fallback.”
The blueprint identifies five principles intended to guide AI design, deployment and use. AI systems should be safe and effective, protect against algorithmic discrimination, preserve data privacy, and provide notice and explanation, in addition to the human alternatives.
Some AI activities and outcomes documented in the United States are “deeply harmful,” the blueprint’s authors write. Complaints are growing against industries – health care, social media and others – and against government, where the promise of catching more criminals has jurisdictions nationwide deploying biometric systems.
In reaction, regulatory and legislative remedies for biometrics that overstep are being pursued, if not always with vigor.
Yet not all of the efforts would require what the National Institute of Standards and Technology describes as socio-technical operations.
That would be people, in government and out, empowered to ride herd on biometric surveillance to make sure the algorithms deliver the benefit their makers intended. It also would be humans who, as described above, can step in on an automated transaction when requested or necessary.
Indeed, it is common to see such requirements written into agreements for law enforcement’s use of biometric surveillance. It is unknown how independent of the computer such a person would be, but the provisions are in ink.
A NIST spokesperson interviewed by a government-technology trade publication said the AI community needs to be realistic about the limits of human-programmed algorithms.
There is some optimism about the blueprint.
Marc Rotenberg, head of the nonprofit Center for AI and Digital Policy, reportedly called it a starting point that does not end the debate on how AI is implemented in the United States. Not a standing ovation, but people working to make AI a benefit for everyone have seen setbacks.
Officials at NIST, which has been working on an AI trust framework since July 2021, sounded just as cautious in an interview with trade publication FedScoop, describing even their own work as a small step toward giving people a reason to trust AI. The agency says its risk management framework for AI should be ready by early next year.
Article Topics
AI | bias | biometrics | data privacy | NIST | regulation | standards | surveillance | United States