NIST kicks off public discussion about creating trust in AI
A U.S. federal agency is going back to basics to address AI’s fundamental weakness: It is distrusted by most of the public.
The National Institute of Standards and Technology — the same organization that raised waves last year with its bias rankings for face biometrics software — has published a draft document designed to spur discussion about engendering trust in AI systems.
It is not a one-and-done prospect either. Regardless of how uncomfortable people are with AI today, according to NIST’s new report, trust “will become even more important the less we know about our technology.”
And the more powerful that AI gets, the harder it will be for people to know how it works, and the more they will reject it.
Counterintuitively for a document about public understanding, NIST's report is dense and tough for the average person to follow. It uses math to describe, for example, probabilistic assumptions in a matter, trust, that seems as unweighable as love.
That is not so surprising given that the agency’s conversation starter was authored by a NIST psychologist and computer scientist (Brian Stanton and Ted Jensen, respectively).
The pair write about things that lead to trust and distrust, which they point out are not opposite concepts.
This was famously illustrated during the final years of the Soviet Union, when that nation and the United States were negotiating once-unthinkable denuclearization.
In talks, President Ronald Reagan repeated the phrase "trust but verify" so often that Mikhail Gorbachev, the then-Soviet leader, broke character in astonishment when the phrase came up during a joint press conference called to announce progress.
Trust in AI is not as consequential (yet), but if there is to be progress toward even the basic societal benefits promised by algorithms, it cannot be taken for granted any longer.