The duties and rights of business AI
The federal government’s technology standards organization, NIST, has proposed four principles for explainable AI.
An esoteric topic this is not. It would be untenable to ask a person or an organization to explain the why behind every decision, but a democratic society is based on the assumption that explanations can be had for all decisions if there is sufficient need.
Because generating such explanations would require no meaningful additional resources over time, it could be argued that on-demand explanations for AI decisions should be available to anyone with a sufficient need to know.
The first draft principle that NIST has proposed for public review suggests that AI deliver evidence or reasons for all outputs.
Next, a system's explanations must be understandable to people in every role that touches the system and its actions: from consumers and other end users all the way up to the CEO of the organization that wrote the algorithm. No single explanation will serve every role.
It is also fundamental that an “explanation correctly reflects the system’s process for generating” a decision.
Last, an algorithm should be used only “under conditions for which it was designed or when the system reaches a sufficient confidence in its output.” AI systems cannot be expected to succeed in a sink-or-swim environment any more than people can.
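NIST's draft does not prescribe how to enforce this "knowledge limits" principle, but one common realization is a confidence gate: the system answers only when its confidence clears a threshold, and otherwise abstains and defers to a human. The sketch below is a minimal illustration of that pattern; the threshold value, `Decision` type, and `decide` function are all hypothetical, not part of the NIST proposal.

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Hypothetical cutoff; in practice it would be set per application and validated.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Decision:
    label: Optional[str]   # None when the system abstains
    confidence: float
    abstained: bool

def decide(scores: Dict[str, float]) -> Decision:
    """Return the top-scoring prediction, or abstain when confidence
    is too low -- one way to honor a knowledge-limits principle.
    All names here are illustrative."""
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < CONFIDENCE_THRESHOLD:
        # Defer to a human reviewer instead of emitting a shaky answer.
        return Decision(label=None, confidence=confidence, abstained=True)
    return Decision(label=label, confidence=confidence, abstained=False)

print(decide({"approve": 0.92, "deny": 0.08}))  # confident: answers
print(decide({"approve": 0.55, "deny": 0.45}))  # uncertain: abstains
```

The design choice here is that abstaining is itself an explainable output: the system can report that it declined to decide and why, rather than silently producing a low-confidence answer.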
The principles spurred a LinkedIn post by an AI practitioner, Eric Hess, that points out the folly of a business world in which “because the computer told me so” would be an acceptable explanation at any level.
If that happens, writes Hess, “you either have a training problem, or a tool problem. Or perhaps both.”