See no evil: Bad AI strategy, especially when alternative plans exist
A U.S. federal watchdog agency has defined four principles that its researchers say can build accountability into the creation and use of AI systems.
The work is aimed primarily at government AI development and products, but if it proves useful, it could also guide private industry, where attitudes toward bias can be as opaque as black-box AI itself.
A recent New York Times article describes a disturbing environment there, peeking behind the curtain of the obligatory, facile assurances that many in private-sector AI recite. At least some CEOs believe, incorrectly, that data is like sand: free of even original sin.
The first principle concerns data. Data is changed and skewed from the moment it is collected. Strong standards for representativeness, quality and reliability provide a more solid foundation for every other step in a product's development, deployment and use.
Real thought has to go into creating datasets. Is the object to sell ties to white cisgender males? Or is it to sell products to anyone with income and a desire? Not all data is the same.
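To make the representativeness point concrete, here is a minimal sketch of how a team might compare a dataset's demographic mix against the population it is meant to represent. The field names, groups and target shares are all hypothetical illustrations, not anything prescribed by the GAO:

```python
from collections import Counter

def representation_gap(records, field, target_shares):
    """Return, per group, how far the observed share in `records`
    deviates from the target population share (positive = over-represented)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, target in target_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gaps[group] = observed - target
    return gaps

# A toy dataset skewed heavily toward one group.
records = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
gaps = representation_gap(records, "group", {"A": 0.5, "B": 0.5})
# Group A is over-represented by 0.3; group B is under-represented by 0.3.
```

A check this simple will not catch subtler skews, but it illustrates the kind of explicit, testable standard the GAO's data principle calls for.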
The second principle is monitoring. According to the GAO, an algorithm must be shown to be reliable and relevant not once but repeatedly.
“One and done” is a lucky outcome, not a realistic expectation. Some algorithmic operations will need the same kind of ongoing oversight and management that many critical human tasks require, such as running a power plant.
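The ongoing oversight described above can be as simple as tracking a deployed model's accuracy over successive monitoring windows and flagging any slide past a tolerance. This is a minimal sketch with invented numbers, not the GAO's prescribed method:

```python
def drift_alerts(baseline_accuracy, window_accuracies, tolerance=0.05):
    """Return the indices of monitoring windows whose accuracy has
    fallen more than `tolerance` below the accuracy at deployment."""
    return [i for i, acc in enumerate(window_accuracies)
            if baseline_accuracy - acc > tolerance]

# Accuracy measured at deployment, then over four later windows.
alerts = drift_alerts(0.92, [0.91, 0.90, 0.85, 0.84])
# Windows 2 and 3 have drifted beyond the 5-point tolerance.
```

In practice a monitoring regime would track many metrics, broken out by subgroup, but the shape is the same: a baseline, repeated measurement, and an alert when the two diverge.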
The third principle, governance, is closely related. The GAO recommends governance at both the system and organizational levels as the best way of demonstrating accountability. The researchers define it as creating the processes to manage, operate and oversee implementation.
The fourth principle involves performance: does the output match the objectives?
It is standard B-school stuff, but some agency heads and CEOs are buying into the Silicon Valley hype instead. Thinking a content management system will revolutionize a company is one thing; if the CEO is wrong, money has been wasted.
An algorithm that routinely gives a health care company’s Black patients lower priority than its white patients is different.
The New York Times story looking at the ongoing campaign to tease bias out of AI brought up that example. About 18 months ago, New York State regulators investigated UnitedHealth Group for allegedly using AI that pushed aside Black patients, “even when the white patients were healthier.”
According to the Times, it is not known what conclusion, if any, has been reached in the UnitedHealth investigation.
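Audits like New York's often begin with a simple disparity measurement. The sketch below uses entirely made-up toy outcomes, not figures from the UnitedHealth investigation, to show the basic arithmetic of comparing prioritization rates across groups:

```python
def selection_rates(decisions):
    """Per-group rate at which patients were prioritized for extra care.
    `decisions` maps a group name to a list of 0/1 outcomes."""
    return {g: sum(v) / len(v) for g, v in decisions.items()}

# Toy outcomes: 1 = prioritized for extra care, 0 = not.
rates = selection_rates({
    "white": [1, 1, 1, 0, 1, 1, 0, 1],  # 6 of 8 prioritized
    "black": [1, 0, 0, 1, 0, 0, 0, 0],  # 2 of 8 prioritized
})
disparity = rates["white"] - rates["black"]  # 0.5 gap in selection rate
```

A gap this size is only a starting point for an audit; investigators would still have to ask whether health status, not group membership, explains the difference, which is exactly what the "even when the white patients were healthier" allegation denies.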
The article opens with a focus on a comparatively young AI innovator named Liz O’Sullivan, who worked at a startup trying to automatically eliminate explicit images on the Internet. The firm contracted the tagging out to workers in India, and what came back included images of same-sex couples, not same-sex pornography, tagged as offensive.
She’s since been named CEO of Parity, which is searching for tools to find and remove bias in AI. (Parity is not alone.)
In meetings with executives and engineers over the last few years, O’Sullivan told the Times, she heard about “fairness through unawareness”: the idea that because data is neither a tool for good nor for bad, one should not look closely at it, since getting involved with the data is what introduces bias.
That is like saying all frozen ponds are the same: if the ice looks solid, get out there and skate until it proves otherwise.