Fine words from AI vendors, but actions on ethics are scarce
Can altruism be profitable for the AI-industrial complex? Maybe, but the board meeting on that topic has been pushed to the farthest reaches of TBD.
Just watch the VentureBeat interview with Margaret Mitchell, half of the ethics duo that Google fired after someone used Bing to look up the word ethics.
Mitchell stands in a room by herself talking about how a theoretical company, maybe its parent organization rhymes with Falphabet, too often sees ethical AI as a last box to tick before pushing a product out the door.
As co-lead (with also-fired Timnit Gebru) of Google’s Ethical AI unit, Mitchell contributed to praiseworthy products. So while there was clearly some acceptance of the idea that ethics by design can result in good products, that warm glow was not enterprise-wide.
Or, in her words, some executives (not necessarily at Google) still regard ethics by design as “a policing action, a block to (product) launch.”
In the interview, Mitchell is described as a researcher at large, which might be a little like a librarian at large. She says organizations have approached her, possibly in a consulting role.
That she is not too swamped with offers to answer a journalist’s question is disheartening.
A CNBC article on the topic quotes an executive from Sony and one from Norwegian telco Telenor genuflecting at the symbolism of trust.
Telenor’s vice president of research Ieva Martinkenaite reportedly glibbed, ethical AI is “responsible business in the making.”
Hiroaki Kitano, CEO of Sony’s Computer Science Laboratories, apparently did not have anything as marketing-ready to say on the topic, but according to CNBC, Kitano said trusted algorithms could give companies an advantage in the market.
Separately, facial recognition software vendor Corsight AI has issued a white paper (summarized here) saying that biometrics developers should work only on software that benefits society. By working closely with clients, developers can help suss out the legitimacy of projects and help build in appropriate safeguards.