Building a practical solution to ethics in AI code is possible, researchers find
Principles by themselves are like jockeys in colorful silks without a horse — interesting to behold but of limited use.
The current shower of ethics principles being written for artificial intelligence developers and applications like biometrics got a small team of corporate and academic researchers thinking. Are the principles worth the space they take up on a cloud server?
If not, could they be made truly useful? According to the researchers, yes, they can.
To test their theory, the team brought together 48 artificial intelligence and machine learning practitioners and conducted an iterative co-design process centered on fairness — one bite out of the larger apple of AI ethics.
In the report, they suggest inclusively crafted practical “checklists could provide organizational infrastructure for formalizing ad-hoc processes and empowering individual advocates.”
This is markedly different from how organizations address ethics in algorithms today.
Spurred by several years of media hand-wringing over the fairness and safety of artificial intelligence and machine learning, many public- and private-sector players have thrown together working groups and produced anodyne lists of ideas. The results issued by those committees are often no more useful than any other corporate big ideas unless they are accompanied by practical guidelines and processes, according to three researchers from Microsoft Research and one from Carnegie Mellon University.
Principles alone get ignored by developers pressured by ever-shortening product deadlines and ever-growing feature lists. Better, the report’s authors write, would be checklists with concrete actions to take when ethical questions arise. But here, too, the checklists must be practical.
They looked at checklists created by other players in artificial intelligence and found that “few appear to have been designed with active participation from practitioners.”
Feel-good and expeditious steps are “ethics washing,” according to the researchers. They define that as “a rhetorical commitment to addressing AI ethics issues that is unsupported with concrete actions.”
And it is not always a matter of a company casting about for the right fig leaf.
Executives at software firms, even those in the advanced arena of artificial intelligence, use checklists all the time in tactical ways. The difference is that those lists “are used to standardize known procedures for code review.” Getting ethics right in code is a new and changing goal.