Scandalous commercial AI decisions might be minimized with basic detective work
With AI algorithms making more, and bigger, decisions without human guidance, the risk of scandalous machine actions joining human misconduct is growing. There is room for optimism, however.
Setting aside for another day the possibility of better training humans, or at least catching the crooked ones before they cost employers and investors billions in losses and penalties, it may be realistic to hope that untoward algorithmic decisions can be meaningfully reduced.
New research published in the journal Royal Society Open Science points out that it is impossible for people to specify every strategy an algorithm might adopt, which means not every shady option can be prohibited in advance. And software, having no conscience, is perfectly willing to take dark paths to achieve its goals.
The paper’s authors cite what they call the Unethical Optimization Principle: “If an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk.”
But what if programmers only have to estimate, not obviate, the problem?
As an announcement from the Ecole Polytechnique Fédérale de Lausanne put it, it is possible to “estimate the proportion of unethical strategies, and the distribution of the most profitable strategies.”
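To see why estimation might work, consider a minimal simulation sketch (an illustration, not the paper’s method). It assumes a hypothetical pool of strategies in which a small fraction are unethical and, because they ignore constraints honest strategies must respect, draw returns from a heavier-tailed distribution. An optimizer that simply picks top performers by return will then select unethical strategies far more often than their base rate suggests:

```python
import random

random.seed(0)

N = 100_000          # number of candidate strategies (assumed)
P_UNETHICAL = 0.02   # assumed base rate of unethical strategies

strategies = []
for _ in range(N):
    unethical = random.random() < P_UNETHICAL
    # Assumption: unethical strategies have heavier-tailed returns
    # (wider spread), since they are unconstrained by ethics.
    sigma = 1.5 if unethical else 1.0
    ret = random.gauss(0.0, sigma)
    strategies.append((ret, unethical))

# The optimizer blindly picks the top performers by raw return.
top = sorted(strategies, key=lambda s: s[0], reverse=True)[:100]
share_unethical = sum(u for _, u in top) / len(top)

print(f"base rate of unethical strategies: {P_UNETHICAL:.2%}")
print(f"unethical share among top 100:     {share_unethical:.2%}")
```

Under these assumptions, the unethical share among the top strategies comes out well above the 2% base rate, which is the disproportion the principle describes; knowing the distributions lets an auditor estimate that share before deployment.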
Any banking regulator or police detective knows to look for a motive, and, indeed, the paper’s authors propose focusing on the return-maximizing strategies, the ones likely to bring in the most money. That is what the algorithm will pursue, after all.
Following that script will turn up the actions most likely to result in financial and reputational losses. It will also provide new insights useful for finding or defusing other ticking time bombs, according to an article in SingularityHub, part of Singularity University.