AI cannot help humanity if it cannot help an individual: UK report
Whatever the ultimate impact of a report by UK experts on algorithmic bias, the document has already succeeded where many analyst reports have failed.
The new report, on bias in algorithmic decision making, comes from the government-funded Centre for Data Ethics and Innovation.
It takes the position that AI decision making must be ethical to be successful, and AI ethics must be viewed as impacting individual people — because it does. Thinking about AI ethics in terms of industries, regions, demographics and data sets is an easy out. It lets everyone involved spread harms across faceless multitudes.
Indeed, almost all previous examinations of bias in algorithms discuss industry responsibility, legal culpability and regulatory agendas.
Meta-level issues certainly demand clarity, but advances in AI are moving too fast for unfocused debates or time-wasting games of chicken between government and industry, according to the report.
To address the centrality of individuals, the report’s authors analyzed four areas of public life that have deep experience dealing with internal bias and that have varying degrees of involvement with AI: financial services, policing, local government and recruitment.
All are at significant risk of damaging failure if they are seen to make unfair decisions about individuals.
Algorithms have not only “amplified historic biases” but “even created new forms of bias or unfairness,” according to the report. UK leaders created the center to convene businesspeople, policy makers, experts in civil society and the public to “develop the right governance regime for data-driven technologies.”
The document offers numerous specific recommendations for interested parties in each sector as they seek a path that balances the legitimate demand for privacy with the equally legitimate need for some amount of data to operate efficiently and develop useful new products.
More broadly, this view of ethics cannot succeed if it is invoked only in a final product review, or even during sourcing. It has to suffuse every organization that holds a stake in seeing successful AI products reach the market.
The authors insist that the way to tackle biased algorithms in recruitment, for example, “must form part of, and be consistent with, the way we understand and tackle discrimination in recruitment more generally.”