Thoughts on AI ethics getting concrete and useful

By now, no one creating algorithms can be ignorant of the AI ethics debate. Users are a different story so far, but makers know they have a responsibility to balance risks against benefits in face biometrics systems, for example.
A new paper out of the United Kingdom should be of interest. It is a solid accounting of tools that makers — and end users — can employ to better ensure algorithms are ethically sourced, developed and deployed.
Much of the research looks at toolkits for everything from fairness and audits to impact assessments and design. It breaks down who in an organization or value chain typically uses a tool, who acts on the results, at what stage in the production of biometrics a tool is employed, and other details.
The exhaustive work was done by a pair of researchers at the University of Southampton.
Lofty conceptual discussions in the AI industry that dealt with data — especially big data — and data protection from 2017 to 2020 have graduated to practical debates focused on models and algorithms. Data remains a critical consideration for those weighing ethics; it just has company in 2021.
There were some exceptions to airy, philosophical corporate confabs, but the results have been very mixed. Most government work to date has focused on principles.
Of the 39 tools judged “concrete” and “practical” (possibly the largest set of AI ethics tools collected for analysis), 36 were internal self-assessment documents. And, for the most part, nothing exists to prompt publication of the tools’ results.
The authors, comparing these tools to those used in other industries and roles, such as environmental impact documents, found all of that alarming.
Only IEEE standards required outside verification, they found, and just two tools involved public registers to force transparency.
There must be far more room for ethics reviews and assessments from external sources to avoid groupthink and inordinate revenue pressure, which can harm biometric systems development and the larger society.
At the same time, tools must be inclusive. Little opportunity exists for end users and consumers, among others, to participate in audits and assessments.
This might be the least surprising finding, in that the technology industry — from funding to updates and upgrades — typically is a vendor-out affair. Feedback historically has been met with a dismissal: “You’re just a user.”