Transparency standards for AI piloted in the UK
In an encouraging development for anyone developing, deploying or living under biometric systems, the United Kingdom government has published a transparency standard for AI.
An office in the executive branch of the UK government has proposed rules for “algorithmic transparency” that would apply to government agencies and other public-sector bodies.
Developers of AI systems are expected to use the standard to be “meaningfully transparent” about how algorithms are used to support decisions.
The Central Digital and Data Office intends to run pilots among a limited number of public-sector bodies, collecting feedback along the way, according to a post by the office.
Few if any past digital revolutions have enjoyed (or suffered) the near-total lack of national guidelines that biometric recognition has experienced.
Standards matter because they can speed technology and product development and, in the case of transparency, at least begin to address the public’s justifiable skepticism of AI’s virtues.
The standard being piloted divides attributes into two tiers, providing definitions and a template for reporting information such as how data is used, how algorithms have been trained, and what data protection impact assessments have been performed.
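To make the two-tier idea concrete, a transparency record along these lines might be structured roughly as follows. This is a hypothetical sketch only: the field names and example values are illustrative and are not taken from the published standard.

```python
# Hypothetical sketch of a two-tier algorithmic transparency record.
# All field names and values below are illustrative assumptions,
# not the UK standard's actual template.

tier_1 = {  # short, non-technical summary for the general public
    "tool_name": "Claim-triage model (example)",
    "purpose": "Helps caseworkers prioritise claims for review",
    "role_in_decision": "Supports, but does not replace, human decisions",
}

tier_2 = {  # fuller detail for specialists and auditors
    "data_used": "Anonymised historical claim records",
    "training_process": "Supervised model, retrained quarterly",
    "impact_assessments": ["Data protection impact assessment (DPIA)"],
}

def transparency_record(tier_1: dict, tier_2: dict) -> dict:
    """Combine both tiers into one publishable record."""
    return {"tier_1": tier_1, "tier_2": tier_2}

record = transparency_record(tier_1, tier_2)
print(sorted(record))  # ['tier_1', 'tier_2']
```

The design point is simply that a brief public-facing summary and a detailed technical disclosure travel together in one published record.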
There are other efforts to create standards, but they are so piecemeal that they strain the term. The EdSAFE AI Alliance, for example, was formed this fall to bring transparency to facial recognition proctoring systems.
In the United States, where the business culture abhors government-created standards, politicians instead debate laws to accomplish the same thing, typically with less success than if the public and private sectors could agree on standards.