ACM adds to its list of principles on responsible algorithmic systems
The Association for Computing Machinery (ACM), the world’s largest computing society, has released a set of nine principles to encourage accurate and fair algorithmic decision-making, updating its 2017 statement. It also lists four recommendations for data processing to counter bias.
The ACM has also been outspoken in areas such as the dangers of facial recognition, and its U.S. Technology Policy Committee has even pushed for a ban.
The latest set of guidelines, the ‘Statement on Principles for Responsible Algorithmic Systems,’ co-authored by the association’s Europe and U.S. Technology Policy Committees, organizes its concerns into nine areas: Legitimacy and Competency (new in this update); Minimizing Harm; Security and Privacy; Transparency; Interpretability and Explainability; Maintainability; Contestability and Auditability; Accountability and Responsibility; and Limiting Environmental Impacts.
The Transparency section requires developers to document their efforts to detect biases in their systems. Developers must also consider the environmental impact of their systems to “ensure that their carbon emissions are reasonable given the degree of accuracy required by the context in which they are deployed.”
Public understanding of systems is emphasized: “Managers of algorithmic systems are encouraged to produce information regarding both the procedures that the employed algorithms follow (interpretability) and the specific decisions that they make (explainability). Explainability may be just as important as accuracy, especially in public policy contexts or any environment in which there are concerns about how algorithms could be skewed to benefit one group over another without acknowledgement.”
The recommendations for avoiding bias in systems and data processing are that system builders and operators should be held to the same decision-making standards as humans; that they should undertake impact assessments in advance; that audit trails should be used to achieve higher standards of transparency, accuracy and fairness (an area currently proving problematic in audits of the NYPD’s use of surveillance technologies); and that AI system developers should be held responsible for their decisions, whether or not algorithmic tools are used.
Numerous biometrics providers and other institutions have established principles for responsible algorithm development and use over the past few years.
Rome Call for AI Ethics gets new signatory
The University of Florida (UF) is the latest organization to sign the Rome Call for AI Ethics, reports Unite.ai. The call promotes a sense of responsibility in AI development so that technology serves rather than replaces humans; it was first signed in 2020 by the Pontifical Academy for Life, the Italian Ministry for Innovation, Microsoft, IBM and the Rome-based FAO.
UF is also the eighth university worldwide to join a group pushing for human-centered approaches to AI.