Ethics guidelines for trustworthy AI published by European Commission
The European Commission’s High-Level Expert Group on AI has published its Ethics Guidelines for Trustworthy AI, which sets out seven key requirements to ensure that artificial intelligence systems are lawful, ethical, and robust from both a technical and social perspective, and warns specifically of critical concerns about biometric identification.
The 41-page document was developed from the first draft guidelines published in December 2018 and more than 500 comments received during an open consultation. It presents chapters on the foundations of trustworthy AI, realizing trustworthy AI, and assessing trustworthy AI, which set out a framework encompassing the ethical principles of respect for human autonomy, prevention of harm, fairness, and explicability, along with the seven key requirements.
In a section on “(e)xamples of critical concerns raised by AI,” the identification and tracking of individuals is specifically addressed.
“Noteworthy examples of a scalable AI identification technology are face recognition and other involuntary methods of identification using biometric data (i.e. lie detection, personality assessment through micro expressions, and automatic voice detection),” the report authors write. “Identification of individuals is sometimes the desirable outcome, aligned with ethical principles (for example in detecting fraud, money laundering, or terrorist financing). However, automatic identification raises strong concerns of both a legal and ethical nature, as it may have an unexpected impact on many psychological and sociocultural levels.”
Clearly defined conditions for if, when, and how automated identification with AI should be used are recommended within the guidelines, along with differentiating between identifying and tracking individuals, and between targeted and mass surveillance. The details of consent, including for the use of "anonymous" personal data, must also be considered.
The seven requirements are human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. Special attention should also be given to protecting vulnerable people, possibly including disabled people, children, and the elderly, according to the guidelines.
The framework also includes a pilot version of a "Trustworthy AI Assessment List," which breaks each key requirement down into several questions that AI practitioners can tailor to their specific use cases for self-assessment. The European Commission plans to launch a pilot program this summer for stakeholders to provide practical feedback on the assessment list, and has launched a forum for the exchange of best practices. The Expert Group will review the assessment list in early 2020. The EC is also inviting businesses to join the European AI Alliance to review relevant information and next steps.