Academics, Oosto propose facial recognition, AI ethics guidance in new forum
A group of artificial intelligence ethicists and stakeholders took a step towards establishing a forum to advance AI guidelines and regulations at a conference on ethical vision AI hosted by Shlomit Yanisky-Ravid, a visiting professor at Fordham University Law School.
At the recent conference, held under the title ‘Ethical Vision Artificial Intelligence: Creating an Effective AI Compliance Framework,’ Professor Yanisky-Ravid spoke about the rapid adoption of AI, which includes sensitive applications like law enforcement use of facial recognition, in the context of a lack of regulation.
“Our goal is to fill the existing gap resulting from the lack of U.S. laws and regulations relating to AI systems,” said Professor Yanisky-Ravid. “It also aims to cultivate dialogue that is currently lacking between policy makers and private industry by building bridges of trust between these entities to foster better understanding of various perspectives. We share the same goals in establishing ethical-legal principles, guidelines and norms. These principles should be based upon fairness, equality, privacy, responsibility, accountability, transparency and accuracy of AI systems.”
Professor Carole Basri, chief advisor of the Association of Corporate in-House Counsel Program, addressed the challenge of creating an AI framework that is both ethical and effective, proposing several ways for companies to self-regulate under government oversight of machine vision, biometrics, and facial recognition.
Industry was represented by Oosto Chief Marketing Officer Dean Nicolls, who presented a scale ranking use cases by their degree of sensitivity, with users unlocking their mobile phones on one end and mass surveillance based on full-population biometric databases on the other. In the middle of the ‘scale of sensitivity’ are police applications for alerts based on limited watchlists of wanted or missing individuals.
The scale is further divided into ‘innocent,’ ‘sensitive,’ ‘questionably legal,’ and ‘human rights violation’ categories of uses.
“The media’s focus on law enforcement’s use of facial recognition and the wrongful arrests resulting from its application have cast a negative perception of facial recognition technology — even though these examples represent a small fraction of the total use cases in production,” Nicolls said.