Efforts to understand impact of AI on society put pressure on biometrics industry to sort out priorities, role
Companies involved in face biometrics and other artificial intelligence applications have not come to a consensus on what ethical principles to prioritize, which may cause problems for them as policymakers move to set regulations, according to a new report from EY. Facial recognition check-ins for venues such as airports, hotels and banks, and law enforcement surveillance, including the use of face biometrics, are two of a dozen specific use cases considered in the study.
The report, ‘Bridging AI’s trust gaps,’ developed by EY in collaboration with The Future Society, suggests that companies developing and providing AI technologies are misaligned with policymakers, which is creating new risks for them. Third parties may have a role to play in bridging the trust gap, such as with an equivalent to ‘organic’ or ‘fairtrade’ labels, EY argues.
For biometric facial recognition, ‘fairness and avoiding bias’ is the top priority for policymakers, followed by ‘privacy and data rights’ and ‘transparency.’ Among companies, privacy and data rights tops the list followed by ‘safety and security,’ and then transparency.
Home virtual voice assistants are covered extensively in the report, though no explicit mention is made of voice recognition, or any biometric modality besides facial recognition.
Both groups believe a multi-stakeholder approach to governance is necessary, but while 38 percent of those in industry believe industry will lead such a framework, only 6 percent of policymakers agree. Just over two-thirds (69 percent) of companies say regulators understand the complexities of AI technologies and business challenges, while roughly the same share of policymakers (66 percent) say they do not.
“As AI scales up in new applications, policymakers and companies must work together to mitigate new market and legal risks,” EY Global Markets Digital and Business Disruption Leader Gil Forer comments. “Cross-collaboration will help these groups understand how emerging ethical principles will influence AI regulations and will aid policymakers in enacting decisions that are nuanced and realistic.”
The UK Information Commissioner’s Office (ICO) has launched a new guidance document on AI and data protection to help technology and compliance specialists navigate the regulatory landscape, according to a blog post.
The guidance provides information and recommendations regarding best practices and technical benchmarks for organizations to adopt to minimize risks associated with AI. It explains data protection impact assessments, defines the roles of controller and processor, and offers guidance on how to ensure AI systems are lawful, fair and transparent.
The guidance also details how GDPR applies to biometric data and when it falls into the special category for sensitive information. The data security implications of reformatting or encrypting biometric data, local inferencing, and privacy-preserving query architectures are discussed as well.
McGill University has received a $2 million gift to establish a Chair in its Department of Philosophy to research and analyze the ethical and wider social implications of AI and other technologies.
The new Stephen Jarislowsky Chair in Technology and Human Nature, named for the philanthropist donating the funds, will consider everything from disparities in facial recognition performance between demographics to the trolley problem, which has become a practical question with the advent of self-driving cars, the McGill Reporter writes.
The Chair will work closely with the Yan P. Lin Centre for Freedom and Global Orders in the Ancient and Modern Worlds, according to the report.
“The Lin Centre is looking forward to working with the Chair and providing the institutional infrastructure and apparatus that will allow collaboration across departments and faculties,” says Jacob Levy, director of the Lin Centre and Tomlinson Professor of Political Theory. “We would like to see McGill become a place where scholars who want real training on the technological side of things and on the philosophical or other liberal arts side of things could get those together.”