Paravision answers face biometrics ethics concerns with AI Principles and new chief advisor
Paravision has published a set of principles to guide the ethical development of face biometrics and other technologies related to artificial intelligence. To help implement its vision for ethical biometric development, the company has also appointed Elizabeth M. Adams as its chief AI ethics advisor.
Adams currently serves as a Race & Technology Fellow at Stanford University’s Center for Comparative Studies in Race and Ethnicity in partnership with the Institute of Human-Centered Artificial Intelligence and the Digital Civil Society Lab. She is also actively participating in efforts to improve governance and policy for AI systems used by the city of Minneapolis.
During a 20-year career in technology, Adams has led initiatives for Fortune 100 companies, the U.S. Department of Defense and the Defense Intelligence Agency. Her areas of expertise include diversity and inclusion in AI, including demographic differences in facial recognition performance, surveillance, predictive analytics, and children’s rights.
“Paravision has shown a deep desire to pursue the ethical development and use of world-class face recognition and AI-based computer vision technologies,” states Adams in the press release. “I’m excited to collaborate with the team and guide them to leadership in taking an ethical, inclusive, and thoughtful approach for this critical and challenging technology.”
“Face recognition technology has the potential to improve our lives in profound ways, but it must be developed and deployed with the right intentions and safeguards,” comments Doug Aley, Paravision’s CEO. “Elizabeth’s leadership, her expertise in addressing AI racial bias and her vision for realizing a better future with AI will be an invaluable resource for Paravision.”
The company says in the announcement that although it is already working to reduce bias in facial recognition, Adams’ role will include sensitizing the Paravision workforce to ethical issues and leading its next steps towards ensuring inclusion. Along with the ‘Ethical Tech Design’ process, Paravision plans to integrate an ethical workflow into its product development process, partner engagement and solution deployment.
“Elizabeth has been at the forefront of thinking about accountable AI, and any organization would be fortunate to have her lead efforts to craft a better future for computer vision,” said Daniel E. Ho, Associate Director, Stanford Institute for Human-Centered Artificial Intelligence (HAI) and William Benjamin Scott and Luna M. Scott Professor of Law at Stanford University.
Paravision’s AI Principles commit the company to ethical training and conscientious sales of its biometric technology. The commitment to ethical training will be carried out through balanced training data, obtaining all necessary data rights, and heavy investment in benchmarking. The FTC has ordered Paravision to delete legacy algorithms that were trained with data the agency ruled it did not have rights to.
The conscientious sales commitment includes vetting potential partners and customers, selling only AI models that meet a high standard of quality, and avoiding countries identified as rights violators by the U.S. State Department and human rights groups. It also involves limiting distribution to law enforcement, defense and intelligence agencies, with Paravision specifically committing to disallow use of its technology in lethal autonomous weapons systems (LAWS), and ensuring a baseline level of accuracy for different use cases.