Ethical AI and biometrics move forward. More industry, academia effort needed
Businesses and nonprofits have recently unveiled a series of initiatives reflecting different approaches to and perspectives on artificial intelligence and its relationship to biometrics.
From a slow awakening to ethical responsibilities by industry and researchers witnessed at this year’s Computer Vision and Pattern Recognition Conference, to the launch of an AI ethics advisory board at Northeastern University, it seems the ethical implications of AI and biometrics are getting due consideration.
And yet, the road toward ethical, inclusive AI is still long, according to a Stanford University professor. Fortunately, new generations in the industry are being provided with the tools to help research move in this direction, with the Mark Cuban Foundation recently announcing two high school AI bootcamps in Pittsburgh.
Computer vision community gets slow start
An analysis by technology-focused publication Protocol suggests the computer vision sector is slowly starting to take more seriously the ethical implications inherent in AI deployment, but more work is needed.
The article, written by journalist Kate Kaye, supports these claims by exploring trends that emerged during the 2022 Computer Vision and Pattern Recognition Conference.
For instance, conference organizers created updated ethics guidelines outlining some negative impacts of computer vision.
At the same time, while the conference encouraged researchers to include an impact assessment in their papers, it did not require one, even in published papers available for viewing outside the conference review process.
Additionally, Kaye’s article argues that while academics have traditionally been well aware of the real-world implications of their research, they have so far shied away from publicly considering the ethical and human rights impacts of the AI they develop.
“Dismissive attitudes toward ethical considerations can hinder business goals to operationalize ethics principles promised in splashy mission statements and press releases,” she writes.
Kaye argues that the majority of tutorials, workshops, and papers presented at the conference made little mention of ethical considerations, hinting that more work is needed to bring awareness to the topic.
Northeastern launches AI ethics advisory board
AI insiders from Northeastern University have created a new AI ethics advisory board to oversee the expansion of AI apps within the school and beyond.
The group was founded by Cansu Canca, creator and director of the Institute for Experiential AI at Northeastern. Canca will oversee more than 40 multidisciplinary experts from inside and outside the university.
The board will act as a consultancy for companies dealing with AI ethical questions, creating smaller subcommittees to address specific queries.
“We can help [companies] understand the issues they are facing and figure out the problems that they need to solve through a proper knowledge exchange,” Canca says.
Stanford professor looks at inclusive design for AI
A Stanford University professor is exploring the importance of inclusive design in AI apps.
Londa Schiebinger, the John L. Hinds Professor in the History of Science at Stanford, is also the founding director of Gendered Innovations in Science, Health & Medicine, Engineering and Environment.
Schiebinger is also part of the teaching team for Innovations in Inclusive Design, a course offered by Stanford’s design school, known as d.school, focusing on asking students to analyze technologies for inclusivity.
In an interview on Stanford University’s Human-Centered AI site, Schiebinger highlighted the importance of inclusive design through the use of intersectional tools.
“If you don’t have inclusive design, you’re going to reaffirm, amplify and harden unconscious biases,” she says.
Schiebinger also gave the example of voice biometric systems, and how demographic and gender biases affect them.
“We know that voice assistants that default to a female persona are subjected to harassment, and they again reinforce the stereotype that assistants are female,” she explained.
“There’s also a huge challenge with voice assistants misunderstanding African American vernacular or people who speak English with an accent.”
At the same time, Schiebinger says there are already good examples of inclusive design in biometric systems. Specifically, she cites the well-known paper ‘Gender Shades,’ in which researchers found that women’s faces were not recognized as accurately as men’s faces, and darker-skinned people were not recognized as easily as those with lighter skin.
“They did the intersectional analysis and found that Black women were not seen 35 percent of the time,” Schiebinger explains.
“Using what I call ‘intersectional innovation,’ they created a new dataset using parliamentary members from Africa and Europe and built an excellent, more inclusive database for Blacks, whites, men, and women.”
Schiebinger argues that inclusive design requires researchers and companies to be able to control the underlying database.
“If you’re doing natural language processing and using the corpus of the English language found online, then you’re going to get the biases that humans have put into that data. There are databases we can control and make work for everybody, but for databases we can’t control, we need other tools, so the algorithm does not return biased results.”
Pittsburgh to host AI bootcamps
Self-driving vendor Argo AI and The Readiness Institute at Penn State will each host a Mark Cuban Foundation AI bootcamp for high school students next fall, with a focus on those in underserved communities.
The events will be free for students between grades 9 and 12 and will introduce them to beginner AI concepts and skills.
In particular, the bootcamps aim to teach participants what artificial intelligence is, where they already interact with AI in their lives, such as smart home assistants and facial recognition, and the ethical implications of AI systems.