Report says attention to “pipeline” has not helped diversity crisis in artificial intelligence industry
Women are severely underrepresented in the field of artificial intelligence, and visible minorities even more so, and fixing the “pipeline” from schools to industry will not solve the problem, according to new research by the AI Now Institute.
The report “Discriminating Systems: Gender, Race, and Power in AI” (PDF) by Sarah Myers West, Meredith Whittaker, and Kate Crawford shows that more than 80 percent of AI professors are men, and just 18 percent of authors presenting at leading AI conferences are women. They also found women make up a small minority of AI research staff at Facebook (15 percent) and Google (10 percent). Black people make up only 2.5 percent of Google’s workforce, and only 4 percent at Facebook and Microsoft, according to the report. There are no public statistics available for trans workers. Statistics about the diversity, or lack thereof, in workforces and datasets, along with results that indicate unequal performance for different groups, constitute a “diversity crisis,” according to the researchers.
In addition to the AI Now Institute and New York University (where the institute is based), Whittaker is associated with Google Open Research, and Crawford with Microsoft Research.
The researchers say AI systems that “use physical appearance as a proxy for character or interior states are deeply suspect,” and suggest that rather than delivering insights, these systems entrench bias.
They make eight recommendations for increasing workplace diversity, including publishing compensation levels for different demographics, setting pay and benefit equity goals, publishing transparency reports on harassment and discrimination, and changing hiring practices.
The report finds no substantial progress in industry diversity despite decades of “pipeline studies.”
“The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether,” the report authors write.
Four recommendations are made for addressing bias and discrimination in AI systems, starting with the need for transparency, and including rigorous testing, “wider social analysis of how AI is used in context,” and thorough risk assessments that consider whether certain systems should even be built.
The 33-page report extensively reviews the workplace conditions of the tech giants that dominate artificial intelligence development, and examines the worker-led initiatives and anti-diversity pushback that they have given rise to.
The lack of diversity in the tech industry more broadly, and its possible relation to bias, was recently discussed in a U.S. House Energy and Commerce subcommittee hearing. In a Forbes editorial, Ben Reuveni, co-founder of AI career development company Gloat, recently argued that companies should consider creating a position of Chief Bias Officer.