Algorithmic Justice League highlights automatic speech recognition bias report
The Algorithmic Justice League has released a spoken-word and visual project called “Voicing Erasure” to protest the demographic disparities, or “bias,” identified in popular speech recognition systems on the market, following a recent study of algorithmic bias in the technology, VentureBeat reports.
The Algorithmic Justice League was founded by MIT researcher Joy Buolamwini. The study’s findings were outlined in the report “Racial disparities in automated speech recognition,” published last week in the Proceedings of the National Academy of Sciences.
The report highlights how automatic speech recognition systems developed by Apple, Amazon, Google, IBM, and Microsoft have, on average, a higher word error rate for African-American voices (35 percent) than for white voices (19 percent).
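Word error rate, the metric behind those figures, counts the substitutions, insertions, and deletions needed to turn a system’s transcript into the reference transcript, divided by the number of reference words. A minimal sketch of the computation (the sample sentences below are invented for illustration, not taken from the study):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution ("hello" -> "allo") and one deletion ("to"):
print(wer("she said hello to me", "she said allo me"))  # 2 errors / 5 words = 0.4
```

A 35 percent word error rate thus means roughly one in three words is transcribed wrongly, dropped, or spuriously added relative to what was actually said.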
Each system tested transcribed 20 hours of recorded interviews with 42 white speakers and 73 African-American speakers. The group used voice data from Humboldt County and Sacramento, California, focusing on African-American Vernacular English (AAVE). The bias likely stems from a lack of training audio from African-American speakers, which is why the researchers emphasized the importance of investing in inclusive data collection.
“Such an effort, we believe, should entail not only better collection of data on AAVE speech but also better collection of data on other nonstandard varieties of English, whose speakers may similarly be burdened by poor ASR performance — including those with regional and nonnative-English accents,” the report reads. “We also believe developers of speech recognition tools in industry and academia should regularly assess and publicly report their progress along this dimension.”
The visual project was put together by seven women, including former White House CTO Megan Smith; Race After Technology author Ruha Benjamin; Design Justice author Sasha Costanza-Chock; and Kimberlé Crenshaw, a law professor at Columbia Law School and UCLA. The project notes that The New York Times’ coverage of the bias report cites male experts but not the report’s lead author, Allison Koenecke.
Between 2018 and 2019, Buolamwini and a number of research partners investigated racial disparities in the performance of popular facial recognition and analysis systems. The team conducted several audits of facial recognition bias, which are now frequently cited by lawmakers and activists.
Google and IBM Watson have committed to addressing this problem in their systems.