Mobile game company Artie releases demographic bias detection for voice technologies

Social media mobile game developer Artie has released the Artie Bias Corpus (ABC), a speech dataset for detecting demographic bias in the speech recognition systems behind voice applications, VentureBeat reports.

The accompanying toolkit, combined with audio files and their associated transcripts, detects bias based on age, gender and accent, not only in speech-to-text systems but in other voice systems as well.

“We define demographic bias in speech technology as the gap in performance between demographic groups,” the company writes. “When a bias exists, it means that people from one demographic group get a worse experience relative to people in favored groups.”
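
That definition suggests a straightforward measurement: score each demographic group separately and compare. Below is a minimal sketch of that idea, using the open-source jiwer library to compute word error rate (WER); jiwer and the sample data are assumptions for illustration, not Artie's actual toolkit API.

```python
# A minimal sketch of measuring the performance gap between groups,
# using the open-source jiwer library for word error rate (WER).
# This is an illustration, not Artie's actual toolkit API.
from collections import defaultdict

import jiwer  # pip install jiwer


def wer_by_group(samples):
    """samples: iterable of (group, reference_transcript, asr_hypothesis)."""
    refs, hyps = defaultdict(list), defaultdict(list)
    for group, ref, hyp in samples:
        refs[group].append(ref)
        hyps[group].append(hyp)
    # WER per demographic group, over all of that group's utterances.
    return {g: jiwer.wer(refs[g], hyps[g]) for g in refs}


# Invented sample data: (group, what was said, what the recognizer heard).
samples = [
    ("female", "turn on the lights", "turn on the light"),
    ("female", "set a timer for ten minutes", "set a time for ten minutes"),
    ("male", "turn on the lights", "turn on the lights"),
    ("male", "set a timer for ten minutes", "set a timer for ten minutes"),
]

rates = wer_by_group(samples)
# By the definition above, the bias is the gap between group error rates.
gap = max(rates.values()) - min(rates.values())
print(rates, f"WER gap: {gap:.3f}")
```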

In April, the Algorithmic Justice League released “Voicing Erasure,” a project that looked into racial disparities in speech recognition algorithms developed by Apple, Amazon, Google, IBM and Microsoft. It found error rates were significantly higher for African American voices compared to white voices.

The dataset contains 2.4 hours of spoken English and is part of Mozilla’s Common Voice collection. It is meant only for testing, not for training, and it covers three gender classes, eight age ranges and 17 English accents.
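
As a hypothetical illustration of how that demographic metadata can be used, the sketch below slices a test set by gender, accent and age with pandas. The column names follow Mozilla Common Voice’s TSV conventions; the file name and filter values are assumptions.

```python
# A hypothetical sketch of slicing a test corpus by demographic metadata
# with pandas. The age/gender/accent columns follow Mozilla Common Voice's
# TSV conventions; the file name and filter values are assumptions.
import pandas as pd

clips = pd.read_csv("artie-bias-corpus.tsv", sep="\t")

# How many test clips fall into each gender/accent slice?
print(clips.groupby(["gender", "accent"]).size())

# Pull out one slice for a targeted evaluation, e.g. female speakers
# in their twenties with an Indian English accent.
subset = clips[
    (clips["gender"] == "female")
    & (clips["accent"] == "indian")
    & (clips["age"] == "twenties")
]
print(f"{len(subset)} clips in this demographic slice")
```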

A first experiment was run on Mozilla’s DeepSpeech models, which had previously been trained with a bias toward North American English, while a second analyzed gender bias in the Google and Amazon U.S. English models already used by millions daily. The first test confirmed a bias favoring U.S. and U.K. accents but found no gender-related effect. When gender bias was analyzed in Google and Amazon’s U.S. English models, Google’s algorithm performed worse on female voices.

“Fairness is one of our core AI principles, and we’re committed to making progress in this area,” a Google spokesperson told VentureBeat. “We’ve been working on the challenge of accurately recognizing variations of speech for several years, and will continue to do so. In the last year we’ve developed tools and data sets to help identify and carve out bias from machine learning models, and we offer these as open-source for the larger community.”

“As voice technology becomes more common, we discover how fragile it can be … In some cases, demographic bias can render a technology unusable for someone because of their demographic,” Josh Meyer, lead scientist at Artie and a research fellow at Mozilla, wrote in a blog post. “Even for well-resourced languages like English, state of the art speech recognizers cannot understand all native accents reliably, and they often understand men better than women … The solution is to face the problem, and work toward solutions.”

The Artie Bias Corpus and Artie Bias Toolkit are available for developers working to reduce demographic bias in voice technologies.
