Microsoft working on automatic bias-detection tool for AI engineers
Responding to growing recognition that bias can be unintentionally embedded in software, Microsoft is developing a tool to automatically detect bias in AI algorithms, MIT Technology Review reports.
Research has demonstrated drastically different accuracy rates for leading facial recognition algorithms when applied to different populations. With algorithms now guiding criminal sentencing and flagging individuals as potential security threats, a U.S. House subcommittee has examined the implications of the issue for government adoption of AI.
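Neither Microsoft nor the researchers have published their evaluation code, but as a rough illustration of what "differing accuracy rates across populations" means in practice, the following sketch (in Python, with made-up data and a hypothetical accuracy_by_group helper) computes per-group accuracy and the gap between the best- and worst-served groups.

```python
# Minimal sketch (not Microsoft's tool): measure how a classifier's accuracy
# differs across demographic groups, the kind of disparity the facial
# recognition research described.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return per-group accuracy for parallel lists of predictions,
    ground-truth labels, and group identifiers."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical example data: a model that serves group "A" far better than "B".
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = accuracy_by_group(preds, labels, groups)
print(rates)                                       # {'A': 1.0, 'B': 0.5}
print(max(rates.values()) - min(rates.values()))   # accuracy gap: 0.5
```

A dashboard like the one described below would presumably surface gaps of this kind automatically, across many models and attributes, rather than relying on engineers to run ad hoc checks.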
“Things like transparency, intelligibility, and explanation are new enough to the field that few of us have sufficient experience to know everything we should look for and all the ways that bias might lurk in our models,” Microsoft senior researcher Rich Caruana told the Review.
Caruana is part of a team building a bias-detection dashboard, though he cautions against over-reliance on policing software with software.
“Of course, we can’t expect perfection—there’s always going to be some bias undetected or that can’t be eliminated—the goal is to do as well as we can,” Caruana says. “The most important thing companies can do right now is educate their workforce so that they’re aware of the myriad ways in which bias can arise and manifest itself and create tools to make models easier to understand and bias easier to detect.”
Facebook also unveiled Fairness Flow, its own tool for evaluating algorithmic fairness, at its F8 developer conference earlier this month.