Google introduces machine learning analysis tool to combat AI bias
In a blog post, Google has unveiled the What-If Tool, a bias-detection feature for TensorBoard, the web dashboard for its TensorFlow machine learning framework.
The What-If Tool lets users visualize model results on the TensorBoard dashboard, test and visualize the effects of manual edits to examples from a dataset, and view partial dependence plots showing how changes to a single feature alter the model’s predictions.
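The last of those features, partial dependence, has a simple generic recipe: hold every feature fixed except one, sweep that feature across a range of values, and average the model's predictions at each value. A minimal sketch of that idea (not the What-If Tool's own API; the model and feature names here are hypothetical):

```python
# Generic partial dependence sketch: sweep one feature over a grid
# while leaving the other features of each example unchanged, and
# record the model's average prediction at each grid value.

def toy_model(age, income):
    # Hypothetical stand-in for a trained model's prediction function.
    return 0.02 * age + 0.00001 * income

def partial_dependence(model, examples, feature, grid):
    """Average prediction over `examples` as `feature` sweeps `grid`."""
    curve = []
    for value in grid:
        preds = [model(**{**ex, feature: value}) for ex in examples]
        curve.append(sum(preds) / len(preds))
    return curve

examples = [{"age": 30, "income": 50_000},
            {"age": 45, "income": 80_000}]
curve = partial_dependence(toy_model, examples, "age", grid=[20, 40, 60])
print(curve)  # predictions rise with age, all else held fixed
```

The resulting curve is what a partial dependence plot draws: a direct picture of how one input, in isolation, moves the model's output.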
The tool addresses the difficulty of probing machine learning models with what-if questions, which ordinarily requires writing custom code for each scenario a practitioner wants to test.
“Not only is this process inefficient, it makes it hard for non-programmers to participate in the process of shaping and improving ML models. One focus of the Google AI PAIR initiative is making it easier for a broad set of people to examine, evaluate, and debug ML systems,” Google AI Software Engineer James Wexler writes in the post.
Among the tool’s other features, the post highlights its capacity for detecting misclassifications, uncovering bias in algorithms, and investigating how a model performs across different subgroups.
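The subgroup comparison the post describes amounts to slicing a set of predictions by a group attribute and computing a metric per slice. A minimal sketch of that calculation (a generic illustration, not the What-If Tool's API; the records and field names are hypothetical):

```python
# Slice prediction accuracy by subgroup: count correct predictions
# within each group and divide by that group's size.
from collections import defaultdict

def accuracy_by_subgroup(records, group_key):
    """Fraction of correct predictions within each subgroup."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        g = r[group_key]
        total[g] += 1
        correct[g] += int(r["prediction"] == r["label"])
    return {g: correct[g] / total[g] for g in total}

records = [
    {"group": "A", "prediction": 1, "label": 1},
    {"group": "A", "prediction": 0, "label": 1},
    {"group": "B", "prediction": 1, "label": 1},
    {"group": "B", "prediction": 1, "label": 1},
]
print(accuracy_by_subgroup(records, "group"))  # {'A': 0.5, 'B': 1.0}
```

A large gap between slices, like the one in this toy output, is exactly the kind of disparity such a tool is meant to surface for further investigation.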
Microsoft and IBM have each introduced measures to reduce bias in AI following the public controversy sparked by an academic study of facial recognition systems; the issue has since drawn scrutiny in hearings before a U.S. House subcommittee.