Google Brain researcher discusses AI translator project
Google Brain research scientists have developed a tool for increasing the transparency of machine learning models, according to a Quanta Magazine article.
“Interpretable” machine learning specialist Been Kim and her colleagues created “Testing with Concept Activation Vectors” (TCAV) as a tool to help translate the decision-making process of an AI system. It does so by reporting the extent to which a particular concept, such as a certain feature, influenced the algorithm’s outcome. The system was originally tested on machine learning models for image recognition, but Kim says it also works on models trained with text and some kinds of data visualization.
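The core idea behind TCAV can be sketched in a few lines: train a linear classifier to separate a model's internal activations for concept examples from activations for random examples, take the normal to the decision boundary as the concept activation vector (CAV), and score how often class gradients point along it. The sketch below uses synthetic activations and stand-in gradients purely for illustration; the array shapes, the `LogisticRegression` choice, and the random data are assumptions, not the published implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical activations at one layer: 50 concept examples vs. 50 random
# examples, each a 16-dimensional vector (shapes are illustrative only).
rng = np.random.default_rng(0)
concept_acts = rng.normal(loc=1.0, size=(50, 16))  # e.g. "striped" images
random_acts = rng.normal(loc=0.0, size=(50, 16))   # unrelated images

# A linear classifier separates the two sets; the CAV is the (unit-normalized)
# normal to its decision boundary in activation space.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 50 + [0] * 50)
clf = LogisticRegression().fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Conceptual sensitivity for one input: the directional derivative of the
# class logit along the CAV. Here the gradient is a synthetic stand-in.
grads = rng.normal(size=(100, 16))  # stand-in d(logit)/d(activation) for 100 inputs
tcav_score = float((grads @ cav > 0).mean())  # fraction with positive sensitivity
print(tcav_score)
```

A score near 1.0 would suggest the concept consistently pushes the model toward the class; near 0.0, away from it. With the random stand-in gradients used here, the score is expected to hover around 0.5.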
“AI is in this critical moment where humankind is trying to decide whether this technology is good for us or not,” Kim says. “If we don’t solve this problem of interpretability, I don’t think we’re going to move forward with this technology. We might just drop it.”
Kim says “interpretability” research in AI has split into two branches: an academic scientific branch that studies the specifics of how models work, and the branch her work focuses on, interpretability for responsible AI, which can provide warnings and indications when algorithms create problems or risk. Other examples of the latter include Google’s What-If Tool and a Microsoft bias-detection project. Kim suggests that while this approach does not provide a deep understanding of how the technology works, it can help make it safe enough to leverage its benefits.
Google Cloud announced in December it would decline to offer general-purpose facial recognition APIs while it works out how to proceed with responsible AI development.