Tool for explainable face biometrics, neural networks open-sourced by TruEra

TruEra has made its tool for explaining machine learning models built on neural networks, as many biometric systems are, available as open-source software, according to a company announcement.

The new TruLens provides a single API, with a uniform abstraction layer, for explaining models built with TensorFlow, PyTorch and Keras. The library offers a coherent and consistent approach to explaining deep neural networks, the company says, grounded in public research. TruLens also natively supports internal explanations, such as which visual concepts a facial recognition model draws on to identify people in images.
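
The announcement does not reproduce TruLens's own API, but as a rough, hypothetical sketch of the kind of gradient-based input attribution such a library exposes, the following PyTorch snippet implements a minimal integrated-gradients attribution. The model, input tensor and class index are placeholders, not TruLens calls:

    import torch

    def integrated_gradients(model, x, baseline, target_class, steps=32):
        # Attribute the target class score to input pixels by averaging
        # gradients along a straight-line path from a baseline (e.g. a
        # black image) to the actual input, then scaling by the input delta.
        model.eval()
        alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
        path = baseline + alphas * (x - baseline)    # (steps, C, H, W)
        path.requires_grad_(True)
        model(path)[:, target_class].sum().backward()
        avg_grad = path.grad.mean(dim=0)             # mean gradient over the path
        return (x - baseline).squeeze(0) * avg_grad  # per-pixel attribution

    # Hypothetical usage with a face-recognition classifier `net` and a
    # preprocessed image tensor of shape (1, C, H, W):
    # attr = integrated_gradients(net, image, torch.zeros_like(image), target_class=7)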

The tool is inspired in significant part by the paper ‘Influence-Directed Explanations for Deep Convolutional Networks,’ written by the library’s creators at Carnegie Mellon University, TruEra says.
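The core idea of that paper, loosely stated, is to measure how much a quantity of interest, such as a class score, responds to individual internal units, so that explanations can be phrased in terms of internal concepts rather than raw pixels. A stripped-down, single-input PyTorch sketch of that idea follows, with hypothetical names; the paper itself defines influence as an expectation over a distribution of inputs, which this simplification omits:

    import torch

    def channel_influence(model_head, activations, target_class):
        # Gradient of the class score with respect to an internal layer's
        # activations, pooled per channel: a crude proxy for how much each
        # internal "concept" channel influences the prediction.
        acts = activations.detach().requires_grad_(True)
        model_head(acts)[:, target_class].sum().backward()
        return acts.grad.mean(dim=(0, 2, 3))  # one influence score per channel

    # `activations` would come from a forward hook on a chosen conv layer,
    # and `model_head` is the remainder of the network after that layer.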

Use cases for TruLens include explaining computer vision models for tasks like object recognition and face biometrics, natural language processing tasks such as detecting malicious speech or powering smart assistants, forecasting, and personalized recommendations.

“Image recognition and text recognition machine learning models are both highly in demand and have a lot of consumer wariness about them, due to highly publicized stories about error or possible misuse,” says Shayak Sen, co-founder and CTO of TruEra. “The recent European Commission regulations specifically listed cautions around machine learning models and how they deal with personal data or images. So there is a huge need for explainability for these types of models, to ensure that they are effective, but also compliant and easily explained to a concerned society. We feel strongly about the ethical use of AI, and wanted to make TruLens freely available to the world to help ensure responsible adoption of AI for uses like image recognition.”

TruLens is the product of eight years of explainability research conducted at both Carnegie Mellon University and TruEra, and is available now.

Explainability has been recognized as a necessary feature to increase the trustworthiness of artificial intelligence in general, and biometrics in particular.
