Tool for explainable face biometrics, neural networks open-sourced by TruEra

TruEra has released its tool for explaining machine learning models built on neural networks, the architecture behind many biometric systems, as open source software, according to a company announcement.

The new TruLens provides a uniform API for explaining models built with TensorFlow, PyTorch and Keras through a common abstraction layer. The library offers a coherent and consistent approach to explaining deep neural networks, the company says, grounded in published research. TruLens also natively supports internal explanations, such as which visual concepts a facial recognition model draws on to identify people in images.
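As a rough illustration of how a uniform explanation API of this kind might be used, the sketch below wraps a Keras model and requests input attributions. The module paths and class names (`get_model_wrapper`, `IntegratedGradients`, `.attributions()`) are assumptions based on early TruLens releases, not a guaranteed interface.

```python
# Hedged sketch of the uniform-API idea: wrap a framework-specific model,
# then request attributions through one shared interface. The trulens module
# paths and class names are assumptions based on early releases of the
# library and may differ in current versions.
import numpy as np
from tensorflow.keras.applications import MobileNetV2

from trulens.nn.models import get_model_wrapper          # assumed entry point
from trulens.nn.attribution import IntegratedGradients   # assumed attribution class

keras_model = MobileNetV2(weights=None)

# The same wrapper call is intended to accept Keras, TensorFlow, or PyTorch
# models, so the attribution code below would not change across frameworks.
wrapped = get_model_wrapper(keras_model)

# Input attributions: an array the same shape as the input, scoring how much
# each pixel contributed to the model's prediction.
explainer = IntegratedGradients(wrapped)
batch = np.random.rand(1, 224, 224, 3).astype("float32")
attributions = explainer.attributions(batch)
print(attributions.shape)  # expected: (1, 224, 224, 3)
```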

The tool is inspired in significant part by the paper ‘Influence-Directed Explanations for Deep Convolutional Networks,’ written by the library’s creators during their research at Carnegie Mellon University, TruEra says.
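For intuition about the technique the paper describes, the following is a minimal sketch in plain PyTorch, not the TruLens API: an influence-directed explanation measures the gradient of a quantity of interest (here, one class’s logit) with respect to an internal layer’s activations, averaged over a batch standing in for the distribution of interest. The choice of ResNet-18, the `layer4` hook point, and the random inputs are all illustrative assumptions.

```python
import torch
import torchvision.models as models

# Minimal sketch of an influence-directed explanation in plain PyTorch.
# Model, layer, target class, and inputs are illustrative assumptions.
model = models.resnet18(weights=None).eval()
target_class = 281  # quantity of interest: this class's logit

# Capture the activations of an internal layer with a forward hook.
acts = {}
def save_activations(module, inputs, output):
    output.retain_grad()        # keep gradients on this intermediate tensor
    acts["layer4"] = output
handle = model.layer4.register_forward_hook(save_activations)

# A random batch stands in for the "distribution of interest" from the paper.
x = torch.randn(8, 3, 224, 224)
logits = model(x)
logits[:, target_class].sum().backward()

# Internal influence per channel: gradient of the quantity of interest with
# respect to the internal activations, averaged over batch and spatial dims.
influence = acts["layer4"].grad.mean(dim=(0, 2, 3))
top_channels = influence.argsort(descending=True)[:5]
print("Most influential layer4 channels:", top_channels.tolist())
handle.remove()
```

Ranking internal units this way is what allows explanations in terms of the concepts a model relies on, rather than only pixel-level saliency.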

Use cases for TruLens include explaining computer vision models for tasks like object recognition and face biometrics, natural language processing models for applications such as detecting malicious speech or powering smart assistants, and models for forecasting and personalized recommendations.

“Image recognition and text recognition machine learning models are both highly in demand and have a lot of consumer wariness about them, due to highly publicized stories about error or possible misuse,” says Shayak Sen, co-founder and CTO of TruEra. “The recent European Commission regulations specifically listed cautions around machine learning models and how they deal with personal data or images. So there is a huge need for explainability for these types of models, to ensure that they are effective, but also compliant and easily explained to a concerned society. We feel strongly about the ethical use of AI, and wanted to make TruLens freely available to the world to help ensure responsible adoption of AI for uses like image recognition.”

TruLens is the product of eight years of explainability research conducted at both Carnegie Mellon University and TruEra, and is available now.

Explainability has been recognized as a necessary feature to increase the trustworthiness of artificial intelligence in general, and biometrics in particular.
