
Meta suggests an AI computer vision fairness standard, opens a model to all

Meta made a couple of significant computer vision announcements this week: it introduced a proposed fairness benchmark, and it made a vision model open source.

In both cases, the parent of Facebook wants to insinuate itself deeper into the fabric of AI development.

Meta has proposed FACET as a fairness standard for image classification and semantic segmentation “at unprecedented scale.” How much, if at all, Facebook benefits from this is an open question. The company famously swore off facial recognition for the social media service.

Presumably referring to its corporate self, Meta said in an announcement, “we have a responsibility to ensure that our AI systems are fair and equitable.”

The concern is that anyone using AI computer vision “may” have a bad experience because of their demographics, not simply because biometric recognition and related tasks are inherently complex.

FACET, an acronym only a human could dream up, stands for FAirness in Computer Vision EvaluaTion. It is designed to better evaluate vision models for visual grounding, instance segmentation, detection and classification.

There are 50,000 people recorded in 32,000 images in the FACET database, according to Meta. Each image is labeled for demographic attributes by expert human annotators. The company did not explain how it defines “expert.”

The labels cover physical attributes including perceived skin tone and hairstyle, as well as “person-related” classes such as doctor and basketball player.
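Benchmarks like this work by disaggregating a model’s performance across the annotated demographic attributes, so that a gap between groups becomes visible as a gap between per-group scores. The following is a minimal illustrative sketch of that idea, not Meta’s actual evaluation code; the function name and the toy data are assumptions for demonstration.

```python
from collections import defaultdict

def disaggregated_accuracy(predictions, labels, groups):
    """Compute accuracy separately for each annotated group.

    A fairness gap shows up as a spread between the per-group
    accuracies. (Illustrative sketch only, not Meta's FACET code.)
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy example: the model misclassifies one image in group "B".
preds  = ["doctor", "doctor", "nurse", "doctor"]
truth  = ["doctor", "doctor", "doctor", "doctor"]
groups = ["A", "A", "B", "B"]
acc = disaggregated_accuracy(preds, truth, groups)
# acc["A"] == 1.0, acc["B"] == 0.5 -- a 50-point gap for group "B"
```

An aggregate accuracy of 75 percent would hide this disparity entirely; reporting per-group scores is what makes the gap measurable.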

As well, the company says, there are labels for 69,000 masks from the SA-1B database. SA-1B stands for Segment Anything 1 Billion, a dataset designed for training general-purpose object segmentation models on images from the wild.

The announcement, a six-minute read, goes into much more detail.

It also explains that Meta is expanding access to DINOv2 by making it open source. The computer vision model was trained using self-supervised learning to produce universal features, and it is covered by the Apache 2.0 license.

DINOv2-derived dense prediction models have been released for semantic image segmentation and monocular depth estimation.

This will give the AI community “greater flexibility to explore its capabilities on downstream tasks,” according to the company.
