
Radically oversimplified AI skin-tone tool could be out at Google


It can be hard for some to understand that one of the most commonly referenced (yet still obscure) skin tone scales, the Fitzpatrick Skin Type gradient, describes four "white" tones, one "brown" tone, one "black" tone and nothing else.

The FST has been used to predict sunburn risk, make cosmetics, color emojis and, improbably, classify the skin of every person around the world who was ever recorded by machine vision or whose photo was posted online.

Google executives were congratulated last week for saying that the company is working on a new skin tone scale, one that does not appear to have been generated to reflect the cast of The Brady Bunch TV show.

It is progress of a sort for Google.

In 2015, the company was found to be autonomously classifying some people of color in its Photos app as "gorilla," "chimp," "chimpanzee" and "monkey." Executives promised to fix the algorithms, but it is unclear whether a company-wide solution was ever found and implemented.

And in the last seven months, Google has reportedly fired a pair of AI ethics researchers after they criticized how the company handles bias in its algorithms.

In October 2020, attendees at a U.S. Department of Homeland Security-sponsored biometrics gathering, the International Face Performance Conference, said the gradient should be replaced, calling it inadequate for humanity's diversity.

Reuters, in an exclusive article, reported that after its correspondents asked Google about its use of the FST, the company said it would find alternatives. That was the first time Google had said so, according to Reuters, and the position puts the company ahead of its peers.

Among those peers are Microsoft, Apple and Garmin, all of which employ the FST for health-related products, according to Microsoft.

For supporters of the FST's use in AI, the gradient is just one of many tools applied in writing algorithms, and not singularly important. But the U.S. National Institute of Standards and Technology, which maintains an ongoing quality ranking of facial recognition algorithms called the Face Recognition Vendor Test, has found that it is hard to isolate problems within algorithms.

In a facial recognition technology report involving demographic sources of errors, NIST notes, “Errors at one stage will generally have downstream consequences.”
