Radically oversimplified AI skin-tone tool could be out at Google

It can be hard for some to grasp that one of the most commonly referenced (yet still obscure) skin tone scales, the Fitzpatrick Skin Type (FST) gradient, describes four “white” tones, one “brown” tone, one “black” tone and nothing else.
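To make that imbalance concrete, here is a minimal Python sketch. The six type descriptions are paraphrased from commonly cited dermatology references, and the broad-tone grouping simply encodes the framing above; none of it is drawn from Google’s or any other company’s actual code.

```python
from collections import Counter

# Illustrative sketch only: the six Fitzpatrick Skin Types (I-VI) with
# their commonly cited descriptions (paraphrased, not an official source).
FITZPATRICK_TYPES = {
    "I":   "pale white; always burns, never tans",
    "II":  "white; usually burns, tans minimally",
    "III": "white to light brown; sometimes burns, tans uniformly",
    "IV":  "olive or light brown; rarely burns, tans easily",
    "V":   "brown; very rarely burns, tans darkly",
    "VI":  "dark brown to black; never burns",
}

# Collapse the scale into the broad tone buckets described in the article.
BROAD_TONE = {
    "I": "white", "II": "white", "III": "white", "IV": "white",
    "V": "brown", "VI": "black",
}

# Four of the six categories fall into a single "white" bucket, leaving
# one category each for "brown" and "black" skin.
print(Counter(BROAD_TONE.values()))
# Counter({'white': 4, 'brown': 1, 'black': 1})
```

Run against a diverse population, a labeler built on such a scale has four ways to distinguish lighter skin and only two for everyone else, which is the core of the objection to using it in machine vision.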

The FST has been used to predict sunburn risk, make cosmetics, color emojis and, improbably, classify the skin of every person around the world who was ever recorded by machine vision or whose photo was posted online.

Google executives were congratulated last week for saying that the company is working on a new skin tone scale, one that does not seem designed to reflect the cast of the Brady Bunch TV show.

It is progress of a sort for Google.

In 2015, the company’s Photos app was found to be autonomously classifying some people of color as “gorilla,” “chimp,” “chimpanzee” and “monkey.” Executives promised to fix the algorithms, but it is unclear whether a company-wide solution was ever found and implemented.

And in the last seven months, Google reportedly fired a pair of AI ethics researchers after they criticized how the company handles bias in its algorithms.

In October 2020, attendees at a U.S. Department of Homeland Security-sponsored biometrics gathering, the International Face Performance Conference, said the gradient should be replaced, calling it inadequate for humanity’s diversity.

Reuters, in an exclusive article, reported that after its correspondents asked Google about its use of the FST, the company said it would find alternatives. According to Reuters, that was the first time Google had made such a commitment, a position that puts it ahead of its peers.

Those peers include Microsoft, Apple and Garmin, all of which employ the FST for health-related products, according to Microsoft.

For supporters of the FST’s use in AI, the gradient is just one of many tools applied in writing algorithms, and not singularly important. But the U.S. National Institute of Standards and Technology, which maintains an ongoing quality ranking of facial recognition algorithms called the Face Recognition Vendor Test, has found that it is hard to isolate problems in algorithms.

In a facial recognition technology report examining demographic sources of error, NIST notes, “Errors at one stage will generally have downstream consequences.”
