Radically oversimplified AI skin-tone tool could be out at Google
It can be hard for some to understand that one of the most commonly referenced (yet still obscure) skin tone scales, the Fitzpatrick Skin Type gradient, describes four “white” tones, one “black” tone, one “brown” tone and nothing else.
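The imbalance is easy to see when the six categories are written out. Below is a minimal Python sketch encoding the standard clinical FST descriptions; it is illustrative only and does not reflect Google's or any vendor's actual implementation.

# A minimal sketch of the six Fitzpatrick Skin Type (FST) categories
# as they might be encoded in software. The descriptions are the
# standard clinical ones; this is not any company's real code.
from enum import Enum

class FitzpatrickType(Enum):
    I = "pale white skin; always burns, never tans"
    II = "white skin; usually burns, tans minimally"
    III = "white to light brown skin; sometimes burns, tans uniformly"
    IV = "light brown / olive skin; burns minimally, tans easily"
    V = "brown skin; rarely burns, tans darkly"
    VI = "dark brown to black skin; never burns"

# Four of the six categories describe lighter ("white") skin, which is
# why critics call the scale radically oversimplified for global use.
light_types = [FitzpatrickType.I, FitzpatrickType.II,
               FitzpatrickType.III, FitzpatrickType.IV]
print(f"{len(light_types)} of {len(FitzpatrickType)} FST categories describe light skin")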
The FST has been used to predict sunburn risk, make cosmetics, color emojis and, improbably, classify the skin of every person around the world who was ever recorded by machine vision or whose photo was posted online.
Google executives were congratulated last week for saying that the company is working on a new skin tone scale, one that does not look as though it was generated to reflect the cast of the Brady Bunch TV show.
It is progress of a sort for Google.
In 2015, Google's Photos app was found to be autonomously classifying some people of color as "gorilla," "chimp," "chimpanzee" and "monkey." Executives promised to fix the algorithms, but it is unclear whether a company-wide solution was ever found and implemented.
And in the last seven months, Google has reportedly fired a pair of AI ethics researchers after they criticized how the company handles bias in its algorithms.
In October 2020, attendees at the International Face Performance Conference, a biometrics gathering sponsored by the U.S. Department of Homeland Security, said the gradient should be replaced because it is inadequate to capture humanity's diversity.
Reuters, in an exclusive article, reported that after its correspondents asked Google about its use of the FST, the company said it would find alternatives. That was the first such commitment from Google, according to Reuters, and it puts the company ahead of its peers.
Among those peers are Microsoft, Apple and Garmin, all of which employ the FST for health-related products, according to Reuters.
For supporters of the FST's use in AI, the gradient is just one of many tools applied in writing algorithms, and no single one is decisive. But the U.S. National Institute of Standards and Technology, which runs an ongoing quality ranking of facial recognition algorithms called the Face Recognition Vendor Test, has found that it is hard to isolate where problems in algorithms arise.
In a facial recognition technology report involving demographic sources of errors, NIST notes, “Errors at one stage will generally have downstream consequences.”