Bias in facial recognition is handicapping deepfake detection
Researchers from the University of Southern California have found harmful bias in deepfake datasets and detection models. A commonly used dataset is "overwhelmingly" dominated by white subjects, particularly white females.
The result of this skew is that deepfake detectors are less able to spot fraudulent images and video of people of color.
That is troublesome. Many in the industry feel that deepfake algorithms are growing so realistic so quickly that automated detectors will soon be the only hope to spot them.
In a paper, researchers Loc Trinh and Yan Liu experimented with three well-known deepfake detectors and found error-rate differences of as much as 10.7 percent depending on gender and race.
The pair found that the popular FaceForensics++ dataset, in particular, is poorly balanced. Attention to bias in AI datasets is still relatively new, and more examples of underrepresentation are likely to surface as scrutiny improves.
Achieving a low false-positive rate when trying to spot faked video is a “challenging problem,” the researchers write.
Using the FaceForensics++ and Blended Image biometric datasets, they trained MesoInception4, Xception and Face X-ray models, all of which have "proven success" in video detection. They said the three represent a range of model sizes, architectures and loss formulations.
All three detectors performed comparably on male and female faces, with error-rate differences of just 0.1 to 0.3 percent.
Detectors trained on Blended Image were least successful when presented with darker-skinned African faces, showing a 3.5 to 6.7 percent difference in error rate.
However, Blended Image and Face X-ray were most successful on white male faces, with an error rate of 9.8 percent for all white faces and 9.5 percent for white males.
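The gaps reported above come from comparing a detector's error rate within each demographic subgroup. Here is a minimal sketch of that kind of measurement, using NumPy with entirely hypothetical predictions and invented group labels, not the USC authors' actual evaluation code:

```python
import numpy as np

# Hypothetical data: 1 = flagged as deepfake, compared against ground truth.
# The group labels stand in for intersectional subgroups (e.g., gender
# crossed with skin tone); all values here are invented for illustration.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["WM", "WF", "BM", "BF", "WM", "BF", "WF", "BM"])

def per_group_error_rates(y_true, y_pred, groups):
    """Return each subgroup's error rate (fraction of misclassified samples)."""
    return {
        g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups)
    }

rates = per_group_error_rates(y_true, y_pred, groups)
# The "difference in error rate" quoted in the article is the spread
# between the best- and worst-served subgroups.
disparity = max(rates.values()) - min(rates.values())
print(rates, f"disparity: {disparity:.1%}")
```

A detector can post a strong overall error rate while still leaving a wide spread between subgroups, which is why per-group breakdowns like this matter.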
An article in VentureBeat drew a comparison between the USC paper and another one published last year by the University of Colorado, Boulder. According to the publication, cisgender men were correctly identified at least 95 percent of the time by algorithms written by Microsoft, Clarifai, Amazon and others.
But transgender men were misidentified as women 38 percent of the time.
Some researchers are working on face biometric liveness detection. One effort recognizes the rapid and subtle color shift in a live person’s face as blood washes under the skin in pulses.
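Such pulse-based checks, often called remote photoplethysmography, typically track the average green-channel brightness of the face across video frames and look for a periodic component at plausible heart rates. Below is a minimal sketch of that idea; the synthetic data stands in for real video, and the band limits and threshold are chosen for illustration only:

```python
import numpy as np

def has_pulse_signal(green_means, fps, band=(0.7, 4.0), snr_threshold=3.0):
    """Crude liveness check: look for a dominant heart-rate-band frequency
    in the mean green-channel intensity of a face region over time.
    `green_means` is a 1-D array with one value per video frame."""
    signal = green_means - green_means.mean()          # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if not in_band.any():
        return False
    # A live face should show a pulse peak well above the band's average power.
    return spectrum[in_band].max() > snr_threshold * spectrum[in_band].mean()

# Demo on synthetic data: a 72 bpm pulse (1.2 Hz) buried in noise.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
live = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.2, t.size)
flat = np.random.normal(0, 0.2, t.size)  # e.g., a printed photo or screen replay
print(has_pulse_signal(live, fps), has_pulse_signal(flat, fps))
```

Production liveness systems are considerably more involved, but the underlying cue is the same: a printed photo or replayed screen has no heartbeat.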
Facebook, still smarting from its inability to separate dangerous political propaganda from informed threads about health care, offered a $1 million prize in its Deepfake Detection Challenge, which wrapped up last June. The results of the challenge were anything but definitive.