Researchers use makeup to elude facial recognition via novel black-box attack

Researchers at Israel’s Ben-Gurion University of the Negev have found a way to thwart facial recognition cameras using software-generated patterns and natural makeup techniques, with a success rate of 98 percent, reports Vice.
For the study ‘Dodging Attack Using Carefully Crafted Natural Makeup’, the team of five used YouCam Makeup, a selfie app, to digitally apply makeup to identifiable regions of 20 participants’ faces. In a second condition, a makeup artist physically recreated the digitally applied, software-generated makeup patterns on the participants, but in a naturalistic way. The participants then walked through a hallway recorded by two cameras. In both the digital and physical makeup tests, the participants were enrolled as blacklisted individuals that the systems were meant to flag.
The face biometric system was unable to identify any of the participants when makeup was digitally applied. For the physical makeup recreation experiment, “the face recognition system was able to identify the participants in only 1.22 percent of the frames (compared to 47.57 percent without makeup and 33.73 percent with random natural makeup), which is below a reasonable threshold of a realistic operational environment,” states the paper.
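For context, those percentages are frame-level identification rates: the share of captured video frames in which the system matched a participant to the blacklist. The sketch below shows how such rates fall out of a standard embedding-plus-threshold matcher; the function name and the 0.6 cosine-similarity cutoff are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def frame_identification_rate(frame_embeddings, blacklist_embedding, threshold=0.6):
    """Fraction of video frames matched to the blacklisted identity.

    frame_embeddings: one face-recognition embedding per captured frame.
    threshold: hypothetical cosine-similarity cutoff; real deployments
    tune this to an acceptable false-match rate.
    """
    gallery = blacklist_embedding / np.linalg.norm(blacklist_embedding)
    hits = sum(
        1 for emb in frame_embeddings
        if float(np.dot(emb / np.linalg.norm(emb), gallery)) >= threshold
    )
    return hits / len(frame_embeddings)

# Rates reported in the paper, for comparison:
#   no makeup:                  47.57% of frames
#   random natural makeup:      33.73% of frames
#   adversarial natural makeup:  1.22% of frames
```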
“[The makeup artist] didn’t do too much tricks, just see the makeup in the image and then she tried to copy it into the physical world. It’s not a perfect copy there. There are differences but it still worked,” Nitzan Guettan, a doctoral student and lead author of the study, told Vice.
“Our attacker assumes a black-box scenario, meaning that the attacker cannot access the target FR model, its architecture, or any of its parameters. Therefore, [the] attacker’s only option is to alter his/her face before being captured by the cameras that feeds the input to the target FR model,” according to the research paper.
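In other words, the deployed pipeline is opaque to the attacker: no access to the model’s weights, architecture, or even its match decisions, so the only degree of freedom is the face presented to the camera. The sketch below illustrates that setting; the class and parameter names are hypothetical, and one common construction for such attacks (not necessarily the authors’ exact procedure) is to craft the perturbation against a local surrogate model and rely on it transferring to the unseen target.

```python
import numpy as np

class BlacklistGate:
    """Hypothetical deployed FR system. The attacker never sees this code,
    the model, or the threshold; they can only alter their face before
    the camera captures it."""

    def __init__(self, embed_fn, blacklist_embeddings, threshold=0.6):
        self.embed_fn = embed_fn  # opaque face-embedding model
        self.blacklist = [e / np.linalg.norm(e) for e in blacklist_embeddings]
        self.threshold = threshold

    def is_flagged(self, frame):
        """Return True if the frame matches any blacklisted identity."""
        emb = self.embed_fn(frame)
        emb = emb / np.linalg.norm(emb)
        return any(float(np.dot(emb, b)) >= self.threshold for b in self.blacklist)

# From the attacker's side, the entire interaction reduces to:
#   frame = camera.capture(attacker_with_makeup)  # the only controllable input
#   gate.is_flagged(frame)                        # outcome observed indirectly, if at all
```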
Adversarial machine learning (AML) attacks have been conducted before. In June, Israeli firm Adversa announced the creation of the ‘Adversarial Octopus,’ a black-box transferable attack designed to fool face biometrics models.
While facial recognition systems have not historically been able to identify people wearing face coverings, the pandemic accelerated the drive to advance this capability. Corsight AI announced in July that the company’s facial recognition system, Fortify, is able to identify individuals wearing both motorcycle helmets and face coverings.