McAfee points to a potentially scary problem with facial biometrics used in airports
The head of McAfee’s Advanced Threat Research group says his team used facial recognition software, plus a willing human accomplice, in an exercise showing how a theoretical person on a no-fly roster could board a commercial airliner.
The process depends on as many fortuitous assumptions as a Mission: Impossible movie, but it is real enough for McAfee’s Steve Povolny to call for new defensive steps by the industry.
In a company blog he co-authored, Povolny writes that the cybersecurity community must create a standard to ensure machine learning systems are hardened against the kind of biometric attacks his unit has demonstrated.
He wrote that developers are not adequately “considering the inherent security flaws present in the mysterious internal mechanics of face-recognition models.”
That, he said, “could provide cyber criminals unique capabilities to bypass critical systems such as automated passport enforcement.”
The team’s goal was to see whether they could create adversarial images in passport-photo format that would convince a facial recognition system that Person A, who is on an imagined no-fly list, was actually Person B, an accomplice who is on no such list. The exercise reportedly succeeded.
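The matching step such an attack targets can be illustrated with a toy sketch. This is not McAfee’s code, and the embeddings, threshold, and function names below are hypothetical; the sketch only assumes the common design in which a face-recognition system compares fixed-length face embeddings by cosine similarity against a decision threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(live_embedding, passport_embedding, threshold=0.8):
    """The gate accepts the traveler if similarity clears the threshold."""
    return cosine_similarity(live_embedding, passport_embedding) >= threshold

# Toy 4-dimensional embeddings; real systems use hundreds of dimensions.
person_a_live = [0.9, 0.1, 0.3, 0.2]            # live camera capture of Person A
adversarial_passport = [0.85, 0.15, 0.35, 0.25]  # crafted passport photo

print(is_match(person_a_live, adversarial_passport))  # → True
```

An adversarial passport image is one crafted so that this similarity check passes for Person A’s live face while the identity on record is Person B’s.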
Povolny writes that, to his knowledge, the exercise is the first to combine model hacking with facial recognition in this way.
His gambit assumes Person A has no digital image on a relevant government system but Person B does.
In the experiment, Person B uses the CycleGAN framework to translate an image of Person A into an image of Person B.
The resulting image resembles each person closely enough that a live capture of Person A at an automated passport control gate matches it, yet the identity the system records is not one that brings a swarm of federal law enforcement agents.
Data systems note the presence of Person B, the accomplice, who has no travel restrictions.
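CycleGAN itself trains two generators, G: A→B and F: B→A, constrained so that translating an image to the other domain and back recovers the original. A minimal numeric sketch of that cycle-consistency objective follows; the generators here are stand-in linear maps for illustration, whereas CycleGAN’s real generators are deep convolutional networks trained alongside discriminators.

```python
def l1(xs, ys):
    """Mean absolute (L1) distance between two flattened 'images'."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

# Stand-in generators for illustration only.
# G maps domain A -> B; F maps domain B -> A.
def G(image):
    return [2.0 * px for px in image]

def F(image):
    return [0.5 * px for px in image]

def cycle_consistency_loss(a, b):
    """||F(G(a)) - a||_1 + ||G(F(b)) - b||_1, the CycleGAN cycle loss."""
    return l1(F(G(a)), a) + l1(G(F(b)), b)

a = [0.1, 0.4, 0.7]  # toy 'image' of Person A
b = [0.3, 0.6, 0.9]  # toy 'image' of Person B
print(cycle_consistency_loss(a, b))  # → 0.0: these toy maps are exact inverses
```

During real training this loss is minimized jointly with adversarial losses, which is what lets the translated image stay plausible as a photograph of either person.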
A security agent very likely would spot the deception, according to Povolny. And advanced surveillance like the kind in major airports might spot Person A on a concourse. But assuming that Person A is not arbitrarily flagged later, and facial recognition cameras miss the suspect, Person A will fly that day.
Povolny writes that other fields, such as cryptography, have long relied on published standards to guarantee reliability in adversarial environments. He suggests it is time to create an equivalent standard for machine learning in biometrics.