Reflectacles develops eyewear to bypass facial recognition systems
Chicago company Reflectacles believes it has a solution to concerns about the widespread deployment of facial biometrics. The company is developing IRpair and Phantom, eyewear that uses special lenses and optical filters to block facial recognition, tracking and infrared facial mapping. “Both block 3D infrared facial mapping during both day and night and block 2D video algorithm-based facial recognition on cameras with infrared for illumination,” according to ChicagoINNO. According to the Kickstarter page, the glasses will ship in April 2020, and the company has doubled its funding goal, raising $34,000.
Founder Scott Urban says international customers have already shown interest in the glasses, hoping to bypass facial recognition algorithms, especially at political protests.
“[It’s about] continuing the process of how to make the best way to block facial recognition but then also to make it seemingly normal as possible,” Urban said.
“You’re largely buying Reflectacles not for the purpose of committing crime or you don’t want the government to be seeing you. It’s kind of a political speech,” said Rajiv Shah, data scientist at DataRobot and adjunct assistant professor at the University of Illinois-Chicago. “You want to stand up and point [out to] the people around you that, ‘Hey, this is something we all need to think about.’”
It’s not just Americans who are working on spy-like gadgets to bypass facial recognition identification. A group of white hat researchers from Lomonosov Moscow State University and Huawei Moscow Research Center came up with a wearable card to confuse the technology, writes Synced. Their technique, called “AdvHat,” is presented in a paper titled “AdvHat: Real-World Adversarial Attack on ArcFace Face ID system.”
The method was tested on full-face photos under different lighting conditions, viewpoints and facial rotations, with the goal to “change the input to an image classifier so the recognized class will shift from correct to some other class.” The attack consists of a simple color sticker fixed to a hat, which reduces recognition accuracy by creating a raised-eyebrow effect, confirming that machine learning algorithms are prone to error when exposed to adversarial examples.
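The underlying idea, that a small, deliberately chosen change to an input can flip a classifier’s decision, can be shown with a minimal sketch. The following is an illustrative fast-gradient-sign-style example against a toy linear classifier, not the AdvHat attack itself; all weights and numbers are invented for demonstration.

```python
import numpy as np

# Toy two-class linear classifier: predicted class = argmax(W @ x).
# These weights and inputs are hypothetical, chosen so the effect is visible.
W = np.array([[1.0, 0.0],    # weights for class 0
              [0.0, 1.0]])   # weights for class 1
x = np.array([1.0, 0.4])     # input: class 0 wins (score 1.0 vs 0.4)

original_class = int(np.argmax(W @ x))

# The gradient of (score_1 - score_0) with respect to x is W[1] - W[0].
# Stepping a small amount along its sign pushes x toward class 1,
# analogous to how an adversarial sticker perturbs a face image.
epsilon = 0.4
x_adv = x + epsilon * np.sign(W[1] - W[0])   # becomes [0.6, 0.8]

adversarial_class = int(np.argmax(W @ x_adv))
print(original_class, adversarial_class)
```

The perturbation is small relative to the input, yet the predicted class flips from 0 to 1, which is the same failure mode the sticker exploits at much higher dimension.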