Clearview reveals biometric presentation attack detection feature, talks training and testing
A new presentation attack detection feature has been added to the Clearview Consent API from Clearview AI to allow developers to build spoof detection into identity verification solutions.
Clearview Consent was launched just months ago to bring the company’s facial recognition algorithms to a whole new set of use cases as a selfie biometrics tool, and the addition of presentation attack detection capabilities is the next step in its development, according to the people who made it.
Clearview considered a range of approaches, and CEO Hoan Ton-That points out that developers do not typically have access to the specialized hardware behind device-based 3D biometric systems.
Early engagement with Clearview Consent customers has yielded insights into how businesses and developers plan to use it. That feedback not only convinced the company to pursue liveness detection based on 2D images, but also pointed to a range of possible applications.
“We’re looking at passive liveness video too, but some vendors have told us ‘We have these old profiles, and we want to find out how many of them are deepfakes and how many are presentation attacks,’” Ton-That tells Biometric Update in an interview.
He tells a story about a crypto platform that looked back through images accepted by its KYC provider and found photos and printouts of faces.
Clearview’s technology focuses on single images from commercial RGB cameras, VP of Research Terence Liu told Biometric Update during the same video call.
Clearview takes an ensemble approach, combining models that look for different things, Liu says.
He shared a demo of the software, which looks for replay attacks and masks separately. In a few of the many instances shown, the software detected both in an image that was clearly a replay to human eyes. This, Ton-That explains, is due to the threshold settings.
The settings can be customized for different applications from within the API, and Clearview provides recommended settings.
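To illustrate what per-application threshold tuning through an API of this kind could look like, here is a minimal Python sketch; the endpoint URL, parameter names and response fields are assumptions made for the example, not Clearview’s documented interface:

    # Hypothetical sketch of submitting an image for PAD with custom thresholds.
    # Endpoint, field names and response schema are placeholders, not
    # Clearview's published API.
    import requests

    API_URL = "https://api.example.com/v1/pad/check"  # placeholder endpoint
    API_KEY = "YOUR_API_KEY"

    def check_liveness(image_path, replay_threshold=0.5, mask_threshold=0.5):
        """Submit a single RGB image with application-specific thresholds."""
        with open(image_path, "rb") as f:
            resp = requests.post(
                API_URL,
                headers={"Authorization": f"Bearer {API_KEY}"},
                files={"image": f},
                data={
                    # Lower thresholds catch more spoofs at the cost of more
                    # false rejects; vendor-recommended defaults sit in between.
                    "replay_threshold": replay_threshold,
                    "mask_threshold": mask_threshold,
                },
                timeout=10,
            )
        resp.raise_for_status()
        return resp.json()  # e.g. {"replay_score": ..., "mask_score": ...}

Separate per-attack thresholds would also explain the demo result above, in which a single image crossed both the replay and mask thresholds at once.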
Under the hood
When asked about indications that poor quality spoofs and deepfakes can be better at evading detection by some systems designed to spot them, Liu has a ready technical account of why.
“If you design your model to target the very fine differentiation between a real face and a very good mask face, your model is going to be specialized in that category, but will probably miss out on the other things,” he explains. “So, you want to have an ensemble of models approach. You have one coarse filter to just get rid of all the edge cases, another that zooms into specific cases, and then you can limit the type of training data you’re feeding to each of the models.”
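A rough sketch of the architecture Liu describes, assuming each stage emits a score where higher values indicate an attack, might look like the following; the models and thresholds are illustrative stand-ins, not Clearview’s implementation:

    # Illustrative ensemble PAD: a coarse filter rejects obvious cases, then
    # specialists trained on narrow data slices (e.g. replays, masks) each
    # score one attack type. All components are hypothetical stand-ins.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class PADModel:
        name: str
        score: Callable[[bytes], float]  # higher = more likely an attack
        threshold: float

    def ensemble_pad(image: bytes, coarse: PADModel,
                     specialists: list[PADModel]) -> dict:
        # Stage 1: the coarse filter removes edge cases before fine analysis.
        if coarse.score(image) > coarse.threshold:
            return {"decision": "attack", "stage": coarse.name}
        # Stage 2: each specialist only covers its own slice of the training
        # data, so a model tuned to subtle masks need not also handle replays.
        scores = {m.name: m.score(image) for m in specialists}
        flagged = [m.name for m in specialists if scores[m.name] > m.threshold]
        return {"decision": "attack" if flagged else "bona fide",
                "flagged": flagged, "scores": scores}

The design choice mirrors the quote: limiting what each model sees in training keeps the fine-grained detectors from being diluted by cases the coarse filter already handles.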
“A lot of these commercial masks have very weird wrinkle patterns,” Liu notes.
Ton-That emphasizes the importance of training data volume, and notes the relatively modest size of the training datasets Clearview had for PAD algorithms when it began working on the solution.
“We could augment the training sets, funnily enough, by putting those mask photos into Clearview, and finding way more examples of masks,” Ton-That says. One example is a mask of the lead character from the television show Breaking Bad, which was found in images at various angles and in a wide range of lighting conditions.
Liu says the continued expansion of larger and deeper neural networks has boosted AI applications over hand-picked features in many areas.
“I see a similar trend happening in this also-longstanding field of presentation attack detection,” he says.
The field of biometric PAD has come a long way from past techniques focused on particular features, such as vascular patterns. Now, training neural networks boils down to a simple question, Liu explains: “Does this evaluate higher, in terms of the accuracy, on my dataset, and is my dataset closer to the ground truth for the application I’m interested in?”
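In practice that question is often answered with the standard PAD error rates from ISO/IEC 30107-3; the sketch below states the comparison in those terms, though the article does not specify which metrics Clearview evaluates against internally:

    # APCER: attack presentations wrongly accepted as bona fide.
    # BPCER: bona fide presentations wrongly rejected as attacks.
    # Scores are assumed higher-is-more-attack-like; the metric choice is
    # the standard ISO/IEC 30107-3 pair, used here for illustration.
    import numpy as np

    def pad_error_rates(scores, labels, threshold):
        """labels: 1 = presentation attack, 0 = bona fide presentation."""
        attacks = scores[labels == 1]
        bona_fide = scores[labels == 0]
        apcer = float(np.mean(attacks <= threshold))   # attacks passed as live
        bpcer = float(np.mean(bona_fide > threshold))  # live flagged as attacks
        return apcer, bpcer

A model that lowers both rates on a dataset representative of the target deployment “evaluates higher” in the sense Liu describes.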
The ACLU continues to raise concerns about the source of the data Clearview uses to train its algorithms, but Ton-That told Biometric Update when Clearview Consent was launched that the company does not “anticipate any issues” on that front.
The challenges of scale in biometrics development, both in training data volume and variety, are familiar, and help explain gaps in the performance of some algorithms.
“The models are very smart, but it’s not as smart if you don’t see the data for a particular task,” Liu says. “The industry, the whole computer vision industry, not just biometrics, know about this. That’s why all of these big transformer models are pre-trained. I’m anticipating a similar revolution as how language models revolutionized that field.”
Customer engagement with Clearview Consent has been strong so far, according to Ton-That, with a KYC platform, a BNPL provider and a school security application among its early adopters. Uptake has been entirely organic, and enterprise adoption always takes longer. He is optimistic that the cloud and Docker deployment options and per-query pricing will help attract more clients.
The company is excited about NIST’s planned PAD tests, and Ton-That says it is looking at all testing options.
“The more tests the better, and the more standardization we have around that, the better,” he says.
Ton-That and Liu’s view on testing aligns with their views on training data, with Liu noting that testing should be as broad as possible “to avoid that domain gap.”
Clearview Consent’s PAD feature is available now through the API.
Article Topics
biometric liveness detection | biometrics | Clearview AI | Clearview Consent | face biometrics | identity verification | presentation attack detection | research and development | spoof detection