iBeta biometrics testing expert explains the path from playdough to vendor credibility
Even within the field of biometrics, where neural networks and modern machine learning techniques have revolutionized established modalities and enabled new ones, liveness detection is still relatively new. The concept reached an early milestone with the launch of the Liveness Detection competition (LiveDet) in 2009.
Recent iBeta recruit David Yambay went from data collection for the event to organizing it in 2011, while attending Clarkson University and studying under Dr. Stephanie Schuckers. He had become involved after selling some of his biometric data to the CITER lab as a freshman, and becoming interested in the field through conversations with the people performing data collection.
“It was a lot of research, but a huge portion of my focus was on the testing side,” Yambay, now the deputy director of Biometrics at iBeta, tells Biometric Update in an interview. “It was a thing that rewarded creativity, being able to kind of work with what might seem like a simple idea but then turns into more. Finding different ways to make the PAIs [presentation attack instruments] to try against the systems, and seeing what these small differences could do in the output and how a system saw it.”
Explaining what he did while studying one of the more esoteric areas in a new technology field, he would tell people that he was playing with playdough. Which he was.
While it was dismissed as a PAI initially, playdough could be applied at a certain thickness and drying time to fool some biometric systems, Yambay confirmed during LiveDet 2015.
An evolving field
The terminology around presentation attack detection and biometric liveness was still unsettled in those early days of LiveDet, Yambay recounts. In those days, error rates dubbed “ferrlive” and “ferrfake” were sometimes used to express PAD accuracy.
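The article does not define those early metrics, but in the LivDet literature they correspond to what ISO/IEC 30107-3 now calls BPCER and APCER: the share of live samples wrongly rejected as fake, and the share of spoof samples wrongly accepted as live. A minimal sketch of that calculation, with illustrative function names and made-up sample counts:

```python
# Illustrative sketch of the early LivDet-style error rates mentioned above.
# Assumption: "ferrlive" is the rate of live samples misclassified as fake
# (roughly today's BPCER) and "ferrfake" is the rate of fake/PAI samples
# misclassified as live (roughly today's APCER). Names are hypothetical.

def ferrlive(live_predictions):
    """Fraction of genuinely live samples the system rejected as fake.

    live_predictions: booleans, True if the system called the sample live.
    """
    return sum(not p for p in live_predictions) / len(live_predictions)

def ferrfake(fake_predictions):
    """Fraction of fake (PAI) samples the system accepted as live.

    fake_predictions: booleans, True if the system called the sample live.
    """
    return sum(p for p in fake_predictions) / len(fake_predictions)

# Example with made-up numbers: 100 live samples (4 rejected),
# 50 spoof samples (3 accepted).
live = [True] * 96 + [False] * 4
fake = [False] * 47 + [True] * 3
print(ferrlive(live))  # 0.04
print(ferrfake(fake))  # 0.06
```

Lowering one rate typically raises the other, which is why competitions like LiveDet reported both rather than a single accuracy figure.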
Vendors were also not always thrilled to have vulnerabilities in their systems pointed out, despite LiveDet being voluntary and anonymous. The competition, like the field in general, has thrived and grown regardless.
So has emphasis on independent testing, across the biometrics industry.
“It’s past the point where people can use in-house testing and actually say, ‘this is what we’re doing,’” in Yambay’s assessment.
In PAD, improved maturity is reflected in the adaptation of testing standards to market conditions.
One example Yambay provides is the use of 3D printers to create face or finger PAIs, which is much easier and less costly than just a few years ago.
This kind of PAI has been reclassified in the Android standard in response, from Level 3 to Level 2.
Independent PAD tests are increasingly demanded by customers like banks and fintechs, who back in 2018 and 2019 were more often customers of iBeta directly, contracting the tests themselves.
While PAD testing has increased relative to other types of tests, Yambay hopes to perform more biometrics performance tests based on the ISO 19795 standard in the future, and emphasizes the importance of both types of evaluations.
Deepfake detection could be the next growth area for biometrics testing, as standards are established, and as that happens iBeta will seek accreditation to perform evaluations based on those standards.
“There are new standards in the works that will open up other kinds of testing as well,” Yambay says. “Those will be interesting to get into when they start coming out.”
How and why independent testing works
The industry continues to develop new PAD techniques and contribute to research efforts, but also relies on academia to advance the field.
“It’s definitely a mixture. As I started getting further along working with 3D modeling for presentation attacks, I was consulting with numerous groups, kind of teaching them a little about it. I think it’s a mixture of the other organizations trying to see and keep up with what’s out there, and also some of the freedom of academia that lets us experiment more.”
Meanwhile, the use of creativity, experience, and dedicated expertise are identified by Yambay as advantages that come with independent testing.
“One of the reasons it’s so important beyond just having a third-party result is, a lot of the times when people are testing their systems, it’s the people who are making or who are adjacent to designing the systems that are running the tests,” Yambay explains. “It’s super-helpful to have groups such as iBeta that, that is their one focus: testing the systems, experimenting, and learning all the different things about how systems can function and how attacks can be made and presented. There’s a wealth of knowledge to be gained from having a group that specializes in this.”
That specialization includes building up the experience of new iBeta personnel on Level 1 PAD evaluations before they conduct Level 2 testing.
It also means that not every attempt is successful. Vendors get their first chance to see if their technology will pass the evaluation during a readiness review, which tells some that their technology needs further development. Others fail later in the process, though they are never eager to publicize that fact. Some of those come back later with improved technology and successfully complete the evaluation.
Failure and success alike can yield valuable insights, though only one impresses prospective customers. Internal testing can help with the insight, but at this point, in an industry often accused of overly lofty promises, independence is required for credible evaluations.