Passive biometric liveness detection from ID R&D adds frictionless security for voice and face
For decades the key performance measure of any biometric system was the accuracy of biometric matching. Is this the same person or not? That was the question. ID R&D CEO Alexey Khitrov argues that matching accuracy no longer reflects the real-world security of authentication unless the anti-spoofing component of the technology is taken into account. Is it a person at all? This is the key question for modern biometrics.
Sophisticated spoofing attacks can trick biometric systems by playing manufactured or synthetic speech, or by displaying a public photo or a video clip to impersonate a verified user. Typically this challenge is addressed by introducing potentially clumsy additional steps into the authentication UX, such as OTPs, security questions, and prompted actions. ID R&D’s new liveness detection solutions aim to address the challenge in a completely different way. Khitrov and company have a vision: they want to deliver a stronger authentication process, including anti-spoofing liveness checks, while making it not just convenient but entirely effortless for the end user.
“Biometric authentication can be just as annoying as password-based systems, whether requiring users to swipe, to touch a sensor, to move their phone in a particular direction, etc.,” says Khitrov. “What if, instead, our technology could recognize us in the same easy and obvious way that our friends and family do? That’s what we aim to do. ID R&D has been very busy analyzing the bottlenecks within the authentication process and developing solutions to remove those bottlenecks entirely, taking all user friction out of the process.” In what Khitrov says is an industry first, ID R&D extends that frictionless experience to include wholly passive liveness detection to both face- and voice-based authentication.
Khitrov did not provide details about the specific algorithms and AI that ID R&D uses, but in an interview with Biometric Update, he said delivering passive liveness detection builds on ID R&D’s core mission of frictionless authentication that requires no effort from the end user.
“There are three key areas that allowed us to do all of this. One is subject-matter expertise in audio processing, signal processing, and image processing,” Khitrov explains. “Second is AI and ML capabilities. We actively partnered with the AI community worldwide to tackle some of the more difficult tasks, and at certain points we had up to 100 people working on the most challenging tasks in this project. It was a monumental effort. Thirdly, we collected a database unique in its size and composition for training and testing these capabilities.”
IDLive Face requires no additional steps or input from the customer and can distinguish a live person in front of a camera from various spoof attempts such as photos, cutouts, masks, or videos. The entire process occurs at the back end, with no warning or visible process for spoof attackers to learn from. It is based on single-shot analysis and doesn’t require any additional software on the capture side. It also doesn’t require any special hardware and works across channels (web, mobile, standalone cameras).
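To make the integration model concrete, here is a minimal sketch of what a server-side, single-image liveness check can look like. This is an illustration only: the `check_liveness` function, the `run_model` stub, and the 0.5 threshold are assumptions for the sketch, not ID R&D’s actual SDK API.

```python
from dataclasses import dataclass


@dataclass
class LivenessResult:
    score: float   # 0.0 (likely spoof) .. 1.0 (likely live)
    is_live: bool


def run_model(image_bytes: bytes) -> float:
    # Placeholder for the vendor's classifier: a real model would analyze
    # texture, reflections, depth cues, and other artifacts in the frame.
    return 0.97 if image_bytes else 0.0


def check_liveness(image_bytes: bytes, threshold: float = 0.5) -> LivenessResult:
    """Score a single captured frame entirely on the back end.

    The caller (web, mobile, or a standalone camera) only uploads the
    frame; no capture-side software or user action is involved.
    """
    score = run_model(image_bytes)
    return LivenessResult(score=score, is_live=score >= threshold)
```

Keeping the decision server-side, as described above, means an attacker never sees which cues triggered a rejection and has nothing to probe on the device.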
Similarly, IDLive Voice analyzes short speech samples to detect spoofing during the course of normal interaction with the conversational channel. For example, in a product demo, IDLive Voice was able to determine a speaker’s live presence after a short utterance. In cases where an impersonator is trying to gain unauthorized access using a computer-generated or a recorded voice, Khitrov says, IDLive can detect the fraudulent attempt in under one second, and prevent access. IDLive Voice runs completely in the background, which Khitrov says makes the authentication process, including the detection of synthetic, recorded, and computer-generated speech, entirely imperceptible.
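The "decision during normal interaction" flow described above can be sketched as a streaming check that decides as soon as enough audio has accumulated. Everything here is illustrative: `spoof_score`, `MIN_SAMPLES`, and the 0.5 cutoff are hypothetical placeholders, not the product's real interface.

```python
MIN_SAMPLES = 16000  # roughly one second of audio at a 16 kHz sample rate


def spoof_score(samples: list) -> float:
    # Placeholder: a real detector would score artifacts characteristic
    # of replayed recordings or computer-generated speech.
    return 0.1 if samples else 1.0


def check_stream(chunks) -> bool:
    """Accumulate audio chunks and return True (live) as soon as a
    decision is possible, so the check stays invisible to the speaker."""
    buf = []
    for chunk in chunks:
        buf.extend(chunk)
        if len(buf) >= MIN_SAMPLES:
            return spoof_score(buf) < 0.5
    # Stream ended early: decide on whatever audio we have.
    return spoof_score(buf) < 0.5
```

Deciding inside the loop, rather than after the call ends, is what lets the check complete within the first second of speech.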
ID R&D sees liveness detection as the next essential component of biometrics. Khitrov notes that it is often the first thing that comes up in customer conversations, saying, “I regularly hear from industry voices that liveness is the new accuracy.”
Combining with traditional biometrics
Part of ID R&D’s vision is not just to enhance the user experience, but to also make it easy for developers to integrate its technology, according to Khitrov. IDLive enables the combination of traditional facial or voice matching with liveness through ID R&D’s SDKs. The company takes a flexible approach both to architecture and partnerships, enabling organizations to build its capabilities into native or new applications in a way that meets custom needs.
This flexibility of IDLive is one of its key benefits, according to Khitrov.
“It’s a completely frictionless experience, no action required, works across platforms, on the web, on mobile, stand-alone camera, no additional software required on the capture side, all standard cameras would work,” he says. “We can work in parallel with any facial recognition algorithms and provide this additional layer of security.”
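One way to read "work in parallel with any facial recognition algorithms" is an authentication step that runs matching and liveness concurrently and grants access only if both pass. The sketch below assumes hypothetical `match_face` and `liveness` scorers and thresholds; it shows the combination pattern, not any specific vendor's code.

```python
import concurrent.futures


def match_face(image: bytes) -> float:
    # Placeholder matcher: similarity (0..1) against the enrolled template.
    return 0.91 if image else 0.0


def liveness(image: bytes) -> float:
    # Placeholder passive liveness score (0..1, higher = more likely live).
    return 0.88 if image else 0.0


def authenticate(image: bytes, match_thr: float = 0.8, live_thr: float = 0.5) -> bool:
    """Run matching and the liveness layer in parallel on the same frame;
    access requires both the identity match and the liveness check."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        match_job = pool.submit(match_face, image)
        live_job = pool.submit(liveness, image)
        return match_job.result() >= match_thr and live_job.result() >= live_thr
```

Because both checks consume the same single capture, the extra security layer adds no user-visible step.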
Varied use cases
With a number of signed deals following successful beta trials, ID R&D identifies a wide range of use cases for IDLive. The landscape of spoof attack threats, increasing regulation, and consumer demand for convenience is motivating banks and fintechs, as well as health care providers, insurers, sharing economy companies and others to deploy biometrics with anti-spoofing.
ID R&D serves some end-user businesses directly for implementation in mobile onboarding apps, and has had particular traction with financial institutions, according to Khitrov.
The company has also found traction in the identity management space. “We partner with a number of identity management companies, with multimodal biometric authentication providers that consume our technologies as an SDK and integrate them into their platforms and products to allow this new generation of capabilities to be accessible to their customers.”
IDLive’s flexibility also enables system integrators to build passive liveness detection capabilities into a wide range of connected devices.
“Some of the use cases that we’ve seen are the introduction of the capabilities on devices like ATMs and physical access systems, as more and more face is being used as your identifier,” Khitrov explains.
Any kiosk, smart speaker, or other automated device that uses the voice channel or computer imaging can implement ID R&D’s biometric technologies, building new modalities or capabilities into any authentication process.
Offered as a standalone component, IDLive is also integrated into the company’s chatbot and virtual assistant authentication product, SafeChat. SafeChat provides five layers of authentication, Khitrov explains: face, voice, and behavioral biometrics, and face and voice liveness detection. SafeChat was recently chosen as a Top 3 voice-enabled ID technology for Best Banking/Financial Experience at the 2019 Voice Summit Awards.
As AI provides the tools for sophisticated liveness detection, it also enables deep fakes and other sophisticated spoofs. Because ID R&D’s technology is better at detecting synthetic voices and images than human senses, Khitrov says, its anti-spoofing technology could soon be required on practically any device people use to interact with businesses or each other. He sees the race between AI security and spoofs continuing indefinitely. “This race will continue. We are just in the beginning.”
With its recent breakthrough in liveness detection and its history of record speed and accuracy in biometrics, ID R&D is offering organizations and integrators a way to reduce friction and enhance the security of their authentication processes. Rather than having to constantly evaluate how to best balance the two, as is traditionally the case, the company’s focus on research and development of its core biometric technologies now enables its customers to improve both at the same time.