Two behavioral biometric lip-reading techniques emerge
A pair of new technologies offer user authentication based on lip movement while speaking – or lip-reading – as a behavioral biometric modality.
Liopa LipSecure prompts the user to utter a random sequence of digits as they appear on the screen, and works alongside a vendor’s existing facial recognition technology to confirm the user’s liveness. The Irish startup has launched trials with established user authentication and identity verification providers to enhance the anti-spoofing capabilities of their facial recognition systems, according to a company announcement.
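The challenge-response flow described above can be sketched in a few lines. This is a hypothetical illustration, not Liopa’s API: the function names and the digit-matching step are assumptions, and in a real deployment the lip-read digits would come from the vendor’s visual speech recognition engine.

```python
import secrets

def make_challenge(n_digits: int = 6) -> str:
    """Generate an unpredictable digit sequence to display on screen.

    A cryptographically secure RNG matters here: if the digits were
    predictable, an attacker could pre-record video of the victim
    speaking the expected sequence and replay it.
    """
    return "".join(secrets.choice("0123456789") for _ in range(n_digits))

def liveness_check(challenge: str, lip_read_digits: str, face_match: bool) -> bool:
    """Pass only if the face matches the enrolled template AND the
    digits recovered from the user's lip movements match the challenge."""
    return face_match and lip_read_digits == challenge

# Example: a genuine user speaks the displayed digits correctly.
challenge = make_challenge()
print(liveness_check(challenge, challenge, face_match=True))   # genuine attempt
print(liveness_check(challenge, "000000", face_match=True))    # replayed/wrong video
```

Because the sequence changes on every attempt, a static photo or a previously recorded video of the user speaking cannot satisfy the check.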
“These trials are an exciting opportunity to further develop and validate the LipSecure liveness checking solution in areas where FR technology is currently deployed, such as authentication for online services, Identity Verification for customer on-boarding, device unlocking etc.,” says Liopa Founder and CEO Liam McQuillan. “On its own, facial recognition is a very convenient biometric authenticator but poor liveness detection, and the resulting negative press, is materially impacting take-up, particularly in mobile devices. LipSecure provides a simple-to-use, highly robust Liveness Checker to ensure Facial Recognition systems are robust to increasingly sophisticated spoofing techniques.”
The company says LipSecure is easily integrated with third-party facial recognition systems through cloud or on-premise software accessed from a range of SDKs.
Lip-reading and spoken passwords were proposed as a two-factor authentication system by a Hong Kong Baptist University professor last year.
A team of researchers led by Jiadi Yu of Shanghai Jiao Tong University has meanwhile developed LipPass, which senses users’ mouth movements acoustically, IEEE Spectrum reports.
According to a recently published study, LipPass identifies users with 90.2 percent accuracy and detects spoofs with 93.1 percent accuracy in initial testing. The unique Doppler profiles created by a user’s speaking behavior are sensed by the smartphone’s microphone and matched using a binary tree-based authentication approach. The system was tested by 48 volunteers on four popular Android smartphones in four different acoustic environments.
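The core of the acoustic sensing step can be illustrated with a minimal sketch. This is not the LipPass implementation; it assumes a common setup in acoustic sensing research in which the phone’s speaker emits an inaudible probe tone and the microphone records the echo, whose Doppler shift reveals how fast the lips are moving. The probe frequency, band width, and simulated velocity are illustrative choices.

```python
import numpy as np

C = 343.0      # speed of sound in air, m/s
F0 = 20000.0   # near-ultrasonic probe tone (Hz), inaudible to most adults
FS = 48000.0   # typical smartphone sample rate

def doppler_velocity(received, fs=FS, f0=F0):
    """Estimate reflector (lip) velocity from the Doppler shift of an echo.

    For a reflector moving toward the phone at speed v, the echo returns at
    f = f0 * (1 + 2v/c), so v = c * (f_peak - f0) / (2 * f0).
    """
    n = len(received)
    spectrum = np.abs(np.fft.rfft(received * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Only search a narrow band around the probe tone.
    band = (freqs > f0 - 500) & (freqs < f0 + 500)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return C * (f_peak - f0) / (2 * f0)

# Simulate a 0.5 s echo from lips moving toward the mic at ~0.1 m/s.
t = np.arange(0, 0.5, 1 / FS)
v_true = 0.1
echo = np.cos(2 * np.pi * F0 * (1 + 2 * v_true / C) * t)
print(doppler_velocity(echo))  # approximately 0.1 m/s
```

A time series of such velocity estimates, taken while the user speaks, forms the kind of movement profile that a classifier (in the study, a binary tree-based one) can match against an enrolled user.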
A similar system was developed at Florida State University in 2017.
In laboratory settings, LipPass’ authentication accuracy was 95.3 percent, while WeChat voiceprint recognition had 96.1 percent accuracy, and Alipay facial recognition was found to be 97.2 percent accurate. In noisy and dark environments, however, WeChat’s accuracy dropped to 21.3 percent and Alipay’s to 20.4 percent, while LipPass had relatively stable accuracy in different conditions.
“To resist an attack, existing solutions either employ specialized infrastructure, such as Apple FaceID, or require users to involve extra operations, such as eye blinking, which introduces additional cost and effort and further reduces user experience,” Yu says.
The researchers found that LipPass detected over 90 percent of audio replay, mimicry, and even reflected Doppler profile attacks. The latter succeeded nearly 20 percent of the time under controlled laboratory conditions, but they require the attacker to record the target user’s profile from as close as 50 centimeters away to capture sufficiently high-quality data.
Yu says the team is considering smartphone and smart home device applications for the technology.