Fingerprint, face, iris … lips? Researchers say the mouth is biometric gold
Many people, maybe millions, could recognize Mick Jagger simply by looking at his lips. The trick is using lips as a behavioral biometric to authenticate someone enrolled in a system.
Now, two researchers from Queen’s University Belfast in Northern Ireland say there is nothing fundamental preventing a mobile phone from doing exactly that.
In a new report, Carrie Wright and Darryl William Stewart say that phones might need only to compensate for poor lighting in order to identify someone speaking while aiming the camera at their own face. The pair work at the university’s Institute of Electronics, Communications & Information Technology.
Lip-based biometric authentication would be harder to spoof than physiological authentication because it captures dynamic behavior rather than a static (or relatively static) pattern like the dimensions of a face. For the same reason, liveness tests would be more conclusive with lip authentication.
The biometric software was able to authenticate new users on video after they uttered a multi-digit string — a so-called one-shot learning solution. The model used in the research had an equal error rate (the point at which false accepts and false rejects occur at the same rate) of 1.65 percent on the XM2VTS data set. A more traditional approach to lip-based biometrics had an equal error rate of about 16.5 percent during the team’s experiments.
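For readers unfamiliar with the metric, here is a minimal sketch of how an equal error rate can be computed from verification scores. The score distributions below are illustrative random data, not figures from the study.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Return the EER: the error rate at the threshold where the
    false accept rate (FAR) and false reject rate (FRR) are closest."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor_scores >= t)  # impostors wrongly accepted
        frr = np.mean(genuine_scores < t)    # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Illustrative score distributions only -- not data from the study.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 1000)   # same-person comparison scores
impostor = rng.normal(0.4, 0.15, 1000)  # different-person comparison scores
print(f"EER: {equal_error_rate(genuine, impostor):.2%}")
```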
Using differing video content had “little impact on performance, which is crucial to liveness checks,” according to the report. It was lighting that threw off the software most often, and the researchers recommended training the model on similarly challenging lighting conditions to overcome the problem.
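Content-independence is what makes challenge-response liveness possible: the system can prompt a fresh random phrase each session, so a replayed recording fails even if the lips match. The sketch below is hypothetical; camera, verify_identity and verify_speech are placeholders for components the report does not specify.

```python
import secrets

def challenge_response_check(camera, verify_identity, verify_speech):
    """Hypothetical challenge-response flow. The camera, verify_identity
    and verify_speech callbacks are placeholders, not components named
    in the report."""
    # A fresh random digit string defeats replayed recordings: even a
    # perfect video of the genuine user says the wrong thing.
    challenge = "".join(secrets.choice("0123456789") for _ in range(6))
    print(f"Please say: {challenge}")
    video = camera.record()
    # The lip signature must match the enrolled user regardless of
    # what is spoken (the content-independence reported above)...
    identity_ok = verify_identity(video)
    # ...and the utterance must match this session's challenge.
    content_ok = verify_speech(video, expected=challenge)
    return identity_ok and content_ok
```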
The researchers used the LipAuth model, training it twice: once on a closed-set protocol, in which all video samples are known in advance of the tests, and again on a new open-set protocol, in which new samples are enrolled throughout testing. The open-set protocol was defined for the “highly controlled video recordings” of the XM2VTS data set.
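As a toy illustration of the difference, the split below enrolls every identity for a closed-set run but holds some back for an open-set run. The holdout fraction and subject naming are hypothetical, not the protocol definitions from the paper.

```python
# Toy illustration of closed-set vs. open-set evaluation splits.
# The holdout fraction and naming are hypothetical, not the paper's
# actual protocol definitions.

def make_splits(identities, open_set=False, holdout_fraction=0.2):
    """Return (enrolled, test) identity lists.

    Closed-set: every identity seen at test time was enrolled beforehand.
    Open-set: some identities are held out and appear only during testing,
    so the system must enroll (or reject) people it has never seen.
    """
    if not open_set:
        return list(identities), list(identities)
    cutoff = int(len(identities) * (1 - holdout_fraction))
    return list(identities[:cutoff]), list(identities[cutoff:])

people = [f"subject_{i:03d}" for i in range(295)]  # XM2VTS has 295 subjects
enrolled_closed, test_closed = make_splits(people, open_set=False)
enrolled_open, test_open = make_splits(people, open_set=True)
print(len(enrolled_closed), len(test_closed))  # 295 295
print(len(enrolled_open), len(test_open))      # 236 59
```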
The XM2VTS data set was chosen for its large size (2,360 videos of 295 people) and popularity among researchers. It also has a closed-set protocol, which allows results to be compared with other algorithms.
The data set holds video of volunteers repeating a numerical sequence twice and a sentence to a steady Nexus 7 Android tablet during four sessions under consistent lighting, scheduled a month apart to capture appearance differences.
The team also used other data sets, including Favlips, which “was designed to mimic some of the hardest challenges that could be expected in a deployment scenario.”
Technologies using lips as an authentication factor were reported on last year, but those involved reading lips as a behavioral biometric.