Authentication system for wearables combines behavioral biometrics with voice recognition
Researchers from the College of William and Mary have published a first-of-its-kind study, available on IEEE Xplore, showing that combining voice and touch recognition may help prevent hackers from accessing personal data.
The researchers claim to be the first to combine behavioral biometrics from different dimensions, which includes touch gestures, with voice commands for use with wearable glasses.
The study explores the development of a new continuous authentication system for increased privacy protection that could make it nearly impossible to replicate or steal the private data used to gain access to the smart device.
The majority of leading wearable glasses use on-head detection for privacy, instantly locking the glasses when the user removes them.
However, the glasses can sometimes fail to lock when the user takes them off, allowing a would-be hacker to access the device if they recover it.
Even when on-head detection is functioning properly, the user must enter one-time authentication credentials.
This leaves the system susceptible to hacking, as impostors can easily bypass one-time authentication with smudge attacks (identifying the oil streaks a user's finger leaves behind when unlocking the device) or by spying on the user as they enter their credentials.
To resolve these issues, the researchers developed GlassGuard, a continuous and non-invasive user authentication system that can distinguish the owner from an imposter.
Combining touch biometrics and voice commands generates a stronger protection system to prevent potential hacking cases, compared to using only one of the security measures at a time.
“A continuous authentication system based only on touch biometrics can be easily circumvented by using voice commands,” said Ge Peng, researcher at the College of William and Mary. “Similarly, a system purely based on voice authentication does not always work, as voice commands are not available all the time. A user may be in a situation when speaking is not appropriate.”
The machine learning-based process uses six types of touch interactions (single-tap, swipe forward, swipe backward, swipe down, two-finger swipe forward, and two-finger swipe backward) in concert with voice recognition.
The continuous authentication system uses these seven classifiers, one per interaction type, to identify the wearer and grant access to the device.
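The per-interaction classifiers described above can be pictured as a small ensemble, one binary classifier per event type. The sketch below is a hypothetical illustration, not the paper's implementation: the feature representation, classifier type, and all numeric values are assumptions.

```python
# Illustrative sketch: one binary classifier per interaction type.
# All names, features, and parameters here are hypothetical.
from dataclasses import dataclass

# Six touch gesture types plus voice, as described in the study.
EVENT_TYPES = [
    "single_tap", "swipe_forward", "swipe_backward", "swipe_down",
    "two_finger_swipe_forward", "two_finger_swipe_backward", "voice_command",
]

@dataclass
class ThresholdClassifier:
    """Toy per-event classifier scoring how owner-like a feature value is."""
    mean: float       # mean feature value seen in the owner's training data
    tolerance: float  # allowed deviation before the event looks like an impostor

    def classify(self, feature: float) -> bool:
        """True (positive) if the event plausibly came from the owner."""
        return abs(feature - self.mean) <= self.tolerance

# One classifier per interaction type, trained on the owner's behavior.
classifiers = {t: ThresholdClassifier(mean=0.5, tolerance=0.2) for t in EVENT_TYPES}

def classify_event(event_type: str, feature: float) -> bool:
    """Route the event to the classifier for its interaction type."""
    return classifiers[event_type].classify(feature)
```

A real system would extract multi-dimensional features (pressure, duration, trajectory, voice characteristics) and train a proper model per type; the single scalar feature here only shows the routing structure.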
Once the classifiers have been implemented, an aggregator is employed to synthesize the biometric data to determine whether the current wearer is the actual owner.
The system classifies each touch or voice interaction as either positive (meaning the action most likely stemmed from the device owner), or negative (where the action most likely originated from an imposter).
The aggregator combines multiple classification results from a series of interactions to calculate a likelihood ratio: crossing an upper threshold indicates that the user is likely the owner, while falling below a lower threshold indicates an impostor.
If the ratio falls between the two thresholds, no confident decision can be made, and the system delays the decision until it can analyze more user events.
On average, it takes four to five user events for the system to come to a decision.
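This accumulate-until-a-threshold-is-crossed scheme resembles a sequential probability ratio test. Here is a minimal sketch of such an aggregator; the per-class probabilities and thresholds are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of likelihood-ratio aggregation over a stream of
# positive/negative classification results. All numbers are assumed.
import math

P_POS_GIVEN_OWNER = 0.9     # assumed P(positive result | owner)
P_POS_GIVEN_IMPOSTOR = 0.1  # assumed P(positive result | impostor)
UPPER = math.log(20.0)      # above this log-ratio: accept as owner
LOWER = math.log(1 / 20.0)  # below this log-ratio: reject as impostor

def decide(results):
    """Accumulate a log-likelihood ratio over classification results.

    results: iterable of booleans (True = event classified as positive).
    Returns ("owner" | "impostor" | "undecided", number_of_events_used).
    """
    log_ratio = 0.0
    for i, positive in enumerate(results, start=1):
        if positive:
            log_ratio += math.log(P_POS_GIVEN_OWNER / P_POS_GIVEN_IMPOSTOR)
        else:
            log_ratio += math.log((1 - P_POS_GIVEN_OWNER) / (1 - P_POS_GIVEN_IMPOSTOR))
        if log_ratio >= UPPER:
            return "owner", i
        if log_ratio <= LOWER:
            return "impostor", i
    # Ratio stayed between the thresholds: wait for more events.
    return "undecided", len(results)
```

With these illustrative numbers, two consistent results are enough to cross a threshold (e.g. `decide([True, True])` accepts the owner after two events), while mixed results keep the system undecided, mirroring the handful of events the paper reports before a decision.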
In an initial study of 32 users, the researchers found that GlassGuard achieved a 99 percent detection rate with a 0.5 percent false alarm rate after an average of 3.5 user events.
The researchers next plan to measure the power consumption of GlassGuard's continuous authentication system and to evaluate its performance during long-term, routine use of wearable glasses with a larger sample size.