Biometrics beyond the lock and key
This is a guest post by Saleel Awsare, SVP and GM of the IoT division at Synaptics.
What’s next for biometrics now that it has become ubiquitous?
Hundreds of millions of people, after all, use fingerprint or facial recognition on their smartphones or PCs every day. That’s an amazingly rapid adoption of a technology that has been available on mass consumer products for only a half-dozen years.
Still, biometrics has largely been used for a single function — as a virtual key to unlock a phone or a laptop. Now we are moving into a new phase of pervasive biometrics where these technologies become a flexible toolkit with which to build a broad spectrum of applications, each with a different balance of convenience, personalization, privacy and security.
At Synaptics, we see this explosion of powerful applications on the horizon because we work with the world’s leading consumer electronics makers to provide technology for human-machine interfaces. While we are widely known as the inventor of the computer touchpad, touchscreens for smartphones, and uniquely secure fingerprint sensors for both, we now offer breakthrough technology in audio, video and voice processing. With that, our new generation of voice-enabled SoCs with neural network accelerators can deploy built-in machine learning to identify people biometrically by voice, face, and eventually other factors. Running machine learning algorithms within the SoC, an approach known as edge computing, offers users advantages in performance, reliability, and privacy over current cloud-based approaches.
Automatic personalization without enrollment
Over the next year or two, much of the innovation will be around speech identification, building on the broad acceptance of smart speakers and other connected devices with voice assistants. Consider a family that buys a smart speaker for the kitchen. Mom, Dad and the kids each start asking it to play their favorite songs and podcasts when grabbing something to eat. In a short period of time, the speaker’s AI system learns to distinguish their voices and associates each of them with their listening preferences. Soon, each family member can just say “Hey speaker, good morning,” and their preferred playlist will start automatically.
What’s most important in this scenario is what didn’t happen. Nobody had to register a profile or stop to train the biometrics, the way you need to set up your fingerprint or face on a phone. The speaker need not know each person’s name or other identifying details. Just matching a unique vocal pattern with usage habits is enough to make for a distinctively better experience. Another bonus is that these biometrics are processed on the device, not in the cloud, which is a big step in improving consumer privacy.
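The passive grouping described above can be pictured as a simple embedding-clustering loop running on the device: each utterance is turned into a voice embedding, matched against anonymous profiles, and a new profile is created when nothing matches. Everything below is illustrative. The `PassiveProfiler` class, the two-dimensional embeddings, and the 0.8 similarity threshold are assumptions for the sketch, not any vendor's actual algorithm.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # hypothetical value; tuned per voice model in practice


class PassiveProfiler:
    """Groups voice embeddings into anonymous profiles: no names, no enrollment."""

    def __init__(self):
        self.centroids = []    # one running-average embedding per profile
        self.preferences = []  # usage history per profile

    def identify(self, embedding):
        """Return the index of the closest profile, creating one if none match."""
        e = embedding / np.linalg.norm(embedding)
        for i, c in enumerate(self.centroids):
            if float(np.dot(e, c / np.linalg.norm(c))) >= SIMILARITY_THRESHOLD:
                # Nudge the centroid so the profile tracks the voice over time.
                self.centroids[i] = 0.9 * c + 0.1 * e
                return i
        self.centroids.append(e)
        self.preferences.append([])
        return len(self.centroids) - 1

    def record(self, embedding, item):
        """Associate a usage event (a song, a podcast) with whoever is speaking."""
        self.preferences[self.identify(embedding)].append(item)
```

Note that the profiler never stores a name or account, only a voiceprint centroid and a listening history, which is what makes the personalization feel automatic.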
The same sort of passive personalization without explicit enrollment can enhance the experience of many other connected devices. A small camera on a connected TV could identify the faces of different viewers in order to suggest new shows based on their viewing habits. Indeed, it could also determine if two family members are sitting on the couch and search for shows that both of them would like.
Automatic voice identification is a great way to distinguish between family members, where the penalty for a recognition error is little more than playing hip-hop beats to a classic rock fan. But it’s not discriminating enough, say, to protect your car from a thief with a tape recording of your voice. We need to start thinking about deploying a range of biometrics along with other security measures depending on the situation. With today’s technology, a fingerprint sensor is a secure way to replace the ignition key. Yet the same car could use passive voice personalization to help pick the best music choices for you automatically. Combine fingerprint and voice as multifactor biometrics, and then you could pay meters, tolls and maybe the drive-thru restaurant for that ultimate chili-dog you are craving.
In the home, different levels of authentication may be appropriate for different functions. Passive face identification is fine for TV show recommendations, but to buy a movie, a user might need multifactor biometrics to authorize the purchase with both face and voice recognition.
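One way to picture that tiering is a small policy table mapping each action to the set of biometric factors it requires. The action names and factor sets below are invented for illustration; a real device would define its own tiers.

```python
# Hypothetical policy table: which verified biometric factors each action needs.
POLICY = {
    "recommend_show": set(),              # passive face ID alone is fine
    "play_playlist":  {"voice"},          # passive voice match
    "buy_movie":      {"face", "voice"},  # multifactor for purchases
}


def is_authorized(action, verified_factors):
    """True if the factors verified so far satisfy the action's policy."""
    required = POLICY.get(action)
    if required is None:
        return False  # fail closed: unknown actions are denied
    return required <= set(verified_factors)
```

The design choice worth noting is failing closed: an action missing from the table is denied rather than allowed, so forgetting to add a policy entry can never silently weaken security.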
Voice recognition offers an easy improvement on the text-message-based two-factor authentication used these days by banks and social networks. To shop on a smart speaker, for example, a user could simply read out loud a verification code sent by text. The speaker can validate the code and, for added security, verify that the voice is one it recognizes.
In particularly important cases, multifactor biometrics can be required. A bank might require you to validate both your fingerprint and face, and perhaps more if one of those factors was captured passively, in order to transfer more than $10,000 using your smart device.
Protecting against biometric hacking
As biometric security becomes more widely deployed, crooks, spies and mischievous teenagers will increasingly look for ways to break it. One researcher recently unlocked a smartphone using a finger model made on a 3-D printer. And a security consultant discovered that a British security company serving banks and police forces left more than 1 million fingerprint images in an unsecured database on the Internet.
One protection against that sort of breach is to keep biometric information off of the cloud. Some devices connect their fingerprint sensors to an OS that keeps fingerprint templates walled off from intruders. Synaptics has gone further with Match-in-Sensor technology that integrates the processor and memory for fingerprint template storage in the sensor itself, not the more vulnerable operating system.
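The architectural idea, keeping templates inside the sensor and exposing only a match decision to the host, can be caricatured in a few lines. This is a conceptual sketch, not Synaptics’ actual Match-in-Sensor implementation; the equality comparison stands in for a real minutiae-matching algorithm.

```python
class SecureSensor:
    """Sketch of a match-in-sensor design.

    Templates live only inside this object; the host OS receives a yes/no
    decision and never sees the biometric data itself.
    """

    def __init__(self):
        self.__templates = []  # private to the sensor (name-mangled attribute)

    def enroll(self, template):
        """Store a template inside the sensor's own memory."""
        self.__templates.append(template)

    def match(self, candidate):
        """Matching runs on the sensor; only a boolean crosses the boundary."""
        return any(self._similar(candidate, t) for t in self.__templates)

    @staticmethod
    def _similar(a, b):
        # Stand-in comparison; real hardware runs a fingerprint-matching algorithm.
        return a == b
```

Because templates never leave the sensor, a compromise of the operating system, or of a cloud database like the one in the breach above, yields no fingerprint images to steal.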
Over the coming years, we will start seeing this new phase of consumer biometrics where the simple lock and key application is replaced by a portfolio of different configurations of biometric identification that can make many experiences easier, more personal and more secure.
We can also see another more fluid phase of biometrics emerging over the next decade or so as sensor and artificial intelligence technologies continue to advance. Networks of connected devices will have access to a broader range of visual, audio and other methods to build more precise profiles of who is in a home, office or car and provide proactive helpful services. They will be able to make reliable and secure identifications of individuals in the background by combining the measures used today with other factors such as facial expression and habitual hand gestures.
Eventually, the consumer device will continuously know that it is “you” (and respond to “you”), but then be able to seamlessly switch to another user if that user comes into view (and you leave the field of view). And by using this data to infer the context of your actions, devices will be able to better understand your requests and even anticipate what you want before you ask for it. Are you ready for: “You look tired this morning, shall I brew a double espresso?” I know I am.
About the author
Saleel Awsare is SVP and GM of the IoT division at Synaptics. Awsare has 25 years’ experience in the development, promotion and delivery of semiconductor products to global markets.
DISCLAIMER: BiometricUpdate.com blogs are submitted content. The views expressed in this blog are that of the author, and don’t necessarily reflect the views of BiometricUpdate.com.
artificial intelligence | biometric enrollment | biometric identification | biometrics | biometrics at the edge | edge computing | IoT | multi-factor authentication | personalization | privacy | security | spoof detection | Synaptics