May 4, 2017 -
This is a guest post by Jonas Lamis, VP of Marketing at Nok Nok Labs.
William Gibson is known for his quip, “The future is already here, it’s just not evenly distributed.” From my house in Palo Alto, CA, you can see that future bright as day. Every 15 minutes or so, a funny-looking electric two-seater dome-car will hum by. I mean literally hum. They must have embedded some sort of hum generator in the cars, because otherwise they would be quiet as a cockroach. It’s as if a bug – a VW Beetle, specifically – and a Star Wars pod racer hooked up and produced the future of human transportation.
These Googlecars may need significant further testing before we summon them to ferry us around town, but it’s in the cards. Current projections have the first large-scale self-driving projects rolling out to consumers in less than five years. Tesla, also a neighbor here in Palo Alto, already has 50,000 vehicles outfitted with Level 3 autonomy capabilities in use by human drivers around the world, just with the autonomy curtailed for now.
One of the main problems facing autonomous cars is what’s called the Trolley Problem.
In our version of the problem, we ponder how the car decides which path to take when both options involve bad outcomes. Faced with either a sure collision with a pedestrian or running off the road and over a cliff, which choice would the car make? Why did it make that choice? Is that choice congruent with the one the human passenger would want it to make? Or should it be congruent with the programmer’s ethical perspective? And importantly, how do we assure that the decisions made by the car are in fact the decisions that have been agreed to? How do we trust that the car has not been hacked and its driving behavior changed without the rider’s knowledge?
All sorts of connected devices will be “coming alive” in the next decade. And they will all face the risk of hacking and “artificial ethical corruption”: robo-pets and prosthetics, juicers and toilet sensors, vehicles of every kind. Appliances, HVAC, homes, buildings and cities. Ten years from now, many things we interact with will have artificial brains, and they’ll be doing their best to customize that interaction, personalize that result, or minimize the amount of cognitive load we humans will need to invest to get stuff done.
As artificial intelligence makes its way into human-oriented systems, the risks associated with hacking become more and more tangible. It’s one thing to have your password stolen and your credentials used to make purchases at Home Depot. It’s an entirely different threat model when a hacked autonomous car could kidnap your kid and hold them for Bitcoin ransom at an unknown location.
With more automation comes the need for seamless, ambient two-way authentication. That robo-uber needs to know that it’s really me it is picking up. Likewise, I need to know that it is really the (uncorrupted) service that I am expecting.
There is no room for traditional passwords at each automated interaction, and the risk of untethered machine-to-machine decision-making is too great. Biometrics and other protocols that offer proof of identity, coupled with encryption methods that keep those identities off centralized servers, offer the foundation upon which trust in our connected devices will be built.
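To make the two-way idea concrete, here is a minimal, illustrative sketch of mutual challenge-response authentication in Python. It is a toy model, not anything from a real ride-hailing system: the `Party` class and its methods are invented for this example, and it uses a shared HMAC key for brevity, whereas production schemes (such as FIDO, which keeps credentials on the device rather than a central server) use public-key signatures so no secret is ever shared at all. The core pattern is the same: each side issues a fresh random challenge, and the other side proves possession of a key without ever transmitting it.

```python
import hashlib
import hmac
import os

def sign(key: bytes, challenge: bytes) -> bytes:
    """Prove possession of a key without transmitting the key itself."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

class Party:
    """A rider's device or a vehicle; the key lives only on the device."""
    def __init__(self, key: bytes):
        self._key = key

    def challenge(self) -> bytes:
        return os.urandom(32)  # a fresh random nonce defeats replay attacks

    def respond(self, challenge: bytes) -> bytes:
        return sign(self._key, challenge)

    def verify(self, challenge: bytes, response: bytes) -> bool:
        # Constant-time comparison avoids leaking timing information.
        return hmac.compare_digest(sign(self._key, challenge), response)

# Mutual (two-way) authentication: each side challenges the other.
key = os.urandom(32)               # provisioned once, e.g. when pairing
rider, car = Party(key), Party(key)

c1 = rider.challenge()             # the rider verifies the car...
print("car is authentic:", rider.verify(c1, car.respond(c1)))

c2 = car.challenge()               # ...and the car verifies the rider
print("rider is authentic:", car.verify(c2, rider.respond(c2)))
```

An impostor without the key cannot produce a valid response to a fresh challenge, which is the property that would let a rider trust that the arriving car really is the uncorrupted service they summoned.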
At night when I turn out the lights, I can still hear the hum of those Googlecars as they cruise down my street. While I drift off to dream of a connected future, those cars seem to always be awake.
DISCLAIMER: BiometricUpdate.com blogs are submitted content. The views expressed in this blog are that of the author, and don’t necessarily reflect the views of BiometricUpdate.com.