Behavioral biometrics and persona-based security intelligence as a key fraud prevention layer

A number of companies on the market today are exploring behavior-based security, which relies on an individual's unique behavior patterns, such as the way a person types, how hard they press buttons, and how they hold a device, for more secure and accurate authentication. They argue it is superior to the already popular biometric methods such as facial or fingerprint recognition, because behavior-based security performs continuous authentication by analyzing real-time interaction with a device.
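As an illustrative sketch only (not any vendor's actual system), continuous authentication from typing rhythm can be thought of as comparing live inter-key timings against a stored profile. All function names, sample values, and the threshold below are hypothetical:

```python
# Hypothetical keystroke-dynamics scoring: a user's typing rhythm is
# summarized as the mean/std of inter-key intervals, and a live sample
# is flagged when it deviates too far from that profile.
from statistics import mean, stdev

def build_profile(intervals):
    """Summarize enrollment inter-key intervals (seconds) as (mean, std)."""
    return mean(intervals), stdev(intervals)

def anomaly_score(profile, live_intervals):
    """Average absolute z-score of live intervals against the profile."""
    mu, sigma = profile
    return sum(abs(x - mu) / sigma for x in live_intervals) / len(live_intervals)

def is_suspicious(profile, live_intervals, threshold=3.0):
    return anomaly_score(profile, live_intervals) > threshold

# Enrollment: the legitimate user's typing cadence.
profile = build_profile([0.11, 0.13, 0.12, 0.10, 0.14, 0.12])
print(is_suspicious(profile, [0.12, 0.11, 0.13]))   # similar cadence
print(is_suspicious(profile, [0.45, 0.50, 0.40]))   # much slower typist
```

A production system would combine many more signals (pressure, swipe speed, device orientation) and run the check continuously rather than once per login.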

Deep Labs, a Silicon Valley-based startup founded in 2016, develops artificial intelligence security tools for banks and credit card companies. The company applies artificial intelligence and machine learning to behavioral biometrics to develop the concept of persona-based intelligence. According to its CEO and co-founder, Dr. Scott Edington, behavior-based security has reached a higher level of sophistication, because it would be nearly impossible for fraudsters to fake how a person types their password or holds their phone. If a user's phone is used in even a slightly different way, the action is immediately flagged.

With a career spanning over two decades of developing next-generation technology capabilities in the payments, defense and intelligence sectors, Dr. Edington spoke with Biometric Update about persona-based intelligence, data privacy, blockchain and the fraud landscape.

How persona-based intelligence can prevent account takeover

The persona-based approach is a key differentiator, because most of the industry does not look at ID security from multiple points of view, Dr. Edington told Biometric Update. A person does not behave the same way throughout the day; it depends on how that day went and what activities the person took part in.

“We exhibit different personas, as we exist through space and time,” he explains. “The 8AM you is different than at 8PM after working a hard day. So, your persona will shift based on how tired you are, or depending on how the weather pattern changed.”

Artificial intelligence helps develop different personas based on behavior patterns. The algorithm already knows which persona the user is likely to exhibit, so even if the correct password is typed on the associated device, the system can detect when a bad actor exhibits a persona that does not belong to the actual user.
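A minimal sketch of the idea, assuming hypothetical feature vectors and time windows (none of this reflects Deep Labs' actual models): behavior profiles are keyed by time of day, and a live sample is matched against the persona expected at that hour.

```python
# Hypothetical persona check: behavior profiles keyed by time of day,
# with a live feature vector matched against the expected persona.
import math

PERSONAS = {  # illustrative feature vectors: (typing speed, grip pressure, swipe speed)
    "morning": (0.9, 0.6, 0.7),
    "evening": (0.6, 0.5, 0.5),  # a slower, more tired user
}

def expected_persona(hour):
    return "morning" if 5 <= hour < 17 else "evening"

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def matches_persona(hour, live_features, tolerance=0.25):
    """True when live behavior is close to the persona expected at this hour."""
    return distance(PERSONAS[expected_persona(hour)], live_features) <= tolerance

print(matches_persona(8, (0.88, 0.62, 0.71)))   # morning behavior at 8AM
print(matches_persona(8, (0.55, 0.45, 0.50)))   # evening-like behavior at 8AM, flagged
```

In this toy version, even a correct password entered with the wrong persona for the time of day would fail the behavioral check.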

If we go back even 20 years in time and think about signals, they have always been there, Dr. Edington says, but the industry lacked the compute power to cut through all the noise. However, Moore's law (editor's note: Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years) has made it possible for compute power to roughly double every couple of years.

From a biometrics and layer defense angle, Dr. Edington says the industry has turned to detecting additional signals that can be added to the security layer to understand what it is dealing with and better understand the type of attack.

“A multi-dimensional approach [is needed] to create security profiles for individual actors, with an emphasis on actor, as opposed to person, because we know the actors can be a bot, a machine, an organization – it doesn’t have to be a nefarious human being,” he explains.

There are many contributing factors to identity theft and account takeover, but the major issue, according to Dr. Edington, is the high number of large-scale data breaches over the past five years. Knowledge-based authentication (KBA) was the prominent strategy twenty years ago, when static signals such as a Social Security Number, the name of your high school or your mother's name were used for authentication. That is no longer viable, because data breaches have exposed personal information online. Whether it can be found on Google or on the dark web, personal information is now accessible, making it relatively easy for a fraudster to impersonate a user.

This led to the rise of dynamic signals and device information, he explains. While that approach might have worked in 2014, it no longer applies today, because the recent large-scale breaches have also compromised device information, which can be found for sale on the dark web alongside credit card information. This is where persona-based intelligence comes in. By leveraging AI and machine learning combined with behavioral biometrics, “you analyze the actions surrounding the actor.”

Although the focus for Deep Labs is on the financial services and payments industries, persona-based intelligence can be rolled out in any industry where authentication is an issue or that is struggling with account takeover. Dr. Edington warns that with FinTech and financial companies “raising their walls, […] we’re now seeing fraud move towards other channels and industries.”

Data privacy implications and what to expect from AI in the future

In most regions, technologies usually meet authentication requirements, but, naturally, there are some regional implications for the technology from a data privacy angle, Dr. Edington points out, because each jurisdiction, whether it is Europe, Singapore, Australia or the U.S., has its own legal framework and regulatory requirements a company must follow. In today’s threat landscape, all companies must have a policy for biometric data use.

From a technology angle, the industry’s goal is to provide a seamless and frictionless user experience where the consumer is not bothered with countless authorizations in the fraud prevention process. While “there is clear understanding and demand for this next generation technology […] we should start seeing more regulation around.”

“There will always be a back and forth between the industry and threat actors,” Dr. Edington warns. Threat actors are growing in sophistication. Not only are criminals experienced, well-rounded individuals, but threat actors could also be nation-states that sponsor cybercriminal groups to carry out attacks. The fight between the security industry and the criminal world will not end, yet “it’s no longer about individual signals, it’s about how you aggregate the signals that are available.”
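The point about aggregating signals rather than relying on any single one can be sketched as a weighted risk score. The signal names, weights, and threshold below are entirely hypothetical:

```python
# Illustrative signal aggregation: no single signal decides; a weighted
# combination of independent risk signals yields one overall score.
SIGNAL_WEIGHTS = {
    "new_device": 0.3,
    "unusual_location": 0.25,
    "atypical_typing": 0.35,
    "odd_hour": 0.1,
}

def risk_score(signals):
    """Combine per-signal risk values (0..1) into one weighted score."""
    return sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())

# Each signal alone is weak evidence, but together they cross the threshold.
score = risk_score({"new_device": 0.6, "unusual_location": 0.5,
                    "atypical_typing": 0.7, "odd_hour": 0.4})
print(score > 0.5)  # aggregated evidence triggers a review
```

Real systems would learn such weights with machine learning rather than hard-coding them, but the principle is the same: the decision comes from the combination, not from any individual signal.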

Dr. Edington further explains that it is unlikely for threats to actually be eliminated, but what the industry can do is try to lower their impact through “basic hygiene,” a context-rich approach and AI. Two-factor authentication, regular password updates and keeping credentials private are important steps that not all companies follow. “You'd be surprised how many times obvious is not obvious,” he says.

From a defense perspective, Dr. Edington advocates for security layers. Whether it is a blockchain solution or stacked machine learning algorithms, the layered approach is superior to any human trying to identify differences. It would be impossible for a person to scan millions of data points, but machine learning can do so in less than a millisecond. AI can tell the difference between bad actors and legitimate users in a pool of 8 billion users, while a human cannot.

Over the next five years, Dr. Edington predicts, the industry will rely heavily on artificial intelligence and machine learning to deliver a seamless and frictionless customer experience. Most industries are looking to deliver a friction-free ecosystem, so in the coming years “pressing 1” to confirm transactions will be a thing of the past. AI will solve most authentication problems, and the world will likely be living “the dawn of the machines.”

Consumer perception of sophisticated advanced technologies vs friction

“For high-value dollar transactions, you'd be amazed how most people don't mind having some friction, in fact, they embrace friction to feel they are actively participating in ensuring nothing gets stolen,” Dr. Edington explains. “But if you're talking about everyday transactions or activities, consumers don't want to have a great deal of friction. So, if you're asking them, as an example, to present their iris or their retina it's often met with skepticism.”

There are also regional variables to consider, such as climate, time of year and cultural appropriateness, which could keep the technology from working accurately. While iris, retina and facial recognition are great technologies, it always depends on the use case: if special equipment is required, the entire process might not be feasible for the average user. So it really depends on why a company wants to introduce such sophisticated technologies, and how deep in the process, Dr. Edington concludes.
