
Does voice recognition have a place in modern banking?

By Fabian Eberle, co-founder and COO at Keyless Technologies

Fraudsters are financially motivated, adaptable, and increasingly adept at using the latest technologies to execute account takeover fraud. As biometric authentication becomes ubiquitous in the financial services sector, it's no wonder that bad actors are finding opportunities to exploit these systems. But does this mean that technologies like voice recognition have no place in modern banking?

A recent article from VICE showed how easy it is to bypass telephone banking security steps: a journalist broke into their own account using a synthetic voice generated by AI. Sometimes referred to as voice cloning or voice spoofing, these attacks have sparked an outpouring of privacy concerns about our voices being harvested and used against us.

Since the launch of Lyrebird and WellSaid Labs, AI-generated synthetic voices have evolved to the point where they are indistinguishable from real voices and need only a minute of voice data to produce realistic results, as reported by MIT and Google respectively. As these technologies advance, spoofing a voice recognition system is entirely possible for anyone with access to the victim's voice data.

Not a cause for alarm

Voice recognition systems, like the one used in the VICE exposé, rely on the victim saying something aloud, either a unique passphrase (similar to a password), or a generic statement such as “my voice is my password.” Both are vulnerable to exploitation, with the latter being particularly weak in terms of security.

While this is alarming, it is neither unexpected nor a cause to boycott the technology altogether. Generally speaking, banks will not rely on a single form of authentication, so the debate around the effectiveness and security of voice recognition depends on the mitigating factors put in place to stop spoofing threats from escalating into full-blown fraud.

Fraud prevention processes typically require banks to exercise a higher degree of diligence when making critical changes to someone's account over the phone, such as updating contact details, resetting passwords, adding new beneficiaries, or ordering a replacement card.

Normally, the customer would be asked specific security questions about their transactional and account data to verify their identity; this kind of data is hard to obtain without direct access to the account. That said, a threat actor may be able to dig out more obscure data, such as transaction history, from their victims, especially if the person is known to them.

There are heightened concerns for public figures, as fraudsters can harvest their voice data from interviews and social media – with platforms like Instagram, TikTok and YouTube opening up the floodgates for these kinds of attacks. However, obtaining sensitive account data would be much more difficult, making this kind of attack relatively unscalable for threat actors, who today have access to more efficient means of account takeover fraud.

Scale vs. impact

When it comes to fraud prevention in financial services, mitigating threats based on their financial impact or their ability to be executed at scale is key to reducing the threat surface.

Voice recognition spoofing today poses less of a threat to the general public because it is difficult to execute at scale: threat actors would need substantial personal information about a customer to evade a bank's layered security defenses. That does not mean such attacks have low impact when they do succeed.

High-net-worth customers are at particular risk, as their transactional data may be managed by trusted associates like employees, making them more vulnerable to such attacks. They are also more likely to have given interviews or spoken online, making it possible for fraudsters to illegally harvest their voice data.

However, when it comes to protecting against voice spoofing threats, the answer is not to replace biometric technologies altogether, but to complement them with additional security measures.

Applying the appropriate amount of friction based on risk

To avoid spoofing threats, all biometric authentication solutions, whether voice, face or fingerprint, need robust fallback methods and should be used together with fraud detection engines. When high-risk activity is detected, it's important that banks re-authenticate their customers, regardless of the friction this adds.

Fraud detection systems typically provide banks with the following signals (a simplified scoring sketch follows the list):

  • Behavioral analysis: analyzing patterns in the user's behavior during the authentication process. For example, the system may look for unusual typing patterns or mouse movements that suggest the user is not who they claim to be.
  • Device identification: analyzing the device being used to authenticate, including its IP address, location, and other characteristics, to ensure it is a legitimate device.
  • Geolocation: analyzing the user's geolocation data to ensure they are in a location consistent with their usual patterns of behavior.
  • Time-based analysis: analyzing the time of day and day of the week of the authentication attempt, which can help identify unusual or suspicious behavior.
  • Fraudulent pattern analysis: machine learning algorithms can analyze large amounts of data and identify patterns associated with fraudulent behavior, learning from past instances of fraud to spot new ones.
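To illustrate how such signals might be combined into a single decision, here is a minimal, hypothetical sketch in Python. The signal names, weights and threshold are illustrative assumptions, not any particular bank's or vendor's scoring model.

```python
from dataclasses import dataclass

@dataclass
class AuthSignals:
    """Hypothetical signals a fraud detection engine might emit for one session."""
    behaviour_anomaly: float   # 0.0 (typical typing/mouse patterns) .. 1.0 (highly unusual)
    device_known: bool         # device fingerprint seen before for this customer
    geo_consistent: bool       # location matches the customer's usual patterns
    time_anomaly: float        # 0.0 (usual hours) .. 1.0 (very unusual time of access)
    ml_fraud_score: float      # 0.0 .. 1.0 from a model trained on past fraud patterns

def risk_score(s: AuthSignals) -> float:
    """Combine the individual signals into one score (weights are illustrative)."""
    score = 0.25 * s.behaviour_anomaly
    score += 0.20 * (0.0 if s.device_known else 1.0)
    score += 0.15 * (0.0 if s.geo_consistent else 1.0)
    score += 0.10 * s.time_anomaly
    score += 0.30 * s.ml_fraud_score
    return score

def needs_step_up(s: AuthSignals, threshold: float = 0.5) -> bool:
    """Decide whether this session should trigger step-up authentication."""
    return risk_score(s) >= threshold

# Example: unknown device, unusual location, moderately suspicious model score
signals = AuthSignals(behaviour_anomaly=0.4, device_known=False,
                      geo_consistent=False, time_anomaly=0.2,
                      ml_fraud_score=0.6)
print(needs_step_up(signals))  # True -> ask the customer to re-authenticate
```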

If and when high-risk activity is detected, it's important that banks use step-up authentication to ensure that it's the real person authenticating, not a fraudster using deepfake technology. Step-up authentication simply means asking the customer to re-authenticate, for example when adding a new beneficiary, ordering a new card, or making a transfer to an unknown or new account.
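As a rough sketch of the policy described above, the snippet below treats certain account actions as always requiring re-authentication, while other actions trigger it only when the fraud engine's risk score is high. The action names and the require_step_up helper are hypothetical, for illustration only.

```python
# Hypothetical set of actions that always trigger step-up authentication.
HIGH_RISK_ACTIONS = {
    "add_beneficiary",
    "order_replacement_card",
    "transfer_to_new_account",
    "change_contact_details",
    "reset_password",
}

def require_step_up(action: str, session_risk: float, threshold: float = 0.5) -> bool:
    """Return True if the customer must re-authenticate before the action proceeds.

    Illustrative policy: critical account changes always re-authenticate;
    other actions re-authenticate only when the fraud engine's risk score
    crosses the threshold.
    """
    return action in HIGH_RISK_ACTIONS or session_risk >= threshold

# Example: even a low-risk session re-authenticates before adding a beneficiary
print(require_step_up("add_beneficiary", session_risk=0.1))  # True
print(require_step_up("check_balance", session_risk=0.1))    # False
```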

When compared to passwords and PINs, which can be easily compromised, biometric authentication solutions provide a much higher level of security, but this does not mean they should be used in isolation. By combining biometric solutions, such as voice and facial recognition, with other authentication challenges and fraud detection systems, banks can help protect their customers from the financial impact of identity fraud and account takeover threats.

About the author

Fabian Eberle is co-founder and COO at Keyless Technologies. Keyless is a passwordless authentication company pioneering privacy-preserving biometric solutions for workforce and consumer authentication.

DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are that of the author, and don’t necessarily reflect the views of Biometric Update.
