Prepare for post-biometric security amid AI cyber-attacks: Traficom

“A familiar voice on the phone or a familiar face in a video chat will no longer be sufficient grounds to prove the identity of an individual and therefore should not be trusted anymore.”

If cyber attackers employ more AI capabilities and such attacks become more widespread, existing security provisions, including biometrics, may no longer be sufficient, argues a succinct and highly accessible report by the National Cyber Security Centre at Traficom, the Finnish Transport and Communications Agency. Fire will have to be fought with fire.

‘The Security Threat of AI-Enabled Cyberattacks’ (PDF) collates current knowledge on the deployment of artificial intelligence in attacks. It finds that at present, AI is used in a limited way to enhance certain tactics, and likely only by nation-state attackers. Soon it will inevitably allow far greater automation, information gathering and social engineering. It will become cheaper and easier to implement.

It may even be used alongside traditional attacks: “By triggering many of the rules that are handled by human operators, a conventional cyberattack could be run in parallel to a decoy attack and may go completely unnoticed.”

“Biometric authentication methods may become obsolete because of advanced impersonation techniques enabled by AI,” summarizes the report. AI and machine learning can be tasked with studying an individual and then spoofing his or her biometrics, such as face and voice, but also any behavioral biometrics used in authentication, such as keystroke patterns, eye movements and device motion. They can even learn an individual’s password habits to guess a particular password.
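To see why behavioral biometrics like keystroke dynamics are learnable, consider that an attacker only needs to capture the statistical shape of a victim's typing rhythm, not the keys themselves. The following is a minimal illustrative sketch, not any method described in the report: the timing values and the simple Gaussian model are assumptions chosen for clarity, and real keystroke-dynamics systems use richer features and models.

```python
import random
import statistics

# Hypothetical observed flight times (seconds between consecutive
# keystrokes) harvested from a victim, e.g. via a compromised web page.
observed_flight_times = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12]

# Model the victim's cadence with just two parameters.
mu = statistics.mean(observed_flight_times)
sigma = statistics.stdev(observed_flight_times)

def synthesize_rhythm(n_keys: int, mean: float = mu, stdev: float = sigma) -> list[float]:
    """Generate keystroke intervals that mimic the learned typing cadence.

    Samples from a normal distribution fitted to the observations,
    clamped at zero since intervals cannot be negative.
    """
    return [max(0.0, random.gauss(mean, stdev)) for _ in range(n_keys)]

# Replayed with these intervals, forged input resembles the victim's rhythm.
forged_intervals = synthesize_rhythm(10)
```

The point of the sketch is that a handful of observations suffices to parameterize a generator, which is why the report warns that behavior-based authentication, while convenient, "can be easily spoofed by AI generation techniques."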

“Certain security processes will become deprecated when they are found to be insecure in the face of AI-enabled attacks,” states the report. “This will likely be the case for voice authentication methods over phone calls as well as many other biometric or behaviour-based authentication methods, which, while being convenient, can be easily spoofed by AI generation techniques.”

Deepfake detection capabilities are also developing, such as reverse modelling of speech, which can show that a sound cannot have been produced by human vocal physiology, even if the audio initially sounds exactly like the victim.

The current styles of defence will have to evolve to combat traditional attacks, but entirely new AI-based defences will also be needed. Human security staff will still be needed for ethical control, the report finds. Defenders may be bound by regulation such as the EU’s AI Act. Attackers will not.

“Although research is starting to address such attacks, there is no effective solution to counter them yet. There are also no solutions available to prevent side-channel credential theft attacks that can learn and reproduce human behaviours used in implicit key logging.”
