Prepare for post-biometric security amid AI cyber-attacks: Traficom
“A familiar voice on the phone or a familiar face in a video chat will no longer be sufficient grounds to prove the identity of an individual and therefore should not be trusted anymore.”
If cyber attackers employ more AI capabilities and such attacks become more widespread, existing security provisions, including biometrics, may no longer be sufficient, argues a succinct and highly accessible report by the National Cyber Security Centre at Traficom, the Finnish Transport and Communications Agency. Fire will have to be fought with fire.
‘The Security Threat of AI-Enabled Cyberattacks’ (PDF) collates current knowledge on the deployment of artificial intelligence in attacks. It finds that at present, AI is used in a limited way to enhance certain tactics, and likely only by nation-state attackers. Soon it will inevitably allow far greater automation of attacks, information gathering and social engineering. It will become cheaper and easier to implement.
It may even be used alongside traditional attacks: “By triggering many of the rules that are handled by human operators, a conventional cyberattack could be run in parallel to a decoy attack and may go completely unnoticed.”
“Biometric authentication methods may become obsolete because of advanced impersonation techniques enabled by AI,” summarizes the report. AI and machine learning can be set to the task of studying an individual and then spoofing their biometrics, such as face and voice, but also any behavioral biometrics used in authentication, such as keystroke patterns, eye movements and device motion. It can even learn an individual's password habits to guess a particular password.
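To see why behavioral biometrics are vulnerable to this kind of attack, consider a toy keystroke-dynamics check. The sketch below is purely illustrative and not drawn from the report: it enrols a user from the statistics of their typing timings, then shows that an attacker who has captured enough keystrokes can fit the same distribution and replay synthetic timings that pass the check.

```python
import random
import statistics

# Toy keystroke-dynamics authenticator (illustrative only; not from the
# Traficom report). Real systems model dwell time (how long a key is held)
# and flight time (the gap between keys); this sketch enrols a user's
# mean/stdev of timings and accepts samples within a sigma band.

def enrol(timings):
    """Learn a user's typing profile from enrolment samples (seconds)."""
    return statistics.mean(timings), statistics.stdev(timings)

def authenticate(profile, sample, sigmas=2.0):
    """Accept a sample whose mean timing falls within the enrolled band."""
    mean, stdev = profile
    return abs(statistics.mean(sample) - mean) <= sigmas * stdev

random.seed(0)

# Victim's dwell times observed during enrolment.
victim = [random.gauss(0.12, 0.02) for _ in range(50)]
profile = enrol(victim)

# An attacker who has logged the victim's keystrokes can fit the same
# distribution and generate synthetic timings that the check accepts.
spoof = [random.gauss(0.12, 0.02) for _ in range(20)]
print(authenticate(profile, spoof))

# A naive attacker typing at their own cadence is still rejected.
print(authenticate(profile, [0.5] * 20))
```

Real keystroke-dynamics systems use richer features and classifiers than this single-statistic check, but the underlying weakness is the same: any behavioral signal that can be observed can, in principle, be modelled and reproduced.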
“Certain security processes will become deprecated when they are found to be insecure in the face of AI-enabled attacks,” states the report. “This will likely be the case for voice authentication methods over phone calls as well as many other biometric or behaviour-based authentication methods, which, while being convenient, can be easily spoofed by AI generation techniques.”
Deepfake detection capabilities are also developing, such as reverse modelling of speech, which can show that a sound could not have been produced by human physiology, even if the audio initially sounds exactly like the victim.
The current styles of defence will have to evolve to combat traditional attacks, but entirely new AI-based defences will also be needed. Human security staff will still be needed for ethical control, the report finds. Defenders may be bound by regulation such as the EU’s AI Act. Attackers will not.
“Although research is starting to address such attacks, there is no effective solution to counter them yet. There are also no solutions available to prevent side-channel credential theft attacks that can learn and reproduce human behaviours used in implicit key logging.”