The time to battle malevolent AI is now, according to a new EU report
A public-private group in the European Union is telling the union’s 27 nations that the time to combat AI-perpetrated crime, including biometric spoofs, is now, and their approach must be to fight fire with fire.
Three organizations teamed up to give governments and others battle tactics and to outline real and anticipated problems with AI. In a new report, they emphasize that current AI attacks go well beyond deepfakes, the media's anti-darling.
The trio are security vendor Trend Micro, the United Nations Interregional Crime and Justice Research Institute and Europol, an EU agency tasked with protecting Europe from cybercrime.
The report’s authors pointed out that intelligent systems today are able to mimic human behavior convincingly enough to get around detection systems.
An AI-supported bot available now can inflate Spotify song play counts, right down to strategically creating playlists that appear to be products of human behavior and therefore act as cover when the bot artificially generates traffic. The traffic is converted into money, which cybercriminals pocket.
Then there are voice cloning tools that can be used to get past voice biometric authentication systems. The cloning tool Lyrebird, according to the report, is being joined by “less legitimate” voice tools that could defeat an entire category of biometric protection.
In the offing are biometrics-aimed algorithms likely to copy the typing patterns of human targets to skirt defenses designed to look for unfamiliar patterns. Typing biometric software locks users out when their keystroke rhythms look suspicious.
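The defense being targeted here is keystroke dynamics: a system learns the timing of a user's keystrokes and flags attempts that deviate from the enrolled profile. A minimal sketch of that idea, with illustrative function names, timings and thresholds not drawn from the report:

```python
# Hypothetical sketch of a keystroke-dynamics check: compare the timing of a
# login attempt against a profile built at enrollment, and flag attempts whose
# inter-keystroke intervals deviate too far from the user's norm.
# All values and thresholds here are illustrative.
from statistics import mean, stdev

def enroll(samples):
    """Build a timing profile (mean, stdev per interval) from several
    recordings of the user typing the same passphrase (intervals in ms)."""
    columns = list(zip(*samples))
    return [(mean(c), stdev(c)) for c in columns]

def is_suspicious(profile, attempt, z_threshold=3.0):
    """Flag an attempt if any interval is more than z_threshold standard
    deviations away from the enrolled mean."""
    for (mu, sigma), interval in zip(profile, attempt):
        if sigma > 0 and abs(interval - mu) / sigma > z_threshold:
            return True
    return False

# Enrollment: four typing samples of the same passphrase.
profile = enroll([
    [120, 95, 110, 130],
    [125, 90, 105, 128],
    [118, 98, 112, 133],
    [122, 93, 108, 127],
])

print(is_suspicious(profile, [121, 94, 109, 129]))  # human-like timing: False
print(is_suspicious(profile, [300, 40, 250, 60]))   # erratic timing: True
```

An attacker's algorithm of the kind the report anticipates would aim to reproduce the enrolled user's intervals closely enough that no interval trips the threshold.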
On the more exotic end of the anticipated development spectrum are bird-sized drones under development that carry small explosive charges. Kitted out with facial recognition algorithms, the drones could be used to assassinate individuals.
With this in mind, the report’s authors have come up with five recommendations, some more easily grasped than others.
Top of the list is for public and private organizations to begin using AI “as a crime-fighting tool to future-proof the cybersecurity industry and policing.”
Criminals today are deploying algorithms to clone voices, break Captcha defenses and guess passwords, according to the report. Malicious AI is poised to spot holes in detection rules, mount large-scale, realistic social engineering attacks and take on other tasks.
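The password-guessing baseline that machine-learning guessers are built to outperform is simple wordlist mutation: take likely base words and try the variations humans actually use. A minimal sketch, with an illustrative wordlist and mutation rules of my own choosing:

```python
# Minimal sketch of rule-based password guessing, the classic baseline that
# ML-driven guessers (models trained on leaked password corpora) improve on.
# Wordlist, rules and target are illustrative only.
import hashlib

def mutate(word):
    """Yield common human variations of a base word."""
    yield word
    yield word.capitalize()
    yield word + "1"
    yield word + "!"
    yield word.replace("a", "@").replace("o", "0")

def guess(wordlist, target_hash, hash_fn):
    """Try every mutation of every wordlist entry against a stored hash."""
    for word in wordlist:
        for candidate in mutate(word):
            if hash_fn(candidate) == target_hash:
                return candidate
    return None

sha = lambda s: hashlib.sha256(s.encode()).hexdigest()

found = guess(["dragon", "summer"], sha("summer1"), sha)
print(found)  # → summer1
```

A learned model replaces the hand-written `mutate` rules with variations inferred from real breached passwords, which is what makes such guessing far more effective at scale.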
The report calls for more research “to stimulate the development of defensive technology,” and for governments and businesses to build secure AI design frameworks.
The trio say their collaboration should not be the last to address the threat of malignant AI. More such partnerships and multidisciplinary expert groups will be necessary just to stay a step ahead of criminals.
The report's one soft recommendation is for Europeans to “de-escalate politically loaded rhetoric on the use of AI for cybersecurity.” In an age of competing voices, nothing convinces better than action: in this case, instances of AI defending values and assets.