The war between cybersecurity and cybercrime will be fought by artificial intelligence

This is a guest post by Aman Khanna, VP of Products at ThumbSignIn
When thinking about a war fought by robots, most would immediately picture futuristic scenes from the movie “The Terminator.” However, the robot war has already begun.
The technological capabilities on either side of cyber warfare are ratcheting up with major advances in artificial intelligence. AI bots can now deliver cyberattacks with an intensity and precision that would be nearly impossible for today's human cybersecurity experts to contain. Only AI-powered machines of matching prowess can defend our systems against these attacks.
With advanced AI toolkits proliferating in the market and the evolution of organized marketplaces where botnets can be "rented" as easily as one buys groceries on Amazon, it has become easier than ever for even rookie hackers to launch AI-based attacks at large scale. These capabilities are no longer restricted to state actors with tremendous resources at their disposal; they are abundantly available to the multitudes of malicious actors all over the world.
The war between cybersecurity and cybercrime is already underway; we are experiencing it firsthand, here and now. Outlined below are some of the most important battles that will ultimately determine who wins the war.
The battle against shape-shifting malware
Traditionally, malware scripting has been more of a cottage industry than a mass-production factory economy. Handcrafting scripts to evade detection while stealing data, and exploiting security loopholes to propagate stealthily from one system to another, is a laborious task that requires advanced skills and knowledge of the inner workings of these systems. However, machine learning algorithms can, and increasingly will, be deployed to rapidly create "mutating malware" that changes its own signature as it propagates. This shape-shifting behavior, which camouflages malicious code as "harmless" software, is the key to deploying malware capable of evasive maneuvers against cyber detection tools.
Not to be outdone, detection systems are also using machine learning algorithms to spot patterns across the behavior of mutant versions and identify this new type of malware. However, in this ever-intensifying fight, a sub-class of machine learning algorithms called generative adversarial networks (GANs) can now generate malware that stays stealthy against even the best machine-learning-based detection systems.
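To make the principle concrete, here is a minimal, purely illustrative sketch in Python. It contains no actual malware: the binary "features" are invented, and a greedy search stands in for the generator a real GAN would train. The point is simply that an attacker who can query a machine-learning detector can keep perturbing a sample until it is scored as benign.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 20 binary features (e.g. "imports API X", "writes registry key Y").
benign = rng.binomial(1, 0.2, size=(500, 20))
malware = rng.binomial(1, 0.7, size=(500, 20))
X = np.vstack([benign, malware])
y = np.array([0] * 500 + [1] * 500)

detector = LogisticRegression().fit(X, y)

# Greedy evasion: repeatedly flip whichever feature most lowers the
# detector's "malware" score, the same search a GAN generator learns
# to perform automatically and at scale.
sample = malware[0].astype(float)
for _ in range(20):
    if detector.predict_proba([sample])[0, 1] < 0.5:
        break  # the detector now scores the sample as benign
    scores = []
    for i in range(len(sample)):
        candidate = sample.copy()
        candidate[i] = 1 - candidate[i]
        scores.append(detector.predict_proba([candidate])[0, 1])
    best = int(np.argmin(scores))
    sample[best] = 1 - sample[best]

print("final malware probability:", detector.predict_proba([sample])[0, 1])
```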
Your move next, cybercops!
The battle against algorithms masquerading as humans
With AI algorithms working overtime to analyze vast amounts of data stolen from social networks, hackers are able to generate phishing attacks that are far more effective than the best human-powered hacks today.
AI could also be used in social engineering tactics to steal personal information. For example, a hacker could create fake social media profiles that pretend to be human and then reach out to real people in an attempt to steal their personal information. And as algorithms learn the persuasion styles that work best on each individual, the phishing emails and posts will become progressively more believable. The impact of these attacks will increase severalfold when the phishing attempts manipulate influential or powerful individuals.
Algorithms can also be used to create customized, targeted social media posts that spread false and misleading information. A combination of natural language processing and sentiment analysis can systematically refine posts so that they guide public opinion convincingly in the direction the attackers desire. The more the algorithms interact with real people, the better they get at tailoring content to draw shares and reactions across platforms.
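As a hedged illustration of that feedback loop, the toy sketch below (all variant names and engagement rates are invented) uses a simple epsilon-greedy strategy to learn which of three post variants draws the most reactions; a real campaign would substitute live share and reaction counts for the simulated ones.

```python
import numpy as np

rng = np.random.default_rng(1)
true_engagement = [0.02, 0.05, 0.11]   # hidden appeal of three post variants
counts = np.zeros(3)                   # times each variant was posted
successes = np.zeros(3)                # reactions each variant earned

for _ in range(5000):
    if rng.random() < 0.1:             # occasionally explore a random variant
        arm = int(rng.integers(3))
    else:                              # otherwise exploit the best one so far
        arm = int(np.argmax(successes / np.maximum(counts, 1)))
    reacted = rng.random() < true_engagement[arm]  # simulated user reaction
    counts[arm] += 1
    successes[arm] += reacted

print("posts served per variant:", counts)  # the loop favors the third variant
```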
Another distinct but related threat is posed by computer programs that try to mislead online systems (rather than other humans, as in phishing attacks) into believing they are human. Traditionally, such attacks have been successfully thwarted through captcha challenges. Most of the advanced captcha systems deployed today rely on activities that humans can perform easily but that computers struggle to complete. Until now…
Appropriately trained image recognition algorithms have already been shown to break image-based captchas with increasing success rates, and they are only getting better.
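To give a sense of the attacker's toolkit, here is a hedged sketch, assuming TensorFlow/Keras and using random placeholder data in place of a real harvested captcha dataset, of the kind of small convolutional network typically trained to recognize individual captcha characters:

```python
import numpy as np
from tensorflow import keras

# A small CNN that classifies one fixed-size character image into one of
# 36 classes (A-Z plus 0-9). The architecture is illustrative, not tuned.
model = keras.Sequential([
    keras.layers.Input(shape=(40, 40, 1)),         # one grayscale character
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(36, activation="softmax"),  # 26 letters + 10 digits
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Placeholder data; a real solver trains on thousands of labeled captchas
# that attackers harvest from live sites or generate synthetically.
X = np.random.rand(100, 40, 40, 1).astype("float32")
y = np.random.randint(0, 36, size=100)
model.fit(X, y, epochs=1, verbose=0)
```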
On the cybersecurity side, companies are working hard to find new ways to distinguish legitimate human activity from the automated activity of computer programs. Against the phishing and social engineering threat, more sophisticated, continuously improving NLP analysis approaches have shown some promise. On the captcha front, companies are trying to distinguish human solvers from algorithmic solvers by "silently" monitoring a range of user behavior without asking users to do anything explicitly. Such monitored behavior includes mouse movements, the intensity of taps on touchscreens, the time taken to click a button, and so on.
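A minimal sketch of such behavioral scoring, with invented features and synthetic distributions standing in for real telemetry, might train a classifier to separate the unnaturally regular input of a script from the noisier input of a human:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

def session(is_bot):
    """Features: mouse-speed variance, seconds to click, tap pressure."""
    if is_bot:  # scripted input: unnaturally regular, fast, and pressureless
        return [rng.normal(0.01, 0.005), rng.normal(0.05, 0.02), 0.0]
    return [rng.normal(0.5, 0.2), rng.normal(1.2, 0.5), rng.normal(0.6, 0.2)]

X = [session(i % 2 == 0) for i in range(1000)]
y = [i % 2 == 0 for i in range(1000)]

clf = RandomForestClassifier(n_estimators=50).fit(X, y)
print("bot?", clf.predict([[0.012, 0.04, 0.0]]))  # a scripted-looking session
```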
Sith: 2. Jedi: 1.
The battle against perpetually improvising bots
Botnets are groups of infected devices running malicious software (bots) that harness the processing power of the compromised machines to launch cyberattacks against a variety of internet services. Traditional botnets use a command-and-control architecture in which "zombie bots" communicate with a centralized "herder" for instructions.
With the incorporation of new AI capabilities, these "zombie bots" will become smarter and able to make decisions on their own, or as a self-organized community that pools information, based on what they observe in their native environment. Over the course of many interactions with target systems, they can discover previously unknown vulnerabilities and adapt their behavior based on what they have learned from past attempts.
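To make this concrete, here is a toy simulation, for illustration only, with invented attack-vector names and success probabilities: each bot probes the target with one of several vectors and updates a success table shared across the swarm, so the collective converges on whichever vector the target is weakest against.

```python
import random

VECTORS = ["ssh_bruteforce", "sqli_probe", "default_creds"]
weakness = {"ssh_bruteforce": 0.01, "sqli_probe": 0.02, "default_creds": 0.2}
shared = {v: [1, 2] for v in VECTORS}  # [successes, attempts], pooled by swarm

def bot_attempt():
    # Pick the vector with the best pooled success rate, with some exploration.
    if random.random() < 0.15:
        vector = random.choice(VECTORS)
    else:
        vector = max(VECTORS, key=lambda v: shared[v][0] / shared[v][1])
    shared[vector][1] += 1
    if random.random() < weakness[vector]:  # simulated target response
        shared[vector][0] += 1

for _ in range(10_000):  # each iteration stands in for one bot's probe
    bot_attempt()

print(shared)  # the swarm ends up favoring "default_creds"
```

Checkmate? Not yet!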
At the same time, on the cybersecurity side, intrusion detection and prevention systems are being enhanced with AI algorithms that continuously learn from past intrusion attempts and get smarter about detecting and responding to botnet attacks.
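As a rough sketch of that continuous learning, assuming scikit-learn and with random placeholder data standing in for real network-flow features, an incrementally trained classifier can be updated batch by batch as newly labeled intrusion attempts stream in:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
ids = SGDClassifier()  # a linear model that supports incremental updates

for batch in range(100):                   # one batch per window of traffic
    X = rng.normal(size=(64, 10))          # placeholder network-flow features
    y = rng.integers(0, 2, size=64)        # labels from analysts or sandboxes
    ids.partial_fit(X, y, classes=[0, 1])  # update without full retraining

print("alert?", ids.predict(rng.normal(size=(1, 10))))
```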
Crime stays one step ahead
The unfortunate reality of the cybersecurity industry is that defense mechanisms are often developed reactively, after the cyber attackers have already made a few initial moves. Moreover, in the realm of AI warfare, cybercriminals have gained an additional strategic advantage by exploiting the Achilles heel of ML-based cybersecurity products: they depend on patterns found in data collected over several attacks. Cybercriminals exploit this by first launching attacks that CAN be detected by the defense engines, but whose signatures differ sharply from that of the attack they actually want to succeed. This causes the defender's machine learning engine to learn the wrong rules for recognizing the telltale signs of the actual threat. It is the equivalent of the incapacitating guerrilla tactic of administering hemlock to the mastermind generals of the cyber-defense armies. Time to catch up again!
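A toy demonstration of this poisoning tactic (all feature values invented) shows how loud decoy attacks can steer a detector's learned boundary away from the real attack:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

benign = rng.normal(loc=0.0, size=(300, 5))
decoys = rng.normal(loc=5.0, size=(300, 5))      # loud, easy-to-catch attacks
real_attack = rng.normal(loc=-3.0, size=(1, 5))  # the one meant to succeed

X = np.vstack([benign, decoys])
y = np.array([0] * 300 + [1] * 300)
detector = LogisticRegression().fit(X, y)

# The detector learned "attack = large positive values" from the decoys,
# so the real attack, sitting on the opposite side, sails through as benign.
print("real attack flagged?", detector.predict(real_attack))  # prints [0]
```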
Rather unsurprisingly, cybersecurity has always been and will always be one step behind cybercrime. Whether cybersecurity will be able to take that step quickly enough will determine whether our cyber future will be utopian or dystopian.
About the author
Aman Khanna is VP of Products at ThumbSignIn, a strong authentication provider offering a suite of two-factor and biometric solutions.
DISCLAIMER: BiometricUpdate.com blogs are submitted content. The views expressed in this blog are those of the author and don't necessarily reflect the views of BiometricUpdate.com.