Research into making AI safe soars, but is it enough to build trust in algorithms?
An Israeli AI security consulting firm says computer vision is the most vulnerable aspect of artificial intelligence, a field that as a whole is coming under increasingly intense attack worldwide.
That insight is part of a self-published report by Adversa that finds a significant increase in research into AI defenses and countermeasures from 2010 to 2020.
The number of government, academic and industry research papers published in the last two years of that period reached 3,500 — more, according to the report, than all the articles published in the previous 20 years.
Assuming some of that surge addresses computer vision vulnerabilities, it could be timely for the sector.
Sixty-five percent of exploits involving AI, Adversa writes, target computer vision. It is no coincidence that vision, which includes facial recognition and other biometrics, is one of the most evolved of AI’s many offspring.
Within computer vision, the most hunted AI application is image classification, targeted in 43 percent of attacks, followed by facial recognition (seven percent) and object detection (three percent).
Outside vision, the next-most targeted application areas are analytics (18 percent) and language (13 percent), according to Adversa. Vision, analytics and language arguably are the AI market right now.
There is plenty of worry to spread around, though. The report finds that all 60 of the most common machine learning models are “prone to at least one vulnerability.”
(While there is no reason to doubt Adversa’s findings, necessarily, the report is part of the company’s marketing effort. Also, sizable portions of the report are based on “the expert opinions of our team.” More hard statistics would make for a stronger case.)
Hackers are primarily motivated by the desire to manipulate AI behavior. Some, predictably, want to know how algorithms work. Others want to steal data or infect models and datasets.
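The report does not describe attack mechanics, but the best-known way to manipulate a vision model's behavior is the adversarial perturbation, typified by the fast gradient sign method (FGSM). Below is a minimal, illustrative sketch on a toy four-pixel linear classifier; the weights, inputs and step size are invented for demonstration and stand in for a real image model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy "image": four pixel values scored by a fixed linear classifier.
w = [2.0, -1.0, 0.5, 1.5]   # model weights (illustrative)
b = 0.5                     # bias
x = [0.2, 0.9, 0.1, 0.3]    # clean input, classified as class 1

def predict(x):
    """Probability the model assigns to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# FGSM-style step: move each pixel a small amount eps in the direction
# that increases the loss for the true label. For a linear model and
# true class 1, the gradient sign reduces to -sign(w).
eps = 0.3
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

print(round(predict(x), 3))      # clean score: 0.622 (class 1)
print(round(predict(x_adv), 3))  # perturbed score: 0.269 (flips to class 0)
```

The point of the sketch is that the perturbation is tiny and structured, not random: each pixel moves by at most `eps`, yet the classification flips, which is why image classifiers dominate the attack statistics above.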
The industries most attractive to criminals right now are cybersecurity, the internet and biometrics, all fields in which computer vision plays a significant or dominant role.
Among datasets, static images are hammered the hardest, drawing 61 percent of attacks. Text datasets are a distant second at 10 percent, according to the report.
After that comes the long tail of single digits. Of the 11 listed datasets, video ranks last, with two percent of attacks.