IARPA looks to industry for help identifying tampering with AI systems

The Intelligence Advanced Research Projects Activity (IARPA) is proposing a program called TrojAI to build tools for predicting whether artificial intelligence systems have been corrupted by Trojan attacks, and has asked industry stakeholders for input, Nextgov reports.
Trojan AI attacks involve exploiting the AI training process, such as by manipulating the datasets used to train AI systems, many of which are crowdsourced. AI systems such as those used for facial and speech recognition can be trained with tainted data to misidentify an object or individual based on certain “triggers,” and the open-source software many AI tools run on makes it easy for Trojans to be missed until it is too late, Nextgov reports.
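The mechanism described can be illustrated with a toy sketch. The Python example below is not drawn from the IARPA solicitation; the `add_trigger` and `poison_dataset` helpers, array shapes, and poison fraction are all hypothetical, and it only shows how a small trigger patch plus relabeled targets can taint a training set so a model learns to associate the patch with an attacker-chosen class.

```python
import numpy as np

def add_trigger(image, patch_value=1.0, size=3):
    """Stamp a small square 'trigger' patch in the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-size:, -size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_class, poison_fraction=0.05, seed=0):
    """Return a copy of the dataset in which a small fraction of images
    carry the trigger patch and are relabeled to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels

# Toy data: 1,000 grayscale 28x28 "images" across 10 classes.
images = np.random.rand(1000, 28, 28).astype(np.float32)
labels = np.random.randint(0, 10, size=1000)

poisoned_images, poisoned_labels = poison_dataset(images, labels, target_class=7)
# A model trained on the poisoned set behaves normally on clean inputs but
# predicts class 7 whenever the trigger patch appears at inference time.
```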
IARPA officials write in a solicitation that it is impractical to clean and monitor crowdsourced datasets, and that the security of the data and training pipeline “may be weak or nonexistent.” TrojAI program participants will build systems to predict whether AI tools for image classification contain Trojans, and those systems must be capable of scanning roughly 1,000 systems per day with no human interaction. The program will run in multiple stages, with accuracy standards ramping up gradually over 24 months.
The deadline for program proposal comments is January 4.
IARPA also partnered with NIST earlier this year to launch a challenge to improve facial recognition by fusing the output of multiple algorithms.
AI researchers have been working on ways to improve algorithmic transparency, but until that happens, there will be few ways to determine if an automated system’s decisions are sound.
“Deep Fake” technology is another risk associated with AI, and is an increasing concern, particularly in the context of fake news and disinformation campaigns.