IARPA expands research on protecting AI systems from tampering
The U.S. government’s Intelligence Advanced Research Projects Activity (IARPA) is planning a pair of programs to prevent adversaries from tampering with training data to turn artificial intelligence systems against their users, Federal News Network reports.
“We appreciate the fact that AI is going to be in a lot more things in our life, and we’re going to be relying on it a lot more, so we would want to be able to take advantage of, or at least mitigate, those vulnerabilities that we know exist,” IARPA Director Stacey Dixon told an audience at the Intelligence and National Security Alliance (INSA) conference in Arlington, Virginia.
One project, Trojans in Artificial Intelligence (TrojAI), seeks to create a warning system to detect when the training data of a machine-learning algorithm has been compromised by an adversary. The project was originally announced in December, and industry has since provided feedback on it. Details of the second project will be revealed in a draft announcement later this year, but Dixon said it will focus on protecting the identities of people whose images have been used to train facial biometric algorithms.
“How do you ensure that no one can take the algorithm that you created and go back and recreate the faces that were in the database?” Dixon asked. “These are certain areas that we hadn’t seen too much research, and so we will be starting programs.”
IARPA is also working toward forecasting cybersecurity attacks based on publicly available information and what Dixon called “non-traditional sensors.” She also noted that while the U.S. government was once the biggest funder of research in many fields, this is no longer the case, making it necessary for government agencies to partner with academic and private sector stakeholders.
In that vein, IARPA is backing a team of academic researchers working to protect biometric systems from previously unseen attack types.
The threat of “deepfakes” and other malicious uses of AI is also drawing increasing attention from concerned researchers.