Intel takes the lead on DARPA effort to end machine-learning spoofing
The U.S. Department of Defense has recruited Intel Corp. and Georgia Tech to lead an effort to prevent criminals from fooling the otherwise trustworthy artificial-intelligence object-recognition algorithms used in critical systems.
The pair is taking the reins of Guaranteeing AI Robustness against Deception, or GARD, a four-year program that DARPA created last February. The goal is to prevent the contamination of the mammoth databases used to train AI algorithms, which can steer those algorithms toward poor or even fatal decisions.
Researchers have shown that it is possible to surreptitiously plant misleading information, such as doctored images, in training databases. The tactic is known as data poisoning, and tools such as generative adversarial networks (GANs) can be used to produce the deceptive inputs.
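To make the idea concrete, here is a minimal sketch of a label-flipping poisoning attack on a toy classifier. Everything in it is illustrative: the synthetic data, the model, and the poisoning rate are assumptions for the example and have no connection to GARD's actual systems.

```python
# Minimal sketch of a data-poisoning (label-flipping) attack on a toy
# classifier. Purely illustrative; not related to any GARD test system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Clean, trusted training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", clean_model.score(X_test, y_test))

# An attacker slips mislabeled copies of real samples into the training
# set: the images (features) look legitimate, only the labels are wrong.
n_poison = 400
idx = np.random.default_rng(0).choice(len(X_train), n_poison, replace=False)
X_poisoned = np.vstack([X_train, X_train[idx]])
y_poisoned = np.concatenate([y_train, 1 - y_train[idx]])  # flipped labels

# Retraining on the contaminated database typically degrades accuracy,
# even though most of the data is still clean.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```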
In a stunning demonstration of its own, security firm McAfee LLC showed in February how the systems on a Tesla could be made to misclassify street signs: a piece of black tape on a 35 mph speed-limit sign convinced the car's semi-autonomous driving software that it read 85 mph.
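Unlike data poisoning, the tape demonstration attacked a deployed model's inputs rather than its training data. The sketch below shows the same principle on a toy linear model, assuming a fast-gradient-sign-style perturbation; it is not McAfee's actual method, which used a physical sticker against a production vision system.

```python
# Minimal sketch of an evasion attack: a small, targeted change to the
# INPUT flips a trained model's prediction. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
print("original prediction:", model.predict([x])[0])

# For logistic regression, the gradient of the loss with respect to the
# input is proportional to the weight vector, so stepping along sign(w)
# is the fast-gradient-sign direction. We step just far enough to cross
# the decision boundary; eps is the per-feature budget -- "the tape".
w = model.coef_[0]
margin = model.decision_function([x])[0]
eps = 1.1 * abs(margin) / np.abs(w).sum()
x_adv = x - np.sign(margin) * eps * np.sign(w)

print("adversarial prediction:", model.predict([x_adv])[0])
print("max per-feature change:", eps)
```

The point the demo and the sketch share is the asymmetry: the perturbation is tiny relative to the input, but decisive for the output.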
Intel announced April 9 that it was GARD's prime contractor and that its researchers would join experts from Georgia Tech to manage the program. In 2017, Intel bought Mobileye, maker of the computer-vision sensor used in the Tesla vehicle McAfee tested.
Technology publication Protocol has reported that 15 other organizations will work on the project as well, including IBM, SRI International, Johns Hopkins University, Massachusetts Institute of Technology and Carnegie Mellon University.
This post was updated at 5:47 p.m. on April 10 to note DARPA’s continuing leadership role in the program, and its launch date.