Microsoft says DOD needs to get better at detecting synthetic identities, and sooner
AI and cybersecurity are converging, according to Microsoft’s chief scientist, and one result will be the long-term inability of the U.S. Defense Department to reliably detect deepfakes using algorithmic tools.
Eric Horvitz testified this week before the cybersecurity subcommittee of the Senate Armed Services Committee. Horvitz says AI is getting better at detecting manipulated and synthetic identities, including deepfakes, but it is a losing effort.
Instead, Horvitz argues, software developers have to turn the problem upside down: rather than trying to detect fakes after the fact, prove the provenance of genuine content.
Offensive AI is improving the effectiveness of cyberattacks, and defensive algorithms are, in turn, becoming more vulnerable to attack, Horvitz says.
It is starting to spook a lot of people. Europol’s concerns, for example, are mounting.
The first experimental and commercial tools designed to spot synthetic identities are arriving, including Microsoft’s anti-cyberattack products. (Many of the world’s militaries are working on their own defenses.)
New research shows promise in spotting faked expressions in videos as a way of flagging deepfakes, and new commercial software claims to detect synthetic-identity fraud.
Researchers at University of California, Riverside, say their Expression Manipulation Detection framework can detect and then spotlight the emoting areas of a face that have been changed. Their paper is here.
Last month, Unite.AI reported a less unwieldy way to detect deepfakes using biometrics.
Meanwhile, a company called Early Warning Services says its newest AI-based software, Verify Identity, enables a business to determine in real time whether a presented identity is valid or synthetic.
All that might be good for now, but Horvitz’s message is that none of it will win out.
He says the world needs to speed development of technology that guarantees digital-content provenance — a way to put a figurative reality watermark on recorded events, including the actions and words of individual people.
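The core idea behind such a provenance "watermark" can be sketched with basic cryptography: a capture device attests to content at creation, and anyone can later check whether the bytes still match that attestation. The sketch below is illustrative only, not Horvitz's or Microsoft's design; real provenance standards (such as the C2PA specification Microsoft helps develop) use public-key signatures and structured metadata manifests, while this minimal stdlib example uses an HMAC, and the key name is hypothetical.

```python
# Illustrative sketch of content provenance via cryptographic attestation.
# Assumption: a hardware-protected signing key exists at the point of capture.
import hashlib
import hmac

SIGNING_KEY = b"hypothetical-device-key"  # in practice, kept in secure hardware

def attest(media_bytes: bytes) -> str:
    """Bind the content to its origin at capture time with a provenance tag."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, tag: str) -> bool:
    """Check that content is byte-for-byte what the origin attested to."""
    return hmac.compare_digest(attest(media_bytes), tag)

original = b"frame data from a real recording"
tag = attest(original)
print(verify(original, tag))                     # untampered content passes
print(verify(b"manipulated frame data", tag))    # any alteration fails
```

The point of the design is that detection becomes unnecessary for attested content: a verifier does not need to judge whether pixels look fake, only whether the bytes still carry a valid attestation from a trusted origin.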
Few of Horvitz’s recommendations to the defense establishment are surprising: invest in its own research and development, follow security-hygiene best practices, train employees, create networks for sharing information and experiences, and prepare for the worst.
Legislation focused on provenance efforts in the civilian world — the Deepfake Task Force Act — was introduced in the Senate last summer. It would seek mechanisms for determining who created and subsequently manipulated deepfake content.
This post was updated at 10:07am Eastern on May 6, 2022 to clarify that Horvitz does not think AI techniques will be reliable in the fight against deepfakes, and that digital-content provenance tools will prove better. Also, Horvitz says he was not suggesting that the government should have a role in defending civilian systems against deepfakes. Rather, the government should be able to assure people inside and outside government that its claims about what is genuine information are trustworthy.