Microsoft says DOD needs to be better at detecting synthetic identities. And do it sooner


AI and cybersecurity are converging, according to Microsoft’s chief scientist, and one result will be the long-term inability of the U.S. Defense Department to reliably detect deepfakes using algorithmic tools.

Eric Horvitz testified this week before the cybersecurity subcommittee of the Senate Armed Services Committee. AI is getting better at detecting manipulated and synthetic identities, including deepfakes, Horvitz said, but detection is a losing effort.

Instead, he argues, software developers have to invert the approach: rather than trying to spot what is fake, prove what is genuine.

Offensive AI is making cyberattacks more effective, and defensive algorithms are, in turn, becoming more vulnerable to attack, Horvitz says.

It is starting to spook a lot of people. Europol’s concerns, for example, are mounting.

The first experimental and commercial tools designed to spot synthetic identities are arriving, including Microsoft’s anti-cyberattack products. (Many of the world’s militaries are working on their own defenses.)

New research shows promise in spotting faked facial expressions in videos as a way of flagging deepfakes, and new commercial software claims to detect synthetic-ID fraud.

Researchers at the University of California, Riverside, say their Expression Manipulation Detection framework can detect and then spotlight the emoting areas of a face that have been changed. Their paper is here.

Last month, Unite.AI reported a less unwieldy way to detect deepfakes using biometrics.

Meanwhile, a company called Early Warning Services says its newest AI-based software, Verify Identity, enables a business to determine in real time whether a presented identity is valid or synthetic.

All that might be good for now, but Horvitz’s message is that none of it will win out.

He says the world needs to speed development of technology that guarantees digital-content provenance — a way to put a figurative reality watermark on recorded events, including the actions and words of individual people.
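The provenance idea Horvitz describes can be illustrated with a minimal sketch: the capture device signs a hash of the recorded bytes, and anyone downstream can check that the content is unchanged since capture. The key and function names below are hypothetical illustrations, not part of any specific standard or Microsoft product; real provenance schemes (such as C2PA-style content credentials) use asymmetric signatures and richer metadata.

```python
import hashlib
import hmac

# Hypothetical shared secret; a real scheme would use a per-device
# private key and public-key signatures rather than HMAC.
DEVICE_KEY = b"camera-secret-key"

def sign_content(content: bytes, key: bytes = DEVICE_KEY) -> str:
    """Produce a provenance tag over a hash of the captured media."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, key: bytes = DEVICE_KEY) -> bool:
    """Check that content still matches its provenance tag."""
    return hmac.compare_digest(sign_content(content, key), tag)

original = b"frame data from a real recording"
tag = sign_content(original)
print(verify_content(original, tag))         # True: untampered
print(verify_content(original + b"x", tag))  # False: content was altered
```

The point of the design is that it shifts the burden from detecting fakery (an arms race) to verifying authenticity (a cryptographic check).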

Few of Horvitz’s recommendations to the defense establishment are surprising: invest in its own research and development, follow security-hygiene best practices, train employees, create its own networks to share information and experiences, and prepare for the worst.

Legislation focused on provenance efforts in the civilian world — the Deepfake Task Force Act — was introduced in the Senate last summer. It would seek mechanisms for determining who created and subsequently manipulated deepfake content.

 

This post was updated at 10:07am Eastern on May 6, 2022 to clarify that Horvitz does not think AI techniques will be reliable in the fight against deepfakes, and that digital-content provenance tools will prove better. Also, Horvitz says he was not suggesting that the government should have a role in defending civilian systems against deepfakes; rather, it should be able to assure people in and out of government that its claims about what is genuine information are trustworthy.


Comments

One Reply to “Microsoft says DOD needs to be better at detecting synthetic identities. And do it sooner”

  1. “Prepare for the worse”?

    In reality though, it seems like an approach akin to that used for detecting Photoshop wouldn’t run awry – compression artifact ratios differ significantly between organically captured frames and generated frames. If you perform error level analysis on frames of a deepfake, the generated portions show drastically differing levels.

    Accommodating for that in the generation stage would require a LOT of work, so I think this technique could at least be used as a stopgap under traditional development.
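The error-level-analysis (ELA) idea in the comment above can be sketched as a toy model. Real ELA re-saves an image at a known JPEG quality and diffs it against the input; here, as a stated simplification, a simple quantizer stands in for lossy recompression, and single lists of pixel values stand in for image regions.

```python
def quantize(pixels, step=16):
    """Crude stand-in for lossy recompression (rounds to the nearest step)."""
    return [round(p / step) * step for p in pixels]

def error_level(pixels, step=16):
    """Mean absolute difference between pixels and their recompressed form."""
    recompressed = quantize(pixels, step)
    return sum(abs(a - b) for a, b in zip(pixels, recompressed)) / len(pixels)

# An "organic" region has already been through compression once, so
# recompressing it changes nothing; a freshly generated region has not.
organic_region = quantize([12, 200, 45, 90, 133, 250, 77, 160])  # pre-compressed
generated_region = [13, 201, 47, 91, 135, 249, 76, 161]          # raw synthetic

print(error_level(organic_region))    # 0.0: stable under recompression
print(error_level(generated_region))  # 4.375: noticeably higher
```

Regions whose error level stands out from their neighbors are the candidates for having been generated or spliced in, which is the flagging signal the commenter describes.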

