NATO gets soothing message about deepfakes, but…

Fear about deepfakes may have peaked. The NATO Strategic Communications Centre of Excellence (StratCom COE) held an online discussion about the threat posed by highly realistic but fabricated, AI-generated videos of actual people.
The talk, which can be watched here, plays down the chances that a world leader will be brought down by a faked video of them taking a bribe, eating with their fingers or cooperating with political rivals. The biometrics industry could eventually be impacted, however.
A pair of experts enlisted for the videoconference cited a number of reasons to relax about deepfakes. But the rationales boil down to three points: deepfake attacks have yet to happen, detection technology will keep pace with or even outperform deception technology, and people will not be fooled.
Jānis Sārts, director of the NATO StratCom COE, opened the talk by referencing “the almost mythological subject of deepfakes” and suggesting that fears of their destructive potential do not match technological or economic realities.
“Just because a technology can be used doesn’t mean it will be used,” said Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative.
Hwang said there is no evidence that disinformation teams are using deepfakes, largely because it is less expensive to run traditional and social media influence campaigns.
Even if bad actors spend the “tens of thousands of dollars” necessary to mount deepfake campaigns, Hwang said, researchers have plenty of sample fakes with which to train machine learning models to spot video fraud.
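To make the detection point concrete, here is a minimal sketch of the kind of supervised detector such research enables, assuming a corpus of labeled genuine and deepfake frames. The random features and labels below are placeholders standing in for real extracted frame features; none of this code comes from the talk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data: one row of features per video frame, labeled
# 0 for genuine footage and 1 for deepfake frames. In practice the
# features would come from a CNN or an artifact-specific extractor
# trained on known examples of fakes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))    # stand-in frame features
y = rng.integers(0, 2, size=1000)  # stand-in labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A simple supervised detector: logistic regression over frame features.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# With random placeholders this hovers near chance (~0.5); with real
# artifact features, accuracy reflects how separable the fakes are.
print("held-out accuracy:", clf.score(X_test, y_test))
```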
He also pointed out the possibility of “poisoning” training data (image and voice content) with figurative “radioactive markers” that would expose fraudsters. Just the threat of poisoning could be disincentive enough to make attacks unlikely, he explained, because major social media outlets would aggressively seek out markers and clean their feeds.
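The marker idea can be pictured with a toy sketch. What follows is not the actual “radioactive data” technique, only an assumed illustration: a defender embeds a low-amplitude pseudorandom pattern in training images and later checks content for it with a simple matched-filter score. The function names, image size and amplitudes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Secret pseudorandom marker, known only to the defender, scaled to unit norm.
marker = rng.normal(size=(64, 64))
marker /= np.linalg.norm(marker)

def poison(image, strength=5.0):
    """Embed the marker into a training image at low per-pixel amplitude."""
    return image + strength * marker

def marker_score(image):
    """Matched-filter test: inner product between an image and the marker."""
    return float(np.sum(image * marker))

clean = rng.normal(size=(64, 64))  # stand-in for a genuine training image
tainted = poison(clean)
print(f"clean score:   {marker_score(clean):.2f}")    # ~N(0, 1), near zero
print(f"tainted score: {marker_score(tainted):.2f}")  # shifted by ~5.0
```

In this framing, a platform only needs the secret marker to screen content at scale, which is why, on Hwang’s argument, the mere threat of markers could deter attackers.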
Joining Hwang and Sārts was Keir Giles, a specialist on Russia at the Conflict Studies Research Centre, who said arguments about fraudulent video footage of a leader can be overstated.
Giles said people are not easily shocked, because deceptive media influence has been around for a long time. Pundits have worried about “the erosion of trust in objective truth or reality,” he said, since “the first flattering portrait” was painted.
However, there are other dangers related to deepfakes that Giles said should be considered.
For one, they could theoretically render biometric facial and voice recognition as irrelevant as hand-written signatures, he said. Brett Beranek, VP and GM of Security and Biometrics at Nuance Communications, also participated in the discussion.
And the technology’s ability to scale bears watching, according to Giles. Creating misleading content featuring a believable human avatar and distributing it globally would carry “close to zero cost.”
It is the avatar that would seem to worry Giles most.
People jaded by politics and technology would be less likely to take damning content about a known person on faith. But an avatar or avatars could do or say anything, influencing the thoughts and behaviors of people already consuming other deceptive content.
Bad actors could create anonymous deepfake leaders, groups or even movements, he said.
This post was updated at 9:04am on May 14, 2020 to clarify that the event was held by NATO StratCom COE, which is not part of NATO’s military command structure.