Deepfakes were scoffed at last year; FBI issues a warning this year

Last spring, NATO gave itself a pep talk about the threat posed by deepfakes. They are no big deal, private-sector experts said in a panel discussion.

A year later, the FBI’s cyber division is warning that “malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations.”

In fact, the FBI says that a serious deepfake disinformation campaign carried out by another nation, an independent propaganda cell or a home-grown criminal operation could occur within the next 12 to 18 months.

Deepfakes are AI applications capable of creating lifelike still images and video of people who do not exist. They can also use biometric data to create believable images and footage of real people in fictional scenarios.

FBI agents called out unnamed Russian, Chinese and Chinese-speaking actors for creating synthetic social media profile images as part of influence campaigns targeting the United States. Fictitious journalists are being created this way, for instance, according to media reports cited by the bureau.

What has changed over the past year that two interrelated security organizations could assess the threat so differently? Not much, really.

Improvement in deepfake algorithms has been bracing, but not surprising to AI researchers and programmers.

It is more likely that the technology is being taken more seriously in global and national security circles.

The panel, assembled by NATO's Strategic Communications Centre of Excellence, found little cause for concern: deepfake attacks had not yet occurred, people would not be fooled, and methods of determining what is fake would keep pace with nefarious influencers.

And, indeed, fake-spotting tactics continue to dribble out.

One of the more recent efforts focuses on the eyes of digitally depicted people. In a photograph of a real person, the light reflections in the two eyes largely mirror each other. That is not the case in the eyes of manufactured faces.

That could be because every pixel of a deepfake is an amalgamation of countless training-set images. It could also be a result of programming that avoids unnatural symmetry (a tendency that also produces misshapen ears and mismatched earrings).

Either way, an AI system can analyze the two reflections for similarity to detect the fake; researchers achieved a 94 percent deepfake detection success rate with portrait-style photos. The researchers also acknowledge a limitation of the method: it depends on a light source whose reflection is mirrored in both eyes.
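For readers curious how such a check works in practice, here is a minimal sketch in Python using OpenCV. It assumes the two eye regions have already been cropped from the photo (for example, with a face-landmark detector), and it stands in for the researchers' actual similarity measure with a simple intersection-over-union comparison of thresholded highlights. The function names, brightness cutoff and decision threshold are illustrative assumptions, not the published method.

```python
# Illustrative sketch of an eye-reflection consistency check.
# Assumes the two corneal regions are already cropped (e.g., via a
# face-landmark detector); the cutoff and threshold values below are
# arbitrary placeholders, not the figures from the cited research.
import cv2
import numpy as np

def highlight_mask(eye_bgr: np.ndarray, cutoff: int = 230) -> np.ndarray:
    """Binary mask of the bright specular highlight inside an eye crop."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, cutoff, 1, cv2.THRESH_BINARY)
    return mask

def reflection_similarity(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """Intersection-over-union of the two highlight masks.

    Real portraits tend to score high (mirrored reflections);
    GAN-generated faces tend to score low (inconsistent reflections).
    """
    size = (32, 32)  # common shape so the masks can be compared pixelwise
    m1 = cv2.resize(highlight_mask(left_eye), size,
                    interpolation=cv2.INTER_NEAREST)
    # Flip the right eye horizontally so a mirrored reflection pattern
    # lines up spatially with the left eye's pattern.
    m2 = cv2.resize(highlight_mask(cv2.flip(right_eye, 1)), size,
                    interpolation=cv2.INTER_NEAREST)
    inter = np.logical_and(m1, m2).sum()
    union = np.logical_or(m1, m2).sum()
    return float(inter) / union if union else 0.0

def looks_synthetic(left_eye: np.ndarray, right_eye: np.ndarray,
                    threshold: float = 0.5) -> bool:
    """Flag the face as suspect when the reflections disagree too much."""
    return reflection_similarity(left_eye, right_eye) < threshold
```

The horizontal flip is the one non-obvious step: because the two corneas view the same light source from mirrored positions, the right eye's highlight must be reflected before it can be overlaid on the left eye's for comparison.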
