
Despite facial recognition misgivings, we seem ready to trust synthetic faces


The European Commission’s effort to create standards for the use of facial recognition is ongoing, but as important as that work is, should politicians be giving more attention to a mirror-image danger?

For all the very real concern about how accurately facial recognition algorithms can recognize a human face, new research finds that people not only are fooled by AI-generated faces but also trust deepfake faces more than authentic photos.

(There is a separate phenomenon involving people who welcome — or at least accept — dealing with synthetic leaders. More on that below.)

The researchers, Sophie Nightingale from Lancaster University and Hany Farid from the University of California, Berkeley, say the best deepfakes have become indistinguishable from real images.

Automated detection tools have been developed for spotting fakes, according to the researchers, but “current techniques are not efficient or accurate enough to contend with the torrent of daily uploads.”

Worse, their work suggests that people tend to trust deepfake images more than those of actual people.

In one experiment, 223 participants rated the trustworthiness of 128 faces drawn from an 800-image dataset on a graduated scale. On average, the synthetic faces were rated 7.7 percent more trustworthy than the real ones.

Images of women were rated decidedly more trustworthy than images of men. There were no major differences between synthetic faces of different races.
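To make a figure like "7.7 percent more trustworthy" concrete, here is a minimal sketch of how a relative gap between two groups of ratings can be computed. The ratings below are made up for illustration and assume a 1-to-7 scale; this is not the Nightingale and Farid data or their actual statistical analysis.

```python
# Illustrative sketch only: computing a relative trustworthiness gap between
# two groups of face ratings. All numbers here are hypothetical.

real_ratings = [4.5, 4.4, 4.6, 4.5, 4.4, 4.5]       # hypothetical mean ratings for real faces (1-7 scale)
synthetic_ratings = [4.8, 4.9, 4.8, 4.8, 4.9, 4.8]  # hypothetical mean ratings for synthetic faces

mean_real = sum(real_ratings) / len(real_ratings)
mean_synth = sum(synthetic_ratings) / len(synthetic_ratings)

# Relative difference of the group means, expressed as a percentage.
gap_pct = (mean_synth - mean_real) / mean_real * 100
print(f"Synthetic faces rated {gap_pct:.1f}% more trustworthy on average")
```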

The researchers theorize that synthetic faces seem more trustworthy because they resemble averages of the many real faces used to train the generation algorithms, and average-looking faces tend to be judged more trustworthy.

That theory is not being tested in South Korea, but something just as interesting is. A political candidate there is being digitized in an effort to win more votes.

According to the International Business Times, Yoon Suk-yeol has become his own deepfake — AI Yoon. The avatar is campaigning for the flesh-and-blood pol.

And actually, none of these efforts to fool or entertain people are new. When India's current prime minister, Narendra Modi, ran for top office in 2014, he sent lookalikes out to give speeches.

At the time, media reports found that few if any Indian voters were put off by attending rallies with their almost-candidate. They often said they found the imposter trustworthy.

Of course, one of those lookalikes is now out campaigning for national office and attacking Modi’s policies.
