Despite facial recognition misgivings, we seem ready to trust synthetic faces
The European Commission’s effort to create standards for the use of facial recognition is ongoing, but as important as that work is, should politicians be giving more attention to a mirror-image danger?
For all the very real concern about how accurately facial recognition algorithms can identify a human face, new research finds that people are not only fooled by AI-generated faces but trust deepfake faces more than authentic photos.
(There is a separate phenomenon involving people who welcome — or at least accept — dealing with synthetic leaders. More on that below.)
The researchers, Sophie Nightingale from Lancaster University and Hany Farid from the University of California, Berkeley, say the best deepfakes have become indistinguishable from real images.
Automated detection tools have been developed for spotting fakes, according to the researchers, but “current techniques are not efficient or accurate enough to contend with the torrent of daily uploads.”
Worse, their work suggests that people tend to trust deepfake images more than those of actual people.
In one experiment, 223 participants rated on a graduated scale the trustworthiness of 128 faces drawn from an 800-image dataset. On average, the synthetic faces were rated 7.7 percent more trustworthy than the real ones.
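To make the arithmetic behind that figure concrete, here is a minimal Python sketch. It assumes the published number reflects the relative difference between the two groups' mean ratings; every value below is invented for illustration and is not the study's data.

```python
# Minimal sketch: how a "percent more trustworthy" figure can be computed
# from per-face mean ratings. All values are invented for illustration;
# they are NOT the study's data.
synthetic_ratings = [4.9, 4.7, 5.0, 4.6, 4.8]  # hypothetical mean scores, synthetic faces
real_ratings = [4.5, 4.4, 4.6, 4.3, 4.5]       # hypothetical mean scores, real faces

mean_synthetic = sum(synthetic_ratings) / len(synthetic_ratings)
mean_real = sum(real_ratings) / len(real_ratings)

# Relative difference between group means, expressed as a percentage.
relative_gap = (mean_synthetic - mean_real) / mean_real * 100
print(f"Synthetic faces rated {relative_gap:.1f}% more trustworthy on average")
```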
Images of women were rated markedly more trustworthy than images of men, while no major differences emerged between images of synthetic people of different races.
The researchers theorize that synthetic faces read as more trustworthy because they resemble averages of the real faces used to train the AI tools that generate them, and average faces tend to be judged more trustworthy.
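That averaging intuition can be illustrated in a few lines of Python. This is a hedged sketch, not the researchers' method: it treats faces as aligned grayscale arrays (random stand-in data here) and shows that a pixelwise average washes out the idiosyncratic variation that makes an individual face distinctive.

```python
import numpy as np

# Hedged illustration of the averaging intuition, not the researchers' method.
# Stand-in data: 500 "aligned face crops" as 64x64 grayscale arrays.
rng = np.random.default_rng(0)
faces = rng.random((500, 64, 64))

average_face = faces.mean(axis=0)

# Individual faces vary a lot pixel to pixel; the average is far smoother,
# which is the property the researchers link to perceived trustworthiness.
print("per-pixel std of one face:   ", float(faces[0].std()))
print("per-pixel std of the average:", float(average_face.std()))
```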
That theory is not being tested in South Korea, but something just as interesting is. A political candidate there is being digitized in an effort to generate more votes.
According to the International Business Times, Yoon Suk-yeol has become his own deepfake — AI Yoon. The avatar is campaigning for the flesh-and-blood pol.
And actually, none of these efforts to fool (or entertain) people are new. When India's current prime minister, Narendra Modi, ran for the top office in 2014, he sent lookalikes out to give speeches.
At the time, media reports found that few, if any, Indian voters were put off by attending rallies with their almost-candidate. They often said they found the impostor trustworthy.
Of course, one of those lookalikes is now out campaigning for national office and attacking Modi’s policies.