Deepfakes: The Times wows, the Senate punts and Asia worries

The New York Times this week did something dangerous for its reputation as the nation's paper of record. Its staff played with a deepfake algorithm and posted online hundreds of photorealistic images of non-existent people.
For those who fear democracy being subverted by the media, the article will only confirm their conspiracy theories. The original biometric, a face, can now be fabricated with as much uniqueness and diversity as nature produces, and with far less effort.
Seemingly, nothing in the newspaper can be taken at face value. In an apparent case of synchronicity (although…), the U.S. Senate last week passed on to the House of Representatives a bill that would promote research on ways to keep deepfakes ethical.
Few in the United States have yet to hear of deepfakes. Most people know that in 2018, the words of a comedian were digitally inserted into the mouth of former President Barack Obama. A good many know that it is possible to view celebrity faces grafted onto the undulating bodies of nobodies on porn sites.
It would have been irresponsible for The Times to ignore the opportunity to push the software around. And the article itself is as much a primer on spotting deepfakes as it is a flashy statement about The Times’ technological and journalistic savvy.
Staff members used a pre-trained iteration of Nvidia's StyleGAN2, implemented in TensorFlow, to create the images. The model was trained on the 70,000-image Flickr-Faces-HQ dataset.
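For readers curious how such images come to exist, the sketch below shows roughly how one face is generated with Nvidia's public NVlabs/stylegan2 TensorFlow code and its pre-trained FFHQ checkpoint. This is a minimal illustration under assumptions, not The Times' actual pipeline; the network path, seed, and output filename here are placeholders, and it presumes the repository and a TensorFlow 1.x GPU environment are already set up.

```python
# Minimal sketch: generate one synthetic face with the public
# NVlabs/stylegan2 TensorFlow code (github.com/NVlabs/stylegan2).
# Assumes the repo is on the Python path; illustrative only.
import numpy as np
import PIL.Image
import dnnlib
import dnnlib.tflib as tflib
import pretrained_networks

# Pre-trained generator weights for the 70,000-image Flickr-Faces-HQ set
# (path is an assumption; the repo resolves 'gdrive:' shortcuts itself).
network_pkl = 'gdrive:networks/stylegan2-ffhq-config-f.pkl'
_G, _D, Gs = pretrained_networks.load_networks(network_pkl)

# Draw a random 512-dimensional latent vector; every seed yields a
# different, wholly non-existent person.
rnd = np.random.RandomState(42)
z = rnd.randn(1, *Gs.input_shape[1:])

# Run the generator and convert its output to an ordinary 8-bit RGB image.
Gs_kwargs = dnnlib.EasyDict()
Gs_kwargs.output_transform = dict(func=tflib.convert_images_to_uint8,
                                  nchw_to_nhwc=True)
Gs_kwargs.truncation_psi = 0.5  # trades latent diversity for image quality
images = Gs.run(z, None, **Gs_kwargs)  # FFHQ config-f outputs 1024x1024
PIL.Image.fromarray(images[0], 'RGB').save('fake_face.png')
```

Looping that last step over a few hundred seeds is, in essence, all it takes to produce a gallery like the one The Times published.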
The technology has people in China spooked, according to the South China Morning Post. The most biometrically face-recognized population on Earth has cause to worry.
Not only could their own faces show up in unexpected places; more significant is the concern that societal trust, already fraying around the world, could collapse.
The Morning Post article made the point that the technology only has to make a critical mass of people question what is true. In short order, just as in the Great Depression, people could line up at banks to withdraw their money in a wave of economy-cratering distrust and cynicism.
The article says that sort of problem likely would not happen in developed economies.
But just last May, the NATO Strategic Communications Centre of Excellence held a conference at which a representative of the COE and the head of a joint Harvard-MIT AI governance group implied that political or military deepfake attacks will not happen because they have not yet happened.
The Senate’s legislation, the Identifying Outputs of Generative Adversarial Networks Act, could be a tentative first step in staving off such a possibility.
It dedicates no specific money, instead directing the National Science Foundation to “support merit-reviewed and competitively awarded research on manipulated” content. That research should look for tools to spot deepfakes.
The legislation, which would still have to be passed by the House and sent to the White House for a signature, also calls for research on how the public can be educated to spot deepfakes.
The better question might be whether people can be educated about the extreme deceptive potential of deepfakes without leading them to doubt everything.