Deepfakes are a) detectable b) cool c) not a worry
According to news from inside the AI community (or bubble, if you like), coders are taking seriously the prospect that deepfake videos pose an imminent threat.
While some in biometrics and related circles sigh with relief, some outside those circles do not see the same danger that has disturbed algorithm writers. Some, in fact, wonder what the big deal is.
An opinion article on the Unite.AI site states that video synthesis has been “the galvanizing factor” behind a “developmental sprint” this year. Unite.AI is an advocacy journalism publisher promoting ethical AI and reporting on AI abuse, primarily by government.
The lengthy article is worth the time, catching readers up on important developments in spotting and fighting deepfakes, which depend greatly on faked biometrics of averaged human beings or spoofed biometrics of real ones.
Meta has gone public with text-to-video software, Make-A-Video, for instance. Google Research is pushing its Imagen Video T2V architecture. And, according to the article, Stability AI has promised video for its Stable Diffusion algorithm before 2023.
The meat of the article, however, is about tools being created to fake-proof the internet, and the generation techniques they must counter. Chief among the latter are generative adversarial networks and latent diffusion models. The article states that, positioned as “adjuncts to external animation works,” GANs and diffusers are “starting to gain real traction.”
The backers of the De-Fake architecture, described in a paper linked from the article, propose nothing less than to “achieve ‘universal detection’ for images produced by text-to-image diffusion models” – and to identify which diffuser was used to make the image.
Salesforce collaborated on this work.
Another paper cited touts Blade Runner, “plug and play” software for anyone who cannot afford to build their own deepfake detector.
A second article out this week, published by Business Insider, is more in the “wow” category. The author, novelist and podcaster Evan Ratliff, offers a lengthy account of his surprise at the ubiquity of deepfake headshots.
Ratliff sees a photo of a 20-something blonde with straight, Chiclet-white upper teeth (the lower teeth missing from the smile). What follows is a recounting of how he became obsessed with the photo and its accompanying chatbot, and ultimately came to realize he could use his own human facial recognition faculties to spot some fakes.
But the most fun perspective comes from a news article on The Register. In a piece that is going to look either prescient or naïve in five years, The Register tries to take some of the air out of the deepfake balloon.
Relax, is the message from John Shier, a researcher at online security firm Sophos quoted by the publication.
When it comes to conventionally defrauding businesses, it is far easier to successfully phish for passwords than to build algorithms, architectures and platforms. Deepfake romance scams, however, are primed like a mid-summer lawn mower.
“Industrialized deepfake lovebots” are on their way as soon as they can be made to trick at scale.
There is a novelist out there who probably needs to read that right away.
AI | biometrics | biometrics research | deepfakes | synthetic data