Reverse-engineering deepfakes to pry out telling secrets

Scientists at Facebook and Michigan State University are describing their reverse-engineering method for identifying deepfakes and learning what was used to create them.
Neither organization has said how accurate the method is, but an article in VentureBeat quotes researchers from the two organizations saying their approach performs “substantially better” than chance and is “competitive” with other deepfake detection schemes.
Biometrics developers NtechLab and ID R&D placed among the leaders in Facebook’s Deepfake Detection Challenge a year ago.
Facebook executives chose MSU scientists to collaborate on a way to take a known deepfake, working from a single still image or video frame, and reverse-engineer it to identify the tools that created it.
Researchers say their approach could also spot coordinated disinformation campaigns in which varied synthetic images from the same source are posted across multiple platforms.
A deepfake image is put through a so-called fingerprint estimation network, which flags subtle patterns unique to the generative model that created the content. Datasets of known fingerprints are then used to train models to recognize fingerprints that are new, or at least new to a given model.
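The article does not publish any code, but the two-stage idea it describes can be sketched roughly: a small network estimates a per-image fingerprint, and that fingerprint is compared against fingerprints associated with known generative models, so an image whose fingerprint matches nothing known stands out. The sketch below is an illustrative assumption, not the Facebook/MSU implementation; class names such as FingerprintEstimator and FingerprintMatcher, the layer sizes, and the embedding comparison are all hypothetical.

```python
# Hypothetical sketch of the fingerprint-then-match idea described above.
# Not the published Facebook/MSU code; all names and sizes are illustrative.
import torch
import torch.nn as nn

class FingerprintEstimator(nn.Module):
    """Small CNN that predicts a low-level residual 'fingerprint' for an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),  # fingerprint has the same shape as the image
        )

    def forward(self, image):
        return self.net(image)

class FingerprintMatcher(nn.Module):
    """Pools an estimated fingerprint into an embedding and scores it against
    stored embeddings of fingerprints from known generative models."""
    def __init__(self, known_model_embeddings):
        super().__init__()
        self.embed = nn.Sequential(
            nn.AdaptiveAvgPool2d(8), nn.Flatten(), nn.Linear(3 * 8 * 8, 64)
        )
        self.known = known_model_embeddings  # shape: (num_known_models, 64)

    def forward(self, fingerprint):
        z = nn.functional.normalize(self.embed(fingerprint), dim=1)
        known = nn.functional.normalize(self.known, dim=1)
        return z @ known.T  # cosine similarity to each known model's fingerprint

if __name__ == "__main__":
    estimator = FingerprintEstimator()
    matcher = FingerprintMatcher(torch.randn(100, 64))  # e.g. 100 known models
    image = torch.randn(1, 3, 128, 128)                 # a suspect image
    similarity = matcher(estimator(image))
    # Low similarity to every known model would suggest a fingerprint that is
    # "new, or at least new to a model," as the article puts it.
    print(similarity.shape, similarity.max().item())
```

In this toy setup, the matcher simply reports cosine similarity to stored fingerprints; in a trained system, both networks would be learned from a dataset of images produced by known generative models.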
Researchers tested their idea using 100 open-source models to create a 100,000 synthetic-image dataset.
According to VentureBeat, Facebook has expressed confidence that the tool will work outside the lab, but the company has not yet begun using it in its effort to keep deceptive AI-generated video and still images from reaching users.