Researchers develop “digital watermarks” to detect deepfakes
Researchers from the Tandon School of Engineering at New York University are developing AI-generated "digital watermarks" that reveal when an image has been altered, a method that could help detect deepfakes.
A prototype imaging pipeline increased the chances of detecting manipulation from roughly 45 percent to over 90 percent in tests, without sacrificing image quality, according to the announcement.
In an approach pioneered by Department of Computer Science and Engineering Research Assistant Professor Pawel Korus, a typical photo development pipeline is replaced by a neural network that embeds carefully crafted artifacts, highly sensitive to manipulation, directly into the image at acquisition.
“Unlike previously used watermarking techniques, these AI-learned artifacts can reveal not only the existence of photo manipulations, but also their character,” Korus says.
The process can be performed in-camera, and the embedded artifacts survive the distortions introduced by online photo-sharing services. The technology is open-source.
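The general idea can be illustrated with a toy fragile-watermarking sketch. Note that this is a classical keyed-watermark analogy, not the researchers' method: their artifacts are learned end-to-end by a neural network rather than generated from a fixed key, which is precisely what lets them also reveal the character of a manipulation. All names and parameters below are illustrative assumptions.

```python
import numpy as np

def embed(image, key, strength=0.5):
    # Add a low-amplitude pseudo-random pattern keyed by `key`
    # (a crude stand-in for the learned artifacts in the NYU pipeline).
    wm = np.random.default_rng(key).standard_normal(image.shape)
    return image + strength * wm

def detect(image, key):
    # Normalized correlation between the image and the expected pattern;
    # editing the image disrupts the pattern and lowers the score.
    wm = np.random.default_rng(key).standard_normal(image.shape)
    return float(np.corrcoef(image.ravel(), wm.ravel())[0, 1])

rng = np.random.default_rng(0)
image = rng.uniform(0.0, 1.0, (64, 64))   # toy "acquired" image
marked = embed(image, key=42)

score_intact = detect(marked, key=42)

# Local manipulation: overwrite a patch, destroying the pattern there.
tampered = marked.copy()
tampered[16:48, 16:48] = 0.5
score_tampered = detect(tampered, key=42)
```

Here `score_intact` stays high while `score_tampered` drops, flagging the edit; running `detect` over sub-blocks would additionally localize the tampered region.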
Most other attempts to analyze image authenticity rely on the final image, but the NYU researchers reasoned that most cameras now apply machine learning during image acquisition to normalize elements like lighting and stability.
“We have the opportunity to dramatically change the capabilities of next-generation devices when it comes to image integrity and authentication,” says NYU Tandon Professor of Computer Science Nasir Memon, who co-authored the research paper with Korus. “Imaging pipelines that are optimized for forensics could help restore an element of trust in areas where the line between real and fake can be difficult to draw with confidence.”
The researchers will present their paper on “Content Authentication for Neural Imaging Pipelines: End-to-end Optimization of Photo Provenance in Complex Distribution Channels” at the Conference on Computer Vision and Pattern Recognition in June.