AI vs. AI: MIT researchers combat image manipulation

A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has designed a new tool to jam AI image generators, using invisible “perturbations” at the pixel level of an image.
A release describes how the PhotoGuard technique uses a combination of offensive and defensive tactics to block AI tools such as DALL-E or Midjourney from manipulating photos to create deepfakes and other compromised images. In the encoding tactic, perturbations are small alterations to the latent representation of an image – the mathematical code through which an AI engine “sees” it. By altering that representation, PhotoGuard “immunizes” the image, leaving it incomprehensible to the AI, which can only perceive it as a random entity. Any resulting output is unrealistic and recognizably altered – faces floating on a grey field, for instance, or subjects left unblended against a blurred background.
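As a rough illustration of the encoding idea, the sketch below runs a projected-gradient-descent loop that nudges an image, within an imperceptible pixel budget, so that its latent representation drifts toward that of a flat grey image. The `ToyEncoder`, the loss, the grey target, and the step sizes are all illustrative assumptions to keep the example self-contained; the actual method perturbs against the target generator's own encoder, and the specifics here are not the paper's.

```python
# Minimal sketch of an encoder-style "immunization" via projected gradient descent.
# ToyEncoder is a stand-in for a generative model's image encoder (an assumption,
# not PhotoGuard's actual target model).
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in image encoder: maps an image to a small latent tensor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 4, 3, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def immunize(image, encoder, eps=8 / 255, steps=40, step_size=1 / 255):
    """Add an L-inf-bounded perturbation that pushes the image's latent
    toward the latent of a plain grey image, so downstream AI editing
    'sees' something uninformative."""
    target_latent = encoder(torch.full_like(image, 0.5)).detach()  # grey target (assumption)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        latent = encoder((image + delta).clamp(0, 1))
        loss = (latent - target_latent).pow(2).mean()   # distance of latent from grey target
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()      # step that shrinks the distance
            delta.clamp_(-eps, eps)                     # keep the change imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)                      # placeholder photo
    protected = immunize(img, ToyEncoder())
    print((protected - img).abs().max())                # perturbation stays within eps
```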
On a defensive level, PhotoGuard's more complex “diffusion attack” optimizes perturbations in the original input image against the generator's full diffusion process, so that whatever the model produces from the protected photo is pushed toward a preselected target image. This end-to-end approach uses significantly more memory and computation than the encoding tactic.
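The sketch below shows the same loop re-aimed at the end-to-end idea: the loss is computed on the editor's output rather than on a latent code, so the whole pipeline must stay in the autograd graph, which is why this variant is so much more memory-hungry. `ToyEditor` is a stand-in assumption, not a real diffusion model, and the grey target and hyperparameters are illustrative only.

```python
# Rough sketch of the end-to-end ("diffusion attack") idea: optimize the perturbation
# so that the editing pipeline's *output* matches a chosen target image.
import torch
import torch.nn as nn

class ToyEditor(nn.Module):
    """Stand-in for an image-editing generator (not an actual diffusion model)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        # A real pipeline would run many denoising steps here, all of which
        # must be kept in the autograd graph -- hence the memory cost.
        return self.net(x)

def end_to_end_immunize(image, editor, target, eps=8 / 255, steps=40, step_size=1 / 255):
    """Perturb `image` so the editor's output is driven toward `target`,
    making AI edits of the protected photo come out unusable."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        edited = editor((image + delta).clamp(0, 1))
        loss = (edited - target).pow(2).mean()      # loss on the generated output
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)                 # perturbation stays imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)
    grey = torch.full_like(img, 0.5)                # illustrative target image
    protected = end_to_end_immunize(img, ToyEditor(), grey)
```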
In either case, the process is undetectable in the original image.
While the training of facial recognition algorithms is not mentioned in MIT’s release, PhotoGuard would presumably also block this application of AI to online images.
Potential and protection in equal measures
“The progress in AI that we are witnessing is truly breathtaking,” says MIT professor Aleksander Madry, who co-authored the PhotoGuard research paper. “But it enables beneficial and malicious uses of AI alike. It is thus urgent that we work towards identifying and mitigating the latter.”
The PhotoGuard team, however, emphasized that truly robust protection against AI will require cooperation and coordination across the sector. Hadi Salman, a graduate student in electrical engineering and computer science and the paper’s lead author, says policymakers should consider requiring safeguards against manipulation, pointing to PhotoGuard as an example.
“Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools,” he says. “As we tread into this new era of generative models, let’s strive for potential and protection in equal measures.”