AI vs. AI: MIT researchers combat image manipulation

A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has designed a new tool to jam AI image generators, using invisible “perturbations” at the pixel level of an image.

A release describes how the PhotoGuard technique uses a combination of offensive and defensive tactics to block AI tools such as DALL-E or Midjourney from manipulating photos to create deepfakes and other compromised images. In the encoding tactic, perturbations are small alterations to the latent representation of an image – the mathematical code through which an AI engine “sees” it. By altering that code, PhotoGuard “immunizes” the image, rendering it incomprehensible to the AI, which can then perceive it only as a random entity. The resulting output is unrealistic and recognizably altered – faces floating on a grey field, for instance, or left unblended against a blurred background.
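The release stops at this high-level description, but the underlying recipe is a standard adversarial one: optimize a tiny, norm-bounded perturbation so that the encoder’s latent for the photo lands on an uninformative target. The PyTorch sketch below is a minimal illustration of that recipe, assuming a generic differentiable `encoder`; the names and parameters here are placeholders for illustration, not PhotoGuard’s actual API.

```python
import torch
import torch.nn.functional as F

def encoder_attack(image, encoder, target_latent, eps=0.06, step=0.01, iters=100):
    """Projected gradient descent on the input pixels: steer the
    encoder's latent toward `target_latent` while keeping the
    perturbation within an imperceptible L-infinity budget `eps`."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        loss = F.mse_loss(encoder(image + delta), target_latent)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()  # descend toward the target latent
            delta.clamp_(-eps, eps)            # perturbation stays invisible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

A generative model that later encodes the “immunized” output of such a routine recovers the meaningless target latent rather than the photo’s content, which is why its edits come out garbled.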

On the defensive level, PhotoGuard perturbs the original input image so that, during the inference process, the model’s output is steered toward a different target image, causing the AI to confuse the two. This more complex “diffusion attack” uses significantly more memory than the encoding tactic.
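As a rough sketch of how the two tactics differ, the diffusion variant applies the same optimization loop but backpropagates through the entire editing pipeline, matching the pipeline’s output to a target image rather than matching latents. The `edit_pipeline` below is a hypothetical stand-in for a differentiable end-to-end diffusion edit; holding gradients across every denoising step is what drives the memory cost noted above.

```python
import torch
import torch.nn.functional as F

def diffusion_attack(image, edit_pipeline, target_image, eps=0.06, step=0.01, iters=50):
    """Like the encoder attack, but the loss is taken on the final
    edited image, so gradients flow through the whole diffusion
    process - hence the much larger memory footprint."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        loss = F.mse_loss(edit_pipeline(image + delta), target_image)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()  # match the pipeline's output to the target
            delta.clamp_(-eps, eps)            # same imperceptibility constraint
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```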

In either case, the perturbations are imperceptible to the human eye in the original image.

While the training of facial recognition algorithms is not mentioned in MIT’s release, PhotoGuard would presumably also block this application of AI to online images.

Potential and protection in equal measures

“The progress in AI that we are witnessing is truly breathtaking,” says MIT professor Aleksander Madry, who co-authored the PhotoGuard research paper. “But it enables beneficial and malicious uses of AI alike. It is thus urgent that we work towards identifying and mitigating the latter.”

The PhotoGuard team, however, emphasizes that truly robust protection against AI manipulation will require cooperation and coordination across the sector. Hadi Salman, an MIT graduate student in electrical engineering and computer science and the paper’s lead author, says policymakers should consider regulation that mandates safeguards against manipulation, pointing to PhotoGuard as an example.

“Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools,” he says. “As we tread into this new era of generative models, let’s strive for potential and protection in equal measures.”
