AI vs. AI: MIT researchers combat image manipulation

A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has designed a new tool to jam AI image generators, using invisible “perturbations” at the pixel level of an image.

A release describes how the PhotoGuard technique uses a combination of offensive and defensive tactics to block AI tools such as DALL-E or Midjourney from manipulating photos to create deepfakes and other compromised images. In the encoder attack, perturbations are small alterations to the latent representation of an image – the mathematical code through which an AI engine “sees” it. By nudging that code, PhotoGuard “immunizes” the image, rendering it incomprehensible to the AI, which can then perceive it only as a random entity. Any resulting output will be unrealistic and recognizably altered – faces floating on a grey field, for instance, or left unblended against a blurred background.
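PhotoGuard's actual implementation targets the deep image encoders inside generative models, but the idea behind the encoder attack can be sketched with a toy stand-in: a random linear map plays the encoder, and projected gradient descent (PGD) nudges the photo's latent code toward that of a flat grey image. All names and parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generative model's image encoder. The real encoder
# is a deep network; a random linear map keeps the sketch self-contained
# and lets us write the gradient by hand.
W = rng.normal(size=(16, 64))

def encode(x):
    return W @ x

image = rng.uniform(0.0, 1.0, size=64)   # flattened photo to "immunize"
target = encode(np.full(64, 0.5))        # latent code of a flat grey image

eps, step, n_iters = 0.03, 0.005, 200    # illustrative perturbation budget
delta = np.zeros_like(image)

for _ in range(n_iters):
    residual = encode(image + delta) - target
    grad = 2.0 * W.T @ residual                               # d/d(delta) of ||E(x+d) - z*||^2
    delta = np.clip(delta - step * np.sign(grad), -eps, eps)  # PGD step, kept tiny

before = np.linalg.norm(encode(image) - target)
after = np.linalg.norm(encode(image + delta) - target)
assert after < before   # the latent now reads as closer to flat grey
```

Because each pixel change is capped at `eps`, the immunized photo looks unchanged to a person, while its latent representation has drifted toward the grey target.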

On a defensive level, PhotoGuard’s more complex “diffusion attack” perturbs the original input image so that, during the inference process, the model treats it as though it were a different target image, confusing the two. Because it optimizes through the entire diffusion process rather than just the encoder, this attack uses significantly more memory.
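The end-to-end idea can be sketched the same way, assuming a toy two-stage pipeline in place of a real diffusion model; the only change from the encoder sketch above is that the loss, and therefore the gradient, runs through the whole generation process. Everything here is a hypothetical stand-in, not PhotoGuard's code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-stage pipeline standing in for encode -> generate. Backpropagating
# through every stage is what makes the real diffusion attack memory-hungry:
# the activations of the whole sampling chain must be kept for the gradient.
W = rng.normal(size=(16, 64))            # "encoder"
V = rng.normal(size=(64, 16))            # "generator"

def pipeline(x):
    return V @ (W @ x)

image = rng.uniform(0.0, 1.0, size=64)
target_out = np.full(64, 0.5)            # steer any edit toward flat grey output

eps, step = 0.03, 0.002                  # illustrative values
delta = np.zeros_like(image)
J = V @ W                                # Jacobian of the full pipeline

for _ in range(300):
    residual = pipeline(image + delta) - target_out
    grad = 2.0 * J.T @ residual          # gradient through the whole pipeline
    delta = np.clip(delta - step * np.sign(grad), -eps, eps)

before = np.linalg.norm(pipeline(image) - target_out)
after = np.linalg.norm(pipeline(image + delta) - target_out)
assert after < before
```

In a real diffusion model the Jacobian `J` is never formed explicitly; automatic differentiation stores intermediate activations for every denoising step instead, which is where the extra memory goes.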

In either case, the process is undetectable in the original image.

While the training of facial recognition algorithms is not mentioned in MIT’s release, PhotoGuard would presumably also block this application of AI to online images.

Potential and protection in equal measures

“The progress in AI that we are witnessing is truly breathtaking,” says MIT professor Aleksander Madry, who co-authored the PhotoGuard research paper. “But it enables beneficial and malicious uses of AI alike. It is thus urgent that we work towards identifying and mitigating the latter.”

The PhotoGuard team, however, emphasized that truly robust protection against AI will require cooperation and coordination across the sector. Hadi Salman, an MIT graduate student in electrical engineering and computer science and the paper’s lead author, says policymakers should consider regulating safeguards against manipulation, pointing to PhotoGuard as an example.

“Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools,” he says. “As we tread into this new era of generative models, let’s strive for potential and protection in equal measures.”
