Researchers develop tools to detect AI artifacts in photos and videos

While digitally manipulated “deepfake” photos and videos are becoming increasingly difficult to detect, a new paper by researchers at Binghamton University — State University of New York, Virginia State University and Intelligent Fusion Technology shows that images can be parsed using frequency domain analysis techniques to look for “anomalies that could indicate they are generated by AI.”

In their study, the researchers explored how Generative Adversarial Networks Image Authentication (GANIA) could be used to identify a photo’s AI origins and, in turn, help thwart its spread as misinformation.

In their paper, “Generative adversarial networks-based AI-generated imagery authentication using frequency domain analysis,” published in the Proceedings of Disruptive Technologies in Information Sciences, the researchers created thousands of images using easily accessible generative AI tools like Adobe Firefly, PIXLR, DALL-E, and Google Deep Dream, and analyzed them using signal processing techniques “so their frequency domain features could be understood.”

The researchers said a machine learning model can separate AI-generated images from non-AI-generated, or “natural,” ones based on differences in their frequency domain characteristics. Comparing images with GANIA, they were able to detect artifacts that stem from the way AI generates fake images: the generators upsample their output by cloning pixels, which significantly increases the file size and leaves tell-tale signs in the frequency domain.
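To make the approach concrete, here is a minimal sketch of frequency domain analysis on a single image, assuming common off-the-shelf tools rather than the paper’s actual pipeline; the function name, the grayscale conversion and the radially averaged spectrum are illustrative choices, not details from the study:

```python
import numpy as np
from PIL import Image

def radial_spectrum(path):
    """Radially averaged log power spectrum of an image.

    Pixel-cloning upsampling tends to leave periodic peaks in this
    profile, while natural camera images usually show a smoother
    fall-off toward high frequencies.
    """
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

    # Average the 2-D spectrum over rings centered on the zero frequency.
    h, w = spectrum.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    totals = np.bincount(r.ravel(), weights=spectrum.ravel())
    counts = np.bincount(r.ravel())
    return totals / np.maximum(counts, 1)

# Profiles like this can serve as features for the machine learning
# model that separates AI-generated images from natural ones.
profile = radial_spectrum("test_image.png")
```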

“When you take a picture with a real camera, you get information from the whole world — not only the person or the flower or the animal or the thing you want to take a photo of, but all kinds of environmental info is embedded there,” said Professor Yu Chen from the Department of Electrical and Computer Engineering at Binghamton.

“With generative AI, images focus on what you ask it to generate, no matter how detailed you are. There’s no way you can describe, for example, what the air quality is or how the wind is blowing or all the little things that are background elements,” Chen said.

“While there are many emerging AI models, the fundamental architecture of these models remains mostly the same. This allows us to exploit the predictive nature of its content manipulation and leverage unique and reliable fingerprints to detect it,” added PhD student Deeraj Nagothu.

“We want to be able to identify the ‘fingerprints’ for different AI image generators,” said researcher Nihal Poredi. “This would allow us to build platforms for authenticating visual content and preventing any adverse events associated with misinformation campaigns.”

The team also developed a new technique, called “DeFakePro,” to detect fake AI-generated or AI-enhanced audio-video recordings. The tool leverages the electrical network frequency (ENF) signal created by minuscule fluctuations in the power grid, which the researchers said is embedded in media files at the moment they are recorded.

By analyzing this signal, which the research team said is unique to the time and location of the recording, the DeFakePro tool can verify whether a recording is authentic or has been manipulated. The researchers said the technique proved highly effective in exposing deepfakes and demonstrated how it can be used to secure “large-scale smart surveillance networks” against attacks using AI-manipulated video.
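The article does not spell out DeFakePro’s internals, but the general ENF idea can be sketched briefly. The snippet below is a rough illustration only: it assumes a 60 Hz grid (50 Hz in much of the world), a WAV input and 4-second analysis windows, none of which are specifics from the study:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt, stft

def extract_enf(path, mains_hz=60.0, band_hz=1.0):
    """Estimate the ENF trace hidden in a recording's mains hum."""
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # mix stereo down to mono

    # Band-pass a narrow window around the nominal mains frequency.
    sos = butter(4, [mains_hz - band_hz, mains_hz + band_hz],
                 btype="bandpass", fs=rate, output="sos")
    hum = sosfiltfilt(sos, audio)

    # Track the dominant frequency in each 4-second STFT frame;
    # the resulting trace is the ENF estimate over time.
    f, t, Z = stft(hum, fs=rate, nperseg=int(rate) * 4)
    band = (f >= mains_hz - band_hz) & (f <= mains_hz + band_hz)
    enf = f[band][np.abs(Z[band]).argmax(axis=0)]
    return t, enf
```

In a verification setting, a trace like this would be matched against ENF reference data for the claimed time and place of recording; discontinuities in the trace or a failed match suggest the file was edited or synthesized.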

“Misinformation is one of the biggest challenges that the global community faces today. The widespread use of generative AI in many fields has led to its misuse. Combined with our dependence on social media, this has created a flashpoint for a misinformation disaster,” Poredi said. “This is particularly evident in countries where restrictions on social media and speech are minimal. Therefore, it is imperative to ensure the sanity of data shared online, specifically audio-visual data.”

“AI is moving so quickly that once you have developed a deepfake detector, the next generation of that AI tool takes those anomalies into account and fixes them,” Chen said. “Our work is trying to do something outside the box.”
