
Be prepared for more fake news, cloned people and manipulated images

Generative AI examined at EAB and CiTER Biometrics Workshop
The growing accessibility and power of deepfakes and generative AI are causing headaches for fraud prevention professionals and forensic investigators, and the problem appears to be getting worse. Tencent Cloud is offering Deepfakes-as-a-Service, charging $145 to generate digital copies of an individual based on three minutes of video and one hundred spoken sentences, The Register reports.

The interactive fakes take only 24 hours to produce, and use timbre customization technology to avoid the flat intonation that can sometimes alert viewers to the presence of a virtual human.

The Cyberspace Administration of China has put rules in place for generative AI that seem to require the products of this service to be clearly marked as such.

Criminals are demonstrating the nefarious uses this kind of technology can be put to, with Arizona outlet Arizona’s Family reporting an incident in which criminals cloned a teenager’s voice to stage a fake kidnapping. The purported kidnappers phoned the teenager’s mother and demanded a million dollars in ransom, threatening to harm the teen if the mother did not comply.

The teen’s mother alertly ascertained that her daughter was safe without paying, but AI experts are warning people to be alert to the possibility of similar fraud attacks.

Journalists, too, are finding a ready audience for their tales of AI trickery, with the latest example coming from a Wall Street Journal columnist who managed to trick her bank’s voice biometric system and family members, at least temporarily. Senior Tech Columnist Joanna Stern cloned herself with help from a professional generative AI service and an extra layer of voice technology.

Research from Regula indicates that roughly a third of businesses have already suffered a deepfake fraud attack.

Generative AI threatens digital forensics

Deepfakes were one of the four topics in focus at the recent EAB & CiTER Biometrics Workshop.

Anderson Rocha, professor and researcher at the State University of Campinas and visiting professor at the Idiap Research Institute, presented a keynote on ‘Deepfakes and Synthetic Realities: How to Fight Back?’

“Deepfakes are just the tip of the iceberg,” Rocha says. Generative AI is overturning longstanding assumptions in forensics.

Multiple complete yet fake narratives are possible, with the ability to create synthetic video, audio, text and other kinds of data.

“The singularity” is a long way off, Rocha argues, but as Arthur C. Clarke noted, “any sufficiently advanced technology is indistinguishable from magic.”

AI is used in digital forensics to help identify, analyze and interpret digital evidence, in part by searching for the artifacts that are, at least in theory, left behind by every change made to a piece of evidence.

The problem of determining media provenance was first raised to Rocha’s team in 2009, with a real-world investigation into the legitimacy of photos of Brazil’s then-President published in news media. Rocha described the techniques used at the time, and their evolution to include computer vision techniques, up until the explosion of data and the advancement of neural networks changed the possibilities for manipulating photos and other evidence, around 2018.

Now, combinations of detectors with machine learning are necessary to detect the more-subtle manipulations that have become possible with AI. The pace of AI advancement, however, poses a constant challenge to forensic investigators.

The true threat of generative AI, therefore, in Rocha’s view, is not so much from deepfakes as it is from manipulations that do not leave detectable artifacts.

The topic was further explored with presentations from Pindrop’s Nick Gaubitch on PAD in echoey environments, Arun Ross of Michigan State University on iris deepfakes, and a quartet of presentations from academic researchers.
