
Generative AI examined at EAB and CiTER Biometrics Workshop
Be prepared for more fake news, cloned people and manipulated images

The growing accessibility and power of deepfakes and generative AI are causing headaches for fraud prevention professionals and forensic investigators, and the problem appears to be getting worse. Tencent Cloud is offering Deepfakes-as-a-Service, charging $145 to generate digital copies of an individual based on three minutes of video and one hundred spoken sentences, The Register reports.

The interactive fakes take only 24 hours to produce, and use timbre customization technology to avoid the flat intonation that can sometimes alert viewers to the presence of a virtual human.

The Cyberspace Administration of China has put rules in place for generative AI that seem to require the products of this service to be clearly marked as such.

Criminals are demonstrating the nefarious uses this kind of technology can be put to, with Arizona outlet Arizona’s Family reporting an incident in which criminals cloned a teenager’s voice to stage a fake kidnapping. The purported kidnappers phoned the teenager’s mother, demanded a million dollars in ransom, and threatened to harm her daughter if she did not comply.

The teen’s mother was able to confirm that her daughter was safe without paying, but AI experts are warning people to be alert to the possibility of similar fraud attacks.

Journalists, too, are finding a ready audience for their tales of AI trickery, with the latest example coming from a Wall Street Journal columnist who managed to trick her bank’s voice biometric system and family members, at least temporarily. Senior Tech Columnist Joanna Stern cloned herself with help from a professional generative AI service and an extra layer of voice technology.

Research from Regula indicates that roughly a third of businesses have already suffered a deepfake fraud attack.

Generative AI threatens digital forensics

Deepfakes were one of the four topics in focus at the recent EAB and CiTER Biometrics Workshop.

Anderson Rocha, professor and researcher at the State University of Campinas and visiting professor at the Idiap Research Institute, presented a keynote on ‘Deepfakes and Synthetic Realities: How to Fight Back?’

“Deepfakes are just the tip of the iceberg,” Rocha says. Generative AI is overturning longstanding assumptions in forensics.

Multiple complete yet fake narratives are possible, with the ability to create synthetic video, audio, text and other kinds of data.

“The singularity” is a long way off, Rocha argues, but as Arthur C. Clarke noted, “any sufficiently advanced technology is indistinguishable from magic.”

AI is used in digital forensics to help identify, analyze and interpret digital evidence, in part by searching for the artefacts that are, at least in theory, left behind by every change made to a piece of evidence.
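One classic example of this kind of artifact search is copy-move forgery detection: a region cloned within an image leaves identical pixel blocks at two different positions. The following is a minimal, illustrative sketch of that idea on a toy grayscale image (the function name, block size and data are assumptions for demonstration, not any specific forensic tool):

```python
# Toy sketch of one classic artifact search: copy-move forgery detection.
# A region cloned within an image leaves identical pixel blocks at two
# different positions, which can be found by hashing every block.

def find_duplicate_blocks(image, block=2):
    """Return pairs of top-left coordinates whose pixel blocks match."""
    seen = {}      # block contents -> first position observed
    matches = []   # (first_position, duplicate_position) pairs
    rows, cols = len(image), len(image[0])
    for r in range(rows - block + 1):
        for c in range(cols - block + 1):
            key = tuple(
                image[r + i][c + j] for i in range(block) for j in range(block)
            )
            if key in seen:
                matches.append((seen[key], (r, c)))
            else:
                seen[key] = (r, c)
    return matches

# The 2x2 block at (0, 0) has been "cloned" to (0, 3) in this toy image.
image = [
    [1, 2, 0, 1, 2],
    [3, 4, 0, 3, 4],
    [9, 8, 7, 6, 5],
]
print(find_duplicate_blocks(image))  # [((0, 0), (0, 3))]
```

Real detectors work on overlapping blocks of natural images and must tolerate compression noise, which is exactly why exact matching like this toy version eventually gave way to the more robust, learning-based approaches Rocha describes.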

The problem of determining media provenance was raised to Rocha’s team in 2009, with a real-world investigation into the legitimacy of photos of Brazil’s then-President published in news media. Rocha described the techniques used at the time and their evolution to include computer vision methods, until the explosion of data and advances in neural networks transformed the possibilities for manipulating photos and other evidence around 2018.

Now, combinations of detectors with machine learning are necessary to detect the more-subtle manipulations that have become possible with AI. The pace of AI advancement, however, poses a constant challenge to forensic investigators.
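The combination step can be as simple as score-level fusion: each detector emits a manipulation score, and a weighted average drives the final decision. The sketch below is an illustrative assumption of how such fusion might look; the detector names, weights and threshold are invented for the example and do not describe any particular system:

```python
# Minimal sketch of score-level fusion across several manipulation
# detectors. Detector names, weights and threshold are illustrative.

def fuse_scores(scores, weights):
    """Weighted average of per-detector manipulation scores in [0, 1]."""
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

def is_manipulated(scores, weights, threshold=0.5):
    """Flag the media item when the fused score crosses the threshold."""
    return fuse_scores(scores, weights) >= threshold

scores = {"noise_residual": 0.8, "jpeg_ghost": 0.6, "face_landmarks": 0.3}
weights = {"noise_residual": 2.0, "jpeg_ghost": 1.0, "face_landmarks": 1.0}
print(fuse_scores(scores, weights))    # 0.625
print(is_manipulated(scores, weights)) # True
```

In practice the fusion weights would themselves be learned from labeled data, which is what makes the arms race Rocha describes a moving target: each new generator shifts which detector cues remain informative.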

The true threat of generative AI, in Rocha’s view, is therefore not so much deepfakes themselves as manipulations that leave no detectable artifacts.

The topic was further explored with presentations from Pindrop’s Nick Gaubitch on presentation attack detection (PAD) in echoey environments, Arun Ross of Michigan State University on iris deepfakes, and a quartet of presentations from academic researchers.
