Two concerning deepfake developments but new hope for robust detection

Another day, another hopeful deepfake detection tactic, but also, two chilling news reports.

First, a pair of stories about the biometrics-faking djinn escaping its bottle.

TechCrunch is reporting on a newly released open-source AI image generator that is being adopted “stunningly” fast. Stable Diffusion, a model created by Stability AI, generates realistic images from short text prompts, and can do so on consumer-grade computers.

Stable Diffusion is being used by organizations that generate digital art or that let people create their own art with the companies’ machine learning software, according to TechCrunch, a publisher of tech-business news.

But the model was leaked on 4chan, a dingy corner of the internet where poor digital choices are often made and celebrated. The assumption is that realistic, unethical and harmful deepfakes will flourish in the democracy of inexpensive computing.

A case in point, although there is no indication that Stable Diffusion or 4chan was remotely involved, is a cryptocurrency exchange executive who reports that he is the unwitting model for a holographic deepfake.

According to reporting by tech news publication The Register, a live biometric deepfake of Binance spokesman Patrick Hillmann was used on Zoom calls to convince would-be investors to list their tokens on the world’s largest crypto spot exchange.

PCMag has a good description of how the caper was pulled off and how it was discovered. Here are Hillmann’s thoughts on the incident. There are no reports on who pulled the stunt.

There is room for hope that reliable, long-lived detection is possible, though the track record of putting deepfakes in their place is one of repeated failure as creation technology evolves.

An article in tech publication Unite.AI, based on a new research paper, is encouraging for the moment. The drive for biometric deepfake precision may itself be the lasting telltale that detectors have been seeking.

Deepfake creators want every video frame to be individually perfect, at the expense of temporal context, and they rarely bother to recreate the peculiar signatures of compressed video.

The result is a condition, called regularity disruption, that occurs in deepfakes alone. A graphic in the research paper illustrates the disruption: the measurement of a facial feature over time appears as jagged horizontal lines, while the same signal from a real subject is smoother, like an extruded material.
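The intuition behind that graphic can be sketched in a few lines of code. This is a toy illustration, not the paper’s actual method: it assumes a facial measurement (say, mouth width) tracked across video frames, models a real face as a smooth signal and a deepfake as that signal plus per-frame jitter, and scores “regularity disruption” as the energy of the second difference over time. The function name and signal model are invented for this sketch.

```python
import numpy as np

def temporal_roughness(signal: np.ndarray) -> float:
    """Mean squared second difference of a per-frame measurement.

    A smooth trajectory (real face) yields a near-zero score; per-frame
    jitter (a toy stand-in for regularity disruption) inflates it.
    """
    return float(np.mean(np.diff(signal, n=2) ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 120)  # 120 video frames

# A real face moves smoothly between frames: a slow sinusoid.
real = 10 + 2 * np.sin(t)

# A deepfake rendered frame-by-frame picks up uncorrelated jitter.
fake = real + rng.normal(scale=0.4, size=t.size)

print(temporal_roughness(real) < temporal_roughness(fake))  # True
```

The detector described in the coverage is far more sophisticated, but the design choice is the same: ignore how good any single frame looks and ask whether the signal is plausibly smooth over time.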

