
Two concerning deepfake developments but new hope for robust detection


Another day, another hopeful deepfake detection tactic, but also, two chilling news reports.

First, a pair of stories about the biometrics-faking djinn escaping its bottle.

TechCrunch is reporting on a newly released open-source AI image generator that is being adopted “stunningly” fast. Stable Diffusion, a model created by Stability AI, creates realistic images from simple text prompts, and can do so on consumer-grade computers.
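To show how low the barrier to entry now is, here is a minimal sketch of text-to-image generation using the open-source Hugging Face diffusers library. The checkpoint name and prompt are illustrative assumptions, not details from the TechCrunch report.

```python
# A minimal sketch of text-to-image generation with Stable Diffusion via the
# Hugging Face "diffusers" library. Checkpoint name and prompt are assumptions
# for illustration only.
import torch
from diffusers import StableDiffusionPipeline

# Half-precision weights keep memory use within reach of consumer GPUs.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # assumed checkpoint name
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate a realistic image from a plain-text description.
image = pipe("a photorealistic portrait of a news anchor in a studio").images[0]
image.save("generated.png")
```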

Stable Diffusion is being used by organizations that generate digital art or allow people to create their own art using their machine learning software, according to TechCrunch, a publisher of tech-business news.

But the model was leaked on 4chan, a dingy corner of the internet where poor digital choices often are made and celebrated. The assumption is that realistic, unethical and harmful deepfakes will flourish in the democracy of inexpensive computing.

A good example of this, although it is not known whether Stable Diffusion or 4chan is remotely involved, is a cryptocurrency exchange executive who reports that he was the unwitting model for a holographic deepfake.

According to reporting by tech news publication The Register, a live biometric deepfake of Binance spokesman Patrick Hillmann was used on Zoom calls to convince would-be investors to list their tokens on what is the world’s largest crypto spot exchange.

PCMag has a good description of how the caper was pulled off and how it was discovered. Here are Hillmann’s thoughts on the incident. There are no reports on who pulled the stunt.

There is room for hope that reliable, long-lived detection is possible, though the record of putting deepfakes in their place is one of repeated failure as creation technology evolves.

An article in tech publication Unite.AI (based on a new paper) is encouraging, at least for the moment. It is possible that the drive for precision in biometric deepfakes could turn out to be the lasting clue detectors have been looking for.

Deepfake creators want every video frame to be perfect, at the expense of temporal context and even of recreating the peculiar signatures of compressed video.

A condition called regularity disruption occurs only in deepfakes. A graphic in the research paper illustrates the disruption: a facial feature plotted over time appears as jagged horizontal lines, while the same plot for a real subject is smoother, like an extruded material.
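To make the intuition concrete, the toy sketch below (not the paper's actual method) scores the jaggedness of a per-frame facial measurement: real video yields a smooth signal, while a jittery, deepfake-like signal produces a noticeably higher roughness value. All names and numbers are illustrative assumptions.

```python
# Toy illustration of the "regularity disruption" intuition, not the
# detection method described in the paper. A per-frame facial measurement
# (e.g. mouth-opening distance) tends to be smooth for real video but
# jittery for many deepfakes; mean squared second differences score that.
import numpy as np

def temporal_roughness(signal: np.ndarray) -> float:
    """Higher values indicate a more jagged, less regular signal over time."""
    second_diff = np.diff(signal, n=2)   # frame-to-frame "acceleration"
    return float(np.mean(second_diff ** 2))

# Hypothetical per-frame measurements over 200 frames.
t = np.linspace(0, 4 * np.pi, 200)
real_signal = np.sin(t)                                   # smooth, "extruded" motion
fake_signal = np.sin(t) + 0.15 * np.random.randn(t.size)  # jittery motion

print("real video roughness:", temporal_roughness(real_signal))  # small
print("deepfake roughness:  ", temporal_roughness(fake_signal))  # noticeably larger
```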
