Two concerning deepfake developments but new hope for robust detection
Another day, another hopeful deepfake detection tactic, but also two chilling news reports.
First, a pair of stories about the biometrics-faking djinn escaping its bottle.
TechCrunch is reporting on a newly released open-source AI image generator that is being adopted “stunningly” fast. Stable Diffusion, a model created by Stability AI, generates realistic images from simple text prompts, and can do so on consumer-grade computers.
Stable Diffusion is already being used by organizations that generate digital art or let people create their own using machine learning software, according to TechCrunch, a publisher of tech-business news.
But the model was leaked on 4chan, a dingy corner of the internet where poor digital choices often are made and celebrated. The assumption is that realistic, unethical and harmful deepfakes will flourish in the democracy of inexpensive computing.
A case in point, although there is no indication that Stable Diffusion or 4chan is remotely involved, comes from a cryptocurrency exchange executive who reports that he is the unwitting model for a holographic deepfake.
According to reporting by tech news publication The Register, a live biometric deepfake of Binance spokesman Patrick Hillmann was used on Zoom calls to convince would-be investors to list their tokens on Binance, the world’s largest crypto spot exchange.
PCMag has a good description of how the caper was pulled off and how it was discovered. Hillmann has shared his own thoughts on the incident. There are no reports on who pulled the stunt.
There is room for hope that reliable, long-lived detection is possible, though the record of putting deepfakes in their place is one of repeated failure as creation technology evolves.
An article in the tech publication Unite.AI, based on a new research paper, is encouraging for the moment. The very drive for biometric deepfake precision could turn out to be the lasting clue detectors have sought.
Deepfake creators want every video frame to be perfect, at the expense of temporal context and even of recreating the peculiar signatures of compressed video.
The result is a condition, called regularity disruption, that occurs in deepfakes alone. A graphic in the research paper illustrates the disruption: jagged horizontal lines representing a facial feature over time. The same trace for a real subject is smoother, like an extruded material.
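To make the intuition concrete, here is a minimal Python sketch of how temporal regularity might be scored. This is not the paper's method; the per-frame feature signal, the second-difference roughness metric and the threshold are all illustrative assumptions.

```python
import numpy as np

def temporal_roughness(signal: np.ndarray) -> float:
    """Score the frame-to-frame jaggedness of a per-frame facial-feature
    signal (e.g. one landmark coordinate tracked over time).

    Uses the standard deviation of second differences: a real face tends
    to change smoothly between frames, while frame-by-frame generation
    can leave a jagged, high-frequency trace.
    """
    second_diff = np.diff(signal, n=2)  # discrete "acceleration" of the trace
    return float(np.std(second_diff))

def looks_disrupted(signal: np.ndarray, threshold: float = 0.1) -> bool:
    """Flag a clip whose trace is rougher than a hypothetical,
    dataset-dependent threshold (0.1 here is purely illustrative)."""
    return temporal_roughness(signal) > threshold

# Toy demonstration: a smooth trace versus one with per-frame jitter.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
real_trace = np.sin(t)                               # smooth, like an extruded material
fake_trace = np.sin(t) + rng.normal(0, 0.2, t.size)  # jagged, "disrupted"

print(looks_disrupted(real_trace))  # False: regular motion
print(looks_disrupted(fake_trace))  # True: regularity disruption
```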