Making deepfakes cute: Adobe shows a bit of its muscle

Deepfakes as an ordinary part of the average person’s life have come a step closer with a prototype algorithm from Adobe Research for a process the company describes as “magic.”
The company behind Photoshop is discussing its Project In-Between, software that turns a few still digital pictures into a basic animated GIF. The result is a warmer, more fluid and considerably less comical version of the cut-paper animation of film director Terry Gilliam.
Adobe produced a short video about In-Between starring the least objectionable man in English-language entertainment, Kenan Thompson, to show how a few seconds of movement can be added to as few as two pictures.
As has been pointed out elsewhere, this is Adobe’s first public, experimental step into deepfakes. In-Between is built on Sensei, the company’s AI and machine learning platform. No release date has been announced.
(The company is also working on ways to spot deepfakes.)
Adobe stuck with Photoshop while the developed world, at least, fretted over the nature of truth as photographs lost their last claims to legal proof.
Deepfakes are likely to face even deeper suspicion, particularly at a time when authoritarian politicians around the world foster paranoia and distrust of institutions.
That is probably why Thompson was hired for the project’s debut video. The company needs to pull the quills from deepfakes in order to profit from them. In fact, it refers to the technology involved only as machine learning.
Among the examples of In-Betweened content in Adobe’s video is a manufactured slow-motion clip of a man seemingly breathing in a sublime dawn. Another takes two clearly Photoshopped images of a woman, a man and a young child, and sets the trio into looped, swaying motion.
One of the child’s animated arms detaches from her torso in the process, but, as deepfakes themselves demonstrate more clearly every day, such software-originated gaffes will grow rarer and harder to spot.