Samsung researchers develop method of creating deepfakes from minimal source data
Samsung has developed an artificial intelligence system that can create deepfake video from as little as a single source image, CNET reports.
The Russia-based team of Samsung researchers published a paper and accompanying video, “Few-Shot Adversarial Learning of Realistic Neural Talking Head Models,” demonstrating the creation of “realistic neural talking heads” with neural networks. The researchers say the technique could be applied to videoconferencing, video games, and the special effects industry.
Dartmouth media forensics researcher Hany Farid calls the technique “another step in the evolution” of deepfakes. “Following the trend of the past year, this and related techniques require less and less data and are generating more and more sophisticated and compelling content,” he says.
The work arrives amid growing concern over manipulated media: a doctored video of U.S. House Speaker Nancy Pelosi, altered to make her appear to slur her speech, recently proliferated across the internet.
The system begins with an extensive “meta-learning stage,” in which it trains on the movement of faces across large quantities of video, and then applies what it has learned to a single image or a handful of pictures. The results show imperfect details, particularly when only one image is used, and University at Albany computer science professor Siwei Lyu notes that the generated face tends to resemble the person whose movements were used to drive it. Still, Lyu says the approach could save training time and make such models more generalizable.
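The two-stage idea described above — learn shared structure across many examples first, then adapt to a new subject from very little data — can be illustrated with a deliberately tiny sketch. This is not Samsung's talking-head model (which uses adversarial training on face video); it is a toy Reptile-style meta-learner on one-dimensional line-fitting tasks, where the “meta-learning stage” trains an initialization across many tasks and the “few-shot stage” adapts it to a new task from a single example. All names and numbers here are illustrative assumptions.

```python
import random

def sgd_step(w, x, y, lr=0.1):
    # One gradient step on squared error (y - w*x)^2 for a 1-parameter model.
    grad = -2 * (y - w * x) * x
    return w - lr * grad

def meta_train(num_tasks=200, inner_steps=5, meta_lr=0.5, seed=0):
    # "Meta-learning stage": loop over many tasks (here, lines y = a*x with
    # varying slope a, standing in for many faces on video) and nudge a shared
    # initialization toward whatever each task's adapted weights became.
    rng = random.Random(seed)
    w_meta = 0.0
    for _ in range(num_tasks):
        a = rng.uniform(1.0, 3.0)            # this task's target slope
        w = w_meta
        for _ in range(inner_steps):
            x = rng.uniform(-1.0, 1.0)
            w = sgd_step(w, x, a * x)
        w_meta += meta_lr * (w - w_meta)     # Reptile-style meta-update
    return w_meta

def few_shot_adapt(w_meta, x, y, steps=3):
    # "Few-shot stage": adapt the meta-learned initialization to a brand-new
    # task from a single (x, y) example, analogous to one source photo.
    w = w_meta
    for _ in range(steps):
        w = sgd_step(w, x, y)
    return w
```

Because the initialization already encodes the shared structure of the task family, adaptation from one example moves it much closer to a new target than training from scratch would — the same reason the Samsung system needs so little source data per face.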