Intel shows off FakeCatcher, but the deepfake challenge might be too big

U.S. chipmaker Intel is talking about its efforts to create deepfake busters, although the message is not as convincing as some of the fabricated images in the wild today.
Intel’s FakeCatcher software uses at least two techniques to identify, with 96 percent accuracy, fabricated videos of avatars and of genuine humans doing things they didn’t really do, the company boasts.
In an article for the BBC, excitable Intel Labs research scientist Ilke Demir says one method used is photoplethysmography, which detects and measures the subtle, largely invisible color changes in a face as blood is pumped through its countless vessels with every heartbeat.
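FakeCatcher’s exact pipeline is proprietary, but the general remote-photoplethysmography (rPPG) idea Demir describes can be sketched in a few lines. The snippet below is a minimal illustration, assuming OpenCV and SciPy; the function names, the simple green-channel average over a detected face, and the frequency band are illustrative choices, not Intel’s method.

```python
# Illustrative rPPG sketch, NOT Intel's FakeCatcher pipeline.
# Requires: opencv-python, numpy, scipy.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def green_channel_trace(video_path, max_frames=300):
    """Average green-channel intensity over a detected face region, per frame."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    trace = []
    while len(trace) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        roi = frame[y:y + h, x:x + w]
        # Green channel carries the strongest blood-volume pulse signal.
        trace.append(roi[:, :, 1].mean())
    cap.release()
    return np.asarray(trace), fps

def pulse_band_energy(trace, fps):
    """Band-pass to plausible heart rates (~42-240 bpm) and report what share
    of the signal's variance falls in that band. Real faces tend to show a
    clear pulse peak there; many synthetic faces do not."""
    nyq = fps / 2.0
    b, a = butter(3, [0.7 / nyq, 4.0 / nyq], btype="band")
    filtered = filtfilt(b, a, trace - trace.mean())
    return filtered.var() / trace.var()
```

A real detector would be far more robust, with skin-region segmentation, motion compensation and a learned classifier over the extracted signals, but the underlying cue is the same: a heartbeat leaves a periodic trace in facial color that current generators do not reproduce.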
Photoplethysmography is not a new concept, and while deepfake videos do not, or cannot, replicate those signals today, it would seem to be far from the biggest hurdle the videos’ makers face in manufacturing their distortions of reality.
Demir also described the trouble AI software has keeping a deepfake avatar’s eyes parallel and converged on the same point in space. Inconsistent gaze is a red flag for FakeCatcher, she said.
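Demir does not spell out how FakeCatcher scores gaze, but the geometric intuition is easy to show. The toy sketch below assumes some upstream gaze-estimation model has already produced one 3D gaze direction per eye per frame; the threshold and the per-clip decision rule are invented for illustration.

```python
# Toy gaze-consistency check, not FakeCatcher's actual feature.
# Inputs: per-frame (left_gaze, right_gaze) pairs of 3D direction vectors,
# assumed to come from any off-the-shelf gaze-estimation model.
import numpy as np

def gaze_divergence_deg(left_gaze, right_gaze):
    """Angle in degrees between the two eyes' unit gaze vectors."""
    l = left_gaze / np.linalg.norm(left_gaze)
    r = right_gaze / np.linalg.norm(right_gaze)
    cos = np.clip(np.dot(l, r), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def looks_fake(per_frame_gazes, threshold_deg=10.0, bad_frame_share=0.3):
    """Red-flag a clip if the eyes disagree beyond the threshold in a large
    share of frames. Real gaze is roughly parallel; generators often let the
    eyes drift apart. Both parameters here are illustrative, not Intel's."""
    angles = np.array(
        [gaze_divergence_deg(l, r) for l, r in per_frame_gazes])
    return np.mean(angles > threshold_deg) > bad_frame_share
```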
The task of finding and eliminating doctored video and audio could ultimately be beyond Intel’s or anyone else’s grasp, however. A second, related BBC story showed the difficulty that people and software working together have in ridding the world of far more obvious harmful content, deepfaked and genuine alike.
The reporter watches, on camera, a deepfake of child abuse found online by the Internet Watch Foundation, which is licensed to seek out, view and try to stop all such depictions. The footage, which the article in no way shows, knocks the reporter so far off his center of gravity that the viewing is stopped while he collects himself.
Although much is possible, the prospect of ever preventing all dangerous deepfakes, which grow more nuanced all the time, may be beyond even tireless AI.