Demographic-agnostic tool for deepfake detection shows promise
A group of U.S. researchers say they have developed what they describe as the first deepfake detection algorithms designed specifically to be less biased.
The team tested the idea on a well-known algorithm and dataset and report that overall detection accuracy rose from 91.49 percent to 94.17 percent. Citing separate research on detection algorithms, they note that deepfake detection error rates have been found to differ by as much as 10.7 percent between races.
The scientists work at the University at Buffalo, Indiana University-Purdue University and Carnegie Mellon University.
UB researcher Yan Ju says that past efforts have focused on bias in face recognition tools, a critical priority, but have paid too little attention to bias in deepfake detectors. The team's perspective differed from conventional thinking in another way as well.
Rather than trying to balance biometric databases, the researchers made the algorithms themselves fairer. The team claims this approach is a first.
For the project, supported in part by the U.S. Defense Advanced Research Projects Agency (DARPA), the researchers created two machine learning methods. One made algorithms aware of various demographics while the other was “demographic agnostic.”
The methods reportedly reduced accuracy disparities "across races and genders." Overall accuracy improved, too.
“We’re essentially telling the algorithms that we care about overall performance, but we also want to guarantee that the performance of every group meets certain thresholds, or at least is only so much below the overall performance,” says the study’s lead author, UB computer scientist Siwei Lyu.
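A rough sense of what such a constraint can look like in practice is sketched below. This is a generic fairness-penalty formulation written for illustration only, not the study's actual method; the function name, the margin value and the PyTorch-style setup are all assumptions.

```python
import torch
import torch.nn.functional as F

def fairness_aware_loss(logits, labels, groups, margin=0.05):
    """Standard cross-entropy plus a penalty whenever any demographic
    group's average loss exceeds the overall average by more than `margin`.

    Illustrative only: the names, the margin and the exact penalty form
    are assumptions, not the formulation used in the study.
    """
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    overall = per_sample.mean()

    penalty = logits.new_zeros(())
    for g in torch.unique(groups):
        group_loss = per_sample[groups == g].mean()
        # Only groups performing worse than the overall average (beyond the
        # allowed margin) contribute to the penalty.
        penalty = penalty + torch.relu(group_loss - overall - margin)

    return overall + penalty
```

The idea is simply that the optimizer keeps pursuing overall accuracy but pays an extra cost whenever any one group falls too far behind it.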
What's more, the demographic-agnostic algorithm is freed from datasets' demographic bias because it classifies deepfake videos based on features that are "not immediately visible to the human eye."
In their tests, the researchers used the Xception algorithm with multiple datasets and the FaceForensics++ dataset with other algorithms, and the new methods largely held up.
There was an improvement in “most fairness metrics” with “slightly reduced overall detection accuracy.”
Lyu says the tradeoff is worth it as improvements are made to biometric datasets.
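As background, fairness in this context is typically measured by comparing detection performance across demographic groups. The snippet below shows one simple such metric, the largest accuracy gap between any two groups; it is offered only as an illustration and is not one of the metrics reported in the study.

```python
import numpy as np

def accuracy_gap(predictions, labels, groups):
    """Largest difference in detection accuracy between any two demographic
    groups; a smaller gap indicates more equitable performance.

    Illustrative metric only; the study reports its own fairness metrics.
    """
    accuracies = []
    for g in np.unique(groups):
        mask = groups == g
        accuracies.append(float((predictions[mask] == labels[mask]).mean()))
    return max(accuracies) - min(accuracies)
```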
Article Topics
biometric-bias | biometrics | biometrics research | deepfake detection | deepfakes