Use bug bounty to tackle biometric algorithm bias, Algorithmic Justice League researcher argues
Deborah Raji, a research fellow in algorithmic harms at the Mozilla Foundation, has suggested that bug bounties could be used to tackle algorithmic bias in facial recognition applications.
The findings were presented earlier this week at the annual Mozilla Festival, an event exploring the nexus of AI and power, labor, truth, and other critical issues.
Raji’s research was conducted together with advocacy group the Algorithmic Justice League (AJL) and analyzed how bug bounty programs could be deployed to detect algorithmic bias in biometrics.
“When you release software, and there is some kind of vulnerability that makes the software liable to hacking, the information security community has developed a bunch of different tools that they can use to hunt for these bugs,” Raji told ZDNet in an interview.
“Those are concerns that we can see parallels to with respect to bias issues in algorithms,” she added.
Bias is a recurrent issue in face biometrics, and just last month allegations of biased performance by biometric systems in law enforcement and education resulted in legal actions in the U.S.
According to the research fellow, the first issue to solve during the research was the definition of algorithmic harm from a programming perspective, as bias is inherently subjective.
And even if such definitions were established, a methodology for detecting bias would then have to follow, which could disrupt the established engineering processes behind certain products.
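To make the detection problem concrete, here is a minimal illustrative sketch (not taken from the AJL research) of one way a bias "bug" in a face-matching system could be quantified: comparing false match rates across demographic groups. The group labels and data are hypothetical.

```python
from collections import defaultdict

def false_match_rate_by_group(results):
    """results: list of (group, predicted_match, actual_match) tuples.
    Returns the false match rate per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [false matches, non-match trials]
    for group, predicted, actual in results:
        if not actual:              # only non-matching pairs can yield a false match
            counts[group][1] += 1
            if predicted:
                counts[group][0] += 1
    return {g: fm / total for g, (fm, total) in counts.items() if total}

# Hypothetical evaluation data: all pairs are true non-matches
results = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", False, False),
]
rates = false_match_rate_by_group(results)
# The gap between groups is what a bias bounty hunter might report
disparity = max(rates.values()) - min(rates.values())
```

A reportable finding in a bias bounty program could then be framed as "group B's false match rate exceeds group A's by X points," which gives vendors a measurable target, much as a proof-of-concept exploit does in a conventional bug bounty.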
This issue, Raji explained, could potentially be solved by companies voluntarily investing in ethical technology development. The populations affected by algorithmic bias in biometric and other systems, however, are typically not paying customers.
A more likely path, the research fellow added, would be to impose stricter regulations on corporations.
“I think that cooperation is only going to happen through regulation or extreme public pressure,” she concluded.