Problems with audits for bias in AI systems highlighted in research paper
Sasha Costanza-Chock, co-author of a research paper on algorithmic audits, says many parts of the auditing process need improvement to bolster its effectiveness and reduce harms from bias in real-world AI, such as facial recognition systems.
Speaking about the Algorithmic Justice League paper on a recent episode of the technology news podcast Marketplace, Costanza-Chock says it is currently very difficult to determine the effectiveness of algorithmic audits because of non-disclosure agreements binding first- and second-party auditors, who have the most access to the data and systems of the companies they audit.
Bias has been found not only in biometric matching algorithms, but also in adjacent areas like liveness detection and in unrelated AI applications.
While putting together the research paper, which identifies emerging best practices as well as methods and tools for AI audits, the team found considerable variation in how algorithmic audits are conducted, since there is no harmonized standard or regulation defining what auditors should look for, the co-author said. Some audits focus on the accuracy or fairness of training and sample data, while others examine the privacy and security implications of the systems under review. Only about half of the auditors the team spoke to said they check whether companies have systems in place for users to report AI bias harms in real time.
“Well, only half of the auditors right now say that they’re looking for a way for people who’ve been harmed to report that harm back to the company. That’s something that we think should be important to look at in any audit. But it’s not happening all the time,” she said.
Asked whether any efforts are underway to improve transparency and accountability in algorithmic audits, Costanza-Chock said it is difficult to say, in part because audit findings are often not disclosed to the public. She believes this could change, however, as the survey notes broad support for regulating algorithmic audits.
“There are now increased calls for regulators to carefully audit systems. So, for example, the Federal Trade Commission can potentially evaluate algorithms as to whether they are discriminatory. And if they find that an algorithmic system was developed without consent, they can actually order the company to destroy the data set on the algorithm. So, there are cases where a government regulator has power and access but we need a lot more of that,” she says.
Apart from providing platforms where complaints of AI harm can be easily submitted, the Algorithmic Justice League study also calls on companies to gather sufficient feedback from communities at risk of harm early in the development of their systems.
Meanwhile, a YouTube video summarizes how the study was conducted and presents five policy recommendations for addressing some of the problems in algorithmic audits.
The video explains that the study captured the views of 438 individuals and 189 organizations whose work is directly relevant to algorithmic auditing. Some of the auditors interviewed have extensive experience with audits of industry uses in social media, employment, consumer goods, insurance and credit, while others have experience working for state and local governments, per the video.
In the study, 82 percent of the auditors surveyed hold that public disclosure of audit results should be legally mandated, and about half believe there should be regulation explicitly defining what an algorithmic audit must entail. Others, however, think there should be standards and guidelines, but no explicit definitions.