Problems with audits for bias in AI systems highlighted in research paper

Sasha Costanza-Chock, co-author of a research paper examining algorithmic audits, says many areas require improvement to bolster the effectiveness of the auditing process and reduce harms from bias in AI deployed in the real world, such as facial recognition systems.

Speaking about the Algorithmic Justice League paper on a recent episode of the technology news podcast Marketplace, Costanza-Chock posits that it is currently very difficult to determine the effectiveness of algorithmic audits because of non-disclosure agreements binding first- and second-party auditors, who have the most access to the data and systems of the companies they audit.

Bias has been found in algorithms not only related to biometric matching, but also in adjacent areas like liveness detection, as well as in unrelated AI applications.

While putting together the research paper, which identifies emerging best practices as well as methods and tools for AI audits, the team found considerable variation in the algorithmic auditing process, as there is no harmonized standard or regulation on what auditors should look for, said the co-author. Some audits focus on the accuracy or fairness of training and sample data, while others examine the privacy and security implications of the systems under audit. Only about half of the auditors they spoke to said they check whether companies have adequate systems for users to report harms from AI bias in real time.

“Well, only half of the auditors right now say that they’re looking for a way for people who’ve been harmed to report that harm back to the company. That’s something that we think should be important to look at in any audit. But it’s not happening all the time,” she said.

On whether any efforts are underway to improve transparency and accountability in algorithmic audits, Costanza-Chock said it was difficult to say, owing to several factors, including the non-disclosure of audit findings to the public. However, she believes this could change in the future, as the survey notes broad support for regulation of algorithmic audits.

“There are now increased calls for regulators to carefully audit systems. So, for example, the Federal Trade Commission can potentially evaluate algorithms as to whether they are discriminatory. And if they find that an algorithmic system was developed without consent, they can actually order the company to destroy the data set and the algorithm. So, there are cases where a government regulator has power and access but we need a lot more of that,” she says.

Apart from having platforms where AI harm complaints can be easily submitted, the study conducted by the Algorithmic Justice League also calls on companies to gather sufficient feedback from communities at risk of harm early in the process of developing their systems.

Meanwhile, a YouTube video summarizes how the study was conducted and makes five policy recommendations for addressing some of the problems in algorithmic audits.

The video explains that the study captured the views of 438 individuals and 189 organizations engaged in AI audits and whose work is directly relevant to algorithmic auditing. Some of the auditors interviewed have extensive experience with audits touching on industry uses in social media, employment, consumer goods, insurance and credit, while others have experience working for state and local governments, per the video.

In the study, 82 percent of the auditors surveyed hold that public disclosure of audit results should be legally mandated, and about half believe regulation should explicitly define what an algorithmic audit must entail. Others, however, think there should be standards and guidelines, but no explicit definitions.
