
AI bias poorly understood, activists warn

Biased artificial intelligence systems need to be reined in through a combination of regulation, education, and transparency, according to civil society experts discussing how AI impacts civil rights. The context in which technologies like facial recognition are deployed is critical to understanding their impacts, they say.

The Brookings Institution’s TechTank podcast tackles the topic in an episode titled: ‘Civil rights and artificial intelligence: Can the two concepts coexist?’.

Host Nicol Turner Lee, a senior fellow of governance studies and director of the Center for Technology Innovation, says multiple times towards the beginning of the episode that facial recognition has resulted in wrongful arrests of black men and women, but those cases and biometrics are not the main focus of the program.

Joining Turner Lee to discuss AI bias were Renee Cummings of the University of Virginia’s School of Data Science and Lisa Rice, National Fair Housing Alliance CEO.

Rice begins with the transparency barrier created by the deployment of AI systems, and the deep inequalities in the marketplace, referring to the U.S. legal and political systems.

Data driven systems must be scrutinized and “remodeled” to avoid inflicting harms on consumers, she says.

Cummings notes the potential benefits of AI deployment, and the risks that go along with that potential. Algorithms have become responsible for high-stakes decisions within the criminal justice system, according to Cummings, and are colliding with “race-based laws.” These are coded and designed into systems, perhaps subconsciously.

“Blackness as a data point for risk,” is the result.

Rice says there are thousands of race-based laws in U.S. history, and most people are unaware of that fact, which hinders them from coming to terms with the racial context within which algorithms are created and deployed.

“Because people don’t know the history, they think it doesn’t exist,” Rice says.

The conversation turns to the coding of race into credit scores and zip codes, so that even systems that claim to be race-agnostic can perpetuate bias, and the related concept of “data trauma.” Similarly, the concept of “disparate impact,” coined as part of fair housing efforts by the Nixon Administration, refers to laws or policies that appear neutral but have a discriminatory effect in application.

Despite this, AI can also be deployed to address these disparities, including in policing, Cummings says. Increased transparency and early warning systems can flag officers departing from accepted practices.

Facial recognition is mentioned briefly in the context of AI surveillance systems.

Rice says the existing civil rights laws can be used to fight many existing instances of discrimination, but at the same time not every instance of algorithmic bias can be litigated. Federal regulators have failed to keep up with the technology, she says, making technology the next civil rights frontier.

Better measurement and better education for data scientists can help reduce algorithmic bias, the podcast’s participants say.

The EU’s approach to regulating algorithms, with facial recognition among the applications classified as ‘high-risk,’ can provide guidance for the U.S. in terms of its commitment to human rights.

The ‘Purpose, Process and Monitoring’ framework formulated by the National Fair Housing Alliance for algorithmic fairness could be broadly applied, Rice says, and together with better auditing systems can lead to better outcomes from the use of AI.

Wrongful arrests and new policies

Facial recognition is used in many arrests in which its role is not disclosed, according to a Wired article, which describes police leading witnesses and a lack of transparency in the use of the biometric technology by U.S. police.

Prosecutors are not required to disclose the use of facial recognition in the identification of criminal suspects, Wired writes, and the technology is not admissible as evidence.

The impact of mistaken arrests, even ones in which the record is eventually corrected, is detailed in a separate Wired article.

Eyewitness misidentification is known to be a common cause of conviction of people later exonerated by DNA evidence, the article says.

A public defender tells Wired that police often disguise the use of facial recognition to identify suspects by attributing the identification to a witness. The article includes guidance for how defense attorneys can tell if facial recognition was used in an investigation.

A law may need to be passed to force disclosure of the technology’s use, stakeholders suggest.

The use of Clearview AI by 44 Toronto police officers has resulted in a new policy against the unsanctioned use of AI by investigators, The Toronto Star reports. Some of the police who signed up for the service only tested it, but Clearview was used in 84 investigations before police were instructed to stop using it.

The risks to both privacy and prosecutions from the use of unapproved technologies, according to Jack Gemmel of the Law Union of Ontario, are obvious.

Toronto Police were introduced to Clearview’s app, according to an internal report, by a presentation given by the U.S. FBI and Department of Homeland Security at a conference in the Netherlands.
