FB pixel

Disparate impact laws needed to combat AI discrimination, says policy analyst

Disparate impact laws needed to combat AI discrimination, says policy analyst
 

A former deputy assistant to the president and deputy director of the White House Domestic Policy Council said in a new Brookings Institution commentary that Congress should enact legislation to address how AI discrimination “impacts people’s rights and opportunities.”

There’s “algorithmic discrimination” in the workplace, marketplace, health care, and the criminal justice system, wrote Chiraag Bains, a nonresident senior fellow with Brookings Metro and a consultant to the Democracy Fund who specializes in AI, democracy, and government programs that advance fairness and opportunity.

“AI can and does produce discriminatory results,” Bains says, noting that “AI works by using algorithms to process and identify patterns in large amounts of data, and then uses those patterns to make predictions or decisions when given new information.”

AI discrimination, also known as algorithmic discrimination, is when automated systems treat people differently or impact them in a way that is unjustified and based on a protected characteristic. This can include race, color, ethnicity, sex, religion, age, national origin, disability, veteran status, or genetic information.

To prevent AI discrimination, it’s widely accepted that designers, developers, and deployers of automated systems should take proactive measures, such as using representative data, ensuring accessibility for people with disabilities, testing and mitigating disparity before and after deployment, and having clear organizational oversight.

Bains said “researchers and technologists have repeatedly demonstrated that algorithmic systems can produce discriminatory outputs. Sometimes, this is a result of training on unrepresentative data. In other cases, an algorithm will find and replicate hidden patterns of human discrimination it finds in the training data.”

“As developments in artificial intelligence accelerate,” Bains says, “advocates have rightly focused on the technology’s potential to cause or exacerbate discrimination. Policymakers are considering a host of protections, from disclosure and audit requirements for AI products to prohibitions on the use of AI in sensitive contexts. At the top of their list should be an underappreciated but vital measure: legal liability for discrimination under the doctrine of disparate impact.”

Disparate impact laws provide legal remedies for people who are victims of discrimination based on race, sex, or another protected characteristic.

“This form of liability will be critical to preventing discrimination in a world where high-stakes decisions are increasingly made by complex algorithms,” Bains said, noting that “current disparate impact protection is not up to the task. It is found in a patchwork of federal statutes, many of which the courts have weakened over the years.”

Under US law, 26 federal funding agencies have Title VI regulations that include provisions addressing the disparate impact or discriminatory effects legal standard.

According to the US Department of Justice’s (DOJ) Civil Rights Division,” a growing body of social psychological research has also reaffirmed the need for legal tools that address disparate impact. This research demonstrates that implicit bias against people of color remains a widespread problem. Such bias can result in discrimination that federal agencies can prevent and address through enforcement of their disparate impact regulations.”

DOJ said “because individual motives may be difficult to prove directly, Congress has frequently permitted proof of only discriminatory impact as a means of overcoming discriminatory practices. The Supreme Court has, therefore, recognized that disparate impact liability under various civil rights laws, “permits plaintiffs to counteract unconscious prejudices and disguised animus that escape easy classification as disparate treatment.”

The White House Office of Science and Technology Policy’s (OSTP) Blueprint for an AI Bill of Rights, said that, “depending on the specific circumstances … algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight.”

Large Language Model (LLM) AI “developers have tried to mitigate bias through post-training methods such as fine-tuning and reinforced learning from human feedback,” Bains says, but added that “early research indicates LLMs indeed produce stereotyped and otherwise biased results.”

“For instance, in one study, LLMs tended to associate successful women with empathy and patience, and successful men with knowledge and intelligence,” Bains pointed out. And in another study, the LLM “associated Muslims with terrorism and violence. In still another, LLMs associated the word ‘black’ with weapons and terms connoting guilt, while associating the word ‘white’ with innocuous items and terms connoting innocence.”

“Scenario-based testing has suggested how such associations could cause discrimination as LLMs are integrated into real-world decision systems,” Bains warns.

“Disparate impact liability helps root out discrimination that is unintentional but unjustified,” which is “precisely the risk with AI” and the reason “we need disparate impact liability,” Bains says. “Disparate impact allows us to address the vast amount of harmful and unnecessary discrimination that results from thoughtless or overly broad policies.”

“Disparate impact allows us to prevent and address algorithmic discrimination,” Bains advocates, but says “existing disparate impact law is inadequate to address algorithmic discrimination. Unfortunately, these provisions are insufficient to guard against many common forms of algorithmic discrimination.”

Bains argues that Congress needs to pass legislation “to strengthen disparate impact protection and enforcement. He says, “Congress should take at least three steps to better prepare our legal regime to deter and combat algorithmic discrimination.”

“First,” he says, “Congress should enact a federal law that prohibits discrimination – specifically including disparate impact – in the deployment of AI and other automated technology. Such legislation should cover the use of algorithms to make or inform decisions that impact people’s rights and opportunities.”

Second, Congress needs to codify “a private right of action to allow individuals who have suffered discrimination to file suit,” and “third, Congress should significantly increase funding for federal enforcement agencies.”

Bains said because “these new authorities and resources aren’t a panacea for algorithmic discrimination, we’ll need comprehensive privacy legislation, AI governance standards, transparency requirements, and a host of other measures.”

“We need our policymakers to get started and prioritize strong liability rules,” he says. And “we must strengthen our legal system’s ability to prevent discrimination before AI is integrated into rights-impacting systems across our economy. The next president and Congress should prioritize enacting comprehensive disparate impact rules for AI in early 2025.”

Related Posts

Article Topics

 |   |   |   |   |   |   | 

Latest Biometrics News

 

Biometrics cycle from innovations to scale-up opportunities

Biometrics integrations range from the experimental to the everyday in the most-read articles of the week on Biometric Update. Yesterday’s…

 

US Justice developing AI use guidelines for law enforcement, civil rights

The US Department of Justice (DOJ) continues to advance draft guidelines for the use of AI and biometric tools like…

 

Airport authorities expand biometrics deployments with Thales, Idemia tech

Biometric deployments involving Thales, Idemia and Vision-Box, alongside agencies like the TSA,  highlight the aviation industry’s commitment to streamlining operations….

 

Age assurance laws for social media prove slippery

Age verification for social media remains a fluid issue across regions, as stakeholders argue their positions to courts and governments,…

 

ZeroBiometrics passes pioneering BixeLab biometric template protection test

ZeroBiometrics’ face biometrics software meets the specifications for template protection set out in the ISO/IEC 30136, according to a pioneering…

 

Apple patent filing aims for reuse of digital ID without sacrificing privacy

A patent filing from Apple for ensuring a presented reusable digital ID belongs to the person holding it via selfie…

Comments

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.

Most Read This Week

Featured Company

Biometrics Insight, Opinion

Digital ID In-Depth

Biometrics White Papers

Biometrics Events