AI industry, government will have to dig deep to end racial bias in algorithms: Brookings

To solve bias in facial recognition in the United States, the country must first address systemic racial bias in its culture, recommends noted centrist think tank The Brookings Institution.

In-house authors, posting on Brookings’ TechTank blog, are not being flippant or naïve. They are building on a conclusion that similar industry observers have recently reached: everything that goes into AI, from seed funding to services, comes out of a world of conscious and subconscious bias.

AI cannot help but reflect those ingrained prejudices.

A year or so ago, the most complex topic in the already very complex world of AI algorithms was how scientists can tear harmful bias out of their math.

Nicol Turner Lee, a Brookings senior fellow in governance studies and co-editor of TechTank, and Brookings research assistant Samantha Lai write approvingly of a growing national discussion about AI governance (specifically, the National AI Initiative Act of 2020).

One of the law’s six “pillars,” advancing trustworthy AI, is seen by the government as fundamental to making algorithms a powerful and faithful servant of humankind.

If nothing else, untrustworthy AI will be resisted by societies generally.

But in reading documents posted by the National AI Initiative Office (created by the 2020 act), it is clear that leaders of the effort view all AI as distinct from the surrounding culture.

Last year, The Brookings Institution convened an industry roundtable discussion on AI’s anticipated impact on competitiveness and workforce issues and on whether the federal government is adequately overseeing AI systems.

According to Brookings, though opinions diverged on points as basic as a definition for AI bias, the panelists agreed that “diversity and inclusion are treated as afterthoughts in AI development and execution.”

And “when systems go awry … quick fixes that do not address the breadth of such harmful technology” are applied.

The post singles out facial recognition as a “red flag” use case because of the limited oversight it is subject to despite the harm it can create.

Its shortcomings have resulted in accusations of wrongful arrests and de facto mandatory surveillance for people whose only shelter option is public housing.

It is also worth noting that U.S. government bodies NIST and DHS S&T conduct assessments of bias in facial recognition algorithms.

Government and industry leaders must “trace back to the roots” of bias in code, according to Brookings.

Assuming there are no fixes for systemic racism in the nation, AI developers and deployers have to understand the cultural problems and work to be inclusive in the processes of educating, training, hiring, creating and governing.

Even then, the writers say it is only “possible” that anti-racism ethics can itself take root in AI.

