AI industry, government will have to dig deep to end racial bias in algorithms: Brookings

To solve bias in facial recognition in the United States, the country must first address systemic racial bias in its broader culture, recommends the Brookings Institution, a noted centrist think tank.

In-house authors, posting on Brookings’ TechTank blog, are not being flippant or naïve. They are drawing the same conclusion that other industry observers have recently reached: everything that goes into AI, from seed funding to services, comes out of a world of conscious and subconscious bias.

AI cannot help but reflect those ingrained prejudices.

A year or so ago, the most complex question in the already very complex world of AI algorithms was how scientists could tear harmful bias out of their math.

Nicol Turner Lee, a Brookings senior fellow in governance studies and co-editor of TechTank, and Brookings research assistant Samantha Lai write approvingly of a growing national discussion about AI governance (specifically, the National AI Initiative Act of 2020).

One of the law’s six “pillars,” advancing trustworthy AI, is seen by the government as fundamental to making algorithms a powerful and faithful servant of humankind.

If nothing else, societies will broadly resist untrustworthy AI.

But in reading documents posted by the National AI Initiative Office (created by the 2020 act), it is clear that leaders of the effort view all AI as distinct from the surrounding culture.

Last year, The Brookings Institution convened an industry roundtable discussion on AI’s anticipated impact on competitiveness and workforce issues and on whether the federal government is adequately overseeing AI systems.

According to Brookings, though opinions diverged on points as basic as a definition for AI bias, the panelists agreed that “diversity and inclusion are treated as afterthoughts in AI development and execution.”

And “when systems go awry … quick fixes that do not address the breadth of such harmful technology” are applied.

The post singles out facial recognition as a “red flag” use case because of the limited oversight it is subject to despite the harm it can create.

Its shortcomings have resulted in accusations of wrongful arrests and de facto mandatory surveillance for people whose only shelter option is public housing.

It is also worth noting that U.S. government bodies NIST and DHS S&T conduct assessments of bias in facial recognition algorithms.
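
For a concrete sense of what such an assessment measures, here is a minimal sketch in Python of the kind of per-group error-rate comparison that demographic evaluations like NIST’s report: false match and false non-match rates computed separately for each demographic group at a fixed decision threshold. The data, field names and threshold here are hypothetical, not taken from any NIST or DHS S&T protocol.

```python
# Minimal sketch of a per-group bias assessment for a face matcher.
# Each trial is a (similarity score, same-person label, group) tuple;
# all values and the threshold below are illustrative assumptions.
from collections import defaultdict

THRESHOLD = 0.80  # scores at or above this count as a "match"

trials = [
    # (score, is_same_person, group)
    (0.91, True,  "group_a"),
    (0.62, True,  "group_a"),
    (0.85, False, "group_a"),
    (0.95, True,  "group_b"),
    (0.40, False, "group_b"),
    (0.83, False, "group_b"),
]

def per_group_error_rates(trials, threshold):
    """Return {group: (FMR, FNMR)}: false match rate over impostor
    pairs and false non-match rate over genuine pairs, per group."""
    counts = defaultdict(lambda: {"fm": 0, "imp": 0, "fnm": 0, "gen": 0})
    for score, same_person, group in trials:
        c = counts[group]
        if same_person:
            c["gen"] += 1
            if score < threshold:
                c["fnm"] += 1  # genuine pair wrongly rejected
        else:
            c["imp"] += 1
            if score >= threshold:
                c["fm"] += 1   # impostor pair wrongly accepted
    return {
        g: (c["fm"] / c["imp"] if c["imp"] else 0.0,
            c["fnm"] / c["gen"] if c["gen"] else 0.0)
        for g, c in counts.items()
    }

for group, (fmr, fnmr) in per_group_error_rates(trials, THRESHOLD).items():
    print(f"{group}: FMR={fmr:.2f}, FNMR={fnmr:.2f}")
```

A large gap between groups in either rate at the same threshold is the kind of disparity these evaluations are designed to surface.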

Government and industry leaders must “trace back to the roots” of bias in code, according to Brookings.

Assuming there are no fixes for systemic racism in the nation, AI developers and deployers have to understand the cultural problems and work to be inclusive in the processes of educating, training, hiring, creating and governing.

Even then, the writers say it is only “possible” that anti-racism ethics can itself take root in AI.
