Scientists, sociologists speak out against biometrics research that allegedly predicts criminals

Back in May, a group of professors and a Ph.D. student at Harrisburg University in Pennsylvania claimed to have developed automated facial recognition software capable of predicting, without bias, whether an individual will become a criminal based solely on facial features.

Not only did the announcement spark outrage in the biometrics industry, but more than 500 tech experts and sociologists, united under the Coalition for Critical Technology, have since written a letter to publisher Springer Nature asking it not to publish the study or similar research claiming to predict criminal behavior, OneZero reports.

“How might the publication of this work and its potential uptake legitimize, incentivize, monetize, or otherwise enable discriminatory outcomes and real-world harm?” the organizers wrote. “These questions aren’t abstract.”

In the initial press release, the university claimed the tool was intended for use by law enforcement and achieved 80 percent accuracy with no bias. After retracting the press release, the university said it would issue an update but was not available for immediate comment. Springer Nature says the paper was ultimately rejected.

The Coalition says the research is part of a recurring line of academic work with little to no scientific grounding. Even though such studies have been repeatedly exposed, research suggesting a connection between a person's face and their behavior continues to appear.

“Black women and Black scholars and practitioners have led the critique on this work over the last 50 years,” Theodora Dryer, a research scientist at NYU and Coalition member, told OneZero. “They have shown time and time and time again that prediction of criminality is intrinsically racist and reproductive of structures of power that exclude them.”

The letter was also signed by biometrics researchers Joy Buolamwini, Timnit Gebru, and Inioluwa Deborah Raji. All three have won the AI for Good category in VentureBeat’s AI Innovation Awards for their research into algorithmic bias in facial recognition.

The Coalition writes that because the underlying data comes from a “racist criminal justice system and set of laws,” any algorithm trained on it will likely be racist as well.

“Criminality cannot be predicted. Full stop. Criminality is a social construct and not something in the world that we can empirically measure or capture visually or otherwise,” said Sonja Solomun, a research director at McGill University’s Centre for Media, Technology, and Democracy and signatory to the letter.

Harrisburg University is not the only institution publishing this type of work. In 2016, academics from a Chinese university claimed their technology could identify criminals through facial analysis without bias. That work was debunked by Google and Princeton researchers, who in 2018 also invalidated research by Stanford researcher Michal Kosinski, who claimed his algorithm could determine a person’s sexual orientation.

“Like computers or the internal combustion engine, AI is a general-purpose technology that can be used to automate a great many tasks, including ones that should not be undertaken in the first place,” the researchers wrote at the time.
