Biometric software that allegedly predicts criminals based on their face sparks industry controversy
A group of academics and a Ph.D. student from Harrisburg University of Science and Technology in Pennsylvania have developed automated biometric facial recognition software that allegedly predicts criminal behavior from an individual's face, the university announced.
The researchers claim the technology has no racial bias and is 80 percent accurate in predicting whether an individual is a criminal based on the facial features in a photograph. The software was developed to help law enforcement agencies.
The research is titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” and was conducted by Ph.D. student and NYPD veteran Jonathan W. Korn, Prof. Nathaniel J.S. Ashby, and Prof. Roozbeh Sadeghian.
“We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,” Sadeghian said in a prepared statement. “This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.”
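The paper itself has not been published, but its title describes a standard supervised deep-learning setup: a neural network trained on labeled face images to emit a binary prediction. The following is a minimal sketch of that general approach in PyTorch; the architecture, input size, and random stand-in data are illustrative assumptions, not the authors' actual model or dataset.

```python
# Minimal sketch of a binary face-image classifier of the general kind the
# paper's title describes. This is NOT the authors' code: the architecture,
# 64x64 input size, and random stand-in data are illustrative assumptions.
import torch
import torch.nn as nn

class FaceBinaryClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Two small convolutional blocks extract image features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A fully connected head reduces the features to a single logit
        # for the binary label.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = FaceBinaryClassifier()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random tensors standing in for a batch
# of 8 RGB face images at 64x64 with binary labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

The "overtraining" objection raised by critics below concerns exactly this kind of pipeline: given a small or skewed training set, such a network can latch onto dataset artifacts (lighting, camera, mugshot framing) rather than anything about criminality, producing a high accuracy figure that does not generalize.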
The research is slated for inclusion in the Springer Nature book series “Transactions on Computational Science & Computational Intelligence.”
“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” Ashby said, in a prepared statement. “Our next step is finding strategic partners to advance this mission.”
“Crime is one of the most prominent issues in modern society. Even with the current advancements in policing, criminal activities continue to plague communities,” Korn added in a prepared statement. “The development of machines that are capable of performing cognitive tasks, such as identifying the criminality of [a] person from their facial image, will enable a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime from occurring in their designated areas.”
This research has sparked controversy on LinkedIn, where industry experts have shared opinions on its efficacy, privacy implications and ethics, calling the initiative “irresponsible,” “far-fetched” and “audaciously wrong,” as it may imply that people are born criminals. The conversation was initiated by Michael Petrov, VP of Technology at EyeLock.
“In all my many years in the field of biometrics, I have never seen a study more audaciously wrong and still thought provoking than this,” Petrov wrote. “It’s wrong in its motivation (people are not born as criminals, and facial appearance is something we inherit from our likely non-criminal ancestors), technology (algorithm overtraining) and, most importantly, human privacy implications (by implying that the police can predict future criminals and correct them ahead of crimes).”
Once the controversy broke, Harrisburg University pulled down the announcement, but the text can still be read at archive.today.
Tim Meyerhoff, Director at Iris ID Systems, wrote that he is “quite curious about the data used to train this algorithm and the ground truth which accompanies it. This does nothing to help privacy concerns and claims of bias.”
International Biometrics + Identity Association (IBIA) Executive Director Tovah LaDier told Biometric Update in an email that IBIA members have responded negatively, though they have also expressed a desire to see the research article to confirm their understanding. LaDier also compared the idea of biometric criminality prediction to phrenology, eugenics, astrology, and other pseudosciences, and expressed concern that it could “threaten facial recognition progress.”
This post was updated at 6:58pm Eastern on May 7, 2020 to remove it from the “facial recognition” category.