
Biometric software that allegedly predicts criminals based on their face sparks industry controversy

Categories Biometric R&D  |  Biometrics News
 


A group of academics and a Ph.D. student from Harrisburg University of Science and Technology in Pennsylvania have developed automated biometric facial recognition software that can allegedly predict criminal behavior in an individual, the university announced.

The researchers claim their technology has no racial bias and is 80 percent accurate in predicting whether an individual is a criminal from the facial features in a photograph. The software was developed to assist law enforcement agencies.

The research is titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” and was conducted by Ph.D. student and NYPD veteran Jonathan W. Korn, Prof. Nathaniel J.S. Ashby, and Prof. Roozbeh Sadeghian.

“We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,” Sadeghian said in a prepared statement. “This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.”

The research will be included in a book series named “Springer Nature – Research Book Series: Transactions on Computational Science & Computational Intelligence.”

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” Ashby said, in a prepared statement. “Our next step is finding strategic partners to advance this mission.”

“Crime is one of the most prominent issues in modern society. Even with the current advancements in policing, criminal activities continue to plague communities,” Korn added in a prepared statement. “The development of machines that are capable of performing cognitive tasks, such as identifying the criminality of person from their facial image, will enable a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime from occurring in their designated areas.”

The research has sparked controversy on LinkedIn, where industry experts have weighed in on its efficacy, privacy implications and ethical principles, calling the initiative “irresponsible,” “far-fetched” and “audaciously wrong” for implying that people are born criminals. The conversation was initiated by Michael Petrov, VP of Technology at EyeLock.

“In all my many years in the field of biometrics, I have never seen a study more audaciously wrong and still thought provoking than this,” Petrov wrote. “It’s wrong in its motivation (people are not born as criminals, and facial appearance is something we inherit from our likely non-criminal ancestors), technology (algorithm overtraining) and, most importantly, human privacy implications (by implying that the police can predict future criminals and correct them ahead of crimes).”

Once the controversy broke, Harrisburg University pulled down the announcement, but the text can still be read on Archive.today.

Tim Meyerhoff, Director at Iris ID Systems, wrote that he is “Quite curious about the data used to train this algorithm and the ground truth which accompanies it. This does nothing to help privacy concerns and claims of bias.”

International Biometrics + Identity Association (IBIA) Executive Director Tovah LaDier told Biometric Update in an email that IBIA members have responded negatively, though they have also expressed a desire to see the research article to confirm their understanding. LaDier also compared the idea of biometric prediction of criminality to phrenology, eugenics, astrology and other pseudosciences, and expressed concern that it could “threaten facial recognition progress.”

This post was updated at 6:58pm Eastern on May 7, 2020 to remove it from the “facial recognition” category.


