
Biometric software that allegedly predicts criminals based on their face sparks industry controversy


Two professors and a Ph.D. student from Harrisburg University of Science and Technology in Pennsylvania have developed automated biometric facial recognition software that can allegedly predict criminal behavior in an individual, the university announced.

The researchers claim their technology exhibits no racial bias and is 80 percent accurate in predicting whether an individual is a criminal based on the facial features in a picture. The software was developed to assist law enforcement agencies.

The research is titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” and was conducted by Ph.D. student and NYPD veteran Jonathan W. Korn, Prof. Nathaniel J.S. Ashby, and Prof. Roozbeh Sadeghian.

“We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,” Sadeghian said in a prepared statement. “This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.”

The research will be included in a book series named “Springer Nature – Research Book Series: Transactions on Computational Science & Computational Intelligence.”

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” Ashby said, in a prepared statement. “Our next step is finding strategic partners to advance this mission.”

“Crime is one of the most prominent issues in modern society. Even with the current advancements in policing, criminal activities continue to plague communities,” Korn added in a prepared statement. “The development of machines that are capable of performing cognitive tasks, such as identifying the criminality of person from their facial image, will enable a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime from occurring in their designated areas.”

The research has sparked controversy on LinkedIn, where industry experts have shared opinions on its efficacy, privacy implications and ethical principles, calling the initiative “irresponsible,” “far-fetched” and “audaciously wrong,” as it implies people are born criminals. The conversation was initiated by Michael Petrov, VP of Technology at EyeLock.

“In all my many years in the field of biometrics, I have never seen a study more audaciously wrong and still thought provoking than this,” Petrov wrote. “It’s wrong in its motivation (people are not born as criminals, and facial appearance is something we inherit from our likely non-criminal ancestors), technology (algorithm overtraining) and, most importantly, human privacy implications (by implying that the police can predict future criminals and correct them ahead of crimes).”

Once the controversy broke, Harrisburg University pulled down the announcement, but the text can still be read on Archive.today.

Tim Meyerhoff, Director at Iris ID Systems, wrote that he is “Quite curious about the data used to train this algorithm and the ground truth which accompanies it. This does nothing to help privacy concerns and claims of bias.”

International Biometrics + Identity Association (IBIA) Executive Director Tovah LaDier told Biometric Update in an email that IBIA members have responded negatively, though they have also expressed a desire to see the research article to confirm their understanding. LaDier also compared the idea of biometric criminality prediction to phrenology, eugenics, astrology, and other pseudosciences, and expressed concern that it could “threaten facial recognition progress.”

This post was updated at 6:58pm Eastern on May 7, 2020 to remove it from the “facial recognition” category.
