
Biometric software that allegedly predicts criminals based on their face sparks industry controversy


A group of academics and a Ph.D. student at Harrisburg University of Science and Technology in Pennsylvania have developed automated biometric facial recognition software that allegedly predicts criminal behavior from an individual's face, the university announced.

The researchers claim their technology exhibits no racial bias and is 80 percent accurate in predicting whether an individual is a criminal based on the facial features in a photograph. The software was developed to assist law enforcement agencies.

The research is titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” and was conducted by Ph.D. student and NYPD veteran Jonathan W. Korn, Prof. Nathaniel J.S. Ashby, and Prof. Roozbeh Sadeghian.

“We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,” Sadeghian said in a prepared statement. “This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.”

The research will be included in a book series named “Springer Nature – Research Book Series: Transactions on Computational Science & Computational Intelligence.”

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” Ashby said, in a prepared statement. “Our next step is finding strategic partners to advance this mission.”

“Crime is one of the most prominent issues in modern society. Even with the current advancements in policing, criminal activities continue to plague communities,” Korn added in a prepared statement. “The development of machines that are capable of performing cognitive tasks, such as identifying the criminality of person from their facial image, will enable a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime from occurring in their designated areas.”

The research has sparked controversy on LinkedIn, where industry experts have shared opinions on its efficacy, privacy and ethical principles, calling the initiative "irresponsible," "far-fetched" and "audaciously wrong," as it implies that people are born criminals. The conversation was initiated by Michael Petrov, VP of Technology at EyeLock.

“In all my many years in the field of biometrics, I have never seen a study more audaciously wrong and still thought provoking than this,” Petrov wrote. “It’s wrong in its motivation (people are not born as criminals, and facial appearance is something we inherit from our likely non-criminal ancestors), technology (algorithm overtraining) and, most importantly, human privacy implications (by implying that the police can predict future criminals and correct them ahead of crimes).”

Once the controversy broke, Harrisburg University pulled down the announcement, but the text can still be read at Archive.today.

Tim Meyerhoff, Director at Iris ID Systems, wrote that he is "Quite curious about the data used to train this algorithm and the ground truth which accompanies it. This does nothing to help privacy concerns and claims of bias."

International Biometrics + Identity Association (IBIA) Executive Director Tovah LaDier told Biometric Update in an email that IBIA members have responded negatively, though they have also expressed a desire to see the research article to confirm their understanding. LaDier also compared the idea of biometric prediction to phrenology, eugenics, astrology, and other pseudoscientific fields, and expressed concern that it could "threaten facial recognition progress."

This post was updated at 6:58pm Eastern on May 7, 2020 to remove it from the “facial recognition” category.


