November 30, 2016 -
Two scientists from Shanghai Jiao Tong University have published a controversial new study that uses an AI system to distinguish a criminal from a law-abiding citizen with an 85.5% accuracy rate, according to a report by the Telegraph.
Researchers Xiaolin Wu and Xi Zhang collected 1,856 ID photos of Chinese males aged 18 to 55, with no facial hair, scars, or other markings. Of this data set, 730 were images of criminals, provided by the police.
The researchers then built four different machine-learning classifiers and fed each a sample of the photo set, labeled to indicate which photos showed criminals, in order to 'train' them.
After this initial training phase, the algorithms were shown the remaining photos and asked to identify the criminals without any further cues.
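The procedure described above is standard supervised classification: train on labeled examples, then measure accuracy on held-out ones. The sketch below illustrates that workflow with synthetic, made-up feature vectors and a simple nearest-centroid classifier; it is not the study's data or any of the four algorithms the paper actually used.

```python
# Illustrative sketch of the train/test procedure described above.
# The 2-D feature vectors are hypothetical stand-ins, not real facial
# measurements, and nearest-centroid is a toy classifier chosen for brevity.
import random

random.seed(0)

def make_sample(label):
    # Hypothetical two-class data: class 0 clusters near (0, 0),
    # class 1 near (3, 3).
    center = (0.0, 0.0) if label == 0 else (3.0, 3.0)
    return ([random.gauss(c, 1.0) for c in center], label)

data = [make_sample(i % 2) for i in range(200)]
train, test = data[:150], data[150:]   # hold out samples the model never saw

# "Training": compute the mean feature vector (centroid) of each class
# from the labeled examples.
centroids = {}
for label in (0, 1):
    members = [x for x, y in train if y == label]
    centroids[label] = [sum(col) / len(members) for col in zip(*members)]

def predict(x):
    # Assign the class whose centroid is nearest (squared Euclidean distance).
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

# "Testing": accuracy on the held-out samples, with no further cues given.
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.1%}")
```

The reported 85.5%–89.5% figures in the study are the same kind of held-out accuracy measurement, computed over photos the classifiers had not seen during training.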
The accuracy of their guesses, based on features the systems associate with criminality, led the researchers to conclude that, "despite the historical controversy", convicts tend to share certain distinctive facial features.
“All four classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic,” Wu and Zhang wrote in the paper.
The convolutional neural network (CNN), an advanced form of machine learning technology, correctly identified the criminal in 89.5 percent of cases, a result “paralleled by all other three classifiers which are only few percentage points behind in the success rate of classification.”
The convicts, who included both serious and petty criminals, shared some common physical features that helped the system to identify them.
“We find some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle,” said the paper.
The researchers found that criminals had eyes set closer together, a more sharply curved upper lip, and, most prominently, faces that deviated greatly both from the norm and from each other.
“The faces of general law-abiding public have a greater degree of resemblance compared with the faces of criminals, or criminals have a higher degree of dissimilarity in facial appearance than normal people,” wrote the authors.
The scientists said that the system still needs to be tested using a dataset of different races, genders and facial expressions before it could be implemented on a broader scale.
In an effort to preempt some potential criticism, Wu and Zhang wrote in the paper that they neither intend nor are "qualified to discuss or debate on societal stereotypes", while their AI program harbors "no biases whatsoever due to past experience, race, religion, political doctrine, gender, age, etc."