NIST scientists develop algorithm that automates key step in fingerprint analysis
Scientists from the National Institute of Standards and Technology (NIST) and Michigan State University have developed an algorithm that automates a key step in the fingerprint analysis process, according to their findings, published in IEEE Transactions on Information Forensics and Security.
“We know that when humans analyze a crime scene fingerprint, the process is inherently subjective,” said Elham Tabassi, a computer engineer at NIST and a co-author of the study. “By reducing the human subjectivity, we can make fingerprint analysis more reliable and more efficient.”
Anil Jain, a computer scientist at Michigan State University and a co-author of the study, explains that latent fingerprints left at a crime scene “are often partial, distorted and smudged.”
When an examiner receives these latent prints from a crime scene, their first step is to gauge how much useful information they contain.
“This first step is standard practice in the forensic community,” said Jain. “This is the step we automated.”
After this first key step, if the print contains sufficient usable information, it can be sent to an Automated Fingerprint Identification System (AFIS).
The system searches its database and provides a list of potential matches, which the examiner then evaluates to find a conclusive match.
“If you submit a print to AFIS that does not have sufficient information, you’re more likely to get erroneous matches,” Tabassi said. On the other hand, “If you don’t submit a print that actually does have sufficient information, the perpetrator gets off the hook.”
Currently, the process of assessing print quality is completely subjective, and different examiners arrive at different conclusions.
Automating this first step will make the results consistent, which means that researchers will be able to study the errors and find ways to correct them over time, Tabassi said.
As a result, fingerprint examiners will be able to process evidence more efficiently to ultimately decrease backlogs, solve crimes faster, and spend more time on challenging prints that require more work.
To develop their algorithm, the researchers used machine learning to train the computer to recognize patterns by showing it examples.
To get these examples, the researchers had 31 fingerprint experts each analyze 100 latent prints and score the quality of each on a scale of 1 to 5. They used these prints and their scores to train the algorithm to figure out how much information a latent print contains.
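The paper's actual model is not described here, but the training setup — feature vectors per print, paired with expert quality scores of 1 to 5 — can be sketched with a simple least-squares fit. The features (minutiae count, ridge clarity, usable area) and all data below are hypothetical placeholders, not the study's real inputs:

```python
import numpy as np

# Hypothetical features per latent print: minutiae count (normalized),
# ridge clarity, fraction of usable area. Real systems use richer features.
rng = np.random.default_rng(0)
n_prints = 100
X = rng.random((n_prints, 3))

# Simulated expert quality scores on the 1-5 scale, loosely tied to features
# so the fit has something to learn.
true_w = np.array([2.0, 1.5, 0.5])
scores = np.clip(np.round(1 + X @ true_w), 1, 5)

# Fit a linear model (ordinary least squares) mapping features to scores.
A = np.hstack([X, np.ones((n_prints, 1))])  # append a bias column
w, *_ = np.linalg.lstsq(A, scores, rcond=None)

def predict_quality(features):
    """Predict a 1-5 quality score for a latent print's feature vector."""
    f = np.append(features, 1.0)
    return float(np.clip(f @ w, 1, 5))
```

A linear fit stands in here for whatever learner the researchers actually used; the point is only the supervised pipeline: expert-labeled prints in, a quality predictor out.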
Upon completing the training, researchers tested the algorithm’s performance by having it score a new series of latent prints.
The researchers submitted the scored prints to AFIS software connected to a database of over 250,000 rolled prints. Every latent print had a match in the database, and the researchers asked AFIS to find it.
If the scoring algorithm worked correctly, then AFIS’ ability to find the correct match should correlate with the quality score. That is, prints scored as low-quality should be more likely to generate erroneous results, and prints scored as high-quality should be more likely to return the correct match.
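The check described above can be sketched as a correlation test: for each evaluated print, pair its predicted quality score with a flag for whether AFIS returned the correct match, then see whether the match rate rises with the score. The data below is an invented toy sample, not the study's results:

```python
import numpy as np

# Hypothetical evaluation data: predicted quality (1-5) for each latent
# print, and whether AFIS returned the correct match (1) or not (0).
quality = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5])
correct = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

# Match rate at each quality level: fraction of correct AFIS hits.
levels = np.unique(quality)
hit_rate = np.array([correct[quality == q].mean() for q in levels])

# A working quality score should correlate positively with AFIS success.
r = np.corrcoef(quality, correct)[0, 1]
print(hit_rate)
print(round(r, 2))
```

On this toy data the match rate climbs from 0 at quality 1 to 1.0 at quality 5, and the correlation is clearly positive — the pattern the researchers were testing for, and the same metric by which human examiners' scores were compared against the algorithm's.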
By this metric, the scoring algorithm performed slightly better than the average of the human examiners who participated in the test.
The Michigan State Police provided the researchers with a large dataset of latent prints, after first stripping it of all identifying information.
The researchers will try another test using an even larger dataset, which will allow them to improve the algorithm’s performance and more accurately measure its error rate.
“We’ve run our algorithm against a database of 250,000 prints, but we need to run it against millions,” Tabassi said. “An algorithm like this has to be extremely reliable, because lives and liberty are at stake.”
As previously reported, federal scientists at NIST recently eliminated outdated requirements from the agency’s digital identity authentication guidelines, such as regularly changing passwords, while adding new standards for the use of biometrics, keysticks and other two-factor authentication tokens.