Scientists tighten up algorithms in earprint biometric research
A team of researchers say they have come up with a way to identify people more reliably and quickly by the unique structure of their ears, an endeavor other researchers have pursued since at least 2007.
More specifically, the scientists say they have improved upon earlier software experiments using earprints as a biometric key to access smart homes using phones. That effort, too, dates back years. At least one product has made it to market, with little success (more below).
Previous lab work on using ears for biometrics required fairly pristine conditions, limiting its broad practicality, according to a new paper by Sana Boujnah of École Nationale d’Ingénieurs de Tunis and fellow scientists.
Boujnah’s paper claims to demonstrate greater algorithm reliability under degraded, or less-than-ideal, conditions.
Ears might seem an odd choice for biometrics, but they are valuable because they do not change significantly over time and can be scanned without contact with a reader.
Boujnah’s innovation is in image recognition and comparison, not in optics or in how a phone app would interact with home-access electronics. And although she has discussed combining earprints with voice prints, her team left this second layer of biometric security for future experiments.
According to the paper, the team employed “an approach based on local and multiresolution features for ear recognition.”
The result was software that matched images in six-tenths of a millisecond, with an accuracy of 93.88 percent on the University of Science and Technology Beijing (USTB-I) database and 92.5 percent on the EVDDC, which the paper described as a new database.
The USTB-I database contained 180 images, three per subject, including frontal images and images under different illuminations. The EVDDC database comprised 111 people, with at least 12 images from each volunteer.
Phones were used to collect images from five to 30 centimeters away from subjects. Images were complicated by jewelry, hats, glasses, scarves, hair and the like.
Like previous research in the area, the new solution consisted of three steps: image preprocessing, feature extraction, and classification.
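The three-step flow can be sketched as a simple pipeline. Everything below is a hypothetical stand-in, not the authors' implementation: the preprocessing is plain intensity normalization, the "features" are a coarse histogram rather than Harris/wavelet descriptors, and classification is a nearest-neighbor match against an enrolled gallery.

```python
import numpy as np

def preprocess(img):
    """Normalize pixel intensities to [0, 1] (stand-in for the paper's preprocessing)."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-9)

def extract_features(img):
    """Summarize an image as a coarse intensity histogram
    (stand-in for the paper's local + frequency-domain features)."""
    hist, _ = np.histogram(img, bins=8, range=(0.0, 1.0))
    return hist / hist.sum()

def classify(features, gallery):
    """Match a feature vector to the closest enrolled subject (1-NN)."""
    names = list(gallery)
    dists = [np.linalg.norm(features - gallery[n]) for n in names]
    return names[int(np.argmin(dists))]

# Toy "ear images": subject A skews dark, subject B skews bright.
rng = np.random.default_rng(0)
img_a = rng.uniform(0.0, 1.0, size=(16, 16)) ** 2     # skewed toward 0
img_b = rng.uniform(0.0, 1.0, size=(16, 16)) ** 0.5   # skewed toward 1
gallery = {"A": extract_features(preprocess(img_a)),
           "B": extract_features(preprocess(img_b))}

# A fresh, slightly noisy capture of subject A should still match "A".
query = rng.uniform(0.0, 1.0, size=(16, 16)) ** 2
query = query + rng.normal(0.0, 0.02, size=query.shape)
match = classify(extract_features(preprocess(query)), gallery)
print(match)
```

The gallery-matching structure is the point here: enrollment stores one feature vector per subject, and recognition reduces to a distance comparison.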
Feature extraction combined local and frequency-domain features: the Harris algorithm detected local interest points, while texture features were derived from a spectral-saliency-based dual-tree complex wavelet transform.
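To make the local-feature step concrete, here is a minimal NumPy sketch of the Harris corner response (this is the standard textbook formulation, not the paper's code): gradients are combined into a structure matrix per pixel, and the response R = det(M) - k·trace(M)² is large where the image varies in both directions, as at the corner of a shape.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response per pixel, using central-difference
    gradients and a 3x3 box window in place of a Gaussian."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)            # image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a, r=1):
        # Sum each pixel's (2r+1)x(2r+1) neighborhood (wraps at edges).
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

# Toy image: a bright square on a dark background. The square's corners
# should score much higher than flat regions.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

A real pipeline would then keep the strongest responses (with non-maximum suppression) as interest points and describe the image patches around them.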
Random forest, support vector machine (SVM) and K-nearest neighbors (KNN) classifiers were tested as part of the experiment; the KNN approach recorded the best accuracy rates, noted above.
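KNN itself is simple enough to show in a few lines. This is a generic sketch of the classifier, with made-up two-dimensional vectors standing in for ear feature descriptors; the paper's actual features and distance metric may differ.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training vectors under Euclidean distance."""
    dists = np.linalg.norm(train_X - query, axis=1)   # distance to every sample
    nearest = np.argsort(dists)[:k]                   # indices of k closest
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]                  # majority label

# Hypothetical feature vectors for two enrolled subjects.
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],   # subject A
                    [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])  # subject B
train_y = np.array(["A", "A", "A", "B", "B", "B"])

print(knn_predict(train_X, train_y, np.array([0.05, 0.1])))  # → A
```

KNN's appeal in small biometric galleries is that it needs no training phase: enrollment just appends vectors, and the vote over k neighbors adds some robustness to a single noisy capture.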
Boujnah submitted a paper on the same topic in 2018, recommending that a database of earprints and voice prints under degraded conditions be created to move the technology along.
Ear recognition biometrics actually have been implemented on smartphones, though the model in question, the Siam 7X, did not start a trend.