Iris recognition systems are vulnerable to attack
Iris scan systems have drawn considerable scrutiny after academics assessing their vulnerability successfully duped them. One academic testing the penetrability of iris recognition systems is Javier Galbally, an Assistant Professor with the Biometric Recognition Group at Universidad Autonoma de Madrid.
Galbally has conducted extensive research and vulnerability assessments on a wide range of biometric recognition security systems, focusing on the synthetic generation of biometric traits. He has published several findings on the exploitation of biometrics, including direct attacks that present fake images to iris scanners.
In one research paper, Galbally reported that an iris sensor was tricked using fake iris images. The fraudulent images were drawn from a database of fake iris scans and presented to the sensor, which wrongly accepted about 40 percent of them as genuine. This means that an intruder who has access to fake images can ultimately beat a biometric system by exploiting that vulnerability.
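As a rough illustration, the sketch below shows how such a spoofing experiment might be scored. The matcher, its threshold, and the scores are all stand-ins invented for this example; an actual evaluation would run fake iris images from a database through a real sensor and matcher and record its accept/reject decisions.

```python
# Minimal sketch of scoring a spoof false-acceptance experiment.
# Everything here is a stand-in: a real test would query an actual
# iris sensor/matcher with fake images and count its acceptances.
import random

random.seed(0)

THRESHOLD = 0.80  # hypothetical acceptance threshold of the matcher


def match_score(fake_image_id: int) -> float:
    """Stand-in for a real matcher's similarity score against an
    enrolled template. Here we simply simulate scores for illustration."""
    return random.random()


def spoof_acceptance_rate(fake_image_ids: list[int]) -> float:
    """Fraction of fake images the (simulated) system wrongly accepts."""
    accepted = sum(1 for i in fake_image_ids if match_score(i) >= THRESHOLD)
    return accepted / len(fake_image_ids)


if __name__ == "__main__":
    fakes = list(range(1000))  # stand-in for a database of fake iris images
    print(f"Spoof acceptance rate: {spoof_acceptance_rate(fakes):.1%}")
```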
Another breach relied on genetic algorithms, evolving candidate inputs against a security system until it accepts them. This new approach to breaching security systems was later used to evaluate iris verification systems for their vulnerability to attack, and as many as 90 percent of the systems tested were successfully duped.
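The toy sketch below illustrates the genetic-algorithm idea: the verification system is treated as a black box that returns a similarity score, and a population of candidate codes is evolved until one crosses the acceptance threshold. The "matcher" here (Hamming similarity to a hidden target code), the code length, and the threshold are all illustrative assumptions, not details from Galbally's work.

```python
# Toy genetic-algorithm attack: evolve candidate bitstrings against a
# black-box similarity score until one is accepted. The scorer below is
# a stand-in; a real attack would query an actual verification system.
import random

random.seed(1)

CODE_LEN = 256           # toy iris-code length (real codes are longer)
POP_SIZE = 60
MUTATION_RATE = 0.01     # per-bit flip probability
ACCEPT_THRESHOLD = 0.95  # hypothetical acceptance threshold

_target = [random.randint(0, 1) for _ in range(CODE_LEN)]  # hidden template


def score(candidate: list[int]) -> float:
    """Black-box similarity score (stand-in for the real matcher)."""
    same = sum(a == b for a, b in zip(candidate, _target))
    return same / CODE_LEN


def crossover(a: list[int], b: list[int]) -> list[int]:
    cut = random.randrange(CODE_LEN)
    return a[:cut] + b[cut:]


def mutate(code: list[int]) -> list[int]:
    return [bit ^ (random.random() < MUTATION_RATE) for bit in code]


population = [[random.randint(0, 1) for _ in range(CODE_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(500):
    population.sort(key=score, reverse=True)
    best = score(population[0])
    if best >= ACCEPT_THRESHOLD:
        print(f"Accepted at generation {generation} (score {best:.3f})")
        break
    parents = population[: POP_SIZE // 2]  # keep the fittest half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
else:
    print(f"Best score after search: {max(map(score, population)):.3f}")
```

Note that the attacker never sees the enrolled template, only the returned scores, which is what makes this style of attack practical against systems that expose match scores.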
In the end, the goal of these exercises was not to design a foolproof or perfect system. Instead, vulnerability evaluators want to put more focus on developing countermeasures to protect biometric systems.
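One commonly discussed countermeasure direction is software-based liveness detection, which uses image-quality cues to separate live captures from printed or displayed fakes. Below is a minimal sketch under that assumption; the sharpness feature, the threshold, and the sample images are hypothetical and would have to be tuned and validated on real and fake captures.

```python
# Minimal liveness-detection sketch based on an image-quality cue:
# printed/displayed fakes often show less fine texture than live captures.
# Feature and threshold are illustrative assumptions, not a real detector.
import numpy as np

SHARPNESS_THRESHOLD = 50.0  # hypothetical, tuned on real/fake training data


def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness proxy: variance of a discrete Laplacian of the image."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())


def looks_live(gray: np.ndarray) -> bool:
    """Accept only captures whose sharpness exceeds the threshold."""
    return laplacian_variance(gray) > SHARPNESS_THRESHOLD


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    live = rng.normal(128, 40, (240, 320))   # high-texture stand-in capture
    fake = rng.normal(128, 0.5, (240, 320))  # smooth stand-in (blurred print)
    print("live capture passes:", looks_live(live))
    print("smooth fake passes:", looks_live(fake))
```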
What strategies can the biometrics industry adopt to protect systems from vulnerability exploits?