
Explainer: Face Recognition

 

Humans often use faces to recognize individuals, and advances in computing capability over the past few decades now make it possible to perform similar recognition automatically. Early face recognition algorithms used simple geometric models, but the recognition process has since matured into a science of sophisticated mathematical representations and matching processes. Major advancements and initiatives in the past 10 to 15 years have propelled face recognition technology into the spotlight. Face recognition can be used for verification (a one-to-one comparison against a claimed identity) and for identification (a one-to-many search against a gallery, which may be open-set or closed-set).
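
To make the verification/identification distinction concrete, the sketch below contrasts a one-to-one check against a claimed identity with a one-to-many gallery search. It is a minimal illustration rather than anything from the NSTC source: the gallery contents, the 128-dimensional feature vectors, and the threshold value are hypothetical stand-ins for the output of any of the algorithms described below.

```python
import numpy as np

# Hypothetical enrolled gallery: one feature vector (template) per known subject.
gallery = {
    "alice": np.random.rand(128),
    "bob": np.random.rand(128),
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return float(np.linalg.norm(a - b))

def verify(probe, claimed_id, threshold=0.8):
    """Verification (1:1): does the probe match the claimed identity?"""
    return distance(probe, gallery[claimed_id]) <= threshold

def identify(probe, threshold=0.8):
    """Open-set identification (1:N): return the best-matching identity,
    or None if nothing in the gallery is close enough (the probe may
    belong to an unenrolled subject)."""
    best_id, best_dist = min(
        ((name, distance(probe, tmpl)) for name, tmpl in gallery.items()),
        key=lambda pair: pair[1],
    )
    return best_id if best_dist <= threshold else None
```

In a closed-set identification scenario the probe is known to be enrolled, so the threshold test would be dropped and the nearest gallery entry returned unconditionally.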

Automated face recognition is a relatively new concept. Developed in the 1960s, the first semi-automated system for face recognition required the administrator to locate features (such as eyes, ears, nose, and mouth) on the photographs before it calculated distances and ratios to a common reference point, which were then compared to reference data. In the 1970s, specific subjective markers, such as hair color and lip thickness, were used to automate the recognition.

The problem with both of these early solutions was that the measurements and locations were manually computed. In 1988, principal component analysis, a standard linear algebra technique, was applied to the face recognition problem. This was considered something of a milestone, as it showed that fewer than 100 values were required to accurately code a suitably aligned and normalized face image.

In 1991, scientists discovered that, when using the eigenfaces technique, the residual error could be used to detect faces in images – a discovery that enabled reliable real-time automated face recognition systems. Although the approach was somewhat constrained by environmental factors, it nonetheless created significant interest in furthering the development of automated face recognition technologies. The technology first captured the public's attention through the media reaction to a trial implementation at the January 2001 Super Bowl, which captured surveillance images and compared them to a database of digital mugshots. This demonstration initiated much-needed analysis of how to use the technology to support national needs while remaining considerate of the public's social and privacy concerns. Today, face recognition technology is used to combat passport fraud, support law enforcement, identify missing children, and minimize benefit/identity fraud.

There are two predominant approaches to the face recognition problem: geometric (feature based) and photometric (view based). As researcher interest in face recognition continued, many different algorithms were developed, three of which have been well studied in the face recognition literature: Principal Components Analysis (PCA), Linear Discriminant Analysis (LDA), and Elastic Bunch Graph Matching (EBGM).

PCA, commonly referred to as the use of eigenfaces, is the technique that was pioneered in 1988. With PCA, the probe and gallery images must be the same size and must first be normalized to line up the eyes and mouth of the subjects within the images. The PCA approach is then used to reduce the dimension of the data by means of data compression basics, revealing the most effective low-dimensional structure of the facial patterns. This reduction in dimensions removes information that is not useful and precisely decomposes the face structure into orthogonal (uncorrelated) components known as eigenfaces. Each face image may be represented as a weighted sum (feature vector) of the eigenfaces, which is stored in a 1D array. A probe image is compared against a gallery image by measuring the distance between their respective feature vectors. The PCA approach typically requires the full frontal face to be presented each time; otherwise, performance is poor. The primary advantage of this technique is that it can reduce the data needed to identify an individual to 1/1000th of the data presented.
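
Below is a minimal sketch of the eigenface pipeline just described, assuming aligned, same-size grayscale images flattened into rows of a NumPy array. It uses NumPy's SVD in place of an explicit eigendecomposition of the covariance matrix (the two are equivalent for this purpose); the function names and the choice of 50 components are illustrative, not from the source.

```python
import numpy as np

def train_eigenfaces(images, num_components=50):
    """Learn an eigenface basis from aligned, same-size grayscale images.

    images: array of shape (n_samples, height * width), one flattened face
    per row. Returns the mean face and the top eigenfaces (as rows).
    """
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # SVD of the centered data; the rows of vt are the orthonormal
    # principal directions, i.e. the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:num_components]

def project(image, mean_face, eigenfaces):
    """Represent one face as a weight (feature) vector over the eigenfaces."""
    return eigenfaces @ (image - mean_face)

def match(probe_features, gallery_features):
    """Return the index of the gallery feature vector nearest the probe."""
    dists = np.linalg.norm(gallery_features - probe_features, axis=1)
    return int(np.argmin(dists))
```

In practice, each gallery image would be projected once at enrollment and only the compact feature vectors retained for matching; keeping just those weights, rather than the pixels, is the data reduction the paragraph above describes.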

Linear Discriminant Analysis (LDA) is a statistical approach for classifying samples of unknown classes based on training samples with known classes. The technique aims to maximize between-class (i.e., across-user) variance and minimize within-class (i.e., within-user) variance. When dealing with high-dimensional face data, this technique faces the small sample size problem, which arises when the number of available training samples is small compared to the dimensionality of the sample space.
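
As a rough illustration of that scatter-matrix formulation, the sketch below computes within-class and between-class scatter from labeled feature vectors and solves the resulting eigenproblem. The tiny ridge term is an illustrative workaround for the small sample size problem noted above (in practice, face data is usually first reduced with PCA); the function name and regularization constant are assumptions, not from the source.

```python
import numpy as np

def fit_lda(features, labels):
    """Compute LDA projection directions from labeled feature vectors.

    Maximizes between-class scatter relative to within-class scatter by
    solving the generalized eigenproblem Sb w = lambda * Sw w.
    """
    overall_mean = features.mean(axis=0)
    dim = features.shape[1]
    s_within = np.zeros((dim, dim))
    s_between = np.zeros((dim, dim))
    for label in np.unique(labels):
        cls = features[labels == label]
        cls_mean = cls.mean(axis=0)
        centered = cls - cls_mean
        s_within += centered.T @ centered          # within-class scatter
        diff = (cls_mean - overall_mean)[:, None]
        s_between += len(cls) * (diff @ diff.T)    # between-class scatter
    # With few samples per class, Sw is singular (the small sample size
    # problem); a tiny ridge term keeps it invertible for this sketch.
    s_within += 1e-6 * np.eye(dim)
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(s_within) @ s_between)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order].real  # columns are the discriminant directions
```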

Elastic Bunch Graph Matching (EBGM) relies on the observation that real face images have many nonlinear characteristics that are not addressed by the linear analysis methods discussed above, such as variations in illumination (outdoor lighting vs. indoor fluorescents), pose (standing straight vs. leaning over), and expression (smile vs. frown). A Gabor wavelet transform creates a dynamic link architecture that projects the face onto an elastic grid. A Gabor jet is a node on the elastic grid that describes the image behavior around a given pixel. It is the result of a convolution of the image with a Gabor filter, which is used to detect shapes and to extract features using image processing. [A convolution expresses the amount of overlap between two functions, blending them together.] Recognition is based on the similarity of the Gabor filter response at each Gabor node. This biologically inspired use of Gabor filters mirrors processing carried out in the visual cortex of higher mammals. The difficulty with this method is its requirement for accurate landmark localization, which can sometimes be achieved by combining the PCA and LDA methods.
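
The sketch below illustrates the jet idea under simplifying assumptions: it builds real-valued Gabor kernels at a few scales and orientations (full EBGM uses complex kernels and compares magnitudes and phases), samples the responses around a single landmark of a grayscale image stored as a 2D NumPy array, and scores two jets with a normalized dot product. All parameter values and function names are illustrative, not from the source.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor filter: a plane wave damped by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the wave propagates along direction theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_t / wavelength)

def gabor_jet(image, row, col, size=31,
              wavelengths=(4, 8, 16), orientations=8):
    """Gabor jet: filter responses at one landmark (pixel), across several
    scales and orientations. Assumes the landmark lies at least size//2
    pixels from the image border."""
    half = size // 2
    patch = image[row - half:row + half + 1, col - half:col + half + 1]
    responses = []
    for wl in wavelengths:
        for k in range(orientations):
            kernel = gabor_kernel(size, wl, k * np.pi / orientations,
                                  sigma=wl / 2)
            # Convolution at a single point reduces to a dot product
            # between the kernel and the local patch.
            responses.append(float((patch * kernel).sum()))
    return np.array(responses)

def jet_similarity(jet_a, jet_b):
    """Normalized dot product; 1.0 means identical local structure."""
    return float(jet_a @ jet_b /
                 (np.linalg.norm(jet_a) * np.linalg.norm(jet_b)))
```

Recognition in EBGM then amounts to placing such jets at facial landmarks, deforming the elastic grid to fit the probe face, and aggregating the per-node jet similarities.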

Source: National Science and Technology Council (NSTC)
