Biometric: Facial Recognition
Essay by review • January 20, 2011 • 1,390 Words (6 Pages)
Procedure and Approach
This section analyzes the eigenface approach for solving computer vision recognition problems. The previous sections of this report introduced the reader to the basic concepts generally involved in biometrics; this section focuses on the computational aspects of the eigenface approach.
To generate a set of eigenfaces, a large set of digitized images of human faces, taken under the same lighting conditions, is normalized to line up the eyes and mouths. The eigenface approach is not limited to human identification: because the underlying algorithm makes few assumptions about its input, it can be applied to recognition problems in other areas of interest. The initial set of face images is referred to as the training set or training data.
Often, the training images are first re-sampled to a common pixel resolution. Eigenfaces can then be extracted from the image data by means of a mathematical tool called Principal Component Analysis (PCA). Only the M eigenfaces corresponding to the M largest eigenvalues are retained; these eigenfaces span the face space of the training set. After projection onto this space, each face can be represented by just M weights, an extremely compact representation.
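As a concrete illustration of the initialization just described, the following is a minimal NumPy sketch of extracting M eigenfaces from a training set. The function name, array shapes, and toy data are illustrative, not taken from the original report:

```python
import numpy as np

def train_eigenfaces(images, num_components):
    """Compute the mean face and the top eigenfaces of a training set.

    images: array of shape (M, h, w) -- M grayscale face images,
            assumed already aligned and normalized as described above.
    """
    M = images.shape[0]
    # Flatten each image into a row vector; A has shape (M, h*w).
    A = images.reshape(M, -1).astype(np.float64)
    mean_face = A.mean(axis=0)
    # Deviation-from-mean vectors.
    Phi = A - mean_face
    # Eigendecompose the small M x M matrix Phi Phi^T instead of the
    # huge (h*w) x (h*w) covariance matrix (same nonzero spectrum).
    L = Phi @ Phi.T
    eigvals, eigvecs = np.linalg.eigh(L)           # ascending order
    order = np.argsort(eigvals)[::-1][:num_components]
    # Map the small eigenvectors back to image space and normalize,
    # giving one eigenface per retained eigenvalue.
    eigenfaces = (Phi.T @ eigvecs[:, order]).T     # shape (num_components, h*w)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean_face, eigenfaces
```

The returned eigenfaces form an orthonormal basis of the face space, so each training image can afterwards be summarized by its `num_components` projection weights.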
The steps illustrated above constitute the initialization procedure; what follows is a discussion of recognition. The system is presented with a test image for analysis. The set of M weights corresponding to the test image is found by projecting the test image onto each of the eigenfaces. The system first verifies whether or not the test image is a face at all, by comparing the distance between the test image and the face space against a chosen distance threshold.
Next, the system computes the distance between the M weights of the test image and the M weights of each face image in the training set. The final step involves a second verification: a second threshold checks whether the test image corresponds to any known identity in the training set. If it does, the system reports the identification. In the event of an unknown face, the system may be configured to store the image for later analysis.
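The recognition steps above can be sketched as follows. This is a hedged illustration, assuming a flattened test image, a mean face, eigenfaces stored as orthonormal rows, and precomputed training weights; the threshold values and the function name are hypothetical:

```python
import numpy as np

def classify_face(test_vec, mean_face, eigenfaces, train_weights,
                  face_thresh, id_thresh):
    """Project a flattened test image onto the eigenfaces and classify it.

    Returns the index of the matching training image, or a string
    explaining why no identity was assigned. Thresholds are illustrative.
    """
    phi = test_vec - mean_face
    weights = eigenfaces @ phi                   # the M weights of the test image
    # Distance from face space: residual after reconstructing from eigenfaces.
    reconstruction = eigenfaces.T @ weights
    face_dist = np.linalg.norm(phi - reconstruction)
    if face_dist > face_thresh:
        return "not a face"                      # first verification fails
    # Distance to each known face's weight vector.
    dists = np.linalg.norm(train_weights - weights, axis=1)
    best = int(np.argmin(dists))
    if dists[best] > id_thresh:
        return "unknown face"                    # could be stored for later analysis
    return best
```

Note that both decisions reduce to simple Euclidean distance comparisons: one in image space (is it a face?) and one in weight space (whose face is it?).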
Training Procedure of Eigenface Approach
A face image is represented computationally as a matrix of intensity values. In practice the matrix is rarely a perfect square, but for the purposes of this discussion we assume an N × N image. Each intensity value represents a grayscale pixel with a value from 0 to 255; thus, most systems recognize only 256 distinct grayscale levels, which is sufficient for most applications. The digitized picture is then concatenated row by row, resulting in a column vector of dimension N² × 1.
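The row-wise concatenation can be shown with a toy example (the 3 × 3 intensity values below are made up for illustration):

```python
import numpy as np

# A toy 3x3 grayscale "image" with intensities in the range 0..255.
face = np.array([[10, 20, 30],
                 [40, 50, 60],
                 [70, 80, 90]], dtype=np.uint8)

# Concatenate the rows into a single column vector of dimension N^2 x 1.
column = face.reshape(-1, 1)   # row-major (C-order) flattening
print(column.shape)            # (9, 1)
```

Every training image is flattened this way before any of the vector arithmetic below is applied.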
To obtain the eigenfaces for a training set, we must first determine the mean vector, the deviation-from-mean vectors, and the covariance matrix of the training set. Following the notation common in the literature, the training set is written as $\Gamma_1, \Gamma_2, \ldots, \Gamma_M$, where each $\Gamma_i$ is a vector of dimension $N^2$. The value $M$ represents the number of images in the training set. This notation leads to the mean vector definition given below.

$$\Psi = \frac{1}{M} \sum_{n=1}^{M} \Gamma_n$$
Furthermore, the set of deviation-from-mean vectors contains the difference of each training image from the mean vector, as expressed by the equation below.

$$\Phi_i = \Gamma_i - \Psi, \qquad i = 1, \ldots, M$$
To obtain the eigenface description of the training set, the training images are subjected to Principal Component Analysis, which seeks a set of vectors that best describes the variation in the data. The principal components of the training set are the eigenvectors of its covariance matrix, which admits two equivalent representations:

$$C = \frac{1}{M} \sum_{n=1}^{M} \Phi_n \Phi_n^T = \frac{1}{M} A A^T, \qquad A = [\Phi_1 \; \Phi_2 \; \cdots \; \Phi_M]$$
The covariance matrix is of dimension $N^2 \times N^2$. Determining the eigenvectors and eigenvalues of a matrix this size is computationally difficult and impractical; indeed, working with such a large matrix contradicts the very purpose of PCA, which is to obtain a low-dimensional representation of the training set. To circumvent this problem, Turk and Pentland developed a numerical shortcut. The development below reduces the dimension of the eigenproblem from $N^2 \times N^2$ to $M \times M$.
Following this method, we first construct the matrix $L = A^T A$ of dimension $M \times M$ and find its $M$ eigenvectors $v_i$. The first $M$ eigenvectors of the covariance matrix are then obtained as $u_i = A v_i$, and the corresponding eigenvalues allow us to rank the eigenvectors according to their significance.
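The key fact behind this shortcut is that the nonzero eigenvalues of the large matrix $A A^T$ coincide with the eigenvalues of the small matrix $A^T A$, and each $u_i = A v_i$ is an eigenvector of the large matrix. A small numerical check, using random stand-in data rather than real face vectors:

```python
import numpy as np

# Stand-in deviation-from-mean vectors as columns of A (dimension N^2 x M).
rng = np.random.default_rng(1)
N2, M = 100, 5
A = rng.standard_normal((N2, M))

# Small M x M matrix L = A^T A -- cheap to eigendecompose.
L = A.T @ A
mu, V = np.linalg.eigh(L)

# Large N^2 x N^2 matrix A A^T -- what we actually want the spectrum of.
C = A @ A.T
lam = np.linalg.eigvalsh(C)

# The M nonzero eigenvalues of C coincide with the eigenvalues of L...
assert np.allclose(np.sort(lam)[-M:], np.sort(mu))

# ...and each u_i = A v_i is an eigenvector of C with the same eigenvalue.
u = A @ V[:, -1]
assert np.allclose(C @ u, mu[-1] * u)
print("nonzero spectra match")
```

This works because $C u_i = A A^T A v_i = A (L v_i) = \mu_i A v_i = \mu_i u_i$; the scale factor $1/M$ in the covariance definition only rescales the eigenvalues and does not affect the eigenvectors or their ranking.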
The number of eigenfaces utilized is up for debate. It is recognized that a smaller subset of the eigenfaces is appropriate for capturing the variation in a larger set. Thus, the proper
...