- Research Article
- Open Access
Comparison of Spectral-Only and Spectral/Spatial Face Recognition for Personal Identity Verification
EURASIP Journal on Advances in Signal Processing, volume 2009, Article number: 943602 (2009)
Face recognition based on spatial features has been widely used for personal identity verification in security-related applications. Recently, near-infrared spectral reflectance properties of local facial regions have been shown to be sufficient discriminants for accurate face recognition. In this paper, we compare the performance of the spectral method with face recognition using the eigenface method on single-band images extracted from the same hyperspectral image set. We also consider methods that use multiple original and PCA-transformed bands. Lastly, an innovative spectral eigenface method that uses both spatial and spectral features is proposed to improve the quality of the spectral features and to reduce the computational expense. The algorithms are compared using a consistent framework.
1. Introduction
Automatic personal identity authentication is an important problem in security and surveillance applications, where physical or logical access to locations, documents, and services must be restricted to authorized persons. Passwords or personal identification numbers (PINs) are often assigned to individuals for authentication. However, a password or PIN is vulnerable to unauthorized exploitation and can be forgotten. Biometrics, on the other hand, use intrinsic personal characteristics, which are harder to compromise and more convenient to use. Consequently, the use of biometrics has been gaining acceptance for various applications. Many different sensing modalities have been developed to verify personal identities. Fingerprints are a widely used biometric. Iris recognition is an emerging technique for personal identification and an active area of research. There are also studies using voice and gait as primary or auxiliary means to verify personal identities.
Face recognition has been studied for many years for human identification and personal identity authentication and is increasingly used for its convenience and noncontact measurement. Most modern face recognition systems are based on the geometric characteristics of human faces in an image [1–4]. Accurate verification and identification performance has been demonstrated for these algorithms based on mug shot type photographic databases of thousands of human subjects under controlled environments [5, 6]. Various 3D face models [7, 8] and illumination models [9, 10] have been studied for pose- and illumination-invariant face recognition. In addition to methods based on gray-scale and color face images over the visible spectrum, thermal infrared face images [11, 12] and hyperspectral face images [13] have also been used for face recognition experiments. An evaluation of different face recognition algorithms using a common dataset has been of general interest. This approach provides a solid basis to draw conclusions on the performance of different methods. The Face Recognition Technology (FERET) program [5] and the Face Recognition Vendor Test (FRVT) [6] are two programs which provided independent government evaluations of various face recognition algorithms and commercially available face recognition systems.
Most biometric methods, including face recognition methods, are subject to possible false acceptance or rejection. Although biometric information is difficult to duplicate, these methods are not immune to forgery, or so-called spoofing. This is a concern for automatic personal identity authentication since intruders can use artificial materials or objects to gain unauthorized access. There are reports showing that fingerprint sensor devices have been deceived by Gummi fingers in Japan [14] and fake latex fingerprints in Germany [15]. Face and iris recognition systems can also be compromised since they use external observables [16]. To counter this vulnerability, many biometric systems employ a liveness detection function to foil attempts at biometric forgery [17, 18]. To improve system accuracy, there is strong interest in research to combine multiple biometric characteristics for multimodal personal identity authentication [19, 20]. Since hyperspectral sensors capture both spectral and spatial information, they provide the potential for improved personal identity verification.
Previous methods have considered representations of visible-wavelength color images for face recognition [21, 22] as well as the combination of color and 3D information [23]. In this work, we examine the use of combined spectral/spatial information for face recognition over the near-infrared (NIR) spectral range. We show that spatial information can be used to improve on the performance of spectral-only methods [13]. We also use a large NIR hyperspectral dataset to show that the choice of spectral band over the NIR does not have a significant effect on the performance of single-band eigenface methods. On the other hand, we show that band selection does have a significant effect on the performance of multiband methods. In this paper we develop a new representation called the spectral-face which preserves both high spectral and high spatial resolution. We show that the spectral eigenface representation outperforms single-band eigenface methods and has performance that is comparable to multiband eigenface methods but at a lower computational cost.
2. Face Recognition in Single-Band Images
A hyperspectral image provides spectral information, normally in radiance or reflectance, at each pixel. Thus, there is a vector of values for each pixel corresponding to different wavelengths within the sensor spectral range. The reflectance spectrum of a material remains constant across different images, while different materials exhibit distinctive reflectance properties due to different absorbing and scattering characteristics as a function of wavelength. In the spatial domain, a hyperspectral image can be viewed as a set of gray-scale images, each representing the sensor responses of all pixels for a single spectral band. In a previous study [13], seven hyperspectral face images were collected for each of 200 human subjects. These images have 31 bands with band centers separated by 0.01 μm over the near-infrared (0.7 μm–1.0 μm). Figure 1 shows calibrated hyperspectral face images of two subjects at seven selected bands separated by 0.05 μm over 0.7 μm–1.0 μm. We see that the ratios of pixel values on skin or hair between different bands are dissimilar for the two subjects; that is, each subject has a unique hyperspectral signature for each tissue type. Based on these spectral signatures, a Mahalanobis distance-based method was applied for face recognition tests and accurate face recognition rates were achieved. However, the performance was not compared with classic face recognition methods using the same dataset.
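The spectral-signature matching idea described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: it assumes each gallery subject is summarized by a mean tissue spectrum (e.g., skin) and the inverse of its covariance matrix, and the function names are our own.

```python
import numpy as np

def mahalanobis_distance_sq(spectrum, mean, cov_inv):
    """Squared Mahalanobis distance between a pixel spectrum and a
    subject's mean tissue signature, given the inverse covariance."""
    d = spectrum - mean
    return float(d @ cov_inv @ d)

def match_spectrum(spectrum, gallery):
    """Return the gallery subject whose tissue signature is nearest
    in Mahalanobis distance. `gallery` maps id -> (mean, cov_inv)."""
    return min(gallery, key=lambda s: mahalanobis_distance_sq(spectrum, *gallery[s]))
```

In practice the distances from several tissue regions (skin, hair, lips) would be combined before deciding on a match.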
The CSU Face Identification Evaluation System [25] provides a standard set of well-known algorithms and established experimental protocols for evaluating face recognition algorithms. We selected the Principal Components Analysis (PCA) Eigenfaces algorithm [26] and used cumulative match scores as in the FERET study [5] for performance comparisons. To prepare for the face recognition tests, a gray-scale image was extracted for each of the 31 bands of a hyperspectral image. The coordinates of both eyes were manually located before processing by the CSU evaluation programs. In the CSU evaluation system all images are transformed and normalized so that they have a fixed spatial resolution and identical eye coordinates. Masks were used to void nonfacial features. Histogram equalization was also performed on all images before the face recognition tests were conducted. For each of the 200 human subjects, there are three front-view images, with the first two (fg and fa) having a neutral expression and the third (fb) having a smile. All 600 images were used to generate the eigenfaces. Figure 2 shows one single-band image before and after normalization, and the first 10 eigenfaces for the dataset. The number of eigenfaces used for face recognition was determined by selecting the set of most significant eigenfaces that account for 90% of the total energy.
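The eigenface construction with the 90%-energy selection rule can be sketched as follows. This is a minimal sketch, assuming faces are supplied as flattened, already-normalized vectors; the CSU system's actual preprocessing (geometric normalization, masking, histogram equalization) is more involved.

```python
import numpy as np

def eigenfaces(images, energy=0.90):
    """Compute eigenfaces by PCA and keep the leading components that
    account for `energy` of the total variance. `images` is an
    (n_images, n_pixels) array of normalized face vectors."""
    mean = images.mean(axis=0)
    X = images - mean
    # SVD of the centered data; rows of Vt are the eigenfaces.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2
    # Smallest k whose cumulative variance fraction reaches `energy`.
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), energy) + 1)
    return mean, Vt[:k], s[:k]
```

A probe image is then represented by its projections onto the retained eigenfaces after subtracting the mean face.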
Given the $l$th band of hyperspectral images $U$ and $V$, the Mahalanobis Cosine distance [27] is used to measure the similarity of the two images. Let $u_i$ be the projection of the $l$th band of $U$ onto the $i$th eigenface and let $\sigma_i$ be the standard deviation of the projections from all of the $l$th band images onto the $i$th eigenface. The Mahalanobis projection of $U$ is $m = (m_1, m_2, \ldots, m_K)$ where $m_i = u_i / \sigma_i$. Let $n$ be the similarly computed Mahalanobis projection of $V$. The Mahalanobis Cosine distance between $U$ and $V$ for the $l$th band is defined by

$$D_l(U, V) = -\frac{m \cdot n}{\|m\| \, \|n\|},$$
which is the negative of the cosine of the angle between the two vectors. For the 200 subjects, the fg images were grouped in the gallery set and the fa and fb images were used as probes [5]. The experiments follow the closed-universe model, where the subject in every probe image is included in the gallery. For each probe image, the Mahalanobis Cosine distance between the probe and all gallery images is computed. If the correct match is included in the group of $M$ gallery images with the smallest distances, we say that the probe is correctly matched in the top $M$. The cumulative match score for a given $M$ is defined as the fraction of probes that are correctly matched in the top $M$. The cumulative match score for $M = 1$ is called the recognition rate. Figure 3 plots the cumulative match scores for $M = 1, 5$, and 10, respectively. Band 1 refers to the image acquired at 700 nm and band 31 to the image acquired at 1000 nm. We see that all bands provide high recognition rates, with more than 96% of the probes correctly identified for $M = 1$ and over 99% for $M = 10$. It is important to consider the statistical significance of the results. For this purpose, we model the fraction of probes that are correctly identified by a binomial distribution with mean given by the measured identification rate $p$. The variance of $p$ is given by $\sigma^2 = p(1 - p)/400$, where 400 is the number of probes [28]. For an identification rate of 0.97 we have $\sigma^2 = 7.3 \times 10^{-5}$, which corresponds to a standard deviation in the identification rate of 0.009, and for an identification rate of 0.99 we have $\sigma^2 = 2.5 \times 10^{-5}$, which corresponds to a standard deviation in the identification rate of 0.005. Thus, for each of the three curves plotted in Figure 3 the variation in performance across bands is not statistically significant. Figure 4 compares the cumulative match scores using the spectral signature method [13] and the single-band eigenface method using the most effective band.
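The Mahalanobis Cosine distance and the cumulative match score can be sketched as follows. This is an illustrative implementation, not the CSU system's code; the function names and the dense distance-matrix interface are our own.

```python
import numpy as np

def mahalanobis_cosine(u, v, sigma):
    """Negative cosine between two eigenface-projection vectors after
    dividing each coordinate by that eigenface's projection std."""
    m, n = u / sigma, v / sigma
    return -float(m @ n) / (np.linalg.norm(m) * np.linalg.norm(n))

def cumulative_match_score(dist, gallery_ids, probe_ids, rank):
    """Fraction of probes whose correct gallery subject appears among
    the `rank` smallest distances. dist[i, j] is the distance from
    probe i to gallery image j (closed-universe protocol)."""
    hits = 0
    for i, pid in enumerate(probe_ids):
        nearest = np.argsort(dist[i])[:rank]
        hits += any(gallery_ids[j] == pid for j in nearest)
    return hits / len(probe_ids)
```

The rank-1 cumulative match score is the recognition rate quoted in the text.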
We see that the spectral signature method performs well but somewhat worse than the best single-band method for matches with $M$ less than 8. For $M = 1$, a recognition rate of 0.92 corresponds to a standard deviation in the recognition rate of 0.014, which indicates that the difference between the two methods in Figure 4 is statistically significant. The advantage of the spectral methods is pose invariance, which was discussed in previous work [13] but is not considered in this paper.
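The binomial significance test used throughout this section reduces to a one-line computation; the standard deviations quoted above follow directly:

```python
from math import sqrt

def rate_std(p, n_probes=400):
    """Standard deviation of a measured identification rate p,
    modeling correct matches as a binomial over n_probes trials:
    sigma = sqrt(p * (1 - p) / n)."""
    return sqrt(p * (1.0 - p) / n_probes)
```

For example, `rate_std(0.97)` gives roughly 0.009, `rate_std(0.99)` roughly 0.005, and `rate_std(0.92)` roughly 0.014, matching the values cited in the text.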
3. Face Recognition in Multiband Images
We have shown that both spatial and spectral features in hyperspectral face images provide useful discriminants for recognition. Thus, we can consider the extent of performance improvements when both features are utilized. We define a distance between images $U$ and $V$ using

$$D(U, V) = \sqrt{\sum_{l} \left( D_l(U, V) + 1 \right)^2},$$
where the index $l$ takes values over a group of $N$ selected bands that are not necessarily contiguous. Note that the additive 1 ensures that each term is nonnegative before squaring.
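Given the per-band Mahalanobis Cosine distances, the combined multiband distance is straightforward; a minimal sketch, assuming the per-band distances $D_l \in [-1, 1]$ have already been computed:

```python
import numpy as np

def multiband_distance(per_band_dists):
    """Combine per-band Mahalanobis Cosine distances D_l over the
    selected bands. Each D_l lies in [-1, 1], so D_l + 1 is
    nonnegative before squaring."""
    d = np.asarray(per_band_dists) + 1.0
    return float(np.sqrt(np.sum(d**2)))
```

Two images that match perfectly in every selected band ($D_l = -1$ for all $l$) yield a combined distance of zero.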
Redundancy in a hyperspectral image can be reduced by a Principal Component Transformation (PCT) [29]. For a hyperspectral image in which each pixel is a spectral vector $X$, the PCT generates $Y = A(X - m)$, where $m$ is the mean spectral vector and the rows of $A$ are the eigenvectors of the spectral covariance matrix. The principal components are orthogonal to each other and sorted in order of decreasing modeled variance. Figure 5 shows a single-band image at 700 nm and the first five principal components extracted from the corresponding hyperspectral image. We see that the first principal component image resembles the single-band image, while the second and third component images highlight features of the lips and eyes. We also see that few visible features remain in the fourth and fifth principal components.
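A sketch of the PCT applied to a single hyperspectral cube, as in the per-image experiments here. This is a standard implementation under the assumption that each pixel's 31-band spectrum is treated as one sample; the function name is our own.

```python
import numpy as np

def principal_component_transform(cube, n_components):
    """PCT of a hyperspectral cube of shape (rows, cols, bands):
    treat each pixel spectrum as a sample, diagonalize the band
    covariance, and project onto the leading eigenvectors, sorted
    by decreasing variance."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    m = X.mean(axis=0)
    cov = np.cov(X - m, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    Y = (X - m) @ vecs[:, order]               # principal component bands
    return Y.reshape(rows, cols, n_components)
```

Computing the transform per image, as above, is what makes the PCT-based method the most expensive of the multiband approaches; reusing one fixed set of coefficients for all images avoids the per-image eigendecomposition.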
Figure 6 plots the recognition rates for different multiband eigenface methods. First, we selected the bands in order of increasing center wavelength and performed eigenface recognition tests using the first band, the first two bands, and so on up to all 31 bands. We also sorted the 31 bands in descending order of single-band recognition rate and repeated the same procedure. From Figure 6 we see that both methods reach a maximum recognition rate of 98% when using multiple bands. However, when the number of bands is less than 16, the multiband method performs better if the bands are sorted in advance from the highest recognition rate to the lowest. We also used the leading principal components for multiband recognition. We see in Figure 6 that over 99% of the probes were correctly recognized when using the first three principal bands. Increasing the number of principal bands beyond 3 causes performance degradation. The original-order algorithm in Figure 6 achieves a recognition rate of approximately 0.965 for fewer than ten bands, which corresponds to a standard deviation in recognition rate of 0.009. Thus, the performance difference between this method and the PCT-based method is significant between 3 and 9 bands. Note that the PCT was performed on each hyperspectral image individually, so the transform coefficients differ from image to image. The PCT can also be implemented using the same coefficients for all images for faster computation.
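The two band orderings compared in Figure 6 can be expressed compactly; a small sketch, with our own function name, assuming the single-band recognition rates have already been measured:

```python
import numpy as np

def band_orderings(per_band_rates):
    """Return the two band orderings used in the experiment: the
    original wavelength order, and the bands sorted from highest to
    lowest single-band recognition rate."""
    original = np.arange(len(per_band_rates))
    sorted_by_rate = np.argsort(per_band_rates)[::-1]
    return original, sorted_by_rate
```

The experiment then evaluates the multiband method on the first $N$ bands of each ordering for $N = 1, \ldots, 31$.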
Figure 7 compares the recognition performance of the three multiband methods discussed in the previous paragraph, where each algorithm uses only the first three bands. It is interesting that sorting the bands according to performance improves the rank-1 recognition rate but somewhat worsens the performance at larger ranks. In either case, the multiband method based on the PCT has the best performance at small ranks and is equivalent to the original-order method at larger ranks.
4. Face Recognition Using Spectral Eigenfaces
We showed in Section 3 that multiband eigenface methods can improve face recognition rates. In these algorithms, the multiple bands are processed independently. A more general approach is to consider the full spectral/spatial structure of the data. One way to do this is to apply the eigenface method to large composite images that are generated by concatenating the 31 single-band images. This approach, however, significantly increases the computational cost of the process. An alternative is to subsample each band of the hyperspectral image before concatenation into the large composite image. For example, Figure 8 shows a 31-band image after subsampling so that the total number of pixels is equivalent to the number of pixels in a single-band image. We see that significant spatial detail is lost due to the subsampling.
A new representation, called the spectral-face, is proposed to preserve both spectral and spatial properties. The spectral-face has the same spatial resolution as a single-band image, so the spatial features are largely preserved. In the spectral domain, the pixel values in the spectral-face are extracted cyclically from band 1 to band 31 and then from band 1 again: in raster-scan order, each successive pixel of the spectral-face takes its value from the next band in the cycle. Figure 9 shows an original single-band image together with the normalized spectral-face image in the left column. The spectral-face has improved spatial detail compared with Figure 8. The pattern on the face in Figure 9 demonstrates the variation in the spectral domain. With the spectral-face images, the same eigenface technique is applied for face recognition. The first 10 spectral eigenfaces are shown on the right side of Figure 9. It is interesting to observe that the eighth spectral eigenface highlights the teeth in smiling faces.
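The cyclic sampling behind the spectral-face can be sketched as follows. This is a sketch of the scheme described above; the exact index convention (raster-scan order starting at band 1) is an assumption on our part, and the function name is our own.

```python
import numpy as np

def spectral_face(cube):
    """Build a spectral-face from a (rows, cols, bands) cube: a
    single-band-sized image whose pixels are taken cyclically from
    bands 1..B in raster-scan order."""
    rows, cols, bands = cube.shape
    idx = np.arange(rows * cols) % bands          # band index per pixel
    flat = cube.reshape(-1, bands)
    return flat[np.arange(rows * cols), idx].reshape(rows, cols)
```

Because each pixel keeps its spatial position and only its band varies, the result retains single-band spatial resolution while sampling the full spectral range.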
The spectral eigenface method was applied to the same dataset as the single-band and multiband methods. The cumulative match scores for $M = 1$ to 20 are shown in Figure 10. The best of the single-band methods, which corresponds to band 19 (880 nm), is included for performance comparison with the spectral eigenface method. We see that the spectral eigenface method has better performance at all ranks. The best of the multiband methods, which combines the first three principal bands, is also considered. The multiband method performs better than the spectral eigenface method for small values of the rank but worse for larger values of the rank. For this case, an identification rate of 0.99 corresponds to a standard deviation in the identification rate of 0.005. Thus, the two multiple-band methods have a statistically significant advantage over the single-band eigenface method for ranks between 3 and 10. Note that the multiple principal band method requires more computation than the spectral eigenface method.
5. Conclusions
Multimodal personal identity authentication systems have gained popularity. Hyperspectral imaging systems capture both spectral and spatial information. Previous work [13] has shown that spectral signatures are powerful discriminants for face recognition in hyperspectral images. In this work, various methods that utilize spectral and/or spatial features were evaluated using a hyperspectral face image dataset. The single-band eigenface method uses spatial features exclusively and performed better than the pure spectral method, although the computational requirements increase significantly for eigenface generation and projection. The recognition rate was further improved by multiband eigenface methods, which require still more computation. The best performance was achieved, at the highest computational complexity, by using principal component bands. The spectral eigenface method transforms a multiband hyperspectral image into a spectral-face image which samples all of the bands while preserving spatial resolution. We showed that this method performs comparably to the PCT-based multiband method but with a much lower computational requirement.
Chellappa R, Wilson CL, Sirohey S: Human and machine recognition of faces: a survey. Proceedings of the IEEE 1995,83(5):705-740. 10.1109/5.381842
Etemad K, Chellappa R: Discriminant analysis for recognition of human face images. Journal of the Optical Society of America A 1997,14(8):1724-1733. 10.1364/JOSAA.14.001724
Moghaddam B, Pentland A: Probabilistic visual learning for object representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 1997,19(7):696-710. 10.1109/34.598227
Wiskott L, Fellous J-M, Krüger N, von der Malsburg C: Face recognition by elastic bunch graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 1997,19(7):775-779. 10.1109/34.598235
Phillips PJ, Moon H, Rizvi SA, Rauss PJ: The FERET evaluation methodology for face-recognition algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence 2000,22(10):1090-1104. 10.1109/34.879790
Phillips PJ, Grother P, Micheals R, Blackburn DM, Tabassi E, Bone M: Face recognition vendor test 2002: overview and summary. Defense Advanced Research Projects Agency, Arlington, Va, USA; March 2003.
Blanz V, Vetter T: Face recognition based on fitting a 3D morphable model. IEEE Transactions on Pattern Analysis and Machine Intelligence 2003,25(9):1063-1074. 10.1109/TPAMI.2003.1227983
Chang KI, Bowyer KW, Flynn PJ: An evaluation of multimodal 2D+3D face biometrics. IEEE Transactions on Pattern Analysis and Machine Intelligence 2005,27(4):619-624.
Adini Y, Moses Y, Ullman S: Face recognition: the problem of compensating for changes in illumination direction. IEEE Transactions on Pattern Analysis and Machine Intelligence 1997,19(7):721-732. 10.1109/34.598229
Lee K-C, Ho J, Kriegman DJ: Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence 2005,27(5):684-698.
Socolinsky DA, Selinger A, Neuheisel JD: Face recognition with visible and thermal infrared imagery. Computer Vision and Image Understanding 2003,91(1-2):72-114. 10.1016/S1077-3142(03)00075-4
Wilder J, Phillips PJ, Jiang C, Wiener S: Comparison of visible and infra-red imagery for face recognition. Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition (AFGR '96), October 1996, Killington, Vt, USA 182-187.
Pan Z, Healey G, Prasad M, Tromberg B: Face recognition in hyperspectral images. IEEE Transactions on Pattern Analysis and Machine Intelligence 2003,25(12):1552-1560. 10.1109/TPAMI.2003.1251148
Leyden J: Gummi bears defeat fingerprint sensors. The Register 2002.
Harrison A: Hackers claim new fingerprint biometric attack. Security Focus 2003.
Lewis M, Statham P: CESG biometric security capabilities programme: method, results and research challenges. Biometric Consortium Conference, September 2004, Crystal City, Va, USA
Bigun J, Fronthaler H, Kollreider K: Assuring liveness in biometric identity authentication by real-time face tracking. Proceedings of IEEE International Conference on Computational Intelligence for Homeland Security and Personal Safety (CIHSPS '04), July 2004, Venice, Italy 104-111.
Tan T, Ma L: Iris recognition: recent progress and remaining challenges. Biometric Technology for Human Identification, April 2004, Orlando, Fla, USA, Proceedings of SPIE 5404: 183-194.
Kittler J, Matas J, Jonsson K, Ramos Sánchez MU: Combining evidence in personal identity verification systems. Pattern Recognition Letters 1997,18(9):845-852. 10.1016/S0167-8655(97)00062-7
Kittler J, Messer K: Fusion of multiple experts in multimodal biometric personal identity verification systems. Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing, December 2002, Kauai, Hawaii, USA 3-12.
Yang J, Zhang D, Xu Y, Yang J-Y: Recognize color face images using complex eigenfaces. Proceedings of International Conference on Advances in Biometrics (ICB '06), January 2006, Hong Kong, Lecture Notes in Computer Science 3832: 64-68.
Yoo S, Park R-H, Sim D-G: Investigation of color spaces for face recognition. Proceedings of IAPR Conference on Machine Vision Applications (MVA '07), May 2007, Tokyo, Japan 106-109.
Tsalakanidou F, Tzovaras D, Strintzis MG: Use of depth and colour eigenfaces for face recognition. Pattern Recognition Letters 2003,24(9-10):1427-1435. 10.1016/S0167-8655(02)00383-5
Pan Z, Healey G, Prasad M, Tromberg B: Face recognition in hyperspectral images. In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '03), June 2003, Madison, Wis, USA. Volume 1. Institute of Electrical and Electronics Engineers; 334-339.
Bolme D, Beveridge JR, Teixeira M, Draper BA: The CSU face identification evaluation system: its purpose, features and structure. Proceedings of the 3rd International Conference Computer Vision Systems (ICVS '03), April 2003, Graz, Austria, Lecture Notes in Computer Science 2626: 304-313.
Turk MA, Pentland AP: Face recognition using eigenfaces. Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '91), June 1991, Maui, Hawaii, USA 586-591.
Beveridge JR, Bolme DS, Teixeira M, Draper B: The CSU face identification evaluation system user's guide: version 5.0. Computer Science Department, Colorado State University, Fort Collins, Colo, USA; May 2003.
Papoulis A: Probability and Statistics. Prentice-Hall, Englewood Cliffs, NJ, USA; 1990.
Ready PJ, Wintz PA: Information extraction, SNR improvement, and data compression in multispectral imagery. IEEE Transactions on Communications 1973,21(10):1123-1131. 10.1109/TCOM.1973.1091550
This work was conducted when the author was with the Computer Vision Laboratory at the University of California, Irvine, USA. This work has been supported by the DARPA Human Identification at a Distance Program through AFOSR Grant F49620-01-1-0058. This work has also been supported by the Laser Microbeam and Medical Program (LAMMP) and NIH Grant RR01192. The data was acquired at the Beckman Laser Institute on the UC Irvine campus. The authors would like to thank J. Stuart Nelson and Montana Compton for their valuable assistance in the process of IRB approval and human subject recruitment.