Face recognition using nonparametric-weighted Fisherfaces
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 92 (2012)
Abstract
This study presents an appearance-based face recognition scheme called the nonparametric-weighted Fisherfaces (NW-Fisherfaces). Pixels in a facial image are considered as coordinates in a high-dimensional space and are transformed into a face subspace for analysis by using nonparametric-weighted feature extraction (NWFE). According to previous studies of hyperspectral image classification, NWFE is a powerful tool for extracting hyperspectral image features. The Fisherfaces method maximizes the ratio of between-class scatter to within-class scatter. In this study, the proposed NW-Fisherfaces weights the between-class scatter to emphasize the boundary structure of the transformed face subspace and, therefore, enhances the separability of different persons' faces. The proposed NW-Fisherfaces was compared with Orthogonal Laplacianfaces, Eigenfaces, Fisherfaces, direct linear discriminant analysis, and null space linear discriminant analysis on five facial databases. Experimental results showed that the proposed approach outperforms the other feature extraction methods on most databases.
1. Introduction
Face representation is important for recognizing faces in many applications such as database matching, security systems, face indexing on web pictures, and human-computer interfaces. The appearance-based method is one of the well-studied techniques for face representation [1, 2]. The two purposes of the appearance-based method are reducing dimensionality and increasing the discriminability of extracted features. Hence, a good feature extraction method helps recognize faces in a highly discriminative subspace of low dimensionality.
Two of the most classical feature extraction techniques for this purpose are the Eigenfaces and Fisherfaces methods. Eigenfaces [3] applies principal component analysis (PCA) to transform facial data into the linear subspace spanned by the coordinates that maximize the total scatter across all classes. Unlike the Eigenfaces method, which is unsupervised, the Fisherfaces method is supervised. Fisherfaces applies linear discriminant analysis (LDA) to transform data into the directions of optimal discriminability. LDA searches for coordinates that separate data of different classes and draw data of the same class together. However, both Eigenfaces and Fisherfaces see only the global Euclidean structure, which may lose some discriminability contained in other, hidden structures.
To discover local structure, He et al. [4] and Cai et al. [5] proposed the Laplacianfaces method [4] and its orthogonal form, referred to as O-Laplacianfaces [5]. The Laplacianfaces algorithm is based on the locality preserving projection (LPP) algorithm, which aims at finding a linear approximation to the eigenfunctions of the Laplace–Beltrami operator on the face manifold. Han et al. [1] proposed an eigenvector-weighting function based on the graph embedding framework.
Recently, many LDA-based methods have been proposed to embed manifold structure into the facial feature extraction process [6–13]. Park and Savvides [6] proposed a multifactor extension of LDA. Na et al. [7] proposed linear boundary discriminant analysis, which increases class separability by reflecting the different significances of non-boundary and boundary patterns.
There are several drawbacks in LDA. First, it suffers from the singularity problem, which makes it difficult to apply directly. Second, LDA assumes a Gaussian class distribution, which may make it fail in applications where the distribution is more complex. Third, LDA cannot determine the optimal dimensionality for discriminant analysis, which is an important but often neglected issue. Fourth, applying LDA may encounter the so-called small sample size problem (SSSP) [14].
However, the classical LDA formulation requires the nonsingularity of the scatter matrices involved. For undersampled problems, where the data dimensionality is much larger than the sample size, all scatter matrices are singular and classical LDA fails. Many extensions, including null space LDA (NLDA) [15] and orthogonal LDA (OLDA), have been proposed to overcome this problem. NLDA maximizes the between-class distance in the null space of the within-class scatter matrix, while OLDA computes a set of orthogonal discriminant vectors via simultaneous diagonalization of the scatter matrices.
Direct linear discriminant analysis (DLDA) [16] is an extension of LDA that deals with the SSSP. DLDA does not discard the null space of the within-class scatter matrix, since doing so may lose discriminative information. For high-dimensional data with small sample sizes, DLDA becomes equivalent to NLDA and LDA.
In this study, we propose an appearance-based face recognition scheme called nonparametric-weighted Fisherfaces (NW-Fisherfaces). The NW-Fisherfaces approach is a derivative of nonparametric-weighted feature extraction (NWFE) [17], which performs well in studies of hyperspectral image classification [18, 19]. The proposed NW-Fisherfaces method weights the between-class scatter to emphasize the boundary structure of the transformed face subspace and, therefore, enhances face recognition discriminability. The proposed approach is compared with the O-Laplacianfaces, Eigenfaces, Fisherfaces, NLDA, and DLDA methods on five face databases. Experimental results show that the proposed approach achieves the lowest error rates in low-dimensional subspaces for most databases.
The rest of this article is organized as follows. Section 2 gives a brief review of related studies. Section 3 introduces the NW-Fisherfaces algorithm. Section 4 presents the experimental results on face recognition. In Section 5, we draw some conclusions and provide some ideas for future research.
2. Related study
Linear feature extraction methods can reduce the excessive dimensionality of image data with simple computation. In essence, linear methods project high-dimensional data onto a low-dimensional subspace.
2.1. PCA
PCA finds directions efficient for representation. Consider a set of N sample images, x_{1}, x_{2},..., x_{N}, in an n-dimensional image space. The original n-dimensional image space is linearly transformed into an m-dimensional feature space, where m < n. The new feature vectors y_{k} are defined by the following linear transformation:

$$y_k = W^T x_k, \quad k = 1, 2, \ldots, N$$
where W ∈ R^{n×m} is a matrix with orthonormal columns. The total scatter matrix S_{T} is defined as

$$S_T = \sum_{k=1}^{N} \left( x_k - \mu \right) \left( x_k - \mu \right)^T$$
where N is the number of sample images and μ is the mean of all samples. The objective function is as follows:

$$W_{PCA} = \arg \max_{W} \left| W^T S_T W \right|$$
where W_{PCA} is the set of n-dimensional eigenvectors of S_{T} corresponding to the m largest eigenvalues.
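To make the transformation concrete, the following NumPy sketch computes W_PCA as the leading eigenvectors of the total scatter matrix and projects the centered data. All names are illustrative, not from the paper:

```python
import numpy as np

def pca_project(X, m):
    """X: (N, n) data matrix, one flattened face image per row."""
    mu = X.mean(axis=0)
    Xc = X - mu                            # center the data
    S_T = Xc.T @ Xc                        # total scatter matrix (n x n)
    evals, evecs = np.linalg.eigh(S_T)     # eigenvalues in ascending order
    W = evecs[:, ::-1][:, :m]              # m largest eigenvalues -> columns of W
    return Xc @ W, W, mu

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))               # 20 toy "images" of dimension 8
Y, W, mu = pca_project(X, m=3)
```

Because `eigh` returns orthonormal eigenvectors of the symmetric scatter matrix, the columns of `W` are orthonormal, as the text requires.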
2.2 LDA
LDA finds directions efficient for discrimination. Consider a set of N sample images, x_{1}, x_{2},..., x_{N}, belonging to l classes of faces in an n-dimensional image space. The objective function of LDA is as follows:

$$W_{LDA} = \arg \max_{W} \frac{\left| W^T S_b W \right|}{\left| W^T S_w W \right|}, \quad S_b = \sum_{i=1}^{l} N_i \left( \mu^i - \mu \right) \left( \mu^i - \mu \right)^T, \quad S_w = \sum_{i=1}^{l} \sum_{j=1}^{N_i} \left( x_j^i - \mu^i \right) \left( x_j^i - \mu^i \right)^T$$
where μ is the mean of all samples, N_{i} is the number of samples in class i, μ^{i} is the mean of class i, and ${x}_{j}^{i}$ is the j th sample in class i. S_{w} is the within-class scatter matrix and S_{b} is the between-class scatter matrix. W_{LDA} is the set of generalized eigenvectors of $(S_w)^{-1} S_b$ corresponding to the m largest generalized eigenvalues.
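A minimal sketch of this eigenproblem, assuming S_w is nonsingular so it can be inverted directly (function and variable names are illustrative):

```python
import numpy as np

def lda_directions(X, y, m):
    """X: (N, n) samples, y: (N,) integer class labels."""
    n = X.shape[1]
    mu = X.mean(axis=0)
    S_w = np.zeros((n, n))
    S_b = np.zeros((n, n))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        S_w += (Xc - mu_c).T @ (Xc - mu_c)       # within-class scatter
        d = (mu_c - mu)[:, None]
        S_b += len(Xc) * (d @ d.T)               # between-class scatter
    # eigenvectors of S_w^{-1} S_b, largest eigenvalues first
    evals, evecs = np.linalg.eig(np.linalg.inv(S_w) @ S_b)
    order = np.argsort(-evals.real)
    return evecs[:, order[:m]].real

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (15, 4)), rng.normal(3, 1, (15, 4))])
y = np.array([0] * 15 + [1] * 15)
W = lda_directions(X, y, m=1)
```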
2.3 DLDA
DLDA is applicable to the SSSP, which often arises in face recognition. Most LDA-based algorithms, including Fisherfaces [20] and DLDA [21], utilize the conventional Fisher criterion of Section 2.2, while some authors use the alternative criterion proposed by Liu [22, 23]:

$$J \left( W \right) = \frac{\left| W^T S_b W \right|}{\left| W^T S_t W \right|}$$

where $S_t = S_b + S_w$ is the total (population) scatter matrix.
DLDA adopts a variant of the Fisher criterion that diagonalizes the between-class scatter matrix first, so that the discriminant vectors can be extracted without inverting a singular within-class scatter matrix.
2.4 NLDA
In NLDA, Chen et al. [15] proved that the most expressive vectors derived in the null space of the within-class scatter matrix using PCA are equal to the optimal discriminant vectors derived in the original space using LDA. This method is more efficient, accurate, and stable in calculating the most discriminant projection vectors based on a modified Fisher criterion. The process starts by calculating the projection vectors in the null space of the within-class scatter matrix S_{w}. This null space can be spanned by the eigenvectors corresponding to the zero eigenvalues of S_{w}. If this subspace does not exist, i.e., S_{w} is nonsingular, then S_{t} is also nonsingular; under these circumstances, the eigenvectors corresponding to the largest eigenvalues of the matrix $(S_b + S_w)^{-1} S_b$ are chosen as the most discriminant vector set. Otherwise, the SSSP occurs.
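The null-space step above can be sketched as follows, assuming the scatter matrices are given; the names and the numerical tolerance are illustrative choices, not from the paper:

```python
import numpy as np

def nlda_directions(S_w, S_b, m, tol=1e-10):
    """Most discriminant vectors in the null space of S_w (illustrative sketch)."""
    evals, evecs = np.linalg.eigh(S_w)
    null = evecs[:, evals < tol]           # basis of null(S_w)
    if null.shape[1] == 0:                 # S_w nonsingular: no null space
        return None
    Sb_null = null.T @ S_b @ null          # between-class scatter inside the null space
    e2, v2 = np.linalg.eigh(Sb_null)
    return null @ v2[:, ::-1][:, :m]       # keep the m largest eigenvalues

# Toy scatter matrices: S_w has rank 2 in a 5-dimensional space, so a
# 3-dimensional null space exists (the undersampled situation).
rng = np.random.default_rng(2)
A = rng.normal(size=(5, 2))
B = rng.normal(size=(5, 3))
S_w, S_b = A @ A.T, B @ B.T
W = nlda_directions(S_w, S_b, m=2)
```

The returned directions lie in the null space of S_w by construction, so the within-class scatter projected onto them vanishes.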
2.5 LPP
LPP finds directions efficient for preserving the intrinsic geometry and local structure of the data. The objective function of LPP is as follows:

$$\min_{W} \sum_{ij} \left( W^T x_i - W^T x_j \right)^2 S_{ij}$$
with the constraint

$$W^T X D X^T W = 1$$
where D_{ii} = ∑_{j} S_{ij} and L = D − S is the Laplacian matrix. S is a similarity matrix attempting to ensure that if x_{i} and x_{j} are "close", then y_{i} and y_{j} are close as well. The basis functions of LPP are the eigenvectors of the matrix $(X D X^T)^{-1} X L X^T$ associated with the smallest eigenvalues. Moreover, Cai et al. [5] proposed the orthogonal form of LPP (OLPP) and proved that OLPP outperforms LPP. In this study, OLPP is applied with a supervised similarity matrix for comparison. The weights of S are defined as follows:

$$S_{ij} = \begin{cases} \cos \left( x_i, x_j \right), & \text{if } x_i \text{ and } x_j \text{ belong to the same class} \\ 0, & \text{otherwise} \end{cases}$$
where cos(·) denotes the cosine distance measure and i and j denote sample indices. The applied S preserves locality according to the cosine measure and ensures preservation only for within-class faces by setting the between-class weights to 0.
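A small sketch of such a supervised similarity matrix, assuming plain cosine similarity for same-class pairs (names are illustrative):

```python
import numpy as np

def supervised_similarity(X, labels):
    """Cosine similarity for same-class pairs, 0 for between-class pairs."""
    U = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm rows
    cos = U @ U.T                                      # pairwise cosine values
    same = labels[:, None] == labels[None, :]
    return np.where(same, cos, 0.0)                    # zero out between-class weights

X = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
labels = np.array([0, 0, 1])
S = supervised_similarity(X, labels)
```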
3. Methodology: NW-Fisherfaces
The proposed NW-Fisherfaces scheme is based on the NWFE method proposed by Kuo and Landgrebe [17]. NWFE is an LDA-based method that improves LDA by focusing on samples near the eventual decision boundary location. Both NWFE and OLPP use a distance function to evaluate the closeness between samples. While OLPP emphasizes the local structure by defining a closeness graph, NWFE emphasizes the boundary structure by weighting the calculation of means and covariances with the measured closeness. The main idea of NWFE is to put different weights on every sample to compute "weighted means" and to define new nonparametric between-class and within-class scatter matrices. In NWFE, the nonparametric between-class scatter matrix is defined as follows:

$$S_b^{NW} = \sum_{i=1}^{l} P_i \sum_{\substack{j=1 \\ j \ne i}}^{l} \sum_{k=1}^{N_i} \frac{\lambda_k^{(i,j)}}{N_i} \left( x_k^i - M_j \left( x_k^i \right) \right) \left( x_k^i - M_j \left( x_k^i \right) \right)^T$$

where P_{i} denotes the prior probability of class i.
where N_{ i } is the training sample size of class i, ${x}_{k}^{i}$ is the k th sample of class i, ${M}_{j}\left({x}_{k}^{i}\right)$ denotes the weighted mean corresponding to ${x}_{k}^{i}$ for class j, and dist(x, y) is the distance measured from x to y. The closer ${x}_{k}^{i}$ and ${M}_{j}\left({x}_{k}^{i}\right)$ are, the larger the weight ${\lambda}_{k}^{\left(i,j\right)}$ is. The sum of ${\lambda}_{k}^{\left(i,j\right)}$ for class i is one. The weight ${w}_{kl}^{\left(i,j\right)}$ for computing weighted means is a function of ${x}_{k}^{i}$ and ${x}_{l}^{j}$. The closer ${x}_{k}^{i}$ and ${x}_{l}^{j}$ are, the larger ${w}_{kl}^{\left(i,j\right)}$ is. The sum of ${w}_{kl}^{\left(i,j\right)}$ for ${M}_{j}\left({x}_{k}^{i}\right)$ is one.
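Under the assumption of Euclidean distances and the inverse-distance weighting used in NWFE, the weighted means and scatter weights can be sketched as follows (the names and the `eps` guard are illustrative):

```python
import numpy as np

def weighted_means(Xi, Xj, eps=1e-12):
    """M_j(x_k^i): for each row of class-i data Xi, a weighted mean of class-j data Xj."""
    d = np.linalg.norm(Xi[:, None, :] - Xj[None, :, :], axis=2)  # pairwise distances
    w = 1.0 / (d + eps)                    # closer samples get larger weights
    w /= w.sum(axis=1, keepdims=True)      # each row of weights sums to one
    return w @ Xj

def scatter_weights(Xi, Mj, eps=1e-12):
    """lambda_k^(i,j): larger when x_k^i is close to its weighted mean; sums to one."""
    lam = 1.0 / (np.linalg.norm(Xi - Mj, axis=1) + eps)
    return lam / lam.sum()

rng = np.random.default_rng(3)
Xi = rng.normal(0, 1, (6, 4))   # six class-i samples
Xj = rng.normal(2, 1, (5, 4))   # five class-j samples
Mj = weighted_means(Xi, Xj)
lam = scatter_weights(Xi, Mj)
```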
In face recognition, the dimensionality of the face data often exceeds the number of samples. In this case, the covariance is not a full-rank matrix and cannot be inverted. A simple way to deal with the SSSP is regularized discriminant analysis, which artificially increases the number of available samples by adding white noise to existing samples. Some regularized techniques [18, 24] can be applied to the within-class scatter matrix. In this article, the within-class scatter matrix was regularized by

$${S}_{\mathsf{\text{w}}}^{\mathsf{\text{RNW}}} = 0.5 \, {S}_{\mathsf{\text{w}}}^{\mathsf{\text{NW}}} + 0.5 \, \mathrm{diag} \left( {S}_{\mathsf{\text{w}}}^{\mathsf{\text{NW}}} \right)$$

where diag(·) denotes the diagonal part of a matrix.
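Assuming a 0.5/0.5 mixing of the scatter matrix with its diagonal part, the regularization is a one-liner (illustrative names):

```python
import numpy as np

def regularize(S):
    """Mix the scatter matrix with its diagonal part (0.5/0.5 weighting)."""
    return 0.5 * S + 0.5 * np.diag(np.diag(S))

S_w = np.array([[2.0, 1.0],
                [1.0, 2.0]])
S_reg = regularize(S_w)   # diagonal preserved, off-diagonal entries halved
```

This shrinks the off-diagonal entries toward zero while keeping the diagonal intact, which makes the matrix better conditioned for inversion.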
The NW-Fisherfaces computational scheme is as follows.

(1)
PCA projection: Face images are projected into the PCA subspace by discarding the components corresponding to zero eigenvalues. W_{PCA} denotes the transformation matrix of the PCA projection. The projected components are statistically uncorrelated and the rank of the projected data matrix equals the data dimensionality. This study applied the PCA projection proposed in [5, 25] to prevent the singularity of S_{w}, because of its simple computation and for fair comparison with Fisherfaces and O-Laplacianfaces. However, discarding the dimensions corresponding to zero eigenvalues may lose important discriminant information [26]. For further application of LDA-based methods in practice, the advanced regularization method proposed in [26] is suggested.

(2)
Compute the distances between each pair of samples and form the distance matrix.

(3)
Compute ${w}_{kl}^{\left(i,j\right)}$ with the distance matrix.

(4)
Use ${w}_{kl}^{\left(i,j\right)}$ to compute the weighted means ${M}_{j}\left({x}_{k}^{i}\right)$.

(5)
Compute the scatter matrix weight ${\lambda}_{k}^{\left(i,j\right)}$.

(6)
Compute ${S}_{\mathsf{\text{b}}}^{\mathsf{\text{NW}}}$ and the regularized ${S}_{\mathsf{\text{w}}}^{\mathsf{\text{RNW}}}$.

(7)
Compute W_{NWFE} = [w_{1},..., w_{m}] as the eigenvectors of ${\left({S}_{\mathsf{\text{w}}}^{\mathsf{\text{RNW}}}\right)}^{-1}{S}_{\mathsf{\text{b}}}^{\mathsf{\text{NW}}}$ corresponding to the m largest eigenvalues.

(8)
Compute NWFE embedding as follows:
$$W={W}_{\mathsf{\text{PCA}}}{W}_{\mathsf{\text{NWFE}}}$$
where W is the transformation matrix, and the column vectors of W are the so-called NW-Fisherfaces.
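The eight steps can be combined into a single sketch. This is an illustrative NumPy implementation under several assumptions not fixed by the text: Euclidean distances, class priors absorbed into the sums (a constant scale for balanced classes), and exclusion of each sample from its own within-class weighted mean to avoid a degenerate zero distance:

```python
import numpy as np

def nw_fisherfaces(X, y, m):
    """X: (N, n) flattened images, y: (N,) labels; returns (W, mu)."""
    # (1) PCA projection, discarding zero-eigenvalue components
    mu = X.mean(axis=0)
    Xc = X - mu
    evals, evecs = np.linalg.eigh(Xc.T @ Xc)
    W_pca = evecs[:, evals > 1e-8]
    Z = Xc @ W_pca
    d = Z.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for i in np.unique(y):
        Zi = Z[y == i]
        for j in np.unique(y):
            Zj = Z[y == j]
            # (2)-(4) inverse-distance weights and weighted means M_j(x_k^i)
            D = np.linalg.norm(Zi[:, None, :] - Zj[None, :, :], axis=2)
            if i == j:
                np.fill_diagonal(D, np.inf)   # exclude the sample itself
            w = 1.0 / (D + 1e-12)
            w /= w.sum(axis=1, keepdims=True)
            M = w @ Zj
            # (5) scatter-matrix weights, normalized to sum to one
            lam = 1.0 / (np.linalg.norm(Zi - M, axis=1) + 1e-12)
            lam /= lam.sum()
            diff = Zi - M
            S = (diff * lam[:, None]).T @ diff
            if i == j:
                Sw += S        # within-class scatter
            else:
                Sb += S        # between-class scatter
    # (6) diagonal regularization of the within-class scatter
    Sw = 0.5 * Sw + 0.5 * np.diag(np.diag(Sw))
    # (7) leading eigenvectors of the regularized Sw^{-1} Sb
    ev, V = np.linalg.eig(np.linalg.solve(Sw, Sb))
    W_nwfe = V[:, np.argsort(-ev.real)[:m]].real
    # (8) compose the final embedding
    return W_pca @ W_nwfe, mu

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (8, 6)), rng.normal(4, 1, (8, 6))])
y = np.array([0] * 8 + [1] * 8)
W, mu = nw_fisherfaces(X, y, m=1)
```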
4. Experimental results
The performance of the proposed NW-Fisherfaces method was compared with that of popular linear methods in face recognition, including Eigenfaces [3], Fisherfaces [20], O-Laplacianfaces [5], NLDA, and DLDA. Five face databases were tested: the Yale database, the Olivetti Research Laboratory (ORL) database, the CMU PIE (pose, illumination, and expression) database [24], a PIE subset (PIE_Small), and the AR database. This study applied the same preprocessing as in [5] to locate the faces. Gray-level images were manually aligned, cropped, and resized to 32 × 32 pixels, so each image was represented by a 1,024-dimensional vector. For simplicity, the k-nearest-neighbor (knn) classifier with k = 1 was applied in all experiments. The recognition process was as follows: the face subspace was calculated from the training samples; new test face images were projected into the calculated subspace; and the new face images were identified by the 1-nearest-neighbor classifier.
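The recognition protocol can be sketched as follows, assuming the projected training and test features are already available (names are illustrative):

```python
import numpy as np

def knn1_predict(train_Y, train_labels, test_Y):
    """1-nearest-neighbor rule in the learned subspace (Euclidean distance)."""
    d = np.linalg.norm(test_Y[:, None, :] - train_Y[None, :, :], axis=2)
    return train_labels[np.argmin(d, axis=1)]   # label of the closest training sample

train_Y = np.array([[0.0, 0.0], [5.0, 5.0]])    # projected training features
train_labels = np.array([0, 1])
test_Y = np.array([[0.5, -0.2], [4.0, 6.0]])    # projected test features
pred = knn1_predict(train_Y, train_labels, test_Y)
```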
4.1. ORL database
The ORL database contains 10 different images for each of 40 distinct individuals. For some individuals, the images were captured at different times, varying the lighting, facial expressions (open/closed eyes, smiling/not smiling), and facial details (glasses/no glasses), as shown in Figure 1. The database is divided into training and testing sets for the experiment. The applied divisions are n images per individual for training and 10 − n images per individual for testing, where n = 2, 3, 4, and 5. Experimental results are averaged over 20 random splits for each division. Table 1 presents the lowest error rates and the corresponding dimensions obtained by Eigenfaces, Fisherfaces, O-Laplacianfaces, and NW-Fisherfaces. The proposed NW-Fisherfaces outperformed the other methods on the ORL database. Figure 2 shows the plots of error rate versus reduced dimensionality. Since the optimization of LDA produces at most L − 1 features [20], the maximal dimension of Fisherfaces is also L − 1, where L is the number of individuals. As observed, the error rates of O-Laplacianfaces fall below those of PCA and LDA after the dimension reaches a certain degree, which is 19 in Figure 2a. The error rates of NW-Fisherfaces are lower than those of the other methods over all dimensions below L − 1.
4.2. Yale database
The Yale face database contains 165 grayscale images of 15 individuals. There are 11 images per individual, one per facial expression or configuration: center-light, w/glasses, happy, left-light, w/no glasses, normal, right-light, sad, sleepy, surprised, and wink, as shown in Figure 3. The database is divided into training and testing sets for the experiment. The applied divisions are n images per individual for training and 11 − n images per individual for testing, where n = 2, 3, 4, and 5. The experimental results are averaged over 20 random splits for each division. Table 2 and Figure 4 show the experimental results. The proposed NW-Fisherfaces again outperformed the other methods at low dimensionality.
4.3. PIE database
The CMU PIE face database contains 41,368 face images of 68 individuals. Each individual was imaged under various poses, illuminations, and expressions. In this study, 5 near-frontal poses (C05, C07, C09, C27, and C29) and all the images under various illuminations, lighting conditions, and expressions were gathered, giving 170 near-frontal face images for each individual, as shown in Figure 5. The database is divided into training and testing sets for the experiment. The applied divisions are n images per individual for training and 170 − n images per individual for testing, where n = 5, 10, 20, and 30. The experimental results were averaged over 20 random splits for each division. Table 3 presents the lowest error rates and the corresponding dimensions. Both O-Laplacianfaces and NW-Fisherfaces outperformed Fisherfaces and Eigenfaces. O-Laplacianfaces achieved the lowest error rates on the PIE database. However, the dimensionality required by NW-Fisherfaces to reach its lowest error rate is much lower than that required by the other methods. As shown in Figure 6, NW-Fisherfaces outperformed the other methods over the dimensions below L − 1, where L is the number of individuals.
There is no NLDA result on the PIE database for training sets larger than 10 images per individual: when the number of training samples exceeds the feature dimensionality, the within-class scatter matrix S_{w} has no null space.
4.4. PIE_Small database
The PIE_Small database is a subset of the PIE database. To check the performance of the proposed method on small sample size data, we reduced the number of pictures per subject: instead of 170 images, we took 15 images per person, as shown in Figure 7, and found that the proposed method performs better than the others, especially for small sample size data. The applied divisions are n images per individual for training and 15 − n images per individual for testing, where n = 5, 6, 7, and 8. The experimental results were averaged over ten random splits for each division. Table 4 presents the lowest error rates and the corresponding dimensions. Both O-Laplacianfaces and NW-Fisherfaces outperformed Fisherfaces and Eigenfaces. O-Laplacianfaces achieved the lowest error rates on the PIE_Small database. However, the dimensionality required by NW-Fisherfaces to reach its lowest error rate is much lower than that required by the other methods. As shown in Figure 8, NW-Fisherfaces outperformed the other methods over the dimensions below L − 1, where L is the number of individuals.
4.5. AR database
To check the capability of invariance to lighting conditions and face orientation, problems that have been addressed more successfully by 3D deformation approaches, we applied the proposed method to the AR face database and found that it gives better results than the previously proposed methods.
This database contains 126 subjects (70 men, 56 women), each with 26 different images, as shown in Figure 9, taken under different facial expressions, illumination conditions, and occlusions. The applied divisions are n images per individual for training and 13 − n images per individual for testing, where n = 5, 6, 7, and 8. The experimental results are averaged over ten random splits for each division. Table 5 and Figure 10 show the experimental results. The proposed NW-Fisherfaces again outperformed the other methods at low dimensionality.
The pictures were taken at the CVC under strictly controlled conditions. No restrictions on clothing, glasses, makeup, hair style, etc. were imposed on the participants. Each person participated in two sessions, separated by two weeks (14 days). The same pictures were taken in both sessions.
In this face database, there are 13 conditions per person per session: (1) neutral expression, (2) smile, (3) anger, (4) scream, (5) left light on, (6) right light on, (7) all side lights on, (8) wearing sunglasses, (9) wearing sunglasses and left light on, (10) wearing sunglasses and right light on, (11) wearing scarf, (12) wearing scarf and left light on, and (13) wearing scarf and right light on. Images 14 to 26 are from the second session, under the same conditions as images 1 to 13.
5. Conclusions and future works
5.1. Conclusions

(1)
The proposed NW-Fisherfaces consistently outperforms the Eigenfaces, Fisherfaces, DLDA, and NLDA methods.

(2)
This study applied a nonparametric feature extraction method within the scheme of appearance-based face recognition.

(3)
The proposed NW-Fisherfaces method weights the between-class scatter to emphasize the boundary structure of the transformed face subspace and, therefore, enhances the discriminability of face recognition.

(4)
For practical applications, the computational load depends on the dimensionality of the trained linear projection matrix. In this study, experimental results show that the proposed method can reach its lowest error rate at low dimensionality. Hence, the NW-Fisherfaces method is practical for real-world face recognition because of its low dimensionality requirement.
5.2. Future works
The future research in this area could involve the following.

(1)
The supervised OLPP weights the scatter matrix to preserve the locality of within-class faces. This weighting concept may enhance the within-class scatter of LDA and other LDA-based methods such as NDA and NWFE.

(2)
Linear feature extraction methods measure and optimize the closeness between samples using the Euclidean distance. However, the Euclidean distance is sensitive to lighting variation, so variance caused by lighting should be reduced before applying linear feature extraction methods. Several ways to reduce the lighting variance of face images are proposed:

(a)
Mapping face images into the same intensity distribution by simple preprocessing such as histogram specification.

(b)
Transforming images into the frequency domain by Fourier-based methods such as Gabor wavelets.
The performance of NW-Fisherfaces in nonlinear feature spaces, such as a reproducing kernel Hilbert space, can be further evaluated.
References
 1.
Han PY, Jin ATB, Siong LH: Eigenvector weighting function in face recognition. Discrete Dyn Nat Soc 2011: 521935. doi:10.1155/2011/521935
 2.
Murase H, Nayar SK: Visual learning and recognition of 3-D objects from appearance. Int J Comput Vis 1995, 14: 5–24. 10.1007/BF01421486
 3.
Turk M, Pentland AP: Face recognition using Eigenfaces. Presented at the IEEE Conf on Computer Vision and Pattern Recognition, Maui, HI, 1991, 586–591.
 4.
He X, Yan S, Hu Y, Niyogi P, Zhang HJ: Face recognition using Laplacianfaces. IEEE Trans Pattern Anal Mach Intell 2005, 27(3):328–340.
 5.
Cai D, He X, Han J, Zhang HJ: Orthogonal Laplacianfaces for face recognition. IEEE Trans Image Process 2006, 15(11):3608–3614.
 6.
Park SW, Savvides M: A multifactor extension of linear discriminant analysis for face recognition under varying pose and illumination. EURASIP J Adv Signal Process 2010: 158395. doi:10.1155/2010/158395
 7.
Na JH, Park MS, Choi JY: Linear boundary discriminant analysis. Pattern Recogn 2010, 43(3):929–936. 10.1016/j.patcog.2009.09.015
 8.
Sanayha W, Rangsanseri Y: Weighted LDA image projection technique for face recognition. IEICE Trans Fund 2009, E92.A(9):2257–2265. 10.1587/transfun.E92.A.2257
 9.
Sugiyama M: Dimensionality reduction of multimodal labeled data by local Fisher discriminant analysis. J Mach Learn Res 2007, 8: 1027–1061.
 10.
Liu C, Wechsler H: Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition. IEEE Trans Image Process 2002, 11(4):467–476. 10.1109/TIP.2002.999679
 11.
Loog M, Duin RPW: Linear dimensionality reduction via a heteroscedastic extension of LDA: the Chernoff criterion. IEEE Trans Pattern Anal Mach Intell 2004, 26(6):732–739. 10.1109/TPAMI.2004.13
 12.
Liu Q, Lu H, Ma S: Improving kernel Fisher discriminant analysis for face recognition. IEEE Trans Circ Syst Video Technol 2004, 14(1):42–49. 10.1109/TCSVT.2003.818352
 13.
Yan S, Hu Y, Xu D, Zhang HJ, Zhang B, Cheng Q: Nonlinear discriminant analysis on embedded manifold. IEEE Trans Circ Syst Video Technol 2007, 17(4):468–477.
 14.
Fukunaga K: Introduction to Statistical Pattern Recognition. Academic Press, New York; 1990.
 15.
Chen LF, Mark Liao HY, Ko MT, Lin JC, Yu GJ: A new LDA-based face recognition system which can solve the small sample size problem. Pattern Recogn 2000, 33(10):1713–1726.
 16.
Wu XJ, Kittler J, Yang JY, Messer K, Wang S: A new direct LDA (D-LDA) algorithm for feature extraction in face recognition. Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004) 2004, 4: 545–548.
 17.
Kuo BC, Landgrebe DA: Nonparametric weighted feature extraction for classification. IEEE Trans Geosci Remote Sens 2004, 42(5):1096–1105.
 18.
Benediktsson JA, Palmason JA, Sveinsson JR: Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans Geosci Remote Sens 2005, 43(3):480–491.
 19.
Kuo BC, Chang KY: Feature extractions for small sample size classification problem. IEEE Trans Geosci Remote Sens 2007, 45(3):756–764.
 20.
Belhumeur PN, Hespanha JP, Kriegman DJ: Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Trans Pattern Anal Mach Intell 1997, 19(7):711–720. 10.1109/34.598228
 21.
Yu H, Yang J: A direct LDA algorithm for high-dimensional data with application to face recognition. Pattern Recogn 2001, 34: 2067–2070. 10.1016/S0031-3203(00)00162-X
 22.
Liu K, Cheng YQ, Yang JY: A generalized optimal set of discriminant vectors. Pattern Recogn 1992, 25(7):731–739.
 23.
Lu JW, Plataniotis KN, Venetsanopoulos AN: Face recognition using LDA-based algorithms. IEEE Trans Neural Netw 2003, 14(1):195–200. 10.1109/TNN.2002.806647
 24.
Sim T, Baker S, Bsat M: The CMU pose, illumination, and expression database. IEEE Trans Pattern Anal Mach Intell 2003, 25(12):1615–1618. 10.1109/TPAMI.2003.1251154
 25.
Martinez AM, Kak AC: PCA versus LDA. IEEE Trans Pattern Anal Mach Intell 2001, 23(2):228–233. 10.1109/34.908974
 26.
Mandal B, Jiang X, Eng HL, Kot A: Prediction of eigenvalues and regularization of eigenfeatures for human face verification. Pattern Recogn Lett 2010, 31(8):717–724. 10.1016/j.patrec.2009.10.006
Acknowledgements
The authors would like to thank the editors and reviewers for their helpful comments. They would also like to thank Dr. Li-Wei Ko, Department of Electrical and Control Engineering, National Chiao-Tung University, Taiwan, for his support in writing this article, and Dr. Deng Cai, for his essential contribution and for providing a complete framework on face recognition.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Li, D.-L., Prasad, M., Hsu, S.-C. et al. Face recognition using nonparametric-weighted Fisherfaces. EURASIP J. Adv. Signal Process. 2012, 92 (2012). https://doi.org/10.1186/1687-6180-2012-92
Keywords
 appearance-based vision
 face recognition
 nonparametric-weighted feature extraction (NWFE)