
# Face recognition using nonparametric-weighted Fisherfaces

Dong-Lin Li^{1}, Mukesh Prasad^{2}, Sheng-Chih Hsu^{1}, Chao-Ting Hong^{1} and Chin-Teng Lin^{1}

**2012**:92

https://doi.org/10.1186/1687-6180-2012-92

© Li et al.; licensee Springer. 2012

**Received:** 9 May 2011 | **Accepted:** 26 April 2012 | **Published:** 26 April 2012

## Abstract

This study presents an appearance-based face recognition scheme called nonparametric-weighted Fisherfaces (NW-Fisherfaces). Pixels in a facial image are treated as coordinates in a high-dimensional space and are transformed into a face subspace for analysis using nonparametric-weighted feature extraction (NWFE). According to previous studies of hyperspectral image classification, NWFE is a powerful tool for extracting hyperspectral image features. The Fisherfaces method maximizes the ratio of between-class scatter to within-class scatter. The proposed NW-Fisherfaces weights the between-class scatter to emphasize the boundary structure of the transformed face subspace and, therefore, enhances the separability of different persons' faces. NW-Fisherfaces was compared with the Orthogonal Laplacianfaces, Eigenfaces, Fisherfaces, direct linear discriminant analysis, and null space linear discriminant analysis methods on five facial databases. Experimental results show that the proposed approach outperforms the other feature extraction methods on most databases.

## Keywords

- appearance-based vision
- face recognition
- nonparametric-weighted feature extraction (NWFE)

## 1. Introduction

Face representation is important for recognizing faces in many applications, such as database matching, security systems, face indexing of web pictures, and human-computer interfaces. The appearance-based method is one of the most well-studied techniques for face representation [1, 2]. The two purposes of the appearance-based method are reducing dimensionality and increasing the discriminability of extracted features. Hence, a good feature extraction method helps recognize faces in a highly discriminative subspace of low dimensionality.

Two of the most classical feature extraction techniques for this purpose are the Eigenfaces and Fisherfaces methods. Eigenfaces [3] applies principal component analysis (PCA) to transform facial data to the linear subspace spanned by coordinates that maximize the total scatter across all classes. Unlike the Eigenfaces method, which is unsupervised, the Fisherfaces method is supervised. Fisherfaces applies linear discriminant analysis (LDA) to transform data into directions with optimal discriminability. LDA searches for coordinates that separate data of different classes and draw data of the same class close. However, both Eigenfaces and Fisherfaces see only the global Euclidean structure, which may lose some discriminability contained in other hidden structures.

To discover local structure, He et al. [4] and Cai et al. [5] proposed the Laplacianfaces method [4] and its orthogonal form, referred to as O-Laplacianfaces [5]. The Laplacianfaces algorithm is based on the locality preserving projection (LPP) algorithm, which aims at finding a linear approximation to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. Han et al. [1] proposed an eigenvector-weighting function based on the graph embedding framework.

Recently, many LDA-based methods have been proposed to embed manifold structure into the facial feature extraction process [6–13]. Park and Savvides [6] proposed a multifactor extension of LDA. Na et al. [7] proposed the linear boundary discriminant analysis, which increases class separability by reflecting different significances of nonboundary and boundary patterns.

There are several drawbacks to LDA. First, it suffers from the singularity problem: the within-class scatter matrix can be singular, which makes the method hard to apply directly. Second, LDA assumes Gaussian class distributions and may fail in applications where the distribution is more complex. Third, LDA cannot determine the optimal dimensionality for discriminant analysis, an important issue that has often been neglected. Fourth, applying LDA may encounter the so-called small sample size problem (SSSP) [14].

However, the classical LDA formulation requires the nonsingularity of the scatter matrices involved. For undersampled problems, where the data dimensionality is much larger than the sample size, all scatter matrices are singular and classical LDA fails. Many extensions, including null space LDA (N-LDA) [15] and orthogonal LDA (OLDA), have been proposed in the past to overcome this problem. N-LDA aims to maximize the between-class distance in the null space of the within-class scatter matrix, while OLDA computes a set of orthogonal discriminant vectors via the simultaneous diagonalization of the scatter matrices.

Direct linear discriminant analysis (D-LDA) [16] is an extension of LDA that deals with the SSSP. D-LDA does not use the information inside the null space, so some discriminative information may be lost. For high-dimensional data with small sample sizes, D-LDA becomes equivalent to N-LDA and LDA.

In this study, we propose an appearance-based face recognition scheme called nonparametric-weighted Fisherfaces (NW-Fisherfaces). NW-Fisherfaces is derived from nonparametric-weighted feature extraction (NWFE) [17], which performs well in hyperspectral image classification [18, 19]. The proposed NW-Fisherfaces method weights the between-class scatter to emphasize the boundary structure of the transformed face subspace and, therefore, enhances discriminability for face recognition. The proposed approach is compared with the O-Laplacianfaces, Eigenfaces, Fisherfaces, N-LDA, and D-LDA methods on five face databases. Experimental results show that the proposed approach achieves the lowest error rates in low-dimensional subspaces for most databases.

The rest of this article is organized as follows. Section 2 gives a brief review of related studies. Section 3 introduces the NW-Fisherfaces algorithm. Section 4 presents the experimental results on face recognition. In Section 5, we draw some conclusions and provide some ideas for future research.

## 2. Related study

Linear feature extraction methods can reduce the excessive dimensionality of image data with simple computation. In essence, linear methods project high-dimensional data onto a low-dimensional subspace.

### 2.1. PCA

Given *N* sample images, *x*_{1}, *x*_{2}, ..., *x*_{N}, in an *n*-dimensional image space, the original *n*-dimensional image space is linearly transformed to an *m*-dimensional feature space, where *m* < *n*. The new feature vectors *y*_{k} are defined by the following linear transformation:

$$y_k = W^T x_k, \quad k = 1, 2, \ldots, N,$$

where *W* ∈ *R*^{n×m} is a matrix with orthonormal columns. The total scatter matrix *S*_{T} is defined as

$$S_T = \sum_{k=1}^{N} \left(x_k - \mu\right)\left(x_k - \mu\right)^T,$$

where *N* is the number of sample images and *μ* is the mean of all samples. The objective function is as follows:

$$W_{\mathrm{PCA}} = \underset{W}{\arg\max}\, \left| W^T S_T W \right|,$$

where *W*_{PCA} is the set of *n*-dimensional eigenvectors of *S*_{T} corresponding to the *m* largest eigenvalues.
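As a concrete illustration, the PCA projection above can be sketched in a few lines of NumPy. This is a minimal sketch with our own function names, not code from the paper; it assumes the flattened face images are the rows of `X`:

```python
import numpy as np

def pca_fit(X, m):
    """Fit PCA: return W (n x m) and the sample mean mu.

    X is an (N, n) matrix whose rows are flattened face images.
    W holds the m eigenvectors of the total scatter matrix S_T
    with the largest eigenvalues, matching the objective above.
    """
    mu = X.mean(axis=0)
    Xc = X - mu                             # center the samples
    S_T = Xc.T @ Xc                         # total scatter matrix (n x n)
    eigvals, eigvecs = np.linalg.eigh(S_T)  # ascending order for symmetric S_T
    W = eigvecs[:, ::-1][:, :m]             # keep the m largest eigenvectors
    return W, mu

def pca_transform(X, W, mu):
    """Linear transformation y_k = W^T (x_k - mu)."""
    return (X - mu) @ W
```

`eigh` is used because *S*_T is symmetric, which also guarantees the columns of `W` are orthonormal, as the definition of *W* requires.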

### 2.2 LDA

Given *N* sample images, *x*_{1}, *x*_{2}, ..., *x*_{N}, belonging to *l* classes of faces in an *n*-dimensional image space, the objective function of LDA is as follows:

$$W_{\mathrm{LDA}} = \underset{W}{\arg\max}\, \frac{\left| W^T S_{\mathrm{b}} W \right|}{\left| W^T S_{\mathrm{w}} W \right|},$$

$$S_{\mathrm{b}} = \sum_{i=1}^{l} N_i \left( \mu^i - \mu \right)\left( \mu^i - \mu \right)^T, \qquad S_{\mathrm{w}} = \sum_{i=1}^{l} \sum_{j=1}^{N_i} \left( x_j^i - \mu^i \right)\left( x_j^i - \mu^i \right)^T,$$

where *μ* is the mean of all samples, *N*_{i} is the number of samples in class *i*, *μ*^{i} is the mean of class *i*, and ${x}_{j}^{i}$ is the *j*th sample in class *i*. *S*_{w} is the within-class scatter matrix and *S*_{b} is the between-class scatter matrix. *W*_{LDA} is the set of generalized eigenvectors of (*S*_{w})^{-1}*S*_{b} corresponding to the *m* largest generalized eigenvalues.

### 2.3 D-LDA

where *S*_{t} = *S*_{b} + *S*_{w} is the population (total) scatter matrix.
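A minimal NumPy sketch of D-LDA under the common formulation (diagonalize the between-class scatter first, then the projected within-class scatter); the function and variable names are ours, and this formulation is our reading of [16] rather than code from it:

```python
import numpy as np

def dlda_fit(S_b, S_w, m, tol=1e-10):
    """Direct LDA sketch: diagonalize S_b first, then S_w in its range."""
    # Step 1: keep only the range (non-null space) of S_b and whiten it
    vals_b, vecs_b = np.linalg.eigh(S_b)
    keep = vals_b > tol
    Z = vecs_b[:, keep] @ np.diag(1.0 / np.sqrt(vals_b[keep]))
    # now Z^T S_b Z = I on the retained subspace
    # Step 2: diagonalize the projected within-class scatter
    Sw_p = Z.T @ S_w @ Z
    vals_w, vecs_w = np.linalg.eigh(Sw_p)
    # keep the m directions with the smallest within-class scatter
    return Z @ vecs_w[:, :m]
```

By construction the returned *W* keeps the between-class scatter whitened (*W*^T *S*_b *W* = *I*) while minimizing within-class spread in the retained subspace.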

### 2.4 N-LDA

In this LDA method, the authors proved that the most expressive vectors derived in the null space of the within-class scatter matrix using PCA are equal to the optimal discriminant vectors derived in the original space using LDA. This method is a more efficient, accurate, and stable way to calculate the most discriminant projection vectors based on the modified Fisher's criterion (7). The process starts by calculating the projection vectors in the null space of the within-class scatter matrix *S*_{w}. This null space can be spanned by the eigenvectors corresponding to the zero eigenvalues of *S*_{w}. If this subspace does not exist, i.e., *S*_{w} is nonsingular, then *S*_{t} is also nonsingular; under these circumstances, we choose the eigenvectors corresponding to the largest eigenvalues of the matrix (*S*_{b} + *S*_{w})^{-1}*S*_{b} as the most discriminant vector set. Otherwise, the SSSP occurs.
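The null-space branch of this procedure can be sketched in NumPy as follows (a sketch with our own names, assuming precomputed scatter matrices and the small-sample-size case where *S*_w is singular):

```python
import numpy as np

def nlda_fit(S_b, S_w, m, tol=1e-10):
    """Null-space LDA sketch: maximize the between-class scatter
    inside the null space of the within-class scatter matrix S_w."""
    vals_w, vecs_w = np.linalg.eigh(S_w)
    null = vecs_w[:, vals_w < tol]          # basis of the null space of S_w
    if null.shape[1] == 0:
        raise ValueError("S_w is nonsingular: no null space (no SSSP)")
    Sb_p = null.T @ S_b @ null              # between-class scatter in null space
    vals_b, vecs_b = np.linalg.eigh(Sb_p)
    return null @ vecs_b[:, ::-1][:, :m]    # m largest eigenvalues
```

Any projection returned this way has exactly zero within-class scatter, which is the defining property of the N-LDA discriminant vectors.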

### 2.5 LPP

LPP seeks a projection that minimizes

$$\sum_{ij} \left( y_i - y_j \right)^2 S_{ij},$$

where *D*_{ii} = ∑_{j} *S*_{ij} and *L* = *D* - *S* is the Laplacian matrix. *S* is a similarity matrix attempting to ensure that if *x*_{i} and *x*_{j} are "close", then *y*_{i} and *y*_{j} are close as well. The basis functions of LPP are the eigenvectors of the matrix (*XDX*^T)^{-1}*XLX*^T associated with the smallest eigenvalues. Moreover, Cai et al. [5] proposed the orthogonal form of LPP (OLPP) and proved that OLPP outperforms LPP. In this study, OLPP is applied with a supervised similarity matrix for comparison. The weights of *S* are defined as follows:

$$S_{ij} = \begin{cases} \cos\left(x_i, x_j\right), & x_i \text{ and } x_j \text{ belong to the same class,} \\ 0, & \text{otherwise,} \end{cases}$$

where cos(·) denotes the cosine distance measure and *i* and *j* denote sample indices. The applied *S* preserves locality according to the cosine distance measure and enforces preservation only for within-class faces by setting the between-class weights to 0.
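The supervised similarity matrix just described can be built in a few lines of NumPy (a sketch with our own function name):

```python
import numpy as np

def supervised_cosine_similarity(X, y):
    """Supervised OLPP weights: cosine similarity for within-class
    pairs, 0 for between-class pairs, as described above."""
    U = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    C = U @ U.T                                       # pairwise cosine values
    same_class = y[:, None] == y[None, :]             # within-class mask
    return np.where(same_class, C, 0.0)
```

Zeroing between-class entries is what restricts the "closeness" preservation to faces of the same person.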

## 3. Methodology: NW-Fisherfaces

In NWFE [17], the nonparametric between-class scatter matrix is defined as

$$S_{\mathrm{b}}^{\mathrm{NW}} = \sum_{i=1}^{l} P_i \sum_{\substack{j=1 \\ j \ne i}}^{l} \sum_{k=1}^{N_i} \lambda_k^{(i,j)} \left( x_k^i - M_j\!\left(x_k^i\right) \right)\left( x_k^i - M_j\!\left(x_k^i\right) \right)^T,$$

with the scatter matrix weight

$$\lambda_k^{(i,j)} = \frac{\operatorname{dist}\!\left(x_k^i, M_j\!\left(x_k^i\right)\right)^{-1}}{\sum_{t=1}^{N_i} \operatorname{dist}\!\left(x_t^i, M_j\!\left(x_t^i\right)\right)^{-1}}$$

and the weighted means

$$M_j\!\left(x_k^i\right) = \sum_{l=1}^{N_j} w_{kl}^{(i,j)} x_l^j, \qquad w_{kl}^{(i,j)} = \frac{\operatorname{dist}\!\left(x_k^i, x_l^j\right)^{-1}}{\sum_{t=1}^{N_j} \operatorname{dist}\!\left(x_k^i, x_t^j\right)^{-1}},$$

where *N*_{i} is the training sample size of class *i*, *P*_{i} is the prior of class *i*, ${x}_{k}^{i}$ is the *k*th sample of class *i*, ${M}_{j}\left({x}_{k}^{i}\right)$ denotes the weighted mean corresponding to ${x}_{k}^{i}$ for class *j*, and dist(*x, y*) is the distance measured from *x* to *y*. The closer ${x}_{k}^{i}$ and ${M}_{j}\left({x}_{k}^{i}\right)$ are, the larger the weight ${\lambda}_{k}^{\left(i,j\right)}$ is, and the sum of ${\lambda}_{k}^{\left(i,j\right)}$ over class *i* is one. The weight ${w}_{kl}^{\left(i,j\right)}$ for computing weighted means is a function of ${x}_{k}^{i}$ and ${x}_{l}^{j}$: the closer ${x}_{k}^{i}$ and ${x}_{l}^{j}$ are, the larger ${w}_{kl}^{\left(i,j\right)}$ is, and the sum of ${w}_{kl}^{\left(i,j\right)}$ for ${M}_{j}\left({x}_{k}^{i}\right)$ is one.

The nonparametric within-class scatter matrix ${S}_{\mathsf{\text{w}}}^{\mathsf{\text{NW}}}$ is defined analogously with *j* = *i*, and is regularized as

$$S_{\mathrm{w}}^{\mathrm{RNW}} = 0.5\, S_{\mathrm{w}}^{\mathrm{NW}} + 0.5\, \operatorname{diag}\!\left( S_{\mathrm{w}}^{\mathrm{NW}} \right),$$

where diag(·) denotes the diagonal part of a matrix.

The NW-Fisherfaces algorithm proceeds as follows:

- (1) *PCA projection*: Face images are projected into the PCA subspace by discarding the components corresponding to zero eigenvalues. *W*_{PCA} denotes the transformation matrix of the PCA projection. The projected components are statistically uncorrelated, and the rank of the projected data matrix equals the data dimensionality. This study applied the PCA projection method proposed in [5, 25] to prevent the singularity of *S*_{w}, for simple computation and a fair comparison with Fisherfaces and O-Laplacianfaces. However, discarding the dimensions corresponding to zero eigenvalues may lose important discriminant information [26]. For further applying LDA-based methods to practical applications, the advanced regularization method proposed in [26] is suggested.
- (2) Compute the distances between each pair of samples and form the distance matrix.
- (3) Compute ${w}_{kl}^{\left(i,j\right)}$ from the distance matrix.
- (4) Use ${w}_{kl}^{\left(i,j\right)}$ to compute the weighted means ${M}_{j}\left({x}_{k}^{i}\right)$.
- (5) Compute the scatter matrix weights ${\lambda}_{k}^{\left(i,j\right)}$.
- (6) Compute ${S}_{\mathsf{\text{b}}}^{\mathsf{\text{NW}}}$ and the regularized ${S}_{\mathsf{\text{w}}}^{\mathsf{\text{RNW}}}$.
- (7) Compute *W*_{NWFE} = [*w*_{1}, ..., *w*_{m}] as the eigenvectors of ${\left({S}_{\mathsf{\text{w}}}^{\mathsf{\text{RNW}}}\right)}^{-1}{S}_{\mathsf{\text{b}}}^{\mathsf{\text{NW}}}$ corresponding to the *m* largest eigenvalues.
- (8) Compute the NWFE embedding as $W={W}_{\mathsf{\text{PCA}}}{W}_{\mathsf{\text{NWFE}}}$,

where *W* is the transformation matrix and the column vectors of *W* are the so-called NW-Fisherfaces.
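Steps (2) through (6) above can be sketched in NumPy as follows. This is our reading of the NWFE scatter computation, with class priors and the 0.5/0.5 diagonal regularization assumed from the definitions in this section; all names are ours, not the paper's code:

```python
import numpy as np

def nwfe_scatter(X, y, beta=0.5):
    """Sketch of the NWFE scatter matrices used by NW-Fisherfaces.

    Weighted means use inverse-distance weights w_kl^{(i,j)}; scatter
    weights lambda_k^{(i,j)} use the inverse distance from each sample
    to its weighted mean, each normalized to sum to one.
    """
    classes = np.unique(y)
    n = X.shape[1]
    S_b = np.zeros((n, n))
    S_w = np.zeros((n, n))
    N = len(X)
    for i in classes:
        Xi = X[y == i]
        Pi = len(Xi) / N                       # class prior P_i
        for j in classes:
            Xj = X[y == j]
            # inverse-distance weights w_kl^{(i,j)}, rows normalized to 1
            d = np.linalg.norm(Xi[:, None, :] - Xj[None, :, :], axis=2)
            w = 1.0 / np.maximum(d, 1e-12)
            if i == j:
                np.fill_diagonal(w, 0.0)       # exclude the sample itself
            w /= w.sum(axis=1, keepdims=True)
            M = w @ Xj                         # weighted means M_j(x_k^i)
            # scatter weights lambda_k^{(i,j)}, normalized over class i
            lam = 1.0 / np.maximum(np.linalg.norm(Xi - M, axis=1), 1e-12)
            lam /= lam.sum()
            diff = Xi - M
            S = (diff * lam[:, None]).T @ diff
            if i == j:
                S_w += Pi * S
            else:
                S_b += Pi * S
    S_w_reg = beta * S_w + (1 - beta) * np.diag(np.diag(S_w))  # regularize
    return S_b, S_w_reg
```

Steps (7) and (8) then reduce to the same generalized eigenproblem as LDA, with these weighted scatter matrices in place of *S*_b and *S*_w.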

## 4. Experimental results

The performance of the proposed NW-Fisherfaces method was compared with popular linear methods in face recognition: Eigenfaces [3], Fisherfaces [20], O-Laplacianfaces [5], D-LDA [16], and N-LDA [15]. Five face databases were tested: the Yale database, the Olivetti Research Laboratory (ORL) database, the CMU PIE (pose, illumination, and expression) database [24] (including a PIE_Small subset), and the AR database. This study applied the same preprocessing as in [5] to locate the face. Gray-level images were manually aligned, cropped, and resized to 32 × 32 pixels, so each image was represented by a 1,024-dimensional vector. For simplicity, the *k*-nearest-neighbor (*k*-nn) classifier with *k* = 1 was applied in all experiments. The recognition process was as follows: the face subspace was computed from the training samples; new test face images were projected into the computed subspace; and each new facial image was identified by the 1-nearest-neighbor classifier.
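The recognition protocol just described fits in a few lines of NumPy (a sketch with our own function name; `W` is any learned projection matrix):

```python
import numpy as np

def one_nn_error_rate(train_X, train_y, test_X, test_y, W):
    """Project both sets with the learned matrix W, then classify
    each test sample with the 1-nearest-neighbor rule."""
    Ztr = train_X @ W
    Zte = test_X @ W
    # pairwise Euclidean distances between test and training projections
    d = np.linalg.norm(Zte[:, None, :] - Ztr[None, :, :], axis=2)
    pred = train_y[np.argmin(d, axis=1)]
    return np.mean(pred != test_y)
```

The same routine is reused for every method in the comparison; only `W` changes.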

### 4.1. ORL database

For the ORL database, *n* images per individual were randomly selected for training and the remaining 10 - *n* images per individual for testing, where *n* = 2, 3, 4, and 5. The experimental results are averaged over 20 random splits for each division. Table 1 presents the lowest error rates and the corresponding dimensions obtained by Eigenfaces, Fisherfaces, O-Laplacianfaces, and NW-Fisherfaces. The proposed NW-Fisherfaces outperformed the other methods on the ORL database. Figure 1 plots error rate versus reduced dimensionality. Since the optimization of LDA produces at most *L* - 1 features [20], the maximal dimension of Fisherfaces is also *L* - 1, where *L* is the number of individuals. As observed, the error rates of O-Laplacianfaces fall below those of PCA and LDA once the dimension reaches a certain level, which is 19 in Figure 2a. The error rates of NW-Fisherfaces are lower than those of the other methods over all dimensions below *L* - 1.

**Table 1** Performance comparisons on the ORL database (lowest error rate, with the corresponding dimensionality in parentheses)

| Method | 4 Train | 5 Train | 6 Train | 7 Train |
|---|---|---|---|---|
| Eigenfaces | 80.25% (522) | 78.24% (678) | 77.15% (658) | 75.49% (844) |
| Fisherfaces | 47.69% (135) | 50.30% (135) | 48.78% (135) | 49.71% (135) |
| D-LDA | 39.59% (133) | 37.13% (133) | 31.43% (136) | 26.45% (136) |
| N-LDA | 41.32% (135) | 44.36% (135) | 45.28% (135) | 51.32% (135) |
| O-Laplacianfaces | 33.06% (198) | 28.71% (355) | 24.70% (443) | 20.93% (543) |
| NW-Fisherfaces | 31.05% (34) | 26.27% (35) | 21.96% (37) | 17.70% (34) |

### 4.2. Yale database

For the Yale database, *n* images per individual were randomly selected for training and the remaining 11 - *n* images per individual for testing, where *n* = 2, 3, 4, and 5. The experimental results are averaged over 20 random splits for each division. Table 2 and Figure 4 show the experimental results. The proposed NW-Fisherfaces again outperformed the other methods at low dimensionality.

**Table 2** Performance comparisons on the Yale database

| Method | 2 Train | 3 Train | 4 Train | 5 Train |
|---|---|---|---|---|
| Eigenfaces | 53.96% (29) | 50.04% (44) | 44.33% (58) | 42.28% (74) |
| Fisherfaces | 56.48% (10) | 40.08% (13) | 30.95% (14) | 26.22% (14) |
| D-LDA | 75.30% (14) | 44.79% (15) | 37.67% (13) | 32.94% (15) |
| N-LDA | 45.52% (14) | 33.25% (14) | 26.60% (26) | 22.71% (26) |
| O-Laplacianfaces | 45.52% (14) | 33.25% (14) | 26.76% (14) | 23.06% (14) |
| NW-Fisherfaces | 43.30% (15) | 31.83% (14) | 24.10% (15) | 20.00% (15) |

### 4.3. PIE database

For the PIE database, *n* images per individual were randomly selected for training and the remaining 170 - *n* images per individual for testing, where *n* = 5, 10, 20, and 30. The experimental results were averaged over 20 random splits for each division. Table 3 presents the lowest error rates and the corresponding dimensions. Both O-Laplacianfaces and NW-Fisherfaces outperformed Fisherfaces and Eigenfaces. O-Laplacianfaces achieved the lowest error rates on the PIE database. However, the dimensionality required by NW-Fisherfaces to reach its lowest error rate is much lower than that required by the other methods. As shown in Figure 6, NW-Fisherfaces outperformed the other methods over the dimensions below *L* - 1, where *L* is the number of individuals.

**Table 3** Performance comparisons on the PIE database

| Method | 5 Train | 10 Train | 20 Train | 30 Train |
|---|---|---|---|---|
| Eigenfaces | 76.50% (334) | 64.68% (670) | 48.88% (822) | 1.92% (896) |
| Fisherfaces | 42.58% (67) | 29.24% (67) | 21.53% (67) | 10.93% (67) |
| D-LDA | 39.70% (63) | 24.64% (62) | 14.26% (62) | 9.80% (62) |
| O-Laplacianfaces | 29.33% (131) | 16.23% (272) | 1.03% (601) | 1.05% (701) |
| NW-Fisherfaces | 33.97% (25) | 20.34% (22) | 1.66% (608) | 1.08% (785) |

There are no N-LDA results on the PIE database beyond 10 Train: when the number of training samples exceeds the feature dimensionality, the within-class scatter matrix *S*_{w} has no null space.

### 4.4. PIE_Small database

For the PIE_Small database, *n* images per individual were randomly selected for training and the remaining 15 - *n* images per individual for testing, where *n* = 5, 6, 7, and 8. The experimental results were averaged over ten random splits for each division. Table 4 presents the lowest error rates and the corresponding dimensions. Both O-Laplacianfaces and NW-Fisherfaces outperformed Fisherfaces and Eigenfaces. O-Laplacianfaces achieved the lowest error rates on the PIE_Small database. However, the dimensionality required by NW-Fisherfaces to reach its lowest error rate is much lower than that required by the other methods. As shown in Figure 8, NW-Fisherfaces outperformed the other methods over the dimensions below *L* - 1, where *L* is the number of individuals.

**Table 4** Performance comparisons on the PIE_Small database

| Method | 5 Train | 6 Train | 7 Train | 8 Train |
|---|---|---|---|---|
| Eigenfaces | 55.97% (272) | 51.26% (164) | 47.00% (376) | 43.34% (164) |
| Fisherfaces | 39.56% (65) | 34.64% (67) | 32.74% (67) | 29.35% (67) |
| D-LDA | 36.32% (55) | 31.47% (59) | 28.51% (58) | 25.76% (60) |
| N-LDA | 29.97% (67) | 26.36% (82) | 25.18% (75) | 22.73% (67) |
| O-Laplacianfaces | 24.66% (99) | 20.56% (97) | 18.35% (99) | 16.32% (99) |
| NW-Fisherfaces | 25.21% (23) | 20.00% (29) | 17.50% (20) | 15.69% (22) |

### 4.5. AR database

To examine robustness to lighting conditions and face orientation, which 3D deformation approaches handle better, we also tested the proposed method on the AR face database, where it gave better results than the previously proposed methods.

For the AR database, *n* images per individual were randomly selected for training and the remaining 13 - *n* images per individual for testing, where *n* = 5, 6, 7, and 8. The experimental results are averaged over ten random splits for each division. Table 5 and Figure 10 show the experimental results. The proposed NW-Fisherfaces again outperformed the other methods at low dimensionality.

**Table 5** Performance comparisons on the AR database

| Method | 4 Train | 5 Train | 6 Train | 7 Train |
|---|---|---|---|---|
| Eigenfaces | 80.25% (522) | 78.24% (678) | 77.15% (658) | 75.49% (844) |
| Fisherfaces | 47.69% (135) | 50.30% (135) | 48.78% (135) | 49.71% (135) |
| D-LDA | 39.59% (133) | 37.13% (133) | 31.43% (136) | 26.45% (136) |
| N-LDA | 41.32% (135) | 44.36% (135) | 45.28% (135) | 51.32% (135) |
| O-Laplacianfaces | 33.06% (198) | 28.71% (355) | 24.70% (443) | 20.93% (543) |
| NW-Fisherfaces | 31.05% (34) | 26.27% (35) | 21.96% (37) | 17.70% (34) |

The pictures were taken at the CVC under strictly controlled conditions. No restrictions on clothing, glasses, make-up, or hair style were imposed on participants. Each person participated in two sessions, separated by 2 weeks; the same pictures were taken in both sessions.

The database contains 13 images of each person per session: (1) neutral expression, (2) smile, (3) anger, (4) scream, (5) left light on, (6) right light on, (7) all side lights on, (8) wearing sunglasses, (9) sunglasses and left light on, (10) sunglasses and right light on, (11) wearing scarf, (12) scarf and left light on, and (13) scarf and right light on. Images 14 to 26 were taken in the second session under the same conditions as 1 to 13.

## 5. Conclusions and future works

### 5.1. Conclusions

- (1)
The proposed NW-Fisherfaces consistently outperforms the Eigenfaces, Fisherfaces, D-LDA, and N-LDA methods.

- (2)
This study applied a nonparametric feature extraction method into the scheme of appearance-based face recognition.

- (3)
The proposed NW-Fisherfaces method weights the between-class scatter to emphasize boundary structure of the transformed face subspace and, therefore, enhances the discriminability of face recognition.

- (4)
For practical applications, the computational load will depend on the dimensionality of the trained linear projection matrix. In this study, experimental results show that the proposed method can reach its lowest error rate with low dimensionality. Hence, the NW-Fisherfaces method is practical for real-world face recognition due to the low dimensionality requirement.

### 5.2. Future works

- (1)
The supervised OLPP weights the scatter matrix to preserve the locality of within class face. This weighting concept may enhance the within-class scatter of LDA and other LDA-based methods such as NDA and NWFE.

- (2)
Linear feature extraction methods measure and optimize closeness between samples using Euclidean distance. However, Euclidean distance is inherently variant to lighting, so variance caused by lighting should be reduced before applying linear feature extraction methods. Several solutions to reduce the lighting variance of face images are suggested:

- (a)
Mapping face images into the same intensity distribution by simple preprocessing such as histogram specification.

- (b)
Transforming images into frequency domain by Fourier-based methods such as Gabor wavelets.
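As an illustration of option (a), the following is a minimal NumPy sketch of plain histogram equalization, the simplest special case of histogram specification (for true specification, a target CDF would replace the uniform one assumed here):

```python
import numpy as np

def equalize_histogram(img):
    """Map an 8-bit grayscale image toward a uniform intensity
    distribution via the classic CDF-based lookup table."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]               # first occupied intensity level
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]                          # apply the lookup table
```

Applying such a mapping to all face images before feature extraction gives them comparable intensity distributions, which is the goal stated in (a).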

The performance of NW-Fisherfaces in nonlinear feature space, such as kernel Hilbert space, can be further evaluated.

## Declarations

### Acknowledgements

The authors would like to thank the editors and reviewers for their helpful comments. They would also like to thank Dr. Li-Wei Ko, Department of Electrical and Control Engineering, National Chiao-Tung University, Taiwan, for their support in writing this article, and Dr. Deng Cai, for his essential contribution and providing a complete framework on face recognition.


## References

1. Han PY, Jin ATB, Siong LH: Eigenvector weighting function in face recognition. *Discrete Dyn Nat Soc* 2011: 521935. doi:10.1155/2011/521935
2. Murase H, Nayar SK: Visual learning and recognition of 3-D objects from appearance. *Int J Comput Vis* 1995, 14: 5-24. doi:10.1007/BF01421486
3. Turk M, Pentland AP: Face recognition using Eigenfaces. In *Proc IEEE Conf Computer Vision and Pattern Recognition*, Maui, HI; 1991: 586-591.
4. He X, Yan S, Hu Y, Niyogi P, Zhang H-J: Face recognition using Laplacianfaces. *IEEE Trans Pattern Anal Mach Intell* 2005, 27(3): 328-340.
5. Cai D, He X, Han J, Zhang H-J: Orthogonal Laplacianfaces for face recognition. *IEEE Trans Image Process* 2006, 15(11): 3608-3614.
6. Park SW, Savvides M: A multifactor extension of linear discriminant analysis for face recognition under varying pose and illumination. *EURASIP J Adv Signal Process* 2010: 158395. doi:10.1155/2010/158395
7. Na JH, Park MS, Choi JY: Linear boundary discriminant analysis. *Pattern Recogn* 2010, 43(3): 929-936. doi:10.1016/j.patcog.2009.09.015
8. Sanayha W, Rangsanseri Y: Weighted LDA image projection technique for face recognition. *IEICE Trans Fund* 2009, E92.A(9): 2257-2265. doi:10.1587/transfun.E92.A.2257
9. Sugiyama M: Dimensionality reduction of multimodal labeled data by local Fisher discriminant analysis. *J Mach Learn Res* 2007, 8: 1027-1061.
10. Liu C, Wechsler H: Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition. *IEEE Trans Image Process* 2002, 11(4): 467-476. doi:10.1109/TIP.2002.999679
11. Loog M, Duin RPW: Linear dimensionality reduction via a heteroscedastic extension of LDA: the Chernoff criterion. *IEEE Trans Pattern Anal Mach Intell* 2004, 26(6): 732-739. doi:10.1109/TPAMI.2004.13
12. Liu Q, Lu H, Ma S: Improving kernel Fisher discriminant analysis for face recognition. *IEEE Trans Circ Syst Video Technol* 2004, 14(1): 42-49. doi:10.1109/TCSVT.2003.818352
13. Yan S, Hu Y, Xu D, Zhang H-J, Zhang B, Cheng Q: Nonlinear discriminant analysis on embedded manifold. *IEEE Trans Circ Syst Video Technol* 2007, 17(4): 468-477.
14. Fukunaga K: *Introduction to Statistical Pattern Recognition*. Academic Press, New York; 1990.
15. Chen L-F, Mark Liao H-Y, Ko M-T, Lin J-C, Yu G-J: A new LDA-based face recognition system which can solve the small sample size problem. *Pattern Recogn* 2000, 33(10): 1713-1726.
16. Wu X-J, Kittler J, Yang J-Y, Messer K, Wang S: A new direct LDA (D-LDA) algorithm for feature extraction in face recognition. In *Proc 17th International Conference on Pattern Recognition (ICPR 2004)* 2004, 4: 545-548.
17. Kuo B-C, Landgrebe DA: Nonparametric weighted feature extraction for classification. *IEEE Trans Geosci Remote Sens* 2004, 42(5): 1096-1105.
18. Benediktsson JA, Palmason JA, Sveinsson JR: Classification of hyperspectral data from urban areas based on extended morphological profiles. *IEEE Trans Geosci Remote Sens* 2005, 43(3): 480-491.
19. Kuo B-C, Chang K-Y: Feature extractions for small sample size classification problem. *IEEE Trans Geosci Remote Sens* 2007, 45(3): 756-764.
20. Belhumeur PN, Hespanha JP, Kriegman DJ: Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. *IEEE Trans Pattern Anal Mach Intell* 1997, 19(7): 711-720. doi:10.1109/34.598228
21. Yu H, Yang J: A direct LDA algorithm for high-dimensional data with application to face recognition. *Pattern Recogn* 2001, 34: 2067-2070. doi:10.1016/S0031-3203(00)00162-X
22. Liu K, Cheng YQ, Yang JY: A generalized optimal set of discriminant vectors. *Pattern Recogn* 1992, 25(1): 731-739.
23. Lu JW, Plataniotis KN, Venetsanopoulos AN: Face recognition using LDA-based algorithms. *IEEE Trans Neural Netw* 2003, 14(1): 195-200. doi:10.1109/TNN.2002.806647
24. Sim T, Baker S, Bsat M: The CMU pose, illumination, and expression database. *IEEE Trans Pattern Anal Mach Intell* 2003, 25(12): 1615-1618. doi:10.1109/TPAMI.2003.1251154
25. Martinez AM, Kak AC: PCA versus LDA. *IEEE Trans Pattern Anal Mach Intell* 2001, 23(2): 228-233. doi:10.1109/34.908974
26. Mandal B, Jiang X, Eng H-L, Kot A: Prediction of eigenvalues and regularization of eigenfeatures for human face verification. *Pattern Recogn Lett* 2010, 31(8): 717-724. doi:10.1016/j.patrec.2009.10.006

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.