
A Multifactor Extension of Linear Discriminant Analysis for Face Recognition under Varying Pose and Illumination

Abstract

Linear Discriminant Analysis (LDA) and Multilinear Principal Component Analysis (MPCA) are leading subspace methods for achieving dimension reduction based on supervised learning. Both LDA and MPCA use class labels of data samples to calculate subspaces onto which these samples are projected. Furthermore, both methods have been successfully applied to face recognition. Although LDA and MPCA share common goals and methodologies, in previous research they have been applied separately and independently. In this paper, we propose an extension of LDA to multiple factor frameworks. Our proposed method, Multifactor Discriminant Analysis (MDA), aims to obtain multilinear projections that maximize the between-class scatter while minimizing the within-class scatter, which is the same fundamental objective as LDA. Moreover, MDA, like MPCA, uses multifactor analysis and calculates subject parameters that represent the characteristics of subjects and are invariant to other changes, such as viewpoints or lighting conditions. In this way, our proposed MDA combines the best virtues of both LDA and MPCA for face recognition.

1. Introduction

Face recognition has significant applications for defense and national security. However, face recognition today remains challenging because of large variations in facial image appearance due to multiple factors, including facial feature variations among different subjects, viewpoints, lighting conditions, and facial expressions. Thus, there is great demand for robust face recognition methods that can recognize a subject's identity from a face image in the presence of such variations. Dimensionality reduction techniques are commonly applied to face recognition not only to increase matching efficiency and obtain compact representations but, more importantly, to highlight the important characteristics of each face image that provide discrimination. In particular, dimension reduction methods based on supervised learning have been proposed and are commonly used in the following manner. Given a set of face images with class labels, a dimension reduction method based on supervised learning makes full use of the class labels of these images to learn each subject's identity. Then, the learned dimension reduction is generalized to unlabeled test images, also called out-of-sample images. Finally, these test images are classified with respect to different subjects, and the classification accuracy is computed to evaluate the effectiveness of the discrimination.

Multilinear Principal Component Analysis (MPCA) [1, 2] and Linear Discriminant Analysis (LDA) [3, 4] are two of the most widely used dimension reduction methods for face recognition. Unlike traditional PCA, both MPCA and LDA are based on supervised learning that makes use of given class labels. Furthermore, both MPCA and LDA are subspace projection methods that calculate low-dimensional projections of data samples onto trained subspaces. Although LDA and MPCA calculate these subspaces in different ways, they share a common objective: to exploit each subject's individual facial appearance variations.

MPCA is a multilinear extension of Principal Component Analysis (PCA) [5] that analyzes the interaction between multiple factors using a tensor framework. The basic methodology of PCA is to calculate projections of data samples onto the linear subspace spanned by the principal directions with the largest variance. In other words, PCA finds the projections that best represent the data. While PCA calculates a single type of low-dimensional projection vector for each face image, MPCA obtains multiple types of low-dimensional projection vectors; each vector parameterizes a different factor of variation, such as the subject's identity, viewpoint, or lighting condition. MPCA thus organizes the data along multiple dimensions corresponding to these factors and computes a linear subspace representing each varying factor.

In this paper, we separately address the advantages and disadvantages of multifactor analysis and discriminant analysis and propose Multifactor Discriminant Analysis (MDA) by synthesizing both methods. MDA can be thought of as an extension of LDA to multiple factor frameworks, providing both multifactor analysis and discriminant analysis. LDA and MPCA have different advantages and disadvantages, which result from the fact that each method assumes different characteristics of the data distribution. LDA can analyze clusters distributed in a global data space based on the assumption that the samples of each class approximately form a Gaussian distribution. On the other hand, MPCA can analyze the locally repeated distributions caused by varying one factor while the other factors are held fixed. By synthesizing LDA and MPCA, our proposed MDA can capture both the global and the local distributions created by a group of subjects.

Similar to our MDA, the Multilinear Discriminant Analysis proposed in [6] applies both tensor frameworks and LDA to face recognition. Our method aims to analyze multiple factors, such as subjects' identities and lighting conditions, in a set of vectored images. On the other hand, [6] is designed to analyze multidimensional images with a single factor, that is, subjects' identities. In [6], each face image constructs an $n$-mode tensor, and the low-dimensional representation of this original tensor is calculated as another $n$-mode tensor of smaller size. For example, if we simply use 2-mode tensors, that is, matrices, representing 2D images, the method proposed in [6] reduces the dimensions of the rows and columns by capturing the repeated tendencies in rows and the repeated tendencies in columns. On the other hand, our proposed MDA analyzes the repeated tendencies caused by varying each factor in a subspace obtained by LDA. The goal of MDA is to reduce the impact of environmental conditions, such as viewpoint and lighting, on the low-dimensional representations obtained by LDA. While [6] obtains a single tensor of smaller size for each image tensor, our proposed MDA obtains, for each image vector, multiple low-dimensional vectors which decompose and parameterize the impacts of the multiple factors. Thus, for each image, while the low-dimensional representation obtained by [6] is still influenced by variance in environmental factors, the multiple parameters obtained by our MDA are expected to be independent of each other. The extension of [6] to multiple factor frameworks cannot be derived in a straightforward manner because that method is formulated using only a single factor, that is to say, subjects' identities. On the other hand, our proposed MDA decomposes the low-dimensional representations obtained by LDA into multiple types of factor-specific parameters, such as subject parameters.

The remainder of this paper is organized as follows. Section 2 reviews subspace methods from which the proposed method is derived. Section 3 first addresses the advantages and disadvantages of multifactor analysis and discriminant analysis individually, and then Section 4 proposes MDA with the combined virtues of both methods. Experimental results for face recognition in Section 5 show that the proposed MDA outperforms major dimension reduction methods on the CMU PIE database and the Extended Yale B database. Section 6 summarizes the results and conclusions of our proposed method.

2. Review of Subspace Projection Methods

In this section, we review MPCA and LDA, two methods on which our proposed Multifactor Discriminant Analysis is based.

2.1. Multilinear PCA

Multilinear Principal Component Analysis (MPCA) [1, 2] is a multilinear extension of PCA. MPCA computes a linear subspace representing the variance of the data due to the variation of each factor, as well as the linear subspace of the image space itself. In this paper, we consider three factors: different subjects, viewpoints (i.e., pose types), and lighting conditions (i.e., illumination). While PCA is based on the Singular Value Decomposition (SVD) [7], MPCA is based on the Higher-Order Singular Value Decomposition (HOSVD) [8], which is a multidimensional extension of SVD.

Let $X \in \mathbb{R}^{p \times N}$ be the data matrix whose columns are the $N$ vectored training images with $p$ pixels each. We assume that these data samples are centered at zero. By SVD, the matrix $X$ can be decomposed into three matrices $U$, $\Sigma$, and $V$:

$$X = U \Sigma V^{\top}. \tag{1}$$

If we keep only the $m$ column vectors of $U$ and $V$ corresponding to the $m$ largest singular values and discard the rest of each matrix, the sizes of the matrices in (1) become $U \in \mathbb{R}^{p \times m}$, $\Sigma \in \mathbb{R}^{m \times m}$, and $V \in \mathbb{R}^{N \times m}$. For a sample $x$, PCA obtains an $m$-dimensional representation:

$$y = U^{\top} x. \tag{2}$$

Note that these low-dimensional projections preserve the dot products of the training images. We define the matrix $Y$ consisting of these projections obtained by PCA:

$$Y = U^{\top} X. \tag{3}$$

Then, we can see that the Gram matrices of $Y$ and $X$ are identical, since

$$G = Y^{\top} Y = X^{\top} U U^{\top} X = X^{\top} X. \tag{4}$$

Since a Gram matrix is the matrix of all pairwise dot products, the set of projections $y$ also preserves the dot products of the original training images.
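For concreteness, the following is a minimal NumPy sketch of the PCA step in (1)-(4), using assumed shapes and variable names rather than the paper's own implementation: the centered images are projected with the left singular vectors, and the Gram matrix of the projections is checked against that of the images.

```python
# Minimal PCA sketch for (1)-(4); shapes, names, and the random stand-in data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
p, N = 1024, 60                               # p pixels, N training images (stand-in data)
X = rng.standard_normal((p, N))
X -= X.mean(axis=1, keepdims=True)            # center the data at zero

U, s, Vt = np.linalg.svd(X, full_matrices=False)   # X = U diag(s) Vt, as in (1)
Y = U.T @ X                                        # PCA projections, one column per image, (2)-(3)

# With all components of nonzero singular value retained, the Gram matrices match (4);
# truncating U to the m largest components makes this an approximation.
print(np.allclose(Y.T @ Y, X.T @ X))
```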

While PCA parameterizes a sample $x$ with one low-dimensional vector $y$, MPCA [1] parameterizes the sample using multiple vectors associated with the multiple factors of a data set. In this paper, we consider three factors of face images: identities (or subjects), poses, and lighting conditions. Let $x_{ijk}$ denote a vectored training image of the $i$th subject in the $j$th pose and the $k$th lighting condition, where $i = 1, \ldots, n_s$, $j = 1, \ldots, n_v$, and $k = 1, \ldots, n_l$, so that $N = n_s n_v n_l$. These training images are sorted in a specific order so as to construct a data matrix $X$:

$$X = \left[\, x_{111} \;\; \cdots \;\; x_{11 n_l} \;\; x_{121} \;\; \cdots \;\; x_{1 n_v n_l} \;\; \cdots \;\; x_{n_s n_v n_l} \,\right]. \tag{5}$$

Using MPCA, an arbitrary image $x$ and the data matrix $X$ are represented as

$$x = U Z \left( s \otimes v \otimes l \right), \tag{6}$$
$$X = U Z \left( U_s \otimes U_v \otimes U_l \right)^{\top}, \tag{7}$$

respectively, where $\otimes$ denotes the Kronecker product and $U$ is identical to the matrix in (1). The matrix $Z$ results from the pixel-mode flattening of a core tensor [1]. In (6), we can see that MPCA parameterizes a single image using three parameters: the subject parameter $s$, the viewpoint parameter $v$, and the lighting parameter $l$, where $s \in \mathbb{R}^{n_s}$, $v \in \mathbb{R}^{n_v}$, and $l \in \mathbb{R}^{n_l}$. Similarly, in (7) the data matrix $X$ is represented by the three orthogonal matrices $U_s$, $U_v$, and $U_l$. The columns of each matrix span the linear subspace of the data space formed by varying each factor. Therefore, $U_s$, $U_v$, and $U_l$ consist of the eigenvectors corresponding to the largest eigenvalues of three Gram-like matrices $G_{\mathrm{subject}}$, $G_{\mathrm{view}}$, and $G_{\mathrm{light}}$, respectively, where the entries of these matrices are calculated as

$$\bigl[G_{\mathrm{subject}}\bigr]_{i_1 i_2} = \frac{1}{n_v n_l}\sum_{j=1}^{n_v}\sum_{k=1}^{n_l} x_{i_1 j k}^{\top} x_{i_2 j k},\qquad \bigl[G_{\mathrm{view}}\bigr]_{j_1 j_2} = \frac{1}{n_s n_l}\sum_{i=1}^{n_s}\sum_{k=1}^{n_l} x_{i j_1 k}^{\top} x_{i j_2 k},\qquad \bigl[G_{\mathrm{light}}\bigr]_{k_1 k_2} = \frac{1}{n_s n_v}\sum_{i=1}^{n_s}\sum_{j=1}^{n_v} x_{i j k_1}^{\top} x_{i j k_2}. \tag{8}$$

These three Gram-like matrices $G_{\mathrm{subject}}$, $G_{\mathrm{view}}$, and $G_{\mathrm{light}}$ represent similarities between different subjects, different poses, and different lighting conditions, respectively. For example, $[G_{\mathrm{subject}}]_{i_1 i_2}$ can be thought of as the average similarity, measured by the dot product, between the $i_1$th subject's face images and the $i_2$th subject's face images under varying viewpoints and lighting conditions.
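To make the structure of (8) concrete, here is a minimal NumPy sketch (assumed shapes, names, and column ordering) of how $G_{\mathrm{subject}}$ is obtained from the ordinary Gram matrix $G = X^{\top} X$, in the spirit of Figure 4(a): with the columns of $X$ ordered by (subject, viewpoint, lighting), $G$ is an $n_s \times n_s$ grid of $(n_v n_l) \times (n_v n_l)$ blocks, and entry $(i_1, i_2)$ of $G_{\mathrm{subject}}$ is the average of the diagonal of block $(i_1, i_2)$, that is, the dot products of images of subjects $i_1$ and $i_2$ that share the same viewpoint and lighting condition.

```python
# Sketch only; assumes X's columns are ordered by (subject, viewpoint, lighting).
import numpy as np

def subject_gram(X, n_s, n_v, n_l):
    G = X.T @ X                                          # Gram matrix of all training images
    blocks = G.reshape(n_s, n_v * n_l, n_s, n_v * n_l)   # blocks[i1, a, i2, b]
    # Average, within each subject-pair block, the entries that share viewpoint and lighting.
    return np.einsum('iaja->ij', blocks) / (n_v * n_l)
```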

The three orthogonal matrices $U_s$, $U_v$, and $U_l$ are calculated by the SVD of the three Gram-like matrices:

$$G_{\mathrm{subject}} = U_s \Sigma_s U_s^{\top}, \qquad G_{\mathrm{view}} = U_v \Sigma_v U_v^{\top}, \qquad G_{\mathrm{light}} = U_l \Sigma_l U_l^{\top}. \tag{9}$$

Then, $Z$ can be easily derived as

$$Z = U^{\top} X \left( U_s \otimes U_v \otimes U_l \right) \tag{10}$$

from (7). For a training image $x_{ijk}$ assigned as one column of $X$, the three factor parameters $s$, $v$, and $l$ are identical to the $i$th row of $U_s$, the $j$th row of $U_v$, and the $k$th row of $U_l$, respectively. In this paper, to solve for the three parameters of an arbitrary unlabeled image $x$, one first calculates the Kronecker product of these parameters using (6):

$$s \otimes v \otimes l = \left( U Z \right)^{+} x, \tag{11}$$

where $(\cdot)^{+}$ denotes the Moore-Penrose pseudoinverse. To decompose the Kronecker product of multiple parameters into individual ones, two leading methods have been applied in [2] and [9]. The best rank-1 method [2] reshapes the vector $s \otimes v \otimes l$ into the matrix $s \left( v \otimes l \right)^{\top}$, and using the SVD of this matrix, $s$ is calculated as the left singular vector corresponding to the largest singular value. Another method is a low-rank approximation computed with the alternating least squares method proposed in [9]. In this paper, we employed the decomposition method proposed in [2], which produced slightly better performance for face recognition than the method proposed in [9].
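The following is a small NumPy sketch of the best rank-1 decomposition just described, under assumed dimensions and names: the Kronecker product is reshaped into an $n_s \times n_v n_l$ matrix, whose leading left singular vector recovers the subject parameter up to sign and scale.

```python
# Sketch of the best rank-1 decomposition of s ⊗ v ⊗ l; dimensions and data are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_s, n_v, n_l = 5, 3, 4
s = rng.standard_normal(n_s)
v = rng.standard_normal(n_v)
l = rng.standard_normal(n_l)

kron = np.kron(s, np.kron(v, l))            # the vector obtained in (11)
M = kron.reshape(n_s, n_v * n_l)            # row-major reshape gives the rank-1 matrix s (v ⊗ l)^T
U, sing, Vt = np.linalg.svd(M, full_matrices=False)
s_hat = U[:, 0]                             # recovered subject parameter, up to sign and scale

# Verify the recovered direction matches s.
print(np.allclose(np.abs(s_hat @ s) / np.linalg.norm(s), 1.0))
```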

Based on the observation that the Gram-like matrices in (8) are formulated using dot products, Multifactor Kernel PCA (MKPCA), a kernel-based extension of MPCA, was introduced [10]. If we define a kernel function $k(\cdot, \cdot)$, the kernel versions of the Gram-like matrices in (8) can be calculated directly by replacing each dot product with a kernel evaluation. Thus, for the training images, $U_s$, $U_v$, and $U_l$ can also be calculated using the eigendecomposition of these matrices. Equations (10) and (11) show that, in order to obtain $s$, $v$, and $l$ for any test image $x$, also called an out-of-sample image, we must be able to calculate $Y$ and $y$. Note that $Y$ and $y$ are the projections of the training samples and of a test sample onto the nonlinear subspace, respectively, and these can be calculated by KPCA, as shown in [11].

2.2. Linear Discriminant Analysis

Since Linear Discriminant Analysis (LDA) [3, 4] is a supervised learning algorithm, the class labels of all samples are provided to the traditional LDA approach. Let $c_i$ be the class label corresponding to the sample $x_i$, where $c_i \in \{1, \ldots, C\}$ and $C$ is the number of classes. Let $N_c$ be the number of samples in class $c$ such that $\sum_{c=1}^{C} N_c = N$. LDA calculates the optimal projection direction $w$ maximizing Fisher's criterion

$$J(w) = \frac{w^{\top} S_B\, w}{w^{\top} S_W\, w}, \tag{12}$$

where $S_B$ and $S_W$ are the between-class and within-class scatter matrices:

$$S_B = \sum_{c=1}^{C} N_c \left( \mu_c - \mu \right)\left( \mu_c - \mu \right)^{\top}, \qquad S_W = \sum_{c=1}^{C} \sum_{i:\, c_i = c} \left( x_i - \mu_c \right)\left( x_i - \mu_c \right)^{\top}, \tag{13}$$

where $\mu_c$ denotes the sample mean for class $c$ and $\mu$ denotes the mean of all training samples. The solution of (12) is calculated as the eigenvectors corresponding to the largest eigenvalues of the following generalized eigenvalue problem:

$$S_B\, w = \lambda\, S_W\, w. \tag{14}$$

Since $S_W$ does not have full rank and thus is not invertible, (14) is solved not by the eigendecomposition of $S_W^{-1} S_B$ but as a generalized eigenvalue problem. LDA obtains a low-dimensional representation $z$ for an arbitrary sample $x$:

$$z = W^{\top} x, \tag{15}$$

where the columns of the matrix $W$ consist of the leading eigenvectors $w_1, \ldots, w_q$ of (14). In other words, $z$ is the projection of $x$ onto the linear subspace spanned by $w_1, \ldots, w_q$. Note that $q \le C - 1$. Despite the success of the LDA algorithm in many applications, the dimension of $z$ is often insufficient for representing each sample. This is caused by the fact that the number of available projection directions $q$ is lower than the class number $C$. To address this limitation of LDA, variants of LDA, such as the null subspace algorithm [12] and the direct LDA algorithm [13], have been proposed.
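For concreteness, the following is a minimal sketch of the LDA computation in (12)-(15) with assumed variable names; a small ridge term is added to $S_W$ here only to keep the generalized eigenproblem well posed, which is not part of the formulation above.

```python
# Minimal LDA sketch for (12)-(15) (assumed names; a tiny ridge keeps S_W invertible here).
import numpy as np
from scipy.linalg import eigh

def lda_projection(X, labels, q):
    """X: p x N data matrix (columns are samples); labels: length-N class labels; q <= C - 1."""
    p, N = X.shape
    mu = X.mean(axis=1, keepdims=True)
    S_B = np.zeros((p, p))
    S_W = np.zeros((p, p))
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        mu_c = Xc.mean(axis=1, keepdims=True)
        S_B += Xc.shape[1] * (mu_c - mu) @ (mu_c - mu).T      # between-class scatter, (13)
        S_W += (Xc - mu_c) @ (Xc - mu_c).T                    # within-class scatter, (13)
    # Generalized eigenvalue problem S_B w = lambda S_W w, as in (14).
    eigvals, eigvecs = eigh(S_B, S_W + 1e-6 * np.eye(p))
    W = eigvecs[:, ::-1][:, :q]                               # q leading directions
    return W, W.T @ X                                         # LDA projections, as in (15)
```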

3. Limitations of Multifactor Analysis and Discriminant Analysis

LDA and MPCA have different advantages and disadvantages, which result from the fact that each method assumes different characteristics for data distributions. MPCA's subject parameters represent the average positions of a group of subjects across varying viewpoints and lighting conditions. MPCA's averaging is premised on the assumption that these subjects maintain similar relative positions in a data space under each viewpoint and lighting condition. On the other hand, LDA is based on the assumption that the samples of each class approximately create a Gaussian distribution. Thus, we can expect that the comparative performances of MPCA and LDA vary with the characteristics of a data set. For classification tasks, LDA sometimes outperforms MPCA; at other times MPCA outperforms LDA. In this section, we demonstrate the assumptions on which each method is based and the conditions where one can outperform the other.

3.1. The Assumption of LDA: Clusters Caused by Different Classes

Face recognition is a task of classifying face images with respect to different subjects. LDA assumes that each class, that is, each subject, approximately forms a Gaussian distribution in a data set. Based on this assumption, LDA calculates a global linear subspace which is applied to the entire data set. However, a real-world face image set often includes other factors, such as viewpoints or lighting conditions, in addition to differences between subjects. Unfortunately, the variation of viewpoints or lighting conditions often creates global clusters across the entire data set, while the variation of subjects creates only local distributions, as shown in Figure 1. In the CMU PIE database, both viewpoints and lighting conditions create global clusters, as shown in Figures 1(b) and 1(c), while a group of subjects creates a local distribution, as shown in Figure 1(a). Therefore, the low-dimensional projections obtained by LDA are not appropriate for face recognition on such samples, which are not globally separable by subject.

Figure 1

Low-dimensional representations of training images obtained by PCA using the CMU PIE database. (a) Each set of samples with the same color represents one subject's face images. (b) Each set of samples with the same color represents face images under one viewpoint. (c) Each set of samples with the same color represents face images under one lighting condition. (d) The red C-shaped curve connects face images under various lighting conditions for one person and one viewpoint. The blue V-shaped curve connects face images under various viewpoints for one person and one lighting condition. Green dots represent 30 subjects' face images under one viewpoint and one lighting condition. We can see that the clusters are created by varying viewpoints and lighting conditions rather than by varying subjects.

LDA has inspired several advanced variants, such as Kernel Discriminant Analysis (KDA) [14, 15], which can obtain nonlinear subspaces. However, these subspaces are still based on the analysis of clusters distributed in a global data space. Thus, there is no guarantee that KDA can be successful if face images which belong to the same subject are scattered rather than distributed as clusters. In sum, LDA cannot be successfully applied unless, in a given data set, the data samples are distributed as clusters formed by the different classes.

3.2. The Assumption of MPCA: Repeated Distributions Caused by Varying One Factor

MPCA is based on the assumption that the variation of one factor repeats similar shapes of distributions, and that these common shapes rarely depend on the variation of the other factors. For example, the subject parameters represent the averages of the relative positions of subjects in the data space across varying viewpoints and lighting conditions. To illustrate this, we consider viewpoint- and lighting-invariant subsets of a given face image set; each subset consists of the face images of all $n_s$ subjects captured under one fixed viewpoint and one fixed lighting condition:

$$X_{jk} = \left[\, x_{1jk} \;\; x_{2jk} \;\; \cdots \;\; x_{n_s jk} \,\right]. \tag{16}$$

That is, each column of $X_{jk}$ represents one image in this subset. As shown in Figure 4(a), there are $n_v n_l$ viewpoint- and lighting-invariant subsets, and $G_{\mathrm{subject}}$ in (8) can be rewritten as the average of the Gram matrices calculated from these subsets:

$$G_{\mathrm{subject}} = \frac{1}{n_v n_l} \sum_{j=1}^{n_v} \sum_{k=1}^{n_l} X_{jk}^{\top} X_{jk}. \tag{17}$$

In Euclidean geometry, the dot product between two vectors determines the distance and linear similarity between them. Equation (9) shows that $G_{\mathrm{subject}}$ is also the Gram matrix of the set of column vectors of the matrix $\Sigma_s^{1/2} U_s^{\top}$. Thus, these column vectors represent the average distances between pairs of subjects. Therefore, the row vectors of $U_s$, that is, the subject parameters, depend on these average distances between subjects across varying viewpoints and lighting conditions. Similarly, the viewpoint parameters and the lighting parameters depend on the average distances between viewpoints and between lighting conditions, respectively, in the data space.

Figure 2 illustrates an ideal case to which MPCA can be successfully applied. Face images lie on a manifold, and viewpoint-invariant and lighting-invariant subsets construct red and blue curves, respectively. Each red curve connects face images only due to varying illumination, while each blue curve connects face images only due to varying viewpoints. Since all of the red curves have identical shapes, different lighting conditions can be perfectly represented by the row vectors of $U_l$. Also, since all of the blue curves have identical shapes, different viewpoints can be perfectly represented by the row vectors of $U_v$. For each factor, when these subsets construct similar structures with small variations, the average of these structures can successfully cover each sample.

Figure 2

Ideal factor-specific submanifolds in an entire manifold on which face images lie. Each red curve connects face images only due to varying illumination, while each blue curve connects face images only due to varying viewpoint.

We observe that each blue curve in Figure 3(a), which represents viewpoint variation, seems to repeat a similar V-shape for each person and each lighting condition. Also, Figure 3(b) visualizes the viewpoint parameters, that is, the rows of $U_v$, learned by MPCA; the curve connecting the viewpoint parameters roughly fits the average shape of the blue curves. As a result, the curve in Figure 3(b) also has a V-shape. Similarly, the 3D visualization of the lighting parameters, the rows of $U_l$, in Figure 3(d) roughly averages the C-shapes of the red curves shown in Figure 3(c), each of which connects face images under various lighting conditions for one person and one viewpoint. Similar observations were illustrated in [9].

Figure 3

Low-dimensional representations of training images obtained by PCA and MPCA. (a) The PCA projections of 9 subjects' face images generated by varying viewpoints under one lighting condition. (b) The viewpoint parameters obtained by MPCA. (c) The PCA projections of 9 subjects' face images generated by varying lighting conditions under one viewpoint. (d) The lighting parameters obtained by MPCA.

Figure 4

The relationships between the Gram matrix $G$ defined in (4) and each of the Gram-like matrices defined in (8), where a training set has two subjects, three viewpoints, and two lighting conditions. Each of $G_{\mathrm{subject}}$, $G_{\mathrm{view}}$, and $G_{\mathrm{light}}$ is calculated as the average of parts of the Gram matrix $G$: each entry of these three Gram-like matrices is the average of the same-color entries of $G$. (a) $G_{\mathrm{subject}}$ consists of averages of dot products which represent the averages of the pairwise relationships between different subjects. (b) $G_{\mathrm{view}}$ consists of averages of dot products which represent the averages of the pairwise relationships between different viewpoints. (c) $G_{\mathrm{light}}$ consists of averages of dot products which represent the averages of the pairwise relationships between different lighting conditions.

Based on the above observations, if varying just one factor generates dissimilar shapes of distributions, then multilinear subspaces based on these average shapes do not represent the variety of data distributions. In Figure 3(a), some curves have W-shapes while most of the other curves have V-shapes. Thus, in this case, we cannot expect reliable performance from MPCA because the average shape obtained by MPCA for each factor insufficiently covers the individual shapes of the curves.

4. Multifactor Discriminant Analysis

As shown in Section 3.1, for face recognition, LDA is preferred if, in a given data set, face images are distributed as clusters formed by the different subjects. Unlike LDA, as shown in Section 3.2, MPCA can be successfully applied to face recognition if various subjects' face images repeat similar shapes of distributions under each viewpoint and lighting condition, even if these subjects do not seem to create such clusters. In this paper, we propose a novel method which can offer the advantages of both methods. Our proposed method is based on an extension of LDA to multiple factor frameworks; thus, we call our method Multifactor Discriminant Analysis (MDA). From the LDA projections, MDA aims to remove the remaining characteristics which are caused by other factors, such as viewpoints and lighting conditions.

We start with the observation that MPCA is based on the relationships between $Y$, the low-dimensional representations obtained by PCA, and the multiple factor-specific parameters. Combining (3) and (7), we can see that the matrix $Y$ can be rewritten as

$$Y = U^{\top} X = Z \left( U_s \otimes U_v \otimes U_l \right)^{\top}. \tag{18}$$

Similarly, combining (2) and (7), for an arbitrary image $x$, its PCA projection $y$ can be decomposed into three vectors by MPCA:

$$y = U^{\top} x = Z \left( s \otimes v \otimes l \right), \tag{19}$$

where $y$ is the low-dimensional representation of $x$ obtained by PCA. Thus, we can think of $Z$ as a linear transformation which maps the Kronecker product of the multiple factor-specific parameters to the low-dimensional representation provided by PCA. In other words, $y$ is decomposed into $s$, $v$, and $l$ by using the transformation matrix $Z$.

In this paper, instead of decomposing $y$, we propose decomposing $z$, where $z$ is the low-dimensional representation of $x$ provided by LDA, as defined in (15). $z$ often has more discriminant power than $y$, but it still carries the combined characteristics caused by multiple factors. Thus, we first formulate $z$ as the Kronecker product of the subject, viewpoint, and lighting parameters:

$$z = W^{\top} x = \tilde{Z} \left( \tilde{s} \otimes \tilde{v} \otimes \tilde{l} \right), \tag{20}$$

where $W$ is the LDA transformation matrix defined in (14) and (15). As reviewed in Section 2.2, $q$, the number of available projection directions, is lower than the class number $C$: $q \le C - 1$. Note that $z$ in (20) is formulated in a similar way to $y$ in (19), using the different factor-specific parameters $\tilde{s}$, $\tilde{v}$, and $\tilde{l}$ and a different transformation matrix $\tilde{Z}$. We expect $\tilde{s}$ in (20), the subject parameter obtained by MDA, to be more reliable than both $s$ and $z$ since it combines the virtues of both LDA and MPCA. Using (15), we also calculate the matrix $Z_{\mathrm{LDA}} = W^{\top} X$ whose columns are the LDA projections of the training samples.

While MPCA decomposes the data matrix $X$ consisting of the training samples, our proposed MDA aims to decompose the LDA projection matrix $Z_{\mathrm{LDA}}$:

$$Z_{\mathrm{LDA}} = W^{\top} X = \tilde{Z} \left( \tilde{U}_s \otimes \tilde{U}_v \otimes \tilde{U}_l \right)^{\top}. \tag{21}$$

To obtain the factor-specific parameters of an arbitrary test image $x$, we perform the following steps. During training, we first calculate the three orthogonal matrices $\tilde{U}_s$, $\tilde{U}_v$, and $\tilde{U}_l$, and subsequently $\tilde{Z}$. Then, during testing, for the LDA projection $z$ of an arbitrary test image, we calculate the factor-specific parameters by decomposing $z$.

As shown in Section 3.2, the factor-specific parameters obtained by MPCA preserve the three Gram-like matrices $G_{\mathrm{subject}}$, $G_{\mathrm{view}}$, and $G_{\mathrm{light}}$ defined in (8). Figure 4 demonstrates that MPCA calculates the subject, viewpoint, and lighting parameters using only the colored parts of the Gram matrix $G$. These colored parts represent the dot products between pairs of samples that have only one varying factor. For example, the colored parts in Figure 4(a) represent the dot products of different subjects' face images under a fixed viewpoint and lighting condition. Based on these observations, among the dot products of pairs of LDA projections, we use only the dot products which correspond to the colored parts of $G$ in Figure 4. Replacing $x$ with $z$, we define three new Gram-like matrices, $\tilde{G}_{\mathrm{subject}}$, $\tilde{G}_{\mathrm{view}}$, and $\tilde{G}_{\mathrm{light}}$:

$$\bigl[\tilde{G}_{\mathrm{subject}}\bigr]_{i_1 i_2} = \frac{1}{n_v n_l}\sum_{j=1}^{n_v}\sum_{k=1}^{n_l} z_{i_1 j k}^{\top} z_{i_2 j k},\qquad \bigl[\tilde{G}_{\mathrm{view}}\bigr]_{j_1 j_2} = \frac{1}{n_s n_l}\sum_{i=1}^{n_s}\sum_{k=1}^{n_l} z_{i j_1 k}^{\top} z_{i j_2 k},\qquad \bigl[\tilde{G}_{\mathrm{light}}\bigr]_{k_1 k_2} = \frac{1}{n_s n_v}\sum_{i=1}^{n_s}\sum_{j=1}^{n_v} z_{i j k_1}^{\top} z_{i j k_2}, \tag{22}$$

where $z_{ijk}$ denotes the LDA projection of the training image of the $i$th subject under the $j$th viewpoint and the $k$th lighting condition. In (9), for MPCA, $U_s$, $U_v$, and $U_l$ are calculated as the eigenvector matrices of $G_{\mathrm{subject}}$, $G_{\mathrm{view}}$, and $G_{\mathrm{light}}$, respectively. In a similar way, for MDA, $\tilde{U}_s$, $\tilde{U}_v$, and $\tilde{U}_l$ can be calculated as the eigenvector matrices of $\tilde{G}_{\mathrm{subject}}$, $\tilde{G}_{\mathrm{view}}$, and $\tilde{G}_{\mathrm{light}}$, respectively. Again, each row vector of $\tilde{U}_s$ represents the subject parameter of one subject in the training set.
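The following is a small sketch of this training step under assumed variable names (it reuses the lda_projection sketch shown earlier): the LDA projections are arranged by subject, viewpoint, and lighting, the Gram-like matrices of (22) are computed as averages over the other two factors, and their eigenvectors give the three factor matrices.

```python
# MDA training sketch (assumed names); subj/view/light are integer factor labels per image.
import numpy as np

def mda_factor_matrices(X, subj, view, light, q):
    """X: p x N training images (columns); returns W, sorted LDA projections, and U_s, U_v, U_l."""
    n_s, n_v, n_l = subj.max() + 1, view.max() + 1, light.max() + 1
    W, Z_lda = lda_projection(X, subj, q)             # LDA projections z = W^T x, as in (15)

    # Tensor of LDA projections indexed by (subject, viewpoint, lighting).
    T = np.zeros((n_s, n_v, n_l, q))
    T[subj, view, light] = Z_lda.T

    # Gram-like matrices of (22): average dot products over the remaining two factors.
    G_s = np.einsum('ijkd,mjkd->im', T, T) / (n_v * n_l)
    G_v = np.einsum('ijkd,imkd->jm', T, T) / (n_s * n_l)
    G_l = np.einsum('ijkd,ijmd->km', T, T) / (n_s * n_v)

    def eig_desc(G):                                  # eigenvectors sorted by descending eigenvalue
        vals, vecs = np.linalg.eigh(G)
        return vecs[:, ::-1]

    U_s, U_v, U_l = eig_desc(G_s), eig_desc(G_v), eig_desc(G_l)
    Z_sorted = T.reshape(n_s * n_v * n_l, q).T        # columns ordered to match the Kronecker rows
    return W, Z_sorted, U_s, U_v, U_l
```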

We recall that $Z_{\mathrm{LDA}} = W^{\top} X$ and $q \le C - 1$. Thus, if we define the Gram matrix of the LDA projections as

$$G_{\mathrm{LDA}} = Z_{\mathrm{LDA}}^{\top} Z_{\mathrm{LDA}}, \tag{23}$$

this matrix does not have full rank. If $G_{\mathrm{LDA}}$ is decomposed by SVD, it has at most $q$ nonzero singular values. However, each of the matrices $\tilde{G}_{\mathrm{subject}}$, $\tilde{G}_{\mathrm{view}}$, and $\tilde{G}_{\mathrm{light}}$ generally has full rank since these matrices are defined in terms of the averages of different parts of $G_{\mathrm{LDA}}$, as shown in Figure 4. Thus, even if $q$ is smaller than $n_s$, $n_v$, or $n_l$, one can calculate valid sets of $n_s$, $n_v$, and $n_l$ eigenvectors from $\tilde{G}_{\mathrm{subject}}$, $\tilde{G}_{\mathrm{view}}$, and $\tilde{G}_{\mathrm{light}}$, respectively.

After calculating these three eigenvector matrices, $\tilde{Z}$ can be easily calculated as

$$\tilde{Z} = Z_{\mathrm{LDA}} \left( \tilde{U}_s \otimes \tilde{U}_v \otimes \tilde{U}_l \right). \tag{24}$$

Thus, using this transformation matrix $\tilde{Z}$, the Kronecker product of the three factor-specific parameters of a test image is calculated as

$$\tilde{s} \otimes \tilde{v} \otimes \tilde{l} = \tilde{Z}^{+} z. \tag{25}$$

Again, as done for (11), the vector in (25) is reshaped into an $n_s \times n_v n_l$ matrix, and by the SVD of this matrix, $\tilde{s}$ is calculated as the left singular vector corresponding to the largest singular value. Consequently, we can obtain the subject parameter $\tilde{s}$ of an arbitrary test image $x$.
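A sketch of this test-time step, continuing the assumed names from the training sketch above: the MDA transformation matrix of (24) is formed, the test image's LDA projection is mapped to the Kronecker product of (25) by the pseudoinverse, and the subject parameter is recovered by the best rank-1 (SVD) decomposition.

```python
# MDA test-time sketch for (24)-(25); continues the assumed names from mda_factor_matrices.
import numpy as np

def mda_subject_parameter(x_test, W, Z_sorted, U_s, U_v, U_l):
    n_s, n_v, n_l = U_s.shape[0], U_v.shape[0], U_l.shape[0]
    Z_tilde = Z_sorted @ np.kron(U_s, np.kron(U_v, U_l))   # transformation matrix, as in (24)
    z = W.T @ x_test                                       # LDA projection of the test image, (15)
    kron = np.linalg.pinv(Z_tilde) @ z                     # Kronecker product of parameters, (25)
    M = kron.reshape(n_s, n_v * n_l)                       # rank-1 structure: s (v ⊗ l)^T
    U, sing, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, 0]                                         # subject parameter, up to sign and scale
```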

5. Experimental Results

In this section, we demonstrate that Multifactor Discriminant Analysis is an appropriate method for dimension reduction of face images with varying factors. To test the quality of the dimension reduction, we conducted face recognition tests. In all experiments, face images were aligned using eye coordinates and then cropped. The face images were then converted to gray scale and resized to a fixed resolution, and each vectored image was normalized to zero mean and unit norm. After aligning and cropping, the left and right eyes were located at fixed coordinates in each image.

For the face recognition experiments, we used two databases: the Extended Yale B database [16] and the CMU PIE database [17]. The Extended Yale B database contains 28 subjects captured under 64 different lighting conditions in 9 different viewpoints. For each of the subjects, we used all 9 viewpoints and the first 30 lighting conditions to reduce experiment time. Among these face images, we used 10 lighting conditions in 5 viewpoints for each person for training and all of the remaining images for testing. Next, we used the CMU PIE database, which contains 68 individuals with 13 different viewpoints and 21 different lighting conditions. Again, to reduce experiment time, we utilized 30 subjects. Also, we did not use two viewpoints: the leftmost profile and the rightmost profile. For each person, 5 lighting conditions in 5 viewpoints were used for training, and all of the remaining images were used for testing. For each data set, the experiments were repeated 10 times using randomly selected lighting conditions and viewpoints, and the averages of the results are reported in Tables 1 and 2.

Table 1 Rank-1 recognition rate on the Extended Yale B database.
Table 2 Rank-1 recognition rate on the CMU PIE database.

We compare the performance of our proposed method, Multifactor Discriminant Analysis, with that of other traditional subspace projection methods for dimension reduction: PCA, MPCA, KPCA, and LDA. For PCA and KPCA, we used the subspaces consisting of the minimum number of eigenvectors whose cumulative energy exceeds 0.95. For MPCA, we set the threshold in the pixel mode to 0.95 and the threshold in the other modes to 1.0. KPCA used RBF kernels with a fixed kernel width. We compared the rank-1 recognition rates of all of the methods using the simple cosine distance.
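As an illustration of this evaluation protocol, the following is a small sketch (with assumed names) of rank-1 identification using the cosine distance between probe (test) features and gallery (training) features.

```python
# Rank-1 identification sketch with the cosine distance; names and shapes are assumptions.
import numpy as np

def rank1_recognition_rate(gallery, gallery_labels, probe, probe_labels):
    """gallery, probe: d x n feature matrices (columns are feature vectors)."""
    G = gallery / np.linalg.norm(gallery, axis=0, keepdims=True)
    P = probe / np.linalg.norm(probe, axis=0, keepdims=True)
    similarity = G.T @ P                        # cosine similarity for every gallery/probe pair
    nearest = np.argmax(similarity, axis=0)     # rank-1 match for each probe feature
    return np.mean(gallery_labels[nearest] == probe_labels)
```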

As shown in Tables 1 and 2, our proposed method, Multifactor Discriminant Analysis, outperforms the other methods for face recognition. This seems to be because Multifactor Discriminant Analysis offers the combined virtues of both multifactor analysis methods and discriminant analysis methods. Like multilinear subspace methods, Multifactor Discriminant Analysis can analyze one sample in a multiple factor framework, which improves face recognition performance.

Figure 5 shows two-dimensional projections of 10 subjects under varying viewpoints and lighting conditions calculated by LDA and Multifactor Discriminant Analysis. For each image, while LDA calculated one kind of projection vector, as shown in Figure 5(a), Multifactor Discriminant Analysis obtained individual projection vectors for subject, viewpoint, and lighting. Among these factor parameters, Figure 5(b) shows the subject parameters obtained by MDA. Since these parameters are independent of varying viewpoints and lighting conditions, the subject parameters of the face images are distributed as clusters created by varying subjects, rather than the scattered results in Figure 5(a). For the same reason, Tables 1 and 2 show that MPCA and Multifactor Discriminant Analysis outperformed PCA and LDA, respectively.

Figure 5

Two dimensional projections of 10 classes in the Extended Yale B database. (a) features calculated by LDA, (b) subject parameters calculated by MDA.

Also, Figure 6 shows the first two coordinates of the lighting features calculated by Multifactor Discriminant Analysis for the face images of two different subjects in different viewpoints. These two-dimensional mappings are continuously distributed with steadily varying lighting, while differences in subject or viewpoint appear to be relatively insignificant. For example, for both Person 1 in Viewpoint 8 and Person 4 in Viewpoint 1, the mappings of face images that were lit from the subjects' right side appear in the top left-hand corner, while dark images appear in the top-right corner; images captured under neutral lighting conditions lie in the bottom right. On the other hand, any two images captured under similar lighting conditions tend to be located close to each other even if they are of different subjects in different viewpoints. Therefore, we can conclude that the lighting features calculated by our proposed MDA preserve lighting neighborhoods: images captured under similar lighting conditions remain close to each other.

Figure 6

The first two coordinates of the lighting feature vectors computed by Multifactor Discriminant Analysis using the Extended Yale B database.

6. Conclusion

In this paper, we propose a novel dimension reduction method for face recognition: Multifactor Discriminant Analysis. Multifactor Discriminant Analysis can be thought of as an extension of LDA to multiple factor frameworks, providing both multifactor analysis and discriminant analysis. Moreover, we have shown through experiments that MDA extracts more reliable subject parameters than the low-dimensional projections obtained by LDA and MPCA. The subject parameters obtained by MDA represent the locally repeated shapes of distributions due to differences in subjects for each combination of the other factors. Consequently, MDA can offer more discriminant power, making full use of both the global distribution of the entire data set and the local factor-specific distributions. Reference [6] introduced another method which is theoretically based on both MPCA and LDA: Multilinear Discriminant Analysis. However, Multilinear Discriminant Analysis cannot analyze multiple factor frameworks, while our proposed Multifactor Discriminant Analysis can. Relevant examples are shown in Figure 5, where our proposed approach yields a discriminative two-dimensional subspace that clusters multiple subjects in the Extended Yale B database, whereas LDA spreads the data samples into one global, undiscriminative distribution. These results show the dimension reduction power of our approach in the presence of nuisance factors such as viewpoints and lighting conditions. This improved dimension reduction power allows reduced-size feature sets (beneficial for template storage) and increased matching speed due to the smaller feature dimension. Our approach is thus attractive for robust face recognition in real-world defense and security applications. Future work will include evaluating this approach on larger data sets such as the CMU Multi-PIE database and NIST's FRGC and MBGC databases.

References

  1. Vasilescu MAO, Terzopoulos D: Multilinear image analysis for facial recognition. Proceedings of the International Conference on Pattern Recognition, 2002, 1(2):511-514.

  2. Vasilescu MAO, Terzopoulos D: Multilinear independent components analysis. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, Calif, USA, 2005, 1:547-553.

  3. Fukunaga K: Introduction to Statistical Pattern Recognition. 2nd edition. Academic Press, San Diego, Calif, USA; 1999.

  4. Martinez AM, Kak AC: PCA versus LDA. IEEE Transactions on Pattern Analysis and Machine Intelligence 2001, 23(2):228-233. 10.1109/34.908974

  5. Turk M, Pentland A: Eigenfaces for recognition. Journal of Cognitive Neuroscience 1991, 3(1):71-86. 10.1162/jocn.1991.3.1.71

  6. Yan S, Xu D, Yang Q, Zhang L, Tang X, Zhang H-J: Multilinear discriminant analysis for face recognition. IEEE Transactions on Image Processing 2007, 16(1):212-220.

  7. Golub GH, Van Loan CF: Matrix Computations. The Johns Hopkins University Press, London, UK; 1996.

  8. De Lathauwer L, De Moor B, Vandewalle J: A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications 2000, 21(4):1253-1278. 10.1137/S0895479896305696

  9. Vasilescu MAO, Terzopoulos D: Multilinear projection for appearance-based recognition in the tensor framework. Proceedings of the IEEE International Conference on Computer Vision (ICCV '07), 2007, 1-8.

  10. Li Y, Du Y, Lin X: Kernel-based multifactor analysis for image synthesis and recognition. Proceedings of the IEEE International Conference on Computer Vision, 2005, 1:114-119.

  11. Scholkopf B, Smola A, Muller K-R: Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation 1998, 10(5):1299-1319.

  12. Wang X, Tang X: Dual-space linear discriminant analysis for face recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004, 564-569.

  13. Yu H, Yang J: A direct LDA algorithm for high-dimensional data with application to face recognition. Pattern Recognition 2001, 34(10):2067-2070.

  14. Baudat G, Anouar F: Generalized discriminant analysis using a kernel approach. Neural Computation 2000, 12(10):2385-2404. 10.1162/089976600300014980

  15. Mika S, Ratsch G, Weston J, Scholkopf B, Muller K-R: Fisher discriminant analysis with kernels. Proceedings of the IEEE Workshop on Neural Networks for Signal Processing, 1999, 41-48.

  16. Lee K-C, Ho J, Kriegman DJ: Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence 2005, 27(5):684-698.

  17. Sim T, Baker S, Bsat M: The CMU pose, illumination, and expression database. IEEE Transactions on Pattern Analysis and Machine Intelligence 2003, 25(12):1615-1618. 10.1109/TPAMI.2003.1251154


Author information

Corresponding author

Correspondence to Sung Won Park.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Park, S.W., Savvides, M. A Multifactor Extension of Linear Discriminant Analysis for Face Recognition under Varying Pose and Illumination. EURASIP J. Adv. Signal Process. 2010, 158395 (2010). https://doi.org/10.1155/2010/158395
