
Facial expression recognition using local binary patterns and discriminant kernel locally linear embedding

Abstract

Given the nonlinear manifold structure of facial images, a new kernel-based supervised manifold learning algorithm based on locally linear embedding (LLE), called discriminant kernel locally linear embedding (DKLLE), is proposed for facial expression recognition. The proposed DKLLE nonlinearly extracts discriminant information by maximizing the interclass scatter while minimizing the intraclass scatter in a reproducing kernel Hilbert space. DKLLE is compared with LLE, supervised locally linear embedding (SLLE), principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA), and kernel linear discriminant analysis (KLDA). Experimental results on two benchmark facial expression databases, i.e., the JAFFE database and the Cohn-Kanade database, demonstrate the effectiveness and promising performance of DKLLE.

Introduction

Affective computing, currently an active research area, aims at building machines that recognize, express, model, communicate, and respond to a user's emotional information [1]. Within this field, recognizing human emotion from facial images, i.e., facial expression recognition, is attracting increasing attention and has become an important issue, since facial expression provides the most natural and immediate indication of a person's emotions and intentions. Over the last decade, the importance of automatic facial expression recognition has grown significantly due to its applications in human-computer interaction (HCI), human emotion analysis, interactive video, image indexing and retrieval, etc.

An automatic facial expression recognition system generally comprises three crucial steps [2]: face acquisition, facial feature extraction, and facial expression classification. Face acquisition is a preprocessing stage that detects or locates the face regions in the input images or sequences. One of the most widely used face detectors is the real-time face detection algorithm developed by Viola and Jones [3], in which a cascade of classifiers is employed with Haar-like features. Once a face is detected, the corresponding face regions are usually normalized to have the same eye distance and the same gray level. Facial feature extraction attempts to find the most appropriate representation of facial images for recognition. There are two main approaches: geometric feature-based systems and appearance feature-based systems. In geometric feature-based systems, the shape and locations of major facial components, such as the mouth, nose, eyes, and brows, are detected in the images. However, these systems require accurate and reliable facial feature detection, which is difficult to achieve in real-time applications. In appearance feature-based systems, the appearance changes (skin texture) of the facial images, including wrinkles, bulges, and furrows, are represented. Image filters, such as principal component analysis (PCA) [4], linear discriminant analysis (LDA) [5], regularized discriminant analysis (RDA) [6], and Gabor wavelet analysis [7, 8], can be applied to either the whole face or specific face regions to extract the facial appearance changes. It is worth pointing out that convolving facial images with a set of Gabor filters to extract multi-scale and multi-orientation coefficients is computationally expensive; moreover, in practice the dimensionality of Gabor features is so high that the computation and memory requirements are very large. In recent years, an effective face descriptor called local binary patterns (LBP) [9], originally proposed for texture analysis [10], has attracted extensive interest for facial expression representation. Among the most important properties of LBP are its tolerance to illumination changes and its computational simplicity. So far, LBP has been successfully applied as a local feature extraction method in facial expression recognition [11–13]. In the last step of an automatic facial expression recognition system, facial expression classification, a classifier is employed to identify different expressions based on the extracted facial features. Representative classifiers used for facial expression recognition include neural networks [14], the nearest neighbor (1-NN) [15] or k-nearest neighbor (KNN) classifier [16], and support vector machines (SVM) [17].

In recent years, it has been shown that facial images of a person with varying expressions can be represented as a low-dimensional nonlinear manifold embedded in a high-dimensional image space [18–20]. Given the nonlinear manifold structure of facial expression images, two representative manifold learning (also called nonlinear dimensionality reduction) methods, i.e., locally linear embedding (LLE) [21] and isometric feature mapping (Isomap) [22], have been used to project high-dimensional facial expression images into a low-dimensional embedded subspace in which facial expressions can be easily distinguished from each other [18–20, 23, 24]. However, LLE and Isomap fail to perform well on facial expression recognition tasks because, as unsupervised methods, they cannot extract discriminant information.

To overcome the limitations of unsupervised manifold learning methods for supervised pattern recognition, some supervised manifold learning algorithms have recently been proposed by means of a supervised distance measure, such as supervised locally linear embedding (SLLE) [25] using a linear supervised distance, probability-based LLE using a probability-based distance [26], locally linear discriminant embedding using a vector translation and distance rescaling model [27], and so forth. Among them, SLLE has become one of the most promising supervised manifold learning techniques due to its simple implementation, and it has been successfully applied to facial expression recognition [28]. However, SLLE still has two shortcomings. First, because of the linear supervised distance it uses, the interclass dissimilarity grows only in parallel with the intraclass dissimilarity. An ideal classification mechanism, however, should maximize the interclass dissimilarity while minimizing the intraclass dissimilarity. In this sense, the linear supervised distance in SLLE is not a good property for classification, since it considerably decreases the discriminating power of the low-dimensional embedded data representations produced by SLLE. Second, as a non-kernel method, SLLE cannot exploit the higher-order information of the input data, since it lacks the characteristic of kernel-based learning, i.e., a nonlinear kernel mapping. To tackle these problems, in this article a new kernel-based supervised manifold learning algorithm based on LLE, called discriminant kernel locally linear embedding (DKLLE), is proposed and applied to facial expression recognition. On the one hand, with a nonlinear supervised distance measure, DKLLE considers both the intraclass and the interclass scatter information in a reproducing kernel Hilbert space (RKHS) and emphasizes the discriminant information. On the other hand, through kernel techniques DKLLE extracts nonlinear feature information by mapping the input data into a high-dimensional feature space. To evaluate the performance of DKLLE on facial expression recognition, we adopt LBP features as facial representations and then employ DKLLE to produce low-dimensional discriminant embedded data representations from the extracted LBP features, yielding a striking performance improvement on facial expression recognition tasks. The facial expression recognition experiments are performed on two benchmark facial expression databases, i.e., the JAFFE database [15] and the Cohn-Kanade database [29].

The remainder of this article is organized as follows: Section 2 briefly introduces LBP. Section 3 briefly reviews LLE and SLLE. The proposed DKLLE algorithm is presented in detail in Section 4. Section 5 gives the experiments and results. Finally, the conclusions are summarized in Section 6.

Local binary patterns

The original LBP operator [10] labels the pixels of an image by thresholding a 3 × 3 neighborhood of each pixel with the center value and considering the results as a binary code. The LBP code of the center pixel in the neighborhood is obtained by converting the binary code into a decimal one. Figure 1 gives an illustration for the basic LBP operator. Based on the operator, each pixel of an image is labeled with an LBP code. The 256-bin histogram of the labels contains the density of each label and can be used as a texture descriptor of the considered region.

Figure 1. The basic LBP operator.

The procedure of extracting LBP features for facial representations is implemented as follows:

First, a face image is divided into several non-overlapping blocks. Second, an LBP histogram is computed for each block. Finally, the block LBP histograms are concatenated into a single vector. As a result, the face image is represented by the concatenated LBP histogram.
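For concreteness, a minimal NumPy sketch of this pipeline (the basic 256-bin operator followed by block histograms) might look as follows; the function names and grid size are illustrative, not the authors' implementation.

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: threshold the 8 neighbors of each pixel at the
    center value and read the resulting bits as an 8-bit decimal code."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    # The 8 neighbors, enumerated clockwise from the top-left corner.
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[dy:dy+h-2, dx:dx+w-2]
        codes += (neighbor >= center).astype(np.int32) << bit
    return codes  # values in [0, 255]

def lbp_feature_vector(img, grid=(6, 7), bins=256):
    """Divide the code image into non-overlapping blocks, histogram each
    block, and concatenate the block histograms into a single vector."""
    codes = lbp_codes(img)
    feats = []
    for row in np.array_split(codes, grid[0], axis=0):
        for block in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(block, bins=bins, range=(0, bins))
            feats.append(hist / block.size)  # normalized block histogram
    return np.concatenate(feats)
```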

LLE and SLLE

LLE

Given the input data points $x_i \in \mathbb{R}^D$ and the output data points $y_i \in \mathbb{R}^d$ ($i = 1, 2, \ldots, N$), the standard LLE [21] consists of three steps:

Step 1: Find the nearest neighbors of each $x_i$ based on the Euclidean distance.

Step 2: Compute the reconstruction weights by minimizing the reconstruction error.

With $x_j$ denoting the neighbors of $x_i$, the reconstruction error is measured by the following cost function:

$$\varepsilon(W) = \sum_{i=1}^{N} \Big\| x_i - \sum_{j=1}^{N} W_{ij} x_j \Big\|^2 \qquad (1)$$

subject to two constraints: $\sum_{j=1}^{N} W_{ij} = 1$, and $W_{ij} = 0$ if $x_i$ and $x_j$ are not neighbors.

Step 3: Compute the low-dimensional embedding.

The low-dimensional embedding is found through the following minimization:

$$\phi(Y) = \sum_{i=1}^{N} \Big\| y_i - \sum_{j=1}^{N} W_{ij} y_j \Big\|^2 \qquad (2)$$

subject to two constraints: $\sum_{i=1}^{N} y_i = 0$ and $\frac{1}{N} \sum_{i=1}^{N} y_i y_i^T = I$, where $I$ is the $d \times d$ identity matrix. To find the matrix $Y$ under these constraints, a new matrix $M$ is constructed from $W$: $M = (I - W)^T (I - W)$. The $d$ eigenvectors corresponding to the $d$ smallest non-zero eigenvalues of $M$ yield the final embedding $Y$.
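The three steps can be condensed into a short NumPy sketch; the regularized solve for the local weights is a standard numerical choice rather than a detail taken from [21].

```python
import numpy as np

def lle(X, k=10, d=2, reg=1e-3):
    """Standard LLE: X is (N, D); returns the (N, d) embedding Y."""
    N = X.shape[0]
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)
    neighbors = np.argsort(dists, axis=1)[:, :k]          # Step 1

    W = np.zeros((N, N))
    for i in range(N):                                    # Step 2
        Z = X[neighbors[i]] - X[i]                        # shifted neighbors
        G = Z @ Z.T                                       # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)                # regularize
        w = np.linalg.solve(G, np.ones(k))
        W[i, neighbors[i]] = w / w.sum()                  # sum-to-one constraint

    I = np.eye(N)                                         # Step 3
    M = (I - W).T @ (I - W)
    eigvals, eigvecs = np.linalg.eigh(M)                  # ascending eigenvalues
    return eigvecs[:, 1:d+1]  # skip the constant eigenvector (zero eigenvalue)
```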

SLLE

To complement the original LLE, SLLE [25] aims to find a mapping that separates within-class structure from between-class structure. One way to do this is to modify the first step of the original LLE by enlarging the distance between samples $x_i$ and $x_j$ from different classes, while leaving the other two steps unchanged. This is achieved by artificially increasing the pre-calculated Euclidean distance (abbreviated as $\Delta$) between samples belonging to different classes, while leaving it unchanged for samples from the same class:

$$\Delta' = \Delta + \alpha \max(\Delta) \Lambda_{ij}, \quad \alpha \in [0, 1] \qquad (3)$$

where $\Delta$ is the distance matrix without the class label information and $\Delta'$ is the distance matrix incorporating the class label information. If $x_i$ and $x_j$ belong to different classes, then $\Lambda_{ij} = 1$; otherwise $\Lambda_{ij} = 0$. The constant factor $\alpha$ ($0 \le \alpha \le 1$) controls how much class information is incorporated. At one extreme, $\alpha = 0$ recovers the unsupervised LLE. At the other extreme, $\alpha = 1$ gives the fully supervised LLE (1-SLLE). As $\alpha$ varies between 0 and 1, a partially supervised LLE ($\alpha$-SLLE) is obtained. From Eq. (3), it can be observed that as the intraclass dissimilarity ($\Delta' = \Delta$, when $\Lambda_{ij} = 0$) increases, the interclass dissimilarity ($\Delta' = \Delta + \alpha \max(\Delta)$, when $\Lambda_{ij} = 1$) increases only in parallel, since $\alpha \max(\Delta)$ is a constant. In this sense, the supervised distance measure used in SLLE is linear.
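In code, the supervised modification of Eq. (3) is a single operation on the precomputed distance matrix; the sketch below assumes a pairwise Euclidean distance matrix `dist` and an integer label vector `labels`, with an illustrative function name.

```python
import numpy as np

def slle_distance(dist, labels, alpha):
    """Eq. (3): add alpha * max(dist) to distances between samples from
    different classes; within-class distances are left unchanged."""
    diff_class = (labels[:, None] != labels[None, :])  # Lambda_ij
    return dist + alpha * dist.max() * diff_class
```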

The proposed DKLLE

A discriminant and kernel variant of LLE is developed by designing a nonlinear supervised distance measure and minimizing the reconstruction error in an RKHS, which gives rise to DKLLE.

Given the input data points $(x_i, L_i)$, where $x_i \in \mathbb{R}^D$ and $L_i$ is the class label of $x_i$, the output data points are $y_i \in \mathbb{R}^d$ ($i = 1, 2, \ldots, N$). The detailed steps of DKLLE are as follows:

Step 1: Perform the kernel mapping for each data point $x_i$.

A nonlinear mapping $\varphi$ is defined as $\varphi: \mathbb{R}^D \to \mathcal{F},\; x \mapsto \varphi(x)$.

The input data point $x_i$ is mapped, via the nonlinear mapping $\varphi$, into a potentially high-dimensional feature space $\mathcal{F}$. An inner product $\langle \cdot, \cdot \rangle$ can then be defined on $\mathcal{F}$ for a chosen $\varphi$, which makes $\mathcal{F}$ a so-called RKHS. In an RKHS, a kernel function $\kappa(x_i, x_j)$ can be defined as:

$$\kappa(x_i, x_j) = \langle \varphi(x_i), \varphi(x_j) \rangle = \varphi(x_i)^T \varphi(x_j) \qquad (4)$$

where κ is called a kernel.

Step 2: Find the nearest neighbors of each $\varphi(x_i)$ by using a nonlinear supervised kernel distance.

The kernel Euclidean distance measure [30] between two data points $x_i$ and $x_j$ induced by a kernel $\kappa$ can be defined as:

$$\mathrm{Dist}(x_i, x_j) = \langle \varphi(x_i) - \varphi(x_j),\, \varphi(x_i) - \varphi(x_j) \rangle = \kappa(x_i, x_i) - 2\kappa(x_i, x_j) + \kappa(x_j, x_j) \qquad (5)$$

Let $\mathrm{Dist}$ denote the kernel distance matrix for all the input data points, i.e., $\mathrm{Dist} = [\mathrm{Dist}(x_i, x_j)]$, $i, j = 1, 2, \ldots, N$. To preserve the intraclass neighboring geometry while maximizing the interclass scatter, a nonlinear supervised kernel distance measure $\mathrm{KDist}$ in an RKHS can be defined as:

$$\mathrm{KDist} = \begin{cases} 1 - e^{-\mathrm{Dist}^2/\beta}, & L_i = L_j \\ e^{\mathrm{Dist}^2/\beta} - \alpha, & L_i \neq L_j \end{cases} \qquad (6)$$

where $\mathrm{KDist}$ is the supervised kernel distance matrix incorporating the class label information, while $\mathrm{Dist}$ is the kernel Euclidean distance matrix without the class label information. The constant factor $\alpha$ ($0 \le \alpha \le 1$) gives data points from different classes a certain chance to be more similar, so that the dissimilarity between different classes may sometimes be smaller than that within the same class. The parameter $\beta$ prevents the supervised kernel distance matrix $\mathrm{KDist}$ from increasing too quickly when the kernel Euclidean distance matrix $\mathrm{Dist}$ is relatively large, since $\mathrm{Dist}$ appears in the exponent. Hence, the value of $\beta$ should depend on the "density" of the data set, and it is usually feasible to set $\beta$ to the average kernel Euclidean distance over all pairs of data points.

From Eq. (6), two observations can be made. First, both the interclass and the intraclass dissimilarities in $\mathrm{KDist}$ are monotonically increasing with respect to the kernel Euclidean distance. This ensures that the main geometric structure of the original data is well preserved during dimensionality reduction. Second, the interclass dissimilarity in $\mathrm{KDist}$ is always larger than the intraclass dissimilarity, which confers a high discriminating power on DKLLE's low-dimensional embedded data representations. This is a good property for classification.
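A sketch combining Eqs. (5) and (6) under the Gaussian kernel used later in the experiments is shown below; setting β to the average pairwise kernel distance follows the text, while the vectorized form and function name are illustrative assumptions.

```python
import numpy as np

def supervised_kernel_distance(X, labels, alpha, sigma=1.0):
    """Kernel Euclidean distance (Eq. 5) under a Gaussian kernel,
    followed by the nonlinear supervised distance of Eq. (6)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-sq / (2 * sigma ** 2))                        # kappa(x_i, x_j)
    dist = np.diag(K)[:, None] - 2 * K + np.diag(K)[None, :]  # Eq. (5)
    beta = dist.mean()                 # average kernel Euclidean distance
    same = (labels[:, None] == labels[None, :])
    return np.where(same,
                    1 - np.exp(-dist ** 2 / beta),            # L_i == L_j
                    np.exp(dist ** 2 / beta) - alpha)         # L_i != L_j
```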

Step 3: Measure the reconstruction error in an RKHS.

The reconstruction error is measured by the following cost function:

$$\varepsilon(W) = \sum_{i=1}^{N} \Big\| \varphi(x_i) - \sum_{j=1}^{k} W_{ij} \varphi(x_{j,i}) \Big\|^2 \qquad (7)$$

where $k$ is the number of nearest neighbors and $x_{j,i}$ denotes the $j$-th neighbor of $x_i$. Given the constraint $\sum_{j=1}^{k} W_{ij} = 1$, the reconstruction error can be rewritten as follows:

$$\varepsilon(W) = \sum_{i=1}^{N} \Big\| \varphi(x_i) - \sum_{j=1}^{k} W_{ij} \varphi(x_{j,i}) \Big\|^2 = \sum_{i=1}^{N} \Big\| \sum_{j=1}^{k} W_{ij} \big( \varphi(x_i) - \varphi(x_{j,i}) \big) \Big\|^2 = \sum_{i=1}^{N} \varepsilon(W_i) \qquad (8)$$

Let $P_i = [\varphi(x_i) - \varphi(x_{1,i}), \varphi(x_i) - \varphi(x_{2,i}), \ldots, \varphi(x_i) - \varphi(x_{k,i})]$; then

$$\varepsilon(W_i) = \Big\| \sum_{j=1}^{k} W_{ij} \big( \varphi(x_i) - \varphi(x_{j,i}) \big) \Big\|^2 = \| P_i W_i \|^2 = W_i^T P_i^T P_i W_i = W_i^T K W_i \qquad (9)$$

where $K = P_i^T P_i$ is a positive semi-definite kernel matrix. To compute the optimal weights $W_i$, the following Lagrange function is formulated under the constraint $W_i^T \mathbf{1} = 1$, where $\mathbf{1} = (1, 1, \ldots, 1)^T$:

$$L(W_i, \lambda) = W_i^T K W_i - \lambda (W_i^T \mathbf{1} - 1) \;\Rightarrow\; W_i = \frac{K^{-1} \mathbf{1}}{\mathbf{1}^T K^{-1} \mathbf{1}} \qquad (10)$$

Performing an eigen-decomposition $K = U^T \Lambda U$, we obtain

$$W_i = \frac{U^T \Lambda^{-1} U \mathbf{1}}{\mathbf{1}^T U^T \Lambda^{-1} U \mathbf{1}} \qquad (11)$$

Therefore, the reconstruction weights can be computed from the eigenvalues and eigenvectors of the kernel matrix.
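The weights of Eq. (10) can be computed from kernel evaluations alone, without forming $\varphi$ explicitly: expanding $K = P_i^T P_i$ entrywise gives $K_{jl} = \kappa(x_i, x_i) - \kappa(x_i, x_l) - \kappa(x_j, x_i) + \kappa(x_j, x_l)$. A minimal per-point sketch follows; the regularization term is an added assumption for numerical stability, and `kappa` stands for any chosen kernel, e.g., the Gaussian kernel of Section 5.

```python
import numpy as np

def reconstruction_weights(kappa, x_i, neighbors, reg=1e-6):
    """Eq. (10): W_i = K^{-1} 1 / (1^T K^{-1} 1), with K built from
    kernel evaluations between x_i and its k nearest neighbors."""
    k = len(neighbors)
    kxx = kappa(x_i, x_i)
    kxn = np.array([kappa(x_i, z) for z in neighbors])
    knn = np.array([[kappa(z1, z2) for z2 in neighbors] for z1 in neighbors])
    K = kxx - kxn[None, :] - kxn[:, None] + knn
    K += reg * np.trace(K) * np.eye(k)        # guard against a singular K
    w = np.linalg.solve(K, np.ones(k))
    return w / w.sum()                        # enforce W_i^T 1 = 1
```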

Step 4: Compute the final embedding.

As in LLE, the following embedding cost function is minimized:

$$\phi(Y) = \sum_{i=1}^{N} \Big\| y_i - \sum_{j=1}^{N} W_{ij} y_j \Big\|^2 = \mathrm{tr}(Y^T M Y) \qquad (12)$$

where $M = (I - W)^T (I - W)$, subject to two constraints: $\sum_{i=1}^{N} y_i = 0$ and $\frac{1}{N} \sum_{i=1}^{N} y_i y_i^T = I$. The final embedding $Y$ comprises the $d$ eigenvectors corresponding to the $d$ smallest non-zero eigenvalues of $M$.
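Given the full $N \times N$ weight matrix $W$ assembled from the per-point weights $W_i$, this final step is identical to that of standard LLE; a minimal sketch:

```python
import numpy as np

def embed(W, d):
    """Eq. (12): Y consists of the d eigenvectors of M = (I - W)^T (I - W)
    associated with the smallest non-zero eigenvalues."""
    N = W.shape[0]
    I = np.eye(N)
    M = (I - W).T @ (I - W)
    eigvals, eigvecs = np.linalg.eigh(M)   # ascending eigenvalues
    return eigvecs[:, 1:d+1]               # drop the zero-eigenvalue vector
```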

Experiments and results

To verify the effectiveness of the proposed DKLLE, we use two benchmark facial expression databases, i.e., the JAFFE database [15] and the Cohn-Kanade database [29], for facial expression recognition experiments. Each database contains seven emotions: anger, joy, sadness, neutral, surprise, disgust, and fear. The performance of DKLLE is compared with that of LLE, SLLE, PCA, LDA, kernel principal component analysis (KPCA) [31], and kernel linear discriminant analysis (KLDA) [32]. The typical Gaussian kernel $\kappa(x_i, x_j) = \exp(-\|x_i - x_j\|^2 / 2\sigma^2)$ is used for KPCA, KLDA, and DKLLE, and the parameter $\sigma$ is empirically set to 1 for its satisfactory performance. The number of nearest neighbors for LLE, SLLE, and DKLLE is fixed with an adaptive neighbor selection technique [33]. To cope with the embedding of new samples, out-of-sample extensions of LLE and SLLE are obtained with an existing linear generalization technique [34], in which a linear relation is built between the high- and low-dimensional spaces and the adaptation to a new sample is done by updating the weight matrix $W$. As a kernel method, the proposed DKLLE can directly project new samples into the low-dimensional space by using the kernel trick, as in KPCA. For simplicity, the nearest neighbor (1-NN) classifier with the Euclidean metric is used for facial expression classification. A 10-fold cross-validation scheme is employed in the 7-class facial expression recognition experiments, and the average recognition results are reported.

To limit the computational cost, the reduced dimension is confined to the range [2, 100] with an interval of 5, except in the low range [2, 10], where we report the recognition results at every dimension (interval of 1), since the reduced dimension of LDA and KLDA is at most $c - 1$, where $c$ is the number of facial expression classes. At each reduced dimension, the constant $\alpha$ ($0 \le \alpha \le 1$) for SLLE and DKLLE is optimized by a simple exhaustive search over the grid $\alpha = 0, 0.1, 0.2, \ldots, 1$, as in the sketch below.
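The sweep over reduced dimensions and the exhaustive search over α might be organized as follows; `X`, `labels`, and `dklle_embed` (a stand-in for Steps 1–4 of Section 4) are hypothetical, and the scikit-learn-based `evaluate` is a simplified version of the 10-fold cross-validation protocol described above.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def evaluate(Y, labels):
    """Mean 10-fold cross-validation accuracy of a 1-NN classifier."""
    clf = KNeighborsClassifier(n_neighbors=1)
    return cross_val_score(clf, Y, labels, cv=10).mean()

# Reduced dimensions: every value in [2, 10], then up to 100 in steps of 5.
dims = list(range(2, 11)) + list(range(15, 101, 5))
alphas = [round(0.1 * i, 1) for i in range(11)]      # alpha = 0, 0.1, ..., 1

# At each reduced dimension, keep the best (accuracy, alpha) pair;
# `dklle_embed` is a hypothetical DKLLE implementation (Steps 1-4).
best = {d: max((evaluate(dklle_embed(X, labels, d=d, alpha=a), labels), a)
               for a in alphas)
        for d in dims}
```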

Preprocessing

As done in [11, 12], on the JAFFE database and the Cohn-Kanade database, the eye distance of the facial images was normalized to a fixed 55 pixels once the centers of the two eyes were located. It is generally observed that the width of a face is roughly twice this distance, and the height roughly three times. Therefore, based on the normalized eye distance, a resized image of 110 × 150 pixels was cropped from each original image. To locate the centers of the two eyes, automatic face registration was performed using the robust real-time face detector developed by Viola and Jones [3]. From the results of automatic face detection, including face location, width, and height, two square bounding boxes for the left and right eyes were automatically constructed using the geometry of a typical upright face, which has been widely used to find a proper spatial arrangement of facial features [35]. The centers of the two eyes were then computed as the centers of these two bounding boxes. No further alignment of facial features, such as alignment of the mouth, was performed. Additionally, no attempt was made to remove illumination changes, owing to LBP's gray-scale invariance.

After the facial images of 110 × 150 pixels, containing the mouth, eyes, brows, and nose, were cropped from the original images, the LBP operator was applied to each cropped image to extract LBP features. As suggested in [10–12], we selected the 59-bin LBP operator, divided each 110 × 150 facial image into 42 (6 × 7) blocks, and obtained LBP feature vectors of length 2478 (59 × 42).
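The 59-bin operator corresponds to uniform-pattern LBP with 8 neighbors (58 uniform codes plus one bin collecting all non-uniform codes). A sketch using scikit-image, whose `nri_uniform` method yields exactly 59 codes for P = 8, might look as follows; the row/column orientation of the 6 × 7 grid is an assumption.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp59_features(img, grid=(6, 7)):
    """59-bin uniform LBP histograms over a 6 x 7 block grid,
    concatenated into a 2478-dimensional vector (59 * 42)."""
    codes = local_binary_pattern(img, P=8, R=1, method='nri_uniform')
    feats = []
    for row in np.array_split(codes, grid[0], axis=0):
        for block in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(block, bins=59, range=(0, 59))
            feats.append(hist / block.size)
    return np.concatenate(feats)
```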

Experiments on the JAFFE database

The JAFFE database [15] contains 213 images of female facial expressions. Each image has a resolution of 256 × 256 pixels. A few examples of facial expression images from the JAFFE database are shown in Figure 2. The number of images in each of the seven expression categories is roughly the same. The recognition results obtained by each method at different reduced dimensions are given in Figure 3. The best results and the standard deviations (std) for the different methods, with the corresponding reduced dimensions, are listed in Table 1.

Figure 2. Examples of facial expression images from the JAFFE database.

Figure 3. Recognition accuracy vs. reduced dimension on the JAFFE database.

Table 1 The best accuracy (std) of different methods on the JAFFE database

From the results in Figure 3 and Table 1, we can see that DKLLE achieves the highest accuracy of 84.06% at a reduced dimension of 40, outperforming the other methods. More importantly, DKLLE yields an improvement of about 9% over LLE and about 6% over SLLE. This demonstrates that DKLLE is able to extract the most discriminative low-dimensional embedded data representations for facial expression recognition. Note that it is difficult to compare directly with all previously reported work on the JAFFE database owing to differing experimental settings. Nevertheless, the accuracy of 84.06% obtained here with LBP-based 1-NN is still very encouraging compared with the previously published work [12], whose experimental settings are similar to ours. In [12], after extracting the most discriminative LBP (boosted-LBP) features, SVMs achieved 7-class facial expression recognition accuracies of 79.8, 79.8, and 81.0% with linear, polynomial, and radial basis function (RBF) kernels, respectively. It is worth pointing out that, for simplicity, we did not use the boosted-LBP features or SVM in this work. To further compare DKLLE with the work in [12], we will explore combining the boosted-LBP features and SVM with DKLLE in our future work.

When DKLLE performs best, at a reduced dimension of 40, the corresponding confusion matrix of the 7-class facial expression recognition results is presented in Table 2. The confusion matrix shows that anger and joy are identified well, with accuracies of over 90%, while the other five expressions are discriminated more poorly, with accuracies below 90%. In particular, sadness is classified with the lowest accuracy of 64.93%, since it is highly confused with fear and neutral.

Table 2 Confusion matrix of recognition results with DKLLE on the JAFFE database

Experiments on the Cohn-Kanade database

The Cohn-Kanade database [29] comprises image sequences from 100 university students aged 18 to 30 years. Image sequences from neutral to target display were digitized into 640 × 490 pixel arrays with 8-bit precision for grayscale values. As done in [11, 12], 320 image sequences were selected from 96 subjects for the experiments. For each sequence, the neutral face and three peak frames were used for prototypic expression recognition, resulting in 1409 images (anger 96, joy 298, sadness 165, surprise 225, fear 141, disgust 135, and neutral 349). Figure 4 shows a few examples of facial expression images from the Cohn-Kanade database.

Figure 4. Examples of facial expression images from the Cohn-Kanade database.

Figure 5 presents the recognition performance of the different methods. Table 3 shows the best accuracy (std) for each method with the corresponding reduced dimension. The results in Figure 5 and Table 3 again indicate that DKLLE obtains recognition performance superior to the other methods. Compared with the previously reported work [11, 12], whose experimental settings are similar to ours, the best accuracy of 95.85% obtained with LBP-based 1-NN is highly competitive. In [11], LBP-based template matching achieved an accuracy of 79.1% on 7-class facial expression recognition tasks, and LBP-based SVM gave accuracies of 87.2, 88.4, and 87.6% with linear, polynomial, and RBF kernels, respectively. In [12], boosted-LBP features with SVM gave accuracies of 91.1, 91.1, and 91.4% with linear, polynomial, and RBF kernels, respectively, on 7-class facial expression recognition tasks.

Figure 5. Recognition accuracy vs. reduced dimension on the Cohn-Kanade database.

Table 3 The best accuracy (std) of different methods on the Cohn-Kanade database

Table 4 shows the confusion matrix of the 7-class expression recognition results when DKLLE obtains its best performance at a reduced dimension of 30. From Table 4, it can be seen that all seven facial expressions are identified very well, with accuracies of over 90%.

Table 4 Confusion matrix of recognition results with DKLLE on the Cohn-Kanade database

Conclusions

A new kernel-based supervised manifold learning algorithm, called DKLLE, is proposed for facial expression recognition. DKLLE has two prominent characteristics. First, as a kernel-based feature extraction method, DKLLE can extract the nonlinear feature information embedded in a data set, as KPCA and KLDA do. Second, DKLLE is designed to confer a high discriminating power on its low-dimensional embedded data representations, in an effort to improve performance on facial expression recognition. Experimental results on the JAFFE database and the Cohn-Kanade database show that DKLLE not only yields an obvious improvement over LLE and SLLE, but also outperforms the other methods used, including PCA, LDA, KPCA, and KLDA.

References

1. Picard RW: Affective Computing. The MIT Press, Cambridge; 2000.

2. Tian Y, Kanade T, Cohn J: Facial expression analysis. In Handbook of Face Recognition. Springer, Heidelberg; 2005:247-275.

3. Viola P, Jones M: Robust real-time face detection. Int J Comput Vision 2004, 57(2):137-154.

4. Turk MA, Pentland AP: Face recognition using eigenfaces. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society, HI, USA; 1991:586-591.

5. Belhumeur PN, Hespanha JP, Kriegman DJ: Eigenfaces vs. fisherfaces: recognition using class specific linear projection. IEEE Trans Pattern Anal Mach Intell 1997, 19(7):711-720.

6. Lee CC, Huang SS, Shih CY: Facial affect recognition using regularized discriminant analysis-based algorithms. EURASIP J Adv Signal Process 2010, 2010:10.

7. Daugman JG: Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Trans Acoust Speech Signal Process 1988, 36(7):1169-1179.

8. Shen L, Bai L: Information theory for Gabor feature selection for face recognition. EURASIP J Adv Signal Process 2006, 2006:11.

9. Ahonen T, Hadid A, Pietikäinen M: Face description with local binary patterns: application to face recognition. IEEE Trans Pattern Anal Mach Intell 2006, 28(12):2037-2041.

10. Ojala T, Pietikäinen M, Mäenpää T: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 2002, 24(7):971-987.

11. Shan C, Gong S, McOwan PW: Robust facial expression recognition using local binary patterns. In IEEE International Conference on Image Processing (ICIP), Volume 2. IEEE Computer Society, Italy; 2005:370-373.

12. Shan C, Gong S, McOwan PW: Facial expression recognition based on local binary patterns: a comprehensive study. Image Vis Comput 2009, 27(6):803-816.

13. Moore S, Bowden R: Local binary patterns for multi-view facial expression recognition. Comput Vis Image Understand 2011, 115:541-558.

14. Tian Y, Kanade T, Cohn J: Recognizing action units for facial expression analysis. IEEE Trans Pattern Anal Mach Intell 2001, 23(2):97-115.

15. Lyons MJ, Budynek J, Akamatsu S: Automatic classification of single facial images. IEEE Trans Pattern Anal Mach Intell 1999, 21(12):1357-1362.

16. Sebe N, Lew MS, Sun Y, Cohen I, Gevers T, Huang TS: Authentic facial expression analysis. Image Vis Comput 2007, 25(12):1856-1863.

17. Kotsia I, Pitas I: Facial expression recognition in image sequences using geometric deformation features and support vector machines. IEEE Trans Image Process 2007, 16(1):172-187.

18. Chang Y, Hu C, Turk M: Manifold of facial expression. In IEEE International Workshop on Analysis and Modeling of Faces and Gestures. IEEE Computer Society, France; 2003:28-35.

19. Shan C, Gong S, McOwan PW: Appearance manifold of facial expression. In Computer Vision in Human-Computer Interaction, Lecture Notes in Computer Science, Volume 3766. Springer, China; 2005:221-230.

20. Chang Y, Hu C, Feris R, et al: Manifold based analysis of facial expression. Image Vis Comput 2006, 24(6):605-614.

21. Roweis ST, Saul LK: Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290(5500):2323-2326.

22. Tenenbaum JB, de Silva V, Langford JC: A global geometric framework for nonlinear dimensionality reduction. Science 2000, 290(5500):2319-2323.

23. Cheon Y, Kim D: Natural facial expression recognition using differential-AAM and manifold learning. Pattern Recogn 2009, 42(7):1340-1350.

24. Xiao R, Zhao Q, Zhang D, Shi P: Facial expression recognition on multiple manifolds. Pattern Recogn 2011, 44(1):107-116.

25. de Ridder D, Kouropteva O, Okun O, Pietikäinen M, Duin RPW: Supervised locally linear embedding. In Artificial Neural Networks and Neural Information Processing - ICANN/ICONIP 2003, Lecture Notes in Computer Science, Volume 2714. Springer, Heidelberg; 2003:333-341.

26. Zhao L, Zhang Z: Supervised locally linear embedding with probability-based distance for classification. Comput Math Appl 2009, 57(6):919-926.

27. Li B, Zheng C-H, Huang D-S: Locally linear discriminant embedding: an efficient method for face recognition. Pattern Recogn 2008, 41(12):3813-3821.

28. Liang D, Yang J, Zheng Z, Chang Y: A facial expression recognition system based on supervised locally linear embedding. Pattern Recogn Lett 2005, 26(15):2374-2389.

29. Kanade T, Tian Y, Cohn J: Comprehensive database for facial expression analysis. In Fourth IEEE International Conference on Automatic Face and Gesture Recognition. IEEE Computer Society, France; 2000:46-53.

30. Schölkopf B: The kernel trick for distances. In Advances in Neural Information Processing Systems. MIT Press, Cambridge; 2001:301-307.

31. Schölkopf B, Smola A, Müller K: Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput 1998, 10(5):1299-1319.

32. Baudat G, Anouar F: Generalized discriminant analysis using a kernel approach. Neural Comput 2000, 12(10):2385-2404.

33. Wang J, Zhang Z, Zha H: Adaptive manifold learning. In Advances in Neural Information Processing Systems, Volume 17. MIT Press, Cambridge; 2005:1473-1480.

34. Saul LK, Roweis ST: Think globally, fit locally: unsupervised learning of low dimensional manifolds. J Mach Learn Res 2003, 4:119-155.

35. Campadelli P, Lanzarotti R, Lipori G, Salvi E: Face and facial feature localization. In International Conference on Image Analysis and Processing. Springer, Heidelberg; 2005:1002-1009.


Acknowledgements

This work was supported by Zhejiang Provincial Natural Science Foundation of China under Grant No. Z1101048 and Grant No. Y1111058.

Author information


Corresponding author

Correspondence to Shiqing Zhang.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Zhao, X., Zhang, S. Facial expression recognition using local binary patterns and discriminant kernel locally linear embedding. EURASIP J. Adv. Signal Process. 2012, 20 (2012). https://doi.org/10.1186/1687-6180-2012-20
