

  • Research Article
  • Open Access

Integrating Illumination, Motion, and Shape Models for Robust Face Recognition in Video

EURASIP Journal on Advances in Signal Processing 2008, 2008:469698

  • Received: 14 January 2007
  • Accepted: 1 October 2007


Abstract

Face recognition from video sequences has been studied far less than image-based recognition. In this paper, we present an analysis-by-synthesis framework for face recognition from video that is robust to large changes in facial pose and lighting. This requires tracking the video sequence, as well as recognition algorithms that can integrate information over the entire video; we address both problems. Our method builds on a recently obtained theoretical result that integrates the effects of motion, lighting, and shape in generating an image with a perspective camera. This result can be used to estimate the pose and structure of the face and the illumination conditions for each frame of a video sequence in the presence of multiple point and extended light sources. We propose a new inverse compositional estimation approach for this purpose. We then synthesize images from the face model estimated on the training data under the conditions observed in the probe sequences. Similarity between the synthesized and probe images is computed using suitable distance measures. The method handles situations where the pose and lighting conditions in the training and testing data are completely disjoint. We report detailed performance analysis and recognition scores on a large video dataset.
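The analysis-by-synthesis recognition step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `synthesize` stand-in applies only a brightness scale, whereas the paper's generative model integrates motion, lighting, and shape under a perspective camera; the pixel-wise Euclidean distance and the summation over frames are likewise illustrative assumptions.

```python
import numpy as np

def synthesize(template, pose, lighting):
    # Illustrative stand-in for the paper's generative model: a simple
    # brightness scaling of a gallery template. The actual model renders
    # the face under the estimated pose, shape, and illumination.
    return np.clip(template * lighting, 0.0, 1.0)

def recognize(probe_frames, frame_params, gallery):
    """Pick the gallery identity whose synthesized images best match
    the probe sequence, integrating evidence over all frames.

    probe_frames : list of 2-D arrays (observed frames)
    frame_params : list of (pose, lighting) estimated per frame
    gallery      : dict mapping identity -> face template/model
    """
    best_id, best_score = None, np.inf
    for identity, template in gallery.items():
        # Accumulate per-frame distances between synthesized and
        # observed images over the whole video.
        score = sum(
            np.linalg.norm(synthesize(template, pose, light) - frame)
            for frame, (pose, light) in zip(probe_frames, frame_params)
        )
        if score < best_score:
            best_id, best_score = identity, score
    return best_id
```

Because the per-frame pose and illumination are estimated from the probe video itself, the gallery images can be rendered to match the probe conditions, which is what lets training and testing conditions be completely disjoint.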


  • Face Recognition
  • Video Sequence
  • Recognition Algorithm
  • Illumination Condition
  • Detailed Performance

Publisher note

To access the full article, please see PDF.

Authors’ Affiliations

Department of Electrical Engineering, University of California, Riverside, CA 92521, USA


© Yilei Xu et al. 2008

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.