Open Access

Spatio-Temporal Graphical-Model-Based Multiple Facial Feature Tracking

EURASIP Journal on Advances in Signal Processing 2005, 2005:215497

https://doi.org/10.1155/ASP.2005.2091

Received: 1 January 2004

Published: 15 August 2005

Abstract

Tracking multiple facial features simultaneously is challenging when a face presents rich expressions. We propose a two-step solution. In the first step, several independent condensation-style particle filters track each facial feature in the temporal domain. Particle filters are very effective for visual tracking problems; however, multiple independent trackers ignore the spatial constraints and the natural relationships among facial features. In the second step, we use Bayesian inference, namely belief propagation, to infer each facial feature's contour in the spatial domain, having learned the relationships among the contours of facial features beforehand from a large facial expression database. Experimental results show that our algorithm robustly tracks multiple facial features simultaneously, even in the presence of large interframe motions and expression changes.
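The paper gives no implementation details at this point; as a rough illustration only, the first (temporal) step can be sketched as a generic condensation-style particle filter applied to one feature's state. The function name, the random-walk motion model, and the toy Gaussian observation model below are our own assumptions for the sketch, not the authors' method.

```python
import numpy as np

def condensation_step(particles, weights, observe, motion_std=1.0, rng=None):
    """One resample-predict-weight cycle of a condensation-style particle filter.

    particles : (N, D) array of state hypotheses (e.g. feature-contour parameters)
    weights   : (N,) normalized importance weights from the previous frame
    observe   : likelihood function mapping (N, D) states -> (N,) scores
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(particles)
    # Factored sampling: resample hypotheses proportionally to their weights.
    idx = rng.choice(n, size=n, p=weights)
    resampled = particles[idx]
    # Predict: propagate each hypothesis through a simple random-walk model
    # (a stand-in for whatever dynamical model the tracker actually uses).
    predicted = resampled + rng.normal(scale=motion_std, size=resampled.shape)
    # Weight: score each hypothesis against the new frame's observation.
    new_weights = observe(predicted)
    new_weights = new_weights / new_weights.sum()
    return predicted, new_weights

# Toy usage: track a scalar "feature position" whose true value is 5.0.
rng = np.random.default_rng(0)
particles = rng.normal(0.0, 3.0, size=(200, 1))
weights = np.full(200, 1.0 / 200)
likelihood = lambda p: np.exp(-0.5 * ((p[:, 0] - 5.0) / 0.5) ** 2) + 1e-12
for _ in range(30):
    particles, weights = condensation_step(particles, weights, likelihood, rng=rng)
estimate = float((particles[:, 0] * weights).sum())  # weighted posterior mean
```

In the paper's second step, the per-feature posteriors produced this way would then be reconciled by belief propagation over a spatial graphical model whose pairwise potentials are learned from the expression database; that step is not sketched here.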

Keywords and phrases:

facial feature tracking; particle filter; belief propagation; graphical model

Authors’ Affiliations

(1)
College of Computer Science, Zhejiang University

Copyright

© Su and Huang 2005