  • Research Article

Statistical Lip-Appearance Models Trained Automatically Using Audio Information

Abstract

We aim to model the appearance of the lower face region to assist visual feature extraction in audio-visual speech processing applications. In this paper, we present a neural-network-based statistical appearance model of the lips that classifies pixels as belonging to the lips, skin, or inner-mouth classes. Training this model requires labeled examples, and we propose to label images automatically using a lip-shape model and a red-hue energy function. To improve lip-tracking performance, we use blue marked-up image sequences of the same subject uttering the same sentences as in the natural, non-marked-up sequences. The lip shapes, easily extracted from the blue images, are then mapped to the natural sequences using acoustic information. The resulting lip-shape estimates simplify lip-tracking on the natural images by reducing the dimensionality of the parameter space in the red-hue energy minimization, thus yielding better estimates of contour shape and location. We applied the proposed method to a small audio-visual database of three subjects, achieving pixel-classification errors of around 6%, compared to 3% for hand-placed contours and 20% for filtered red hue.
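
For illustration, the sketch below shows one way such a pixel-level lip-appearance classifier could be set up. It is a minimal Python/scikit-learn sketch under assumed per-pixel features (brightness-normalized RGB plus a red pseudo-hue) and an assumed network size; it is not the authors' implementation, and the function names (`pixel_features`, `train_appearance_model`, `classify_pixels`) are hypothetical.

```python
# Minimal sketch (not the authors' code): train a small neural network to classify
# lower-face pixels into lips / skin / inner-mouth classes from per-pixel color
# features. The feature choice (chromaticity + a red pseudo-hue) and the network
# size are illustrative assumptions, not taken from the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

LIPS, SKIN, INNER_MOUTH = 0, 1, 2  # hypothetical label encoding

def pixel_features(image):
    """Map an HxWx3 uint8 RGB image to an (H*W, 4) feature matrix."""
    rgb = image.reshape(-1, 3).astype(np.float64)
    chroma = rgb / (rgb.sum(axis=1, keepdims=True) + 1e-9)    # brightness-normalized RGB
    red_hue = rgb[:, :1] / (rgb[:, :1] + rgb[:, 1:2] + 1e-9)  # R/(R+G), high on lip pixels
    return np.hstack([chroma, red_hue])

def train_appearance_model(images, label_maps):
    """Fit the classifier on images whose HxW label maps come from an automatic
    contour fit (the paper's setting), rather than from hand annotation."""
    X = np.vstack([pixel_features(img) for img in images])
    y = np.concatenate([lbl.ravel() for lbl in label_maps])
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300)
    clf.fit(X, y)
    return clf

def classify_pixels(clf, image):
    """Return an HxW map of predicted classes for a new lower-face image."""
    return clf.predict(pixel_features(image)).reshape(image.shape[:2])
```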

Cite this article

Daubias, P., Deléglise, P. Statistical Lip-Appearance Models Trained Automatically Using Audio Information. EURASIP J. Adv. Signal Process. 2002, 720534 (2002). https://doi.org/10.1155/S1110865702206186
