Open Access

Separation of Audio-Visual Speech Sources: A New Approach Exploiting the Audio-Visual Coherence of Speech Stimuli

  • David Sodoyer1,
  • Jean-Luc Schwartz1,
  • Laurent Girin1,
  • Jacob Klinkisch1 and
  • Christian Jutten1
EURASIP Journal on Advances in Signal Processing 2002, 2002:382823

Received: 19 October 2001

Published: 28 November 2002


We present a new approach to the source separation problem for multiple speech signals. The method is based on automatic lipreading: the objective is to extract an acoustic speech signal from other acoustic signals by exploiting its coherence with the speaker's lip movements. We consider the case of an additive stationary mixture of decorrelated sources, with no further assumptions on independence or non-Gaussianity. First, we present a theoretical framework showing that it is indeed possible to separate a source when some of its spectral characteristics are provided to the system. We then address the case of audio-visual sources. We show that, once a statistical model of the joint probability of visual and spectral audio input has been learnt to quantify the audio-visual coherence, separation can be achieved by maximizing this probability. Finally, we present separation results on a corpus of vowel-plosive-vowel sequences uttered by a single speaker, embedded in a mixture of other voices. Separation can be quite good for mixtures of 2, 3, and 5 sources. These results, while preliminary, are encouraging, and we discuss their potential complementarity with traditional audio-only separation or enhancement techniques.
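The core idea of the theoretical framework — extracting one source from an additive mixture when some of its spectral characteristics are known in advance — can be illustrated with a toy sketch. Everything here is invented for illustration: the synthetic sources, the mixing matrix, and the spectral template (which the paper would instead predict from the speaker's lip parameters via a learnt audio-visual model). The sketch simply searches over candidate demixing directions and keeps the one whose output spectrum best matches the template.

```python
import numpy as np

n = 4096
t = np.arange(n) / 8000.0

# Two decorrelated toy sources with distinct spectral envelopes
# (stand-ins for speech; the paper uses real vowel-plosive-vowel utterances).
s1 = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 600 * t)
s2 = np.sin(2 * np.pi * 1100 * t) + 0.5 * np.sin(2 * np.pi * 2200 * t)

A = np.array([[1.0, 0.8],        # unknown mixing matrix (additive,
              [0.6, 1.0]])       # stationary, as assumed in the abstract)
x = A @ np.vstack([s1, s2])      # two-sensor mixture

# Spectral template for source 1, "as if" predicted from lip movements.
target = np.abs(np.fft.rfft(s1))
target /= np.linalg.norm(target)

best_score, best_y = -np.inf, None
for theta in np.linspace(0, np.pi, 360, endpoint=False):
    w = np.array([np.cos(theta), np.sin(theta)])   # candidate demixing vector
    y = w @ x
    spec = np.abs(np.fft.rfft(y))
    nrm = np.linalg.norm(spec)
    if nrm == 0:
        continue
    score = float(spec @ target / nrm)   # cosine match to the known spectrum
    if score > best_score:
        best_score, best_y = score, y

# The extracted signal should correlate strongly with source 1.
corr = abs(np.corrcoef(best_y, s1)[0, 1])
print(corr > 0.9)
```

The match score is maximized when the demixing vector cancels the interfering source's contribution, which is exactly the separating row of the inverse mixing matrix; the paper replaces this fixed template with a statistical audio-visual coherence criterion that is maximized instead.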


Keywords: blind source separation, lipreading, audio-visual speech processing

Authors’ Affiliations

Institut de la Communication Parlée, Institut National Polytechnique de Grenoble, Université Stendhal, Grenoble Cedex 1, France


© Sodoyer et al. 2002