
Dereverberation by Using Time-Variant Nature of Speech Production System


This paper addresses the problem of blind speech dereverberation by inverse filtering of a room acoustic system. Since a speech signal can be modeled as the output of a speech production system driven by an innovations process, a reverberant signal is the output of a composite system consisting of the speech production and room acoustic systems. Therefore, only the part corresponding to the room acoustic system (or its inverse filter) must be extracted from the composite system (or its inverse filter). The time-variant nature of the speech production system can be exploited for this purpose: the production system changes from frame to frame, whereas the room acoustic system stays fixed. To realize this time-variance-based inverse filter estimation, we introduce a joint estimation of the inverse filters of both the time-invariant room acoustic system and the time-variant speech production system, and present two estimation algorithms with distinct properties.
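The cascade structure described in the abstract can be sketched numerically: white innovations drive a frame-wise time-variant AR(2) "production" filter, the result passes through a fixed FIR "room" filter, and applying the two inverse filters in reverse order recovers the source and the innovations. This is only an illustrative toy model (all coefficients, filter orders, and signal lengths below are invented for the sketch), not the paper's algorithm, which must estimate these inverses blindly from the observation alone.

```python
import numpy as np

rng = np.random.default_rng(0)
N, FRAME = 4000, 160

# Innovations process: white excitation driving the production system.
innov = rng.standard_normal(N)

# Time-VARIANT speech production system: an AR(2) resonance whose
# coefficients drift from frame to frame (arbitrary illustrative values).
def frame_coefs(t0):
    r, theta = 0.9, 0.3 + 0.2 * np.sin(2 * np.pi * t0 / N)
    return 2 * r * np.cos(theta), -r * r   # s[t] = e[t] + c1*s[t-1] + c2*s[t-2]

src = np.zeros(N)
for t in range(N):
    c1, c2 = frame_coefs(FRAME * (t // FRAME))
    s1 = src[t - 1] if t >= 1 else 0.0
    s2 = src[t - 2] if t >= 2 else 0.0
    src[t] = innov[t] + c1 * s1 + c2 * s2

# Time-INVARIANT room acoustic system: a toy minimum-phase monic FIR.
h = np.array([1.0, 0.6, 0.3, 0.15])
x = np.convolve(src, h)[:N]            # reverberant observation

# Inverse of the room system (time-invariant IIR, valid since h is
# minimum phase):  src_hat[t] = x[t] - sum_{k>=1} h[k] * src_hat[t-k]
src_hat = np.zeros(N)
for t in range(N):
    acc = x[t]
    for k in range(1, len(h)):
        if t - k >= 0:
            acc -= h[k] * src_hat[t - k]
    src_hat[t] = acc

# Inverse of the production system: the time-VARIANT prediction-error
# filter, applied frame-synchronously with the coefficients above.
innov_hat = np.zeros(N)
for t in range(N):
    c1, c2 = frame_coefs(FRAME * (t // FRAME))
    s1 = src_hat[t - 1] if t >= 1 else 0.0
    s2 = src_hat[t - 2] if t >= 2 else 0.0
    innov_hat[t] = src_hat[t] - c1 * s1 - c2 * s2

# Both residuals are near machine precision: the composite inverse
# factors into a fixed room inverse and a time-variant production inverse.
print(np.max(np.abs(src_hat - src)), np.max(np.abs(innov_hat - innov)))
```

The point of the sketch is the factorization the paper exploits: only the room inverse is time-invariant, so a jointly estimated inverse whose long-term part is constrained to be fixed across frames isolates the room acoustic system from the drifting production system.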



Author information



Corresponding author

Correspondence to Takuya Yoshioka.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Yoshioka, T., Hikichi, T. & Miyoshi, M. Dereverberation by Using Time-Variant Nature of Speech Production System. EURASIP J. Adv. Signal Process. 2007, 065698 (2007).



  • Information Technology
  • Production System
  • Estimation Algorithm
  • Quantum Information
  • Speech Signal