  • Research Article
  • Open Access

Dereverberation by Using Time-Variant Nature of Speech Production System

EURASIP Journal on Advances in Signal Processing 2007, 2007:065698

https://doi.org/10.1155/2007/65698

  • Received: 25 August 2006
  • Accepted: 21 June 2007
  • Published:

Abstract

This paper addresses the problem of blind speech dereverberation by inverse filtering of a room acoustic system. Since a speech signal can be modeled as the output of a speech production system driven by an innovations process, a reverberant signal is the output of a composite system that cascades the speech production and room acoustic systems. Dereverberation therefore requires extracting from this composite system (or its inverse filter) only the part corresponding to the room acoustic system (or its inverse filter). The time-variant nature of the speech production system can be exploited for this purpose: the room acoustic system is time-invariant, whereas the speech production system changes from frame to frame. To realize this time-variance-based inverse filter estimation, we introduce a joint estimation of the inverse filters of both the time-invariant room acoustic system and the time-variant speech production system, and present two estimation algorithms with distinct properties.
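The composite-system model underlying the abstract can be sketched numerically. The toy script below (all signals, filter coefficients, and frame sizes are illustrative assumptions, not the paper's data or algorithms) synthesizes a "speech" signal as a frame-wise time-variant AR(2) process driven by white innovations, passes it through a short time-invariant minimum-phase "room" filter, and then applies the room's exact inverse filter. Removing only the time-invariant part restores the original signal while leaving the time-variant production system intact, which is precisely the separation the paper's blind estimation targets:

```python
import numpy as np

rng = np.random.default_rng(0)
frame = 160        # samples per frame; the production filter changes each frame
n_frames = 50

# Toy time-variant "speech production system": an AR(2) filter whose pole
# angle drifts from frame to frame (assumed drift law), driven by white
# innovations as a stand-in for real speech.
speech = np.zeros(n_frames * frame)
y1 = y2 = 0.0
for k in range(n_frames):
    theta = 0.3 + 0.2 * np.sin(2 * np.pi * k / n_frames)
    a1 = -2 * 0.95 * np.cos(theta)   # coefficients of a pole pair at 0.95 e^{±j theta}
    a2 = 0.95 ** 2
    for n in range(frame):
        y = rng.standard_normal() - a1 * y1 - a2 * y2
        speech[k * frame + n] = y
        y1, y2 = y, y1

# Toy time-invariant "room acoustic system": a minimum-phase FIR (zeros at
# radius 0.5), so its exact inverse is a stable all-pole filter.
room = np.array([1.0, 0.5, 0.25])
reverb = np.convolve(speech, room)[: len(speech)]

# Apply the exact inverse filter of the room as a recursion. Only the
# time-invariant part is removed; the time-variant speech system remains.
dereverbed = np.zeros_like(reverb)
for n in range(len(reverb)):
    acc = reverb[n]
    if n >= 1:
        acc -= room[1] * dereverbed[n - 1]
    if n >= 2:
        acc -= room[2] * dereverbed[n - 2]
    dereverbed[n] = acc

residual = np.max(np.abs(dereverbed - speech))  # essentially zero (float error)
```

The inverse exists here only because the toy room response is minimum phase and known; real room responses are long, generally nonminimum phase, and unobserved, which is why the paper estimates the inverse filter blindly instead of inverting a measured response.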

Keywords

  • Information Technology
  • Production System
  • Estimation Algorithm
  • Quantum Information
  • Speech Signal

Authors’ Affiliations

(1)
NTT Communication Science Laboratories, NTT Corporation, 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0237, Japan

References

  1. Rabiner LR, Schafer RW: Digital Processing of Speech Signals. Prentice-Hall, Upper Saddle River, NJ, USA; 1983.
  2. Gurelli MI, Nikias CL: EVAM: an eigenvector-based algorithm for multichannel blind deconvolution of input colored signals. IEEE Transactions on Signal Processing 1995, 43(1): 134-149. doi:10.1109/78.365293
  3. Furuya K, Kaneda Y: Two-channel blind deconvolution of nonminimum phase FIR systems. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences 1997, E80-A(5): 804-808.
  4. Gannot S, Moonen M: Subspace methods for multimicrophone speech dereverberation. EURASIP Journal on Applied Signal Processing 2003, 2003(11): 1074-1090. doi:10.1155/S1110865703305049
  5. Hikichi T, Delcroix M, Miyoshi M: Blind dereverberation based on estimates of signal transmission channels without precise information on channel order. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), March 2005, Philadelphia, Pa, USA 1: 1069-1072.
  6. Delcroix M, Hikichi T, Miyoshi M: Precise dereverberation using multichannel linear prediction. IEEE Transactions on Audio, Speech, and Language Processing 2007, 15(2): 430-440.
  7. Yegnanarayana B, Murthy PS: Enhancement of reverberant speech using LP residual signal. IEEE Transactions on Speech and Audio Processing 2000, 8(3): 267-281. doi:10.1109/89.841209
  8. Gillespie BW, Malvar HS, Florêncio DAF: Speech dereverberation via maximum-kurtosis subband adaptive filtering. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '01), May 2001, Salt Lake City, Utah, USA 6: 3701-3704.
  9. Gillespie BW, Atlas LE: Strategies for improving audible quality and speech recognition accuracy of reverberant speech. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), April 2003, Hong Kong 1: 676-679.
  10. Gaubitch ND, Naylor PA, Ward DB: On the use of linear prediction for dereverberation of speech. Proceedings of International Workshop on Acoustic Echo and Noise Control (IWAENC '03), September 2003, Kyoto, Japan, 99-102.
  11. Nakatani T, Kinoshita K, Miyoshi M: Harmonicity-based blind dereverberation for single-channel speech signals. IEEE Transactions on Audio, Speech, and Language Processing 2007, 15(1): 80-95.
  12. Kinoshita K, Nakatani T, Miyoshi M: Efficient blind dereverberation framework for automatic speech recognition. Proceedings of the 9th European Conference on Speech Communication and Technology, September 2005, Lisbon, Portugal, 3145-3148.
  13. Spencer PS, Rayner PJW: Separation of stationary and time-varying systems and its application to the restoration of gramophone recordings. IEEE International Symposium on Circuits and Systems (ISCAS '89), May 1989, Portland, Ore, USA 1: 292-295.
  14. Hopgood JR, Rayner PJW: Blind single channel deconvolution using nonstationary signal processing. IEEE Transactions on Speech and Audio Processing 2003, 11(5): 476-488. doi:10.1109/TSA.2003.815522
  15. Shalvi O, Weinstein E: New criteria for blind deconvolution of nonminimum phase systems (channels). IEEE Transactions on Information Theory 1990, 36(2): 312-321. doi:10.1109/18.52478
  16. Abed-Meraim K, Moulines E, Loubaton P: Prediction error method for second-order blind identification. IEEE Transactions on Signal Processing 1997, 45(3): 694-705. doi:10.1109/78.558487
  17. Theobald B, Cox S, Cawley G, Milner B: Fast method of channel equalisation for speech signals and its implementation on a DSP. Electronics Letters 1999, 35(16): 1309-1311. doi:10.1049/el:19990912
  18. Pham D-T, Cardoso J-F: Blind separation of instantaneous mixtures of nonstationary sources. IEEE Transactions on Signal Processing 2001, 49(9): 1837-1848. doi:10.1109/78.942614
  19. Matsuoka K, Ohya M, Kawamoto M: A neural net for blind separation of nonstationary signals. Neural Networks 1995, 8(3): 411-419. doi:10.1016/0893-6080(94)00083-X
  20. Acoustical Society of Japan: ASJ Continuous Speech Corpus. http://www.mibel.cs.tsukuba.ac.jp/jnas/instruct.html
  21. Kuttruff H: Room Acoustics. Elsevier Applied Science, London, UK; 1991.
  22. Kleijn WB, Paliwal KK (Eds.): Speech Coding and Synthesis. Elsevier Science, Amsterdam, The Netherlands; 1995.
  23. Gorokhov A, Loubaton P: Blind identification of MIMO-FIR systems: a generalized linear prediction approach. Signal Processing 1999, 73(1-2): 105-124. doi:10.1016/S0165-1684(98)00187-X
  24. Jacod J, Shiryaev AN: Limit Theorems for Stochastic Processes. Springer, New York, NY, USA; 1987.
  25. Comon P: Independent component analysis, a new concept? Signal Processing 1994, 36(3): 287-314. doi:10.1016/0165-1684(94)90029-9
  26. Hyvärinen A, Karhunen J, Oja E: Independent Component Analysis. John Wiley & Sons, New York, NY, USA; 2001.
  27. Yoshioka T, Hikichi T, Miyoshi M, Okuno HG: Robust decomposition of inverse filter of channel and prediction error filter of speech signal for dereverberation. Proceedings of the 14th European Signal Processing Conference (EUSIPCO '06), 2006, Florence, Italy.
  28. Boll SF: Suppression of acoustic noise in speech using spectral subtraction. IEEE Transactions on Acoustics, Speech, and Signal Processing 1979, 27(2): 113-120. doi:10.1109/TASSP.1979.1163209

Copyright

© Takuya Yoshioka et al. 2007

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
