  • Research
  • Open Access

Integrated acoustic echo and background noise suppression technique based on soft decision

EURASIP Journal on Advances in Signal Processing 2012, 2012:11

https://doi.org/10.1186/1687-6180-2012-11

  • Received: 19 May 2011
  • Accepted: 17 January 2012

Abstract

In this paper, we propose an efficient integrated acoustic echo and background noise suppression algorithm that uses the combined power of the acoustic echo and background noise within a soft decision framework. The combined power is incorporated into the soft-decision-based suppression algorithm to address artifacts, such as nonlinear distortion and disturbed noise estimates, introduced by conventional methods. Specifically, in the unified frequency-domain architecture, the acoustic echo and noise are efficiently suppressed by the soft-decision-based acoustic echo suppression algorithm without the help of an additional noise reduction technique.

Keywords

  • Discrete Fourier Transform
  • Mean Opinion Score
  • Voice Activity Detection
  • Soft Decision
  • Combined Power

1 Introduction

Recently, hands-free systems have become widely used for safety and convenience in mobile communication. However, such equipment introduces specific technical difficulties due to the background noise and the echoes caused by acoustic coupling between its loudspeaker and microphone [1, 2]. Thus, for hands-free mobile equipment, the serial combination of acoustic echo cancellation (AEC) and noise reduction (NR) algorithms has been predominantly considered to achieve improved performance and sufficient quality of the transmitted speech signal [3, 4]. Indeed, the performance of a conventional integrated system is significantly affected by how the AEC and NR algorithms are combined. Generally, in the conventional unified structure where the NR module follows the AEC algorithm, noise estimation can be disturbed by the AEC processing. Conversely, in the unified structure where the NR algorithm precedes the AEC algorithm, non-linear distortions are introduced on the echo signal, which can disturb the identification operation [5]. Therefore, much work has been dedicated to improving the performance of the combined AEC and NR structure. In [6], Gustafsson et al. used a single perceptually motivated weighting rule to suppress both noise and residual echo in the frequency domain. However, this method needs an adaptive echo canceller to identify the echo path impulse response for eliminating the undesired echo, which also affects the performance of the NR algorithm. In [7], Habets et al. presented a joint suppression technique for stationary (e.g., background noise) and non-stationary (e.g., echo) interference using a soft decision approach. However, an estimate of the variance of the echo signal was assumed to be known a priori, which inherently requires the AEC before the NR module.
Another closely related technique by the same authors combines suppression of residual echo, reverberation, and background noise in a post-filter following the traditional AEC [8]. However, the cancellation in [7, 8] is performed directly on the waveform, so the algorithm is sensitive to misalignment in the echo path response estimate. Also, it is hard to efficiently model long impulse responses requiring hundreds of coefficients. From this viewpoint, it is noteworthy that the low-complexity acoustic echo suppression (AES) algorithm by Faller [9] uses a spectral modification technique incorporating an echo path response filter that characterizes the actual echo path in the frequency domain. Recently, our previous approach in [10] presented a novel AES algorithm based on soft decision that dispenses with the AEC and the additional residual echo suppression (RES) that conventional methods substantially need. However, this technique does not take the background noise into consideration for suppression, which is not realistic.

In this paper, we propose a novel integrated suppression algorithm in which the combined power of the acoustic echo and background noise is incorporated within a soft decision framework as in [10], so as to directly suppress both the strong acoustic echo and the noise signal in the frequency domain. The proposed method efficiently estimates the echo and noise powers separately and sums them, providing a unified framework for determining and modifying the suppression gain based on soft decision. This is clearly different from conventional integrated strategies requiring the AEC and NR independently. To this end, our approach directly estimates the spectral envelope of the echo signal instead of identifying the echo path impulse response in the time domain, and the background noise is estimated during periods when both near-end speech and echo are absent. In particular, the acoustic echo and noise are reduced at once through a single soft-decision-based gain using the estimated combined power. Consequently, the proposed method can efficiently suppress the acoustic echo and noise without the help of an additional residual signal suppressor, and the proposed unified structure addresses the residual echo and noise problems of the conventional unified structure where the NR operation is placed after the AEC algorithm or vice versa. The performance of the proposed algorithm is evaluated by both subjective and objective quality tests and is demonstrated to be better than that of the conventional methods.

2 Proposed integrated suppression algorithm based on soft decision

As noted in the previous section, the AES technique in [10] needs an additional NR stage before or after the AES architecture to suppress noise. However, this procedure can have drawbacks such as non-linear distortion on the echo or a disturbed noise power estimate, as in the conventional integrated system [5]. When the NR operation is placed after the AES algorithm, the noise power estimation can be disturbed by the AES processing. Conversely, in the unified structure where the NR algorithm is simply placed before the AES, non-linear distortions are introduced on the echo signal, which can disturb the identification operation. In order to mitigate the problems resulting from the serially combined structure, we propose a novel integrated suppression system based on the combined power of the acoustic echo and background noise; Figure 1 shows the block diagram of the proposed system based on soft decision. From the figure, it can be seen that the proposed method suppresses the acoustic echo and the noise signal with a single gain based on soft decision. To this end, the noise and echo spectra are separately and efficiently estimated and combined into a single power within the soft decision framework. Since we take the frequency domain AES algorithm in [10] as a baseline, we restate the two hypotheses, now incorporating the DFT spectrum of the noise signal D(i,k): H_0 and H_1 indicate near-end speech absence and presence as follows:
Figure 1

Block diagram of the proposed integrated algorithm.

$$H_0\ (\text{near-end speech absent}):\quad Y(i,k) = D(i,k) + E(i,k)$$
$$H_1\ (\text{near-end speech present}):\quad Y(i,k) = D(i,k) + E(i,k) + S(i,k)$$
(1)

where E(i,k), S(i,k), and Y(i,k) represent the DFT spectra of the echo signal, the near-end speech, and the input signal picked up by the microphone, respectively, with frame index i and frequency index k.

Under the assumption that D(i,k), E(i,k), and S(i,k) are characterized by independent zero-mean complex Gaussian distributions, the following are obtained [10]:
$$p(Y(i,k)\mid H_0) = \frac{1}{\pi\{\lambda_e(i,k)+\lambda_d(i,k)\}}\exp\!\left[-\frac{|Y(i,k)|^2}{\lambda_e(i,k)+\lambda_d(i,k)}\right]$$
(2)
$$p(Y(i,k)\mid H_1) = \frac{1}{\pi\{\lambda_s(i,k)+\lambda_e(i,k)+\lambda_d(i,k)\}}\exp\!\left[-\frac{|Y(i,k)|^2}{\lambda_s(i,k)+\lambda_e(i,k)+\lambda_d(i,k)}\right]$$
(3)
where λ_e(i,k), λ_d(i,k), and λ_s(i,k) are the variances of the echo, noise, and near-end speech, respectively. The near-end speech absence probability (NSAP) p(H_0|Y(i,k)) for each frequency band is derived from Bayes' rule such that [10]:
$$p(H_0\mid Y(i,k)) = \frac{p(Y(i,k)\mid H_0)\,p(H_0)}{p(Y(i,k)\mid H_0)\,p(H_0) + p(Y(i,k)\mid H_1)\,p(H_1)} = \frac{1}{1 + q\,\Lambda(Y(i,k))}$$
(4)
where q = p(H_1)/p(H_0) and p(H_0) (= 1 - p(H_1)) denotes the a priori probability of near-end speech absence. Substituting (2) and (3) into (4), the likelihood ratio Λ(Y(i,k)) is computed as follows:
$$\Lambda(Y(i,k)) = \frac{p(Y(i,k)\mid H_1)}{p(Y(i,k)\mid H_0)} = \frac{1}{1+\xi(i,k)}\exp\!\left[\frac{\gamma(i,k)\,\xi(i,k)}{1+\xi(i,k)}\right]$$
(5)
For (5), we define the a posteriori signal-to-combined power ratio (SCR) γ(i, k) and the a priori SCR ξ(i, k) by
$$\gamma(i,k) = \frac{|Y(i,k)|^2}{\lambda_{cb}(i,k)},\qquad \xi(i,k) = \frac{\lambda_s(i,k)}{\lambda_{cb}(i,k)}$$
(6)
where λ_cb(i,k) denotes the combined power of the echo and noise to be suppressed simultaneously, which must be estimated carefully. Also, ξ(i,k) is estimated with the help of the well-known decision-directed (DD) approach [10]. Then
$$\hat{\xi}(i,k) = \alpha_{DD}\,\frac{|\hat{S}(i-1,k)|^2}{\hat{\lambda}_{cb}(i-1,k)} + (1-\alpha_{DD})\,P[\gamma(i,k)-1]$$
(7)

where α_DD is a weight, and P[z] = z if z ≥ 0 and P[z] = 0 otherwise. Also, Ŝ(i-1,k) is the k-th frequency component of the near-end speech estimate at the previous frame, and λ̂_cb(i,k) is the estimate of λ_cb(i,k).
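As a concrete illustration, the DD update in (7) can be sketched as follows. This is a minimal Python sketch; the value of α_DD is an assumption, since the paper does not state it.

```python
import numpy as np

# Assumed smoothing weight; the paper does not give its alpha_DD value.
ALPHA_DD = 0.98

def a_priori_scr(s_prev, lambda_cb_prev, gamma_curr):
    """Decision-directed (DD) estimate of the a priori SCR, Eq. (7).

    s_prev         : near-end speech estimate S_hat(i-1, k) (complex)
    lambda_cb_prev : combined echo+noise power estimate at frame i-1
    gamma_curr     : a posteriori SCR gamma(i,k) = |Y(i,k)|^2 / lambda_cb(i,k)
    """
    ml_term = np.maximum(gamma_curr - 1.0, 0.0)  # half-wave rectifier P[.]
    return (ALPHA_DD * np.abs(s_prev) ** 2 / lambda_cb_prev
            + (1.0 - ALPHA_DD) * ml_term)
```

The first term reuses the previous frame's speech estimate, which smooths the SCR trajectory; the second term is the rectified maximum-likelihood estimate from the current frame.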

For λ̂_cb(i,k), we first estimate the power of the echo signal when the near-end speech is absent from the observation (single-talk), as given by
$$\hat{\lambda}_e(i,k) = \alpha_{\lambda_e}\,\hat{\lambda}_e(i-1,k) + (1-\alpha_{\lambda_e})\,|\hat{E}(i,k)|^2$$
(8)
where α_λe is a smoothing parameter. Note that the noise is not taken into account in this update, since the echo is assumed to be uncorrelated with the noise and the echo power to be dominant over the noise power. The estimated magnitude spectrum of the echo, |Ê(i,k)|, is given by
$$|\hat{E}(i,k)| = H(i,k)\,|X_d(i,k)|$$
(9)
where X_d(i,k) is the far-end speech signal and H(i,k) is the gain filter characterizing the echo path response, obtained as the magnitude of the least squares estimator [9]:
$$H(i,k) = \frac{E[X_d^*(i,k)\,Y(i,k)]}{E[X_d^*(i,k)\,X_d(i,k)]}$$
(10)

where * denotes the complex conjugate and d indicates a delay of d samples. Since the echo path is time varying, H(i,k) is estimated iteratively as in [10]. Note that, since Y(i,k) is not affected by any NR algorithm, the estimate of the echo path response does not suffer from non-linear distortion caused by the NR operation. The update of the estimate H(i,k) should be frozen during double-talk periods to prevent the divergence of H(i,k). To detect double-talk, the cross-correlation coefficient-based double-talk detection method of [4] is implemented in the frequency domain. More specifically, (1) the cross-correlation coefficient between the microphone input and the estimated echo, and (2) the cross-correlation coefficient between the microphone input and the residual error of the suppressor are computed and used to detect double-talk periods on each frame.
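The recursive estimation of H(i,k) with a double-talk freeze can be sketched as below. This is an illustrative approximation: the expectations in (10) are tracked by exponential smoothing with an assumed constant ALPHA_H, whereas the paper's exact recursion follows [10].

```python
import numpy as np

ALPHA_H = 0.9  # assumed smoothing constant for the expectations in Eq. (10)

class EchoPathGain:
    """Sketch of the echo path gain filter H(i,k) of Eqs. (9)-(10).

    E[X_d* Y] and E[X_d* X_d] are tracked by exponential smoothing,
    and the update is frozen during double-talk as described in the text.
    """

    def __init__(self, n_bins):
        self.cross = np.zeros(n_bins, dtype=complex)  # ~ E[X_d* Y]
        self.auto = np.full(n_bins, 1e-12)            # ~ E[X_d* X_d]

    def estimate_echo_mag(self, x_far, y_mic, double_talk=False):
        """Return |E_hat(i,k)| = H(i,k) |X_d(i,k)| for the current frame."""
        if not double_talk:  # freeze H during double-talk to avoid divergence
            self.cross = ALPHA_H * self.cross + (1 - ALPHA_H) * np.conj(x_far) * y_mic
            self.auto = ALPHA_H * self.auto + (1 - ALPHA_H) * np.abs(x_far) ** 2
        h = np.abs(self.cross) / self.auto  # magnitude of the LS estimator
        return h * np.abs(x_far)
```

With a stationary single-talk echo path, the estimate converges to the true spectral gain between the far-end and microphone signals.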

Based on the estimated echo power, we propose a combined power incorporating both the echo power and the background noise power. This is clearly different from the previous approach in [10], which does not estimate and include the background noise power because of the difficulty in estimating the noise power after the AES algorithm, as explained in the first paragraph of Section 2. Specifically, the combined power λ_cb(i,k) is estimated by assuming that the acoustic echo and noise are uncorrelated and then combining the estimated echo and noise powers via a long-term smoothing scheme with a parameter α_λcb such that
$$\hat{\lambda}_{cb}(i,k) = \alpha_{\lambda_{cb}}\,\hat{\lambda}_{cb}(i-1,k) + (1-\alpha_{\lambda_{cb}})\left\{\hat{\lambda}_e(i,k) + E\!\left[|D(i,k)|^2 \mid Y(i,k)\right]\right\}$$
(11)

where λ̂_e(i,k) is derived as in (8).
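The long-term smoothing of (11) amounts to a one-line recursion. In this sketch, ALPHA_CB is an assumed value, since the paper does not specify the smoothing constant α_λcb.

```python
ALPHA_CB = 0.9  # assumed long-term smoothing constant (alpha_lambda_cb)

def update_combined_power(lam_cb_prev, lam_e, noise_power):
    """Long-term smoothed combined echo+noise power, Eq. (11).

    lam_e       : echo power estimate from Eq. (8)
    noise_power : E[|D(i,k)|^2 | Y(i,k)], estimated during noise-only periods
    """
    return ALPHA_CB * lam_cb_prev + (1.0 - ALPHA_CB) * (lam_e + noise_power)
```

Setting noise_power to zero recovers the pure AES behavior, while a near-zero lam_e recovers a conventional NR update, mirroring the limiting cases discussed below.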

Notice that if E[|D(i,k)|²|Y(i,k)] ≈ 0, (11) reduces to the original AES algorithm of [10], while (11) reduces to a conventional NR algorithm when λ̂_e(i,k) is nearly zero. The noise power estimate E[|D(i,k)|²|Y(i,k)] is obtained during noise-only periods, which are detected by a voice activity detection (VAD) algorithm similar to that of the IS-127 noise reduction algorithm, known to give robust performance under various noise conditions [11]. For this reason, we avoid the disturbed noise power estimate incurred by the AES algorithm. Note that, since both the echo e(t) and the near-end speech s(t) act as dominant speech-like signals, an additional VAD is needed at the near-end to detect the noise-only periods. In addition, the proposed integrated algorithm is further improved by estimating distinct values of q in (4) for different frames and frequency bins, i.e., q(i,k), which can be tracked in time [12]. Therefore, the proposed algorithm employs a decision rule to decide whether the near-end speech signal is present in the k-th bin, as given by
$$q(i,k) = \alpha_q\,q(i-1,k) + (1-\alpha_q)\,I(i,k)$$
(12)

in which the smoothing parameter α_q is set to 0.3 and I(i,k) denotes an indicator function for the result in (6), that is, I(i,k) = 1 if η(i,k) > η_th and I(i,k) = 0 otherwise. The value of q(i,k) can thus be easily updated using η(i,k) through the test
$$\eta(i,k)\ \underset{\hat{H}_0}{\overset{\hat{H}_1}{\gtrless}}\ \eta_{th}$$
where the threshold η_th is set to 5.0 considering the desired significance level.
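The per-bin tracking of q(i,k) in (12) can be sketched directly, using the α_q and η_th values stated in the text:

```python
ALPHA_Q = 0.3  # smoothing parameter, as given in the text
ETA_TH = 5.0   # decision threshold, as given in the text

def update_q(q_prev, eta):
    """Per-bin update of the prior ratio q(i,k), Eq. (12)."""
    indicator = 1.0 if eta > ETA_TH else 0.0  # I(i,k): near-end speech declared
    return ALPHA_Q * q_prev + (1.0 - ALPHA_Q) * indicator
```

Because α_q is small (0.3), q(i,k) reacts quickly to changes in the per-bin speech-presence decision while retaining some memory of past frames.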

Finally, the estimated near-end speech Ŝ(i, k) for the echo and noise to be suppressed can be expressed as
$$\hat{S}(i,k) = \left[1 - p(H_0\mid Y(i,k))\right]G(i,k)\,Y(i,k) = \tilde{G}(i,k)\,Y(i,k)$$
(13)
where p(H_0|Y(i,k)), G(i,k), and G̃(i,k) are the NSAP in (4), the suppression gain, and the overall suppression gain of the integrated system, respectively. Here, G(i,k) for each frequency band is derived from the Wiener filter such that
$$G(i,k) = \frac{\hat{\xi}(i,k)}{1+\hat{\xi}(i,k)}$$
(14)

Notice that G̃(i,k) provides a better echo and noise suppression rule: through the factor (1 - p(H_0|Y(i,k))), it applies higher attenuation to segments consisting of echo or noise (or both) alone while preserving the quality of the near-end speech.
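The full per-frame gain chain of Eqs. (4), (5), (13), and (14) can be sketched as one vectorized function; this is an illustrative composition of the equations above, not the authors' reference implementation:

```python
import numpy as np

def integrated_gain(y_spec, xi, lambda_cb, q):
    """Overall suppression gain G_tilde(i,k) from Eqs. (4), (5), (13), (14).

    y_spec    : microphone DFT coefficients Y(i,k)
    xi        : a priori SCR estimate (Eq. 7)
    lambda_cb : combined echo+noise power (Eq. 11)
    q         : per-bin prior ratio p(H1)/p(H0) (Eq. 12)
    """
    gamma = np.abs(y_spec) ** 2 / lambda_cb                # a posteriori SCR
    lam = np.exp(gamma * xi / (1.0 + xi)) / (1.0 + xi)     # likelihood ratio, Eq. (5)
    p_h0 = 1.0 / (1.0 + q * lam)                           # NSAP, Eq. (4)
    wiener = xi / (1.0 + xi)                               # Wiener gain, Eq. (14)
    return (1.0 - p_h0) * wiener                           # G_tilde, Eq. (13)

# The enhanced spectrum is then S_hat = integrated_gain(...) * y_spec, per Eq. (13).
```

The gain stays in (0, 1): a high NSAP (echo or noise dominating the bin) drives the attenuation beyond the plain Wiener gain, which is the intended behavior of G̃(i,k).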

3 Experiments and results

In order to compare the performance of the proposed integrated algorithm with that of the conventional methods, we conducted a quantitative comparison and a subjective quality test under various noise conditions. Twenty test phrases, spoken by seven speakers and sampled at 8 kHz, were used as the experimental data. For assessing the performance of the proposed method, we artificially created 20 data files, where each file was obtained by mixing the far-end signal with the near-end signal. Each frame of the windowed signal was transformed into its corresponding spectrum through a 128-point DFT after zero padding. We then formed 16 frequency sub-bands covering the full frequency range (~4 kHz) of the narrowband speech signal, analogous to the IS-127 noise suppression algorithm [11]. The far-end speech signal was convolved with a filter simulating the acoustic echo path before being mixed [13, 14]. The simulation environment was designed to fit a small office room of size 5 × 4 × 3 m. The simulated acoustic impulse response was 1,400 taps long with a reverberation time of T60 = 0.14 s. The echo level measured at the input microphone was, on average, 3.5 dB lower than that of the input near-end speech. In order to create noisy conditions, white, babble, and vehicular noises from the NOISEX-92 database were added to the clean near-end speech signals at signal-to-noise ratios (SNRs) of 5, 10, 15, and 20 dB. For an objective comparison, we evaluated the performance of the proposed scheme and that of the conventional integrated algorithms in terms of echo return loss enhancement (ERLE) and speech attenuation (SA), which are defined in [13].
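A commonly used form of the ERLE metric can be sketched as follows. Note this is an assumption about the metric's shape; the exact ERLE and SA definitions used in the experiments are those of [13].

```python
import numpy as np

def erle_db(mic_segment, output_segment):
    """ERLE in dB over an echo-only segment.

    A common form: ERLE = 10 log10( E[y^2] / E[s_hat^2] ), evaluated on
    frames where only the far-end echo (and noise) is active, so that any
    residual in the output is attributable to unsuppressed echo.
    """
    y = np.asarray(mic_segment, dtype=float)
    s = np.asarray(output_segment, dtype=float)
    return 10.0 * np.log10(np.mean(y ** 2) / np.mean(s ** 2))
```

Higher ERLE indicates stronger echo suppression; SA is measured analogously on double-talk segments to quantify unwanted attenuation of the near-end speech.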

For comparison, we also evaluated the conventional acoustic echo and noise suppression algorithm by Gustafsson et al. [3],a a serial algorithm based on a time-domain AEC and an additional noise and residual echo reduction filter. We further included another integrated system in which the NR algorithm, namely IS-127 noise suppression [11], is followed by the AEC with the post-filter as in [15]. For the AEC, a normalized least mean square (NLMS) adaptive filter with L = 128 filter taps was used, matching the DFT size (i.e., 128) of our AES approach in terms of computational complexity. Overall results for the aforementioned 20 data files in the given noise environments are shown in Figure 2. ERLE and SA scores were averaged over the three noise types to yield final mean scores. From Figure 2a, it is evident that in most noisy conditions the proposed integrated algorithm based on soft decision yielded a higher ERLE than the conventional techniques, which means that the proposed method effectively suppresses both the acoustic echo and the noise. The SAs of the proposed method during double-talk periods are shown in Figure 2b, where the SAs of the proposed scheme were better than those of the methods by Gustafsson et al. and Turbin et al. in all tested conditions, indicating that the proposed algorithm preserves the near-end speech well during double-talk. Speech spectrograms are presented in Figure 3. In Figure 3e, produced by the proposed method, the residual echo and background noise are reduced further than with the conventional techniques (Figure 3c and 3d) during active far-end speech and noise periods, while the near-end speech is preserved quite well.
In addition, Figure 4 illustrates speech waveforms produced by the proposed algorithm. Examining the double-talk periods carefully, it can be seen that an enhanced output signal is successfully obtained even during double-talk.
Figure 2

Performance of integrated algorithms. (a) ERLE scores. (b) Speech attenuation during double-talk.

Figure 3

Speech spectrograms (white noise, SNR = 15 dB). (a) Microphone input signal with the noise and echo. (b) Clean near-end speech. (c) Output signal obtained by IS-127+Turbin et al. (d) Output signal obtained by Gustafsson et al. (e) Output signal obtained by the proposed method.

Figure 4

Speech waveforms (white noise, SNR = 15 dB). (a) Microphone input signal with the noise and echo. (b) Clean near-end speech. (c) Output signal obtained by the proposed method.

Finally, in order to evaluate the subjective quality of the proposed algorithm in terms of the distortion of the near-end speech and the residual echo, we carried out a set of informal listening tests. Opinion scores were recorded by eleven listeners (6 men and 5 women, aged 20 to 35), and all scores were averaged to yield final mean opinion score (MOS) results. Eight of the listeners were students specializing in signal processing, while the others were not specialists. Ten test phrases, five spoken by a male speaker and five by a female speaker, were used as the experimental data. Each phrase consisted of two different meaningful sentences and lasted 8 s, as suggested in [16].

Table 1 shows that the proposed approach outperformed, or was at least comparable to, the conventional methods in terms of overall subjective quality under the given noise conditions. In addition, we separately assessed the noise reduction performance, one of the major goals of this work, using the ITU-T P.835 [16] subjective quality test with the background noise rating scale (5: not noticeable, 4: slightly noticeable, 3: noticeable but not intrusive, 2: somewhat intrusive, 1: very intrusive), administered in a similar manner to the MOS test. As Table 2 shows, a performance improvement was found in all cases at all SNRs. These results confirm that the proposed integrated system is effective in suppressing the background noise.
Table 1

Comparison of MOS results (with 95% confidence interval)

| Noise   | SNR (dB) | IS-127+Turbin et al. | Gustafsson et al. | Proposed    |
|---------|----------|----------------------|-------------------|-------------|
| White   | 5        | 1.10 ± 0.14          | 1.35 ± 0.23       | 1.50 ± 0.36 |
| White   | 10       | 1.45 ± 0.24          | 1.90 ± 0.40       | 2.40 ± 0.47 |
| White   | 15       | 1.95 ± 0.39          | 2.70 ± 0.38       | 2.75 ± 0.43 |
| White   | 20       | 1.85 ± 0.38          | 2.80 ± 0.39       | 3.10 ± 0.50 |
| Babble  | 5        | 1.20 ± 0.24          | 1.15 ± 0.17       | 1.35 ± 0.27 |
| Babble  | 10       | 1.40 ± 0.24          | 1.45 ± 0.28       | 1.50 ± 0.24 |
| Babble  | 15       | 1.55 ± 0.24          | 2.10 ± 0.30       | 2.10 ± 0.30 |
| Babble  | 20       | 2.25 ± 0.30          | 2.40 ± 0.28       | 2.45 ± 0.24 |
| Vehicle | 5        | 2.15 ± 0.31          | 3.10 ± 0.40       | 3.25 ± 0.48 |
| Vehicle | 10       | 2.25 ± 0.21          | 3.20 ± 0.24       | 3.40 ± 0.35 |
| Vehicle | 15       | 2.35 ± 0.27          | 3.20 ± 0.24       | 3.25 ± 0.30 |
| Vehicle | 20       | 2.45 ± 0.36          | 3.40 ± 0.38       | 3.50 ± 0.39 |

Table 2

Comparison of noise rating scale results (with 95% confidence interval)

| Noise   | SNR (dB) | IS-127+Turbin et al. | Gustafsson et al. | Proposed    |
|---------|----------|----------------------|-------------------|-------------|
| White   | 5        | 1.40 ± 0.24          | 1.65 ± 0.35       | 2.55 ± 0.36 |
| White   | 10       | 1.45 ± 0.24          | 2.20 ± 0.62       | 2.60 ± 0.53 |
| White   | 15       | 1.85 ± 0.51          | 2.75 ± 0.30       | 2.80 ± 0.54 |
| White   | 20       | 2.35 ± 0.51          | 3.20 ± 0.45       | 3.20 ± 0.39 |
| Babble  | 5        | 1.20 ± 0.24          | 1.20 ± 0.19       | 1.30 ± 0.22 |
| Babble  | 10       | 1.60 ± 0.38          | 1.65 ± 0.41       | 1.75 ± 0.37 |
| Babble  | 15       | 1.95 ± 0.39          | 1.90 ± 0.37       | 2.15 ± 0.46 |
| Babble  | 20       | 2.30 ± 0.40          | 2.45 ± 0.32       | 2.45 ± 0.32 |
| Vehicle | 5        | 1.95 ± 0.32          | 3.20 ± 0.29       | 3.60 ± 0.53 |
| Vehicle | 10       | 2.10 ± 0.26          | 3.35 ± 0.27       | 3.55 ± 0.28 |
| Vehicle | 15       | 2.00 ± 0.30          | 3.40 ± 0.28       | 3.45 ± 0.28 |
| Vehicle | 20       | 2.10 ± 0.40          | 3.40 ± 0.32       | 3.60 ± 0.35 |

4 Conclusions

In this paper, we have proposed a novel integrated suppression algorithm based on soft decision using the combined power of the estimated echo and noise. The principal contribution of this study is that the proposed method can efficiently suppress the acoustic echo and noise through a soft-decision-based suppression gain without the help of an additional residual echo and noise suppressor. The performance of the proposed algorithm has been found to be superior to that of the conventional techniques. Future work may include other statistical models for characterizing the input signals, such as the Laplacian and gamma distributions as in [17], even though the Gaussian model leads to more tractable mathematics.

Endnotes

a. For [3], we set T_n to 0.05, where T_n denotes a minimum threshold.

Declarations

Acknowledgements

This work was supported by the IT R&D program of MKE/KEIT [2009-S-036-01, Development of New Virtual Machine Specification and Technology], by the National Research Foundation of Korea (NRF) grant funded by the Korean Government (MEST) (NRF-2011-0009182), and by the research fund of Hanyang University (HY-2011-201100000000210).

Authors’ Affiliations

(1)
School of Electronic Engineering, Inha University, Incheon, 402-751, Korea
(2)
School of Electronic Engineering, Hanyang University, Seoul, 133-791, Korea

References

  1. Puder H, Dreiseitel P: Implementation of a hands-free car phone with echo cancellation and noise-dependent loss control. Proc IEEE Int Conf Acoust Speech Signal Process 2000, 6:3622-3625.
  2. Dreiseitel P, Hänsler E, Puder H: Acoustic echo and noise control--a long lasting challenge. Proc EUSIPCO 1998, 945-952.
  3. Gustafsson S, Martin R, Vary P: Combined acoustic echo control and noise reduction for hands-free telephony. Signal Process 1998, 64(1):21-32.
  4. Park SJ, Cho CG, Lee C, Youn DH: Integrated echo and noise canceler for hands-free applications. IEEE Trans Circuits Syst II 2002, 49(3):186-195.
  5. Guelou Y, Benamar A, Scalart P: Analysis of two structures for combined acoustic echo cancellation and noise reduction. Proc IEEE Int Conf Acoust Speech Signal Process 1996, 2:637-640.
  6. Gustafsson S, Martin R, Jax P, Vary P: A psychoacoustic approach to combined acoustic echo cancellation and noise reduction. IEEE Trans Speech Audio Process 2002, 10(5):245-256.
  7. Habets E, Cohen I, Gannot S: MMSE log-spectral amplitude estimator for multiple interferences. Proc Int Workshop Acoust Echo Noise Control (IWAENC'06), Paris, France; 2006.
  8. Habets E, Gannot S, Cohen I, Sommen P: Joint dereverberation and residual echo suppression of speech signals in noisy environments. IEEE Trans Audio Speech Lang Process 2008, 16(8):1433-1451.
  9. Faller C, Tournery C: Estimating the delay and coloration effect of the acoustic echo path for low complexity echo suppression. Proc Int Workshop Acoust Echo Noise Control (IWAENC) 2005, 53-56.
  10. Park YS, Chang JH: Frequency domain acoustic echo suppression based on soft decision. IEEE Signal Process Lett 2009, 16(1):53-56.
  11. TIA/EIA/IS-127: Enhanced variable rate codec, speech service option 3 for wideband spread spectrum digital systems. 1996.
  12. Malah D, Cox R, Accardi A: Tracking speech-presence uncertainty to improve speech enhancement in non-stationary noise environments. Proc IEEE Int Conf Acoust Speech Signal Process 1999, 789-792.
  13. Lee SY, Kim NS: A statistical model based residual echo suppression. IEEE Signal Process Lett 2007, 14(10):758-761.
  14. McGovern S: A Model for Room Acoustics. 2003 [Online]. [http://sgm-audio.com/research/rir/rir.html]
  15. Turbin V, Gilloire A, Scalart P: Comparison of three post-filtering algorithms for residual acoustic echo reduction. Proc IEEE Int Conf Acoust Speech Signal Process 1997, 307-310.
  16. ITU-T Recommendation P.835: Subjective test methodology for evaluating speech communication systems that include noise suppression algorithm. 2003.
  17. Chang JH, Gazor S, Kim NS, Mitra SK: Voice activity detection based on multiple statistical models. IEEE Trans Signal Process 2006, 54(6):1965-1976.

Copyright

© Park and Chang; licensee Springer. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
