EURASIP Journal on Applied Signal Processing 2003:11, 1125–1134
© 2003 Hindawi Publishing Corporation

A Two-Sensor Noise Reduction System: Applications for Hands-Free Car Kit

This paper presents a two-microphone speech enhancer designed to remove noise in hands-free car kits. The algorithm, based on the magnitude squared coherence, uses speech correlation and noise decorrelation to separate speech from noise. The remaining correlated noise is reduced using cross-spectral subtraction. Particular attention is paid to the estimation of the different spectral densities (noise and noisy-signal power spectral densities), which are critical for the quality of the algorithm. We also propose a continuous noise estimation, avoiding the need for a voice activity detector. Results on recorded signals are provided, showing the superiority of the two-sensor approach over single-microphone techniques.


1. INTRODUCTION
Hands-free communication has undergone huge developments in the past two decades. This technology is considered to add value in terms of comfort and security for the users. Unfortunately, it is characterized by strong disturbances, namely echo and ambient noise, which lead to unacceptable communication conditions for the far-end user. In highly adverse conditions, such as the interior of a running automobile (which is under consideration in this paper), the ambient noise (mainly due to the engine, the contact between the tires and the road, and the blowing wind) may be even more powerful than the speech and thus has to be reduced.
Since the 1970s, noise reduction has mainly relied on a one-microphone structure, with or without hypotheses on the noise/speech distribution [1,2,3]. These techniques, which are only based on signal-to-noise ratio (SNR) estimation, use the speech intermittence and noise stationarity hypotheses. These algorithms, and especially spectral subtraction thanks to its low computational load, have been investigated with success. Nevertheless, they lead to a compromise between residual noise and speech distortion, especially in the presence of highly energetic noise.
The presence of additional microphones should increase performance, allowing spatial characteristics to be taken into account and the system to get (partially) rid of some hypotheses such as noise stationarity. On the other hand, the performance of these algorithms depends strongly on the speech and noise characteristics.
Microphone array techniques, based on beamformer algorithms like the generalized sidelobe canceller (GSC) or the superdirective beamformer, have been developed for car noise reduction. These approaches have proven effective at enhancing the SNR without introducing the distortion caused by time-varying filtering (as spectral subtraction does, for instance). Nevertheless, the achievable amount of noise reduction is limited by the noise decorrelation. Thus, additional postfiltering is added to cope with the decorrelated components: in [4,5], the beamformer is combined with a Wiener filter in order to remove decorrelated noise. Under a more realistic hypothesis, car noise is considered as diffuse, thus presenting a strong correlation in the lower frequencies. Some authors proposed using spectral subtraction in the lower-frequency bands rather than the Wiener filter [6,7,8], or modifying the Wiener filter estimation using a priori knowledge of the noise spatial statistics [9].

Figure 1: Two-sensor noise reduction system.

In the GSM context, a two-microphone system, unlike a microphone array, is considered acceptable in terms of cost and ease of installation. The previously described array techniques may be restricted to two-sensor configurations at the expense of reduced performance due to the limited number of microphones. Thus, algorithms specifically dedicated to two-microphone systems have been developed, whose behaviour also depends on the signal characteristics. Adaptive noise cancellation has been proposed by Van Compernolle [10], adapted to one point-shaped noise source and linear convolutive mixtures (each microphone picks up noise and speech). A noise reference is formed by a linear combination of the two microphone signals and is then used to remove noise by Wiener filtering. This scheme has recently been adapted to hearing aids with closely spaced microphones [11,12]. This signal configuration (point-shaped noise sources) is also perfectly suited to source separation under the constraint of fewer sources than sensors [13]. Unfortunately, the speech enhancer usually has to cope with the cocktail-party effect (many disturbances from point-shaped sources) and with diffuse noises, which are poorly removed by the previous approach. Maj et al. [14] proposed using the generalized singular value decomposition (GSVD) to estimate the Wiener filter. Unlike beamformers, this technique is able to remove coherent noise as well as diffuse noise. Though this algorithm provides interesting performance, its huge computational load is not compatible with real-time implementation. To reduce the complexity, a subband implementation has been investigated, leading to a more acceptable, though still relatively large, computational load [15].
These contributions globally show the advantage of multisensor techniques over monosensor ones. They also demonstrate the difficulty of coping with the real characteristics of the signals. This paper, which is concerned with a two-sensor noise reduction algorithm, is organized as follows. In the second section, we describe the noise and speech signal characteristics. These characteristics then lead to the third section, which discusses a filtering expression based on the coherence function and noise cross-correlation subtraction. We particularly focus on the estimation of the observed signals' power spectral densities (psd) as well as those of the noises.
Finally, in Section 4, the algorithm is evaluated on real signals and compared to other techniques through objective performance measures.

2. SPATIAL SIGNAL CHARACTERISTICS
With two microphones, the main question becomes: where should the microphones be placed inside the car? Indeed, as stated in the introduction, the investigated technique depends on their relative positions. Obviously, speech has to be picked up as directly as possible to improve the SNR. The position of the second microphone is closely tied to the noise and speech signal characteristics.
As depicted in Figure 1, we denote by n_1(k) (resp., n_2(k)) the noise, by s_1(k) (resp., s_2(k)) the speech signal, and by x_1(k) = n_1(k) + s_1(k) (resp., x_2(k)) the noisy signal picked up at the first microphone (resp., at the second microphone). The short-time Fourier transforms (STFT) are denoted by capitals and indexed by p, the frame number, and f, the frequency (e.g., N_1(f,p) for the STFT of n_1(k) at frame p and frequency f). The quantity G(f,p) represents the filtering gain applied to one of the noisy signals in order to remove noise. This gain can be computed according to the spectral subtraction filter, the Ephraim and Malah [3] filter, the coherence, and so forth.
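For illustration, a minimal Python sketch of the analysis-filtering-synthesis chain of Figure 1 is given below. The 8 kHz sampling rate, 256-sample frames, and 75% overlap are the values quoted in Section 3.1; the Hann window, the normalisation, and the gain stub compute_gain are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

FS = 8000            # sampling frequency (Hz), for reference
FRAME = 256          # frame length (samples)
HOP = FRAME // 4     # 75% overlap
WIN = np.hanning(FRAME)

def enhance(x1, x2, compute_gain):
    """Frame-based chain of Figure 1: STFT of both channels, application of an
    attenuation law G(f, p) to channel 1, inverse FFT, and overlap-add (OLA).

    `compute_gain(X1, X2, p)` returns a real per-bin gain; it stands for any
    of the laws mentioned in the text (spectral subtraction, Ephraim-Malah,
    coherence, cross-spectral subtraction, ...)."""
    n_frames = (len(x1) - FRAME) // HOP + 1
    out = np.zeros(len(x1))
    wsum = np.zeros(len(x1))
    for p in range(n_frames):
        sl = slice(p * HOP, p * HOP + FRAME)
        X1 = np.fft.rfft(WIN * x1[sl])
        X2 = np.fft.rfft(WIN * x2[sl])
        G = compute_gain(X1, X2, p)             # attenuation law G(f, p)
        s_hat = np.fft.irfft(G * X1, FRAME)     # filtered frame
        out[sl] += WIN * s_hat                  # weighted overlap-add
        wsum[sl] += WIN ** 2
    return out / np.maximum(wsum, 1e-12)
```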
The psd of the noise, speech, and noisy signals are denoted by γ_ni(f), γ_si(f), and γ_xi(f) on the ith channel (i = 1, 2), while γ_x1x2(f) is the observations' cross-power spectral density (cross-psd). The coherence and the magnitude squared coherence (MSC) between the two signals x_1 and x_2 are given respectively by

C_x1x2(f) = γ_x1x2(f) / √(γ_x1(f) γ_x2(f)),   MSC_x1x2(f) = |C_x1x2(f)|² = |γ_x1x2(f)|² / (γ_x1(f) γ_x2(f)).   (1)

In a car environment, the signal characteristics are as follows.
(1) Noise is mainly composed of three independent components: the engine, the contact between the tires and the road, and the wind fluctuations. Their relative importance depends on the car, the road (more or less granular), and the car speed [16]. All these noises can be roughly considered as diffuse. It is well known that the coherence magnitude of diffuse signals is a cardinal sine modulus function of frequency [17] (the expression is recalled after this list). This is confirmed by Figure 2, which depicts the MSC of noises corresponding to a car travelling at 130 km/h, with either an open or a closed driver window. The microphone distance is 80 cm. The MSC profiles show strong correlation in the low part of the spectrum (as predicted by the theory) and decorrelation in the high frequencies.
Note that the difference between the theoretical and real "cut-off" frequencies is due to the noises being only partially diffuse and also to the microphone characteristics. While the microphones are assumed to be omnidirectional in the theory, they are cardioid in our application. (2) Speech distribution: speech signals are emitted from a point source. Moreover, the small cockpit size and the interior trim induce very little reverberation. Thus, speech signals picked up at different places are highly correlated. A perfect speech correlation is assumed in what follows.
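For reference, the cardinal sine behaviour mentioned in point (1) is the textbook result for an ideal diffuse (spherically isotropic) field picked up by two omnidirectional sensors at distance d, with c the speed of sound; the numerical value below assumes c ≈ 340 m/s and d = 0.8 m.

```latex
% Coherence of an ideal diffuse noise field, omnidirectional sensors, spacing d:
\[
\Gamma_{n_1 n_2}(f) = \operatorname{sinc}\!\left(\frac{2\pi f d}{c}\right)
                    = \frac{\sin\!\left(2\pi f d / c\right)}{2\pi f d / c},
\qquad
\mathrm{MSC}(f) = \Gamma_{n_1 n_2}^{2}(f).
\]
% First zero: 2\pi f d / c = \pi, i.e. f = c/(2d) \approx 340/(2 \times 0.8) \approx 210~\mathrm{Hz}.
```

The first zero, f = c/(2d) ≈ 210 Hz for an 80 cm spacing, is the decorrelation frequency used in the next section.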

3. TWO-SENSOR ALGORITHM
We first note that it is impossible to create a noise-only reference in the interior of a car. Indeed, speech is strongly reflected by the interior car surfaces and is therefore picked up by both microphones wherever they are placed. The main idea is to use the decorrelation of the noises when the microphones are sufficiently spaced. With 80 cm-spaced microphones and under the diffuse hypothesis, the noises are decorrelated for frequencies above f = 210 Hz, that is, above the first minimum of the theoretical MSC function. The lower spectrum, which contains correlated noise, is removed by a bandpass filter in order to respect the telephony requirements (300-3400 Hz). Then, the coherence function is a perfect candidate to operate the filtering of the decorrelated signals, and the proposed algorithm is based on it. Indeed, we can show that, under certain hypotheses, the coherence may be equal to the Wiener filter [18]. Hence, applying the coherence as a filter to either noisy signal removes the decorrelated components, that is, the noise.
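One way to see this connection, under the idealized hypotheses of Section 2, is sketched below (a short version of the argument, not the derivation of [18]).

```latex
% Hypotheses: s_1 = s_2 = s (perfectly correlated speech), noises mutually
% uncorrelated and uncorrelated with speech, with equal psd \gamma_n.
\[
\gamma_{x_1 x_2}(f) = \gamma_{s}(f),
\qquad
\gamma_{x_1}(f) = \gamma_{x_2}(f) = \gamma_{s}(f) + \gamma_{n}(f)
\quad\Longrightarrow\quad
\bigl|C_{x_1 x_2}(f)\bigr| = \frac{\gamma_{s}(f)}{\gamma_{s}(f) + \gamma_{n}(f)}.
\]
```

The right-hand side is exactly the Wiener gain, so applying the coherence magnitude as a gain suppresses the uncorrelated (noise) part of the observations.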
Coherence has been widely used in dereverberation techniques. In the car environment, it has been used successfully, but with some modifications to cope with the low-frequency noise correlation (see [4,6,7]). Indeed, in these frequency bands, noises usually exhibit nonnull correlation. Akbari Azirani et al. [18] proposed to estimate the noise cross-psd during noise-only periods and to remove it from the observations' cross-psd during speech activity. The present system is based on this technique, named "cross-spectral subtraction." The zero-phase filter H_css(f,p) is given by the following expression:

H_css(f,p) = ( |γ_x1x2(f,p)| − |γ_n1n2(f)| ) / √(γ_x1(f,p) γ_x2(f,p)),   (2)

where γ_n1n2(f) is the noise cross-psd. The computation of the filter H_css requires the estimation of the different psd and cross-psd quantities, which is a key point for the filtering quality. Concerning spectral subtraction, for instance, many techniques have been developed to remove the well-known problem of musical noise (see [1,19,20]). In the MMSE-STSA technique developed by Ephraim and Malah [3], it has been shown that the "decision-directed" approach proposed by the authors to estimate the a priori and a posteriori SNR allows musical noise to be more efficiently controlled [21]. This estimator is still widely used (see, e.g., [18,22]).
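As a minimal illustration of (2), the sketch below computes the cross-spectral subtraction gain from per-bin psd and cross-psd estimates for one frame. The flooring and clipping of the gain are implementation assumptions, not specified in the text.

```python
import numpy as np

def css_gain(gamma_x1x2, gamma_x1, gamma_x2, gamma_n1n2_mag, floor=0.0):
    """Zero-phase cross-spectral subtraction gain H_css(f, p) of (2).

    gamma_x1x2     : complex cross-psd estimate of the observations (per bin)
    gamma_x1/x2    : psd estimates of the two observations (per bin)
    gamma_n1n2_mag : magnitude of the noise cross-psd estimate, or its
                     overestimate (see Section 3.2)
    """
    num = np.abs(gamma_x1x2) - gamma_n1n2_mag
    den = np.sqrt(gamma_x1 * gamma_x2)
    return np.clip(num / np.maximum(den, 1e-12), floor, 1.0)
```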
The psd and cross-psd estimation is described in this section. First, we show that the estimation of the observation psd and cross-psd, γ_x1(f), γ_x2(f), and γ_x1x2(f), should be closely tied to the signal characteristics, that is, it should respect the long-term noise stationarity and the short-term speech stationarity. This aspect is described in Section 3.1, and the noise cross-psd estimation is considered in Section 3.2. We focus on the noise overestimation and its online estimation, which avoids voice activity detection (VAD).

3.1. Power spectral densities estimation
The noisy-signal psd γ_xi(f,p) and cross-psd γ_x1x2(f,p) are estimated using a recursive filtering:

γ̂_xi(f,p) = λ γ̂_xi(f,p−1) + (1 − λ) |X_i(f,p)|²,  i = 1, 2,
γ̂_x1x2(f,p) = λ γ̂_x1x2(f,p−1) + (1 − λ) X_1(f,p) X_2*(f,p),   (3)

where λ is a forgetting factor usually close to 1. The parameter λ has to cope with two contradictory constraints. On the one hand, the estimation has to respect the short-term speech stationarity, and consequently λ should take low values; experience shows that for an 8-kHz sampling frequency with 256-sample frames and a 75% overlap, values of λ around 0.6-0.7 are the upper limit. On the other hand, λ has to favor long-term estimation to reduce the estimator variance. The MSC behaviour at 1 kHz is depicted in Figure 3 for two values of λ. The noisy signals used for the MSC computation are composed of correlated speech and decorrelated noise, whose psd at 1 kHz, computed for each frame, are displayed in the top part of the figure. For λ = 0.6, the MSC follows the speech variations, but the estimator variance is high during noise periods. These fluctuations lead to strong filter variations, and musical noise appears. On the contrary, the variance is highly reduced for λ = 0.9, but the long-term forgetting factor induces an important reverberant effect, especially during speech periods. Thus, λ has to take small values during speech presence and high values during noise-only periods. To cope with these constraints, we propose the law

λ(f,p) = λ_max − (λ_max − λ_min) · SNR(f,p) / (1 + SNR(f,p)),   (4)

where SNR(f,p) is the SNR at the first microphone, and λ_max and λ_min are the values desired during noise-only periods and during speech activity, respectively. Since the Wiener gain equals SNR/(1 + SNR), we approximate the ratio SNR(f,p)/(1 + SNR(f,p)) by the previous frame-gain value H_css(f,p−1), assuming that the SNR does not vary too quickly from one frame to another:

SNR(f,p) / (1 + SNR(f,p)) ≈ H_css(f,p−1).   (5)

This leads to the following adaptive expression of the forgetting factor λ:

λ(f,p) = λ_max − (λ_max − λ_min) H_css(f,p−1).   (6)

The ratio may also be estimated by direct computation of the SNR. Nevertheless, it should not exhibit quick, large variations, so as to avoid rapid fluctuations of λ(f,p) and thus limit musical noise. Simulations which we conducted show that we can also use the a priori SNR given by the decision-directed approach [3] (with a high time constant). On the contrary, the a posteriori SNR produces overly rapid changes [23].
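The recursive estimation with the adaptive forgetting factor can be sketched as follows. The bounds lam_noise ≈ 0.9 and lam_speech ≈ 0.6-0.7 come from the values quoted above; the use of the previous gain as a proxy for SNR/(1 + SNR) follows (5)-(6), but the numerical settings and the clipping are assumptions for illustration.

```python
import numpy as np

class PsdTracker:
    """Recursive psd/cross-psd estimation driven by the previous frame gain."""

    def __init__(self, n_bins, lam_speech=0.65, lam_noise=0.9):
        self.lam_s = lam_speech          # short-term tracking during speech
        self.lam_n = lam_noise           # long-term smoothing during noise
        self.g_x1 = np.zeros(n_bins)
        self.g_x2 = np.zeros(n_bins)
        self.g_x1x2 = np.zeros(n_bins, dtype=complex)

    def update(self, X1, X2, prev_gain):
        # prev_gain = H_css(f, p-1), used as a proxy for SNR/(1+SNR):
        # lambda close to lam_noise when the gain is small (noise only),
        # close to lam_speech when the gain is large (speech activity).
        lam = self.lam_n - (self.lam_n - self.lam_s) * np.clip(prev_gain, 0.0, 1.0)
        self.g_x1 = lam * self.g_x1 + (1.0 - lam) * np.abs(X1) ** 2
        self.g_x2 = lam * self.g_x2 + (1.0 - lam) * np.abs(X2) ** 2
        self.g_x1x2 = lam * self.g_x1x2 + (1.0 - lam) * X1 * np.conj(X2)
        return self.g_x1, self.g_x2, self.g_x1x2
```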
The proposed law allows the residual noise to be controlled during noise-only periods. However, during speech activity, the adaptive coefficient λ varies quickly with the speech fluctuations, leading to the appearance of musical noise. Although this noise may be partially masked by the speech components, it is still audible and has to be reduced.

3.2. Noise cross-correlation estimation
The musical noise during speech activity is due to two factors: (1) the long-term estimation of noise cross-psd γ n1n2 ( f ) during noise-only periods, (2) the high variance of noise cross-psd included in the term γ x1x2 ( f ) due to the small forgetting factors.
In addition to its high variance, the short-term estimate |γ_n1n2(f)| also exhibits a mean higher than the long-term one, being more sensitive to instantaneous energy changes (which are less smoothed). Thus, we propose to control musical noise by overestimating the noise cross-psd. First, based on statistical studies, we propose in Section 3.2.1 an overestimation law ensuring the quasi-absence of musical noise. Then, in Section 3.2.2, the noise cross-psd overestimation is achieved with a novel estimator, giving a long-term estimate without any need for a VAD.

3.2.1. Noise overestimation
Noise overestimation usually consists in multiplying the noise estimate by a constant factor α. For the power spectral subtraction technique, studies show that a 9 dB overestimation factor (α = 8) is necessary to remove musical noise [19]; however, this strongly degrades speech. In this section, we propose to evaluate the overestimation necessary for the cross-correlation spectral subtraction technique, ensuring no musical noise for minimal speech distortion.
To estimate this overestimation, we introduce the cumulative distribution function (cdf) of the short-term noise cross-psd magnitude:

F(f,m) = Prob( |γ̂_n1n2(f,p)| ≤ µ(f) + m σ(f) ).   (7)

In (7), µ(f) stands for the modulus of the long-term cross-psd estimate, and σ(f) for the standard deviation of the short-term cross-psd magnitude; the parameter m may take different integer values, m = 1, 2, 3. This cdf roughly indicates the probability that the short-term cross-psd modulus is lower than its long-term estimate plus a positive term depending on its variance. The short-term cross-psd is computed using λ = 0.7. The cdf curves, computed with real signals, are depicted in Figures 4 (closed window) and 5 (open window). In the closed-window condition (Figure 4), 95% of the short-term cross-psd values are included in the confidence interval [0; µ(f) + 2σ(f)]. Note that the profile for m = 1 depends highly on the frequency; for f ≤ 500 Hz, only 80% of the cross-psd values are included in the interval [0; µ(f) + σ(f)]. The explanation is strictly connected to the spatial distribution (diffuse characteristics) but is not straightforward. Nevertheless, we can conclude that, for the closed-window condition, µ + 2σ is a fairly good overestimate of the short-term noise cross-psd. For an open window, the F(f,m) profiles are similar over the whole spectrum, and the interval [0; µ(f) + σ(f)] includes 90% of the short-term cross-psd values: µ + σ is a sufficient overestimation, ensuring that 90% of the frames do not produce musical noise. The constant profile over the frequency range is due to the noncorrelated characteristics of the noise, whatever the frequency.
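The statistics behind Figures 4 and 5 can be estimated from noise-only recordings along the following lines (a sketch; the way µ and σ are averaged here is an assumption consistent with the text, not the authors' exact procedure).

```python
import numpy as np

def overestimation_stats(cross_psd_frames, m_values=(1, 2, 3)):
    """Empirical version of the cdf F(f, m) of (7).

    cross_psd_frames : (n_frames, n_bins) complex array of short-term noise
                       cross-psd estimates (computed with lambda = 0.7 on
                       noise-only data).
    Returns mu(f), sigma(f) and, for each m, the fraction of frames whose
    short-term magnitude stays below mu(f) + m * sigma(f).
    """
    mag = np.abs(cross_psd_frames)
    mu = np.abs(cross_psd_frames.mean(axis=0))   # modulus of long-term estimate
    sigma = mag.std(axis=0)                      # short-term magnitude std
    F = {m: (mag <= mu + m * sigma).mean(axis=0) for m in m_values}
    return mu, sigma, F
```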
To evaluate the overestimation to be applied, the long-term noise cross-psd modulus |γ_n1n2(f)| (dashed bottom line) and the µ(f) + 2σ(f) curve (middle solid line) are depicted in Figure 6 (closed-window condition). For this condition, the necessary overestimation varies from 2 dB for the low frequencies to 6 dB for the high frequencies. We also display the long-term geometric mean of the noise psd, √(γ_n1(f) γ_n2(f)) (top dash-dotted line); this last curve closely follows the µ(f) + 2σ(f) curve. Thus, the long-term estimate √(γ_n1(f) γ_n2(f)) is an accurate overestimate of the short-term cross-psd.
The open-window condition is considered in Figure 7, with the µ(f) + σ(f) curve (instead of µ(f) + 2σ(f) for the closed window), as well as the long-term |γ_n1n2(f)| and √(γ_n1(f) γ_n2(f)) curves. The conclusions are exactly the same. Finally, to limit musical noise, especially during speech periods, we propose to overestimate the noise cross-psd with the geometric mean √(γ_n1(f) γ_n2(f)). It is important to note that this overestimation does not induce too much speech distortion, for the following reasons.
(1) The overestimation is effective for decorrelated noises, that is, especially for high frequencies (see Figures 6 and 7). In this spectrum segment, the SNRs are quite favorable, and the speech components are only slightly affected by this overestimation. (2) In the case of highly correlated noises, that is, for low frequencies, the cross-psd is close to the geometric mean of the psd. Thus, this slight overestimation for closed-window conditions does not lead to speech distortion (see Figure 6), while the musical noise is controlled. In open-window conditions, the overestimation is larger (6 dB) because of the noise decorrelation (see Figure 7); more speech distortion is expected. Experiments on real data show that this overestimation completely removes the musical noise at the cost of a small but acceptable amount of speech distortion.
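The margin quoted above (roughly 2 dB to 6 dB depending on frequency and window condition) can be quantified directly from long-term noise statistics, for instance as in the short sketch below (assuming the geometric-mean overestimate discussed in this subsection).

```python
import numpy as np

def overestimation_margin_db(gamma_n1, gamma_n2, gamma_n1n2):
    """Per-frequency overestimation (dB) introduced by replacing the noise
    cross-psd magnitude |gamma_n1n2(f)| with sqrt(gamma_n1(f) * gamma_n2(f))."""
    geometric_mean = np.sqrt(gamma_n1 * gamma_n2)
    return 10.0 * np.log10(geometric_mean / np.maximum(np.abs(gamma_n1n2), 1e-12))
```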

3.2.2. Continuous noise estimation
Usually, noise psd estimation is performed during noise-only periods and frozen during speech presence. This approach, which is widely used in the literature, needs a robust VAD to ensure the filtering quality. This is especially true for algorithms, like spectral subtraction techniques, that directly use the noise psd estimate to derive the enhanced signal; a small estimation error may lead to musical noise or to large amounts of speech distortion. A robust VAD is less crucial for algorithms using a priori and a posteriori SNR estimation, such as the decision-directed approach [3], since the filter estimate also depends on smoothing coefficients.
The cross-spectral technique is strongly affected by the quality of the noise estimate since the filter H_css(f,p) given by (2) depends directly on the noise cross-psd estimate. Experiments show that the filter needs a regularly updated noise cross-psd to achieve sufficient denoising with acceptable artefacts on speech and noise. In particular, freezing the estimate during a whole sentence is not compatible with the noise stationarity and leads to the emergence of musical noise. Hence, the VAD would have to detect speech pauses or even intersyllabic segments, which may be difficult to achieve with a low-cost stand-alone algorithm. We therefore propose to use a fuzzy law based on energetic considerations; noise is supposed to be a long-term stationary signal, unlike speech. Therefore, a large energy increase between two adjacent frames may be viewed as the presence of speech, whereas small variations or a decrease in energy more likely correspond to noise. We propose using the recursive law (8), adapted from a monosensor algorithm [24], in which the noise estimate of the previous frame is updated at a rate controlled by a factor α(SNR_post); the function α(SNR_post), given by (9), depends on real positive constants b, g, and L, and the modified a posteriori SNR, SNR_post, given by (10), compares the short-term energy of the current observation with the previous noise estimate. In the case of a low SNR_post (that is, when the instantaneous amplitudes of the observations are less energetic than those of the previous noise estimate), (9) becomes α(SNR_post) ≈ L, and the noise estimate slowly decreases. The coefficient b fixes the maximal value reached by α (see Figure 8), while g adjusts this maximum for a given value of SNR_post (see Figure 9). Note that L also has an impact on the maximum (the lower L, the higher the maximum). Usually, g is chosen as g = 1/(1 − b), fixing the accumulation point α(1) = 1; thus, in the case of deterministic noise, the estimator converges towards the true value.
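The sketch below only illustrates the qualitative behaviour described above: an update whose factor equals 1 when the observed energy matches the current noise estimate, slowly decays the estimate on quieter frames, and is bounded above so that speech onsets cannot inflate it. The multiplicative form and the clipped-ratio alpha are assumptions for illustration only; they are not the exact law (8)-(10) of [24] with the constants b, g, and L.

```python
import numpy as np

def continuous_noise_update(gamma_n_prev, X_mag_sq,
                            alpha_min=0.98, alpha_max=1.05):
    """VAD-free noise psd update (illustrative sketch, per channel).

    gamma_n_prev : previous noise psd estimate (per bin)
    X_mag_sq     : short-term power |X_i(f, p)|^2 of the current frame
    The factor equals 1 when the observation matches the estimate (so a
    deterministic noise level is tracked exactly), slowly decays the estimate
    on quieter frames, and is capped so that energetic speech frames raise
    the estimate only marginally.
    """
    snr_post = X_mag_sq / np.maximum(gamma_n_prev, 1e-12)
    alpha = np.clip(snr_post, alpha_min, alpha_max)
    return alpha * gamma_n_prev
```

Applied independently to each channel, such an update provides the long-term estimates of γ_n1(f) and γ_n2(f) whose geometric mean overestimates the noise cross-psd in (2).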

4. SIMULATIONS AND RESULTS
Simulations were conducted on real signals recorded in a driving car. The directional microphones were placed on the left-hand side, against the windshield and close to the rear-view mirror, ensuring a distance of 80 cm between them. Therefore, the noise decorrelation condition is fulfilled (see Figure 2 for the coherence profile). Two different noises were recorded: a quasistationary noise, corresponding to a car driving at 130 km/h, and a highly nonstationary one at the same speed with the driver window open. These two conditions include slow changes in the engine revolution speed caused by accelerations and gear shifting. Artificial files with SNRs ranging from −3 dB to 20 dB were created by adding noise to speech recorded in a quiet environment (stopped car, engine switched off). The proposed algorithm, called modified cross-spectral subtraction, is also denoted by modified H_css. The gain is computed using (2) (as for standard cross-spectral subtraction). The norm of the noise cross-psd, |γ_n1n2(f)|, is overestimated by √(γ_n1(f) γ_n2(f)), which is computed using (8), (9), and (10). The performance of our algorithm is compared to that of two other techniques, which have been proven to be efficient in this type of environment.
(1) A monosensor technique: the Wiener uncertainty algorithm, denoted WU [25]. The filtering part is achieved by the Wiener filter, with a correcting factor depending on the speech presence probability derived by Ephraim and Malah [3]. Note that this algorithm provides continuous SNR estimation using the decision-directed approach. The noise psd is learned during noise-only periods using a manual VAD. (2) A two-microphone algorithm: the cross-spectral subtraction, denoted H_css. An implementation of this filter is given in [18]. For this algorithm, the noise cross-psd is learned during noise-only periods, then frozen during speech activity using the same manual VAD as for the monosensor algorithm. The forgetting factor λ is fixed at 0.7.
In order to compare the performance of the different algorithms, two measures have been evaluated on the processed signals: the cepstral distance (dcep) and the SNR gain, which is given by SNR gain (dB) = SNR after processing (dB) − input SNR (dB).
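For indication, these measures can be computed along the following lines (time-domain SNR definitions and a generic FFT-cepstrum distance; the exact definitions used by the authors, such as the cepstrum order or frame weighting, are not specified in the text and are assumptions here).

```python
import numpy as np

def snr_gain_db(clean, noisy, enhanced):
    """SNR gain (dB) = SNR after processing - input SNR, on one speech frame."""
    snr_in = 10 * np.log10(np.sum(clean ** 2) / max(np.sum((noisy - clean) ** 2), 1e-12))
    snr_out = 10 * np.log10(np.sum(clean ** 2) / max(np.sum((enhanced - clean) ** 2), 1e-12))
    return snr_out - snr_in

def cepstral_distance_db(frame_ref, frame_test, order=12):
    """Truncated FFT-cepstrum distance (dB) between a clean and a processed frame."""
    def cepstrum(x):
        return np.fft.irfft(np.log(np.abs(np.fft.rfft(x)) + 1e-12))[: order + 1]
    d = cepstrum(frame_ref) - cepstrum(frame_test)
    return (10.0 / np.log(10.0)) * np.sqrt(d[0] ** 2 + 2.0 * np.sum(d[1:] ** 2))
```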
The cepstral distance evaluates speech distortion, while the SNR gain shows the noise reduction. These indices are computed on manually segmented speech frames, then averaged over all frames to give a global measure per condition (stationary/nonstationary). Figures 10 and 11 display the results for the quasistationary noise condition. The SNR gain curves (Figure 10) show the improvement due to noise overestimation and permanent updating; the modified H_css performs around 2 to 4 dB better than H_css. The monosensor algorithm achieves lower performance than the modified H_css, especially for low SNR. In terms of distortion (see Figure 11), the novel technique performs much better than the two others. This result may be explained by the use of the adaptive forgetting factor λ(f,p), which prevents overly large smoothing of the psd and cross-psd estimates during speech activity. Note that the monosensor WU algorithm distorts speech much more than the two-microphone techniques, in particular for high SNR. This confirms the superiority of the modified H_css over the WU algorithm for these high SNRs, despite their equivalent scores in terms of noise reduction.
The results concerning nonstationary noises are depicted in Figures 12 and 13. At first glance, it is obvious that the two-microphone methods perform much better than the single-microphone one, in terms of noise reduction as well as speech distortion. This is mainly due to the fact that the two-sensor techniques work particularly well at filtering these decorrelated noises. Moreover, the fast noise variations prevent the WU from estimating the SNR accurately, thus leading to large amounts of speech distortion and residual noise fluctuations. Concerning the two-sensor algorithms, the performances appear quite comparable. The reason is that continuous noise updating does not provide any clear advantage; the noise variations, mainly due to the blowing wind, are too rapid to be followed by the estimator. Nevertheless, it should be pointed out that the noise overestimation does not distort the speech signal more than the standard H_css filter does. Moreover, from a subjective point of view, informal listening tests show that the residual noise sounds more natural with the modified H_css filter; musical noise and noise level fluctuations, which are audible with the standard H_css (and the monosensor technique), are completely removed. Nevertheless, in very low SNR frames, slight additional speech distortion can be noticed, which is in accordance with the expected behavior of our algorithm. Note also that this distortion is hardly audible due to the energetic noise.

5. CONCLUSION
In this paper, we proposed a two-sensor noise reduction algorithm based on cross-spectral subtraction. The improvements mainly concern a noise overestimation rule derived from statistical studies and the estimation of the spectral densities. With these modifications, simulations showed that the proposed algorithm outperforms proven methods in this environment. With highly nonstationary noises, the new technique is intrinsically better than monosensor ones in terms of speech distortion and noise reduction. In stationary noise conditions, the modified filter outperforms the standard cross-spectral subtraction technique, ensuring much more noise reduction (from 2 to 4 dB) with less speech distortion.
From a computational point of view, this technique has a low CPU cost, about three times the complexity of spectral subtraction. This allows real-time implementation in GSM mobile phones (it is, for example, far less CPU consuming than the vocoder). The hardware cost of the two-microphone approach may be limited by using the terminal microphone, reducing the extra cost to one additional microphone, as in most standard hands-free systems.