 Research
 Open Access
DOA-informed source extraction in the presence of competing talkers and background noise
EURASIP Journal on Advances in Signal Processing volume 2017, Article number: 60 (2017)
Abstract
A desired speech signal in hands-free communication systems is often degraded by noise and interfering speech. Even though the number and locations of the interferers are often unknown in practice, it is justified to assume in certain applications that the direction-of-arrival (DOA) of the desired source is approximately known. Using the known DOA, fixed spatial filters such as the delay-and-sum beamformer can be steered to extract the desired source. However, it is well-known that fixed data-independent spatial filters do not provide sufficient reduction of directional interferers. Instead, the DOA information can be used to estimate the statistics of the desired and the undesired signals and to compute optimal data-dependent spatial filters. One way the DOA is exploited for optimal spatial filtering in the literature is by designing DOA-based narrowband detectors to determine whether a desired or an undesired signal is dominant at each time-frequency (TF) bin. Subsequently, the statistics of the desired and the undesired signals can be estimated during the TF bins where the respective signal is dominant. In a similar manner, a Gaussian signal model-based detector which does not incorporate DOA information has been used in scenarios where the undesired signal consists of stationary background noise. However, when the undesired signal is nonstationary, resulting for example from interfering speakers, such a Gaussian signal model-based detector is unable to robustly distinguish desired from undesired speech. To this end, we propose a DOA model-based detector to determine the dominant source at each TF bin and estimate the desired and undesired signal statistics. We demonstrate that data-dependent spatial filters that use the statistics estimated by the proposed framework achieve very good undesired signal reduction, even when using only three microphones.
Introduction
In applications that require hands-free capture of speech, the desired speech signal is often corrupted by background noise and interfering speech signals. Such applications involve human-to-human and human-to-machine communication, where speech enhancement is crucial: in the former, to improve the communication comfort, and in the latter, to ensure a low error rate of speech recognisers. In this work, we address scenarios where the desired speaker has a known DOA with respect to the microphones, such as in-car applications, or voice-controlled devices where the source of interest is restricted to a predefined DOA. Given the DOA of the desired source and assuming anechoic propagation, fixed spatial filters such as the delay-and-sum beamformer (DSB) [1] or superdirective beamformers [2] can be used. However, these filters are suboptimal as they do not consider the spatiotemporal statistics of the signals, and often provide insufficient interference reduction. Moreover, propagation model mismatch due to reverberation and DOA errors further limits the performance. In this work, we focus on optimal data-dependent spatial filters [3, 4]. Two main paradigms can be distinguished which aim at optimal filtering given the source DOA: robust adaptive beamformers (RABs) and informed spatial filters (ISFs). While RABs seek to improve the robustness to errors in the DOAs and the signal propagation vectors, ISFs address the estimation of propagation vectors and signal statistics from the microphone signals, and their usage for optimal spatial filtering.
RAB representatives include Bayesian beamformers [5, 6] and spatial filters with eigenvector constraints [7, 8], which are implemented using linearly constrained minimum variance (LCMV) filters. These filters seek to minimise the undesired signal power at the output, while imposing constraints to ensure that the desired signal from the DOA of interest is undistorted. Another approach is proposed in [9], where the desired signal power spectral density (PSD) matrix is computed by integrating the free-field-based PSD matrices across the region of possible source DOAs. Note that the increased robustness in these approaches often comes at the cost of worse undesired signal reduction. RABs can also be implemented in a generalised sidelobe canceller (GSC) structure, where the robustness to DOA and propagation vector mismatches is ensured by using an adaptive blocking matrix [10, 11], and by imposing constraints on the adaptive noise cancellers [10]. The robust GSCs require a desired signal detector, as the noise cancellers need to be updated when the desired signal is absent, while the blocking matrix needs to be updated when the desired signal is present [12].
ISFs, in contrast to RABs, estimate the desired signal propagation vector and the undesired signal statistics from the data, and substitute them in optimal filter expressions such as the minimum variance distortionless response (MVDR) filter or the multichannel Wiener filter (MWF) [13–16]. As the ISFs are estimated and implemented in the frequency domain, the relevant statistics correspond to the PSD matrices of the desired and the undesired signals at each frequency. The advantage of estimating the propagation vectors from the data, rather than using anechoic propagation models, is well-known since the development of the transfer function GSC [17] and the relative transfer function (RTF) GSC [18]. However, a less often addressed question is how to design the narrowband signal detectors which are required to estimate the propagation vectors and the PSD matrices, or to perform the filter adaptations in the adaptive GSCs. Signal detection in the presence of nonstationary interferers is a very challenging problem [19]. The Gaussian model-based detectors used in state-of-the-art systems [15, 20, 21] assume that the noise is significantly more stationary than speech, which is not true for speech interferers.
The question addressed in this paper is how to design a robust narrowband signal detector, by using the microphone signals, narrowband DOA estimates extracted from the signals, and the information about the desired source DOA. Narrowband DOAs have previously been used for desired speech detection in the literature. For instance, in [22], the authors use narrowband DOAs to control the a priori desired speech presence probability (DSPP) in a Gaussian signal model, while in [23] a Gaussian DOA model is used to compute a DSPP and apply it as a single-channel gain to the output of a spatial filter. We propose a different statistical model for the narrowband DOA estimates, which is used for desired signal detection and for estimation of the propagation vectors and the PSD matrices in an ISF framework. Initial results obtained using the proposed framework were presented in [24]. In this paper, we provide a more detailed description of the system, further discussions and comparison to state-of-the-art approaches, as well as an extended set of experiments to evaluate the performance of the narrowband signal detector and the quality of the extracted source signal at the ISF output.
Signal model and problem formulation
A compact array of M microphones captures the signal of a desired speaker, an unknown number of competing speakers, and background noise. The STFT-domain signal of the mth microphone is given by
where S_m, I_m, and V_m are the signals of the desired speaker, the sum of competing speakers, and the noise, respectively, and t and k are the time and frequency indices. Let the M×1 vectors y(t,k), s(t,k), i(t,k), and v(t,k) contain the respective signals from all M microphones. The corresponding PSD matrices are given by Φ_s = E[s s^H], Φ_i = E[i i^H], and Φ_v = E[v v^H], where E[·] denotes statistical expectation. As the signals are zero-mean and mutually uncorrelated, the following holds
In addition, we define the undesired signal PSD matrix containing the speech interferers and the noise as Φ_u(t,k) = Φ_i(t,k) + Φ_v(t,k). We seek to estimate the desired signal S_m(t,k) at the mth microphone, and without loss of generality, we consider the first microphone as the reference, i.e., m=1. The desired signal estimate \(\widehat {S}_{1}(t,k)\) is obtained by applying a time- and frequency-dependent optimal linear filter w_opt to the microphone signals as follows
Two fundamental assumptions underlie the proposed framework for computing the optimal filter coefficients \(\boldsymbol {w}_{\text {opt}}^{\mathrm {H}}(t,k)\). The first assumption, used in many existing multichannel speech enhancement approaches, states that the PSD matrix of the desired signal is a rank-one matrix given by
where g_1(k) is the RTF vector of the desired source with the first microphone as a reference, and \(\phi _{S_{1}}(t,k) = \operatorname {E}\left [|S_{1}(t,k)|^{2}\right ]\) is the PSD of the desired signal at the first microphone. The second assumption is that the speech signals are sparse in the STFT domain, so that each TF bin can be associated with one of the following mutually exclusive hypotheses
In addition, we introduce the hypothesis \(\mathcal {H}_{u}= \mathcal {H}_{i} \cup \mathcal {H}_{v}\) that an undesired signal is dominant, regardless of whether it is a competing speaker or background noise.
The objective in this work is to define likelihood models for the hypotheses in (5), design a detector that associates each TF bin with the correct hypothesis, estimate the PSD matrices and the RTF vector g_1, and finally, compute the ISF coefficients w_opt required for source extraction in (3). Narrowband DOA estimates play a key role in the framework, and appropriate state-of-the-art estimators are briefly discussed in the following section.
Narrowband DOA estimation
The most important criteria when choosing a DOA estimator for our framework are the ability to obtain nearly instantaneous narrowband DOA estimates without requiring temporally averaged covariance matrices, as the subspace-based estimators do [25], and a sufficiently low complexity suitable for real-time implementation. We briefly review two estimators which satisfy these requirements.
Least-squares (LS) fitting of instantaneous phase differences [26]
Assuming that a single source is dominant at each TF bin, its DOA can be estimated using the phase differences between the microphone signals at each TF bin. Denoting the 2D microphone locations by d_1, …, d_M, the frequency corresponding to the index k by f_k (in Hertz), the DOA of the dominant source by θ_tk (in radians), and the corresponding DOA vector by q(t,k) = [cos(θ_tk), sin(θ_tk)], the anechoic RTF vector with respect to the first microphone reads
Due to the relation s(t,k) = g_1(t,k) S_1(t,k), the phase differences of the source signal at each microphone with respect to the first microphone (provided that S_1(t,k) ≠ 0) are given by
If we introduce the (M−1)×1 vector \( \bar {\boldsymbol {s}} = \left [\angle {\frac {S_{2}}{S_{1}}}, \angle {\frac {S_{3}}{S_{1}}},\ldots, \angle {\frac {S_{M}}{S_{1}}}\right ], \) and the (M−1)×2 matrix D containing (d_i − d_1)^T as rows for i ∈ [2, M], Eq. (7) can be rewritten as
and can be solved for the DOA vector q(t,k). However, in practice, the signal \(\bar {\boldsymbol {s}}\) is not observable. Instead, the noisy phase vector \( \bar {\boldsymbol {y}} = \left [\angle {\frac {Y_{2}}{Y_{1}}}, \angle {\frac {Y_{3}}{Y_{1}}},\ldots,\angle {\frac {Y_{M}}{Y_{1}}}\right ], \) is used in (8) to obtain an estimate of the DOA vector by solving the LS problem
where (·)^+ denotes the Moore-Penrose pseudoinverse of a matrix.
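The LS estimate in (9) can be sketched in a few lines. The phase-sign convention of the propagation model and all names below are illustrative assumptions; spatial aliasing is assumed absent for the pairs used.

```python
import numpy as np

def estimate_doa_ls(y_bin, mic_pos):
    """LS DOA estimate from instantaneous inter-microphone phase
    differences at one TF bin (free-field, far-field model).

    y_bin   : complex STFT coefficients of the M microphones, shape (M,)
    mic_pos : 2-D microphone positions in metres, shape (M, 2)
    Returns the (unnormalised) DOA vector and the DOA angle in radians.
    """
    # phase differences with respect to the reference (first) microphone
    phase = np.angle(y_bin[1:] / y_bin[0])        # shape (M-1,)
    # rows (d_i - d_1)^T for i = 2..M
    D = mic_pos[1:] - mic_pos[0]                  # shape (M-1, 2)
    # LS solution via the Moore-Penrose pseudoinverse; the omitted scale
    # factor 2*pi*f_k/c only stretches q, so its direction is unaffected
    q = np.linalg.pinv(D) @ phase
    return q, np.arctan2(q[1], q[0])
```

Below the spatial aliasing frequency of every microphone pair, the wrapped phase equals the true phase difference and the recovered direction is exact for noise-free data.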
LS fitting of cross-PSD phase differences [27]
Instead of instantaneous phase differences, the authors in [27] use the phase differences between the short-term cross-PSDs to estimate the DOA. According to the model in (6), the cross-PSD between the mth and the nth microphone is given by
Introducing the (M−1)×1 vector \( \bar {\boldsymbol {\phi }}(t,k) = \left [\angle {\frac {\phi _{S,12}(t,k)}{\phi _{S,11}(t,k)}}, \ldots, \angle {\frac {\phi _{S,1M}(t,k)}{\phi _{S,11}(t,k)}}\right ], \) and using the matrix D in the same manner as in (8), we obtain the relation
As the signals S_1(t,k), …, S_M(t,k) are unobservable, the noisy cross-PSDs ϕ_Y,mn(t,k) can be used instead. By defining \(\bar {\boldsymbol {\phi }}_{Y}(t,k) = \left [\angle {\frac {\phi _{Y,12}(t,k)}{\phi _{Y,11}(t,k)}}, \ldots, \angle {\frac {\phi _{Y,1M}(t,k)}{\phi _{Y,11}(t,k)}}\right ],\) the DOA vector estimate is obtained analogously to (9), as
The estimators given by (9) and (12) assume that for each microphone pair, the spatial aliasing frequency lies above \(\frac {F_{s}}{2}\), where F_s is the sampling rate. Alternatively, frequency-dependent binary weights can be used to exclude microphone pairs at the frequency bins where spatial aliasing might occur for those pairs, as done in [27].
State-of-the-art DOA-informed source extraction
DSB and MPDR beamforming
If the signal propagation is modelled as a pure delay, the DSB is the simplest filter which can be applied for source extraction. However, the DSB offers suboptimal performance as it does not consider the signal statistics and the reverberation.
Data-dependent spatial filters such as the MVDR or the LCMV filter can be applied if the locations or the PSD matrices of the interfering sources are known. However, in the considered application, this information is unavailable. One possibility to employ data-dependent spatial filtering without requiring the undesired signal PSD matrix is to use a minimum power distortionless response (MPDR) beamformer [28], computed using the anechoic RTF vector (6) and the microphone signal PSD matrix Φ_y. An MPDR filter that provides an estimate of the desired signal received at the first microphone is given by
In contrast to the MVDR filter, which is expressed in terms of the undesired signal PSD matrix Φ_u, the MPDR filter is expressed in terms of Φ_y, which contains the desired signal as well. Therefore, if the RTF vector is inaccurate due to the anechoic model mismatch in reverberant environments, or due to DOA errors, the MPDR filter causes severe distortion of the desired signal [28].
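For concreteness, a minimal sketch of an MPDR filter of this form, assuming the RTF vector and the PSD matrix estimate are given (names are illustrative):

```python
import numpy as np

def mpdr_filter(phi_y, g):
    """MPDR filter w = Phi_y^{-1} g / (g^H Phi_y^{-1} g): passes the
    desired signal with propagation vector g undistorted (reference
    microphone scaling) while minimising the total output power."""
    num = np.linalg.solve(phi_y, g)       # Phi_y^{-1} g
    return num / (g.conj() @ num)         # normalise: w^H g = 1
```

Because the denominator involves Φ_y rather than Φ_u, any mismatch between g and the true propagation vector lets the filter cancel part of the desired signal, which is the distortion mechanism described above.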
Informed spatial filtering
To extract a desired source from a given DOA, while reducing noise and directional interferers using ISFs, the narrowband detectors used for RTF and PSD matrix estimation need to distinguish TF bins where the desired signal is dominant from TF bins where an undesired signal is dominant. Such a framework was developed in [22] for spatial filtering in the spherical harmonic domain, where the signal detector was obtained by estimating a Gaussian model-based DSPP. To estimate the DSPP, the likelihoods of the signal vector are modelled as (indices t and k omitted for brevity)
Given an a priori DSPP \(q_{s} = p(\mathcal {H}_{s})\), the a posteriori DSPP follows from the Bayes theorem as follows
The authors in [22] incorporate the DOA information in the a priori DSPP q_s, to provide more robust discrimination between desired and undesired speakers. If \(\Theta _{\theta,\hat {\theta }}(t,k)\) denotes the angle between the true DOA θ of the source of interest and the DOA estimate \(\hat {\theta }_{tk}\) at TF bin (t,k), the a priori DSPP in [22] is computed as \(q_{s}(t,k) = w\left (\Theta _{\theta,\hat {\theta }}(t,k)\right)\), where w(Θ) is a Gaussian window centred at Θ=0.
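As an illustration of such an a priori DSPP, a small sketch with an assumed window width (the width used in [22] is not stated here, so `sigma` is a hypothetical value):

```python
import numpy as np

def apriori_dspp(theta_est, theta_des, sigma=np.deg2rad(15.0)):
    """A priori DSPP as a Gaussian window over the angular distance
    between the desired-source DOA and the narrowband DOA estimate."""
    # wrap the angular difference to (-pi, pi] before applying the window
    diff = np.angle(np.exp(1j * (theta_est - theta_des)))
    return np.exp(-0.5 * (diff / sigma) ** 2)
```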
Proposed DOA model-based signal detection
The Gaussian model-based DSPP is very sensitive to nonstationarity of the undesired signal, as the expression (15) requires an estimate of the PSD matrix Φ_u. To estimate the DSPP at TF bin (t,k), the PSD matrix estimate \(\widehat {\mathbf {\Phi }}_{\boldsymbol {u}}(t-1,k)\) from the previous frame t−1 is used, which leads to estimation errors when the undesired signal changes in consecutive frames. The DOA-based a priori DSPP used in [22] in the spherical harmonic domain seeks to reduce this sensitivity in scenarios with nonstationary interferers. Nevertheless, our experiments for a posteriori DSPP estimation in the traditional signal domain indicated that the DOA-based a priori DSPP is often insufficient to compensate for errors in the likelihoods (15) occurring due to erroneous \(\widehat {\mathbf {\Phi }}_{\boldsymbol {u}}\) estimates. This is our motivation to develop a different method to incorporate DOA information in the a posteriori DSPP estimation, by using a generative probabilistic model of the narrowband DOAs.
Likelihood model for the narrowband DOA estimates
To derive the a posteriori DSPP, we propose likelihood models for the DOA estimates under the hypotheses \(\mathcal {H}_{s}\), \(\mathcal {H}_{i}\), and \(\mathcal {H}_{v}\). As the DOA estimates represent circular random variables, we propose to model \(f\left (\hat {\theta }_{tk} \mid \mathcal {H}_{s}\right)\) by a von Mises distribution, which closely approximates a wrapped normal distribution on the circle [29]. The von Mises distribution is characterised by a mean \(\tilde {\theta }\) and a concentration κ, and is given by
The normalisation \(c_{\mathcal {M}}(\kappa)=[2 \pi I_{0}(\kappa)]^{-1}\) is derived in [30], where I_0 is the modified Bessel function of the first kind and order zero. If the DOA estimator is unbiased, the mean \(\tilde {\theta }\) is equal to the DOA of the desired source. The concentration parameter κ reflects the uncertainty in the DOA estimates: a larger concentration indicates a smaller DOA estimation error variance, and vice versa. Factors which commonly affect the concentration include the array geometry, the number of microphones, the coherent-to-diffuse signal ratio, as well as the DOA estimator. The concentration κ is an unknown model parameter and its computation is discussed in Section 5.3.2.
Assuming spatially isotropic background noise, where the sound may originate from all directions with equal probability, the likelihood \(f\left (\hat {\theta }_{tk} \mid \mathcal {H}_{v}\right)\) is modelled by a uniform distribution on the circle, i.e., \(f\left (\hat {\theta }_{tk} \mid \mathcal {H}_{v}\right) = (2\pi)^{-1}\). It remains to define the likelihood of the DOA estimates under the hypothesis \(\mathcal {H}_{i}\). If the number and the DOAs of the interferers were known, a multimodal distribution on the circle would accurately model \(f\left (\hat {\theta }_{tk} \mid \mathcal {H}_{i}\right)\). In practice, this information is unavailable and difficult to estimate. Instead, we propose to model \(f\left (\hat {\theta }_{tk} \mid \mathcal {H}_{i}\right)\) as approximately uniform in regions sufficiently far from the desired source, and with a notch centred at the DOA of the source. We construct such a distribution by considering the following function \(g(\theta,\tilde {\theta },\kappa)\)
which attains the minimum value at \(\theta = \tilde {\theta }\) and approaches a uniform distribution as θ deviates from \(\tilde {\theta }\). To obtain a probability density, \(g(\theta,\tilde {\theta },\kappa)\) is normalised by \(c_{\mathcal {A}}\) such that
The integral of the von Mises-type term equals \(2\pi I_{0}(\kappa)\), the inverse of the von Mises normalisation constant, and \(\int e^{\kappa } \,\mathrm {d}\theta = 2\pi {\mathrm e}^{\kappa }\). Therefore, the constant \( c_{\mathcal {A}}\) is given by
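The two likelihoods can be sketched as follows. The notched density is a reconstruction under the assumption \(g(\theta,\tilde{\theta},\kappa) = e^{\kappa} - e^{\kappa\cos(\theta-\tilde{\theta})}\) with \(c_{\mathcal{A}} = 2\pi\left(e^{\kappa} - I_{0}(\kappa)\right)\), which matches the stated properties (minimum at \(\tilde\theta\), approximately uniform far from it):

```python
import numpy as np

def vm_pdf(theta, mean, kappa):
    """von Mises likelihood f(theta | H_s) with normalisation 1/(2*pi*I0)."""
    return np.exp(kappa * np.cos(theta - mean)) / (2 * np.pi * np.i0(kappa))

def notch_pdf(theta, mean, kappa):
    """Notched likelihood f(theta | H_i): zero at the desired DOA,
    approximately uniform elsewhere (reconstructed form, see lead-in)."""
    c_a = 2 * np.pi * (np.exp(kappa) - np.i0(kappa))
    return (np.exp(kappa) - np.exp(kappa * np.cos(theta - mean))) / c_a
```

Both functions integrate to one over the circle and reduce towards the uniform density \((2\pi)^{-1}\) as κ → 0, which is consistent with the low-certainty regime discussed later.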
The von Mises distribution and the proposed notched distribution are illustrated in Fig. 1 for different values of the concentration parameter κ.
Desired speech presence probability and optimal detection
Having defined the likelihoods, the a posteriori DSPP and the a posteriori desired speech absence probability (DSAP) are given by the Bayes theorem as
where the a priori probabilities \(q_{s} = p(\mathcal {H}_{s})\), \(q_{i} = p(\mathcal {H}_{i})\) and \(q_{v} = p(\mathcal {H}_{v})\) satisfy q _{ s }+q _{ i }+q _{ v }=1.
In scenarios with stationary undesired signals, such as background noise, the a posteriori DSPP is directly used for recursive estimation of the noise PSD matrix [15, 20, 21]. However, if the undesired signal contains speech, recursive updates using the DSPP introduce leakage of the undesired signal into the desired signal PSD matrix and vice versa. Therefore, in this work, we employ the estimated DSPP and DSAP to compute an optimal binary detector at each TF bin, which minimises the Bayes risk for false positive and false negative costs C_su, C_us > 0, as follows [31]
where \(\mathcal {I}_{\mathcal {H}_{a}}\) is a binary indicator which equals one if the hypothesis in the subscript is true, and zero otherwise. Using the binary indicator, only the PSD matrix of the dominant signal is updated, as discussed in Section 6 in more detail.
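Putting the three likelihoods together, a sketch of the a posteriori DSPP and the binary decision at one TF bin. The cost convention (decide desired-dominant iff \(C_{us}\,p(\mathcal{H}_s\mid\hat\theta) \geq C_{su}\,p(\mathcal{H}_u\mid\hat\theta)\)) and the notched-likelihood form are assumptions:

```python
import numpy as np

def dspp_detect(theta_hat, theta_des, kappa, q_s, q_i, c_su=1.0, c_us=1.0):
    """A posteriori DSPP via the Bayes rule over the three DOA likelihoods,
    plus the minimum-Bayes-risk binary decision."""
    f_s = np.exp(kappa * np.cos(theta_hat - theta_des)) / (2 * np.pi * np.i0(kappa))
    f_i = (np.exp(kappa) - np.exp(kappa * np.cos(theta_hat - theta_des))) \
          / (2 * np.pi * (np.exp(kappa) - np.i0(kappa)))   # notched (reconstructed)
    f_v = 1.0 / (2 * np.pi)                                # isotropic noise
    q_v = 1.0 - q_s - q_i
    p_s = q_s * f_s / (q_s * f_s + q_i * f_i + q_v * f_v)  # a posteriori DSPP
    return p_s, (c_us * p_s >= c_su * (1.0 - p_s))         # DSPP, binary decision
```

A DOA estimate near the desired direction yields a DSPP close to one; an estimate far from it is attributed to the interferer or noise hypotheses.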
Estimation of the likelihood model parameters
Estimation of the a priori probabilities q_s, q_i, and q_v
The Gaussian signal model has been successfully used to compute the DSPP in scenarios where the undesired signal consists of stationary noise [15, 20, 21, 32]. Denoting the speech presence hypothesis (desired or undesired speech) by \(\mathcal {H}_{si} = \mathcal {H}_{s} \cup \mathcal {H}_{i}\), we can define the Gaussian likelihoods \(f(\boldsymbol {y} \mid \mathcal {H}_{v})\) and \(f(\boldsymbol {y} \mid \mathcal {H}_{si})\) according to (14), with the appropriate PSD matrices Φ_v and Φ_y − Φ_v, and compute an a posteriori speech presence probability (SPP) \(p\left (\mathcal {H}_{si} \mid \boldsymbol {y}(t,k)\right)\) using the Bayes rule. The a posteriori SPP from the Gaussian model can then be used as an a priori SPP in our proposed DOA-based model, such that
In this manner, the a priori SPP in the proposed model exploits the spatiotemporal properties of the signal vector y(t,k) and the knowledge of the noise PSD matrix to aid the discrimination between noise- and speech-dominated TF bins, prior to the estimation of the narrowband DOA at the current TF bin.
Estimation of the concentration parameter κ
It was mentioned in Section 5.1 that the concentration parameter κ, related to the mode and the notch of the DOA-related likelihoods, often depends on the coherent-to-diffuse ratio (CDR), the array geometry, and the DOA estimator. For a given array geometry and a given DOA estimator, a single concentration parameter can be estimated, for instance, by collecting the DOA estimates from all TF bins during a training period when only the desired speech source and background noise are present, and finding the maximum likelihood (ML) estimate. However, this way of obtaining a single concentration parameter does not take into account the fact that many of the TF bins used for training are noise-dominated and do not contain significant speech energy. Instead of providing an average concentration parameter, we seek to quantify the uncertainty of the DOA estimate at each TF bin. By quantifying the certainty of each DOA estimate, we provide additional information to the proposed signal detector for determining the dominant source at each TF bin. Therefore, the concentration parameter κ of the von Mises distribution needs to be estimated for each TF bin as well.
In this work, we propose to use the short-time CDR \(\widehat {\Gamma }_{tk}\), estimated from the microphone signals at each TF bin, to control the concentration parameter. The motivation to use the CDR stems from the fact that if the CDR is high at a given TF bin, it is more likely that the estimated DOA accurately indicates the DOA of the coherent sound. In such TF bins, \(f(\hat {\theta }_{tk} \mid \mathcal {H}_{s})\) and \(f(\hat {\theta }_{tk} \mid \mathcal {H}_{i})\) should have a high concentration κ, resulting in a narrow mode or notch. If the CDR is low, the concentration should be lower, to reflect the larger uncertainty in the DOA estimates. To estimate the CDR, we use the estimator proposed in [33], which is based on the short-term complex coherence between two microphone signals. The underlying assumption for this CDR estimator is that the sound field at each TF bin can be modelled as a superposition of a direct sound component with a given DOA (originating from a directional source, such as a speaker), and a diffuse sound component corresponding to late reverberation and diffuse background noise. If we express the CDR in dB, and consider that the range [−∞,∞] of the CDR needs to be mapped to the non-negative concentration parameter, we propose the following sigmoid-like function for the mapping
where l_max determines the maximum value of the function, \(c\in \mathbb {R}\) controls the offset along the \(\widehat \Gamma _{tk}\) axis, and \(\rho \in \mathbb {R}^{+}\) controls the steepness of the transition region of the sigmoid-like function. The minimum value of the function, \(f\left (\widehat {\Gamma }_{tk}\right) = 0\), attained in the limit \(\widehat {\Gamma }_{tk} \rightarrow -\infty \), indicates that the distribution of the DOA is a uniform distribution on the circle in the absence of a coherent signal.
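A sketch of one such mapping. The exact parametrisation below is an assumption chosen to satisfy the constraints stated in the text: the value l_max/2 is attained at \(\widehat\Gamma = 10\,c\) dB, and the function is approximately 0 at the low end and approximately l_max at the high end of the observed CDR range.

```python
import numpy as np

def cdr_to_kappa(cdr_db, l_max=8.0, c=1.5, rho=1.2):
    """Sigmoid-like mapping from the estimated CDR (in dB) to the von
    Mises concentration kappa; the parametrisation is a reconstruction,
    the default parameter values are those reported in the text."""
    return l_max / (1.0 + np.exp(-rho * (cdr_db - 10.0 * c)))
```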
The remaining question is how to determine the parameters of the sigmoid function, so that the concentration parameter κ accurately describes the distribution of the DOA estimates for each value of the CDR. To do this, we perform a training phase in a controlled simulated environment as follows:

1.
Simulate a short signal segment by convolving a white Gaussian noise signal with an anechoic room impulse response, and add an ideally diffuse noise signal simulated according to [34], with a specified signal-to-noise ratio (SNR). Note that although the CDR also depends on the reverberation from directional sources, the spatial properties of late reverberation closely resemble those of a diffuse sound field.

2.
Repeat the simulation for different SNRs (we used the range [−30,30] dB, with steps of 5 dB), and for each simulation store the CDR estimates and the DOA estimates for each TF bin.

3.
Make a histogram of the CDR estimates stored from all simulations and associate to each histogram bin the corresponding DOA estimates.

4.
If the set of DOA estimates associated with the nth histogram bin is \(\Theta _{n} = \left \{ \theta _{1}, \ldots, \theta _{L_{n}} \right \}\), a maximum likelihood estimate of the concentration parameter for this histogram bin is obtained by first computing
$$ r = \sqrt{\left(\frac{1}{L_{n}}\,\sum_{i=1}^{L_{n}} \, \cos\, \theta_{i}\right)^{2} + \left(\frac{1}{L_{n}}\,\sum_{i=1}^{L_{n}} \, \sin\, \theta_{i}\right)^{2} }, \qquad (25) $$

and using the following approximation (see [29], Section 5.3.1, for details)

$$ \kappa_{n,\text{ML}} = \left\{\begin{array}{ll} 2\,r + r^{3} + \frac{5}{6}\,r^{5} & \text{if}\ r<0.53, \\ -0.4 + 1.39\,r + \frac{0.43}{1-r} & \text{if}\ 0.53 \leq r < 0.85, \\ \frac{1}{2(1-r)} & \text{if}\ r \geq 0.85. \end{array}\right. \qquad (26) $$
5.
For each histogram bin n, store the CDR value of the bin centre and the corresponding ML estimate of the concentration parameter as a pair (Γ_n, κ_n,ML).
Following this data-driven procedure, we experimentally found a correspondence between the CDR estimates and the concentration parameter κ. Given the pairs (Γ_n, κ_n,ML), we can now determine the parameters of the sigmoid-like mapping function. First, note that although in theory the logarithmic range of the CDR is [−∞,∞], in practice, CDR estimators saturate and are limited to a relatively small range of values around 0 dB. For our particular estimator, we observed that the range of estimates was [−10, 20] dB, which allows us to determine the maximum value l_max of the concentration parameter by observing the values of κ_n,ML for the histogram bins where Γ_n ≈ 20 dB. To find the parameter c that determines the offset along the \(\widehat \Gamma \) axis, we note that for any value of ρ, the value of \(\widehat \Gamma \) for which the resulting κ is exactly at the midpoint of its range [0, l_max] satisfies \(\widehat {\Gamma } = 10\,c\). Therefore, by looking for the pair (Γ_n, κ_n,ML) in our training results where κ_n,ML is as close as possible to l_max/2, we can use the corresponding Γ_n to compute the parameter c. Having fixed c and l_max, and noting that due to the aforementioned saturation of the CDR estimator the concentration parameter is approximately 0 for Γ_n ≈ −10 dB, there is only a small range of values for ρ which satisfy the constraints on the maxima and the minima of the sigmoid-like function (i.e., f(−10) ≈ 0 and f(20) ≈ l_max). This range was ρ ∈ [0.2, 2] in our case, and the best fit for ρ can easily be found by visual inspection of the curves obtained by substituting several values of ρ from this range. For our data, this procedure resulted in l_max = 8, c = 1.5, and ρ = 1.2, which we kept constant for all experiments.
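The ML estimate of the concentration parameter in step 4, i.e., Eqs. (25) and (26), can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def kappa_ml(thetas):
    """Approximate ML estimate of the von Mises concentration from a set
    of DOA samples: mean resultant length r (Eq. (25)), then the
    piecewise approximation (Eq. (26))."""
    thetas = np.asarray(thetas, dtype=float)
    r = np.hypot(np.cos(thetas).mean(), np.sin(thetas).mean())
    if r < 0.53:
        return 2.0 * r + r**3 + (5.0 / 6.0) * r**5
    if r < 0.85:
        return -0.4 + 1.39 * r + 0.43 / (1.0 - r)
    return 1.0 / (2.0 * (1.0 - r))
```

For concentrated samples r approaches one and the estimate grows, while uniformly scattered samples give r near zero and hence κ near zero, matching the uniform-circle limit used for the noise hypothesis.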
Application to informed spatial filtering
We use the detector proposed in Section 5 to obtain the PSD matrix estimates \(\widehat {\mathbf {\Phi }}_{\boldsymbol {u}}\) and \(\widehat {\mathbf {\Phi }}_{\boldsymbol {s}}\), and the RTF vector estimate \(\hat {\boldsymbol {g}}_{1}\). The RTF vector can be obtained using the generalised eigenvalue decomposition (GEVD) of the matrix pencil \(\left (\widehat {\mathbf {\Phi }}_{\boldsymbol {s}+\boldsymbol {v}}(t,k),\widehat {\mathbf {\Phi }}_{\boldsymbol {v}}(t,k)\right)\) [35], such that if u(t,k) denotes the generalised eigenvector corresponding to the maximum generalised eigenvalue, \(\hat {\boldsymbol {g}}_{1}(t,k)\) is given by
where the denominator normalises the first entry to one. The PSD matrices are computed using the standard recursive updates
where the averaging parameters are computed using the output of the detector in (22) as follows
where the values \(\tilde {\alpha }_{s},\tilde {\alpha }_{u}\in \ [\!0,1)\) are predefined constants determining the effective range of the averaging parameters, i.e., \(\alpha _{s} \in \ [\tilde {\alpha }_{s},1]\) and \(\alpha _{u} \in \ [\tilde {\alpha }_{u},1]\). The noise PSD matrix is computed using a similar recursion to (28); however, the parameter α_v is computed using the Gaussian model-based SPP [15, 21] as follows
where \(\tilde {\alpha }_{v} \in \ [\!0,1)\) is a predefined constant. In contrast to (29), the noise averaging parameter (30) leads to a soft recursive update, as often done when the undesired signal is stationary [15, 20, 21].
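The recursive updates can be sketched as follows. The exact way the averaging parameters depend on the detector output and the SPP is an assumption consistent with the stated ranges: a hard hold/update driven by the binary detector, and a soft update driven by the SPP for the noise matrix.

```python
import numpy as np

def update_psd(phi_prev, y, dominant, alpha_tilde):
    """One recursive PSD-matrix update; alpha_tilde is used when the
    corresponding signal is dominant in this TF bin, otherwise the
    matrix is held (alpha = 1)."""
    alpha = alpha_tilde if dominant else 1.0
    return alpha * phi_prev + (1.0 - alpha) * np.outer(y, y.conj())

def update_noise_psd(phi_v_prev, y, p_speech, alpha_tilde_v):
    """Soft noise-PSD update driven by the Gaussian-model SPP: the
    matrix is held when speech is likely present (assumed convention)."""
    alpha_v = alpha_tilde_v + (1.0 - alpha_tilde_v) * p_speech
    return alpha_v * phi_v_prev + (1.0 - alpha_v) * np.outer(y, y.conj())
```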
Given the estimates \(\widehat {\mathbf {\Phi }}_{\boldsymbol {u}}\) and \(\hat {\boldsymbol {g}}_{1}\), an informed MVDR filter to extract the desired source in (3) is computed as
Note that, as the desired source PSD matrix Φ_s is of rank one, the MVDR filter can be expressed only in terms of Φ_u and Φ_s [4]. However, when the undesired signal is nonstationary, \(\widehat {\boldsymbol {\Phi }}_{\boldsymbol {s}}\) contains errors which can be detrimental to the signal quality when used in such a filter formulation. Therefore, we first estimate the desired signal RTF vector \(\hat {\boldsymbol {g}}_{1}(t,k)\), and compute an MVDR filter using (31). The complete source extraction framework is summarised in Fig. 2. In addition, the DSPP \(p\left (\mathcal {H}_{s} \mid \hat {\theta }_{tk}\right)\) can be applied as a multiplicative factor to the output of the MVDR filter, i.e.,
which has a similar role to the single-channel DOA-based gain in [23] and the DOA-based TF mask common in source separation [36]. Applying the DSPP as a multiplicative factor provides additional undesired signal reduction; however, when inaccurately estimated, it causes audible distortion of the desired signal. This is further evaluated in the experiments in Section 7.2.
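A sketch of the informed MVDR filter with the optional DSPP post-gain, assuming the estimated quantities for one TF bin are given (names are illustrative):

```python
import numpy as np

def informed_mvdr(phi_u, g1):
    """Informed MVDR w = Phi_u^{-1} g / (g^H Phi_u^{-1} g), using the
    estimated undesired PSD matrix and RTF vector (first mic = ref)."""
    num = np.linalg.solve(phi_u, g1)
    return num / (g1.conj() @ num)

def extract(y, phi_u, g1, dspp=None):
    """Filter one TF bin; optionally apply the DSPP as a single-channel
    post-gain for additional undesired-signal reduction."""
    s_hat = informed_mvdr(phi_u, g1).conj() @ y
    return s_hat if dspp is None else dspp * s_hat
```

Unlike the MPDR filter in Section 4, the inverse here involves only the undesired-signal statistics, which is what makes the filter robust to RTF estimation errors with respect to desired-signal cancellation.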
Performance evaluation
To evaluate the proposed system, measurements were performed in a room with T_60 = 0.16 s using the setup illustrated in Fig. 3. We simultaneously recorded three sources captured by three arrays, and the objective is to extract each source using the array nearest to that source, while reducing the remaining two sources. Each array (uniform circular, with a diameter of 3 cm) consisted of three omnidirectional DPA microphones (model DPA d:screet SMKSC4060). The distance between each source and the nearest array is 0.7 m, and each source is extracted using the nearest array only (hence the framework is implemented with only three microphones). To generate background noise, ten loudspeakers were placed facing the walls, and babble speech signals were convolved with the measured room impulse responses (RIRs) of the ten loudspeakers. Finally, clean speech signals were convolved with the measured RIRs for the three sources in Fig. 3 and added to the babble noise signal, appropriately scaled to provide a desired SNR (exact values are given in the experiment descriptions). In addition, measured sensor noise was added with an SNR of 35 dB in all experiments. The SNRs were computed segmentally over 30 ms signal segments, as the power ratio of the desired speech signal and the background (or sensor) noise captured at the reference microphone. The final SNRs indicated in the experiments are obtained by averaging across all segments with SNR in the range [20,40].
To evaluate the performance for different reverberation levels, simulated data was used. RIRs were computed using the simulator in [37]. Diffuse noise was simulated as described in [34], and the microphone signals were obtained by adding the speech signals convolved with the RIRs, the diffuse noise signal, and a spatially and temporally uncorrelated noise signal. The processing was done at a sampling rate of 16 kHz, with an STFT frame length of 64 ms, 50% overlap, and a Hamming analysis window. Unless stated otherwise, the DOA estimator with instantaneous phase differences, described in Section 3.1, was employed.
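The STFT analysis parameters above (64 ms Hamming frames, 50% overlap, 16 kHz) correspond to the following framing sketch; the helper is a hypothetical illustration, not the authors' code.

```python
import numpy as np

fs = 16000
frame_len = int(0.064 * fs)   # 64 ms -> 1024 samples
hop = frame_len // 2          # 50% overlap -> 512-sample frame shift
window = np.hamming(frame_len)

def stft(x):
    """STFT with the analysis parameters used in the evaluation."""
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[t * hop:t * hop + frame_len] * window
                       for t in range(n_frames)])
    return np.fft.rfft(frames, axis=1)   # (n_frames, frame_len // 2 + 1)

X = stft(np.random.randn(fs))            # one second of noise
print(X.shape)                            # 30 frames x 513 frequency bins
```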
Detector evaluation in terms of receiver operating characteristics (ROC)
We compare a minimum Bayes risk detector obtained using the proposed DOA model-based DSPP to one obtained using the Gaussian signal model, as in [22]. The false positive rate (FPR) and the false negative rate (FNR) are defined as
where \(\sum_{t,k}[\cdot]\) denotes summation of the value of the logical expression in the brackets. The ideal detector \(\mathcal {H}_{\text {ideal}}\) is obtained by comparing the spectrum of the desired signal to the sum of all undesired signals, namely,
The ROC curves are obtained by computing the FPR and FNR as \(\frac {C_{su}}{C_{us}}\) varies from 0 to ∞. The FPR and FNR are computed for the three sources (French female, English female, and English male) during 20 s of multi-talk. The average FPR and FNR used for the ROC curves are obtained by averaging over the segments for each of the three sources (hence over 60 s of speech in total). In all experiments, the desired-to-interfering speech ratio (DSIR) was in the range [−5, 8] dB. The DSIR for each source is computed at one of the microphones of the nearest array.
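The FPR/FNR equations and the ideal detector equation did not survive in this version of the text; they are presumably of the standard form sketched below, where a TF bin is ideally desired-dominant when the desired spectrum exceeds the summed undesired spectra, and where the ROC is traced by sweeping a decision threshold that plays the role of the cost ratio C_su/C_us. The DSPP-like score here is synthetic toy data, not the paper's estimator.

```python
import numpy as np

def ideal_mask(S, U):
    """Ideal detector: a TF bin is desired-dominant if |S|^2 > |U|^2,
    with U the sum of all undesired signals at that bin."""
    return np.abs(S) ** 2 > np.abs(U) ** 2

def fpr_fnr(detected, ideal):
    """FPR: undesired bins flagged as desired; FNR: desired bins missed."""
    fpr = np.sum(detected & ~ideal) / np.sum(~ideal)
    fnr = np.sum(~detected & ideal) / np.sum(ideal)
    return fpr, fnr

# Toy ROC: synthetic desired/undesired spectra and a noisy DSPP-like score.
rng = np.random.default_rng(1)
S = rng.standard_normal((100, 257))
U = rng.standard_normal((100, 257))
ideal = ideal_mask(S, U)
ratio = 10 * np.log10((np.abs(S) ** 2 + 1e-12) / (np.abs(U) ** 2 + 1e-12))
score = 1.0 / (1.0 + np.exp(-(ratio + rng.standard_normal(ratio.shape))))
roc = [fpr_fnr(score > thr, ideal) for thr in np.linspace(0, 1, 11)]
# As the threshold grows, the FPR decreases while the FNR increases.
```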
Experiment 1
In this experiment, we evaluate the detection accuracy for different noise and reverberation levels. To investigate the effect of background noise, the experiment was repeated for input SNRs of 3, 8, 13, and 18 dB using the measured data. The ROC curves for the different SNRs are shown in Fig. 4a for the two detectors. The detection accuracy is not notably affected by the SNR, as shown by the overlapping ROC curves; only a minor increase in the error rate can be observed for decreasing SNR. It is worth noting that although both types of errors are critical for the extracted signal quality in nonstationary scenarios [28], false positives are more detrimental, as they lead to errors in the RTF vector and distortion of the desired signal. In contrast, if the RTF vector is accurate, which can be achieved if the FPR is low, false negatives do not affect the performance.
To evaluate the detectors for different reverberation levels, the setup shown in Fig. 3 was simulated for T_60 of 0.2, 0.35, 0.5, and 0.65 s and diffuse babble noise with an SNR of 22 dB. As shown in Fig. 4c, reverberation has a stronger effect on the ROC than the noise, as the curves shift more notably with increasing reverberation. However, the proposed detector clearly outperforms the signal model-based detector in all cases.
Experiment 2
In this experiment, we simulated scenarios with one desired source and one interferer, for different angular separations between the desired source and the interferer, namely, 160°, 95°, 50°, 25°, and 0°. In all cases, the desired source is located at 0.7 m from the array, whereas the interferer is at 1.5 m from the array. The reverberation time was T_60 = 0.35 s, and diffuse babble noise with an SNR of 22 dB and uncorrelated sensor noise with an SNR of 35 dB were added. As expected, the detection accuracy deteriorates with decreasing angular separation, as visible in Fig. 4b. Note that even when the desired and the undesired source have equal DOAs, the detector provides good accuracy, because the desired signal is stronger than the interferer at its respective nearest array. Another reason is that the CDR in interferer-dominated TF bins is lower than the CDR in desired signal-dominated TF bins, hence the CDR-controlled concentration κ aids the detection even when the sources have equal DOAs.
Experiment 3
The detector ROC curves obtained with the two DOA estimators discussed in Section 3, the one with instantaneous and the one with time-averaged phase differences, are illustrated in Fig. 4d. Although time averaging of the phase differences generally provides less noisy DOA estimates (i.e., smoother across the TF spectrum), the detector performance is better when instantaneous DOA estimates are used.
Objective evaluation of extracted signals
To estimate the PSD matrices and the RTF vector, we computed a Bayes detector according to (22), with C_su = 1 and C_us = 2. These costs were chosen after investigating the objective performance measures and the results of informal listening tests in different acoustic conditions, where they achieved the best performance among all (C_su, C_us) pairs across the ROC. The chosen costs resulted in an FPR of 0.1 and an FNR of 0.9 on average (across the different experiments), which corroborates the observation made in Section 7.1.1 that the FPR needs to be very low to ensure good extracted signal quality. The averaging constants for the PSD matrices were \(\tilde {\alpha }_{v} = 0.95\) and \(\tilde {\alpha }_{s} = \tilde {\alpha }_{u} = 0.92\) (corresponding to time constants of 0.62 and 0.38 s, respectively). The performance was evaluated in terms of segmental noise reduction (NR), segmental interference reduction (IR), speech distortion (SD) index ν_sd, PESQ score improvement Δ_PESQ [38], and improvement of the short-time objective intelligibility (STOI) score [39], Δ_STOI. Five spatial filtering frameworks are evaluated: (i) an oracle MVDR filter, where the PSD matrices are computed by recursive averaging with an ideal detector, denoted by \(\mathcal {D}_{\text {id}}\); (ii) a DSB steered to the desired source DOA; (iii) an MPDR filter steered to the desired source DOA; (iv) an informed MVDR filter obtained using the Gaussian signal model-based detector with a DOA-based a priori SPP, denoted by \(\mathcal {D}_{\text {sm}}\); and (v) an informed MVDR filter obtained using the proposed DOA-based detector, denoted by \(\mathcal {D}_{\text {dm}}\).
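The quoted time constants follow from the usual relation between a first-order recursive-averaging factor α and the STFT frame shift (32 ms for 64 ms frames with 50% overlap), τ = −T_hop / ln α; the short computation below reproduces the 0.62 s and 0.38 s values.

```python
import numpy as np

hop = 0.032  # s, STFT frame shift (64 ms frames, 50% overlap)

def time_constant(alpha, hop=hop):
    """Time constant of the recursive averaging
    Phi[t] = alpha * Phi[t-1] + (1 - alpha) * x[t] x[t]^H."""
    return -hop / np.log(alpha)

print(round(time_constant(0.95), 2))  # 0.62 s
print(round(time_constant(0.92), 2))  # 0.38 s
```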
Experiment 1
This experiment is performed using measured data for two SNR conditions, indicated in Table 1. Although \(\mathcal {D}_{\text {sm}}\) and \(\mathcal {D}_{\text {dm}}\) perform similarly in terms of NR, \(\mathcal {D}_{\text {dm}}\) offers up to 8 dB better IR than \(\mathcal {D}_{\text {sm}}\) for an SNR of 10 dB, and up to 6 dB better IR for an SNR of 2 dB. The SD index is lower for the proposed \(\mathcal {D}_{\text {dm}}\) in all cases. The better performance of the proposed system is due to the higher accuracy of the DOA-based detector compared to the Gaussian model-based one. The improvement in PESQ and STOI scores at the output of the proposed system with respect to the unprocessed signal is notably higher than the improvement offered by the other systems. The severe distortions of the MPDR filter often result in lower PESQ and STOI scores than those of the unprocessed signal, as visible in Table 1.
Experiment 2
In this experiment, the output of the proposed system \(\mathcal {D}_{\text {dm}}\) and the output of the DSB are multiplied by the a posteriori DSPP. A system where a DOA-based DSPP is applied at the output of a fixed spatial filter was proposed in [23], and the goal of the current experiment is to confirm that the benefit of the DSPP is even larger when it is used in combination with a data-dependent, informed spatial filter rather than a fixed spatial filter. The experiment is repeated with the two DOA estimators discussed in Section 3.
The results in Table 2 are shown for an average input SNR of 10 dB and confirm that the informed MVDR filter outperforms the DSB when the DSPP-based mask is applied after spatial filtering. We also note that multiplying by the DSPP must be done with care, as it introduces SD, even though the NR and IR are significantly improved. The choice of whether to multiply the MVDR output by the DSPP depends on the accuracy of the DSPP and the importance of undistorted speech for a given application. Finally, note that the system with the DOA estimator based on instantaneous phase differences slightly outperforms the one with time-averaged cross-PSD phase differences, which is consistent with the detection performance evaluation in Fig. 4d. Time-averaged phase differences result in time-smoothing of the DSPP, which can distort the speech onsets and degrade the overall performance.
Experiment 3
In this experiment, we investigate the performance for varying angular separation between the desired and the interfering source. Signals were simulated with reverberation times T_60 = 0.2 s and T_60 = 0.4 s. The distances of the desired source and the interferer from the array were 0.7 and 1.5 m, respectively, and the results in Fig. 5 are averaged over three experiments with different locations of the source constellation. The NR and SD of the different frameworks are largely unaffected by the angular separation, while the IR decreases with decreasing angular separation. As the angular separation decreases, \(\mathcal {D}_{\text {dm}}\) and \(\mathcal {D}_{\text {sm}}\) achieve similar IR, since in both cases the performance is limited by the spatial resolution of the array.
Experiment 4
An important motivation for the current work was to mitigate the sensitivity to DOA mismatch that is typical of the MPDR beamformer, where DOA errors lead to severe distortion of the desired signal. Provided that the desired signal detector remains accurate despite small DOA errors, the RTF vector and the undesired signal PSD matrix can be estimated, and the desired signal can be extracted with good quality using an informed MVDR filter. In Fig. 6, the DOA mismatch is varied such that the system is given wrong information about the true source DOA, with an error from 1° to 19°, including an error-free case. The difference between the DOAs of the desired source and the interferer is 100°. In Fig. 6a, the SD index and the PESQ improvement are shown on the y-axis, whereas in Fig. 6b, the NR and the IR are shown. Notably, apart from a minor performance loss as the mismatch angle increases, the informed spatial filter is very robust to DOA errors, which is crucial for practical applications where the DOA might be given only approximately.
Conclusions
We addressed the problem of source extraction in the presence of background noise and speech interferers. The DOA of the desired source was assumed to be approximately known, while the number and locations of the interferers were unknown. Designing robust spatial filters is a challenging task in such scenarios, as the PSD matrix of the undesired speech signals needs to be estimated from the data. We proposed an informed spatial filtering framework, in which the first step is to design an appropriate desired signal detector. We discussed and experimentally showed that the commonly used Gaussian signal model-based detector is not suitable when the undesired signals contain speech. Therefore, we proposed a DOA model-based detector, in which narrowband DOA estimates are used to discriminate between desired and undesired speakers, while the Gaussian signal model aids the detection of noisy TF bins. The performance of the detector was evaluated in terms of ROC curves, and by objective evaluation of the extracted signals when the detector is applied for PSD matrix estimation in an informed MVDR filtering framework.
References
1. JL Flanagan, JD Johnston, R Zahn, GW Elko, Computer-steered microphone arrays for sound transduction in large rooms. J. Acoust. Soc. Am. 78(5), 1508–1518 (1985).
2. S Doclo, M Moonen, Superdirective beamforming robust against microphone mismatch. IEEE Trans. Audio Speech Lang. Process. 15(2), 617–631 (2007).
3. J Benesty, J Chen, Y Huang, Microphone Array Signal Processing (Springer, Berlin, 2008).
4. M Souden, J Benesty, S Affes, On optimal frequency-domain multichannel linear filtering for noise reduction. IEEE Trans. Audio Speech Lang. Process. 18(2), 260–276 (2010).
5. KL Bell, Y Ephraim, HL Van Trees, A Bayesian approach to robust adaptive beamforming. IEEE Trans. Signal Process. 48(2), 386–398 (2000).
6. CJ Lam, AC Singer, Bayesian beamforming for DOA uncertainty: theory and implementation. IEEE Trans. Signal Process. 54(11), 4435–4445 (2006).
7. Y Grenier, A microphone array for car environments. Speech Commun. 12, 25–39 (1993).
8. KM Buckley, Spatial/spectral filtering with linearly constrained minimum variance beamformers. IEEE Trans. Acoust. Speech Signal Process. 35(3), 249–266 (1987).
9. CA Anderson, PD Teal, MA Poletti, Spatially robust far-field beamforming using the von Mises(-Fisher) distribution. IEEE/ACM Trans. Audio Speech Lang. Process. 23(12), 2189–2197 (2015).
10. O Hoshuyama, A Sugiyama, A Hirano, A robust adaptive beamformer for microphone arrays with a blocking matrix using constrained adaptive filters. IEEE Trans. Signal Process. 47(10), 2677–2684 (1999).
11. BJ Yoon, I Tashev, A Acero, in Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP). Robust adaptive beamforming algorithm using instantaneous direction of arrival with enhanced noise suppression capability (Honolulu, 2007), pp. 133–136.
12. O Hoshuyama, B Begasse, A Sugiyama, A Hirano, in Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP). A real-time robust adaptive microphone array controlled by an SNR estimate (Seattle, 1998), pp. 3605–3608.
13. M Taseska, EAP Habets, Informed spatial filtering with distributed arrays. IEEE Trans. Audio Speech Lang. Process. 22(7), 1195–1207 (2014).
14. M Taseska, EAP Habets, Spotforming: spatial filtering with distributed arrays for position-selective sound acquisition. IEEE/ACM Trans. Audio Speech Lang. Process. 24(7), 1291–1304 (2016).
15. M Souden, J Chen, J Benesty, S Affes, An integrated solution for online multichannel noise tracking and reduction. IEEE Trans. Audio Speech Lang. Process. 19(7), 2159–2169 (2011).
16. T Higuchi, N Ito, T Yoshioka, T Nakatani, in Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP). Robust MVDR beamforming using time-frequency masks for online/offline ASR in noise (Shanghai, 2016).
17. S Affès, Y Grenier, A signal subspace tracking algorithm for microphone array processing of speech. IEEE Trans. Speech Audio Process. 5(5), 425–437 (1997).
18. S Gannot, D Burshtein, E Weinstein, Signal enhancement using beamforming and nonstationarity with applications to speech. IEEE Trans. Signal Process. 49(8), 1614–1626 (2001).
19. D Van Compernolle, Adaptive filter structures for enhancing cocktail party speech from multiple microphone recordings. Colloque sur le traitement du signal et des images, 513–516 (1989). http://documents.irevues.inist.fr/handle/2042/11518?show=full.
20. I Cohen, Noise spectrum estimation in adverse environments: improved minima controlled recursive averaging. IEEE Trans. Speech Audio Process. 11(5), 466–475 (2003).
21. M Taseska, EAP Habets, in Proc. Intl. Workshop on Acoustic Echo and Noise Control (IWAENC). MMSE-based blind source extraction in diffuse noise fields using a complex coherence-based a priori SAP estimator (2012).
22. DP Jarrett, EAP Habets, PA Naylor, in Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP). Spherical harmonic domain noise reduction using an MVDR beamformer and DOA-based second-order statistics estimation (Vancouver, 2013).
23. I Tashev, A Acero, in Proc. Intl. Workshop on Acoustic Echo and Noise Control (IWAENC). Microphone array post-processor using instantaneous direction-of-arrival (Paris, 2006).
24. M Taseska, EAP Habets, in Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP). Minimum Bayes risk signal detection for speech enhancement based on a narrowband DOA model (Brisbane, 2015).
25. R Roy, T Kailath, ESPRIT: estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 37, 984–995 (1989).
26. S Araki, H Sawada, R Mukai, S Makino, DOA estimation for multiple sparse sources with arbitrarily arranged multiple sensors. J. Signal Process. Syst. 63, 265–275 (2011).
27. O Thiergart, W Huang, EAP Habets, in Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP). A low complexity weighted least squares narrowband DOA estimator for arbitrary array geometries (Shanghai, 2016), pp. 340–344.
28. H Cox, RM Zeskind, MM Owen, Robust adaptive beamforming. IEEE Trans. Acoust. Speech Signal Process. 35(10), 1365–1376 (1987).
29. KV Mardia, PE Jupp, Directional Statistics (Wiley-Blackwell, New York, 1999).
30. M Abramowitz, IA Stegun (eds.), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (United States Department of Commerce, USA, 1972).
31. S Kay, Fundamentals of Statistical Signal Processing, Volume II: Detection Theory (Prentice-Hall, NJ, USA, 1998).
32. M Souden, J Chen, J Benesty, S Affès, Gaussian model-based multichannel speech presence probability. IEEE Trans. Audio Speech Lang. Process. 18(5), 1072–1077 (2010).
33. O Thiergart, G Del Galdo, EAP Habets, On the spatial coherence in mixed sound fields and its application to signal-to-diffuse ratio estimation. J. Acoust. Soc. Am. 132(4), 2337–2346 (2012).
34. EAP Habets, I Cohen, S Gannot, Generating nonstationary multisensor signals under a spatial coherence constraint. J. Acoust. Soc. Am. 124(5), 2911–2917 (2008).
35. A Krueger, E Warsitz, R Haeb-Umbach, Speech enhancement with a GSC-like structure employing eigenvector-based transfer function ratios estimation. IEEE Trans. Audio Speech Lang. Process. 19(1), 206–219 (2011).
36. S Araki, H Sawada, R Mukai, S Makino, in Proc. Intl. Workshop on Acoustic Echo and Noise Control (IWAENC). A novel blind source separation method with observation vector clustering (Eindhoven, 2005).
37. EAP Habets, Room impulse response generator. Technical report, Technische Universiteit Eindhoven (2006).
38. ITU-T Recommendation P.862, Perceptual evaluation of speech quality (PESQ): an objective method for end-to-end speech quality assessment of narrow-band telephone networks and speech codecs (2001).
39. CH Taal, RC Hendriks, R Heusdens, J Jensen, An algorithm for intelligibility prediction of time-frequency weighted noisy speech. IEEE Trans. Audio Speech Lang. Process. 19(7), 2125–2136 (2011).
Acknowledgements
The authors would like to thank the Editorial Board and the Reviewers for considering and reviewing this manuscript.
Funding
No funding was received or used to prepare this manuscript.
Author information
Contributions
Both authors had significant contribution to the development of early ideas and design of the final algorithms. Throughout all stages, the authors discussed the importance and the quality of the algorithms and the structure of the manuscript. Both authors read and approved the submitted version of the manuscript.
Corresponding author
Correspondence to Maja Taseska.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Taseska, M., Habets, E.A. DOA-informed source extraction in the presence of competing talkers and background noise. EURASIP J. Adv. Signal Process. 2017, 60 (2017). https://doi.org/10.1186/s13634-017-0495-7
Received:
Accepted:
Published:
Keywords
 Spatial filtering
 Speech enhancement
 PSD matrix estimation
 RTF estimation
 Signal detection