Blind source separation for robot audition using fixed HRTF beamforming
EURASIP Journal on Advances in Signal Processing, volume 2012, Article number: 58 (2012)
Abstract
In this article, we present a twostage blind source separation (BSS) algorithm for robot audition. The first stage consists in a fixed beamforming preprocessing to reduce the reverberation and the environmental noise. Since we are in a robot audition context, the manifold of the sensor array in this case is hard to model due to the presence of the head of the robot, so we use premeasured head related transfer functions (HRTFs) to estimate the beamforming filters. The use of the HRTF to estimate the beamformers allows to capture the effect of the head on the manifold of the microphone array. The second stage is a BSS algorithm based on a sparsity criterion which is the minimization of the l_{1} norm of the sources. We present different configuration of our algorithm and we show that it has promising results and that the fixed beamforming preprocessing improves the separation results.
1 Introduction
Robot audition is the ability of a humanoid to understand its acoustic environment, separate and localize sources, identify speakers and recognize their emotions. This complex task is one of the targets of the Romeo project^{a} that we work on. This project aims to build a humanoid (Romeo) that can act as a comprehensive assistant for persons suffering from loss of autonomy. Our task in this project focuses on blind source separation (BSS) using a microphone array (more than two sensors). Source separation is a very important step for human-robot interaction: it allows subsequent tasks such as speaker identification, speech and motion recognition and environmental sound analysis to be achieved properly. In a BSS task, the separation should be done from the received microphone signals without prior knowledge of the mixing process. The only knowledge is limited to the array geometry.
The problem of BSS has been studied by many authors [1], and we present here some of the state-of-the-art methods related to robot audition. Tamai et al. [2] performed sound source localization by delay-and-sum beamforming and source separation in a real environment with frequency band selection, using a microphone array of 32 microphones located on three rings. Yamamoto et al. [3] proposed a source separation technique based on geometric constraints as a preprocessing for the speech recognition module in their robot audition system. This system was implemented in the humanoids SIG2 and Honda ASIMO with an eight-sensor microphone array, as a part of a more complete robot audition system named HARK [4]. Saruwatari et al. [5] proposed a two-stage binaural BSS system for a humanoid. They combined a single-input multiple-output model based on independent component analysis (ICA) and a binary mask processing.
One of the main challenges of BSS remains obtaining good separation performance in real reverberant environments. A beamforming preprocessing can be a solution to improve BSS performance in a reverberant room. Beamforming consists of estimating a spatial filter that operates on the outputs of a microphone array in order to form a beam with a desired directivity pattern [6]. It is useful for many purposes, particularly for enhancing a desired signal from a measurement corrupted by noise, competing sources and reverberation [6]. Beamforming filters can be estimated in a fixed or in an adaptive way. A fixed beamformer, unlike an adaptive one, does not depend on the sensor data: it is built for a set of fixed desired directions. In this article, we propose a two-stage BSS technique where a fixed beamforming is used as a preprocessing step.
The authors of [7] propose to use a beamforming preprocessing where the steering directions are the directions of arrival (DOAs) of the sources. In this case, the DOAs of the sources are supposed to be known a priori [7]. The authors evaluate their method in a determined case with 2 and 4 sources and a circular microphone array. Saruwatari et al. present a combined ICA [8] and beamforming method: first, the authors perform a subband ICA and estimate the DOAs of the sources using the directivity patterns in each frequency bin; second, they use the estimated DOAs to build a null beamforming; third, they integrate the subband ICA and the null beamforming by selecting the most suitable separation matrix in each frequency [9]. In this article, we propose to use a fixed beamforming preprocessing with fixed steering directions, independently of the directions of arrival of the sources, and we compare this preprocessing to the one proposed in [7]. We are interested in studying the effect of beamforming as a preprocessing tool, so we do not include the algorithm of [9] in our evaluation (the authors of [9] use beamforming as a separation method alternatively with ICA).
However, in a beamforming task, we need to know the manifold of the sensor array, which is hard to model in the robot audition case because the head of the robot alters the acoustic near field. To overcome the array geometry modeling problem and take into account the influence of the robot's head on the received signals, we propose to use the head related transfer functions (HRTFs) of the robot's head as steering vectors to build the fixed beamformer. The main advantages of our method are its reduced computational cost (compared to one based on adaptive beamforming), its improved separation quality and its relatively fast convergence rate. Its weaknesses lie in the lack of theoretical analysis or proofs guaranteeing convergence to the desired solution and, when source localization is needed, in the fact that our method provides only a rough estimation of the directions of arrival.
This article is organized as follows: in Section 2, we present the signal model used in the BSS task; Sections 3 and 4 are dedicated respectively to the fixed beamforming using HRTFs and to the BSS step based on a sparsity criterion; we assess the algorithms' performances in Section 5, while Section 6 provides some concluding remarks.
2 Signal model
Assume N sound sources s(t) = [s_{1}(t),...,s_{N}(t)]^{T} and an array of M microphones with outputs denoted by x(t) = [x_{1}(t),...,x_{M}(t)]^{T}, where t is the time index. We assume that we are in an overdetermined case with M > N and that the number of sources N is known a priori. In Section 3.3, however, we propose a method for source number estimation in the robot audition case. As we are in a real environment, the output signals in the time domain are modeled as the sum of the convolutions between the sound sources and the impulse responses of the different propagation paths between the sources and the sensors, truncated at the length L + 1:

$$\mathbf{x}(t)=\sum_{l=0}^{L}\mathbf{h}(l)\,\mathbf{s}(t-l)+\mathbf{n}(t)$$
where h(l) is the l-th impulse response matrix and n(t) is a noise vector. We consider a spatially decorrelated diffuse noise whose energy is assumed to be negligible compared to that of the point sources. A point noise source would be treated as a sound source. This scenario corresponds to our experimental and real life application setups.
In the frequency domain, when the length N_{f} of the analysis window of the short time Fourier transform (STFT) is longer than twice the length L of the mixing filters, the output signals at the time-frequency bin (f, k) can be approximated as:

$$\mathbf{X}(f,k)\approx \mathbf{H}(f)\,\mathbf{S}(f,k)$$
where X(f,k) = [X_{1}(f,k),...,X_{M}(f,k)]^{T} (respectively S(f,k) = [S_{1}(f,k),...,S_{N}(f,k)]^{T}) is the STFT of {x(t)}_{1≤t≤T} (respectively {s(t)}_{1≤t≤T}) at the frequency bin $f\in\left[1,\frac{N_f}{2}+1\right]$ and the time bin k ∈ [1, N_{t}], and H is the Fourier transform of the mixing filters {h(l)}_{0≤l≤L}. Using an appropriate separation criterion, our objective is to find for each frequency bin a separation matrix F(f) that leads to an estimation of the original sources in the time-frequency domain:

$$\mathbf{Y}(f,k)=\mathbf{F}(f)\,\mathbf{X}(f,k)$$
The inverse STFT of the estimated sources in the frequency domain Y allows the recovery of the estimated sources y(t) = [y_{1} (t),...,y_{ N }(t)]^{T}in the time domain.
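The time-frequency processing chain described above can be sketched in a few lines of NumPy; the window length, hop size and array shapes below are our own illustrative choices, not values from the article:

```python
import numpy as np

def stft(x, n_fft=1024, hop=512):
    """STFT of a multichannel signal x of shape (M, T).
    Returns X of shape (M, n_fft // 2 + 1, n_frames)."""
    M, T = x.shape
    win = np.hanning(n_fft)
    n_frames = 1 + (T - n_fft) // hop
    X = np.empty((M, n_fft // 2 + 1, n_frames), dtype=complex)
    for k in range(n_frames):
        frame = x[:, k * hop:k * hop + n_fft] * win
        X[:, :, k] = np.fft.rfft(frame, axis=1)
    return X

def apply_separation(X, F):
    """Apply a per-frequency separation matrix F[f] (N x M) to the
    mixture STFT X (M x n_freq x n_frames): Y(f, k) = F(f) X(f, k)."""
    M, n_freq, n_frames = X.shape
    N = F.shape[1]
    Y = np.empty((N, n_freq, n_frames), dtype=complex)
    for f in range(n_freq):
        Y[:, f, :] = F[f] @ X[:, f, :]
    return Y
```

Applying the inverse STFT to `Y` (with overlap-add) would then recover the time-domain estimates.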
Separating the sources in each frequency bin independently introduces the permutation problem: the order of the estimated sources is not the same from one frequency to another. To solve it, we use the method proposed by Weihua and Fenggang and described in [10], which is based on the correlation of the signals between two adjacent frequencies. In this article, we do not investigate the permutation problem further and use the cited method for all the proposed algorithms.
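A minimal sketch of such a correlation-based alignment is given below, for the two-source case only (the method of [10] is more general; the envelope-correlation test is our simplified reading of it):

```python
import numpy as np

def align_permutations(Y):
    """Resolve the per-frequency permutation ambiguity for N = 2 sources:
    for each frequency bin, keep the permutation whose amplitude envelopes
    correlate best with those of the previous (already aligned) bin.
    Y has shape (2, n_freq, n_frames)."""
    N, n_freq, _ = Y.shape
    assert N == 2, "this sketch handles the two-source case only"
    Y = Y.copy()
    env = np.abs(Y)                      # amplitude envelopes per bin
    for f in range(1, n_freq):
        keep = np.corrcoef(env[0, f], env[0, f - 1])[0, 1] + \
               np.corrcoef(env[1, f], env[1, f - 1])[0, 1]
        swap = np.corrcoef(env[0, f], env[1, f - 1])[0, 1] + \
               np.corrcoef(env[1, f], env[0, f - 1])[0, 1]
        if swap > keep:                  # swapping matches the neighbors better
            Y[:, f] = Y[::-1, f]
            env[:, f] = env[::-1, f]
    return Y
```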
The separation matrix F(f) is estimated using a two-step blind separation algorithm: a fixed beamforming preprocessing step and a BSS step (cf. Figure 1). F(f) is written as the combination of the results of those two steps:

$$\mathbf{F}(f)=\mathbf{W}(f)\,\mathbf{B}(f)$$
where W(f) is the separation matrix estimated using a sparsity criterion and B(f) is a fixed beamforming filter. More details are presented in the following subsections (cf. Algorithm 1).
2.1 Beamforming preprocessing
The role of the beamformer is essentially to reduce the reverberation and the interferences coming from directions other than the desired ones. Once the reverberation is reduced, Equation (2) is better satisfied, which leads to an improved BSS quality.
We consider a set $\{\mathbf{B}(f)\}_{1\le f\le \frac{N_f}{2}+1}$ of fixed beamforming filters of size K × M, where K is the number of desired beams, K ≥ N. Those filters are calculated beforehand (before the beginning of the processing) and used in the beamforming preprocessing step (cf. Section 3). The outputs of the beamformers at each frequency f are:

$$\mathbf{Z}(f,k)=\mathbf{B}(f)\,\mathbf{X}(f,k)$$
2.2 Blind source separation
The BSS step consists of estimating a separation matrix W(f) that leads to separated sources at each frequency bin f. The separation matrix W(f) is estimated by minimizing, with respect to W(f), a cost function ψ based on a sparsity criterion, under a unit norm constraint for W(f). The chosen optimization technique is the natural gradient (cf. Section 4). The separation matrix is estimated from the beamformer outputs Z(f,k), and the estimated sources are then written as:

$$\mathbf{Y}(f,k)=\mathbf{W}(f)\,\mathbf{Z}(f,k)$$
3 Fixed beamforming using HRTF
In the case of robot audition, the geometry of the microphone array is fixed once and for all. To build the fixed beamformers, we need to determine the desired steering directions and the characteristics of the beam pattern (the beamwidth, the amplitude of the sidelobes and the position of the nulls). The beamformers are estimated only once for all scenarios using this spatial information, independently of the mixture measured at the sensors.
The least-squares (LS) technique [6] is used to estimate the beamformer filters that achieve the desired beam pattern according to a desired direction response. To estimate these beamformers, we need to calculate the steering vectors, which represent the phase delays of a plane wave evaluated at the microphone array elements.
In the free field, the steering vector of an M-element array at a frequency f and for a steering direction θ is known. For example, for a uniform linear array, we have:

$$\mathbf{a}(f,\theta)=\left[1,\ e^{-j2\pi f\frac{d\sin\theta}{c}},\ \ldots,\ e^{-j2\pi f\frac{(M-1)d\sin\theta}{c}}\right]^{T}$$
where d is the distance between two sensors and c is the speed of sound.
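For reference, this free-field model can be written directly in code; the array size, spacing and sound speed below are illustrative values of our own:

```python
import numpy as np

def steering_vector_linear(f, theta, M=4, d=0.05, c=343.0):
    """Free-field steering vector of an M-element uniform linear array
    (spacing d meters, sound speed c m/s) for a plane wave arriving
    from angle theta (radians) at frequency f (Hz)."""
    m = np.arange(M)                     # sensor indices 0..M-1
    return np.exp(-2j * np.pi * f * m * d * np.sin(theta) / c)
```

At broadside (theta = 0) all phase delays vanish, so the vector reduces to all ones.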
In the case of robot audition, the microphones are often mounted in the head of the robot (cf. Figure 2). The free field model of the steering vectors presented in Equation (7) does not take into account the influence of the head on the surrounding acoustic field; in this case, the microphone array manifold is unknown.
In human hearing, the sound source is spectrally filtered by the head and the pinna; a transfer function between the source and each ear is thus defined and referred to as the HRTF. The HRTF accounts for the interaural time difference^{b} (ITD), the interaural intensity difference^{c} (IID) and the shape of the head and the pinna. It defines how a sound emitted from a specific location and altered by the head and the pinna is received at an ear. The notion of HRTF remains the same if we replace the human head by a dummy head and the ears by two microphones. We extend the usual concept of binaural HRTF to the context of robot audition, where the humanoid is equipped with a microphone array. In our case, an HRTF h_{m}(f,θ) at frequency f characterizes how a signal emitted from a specific direction θ is received at the m-th sensor mounted in the head.
We propose to use the HRTFs as steering vectors for the calculation of the beamformer filters (cf. Figure 3) and to replace the unknown array manifold by a discrete distribution of HRTFs over a group of N_{S} a priori chosen steering directions $\Theta=\{\theta_1,\ldots,\theta_{N_S}\}$. The HRTFs are measured in an anechoic room as explained in Section 5.
Let h_{m}(f,θ) be the HRTF at frequency f from the emission point located at θ to the m-th sensor. The steering vector is then:

$$\mathbf{a}(f,\theta)=\left[h_{1}(f,\theta),\ldots,h_{M}(f,\theta)\right]^{T}$$
Given Equation (8), one can express the normalized LS beamformer for a desired direction θ_{i} as [6]:

$$\mathbf{b}(f,\theta_i)=\frac{\mathbf{R}_{\mathbf{aa}}^{-1}(f)\,\mathbf{a}(f,\theta_i)}{\mathbf{a}^{H}(f,\theta_i)\,\mathbf{R}_{\mathbf{aa}}^{-1}(f)\,\mathbf{a}(f,\theta_i)}$$
where $\mathbf{R}_{\mathbf{aa}}(f)=\frac{1}{N_S}\sum_{\theta\in\Theta}\mathbf{a}(f,\theta)\,\mathbf{a}^{H}(f,\theta)$. Given K desired steering directions θ_{1},...,θ_{K}, the beamforming matrix B(f) is:

$$\mathbf{B}(f)=\left[\mathbf{b}(f,\theta_1),\ldots,\mathbf{b}(f,\theta_K)\right]^{H}$$
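A small NumPy sketch of this construction is given below; the diagonal loading term is a hypothetical addition of ours for numerical stability, not part of the article's formulation:

```python
import numpy as np

def ls_beamformer(A, desired_idx, loading=1e-6):
    """Normalized LS beamformer from a grid of steering vectors.
    A has shape (M, N_S): one (HRTF-based) steering vector per candidate
    direction at the current frequency. Returns the M-tap filter steered
    towards column `desired_idx`."""
    M, N_S = A.shape
    Raa = (A @ A.conj().T) / N_S + loading * np.eye(M)   # R_aa estimate
    a = A[:, desired_idx]
    w = np.linalg.solve(Raa, a)                          # R_aa^{-1} a
    return w / (a.conj() @ w)                            # unit look-direction response

def beamforming_matrix(A, desired_indices):
    """Stack the beamformers for K desired directions into B (K x M),
    so that Z(f, k) = B(f) X(f, k)."""
    return np.stack([ls_beamformer(A, i).conj() for i in desired_indices])
```

By construction each beamformer has unit response towards its own steering vector, which is the normalization expressed by the denominator in Equation (9).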
In the following, we present the different configurations of the combined beamforming-BSS algorithm.
3.1 Beamforming with known DOA
If the directions of arrival (DOAs) of the sources are known a priori, typically from a source localization method, the beamforming filters are estimated using this spatial information (cf. Figure 4). The desired directions are then the DOAs of the sources, and we select the corresponding HRTFs to build the desired response vectors a(f,θ). This is an ideal method to compare our results with. Indeed, we consider that source localization is beyond the scope of this article (in [7], where the beamforming with known DOAs was proposed for a circular microphone array, the authors assumed that the DOAs are known a priori).
3.2 Beamforming with fixed DOA
Estimating the DOAs of the sources to build the beamformers is time consuming and not always accurate in reverberant environments. We therefore propose to build K fixed beams with arbitrary desired directions chosen such that they cover all the useful space directions (cf. Figure 5). We use the outputs of all the beamformers directly in the BSS algorithm. In this case, we still have an overdetermined separation problem with N sources and K mixtures.
3.3 Beamforming with beams selection
In this configuration, we still have K fixed beams with arbitrary desired directions, but we do not use all the outputs of those beamformers (cf. Figure 6). We select the N beamformer outputs with the highest energy, corresponding to the beams closest to the sources (we suppose that the energies of the sources are quite close to each other). In this case, after beamforming, we are in a determined separation problem with N sources and K = N mixtures (cf. Algorithm 2).
Fixed beamforming with beams selection can also be used to estimate the number of sources as well as rough DOAs. We fix a maximum number of sources N_{max} < K. In each frequency bin, after the beamforming filtering (5), we select the N_{max} beams with the highest energies (instead of selecting N beams as in the previous paragraph). Then, we build over all the selected steering directions a histogram of their overall number of occurrences (cf. Figure 7). After thresholding, we select the beams corresponding to the peaks (a peak is a local maximum of the number of selections of a beam over all the frequencies). The filters corresponding to those beams are our final beamforming filters, the number of peaks corresponds to the number of sources, and the corresponding steering directions provide a rough estimation of the DOAs.
4 BSS using sparsity criterion
In the BSS step, we estimate the separation matrix W(f) by minimizing, with respect to W(f), a cost function ψ based on a sparsity criterion, under a unit norm constraint for W(f):

$$\mathbf{W}(f)=\arg\min_{\mathbf{W}}\ \psi(\mathbf{W})\quad\text{subject to}\quad \left\|\mathbf{w}_{i}(f)\right\|=1,\ i=1,\ldots,N$$
The optimization technique used to update the separation matrix W(f) is the natural gradient. Section 4.1 summarizes the natural gradient algorithm [11], and Section 4.2 shows how we apply it to our cost function.
4.1 Natural gradient algorithm
The natural gradient is an optimization method proposed by Amari et al. [11]. In this modified gradient search, the standard gradient direction is altered according to the local Riemannian structure of the parameter space. This makes the search direction invariant to the statistical relationship between the parameters of the model and leads to statistically efficient learning performance [12].
Assume that we want to update a separation matrix W according to a loss function ψ(W). The gradient update of this matrix is given by:

$$\mathbf{W}_{t+1}=\mathbf{W}_{t}-\mu\,\nabla\psi(\mathbf{W}_{t})$$
where ∇ψ(W) is the gradient of the function ψ(W), μ is the step size and t refers to the iteration (or time) index. From [12], the natural gradient of a loss function ψ(W), noted $\tilde{\nabla}\psi(\mathbf{W})$, is given by:

$$\tilde{\nabla}\psi(\mathbf{W})=\nabla\psi(\mathbf{W})\,\mathbf{W}^{H}\mathbf{W}$$
The natural gradient update of the separation matrix W is then:

$$\mathbf{W}_{t+1}=\mathbf{W}_{t}-\mu\,\nabla\psi(\mathbf{W}_{t})\,\mathbf{W}_{t}^{H}\mathbf{W}_{t}$$
4.2 Sparsity separation criterion
Speech signals are known to be sparse in the time-frequency domain: the number of time-frequency points where the speech signal is active (i.e., of non-negligible energy) is small compared to the total number of time-frequency points (cf. Figure 8).
We consider a separation criterion based on the sparsity of the signals in the timefrequency domain. For every frequency bin, we look for a separation matrix W(f) that leads to the sparsest estimated sources Y(f,:) = [Y(f,1),...,Y(f,N_{ T })].
In the same manner, we define the mixture matrix in each frequency bin X(f,:) = [X(f ,1),...,X(f,N_{ T })].
To measure the sparsity of a signal, the l_{1} norm is the most commonly used measure thanks to its convexity [13]. The smaller the l_{1} norm of a signal, the sparser it is. However, the l_{1} norm is not the only measure of sparsity [13]. We recently presented a parameterized l_{p} norm algorithm for BSS, where we made the sparsity constraint harder through the iterations of the optimization process [14]. In this article, we use the l_{1} norm to measure the sparsity of the signal Y(f,:), and hence the cost function is:

$$\psi(\mathbf{W}(f))=\left\|\mathbf{Y}(f,:)\right\|_{1}=\sum_{i=1}^{N}\sum_{k=1}^{N_T}\left|Y_{i}(f,k)\right|$$
To obtain the sparsest estimated sources, we minimize ψ(W(f)) using the natural gradient search technique to find the optimum separation matrix W(f).
Differentiating ψ(W(f)) with respect to W(f), where f(Y(f,:)) = sign(Y(f,:)) is a matrix with the same size as Y(f,:) whose (i,j)-th entry is sign(Y_{i}(f,j)),^{d} the gradient of ψ(W(f)) is expressed as:

$$\nabla\psi(\mathbf{W}(f))=\mathbf{f}(\mathbf{Y}(f,:))\,\mathbf{Z}^{H}(f,:)$$
which gives the expression of the natural gradient of ψ(W_{t}(f)):

$$\tilde{\nabla}\psi(\mathbf{W}_{t}(f))=\mathbf{f}(\mathbf{Y}_{t}(f,:))\,\mathbf{Y}_{t}^{H}(f,:)\,\mathbf{W}_{t}(f)$$
The update equation of W_{t}(f) for a frequency bin f is then:

$$\mathbf{W}_{t+1}(f)=\mathbf{W}_{t}(f)-\mu\,\mathbf{G}_{t}(f)\,\mathbf{W}_{t}(f)$$

with $\mathbf{G}_{t}(f)=\mathbf{f}(\mathbf{Y}_{t}(f,:))\,\mathbf{Y}_{t}^{H}(f,:)$.
The convergence of the natural gradient is conditioned both by the initial coefficients W_{0}(f) of the separation matrix and by the step size of the update, and it is quite difficult to choose parameters that allow fast convergence without risking divergence. Douglas and Gupta [15] proposed to impose a scaling constraint on the separation matrix W_{t}(f) to maintain a constant gradient magnitude along the algorithm iterations. They assert that with this scaling and a fixed step size μ, the algorithm converges quickly and performs well independently of the magnitude of X(f,:) and W_{0}(f). Applying this scaling constraint, our update equation becomes:

$$\mathbf{W}_{t+1}(f)=c_{t}(f)\left[\mathbf{W}_{t}(f)-\mu\,c_{t}(f)\,\mathbf{G}_{t}(f)\,\mathbf{W}_{t}(f)\right]$$
with $c_{t}(f)=\frac{1}{\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}\left|g_{t}^{ij}(f)\right|}$ and $g_{t}^{ij}(f)=\left[\mathbf{G}_{t}(f)\right]_{ij}$.
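The scaled update can be sketched per frequency bin as follows; the step size, iteration count and the exact placement of the scaling factor follow our reading of the text and should be treated as assumptions:

```python
import numpy as np

def bss_l1_natural_gradient(Z, n_iter=50, mu=0.1):
    """Sparsity-based BSS in one frequency bin: minimize the l1 norm of
    Y = W Z with a scaled natural gradient update.
    Z: (N, N_T) complex beamformer outputs; returns W and the cost history."""
    N = Z.shape[0]
    W = np.eye(N, dtype=complex)
    costs = []
    for _ in range(n_iter):
        Y = W @ Z
        costs.append(np.abs(Y).sum())                          # psi(W) = ||Y||_1
        # complex sign: f(Y) = Y / |Y| (0 where Y = 0)
        sgn = np.where(np.abs(Y) > 0, Y / np.maximum(np.abs(Y), 1e-12), 0)
        G = sgn @ Y.conj().T                                   # G_t = f(Y) Y^H
        c = 1.0 / (np.abs(G).sum() / N)                        # scaling constraint c_t
        W = c * (W - mu * c * G @ W)                           # scaled NG update
    return W, costs
```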
4.3 Initialization
When we are in an overdetermined case, we use a whitening process for the initialization of the separation matrix W_{0}. Whitening is an important preprocessing in an overdetermined BSS algorithm, as it focuses the energy of the received signals in the useful signal space. The separation matrix is initialized as follows:

$$\mathbf{W}_{0}(f)=\mathbf{D}_{N}^{-1/2}\,\mathbf{E}_{:N}^{H}$$

where D_{N} is the matrix containing the first N rows and N columns of the matrix D and E_{:N} is the matrix containing the first N columns of the matrix E. D and E are respectively the diagonal matrix and the unitary matrix of the singular value decomposition of the autocorrelation matrix of the received data X(f,:) or of the filtered data after beamforming Z(f,:).
In the determined case, in particular when we select the beams with the highest energy after the beamforming filtering or when the steering directions correspond to the directions of arrival of the sources, the separation matrix is initialized with the identity matrix:

$$\mathbf{W}_{0}(f)=\mathbf{I}_{N}$$
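This whitening initialization can be sketched in NumPy as follows (function and variable names are ours):

```python
import numpy as np

def whitening_init(Z, N):
    """Initialize W_0 by projecting the K-channel data onto the N-dimensional
    signal subspace, via an SVD of the data autocorrelation matrix.
    Z: (K, N_T) complex; returns W_0 of shape (N, K)."""
    K, N_T = Z.shape
    Rzz = Z @ Z.conj().T / N_T            # autocorrelation matrix
    E, d, _ = np.linalg.svd(Rzz)          # Rzz = E diag(d) E^H (Hermitian PSD)
    D_N = np.diag(1.0 / np.sqrt(d[:N]))   # keep the N largest singular values
    return D_N @ E[:, :N].conj().T
```

With this choice, the covariance of the whitened signals W_0 Z equals the identity, which is the point of the preprocessing.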
5 Experimental results
5.1 Experimental database
To evaluate the proposed BSS techniques, we built two databases: an HRTF database and a speech database.
5.1.1 HRTF database
We recorded the HRTF database in the anechoic room of Telecom ParisTech (cf. Figure 2) using the Golay codes process [16]. As we are in a robot audition context, we model the future robot by a child-sized dummy (1.20 m tall) with 16 sensors mounted in its head for the sound acquisition process (cf. Figure 9).
We measured 504 HRTFs for each microphone as follows:

- 72 azimuth angles from 0° to 355° with a 5° step

- 7 elevation angles: −40°, −27°, 0°, 20°, 45°, 60° and 90°
To measure the HRTFs, the dummy was fixed on a turntable in the center of the loudspeaker arc in the anechoic room (cf. Figure 2). For each azimuth angle, a sequence of complementary Golay codes was emitted sequentially from each loudspeaker (to vary the elevation) and recorded with the 16-sensor array. This operation was repeated for all the azimuth angles. Complementary Golay sequences have the useful property that their autocorrelation functions have complementary sidelobes: the sum of the two autocorrelation sequences is exactly zero everywhere except at the origin. Using this property and the recorded complementary Golay codes, the HRTFs are calculated as in [16].
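The complementary-autocorrelation property can be checked in a few lines; the recursive construction below is a standard way to generate such a pair (the article does not specify which sequences were used):

```python
import numpy as np

def golay_pair(n):
    """Generate a complementary Golay pair of length 2**n by the
    standard recursive construction: (a, b) -> (a|b, a|-b)."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

# Defining property used for HRTF measurement: the autocorrelations
# of the pair sum to zero everywhere except at the origin.
a, b = golay_pair(6)                                   # length-64 sequences
r = np.correlate(a, a, 'full') + np.correlate(b, b, 'full')
```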
Details about the experimental process of HRTF calculation, as well as the HRTF databases at the sampling frequencies of 48 and 16 kHz, are available at http://www.tsi.telecomparistech.fr/aao/?p=347.
5.1.2 Test database
The test signals were recorded in a moderately reverberant room with a reverberation time RT_{30} = 300 ms (cf. Figure 10). Figure 11 shows the different positions of the sources in the room. We chose to evaluate the proposed algorithm on a separation of two sources: the first source is always the one placed at 0°, and the second source is chosen from 20° to 90°.
The output signals x(t) are the convolutions of 40 pairs of speech sources (male and female, speaking French and English) with two of the impulse responses {h(l)}_{0≤l≤L} measured for the directions of arrival presented in Figure 11.
The characteristics of the signals and the BSS algorithms are summarized in Table 1.
5.2 Evaluation results
In this section, we evaluate different configurations of the presented algorithm^{e}:

1. The beamforming stage only: beamforming of 37 lobes from −90° to 90° with a step angle of 5° (BF[5°])

2. The BSS algorithm only:

   (a) with minimization of the l_{1} norm (BSSl_{1})

   (b) with ICA from [15] (ICA)

3. The two-stage algorithm, BSS with the beamforming preprocessing:

   (a) beamforming of N lobes in the DOAs of the sources (BF[DOA]+BSSl_{1})

   (b) beamforming of 7 lobes from −90° to 90° with a step angle of 30° (BF[30°]+BSSl_{1} when the l_{1} norm minimization is used in the BSS step and BF[30°]+ICA when ICA is used in the BSS step)

   (c) beamforming of 13 lobes from −90° to 90° with a step angle of 15° (BF[15°]+BSSl_{1})

   (d) beamforming of 19 lobes from −90° to 90° with a step angle of 10° (BF[10°]+BSSl_{1})

   (e) beamforming of 37 lobes from −90° to 90° with a step angle of 5° (BF[5°]+BSSl_{1})

   (f) beamforming of 7 lobes from −90° to 90° with a step angle of 30°, with selection of the N beams containing the highest energy before proceeding to the BSS (BF[30°]+BS+BSSl_{1})

   (g) beamforming of 37 lobes from −90° to 90° with a step angle of 5°, with selection of the N beams containing the highest energy before proceeding to the BSS (BF[5°]+BS+BSSl_{1})
We evaluate the proposed two-stage algorithm by the signal-to-interference ratio (SIR) and the signal-to-distortion ratio (SDR), estimated using the BSS Eval toolbox [17]. All the presented curves are averaged over the 40 speech pairs.
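As a rough illustration of the SIR metric, one can project an estimate onto the true signals; the actual BSS Eval decomposition [17] is more permissive (it allows time-invariant filtered versions of the target), so the function below is only a simplified stand-in:

```python
import numpy as np

def sir_db(y, s_target, s_interf):
    """Rough SIR of an estimated source y: project y onto the true target
    and onto the interfering source, then compare the projection energies.
    Assumes real-valued, roughly orthogonal reference signals."""
    pt = (y @ s_target) / (s_target @ s_target) * s_target   # target part
    pi = (y @ s_interf) / (s_interf @ s_interf) * s_interf   # interference part
    return 10 * np.log10((pt @ pt) / (pi @ pi))
```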
5.2.1 Influence of the beamforming preprocessing
Figures 12 and 13 show that the SIR and SDR of the two-stage algorithm with the fixed beamforming preprocessing, BF[5°]+BSSl_{1} and BF[30°]+BSSl_{1}, are better than the SIR and SDR of the l_{1} norm separation algorithm alone, BSSl_{1}, and much better than the ones obtained by the fixed beamforming BF[5°] only. The SIR and SDR of the signals received at microphones 1 and 2 (labeled as sensors data in the figures) are taken as references to illustrate the performance gain of our method. However, this increase in SIR and SDR with the fixed beamforming preprocessing is limited and does not reach the performance of the beamforming preprocessing with known DOAs, BF[DOA]+BSSl_{1}, as shown in Figures 14 and 15. But we can overcome this limitation with the beams selection, as shown in the sequel.
Figures 16 and 17 show the SIR and SDR obtained with different inter-beam angles of the beamforming preprocessing, the steering directions varying from −90° to 90°: beamforming with 7 beams with a step angle of 30° (BF[30°]+BSSl_{1}), 13 beams with a step angle of 15° (BF[15°]+BSSl_{1}), 19 beams with a step angle of 10° (BF[10°]+BSSl_{1}) and 37 beams with a step angle of 5° (BF[5°]+BSSl_{1}). The results show that when we increase the number of beams, the SIR and especially the SDR increase. For BF[15°]+BSSl_{1}, BF[10°]+BSSl_{1} and BF[5°]+BSSl_{1}, the beamforming preprocessing increases the SDR of the estimated sources compared with the single-stage BSSl_{1} algorithm. The SIR with a beamforming preprocessing is also better than with the single-stage BSSl_{1} algorithm, for all the tested configurations of the fixed steering direction beamforming preprocessing.
Influence of the beams selection
As we can observe from Figures 12, 13, 14, and 15, the beamforming preprocessing with beams selection (BF[30°]+BS+BSSl_{1} and BF[5°]+BS+BSSl_{1}) and the beamforming preprocessing with known directions of arrival (BF[DOA]+BSSl_{1}) have close results in terms of SIR (cf. Figures 12 and 14) and SDR (cf. Figures 13 and 15). However, in a reverberant environment where the directions of arrival cannot be estimated accurately, the beamforming preprocessing with beams selection is a good solution to improve the SIR and SDR of the estimated sources compared to the use of the BSS algorithm only (BSSl_{1}).
Comparing BF[5°]+BS+BSSl_{1} in Figure 12 and BF[30°]+BS+BSSl_{1} in Figure 14 shows that the impact of the inter-beam angle on the separation gain is quite weak. However, the beamforming preprocessing with beams selection with a 5° inter-beam angle allows us to estimate the DOAs of the sources correctly with a 5° resolution, as shown in Figure 18. The latter represents the selected beam directions for all considered experiments (i.e., the 40 experiments) and for different source locations.
5.2.2 Comparison between BSSl_{1} and ICA
Independent component analysis and the l_{1} norm minimization give quite close results, with or without the preprocessing step. However, we believe that replacing BSSl_{1} by BSSl_{p} with p < 1, or with a varying p value, might lead to a significant improvement of the separation quality. This observation is based on the preliminary results we obtained in [14] and will be the focus of future investigations.
5.2.3 Convergence analysis
We analyze the convergence of the proposed algorithm by observing the convergence rates through the iterations and for the considered DOAs (cf. Figure 19). Each curve represents the cost function (15) averaged over all the frequencies. As we can see in Figure 19b, our iterative algorithm converges quite quickly (typically 10 to 20 iterations) towards its steady state. We notice also that the convergence rate of the proposed two-stage method with beams selection is better than that of BSSl_{1}: in this context, the separation algorithm BSSl_{1} converges to its steady state after 30 to 40 iterations. Moreover, the cost function of the two-stage algorithm reaches lower values than that of the separation algorithm alone; thus, the beamforming preprocessing helps the convergence.
6 Conclusion
In this article, we presented a two-stage BSS algorithm for robot audition. The first stage is a preprocessing step with fixed beamforming. To deal with the effect of the head of the robot on the acoustic near field and to model the manifold of the sensor array, we used HRTFs as steering vectors in the beamformer estimation step. The second stage is a BSS algorithm exploiting the sparsity of the sources in the time-frequency domain.
We tested different configurations of this algorithm, with the steering directions of the beams equal to the directions of arrival of the sources and with fixed steering directions. We also varied the step angle between the beams. The beamforming preprocessing improves the separation performance as it reduces the reverberation and noise effects. The maximum gain is obtained when we select the beams with the highest energies and use the corresponding filters as beamformers, or when the sources' DOAs are known. The beamforming preprocessing with fixed steering directions also performs well and requires neither an estimation of the DOAs nor a beams selection, which saves processing time. Using the 5° step beamforming preprocessing with beams selection, we can also obtain a rough estimation of the directions of arrival of the sources.
Endnotes
^{a}Romeo project: http://www.projetromeo.com. ^{b}The ITD is the difference in arrival times of a sound wavefront at the left and right ears. ^{c}The IID is the amplitude difference of a sound that reaches the right and left ears. ^{d}For a complex number z, $\mathsf{sign}(z)=\frac{z}{|z|}$. ^{e}The names of the algorithms used in the legends of the figures are given between brackets.
Algorithm 1 Combined beamforming and BSS algorithm

1. Input:

   (a) the output of the microphone array x = [x(t_{1}),...,x(t_{T})]

   (b) the precalculated beamforming filters $\{\mathbf{B}(f)\}_{1\le f\le \frac{N_f}{2}+1}$

2. $\{\mathbf{X}(f,k)\}_{1\le f\le N_f,\,1\le k\le N_T}=\mathrm{STFT}(\mathbf{x})$

3. for each frequency bin f:

   (a) beamforming preprocessing step: Z(f,:) = B(f)X(f,:)

   (b) initialization step: W(f) = W_{0}(f)

   (c) Y_{0}(f,:) = W_{0}(f)Z(f,:)

   (d) for each iteration t: blind source separation step to estimate W(f)

4. Permutation problem solving

5. Output: the estimated sources $\mathbf{y}=\mathrm{ISTFT}\left(\{\mathbf{Y}(f,k)\}_{1\le f\le N_f,\,1\le k\le N_T}\right)$
Algorithm 2 Beam selection algorithm

1. SelectedBeams = ∅

2. For each frequency bin f:

   (a) Form the K beams (beamformer outputs): Z(f, :) = B(f)X(f, :), with Z(f, :) = [z_1(f, :), ..., z_K(f, :)]^T

   (b) Compute the energy of each beamformer output: E(f) = [e_1(f), ..., e_K(f)], with e_i(f) = (1/N_T) ∑_{k=1}^{N_T} |z_i(f, k)|²

   (c) Sort E(f) in decreasing order: Beams = sort(E(f)) lists the beams by decreasing energy

   (d) Select the N highest energies and store the corresponding beam indexes in B

   (e) SelectedBeams = SelectedBeams ∪ B

3. Compute the frequency of appearance of each beam and store the occurrences in I.

4. Select the N beams with the highest occurrences.
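Algorithm 2 amounts to a majority vote over frequency bins: each bin nominates its N highest-energy beams, and the beams nominated most often are kept. A minimal Python sketch, with assumed (illustrative) shapes — X is the mic STFT of size (n_bins, n_frames, n_mics) and B holds the fixed beamformers of size (n_bins, K, n_mics):

```python
import numpy as np
from collections import Counter

def select_beams(X, B, n_sources):
    """Return the indexes of the n_sources beams that most often rank
    among the highest-energy beamformer outputs across frequency bins."""
    votes = Counter()                          # occurrences of each beam index
    for f in range(X.shape[0]):
        Z = X[f] @ B[f].T                      # the K beam outputs at bin f
        e = np.mean(np.abs(Z) ** 2, axis=0)    # energy e_i(f) of each beam
        top = np.argsort(e)[::-1][:n_sources]  # N highest-energy beams at this bin
        votes.update(top.tolist())             # accumulate into SelectedBeams
    # keep the N beams with the highest occurrence over all bins
    return [beam for beam, _ in votes.most_common(n_sources)]
```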
Acknowledgements
This work is funded by the Île-de-France region, the General Directorate for Competitiveness, Industry and Services (DGCIS) and the City of Paris, as a part of the ROMEO project.
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Maazaoui, M., Abed-Meraim, K. & Grenier, Y. Blind source separation for robot audition using fixed HRTF beamforming. EURASIP J. Adv. Signal Process. 2012, 58 (2012). https://doi.org/10.1186/1687-6180-2012-58