Open Access

Independent vector analysis based on overlapped cliques of variable width for frequency-domain blind signal separation

EURASIP Journal on Advances in Signal Processing 2012, 2012:113

https://doi.org/10.1186/1687-6180-2012-113

Received: 15 June 2011

Accepted: 23 May 2012

Published: 23 May 2012

Abstract

A novel method is proposed to improve the performance of independent vector analysis (IVA) for blind signal separation of acoustic mixtures. IVA is a frequency-domain approach that successfully resolves the well-known permutation problem by applying a spherical dependency model to all pairs of frequency bins. The dependency model of IVA is equivalent to a single clique in an undirected graph; a clique in graph theory is defined as a subset of vertices in which any pair of vertices is connected by an undirected edge. Therefore, IVA imposes the same amount of statistical dependency on every pair of frequency bins, which may not match the characteristics of real-world signals. The proposed method allows variable amounts of statistical dependency according to the correlation coefficients observed in real acoustic signals and, hence, enables more accurate modeling of statistical dependencies. A number of cliques constitute the new dependency graph so that neighboring frequency bins are assigned to the same clique, while distant bins are assigned to different cliques. The permutation ambiguity is resolved by overlapped frequency bins between neighboring cliques. For speech signals, we observed especially strong correlations across neighboring frequency bins and a decrease in these correlations with an increase in the distance between bins. The clique sizes are either fixed, or determined by the reciprocal of the mel-frequency scale to impose a wider dependency on low-frequency components. Experimental results showed improved performance over conventional IVA. The signal-to-interference ratio improved from 15.5 to 18.8 dB on average for seven different source locations. When we varied the clique sizes according to the observed correlations, the stability of the proposed method increased with a large number of cliques.

Keywords

blind signal separation (BSS); independent component analysis (ICA); independent vector analysis (IVA)

1 Introduction

When an audio signal is recorded by a microphone in a closed room, it reaches the microphone via not only a direct path, but also infinitely many reverberant paths. The source sound wave is delayed in time and its energy is absorbed by walls when it is delivered by a reverberant path. In order to make the problem practically tractable, the time delay is usually limited to a certain number by which the signal energy may almost disappear through repeated reflections. The signal recorded by a digital microphone can then be modeled by a discrete convolution of a finite impulse response (FIR) filter and the source signal [1–3]. When there are multiple microphones and multiple sources, each microphone recording is expressed by the sum of the convolutions of corresponding transfer functions and source signals [4–6] such that
x_j(t) = \sum_{i=1}^{M} \sum_{\tau=0}^{T} a_{ji}(\tau)\, s_i(t-\tau) = \sum_{i=1}^{M} a_{ji}(t) * s_i(t), \quad j = 1, \ldots, N,
(1)

where the integer numbers j, M, N, and T are the microphone index, the number of sources, the number of microphones, and the order of the FIR filter, respectively. The time-domain sequences x_j(t) and s_i(t) are the signals recorded by microphone j and generated by source i, respectively, and a_ji(t) is the coefficient at time t of the FIR filter for the transfer function from source i to microphone j; it is affected by the recording environment, including the source and microphone locations. To ensure that the linear transformation is invertible, the number of sources should be equal to the number of microphones, i.e., N = M [4].

This type of problem is often called blind signal separation (BSS) because there is no assumption of the source characteristics. Many studies have been carried out to tackle BSS problems based on independent component analysis (ICA), which minimizes the statistical dependency among the output signals [4–8]. However, direct inversion of the time-domain mixing filter in Equation 1 is difficult and often leads to unstable solutions. To obtain a more stable convergence, the short-time Fourier transform (STFT) is used to convert the convolution in Equation 1 to multiplications in the frequency domain [5]:
X_j(\omega, k) = \sum_{i=1}^{M} A_{ji}(\omega)\, S_i(\omega, k), \quad j = 1, \ldots, N,
(2)
where ω is the center frequency of each STFT component, and the complex values X_j(ω, k), A_ji(ω), and S_i(ω, k) are the STFT components of x_j(t), a_ji(t), and s_i(t), respectively. Note that another discrete time domain exists, denoted by the dummy variable k; it is different from the real time variable t, as each value of k corresponds to a frame of the STFT. The value of A_ji(ω) is assumed to be constant over time, so it is not a function of k. Because we use the discrete STFT, the center frequency of each discretized frequency bin is expressed as \omega_b = \frac{b}{B}\omega_{\max}, where B is the total number of frequency bins, b denotes the frequency bin number, and ω_max is the maximum frequency, equal to half of the sampling rate (the Nyquist frequency). This means that frequency-domain BSS methods only consider the STFT components at frequencies in [0, π] [5]. The components at frequencies in [−π, 0] can be reconstructed perfectly because a real-valued time-domain signal has a conjugate-symmetric Fourier transform: X(-\omega) = \bar{X}(\omega) for ω ∈ [0, π], where \bar{X}(\omega) is the complex conjugate of X(ω). For a more compact notation, we rewrite Equation 2 as
\mathbf{x}_b[k] = A_b \mathbf{s}_b[k], \quad b = 1, 2, \ldots, B,
(3)
where \mathbf{x}_b[k] = [X_1(\omega_b, k) \ldots X_N(\omega_b, k)]^T, \mathbf{s}_b[k] = [S_1(\omega_b, k) \ldots S_M(\omega_b, k)]^T, and A_b is an N × M matrix whose (j, i)th element is A_{ji}(\omega_b). Dealing with the signals in the frequency domain improves the performance, since longer filter lengths are better handled in the frequency domain and the convolved-mixture problem reduces to an instantaneous mixture problem in each frequency bin; this is expressed as
\mathbf{y}_b[k] = W_b \mathbf{x}_b[k], \quad b = 1, 2, \ldots, B,
(4)

where \mathbf{y}_b[k] is a vector of M estimated independent sources and W_b is an M × N matrix. Ideally, when W_b = A_b^{-1}, we can perfectly reconstruct the original sources by \mathbf{y}_b[k] = A_b^{-1} A_b \mathbf{s}_b[k] = \mathbf{s}_b[k]. However, all frequency-domain ICA algorithms inherently suffer from permutation and scaling ambiguity because they assume different frequency components to be independent [4, 9]. The instantaneous ICA may assign individual frequency bins of a single source to different outputs, so grouping the frequency components of individual source signals is required for the success of frequency-domain BSS [10]. One of the simplest solutions is smoothing the frequency-domain filter [10–12] at the expense of performance because of the lost frequency resolution. There are other methods for colored signals, such as explicitly matching components with larger inter-frequency correlations of signal envelopes [13–15].
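The per-bin instantaneous model of Equations 3 and 4 can be sketched numerically. In this illustrative example (all array shapes and values are synthetic), applying the exact inverse of each A_b recovers the sources; a blind per-bin estimate, by contrast, may return the rows of each W_b in a different order, which is the permutation ambiguity.

```python
import numpy as np

# Per-bin instantaneous model (Equations 3-4) with synthetic data:
# each frequency bin b has its own M x M mixing matrix A_b, and the
# ideal separator is its inverse.
rng = np.random.default_rng(1)
B, M, K = 4, 2, 10                       # bins, sources, frames

s = rng.standard_normal((B, M, K)) + 1j * rng.standard_normal((B, M, K))
A = rng.standard_normal((B, M, M)) + 1j * rng.standard_normal((B, M, M))

x = np.einsum('bij,bjk->bik', A, s)      # x_b[k] = A_b s_b[k]
W = np.linalg.inv(A)                     # ideal W_b = A_b^{-1}
y = np.einsum('bij,bjk->bik', W, x)      # y_b[k] = W_b x_b[k]
assert np.allclose(y, s)                 # perfect reconstruction
```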

Recently, a method called independent vector analysis (IVA) has been developed to overcome the permutation problem by embedding statistical dependency across different frequency components [16–19]. The joint dependency model assumes that the frequency bins of the acoustic sources have radially symmetric distributions [20]. Because speech signals are known to be spherically invariant random processes in the frequency domain [21], such an assumption seems valid and also results in decent separation results. However, when compared to the frequency-domain ICA followed by perfect permutation correction, the separation performance of IVA using spherically symmetric joint densities is slightly inferior [19]. This suggests that such source priors do not exactly match the distribution of speech signals and that the IVA performance for speech separation can be improved by finding better dependency models [22, 23].

We propose a new dependency model for IVA. The single and fully-connected clique is decomposed into many cliques of smaller sizes. A new objective function is derived to account for strong dependency inside the individual cliques and weak dependency across the cliques by retaining a considerable amount of overlap between adjacent cliques. The clique sizes are either fixed or determined by a mel-scale with its frequency index reversed; the latter was proven to be more robust to the increased number of cliques through simulated 2 × 2 speech separation experiments.

This article is organized as follows. Section 2 explains conventional IVA; Section 3 gives a detailed algorithm of the proposed method to contrast with IVA. Section 4 presents the results of the simulated speech separation experiments, and Section 5 summarizes the proposed method and its future extensions.

2 IVA

The key idea behind IVA is that all of the frequency components of a single source are regarded as a single vector, the components of which are dependent on one another. The independence between source vectors is approximated by a multivariate, joint probability density function (pdf) of the components from each source vector, and the joint pdf is maximized rather than the individual independencies between each frequency bin. The IVA model consists of a set of basic ICA models where the univariate sources across different dimensions have some dependency such that they can be grouped and aligned as a multidimensional variable.

Figure 1 illustrates a 2 × 2 IVA mixture model. Let the multidimensional source vector be \mathbf{s}_i = [s_i^1, s_i^2, \ldots, s_i^B]^T for i = 1, 2. Each component of \mathbf{s}_1 is linearly mixed with the component in the same dimension of \mathbf{s}_2 such that
Figure 1

Mixture model of IVA.

\begin{bmatrix} x_1^b \\ x_2^b \end{bmatrix} = \begin{bmatrix} a_{11}^b & a_{12}^b \\ a_{21}^b & a_{22}^b \end{bmatrix} \begin{bmatrix} s_1^b \\ s_2^b \end{bmatrix} = \begin{bmatrix} a_{11}^b s_1^b + a_{12}^b s_2^b \\ a_{21}^b s_1^b + a_{22}^b s_2^b \end{bmatrix},
(5)
for b = 1, . . . , B. For microphone j = 1, 2, the observation vector is expressed as
\mathbf{x}_j = \begin{bmatrix} x_j^1 \\ x_j^2 \\ \vdots \\ x_j^B \end{bmatrix} = \begin{bmatrix} a_{j1}^1 s_1^1 + a_{j2}^1 s_2^1 \\ a_{j1}^2 s_1^2 + a_{j2}^2 s_2^2 \\ \vdots \\ a_{j1}^B s_1^B + a_{j2}^B s_2^B \end{bmatrix}.
(6)

The mixing of the multivariate sources is dimensionally constrained so that a linear mixture model is formulated in each layer. The instantaneous ICA is extended to a formulation with multidimensional variables or vectors, where the mixing process is constrained to the sources on the same horizontal layer or on the same dimensions. The joint dependency within the dependent sources is modeled by a multidimensional pdf, and hence, correct permutation is achieved.

To derive the objective function of IVA, a single dimension of the estimated sources in Equation 4 is extracted, and a new vector is constructed by collecting the source coefficients of all the frequency bins. The source estimate y i is expressed by the following matrix-vector multiplication:
\mathbf{y}_i = \begin{bmatrix} y_i^1 \\ y_i^2 \\ \vdots \\ y_i^B \end{bmatrix} = \begin{bmatrix} \sum_{j=1}^{N} w_{ij}^1 x_j^1 \\ \sum_{j=1}^{N} w_{ij}^2 x_j^2 \\ \vdots \\ \sum_{j=1}^{N} w_{ij}^B x_j^B \end{bmatrix}
(7)
= \begin{bmatrix} \mathrm{diag}(w_{i1}^1, \ldots, w_{i1}^B) & \cdots & \mathrm{diag}(w_{iN}^1, \ldots, w_{iN}^B) \end{bmatrix} \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \\ \vdots \\ \mathbf{x}_N \end{bmatrix},
(8)
where \mathbf{w}_i^b is the ith row of matrix W_b and w_{ij}^b is the jth element of \mathbf{w}_i^b. For a simple derivation of the IVA algorithm, we assume that y_i^b has unit variance to eliminate the variance terms from the original IVA learning algorithm [19]. This can easily be achieved by scaling \mathbf{w}_i^b appropriately such that
\mathbf{w}_i^b \leftarrow \frac{\mathbf{w}_i^b}{\sqrt{E\left[ |y_i^b|^2 \right]}}.
(9)
In resynthesis, the above normalization is reversed to restore the original scales. The likelihood of y i is computed by the following multivariate pdf [19, 20]:
p(\mathbf{y}_i) \propto \exp\left( -\sqrt{\sum_{b=1}^{B} |y_i^b|^2} \right).
(10)

The goal of IVA is optimizing {W1, W2, . . . , W B } to maximize the independence among the separated sources, {y1, y2, . . . , y M }, where the independence is approximated by the sum of the log likelihoods of the given data computed by Equation (10). The detailed learning algorithm can be found in [19, 20].
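As a small illustration of the spherical prior in Equation (10): the negative log-likelihood of one source vector (up to an additive constant) is the L2 norm of its frequency-bin coefficients, which is what couples all bins into a single clique. The function name below is illustrative.

```python
import numpy as np

# Spherical IVA prior of Equation (10): the penalty on one estimated
# source is the L2 norm of its vector of frequency-bin coefficients.
def neg_log_prior(y_i):
    """y_i: complex array of shape (B,) -- one source, all bins."""
    return np.sqrt(np.sum(np.abs(y_i) ** 2))

print(neg_log_prior(np.array([3.0 + 0j, 4.0 + 0j])))   # -> 5.0
```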

Figure 2 illustrates the mixing assumption and how the IVA algorithm works. Two sources are mixed at different amounts in different frequency bins. To find y1 and y2 for the estimates of s1 and s2, IVA instead estimates the unmixing matrices to minimize the dependency between different sources while maintaining strong dependency across frequency bins. There is only a single dependency model in which all the frequency bins distinguished by their center frequencies are connected to one another: that is, the spherical dependency described by Equation (10).
Figure 2

Mixing and separation models of conventional IVA.

3 Proposed dependency models for IVA

For real sound sources, it is unreasonable for neighboring and distant frequency components to be assigned the same dependency, because the dependency between neighboring frequency components is much stronger than that between distant ones. This section describes the proposed dependency models, in which the single and fully connected statistical dependency of IVA is decomposed into several cliques whose sizes are either fixed or mel-scaled.

3.1 Overlapped cliques of a fixed size

The statistical dependency between adjacent frequency components is much larger than that between distant components. For example, the dependency between y_i^b and y_i^{b+1} for an arbitrary b is much stronger than that between y_i^b and y_i^{b+k} when k ≫ 1. We considered the difference in center frequencies of the STFT components in the proposed dependency model. As shown in Figure 3, the clique of the components of the estimated source vectors \mathbf{y}_i was broken into several cliques in order to eliminate the direct dependency between distant frequency bins. This segmentation of the spherical model can be visualized as a chain of cliques [23]. The dependency among the source components propagates through chain-like overlaps of spherical dependencies such that the dependency between components weakens as the distance between them grows. The corresponding multivariate pdf is given in the following form:
Figure 3

Illustration of the proposed dependency model.

p(\mathbf{y}_i) \propto \exp\left( -\sum_{c=1}^{C} \sqrt{\sum_{b=f_c}^{l_c} |y_i^b|^2} \right),
(11)
where C is the number of cliques, and f c and l c are the first and last indices, respectively, of clique c designed to satisfy the condition
f_c < l_{c-1}, \quad c = 2, 3, \ldots, C,
(12)
so that the series of cliques have chained overlaps. With the proposed source prior in Equation (11), we derive a new learning algorithm to find a set of linear transformation matrices that make the components as statistically independent as possible, such that
\{W_b^{*}\} = \arg\max_{\{W_b\}} \mathcal{L}(\{W_b\}),
(13)
where the log likelihood function is defined as
\mathcal{L}(\{W_b\}) = \log \prod_{b=1}^{B} |\det W_b| \prod_{i=1}^{M} p(\mathbf{y}_i) = \sum_{b=1}^{B} \log |\det W_b| + \sum_{i=1}^{M} \log p(\mathbf{y}_i) = \sum_{b=1}^{B} \log |\det W_b| - \sum_{i=1}^{M} \sum_{c=1}^{C} \sqrt{\sum_{b=f_c}^{l_c} |y_i^b|^2},
(14)
where M is the number of sources defined in Equation (1). We apply the natural gradient learning rule [24] to W b at each frequency bin b:
\Delta W_b \propto \left( I - E\left[ \varphi(\mathbf{y}_b)\, \mathbf{y}_b^{\mathrm{H}} \right] \right) W_b,
(15)
where I is an M × M identity matrix, (·) H is the Hermitian transpose operator, and φ(y b ) is a vector function whose ith element is
\varphi(\mathbf{y}_b)_i = -\frac{\partial \log p(\mathbf{y}_i)}{\partial y_i^b} = \sum_{c \in S_b} \frac{y_i^b}{\sqrt{\sum_{b'=f_c}^{l_c} |y_i^{b'}|^2}},
(16)
where S b is a set of cliques that includes bin b. At every adaptation step, W b is constrained to be orthogonal by the following symmetric decorrelation scheme:
W_b \leftarrow \left( W_b W_b^{\mathrm{H}} \right)^{-\frac{1}{2}} W_b, \quad b = 1, 2, \ldots, B.
(17)
At the end of the learning, the well-known minimal distortion principle [25] is applied to W b by
W_b \leftarrow \mathrm{diag}\left( W_b^{-1} \right) W_b, \quad b = 1, 2, \ldots, B.
(18)
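One adaptation step of Equations (15)-(17) can be sketched as follows. This is an illustrative outline: `cliques`, the step size `lr`, and the array shapes are chosen for the example rather than taken from the article.

```python
import numpy as np

# One adaptation step of Equations (15)-(17), sketched for complex
# STFT data X of shape (B, M, K) and unmixing matrices W of shape
# (B, M, M).  `cliques` is a list of 1-based inclusive (f_c, l_c)
# pairs; `lr` is an illustrative step size.
def iva_clique_step(W, X, cliques, lr=0.1):
    B, M, K = X.shape
    Y = np.einsum('bij,bjk->bik', W, X)            # y_b[k] = W_b x_b[k]

    # Score function of Eq. (16): each clique containing bin b
    # contributes y_i^b divided by that clique's L2 norm for source i.
    phi = np.zeros_like(Y)
    for f, l in cliques:
        seg = Y[f - 1:l]                            # bins f..l, all sources
        norm = np.sqrt(np.sum(np.abs(seg) ** 2, axis=0, keepdims=True))
        phi[f - 1:l] += seg / np.maximum(norm, 1e-12)

    for b in range(B):
        # Natural-gradient update of Eq. (15), averaging over frames.
        E = phi[b] @ Y[b].conj().T / K
        W[b] = W[b] + lr * (np.eye(M) - E) @ W[b]
        # Symmetric decorrelation of Eq. (17): W_b <- (W_b W_b^H)^{-1/2} W_b,
        # via the eigendecomposition of the Hermitian matrix W_b W_b^H.
        w, V = np.linalg.eigh(W[b] @ W[b].conj().T)
        W[b] = V @ np.diag(w ** -0.5) @ V.conj().T @ W[b]
    return W
```

After convergence, the minimal distortion step of Equation (18) would be applied once to restore the output scales.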
To select an appropriate set of cliques that is suited to our goal, we constructed a matrix of size B × B whose (i, j)th element is the correlation coefficient between bin i and bin j from a single source. Figure 4A-D shows the computed correlation coefficient matrices obtained from four different speech signals of two females and two males. In all four cases, a strong correlation was observed around the diagonal with a positive slope because they were from closely located frequency pairs. The correlation decreased as it went off-diagonal. Although the low-frequency components had a widespread dependence over the 0-3 kHz region, it was much weaker than that along the positive sloping diagonal. All of the speech signals are from the TIMIT database, and the same observations held true for other speech signals as well. To consider strong correlations among neighboring frequency bins, we adopted a dependency graph consisting of several cliques of the same size and increasing center frequencies. Taking 1,024 frequency bins as an example, the beginning and ending indices of Equation (11) were [f_1, l_1] = [1, 256], [f_2, l_2] = [2, 257], [f_3, l_3] = [3, 258], ..., [f_C, l_C] = [769, 1024], where the number of frequency bins for each clique was fixed to 256. This simple dependency model using overlapped cliques is shown in Figure 5. All of the cliques were of the same size but with varying center frequencies.
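The fixed-size clique indices quoted above can be generated mechanically; `fixed_cliques` and its default arguments are illustrative names, not part of the article.

```python
# Fixed-size overlapped cliques: with B = 1024 bins, width 256, and a
# shift of one bin, this reproduces the index pairs quoted above.
def fixed_cliques(B=1024, width=256, shift=1):
    return [(f, f + width - 1) for f in range(1, B - width + 2, shift)]

cliques = fixed_cliques()
print(cliques[0], cliques[1], cliques[-1])   # (1, 256) (2, 257) (769, 1024)
```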
Figure 4

Correlation coefficient matrix of (A) female speech; (B) male speech; (C) another female; (D) another male. All speech signals are from the TIMIT database.

Figure 5

Dependency model of fixed clique size.

3.2 Overlapped cliques of variable sizes

Figure 6 shows another model that reflects the spread dependence at low frequencies. The cliques have variable sizes based on the reversed mel-frequency scale. We adopted the mel-scale to prevent being biased to any specific cases; this scale has proven to be efficient in numerous speech signal-processing applications such as speech recognition and enhancement. General human speech is characterized by rapid changes occurring more often in the lower-frequency regions. Therefore, most auditory frequency scales, including the mel-scale, use a narrow bandwidth for the low-frequency region based on the observation that there is little dependence among neighboring frequencies [26]. In the high-frequency region, there is greater dependence among neighboring frequencies, so a relatively large bandwidth is used. However, in the proposed method, we set the sizes of the bands in the opposite fashion. We assigned larger clique sizes to low frequencies because they have less statistical dependence on one another, and smaller clique sizes to higher frequencies. Since the cliques play the role of joining the same source components distributed over different frequencies, a larger bandwidth is necessary to cover the weak and spread dependence in the low-frequency region. For higher frequencies, a smaller amount of overlap is enough because of the greater dependence among neighboring frequency components, as shown in Figure 4. The overlapped vertices between adjacent cliques in the dependency graph enable collection of the same source components. Therefore, the clique size is determined by the reversed mel-scale, which is computed by
Figure 6

Dependency model of mel-scale clique sizes.

h(\omega_c) = A \left[ \log_{10}\left( 1 + \frac{\omega_c}{700} \right) - \log_{10}\left( 1 + \frac{\omega_{c-1}}{700} \right) \right],
(19)
where ω c is the center frequency of clique c, A is a constant, and h(ω c ) is the bandwidth of clique c. The beginning and ending indices f c and l c in Equation (11) are then obtained by
f_c = \max\left( 1, \; b_c - h(\omega_c) \right), \quad l_c = \min\left( B, \; b_c + h(\omega_c) \right),
(20)

where b c is the center-bin number of clique c. The max and min operators ensure that the computed bin numbers are within a valid range.
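Equations (19)-(20) can be sketched as follows. The constant A, the 8-kHz sampling rate, and the evenly spaced clique centers are assumptions made for this illustration, not values from the article; the point is that the mel-scale increment h shrinks with frequency, so low-frequency cliques come out wider, as described above.

```python
import numpy as np

# Reversed mel-scale clique sizes (Equations 19-20), as a sketch.
def mel_cliques(B, C, fs=8000, A=2000.0):
    centers = np.linspace(0, fs / 2, C + 1)       # omega_0 .. omega_C (Hz)
    cliques = []
    for c in range(1, C + 1):
        # Eq. (19): bandwidth from the mel-scale increment.
        h = A * (np.log10(1 + centers[c] / 700)
                 - np.log10(1 + centers[c - 1] / 700))
        b_c = int(round(centers[c] / (fs / 2) * B))   # center-bin number
        # Eq. (20): clip the clique to the valid bin range.
        f_c = max(1, b_c - int(round(h)))
        l_c = min(B, b_c + int(round(h)))
        cliques.append((f_c, l_c))
    return cliques

print(mel_cliques(B=1024, C=4))   # widths decrease with frequency
```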

4 Experiments

We compared the performance of the audio source separation using the proposed dependency models with that of the fully connected dependency model of the conventional IVA. Both methods were applied to multiple speech separation problems. The geometric configuration for the simulated room environments is shown in Figure 7. Various 2×2 cases were simulated by combining pairs of source locations from A to J. For example, experiment 1 was a combination of sound source 1 from location A and sound source 2 from location H, experiment 2 was a combination of sources from locations B and G, etc. We set the dimensions of the room to 7 m × 5 m × 2.75 m and the heights of all microphones and source locations to 1.5 m. The reverberation time was 100 ms, and the corresponding reflection coefficient was 0.57 for every wall, floor, and ceiling. Room impulse responses were obtained by the image method [2] using the above parameters. The impulse responses of the transfer functions from source locations A and H to the two microphones are shown in Figure 8. The peak location was not at the origin because the direct path had its own delay. The filter length was 100 ms, which was equivalent to 800 taps at an 8-kHz sampling rate. The amplitude dropped rapidly because of the loss of energy due to the reflections.
Figure 7

Geometric configuration of the simulated room environments.

Figure 8

Impulse responses of the transfer function of a simulated room. The source locations are A and H.

Male and female speech signals chosen from the TIMIT database were synthetically convolved with the impulse responses corresponding to the locations of the sources and microphones in each experiment. When the algorithm was applied to source separation in the STFT domain, a 2048-point FFT, a 2048-tap Hanning window, and a shift size of 512 samples were used. The separation performance was measured in terms of the signal-to-interference ratio (SIR), which is defined as [19]:
\mathrm{SIR} = 10 \log_{10} \frac{ \sum_{k,b} \left| \sum_{i} r_{iq(i)}^b s_{q(i)}^b[k] \right|^2 }{ \sum_{k,b} \left| \sum_{i \neq j} r_{iq(j)}^b s_{q(j)}^b[k] \right|^2 },
(21)

where q(i) indicates the separated source index of the ith source, and r_{iq(j)}^b is the overall impulse response computed by r_{iq(j)}^b = \sum_{m} w_{im}^b a_{mq(j)}^b. SIR represents how close the estimated W_b is to the inverse of the mixing matrix A_b; it is measured in decibels because acoustic signal power ratios are perceived on a logarithmic scale [26]. The higher the SIR, the closer the result is to perfect separation.
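A simplified SIR computation in the spirit of Equation (21) can be sketched as follows, assuming the target and interference contributions to each output are available separately (as they are in a simulation); the function name is illustrative.

```python
import numpy as np

# SIR in decibels: ratio of target energy to interference energy in a
# separated output.  `target` and `interference` are the two summands
# of one output signal, known separately in a simulation.
def sir_db(target, interference):
    return 10 * np.log10(np.sum(np.abs(target) ** 2)
                         / np.sum(np.abs(interference) ** 2))

# Target with 10x the interference amplitude -> 100x the power -> 20 dB.
print(round(sir_db(np.ones(100), 0.1 * np.ones(100)), 6))   # -> 20.0
```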

We compared the single clique model of IVA with the proposed multiple clique models. The multiple clique designs are shown in Figure 9. The numbers of cliques were 2, 4, 8, 12, and 16, and the overlap ratio between neighboring cliques was set to 50%. In A-E, the center frequencies were "linearly" increased, and the sizes were all fixed except for the first and last because they were located at opposite ends. For example, the four cliques in Figure 9B cover the frequency regions of 0-1.5, 0.5-2.5, 1.5-3.5, and 2.5-4 kHz. The neighboring cliques overlap by 50%, so the dependency is well propagated. In contrast, the center frequencies of F-J are on the "reversed mel-scale" in Equation (19): the clique sizes are inversely proportional to the rate of change in the mel-scale. The same four cliques in Figure 9G cover 0-2.2, 1.1-3.1, 2.4-3.7, and 3.3-4 kHz. Their actual bandwidths were 2.2, 2.0, 1.36, and 0.74 kHz, although the bandwidths computed by Equation (19) were 1.47, 1.02, 0.68, and 0.49 kHz. Because the first and last cliques had only one neighbor, their sizes were 1.5 times larger than the expected bandwidths, while the sizes of the second and the third cliques were twice as large to impose a 50% overlap with neighboring cliques.
Figure 9

Various clique designs. (A)-(E) The center frequencies are linearly scaled, and the clique sizes are equal. (F)-(J) The center frequencies and clique sizes are on an inverse mel-scale. The "CR" values are the ratios of the sum of the correlation coefficients included in the cliques to the sum of all of the coefficients.

The "CR" number in each of the clique designs in Figure 9 is the ratio of the sum of correlation coefficients enclosed by the union of all the cliques to the sum of the total correlation coefficients. It approaches unity as the enclosed region approaches the total area. The correlation map is identical to Figure 4A from the speech of female 1, who was one of the input sources of our experiments. The CR number does not account for the separation performance directly but roughly shows how well a clique design models the dependence of the frequency bins.

All of the separation performances were measured for their SIR and are summarized in Table 1. The first "IVA" row represents the SIR numbers obtained by the conventional IVA algorithm [19]. Rows labeled "LIN2," "LIN4," "LIN8," "LIN12," and "LIN16" are the results of the proposed models utilizing the clique designs in Figure 9A-E, and rows labeled "MEL2," ..., "MEL16" are the results with the clique designs in Figure 9F-J. Columns indicate various combinations of source locations, average SIR (denoted by "SIR") of the seven experiments, average number of iterations (denoted by "Iter.") for the solution to converge, and CR number of the corresponding clique design. The average SIRs that were larger than 18 dB are boldfaced. Among the linear scales, the average SIRs of LIN4 and LIN8 were 18.7 and 18.8 dB, and the average numbers of iterations were 397 and 544, respectively. These indicate that LIN4 and LIN8 greatly improved both the separation performance and convergence speed compared to IVA. However, when the number of cliques was more than 8, SIR degraded rapidly (13.9 and 12.0 dB), and the separation performances were even poorer than those of IVA. Among the mel-scales, the average SIRs of MEL4 and MEL8 were both 18.7 dB, and their numbers of iterations were 543 and 408, respectively, about the same as those of LIN4 and LIN8. The difference from the linear scales appeared when the number of cliques was more than 8: the separation performance measured by SIR did not degrade as badly as that of LIN12 and LIN16. However, many more iterations were required for both MEL12 and MEL16, implying that the broken dependency made the algorithm oscillate around the optimal solution. When comparing LIN and MEL, their best SIRs were almost the same, but the average iterations revealed that the mel-scales were more robust for large numbers of cliques. This can be explained by comparing the amount of correlation captured by the clique designs.
Figure 9 shows that the CR numbers of MEL12 and MEL16 were 0.356 and 0.291, and those of LIN12 and LIN16 were 0.333 and 0.272, respectively. For 12 and 16 cliques, the mel-scale designs had CR numbers larger by about 0.02 than the linear-scale designs. The difference mostly originated from the low-frequency region: the spread dependence observed at 1-2 kHz was better captured by the mel-scale cliques, which in turn enabled correct source permutation. In summary, the proposed method was more effective than the original IVA in most clique configurations in terms of separation performance, and the mel-scale clique designs were better than the fixed-size designs in terms of stability.
Table 1

Separation performances in SIR (dB)

Exp. number        1      2      3      4      5      6      7     Average
Source location   A,H    B,G    E,G    H,J    C,D    E,F    H,I    SIR    Iter.    CR
IVA              16.5   17.5   16.6   12.0   15.5   15.2   15.0   15.5    936    1.000
LIN2             21.5   19.6   19.3   14.2   17.2   18.7   17.1   18.2    674    0.905
LIN4             22.0   19.6   19.3   14.7   17.4   19.2   18.4   18.7    397    0.677
LIN8             22.7   19.9   19.5   14.9   17.5   19.2   18.2   18.8    544    0.443
LIN12             7.3   18.8    9.0    5.8   17.6   19.6   18.8   13.9    468    0.333
LIN16            11.8    1.8   10.1    8.2   16.9   17.1   18.3   12.0    493    0.272
MEL2             19.4   19.0   18.5   13.5   16.7   16.5   15.8   17.1    825    0.890
MEL4             22.0   19.8   19.4   14.5   17.6   19.2   18.2   18.7    543    0.663
MEL8             22.3   19.8   19.3   14.7   17.3   19.1   18.6   18.7    408    0.466
MEL12            20.4   18.7   19.4   14.8   17.2   19.4   18.2   18.3    922    0.356
MEL16            20.9   19.0   18.6   14.9   17.5   19.2   18.2   18.3   1000    0.291

5 Conclusions

The totally spherical dependency model of IVA was relaxed by the dependency models of chained cliques. The new clique designs are advantageous because the weak dependency among distant frequencies is modeled by indirect dependency propagation, which helps in finding a better local solution compared to the original IVA, where the same amount of dependency is assigned to any pair of frequency bins. In this article, two types of non-spherical models are proposed. The first uses the same number of frequency bins for all of the cliques, while the other varies the number of frequency bins in reversed mel-scales based on the measured correlation coefficients between different frequency bins. Both dependency models achieved higher source separation performance and faster convergence to correct solutions owing to more accurate modeling of the statistical dependency. For simulated mixtures of male and female speech signals, both models obtained the highest performance when the number of cliques was set to 4 or 8. When the clique size was fixed, the performance degraded drastically for more than eight cliques. However, when the clique size was determined by the mel-scales, the same level of performance was kept at the expense of convergence rate. This implies the presence of up to 16 independent units in speech signals along the mel-scale frequency axis. One of the ongoing research issues is finding more flexible dependency models, such as instantaneously varying the dependency graph based on the correlation coefficients measured from the input signals or on their harmonic structures. Another research issue is finding appropriate dependency models for natural sounds because the dependency among the frequency components may not be related to the mel-scale.

Declarations

Acknowledgements

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No. 2010-0025642), and by the Ministry of Knowledge Economy, Korea (2008-S-019-02, Development of Portable Korean-English Automatic Speech Translation Technology).

Authors’ Affiliations

(1)
Hamilton Glaucoma Center, University of California
(2)
Ulsan National Institute of Science and Technology (UNIST)

References

  1. Stephens RB, Bate AE: Acoustics and Vibrational Physics. Edward Arnold Publishers, London; 1966.
  2. Allen JB, Berkley DA: Image method for efficiently simulating small-room acoustics. J Acoust Soc Am 1979, 65:943-950. doi:10.1121/1.382599
  3. Gardner WG: The virtual acoustic room. Master's thesis, MIT; 1992.
  4. Bell AJ, Sejnowski TJ: An information maximization approach to blind separation and blind deconvolution. Neural Comput 1995, 7(6):1129-1159. doi:10.1162/neco.1995.7.6.1129
  5. Yellin D, Weinstein E: Multichannel signal separation: methods and analysis. IEEE Trans Signal Process 1996, 44:106-118. doi:10.1109/78.482016
  6. Torkkola K: Blind separation of convolved sources based on information maximization. In Proc IEEE Int Workshop on Neural Networks for Signal Processing. Kyoto, Japan; 1996:423-432.
  7. Lambert R: Multichannel blind deconvolution: FIR matrix algebra and separation of multipath mixtures. PhD thesis, University of Southern California; 1996.
  8. Lee TW, Bell AJ, Lambert R: Blind separation of delayed and convolved sources. Adv Neural Inf Process Syst 1997, 9:758-764.
  9. Hyvärinen A, Oja E: Independent Component Analysis. John Wiley and Sons, New York; 2002.
  10. Smaragdis P: Blind separation of convolved mixtures in the frequency domain. Neurocomputing 1998, 22:21-34. doi:10.1016/S0925-2312(98)00047-2
  11. Parra L, Spence C: Convolutive blind separation of non-stationary sources. IEEE Trans Speech Audio Process 2000, 8(3):320-327. doi:10.1109/89.841214
  12. Asano F, Ikeda S, Ogawa M, Asoh H, Kitawaki N: A combined approach of array processing and independent component analysis for blind separation of acoustic signals. In Proc IEEE Int Conf on Acoustics, Speech, and Signal Processing. Salt Lake City, Utah; 2001, 5:2729-2732.
  13. Anemueller J, Kollmeier B: Amplitude modulation decorrelation for convolutive blind source separation. In Proc Int Conf on Independent Component Analysis and Blind Source Separation. Helsinki, Finland; 2000:215-220.
  14. Murata N, Ikeda S, Ziehe A: An approach to blind source separation based on temporal structure of speech signals. Neurocomputing 2001, 41:1-24. doi:10.1016/S0925-2312(00)00345-3
  15. Anemueller J, Sejnowski TJ, Makeig S: Complex independent component analysis of frequency-domain electroencephalographic data. Neural Netw 2003, 16(9):1311-1323. doi:10.1016/j.neunet.2003.08.003
  16. Hiroe A: Solution of permutation problem in frequency domain ICA, using multivariate probability density functions. Lecture Notes in Computer Science 2006, 3889:601-608. doi:10.1007/11679363_75
  17. Lee I, Kim T, Lee TW: Complex FastIVA: a robust maximum likelihood approach of MICA for convolutive BSS. Lecture Notes in Computer Science 2006, 3889:625-632. doi:10.1007/11679363_78
  18. Lee I, Kim T, Lee TW: Independent Vector Analysis for Convolutive Blind Speech Separation. Chapter 6. Springer, New York; 2007:169-192.
  19. Kim T, Attias H, Lee SY, Lee TW: Blind source separation exploiting higher-order frequency dependencies. IEEE Trans Audio Speech Lang Process 2007, 15:70-79.
  20. Lee I, Lee TW: On the assumption of spherical symmetry and sparseness for the frequency-domain speech model. IEEE Trans Speech Audio Lang Process 2007, 15(5):1521-1528.
  21. Brehm H, Stammler W: Description and generation of spherically invariant speech-model signals. Signal Process 1987, 12(2):119-141. doi:10.1016/0165-1684(87)90001-6
  22. Lee I, Jang GJ, Lee TW: Independent vector analysis using densities represented by chain-like overlapped cliques in graphical models for separation of convolutedly mixed signals. Electron Lett 2009, 45(13):710-711. doi:10.1049/el.2009.0945
  23. Jang GJ, Lee IT, Lee TW: Independent vector analysis using non-spherical joint densities for the separation of speech signals. In Proc IEEE Int Conf on Acoustics, Speech, and Signal Processing. Volume 2. Honolulu, Hawaii; 2007:629-632.
  24. Amari SI, Cichocki A, Yang HH: A new learning algorithm for blind signal separation. Adv Neural Inf Process Syst 1996, 8:757-763.
  25. Matsuoka K, Nakashima S: Minimal distortion principle for blind source separation. In Proc Int Conf on Independent Component Analysis and Blind Source Separation. San Diego, California; 2001:722-727.
  26. O'Shaughnessy D: Speech Communication: Human and Machine. Addison-Wesley, New York; 1987.

Copyright

© Lee and Jang; licensee Springer. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.