2.1. From FDBSS to IVA
In real-world acoustic environments, signals are mixed together with their delayed, attenuated, and reverberated copies, i.e., the signals are convolutively mixed. Supposing there are N sources and M sensors (M ≥ N), the signal captured by sensor m can be modeled as (1) [14], where $\star$ denotes convolution and $a_{mn}(t)$ is the finite-duration impulse response of the mixing filter from source n to sensor m.

$$x_m(t) = \sum_{n=1}^{N} a_{mn}(t) \star s_n(t) \tag{1}$$
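As a concrete illustration of (1), the following sketch builds a toy 2 × 2 convolutive mixture; the signal statistics, filter length, and exponential decay are arbitrary choices for the example, not values from the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T, L = 2, 2, 16000, 256       # sources, sensors, samples, filter taps (toy values)

s = rng.laplace(size=(N, T))        # toy super-Gaussian sources
a = rng.normal(size=(M, N, L)) * np.exp(-np.arange(L) / 32)  # decaying room-like filters

# Equation (1): x_m(t) = sum_n a_mn(t) * s_n(t)
x = np.zeros((M, T + L - 1))
for m in range(M):
    for n in range(N):
        x[m] += np.convolve(a[m, n], s[n])
```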
When the STFT is used, and the STFT frame length is sufficiently longer than the mixing filter length [14], the time-domain convolution in (1) is approximately converted to the frequency-domain multiplication in (2), where $s_n^{[f]}(\tau)$, $x_m^{[f]}(\tau)$, and $a_{mn}^{[f]}$ are the frequency-domain versions of $s_n(t)$, $x_m(t)$, and $a_{mn}(t)$, respectively, $f$ is the frequency bin index, and $\tau$ is the frame index. Stacking all sources and sensors into the vectors $\mathbf{s}^{[f]}(\tau) = [s_1^{[f]}(\tau), \ldots, s_N^{[f]}(\tau)]^T$ and $\mathbf{x}^{[f]}(\tau) = [x_1^{[f]}(\tau), \ldots, x_M^{[f]}(\tau)]^T$, the complete mixing process can be formulated as (3), where $\mathbf{A}^{[f]}$ is the mixing matrix for frequency bin $f$, with $a_{mn}^{[f]}$ as its entries.

$$x_m^{[f]}(\tau) \approx \sum_{n=1}^{N} a_{mn}^{[f]}\, s_n^{[f]}(\tau) \tag{2}$$

$$\mathbf{x}^{[f]}(\tau) = \mathbf{A}^{[f]}\, \mathbf{s}^{[f]}(\tau) \tag{3}$$
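The quality of the narrowband approximation in (2) can be checked numerically: with a frame length much longer than the mixing filter, the STFT of the convolved signal is close to the per-bin product of the filter's frequency response and the source STFT. A minimal single-channel sketch (assuming SciPy; all sizes are illustrative):

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)
T, L, nfft = 16000, 64, 1024          # samples, filter taps, STFT frame length (>> L)

s = rng.laplace(size=T)                               # one toy source
a = rng.normal(size=L) * np.exp(-np.arange(L) / 8)    # short mixing filter
x = np.convolve(a, s)[:T]                             # convolutive observation

# Equation (2): X[f, tau] ~= a^[f] * S[f, tau]
_, _, S = stft(s, nperseg=nfft)
_, _, X = stft(x, nperseg=nfft)
A = np.fft.rfft(a, nfft)                              # per-bin coefficients a^[f]
err = np.linalg.norm(X - A[:, None] * S) / np.linalg.norm(X)
print(f"relative approximation error: {err:.3f}")     # small because nfft >> L
```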
Since the signals are instantaneously mixed in each frequency bin, complex-valued ICA algorithms such as [5, 6] can be used to separate them, as depicted in (4), where $\mathbf{W}^{[f]}$ is the demixing matrix for frequency bin $f$, estimated by ICA.

$$\mathbf{y}^{[f]}(\tau) = \mathbf{W}^{[f]}\, \mathbf{x}^{[f]}(\tau) \tag{4}$$

FDBSS utilizes (4) to separate signals; an example of a 2 × 2 FDBSS demixing model is shown in Figure 1a. In this example, each horizontal layer is the ICA demixing model (4) for one frequency bin, and the demixing procedure is carried out in the layers independently. Since the ICA runs in different layers may output the separated results in different orders, permutation ambiguity occurs in FDBSS, which is indicated by the different colors of $\mathbf{y}^{[f]}$ in Figure 1a. The permutation ambiguity must be carefully addressed by algorithms such as [7–12] before the inverse STFT is performed, or else the separation procedure will fail.
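The permutation ambiguity can be reproduced in a few lines. In the toy sketch below, each bin is demixed perfectly, but an arbitrary row permutation stands in for the unknown output order of a real per-bin ICA run:

```python
import numpy as np

rng = np.random.default_rng(1)
F, N, T = 4, 2, 1000                        # bins, sources/sensors, frames (toy sizes)

S = rng.laplace(size=(F, N, T)) + 1j * rng.laplace(size=(F, N, T))
A = rng.normal(size=(F, N, N)) + 1j * rng.normal(size=(F, N, N))
X = A @ S                                   # per-bin instantaneous mixing, eq. (3)

Y = np.empty_like(S)
for f in range(F):
    P = np.eye(N)[rng.permutation(N)]       # arbitrary ordering chosen by "ICA"
    Y[f] = (P @ np.linalg.inv(A[f])) @ X[f] # perfectly separated, wrongly aligned
# Source n now occupies different output channels in different bins, so an
# inverse STFT across bins would recombine the sources instead of separating them.
```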
In addition to separating the sources in each frequency bin, IVA utilizes inter-frequency-bin information to solve the permutation problem within the separation procedure itself. The IVA model is very similar to the FDBSS model, as shown in Figure 1b. The difference is that in IVA the signals are considered as vectors across frequency bins, i.e., $\mathbf{y}_n = [y_n^{[1]}, \ldots, y_n^{[F]}]^T$ (vertical bars in Figure 1b), and they are optimized as multivariate variables instead of as independent scalars like in ICA. The IVA model can also be formulated in a single equation: after the data in each layer are concatenated into vectors $\mathbf{x} = [\mathbf{x}^{[1]}; \ldots; \mathbf{x}^{[F]}]$ and $\mathbf{y} = [\mathbf{y}^{[1]}; \ldots; \mathbf{y}^{[F]}]$, and $\mathbf{W}$ is built as a block-diagonal matrix with each $\mathbf{W}^{[f]}$ on its diagonal, the demixing procedure can be denoted as $\mathbf{y} = \mathbf{W}\mathbf{x}$, the same expression as in ICA.
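The equivalence between the per-bin form (4) and the stacked form $\mathbf{y} = \mathbf{W}\mathbf{x}$ is easy to verify numerically; a minimal sketch with real-valued toy data (assuming NumPy and SciPy):

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(2)
F, N, T = 3, 2, 500                          # bins, channels, frames (toy sizes)

X = rng.normal(size=(F, N, T))               # x^[1..F], stacked per bin
Ws = [rng.normal(size=(N, N)) for _ in range(F)]

# Per-bin demixing: y^[f] = W^[f] x^[f], equation (4)
Y_perbin = np.stack([Ws[f] @ X[f] for f in range(F)])

# Single-equation form: y = W x with block-diagonal W
W = block_diag(*Ws)                          # (F*N, F*N)
x = X.reshape(F * N, T)                      # x = [x^[1]; ...; x^[F]]
assert np.allclose(W @ x, Y_perbin.reshape(F * N, T))
```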
2.2. IVA objective function
Mutual information I(·) is a natural measure of independence: it is minimized to zero when the random variables are mutually independent, and it is often employed as the objective function in ICA. Mutual information can be calculated in the form of the KL divergence KL(·∥·) in (5), where $p_{\mathbf{y}}$ denotes the probability density function (PDF) of a random vector $\mathbf{y}$, $p_{y_n}$ denotes the n-th marginal PDF of $\mathbf{y}$, and $\mathbf{z}$ is a dummy variable for the integral [16].

$$I(\mathbf{y}) = \mathrm{KL}\!\left(p_{\mathbf{y}} \,\Big\|\, \prod_{n=1}^{N} p_{y_n}\right) = \int p_{\mathbf{y}}(\mathbf{z}) \log \frac{p_{\mathbf{y}}(\mathbf{z})}{\prod_{n=1}^{N} p_{y_n}(z_n)}\, d\mathbf{z} \tag{5}$$
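As a quick numeric illustration of (5), the bivariate Gaussian case admits a closed form that vanishes exactly at independence:

```python
import numpy as np

# For a unit-variance bivariate Gaussian with correlation rho, (5) evaluates
# to I(y1; y2) = -0.5 * log(1 - rho^2): zero iff rho = 0 (independence).
for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho}: I = {-0.5 * np.log(1 - rho**2):.4f}")
```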
The IVA objective function has a similar form to (5); however, each $\mathbf{y}_n$ in IVA is a vector rather than a scalar. The IVA objective function and the corresponding derivations are given in (6) [16, 17], where H(·) represents the entropy.

$$J = \mathrm{KL}\!\left(p_{\mathbf{y}} \,\Big\|\, \prod_{n=1}^{N} p_{\mathbf{y}_n}\right) = \sum_{n=1}^{N} H(\mathbf{y}_n) - H(\mathbf{y}) = \sum_{n=1}^{N} H(\mathbf{y}_n) - \log\left|\det(\mathbf{W})\right| - H(\mathbf{x}) = \sum_{n=1}^{N} H(\mathbf{y}_n) - \sum_{f=1}^{F} \log\left|\det\!\left(\mathbf{W}^{[f]}\right)\right| - C \tag{6}$$
In (6), the last equality holds because $H(\mathbf{W}\mathbf{x}) = \log|\det(\mathbf{W})| + H(\mathbf{x})$ for a linear invertible transformation $\mathbf{W}$, and the determinant of the block-diagonal matrix factorizes as $\det(\mathbf{W}) = \prod_{f=1}^{F} \det(\mathbf{W}^{[f]})$. The term $C = H(\mathbf{x})$ is a constant because the observed signals do not change during the optimization procedure [16, 17].
When the observed signals in each frequency bin are centered and whitened ($\mathbf{x} \leftarrow \mathbf{x} - E(\mathbf{x})$ so that $E(\mathbf{x}) = \mathbf{0}$, then $\mathbf{x} \leftarrow \mathbf{V}\mathbf{x}$ so that $E(\mathbf{x}\mathbf{x}^H) = \mathbf{I}$, where $E(\cdot)$ denotes expectation and $\mathbf{V}$ is the whitening matrix), the demixing matrices $\mathbf{W}^{[f]}$ become orthonormal, so the term $\sum_{f} \log|\det(\mathbf{W}^{[f]})|$ becomes zero. Then, by noting that $H(\mathbf{y}_n) = \sum_{f=1}^{F} H(y_n^{[f]}) - I(\mathbf{y}_n)$, minimizing the IVA objective function in (6) is equivalent to minimizing (7) [17].

$$J = \sum_{n=1}^{N} \left( \sum_{f=1}^{F} H\!\left(y_n^{[f]}\right) - I(\mathbf{y}_n) \right) \tag{7}$$
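Centering and whitening are applied independently in each frequency bin; a minimal sketch for one bin with toy complex data (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
M, T = 2, 2000
X = rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T))  # one frequency bin (toy)

X = X - X.mean(axis=1, keepdims=True)     # centering: x <- x - E(x)

C = (X @ X.conj().T) / T                  # sample covariance E[x x^H]
d, P = np.linalg.eigh(C)                  # C = P diag(d) P^H
V = P @ np.diag(d ** -0.5) @ P.conj().T   # whitening matrix V = C^(-1/2)
Z = V @ X                                 # whitened data: E[z z^H] = I
assert np.allclose((Z @ Z.conj().T) / T, np.eye(M), atol=1e-10)
```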
From here we can see that minimizing (7) balances the minimization of the $\sum_{f} H(y_n^{[f]})$ terms against the maximization of the $I(\mathbf{y}_n)$ terms. According to basic ICA theory, independence is measured by non-Gaussianity, and minimizing $\sum_{f} H(y_n^{[f]})$ is equivalent to maximizing non-Gaussianity, which is responsible for separating the data in individual frequency bins. Meanwhile, maximizing $I(\mathbf{y}_n)$ enhances the dependency among the entries of $\mathbf{y}_n$, which is responsible for solving the permutation problem. In short, minimizing the IVA objective function simultaneously separates the data and solves the permutation problem [17].
2.3. Optimization procedures
To minimize the objective function in (6), the entropy of the estimated source vectors must be calculated. Although the actual PDF of each $\mathbf{y}_n$ is unknown, a prior target PDF $q(\mathbf{y}_n)$ is often used in its place, so the objective function in (6) can be simplified as in (8) [14].

$$J = -\sum_{n=1}^{N} E\!\left[\log q(\mathbf{y}_n)\right] - \sum_{f=1}^{F} \log\left|\det\!\left(\mathbf{W}^{[f]}\right)\right| - C \tag{8}$$
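For concreteness, a source prior widely used in the IVA literature is the spherically symmetric (multivariate) Laplacian,

$$q(\mathbf{y}_n) \propto \exp\!\left(-\sqrt{\sum_{f=1}^{F}\left|y_n^{[f]}\right|^2}\right),$$

which ties the entries of $\mathbf{y}_n$ together through their joint norm while leaving them uncorrelated.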
Natural gradient descent and fast fixed-point iteration are two frequently used optimization methods in IVA. In the natural gradient-based approach [13, 14], after differentiating the objective function with respect to the demixing matrices, the updating rule can be formulated as (9).

$$\mathbf{W}^{[f]} \leftarrow \mathbf{W}^{[f]} + \eta\left(\mathbf{I} - E\!\left[\boldsymbol{\varphi}^{[f]}(\mathbf{y})\, \mathbf{y}^{[f]H}\right]\right)\mathbf{W}^{[f]} \tag{9}$$
In this equation, η is the learning rate, and $\boldsymbol{\varphi}^{[f]}(\mathbf{y}) = [\varphi^{[f]}(\mathbf{y}_1), \ldots, \varphi^{[f]}(\mathbf{y}_N)]^T$ collects the multivariate nonlinear function (also called the score function) for frequency bin f, evaluated on each source vector. This nonlinear function is closely related to the chosen source prior PDF:

$$\varphi^{[f]}\!\left(y_n^{[1]}, \ldots, y_n^{[F]}\right) = -\frac{\partial \log q(\mathbf{y}_n)}{\partial y_n^{[f]}} \tag{10}$$
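Under the spherically symmetric Laplacian prior, (10) yields the commonly used score $\varphi^{[f]}(\mathbf{y}_n) = y_n^{[f]} \big/ \sqrt{\sum_f |y_n^{[f]}|^2}$. The following is a minimal sketch of the natural gradient iteration (9) with this score on whitened toy data (assuming NumPy; the learning rate and iteration count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
F, N, T, eta = 5, 2, 1000, 0.1                                      # toy sizes

X = rng.laplace(size=(F, N, T)) + 1j * rng.laplace(size=(F, N, T))  # whitened bins
W = np.stack([np.eye(N, dtype=complex)] * F)                        # initial W^[f]

for _ in range(100):
    Y = W @ X                                         # y^[f] = W^[f] x^[f]
    r = np.sqrt((np.abs(Y) ** 2).sum(axis=0))         # sqrt(sum_f |y_n^[f]|^2)
    Phi = Y / np.maximum(r, 1e-12)                    # multivariate score, eq. (10)
    for f in range(F):
        D = np.eye(N) - (Phi[f] @ Y[f].conj().T) / T  # I - E[phi^[f](y) y^[f]H]
        W[f] += eta * D @ W[f]                        # natural gradient step, eq. (9)
```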
In [15, 16], the FIVA algorithm was proposed. Compared with the natural gradient-based approach, the convergence speed of FIVA is dramatically improved, and there is no need to choose a learning rate manually. After applying a nonlinear mapping G, the FIVA objective function can be transformed from (8) to (11) [15, 16]. The corresponding updating rule can be formulated as (12), followed by the symmetric decorrelation scheme in (13). In (12), $\mathbf{w}_n^{[f]}$ represents the n-th row of the demixing matrix $\mathbf{W}^{[f]}$, so that $y_n^{[f]} = \mathbf{w}_n^{[f]} \mathbf{x}^{[f]}$. In (13), the inverse square root of a symmetric matrix is computed as $\mathbf{W}^{-1/2} = \mathbf{P}\mathbf{D}^{-1/2}\mathbf{P}^H$, where $\mathbf{W} = \mathbf{P}\mathbf{D}\mathbf{P}^H$ is the eigendecomposition of $\mathbf{W}$.

$$J = \sum_{n=1}^{N} E\!\left[G\!\left(\sum_{f=1}^{F}\left|y_n^{[f]}\right|^2\right)\right] \tag{11}$$

$$\mathbf{w}_n^{[f]} \leftarrow E\!\left[G'\!\left(\hat{y}_n\right) + \left|y_n^{[f]}\right|^2 G''\!\left(\hat{y}_n\right)\right]\mathbf{w}_n^{[f]} - E\!\left[G'\!\left(\hat{y}_n\right)\, y_n^{[f]}\, \mathbf{x}^{[f]H}\right], \qquad \hat{y}_n = \sum_{f=1}^{F}\left|y_n^{[f]}\right|^2 \tag{12}$$

$$\mathbf{W}^{[f]} \leftarrow \left(\mathbf{W}^{[f]}\mathbf{W}^{[f]H}\right)^{-1/2}\mathbf{W}^{[f]} \tag{13}$$
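A compact sketch of the FIVA iteration with $G(u) = \sqrt{u}$, following (12) and (13) on whitened toy data (assuming NumPy; a small constant is added inside the derivatives to avoid division by zero, and sign/conjugation conventions may differ in detail from the reference formulations in [15, 16]):

```python
import numpy as np

rng = np.random.default_rng(5)
F, N, T, eps = 5, 2, 1000, 1e-6                                     # toy sizes

X = rng.laplace(size=(F, N, T)) + 1j * rng.laplace(size=(F, N, T))  # whitened bins
W = np.stack([np.eye(N, dtype=complex)] * F)

Gp  = lambda u: 0.5 / np.sqrt(u + eps)            # G'(u)  for G(u) = sqrt(u)
Gpp = lambda u: -0.25 / (u + eps) ** 1.5          # G''(u)

for _ in range(50):
    Y = W @ X                                     # y_n^[f] = w_n^[f] x^[f]
    u = (np.abs(Y) ** 2).sum(axis=0)              # sum_f |y_n^[f]|^2, shape (N, T)
    for f in range(F):
        for n in range(N):
            y = Y[f, n]
            # Fixed-point update, eq. (12):
            W[f, n] = (np.mean(Gp(u[n]) + np.abs(y) ** 2 * Gpp(u[n])) * W[f, n]
                       - (Gp(u[n]) * y * X[f].conj()).mean(axis=1))
        # Symmetric decorrelation, eq. (13): W^[f] <- (W^[f] W^[f]H)^(-1/2) W^[f]
        d, P = np.linalg.eigh(W[f] @ W[f].conj().T)
        W[f] = (P @ np.diag(d ** -0.5) @ P.conj().T) @ W[f]
```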
Although the original nonlinearity G used in (11) is also derived from the source prior PDF, as $G\!\left(\sum_{f=1}^{F} |y_n^{[f]}|^2\right) = -\log q(\mathbf{y}_n)$ up to a constant [15, 16], the nonlinearities in FIVA should be considered as entropy estimators, so different nonlinearities can also be used, which may not have a direct association with a source prior PDF. For example, $G(\cdot) = \sqrt{\cdot}$ and $G(\cdot) = \log(\cdot)$ are two frequently used nonlinear functions.
When the IVA updating rules in (9) and (12) are compared with the corresponding updating rules of conventional Infomax ICA [5] and complex-valued FastICA [6], one finds that they have nearly the same expressions; the only difference is the move from univariate to multivariate nonlinearities. This shows that the multivariate nonlinearity is central to IVA algorithms, and choosing a proper nonlinearity will improve the source separation performance.