
A novel fixed-point algorithm for constrained independent component analysis


Constrained independent component analysis (ICA) is an effective method for solving blind source separation problems with prior knowledge. However, most constrained ICA algorithms are designed for real-valued sources. In this paper, a novel constrained noncircular complex fast independent component analysis (c-ncFastICA) algorithm based on fixed-point learning is proposed to handle complex-valued sources. The c-ncFastICA algorithm uses the augmented Lagrangian method to obtain a new cost function and then applies a quasi-Newton method to search for its optimal solution. Compared with other ICA and constrained ICA algorithms, c-ncFastICA achieves better separation performance. Simulations confirm the effectiveness and superiority of the c-ncFastICA algorithm.

1 Introduction

Independent component analysis (ICA) [1,2,3] is very effective in separating linear mixtures of independent sources. It has been widely used in many scenarios, such as array signal processing [4], speech signal processing [5], and multivariate probability density function estimation [6]. During the past decades, many ICA algorithms have been proposed for the separation of real-valued and complex-valued signals. These algorithms can be divided into algebraic theory-based ones [7, 8], information criterion-based ones [9,10,11,12], and iterative projection-based ones [13, 14]. For example, Joint Approximate Diagonalization of Eigenmatrices (JADE) [7] and alternating columns-diagonal centers (AC-DC) [8] are algebraic theory-based algorithms; FastICA [9], complex FastICA (cFastICA) [10], and noncircular complex FastICA (ncFastICA) [11] are information criterion-based algorithms; and auxiliary function-based ICA [13] is an iterative projection-based algorithm. In addition, independent vector analysis (IVA) [15, 16] and independent low-rank matrix analysis (ILRMA) [17] are extensions of ICA to multidimensional cases. In many practical applications, such as wind measurements and digital communication, the sources are noncircular complex-valued [18, 19]. Thus, it is necessary to develop an efficient noncircular complex ICA algorithm. Among the aforementioned ICA algorithms, one of the most effective and commonly used algorithms for noncircular complex sources is the ncFastICA algorithm [11]. It incorporates the noncircularity information of the original sources into the fixed-point iteration and provides better separation performance in the case of noncircular sources. However, ncFastICA only exploits the independence and noncircularity of the original sources; it does not utilize prior knowledge to improve the separation performance.

To overcome this defect of ICA, constrained ICA has been proposed in [20,21,22]. It combines prior information with the statistical independence of the signals and thus achieves better separation performance than ICA. Constrained ICA algorithms are mainly derived from information criteria, e.g., the kurtosis-based constrained ICA algorithm [23] and the maximum likelihood (ML)-based constrained ICA algorithm [24]. The algorithm in [23] shows better performance in extracting the sources of a mechanical system. The algorithm in [24] extends the solution space and thus performs better than other constrained algorithms. However, both algorithms need to choose a threshold parameter. To overcome this defect, Shi et al. recently proposed a new constrained ICA algorithm based on multi-objective optimization [25]. Its fixed-point learning rule is derived from the Kuhn-Tucker conditions, and it does not need to choose the threshold parameter. However, these algorithms are suitable only for real-valued signals, which severely limits their applications. To extend constrained ICA to complex sources, Wang et al. [26] proposed a class of complex constrained ICA algorithms based on the gradient descent method, in which circular complex signals are considered. However, the aforementioned constrained ICA algorithms are not suitable for general complex-valued signals.

In this paper, to improve the generality of constrained complex ICA, we propose a new constrained noncircular complex fast independent component analysis (c-ncFastICA) algorithm. In c-ncFastICA, a new cost function is built using the augmented Lagrangian method. The prior information is then incorporated into the fixed-point iteration via a quasi-Newton method. Stability analysis shows that the optimal solution corresponds to a fixed point of c-ncFastICA.

The rest of this paper is organized as follows: Section 2 presents the system model, and Section 3 derives the c-ncFastICA algorithm and gives its stability analysis. The superiority of the c-ncFastICA algorithm is verified by simulations in Section 4, and the conclusion is drawn in Section 5.

2 System model

In this section, the complex ICA model and its constrained version are introduced.

2.1 Complex ICA model

The basic complex ICA model is [10]

$$ \mathbf{z}=\mathbf{As} $$

where \( \mathbf{z}={\left[{z}_1\kern0.5em \cdots \kern0.5em {z}_M\right]}^T\in {\mathrm{\mathbb{C}}}_{M\times 1} \) and \( \mathbf{s}={\left[{s}_1\kern0.5em \cdots \kern0.5em {s}_N\right]}^T\in {\mathrm{\mathbb{C}}}_{N\times 1} \) denote the M observation signals and the N original sources, respectively; \( \mathbf{A}\in {\mathrm{\mathbb{C}}}_{M\times N} \) is the mixing matrix; and the notation \( \mathrm{\mathbb{C}} \) denotes the complex domain.

It is noted that the sources and the mixing matrix are complex-valued in the complex ICA model, which is the main difference between the complex ICA model and the real ICA model. In general, it is assumed that the original sources are zero-mean and unit-variance. In addition, A is of full column rank, and at most one original source is Gaussian [27]. The purpose of complex ICA is to find a complex-valued demixing matrix \( \overline{\mathbf{w}} \) for z to recover the original signals, with \( \widehat{\mathbf{s}}={\overline{\mathbf{w}}}^H\mathbf{z} \) denoting the recovered signal.
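The mixing model above can be sketched numerically; the source distribution, dimensions, and seed below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 4, 4, 1000  # sensors, sources, samples (illustrative values)

# Noncircular complex sources: correlating the real and imaginary parts
# makes the pseudo-covariance E{s^2} nonzero.
real = rng.laplace(size=(N, T))
imag = 0.5 * real + rng.laplace(size=(N, T))
s = real + 1j * imag
s -= s.mean(axis=1, keepdims=True)                             # zero mean
s /= np.sqrt(np.mean(np.abs(s) ** 2, axis=1, keepdims=True))   # unit variance

# Random complex mixing matrix A (full column rank with probability 1).
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
z = A @ s  # observations, M x T
```

With M = N the model is determined, matching the square mixing matrices used in the simulations later in the paper.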

2.2 Constrained complex ICA model

The constrained complex ICA model is described by the complex ICA model with the following constraints

$$ {\displaystyle \begin{array}{l}\kern2.1em {h}_n\left({\overline{\mathbf{w}}}_n,{\mathbf{r}}_n\right)={\rho}_n-\varepsilon \left({\overline{\mathbf{w}}}_n,{\mathbf{r}}_n\right)\le 0\\ {}\mathrm{and}\kern0.6em {f}_n\left({\overline{\mathbf{w}}}_n\right)=0\end{array}} $$

where hn and fn are the inequality constraint and the equality constraint, respectively; ε and ρn are the measurement function and the threshold for the inequality constraint, respectively; \( {\overline{\mathbf{w}}}_n \) is the nth column of the demixing matrix \( \overline{\mathbf{w}} \); and rn is a reference vector for \( {\overline{\mathbf{w}}}_n \).

Typically, ε is a distance measurement and can be described by an inner product in practice [24]. Thus, it is chosen as \( \varepsilon \left({\overline{\mathbf{w}}}_n,{\mathbf{r}}_n\right)={\left|{\overline{\mathbf{w}}}_n^H{\mathbf{r}}_n\right|}^2 \) in this paper.
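A minimal sketch of this measurement function and the resulting inequality constraint; the function names and the example reference vector are our own:

```python
import numpy as np

def epsilon(w, r):
    """Closeness measure eps(w, r) = |w^H r|^2 used in this paper."""
    return np.abs(np.vdot(w, r)) ** 2  # np.vdot conjugates its first argument

def h(w, r, rho):
    """Inequality constraint h(w, r) = rho - eps(w, r) <= 0 of Eq. (2)."""
    return rho - epsilon(w, r)

# When w is aligned with a unit-norm reference, eps = 1 and the constraint
# rho - 1 <= 0 holds for any rho <= 1.
r = np.array([1.0 + 1j, 0.5 - 0.5j])
r /= np.linalg.norm(r)
```

A larger ε means w is closer to the reference direction, so the constraint h ≤ 0 is active whenever the estimate drifts too far from r.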

3 Constrained ncFastICA algorithm

In this section, we incorporate the constraints into the original ncFastICA algorithm and thus obtain a new algorithm, namely the constrained noncircular complex FastICA (c-ncFastICA) algorithm. The stability analysis shows that the optimal solution corresponds to a fixed point of the new algorithm.

3.1 Whitening

Whitening is necessary for the c-ncFastICA algorithm; it makes the observation data z uncorrelated with unit variance. It is implemented by the transform

$$ \mathbf{x}=\mathbf{V}\mathbf{z},\kern0.9000001em \mathrm{with}\kern0.5em \mathbf{V}={\boldsymbol{\Lambda}}^{-\mathbf{1}/\mathbf{2}}{\mathbf{U}}^H $$

where V is the whitening matrix; x is the whitened data; Λ is a diagonal matrix with the N largest eigenvalues of R = E(z  zH) as its entries; U is a matrix consisting of the corresponding eigenvectors; and N is the number of sources.

After whitening, the demixing matrix w for x is a unitary matrix and the recovered signal can be represented by \( \widehat{\mathbf{s}}={\mathbf{w}}^H\mathbf{x} \). The relationship between w and \( \overline{\mathbf{w}} \) is \( {\overline{\mathbf{w}}}^H={\mathbf{w}}^H\mathbf{V} \).
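The whitening transform of Eq. (3) can be sketched as follows; the eigendecomposition-based construction follows the definitions above, while the test data are arbitrary:

```python
import numpy as np

def whiten(z, n_sources):
    """Whiten observations z (M x T): return x = V z with E{x x^H} = I."""
    T = z.shape[1]
    R = (z @ z.conj().T) / T                     # sample covariance E{z z^H}
    eigvals, U = np.linalg.eigh(R)               # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:n_sources]  # keep the N largest
    Lam, U = eigvals[idx], U[:, idx]
    V = np.diag(Lam ** -0.5) @ U.conj().T        # whitening matrix Lam^{-1/2} U^H
    return V @ z, V

rng = np.random.default_rng(1)
z = rng.standard_normal((4, 5000)) + 1j * rng.standard_normal((4, 5000))
x, V = whiten(z, 4)
Rx = (x @ x.conj().T) / x.shape[1]  # sample covariance of x, ideally identity
```

Because V is built from the same sample covariance it whitens, Rx equals the identity up to machine precision.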

3.2 Cost function

As shown in [11], the cost function of the original ncFastICA algorithm is

$$ J\left({\mathbf{w}}_n\right)=E\left\{G\left({\left|{\mathbf{w}}_n^H\mathbf{x}\right|}^{\mathbf{2}}\right)\right\} $$

where \( G:{\mathrm{\mathbb{R}}}^{+}\cup \left\{0\right\}\to \mathrm{\mathbb{R}} \) is a smooth function, and \( \mathrm{\mathbb{R}} \) and \( {\mathrm{\mathbb{R}}}^{+} \) correspond to the real domain and the positive real domain, respectively; \( {\mathbf{w}}_n\in {\mathrm{\mathbb{C}}}_N \) is the nth column of the demixing matrix w, with \( {\left\Vert {\mathbf{w}}_n\right\Vert}_2=1 \).

In practice, there are three choices for G [11]: G1(u) = log(0.1 + u), \( {G}_2(u)=\sqrt{0.1+u} \), and \( {G}_3(u)=\frac{1}{2}{u}^2 \). G1 and G2 provide more robust estimators, while G3 is motivated by kurtosis. By using the augmented Lagrangian method [24], the inequality constraint in (2) can be incorporated into the cost function.

Then, the new cost function can be derived as

$$ {J}^c\left({\mathbf{w}}_n,{\mu}_n\right)=E\left\{G\left({\left|{\mathbf{w}}_n^H\mathbf{x}\right|}^{\mathbf{2}}\right)\right\}+\frac{1}{2{\gamma}_n}\left({\left(\max \left\{0,{\gamma}_n{h}_n\left({\mathbf{w}}_n,{\mathbf{r}}_n\right)+{\mu}_n\right\}\right)}^2-{\mu}_n^2\right) $$

where μn and γn are the Lagrangian multiplier and a positive learning parameter, respectively, with \( {\left\Vert {\mathbf{w}}_n\right\Vert}_2=1 \). Thus, the equality constraint is expressed as \( {f}_n\left({\mathbf{w}}_n\right)={\mathbf{w}}_n^H{\mathbf{w}}_n-1=0 \).
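The augmented-Lagrangian cost of Eq. (5) for one unit can be sketched as below, assuming the G3 nonlinearity; the variable names are our own:

```python
import numpy as np

def G3(u):
    return 0.5 * u ** 2

def cost_augmented(w, x, r, rho, gamma, mu, G=G3):
    """Augmented-Lagrangian cost J^c(w, mu) of Eq. (5) for one unit w."""
    y = w.conj() @ x                       # y = w^H x, length-T row of estimates
    J = np.mean(G(np.abs(y) ** 2))         # E{G(|w^H x|^2)}
    hn = rho - np.abs(np.vdot(w, r)) ** 2  # inequality constraint h(w, r)
    penalty = (max(0.0, gamma * hn + mu) ** 2 - mu ** 2) / (2 * gamma)
    return J + penalty

rng = np.random.default_rng(2)
x = rng.standard_normal((3, 1000)) + 1j * rng.standard_normal((3, 1000))
w = np.array([1.0, 0, 0], dtype=complex)
r = np.array([1.0, 0, 0], dtype=complex)
# With w on the reference direction, h = rho - 1 < 0, so for mu = 0 the
# penalty vanishes and the cost reduces to the plain ncFastICA cost.
c = cost_augmented(w, x, r, rho=0.9, gamma=3.0, mu=0.0)
```

When the constraint is satisfied with μn = 0, the max(·) term is zero and Jc falls back to the unconstrained cost of Eq. (4), which is exactly the behavior the stability analysis relies on.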

3.3 Fixed-point iteration

By using the quasi-Newton method, we derive the one-unit iteration of c-ncFastICA as

$$ {\mu}_n\leftarrow \max \left\{{\gamma}_n{h}_n\left({\mathbf{w}}_n,{\mathbf{r}}_n\right)+{\mu}_n,0\right\} $$
$$ {\displaystyle \begin{array}{l}{\mathbf{w}}_n^{\left(m+1\right)}=-E\left\{g\left({\left|y\right|}^2\right){y}^{\ast}\mathbf{x}\right\}+E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){\left|y\right|}^2+\kern0.5em g\left({\left|y\right|}^2\right)\right\}{\mathbf{w}}_n^{(m)}\\ {}\kern4.099998em +E\left\{{\mathbf{xx}}^T\right\}E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){y}^{\ast 2}\right\}{\mathbf{w}}_n^{(m)\ast}\\ {}\kern4.099998em +\operatorname{sign}\left({\mu}_n\right)\cdot \kern0.3em 2{\gamma}_n{\left|{\mathbf{w}}_n^{(m)H}{\mathbf{r}}_n\right|}^2{\left({\mathbf{w}}_n^{(m)H}{\mathbf{r}}_n\right)}^{\ast }{\mathbf{r}}_n\end{array}} $$

where \( y={\mathbf{w}}_n^{(m)H}\mathbf{x} \); g(u) and g'(u) denote the first-order and second-order derivatives of G(u), respectively; and m is the iteration index. The detailed derivation is given in Appendix A.

The one-unit iteration can be extended to the whole demixing matrix w in a symmetric way or a deflation way [10]. In the symmetric way, w is orthogonalized by \( \mathbf{w}\leftarrow {\left({\mathbf{w}\mathbf{w}}^H\right)}^{-1/2}\mathbf{w} \). In the deflation way, w is orthogonalized by a Gram-Schmidt-like method. In this paper, we use the symmetric scheme.
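The one-unit update (6)-(7) and the symmetric decorrelation can be sketched as below, assuming the G3 nonlinearity (g(u) = u, g'(u) = 1); this is an illustrative transcription, not the authors' reference implementation:

```python
import numpy as np

def g(u):
    return u               # g = G3' for G3(u) = u^2 / 2

def gp(u):
    return np.ones_like(u)  # g' = G3''

def one_unit_update(w, x, r, rho, gamma, mu):
    """One c-ncFastICA step, Eqs. (6)-(7), for a single column w."""
    y = w.conj() @ x
    ay2 = np.abs(y) ** 2
    wr = np.vdot(w, r)                                    # w^H r
    mu = max(gamma * (rho - np.abs(wr) ** 2) + mu, 0.0)   # multiplier update (6)
    w_new = (-np.mean(g(ay2) * y.conj() * x, axis=1)
             + np.mean(gp(ay2) * ay2 + g(ay2)) * w
             + (x @ x.T / x.shape[1])                     # pseudo-covariance E{x x^T}
               @ (np.mean(gp(ay2) * y.conj() ** 2) * w.conj())
             + np.sign(mu) * 2 * gamma * np.abs(wr) ** 2 * wr.conj() * r)
    return w_new, mu

def symmetric_orthogonalize(W):
    """Symmetric decorrelation W <- (W W^H)^{-1/2} W via eigendecomposition."""
    d, E = np.linalg.eigh(W @ W.conj().T)
    return E @ np.diag(d ** -0.5) @ E.conj().T @ W

rng = np.random.default_rng(4)
W = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
W = symmetric_orthogonalize(W)
x = rng.standard_normal((3, 500)) + 1j * rng.standard_normal((3, 500))
w1, mu1 = one_unit_update(W[:, 0], x, W[:, 0], rho=0.9, gamma=3.0, mu=0.0)
```

Note the E{xx^T} pseudo-covariance term, which is where the noncircularity of the sources enters the update.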

In short, the c-ncFastICA algorithm is summarized in Table 1.

Table 1 c-ncFastICA algorithm

3.4 Stability analysis

In this subsection, we present the stability analysis of the fixed-point iteration and show that the optimal solution of the c-ncFastICA algorithm corresponds to a fixed point of (7). The detailed proof is as follows:

Proof By making the orthogonal transformation qn = (VA)Hwn, the fixed-point iteration for the constraint optimization problem (5) becomes

$$ {\displaystyle \begin{array}{l}{\mathbf{q}}_n^{\left(m+1\right)}=-E\left\{g\left({\left|y\right|}^2\right){y}^{\ast}\mathbf{s}\right\}+E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){\left|y\right|}^2+\kern0.5em g\left({\left|y\right|}^2\right)\right\}{\mathbf{q}}_n^{(m)}\\ {}\kern3.699999em +E\left\{{\mathbf{ss}}^T\right\}E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){y}^{\ast 2}\right\}{\mathbf{q}}_n^{(m)\ast}\\ {}\kern3.799999em +\operatorname{sign}\left(\max \left\{{\gamma}_n{h}_n\left({\mathbf{q}}_n^{(m)},{\overline{\mathbf{r}}}_n\right)+{\mu}_n,0\right\}\right)\cdot \kern0.3em 2{\gamma}_n{\left|{\mathbf{q}}_n^{(m)H}{\overline{\mathbf{r}}}_n\right|}^2{\left({\mathbf{q}}_n^{(m)H}{\overline{\mathbf{r}}}_n\right)}^{\ast }{\overline{\mathbf{r}}}_n\end{array}} $$

where \( {\overline{\mathbf{r}}}_n={\left(\mathbf{VA}\right)}^H{\mathbf{r}}_n \).

As we know, the optimal solution qn of the c-ncFastICA algorithm satisfies the conditions that only the nth element is non-zero and \( {\left\Vert {\mathbf{q}}_n\right\Vert}_2=1 \). Without loss of generality, the non-zero element is assumed to be \( {e}^{j{\theta}_n} \). Thus, \( y={\mathbf{q}}_n^{(m)H}\mathbf{s}={e}^{-j{\theta}_n}{s}_n \) and

$$ {\displaystyle \begin{array}{l}{\mathbf{q}}_n^{\left(m+1\right)}=-E\left\{g\left({\left|{s}_n\right|}^2\right){e}^{j{\theta}_n}{s}_n^{\ast}\mathbf{s}\right\}+E\left\{{g}^{\hbox{'}}\left({\left|{s}_n\right|}^2\right){\left|{s}_n\right|}^2+\kern0.5em g\left({\left|{s}_n\right|}^2\right)\right\}{\mathbf{q}}_n^{(m)}\\ {}\kern4.099998em +E\left\{{\mathbf{ss}}^T\right\}E\left\{{g}^{\hbox{'}}\left({\left|{s}_n\right|}^2\right){y}^{\ast 2}\right\}{\mathbf{q}}_n^{(m)\ast}\\ {}\kern4.099998em +\operatorname{sign}\left(\max \left\{{\gamma}_n{h}_n\left({\mathbf{q}}_n^{(m)},{\overline{\mathbf{r}}}_n\right)+{\mu}_n,0\right\}\right)\cdot \kern0.3em 2{\gamma}_n{\left|{\mathbf{q}}_n^{(m)H}{\overline{\mathbf{r}}}_n\right|}^2{\left({\mathbf{q}}_n^{(m)H}{\overline{\mathbf{r}}}_n\right)}^{\ast }{\overline{\mathbf{r}}}_n\end{array}} $$

For the first term on the right-hand side of Eq. (9), it can be derived that the nth element is \( -E\left\{g\left({\left|{s}_n\right|}^2\right){\left|{s}_n\right|}^2\right\}{e}^{j{\theta}_n} \) and the other elements are zero, since the original sources are independent. Thus, the first term is rewritten as \( -E\left\{g\left({\left|{s}_n\right|}^2\right){\left|{s}_n\right|}^2\right\}{\mathbf{q}}_n^{(m)} \). Similarly, for the third term on the right-hand side of Eq. (9), the nth element is \( E\left\{{s}_n^2\right\}E\left\{{g}^{\hbox{'}}\left({\left|{s}_n\right|}^2\right){s_n}^{\ast 2}\right\}{e}^{j{\theta}_n} \) and the other elements are zero. For the fourth term, the nth element is \( \operatorname{sign}\left(\max \left\{{\gamma}_n{h}_n\left({\mathbf{q}}_n^{(m)},{\overline{\mathbf{r}}}_n\right)+{\mu}_n,0\right\}\right)\cdot \kern0.3em 2{\gamma}_n{e}^{j{\theta}_n} \) and the other elements are zero.

Thus, (9) is rewritten as

$$ {\mathbf{q}}_n^{\left(m+1\right)}=\left({\alpha}_1+{\alpha}_2+{\alpha}_3+{\alpha}_4\right){\mathbf{q}}_n^{(m)} $$

where α1 =  − E{g(|sn|2)|sn|2}, α2 = E{g'(|sn|2)|sn|2 +  g(|sn|2)}, \( {\alpha}_3=E\left\{{s}_n^2\right\}E\left\{{g}^{\hbox{'}}\left({\left|{s}_n\right|}^2\right){s_n}^{\ast 2}\right\} \), and \( {\alpha}_4=\operatorname{sign}\left(\max \left\{{\gamma}_n{h}_n\left({\mathbf{q}}_n^{(m)},{\overline{\mathbf{r}}}_n\right)+{\mu}_n,0\right\}\right)\cdot \kern0.3em 2{\gamma}_n \).

Considering the constraint \( {\left\Vert {\mathbf{q}}_n\right\Vert}_2={\left\Vert {\mathbf{w}}_n\right\Vert}_2=1 \), we can remove the real-valued coefficient α = α1 + α2 + α3 + α4 from (10) if it is not equal to zero. Therefore, \( {\mathbf{q}}_n^{(m)} \) is a fixed point of (10).


Remarks:

1) The parameters γn, μn, and ρn influence the convergence of the c-ncFastICA algorithm. If \( {\gamma}_n{h}_n\left({\mathbf{q}}_n^{(m)},{\overline{\mathbf{r}}}_n\right)+{\mu}_n\le 0 \), then α4 = 0. In this case, the constraints have no effect on the c-ncFastICA algorithm, and its convergence is the same as that of the original ncFastICA algorithm. On the contrary, if \( {\gamma}_n{h}_n\left({\mathbf{q}}_n^{(m)},{\overline{\mathbf{r}}}_n\right)+{\mu}_n>0 \), then α4 = 2γn. In this case, the value of γn should be chosen so that the real-valued coefficient α is neither equal nor close to zero.

2) If the prior information is accurate enough, it is suggested to choose a large ρn; if the prior information is not accurate enough, it is suggested to choose a large ρn in the first several iterations and then a small ρn in the remaining iterations. In practice, both G and ρn are usually chosen by trials.

3) As a byproduct, we can also see that the assumptions on the constrained sources can be slightly relaxed in the constrained complex ICA model, i.e., the constrained source sn can be Gaussian, since γn can be properly chosen to make sure the real-valued coefficient is not equal to zero even if the source is Gaussian (in which case \( -E\left\{g\left({\left|{s}_n\right|}^2\right){\left|{s}_n\right|}^2\right\}+E\left\{{g}^{\hbox{'}}\left({\left|{s}_n\right|}^2\right){\left|{s}_n\right|}^2+g\left({\left|{s}_n\right|}^2\right)\right\}+E\left\{{s}_n^2\right\}E\left\{{g}^{\hbox{'}}\left({\left|{s}_n\right|}^2\right){s_n}^{\ast 2}\right\}=0 \)).
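The Gaussian-source remark can be checked numerically for the G3 nonlinearity: for a circular Gaussian source, α1 + α2 + α3 vanishes, so only the constraint term α4 = 2γn keeps the overall coefficient away from zero. A Monte Carlo sketch (sample size chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(3)
# Circular complex Gaussian, zero mean, unit variance: E{s^2} = 0, E{|s|^4} = 2.
n = 2_000_000
s = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

u = np.abs(s) ** 2
alpha1 = -np.mean(u * u)               # -E{g(|s|^2)|s|^2} with g(u) = u
alpha2 = np.mean(u + u)                #  E{g'(|s|^2)|s|^2 + g(|s|^2)} with g' = 1
alpha3 = np.abs(np.mean(s ** 2)) ** 2  #  E{s^2} E{g'(|s|^2) s*^2} for g' = 1
```

The three terms are approximately −2, 2, and 0, so their sum is near zero, confirming that an unconstrained fixed-point update cannot separate this source while the constraint term still can.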

4 Results and discussion

In this section, the superiority of the c-ncFastICA algorithm is demonstrated by some simulations. We employ the normalized Amari index [28] to measure the performance of the different algorithms

$$ {I}_A=\frac{1}{2M\left(M-1\right)}\left[\sum \limits_{i=1}^M\left(\sum \limits_{j=1}^M\frac{\left|{p}_{ij}\right|}{\max_k\left|{p}_{ik}\right|}-1\right)\right.\left.+\sum \limits_{j=1}^M\left(\sum \limits_{i=1}^M\frac{\left|{p}_{ij}\right|}{\max_k\left|{p}_{kj}\right|}-1\right)\right] $$

where P = wHVA. A lower IA indicates better separation. For all the simulations, the Amari index is obtained by averaging 100 Monte Carlo trials. We compare the c-ncFastICA algorithm with the symmetric gradient descent method [26], since it is the only constrained ICA algorithm among [20,21,22,23,24,25,26] that can be applied to complex-valued signals. In addition, we also compare c-ncFastICA with the ncFastICA algorithm, since it is widely used for the separation of noncircular signals.
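The normalized Amari index of Eq. (11) can be sketched as:

```python
import numpy as np

def amari_index(P):
    """Normalized Amari index of the global matrix P = w^H V A, Eq. (11)."""
    M = P.shape[0]
    aP = np.abs(P)
    rows = (aP / aP.max(axis=1, keepdims=True)).sum(axis=1) - 1
    cols = (aP / aP.max(axis=0, keepdims=True)).sum(axis=0) - 1
    return (rows.sum() + cols.sum()) / (2 * M * (M - 1))

# Perfect separation leaves P as a scaled permutation matrix, which scores 0.
P_perfect = np.array([[0, 2.0], [1.5j, 0]])
P_mixed = np.ones((2, 2))  # completely unresolved mixture
```

A scaled permutation gives index 0, while a fully mixed P gives a strictly positive value, so the index is invariant to the permutation and scaling ambiguities inherent to ICA.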

In the first simulation, we use complex generalized Gaussian distributed (cGGD) signals [11] as the original sources. For each trial, the number of sources is eight, and half of the sources are made noncircular with the same noncircularity index. The shape parameters and noncircularity indices of the sources are listed in Table 2. The sample size of the original sources varies from 250 to 1000. The complex-valued mixing matrix A is 8 × 8 dimensional; each element is zero-mean and unit-variance, with normally distributed real and imaginary parts. We use the third and seventh columns of A as the reference vectors.

Table 2 The shape parameters and noncircularity index for the original sources

Figure 1 shows the time sequences and histograms of original sources 3 and 7, and Fig. 2 shows the time sequences and histograms of sources 3 and 7 recovered by the c-ncFastICA algorithm, where ρn = 0.9, γn = 3, and the nonlinear function is selected as G3. One can see that the constrained sources 3 and 7 are successfully recovered by c-ncFastICA even though they are almost Gaussian. Figure 3 depicts the separation results of the c-ncFastICA algorithm, the ncFastICA algorithm, and the symmetric gradient descent method [26], where ρn = 0.9 and γn = 3. It shows that the c-ncFastICA algorithm performs significantly better than the other two methods. This is because ncFastICA does not take the constraints into consideration, and the symmetric gradient descent method cannot make full use of the noncircularity of the sources. Moreover, the performance gap widens slightly as the sample size increases. It also shows that the ncFastICA algorithm cannot perform well even when the sample size is 1000, because sources 1, 3, and 7 are almost Gaussian in this simulation. On the contrary, the c-ncFastICA algorithm achieves a desirable separation result in this case, which verifies the stability analysis in Section 3. Figure 4 compares the convergence curves of c-ncFastICA under different nonlinear functions, with the sample size fixed at 1000; one can observe that c-ncFastICA converges quickly under all of them. Figure 5 shows the influence of ρn on the performance of c-ncFastICA, with the sample size fixed at 1000. The algorithm performs better when ρn is larger, because the prior information is accurate in this simulation.

Fig. 1

a Time sequences of original sources 3 and 7. b Histograms of original sources 3 and 7

Fig. 2

a Time sequences of recovered sources 3 and 7. b Histograms of recovered sources 3 and 7

Fig. 3

Separation results for cGGD signals

Fig. 4

Convergence curves of c-ncFastICA

Fig. 5

Influence of ρn on the performance of c-ncFastICA

In the second simulation, we use three real-world frequency-modulated (FM) signals as the original sources. The powers of the three sources are 1, 1, and 10, respectively. The first and third sources are interference signals, and the second source is the desired signal. For each trial, the carrier frequency and the maximum frequency deviation are set to 80 kHz and ± 75 kHz, respectively. The original FM sources are received by a uniform linear array with five sensors, and the array spacing equals half the wavelength. In this case, the mixing matrix is determined by the directions of arrival (DOAs) of the original sources, which are 30°, − 8°, and − 7°, respectively. We assume that the DOA of the desired signal can be roughly detected, and thus the second column of A (denoted as a2) is used as prior knowledge. Additive white Gaussian noise with variance 0.1 is added at the receiver. Figure 6 depicts the separation results of the c-ncFastICA algorithm, the ncFastICA algorithm, and the symmetric gradient descent method [26], where ρn = 0.95 and γn = 3. One can see that the c-ncFastICA algorithm performs better than the other two methods. Figure 7 shows the influence of ρn on the performance of the c-ncFastICA algorithm. The performance of c-ncFastICA improves considerably when ρn ≥ 0.85 (G1 and G3) or ρn ≥ 0.9 (G2). However, this does not mean that a larger ρn always yields better performance. Figure 8 shows the influence of the accuracy of the prior knowledge on the performance of c-ncFastICA, where ρn = 0.95, γn = 3, and each element of a2 is contaminated by zero-mean Gaussian noise with variance σ2. One can see that, with ρn = 0.95, the performance of c-ncFastICA degrades when the prior knowledge is inaccurate.
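The DOA-dependent mixing matrix for the half-wavelength uniform linear array can be sketched as below; the steering-vector phase sign is an assumption, as the paper does not state its convention:

```python
import numpy as np

def ula_mixing_matrix(doas_deg, n_sensors, spacing_over_lambda=0.5):
    """Steering-vector mixing matrix for a uniform linear array.

    Column n is a(theta_n) = [1, e^{-j 2 pi (d/lambda) sin(theta_n)}, ...]^T
    (phase-sign convention assumed for illustration).
    """
    theta = np.deg2rad(np.asarray(doas_deg, dtype=float))
    m = np.arange(n_sensors)[:, None]  # sensor index as a column vector
    return np.exp(-2j * np.pi * spacing_over_lambda * m * np.sin(theta)[None, :])

A = ula_mixing_matrix([30, -8, -7], n_sensors=5)
r = A[:, 1]  # second column a2: the reference vector for the desired source
```

Because the DOAs of sources 2 and 3 differ by only one degree, their steering vectors are nearly collinear, which is precisely the regime where the reference vector a2 helps the algorithm lock onto the desired source.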

Fig. 6

Separation results of different methods

Fig. 7

Influence of ρn on the performance of c-ncFastICA

Fig. 8

Influence of the accuracy of prior knowledge

5 Conclusion

The constrained ICA for complex sources is a challenging problem. In this paper, we focus on this problem and extend the noncircular complex FastICA (ncFastICA) algorithm to the constrained case. By adding the constrained conditions to the cost function and utilizing the quasi-Newton method, we derive a new fixed-point algorithm, namely constrained ncFastICA (c-ncFastICA) algorithm. Stability analysis shows that the optimal solution to constrained ICA corresponds to the fixed point of the c-ncFastICA algorithm. Simulations verify the correctness of the stability analysis and the superiority of the c-ncFastICA algorithm.



Abbreviations

AC-DC: Alternating columns-diagonal centers

cFastICA: Complex FastICA

c-ncFastICA: Constrained noncircular complex fast independent component analysis

FM: Frequency modulated

DOA: Directions of arrival

ICA: Independent component analysis

JADE: Joint Approximate Diagonalization of Eigenmatrices

ML: Maximum likelihood

ncFastICA: Noncircular complex FastICA


References

  1. P. Comon, Independent component analysis, a new concept? Signal Process. 36, 287–314 (1994)


  2. A. Hyvarinen, J. Karunen, E. Oja, Independent Component Analysis (Wiley, New York, 2001)

  3. A. Tharwat, Independent component analysis: an introduction. Appl. Comput. Inform. (2018)

  4. H. Saruwatari, T. Kawamura, T. Nishikawa, et al., Blind source separation based on a fast-convergence algorithm combining ICA and beamforming. IEEE Trans. Audio, Speech, and Lang. Process. 14, 666–678 (2006)


  5. K. Mohanaprasad, P. Arulmozhivarman, Wavelet-based ICA using maximum likelihood estimation and information-theoretic measure for acoustic echo cancellation during double talk situation. Circuits Systems Signal Process. 34, 3915–3931 (2015)


  6. M. Aladjem, I. Israeli-Ran, M. Bortman, Sequential independent component analysis density estimation. IEEE Trans. Neural Netw. Learn. Syst. 99, 1–14 (2018)


  7. J.F. Cardoso, A. Souloumiac, Blind beamforming for non-Gaussian signals. IEEE Proceedings F 140, 362–370 (1993)


  8. A. Yeredor, Non-orthogonal joint diagonalization in the least-squares sense with application in blind source separation. IEEE Trans. Signal Process. 50, 1545–1553 (2002)


  9. A. Hyvärinen, Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. Neural Netw. 10, 626–634 (1999)


  10. E. Bingham, A. Hyvärinen, A fast fixed-point algorithm for independent component analysis of complex valued signals. Int. J. Neural Syst. 10, 1–8 (2000)


  11. M. Novey, T. Adali, On extending the complex FastICA algorithm to noncircular sources. IEEE Trans. Signal Process. 56, 2148–2154 (2008)


  12. P. Ablin, J.F. Cardoso, A. Gramfort, Faster independent component analysis by preconditioning with Hessian approximations. IEEE Trans. Signal Process. 66(15), 4040–4049 (2018)


  13. N. Ono, S. Miyabe, in International Conference on Latent Variable Analysis and Signal Separation. Auxiliary-function-based independent component analysis for super-Gaussian sources (Springer, Berlin, 2010), pp. 165–172


  14. S. Gepshtein, Y. Keller, Iterative spectral independent component analysis. Signal Process. 155, 368–376 (2019)


  15. N. Ono, in Applications of Signal Processing to Audio and Acoustics (WASPAA), 2011 IEEE Workshop. Stable and fast update rules for independent vector analysis based on auxiliary function technique (2011), pp. 189–192


  16. D. Kitamura, N. Ono, H. Sawada, et al., Determined blind source separation unifying independent vector analysis and nonnegative matrix factorization. IEEE/ACM Trans. Audio Speech Lang. Proc. 24(9), 1622–1637 (2016)


  17. Y. Mitsui, N. Takamune, D. Kitamura, et al., in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Vectorwise coordinate descent algorithm for spatially regularized independent low-rank matrix analysis (2018), pp. 746–750


  18. Y. Xia, B. Jelfs, M.M. Van Hulle, J.C. Principe, D.P. Mandic, An augmented echo state network for nonlinear adaptive filtering of complex noncircular signals. IEEE Trans. Neural Netw. 22(1), 74–83 (2011)


  19. Y. Xia, D.P. Mandic, A full mean square analysis of CLMS for second order noncircular inputs. IEEE Trans. Signal Process. 65(21), 5578–5590 (2017)


  20. W. Lu, J.C. Rajapakse, Constrained independent component analysis. Adv. Neural Inf. Process. Syst. 13, 570–576 (2000)


  21. W. Lu, J.C. Rajapakse, Approach and applications of constrained ICA. IEEE Trans. Neural Netw. 16, 203–212 (2005)


  22. J. Lee, K.L. Park, K.J. Lee, Temporally constrained ICA-based fetal ECG separation. Electron. Lett. 41, 1158–1160 (2005)


  23. J. Zhang, Z. Zhang, W. Cheng, X. Li, B. Chen, et al., Kurtosis-based constrained independent component analysis and its application on source contribution quantitative estimation. IEEE Trans. Instrum. Meas. 63, 1842–1854 (2014)


  24. P.A. Rodriguez, M. Anderson, X.L. Li, T. Adali, General non-orthogonal constrained ICA. IEEE Trans. Signal Process. 62, 2778–2786 (2014)


  25. Y. Shi, W. Zeng, N. Wang, Z. Le, A new method for independent component analysis with priori information based on multi-objective optimization. J. Neurosci. Methods 283, 72–82 (2017)


  26. X. Wang, Z. Huang, Y. Zhou, et al., Approaches and applications of semi-blind signal extraction for communication signals based on constrained independent component analysis: the complex case. Neurocomputing 101, 204–216 (2013)


  27. D. Kitamura, S. Mogami, Y. Mitsui, et al., Generalized independent low-rank matrix analysis using heavy-tailed distributions for blind source separation. EURASIP J. Adv. Sign. Proc. 28, 1–25 (2018)


  28. S. Amari, A. Cichocki, H. Yang, in Advances in Neural Information Processing Systems, ed. by D. S. Touretzky, M. C. Mozer, M. E. Hasselmo. A new learning algorithm for blind signal separation, vol 8 (MIT Press, Cambridge, 1996), pp. 757–763




Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions that helped improve the quality of this manuscript.


Funding

The work was supported by the National Natural Science Foundation of China under grant 61701419.

Availability of data and materials

Not available online. Please contact corresponding author for data requests.

Author information




All authors have contributed equally. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Lidan Wang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



Appendix A

1.1 Derivation of the fixed-point algorithm

The Lagrangian function for the constrained ICA is

$$ {L}^{\mathrm{c}}\left({\mathbf{w}}_n,\lambda, {\mu}_n\right)={J}^{\mathrm{c}}\left({\mathbf{w}}_n\right)+\lambda \left({\mathbf{w}}_n^H{\mathbf{w}}_n-1\right) $$

As the cost function is not analytic in wn but is analytic in wn and \( {\mathbf{w}}_n^{\ast } \) independently [11], we derive the quasi-Newton update as follows:

$$ {\displaystyle \begin{array}{c}\varDelta {\tilde{\mathbf{w}}}_n=-{\left({\left.\frac{\partial^2{L}^c}{\partial {\tilde{\mathbf{w}}}_n^{\ast}\partial {\tilde{\mathbf{w}}}_n^T}\right|}_{{\mathbf{w}}_n={\mathbf{w}}_n^{(m)}}\right)}^{-1}{\left.\frac{\partial {L}^c}{\partial {\tilde{\mathbf{w}}}_n^{\ast }}\right|}_{{\mathbf{w}}_n={\mathbf{w}}_n^{(m)}}\\ {}=-{\left({\tilde{\mathbf{H}}}_{{\mathrm{w}}_n}{L}^c\right)}^{-1}{\tilde{\nabla}}_{{\mathbf{w}}_n}^{\ast }{L}^c\end{array}} $$


where

$$ {\displaystyle \begin{array}{l}{\mathbf{w}}_n={\left[{w}_1,{w}_2,\dots, {w}_N\right]}^T\in {\mathrm{\mathbb{C}}}^N\\ {}{\tilde{\mathbf{w}}}_n={\left[{w}_1,{w}_1^{\ast },\dots, {w}_N,{w}_N^{\ast}\right]}^T\in {\mathrm{\mathbb{C}}}^{2N}\end{array}} $$

and \( \varDelta {\tilde{\mathbf{w}}}_n={\tilde{\mathbf{w}}}_n^{\left(m+1\right)}-{\tilde{\mathbf{w}}}_n^{(m)} \); \( {\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}{L}^c \) and \( {\tilde{\nabla}}_{{\mathbf{w}}_n}{L}^c \) are the Hessian matrix and gradient vector of the Lagrangian function, respectively.

Combining (12) and (13), we can derive

$$ \varDelta {\tilde{\mathbf{w}}}_n=-{\left({\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}{J}^c+\lambda \tilde{\mathbf{I}}\right)}^{-1}\left({\tilde{\nabla}}_{{\mathbf{w}}_n}^{\ast }{J}^c+\lambda {\tilde{\mathbf{w}}}_n^{(m)}\right) $$


$$ \left({\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}{J}^c+\lambda \tilde{\mathbf{I}}\right){\tilde{\mathbf{w}}}_n^{\left(m+1\right)}=-{\tilde{\nabla}}_{{\mathbf{w}}_n}^{\ast }{J}^c+{\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}{J}^c{\tilde{\mathbf{w}}}_n^{(m)} $$
1) If γnhn(wn, rn) + μn > 0,

$$ {\tilde{\nabla}}_{{\mathbf{w}}_n}{J}^c=E\left(\begin{array}{c}\frac{\partial {J}^c}{\partial {w}_1}\\ {}\frac{\partial {J}^c}{\partial {w}_1^{\ast }}\\ {}\vdots \\ {}\frac{\partial {J}^c}{\partial {w}_N}\\ {}\frac{\partial {J}^c}{\partial {w}_N^{\ast }}\end{array}\right)=\left(\begin{array}{c}E\left\{g\left({yy}^{\ast}\right){yx}_1^{\ast}\right\}\\ {}E\left\{g\left({yy}^{\ast}\right){y}^{\ast }{x}_1\right\}\\ {}\vdots \\ {}E\left\{g\left({yy}^{\ast}\right){yx}_N^{\ast}\right\}\\ {}E\left\{g\left({yy}^{\ast}\right){y}^{\ast }{x}_N\right\}\end{array}\right)-\left({\gamma}_n{h}_n+{\mu}_n\right)\left(\begin{array}{c}\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right){\mathrm{r}}_1^{\ast}\\ {}{\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^{\ast }{\mathrm{r}}_1\\ {}\vdots \\ {}\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right){\mathrm{r}}_N^{\ast}\\ {}{\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^{\ast }{\mathrm{r}}_N\end{array}\right) $$
$$ {\displaystyle \begin{array}{l}{\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}{J}^c=E\left(\frac{\partial^2{J}^c}{\partial {\tilde{\mathbf{w}}}_n^{\ast}\partial {\tilde{\mathbf{w}}}_n^T}\right)\\ {}=E\left\{\left[\begin{array}{ccccc}{x}_1{x}_1^{\ast }e& {x}_1^2d& \cdots & {x}_1{x}_N^{\ast }e& {x}_1{x}_Nd\\ {}{x}_1^{\ast 2}{d}^{\ast }& {x}_1^{\ast }{x}_1e& \cdots & {x}_1^{\ast }{x}_N^{\ast }{d}^{\ast }& {x}_1^{\ast }{x}_Ne\\ {}\vdots & \vdots & \ddots & \vdots & \vdots \\ {}{x}_N^{\ast }{x}_1^{\ast }{d}^{\ast }& {x}_N^{\ast }{x}_1e& \cdots & {x}_N^{\ast }{x}_N^{\ast }{d}^{\ast }& {x}_N^{\ast }{x}_Ne\end{array}\right]\right\}\\ {}-\left({\gamma}_n{h}_n+{\mu}_n\right)\left[\begin{array}{ccccc}{\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^{\ast }{\mathrm{r}}_1{\mathrm{r}}_1^{\ast }& 0& \cdots & {\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^{\ast }{\mathrm{r}}_1{\mathrm{r}}_N^{\ast }& 0\\ {}0& \left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right){\mathrm{r}}_1^{\ast }{\mathrm{r}}_1& \cdots & 0& \left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right){\mathrm{r}}_1^{\ast }{\mathrm{r}}_N\\ {}\vdots & \vdots & \ddots & \vdots & \vdots \\ {}0& \left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right){\mathrm{r}}_N^{\ast }{\mathrm{r}}_1& \cdots & 0& \left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right){\mathrm{r}}_N^{\ast }{\mathrm{r}}_N\end{array}\right]\\ {}+{\gamma}_n\left(\begin{array}{c}{\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^{\ast }{\mathrm{r}}_1\\ {}\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right){\mathrm{r}}_1^{\ast}\\ {}\vdots \\ {}{\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^{\ast }{\mathrm{r}}_N\\ {}\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right){\mathrm{r}}_N^{\ast}\end{array}\right)\left[\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right){\mathrm{r}}_1^{\ast}\kern0.5em {\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^{\ast }{\mathrm{r}}_1\kern0.5em \cdots \kern0.5em \left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right){\mathrm{r}}_N^{\ast}\kern0.5em {\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^{\ast }{\mathrm{r}}_N\right]\end{array}} $$

where \( e={g}^{\prime}\left({\left|y\right|}^2\right){\left|y\right|}^2+g\left({\left|y\right|}^2\right) \) and \( d={g}^{\prime}\left({\left|y\right|}^2\right){y}^2 \).
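Numerically, e and d are per-sample quantities. The following NumPy sketch computes them under an assumed nonlinearity \( g(u)=1/(a+u) \), a common choice in complex FastICA; the constant `a` and the synthetic samples are our own illustration:

```python
import numpy as np

a = 0.1                              # smoothing constant (our choice)
g  = lambda u: 1.0 / (a + u)         # g(u) = G'(u) for G(u) = log(a + u)
gp = lambda u: -1.0 / (a + u) ** 2   # its derivative g'(u)

rng = np.random.default_rng(0)
# synthetic complex outputs standing in for y = w^H x
y = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)

u = np.abs(y) ** 2
e = gp(u) * u + g(u)   # e = g'(|y|^2)|y|^2 + g(|y|^2)  (real-valued)
d = gp(u) * y ** 2     # d = g'(|y|^2) y^2               (complex-valued)
```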

Rewrite \( {\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}{J}^c \) as the sum of two parts

$$ {\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}{J}^c={\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}^a{J}^c+{\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}^b{J}^c $$


$$ {\displaystyle \begin{array}{l}{\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}^a{J}^c=E\left\{\left[\begin{array}{ccccc}{x}_1{x}_1^{\ast }e& 0& \cdots & {x}_1{x}_N^{\ast }e& 0\\ {}0& {x}_1^{\ast }{x}_1e& \cdots & 0& {x}_1^{\ast }{x}_Ne\\ {}\vdots & \vdots & \ddots & \vdots & \vdots \\ {}0& {x}_N^{\ast }{x}_1e& \cdots & 0& {x}_N^{\ast }{x}_Ne\end{array}\right]\right\}\\ {}-\left({\gamma}_n{h}_n+{\mu}_n\right)\left[\begin{array}{ccccc}{\mathrm{r}}_1{\mathrm{r}}_1^{\ast }& 0& \cdots & {\mathrm{r}}_1{\mathrm{r}}_N^{\ast }& 0\\ {}0& {\mathrm{r}}_1^{\ast }{\mathrm{r}}_1& \cdots & 0& {\mathrm{r}}_1^{\ast }{\mathrm{r}}_N\\ {}\vdots & \vdots & \ddots & \vdots & \vdots \\ {}0& {\mathrm{r}}_N^{\ast }{\mathrm{r}}_1& \cdots & 0& {\mathrm{r}}_N^{\ast }{\mathrm{r}}_N\end{array}\right]\\ {}+{\gamma}_n\left[\begin{array}{ccccc}{\left|{{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right|}^2{\mathrm{r}}_1{\mathrm{r}}_1^{\ast }& 0& \cdots & {\left|{{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right|}^2{\mathrm{r}}_1{\mathrm{r}}_N^{\ast }& 0\\ {}0& {\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^2{\mathrm{r}}_1^{\ast }{\mathrm{r}}_1& \cdots & 0& {\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^2{\mathrm{r}}_1^{\ast }{\mathrm{r}}_N\\ {}\vdots & \vdots & \ddots & \vdots & \vdots \\ {}0& {\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^2{\mathrm{r}}_N^{\ast }{\mathrm{r}}_1& \cdots & 0& {\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^2{\mathrm{r}}_N^{\ast }{\mathrm{r}}_N\end{array}\right]\end{array}} $$


$$ {\displaystyle \begin{array}{l}\kern1.2em {\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}^b{J}^c\\ {}=E\left\{\left[\begin{array}{ccccc}0& {x}_1^2d& \cdots & 0& {x}_1{x}_Nd\\ {}{x}_1^{\ast 2}{d}^{\ast }& 0& \cdots & {x}_1^{\ast }{x}_N^{\ast }{d}^{\ast }& 0\\ {}\vdots & \vdots & \ddots & \vdots & \vdots \\ {}{x}_N^{\ast }{x}_1^{\ast }{d}^{\ast }& 0& \cdots & {x}_N^{\ast }{x}_N^{\ast }{d}^{\ast }& 0\end{array}\right]\right\}\\ {}+{\gamma}_n\left[\begin{array}{ccccc}0& {\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^{\ast 2}{\mathrm{r}}_1^2& \cdots & 0& {\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^{\ast 2}{\mathrm{r}}_1{\mathrm{r}}_N\\ {}{\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^2{\mathrm{r}}_1^{\ast 2}& 0& \cdots & {\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^2{\mathrm{r}}_1^{\ast }{\mathrm{r}}_N^{\ast }& 0\\ {}\vdots & \vdots & \ddots & \vdots & \vdots \\ {}{\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^2{\mathrm{r}}_N^{\ast }{\mathrm{r}}_1^{\ast }& 0& \cdots & {\left({{\mathbf{w}}_n^{(m)}}^H{\mathbf{r}}_n\right)}^2{\mathrm{r}}_N^{\ast 2}& 0\end{array}\right]\end{array}} $$


$$ {\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}J{\tilde{\mathbf{w}}}_n={\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}^aJ{\tilde{\mathbf{w}}}_n+{\tilde{\mathbf{H}}}_{{\mathbf{w}}_n}^bJ{\tilde{\mathbf{w}}}_n $$

Substituting (20) and (21) and retaining only the odd-numbered rows, we can obtain

$$ {\displaystyle \begin{array}{l}{\mathbf{H}}_{{\mathbf{w}}_n}J{\mathbf{w}}_n=E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){\left|y\right|}^2+\kern0.5em g\left({\left|y\right|}^2\right)\right\}{\mathbf{w}}_n^{(m)}\\ {}\kern3.999998em +E\left\{{\mathbf{xx}}^T\right\}E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){y}^{\ast 2}\right\}{\mathbf{w}}_n^{(m)\ast}\\ {}\kern3.999998em +2{\gamma}_n{\left|{\mathbf{w}}_n^{(m)H}{\mathbf{r}}_n\right|}^2{\left({\mathbf{w}}_n^{(m)H}{\mathbf{r}}_n\right)}^{\ast }{\mathbf{r}}_n\\ {}\kern3.899998em -\left({\gamma}_n{h}_n+{\mu}_n\right){\mathbf{r}}_n{\mathbf{r}}_n^H{\mathbf{w}}_n^{(m)}\end{array}} $$

Similarly, we can get

$$ {\mathbf{Kw}}_n^{\left(m+1\right)}=-{\nabla}_{{\mathbf{w}}_n}^{\ast }J+{\mathbf{H}}_{{\mathbf{w}}_n}J{\mathbf{w}}_n^{(m)} $$

where \( \mathbf{K}={\mathbf{H}}_{{\mathbf{w}}_n}{J}^c+\lambda \mathbf{I} \).

Combining (23) and (24), we can obtain

$$ {\displaystyle \begin{array}{l}{\mathbf{Kw}}_n^{\left(m+1\right)}=-E\left\{g\left({\left|y\right|}^2\right){y}^{\ast}\mathbf{x}\right\}+E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){\left|y\right|}^2+\kern0.5em g\left({\left|y\right|}^2\right)\right\}{\mathbf{w}}_n^{(m)}\\ {}\kern4.799998em +E\left\{{\mathbf{xx}}^T\right\}E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){y}^{\ast 2}\right\}{\mathbf{w}}_n^{(m)\ast}\\ {}\kern4.699998em +2{\gamma}_n{\left|{\mathbf{w}}_n^{(m)H}{\mathbf{r}}_n\right|}^2{\left({\mathbf{w}}_n^{(m)H}{\mathbf{r}}_n\right)}^{\ast }{\mathbf{r}}_n\end{array}} $$

At the optimal solution,

$$ {\displaystyle \begin{array}{l}{\left\{\overline{\mathbf{K}}{\mathbf{q}}_n\right\}}_n=\left(2E\left\{{g}^{\hbox{'}}\left({\left|{s}_n\right|}^2\right){\left|{s}_n\right|}^4\right\}+E\left\{g\left({\left|{s}_n\right|}^2\right){\left|{s}_n\right|}^2\right\}+\lambda +2{\gamma}_n-\left({\gamma}_n{h}_n+{\mu}_n\right)\right){e}^{j{\theta}_n}\\ {}\mathrm{and}\kern0.6em {\left\{\overline{\mathbf{K}}{\mathbf{q}}_n\right\}}_i=0\kern1.5em i\ne n\end{array}} $$

where qn = (VA)Hwn and \( \overline{\mathbf{K}}=\mathbf{K}\left(\mathbf{VA}\right) \).

The Lagrangian function for qn, λ, μn can be written as

$$ {L}^c\left({\mathbf{q}}_n,\lambda, {\mu}_n\right)={J}^c\left({\mathbf{q}}_n\right)+\lambda \left({\mathbf{q}}_n^H{\mathbf{q}}_n-1\right). $$

Calculating the derivative of the Lagrangian function with respect to \( {\mathbf{q}}_n^{\ast } \) at the optimal solution,

$$ \frac{\partial {L}^c\left({\mathbf{q}}_n,\lambda, {\mu}_n\right)}{\partial {{\mathbf{q}}_n}^{\ast }}=E\left\{g\left({\left|{s}_n\right|}^2\right){\left|{s}_n\right|}^2\right\}{\mathbf{q}}_n+\lambda {\mathbf{q}}_n-\left({\gamma}_n{h}_n+{\mu}_n\right){\mathbf{q}}_n $$

and solving \( \frac{\partial {L}^c\left({\mathbf{q}}_n,\lambda, {\mu}_n\right)}{\partial {{\mathbf{q}}_n}^{\ast }}=\mathbf{0} \), we can get

$$ \lambda =-E\left\{g\left({\left|{s}_n\right|}^2\right){\left|{s}_n\right|}^2\right\}+\left({\gamma}_n{h}_n+{\mu}_n\right) $$


$$ \overline{\mathbf{K}}{\mathbf{q}}_n=k{\mathbf{q}}_n $$


$$ {\mathbf{Kw}}_n=k{\mathbf{w}}_n $$


$$ k=2E\left\{{g}^{\hbox{'}}\left({\left|{s}_n\right|}^2\right){\left|{s}_n\right|}^4\right\}+2{\gamma}_n\in \mathrm{\mathbb{R}} $$
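This value of k follows by substituting the expression for \( \lambda \) above into the nonzero entry of \( \overline{\mathbf{K}}{\mathbf{q}}_n \):

$$ {\displaystyle \begin{array}{l}k=2E\left\{{g}^{\prime}\left({\left|{s}_n\right|}^2\right){\left|{s}_n\right|}^4\right\}+E\left\{g\left({\left|{s}_n\right|}^2\right){\left|{s}_n\right|}^2\right\}+\lambda +2{\gamma}_n-\left({\gamma}_n{h}_n+{\mu}_n\right)\\ {}\kern0.72em =2E\left\{{g}^{\prime}\left({\left|{s}_n\right|}^2\right){\left|{s}_n\right|}^4\right\}+2{\gamma}_n\end{array}} $$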

Since k is real, \( \mathbf{K} \) merely rescales \( {\mathbf{w}}_n \) and can therefore be removed owing to the normalization constraint \( \left\Vert {\mathbf{w}}_n\right\Vert =1 \) enforced after each iteration. The fixed-point iteration becomes

$$ {\displaystyle \begin{array}{l}{\mathbf{w}}_n^{\left(m+1\right)}=-E\left\{g\left({\left|y\right|}^2\right){y}^{\ast}\mathbf{x}\right\}+E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){\left|y\right|}^2+\kern0.5em g\left({\left|y\right|}^2\right)\right\}{\mathbf{w}}_n^{(m)}\\ {}\kern4.099998em +E\left\{{\mathbf{xx}}^T\right\}E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){y}^{\ast 2}\right\}{\mathbf{w}}_n^{(m)\ast}\\ {}\kern4.099998em +2{\gamma}_n{\left|{\mathbf{w}}_n^{(m)H}{\mathbf{r}}_n\right|}^2{\left({\mathbf{w}}_n^{(m)H}{\mathbf{r}}_n\right)}^{\ast }{\mathbf{r}}_n\end{array}} $$
2) If \( {\gamma}_n{h}_n\left({\mathbf{w}}_n,{\mathbf{r}}_n\right)+{\mu}_n<0 \), we can similarly derive

$$ {\displaystyle \begin{array}{l}{\mathbf{w}}_n^{\left(m+1\right)}=-E\left\{g\left({\left|y\right|}^2\right){y}^{\ast}\mathbf{x}\right\}+E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){\left|y\right|}^2+\kern0.5em g\left({\left|y\right|}^2\right)\right\}{\mathbf{w}}_n^{(m)}\\ {}\kern4.099998em +E\left\{{\mathbf{xx}}^T\right\}E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){y}^{\ast 2}\right\}{\mathbf{w}}_n^{(m)\ast}\end{array}} $$

Combining (33) and (34), we can get

$$ {\displaystyle \begin{array}{l}{\mathbf{w}}_n^{\left(m+1\right)}=-E\left\{g\left({\left|y\right|}^2\right){y}^{\ast}\mathbf{x}\right\}+E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){\left|y\right|}^2+\kern0.5em g\left({\left|y\right|}^2\right)\right\}{\mathbf{w}}_n^{(m)}\\ {}\kern4.099998em +E\left\{{\mathbf{xx}}^T\right\}E\left\{{g}^{\hbox{'}}\left({\left|y\right|}^2\right){y}^{\ast 2}\right\}{\mathbf{w}}_n^{(m)\ast}\\ {}\kern4.099998em +\operatorname{sign}\left(\max \left\{{\gamma}_n{h}_n\left({\mathbf{w}}_n,{\mathbf{r}}_n\right)+{\mu}_n,0\right\}\right)\cdot \kern0.3em 2{\gamma}_n{\left|{\mathbf{w}}_n^{(m)H}{\mathbf{r}}_n\right|}^2{\left({\mathbf{w}}_n^{(m)H}{\mathbf{r}}_n\right)}^{\ast }{\mathbf{r}}_n\end{array}} $$
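To make the combined update concrete, here is a small NumPy sketch of one fixed-point step for a single unit. This is our own illustration, not the authors' code: the nonlinearity `g`, its derivative `gp`, and the constraint value `h_n` (the value of \( {h}_n\left({\mathbf{w}}_n,{\mathbf{r}}_n\right) \), computed elsewhere since its definition is not reproduced in this section) are assumptions, and the final normalization to \( \left\Vert {\mathbf{w}}_n\right\Vert =1 \) is the usual FastICA convention:

```python
import numpy as np

def g(u):  return 1.0 / (0.1 + u)      # example nonlinearity (our choice)
def gp(u): return -1.0 / (0.1 + u)**2  # its derivative g'(u)

def update_w(w, X, r, gamma_n, mu_n, h_n):
    """One c-ncFastICA fixed-point step for unit n.
    w: current unit vector (N,), X: whitened mixtures (N, T),
    r: reference vector (N,), h_n: value of h_n(w, r) computed elsewhere."""
    y = np.conj(w) @ X                    # y = w^H x for all T samples
    u = np.abs(y) ** 2
    # -E{g(|y|^2) y* x}
    t1 = -np.mean(g(u) * np.conj(y) * X, axis=1)
    # E{g'(|y|^2)|y|^2 + g(|y|^2)} w
    t2 = np.mean(gp(u) * u + g(u)) * w
    # E{x x^T} E{g'(|y|^2) y*^2} w*  (pseudo-covariance term, noncircularity)
    C = X @ X.T / X.shape[1]
    t3 = np.mean(gp(u) * np.conj(y) ** 2) * (C @ np.conj(w))
    # sign(max{gamma_n h_n + mu_n, 0}) * 2 gamma_n |w^H r|^2 (w^H r)* r
    wHr = np.vdot(w, r)                   # np.vdot conjugates its first argument
    gate = 1.0 if gamma_n * h_n + mu_n > 0 else 0.0
    t4 = gate * 2 * gamma_n * np.abs(wHr) ** 2 * np.conj(wHr) * r
    w_new = t1 + t2 + t3 + t4
    return w_new / np.linalg.norm(w_new)  # renormalize to ||w_n|| = 1
```

In the full algorithm, this step is interleaved with decorrelation against the other units and with the multiplier update for \( {\mu}_n \) below.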

Updating \( {\mu}_n \) using the gradient descent method, we can obtain

$$ {\mu}_n\leftarrow \max \left\{{\gamma}_n{h}_n\left({\mathbf{w}}_n,{\mathbf{r}}_n\right)+{\mu}_n,0\right\} $$
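In code, this multiplier update is a one-line projection (the function name `update_mu` is ours, and `h_n` is the constraint value assumed computed elsewhere):

```python
def update_mu(mu_n, gamma_n, h_n):
    """Augmented-Lagrangian multiplier update: mu_n <- max{gamma_n*h_n + mu_n, 0}."""
    return max(gamma_n * h_n + mu_n, 0.0)

print(update_mu(0.5, 1.0, -0.8))  # constraint slack: multiplier clipped to 0.0
print(update_mu(0.5, 1.0, 0.2))   # constraint active: multiplier grows to 0.7
```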


Cite this article

Qian, G., Wang, L., Wang, S. et al. A novel fixed-point algorithm for constrained independent component analysis. EURASIP J. Adv. Signal Process. 2019, 28 (2019).
