A low computational complexity normalized subband adaptive filter algorithm employing signed regressor of input signal

EURASIP Journal on Advances in Signal Processing 2018, 2018:21

https://doi.org/10.1186/s13634-018-0542-z

Received: 28 September 2017

Accepted: 15 March 2018

Published: 2 April 2018

Abstract

In this paper, the signed regressor normalized subband adaptive filter (SR-NSAF) algorithm is proposed. The algorithm is derived from an L1-norm minimization criterion. The SR-NSAF has a fast convergence speed and a low steady-state error similar to those of the conventional NSAF. In addition, the proposed algorithm has lower computational complexity than the NSAF because it uses the signed regressor of the input signal in each subband. The theoretical mean-square performance of the proposed algorithm in stationary and nonstationary environments is analyzed based on the energy conservation relation, and the steady-state, transient, and stability bounds of the SR-NSAF are predicted by closed-form expressions. The good performance of the SR-NSAF is demonstrated through several simulation results in system identification, acoustic echo cancelation (AEC), and line echo cancelation (LEC) applications. The theoretical relations are also verified by various experimental results.

Keywords

  • Normalized subband adaptive filter (NSAF)
  • Mean-square performance
  • Signed regressor (SR)
  • L1-norm

1 Introduction

A fast convergence rate and low computational complexity are important requirements in high-data-rate applications such as speech processing, echo cancelation, network echo cancelation, and channel equalization. The least-mean-squares (LMS) and normalized LMS (NLMS) algorithms are useful in a wide range of adaptive filter applications because of their low computational complexity. However, the performance of LMS-type algorithms degrades when the input signals are colored [1, 2].

To solve this problem, various approaches such as the affine projection algorithm (APA) [3, 4] and the subband adaptive filter (SAF) algorithm [5–7] have been proposed. In [8], a new version of the SAF, referred to as the normalized SAF (NSAF), was developed based on a constrained optimization problem. The filter update equation in [8] is similar to the update equation in [9, 10], where the full-band filter is updated instead of the subfilters of the conventional SAF structure [5].

To reduce the computational complexity of the NSAF and APA, different methods have been proposed. In [11], the selective partial update NSAF (SPU-NSAF) algorithm was presented, in which a subset of the filter coefficients, rather than the entire filter, is updated at every adaptation. In [12], the dynamic selection NSAF (DS-NSAF) algorithm was introduced, in which the number of subbands is optimally selected at each iteration. The fixed selection NSAF (FS-NSAF) was introduced in [13]; in this algorithm, a fixed subset of subbands is selected during the adaptation.

Some classes of adaptive filter algorithms make use of the signum of the error signal, the input signal, or both. These approaches have been applied to the LMS algorithm for simplicity of implementation, enabling a significant reduction in computational complexity [14–18]. The sign algorithm (SA) takes the signum of the error signal and is particularly useful against impulsive interferences [19, 20]. In other cases, however, the convergence speed of the SA is slower than that of the conventional algorithm [21]. This approach was also successfully extended to the NSAF algorithm to establish the sign SAF (SSAF) algorithm [22, 23].

In the signed regressor LMS (SR-LMS), the signum of the input regressors is utilized: the polarity of the input signal is used to adjust the filter coefficients, which requires no multiplications. The SR-LMS has a convergence speed and a steady-state error level that are only slightly inferior to those of the LMS algorithm for the same parameter setting [24]. To increase the convergence speed of the SR-LMS, the signed regressor NLMS (SR-NLMS) was first proposed in [14], and a modified version of this algorithm (MSR-NLMS) was presented in [25]. Like the SR-LMS, the SR-NLMS enjoys advantages similar to those of the NLMS algorithm: due to the normalization factor, the steady-state error level does not depend on the input signal power [18], and no multiplications are needed to calculate the normalization factor. For highly colored input signals, however, the convergence speed of the SR-NLMS is still low. Moreover, the literature provides no cost function or optimization problem from which the signed regressor algorithms are derived.

Motivated by the attractive features of signed regressor adaptive algorithms (low computational complexity and a convergence speed close to that of the conventional algorithm), and to improve the performance of the SR-NLMS algorithm, this paper proposes the signed regressor NSAF (SR-NSAF) algorithm. The SR-NSAF is established through an L1-norm optimization in which a constraint is imposed on the decimated filter output to force the a posteriori error to zero; this constraint guarantees the convergence of the algorithm. The algorithm utilizes the signum of the input regressors at each subband during the adaptation, and again, no multiplications are required for the normalization factor at each subband. To improve the performance of the SR-NSAF, the modified SR-NSAF (MSR-NSAF) is also established. The proposed SR-NSAF and MSR-NSAF algorithms have lower computational complexity than the NSAF, SPU-NSAF, DS-NSAF, and FS-NSAF, while their convergence rate and steady-state error level are close to those of the NSAF. Because a theoretical analysis is essential for evaluating any proposed adaptive algorithm [26], the energy conservation approach [27], which requires no whiteness or Gaussianity assumption on the input regressors, is applied to the SR-NSAF, and the mean-square performance of the proposed algorithms is studied in stationary and nonstationary environments. On this basis, the transient, steady-state, and stability bounds of the SR-NSAF and MSR-NSAF are analyzed and closed-form relations are derived.

What we propose in this paper can be summarized as follows:
  • The establishment of the SR-NSAF according to the proposed cost function. This algorithm utilizes the signum of the input regressors at each subband. Furthermore, no multiplications are required for normalization factor at each subband.

  • Mean-square performance analysis of the SR-NSAF algorithm in the stationary and nonstationary environments. The theoretical expressions for transient and steady-state performances of the SR-NSAF are extracted.

  • Analysis of the mean and mean-square stability bounds of the SR-NSAF and MSR-NSAF algorithms.

  • The performance of the NSAF, SPU-NSAF, DS-NSAF, FS-NSAF, SR-NSAF, and MSR-NSAF is compared in terms of convergence speed, steady-state error, and computational complexity in system identification, acoustic echo cancelation, and line echo cancelation applications.

  • The theoretical expressions for transient, steady-state, and stability bounds are justified with various experiments.

The paper is organized as follows. Section 2 briefly reviews the conventional NSAF. The proposed SR-NSAF and MSR-NSAF are presented in Section 3. Section 4 presents the mean-square performance analysis of the SR-NSAF, and the theoretical stability bounds are given in Section 5. Section 6 discusses the computational complexity of the proposed algorithm. Finally, before concluding the paper, the usefulness of the introduced algorithms is demonstrated through several experimental results.

Throughout the paper, the following notations are used:

  • |·|: Absolute value of a scalar.
  • ‖·‖²: Squared Euclidean norm of a vector.
  • ‖·‖₁: L1-norm of a vector.
  • (·)ᵀ: Transpose of a vector or a matrix.
  • E{·}: Expectation operator.
  • sgn(·): Sign function.
  • Tr(·): Trace of a matrix.
  • λ_max(·): The largest eigenvalue of a matrix.
  • ℜ⁺: The set of positive real numbers.
  • A ⊗ B: Kronecker product of the matrices A and B.
  • \( {\left\Vert \mathbf{t}\right\Vert}_{\boldsymbol{\Phi}}^2 \): Φ-weighted squared Euclidean norm of a column vector t, defined as tᵀΦt.
  • diag(·): Has the same meaning as the MATLAB operator with the same name: for a vector argument, the result is the diagonal matrix with the diagonal elements given by the vector; for a matrix argument, the result is the vector containing its diagonal.
  • vec(T): Creates an M² × 1 column vector t by stacking the columns of the M × M matrix T.
  • vec(t): Creates an M × M matrix T from the M² × 1 column vector t.

2 Background on NSAF

Consider a linear data model for d(n) as
$$ d(n)={\mathbf{x}}^T(n){\mathbf{w}}^o+v(n) $$
(1)
where w o is an unknown M-dimensional vector that we expect to estimate, v(n) is the measurement noise with variance \( {\sigma}_v^2 \) and x(n) = [x(n), x(n − 1), …, x(n − M + 1)] T denotes an M-dimensional input (regressor) vector. It is assumed that v(n) is zero mean, white, Gaussian, and independent of x(n). Figure 1 shows the structure of the NSAF [8]. In this figure, f0, f1, …, fN − 1 and g0, g1, …, gN − 1, are analysis and synthesis filter unit impulse responses of an N channel orthogonal perfect reconstruction critically sampled filter bank system. x i (n) and d i (n) are nondecimated subband signals. It is important to note that n refers to the index of the original sequences, and k denotes the index of the decimated sequences (k = floor(n/N)). The decimated output signal is defined as \( {y}_{i,D}(k)={\mathbf{x}}_i^T(k)\mathbf{w}(k) \) where x i (k) = [x i (kN), x i (kN − 1), …, x i (kN − M + 1)] T and w(k) = [w0(k),   w1(k), …, wM − 1(k)] T . Also, the decimated subband error signal is defined as \( {e}_{i,D}(k)={d}_{i,D}(k)-{\mathbf{x}}_i^T(k)\mathbf{w}(k) \). The filter update equation for NSAF can be stated as
$$ \mathbf{w}\left(k+1\right)=\mathbf{w}(k)+\mu \sum \limits_{i=0}^{N-1}\frac{{\mathbf{x}}_i(k)}{{\left\Vert {\mathbf{x}}_i(k)\right\Vert}^2}{e}_{i,D}(k) $$
(2)
where μ is the step size and 0 < μ < 2 [8].
Fig. 1

Structure of the NSAF algorithm
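For reference, the NSAF update (2) can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation: it assumes the analysis filter bank has already produced the subband regressors x_i(k) and the decimated desired samples d_{i,D}(k), and the function and variable names (including the regularization parameter `eps`) are illustrative.

```python
import numpy as np

def nsaf_update(w, X_sub, d_sub, mu=0.5, eps=1e-8):
    """One NSAF iteration, Eq. (2). X_sub is M x N with one subband
    regressor x_i(k) per column; d_sub holds the N decimated desired
    samples d_{i,D}(k)."""
    e = d_sub - X_sub.T @ w              # decimated subband errors e_{i,D}(k)
    norms = np.sum(X_sub ** 2, axis=0)   # ||x_i(k)||^2 for each subband
    w = w + mu * X_sub @ (e / (eps + norms))
    return w, e
```

Each subband contributes a gradient term normalized by its own regressor energy, which is what decorrelates the update for colored inputs.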

3 Signed regressor normalized subband adaptive filter (SR-NSAF)

Based on the principle of minimum disturbance, the SR-NSAF is formulated by the following optimization problem
$$ \min {\left\Vert \mathbf{w}\left(k+1\right)-\mathbf{w}(k)\right\Vert}_1 $$
(3)
subject to the N constraints (i = 0, 1, …, N − 1) which are defined as
$$ {d}_{i,D}(k)={\mathbf{x}}_i^T(k)\mathbf{w}\left(k+1\right) $$
(4)
By applying the method of Lagrange multipliers, the following Lagrangian function is obtained
$$ J\left(\mathbf{w}\left(k+1\right)\right)={\left\Vert \mathbf{w}\left(k+1\right)-\mathbf{w}(k)\right\Vert}_1+\sum \limits_{i=0}^{N-1}{\lambda}_i\left[{d}_{i,D}(k)-{\mathbf{x}}_i^T(k)\mathbf{w}\left(k+1\right)\right]\kern0.75em $$
(5)
where λ i is the ith Lagrange multiplier. Setting \( \frac{\partial J\left(\mathbf{w}\left(k+1\right)\right)}{\partial \mathbf{w}\left(k+1\right)}=0, \) we obtain
$$ \operatorname{sgn}\left[\mathbf{w}\left(k+1\right)-\mathbf{w}(k)\right]=\sum \limits_{i=0}^{N-1}{\lambda}_i{\mathbf{x}}_i(k) $$
(6)
In the NSAF algorithm, if the magnitude responses of the analysis filters do not significantly overlap, the cross-correlation between two arbitrary subband signals is negligible compared with the auto-correlation [8]. Therefore, by multiplying both sides of the above equation by \( {\mathbf{x}}_i^T(k) \) from the left and neglecting the cross terms, we obtain
$$ {\mathbf{x}}_i^T(k)\operatorname{sgn}\left[\mathbf{w}\left(k+1\right)-\mathbf{w}(k)\right]={\lambda}_i{\left\Vert {\mathbf{x}}_i(k)\right\Vert}^2 $$
(7)
By defining sgn(x i (k)) = Θ i (k)x i (k), sgn[w(k + 1) − w(k)] = Υ(k)[w(k + 1) − w(k)], and multiplying sgn(x i (k)) on both sides of (7) from the left, we get
$$ {\boldsymbol{\Theta}}_i(k){\mathbf{x}}_i(k){\mathbf{x}}_i^T(k)\mathbf{Y} (k)\left[\mathbf{w}\left(k+1\right)-\mathbf{w}(k)\right]={\lambda}_i{\boldsymbol{\Theta}}_i(k){\mathbf{x}}_i(k){\left\Vert {\mathbf{x}}_i(k)\right\Vert}^2 $$
(8)
where
$$ {\boldsymbol{\Theta}}_i(k)=\operatorname{diag}\left[\frac{1}{\left|{x}_i(kN)\right|},\frac{1}{\left|{x}_i\left( kN-1\right)\right|},\dots, \frac{1}{\left|{x}_i\left( kN-M+1\right)\right|}\right] $$
(9)
and
$$ \mathbf{Y} (k)=\operatorname{diag}\left[\frac{1}{\left|{w}_0\left(k+1\right)-{w}_0(k)\right|},\dots, \frac{1}{\left|{w}_{M-1}\left(k+1\right)-{w}_{M-1}(k)\right|}\right] $$
(10)
If the number of subbands is large enough, x i (k) may be approximately assumed white [26, 28]. Therefore, by displacing the matrices in (8) and using (4), we obtain
$$ \mathbf{Y} (k){\boldsymbol{\Theta}}_i(k){\mathbf{x}}_i(k){e}_{i,D}(k)={\lambda}_i{\mathbf{x}}_i(k){\left\Vert {\mathbf{x}}_i(k)\right\Vert}_1 $$
(11)
where \( {\left\Vert {\mathbf{x}}_i(k)\right\Vert}_{\mathbf{1}}=\operatorname{sgn}\left({\mathbf{x}}_i^T(k)\right){\mathbf{x}}_i(k) \). Now, by multiplying \( \operatorname{sgn}\left({\mathbf{x}}_i^T(k)\right) \) on both sides of (11) from the left, the Lagrange multipliers are given by
$$ {\lambda}_i=\frac{\operatorname{sgn}\left[{\mathbf{x}}_i^T(k)\right]{\boldsymbol{\Theta}}_i(k)\mathbf{Y} (k){\mathbf{x}}_i(k){e}_{i,D}(k)}{{\left({\left\Vert {\mathbf{x}}_i(k)\right\Vert}_1\right)}^2} $$
(12)
Substituting (12) into (6) leads to
$$ \mathbf{Y} (k)\left[\mathbf{w}\left(k+1\right)-\mathbf{w}(k)\right]=\sum \limits_{i=0}^{N-1}\frac{\operatorname{sgn}\left[{\mathbf{x}}_i^T(k)\right]{\boldsymbol{\Theta}}_i(k)\mathbf{Y} (k){\mathbf{x}}_i(k){e}_{i,D}(k)}{{\left({\left\Vert {\mathbf{x}}_i(k)\right\Vert}_1\right)}^2}{\mathbf{x}}_i(k) $$
(13)
By multiplying Υ⁻¹(k) on both sides of (13) from the left and rearranging the diagonal matrices, the filter coefficient update equation of the SR-NSAF is established as
$$ \mathbf{w}\left(k+1\right)=\mathbf{w}(k)+\mu \sum \limits_{i=0}^{N-1}\frac{\operatorname{sgn}\left[{\mathbf{x}}_i(k)\right]}{{\left\Vert {\mathbf{x}}_i(k)\right\Vert}_1}{e}_{i,D}(k) $$
(14)
where μ is again the step size and should be selected within the stability bound.1 To avoid division by zero, the denominator of the update equation is commonly replaced by ϵ + ‖x i (k)‖₁, where ϵ is a small regularization parameter. Table 1 summarizes the SR-NSAF algorithm.
It is interesting to note that for N = 1, and f0 = 1, the SR-NSAF in (14) reduces to
$$ \mathbf{w}\left(n+1\right)=\mathbf{w}(n)+\mu \frac{\operatorname{sgn}\left[\mathbf{x}(n)\right]}{{\left\Vert \mathbf{x}(n)\right\Vert}_1}e(n) $$
(15)
which is the SR-NLMS algorithm [14]. In this case, the output error is given by e(n) = d(n) − x T (n)w(n).
Table 1

The SR-NSAF
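The update (14) can likewise be sketched in Python, under the same assumption of precomputed subband signals (names are illustrative). Note that `np.sign` and the L1-norm require only comparisons and additions, which is where the complexity saving over the NSAF comes from.

```python
import numpy as np

def sr_nsaf_update(w, X_sub, d_sub, mu=0.5, eps=1e-8):
    """One SR-NSAF iteration, Eq. (14): the signed regressor
    sgn[x_i(k)] replaces x_i(k), and the L1-norm ||x_i(k)||_1
    (additions only) replaces the squared Euclidean norm."""
    e = d_sub - X_sub.T @ w                  # e_{i,D}(k), as in the NSAF
    l1 = np.sum(np.abs(X_sub), axis=0)       # ||x_i(k)||_1 per subband
    w = w + mu * np.sign(X_sub) @ (e / (eps + l1))
    return w, e
```

Only the scaling of each subband error by μ/(ϵ + ‖x_i(k)‖₁) involves an actual multiplication; applying the sign vector amounts to sign flips.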

In [25], a new version of the SR-NLMS was proposed based on clipping of the input signal: a sample contributes to the coefficient update only when its absolute value is larger than the average of the absolute values of the input samples. The performance of the SR-NSAF can be improved by applying this idea in each subband. Therefore, the new version of the SR-NSAF, called the modified SR-NSAF (MSR-NSAF), is established following the procedure in Table 2.
Table 2

The MSR-NSAF
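The clipping rule of [25], as applied per subband in Table 2, can be sketched as follows. This is an illustrative sketch of the idea, assuming the average of the absolute values of the current regressor's samples is used as the clipping threshold and below-threshold samples are zeroed.

```python
import numpy as np

def clipped_sign(x):
    """Clipped signum for the MSR-NSAF sketch: keep the sign of a
    sample only when its magnitude exceeds the mean magnitude of the
    regressor; zero it otherwise (threshold choice is an assumption)."""
    thr = np.mean(np.abs(x))
    return np.where(np.abs(x) > thr, np.sign(x), 0.0)
```

Replacing `np.sign(X_sub)` with this clipped version in the SR-NSAF update gives the per-subband MSR-NSAF behavior described above.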

4 Mean square performance analysis of SR-NSAF in stationary environment

The filter coefficient update equation of the SR-NSAF can be represented as
$$ \mathbf{w}\left(k+1\right)=\mathbf{w}(k)+\mu \operatorname {sgn}\left[\mathbf{X}(k)\mathbf{F}\right]\mathbf{W}(k){\mathbf{F}}^T\mathbf{e}(k) $$
(16)
where F is the K × N matrix whose columns are the unit pulse responses of the channel filters of a critically sampled analysis filter bank,2 F = [f0, f1, …, fN − 1 ], X(k) is the M × K input signal matrix which is defined as
$$ \mathbf{X}(k)=\left[\mathbf{x}(kN),\mathbf{x}\left( kN-1\right),\dots, \mathbf{x}\left( kN-\left(K-1\right)\right)\right] $$
(17)
and
$$ \mathbf{d}(k)={\left[d(kN),d\left( kN-1\right),\dots, d\left( kN-\left(K-1\right)\right)\right]}^T $$
(18)
Also, W(k) = [ϵI + diag{diag{FᵀXᵀ(k) sgn[X(k)F]}}]⁻¹, and
$$ \mathbf{e}(k)=\mathbf{d}(k)-{\mathbf{X}}^T(k)\mathbf{w}(k) $$
(19)
is the error signal vector. In the theoretical convergence analysis, we need the time evolution of \( E\left\{{\left\Vert \overset{\sim }{\mathbf{w}}(k)\right\Vert}_{\boldsymbol{\Phi}}^2\right\} \), where \( \overset{\sim }{\mathbf{w}}(k)={\mathbf{w}}^o-\mathbf{w}(k) \) is the weight-error vector and Φ is any Hermitian positive-definite matrix. When Φ = I (the identity matrix), the mean square deviation (MSD) is obtained, and when Φ = R (the autocorrelation matrix of the input signal), the excess mean square error (EMSE) is obtained.
The weight error vector update equation for SR-NSAF can be written as
$$ \tilde{\mathbf{w}}\left(k+1\right)=\tilde{\mathbf{w}}(k)-\mu \operatorname {sgn}\left[\mathbf{X}(k)\mathbf{F}\right]\mathbf{W}(k){\mathbf{F}}^T\mathbf{e}(k) $$
(20)
From (1) and (19), the error vector, e(k), can be described as
$$ \mathbf{e}(k)={\mathbf{X}}^T(k)\tilde{\mathbf{w}}(k)+\mathbf{v}(k) $$
(21)
Substituting (21) into (20) yields
$$ \tilde{\mathbf{w}}\left(k+1\right)=\tilde{\mathbf{w}}(k)-\mu \operatorname {sgn}\left[\mathbf{X}(k)\mathbf{F}\right]\mathbf{W}(k){\mathbf{F}}^T\left[{\mathbf{X}}^T(k)\tilde{\mathbf{w}}(k)+\mathbf{v}(k)\right] $$
(22)
By taking the Φ-weighted norm from both sides of (22), we obtain
$$ {\displaystyle \begin{array}{c}{\left\Vert \tilde{\mathbf{w}}\left(k+1\right)\right\Vert}_{\boldsymbol{\Phi}}^2={\left\Vert \tilde{\mathbf{w}}(k)\right\Vert}_{\boldsymbol{\Psi}}^2+{\mu}^2{\mathbf{v}}^T(k)\mathbf{Y}(k)\boldsymbol{v}(k)\\ {}-\mu {\tilde{\mathbf{w}}}^T(k)\boldsymbol{\Phi} \operatorname {sgn}\left[\mathbf{X}(k)\mathbf{F}\right]\mathbf{W}(k){\mathbf{F}}^T\mathbf{v}(k)\\ {}-\mu {\mathbf{v}}^T(k){\mathbf{F}\mathbf{W}}^T(k){\left(\operatorname{sgn}\left[\mathbf{X}(k)\mathbf{F}\right]\right)}^T\boldsymbol{\Phi} \tilde{\mathbf{w}}(k)\\ {}+{\mu}^2{\tilde{\mathbf{w}}}^T(k)\mathbf{X}(k)\mathbf{Y}(k)\mathbf{v}(k)\\ {}+{\mu}^2{\mathbf{v}}^T(k){\mathbf{Y}}^T(k){\mathbf{X}}^T(k)\tilde{\mathbf{w}}(k)\end{array}} $$
(23)
where
$$ \mathbf{Y}(k)={\mathbf{F}\mathbf{W}}^T(k){\left(\operatorname{sgn}\left[\mathbf{X}(k)\mathbf{F}\right]\right)}^T\boldsymbol{\Phi} \operatorname{sgn}\left[\mathbf{X}(k)\mathbf{F}\right]\mathbf{W}(k){\mathbf{F}}^T $$
(24)
and
$$ \boldsymbol{\Psi} =\boldsymbol{\Phi} -\mu \boldsymbol{\Phi} \mathbf{Z}(k)-\mu {\mathbf{Z}}^T(k)\boldsymbol{\Phi} +{\mu}^2{\mathbf{Z}}^T(k)\boldsymbol{\Phi} \mathbf{Z}(k) $$
(25)
Also in (25), Z(k) = sgn[X(k)F]W(k)F T X T (k). By applying the expectation into both sides of (23), we obtain
$$ E\left\{{\left\Vert \tilde{\mathbf{w}}\left(k+1\right)\right\Vert}_{\boldsymbol{\Phi}}^2\right\}=E\left\{{\left\Vert \tilde{\mathbf{w}}(k)\right\Vert}_{\boldsymbol{\Psi}}^2\right\}+{\mu}^2E\left\{{\mathbf{v}}^T(k)\mathbf{Y}(k)\boldsymbol{v}(k)\right\} $$
(26)
To simplify this relation, we invoke the standard independence assumptions. The matrix X(k) is assumed to be an independent and identically distributed sequence [2, 27]. This assumption guarantees that \( \overset{\sim }{\mathbf{w}}(k) \) is independent of both Ψ and X(k). Therefore,
$$ \boldsymbol{\Psi} =\boldsymbol{\Phi} -\mu \boldsymbol{\Phi} E\left\{\mathbf{Z}(k)\right\}-\mu E\left\{{\mathbf{Z}}^T(k)\right\}\boldsymbol{\Phi} +{\mu}^2E\left\{{\mathbf{Z}}^T(k)\boldsymbol{\Phi} \mathbf{Z}(k)\right\} $$
(27)
The second term of the right hand side of (26) can be presented as
$$ E\left\{{\mathbf{v}}^T(k)\mathbf{Y}(k)\boldsymbol{v}(k)\right\}=E\left\{\mathrm{Tr}\left(\boldsymbol{v}(k){\mathbf{v}}^T(k)\mathbf{Y}(k)\right)\right\}=\mathrm{Tr}\left(E\left\{\boldsymbol{v}(k){\mathbf{v}}^T(k)\right\}E\left\{\mathbf{Y}(k)\right\}\right) $$
(28)
Since \( E\left\{\boldsymbol{v}(k){\mathbf{v}}^T(k)\right\}={\sigma}_v^2\mathbf{I} \), we obtain
$$ E\left\{{\left\Vert \tilde{\mathbf{w}}\left(k+1\right)\right\Vert}_{\boldsymbol{\Phi}}^2\right\}=E\left\{{\left\Vert \tilde{\mathbf{w}}(k)\right\Vert}_{\boldsymbol{\Psi}}^2\right\}+{\mu}^2{\sigma}_v^2\mathrm{Tr}\left(E\left\{\mathbf{Y}(k)\right\}\right) $$
(29)
Applying the vec(.) operation on both sides of (25) and using vec(PΦQ) = (Q T P)vec(Φ) lead to
$$ \psi =\phi -\mu \left(E\left\{{\mathbf{Z}}^T(k)\right\}\otimes \mathbf{I}\right)\phi -\mu \left(\mathbf{I}\otimes E\left\{{\mathbf{Z}}^T(k)\right\}\right)\phi +{\mu}^2\left(E\left\{{\mathbf{Z}}^T(k)\otimes {\mathbf{Z}}^T(k)\right\}\right)\phi $$
(30)
where ψ = vec(Ψ), and ϕ = vec(Φ). Therefore, by defining the matrix P as
$$ \mathbf{P}=\mathbf{I}-\mu \left(E\left\{{\mathbf{Z}}^T(k)\right\}\otimes \mathbf{I}\right)-\mu \left(\mathbf{I}\otimes E\left\{{\mathbf{Z}}^T(k)\right\}\right)+{\mu}^2\left(E\left\{{\mathbf{Z}}^T(k)\otimes {\mathbf{Z}}^T(k)\right\}\right) $$
(31)
we obtain
$$ \psi =\mathbf{P}\phi $$
(32)
Defining
$$ \varphi =\mathrm{vec}\left(E\left\{\operatorname{sgn}\left[\mathbf{X}(k)\mathbf{F}\right]\mathbf{W}(k){\mathbf{F}}^T{\mathbf{F}\mathbf{W}}^T(k){\left(\operatorname{sgn}\left[\mathbf{X}(k)\mathbf{F}\right]\right)}^T\right\}\right) $$
(33)
we get
$$ \mathrm{Tr}\left(E\left\{\mathbf{Y}(k)\right\}\right)={\varphi}^T\phi $$
(34)
Finally, (26) can be stated as
$$ E\left\{{\left\Vert \tilde{\mathbf{w}}\left(k+1\right)\right\Vert}_{\phi}^2\right\}=E\left\{{\left\Vert \tilde{\mathbf{w}}(k)\right\Vert}_{\mathbf{P}\phi}^2\right\}+{\mu}^2{\sigma}_v^2{\varphi}^T\phi $$
(35)
This equation is related to \( \overset{\sim }{\mathbf{w}}(0) \) as
$$ E\left\{{\left\Vert \tilde{\mathbf{w}}(k)\right\Vert}_{\phi}^2\right\}=E\left\{{\left\Vert \tilde{\mathbf{w}}(0)\right\Vert}_{{\mathbf{P}}^k\phi}^2\right\}+{\mu}^2{\sigma}_v^2{\varphi}^T\left[\mathbf{I}+\mathbf{P}+\dots +{\mathbf{P}}^{k-1}\right]\phi $$
(36)
By substituting R for Φ and defining r = vec(R), the transient behavior of the SR-NSAF can be predicted by (35). From this recursion, the EMSE is obtained as k goes to infinity. Therefore, the EMSE in the steady state can be stated as
$$ \mathrm{EMSE}={\mu}^2{\sigma}_v^2{\varphi}^T{\left(\mathbf{I}-\mathbf{P}\right)}^{-1}\mathbf{r} $$
(37)
The MSE and the EMSE are related as
$$ \mathrm{MSE}=\mathrm{EMSE}+{\sigma}_v^2 $$
(38)
Also, the steady-state mean-square deviation (MSD) is given by
$$ \mathrm{MSD}={\mu}^2{\sigma}_v^2{\varphi}^T{\left(\mathbf{I}-\mathbf{P}\right)}^{-1}\mathrm{vec}\left(\mathbf{I}\right) $$
(39)

It is important to note that selecting F = I and N = K = 1 leads to the performance analysis of the SR-NLMS and MSR-NLMS algorithms, which was not presented in [14, 25]. This analysis can also be extended to the nonstationary environment; Appendix 1 presents the mean-square performance analysis of the SR-NSAF in that case.
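The closed-form expressions (31), (33), and (37) can be evaluated numerically once the moments of Z(k) are estimated by sample averaging over the input. The sketch below does exactly that; since the moment matrices are M² × M², it is practical only for short filters, and all names (and the Monte-Carlo estimation itself) are illustrative rather than the authors' procedure.

```python
import numpy as np

def theoretical_emse(X_samples, F, mu, sigma_v2, eps=1e-8):
    """Steady-state EMSE of Eq. (37): mu^2 sigma_v^2 phi^T (I-P)^{-1} r,
    with P from Eq. (31) and phi from Eq. (33), using sample estimates
    of the required moments. X_samples is a list of M x K input matrices."""
    M = X_samples[0].shape[0]
    EZt = np.zeros((M, M))           # E{Z^T(k)}
    EZZ = np.zeros((M * M, M * M))   # E{Z^T(k) (x) Z^T(k)}
    EY = np.zeros((M, M))            # E{sgn[XF] W F^T F W^T sgn[XF]^T}
    R = np.zeros((M, M))
    for X in X_samples:
        S = np.sign(X @ F)                               # sgn[X(k)F]
        # diagonal of F^T X^T sgn[XF] holds ||x_i(k)||_1 for each subband
        W = np.diag(1.0 / (eps + np.diag(F.T @ X.T @ S)))
        Z = S @ W @ F.T @ X.T                            # Z(k)
        G = S @ W @ F.T
        EZt += Z.T
        EZZ += np.kron(Z.T, Z.T)
        EY += G @ G.T
        R += X @ X.T / X.shape[1]
    n = len(X_samples)
    EZt, EZZ, EY, R = EZt / n, EZZ / n, EY / n, R / n
    I2 = np.eye(M * M)
    P = (I2 - mu * np.kron(EZt, np.eye(M)) - mu * np.kron(np.eye(M), EZt)
         + mu ** 2 * EZZ)                                # Eq. (31)
    phi = EY.reshape(-1, order="F")                      # vec(.), Eq. (33)
    r = R.reshape(-1, order="F")                         # vec(R)
    return mu ** 2 * sigma_v2 * (phi @ np.linalg.solve(I2 - P, r))
```

Replacing `r` with `vec(I)` gives the steady-state MSD of (39) in the same way.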

5 Mean and mean-square stability of the SR-NSAF

Taking the expectation from both sides of (22) leads to
$$ E\left\{\tilde{\mathbf{w}}\left(k+1\right)\right\}=E\left\{\tilde{\mathbf{w}}(k)-\mu \operatorname {sgn}\left[\mathbf{X}(k)\mathbf{F}\right]\mathbf{W}(k){\mathbf{F}}^T\left[{\mathbf{X}}^T(k)\tilde{\mathbf{w}}(k)+\mathbf{v}(k)\right]\right\} $$
(40)
From (40), the convergence to the mean of the SR-NSAF is guaranteed for any μ that satisfies
$$ \mu <\frac{2}{\lambda_{\mathrm{max}}\left(E\left\{\operatorname{sgn}\left[\mathbf{X}(k)\mathbf{F}\right]\mathbf{W}(k){\mathbf{F}}^T{\mathbf{X}}^T(k)\right\}\right)} $$
(41)
Equation (35) is stable if the matrix P is stable [27]. From (31), we know that P = I − μM + μ²N, where M = E{Zᵀ(k)} ⊗ I + I ⊗ E{Zᵀ(k)} and N = E{Zᵀ(k) ⊗ Zᵀ(k)}. The condition on μ that guarantees convergence of the SR-NSAF in the mean-square sense is
$$ 0<\mu <\min \left\{\frac{1}{\lambda_{\mathrm{max}}\left({\mathbf{M}}^{-1}\mathbf{N}\right)},\frac{1}{\max \left(\lambda \left(\mathbf{H}\right)\in {\mathrm{\Re}}^{+}\right)}\right\} $$
(42)
where \( \mathbf{H}=\left[\begin{array}{cc}\frac{1}{2}\mathbf{M}& -\frac{1}{2}\mathbf{N}\\ {}\mathbf{I}& \mathbf{0}\end{array}\right] \).
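The bound (42) can be computed directly from sample estimates of the moments E{Zᵀ(k)} and E{Zᵀ(k) ⊗ Zᵀ(k)}. The sketch below is illustrative (names are assumptions, and the eigenvalue handling follows a straightforward reading of (42)):

```python
import numpy as np

def mu_bound(EZt, EZZ):
    """Mean-square step-size bound of Eq. (42), given moment estimates
    EZt = E{Z^T(k)} and EZZ = E{Z^T(k) (x) Z^T(k)} obtained, e.g.,
    by sample averaging."""
    m = EZt.shape[0]
    I = np.eye(m)
    Mm = np.kron(EZt, I) + np.kron(I, EZt)   # M in P = I - mu*M + mu^2*N
    Nm = EZZ                                  # N
    lam1 = np.real(np.linalg.eigvals(np.linalg.solve(Mm, Nm))).max()
    b1 = 1.0 / lam1 if lam1 > 0 else np.inf
    m2 = m * m
    H = np.block([[0.5 * Mm, -0.5 * Nm],
                  [np.eye(m2), np.zeros((m2, m2))]])
    lam = np.linalg.eigvals(H)
    pos = lam.real[np.isreal(lam) & (lam.real > 0)]
    b2 = 1.0 / pos.max() if pos.size else np.inf
    return min(b1, b2)
```

In the scalar case (M = K = N = 1) the bound reduces to 1/λ_max(M⁻¹N), matching the stability of the scalar recursion P = 1 − μM + μ²N.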

6 Computational complexity

Table 3 compares the computational complexity of the NSAF and SR-NSAF algorithms; it shows that the number of multiplications in the SR-NSAF is lower than in the NSAF. Table 4 summarizes the number of multiplications at each iteration for different NSAF algorithms. In this table, M, N, K, B, S, L, N s , and N(k) are the filter length, the number of subbands, the length of the channel filters, the number of blocks, the number of blocks to update, the length of the blocks, the number of selected subbands (fixed), and the number of selected subbands (dynamic), respectively. The exact computational complexity of the NSAF is 3M + 3NK + 1 multiplications [11]. From [11], the computational complexity of the SPU-NSAF is 2M + SL + 3NK + 1 multiplications; compared with the NSAF, the reduction is M − SL multiplications, which is considerable for large values of M. The DS-NSAF needs \( \left(1+2\frac{N(k)}{N}\right)M+3 NK+N \) multiplications [12]; due to the selection of subbands during the adaptation, its number of multiplications is also reduced. The exact number of multiplications in the FS-NSAF is \( 2M+\left(\frac{N_s}{N}\right)M+3 NK+1 \). The proposed SR-NSAF needs 2M fewer multiplications than the NSAF, and fewer than the SPU-NSAF, FS-NSAF, and DS-NSAF as well. Figure 2 compares the number of multiplications versus the filter length for the NSAF, FS-NSAF, DS-NSAF, SPU-NSAF (B = 4, S = 1, 2, 3), and the proposed SR-NSAF with N = 8. As can be seen, the number of multiplications in the SR-NSAF is significantly lower than in the other algorithms.
Table 3

The computational complexity of NSAF and SR-NSAF

Computation

Multiplications

\( {x}_i(n)={\mathbf{f}}_i^T\mathbf{x}(n) \). The input signal, x(n), is K × 1

NK

\( {d}_i(n)={\mathbf{f}}_i^T\mathbf{d}(n) \). The desired signal, d(n), is K × 1

NK

\( e(n)={\sum}_{i=0}^{N-1}{\mathbf{g}}_i^T{e}_i(n) \)

NK

\( {e}_{i,D}(k)={d}_{i,D}(k)-{\mathbf{x}}_i^T(k)\mathbf{w}(k) \)

M

\( \mathbf{w}\left(k+1\right)=\mathbf{w}(k)+\mu {\sum}_{i=0}^{N-1}\frac{{\mathbf{x}}_i(k)}{{\left\Vert {\mathbf{x}}_i(k)\right\Vert}^2}{e}_{i,D}(k) \)

2M + 1

\( \mathbf{w}\left(k+1\right)=\mathbf{w}(k)+\mu {\sum}_{i=0}^{N-1}\frac{\operatorname{sgn}\left[{\mathbf{x}}_i(k)\right]}{{\left\Vert {\mathbf{x}}_i(k)\right\Vert}_1}{e}_{i,D}(k) \)

1

Total Complexity for NSAF

3M + 3NK + 1

Total Complexity for SR-NSAF

M + 3NK + 1

Table 4

Computational complexity of the family of NSAF algorithms

NSAF [8]: 3M + 3NK + 1

SPU-NSAF [11]: 2M + SL + 3NK + 1

DS-NSAF [12]: \( \left(1+2\frac{N(k)}{N}\right)M+3 NK+N \)

FS-NSAF [13]: \( 2M+\left(\frac{N_s}{N}\right)M+3 NK+1 \)

Proposed SR-NSAF: M + 3NK + 1

Fig. 2

Number of multiplications versus the filter length for NSAF, DS-NSAF, FS-NSAF, SPU-NSAF and proposed SR-NSAF with N = 8
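The per-iteration counts of Table 4 are easy to tabulate, which is how a figure like Fig. 2 is produced. In the sketch below, the values of K, S, L, N_s, and N(k) are illustrative assumptions, not the exact settings used for the figure.

```python
def mult_counts(M, N=8, K=16, S=2, L=None, Ns=4, Nk=4):
    """Multiplications per iteration for the NSAF family (Table 4).
    Default parameter values are illustrative; L defaults to M/B
    with B = 4 blocks for the SPU-NSAF."""
    L = M // 4 if L is None else L
    return {
        "NSAF": 3 * M + 3 * N * K + 1,
        "SPU-NSAF": 2 * M + S * L + 3 * N * K + 1,
        "DS-NSAF": (1 + 2 * Nk / N) * M + 3 * N * K + N,
        "FS-NSAF": 2 * M + (Ns / N) * M + 3 * N * K + 1,
        "SR-NSAF": M + 3 * N * K + 1,
    }
```

Sweeping M and plotting each entry reproduces the qualitative comparison of Fig. 2: the SR-NSAF curve sits 2M multiplications below the NSAF curve.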

7 Simulation results

We demonstrate the performance of the proposed algorithms through several computer simulations in system identification (SI), acoustic echo cancelation (AEC), and line echo cancelation (LEC) setups. The impulse response of a car echo path with 256 taps (M = 256) is used as the unknown system [29] (Fig. 3). The filter bank used in the NSAF algorithms is the extended lapped transform (ELT) (N = 2, 4, and 8) [11, 30]. In all simulations, we show the normalized mean square deviation (NMSD), \( E\left[\frac{{\left\Vert {\mathbf{w}}^{\boldsymbol{o}}-\mathbf{w}(k)\right\Vert}^2}{{\left\Vert {\mathbf{w}}^{\boldsymbol{o}}\right\Vert}^2}\right] \), evaluated by ensemble averaging over 20 independent trials.
Fig. 3

Impulse response of the car echo path (M = 256)
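The NMSD reported in all learning curves is a trivial computation; a helper like the following (dB scaling assumed, as is conventional for such curves) evaluates it for a single trial:

```python
import numpy as np

def nmsd_db(w_o, w):
    """Normalized mean-square deviation ||w^o - w||^2 / ||w^o||^2 in dB."""
    return 10.0 * np.log10(np.sum((w_o - w) ** 2) / np.sum(w_o ** 2))
```

Averaging this quantity over the 20 independent trials gives the curves shown in the figures.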

7.1 System identification: AR(2) input signal

In this experiment, the input signal is an AR(2) signal generated by passing zero-mean white Gaussian noise through the second-order system \( \mathrm{T}(z)=\frac{1}{1-0.1{z}^{-1}-0.8{z}^{-2}} \). Additive white Gaussian noise was added to the system output, setting the signal-to-noise ratio (SNR) to 30 dB. Figure 4 compares the convergence of the NSAF, SR-NSAF, and MSR-NSAF algorithms with N = 4 for different step sizes (1, 0.2, and 0.05). As can be seen, a large step size yields a fast convergence rate but a high steady-state error, whereas a small step size leads to a slow convergence rate and a low steady-state error. Figure 5 shows the performance of the SR-NSAF, MSR-NSAF, and conventional NSAF for N = 4 and 8 subbands, with the step size set to μ = 0.05. Increasing the number of subbands improves the convergence rate of all the algorithms but also increases their computational complexity. The results show that the SR-NSAF and MSR-NSAF algorithms perform close to the conventional NSAF, while their computational complexity is lower.
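The colored input of this experiment can be generated directly from the difference equation implied by T(z). This is a sketch; the sample count and random seed are arbitrary.

```python
import numpy as np

def ar2_input(n, a1=0.1, a2=0.8, rng=None):
    """AR(2) input: white Gaussian noise through
    T(z) = 1 / (1 - 0.1 z^-1 - 0.8 z^-2), i.e.
    x(k) = v(k) + a1*x(k-1) + a2*x(k-2)."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.standard_normal(n)
    x = np.zeros(n)
    for k in range(n):
        x[k] = v[k]
        if k >= 1:
            x[k] += a1 * x[k - 1]
        if k >= 2:
            x[k] += a2 * x[k - 2]
    return x
```

The poles of T(z) are at roughly 0.95 and −0.85, so the process is stationary but strongly colored, which is precisely the regime where subband algorithms outperform the NLMS.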
Fig. 4

NMSD learning curves of NSAF, SR-NSAF, and MSR-NSAF with N = 4 and different values of μ

Fig. 5

NMSD learning curves of the NSAF (N = 4 and 8), SR-NSAF, and MSR-NSAF (μ = 0.05)

The performance of the proposed SR-NSAF and MSR-NSAF algorithms is compared with that of other NSAF algorithms in Fig. 6: the NSAF [8], DS-NSAF [12], FS-NSAF [13], and the SPU-NSAF algorithm of [11]. Eight subbands were used (N = 8) and the step size was set to μ = 0.5. In the FS-NSAF, the number of selected subbands (N s ) out of the total number of subbands (N) was set to 4. For the SPU-NSAF algorithm, the number of blocks (B) was set to 4, and the number of blocks to update (S) was set to 3 and 2. As can be seen, the proposed SR-NSAF and MSR-NSAF have performance comparable to the NSAF family in terms of convergence speed and steady-state error, while their computational complexity is lower.
Fig. 6

NMSD learning curves of NSAF, DS-NSAF, FS-NSAF, SPU-NSAF, SR-NSAF, and MSR-NSAF with N = 8 and μ = 0.5

In Fig. 7, the step size was set to 0.5 for the NSAF, and to make the comparison fair, the step sizes of the other algorithms were chosen to reach approximately the same steady-state NMSD as the NSAF. For the DS-NSAF and FS-NSAF, the step size was set to 0.5. In the SPU-NSAF, the step sizes for S = 2 and S = 3 were set to 0.32 and 0.42, respectively. Finally, the step size was set to 0.32 for the SR-NSAF and to 0.4 for the MSR-NSAF. The NMSD learning curves show that the SR-NSAF and MSR-NSAF have performance comparable to that of the NSAF family. In Fig. 8, we compare the NMSD learning curves of the NLMS and SR-NLMS algorithms with those of the NSAF family. For this simulation, the number of multiplications until convergence is presented in Table 5, which indicates that the SR-NSAF needs 1,025,000 multiplications, significantly fewer than the other algorithms.
Fig. 7

NMSD learning curves of NSAF, DS-NSAF, FS-NSAF, SPU-NSAF, SR-NSAF, and MSR-NSAF with N = 8

Fig. 8

NMSD learning curves of NLMS, SR-NLMS, NSAF, DS-NSAF, FS-NSAF, SPU-NSAF, and SR-NSAF with N = 8

Table 5

Number of multiplications for various NSAF algorithms until convergence in SI and AEC applications

NLMS: 15,380,000 (SI); n/a (AEC)

SR-NLMS: 5,654,000 (SI); n/a (AEC)

NSAF: 1,537,000 (SI); 184,440,000 (AEC)

SPU-NSAF: 1,473,000 (SI); 176,760,000 (AEC)

DS-NSAF: 1,224,000 (SI); 146,880,000 (AEC)

FS-NSAF: 1,409,000 (SI); 169,080,000 (AEC)

Proposed SR-NSAF: 1,025,000 (SI); 123,000,000 (AEC)

For the tracking performance analysis, we consider the identification of two unknown filters with M = 200, whose z-domain transfer functions are given by
$$ {\mathbf{W}}_1(z)=\sum \limits_{n=0}^{99}{z}^{-n}-\sum \limits_{n=100}^{M-1}{z}^{-n} $$
(43)
and
$$ {\mathbf{W}}_2(z)=-\sum \limits_{n=0}^{M-1}{z}^{-n} $$
(44)
where the optimum filter coefficients follow W1(z) for n ≤ 5000 and W2(z) for 5000 < n ≤ 10,000. Figure 9 compares the tracking performance of SR-NSAF and MSR-NSAF with other NSAF algorithms. The number of subbands and the step-size were set to 8 and 0.5, respectively. As can be seen, the SR-NSAF and MSR-NSAF perform closely to the conventional NSAF algorithm.
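The two impulse responses of (43) and (44), and the abrupt switch between them halfway through the run, can be sketched as follows (the helper name and switch logic follow the text above; the names are ours):

```python
M = 200

# Impulse responses matching (43) and (44): W1 has +1 taps for n < 100
# and -1 taps for 100 <= n < M; W2 has -1 taps everywhere.
w1 = [1.0 if n < 100 else -1.0 for n in range(M)]
w2 = [-1.0] * M

def unknown_system(n_iter):
    """Abrupt system change: W1 for the first 5000 iterations, then W2."""
    return w1 if n_iter <= 5000 else w2
```

Driving any of the compared adaptive filters against `unknown_system(n)` and logging the NMSD per iteration reproduces the kind of tracking experiment shown in Fig. 9.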
Fig. 9 NMSD learning curves for tracking performance of NSAF, DS-NSAF, FS-NSAF, SPU-NSAF, SR-NSAF, and MSR-NSAF with N = 8 and μ = 0.5

7.2 Acoustic echo cancelation (AEC): speech input signal

For the AEC setup, we consider both the exact- and under-modeling scenarios. For the under-modeling scenario, the NMSD is calculated by padding the tap-weight vector of the adaptive filter with M − J zeros, where J is the length of the adaptive filter, which in this case is shorter than that of the unknown system [31]. In the exact-modeling scenario, the echo path is truncated to the first 128 tap weights (before the dotted line in Fig. 3); in the under-modeling scenario, the length of the echo path is set to 256. In both scenarios, the length of all the adaptive filters is set to 128. A speech signal is used as the input for the AEC setup [26].
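The zero-padding step for the under-modeling NMSD can be sketched as below (the function name is ours; the return value is the linear NMSD ratio, converted to dB via 10 log10 for plotting):

```python
def padded_nmsd(w_unknown, w_adaptive):
    """NMSD for the under-modeling case: the length-J adaptive filter is
    zero-padded with M - J zeros to the length M of the unknown system
    before the deviation is computed."""
    m, j = len(w_unknown), len(w_adaptive)
    w_pad = list(w_adaptive) + [0.0] * (m - j)
    num = sum((a - b) ** 2 for a, b in zip(w_unknown, w_pad))
    den = sum(a * a for a in w_unknown)
    return num / den
```

With this convention, the unmodeled tail of the echo path contributes an irreducible floor to the NMSD, which is why the under-modeling curves in Fig. 11 settle at a higher level than those in Fig. 10.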

Figures 10 and 11 compare the performance of the proposed SR-NSAF and MSR-NSAF algorithms with other NSAF algorithms in the exact- and under-modeling scenarios. The number of subbands (N) was set to 4. In FS-NSAF, the number of selected subbands (N s ) out of the number of subbands (N) was set to 2. As can be seen, the proposed SR-NSAF and MSR-NSAF have performance comparable to the NSAF family in terms of convergence speed and steady-state misalignment, at lower computational complexity. Table 5 shows the number of multiplications until convergence for the different NSAF algorithms and indicates that the proposed SR-NSAF has significantly lower computational complexity than the other algorithms.
Fig. 10 NMSD learning curves of NSAF, DS-NSAF, FS-NSAF, SPU-NSAF, SR-NSAF, and MSR-NSAF with N = 4. Exact-modeling scenario, speech input signal

Fig. 11 NMSD learning curves of NSAF, DS-NSAF, FS-NSAF, SPU-NSAF, SR-NSAF, and MSR-NSAF with N = 4. Under-modeling scenario, speech input signal

The tracking capability is examined by shifting the acoustic impulse response to the right by 10 samples at a certain time step. Figure 12 compares the tracking performance of SR-NSAF and MSR-NSAF with other NSAF algorithms in the exact-modeling scenario. The number of subbands was set to 4. As can be seen, the SR-NSAF and MSR-NSAF perform closely to the conventional NSAF algorithm.
Fig. 12 NMSD learning curves for tracking performance of NSAF, DS-NSAF, FS-NSAF, SPU-NSAF, SR-NSAF, and MSR-NSAF with N = 4. Exact-modeling scenario, speech input signal. Echo path changes at a certain index

7.3 Line echo cancelation

In communications over phone lines, a signal traveling from a far-end point to a near-end point is usually reflected in the form of an echo at the near end due to mismatches in circuitry. The purpose of a line echo canceller (LEC) is to eliminate the echo from the received signal. Figure 13 shows the impulse response of a typical echo path, taken from the G.168 standard [32]. Figure 14 shows the far-end signal from real speech and the echo signal (page 347 in [33]).
Fig. 13 Impulse response of the line echo path

Fig. 14 Far-end signal from real speech and echo signal

In this simulation, the length of the adaptive filter is 128. Figure 15 shows the error signals of the NSAF, SR-NSAF, and MSR-NSAF algorithms. The number of subbands and the step-size were set to 4 and 0.5, respectively. As can be seen, SR-NSAF and MSR-NSAF have error performance close to that of the NSAF algorithm, while their computational complexity is considerably lower. To further measure the effectiveness of the proposed algorithms, we computed the echo return loss enhancement (ERLE), obtained by evaluating the difference between the powers of the echo and the error signal. The segmental ERLE estimates were obtained by averaging over 140 samples. The segmental ERLE curves for the measured speech and echo signals are shown in Fig. 16, which illustrates that the proposed algorithms and the conventional NSAF have comparable ERLE performance.
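The segmental ERLE described above (echo power over residual-error power per 140-sample block, in dB) can be sketched as follows; the function name and the non-overlapping segmentation are our assumptions:

```python
import math

def segmental_erle_db(echo, error, seg_len=140):
    """Segmental ERLE: 10*log10 of the ratio of echo power to residual
    error power, computed over non-overlapping seg_len-sample segments."""
    erle = []
    for start in range(0, len(echo) - seg_len + 1, seg_len):
        d = echo[start:start + seg_len]
        e = error[start:start + seg_len]
        p_d = sum(x * x for x in d)
        p_e = sum(x * x for x in e)
        erle.append(10.0 * math.log10(p_d / p_e))
    return erle
```

For example, if the canceller attenuates the echo amplitude by a factor of 10 in every segment, each segmental ERLE value is 20 dB.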
Fig. 15 Error signals with NSAF, SR-NSAF, and MSR-NSAF algorithms

Fig. 16 Segmental ERLE curves of NSAF, SR-NSAF and MSR-NSAF algorithms

7.4 Performance in nonstationary environment

Figure 17 presents the NMSD learning curves of NSAF and SR-NSAF in a nonstationary environment. The unknown system changes according to the random walk model. We assume an independent and identically distributed sequence for q(n) with autocorrelation matrix \( \mathbf{Q}={\sigma}_q^2\mathbf{I} \) [27]. The number of subbands and the step-size were set to 8 and 0.5, respectively, and different values of \( {\sigma}_q^2 \) (\( 0.00025{\sigma}_v^2 \) and \( 0.0025{\sigma}_v^2 \)) were chosen in the simulations. The figure shows that the steady-state NMSD in the nonstationary environment is larger than in the stationary environment. We also observe that the convergence speed of SR-NSAF is faster than that of NSAF for both values of \( {\sigma}_q^2 \). To explain these results, Fig. 18 shows the simulated NMSD values as a function of the step-size for NSAF and SR-NSAF in the stationary and nonstationary environments. The step-size varies from 0.1 to 1. The results show that in the stationary environment, the simulated NMSD values for NSAF are lower than those for SR-NSAF. In the nonstationary environment, there is an optimum value of the step-size that minimizes the NMSD, as can be seen for \( {\sigma}_q^2=0.00025{\sigma}_v^2 \). Since the NMSD learning curves were obtained for μ equal to 0.5, the SR-NSAF performs slightly better than NSAF in the nonstationary environment.
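The random walk model used here perturbs the unknown system at every iteration by an i.i.d. zero-mean vector q(n) with covariance \( {\sigma}_q^2\mathbf{I} \). A minimal sketch of one step of this model (the function name is ours; a Gaussian q(n) is assumed, which is consistent with the i.i.d. assumption above):

```python
import random

def random_walk_step(w, sigma_q):
    """One step of the random-walk model w(n+1) = w(n) + q(n), with q(n)
    i.i.d. zero-mean Gaussian and autocorrelation Q = sigma_q^2 * I."""
    return [wi + random.gauss(0.0, sigma_q) for wi in w]
```

For the experiments above, `sigma_q` would be chosen so that \( {\sigma}_q^2 \) equals, e.g., \( 0.00025{\sigma}_v^2 \), where \( {\sigma}_v^2 \) is the measurement-noise variance.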
Fig. 17 NMSD learning curves of NSAF and SR-NSAF with N = 8 and μ = 0.5 in nonstationary environment for different values of \( {\sigma}_q^2 \)

Fig. 18 Simulated steady-state NMSD of NSAF and SR-NSAF with N = 8 as a function of the step-size for different values of \( {\sigma}_q^2 \)

7.5 Theoretical performance analysis

7.5.1 Simulation results for transient performance

The theoretical results presented in this paper are confirmed by several computer simulations in a system identification setup. In this case, the unknown system has 16 randomly selected taps. Figures 19 and 20 show the simulated and theoretical MSE learning curves of the SR-NSAF algorithm. The simulated learning curves are obtained by ensemble averaging over 100 independent trials, and the theoretical learning curves are obtained from (36). In Fig. 19, different values of N were selected, and good agreement between the theoretical and simulated learning curves is observed. Figure 20 presents the results for different values of μ; again, there is good agreement between the simulated and theoretical learning curves. Figures 21 and 22 present the simulated and theoretical NMSD learning curves. As in Figs. 19 and 20, different values of N and μ were chosen, and good agreement can be seen in both figures. Figure 23 presents the theoretical and simulated MSE, MSD, and EMSE learning curves of the SR-NSAF algorithm; once more, good agreement between the simulated and theoretical curves is observed.
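The ensemble averaging used for the simulated curves (100 independent trials, averaged per iteration) can be sketched as follows; the function name is ours:

```python
def ensemble_average(trials):
    """Average learning curves over independent trials: `trials` is a list
    of equal-length per-iteration error sequences, one per trial, and the
    result is the per-iteration mean across trials."""
    n_trials = len(trials)
    return [sum(curve[i] for curve in trials) / n_trials
            for i in range(len(trials[0]))]
```

Averaging suppresses the per-trial fluctuations, which is what makes the simulated curves smooth enough to compare with the deterministic theoretical recursions.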
Fig. 19 Simulated and theoretical MSE learning curves of SR-NSAF for different values of N

Fig. 20 Simulated and theoretical MSE learning curves of SR-NSAF for different values of μ

Fig. 21 Simulated and theoretical NMSD learning curves of SR-NSAF for different values of N

Fig. 22 Simulated and theoretical NMSD learning curves of SR-NSAF for different values of μ

Fig. 23 Simulated and theoretical MSE, MSD and EMSE learning curves of SR-NSAF with N = 4 and μ = 0.5

7.5.2 Simulation results for stability bounds

Table 6 shows the theoretical mean and mean-square stability bounds of the SR-NSAF and MSR-NSAF algorithms, obtained from (41) and (42). To justify these values, the simulated steady-state MSE was obtained by averaging over 500 steady-state samples from 500 independent realizations for each value of μ for a given algorithm. The step-size (μ) varies from 0.05 to μmax. Figures 24 and 25 show the results for different values of N. As can be seen, the theoretical values of μmax from Table 6 provide good estimates of the stability bounds of the SR-NSAF and MSR-NSAF algorithms. In [25], it was shown that the stability bound of MSR-NLMS is larger than that of SR-NLMS; interestingly, the same observation holds for the proposed algorithms: the stability bound of MSR-NSAF is larger than that of SR-NSAF.
Fig. 24 Simulated steady-state MSE of SR-NSAF with N = 2, 4, and 8 as a function of the step-size

Fig. 25 Simulated steady-state MSE of MSR-NSAF with N = 2, 4, and 8 as a function of the step-size

Table 6 Stability bounds of the SR-NSAF and MSR-NSAF for different values of N

| Algorithm | \( \frac{2}{\lambda_{\mathrm{max}}\left(E\left\{\operatorname{sgn}\left[\mathbf{X}(k)\mathbf{F}\right]\mathbf{W}(k){\mathbf{F}}^T{\mathbf{X}}^T(k)\right\}\right)} \) | \( \frac{1}{\lambda_{\mathrm{max}}\left({\mathbf{M}}^{-1}\mathbf{N}\right)} \) | \( \frac{1}{\max \left(\lambda \left(\mathbf{H}\right)\in {\mathrm{\Re}}^{+}\right)} \) | μmax |
|---|---|---|---|---|
| SR-NSAF (N = 2) | 5.7899 | 1.3786 | 4.5400 | 1.3786 |
| SR-NSAF (N = 4) | 5.1303 | 1.3768 | 4.2693 | 1.3768 |
| SR-NSAF (N = 8) | 4.2324 | 1.3704 | 3.9672 | 1.3704 |
| MSR-NSAF (N = 2) | 7.4337 | 1.6387 | 5.2346 | 1.6387 |
| MSR-NSAF (N = 4) | 6.4078 | 1.6355 | 4.7916 | 1.6355 |
| MSR-NSAF (N = 8) | 4.9383 | 1.6302 | 4.1312 | 1.6302 |

7.5.3 Simulation results for steady-state performance

Figures 26 and 27 show the theoretical and simulated steady-state MSE values of the SR-NSAF and MSR-NSAF algorithms as a function of the step-size. The step-size (μ) varies from 0.05 to 1. The theoretical steady-state MSE values are obtained from (38). As can be seen, there is good agreement between the simulated and theoretical steady-state MSE values in both figures; only for large values of the step-size does the agreement deviate slightly. These figures also show that the steady-state MSE values of MSR-NSAF are lower than those of SR-NSAF, a result also obtained for the SR-NLMS and MSR-NLMS algorithms in [25].
Fig. 26 Simulated and theoretical steady-state MSE of SR-NSAF with N = 2, 4, and 8 as a function of the step-size

Fig. 27 Simulated and theoretical steady-state MSE of MSR-NSAF with N = 2, 4, and 8 as a function of the step-size

7.5.4 Theoretical results in nonstationary environment

The theoretical and simulated NMSD learning curves in a nonstationary environment are presented in Fig. 28. The theoretical learning curves were obtained from (47). The number of subbands and the step-size were set to 8 and 0.5, respectively. Various values of \( {\sigma}_q^2 \) (\( 0.00025{\sigma}_v^2 \), \( 0.0025{\sigma}_v^2 \), and \( 0.025{\sigma}_v^2 \)) were selected in this simulation. Good agreement between the simulated and theoretical learning curves is observed in the nonstationary environment. In Fig. 29, the simulated and theoretical steady-state NMSD values are shown as a function of the step-size. The theoretical values were obtained from (49). The figure shows that there is an optimum step-size that minimizes the steady-state NMSD in the nonstationary environment.
Fig. 28 Simulated and theoretical NMSD learning curves of SR-NSAF with N = 8 and μ = 0.5 in nonstationary environment for different values of \( {\sigma}_q^2 \)

Fig. 29 Simulated and theoretical steady-state NMSD of SR-NSAF as a function of the step-size in nonstationary environment for different values of \( {\sigma}_q^2 \)

8 Conclusion

In this paper, the NSAF algorithm with the signed regressor of the input signal was established. The optimization problem was formulated as an L1-norm minimization, which leads to a sign operation on the input regressors at each subband. The computational complexity of the proposed SR-NSAF is lower than that of the previous NSAF family, while its convergence performance remains close to that of the NSAF; the SR-NSAF is therefore a suitable candidate for many applications. To further improve the performance of the SR-NSAF, the MSR-NSAF was introduced. The performance of the SR-NSAF was confirmed by several computer simulations in SI, AEC, and LEC applications. In addition, the theoretical mean-square performance analysis and the stability bounds of the proposed algorithms were derived and confirmed by various experiments.

Footnotes
1

The theoretical stability bound of the SR-NSAF is presented in Section 6. These values are justified by the simulations in Section 7.

 
2

K is the length of the channel filters.

 

Declarations

Acknowledgements

The authors would like to thank Shahid Rajaee Teacher Training University (SRTTU) for its financial support.

Funding

This work was financially supported by Shahid Rajaee Teacher Training University (SRTTU).

Authors’ contributions

Due to the effective features of signed regressor adaptive algorithms (low computational complexity and convergence speed close to the conventional algorithm), and to improve the performance of the SR-NLMS algorithm, this paper proposes the signed regressor NSAF (SR-NSAF) algorithm. The SR-NSAF is established through an L1-norm optimization. A constraint is imposed on the decimated filter output to force the a posteriori error to become zero; this constraint guarantees the convergence of the algorithm. The algorithm utilizes the signum of the input regressors at each subband during the adaptation, so no multiplications are required for the normalization factor at each subband. To improve the performance of the SR-NSAF, the modified SR-NSAF (MSR-NSAF) is also established. The proposed SR-NSAF and MSR-NSAF algorithms have lower computational complexity than the NSAF, SPU-NSAF, DS-NSAF, and FS-NSAF, while they have a fast convergence rate similar to the NSAF, and their steady-state error level is close to that of the NSAF. For performance evaluation of any proposed adaptive algorithm, a theoretical analysis is essential [33]. Therefore, the energy conservation approach [27] is applied to the SR-NSAF, and the mean-square performance of the proposed algorithms is studied in stationary and nonstationary environments. This approach does not require a white or Gaussian assumption on the input regressors. On this basis, the transient and steady-state behavior and the stability bounds of the SR-NSAF and MSR-NSAF are analyzed, and closed-form relations are derived.
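The key idea described above (taking the signum of each subband regressor so that the normalization factor becomes an L1 norm, computable without multiplications) can be illustrated with the following sketch. This is not the paper's exact recursion; the function and variable names are ours, and the update form is only a plain-Python rendering of the signum/L1 idea:

```python
def sr_nsaf_update(w, subband_regressors, subband_errors, mu, eps=1e-8):
    """One illustrative SR-NSAF-style tap-weight update. For each subband i,
    the regressor x_i is replaced by its signum, and the normalization
    sgn(x_i)^T x_i reduces to the L1 norm ||x_i||_1, so no multiplications
    are needed to form the normalization factor."""
    sgn = lambda v: 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    w_new = list(w)
    for x_i, e_i in zip(subband_regressors, subband_errors):
        l1 = sum(abs(v) for v in x_i)  # sgn(x_i)^T x_i = ||x_i||_1
        for m in range(len(w)):
            w_new[m] += mu * sgn(x_i[m]) * e_i / (l1 + eps)
    return w_new
```

Each tap moves in the direction of the sign of its regressor sample, scaled by the decimated subband error and the L1-normalized step-size.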

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Faculty of Electrical Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran

References

  1. S Haykin, Adaptive Filter Theory, 4th edn. (Prentice-Hall, 2002)
  2. AH Sayed, Adaptive Filters (Wiley, 2008)
  3. K Ozeki, T Umeda, An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electron Commun Jpn 67-A, 19–27 (1984)
  4. M Muneyasu, T Hinamoto, A realization of TD adaptive filters using affine projection algorithm. J Franklin Inst 335(7), 1185–1193 (1998)
  5. A Gilloire, M Vetterli, Adaptive filtering in subbands with critical sampling: analysis, experiments, and application to acoustic echo cancellation. IEEE Trans Signal Process 40, 1862–1875 (1992)
  6. K-A Lee, W-S Gan, SM Kuo, Subband Adaptive Filtering: Theory and Implementation (Wiley, Hoboken, 2009)
  7. MSE Abadi, S Kadkhodazadeh, A family of proportionate normalized subband adaptive filter algorithms. J Franklin Inst 348(2), 212–238 (2011)
  8. KA Lee, WS Gan, Improving convergence of the NLMS algorithm using constrained subband updates. IEEE Signal Process Lett 11, 736–739 (2004)
  9. M de Courville, P Duhamel, Adaptive filtering in subbands using a weighted criterion. IEEE Trans Signal Process 46, 2359–2371 (1998)
  10. SS Pradhan, VE Reddy, A new approach to subband adaptive filtering. IEEE Trans Signal Process 47, 655–664 (1999)
  11. MSE Abadi, JH Husøy, Selective partial update and set-membership subband adaptive filters. Signal Process 88, 2463–2471 (2008)
  12. SE Kim, YS Choi, MK Song, WJ Song, A subband adaptive filtering algorithm employing dynamic selection of subband filters. IEEE Signal Process Lett 17(3), 245–248 (2010)
  13. MK Song, SE Kim, YS Choi, WJ Song, Selective normalized subband adaptive filter with subband extension. IEEE Trans Circuits Syst II Express Briefs 60(2), 101–105 (2013)
  14. J Nagumo, A Noda, A learning method for system identification. IEEE Trans Automat Contr 12, 282–287 (1967)
  15. DL Duttweiler, Adaptive filter performance with nonlinearities in the correlation multipliers. IEEE Trans Acoust Speech Signal Process 30(8), 578–586 (1982)
  16. A Gersho, Adaptive filtering with binary reinforcement. IEEE Trans Inform Theory 30(3), 191–199 (1984)
  17. WA Sethares, Adaptive algorithms with nonlinear data and error functions. IEEE Trans Signal Process 40(9), 2199–2206 (1992)
  18. S Koike, Analysis of adaptive filters using normalized signed regressor LMS algorithm. IEEE Trans Signal Process 47(10), 2710–2723 (1999)
  19. VJ Mathews, SH Cho, Improved convergence analysis of stochastic gradient adaptive filters using the sign algorithm. IEEE Trans Acoust Speech Signal Process 35(4), 450–454 (1987)
  20. P Wen, S Zhang, J Zhang, A novel subband adaptive filter algorithm against impulsive noise and its performance analysis. Signal Process 127(10), 282–287 (2016)
  21. TMCM Classen, WFG Mecklenbraeuker, Comparison of the convergence of two algorithms for adaptive FIR digital filters. IEEE Trans Acoust Speech Signal Process 29(6), 670–678 (1981)
  22. J Ni, F Li, Variable regularisation parameter sign subband adaptive filter. Electron Lett 64, 1605–1607 (2010)
  23. J Shin, J Yoo, P Park, Variable step-size sign subband adaptive filter. IEEE Signal Process Lett 20, 173–176 (2013)
  24. NJ Bershad, Comments on 'Comparison of the convergence of two algorithms for adaptive FIR digital filters'. IEEE Trans Acoust Speech Signal Process 33(12), 1604–1606 (1985)
  25. K Takahashi, S Mori, A new normalized signed regressor LMS algorithm, in Proc. ICCS/ISITA (Singapore, 1992), pp. 1181–1185
  26. J Ni, F Li, A variable step-size matrix normalized subband adaptive filter. IEEE Trans Audio Speech Lang Process 18, 1290–1299 (2010)
  27. H-C Shin, AH Sayed, Mean-square performance of a family of affine projection algorithms. IEEE Trans Signal Process 52(1), 90–102 (2004)
  28. JJ Jeong, SH Kim, G Koo, SW Kim, Mean-square deviation analysis of multiband-structured subband adaptive filter algorithm. IEEE Trans Signal Process 64(4), 985–994 (2016)
  29. K Dogancay, O Tanrikulu, Adaptive filtering algorithms with selective partial updates. IEEE Trans Circuits Syst II Analog Digit Signal Process 48, 762–769 (2001)
  30. H Malvar, Signal Processing with Lapped Transforms (Artech House, 1992)
  31. C Paleologu, J Benesty, S Ciochina, A variable step-size affine projection algorithm designed for acoustic echo cancellation. IEEE Trans Audio Speech Lang Process 16, 1466–1478 (2008)
  32. ITU-T Rec. G.168, Digital Network Echo Cancellers (2007)
  33. AH Sayed, Fundamentals of Adaptive Filtering (Wiley, 2003)

Copyright

© The Author(s). 2018
