A low computational complexity normalized subband adaptive filter algorithm employing signed regressor of input signal
EURASIP Journal on Advances in Signal Processing volume 2018, Article number: 21 (2018)
Abstract
In this paper, the signed regressor normalized subband adaptive filter (SRNSAF) algorithm is proposed. The algorithm is derived from an L_{1}-norm minimization criterion. The SRNSAF has a fast convergence speed and a low steady-state error similar to the conventional NSAF. In addition, the proposed algorithm has lower computational complexity than the NSAF because it uses the signed regressor of the input signal in each subband. The theoretical mean-square performance of the proposed algorithm in stationary and nonstationary environments is analyzed based on the energy conservation relation, and the steady-state, transient, and stability bounds of the SRNSAF are predicted by closed-form expressions. The good performance of the SRNSAF is demonstrated through several simulation results in system identification, acoustic echo cancelation (AEC), and line EC (LEC) applications. The theoretical relations are also verified by various experimental results.
Introduction
Fast convergence rate and low computational complexity are important features for high-data-rate applications such as speech processing, echo cancelation, network echo cancelation, and channel equalization. The least-mean-squares (LMS) and normalized LMS (NLMS) algorithms are useful in a wide range of adaptive filter applications because of their low computational complexity. However, the performance of LMS-type algorithms degrades when the input signals are colored [1, 2].
To solve this problem, various approaches such as the affine projection algorithm (APA) [3, 4] and subband adaptive filter (SAF) algorithms have been proposed [5,6,7]. In [8], a new version of the SAF was developed from a constrained optimization problem and is referred to as the normalized SAF (NSAF). The filter update equation in [8] is similar to the update equations in [9, 10], where full-band filters are updated instead of the subfilters of the conventional SAF structure [5].
To reduce the computational complexity of the NSAF and APA, different methods have been proposed. In [11], the selective partial update NSAF (SPUNSAF) algorithm was presented, in which only a subset of the filter coefficients, rather than the entire filter, is updated at every adaptation. In [12], the dynamic selection NSAF (DSNSAF) algorithm was introduced, in which the number of subbands is optimally selected at each iteration. The fixed selection NSAF (FSNSAF) was introduced in [13]; in this algorithm, a fixed subset of subbands is selected during the adaptation.
Some classes of adaptive filter algorithms make use of the signum of the error signal, the input signal, or both. These approaches have been applied to the LMS algorithm for simplicity of implementation, enabling a significant reduction in computational complexity [14,15,16,17,18]. The sign algorithm (SA) takes the signum of the error signal and is particularly useful against impulsive interferences [19, 20]. In other cases, however, the convergence speed of the SA is slower than that of the conventional algorithm [21]. This approach was also successfully extended to the NSAF to establish the sign SAF (SSAF) algorithm [22, 23].
In the signed regressor LMS (SRLMS), the signum of the input regressors is utilized: the polarity of the input signal is used to adjust the filter coefficients, which requires no multiplications. The SRLMS has a convergence speed and a steady-state error level that are only slightly inferior to those of the LMS algorithm for the same parameter setting [24]. To increase the convergence speed of the SRLMS, the signed regressor NLMS (SRNLMS) was first proposed in [14], and a modified version of this algorithm (MSRNLMS) was presented in [25]. Like the SRLMS, the SRNLMS enjoys advantages similar to those of the NLMS algorithm: owing to the normalization factor, the steady-state error level does not depend on the input signal power [18]. Note that no multiplications are needed to calculate the normalization factor. For highly colored input signals, however, the convergence speed of the SRNLMS is still low. Moreover, the literature offers no cost function or optimization problem from which the signed regressor algorithms are derived.
Motivated by the attractive features of signed regressor adaptive algorithms (low computational complexity and convergence speed close to that of the conventional algorithm), and to improve the performance of the SRNLMS algorithm, this paper proposes the signed regressor NSAF (SRNSAF). The SRNSAF is established through an L_{1}-norm optimization in which a constraint is imposed on the decimated filter output to force the a posteriori error to zero; this constraint guarantees the convergence of the algorithm. The algorithm utilizes the signum of the input regressors in each subband during the adaptation, and again no multiplications are required for the normalization factor in each subband. To further improve performance, the modified SRNSAF (MSRNSAF) is also established. The proposed SRNSAF and MSRNSAF algorithms have lower computational complexity than the NSAF, SPUNSAF, DSNSAF, and FSNSAF, while retaining a convergence rate similar to that of the NSAF. In addition, their steady-state error level is close to that of the NSAF. A theoretical analysis is essential for the performance evaluation of any proposed adaptive algorithm [26]. Therefore, the energy conservation approach [27], which does not require a white or Gaussian assumption on the input regressors, is applied to the SRNSAF, and the mean-square performance of the proposed algorithms is studied in stationary and nonstationary environments. On this basis, the transient, steady-state, and stability bounds of the SRNSAF and MSRNSAF are analyzed and closed-form relations are derived.
What we propose in this paper can be summarized as follows:

The establishment of the SRNSAF according to the proposed cost function. This algorithm utilizes the signum of the input regressors in each subband, and no multiplications are required for the normalization factor in each subband.

Mean-square performance analysis of the SRNSAF algorithm in stationary and nonstationary environments. Theoretical expressions for the transient and steady-state performance of the SRNSAF are derived.

Analysis of the mean and mean-square stability bounds of the SRNSAF and MSRNSAF algorithms.

Comparison of the NSAF, SPUNSAF, DSNSAF, FSNSAF, SRNSAF, and MSRNSAF algorithms in terms of convergence speed, steady-state error, and computational complexity for system identification, acoustic echo cancelation, and line echo cancelation applications.

Justification of the theoretical expressions for the transient, steady-state, and stability bounds through various experiments.
The paper is organized as follows. In Section II, the conventional NSAF is briefly reviewed. The proposed SRNSAF and MSRNSAF are presented in Section III. Section IV presents the mean-square performance analysis of the SRNSAF, and the theoretical stability bound relations are given in Section V. The computational complexity of the proposed algorithm is then discussed. Finally, before concluding the paper, the usefulness of the introduced algorithms is demonstrated through several experimental results.
Throughout the paper, the following notations are used:
|.|  Norm (absolute value) of a scalar.
‖.‖^{2}  Squared Euclidean norm of a vector.
‖.‖_{1}  L_{1}-norm of a vector.
(.)^{T}  Transpose of a vector or a matrix.
E{.}  Expectation operator.
sgn(.)  Sign function.
Tr(.)  Trace of a matrix.
λ_{max}  The largest eigenvalue of a matrix.
ℜ^{+}  The set of positive real numbers.
A ⊗ B  Kronecker product of the matrices A and B.
\( {\left\Vert \mathbf{t}\right\Vert}_{\boldsymbol{\Phi}}^2 \)  Φ-weighted Euclidean norm of a column vector t, defined as t^{T}Φt.
diag(.)  Has the same meaning as the MATLAB operator of the same name: if the argument is a vector, the result is a diagonal matrix with the diagonal elements given by the vector; if the argument is a matrix, its diagonal is extracted into a vector.
vec(T)  Creates an M^{2} × 1 column vector t by stacking the columns of the M × M matrix T.
vec(t)  Creates an M × M matrix T from the M^{2} × 1 column vector t.
Background on NSAF
Consider a linear data model for d(n) as
d(n) = x^{T}(n)w^{o} + v(n),  (1)
where w^{o} is an unknown M-dimensional vector that we wish to estimate, v(n) is the measurement noise with variance \( {\sigma}_v^2 \), and x(n) = [x(n), x(n − 1), …, x(n − M + 1)]^{T} denotes the M-dimensional input (regressor) vector. It is assumed that v(n) is zero-mean, white, Gaussian, and independent of x(n). Figure 1 shows the structure of the NSAF [8]. In this figure, f_{0}, f_{1}, …, f_{N − 1} and g_{0}, g_{1}, …, g_{N − 1} are the analysis and synthesis filter unit impulse responses of an N-channel orthogonal perfect-reconstruction critically sampled filter bank. x_{ i }(n) and d_{ i }(n) are the non-decimated subband signals. Note that n refers to the index of the original sequences, while k denotes the index of the decimated sequences (k = floor(n/N)). The decimated output signal is defined as \( {y}_{i,D}(k)={\mathbf{x}}_i^T(k)\mathbf{w}(k) \), where x_{ i }(k) = [x_{ i }(kN), x_{ i }(kN − 1), …, x_{ i }(kN − M + 1)]^{T} and w(k) = [w_{0}(k), w_{1}(k), …, w_{M − 1}(k)]^{T}. Also, the decimated subband error signal is defined as \( {e}_{i,D}(k)={d}_{i,D}(k)-{\mathbf{x}}_i^T(k)\mathbf{w}(k) \). The filter update equation for the NSAF can be stated as
\( \mathbf{w}\left(k+1\right)=\mathbf{w}(k)+\mu \sum_{i=0}^{N-1}\frac{{\mathbf{x}}_i(k)}{{\left\Vert {\mathbf{x}}_i(k)\right\Vert}^2}{e}_{i,D}(k) \)
where μ is the step size and 0 < μ < 2 [8].
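As an illustrative sketch, one NSAF iteration can be written in plain Python, assuming the N decimated subband regressors x_{ i }(k) and desired samples d_{ i,D }(k) have already been produced by the analysis filter bank (the function name and the regularization constant eps are our own):

```python
# One NSAF iteration: each subband contributes a normalized gradient step.
def nsaf_update(w, subband_x, subband_d, mu=0.5, eps=1e-8):
    """w: M filter taps; subband_x: N regressors of length M; subband_d: N scalars."""
    M = len(w)
    delta = [0.0] * M
    for x_i, d_i in zip(subband_x, subband_d):
        e_i = d_i - sum(a * b for a, b in zip(x_i, w))   # decimated subband error
        norm = eps + sum(a * a for a in x_i)             # squared Euclidean norm
        for j in range(M):
            delta[j] += x_i[j] * e_i / norm              # normalized subband step
    return [wj + mu * dj for wj, dj in zip(w, delta)]
```

For N = 1 this collapses to the familiar NLMS recursion.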
Sign regressor normalized subband adaptive filter (SRNSAF)
Based on the principle of minimum disturbance, the SRNSAF is formulated from the following optimization problem
\( \underset{\mathbf{w}\left(k+1\right)}{\min }{\left\Vert \mathbf{w}\left(k+1\right)-\mathbf{w}(k)\right\Vert}_1 \)
subject to the N constraints (i = 0, 1, …, N − 1)
\( {d}_{i,D}(k)={\mathbf{x}}_i^T(k)\mathbf{w}\left(k+1\right). \)
By applying the method of Lagrange multipliers, the following Lagrangian function is obtained
\( J\left(\mathbf{w}\left(k+1\right)\right)={\left\Vert \mathbf{w}\left(k+1\right)-\mathbf{w}(k)\right\Vert}_1+\sum_{i=0}^{N-1}{\lambda}_i\left({d}_{i,D}(k)-{\mathbf{x}}_i^T(k)\mathbf{w}\left(k+1\right)\right) \)
where λ_{ i } is the ith Lagrange multiplier. Setting \( \frac{\partial J\left(\mathbf{w}\left(k+1\right)\right)}{\partial \mathbf{w}\left(k+1\right)}=0 \), we obtain the relation
\( \operatorname{sgn}\left[\mathbf{w}\left(k+1\right)-\mathbf{w}(k)\right]=\sum_{i=0}^{N-1}{\lambda}_i{\mathbf{x}}_i(k). \)
In the NSAF algorithm, if the magnitude responses of the analysis filters do not significantly overlap, the cross-correlation between two arbitrary subband signals is negligible compared with the autocorrelation [8]. Therefore, multiplying both sides of the above equation by \( {\mathbf{x}}_i^T(k) \) from the left and neglecting the cross-terms, we obtain
By defining sgn(x_{ i }(k)) = Θ_{ i }(k)x_{ i }(k), sgn[w(k + 1) − w(k)] = Υ(k)[w(k + 1) − w(k)], and multiplying sgn(x_{ i }(k)) on both sides of (7) from the left, we get
where
\( {\boldsymbol{\Theta}}_i(k)=\operatorname{diag}\left\{\frac{1}{\left|{x}_i(kN)\right|},\frac{1}{\left|{x}_i\left( kN-1\right)\right|},\dots, \frac{1}{\left|{x}_i\left( kN-M+1\right)\right|}\right\} \)
and
\( \boldsymbol{\Upsilon} (k)=\operatorname{diag}\left\{\frac{1}{\left|{w}_0\left(k+1\right)-{w}_0(k)\right|},\dots, \frac{1}{\left|{w}_{M-1}\left(k+1\right)-{w}_{M-1}(k)\right|}\right\}. \)
If the number of subbands is large enough, x_{ i }(k) may be approximately assumed white [26, 28]. Therefore, by rearranging the matrices in (8) and using (4), we obtain
where \( {\left\Vert {\mathbf{x}}_i(k)\right\Vert}_{\mathbf{1}}=\operatorname{sgn}\left({\mathbf{x}}_i^T(k)\right){\mathbf{x}}_i(k) \). Now, by multiplying both sides of (11) by \( \operatorname{sgn}\left({\mathbf{x}}_i^T(k)\right) \) from the left, the Lagrange multipliers are given by
Substituting (12) into (6) leads to
By multiplying both sides of (13) by Υ^{−1}(k) from the left and rearranging the diagonal matrices, the filter coefficient update equation of the SRNSAF is established as
\( \mathbf{w}\left(k+1\right)=\mathbf{w}(k)+\mu \sum_{i=0}^{N-1}\frac{\operatorname{sgn}\left({\mathbf{x}}_i(k)\right)}{{\left\Vert {\mathbf{x}}_i(k)\right\Vert}_1}{e}_{i,D}(k) \)  (14)
where μ is again the step size and should be selected within the stability bound.^{Footnote 1} To avoid division by zero, the denominator of the update equation is commonly replaced by ϵ + ‖x_{ i }(k)‖_{1}, where ϵ is a regularization parameter. Table 1 summarizes the SRNSAF algorithm.
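A sketch of one iteration of the SRNSAF update in (14), under the same assumptions as the NSAF sketch (subband signals already available; function names and eps are our own):

```python
# Sign of a scalar: +1, 0, or -1 (no multiplications needed).
def sgn(v):
    return (v > 0) - (v < 0)

# One SRNSAF iteration per (14): sign of the regressor, L1-norm normalization.
def srnsaf_update(w, subband_x, subband_d, mu=0.5, eps=1e-8):
    M = len(w)
    delta = [0.0] * M
    for x_i, d_i in zip(subband_x, subband_d):
        e_i = d_i - sum(a * b for a, b in zip(x_i, w))   # decimated subband error
        norm = eps + sum(abs(a) for a in x_i)            # L1-norm: additions only
        for j in range(M):
            delta[j] += sgn(x_i[j]) * e_i / norm         # sign flip, no product
    return [wj + mu * dj for wj, dj in zip(w, delta)]
```

The products x_{ i }(k)e_{ i,D }(k) of the NSAF become sign flips here, and the normalization factor needs only absolute values and additions, which is where the multiplications are saved.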
It is interesting to note that for N = 1 and f_{0} = 1, the SRNSAF in (14) reduces to
\( \mathbf{w}\left(n+1\right)=\mathbf{w}(n)+\mu \frac{\operatorname{sgn}\left(\mathbf{x}(n)\right)}{{\left\Vert \mathbf{x}(n)\right\Vert}_1}e(n) \)
which is the SRNLMS algorithm [14]. In this case, the output error is given by e(n) = d(n) − x^{T}(n)w(n).
In [25], a new version of the SRNLMS was proposed based on clipping of the input signal: when the absolute value of a sample is larger than the average of the absolute values of the input samples, the clipped sample is used to update the coefficients. The performance of the SRNSAF can be improved by applying this idea in each subband. Therefore, the new version of the SRNSAF, called the modified SRNSAF (MSRNSAF), is established according to the procedure in Table 2.
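A sketch of the clipping rule as we read it, with the threshold taken as the mean absolute value of the current regressor (the function name and exact thresholding convention are our own assumptions, not taken from Table 2):

```python
# Keep the sign of a sample only when its magnitude exceeds the average
# magnitude of the regressor; otherwise contribute zero.
def clipped_sign(x):
    thr = sum(abs(a) for a in x) / len(x)
    return [((a > 0) - (a < 0)) if abs(a) > thr else 0 for a in x]
```

In an MSRNSAF-style update, such a clipped sign vector would plausibly replace sgn(x_{ i }(k)), so that small-magnitude samples do not drive the adaptation.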
Mean square performance analysis of SRNSAF in stationary environment
The filter coefficients update equation in SRNSAF can be represented as
where F is the K × N matrix whose columns are the unit pulse responses of the channel filters of a critically sampled analysis filter bank,^{Footnote 2} F = [f_{0}, f_{1}, …, f_{N − 1} ], X(k) is the M × K input signal matrix which is defined as
and
Also, W(k) = [ϵI + diag {diag{F^{T}X^{T}(k) sgn[X(k)F]}}]^{−1}, and
is the error signal vector. For the theoretical convergence analysis, we need to obtain the time evolution of \( E\left\{{\left\Vert \overset{\sim }{\mathbf{w}}(k)\right\Vert}_{\boldsymbol{\Phi}}^2\right\} \), where \( \overset{\sim }{\mathbf{w}}(k)={\mathbf{w}}^o-\mathbf{w}(k) \) is the weight-error vector and Φ is any Hermitian positive-definite matrix. For Φ = I (the identity matrix), the mean-square deviation (MSD) expression is obtained, and for Φ = R (the autocorrelation matrix of the input signal), the excess mean-square error (EMSE) expression is obtained.
The weight error vector update equation for SRNSAF can be written as
From (1) and (19), the error vector, e(k), can be described as
Substituting (21) into (20) yields
By taking the Φweighted norm from both sides of (22), we obtain
where
and
Also in (25), Z(k) = sgn[X(k)F]W(k)F^{T}X^{T}(k). Taking the expectation of both sides of (23), we obtain
To simplify this relation, we need independence assumptions. The matrix X(k) is assumed to be an independent and identically distributed sequence [2, 27]. This assumption guarantees that \( \overset{\sim }{\mathbf{w}}(k) \) is independent of both Ψ and X(k). Therefore,
The second term of the right hand side of (26) can be presented as
Since \( E\left\{\boldsymbol{v}(k){\mathbf{v}}^T(k)\right\}={\sigma}_v^2\mathbf{I} \), we obtain
Applying the vec(.) operation to both sides of (25) and using vec(PΦQ) = (Q^{T} ⊗ P)vec(Φ) leads to
where ψ = vec(Ψ), and ϕ = vec(Φ). Therefore, by defining the matrix P as
we obtain
Defining
we get
Finally, (26) can be stated as
This equation is related to \( \overset{\sim }{\mathbf{w}}(0) \) as
By substituting R for Φ and defining r = vec(R), the transient behavior of the SRNSAF can be predicted by (35). From this recursion, we can obtain the EMSE as k goes to infinity. Therefore, the EMSE in the steady state can be stated as
The MSE and the EMSE are related as
\( \mathrm{MSE}=\mathrm{EMSE}+{\sigma}_v^2. \)
Also, the steady-state mean-square deviation (MSD) is given by
It is important to note that selecting F = I and N = K = 1 leads to the performance analysis of the SRNLMS and MSRNLMS algorithms, which was not presented in [14, 25]. This analysis can also be extended to the nonstationary environment; in Appendix 1, the mean-square performance analysis of the SRNSAF in the nonstationary environment is presented.
Mean and mean-square stability of the SRNSAF
Taking the expectation from both sides of (22) leads to
From (40), convergence in the mean of the SRNSAF is guaranteed for any μ that satisfies
Equation (35) is stable if the matrix P is stable [27]. From (31), we know that P = I − μM + μ^{2}N, where M = E{Z^{T}(k)} ⊗ I + I ⊗ E{Z^{T}(k)} and N = E{Z^{T}(k) ⊗ Z^{T}(k)}. The condition on μ that guarantees convergence of the SRNSAF algorithm in the mean-square sense is
\( 0<\mu <\frac{1}{\max \left\{\lambda \left(\mathbf{H}\right)\in {\mathfrak{R}}^{+}\right\}} \)
where \( \mathbf{H}=\left[\begin{array}{cc}\frac{1}{2}\mathbf{M}& -\frac{1}{2}\mathbf{N}\\ {}\mathbf{I}& \mathbf{0}\end{array}\right] \).
Computational complexity
Table 3 compares the computational complexity of the NSAF and SRNSAF algorithms and shows that the number of multiplications in the SRNSAF is lower than in the NSAF. Table 4 summarizes the number of multiplications per iteration for different NSAF algorithms. In this table, M, N, K, B, S, L, N_{ s }, and N(k) are the filter length, the number of subbands, the length of the channel filters, the number of blocks, the number of blocks to update, the length of the blocks, the number of selected subbands (fixed), and the number of selected subbands (dynamic), respectively. The exact computational complexity of the NSAF is 3M + 3NK + 1 multiplications [11]. From [11], the computational complexity of the SPUNSAF is 2M + SL + 3NK + 1 multiplications; compared with the NSAF, the reduction in the number of multiplications is M − SL, which is considerable for large values of M. The DSNSAF needs \( \left(1+2\frac{N(k)}{N}\right)M+3 NK+N \) multiplications [12]; the selection of subbands during the adaptation reduces the number of multiplications. The exact number of multiplications in the FSNSAF is \( 2M+\left(\frac{N_s}{N}\right)M+3 NK+1 \). Compared with the SPUNSAF, FSNSAF, and DSNSAF algorithms, the proposed SRNSAF needs 2M fewer multiplications than the NSAF. Figure 2 compares the number of multiplications versus the filter length for the NSAF, FSNSAF, DSNSAF, SPUNSAF (B = 4, S = 1, 2, 3), and the proposed SRNSAF with N = 8. As we can see, the number of multiplications in the SRNSAF is significantly lower than in the other algorithms.
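These per-iteration counts can be evaluated directly; the sketch below tabulates them in plain Python. The parameter values, including the channel-filter length K = 32, are illustrative assumptions, not those of any specific figure:

```python
# Per-iteration multiplication counts from the formulas above (M: filter
# length, N: subbands, K: channel-filter length, S/L: SPU block parameters,
# Ns: fixed selected subbands, Nk: dynamically selected subbands).
def mult_counts(M, N, K, S, L, Ns, Nk):
    return {
        "NSAF":    3 * M + 3 * N * K + 1,
        "SPUNSAF": 2 * M + S * L + 3 * N * K + 1,
        "DSNSAF":  (1 + 2 * Nk / N) * M + 3 * N * K + N,
        "FSNSAF":  2 * M + (Ns / N) * M + 3 * N * K + 1,
        "SRNSAF":  M + 3 * N * K + 1,   # 2M fewer than the NSAF
    }

# Illustrative setting: M = 256 taps, N = 8 subbands, K = 32 (assumed),
# B = 4 blocks of length L = M // B = 64, S = 3, Ns = Nk = 4.
counts = mult_counts(M=256, N=8, K=32, S=3, L=64, Ns=4, Nk=4)
```

With these values the SRNSAF needs 1025 multiplications per iteration, which matches the 1,025,000 multiplications over 1000 iterations reported in Table 5 under this reading.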
Simulation results
We demonstrate the performance of the proposed algorithms through several computer simulations in system identification (SI), acoustic echo cancelation (AEC), and line echo cancelation (LEC) setups. The impulse response of a car echo path with 256 taps (M = 256) is used as the unknown system in the experiments [29] (Fig. 3). The filter bank used in the NSAF algorithms is the extended lapped transform (ELT) (N = 2, 4, and 8) [11, 30]. In all simulations, we show the normalized mean-square deviation (NMSD), \( E\left[\frac{{\left\Vert {\mathbf{w}}^{\boldsymbol{o}}-\mathbf{w}(k)\right\Vert}^2}{{\left\Vert {\mathbf{w}}^{\boldsymbol{o}}\right\Vert}^2}\right] \), evaluated by ensemble averaging over 20 independent trials.
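The NMSD plotted in the figures can be computed as below (a minimal sketch; the text defines the ratio itself, while the dB conversion is how learning curves are customarily displayed):

```python
import math

def nmsd_db(w_o, w):
    """Normalized mean-square deviation ||w_o - w||^2 / ||w_o||^2, in dB."""
    num = sum((a - b) ** 2 for a, b in zip(w_o, w))
    den = sum(a * a for a in w_o)
    return 10.0 * math.log10(num / den)
```

For example, a 10% error on a single unit tap gives an NMSD of about −20 dB, while an all-zero estimate gives 0 dB.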
System identification: AR(2) input signal
In this experiment, the input signal is an AR(2) signal generated by passing zero-mean white Gaussian noise through the second-order system \( \mathrm{T}(z)=\frac{1}{1-0.1{z}^{-1}-0.8{z}^{-2}} \). Additive white Gaussian noise was added to the system output, setting the signal-to-noise ratio (SNR) to 30 dB. Figure 4 compares the convergence of the NSAF, SRNSAF, and MSRNSAF algorithms with N = 4 for different step sizes (1, 0.2, and 0.05). As we can see, a large step size leads to a fast convergence rate and a high steady-state error, while a small step size leads to a slow convergence rate and a low steady-state error. Figure 5 shows the performance of the SRNSAF, MSRNSAF, and conventional NSAF for N = 4 and 8 subbands with the step size set to μ = 0.05. Increasing the number of subbands improves the convergence rate of all algorithms but also increases the computational complexity. The results show that the SRNSAF and MSRNSAF algorithms perform close to the conventional NSAF, while their computational complexity is lower.
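Such an AR(2) input can be generated recursively from the stated T(z); a small pure-Python sketch (the function name and seed are our own, and the unit noise variance is an assumption):

```python
import random

def ar2_signal(n, a1=0.1, a2=0.8, seed=1):
    """x(n) = a1*x(n-1) + a2*x(n-2) + g(n), g ~ N(0, 1): white Gaussian
    noise filtered by T(z) = 1 / (1 - a1*z^-1 - a2*z^-2)."""
    rng = random.Random(seed)
    x = [0.0, 0.0]                       # zero initial conditions
    for _ in range(n):
        x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0.0, 1.0))
    return x[2:]
```

The poles of T(z) lie at roughly 0.95 and −0.85, inside the unit circle, so the process is stable but highly colored, which is exactly the regime where subband algorithms outperform the NLMS.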
The performance of the proposed SRNSAF and MSRNSAF algorithms is compared with other NSAF algorithms in Fig. 6: the NSAF [8], DSNSAF [12], FSNSAF [13], and SPUNSAF [11]. Eight subbands were used (N = 8) and the step size was set to μ = 0.5. In the FSNSAF, the number of selected subbands (N_{ s }) out of the total number of subbands (N) was set to 4. For the SPUNSAF, the number of blocks (B) was set to 4 and the number of blocks to update (S) was set to 3 and 2. As we can see, the proposed SRNSAF and MSRNSAF perform comparably to the NSAF family in terms of convergence speed and steady-state error, while the computational complexity of the introduced algorithms is lower.
In Fig. 7, the step size was set to 0.5 for the NSAF algorithm and, to make the comparison fair, the step sizes of the other NSAF algorithms were chosen to yield approximately the same steady-state NMSD as the NSAF. For the DSNSAF and FSNSAF, the step size was set to 0.5. In the SPUNSAF, the step sizes for S = 2 and S = 3 were set to 0.32 and 0.42, respectively. Finally, the step size was set to 0.32 for the SRNSAF and 0.4 for the MSRNSAF. The NMSD learning curves show that the SRNSAF and MSRNSAF perform comparably to the family of NSAF algorithms. In Fig. 8, we compare the NMSD learning curves of the NLMS and SRNLMS algorithms with the family of NSAF algorithms. For this simulation, the number of multiplications until convergence is also presented in Table 5, which indicates that the number of multiplications in the SRNSAF is 1,025,000, significantly lower than in the other algorithms.
For the tracking performance analysis, we consider the identification of two unknown filters with M = 200 taps, whose z-domain transfer functions are given by
and
where the transfer function of the optimum filter coefficients is W_{1}(z) for n ≤ 5 × 10^{3} and W_{2}(z) for 5 × 10^{3} < n ≤ 10 × 10^{3}. Figure 9 compares the tracking performance of the SRNSAF and MSRNSAF with other NSAF algorithms. The number of subbands and the step size were set to 8 and 0.5, respectively. As we can see, the SRNSAF and MSRNSAF perform close to the conventional NSAF algorithm.
Acoustic echo cancelation (AEC): speech input signal
For the AEC setup, we consider both exact and undermodeling scenarios. In the undermodeling scenario, the NMSD is calculated by padding the tap-weight vector of the adaptive filter with M − J zeros, where J is the length of the adaptive filter, which is shorter than that of the unknown system in this case [31]. In the exact-modeling scenario, the echo path is truncated to the first 128 tap weights [before the dotted line in Fig. 3]; in the undermodeling scenario, the length of the echo path is 256. For both scenarios, the length of all adaptive filters is set to 128. A speech signal is used as the input for the AEC setup [26].
Figures 10 and 11 compare the performance of the proposed SRNSAF and MSRNSAF algorithms with other NSAF algorithms in the exact and undermodeling scenarios. The number of subbands (N) was set to 4. In the FSNSAF, the number of selected subbands (N_{ s }) out of the total number of subbands (N) was set to 2. As we can see, the proposed SRNSAF and MSRNSAF perform comparably to the NSAF family in terms of convergence speed and steady-state misalignment, with lower computational complexity. Table 5 shows the number of multiplications until convergence for the different NSAF algorithms and indicates that the proposed SRNSAF has significantly lower computational complexity than the other algorithms.
The tracking capability is examined by shifting the acoustic impulse response to the right by 10 samples at a certain time step. Figure 12 compares the tracking performance of the SRNSAF and MSRNSAF with other NSAF algorithms in the exact-modeling scenario. The number of subbands was set to 4. As we can see, the SRNSAF and MSRNSAF perform close to the conventional NSAF algorithm.
Line echo cancelation
In communications over phone lines, a signal traveling from a far-end point to a near-end point is usually reflected in the form of an echo at the near end due to mismatches in circuitry. The purpose of a line echo canceller (LEC) is to eliminate this echo from the received signal. Figure 13 shows the impulse response sequence of a typical echo path taken from the G.168 standard [32]. Figure 14 shows the far-end signal from real speech and the echo signal (page 347 in [33]).
In this simulation, the length of the adaptive filter is 128. Figure 15 shows the error signals of the NSAF, SRNSAF, and MSRNSAF algorithms. The number of subbands and the step size were set to 4 and 0.5, respectively. As we can see, the SRNSAF and MSRNSAF have error performance close to that of the NSAF, while their computational complexity is considerably lower. To further measure the effectiveness of the proposed algorithms, we computed the echo return loss enhancement (ERLE), obtained by evaluating the difference between the powers of the echo and the error signal. The segmental ERLE estimates were obtained by averaging over 140 samples. The segmental ERLE curves for the measured speech and echo signals are shown in Fig. 16, which illustrates that the proposed algorithms and the conventional NSAF have comparable ERLE performance.
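The segmental ERLE described above can be sketched as follows (a minimal pure-Python illustration; the 140-sample segment length matches the text, while the function name and the small floor constant guarding against division by zero are our own):

```python
import math

def segmental_erle_db(echo, error, seg_len=140):
    """Per-segment ERLE in dB: 10*log10(echo power / residual-error power)."""
    erle = []
    for s in range(0, len(echo) - seg_len + 1, seg_len):
        p_echo = sum(v * v for v in echo[s:s + seg_len])
        p_err = sum(v * v for v in error[s:s + seg_len])
        erle.append(10.0 * math.log10(p_echo / (p_err + 1e-12)))
    return erle
```

A residual error whose amplitude is one tenth of the echo over a segment corresponds to roughly 20 dB of ERLE.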
Performance in nonstationary environment
Figure 17 presents the NMSD learning curves of the NSAF and SRNSAF in a nonstationary environment, in which the unknown system changes according to a random walk model. We assume an independent and identically distributed sequence q(n) with autocorrelation matrix \( \mathbf{Q}={\sigma}_q^2\mathbf{I} \) [27]. The number of subbands and the step size were set to 8 and 0.5, and different values of \( {\sigma}_q^2 \) (\( 0.00025{\sigma}_v^2 \) and \( 0.0025{\sigma}_v^2 \)) were chosen in the simulations. This figure shows that the steady-state NMSD in the nonstationary environment is larger than in the stationary environment. We also observe that the convergence speed of the SRNSAF is faster than that of the NSAF for both values of \( {\sigma}_q^2 \). To justify these results, Fig. 18 shows the simulated NMSD values as a function of the step size for the NSAF and SRNSAF in stationary and nonstationary environments, with the step size varying from 0.1 to 1. The results show that in the stationary environment, the simulated NMSD values of the NSAF are lower than those of the SRNSAF. In the nonstationary environment, there is an optimum value of the step size that minimizes the NMSD; this can be seen for \( {\sigma}_q^2=0.00025{\sigma}_v^2 \). Since the NMSD learning curves were obtained for μ = 0.5, the SRNSAF performs slightly better than the NSAF in the nonstationary environment.
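The random walk model used here, in the standard form w^{o}(n + 1) = w^{o}(n) + q(n) of [27], can be simulated as a quick sanity check (a sketch; the function name, seed, and values are illustrative):

```python
import random

def random_walk_system(w0, steps, sigma_q, seed=7):
    """Evolve the unknown system w^o by i.i.d. zero-mean Gaussian increments
    q(n) with covariance sigma_q^2 * I (random walk nonstationary model)."""
    rng = random.Random(seed)
    w = list(w0)
    for _ in range(steps):
        w = [wi + rng.gauss(0.0, sigma_q) for wi in w]
    return w
```

Setting sigma_q to zero recovers the stationary case, where the unknown system never moves; a nonzero sigma_q makes the steady-state NMSD floor rise, as Fig. 17 shows.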
Theoretical performance analysis
Simulation results for transient performance
The theoretical results presented in this paper are confirmed by several computer simulations in a system identification setup in which the unknown system has 16 randomly selected taps. Figures 19 and 20 show the simulated and theoretical MSE learning curves of the SRNSAF algorithm. The simulated learning curves are obtained by ensemble averaging over 100 independent trials, and the theoretical learning curves are obtained from (36). In Fig. 19, different values of N are used, and good agreement between the theoretical and simulated learning curves is observed. Figure 20 presents the results for different values of μ; again, there is good agreement between the simulated and theoretical learning curves. In Figs. 21 and 22, the simulated and theoretical NMSD learning curves are presented for different values of N and μ, as in Figs. 19 and 20, and good agreement can be seen in both figures. Figure 23 presents the theoretical and simulated MSE, MSD, and EMSE learning curves of the SRNSAF algorithm; good agreement is again observed.
Simulation results for stability bounds
Table 6 shows the theoretical values of the mean and mean-square stability bounds of the SRNSAF and MSRNSAF algorithms, obtained from (41) and (42). To justify these values, the simulated steady-state MSE was obtained by averaging over 500 steady-state samples from 500 independent realizations for each value of μ of a given algorithm, with the step size (μ) varying from 0.05 to μ_{max}. Figures 24 and 25 show the results for different values of N. As we can see, the theoretical values of μ_{max} from Table 6 provide a good estimate of the stability bounds of the SRNSAF and MSRNSAF algorithms. In [25], it was shown that the stability bound of the MSRNLMS is larger than that of the SRNLMS; interestingly, the same observation holds for the proposed algorithms: the stability bound of the MSRNSAF is larger than that of the SRNSAF.
Simulation results for steadystate performance
Figures 26 and 27 show the theoretical and simulated steady-state MSE values of the SRNSAF and MSRNSAF algorithms as a function of the step size, with the step size (μ) varying from 0.05 to 1. The theoretical steady-state MSE values are obtained from (38). As we can see, there is good agreement between the simulated and theoretical steady-state MSE values in both figures, with a slight deviation for large step sizes. These figures show that the steady-state MSE values of the MSRNSAF are lower than those of the SRNSAF; the same was observed for the SRNLMS and MSRNLMS algorithms in [25].
Theoretical results in nonstationary environment
The theoretical and simulated NMSD learning curves in the nonstationary environment are presented in Fig. 28. The theoretical learning curves were obtained from (47). The number of subbands and the step size were set to 8 and 0.5, and various values of \( {\sigma}_q^2 \) (\( 0.00025{\sigma}_v^2 \), \( 0.0025{\sigma}_v^2 \), and \( 0.025{\sigma}_v^2 \)) were selected in this simulation. Good agreement between the simulated and theoretical learning curves is observed in the nonstationary environment. In Fig. 29, the simulated and theoretical steady-state NMSD values as a function of the step size are shown. The theoretical values were obtained from (49). This figure shows that there is an optimum step size which minimizes the steady-state NMSD in the nonstationary environment.
Conclusion
In this paper, the NSAF algorithm with the signed regressor of the input signal was established. The optimization problem was formulated as an L_{1}-norm minimization, which leads to the sign operation on the input regressors in each subband. The computational complexity of the proposed SRNSAF is lower than that of the previous NSAF family, while its convergence performance is close to that of the NSAF; the SRNSAF is therefore a suitable candidate for many applications. To improve the performance of the SRNSAF, the MSRNSAF was introduced. The performance of the SRNSAF was confirmed by several computer simulations in SI, AEC, and LEC applications. Also, the theoretical mean-square performance analysis and the stability bounds of the proposed algorithms were studied and confirmed by different experiments.
Notes
 1.
The theoretical stability bound of the SRNSAF is presented in Section VI. These values are justified in Section VIII.
 2.
K is the length of the channel filters.
References
 1.
S Haykin, Adaptive Filter Theory, 4th edn. (Prentice-Hall, 2002)
 2.
AH Sayed, Adaptive Filters (Wiley, 2008)
 3.
K Ozeki, T Umeda, An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electron Commun Jpn 67A, 19–27 (1984)
 4.
M Muneyasu, T Hinamoto, A realization of TD adaptive filters using affine projection algorithm. J Franklin Inst 335(7), 1185–1193 (1998)
 5.
A Gilloire, M Vetterli, Adaptive filtering in subbands with critical sampling: Analysis, experiments, and application to acoustic echo cancellation. IEEE Trans. Signal Process 40, 1862–1875 (1992)
 6.
KA Lee, WS Gan, SM Kuo, Subband Adaptive Filtering: Theory and Implementation (Wiley, Hoboken, 2009)
 7.
MSE Abadi, S Kadkhodazadeh, A family of proportionate normalized subband adaptive filter algorithms. J Franklin Inst 348(2), 212–238 (2011)
 8.
KA Lee, WS Gan, Improving convergence of the NLMS algorithm using constrained subband updates. IEEE Signal Process Lett 11, 736–739 (2004)
 9.
M de Courville, P Duhamel, Adaptive filtering in subbands using a weighted criterion. IEEE Trans Signal Process 46, 2359–2371 (1998)
 10.
SS Pradhan, VU Reddy, A new approach to subband adaptive filtering. IEEE Trans Signal Process 47, 655–664 (1999)
 11.
MSE Abadi, JH Husøy, Selective partial update and setmembership subband adaptive filters. Signal Process. 88, 2463–2471 (2008)
 12.
SE Kim, YS Choi, MK Song, WJ Song, A subband adaptive filtering algorithm employing dynamic selection of subband filters. IEEE Signal Process Lett 17(3), 245–248 (2010)
 13.
MK Song, SE Kim, YS Choi, WJ Song, Selective normalized subband adaptive filter with subband extension. IEEE Trans Circuits Syst II: Express Briefs 60(2), 101–105 (2013)
 14.
J Nagumo, A Noda, A learning method for system identification. IEEE Trans. Automat. Contr. 12, 282–287 (1967)
 15.
DL Duttweiler, Adaptive filter performance with nonlinearities in the correlation multipliers. IEEE Trans Acoust Speech Signal Process 30(8), 578–586 (1982)
 16.
A Gersho, Adaptive filtering with binary reinforcement. IEEE Trans. Inform. Theory 30(3), 191–199 (1984)
 17.
WA Sethares, Adaptive algorithms with nonlinear data and error functions. IEEE Trans Signal Process 40(9), 2199–2206 (1992)
 18.
S Koike, Analysis of adaptive filters using normalized signed regressor LMS algorithm. IEEE Trans Signal Process 47(10), 2710–2723 (1999)
 19.
VJ Mathews, SH Cho, Improved convergence analysis of stochastic gradient adaptive filters using the sign algorithm. IEEE Trans Acoust Speech Signal Process 35(4), 450–454 (1987)
 20.
P Wen, S Zhang, J Zhang, A novel subband adaptive filter algorithm against impulsive noise and its performance analysis. Signal Process. 127(10), 282–287 (2016)
 21.
TMCM Classen, WFG Mecklenbraeuker, Comparison of the convergence of two algorithms for adaptive FIR digital filters. IEEE Trans Acoust Speech Signal Process 29(6), 670–678 (1981)
 22.
J Ni, F Li, Variable regularisation parameter sign subband adaptive filter. Electron. Lett. 46, 1605–1607 (2010)
 23.
J Shin, J Yoo, P Park, Variable stepsize sign subband adaptive filter. IEEE Signal Process Lett 20, 173–176 (2013)
 24.
NJ Bershad, Comments on ‘Comparison of the convergence of two algorithms for adaptive FIR digital filters’. IEEE Trans Acoust Speech Signal Process 33(12), 1604–1606 (1985)
 25.
K Takahashi, S Mori, in Proc. ICCS/ISITA. A new normalized signed regressor LMS algorithm (Singapore, 1992), pp. 1181–1185
 26.
J Ni, F Li, A variable stepsize matrix normalized subband adaptive filter. IEEE Trans Audio Speech Lang Process 18, 1290–1299 (2010)
 27.
HC Shin, AH Sayed, Meansquare performance of a family of affine projection algorithms. IEEE Trans Signal Process 52(1), 90–102 (2004)
 28.
JJ Jeong, SH Kim, G Koo, SW Kim, Meansquare deviation analysis of multibandstructured subband adaptive filter algorithm. IEEE Trans Signal Process 64(4), 985–994 (2016)
 29.
K Dogancay, O Tanrikulu, Adaptive filtering algorithms with selective partial updates. IEEE Trans Circuits Syst II Analog Digit Signal Process 48, 762–769 (2001)
 30.
H Malvar, Signal Processing with Lapped Transforms (Artech House, 1992)
 31.
C Paleologu, J Benesty, S Ciochina, A variable stepsize affine projection algorithm designed for acoustic echo cancellation. IEEE Trans Audio Speech Lang Process 16, 1466–1478 (2008)
 32.
ITU-T Rec. G.168, Digital Network Echo Cancellers, 2007.
 33.
AH Sayed, Fundamentals of Adaptive Filtering (Wiley, 2003)
Acknowledgements
The authors would like to thank Shahid Rajaee Teacher Training University (SRTTU) for financial support.
Funding
This work was financially supported by Shahid Rajaee Teacher Training University (SRTTU).
Author information
Contributions
Due to the effective features of signed regressor adaptive algorithms (low computational complexity and convergence speed close to that of the conventional algorithm), and to improve on the SR-NLMS algorithm, this paper proposes the signed regressor NSAF (SRNSAF) algorithm. The SRNSAF is established through an L1-norm optimization. A constraint is imposed on the decimated filter output to force the a posteriori error to become zero; this constraint guarantees the convergence of the algorithm. The algorithm uses the signum of the input regressors at each subband during adaptation, so no multiplications are required for the normalization factor at each subband. To improve the performance of the SRNSAF, the modified SRNSAF (MSRNSAF) is also established. The proposed SRNSAF and MSRNSAF algorithms have lower computational complexity than the NSAF, SPU-NSAF, DS-NSAF, and FS-NSAF, while their convergence rate remains close to that of the NSAF. In addition, their steady-state error level is nearly identical to that of the NSAF. For the performance evaluation of any proposed adaptive algorithm, a theoretical analysis is essential [33]. Therefore, the energy conservation approach [27] is applied to the SRNSAF, and the mean-square performance of the proposed algorithms is studied in stationary and nonstationary environments. This approach does not require a white or Gaussian assumption on the input regressors. Based on it, the transient behavior, the steady-state performance, and the stability bounds of the SRNSAF and MSRNSAF are analyzed and closed-form relations are derived.
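The mechanism described above can be illustrated with a minimal numerical sketch. The snippet below reduces the subband structure to its single-band special case (the analysis/synthesis filter banks are omitted, and all variable names and parameter values are illustrative, not taken from the paper). The key property is that the normalization factor sgn(u)^T u equals the L1-norm of the regressor, so it needs only additions; with this exact normalization, one update scales the a priori error by (1 − μ), so μ = 1 forces the a posteriori error to zero, matching the constraint mentioned above.

```python
import numpy as np

# Single-band sketch of the signed-regressor update (illustrative only;
# the subband analysis/synthesis filter banks of SRNSAF are omitted).
rng = np.random.default_rng(0)
M = 16                                   # adaptive filter length
w_o = rng.standard_normal(M)             # unknown system to identify
w = np.zeros(M)                          # adaptive filter weights
mu, eps = 0.5, 1e-8                      # step-size and regularization

x = rng.standard_normal(5000)            # white input signal
for k in range(M, len(x)):
    u = x[k - M:k][::-1]                 # input regressor
    e = (w_o - w) @ u                    # a priori (noise-free) error
    s = np.sign(u)                       # signed regressor
    # Normalization sgn(u)^T u = ||u||_1: a sum of absolute values,
    # so no multiplications are needed to compute it.
    w = w + mu * e * s / (np.sum(np.abs(u)) + eps)

msd = np.sum((w_o - w) ** 2) / np.sum(w_o ** 2)   # normalized misalignment
```

Because sgn(u)^T u = ||u||_1, the a posteriori error after this update is exactly (1 − μ) times the a priori error, so the filter converges to the unknown system for 0 < μ < 2 in this noise-free setting.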
Corresponding author
Correspondence to Mohammad Shams Esfand Abadi.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
The theoretical relations in nonstationary environment
In the nonstationary environment, the unknown system w^{o} is assumed to be time-variant and changes according to the following random walk model [27, 33]:
\( {\mathbf{w}}^o(k)={\mathbf{w}}^o(k-1)+\mathbf{q}(k), \)
where q(k) is a zero-mean, independent and identically distributed random sequence with autocorrelation matrix Q = E{q(k)q^{T}(k)}, independent of x(k-N) and v(k) [33]. Now, by defining \( \tilde{\mathbf{w}}(k)={\mathbf{w}}^o(k)-\mathbf{w}(k) \), the weight error vector update equation can be expressed as
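The random walk model can be illustrated numerically as follows (a sketch with illustrative parameters: Q is taken diagonal, Q = σ_q² I, and the values of M, σ_q², and K are arbitrary choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 16                        # length of the unknown system
sigma_q2 = 1e-6               # variance of each entry of q(k), i.e. Q = sigma_q2 * I
w_o = rng.standard_normal(M)  # initial unknown system w^o(0)
w0 = w_o.copy()

K = 1000
for k in range(K):
    q = np.sqrt(sigma_q2) * rng.standard_normal(M)  # zero-mean i.i.d. perturbation
    w_o = w_o + q                                   # random walk: w^o(k) = w^o(k-1) + q(k)

# The mean-square drift grows linearly with time:
# E{||w^o(K) - w^o(0)||^2} = K * Tr(Q) = K * M * sigma_q2
drift = np.sum((w_o - w0) ** 2)
```

This linear growth of the drift with k is what makes the tracking term proportional to Tr(Q) appear in the nonstationary EMSE expression.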
By taking the Φ-weighted norm of both sides of (46), then taking expectations and following the same approach as in the stationary environment, we get
As k goes to infinity, the steady-state EMSE in the nonstationary environment is given by
and the steady-state MSD is obtained as
It is important to note that there is an optimal value of the step-size that minimizes the steady-state EMSE in the nonstationary environment [33] (Chapter 7). This effect comes from the second term in (48), in which the inverse of the step-size (μ^{−1}) appears. For large step-sizes this term is small, so the effect of the nonstationary environment on the EMSE is small and the performance is similar to the stationary case (Fig. 29). For small step-sizes this term dominates and the EMSE grows. Hence, there is an optimal step-size that minimizes the EMSE in the nonstationary environment; further discussion of this issue can be found in [33].
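The trade-off just described can be made explicit in generic form. Writing the nonstationary steady-state EMSE as a two-term function of the step-size, with placeholder constants c_1 and c_2 (illustrative symbols, not taken from the paper: following the structure of (48), c_1 scales with the noise power and c_2 with Tr(Q)), the optimal step-size follows by elementary calculus:

```latex
% Generic two-term structure of the nonstationary steady-state EMSE;
% c_1, c_2 are placeholder constants standing in for the stationary
% and tracking terms of (48), respectively.
\zeta(\mu) \approx c_1\,\mu + c_2\,\mu^{-1},
\qquad
\frac{\mathrm{d}\zeta}{\mathrm{d}\mu} = c_1 - c_2\,\mu^{-2} = 0
\;\Longrightarrow\;
\mu_{\mathrm{opt}} = \sqrt{c_2/c_1},
\qquad
\zeta(\mu_{\mathrm{opt}}) = 2\sqrt{c_1\,c_2}.
```

Large μ makes the μ^{−1} (tracking) term negligible, small μ makes it dominant, and the minimum balances the two contributions.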
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Shams Esfand Abadi, M., Shafiee, M.S. & Zalaghi, M. A low computational complexity normalized subband adaptive filter algorithm employing signed regressor of input signal. EURASIP J. Adv. Signal Process. 2018, 21 (2018). https://doi.org/10.1186/s13634-018-0542-z
Received:
Accepted:
Published:
Keywords
 Normalized subband adaptive filter (NSAF)
 Mean-square performance
 Signed regressor (SR)
 L1-norm