 Research
 Open Access
A family of variable stepsize affine projection adaptive filter algorithms using statistics of channel impulse response
EURASIP Journal on Advances in Signal Processing volume 2011, Article number: 97 (2011)
Abstract
This paper extends the recently introduced variable stepsize (VSS) approach to a family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics: the optimal stepsize vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSSAPA), the VSS selective partial update NLMS (VSSSPUNLMS), the VSSSPUAPA, and the VSS selective regressor APA (VSSSRAPA). In the VSSSPU adaptive algorithms, the filter coefficients are partially updated, which reduces the computational complexity. In VSSSRAPA, an optimal selection of input regressors is performed during the adaptation. The presented algorithms combine good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
1. Introduction
Adaptive filtering has been, and still is, an area of active research that plays a role in an ever-increasing number of applications, such as noise cancellation, channel estimation, channel equalization and acoustic echo cancellation [1, 2]. The least mean squares (LMS) algorithm and its normalized version (NLMS) are the workhorses of adaptive filtering. In the presence of colored input signals, the LMS and NLMS algorithms have extremely slow convergence rates. To address this problem, a number of adaptive filtering structures based on affine subspace projections [3, 4], data-reusing adaptive algorithms [5, 6], block adaptive filters [2] and multirate techniques [7, 8] have been proposed in the literature. In all these algorithms, the choice of a fixed stepsize determines both the convergence rate and the steady-state mean square error (MSE). It is well known that decreasing the stepsize lowers the steady-state MSE, while increasing it speeds up convergence. By optimally adjusting the stepsize during the adaptation, we can obtain both a fast convergence rate and a low steady-state MSE. Such adjustments are based on various criteria. In [9], the squared instantaneous error was used. To improve noise immunity under Gaussian noise, the squared autocorrelation of errors at adjacent times was used in [10], and in [11] the fourth-order cumulant of the instantaneous error was adopted.
In [12], two adaptive stepsize gradient adaptive filters were presented. In these algorithms, the step sizes were adjusted by a gradient descent algorithm designed to minimize the squared estimation error. These algorithms exhibited fast convergence, low steady-state MSE, and good performance in nonstationary environments. The blind adaptive gradient (BAG) algorithm for code-aided suppression of multiple-access interference (MAI) and narrowband interference (NBI) in direct-sequence code-division multiple-access (DS/CDMA) systems was presented in [13]. The BAG algorithm was based on the concept of accelerating the convergence of a stochastic gradient algorithm by averaging. The authors showed that BAG has convergence and tracking properties identical to recursive least squares (RLS) at a computational cost similar to the LMS algorithm. In [14], two low-complexity variable stepsize mechanisms were proposed for constrained minimum variance (CMV) receivers that operate with stochastic gradient algorithms; these mechanisms were also incorporated in the channel estimation algorithms. A low-complexity variable stepsize mechanism for blind code-constrained constant modulus algorithm (CCM) receivers was proposed in [15]. This approach is very useful in nonstationary wireless environments.
In [16], a generalized normalized gradient descent (GNGD) algorithm for linear finite impulse response (FIR) adaptive filters was introduced; it extends the NLMS algorithm by means of an additional gradient-adaptive term in the denominator of the NLMS learning rate. The simulation results show that the GNGD algorithm is robust to significant variations in the initial values of its parameters.
Two important variable stepsize (VSS) versions of the NLMS and the affine projection algorithm (APA) were introduced in [17], where the stepsize is obtained by minimizing the mean-square deviation (MSD). These algorithms show good performance in terms of convergence rate and steady-state MSE. This approach was successfully extended to selective partial update (SPU) adaptive filter algorithms in [18].
To further improve the performance of adaptive filters, algorithms based on channel impulse response statistics were proposed [19, 20]. In [21], a new variable stepsize control was proposed for the NLMS algorithm, in which a stepsize vector with a different value for each filter coefficient is used. In this approach, based on prior knowledge of the channel impulse response statistics, the optimal stepsize vector is obtained by minimizing the mean-square deviation (MSD) between the optimal and estimated filter coefficients.
Another important feature of adaptive filter algorithms is computational complexity. Several fixed-stepsize adaptive filters, such as the adaptive filter algorithms with selective partial updates, have been proposed to reduce the computational complexity. These algorithms update only a subset of the filter coefficients at each iteration. The MaxNLMS [22], the variants of the SPUNLMS [23], and the SPUAPA [24, 25] are important examples of this family of adaptive filter algorithms. Recently, an affine projection adaptive filtering algorithm with selective regressors (SR) was also proposed to reduce the computational complexity of APA [26–28]. In this algorithm, the recent regressors are optimally selected during the adaptation.
In this paper, we extend the approach of [21] to the APA, SPUNLMS, SPUAPA, and SRAPA, establishing four novel VSS adaptive filter algorithms. These algorithms are computationally efficient. We demonstrate their good performance through several simulation results in a system identification scenario, and we also compare the proposed algorithms with other algorithms.
What we propose in this paper can be summarized as follows:

- The establishment of the VSSAPA.
- The establishment of the VSSSPUNLMS.
- The establishment of the VSSSPUAPA.
- The establishment of the VSSSRAPA.
- Demonstration of the proposed algorithms in a system identification scenario.
- Demonstration of the tracking ability of the proposed algorithms.
We have organized our paper as follows. In Section 2, the NLMS and SPUNLMS algorithms are briefly reviewed. Then the family of APA, SRAPA and SPUAPA is presented, and the family of variable stepsize adaptive filters is established. Next, the computational complexity of the VSS adaptive filters is discussed. Finally, before concluding the paper, we demonstrate the usefulness of these algorithms through several experimental results.
Throughout the paper, the following notations are adopted:
|.| norm of a scalar
‖.‖^{2} squared Euclidean norm of a vector
(.)^{T} transpose of a vector or a matrix
tr(.) trace of a matrix
E[.] expectation operator
2. Background on the family of NLMS algorithms
2.1. Background on NLMS
The output y(n) of an adaptive filter at time n is given by
where w(n) = [w_{0}(n), w_{1}(n), ..., w_{M−1}(n)]^{T} is the M × 1 filter coefficient vector and X(n) = [x(n), x(n−1), ..., x(n−M+1)]^{T} is the M × 1 input signal vector. The NLMS algorithm is derived from the solution of the following constrained minimization problem [1]
where d(n) is the desired signal. The resulting NLMS algorithm is given by the recursion
where μ is the stepsize which is introduced to control the convergence speed (0 < μ < 2). Also, e(n) is the output error signal which is defined as
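The recursion above (fixed stepsize μ, normalization by the squared input norm) can be sketched in a few lines of code. This is a minimal illustration, not the paper's implementation; the regularization constant `eps` is our addition, to guard against a vanishing input norm.

```python
import numpy as np

def nlms(x, d, M, mu=0.5, eps=1e-8):
    """Minimal NLMS sketch: identify an M-tap FIR system from input x
    and desired signal d, with fixed step-size mu (0 < mu < 2)."""
    w = np.zeros(M)                       # filter coefficient vector w(n)
    e = np.zeros(len(x))
    for n in range(M, len(x)):
        X = x[n - M + 1:n + 1][::-1]      # regressor [x(n), ..., x(n-M+1)]
        e[n] = d[n] - w @ X               # output error e(n)
        w += mu * e[n] * X / (X @ X + eps)
    return w, e
```

In a noiseless identification run, w converges to the true impulse response.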
2.2. Selective Partial Update NLMS
By partitioning the filter coefficient and input signal vectors into B blocks, each of length L (note that B = M/L is an integer), so that X(n) = [x_{1}(n), x_{2}(n), ..., x_{ B } (n)]^{T} and w(n) = [w_{1}(n), w_{2}(n), ..., w_{ B } (n)]^{T}, the SPUNLMS algorithm for a single block update at every iteration minimizes the following optimization problem:
where F = {j_{1}, j_{2}, ..., j_{ S } } denote the indices of the S blocks out of B blocks that should be updated at every adaptation and ${w}_{F}\left(n\right)={\left\{{w}_{{j}_{1}},{w}_{{j}_{2}},...,{w}_{{j}_{S}}\right\}}^{T}$[24]. Again by using the method of Lagrange multipliers, and defining ${X}_{F}\left(n\right)={\left\{{x}_{{j}_{1}}\left(n\right),{x}_{{j}_{2}}\left(n\right),...,{x}_{{j}_{S}}\left(n\right)\right\}}^{T}$, the update equation for SPUNLMS is given by:
For B = M, L = 1, and S = 1, the SPUNLMS algorithm in (6) reduces to
which is the MaxNLMS algorithm [22]. For S = B, the SPUNLMS algorithm becomes identical to the NLMS algorithm in (3).
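One SPU-NLMS iteration can be sketched as follows. We assume the standard selection criterion of [24] (update the S blocks whose input sub-vectors have the largest squared Euclidean norms), since the criterion itself is not restated in this section; `eps` is our addition.

```python
import numpy as np

def spu_nlms_step(w, X, d, B, S, mu=0.5, eps=1e-8):
    """One SPU-NLMS iteration (sketch). w and X have length M = B*L;
    only the S blocks with the largest ||x_j(n)||^2 are updated."""
    L = len(w) // B
    e = d - w @ X                                    # output error e(n)
    norms = np.array([X[j*L:(j+1)*L] @ X[j*L:(j+1)*L] for j in range(B)])
    F = np.argsort(norms)[-S:]                       # S largest-norm blocks
    sel = np.concatenate([np.arange(j*L, (j+1)*L) for j in F])
    XF = X[sel]
    w[sel] += mu * e * XF / (XF @ XF + eps)          # update selected blocks only
    return w, e
```

For S = B every block is selected and the step reduces to a full NLMS update, matching the remark above.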
3. Background on APA, SRAPA and SPUAPA
3.1. Affine projection algorithm (APA)
Now, define the M × K matrix of the input signal and the K × 1 vector of the desired signal as:
where K is a positive integer (usually, but not necessarily, K ≤ M). The family of APA can be established by minimizing relation (2) subject to d(n) = X^{T}(n) w(n+1). Again, by using the method of Lagrange multipliers, the filter vector update equation for the family of APA is given by:
where e(n) = [e(n), e(n−1), ..., e(n−K+1)]^{T} is the output error vector, which is defined as:
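A sketch of one APA update is given below; the regularization term eps·I on the K × K inverse is our addition for numerical safety, and with K = 1 the step collapses to an NLMS update.

```python
import numpy as np

def apa_step(w, Xmat, dvec, mu=0.5, eps=1e-6):
    """One affine projection update (sketch). Xmat is the M x K matrix
    whose columns are the K most recent regressors; dvec is the K x 1
    desired signal vector."""
    K = Xmat.shape[1]
    e = dvec - Xmat.T @ w                       # K x 1 output error vector
    w = w + mu * Xmat @ np.linalg.solve(Xmat.T @ Xmat + eps * np.eye(K), e)
    return w, e
```

With mu = 1 and eps = 0, the update enforces d(n) = X^{T}(n) w(n+1) exactly, which is the constraint the algorithm is derived from.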
3.2. Selective Regressor APA (SRAPA)
In [26], another novel affine projection algorithm with selective regressors (SR), called SRAPA, was presented. In this section, we briefly review the SRAPA, which minimizes relation (2) subject to:
where G = {i_{1}, i_{2}, ..., i_{ P } } denotes a subset with P members of the set {0, 1, ..., K−1}.
is the M × P matrix of the input signal and:
is the P × 1 vector of the desired signal. Using the method of Lagrange multipliers to solve this optimization problem leads to the following update equation:
where
The indices of G are obtained by the following procedure:

1.
Compute the following values for 0 ≤ i ≤ K − 1:
$$\frac{{e}^{2}\left(n-i\right)}{{\parallel X\left(n-i\right)\parallel}^{2}}$$(17)
2.
The indices of G correspond to the P largest values of (17).
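The two-step selection procedure above can be sketched as follows (the small `eps` avoids division by a zero regressor norm and is our addition):

```python
import numpy as np

def select_regressors(Xmat, dvec, w, P, eps=1e-12):
    """SR-APA regressor selection (sketch): among the K most recent
    regressors (columns of Xmat), pick the P indices with the largest
    normalized squared errors e^2(n-i) / ||x(n-i)||^2."""
    e = dvec - Xmat.T @ w                             # a priori errors
    crit = e ** 2 / (np.sum(Xmat ** 2, axis=0) + eps)
    G = np.sort(np.argsort(crit)[-P:])                # P largest values
    return G, Xmat[:, G], dvec[G]
```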
3.3. Selective Partial Update APA (SPUAPA)
The SPUAPA solves the following optimization problem [24]:
where F = {j_{1}, j_{2}, ..., j_{ S } } denotes the indices of the S blocks out of B that should be updated at every adaptation. Again, using the method of Lagrange multipliers, the filter vector update equation is given by:
where
is the SL × K matrix and:
is the L × K matrix. The indices of F are obtained by the following procedure:

1.
Compute the following values for 1 ≤ i ≤ B:
$$Tr\left({X}_{i}^{T}\left(n\right){X}_{i}\left(n\right)\right)$$(22) 
2.
The indices of F correspond to S largest values of relation (22).
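The block-selection rule (22) can be sketched as follows, assuming X(n) is partitioned row-wise into B blocks of L rows:

```python
import numpy as np

def select_blocks(Xmat, B, S):
    """SPU-APA block selection (sketch): return the indices of the S
    blocks with the largest Tr(X_i^T X_i), i.e. the largest sum of
    squared entries of the corresponding L x K sub-matrix."""
    L = Xmat.shape[0] // B
    traces = np.array([np.sum(Xmat[i*L:(i+1)*L, :] ** 2) for i in range(B)])
    return np.sort(np.argsort(traces)[-S:])
```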
4. VSSNLMS and the proposed VSS Adaptive Filter Algorithms
4.1. Variable Step-Size NLMS algorithm using statistics of channel response
Consider a linear system whose input signal X(n) and desired signal d(n) are related by
where h = [h_{0}, h_{1}, ..., h_{M−1}]^{T} is the true unknown system with memory length M, X(n) = [x(n), ..., x(n−M+1)]^{T} is the system input vector, and v(n) is the additive noise.
The filter coefficients of VSSNLMS are updated by [21]
where the stepsize matrix is defined as U(n) = diag[μ_{0}(n), ..., μ_{M1}(n)].
To quantitatively evaluate the misadjustment of the filter coefficients, the MSD is taken as a figure of merit, which is defined as
where the weight error vector is given by
Note that at each iteration the MSD depends on μ_{ i } (n), and by assuming an independent and identically distributed (i.i.d.) input signal, we have
The optimal stepsize is obtained by minimizing the MSD at each iteration. Taking the first-order partial derivative of Λ(n+1) with respect to μ_{ i } (n) (i = 0, ..., M−1) and setting it to zero, we obtain
and
We can estimate E[e^{2}(n)] by a moving average approach of e^{2}(n):
where 0 < λ < 1 is the forgetting factor. Also, the initial value for $E[{\tilde{w}}_{i}^{2}(0)]$ is given by the second-order statistics of the channel impulse response, i.e. $E[{h}_{i}^{2}]$.
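Since the closed-form expressions above did not survive extraction, the following sketch reconstructs the VSS-NLMS loop under the i.i.d.-input approximations of this section: the per-coefficient stepsize is taken proportional to the current weight-error power estimate $E[{\tilde{w}}_{i}^{2}(n)]$, normalized by the moving-average estimate of E[e^{2}(n)], with $E[{\tilde{w}}_{i}^{2}(0)]$ initialized from the channel statistics $E[{h}_{i}^{2}]$. The exact stepsize formula, the clipping safeguard, and all parameter values here are our assumptions, not the paper's equations.

```python
import numpy as np

def vss_nlms(x, d, M, h_var, lam=0.99, eps=1e-8):
    """VSS-NLMS sketch in the spirit of [21]. h_var holds the prior
    channel statistics E[h_i^2] used to initialize E[w~_i^2(0)].
    The step-size mu_i(n) ~ M * sigma_x^2 * E[w~_i^2(n)] / E[e^2(n)]
    is our reconstruction, clipped to [0, 1] as a safeguard."""
    w = np.zeros(M)
    p = np.asarray(h_var, dtype=float).copy()   # estimate of E[w~_i^2(n)]
    e2 = 0.0                                    # moving average of e^2(n)
    sigma_x2 = np.var(x)
    for n in range(M, len(x)):
        X = x[n - M + 1:n + 1][::-1]
        e = d[n] - w @ X
        e2 = lam * e2 + (1 - lam) * e * e       # forgetting-factor average
        mu = np.clip(M * sigma_x2 * p / (e2 + eps), 0.0, 1.0)
        w += mu * e * X / (X @ X + eps)         # per-coefficient step-sizes
        p *= 1.0 - mu / M                       # weight-error power update
    return w
```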
4.2. Variable Step-Size Selective Partial Update NLMS algorithm using statistics of channel response
The filter coefficients in SPUNLMS are updated by
Approximating e(n) with
and substituting (31) into (32), we obtain
and
where
and I_{ SL } is the SL × SL identity matrix.
For obtaining the MSD, we can write
where ${\tilde{w}}_{\bar{F}}(n)$ denotes the weight-error entries of the coefficients that are not selected for update, and we know
Combining (34), (36) and (37) we have
From (35), we can write
Combining (38) and (39), we get
Taking the first-order partial derivative of Λ(n+1) with respect to μ_{ j } (n) and setting it to zero for j ∈ F, we have
Therefore,
To update $E[{\tilde{w}}_{j}^{2}(n)]$, we use the following equation, obtained by taking the mean square of the j-th entry in (34):
From (32), we can write
and
Also, E(e^{2}(n)) is obtained according to (30).
4.3. Variable Step-Size APA using statistics of channel response
Suppose X(n) and d(n) are defined as in Section 3.1, and
is the noise vector, X(n) is the input signal matrix and d(n) is the desired signal vector which are related by
The filter coefficients in VSSAPA are updated by
Combining (11) and (47), we rewrite the estimation error signal in (11) as
Substituting (49) into (48), we obtain
where
and I_{ M } is the M × M identity matrix.
The MSD is defined as relation (26) and combining it with (50), we have
Similar to [21], we assume that the entries of X(n) and v(n) are zero-mean independent and identically distributed (i.i.d.) sequences with variances ${\sigma}_{x}^{2}$ and ${\sigma}_{v}^{2}$, respectively, and that $\tilde{w}(n)$, X(n) and v(n) are mutually independent. Therefore, we obtain from (51)
Combining (52) and (53), we get
The optimal stepsize is obtained by minimizing the MSD at each iteration. Taking the first-order partial derivative of Λ(n+1) with respect to μ_{ i } (n) (i = 0, ..., M−1) and setting it to zero, we obtain
To update $E[{\tilde{w}}_{i}^{2}(n)]$, we use the following equation, obtained by taking the mean square of the i-th entry in (50):
We obtain from (49)
Substitution of (57) into (56) leads to
It is straightforward to estimate $E[{\parallel e(n)\parallel}^{2}]$ by a moving average of ${\parallel e(n)\parallel}^{2}$:
4.4. Variable Step-Size Selective Regressor AP algorithm using statistics of channel response
The filter coefficients in VSSSRAPA are updated by
Assuming ${\mathit{d}}_{G}\left(n\right)={\mathit{X}}_{G}^{T}\left(n\right)h+{\mathit{v}}_{G}\left(n\right)$ and combining it with (16), we have
where
Substituting (61) into (60), we obtain
and
where
Combining MSD and (65) we have
From (66), we can write
Combining (66) and (67), we get
Taking the first-order partial derivative of Λ(n+1) with respect to μ_{ i } (n) (i = 0, ..., M−1) and setting it to zero, we have
Therefore,
To update $E[{\tilde{w}}_{i}^{2}(n)]$, we use the following equation, obtained by taking the mean square of the i-th entry in (64):
From (61), we can write
Therefore,
It is straightforward to estimate $E[{\parallel {e}_{G}(n)\parallel}^{2}]$ by a moving average of ${\parallel {e}_{G}(n)\parallel}^{2}$:
4.5. Variable Step-Size Selective Partial Update AP algorithm using statistics of channel response
The filter coefficients in VSSSPUAPA are updated by
Approximating e(n) with
and substituting (76) into (75), we obtain
and
where
Combining MSD and (78) we have
From (79), we can write
Combining (81) and (80), we get
Taking the first-order partial derivative of Λ(n+1) with respect to μ_{ j } (n) and setting it to zero for j ∈ F, we have
Therefore, for j ∈ F we have
To update $E[{\tilde{w}}_{j}^{2}(n)]$, we use the following equation, obtained by taking the mean square of the j-th entry in (78):
From (76), we can write
Therefore
Also, $E[{\parallel e(n)\parallel}^{2}]$ is obtained according to (59).
5. Computational complexity
The computational complexity of the VSS adaptive algorithms is given in Tables 1 and 2. The computational complexity of the APA and NLMS is taken from [4]. The SPUNLMS needs 3SL+1 multiplications, 1 additional multiplication, and B log_{2}S+O(B) comparisons. Comparing the update equations for NLMS and VSSNLMS shows that the VSSNLMS needs 4M+3 additional multiplications and M divisions due to the variable stepsize. In VSSSPUNLMS, the numbers of additional multiplications and divisions are 4SL+3 and SL, respectively. Also, this algorithm needs B log_{2}S+O(B) comparisons. It is obvious that the computational complexity of VSSSPUNLMS is lower than that of VSSNLMS. The reductions in multiplications and divisions for VSSSPUNLMS are 3(M−SL)−1 and M−SL, respectively.
The SPUAPA needs (K^{2}+2K)SL+K^{3}+K^{2} multiplications, 1 additional multiplication, and B log_{2}S+O(B) comparisons. The SRAPA needs (P^{2}+2P)M+P^{3}+P^{2} multiplications and K divisions; it also needs (K−P)M+K+1 additional multiplications and K log_{2}P+O(K) comparisons. Comparing the update equations for APA and VSSAPA shows that the VSSAPA needs KM^{2}+3M+K+1 additional multiplications and M divisions due to the variable stepsize. In VSSSPUAPA, the additional multiplications number KS^{2}L^{2}+3SL+K+1 and the additional divisions SL. Also, this algorithm needs B log_{2}S+O(B) comparisons. It is obvious that the computational complexity of VSSSPUAPA is lower than that of VSSAPA. The reductions in multiplications and divisions for VSSSPUAPA are (K^{2}+2K+4)(M−SL)+K(M^{2}−S^{2}L^{2}) and M−SL, respectively. In VSSSRAPA, the additional multiplications are 3M+P+1 and the additional divisions are M compared with SRAPA. Also, this algorithm needs K log_{2}P+O(K) comparisons. It is obvious that the computational complexity of VSSSRAPA is lower than that of VSSAPA.
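The extra per-iteration cost of the VSS variants quoted above can be tabulated with a small helper. The fused expressions in the extracted text are read here as differences and products (e.g. the SPU term as K·S²L²), which is our interpretation:

```python
def vss_extra_multiplications(M, K, S, L, P):
    """Extra multiplications per iteration of each VSS variant over its
    fixed step-size counterpart, as quoted in Section 5 (interpretation
    of the fused expressions is ours)."""
    return {
        "VSSAPA": K * M ** 2 + 3 * M + K + 1,
        "VSSSPUAPA": K * (S * L) ** 2 + 3 * S * L + K + 1,
        "VSSSRAPA": 3 * M + P + 1,
    }
```

For example, with M = 20, K = 4, B = 4, S = 3 (so L = 5) and P = 2, the SPU variant needs noticeably fewer extra multiplications than the full VSSAPA.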
6. Experimental results
We present several simulation results in a system identification scenario. The unknown impulse response is generated according to h_{ i } = e^{−iτ} r(i), i = 0, ..., M−1, where r(i) is a white Gaussian random sequence with zero mean and variance ${\sigma}_{r}^{2}$ = 0.09 [20]. The length of the impulse response was set to M = 20 and 50 in the simulations. Also, the envelope decay rate τ was set to 0.04. The filter input is a zero-mean i.i.d. Gaussian process with variance ${\sigma}_{x}^{2}=1$. The other input signal is a colored Gaussian signal generated by filtering white Gaussian noise through a first-order autoregressive (AR(1)) system with the transfer function:
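The experimental setup just described can be reproduced with the sketch below. The AR(1) pole value a = 0.9 is an assumption on our part, since the transfer function itself did not survive extraction; the decay h_i = e^{−iτ} r(i) and the 15 dB SNR follow the text.

```python
import numpy as np

def make_experiment(M=20, tau=0.04, sigma_r2=0.09, N=20000, a=0.9, seed=0):
    """System-identification setup of Section 6 (sketch): exponentially
    decaying random impulse response and AR(1)-colored Gaussian input."""
    rng = np.random.default_rng(seed)
    r = np.sqrt(sigma_r2) * rng.standard_normal(M)
    h = np.exp(-tau * np.arange(M)) * r            # h_i = e^{-i*tau} r(i)
    white = rng.standard_normal(N)
    x = np.empty(N)                                # colored AR(1) input
    x[0] = white[0]
    for n in range(1, N):
        x[n] = a * x[n - 1] + white[n]
    d_clean = np.convolve(x, h)[:N]
    noise_var = np.var(d_clean) / 10 ** (15 / 10)  # SNR = 15 dB
    d = d_clean + np.sqrt(noise_var) * rng.standard_normal(N)
    return x, d, h
```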
White zero-mean Gaussian noise was added to the filter output such that the SNR = 15 dB.
In all simulations, the MSD curves are obtained by ensemble averaging over 200 independent trials.
Figure 1 shows the MSD curves of the APA, the VSSAPA of [17], the proposed VSSAPA, ES [19] and the GNGD algorithm [16] for colored Gaussian input and M = 20. The parameter K was set to 4, and different values of μ were used in the APA. As can be seen, increasing the stepsize increases the convergence speed but also raises the steady-state MSD. The VSSAPA of [17] has both fast convergence and low steady-state MSD. The proposed VSSAPA has faster convergence and lower steady-state MSD than both the VSSAPA of [17] and the classical APA.
In Figure 2, we present the MSD curves for colored Gaussian input and M = 50. In this simulation, the parameter K was set to 10. The results are again compared with the APA for different stepsizes and with the VSSAPA proposed in [17]. As can be seen, the proposed VSSAPA combines fast convergence with a low steady-state MSD.
Figure 3 compares the MSD curves of the SPUAPA, the proposed VSSAPA, and the proposed VSSSPUAPA for colored Gaussian input and M = 20. The parameter K and the number of blocks (B) were both set to 4. In this figure, the number of blocks to update (S) was set to 3 for the SPUAPA. Again, different stepsizes were used in the SPUAPA: the first, S/B, is the upper stability bound of the SPUAPA, and the second, 0.05 S/B, is a low value. As can be seen, the convergence speed and steady-state MSD change with the stepsize. The proposed VSSSPUAPA has fast convergence and low steady-state MSD. The VSSSPUAPA is also compared with the proposed VSSAPA; the two proposed algorithms perform similarly, but the computational complexity of the VSSSPUAPA is lower.
Figure 4 presents the MSD curves of the SPUAPA, the proposed VSSSPUAPA, and the VSSAPA for M = 50 and colored Gaussian input. The parameter K and the number of blocks were set to 10 and 5, respectively. In this figure, the parameter S was set to 3, and different stepsizes were used in the SPUAPA. The simulation results show that the VSSSPUAPA performs well compared with the SPUAPA. Also, the proposed VSSAPA and VSSSPUAPA have close performance, but the computational complexity of the VSSSPUAPA is lower.
Figure 5 compares the MSD curves of the SPUNLMS, the VSSNLMS of [21], the proposed VSSSPUNLMS, ES [19] and the GNGD algorithm [16]. The parameters B and S were set to 4 and 2, respectively. Various stepsizes were used for the SPUNLMS. The simulation results show that the proposed VSSSPUNLMS has fast convergence and a low steady-state MSD, with performance close to that of the VSSNLMS of [21]. The same behavior can be seen in Figure 6 for M = 50.
Figure 7 compares the MSD curves of the SRAPA, the proposed VSSSRAPA, and the VSSAPA for a colored Gaussian input signal. The parameter K was set to 4. Again, large and small stepsizes were used in the SRAPA. As can be seen, the proposed VSSSRAPA has good convergence speed and low steady-state MSD, although the proposed VSSAPA performs better. In Figure 8, we present the results for M = 50 and a colored Gaussian input signal; comparing the MSD curves shows that the VSSSRAPA also performs well in this case.
Figure 9 compares the MSD curves of the proposed VSSSPUAPA with S = 2 and S = 3, the proposed VSSAPA, ES [19], the GNGD algorithm [16] and the VSSAPA proposed in [17], with M = 20 and K = 4 for colored Gaussian input. The simulation results show that the proposed VSSAPA performs better than the VSSAPA proposed in [17]. Also, the performance of the proposed VSSSPUAPA with S = 3 is close to that of the proposed VSSAPA.
Figure 10 compares the MSD curves of the proposed VSSSPUNLMS with S = 1, 2 and 3, the proposed VSSNLMS, ES [19], the GNGD algorithm [16], and the VSSNLMS proposed in [17], with M = 20 and K = 4 for colored Gaussian input. This figure shows that the proposed VSSNLMS performs better than the VSSNLMS of [17]. Also, as the parameter S increases, the MSD curves of the VSSSPUNLMS approach that of the VSSNLMS.
Figure 11 presents the MSD curves of the proposed VSSSRAPA with P = 1, 2 and 3, the proposed VSSAPA, ES [19], the GNGD algorithm [16], and the VSSAPA proposed in [17], with M = 20 and K = 4 for colored Gaussian input. This figure shows that the VSSSRAPA performs better than the VSSAPA of [17]. Also, as the parameter P increases, the performance of the VSSSRAPA approaches that of the VSSAPA.
Finally, we examined the tracking performance of the proposed methods in Figure 12. The impulse response variations are simulated by toggling between the impulse response h_{ i } = e^{−iτ} r(i) and its negated version g_{ i } = −e^{−iτ} r(i); the impulse response is changed to g_{ i } at iteration 4000. As can be seen, the proposed VSS algorithms have good tracking performance. The proposed VSSAPA performs better than the other algorithms, and the performance of the proposed VSSSPUAPA with B = 4 and S = 3 is close to that of the proposed VSSAPA.
7. Conclusions
In this paper, we presented novel VSS adaptive filter algorithms, namely the VSSSPUNLMS, VSSAPA, VSSSRAPA and VSSSPUAPA, based on prior knowledge of the channel impulse response statistics. These algorithms exhibit fast convergence while reducing the steady-state MSD compared with the ordinary SPUNLMS, APA, SRAPA and SPUAPA algorithms. The presented algorithms are also computationally efficient. We demonstrated the good performance of the presented VSS adaptive algorithms in a system identification scenario.
References
 1.
Widrow B, Stearns SD: Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall; 1985.
 2.
Haykin S: Adaptive Filter Theory. 4th edition. NJ: Prentice-Hall; 2002.
 3.
Ozeki K, Umeda T: An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electron Commun Jpn 1984, 67-A: 19–27.
 4.
Sayed AH: Fundamentals of Adaptive Filtering. Wiley; 2003.
 5.
Roy S, Shynk JJ: Analysis of data-reusing LMS algorithm. Proc Midwest Symp Circuits Syst 1989, 1127–1130.
 6.
Shin HC, Song WJ, Sayed AH: Mean-square performance of data-reusing adaptive algorithms. IEEE Signal Processing Letters 2005, 12(12):851–854.
 7.
Pradhan SS, Reddy VE: A new approach to subband adaptive filtering. IEEE Trans Signal Processing 1999, 47: 655–664. 10.1109/78.747773
 8.
Lee KA, Gan WS: Improving convergence of the NLMS algorithm using constrained subband updates. IEEE Signal Processing Letters 2004, 11(9):736–739. 10.1109/LSP.2004.833445
 9.
Kwong OW, Johnston ED: A variable step-size LMS algorithm. IEEE Trans Signal Processing 1992, 40(7):1633–1642. 10.1109/78.143435
 10.
Aboulnasr T, Mayyas K: A robust variable step-size LMS-type algorithm: Analysis and simulations. IEEE Trans Signal Processing 1997, 45(3):631–639. 10.1109/78.558478
 11.
Pazaitis DI, Constantinides AG: A novel kurtosis driven variable step-size adaptive algorithm. IEEE Trans Signal Processing 1999, 47(3):864–872. 10.1109/78.747793
 12.
Mathews VJ, Xie Z: A stochastic gradient adaptive filter with gradient adaptive step size. IEEE Trans Signal Processing 1993, 41(6):2075–2087. 10.1109/78.218137
 13.
Krishnamurthy V: Averaged stochastic gradient algorithms for adaptive blind multiuser detection in DS/CDMA systems. IEEE Trans Communications 2000, 48(1):125–134. 10.1109/26.818880
 14.
de Lamare RC, Sampaio-Neto R: Low-complexity blind variable step size mechanisms for minimum variance CDMA receivers. IEEE Transactions on Signal Processing 2006, 54(6):2302–2317.
 15.
Cai Y, de Lamare RC: Low-complexity variable step size mechanism for code-constrained constant modulus stochastic gradient algorithms applied to CDMA interference suppression. IEEE Transactions on Signal Processing 2009, 57(1):313–323.
 16.
Mandic DP: A generalized normalized gradient descent algorithm. IEEE Signal Processing Letters 2004, 11(2):115–118. 10.1109/LSP.2003.821649
 17.
Shin HC, Sayed AH, Song WJ: Variable step-size NLMS and affine projection algorithms. IEEE Signal Processing Letters 2004, 11(2):132–135. 10.1109/LSP.2003.821722
 18.
Abadi MSE, Mehrdad V, Gholipour A: Family of variable step-size affine projection adaptive filtering algorithms with selective regressors and selective partial update. International Journal of Science and Technology, Scientia Iranica 2010, 17(1):81–98.
 19.
Makino S, Kaneda Y, Koizumi N: Exponentially weighted step-size NLMS adaptive filter based on the statistics of a room impulse response. IEEE Transactions on Speech Audio Processing 1993, 1(1):101–108. 10.1109/89.221372
 20.
Li N, Zhang Y, Hao Y, Chambers JA: A new variable step-size NLMS algorithm designed for applications with exponential decay impulse responses. Signal Processing 2008, 88(9):2346–2349. 10.1016/j.sigpro.2008.03.002
 21.
Shi K, Ma X: A variable step-size NLMS algorithm using statistics of channel response. Signal Processing 2010, 90(6):2107–2111. 10.1016/j.sigpro.2010.01.015
 22.
Douglas SC: Analysis and implementation of the max-NLMS adaptive filter. In Proc 29th Asilomar Conf on Signals, Systems, and Computers. Pacific Grove, CA; 1995:659–663.
 23.
Schertler T: Selective block update NLMS type algorithms. Proc IEEE Int Conf on Acoustics, Speech, and Signal Processing, Seattle, WA 1998, 1717–1720.
 24.
Dogancay K, Tanrikulu O: Adaptive filtering algorithms with selective partial updates. IEEE Trans Circuits Syst II: Analog and Digital Signal Processing 2001, 48(8):762–769. 10.1109/82.959866
 25.
Abadi MSE, Husøy JH: Mean-square performance of adaptive filters with selective partial update. Signal Processing 2008, 88(8):2008–2018. 10.1016/j.sigpro.2008.02.005
 26.
Dogancay K: Partial-Update Adaptive Signal Processing: Design, Analysis and Implementation. Academic Press; 2008.
 27.
Hwang KY, Song WJ: An affine projection adaptive filtering algorithm with selective regressors. IEEE Trans Circuits Syst II: Express Briefs 2007, 54: 43–46.
 28.
Abadi MSE, Palangi H: Mean-square performance analysis of the family of selective partial update and selective regressor affine projection algorithms. Signal Processing 2010, 90(1):197–206. 10.1016/j.sigpro.2009.06.013
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Shams Esfand Abadi, M., AbbasZadeh Arani, S.A.A. A family of variable stepsize affine projection adaptive filter algorithms using statistics of channel impulse response. EURASIP J. Adv. Signal Process. 2011, 97 (2011). https://doi.org/10.1186/1687-6180-2011-97
Keywords
 Adaptive filter
 Normalized Least Mean Square
 Affine projection
 Selective partial update
 Selective regressor
 Variable stepsize