
A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

Abstract

This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. The method exploits prior knowledge of the channel impulse response statistics: the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU algorithms, only a subset of the filter coefficients is updated at each iteration, which reduces the computational complexity. In the VSS-SR-APA, an optimal subset of input regressors is selected during adaptation. The presented algorithms combine fast convergence, low steady-state mean square error (MSE), and low computational complexity. We demonstrate their good performance through several simulations in a system identification scenario.

1. Introduction

Adaptive filtering has been, and still is, an area of active research that plays a role in an ever-increasing number of applications, such as noise cancellation, channel estimation, channel equalization and acoustic echo cancellation [1, 2]. The least mean squares (LMS) algorithm and its normalized version (NLMS) are the workhorses of adaptive filtering. In the presence of colored input signals, the LMS and NLMS algorithms converge extremely slowly. To address this problem, a number of adaptive filtering structures based on affine subspace projections [3, 4], data-reusing adaptive algorithms [5, 6], block adaptive filters [2] and multirate techniques [7, 8] have been proposed in the literature. In all these algorithms, the choice of a fixed step-size governs both the convergence rate and the steady-state mean square error (MSE). It is well known that the steady-state MSE decreases as the step-size decreases, while the convergence speed increases as the step-size increases. By optimally selecting the step-size during adaptation, we can obtain both fast convergence and low steady-state MSE. Such selections are based on various criteria. In [9], the squared instantaneous error was used. To improve noise immunity under Gaussian noise, the squared autocorrelation of errors at adjacent times was used in [10], and in [11], the fourth-order cumulant of the instantaneous error was adopted.

In [12], two adaptive step-size gradient adaptive filters were presented. In these algorithms, the step-sizes were adjusted by a gradient descent algorithm designed to minimize the squared estimation error. These algorithms exhibited fast convergence, low steady-state MSE, and good performance in nonstationary environments. The blind adaptive gradient (BAG) algorithm for code-aided suppression of multiple-access interference (MAI) and narrow-band interference (NBI) in direct-sequence/code-division multiple-access (DS/CDMA) systems was presented in [13]. The BAG algorithm is based on the concept of accelerating the convergence of a stochastic gradient algorithm by averaging. The authors showed that BAG has convergence and tracking properties identical to recursive least squares (RLS) but a computational cost similar to the LMS algorithm. In [14], two low-complexity variable step-size mechanisms were proposed for constrained minimum variance (CMV) receivers that operate with stochastic gradient algorithms and are also incorporated in the channel estimation algorithms. A low-complexity variable step-size mechanism for blind code-constrained constant modulus algorithm (CCM) receivers was proposed in [15]; this approach is very useful in nonstationary wireless environments.

In [16], a generalized normalized gradient descent (GNGD) algorithm for linear finite-impulse response (FIR) adaptive filters was introduced. It extends the NLMS algorithm by means of an additional gradient-adaptive term in the denominator of the NLMS learning rate. Simulation results show that the GNGD algorithm is robust to significant variations in the initial values of its parameters.

Important examples of two variable step-size (VSS) versions of the NLMS and the affine projection algorithm (APA) can be found in [17], where the step-size is obtained by minimizing the mean-square deviation (MSD). These algorithms show good performance in both convergence rate and steady-state MSE. The approach was successfully extended to selective partial update (SPU) adaptive filter algorithms in [18].

To improve the performance of adaptive filter algorithms, algorithms based on channel impulse response statistics were proposed in [19, 20]. In [21], a new variable step-size control was proposed for the NLMS algorithm. It uses a step-size vector with a different value for each filter coefficient: based on prior knowledge of the channel impulse response statistics, the optimal step-size vector is obtained by minimizing the MSD between the optimal and estimated filter coefficients.

Another important feature of adaptive filter algorithms is computational complexity. Several fixed step-size adaptive filters, such as the algorithms with selective partial updates, have been proposed to reduce the computational complexity. These algorithms update only a subset of the filter coefficients at each iteration. The Max-NLMS [22], the variants of the SPU-NLMS [23], and the SPU-APA [24, 25] are important examples of this family. Recently, an affine projection algorithm with selective regressors (SR) was also proposed to reduce the computational complexity of the APA [26–28]. In this algorithm, a subset of the recent regressors is optimally selected during adaptation.

In this paper, we extend the approach of [21] to the APA, SPU-NLMS, SPU-APA, and SR-APA, establishing four novel VSS adaptive filter algorithms. These algorithms are computationally efficient. We demonstrate their good performance through several simulations in a system identification scenario and compare them with other algorithms.

What we propose in this paper can be summarized as follows:

  • The establishment of the VSS-APA.

  • The establishment of the VSS-SPU-NLMS.

  • The establishment of the VSS-SPU-APA.

  • The establishment of the VSS-SR-APA.

  • Demonstrating the performance of the proposed algorithms in a system identification scenario.

  • Demonstrating the tracking ability of the proposed algorithms.

We have organized our paper as follows. In Section 2, the NLMS and SPU-NLMS algorithms are briefly reviewed. Then, the family of APA, SR-APA and SPU-APA is presented, and the family of variable step-size adaptive filters is established. Next, the computational complexity of the VSS adaptive filters is discussed. Finally, before concluding the paper, we demonstrate the usefulness of these algorithms through several experimental results.

Throughout the paper, the following notations are adopted:

$|\cdot|$ absolute value of a scalar

$\|\cdot\|^2$ squared Euclidean norm of a vector

$(\cdot)^T$ transpose of a vector or a matrix

$\mathrm{tr}(\cdot)$ trace of a matrix

$E[\cdot]$ expectation operator

2. Background on the family of NLMS algorithms

2-1. Background on NLMS

The output y(n) of an adaptive filter at time n is given by

$y(n) = \mathbf{w}^T(n)\mathbf{X}(n)$
(1)

where $\mathbf{w}(n) = [w_0(n), w_1(n), \ldots, w_{M-1}(n)]^T$ is the M × 1 filter coefficient vector and $\mathbf{X}(n) = [x(n), x(n-1), \ldots, x(n-M+1)]^T$ is the M × 1 input signal vector. The NLMS algorithm is derived from the solution of the following constrained minimization problem [1]:

$\min_{\mathbf{w}(n+1)} \|\mathbf{w}(n+1) - \mathbf{w}(n)\|^2 \quad \text{subject to} \quad \mathbf{w}^T(n+1)\mathbf{X}(n) = d(n)$
(2)

where d(n) is the desired signal. The resulting NLMS algorithm is given by the recursion

$\mathbf{w}(n+1) = \mathbf{w}(n) + \frac{\mu}{\|\mathbf{X}(n)\|^2}\,\mathbf{X}(n)\,e(n)$
(3)

where μ is the step-size, introduced to control the convergence speed (0 < μ < 2), and e(n) is the output error signal, defined as

$e(n) = d(n) - y(n)$
(4)
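To make the recursion concrete, the following is a minimal NumPy sketch of one NLMS iteration, Eqs. (1)-(4). The regularizer `eps` is our addition to guard against division by zero; it is not part of the algorithm as stated.

```python
import numpy as np

def nlms_update(w, x, d, mu=1.0, eps=1e-8):
    """One NLMS iteration; w and x are M-dimensional vectors."""
    y = w @ x                             # filter output, Eq. (1)
    e = d - y                             # output error, Eq. (4)
    w = w + (mu / (x @ x + eps)) * x * e  # normalized update, Eq. (3)
    return w, e
```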

2-2. Selective Partial Update NLMS (SPU-NLMS)

By partitioning the filter coefficient and input signal vectors into B blocks, each of length L (note that B = M/L is assumed to be an integer), $\mathbf{X}(n) = [\mathbf{x}_1(n), \mathbf{x}_2(n), \ldots, \mathbf{x}_B(n)]^T$ and $\mathbf{w}(n) = [\mathbf{w}_1(n), \mathbf{w}_2(n), \ldots, \mathbf{w}_B(n)]^T$, the SPU-NLMS algorithm, which updates S out of B blocks at every iteration, solves the following optimization problem:

$\min_{\mathbf{w}_F(n+1)} \|\mathbf{w}_F(n+1) - \mathbf{w}_F(n)\|^2 \quad \text{subject to} \quad \mathbf{w}^T(n+1)\mathbf{X}(n) = d(n)$
(5)

where $F = \{j_1, j_2, \ldots, j_S\}$ denotes the indices of the S blocks out of B blocks that are updated at each adaptation and $\mathbf{w}_F(n) = [\mathbf{w}_{j_1}(n), \mathbf{w}_{j_2}(n), \ldots, \mathbf{w}_{j_S}(n)]^T$ [24]. Again using the method of Lagrange multipliers, and defining $\mathbf{X}_F(n) = [\mathbf{x}_{j_1}(n), \mathbf{x}_{j_2}(n), \ldots, \mathbf{x}_{j_S}(n)]^T$, the update equation for SPU-NLMS is given by:

$\mathbf{w}_F(n+1) = \mathbf{w}_F(n) + \frac{\mu}{\|\mathbf{X}_F(n)\|^2}\,\mathbf{X}_F(n)\,e(n)$, where the indices of F correspond to the S largest values of $\|\mathbf{x}_j(n)\|^2$ for $1 \le j \le B$.
(6)

For B = M, L = 1, and S = 1, the SPU-NLMS algorithm in (6) reduces to

$w_i(n+1) = w_i(n) + \mu\, x(n-i)\, e(n), \quad i = \arg\max_{0 \le j \le M-1} |x(n-j)|$
(7)

which is the Max-NLMS algorithm [22]. For S = B, i.e., when all blocks are updated, the SPU-NLMS algorithm becomes identical to the NLMS algorithm in (3).
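A sketch of one SPU-NLMS iteration, Eq. (6), assuming the blocks are formed from consecutive coefficients; `eps` is again an added regularizer.

```python
import numpy as np

def spu_nlms_update(w, x, d, B, S, mu=1.0, eps=1e-8):
    """One SPU-NLMS iteration; w, x have length M = B*L."""
    L = len(w) // B
    e = d - w @ x
    energies = (x.reshape(B, L) ** 2).sum(axis=1)  # ||x_j(n)||^2, 1 <= j <= B
    F = np.argsort(energies)[-S:]                  # S blocks of largest energy
    idx = (F[:, None] * L + np.arange(L)).ravel()  # coefficient indices in F
    xF = x[idx]
    w[idx] += (mu / (xF @ xF + eps)) * xF * e      # update selected blocks only
    return w, e
```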

3. Background on APA, SR-APA and SPU-APA

3-1. Affine projection algorithm (APA)

Now, define the M × K input signal matrix and the K × 1 desired signal vector as:

$\mathbf{X}(n) = \begin{bmatrix} x(n) & \cdots & x(n-K+1) \\ \vdots & \ddots & \vdots \\ x(n-M+1) & \cdots & x(n-K-M+2) \end{bmatrix}$
(8)
$\mathbf{d}(n) = [d(n), \ldots, d(n-K+1)]^T$
(9)

where K is a positive integer (usually, but not necessarily, K ≤ M). The family of APA can be established by minimizing (2) subject to $\mathbf{d}(n) = \mathbf{X}^T(n)\mathbf{w}(n+1)$. Again, using the method of Lagrange multipliers, the filter update equation for the family of APA is given by:

$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,\mathbf{X}(n)\left[\mathbf{X}^T(n)\mathbf{X}(n)\right]^{-1}\mathbf{e}(n)$
(10)

where $\mathbf{e}(n) = [e(n), e(n-1), \ldots, e(n-K+1)]^T$ is the output error vector, defined as:

$\mathbf{e}(n) = \mathbf{d}(n) - \mathbf{X}^T(n)\mathbf{w}(n)$
(11)
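The APA recursion (10)-(11) in the same sketch style; a small diagonal loading `eps` is our addition for numerical safety when inverting the K × K matrix.

```python
import numpy as np

def apa_update(w, X, d, mu=1.0, eps=1e-8):
    """One APA iteration; X is the M x K matrix (8), d the K-vector (9)."""
    K = X.shape[1]
    e = d - X.T @ w                         # error vector, Eq. (11)
    G = X.T @ X + eps * np.eye(K)           # regularized Gram matrix
    w = w + mu * (X @ np.linalg.solve(G, e))  # Eq. (10)
    return w, e
```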

3-2. Selective Regressor APA (SR-APA)

In [26], an affine projection algorithm with selective regressors, called SR-APA, was presented. In this section, we briefly review it. The SR-APA minimizes (2) subject to:

$\mathbf{d}_G(n) = \mathbf{X}_G^T(n)\mathbf{w}(n+1)$
(12)

where $G = \{i_1, i_2, \ldots, i_P\}$ denotes a P-member subset of the set $\{0, 1, \ldots, K-1\}$,

$\mathbf{X}_G(n) = \begin{bmatrix} x(n-i_1) & \cdots & x(n-i_P) \\ \vdots & \ddots & \vdots \\ x(n-M+1-i_1) & \cdots & x(n-M+1-i_P) \end{bmatrix}$
(13)

is the M × P matrix of the input signal and:

$\mathbf{d}_G(n) = [d(n-i_1), \ldots, d(n-i_P)]^T$
(14)

is the P × 1 vector of the desired signal. Using the method of Lagrange multipliers to solve this optimization problem leads to the following update equation:

$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,\mathbf{X}_G(n)\left[\mathbf{X}_G^T(n)\mathbf{X}_G(n)\right]^{-1}\mathbf{e}_G(n)$
(15)

where

$\mathbf{e}_G(n) = \mathbf{d}_G(n) - \mathbf{X}_G^T(n)\mathbf{w}(n)$
(16)

The indices of G are obtained by the following procedure (a sketch combining the update with this selection appears after the list):

  1. Compute the following values for 0 ≤ i ≤ K - 1:

     $\frac{e^2(n-i)}{\|\mathbf{x}(n-i)\|^2}$
     (17)

  2. The indices of G correspond to the P largest values of (17).
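A sketch of one SR-APA iteration combining the update (15)-(16) with the selection rule (17); column i of X holds the regressor x(n-i), and `eps` is an added regularizer.

```python
import numpy as np

def sr_apa_update(w, X, d, P, mu=1.0, eps=1e-8):
    """One SR-APA iteration; X is M x K, d is the K-vector."""
    e = d - X.T @ w                            # errors e(n-i), 0 <= i <= K-1
    crit = e**2 / ((X**2).sum(axis=0) + eps)   # selection values, Eq. (17)
    G = np.argsort(crit)[-P:]                  # indices of the P largest values
    XG, eG = X[:, G], e[G]
    A = XG.T @ XG + eps * np.eye(P)
    w = w + mu * (XG @ np.linalg.solve(A, eG))  # Eq. (15)
    return w, e
```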

3-3. Selective Partial Update APA (SPU-APA)

The SPU-APA solves the following optimization problem [24]:

$\min_{\mathbf{w}_F(n+1)} \|\mathbf{w}_F(n+1) - \mathbf{w}_F(n)\|^2 \quad \text{subject to} \quad \mathbf{X}^T(n)\mathbf{w}(n+1) = \mathbf{d}(n)$
(18)

where $F = \{j_1, j_2, \ldots, j_S\}$ denotes the indices of the S blocks out of B blocks that are updated at each adaptation. Again, using the Lagrange multiplier approach, the filter update equation is given by:

$\mathbf{w}_F(n+1) = \mathbf{w}_F(n) + \mu\,\mathbf{X}_F(n)\left[\mathbf{X}_F^T(n)\mathbf{X}_F(n)\right]^{-1}\mathbf{e}(n)$
(19)

where

$\mathbf{X}_F(n) = \left[\mathbf{X}_{j_1}^T(n), \mathbf{X}_{j_2}^T(n), \ldots, \mathbf{X}_{j_S}^T(n)\right]^T$
(20)

is the SL × K matrix and:

$\mathbf{X}_i(n) = [\mathbf{x}_i(n), \mathbf{x}_i(n-1), \ldots, \mathbf{x}_i(n-K+1)]$
(21)

is the L × K matrix. The indices of F are obtained by the following procedure (see the sketch after this list):

  1. Compute the following values for 1 ≤ i ≤ B:

     $\mathrm{tr}\!\left[\mathbf{X}_i^T(n)\mathbf{X}_i(n)\right]$
     (22)

  2. The indices of F correspond to the S largest values of (22).
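Likewise, a sketch of one SPU-APA iteration, Eqs. (19)-(22), with consecutive-coefficient blocks assumed and `eps` added for numerical safety:

```python
import numpy as np

def spu_apa_update(w, X, d, B, S, mu=1.0, eps=1e-8):
    """One SPU-APA iteration; X is M x K with M = B*L."""
    M, K = X.shape
    L = M // B
    e = d - X.T @ w
    crit = (X.reshape(B, L, K) ** 2).sum(axis=(1, 2))  # tr[X_i^T X_i], Eq. (22)
    F = np.argsort(crit)[-S:]                          # S largest values
    idx = (F[:, None] * L + np.arange(L)).ravel()
    XF = X[idx]                                        # SL x K matrix, Eq. (20)
    A = XF.T @ XF + eps * np.eye(K)
    w[idx] += mu * (XF @ np.linalg.solve(A, e))        # Eq. (19)
    return w, e
```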

4. VSS-NLMS and the proposed VSS Adaptive Filter Algorithms

4-1. Variable Step-Size NLMS algorithm using statistics of channel response

Consider a linear system whose input signal X(n) and desired signal d(n) are related by

$d(n) = \mathbf{h}^T\mathbf{X}(n) + v(n)$
(23)

where $\mathbf{h} = [h_0, h_1, \ldots, h_{M-1}]^T$ is the true unknown system with memory length M, $\mathbf{X}(n) = [x(n), \ldots, x(n-M+1)]^T$ is the system input vector, and v(n) is the additive noise.

The filter coefficients of VSS-NLMS are updated by [21]

$\mathbf{w}(n+1) = \mathbf{w}(n) + \frac{\mathbf{U}(n)\,\mathbf{X}(n)\,e(n)}{\|\mathbf{X}(n)\|^2}$
(24)

where the step-size matrix is defined as $\mathbf{U}(n) = \mathrm{diag}[\mu_0(n), \ldots, \mu_{M-1}(n)]$.

To quantitatively evaluate the misadjustment of the filter coefficients, the MSD is taken as the figure of merit, defined as

$\Lambda(n) = E\|\tilde{\mathbf{w}}(n)\|^2$
(25)

where the weight error vector is given by

$\tilde{\mathbf{w}}(n) = \mathbf{w}(n) - \mathbf{h}$
(26)

Note that at each iteration the MSD depends on $\mu_i(n)$. Assuming an independent and identically distributed (i.i.d.) input sequence, we have

$\Lambda(n+1) \approx \left(1 + \frac{\mathrm{tr}[\mathbf{U}^2(n)]}{M^2}\right) E\|\tilde{\mathbf{w}}(n)\|^2 - \frac{2}{M}\,E\!\left[\tilde{\mathbf{w}}^T(n)\mathbf{U}(n)\tilde{\mathbf{w}}(n)\right] + \frac{\mathrm{tr}[\mathbf{U}^2(n)]\,\sigma_v^2}{M^2\sigma_x^2}$
(27)

The optimal step-size is obtained by minimizing the MSD at each iteration. Taking the first-order partial derivative of Λ(n+1) with respect to $\mu_i(n)$, $i = 0, \ldots, M-1$, and setting it to zero, we obtain

$\mu_i(n) = \frac{M\,E[\tilde{w}_i^2(n)]}{E\|\tilde{\mathbf{w}}(n)\|^2 + \sigma_v^2/\sigma_x^2}$
(28)

and

$E[\tilde{w}_i^2(n+1)] = \left(1 - \frac{2\mu_i(n)}{M}\right)E[\tilde{w}_i^2(n)] + \frac{\mu_i^2(n)}{M^2\sigma_x^2}\,E[e^2(n)]$
(29)

We can estimate $E[e^2(n)]$ by a moving average of $e^2(n)$:

$\hat{\sigma}_e^2(n) = \lambda\,\hat{\sigma}_e^2(n-1) + (1-\lambda)\,e^2(n)$
(30)

where 0 < λ < 1 is the forgetting factor. The initial value $E[\tilde{w}_i^2(0)]$ is given by the second-order statistics of the channel impulse response, i.e., $E[h_i^2]$.
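The complete VSS-NLMS recursion of [21], Eqs. (24) and (28)-(30), can then be sketched as follows; the prior `Eh2` (the vector of $E[h_i^2]$ values) and the variances $\sigma_v^2$, $\sigma_x^2$ are assumed known, and `eps` is an added regularizer.

```python
import numpy as np

def vss_nlms(x_sig, d_sig, M, Eh2, var_v, var_x, lam=0.99, eps=1e-8):
    """VSS-NLMS with per-coefficient step-sizes from channel statistics."""
    w = np.zeros(M)
    Ew2 = Eh2.copy()          # E[w~_i^2(0)] initialized from E[h_i^2]
    sig_e2 = 0.0
    for n in range(M - 1, len(x_sig)):
        x = x_sig[n - M + 1:n + 1][::-1]                   # X(n)
        e = d_sig[n] - w @ x
        sig_e2 = lam * sig_e2 + (1 - lam) * e**2           # Eq. (30)
        mu = M * Ew2 / (Ew2.sum() + var_v / var_x + eps)   # Eq. (28)
        w += mu * x * e / (x @ x + eps)                    # Eq. (24)
        Ew2 = (1 - 2 * mu / M) * Ew2 + mu**2 / (M**2 * var_x) * sig_e2  # Eq. (29)
    return w
```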

4-2. Variable Step-Size Selective Partial Update NLMS algorithm using statistics of channel response

The filter coefficients in the VSS-SPU-NLMS are updated by

$\mathbf{w}_F(n+1) = \mathbf{w}_F(n) + \frac{\mathbf{U}_F(n)}{\|\mathbf{X}_F(n)\|^2}\,\mathbf{X}_F(n)\,e(n)$
(31)

where $\mathbf{U}_F(n) = \mathrm{diag}[\mu_{j_1}(n), \mu_{j_2}(n), \ldots, \mu_{j_S}(n)]$.

Approximating e(n) with

$e(n) \approx \mathbf{X}_F^T(n)\left[\mathbf{h}_F - \mathbf{w}_F(n)\right] + v(n) = -\mathbf{X}_F^T(n)\,\tilde{\mathbf{w}}_F(n) + v(n)$
(32)

where $\mathbf{h}_F = [\mathbf{h}_{j_1}, \mathbf{h}_{j_2}, \ldots, \mathbf{h}_{j_S}]^T$,

and substituting (32) into (31), we obtain

$\mathbf{w}_F(n+1) = \mathbf{w}_F(n) + \frac{\mathbf{U}_F(n)\mathbf{X}_F(n)\left[-\mathbf{X}_F^T(n)\tilde{\mathbf{w}}_F(n) + v(n)\right]}{\|\mathbf{X}_F(n)\|^2}$
(33)

and

$\tilde{\mathbf{w}}_F(n+1) = \mathbf{Q}(n)\,\tilde{\mathbf{w}}_F(n) + \frac{\mathbf{U}_F(n)\mathbf{X}_F(n)\,v(n)}{\|\mathbf{X}_F(n)\|^2}$
(34)

where

$\mathbf{Q}(n) = \mathbf{I}_{SL} - \frac{\mathbf{U}_F(n)\mathbf{X}_F(n)\mathbf{X}_F^T(n)}{\|\mathbf{X}_F(n)\|^2}$
(35)

and I SL is the SL × SL identity matrix.

To obtain the MSD, we can write

$\Lambda(n) = E\|\tilde{\mathbf{w}}(n)\|^2 = E\|\tilde{\mathbf{w}}_F(n)\|^2 + E\|\tilde{\mathbf{w}}_{\bar{F}}(n)\|^2$
(36)

where $\tilde{\mathbf{w}}_{\bar{F}}(n)$ collects the weight errors of the coefficients that are not selected for update, so that

$\tilde{\mathbf{w}}_{\bar{F}}(n+1) = \tilde{\mathbf{w}}_{\bar{F}}(n)$
(37)

Combining (34), (36) and (37) we have

$\Lambda(n+1) = E\!\left[\tilde{\mathbf{w}}_F^T(n)\mathbf{Q}^T(n)\mathbf{Q}(n)\tilde{\mathbf{w}}_F(n)\right] + \frac{\sigma_v^2}{(SL)^2\sigma_x^2}\,\mathrm{tr}\!\left[\mathbf{U}_F^2(n)\right] + E\|\tilde{\mathbf{w}}_{\bar{F}}(n)\|^2$
(38)

From (35), we can write

$E\!\left[\mathbf{Q}^T(n)\mathbf{Q}(n)\right] = \left(1 + \frac{1}{(SL)^2}\mathrm{tr}\!\left[\mathbf{U}_F^2(n)\right]\right)\mathbf{I}_{SL} - \frac{2}{SL}\,\mathbf{U}_F(n)$
(39)

Combining (38) and (39), we get

$\Lambda(n+1) = \left(1 + \frac{1}{(SL)^2}\mathrm{tr}\!\left[\mathbf{U}_F^2(n)\right]\right)E\|\tilde{\mathbf{w}}_F(n)\|^2 - \frac{2}{SL}\,E\!\left[\tilde{\mathbf{w}}_F^T(n)\mathbf{U}_F(n)\tilde{\mathbf{w}}_F(n)\right] + \frac{\sigma_v^2}{(SL)^2\sigma_x^2}\mathrm{tr}\!\left[\mathbf{U}_F^2(n)\right] + E\|\tilde{\mathbf{w}}_{\bar{F}}(n)\|^2$
(40)

Taking the first-order partial derivative of Λ(n+1) with respect to $\mu_j(n)$ and setting it to zero for j ∈ F, we have

$\frac{\partial\Lambda(n+1)}{\partial\mu_j} = \frac{2}{(SL)^2}\mu_j(n)E\|\tilde{\mathbf{w}}_F(n)\|^2 - \frac{2}{SL}E[\tilde{w}_j^2(n)] + \frac{2}{(SL)^2}\frac{\sigma_v^2}{\sigma_x^2}\mu_j(n) = 0$
(41)

Therefore,

$\mu_j(n) = \frac{SL\,E[\tilde{w}_j^2(n)]}{E\|\tilde{\mathbf{w}}_F(n)\|^2 + \sigma_v^2/\sigma_x^2}$
(42)

To update $E[\tilde{w}_j^2(n)]$, we use the following equation, obtained by taking the mean square of the j-th entry in (34):

$E[\tilde{w}_j^2(n+1)] = \left(1 - \frac{2}{SL}\mu_j(n)\right)E[\tilde{w}_j^2(n)] + \frac{1}{(SL)^2}\mu_j^2(n)E\|\tilde{\mathbf{w}}_F(n)\|^2 + \frac{1}{(SL)^2}\frac{\sigma_v^2}{\sigma_x^2}\mu_j^2(n)$
(43)

From (32), we can write

$E[e^2(n)] \approx \sigma_x^2\,E\|\tilde{\mathbf{w}}_F(n)\|^2 + \sigma_v^2$
(44)

and

$E[\tilde{w}_j^2(n+1)] = \left(1 - \frac{2}{SL}\mu_j(n)\right)E[\tilde{w}_j^2(n)] + \frac{\mu_j^2(n)}{(SL)^2\sigma_x^2}\,E[e^2(n)]$
(45)

Also, $E[e^2(n)]$ is estimated according to (30).
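In code, the step-size computation (42) and the recursion (45) act only on the selected entries. A sketch with hypothetical helper arguments; `idxF` holds the coefficient indices in F and `eps` is an added regularizer:

```python
import numpy as np

def vss_spu_step(Ew2, idxF, SL, var_v, var_x, sig_e2, eps=1e-8):
    """Step-sizes (42) and MSD recursion (45) for the selected coefficients."""
    Ew2_F = Ew2[idxF]
    mu = SL * Ew2_F / (Ew2_F.sum() + var_v / var_x + eps)   # Eq. (42)
    Ew2[idxF] = (1 - 2 * mu / SL) * Ew2_F \
                + mu**2 / (SL**2 * var_x) * sig_e2          # Eq. (45)
    return mu, Ew2
```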

4-3. Variable Step-Size APA using statistics of channel response

Suppose X(n) and d(n) are defined as in Section 3-1, and

$\mathbf{v}(n) = [v(n), \ldots, v(n-K+1)]^T$
(46)

is the noise vector; the input signal matrix X(n) and the desired signal vector d(n) are related by

$\mathbf{d}(n) = \mathbf{X}^T(n)\mathbf{h} + \mathbf{v}(n)$
(47)

The filter coefficients in VSS-APA are updated by

$\mathbf{w}(n+1) = \mathbf{w}(n) + \mathbf{U}(n)\mathbf{X}(n)\left[\mathbf{X}^T(n)\mathbf{X}(n)\right]^{-1}\mathbf{e}(n)$
(48)

Combining (11) and (47), we rewrite the estimation error signal in (11) as

$\mathbf{e}(n) = -\mathbf{X}^T(n)\,\tilde{\mathbf{w}}(n) + \mathbf{v}(n)$
(49)

Substituting (49) into (48), we obtain

$\tilde{\mathbf{w}}(n+1) = \mathbf{Q}(n)\,\tilde{\mathbf{w}}(n) + \mathbf{U}(n)\mathbf{X}(n)\left[\mathbf{X}^T(n)\mathbf{X}(n)\right]^{-1}\mathbf{v}(n)$
(50)

where

$\mathbf{Q}(n) = \mathbf{I}_M - \mathbf{U}(n)\mathbf{X}(n)\left[\mathbf{X}^T(n)\mathbf{X}(n)\right]^{-1}\mathbf{X}^T(n)$
(51)

and I M is the M × M identity matrix.

The MSD is defined as in (25); combining it with (50), we have

$\Lambda(n+1) = E\!\left[\tilde{\mathbf{w}}^T(n)\mathbf{Q}^T(n)\mathbf{Q}(n)\tilde{\mathbf{w}}(n)\right] + \frac{K\sigma_v^2}{M^2\sigma_x^2}\,\mathrm{tr}\!\left[\mathbf{U}^2(n)\right]$
(52)

Similar to [21], we assume that the entries of X(n) and v(n) are zero-mean i.i.d. sequences with variances $\sigma_x^2$ and $\sigma_v^2$, respectively, and that $\tilde{\mathbf{w}}(n)$, X(n) and v(n) are mutually independent. Therefore, we obtain from (51):

$E\!\left[\mathbf{Q}^T(n)\mathbf{Q}(n)\right] = \left(1 + \frac{K}{M^2}\mathrm{tr}\!\left[\mathbf{U}^2(n)\right]\right)\mathbf{I}_M - \frac{2K}{M}\,\mathbf{U}(n)$
(53)

Combining (52) and (53), we get

$\Lambda(n+1) = \left(1 + \frac{K}{M^2}\mathrm{tr}\!\left[\mathbf{U}^2(n)\right]\right)E\|\tilde{\mathbf{w}}(n)\|^2 - \frac{2K}{M}\,E\!\left[\tilde{\mathbf{w}}^T(n)\mathbf{U}(n)\tilde{\mathbf{w}}(n)\right] + \frac{K\sigma_v^2}{M^2\sigma_x^2}\mathrm{tr}\!\left[\mathbf{U}^2(n)\right]$
(54)

The optimal step-size is obtained by minimizing the MSD at each iteration. Taking the first-order partial derivative of Λ(n+1) with respect to $\mu_i(n)$, $i = 0, \ldots, M-1$, and setting it to zero, we obtain

$\mu_i(n) = \frac{M\,E[\tilde{w}_i^2(n)]}{E\|\tilde{\mathbf{w}}(n)\|^2 + \sigma_v^2/\sigma_x^2}$
(55)

To update $E[\tilde{w}_i^2(n)]$, we use the following equation, obtained by taking the mean square of the i-th entry in (50):

$E[\tilde{w}_i^2(n+1)] = \left(1 - \frac{2K}{M}\mu_i(n)\right)E[\tilde{w}_i^2(n)] + \frac{K}{M^2}\mu_i^2(n)E\|\tilde{\mathbf{w}}(n)\|^2 + \frac{K}{M^2}\frac{\sigma_v^2}{\sigma_x^2}\mu_i^2(n)$
(56)

We obtain from (49)

$E\|\mathbf{e}(n)\|^2 = K\sigma_x^2\,E\|\tilde{\mathbf{w}}(n)\|^2 + K\sigma_v^2$
(57)

Substitution of (57) into (56) leads to

$E[\tilde{w}_i^2(n+1)] = \left(1 - \frac{2K}{M}\mu_i(n)\right)E[\tilde{w}_i^2(n)] + \frac{\mu_i^2(n)}{M^2\sigma_x^2}\,E\|\mathbf{e}(n)\|^2$
(58)

It is straightforward to estimate E[||e(n)||2] by a moving average of ||e(n)||2:

$\hat{\sigma}_e^2(n) = \lambda\,\hat{\sigma}_e^2(n-1) + (1-\lambda)\,\|\mathbf{e}(n)\|^2$
(59)
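One VSS-APA iteration, Eqs. (48), (55) and (58)-(59), sketched under the same assumptions (known $\sigma_v^2$, $\sigma_x^2$; added regularizer `eps`):

```python
import numpy as np

def vss_apa_update(w, X, d, Ew2, sig_e2, var_v, var_x, lam=0.99, eps=1e-8):
    """One VSS-APA iteration; X is M x K, Ew2 holds E[w~_i^2(n)]."""
    M, K = X.shape
    e = d - X.T @ w
    sig_e2 = lam * sig_e2 + (1 - lam) * (e @ e)         # Eq. (59)
    mu = M * Ew2 / (Ew2.sum() + var_v / var_x + eps)    # Eq. (55)
    A = X.T @ X + eps * np.eye(K)
    w = w + mu * (X @ np.linalg.solve(A, e))            # Eq. (48)
    Ew2 = (1 - 2 * K * mu / M) * Ew2 \
          + mu**2 / (M**2 * var_x) * sig_e2             # Eq. (58)
    return w, Ew2, sig_e2
```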

4-4. Variable Step-Size Selective Regressor AP algorithm using statistics of channel response

The filter coefficients in VSS-SR-APA are updated by

$\mathbf{w}(n+1) = \mathbf{w}(n) + \mathbf{U}(n)\mathbf{X}_G(n)\left[\mathbf{X}_G^T(n)\mathbf{X}_G(n)\right]^{-1}\mathbf{e}_G(n)$
(60)

Assuming $\mathbf{d}_G(n) = \mathbf{X}_G^T(n)\mathbf{h} + \mathbf{v}_G(n)$ and combining it with (16), we have

$\mathbf{e}_G(n) = \mathbf{X}_G^T(n)\left[\mathbf{h} - \mathbf{w}(n)\right] + \mathbf{v}_G(n) = -\mathbf{X}_G^T(n)\,\tilde{\mathbf{w}}(n) + \mathbf{v}_G(n)$
(61)

where

$\mathbf{v}_G(n) = [v(n-i_1), \ldots, v(n-i_P)]^T$
(62)

Substituting (61) into (60), we obtain

$\mathbf{w}(n+1) = \mathbf{w}(n) + \mathbf{U}(n)\mathbf{X}_G(n)\left[\mathbf{X}_G^T(n)\mathbf{X}_G(n)\right]^{-1}\left[-\mathbf{X}_G^T(n)\tilde{\mathbf{w}}(n) + \mathbf{v}_G(n)\right]$
(63)

and

$\tilde{\mathbf{w}}(n+1) = \mathbf{Q}(n)\,\tilde{\mathbf{w}}(n) + \mathbf{U}(n)\mathbf{X}_G(n)\left[\mathbf{X}_G^T(n)\mathbf{X}_G(n)\right]^{-1}\mathbf{v}_G(n)$
(64)

where

$\mathbf{Q}(n) = \mathbf{I}_M - \mathbf{U}(n)\mathbf{X}_G(n)\left[\mathbf{X}_G^T(n)\mathbf{X}_G(n)\right]^{-1}\mathbf{X}_G^T(n)$
(65)

Combining the MSD definition (25) with (64), we have

$\Lambda(n+1) = E\!\left[\tilde{\mathbf{w}}^T(n)\mathbf{Q}^T(n)\mathbf{Q}(n)\tilde{\mathbf{w}}(n)\right] + \frac{P\sigma_v^2}{M^2\sigma_x^2}\,\mathrm{tr}\!\left[\mathbf{U}^2(n)\right]$
(66)

From (65), we can write

$E\!\left[\mathbf{Q}^T(n)\mathbf{Q}(n)\right] = \left(1 + \frac{P}{M^2}\mathrm{tr}\!\left[\mathbf{U}^2(n)\right]\right)\mathbf{I}_M - \frac{2P}{M}\,\mathbf{U}(n)$
(67)

Combining (66) and (67), we get

$\Lambda(n+1) = \left(1 + \frac{P}{M^2}\mathrm{tr}\!\left[\mathbf{U}^2(n)\right]\right)E\|\tilde{\mathbf{w}}(n)\|^2 - \frac{2P}{M}\,E\!\left[\tilde{\mathbf{w}}^T(n)\mathbf{U}(n)\tilde{\mathbf{w}}(n)\right] + \frac{P\sigma_v^2}{M^2\sigma_x^2}\mathrm{tr}\!\left[\mathbf{U}^2(n)\right]$
(68)

Taking the first-order partial derivative of Λ(n+1) with respect to $\mu_i(n)$, $i = 0, \ldots, M-1$, and setting it to zero, we have

$\frac{\partial\Lambda(n+1)}{\partial\mu_i} = \frac{2P}{M^2}\mu_i(n)E\|\tilde{\mathbf{w}}(n)\|^2 - \frac{2P}{M}E[\tilde{w}_i^2(n)] + \frac{2P}{M^2}\frac{\sigma_v^2}{\sigma_x^2}\mu_i(n) = 0$
(69)

Therefore,

$\mu_i(n) = \frac{M\,E[\tilde{w}_i^2(n)]}{E\|\tilde{\mathbf{w}}(n)\|^2 + \sigma_v^2/\sigma_x^2}$
(70)

To update $E[\tilde{w}_i^2(n)]$, we use the following equation, obtained by taking the mean square of the i-th entry in (64):

$E[\tilde{w}_i^2(n+1)] = \left(1 - \frac{2P}{M}\mu_i(n)\right)E[\tilde{w}_i^2(n)] + \frac{P}{M^2}\mu_i^2(n)E\|\tilde{\mathbf{w}}(n)\|^2 + \frac{P}{M^2}\frac{\sigma_v^2}{\sigma_x^2}\mu_i^2(n)$
(71)

From (61), we can write

$E\|\mathbf{e}_G(n)\|^2 = P\sigma_x^2\,E\|\tilde{\mathbf{w}}(n)\|^2 + P\sigma_v^2$
(72)

Therefore,

$E[\tilde{w}_i^2(n+1)] = \left(1 - \frac{2P}{M}\mu_i(n)\right)E[\tilde{w}_i^2(n)] + \frac{\mu_i^2(n)}{M^2\sigma_x^2}\,E\|\mathbf{e}_G(n)\|^2$
(73)

It is straightforward to estimate E[||e G (n)||2] by a moving average of ||e G (n)||2:

$\hat{\sigma}_{e_G}^2(n) = \lambda\,\hat{\sigma}_{e_G}^2(n-1) + (1-\lambda)\,\|\mathbf{e}_G(n)\|^2$
(74)

4-5. Variable Step-Size Selective Partial Update AP algorithm using statistics of channel response

The filter coefficients in VSS-SPU-APA are updated by

$\mathbf{w}_F(n+1) = \mathbf{w}_F(n) + \mathbf{U}_F(n)\mathbf{X}_F(n)\left[\mathbf{X}_F^T(n)\mathbf{X}_F(n)\right]^{-1}\mathbf{e}(n)$
(75)

Approximating e(n) with

$\mathbf{e}(n) \approx \mathbf{X}_F^T(n)\left[\mathbf{h}_F - \mathbf{w}_F(n)\right] + \mathbf{v}(n) \approx -\mathbf{X}_F^T(n)\,\tilde{\mathbf{w}}_F(n) + \mathbf{v}(n)$
(76)

and substituting (76) into (75), we obtain

$\mathbf{w}_F(n+1) = \mathbf{w}_F(n) + \mathbf{U}_F(n)\mathbf{X}_F(n)\left[\mathbf{X}_F^T(n)\mathbf{X}_F(n)\right]^{-1}\left[-\mathbf{X}_F^T(n)\tilde{\mathbf{w}}_F(n) + \mathbf{v}(n)\right]$
(77)

and

$\tilde{\mathbf{w}}_F(n+1) = \mathbf{Q}(n)\,\tilde{\mathbf{w}}_F(n) + \mathbf{U}_F(n)\mathbf{X}_F(n)\left[\mathbf{X}_F^T(n)\mathbf{X}_F(n)\right]^{-1}\mathbf{v}(n)$
(78)

where

$\mathbf{Q}(n) = \mathbf{I}_{SL} - \mathbf{U}_F(n)\mathbf{X}_F(n)\left[\mathbf{X}_F^T(n)\mathbf{X}_F(n)\right]^{-1}\mathbf{X}_F^T(n)$
(79)

Combining the MSD in (36) with (78), we have

$\Lambda(n+1) = E\!\left[\tilde{\mathbf{w}}_F^T(n)\mathbf{Q}^T(n)\mathbf{Q}(n)\tilde{\mathbf{w}}_F(n)\right] + \frac{K\sigma_v^2}{(SL)^2\sigma_x^2}\,\mathrm{tr}\!\left[\mathbf{U}_F^2(n)\right] + E\|\tilde{\mathbf{w}}_{\bar{F}}(n)\|^2$
(80)

From (79), we can write

$E\!\left[\mathbf{Q}^T(n)\mathbf{Q}(n)\right] = \left(1 + \frac{K}{(SL)^2}\mathrm{tr}\!\left[\mathbf{U}_F^2(n)\right]\right)\mathbf{I}_{SL} - \frac{2K}{SL}\,\mathbf{U}_F(n)$
(81)

Combining (81) and (80), we get

$\Lambda(n+1) = \left(1 + \frac{K}{(SL)^2}\mathrm{tr}\!\left[\mathbf{U}_F^2(n)\right]\right)E\|\tilde{\mathbf{w}}_F(n)\|^2 - \frac{2K}{SL}\,E\!\left[\tilde{\mathbf{w}}_F^T(n)\mathbf{U}_F(n)\tilde{\mathbf{w}}_F(n)\right] + \frac{K\sigma_v^2}{(SL)^2\sigma_x^2}\mathrm{tr}\!\left[\mathbf{U}_F^2(n)\right] + E\|\tilde{\mathbf{w}}_{\bar{F}}(n)\|^2$
(82)

Taking the first-order partial derivative of Λ(n+1) with respect to $\mu_j(n)$ and setting it to zero for j ∈ F, we have

$\frac{\partial\Lambda(n+1)}{\partial\mu_j} = \frac{2K}{(SL)^2}\mu_j(n)E\|\tilde{\mathbf{w}}_F(n)\|^2 - \frac{2K}{SL}E[\tilde{w}_j^2(n)] + \frac{2K}{(SL)^2}\frac{\sigma_v^2}{\sigma_x^2}\mu_j(n) = 0$
(83)

Therefore, for j ∈ F we have

$\mu_j(n) = \frac{SL\,E[\tilde{w}_j^2(n)]}{E\|\tilde{\mathbf{w}}_F(n)\|^2 + \sigma_v^2/\sigma_x^2}$
(84)

To update $E[\tilde{w}_j^2(n)]$, we use the following equation, obtained by taking the mean square of the j-th entry in (78):

$E[\tilde{w}_j^2(n+1)] = \left(1 - \frac{2K}{SL}\mu_j(n)\right)E[\tilde{w}_j^2(n)] + \frac{K}{(SL)^2}\mu_j^2(n)E\|\tilde{\mathbf{w}}_F(n)\|^2 + \frac{K}{(SL)^2}\frac{\sigma_v^2}{\sigma_x^2}\mu_j^2(n)$
(85)

From (76), we can write

$E\|\mathbf{e}(n)\|^2 = K\sigma_x^2\,E\|\tilde{\mathbf{w}}_F(n)\|^2 + K\sigma_v^2$
(86)

Therefore

$E[\tilde{w}_j^2(n+1)] = \left(1 - \frac{2K}{SL}\mu_j(n)\right)E[\tilde{w}_j^2(n)] + \frac{\mu_j^2(n)}{(SL)^2\sigma_x^2}\,E\|\mathbf{e}(n)\|^2$
(87)

Also, $E\|\mathbf{e}(n)\|^2$ is estimated according to (59).

5. Computational complexity

The computational complexity of the VSS adaptive algorithms is given in Tables 1 and 2; the complexity of the APA and NLMS is taken from [4]. The SPU-NLMS needs 3SL + 1 multiplications, 1 additional multiplication, and B log₂ S + O(B) comparisons. Comparing the update equations of the NLMS and VSS-NLMS shows that the VSS-NLMS needs 4M + 3 additional multiplications and M divisions due to the variable step-size. For the VSS-SPU-NLMS, the additional multiplications and divisions are 4SL + 3 and SL, respectively, and this algorithm needs B log₂ S + O(B) comparisons. The computational complexity of the VSS-SPU-NLMS is therefore lower than that of the VSS-NLMS: the reductions in multiplications and divisions are 3(M - SL) - 1 and M - SL, respectively.

Table 1 The computational complexity of NLMS, SPU-NLMS, VSS-NLMS, and VSS-SPU-NLMS algorithms.
Table 2 The computational complexity of APA, SPU-APA, SR-APA, VSS-APA, VSS-SPU-APA, and VSS-SR-APA.

The SPU-APA needs (K² + 2K)SL + K³ + K² multiplications, 1 additional multiplication, and B log₂ S + O(B) comparisons. The SR-APA needs (P² + 2P)M + P³ + P² multiplications and K divisions; it also needs (K - P)M + K + 1 additional multiplications and K log₂ P + O(K) comparisons. Comparing the update equations of the APA and VSS-APA shows that the VSS-APA needs KM² + 3M + K + 1 additional multiplications and M divisions due to the variable step-size. For the VSS-SPU-APA, the additional multiplications are K(SL)² + 3SL + K + 1 and the additional divisions are SL; this algorithm also needs B log₂ S + O(B) comparisons. The computational complexity of the VSS-SPU-APA is therefore lower than that of the VSS-APA: the reductions in multiplications and divisions are (K² + 2K + 4)(M - SL) + K(M² - S²L²) and M - SL, respectively. For the VSS-SR-APA, the additional multiplications are 3M + P + 1 and the additional divisions are M compared with the SR-APA; this algorithm also needs K log₂ P + O(K) comparisons. The computational complexity of the VSS-SR-APA is lower than that of the VSS-APA.
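As a quick illustration, the multiplication counts quoted above can be evaluated for the parameter values used later in the simulations (a sketch; the formulas are those stated in the text):

```python
# Per-iteration multiplication counts from Section 5, for M = 50, K = 10,
# B = 5 (so L = M/B = 10), S = 3, P = 5.
M, K, B, S, P = 50, 10, 5, 3, 5
L = M // B
spu_apa = (K**2 + 2*K) * S * L + K**3 + K**2   # SPU-APA
sr_apa = (P**2 + 2*P) * M + P**3 + P**2        # SR-APA
print(f"SPU-APA: {spu_apa} mults/iter, SR-APA: {sr_apa} mults/iter")
```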

6. Experimental results

We present several simulation results for a system identification scenario. The unknown impulse response is generated according to $h_i = e^{-i\tau} r(i)$, i = 0, ..., M-1, where r(i) is a white Gaussian random sequence with zero mean and variance $\sigma_r^2 = 0.09$ [20]. The impulse response length was set to M = 20 and M = 50 in the simulations, and the envelope decay rate τ was set to 0.04. The filter input is a zero-mean i.i.d. Gaussian process with variance $\sigma_x^2 = 1$. Another input signal is a colored Gaussian signal, generated by filtering white Gaussian noise through a first-order autoregressive (AR(1)) system with the transfer function:

$G(z) = \frac{1}{1 - 0.8z^{-1}}$
(88)

White zero-mean Gaussian noise was added to the filter output such that SNR = 15 dB.
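For reproducibility, this experimental setup can be sketched as follows (NumPy/SciPy; the random seed and signal length N are our choices):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
M, tau, var_r, N = 50, 0.04, 0.09, 8000

# Unknown channel: h_i = exp(-i*tau) * r(i), r(i) white Gaussian, variance 0.09
h = np.exp(-tau * np.arange(M)) * rng.normal(0.0, np.sqrt(var_r), M)

# Colored input: white Gaussian noise through G(z) = 1/(1 - 0.8 z^-1), Eq. (88)
x = lfilter([1.0], [1.0, -0.8], rng.standard_normal(N))

# Desired signal with additive white Gaussian noise at SNR = 15 dB
y = lfilter(h, [1.0], x)
v = rng.standard_normal(N) * np.sqrt(np.mean(y**2) / 10**(15 / 10))
d = y + v
```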

In all simulations, the MSD curves are obtained by ensemble averaging over 200 independent trials.

Figure 1 shows the MSD curves of the APA, the VSS-APA of [17], the proposed VSS-APA, the ES algorithm [19] and the GNGD algorithm [16] for colored Gaussian input and M = 20. The parameter K was set to 4, and several values of μ were used in the APA. As can be seen, increasing the step-size increases the convergence speed but also the steady-state MSD. The VSS-APA of [17] achieves both fast convergence and low steady-state MSD. The proposed VSS-APA converges faster and reaches a lower steady-state MSD than both the VSS-APA of [17] and the classical APA.

Figure 1

Comparing the MSD curves of APA with high and low step-sizes, the VSS-APA proposed in [17], the proposed VSS-APA, and the GNGD [16] and ES [19] algorithms with M = 20, K = 4 and colored Gaussian signal as input.

In Figure 2, we present the MSD curves for colored Gaussian input and M = 50, with K = 10. The results are again compared with the APA using different step-sizes and with the VSS-APA of [17]. As can be seen, the proposed VSS-APA exhibits both fast convergence and low steady-state MSD.

Figure 2

Comparing the MSD curves of APA with high and low step-sizes, the VSS-APA proposed in [17], the proposed VSS-APA, and the GNGD [16] and ES [19] algorithms with M = 50, K = 10 and colored Gaussian signal as input.

Figure 3 compares the MSD curves of the SPU-APA, the proposed VSS-APA, and the proposed VSS-SPU-APA for colored Gaussian input and M = 20. The parameter K and the number of blocks (B) were both set to 4, and the number of blocks to update (S) was set to 3 for the SPU-APA. Again, two step-sizes were used in the SPU-APA: the first, S/B, is the upper stability bound of the SPU-APA, and the second, 0.05 S/B, is a low value. As can be seen, the convergence speed and steady-state MSD change with the step-size. The proposed VSS-SPU-APA has fast convergence and low steady-state MSD, and performs close to the proposed VSS-APA while having lower computational complexity.

Figure 3

Comparing the MSD curves of SPU-APA with high and low step-sizes, proposed VSS-APA and proposed VSS-SPU-APA with M = 20, K = 4 and colored Gaussian signal as input.

Figure 4 presents the MSD curves of the SPU-APA, the proposed VSS-SPU-APA, and the VSS-APA for M = 50 and colored Gaussian input. The parameter K and the number of blocks were set to 10 and 5, respectively, and S was set to 3; different step-sizes were used in the SPU-APA. The simulation results show that the VSS-SPU-APA performs well compared with the SPU-APA. Also, the proposed VSS-APA and VSS-SPU-APA perform similarly, but the computational complexity of the VSS-SPU-APA is lower.

Figure 4

Comparing the MSD curves of SPU-APA with high and low step-sizes, proposed VSS-APA and proposed VSS-SPU-APA with M = 50, K = 10 and colored Gaussian signal as input.

Figure 5 compares the MSD curves of the SPU-NLMS, the VSS-NLMS of [21], the proposed VSS-SPU-NLMS, and the ES [19] and GNGD [16] algorithms. The parameters B and S were set to 4 and 2, respectively, and various step-sizes were used for the SPU-NLMS. The results show that the proposed VSS-SPU-NLMS has fast convergence and low steady-state MSD, and performs close to the VSS-NLMS of [21]. The same behavior can be seen in Figure 6 for M = 50.

Figure 5

Comparing the MSD curves of SPU-NLMS with high and low step-sizes, the VSS-NLMS proposed in [17], the VSS-NLMS proposed in [21], the proposed VSS-SPU-NLMS, and the GNGD [16] and ES [19] algorithms with M = 20 and colored Gaussian signal as input.

Figure 6

Comparing the MSD curves of SPU-NLMS with high and low step-sizes, the VSS-NLMS proposed in [17], the VSS-NLMS proposed in [21], the proposed VSS-SPU-NLMS, and the GNGD [16] and ES [19] algorithms with M = 50 and colored Gaussian signal as input.

Figure 7 compares the MSD curves of the SR-APA, the proposed VSS-SR-APA, and the VSS-APA for colored Gaussian input. The parameter K was set to 4. Again, large and small step-sizes were used in the SR-APA. As can be seen, the proposed VSS-SR-APA achieves good convergence speed and low steady-state MSD, while the proposed VSS-APA performs slightly better. In Figure 8, we present the results for M = 50 and colored Gaussian input; comparing the MSD curves shows that the VSS-SR-APA also performs well in this case.

Figure 7

Comparing the MSD curves of SR-APA with high and low step-sizes, the proposed VSS-APA and the proposed VSS-SR-APA in a filter with M = 20, K = 4 and colored Gaussian signal as input.

Figure 8

Comparing the MSD curves of SR-APA with high and low step-sizes, proposed VSS-APA and proposed VSS-SR-APA with M = 50, K = 5 and colored Gaussian signal as input.

Figure 9 compares the MSD curves of the proposed VSS-SPU-APA with S = 2 and S = 3, the proposed VSS-APA, the ES [19] and GNGD [16] algorithms, and the VSS-APA of [17] with M = 20 and K = 4 for colored Gaussian input. The results show that the proposed VSS-APA outperforms the VSS-APA of [17], and that the performance of the proposed VSS-SPU-APA with S = 3 is close to that of the proposed VSS-APA.

Figure 9

Comparing the MSD curves of the proposed VSS-SPU-APA with S = 2 and S = 3, the proposed VSS-APA, the VSS-APA proposed in [17], and the GNGD [16] and ES [19] algorithms with M = 20, K = 4 and colored Gaussian signal as input.

Figure 10 compares the MSD curves of the proposed VSS-SPU-NLMS with S = 1, 2, and 3, the proposed VSS-NLMS, the ES [19] and GNGD [16] algorithms, and the VSS-NLMS of [17] with M = 20 and K = 4 for colored Gaussian input. This figure shows that the proposed VSS-NLMS outperforms the VSS-NLMS of [17]. Also, as the parameter S increases, the MSD curves of the VSS-SPU-NLMS approach those of the VSS-NLMS.

Figure 10

Comparing the MSD curves of the proposed VSS-SPU-NLMS with S = 1, 2, and 3, the proposed VSS-NLMS, the VSS-NLMS proposed in [17], and the GNGD [16] and ES [19] algorithms with M = 20, K = 4 and colored Gaussian signal as input.

Figure 11 presents the MSD curves of the proposed VSS-SR-APA with P = 1, 2, and 3, the proposed VSS-APA, the ES [19] and GNGD [16] algorithms, and the VSS-APA of [17] with M = 20 and K = 4 for colored Gaussian input. This figure shows that the VSS-SR-APA outperforms the VSS-APA of [17]. Also, as the parameter P increases, the performance of the VSS-SR-APA approaches that of the VSS-APA.

Figure 11

Comparing the MSD curves of the proposed VSS-SR-APA with P = 1, 2, and 3, the proposed VSS-APA, the VSS-APA proposed in [17], and the GNGD [16] and ES [19] algorithms with M = 20, K = 4 and colored Gaussian signal as input.

Finally, we evaluated the tracking performance of the proposed methods in Figure 12. The impulse response variations are simulated by toggling between two impulse responses, $h_i = e^{-i\tau} r(i)$ and $g_i = e^{i\tau} r(i)$; the impulse response is switched to $g_i$ at iteration 4000. As can be seen, the proposed VSS algorithms have good tracking performance. The proposed VSS-APA performs best, and the proposed VSS-SPU-APA with B = 4 and S = 3 performs close to it.

Figure 12

Comparing the tracking performance of proposed VSS algorithms.

7. Conclusions

In this paper, we presented novel VSS adaptive filter algorithms, namely the VSS-SPU-NLMS, VSS-APA, VSS-SR-APA and VSS-SPU-APA, based on prior knowledge of the channel impulse response statistics. These algorithms exhibit fast convergence while reducing the steady-state MSD compared with the ordinary SPU-NLMS, APA, SR-APA and SPU-APA algorithms. The presented algorithms are also computationally efficient. We demonstrated their good performance in a system identification scenario.

References

  1. Widrow B, Stearns SD: Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall; 1985.


  2. Haykin S: Adaptive Filter Theory. 4th edition. NJ: Prentice-Hall; 2002.


  3. Ozeki K, Umeda T: An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electron Commun Jpn 1984, 67-A: 19-27.


  4. Sayed AH: Fundamentals of Adaptive Filtering. Wiley; 2003.


  5. Roy S, Shynk JJ: Analysis of data-reusing LMS algorithm. Proc Midwest Symp Circuits Syst 1989, 1127-1130.


  6. Shin HC, Song WJ, Sayed AH: Mean-square performance of data-reusing adaptive algorithms. IEEE Signal Processing Letters 2005,12(12):851-854.


  7. Pradhan SS, Reddy VE: A new approach to subband adaptive filtering. IEEE Trans Signal Processing 1999, 47: 655-664. 10.1109/78.747773


  8. Lee KA, Gan WS: Improving convergence of the NLMS algorithm using constrained subband updates. IEEE Signal Processing Letters 2004,11(9):736-739. 10.1109/LSP.2004.833445


  9. Kwong OW, Johnston ED: A variable step-size LMS algorithm. IEEE Trans Signal Processing 1992,40(7):1633-1642. 10.1109/78.143435


  10. Aboulnasr T, Mayyas K: A robust variable step-size lms-type algorithm: Analysis and simulations. IEEE Trans Signal Processing 1997,45(3):631-639. 10.1109/78.558478


  11. Pazaitis DI, Constantinides AG: A novel kurtosis driven variable step-size adaptive algorithm. IEEE Trans Signal Processing 1999,47(3):864-872. 10.1109/78.747793


  12. Mathews VJ, Xie Z: A stochastic gradient adaptive filter with gradient adaptive step size. IEEE Trans Signal Processing 1993,41(6):2075-2087. 10.1109/78.218137


  13. Krishnamurthy V: Averaged stochastic gradient algorithms for adaptive blind multiuser detection in DS/CDMA systems. IEEE Trans Communications 2000,48(1):125-134. 10.1109/26.818880


  14. de Lamare RC, Sampaio-Neto R: Low-complexity blind variable step size mechanisms for minimum variance CDMA receivers. IEEE Transactions on Signal Processing 2006,54(6):2302-2317.


  15. Cai Y, de Lamare RC: Low-Complexity Variable Step Size Mechanism for Code-Constrained Constant Modulus Stochastic Gradient Algorithms applied to CDMA Interference Suppression. IEEE Transactions on Signal Processing 2009,57(1):313-323.


  16. Mandic DP: A Generalized Normalized Gradient Descent Algorithm. IEEE Signal Processing Letters 2004,11(2):115-118. 10.1109/LSP.2003.821649


  17. Shin HC, Sayed AH, Song WJ: Variable step-size NLMS and affine projection algorithms. IEEE Signal Processing Letters 2004,11(2):132-135. 10.1109/LSP.2003.821722


  18. Abadi MSE, Mehrdad V, Gholipour A: Family of Variable Step-Size Affine Projection Adaptive Filtering Algorithms with Selective Regressors and Selective Partial Update. International Journal of Science and Technology, Scientia, Iranica 2010,17(1):81-98.


  19. Makino S, Kaneda Y, Koizumi N: Exponentially weighted step-size NLMS adaptive filter based on the statistics of a room impulse response. IEEE Transactions on Speech Audio Processing 1993,1(1):101-108. 10.1109/89.221372


  20. Li N, Zhang Y, Hao Y, Chambers JA: A new variable step-size NLMS algorithm designed for applications with exponential decay impulse responses. Signal Processing 2008,88(9):2346-2349. 10.1016/j.sigpro.2008.03.002


  21. Shi Kun, Ma Xiaoli: A variable-step-size NLMS algorithm using statistics of channel response. Signal Processing 2010,90(6):2107-2111. 10.1016/j.sigpro.2010.01.015


  22. Douglas SC: Analysis and implementation of the max-NLMS adaptive filter. In Proc 29th Asilomar Conf on Signals, Systems, and Computers. Pacific Grove, CA; 1995:659-663.


  23. Schertler T: Selective block update NLMS type algorithms. Proc IEEE Int Conf on Acoustics, Speech, and Signal Processing, Seattle, WA 1998, 1717-1720.


  24. Dogancay K, Tanrikulu O: Adaptive filtering algorithms with selective partial updates. IEEE Trans Circuits, Syst. II: Analog and Digital Signal Processing 2001,48(8):762-769. 10.1109/82.959866


  25. Abadi MSE, Husøy JH: Mean-square performance of adaptive filters with selective partial update. Signal Processing 2008,88(8):2008-2018. 10.1016/j.sigpro.2008.02.005


  26. Dogancay K: Partial-Update Adaptive Signal Processing: Design, Analysis and Implementation. Academic Press; 2008.


  27. Hwang KY, Song WJ: An affine projection adaptive filtering algorithm with selective regressors. IEEE Trans Circuits, Syst. II: EXPRESS BRIEFS 2007, 54: 43-46.


  28. Abadi MSE, Palangi H: Mean-square performance analysis of the family of selective partial update and selective regressor affine projection algorithms. Signal Processing 2010,90(1):197-206. 10.1016/j.sigpro.2009.06.013



Author information

Correspondence to Mohammad Shams Esfand Abadi.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Shams Esfand Abadi, M., AbbasZadeh Arani, S.A.A. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response. EURASIP J. Adv. Signal Process. 2011, 97 (2011). https://doi.org/10.1186/1687-6180-2011-97
