# A variable-parameter normalized mixed-norm (VPNMN) adaptive algorithm

## Abstract

Since both the least mean-square (LMS) and least mean-fourth (LMF) algorithms individually suffer from the problem of eigenvalue spread, the mixed-norm LMS-LMF algorithm inherits the same problem. To overcome this problem for the mixed-norm LMS-LMF, we adopt here the same normalization technique (normalizing by the power of the input) that was successfully used with the LMS and LMF separately. Consequently, a new variable-parameter normalized mixed-norm (VPNMN) adaptive algorithm is proposed in this study. The algorithm is derived by exploiting a time-varying mixing parameter in the traditional mixed-norm LMS-LMF weight update equation. The time-varying mixing parameter is adjusted according to a well-known technique used in the adaptation of the step-size parameter of the LMS algorithm. To study the theoretical aspects of the proposed VPNMN adaptive algorithm, this work also addresses its convergence analysis and assesses its performance using the concept of energy conservation. Extensive simulation results corroborate our theoretical findings and show that a substantial improvement, in both convergence time and steady-state error, can be obtained with the proposed algorithm. Finally, the VPNMN algorithm proved its usefulness in a noise cancellation application, where it showed its superiority over the normalized least-mean-square (NLMS) algorithm.

## 1 Introduction

Due to its simplicity, the least mean-square (LMS) [1, 2] algorithm is the most widely used algorithm for adaptive filters in many applications. The least mean-fourth (LMF) algorithm was proposed later as a special case of the more general family of steepest-descent algorithms with $2k$ error norms, $k$ being a positive integer.

For both of these algorithms, however, the convergence behavior depends on the condition number, i.e., on the ratio of the maximum to the minimum eigenvalue of the input-signal autocorrelation matrix $R = E[x_n x_n^T]$, where $x_n$ is the input vector. This is clearly seen from their respective time constants [1, 3]

$\tau_i^{\mathrm{LMS}} = \frac{1}{\mu \lambda_i}, \quad i = 0, 1, \ldots, N-1,$
(1)

and

$\tau_i^{\mathrm{LMF}} = \frac{1}{6 \mu \sigma_\eta^2 \lambda_i}, \quad i = 0, 1, \ldots, N-1,$
(2)

where $\sigma_\eta^2$ is the noise power, $\lambda_i$ is the $i$th eigenvalue of the autocorrelation matrix of the input signal, $\mu$ is the step size used in the adaptation scheme, and $N$ is the number of coefficients of the adaptive filter. As seen from (1) and (2), the ratio $\tau_{\max} / \tau_{\min}$ is the same for both algorithms and is given by the eigenvalue spread (i.e., the condition number) $\lambda_{\max} / \lambda_{\min}$, i.e.,

$\frac{\tau_{\max}}{\tau_{\min}} = \frac{\lambda_{\max}}{\lambda_{\min}}.$
(3)
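As a quick numerical illustration of (3) (not taken from the paper), the following sketch builds the Toeplitz autocorrelation matrix of a unit-power AR(1) input, a standard but here hypothetical test signal, and computes its eigenvalue spread, which grows rapidly with the input correlation:

```python
import numpy as np

def autocorr_matrix(rho, N):
    """Toeplitz autocorrelation matrix R of a unit-power AR(1) input,
    i.e., r(k) = rho**|k| (an illustrative choice of input model)."""
    idx = np.arange(N)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def eigenvalue_spread(R):
    """Condition number lambda_max / lambda_min of a symmetric PD matrix."""
    lam = np.linalg.eigvalsh(R)  # eigenvalues in ascending order
    return lam[-1] / lam[0]

# A white input (rho = 0) gives R = I and spread 1; correlation inflates the
# spread, and by (3) the ratio of slowest to fastest mode grows with it.
spread_white = eigenvalue_spread(autocorr_matrix(0.0, 8))
spread_corr = eigenvalue_spread(autocorr_matrix(0.9, 8))
```

The normalization discussed next is precisely what removes the dependence of the convergence speed on this spread.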

To remove the dependency of the convergence of the LMS algorithm on the condition number, the normalized least-mean-square (NLMS) algorithm was introduced. As reported in the literature, a great improvement in convergence is obtained through the use of the NLMS algorithm over the LMS algorithm, at the expense of a larger steady-state error. Similar results were obtained for the normalized LMF (NLMF) algorithm.

A mixed-norm algorithm combining the LMS and LMF algorithms suffers as well from the eigenvalue-spread dependency, since both of its constituents suffer individually from this problem. To circumvent this problem for the mixed-norm LMS-LMF, we adopt here the same normalization technique that was successfully used with the LMS and LMF separately.

It is well known that fast convergence and low steady-state error are two conflicting requirements in adaptive filtering. When compared to the LMS algorithm, the NLMS algorithm results in faster convergence, but only at the expense of a higher steady-state error [12, 13]. A promising solution to this conflict is a time-varying normalized mixed-norm LMS-LMF algorithm. In this mixed-norm algorithm, during the transient state the NLMS algorithm is used to speed up convergence; when steady state is reached, the algorithm automatically switches from the NLMS to the NLMF, thanks to a built-in "gear-shifting" property, to secure a lower steady-state error.

In this work, the performance of a variable-parameter normalized mixed-norm (VPNMN) LMS-LMF algorithm is evaluated. It will be shown that a better performance in both convergence and steady-state error will be achieved by the VPNMN algorithm than either the NLMS or the NLMF algorithm.

The rest of the article is organized as follows. Section 2 deals with a more explicit development of the proposed algorithm, and Section 3 treats its convergence analysis. The steady-state analysis of the proposed algorithm is detailed in Section 4, while its tracking analysis is given in Section 5. Performance evaluation of the resulting algorithm is carried out in Section 6. Finally, the conclusion section summarizes this work.

## 2 Algorithm development

The mixed-norm LMS-LMF algorithm is based on the minimization of the following cost function [9, 10]:

$J_n = \alpha E[e_n^2] + (1 - \alpha) E[e_n^4],$
(4)

where $\alpha$ is a positive mixing parameter in the interval [0, 1] and the error $e_n$ is defined as

$e_n = d_n + \eta_n - x_n^T w_n,$
(5)

where $d_n$ is the desired response, $w_n$ is the coefficient vector of the adaptive filter, $x_n$ is the input vector, and $\eta_n$ is the additive noise.

A major drawback of this algorithm, however, is the choice of the mixing parameter, which is hard to fix a priori for an unknown system. To address this, a self-adapting LMS-LMF algorithm with a time-varying weighting factor was proposed. This time variation of the weighting factor is achieved by allowing a variable mixing factor that is updated every iteration using the modified variable step-size (MVSS) algorithm. The variable-weight mixed-norm LMS-LMF algorithm was defined to minimize the following performance measure:

$J_n = \alpha_n E[e_n^2] + (1 - \alpha_n) E[e_n^4],$
(6)

where $\alpha_n$, chosen in [0, 1] such that the unimodal character of the above cost is preserved, is a time-varying parameter updated according to

$\alpha_{n+1} = \delta \alpha_n + \gamma p_n^2,$
(7)

and

$p_n = \beta p_{n-1} + (1 - \beta) e_n e_{n-1}.$
(8)

The parameters δ and β, both confined to the interval [0,1], are exponential weighting parameters that govern the averaging time constant, i.e., the quality of estimation of the algorithm, and γ > 0. Note that the algorithm defined by (4) is restored when δ = 1 and γ = 0, which forces α n to have a fixed value.
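As a concrete illustration, the recursions (7) and (8) can be sketched in a few lines. The function name and the explicit clipping of $\alpha_n$ to [0, 1] are our additions (the clipping reflects the stated requirement that $\alpha_n$ stay in that interval); the default parameter values mirror those used later in the simulation section:

```python
def update_mixing_parameter(alpha, p, e, e_prev,
                            delta=0.97, beta=0.98, gamma=1e-2):
    """One iteration of the mixing-parameter recursions (7)-(8).

    alpha, p : current alpha_n and p_{n-1}
    e, e_prev: current and previous output errors e_n, e_{n-1}
    """
    p = beta * p + (1.0 - beta) * e * e_prev  # (8): smoothed error correlation
    alpha = delta * alpha + gamma * p ** 2    # (7): mixing-parameter update
    return min(max(alpha, 0.0), 1.0), p      # keep alpha_n in [0, 1]
```

Note that, as stated above, freezing `delta = 1` and `gamma = 0` reproduces the fixed-mixing algorithm of (4).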

Based on this motivation, the variable-weight mixed-norm LMS-LMF algorithm for recursively adjusting the coefficients of the system is expressed in the following form:

$w_{n+1} = w_n + \mu \left[ \alpha_n e_n + 2 (1 - \alpha_n) e_n^3 \right] x_n,$
(9)

where μ is the step size.

As mentioned earlier and because of its reliance on the LMS and the LMF, the algorithm defined by (9) will be affected by the eigenvalue spread of the autocorrelation matrix of the input signal. To overcome this dependency, a VPNMN adaptive algorithm is introduced and its weight update recursion is given by the following expression:

$w_{n+1} = w_n + \mu \left[ \alpha_n e_n + 2 (1 - \alpha_n) e_n^3 \right] \frac{x_n}{\| x_n \|^2},$
(10)

where $\| x_n \|^2$ is the squared Euclidean norm of the input vector $x_n$. In the case of a vanishing input, the $\varepsilon$-VPNMN algorithm, defined as follows:

$w_{n+1} = w_n + \mu \left[ \alpha_n e_n + 2 (1 - \alpha_n) e_n^3 \right] \frac{x_n}{\varepsilon + \| x_n \|^2},$
(11)

must be used for regularization purposes.
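A minimal sketch of the $\varepsilon$-VPNMN recursion (11) is given below, assuming the desired signal already contains the additive noise so that $e_n = d_n - x_n^T w_n$; the function name, parameter values, and the toy identification loop are illustrative, and the mixing parameter is held fixed here rather than updated via (7)-(8):

```python
import numpy as np

def vpnmn_update(w, x, d, alpha, mu=0.05, eps=1e-6):
    """One weight update of the epsilon-VPNMN recursion (11)."""
    e = d - x @ w
    g = alpha * e + 2.0 * (1.0 - alpha) * e ** 3   # mixed-norm error term
    w = w + mu * g * x / (eps + x @ x)             # normalized (regularized) update
    return w, e

# Toy usage: identify a 4-tap system from noisy data with a fixed alpha.
rng = np.random.default_rng(0)
w_opt = np.array([1.0, -0.5, 0.25, 0.1])
w = np.zeros(4)
for n in range(2000):
    x = rng.standard_normal(4)
    d = x @ w_opt + 0.01 * rng.standard_normal()
    w, e = vpnmn_update(w, x, d, alpha=0.8)
mismatch = np.linalg.norm(w - w_opt) / np.linalg.norm(w_opt)
```

After the loop, `mismatch` is the normalized weight error norm used as the performance measure in the simulation section (before taking the logarithm).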

## 3 Convergence analysis of the VPNMN algorithm

In this section, the convergence analysis of the proposed VPNMN algorithm is carried out. Both the mean and the mean-square behaviors of the weight error vector are presented in the ensuing analysis.

### 3.1 Mean behavior

In the ensuing analysis, the following assumptions are used in the derivation of the convergence in the mean for the normalized mixed-norm LMS-LMF algorithm. They are quite similar to what is usually assumed in the literature [16, 24] and can be justified in several practical instances:

A.1 The noise sequence $\{\eta_n\}$ is statistically independent of the input sequence $\{x_n\}$, and both sequences have zero mean.

A.2 The weight error vector $v_n$, to be defined later, is independent of the input $x_n$.

A.3 The mixing parameter is independent of both the input signal and the error.

Examining the mean behavior of (10) under the above assumptions, sufficient conditions for convergence of the proposed algorithm in-the-mean can be derived and are stated as follows.

Proposition 1 For the algorithm defined by (10) to converge in-the-mean, a sufficient condition is that μ be chosen in the following range:

$0 < \mu < \frac{2}{\bar{\alpha}_n + 3 (1 - \bar{\alpha}_n) (\sigma_\eta^2 + C)},$
(12)

where $\sigma_\eta^2$ is the noise power, $\bar{\alpha}_n = E[\alpha_n]$ is the mean of the mixing parameter, and $C$ is the Cramer-Rao bound associated with the problem of estimating the random quantity $x_n^T w_{\mathrm{opt}}$ by using $x_n^T w_n$.

Proof: The mean convergence of the proposed algorithm is studied by taking the expectation of the weight error vector $v_n = w_n - w_{\mathrm{opt}}$. In this regard, the error $e_n$ can be written as

$e_n = \eta_n - x_n^T v_n,$
(13)

and hence (10) becomes

$v_{n+1} = v_n + \mu \left[ \alpha_n e_n + 2 (1 - \alpha_n) e_n^3 \right] \frac{x_n}{\| x_n \|^2}.$
(14)

Consequently, taking the expectation on both sides of (14), under A.1-A.3, the mean weight-error vector of the proposed algorithm evolves as

$E[v_{n+1}] = E[v_n] + \mu \left( \bar{\alpha}_n E\!\left[ \frac{e_n x_n}{\| x_n \|^2} \right] + (1 - \bar{\alpha}_n) E\!\left[ \frac{e_n^3 x_n}{\| x_n \|^2} \right] \right).$
(15)

Now, consider the first expectation in the above equation. When the filter is long enough, $\| x_n \|^2$ can be replaced by its mean, $\mathrm{tr}(R)$; consequently, the independence assumption can be invoked to obtain the following:

$E\!\left[ \frac{e_n x_n}{\| x_n \|^2} \right] \approx \frac{E[e_n x_n]}{\mathrm{tr}(R)}.$
(16)

To evaluate the expectation $E[e_n x_n]$, we use the technique adopted in the literature, which results in

$E[e_n x_n] = - R\, E[v_n].$
(17)

Similarly, for the second expectation in (15), the same long-filter argument and the independence assumption yield the following:

$E\!\left[ \frac{e_n^3 x_n}{\| x_n \|^2} \right] \approx \frac{E[e_n^3 x_n]}{\mathrm{tr}(R)}.$
(18)

To evaluate the expectation $E[e_n^3 x_n]$, we use the technique of [17, 18], which does not employ any linearization of $e_n^3$. As a result, $E[e_n^3 x_n]$ is found to be

$E[e_n^3 x_n] = -3 (\sigma_\eta^2 + \zeta_n) R\, E[v_n].$
(19)

Ultimately, (15) can be set up in the following form:

$E[v_{n+1}] \approx \left[ I - \mu\, \frac{\bar{\alpha}_n + 3 (1 - \bar{\alpha}_n) (\sigma_\eta^2 + \zeta_n)}{\mathrm{tr}(R)}\, R \right] E[v_n].$
(20)

If $C \le \zeta_n$, where $C$ is the Cramer-Rao bound associated with the problem of estimating the random quantity $x_n^T w_{\mathrm{opt}}$ by using $x_n^T w_n$, then, taking into account the fact that the eigenvalues of $R$ are all real and positive, with $\lambda_{\max}$ the largest eigenvalue of $R$ and in general $\lambda_{\max} < \mathrm{tr}(R)$, it follows that a sufficient condition for convergence of the proposed algorithm is that the step-size parameter $\mu$ satisfies (12). ▪

Two extreme scenarios can be considered for the value of the mixing parameter $\alpha_n$:

1. Scenario 1: When $\alpha_n = 0$, the VPNMN algorithm reduces to the NLMF algorithm, and it can be shown that (12) becomes

$0 < \mu < \frac{2}{3 (\sigma_\eta^2 + C)}.$
(21)

2. Scenario 2: When $\alpha_n = 1$, both the NLMS algorithm and its step-size range, that is $0 < \mu < 2$, are recovered.

Remarks:

1. It can be seen from (10) that the VPNMN algorithm can be viewed as a mixed-norm LMS-LMF algorithm with a time-varying step size.

2. The error is usually large during the initial adaptation and gradually decreases toward a minimum. Therefore, the input power $\| x_n \|^2$ acts as a threshold in the recursive update equation, preventing large step sizes from being taken once the error has converged to a minimum.

3. The bound on the step size $\mu$ of the proposed algorithm that guarantees convergence of the mean weight vector, given by (12), shows that the mean-weight-vector stability depends on the Cramer-Rao bound. Therefore, the convergence of the mean weight vector of the proposed algorithm depends on its mean-square stability. A similar fact was observed for the LMF algorithm.

### 3.2 Mean square behavior

In this section, the performance of the VPNMN algorithm in the mean-square sense is analyzed. We use a unified approach to the transient analysis of adaptive filters with error nonlinearities. This approach does not restrict the regression data to be Gaussian and avoids the need for explicit recursions for the covariance matrix of the weight error vector. It assumes that the adaptive filter is long enough to justify the following assumptions, which are realistic for longer adaptive filters:

A.4 The residual, or a priori, error $e_{a_n}$, to be defined later, can be assumed to be Gaussian.

A.5 The norm of the input regressor, $\| x_n \|^2$, can be assumed to be uncorrelated with $f^2(e_n)$ ($f(e_n)$ is defined in (23)).

The framework is based on the concept of the energy conservation relation, first noted in the literature. In general, the adaptation scheme defined in (14) can be written in the following form:

$v_{n+1} = v_n + \mu\, x_n f(e_n),$
(22)

where $f(e_n)$ denotes a general scalar function of the output estimation error $e_n$, which in our case is given by

$f(e_n) = \frac{\alpha_n e_n + 2 (1 - \alpha_n) e_n^3}{\| x_n \|^2}.$
(23)

We are interested in studying the time evolution and the steady-state values of $E[| e_{a_n} |^2]$ and $E[\| v_n \|^2]$, which represent the mean-square-error and mean-square-deviation performances of the filter, respectively; their time evolution describes the learning, or transient, behavior of the filter.

Then, for some symmetric positive definite weighting matrix A to be specified later, the weighted a priori and a posteriori estimation errors are, respectively, defined as 

$e_{a_n}^A = x_n^T A v_n, \quad \text{and} \quad e_{p_n}^A = x_n^T A v_{n+1}.$
(24)

For the special case when A = I, the weighted a priori and a posteriori estimation errors defined above are reduced to standard a priori and a posteriori estimation errors, respectively, that is,

$e_{a_n} = e_{a_n}^I = x_n^T v_n, \quad \text{and} \quad e_{p_n} = e_{p_n}^I = x_n^T v_{n+1}.$
(25)

It can be shown that the estimation error $e_n$ and the a priori error $e_{a_n}$ are related via $e_n = e_{a_n} + \eta_n$. Also, using (10) and (24), it can be shown that

$e_{p_n}^A = e_{a_n}^A - \| x_n \|_A^2\, \mu f(e_n),$
(26)

where the notation $\| x_n \|_A^2$ denotes the weighted squared Euclidean norm $\| x_n \|_A^2 = x_n^T A x_n$.

The performance measure in the analysis is the excess mean-square-error (EMSE), denoted by ζ n , and is defined as follows:

$\zeta_n = E[| e_n |^2] - \sigma_\eta^2.$
(27)

Since $e_{a_n} = x_n^T v_n$, the EMSE can also be written as follows:

$\zeta_n = E[| e_{a_n} |^2] = E[\| v_n \|_R^2].$
(28)

Next, the fundamental weighted-energy conservation relation is presented to develop the framework for the transient analysis of the proposed algorithm. Thus, by substituting (26) in (22), the following relation can be obtained:

$v_{n+1} = v_n - \frac{x_n}{\| x_n \|_A^2} \left[ e_{a_n}^A - e_{p_n}^A \right].$
(29)

Ultimately, the fundamental weighted-energy conservation relation can be shown to be

$\| v_{n+1} \|_A^2 + \frac{| e_{a_n}^A |^2}{\| x_n \|_A^2} = \| v_n \|_A^2 + \frac{| e_{p_n}^A |^2}{\| x_n \|_A^2}.$
(30)

This relation shows how the weighted energies of the error quantities evolve in time. It has been shown that different choices of A allow us to evaluate different performance measures of an adaptive filter.
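Because (30) is an algebraic identity that holds for any update of the generic form above, it can be checked numerically. In the sketch below, the matrix `A`, the vectors, and the scalar `mu_f` (standing for $\mu f(e_n)$) are all arbitrary choices made only for the check:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
M = rng.standard_normal((N, N))
A = M @ M.T + N * np.eye(N)      # arbitrary symmetric positive definite weight
x = rng.standard_normal(N)       # regressor x_n
v = rng.standard_normal(N)       # weight error vector v_n
mu_f = 0.3                       # arbitrary scalar gain mu * f(e_n)

xA2 = x @ A @ x                  # ||x_n||_A^2
ea = x @ A @ v                   # weighted a priori error e_a^A
v_next = v + mu_f * x            # generic update of the form (22)
ep = x @ A @ v_next              # weighted a posteriori error e_p^A

lhs = v_next @ A @ v_next + ea ** 2 / xA2
rhs = v @ A @ v + ep ** 2 / xA2
```

Up to floating-point rounding, `lhs` and `rhs` agree regardless of the step size or the error nonlinearity, which is exactly why the relation is a convenient analysis tool.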

#### 3.2.1 Time evolution of the weighted variance $E[\| v_n \|_A^2]$

In this section, the time evolution of the weighted variance $E[\| v_n \|_A^2]$ is derived for the proposed algorithm using the fundamental weighted-energy conservation relation (30). Substituting the expression for the a posteriori error from (26) in (30) and taking expectations on both sides yields the following relation:

$E[\| v_{n+1} \|_A^2] = E[\| v_n \|_A^2] - 2 \mu E[e_{a_n}^A f(e_n)] + \mu^2 E[\| x_n \|_A^2 f^2(e_n)].$
(31)

Next, the two expectations in the second and third terms on the right-hand side of the above equation, that is, $E[e_{a_n}^A f(e_n)]$ and $E[\| x_n \|_A^2 f^2(e_n)]$, are evaluated. First, we will use the following assumption:

A.6 For any constant matrix $A$ and for all $n$, $e_{a_n}$ and $e_{a_n}^A$ are jointly Gaussian.

This assumption is reasonable for longer filters by central-limit arguments, and a similar assumption has been used in the literature. Hence, we can simplify the expectation $E[e_{a_n}^A f(e_n)]$ using Price's theorem [23, 24] and assumptions A.4 and A.6 as follows:

$E[e_{a_n}^A f(e_n)] = \frac{E[e_{a_n}^A e_{a_n}]}{E[e_{a_n}^2]}\, E[e_{a_n} f(e_n)].$
(32)

Since $e_{a_n}^A = x_n^T A v_n$ and $e_{a_n} = x_n^T v_n$, we can simplify the expectation $E[e_{a_n}^A e_{a_n}]$ as follows:

$E[e_{a_n}^A e_{a_n}] = E[x_n^T A v_n\, x_n^T v_n] = E[v_n^T A x_n x_n^T v_n] = E[\| v_n \|_{AR}^2].$
(33)

Ultimately, (32) can be written as

$E[e_{a_n}^A f(e_n)] = E[\| v_n \|_{AR}^2]\, \frac{E[e_{a_n} f(e_n)]}{E[e_{a_n}^2]}.$
(34)

The term $\frac{E[e_{a_n} f(e_n)]}{E[e_{a_n}^2]}$, for the case of the proposed algorithm, can be shown to be

$\frac{E[e_{a_n} f(e_n)]}{E[e_{a_n}^2]} = \frac{1}{N} \left[ \bar{\alpha}_n + 6 (1 - \bar{\alpha}_n) (\zeta_n + \sigma_\eta^2) \right] \triangleq \bar{Z}_n.$
(35)

Second, to evaluate the expectation $E[\| x_n \|_A^2 f^2(e_n)]$, we resort to the following assumption:

A.7 The adaptive filter is long enough that $\| x_n \|_A^2$ and $f^2(e_n)$ are uncorrelated.

This assumption becomes more realistic as the filter gets longer, and an unweighted version of it was used in [22, 25]. It enables us to split the expectation $E[\| x_n \|_A^2 f^2(e_n)]$ as follows:

$E[\| x_n \|_A^2 f^2(e_n)] = E[\| x_n \|_A^2]\, E[f^2(e_n)],$
(36)

where $E[f^2(e_n)]$ can be shown to be (with $\overline{\alpha_n^2} = E[\alpha_n^2]$)

$E[f^2(e_n)] = \frac{1}{N^2} \left[ \overline{\alpha_n^2} (\zeta_n + \sigma_\eta^2) + 60 (1 - 2 \bar{\alpha}_n + \overline{\alpha_n^2}) (\zeta_n + \sigma_\eta^2)^3 + 4 (\bar{\alpha}_n - \overline{\alpha_n^2}) (3 \zeta_n^2 + 6 \zeta_n \sigma_\eta^2 + 3 \sigma_\eta^4) \right] \triangleq \mathcal{F}_n.$
(37)

Ultimately, we can rewrite (31) as follows:

$E[\| v_{n+1} \|_A^2] = E[\| v_n \|_A^2] - 2 \mu\, \bar{Z}_n\, E[\| v_n \|_{AR}^2] + \mu^2 E[\| x_n \|_A^2]\, \mathcal{F}_n.$
(38)

The above equation describes the time evolution, or transient behavior, of the weighted variance $E[\| v_n \|_A^2]$ for any constant weight matrix $A$. Different performance measures can be obtained by a proper choice of the weight matrix $A$.

#### 3.2.2 The EMSE and the MSD learning curves

The learning curves for the EMSE and the MSD can be obtained using the fact that $E[e_{a_n}^2] = E[\| v_n \|_R^2]$ while $\mathrm{MSD} = E[\| v_n \|_I^2]$. If we choose $A = I, R, \ldots, R^{N-1}$, a set of relations is obtained from (38), given by

$E[\| v_{n+1} \|_I^2] = E[\| v_n \|_I^2] - 2 \mu \bar{Z}_n E[\| v_n \|_R^2] + \mu^2 E[\| x_n \|_I^2]\, \mathcal{F}_n,$
$E[\| v_{n+1} \|_R^2] = E[\| v_n \|_R^2] - 2 \mu \bar{Z}_n E[\| v_n \|_{R^2}^2] + \mu^2 E[\| x_n \|_R^2]\, \mathcal{F}_n,$
$\vdots$
$E[\| v_{n+1} \|_{R^{N-1}}^2] = E[\| v_n \|_{R^{N-1}}^2] - 2 \mu \bar{Z}_n E[\| v_n \|_{R^N}^2] + \mu^2 E[\| x_n \|_{R^{N-1}}^2]\, \mathcal{F}_n.$
(39)

Now, using the Cayley-Hamilton theorem, we can write

$R^N = - p_0 I - p_1 R - \cdots - p_{N-1} R^{N-1},$
(40)

where

$p(x) \triangleq \det(x I - R) = p_0 + p_1 x + \cdots + p_{N-1} x^{N-1} + x^N,$
(41)

is the characteristic polynomial of R. Consequently, the following relation is obtained:

$E[\| v_{n+1} \|_{R^{N-1}}^2] = E[\| v_n \|_{R^{N-1}}^2] + 2 \mu \left( p_0 E[\| v_n \|_I^2] + p_1 E[\| v_n \|_R^2] + \cdots + p_{N-1} E[\| v_n \|_{R^{N-1}}^2] \right) \bar{Z}_n + \mu^2 E[\| x_n \|_{R^{N-1}}^2]\, \mathcal{F}_n.$
(42)

Ultimately, using (39) and (42), the transient behavior of the proposed algorithm can be shown to be governed by the following recursion:

$\mathcal{W}_{n+1} = A_n \mathcal{W}_n + \mu^2 \mathcal{Y},$
(43)

where

$\mathcal{W}_n = \left[ E[\| v_n \|^2] \;\; E[\| v_n \|_R^2] \;\; \ldots \;\; E[\| v_n \|_{R^{N-1}}^2] \right]^T,$
(44)
$\mathcal{Y} = \left[ E[\| x_n \|^2] \;\; E[\| x_n \|_R^2] \;\; \ldots \;\; E[\| x_n \|_{R^{N-1}}^2] \right]^T \mathcal{F}_n,$
(45)

and

$A_n = \begin{bmatrix} 1 & -2 \mu \bar{Z}_n & 0 & \cdots & 0 \\ 0 & 1 & -2 \mu \bar{Z}_n & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & -2 \mu \bar{Z}_n \\ 2 \mu p_0 \bar{Z}_n & 2 \mu p_1 \bar{Z}_n & \cdots & 2 \mu p_{N-2} \bar{Z}_n & 1 + 2 \mu p_{N-1} \bar{Z}_n \end{bmatrix}.$
(46)

The learning curves for the MSD and the EMSE can be obtained from the first and second entries of the vector $\mathcal{W}_n$, respectively.
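The recursion (43) can be iterated numerically to trace the theoretical learning curves. The sketch below freezes $\bar{Z}_n$ and $\mathcal{F}_n$ at representative constants (in the analysis they vary with $\bar{\alpha}_n$ and $\zeta_n$), so it only illustrates the mechanics of the recursion; all function names and parameter values are illustrative:

```python
import numpy as np

def build_An(p, mu, Zbar):
    """Matrix (46) from the ascending characteristic-polynomial
    coefficients p = [p0, ..., p_{N-1}]."""
    N = len(p)
    An = np.eye(N) + np.diag(np.full(N - 1, -2.0 * mu * Zbar), k=1)
    An[-1, :] = 2.0 * mu * Zbar * np.asarray(p)   # Cayley-Hamilton row (42)
    An[-1, -1] += 1.0
    return An

def transient_msd(R, mu, Zbar, F, steps, v0):
    """Iterate W_{n+1} = A_n W_n + mu^2 Y of (43); return the MSD curve
    (first entry of W_n), for a deterministic initial weight error v0."""
    N = R.shape[0]
    p = np.poly(R)[::-1][:N]     # det(xI - R) = p0 + p1 x + ... + x^N
    An = build_An(p, mu, Zbar)
    # Y as in (45): for a zero-mean input, E[||x||^2_{R^k}] = tr(R^{k+1}).
    Y = F * np.array([np.trace(np.linalg.matrix_power(R, k + 1)) for k in range(N)])
    W = np.array([v0 @ np.linalg.matrix_power(R, k) @ v0 for k in range(N)])
    msd = [W[0]]
    for _ in range(steps):
        W = An @ W + mu ** 2 * Y
        msd.append(W[0])
    return np.array(msd)

msd = transient_msd(np.diag([1.0, 2.0]), mu=0.05, Zbar=0.5, F=0.01,
                    steps=500, v0=np.array([1.0, 1.0]))
```

For a small enough step size, the iterated curve decays from its initial value toward a small steady-state floor set by the driving term $\mu^2 \mathcal{Y}$.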

#### 3.2.3 Mean-square stability

Finally, in this section, the mean-square stability of the proposed algorithm is investigated. We provide a nontrivial upper bound on $\mu$ for which $E[\| v_n \|^2]$ remains uniformly bounded for all $n$.

Starting from (31) with $A = I$ and using the Gaussian behavior of $e_{a_n}$, it can be shown that the proposed algorithm will be mean-square stable provided that

$E[\| v_{n+1} \|^2] \le E[\| v_n \|^2] \;\Longleftrightarrow\; \mu E[\| x_n \|^2 f^2(e_n)] \le 2 E[e_{a_n} f(e_n)].$
(47)

Substituting the values of the two expectations $E[e_{a_n} f(e_n)]$ and $E[\| x_n \|^2 f^2(e_n)]$ into the above inequality leads to the following bound:

$\mu \le \frac{N C \left[ \bar{\alpha}_n + 6 (1 - \bar{\alpha}_n) (C + \sigma_\eta^2) \right]}{\left[ \overline{\alpha_n^2} (C + \sigma_\eta^2) + 60 (1 - 2 \bar{\alpha}_n + \overline{\alpha_n^2}) (C + \sigma_\eta^2)^3 + 4 (\bar{\alpha}_n - \overline{\alpha_n^2}) (3 C^2 + 6 C \sigma_\eta^2 + 3 \sigma_\eta^4) \right] \mathrm{tr}(R)}.$
(48)

## 4 Steady-state analysis of the VPNMN algorithm

The purpose of the steady-state analysis of an adaptive filter is to study the behavior of its steady-state EMSE. To this end, (31) is analyzed in the limiting case $n \to \infty$, assuming that the weight error vector reaches a steady-state mean-square value, i.e.,

$\lim_{n \to \infty} E[\| v_{n+1} \|^2] = \lim_{n \to \infty} E[\| v_n \|^2] < \infty.$
(49)

Consequently, for a unity weight matrix (A = I), (31) reduces to the following:

$\lim_{n \to \infty} E[e_{a_n}^2] = \frac{\mu}{2} \lim_{n \to \infty} E[\| x_n \|^2]\, \frac{\lim_{n \to \infty} \mathcal{F}_n}{\lim_{n \to \infty} \bar{Z}_n}.$
(50)

Now, using the definition of the EMSE given by (28), its steady-state value, denoted by $\zeta_\infty$, is found to be

$\zeta_\infty = \frac{\mu}{2}\, \frac{\lim_{n \to \infty} \mathcal{F}_n}{\lim_{n \to \infty} \bar{Z}_n}\, \mathrm{tr}(R).$
(51)

The terms $\lim_{n \to \infty} \bar{Z}_n$ and $\lim_{n \to \infty} \mathcal{F}_n$ can be obtained from (35) and (37), respectively.

Since the EMSE is very close to zero at steady state, the higher powers of $\zeta_\infty$ can be ignored. Ultimately, the steady-state EMSE of the proposed algorithm can be shown to be

$\zeta_\infty = \frac{\mu \left[ \overline{\alpha_\infty^2}\, \sigma_\eta^2 + 60 (1 - 2 \bar{\alpha}_\infty + \overline{\alpha_\infty^2}) \sigma_\eta^6 + 12 (\bar{\alpha}_\infty - \overline{\alpha_\infty^2}) \sigma_\eta^4 \right] \mathrm{tr}(R)}{2 N \bar{\alpha}_\infty + 12 N (1 - \bar{\alpha}_\infty) \sigma_\eta^2 - \mu \left[ \overline{\alpha_\infty^2} + 180 (1 - 2 \bar{\alpha}_\infty + \overline{\alpha_\infty^2}) \sigma_\eta^4 + 24 (\bar{\alpha}_\infty - \overline{\alpha_\infty^2}) \sigma_\eta^2 \right] \mathrm{tr}(R)}.$
(52)

## 5 Tracking analysis of the VPNMN algorithm

Cyclic and random system nonstationarities are a common impairment in communication systems, especially in applications that involve channel estimation, channel equalization, and inter-symbol-interference cancellation. Random nonstationarity is present due to variations in channel characteristics, which is true in most cases, particularly in a mobile communication environment. Cyclic system nonstationarities arise in communication systems due to mismatches between the transmitter and receiver carrier generators.

The ability of adaptive filtering algorithms to track such system variations is not yet fully understood. In this regard, Rupp presented a first-order analysis of the performance of the LMS algorithm in the presence of a carrier frequency offset. In [21, 25, 28, 29], a general framework for the tracking analysis of adaptive algorithms was developed, which can handle both cyclic and random system nonstationarities simultaneously. This framework, based on an energy conservation principle, holds for all adaptive algorithms whose recursions are of the form

$w_{n+1} = w_n + \mu\, x_n^* f(e_n).$
(53)

In the ensuing analysis, the tracking analysis of the proposed algorithm is carried out in the presence of both random and cyclic nonstationarities. It should be noted that, unlike the convergence analysis, which is a linear process, the tracking analysis is a nonlinear one due to the presence of the term $e^{j \Omega n}$ in (54). This justifies our use of complex signals, instead of real ones, in the tracking analysis.

A general system model is presented here which includes both types of nonstationarities, that is random and cyclic ones. To start, consider the noisy measurement d n that arises in a model of the form:

$d_n = x_n^H w_n^o e^{j \Omega n} + \eta_n,$
(54)

where $\eta_n$ is the measurement noise and $w_n^o$ is the unknown system to be tracked. The multiplicative term $e^{j \Omega n}$ accounts for a possible frequency offset between the transmitter and receiver carriers in a digital communication scenario. Furthermore, the unknown system vector $w_n^o$ is assumed to change randomly according to:

$w_n^o = w^o + q_n,$
(55)

where $w^o$ is a fixed vector and $q_n$ is assumed to be a zero-mean stationary random vector process with a positive definite autocorrelation matrix $Q_n = E[q_n q_n^H]$. Moreover, the sequence $\{q_n\}$ is assumed to be mutually independent of the sequences $\{x_n\}$ and $\{\eta_n\}$. Thus, the generalized system model given by (54) and (55) includes the effects of both cyclic and random system nonstationarities.
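A sketch of drawing one noisy measurement according to the model (54)-(55) is given below. The i.i.d. Gaussian choice for $q_n$, the function name, and the default sigma values are our assumptions for illustration only; the analysis merely requires $\{q_n\}$ to be zero mean, stationary, and independent of $\{x_n\}$ and $\{\eta_n\}$:

```python
import numpy as np

def generate_measurement(x, w_opt, n, Omega,
                         sigma_q=1e-3, sigma_eta=1e-2, rng=None):
    """Draw one noisy measurement d_n from the model (54)-(55)."""
    if rng is None:
        rng = np.random.default_rng()
    L = len(w_opt)
    q = sigma_q * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
    w_true = w_opt + q                                     # (55): w_n^o = w^o + q_n
    eta = sigma_eta * rng.standard_normal()
    d = np.vdot(x, w_true) * np.exp(1j * Omega * n) + eta  # (54): x^H w_n^o e^{jOmega n} + eta_n
    return d, w_true
```

Note that the signals are complex, in line with the remark above that the cyclic factor $e^{j \Omega n}$ makes a real-valued formulation insufficient.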

In the tracking analysis of adaptive algorithms, an important performance measure is the steady-state tracking EMSE, given by

$\zeta_{\mathrm{tracking}} = \lim_{n \to \infty} E\{ | x_n^H \tilde{v}_n |^2 \},$
(56)

where $\tilde{v}_n$ is the weight error vector for the tracking scenario, defined as follows:

$\tilde{v}_n = w_n^o e^{j \Omega n} - w_n.$
(57)

Using (53), (55) and (57) the following recursion is obtained:

$\tilde{v}_{n+1} = \tilde{v}_n - \mu\, x_n^* f(e_n) + c_n e^{j \Omega n},$
(58)

where c n is defined as

$c_n = w^o (e^{j \Omega} - 1) + q_{n+1} e^{j \Omega} - q_n.$
(59)

Now, let us define the a priori estimation error $e_{a_n} = x_n^H \tilde{v}_n$ and the a posteriori estimation error $e_{p_n} = x_n^H (\tilde{v}_{n+1} - c_n e^{j \Omega n})$. Then, it is easy to show that the estimation error and the a priori error are related via $e_n = e_{a_n} + \eta_n$. Also, from (26) with $A = I$, the a posteriori error is expressed in terms of the a priori error as follows:

$e_{p_n} = e_{a_n} - \frac{\mu}{\hat{\mu}_n} f(e_n),$
(60)

where $\hat{\mu}_n = 1 / \| x_n \|^2$. Substituting (60) into (58) results in the following update relation:

$\tilde{v}_{n+1} = \tilde{v}_n - \hat{\mu}_n x_n^* \left[ e_{a_n} - e_{p_n} \right] + c_n e^{j \Omega n}.$
(61)

By evaluating the energies of both sides of the above equation (taking into account that $\hat{\mu}_n \| x_n \|^2 = 1$), the following relation is obtained:

$\| \tilde{v}_{n+1} - c_n e^{j \Omega n} \|^2 + \hat{\mu}_n | e_{a_n} |^2 = \| \tilde{v}_n \|^2 + \hat{\mu}_n | e_{p_n} |^2.$
(62)

It can be seen that if Ω = 0 (i.e., no frequency offset between the transmitter and the receiver), the above equation reduces to the basic fundamental energy conservation relation.

The energy relation (62) will be used to evaluate the excess mean-square error at steady state. Before starting the analysis, the following assumption is stated:

A.8 In steady state, the weight error vector $\tilde{v}_n$ takes the generic form $z_n e^{j \Omega n}$, with the stationary random process $z_n$ independent of the frequency offset $\Omega$.

Using (60), assumption A.8, taking expectations of both sides of (62), and using the fact that at steady state $E[\| \tilde{v}_{n+1} \|^2] = E[\| \tilde{v}_n \|^2]$, the following relation can be obtained:

$E\left[ \hat{\mu}_n | e_{a_n} |^2 \right] = 2\, \mathrm{tr}\{ Q_n \} + \| w^o \|^2 | 1 - e^{j \Omega} |^2 - 2\, \mathrm{Re}\left\{ E\left[ q_n^H \left( z_n - \mu x_n^* f(e_n) e^{-j \Omega n} \right) \right] \right\} - 2\, \mathrm{Re}\left\{ (1 - e^{j \Omega})^* {w^o}^H E\left[ z_n - \mu x_n^* f(e_n) e^{-j \Omega n} \right] \right\} + E\left[ \hat{\mu}_n \left| e_{a_n} - \frac{\mu}{\hat{\mu}_n} f(e_n) \right|^2 \right].$
(63)

The above equation can be used to solve for the steady-state EMSE. To find the value of $z = E[z_n]$, (58) is multiplied by the term $e^{-j \Omega n}$, and expectations are then taken on both sides to get

$(1 - e^{j \Omega}) z = \mu E\left[ x_n^* f(e_n) e^{-j \Omega n} \right] + w^o (1 - e^{j \Omega}),$
(64)

which yields the following value of z at steady-state:

$z = \left[ I - \frac{\mu \gamma_o}{1 - e^{j \Omega}} R \right]^{-1} w^o,$
(65)

where $\gamma_o$ is defined as

$\gamma_o = \left[ \bar{\alpha}_n + 6 (1 - \bar{\alpha}_n) \sigma_\eta^2 \right] \frac{1}{\mathrm{tr}\{ R \}}.$
(66)

Ultimately, the steady-state excess-mean-square error of the proposed algorithm, ζtracking, is obtained from (63):

$\zeta_{\mathrm{tracking}} = \frac{a}{a - \mu b} \left[ \mathrm{tr}\{ Q_n R \} + \frac{\beta_o}{2 \mu \gamma_o} + \frac{c \mu}{a} \right],$
(67)

where

$\beta_o = | 1 - e^{j \Omega} |^2\, \mathrm{Re}\left\{ \mathrm{tr}\left( \| w^o \|^2 (I - 2 X) \right) \right\},$
(68)
$X = (I - \mu \gamma_o R) \left[ I - \mu \gamma_o R - e^{j \Omega} I \right]^{-1},$
(69)

and

$a = 2 \left[ \bar{\alpha}_n + 6 (1 - \bar{\alpha}_n) \sigma_\eta^2 \right],$
$b = E[\alpha_n^2] + 12 E[\alpha_n (1 - \alpha_n)] \sigma_\eta^2 + 36 E[(1 - \alpha_n)^2] \chi_\eta^4,$
$c = E[\alpha_n^2] \sigma_\eta^2 + 4 E[\alpha_n (1 - \alpha_n)] \chi_\eta^4 + 4 E[(1 - \alpha_n)^2] \chi_\eta^6,$
$\chi_\eta^4 = E[\eta_n^4], \quad \chi_\eta^6 = E[\eta_n^6].$

It can be seen from the above result that the steady-state tracking EMSE of the NLMS algorithm and that of the NLMF algorithm can be recovered by substituting $\alpha_n = 1$ and $\alpha_n = 0$, respectively, in (67).

For a white Gaussian input signal, the autocorrelation matrix of the input is $R = \sigma_x^2 I$, and therefore (67) reduces to the following:

$\zeta_{\mathrm{tracking}} = \frac{a}{a - \mu b \sigma_x^2} \left[ \mathrm{tr}\{ Q_n \} + \frac{N^2 \sigma_x^2 (4 N - \mu a) \Omega^2}{\mu^2 a^2} \| w^o \|^2 + \frac{c \mu}{a} \right].$
(70)

## 6 Simulation results

The performance of the proposed VPNMN LMS-LMF algorithm is assessed in different scenarios. Experiments are carried out in which an unknown system is to be identified under noisy conditions. The unknown system is a non-minimum-phase channel. The input signal to both the unknown system and the adaptive filter is obtained by passing a zero-mean white Gaussian sequence through a channel that is used to vary the eigenvalue spread of the autocorrelation matrix of the input signal. The example considered for the sequence $\{x_n\}$ has an eigenvalue spread of 68.9. The additive noise $\eta_n$ is zero-mean. The signal-to-noise ratio is set to 20 dB, and the performance measure considered is the normalized weight error norm $10 \log_{10} (\| w_n - w_{\mathrm{opt}} \|^2 / \| w_{\mathrm{opt}} \|^2)$. Results are obtained by averaging over 500 independent runs. The proposed algorithm is implemented with the parameters $\delta = 0.97$, $\beta = 0.98$, $\gamma = 10^{-2}$, $\alpha_0 = 0.8$, and $p_0 = 0$. In the ensuing, different aspects of the performance are considered.

### 6.1 Convergence behavior

Figure 1 compares the fastest convergence characteristics of the proposed algorithm and the NLMS algorithm. It can be seen from this figure that the proposed algorithm converges as fast as the NLMS algorithm but results in a lower weight mismatch; an improvement of 25 dB is obtained through the use of the proposed algorithm. Also, as shown in Figure 2, the proposed algorithm outperforms the NLMS algorithm at the lowest steady-state error reached by the latter, thanks to its built-in gear-shifting mechanism, which gives it an extra degree of freedom in this region.

The fast convergence obtained by the proposed algorithm can be justified by the fact that when far from the optimum solution, this algorithm exhibits faster convergence than the NLMS algorithm by automatically increasing the step size (gear-shifting property).

Figure 3 summarizes the performance of the proposed VPNMN algorithm in three different noise environments with an SNR of 20 dB when the input signal is white. As can be seen from this figure, the best performance is obtained when the noise statistics are uniform, while the worst performance is obtained when the noise statistics are Laplacian.

Similarly, Figure 4 depicts the results for the proposed VPNMN algorithm when the input signal is highly correlated; as can be seen from this figure, almost equal performance is obtained by the VPNMN algorithm for the different noise statistics.

To verify the stability bound on the step size given in (48), we investigate it in a Gaussian environment with an SNR of 20 dB. Here, we choose a misadjustment of five, which results in a Cramer-Rao bound of $C \le 0.05$. Thus, choosing $\mathrm{tr}(R) = 5$, the upper bound given in (48) is found to be 0.95. It is observed from the various simulations performed that the proposed algorithm is stable while $\mu$ is less than 1.0, which validates the derived stability bound.

Finally, from the viewpoint of computational load, the proposed algorithm requires an additional seven multiplications and three additions when compared to the fixed mixed-norm algorithm defined by (4), and only eleven multiplications and six additions when compared to the NLMS algorithm. The small computational overhead of the proposed algorithm is therefore well worth the gain in steady-state error reduction it brings about.

### 6.2 Results for the MSE learning curve

Figure 5 depicts the time evolution of the MSE obtained from both the theoretical analysis, the second entry of (44), and the simulations. Excellent agreement between theory and simulation is obtained, confirming the consistency of the proposed VPNMN algorithm's performance.

### 6.3 Results for tracking

For tracking, the simulations are carried out for a system identification problem, where the unknown system, having an FIR model, is given by $[1.0119 - j0.7589, -0.3796 + j0.5059]^T$, while the system characteristics are time-varying according to the system model (54) and (55). Results for the proposed algorithm are presented to validate the theoretical findings for different values of Ω and different values of µ. The input signal $x_n$ to both the unknown system and the adaptive filter is a zero-mean white Gaussian sequence. The signal-to-noise ratio is set equal to 30 dB, and two values are considered for $\mathrm{tr}\{Q_n\}$: a very small value of $\mathrm{tr}\{Q_n\} = 10^{-7}$, and a very large one of $\mathrm{tr}\{Q_n\} = 10^{-2}$.
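Since (54) and (55) are not reproduced in this section, the sketch below generates the time-varying unknown system under a commonly used model for combined cyclic and random nonstationarities (cf. [28, 29]): the optimum weight vector rotates with frequency offset Ω while a random-walk term with increment covariance of trace $\mathrm{tr}\{Q_n\}$ is accumulated. The exact form of (54)-(55), and the choices of N and the per-component noise split, are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# FIR model of the unknown system, as given in the text
w0 = np.array([1.0119 - 0.7589j, -0.3796 + 0.5059j])

N = 5000            # number of iterations (assumption)
Omega = 0.002       # frequency offset of the cyclic nonstationarity
trQ = 1e-7          # tr{Q_n}: the "very small" case from the text

# Split tr{Q_n} evenly over the real and imaginary parts of each tap
q_std = np.sqrt(trQ / (2 * len(w0)))

wn = np.empty((N, len(w0)), dtype=complex)
walk = np.zeros(len(w0), dtype=complex)
for n in range(N):
    # random-walk (random nonstationarity) increment
    walk += q_std * (rng.standard_normal(len(w0))
                     + 1j * rng.standard_normal(len(w0)))
    # cyclic nonstationarity: rotation by exp(jΩn), plus the random walk
    wn[n] = w0 * np.exp(1j * Omega * n) + walk
```

With $\mathrm{tr}\{Q_n\} = 10^{-7}$ the random-walk drift stays small over the run, so the cyclic rotation dominates the trajectory, which is consistent with Figure 6 being governed mainly by Ω.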

Figure 6 depicts the comparison of the theory to the simulation results for three different values of Ω, i.e., Ω = 0.001, 0.002, and 0.003. As can be seen from this figure, close agreement between theory and simulation is obtained. Furthermore, it is observed that performance degrades as the frequency offset Ω increases and that, unlike in the stationary case, the steady-state EMSE is not a monotonically increasing function of the step-size µ; that is, the steady-state EMSE can be smaller at larger values of µ.

Figure 6 is obtained for the case $\mathrm{tr}\{Q_n\} = 10^{-7}$, which represents a small value. Increasing this value to $10^{-2}$, the results depicted in Figure 7 for three larger values of Ω, i.e., 0.01, 0.02, and 0.03, show that the observations stated above remain similar to those obtained for the smaller value of $\mathrm{tr}\{Q_n\}$.

Finally, the steady-state EMSE performance of the proposed algorithm is consistent across both values of $\mathrm{tr}\{Q_n\}$ and the different values of Ω.

### 6.4 Noise cancelation using VPNMN algorithm

In this example, we study the performance of the VPNMN algorithm in a noise cancelation application. A pure sinusoidal noise generated by the process $u_n = 0.8 \sin(\omega n + 0.5\pi)$ with ω = 0.1π is to be removed from a square wave generated by $s_n = 2 \times ((\mathrm{mod}(n, 1000) < 1000/2) - 0.5)$, where mod(n, 1000) computes the modulus of n over 1,000. Summing $u_n$ and $s_n$ gives the reference signal to the adaptive filter. The input to the adaptive filter is a sinusoidal signal generated by $x_n = 2 \sin(\omega n)$ with ω = 0.1π. The resulting output error signal $e_n$ will, in time, converge to the desired signal, which will be noiseless.
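The setup above can be reproduced directly from the formulas given. Since the VPNMN update equation appears earlier in the paper, the sketch below uses a plain NLMS canceller as a stand-in adaptive filter; the filter length M and step size mu are assumptions, not values from the text.

```python
import numpy as np

N = 8000
n = np.arange(N)
omega = 0.1 * np.pi

# Signals as defined in the text
s = 2.0 * ((np.mod(n, 1000) < 1000 / 2) - 0.5)   # square wave (desired)
u = 0.8 * np.sin(omega * n + 0.5 * np.pi)        # sinusoidal noise
d = s + u                                        # reference (primary) input
x = 2.0 * np.sin(omega * n)                      # adaptive-filter input

# NLMS noise canceller (stand-in for VPNMN; M, mu, eps are assumptions)
M, mu, eps = 8, 0.02, 1e-6
w = np.zeros(M)
xbuf = np.zeros(M)
e = np.zeros(N)
for k in range(N):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = x[k]
    y = w @ xbuf                 # filter estimate of the noise u_k
    e[k] = d[k] - y              # error -> converges to the square wave s_k
    w += mu * e[k] * xbuf / (eps + xbuf @ xbuf)

# Residual noise power before and after convergence
early = np.mean((e[:50] - s[:50]) ** 2)
late = np.mean((e[-1000:] - s[-1000:]) ** 2)
```

Because x and u share the same frequency, the filter learns the phase/amplitude map from x to u, so e ≈ s after convergence; `late` drops well below `early`, mirroring the behavior reported in Figure 8.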

Figure 8 depicts the reference response and the processed results of the VPNMN and NLMS algorithms. It is clear that both algorithms are able to remove the noise component, but the VPNMN algorithm exhibits better noise-cancelation capability than the NLMS algorithm.

## 7 Conclusion

In this study, a normalized VPNMN algorithm is proposed in which a combination of the LMS and LMF algorithms is incorporated using the concept of variable step-size LMS adaptation. The proposed algorithm is found to have the fast convergence of the NLMS algorithm while achieving a lower steady-state error, thereby resolving the usual conflict between these two objectives, i.e., fast convergence and low steady-state error. Moreover, the consistency of the performance of the proposed algorithm has been confirmed by the many simulation results reported here.

Analytical results for the tracking steady-state EMSE are derived for the proposed algorithm in the presence of both random and cyclic nonstationarities. The results show that, unlike in the stationary case, the steady-state EMSE is not a monotonically increasing function of the step-size µ, while the ability of the algorithm to track variations in the environment degrades as the frequency offset Ω increases.

Finally, the VPNMN algorithm proved its usefulness in a noise cancelation scenario where it showed its superiority over the NLMS algorithm.

## References

1. Widrow B, McCool JM, Larimore MG, Johnson CR: Stationary and nonstationary learning characteristics of the LMS adaptive filter. Proc IEEE 1976, 64(8):1151-1162.
2. Macchi O: Adaptive Processing: The LMS Approach with Applications in Transmission. Wiley, New York; 1995.
3. Walach E, Widrow B: The least mean fourth (LMF) adaptive algorithm and its family. IEEE Trans Inf Theory 1984, 30:275-283. 10.1109/TIT.1984.1056886
4. Haykin S: Adaptive Filter Theory. 3rd edition. Prentice-Hall, Englewood Cliffs; 1996.
5. Nagumo JI, Noda A: A learning method for system identification. IEEE Trans Automat Control 1967, 12:282-287.
6. Zerguine A: Convergence and steady-state analysis of the normalized least mean fourth algorithm. Digital Signal Process 2007, 17(1):17-31. 10.1016/j.dsp.2006.01.005
7. Zerguine A: Convergence behavior of the normalized least mean fourth algorithm. Proc 34th Annual Asilomar Conf Signals, Syst Comput 2000, 275-278.
8. Chan MK, Cowan CFN: Using a normalised LMF algorithm for channel equalisation with co-channel interference. XI Euro Sig Process Conf Eusipco 2002, 2:49-51.
9. Tanrikulu O, Chambers JA: Convergence and steady-state properties of the least-mean mixed-norm (LMMN) adaptive algorithm. IEE Proc Vis Image Signal Process 1996, 143(3):137-142.
10. Zerguine A, Cowan CFN, Bettayeb M: LMS-LMF adaptive scheme for echo cancellation. Electron Lett 1996, 32(19):1776-1778. 10.1049/el:19961202
11. Al-Naffouri TY, Zerguine A, Bettayeb M: Convergence properties of mixed-norm algorithms under general error criteria. IEEE ISCAS '99 1999, 211-214.
12. Tarrab M, Feuer A: Convergence and performance analysis of the normalized LMS algorithm with uncorrelated Gaussian data. IEEE Trans Inf Theory 1988, 34:680-691. 10.1109/18.9768
13. Sulyman AI, Zerguine A: Convergence and steady-state analysis of a variable step-size NLMS algorithm. Signal Process 2003, 83(6):1255-1273. 10.1016/S0165-1684(03)00044-6
14. Zerguine A, Aboulnasr T: Convergence analysis of the variable weight mixed-norm LMS-LMF adaptive algorithm. Proc 34th Annual Asilomar Conf Signals, Syst, Comput 2000, 279-282.
15. Aboulnasr T, Mayyas K: A robust variable step-size LMS-type algorithm: analysis and simulations. IEEE Trans Signal Process 1997, 45(3):631-639. 10.1109/78.558478
16. Mazo JE: On the independence theory of equalizer convergence. Bell Syst Tech J 1979, 58:963-993.
17. Cho SH, Kim SD, Jean KY: Statistical convergence of the adaptive least mean fourth algorithm. Proceedings of the ICSP'96 1996, 610-613.
18. Hubscher PI, Bermudez JCM: An improved statistical analysis of the least mean fourth (LMF) adaptive algorithm. IEEE Trans Signal Process 2003, 51(3):664-671. 10.1109/TSP.2002.808126
19. Proakis JG: Digital Communications. 4th edition. McGraw-Hill, Singapore; 2001.
20. Rupp M, Sayed AH: A time-domain feedback analysis of filtered-error adaptive gradient algorithms. IEEE Trans Signal Process 1996, 44:1428-1439. 10.1109/78.506609
21. Sayed AH: Adaptive Filters. Wiley, NJ; 2008.
22. Bershad NJ, Bonnet M: Saturation effects in LMS adaptive echo cancellation for binary data. IEEE Trans Acoust Speech Signal Process 1990, 38(10):1687-1696. 10.1109/29.60100
23. Papoulis A: Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York; 1991.
24. Douglas SC, Meng TH-Y: Stochastic gradient adaptation under general error criteria. IEEE Trans Signal Process 1994, 42(6):1352-1365. 10.1109/78.286952
25. Yousef NR, Sayed AH: A unified approach to the steady-state and tracking analysis of adaptive filters. IEEE Trans Signal Process 2001, 49:314-324. 10.1109/78.902113
26. Rappaport TS: Wireless Communications. Prentice-Hall, Upper Saddle River; 1996.
27. Rupp M: LMS tracking behavior under periodically changing systems. In EUSIPCO-1998. Island of Rhodes, Greece; 1998:1253-1256.
28. Moinuddin M, Zerguine A: Tracking analysis of the NLMS algorithm in the presence of both random and cyclic nonstationarities. IEEE Signal Process Lett 2003, 10(9):256-258.
29. Moinuddin M, Zerguine A, Sheikh AUH: Tracking analysis of the NLMF algorithm in the presence of both random and cyclic nonstationarities. In ISSPA 2005. Sydney, Australia; 2005:755-758.

## Acknowledgements

The author acknowledges the support of the Deanship of Scientific Research at King Fahd University of Petroleum & Minerals. This research work is funded by the King Fahd University of Petroleum & Minerals under Research Grants (FT090016) and (SB101024).

## Author information


### Corresponding author

Correspondence to Azzedine Zerguine.

### Competing interests

The author declares that they have no competing interests.



Zerguine, A. A variable-parameter normalized mixed-norm (VPNMN) adaptive algorithm. EURASIP J. Adv. Signal Process. 2012, 55 (2012). https://doi.org/10.1186/1687-6180-2012-55 