# A variable-parameter normalized mixed-norm (VPNMN) adaptive algorithm

- Azzedine Zerguine

**2012**:55

https://doi.org/10.1186/1687-6180-2012-55

© Zerguine; licensee Springer. 2012

**Received: **1 August 2011

**Accepted: **5 March 2012

**Published: **5 March 2012

## Abstract

Since both the least mean-square (LMS) and least mean-fourth (LMF) algorithms individually suffer from the problem of eigenvalue spread, the mixed-norm LMS-LMF algorithm will suffer from it as well. Therefore, to overcome this problem for the mixed-norm LMS-LMF, we adopt here the same technique of normalization (normalizing by the power of the input) that was successfully used with the LMS and LMF separately. Consequently, a new variable-parameter normalized mixed-norm (VPNMN) adaptive algorithm is proposed in this study. This algorithm is derived by exploiting a time-varying mixing parameter in the traditional mixed-norm LMS-LMF weight update equation. The time-varying mixing parameter is adjusted according to a well-known technique used in the adaptation of the step-size parameter of the LMS algorithm. To study the theoretical aspects of the proposed VPNMN adaptive algorithm, this work also addresses its convergence analysis and assesses its performance using the concept of energy conservation. Extensive simulation results corroborate our theoretical findings and show that a substantial improvement, in both convergence time and steady-state error, can be obtained with the proposed algorithm. Finally, the VPNMN algorithm proved its usefulness in a noise cancellation application, where it showed its superiority over the normalized least-mean-square algorithm.

## 1 Introduction

Due to its simplicity, the least mean-square (LMS) [1, 2] algorithm is the most widely used algorithm for adaptive filters in many applications. The least mean-fourth (LMF) [3] algorithm was also proposed later as a special case of the more general family of steepest descent algorithms [4] with 2*k* error norms, *k* being a positive integer.

The convergence behavior of both algorithms depends on the eigenvalue distribution of the autocorrelation matrix of the input signal **x**_{n}. This is clearly seen from their respective time constants [1, 3], where *λ*_{i} is the *i*th eigenvalue of the autocorrelation matrix of the input signal, *µ* is the step size used in the adaptation scheme, and *N* is the number of coefficients in the adaptive filter. As seen from (1) and (2), the ratio $\left(\frac{{\tau}_{\text{max}}}{{\tau}_{\text{min}}}\right)$ is constant for both algorithms and is given by the eigenvalue spread (i.e., condition number) $\frac{{\lambda}_{\text{max}}}{{\lambda}_{\text{min}}}$, i.e.,
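For concreteness (this example is not from the paper), the eigenvalue spread can be computed numerically for a hypothetical correlated input; the AR(1) model, its coefficient, and the filter length below are assumptions chosen only for illustration:

```python
import numpy as np

# Eigenvalue spread (condition number) of the input autocorrelation matrix.
# Illustrative only: for an AR(1) process x_n = a*x_{n-1} + u_n with unit
# variance, the normalized correlation sequence is r(k) = a**|k|.
a, N = 0.9, 5                      # AR coefficient and filter length (assumed values)
r = a ** np.abs(np.arange(N))      # normalized autocorrelation sequence
R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])  # Toeplitz R
lam = np.linalg.eigvalsh(R)        # real, positive eigenvalues of symmetric R
spread = lam.max() / lam.min()     # eigenvalue spread = lambda_max / lambda_min
```

A larger AR coefficient `a` (more correlated input) makes the spread, and hence the disparity between the fastest and slowest LMS/LMF modes, grow rapidly.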

To remove the dependency of the convergence of the LMS algorithm on the condition number, the normalized least-mean square (NLMS) [5] was introduced. As reported in [5], a great improvement in convergence is obtained through the use of the NLMS algorithm over that of the LMS algorithm at the expense of a larger steady-state error. Similar results were obtained for the case of the normalized LMF (NLMF) algorithm [6–8].
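The difference between the two updates can be sketched as follows (a toy illustration with an assumed system, noise level, and step size, not the paper's code):

```python
import numpy as np

def lms_step(w, x, d, mu):
    """One LMS iteration: w <- w + mu * e * x (step scales with input power)."""
    e = d - w @ x
    return w + mu * e * x, e

def nlms_step(w, x, d, mu, eps=1e-6):
    """One NLMS iteration: the step is divided by the input power ||x||^2,
    removing the dependence of the convergence speed on the input scaling."""
    e = d - w @ x
    return w + (mu / (eps + x @ x)) * e * x, e

# Toy identification of a 4-tap system (hypothetical setup).
rng = np.random.default_rng(0)
w_opt = np.array([1.0, -0.5, 0.25, 0.1])
w = np.zeros(4)
for _ in range(2000):
    x = rng.standard_normal(4)
    d = w_opt @ x + 0.01 * rng.standard_normal()
    w, _ = nlms_step(w, x, d, mu=0.5)
print(np.round(w, 2))
```

With white input the two behave similarly; the normalization pays off precisely when the input is colored and the eigenvalue spread is large.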

A mixed-norm algorithm [9–11], combining the LMS and LMF algorithms, inherits the eigenvalue-spread dependency from which both of its constituents suffer individually. To circumvent this problem for the mixed-norm LMS-LMF, we adopt here the same normalization technique that was successfully used with the LMS and LMF separately.

It is well known that fast convergence and lower steady-state error are two conflicting parameters in general adaptive filtering. When compared to the LMS algorithm, the NLMS algorithm results in a faster convergence but only at the expense of a higher steady-state error [12, 13]. A promising solution to this conflict is a time-varying normalized mixed-norm LMS-LMF algorithm. In this mixed-norm algorithm and during the transient state, the NLMS algorithm is used to speed up the algorithm's convergence. However when steady-state is reached, the algorithm automatically switches from the NLMS to the NLMF [7], thanks to a built-in "gear shifting" property, to secure a lower steady-state error.

In this work, the performance of a variable-parameter normalized mixed-norm (VPNMN) LMS-LMF algorithm is evaluated. It will be shown that a better performance in both convergence and steady-state error will be achieved by the VPNMN algorithm than either the NLMS or the NLMF algorithm.

The rest of the article is organized as follows. Section 2 deals with a more explicit development of the proposed algorithm, and Section 3 treats its convergence analysis. The steady-state analysis of the proposed algorithm is detailed in Section 4, while its tracking analysis is given in Section 5. Performance evaluation of the resulting algorithm is carried out in Section 6. Finally, the conclusion section summarizes this work.

## 2 Algorithm development

The estimation error *e*_{n} is defined as ${e}_{n}={d}_{n}-{\mathbf{w}}_{n}^{T}{\mathbf{x}}_{n}$, where *d*_{n} is the desired value, **w**_{n} is the coefficient vector of the adaptive filter, **x**_{n} is the input signal, and *η*_{n} is the additive noise contained in *d*_{n}.

The optimal value of the mixing parameter is not known *a priori* for an unknown system. In [14], a self-adapting LMS-LMF algorithm with a time-varying weighting factor was proposed. This time variation of the weighting factor was achieved by allowing for a variable mixing factor that is updated every iteration using the modified variable step-size (MVSS) algorithm proposed in [15]. The variable-weight mixed-norm LMS-LMF algorithm was defined to minimize the following performance measure [14]:

where *α*_{n}, chosen in [0, 1] such that the unimodal character of the above cost is preserved, is a time-varying parameter updated according to [15].

The parameters *δ* and *β*, both confined to the interval [0, 1], are exponential weighting parameters that govern the averaging time constant, i.e., the quality of estimation of the algorithm, and *γ* > 0. Note that the algorithm defined by (4) is restored when *δ* = 1 and *γ* = 0, which forces *α*_{n} to have a fixed value.
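Since the display equations for the *α*_{n} recursion were lost in conversion, the sketch below uses the MVSS-style form commonly reported for [15] (a low-pass-filtered error correlation *p*_{n} drives *α*_{n}, which is then clipped to [0, 1]); treat the exact recursion as an assumption, not the paper's equation:

```python
import numpy as np

def update_alpha(alpha, p, e, e_prev, delta=0.97, beta=0.98, gamma=1e-2):
    """MVSS-style update of the mixing parameter (assumed form, after [15]):
       p_n     = beta * p_{n-1} + (1 - beta) * e_n * e_{n-1}
       alpha_n = delta * alpha_{n-1} + gamma * p_n**2, clipped to [0, 1].
    With delta = 1 and gamma = 0, alpha stays fixed, recovering (4)."""
    p = beta * p + (1 - beta) * e * e_prev
    alpha = np.clip(delta * alpha + gamma * p ** 2, 0.0, 1.0)
    return alpha, p
```

Intuitively, the error autocorrelation *e*_{n}*e*_{n-1} is large far from the optimum and near zero at steady state, so *α*_{n} decays once the filter has converged, shifting the weight from the LMS term to the LMF term.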

where *μ* is the step size.

where ║**x**_{n}║^{2} is the squared Euclidean norm of the input signal **x**_{n}. To guard against the case of zero input, the ε-VPNMN algorithm, obtained by adding a small positive constant ε to ║**x**_{n}║^{2} in the denominator of the update term, must be used for regularization purposes.
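The displayed update equations did not survive conversion, so the following sketch gives one plausible form of the ε-VPNMN recursion, chosen to be consistent with the two limiting scenarios noted in Section 3 (*α*_{n} = 1 recovers NLMS, *α*_{n} = 0 recovers NLMF); the error nonlinearity and constants are assumptions, not the paper's exact equations:

```python
import numpy as np

def vpnmn_step(w, x, d, mu, alpha, eps=1e-6):
    """One epsilon-VPNMN iteration (assumed form):
       w <- w + mu * [alpha*e + (1-alpha)*e^3] * x / (eps + ||x||^2).
    alpha = 1 recovers the NLMS update, alpha = 0 the NLMF update;
    eps > 0 regularizes against a zero-input regressor."""
    e = d - w @ x
    f = alpha * e + (1.0 - alpha) * e ** 3   # mixed-norm error nonlinearity
    return w + mu * f * x / (eps + x @ x), e
```

Pairing this step with the mixing-parameter recursion of Section 2 gives the "gear-shifting" behavior: an NLMS-like update during the transient and an NLMF-like update at steady state.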

## 3 Convergence analysis of the VPNMN algorithm

In this section, the convergence analysis of the proposed VPNMN algorithm is carried out. Both the mean and the mean-square behaviors of the weight error vector are presented in the ensuing analysis.

### 3.1 Mean behavior

In the ensuing analysis, the following assumptions are used in the derivations of the convergence in the mean for the normalized mixed-norm LMS-LMF algorithm. They are quite similar to what is usually assumed in the literature [2–4, 16] and can also be justified in several practical instances:

*A.1* The noise sequence {*η*_{n}} is statistically independent of the input signal sequence {**x**_{n}}, and both sequences have zero mean.

*A.2* The weight error vector (**v**_{n}), to be defined later, is independent of the input **x**_{n}.

*A.3* The mixing parameter is independent of both the input signal and the error.

Examining the mean behavior of (10) under the above assumptions, sufficient conditions for convergence of the proposed algorithm in-the-mean can be derived and are stated as follows.

**Proposition 1** *For the algorithm defined by* (10) *to converge in-the-mean, a sufficient condition is that μ be chosen in the following range*:

*where* ${\sigma}_{\eta}^{2}$ *is the noise power*, ${\stackrel{\u0304}{\alpha}}_{n}=E\left[{\alpha}_{n}\right]$ *is the mean of the mixing parameter, and* $\mathcal{C}$ *is the Cramer-Rao bound associated with the problem of estimating the random quantity* ${\mathbf{x}}_{n}^{T}{\mathbf{w}}_{\mathsf{\text{opt}}}$ *by using* ${\mathbf{x}}_{n}^{T}{\mathbf{w}}_{n}$.

**Proof:** The mean convergence of the proposed algorithm is studied by taking the expectation of the weight error vector, **v**_{n} = **w**_{n} - **w**_{opt}. In this regard, the error *e*_{n} can be set up in the following way:

Under assumptions *A.1*–*A.3*, the mean weight-error vector of the proposed algorithm evolves as

To evaluate *E*[*e*_{n}**x**_{n}], we use the technique of [17], and thus it results in

If $\mathcal{C}\le {\zeta}_{n}$ is the Cramer-Rao bound associated with the problem of estimating the random quantity ${\mathbf{x}}_{n}^{T}{\mathbf{w}}_{\mathsf{\text{opt}}}$ by using ${\mathbf{x}}_{n}^{T}{\mathbf{w}}_{n}$, then after taking into account the fact that the eigenvalues of **R** are all real and positive, λ_{max} being the largest eigenvalue of **R** and in general λ_{max}*<* tr(**R**) [19], it follows that a sufficient condition for convergence of the proposed algorithm is that the step-size parameter *μ* satisfies (12). ▪

Two limiting cases of the mixing parameter *α*_{n} are worth noting:

- (1) *Scenario 1*: When *α*_{n} = 0, the VPNMN algorithm reduces to the NLMF algorithm [6], and it can be shown that (12) becomes $0<\mu <\frac{2}{3\left({\sigma}_{\eta}^{2}+\mathcal{C}\right)}.$ (21)
- (2) *Scenario 2*: When *α*_{n} = 1, both the NLMS algorithm and its step-size range, that is 0 < *μ* < 2, are recovered.

**Remarks:**

- (1) It can be seen from (10) that the VPNMN algorithm can be viewed as a variable step-size LMS-LMF algorithm with a time-varying step size.
- (2) The error is usually large during the initial adaptation and gradually decreases toward a minimum. Therefore, the signal power, ║**x**_{n}║^{2}, acts as a threshold in the recursive updating equation to avoid taking large step sizes once the error has converged to a minimum.
- (3) The bound on the step size *μ* of the proposed algorithm that guarantees convergence of the mean weight-vector, given by (12), shows that the mean weight-vector stability depends on the Cramer-Rao bound. Therefore, the convergence of the mean weight-vector of the proposed algorithm depends on its mean-square stability. A similar fact was observed in [18] for the LMF algorithm.

### 3.2 Mean square behavior

In this section the performance of the VPNMN algorithm in the mean-square sense is analyzed. Here, we have used a unified approach to the transient analysis of adaptive filters with error nonlinearities. This approach does not restrict the regression data to be Gaussian and avoids the need for explicit recursions for the covariance matrix of the weight-error vector. This approach assumes that the adaptive filter is long enough to justify the following assumptions which are realistic for longer adaptive filters:

*A.4* The residual or *a priori* error *e*_{an}, to be defined later, can be assumed to be Gaussian.

*A.5* The norm of the input regressor (║**x**_{n}║^{2}) can be assumed to be uncorrelated with *f*^{2}(*e*_{n}) (*f*(*e*_{n}) is defined in (23)).

Here, *f*(*e*_{n}) denotes a general scalar function of the output estimation error *e*_{n}, and in our case it is given by

We are interested in studying the time evolution and the steady-state values of $E\left[{\left|{e}_{\mathsf{\text{a}}n}\right|}^{2}\right]$ and *E*[║**v**_{n}║^{2}], which represent the mean-square-error and mean-square-deviation performances of the filter, respectively, whereas their time evolution relates to the learning or transient behavior of the filter.

For a constant weighting matrix **A**, to be specified later, the weighted *a priori* and *a posteriori* estimation errors are, respectively, defined as [21]

When **A** = **I**, the weighted *a priori* and *a posteriori* estimation errors defined above reduce to the standard *a priori* and *a posteriori* estimation errors, respectively. In this case, the estimation error, *e*_{n}, and the *a priori* error, *e*_{an}, are related via *e*_{n} = *e*_{an} + *η*_{n}. Also, using (10) and (24), it can be shown that

where the notation $\parallel {\mathbf{x}}_{n}{\parallel}_{\mathbf{A}}^{2}$ denotes the weighted squared Euclidean norm $\parallel {\mathbf{x}}_{n}{\parallel}_{\mathbf{A}}^{2}={\mathbf{x}}_{n}^{T}\mathbf{A}{\mathbf{x}}_{n}$.
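For concreteness, the weighted squared norm can be computed as follows (a trivial numerical check, not from the paper):

```python
import numpy as np

def weighted_sq_norm(x, A):
    """Weighted squared Euclidean norm ||x||_A^2 = x^T A x."""
    return x @ A @ x

x = np.array([1.0, 2.0])
A = np.diag([2.0, 3.0])
print(weighted_sq_norm(x, A))  # 2*1^2 + 3*2^2 = 14.0
```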

Combining the above relations yields the weighted energy conservation relation of the algorithm, which is defined as follows:

This relation shows how the weighted energies of the error quantities evolve in time. It has been shown that different choices of **A** allow us to evaluate different performance measures of an adaptive filter.

#### 3.2.1 Time evolution of the weighted variance $E\left[\parallel {\mathbf{v}}_{n}{\parallel}_{\mathbf{A}}^{2}\right]$

Substituting the *a posteriori* error from (26) in (30) and taking the expectation on both sides, we obtain the following relation:

Next, we evaluate the two expectations in the second and third terms on the right-hand side of the above equation, that is, $E\left[{e}_{\mathsf{\text{a}}n}^{\mathbf{A}}f\left({e}_{n}\right)\right]$ and $E\left[\parallel {\mathbf{x}}_{n}{\parallel}_{\mathbf{A}}^{2}{f}^{2}\left({e}_{n}\right)\right]$. The details for these two quantities are given next. First, we will use the following assumption, which was adopted in [21]:

*A.6* For any constant matrix **A** and for all *n*, *e*_{an} and ${e}_{\mathsf{\text{a}}n}^{\mathbf{A}}$ are jointly Gaussian.

Using *A.4* and *A.6*, the first expectation can be evaluated as follows:

Second, to solve the expectation $E\left[\parallel {\mathbf{x}}_{n}{\parallel}_{\mathbf{A}}^{2}{f}^{2}\left({e}_{n}\right)\right]$, we will resort to the following assumption [21]:

*A.7* The adaptive filter is long enough such that $\parallel {\mathbf{x}}_{n}{\parallel}_{\mathbf{A}}^{2}$ and *f*^{2}(*e*_{
n
} ) are uncorrelated.

The expectation *E*[*f*^{2}(*e*_{n})] can be shown to be (with $\overline{{\alpha}_{n}^{2}}=E\left[{\alpha}_{n}^{2}\right]$)

The above equation shows the time evolution, or transient behavior, of the weighted variance $E\left[\parallel {\mathbf{v}}_{n}{\parallel}_{\mathbf{A}}^{2}\right]$ for any constant weight matrix **A**. Different performance measures can be obtained by a proper choice of the weight matrix **A**.

#### 3.2.2 The EMSE and the MSD learning curves

By choosing **A** = **I**, **R**, ..., **R**^{N-1}, a set of relations can be obtained from (38), which is given by

where the coefficients are determined by the moments of the autocorrelation matrix **R**. Consequently, the following relation is obtained:

It can be noticed that the learning curves for the MSD and the EMSE can be obtained from the first and second elements of vector ${\mathcal{W}}_{n}$, respectively.

#### 3.2.3 Mean-square stability

Finally, in this section, the mean-square stability of the proposed algorithm is investigated. Consequently, we provide a nontrivial upper bound on *µ* for which *E*[║**v**_{n}║^{2}] remains uniformly bounded for all *n*.

Setting **A** = **I** and using the Gaussian behavior of *e*_{an}, it can be shown that the proposed algorithm will be mean-square stable provided that

Evaluating the expectations *E*[*e*_{an}*f*(*e*_{n})] and *E*[║**x**_{n}║^{2}*f*^{2}(*e*_{n})] leads to the following bound:

## 4 Steady-state analysis of the VPNMN algorithm

In this section, the steady-state behavior of the proposed algorithm is studied as *n → ∞*. Assuming that the weight error vector reaches a steady-state mean-square-error value, i.e., with **A** = **I**, (31) reduces to the following:

from which the steady-state EMSE, *ζ*_{∞}, is found to be

The terms lim_{n→∞}*Z*_{n} and lim_{n→∞}*ℱ*_{n} can be obtained from (35) and (37), respectively.

At steady state, the higher-order terms in *ζ*_{∞} can be ignored. Ultimately, the steady-state EMSE of the proposed algorithm can be shown to be

## 5 Tracking analysis of the VPNMN algorithm

Cyclic and random system nonstationarities are a common impairment in communication systems and especially in applications that involve channel estimation, channel equalization, and inter-symbol-interference cancellation. Random nonstationarity is present due to variations in channel characteristics which is true in most of cases, particularly in the case of a mobile communication environment [26]. Cyclic system nonstationarities arise in communication systems due to mismatches between the transmitter and receiver carrier generator.

In the ensuing analysis, the tracking analysis of the proposed algorithm is carried out in the presence of both random and cyclic nonstationarities. It should be noted here that in this case, unlike the convergence analysis, which is a linear process, the tracking analysis is a nonlinear one due to the presence of the term (*e*^{j Ωn}) in (54). This justifies our use of complex signals, instead of real ones, in the tracking analysis.

Consider a desired signal *d*_{n} that arises in a model of the form:

where *η*_{n} is the measurement noise and ${\mathbf{w}}_{n}^{o}$ is the unknown system to be tracked. The multiplicative term *e*^{j Ωn} accounts for a possible frequency offset between the transmitter and receiver carriers in a digital communication scenario. Furthermore, it is assumed that the unknown system vector ${\mathbf{w}}_{n}^{o}$ is randomly changing according to:

where **w**^{o} is a fixed vector, and **q**_{n} is assumed to be a zero-mean stationary random vector process with a positive definite autocorrelation matrix ${\mathbf{Q}}_{n}=E\left[{\mathbf{q}}_{n}{\mathbf{q}}_{n}^{H}\right]$. Moreover, it is also assumed that the sequence {**q**_{n}} is mutually independent of the sequences {**x**_{n}} and {*η*_{n}}. Thus, from the generalized system model given by (54) and (55), it can be seen that the effects of both cyclic and random system nonstationarities are included in this system model.
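The displayed model equations (54) and (55) were lost in conversion; one plausible reading, used in the sketch below, is *d*_{n} = *e*^{j Ωn}**x**_{n}^{H}**w**_{n}^{o} + *η*_{n} with **w**_{n}^{o} = **w**^{o} + **q**_{n}, and all numerical constants here are assumptions (only the fixed system vector is taken from Section 6.3):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_iter, Omega = 2, 1000, 0.01   # filter length and frequency offset (assumed)
w_o = np.array([1.0119 - 0.7589j, -0.3796 + 0.5059j])  # FIR system of Section 6.3
sigma_q = 1e-3                     # small random system perturbation (tr(Q) small)
sigma_eta = 10 ** (-30 / 20)       # measurement-noise level for an SNR near 30 dB

d = np.empty(n_iter, dtype=complex)
for n in range(n_iter):
    # circular complex Gaussian regressor, unit variance per entry
    x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    q = sigma_q * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    w_n = w_o + q                                        # perturbed system, per (55)
    eta = sigma_eta * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    d[n] = np.exp(1j * Omega * n) * (x.conj() @ w_n) + eta  # per the assumed (54)
```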

In this case, the weight error vector ${\stackrel{\u0303}{\mathbf{v}}}_{n}$ is defined as

Define the *a priori* estimation error ${e}_{\mathsf{\text{a}}n}={\mathbf{x}}_{n}^{H}{\stackrel{\u0303}{\mathbf{v}}}_{n}$ and the *a posteriori* estimation error ${e}_{\mathsf{\text{p}}n}={\mathbf{x}}_{n}^{H}\left({\stackrel{\u0303}{\mathbf{v}}}_{n+1}-{\mathbf{c}}_{n}{e}^{j\mathrm{\Omega}n}\right)$. Then, it is easy to show that the estimation error and the *a priori* error are related via *e*_{n} = *e*_{an} + *η*_{n}. Also, from (26) when **A** = **I**, the *a posteriori* error is defined in terms of the *a priori* error as follows:
It can be seen that if Ω *=* 0 (i.e., no frequency offset between the transmitter and the receiver), the above equation reduces to the basic fundamental energy conservation relation.

The energy relation (62) will be used to evaluate the excess-mean-square error at steady state. But before starting the analysis, first the following assumptions are stated:

*A.8* In steady state, the weight error vector ${\stackrel{\u0303}{\mathbf{v}}}_{n}$ takes the generic form **z**_{n}*e*^{j Ωn}, with the stationary random process **z**_{n} independent of the frequency offset Ω.

Using *A.8*, taking the expectation of both sides of (62), and invoking the fact that at steady state $E\left[{\stackrel{\u0303}{\mathbf{v}}}_{n+1}\right]=E\left[{\stackrel{\u0303}{\mathbf{v}}}_{n}\right]$, the following relation can be obtained:

To evaluate **z** = *E*[**z**_{n}], (58) is used, where it is multiplied by the term *e*^{-j Ωn} and then the expectation is taken on both sides to get

This yields the following expression for **z** at steady state:

where *γ*_{o} is defined as

Finally, the steady-state tracking EMSE, *ζ*^{tracking}, is obtained from (63):

It can be seen from the above result that the steady-state tracking EMSE of the NLMS algorithm [28] and the NLMF algorithm [29] can be recovered by substituting *α*_{n} = 1 and *α*_{n} = 0, respectively, in (67).

## 6 Simulation results

The performance of the proposed algorithm, the VPNMN LMS-LMF, is assessed in different scenarios. Experiments are carried out where an unknown system is to be identified under noisy conditions. The unknown system is a non-minimum-phase channel. The input signal to both the unknown system and the adaptive filter is obtained by passing a zero-mean white Gaussian sequence through a channel that is used to vary the eigenvalue spread of the autocorrelation matrix of the input signal. The example considered for the sequence {*x*_{n}} has an eigenvalue spread of 68.9. The additive noise, *η*_{n}, is zero mean. The signal-to-noise ratio is set equal to 20 dB, and the performance measure considered is the normalized weight error norm 10log_{10}║**w**_{n} - **w**_{opt}║^{2}/║**w**_{opt}║^{2}. Results are obtained by averaging over 500 independent runs. The proposed algorithm is implemented with the parameters *δ* = 0.97, *β* = 0.98, *γ* = 10^{-2}, *α*_{0} = 0.8, and *p*_{0} = 0. In the ensuing, different aspects of the performance are considered during the course of this study.
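A self-contained sketch of this identification experiment follows; it uses a white input rather than the colored channel output, the assumed VPNMN recursions from Section 2, and a placeholder unknown system and step size, so it reproduces the methodology rather than the paper's exact figures:

```python
import numpy as np

def run_trial(n_iter, w_opt, mu=0.05, alpha0=0.8, delta=0.97, beta=0.98,
              gamma=1e-2, snr_db=20.0, seed=0):
    """One VPNMN system-identification run. Returns the normalized weight
    error norm 10*log10(||w_n - w_opt||^2 / ||w_opt||^2) at every iteration.
    Update and mixing recursions follow the assumed forms of Section 2."""
    rng = np.random.default_rng(seed)
    N = len(w_opt)
    sigma_eta = 10 ** (-snr_db / 20)        # noise level (unit-power input assumed)
    w = np.zeros(N)
    alpha, p, e_prev = alpha0, 0.0, 0.0
    curve = np.empty(n_iter)
    for n in range(n_iter):
        x = rng.standard_normal(N)
        d = w_opt @ x + sigma_eta * rng.standard_normal()
        e = d - w @ x
        f = alpha * e + (1 - alpha) * e ** 3          # mixed-norm nonlinearity
        w = w + mu * f * x / (1e-6 + x @ x)           # epsilon-VPNMN update
        p = beta * p + (1 - beta) * e * e_prev        # error-correlation estimate
        alpha = min(max(delta * alpha + gamma * p ** 2, 0.0), 1.0)
        e_prev = e
        curve[n] = 10 * np.log10(((w - w_opt) @ (w - w_opt)) / (w_opt @ w_opt))
    return curve

# Ensemble average over independent runs (the paper uses 500; 20 here for speed).
w_opt = np.array([0.9, -0.6, 0.3, -0.1])
curves = np.mean([run_trial(1500, w_opt, seed=s) for s in range(20)], axis=0)
```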

### 6.1 Convergence behavior

The fast convergence obtained by the proposed algorithm can be justified by the fact that when far from the optimum solution, this algorithm exhibits faster convergence than the NLMS algorithm by automatically increasing the step size (gear-shifting property).

In order to verify the stability bound on the step size given in (48), we investigate it in a Gaussian environment at an SNR of 20 dB. Here, we choose a misadjustment of five, which results in a Cramer-Rao bound of *C* ≤ 0.05. Thus, choosing tr(**R**) = 5, the upper bound given in (48) is found to be 0.95. It is observed from the various performed simulations that the proposed algorithm is stable while *µ* is less than 1.0, thus validating the derived stability bound.

Finally, from the viewpoint of computational load, the proposed algorithm requires an additional seven multiplications and three additions when compared to the fixed mixed-norm algorithm defined by (4), and only eleven multiplications and six additions when compared to the NLMS algorithm. The small computational overhead of the proposed algorithm is therefore well worth the gain in steady-state error reduction it brings about.

### 6.2 Results for the MSE learning curve

### 6.3 Results for tracking

For tracking, the simulations are carried out for a system identification problem, where the unknown system, having an FIR model, is given by [1.0119 - *j* 0.7589, -0.3796 + *j* 0.5059]^{T}, while the system characteristics are time-varying according to the system model (54) and (55). Results for the proposed algorithm are presented to validate the theoretical findings for different values of Ω and different values of *µ*. The input signal *x*_{n} to both the unknown system and the adaptive filter is a zero-mean white Gaussian sequence. The signal-to-noise ratio is set equal to 30 dB. Two values are considered for tr{**Q**_{n}}: a very small value of tr{**Q**_{n}} = 10^{-7}, and a very large one of tr{**Q**_{n}} = 10^{-2}.

Results are first reported for *µ* = 0.001, 0.002, and 0.003. As can be seen from this figure, close agreement between theory and simulation results is obtained. Furthermore, it is observed from this figure that a degradation in performance is obtained by increasing the frequency offset Ω and, unlike in the stationary case, the steady-state EMSE is not a monotonically increasing function of the step size *µ*; that is, the steady-state EMSE is smaller at larger values of the step size *µ*.

These results are for tr{**Q**_{n}} = 10^{-7}, which represents a small value. Increasing this value to 10^{-2}, the results depicted in Figure 7 for three larger values of Ω, i.e., 0.01, 0.02, and 0.03, show that the previously stated observations still hold, similar to those obtained for the smaller value of tr{**Q**_{n}}.

Finally, the consistency in the performance of the steady-state EMSE of the proposed algorithm is observed in both cases (the two different values of tr{**Q**_{n}}) and for different values of Ω.

### 6.4 Noise cancellation using the VPNMN algorithm

In this example, we study the performance of the VPNMN algorithm for the application of noise cancellation. A pure sinusoidal noise generated by the process *u*_{n} = 0.8 sin(*ωn* + 0.5*π*), with *ω* = 0.1*π*, is to be removed from a square wave generated by *s*_{n} = 2 × ((mod(*n*, 1000) < 1000/2) - 0.5), where mod(*n*, 1000) computes the modulus of *n* over 1,000. Summing *u*_{n} and *s*_{n} gives the reference signal to the adaptive filter. The input to the adaptive filter is a sinusoidal signal generated by ${x}_{n}=\sqrt{2}\text{sin}\left(\omega n\right)$ with *ω* = 0.1*π*. The resulting output error signal *e*_{n} will, in time, converge to the desired signal, which will be noiseless.
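A minimal sketch of this noise-cancellation setup, using the signals exactly as specified above and, for simplicity, the NLMS special case (*α*_{n} = 1) of the update; the filter length and step size are assumed values:

```python
import numpy as np

# Signals exactly as specified in the text.
n = np.arange(5000)
omega = 0.1 * np.pi
u = 0.8 * np.sin(omega * n + 0.5 * np.pi)                # sinusoidal noise
s = 2 * ((np.mod(n, 1000) < 500).astype(float) - 0.5)    # square wave
d = s + u                                                # reference signal
x = np.sqrt(2) * np.sin(omega * n)                       # adaptive-filter input

# Two taps suffice to synthesize any amplitude/phase at a single frequency.
L, mu = 2, 0.05                   # filter length and step size (assumed values)
w = np.zeros(L)
xb = np.zeros(L)                  # regressor buffer [x_k, x_{k-1}]
e = np.zeros_like(d)
for k in range(len(n)):
    xb[1] = xb[0]
    xb[0] = x[k]
    err = d[k] - w @ xb           # NLMS error (alpha_n = 1 case of the update)
    w += mu * err * xb / (1e-6 + xb @ xb)
    e[k] = err                    # e_k converges to the clean square wave s_k
```

The filter output converges to the component of the reference correlated with *x*_{n} (the sinusoid), so the error *e*_{n} is left carrying the square wave.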

## 7 Conclusion

In this study, a normalized VPNMN algorithm is proposed where a combination of the LMS and the LMF algorithms is incorporated using the concept of variable step-size LMS adaptation. It is found that the proposed algorithm has the fast convergence property of the NLMS algorithm while resulting in a lower steady-state error, therefore eliminating the conflict between these two parameters, i.e., fast convergence and low steady-state error. Moreover, the consistency of the performance of the proposed algorithm has been confirmed by many simulation results which are reported here.

The analytical results of the tracking steady-state EMSE are derived for the proposed algorithm in the presence of both random and cyclic nonstationarities. The results show that, unlike in the stationary case, the steady-state EMSE is not a monotonically increasing function of the step size *µ*, while the ability of the algorithm to track variations in the environment degrades as the frequency offset Ω increases.

Finally, the VPNMN algorithm proved its usefulness in a noise cancellation scenario where it showed its superiority over the NLMS algorithm.

## Declarations

### Acknowledgements

The author acknowledges the support of the Deanship of Scientific Research at King Fahd University of Petroleum & Minerals. This research work is funded by the King Fahd University of Petroleum & Minerals under Research Grants (FT090016) and (SB101024).

## References

1. Widrow B, McCool JM, Larimore MG, Johnson CR: Stationary and nonstationary learning characteristics of the LMS adaptive filter. *Proc IEEE* 1976, 64(8):1151-1162.
2. Macchi O: *Adaptive Processing: The LMS Approach with Applications in Transmission*. Wiley, New York; 1995.
3. Walach E, Widrow B: The least mean fourth (LMF) adaptive algorithm and its family. *IEEE Trans Inf Theory* 1984, 30:275-283. 10.1109/TIT.1984.1056886
4. Haykin S: *Adaptive Filter Theory*. 3rd edition. Prentice-Hall, Englewood Cliffs; 1996.
5. Nagumo JI, Noda A: A learning method for system identification. *IEEE Trans Automat Control* 1967, 12:282-287.
6. Zerguine A: Convergence and steady-state analysis of the normalized least mean fourth algorithm. *Digital Signal Process* 2007, 17(1):17-31. 10.1016/j.dsp.2006.01.005
7. Zerguine A: Convergence behavior of the normalized least mean fourth algorithm. *Proc 34th Annual Asilomar Conf Signals, Syst, Comput* 2000, 275-278.
8. Chan MK, Cowan CFN: Using a normalised LMF algorithm for channel equalisation with co-channel interference. *XI Euro Sig Process Conf EUSIPCO 2002* 2002, 2:49-51.
9. Tanrikulu O, Chambers JA: Convergence and steady-state properties of the least mean mixed-norm (LMMN) adaptive filtering. *IEE Proc Vis Image Signal Process* 1996, 143(3):137-142.
10. Zerguine A, Cowan CFN, Bettayeb M: LMS-LMF adaptive scheme for echo cancellation. *Electron Lett* 1996, 32(19):1776-1778. 10.1049/el:19961202
11. Al-Naffouri TY, Zerguine A, Bettayeb M: Convergence properties of mixed-norm algorithms under general error criteria. *IEEE ISCAS '99* 1999, 211-214.
12. Tarrab M, Feuer A: Convergence and performance analysis of the normalized LMS algorithm with uncorrelated Gaussian data. *IEEE Trans Inf Theory* 1988, 34:680-691. 10.1109/18.9768
13. Sulyman AI, Zerguine A: Convergence and steady-state analysis of a variable step-size NLMS algorithm. *Signal Process* 2003, 83(6):1255-1273. 10.1016/S0165-1684(03)00044-6
14. Zerguine A, Aboulnasr T: Convergence analysis of the variable weight mixed-norm LMS-LMF adaptive algorithm. *Proc 34th Annual Asilomar Conf Signals, Syst, Comput* 2000, 279-282.
15. Aboulnasr T, Mayyas K: A robust variable step-size LMS-type algorithm: analysis and simulations. *IEEE Trans Signal Process* 1997, 45(3):631-639. 10.1109/78.558478
16. Mazo JE: On the independence theory of equalizer convergence. *Bell Syst Tech J* 1979, 58:963-993.
17. Cho SH, Kim SD, Jean KY: Statistical convergence of the adaptive least mean fourth algorithm. *Proceedings of the ICSP'96* 1996, 610-613.
18. Hubscher PI, Bermudez JCM: An improved statistical analysis of the least mean fourth (LMF) adaptive algorithm. *IEEE Trans Signal Process* 2003, 51(3):664-671. 10.1109/TSP.2002.808126
19. Proakis JG: *Digital Communications*. 4th edition. McGraw-Hill, Singapore; 2001.
20. Rupp M, Sayed AH: A time-domain feedback analysis of filtered-error adaptive gradient algorithms. *IEEE Trans Signal Process* 1996, 44:1428-1439. 10.1109/78.506609
21. Sayed AH: *Adaptive Filters*. Wiley, NJ; 2008.
22. Bershad NJ, Bonnet M: Saturation effects in LMS adaptive echo cancellation for binary data. *IEEE Trans Acoust Speech Signal Process* 1990, 38(10):1687-1696. 10.1109/29.60100
23. Papoulis A: *Probability, Random Variables, and Stochastic Processes*. McGraw-Hill, New York; 1991.
24. Douglas SC, Meng TH-Y: Stochastic gradient adaptation under general error criteria. *IEEE Trans Signal Process* 1994, 42(6):1352-1365. 10.1109/78.286952
25. Yousef NR, Sayed AH: A unified approach to the steady-state and tracking analysis of adaptive filters. *IEEE Trans Signal Process* 2001, 49:314-324. 10.1109/78.902113
26. Rappaport TS: *Wireless Communications*. Prentice-Hall, Upper Saddle River; 1996.
27. Rupp M: LMS tracking behavior under periodically changing systems. In *EUSIPCO-1998*. Island of Rhodes, Greece; 1998:1253-1256.
28. Moinuddin M, Zerguine A: Tracking analysis of the NLMS algorithm in the presence of both random and cyclic nonstationarities. *IEEE Signal Process Lett* 2003, 10(9):256-258.
29. Moinuddin M, Zerguine A, Sheikh AUH: Tracking analysis of the NLMF algorithm in the presence of both random and cyclic nonstationarities. In *ISSPA 2005*. Sydney, Australia; 2005:755-758.

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.