A variable-parameter normalized mixed-norm (VPNMN) adaptive algorithm
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 55 (2012)
Abstract
Since both the least mean-square (LMS) and least mean-fourth (LMF) algorithms suffer individually from the problem of eigenvalue spread, so does the mixed-norm LMS-LMF algorithm. Therefore, to overcome this problem for the mixed-norm LMS-LMF, we adopt here the same technique of normalization (normalizing by the power of the input) that was successfully used with the LMS and LMF separately. Consequently, a new variable-parameter normalized mixed-norm (VPNMN) adaptive algorithm is proposed in this study. This algorithm is derived by exploiting a time-varying mixing parameter in the traditional mixed-norm LMS-LMF weight update equation. The time-varying mixing parameter is adjusted according to a well-known technique used in the adaptation of the step-size parameter of the LMS algorithm. In order to study the theoretical aspects of the proposed VPNMN adaptive algorithm, our study also addresses its convergence analysis and assesses its performance using the concept of energy conservation. Extensive simulation results corroborate our theoretical findings and show that a substantial improvement, in both convergence time and steady-state error, can be obtained with the proposed algorithm. Finally, the VPNMN algorithm proved its usefulness in a noise cancellation application, where it showed its superiority over the normalized least-mean-square algorithm.
1 Introduction
Due to its simplicity, the least mean-square (LMS) [1, 2] algorithm is the most widely used algorithm for adaptive filters in many applications. The least mean-fourth (LMF) [3] algorithm was proposed later as a special case of the more general family of steepest-descent algorithms [4] with 2k error norms, k being a positive integer.
For both of these algorithms, however, the convergence behavior depends on the condition number, i.e., on the ratio of the maximum to the minimum eigenvalue of the input signal autocorrelation matrix, $\mathbf{R}=E\left[{\mathbf{x}}_{n}{\mathbf{x}}_{n}^{T}\right]$, where x_n is the input signal. This is clearly seen from their respective time constants [1, 3]
and
where ${\sigma}_{\eta}^{2}$ is the noise power, λ_i is the ith eigenvalue of the autocorrelation matrix of the input signal, µ is the step size used in the adaptation scheme, and N is the number of coefficients in the adaptive filter. As seen from (1) and (2), the ratio $\left(\frac{{\tau}_{\text{max}}}{{\tau}_{\text{min}}}\right)$ is constant for both algorithms and is given by the eigenvalue spread (i.e., the condition number) $\frac{{\lambda}_{\text{max}}}{{\lambda}_{\text{min}}}$, i.e.,
$$\frac{\tau_{\max}}{\tau_{\min}}=\frac{\lambda_{\max}}{\lambda_{\min}}. \tag{3}$$
To remove the dependency of the convergence of the LMS algorithm on the condition number, the normalized least-mean-square (NLMS) algorithm [5] was introduced. As reported in [5], a great improvement in convergence is obtained with the NLMS algorithm over the LMS algorithm, at the expense of a larger steady-state error. Similar results were obtained for the normalized LMF (NLMF) algorithm [6–8].
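The eigenvalue spread driving this behavior can be estimated directly from data. The sketch below is illustrative only (the 3-tap coloring channel h, the filter length, and the sample count are hypothetical choices, not the paper's setup); it shows how passing a white sequence through a channel inflates the condition number of R:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8          # adaptive-filter length (illustrative choice)
M = 200_000    # number of input samples

def eigenvalue_spread(x, N):
    """Condition number lambda_max/lambda_min of the N x N Toeplitz
    autocorrelation matrix R = E[x_n x_n^T], estimated from the data x."""
    r = np.array([np.dot(x[:M - k], x[k:]) / (M - k) for k in range(N)])
    R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
    lam = np.linalg.eigvalsh(R)
    return lam.max() / lam.min()

white = rng.standard_normal(M)
h = np.array([1.0, 0.8, 0.3])            # hypothetical coloring channel
colored = np.convolve(white, h)[:M]      # correlated input sequence

s_white = eigenvalue_spread(white, N)    # close to 1 for white input
s_colored = eigenvalue_spread(colored, N)
print(s_white, s_colored)
```

A spread near 1 makes all modes converge at a similar rate; a large spread slows the slowest mode, which is exactly the dependency normalization removes.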
A mixed-norm algorithm [9–11], combining both the LMS and the LMF algorithms, suffers as well from this eigenvalue-spread dependency, since each of its constituent algorithms suffers from it individually. To circumvent this problem for the mixed-norm LMS-LMF, we adopt here the same normalization technique that was successfully used with the LMS and LMF separately.
It is well known that fast convergence and a low steady-state error are two conflicting requirements in general adaptive filtering. When compared to the LMS algorithm, the NLMS algorithm results in faster convergence, but only at the expense of a higher steady-state error [12, 13]. A promising solution to this conflict is a time-varying normalized mixed-norm LMS-LMF algorithm. In this mixed-norm algorithm, during the transient state the NLMS algorithm is used to speed up convergence. However, when steady state is reached, the algorithm automatically switches from the NLMS to the NLMF [7], thanks to a built-in "gear shifting" property, to secure a lower steady-state error.
In this work, the performance of a variable-parameter normalized mixed-norm (VPNMN) LMS-LMF algorithm is evaluated. It will be shown that better performance, in both convergence and steady-state error, is achieved by the VPNMN algorithm than by either the NLMS or the NLMF algorithm.
The rest of the article is organized as follows. Section 2 presents a more explicit development of the proposed algorithm, and Section 3 treats its convergence analysis. The steady-state analysis of the proposed algorithm is detailed in Section 4, while its tracking analysis is given in Section 5. Performance evaluation of the resulting algorithm is carried out in Section 6. Finally, the conclusion section summarizes this work.
2 Algorithm development
The mixed-norm LMS-LMF algorithm is based on the minimization of the following cost function [9, 10]:
where α is a positive mixing parameter in the interval [0, 1] and the error e_n is defined as
where d_n is the desired value, w_n is the coefficient vector of the adaptive filter, x_n is the input signal, and η_n is the additive noise.
A major drawback of this algorithm, however, is the choice of the mixing parameter, which is hard to fix a priori for an unknown system. In [14], a self-adapting LMS-LMF algorithm with a time-varying weighting factor was proposed. This time variation of the weighting factor was achieved by allowing a variable mixing factor that is updated at every iteration using the modified variable step-size (MVSS) algorithm proposed in [15]. The variable-weight mixed-norm LMS-LMF algorithm was defined to minimize the following performance measure [14]:
where α_n, chosen in [0, 1] such that the unimodal character of the above cost is preserved, is a time-varying parameter updated according to [15]
and
The parameters δ and β, both confined to the interval [0, 1], are exponential weighting parameters that govern the averaging time constant, i.e., the quality of estimation of the algorithm, and γ > 0. Note that the algorithm defined by (4) is restored when δ = 1 and γ = 0, which forces α_n to retain a fixed value.
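The update equations (7) and (8) themselves do not survive in this extraction; the recursion the text describes has the same shape as the MVSS step-size rule of [15]. The sketch below is therefore an assumed reconstruction (the exact form and the clipping to [0, 1] are assumptions, and the default constants are only illustrative):

```python
def update_alpha(alpha, p, e, e_prev, delta=0.97, beta=0.98, gamma=1e-2):
    """One step of the assumed time-varying mixing-parameter recursion:

        p_n       = beta * p_{n-1} + (1 - beta) * e_n * e_{n-1}
        alpha_n+1 = delta * alpha_n + gamma * p_n**2,  clipped to [0, 1]

    following the MVSS rule of [15] that the paper cites.
    """
    p = beta * p + (1.0 - beta) * e * e_prev
    alpha = min(max(delta * alpha + gamma * p * p, 0.0), 1.0)
    return alpha, p
```

Consistent with the remark above, setting delta = 1 and gamma = 0 freezes α_n at its initial value, recovering the fixed-mixing cost (4).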
Based on this motivation, the variable-weight mixed-norm LMS-LMF algorithm for recursively adjusting the coefficients of the system is expressed in the following form:
where μ is the step size.
As mentioned earlier, and because of its reliance on the LMS and the LMF, the algorithm defined by (9) is affected by the eigenvalue spread of the autocorrelation matrix of the input signal. To overcome this dependency, a VPNMN adaptive algorithm is introduced, and its weight-update recursion is given by the following expression:
where ║x_n║² is the squared Euclidean norm of the input signal x_n. In the case of a vanishing input, the ε-VPNMN algorithm, defined as follows:
must be used for regularization purposes.
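The recursions (10) and (11) are equation images lost in this extraction. The sketch below therefore assumes the standard normalized mixed-norm form, w ← w + μ (α e + (1 − α) e³) x / (ε + ║x║²), and uses a fixed mixing parameter and hypothetical system for the demo:

```python
import numpy as np

def vpnmn_step(w, x, d, alpha, mu=0.5, eps=1e-6):
    """One assumed epsilon-VPNMN update: the error function mixes the LMS
    term (e) and the LMF term (e**3), normalized by the input power."""
    e = d - np.dot(w, x)
    f = alpha * e + (1.0 - alpha) * e ** 3
    return w + (mu / (eps + np.dot(x, x))) * f * x, e

# Hypothetical noiseless system-identification demo with fixed alpha = 0.8.
rng = np.random.default_rng(0)
w_opt = np.array([0.5, -0.3, 0.2])     # illustrative unknown system
w = np.zeros(3)
for _ in range(2000):
    x = rng.standard_normal(3)
    w, _ = vpnmn_step(w, x, np.dot(w_opt, x), alpha=0.8)
mismatch = np.linalg.norm(w - w_opt)
print(mismatch)   # essentially zero after convergence
```

The normalization by ε + ║x║² is what decouples the convergence rate from the input power, mirroring the NLMS/NLMF constructions cited above.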
3 Convergence analysis of the VPNMN algorithm
In this section, the convergence analysis of the proposed VPNMN algorithm is carried out. Both the mean and the mean-square behavior of the weight-error vector are presented in the ensuing analysis.
3.1 Mean behavior
In the ensuing analysis, the following assumptions are used in the derivation of convergence in the mean for the normalized mixed-norm LMS-LMF algorithm. These are quite similar to what is usually assumed in the literature [2–4, 16] and can also be justified in several practical instances.
A.1 The noise sequence {η_{ n } } is statistically independent of the input signal sequence {x_{ n } } and both sequences have zero mean.
A.2 The weight-error vector v_n, to be defined later, is independent of the input x_n.
A.3 The mixing parameter is independent of both the input signal and the error.
Examining the mean behavior of (10) under the above assumptions, sufficient conditions for convergence of the proposed algorithm in the mean can be derived and are stated as follows.
Proposition 1: For the algorithm defined by (10) to converge in the mean, a sufficient condition is that μ be chosen in the following range:
where ${\sigma}_{\eta}^{2}$ is the noise power, ${\stackrel{\u0304}{\alpha}}_{n}=E\left[{\alpha}_{n}\right]$ is the mean of the mixing parameter, and $\mathcal{C}$ is the Cramér-Rao bound associated with the problem of estimating the random quantity ${\mathbf{x}}_{n}^{T}{\mathbf{w}}_{\mathsf{\text{opt}}}$ by using ${\mathbf{x}}_{n}^{T}{\mathbf{w}}_{n}$.
Proof: The mean convergence of the proposed algorithm is studied by taking the expectation of the weight-error vector, v_n = w_n − w_opt. In this regard, the error e_n can be set up in the following way:
and hence (10) becomes
Consequently, taking the expectation of both sides of (14) under A.1–A.3, the mean weight-error vector of the proposed algorithm evolves as
Now, considering the expectations in the above equation, the normalized input power can be treated as independent of the error terms; this will be especially true when the filter is long enough. Consequently, the independence assumption can be invoked to obtain the following:
To solve the expectation E[e_n x_n], we use the technique of [17], which results in
Similarly, for the second expectation, the same long-filter independence argument applies, and the independence assumption can be invoked to obtain the following:
To solve the expectation $E\left[{e}_{n}^{3}{\mathbf{x}}_{n}\right]$, we use the technique of [17, 18], which does not employ any linearization of ${e}_{n}^{3}$. As a result, $E\left[{e}_{n}^{3}{\mathbf{x}}_{n}\right]$ is found to be
Ultimately, (15) can be set up in the following form:
Since $\mathcal{C}\le {\zeta}_{n}$, where $\mathcal{C}$ is the Cramér-Rao bound associated with the problem of estimating the random quantity ${\mathbf{x}}_{n}^{T}{\mathbf{w}}_{\mathsf{\text{opt}}}$ by using ${\mathbf{x}}_{n}^{T}{\mathbf{w}}_{n}$, and taking into account the fact that the eigenvalues of R are all real and positive, with λ_max the largest eigenvalue of R and, in general, λ_max < tr(R) [19], it follows that a sufficient condition for convergence of the proposed algorithm is that the step-size parameter μ satisfy (12). ▪
Two extreme scenarios can be considered here for the value of the mixing parameter α_n:

(1) Scenario 1: When α_n = 0, the VPNMN algorithm reduces to the NLMF algorithm [6], and it can be shown that (12) becomes
$$0<\mu <\frac{2}{3\left({\sigma}_{\eta}^{2}+\mathcal{C}\right)}. \tag{21}$$

(2) Scenario 2: When α_n = 1, both the NLMS algorithm and its step-size range, that is, 0 < μ < 2, are recovered.
Remarks:

(1) It can be seen from (10) that the VPNMN algorithm can be viewed as a mixed-norm LMS-LMF algorithm with a time-varying step size.

(2) The error is usually large during the initial adaptation and gradually decreases toward a minimum. Therefore, the signal power, ║x_n║², acts as a threshold in the recursive update equation, preventing large step sizes from being taken once the error has converged to a minimum.

(3) The bound on the step size μ of the proposed algorithm that guarantees convergence of the mean weight vector, given by (12), shows that the mean-weight-vector stability depends on the Cramér-Rao bound. Therefore, the convergence of the mean weight vector of the proposed algorithm depends on its mean-square stability. A similar fact was observed in [18] for the LMF algorithm.
3.2 Mean square behavior
In this section, the performance of the VPNMN algorithm in the mean-square sense is analyzed. Here, we use a unified approach to the transient analysis of adaptive filters with error nonlinearities. This approach does not restrict the regression data to be Gaussian and avoids the need for explicit recursions for the covariance matrix of the weight-error vector. It assumes that the adaptive filter is long enough to justify the following assumptions, which are realistic for longer adaptive filters:
A.4 The residual, or a priori, error e_{an}, to be defined later, can be assumed to be Gaussian.
A.5 The norm of the input regressor, ║x_n║², can be assumed to be uncorrelated with f²(e_n) (f(e_n) is defined in (23)).
The framework is based on the concept of the energy conservation relation, which was first noted in [20]. In general, the adaptation scheme defined in (14) can be written in the following form:
where f(e_n) denotes a general scalar function of the output estimation error e_n, and in our case it is given by
We are interested in studying the time evolution and the steady-state values of $E\left[{e}_{\mathrm{a}n}^{2}\right]$ and E[║v_n║²], which represent the mean-square-error and mean-square-deviation performances of the filter, respectively; their time evolution relates to the learning, or transient, behavior of the filter.
Then, for some symmetric positive definite weighting matrix A to be specified later, the weighted a priori and a posteriori estimation errors are, respectively, defined as [21]
For the special case when A = I, the weighted a priori and a posteriori estimation errors defined above are reduced to standard a priori and a posteriori estimation errors, respectively, that is,
It can be shown that the estimation error, e_{ n }, and the a priori error, e_{an}, are related via e_{ n } = e_{an}+ η_{ n }. Also, using (10) and (24), it can be shown that
where the notation $\parallel {\mathbf{x}}_{n}{\parallel}_{\mathbf{A}}^{2}$ denotes the weighted squared Euclidean norm $\parallel {\mathbf{x}}_{n}{\parallel}_{\mathbf{A}}^{2}={\mathbf{x}}_{n}^{T}\mathbf{A}{\mathbf{x}}_{n}$.
The performance measure in the analysis is the excess meansquareerror (EMSE), denoted by ζ_{ n }, and is defined as follows:
Since ${e}_{\mathsf{\text{a}}n}={\mathbf{x}}_{n}^{T}{\mathbf{v}}_{n}$, the EMSE can also be written as follows:
Next, the fundamental weighted-energy conservation relation given in [21] is presented to develop the framework for the transient analysis of the proposed algorithm. Thus, by substituting (26) into (22), the following relation can be obtained:
Ultimately, the fundamental weighted-energy conservation relation can be shown to be
This relation shows how the weighted energies of the error quantities evolve in time. It has been shown that different choices of A allow us to evaluate different performance measures of an adaptive filter.
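In the unweighted special case (A = I), the relation takes the form ║v_{n+1}║² + e_{an}²/║x_n║² = ║v_n║² + e_{pn}²/║x_n║², which holds exactly, for any error function f(e), because the update moves the weights only along x_n. The quick numerical check below assumes the mixed-norm form of f(e) and the sign convention e_n = e_{an} + η_n (all sizes and levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, mu, alpha = 6, 0.3, 0.8
w_opt = 0.1 * rng.standard_normal(N)     # small hypothetical system
w = np.zeros(N)
worst = 0.0

for _ in range(200):
    x = rng.standard_normal(N)
    eta = 0.01 * rng.standard_normal()
    e = np.dot(w_opt, x) + eta - np.dot(w, x)

    v = w_opt - w                        # weight-error vector; e = e_a + eta
    e_a = np.dot(x, v)                   # a priori estimation error
    xsq = np.dot(x, x)

    f = alpha * e + (1.0 - alpha) * e ** 3   # assumed VPNMN error function
    w_new = w + (mu / xsq) * f * x
    e_p = np.dot(x, w_opt - w_new)       # a posteriori estimation error

    lhs = np.dot(w_opt - w_new, w_opt - w_new) + e_a ** 2 / xsq
    rhs = np.dot(v, v) + e_p ** 2 / xsq
    worst = max(worst, abs(lhs - rhs))
    w = w_new

print(worst)   # on the order of machine precision: the identity is exact
```

Because the identity is algebraic rather than statistical, it holds per-iteration, before any expectations are taken; the analysis then averages it under A.4–A.7.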
3.2.1 Time evolution of the weighted variance $E\left[\parallel {\mathbf{v}}_{n}{\parallel}_{\mathbf{A}}^{2}\right]$
In this section, the time evolution of the weighted variance $E\left[\parallel {\mathbf{v}}_{n}{\parallel}_{\mathbf{A}}^{2}\right]$ is derived for the proposed algorithm using the fundamental weighted-energy conservation relation (30). Substituting the expression for the a posteriori error from (26) into (30) and taking expectations on both sides, the following relation is obtained:
Next, the two expectations in the second and third terms on the right-hand side of the above equation, $E\left[{e}_{\mathsf{\text{a}}n}^{\mathbf{A}}f\left({e}_{n}\right)\right]$ and $E\left[\parallel {\mathbf{x}}_{n}{\parallel}_{\mathbf{A}}^{2}{f}^{2}\left({e}_{n}\right)\right]$, are evaluated; the details for these two quantities are given next. First, we use the following assumption, which was adopted in [21]:
A.6 For any constant matrix A and for all n, e_{an}and ${e}_{\mathsf{\text{a}}n}^{\mathbf{A}}$ are jointly Gaussian.
This assumption is reasonable for longer filters by central-limit arguments [21]. Moreover, a similar assumption was used in [22]. Hence, we can simplify the expectation $E\left[{e}_{\mathsf{\text{a}}n}^{\mathbf{A}}{e}_{n}\right]$ using Price's theorem [23, 24] and assumptions A.4 and A.6 as follows:
Since ${e}_{\mathsf{\text{a}}n}^{\mathbf{A}}={\mathbf{x}}_{n}^{T}\mathbf{A}{\mathbf{v}}_{n}$ and ${e}_{\mathsf{\text{a}}n}={\mathbf{x}}_{n}^{T}\mathbf{I}{\mathbf{v}}_{n}$, we can simplify the expectation $E\left[{e}_{\mathsf{\text{a}}n}^{\mathbf{A}}{e}_{\mathsf{\text{a}}n}\right]$ as follows:
Ultimately, (32) can be written as
The term $\frac{E\left[{e}_{\mathsf{\text{a}}n}f\left({e}_{n}\right)\right]}{E\left[{e}_{\mathsf{\text{a}}n}^{2}\right]}$, for the case of the proposed algorithm, can be shown to be
Second, to solve the expectation $E\left[\parallel {\mathbf{x}}_{n}{\parallel}_{\mathbf{A}}^{2}{f}^{2}\left({e}_{n}\right)\right]$, we will resort to the following assumption [21]:
A.7 The adaptive filter is long enough such that $\parallel {\mathbf{x}}_{n}{\parallel}_{\mathbf{A}}^{2}$ and f^{2}(e_{ n } ) are uncorrelated.
This assumption becomes more realistic as the filter gets longer [21], and an unweighted version of it was used in [22, 25]. The assumption enables us to split the expectation $E\left[\parallel {\mathbf{x}}_{n}{\parallel}_{\mathbf{A}}^{2}{f}^{2}\left({e}_{n}\right)\right]$ as follows:
where E[f^{2}(e_{ n } )] can be shown to be (with $\overline{{\alpha}_{n}^{2}}=E\left[{\alpha}_{n}^{2}\right]$)
Ultimately, we can rewrite (31) as follows:
The above equation shows the time evolution, or transient behavior, of the weighted variance $E\left[\parallel {\mathbf{v}}_{n}{\parallel}_{\mathbf{A}}^{2}\right]$ for any constant weight matrix A. Different performance measures can be obtained by a proper choice of the weight matrix A.
3.2.2 The EMSE and the MSD learning curves
The learning curves for the EMSE and the MSD can be obtained using the fact that $E\left[{e}_{\mathsf{\text{a}}n}^{2}\right]=E\left[\parallel {\mathbf{v}}_{n}{\parallel}_{\mathbf{R}}^{2}\right]$, while $\mathsf{\text{MSD}}=E\left[\parallel {\mathbf{v}}_{n}{\parallel}_{\mathbf{I}}^{2}\right]$. If we choose $\mathbf{A}=\mathbf{I},\mathbf{R},\ldots,{\mathbf{R}}^{N-1}$, a set of relations can be obtained from (38), which is given by
Now, using the Cayley-Hamilton theorem, we can write
where
is the characteristic polynomial of R. Consequently, the following relation is obtained:
Ultimately, using (39) and (42), the transient behavior of the proposed algorithm can be shown to be governed by the following recursion:
where
and
It can be noticed that the learning curves for the MSD and the EMSE can be obtained from the first and second elements of the vector ${\mathcal{W}}_{n}$, respectively.
3.2.3 Meansquare stability
Finally, in this section, the mean-square stability of the proposed algorithm is investigated. Consequently, we provide a nontrivial upper bound on µ for which E[║v_n║²] remains uniformly bounded for all n.
Starting from (31) with A = I and using the Gaussian behavior of e_{an}, it can be shown that the proposed algorithm will be mean-square stable provided that
The above inequality, upon substitution of the values of the two expectations E[e_{an} f(e_n)] and E[║x_n║² f²(e_n)], leads to the following bound:
4 Steadystate analysis of the VPNMN algorithm
The purpose of the steady-state analysis of an adaptive filter is to study the behavior of the steady-state EMSE. To this end, (31) is analyzed for the limiting case n → ∞, assuming that the weight-error vector reaches a steady-state mean-square value, i.e.,
Consequently, for a unity weight matrix (A = I), (31) reduces to the following:
Now, using the definition of the EMSE given by (28), its steady-state value, denoted by ζ_∞, is found to be
The terms lim_{n→∞} Z_n and lim_{n→∞} ℱ_n can be obtained from (35) and (37), respectively.
Since the EMSE is very close to zero at steady state, the higher powers of ζ_∞ can be ignored. Ultimately, the steady-state EMSE of the proposed algorithm can be shown to be
5 Tracking analysis of the VPNMN algorithm
Cyclic and random system nonstationarities are a common impairment in communication systems, especially in applications that involve channel estimation, channel equalization, and intersymbol-interference cancellation. Random nonstationarity is present due to variations in channel characteristics, which is the case in most situations, particularly in a mobile communication environment [26]. Cyclic system nonstationarities arise in communication systems due to mismatches between the transmitter and receiver carrier generators.
The ability of adaptive filtering algorithms to track such system variations is not yet fully understood. In this regard, Rupp [27] presented a first-order analysis of the performance of the LMS algorithm in the presence of a carrier frequency offset. In [21, 25, 28, 29], a general framework for the tracking analysis of adaptive algorithms was developed; it can handle both cyclic and random system nonstationarities simultaneously. This framework, based on an energy conservation principle [20], holds for all adaptive algorithms whose recursions are of the form
In the ensuing analysis, the tracking analysis of the proposed algorithm is carried out in the presence of both random and cyclic nonstationarities. It should be noted that in this case, unlike the convergence analysis, which is a linear process, the tracking analysis is a nonlinear one due to the presence of the term e^{jΩn} in (54). This justifies our use of complex signals, instead of real ones, in the tracking analysis.
A general system model is presented here which includes both types of nonstationarities, that is random and cyclic ones. To start, consider the noisy measurement d_{ n }that arises in a model of the form:
where η_n is the measurement noise and ${\mathbf{w}}_{n}^{o}$ is the unknown system to be tracked. The multiplicative term e^{jΩn} accounts for a possible frequency offset between the transmitter and receiver carriers in a digital communication scenario. Furthermore, it is assumed that the unknown system vector ${\mathbf{w}}_{n}^{o}$ varies randomly according to:
where w^o is a fixed vector and q_n is assumed to be a zero-mean stationary random vector process with a positive definite autocorrelation matrix ${\mathbf{Q}}_{n}=E\left[{\mathbf{q}}_{n}{\mathbf{q}}_{n}^{H}\right]$. Moreover, the sequence {q_n} is assumed to be mutually independent of the sequences {x_n} and {η_n}. Thus, from the generalized system model given by (54) and (55), it can be seen that the effects of both cyclic and random system nonstationarities are included in this system model.
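Data generation under this model can be sketched as follows. The exact forms of (54) and (55) are assumed from the surrounding text, i.e., d_n = e^{jΩn} x_n^H w_n^o + η_n with w_n^o = w^o + q_n; the dimensions, offset, and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_samples = 4, 1000
Omega = 0.01                       # carrier frequency offset (illustrative)
sigma_q, sigma_eta = 1e-3, 1e-2    # perturbation and noise scales (illustrative)

def crandn(*shape):
    """Zero-mean circular complex Gaussian samples, unit variance per entry."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

w_o = crandn(N)                    # fixed part of the unknown system
d = np.empty(n_samples, dtype=complex)
for n in range(n_samples):
    x = crandn(N)                              # complex input regressor
    w_n = w_o + sigma_q * crandn(N)            # random nonstationarity, cf. (55)
    eta = sigma_eta * crandn()                 # complex measurement noise
    d[n] = np.exp(1j * Omega * n) * np.vdot(x, w_n) + eta   # cyclic offset, cf. (54)

print(np.all(np.isfinite(d)))
```

Note that np.vdot conjugates its first argument, giving exactly the x_n^H w_n^o inner product used by the model.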
In the tracking analysis of adaptive algorithms, an important measure of performance is the steady-state tracking EMSE, which is given by
where ${\tilde{\mathbf{v}}}_{n}$ is the weight-error vector for the tracking scenario and is defined as follows:
Using (53), (55) and (57) the following recursion is obtained:
where c _{ n } is defined as
Now, let us define the a priori estimation error ${e}_{\mathrm{a}n}={\mathbf{x}}_{n}^{H}{\tilde{\mathbf{v}}}_{n}$ and the a posteriori estimation error ${e}_{\mathrm{p}n}={\mathbf{x}}_{n}^{H}\left({\tilde{\mathbf{v}}}_{n+1}-{\mathbf{c}}_{n}{e}^{j\mathrm{\Omega}n}\right)$. It is then straightforward to show that the estimation error and the a priori error are related via e_n = e_{an} + η_n. Also, from (26) with A = I, the a posteriori error is defined in terms of the a priori error as follows:
where ${\widehat{\mu}}_{n}=1/\parallel {\mathbf{x}}_{n}{\parallel}^{2}$. Substituting (60) into (58) results in the following update relation:
By evaluating the energies of both sides of the above equation (taking into account that ${\widehat{\mu}}_{n}\parallel {\mathbf{x}}_{n}{\parallel}^{2}=1$), the following relation is obtained:
It can be seen that if Ω = 0 (i.e., no frequency offset between the transmitter and the receiver), the above equation reduces to the basic fundamental energy conservation relation.
The energy relation (62) will be used to evaluate the excess mean-square error at steady state. But before starting the analysis, the following assumption is stated:
A.8 In steady state, the weight-error vector ${\tilde{\mathbf{v}}}_{n}$ takes the generic form z_n e^{jΩn}, with the stationary random process z_n independent of the frequency offset Ω.
Using (60) and assumption A.8, taking expectations of both sides of (62), and using the fact that at steady state $E\left[{\tilde{\mathbf{v}}}_{n+1}\right]=E\left[{\tilde{\mathbf{v}}}_{n}\right]$, the following relation can be obtained:
The above equation can be used to solve for the steady-state EMSE. To find the value of z = E[z_n], (58) is used: it is multiplied by the term e^{jΩn}, and expectations are then taken on both sides to get
which yields the following value of z at steadystate:
where γ_{ o } is defined as
Ultimately, the steady-state excess mean-square error of the proposed algorithm, ζ^{tracking}, is obtained from (63):
where
and
It can be seen from the above result that the steady-state tracking EMSE of the NLMS algorithm [28] and that of the NLMF algorithm [29] can be recovered by substituting α_n = 1 and α_n = 0, respectively, in (67).
For a white Gaussian input signal, the autocorrelation matrix of the input is $\mathbf{R}={\sigma}_{x}^{2}\mathbf{I}$, and (67) therefore reduces to the following:
6 Simulation results
The performance of the proposed algorithm, the VPNMN LMS-LMF, is assessed in different scenarios. Experiments are carried out in which an unknown system is to be identified under noisy conditions. The unknown system is a nonminimum-phase channel. The input signal to both the unknown system and the adaptive filter is obtained by passing a zero-mean white Gaussian sequence through a channel that is used to vary the eigenvalue spread of the autocorrelation matrix of the input signal. The example considered for the sequence {x_n} has an eigenvalue spread of 68.9. The additive noise, η_n, is zero-mean. The signal-to-noise ratio is set to 20 dB, and the performance measure considered is the normalized weight-error norm 10 log_{10}(║w_n − w_opt║²/║w_opt║²). Results are obtained by averaging over 500 independent runs. The proposed algorithm is implemented with the parameters δ = 0.97, β = 0.98, γ = 10^{−2}, α_0 = 0.8, and p_0 = 0. In what follows, different aspects of the performance are considered during the course of this study.
6.1 Convergence behavior
Figure 1 compares the fastest convergence characteristics of the proposed algorithm and of the NLMS algorithm. It can be seen from this figure that the proposed algorithm converges as fast as the NLMS algorithm but results in a lower weight mismatch; an improvement of 25 dB is obtained through the use of the proposed algorithm. Also, as shown in Figure 2, the proposed algorithm outperforms the NLMS algorithm, for the lowest steady-state error reached by the latter, thanks to its built-in gear-shifting mechanism, which gives it an extra degree of freedom in this region.
The fast convergence obtained by the proposed algorithm can be justified by the fact that, when far from the optimum solution, the algorithm automatically increases its step size (the gear-shifting property) and thus converges faster than the NLMS algorithm.
Figure 3 summarizes the performance of the proposed VPNMN algorithm in three different noise environments with an SNR of 20 dB when the input signal is white. As can be seen from this figure, the best performance is obtained when the noise statistics are uniform, while the worst performance is obtained when the noise statistics are Laplacian.
Similarly, Figure 4 depicts the results for the proposed VPNMN algorithm when the input signal is highly correlated. As can be seen from this figure, almost equal performance is obtained by the VPNMN algorithm for the different noise statistics.
In order to verify the stability bound on the step size given in (48), we investigate it in a Gaussian environment with an SNR of 20 dB. Here, we choose a misadjustment of five, which results in a Cramér-Rao bound of C ≤ 0.05. Thus, choosing tr(R) = 5, the upper bound given in (48) is found to be 0.95. It is observed from the various simulations performed that the proposed algorithm is stable while µ is less than 1.0, thus validating the derived stability bound.
Finally, from the viewpoint of computational load, the proposed algorithm requires an additional seven multiplications and three additions when compared to the fixed mixed-norm algorithm defined by (4), and only eleven multiplications and six additions when compared to the NLMS algorithm. The small computational overhead of the proposed algorithm is therefore well worth the gain in steady-state error reduction it brings about.
6.2 Results for the MSE learning curve
Figure 5 depicts the time evolution of the MSE obtained from both the theoretical analysis, the second entry of (44), and the simulations. Excellent agreement between theory and simulation results is obtained; hence, consistent performance is achieved by the proposed VPNMN algorithm.
6.3 Results for tracking
For tracking, the simulations are carried out for a system identification problem in which the unknown system, having an FIR model, is given by [1.0119 − j0.7589, −0.3796 + j0.5059]^T, while the system characteristics are time-varying according to the system model (54) and (55). Results for the proposed algorithm are presented to validate the theoretical findings for different values of Ω and different values of µ. The input signal x_n to both the unknown system and the adaptive filter is a zero-mean white Gaussian sequence. The signal-to-noise ratio is set to 30 dB. Two values are considered for tr{Q_n}: a very small value of tr{Q_n} = 10^{−7}, and a very large one of tr{Q_n} = 10^{−2}.
Figure 6 depicts the comparison of the theory to the simulation results for three different values of Ω, i.e., Ω = 0.001, 0.002, and 0.003. As can be seen from this figure, close agreement between theory and simulation results is obtained. Furthermore, it is observed that performance degrades as the frequency offset Ω increases and that, unlike in the stationary case, the steady-state EMSE is not a monotonically increasing function of the step size µ; that is, the steady-state EMSE is smaller at larger values of the step size µ.
Figure 6 is obtained for the case tr{Q_n} = 10^{−7}, which represents a small value. Increasing this value to 10^{−2}, the results depicted in Figure 7 for three larger values of Ω, i.e., 0.01, 0.02, and 0.03, show that the previously stated observations remain similar to those obtained for the smaller value of tr{Q_n}.
Finally, the consistency in the steady-state EMSE performance of the proposed algorithm is observed in both cases (the two different values of tr{Q_n}) and for different values of Ω.
6.4 Noise cancellation using the VPNMN algorithm
In this example, we study the performance of the VPNMN algorithm in a noise cancellation application. A pure sinusoidal noise generated by the process u_n = 0.8 sin(ωn + 0.5π), with ω = 0.1π, is to be removed from a square wave generated by s_n = 2 × ((mod(n, 1000) < 1000/2) − 0.5), where mod(n, 1000) computes the modulus of n over 1,000. Summing u_n and s_n gives the reference signal to the adaptive filter. The input to the adaptive filter is a sinusoidal signal generated by ${x}_{n}=\sqrt{2}\,\text{sin}\left(\omega n\right)$ with ω = 0.1π. The resulting output error signal e_n will, in time, converge to the desired signal, which is noiseless.
Figure 8 depicts the reference response and the processed results of the VPNMN and NLMS algorithms. It is clear that both algorithms are able to remove the noise component, but the VPNMN algorithm exhibits better noise cancellation capabilities than the NLMS algorithm.
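The noise-cancellation setup above is fully specified and can be reproduced with a short sketch. The canceller below uses the assumed normalized mixed-norm update (a fixed α = 0.8, two taps, and a step size chosen for illustration, not the paper's exact settings); the residual e_n approaches the square wave s_n:

```python
import numpy as np

omega = 0.1 * np.pi
n = np.arange(10_000)
s = 2.0 * ((np.mod(n, 1000) < 500) - 0.5)     # square wave (desired signal)
u = 0.8 * np.sin(omega * n + 0.5 * np.pi)     # sinusoidal interference
d = s + u                                     # reference signal to the canceller
x = np.sqrt(2.0) * np.sin(omega * n)          # adaptive-filter input

taps, mu, eps, alpha = 2, 0.02, 1e-6, 0.8     # illustrative parameter choices
w = np.zeros(taps)
e = np.zeros(len(n))
for k in range(taps, len(n)):
    xk = x[k - taps + 1:k + 1][::-1]          # regressor [x_k, x_{k-1}]
    e[k] = d[k] - np.dot(w, xk)               # converges toward s_k
    f = alpha * e[k] + (1.0 - alpha) * e[k] ** 3   # assumed mixed-norm error term
    w += (mu / (eps + np.dot(xk, xk))) * f * xk

residual = np.mean((e[5000:] - s[5000:]) ** 2)    # leftover interference power
print(residual, np.mean(u ** 2))                  # residual well below E[u^2] = 0.32
```

Two taps suffice here because any amplitude and phase of a sinusoid at ω can be synthesized from two delayed samples of x_n, so the filter can reconstruct u_n exactly.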
7 Conclusion
In this study, a normalized VPNMN algorithm is proposed where a combination of the LMS and the LMF algorithms is incorporated using the concept of variable stepsize LMS adaptation. It is found that the proposed algorithm has the fast convergence property of the NLMS algorithm while resulting in a lower steadystate error, therefore eliminating the conflict between these two parameters, i.e., fast convergence and low steadystate error. Moreover, the consistency of the performance of the proposed algorithm has been confirmed by many simulation results which are reported here.
Analytical results for the tracking steady-state EMSE of the proposed algorithm are derived in the presence of both random and cyclic nonstationarities. The results show that, unlike in the stationary case, the steady-state EMSE is not a monotonically increasing function of the step-size µ, while the ability of the algorithm to track variations in the environment degrades as the frequency offset Ω increases.
Finally, the VPNMN algorithm proved its usefulness in a noise cancellation scenario, where it showed its superiority over the NLMS algorithm.
References
1. Widrow B, McCool JM, Larimore MG, Johnson CR: Stationary and nonstationary learning characteristics of the LMS adaptive filter. Proc IEEE 1976, 64(8):1151-1162.
2. Macchi O: Adaptive Processing: The LMS Approach with Applications in Transmission. Wiley, New York; 1995.
3. Walach E, Widrow B: The least mean fourth (LMF) adaptive algorithm and its family. IEEE Trans Inf Theory 1984, 30:275-283. 10.1109/TIT.1984.1056886
4. Haykin S: Adaptive Filter Theory. 3rd edition. Prentice-Hall, Englewood Cliffs; 1996.
5. Nagumo JI, Noda A: A learning method for system identification. IEEE Trans Automat Control 1967, 12:282-287.
6. Zerguine A: Convergence and steady-state analysis of the normalized least mean fourth algorithm. Digital Signal Process 2007, 17(1):17-31. 10.1016/j.dsp.2006.01.005
7. Zerguine A: Convergence behavior of the normalized least mean fourth algorithm. Proc 34th Annual Asilomar Conf Signals, Syst Comput 2000, 275-278.
8. Chan MK, Cowan CFN: Using a normalised LMF algorithm for channel equalisation with co-channel interference. XI Euro Sig Process Conf (EUSIPCO 2002) 2002, 2:49-51.
9. Tanrikulu O, Chambers JA: Convergence and steady-state properties of the least mean mixed-norm (LMMN) adaptive filtering. IEE Proc Vis Image Signal Process 1996, 143(3):137-142.
10. Zerguine A, Cowan CFN, Bettayeb M: LMS-LMF adaptive scheme for echo cancellation. Electron Lett 1996, 32(19):1776-1778. 10.1049/el:19961202
11. Al-Naffouri TY, Zerguine A, Bettayeb M: Convergence properties of mixed-norm algorithms under general error criteria. IEEE ISCAS '99 1999, 211-214.
12. Tarrab M, Feuer A: Convergence and performance analysis of the normalized LMS algorithm with uncorrelated Gaussian data. IEEE Trans Inf Theory 1988, 34:680-691. 10.1109/18.9768
13. Sulyman AI, Zerguine A: Convergence and steady-state analysis of a variable step-size NLMS algorithm. Signal Process 2003, 83(6):1255-1273. 10.1016/S0165-1684(03)00044-6
14. Zerguine A, Aboulnasr T: Convergence analysis of the variable weight mixed-norm LMS-LMF adaptive algorithm. Proc 34th Annual Asilomar Conf Signals, Syst, Comput 2000, 279-282.
15. Aboulnasr T, Mayyas K: A robust variable step-size LMS-type algorithm: analysis and simulations. IEEE Trans Signal Process 1997, 45(3):631-639. 10.1109/78.558478
16. Mazo JE: On the independence theory of equalizer convergence. Bell Syst Tech J 1979, 58:963-993.
17. Cho SH, Kim SD, Jean KY: Statistical convergence of the adaptive least mean fourth algorithm. Proceedings of the ICSP'96 1996, 610-613.
18. Hubscher PI, Bermudez JCM: An improved statistical analysis of the least mean fourth (LMF) adaptive algorithm. IEEE Trans Signal Process 2003, 51(3):664-671. 10.1109/TSP.2002.808126
19. Proakis JG: Digital Communications. 4th edition. McGraw-Hill, Singapore; 2001.
20. Rupp M, Sayed AH: A time-domain feedback analysis of filtered-error adaptive gradient algorithms. IEEE Trans Signal Process 1996, 44:1428-1439. 10.1109/78.506609
21. Sayed AH: Adaptive Filters. Wiley, NJ; 2008.
22. Bershad NJ, Bonnet M: Saturation effects in LMS adaptive echo cancellation for binary data. IEEE Trans Acoust Speech Signal Process 1990, 38(10):1687-1696. 10.1109/29.60100
23. Papoulis A: Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York; 1991.
24. Douglas SC, Meng THY: Stochastic gradient adaptation under general error criteria. IEEE Trans Signal Process 1994, 42(6):1352-1365. 10.1109/78.286952
25. Yousef NR, Sayed AH: A unified approach to the steady-state and tracking analysis of adaptive filters. IEEE Trans Signal Process 2001, 49:314-324. 10.1109/78.902113
26. Rappaport TS: Wireless Communications. Prentice-Hall, Upper Saddle River; 1996.
27. Rupp M: LMS tracking behavior under periodically changing systems. In EUSIPCO-1998. Island of Rhodes, Greece; 1998:1253-1256.
28. Moinuddin M, Zerguine A: Tracking analysis of the NLMS algorithm in the presence of both random and cyclic nonstationarities. IEEE Signal Process Lett 2003, 10(9):256-258.
29. Moinuddin M, Zerguine A, Sheikh AUH: Tracking analysis of the NLMF algorithm in the presence of both random and cyclic nonstationarities. In ISSPA 2005. Sydney, Australia; 2005:755-758.
Acknowledgements
The author acknowledges the support of the Deanship of Scientific Research at King Fahd University of Petroleum & Minerals. This research work is funded by the King Fahd University of Petroleum & Minerals under Research Grants (FT090016) and (SB101024).
Competing interests
The author declares that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Zerguine, A. A variable-parameter normalized mixed-norm (VPNMN) adaptive algorithm. EURASIP J. Adv. Signal Process. 2012, 55 (2012). https://doi.org/10.1186/1687-6180-2012-55
Keywords
 LMS algorithm
 NLMS algorithm
 LMF algorithm
 mixed-norm algorithms
 normalized mixed-norm algorithms