Modified Clipped LMS Algorithm

EURASIP Journal on Applied Signal Processing 2005:8, 1229–1234
© 2005 Hindawi Publishing Corporation

A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.


INTRODUCTION
Adaptive signal processing has been one of the fastest growing fields of research in recent years. It has attained its popularity due to a broad range of useful applications in such diverse areas as communications, radar, sonar, seismology, navigation and control systems, and biomedical electronics. The LMS adaptive filter is very popular due to its simplicity, but even simpler approaches are required for many real-time applications; consequently, several different versions of the LMS algorithm have been proposed in the literature [1,2,3,4,5,6]. Reduction of the complexity of the LMS algorithm has received attention in the area of adaptive filters [5,7,8,9]. The sign algorithm and the clipped-data algorithm are in this category [2,5,8,9,10].
The tracking behavior of adaptive filtering algorithms is a fundamental issue in defining their performance in nonstationary operating environments. It has been established that adaptive algorithms that exhibit good convergence properties in stationary environments do not necessarily provide good tracking performance in a nonstationary environment, because the convergence behavior of an adaptive filter is a transient phenomenon, whereas the tracking behavior is a steady-state property [11,12]. Thus, much research has been done on measuring the tracking performance of variants of the LMS algorithm from different points of view [10,13,14,15].
For applications in which slow adaptation is acceptable, the clipped LMS (CLMS) algorithm has an edge over the others in terms of speed of processing [16]. A fast CLMS has also been proposed to increase the speed of convergence [2].
Much effort toward reducing the computational cost of the LMS algorithm can be seen in the aforementioned references. The present work presents a modified version of the CLMS algorithm whose tracking is much better than that of the CLMS and LMS algorithms and which requires less computation as well.
The variants of LMS are discussed in Section 2. The proposed new algorithm, which is a modification of the aforementioned algorithm, appears in Section 3. Section 4 deals with the computation of tracking performance of the proposed algorithm. Section 5 is concerned with computer simulation issues. Reduction of computational complexity of the proposed algorithm is investigated in Section 6. The final section presents conclusions for the present work and summarises the main findings.

VARIANTS OF THE LMS ALGORITHM
The purpose of this section is to briefly introduce the main existing variants of the LMS algorithm. In order to clarify the background of the new algorithm, it is necessary to show how they are interrelated and how they have evolved. The LMS algorithm has been studied in [17,18] as

W_{n+1} = W_n + μ e_n X_n, (1)

e_n = d_n − W_n^T X_n, (2)

where W_n = [w_n(1), w_n(2), ..., w_n(N)]^T is the weight vector of the estimator, X_n is the vector of the input data sequence, which is assumed to be a stationary random process, N is the number of filter taps, e_n is the estimation error, d_n is the desired response, and μ is the step size. A simple change can be made to the LMS algorithm to obtain the CLMS algorithm [2,16,19]:

W_{n+1} = W_n + μ e_n X̄_n, (3)

where X̄_n is the clipped input signal vector, whose ith component is x̄(i) = sgn[x(i)]. Other variations of the LMS algorithm that have been studied are the "sign" algorithm [20,21], W_{n+1} = W_n + μ ẽ_n X_n, where ẽ_n = sgn(e_n), and the "zero-forcing" algorithm [22,23], W_{n+1} = W_n + μ ẽ_n X̄_n.
The CLMS algorithm involves clipping the input signal vector in the weight update formula (3). This quantization scheme can best be illustrated by Figure 1.
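For concreteness, the LMS update (1)-(2) and the CLMS update (3) can be sketched in a few lines of NumPy; the helper names (`lms_update`, `clms_update`) and step-size values are illustrative, not from the original paper.

```python
import numpy as np

def lms_update(w, x, d, mu):
    """One LMS iteration: W_{n+1} = W_n + mu * e_n * X_n."""
    e = d - w @ x              # estimation error e_n = d_n - W_n^T X_n
    return w + mu * e * x, e

def clms_update(w, x, d, mu):
    """One CLMS iteration: the update direction is the clipped input sgn(X_n)."""
    e = d - w @ x              # the error still uses the unclipped input
    return w + mu * e * np.sign(x), e
```

Only the update direction differs between the two rules; the error computation is identical, which is why clipping saves multiplications without changing the error definition.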

THE PROPOSED MODIFIED CLIPPED LMS ALGORITHM
Here we propose a new modification to the clipped LMS algorithm to further simplify the implementation of the LMS algorithm. Rather than representing the input signal X_n by a two-level signal as shown earlier by (3), we quantize it into a three-level signal according to the quantization scheme shown in Figure 2. Thus, the adaptation equation can be written as

W_{n+1} = W_n + μ e_n X̃_n, (7)

where X̃_n is the modified clipped input signal vector, whose ith component is

x̃_n(i) = msgn[x_n(i), δ] = sgn[x_n(i)] if |x_n(i)| ≥ δ, and 0 if |x_n(i)| < δ. (8)

It should be noted that the implementation of such an adaptive filter has potentially greater throughput because, for those times when the magnitude of the tap input signal x_n(i) is less than the specified threshold δ, x̃_n(i) will be equal to zero and no coefficient adaptation for the corresponding weight needs to be performed.
This means that some of the time-consuming operations in the weight update formula (7) can be omitted, thereby leading to a reduction of the computational load on the processor. Whether this potential can be realised depends on the architecture used in the processor and also the application. Convergence of the mean of the weight vector for MCLMS is proved in the next subsection. It is shown that the mean of the weight vector of the modified clipped LMS algorithm converges to the optimum weight vector of the Wiener filter.
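A minimal NumPy sketch of the three-level quantizer (8) and the MCLMS update (7) follows; the function names are illustrative.

```python
import numpy as np

def msgn(x, delta):
    """Three-level quantizer of (8): sgn(x) where |x| >= delta, else 0."""
    return np.where(np.abs(x) >= delta, np.sign(x), 0.0)

def mclms_update(w, x, d, mu, delta):
    """One MCLMS iteration: W_{n+1} = W_n + mu * e_n * msgn(X_n, delta).
    Taps whose input magnitude is below delta receive no update."""
    e = d - w @ x              # error uses the raw (unquantized) input
    return w + mu * e * msgn(x, delta), e
```

In a fixed-point implementation the multiplication by msgn reduces to a sign change or a skipped update, which is the source of the computational saving analyzed later.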

Derivation of the convergence of the MCLMS algorithm
Now we want to prove that the statistical average of the weight vector converges in the limit to the optimum Wiener weight vector. Taking expectations on both sides of (7) yields

E{W_{n+1}} = E{W_n} + μ E{e_n X̃_n}. (10)

Substituting (2) in (10) gives

E{W_{n+1}} = E{W_n} + μ E{(d_n − X_n^T W_n) X̃_n}.

Assuming lack of correlation between the weights and X̃_n X_n^T as in [17], this gives

E{W_{n+1}} = E{W_n} + μ (E{d_n X̃_n} − E{X̃_n X_n^T} E{W_n}).

Now, with regard to (A.1) in the appendix, we have

E{X̃_n X_n^T} = (α/σ_x) R,  E{d_n X̃_n} = (α/σ_x) P, (13)

where

α = √(2/π) exp(−δ²/2σ_x²),  R = E{X_n X_n^T},  P = E{d_n X_n},

and σ_x is the standard deviation of the input signal. We know that the optimum Wiener weight vector is W* = R^{−1}P. Substituting (13) and W* in the previous relation yields

E{W_{n+1}} = (I − μ(α/σ_x)R) E{W_n} + μ(α/σ_x) R W*.

If V_n = W_n − W*, then

E{V_{n+1}} = (I − μ(α/σ_x)R) E{V_n}.

Now the principal axes are rotated according to V = QV′, where the columns of Q are the eigenvectors of R, so that R = QΛQ^{−1} and Λ is a diagonal matrix whose elements are the eigenvalues of R. Thus we have the following relation:

E{V′_{n+1}} = (I − μ(α/σ_x)Λ) E{V′_n} = (I − μ(α/σ_x)Λ)^{n+1} E{V′_0},

where Q and V_k are uncorrelated because R and W are uncorrelated. If (I − μ(α/σ_x)Λ)^n converges to zero in the limit, then lim_{n→∞} E{V′_{n+1}} = 0. In this case, lim_{n→∞} E{V_{n+1}} = 0 and consequently lim_{n→∞} E{W_{n+1}} = W*, that is, the MCLMS algorithm will converge. In order that lim_{n→∞} (I − μ(α/σ_x)Λ)^n = 0, it is necessary to find a condition on μ in terms of the eigenvalues: we require |1 − μ(α/σ_x)λ| < 1 for every eigenvalue λ of R. Therefore, the convergence condition is that for each eigenvalue of the matrix R, μ satisfies the following relation:

0 < μ < 2σ_x / (αλ). (19)

If μ satisfies this relation for the largest eigenvalue λ_max, then (19) is also satisfied for all other eigenvalues. Thus, the convergence condition for MCLMS is as follows:

0 < μ < 2σ_x / (αλ_max).

Also, the time constant for the exponential relaxation of the weight vector to its optimal value is, for the ith mode,

τ_i ≈ σ_x / (μαλ_i). (21)
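The step-size bound can be checked numerically. The helper below is a sketch, not code from the paper; it assumes the Gaussian-input closed form α = √(2/π) exp(−δ²/2σ_x²) for the attenuation factor.

```python
import numpy as np

def mclms_mu_bound(R, delta, sigma_x):
    """Upper bound on the MCLMS step size: mu < 2*sigma_x / (alpha * lambda_max),
    with alpha = sqrt(2/pi) * exp(-delta^2 / (2*sigma_x^2)) (Gaussian input)."""
    alpha = np.sqrt(2.0 / np.pi) * np.exp(-delta**2 / (2.0 * sigma_x**2))
    lam_max = np.linalg.eigvalsh(R).max()   # R is symmetric: use eigvalsh
    return 2.0 * sigma_x / (alpha * lam_max)
```

Raising the threshold δ shrinks α, which loosens the step-size bound but also lengthens the relaxation time constants, consistent with the slower convergence noted in the analysis.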

EVALUATING THE TRACKING PERFORMANCE OF THE MCLMS ALGORITHM
Tracking is a steady-state phenomenon, which differs from convergence, a transient phenomenon. In general, convergence and tracking are two different properties: if an algorithm has good convergence, its tracking ability is not necessarily good, and vice versa. In the tracking phase, a reasonable assumption is that the optimum weights vary according to a first-order Markov process [12], and the filter must track these weights. The following relations show the variation of the filter's optimum weights and the observation model:

W*_{n+1} = a W*_n + ω_n,  d_n = X_n^T W*_n + ν_n, (22)

where a is a constant, ω_n is the process noise vector at the nth step, which has zero mean and correlation matrix Φ, and ν_n is the measurement noise, which is assumed to be white Gaussian with zero mean and variance σ_ν².
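A sketch of this nonstationary plant model in NumPy, useful for setting up tracking experiments; the function name and the diagonal-Φ simplification are assumptions of this sketch.

```python
import numpy as np

def markov_plant(w0, a, phi_diag, steps, rng):
    """Generate optimum-weight trajectories W*_{n+1} = a*W*_n + omega_n,
    with omega_n zero-mean Gaussian; Phi is taken diagonal (phi_diag) here."""
    w = np.asarray(w0, dtype=float).copy()
    history = [w.copy()]
    for _ in range(steps):
        w = a * w + np.sqrt(phi_diag) * rng.standard_normal(w.size)
        history.append(w.copy())
    return np.array(history)
```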

The misadjustment criterion in MCLMS
According to [12], the algorithm misadjustment can be used as a criterion in tracking:

M = E{ω_n^T X̃_n X_n^T ω_n} / E{ν_n²}. (23)

The above relation shows that the weight misadjustment is related to the process noise power and X_n. Now we calculate the misadjustment (23). With the assumption of independence of X_n and ω_n and using relation (A.1) in the appendix, the numerator can be written as

E{ω_n^T X̃_n X_n^T ω_n} = tr{ E{X̃_n X_n^T} E{ω_n ω_n^T} } = (α/σ_x) tr{RΦ}.

Also, the denominator of the fraction (23) is

E{ν_n²} = σ_ν².

Hence, the MCLMS algorithm misadjustment can be written as

M_MCLMS = (α/σ_x)(1/σ_ν²) tr{RΦ}.

Comparing this misadjustment value with that of the LMS algorithm, M_LMS = (1/σ_ν²) tr{RΦ}, the following relation can be obtained:

M_MCLMS / M_LMS = α/σ_x.

The above relation shows that increasing the threshold δ so that α is less than σ_x gives rise to a decrease in the misadjustment error relative to LMS in tracking but, with regard to (21), causes the MCLMS to be slower in convergence. In the next section, this issue will be shown for the identification of a filter and its tracking.
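The misadjustment ratio can be evaluated numerically; this helper assumes the Gaussian-input closed form for α and is only an illustrative sketch.

```python
import numpy as np

def misadjustment_ratio(delta, sigma_x):
    """Ratio M_MCLMS / M_LMS = alpha / sigma_x, using the Gaussian-input
    closed form alpha = sqrt(2/pi) * exp(-delta^2 / (2*sigma_x^2))."""
    alpha = np.sqrt(2.0 / np.pi) * np.exp(-delta**2 / (2.0 * sigma_x**2))
    return alpha / sigma_x
```

For unit-variance input the ratio stays below 1 and decreases monotonically as the threshold grows: the tracking advantage that is traded against convergence speed.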

APPLICATION OF MCLMS IN THE IDENTIFICATION PROBLEM
In order to demonstrate the convergence behavior of the LMS, CLMS, and the new MCLMS algorithms, 100 runs of simulation experiments have been performed (with μ = 0.17 and δ = 0.7 for MCLMS, μ = 0.105 for LMS, and μ = 0.13 for CLMS, which were the best parameters for maximum speed of convergence). In all experiments carried out for the system identification, a stationary white noise sequence was used, and the system is a 7-tap FIR transversal filter with arbitrarily chosen parameters. The input data were normalized to have unit variance. The norm of the difference between the plant FIR weights and the adaptive filter weights generated by each algorithm was averaged over 100 independent simulation runs and plotted as a function of time, as depicted in Figure 3. The norm is calculated by

||W_plant − W_n|| = ( Σ_{i=1}^{N} [w_plant(i) − w_n(i)]² )^{1/2}.

It can be seen that MCLMS converges much faster than CLMS and is almost as good as LMS in terms of convergence speed. Of course, the MCLMS speed of convergence is reduced by increasing the threshold δ. In the above case, with a threshold of 0.7, the difference weight norm is improved by 12% relative to CLMS, whereas CLMS, in comparison to LMS, has both lower convergence speed and higher weight error. Figure 4 shows the ratio of the tracking error norm for the weights of the proposed MCLMS algorithm to those of LMS and CLMS. The existence of the second parameter δ in the MCLMS algorithm, in comparison with LMS and CLMS, has improved its performance.
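The experiment can be reproduced in outline as follows. This is a reduced sketch (fewer runs and iterations than the paper, no measurement noise, a common step size chosen only for stability), so the absolute curves differ from Figure 3.

```python
import numpy as np

def run_ident(direction, n_taps=7, n_iter=3000, runs=20, mu=0.05, seed=0):
    """Average weight-error norm ||W_plant - W_n|| over independent runs for
    an LMS-family algorithm whose update direction is direction(x)."""
    rng = np.random.default_rng(seed)
    norms = np.zeros(n_iter)
    for _ in range(runs):
        w_plant = rng.standard_normal(n_taps)   # arbitrary plant weights
        w = np.zeros(n_taps)
        for n in range(n_iter):
            x = rng.standard_normal(n_taps)     # unit-variance white input
            e = w_plant @ x - w @ x             # estimation error
            w = w + mu * e * direction(x)
            norms[n] += np.linalg.norm(w_plant - w)
    return norms / runs

lms = run_ident(lambda x: x)
clms = run_ident(lambda x: np.sign(x))
mclms = run_ident(lambda x: np.where(np.abs(x) >= 0.7, np.sign(x), 0.0))
```

With a common step size the effective adaptation gain of the thresholded variants is scaled down by α, so they decay more slowly here; the paper instead tunes μ per algorithm before comparing.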

REDUCTION OF COMPUTATIONAL COMPLEXITY OF THE MCLMS RELATIVE TO THE CLMS ALGORITHM
The proposed algorithm has less computational complexity than the CLMS algorithm. If we assume that the input signal has a Gaussian distribution with zero mean and standard deviation σ_x, then the probability that the signal falls in the interval [−δσ_x, δσ_x] is

P(−δσ_x < x < δσ_x) = ∫_{−δσ_x}^{δσ_x} N(0, σ_x) dx = erf(δ/√2), (30)

where N(0, σ_x) is the input probability density function. P(−δσ_x < x < δσ_x), in addition to being the probability of the occurrence of the signal in the interval [−δσ_x, δσ_x], is also the computational reduction of MCLMS relative to CLMS. The reason is that the signal falls between the two thresholds with probability P(−δσ_x < x < δσ_x), and within this interval the proposed algorithm performs no weight update, since according to (8), msgn[x_n(i), δσ_x] is equal to zero. The computational reduction of MCLMS compared to CLMS is shown in Table 1 for several different thresholds.
It is interesting to note that, regarding (30) and Figure 4, for δ = 0.7 the computational complexity of the weight update formula can be reduced by about 52% without any noticeable change in the convergence behavior.
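The entries of Table 1 can be reproduced from (30); `reduction` is an illustrative helper name.

```python
import math

def reduction(delta):
    """Computational reduction of MCLMS vs. CLMS from (30):
    P(-delta*sigma_x < x < delta*sigma_x) = erf(delta / sqrt(2))."""
    return math.erf(delta / math.sqrt(2.0))

# e.g. delta = 0.7 gives roughly a 52% reduction in weight updates
```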

CONCLUSIONS AND SUGGESTIONS FOR FURTHER WORK
A proposed modified clipped LMS algorithm for discrete-time adaptive FIR filtering has been studied. This new algorithm was treated analytically, and its convergence rate and its tracking performance, from a misadjustment viewpoint, were derived. Its advantages include a simple weight update formula, good convergence capability, and better tracking performance. Further work could apply the three-level clipping idea to the error signal instead of the input signal.
APPENDIX

Theorem. Let u and v be jointly Gaussian zero-mean random variables with standard deviations σ_u and σ_v and correlation coefficient ρ = E{uv}/(σ_u σ_v). Then

E{msgn(v, δ) u} = (α/σ_v) E{uv},  where α = √(2/π) exp(−δ²/2σ_v²). (A.1)

Proof. We define the random variable

z = u − (ρσ_u/σ_v) v.

Now we have

E{zv} = E{uv} − (ρσ_u/σ_v) E{v²} = ρσ_u σ_v − ρσ_u σ_v = 0.

Therefore z and v are uncorrelated. Also, with regard to the assumption of the theorem, z is a linear combination of the jointly Gaussian variables u and v, so z and v are jointly Gaussian; being uncorrelated, they are independent. Therefore, since z and v are independent and E{z} = 0, we have E{z msgn(v, δ)} = E{z} E{msgn(v, δ)} = 0 × E{msgn(v, δ)} = 0. Hence

E{msgn(v, δ) u} = E{msgn(v, δ) z} + (ρσ_u/σ_v) E{msgn(v, δ) v} = (ρσ_u/σ_v) E{v msgn(v, δ)}. (A.6)

On the other hand, the density function of v is Gaussian, N(0, σ_v); hence

E{v msgn(v, δ)} = 2 ∫_δ^∞ v (1/(√(2π) σ_v)) exp(−v²/2σ_v²) dv = √(2/π) σ_v exp(−δ²/2σ_v²) = α σ_v. (A.9)

Now, regarding (A.6) and (A.9), we have

E{msgn(v, δ) u} = ρ σ_u α. (A.10)

Finally, substituting E{uv} = ρσ_u σ_v in (A.10) gives E{msgn(v, δ) u} = (α/σ_v) E{uv}, which proves the theorem.
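As a sanity check, the identity (A.1) can be verified by Monte Carlo simulation; all parameter values below are arbitrary choices for the check.

```python
import numpy as np

# Monte Carlo check of (A.1): E{msgn(v, delta) u} = (alpha/sigma_v) E{uv}
rng = np.random.default_rng(1)
n, rho, sigma_u, sigma_v, delta = 1_000_000, 0.6, 1.5, 2.0, 0.7

v = sigma_v * rng.standard_normal(n)
# build u jointly Gaussian with v, with correlation coefficient rho
u = rho * (sigma_u / sigma_v) * v + np.sqrt(1 - rho**2) * sigma_u * rng.standard_normal(n)

msgn_v = np.where(np.abs(v) >= delta, np.sign(v), 0.0)
alpha = np.sqrt(2 / np.pi) * np.exp(-delta**2 / (2 * sigma_v**2))

lhs = np.mean(msgn_v * u)                  # sample estimate of E{msgn(v, delta) u}
rhs = (alpha / sigma_v) * np.mean(u * v)   # (alpha/sigma_v) * sample E{uv}
```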