Filtered-X Least Mean Fourth (FXLMF) and Leaky FXLMF adaptive algorithms

EURASIP Journal on Advances in Signal Processing 2016, 2016:39

https://doi.org/10.1186/s13634-016-0337-z

  • Received: 30 June 2015
  • Accepted: 16 March 2016

Abstract

Adaptive filtering algorithms promise improved performance for the active noise control (ANC) problem encountered in many scenarios. To name a few, the Filtered-X Least Mean Square (FXLMS) algorithm, the Leaky FXLMS (LFXLMS) algorithm, and other modified LMS-based algorithms have been developed and utilized to combat the ANC problem. All of these algorithms enjoy great performance when the signal-to-noise ratio (SNR) is high. On the other hand, when the SNR is low, which is the usual situation in ANC scenarios, their performance is much less attractive. The performance of the Least Mean Fourth (LMF) algorithm has never been tested in any ANC scenario under low or high SNR. Therefore, in this work, reflecting the developments in the LMS family onto the LMF, we propose two new adaptive filtering algorithms: the Filtered-X Least Mean Fourth (FXLMF) algorithm and its leakage-based variant (LFXLMF). The main goals of this work are to derive the FXLMF and LFXLMF adaptive algorithms, study their convergence behavior, examine their tracking and transient conduct, and analyze their performance in different noise environments. Moreover, a convex combination filter utilizing the proposed algorithms is investigated, and an algorithm robustness test is carried out. Finally, several simulation results are obtained to validate the theoretical findings and show the effectiveness of the proposed algorithms over other adaptive algorithms.

Keywords

  • Minimum Mean Square Error
  • Least Mean Square
  • Adaptive Filter
  • Least Mean Square Algorithm
  • Combined Filter

1 Introduction

Adaptive filtering algorithms are by now omnipresent in a variety of applications, such as plant modeling, adaptive equalization, and system identification, to name a few [1–8]. In addition, noise control and noise cancelation are important problems whose effects adaptive filtering algorithms strive to mitigate. Active noise control (ANC) techniques use adaptive filtering algorithms to cancel the effect of acoustic noise by playing an anti-noise signal estimated from the noise source itself.

The Least Mean Square (LMS) algorithm suffers from several problems, such as degraded efficiency due to the presence of a filter in the auxiliary or error path, as in the case of the ANC technique, as well as slow convergence, instability, and increased residual noise power. These constraints urged researchers to enhance the performance of the conventional LMS algorithm [9–11].

The Filtered-X LMS (FXLMS) algorithm is considered the cornerstone for ANC applications [12–15]. In this algorithm, an identical copy of the secondary path, mainly used to solve the instability problem and to eliminate the noise from the primary signal, is used to filter the input before the adaptive algorithm uses it to adjust the coefficient vector of the adaptive filter, as depicted in Fig. 1 [9]. More details about the different parameters in Fig. 1 will be given in the next section.
Fig. 1

Block diagram for ANC system
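The filtered-x idea described above is easy to state in code: the regressor fed to the weight update is the input filtered through the secondary path estimate Ŝ, not the raw input. Below is a minimal NumPy sketch of one FXLMS update; all lengths, path values, and the step size are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M_hat = 8, 4                          # filter and path-estimate lengths (assumed)
s_hat = np.array([1.0, 0.5, 0.2, 0.1])   # illustrative estimate of the secondary path
mu = 1e-3                                # step size (assumed)

# Input samples, newest first: x(n), x(n-1), ...
x_hist = rng.normal(size=N + M_hat - 1)

# Filtered-x regressor: each tap is x'(n-k) = sum_i s_hat[i] * x(n-k-i)
x_filt = np.array([s_hat @ x_hist[k:k + M_hat] for k in range(N)])

w = np.zeros(N)                          # adaptive filter weights
e = 0.5                                  # error-microphone sample (placeholder)
w = w + mu * e * x_filt                  # FXLMS update uses the *filtered* input
```

With Ŝ reduced to a single unit tap, x_filt collapses to the raw regressor and the update becomes plain LMS.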

In the last decade, intensive research was carried out to enhance the performance of the FXLMS algorithm. In [14], a new stochastic analysis of the FXLMS algorithm was introduced, using an analytical model not based on the independence theory [16], to derive the first moment of the adaptive filter weights. The main assumption of that work was to ignore the correlation between the data vector and the weights while preserving the correlations between past and present data vectors. The model was validated for both white and colored primary input signals and shows stability even when large step sizes are used.

The FXLMS algorithm is preferred because of its inherent stability and simplicity, but sometimes, the adaptive filter suffers from high noise levels caused by low-frequency resonances, which may cause nonlinear distortion due to overloading of the secondary source. This problem was solved by adding output power constraints to the cost function, as was proposed in the Leaky FXLMS (LFXLMS) algorithm [17]. Moreover, the LFXLMS reduces the numerical error in the finite precision implementation and limits the output power of the secondary source to avoid nonlinear distortion. The LFXLMS increases the algorithm’s stability, especially when a large source strength is used.

Another modification of the FXLMS algorithm is the Modified FXLMS (MFXLMS) algorithm [15]. Since the FXLMS exhibits poor convergence performance, the MFXLMS proposes a better convergence and reduces the computational load.

The LMS may suffer from divergence due to insufficient spectral excitation, such as a noise-free sinusoidal input, which may cause the weight vector to overflow during the update process. This divergence problem can be resolved by introducing a leak term into the weight vector update. Of course, this results in some performance loss; however, the leakage factor can be controlled to compensate for it. The leak also adds complexity, but it makes the adaptive filter more robust, as was done in the case of the Leaky LMS (LLMS) algorithm.

In [17], a stochastic analysis for the LFXLMS algorithms was proposed without resorting to the independence theory. Furthermore, to strengthen their work, the authors assumed an inexact estimation for the secondary path, which is the case in most practical implementations for the adaptive filter.

The Leaky LMS algorithm proposed in [18] aims to reduce the stalling effect that occurs at very low input signal levels, where the gradient estimate is too small to adjust the coefficients of the algorithm. Moreover, the leakage term successfully stabilized the LMS algorithm. The LLMS also solved the problem of bursting in short-distance telephones when the adaptive echo canceller was added [19].

A very important extension of the LMS algorithm is the Least Mean Fourth (LMF) algorithm [20], where the cost function for LMF algorithm, defined in terms of the error signal (e(n)) is given by
$$ {J}_{\mathrm{LMF}}(n)=E\left[{e}^4(n)\right]. $$
(1)
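From the cost (1), the instantaneous gradient of e⁴(n) with respect to the weights is −4e³(n)x(n), so the plain LMF update is w(n+1) = w(n) + μe³(n)x(n). A minimal system-identification sketch follows; the white Gaussian input, filter length, step size, and noise level are all illustrative assumptions, not the paper's ANC configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_iter, mu = 4, 20000, 1e-3        # illustrative constants
w_true = 0.5 * rng.normal(size=N)     # unknown 4-tap FIR system (assumed)
w = np.zeros(N)

for _ in range(n_iter):
    x = rng.normal(size=N)                  # white Gaussian regressor
    d = w_true @ x + 0.01 * rng.normal()    # desired signal plus small noise
    e = d - w @ x                           # a priori error
    w += mu * e**3 * x                      # LMF: instantaneous gradient of e^4 is -4 e^3 x
```

Note the cubed error: the update is aggressive when the error is large and very gentle near convergence, which is the qualitative difference from LMS.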

The LMF weights converge proportionally to the LMS weights. The performance of the LMF algorithm has never been tested in any ANC scenario under low or high signal-to-noise ratio (SNR). In this work, we propose two new algorithms, the Filtered-X LMF (FXLMF) and Leaky FXLMF (LFXLMF) algorithms. We analyze their convergence behavior and examine their performance for the mean and mean square error of the adaptive filter weights, under different statistical input signals and noise distributions and as a function of the secondary path modeling error, using an energy conservation relation framework. The two algorithms are expected to be highly effective for the ANC problem at the expense of extra computational complexity. Monte Carlo simulations are used to validate the analytical assumptions and to verify the accuracy of the proposed model.

This paper is organized as follows: Section 1 provides an introduction and a literature review. In Section 2, analytical derivations for the FXLMF and LFXLMF algorithms are presented. Section 3 proposes the convex combination of the FXLMF algorithm with the FXLMS algorithm. Simulation results are presented in Section 4, and finally, the conclusions and future work are presented in Section 5.

2 Analysis

Figure 1 illustrates the block diagram of an ANC system, showing the location of the secondary path S and its estimate Ŝ. The secondary path is a transfer function representing the cascade of a digital-to-analog (D/A) converter, a power amplifier, a canceling loudspeaker, an error microphone, and an analog-to-digital (A/D) converter.

The realization of the secondary path is usually obtained using a system identification technique. In this work, we assume an inexact estimate of the secondary path, which may introduce errors in the number of coefficients or in their values, as was done in [21]. Consequently, the secondary path estimate satisfies Ŝ ≠ S, and the number of filter coefficients satisfies \( \widehat{M}\ne M \). Table 1 describes the different parameters used in Fig. 1.
Table 1

Parameters and their descriptions used in Fig. 1

  • Adaptive filter weights: w(n) = [w_0(n) w_1(n) … w_{N−1}(n)]^T
  • Stationary input signal: x(n) = [x(n) x(n−1) … x(n−N+1)]^T
  • Secondary path: S = [s_0 s_1 … s_{M−1}]^T
  • Estimate of the secondary path: Ŝ = [ŝ_0 ŝ_1 … ŝ_{M̂−1}]^T
  • Primary (desired) signal: d(n)
  • Stationary noise process: z(n)
  • Number of tap weight coefficients: N
  • Number of secondary path coefficients: M

Referring to Fig. 1, the error signal is given by
$$ e(n)=d(n) - y^{\prime }(n)+z(n), $$
(2)
where d(n) is the desired response and y(n) is the output of the adaptive filter given by
$$ y(n)={\boldsymbol{x}}^T(n)\boldsymbol{w}(n)={\boldsymbol{w}}^T(n)\boldsymbol{x}(n), $$
(3)
y′(n) is the output of the secondary path,
$$ y'(n)=\sum_{i=0}^{M-1} s_i\, y(n-i) = \sum_{i=0}^{M-1} s_i\, \boldsymbol{x}^T(n-i)\, \boldsymbol{w}(n-i), $$
(4)
and z(n) is the additive noise. Finally, the filtered input signal is given as
$$ x'(n)=\sum_{i=0}^{\widehat{M}-1} \widehat{s}_i\, \boldsymbol{x}(n-i). $$
(5)

For the case of an exact approximation for the secondary path, that is S = Ŝ, the input signal, x(n), will be filtered by S.
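The signal relations (2)–(5) can be evaluated directly. The sketch below freezes the weights and computes one time instant; the lengths, path values, and signals are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, M_hat = 4, 3, 3                      # lengths chosen for illustration
w = rng.normal(size=N)                     # frozen adaptive weights
s = np.array([1.0, 0.4, 0.1])              # true secondary path S (assumed)
s_hat = s + 0.05 * rng.normal(size=M_hat)  # inexact estimate, S_hat != S
x = rng.normal(size=100)

def reg(n):
    """Regressor x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T (valid for n >= N)."""
    return x[n:n - N:-1] if n >= N else np.zeros(N)

n = 50
y_sec = sum(s[i] * (w @ reg(n - i)) for i in range(M))       # Eq. (4): y'(n)
x_filt = sum(s_hat[i] * reg(n - i) for i in range(M_hat))    # Eq. (5): x'(n)
d = rng.normal()                                             # placeholder d(n)
z = 0.01 * rng.normal()                                      # noise z(n)
e = d - y_sec + z                                            # Eq. (2)
```

Note that y′(n) mixes regressors at several delays through S, while x′(n) applies the estimate Ŝ; the mismatch between S and Ŝ is exactly the modeling error discussed above.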

2.1 Development of FXLMF algorithm

Using the block diagram in Fig. 1, the cost function for the FXLMF algorithm is given by the following relation:
$$ {J}_{\mathrm{FXLMF}}(n)=E\ \left[{e}^4(n)\right], $$
(6)
where the error signal, e(n), is given below as the difference between the output signal from the secondary path and the primary signal, that is,
$$ e(n)=d(n)-{\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}\left(n-i\right)+z(n). $$
(7)

During the course of the derivations, we will resort to the same assumptions used in the literature [17–23] to simplify our analysis. These assumptions are as follows:

2.1.1 Assumption A1

x(n) is the input signal, a zero-mean wide-sense stationary Gaussian process with variance σ_x^2, and R_{i,j} = E[x(n − j)x^T(n − i)] > 0 is the positive definite autocorrelation matrix of the input signal.

2.1.2 Assumption A2

z(n) is the measurement noise, an independent and identically distributed (i.i.d.) random variable with zero mean and variance σ_z^2 = E[z^2(n)], and there is no correlation between the input signal and the measurement noise. In other words, the sequence z(n) is independent of x(n) and w(n). The measurement noise is assumed to have an even probability density function.

Assuming that the vector w is fixed, then the cost function looks like the following:
$$ \begin{array}{c}\hfill {J}_{\mathrm{FXLMF}} = \left\{\left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{M-1}}{\displaystyle \sum_{l=0}^{M-1}}{s}_i{s}_j{s}_k{s}_l\ E\left[{\boldsymbol{x}}^T\left(n-l\right)\boldsymbol{x}\left(n-k\right)\ {\boldsymbol{x}}^T\left(n-j\right)\boldsymbol{x}\left(n-i\right)\right]\right)\ \right\}{\left\Vert \boldsymbol{w}\right\Vert}^4\hfill \\ {}\hfill - 4\ \left\{\left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{M-1}}{s}_i{s}_j{s}_k\ E\left[d(n)\boldsymbol{x}\left(n-k\right)\ {\boldsymbol{x}}^T\left(n-j\right)\boldsymbol{x}\left(n-i\right)\right]\right)\right\}{\left\Vert \boldsymbol{w}\right\Vert}^3\hfill \\ {}\hfill + 6\ \left\{\left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{s}_i{s}_j\ E\left[{d}^2(n)\ \boldsymbol{x}\left(n-j\right)\ {\boldsymbol{x}}^T\left(n-i\right)\right]\right)+{\sigma_z}^2\ \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{s}_i{s}_j\ E\left[\boldsymbol{x}\left(n-j\right)\ {\boldsymbol{x}}^T\left(n-i\right)\right]\right)\right\}{\left\Vert \boldsymbol{w}\right\Vert}^2\hfill \\ {}\hfill -4\ \left\{\left({\displaystyle \sum_{i=0}^{M-1}}{s}_i\ E\left[{d}^3(n)\ {\boldsymbol{x}}^T\left(n-i\right)\right]\right)-{\sigma_z}^2\left({\displaystyle \sum_{i=0}^{M-1}}{s}_i\ E\left[d(n){\boldsymbol{x}}^T\left(n-i\right)\right]\right)\right\}\left\Vert \boldsymbol{w}\right\Vert \hfill \\ {}\hfill + \left\{\left(E\ \left[{d}^4(n)\right]+{\sigma_z}^2-4\ E\ \left[{z}^3(n)\right]E\ \left[d(n)\right]+6\ {\sigma_d}^2{\sigma_z}^2\right)\right\}.\hfill \end{array} $$
(8)
To obtain the optimal weight vector for the cost function, we take the derivative of Eq. (8) with respect to w and set it to zero. Discarding the noise, z(n), the derivative of Eq. (8) will be
$$ \frac{\partial {J}_{\mathrm{FXLMF}}(n)}{\partial \boldsymbol{w}(n)}={\left\Vert \boldsymbol{w}\right\Vert}^3 - 3\left({{\tilde{\boldsymbol{R}}}_{s^4}}^{-1}{\tilde{\boldsymbol{P}}}_{d,{s}^3}\right){\left\Vert \boldsymbol{w}\right\Vert}^2 + 3{{\tilde{\boldsymbol{R}}}_{s^4}}^{-1}\left({\tilde{\boldsymbol{P}}}_{d^2,{s}^2}\right)\left\Vert \boldsymbol{w}\right\Vert -{{\tilde{\boldsymbol{R}}}_{s^4}}^{-1}\left({\tilde{\boldsymbol{P}}}_{d^3,s}\right), $$
(9)
where
$$ \begin{array}{c}\hfill {\tilde{\boldsymbol{R}}}_{s^2}={\displaystyle {\sum}_{i=0}^{M-1}{\displaystyle {\sum}_{j=0}^{M-1}{s}_i{s}_j\ E\left[\boldsymbol{x}\left(n-j\right){\boldsymbol{x}}^T\left(n-i\right)\right]}}\hfill \\ {}\hfill ={\displaystyle {\sum}_{i=0}^{M-1}{\displaystyle {\sum}_{j=0}^{M-1}{s}_i{s}_j\ {\boldsymbol{R}}_{\boldsymbol{i},\boldsymbol{j}},}}\hfill \end{array} $$
R i,j  = E [x(n − j)x T (n − i)] is the input autocorrelation matrix,
$$ \begin{array}{c}\hfill {\tilde{\boldsymbol{P}}}_{d,s}={\displaystyle {\sum}_{j=0}^{M-1}{s}_j\ E\ \left[d(n)\boldsymbol{x}\left(n-j\right)\right]}\hfill \\ {}\hfill = {\displaystyle {\sum}_{j=0}^{M-1}{s}_j{P}_{d,j},}\hfill \end{array} $$
P d,j  = E [d(n)x(n − j)] is the cross-correlation between the input and the primary signals,
$$ \begin{array}{c}\hfill {\tilde{\boldsymbol{R}}}_{s^4} = {\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{M-1}}{\displaystyle \sum_{l=0}^{M-1}}{s}_i{s}_j{s}_k{s}_l\ E\left[{\boldsymbol{x}}^T\left(n-l\right)\boldsymbol{x}\left(n-k\right)\ {\boldsymbol{x}}^T\left(n-j\right)\boldsymbol{x}\left(n-i\right)\right]\hfill \\ {}\hfill = {\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{M-1}}{\displaystyle \sum_{l=0}^{M-1}}{s}_i{s}_j{s}_k{s}_l\ {\boldsymbol{R}}_{\boldsymbol{i},\boldsymbol{j},\boldsymbol{k},\boldsymbol{l}}, \hfill \end{array} $$
$$ \begin{array}{c}\hfill {\tilde{\boldsymbol{P}}}_{d,{s}^3} = {\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{M-1}}{s}_i{s}_j{s}_k\ E\left[d(n)\boldsymbol{x}\left(n-k\right)\ {\boldsymbol{x}}^T\left(n-j\right)\boldsymbol{x}\left(n-i\right)\right]\hfill \\ {}\hfill = {\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{M-1}}{s}_i{s}_j{s}_k\ {\boldsymbol{P}}_{d,i,j,k},\hfill \end{array} $$
$$ \begin{array}{c}\hfill {\tilde{\boldsymbol{P}}}_{d^2,{s}^2} = {\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{s}_i{s}_j\ E\left[{d}^2(n)\ \boldsymbol{x}\left(n-j\right)\ {\boldsymbol{x}}^{\boldsymbol{T}}\left(n-i\right)\right]\hfill \\ {}\hfill ={\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{s}_i{s}_j{\boldsymbol{P}}_{d^2,i,j},\hfill \end{array} $$
and
$$ \begin{array}{c}\hfill {\tilde{\boldsymbol{P}}}_{d^3,s} = {\displaystyle \sum_{i=0}^{M-1}}{s}_i\ E\left[{d}^3(n)\ {\boldsymbol{x}}^T\left(n-i\right)\right]\hfill \\ {}\hfill ={\displaystyle \sum_{i=0}^{M-1}}{s}_i\ {\boldsymbol{P}}_{d^3,i}.\hfill \end{array} $$
Equation (9) has three solutions, and the optimal solution is given by
$$ {\boldsymbol{w}}_{\boldsymbol{o}}={{\tilde{\boldsymbol{R}}}_{s^2}}^{-1}{\tilde{\boldsymbol{P}}}_{d,s}. $$
(10)
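A quick numerical check of Eq. (10): if d(n) is generated by passing the input through a known w_true and the secondary path S (noise-free for clarity), the sample estimates of R̃_{s²} and P̃_{d,s} satisfy P̃_{d,s} = R̃_{s²} w_true exactly, so solving the normal equations recovers w_true. All constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, T = 4, 3, 2000                  # illustrative sizes
s = np.array([1.0, 0.4, 0.1])         # assumed secondary path
w_true = rng.normal(size=N)
x = rng.normal(size=T)

def reg(n):
    return x[n:n - N:-1]              # x(n), ..., x(n-N+1); valid for n >= N

R_tilde = np.zeros((N, N))            # sample estimate of R_tilde_{s^2}
P_tilde = np.zeros(N)                 # sample estimate of P_tilde_{d,s}
n0 = N + M
for n in range(n0, T):
    # d(n): input passed through w_true and then the secondary path (noise-free)
    d = sum(s[i] * (w_true @ reg(n - i)) for i in range(M))
    for i in range(M):
        for j in range(M):
            R_tilde += s[i] * s[j] * np.outer(reg(n - j), reg(n - i))
    for j in range(M):
        P_tilde += s[j] * d * reg(n - j)
R_tilde /= (T - n0)
P_tilde /= (T - n0)

w_o = np.linalg.solve(R_tilde, P_tilde)   # Eq. (10): w_o recovers w_true
```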

2.2 Mean behavior for the FXLMF algorithm

The FXLMF algorithm is governed by the following recursion:
$$ \boldsymbol{w}\left(n+1\right)=\boldsymbol{w}(n)-\frac{\mu }{4}\kern0.5em \frac{\partial {J}_{\mathrm{FXLMF}}(n)}{\partial \boldsymbol{w}(n)}, $$
(11)
where the instantaneous gradient can be approximated as
$$ \frac{\partial {\widehat{J}}_{\mathrm{FXLMF}}(n)}{\partial \boldsymbol{w}(n)} \approx -4\, e^3(n) \sum_{i=0}^{\widehat{M}-1} \widehat{s}_i\, \boldsymbol{x}^T(n-i) $$
(12)
due to the absence of exact knowledge of the secondary path. Substituting Eqs. (2)–(5) and (12) into (11), the adaptive weight vector update is given by
$$ \begin{array}{c}\hfill \boldsymbol{w}\left(n+1\right)=\boldsymbol{w}\ (n)+\hfill \\ {}\hfill\ \mu\ {\displaystyle \sum_{i=0}^{\widehat{M}-1}}{\widehat{s}}_i\ {d}^3(n)\boldsymbol{x}\left(n-i\right)-3\ \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{\widehat{M}-1}}{s}_i{\widehat{s}}_j\ {d}^2(n)\boldsymbol{x}\left(n-j\right){\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}\left(n-i\right)\right)\hfill \\ {}\hfill +3\kern0.5em \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{\widehat{M}-1}}{s}_i{s}_j{\widehat{s}}_k\ d(n)\boldsymbol{x}\left(n-k\right){\boldsymbol{x}}^T\left(n-j\right)\ \boldsymbol{x}\left(n-i\right)\boldsymbol{w}\left(n-j\right)\ {\boldsymbol{w}}^T\left(n-i\right)\right)\hfill \\ {}\hfill -6\kern0.5em \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{\widehat{M}-1}}{s}_i{\widehat{s}}_j\ z(n)\ d(n)\boldsymbol{x}\left(n-j\right){\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}\left(n-i\right)\right)\hfill \\ {}\hfill - \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{M-1}}{\displaystyle \sum_{l=0}^{\widehat{M}-1}}{s}_i{s}_j{s}_k{\widehat{s}}_l\ \boldsymbol{x}\left(n-l\right){\boldsymbol{x}}^T\left(n-k\right)\ \boldsymbol{x}\left(n-j\right){\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}\left(n-k\right)\ {\boldsymbol{w}}^T\left(n-j\right)\boldsymbol{w}\left(n-i\right)\right)\hfill \\ {}\hfill + 3\kern0.5em \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{\widehat{M}-1}}{s}_i{s}_j{\widehat{s}}_k\ z(n)\boldsymbol{x}\left(n-k\right){\boldsymbol{x}}^T\left(n-j\right)\ \boldsymbol{x}\left(n-i\right)\boldsymbol{w}\left(n-j\right)\ {\boldsymbol{w}}^T\left(n-i\right)\right)\hfill \\ {}\hfill - 3\ \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{\widehat{M}-1}}{s}_i{\widehat{s}}_j\ 
{z}^2(n)\boldsymbol{x}\left(n-j\right){\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}\left(n-i\right)\right)+\mu \left({\displaystyle \sum_{i=0}^{\widehat{M}-1}}{\widehat{s}}_i\ {z}^3(n)\boldsymbol{x}\left(n-i\right)\ \right)\hfill \\ {}\hfill +3\ \mu\ \left({\displaystyle \sum_{i=0}^{\widehat{M}-1}}{\widehat{s}}_i\ d(n){z}^2(n)\boldsymbol{x}\left(n-i\right)\right)+3\ \mu \left(\ {\displaystyle \sum_{i=0}^{\widehat{M}-1}}{\widehat{s}}_i\ {d}^2(n)z(n)\boldsymbol{x}\left(n-i\right)\right).\hfill \end{array} $$
(13)

To find the expectations for the terms on the right-hand side of Eq. (13), we resort to the following assumptions [1, 21]:

2.2.1 Assumption A3

The independence theory (IT) states that the taps of the input vector x(n − i), i = 0, 1, 2, … are statistically independent, so that E[x(n − i)x^T(n − j)] = 0, E[x(n − i)x^T(n − j)x(n − k)] = 0, and E[x(n − l)x^T(n − k)x(n − j)x^T(n − i)] = 0, for any i ≠ j, i ≠ j ≠ k, and i ≠ j ≠ k ≠ l, respectively.

2.2.2 Assumption A4

Take into consideration the correlation between x(n − i), x(n − j), x(n − k), and x(n − l) for all i, j, k, l, and ignore the correlation between w(n − v) and x(n − i), x(n − j), or x(n − k) for all i, j, k.

Using assumption A3, the mean weight update recursion for the FXLMF algorithm will look like the following:
$$ \begin{array}{c}\hfill E\left[\boldsymbol{w}\left(n+1\right)\right]=E\ \left[\boldsymbol{w}\ (n)\right] + \mu\ {\displaystyle \sum_{i=0}^{\widehat{M}-1}}{\widehat{s}}_i\ {\boldsymbol{P}}_{d^3,i}-3\ \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{\widehat{M}-1}}{s}_i{\widehat{s}}_j\kern0.5em {\boldsymbol{P}}_{d^2,i,j}E\left[\boldsymbol{w}\ \left(n-i\right)\right]\right)\hfill \\ {}\hfill + 3\ \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{\widehat{M}-1}}{s}_i{s}_j{\widehat{s}}_k\ {\boldsymbol{P}}_{d,i,j,k}E\left[\boldsymbol{w}\left(n-j\right){\boldsymbol{w}}^T\left(n-i\right)\right]\right)\hfill \\ {}\hfill - \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{M-1}}{\displaystyle \sum_{l=0}^{\widehat{M}-1}}{s}_i{s}_j{s}_k{\widehat{s}}_l\ {\boldsymbol{R}}_{\boldsymbol{i},\boldsymbol{j},\boldsymbol{k},\boldsymbol{l}}E\left[\boldsymbol{w}\left(n-k\right){\boldsymbol{w}}^T\left(n-j\right)\boldsymbol{w}\left(n-i\right)\right]\right)\ \hfill \\ {}\hfill - 3\ \mu\ {\sigma_z}^2\left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{\widehat{M}-1}}{s}_i{\widehat{s}}_j\ {\boldsymbol{R}}_{\boldsymbol{i},\boldsymbol{j}}E\left[\boldsymbol{w}\ \left(n-i\right)\right]\right)+3\ \mu\ {\sigma_z}^2\left({\displaystyle \sum_{i=0}^{\widehat{M}-1}}{\widehat{s}}_i\ {\boldsymbol{P}}_{d,i}\right).\hfill \end{array} $$
(14)
Consequently, after taking into account the independence theory, Eq. (14) looks like the following:
$$ \begin{array}{c}\hfill E\left[\boldsymbol{w}\left(n+1\right)\right]=E\left[\boldsymbol{w}(n)\right] + \mu\ {\widehat{s}}_0\ E\left[{d}^3(n){\boldsymbol{x}}^{\boldsymbol{T}}(n)\right]\hfill \\ {}\hfill - 3\ \mu \left({\displaystyle \sum_{i=0}^{\min \left(\widehat{M},M\right)-1}}{s}_i{\widehat{s}}_i\ {\boldsymbol{P}}_{d^2,i,j}E\left[\boldsymbol{w}\ \left(n-i\right)\right]\right)-3\ \mu {\sigma_z}^2\left({\displaystyle \sum_{i=0}^{\min \left(\widehat{M},M\right)-1}}{s}_i{\widehat{s}}_i{\boldsymbol{R}}_{\boldsymbol{i},\boldsymbol{j}}E\left[\boldsymbol{w}\ \left(n-i\right)\right]\right)\hfill \\ {}\hfill +3\ \mu \left({\displaystyle \sum_{i=0}^{\min \left(\widehat{M},M\right)-1}}{s}_i{s}_i{\widehat{s}}_i\ {\boldsymbol{P}}_{d,i,j,k}E\left[\boldsymbol{w}\left(n-i\right){\boldsymbol{w}}^T\left(n-i\right)\right]\right)+3\ \mu\ {\sigma_z}^2{\widehat{s}}_0\ E\left[d(n)\boldsymbol{x}(n)\right]\hfill \\ {}\hfill - \mu \left({\displaystyle \sum_{i=0}^{\min \left(\widehat{M},M\right)-1}}{s}_i{s}_i{s}_i{\widehat{s}}_i{\boldsymbol{R}}_{\boldsymbol{i},\boldsymbol{j},\boldsymbol{k},\boldsymbol{l}}\ E\left[\boldsymbol{w}\left(n-i\right){\boldsymbol{w}}^T\left(n-i\right)\boldsymbol{w}\left(n-i\right)\right]\right)\ .\ \hfill \end{array} $$
(15)
In a practical situation, exact modeling of the secondary path cannot be achieved; this may lead to an incorrect number of tap weights, such as \( \widehat{M}<M \), or to the same number of taps but with Ŝ ≠ S. Here, we consider the case of overestimation of the secondary path, as was the case for Eq. (12). Moreover, to study the steady-state condition, we assume that the optimal tap-weight solution is governed by lim n → ∞ E[w(n + 1)] = lim n → ∞ E[w(n)] = w ∞; as a result,
$$ \begin{array}{l}{\boldsymbol{w}}_{\infty}\approx {\boldsymbol{w}}_o\\ {} = {{\tilde{\boldsymbol{R}}}_{s^2}}^{-1}{\tilde{\boldsymbol{P}}}_{d,s}\end{array} $$
(16)

2.3 Second moment analysis for FXLMF algorithm

Using Eq. (7), the mean square error (MSE) for the FXLMF algorithm is obtained:
$$ \begin{array}{c}\hfill {\mathrm{MSE}}_{\mathrm{FXLMF}}(n)=E\ \left[{e}^2(n)\right]\hfill \\ {}\hfill = E\left[{d}^2(n)\right]-2{\displaystyle \sum_{i=0}^{M-1}}{s}_i\ E\left[d(n)\boldsymbol{x}\left(n-i\right)\right]E\left[\boldsymbol{w}\left(n-i\right)\right]\hfill \\ {}\hfill + {\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{s}_i{s}_j\ E\left[\boldsymbol{x}\left(n-j\right){\boldsymbol{x}}^T\left(n-i\right)\right]E\left[\boldsymbol{w}\left(n-i\right){\boldsymbol{w}}^T\left(n-i\right)\right]+E\left[{z}^2(n)\right]\kern0.5em \hfill \\ {}\hfill = {\sigma_d}^2-2{\displaystyle \sum_{i=0}^{M-1}}{s}_i\ {\boldsymbol{P}}_{d,i}\ E\left[\boldsymbol{w}\left(n-i\right)\right]\hfill \\ {}\hfill +{\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{s}_i{s}_j\ {\boldsymbol{R}}_{\boldsymbol{i},\boldsymbol{j}}\ E\left[\boldsymbol{w}\ \left(n-i\right){\boldsymbol{w}}^T\ \left(n-i\right)\right]+{\sigma_z}^2.\hfill \end{array} $$
(17)
Next, to find the minimum mean square error (MMSE), we need to substitute the optimal solution of the FXLMF algorithm (16) in (17); moreover, the optimal error is given by
$$ {e}_o(n)=d(n)-{\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right){\boldsymbol{w}}_o $$
(18)
Relying on the orthogonality principle [1, 2], the input signal will be orthogonal to the error, and noting that \( {\sigma_d}^2={{\tilde{\boldsymbol{P}}}_{d,s}}^{*}{{\tilde{\boldsymbol{R}}}_{s^2}}^{-1}{\tilde{\boldsymbol{P}}}_{d,s} \), where σ d 2 is the power of the desired response and \( {\tilde{\boldsymbol{P}}}_{\boldsymbol{d},\boldsymbol{s}} \) and \( {\tilde{\boldsymbol{R}}}_{{\boldsymbol{s}}^2} \) have been already defined in Section 2.1, the MMSEFXLMF can be expressed as follows:
$$ {\mathrm{MMSE}}_{\mathrm{FXLMF}} = {\sigma_z}^2 $$
(19)

2.4 FXLMF algorithm stability

Choosing the right value of step size ensures that the algorithm will converge. The FXLMF algorithm weight update equation is given by
$$ \boldsymbol{w}(n+1) = \boldsymbol{w}(n)+\mu\, e^3(n)\, \boldsymbol{x}'(n). $$
(20)
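The update in Eq. (20) can be sketched as a complete adaptation loop. In the sketch below the primary path is modeled, purely as an assumption, as w_true followed by the same secondary path S, so the optimal filter is w_true; the current weights are used at all delays, a small-step approximation to w(n − i).

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, T, mu = 4, 3, 30000, 5e-4          # all constants assumed for illustration
s = np.array([1.0, 0.4, 0.1])            # secondary path S
s_hat = s.copy()                          # exact estimate here, S_hat = S
w_true = 0.5 * rng.normal(size=N)         # primary path modeled as w_true then S
w = np.zeros(N)
x = rng.normal(size=T)

def reg(n):
    return x[n:n - N:-1]                  # x(n), ..., x(n-N+1); valid for n >= N

for n in range(N + M, T):
    d = sum(s[i] * (w_true @ reg(n - i)) for i in range(M))   # primary signal d(n)
    y_sec = sum(s[i] * (w @ reg(n - i)) for i in range(M))    # y'(n), Eq. (4)
    e = d - y_sec + 0.01 * rng.normal()                       # e(n), Eq. (2)
    x_filt = sum(s_hat[i] * reg(n - i) for i in range(M))     # x'(n), Eq. (5)
    w = w + mu * e**3 * x_filt                                # FXLMF update, Eq. (20)
```

The weights should approach w_true; as with plain LMF, adaptation slows sharply once e(n) is small because the correction scales with e³(n).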
Then, the algorithm converges when E[w(n + 1)] = E[w(n)], that is, the expected value of the weight adjustment term will be zero:
$$ \begin{array}{l}\mu\ E\left[{e}^3(n)\ x\mathit{\hbox{'}}(n)\right]=0\\ {}\mathrm{or}\\ {}\mu\ E\left[{\left(d(n)-{\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}\left(n-i\right)+z(n)\right)}^3\left({\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\right)\right]=0,\end{array} $$
(21)
since
$$ \begin{array}{c}\hfill e(n)=d(n)-{\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}\left(n-i\right)+z(n)\hfill \\ {}\hfill ={\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right){\boldsymbol{w}}_{\boldsymbol{o}} - {\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}\left(n-i\right)+z(n).\hfill \end{array} $$
(22)
The weight error vector is defined as
$$ \boldsymbol{v}(n)=\boldsymbol{w}(n)-{\boldsymbol{w}}_{\boldsymbol{o}}. $$
(23)
Hence,
$$ e(n)=z(n) - {\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{v}(n). $$
(24)
Using Eqs. (23) and (24) in Eq. (20), then
$$ \boldsymbol{v}\left(n+1\right)=\boldsymbol{v}(n)+\mu {\left(z(n) - {\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{v}(n)\right)}^3\left({\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\right). $$
(25)
As the algorithm converges, v(n) approaches zero; therefore, higher-order terms in v(n) can be ignored, and the weight error update equation becomes
$$ \boldsymbol{v}\left(n+1\right)\cong \boldsymbol{v}(n)+\mu \left({z}^3(n)-3\ {\displaystyle \sum_{i=0}^{M-1}}{s}_i{z}^2(n){\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{v}(n)\right)\left({\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\right). $$
(26)
Following assumptions A1 and A2, i.e., the noise is independent of the input signal and of the weight error vector, taking the expected value of Eq. (26) results in
$$ \begin{array}{c} E\left[\boldsymbol{v}(n+1)\right]=E\left[\boldsymbol{v}(n)\right]-\mu \left\{3{\sigma_z}^2\sum_{i=0}^{M-1}\sum_{j=0}^{M-1}{s}_i{s}_j\, E\left[\boldsymbol{x}(n-j){\boldsymbol{x}}^T(n-i)\right]\right\}E\left[\boldsymbol{v}(n)\right] \\ = \left[\boldsymbol{I}-3\mu {\sigma_z}^2{\tilde{\boldsymbol{R}}}_{s^2}\right]E\left[\boldsymbol{v}(n)\right]. \end{array} $$
(27)
Since the autocorrelation matrix R i,j  > 0, the range of the step size for the FXLMF algorithm can be shown to be given by
$$ 0<\mu <\frac{2}{3{\sigma_z}^2{\lambda}_{\max}\left({\tilde{\boldsymbol{R}}}_{s^2}\right)}, $$
(28)

where \( {\lambda}_{\max}\left({\tilde{\boldsymbol{R}}}_{s^2}\right) \) represents the maximum eigenvalue of \( {\tilde{\boldsymbol{R}}}_{s^2} \).
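The bound (28) can be evaluated numerically. For unit-variance white input, the (a, b) element of R̃_{s²} equals Σ_j s_j s_{j+|a−b|}, i.e., R̃_{s²} is Toeplitz with first row given by the autocorrelation of s. The secondary path, noise variance, and filter length below are assumptions.

```python
import numpy as np

N = 4
s = np.array([1.0, 0.4, 0.1])              # illustrative secondary path
sigma_z2 = 0.25                            # assumed noise variance

# Autocorrelation of s: r_0, r_1, r_2 (nonnegative lags only)
r = np.correlate(s, s, mode="full")[len(s) - 1:]
col = np.zeros(N)
col[:len(r)] = r
# Toeplitz R_tilde_{s^2} for unit-variance white input
R_tilde = np.array([[col[abs(a - b)] for b in range(N)] for a in range(N)])

lam_max = np.max(np.linalg.eigvalsh(R_tilde))
mu_max = 2.0 / (3.0 * sigma_z2 * lam_max)  # Eq. (28): 0 < mu < mu_max
```

Unlike the LMS-family bounds, this limit shrinks as the noise variance grows, reflecting the cubed-error update of the LMF family.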

2.5 Development of Leaky FXLMF (LFXLMF) algorithm

In this section, the leaky version of the FXLMF algorithm is developed using assumptions A1–A4. Using the block diagram in Fig. 1, the cost function for the LFXLMF algorithm is as follows:
$$ {\boldsymbol{J}}_{\mathrm{LFXLMF}}(n)=E\ \left[{e}^4(n)\right] + \gamma\ {\boldsymbol{w}}^T(n)\boldsymbol{w}(n), $$
(29)
where γ ≥ 0 is the leakage factor. When γ = 0, the cost function reduces to that of the FXLMF algorithm. The error signal e(n) is given by
$$ e(n)=d(n)-{\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}\left(n-i\right)+z(n). $$
(30)
The derivative of the cost function with respect to w(n) will be as follows:
$$ \begin{array}{c}\hfill \frac{\partial {J}_{\mathrm{LFXLMF}}(n)}{\partial \boldsymbol{w}(n)}={\left\Vert \boldsymbol{w}\right\Vert}^3-3\left({{\tilde{\boldsymbol{R}}}_{s^4}}^{-1}{\tilde{\boldsymbol{P}}}_{d,{s}^3}\right){\left\Vert \boldsymbol{w}\right\Vert}^2+3{{\tilde{\boldsymbol{R}}}_{s^4}}^{-1}\left({\tilde{\boldsymbol{P}}}_{d^2,{s}^2}+\frac{\gamma }{2}I\right)\left\Vert \boldsymbol{w}\right\Vert \hfill \\ {}\hfill -{{\tilde{\boldsymbol{R}}}_{s^4}}^{-1}{\tilde{\boldsymbol{P}}}_{d^3,s}\ .\kern0.5em \hfill \end{array} $$
(31)

2.6 Mean behavior of the adaptive weight vector for LFXLMF algorithm

Using the same block diagram used for the FXLMF algorithm in Fig. 1, the weight update equation for the LFXLMF algorithm is given by
$$ \begin{array}{c} \boldsymbol{w}(n+1) = \boldsymbol{w}(n)-\frac{\mu }{4}\,\frac{\partial {J}_{\mathrm{LFXLMF}}(n)}{\partial \boldsymbol{w}(n)} \\ =\left(1-\mu \frac{\gamma }{2}\right)\boldsymbol{w}(n)+\mu\, e^3(n)\, \boldsymbol{x}'(n), \end{array} $$
(32)
where the instantaneous gradient can be approximated as follows:
$$ \frac{\partial {\widehat{J}}_{\mathrm{LFXLMF}}(n)}{\partial \boldsymbol{w}(n)} \approx -4\, e^3(n) \sum_{i=0}^{\widehat{M}-1} \widehat{s}_i\, \boldsymbol{x}^T(n-i) + 2\gamma\, \boldsymbol{w}(n). $$
(33)
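A single LFXLMF step per Eq. (32): the leak scales the weights by (1 − μγ/2) before the filtered-x LMF correction is added, and setting γ = 0 recovers the FXLMF update of Eq. (20). The values below are placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 4
mu, gamma = 1e-3, 0.5                  # illustrative step size and leakage factor
w = rng.normal(size=N)                 # current weights
e = 0.3                                # current error sample (placeholder)
x_filt = rng.normal(size=N)            # filtered-x regressor x'(n)

# Eq. (32): leak first, then the cubed-error filtered-x correction
w_next = (1.0 - mu * gamma / 2.0) * w + mu * e**3 * x_filt
```

The leak continuously bleeds energy out of the weights, which is what bounds the output power and improves robustness at a small cost in steady-state performance.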
Since we do not have exact knowledge of the secondary path, we substitute Eqs. (2)–(5) and (33) into Eq. (32) to obtain the adaptive weight vector update expression as follows:
$$ \begin{array}{c}\hfill \boldsymbol{w}\left(n+1\right)=\left(1-\mu \frac{\gamma }{2}\right)\boldsymbol{w}\ (n)+\mu \left({\displaystyle \sum_{i=0}^{\widehat{M}-1}}{\widehat{s}}_i\ {z}^3(n)x\left(n-i\right)\ \right)\hfill \\ {}\hfill +\mu\ {\displaystyle \sum_{i=0}^{\widehat{M}-1}}{\widehat{s}}_i\ {d}^3(n)\boldsymbol{x}\left(n-i\right)-3\ \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{\widehat{M}-1}}{s}_i{\widehat{s}}_j\ {d}^2(n)\boldsymbol{x}\left(n-j\right){\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}\left(n-i\right)\right)\hfill \\ {}\hfill +3\kern0.5em \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{\widehat{M}-1}}{s}_i{s}_j{\widehat{s}}_k\ d(n)\boldsymbol{x}\left(n-k\right){\boldsymbol{x}}^T\left(n-j\right)\ \boldsymbol{x}\left(n-i\right)\boldsymbol{w}\left(n-j\right)\ {\boldsymbol{w}}^T\left(n-i\right)\right)\hfill \\ {}\hfill -6\kern0.5em \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{\widehat{M}-1}}{s}_i{\widehat{s}}_j\ z(n)d(n)\boldsymbol{x}\left(n-j\right){\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}\left(n-i\right)\right)+3\ \mu \left(\ {\displaystyle \sum_{i=0}^{\widehat{M}-1}}{\widehat{s}}_i\ {d}^2(n)z(n)\boldsymbol{x}\left(n-i\right)\right)\hfill \\ {}\hfill - \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{M-1}}{\displaystyle \sum_{l=0}^{\widehat{M}-1}}{s}_i{s}_j{s}_k{\widehat{s}}_l\ \boldsymbol{x}\left(n-l\right){\boldsymbol{x}}^T\left(n-k\right)\ \boldsymbol{x}\left(n-j\right){\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}\left(n-k\right)\ {\boldsymbol{w}}^T\left(n-j\right)\boldsymbol{w}\left(n-i\right)\right)\hfill \\ {}\hfill +3\kern0.5em \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{\widehat{M}-1}}{s}_i{s}_j{\widehat{s}}_k\ z(n)\boldsymbol{x}\left(n-k\right){\boldsymbol{x}}^T\left(n-j\right)\ 
\boldsymbol{x}\left(n-i\right)\boldsymbol{w}\left(n-j\right)\ {\boldsymbol{w}}^T\left(n-i\right)\right)\hfill \\ {}\hfill -3\ \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{\widehat{M}-1}}{s}_i{\widehat{s}}_j\ {z}^2(n)\boldsymbol{x}\left(n-j\right){\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}\left(n-i\right)\right)+3\ \mu\ \left({\displaystyle \sum_{i=0}^{\widehat{M}-1}}{\widehat{s}}_i\ d(n){z}^2(n)\boldsymbol{x}\left(n-i\right)\right).\hfill \end{array} $$
(34)
Following assumptions A1–A4, the mean weight of the adaptive weight vector of the LFXLMF algorithm is expressed as follows:
$$ \begin{array}{c}\hfill E\left[\boldsymbol{w}\left(n+1\right)\right]=\left(1-\mu \frac{\gamma }{2}\right)E\ \left[\boldsymbol{w}\ (n)\right]\hfill \\ {}\hfill \begin{array}{l} + \mu\ {\displaystyle \sum_{i=0}^{\widehat{M}-1}}{\widehat{s}}_i\ {\boldsymbol{P}}_{d^3,i}-3\ \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{\widehat{M}-1}}{s}_i{\widehat{s}}_j\kern0.5em {\boldsymbol{P}}_{d^2,i,j}E\left[\boldsymbol{w}\ \left(n-i\right)\right]\right)\\ {}+3\ \mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{\widehat{M}-1}}{s}_i{s}_j{\widehat{s}}_k\ {\boldsymbol{P}}_{d,i,j,k}E\left[\boldsymbol{w}\left(n-j\right){\boldsymbol{w}}^T\left(n-i\right)\right]\right)\end{array}\hfill \\ {}\hfill -\mu \left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{\displaystyle \sum_{k=0}^{M-1}}{\displaystyle \sum_{l=0}^{\widehat{M}-1}}{s}_i{s}_j{s}_k{\widehat{s}}_l\ {\boldsymbol{R}}_{\boldsymbol{i},\boldsymbol{j},\boldsymbol{k},\boldsymbol{l}}E\left[\boldsymbol{w}\left(n-k\right){\boldsymbol{w}}^T\left(n-j\right)\boldsymbol{w}\left(n-i\right)\right]\right)\hfill \\ {}\hfill -3\ \mu\ {\sigma_z}^2\left({\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{\widehat{M}-1}}{s}_i{\widehat{s}}_j\ {\boldsymbol{R}}_{\boldsymbol{i},\boldsymbol{j}}E\left[\boldsymbol{w}\ \left(n-i\right)\right]\right)+3\ \mu\ {\sigma_z}^2\left({\displaystyle \sum_{i=0}^{\widehat{M}-1}}{\widehat{s}}_i\ {\boldsymbol{P}}_{d,i}\right).\hfill \end{array} $$
(35)
Under the independence theory, the mean weight of the adaptive weight vector of the LFXLMF algorithm takes the following form:
$$ \begin{array}{c}\hfill E\left[\boldsymbol{w}\left(n+1\right)\right]=\left(1-\mu \frac{\gamma }{2}\right)E\left[\boldsymbol{w}(n)\right] + \mu\ {\widehat{s}}_0\ E\left[{d}^3(n){\boldsymbol{x}}^{\boldsymbol{T}}(n)\right]\hfill \\ {}\hfill -3\ \mu \left({\displaystyle \sum_{i=0}^{\min \left(\widehat{M},M\right)-1}}{s}_i{\widehat{s}}_i\ {\boldsymbol{P}}_{d^2,i,j}E\left[\boldsymbol{w}\ \left(n-i\right)\right]\right)\hfill \\ {}\hfill +3\ \mu \left({\displaystyle \sum_{i=0}^{\min \left(\widehat{M},M\right)-1}}{s}_i{s}_i{\widehat{s}}_i\ {\boldsymbol{P}}_{d,i,j,k}E\left[\boldsymbol{w}\left(n-i\right){\boldsymbol{w}}^T\left(n-i\right)\right]\right)\hfill \\ {}\hfill - \mu \left({\displaystyle \sum_{i=0}^{\min \left(\widehat{M},M\right)-1}}{s}_i{s}_i{s}_i{\widehat{s}}_i{\boldsymbol{R}}_{\boldsymbol{i},\boldsymbol{j},\boldsymbol{k},\boldsymbol{l}}\ E\left[\boldsymbol{w}\left(n-i\right){\boldsymbol{w}}^T\left(n-i\right)\boldsymbol{w}\left(n-i\right)\right]\right)\kern0.5em \hfill \\ {}\hfill -3\ \mu {\sigma_z}^2\left({\displaystyle \sum_{i=0}^{\min \left(\widehat{M},M\right)-1}}{s}_i{\widehat{s}}_i\ {\boldsymbol{R}}_{\boldsymbol{i},\boldsymbol{j}}E\left[\boldsymbol{w}\ \left(n-i\right)\right]\right)+3\ \mu\ {\sigma_z}^2\left({\widehat{s}}_0\ E\left[d(n)\boldsymbol{x}(n)\right]\right).\hfill \end{array} $$
(36)

2.7 Second moment analysis for LFXLMF

The performance analysis for the mean square error, E[e²(n)], of the LFXLMF algorithm is carried out, where the error is updated according to Eq. (7). Therefore, the MSE for the LFXLMF algorithm is obtained as follows:
$$ \begin{array}{c}\hfill {\mathrm{MSE}}_{\mathrm{LFXLMF}}(n)=E\ \left[{e}^2(n)\right]\hfill \\ {}\hfill = E\left[{d}^2(n)\right]-2{\displaystyle \sum_{i=0}^{M-1}}{s}_i\ E\left[d(n)\boldsymbol{x}\left(n-i\right)\right]E\left[\boldsymbol{w}\left(n-i\right)\right]\hfill \\ {}\hfill +{\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{s}_i{s}_j\ E\left[\boldsymbol{x}\left(n-j\right){\boldsymbol{x}}^T\left(n-i\right)\right]E\left[\boldsymbol{w}\ \left(n-i\right){\boldsymbol{w}}^T\ \left(n-i\right)\right]+E\left[{z}^2(n)\right].\ \hfill \\ {}\hfill = {\sigma_d}^2-2{\displaystyle \sum_{i=0}^{M-1}}{s}_i\ {\boldsymbol{P}}_{d,i}\ E\left[\boldsymbol{w}\left(n-i\right)\right]\hfill \\ {}\hfill +{\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{s}_i{s}_j\ {\boldsymbol{R}}_{\boldsymbol{i},\boldsymbol{j}}\ E\left[\boldsymbol{w}\ \left(n-i\right){\boldsymbol{w}}^T\ \left(n-i\right)\right]+{\sigma_z}^2.\hfill \end{array} $$
(37)
Following the steps used in deriving Eq. (19), we reach the same result for the MMSE of the LFXLMF, given by
$$ {\mathrm{MMSE}}_{\mathrm{LFXLMF}} = {\sigma_z}^2. $$
(38)

2.8 LFXLMF algorithm stability

In what follows, the effect of the leakage factor γ on the stability of the LFXLMF algorithm is discussed. As was done in [21], the value of γ is determined by the filter designer using a trial-and-error methodology. For this work, the range of the leakage factor can be found with respect to the step size μ. To do so, we first start with the LFXLMF algorithm weight update:
$$ \begin{array}{c}\hfill \boldsymbol{w}\left(n+1\right) = \left(1-\mu \frac{\gamma }{2}\right)\boldsymbol{w}(n)-\frac{\mu }{4}\kern0.5em \frac{\partial {\boldsymbol{J}}_{\mathrm{LFXLMF}}(n)}{\partial \boldsymbol{w}(n)}\hfill \\ {}\hfill = \left(1-\mu \frac{\gamma }{2}\right)\ \boldsymbol{w}(n)+\mu\ {e}^3(n)\ x\mathit{\hbox{'}}(n).\hfill \end{array} $$
(39)
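To make the recursion concrete, here is a minimal NumPy sketch of the weight update in Eq. (39); the function name and argument names are ours, and `x_filt` stands for the filtered regressor x′(n):

```python
import numpy as np

def lfxlmf_update(w, x_filt, e, mu=0.001, gamma=0.05):
    """One LFXLMF iteration: w(n+1) = (1 - mu*gamma/2) w(n) + mu e^3(n) x'(n).

    w      -- current weight vector (length M)
    x_filt -- filtered regressor x'(n): the input filtered through the
              secondary-path estimate (length M)
    e      -- scalar error e(n)
    """
    return (1.0 - mu * gamma / 2.0) * w + mu * (e ** 3) * x_filt
```

Note that setting γ = 0 recovers the plain FXLMF update, while a nonzero γ shrinks the weights slightly at every iteration even when the error is zero.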
The algorithm converges when E[w(n + 1)] = E[w(n)]. In other words, the weight adjustment term will be zero, that is,
$$ \begin{array}{c}\hfill \mu\ E\left[\ {e}^3(n)\ x\mathit{\hbox{'}}(n)\right]=0\hfill \\ {}\hfill E\left[{\left(d(n)-{\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}+z(n)\right)}^3\left({\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\right)\right]=0.\hfill \end{array} $$
(40)
But, since
$$ \begin{array}{c}\hfill e(n)=d(n)-{\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}+z(n)\hfill \\ {}\hfill ={\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right){\boldsymbol{w}}_{\boldsymbol{o}} - {\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{w}+z(n),\hfill \end{array} $$
(41)
and assuming a fixed w, we can define the weight error vector
$$ \boldsymbol{v}(n)=\boldsymbol{w}(n)-{\boldsymbol{w}}_{\boldsymbol{o}}, $$
(42)
Hence, Eq. (41) becomes:
$$ e(n)=z(n) - {\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{v}(n). $$
(43)
Using Eqs. (42) and (43) in Eq. (39), one obtains
$$ \boldsymbol{v}\left(n+1\right)=\left(1-\mu \frac{\gamma }{2}\right)\boldsymbol{v}(n)+\mu {\left(z(n) - {\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{v}(n)\right)}^3\left({\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\right). $$
(44)
As the algorithm converges, v(n) approaches zero, so we can ignore the higher-order terms in v(n); as a result, the weight error update equation can be written as
$$ \boldsymbol{v}\left(n+1\right)\cong \left(1-\mu \frac{\gamma }{2}\right)\boldsymbol{v}(n)+\mu \left({z}^3(n)-3\ {\displaystyle \sum_{i=0}^{M-1}}{s}_i{z}^2(n){\boldsymbol{x}}^T\left(n-i\right)\boldsymbol{v}(n)\right)\left({\displaystyle \sum_{i=0}^{M-1}}{s}_i{\boldsymbol{x}}^T\left(n-i\right)\right). $$
(45)
To find the mean weight error, we take the expectation of Eq. (45); relying on assumptions A1–A4, the noise is independent of the input signal as well as of the weight error vector. Consequently, the mean of the weight error vector is given by
$$ \begin{array}{l}E\left[\boldsymbol{v}\left(n+1\right)\right]=\left(1-\mu \frac{\gamma }{2}\right)E\left[\boldsymbol{v}(n)\right]\hfill \\ {}-\mu \left\{3{\sigma_z}^2{\displaystyle \sum_{i=0}^{M-1}}{\displaystyle \sum_{j=0}^{M-1}}{s}_iE\left[\boldsymbol{x}\left(n-j\right){\boldsymbol{x}}^T\left(n-i\right)\right]E\left[\boldsymbol{v}(n)\right]\right\}\hfill \\ {}E\left[\boldsymbol{v}\left(n+1\right)\right]=\left(\left(1-\mu \frac{\gamma }{2}\right)I-\mu \left(3{\sigma_z}^2{\tilde{\boldsymbol{R}}}_{s^2}\right)\right)E\left[\boldsymbol{v}(n)\right].\hfill \end{array} $$
(46)
Assuming a positive definite autocorrelation matrix, \( {\tilde{\boldsymbol{R}}}_{s^2}>0 \), the range of the leakage factor γ for the LFXLMF algorithm is given by
$$ \frac{3{\sigma_z}^2{\lambda}_{\max}\left({\tilde{\boldsymbol{R}}}_{s^2}\right)-1}{\frac{3}{2}{\sigma_z}^2{\lambda}_{\max}\left({\tilde{\boldsymbol{R}}}_{s^2}\right)}<\gamma <\frac{2}{\mu } $$
(47)

where \( {\lambda}_{\max}\left({\tilde{\boldsymbol{R}}}_{s^2}\right) \) represents the maximal eigenvalue of \( {\tilde{\boldsymbol{R}}}_{s^2} \). As can be seen from Eq. (47), the admissible range of the leakage factor depends on the step size μ.
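Given estimates of the filtered-input autocorrelation matrix and the noise power, the bound in Eq. (47) can be evaluated numerically; the following NumPy sketch (function and argument names are ours) is one way to do so:

```python
import numpy as np

def leakage_range(R_s2, sigma_z2, mu):
    """Bounds on the leakage factor gamma from Eq. (47), assuming the
    filtered-input autocorrelation matrix R_s2 is positive definite.

    R_s2     -- symmetric autocorrelation matrix (M x M)
    sigma_z2 -- measurement-noise variance sigma_z^2
    mu       -- step size
    """
    lam_max = np.max(np.linalg.eigvalsh(R_s2))  # maximal eigenvalue
    lower = (3.0 * sigma_z2 * lam_max - 1.0) / (1.5 * sigma_z2 * lam_max)
    upper = 2.0 / mu
    return lower, upper
```

A designer would then pick γ inside the returned interval, trading stability (larger γ) against steady-state performance.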

3 Algorithms’ convex combination

In this section, we examine the behavior of our algorithm through the convex combination approach, namely, the convex combination with the FXLMF algorithm. Combining two algorithms is an appealing idea: the outputs of the two filters are mixed so as to highlight the best features of each individual algorithm, which the overall equivalent filter then exploits to improve the performance of the adaptive filter [24–29]. We examine our proposed algorithms together with members of the LMS and LMF families.

Figure 2 shows the proposed block diagram for the convex combination of two filtered input signals, where the output of the overall combined filter is given, as in [25], by the following equation:
Fig. 2

Block diagram of adaptive convex combination for two filtered input signal algorithms

$$ y(n)=\lambda (n)y{\mathit{\hbox{'}}}_1(n)+\left[1-\lambda (n)\right]y{\mathit{\hbox{'}}}_2(n) $$
(48)
where y′1(n) and y′2(n) are the outputs of the two filters and λ(n) is the contribution or mixing parameter, with 0 ≤ λ(n) ≤ 1. This parameter represents the share of each algorithm in the overall filter output. Therefore, the combined filter extracts the best features of each filter w1(n) and w2(n) individually. Assuming both filters w1(n) and w2(n) have the same size M, the weight vector of the overall filter can be given as
$$ \boldsymbol{w}(n)=\lambda (n){\boldsymbol{w}}_1(n)+\left[1-\lambda (n)\right]{\boldsymbol{w}}_2(n). $$
(49)

Each filter is updated individually according to its own error, e1(n) or e2(n), while the overall weight vector is updated according to the total error e(n) = [d(n) − y(n) + z(n)], which adapts the mixing parameter λ(n). Using the gradient descent method, we can minimize either the fourth-order error e⁴(n) or the second-order error e²(n) of the overall filter. Based on that, the convex-combined filter can be used in two scenarios.

In the first scenario, we minimize the quadratic error e²(n), where λ(n) is the sigmoidal function given as
$$ \lambda (n)=\frac{1}{1+{e}^{-a(n)}}, $$
(50)
and instead of writing the update equation with respect to λ(n), we define it with respect to the auxiliary variable a(n) as follows:
$$ \begin{array}{c}\hfill a\left(n+1\right)=a(n)-\frac{\mu_{a^2}}{2}\frac{\partial {e}^2(n)}{\partial a(n)}\hfill \\ {}\hfill = a(n)-\frac{\mu_{a^2}}{2}\frac{\partial {e}^2(n)}{\partial \lambda (n)}\frac{\partial \lambda (n)}{\partial a(n)}\hfill \\ {}\hfill =a(n)+{\mu}_{a^2}e(n)\left[y{\mathit{\hbox{'}}}_1(n)-y{\mathit{\hbox{'}}}_2(n)\right]\lambda (n)\left[1-\lambda (n)\right].\hfill \end{array} $$
(51)
The second scenario is to minimize the fourth-order error of the overall filter; the update equation with respect to a(n) is then the following:
$$ \begin{array}{c}\hfill a\left(n+1\right)=a(n)-\frac{\mu_{a^4}}{4}\frac{\partial {e}^4(n)}{\partial a(n)}\hfill \\ {}\hfill = a(n)-\frac{\mu_{a^4}}{4}\frac{\partial {e}^4(n)}{\partial \lambda (n)}\frac{\partial \lambda (n)}{\partial a(n)}\hfill \\ {}\hfill =a(n)+{\mu}_{a^4}{e}^3(n)\left[y{\mathit{\hbox{'}}}_1(n)-y{\mathit{\hbox{'}}}_2(n)\right]\lambda (n)\left[1-\lambda (n)\right],\hfill \end{array} $$
(52)
where \( {\mu}_{a^2} \) and \( {\mu}_{a^4} \) are the step sizes of the overall filter for the quadratic and fourth-order errors, respectively. In this work, we study the mean square performance of the convex-combined filter using the filtered input signal. Since λ(n) ranges between zero and one, we need to ensure that the combined filter keeps adapting and does not stick with only one algorithm all the time. For this purpose, we reduce the interval of the mixing parameter by limiting the value of a(n) to [−a⁺, a⁺]; the range of the mixing parameter is then 1 − λ⁺ ≤ λ(n) ≤ λ⁺, as follows:
$$ \lambda (n)=\left\{\begin{array}{ll}0.998,\hfill & a(n) > {a}^{+}\hfill \\ {}\lambda (n),\hfill & {a}^{+}\ge a(n)\ge -{a}^{+}\hfill \\ {}0.002,\hfill & a(n)<-{a}^{+}\hfill \end{array}\right. $$
(53)
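The mixing-parameter adaptation of Eqs. (50), (51), and (53) can be sketched as follows in NumPy; the function name, the default value of a⁺, and the limits 0.998/0.002 follow the text, while everything else is an illustrative choice of ours:

```python
import numpy as np

def sigmoid(a):
    """Sigmoidal mapping of Eq. (50): lambda(n) = 1 / (1 + exp(-a(n)))."""
    return 1.0 / (1.0 + np.exp(-a))

def update_mixing(a, e, y1, y2, mu_a=0.01, a_plus=4.0):
    """Update a(n) for the quadratic-error case, Eq. (51), then clip the
    mixing parameter as in Eq. (53).

    e      -- overall error e(n) of the combined filter
    y1, y2 -- outputs y'_1(n), y'_2(n) of the two component filters
    """
    lam = sigmoid(a)
    a = a + mu_a * e * (y1 - y2) * lam * (1.0 - lam)
    a = np.clip(a, -a_plus, a_plus)        # keep a(n) inside [-a+, a+]
    lam = sigmoid(a)
    lam = min(max(lam, 0.002), 0.998)      # Eq. (53): avoid the hard limits 0 and 1
    return a, lam
```

When the error correlates positively with y′1(n) − y′2(n), a(n) grows and the combination leans toward the first filter; the clipping keeps both filters adapting.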

Simulations in Section 4 investigate four cases, where the comparison uses the FXLMF and FXLMS algorithms as the two transversal filters in the convex combination, according to the second-order error minimization.

4 Simulation results

Simulations in this section are divided into two parts. The first part examines the proposed algorithms in terms of mean square error and mean weight behavior; the simulations cover the FXLMF and LFXLMF algorithms under various conditions and environments. The second part tests the concept of convex combination on the FXLMF and FXLMS algorithms. Furthermore, comparisons with other algorithms are carried out to show under which circumstances the newly proposed algorithms outperform algorithms from the LMS family in convergence. The plant vector used to filter the input signal is wp with nine taps, where
$$ \boldsymbol{w}p=\left[\begin{array}{ccc}\hfill \begin{array}{ccc}\hfill 0.0179\hfill & \hfill 0.1005\hfill & \hfill 0.2795\hfill \end{array}\hfill & \hfill \begin{array}{ccc}\hfill 0.4896\hfill & \hfill 0.5860\hfill & \hfill 0.4896\hfill \end{array}\hfill & \hfill \begin{array}{ccc}\hfill 0.2795\hfill & \hfill 0.1005\hfill & \hfill 0.0179\hfill \end{array}\hfill \end{array}\right]. $$

In addition, for simplicity, we assume that the secondary path and its estimate are equal, \( \boldsymbol{S} = \widehat{\boldsymbol{S}}=\left[\begin{array}{ccc}\hfill 0.7756\hfill & \hfill 0.5171\hfill & \hfill -0.3620\hfill \end{array}\right] \).
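This simulation setup can be reproduced with a short NumPy sketch: the plant and secondary-path taps are taken from the text, while the signal length and random seed are illustrative choices of ours.

```python
import numpy as np

# Plant and secondary-path impulse responses from the simulation setup
w_p = np.array([0.0179, 0.1005, 0.2795, 0.4896, 0.5860,
                0.4896, 0.2795, 0.1005, 0.0179])
s_hat = np.array([0.7756, 0.5171, -0.3620])   # assumed S = S_hat

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)                 # reference (input) signal

d = np.convolve(x, w_p)[:len(x)]              # desired signal through the plant
x_filt = np.convolve(x, s_hat)[:len(x)]       # filtered-x signal x'(n)
```

The vector `x_filt` is what the FXLMF/LFXLMF updates use in place of the raw regressor, which is the defining feature of the filtered-x structure.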

The results of all simulations are averaged over 500 Monte Carlo runs; the noise is white Gaussian for Figs. 3, 4, 5, 6, and 7 and uniform for Figs. 8, 9, 10, 11, and 12.
Fig. 3

Comparison over MSE for FXLMF and LFXLMF with other algorithms using fixed step size μ = 0.001 and high SNR = 40 dB and Gaussian noise, leakage factor γ = 0.05

Fig. 4

Comparison over MSE for FXLMF and LFXLMF algorithms with other algorithms using fixed step size μ = 0.001 and low SNR = 5 dB and Gaussian noise, leakage factor γ = 0.05

Fig. 5

Comparison over mean weight vector for FXLMF algorithms using different step sizes μ = [red = 0.001, green = 0.0005, blue = 0.0001] using Gaussian noise at low SNR = 5 dB. Solid line: proposed models (a), (b), and (c). Dashed line: IT model

Fig. 6

Comparison over mean weight vector for LFXLMF algorithms using different leakage factors γ = [0.1, 0.250, 0.50, 1] and fixed step size μ = 0.001 using Gaussian noise at low SNR = 5 dB

Fig. 7

MSE for the FXLMF and LFXLMF algorithm robustness using Gaussian noise at low SNR = 5 dB, fixed step size μ = 0.00125, and leakage factor γ = 0.50

Fig. 8

Comparison over MSE for FXLMF and LFXLMF with other algorithms using a fixed step size μ = 0.001 using uniform noise at high SNR = 40 dB and leakage factor γ = 0.05

Fig. 9

Comparison over MSE for the FXLMF and LFXLMF algorithms with other algorithms using a fixed step size μ = 0.001 and uniform noise at low SNR = 5 dB and leakage factor γ = 0.05

Fig. 10

Comparison over mean weight vector for FXLMF algorithms using different step sizes μ = [red = 0.001, green = 0.0005, blue = 0.0001] using uniform noise at low SNR = 5 dB. Solid line: proposed models (a), (b), and (c). Dashed line: IT model

Fig. 11

Comparison over mean weight vector for LFXLMF algorithms using different leakage factors γ = [0.1, 0.250, 0.50, 1] and fixed step size μ = 0.001 using uniform noise at low SNR = 5 dB

Fig. 12

MSE for the FXLMF and LFXLMF algorithm robustness using uniform noise at low SNR = 5 dB, fixed step size μ = 0.00125, and leakage factor γ = 0.50

Next, Figs. 13, 14, 15, 16, 17, 18, and 19 concern the convex combination. In one set of experiments, both transversal filters run the same adaptive algorithm but with different step sizes; in the other, we compare the FXLMF and FXLMS algorithms at low and high SNR under white Gaussian noise. All of these simulations use the quadratic-error minimization.
Fig. 13

MSE for combined FXLMF and FXLMS using Gaussian noise at high SNR = 40 dB and fixed step size μ = 0.00125

Fig. 14

Values of the mixing parameter λ(n) for Fig. 13

Fig. 15

MSE for combined FXLMF and FXLMS using Gaussian noise at low SNR = 5 dB and fixed step size μ = 0.00125

Fig. 16

Values of the mixing parameter for Fig. 15

Fig. 17

MSE for combined FXLMF algorithm robustness using Gaussian noise at low SNR = 5 dB and fixed step sizes μ = 0.00125 (green) and 0.000625 (blue)

Fig. 18

MSE for the combined FXLMF and FXLMS algorithm robustness test using Gaussian noise at high SNR = 40 dB and fixed step size μ = 0.00125

Fig. 19

MSE for combined FXLMF and FXLMS algorithm robustness test using Gaussian noise at low SNR = 5 dB and fixed step size μ = 0.00125

Figure 3 compares the mean square error (MSE) behavior of different algorithms from the LMS family (i.e., NLMS, FXLMS, FeLMS, MFXLMS), the NLMF, and our proposed ones. The FXLMF algorithm converges and reaches the white noise level after a large number of iterations. The LFXLMF algorithm reaches the steady-state level faster than the others, after almost 5000 iterations, but it converges to a higher noise level of almost 12 dB. Using a larger step size μ may lead the algorithm to diverge.

Figure 4 compares the MSE behavior of the different algorithms with a fixed step size, this time at a low SNR of 5 dB. The FXLMF and LFXLMF algorithms clearly outperform the other LMS family algorithms in speed of convergence, converging in almost 500 iterations. The FXLMF and LFXLMF curves are almost identical because a small leakage factor γ is used.

Figure 5 shows the effect of changing the step size on the mean weight vector of the FXLMF algorithm: increasing the step size makes the algorithm converge faster to a larger mean weight. Moreover, using assumption A4 makes the algorithm converge to a higher mean weight level.

Figure 6 shows the effect of changing the leakage factor on the mean weight of the LFXLMF algorithm. Increasing the leakage factor increases the mean weight of the LFXLMF algorithm while leaving the speed of convergence unaffected.

Figure 7 shows the robustness of the proposed FXLMF and LFXLMF algorithms at low SNR with Gaussian noise, when a sudden change occurs in the weight vector.

Figure 8 reports the performance of the algorithms when uniform noise is used instead of Gaussian noise, under the same conditions as in Fig. 3. The result is almost the same: both the FXLMF and LFXLMF algorithms converge, with the former continuing to converge while the latter reaches the steady state faster.

Figure 9 repeats the experiment of Fig. 4 with a fixed step size and uniform noise. Again, the FXLMF and LFXLMF algorithms outperform the LMS family in convergence.

Figure 10 shows the effect of changing the step size on the mean weight vector of the FXLMF algorithm under uniform noise; as in Fig. 5, the algorithm converges faster as the step size increases.

Figure 11 shows the effect of changing the leakage factor on the mean weight of the LFXLMF algorithm under uniform noise; as in Fig. 6, increasing the leakage factor increases the mean weight.

Figure 12 shows the robustness of the proposed FXLMF and LFXLMF algorithms at low SNR with uniform noise.

Figure 13 illustrates the behavior of the convex-combined filter of the FXLMS and FXLMF algorithms. At the beginning, the combined filter follows the FXLMF algorithm, which has a faster speed of convergence; afterwards, it moves to the FXLMS algorithm, which shows better convergence at high SNR. Figure 14 shows the corresponding behavior of the mixing parameter λ(n). Starting from an initial mixing percentage of 50 %, λ(n) favors the FXLMF algorithm at the beginning, where it converges faster, and then switches to the FXLMS algorithm, which converges better thereafter. In Fig. 15, with the same environment as in Fig. 13 but with low SNR, the FXLMF algorithm outperforms the FXLMS algorithm, and the combined filter follows the FXLMF algorithm at the beginning; once both algorithms reach the same convergence, the mixing parameter settles at λ(n) = 50 %, as shown in Fig. 16.

Figure 17 shows the robustness of the convex-combined filter of FXLMF for two different step sizes at low SNR and using Gaussian noise. We can clearly see that the combined filter followed the one with a larger step size, which already shows better performance. Similarly, Fig. 18 shows the robustness of the convex-combined filter of the FXLMF and FXLMS algorithms at high SNR and using Gaussian noise. We can clearly see that the combined filter followed the FXLMF algorithm at the beginning and then switched to the FXLMS algorithm, which shows better performance at high SNR. Finally, Fig. 19 shows the robustness of the convex-combined filter of the FXLMF and FXLMS algorithms at low SNR and using Gaussian noise. We can clearly see that the combined filter followed the FXLMF algorithm all the time since it shows better performance than the FXLMS algorithm at low SNR.

Finally, the performance of the proposed algorithms is tested using the echo return loss enhancement (ERLE) metric. As depicted in Fig. 20, our proposed algorithms outperform the rest of the algorithms.
Fig. 20

ERLE performance of the FXLMF and LFXLMF algorithms
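ERLE is commonly computed as 10 log₁₀(E[d²(n)]/E[e²(n)]); the following minimal NumPy sketch reflects that definition (the function name and the small regularizer `eps` are ours):

```python
import numpy as np

def erle_db(d, e, eps=1e-12):
    """Echo return loss enhancement in dB: 10 log10(E[d^2] / E[e^2]).

    d -- desired (echo) signal samples
    e -- residual error samples after cancelation
    eps guards against division by zero for silent segments.
    """
    return 10.0 * np.log10((np.mean(d ** 2) + eps) / (np.mean(e ** 2) + eps))
```

Higher ERLE values indicate stronger attenuation of the undesired signal, so a faster-rising ERLE curve corresponds to faster convergence.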

5 Conclusions

Two algorithms, FXLMF and LFXLMF, were proposed in this work. An analytical study with mathematical derivations of the mean adaptive weight vector and the mean square error was carried out for both algorithms. Moreover, the bounds on the step size and the leakage factor were investigated.

The literature suggested extending the LMF family with new algorithms, mirroring what had been done earlier for the LMS family. The FXLMF and LFXLMF algorithms converge successfully over a large range of SNR, and both algorithms converge under different noise environments, Gaussian and uniform. Although the LMF family requires more computational complexity, our proposed algorithms converged faster than members of the LMS family under some circumstances.

From the simulations, we saw that both algorithms converge well under relatively high SNR and converge even faster under low SNR. In addition, choosing a step size near the upper bound given in Eq. (28) yields faster convergence, but at the risk of divergence. We also saw that a larger step size increases the mean of the weight vector.

The leakage factor in the LFXLMF algorithm adds more stability to the algorithm, at the expense of reduced performance as was expected from the literature. The leakage factor boundaries were derived in Eq. (47).

The convex combination is an attractive way to obtain the best features of two or more adaptive algorithms. In one scenario, we successfully applied it to the FXLMF algorithm with two different step sizes. In the other scenario, we applied the combination to the FXLMS and FXLMF algorithms and observed that the convex-combined filter followed, at every iteration, the better-performing algorithm.

A robustness test was carried out for all the scenarios considered, to verify that the proposed algorithms can adapt to a sudden change in the filter tap weights, whether in the transient or the steady-state stage.

Declarations

Acknowledgements

The authors would like to thank the anonymous reviewers for their feedback that had improved the quality of the paper. The authors acknowledge the support provided by the Deanship of Scientific Research at KFUPM.

Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Electrical Engineering Department, KFUPM, Dhahran, Saudi Arabia

References

  1. S Haykin, Adaptive filter theory, 4th edn. (Prentice-Hall, Englewood Cliffs, NJ, 2002)
  2. AH Sayed, Adaptive filters (Wiley, NJ, USA, 2008)
  3. E Hänsler, G Schmidt, Acoustic echo and noise control: a practical approach (Wiley & Sons, New Jersey, 2004)
  4. SM Kuo, DR Morgan, Active noise control systems, algorithms and DSP implementation functions (Wiley, New York, 1996)
  5. B Widrow, D Shur, S Shaffer, On adaptive inverse control, in Proc. 15th Asilomar Conf., 1981, pp. 185–189
  6. B Widrow, SD Stearns, Adaptive signal processing (Prentice-Hall, Upper Saddle River, NJ, 1985)
  7. WA Gardner, Learning characteristics of stochastic-gradient-descent algorithms: a general study, analysis and critique. Signal Processing 6, 113–133 (1984)
  8. B Widrow, ME Hoff, Adaptive switching circuits, in Proc. of WESCON Conv. Rec., part 4, 1960, pp. 96–140
  9. P Dreiseitel, E Hänsler, H Puder, Acoustic echo and noise control—a long lasting challenge, in Proc. EUSIPCO, 1998, pp. 945–952
  10. E Hänsler, GU Schmidt, Hands-free telephones—joint control of echo cancellation and postfiltering. Signal Processing 80, 2295–2305 (2000)
  11. C Breining, P Dreiscitel, E Hänsler, A Mader, B Nitsch, H Puder, T Schertler, G Schmidt, J Tilp, Acoustic echo control. An application of very-high-order adaptive filters. IEEE Signal Proc. Mag. 16(4), 42–69 (1999)
  12. E Bjarnason, Analysis of the Filtered-X LMS algorithm. IEEE Trans. Speech Audio Process. 3, 504–514 (1995)
  13. OJ Tobias, JCM Bermudez, NJ Bershad, R Seara, Mean weight behavior of the Filtered-X LMS algorithm, in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 1998, pp. 3545–3548
  14. OJ Tobias, Stochastic analysis of the Filtered-X LMS algorithm, Ph.D. dissertation (Federal Univ. Santa Catarina, Brazil, 1999)
  15. OJ Tobias, JCM Bermudez, NJ Bershad, Mean weight behavior of the Filtered-X LMS algorithm. IEEE Trans. Signal Process. 48(4), 1061–1075 (2000)
  16. JE Mazo, On the independence theory of equalizer convergence. Bell Syst. Tech. J. 58, 963–993 (1979)
  17. GJ Rey, RR Bitmead, CR Johnson, The dynamics of bursting in simple adaptive feedback systems with leakage. IEEE Trans. Circuits Syst. 38, 475–488 (1991)
  18. L Vicente, E Masgrau, Novel FxLMS convergence condition with deterministic reference. IEEE Trans. Signal Process. 54, 3768–3774 (2006)
  19. K Mayyas, T Aboulnasr, Leaky LMS algorithm: MSE analysis for Gaussian data. IEEE Trans. Signal Process. 45(4), 927–934 (1997)
  20. E Walach, B Widrow, The Least Mean Fourth (LMF) adaptive algorithm and its family. IEEE Trans. Inf. Theory IT-30(2), 275–283 (1984)
  21. OJ Tobias, R Seara, Leaky-FXLMS algorithm: stochastic analysis for Gaussian data and secondary path modeling error. IEEE Trans. Speech Audio Process. 13(6), 1217–1230 (2005)
  22. RD Gitlin, HC Meadors Jr, SB Weinstein, The tap-leakage algorithm: an algorithm for the stable operation of a digitally implemented fractionally spaced equalizer. Bell Syst. Tech. J. 61(8), 1817–1839 (1982)
  23. O Khattak, A Zerguine, Leaky Least Mean Fourth adaptive algorithm. IET Signal Process. 7(2), 134–145 (2013)
  24. LA Azpicueta-Ruiz, M Zeller, AR Figueiras-Vidal, J Arenas-García, Least squares adaptation of affine combinations of multiple adaptive filters, in Proc. of IEEE Intl. Symp. on Circuits and Systems, Paris, France, 2010, pp. 2976–2979
  25. JC Burgess, Active adaptive sound control in a duct: a computer simulation. J. Acoust. Soc. Am. 70, 715–726 (1981)
  26. SS Kozat, AC Singer, Multi-stage adaptive signal processing algorithms, in Proc. 2000 IEEE Sensor Array Multichannel Signal Workshop, Cambridge, MA, 2000, pp. 380–384
  27. AC Singer, M Feder, Universal linear prediction by model order weighting. IEEE Trans. Signal Process. 47, 2685–2700 (1999)
  28. M Niedźwiecki, Multiple-model approach to finite memory adaptive filtering. IEEE Trans. Signal Process. 40, 470–473 (1992)
  29. J Arenas-García, LA Azpicueta-Ruiz, MTM Silva, VH Nascimento, AH Sayed, Combinations of adaptive filters: performance and convergence properties. IEEE Signal Proc. Mag. 33, 120–140 (2016)
