
Two variants of the IIR spline adaptive filter for combating impulsive noise

Abstract

It has been pointed out that the nonlinear spline adaptive filter (SAF) is appealing for modeling nonlinear systems, offering good performance at a low computational burden. This paper proposes a normalized least M-estimate adaptive filtering algorithm based on the infinite impulse response (IIR) spline adaptive filter (IIR-SAF-NLMM). By using a robust M-estimator as the cost function, the IIR-SAF-NLMM algorithm obtains robustness against non-Gaussian impulsive noise. To further improve the convergence rate, the set-membership framework is incorporated into the IIR-SAF-NLMM, leading to a new set-membership IIR-SAF-NLMM algorithm (IIR-SAF-SMNLMM). The proposed IIR-SAF-SMNLMM inherits the benefits of the set-membership framework and the least M-estimate scheme, achieving a faster convergence rate and effective suppression of impulsive noise in the filter-weight and control-point adaptation. In addition, the computational burdens and convergence properties of the proposed algorithms are analyzed. Simulation results on the identification of the IIR-SAF nonlinear model show that the proposed algorithms are effective in the absence of non-Gaussian impulsive noise and robust in non-Gaussian impulsive noise environments.

1 Introduction

Due to their concise design and low complexity, adaptive linear filters have gained wide attention in system modeling and identification [1, 2]. The adaptive linear filter is conventionally modeled as a finite impulse response (FIR) filter or an infinite impulse response (IIR) filter. Its tap weights are updated iteratively by adaptive algorithms such as the least mean square (LMS) algorithm, the normalized least mean square (NLMS) algorithm, and the affine projection algorithm (APA). However, in the case of nonlinear systems, linear models are inadequate and suffer performance losses due to their failure to model the nonlinearity. Hence, in order to model the nonlinearity, several adaptive nonlinear structures have been presented, such as truncated Volterra adaptive filters (VAF) [3], neural networks (NNs) [4], block-oriented architectures [5], and spline adaptive filters (SAF) [6–9]. The truncated VAF, which originates from the Taylor series expansion, is one of the most widely used models for nonlinearity. However, its implementation is limited by a huge complexity requirement, in particular for high-order Volterra models. To overcome the drawbacks of the truncated VAF, one of the most well-known structures is the block-oriented nonlinear architecture, which can be represented by connections of linear time-invariant (LTI) models and memoryless nonlinear functions. There are several basic classes of block-oriented nonlinear structure, including the Wiener model [10], the Hammerstein model [11], and variants derived from these two classes with different topologies (i.e., parallel, feedback, and cascade). Specifically, the Wiener model consists of a cascade of an LTI filter followed by a static nonlinear function, sometimes deemed a linear-nonlinear (LN) model, while the Hammerstein model comprises a static nonlinear function followed by an LTI filter, usually considered a nonlinear-linear (NL) model.
Cascade models, such as the linear-nonlinear-linear (LNL) model and the nonlinear-linear-nonlinear (NLN) model, have been proved more suitable for the generality of the model to be identified [12]. NNs are flexible for modeling nonlinearity, but they suffer from a large computational cost and difficulties in online adaptation.

Recently, by combining the block-oriented architecture with spline functions, several novel nonlinear spline adaptive filters (SAFs) have been introduced, such as the Wiener spline filter, the Hammerstein spline filter, the cascade spline filter, and the IIR spline adaptive filter (IIR-SAF). These spline adaptive models can be implemented by different connections of the spline function and a linear time-invariant (LTI) model. The nonlinearity in this kind of structure is modeled by the spline function, which can be represented by an adaptive look-up table (LUT) interpolated by a local low-order spline curve. The SAFs achieve improved performance in modeling nonlinearity. Furthermore, in each iteration, only a portion of the control points is tuned, depending on the order of the spline function, and the nonlinear shape changes only slightly. Consequently, this local behavior of the spline function results in considerable savings in computational complexity.

Note that in all the spline filters mentioned above, the update rules are based on the mean square error (MSE) criterion in an additive white Gaussian noise (AWGN) environment, i.e., the cost function J = E[e2(n)], where E[·] denotes the mathematical expectation and e(n) is the output error. However, in some cases of non-Gaussian noise, such as underwater acoustic signal processing [13], radar signal processing [14], and communication systems [15], the SAFs may suffer from performance deterioration or fail to be robust against non-Gaussian noise. To address this problem, a least M-estimate scheme [16, 17] has been proposed that uses a least M-estimator as the cost function and achieves satisfactory performance when the input and desired signals are corrupted by non-Gaussian impulsive noise.

In this paper, extending the least M-estimate idea to the IIR-SAF, a normalized least M-estimate adaptive filtering algorithm based on the IIR spline adaptive filter (IIR-SAF-NLMM) is proposed for nonlinear system identification. The update rule is based on the modified Huber M-estimate function, thus yielding good effectiveness in suppressing non-Gaussian impulsive noise. To further improve the convergence performance of the IIR-SAF-NLMM, we incorporate the set-membership framework into it and propose a set-membership IIR-SAF-NLMM (IIR-SAF-SMNLMM) algorithm, derived by minimizing a new M-estimate-based cost function associated with a robust set-membership error bound. Due to the combination of the robust set-membership error bound and the threshold parameter used to reject outliers, the IIR-SAF-SMNLMM provides a faster convergence rate and robustness against non-Gaussian impulsive noise compared with conventional SAF algorithms.

The paper is organized as follows. The IIR-SAF structure is reviewed in Section 2. In Section 3, we derive the IIR-SAF-NLMM and IIR-SAF-SMNLMM algorithms. The computational complexity is given in Section 4, and convergence properties of the IIR-SAF-SMNLMM are analyzed in Section 5. Some simulation results are demonstrated in Section 6. Finally, Section 7 concludes the paper.

2 IIR-SAF structure

The structure of the IIR-SAF is shown in Fig. 1; it consists of an adaptive infinite impulse response (IIR) filter followed by a nonlinear network. In the nonlinear network, a spline interpolator is connected behind the adaptive LUT, which contains a given number of uniformly spaced points (knots). The output of the adaptive IIR filter is given by:

$$\begin{array}{@{}rcl@{}} s(n) = \mathbf{w}^{T} (n)\bar{\mathbf{x}}(n), \end{array} $$
(1)
Fig. 1 Structure of the IIR-SAF

where w(n) is the weight vector of the IIR filter, defined as w(n) = [b0(n), b1(n), ⋯, bM−1(n), a1(n), ⋯, aN(n)]T; bl(n) (l = 0, 1, ⋯, M−1) and ak(n) (k = 1, 2, ⋯, N) denote the lth coefficient of the MA part and the kth coefficient of the AR part of the IIR adaptive filter, respectively. \(\bar{\mathbf{x}}(n) = \left[x(n), x(n - 1), \cdots, x(n - M + 1), s(n - 1), \cdots, s(n - N)\right]^{T}\) is the input vector of the IIR filter.

The local parameter un and span index i can be computed as:

$$\begin{array}{@{}rcl@{}} u_{n} = s(n)/\Delta x - \left\lfloor {s(n)/\Delta x} \right\rfloor, \end{array} $$
(2)
$$\begin{array}{@{}rcl@{}} i = \left\lfloor {s(n)/\Delta x} \right\rfloor + (Q - 1)/2, \end{array} $$
(3)

where Q is the number of control points, Δx is the uniform spacing between two adjacent control points, and ⌊·⌋ denotes the floor operator.
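As a minimal sketch of Eqs. (2)–(3) (function and variable names are illustrative, not from the paper), the local parameter and span index can be computed as:

```python
import math

def spline_params(s_n, delta_x, Q):
    """Local parameter u_n and span index i of Eqs. (2)-(3).

    s_n is the linear-filter output, delta_x the knot spacing, and Q the
    number of control points (assumed odd so that (Q - 1)/2 is an integer).
    """
    ratio = s_n / delta_x
    u_n = ratio - math.floor(ratio)            # Eq. (2): fractional part
    i = int(math.floor(ratio) + (Q - 1) // 2)  # Eq. (3): span index
    return u_n, i
```

Only the four control points starting at index i enter the interpolation, which is the source of the local update behavior noted in Section 1.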

The output of the whole system is given as:

$$\begin{array}{@{}rcl@{}} y(n) = \varphi_{i} (u_{n})=\mathbf{u}_{n}^{T} {\mathbf{Cq}}_{i,n}, \end{array} $$
(4)

where, since this paper considers the cubic spline interpolation scheme, the control point vector qi,n is defined as qi,n = [qi,n, qi+1,n, qi+2,n, qi+3,n]T with length 4, and the vector un is defined as \(\mathbf {u}_{n} = \left [u_{n}^{3},u_{n}^{2},u_{n},1\right ]^{T}\). The superscript T denotes transposition. C is the spline basis matrix, whose dimension is accordingly 4×4. Two suitable types of spline basis matrix are the Catmull-Rom (CR) spline and B-spline matrices, given by:

$$\begin{array}{@{}rcl@{}} C_{\text{CR}} = \frac{1}{2} \left[ {\begin{array}{cccc} { - 1} & 3 & { - 3} & 1 \\ 2 & { - 5} & 4 & - 1 \\ { - 1} & 0 & 1 & 0 \\ 0 & 2 & 0 & 0 \\ \end{array}} \right], \end{array} $$
(5)
$$\begin{array}{@{}rcl@{}} C_{B} = \frac{1}{6}\left[ {\begin{array}{cccc} { - 1} & 3 & { - 3} & 1 \\ 3 & { - 6} & 3 & 0 \\ { - 3} & 0 & 3 & 0 \\ 1 & 4 & 1 & 0 \\ \end{array}} \right]. \end{array} $$
(6)
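For instance, the spline output of Eq. (4) with the CR basis of Eq. (5) can be sketched as follows (illustrative names, not the authors' code):

```python
import numpy as np

# Catmull-Rom spline basis matrix of Eq. (5)
C_CR = 0.5 * np.array([[-1.0,  3.0, -3.0,  1.0],
                       [ 2.0, -5.0,  4.0, -1.0],
                       [-1.0,  0.0,  1.0,  0.0],
                       [ 0.0,  2.0,  0.0,  0.0]])

def spline_output(u_n, q_i):
    """y(n) = u_n^T C q_{i,n}, Eq. (4); q_i holds the 4 active control points."""
    u_vec = np.array([u_n**3, u_n**2, u_n, 1.0])
    return u_vec @ C_CR @ q_i
```

Since the CR spline reproduces straight lines, control points lying on a line of unit slope give y(n) = u_n exactly, which is consistent with the straight-line initialization used in Section 6.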

According to the Lagrange multiplier method presented in [18], the two recursive equations for the tap weights and control points of the normalized least mean square algorithm based on the IIR-SAF (IIR-SAF-NLMS) can be formulated as:

$$\begin{array}{@{}rcl@{}} \mathbf{w}_{n + 1} = \mathbf{w}_{n} + \mu_{\mathrm{w}} \frac{{e (n)}}{{\mathbf{u}_{n}^{T} \mathbf{u}_{n}^{} + \varepsilon }}\frac{1}{{\Delta x}}\dot{\mathbf{u}}_{n}^{T} {\mathbf{Cq}}_{i,n} \mathbf{g}_{n}, \end{array} $$
(7)
$$\begin{array}{@{}rcl@{}} \mathbf{q}_{i,n+1} = \mathbf{q}_{i,n} + \mu_{\mathrm{q}} \frac{{e (n)}}{{\mathbf{u}_{n}^{T} \mathbf{u}_{n}^{} + \varepsilon }}\mathbf{C}^{T} \mathbf{u}_{n}, \end{array} $$
(8)

where μw and μq are the step sizes of the linear and nonlinear networks, respectively; the small positive constant ε avoids division by zero. The vector gn is defined as gn = [∂s(n)/∂b0(n), ⋯, ∂s(n)/∂bM−1(n), ∂s(n)/∂a1(n), ⋯, ∂s(n)/∂aN(n)]T, and \(\dot {\mathbf {u}}_{n} = \left [3u_{n}^{2},2u_{n},1,0\right ]^{T}\). e(n) is the output error, expressed as \(e (n) = d(n) - y(n) = d(n) - \mathbf {u}_{n}^{T} {\mathbf {Cq}}_{i,n}\), where d(n) is the desired signal, which contains non-Gaussian impulsive noise.
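A compact sketch of one IIR-SAF-NLMS iteration, Eqs. (7)–(8), assuming the quantities un, u̇n, C, qi,n, and gn have already been formed (names are illustrative):

```python
import numpy as np

def nlms_step(w, q_i, e, u, u_dot, C, g, mu_w, mu_q, delta_x, eps=1e-8):
    """One update of the tap weights (Eq. (7)) and control points (Eq. (8))."""
    norm = u @ u + eps                 # u_n^T u_n + epsilon
    phi_prime = u_dot @ C @ q_i        # spline slope  u_dot^T C q_{i,n}
    w_new = w + mu_w * (e / norm) * (phi_prime / delta_x) * g
    q_new = q_i + mu_q * (e / norm) * (C.T @ u)
    return w_new, q_new
```

Both corrections scale linearly with e(n), which is why a single large impulsive sample can perturb the whole adaptation; this is the weakness the M-estimate scheme of Section 3 addresses.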

3 Proposed IIR-SAF-NLMM and IIR-SAF-SMNLMM algorithms

3.1 IIR-SAF-NLMM algorithm

In the non-Gaussian impulsive noise environment, the desired signal d(n) is commonly contaminated by impulsive noises. Then, the performances of the SAF-LMS [6] and SAF-NLMS [18] algorithms based on the MSE criterion can be affected severely by large elements in the output error. Instead of the MSE cost function, the least M-estimate scheme makes use of a robust M-estimate cost function to suppress the adverse effect caused by the outliers in output errors. In this paper, the cost function can be expressed as:

$$\begin{array}{@{}rcl@{}} J(n){\mathrm{= }}\left(\mathbf{u}_{n}^{T} \mathbf{u}_{n} \right)^{- 1} \rho[\! e(n)], \end{array} $$
(9)

where ρ[e(n)] is the modified Huber M-estimate function which gives:

$$\begin{array}{@{}rcl@{}} \rho [\! e(n)]{{= }}\left\{ \begin{array}{l} e^{2} /2, 0 \le \left| e \right| < \xi \\ \xi^{2} /2, {\text{ otherwise}}, \\ \end{array} \right. \end{array} $$
(10)

where ξ is a threshold parameter for rejecting the outliers which is computed as \(\xi = 2.576 \hat \sigma _{e} (n)\), and \( \hat \sigma _{e}^{2} (n)\) is the variance estimate of the impulsive-free error [17], which is given by:

$$\begin{array}{@{}rcl@{}} \hat \sigma_{e}^{2} (n){{= }}\lambda_{0} \hat \sigma_{e}^{2} (n - 1) + a_{1} (1 - \lambda_{0}){\text{med}}[\! C_{e} (n)], \end{array} $$
(11)

where λ0 is a forgetting factor close to but smaller than 1, a1 = 1.483(1 + 5/(Nw − 1)) is a finite correction factor, and Nw is the length of the data window. med[·] denotes the median operator, and Ce(n) = [e2(n), e2(n−1), ⋯, e2(n−Nw+1)].

Note that in [17], the threshold parameter ξ is evaluated under the assumption that the output error is Gaussian distributed. However, even when e(n) follows another distribution, a threshold value can still be computed and used to reject impulses in the output error.
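A sketch of the recursive variance estimate of Eq. (11), with illustrative names:

```python
import numpy as np

def robust_error_variance(sigma2_prev, err_window, lam0=0.99):
    """Impulse-free error-variance estimate of Eq. (11).

    err_window holds the last N_w output errors e(n), ..., e(n - N_w + 1);
    the median of their squares discounts isolated impulsive samples.
    """
    Nw = len(err_window)
    a1 = 1.483 * (1.0 + 5.0 / (Nw - 1))           # finite correction factor
    med = np.median(np.asarray(err_window) ** 2)  # med[C_e(n)]
    return lam0 * sigma2_prev + a1 * (1.0 - lam0) * med
```

The threshold for outlier rejection is then obtained as ξ = 2.576 √(σ̂e²(n)).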

Taking the derivative of the cost function J(n) with respect to the IIR weight vector wn and applying the steepest descent method, the update equation of the weight vector can be obtained by:

$$ {\begin{aligned} \mathbf{w}_{n + 1} = \mathbf{w}_{n} - \mu_{\mathrm{w}} \frac{{\partial J(n)}}{{\partial \mathbf{w}_{n} }}=\mathbf{w}_{n} + \mu_{\mathrm{w}} \frac{{\psi [\! e (n)]}}{{\mathbf{u}_{n}^{T} \mathbf{u}_{n}^{} + \varepsilon }}\frac{1}{{\Delta x}} \dot{\mathbf{u}}_{n}^{T} {\mathbf{Cq}}_{i,n} \mathbf{g}_{n}, \end{aligned}} $$
(12)

where the function ψ[ e(n)] is given as:

$$\begin{array}{@{}rcl@{}} \psi [e(n)]= \left\{ \begin{array}{l} e, 0 \le \left| e \right| < \xi \\ 0, {\text{ otherwise, }} \\ \end{array} \right. \end{array} $$
(13)

In a similar way, taking the derivative of the cost function J(n) with respect to qi,n and using the steepest descent method, the recursive equation of the control point vector is expressed by:

$$\begin{array}{@{}rcl@{}} \mathbf{q}_{i,n+ 1} = \mathbf{q}_{i,n} - \mu_{\mathrm{q}} \frac{{\partial J(n)}}{{\partial \mathbf{q}_{i,n} }} = \mathbf{q}_{i,n} + \mu_{\mathrm{q}} \frac{{\psi [\! e (n)]}}{{\mathbf{u}_{n}^{T} \mathbf{u}_{n}^{} {\mathrm{+ }}\varepsilon }}\mathbf{C}^{T} \mathbf{u}_{n}. \end{array} $$
(14)

It can be seen in (12) and (14) that the output error is replaced by the score function ψ[ e(n)], resulting in the freezing of the updates of the IIR weight vector and control point vector when the output error exceeds the threshold parameter. This helps the IIR-SAF-NLMM algorithm suppress the adverse effect of non-Gaussian impulsive noise.
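The score function of Eq. (13) amounts to a simple clipping rule; a minimal sketch:

```python
def huber_score(e, xi):
    """Score function psi[e(n)] of Eq. (13): pass small errors through,
    zero out errors at or beyond the threshold xi (outlier rejection)."""
    return e if abs(e) < xi else 0.0
```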

3.2 IIR-SAF-SMNLMM algorithm

As we know, in the case of linear adaptive filters, the set-membership scheme chooses a specified bound γ ∈ R+ to find an appropriate data set S containing all possible input-desired data pairs (x,d) that satisfy [19]

$$\begin{array}{@{}rcl@{}} \Theta = \underset{(\mathbf{x}, d)}{\cap} \left\{ \mathbf{w} \in {\mathrm{R}}^{L} :\left| {d - \mathbf{w}^{T} \mathbf{x}} \right| \le \gamma \right\}, \end{array} $$
(15)

where Θ is the feasibility set, containing all tap-weight vectors for which |d−wTx|≤γ, and L denotes the linear filter length.

In the case of the IIR-SAF, we consider both the IIR adaptive filter tap-weight vector and the control point vector, and the feasibility set can be given by:

$$\begin{array}{@{}rcl@{}} \Theta_{0} = \underset{(\mathbf{x}, d)}{\cap} \left\{ (\mathbf{w}, \mathbf{q}) \in {\mathrm{R}}^{M + N} \times {\mathrm{R}}^{4} :\left| {d - \mathbf{u}^{T} {\mathbf{Cq}}} \right| \le \gamma \right\}, \end{array} $$
(16)

The spline adaptive filter updates the IIR tap weights and control points using the input-desired data pair [xn,d(n)] at time instant n. We then define the constraint set Hn as the set of all combined vectors (w,q) for which the output error is upper bounded by γ, mathematically expressed as:

$$ \begin{aligned} H_{n} = \left\{ (\mathbf{w}_{n}, \mathbf{q}_{i,n}) \in {\mathrm{R}}^{M + N} \times {\mathrm{R}}^{4} :\left| {d(n) - \mathbf{u}_{n}^{T} {\mathbf{Cq}}_{i,n}} \right| \le \gamma \right\}, \end{aligned} $$
(17)

The exact membership set, interpreted as the intersection of the constraint sets Hk over all time instants k = 1, 2, ⋯, n, is given as:

$$\begin{array}{@{}rcl@{}} \Lambda_{n} = \cap_{k = {\mathrm{1}}}^{n} H_{k}, \end{array} $$
(18)

Note that the membership set Λn is the minimal set estimate of Θ0 at time n. If the magnitude of the error upper bound γ is chosen properly, the membership set is nonempty. Thus, the set-membership adaptive scheme can be incorporated into the IIR-SAF-NLMM to seek valid estimates of the combined vectors (w,q) that lie in the membership set at steady state.

Employing the set-membership framework in the IIR-SAF-NLMM and using the set-membership constraint value g(n), the modified M-estimate-based cost function can be set as:

$$\begin{array}{@{}rcl@{}} \bar J(n) = \theta \left(\mathbf{u}_{n}^{T} \mathbf{u}_{n} \right)^{- 1} \rho \left[e(n) - g(n)\right], \end{array} $$
(19)

Then, the modified Huber M-estimate function associated with the constraint value is given by:

$$\begin{array}{@{}rcl@{}} \rho \left[e(n) - g(n)\right] = \left\{ \begin{array}{l} \left[e(n) - g(n)\right]^{2} /2, 0 \le \left| {e(n)} \right| < \xi \\ \xi^{2} /2, {\text{otherwise}}, \\ \end{array}\right. \end{array} $$
(20)

where θ is a constant, g(n)=γsgn[e(n)], γ≥0 is the set-membership error bound, and sgn[·] is the sign function.

Applying the steepest descent method, the update equation of the IIR tap-weight vector can be obtained as:

$$\begin{array}{@{}rcl@{}} \mathbf{w}_{n + 1} = \mathbf{w}_{n} - \frac{{\partial \bar J(n)}}{{\partial \mathbf{w}_{n} }}, \end{array} $$
(21)

For 0≤|e(n)|<ξ, the derivative of the cost function (19) with respect to wn is derived as:

$$ {\begin{aligned} \frac{{\partial \bar{J}(n)}}{{\partial \mathbf{w}_{n}}} &= \frac{{\theta \left(\mathbf{u}_{n}^{T} \mathbf{u}_{n} \right)^{- 1} \partial \rho \left[e(n) - g(n)\right]}}{{\partial \mathbf{w}_{n} }} \\ &= - 2\theta \left(\mathbf{u}_{n}^{T} \mathbf{u}_{n} \right)^{- 1} \left[ {e(n) - g(n)} \right]\frac{{\partial y (n)}}{{\partial u_{n} }}\frac{{\partial u_{n} }}{{\partial s(n)}}\nabla_{\mathbf{w}_{n}} s(n) \\ &= - 2\theta \left(\mathbf{u}_{n}^{T} \mathbf{u}_{n} \right)^{- 1} \left({e(n) - \gamma {\text{sgn}}[e(n)]} \right)\left({1 / {\Delta x}} \right)\varphi^{\prime}_{i} (u_{n})\mathbf{g}_{n} \\ &= - 2\theta \left(\mathbf{u}_{n}^{T} \mathbf{u}_{n} \right)^{- 1} e(n)\left({1 - \frac{\gamma }{{\left| {e(n)} \right|}}} \right)\left({{1 / {\Delta x}}} \right)\varphi^{\prime}_{i} (u_{n})\mathbf{g}_{n}, \\ \end{aligned}} $$
(22)

where \(\varphi ^{\prime }_{i} (u_{n}){\mathrm {= }}\dot {\mathbf {u}}_{n}^{T} {\mathbf {Cq}}_{i,n}\). Substituting (13) into (22), the derivative of \(\bar {J}(n)\) with respect to wn can also be expressed as:

$$\begin{array}{@{}rcl@{}} \frac{{\partial \bar{J}(n)}}{{\partial \mathbf{w}_{n} }}= - \theta \alpha_{n} \left(\mathbf{u}_{n}^{T} \mathbf{u}_{n} \right)^{- 1} \psi [e(n)]{{(1 / \Delta x)\varphi^{\prime}_{i} (u_{n})\mathbf{g}_{n} }}, \end{array} $$
(23)

where the constant 2 is absorbed by θ and the parameter αn is defined as:

$$\begin{array}{@{}rcl@{}} \alpha_{n} = \left\{ \begin{array}{l} 1 - \frac{\gamma }{{\left| {e(n)} \right|}}, \gamma < \left| {e(n)} \right| < \xi \\ 0, {\text{otherwise}}, \\ \end{array} \right. \end{array} $$
(24)

Hence, the recursive relation of the IIR tap-weight vector is given as:

$$ {\begin{aligned} \mathbf{w}_{n + 1} \,=\, \mathbf{w}_{n} \,+\, \theta \alpha_{n} \left(\mathbf{u}_{n}^{T} \mathbf{u}_{n} + \varepsilon_{0} \right)^{- 1} \psi [e(n)]{{(1 / \Delta x)\varphi^{\prime}_{i} (u_{n})\mathbf{g}_{n} }}, \end{aligned}} $$
(25)

where ε0 is a small regularization parameter that prevents division by zero. In the special case e(n) = 0, we have ψ[ e(n)] = 0 and the weight update is suspended.

For the updating of the control point vector, taking the derivative of the cost function (19) with respect to qi,n, we have:

$$\begin{array}{@{}rcl@{}} \frac{{\partial \bar{J}(n)}}{{\partial \mathbf{q}_{i,n} }}= - \theta \alpha_{n} \left(\mathbf{u}_{n}^{T} \mathbf{u}_{n} \right)^{- 1} \psi [\!e(n)]\mathbf{C}^{T} \mathbf{u}_{n}. \end{array} $$
(26)

Using the steepest descent method, the learning rule of the control point vector is given as:

$$\begin{array}{@{}rcl@{}} \mathbf{q}_{i,n + 1} = \mathbf{q}_{i,n} + \theta \alpha_{n} \left(\mathbf{u}_{n}^{T} \mathbf{u}_{n} + \varepsilon_{0} \right)^{- 1} \psi [\!e(n)]\mathbf{C}^{T} \mathbf{u}_{n}, \end{array} $$
(27)

It is noted that in (25) and (27), θαn plays the role of the step size of the IIR-SAF-NLMM, i.e., the step sizes μw = μq = θαn of the IIR-SAF-SMNLMM are no longer constant. The IIR-SAF-SMNLMM algorithm can thus be viewed as a variable step-size IIR-SAF-NLMM algorithm. When the upper bound γ is set to 0, αn = 1 and the IIR-SAF-SMNLMM degenerates into the IIR-SAF-NLMM.
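One IIR-SAF-SMNLMM iteration, Eqs. (25) and (27), can be sketched as follows, with αn supplied externally from Eq. (24) or (29) (names are illustrative):

```python
import numpy as np

def smnlmm_step(w, q_i, e, u, u_dot, C, g, theta, alpha, xi, delta_x, eps0=1e-8):
    """Set-membership update of tap weights (Eq. (25)) and control points (Eq. (27))."""
    psi = e if abs(e) < xi else 0.0    # score function of Eq. (13)
    step = theta * alpha * psi / (u @ u + eps0)
    phi_prime = u_dot @ C @ q_i        # spline slope  u_dot^T C q_{i,n}
    w_new = w + step * (phi_prime / delta_x) * g
    q_new = q_i + step * (C.T @ u)
    return w_new, q_new
```

With alpha = 0 (error inside the set-membership bound) or |e| ≥ xi (outlier), the update freezes, which illustrates the data-selective and impulse-rejecting behavior described above.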

In (20), the outlier rejection depends on the choice of ξ; an improper choice leaves part of the impulsive noise in e(n), making αn in (24) nonoptimal. Here, we use the impulse-free estimate of E[|e(n)|] [20] instead of e(n) in (24), which is computed as:

$$\begin{array}{@{}rcl@{}} \bar{\sigma}_{e}^{} (n)= \lambda_{1} \bar{\sigma}_{e}^{} (n - 1) + (1 - \lambda_{1}){\text{med}}[A_{e} (n)], \end{array} $$
(28)

where Ae(n) = [|e(n)|, |e(n−1)|, ⋯, |e(n−Nw+1)|], and λ1 is a forgetting factor close to but smaller than one.

Hence, (24) can be approximated as:

$$\begin{array}{@{}rcl@{}} \alpha_{n} = \left\{ \begin{array}{l} 1 - \frac{\gamma }{{\bar{\sigma}_{e}^{} (n)}}, \gamma < \bar{\sigma}_{e}^{} (n) < \xi, \\ 0, {\text{otherwise}}.\\ \end{array} \right. \end{array} $$
(29)
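Eqs. (28)–(29) can be sketched jointly (illustrative names):

```python
import numpy as np

def variable_step_factor(sigma_prev, abs_err_window, gamma, xi, lam1=0.99):
    """Update the impulse-free estimate of E[|e(n)|], Eq. (28), and
    return the resulting step-size factor alpha_n, Eq. (29)."""
    sigma = lam1 * sigma_prev + (1.0 - lam1) * np.median(np.asarray(abs_err_window))
    alpha = 1.0 - gamma / sigma if gamma < sigma < xi else 0.0
    return alpha, sigma
```

Because the median of the magnitude window is insensitive to isolated impulses, alpha_n varies smoothly even when individual errors are contaminated.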

4 Computational complexity

The computational burdens per iteration of the IIR-SAF-LMS, IIR-SAF-NLMS, IIR-SAF-NLMM, and IIR-SAF-SMNLMM algorithms are compared in Table 1. For the spline output calculation and adaptation, we take into account the terms \(\mathbf {u}_{n}^{T} {\mathbf {Cq}}_{i,n}\), \(\dot {\mathbf {u}}_{n}^{T} {\mathbf {Cq}}_{i,n}\), and CTun; these need only 4Kp multiplications plus 4Kq additions, where Kp and Kq (less than 16) are constants defined with reference to the spline implementation structure in [21]. Due to the normalization, the IIR-SAF-NLMS algorithm requires an extra four multiplications, four additions, and two divisions. Compared with the IIR-SAF-NLMS algorithm, the proposed IIR-SAF-SMNLMM algorithm needs an extra eight multiplications and three additions, caused by (25)–(29). If M + N ≫ 4, the proposed algorithms only require O(Nw log2 Nw) additional median operations per iteration compared with the other two cited algorithms.

Table 1 Comparison of the computational complexities

5 Convergence properties

In this section, we study the convergence properties based on the energy conservation relation. The identification scheme is shown in Fig. 2; w0 and q0 represent the IIR filter and the nonlinear network of the target nonlinear system, respectively. It is reasonable to suppose that the adaptation of the variables wn and qi,n proceeds in two separate phases, i.e., only the adaptation of the linear filter is considered in the first phase of learning, and the linear filter is assumed optimal in the second [7]. To make the analysis tractable, the following assumptions are made:

Fig. 2 Identification scheme for IIR-SAF nonlinear system

Assumption 1

The ambient noise is η(n)=ηG(n)+ηI(n), where ηG(n) is white Gaussian background noise with zero mean and variance \(\sigma _{G}^{2}\), and ηI(n) is the impulsive noise, modeled as an independent and identically distributed (i.i.d.) random variable. The sequence η(n), whose variance is \(\sigma _{\eta }^{2}\), is independent of x(n) and s(n).

Assumption 2

For a sufficiently long IIR weight error vector, the output error e(n) is independent of \({\varphi ^{\prime }_{i} (u_{n})}\), ||gn||2, and ||CTun||2, and the parameter αn in (24), which involves e(n), is also independent of \({\varphi ^{\prime }_{i} (u_{n})}\), ||gn||2, and ||CTun||2.

In the first phase, we define the IIR weight error vector as Δwn=w0wn; the iteration of Δwn can be written as:

$$ \Delta \mathbf{w}_{n + 1} \,=\, \Delta \mathbf{w}_{n} - \theta \alpha_{n} \!\left(\mathbf{u}_{n}^{T} \mathbf{u}_{n} \,+\, \varepsilon_{0} \!\right)^{- 1}\! \!\psi [e(n)]\!{{(1 / \Delta x)\varphi^{\prime}_{i} (u_{n})\mathbf{g}_{n} }}, $$
(30)

Setting the regularization parameter ε0 to zero and taking the mathematical expectation of the squared Euclidean norm of both sides of (30), we have:

$$ {\begin{aligned} D(n + 1) &= D(n) - 2\theta E\left\{ \psi [e(n)]\alpha_{n} ||\mathbf{u}_{n} ||^{- 2} \xi_{w} (n){\varphi^{\prime}_{i} (u_{n})} / \left(\mathbf{c}_{3} \mathbf{q}_{i,n} \right)\right\} \\ &\quad + \left(\theta^{2} /\Delta x^{2} \right)E\left\{ \alpha_{n}^{2} \varphi^{\prime}_{i} (u_{n})^{2} \psi^{2} [e(n)]||\mathbf{g}_{n} ||^{2} ||\mathbf{u}_{n} ||^{- 4} \right\}, \\ \end{aligned}} $$
(31)

where D(n)=E[||Δwn||2] denotes the mean square deviation (MSD), and ξw(n) is defined as the noise-free a priori error associated with the IIR weight error vector Δwn, which can be expressed as [22]:

$$\begin{array}{@{}rcl@{}} \xi_{w} (n)= \left(\mathbf{c}_{3} \mathbf{q}_{i,n} /\Delta x\right)\Delta \mathbf{w}_{n}^{T} \mathbf{g}_{n}, \end{array} $$
(32)

where c3 is the third row of the matrix C.

In addition, in this phase, the control point vector is assumed to be optimal; thus, the output error is given as:

$$\begin{array}{@{}rcl@{}} e(n) = d(n) - y(n) = \xi_{w} (n)+ \eta (n), \end{array} $$
(33)

Considering that ξw(n) is not corrupted by the impulsive noise, based on the features of the function (13) which rejects the outliers, ψ[ e(n)] can be approximated as [23]:

$$\begin{array}{@{}rcl@{}} \psi [\!e(n)] = \psi \left[\xi_{w} (n){{+ }}\eta (n)\right] \approx \xi_{w} (n) + \psi [\eta (n)], \end{array} $$
(34)

Assuming that ξw(n) is independent of ψ[η(n)], substituting (34) into (31), and applying Assumptions 1 and 2, we have:

$$ {\begin{aligned} D(n + 1) &= D(n) - 2\theta E\left[\alpha_{n} \xi_{w}^{2} (n)\right]E\left[||\mathbf{u}_{n} ||^{- 2} \varphi^{\prime}_{i} (u_{n})/\left(\mathbf{c}_{3} \mathbf{q}_{i,n} \right)\right] \\ & \quad +\! \left(\theta^{2} /\Delta x^{2} \right)\left\{ E\left[\alpha_{n}^{2} \xi_{w}^{2} (n)\right]\!+ \sigma_{\psi (\eta)}^{2} E\left[\alpha_{n}^{2} \right]\right\} E\left[\varphi^{\prime}_{i} (u_{n})^{2} ||\mathbf{g}_{n} ||^{2} ||\mathbf{u}_{n} ||^{- 4} \right], \\ \end{aligned}} $$
(35)

where \(\sigma _{\psi (\eta)}^{2}\) denotes the variance of ψ[η(n)].

By using the property tr[ AB]=tr[ BA], where tr[·] denotes the trace operator for matrices, and inserting (32) into (35), the relation (35) can be rewritten equivalently as:

$$ {\begin{aligned} {\text{tr}}\left[{\text{cov}} \left(\Delta \mathbf{w}_{n + 1} \right)\right] &= {\text{tr}}\left[{\text{cov}} \left(\Delta \mathbf{w}_{n} \right)\right] - 2A_{n} \theta {\text{tr}}\left[E\left(\alpha_{n} \mathbf{g}_{n} \mathbf{g}_{n}^{T} \Delta \mathbf{w}_{n} \Delta \mathbf{w}_{n}^{T} \right)\right] \\ & \quad+ B_{n} \theta^{2} {\text{tr}}\left[E\left(\alpha_{n}^{2} \mathbf{g}_{n} \mathbf{g}_{n}^{T} \Delta \mathbf{w}_{n} \Delta \mathbf{w}_{n}^{T} \right)\right]+ C_{n} E\left[\alpha_{n}^{2} \right], \\ \end{aligned}} $$
(36)

where \({\text{cov}} \left(\Delta \mathbf{w}_{n} \right) = E\left(\Delta \mathbf{w}_{n} \Delta \mathbf{w}_{n}^{T} \right)\), \(A_{n} = \left[\left(\mathbf{c}_{3} \mathbf{q}_{i,n} \right)/\Delta x^{2} \right] E\left[||\mathbf{u}_{n} ||^{- 2} \varphi^{\prime}_{i} (u_{n})\right]\), \(B_{n} = \left[\left(\mathbf{c}_{3} \mathbf{q}_{i,n} \right)^{2} /\Delta x^{4} \right] E\left[\varphi^{\prime}_{i} (u_{n})^{2} ||\mathbf{g}_{n}||^{2} ||\mathbf{u}_{n} ||^{- 4}\right]\), and \(C_{n} = \sigma_{\psi (\eta)}^{2} E\left[\varphi^{\prime}_{i} (u_{n})^{2} ||\mathbf{g}_{n} ||^{2} ||\mathbf{u}_{n} ||^{- 4} \right]\).

Now by applying the unitary matrix Q, we have:

$$ {\begin{aligned} {\text{tr}}\!\left[\mathbf{Q}^{T} {\text{cov}} \left(\Delta \mathbf{w}_{n + 1} \right)\mathbf{Q}\right] \!&= {\text{tr}}\left[\mathbf{Q}^{T} {\text{cov}} \left(\Delta \mathbf{w}_{n} \right)\mathbf{Q}\right] \\ &\quad- 2A_{n} \theta {\text{tr}}\left[E\left(\alpha_{n} \mathbf{Q}^{T} \mathbf{g}_{n} \mathbf{g}_{n}^{T} {\mathbf{QQ}}^{T} \Delta \mathbf{w}_{n} \Delta \mathbf{w}_{n}^{T} \mathbf{Q}\right)\right] \\ &\quad + B_{n} \theta^{2} {\text{tr}}\left[E\left(\alpha_{n}^{2} \mathbf{Q}^{T} \mathbf{g}_{n} \mathbf{g}_{n}^{T} {\mathbf{QQ}}^{T} \Delta \mathbf{w}_{n} \Delta \mathbf{w}_{n}^{T} \mathbf{Q}\right)\right]\\ &\quad + C_{n} E\left[\alpha_{n}^{2} \right], \\ \end{aligned}} $$
(37)

Assuming that Δwn+1 is independent of the filter inputs and using the Assumption 2, (37) can be rewritten as:

$$ {\begin{aligned} {\text{tr}}\left[{\text{cov}} \left(\Delta {\mathbf{w}^{\prime}}_{n + 1} \right)\right] &= {\text{tr}}\left[{\text{cov}} \left(\Delta {\mathbf{w}^{\prime}}_{n} \right)\right] - 2A_{n} \theta {\text{tr}}\left[E\left(\alpha_{n} \mathbf{\Lambda }_{n} \right)\right]{\text{tr}}\left[{\text{cov}} \left(\Delta {\mathbf{w}^{\prime}}_{n} \right)\right] \\ & \quad+ B_{n} \theta^{2} {\text{tr}}\left[E\left(\alpha_{n}^{2} \right)\mathbf{\Lambda }_{n} \right]{\text{tr}}\left[{\text{cov}} \left(\Delta {\mathbf{w}^{\prime}}_{n} \right)\right]+C_{n} E\left[\alpha_{n}^{2} \right] \\ &{\mathrm{= tr}}\left[\mathbf{I} - 2A_{n} \theta E(\alpha_{n})\mathbf{\Lambda }_{n} + B_{n} \theta^{2} E\left(\alpha_{n}^{2} \right)\mathbf{\Lambda }_{n} \right]{\text{tr}}\left[{\text{cov}} \left(\Delta {\mathbf{w}^{\prime}}_{n} \right)\right] \\&\quad+ C_{n} E\left[\alpha_{n}^{2} \right] \\ \end{aligned}} $$
(38)

where Δw′n+1=QTΔwn+1, and Λn is a diagonal matrix whose elements λl, l = 0, ⋯, M+N−1, are the eigenvalues of \(E\left (\mathbf {g}_{n} \mathbf {g}_{n}^{T} \right)\). From (38), the algorithm is stable when \( \left | {1 - 2A_{n} \theta E(\alpha _{n})\lambda _{l} + B_{n} \theta ^{2} E\left (\alpha _{n}^{2} \right)\lambda _{l}} \right | < 1\), which gives:

$$\begin{array}{@{}rcl@{}} 0 < \theta < \frac{{2A_{n} E[\alpha_{n} ]}}{{B_{n} E\left[\alpha_{n}^{2} \right]}}, \end{array} $$
(39)

In the second phase, we define the control point error vector Δqi,n=q0qi,n and then obtain:

$$\begin{array}{@{}rcl@{}} \Delta \mathbf{q}_{i,n + 1} \,=\, \Delta \mathbf{q}_{i,n} \,-\, \theta \alpha_{n} \left(\mathbf{u}_{n}^{T} \mathbf{u}_{n} \,+\, \varepsilon_{0} \right)^{- 1} \psi [e(n)]\mathbf{C}^{T} \mathbf{u}_{n}, \end{array} $$
(40)

Taking the mathematical expectation of the energies of both sides of (40) and using Assumptions 1 and 2, again, we obtain:

$$ {\begin{aligned} K(n + 1) &= K(n) - 2\theta E\left[\alpha_{n} \xi_{q}^{2} (n)\right]E\left[||\mathbf{u}_{n} ||^{- 2} \right] \\ & \quad + \theta^{2} \left\{ E\left[\alpha_{n}^{2} \xi_{q}^{2} (n)\right]+ E\left[\alpha_{n}^{2} \right]\sigma_{\psi (\eta)}^{2} \right\} E\left[||\mathbf{u}_{n} ||^{- 4} ||\mathbf{C}^{T} \mathbf{u}_{n} ||^{2} \right], \\ \end{aligned}} $$
(41)

where K(n)=E[||Δqi,n||2], and ξq(n) is defined as the noise-free a priori error associated with the control point error vector Δqi,n, given by:

$$\begin{array}{@{}rcl@{}} \xi_{q} (n)=\Delta \mathbf{q}_{i,n}^{T} \mathbf{C}^{T} \mathbf{u}_{n}, \end{array} $$
(42)

By using again the property tr[AB]=tr[BA] and inserting (42) into (41), we obtain:

$$ {\begin{aligned} {\text{tr}}\left[{\text{cov}} \left(\Delta \mathbf{q}_{i,n + 1} \right)\right] =& {\text{tr}}[{\text{cov}} (\Delta \mathbf{q}_{i,n})]- 2\bar A_{n} \theta {\text{tr}}\left\{ E\left[\alpha_{n} \mathbf{C}^{T} \mathbf{u}_{n} \left(\mathbf{C}^{T} \mathbf{u}_{n} \right)^{T} \Delta \mathbf{q}_{i,n} \Delta \mathbf{q}_{i,n}^{T} \right]\right\} \\ &+ \bar{ B}_{n} \theta^{2} {\text{tr}}\left\{ E\left[\alpha_{n}^{2} \mathbf{C}^{T} \mathbf{u}_{n} \left(\mathbf{C}^{T} \mathbf{u}_{n} \right)^{T} \Delta \mathbf{q}_{i,n} \Delta \mathbf{q}_{i,n}^{T} \right]\right\} + \theta^{2}\sigma_{\psi (\eta)}^{2} \bar{B}_{n} E\left[\alpha_{n}^{2} \right], \\ \end{aligned}} $$
(43)

In a similar way to cov(Δwn), the relation (43) can be rewritten as:

$$ {\begin{aligned} {\text{tr}}\left[{\text{cov}} \left(\Delta \bar{\mathbf{q}}_{i,n + 1} \right)\right] = &{\text{tr}}\left[\mathbf{I} - 2\bar{ A}_{n} \theta E(\alpha_{n})\bar{\mathbf{\Lambda }}_{n} {{+ }}\bar B_{n} \theta^{2} E\left(\alpha_{n}^{2} \right)\bar{\mathbf{\Lambda} }_{n} \right]\\&{\text{tr}}\left[{\text{cov}} \left(\Delta \bar{\mathbf{q}}_{i,n} \right)\right] + \theta^{2} \sigma_{\psi (\eta)}^{2} \bar B_{n} E\left[\alpha_{n}^{2} \right], \\ \end{aligned}} $$
(44)

where \({\text {cov}} \left (\Delta \bar {\mathbf {q}}_{i,n + 1} \right) = \mathbf {Q}^{T} {\text {cov}} \left (\Delta \mathbf {q}_{i,n + 1} \right) \mathbf {Q}\), \(\bar { A}_{n} = E\left [||\mathbf {u}_{n} ||^{- 2} \right ]\), \(\bar { B}_{n} = E\left [||\mathbf {u}_{n} ||^{- 4} ||\mathbf {C}^{T} \mathbf {u}_{n} ||^{2} \right ]\), and \(\bar {\mathbf {\Lambda } }_{n}\) is a diagonal matrix whose elements \(\bar \lambda _{p}\) (p=0,1,2,3) are the eigenvalues of E[CTun(CTun)T]. The system is stable when \(\left | {1 - 2\bar A_{n} \theta E(\alpha _{n})\bar \lambda _{p} {{+ }}\bar B_{n} \theta ^{2} E\left (\alpha _{n}^{2} \right)\bar \lambda _{p}} \right | < 1\), which gives:

$$\begin{array}{@{}rcl@{}} 0 < \theta < \frac{{2\bar A_{n} E[\alpha_{n} ]}}{{\bar B_{n} E\left[\alpha_{n}^{2} \right]}}, \end{array} $$
(45)

Note that, combining (39) and (45), the constant θ must satisfy:

$$\begin{array}{@{}rcl@{}} 0 < \theta < \min \left\{ {\frac{{2A_{n} E[\alpha_{n} ]}}{{B_{n} E\left[\alpha_{n}^{2} \right]}},\frac{{2\bar A_{n} E[\alpha_{n} ]}}{{\bar B_{n} E\left[\alpha_{n}^{2} \right]}}} \right\}. \end{array} $$
(46)
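As a concrete illustration, the bound in (46) can be evaluated numerically once the moments it involves have been estimated from data. The sketch below is a minimal Python helper; the numeric inputs at the end are illustrative placeholders, not values from this paper.

```python
def theta_bound(A, B, A_bar, B_bar, E_alpha, E_alpha2):
    """Step-size bound (46): the smaller of the filter-weight bound (39)
    and the control-point bound (45)."""
    bound_w = 2.0 * A * E_alpha / (B * E_alpha2)          # from (39)
    bound_q = 2.0 * A_bar * E_alpha / (B_bar * E_alpha2)  # from (45)
    return min(bound_w, bound_q)

# Illustrative moment estimates only:
theta_max = theta_bound(A=0.2, B=0.05, A_bar=0.2, B_bar=0.08,
                        E_alpha=0.9, E_alpha2=0.85)
```

Any θ chosen in (0, theta_max) then satisfies both (39) and (45) simultaneously.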

6 Results and discussion

In this section, several detailed experimental results are presented in the context of IIR-SAF nonlinear system identification, as shown in Fig. 2. The mean square error (MSE), defined as \(10\log _{10} e^{2}(n)\), is used to evaluate the performance. All the following results are obtained by averaging over 100 Monte Carlo trials. The input signal is generated by the following relationship:

$$\begin{array}{@{}rcl@{}} x(n) = \omega x(n - 1) + \sqrt {1 - \omega^{2}} a(n), \end{array} $$
(47)

where a(n) is a white Gaussian noise signal with zero mean and unit variance, and the parameter 0<ω<0.95 controls the level of correlation between adjacent samples. The lengths of the MA and AR parts of the IIR adaptive filter are set to M=2 and N=3, respectively. The initial tap-weight vector of the IIR adaptive filter is w−1=[1,0,...,0] with length M+N=5, while the control point vector is initially set to a straight line with unit slope. Only the CR-spline basis is applied in the simulations; similar results can also be obtained with the B-spline basis.
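The input generation in (47) can be sketched as follows (pure Python; the function name and seed handling are our own):

```python
import random

def generate_input(n_samples, omega=0.7, seed=0):
    """AR(1) process of (47): x(n) = omega*x(n-1) + sqrt(1 - omega^2)*a(n),
    where a(n) is zero-mean, unit-variance white Gaussian noise."""
    rng = random.Random(seed)
    scale = (1.0 - omega ** 2) ** 0.5
    x, prev = [], 0.0
    for _ in range(n_samples):
        prev = omega * prev + scale * rng.gauss(0.0, 1.0)
        x.append(prev)
    return x
```

The scaling factor sqrt(1 − ω²) keeps the stationary variance of x(n) at unity for any 0 < ω < 1, so changing ω alters only the correlation of the input, not its power.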

The unknown IIR-SAF nonlinear system is composed of an IIR filter whose transfer function is given by:

$$\begin{array}{@{}rcl@{}} \mathbf{w}_{0} (z) = \frac{{0.6 - 0.4z^{- 1} }}{{1 + 0.2z^{- 1} - 0.5z^{- 2} + 0.1z^{- 3} }}, \end{array} $$
(48)

and the nonlinear spline function is implemented by an LUT q0 with 23 control points; Δx is set to 0.2, and q0 is defined by:

$$ {\begin{aligned} \mathbf{q}_{0} &= [ - 2.2, - 2.0, - 1.8, - 1.6, - 1.4, - 1.2, - 1.0, - 0.8, - 0.91, \\ & \quad- 0.4, - 0.2,0.05,0, - 0.4,0.58,1.0,1.0,1.2,1.4,1.6,1.8,2.0,2.2]. \\ \end{aligned}} $$
(49)
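To make the target model concrete, the following sketch simulates the unknown system: the linear part is a direct-form realization of w0(z) in (48), and the nonlinearity interpolates the LUT with the Catmull-Rom (CR) basis. The index/offset bookkeeping below follows the common SAF formulation [6, 9], but the paper's exact convention may differ slightly.

```python
import math

# CR-spline basis matrix (Catmull-Rom), as commonly used in the SAF literature
C = [[-0.5,  1.5, -1.5,  0.5],
     [ 1.0, -2.5,  2.0, -0.5],
     [-0.5,  0.0,  0.5,  0.0],
     [ 0.0,  1.0,  0.0,  0.0]]

def iir_filter(x, b=(0.6, -0.4), a=(1.0, 0.2, -0.5, 0.1)):
    """Direct-form realization of w0(z) in (48): b is the MA part, a the AR part."""
    y = []
    for n in range(len(x)):
        s = sum(bk * x[n - k] for k, bk in enumerate(b) if n - k >= 0)
        s -= sum(ak * y[n - k] for k, ak in enumerate(a) if k >= 1 and n - k >= 0)
        y.append(s / a[0])
    return y

def spline_output(s, q, dx=0.2):
    """Interpolate the LUT q at the linear output s with the CR basis."""
    Q = len(q)
    u = s / dx + (Q - 1) / 2.0       # map s onto the knot axis
    j = int(math.floor(u))
    i = min(max(j - 1, 0), Q - 4)    # 0-based index of the 4-point span
    t = u - j                        # local abscissa in [0, 1)
    tv = (t ** 3, t ** 2, t, 1.0)
    return sum(tv[r] * sum(C[r][c] * q[i + c] for c in range(4)) for r in range(4))
```

With a straight-line LUT of unit slope (the initialization used above), spline_output reproduces the identity, which is a quick sanity check of the indexing.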

The ambient noise is η(n)=ηG(n)+ηI(n), where ηG(n) is the white Gaussian background noise and ηI(n) is the impulsive noise. The background noise ηG(n) is a zero-mean independent white Gaussian sequence with variance \(\sigma _{G}^{2}\), added to the input of the unknown system at a 40 dB signal-to-noise ratio (SNR). The SNR is defined as \(\text {SNR} = 10\log _{10} \left (\sigma _{x}^{2} /\sigma _{G}^{2} \right)\), where \(\sigma _{x}^{2}\) is the variance of the system input x(n). The impulsive interference ηI(n) is modeled either by a contaminated Gaussian (CG) process or by the symmetric αS distribution. The CG impulse can be represented by ηI(n)=z(n)b(n) with a signal-to-interference ratio (SIR) of − 10 dB or − 20 dB, where z(n) is a zero-mean white Gaussian process and b(n) is a Bernoulli sequence with probability mass function P(b)=1−P for b=0 and P(b)=P for b=1, P being the probability of occurrence of the impulsive interference. The SIR is defined as \(\text {SIR} = 10\log _{10} \left (\sigma _{d}^{2} /\sigma _{z}^{2} \right)\), where \(\sigma _{z}^{2}\) and \(\sigma _{d}^{2}\) are the variances of z(n) and the desired signal \(\tilde d(n),\) respectively. The symmetric αS distribution is characterized by the fractional-order parameter p and the characteristic exponent α, for which the fractional-order signal-to-noise ratio (FSNR) is defined as \(\text {FSNR} = 10\log _{10} [E(|\tilde d(n)|^{p})/E(|\eta _{I} (n)|^{p})]\) with 0<p<α. The step sizes are set to μw=μq=0.01 for the IIR-SAF-LMS, IIR-SAF-NLMS, and proposed IIR-SAF-NLMM. For the proposed IIR-SAF-SMNLMM, the constant θ is set to 0.06 except in Fig. 4. The other parameters are selected as follows: \(\gamma = \sqrt {\tau \sigma _{G}^{2}} /(\kappa + 1)\), τ=5, κ=0.6 except in Fig. 5, λ0=λ1=0.99, ε0=ε=0.001, ω=0.7, α=0.8, and p=0.7.
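The CG impulsive interference described above can be sketched as follows; the helper names are ours, and σz is obtained by inverting the SIR definition, \(\sigma _{z}^{2} = \sigma _{d}^{2}\, 10^{-\text {SIR}/10}\):

```python
import random

def sigma_z_from_sir(sigma_d, sir_db):
    """Impulse std sigma_z implied by SIR = 10*log10(sigma_d^2 / sigma_z^2)."""
    return sigma_d * 10.0 ** (-sir_db / 20.0)

def cg_impulsive_noise(n_samples, sigma_z, prob=0.01, seed=0):
    """Contaminated-Gaussian impulses eta_I(n) = z(n)*b(n): z(n) is white
    Gaussian with std sigma_z, b(n) is Bernoulli with P(b = 1) = prob."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, sigma_z) if rng.random() < prob else 0.0
            for _ in range(n_samples)]
```

At SIR = − 10 dB an impulse carries ten times the power of the desired signal, so a single contaminated sample can drive an unprotected gradient update far from the solution; this is what the M-estimate weighting is designed to prevent.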

The first experiment evaluates the performance of the proposed algorithms in the absence of impulsive noise. Figure 3 shows the MSE learning curves of the IIR-SAF-LMS, IIR-SAF-NLMS, and the proposed IIR-SAF-NLMM and IIR-SAF-SMNLMM in the absence of impulsive noise. All the algorithms reach nearly identical steady-state MSEs. Compared with the cited algorithms, the IIR-SAF-NLMM converges somewhat more slowly because of the modified Huber M-estimate function, whereas the IIR-SAF-SMNLMM converges faster than all the other algorithms. The update ratios of the corresponding algorithms in the absence of impulsive noise are listed in Table 2; the proposed algorithms have lower update ratios than the two cited algorithms, especially the IIR-SAF-SMNLMM, owing to its set-membership error bound.
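The robustness mechanism referenced here is the modified Huber M-estimate: samples whose error magnitude exceeds a threshold are treated as impulses and contribute no gradient. A minimal sketch, assuming a hard 0/1 weighting and a recursive robust variance estimate for the threshold (the forgetting factor and clipping constant are illustrative, not the paper's exact recursion):

```python
def huber_weight(e, xi):
    """Modified Huber weight alpha_n: full update for |e| <= xi, none for impulses."""
    return 1.0 if abs(e) <= xi else 0.0

def robust_variance(sigma2_prev, e, lam=0.99, clip=9.0):
    """Recursive variance estimate that caps the contribution of outliers,
    so a single impulse cannot inflate the threshold xi = k * sigma."""
    return lam * sigma2_prev + (1.0 - lam) * min(e * e, clip * sigma2_prev)
```

The clipping in the variance recursion matters as much as the weight itself: without it, one large impulse would raise the threshold and let subsequent impulses through.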

Fig. 3
figure 3

MSEs of the IIR-SAF-LMS, IIR-SAF-NLMS, and proposed algorithms in the absence of impulsive noise

Table 2 Update ratios for the corresponding algorithms in the absence of impulsive noise

Figure 4 shows the MSE learning curves of the proposed IIR-SAF-SMNLMM for different values of θ in the absence of impulsive noise. A larger θ clearly leads to a faster convergence rate, while the steady-state MSEs remain nearly identical across the different values of θ. Moreover, Table 3 shows that a larger θ also decreases the update ratio because of the faster convergence. Therefore, in applications of the proposed IIR-SAF-SMNLMM, the parameter θ, bounded by (45), can be set as large as possible. The performance of the IIR-SAF-SMNLMM algorithm for different values of κ in the absence of impulsive noise is shown in Fig. 5. The proposed IIR-SAF-SMNLMM holds a similar convergence rate for the different values of κ, while a larger κ yields a lower steady-state MSE and a larger update ratio, as shown in Table 4.
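The set-membership gating that produces the low update ratios in Tables 2-4 can be sketched as follows: no adaptation is performed unless the error magnitude exceeds the bound γ. Scaling the step by (1 − γ/|e|) follows the SM-NLMS rule of [19]; whether the paper scales θ this way or simply gates the fixed θ is an assumption of this sketch.

```python
def error_bound(tau, sigma_g2, kappa):
    """Bound gamma = sqrt(tau * sigma_G^2) / (kappa + 1) from the parameter list."""
    return (tau * sigma_g2) ** 0.5 / (kappa + 1.0)

def sm_step(e, gamma, theta):
    """Effective step factor: zero (skip the update) when |e| <= gamma,
    otherwise theta scaled by the SM-NLMS factor (1 - gamma/|e|)."""
    if abs(e) > gamma:
        return theta * (1.0 - gamma / abs(e))
    return 0.0
```

This also explains the trend in Table 4: a larger κ shrinks γ, so more samples trigger an update (higher update ratio) and the filter settles to a lower steady-state MSE.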

Fig. 4
figure 4

MSEs of the IIR-SAF-SMNLMM algorithm for different θ in the absence of impulsive noise

Table 3 Update ratios of the IIR-SAF-SMNLMM algorithm for different θ in the absence of impulsive noise
Fig. 5
figure 5

MSEs of the IIR-SAF-SMNLMM algorithm for different κ in the absence of impulsive noise

Table 4 Update ratios of the IIR-SAF-SMNLMM algorithm for different κ in the absence of impulsive noise

In the second experiment, the performance of the proposed algorithms is compared with that of the IIR-SAF-LMS and IIR-SAF-NLMS algorithms under CG impulsive noise and symmetric αS impulsive noise. Figures 6, 7, and 8 compare the performance at different SIRs and probabilities of occurrence of the CG impulsive interference. Figures 9, 10, and 11 show the MSE learning curves of the four algorithms in the symmetric αS noise environment at different FSNRs. As these plots show, the proposed algorithms remain robust in the impulsive noise environments, whereas the two cited algorithms fail to suppress the impulses; the proposed IIR-SAF-SMNLMM again achieves the fastest convergence rate. Besides, Table 5 shows that the proposed algorithms have lower update ratios than the cited algorithms.

Fig. 6
figure 6

MSE curves for the corresponding algorithms in the CG impulsive noise (SNR = 40 dB, SIR = − 10 dB, P = 0.01)

Fig. 7
figure 7

MSE curves for the corresponding algorithms in the CG impulsive noise (SNR = 40 dB, SIR = − 20 dB, P = 0.001)

Fig. 8
figure 8

MSE curves for the corresponding algorithms in the CG impulsive noise (SNR = 40 dB, SIR = − 10 dB, P = 0.001)

Fig. 9
figure 9

MSE curves for the corresponding algorithms in the symmetric αS impulsive noise (SNR = 40 dB, FSNR = − 5 dB)

Fig. 10
figure 10

MSE curves for the corresponding algorithms in the symmetric αS impulsive noise (SNR = 40 dB, FSNR = 0 dB)

Fig. 11
figure 11

MSE curves for the corresponding algorithms in the symmetric αS impulsive noise (SNR = 40 dB, FSNR = 15 dB)

Table 5 Update ratios (%) for the corresponding algorithms in the CG impulsive noise

The third experiment evaluates the tracking ability of the proposed algorithms. The target system changes abruptly after 30,000 samples, i.e., (w0,q0)→(w1,q1), where the system (w1,q1) contains an IIR filter given by:

$$\begin{array}{@{}rcl@{}} \mathbf{w}_{1} (z) = \frac{{0.4 - 0.2z^{- 1} }}{{1 - 0.2z^{- 1} + 0.01z^{- 2} - 0.002z^{- 3} }}, \end{array} $$
(50)

and a nonlinear spline function implemented by an LUT q1 with 23 control points, defined by:

$$ {\begin{aligned} \mathbf{q}_{1} =& [ - 2.2, - 2.12, - 2.0, - 1.52, - 1.43, - 1.1, - 0.92, - 0.71, - 0.88, \\ &- 0.44, - 0.18,0.12, - 0.12, - 0.2,0.42,0.75,1.0,1.2,1.31,1.52,1.93,2.1,2.2]. \\ \end{aligned}} $$
(51)

Figures 12 and 13 show the MSE tracking curves of the four algorithms under CG noise and symmetric αS impulsive noise, respectively. The proposed algorithms clearly track the abrupt change better and are more robust against impulsive noise than the cited algorithms, with the IIR-SAF-SMNLMM algorithm performing best.

Fig. 12
figure 12

Tracking ability for the corresponding algorithms in the CG impulsive noise (SNR = 40 dB, SIR = − 10 dB, P = 0.01)

Fig. 13
figure 13

Tracking ability for the corresponding algorithms in the symmetric αS impulsive noise (SNR = 40 dB, FSNR = − 5 dB)

7 Conclusions

In order to suppress the effect of impulsive noise and decrease the computational burden, this paper combines the set-membership framework and the least M-estimate scheme and proposes two variants of the IIR spline adaptive filter. The proposed IIR-SAF-NLMM algorithm is derived by using a robust M-estimator as the cost function, and the IIR-SAF-SMNLMM is characterized by a set-membership error bound that leads to an evident decrease in the update ratio. Moreover, the computational burdens and the convergence properties of the proposed IIR-SAF-SMNLMM algorithm are also given. Compared to the cited spline adaptive filtering algorithms, the proposed algorithms offer more robustness against impulsive noise, better tracking ability, and lower computational complexity.

8 Methods/Experimental

This paper studies the IIR-SAF-NLMM and IIR-SAF-SMNLMM algorithms, which aim at suppressing the effect of impulsive noise and decreasing the computational burden compared with conventional spline adaptive filtering algorithms. The derivations of the algorithms are based on the modified Huber M-estimate function and the set-membership framework. Besides, the convergence properties of the IIR-SAF-SMNLMM algorithm are analyzed by using the energy conservation relation. The numerical experiments are carried out with white Gaussian and colored input signals in CG impulsive noise or symmetric αS impulsive noise environments. The results demonstrate that the two proposed variants of the SAF are robust to impulsive noise and that the IIR-SAF-SMNLMM algorithm achieves a low update ratio.

Abbreviations

APA: Affine projection algorithm

AWGN: Additive white Gaussian noise

IIR: Infinite impulse response

LMS: Least mean square

LN: Linear-nonlinear

LNL: Linear-nonlinear-linear

LTI: Linear time-invariant

LUT: Look-up table

MSE: Mean square error

NL: Nonlinear-linear

NLMM: Normalized least M-estimate

NLMS: Normalized least mean square

NLN: Nonlinear-linear-nonlinear

NN: Neural networks

SAF: Spline adaptive filter

VAF: Volterra adaptive filters

References

  1. S. Haykin, Adaptive Filter Theory, 4th edn (Prentice-Hall, Englewood Cliffs, 2002).

  2. A. H. Sayed, Adaptive Filters (Wiley, NJ, 2008).

  3. V. J. Mathews, Adaptive polynomial filters. IEEE Sig. Process. Mag. 8, 10–26 (1991).

  4. S. Haykin, Neural Networks and Learning Machines, 2nd edn (Prentice-Hall, Englewood Cliffs, 2008).

  5. E. W. Bai, F. Giri, Introduction to Block-Oriented Nonlinear Systems (Springer, London, 2010).

  6. M. Scarpiniti, D. Comminiello, R. Parisi, A. Uncini, Nonlinear spline adaptive filtering. Sig. Process. 93, 772–783 (2013).

  7. M. Scarpiniti, D. Comminiello, R. Parisi, A. Uncini, Hammerstein uniform cubic spline adaptive filtering: learning and convergence properties. Sig. Process. 100, 112–123 (2014).

  8. M. Scarpiniti, D. Comminiello, R. Parisi, A. Uncini, Novel cascade spline architectures for the identification of nonlinear systems. IEEE Trans. Circ. Syst.-I: Regular Papers 62, 1825–1835 (2015).

  9. M. Scarpiniti, D. Comminiello, R. Parisi, A. Uncini, Nonlinear system identification using IIR spline adaptive filters. Sig. Process. 108, 30–35 (2015).

  10. F. Lindsten, T. B. Schon, M. I. Jordan, Bayesian semiparametric Wiener system identification. Automatica 49, 2053–2063 (2013).

  11. M. Rasouli, D. Westwick, W. Rosehart, Quasiconvexity analysis of the Hammerstein model. Automatica 50, 277–281 (2014).

  12. A. E. Nordsjo, L. H. Zetterberg, Identification of certain time-varying Wiener and Hammerstein systems. IEEE Trans. Signal Process. 49, 577–592 (2001).

  13. M. A. Chitre, J. R. Potter, S. H. Ong, Optimal and near-optimal signal detection in snapping shrimp dominated ambient noise. IEEE J. Ocean. Eng. 31, 497–503 (2006).

  14. K. J. Sangston, K. R. Gerlach, Non-Gaussian noise models and coherent detection of radar targets. IEEE Trans. Aerosp. Electron. Syst. 30, 330–340 (1992).

  15. A. Mahmood, M. A. Chitre, M. A. Armand, Detecting OFDM signals in alpha-stable noise. IEEE Trans. Commun. 62, 3571–3583 (2014).

  16. S. C. Chan, Y. X. Zou, A recursive least M-estimate algorithm for robust adaptive filtering in impulsive noise: fast algorithm and convergence performance analysis. IEEE Trans. Sig. Process. 52, 975–991 (2004).

  17. Y. Zou, S. C. Chan, T. S. Ng, Least mean M-estimate algorithms for robust adaptive filtering in impulse noise. IEEE Trans. Circ. Syst.-II: Analog Digit. Sig. Process. 47, 1564–1569 (2000).

  18. S. Guan, Z. Li, Normalised spline adaptive filtering algorithm for nonlinear system identification. Neural Process. Lett. 5, 1–13 (2017).

  19. S. Gollamudi, S. Nagaraj, S. Kapoor, Y. F. Huang, Set-membership filtering and a set-membership normalized LMS algorithm with an adaptive step size. IEEE Sig. Process. Lett. 5, 111–114 (1998).

  20. S. Zhang, J. Zhang, H. Han, Robust shrinkage normalized sign algorithm in an impulsive noise environment. IEEE Trans. Circ. Syst.-II: Express Briefs 64, 91–95 (2017).

  21. S. Guarnieri, F. Piazza, A. Uncini, Multilayer feedforward networks with adaptive spline activation function. IEEE Trans. Neural Netw. 10, 672–683 (1999).

  22. M. Scarpiniti, D. Comminiello, G. Scarano, R. Parisi, A. Uncini, Steady-state performance of spline adaptive filters. IEEE Trans. Sig. Process. 64, 816–828 (2016).

  23. Z. Zheng, H. Zhao, Affine projection M-estimate subband adaptive filters for robust adaptive filtering in impulsive noise. Sig. Process. 120, 64–70 (2016).


Acknowledgements

The authors would like to thank the National Natural Science Foundation of China for its financial support.

Funding

This work was financially supported by the National Natural Science Foundation of China under Grant 61501119 and by the Fund of the Dongguan Municipal Science and Technology Bureau under Grant 2016508140.

Availability of data and materials

Please contact author for data requests.

Ethics approval and consent to participate

Not applicable.

Author information


Contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Chang Liu.

Ethics declarations

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Liu, C., Peng, C., Tang, X. et al. Two variants of the IIR spline adaptive filter for combating impulsive noise. EURASIP J. Adv. Signal Process. 2019, 8 (2019). https://doi.org/10.1186/s13634-019-0605-9



Keywords