A gradient-adaptive lattice-based complex adaptive notch filter
EURASIP Journal on Advances in Signal Processing volume 2016, Article number: 79 (2016)
Abstract
This paper presents a new complex adaptive notch filter for estimating and tracking the frequency of a complex sinusoidal signal. A gradient-adaptive lattice structure is adopted instead of the traditional gradient one to accelerate convergence. Using the ordinary differential equation approach, the proposed algorithm is proved to yield unbiased estimates. Closed-form expressions for the steady-state mean square error and the upper bound of the step size are also derived. Simulations validate the theoretical analysis and demonstrate that the proposed method achieves considerably faster convergence and better tracking than existing methods, particularly in low signal-to-noise ratio environments.
1 Introduction
The adaptive notch filter (ANF) is an efficient frequency estimation and tracking technique used in a wide variety of applications, such as communication systems, biomedical engineering, and radar systems [1–12]. The complex ANF (CANF) has recently gained much attention [13–20]. A direct-form pole-zero-constrained CANF was first developed in [13] with a modified Gauss-Newton algorithm. A recursive least squares (RLS)-based Steiglitz-McBride (RLS-SM) algorithm was later established to accelerate the convergence rate [14]. However, both algorithms are computationally complicated and can produce biased estimates.
To address this problem, numerous efficient and unbiased least mean square (LMS)-based algorithms have been developed, such as the complex plain gradient (CPG) [15], modified CPG (MCPG) [16], lattice-form CANF (LCANF) [17], and arctangent-based [18] algorithms. However, all these LMS-based algorithms converge more slowly than the RLS-based ones. Moreover, the step size in LMS-based methods must be kept within a limited range to ensure stability, and this range depends on the eigenvalues of the correlation matrix of the input signal. These drawbacks limit the practical applications of LMS-based algorithms.
Several normalized LMS (NLMS)-based CANF algorithms have also been established, including the normalized CPG (NCPG) algorithm [19] and the improved simplified lattice complex algorithm [20]. However, the former may become unstable in low signal-to-noise ratio (SNR) conditions, and the latter can only estimate positive instantaneous frequencies.
In this paper, we develop a new CANF based on the lattice algorithm [21]. Instead of the traditional gradient estimation filter, we propose a normalized lattice predictor that performs both forward and backward predictions. This scheme reduces computational complexity and enhances robustness to noise. Furthermore, the convergence rate is improved significantly compared with conventional gradient-based and nongradient-based methods, without sacrificing tracking performance.
A classic ordinary differential equation (ODE) method is applied to confirm the unbiasedness of the proposed algorithm. In addition, theoretical analyses are conducted on the stable range of the step size and the steady-state mean square error (MSE) under different conditions. Computer simulations are conducted to confirm the validity of the theoretical analysis results and the effectiveness of the proposed algorithm.
The following notations are adopted throughout this paper. j denotes the imaginary unit (the square root of minus one). ln[·] denotes the principal branch of the complex natural logarithm, and Im{·} denotes taking the imaginary part of a complex value. Z{·} and E{·} denote the z-transform and statistical expectation operators, respectively. δ(·) represents the Dirac delta function. The asterisk ∗ denotes complex conjugation, and ⊗ is the convolution operator.
2 Filter structure and adaptive algorithm
We consider the following noisy complex sinusoidal input signal x(n) with amplitude A, frequency ω_0, and initial phase ϕ_0:
\[ x(n) = A e^{j(\omega_0 n + \phi_0)} + v(n), \]
where ϕ_0 is uniformly distributed over [0, 2π) and v(n) = v_r(n) + j v_i(n) is assumed to be a zero-mean white complex Gaussian noise process in which v_r(n) and v_i(n) are uncorrelated zero-mean real white noise processes with identical variances. The first-order, pole-zero-constrained CANF with the transfer function \(H(z) = \frac{1 - e^{j\theta}z^{-1}}{1 - \alpha e^{j\theta}z^{-1}}\) is widely used to estimate the frequency ω_0, where θ is the notch frequency and α represents the pole-zero constrained factor, which determines the notch filter's 3-dB attenuation bandwidth. The pole is kept inside the unit circle by restricting the value of α.
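For illustration, the short sketch below (not part of the paper; the values of θ and α are arbitrary) evaluates the magnitude response of H(z) on the unit circle and confirms the deep null at ω = θ together with near-unity gain away from the notch.

```python
import numpy as np

# Magnitude response of the pole-zero-constrained complex notch filter
# H(z) = (1 - e^{j*theta} z^{-1}) / (1 - alpha e^{j*theta} z^{-1}).
theta, alpha = 0.4 * np.pi, 0.95               # illustrative values
w = np.linspace(-np.pi, np.pi, 2049)           # frequency grid on the unit circle
z_inv = np.exp(-1j * w)                        # z^{-1} = e^{-j*omega}
H = (1 - np.exp(1j * theta) * z_inv) / (1 - alpha * np.exp(1j * theta) * z_inv)

print(np.abs(H[np.argmin(np.abs(w - theta))]))  # small: deep notch at omega = theta
print(np.abs(H[np.argmin(np.abs(w + theta))]))  # close to 1 away from the notch
```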
We now propose a new structure to implement the complex notch filter. As shown in Fig. 1, the input signal x(n) is first processed by an all-pole prefilter \(H_p(z) = 1/D(z) = 1/(1 + a_0 z^{-1})\) to obtain s_0(n), where a_0 is the coefficient of the all-pole filter. A lattice predictor is then employed to produce the forward and backward prediction errors s_1(n) and r_1(n), respectively. The transfer functions from s_0(n) to s_1(n) and r_1(n) are given by \(H_f(z) = N(z) = 1 + k_0 z^{-1}\) and \(H_b(z) = z^{-1}N^*(z) = k_0^* + z^{-1}\), where k_0 is the reflection coefficient of the lattice filter. To obtain the desired pole-zero-constrained notch filter, the following relations must be satisfied:
Thus, θ can be computed as θ = Im{ln[−k_0]}.
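The relations linking a_0 and k_0 to the notch parameters are not displayed above; the assignments k_0 = −e^{jθ} and a_0 = αk_0 used in the sketch below are inferred from H(z) and from the expression \(H_{s_1}(z) = (1 + k_0 z^{-1})/(1 + \alpha k_0 z^{-1})\) quoted later in Section 4. The code illustrates one pass of a signal through the prefilter and lattice stage of Fig. 1; the function name and values are illustrative.

```python
import numpy as np

# One pass of x(n) through the structure of Fig. 1: all-pole prefilter
# 1/(1 + a0*z^-1) followed by a first-order lattice stage producing the
# forward error s1(n) and backward error r1(n).
def lattice_notch_pass(x, theta, alpha):
    k0 = -np.exp(1j * theta)                   # reflection coefficient (assumed)
    a0 = alpha * k0                            # prefilter coefficient (assumed)
    s0 = np.zeros(len(x), dtype=complex)
    s1 = np.zeros(len(x), dtype=complex)
    r1 = np.zeros(len(x), dtype=complex)
    s0_prev = 0j
    for n, xn in enumerate(x):
        s0[n] = xn - a0 * s0_prev              # s0(n) + a0*s0(n-1) = x(n)
        s1[n] = s0[n] + k0 * s0_prev           # H_f(z) = 1 + k0*z^-1
        r1[n] = np.conj(k0) * s0[n] + s0_prev  # H_b(z) = k0* + z^-1
        s0_prev = s0[n]
    return s0, s1, r1

# The notch frequency is recovered from the reflection coefficient:
k0 = -np.exp(1j * 0.4 * np.pi)
theta_hat = np.imag(np.log(-k0))               # Im{ln[-k0]} = 0.4*pi
```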
At this point, a normalized stochastic gradient algorithm is derived to update the reflection coefficient k_0. We consider the following cost function:
We replace the cost function J_fb with its instantaneous estimate, i.e.,
By taking the derivative of \({\hat J_{fb}}\) with respect to θ(n), we obtain
Considering that θ(n) is real, the adaptation equation can be written as
where μ is the step size and the normalized signal ξ(n) can be recursively calculated as
where ρ denotes the smoothing factor.
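Because the cost function, its gradient, and the recursion for ξ(n) are not reproduced above, the following sketch is one plausible instantiation rather than the paper's exact update: it descends the instantaneous forward/backward error power |s_1(n)|² + |r_1(n)|² and normalizes the step by an exponentially smoothed estimate of |s_0(n)|² with smoothing factor ρ. The gradient expression, the ξ(n) recursion, and all parameter values are assumptions.

```python
import numpy as np

# Minimal sketch of the full adaptive notch loop (assumed forms, see lead-in).
def canf_gal(x, mu=0.1, alpha=0.95, rho=0.9, theta0=0.0):
    theta, s0_prev, xi = theta0, 0j, 1.0
    theta_track = np.empty(len(x))
    for n, xn in enumerate(x):
        k0 = -np.exp(1j * theta)                 # reflection coefficient
        a0 = alpha * k0                          # prefilter coefficient
        s0 = xn - a0 * s0_prev                   # all-pole prefilter output
        s1 = s0 + k0 * s0_prev                   # forward prediction error
        r1 = np.conj(k0) * s0 + s0_prev          # backward prediction error
        xi = rho * xi + (1 - rho) * abs(s0) ** 2         # assumed xi(n) recursion
        grad = (-np.imag(k0 * np.conj(s1) * s0_prev)      # ~ d|s1|^2/dtheta
                + np.imag(np.conj(k0) * np.conj(r1) * s0))  # ~ d|r1|^2/dtheta
        theta -= mu * grad / xi                  # normalised gradient step
        s0_prev = s0
        theta_track[n] = theta
    return theta_track

# Usage on a noisy complex sinusoid at omega_0 = 0.4*pi (SNR = 10 dB):
rng = np.random.default_rng(0)
N, w0 = 4000, 0.4 * np.pi
v = np.sqrt(0.05) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
x = np.exp(1j * (w0 * np.arange(N) + 1.0)) + v
print(canf_gal(x)[-1])                           # expected to settle near 0.4*pi = 1.2566
```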
Table 1 shows the computational complexities of the proposed algorithm and of four conventional methods [14, 16, 17, 19]. Note that the complexity of the proposed algorithm is comparable to that of LMS-based methods and lower than that of NLMS-based and RLS-based algorithms.
3 Convergence analysis
We now use the ODE approach, which has been applied to analyse several other ANF algorithms [17, 22], to study the convergence properties of the adaptive algorithm. Assuming that the adaptation is sufficiently slow and that the input signal is stationary, the associated ODEs for the proposed adaptive algorithm can be expressed as
where \(G(\theta(\tau)) = E\{s_0^*(n)s_0(n)\}\) and
Here, \(S_x(\omega)\) is the power spectral density (PSD) of x(n), \(S_x(\omega) = 2\pi A^2\delta(\omega - \omega_0) + \sigma_v^2\) [17], and the transfer functions \(N(e^{j\omega})\) and \(1/D(e^{j\omega})\) are those defined in the previous section with z replaced by \(e^{j\omega}\). Since Eq. 9 is the associated ordinary differential equation of the proposed adaptive algorithm, θ(n) converges to a stationary point of Eq. 9 [23], and this stationary point must satisfy \(\frac{d}{d\tau}\theta(\tau) = 0\). Because ξ(τ) is always positive, the stationary point is a solution of the equation f(θ(τ)) = 0. Based on Eq. 11, θ = ω_0 is the sole stationary point over one period of the function. To confirm that this stationary point is stable, we choose the Lyapunov function L(τ) = [ω_0 − θ(τ)]², which satisfies L(τ) ≥ 0 for all τ. Meanwhile,
holds for all θ(τ) ≠ ω_0, which implies that L(τ) is a decreasing function of τ for |ω_0 − θ(τ)| < π. It is thus proved that θ(n) always converges to the desired frequency ω_0 [23].
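To make the Lyapunov argument explicit, note that the θ-equation of the ODE pair has (up to the step-size constant) the normalized form dθ(τ)/dτ = f(θ(τ))/ξ(τ). Assuming, since Eq. 11 is not reproduced here, that f(θ) is proportional to \(-A^2\sin(\theta-\omega_0)/|1-\alpha e^{j(\theta-\omega_0)}|^2\), which is consistent with the sinusoidal PSD and with the approximation used after Eq. 18, the derivative of L(τ) is
\[
\frac{dL(\tau)}{d\tau} = -2\,[\omega_0 - \theta(\tau)]\,\frac{d\theta(\tau)}{d\tau}
= -\frac{2\,[\omega_0 - \theta(\tau)]\,f(\theta(\tau))}{\xi(\tau)}
\;\propto\; -\frac{2A^{2}\,[\omega_0 - \theta(\tau)]\,\sin(\omega_0 - \theta(\tau))}{\xi(\tau)\,\bigl|1 - \alpha e^{j(\theta(\tau) - \omega_0)}\bigr|^{2}} < 0
\]
for 0 < |ω_0 − θ(τ)| < π, since ξ(τ) > 0 and x sin x > 0 on that interval.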
Now, we would like to compute the upper bound of step size μ. Taking the expectation on both sides of Eq. 7, we obtain
where \(\bar {\theta } (n) = E\{\theta (n)\} \). Expanding Eq. 8 yields
Taking ensemble expectations on both sides and assuming that s_0(n) is wide-sense stationary, we have
where
In each step, we consider that [24]
where Δξ(n) is a zero-mean stochastic error sequence independent of the input signal. By applying Eq. 17 and disregarding the second-order error term, we obtain
By substituting Eqs. 11, 16, and 18 into Eq. 13, we get
Considering the approximations \(\frac{\sin(\bar{\theta} - \omega_0)}{\left|1 - \alpha e^{j(\bar{\theta} - \omega_0)}\right|^2} \approx \frac{\bar{\theta} - \omega_0}{(1 - \alpha)^2}\) and \(\sin(\bar{\theta} - \omega_0)/(\bar{\theta} - \omega_0) \approx 1\) (for small \(|\bar{\theta} - \omega_0|\)) [17], we have
To satisfy \(\left | {{\omega _{0}} - \bar {\theta } (n + 1)} \right | < \left | {{\omega _{0}} - \bar {\theta } (n)} \right |\), the step-size μ should satisfy:
Furthermore, when SNR→∞ or α→1, we have μ∈(0,2], which is independent of the input.
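The limiting range μ ∈ (0, 2] can be made plausible with a back-of-the-envelope argument, sketched here under stated assumptions since Eqs. 16–19 are not shown above: the mean error recursion is taken to have the contraction form implied by the analysis, \(\bar{\xi}\) is taken as the steady-state power of s_0(n) at θ = ω_0, and SNR = A²/σ_v²:
\[
\bar{\xi} \approx E\{|s_0(n)|^{2}\}\big|_{\theta=\omega_0}
= \frac{A^{2}}{(1-\alpha)^{2}} + \frac{\sigma_v^{2}}{1-\alpha^{2}}, \qquad
\omega_0 - \bar{\theta}(n+1) \approx \left(1 - \frac{\mu A^{2}}{(1-\alpha)^{2}\,\bar{\xi}}\right)\!\left(\omega_0 - \bar{\theta}(n)\right),
\]
so the error magnitude shrinks whenever
\[
0 < \mu < \frac{2\,(1-\alpha)^{2}\,\bar{\xi}}{A^{2}}
= 2\left(1 + \frac{1-\alpha}{(1+\alpha)\,\mathrm{SNR}}\right)
\;\longrightarrow\; 2 \quad \text{as } \alpha \to 1 \text{ or } \mathrm{SNR} \to \infty .
\]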
4 Steady-state MSE analysis
In this section, a PSD-based method [19, 25] is exploited to derive accurate expressions for the steady-state MSE of the estimated frequency. As discussed in the previous section, the estimated frequency converges to an unbiased value, i.e., \(\lim_{n \to \infty}\theta(n) = \omega_0\). Defining \(\Delta\theta(n) = \theta(n) - \omega_0\), we obtain the following two approximations: \(\lim_{n \to \infty}\sin(\Delta\theta(n)) \approx \Delta\theta(n)\) and \(\lim_{n \to \infty}\cos(\Delta\theta(n)) \approx 1\). Then, the steady-state transfer functions from x(n) to \(s_1(n)\) and \(s_0(n)\) can be written as:
The input signal x(n) in Eq. 1 is assumed to be composed of a single-frequency component and Gaussian white noise. Thus, the steady-state outputs \(s_1(n)\) and \(s_0(n)\) can be expressed as:
where \(n_{s_1}(n)\) and \(n_{s_0}(n)\) are the complex Gaussian parts of \(s_1(n)\) and \(s_0(n)\), respectively. By using Eqs. 21 and 22, we obtain
By substituting Eqs. 23 and 24 into Eq. 7, the adaptive update equation can be rewritten as
where
and
Substituting Eqs. 25 and 26 into Eq. 31 yields
Meanwhile, Eq. 32 can be rearranged as
Then,
Assuming that α is close to unity or that the SNR is sufficiently large, it holds that \(\left| {\frac {{{u_{3}}(n)}}{{{u_{4}}(n)}}} \right| \ge \frac {A}{{(1 - \alpha)\left | {{n_{{s_{0}}}}^{*}(n)} \right |}} \gg 1\). Thus, \(u_4(n)\) in Eq. 27 can be neglected.
Therefore, by subtracting ω_0 from both sides of Eq. 27 and defining u(n) = u_1(n) + u_2(n) and \(\beta = 1 - \bar \mu {A^{2}}/{(1 - \alpha)^{2}}\), we obtain
With Eq. 36, the transfer function from u(n) to Δθ(n) can be written as:
Hence, the MSE of the estimated frequency can be expressed as [26]:
where \(R_u(z)\) denotes the z-transform of \(r_u(l)\), the autocorrelation sequence of u(n), which can be calculated as:
where
and
Thus, \(R_u(z)\) in Eq. 38 can be divided into three parts:
where \({R_{{u_{1}}}}(z)\), \({R_{{u_{2}}}}(z)\), and \({R_{{u_{1}}{u_{2}}}}(z)\) denote the z-transform of \({r_{{u_{1}}}}(l)\), \({r_{{u_{2}}}}(l)\), and \({r_{{u_{1}}{u_{2}}}}(l)\), which will be calculated in what follows.
To get \({r_{{u_{1}}}}(l)\), we transform Eq. 29 as:
and then Eq. 40 can be rearranged as:
where
By using the results in Appendix A and considering that \({s_{{s_{0}}}}(n)\) and \({n_{{s_{1}}}}(n)\) are uncorrelated, we can rewrite Eqs. 46, 47, 48, and 49 as
where
Substituting Eqs. 50, 51, 52, and 53 into Eq. 45, we get
Considering Eq. 26, \({r_{{s_{{s_{0}}}}}}(l)\) in Eq. 56 can be written as:
Substituting Eq. 57 into Eq. 56 yields
The z-transform of both sides of Eq. 58 can be expressed as:
Note that \({R_{{n_{{s_{1}}}}}}(z)\) can be expanded as [26]:
where \({R_{n}}(z) = Z\{ v(n)\} = {\sigma _{v}^{2}}\) and \({H_{{s_{1}}}}(z) = \frac {{1 + {k_{0}}{z^{- 1}}}}{{1 + \alpha {k_{0}}{z^{- 1}}}}\). Utilizing the Taylor series expansion \(e^{j\Delta\theta} = 1 + j\Delta\theta + O(\Delta\theta^2)\), we obtain
Using a method similar to that used to derive \({R_{{u_{1}}}}(z)\), we obtain the following results (see Appendix B for details)
and
Substituting Eqs. 61, 62, and 63 into Eq. 38, finally we get
Equation 64 indicates that the estimated MSE is independent of the input frequency ω_0 and the smoothing factor ρ.
5 Simulation results
Computer simulations are conducted to confirm the effectiveness of the proposed algorithm and the validity of the theoretical analysis results.
5.1 Performance comparisons
In the following two simulations, the proposed algorithm is compared with four conventional algorithms [14, 16, 17, 19] under two different kinds of inputs, namely a fixed frequency input and a quadratic chirp input. The input signal takes the form \(\phantom {\dot {i}\!}x(n) = {e^{j(\varphi (n) + {\theta _{0}})}} + v(n)\), where φ(n) is the instantaneous phase. The parameters are adjusted to establish an equal steady-state MSE and an equal notch bandwidth for all the algorithms. The initial notch frequency value is set to zero for all the methods.
Figure 2 presents the MSE curves of the five algorithms for a fixed-frequency input, φ(n) = 0.4πn, at SNR = 10 dB and SNR = 0 dB, respectively. Note that the proposed algorithm outperforms the other four algorithms. The NCPG algorithm achieves a convergence rate similar to that of the proposed algorithm at SNR = 10 dB but diverges at SNR = 0 dB. This indicates that the proposed algorithm remains robust even in very low SNR conditions.
Figure 3 presents the tracking performance of the five algorithms for a quadratic chirp input signal, φ(n) = A_c(ϕ_1 n + ϕ_2 n² + ϕ_3 n³), where ϕ_1 = −π/4, ϕ_2 = π/2×10^{-3}, and ϕ_3 = −π/6×10^{-6}. The parameter A_c controls the chirp rate. In this case, the true instantaneous frequency is given by ∂φ(n)/∂n = A_c(ϕ_1 + 2ϕ_2 n + 3ϕ_3 n²). Figure 3a depicts the tracking MSE obtained when A_c = 1, and Fig. 3b presents the MSE with an increased chirp rate, A_c = 2. The results imply that in this non-stationary case, the proposed method achieves faster convergence than the other four algorithms. In terms of tracking, the RLS-SM method and the proposed method maintain a smaller MSE than the other three methods, especially in the high-chirp-rate region. An inspection of the individual learning curves of the NCPG algorithm shows that it even diverges in some runs.
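The quadratic-chirp test input and its true instantaneous frequency can be generated as in the sketch below; the SNR, sample count, and initial phase are not specified above and are assumptions.

```python
import numpy as np

# Quadratic-chirp input of Section 5.1 and its true instantaneous frequency.
N = 2000                                   # assumed length (keeps the frequency within (-pi, pi))
n = np.arange(N)
Ac = 1.0                                   # chirp-rate control (Ac = 2 for the faster chirp)
phi1, phi2, phi3 = -np.pi / 4, np.pi / 2 * 1e-3, -np.pi / 6 * 1e-6
phase = Ac * (phi1 * n + phi2 * n**2 + phi3 * n**3)        # varphi(n)
true_freq = Ac * (phi1 + 2 * phi2 * n + 3 * phi3 * n**2)   # d varphi(n) / dn

snr_db = 10.0                              # assumed SNR for the unit-amplitude signal
sigma_v2 = 10 ** (-snr_db / 10)
rng = np.random.default_rng(1)
v = np.sqrt(sigma_v2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
x = np.exp(1j * phase) + v                 # x(n) = e^{j*varphi(n)} + v(n), theta_0 = 0 assumed
```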
5.2 Simulations of steady-state estimation MSE
In the following four simulations, the simulated steady-state MSE of the proposed algorithm is compared with the theoretical result in Eq. 64 for different input frequencies ω_0, SNRs, pole radii α, and step sizes μ. The simulation results are obtained by averaging over 500 trials.
Figure 4 compares the theoretical and simulated steady-state MSEs versus the signal frequency ω_0 under two different SNRs (SNR = 60 and 10 dB). The curves show that the theoretical MSEs predict the simulated MSEs accurately and that the steady-state MSEs are independent of the input frequency ω_0. We also see that a higher SNR leads to a smaller MSE.
Figure 5 exhibits the comparison of the theoretical and simulated steady-state MSEs versus SNR under two different parameter settings: (1) α=0.9,μ=0.8 and (2) α=0.98,μ=0.1. The proposed approach predicts the MSEs well, although some discrepancies are observed with α=0.9,μ=0.8. That is because the CANF can hardly converge when the SNR is very low.
Figure 6 illustrates the comparison of the theoretical and simulated steady-state MSEs versus the pole radius α. As α decreases, the MSEs increase and the mismatch between the theoretical and simulated values grows. This is because Eq. 36 is derived under the assumption that α is close to unity; when α is small, this assumption no longer holds, which explains the mismatch in Fig. 6. The theoretical MSE therefore remains accurate only when α is close to unity.
As shown in Fig. 7, the theoretical MSEs predict the simulated steady-state MSEs well, particularly for μ < 1.8, but a mismatch appears as μ approaches the upper bound of the step size. Moreover, a larger step size yields a larger MSE.
6 Conclusions
This paper has presented a complex adaptive notch filter based on the gradient-adaptive lattice approach. The new algorithm is computationally efficient and provides unbiased estimates. Closed-form expressions for the steady-state MSE and the upper bound of the step size have been derived. Simulation results demonstrate that (1) the proposed algorithm achieves a faster convergence rate than traditional methods, particularly in low SNR conditions, and (2) the theoretical analysis of the proposed algorithm agrees well with the computer simulation results. By cascading the proposed first-order gradient-adaptive lattice filters, the algorithm can be extended to handle complex signals containing multiple sinusoids, which will be the focus of our future research.
7 Appendix A
Given complex sequences f(n) and g(n), we define a new function \(\zeta_{fg}(l)\) as
Thus, for the input signal x(n) defined in Eq. 1, we have
Given that ϕ_0 is uniformly distributed over [0, 2π), we have \(E\{ {e^{j2({\omega _{0}}n + {\phi _{0}})}}\} = 0\). Recall that v(n) = v_r(n) + j v_i(n) is assumed to be a zero-mean white complex Gaussian noise process in which v_r(n) and v_i(n) are uncorrelated zero-mean real white noise processes with identical variances. Therefore, we have the following relations:
where \({r_{{v_{r}}}}(l)\) and \({r_{{v_{i}}}}(l)\) are the autocorrelation sequences of v_r(n) and v_i(n), respectively, and \({r_{{v_{r}}{v_{i}}}}(l)\) is the cross-correlation sequence of v_r(n) and v_i(n). Consequently, we obtain
Substituting Eq. 70 into Eq. 66, we get
Suppose y(n)=h(n)⊗x(n), where h(n) denotes the impulse response of an arbitrary linear system. Then,
Moreover,
Substituting Eq. 72 into Eq. 73 and considering Eq. 71, we get
By using Eq. 74, it is clear that
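For reference, the standard correlation results that this type of derivation yields for y(n) = h(n) ⊗ x(n), with x(n) as in Eq. 1, are stated below; they are given as well-known facts rather than as a reproduction of Eqs. 74–76:
\[
r_y(l) = E\{y(n)\,y^*(n-l)\} = A^{2}\,\bigl|H_h(e^{j\omega_0})\bigr|^{2} e^{j\omega_0 l} + \sigma_v^{2}\sum_{k} h(k)\,h^*(k-l),
\qquad
\zeta_{yy}(l) = E\{y(n)\,y(n-l)\} = 0 ,
\]
where \(H_h(e^{j\omega}) = \sum_k h(k)e^{-j\omega k}\) is the frequency response of h(n); the second identity follows from \(E\{e^{j2(\omega_0 n + \phi_0)}\} = 0\) and \(\zeta_{vv}(l) = 0\).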
8 Appendix B
To get \({r_{{u_{2}}}}(l)\), we transform Eq. 30 as:
and then Eq. 41 can be rearranged as:
where
and
By assuming that \({n_{{s_{0}}}}(n)\) and \({n_{{s_{1}}}}(n)\) are jointly Gaussian stationary processes and utilising the Gaussian moment factoring theorem [27], we get
where cum(·) denotes a higher-order cumulant of complex random variables. We adopt the widely used independence assumption [28], which states that the present sample is independent of the past samples. Thus, we have \({\text {cum}}({n_{{s_{0}}}}^{*}(n + l),{n_{{s_{1}}}}(n + l),{n_{{s_{0}}}}^{*}(n),{n_{{s_{1}}}}(n)) = 0.\) Furthermore, since \({\zeta _{n_{{s_{0}}}^ * n_{{s_{0}}}^ * }}(l)\) and \({\zeta _{{n_{{s_{1}}}}{n_{{s_{1}}}}}}(l)\) are both zero (see Appendix A), \(q_1(l)\) in Eq. 82 can be rewritten as
where \({r_{{n_{{s_{i}}}}{n_{{s_{j}}}}}}(l) = E\{ {n_{{s_{i}}}}(n){n_{{s_{j}}}}^{*}(n - l)\},\;{\text {for}}\;i \ne j.\) Utilizing the same method, we get
and
where \({r_{{n_{{s_{i}}}}}}(l) = E\{ {n_{{s_{i}}}}(n){n_{{s_{i}}}}^{*}(n - l)\},\;i \in \{ 0,1\}\). Substituting Eqs. 83, 84, 85, and 86 into Eq. 77, we get
In the following part, the exact forms of \({r_{{n_{{s_{1}}}}}}(l)\), \({r_{{n_{{s_{0}}}}}}(l)\), \({r_{{n_{{s_{1}}}}{n_{{s_{0}}}}}}(l)\), and \({r_{{n_{{s_{0}}}}{n_{{s_{1}}}}}}(l)\) are derived. Note that \({R_{{n_{{s_{1}}}}}}(z)\) can be expanded as [26]
where \({R_{n}}(z) = {\sigma _{v}^{2}}\) and \({H_{{s_{1}}}}(z) = \frac {{1 + {k_{0}}{z^{- 1}}}}{{1 + \alpha {k_{0}}{z^{- 1}}}}\). Since \(r_{n_{s_1}}(l)\) is a two-sided sequence with region of convergence \(|k_0|/\alpha > |z| > \alpha|k_0|\), the inverse z-transform of \({R_{{n_{{s_{1}}}}}}(z)\) can be expressed as
where u(l) denotes the unit step sequence. Using the same method, we have
and
Substituting Eqs. 89, 90, 91, and 92 into Eq. 87 and taking the z-transform of both sides, we have
Substituting Eqs. 44 and 76 into Eq. 42 and considering that \({s_{{s_{0}}}}(n)\) is uncorrelated with \({n_{{s_{1}}}}(n)\) and \({n_{{s_{0}}}}(n)\), we have
Since \({s_{{s_{0}}}}(n)\) is a zero-mean stationary process, it holds that \(r_{u_1 u_2}(l) = 0\). Thus, we get
References
1. L-M Li, LB Milstein, Rejection of pulsed CW interference in PN spread-spectrum systems using complex adaptive filters. IEEE Trans. Comm. COM-31, 10–20 (1983).
2. D Borio, L Camoriano, LL Presti, Two-pole and multi-pole notch filters: a computationally effective solution for GNSS interference detection and mitigation. IEEE Syst. J. 2(1), 38–47 (2008).
3. RM Ramli, AOA Noor, SA Samad, A review of adaptive line enhancers for noise cancellation. Aust. J. Basic Appl. Sci. 6(6), 337–352 (2012).
4. R Zhu, FR Yang, J Yang, in 21st Int. Congress on Sound and Vibration 2014 (ICSV 2014). A variable coefficients adaptive IIR notch filter for bass enhancement (International Institute of Acoustics and Vibrations (IIAV), USA, 2014).
5. SW Kim, YC Park, YS Seo, DH Youn, A robust high-order lattice adaptive notch filter and its application to narrowband noise cancellation. EURASIP J. Adv. Signal Process. 2014(1), 1–12 (2014).
6. A Nehorai, A minimal parameter adaptive notch filter with constrained poles and zeros. IEEE Trans. Acoust. Speech Signal Process. ASSP-33(8), 983–996 (1985).
7. NI Choi, CH Choi, SU Lee, Adaptive line enhancement using an IIR lattice notch filter. IEEE Trans. Acoust. Speech Signal Process. 37(4), 585–589 (1989).
8. T Kwan, K Martin, Adaptive detection and enhancement of multiple sinusoids using a cascade IIR filter. IEEE Trans. Circ. Syst. 36(7), 937–947 (1989).
9. PA Regalia, An improved lattice-based adaptive IIR notch filter. IEEE Trans. Signal Process. 39, 2124–2128 (1991).
10. Y Xiao, L Ma, K Khorasani, A Ikuta, Statistical performance of the memoryless nonlinear gradient algorithm for the constrained adaptive IIR notch filter. IEEE Trans. Circ. Syst. I 52(8), 1691–1702 (2005).
11. J Zhou, in Proc. Inst. Elect. Eng., Vis., Image Signal Process. 153. Simplified adaptive algorithm for constrained notch filters with guaranteed stability (The Institution of Engineering and Technology (IET), UK, 2006), pp. 574–580.
12. L Tan, J Jiang, L Wang, Pole-radius-varying IIR notch filter with transient suppression. IEEE Trans. Instrum. Meas. 61(6), 1684–1691 (2012).
13. SC Pei, CC Tseng, Complex adaptive IIR notch filter algorithm and its applications. IEEE Trans. Circ. Syst. II 41(2), 158–163 (1994).
14. Y Liu, TI Laakso, PSR Diniz, in Proc. 2001 Finnish Signal Process. Symp. (FINSIG01). A complex adaptive notch filter based on the Steiglitz-McBride method (Helsinki University of Technology, Finland, 2001), pp. 5–8.
15. S Nishimura, HY Jiang, in Proc. IEEE Asia Pacific Conf. Circuits and Systems. Gradient-based complex adaptive IIR notch filters for frequency estimation (Institute of Electrical and Electronics Engineers (IEEE), USA, 1996), pp. 235–238.
16. A Nosan, R Punchalard, A complex adaptive notch filter using modified gradient algorithm. Signal Process. 92(6), 1508–1514 (2012).
17. PA Regalia, A complex adaptive notch filter. IEEE Signal Process. Lett. 17(11), 937–940 (2010).
18. R Punchalard, Arctangent based adaptive algorithm for a complex IIR notch filter for frequency estimation and tracking. Signal Process. 94, 535–544 (2014).
19. A Mvuma, T Hinamoto, S Nishimura, in Proc. IEEE MWSCAS. Gradient-based algorithms for a complex coefficient adaptive IIR notch filter: steady-state analysis and application (Institute of Electrical and Electronics Engineers (IEEE), USA, 2004).
20. H Liang, N Jia, CS Yang, in Int. Proc. of Computer Science and Information Technology, 58. Complex algorithms for lattice adaptive IIR notch filter (IACSIT Press, Singapore, 2012), pp. 68–72.
21. S Haykin, Adaptive Filter Theory, 4th edn. (Prentice-Hall, Upper Saddle River, NJ, 2002).
22. NI Cho, SU Lee, On the adaptive lattice notch filter for the detection of sinusoids. IEEE Circ. Syst. 40(7), 405–416 (1993).
23. L Ljung, T Soderstrom, Theory and Practice of Recursive Identification (MIT Press, Cambridge, 1983).
24. PSR Diniz, Adaptive Filtering: Algorithms and Practical Implementation, 3rd edn. (Springer, New York, 2008).
25. R Punchalard, Steady-state analysis of a complex adaptive notch filter using modified gradient algorithm. AEU-Int. J. Electron. Commun. 68(11), 1112–1118 (2014).
26. DG Manolakis, VK Ingle, SM Kogon, Statistical and Adaptive Signal Processing: Spectral Estimation, Signal Modeling, Adaptive Filtering, and Array Processing (McGraw-Hill, New York, 2000).
27. A Swami, System identification using cumulants. PhD thesis (University of Southern California, Dept. Elec. Eng.-Syst., 1989).
28. B Farhang-Boroujeny, Adaptive Filters: Theory and Applications (John Wiley & Sons, Chichester, UK, 2013).
Acknowledgements
This work is supported by the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant XDA06040501 and in part by the National Science Fund of China under Grant 61501449. We thank the reviewers for their constructive comments and suggestions.
Competing interests
The authors declare that they have no competing interests.
Cite this article
Zhu, R., Yang, F. & Yang, J. A gradient-adaptive lattice-based complex adaptive notch filter. EURASIP J. Adv. Signal Process. 2016, 79 (2016). https://doi.org/10.1186/s13634-016-0377-4