A low complexity reweighted proportionate affine projection algorithm with memory and row action projection
EURASIP Journal on Advances in Signal Processing volume 2015, Article number: 99 (2015)
Abstract
A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures and demonstrates performance similar to the mu-law and l_0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projection (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and performs similarly to l_0 PAPA and mu-law PAPA in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l_0 PAPA, which makes it very appealing for real-time implementation.
1 Introduction
Adaptive filtering has been studied for decades and has found wide application. The most common adaptive filter is the normalized least mean square (NLMS) algorithm, owing to its simplicity and robustness [1]. In the 1990s, the affine projection algorithm (APA), a generalization of NLMS, was found to converge faster than NLMS for colored input [2, 3]. Optimal step-size control of adaptive algorithms has also been widely studied to improve their performance [4, 5]. The impulse responses in many applications, such as network echo cancellation (NEC), are sparse; that is, a small percentage of the impulse response components have significant magnitude while the rest are zero or small. To exploit this property, the family of proportionate algorithms was proposed to improve performance in such applications [2]. These algorithms include proportionate NLMS (PNLMS) [6, 7] and proportionate APA (PAPA) [8].
The idea behind proportionate algorithms is to update each coefficient of the filter independently of the others by adjusting the adaptation step size in proportion to the magnitude of the estimated filter coefficient [6]. Compared to NLMS and APA, PNLMS and PAPA exhibit very fast initial convergence and tracking when the echo path is sparse. However, the large coefficients converge very quickly (in the initial period) at the cost of dramatically slowing the convergence of the small coefficients (after the initial period). To combat this issue, the mu-law PNLMS (MPNLMS) and mu-law PAPA algorithms were proposed [9–11]. Furthermore, the l_0-norm family of algorithms has recently drawn considerable attention for sparse system identification [12]. Accordingly, a new PNLMS algorithm based on the l_0 norm was proposed, since the l_0 norm represents a better measure of sparseness than the l_1 norm used in PNLMS [13].
On the other hand, the PNLMS and PAPA algorithms converge much more slowly than the corresponding NLMS and APA algorithms when the impulse response is dispersive. In response, the improved PNLMS (IPNLMS) and improved PAPA (IPAPA) were proposed, introducing a controlled mixture of proportionate and non-proportionate adaptation [14, 15]. The IPNLMS and IPAPA algorithms perform very well for both sparse and non-sparse systems. Recently, the block-sparse PNLMS (BS-PNLMS) algorithm was also proposed to improve the performance of PNLMS for identifying block-sparse systems [16].
To reduce the computational complexity of PAPA, the memory improved PAPA (MIPAPA) algorithm was proposed, which not only speeds up the convergence rate but also reduces computational complexity by taking into account the memory of the proportionate coefficients [17]. Dichotomous coordinate descent (DCD) iterations have previously been applied to the PAPA family of algorithms to implement the MIPAPA adaptive filter [18, 19]. Meanwhile, an iterative method based on PAPA with row action projection (RAP) has been shown to have good convergence properties with relatively low complexity [20].
In [21], the proportionate adaptive filter was derived from a unified view of variable-metric projection algorithms. In addition, the PNLMS algorithm and PAPA can both be deduced from a basis pursuit perspective [22, 23]. A more general framework, employing convex optimization, was further proposed to derive PNLMS adaptive algorithms for sparse system identification [24]. Here, a family of PAPA algorithms is first derived based on convex optimization, of which PAPA, mu-law PAPA, and l_0 PAPA are all special cases. Then, a reweighted PAPA is suggested in order to reduce the computational complexity. Finally, an efficient implementation of PAPA is proposed based on RAP and memory PAPA.
The organization of this article is as follows. A review of various PAPAs is presented in Section 2. Section 3 derives the proposed reweighted PAPA and presents an efficient memory implementation with RAP. The computational complexity is compared with that of PAPA, mu-law PAPA, and l_0 PAPA in Section 4. In Section 5, simulation results of the proposed algorithm are presented. The last section concludes the paper.
2 Review of various PAPAs
The input signal x(n) is filtered through the unknown coefficients h(n) to be identified, giving the observed output signal d(n):
\[d(n) = \mathbf{x}^{T}(n)\mathbf{h}(n) + v(n),\]
where
\[\mathbf{x}(n) = \left[x(n), x(n-1), \ldots, x(n-L+1)\right]^{T}, \quad \mathbf{h}(n) = \left[h_{0}(n), h_{1}(n), \ldots, h_{L-1}(n)\right]^{T},\]
v(n) is the measurement noise, and L is the length of the impulse response. We define the estimated error as
\[e(n) = d(n) - \mathbf{x}^{T}(n)\hat{\mathbf{h}}(n-1),\]
where \(\hat {\mathbf {h}}(n)\) is the vector of the adaptive filter's coefficients. Grouping the M most recent input vectors x(n) together gives the input signal matrix
\[\mathbf{X}(n) = \left[\mathbf{x}(n), \mathbf{x}(n-1), \ldots, \mathbf{x}(n-M+1)\right].\]
Therefore, the estimated error vector is
\[\mathbf{e}(n) = \mathbf{d}(n) - \mathbf{X}^{T}(n)\hat{\mathbf{h}}(n-1),\]
in which
\[\mathbf{d}(n) = \left[d(n), d(n-1), \ldots, d(n-M+1)\right]^{T},\]
where M is the projection order. PAPA updates the filter coefficients as follows [8]:
\[\hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n-1) + \mu\,\mathbf{G}(n-1)\mathbf{X}(n)\left[\mathbf{X}^{T}(n)\mathbf{G}(n-1)\mathbf{X}(n) + \delta\mathbf{I}_{M}\right]^{-1}\mathbf{e}(n),\]
in which μ is the step-size, δ is the regularization parameter, \(\mathbf{I}_{M}\) is the M×M identity matrix, and the proportionate step-size control matrix \(\mathbf{G}(n-1)=\mathrm{diag}\left\{g_{0}(n-1),\ldots,g_{L-1}(n-1)\right\}\) is defined through
\[g_{l}(n-1) = \frac{\gamma_{l}(n-1)}{\sum_{i=0}^{L-1}\gamma_{i}(n-1)}, \quad \gamma_{l}(n-1) = \max\left\{\rho\max\left[q, \mathrm{F}\left(\hat{h}_{0}(n-1)\right), \ldots, \mathrm{F}\left(\hat{h}_{L-1}(n-1)\right)\right], \mathrm{F}\left(\hat{h}_{l}(n-1)\right)\right\},\]
where \(\mathrm {F}(\hat {h}_{l})\) is specific to the algorithm, q prevents the filter coefficients \(\hat {h}_{l}(n-1)\) from stalling when \(\hat {\mathbf {h}}(0)=\mathbf {0}_{L\times 1}\) at initialization, and ρ prevents the coefficients from stalling when they are much smaller than the largest coefficient. The classical PAPA employs step-sizes proportional to the magnitude of the estimated impulse response [8]:
\[\mathrm{F}\left(\hat{h}_{l}\right) = \left|\hat{h}_{l}\right|.\]
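As a concrete illustration, one iteration of this recursion can be sketched in NumPy. This is a minimal sketch rather than the paper's implementation: it assumes the standard PNLMS-style normalization of the proportionate gains, the classical choice \(\mathrm{F}(\hat{h}_{l})=|\hat{h}_{l}|\), and it solves the M×M system directly (the MRAP variant of Section 3 avoids exactly this step).

```python
import numpy as np

def papa_update(h_hat, X, d, mu=0.2, delta=1e-2, rho=0.01, q=0.01):
    """One classical PAPA iteration (sketch). X is L-by-M with column m
    holding [x(n-m), ..., x(n-m-L+1)]; d holds [d(n), ..., d(n-M+1)]."""
    L, M = X.shape
    f = np.abs(h_hat)                        # F(h) = |h| for classical PAPA
    gamma = np.maximum(rho * max(q, f.max()), f)
    g = gamma / gamma.sum()                  # diagonal of G(n-1)
    e = d - X.T @ h_hat                      # a-priori error vector
    P = g[:, None] * X                       # G(n-1) X(n)
    # h(n) = h(n-1) + mu * P [X^T P + delta*I]^{-1} e(n)
    return h_hat + mu * P @ np.linalg.solve(X.T @ P + delta * np.eye(M), e)
```

Feeding the M most recent regressors per sample drives the misalignment down quickly for a sparse echo path.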
The mu-law PNLMS and mu-law PAPA algorithms proposed in [9–11] use the logarithm of the coefficient magnitudes rather than the magnitudes directly:
\[\mathrm{F}\left(\hat{h}_{l}\right) = \frac{\ln\left(1 + \sigma_{\mu}\left|\hat{h}_{l}\right|\right)}{\ln\left(1 + \sigma_{\mu}\right)},\]
in which \(\sigma_{\mu}\) is a positive parameter.
Based on the motivation that the l_0 norm can represent an even better measure of sparseness than the l_1 norm, improved PNLMS and PAPA algorithms based on an approximation of the l_0 norm (l_0-PNLMS) were proposed [13]:
\[\mathrm{F}\left(\hat{h}_{l}\right) = 1 - e^{-\sigma_{l_{0}}\left|\hat{h}_{l}\right|},\]
where \(\sigma_{l_{0}}\) is a positive parameter. The main disadvantage of the mu-law and l_0-norm PAPA algorithms is their heavy computational cost, caused by the L logarithmic or exponential operations. Therefore, a line segment was given to approximate the mu-law function [9]:
\[\mathrm{F}\left(\hat{h}_{l}\right) = \begin{cases} 200\left|\hat{h}_{l}\right|, & \left|\hat{h}_{l}\right| < 0.005, \\ 1, & \text{otherwise.} \end{cases}\]
It should be noted that, without loss of performance, the line segment is normalized here to have unit gain for \(\left|\hat{h}_{l}\right|\geq 0.005\), compared to the original one proposed in [9]. Meanwhile, the exponential form in (12) can be approximated by the first-order Taylor series expansion of the exponential function [12]:
\[e^{-\sigma_{l_{0}}\left|\hat{h}_{l}\right|} \approx \begin{cases} 1 - \sigma_{l_{0}}\left|\hat{h}_{l}\right|, & \left|\hat{h}_{l}\right| \leq 1/\sigma_{l_{0}}, \\ 0, & \text{otherwise.} \end{cases}\]
Then (12) becomes
\[\mathrm{F}\left(\hat{h}_{l}\right) = \begin{cases} \sigma_{l_{0}}\left|\hat{h}_{l}\right|, & \left|\hat{h}_{l}\right| \leq 1/\sigma_{l_{0}}, \\ 1, & \text{otherwise.} \end{cases}\]
It is interesting to see that the first-order Taylor series approximation of l_0 PAPA in (12) is actually the same as the line segment implementation of mu-law PAPA in (11) for \(\sigma_{l_{0}}=200\).
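The step-size rules just discussed can be compared numerically. The following is a sketch, assuming the functional forms reconstructed above (logarithmic mu-law, exponential l_0, the unit-gain line segment, and its Taylor counterpart); the exact forms in [9, 13] may differ in normalization.

```python
import numpy as np

def f_mulaw(h, sigma_mu=1000.0):
    # mu-law rule: logarithm of the coefficient magnitude, normalized to F(1) = 1
    return np.log(1.0 + sigma_mu * np.abs(h)) / np.log(1.0 + sigma_mu)

def f_l0(h, sigma_l0=200.0):
    # l_0-approximation rule: 1 - exp(-sigma * |h|)
    return 1.0 - np.exp(-sigma_l0 * np.abs(h))

def f_segment(h):
    # line-segment approximation of the mu-law curve, unit gain for |h| >= 0.005
    return np.where(np.abs(h) < 0.005, 200.0 * np.abs(h), 1.0)

def f_l0_taylor(h, sigma_l0=200.0):
    # first-order Taylor approximation of f_l0
    return np.where(np.abs(h) <= 1.0 / sigma_l0, sigma_l0 * np.abs(h), 1.0)
```

For \(\sigma_{l_0}=200\), `f_segment` and `f_l0_taylor` coincide exactly, which is the observation made above.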
3 The proposed SCRPAPA with MRAP
Based on the minimization of a convex target, the reweighted PAPA (RPAPA) is first derived from a new sparseness measure with low computational complexity. Then, the sparseness controlled RPAPA (SCRPAPA) is presented to improve the performance for both sparse and dispersive system identification. Finally, the SCRPAPA with memory and RAP (MRAP) is proposed by combining the memory of the coefficients with iterative RAP to further reduce the computational complexity.
3.1 The proposed RPAPA
The proportionate APA algorithm can be deduced from a basis pursuit perspective [22]:
\[\min \left\Vert\tilde{\mathbf{h}}(n)\right\Vert_{1} \quad \text{subject to} \quad \mathbf{d}(n) = \mathbf{X}^{T}(n)\hat{\mathbf{h}}(n),\]
where \(\tilde {\mathbf {h}}(n)\) is the correction component defined as
\[\tilde{\mathbf{h}}(n) = \hat{\mathbf{h}}(n) - \hat{\mathbf{h}}(n-1).\]
According to [24], the family of PAPA algorithms can be derived from the following target
where \(\mathbf{G}^{-1}(n)\) is the inverse of the proportionate matrix \(\mathbf{G}(n)\), which is also a diagonal matrix. If the optimization target in (17) is convex, the family of PAPA algorithms can be derived using Lagrange multipliers. It should be noted that, using the approximation in (18), the proposed formulation in (17) becomes the variable-metric formulation of [21], which is thus an approximation of the proposed formulation. The function \(\mathrm {G}(t), t\in \mathbb {R}\), should satisfy the following properties:

1) G(0)=0, G(t) is even and not identically zero;

2) G(t) is nondecreasing on [0,∞);

3) \(\frac {\mathrm {G}(t)}{t}\) is nonincreasing on (0,∞).
The above properties follow the requirements of the sparseness measures proposed in [25]. From the perspective of proportionate algorithms, the first two requirements are intuitive, since the step-size gains of a proportionate algorithm should be proportionate to the magnitudes of the filter's coefficients. The third property guarantees the convexity of the optimization target. PAPA, mu-law PAPA, and l_0 PAPA are all special cases of sparseness measures fulfilling all three properties. In this paper, considering the computational complexity, we propose the following reweighted PAPA:
\[\mathrm{F}\left(\hat{h}_{l}\right) = \frac{\left|\hat{h}_{l}\right|}{\left|\hat{h}_{l}\right| + \sigma_{r}},\]
where \(\sigma_{r}\) is a small positive constant.
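The three properties above are easy to check numerically for the proposed measure. In this sketch, the reweighted measure is assumed to take the form \(\mathrm{G}(t)=|t|/(|t|+\sigma_{r})\), consistent with the per-coefficient division counted in Section 4; that form is an assumption of this illustration.

```python
import numpy as np

# Assumed reweighted measure G(t) = |t| / (|t| + sigma_r)
def g_reweighted(t, sigma_r=0.01):
    return np.abs(t) / (np.abs(t) + sigma_r)

t = np.linspace(0.0, 1.0, 10001)
g = g_reweighted(t)
# 1) G(0) = 0, G is even and not identically zero
assert g[0] == 0.0 and np.allclose(g_reweighted(-t), g) and g.max() > 0
# 2) G is nondecreasing on [0, inf)
assert np.all(np.diff(g) >= 0)
# 3) G(t)/t is nonincreasing on (0, inf)
ratio = g[1:] / t[1:]
assert np.all(np.diff(ratio) <= 1e-12)
```

Property 3 is the one that keeps the optimization target convex, as required by the derivation above.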
The proposed reweighted metric is compared with PAPA, mu-law PAPA, and l_0 PAPA in Fig. 1. The σ parameters were \(\sigma_{\mu}=1000\), \(\sigma_{l_{0}}=50\), and \(\sigma_{r}=0.01\). These parameter values are recommended and widely used in the literature for each algorithm [9, 13]. It should be noted that the plots in [24] set the σ parameters so that all curves contain the point (0.9, 0.9). However, in actual applications, this parameter should be tuned to maximize performance. To facilitate the comparison of the different sparseness measures, they are instead normalized here to pass through the point (1, 1). Without loss of generality, it is assumed that the filter's coefficients are normalized and the maximum possible magnitude is 1. Therefore, it is convenient to compare the gain distributions of the different metrics with different σ parameters.
3.2 The proposed SCRPAPA
It should be noted that the reweighting factor \(\sigma_{r}\) in the proposed RPAPA (19) is related to the sparseness of the unknown system. It is straightforward to verify that if \(\sigma_{r}=0\), the reweighted PAPA simplifies to APA. If the system is sparser, \(\sigma_{r}\) should be relatively larger than \(\left|\hat{h}_{l}\right|\), which makes the algorithm behave more like PAPA. This agrees with the fact that we fully benefit from PNLMS only when the impulse response is close to a delta function [26]. Therefore, it is natural to take the sparseness of the impulse response into account. The sparseness of an impulse response can be estimated as [31]
\[\hat{\epsilon}(n) = \frac{L}{L-\sqrt{L}}\left(1 - \frac{\left\Vert\hat{\mathbf{h}}(n)\right\Vert_{1}}{\sqrt{L}\left\Vert\hat{\mathbf{h}}(n)\right\Vert_{2}}\right),\]
where L>1 is the length of the channel, and \(\Vert \hat {\mathbf {h}}(n)\Vert _{1}\) and \(\Vert \hat {\mathbf {h}}(n)\Vert _{2}\) are the l_1 norm and l_2 norm of \(\hat {\mathbf {h}}(n)\), respectively. The value of \(\hat {\epsilon }(n)\) lies between 0 and 1: for a sparse channel it is close to 1, and for a dispersive channel it is close to 0. Therefore, the SCRPAPA is
where \(\sigma_{max}\) is the maximum value used for sparse system identification. The plot of the reweighted metric for different values of σ is presented in Fig. 2. In a practical implementation, we would like to apply the APA algorithm to dispersive systems below a certain sparseness threshold. For example, the sparseness of the dispersive channel considered here is about 0.4, and a heuristic implementation that works well in the simulations is
where \(\epsilon_{min}=10^{-4}\) is a minimum sparseness value used to avoid division by zero when \(\hat {h}_{l}=0\).
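The sparseness estimate driving this control is a short computation. A sketch, assuming the standard measure of [25, 31] reproduced above:

```python
import numpy as np

def sparseness(h_hat):
    """Sparseness of an impulse response: ~1 for a delta, ~0 for a flat response."""
    L = h_hat.size
    l1 = np.abs(h_hat).sum()
    l2 = np.sqrt(np.sum(h_hat ** 2))
    return (L / (L - np.sqrt(L))) * (1.0 - l1 / (np.sqrt(L) * l2))
```

A delta impulse gives exactly 1 and a constant response gives exactly 0, which is what the thresholding logic above relies on.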
3.3 The proposed SCRPAPA with MRAP
The main computational burden of the family of PAPA algorithms is the matrix inversion in (5). A reduction in complexity can be achieved by using 5M DCD iterations, which require about 10M^2 additions [18]. Meanwhile, a sliding-window recursive least squares (SRLS) low-cost implementation of PAPA based on DCD is available whose cost does not depend on M; the SRLS implementation is only efficient when the projection order is very high (e.g., M=512) [19]. However, it is known that as the projection order increases, the convergence speed improves, but the steady-state error also increases.
Another way to avoid the matrix inversion altogether is to use the method of RAP [27]. RAP is also known in the literature as a data-reuse algorithm (see [28]). It has been shown in [29] that RAP is effectively the same as APA, except that the system of equations that is solved with a direct matrix inversion (DMI) in APA is solved iteratively in RAP [20]. The iterative PAPA algorithm proposed in [30] was made efficient by implementing it using RAP in [27]. RAP is an iterative approach to solving a system of M equations: it cycles through the M equations J times, performing an NLMS-like update on the coefficients for each equation. In this instance, the number of RAP iterations J is set to one. It should be noted that, by limiting J to one, the solution of the system of equations through RAP is approximate. However, the simulation results will demonstrate that this approximation works well, especially for relatively high projection orders. In each sample period, a new equation is added to the system of equations and the oldest equation is dropped. Thus, M RAP updates are performed on a given equation over M sample periods. The PAPA algorithm with RAP updates the coefficients as
where \(\mathbf{P}_{m}(n)\) is the m-th column of \(\mathbf{P}(n)\), defined as
\[\mathbf{P}_{m}(n) = \mathbf{g}(n-1)\odot\mathbf{x}(n-m),\]
in which \(\mathbf{g}(n-1)\) collects the diagonal elements of \(\mathbf{G}(n-1)\), the operation ⊙ denotes the Hadamard product, and m=0,1,…,M−1.
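The row-cycling structure of RAP can be sketched as a Kaczmarz-style sweep: each of the M equations receives an NLMS-like rank-one update, and the sweep is repeated J times. This sketch omits the proportionate weighting and the memory of the full algorithm, so it illustrates only the iteration pattern, not the paper's exact update.

```python
import numpy as np

def rap_pass(h_hat, X, d, mu=0.2, delta=1e-2, J=1):
    """J RAP sweeps over the M equations x_m^T h = d_m (columns of X)."""
    L, M = X.shape
    for _ in range(J):
        for m in range(M):            # cycle through the M rows (equations)
            x_m = X[:, m]
            e_m = d[m] - x_m @ h_hat  # residual of equation m
            h_hat = h_hat + mu * e_m * x_m / (x_m @ x_m + delta)
    return h_hat
```

With mu = 1 and delta = 0 each step is an exact projection onto one equation's hyperplane, and repeated sweeps solve a consistent system; with J = 1, as in this paper, the solution is approximate but cheap.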
The traditional PAPA requires M×L multiplications to calculate \(\mathbf{P}(n)\). In order to further reduce the computational complexity, we propose to apply the memory of the proportionate coefficients [17] to SCRPAPA. The matrix \(\mathbf{P}(n)\) in (4) can then be approximated as
\[\mathbf{P}^{\prime}(n) = \left[\mathbf{g}(n-1)\odot\mathbf{x}(n),\; \mathbf{P}^{\prime}_{1}(n-1)\right],\]
where \(\mathbf {P}^{\prime }_{1}(n-1)\) contains the first M−1 columns of \(\mathbf{P}^{\prime}(n-1)\). Meanwhile, we define
in which
and \(\mathbf {P}^{\prime }_{m}(n)\) is the m-th column of \(\mathbf{P}^{\prime}(n)\), defined as
Considering the time-shift property, the calculation of \(\mathbf{p}(n)\) reduces to
where \(\mathbf{p}_{-1}(n-1)\) contains the first M−1 values of \(\mathbf{p}(n-1)\). The proposed update for the PAPA with memory and RAP is
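The memory update amounts to forming one fresh column with the current gains and shifting the remembered columns, which is where the drop from ML to L multiplications per sample comes from. A sketch, with `update_memory_matrix` a hypothetical helper name:

```python
import numpy as np

def update_memory_matrix(P_prev, g, x_new):
    """Memory update in the spirit of [17]: the newest column uses the current
    proportionate gains g(n-1); the older columns keep their past gains."""
    new_col = g * x_new                                # L multiplications
    return np.column_stack([new_col, P_prev[:, :-1]])  # shift, no recompute
```

The older columns are taken as-is, so their gains are "remembered" from earlier samples, which is exactly the approximation the memory PAPA makes.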
As mentioned in [17], the proposed RPAPA with MRAP takes into account the “history” of the proportionate factors over the last M steps. Convergence and tracking become faster as the projection order increases. Meanwhile, combined with RAP, the computational complexity is significantly lower than that of MPAPA, since the direct matrix inversion is avoided and the memory is reused. The proposed SCRPAPA with MRAP algorithm is summarized in detail in Table 1.
4 Computational complexity
The computational complexity of the SCRPAPA with MRAP algorithm is compared with those of the traditional PAPA, MPAPA, RPAPA, and SCRPAPA in Table 2, in terms of the total number of additions (A), multiplications (M), divisions (D), comparisons (C), square roots (Sqrt), and direct matrix inversions (DMI) needed per algorithm iteration. All the algorithms require L absolute-value (|·|) operations for calculating the magnitudes of the filter's coefficients.
Compared with the traditional PAPA, MPAPA reduces the complexity of computing \(\mathbf{GX}\), but the calculation of \(\mathbf{X}^{T}\mathbf{P}^{\prime}\) still requires \(M^{2}L\) multiplications. In the proposed algorithm, due to the memory and the iterative RAP structure, only L multiplications are needed to update \(\mathbf{p}(n)\) instead.
More importantly, both the PAPA and MPAPA algorithms require an M×M direct matrix inversion, which is especially expensive for high projection orders. The combination of the memory and the iterative RAP structure not only avoids the M×M direct matrix inversion, but also reduces the computational complexity required for the calculation of both \(\mathbf{GX}\) and \(\mathbf{X}^{T}\mathbf{GX}\).
The additional computational complexity of the SCRPAPA with MRAP algorithm arises from the computation of the sparseness measure \(\hat {\epsilon }\). As in [31], given that \(L/(L-\sqrt {L})\) can be computed offline, the remaining l_1 and l_2 norms require an additional 2L additions and L multiplications. Furthermore, this sparseness measure can be reused in many other sparseness controlled algorithms, for example [31]. The calculation of F in (22) requires an additional L divisions, L+1 additions, one multiplication, and one comparison more than PAPA. The complexity of these divisions is much lower than that of the L exponential or logarithmic operations required by either the mu-law or the l_0 PAPA. Meanwhile, (22) also offers robustness for dispersive system identification.
5 Simulation results
The performance of the proposed SCRPAPA with MRAP was evaluated via simulations. Throughout the simulations, the length of the unknown system was L=512, and the adaptive filter had the same length. The sampling rate was 8 kHz. The parameters for each algorithm were δ=0.01/L, ρ=0.01, and q=0.01. The step-size for all the algorithms was set to μ=0.2.
The algorithms were tested using both white Gaussian noise (WGN) and colored noise as inputs. The colored input signals were generated by filtering WGN through a first-order system with a pole at 0.8. Independent WGN was added to the system output with a signal-to-noise ratio (SNR) of 30 dB.
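The input and noise generation described above can be sketched as follows; the coloring filter is the first-order all-pole system H(z) = 1/(1 − 0.8 z^{−1}), and the additive WGN is scaled for a 30 dB SNR.

```python
import numpy as np

rng = np.random.default_rng(0)
wgn = rng.standard_normal(80000)

# Color the WGN with H(z) = 1 / (1 - 0.8 z^{-1})  (pole at 0.8)
colored = np.empty_like(wgn)
state = 0.0
for n, w in enumerate(wgn):
    state = w + 0.8 * state
    colored[n] = state

# Independent WGN scaled to a 30 dB signal-to-noise ratio
snr_db = 30.0
noise = rng.standard_normal(colored.size)
noise *= np.sqrt(np.mean(colored**2) / np.mean(noise**2) * 10.0 ** (-snr_db / 10.0))
```

The lag-1 autocorrelation of the colored signal is close to the pole value, 0.8, which is what makes it a harder input than WGN for (NL)MS-type algorithms.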
Two impulse responses were used to verify the performance of the proposed SCRPAPA MRAP algorithm, as shown in Fig. 3. The first one, in Fig. 3a, is a sparse impulse response typical of network echo, with sparseness 0.92. Figure 3b shows a dispersive channel with sparseness 0.44. In order to demonstrate tracking ability, an echo path change was introduced by switching the impulse response from the sparse system in Fig. 3a to the dispersive one in Fig. 3b.
The convergence state of the adaptive filter is evaluated with the normalized misalignment, defined (in dB) as
\[20\log_{10}\left(\frac{\left\Vert\mathbf{h}-\hat{\mathbf{h}}(n)\right\Vert_{2}}{\left\Vert\mathbf{h}\right\Vert_{2}}\right).\]
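In code, the normalized misalignment is a one-liner:

```python
import numpy as np

def misalignment_db(h, h_hat):
    """Normalized misalignment 20*log10(||h - h_hat||_2 / ||h||_2) in dB."""
    return 20.0 * np.log10(np.linalg.norm(h - h_hat) / np.linalg.norm(h))
```

A value of −20 dB means the coefficient error norm is one tenth of the true response norm.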
5.1 The performance of the proposed RPAPA
The proposed reweighted PAPA in (19) was first compared to PAPA, mu-law PAPA, and l_0 PAPA. The parameters were \(\sigma_{\mu}=1000\), \(\sigma_{l_{0}}=200\), and \(\sigma_{r}=0.01\). The affine projection order was selected as M=2.
In the first simulation, shown in Fig. 4, the input signal was WGN. According to the results, the proposed RPAPA outperforms PAPA and performs similarly to the mu-law and l_0 PAPA, while having much lower computational complexity. In the second simulation, the input signal was colored, and similar results were obtained, as shown in Fig. 5.
5.2 The performance of the proposed SCRPAPA
To demonstrate the benefit of sparseness control, the proposed SCRPAPA algorithm was simulated using an echo path change from the sparse to the dispersive impulse response in Fig. 3. The SCRPAPA algorithm was compared with APA, PAPA, and the RPAPA algorithm above. The parameters were \(\sigma_{r}=0.01\) and \(\sigma_{max}=0.02\). The affine projection order was selected as M=2. In Fig. 6, the input signal was WGN. The proposed RPAPA and SCRPAPA algorithms had similar performance for sparse system identification, and both outperformed APA and PAPA. Meanwhile, due to the sparseness control, SCRPAPA outperformed RPAPA, as expected, for the dispersive system. The colored input was used in Fig. 7, and similar results were observed.
5.3 The performance of the proposed SCRPAPA with MRAP
An efficient implementation of the SCRPAPA algorithm was proposed by combining the memory of the filter's coefficients with RAP. The new SCRPAPA with MRAP algorithm significantly decreases the computational complexity. In this subsection, the performance of this efficient implementation was compared with APA, PAPA, and SCRPAPA through simulations.
In the first simulation, the WGN input was used. As shown in Fig. 8, SCRPAPA with MRAP worked as well as SCRPAPA for sparse system identification. However, for the dispersive system, the performance of SCRPAPA MRAP was worse than that of SCRPAPA and APA. This becomes more apparent for the colored input, as shown in Fig. 9. The cause is the relatively low projection order (M=2), at which the approximate MRAP solution converges more slowly than the direct matrix inversion. However, this drawback can be mitigated by increasing the projection order. Furthermore, the memory of the filter's coefficients also improves the performance as the projection order increases. We verified this point through simulations with M=32 for both the WGN input (see Fig. 10) and the colored input (see Fig. 11). It can be observed that SCRPAPA with MRAP works better than APA, PAPA, and SCRPAPA for sparse system identification. Meanwhile, the performance for the dispersive system with colored input is significantly improved as well.
6 Conclusion
A low complexity reweighted proportionate affine projection algorithm was proposed in this paper. The sparseness of the channel was taken into account to improve the performance for dispersive systems. In order to reduce the computational complexity, the direct matrix inversion of PAPA was replaced by an iterative implementation with RAP. Meanwhile, the memory of the filter's coefficients was exploited to improve the performance and further reduce the complexity for high projection orders. Simulation results demonstrate that the proposed sparseness controlled reweighted proportionate affine projection algorithm with memory and RAP outperforms the traditional PAPA, with much lower computational complexity than the mu-law and l_0 PAPA.
References
E Hänsler, G Schmidt, Acoustic Echo and Noise Control: a Practical Approach, vol. 40 (Wiley, Hoboken, New Jersey, 2005).
E Hänsler, G Schmidt, Topics in Acoustic Echo and Noise Control: Selected Methods for the Cancellation of Acoustical Echoes, the Reduction of Background Noise, and Speech Processing (Springer, Berlin, Heidelberg, 2006).
K Ozeki, T Umeda, An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electron. Commun. Japan (Part I: Commun.) 67(5), 19–27 (1984).
E Hänsler, GU Schmidt, Handsfree telephones–joint control of echo cancellation and postfiltering. Sig. Process. 80(11), 2295–2305 (2000).
A Mader, H Puder, GU Schmidt, Stepsize control for acoustic echo cancellation filters–an overview. Sig. Process. 80(9), 1697–1719 (2000).
DL Duttweiler, Proportionate normalized leastmeansquares adaptation in echo cancelers. Speech Audio Process. IEEE Trans. 8(5), 508–518 (2000).
K Wagner, M Doroslovacki, Proportionatetype Normalized Least Mean Square Algorithms (Wiley, Hoboken, New Jersey, 2013).
T Gansler, J Benesty, SL Gay, MM Sondhi, in Acoustics, Speech, and Signal Processing, 2000. ICASSP'00. Proceedings. 2000 IEEE International Conference On, 2. A robust proportionate affine projection algorithm for network echo cancellation (IEEE, Istanbul, 2000), pp. 793–796.
H Deng, M Doroslovacki, Improving convergence of the PNLMS algorithm for sparse impulse response identification. Signal Process. Lett. IEEE. 12(3), 181–184 (2005).
H Deng, M Doroslovacki, Proportionate adaptive algorithms for network echo cancellation. Signal Process. IEEE Trans. 54(5), 1794–1803 (2006).
L Liu, M Fukumoto, S Saiki, S Zhang, A variable step-size proportionate affine projection algorithm for identification of sparse impulse response. EURASIP J. Adv. Signal Process. 2009, 1–10 (2009). doi:10.1155/2009/150914.
Y Gu, J Jin, S Mei, l_0 norm constraint LMS algorithm for sparse system identification. Signal Process. Lett. IEEE. 16(9), 774–777 (2009).
C Paleologu, J Benesty, S Ciochina, in Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference On. An improved proportionate NLMS algorithm based on the l_0 norm (IEEE, Dallas, TX, 2010), pp. 309–312.
J Benesty, SL Gay, in Acoustics, Speech, and Signal Processing (ICASSP), 2002 IEEE International Conference On, 2. An improved PNLMS algorithm (IEEE, Orlando, FL, USA, 2002), pp. 1881–1884.
O Hoshuyama, R Goubran, A Sugiyama, in Acoustics, Speech, and Signal Processing, 2004. Proceedings (ICASSP'04). IEEE International Conference On, 4. A generalized proportionate variable step-size algorithm for fast changing acoustic environments (IEEE, Montreal, 2004), p. 161.
J Liu, SL Grant, Proportionate adaptive filtering for block-sparse system identification (2015). arXiv preprint arXiv:1508.04172.
C Paleologu, S Ciochină, J Benesty, An efficient proportionate affine projection algorithm for echo cancellation. Signal Process. Lett. IEEE. 17(2), 165–168 (2010).
C Stanciu, C Anghel, C Paleologu, J Benesty, F Albu, S Ciochina, in Signals, Circuits and Systems (ISSCS), 2011 10th International Symposium On. A proportionate affine projection algorithm using dichotomous coordinate descent iterations (Iasi, 2011), pp. 1–4.
Y Zakharov, VH Nascimento, Sliding-window RLS low-cost implementation of proportionate affine projection algorithms. Audio Speech Lang. Process. IEEE/ACM Trans. 22(12), 1815–1824 (2014).
SL Grant, P Shah, J Benesty, in Signal & Information Processing Association Annual Summit and Conference (APSIPA ASC), 2012 Asia-Pacific. An efficient iterative method for basis pursuit adaptive filters for sparse systems (IEEE, Hollywood, CA, 2012), pp. 1–4.
M Yukawa, I Yamada, A unified view of adaptive variable-metric projection algorithms. EURASIP J. Adv. Signal Process. 2009, 34 (2009).
J Benesty, C Paleologu, S Ciochină, Proportionate adaptive filters from a basis pursuit perspective. Signal Process. Lett. IEEE. 17(12), 985–988 (2010).
C Paleologu, J Benesty, in Circuits and Systems (ISCAS), 2012 IEEE International Symposium On. Proportionate affine projection algorithms from a basis pursuit perspective (IEEE, Seoul, 2012), pp. 2757–2760.
J Liu, SL Grant, in Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference On. A generalized proportionate adaptive algorithm based on convex optimization (IEEE, Xi'an, 2014), pp. 748–752.
R Gribonval, M Nielsen, Highly sparse representations from dictionaries are unique and independent of the sparseness measure. Appl. Comput. Harmon. Anal. 22(3), 335–355 (2007).
SL Gay, in Signals, Systems, Computers, 1998. Conference Record of the Thirty-Second Asilomar Conference On, 1. An efficient, fast converging adaptive filter for network echo cancellation (IEEE, Pacific Grove, CA, USA, 1998), pp. 394–398.
S Kaczmarz, Angenäherte Auflösung von Systemen linearer Gleichungen. Bulletin International de l'Académie Polonaise des Sciences et des Lettres. 35, 355–357 (1937).
J Benesty, T Gänsler, in Proc. Int. Workshop on Acoustic Echo and Noise Control (IWAENC). On datareuse adaptive algorithms (Kyoto, 2003).
SL Gay, Fast projection algorithms with application to voice echo cancellation. PhD thesis, New Brunswick, NJ, USA, 1994.
P Shah, SL Grant, J Benesty, in Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference On. On an iterative method for basis pursuit with application to echo cancellation with sparse impulse responses (IEEE, Kyoto, 2012), pp. 177–180.
P Loganathan, AW Khong, P Naylor, A class of sparsenesscontrolled algorithms for echo cancellation. Audio Speech Lang. Process. IEEE Trans. 17(8), 1591–1601 (2009).
Acknowledgements
This work was performed under the Wilkens Missouri Endowment. The authors would like to thank the Associate Editor and the reviewers for their valuable comments and suggestions.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ information
Steven L. Grant was formerly Steven L. Gay.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Liu, J., Grant, S.L. & Benesty, J. A low complexity reweighted proportionate affine projection algorithm with memory and row action projection. EURASIP J. Adv. Signal Process. 2015, 99 (2015). https://doi.org/10.1186/s13634-015-0280-4