A low complexity reweighted proportionate affine projection algorithm with memory and row action projection
Jianming Liu^{1}, Steven L. Grant^{1}, and Jacob Benesty^{2}
https://doi.org/10.1186/s13634-015-0280-4
© Liu et al. 2015
Received: 2 August 2015
Accepted: 5 November 2015
Published: 25 November 2015
Abstract
A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures and demonstrates performance similar to mu-law and l _{0} norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve performance for dispersive system identification. Meanwhile, the memory of the filter’s coefficients is combined with row action projection (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and performs similarly to l _{0} PAPA and mu-law PAPA in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l _{0} PAPA, which makes it very appealing for real-time implementation.
1 Introduction
Adaptive filtering has been studied for decades and has found wide application. The most common adaptive filter is the normalized least mean square (NLMS) algorithm, thanks to its simplicity and robustness [1]. In the 1990s, the affine projection algorithm (APA), a generalization of NLMS, was found to have better convergence than NLMS for colored input [2, 3]. Optimal step-size control of adaptive algorithms has been widely studied in order to improve their performance [4, 5]. The impulse responses in many applications, such as network echo cancellation (NEC), are sparse; that is, a small percentage of the impulse response components have significant magnitude while the rest are zero or small. To exploit this property, the family of proportionate algorithms was proposed to improve performance in such applications [2]. These algorithms include proportionate NLMS (PNLMS) [6, 7] and proportionate APA (PAPA) [8].
The idea behind proportionate algorithms is to update each coefficient of the filter independently of the others by adjusting the adaptation step size in proportion to the magnitude of the estimated filter coefficient [6]. In comparison to NLMS and APA, PNLMS and PAPA have very fast initial convergence and tracking when the echo path is sparse. However, the big coefficients converge very quickly in the initial period at the cost of dramatically slowing the convergence of the small coefficients afterwards. To combat this issue, the mu-law PNLMS (MPNLMS) and mu-law PAPA algorithms were proposed [9–11]. Furthermore, the l _{0} norm family of algorithms has recently drawn much attention for sparse system identification [12]. Accordingly, a new PNLMS algorithm based on the l _{0} norm, a better measure of sparseness than the l _{1} norm used in PNLMS, was proposed [13].
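The proportionate idea above can be sketched in a few lines. This is a generic PNLMS-style gain rule, not the exact update of any one algorithm in this paper; the floor parameters ρ and q take the values used in Section 5, and the function name is illustrative:

```python
import numpy as np

def proportionate_gains(h_hat, rho=0.01, q=0.01):
    """PNLMS-style step-size weighting: each coefficient's gain is
    proportional to its estimated magnitude, floored by rho and q so
    that small or inactive coefficients never stop adapting."""
    mag = np.abs(h_hat)
    r = np.maximum(rho * max(q, mag.max()), mag)
    return r / r.mean()  # normalize so the gains average to one


# A sparse estimate concentrates the adaptation gain on the big tap.
g = proportionate_gains(np.array([1.0, 0.0, 0.0, 0.0]))
```

The normalization keeps the overall adaptation energy comparable to NLMS while redistributing it toward the large coefficients.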
On the other hand, the PNLMS and PAPA algorithms converge much more slowly than the corresponding NLMS and APA algorithms when the impulse response is dispersive. In response, the improved PNLMS (IPNLMS) and improved PAPA (IPAPA) were proposed, introducing a controlled mixture of proportionate and non-proportionate adaptation [14, 15]. The IPNLMS and IPAPA algorithms perform very well for both sparse and non-sparse systems. Recently, the block-sparse PNLMS (BS-PNLMS) algorithm was also proposed to improve the performance of PNLMS when identifying block-sparse systems [16].
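The controlled mixture in IPNLMS [14] can be sketched as follows; this is the standard IPNLMS gain rule, with a small constant (my addition, named `eps`) guarding the all-zero start-up:

```python
import numpy as np

def ipnlms_gains(h_hat, alpha=0.0):
    """IPNLMS-style mixture of non-proportionate (uniform) and
    proportionate gains: alpha = -1 gives NLMS-like uniform gains,
    alpha -> 1 approaches fully proportionate adaptation."""
    L = h_hat.size
    l1 = np.abs(h_hat).sum()
    eps = 1e-8  # regularization for the all-zero estimate at start-up
    return (1 - alpha) / (2 * L) + (1 + alpha) * np.abs(h_hat) / (2 * l1 + eps)


# Balanced mixture on a sparse estimate: the active tap still dominates,
# but every tap keeps a uniform share of the gain.
g = ipnlms_gains(np.array([1.0, 0.0, 0.0, 0.0]), alpha=0.0)
```

Because the uniform term never vanishes, the filter retains NLMS-like behavior on dispersive systems, which is exactly the robustness the paragraph above describes.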
In order to reduce the computational complexity of PAPA, the memory-improved PAPA (MIPAPA) algorithm was proposed; it not only speeds up the convergence rate but also reduces computational complexity by taking into account the memory of the proportionate coefficients [17]. Dichotomous coordinate descent (DCD) iterations have previously been applied to the PAPA family of algorithms to implement the MIPAPA adaptive filter [18, 19]. Meanwhile, an iterative method based on PAPA with row action projection (RAP) has been shown to have good convergence properties with relatively low complexity [20].
In [21], proportionate adaptive filters were derived from a unified view of variable-metric projection algorithms. In addition, the PNLMS algorithm and PAPA can both be deduced from a basis pursuit perspective [22, 23]. A more general framework employing convex optimization was further proposed to derive PNLMS-type adaptive algorithms for sparse system identification [24]. Here, a family of PAPA algorithms is first derived based on convex optimization, of which PAPA, mu-law PAPA, and l _{0} PAPA are all special cases. Then, a reweighted PAPA is suggested in order to reduce computational complexity. Finally, an efficient implementation of PAPA is proposed based on RAP and memory PAPA.
This article is organized as follows. Section 2 reviews various PAPAs. Section 3 derives the proposed reweighted PAPA and presents an efficient memory implementation with RAP. Section 4 compares the computational complexity with that of PAPA, mu-law PAPA, and l _{0} PAPA. Section 5 presents simulation results for the proposed algorithm, and Section 6 concludes the paper.
2 Review of various PAPAs
in which σ _{ μ } is a positive parameter.
It is interesting that the first-order Taylor series approximation of l _{0} PAPA in (12) is actually identical to the line-segment implementation of mu-law PAPA in (11) for σ _{ l0}=200.
3 The proposed SCRPAPA with MRAP
Based on the minimization of a convex target, the reweighted PAPA (RPAPA) is first derived from a new sparseness measure with low computational complexity. Then, the sparseness-controlled RPAPA (SCRPAPA) is presented to improve performance for both sparse and dispersive system identification. Finally, the SCRPAPA with memory and RAP (MRAP) is proposed, combining the memory of the coefficients with iterative RAP to further reduce computational complexity.
3.1 The proposed RPAPA
1) G(0)=0, G(t) is even and not identically zero;
2) G(t) is non-decreasing on [0,∞);
3) \(\frac {\mathrm {G}(t)}{t}\) is non-increasing on (0,∞).
where σ _{ r } is a small positive constant.
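Assuming the reweighting takes the simple quotient form that appears in the algorithm table of Section 3.3, \(\mathrm{F}(\hat{h}_{l})=|\hat{h}_{l}|/(|\hat{h}_{l}|+\sigma_{r})\), it can be sketched as below (function name is illustrative). Note that it uses only divisions, which is the source of the complexity advantage over the mu-law and l _{0} weightings discussed in Section 4:

```python
import numpy as np

def reweighted_f(h_hat, sigma_r=0.01):
    """Reweighted proportionate measure F: near 0 for small taps,
    approaching 1 for large taps.  Only element-wise divisions are
    needed, no logarithms (mu-law) or exponentials (l0)."""
    mag = np.abs(h_hat)
    return mag / (mag + sigma_r)
```

The constant σ _{ r } sets the knee of the curve: taps well below σ _{ r } receive almost no proportionate emphasis, while taps well above it saturate toward equal emphasis.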
3.2 The proposed SCRPAPA
where \(\epsilon _{\text {min}}=10^{-4}\) is a minimum sparseness value used to avoid division by zero when \(\hat {h}_{l}=0\).
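The channel sparseness measure \(\hat{\epsilon}(n)\) used for this control (its formula appears in the algorithm table of Section 3.3) can be sketched as follows; the all-zero case, where the measure is undefined, is handled separately as an assumption of this sketch:

```python
import numpy as np

def sparseness(h_hat):
    """Sparseness measure based on the l1/l2 norm ratio: close to 1
    for a highly sparse vector, close to 0 for a uniformly
    dispersive one."""
    L = h_hat.size
    l2 = np.linalg.norm(h_hat)
    if l2 == 0.0:
        return 0.0  # measure is undefined for the all-zero vector
    l1 = np.abs(h_hat).sum()
    return (L / (L - np.sqrt(L))) * (1.0 - l1 / (np.sqrt(L) * l2))
```

A single active tap in a length-16 vector yields exactly 1, while a constant vector yields exactly 0, matching the two extremes the control is designed to distinguish.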
3.3 The proposed SCRPAPA with MRAP
However, the main computational cost of the PAPA family of algorithms is the matrix inversion in (5). A reduction in complexity can be achieved by using 5M DCD iterations, which require about 10M ^{2} additions [18]. Meanwhile, a sliding-window recursive least squares (SRLS) low-cost implementation of PAPA based on DCD has a cost that does not depend on M; however, the SRLS implementation is only efficient when the projection order is very high (e.g., M=512) [19]. It is known that as the projection order increases, convergence becomes faster, but the steady-state error also increases.
The SCRPAPA algorithm with MRAP
Initialization  \(\hat {\mathbf {h}}(0)=\mathbf {0}_{L \times 1},\ \rho =0.01,\ q=0.01,\ \delta =0.01/L,\)
  \(\sigma _{\text {max}}=0.02,\ \epsilon _{\text {min}}=10^{-4},\ \mu =0.2\)
Sparseness control
  \(\hat {\epsilon }(n)=\frac {L}{L-\sqrt {L}}\left (1-\frac {\Vert \hat {\mathbf {h}}(n-1)\Vert _{1}}{\sqrt {L}\,\Vert \hat {\mathbf {h}}(n-1)\Vert _{2}}\right)\)
  \(\mathrm {F}(\hat {h}_{l})=\frac {|\hat {h}_{l}|}{|\hat {h}_{l}|+{\text {max}}\{\hat {\epsilon }(n)-0.4,\epsilon _{\text {min}} \}\sigma _{\text {max}}}\)
  \(r_{l}=\text {max}\{\rho\, \text {max}\{q,\mathrm {F}(\hat {h}_{0}),\ldots,\mathrm {F}(\hat {h}_{L-1})\}, \mathrm {F}(\hat {h}_{l})\} \)
  \(g_{l}(n-1)=\frac {r_{l}(n-1)}{\frac {1}{L}\sum _{i=0}^{L-1}r_{i}(n-1)}\)
  \(\mathbf{g}(n-1)=[g_{0}(n-1),g_{1}(n-1),\ldots,g_{L-1}(n-1)]^{T}\)
Memory update
  \(\mathbf {P}^{\prime }(n)=[\mathbf {g}(n-1)\odot \mathbf {x}(n),\mathbf {P}^{\prime }_{-1}(n-1)]\)
  \(\mathbf {p}(n)=[\mathbf {x}^{T}(n)\mathbf {P}^{\prime }_{0}(n),\mathbf {p}_{-1}(n-1)]\)
Error output  \(e(n)=d(n)-\mathbf {x}^{T}(n)\hat {\mathbf {h}}(n-1)\)
RAP iteration
  \(\hat {\mathbf {h}}^{[0]}=\hat {\mathbf {h}}(n-1)\)
  for \(m=0,1,\ldots,M-1\)
    \(\alpha (m)=\mu /(p_{m}(n)+\delta)\)
    \(e^{[m]}=d(n-m)-\mathbf {x}^{T}(n-m)\hat {\mathbf {h}}^{[m]}\)
    \(\hat {\mathbf {h}}^{[m+1]}=\hat {\mathbf {h}}^{[m]}+\alpha (m)\mathbf {P}^{\prime }_{m}(n)e^{[m]}\)
  end
Filter update  \(\hat {\mathbf {h}}(n)=\hat {\mathbf {h}}^{[M]}\)
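The table above can be sketched as one adaptation cycle in Python. This is a sketch under the stated symbol conventions, not a verified reference implementation: the function name, the layout of the input matrix `X` (column m holding x(n−m)), and the handling of the all-zero estimate are my own choices:

```python
import numpy as np

def scrpapa_mrap_step(h_hat, X, d, P_prev, p_prev,
                      mu=0.2, rho=0.01, q=0.01, delta=1e-4,
                      sigma_max=0.02, eps_min=1e-4):
    """One SCRPAPA-MRAP cycle (sketch).  X is L-by-M with column m
    holding x(n-m); d holds d(n), ..., d(n-M+1); P_prev and p_prev
    carry the memory terms from the previous cycle."""
    L, M = X.shape
    # --- sparseness control ---
    l2 = np.linalg.norm(h_hat)
    eps_hat = 0.0 if l2 == 0 else \
        (L / (L - np.sqrt(L))) * (1 - np.abs(h_hat).sum() / (np.sqrt(L) * l2))
    sigma = max(eps_hat - 0.4, eps_min) * sigma_max
    mag = np.abs(h_hat)
    F = mag / (mag + sigma)
    r = np.maximum(rho * max(q, F.max()), F)
    g = r / r.mean()
    # --- memory update: only the newest column is recomputed ---
    P = np.column_stack([g * X[:, 0], P_prev[:, :M - 1]])
    p = np.concatenate([[X[:, 0] @ P[:, 0]], p_prev[:M - 1]])
    # --- row action projections instead of an MxM matrix inversion ---
    h = h_hat.copy()
    for m in range(M):
        e = d[m] - X[:, m] @ h
        h = h + (mu / (p[m] + delta)) * P[:, m] * e
    return h, P, p
```

The memory update line makes the complexity saving of Section 4 visible: of the M weighted columns, only g⊙x(n) is computed fresh each cycle, and the RAP loop touches one column at a time, so no M×M system is ever solved.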
4 Computational complexity
Computational complexity of the algorithms’ coefficient updates (A: additions, M.: multiplications, D: divisions, C: comparisons, Sqrt: square roots, DMI: direct matrix inversion)

Algorithm  A  M.  D  C  Sqrt  DMI
PAPA  (M^2+2M+1)L−M−1  (M^2+3M+1)L+2M^2+2  L  2L  0  Yes, M×M
MPAPA  (M^2+2M+1)L−M−1  (M^2+2M+2)L+2M^2+2  L  2L  0  Yes, M×M
RPAPA  (M^2+2M+2)L−M−1  (M^2+3M+1)L+2M^2+2  2L  2L  0  Yes, M×M
SCRPAPA  (M^2+2M+4)L−M−1  (M^2+3M+2)L+2M^2+5  2L+1  2L+1  1  Yes, M×M
SCRPAPA MRAP  (2M+5)L+M−2  (2M+3)L+M+5  2L+M+1  2L+1  1  No
Compared with traditional PAPA, MPAPA reduces the complexity of computing GX, but the calculation of X^T P′ still requires M^2 L multiplications. With the memory and iterative RAP structure, in contrast, only L multiplications are needed to update p(n).
More importantly, both the PAPA and MPAPA algorithms require an M×M direct matrix inversion, which is especially expensive for high projection orders. The combination of the memory and iterative RAP structure not only avoids this M×M direct matrix inversion but also reduces the computational complexity of calculating both GX and X^T GX.
The additional computational complexity of the SCRPAPA with MRAP algorithm arises from the computation of the sparseness measure \(\hat {\epsilon }\). As in [31], given that \(L/(L-\sqrt {L})\) can be computed offline, the remaining l _{1} and l _{2} norms require an additional 2L additions and L multiplications. Furthermore, this sparseness measure can be reused in many other sparseness-controlled algorithms, for example those of [31]. The calculation of F in (22) requires L divisions, L+1 additions, one multiplication, and one comparison more than PAPA. These divisions are much cheaper than the L logarithmic or exponential operations required by the mu-law and l _{0} PAPA, respectively. Meanwhile, (22) also provides robustness for dispersive system identification.
5 Simulation results
The performance of the proposed SCRPAPA with MRAP was evaluated via simulations. Throughout the simulations, the length of the unknown system was L=512, and the adaptive filter had the same length. The sampling rate was 8 kHz. The parameters for each algorithm were δ=0.01/L, ρ=0.01, and q=0.01. The step size for all algorithms was set to μ=0.2.
The algorithms were tested using both white Gaussian noise (WGN) and colored noise as inputs. The colored input signals were generated by filtering WGN through a first-order system with a pole at 0.8. Independent WGN was added to the system background at a signal-to-noise ratio (SNR) of 30 dB.
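The colored input described above is a first-order autoregressive process. A minimal generator (function name and seeding convention are illustrative) is:

```python
import numpy as np

def colored_input(n_samples, pole=0.8, rng=None):
    """White Gaussian noise filtered by the first-order all-pole
    system H(z) = 1 / (1 - pole * z^-1), i.e. x[i] = w[i] + pole * x[i-1]."""
    rng = np.random.default_rng(rng)
    w = rng.standard_normal(n_samples)
    x = np.empty(n_samples)
    acc = 0.0
    for i, wi in enumerate(w):
        acc = wi + pole * acc  # first-order recursion
        x[i] = acc
    return x
```

With a pole at 0.8, consecutive samples have correlation about 0.8, which is exactly the kind of colored input for which APA-type algorithms outperform NLMS.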
5.1 The performance of the proposed RPAPA
The proposed reweighted PAPA in (19) was first compared to PAPA, mu-law PAPA, and l _{0} PAPA. The parameters for the algorithms were σ _{ μ }=1000, σ _{ l0}=200, and σ _{ r }=0.01. The affine projection order was M=2.
5.2 The performance of the proposed SCRPAPA
5.3 The performance of the proposed SCRPAPA with MRAP
An efficient implementation of the SCRPAPA algorithm was obtained by combining the memory of the filter’s coefficients with RAP. The new SCRPAPA with MRAP algorithm significantly reduces computational complexity. In this subsection, the performance of this efficient implementation is compared with APA, PAPA, and SCRPAPA through simulations.
6 Conclusion
A low-complexity reweighted proportionate affine projection algorithm was proposed in this paper. The sparseness of the channel was taken into account to improve performance for dispersive systems. In order to reduce computational complexity, the direct matrix inversion of PAPA was replaced by iterative RAP. Meanwhile, the memory of the filter’s coefficients was exploited to improve performance and further reduce complexity for high projection orders. Simulation results demonstrate that the proposed sparseness-controlled reweighted proportionate affine projection algorithm with memory and RAP outperforms traditional PAPA, with much lower computational complexity than mu-law and l _{0} PAPA.
Declarations
Acknowledgements
This work was performed under the Wilkens Missouri Endowment. The authors would like to thank the Associate Editor and the reviewers for their valuable comments and suggestions.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
[1] E Hänsler, G Schmidt, Acoustic Echo and Noise Control: a Practical Approach, vol. 40 (Wiley, Hoboken, New Jersey, 2005).
[2] E Hänsler, G Schmidt, Topics in Acoustic Echo and Noise Control: Selected Methods for the Cancellation of Acoustical Echoes, the Reduction of Background Noise, and Speech Processing (Springer, Berlin, Heidelberg, 2006).
[3] K Ozeki, T Umeda, An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electron. Commun. Japan (Part I: Commun.) 67(5), 19–27 (1984).
[4] E Hänsler, GU Schmidt, Hands-free telephones – joint control of echo cancellation and postfiltering. Sig. Process. 80(11), 2295–2305 (2000).
[5] A Mader, H Puder, GU Schmidt, Step-size control for acoustic echo cancellation filters – an overview. Sig. Process. 80(9), 1697–1719 (2000).
[6] DL Duttweiler, Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Trans. Speech Audio Process. 8(5), 508–518 (2000).
[7] K Wagner, M Doroslovacki, Proportionate-type Normalized Least Mean Square Algorithms (Wiley, Hoboken, New Jersey, 2013).
[8] T Gansler, J Benesty, SL Gay, MM Sondhi, in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), vol. 2. A robust proportionate affine projection algorithm for network echo cancellation (IEEE, Istanbul, 2000), pp. 793–796.
[9] H Deng, M Doroslovacki, Improving convergence of the PNLMS algorithm for sparse impulse response identification. IEEE Signal Process. Lett. 12(3), 181–184 (2005).
[10] H Deng, M Doroslovacki, Proportionate adaptive algorithms for network echo cancellation. IEEE Trans. Signal Process. 54(5), 1794–1803 (2006).
[11] L Liu, M Fukumoto, S Saiki, S Zhang, A variable step-size proportionate affine projection algorithm for identification of sparse impulse response. EURASIP J. Adv. Signal Process. 2009, 1–10 (2009). doi:10.1155/2009/150914.
[12] Y Gu, J Jin, S Mei, l_0 norm constraint LMS algorithm for sparse system identification. IEEE Signal Process. Lett. 16(9), 774–777 (2009).
[13] C Paleologu, J Benesty, S Ciochina, in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP). An improved proportionate NLMS algorithm based on the l_0 norm (IEEE, Dallas, TX, 2010), pp. 309–312.
[14] J Benesty, SL Gay, in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), vol. 2. An improved PNLMS algorithm (IEEE, Orlando, FL, USA, 2002), pp. 1881–1884.
[15] O Hoshuyama, R Goubran, A Sugiyama, in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), vol. 4. A generalized proportionate variable step-size algorithm for fast changing acoustic environments (IEEE, Montreal, 2004), p. 161.
[16] J Liu, SL Grant, Proportionate adaptive filtering for block-sparse system identification (2015). arXiv preprint arXiv:1508.04172.
[17] C Paleologu, S Ciochină, J Benesty, An efficient proportionate affine projection algorithm for echo cancellation. IEEE Signal Process. Lett. 17(2), 165–168 (2010).
[18] C Stanciu, C Anghel, C Paleologu, J Benesty, F Albu, S Ciochina, in Proc. 10th Int. Symp. Signals, Circuits and Systems (ISSCS). A proportionate affine projection algorithm using dichotomous coordinate descent iterations (Iasi, 2011), pp. 1–4.
[19] Y Zakharov, VH Nascimento, Sliding-window RLS low-cost implementation of proportionate affine projection algorithms. IEEE/ACM Trans. Audio Speech Lang. Process. 22(12), 1815–1824 (2014).
[20] SL Grant, P Shah, J Benesty, in Proc. Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). An efficient iterative method for basis pursuit adaptive filters for sparse systems (IEEE, Hollywood, CA, 2012), pp. 1–4.
[21] M Yukawa, I Yamada, A unified view of adaptive variable-metric projection algorithms. EURASIP J. Adv. Signal Process. 2009, 34 (2009).
[22] J Benesty, C Paleologu, S Ciochina, Proportionate adaptive filters from a basis pursuit perspective. IEEE Signal Process. Lett. 17(12), 985–988 (2010).
[23] C Paleologu, J Benesty, in Proc. IEEE Int. Symp. Circuits and Systems (ISCAS). Proportionate affine projection algorithms from a basis pursuit perspective (IEEE, Seoul, 2012), pp. 2757–2760.
[24] J Liu, SL Grant, in Proc. IEEE China Summit & Int. Conf. Signal and Information Processing (ChinaSIP). A generalized proportionate adaptive algorithm based on convex optimization (IEEE, Xi’an, 2014), pp. 748–752.
[25] R Gribonval, M Nielsen, Highly sparse representations from dictionaries are unique and independent of the sparseness measure. Appl. Comput. Harmon. Anal. 22(3), 335–355 (2007).
[26] SL Gay, in Conf. Record of the Thirty-Second Asilomar Conf. Signals, Systems and Computers, vol. 1. An efficient, fast converging adaptive filter for network echo cancellation (IEEE, Pacific Grove, CA, USA, 1998), pp. 394–398.
[27] S Kaczmarz, Angenäherte Auflösung von Systemen linearer Gleichungen. Bulletin International de l’Académie Polonaise des Sciences et des Lettres. 35, 355–357 (1937).
[28] J Benesty, T Gänsler, in Proc. Int. Workshop on Acoustic Echo and Noise Control (IWAENC). On data-reuse adaptive algorithms (Kyoto, 2003).
[29] SL Gay, Fast projection algorithms with application to voice echo cancellation. PhD thesis, New Brunswick, NJ, USA, 1994.
[30] P Shah, SL Grant, J Benesty, in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP). On an iterative method for basis pursuit with application to echo cancellation with sparse impulse responses (IEEE, Kyoto, 2012), pp. 177–180.
[31] P Loganathan, AW Khong, P Naylor, A class of sparseness-controlled algorithms for echo cancellation. IEEE Trans. Audio Speech Lang. Process. 17(8), 1591–1601 (2009).