Open Access

A low complexity reweighted proportionate affine projection algorithm with memory and row action projection

EURASIP Journal on Advances in Signal Processing 2015, 2015:99

https://doi.org/10.1186/s13634-015-0280-4

Received: 2 August 2015

Accepted: 5 November 2015

Published: 25 November 2015

Abstract

A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures and demonstrates performance similar to the mu-law and l0-norm PAPA, but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter’s coefficients is combined with row action projection (RAP) to significantly reduce the computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and performs similarly to l0 PAPA and mu-law PAPA in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l0 PAPA, which makes it very appealing for real-time implementation.

Keywords

Proportionate affine projection algorithm; Sparse system identification; Row action projection; Adaptive filter

1 Introduction

Adaptive filtering has been studied for decades and has found wide areas of application. The most common adaptive filter is the normalized least mean square (NLMS) algorithm, owing to its simplicity and robustness [1]. In the 1990s, the affine projection algorithm (APA), a generalization of NLMS, was found to have better convergence than NLMS for colored input [2, 3]. Optimal step-size control of adaptive algorithms has been widely studied in order to improve their performance [4, 5]. The impulse responses in many applications, such as network echo cancellation (NEC), are sparse; that is, a small percentage of the impulse response components have significant magnitude while the rest are zero or small. To exploit this property, the family of proportionate algorithms was proposed to improve performance in such applications [2]. These algorithms include proportionate NLMS (PNLMS) [6, 7] and proportionate APA (PAPA) [8].

The idea behind proportionate algorithms is to update each coefficient of the filter independently of the others by adjusting the adaptation step size in proportion to the magnitude of the estimated filter coefficient [6]. In comparison to NLMS and APA, PNLMS and PAPA have very fast initial convergence and tracking when the echo path is sparse. However, the big coefficients converge very quickly (in the initial period) at the cost of dramatically slowing the convergence of the small coefficients (after the initial period). In order to combat this issue, the mu-law PNLMS (MPNLMS) and mu-law PAPA algorithms were proposed [9–11]. Furthermore, the l0-norm family of algorithms has recently drawn much attention for sparse system identification [12]. Along these lines, a new PNLMS algorithm based on the l0 norm was proposed, the l0 norm representing a better measure of sparseness than the l1 norm used in PNLMS [13].

On the other hand, the PNLMS and PAPA algorithms converge much slower than corresponding NLMS and APA algorithms when the impulse response is dispersive. In response, the improved PNLMS (IPNLMS) and improved PAPA (IPAPA) were proposed by introducing a controlled mixture of proportionate and non-proportionate adaptation [14, 15]. The IPNLMS and IPAPA algorithms perform very well for both sparse and non-sparse systems. Also, recently, the block-sparse PNLMS (BS-PNLMS) algorithm was proposed to improve the performance of PNLMS for identifying block-sparse systems [16].

In order to reduce the computational complexity of PAPA, the memory improved PAPA (MIPAPA) algorithm was proposed, which not only speeds up the convergence rate but also reduces computational complexity by taking into account the memory of the proportionate coefficients [17]. Dichotomous coordinate descent (DCD) iterations have previously been applied to the PAPA family of algorithms to implement the MIPAPA adaptive filter [18, 19]. Meanwhile, an iterative method based on PAPA with row action projection (RAP) has been shown to have good convergence properties with relatively low complexity [20].

In [21] the proportionate adaptive filter was derived from a unified view of variable-metric projection algorithms. In addition, the PNLMS algorithm and PAPA can both be deduced from a basis pursuit perspective [22, 23]. A more general framework was further proposed to derive PNLMS adaptive algorithms for sparse system identification, which employed convex optimization [24]. Here, a family of PAPA algorithms is first derived based on convex optimization, of which PAPA, mu-law PAPA, and l0 PAPA are all special cases. Then, a reweighted PAPA is suggested in order to reduce the computational complexity. Finally, an efficient implementation of PAPA is proposed based on RAP and memory PAPA.

The organization of this article is as follows. The review of various PAPAs is presented in Section 2. Section 3 derives the proposed reweighted PAPA and presents an efficient memory implementation with RAP. The computational complexity is compared with PAPA, mu-law PAPA and l 0 PAPA in Section 4. In Section 5, simulation results of the proposed algorithm are presented. The last section concludes the paper with remarks.

2 Review of various PAPAs

The input signal x(n) is filtered through the unknown system h(n) to be identified, producing the observed output signal d(n):
$$\begin{array}{@{}rcl@{}} d(n)=\mathbf{x}^{T}(n)\mathbf{h}(n)+v(n), \end{array} $$
(1)
where
$$\mathbf{x}(n)=\;[x(n),x(n-1),\ldots,x(n-L+1)]^{T}, $$
v(n) is the measurement noise, and L is the length of the impulse response. We define the estimation error as
$$\begin{array}{@{}rcl@{}} e(n)=d(n)-\mathbf{x}^{T}(n)\hat{\mathbf{h}}(n-1), \end{array} $$
(2)
where \(\hat {\mathbf {h}}(n)\) is the adaptive filter’s coefficients. Grouping the M most recent input vectors x(n) together gives the input signal matrix
$$\mathbf{X}(n)=\;[\mathbf{x}(n),\mathbf{x}(n-1),\ldots,\mathbf{x}(n-M+1)]. $$
Therefore, the estimated error vector is
$$\begin{array}{@{}rcl@{}} \mathbf{e}(n)=\mathbf{d}(n)-\mathbf{X}^{T}(n)\hat{\mathbf{h}}(n-1), \end{array} $$
(3)
in which
$$\mathbf{d}(n)=\;[d(n),d(n-1),\ldots,d(n-M+1)]^{T}, $$
$$\mathbf{e}(n)=\;[e(n),e(n-1),\ldots,e(n-M+1)]^{T}, $$
where M is the projection order. PAPA updates the filter coefficients as follows [8]:
$$\begin{array}{@{}rcl@{}} \mathbf{P}(n)=\mathbf{G}(n-1)\mathbf{X}(n), \end{array} $$
(4)
$$\begin{array}{@{}rcl@{}} \hat{\mathbf{h}}(n)=\hat{\mathbf{h}}(n-1)+\mu\mathbf{P}(n)(\mathbf{X}^{T}(n)\mathbf{P}(n)+\delta\mathbf{I}_{M})^{-1}\mathbf{e}(n), \end{array} $$
(5)
in which μ is the step size, δ is the regularization parameter, I_M is the M×M identity matrix, and the proportionate step-size control matrix G(n−1) is defined as
$$\begin{array}{@{}rcl@{}} \mathbf{G}(n-1) = \text{diag}\{\mathbf{g}(n-1)\}, \end{array} $$
(6)
$$\begin{array}{@{}rcl@{}} \mathbf{g}(n-1)=\;[g_{0}(n-1),g_{1}(n-1),\ldots,g_{L-1}(n-1)]^{T}, \end{array} $$
(7)
$$\begin{array}{@{}rcl@{}} g_{l}(n-1)=\frac{r_{l}(n-1)}{\frac{1}{L}\sum_{i=0}^{L-1}r_{i}(n-1)}, \end{array} $$
(8)
$$\begin{array}{@{}rcl@{}} r_{l}=\text{max}\{\rho \text{max}\{q,\mathrm{F}(|\hat{h}_{0}|),\ldots,\mathrm{F}(|\hat{h}_{L-1}|)\},\mathrm{F}(|\hat{h}_{l}|)\}, \end{array} $$
(9)
where \(\mathrm {F}(|\hat {h}_{l}|)\) is specific to the algorithm, q prevents the filter coefficients \(\hat {h}_{l}(n-1)\) from stalling when \(\hat {\mathbf {h}}(0)=\mathbf {0}_{L\times 1}\) at initialization and ρ prevents the coefficients from stalling when they are much smaller than the largest coefficient. The classical PAPA employs step-sizes that are proportional to the magnitude of the estimated impulse response as below [8]
$$\begin{array}{@{}rcl@{}} \mathrm{F}(|\hat{h}_{l}|)=|\hat{h}_{l}|. \end{array} $$
(10)
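As an illustration, the gain computation of Eqs. (6)–(9) can be sketched in a few lines of Python; the function name `prop_gains` and the toy coefficient vector are illustrative choices, not from the paper:

```python
import numpy as np

def prop_gains(h_hat, F, rho=0.01, q=0.01):
    """Proportionate step-size gains g_l of Eqs. (6)-(9).

    h_hat : current coefficient estimate (length L)
    F     : element-wise metric F(|h_l|), e.g. the identity for classical PAPA
    """
    f = F(np.abs(h_hat))
    # Eq. (9): floor each entry at rho * max(q, max_l F(|h_l|))
    r = np.maximum(rho * max(q, f.max()), f)
    # Eq. (8): normalize so the gains average to one
    return r / r.mean()

# Classical PAPA, Eq. (10): F(|h_l|) = |h_l|
g = prop_gains(np.array([1.0, 0.0, 0.0, 0.0]), F=lambda a: a)
```

With the classical metric, a single large coefficient receives a far larger step size than the near-zero ones, while the gains still average to one as enforced by Eq. (8).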
The mu-law PNLMS and mu-law PAPA algorithms proposed in [9–11] use the logarithm of the coefficient magnitudes rather than the magnitudes directly:
$$\begin{array}{@{}rcl@{}} \mathrm{F}(|\hat{h}_{l}|)=\ln(1+\sigma_{\mu}|\hat{h}_{l}|), \end{array} $$
(11)

in which σ_μ is a positive parameter.

Based on the motivation that the l0 norm can represent an even better measure of sparseness than the l1 norm, improved PNLMS and PAPA algorithms based on an approximation of the l0 norm (l0-PNLMS) were proposed [13]:
$$\begin{array}{@{}rcl@{}} \mathrm{F}(|\hat{h}_{l}|)=1-e^{-\sigma_{l0}|\hat{h}_{l}|}, \end{array} $$
(12)
where σ_l0 is a positive parameter. The main disadvantage of the mu-law and l0-norm PAPA algorithms is their heavy computational cost, caused by the L logarithmic or exponential operations. Therefore, a line segment was proposed to approximate the mu-law function [9], where
$$\begin{array}{@{}rcl@{}} \mathrm{F}(|\hat{h}_{l}|)=\left\{ \begin{array}{rcl} 200|\hat{h}_{l}|,&& |\hat{h}_{l}|<0.005;\\ 1, && otherwise. \end{array} \right. \end{array} $$
(13)
It should be noted that, without loss of performance, the line segment was normalized to be of unit gain for \(|\hat {h}_{l}|\geq 0.005\), compared to the original one proposed in [9]. Meanwhile, the exponential form in (12) can be approximated by the first-order Taylor series expansion of the exponential function [12]
$$\begin{array}{@{}rcl@{}} e^{-\sigma_{l0}|\hat{h}_{l}|}\approx\left\{ \begin{array}{rcl} 1-\sigma_{l0}|\hat{h}_{l}|,&& |\hat{h}_{l}|<\frac{1}{\sigma_{l0}};\\ 0, && \text{otherwise}. \end{array} \right. \end{array} $$
(14)
Then (12) becomes
$$\begin{array}{@{}rcl@{}} \mathrm{F}(|\hat{h}_{l}|)=\left\{ \begin{array}{rcl} \sigma_{l0}|\hat{h}_{l}|,&& |\hat{h}_{l}|<\frac{1}{\sigma_{l0}};\\ 1, && otherwise. \end{array} \right. \end{array} $$
(15)

It is interesting to see that the first-order Taylor series approximation (15) of the l0 PAPA metric (12) is identical to the line-segment implementation (13) of the mu-law metric (11) when σ_l0 = 200.
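The equivalence noted above is easy to check numerically. The following sketch (variable names are illustrative) evaluates the metrics of Eqs. (11)–(15) on a grid and confirms that the line segment (13) and the Taylor-approximated l0 metric (15) coincide for σ_l0 = 200:

```python
import numpy as np

mu_law    = lambda a, s=1000: np.log(1 + s * a)              # Eq. (11)
l0_metric = lambda a, s=200: 1 - np.exp(-s * a)              # Eq. (12)
segment   = lambda a: np.where(a < 0.005, 200 * a, 1.0)      # Eq. (13)
l0_taylor = lambda a, s=200: np.where(a < 1 / s, s * a, 1.0) # Eq. (15)

a = np.linspace(0.0, 0.02, 101)
# For sigma_l0 = 200, the breakpoint 1/sigma_l0 equals 0.005 and the slopes
# match, so (13) and (15) are the same function.
max_gap = np.abs(segment(a) - l0_taylor(a)).max()
```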

3 The proposed SC-RPAPA with MRAP

Based on the minimization of a convex target, the reweighted PAPA (RPAPA) is first derived from a new sparseness measure with low computational complexity. Next, the sparseness controlled RPAPA (SC-RPAPA) is presented to improve the performance for both sparse and dispersive system identification. Finally, the SC-RPAPA with memory and RAP (MRAP) is proposed by combining the memory of the coefficients with iterative RAP to further reduce the computational complexity.

3.1 The proposed RPAPA

The proportionate APA algorithm can be deduced from a basis pursuit perspective [22]
$$\begin{array}{@{}rcl@{}} \begin{array}{rcl} \text{min} &&\Vert\tilde{\mathbf{h}}\Vert_{1}\\ \mathrm{subject\ to} &&\mathbf{d}(n)=\mathbf{X}^{T}\tilde{\mathbf{h}}(n), \end{array} \end{array} $$
(16)
where \(\tilde {\mathbf {h}}(n)\) is the correction component defined as
$$\tilde{\mathbf{h}}(n)=\mathbf{G}(n)\mathbf{X}(n)[\mathbf{X}^{T}(n)\mathbf{G}(n)\mathbf{X}(n)]^{-1}\mathbf{d}(n). $$
According to [24], the family of PAPA algorithms can be derived from the following target
$$\begin{array}{@{}rcl@{}} \begin{aligned} \text{min}\quad \mathrm{J}(\tilde{\mathbf{h}})&=\int\mathbf{G}^{-1}(n)\tilde{\mathbf{h}}(n)\mathrm{d}\tilde{\mathbf{h}}\\ \mathrm{subject\ to}\quad \mathbf{d}(n)&=\mathbf{X}^{T}\tilde{\mathbf{h}}(n), \end{aligned} \end{array} $$
(17)
where G^{−1}(n) is the inverse of the proportionate matrix G(n), which is also a diagonal matrix. If the optimization target in (17) is convex, the family of PAPA algorithms can be derived using Lagrange multipliers. It should be noted that, using the approximation
$$\begin{array}{@{}rcl@{}} \int\mathbf{G}^{-1}(n)\tilde{\mathbf{h}}(n)\mathrm{d}\tilde{\mathbf{h}}\approx\frac{1}{2}\tilde{\mathbf{h}}^{T}(n)\mathbf{G}^{-1}(n)\tilde{\mathbf{h}}(n), \end{array} $$
(18)
the proposed formulation in (17) becomes the variable-metric projection of [21], which is an approximation of the proposed formulation. The function \(\mathrm {G}(t), t\in \mathbb {R}\) should satisfy the following properties:
  • 1) G(0)=0, G(t) is even and not identically zero;

  • 2) G(t) is non-decreasing on [0,∞);

  • 3) \(\frac {\mathrm {G}(t)}{t}\) is non-increasing on (0,∞).

The above properties follow the requirements of the sparseness measure proposed in [25]. From the perspective of proportionate algorithms, the first two requirements are intuitive, since the step sizes should be proportionate to the magnitudes of the filter’s coefficients. The third property guarantees the convexity of the optimization target. PAPA, mu-law PAPA, and l0 PAPA all correspond to sparseness measures fulfilling all three properties. In this paper, considering the computational complexity, we propose the following reweighted metric:
$$\begin{array}{@{}rcl@{}} \mathrm{F}\left(\left|\hat{h}_{l}\right|\right)=\frac{\left|\hat{h}_{l}\right|}{\left|\hat{h}_{l}\right|+\sigma_{r}}, \end{array} $$
(19)

where σ_r is a small positive constant.
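A minimal sketch of the proposed metric (19), using the σ_r value adopted later in the simulations (the function name is an illustrative choice):

```python
import numpy as np

def reweighted(a, sigma_r=0.01):
    """Reweighted metric of Eq. (19): one addition and one division per
    coefficient, instead of a logarithm (mu-law) or an exponential (l0)."""
    return a / (a + sigma_r)

a = np.array([0.0, 0.001, 0.01, 0.1, 1.0])
w = reweighted(a)   # monotonically increasing, saturating toward 1
```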

The proposed reweighted metric is compared with PAPA, mu-law PAPA, and l0 PAPA in Fig. 1. The σ parameters were σ_μ = 1000, σ_l0 = 50, and σ_r = 0.01; these values were recommended and widely used in the literature for each algorithm [9, 13]. It should be noted that the plots in [24] set the σ parameters so that all curves contain the point (0.9, 0.9). In actual applications, however, this parameter should be tuned to maximize performance. To facilitate the comparison of the different sparseness measures, they are instead normalized here to pass through the point (1, 1). Without loss of generality, it is assumed that the filter’s coefficients are normalized so that the maximum possible magnitude is 1. Therefore, it is convenient to compare the gain distributions of the different metrics with different σ parameters.
Fig. 1

Comparison of the different metrics

3.2 The proposed SC-RPAPA

It should be noted that the reweighting factor σ_r in the proposed RPAPA (19) is related to the sparseness of the unknown system. It is straightforward to verify that if σ_r = 0, the reweighted PAPA simplifies to APA. If the system is more sparse, σ_r should be relatively larger than \(\left |\hat {h}_{l}\right |\), which makes the algorithm behave more like PAPA. This agrees with the fact that we fully benefit from PNLMS only when the impulse response is close to a delta function [26]. Therefore, it is natural to take the sparseness of the impulse response into account. The sparseness of an impulse response can be estimated as
$$\begin{array}{@{}rcl@{}} \hat{\epsilon}(n)=\frac{L}{L-\sqrt{L}}\left(1-\frac{\Vert\hat{\mathbf{h}}(n)\Vert_{1}}{\sqrt{L}\Vert\hat{\mathbf{h}}(n)\Vert_{2}}\right), \end{array} $$
(20)
where L>1 is the length of the channel, and \(\Vert \hat {\mathbf {h}}(n)\Vert _{1}\) and \(\Vert \hat {\mathbf {h}}(n)\Vert _{2}\) are the l1 norm and l2 norm of \(\hat {\mathbf {h}}(n)\), respectively. The value of \(\hat {\epsilon }(n)\) lies between 0 and 1: for a sparse channel it is close to 1, and for a dispersive channel it is close to 0. Therefore, the SC-RPAPA metric is
$$\begin{array}{@{}rcl@{}} \mathrm{F}\left(\left|\hat{h}_{l}\right|\right)=\frac{\left|\hat{h}_{l}\right|}{\left|\hat{h}_{l}\right|+\hat{\epsilon}(n)\sigma_{\text{max}}}, \end{array} $$
(21)
where σ_max is the maximum value, used for sparse system identification. The reweighted metric for different values of σ is plotted in Fig. 2. In a practical implementation, we would like to fall back to the APA algorithm for dispersive systems below a certain sparseness threshold. For example, the sparseness of the dispersive channel considered later is about 0.4, and a heuristic implementation that works well in the simulations is
$$\begin{array}{@{}rcl@{}} \mathrm{F}\left(\left|\hat{h}_{l}\right|\right)=\frac{\left|\hat{h}_{l}\right|}{\left|\hat{h}_{l}\right|+\text{max}\{\hat{\epsilon}(n)-0.4,\epsilon_{\text{min}}\}\sigma_{\text{max}}}, \end{array} $$
(22)
Fig. 2

Reweighted metric with different σ parameters

where ε_min = 10^{−4} is a minimum sparseness value that avoids division by zero when \(\hat {h}_{l}=0\).
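The sparseness estimate (20) and the heuristic sparseness-controlled metric (22) can be sketched as follows; the function names are illustrative, while the constants are the ones given in the text:

```python
import numpy as np

def sparseness(h):
    """Sparseness measure of Eq. (20): close to 1 for a sparse channel,
    close to 0 for a dispersive one."""
    L = len(h)
    l1, l2 = np.abs(h).sum(), np.linalg.norm(h)
    return L / (L - np.sqrt(L)) * (1 - l1 / (np.sqrt(L) * l2))

def sc_metric(h_hat, sigma_max=0.02, eps_min=1e-4):
    """Sparseness-controlled reweighted metric of Eq. (22)."""
    sigma = max(sparseness(h_hat) - 0.4, eps_min) * sigma_max
    a = np.abs(h_hat)
    return a / (a + sigma)

# A delta-like channel is maximally sparse (measure = 1); a flat channel
# has measure 0, so sigma collapses to eps_min*sigma_max and the metric
# becomes nearly uniform, i.e., APA-like behavior.
delta = np.zeros(512)
delta[100] = 1.0
```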

3.3 The proposed SC-RPAPA with MRAP

However, the main computational burden of the family of PAPA algorithms is the matrix inversion in (5). A reduction in complexity is achieved by using 5M DCD iterations, requiring about 10M² additions [18]. Meanwhile, a sliding-window recursive least squares (SRLS) low-cost implementation of PAPA based on DCD has a complexity that does not depend on M; however, the SRLS implementation is only efficient when the projection order is very high (e.g., M=512) [19]. It is known that increasing the projection order speeds up convergence but also increases the steady-state error.

Another way to avoid the matrix inversion altogether is to use the method of RAP [27]. RAP is also known in the literature as a data reuse algorithm (see [28]). It has been shown in [29] that RAP is effectively the same as APA, except that the system of equations that is solved with a direct matrix inversion (DMI) in APA is solved iteratively in RAP [20]. The iterative PAPA algorithm proposed in [30] was made efficient by implementing it using RAP in [27]. RAP is an iterative approach to solving a system of M equations: it cycles through the M equations J times, performing an NLMS-like update on the coefficients for each equation. In this instance, the number of RAP iterations J is set to one. It should be noted that, by limiting J to one, the solution of the system of equations through RAP is approximate. However, the simulation results will demonstrate that this approximation works well, especially for relatively high projection orders. In each sample period, a new equation is added to the system and the oldest equation is dropped; thus, M RAP updates are performed on a given equation over M sample periods. The PAPA algorithm with RAP updates the coefficients as follows:
$$\begin{array}{lll} Initialize&\hat{\mathbf{h}}^{[0]}=\hat{\mathbf{h}}(n-1)\\ Loop & m=0,1,\ldots,M-1 \\ &\alpha(m)=\mu/(\mathbf{x}^{T}(n-m)\mathbf{P}_{m}(n)+\delta)\\ & e^{[m]}=d(n-m)-\mathbf{x}^{T}(n-m)\hat{\mathbf{h}}^{[m]}\\ & \hat{\mathbf{h}}^{[m+1]}=\hat{\mathbf{h}}^{[m]}+\alpha(m)\mathbf{P}_{m}(n)e^{[m]}\\ & m=m+1\\ Update&\hat{\mathbf{h}}(n)=\hat{\mathbf{h}}^{[M]}\\ \end{array} $$
where P m (n) is the m th column of P(n) defined as
$$\mathbf{P}_{m}(n)=\mathbf{g}(n-1)\odot\mathbf{x}(n-m), $$
in which the operator ⊙ denotes the Hadamard (element-wise) product and m=0,1,…,M−1.
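The RAP loop above can be sketched in Python as one sample-period update (a single sweep, J = 1, as in the text; the function name and the identification experiment below are illustrative):

```python
import numpy as np

def papa_rap_update(h_hat, X, d, g, mu=0.2, delta=1e-4):
    """One sample-period PAPA update via row action projection (RAP).

    Instead of the M x M matrix inversion in Eq. (5), the M equations are
    swept once with NLMS-like row updates.

    X : L x M matrix [x(n), x(n-1), ..., x(n-M+1)]
    d : corresponding desired samples [d(n), ..., d(n-M+1)]
    g : proportionate gain vector g(n-1) of length L
    """
    h = h_hat.copy()
    for m in range(X.shape[1]):
        x_m = X[:, m]
        p_m = g * x_m                      # m-th column of P(n), Eq. (4)
        alpha = mu / (x_m @ p_m + delta)   # per-row normalization
        e_m = d[m] - x_m @ h               # a priori error for equation m
        h = h + alpha * p_m * e_m          # NLMS-like row action update
    return h
```

With uniform gains g = 1 this reduces to an APA solved by row actions, which is the sense in which RAP replaces the direct matrix inversion of (5).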
The traditional PAPA requires M×L multiplications to calculate P(n). In order to further reduce the computational complexity, we propose to apply the memory of the proportionate coefficients [17] to SC-RPAPA. The matrix P(n) in (4) can then be approximated as P′(n):
$$\begin{array}{@{}rcl@{}} \mathbf{P}^{\prime}(n)=\;[\mathbf{g}(n-1)\odot\mathbf{x}(n),\mathbf{P}^{\prime}_{-1}(n-1)], \end{array} $$
(23)
where \(\mathbf {P}^{\prime }_{-1}(n-1)\) contains the first M−1 columns of P′(n−1). Meanwhile, we define
$$\mathbf{p}(n)=\;[p_{0}(n),p_{1}(n),\ldots,p_{M-1}(n)], $$
in which
$$p_{m}(n)=\mathbf{x}^{T}(n-m)\mathbf{P}^{\prime}_{m}(n), $$
and \(\mathbf {P}^{\prime }_{m}(n)\) is the m-th column of P′(n), defined as
$$\mathbf{P}^{\prime}_{m}(n)=\mathbf{g}(n-m-1)\odot\mathbf{x}(n-m). $$
Considering the time-shift property, p(n) can be computed recursively as
$$\begin{array}{@{}rcl@{}} \mathbf{p}(n)=\;[\mathbf{x}^{T}(n)\mathbf{P}^{\prime}_{0}(n),\mathbf{p}_{-1}(n-1)], \end{array} $$
(24)
where p −1(n−1) contains the first M−1 values of p(n−1). The proposed update for the PAPA with memory and RAP is
$$\begin{array}{lll} Initialize&\hat{\mathbf{h}}^{[0]}=\hat{\mathbf{h}}(n-1)\\ Loop & m=0,1,\ldots,M-1 \\ &\alpha(m)=\mu/(p_{m}(n)+\delta)\\ & e^{[m]}=d(n-m)-\mathbf{x}^{T}(n-m)\hat{\mathbf{h}}^{[m]}\\ & \hat{\mathbf{h}}^{[m+1]}=\hat{\mathbf{h}}^{[m]}+\alpha(m)\mathbf{P}^{\prime}_{m}(n)e^{[m]}\\ & m=m+1\\ Update&\hat{\mathbf{h}}(n)=\hat{\mathbf{h}}^{[M]}\\ \end{array} $$
As mentioned in [17], the proposed RPAPA with MRAP takes into account the “history” of the proportionate factors over the last M steps. Convergence and tracking become faster as the projection order increases. Meanwhile, combined with RAP, the computational complexity is significantly lower than that of MPAPA, through avoiding the direct matrix inversion and exploiting the memory. The proposed SC-RPAPA with MRAP algorithm is summarized in detail in Table 1.
Table 1

The SC-RPAPA algorithm with MRAP

Initialization:
\(\hat {\mathbf {h}}(0)=\mathbf {0}_{L \times 1},\ \rho =0.01,\ q=0.01,\ \delta =0.01/L\)
\(\sigma _{\text {max}}=0.02,\ \epsilon _{\text {min}}=10^{-4},\ \mu =0.2\)

Sparseness control:
\(\hat {\epsilon }(n)=\frac {L}{L-\sqrt {L}}\left (1-\frac {\Vert \hat {\mathbf {h}}(n-1)\Vert _{1}}{\sqrt {L}\Vert \hat {\mathbf {h}}(n-1)\Vert _{2}}\right)\)
\(\mathrm {F}(|\hat {h}_{l}|)=\frac {|\hat {h}_{l}|}{|\hat {h}_{l}|+{\text {max}}\{\hat {\epsilon }(n)-0.4,\epsilon _{\text {min}} \}\sigma _{\text {max}}}\)
\(r_{l}=\text {max}\{\rho \text {max}\{q,\mathrm {F}(|\hat {h}_{0}|),\ldots,\mathrm {F}(|\hat {h}_{L-1}|)\}, \mathrm {F}(|\hat {h}_{l}|)\}\)
\(g_{l}(n-1)=\frac {r_{l}(n-1)}{\frac {1}{L}\sum _{i=0}^{L-1}r_{i}(n-1)}\)
\(\mathbf {g}(n-1)=[g_{0}(n-1),g_{1}(n-1),\ldots,g_{L-1}(n-1)]^{T}\)

Memory update:
\(\mathbf {P}^{\prime }(n)=[\mathbf {g}(n-1)\odot \mathbf {x}(n),\mathbf {P}^{\prime }_{-1}(n-1)]\)
\(\mathbf {p}(n)=[\mathbf {x}^{T}(n)\mathbf {P}^{\prime }_{0}(n),\mathbf {p}_{-1}(n-1)]\)

Error output:
\(e(n)=d(n)-\mathbf {x}^{T}(n)\hat {\mathbf {h}}(n-1)\)

RAP iteration:
\(\hat {\mathbf {h}}^{[0]}=\hat {\mathbf {h}}(n-1)\)
for m=0,1,…,M−1
  \(\alpha (m)=\mu /(p_{m}(n)+\delta)\)
  \(e^{[m]}=d(n-m)-\mathbf {x}^{T}(n-m)\hat {\mathbf {h}}^{[m]}\)
  \(\hat {\mathbf {h}}^{[m+1]}=\hat {\mathbf {h}}^{[m]}+\alpha (m)\mathbf {P}^{\prime }_{m}(n)e^{[m]}\)

Filter update:
\(\hat {\mathbf {h}}(n)=\hat {\mathbf {h}}^{[M]}\)
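Putting the pieces of Table 1 together, a compact and purely illustrative Python reimplementation of one SC-RPAPA MRAP update might look as follows; the class name and buffer handling are implementation choices, not from the paper:

```python
import numpy as np

class ScRpapaMrap:
    """Sketch of the SC-RPAPA update with memory and RAP (Table 1).

    Parameter defaults follow the table; this is an illustrative
    reimplementation, not the authors' reference code.
    """

    def __init__(self, L, M, mu=0.2, delta=None, rho=0.01, q=0.01,
                 sigma_max=0.02, eps_min=1e-4):
        self.L, self.M, self.mu = L, M, mu
        self.delta = 0.01 / L if delta is None else delta
        self.rho, self.q = rho, q
        self.sigma_max, self.eps_min = sigma_max, eps_min
        self.h = np.zeros(L)            # filter estimate h_hat
        self.P = np.zeros((L, M))       # memory of proportionate columns P'(n)
        self.p = np.zeros(M)            # normalization terms p(n), Eq. (24)
        self.xbuf = np.zeros(L + M)     # recent input samples, newest first

    def _gains(self):
        """Sparseness-controlled gains: Eqs. (20), (22), (8), (9)."""
        a = np.abs(self.h)
        l2 = np.linalg.norm(a)
        if l2 == 0.0:
            eps_hat = 1.0               # empty filter: treat as fully sparse
        else:
            sL = np.sqrt(self.L)
            eps_hat = self.L / (self.L - sL) * (1 - a.sum() / (sL * l2))
        sigma = max(eps_hat - 0.4, self.eps_min) * self.sigma_max
        f = a / (a + sigma)                                  # Eq. (22)
        r = np.maximum(self.rho * max(self.q, f.max()), f)   # Eq. (9)
        return r / r.mean()                                  # Eq. (8)

    def update(self, x_new, d):
        """Consume one input sample; d[m] = d(n-m), the M recent outputs."""
        self.xbuf = np.roll(self.xbuf, 1)
        self.xbuf[0] = x_new
        x0 = self.xbuf[:self.L]                              # x(n)
        g = self._gains()
        # Memory updates, Eqs. (23)-(24): shift old columns, prepend new one
        self.P = np.column_stack([g * x0, self.P[:, :-1]])
        self.p = np.concatenate([[x0 @ self.P[:, 0]], self.p[:-1]])
        # Single RAP sweep (J = 1)
        h = self.h
        for m in range(self.M):
            x_m = self.xbuf[m:m + self.L]                    # x(n-m)
            alpha = self.mu / (self.p[m] + self.delta)
            e_m = d[m] - x_m @ h
            h = h + alpha * self.P[:, m] * e_m
        self.h = h
```

Note that no M×M inversion appears anywhere: each sample period costs one new column g⊙x(n), one inner product for p(n), and M NLMS-like row updates.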

4 Computational complexity

The computational complexity of the SC-RPAPA with MRAP algorithm is compared with traditional PAPA, MPAPA, RPAPA, and SC-RPAPA in Table 2, in terms of the total number of additions (A), multiplications (M), divisions (D), comparisons (C), square roots (Sqrt), and direct matrix inversions (DMI) needed per algorithm iteration. All the algorithms require L absolute-value operations for calculating the magnitudes of the filter’s coefficients.
Table 2

Computational complexity of the algorithms’ coefficient updates

| Algorithm | A | M | D | C | Sqrt | DMI |
| --- | --- | --- | --- | --- | --- | --- |
| PAPA | (M²+2M+1)L+M−1 | (M²+3M+1)L+2M²+2 | L | 2L | 0 | Yes, M×M |
| MPAPA | (M²+2M+1)L+M−1 | (M²+2M+2)L+2M²+2 | L | 2L | 0 | Yes, M×M |
| RPAPA | (M²+2M+2)L+M−1 | (M²+3M+1)L+2M²+2 | 2L | 2L | 0 | Yes, M×M |
| SC-RPAPA | (M²+2M+4)L+M−1 | (M²+3M+2)L+2M²+5 | 2L+1 | 2L+1 | 1 | Yes, M×M |
| SC-RPAPA MRAP | (2M+5)L+M−2 | (2M+3)L+M+5 | 2L+M+1 | 2L+1 | 1 | No |

Compared with traditional PAPA, MPAPA reduces the complexity of computing G(n−1)X(n), but the calculation of X^T(n)P(n) still requires M²L multiplications. In the proposed algorithm, due to the memory and the iterative RAP structure, only L multiplications are needed to update p(n) instead.

More importantly, both the PAPA and MPAPA algorithms require an M×M direct matrix inversion, which is especially expensive for high projection orders. The combination of the memory and the iterative RAP structure not only avoids the M×M direct matrix inversion, but also reduces the computational complexity required for the calculation of both G(n−1)X(n) and X^T(n)G(n−1)X(n).

The additional computational complexity of the SC-RPAPA with MRAP algorithm arises from the computation of the sparseness measure \(\hat {\epsilon }\). As in [31], given that \(L/(L-\sqrt {L})\) can be computed offline, the remaining l1 and l2 norms require an additional 2L additions and L multiplications. Furthermore, this sparseness measure can be reused in many other sparseness-controlled algorithms, for example [31]. The calculation of F in (22) requires L divisions, L+1 additions, one multiplication, and one comparison more than PAPA. The cost of a division is much lower than that of the L exponential or logarithmic operations required by either the mu-law or the l0 PAPA. Meanwhile, (22) also offers robustness for dispersive system identification.
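To make the savings concrete, the multiplication counts of Table 2 can be evaluated for the sizes used in the simulations below (L = 512); the helper names here are illustrative:

```python
L = 512  # filter length used in the simulations

def mults_papa(M):
    """Multiplications per iteration for PAPA (Table 2)."""
    return (M**2 + 3 * M + 1) * L + 2 * M**2 + 2

def mults_sc_rpapa_mrap(M):
    """Multiplications per iteration for SC-RPAPA MRAP (Table 2)."""
    return (2 * M + 3) * L + M + 5

# At projection order M = 32, the MRAP variant needs roughly 17x fewer
# multiplications, since its count grows linearly rather than
# quadratically in M.
ratio = mults_papa(32) / mults_sc_rpapa_mrap(32)
```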

5 Simulation results

The performance of the proposed SC-RPAPA with MRAP was evaluated via simulations. Throughout the simulations, the length of the unknown system was L=512, and the adaptive filter had the same length. The sampling rate was 8 kHz. The parameters for each algorithm were δ=0.01/L, ρ=0.01, q=0.01. The step size for all the algorithms was set to μ=0.2.

The algorithms were tested using both white Gaussian noise (WGN) and colored noise as inputs. The colored input signals were generated by filtering WGN through a first-order system with a pole at 0.8. Independent WGN was added to the system background with a signal-to-noise ratio (SNR) of 30 dB.

Two impulse responses were used to verify the performance of the proposed SC-RPAPA MRAP algorithm, as shown in Fig. 3. The first one, in Fig. 3a, is a sparse impulse response of a typical network echo path with sparseness 0.92. Figure 3b is a dispersive channel with sparseness 0.44. In order to demonstrate tracking ability, an echo path change was introduced by switching the impulse response from the sparse system in Fig. 3a to the dispersive one in Fig. 3b.
Fig. 3

Two impulse responses used in the simulation a the sparse network echo path, and b the dispersive echo path

The convergence of the adaptive filter is evaluated with the normalized misalignment, which is defined as
$$20\log_{10}\left(\frac{\Vert\mathbf{h}-\hat{\mathbf{h}}\Vert_{2}}{\Vert\mathbf{h}\Vert_{2}}\right). $$
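A one-line helper for this figure of merit (the function name is an illustrative choice):

```python
import numpy as np

def misalignment_db(h, h_hat):
    """Normalized misalignment in dB, as defined above."""
    return 20 * np.log10(np.linalg.norm(h - h_hat) / np.linalg.norm(h))

h = np.array([1.0, 0.0, 0.0, 0.0])
# An estimate at 90% of the true response sits at -20 dB misalignment;
# the all-zero initial estimate sits at 0 dB.
```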

5.1 The performance of the proposed RPAPA

The proposed reweighted PAPA in (19) was first compared to PAPA, mu-law PAPA, and l0 PAPA. The parameters were σ_μ = 1000, σ_l0 = 200, and σ_r = 0.01. The affine projection order was M=2.

In the first simulation, shown in Fig. 4, the input signal was WGN. According to the results, the proposed RPAPA outperforms PAPA and performs similarly to mu-law and l0 PAPA, while having much lower computational complexity. In the second simulation, the input signal was colored, and a similar result can be observed in Fig. 5.
Fig. 4

Comparison of RPAPA with PAPA, l0 PAPA and mu-law PAPA for WGN input, SNR=30 dB, M=2, μ=0.2

Fig. 5

Comparison of RPAPA with PAPA, l0 PAPA and mu-law PAPA for colored input, SNR=30 dB, M=2, μ=0.2

5.2 The performance of the proposed SC-RPAPA

To demonstrate the benefit of sparseness control, the proposed SC-RPAPA algorithm was simulated using an echo path change from the sparse to the dispersive impulse response in Fig. 3. The SC-RPAPA algorithm was compared with APA, PAPA, and the RPAPA algorithm above. The parameters were σ_r = 0.01 and σ_max = 0.02. The affine projection order was M=2. In Fig. 6, the input signal was WGN. Both the proposed RPAPA and SC-RPAPA algorithms had similar performance for sparse system identification, outperforming APA and PAPA. Meanwhile, due to the sparseness control, SC-RPAPA outperformed RPAPA for the dispersive system, as expected. The colored input was used in Fig. 7, and similar results are observed.
Fig. 6

Comparison of SC-RPAPA with APA, PAPA, and RPAPA for WGN input, SNR=30 dB, M=2, μ=0.2

Fig. 7

Comparison of SC-RPAPA with APA, PAPA, and RPAPA for colored input, SNR=30 dB, M=2, μ=0.2

5.3 The performance of the proposed SC-RPAPA with MRAP

An efficient implementation of the SC-RPAPA algorithm was proposed by combining the memory of the filter’s coefficients with RAP. The new SC-RPAPA with MRAP algorithm significantly decreases the computational complexity. In this subsection, the performance of this efficient implementation is compared with APA, PAPA, and SC-RPAPA through simulations.

In the first simulation, the WGN input was used. As shown in Fig. 8, SC-RPAPA with MRAP worked as well as SC-RPAPA for sparse system identification. However, for the dispersive system, the performance of SC-RPAPA MRAP was worse than that of SC-RPAPA and APA. This becomes more apparent for the colored input, as shown in Fig. 9, and is caused by the relatively low projection order (M=2), for which the single-sweep MRAP approximation converges more slowly than the direct matrix inversion. However, this drawback can be mitigated by increasing the projection order; furthermore, the memory of the filter’s coefficients also improves the performance as the projection order increases. We verify this through simulations with M=32 for both the WGN (see Fig. 10) and the colored input (see Fig. 11). It can be observed that SC-RPAPA with MRAP works better than APA, PAPA, and SC-RPAPA for sparse system identification. Meanwhile, the performance for the dispersive system with colored input is significantly improved as well.
Fig. 8

Comparison of SC-RPAPA MRAP with APA, PAPA and RPAPA for WGN input, SNR=30 dB, M=2, μ=0.2

Fig. 9

Comparison of SC-RPAPA MRAP with APA, PAPA and RPAPA for colored input, SNR=30 dB, M=2, μ=0.2

Fig. 10

Comparison of SC-RPAPA MRAP with APA, PAPA and RPAPA for WGN input, SNR=30 dB, M=32, μ=0.2

Fig. 11

Comparison of SC-RPAPA MRAP with APA, PAPA and RPAPA for colored input, SNR=30 dB, M=32, μ=0.2

6 Conclusion

A low complexity reweighted proportionate affine projection algorithm was proposed in this paper. The sparseness of the channel was taken into account to improve the performance for dispersive systems. In order to reduce computational complexity, the direct matrix inversion of PAPA was implemented iteratively with RAP. Meanwhile, the memory of the filter’s coefficients was exploited to improve the performance and further reduce the complexity for high projection orders. Simulation results demonstrate that the proposed sparseness controlled reweighted proportionate affine projection algorithm with memory and RAP outperforms traditional PAPA, with much lower computational complexity than mu-law and l0 PAPA.

Declarations

Acknowledgements

This work was performed under the Wilkens Missouri Endowment. The authors would like to thank the Associate Editor and the reviewers for their valuable comments and suggestions.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Electrical and Computer Engineering, Missouri University of Science and Technology
(2)
INRS-EMT, University of Quebec

References

  1. E Hänsler, G Schmidt, Acoustic Echo and Noise Control: A Practical Approach, vol. 40 (Wiley, Hoboken, New Jersey, 2005).
  2. E Hänsler, G Schmidt, Topics in Acoustic Echo and Noise Control: Selected Methods for the Cancellation of Acoustical Echoes, the Reduction of Background Noise, and Speech Processing (Springer, Berlin, Heidelberg, 2006).
  3. K Ozeki, T Umeda, An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electron. Commun. Japan (Part I: Commun.) 67(5), 19–27 (1984).
  4. E Hänsler, GU Schmidt, Hands-free telephones – joint control of echo cancellation and postfiltering. Sig. Process. 80(11), 2295–2305 (2000).
  5. A Mader, H Puder, GU Schmidt, Step-size control for acoustic echo cancellation filters – an overview. Sig. Process. 80(9), 1697–1719 (2000).
  6. DL Duttweiler, Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Trans. Speech Audio Process. 8(5), 508–518 (2000).
  7. K Wagner, M Doroslovacki, Proportionate-type Normalized Least Mean Square Algorithms (Wiley, Hoboken, New Jersey, 2013).
  8. T Gansler, J Benesty, SL Gay, MM Sondhi, in Acoustics, Speech, and Signal Processing, 2000. ICASSP’00. Proceedings. 2000 IEEE International Conference On, vol. 2. A robust proportionate affine projection algorithm for network echo cancellation (IEEE, Istanbul, 2000), pp. 793–796.
  9. H Deng, M Doroslovacki, Improving convergence of the PNLMS algorithm for sparse impulse response identification. IEEE Signal Process. Lett. 12(3), 181–184 (2005).
  10. H Deng, M Doroslovacki, Proportionate adaptive algorithms for network echo cancellation. IEEE Trans. Signal Process. 54(5), 1794–1803 (2006).
  11. L Liu, M Fukumoto, S Saiki, S Zhang, A variable step-size proportionate affine projection algorithm for identification of sparse impulse response. EURASIP J. Adv. Signal Process. 2009, 1–10 (2009). doi:10.1155/2009/150914.
  12. Y Gu, J Jin, S Mei, l 0 norm constraint LMS algorithm for sparse system identification. IEEE Signal Process. Lett. 16(9), 774–777 (2009).
  13. C Paleologu, J Benesty, S Ciochina, in Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference On. An improved proportionate NLMS algorithm based on the l 0 norm (IEEE, Dallas, TX, 2010), pp. 309–312.
  14. J Benesty, SL Gay, in Acoustics, Speech, and Signal Processing (ICASSP), 2002 IEEE International Conference On, vol. 2. An improved PNLMS algorithm (IEEE, Orlando, FL, USA, 2002), pp. 1881–1884.
  15. O Hoshuyama, R Goubran, A Sugiyama, in Acoustics, Speech, and Signal Processing, 2004. Proceedings (ICASSP’04). IEEE International Conference On, vol. 4. A generalized proportionate variable step-size algorithm for fast changing acoustic environments (IEEE, Montreal, 2004), p. 161.
  16. J Liu, SL Grant, Proportionate adaptive filtering for block-sparse system identification (2015). arXiv preprint arXiv:1508.04172.
  17. C Paleologu, S Ciochină, J Benesty, An efficient proportionate affine projection algorithm for echo cancellation. IEEE Signal Process. Lett. 17(2), 165–168 (2010).
  18. C Stanciu, C Anghel, C Paleologu, J Benesty, F Albu, S Ciochina, in Signals, Circuits and Systems (ISSCS), 2011 10th International Symposium On. A proportionate affine projection algorithm using dichotomous coordinate descent iterations (Iasi, 2011), pp. 1–4.
  19. Y Zakharov, VH Nascimento, Sliding-window RLS low-cost implementation of proportionate affine projection algorithms. IEEE/ACM Trans. Audio Speech Lang. Process. 22(12), 1815–1824 (2014).
  20. SL Grant, P Shah, J Benesty, in Signal & Information Processing Association Annual Summit and Conference (APSIPA ASC), 2012 Asia-Pacific. An efficient iterative method for basis pursuit adaptive filters for sparse systems (IEEE, Hollywood, CA, 2012), pp. 1–4.
  21. M Yukawa, I Yamada, A unified view of adaptive variable-metric projection algorithms. EURASIP J. Adv. Signal Process. 2009, 34 (2009).
  22. J Benesty, C Paleologu, S Ciochină, Proportionate adaptive filters from a basis pursuit perspective. IEEE Signal Process. Lett. 17(12), 985–988 (2010).
  23. C Paleologu, J Benesty, in Circuits and Systems (ISCAS), 2012 IEEE International Symposium On. Proportionate affine projection algorithms from a basis pursuit perspective (IEEE, Seoul, 2012), pp. 2757–2760.
  24. J Liu, SL Grant, in Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference On. A generalized proportionate adaptive algorithm based on convex optimization (IEEE, Xi’an, 2014), pp. 748–752.
  25. R Gribonval, M Nielsen, Highly sparse representations from dictionaries are unique and independent of the sparseness measure. Appl. Comput. Harmon. Anal. 22(3), 335–355 (2007).
  26. SL Gay, in Signals, Systems, Computers, 1998. Conference Record of the Thirty-Second Asilomar Conference On, vol. 1. An efficient, fast converging adaptive filter for network echo cancellation (IEEE, Pacific Grove, CA, USA, 1998), pp. 394–398.
  27. S Kaczmarz, Angenäherte Auflösung von Systemen linearer Gleichungen. Bulletin International de l’Académie Polonaise des Sciences et des Lettres. 35, 355–357 (1937).
  28. J Benesty, T Gänsler, in Proc. Int. Workshop on Acoustic Echo and Noise Control (IWAENC). On data-reuse adaptive algorithms (Kyoto, 2003).
  29. SL Gay, Fast projection algorithms with application to voice echo cancellation. PhD thesis, New Brunswick, NJ, USA, 1994.
  30. P Shah, SL Grant, J Benesty, in Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference On. On an iterative method for basis pursuit with application to echo cancellation with sparse impulse responses (IEEE, Kyoto, 2012), pp. 177–180.
  31. P Loganathan, AW Khong, P Naylor, A class of sparseness-controlled algorithms for echo cancellation. IEEE Trans. Audio Speech Lang. Process. 17(8), 1591–1601 (2009).

Copyright

© Liu et al. 2015