
Improved analysis of SP and CoSaMP under total perturbations

Abstract

In practice, in the underdetermined model y=Ax, where x is a K-sparse vector (i.e., it has no more than K nonzero entries), both y and A may be totally perturbed. From a theoretical viewpoint, a more relaxed condition means that fewer measurements are needed to ensure sparse recovery. In this paper, based on the restricted isometry property (RIP), two relaxed sufficient conditions are presented for subspace pursuit (SP) and compressive sampling matching pursuit (CoSaMP) under total perturbations to guarantee that the sparse vector x is recovered. Taking a random matrix as the measurement matrix, we also discuss the advantage of our conditions. Numerical experiments validate that SP and CoSaMP provide oracle-order recovery performance.

1 Introduction

Compressed sensing [1] has attracted increasing attention since it was proposed. According to compressed sensing theory, sparse signals can be accurately reconstructed from far fewer samples than required by the classical Shannon-Nyquist theorem.

Typically, an underdetermined equation

$$\begin{array}{*{20}l} \mathbf{y}=\mathbf{A}\mathbf{x} \end{array} $$
(1)

is to be solved, where the measurement matrix \(\mathbf {A}\in \mathbb {R}^{m\times N}\) with m<N. There exists a unique solution of (1) when x is assumed to be K-sparse, i.e., x has at most K nonzero entries.

To find the sparsest solution of Eq. (1), minimizing \(\|\mathbf {x}\|_{\ell _{0}}\) (the ℓ0-“norm” counts the number of nonzero entries in x) is an intuitive idea. However, this is an NP-hard problem [2]. Many suboptimal methods have been proposed to overcome this difficulty.

The greedy algorithms have received considerable attention due to their low complexity and simple geometric interpretation. They mainly include orthogonal matching pursuit (OMP) [3], subspace pursuit (SP) [4], compressive sampling matching pursuit (CoSaMP) [5], analysis SP (ASP), and analysis CoSaMP (ACoSaMP) [6]. The basic idea behind these algorithms is to identify the support of the unknown signal sequentially. Recently, using a new analysis technique, two relaxed sufficient conditions were presented for SP and CoSaMP by Song et al. [7, 8]. In this paper, we mainly discuss SP and CoSaMP, which are efficient algorithms.
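For concreteness, the following is a minimal NumPy sketch of CoSaMP [5] (SP proceeds similarly but selects K rather than 2K new atoms per iteration). The function name, iteration cap, and halting rule are our illustrative choices, not fixed by the paper:

```python
import numpy as np

def cosamp(A, y, K, n_iter=30, tol=1e-6):
    """Minimal CoSaMP sketch following [5]: correlate, merge supports,
    solve least squares on the merged support, prune to K terms."""
    N = A.shape[1]
    x = np.zeros(N)
    r = y.astype(float)
    for _ in range(n_iter):
        proxy = A.T @ r                                    # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * K:]         # 2K largest correlations
        T = np.union1d(omega, np.flatnonzero(x))           # merge with current support
        b = np.zeros(N)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]  # restricted least squares
        keep = np.argsort(np.abs(b))[-K:]                  # prune to the best K terms
        x = np.zeros(N)
        x[keep] = b[keep]
        r = y - A @ x                                      # update residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):   # halting criterion (ours)
            break
    return x
```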

In practice, both y and A are often perturbed in model (1). It is important to consider these perturbations since they account for precision errors when applications call for physically implementing the matrix A in a sensor [9]. Such a case arises in source separation [10].

In fact, under total perturbations, model (1) is formulated as

$$\begin{array}{*{20}l} \hat{\mathbf{y}}=\mathbf{A}\mathbf{x}+\mathbf{e},\quad {\hat{\mathbf{A}}}=\mathbf{A}+\mathbf{E}, \end{array} $$
(2)

with inputs \(\hat {\mathbf {A}}\in \mathbb {R}^{m\times N}\) and \(\hat {\mathbf {y}}\). Here, e and E are referred to as the additive noise and the multiplicative noise, respectively. This case arises in remote sensing [11], radar [12], and so on. Based on the restricted isometry property (RIP) [2], model (2) was analyzed by Ding et al. [13] for OMP. Under total perturbations, the works [14, 15] discussed the performance of SP and CoSaMP and showed that oracle-order recovery performance of SP and CoSaMP is guaranteed. In addition, there are many previous works in the context of near-oracle performance [16–20].

Using the results in [7, 8], in this paper we give improved conditions for SP and CoSaMP under total perturbations. Numerical experiments validate that SP and CoSaMP provide oracle-order recovery performance.

Now, we introduce some notation that will be used in this paper. Scalars are written as lowercase letters, e.g., d. We denote vectors by boldface lowercase letters, e.g., x, and matrices by boldface uppercase letters, e.g., D. The ith entry of x is denoted by \(x_{i}\). \(\mathbf{D}^{T}\) denotes the transpose of D. The cardinality of a finite set Γ is denoted by |Γ|. \(\|\mathbf {D}\|^{(K)}_{2}\) denotes the largest spectral norm taken over all K-column submatrices of D. We write \(\mathbf{D}_{\Gamma}\) for the column submatrix of D whose indices are listed in the set Γ.

2 Problem formulation

In practice, we often encounter approximately sparse vectors [21] rather than exactly sparse ones. Although these vectors are not exactly sparse, they are well approximated by a K-sparse vector. We assume that x is approximately sparse and approximate it by the K-sparse vector \(\mathbf{x}_{K}\), the best K-term approximation of x, i.e., the nonzero entries of \(\mathbf{x}_{K}\) correspond to the K largest (in magnitude) entries of x; the energy of \(\mathbf {x}_{K}^{c}=\mathbf {x}-\mathbf {x}_{K}\) is then small. The approximation error can be quantified as

$$\begin{array}{*{20}l} r_{K}=\frac{\|\mathbf{x}_{K}^{c}\|_{2}}{\|\mathbf{x}\|_{2}},\quad s_{K}=\frac{\|\mathbf{x}_{K}^{c}\|_{1}}{\sqrt{K}{\|\mathbf{x}\|_{2}}} \end{array} $$
(3)
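As a small illustration of (3), the following NumPy sketch (helper names are ours) computes the best K-term approximation \(\mathbf{x}_{K}\) together with \(r_{K}\) and \(s_{K}\):

```python
import numpy as np

def best_k_term(x, K):
    """Best K-term approximation: keep the K largest-magnitude entries of x."""
    xK = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-K:]
    xK[keep] = x[keep]
    return xK

def approx_errors(x, K):
    """Relative approximation errors r_K and s_K from Eq. (3)."""
    tail = x - best_k_term(x, K)            # x_K^c = x - x_K
    nx = np.linalg.norm(x)
    r_K = np.linalg.norm(tail) / nx
    s_K = np.linalg.norm(tail, 1) / (np.sqrt(K) * nx)
    return r_K, s_K
```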

In this paper, the following model is considered

$$\begin{array}{*{20}l} \hat{\mathbf{y}}=\mathbf{A}\mathbf{x}+\mathbf{e}=\mathbf{A}\mathbf{x}_{K}+\mathbf{A}(\mathbf{x}-\mathbf{x}_{K})+\mathbf{e}. \end{array} $$
(4)

Here, the available information for recovering x is \(\hat {\mathbf {y}}\) and \(\hat {\mathbf {A}}=\mathbf {A}+\mathbf {E}\).

In real-world applications, we often do not know the exact nature of E and e and are forced to estimate their relative upper bounds instead. The perturbations E and e are quantified by the following relative bounds:

$$\begin{array}{*{20}l} \varepsilon_{\mathbf{A}}=\frac{\|\mathbf{E}\|^{(K)}_{2}}{\|\mathbf{A}\|^{(K)}_{2}},\quad \varepsilon_{\mathbf{y}}=\frac{\|\mathbf{e}\|_{2}}{\|\mathbf{Ax}\|_{2}}, \end{array} $$
(5)

where \(\|\mathbf {A}\|^{(K)}_{2}\) and \(\|\mathbf{Ax}\|_{2}\) are nonzero. Now, according to \(\mathbf {A}=\hat {\mathbf {A}}-\mathbf {E}\), we give the upper bound of \(\|\mathbf {E}\|_{2}^{(K)}\):

$$\begin{array}{*{20}l} \|\mathbf{E}\|^{(K)}_{2}&=\varepsilon_{\mathbf{A}}^{}\|\mathbf{A}\|^{(K)}_{2}\notag\\ &=\varepsilon_{\mathbf{A}}^{}\|\hat{\mathbf{A}}-\mathbf{E}\|_{2}^{(K)}\notag\\ &\leq\varepsilon_{\mathbf{A}}^{}\|\hat{\mathbf{A}}\|_{2}^{(K)}+\varepsilon_{\mathbf{A}}^{}\|{\mathbf{E}}\|_{2}^{(K)}. \end{array} $$
(6)

Then, we have

$$\begin{array}{*{20}l} \|{\mathbf{E}}\|_{2}^{(K)}\leq\frac{\varepsilon_{\mathbf{A}}^{}}{1-\varepsilon_{\mathbf{A}}^{}}\|\hat{\mathbf{A}}\|_{2}^{(K)}. \end{array} $$
(7)

In this paper, we are only interested in the case where \(\varepsilon _{\mathbf {A}}^{}\) and ε y are far less than 1.
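A small numerical illustration of (5)–(7) follows. Computing \(\|\cdot\|_{2}^{(K)}\) exactly requires enumerating all K-column submatrices, so this brute-force helper (our own) is only practical for small N; the experiments in Section 4 approximate it by the full spectral norm:

```python
import numpy as np
from itertools import combinations

def spectral_norm_K(D, K):
    """||D||_2^(K): largest spectral norm over all K-column submatrices of D."""
    return max(np.linalg.norm(D[:, list(S)], 2)
               for S in combinations(range(D.shape[1]), K))

rng = np.random.default_rng(0)
m, N, K = 8, 12, 3                        # kept tiny: C(12, 3) = 220 submatrices
A = rng.normal(0, 1 / np.sqrt(m), (m, N))
E = 0.05 * rng.normal(0, 1 / np.sqrt(m), (m, N))
A_hat = A + E

eps_A = spectral_norm_K(E, K) / spectral_norm_K(A, K)     # definition (5)
bound = eps_A / (1 - eps_A) * spectral_norm_K(A_hat, K)   # bound (7)
assert spectral_norm_K(E, K) <= bound + 1e-12             # (7) holds by construction
```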

3 RIP-based recovery condition

Definition 1

([2]) A matrix A satisfies the RIP of order K if there exists a constant \(\delta\in(0,1)\) such that

$$\begin{array}{*{20}l} (1-\delta)\|\mathbf{h}\|_{2}^{2}\leq \|\mathbf{Ah}\|_{2}^{2}\leq(1+\delta)\|\mathbf{h}\|_{2}^{2} \end{array} $$
(8)

for all K-sparse vectors h. In particular, the minimum of all constants δ satisfying (8) is called the restricted isometry constant (RIC) \(\delta_{K}\).
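The RIC is intractable to compute exactly for large N, but Definition 1 can be probed numerically. The following sketch (our own helper) returns a Monte Carlo lower bound on \(\delta_{K}\) by sampling random supports:

```python
import numpy as np

def ric_lower_bound(A, K, trials=2000, seed=0):
    """Monte Carlo lower bound on delta_K: sample K-column submatrices and track
    the worst deviation of their squared singular values from 1. The exact RIC
    would require checking all C(N, K) supports."""
    rng = np.random.default_rng(seed)
    N = A.shape[1]
    worst = 0.0
    for _ in range(trials):
        S = rng.choice(N, size=K, replace=False)
        s = np.linalg.svd(A[:, S], compute_uv=False)
        worst = max(worst, s.max() ** 2 - 1, 1 - s.min() ** 2)
    return worst
```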

Theorem 1

Suppose we are given a noisy measurement vector \(\mathbf{y}=\mathbf{A}\mathbf{x}_{K}+\mathbf{e}\). If A satisfies \(\delta_{aK}\leq c\), then the sequence \(\mathbf{x}^{n}\) defined by SP and CoSaMP satisfies

$$\begin{array}{*{20}l} \|\mathbf{x}_{K}-\mathbf{x}^{n}\|_{2}\leq\rho^{n}\|\mathbf{x}_{K}\|_{2}+\tau\|{\mathbf{e}}\|_{2}. \end{array} $$
(9)

The specific values of the constants a, c, ρ, and τ are listed in Table 1.

Table 1 The values of the constants

Proof

After slight manipulation of the results in [7] and [8], the claim follows. □

Theorem 2

Consider (4). If the perturbed matrix \(\hat {\mathbf {A}}\) satisfies RIP with

$$\begin{array}{*{20}l} {\hat{\delta}_{aK}}\leq c, \end{array} $$
(10)

then the relative error of the solution \(\mathbf{x}^{n}\) of SP and CoSaMP satisfies

$$\begin{array}{*{20}l} &\frac{\|\mathbf{x}-\mathbf{x}^{n}\|_{2}}{\|\mathbf{x}\|_{2}}\notag\\ &\leq r_{K}+\hat{\rho}^{n}+\hat{\tau}\frac{\sqrt{1+\hat{\delta}_{K}}}{1-\varepsilon_{\mathbf{A}}}(\varepsilon_{\mathbf{A}}\!+ \varepsilon_{\mathbf{y}}+(1+\varepsilon_{\mathbf{y}})(r_{K}\!+s_{K})), \end{array} $$
(11)

where the specific values of the constants a, c, \(\hat {\rho }\), and \(\hat {\tau }\) are listed in Tables 1 and 2. In addition, after at most

$$\begin{array}{*{20}l} n=\lceil\log_{\hat{\rho}}(\varepsilon_{\mathbf{A}}+\varepsilon_{\mathbf{y}}+s_{K})\rceil \end{array} $$
(12)
iterations, SP and CoSaMP can achieve the error

$$\begin{array}{*{20}l} &\frac{\|\mathbf{x}-\mathbf{x}^{n}\|_{2}}{\|\mathbf{x}\|_{2}}\notag\\ &\leq\left(\hat{\tau}\frac{\sqrt{1+\hat{\delta}_{K}}}{1-\varepsilon_{\mathbf{A}}}+1\right)(\varepsilon_{\mathbf{A}}+\varepsilon_{\mathbf{y}}+ (1+\varepsilon_{\mathbf{y}})(r_{K}+s_{K})). \end{array} $$
(13)

Table 2 The values of the constants

Proof

The sensing process (4) is equivalent to

$$\begin{array}{*{20}l} \hat{\mathbf{y}}&={\mathbf{A}}\mathbf{x}+\mathbf{e}=(\hat{\mathbf{A}}-\mathbf{E})\left(\mathbf{x}_{K}+\mathbf{x}_{K}^{c}\right)+\mathbf{e}\notag\\ &=\hat{\mathbf{A}}\mathbf{x}_{K}+(-\mathbf{E}\mathbf{x}+\hat{\mathbf{A}}\mathbf{x}_{K}^{c}+\mathbf{e})\notag\\ &=\hat{\mathbf{A}}\mathbf{x}_{K}+\hat{\mathbf{e}}, \end{array} $$
(14)

where \(\hat {\mathbf {e}}=-\mathbf {E}\mathbf {x}+\hat {\mathbf {A}}\mathbf {x}_{K}^{c}+\mathbf {e}\) is the error term. Its energy is bounded as follows. By Proposition 3.5 in [5] and (7),

$$\begin{array}{*{20}l} \|\mathbf{Ex}\|_{2}&\leq\|\mathbf{E}\mathbf{x}_{K}\|_{2}+\|\mathbf{E}\mathbf{x}_{K}^{c}\|_{2}\notag\\ &\leq\|\mathbf{E}\|_{2}^{(K)}\|\mathbf{x}_{K}\|_{2}+\|\mathbf{E}\|_{2}^{(K)}\left(\|\mathbf{x}_{K}^{c}\|_{2}+\frac{\|\mathbf{x}_{K}^{c}\|_{1}}{\sqrt{K}}\right)\notag\\ &=\|\mathbf{E}\|_{2}^{(K)}\left(\|\mathbf{x}_{K}\|_{2}+\|\mathbf{x}_{K}^{c}\|_{2}+\frac{\|\mathbf{x}_{K}^{c}\|_{1}}{\sqrt{K}}\right)\notag\\ &\leq\frac{\varepsilon_{\mathbf{A}}}{1-\varepsilon_{\mathbf{A}}}\sqrt{1+\hat{\delta}_{K}}(1+r_{K}+s_{K})\|\mathbf{x}\|_{2}, \end{array} $$
(15)

where the last inequality in (15) follows from (7), (3), and \(\|\mathbf{x}_{K}\|_{2}\leq\|\mathbf{x}\|_{2}\).

Furthermore,

$$\begin{array}{*{20}l} \|\hat{\mathbf{A}}\mathbf{x}_{K}^{c}\|_{2}&\leq\sqrt{1+\hat{\delta}_{K}}\left(\|\mathbf{x}_{K}^{c}\|_{2}+\frac{\|\mathbf{x}_{K}^{c}\|_{1}}{\sqrt{K}}\right)\notag\\ &=\sqrt{1+\hat{\delta}_{K}}(r_{K}+s_{K})\|\mathbf{x}\|_{2}. \end{array} $$
(16)

Then, combining (5), (15), and (16), we have

$$\begin{array}{*{20}l} \|\hat{\mathbf{e}}\|_{2}&\leq\|\mathbf{Ex}\|_{2}+\|\hat{\mathbf{A}}\mathbf{x}_{K}^{c}\|_{2}+\|\mathbf{e}\|_{2}\notag\\ &=\|\mathbf{Ex}\|_{2}+\|\hat{\mathbf{A}}\mathbf{x}_{K}^{c}\|_{2}+\varepsilon_{\mathbf{y}}\|\mathbf{Ax}\|_{2}\notag\\ &\leq\|\mathbf{Ex}\|_{2}+\|\hat{\mathbf{A}}\mathbf{x}_{K}^{c}\|_{2}+\varepsilon_{\mathbf{y}}\|\hat{\mathbf{A}}\mathbf{x}\|_{2} +\varepsilon_{\mathbf{y}}\|\mathbf{Ex}\|_{2}\notag\\ &\leq(\|\mathbf{Ex}\|_{2}+\|\hat{\mathbf{A}}\mathbf{x}_{K}^{c}\|_{2})(1+\varepsilon_{\mathbf{y}})+ \varepsilon_{\mathbf{y}}\|\hat{\mathbf{A}}\mathbf{x}_{K}\|_{2}\notag\\ &\leq\frac{\sqrt{1+\hat{\delta}_{K}}}{1-\varepsilon_{\mathbf{A}}}\left(\varepsilon_{\mathbf{A}}+ \varepsilon_{\mathbf{y}}+(1+\varepsilon_{\mathbf{y}})(r_{K}+s_{K})\right)\|\mathbf{x}\|_{2}. \end{array} $$
(17)

By Theorem 1, under condition (10), the solution \(\mathbf{x}^{n}\) defined by SP and CoSaMP satisfies

$$\begin{array}{*{20}l} \|\mathbf{x}_{K}-\mathbf{x}^{n}\|_{2}<\hat{\rho}^{n}\|\mathbf{x}_{K}\|_{2}+\hat{\tau}\|\hat{\mathbf{e}}\|_{2}, \end{array} $$
(18)

where \(\hat {\rho }<1\) and \(\hat {\tau }\) are constants specified in Table 2.

According to the triangle inequality, we have

$$\begin{array}{*{20}l} &\frac{\|\mathbf{x}-\mathbf{x}^{n}\|_{2}}{\|\mathbf{x}\|_{2}}\notag\\ &\leq\frac{\|\mathbf{x}-\mathbf{x}_{K}\|_{2}}{\|\mathbf{x}\|_{2}}+\frac{\|\mathbf{x}_{K}-\mathbf{x}^{n}\|_{2}}{\|\mathbf{x}\|_{2}}\notag\\ &\leq\frac{\|\mathbf{x}_{K}^{c}\|_{2}}{\|\mathbf{x}\|_{2}}+\hat{\rho}^{n}\frac{\|\mathbf{x}_{K}\|_{2}}{\|\mathbf{x}\|_{2}}+\hat{\tau}\frac{\|\hat{\mathbf{e}}\|_{2}}{\|\mathbf{x}\|_{2}}\notag\\ &\leq r_{K}+\hat{\rho}^{n}+\hat{\tau}\frac{\sqrt{1+\hat{\delta}_{K}}}{1-\varepsilon_{\mathbf{A}}}(\varepsilon_{\mathbf{A}}+ \varepsilon_{\mathbf{y}}+(1+\varepsilon_{\mathbf{y}})(r_{K}+s_{K})). \end{array} $$
(19)

By condition (12),

$$\begin{array}{*{20}l} \hat{\rho}^{n}+r_{K}\leq\varepsilon_{\mathbf{A}}+\varepsilon_{\mathbf{y}}+(1+\varepsilon_{\mathbf{y}})(r_{K}+s_{K}). \end{array} $$
(20)

Combining (11) and (20), (13) follows immediately. □

Remark 1

The weaker the RIC bound is, the fewer measurements are required; thus, the improved RIC results can be used in many CS-based applications [7]. It is clear that when \(\hat {\rho }=\frac {1}{2}\), for SP, Theorem 2 gives \(\hat {\delta }_{3K}=0.3063\) and \(\hat {\tau }=13.1303\), while Theorem 2 in [15] gives \(\hat {\delta }_{3K}=0.1397\) and \(\hat {\tau }=15.6476\) (\(\tilde {C}\) and \(\tilde {D}\) in [15]). For CoSaMP, Theorem 2 gives \(\hat {\delta }_{4K}=0.3083\) and \(\hat {\tau }=13.9536\), while Theorem 2 in [15] gives \(\hat {\delta }_{4K}=0.101\) and \(\hat {\tau }=15.3485\) (\(\tilde {C}\) and \(\tilde {D}\) in [15]). Thus, the proposed results improve the theoretical guarantee for SP and CoSaMP relative to [15].
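As a worked example (the parameter values \(\varepsilon_{\mathbf{A}}=\varepsilon_{\mathbf{y}}=0.05\) and \(r_{K}=s_{K}=0\) are our own choices), plugging the SP constants above into (12) and (13), with the conservative estimate \(\hat{\delta}_{K}\leq\hat{\delta}_{3K}=0.3063\):

```python
import numpy as np

rho_hat, tau_hat, delta_hat = 0.5, 13.1303, 0.3063   # SP constants quoted above
eps_A = eps_y = 0.05
r_K = s_K = 0.0

n = int(np.ceil(np.log(eps_A + eps_y + s_K) / np.log(rho_hat)))  # Eq. (12): n = 4
C = tau_hat * np.sqrt(1 + delta_hat) / (1 - eps_A)
bound = (C + 1) * (eps_A + eps_y + (1 + eps_y) * (r_K + s_K))    # Eq. (13): ~1.68
print(n, bound)  # the worst-case bound is loose; Section 4 observes far smaller errors
```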

To be specific, consider an m×N random matrix \(\hat {\mathbf {A}}\) whose entries are independent and identically distributed Gaussian random variables \(\mathcal {N}(0,\frac {1}{m})\). Then \(\hat {\mathbf {A}}\) satisfies the RIP condition (\(\hat {\delta }_{K}\leq \varepsilon \)) with overwhelming probability provided that [22]

$$\begin{array}{*{20}l} m\geq\frac{bK\log\left(\frac{N}{K}\right)}{\varepsilon^{2}}, \end{array} $$
(21)

where b is a constant. Consider SP, by Lemma 4.1 in [23], \(\hat {\delta }_{3K}<0.4859\) can be changed to \(\hat {\delta }_{K}<0.097\), while \(\hat {\delta }_{3K}<0.206\) ([15]) can be changed to \(\hat {\delta }_{K}<0.041\). Hence, according to (21), the dimension of the measurements m ensuring reconstruction for Theorem 2 is \(m\geq 106.2812bK\makebox {log}(\frac {N}{K})\), while the measurements for Theorem 2 in [15] is \(m\geq 594.8840bK\makebox {log}(\frac {N}{K})\).

Remark 2

It follows from (11) that the recovery performance is stable under both perturbations. It depends on the three terms \(r_{K}+s_{K}\), \(\|\mathbf{e}\|_{2}\), and \(\|\mathbf {E}\|_{2}^{(K)}\). In general, no recovery method can do better than the oracle least squares (LS) method. The authors in [15] presented an upper bound for oracle recovery (Part IV.B in [15]):

$$\begin{array}{*{20}l} \frac{\|\mathbf{x}-\mathbf{x}^{n}\|_{2}}{\|\mathbf{x}\|_{2}}\leq\kappa_{\Psi}\left(\hat{D}\sqrt{1+\hat{\delta}_{K}}\right)(\varepsilon_{\mathbf{y}}+\varepsilon_{\mathbf{A}}+(1+\varepsilon_{\mathbf{y}})(r_{K}+s_{K})), \end{array} $$
(22)

where \(\kappa_{\Psi}=\|\Psi\|_{2}\|\Psi^{-1}\|_{2}=1\) (Ψ is the identity matrix in our paper) and \(\hat {D}=\frac {1}{\sqrt {1-\hat {\delta }_{K}}}\). When \(\hat {\mathbf {A}}\) is fixed, \(\hat {D}\) and \(\hat {\tau }\) are constants. So, comparing (13) with (22), the error bound of SP (or CoSaMP) and the error bound of oracle recovery differ only in the coefficients.
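For reference, the oracle LS benchmark solves a least squares problem restricted to the true support; a minimal sketch (the helper name is ours):

```python
import numpy as np

def oracle_ls(A_hat, y_hat, support):
    """Oracle least squares: given the true support, solve the restricted LS
    problem and set all other entries to zero."""
    x = np.zeros(A_hat.shape[1])
    x[support] = np.linalg.lstsq(A_hat[:, support], y_hat, rcond=None)[0]
    return x
```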

When x is K-sparse, it follows that \(r_{K}=s_{K}=0\). The relative error of the solution is stated in Corollary 1.

Corollary 1

Suppose that x is K-sparse in model (4). If the perturbed matrix \(\hat {\mathbf {A}}\) satisfies RIP with

$$\begin{array}{*{20}l} {\hat{\delta}_{aK}}\leq c, \end{array} $$
(23)

then the relative error of the solution \(\mathbf{x}^{n}\) of SP and CoSaMP satisfies

$$\begin{array}{*{20}l} &\frac{\|\mathbf{x}-\mathbf{x}^{n}\|_{2}}{\|\mathbf{x}\|_{2}}\leq \hat{\rho}^{n}+\hat{\tau}\frac{\sqrt{1+\hat{\delta}_{K}}}{1-\varepsilon_{\mathbf{A}}}(\varepsilon_{\mathbf{A}}+ \varepsilon_{\mathbf{y}}), \end{array} $$
(24)

where the specific values of the constants \(a, c, \hat {\rho }\), and \(\hat {\tau }\) are listed in Tables 1 and 2. In addition, after at most

$$\begin{array}{*{20}l} n=\lceil\log_{\hat{\rho}}(\varepsilon_{\mathbf{A}}+\varepsilon_{\mathbf{y}})\rceil \end{array} $$
(25)

iterations, SP and CoSaMP can achieve the error

$$\begin{array}{*{20}l} &\frac{\|\mathbf{x}-\mathbf{x}^{n}\|_{2}}{\|\mathbf{x}\|_{2}}\leq\left(\hat{\tau}\frac{\sqrt{1+\hat{\delta}_{K}}}{1-\varepsilon_{\mathbf{A}}}+1\right)(\varepsilon_{\mathbf{A}}+\varepsilon_{\mathbf{y}}). \end{array} $$
(26)

4 Numerical experiments

In this section, we perform numerical experiments in MATLAB R2013a and study the performance of SP and CoSaMP under total perturbations. The algorithms are tested with two random matrix ensembles:

∙ \(\mathcal {N}\): Gaussian matrices with entries drawn i.i.d. from \(\mathcal {N}\left (0,\frac {1}{m}\right)\);

∙ \(\mathcal {S}_{7}\): sparse matrices with seven nonzero entries per column drawn with equal probability from \(\left \{-\frac {1}{\sqrt {7}},\frac {1}{\sqrt {7}}\right \}\) and locations in each column chosen uniformly.

As noted in [24], the above two random matrix ensembles are representative of the random matrices frequently encountered in compressed sensing.
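The paper's experiments were run in MATLAB; purely for illustration, the two ensembles can be generated as in the following NumPy sketch (helper names are ours):

```python
import numpy as np

def gaussian_ensemble(m, N, rng):
    """N ensemble: i.i.d. Gaussian entries with variance 1/m."""
    return rng.normal(0.0, 1.0 / np.sqrt(m), (m, N))

def sparse_ensemble(m, N, rng, d=7):
    """S_7 ensemble: d = 7 nonzeros per column, values +-1/sqrt(7) with equal
    probability, row locations chosen uniformly without replacement."""
    A = np.zeros((m, N))
    for j in range(N):
        rows = rng.choice(m, size=d, replace=False)
        A[rows, j] = rng.choice([-1.0, 1.0], size=d) / np.sqrt(d)
    return A
```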

The sparse vector x (of length N=1024) is a random binary vector formed by uniformly selecting K locations for the nonzero entries, whose values are drawn from {−1,1} with equal probability.

The additive noise e and the multiplicative noise E are drawn from Gaussian distributions. In each trial, according to (5), the relative perturbations are set to

$$\begin{array}{*{20}l} \varepsilon_{\mathbf{A}}=\frac{\|\mathbf{E}\|^{(K)}_{2}}{\|\mathbf{A}\|^{(K)}_{2}}\approx\frac{\|\mathbf{E}\|^{}_{2}}{\|\mathbf{A}\|^{}_{2}}=0.05,\quad \varepsilon_{\mathbf{y}}=\frac{\|\mathbf{e}\|_{2}}{\|\mathbf{Ax}\|_{2}}=0.05, \end{array} $$
(27)

where A is the measurement matrix. Then, \(\hat {\mathbf {y}}\) and \(\hat {\mathbf {A}}\) are generated by (2). The relative approximation error is defined as

$$\begin{array}{*{20}l} \frac{\|\mathbf{x}^{\ast}-\mathbf{x}\|_{2}}{\|\mathbf{x}\|_{2}}, \end{array} $$
(28)

where \(\mathbf{x}^{\ast}\) is the approximate solution. The simulation is conducted 500 times to obtain the average relative error.
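Putting the pieces together, one trial of this setup can be sketched as follows (our own helper, reusing the hypothetical gaussian_ensemble and cosamp from the earlier sketches; E and e are rescaled so that the relative perturbations equal 0.05, as in (27)):

```python
import numpy as np

def trial_error(m, N, K_true, K_est, seed):
    """One trial: build the perturbed model (2) and return the relative error (28)."""
    rng = np.random.default_rng(seed)
    A = gaussian_ensemble(m, N, rng)
    x = np.zeros(N)
    supp = rng.choice(N, size=K_true, replace=False)
    x[supp] = rng.choice([-1.0, 1.0], size=K_true)            # random binary vector
    E = rng.normal(size=(m, N))
    E *= 0.05 * np.linalg.norm(A, 2) / np.linalg.norm(E, 2)   # eps_A ~ 0.05, cf. (27)
    e = rng.normal(size=m)
    e *= 0.05 * np.linalg.norm(A @ x) / np.linalg.norm(e)     # eps_y = 0.05, cf. (27)
    x_star = cosamp(A + E, A @ x + e, K_est)                  # recover from perturbed data
    return np.linalg.norm(x_star - x) / np.linalg.norm(x)
```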

4.1 Different sparsity level for SP and CoSaMP

For SP and CoSaMP, the sparsity level K needs to be known a priori. The first experiment demonstrates the performance degradation of SP and CoSaMP when K is misestimated. Here, the true sparsity level is K=20. The number of measurements m varies from 100 to 550 with step size 50. The results are shown in Figs. 1 and 2; a sketch of the sweep is given below.
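A sketch of this sweep, using trial_error from above (the misestimated values of K and the reduced trial count are our illustrative choices):

```python
import numpy as np

for m in range(100, 551, 50):
    for K_est in (10, 20, 40):   # under-, exact, and over-estimates of K = 20
        avg = np.mean([trial_error(m, 1024, 20, K_est, s) for s in range(50)])
        print(m, K_est, avg)
```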

Fig. 1 Relative approximation error vs. estimated sparsity for the SP and CoSaMP algorithms with the Gaussian matrix ensemble. Here, the true sparsity level is K=20

Fig. 2 Relative approximation error vs. estimated sparsity for the SP and CoSaMP algorithms with the sparse matrix ensemble. Here, the true sparsity level is K=20

Figures 1 and 2 show the curves of the relative error vs. the estimated sparsity. It is easy to see that the error decreases as m increases. In addition, one can see clearly that the relative errors of SP and CoSaMP increase when the estimated sparsity is far from the true value. Hence, in future work, we will propose an algorithm that recovers unknown signals without knowledge of the sparsity level.

4.2 Observed noise stability for SP and CoSaMP

In the second simulation, we examine the observed stability of SP and CoSaMP to general perturbations and compare their performance with the oracle LS method. The number of measurements m varies from 50 to 500 with step size 50. The results are shown in Figs. 3 and 4.

Fig. 3 Relative approximation error vs. number of measurements for the SP and CoSaMP algorithms with the Gaussian matrix ensemble

Fig. 4 Relative approximation error vs. number of measurements for the SP and CoSaMP algorithms with the sparse matrix ensemble

As can be seen from Figs. 3 and 4, the error decreases as m increases. In addition, the curves of SP, CoSaMP, and the oracle LS method almost coincide when m is no smaller than 150. Hence, SP and CoSaMP can provide oracle-order recovery performance.

5 Conclusions

In this paper, improved sufficient conditions are presented for SP and CoSaMP under total perturbations to guarantee that the sparse vector x can be recovered. Compared with the condition in [15], when a random matrix is taken as the measurement matrix, our condition decreases the required number of measurements. Numerical experiments show that the SP and CoSaMP algorithms obtain oracle-order recovery performance under total perturbations. Furthermore, proposing an algorithm that does not need the sparsity level K is our future work.

References

1. E Candès, J Romberg, T Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006).

2. E Candès, T Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005).

3. J Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 50(10), 2231–2242 (2004).

4. W Dai, O Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 55(5), 2230–2249 (2009).

5. D Needell, J Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009).

6. R Giryes, S Nam, M Elad, R Gribonval, M Davies, Greedy-like algorithms for the cosparse analysis model. Linear Algebra Appl. 441, 22–60 (2014).

7. C Song, S Xia, X Liu, Improved analysis for subspace pursuit algorithm in terms of restricted isometry constant. IEEE Signal Process. Lett. 21(11), 1365–1369 (2014).

8. C Song, S Xia, X Liu, Improved analysis for SP and CoSaMP algorithms in terms of restricted isometry constants. http://arxiv.org/pdf/1309.6073.pdf.

9. M Herman, T Strohmer, General deviants: an analysis of perturbations in compressed sensing. IEEE J. Sel. Top. Signal Process. 4(2), 342–349 (2010).

10. T Blumensath, M Davies, Compressed sensing and source separation, in Int. Conf. Ind. Comp. Anal. Source Sep. (2007), pp. 341–348.

11. A Fannjiang, P Yan, T Strohmer, Compressed remote sensing of sparse objects. SIAM J. Imag. Sci. 3(3), 596–618 (2010).

12. M Herman, T Strohmer, High-resolution radar via compressed sensing. IEEE Trans. Signal Process. 57(6), 2275–2284 (2009).

13. J Ding, L Chen, Y Gu, Perturbation analysis of orthogonal matching pursuit. IEEE Trans. Signal Process. 61(2), 398–410 (2013).

14. M Herman, D Needell, Mixed operators in compressed sensing, in 44th Annual Conference on Information Sciences and Systems (CISS) (IEEE, Princeton, 2010), pp. 1–6.

15. L Chen, Y Gu, Oracle-order recovery performance of greedy pursuits with replacement against general perturbations. IEEE Trans. Signal Process. 61(18), 4625–4636 (2013).

16. E Candès, T Tao, The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. 35, 2313–2351 (2007).

17. Z Ben-Haim, YC Eldar, M Elad, Coherence-based performance guarantees for estimating a sparse vector under random noise. IEEE Trans. Signal Process. 58(10), 5030–5043 (2010).

18. P Bickel, Y Ritov, AB Tsybakov, Simultaneous analysis of Lasso and Dantzig selector. Ann. Stat. 37(4), 1705–1732 (2009).

19. R Giryes, M Elad, RIP-based near-oracle performance guarantees for SP, CoSaMP, and IHT. IEEE Trans. Signal Process. 60(3), 1465–1468 (2012).

20. T Cai, L Wang, G Xu, Stable recovery of sparse signals and an oracle inequality. IEEE Trans. Inf. Theory 56(7), 3516–3522 (2010).

21. T Blumensath, M Davies, Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27(3), 265–274 (2009).

22. R Baraniuk, M Davenport, R DeVore, M Wakin, A simple proof of the restricted isometry property for random matrices. Construct. Approx. 28(3), 253–263 (2008).

23. T Cai, A Zhang, Sharp RIP bound for sparse signal and low-rank matrix recovery. Appl. Comput. Harmon. Anal. 35(1), 74–93 (2013).

24. J Blanchard, J Tanner, K Wei, Conjugate gradient iterative hard thresholding: observed noise stability for compressed sensing. IEEE Trans. Signal Process. 63(2), 528–537 (2015).


Acknowledgements

This work was supported by the Scientific Research Foundation for Ph.D. of Henan Normal University (no. qd14142), the Key Scientific Research Project of Colleges and Universities in Henan Province (no. 15B120004) and National Natural Science Foundation of China (no. 11526081 and 11601134).

Authors’ contributions

∙ Two relaxed sufficient conditions are presented under total perturbations for SP and CoSaMP.

∙ The advantage of our condition is discussed.

∙ Numerical experiments validate that SP and CoSaMP can provide oracle-order recovery performance.

Competing interests

The author declares that he has no competing interests.

Author information

Correspondence to Haifeng Li.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Li, H. Improved analysis of SP and CoSaMP under total perturbations. EURASIP J. Adv. Signal Process. 2016, 112 (2016). https://doi.org/10.1186/s13634-016-0412-5
