
# Improved analysis of SP and CoSaMP under total perturbations

*EURASIP Journal on Advances in Signal Processing*
**volume 2016**, Article number: 112 (2016)

## Abstract

In practice, in the underdetermined model **y**=**Ax**, where **x** is a *K*-sparse vector (i.e., it has no more than *K* nonzero entries), both **y** and **A** may be totally perturbed. From a theoretical standpoint, a more relaxed condition means that fewer measurements are needed to guarantee sparse recovery. In this paper, based on the restricted isometry property (RIP), two relaxed sufficient conditions are presented for subspace pursuit (SP) and compressive sampling matching pursuit (CoSaMP) that guarantee recovery of the sparse vector **x** under total perturbations. Taking a random matrix as the measurement matrix, we also discuss the advantage of our conditions. Numerical experiments validate that SP and CoSaMP provide oracle-order recovery performance.

## Introduction

Compressed sensing [1] has attracted increasing attention since it was proposed. According to compressed sensing theory, sparse signals can be accurately reconstructed from far fewer samples than required by the classical Shannon-Nyquist theorem.

Typically, an underdetermined equation

\(\mathbf{y}=\mathbf{A}\mathbf{x} \qquad (1)\)

is to be solved, where the measurement matrix \(\mathbf {A}\in \mathbb {R}^{m\times N}\) with *m*<*N*. There exists a unique solution to (1) when **x** is assumed to be *K*-sparse, i.e., **x** has at most *K* nonzero entries.

To find the sparsest solution of Eq. (1), minimizing \(\|\mathbf {x}\|_{\ell _{0}}\) (the \(\ell_{0}\)-“norm” counts the number of nonzero entries in **x**) is an intuitive idea. However, this is an NP-hard problem [2]. Many suboptimal methods have been presented to overcome this difficulty.
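To make the combinatorial nature of \(\ell_{0}\) minimization concrete, the following sketch (in Python rather than the paper's MATLAB; all names are ours) recovers the sparsest solution of an underdetermined system by exhaustive search over supports. The exponential number of candidate supports is precisely what makes the problem NP-hard in general.

```python
import itertools
import numpy as np

def l0_brute_force(A, y, tol=1e-8):
    """Find the sparsest x with A @ x = y by exhaustive support search.

    Illustrative only: the number of candidate supports grows
    combinatorially in N, which is why l0 minimization is NP-hard.
    """
    m, N = A.shape
    for k in range(1, N + 1):                     # try sparsity levels 1, 2, ...
        for support in itertools.combinations(range(N), k):
            sub = A[:, support]
            coef = np.linalg.lstsq(sub, y, rcond=None)[0]
            if np.linalg.norm(sub @ coef - y) < tol:
                x = np.zeros(N)
                x[list(support)] = coef
                return x
    return None

# tiny example: N = 6 columns, a 2-sparse ground truth
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
x_true = np.zeros(6)
x_true[[1, 4]] = [1.0, -2.0]
x_hat = l0_brute_force(A, A @ x_true)
```

For a generic random matrix, any four columns are linearly independent, so the 2-sparse solution found by the search coincides with the ground truth.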

The greedy algorithms have received considerable attention due to their low complexity and simple geometric interpretation. They mainly include orthogonal matching pursuit (OMP) [3], subspace pursuit (SP) [4], compressive sampling matching pursuit (CoSaMP) [5], analysis SP (ASP), and analysis CoSaMP (ACoSaMP) [6]. The basic idea behind this kind of algorithm is to find the support of the unknown signal sequentially. Recently, using a new method, two relaxed sufficient conditions were presented for SP and CoSaMP by Song et al. [7, 8]. In this paper, we focus on SP and CoSaMP, which are efficient algorithms.
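As a sketch of how this family of algorithms finds the support sequentially, here is a minimal CoSaMP loop in Python, written by us following the description in [5] (not the authors' code; function and variable names are ours). Each iteration merges the 2*K* largest proxy entries with the current support, solves a restricted least squares problem, and prunes back to *K* entries.

```python
import numpy as np

def cosamp(A, y, K, n_iter=50, tol=1e-10):
    """Minimal CoSaMP sketch after Needell & Tropp [5]."""
    m, N = A.shape
    x = np.zeros(N)
    r = y.copy()
    for _ in range(n_iter):
        proxy = A.T @ r                                   # signal proxy A' r
        omega = np.argsort(np.abs(proxy))[-2 * K:]        # 2K largest proxy entries
        T = np.union1d(omega, np.flatnonzero(x)).astype(int)
        b = np.linalg.lstsq(A[:, T], y, rcond=None)[0]    # LS on merged support
        x = np.zeros(N)
        x[T] = b
        x[np.argsort(np.abs(x))[:-K]] = 0.0               # prune to K largest
        r = y - A @ x                                     # update residual
        if np.linalg.norm(r) < tol:
            break
    return x

# noiseless demo at comfortable dimensions
rng = np.random.default_rng(0)
m, N, K = 100, 200, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N)
supp = rng.choice(N, size=K, replace=False)
x_true[supp] = rng.choice([-1.0, 1.0], size=K)
x_hat = cosamp(A, A @ x_true, K)
```

SP differs mainly in adding *K* (rather than 2*K*) candidates per iteration and in performing a second least squares after pruning.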

In practice, both **y** and **A** are often perturbed in model (1). It is important to consider these perturbations since they can account for precision errors when applications call for physically implementing the matrix **A** in a sensor [9]. This case arises, for example, in source separation [10].

In fact, under total perturbations, model (1) is formulated as

\(\hat{\mathbf{y}}=\mathbf{y}+\mathbf{e}=\mathbf{A}\mathbf{x}+\mathbf{e},\qquad \hat{\mathbf{A}}=\mathbf{A}+\mathbf{E}, \qquad (2)\)

with inputs \(\hat {\mathbf {A}}\in \mathbb {R}^{m\times N}\) and \(\hat {\mathbf {y}}\). Here, **e** and **E** are referred to as the additive noise and the multiplicative noise, respectively. This case arises in remote sensing [11], radar [12], and so on. Using the restricted isometry property (RIP) [2], Ding et al. [13] discussed model (2) for OMP. Under total perturbations, the works [14, 15] discussed the performance of SP and CoSaMP and showed that oracle-order recovery performance of SP and CoSaMP is guaranteed. In addition, there are many previous works on near-oracle performance [16–20].

Using the results in [7, 8], in this paper, we give improved conditions for SP and CoSaMP under total perturbations. Numerical experiments validate that SP and CoSaMP provide oracle-order recovery performance.

Now, we give some notation used throughout the paper. Scalars are written as lowercase letters, e.g., *d*. We denote vectors by boldface lowercase letters, e.g., **x**, and matrices by boldface uppercase letters, e.g., **D**. The *i*th element of **x** is denoted by \(x_{i}\). \(\mathbf{D}'\) denotes the transpose of **D**. The cardinality of a finite set *Γ* is denoted by |*Γ*|. \(\|\mathbf {D}\|^{(K)}_{2}\) denotes the largest spectral norm taken over all *K*-column submatrices of **D**. We write \(\mathbf{D}_{\Gamma}\) for the column submatrix of **D** whose indices are listed in the set *Γ*.

## Problem formulation

In practice, we often encounter approximately sparse vectors [21] rather than exactly sparse ones. Although such vectors are not exactly sparse, they are well approximated by a *K*-sparse vector. The vector **x** is assumed to be approximately sparse, and we can approximate it by a *K*-sparse vector \(\mathbf{x}_{K}\) when the energy of \(\mathbf {x}_{K}^{c}=\mathbf {x}-\mathbf {x}_{K}\) is very small, where \(\mathbf{x}_{K}\) is the best *K*-term approximation of **x**, i.e., the nonzero entries in \(\mathbf{x}_{K}\) correspond to the *K* largest (in magnitude) entries in **x**. The approximation error can be quantified as

\(r_{K}=\frac{\|\mathbf{x}_{K}^{c}\|_{2}}{\|\mathbf{x}_{K}\|_{2}},\qquad s_{K}=\frac{\|\mathbf{x}_{K}^{c}\|_{1}}{\sqrt{K}\,\|\mathbf{x}_{K}\|_{2}}. \qquad (3)\)

In this paper, the following model is considered:

\(\hat{\mathbf{y}}=\mathbf{A}\mathbf{x}+\mathbf{e}. \qquad (4)\)

Here, the available information for recovering **x** is \(\hat {\mathbf {y}}\) and \(\hat {\mathbf {A}}=\mathbf {A}+\mathbf {E}\).

In real-world applications, the exact nature of **E** and **e** is often unknown, and we are forced to estimate their relative upper bounds instead. The perturbations **E** and **e** are quantified with the following relative bounds:

\(\frac{\|\mathbf{E}\|_{2}^{(K)}}{\|\mathbf{A}\|_{2}^{(K)}}\leq \varepsilon_{\mathbf{A}},\qquad \frac{\|\mathbf{e}\|_{2}}{\|\mathbf{A}\mathbf{x}\|_{2}}\leq \varepsilon_{\mathbf{y}}, \qquad (5)\)

where \(\|\mathbf {A}\|^{(K)}_{2}\) and \(\|\mathbf{A}\mathbf{x}\|_{2}\) are nonzero. Now, according to \(\mathbf {A}=\hat {\mathbf {A}}-\mathbf {E}\), we give the upper bound of \(\|\mathbf {E}\|_{2}^{(K)}\).

Then, we have

\(\|\mathbf{E}\|_{2}^{(K)}\leq \varepsilon_{\mathbf{A}}\|\mathbf{A}\|_{2}^{(K)}\leq \varepsilon_{\mathbf{A}}\left(\|\hat{\mathbf{A}}\|_{2}^{(K)}+\|\mathbf{E}\|_{2}^{(K)}\right), \qquad (6)\)

which implies

\(\|\mathbf{E}\|_{2}^{(K)}\leq \frac{\varepsilon_{\mathbf{A}}}{1-\varepsilon_{\mathbf{A}}}\,\|\hat{\mathbf{A}}\|_{2}^{(K)}. \qquad (7)\)

In this paper, we are only interested in the case where \(\varepsilon _{\mathbf {A}}\) and \(\varepsilon _{\mathbf {y}}\) are far less than 1.

## RIP-based recovery condition

### Definition 1

([2]) A matrix **A** satisfies the RIP of order *K* if there exists a constant *δ*∈(0,1) such that

\((1-\delta)\|\mathbf{h}\|_{2}^{2}\leq\|\mathbf{A}\mathbf{h}\|_{2}^{2}\leq(1+\delta)\|\mathbf{h}\|_{2}^{2} \qquad (8)\)

holds for all *K*-sparse vectors **h**. In particular, the minimum of all constants *δ* satisfying (8) is called the restricted isometry constant (RIC) \(\delta_{K}\).
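For intuition, the RIC of a small matrix can be computed by brute force (Python sketch, ours): over every *K*-column submatrix \(\mathbf{A}_{S}\), the eigenvalues of the Gram matrix \(\mathbf{A}_{S}'\mathbf{A}_{S}\) must lie in [1−*δ*, 1+*δ*], so \(\delta_{K}\) is the largest deviation of any such eigenvalue from 1. The search is exponential in *N*, so this is viable only at toy scale.

```python
import itertools
import numpy as np

def empirical_ric(A, K):
    """Exhaustively compute delta_K: the largest deviation from 1 of any
    eigenvalue of a K-column Gram matrix A_S' A_S."""
    N = A.shape[1]
    delta = 0.0
    for S in itertools.combinations(range(N), K):
        eig = np.linalg.eigvalsh(A[:, S].T @ A[:, S])
        delta = max(delta, 1 - eig[0], eig[-1] - 1)
    return delta

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 10)) / np.sqrt(50)   # i.i.d. N(0, 1/m) entries
delta_2 = empirical_ric(A, K=2)
```

With entries drawn from \(\mathcal{N}(0,\frac{1}{m})\), the column Gram matrices concentrate around the identity, so *δ*₂ lands well inside (0,1) for these dimensions.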

### Theorem 1

Given a noisy measurement vector \(\mathbf{y}=\mathbf{A}\mathbf{x}_{K}+\mathbf{e}\). If **A** satisfies \(\delta_{aK}\leq c\), then the sequence \(\mathbf{x}^{n}\) defined by SP and CoSaMP satisfies

\(\|\mathbf{x}_{K}-\mathbf{x}^{n}\|_{2}\leq\rho^{n}\|\mathbf{x}_{K}\|_{2}+\tau\|\mathbf{e}\|_{2}. \qquad (9)\)

The specific values of the constants *a*, *c*, *ρ*, and *τ* are illustrated in Table 1.

### Proof

After slight manipulation, the results follow from [7] and [8]. □

### Theorem 2

Consider (4). If the perturbed matrix \(\hat {\mathbf {A}}\) satisfies RIP with

then the relative error of the solution \(\mathbf{x}^{n}\) of SP and CoSaMP satisfies

where the specific values of the constants *a*, *c*, \(\hat {\rho }\), and \(\hat {\tau }\) are illustrated in Tables 1 and 2. In addition, after at most

iterations, SP and CoSaMP can obtain the error

### Proof

The sensing process (4) is equivalent to

\(\hat{\mathbf{y}}=\hat{\mathbf{A}}\mathbf{x}_{K}+\hat{\mathbf{e}},\)

where \(\hat {\mathbf {e}}=-\mathbf {E}\mathbf {x}+\hat {\mathbf {A}}\mathbf {x}_{K}^{c}+\mathbf {e}\) is the error term. Its energy is bounded as follows. By Proposition 3.5 in [5] and (7),

where (15) follows from (7) and (3).

Furthermore,

Then, combining (5), (15), and (16), we have

By Theorem 1, under condition (10), the solution \(\mathbf{x}^{n}\) defined by SP and CoSaMP satisfies

where \(\hat {\rho }<1\) and \(\hat {\tau }\) are constants specified in Table 2.

According to the triangle inequality, we have

By condition (12),

Combining (11) and (20), (13) follows immediately. □

### Remark 1

The weaker the RIC bound, the fewer measurements are required, and the improved RIC results can be used in many CS-based applications [7]. It is clear that when \(\hat {\rho }=\frac {1}{2}\), for SP, Theorem 2 gives \(\hat {\delta }_{3K}=0.3063\) and \(\hat {\tau }=13.1303\), while Theorem 2 in [15] gives \(\hat {\delta }_{3K}=0.1397\) and \(\hat {\tau }=15.6476\) (\(\tilde {C}\) and \(\tilde {D}\) in [15]). For CoSaMP, Theorem 2 gives \(\hat {\delta }_{4K}=0.3083\) and \(\hat {\tau }=13.9536\), while Theorem 2 in [15] gives \(\hat {\delta }_{4K}=0.101\) and \(\hat {\tau }=15.3485\) (\(\tilde {C}\) and \(\tilde {D}\) in [15]). Hence, the proposed results improve the theoretical guarantees for SP and CoSaMP relative to [15].

To be specific, for an *m*×*N* random matrix \(\hat {\mathbf {A}}\) whose entries are independent and identically distributed Gaussian random variables \(\mathcal {N}(0,\frac {1}{m})\), \(\hat {\mathbf {A}}\) satisfies the RIP condition (\(\hat {\delta }_{K}\leq \varepsilon \)) with overwhelming probability provided [22]

\(m\geq b\,\varepsilon^{-2}K\log\left(\frac{N}{K}\right), \qquad (21)\)

where *b* is a constant. Consider SP: by Lemma 4.1 in [23], \(\hat {\delta }_{3K}<0.4859\) can be converted to \(\hat {\delta }_{K}<0.097\), while \(\hat {\delta }_{3K}<0.206\) ([15]) can be converted to \(\hat {\delta }_{K}<0.041\). Hence, according to (21), the number of measurements *m* ensuring reconstruction for Theorem 2 is \(m\geq 106.2812\,bK\log (\frac {N}{K})\), while that for Theorem 2 in [15] is \(m\geq 594.8840\,bK\log (\frac {N}{K})\).
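The constants above come straight from plugging each RIC bound into the \(\varepsilon^{-2}\) factor of the Gaussian measurement bound; a two-line check:

```python
# Measurement factors in Remark 1: m >= b * eps**-2 * K * log(N/K),
# with eps the required bound on delta_K.
eps_ours = 0.097        # delta_K bound implied by Theorem 2 (SP)
eps_prev = 0.041        # delta_K bound implied by Theorem 2 in [15]

factor_ours = 1 / eps_ours**2   # approximately 106.2812
factor_prev = 1 / eps_prev**2   # approximately 594.8840
```

The relaxed RIC bound thus reduces the constant in front of \(K\log(N/K)\) by a factor of about 5.6.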

### Remark 2

It follows from (11) that the recovery performance is stable under both perturbations. It depends on the three terms \(r_{K}+s_{K}\), \(\|\mathbf{e}\|_{2}\), and \(\|\mathbf {E}\|_{2}^{(K)}\). In general, no recovery method can do better than the oracle least squares (LS) method. The authors in [15] presented an upper bound for oracle recovery (Part IV.B in [15]):

where \(\kappa_{\Psi}=\|\Psi\|_{2}\|\Psi^{-1}\|_{2}=1\) (*Ψ* is the identity matrix in our paper) and \(\hat {D}=\frac {1}{\sqrt {1-\hat {\delta }_{K}}}\). When \(\hat {\mathbf {A}}\) is fixed, \(\hat {D}\) and \(\hat {\tau }\) are constants. So, comparing (13) with (22), the error bound of SP (or CoSaMP) and the error bound of oracle recovery differ only in their coefficients.

When **x** is *K*-sparse, it can be derived that \(r_{K}=s_{K}=0\). The relative error of the solution in this case is stated as Corollary 1.

### Corollary 1

Suppose that **x** is *K*-sparse in model (4). If the perturbed matrix \(\hat {\mathbf {A}}\) satisfies RIP with

then, the relative error of the solution \(\mathbf{x}^{n}\) of SP and CoSaMP satisfies

where the specific values of the constants \(a, c, \hat {\rho }\), and \(\hat {\tau }\) are illustrated in Tables 1 and 2. In addition, after at most

iterations, SP and CoSaMP can obtain the error

## Numerical experiments

In this section, we perform numerical experiments in MATLAB R2013a to investigate the performance of SP and CoSaMP under total perturbations. The algorithms are tested with two random matrix ensembles:

- \(\mathcal {N}\): Gaussian matrices with entries drawn i.i.d. from \(\mathcal {N}\left (0,\frac {1}{m}\right)\);

- \(\mathcal {S}_{7}\): sparse matrices with seven nonzero entries per column, drawn with equal probability from \(\left \{-\frac {1}{\sqrt {7}},\frac {1}{\sqrt {7}}\right \}\), with locations in each column chosen uniformly.

As noted in [24], the above two random matrix ensembles are representative of the random matrices frequently encountered in compressed sensing.

The sparse vector **x** (of length *N*=1024) is drawn from the random binary vector distribution: *K* locations for the nonzero entries are selected uniformly, and their values are chosen from {−1,1} with equal probability.
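The two matrix ensembles and the test signal above can be generated as follows (a Python sketch standing in for the paper's MATLAB setup; variable names and the seed are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
m, N, K = 300, 1024, 20

# Gaussian ensemble: i.i.d. N(0, 1/m) entries
A_gauss = rng.standard_normal((m, N)) / np.sqrt(m)

# S_7 ensemble: 7 nonzeros per column with values +-1/sqrt(7),
# locations chosen uniformly within each column
A_s7 = np.zeros((m, N))
for j in range(N):
    rows = rng.choice(m, size=7, replace=False)
    A_s7[rows, j] = rng.choice([-1.0, 1.0], size=7) / np.sqrt(7)

# random binary K-sparse signal: K uniform locations, values +-1
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=K)
```

Note that each \(\mathcal{S}_{7}\) column has unit Euclidean norm by construction, matching the normalization of the Gaussian ensemble in expectation.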

The additive noise **e** and the multiplicative noise **E** are random Gaussian. In each trial, according to (5), the relative perturbations are set to

where **A** is the measurement matrix. Then, \(\hat {\mathbf {y}}\) and \(\hat {\mathbf {A}}\) are generated by (2). The relative approximation error is defined by

where \(\mathbf{x}^{*}\) is an approximate solution. The simulation is repeated 500 times to obtain the average relative error.
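A single trial of this setup can be sketched as below (Python, ours). Since the paper does not spell out how the Gaussian perturbations are rescaled, we scale **E** to a target fraction of the full spectral norm \(\|\mathbf{A}\|_{2}\) rather than \(\|\mathbf{A}\|_{2}^{(K)}\); that choice is our assumption.

```python
import numpy as np

def relative_error(x_star, x):
    """Relative approximation error ||x* - x||_2 / ||x||_2."""
    return np.linalg.norm(x_star - x) / np.linalg.norm(x)

def perturb(A, x, eps_A=0.01, eps_y=0.01, rng=None):
    """Build (hat_y, hat_A) with Gaussian perturbations rescaled to the
    relative levels of (5).  Scaling E by ||A||_2 instead of ||A||_2^(K)
    is our assumption, made for simplicity."""
    if rng is None:
        rng = np.random.default_rng()
    E = rng.standard_normal(A.shape)
    E *= eps_A * np.linalg.norm(A, 2) / np.linalg.norm(E, 2)
    e = rng.standard_normal(A.shape[0])
    e *= eps_y * np.linalg.norm(A @ x) / np.linalg.norm(e)
    return A @ x + e, A + E

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x = np.zeros(60)
x[[3, 17, 40]] = 1.0
hat_y, hat_A = perturb(A, x, rng=rng)
```

The recovery algorithm then sees only `hat_y` and `hat_A`, exactly as in model (4).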

### Different sparsity level for SP and CoSaMP

For SP and CoSaMP, the sparsity level *K* needs to be known a priori. The first experiment demonstrates the performance degradation of SP and CoSaMP when *K* is misestimated. Here, the true sparsity level is *K*=20. The number of measurements *m* varies from 100 to 550 with step size 50. The results are shown in Figs. 1 and 2.

Figures 1 and 2 show the curves of the relative error vs. the estimated sparsity. The error clearly decreases as *m* increases. In addition, one can see that the relative errors of SP and CoSaMP grow when the estimated sparsity *K* is far from the truth. Thus, in future work, we will aim to propose an algorithm that recovers unknown signals without knowledge of the sparsity level.

### Observed noise stability for SP and CoSaMP

In the second simulation, we examine the observed stability of SP and CoSaMP under general perturbations. We compare the performance of SP and CoSaMP with the oracle LS method. The number of measurements *m* varies from 50 to 500 with step size 50. The results are shown in Figs. 3 and 4.

As can be seen from Figs. 3 and 4, the error decreases as *m* increases. In addition, the curves of SP, CoSaMP, and the oracle LS method are almost identical when *m* is at least 150. Hence, SP and CoSaMP provide oracle-order recovery performance.
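The oracle LS baseline used for this comparison solves least squares on the true support, as if an oracle revealed the support. A minimal sketch (Python, ours; dimensions chosen for illustration):

```python
import numpy as np

def oracle_ls(A, y, support):
    """Oracle least squares: given the true support, solve the
    restricted LS problem and embed the coefficients back into R^N."""
    x = np.zeros(A.shape[1])
    coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    x[support] = coef
    return x

rng = np.random.default_rng(4)
m, N, K = 100, 256, 10
A = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N)
supp = np.sort(rng.choice(N, size=K, replace=False))
x_true[supp] = rng.choice([-1.0, 1.0], size=K)
e = 0.01 * rng.standard_normal(m)          # small additive noise
x_oracle = oracle_ls(A, A @ x_true + e, supp)
```

No support-agnostic algorithm can beat this estimator on average, which is why matching its error up to a constant ("oracle-order" recovery) is the benchmark in Figs. 3 and 4.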

## Conclusions

For SP and CoSaMP, improved sufficient conditions are presented in this paper that guarantee recovery of the sparse vector **x** under total perturbations. Compared with the condition in [15], taking a random matrix as the measurement matrix, our conditions reduce the required number of measurements. Numerical experiments show that SP and CoSaMP obtain oracle-order recovery performance under total perturbations. Furthermore, proposing an algorithm that does not need the sparsity level *K* is our future work.

## References

1. E Candès, J Romberg, T Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory **52**(2), 489–509 (2006).
2. E Candès, T Tao, Decoding by linear programming. IEEE Trans. Inf. Theory **51**(12), 4203–4215 (2005).
3. J Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory **50**(10), 2231–2242 (2004).
4. W Dai, O Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory **55**(5), 2230–2249 (2009).
5. D Needell, J Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. **26**(3), 301–321 (2009).
6. R Giryes, S Nam, M Elad, R Gribonval, M Davies, Greedy-like algorithms for the cosparse analysis model. Linear Algebra Appl. **441**, 22–60 (2014).
7. C Song, S Xia, X Liu, Improved analysis for subspace pursuit algorithm in terms of restricted isometry constant. IEEE Signal Process. Lett. **21**(11), 1365–1369 (2014).
8. C Song, S Xia, X Liu, Improved analysis for SP and CoSaMP algorithms in terms of restricted isometry constants. http://arxiv.org/pdf/1309.6073.pdf.
9. M Herman, T Strohmer, General deviants: an analysis of perturbations in compressed sensing. IEEE J. Sel. Top. Signal Process. **4**(2), 342–349 (2010).
10. T Blumensath, M Davies, Compressed sensing and source separation, in *Int. Conf. Ind. Comp. Anal. Source Sep.* (2007), pp. 341–348.
11. A Fannjiang, P Yan, T Strohmer, Compressed remote sensing of sparse objects. SIAM J. Imag. Sci. **3**(3), 596–618 (2010).
12. M Herman, T Strohmer, High-resolution radar via compressed sensing. IEEE Trans. Signal Process. **57**(6), 2275–2284 (2009).
13. J Ding, L Chen, Y Gu, Perturbation analysis of orthogonal matching pursuit. IEEE Trans. Signal Process. **61**(2), 398–410 (2013).
14. M Herman, D Needell, Mixed operators in compressed sensing, in *Information Sciences and Systems (CISS), 2010 44th Annual Conference on* (IEEE, Princeton, 2010), pp. 1–6.
15. L Chen, Y Gu, Oracle-order recovery performance of greedy pursuits with replacement against general perturbations. IEEE Trans. Signal Process. **61**(18), 4625–4636 (2013).
16. E Candès, T Tao, The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. **35**, 2313–2351 (2007).
17. Z Ben-Haim, YC Eldar, M Elad, Coherence-based performance guarantees for estimating a sparse vector under random noise. IEEE Trans. Signal Process. **58**(10), 5030–5043 (2010).
18. P Bickel, Y Ritov, AB Tsybakov, Simultaneous analysis of Lasso and Dantzig selector. Ann. Stat. **37**(4), 1705–1732 (2009).
19. R Giryes, M Elad, RIP-based near-oracle performance guarantees for SP, CoSaMP, and IHT. IEEE Trans. Signal Process. **60**(3), 1465–1468 (2012).
20. T Cai, L Wang, G Xu, Stable recovery of sparse signals and an oracle inequality. IEEE Trans. Inf. Theory **56**(7), 3516–3522 (2010).
21. T Blumensath, M Davies, Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. **27**(3), 265–274 (2009).
22. R Baraniuk, M Davenport, R DeVore, M Wakin, A simple proof of the restricted isometry property for random matrices. Construct. Approx. **28**(3), 253–263 (2008).
23. T Cai, A Zhang, Sharp RIP bound for sparse signal and low-rank matrix recovery. Appl. Comput. Harmon. Anal. **35**(1), 74–93 (2013).
24. J Blanchard, J Tanner, K Wei, Conjugate gradient iterative hard thresholding: observed noise stability for compressed sensing. IEEE Trans. Signal Process. **63**(2), 528–537 (2015).

## Acknowledgements

This work was supported by the Scientific Research Foundation for Ph.D. of Henan Normal University (no. qd14142), the Key Scientific Research Project of Colleges and Universities in Henan Province (no. 15B120004) and National Natural Science Foundation of China (no. 11526081 and 11601134).

### Authors’ contributions

- Two relaxed sufficient conditions are presented under total perturbations for SP and CoSaMP.

- The advantage of our conditions is discussed.

- Numerical experiments validate that SP and CoSaMP provide oracle-order recovery performance.

### Competing interests

The author declares that he has no competing interests.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Li, H. Improved analysis of SP and CoSaMP under total perturbations.
*EURASIP J. Adv. Signal Process.* **2016**, 112 (2016). https://doi.org/10.1186/s13634-016-0412-5


DOI: https://doi.org/10.1186/s13634-016-0412-5

### Keywords

- Compressed sensing
- Perturbation
- Restricted isometry property
- Greedy algorithm