# Effect of errors in partially known support on the recoverability of modified compressive sensing

## Abstract

The recently proposed modified compressive sensing (modified-CS), which utilizes the partially known support as prior knowledge, significantly improves the performance of compressive sensing. In practice, the known part inevitably contains some errors, which may degrade the gain of modified-CS. Within a stochastic framework, this article discusses the effect of errors in the known part on the recoverability of modified-CS. First, based on a probabilistic measure of recoverability, two probability inequalities on recoverability are established, which reflect the changing tendencies of the recoverability of modified-CS with respect to the addition of errors in the known support and the sparsity of the original sparse vector. A direct corollary further reveals the degree to which recoverability is affected by adding an error to the known support. Second, the maximum number of errors that modified-CS can bear is also analyzed. We prove a quantitative bound on the number of errors in the known part that relates to the number of samples and the sparsity of the original vector. This bound mirrors the fault-tolerance capability of modified-CS. Simulation experiments have been carried out to validate our theoretical results.

## Introduction

The problem of finding sparse solutions to under-determined linear systems from limited data arises in many applications, including biomedical imaging [1], sensor networks [2], wireless communication [3], and pattern recognition [4]. This problem can be modeled as follows:

$\mathbf{y} = \mathbf{A}\mathbf{x}^{*}$
(1)

where $\mathbf{A} \in \mathbf{R}^{m \times n}$ is referred to as a measurement matrix with $m < n$, $\mathbf{x}^{*} = (x_1^{*}, \dots, x_n^{*})^T \in \mathbf{R}^{n}$ is an unknown sparse vector, and $\mathbf{y} \in \mathbf{R}^{m}$ is an observable vector. The goal is to recover the high-dimensional vector $\mathbf{x}^{*}$ from its lower-dimensional observable vector $\mathbf{y}$. This is one of the central problems in compressed sensing (CS), and the major breakthrough in this area has been the demonstration that $\ell_1$ minimization can efficiently recover the sparse vector $\mathbf{x}^{*}$ from a far smaller number of measurements $\mathbf{y}$ than its ambient dimension [5, 6].

Recently, several works have investigated ways of exploiting prior knowledge to improve the performance of compressive sensing [7–9]. Typically, in [7], a promising approach named modified compressive sensing (modified-CS) is proposed for the case when the support of the signal is partially known. Modified-CS is especially suitable for the recovery of (time) sequences of sparse vectors whose supports evolve slowly over time. Vaswani and Lu [7] analyzed when the solution of modified-CS equals the original vector $\mathbf{x}^{*}$, i.e., the recoverability problem. They demonstrated that when the sizes of the unknown part of the support and of the errors in the known part are small compared to the support size, the sufficient conditions for recoverability are much weaker than those needed for the classical $\ell_1$ minimization method. Further, we derived a sufficient and necessary condition on the recoverability of modified-CS and investigated recoverability in a probabilistic way [10].

Obviously, the key assumption that modified-CS relies on is that the support changes slowly over time, i.e., the unknown part of the support and the errors in the partially known support are small compared to the support size [11]. As we demonstrated in [10], prior support information improves the recoverability of modified-CS. However, what is the effect of errors in the partially known support on the recoverability of modified-CS? Furthermore, how many errors can modified-CS bear? These problems, to the best of our knowledge, have not been studied in any other work. We define the fault-tolerance capability of modified-CS as the maximum number of errors that modified-CS can bear. The main objective of this article is to analyze the effect of errors on recoverability as well as the fault-tolerance capability of modified-CS. First, we propose a probabilistic measure of recoverability, expressed as the probability that the solution of modified-CS equals the original vector $\mathbf{x}^{*}$. Furthermore, two probability inequalities on recoverability are established. These inequalities reflect the changing tendencies of the recoverability of modified-CS with respect to the sparsity of the vector $\mathbf{x}^{*}$ and the number of errors in the known part. Second, the fault-tolerance capability of modified-CS is analyzed. We prove that, for a given matrix $\mathbf{A}$, if the number of errors exceeds the fault-tolerance capability of modified-CS, modified-CS cannot recover any vector.

## Preliminaries

### Notations and modified-CS

We first establish some important notation. Let $\mathbf{N}$ denote the index set of the nonzero entries of $\mathbf{x}^{*}$, also known as the support of $\mathbf{x}^{*}$. We assume that the support of $\mathbf{x}^{*}$ is partially known, but the known part may contain some errors. In the sequel, the known part of the support is denoted by $\mathbf{T}$, the unknown part of the support by $\mathbf{\Delta}$, and the set of errors in $\mathbf{T}$ by $\mathbf{\Delta}_e$. Hence, $\mathbf{N}$ can be split as $\mathbf{N} = (\mathbf{T} \cup \mathbf{\Delta}) \setminus \mathbf{\Delta}_e$. The set operations $\cup$ and $\setminus$ stand for set union and set difference, respectively. Recovery based on modified-CS is implemented by solving the following optimization problem.

$\min_{\mathbf{x}} \|\mathbf{x}_{\mathbf{T}^c}\|_1 \quad \text{s.t.} \quad \mathbf{y} = \mathbf{A}\mathbf{x}$
(2)

where $\|\mathbf{x}\|_1$ denotes the 1-norm of a vector $\mathbf{x}$, $\mathbf{T}^c = \{1, \dots, n\} \setminus \mathbf{T}$, and $\mathbf{x}_{\mathbf{T}^c}$ is a column vector composed of the entries of $\mathbf{x}$ whose indices are in $\mathbf{T}^c$.
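To make problem (2) concrete, the following sketch solves it as a linear program, a standard reformulation of weighted $\ell_1$ minimization. This is our own illustration, not code from the article: the function name `modified_cs` and the use of SciPy's `linprog` (HiGHS backend) are assumptions, and the demo dimensions are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

def modified_cs(A, y, T):
    """Solve min ||x_{T^c}||_1  s.t.  A x = y  as a linear program.

    Variables are z = [x, t], where t_j upper-bounds |x_k| for the
    j-th index k in T^c, so minimizing sum(t) minimizes ||x_{T^c}||_1.
    """
    m, n = A.shape
    Tc = [k for k in range(n) if k not in set(T)]
    p = len(Tc)
    c = np.concatenate([np.zeros(n), np.ones(p)])   # objective: sum of t
    A_eq = np.hstack([A, np.zeros((m, p))])         # equality: A x = y
    A_ub = np.zeros((2 * p, n + p))                 # +-x_k - t_j <= 0
    for j, k in enumerate(Tc):
        A_ub[2 * j, k], A_ub[2 * j, n + j] = 1.0, -1.0
        A_ub[2 * j + 1, k], A_ub[2 * j + 1, n + j] = -1.0, -1.0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * p),
                  A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * (n + p), method="highs")
    return res.x[:n]

# Small demo: with the full support known (T = N) and A_T of full
# column rank, the minimizer of (2) is x* itself.
rng = np.random.default_rng(0)
A = rng.uniform(-0.5, 0.5, size=(7, 9))
N = [1, 4, 7]
x_true = np.zeros(9)
x_true[N] = [0.8, -0.6, 0.5]
x_hat = modified_cs(A, A @ x_true, T=N)
```

When $\mathbf{T}$ is empty the same program reduces to the classical $\ell_1$ minimization of CS.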

### Sufficient and necessary condition on recoverability of modified-CS

Let $\mathbf{x}^{(1)}$ denote the solution of the model in (2). In this section, we introduce a sufficient and necessary condition on the recoverability of modified-CS, i.e., on when $\mathbf{x}^{(1)}$ equals the original sparse vector $\mathbf{x}^{*}$ (see [10] for more details of this result).

#### Theorem 1

For a given vector $\mathbf{x}^{*}$, $\mathbf{x}^{(1)} = \mathbf{x}^{*}$ if and only if, $\forall \mathbf{I} \in \mathbf{F}$, the optimal value of the objective function of the following optimization problem is greater than zero, provided that this optimization problem is solvable:

$\begin{array}{ll} \min_{\boldsymbol{\delta}} & \sum_{k \in \mathbf{T}^c \setminus \mathbf{I}} |\delta_k| - \sum_{k \in \mathbf{I}} |\delta_k|, \quad \text{s.t.} \\ & \mathbf{A}\boldsymbol{\delta} = \mathbf{0}, \quad \|\boldsymbol{\delta}\|_1 = 1 \\ & \delta_k x_k^{*} > 0 \quad \text{for } k \in \mathbf{I} \\ & \delta_k x_k^{*} \le 0 \quad \text{for } k \in \mathbf{\Delta} \setminus \mathbf{I} \end{array}$
(3)

where $\boldsymbol{\delta} = (\delta_1, \dots, \delta_n)^T \in \mathbf{R}^n$ and $\mathbf{F}$ denotes the set of all subsets of $\mathbf{\Delta}$.

It can be concluded from Theorem 1 that, for a given measurement matrix $\mathbf{A}$, the recoverability of the sparse vector $\mathbf{x}^{*}$ based on the model in (2) depends only on the index set of the nonzeros of $\mathbf{x}^{*}$ in $\mathbf{T}^c$ and the signs of these nonzeros, i.e., on the sign pattern of $\mathbf{x}^{*}$ in $\mathbf{T}^c$, rather than on the magnitudes of these nonzeros [10]. Therefore, assume the support $\mathbf{N}$ of $\mathbf{x}^{*}$ is fixed but only partially known. Then, for a given partially known support $\mathbf{T}$, the total number of sign patterns of $\mathbf{x}^{*}$ in $\mathbf{T}^c$, denoted $N_{\mathrm{sp}}$, is determined. We denote the number of sign patterns that can be recovered by the model in (2) as $RN_{\mathrm{sp}}$. Suppose all the nonzero entries of the vector $\mathbf{x}^{*}$ take either positive or negative sign with equal probability. In this article, we consider the probability that the original vector $\mathbf{x}^{*}$ can be recovered by modified-CS, which is also called the recoverability probability (RP). If the measurement matrix $\mathbf{A}$ and the partially known support $\mathbf{T}$ are given and the support $\mathbf{N}$ of $\mathbf{x}^{*}$ is fixed, the RP can be denoted as the conditional probability $\mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N})$ and defined as follows.

$\mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N}) = \frac{RN_{\mathrm{sp}}}{N_{\mathrm{sp}}}$
(4)

The RP is thus a probabilistic measure of the recoverability of modified-CS. In the next section, based on the RP, we analyze the changing tendencies of the recoverability of modified-CS with respect to the sparsity of the vector $\mathbf{x}^{*}$ and the number of errors in the partially known support, as well as the fault-tolerance capability of modified-CS.
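Definition (4) can be evaluated numerically by brute force. The sketch below is our own illustration, not the authors' code: since by Theorem 1 recoverability depends only on the sign pattern of $\mathbf{x}^{*}$ on the unknown support, it enumerates all $2^{|\mathbf{\Delta}|}$ sign patterns with magnitudes fixed to 1 and counts how many a given solver recovers. The `solver(A, y, T)` signature is an assumption; any routine returning the modified-CS solution can be plugged in.

```python
import itertools
import numpy as np

def recovery_probability(A, N, T, solver):
    """Compute RP = RN_sp / N_sp by enumerating every sign pattern on
    the unknown support Delta = N \\ T. Magnitudes are fixed to 1,
    which suffices because recoverability depends only on the signs."""
    n = A.shape[1]
    Delta = sorted(set(N) - set(T))
    known = sorted(set(N) & set(T))
    recovered, total = 0, 0
    for signs in itertools.product([-1.0, 1.0], repeat=len(Delta)):
        x = np.zeros(n)
        x[known] = 1.0                      # entries on the known part
        for k, s in zip(Delta, signs):
            x[k] = s                        # entries on the unknown part
        x_hat = solver(A, A @ x, T)
        recovered += np.allclose(x_hat, x, atol=1e-6)
        total += 1
    return recovered / total

# Sanity check with a square, invertible A: an exact solver recovers
# every sign pattern, so the RP must be 1.
rng = np.random.default_rng(1)
A = rng.uniform(-0.5, 0.5, size=(5, 5))
rp = recovery_probability(A, N=[0, 2, 4], T=[0],
                          solver=lambda A, y, T: np.linalg.solve(A, y))
```

The exhaustive enumeration costs $2^{|\mathbf{\Delta}|}$ solver calls, which is feasible for the small dimensions used in the simulations of this article.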

## Effect analysis of errors in partially known support on recoverability of modified-CS

For convenience of presentation, we introduce the following definitions. When adding an error $i_e$ ($i_e \in \mathbf{T}^c \setminus \mathbf{\Delta}$) to the set $\mathbf{T}$, the RP is denoted $\mathbf{P}_{\mathbf{T} \cup \{i_e\}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N})$. Given any vector $\mathbf{x}''$ whose support is $\mathbf{N} \setminus \{i_r\}$ with $i_r \in \mathbf{N}$, consider recovering $\mathbf{x}''$ from $\mathbf{y}'' = \mathbf{A}\mathbf{x}''$ by applying modified-CS with the known support $\mathbf{T}$; the corresponding RP is denoted $\mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}''; \mathbf{A}, \mathbf{N} \setminus \{i_r\})$. The following theorem is established.

### Theorem 2

The following inequalities hold.

$\mathbf{P}_{\mathbf{T} \cup \{i_e\}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N}) \le \mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N}) \le \mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}''; \mathbf{A}, \mathbf{N} \setminus \{i_r\}).$

### Proof

Consider the following two optimization problems:

$\min_{\mathbf{x}} \|\mathbf{x}_{\mathbf{T}^c}\|_1 \quad \text{s.t.} \quad \mathbf{y} = \mathbf{A}\mathbf{x}$
(5)
$\min_{\mathbf{x}} \|\mathbf{x}_{(\mathbf{T} \cup \{i_e\})^c}\|_1 \quad \text{s.t.} \quad \mathbf{y} = \mathbf{A}\mathbf{x}$
(6)

where $i_e \in \mathbf{T}^c \setminus \mathbf{\Delta}$ is an error with respect to the support $\mathbf{N}$.

Denote by $\mathbf{x}^1$ and $\mathbf{x}^2$ the solutions of (5) and (6), respectively. From the definitions of $\mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N})$ and $\mathbf{P}_{\mathbf{T} \cup \{i_e\}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N})$, we have

$\begin{array}{l} \mathbf{P}(\mathbf{x}^1 = \mathbf{x}^{*}) = \mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N}) \\ \mathbf{P}(\mathbf{x}^2 = \mathbf{x}^{*}) = \mathbf{P}_{\mathbf{T} \cup \{i_e\}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N}) \end{array}$

where $\mathbf{P}(\beta)$ represents the probability that an event $\beta$ will occur.

Now suppose that $\mathbf{x}^1 = \mathbf{x}^{*}$. It follows from Theorem 1 that, $\forall \mathbf{I} \in \mathbf{F}$, when the following optimization problem (7) is solvable, its optimal value is greater than zero:

$\begin{array}{ll} \min & \sum_{k \in \mathbf{T}^c \setminus \mathbf{I}} |\delta_k| - \sum_{k \in \mathbf{I}} |\delta_k|, \quad \text{s.t.} \\ & \mathbf{A}\boldsymbol{\delta} = \mathbf{0}, \quad \|\boldsymbol{\delta}\|_1 = 1 \\ & \delta_k x_k^{*} > 0 \quad \text{for } k \in \mathbf{I} \\ & \delta_k x_k^{*} \le 0 \quad \text{for } k \in \mathbf{\Delta} \setminus \mathbf{I} \end{array}$
(7)

where $\mathbf{F}$ denotes the set of all subsets of $\mathbf{\Delta}$.

Meanwhile, when an error $i_e$ is added to the set $\mathbf{T}$, we consider the RP $\mathbf{P}_{\mathbf{T} \cup \{i_e\}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N})$.

Since $i_e \in \mathbf{T}^c \setminus \mathbf{\Delta}$, we have $i_e \in \mathbf{T}^c \setminus \mathbf{I}$ for every $\mathbf{I} \in \mathbf{F}$. Thus, it can be deduced that $(\mathbf{T} \cup \{i_e\})^c \setminus \mathbf{I} = \mathbf{T}^c \setminus (\mathbf{I} \cup \{i_e\})$. Suppose $\mathbf{x}^2 = \mathbf{x}^{*}$. It also follows from Theorem 1 that, $\forall \mathbf{I} \in \mathbf{F}$, when the following optimization problem (8) is solvable, its optimal value is greater than zero:

$\begin{array}{ll} \min & \sum_{k \in \mathbf{T}^c \setminus \mathbf{I}} |\delta_k| - |\delta_{i_e}| - \sum_{k \in \mathbf{I}} |\delta_k|, \quad \text{s.t.} \\ & \mathbf{A}\boldsymbol{\delta} = \mathbf{0}, \quad \|\boldsymbol{\delta}\|_1 = 1 \\ & \delta_k x_k^{*} > 0 \quad \text{for } k \in \mathbf{I} \\ & \delta_k x_k^{*} \le 0 \quad \text{for } k \in \mathbf{\Delta} \setminus \mathbf{I} \end{array}$
(8)

Obviously, optimization problems (7) and (8) have the same feasible region, and the objective of (8) is no larger than that of (7) at every feasible point. Hence, whenever the optimal value of (8) is greater than zero, the optimal value of (7) must be as well, so every sign pattern recoverable with the known support $\mathbf{T} \cup \{i_e\}$ is also recoverable with $\mathbf{T}$. Hence, we have

$\mathbf{P}_{\mathbf{T} \cup \{i_e\}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N}) \le \mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N})$

The first inequality is proved.

Further, consider the following optimization problem and denote its solution by $\mathbf{x}^3$:

$\min_{\mathbf{x}} \|\mathbf{x}_{\mathbf{T}^c}\|_1 \quad \text{s.t.} \quad \mathbf{y}'' = \mathbf{A}\mathbf{x}$
(9)

From the definition of $\mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}''; \mathbf{A}, \mathbf{N} \setminus \{i_r\})$, we have $\mathbf{P}(\mathbf{x}^3 = \mathbf{x}'') = \mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}''; \mathbf{A}, \mathbf{N} \setminus \{i_r\})$. For $i_r \in \mathbf{N}$, there are two cases: (I) $i_r \in \mathbf{T}$; and (II) $i_r \in \mathbf{\Delta}$. Denote by $\mathbf{\Delta}''$ the unknown part of the support of the vector $\mathbf{x}''$.

Consider first Case I, i.e., $i_r \in \mathbf{T}$.

From the definitions of $\mathbf{N}$, $\mathbf{N} \setminus \{i_r\}$, and $i_r$, we have $\mathbf{\Delta}'' = \mathbf{\Delta}$. Thus, the vectors $\mathbf{x}^{*}$ and $\mathbf{x}''$ have the same number of sign patterns on their unknown supports $\mathbf{\Delta}$ and $\mathbf{\Delta}''$, respectively. Because optimization problems (9) and (7) have the same known support $\mathbf{T}$, it follows from Theorem 1 that (9) and (7) have the same recoverability for every sign pattern on the unknown support $\mathbf{\Delta}$. According to (4), if $i_r \in \mathbf{T}$, we have

$\mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N}) = \mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}''; \mathbf{A}, \mathbf{N} \setminus \{i_r\}).$

In the first case, the second inequality therefore holds.

Now consider Case II, i.e., $i_r \in \mathbf{\Delta}$.

Since $i_r \in \mathbf{\Delta}$, we have $\mathbf{\Delta}'' = \mathbf{\Delta} \setminus \{i_r\}$. It is easy to see that the number of sign patterns of $\mathbf{x}^{*}$ on the unknown support $\mathbf{\Delta}$ equals twice the number of sign patterns of $\mathbf{x}''$ on the unknown support $\mathbf{\Delta}''$. For any sign pattern of $\mathbf{x}''$ on the unknown support $\mathbf{\Delta}''$, there exist two corresponding sign patterns of $\mathbf{x}^{*}$ on the unknown support $\mathbf{\Delta}$. These two have the same sign pattern as $\mathbf{x}''$ on $\mathbf{\Delta}''$, but are nonzero and of opposite signs in position $i_r$. To prove the second inequality, it suffices to show that, for any sign pattern of $\mathbf{x}^{*}$, if $\mathbf{x}^1 = \mathbf{x}^{*}$, then the sign pattern of $\mathbf{x}''$ that agrees with $\mathbf{x}^{*}$ on the positions $\mathbf{\Delta}''$ can certainly be recovered by modified-CS, i.e., $\mathbf{x}^3 = \mathbf{x}''$.

It follows from Theorem 1 that $\mathbf{x}^3 = \mathbf{x}''$ if and only if, $\forall \mathbf{I}'' \in \mathbf{F}''$, when the following optimization problem (10) is solvable, its optimal value is greater than zero:

$\begin{array}{ll} \min & \sum_{k \in \mathbf{T}^c \setminus \mathbf{I}''} |\delta_k| - \sum_{k \in \mathbf{I}''} |\delta_k|, \quad \text{s.t.} \\ & \mathbf{A}\boldsymbol{\delta} = \mathbf{0}, \quad \|\boldsymbol{\delta}\|_1 = 1 \\ & \delta_k x_k^{*} > 0 \quad \text{for } k \in \mathbf{I}'' \\ & \delta_k x_k^{*} \le 0 \quad \text{for } k \in \mathbf{\Delta}'' \setminus \mathbf{I}'' \end{array}$
(10)

where $\mathbf{F}''$ denotes the set of all subsets of $\mathbf{\Delta}''$.

Suppose that one of the sign patterns of $\mathbf{x}^{*}$ on the unknown support $\mathbf{\Delta}$ can be recovered by modified-CS, i.e., $\mathbf{x}^1 = \mathbf{x}^{*}$.

Thus, $\forall \mathbf{I}'' \in \mathbf{F}''$, there exists $\mathbf{I} = \mathbf{I}'' \cup \{i_r\} \in \mathbf{F}$ such that, when the following optimization problem (11) is solvable, its optimal value is greater than zero:

$\begin{array}{ll} \min & \sum_{k \in \mathbf{T}^c \setminus \mathbf{I}} |\delta_k| - \sum_{k \in \mathbf{I}} |\delta_k|, \quad \text{s.t.} \\ & \mathbf{A}\boldsymbol{\delta} = \mathbf{0}, \quad \|\boldsymbol{\delta}\|_1 = 1 \\ & \delta_k x_k^{*} > 0 \quad \text{for } k \in \mathbf{I}'' \\ & \delta_{i_r} x_{i_r}^{*} > 0 \\ & \delta_k x_k^{*} \le 0 \quad \text{for } k \in \mathbf{\Delta}'' \setminus \mathbf{I}'' \end{array}$
(11)

Hence,

$\begin{array}{l} \min\left(\sum_{k \in \mathbf{T}^c \setminus \mathbf{I}} |\delta_k| - \sum_{k \in \mathbf{I}} |\delta_k|\right) = \min\left(\sum_{k \in \mathbf{T}^c \setminus \mathbf{I}''} |\delta_k| - |\delta_{i_r}| - \sum_{k \in \mathbf{I}''} |\delta_k| - |\delta_{i_r}|\right) > 0 \\ \Rightarrow \min\left(\sum_{k \in \mathbf{T}^c \setminus \mathbf{I}''} |\delta_k| - \sum_{k \in \mathbf{I}''} |\delta_k|\right) > 0 \end{array}$

Meanwhile, there exists $\mathbf{I} = \mathbf{I}'' \in \mathbf{F}$ such that, when the following optimization problem (12) is solvable, its optimal value is greater than zero:

$\begin{array}{ll} \min & \sum_{k \in \mathbf{T}^c \setminus \mathbf{I}} |\delta_k| - \sum_{k \in \mathbf{I}} |\delta_k|, \quad \text{s.t.} \\ & \mathbf{A}\boldsymbol{\delta} = \mathbf{0}, \quad \|\boldsymbol{\delta}\|_1 = 1 \\ & \delta_k x_k^{*} > 0 \quad \text{for } k \in \mathbf{I}'' \\ & \delta_{i_r} x_{i_r}^{*} \le 0 \\ & \delta_k x_k^{*} \le 0 \quad \text{for } k \in \mathbf{\Delta}'' \setminus \mathbf{I}'' \end{array}$
(12)

Hence,

$\begin{array}{l} \min\left(\sum_{k \in \mathbf{T}^c \setminus \mathbf{I}} |\delta_k| - \sum_{k \in \mathbf{I}} |\delta_k|\right) > 0 \\ \Rightarrow \min\left(\sum_{k \in \mathbf{T}^c \setminus \mathbf{I}''} |\delta_k| - \sum_{k \in \mathbf{I}''} |\delta_k|\right) > 0 \end{array}$

Since $\mathbf{x}^{*}$ has the same sign pattern as $\mathbf{x}''$ on the unknown support $\mathbf{\Delta}''$, the union of the feasible regions of optimization problems (11) and (12) is the feasible region of optimization problem (10). Moreover, $\forall \mathbf{I}'' \in \mathbf{F}''$, when the optimization problem (10) is solvable, we have

$\min\left(\sum_{k \in \mathbf{T}^c \setminus \mathbf{I}''} |\delta_k| - \sum_{k \in \mathbf{I}''} |\delta_k|\right) > 0.$

It then follows from Theorem 1 that $\mathbf{x}^3 = \mathbf{x}''$. In the second case, the second inequality is proved.

Combining Cases I and II, the second inequality is proved. Theorem 2 is proved.

### Remarks 1

The first inequality of Theorem 2 describes quantitatively the changing tendency of the RP with respect to the number of errors in the known support $\mathbf{T}$: the more errors the known support $\mathbf{T}$ contains, the lower the RP of modified-CS. The second inequality of Theorem 2 indicates that the sparser the vector $\mathbf{x}^{*}$ is, the higher the RP of modified-CS.

Further, given any vector $\mathbf{x}^{\ddagger}$ whose support is $\mathbf{N} \cup \{i_e\}$, consider recovering $\mathbf{x}^{\ddagger}$ from $\mathbf{y}^{\ddagger} = \mathbf{A}\mathbf{x}^{\ddagger}$ by applying modified-CS with the known support $\mathbf{T} \cup \{i_e\}$; we denote the RP as $\mathbf{P}_{\mathbf{T} \cup \{i_e\}}(\mathbf{x}^{(1)} = \mathbf{x}^{\ddagger}; \mathbf{A}, \mathbf{N} \cup \{i_e\})$. From the second inequality of Theorem 2, one can establish the following corollary.

### Corollary 1

The following equality holds.

$\mathbf{P}_{\mathbf{T} \cup \{i_e\}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N}) = \mathbf{P}_{\mathbf{T} \cup \{i_e\}}(\mathbf{x}^{(1)} = \mathbf{x}^{\ddagger}; \mathbf{A}, \mathbf{N} \cup \{i_e\}).$

### Remarks 2

Corollary 1 reveals that adding an error in the known support $\mathbf{T}$ has the same effect on the RP as decreasing the sparsity of the vector $\mathbf{x}^{*}$ in position $i_e$ (i.e., making $x_{i_e}^{*}$ nonzero) while also adding the position $i_e$ to the set $\mathbf{T}$. Because prior knowledge of the support increases the RP of modified-CS, the effect of adding an error $i_e$ to the known support $\mathbf{T}$ is smaller than that of decreasing the sparsity of the vector $\mathbf{x}^{*}$ in position $i_e$.

It is deducible from the above discussion that errors in the known support $\mathbf{T}$ reduce the RP of modified-CS. However, two questions remain. First, with a given number of samples, to recover a sparse vector with $\ell$ nonzero entries, how many errors in the set $\mathbf{T}$ can modified-CS bear? Second, within the acceptable range of errors, can modified-CS guarantee recoverability? Hereinafter, we consider these problems and reach the following results.

### Theorem 3

Given any vector $\mathbf{x}^{*}$ whose support is $\mathbf{N} = (\mathbf{T} \cup \mathbf{\Delta}) \setminus \mathbf{\Delta}_e$, consider recovering it from $\mathbf{y} = \mathbf{A}\mathbf{x}^{*}$, where $\mathbf{A}$ is an $m \times n$ matrix, by applying modified-CS.

(1) If $|\mathbf{T} \cup \mathbf{\Delta}| > m$, i.e., $|\mathbf{\Delta}_e| > m - |\mathbf{N}|$, then

$\mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N}) = \mathbf{P}_{\mathbf{T} \cup \{i\}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N}) = \cdots = \mathbf{P}_{\mathbf{T} \cup \mathbf{\Delta}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N}) = 0, \quad \text{where } i \in \mathbf{\Delta}.$

(2) If $|\mathbf{T} \cup \mathbf{\Delta}| \le m$, i.e., $|\mathbf{\Delta}_e| \le m - |\mathbf{N}|$, then

$\mathbf{P}_{\mathbf{T} \cup \mathbf{\Delta}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N}) = 1.$

### Proof

According to the rank–nullity theorem in matrix theory,

$\dim[\mathrm{Null}(\mathbf{A})] = n - \mathrm{rank}(\mathbf{A}),$
(13)

where $\mathrm{Null}(\cdot)$, $\dim(\cdot)$, and $\mathrm{rank}(\cdot)$ denote the null space of a matrix, the dimension of a space, and the rank of a matrix, respectively.

Suppose $\mathrm{rank}(\mathbf{A}) = m$, i.e., the matrix $\mathbf{A}$ has full row rank. It is well known that in the CS field the measurement matrix $\mathbf{A}$ is typically a Gaussian random matrix, which has full row rank with probability one.

From Equation (13), for any $\boldsymbol{\delta} \in \mathrm{Null}(\mathbf{A})$, we may select arbitrary $(n-m)$ entries of $\boldsymbol{\delta}$ as independent variables, and the other entries of $\boldsymbol{\delta}$ can be expressed in terms of these $(n-m)$ variables. Moreover, if $|\mathbf{\Delta}_e| > m - |\mathbf{N}|$, the number of zero entries of the sub-vector $\mathbf{x}^{*}_{\mathbf{T}^c}$ equals

$|\{1, 2, \dots, n\} \setminus (\mathbf{N} \cup \mathbf{\Delta}_e)| = n - |\mathbf{N}| - |\mathbf{\Delta}_e| < n - m.$
(14)

Denote by $\mathbf{\Omega}$ the index set of the zero entries of $\mathbf{x}^{*}_{\mathbf{T}^c}$. Since (14) gives $|\mathbf{\Omega}| < n - m = \dim[\mathrm{Null}(\mathbf{A})]$, the $|\mathbf{\Omega}|$ linear constraints $\boldsymbol{\delta}_{\mathbf{\Omega}} = \mathbf{0}$ cannot eliminate all of $\mathrm{Null}(\mathbf{A})$, so there exists

$\boldsymbol{\delta}^1 \in \mathrm{Null}(\mathbf{A})$
(15)

such that

$\boldsymbol{\delta}^1_{\mathbf{\Omega}} = \mathbf{0}$

and $\|\boldsymbol{\delta}^1\|_1 = 1$.

Suppose $\mathbf{x}^{(1)} = \mathbf{x}^{*}$. It follows from Theorem 1 that, $\forall \mathbf{I} \in \mathbf{F}$, the optimal value of the objective function of optimization problem (3) is greater than zero, provided that this optimization problem is solvable. Denote the objective function of (3) as

$f(\boldsymbol{\delta}) = \sum_{k \in \mathbf{T}^c \setminus \mathbf{I}} |\delta_k| - \sum_{k \in \mathbf{I}} |\delta_k|$
(16)

From the definition of $\mathbf{\Omega}$, the set $\mathbf{T}^c \setminus \mathbf{I} = \mathbf{\Omega} \cup (\mathbf{\Delta} \setminus \mathbf{I})$. Therefore, (16) is equivalent to

$\sum_{k \in \mathbf{\Omega}} |\delta_k| + \sum_{k \in \mathbf{\Delta} \setminus \mathbf{I}} |\delta_k| - \sum_{k \in \mathbf{I}} |\delta_k|$
(17)

For $\boldsymbol{\delta}^1$, there exist $\mathbf{I}^1$, $\mathbf{S}^1$, and $\mathbf{Z}^1$ such that

$\begin{array}{l} \mathbf{I}^1 = \{k \mid \delta_k^1 x_k^{*} > 0, \; k \in \mathbf{\Delta}\} \\ \mathbf{S}^1 = \{k \mid \delta_k^1 x_k^{*} < 0, \; k \in \mathbf{\Delta}\} \\ \mathbf{Z}^1 = \{k \mid \delta_k^1 x_k^{*} = 0, \; k \in \mathbf{\Delta}\} \end{array}$
(18)

Combining (17) and (18), it follows from Theorem 1 that, for $\boldsymbol{\delta}^1$, we have

$f(\boldsymbol{\delta}^1) = \sum_{k \in \mathbf{\Omega}} |\delta_k^1| + \sum_{k \in \mathbf{\Delta} \setminus \mathbf{I}^1} |\delta_k^1| - \sum_{k \in \mathbf{I}^1} |\delta_k^1| = \sum_{k \in \mathbf{S}^1} |\delta_k^1| - \sum_{k \in \mathbf{I}^1} |\delta_k^1| > 0$
(19)

Meanwhile, since $\boldsymbol{\delta}^1 \in \mathrm{Null}(\mathbf{A})$, we also have

$-\boldsymbol{\delta}^1 \in \mathrm{Null}(\mathbf{A})$
(20)

Let $\boldsymbol{\delta}^2 = -\boldsymbol{\delta}^1$ and denote

$\begin{array}{l} \mathbf{I}^2 = \{k \mid \delta_k^2 x_k^{*} > 0, \; k \in \mathbf{\Delta}\} \\ \mathbf{S}^2 = \{k \mid \delta_k^2 x_k^{*} < 0, \; k \in \mathbf{\Delta}\} \\ \mathbf{Z}^2 = \{k \mid \delta_k^2 x_k^{*} = 0, \; k \in \mathbf{\Delta}\} \end{array}$
(21)

Obviously, $\mathbf{I}^2 = \mathbf{S}^1$, $\mathbf{S}^2 = \mathbf{I}^1$, and $\mathbf{Z}^2 = \mathbf{Z}^1$.

For $\boldsymbol{\delta}^2$, we have

$f(\boldsymbol{\delta}^2) = \sum_{k \in \mathbf{\Omega}} |\delta_k^2| + \sum_{k \in \mathbf{\Delta} \setminus \mathbf{I}^2} |\delta_k^2| - \sum_{k \in \mathbf{I}^2} |\delta_k^2| = \sum_{k \in \mathbf{S}^2} |\delta_k^1| - \sum_{k \in \mathbf{I}^2} |\delta_k^1| = \sum_{k \in \mathbf{I}^1} |\delta_k^1| - \sum_{k \in \mathbf{S}^1} |\delta_k^1|$
(22)

From (19), it can be deduced that $f(\boldsymbol{\delta}^2) < 0$.

Hence, there exist $\boldsymbol{\delta}^2 \in \mathrm{Null}(\mathbf{A})$ and $\mathbf{I}^2 \in \mathbf{F}$ such that the optimal value of the objective function of optimization problem (3) is less than zero. According to Theorem 1, the assumption $\mathbf{x}^{(1)} = \mathbf{x}^{*}$ does not hold, i.e.,

$\mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N}) = 0.$

It is easy to see that the above discussion holds for the known supports $\mathbf{T} \cup \{i\}, \dots, \mathbf{T} \cup \mathbf{\Delta}$, where $i \in \mathbf{\Delta}$. Therefore, result (1) is proved.

From the definition of $\mathbf{N}$, if the known support is $\mathbf{T}'' = \mathbf{T} \cup \mathbf{\Delta}$, it is obvious that the optimal value of the objective function of model (2) is zero. Hence, the solution of modified-CS satisfies the constraint $\mathbf{A}_{\mathbf{T}''}\mathbf{x}_{\mathbf{T}''} = \mathbf{y}$. If $|\mathbf{T} \cup \mathbf{\Delta}| \le m$, then by linear algebra the solution satisfying $\mathbf{A}_{\mathbf{T}''}\mathbf{x}_{\mathbf{T}''} = \mathbf{y}$ is unique and equals $\mathbf{x}^{*}$. Result (2) is proved. Theorem 3 is proved.

### Remarks 3

Theorem 3 provides a bound on the number of errors in the known support $\mathbf{T}$ that relates to the number of samples $m$ and the sparsity $\ell$ of the original vector. This bound mirrors the fault-tolerance capability of modified-CS: to recover sparse vectors with $\ell$ nonzero entries, if the number of errors in the set $\mathbf{T}$ exceeds $m - \ell$, modified-CS cannot recover any sparse vector, regardless of how many positions of the support are included in the set $\mathbf{T}$. Conversely, within this bound, as more prior knowledge of the support is added to the set $\mathbf{T}$, the RP steadily improves. Furthermore, given sufficient prior knowledge of the support, the recoverability of modified-CS can be guaranteed.
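The guaranteed-recovery side of the bound can be checked directly. The sketch below is our own illustration under assumed example dimensions ($m = 7$, $n = 9$, $\ell = 3$, with arbitrary index choices): when the known support contains the true support plus exactly $m - \ell$ errors, problem (2) attains objective value zero, so the modified-CS solution reduces to solving $\mathbf{A}_{\mathbf{T}}\mathbf{x}_{\mathbf{T}} = \mathbf{y}$, which is unique when $\mathbf{A}_{\mathbf{T}}$ has full column rank.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, ell = 7, 9, 3
A = rng.uniform(-0.5, 0.5, size=(m, n))

N = [0, 3, 6]                       # true support, |N| = ell
x_true = np.zeros(n)
x_true[N] = [0.9, -0.4, 0.7]
y = A @ x_true

# Known support with the maximum tolerable number of errors:
# |Delta_e| = m - ell = 4, so |T| = m.
T = sorted(N + [1, 2, 4, 5])        # four erroneous indices
assert len(T) == m

# The objective of (2) can reach zero with x supported on T, so the
# modified-CS solution solves A_T x_T = y; A_T is square and
# (generically) invertible, hence the solution is unique and equals x*.
x_hat = np.zeros(n)
x_hat[T] = np.linalg.solve(A[:, T], y)
```

With one more error, $|\mathbf{T}| = m + 1$ and the system $\mathbf{A}_{\mathbf{T}}\mathbf{x}_{\mathbf{T}} = \mathbf{y}$ becomes under-determined, which is exactly the failure mode described in result (1).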

## Numerical simulation

In this section, simulation results are presented to support the theoretical derivations. In all experiments, the entries of the matrix $\mathbf{A} \in \mathbf{R}^{7 \times 9}$ are drawn from the uniform distribution on $[-0.5, 0.5]$. All nonzero entries of the sparse vector $\mathbf{x}^{*}$ are drawn from the uniform distribution on $[-1, +1]$.
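This setup can be reproduced as follows (a sketch with an arbitrarily chosen seed, which the article does not specify):

```python
import numpy as np

rng = np.random.default_rng(7)      # seed is an arbitrary assumption
m, n = 7, 9
A = rng.uniform(-0.5, 0.5, size=(m, n))    # entries uniform on [-0.5, 0.5]

N = sorted(rng.choice(n, size=3, replace=False))   # 3-sparse support
x = np.zeros(n)
x[N] = rng.uniform(-1.0, 1.0, size=3)              # nonzeros on [-1, 1]
y = A @ x
```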

### Experiment 1

We validate the probability relationships of $\mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N})$; $\mathbf{P}_{\mathbf{T} \cup \{i_e\}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N})$, where $i_e \in \mathbf{T}^c \setminus \mathbf{\Delta}$; $\mathbf{P}_{\mathbf{T} \cup \{i_r\}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N})$, where $i_r \in \mathbf{\Delta}$; and $\mathbf{P}_{\mathbf{T} \cup \{i_r\}}(\mathbf{x}^{(1)} = \mathbf{x}^{\ddagger}; \mathbf{A}, \mathbf{N} \setminus \{i_r\})$, where $i_r \in \mathbf{\Delta}$. It is assumed that $\mathbf{x}^{*}$ has $\ell$ ($\ell = 3, 4, \dots, 7$) nonzero entries, of which two are known. To calculate $\mathbf{P}_{\mathbf{T}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N})$, we count the number of sign vectors that can be recovered by solving modified-CS under the matrix $\mathbf{A}$ and the support $\mathbf{N}$ with the partially known support $\mathbf{T}$. Furthermore, for $\mathbf{P}_{\mathbf{T} \cup \{i_e\}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N})$, where $i_e \in \mathbf{T}^c \setminus \mathbf{\Delta}$, and $\mathbf{P}_{\mathbf{T} \cup \{i_r\}}(\mathbf{x}^{(1)} = \mathbf{x}^{*}; \mathbf{A}, \mathbf{N})$, where $i_r \in \mathbf{\Delta}$, we randomly add a new index $i_e \in \mathbf{T}^c \setminus \mathbf{\Delta}$ or $i_r \in \mathbf{\Delta}$ to the set $\mathbf{T}$, respectively; the recoverability probabilities are then calculated in the same way. For $\mathbf{P}_{\mathbf{T} \cup \{i_r\}}(\mathbf{x}^{(1)} = \mathbf{x}^{\ddagger}; \mathbf{A}, \mathbf{N} \setminus \{i_r\})$, we first set the nonzero entry at position $i_r$ ($i_r \in \mathbf{\Delta}$) to zero. Adding the position $i_r$ to the set $\mathbf{T}$, we count the number of sign vectors that can be recovered by solving modified-CS under the matrix $\mathbf{A}$, the support $\mathbf{N} \setminus \{i_r\}$, and the partially known support $\mathbf{T} \cup \{i_r\}$. Notice that the index $i_r$ is an error position of the support for $\mathbf{x}^{\ddagger}$. The experimental results in Figure 1 validate the results in Theorem 2.

### Experiment 2

We validate the results of Theorems 2 and 3 in this experiment. Without loss of generality, we suppose the sparse vector $\mathbf{x}^{*}$ has three nonzero entries. At the beginning, the known part of the support $\mathbf{T}$ is the empty set. We randomly select 0, 1, 2, 3, 4 positions in $\{1, 2, \dots, n\} \setminus \mathbf{N}$ as errors and add these errors to the known part of the support $\mathbf{T}$ step by step. Next, the experiment is divided into two cases. In Case I, we add a fifth error to the set $\mathbf{T}$; after this, 1, 2, 3 new elements of the support are added in turn to the set $\mathbf{T}$, and the RP is computed at each point. In Case II, we add 1, 2, 3 new elements of the support to the set $\mathbf{T}$ after adding the fourth error, again computing the RP at each point. The experimental results are shown in Figure 2. The black curve denotes the RP of modified-CS whose partially known support $\mathbf{T}$ contains 0, 1, 2, 3, 4 errors. The red curve denotes the RP of Case I. The blue curve denotes the RP of Case II. According to Theorem 2, the RP of modified-CS steadily decreases as more and more errors are added to the set $\mathbf{T}$. From Theorem 3, when $|\mathbf{\Delta}_e| > m - |\mathbf{N}| = 4$, modified-CS cannot recover any 3-sparse signal, regardless of how many elements of the support are included in the set $\mathbf{T}$. Conversely, when $|\mathbf{\Delta}_e| \le m - |\mathbf{N}| = 4$, as elements of the support are added, the RP of modified-CS steadily improves. Furthermore, if sufficient elements of the support are provided, the recoverability of modified-CS can be guaranteed. As shown in Figure 2, the experimental results validate Theorems 2 and 3.

## Conclusions

In this article, we analyzed the effect of errors in the partially known support on the recoverability of modified-CS. Two probability inequalities were established that indicate the changing tendencies of the recoverability of modified-CS with respect to the addition of errors in the known support and the sparsity of the original vector. An exact quantitative bound on the number of errors in the known part, associated with the number of measurements and the sparsity of the original vector, was also derived. We proved that if the number of errors does not exceed this bound, the recoverability of modified-CS can be guaranteed provided that enough support information is available; conversely, if the number of errors exceeds this bound, modified-CS cannot recover any vector, no matter how much support information we know. These results reveal the relationships among errors, measurements, and sparsity, and can provide important guidance for the application of modified-CS.

## References

1. Lustig M, Donoho DL, Santos JM, Pauly JM: Compressed sensing MRI. IEEE Signal Process. Mag 2008, 25(2):72-82.

2. Haupt J, Bajwa WU, Rabbat M, Nowak R: Compressed sensing for networked data. IEEE Signal Process. Mag 2008, 25(2):92-101.

3. Bajwa WU, Haupt J, Sayeed AM, Nowak R: Compressed channel sensing: a new approach to estimating sparse multipath channels. Proc. IEEE 2010, 98(6):1058-1076.

4. Wright J, Yang AY, Ganesh A, Sastry SS, Ma Y: Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell 2008, 31(2):210-227.

5. CandÃ¨s EJ, Romberg J, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52(2):489-509.

6. Donoho DL: Compressed sensing. IEEE Trans. Inf. Theory 2006, 52(4):1289-1306.

7. Vaswani N, Lu W: Modified-CS: modifying compressive sensing for problems with partially known support. IEEE Trans. Signal Process 2010, 58(9):4595-4607.

8. Miosso CJ, von Borries R, Argáez M, Velazquez L, Quintero C, Potes CM: Compressive sensing reconstruction with prior information by iteratively reweighted least-squares. IEEE Trans. Signal Process 2009, 57(6):2424-2431.

9. Wang Y, Yin W: Sparse signal reconstruction via iterative support detection. SIAM J. Imag. Sci 2010, 3(3):462-491. doi:10.1137/090772447

10. Zhang J, Li YQ, Yu ZL, Gu ZH: Recoverability analysis for modified compressive sensing with partially known support. arXiv, 2012.http://arxiv.org/abs/1207.1855. Accessed 8 July 2012

11. Vaswani N: Stability (over time) of modified-CS and LS-CS for recursive causal sparse reconstruction. arXiv, 2010.http://arxiv.org/abs/1006.4818. Accessed 24 June 2010

## Acknowledgements

This work was supported by the National Nature Science Foundation of China under Grants 60825306, 91120305, 61175114 and 61105121, the National High-tech R&D Program of China (863 Program) under grant 2012AA011601, the Program for New Century Excellent Talents in University under Grant NCET-10-0370 and Excellent Youth Development Project of Universities in Guangdong Province.

## Author information


### Corresponding author

Correspondence to Jun Zhang.

### Competing interests

The authors declare that they have no competing interests.


## Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Zhang, J., Li, Y., Gu, Z. et al. Effect of errors in partially known support on the recoverability of modified compressive sensing. EURASIP J. Adv. Signal Process. 2012, 199 (2012). https://doi.org/10.1186/1687-6180-2012-199