
Effect of errors in partially known support on the recoverability of modified compressive sensing

Abstract

The recently proposed modified compressive sensing (modified-CS), which utilizes a partially known support as prior knowledge, significantly improves the performance of compressive sensing. In practice, the known part of the support inevitably contains some errors, which may degrade the gain of modified-CS. Within a stochastic framework, this article discusses the effect of errors in the known part on the recoverability of modified-CS. First, based on a probabilistic measure of recoverability, two probability inequalities on recoverability are established; they reflect how the recoverability of modified-CS changes as errors are added to the known support and as the sparsity of the original sparse vector varies. A direct corollary further reveals how much the recoverability is affected when an error is added to the known support. Second, the maximum number of errors that modified-CS can bear is analyzed: we prove a quantitative bound, depending on the number of samples and the sparsity of the original vector, on the number of errors in the known part. This bound mirrors the fault-tolerance capability of modified-CS. Simulation experiments have been carried out to validate the theoretical results.

Introduction

The problem of finding sparse solutions to under-determined linear systems from limited data arises in many applications, including biomedical imaging [1], sensor networks [2], wireless communications [3], and pattern recognition [4]. This problem can be modeled as follows:

$$ y = Ax^* \tag{1} $$

where $A \in \mathbb{R}^{m \times n}$ is referred to as a measurement matrix with $m < n$, $x^* = (x_1^*, \ldots, x_n^*)^T \in \mathbb{R}^n$ is an unknown sparse vector, and $y \in \mathbb{R}^m$ is an observable vector. The goal is to recover the high-dimensional vector $x^*$ from its lower-dimensional observable vector $y$. This is one of the central problems in compressed sensing (CS), and the major breakthrough in this area has been the demonstration that $\ell_1$ minimization can efficiently recover the sparse vector $x^*$ from a far smaller number of measurements $y$ than its ambient dimension [5, 6].
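
For reference, this $\ell_1$-minimization (basis-pursuit) program can be written as

$$ \min_{x \in \mathbb{R}^n} \|x\|_1 \quad \text{s.t.} \quad y = Ax, $$

and the modified-CS program (2) introduced below reduces to it when the known part of the support is empty.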

Recently, several works have investigated ways of exploiting prior knowledge to improve the performance of compressive sensing [7–9]. Typically, in [7], a promising approach named modified compressive sensing (modified-CS) was proposed for the case when the support of the signal is partially known. Modified-CS is especially suitable for recovering (time) sequences of sparse vectors whose supports evolve slowly over time. Vaswani and Lu [7] analyzed when the solution of modified-CS equals the original vector $x^*$, i.e., the recoverability problem. They demonstrated that when the sizes of the unknown part of the support and of the errors in the known part are small compared to the support size, the sufficient conditions for recoverability are much weaker than those needed for the classical $\ell_1$-minimization method. Further, we derived a sufficient and necessary condition for the recoverability of modified-CS and investigated recoverability in a probabilistic way [10].

Obviously, the key assumption that modified-CS relies on is that the support changes slowly over time, i.e., the unknown part of the support and the errors in the partially known support are small compared to the support size [11]. As we demonstrated in [10], prior support information improves the recoverability of modified-CS. However, what is the effect of errors in the partially known support on the recoverability of modified-CS? Furthermore, how many errors can modified-CS bear? To the best of our knowledge, these problems have not been studied in any other work. We define the fault-tolerance capability of modified-CS as the maximum number of errors that modified-CS can bear. The main objective of this article is to analyze the effect of errors on recoverability as well as the fault-tolerance capability of modified-CS. First, we propose a probabilistic measure of recoverability, expressed as the probability that the solution of modified-CS equals the original vector $x^*$, and establish two probability inequalities on recoverability. These inequalities reflect how the recoverability of modified-CS changes with the sparsity of $x^*$ and with the number of errors in the known part. Second, the fault-tolerance capability of modified-CS is analyzed. We prove that, for a given matrix $A$, if the number of errors exceeds the fault-tolerance capability of modified-CS, modified-CS cannot recover any vector.

Preliminaries

Notations and modified-CS

We first establish some notation. Let $N$ denote the index set of the nonzero entries of $x^*$, also known as the support of $x^*$. We assume that the support of $x^*$ is partially known, but that the known part may contain some errors. In the sequel, the known part of the support is denoted by $T$, the unknown part of the support by $\Delta$, and the set of errors in $T$ by $\Delta_e$. Hence $N$ can be written as $N = (T \cup \Delta) \setminus \Delta_e$, where $\cup$ and $\setminus$ denote set union and set difference, respectively. Recovery based on modified-CS is implemented by solving the following optimization problem:

$$ \min_x \; \|x_{T^c}\|_1 \quad \text{s.t.} \quad y = Ax \tag{2} $$

where $\|x\|_1$ denotes the $\ell_1$-norm of a vector $x$, $T^c = \{1, \ldots, n\} \setminus T$, and $x_{T^c}$ is the column vector composed of the entries of $x$ whose indices lie in $T^c$.
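
Problem (2) is a linear program once the $\ell_1$ term is split in the standard way ($|x_k| \le t_k$). The following is a minimal sketch of such a solver, not the authors' implementation; it assumes NumPy and SciPy are available, and the helper name `modified_cs` is our own.

```python
# A minimal sketch (not the authors' code) of modified-CS: solve the LP
#   min ||x_{T^c}||_1   s.t.   y = A x
# by introducing slack variables t_k >= |x_k| for every k in T^c.
import numpy as np
from scipy.optimize import linprog

def modified_cs(A, y, T):
    m, n = A.shape
    Tc = np.setdiff1d(np.arange(n), np.asarray(T, dtype=int))
    nc = len(Tc)
    # Decision vector z = [x (n entries), t (nc entries)]; minimize sum(t).
    c = np.concatenate([np.zeros(n), np.ones(nc)])
    A_eq = np.hstack([A, np.zeros((m, nc))])           # encodes A x = y
    S = np.zeros((nc, n))
    S[np.arange(nc), Tc] = 1.0                         # S x = x_{T^c}
    A_ub = np.vstack([np.hstack([S, -np.eye(nc)]),     #  x_k - t_k <= 0
                      np.hstack([-S, -np.eye(nc)])])   # -x_k - t_k <= 0
    b_ub = np.zeros(2 * nc)
    bounds = [(None, None)] * n + [(0, None)] * nc     # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=bounds, method="highs")
    return res.x[:n] if res.success else None
```

With `T = []`, we have $T^c = \{1, \ldots, n\}$ and the program reduces to the classical $\ell_1$ minimization.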

Sufficient and necessary condition on recoverability of modified-CS

Let $x^{(1)}$ denote the solution of the model in (2). In this section, we recall a sufficient and necessary condition for the recoverability of modified-CS, i.e., for $x^{(1)}$ to equal the original sparse vector $x^*$ (see [10] for more details on this result).

Theorem 1

For a given vector $x^*$, $x^{(1)} = x^*$ if and only if, for all $I \in F$, the optimal value of the objective function of the following optimization problem is greater than zero, provided that this optimization problem is solvable:

$$ \min_{\delta} \; \sum_{k \in T^c \setminus I} |\delta_k| - \sum_{k \in I} |\delta_k|, \quad \text{s.t.} \quad A\delta = 0, \ \ \|\delta\|_1 = 1, \ \ \delta_k x_k^* > 0 \ \text{for } k \in I, \ \ \delta_k x_k^* \le 0 \ \text{for } k \in \Delta \setminus I \tag{3} $$

where $\delta = (\delta_1, \ldots, \delta_n)^T \in \mathbb{R}^n$ and $F$ denotes the set of all subsets of $\Delta$.

It can be concluded from Theorem 1 that, for a given measurement matrix $A$, the recoverability of the sparse vector $x^*$ by the model in (2) depends only on the index set of the nonzeros of $x^*$ in $T^c$ and on the signs of those nonzeros, i.e., on the sign pattern of $x^*$ in $T^c$, not on the magnitudes of the nonzeros [10]. Assume, therefore, that the support $N$ of $x^*$ is fixed but only partially known. For a given partially known support $T$, the total number of sign patterns of $x^*$ in $T^c$, denoted $N_{sp}$, is then determined. We denote by $RN_{sp}$ the number of sign patterns that can be recovered by the model in (2). Suppose all the nonzero entries of $x^*$ take positive or negative sign with equal probability. In this article, we consider the probability that the original vector $x^*$ can be recovered by modified-CS, called the recoverability probability (RP). When the measurement matrix $A$, the partially known support $T$, and the support $N$ of $x^*$ are given, the RP can be written as the conditional probability $P_T(x^{(1)} = x^*; A, N)$ and defined as follows:

$$ P_T(x^{(1)} = x^*; A, N) = \frac{RN_{sp}}{N_{sp}} \tag{4} $$

Obviously, the RP is a probabilistic measure of the recoverability of modified-CS. In the next section, based on the RP, we analyze how the recoverability of modified-CS changes with the sparsity of the vector $x^*$ and with the number of errors in the partially known support, and we study the fault-tolerance capability of modified-CS.
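
For small $\Delta$, definition (4) can be evaluated by brute force: the nonzeros of $x^*$ in $T^c$ sit exactly on $\Delta$, so there are $N_{sp} = 2^{|\Delta|}$ sign patterns. The sketch below (our own illustration, reusing the hypothetical `modified_cs` helper above) counts how many of them are recovered exactly; by Theorem 1 the magnitudes are irrelevant, so every nonzero is set to $\pm 1$.

```python
# Brute-force sketch of the RP in (4): enumerate the N_sp = 2^|Delta| sign
# patterns of x* on T^c and count exact recoveries.  Reuses the hypothetical
# `modified_cs` helper sketched above.
import itertools
import numpy as np

def recovery_prob(A, N, T, tol=1e-4):
    n = A.shape[1]
    Delta = sorted(set(N) - set(T))      # unknown part of the support
    known = sorted(set(N) & set(T))      # correctly known support positions
    patterns = list(itertools.product([-1.0, 1.0], repeat=len(Delta)))
    recovered = 0
    for signs in patterns:
        x = np.zeros(n)
        x[known] = 1.0                   # magnitudes are irrelevant (Theorem 1)
        x[Delta] = signs
        x_hat = modified_cs(A, A @ x, T)
        if x_hat is not None and np.max(np.abs(x_hat - x)) < tol:
            recovered += 1
    return recovered / len(patterns)     # RN_sp / N_sp
```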

Effect analysis of errors in partially known support on recoverability of modified-CS

For convenience of presentation, we introduce the following definitions. When an error $i_e$ ($i_e \in T^c \setminus \Delta$) is added to the set $T$, the RP is denoted $P_{T \cup \{i_e\}}(x^{(1)} = x^*; A, N)$. Given any vector $x''$ whose support is $N \setminus \{i_r\}$ with $i_r \in N$, consider recovering $x''$ from $y'' = Ax''$ by applying modified-CS with the known support $T$; the RP is then denoted $P_T(x^{(1)} = x''; A, N \setminus \{i_r\})$. The following theorem is established.

Theorem 2

The following inequalities hold:

$$ P_{T \cup \{i_e\}}(x^{(1)} = x^*; A, N) \;\le\; P_T(x^{(1)} = x^*; A, N) \;\le\; P_T(x^{(1)} = x''; A, N \setminus \{i_r\}). $$

Proof

Consider the following two optimization problems:

$$ \min_x \; \|x_{T^c}\|_1 \quad \text{s.t.} \quad y = Ax \tag{5} $$

$$ \min_x \; \|x_{(T \cup \{i_e\})^c}\|_1 \quad \text{s.t.} \quad y = Ax \tag{6} $$

where $i_e \in T^c \setminus \Delta$, i.e., $i_e$ lies outside the support $N$, so adding it to $T$ introduces an error.

Denote by $x^1$ and $x^2$ the solutions of (5) and (6), respectively. From the definitions of $P_T(x^{(1)} = x^*; A, N)$ and $P_{T \cup \{i_e\}}(x^{(1)} = x^*; A, N)$, we have

$$ P(x^1 = x^*) = P_T(x^{(1)} = x^*; A, N), \qquad P(x^2 = x^*) = P_{T \cup \{i_e\}}(x^{(1)} = x^*; A, N) $$

where $P(\beta)$ denotes the probability that an event $\beta$ occurs.

Now suppose that $x^1 = x^*$. It follows from Theorem 1 that, for all $I \in F$, when the following optimization problem (7) is solvable, its optimal value is greater than zero:

$$ \min_{\delta} \; \sum_{k \in T^c \setminus I} |\delta_k| - \sum_{k \in I} |\delta_k|, \quad \text{s.t.} \quad A\delta = 0, \ \ \|\delta\|_1 = 1, \ \ \delta_k x_k^* > 0 \ \text{for } k \in I, \ \ \delta_k x_k^* \le 0 \ \text{for } k \in \Delta \setminus I \tag{7} $$

where, as before, $F$ denotes the set of all subsets of $\Delta$.

Meanwhile, when an error $i_e$ is added to the set $T$, we consider the RP $P_{T \cup \{i_e\}}(x^{(1)} = x^*; A, N)$.

Since $i_e \in T^c \setminus \Delta$, for all $I \in F$ we have $i_e \in T^c \setminus I$, and it can be deduced that $(T \cup \{i_e\})^c \setminus I = T^c \setminus (I \cup \{i_e\})$. Suppose $x^2 = x^*$. It also follows from Theorem 1 that, for all $I \in F$, when the following optimization problem (8) is solvable, its optimal value is greater than zero:

$$ \min_{\delta} \; \sum_{k \in T^c \setminus I} |\delta_k| - |\delta_{i_e}| - \sum_{k \in I} |\delta_k|, \quad \text{s.t.} \quad A\delta = 0, \ \ \|\delta\|_1 = 1, \ \ \delta_k x_k^* > 0 \ \text{for } k \in I, \ \ \delta_k x_k^* \le 0 \ \text{for } k \in \Delta \setminus I \tag{8} $$

Obviously, optimization problems (7) and (8) have the same feasible region. Moreover, since the objective of (8) never exceeds that of (7), whenever the optimal value of (8) is greater than zero, the optimal value of (7) must also be greater than zero: every sign pattern recoverable with the known support $T \cup \{i_e\}$ is also recoverable with $T$. Hence, we have

$$ P_{T \cup \{i_e\}}(x^{(1)} = x^*; A, N) \le P_T(x^{(1)} = x^*; A, N) $$
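
The step above is justified by comparing the two objectives: for every $\delta$ in the common feasible region,

$$ \sum_{k \in T^c \setminus I} |\delta_k| - |\delta_{i_e}| - \sum_{k \in I} |\delta_k| \;\le\; \sum_{k \in T^c \setminus I} |\delta_k| - \sum_{k \in I} |\delta_k|, $$

so a positive optimal value of (8) forces a positive optimal value of (7), but not conversely.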

The first inequality is proved.

Further, consider the following optimization problem and denote its solution by $x^3$:

$$ \min_x \; \|x_{T^c}\|_1 \quad \text{s.t.} \quad y'' = Ax \tag{9} $$

From the definition of $P_T(x^{(1)} = x''; A, N \setminus \{i_r\})$, we have $P(x^3 = x'') = P_T(x^{(1)} = x''; A, N \setminus \{i_r\})$. For $i_r \in N$, there are two cases: (I) $i_r \in T$; and (II) $i_r \in \Delta$. Denote by $\Delta''$ the unknown part of the support of the vector $x''$.

Now consider the first case, i.e., $i_r \in T$.

From the definitions of $N$, $N \setminus \{i_r\}$, and $i_r$, we have $\Delta'' = \Delta$. Thus, the vectors $x^*$ and $x''$ have the same number of sign patterns on their unknown supports $\Delta$ and $\Delta''$, respectively. Because optimization problems (9) and (7) share the same known support $T$, it follows from Theorem 1 that (9) and (7) have the same recoverability for every sign pattern on the unknown support $\Delta$. According to (4), if $i_r \in T$, we have

$$ P_T(x^{(1)} = x^*; A, N) = P_T(x^{(1)} = x''; A, N \setminus \{i_r\}). $$

Hence, in the first case, the second inequality holds (indeed with equality).

Now consider the second case, i.e., $i_r \in \Delta$.

Since $i_r \in \Delta$, we have $\Delta'' = \Delta \setminus \{i_r\}$. Clearly, the number of sign patterns of $x^*$ on the unknown support $\Delta$ is twice the number of sign patterns of $x''$ on the unknown support $\Delta''$: to each sign pattern of $x''$ on $\Delta''$ correspond exactly two sign patterns of $x^*$ on $\Delta$, which agree with $x''$ on $\Delta''$ and are nonzero, of opposite sign, in position $i_r$. To prove the second inequality, it suffices to show that, for any sign pattern of $x^*$, if $x^1 = x^*$, then the sign pattern of $x''$ that agrees with $x^*$ on the positions $\Delta''$ can also be recovered by modified-CS, i.e., $x^3 = x''$.

It follows from Theorem 1 that $x^3 = x''$ if and only if, for all $I'' \in F''$, when the following optimization problem (10) is solvable, its optimal value is greater than zero:

$$ \min_{\delta} \; \sum_{k \in T^c \setminus I''} |\delta_k| - \sum_{k \in I''} |\delta_k|, \quad \text{s.t.} \quad A\delta = 0, \ \ \|\delta\|_1 = 1, \ \ \delta_k x_k^* > 0 \ \text{for } k \in I'', \ \ \delta_k x_k^* \le 0 \ \text{for } k \in \Delta'' \setminus I'' \tag{10} $$

where $F''$ denotes the set of all subsets of $\Delta''$.

Suppose one of the sign patterns of $x^*$ on the unknown support $\Delta$ can be recovered by modified-CS, i.e., $x^1 = x^*$.

Thus, for all $I'' \in F''$, there exists $I = I'' \cup \{i_r\} \in F$ such that, when the following optimization problem (11) is solvable, its optimal value is greater than zero:

$$ \min_{\delta} \; \sum_{k \in T^c \setminus I} |\delta_k| - \sum_{k \in I} |\delta_k|, \quad \text{s.t.} \quad A\delta = 0, \ \ \|\delta\|_1 = 1, \ \ \delta_k x_k^* > 0 \ \text{for } k \in I'', \ \ \delta_{i_r} x_{i_r}^* > 0, \ \ \delta_k x_k^* \le 0 \ \text{for } k \in \Delta'' \setminus I'' \tag{11} $$

Hence,

$$ \min \sum_{k \in T^c \setminus I} |\delta_k| - \sum_{k \in I} |\delta_k| = \min \sum_{k \in T^c \setminus I''} |\delta_k| - |\delta_{i_r}| - \sum_{k \in I''} |\delta_k| - |\delta_{i_r}| > 0 \;\Longrightarrow\; \min \sum_{k \in T^c \setminus I''} |\delta_k| - \sum_{k \in I''} |\delta_k| > 0 $$

Meanwhile, for $I = I'' \in F$, when the following optimization problem (12) is solvable, its optimal value is greater than zero:

$$ \min_{\delta} \; \sum_{k \in T^c \setminus I} |\delta_k| - \sum_{k \in I} |\delta_k|, \quad \text{s.t.} \quad A\delta = 0, \ \ \|\delta\|_1 = 1, \ \ \delta_k x_k^* > 0 \ \text{for } k \in I'', \ \ \delta_{i_r} x_{i_r}^* \le 0, \ \ \delta_k x_k^* \le 0 \ \text{for } k \in \Delta'' \setminus I'' \tag{12} $$

Hence,

$$ \min \sum_{k \in T^c \setminus I} |\delta_k| - \sum_{k \in I} |\delta_k| > 0 \;\Longrightarrow\; \min \sum_{k \in T^c \setminus I''} |\delta_k| - \sum_{k \in I''} |\delta_k| > 0 $$

Since $x^*$ has the same sign pattern as $x''$ on the unknown support $\Delta''$, the union of the feasible regions of optimization problems (11) and (12) is the feasible region of optimization problem (10). Moreover, for all $I'' \in F''$, when optimization problem (10) is solvable, we have

$$ \min \sum_{k \in T^c \setminus I''} |\delta_k| - \sum_{k \in I''} |\delta_k| > 0. $$

It follows from Theorem 1 that $x^3 = x''$. In the second case, the second inequality is thus proved.

Combining Cases I and II, the second inequality is proved. This completes the proof of Theorem 2.

Remarks 1

The first inequality of Theorem 2 quantitatively describes the changing tendency of the RP with respect to the number of errors in the known support $T$: the more errors the known support $T$ contains, the lower the RP of modified-CS. The second inequality of Theorem 2 indicates that the sparser the vector $x^*$ (i.e., the fewer nonzero entries it has), the higher the RP of modified-CS.

Further, given any vector $x^\ddagger$ whose support is $N \cup \{i_e\}$, consider recovering $x^\ddagger$ from $y^\ddagger = Ax^\ddagger$ by applying modified-CS with the known support $T \cup \{i_e\}$; we denote the RP as $P_{T \cup \{i_e\}}(x^{(1)} = x^\ddagger; A, N \cup \{i_e\})$. From the second inequality of Theorem 2, one can establish the following corollary to Theorem 2.

Corollary 1

The following equality holds:

$$ P_{T \cup \{i_e\}}(x^{(1)} = x^*; A, N) = P_{T \cup \{i_e\}}(x^{(1)} = x^\ddagger; A, N \cup \{i_e\}). $$

Remarks 2

Corollary 1 reveals that adding an error $i_e$ to the known support $T$ has the same effect on the RP as decreasing the sparsity of the vector $x^*$ (making position $i_e$ a nonzero entry) while also adding the position $i_e$ to the set $T$. Because prior knowledge of the support increases the RP of modified-CS, the effect of adding an error $i_e$ to the known support $T$ is smaller than that of decreasing the sparsity of $x^*$ at position $i_e$ alone.

It is deducible from the above discussion that errors in the known support $T$ reduce the RP of modified-CS. However, two questions remain. First, for a given number of samples, to recover a sparse vector with $\ell$ nonzero entries, how many errors in the set $T$ can modified-CS bear? Second, within the acceptable range of errors, can modified-CS guarantee recoverability? Hereinafter, we consider these problems and reach the following results.

Theorem 3

Given any vector $x^*$ whose support is $N = (T \cup \Delta) \setminus \Delta_e$, consider recovering it from $y = Ax^*$, where $A$ is an $m \times n$ matrix, by applying modified-CS.

(1) If $|T \cup \Delta| > m$, i.e., $|\Delta_e| > m - |N|$, then

$$ P_T(x^{(1)} = x^*; A, N) = P_{T \cup \{i\}}(x^{(1)} = x^*; A, N) = \cdots = P_{T \cup \Delta}(x^{(1)} = x^*; A, N) = 0, \quad \text{where } i \in \Delta. $$

(2) If $|T \cup \Delta| \le m$, i.e., $|\Delta_e| \le m - |N|$, then

$$ P_{T \cup \Delta}(x^{(1)} = x^*; A, N) = 1. $$

Proof

According to the rank theorem in matrix theory,

$$ \dim[\mathrm{Null}(A)] = n - \mathrm{rank}(A), \tag{13} $$

where $\mathrm{Null}(\cdot)$, $\dim(\cdot)$, and $\mathrm{rank}(\cdot)$ denote the null space of a matrix, the dimension of a space, and the rank of a matrix, respectively.

Suppose $\mathrm{rank}(A) = m$, i.e., the matrix $A$ has full row rank. This is a mild assumption: in the CS field, the measurement matrix $A$ is typically a Gaussian random matrix, which has full row rank with probability one.

From Equation (13), $\dim[\mathrm{Null}(A)] = n - m$: for a vector $\delta \in \mathrm{Null}(A)$, one may select arbitrary $(n - m)$ entries of $\delta$ as free variables, the remaining entries being determined by them. Hence, if $|\Delta_e| > m - |N|$, the number of zero entries of the sub-vector $x^*_{T^c}$ equals

$$ |\{1, 2, \ldots, n\} \setminus (N \cup \Delta_e)| = n - |N| - |\Delta_e| < n - m \tag{14} $$

Denote by $\Omega$ the index set of the zero entries of $x^*_{T^c}$. Since $|\Omega| < n - m$, it follows from (14) that there exists

$$ \delta^1 \in \mathrm{Null}(A) \tag{15} $$

such that $\delta^1_\Omega = 0$ and $\|\delta^1\|_1 = 1$.

Suppose $x^{(1)} = x^*$. It follows from Theorem 1 that, for all $I \in F$, the optimal value of the objective function of the optimization problem (3) is greater than zero, provided that the problem is solvable. Denote the objective function of (3) by

$$ f(\delta) = \sum_{k \in T^c \setminus I} |\delta_k| - \sum_{k \in I} |\delta_k| \tag{16} $$

From the definition of $\Omega$, we have $T^c \setminus I = \Omega \cup (\Delta \setminus I)$. Therefore, (16) is equivalent to

$$ \sum_{k \in \Omega} |\delta_k| + \sum_{k \in \Delta \setminus I} |\delta_k| - \sum_{k \in I} |\delta_k| \tag{17} $$

For $\delta^1$, define the sets $I^1$, $S^1$, and $Z^1$ by

$$ I^1 = \{k \mid \delta_k^1 x_k^* > 0, \ k \in \Delta\}, \quad S^1 = \{k \mid \delta_k^1 x_k^* < 0, \ k \in \Delta\}, \quad Z^1 = \{k \mid \delta_k^1 x_k^* = 0, \ k \in \Delta\} \tag{18} $$

Combining (17) and (18), and noting that $\delta^1$ is feasible for problem (3) with $I = I^1$, it follows from Theorem 1 that, for $\delta^1$, we have

$$ f(\delta^1) = \sum_{k \in \Omega} |\delta_k^1| + \sum_{k \in \Delta \setminus I^1} |\delta_k^1| - \sum_{k \in I^1} |\delta_k^1| = \sum_{k \in S^1} |\delta_k^1| - \sum_{k \in I^1} |\delta_k^1| > 0 \tag{19} $$

Meanwhile, since $\delta^1 \in \mathrm{Null}(A)$, we also have

$$ -\delta^1 \in \mathrm{Null}(A) \tag{20} $$

Let $\delta^2 = -\delta^1$ and define

$$ I^2 = \{k \mid \delta_k^2 x_k^* > 0, \ k \in \Delta\}, \quad S^2 = \{k \mid \delta_k^2 x_k^* < 0, \ k \in \Delta\}, \quad Z^2 = \{k \mid \delta_k^2 x_k^* = 0, \ k \in \Delta\} \tag{21} $$

Obviously, $I^2 = S^1$, $S^2 = I^1$, and $Z^2 = Z^1$.

For $\delta^2$, we have

$$ f(\delta^2) = \sum_{k \in \Omega} |\delta_k^2| + \sum_{k \in \Delta \setminus I^2} |\delta_k^2| - \sum_{k \in I^2} |\delta_k^2| = \sum_{k \in S^2} |\delta_k^2| - \sum_{k \in I^2} |\delta_k^2| = \sum_{k \in I^1} |\delta_k^1| - \sum_{k \in S^1} |\delta_k^1| \tag{22} $$

From (19), it can be deduced that $f(\delta^2) = -f(\delta^1) < 0$.

Hence, there exist $\delta^2 \in \mathrm{Null}(A)$ and $I^2 \in F$ such that the optimal value of the objective function of the optimization problem (3) is less than zero. According to Theorem 1, the assumption $x^{(1)} = x^*$ cannot hold, i.e.,

$$ P_T(x^{(1)} = x^*; A, N) = 0. $$

It is easy to see that the above argument also holds for the known supports $T \cup \{i\}, \ldots, T \cup \Delta$, where $i \in \Delta$. Therefore, result (1) is proved.

From the definition of $N$, if the known support is $T'' = T \cup \Delta$, the optimal value of the objective function of the model in (2) is obviously zero. Hence, the solution of modified-CS satisfies the constraint $A_{T''} x_{T''} = y$. If $|T \cup \Delta| \le m$, then, by elementary linear algebra (the columns of $A_{T''}$ being linearly independent with probability one), the solution satisfying $A_{T''} x_{T''} = y$ is unique and equals $x^*$. Result (2) is proved, and with it Theorem 3.

Remarks 3

Theorem 3 provides a bound on the number of errors in the known support $T$ that is related to the number of samples $m$ and the sparsity $\ell$ of the original vector. This bound mirrors the fault-tolerance capability of modified-CS: to recover sparse vectors with $\ell$ nonzero entries, if the number of errors in the set $T$ exceeds $m - \ell$, modified-CS cannot recover any sparse vector, regardless of how many positions of the support are included in the set $T$. Conversely, within this bound, the RP steadily improves as more prior knowledge of the support is added to $T$; furthermore, given sufficient prior knowledge of the support, the recoverability of modified-CS is guaranteed.
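
As a numerical illustration of the construction behind result (1) of Theorem 3 (our own sketch, with the matrix size taken from the simulations below; SciPy's `null_space` is assumed available), the following snippet exhibits, when $|\Delta_e| > m - |N|$, a null-space vector of $A$ that vanishes on $\Omega$:

```python
# Illustration of the construction behind result (1) of Theorem 3: when
# |Delta_e| > m - |N|, Null(A) (dimension n - m) contains a vector vanishing
# on Omega, the zero positions of x*_{T^c}.  Sizes follow the simulations
# below; the index choices are our own.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
m, n = 7, 9
A = rng.uniform(-0.5, 0.5, size=(m, n))
N = [0, 1, 2]                        # true support, |N| = 3
Delta_e = [3, 4, 5, 6, 7]            # 5 errors > m - |N| = 4
Omega = sorted(set(range(n)) - set(N) - set(Delta_e))

B = null_space(A)                    # n x (n - m) basis of Null(A)
# Solve B[Omega, :] @ c = 0 to get a null-space vector with delta_Omega = 0;
# a nonzero c exists because |Omega| < n - m.
C = null_space(B[Omega, :])
delta = B @ C[:, 0]
delta /= np.abs(delta).sum()         # normalize so ||delta||_1 = 1
print(np.allclose(A @ delta, 0), np.allclose(delta[Omega], 0))  # True True
```

Negating such a vector flips the sign of the objective in (3), which is exactly how the proof rules out recovery.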

Numerical simulation

In this section, simulation results are presented to support the theoretical derivations. In all experiments, the entries of the matrix $A \in \mathbb{R}^{7 \times 9}$ are drawn from the uniform distribution on $[-0.5, 0.5]$, and all nonzero entries of the sparse vector $x^*$ are drawn from the uniform distribution on $[-1, +1]$.

Experiment 1.

We validate the relationships among the probabilities $P_T(x^{(1)} = x^*; A, N)$; $P_{T \cup \{i_e\}}(x^{(1)} = x^*; A, N)$, where $i_e \in T^c \setminus \Delta$; $P_{T \cup \{i_r\}}(x^{(1)} = x^*; A, N)$, where $i_r \in \Delta$; and $P_{T \cup \{i_r\}}(x^{(1)} = x^\ddagger; A, N \setminus \{i_r\})$, where $i_r \in \Delta$. It is assumed that $x^*$ has $\ell$ ($\ell = 3, 4, \ldots, 7$) nonzero entries and that the known part $T$ of the support contains two indices. To calculate $P_T(x^{(1)} = x^*; A, N)$, we count the sign patterns that can be recovered by solving modified-CS with matrix $A$, support $N$, and partially known support $T$. Furthermore, for $P_{T \cup \{i_e\}}(x^{(1)} = x^*; A, N)$ with $i_e \in T^c \setminus \Delta$ and $P_{T \cup \{i_r\}}(x^{(1)} = x^*; A, N)$ with $i_r \in \Delta$, we randomly add a new index $i_e \in T^c \setminus \Delta$ or $i_r \in \Delta$ to the set $T$, respectively, and calculate the recoverability probabilities in the same way. For $P_{T \cup \{i_r\}}(x^{(1)} = x^\ddagger; A, N \setminus \{i_r\})$, we first set the nonzero entry at position $i_r$ ($i_r \in \Delta$) to zero. Adding the position $i_r$ to the set $T$, we count the sign patterns that can be recovered by solving modified-CS with matrix $A$, support $N \setminus \{i_r\}$, and partially known support $T \cup \{i_r\}$. Notice that the index $i_r$ is an error position of the support for $x^\ddagger$. The experimental results in Figure 1 validate the results in Theorem 2.

Figure 1

Probability curves obtained in Experiment 1. The four curves denote $P_T(x^{(1)} = x^*; A, N)$ with $|T| = |T_{\text{true}}| + |\Delta_e| = 2$; $P_{T \cup \{i_r\}}(x^{(1)} = x^*; A, N)$, the probability after adding an index $i_r$ of a nonzero entry to the set $T$, i.e., $|T''| = |T \cup \{i_r\}| = (|T_{\text{true}}| + 1) + |\Delta_e| = 3$; $P_{T \cup \{i_e\}}(x^{(1)} = x^*; A, N)$, the probability after adding an index $i_e$ of a zero entry to the set $T$, i.e., $|T''| = |T \cup \{i_e\}| = |T_{\text{true}}| + (|\Delta_e| + 1) = 3$; and $P_{T \cup \{i_r\}}(x^{(1)} = x^\ddagger; A, N \setminus \{i_r\})$.

Experiment 2.

We validate the results of Theorems 2 and 3 in this experiment. Without loss of generality, we suppose the sparse vector $x^*$ has three nonzero entries. At the beginning, the known part of the support $T$ is empty. We randomly select 0, 1, 2, 3, 4 positions in $\{1, 2, \ldots, n\} \setminus N$ as errors and add these errors to the known part of the support $T$ step by step. The experiment then divides into two cases. In Case I, we add a fifth error to the set $T$; after that, 1, 2, 3 new elements of the support are added in turn to the set $T$, and the RP is computed at each step. In Case II, we add 1, 2, 3 new elements of the support to the set $T$ right after adding the fourth error, again computing the RP at each step. The experimental results are shown in Figure 2. The black curve denotes the RP of modified-CS whose partially known support $T$ contains 0, 1, 2, 3, 4 errors; the red curve denotes the RP of Case I; the blue curve denotes the RP of Case II. According to Theorem 2, the RP of modified-CS steadily decreases as more and more errors are added to the set $T$. From Theorem 3, when $|\Delta_e| > m - |N| = 4$, modified-CS cannot recover any 3-sparse signal, regardless of how many elements of the support are included in the set $T$. Conversely, when $|\Delta_e| \le m - |N| = 4$, the RP of modified-CS steadily improves as elements of the support are added; furthermore, given sufficiently many elements of the support, recoverability is guaranteed. As shown in Figure 2, the experimental results validate Theorems 2 and 3.
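
A compact sketch of the Experiment 2 loop follows, reusing the hypothetical `modified_cs` and `recovery_prob` helpers from the earlier snippets (the random seed and the particular index choices are our own):

```python
# Compact sketch of the Experiment 2 loop, reusing the hypothetical
# `modified_cs` and `recovery_prob` helpers from the earlier snippets.
import numpy as np

rng = np.random.default_rng(1)
m, n = 7, 9
A = rng.uniform(-0.5, 0.5, size=(m, n))
N = [0, 1, 2]                                   # 3-sparse signal, m - |N| = 4
errors = rng.permutation(sorted(set(range(n)) - set(N)))

T = []
for k in range(5):                              # add errors one by one
    print(f"|Delta_e| = {len(T)}: RP = {recovery_prob(A, N, T):.3f}")
    T = T + [int(errors[k])]
# Case I: with 5 errors, |Delta_e| > m - |N|, so the RP below stays 0 no
# matter how many true-support elements are revealed (Theorem 3); rerunning
# with only 4 errors (Case II) drives the RP back up to 1.
for i in N:
    T = T + [i]
    print(f"added support index {i}: RP = {recovery_prob(A, N, T):.3f}")
```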

Figure 2

Probability curves obtained in Experiment 2. The black curve denotes the RP of modified-CS whose known support $T$ contains 0, 1, 2, 3, 4 errors; the red curve denotes the RP of Case I; the blue curve denotes the RP of Case II.

Conclusions

In this article, we analyzed the effect of errors in the partially known support on the recoverability of modified-CS. Two probability inequalities were established that indicate how the recoverability of modified-CS changes as errors are added to the known support and as the sparsity of the original vector varies. An exact quantitative bound on the number of errors in the known part, associated with the number of measurements and the sparsity of the original vector, was also derived. We proved that if the number of errors does not exceed this bound, the recoverability of modified-CS can be guaranteed provided enough support information is available; conversely, beyond this bound, no matter how much support information we have, modified-CS cannot recover any vector. These results reveal the relationships among errors, measurements, and sparsity, and can provide important guidance for applications of modified-CS.

References

1. Lustig M, Donoho DL, Santos JM, Pauly JM: Compressed sensing MRI. IEEE Signal Process. Mag. 2008, 25(2):72-82.

2. Haupt J, Bajwa WU, Rabbat M, Nowak R: Compressed sensing for networked data. IEEE Signal Process. Mag. 2008, 25(2):92-101.

3. Bajwa WU, Haupt J, Sayeed AM, Nowak R: Compressed channel sensing: a new approach to estimating sparse multipath channels. Proc. IEEE 2010, 98(6):1058-1076.

4. Wright J, Yang AY, Ganesh A, Sastry SS, Ma Y: Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31(2):210-227.

5. Candès EJ, Romberg J, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52(2):489-509.

6. Donoho DL: Compressed sensing. IEEE Trans. Inf. Theory 2006, 52(4):1289-1306.

7. Vaswani N, Lu W: Modified-CS: modifying compressive sensing for problems with partially known support. IEEE Trans. Signal Process. 2010, 58(9):4595-4607.

8. Miosso CJ, von Borries R, Argàez M, Velazquez L, Quintero C, Potes CM: Compressive sensing reconstruction with prior information by iteratively reweighted least-squares. IEEE Trans. Signal Process. 2009, 57(6):2424-2431.

9. Wang Y, Yin W: Sparse signal reconstruction via iterative support detection. SIAM J. Imaging Sci. 2010, 3(3):462-491. doi:10.1137/090772447

10. Zhang J, Li YQ, Yu ZL, Gu ZH: Recoverability analysis for modified compressive sensing with partially known support. arXiv, 2012. http://arxiv.org/abs/1207.1855. Accessed 8 July 2012

11. Vaswani N: Stability (over time) of modified-CS and LS-CS for recursive causal sparse reconstruction. arXiv, 2010. http://arxiv.org/abs/1006.4818. Accessed 24 June 2010


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants 60825306, 91120305, 61175114, and 61105121, the National High-tech R&D Program of China (863 Program) under Grant 2012AA011601, the Program for New Century Excellent Talents in University under Grant NCET-10-0370, and the Excellent Youth Development Project of Universities in Guangdong Province.

Author information


Corresponding author

Correspondence to Jun Zhang.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
