For convenience of presentation, we introduce the following definitions. Adding an error i_e (i_e ∈ T^c∖Δ) to the set T, the RP is denoted as P_{T∪{i_e}}(x(1)=x∗; A, N). Given any vector x″ whose support is N∖{i_r}, with i_r ∈ N, consider recovering x″ from y″=Ax″ by applying modified-CS with the known support T; the RP is denoted as P_T(x(1)=x″; A, N∖{i_r}). The following theorem is established.
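For concreteness, the modified-CS program referred to throughout is, in its standard form, min ‖x_{T^c}‖_1 subject to Ax=y; since the paper's problem (1) is not reproduced in this excerpt, that formulation is an assumption here. A minimal numerical sketch in Python, assuming the cvxpy package:

import numpy as np
import cvxpy as cp

def modified_cs(A, y, T):
    # Modified-CS: minimize the l1 norm of x outside the known
    # support T, subject to the measurement constraint A x = y.
    n = A.shape[1]
    Tc = np.setdiff1d(np.arange(n), T)  # complement of the known support
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(x[Tc])), [A @ x == y]).solve()
    return x.value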
Theorem 2
The following inequalities hold.
Proof
Consider the following two optimization problems:
(5)
(6)
where i_e ∈ T^c∖Δ is an error with respect to the support N.
Denote by x_1 and x_2 the solutions of (5) and (6), respectively. From the definitions of P_T(x(1)=x∗; A, N) and P_{T∪{i_e}}(x(1)=x∗; A, N), we have P(x_1=x∗)=P_T(x(1)=x∗; A, N) and P(x_2=x∗)=P_{T∪{i_e}}(x(1)=x∗; A, N), where P(β) denotes the probability that the event β occurs.
Now suppose that x_1=x∗. It follows from Theorem 1 that, ∀I∈F, when the following optimization problem (7) is solvable, its optimal value is greater than zero
(7)
where F denotes the set of all subsets of Δ.
Meanwhile, when the error i_e is added to the set T, we consider the corresponding RP.
Since i_e ∈ T^c∖Δ, it means that ∀I∈F, we have i_e ∈ T^c∖I. Thus, it can be deduced that (T∪{i_e})^c∖I = T^c∖(I∪{i_e}). Suppose x_2=x∗. It also follows from Theorem 1 that, ∀I∈F, when the following optimization problem (8) is solvable, its optimal value is greater than zero
(8)
Obviously, optimization problems (7) and (8) have the same feasible region. Moreover, when the optimal value of (7) is greater than zero, the optimal value of (8) must be as well. Hence, we have
The first inequality is proved.
Further, consider the following optimization problem and denote its solution by x_3:
(9)
From the definition of P_T(x(1)=x″; A, N∖{i_r}), we have P(x_3=x″)=P_T(x(1)=x″; A, N∖{i_r}). For i_r∈N, there are two cases: I) i_r∈T; and II) i_r∈Δ. Denote by Δ″ the unknown support of the vector x″.
Now consider the first case, i.e., i_r∈T.
From the definitions of N, N∖{i_r} and i_r, we have Δ″=Δ. Thus, the vectors x∗ and x″ have the same number of sign patterns on their unknown supports Δ and Δ″, respectively. Because optimization problems (9) and (7) have the same known support T, it follows from Theorem 1 that (9) and (7) have the same recoverability for every sign pattern on the unknown support Δ. According to (4), if i_r∈T, we have
In the first case, the second inequality holds.
Now consider the second case, i.e., i_r∈Δ.
Since i_r∈Δ, we have Δ″=Δ∖{i_r}. It is easy to see that the number of sign patterns of x∗ on the unknown support Δ equals twice the number of sign patterns of x″ on the unknown support Δ″. For any sign pattern of x″ on the unknown support Δ″, there exist two corresponding sign patterns of x∗ on the unknown support Δ. These two agree with the sign pattern of x″ on the unknown support Δ″, but are nonzero and of opposite signs at position i_r. To prove the second inequality, it suffices to show that, for any sign pattern of x∗, if x_1=x∗, then the vector x″ that has the same sign pattern as x∗ on the positions Δ″ can certainly be recovered by modified-CS, i.e., x_3=x″.
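The doubling is elementary counting: 2^{|Δ|} = 2·2^{|Δ″|} when Δ″=Δ∖{i_r}. A small illustrative enumeration (the indices are arbitrary placeholders):

from itertools import product

Delta = [3, 7, 9]      # unknown support of x*, so 2^3 = 8 sign patterns
i_r = 9                # removed position, so Δ″ = {3, 7} has 2^2 = 4 patterns
full = list(product([-1, 1], repeat=len(Delta)))
reduced = list(product([-1, 1], repeat=len(Delta) - 1))
# Each pattern of x″ on Δ″ lifts to exactly two patterns of x* on Δ,
# differing only in the sign taken at position i_r.
assert len(full) == 2 * len(reduced)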
It follows from Theorem 1 that x_3=x″ if and only if, ∀I″∈F″, when the following optimization problem (10) is solvable, its optimal value is greater than zero
(10)
where F″ denotes the set of all subsets of Δ″.
Suppose one of the sign patterns of x∗ on the unknown support Δ can be recovered by modified-CS, i.e., x_1=x∗.
Thus, ∀I″∈F″, we have ∃(I=I″∪{i_r})∈F so that when the following optimization problem (11) is solvable, its optimal value is greater than zero
(11)
Hence,
Meanwhile, ∃(I=I″)∈F such that when the following optimization problem (12) is solvable, its optimal value is greater than zero
(12)
Hence,
Since x∗ has the same sign pattern as x″ on the unknown support Δ″, the union of the feasible regions of optimization problems (11) and (12) is the feasible region of optimization problem (10). Moreover, ∀I″∈F″, when the optimization problem (10) is solvable, we have
It then follows from Theorem 1 that x_3=x″. In the second case, the second inequality is proved.
Combining Cases I and II, the second inequality is proved. Theorem 2 is proved.
Remarks 1
The first inequality of Theorem 2 quantitatively describes the changing tendency of the RP with respect to the number of errors in the known support T: the more errors the known support T contains, the lower the RP of modified-CS is. The second inequality of Theorem 2 indicates that the higher the sparsity of the vector x∗ is, the higher the RP of modified-CS is.
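These tendencies can be probed empirically. The sketch below, which reuses the hypothetical modified_cs solver from above, estimates an RP such as P_T(x(1)=x∗; A, N) by Monte Carlo over random sign patterns; sampling signs uniformly is an assumption about the underlying probability model:

import numpy as np

def estimate_rp(A, T, N, trials=200, tol=1e-6):
    # Empirical recovery probability: draw random sign vectors
    # supported on N, measure, recover with known support T,
    # and count exact recoveries.
    n = A.shape[1]
    hits = 0
    for _ in range(trials):
        x_true = np.zeros(n)
        x_true[N] = np.random.choice([-1.0, 1.0], size=len(N))
        x_hat = modified_cs(A, A @ x_true, T)
        hits += np.allclose(x_hat, x_true, atol=tol)
    return hits / trials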
Further, given any vector x‡ whose support is N∪{i_e}, consider recovering x‡ from y‡=Ax‡ by applying modified-CS with the known support T∪{i_e}; we denote the RP as P_{T∪{i_e}}(x(1)=x‡; A, N∪{i_e}). From the second inequality of Theorem 2, one can establish the following corollary to Theorem 2.
Corollary 1
The following equalities hold.
Remarks 2
Corollary 1 reveals that adding an error to the known support T has the same effect on the RP as decreasing the sparsity of the vector x∗ at position i_e while adding the position i_e to the set T. Because prior knowledge of the support increases the RP of modified-CS, the effect of adding an error i_e to the known support T is smaller than that of decreasing the sparsity of the vector x∗ at position i_e.
It is deducible from the above discussion that errors in the known support T reduce the RP of modified-CS. Two questions then arise. First, under a certain number of samples, how many errors in the set T can modified-CS tolerate when recovering a sparse vector with ℓ nonzero entries? Second, within this tolerable range of errors, can modified-CS still guarantee recoverability? Hereinafter, we consider these questions and reach the following results.
Theorem 3
Given any vector x∗ whose support is N=T∪Δ∖Δ_e, consider recovering it from y=Ax∗, where A is an m×n matrix, by applying modified-CS.
(1) If |T∪Δ|>m, i.e., |Δ_e|>m−|N|,
(2) If |T∪Δ|≤m, i.e., |Δ_e|≤m−|N|,
Proof
According to the rank theorem in matrix theory,
dim(Null(A)) = n − rank(A), (13)
where Null(•), dim(•) and rank(•) represent the null space of a matrix, the dimension of a space and the rank of a matrix, respectively.
Suppose rank(A)=m, i.e., the matrix A has full row rank. It is well known that, in the CS field, the measurement matrix A is usually a Gaussian random matrix, which has full row rank with probability one.
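This fact, together with Equation (13), is easy to confirm numerically (a quick sketch; the dimensions are arbitrary):

import numpy as np

m, n = 30, 100
A = np.random.randn(m, n)               # Gaussian measurement matrix
assert np.linalg.matrix_rank(A) == m    # full row rank, with probability one
# Hence, by Equation (13), dim(Null(A)) = n - rank(A) = n - m = 70.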
From Equation (13), we may select arbitrary (n−m) entries of δ as independent variables, and the other entries of δ can then be expressed in terms of these (n−m) variables. Hence, if |Δ_e|>m−|N|, the number of zero entries of the sub-vector equals
(14)
Denote by Ω the index set of the zero entries of this sub-vector. Then, it follows from (14) that
(15)
we have
and ∥δ_1∥_1=1.
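The construction of such a δ_1 can be imitated numerically: take a basis of Null(A) and solve for a combination that vanishes on a prescribed index set Ω, which is possible whenever |Ω|<n−m. A sketch assuming scipy:

import numpy as np
from scipy.linalg import null_space

m, n = 30, 100
A = np.random.randn(m, n)
B = null_space(A)                      # n x (n - m) basis of Null(A)
Omega = np.arange(50)                  # prescribed zero positions, |Omega| < n - m
c = null_space(B[Omega, :])[:, 0]      # combination of basis vectors vanishing on Omega
delta1 = B @ c
delta1 /= np.abs(delta1).sum()         # normalize so that ||delta1||_1 = 1
assert np.allclose(A @ delta1, 0) and np.allclose(delta1[Omega], 0)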
Suppose x(1)=x∗; it follows from Theorem 1 that, ∀I∈F, the optimal value of the objective function of the optimization problem (3) is greater than zero, provided that this optimization problem is solvable. Denote the objective function of (3) as
(16)
From the definition of Ω, the set T^c∖I=Ω∪(Δ∖I). Therefore, (16) is equivalent to
(17)
For δ_1, there exist I_1, S_1 and Z_1, so that
(18)
Combining (17) and (18), it follows from Theorem 1 that, for δ_1, we have
(19)
Meanwhile, if δ_1∈Null(A), there exists
(20)
Let δ_2=−δ_1 and denote
(21)
Obviously, I_2=S_1, S_2=I_1 and Z_2=Z_1.
For δ_2, we have
(22)
From (19), it can be deduced that f(δ_2)<0.
Hence, there exist δ_2∈Null(A) and I_2∈F such that the optimal value of the objective function of the optimization problem (3) is less than zero. According to Theorem 1, the assumption x(1)=x∗ does not hold, i.e.,
It is easy to see that the above discussion also holds for the known supports T∪{i}, …, T∪Δ, where i∈Δ. Therefore, result (1) is proved.
From the definition of N, if the known support T″=T∪Δ, it is obvious that the optimal value of the objective function of problem (2) is zero. Hence, the solution of modified-CS satisfies the constraint. If |T∪Δ|≤m, then, according to linear algebra theory, the solution satisfying this constraint is unique and equals x∗. Result (2) is proved. Theorem 3 is proved.
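Result (2) can be mirrored numerically: once the known support T″=T∪Δ covers all candidate positions and |T∪Δ|≤m, the solution is pinned down by the restricted linear system. A sketch under the assumption that A[:, T″] has full column rank, which holds with probability one for a Gaussian A:

import numpy as np

def recover_with_full_support(A, y, T_full):
    # With the full candidate support known and |T_full| <= m, the
    # system A[:, T_full] z = y has a unique solution, recovered here
    # by least squares; the remaining entries are set to zero.
    x = np.zeros(A.shape[1])
    z, *_ = np.linalg.lstsq(A[:, T_full], y, rcond=None)
    x[T_full] = z
    return x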
Remarks 3
Theorem 3 provides a bound on the number of errors in the known support T that relates to the number of samples m and the sparsity ℓ of the original vector. This bound mirrors the fault-tolerance capability of modified-CS: to recover sparse vectors with ℓ nonzero entries, if the number of errors in the set T exceeds m−ℓ, then modified-CS cannot recover any such sparse vector, regardless of how many positions of the true support are included in the set T. Conversely, within this bound, as more prior knowledge of the support is added to the set T, the RP steadily improves. Furthermore, provided sufficient prior knowledge of the support, the recoverability of modified-CS can be guaranteed.
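A hypothetical experiment illustrating the m−ℓ bound, reusing the modified_cs sketch from above; the parameters are arbitrary, and the case where T already contains the whole true support (Δ=∅) is chosen so that Theorem 3 predicts recovery exactly up to m−ℓ errors:

import numpy as np

m, n, ell, trials = 30, 100, 10, 50
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))
for num_err in (0, 10, m - ell, m - ell + 1):   # the last value exceeds the bound
    hits = 0
    for _ in range(trials):
        support = rng.choice(n, size=ell, replace=False)
        x_true = np.zeros(n)
        x_true[support] = rng.choice([-1.0, 1.0], size=ell)
        off_support = np.setdiff1d(np.arange(n), support)
        errors = rng.choice(off_support, size=num_err, replace=False)
        T = np.concatenate([support, errors])   # known support with num_err errors
        x_hat = modified_cs(A, A @ x_true, T)
        hits += np.allclose(x_hat, x_true, atol=1e-5)
    print(num_err, hits / trials)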