For convenience of presentation, we introduce the following definitions. When an error $i_{e}$ ($i_{e}\in \mathbf{T}^{c}\setminus \Delta$) is added to the set **T**, the **RP** is denoted by ${\mathbf{P}}_{\mathbf{T}\cup \{i_{e}\}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\mathbf{A},\mathbf{N})$. Given any vector $\mathbf{x}''$ whose support is $\mathbf{N}\setminus \{i_{r}\}$ with $i_{r}\in \mathbf{N}$, consider recovering $\mathbf{x}''$ from $\mathbf{y}''=\mathbf{A}\mathbf{x}''$ by applying the modified-CS with the known support **T**; this **RP** is denoted by ${\mathbf{P}}_{\mathbf{T}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\mathbf{A},\mathbf{N}\setminus \{i_{r}\})$. The following theorem is established.

### Theorem 2

The following inequalities hold.

{\mathbf{P}}_{\mathbf{T}\cup \{i_{e}\}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\mathbf{A},\mathbf{N})\le {\mathbf{P}}_{\mathbf{T}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\mathbf{A},\mathbf{N})\le {\mathbf{P}}_{\mathbf{T}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\mathbf{A},\mathbf{N}\setminus \{i_{r}\}).

### Proof

Consider the following two optimization problems:

\min_{\mathbf{x}} \|\mathbf{x}_{\mathbf{T}^{c}}\|_{1}\quad \mathrm{s.t.}\quad \mathbf{y}=\mathbf{A}\mathbf{x}

(5)

\min_{\mathbf{x}} \|\mathbf{x}_{(\mathbf{T}\cup \{i_{e}\})^{c}}\|_{1}\quad \mathrm{s.t.}\quad \mathbf{y}=\mathbf{A}\mathbf{x}

(6)

where $i_{e}\in \mathbf{T}^{c}\setminus \Delta$ is an error of the support **N**.
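Problems (5) and (6) are linear programs. As a concrete illustration, the following minimal Python sketch (assuming `numpy` and `scipy` are available; the function name `modified_cs` is ours, not from the paper) casts $\min \|\mathbf{x}_{\mathbf{T}^{c}}\|_{1}$ s.t. $\mathbf{y}=\mathbf{A}\mathbf{x}$ in standard LP form and solves it:

```python
import numpy as np
from scipy.optimize import linprog

def modified_cs(A, y, T):
    """Solve min ||x_{T^c}||_1  s.t.  A x = y  as a linear program.

    Variables are z = [x; t], where t_i bounds |x_j| for j in T^c, so
    minimising sum(t) minimises the l1-norm outside the known support T.
    """
    m, n = A.shape
    Tc = [j for j in range(n) if j not in set(T)]   # complement of T
    k = len(Tc)
    c = np.concatenate([np.zeros(n), np.ones(k)])   # objective: sum of t
    A_eq = np.hstack([A, np.zeros((m, k))])         # A x = y
    A_ub = np.zeros((2 * k, n + k))                 # encode |x_j| <= t_i
    for i, j in enumerate(Tc):
        A_ub[2 * i, [j, n + i]] = [1.0, -1.0]       #  x_j - t_i <= 0
        A_ub[2 * i + 1, [j, n + i]] = [-1.0, -1.0]  # -x_j - t_i <= 0
    bounds = [(None, None)] * n + [(0, None)] * k
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * k),
                  A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:n]
```

Problem (6) is obtained by simply passing the enlarged known support $\mathbf{T}\cup \{i_{e}\}$ as `T`.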

Denote by $\mathbf{x}^{1}$ and $\mathbf{x}^{2}$ the solutions of (5) and (6), respectively. From the definitions of ${\mathbf{P}}_{\mathbf{T}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\ \mathbf{A},\ \mathbf{N})$ and ${\mathbf{P}}_{\mathbf{T}\cup \{i_{e}\}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\ \mathbf{A},\ \mathbf{N})$, we have

\begin{array}{l}\mathbf{P}(\mathbf{x}^{1}=\mathbf{x}^{*})={\mathbf{P}}_{\mathbf{T}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\ \mathbf{A},\ \mathbf{N})\\ \mathbf{P}(\mathbf{x}^{2}=\mathbf{x}^{*})={\mathbf{P}}_{\mathbf{T}\cup \{i_{e}\}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\ \mathbf{A},\ \mathbf{N})\end{array}

where $\mathbf{P}(\beta)$ denotes the probability that event $\beta$ occurs.

Now suppose that $\mathbf{x}^{1}=\mathbf{x}^{*}$. It follows from Theorem 1 that, ∀**I**∈**F**, whenever the following optimization problem (7) is solvable, its optimal value is greater than zero

\begin{array}{l}\min \sum_{k\in \mathbf{T}^{c}\setminus \mathbf{I}}|\delta_{k}| - \sum_{k\in \mathbf{I}}|\delta_{k}|,\quad \mathrm{s.t.}\\ \mathbf{A}\delta =0,\quad \|\delta\|_{1}=1\\ \delta_{k}x_{k}^{*}>0 \quad \text{for } k\in \mathbf{I}\\ \delta_{k}x_{k}^{*}\le 0 \quad \text{for } k\in \Delta\setminus \mathbf{I}\end{array}

(7)

where **F** denotes the set of all subsets of *Δ*.

Meanwhile, when an error $i_{e}$ is added to the set **T**, we consider the **RP** ${\mathbf{P}}_{\mathbf{T}\cup \{i_{e}\}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\mathbf{A},\mathbf{N})$.

Since $i_{e}\in \mathbf{T}^{c}\setminus \Delta$, for all **I**∈**F** we have $i_{e}\in \mathbf{T}^{c}\setminus \mathbf{I}$. Thus, it can be deduced that $(\mathbf{T}\cup \{i_{e}\})^{c}\setminus \mathbf{I}=\mathbf{T}^{c}\setminus (\mathbf{I}\cup \{i_{e}\})$. Suppose $\mathbf{x}^{2}=\mathbf{x}^{*}$. It also follows from Theorem 1 that, ∀**I**∈**F**, whenever the following optimization problem (8) is solvable, its optimal value is greater than zero

\begin{array}{l}\min \sum_{k\in \mathbf{T}^{c}\setminus \mathbf{I}}|\delta_{k}| - |\delta_{i_{e}}| - \sum_{k\in \mathbf{I}}|\delta_{k}|,\quad \mathrm{s.t.}\\ \mathbf{A}\delta =0,\quad \|\delta\|_{1}=1\\ \delta_{k}x_{k}^{*}>0 \quad \text{for } k\in \mathbf{I}\\ \delta_{k}x_{k}^{*}\le 0 \quad \text{for } k\in \Delta\setminus \mathbf{I}\end{array}

(8)

Obviously, optimization problems (7) and (8) have the same feasible region, and the objective function of (8) equals that of (7) minus $|\delta_{i_{e}}|$. Therefore, whenever the optimal value of (8) is greater than zero, so is that of (7). Hence, we have

{\mathbf{P}}_{\mathbf{T}\cup \left\{{i}_{e}\right\}}({\mathbf{x}}^{\left(1\right)}={\mathbf{x}}^{\ast};\phantom{\rule{2.77695pt}{0ex}}\mathbf{A},\phantom{\rule{2.77695pt}{0ex}}\mathbf{N})\le {\mathbf{P}}_{\mathbf{T}}({\mathbf{x}}^{\left(1\right)}={\mathbf{x}}^{\ast};\phantom{\rule{2.77695pt}{0ex}}\mathbf{A},\phantom{\rule{2.77695pt}{0ex}}\mathbf{N})

The first inequality is proved.

Further, consider the following optimization problem and denote its solution by $\mathbf{x}^{3}$:

\min_{\mathbf{x}} \|\mathbf{x}_{\mathbf{T}^{c}}\|_{1}\quad \mathrm{s.t.}\quad \mathbf{y}''=\mathbf{A}\mathbf{x}

(9)

From the definition of ${\mathbf{P}}_{\mathbf{T}}(\mathbf{x}^{(1)}=\mathbf{x}'';\ \mathbf{A},\ \mathbf{N}\setminus \{i_{r}\})$, we have $\mathbf{P}(\mathbf{x}^{3}=\mathbf{x}'')={\mathbf{P}}_{\mathbf{T}}(\mathbf{x}^{(1)}=\mathbf{x}'';\ \mathbf{A},\ \mathbf{N}\setminus \{i_{r}\})$. For $i_{r}\in \mathbf{N}$, there are two cases: I) $i_{r}\in \mathbf{T}$; and II) $i_{r}\in \Delta$. Denote by $\Delta''$ the unknown support of the vector $\mathbf{x}''$.

Now consider the first case, i.e., $i_{r}\in \mathbf{T}$.

From the definitions of **N**, $\mathbf{N}\setminus \{i_{r}\}$ and $i_{r}$, we have $\Delta''=\Delta$. Thus, the vectors $\mathbf{x}^{*}$ and $\mathbf{x}''$ have the same number of sign patterns on their unknown supports $\Delta$ and $\Delta''$, respectively. Because optimization problems (9) and (7) have the same known support **T**, it follows from Theorem 1 that (9) and (7) have the same recoverability for every sign pattern on the unknown support $\Delta$. According to (4), if $i_{r}\in \mathbf{T}$, we have

{\mathbf{P}}_{\mathbf{T}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\ \mathbf{A},\ \mathbf{N})={\mathbf{P}}_{\mathbf{T}}(\mathbf{x}^{(1)}=\mathbf{x}'';\ \mathbf{A},\ \mathbf{N}\setminus \{i_{r}\}).

In the first case, the second inequality holds.

Now consider the second case, i.e., $i_{r}\in \Delta$.

Since $i_{r}\in \Delta$, we have $\Delta''=\Delta\setminus \{i_{r}\}$. It is easy to see that the number of sign patterns of $\mathbf{x}^{*}$ on the unknown support $\Delta$ equals twice the number of sign patterns of $\mathbf{x}''$ on the unknown support $\Delta''$. For any sign pattern of $\mathbf{x}''$ on $\Delta''$, there exist two corresponding sign patterns of $\mathbf{x}^{*}$ on $\Delta$: both share the sign pattern of $\mathbf{x}''$ on $\Delta''$, but they are nonzero and of opposite signs in position $i_{r}$. To prove the second inequality, it suffices to show that, for any sign pattern of $\mathbf{x}^{*}$, if $\mathbf{x}^{1}=\mathbf{x}^{*}$, then the vector $\mathbf{x}''$, which has the same sign pattern as $\mathbf{x}^{*}$ on the positions $\Delta''$, can certainly be recovered by the modified-CS, i.e., $\mathbf{x}^{3}=\mathbf{x}''$.

It follows from Theorem 1 that $\mathbf{x}^{3}=\mathbf{x}''$ if and only if, for all $\mathbf{I}''\in \mathbf{F}''$, whenever the following optimization problem (10) is solvable, its optimal value is greater than zero

\begin{array}{l}\min \sum_{k\in \mathbf{T}^{c}\setminus \mathbf{I}''}|\delta_{k}| - \sum_{k\in \mathbf{I}''}|\delta_{k}|,\quad \mathrm{s.t.}\\ \mathbf{A}\delta =0,\quad \|\delta\|_{1}=1\\ \delta_{k}x_{k}^{*}>0 \quad \text{for } k\in \mathbf{I}''\\ \delta_{k}x_{k}^{*}\le 0 \quad \text{for } k\in \Delta''\setminus \mathbf{I}''\end{array}

(10)

where $\mathbf{F}''$ denotes the set of all subsets of $\Delta''$.

Suppose one of the sign patterns of $\mathbf{x}^{*}$ on the unknown support $\Delta$ can be recovered by the modified-CS, i.e., $\mathbf{x}^{1}=\mathbf{x}^{*}$.

Then, for all $\mathbf{I}''\in \mathbf{F}''$, there exists $(\mathbf{I}=\mathbf{I}''\cup \{i_{r}\})\in \mathbf{F}$ such that, whenever the following optimization problem (11) is solvable, its optimal value is greater than zero

\begin{array}{l}\min \sum_{k\in \mathbf{T}^{c}\setminus \mathbf{I}}|\delta_{k}| - \sum_{k\in \mathbf{I}}|\delta_{k}|,\quad \mathrm{s.t.}\\ \mathbf{A}\delta =0,\quad \|\delta\|_{1}=1\\ \delta_{k}x_{k}^{*}>0 \quad \text{for } k\in \mathbf{I}''\\ \delta_{i_{r}}x_{i_{r}}^{*}>0\\ \delta_{k}x_{k}^{*}\le 0 \quad \text{for } k\in \Delta''\setminus \mathbf{I}''\end{array}

(11)

Hence,

\begin{array}{l}\min \left(\sum_{k\in \mathbf{T}^{c}\setminus \mathbf{I}}|\delta_{k}| - \sum_{k\in \mathbf{I}}|\delta_{k}|\right)\\ =\min \left(\sum_{k\in \mathbf{T}^{c}\setminus \mathbf{I}''}|\delta_{k}| - |\delta_{i_{r}}| - \sum_{k\in \mathbf{I}''}|\delta_{k}| - |\delta_{i_{r}}|\right)>0\\ \Rightarrow \min \left(\sum_{k\in \mathbf{T}^{c}\setminus \mathbf{I}''}|\delta_{k}| - \sum_{k\in \mathbf{I}''}|\delta_{k}|\right)>0\end{array}

Meanwhile, there exists $(\mathbf{I}=\mathbf{I}'')\in \mathbf{F}$ such that, whenever the following optimization problem (12) is solvable, its optimal value is greater than zero

\begin{array}{l}\min \sum_{k\in \mathbf{T}^{c}\setminus \mathbf{I}}|\delta_{k}| - \sum_{k\in \mathbf{I}}|\delta_{k}|,\quad \mathrm{s.t.}\\ \mathbf{A}\delta =0,\quad \|\delta\|_{1}=1\\ \delta_{k}x_{k}^{*}>0 \quad \text{for } k\in \mathbf{I}''\\ \delta_{i_{r}}x_{i_{r}}^{*}\le 0\\ \delta_{k}x_{k}^{*}\le 0 \quad \text{for } k\in \Delta''\setminus \mathbf{I}''\end{array}

(12)

Hence,

\begin{array}{l}\min \left(\sum_{k\in \mathbf{T}^{c}\setminus \mathbf{I}}|\delta_{k}| - \sum_{k\in \mathbf{I}}|\delta_{k}|\right)>0\\ \Rightarrow \min \left(\sum_{k\in \mathbf{T}^{c}\setminus \mathbf{I}''}|\delta_{k}| - \sum_{k\in \mathbf{I}''}|\delta_{k}|\right)>0\end{array}

Since $\mathbf{x}^{*}$ has the same sign pattern as $\mathbf{x}''$ on the unknown support $\Delta''$, the union of the feasible regions of optimization problems (11) and (12) is the feasible region of optimization problem (10). Moreover, for all $\mathbf{I}''\in \mathbf{F}''$, when the optimization problem (10) is solvable, we have

\min \left(\sum_{k\in \mathbf{T}^{c}\setminus \mathbf{I}''}|\delta_{k}| - \sum_{k\in \mathbf{I}''}|\delta_{k}|\right)>0.

It follows from Theorem 1 that $\mathbf{x}^{3}=\mathbf{x}''$. In the second case, the second inequality is proved.

Combining Cases I and II, the second inequality is proved. Theorem 2 is proved.

### Remarks 1

The first inequality of Theorem 2 quantitatively describes the tendency of the **RP** with respect to the number of errors in the known support **T**: the more errors the known support **T** contains, the lower the **RP** of the modified-CS. The second inequality of Theorem 2 indicates that the sparser the vector $\mathbf{x}^{*}$ is, the higher the **RP** of the modified-CS.

Further, given any vector $\mathbf{x}^{\ddagger}$ whose support is $\mathbf{N}\cup \{i_{e}\}$, consider recovering $\mathbf{x}^{\ddagger}$ from $\mathbf{y}^{\ddagger}=\mathbf{A}\mathbf{x}^{\ddagger}$ by applying the modified-CS with the known support $\mathbf{T}\cup \{i_{e}\}$; we denote the **RP** as ${\mathbf{P}}_{\mathbf{T}\cup \{i_{e}\}}(\mathbf{x}^{(1)}=\mathbf{x}^{\ddagger};\ \mathbf{A},\ \mathbf{N}\cup \{i_{e}\})$. From the second inequality of Theorem 2, one can establish the following corollary to Theorem 2.

### Corollary 1

The following equality holds.

{\mathbf{P}}_{\mathbf{T}\cup \{i_{e}\}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\ \mathbf{A},\ \mathbf{N})={\mathbf{P}}_{\mathbf{T}\cup \{i_{e}\}}(\mathbf{x}^{(1)}=\mathbf{x}^{\ddagger};\ \mathbf{A},\ \mathbf{N}\cup \{i_{e}\}).

### Remarks 2

Corollary 1 reveals that adding an error to the known support **T** affects the **RP** exactly as much as decreasing the sparsity of the vector $\mathbf{x}^{*}$ in position $i_{e}$ while adding the position $i_{e}$ to the set **T**. Because *prior* knowledge of the support increases the **RP** of the modified-CS, the effect of adding an error $i_{e}$ to the known support **T** is smaller than that of decreasing the sparsity of the vector $\mathbf{x}^{*}$ in position $i_{e}$.

It can be deduced from the above discussion that errors in the known support **T** reduce the **RP** of the modified-CS. However, two questions remain. First, for a given number of samples, how many errors in the set **T** can the modified-CS tolerate when recovering a sparse vector with *ℓ* nonzero entries? Second, within this tolerable range of errors, can the modified-CS guarantee recoverability? Hereinafter, we consider these problems and reach the following results.

### Theorem 3

Given any vector $\mathbf{x}^{*}$ whose support is $\mathbf{N}=\mathbf{T}\cup \Delta\setminus \Delta_{e}$, consider recovering it from $\mathbf{y}=\mathbf{A}\mathbf{x}^{*}$, where **A** is an *m*×*n* matrix, by applying the modified-CS.

**(1)** If $|\mathbf{T}\cup \Delta|>m$, i.e., $|\Delta_{e}|>m-|\mathbf{N}|$,

\begin{array}{l}{\mathbf{P}}_{\mathbf{T}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\ \mathbf{A},\ \mathbf{N})={\mathbf{P}}_{\mathbf{T}\cup \{i\}}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\ \mathbf{A},\ \mathbf{N})=\cdots ={\mathbf{P}}_{\mathbf{T}\cup \Delta}(\mathbf{x}^{(1)}=\mathbf{x}^{*};\ \mathbf{A},\ \mathbf{N})=0,\quad \text{where } i\in \Delta\end{array}

**(2)** If $|\mathbf{T}\cup \Delta|\le m$, i.e., $|\Delta_{e}|\le m-|\mathbf{N}|$,

{\mathbf{P}}_{\mathbf{T}\cup \mathit{\Delta}}({\mathbf{x}}^{\left(1\right)}={\mathbf{x}}^{\ast};\phantom{\rule{2.77695pt}{0ex}}\mathbf{A},\phantom{\rule{2.77695pt}{0ex}}\mathbf{N})=1

### Proof

According to the rank theorem in matrix theory,

\mathrm{dim}[\mathrm{Null}(\mathbf{A})]=n-\mathrm{rank}(\mathbf{A}),

(13)

where Null(·), dim(·) and rank(·) represent the null space of a matrix, the dimension of a space, and the rank of a matrix, respectively.

Suppose rank(**A**)=*m*, i.e., the matrix **A** has full row rank. It is well known in the CS field that the measurement matrix **A** is typically a Gaussian random matrix, which has full row rank with probability one.
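The rank identity (13) and the full-row-rank claim are easy to check numerically. The following sketch (a minimal illustration assuming `numpy` and `scipy`; the dimensions are arbitrary) draws a Gaussian matrix and verifies that $\mathrm{dim}[\mathrm{Null}(\mathbf{A})]=n-\mathrm{rank}(\mathbf{A})=n-m$:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
m, n = 6, 10
A = rng.standard_normal((m, n))      # Gaussian measurement matrix

rank = np.linalg.matrix_rank(A)      # equals m with probability one
N_basis = null_space(A)              # orthonormal basis of Null(A)
null_dim = N_basis.shape[1]          # equals n - rank(A), per (13)
```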

From Equation (13),

\exists \mathit{\delta}\in \mathrm{N}\mathrm{ull}\left(\mathbf{A}\right),

we may select arbitrary ($n-m$) entries of $\delta$ as independent variables, and the other entries of $\delta$ can then be expressed in terms of these ($n-m$) variables. Hence, if $|\Delta_{e}|>m-|\mathbf{N}|$, the number of zero entries of the sub-vector ${\mathbf{x}}_{\mathbf{T}^{c}}^{*}$ equals

|[1,2,\dots ,n]\setminus (\mathbf{N}\cup \Delta_{e})| = n-|\mathbf{N}|-|\Delta_{e}| < n-m

(14)

Denote by $\Omega$ the index set of the zero entries of ${\mathbf{x}}_{\mathbf{T}^{c}}^{*}$. Then, it follows from (14) that

\exists {\mathit{\delta}}^{1}\in \mathrm{N}\mathrm{ull}\left(\mathbf{A}\right),

(15)

we have

{\mathit{\delta}}_{\mathit{\Omega}}^{1}=0

and $\|\delta^{1}\|_{1}=1$.

Suppose $\mathbf{x}^{(1)}=\mathbf{x}^{*}$. It follows from Theorem 1 that, ∀**I**∈**F**, the optimal value of the objective function of optimization problem (3) is greater than zero, provided that this optimization problem is solvable. Denote the objective function of (3) as

\mathbf{f}\left(\mathit{\delta}\right)=\sum _{k\in {\mathbf{T}}^{c}\setminus \mathbf{I}}\left|{\delta}_{k}\right|-\sum _{k\in \mathbf{I}}\left|{\delta}_{k}\right|

(16)

From the definition of $\Omega$, the set $\mathbf{T}^{c}\setminus \mathbf{I}=\Omega\cup (\Delta\setminus \mathbf{I})$. Therefore, (16) is equivalent to

\sum _{k\in \mathit{\Omega}}\left|{\delta}_{k}\right|+\sum _{k\in \mathit{\Delta}\setminus \mathbf{I}}\left|{\delta}_{k}\right|-\sum _{k\in \mathbf{I}}\left|{\delta}_{k}\right|

(17)

For $\delta^{1}$, there exist $\mathbf{I}^{1}$, $\mathbf{S}^{1}$ and $\mathbf{Z}^{1}$ such that

\begin{array}{l}\mathbf{I}^{1}=\{k \mid \delta_{k}^{1}x_{k}^{*}>0,\ k\in \Delta\}\\ \mathbf{S}^{1}=\{k \mid \delta_{k}^{1}x_{k}^{*}<0,\ k\in \Delta\}\\ \mathbf{Z}^{1}=\{k \mid \delta_{k}^{1}x_{k}^{*}=0,\ k\in \Delta\}\end{array}

(18)

Combining (17) and (18), it follows from Theorem 1 that, for ^{δ 1}, we have

\begin{array}{ll}\mathbf{f}(\delta^{1})&=\sum_{k\in \Omega}|\delta_{k}^{1}| + \sum_{k\in \Delta\setminus \mathbf{I}^{1}}|\delta_{k}^{1}| - \sum_{k\in \mathbf{I}^{1}}|\delta_{k}^{1}|\\ &=\sum_{k\in \mathbf{S}^{1}}|\delta_{k}^{1}| - \sum_{k\in \mathbf{I}^{1}}|\delta_{k}^{1}|>0\end{array}

(19)

Meanwhile, if $\delta^{1}\in \mathrm{Null}(\mathbf{A})$, then

-{\mathit{\delta}}^{1}\in \mathrm{N}\mathrm{ull}\left(\mathbf{A}\right)

(20)

Let $\delta^{2}=-\delta^{1}$ and denote

\begin{array}{l}\mathbf{I}^{2}=\{k \mid \delta_{k}^{2}x_{k}^{*}>0,\ k\in \Delta\}\\ \mathbf{S}^{2}=\{k \mid \delta_{k}^{2}x_{k}^{*}<0,\ k\in \Delta\}\\ \mathbf{Z}^{2}=\{k \mid \delta_{k}^{2}x_{k}^{*}=0,\ k\in \Delta\}\end{array}

(21)

Obviously, $\mathbf{I}^{2}=\mathbf{S}^{1}$, $\mathbf{S}^{2}=\mathbf{I}^{1}$ and $\mathbf{Z}^{2}=\mathbf{Z}^{1}$.

For $\delta^{2}$, we have

\begin{array}{ll}\mathbf{f}(\delta^{2})&=\sum_{k\in \Omega}|\delta_{k}^{2}| + \sum_{k\in \Delta\setminus \mathbf{I}^{2}}|\delta_{k}^{2}| - \sum_{k\in \mathbf{I}^{2}}|\delta_{k}^{2}|\\ &=\sum_{k\in \mathbf{S}^{2}}|\delta_{k}^{1}| - \sum_{k\in \mathbf{I}^{2}}|\delta_{k}^{1}| = \sum_{k\in \mathbf{I}^{1}}|\delta_{k}^{1}| - \sum_{k\in \mathbf{S}^{1}}|\delta_{k}^{1}|\end{array}

(22)

From (19), it can be deduced that $\mathbf{f}(\delta^{2})<0$.

Hence, there exist $\delta^{2}\in \mathrm{Null}(\mathbf{A})$ and $\mathbf{I}^{2}\in \mathbf{F}$ such that the optimal value of the objective function of optimization problem (3) is less than zero. According to Theorem 1, the assumption $\mathbf{x}^{(1)}=\mathbf{x}^{*}$ does not hold, i.e.,

{\mathbf{P}}_{\mathbf{T}}({\mathbf{x}}^{\left(1\right)}={\mathbf{x}}^{\ast};\phantom{\rule{2.77695pt}{0ex}}\mathbf{A},\phantom{\rule{2.77695pt}{0ex}}\mathbf{N})=0.

It is easy to verify that the above discussion holds for the known supports $\mathbf{T}\cup \{i\},\dots ,\mathbf{T}\cup \Delta$, where $i\in \Delta$. Therefore, result **(1)** is proved.

From the definition of **N**, if the known support $\mathbf{T}''=\mathbf{T}\cup \Delta$, it is obvious that the optimal value of the objective function of problem (2) is zero. Hence, the solution of the modified-CS satisfies the constraint ${\mathbf{A}}_{\mathbf{T}''}{\mathbf{x}}_{\mathbf{T}''}=\mathbf{y}$. If $|\mathbf{T}\cup \Delta|\le m$, then according to linear algebra, the solution satisfying ${\mathbf{A}}_{\mathbf{T}''}{\mathbf{x}}_{\mathbf{T}''}=\mathbf{y}$ is the unique solution $\mathbf{x}^{*}$. Result **(2)** is proved. Theorem 3 is proved.
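Result **(2)** can be illustrated directly: when the known support is $\mathbf{T}''=\mathbf{T}\cup \Delta$ with $|\mathbf{T}''|\le m$, the constraint ${\mathbf{A}}_{\mathbf{T}''}{\mathbf{x}}_{\mathbf{T}''}=\mathbf{y}$ is a consistent system whose coefficient matrix has full column rank with probability one, so its solution is unique. A minimal sketch (assuming `numpy`; the dimensions and indices are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 10
A = rng.standard_normal((m, n))

support = [0, 3, 5]                  # T'' = T ∪ Δ, with |T''| <= m
x_true = np.zeros(n)
x_true[support] = [2.0, -1.0, 0.5]
y = A @ x_true

# Restrict the system to the columns indexed by T'' and solve A_{T''} x_{T''} = y
A_sub = A[:, support]                # m x |T''|, full column rank w.p. one
x_sub, *_ = np.linalg.lstsq(A_sub, y, rcond=None)

x_hat = np.zeros(n)
x_hat[support] = x_sub               # the unique solution equals x_true
```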

### Remarks 3

Theorem 3 provides a bound on the number of errors in the known support **T** that relates to the number of samples *m* and the sparsity *ℓ* of the original vector. This bound mirrors the fault-tolerance capability of the modified-CS: to recover sparse vectors with *ℓ* nonzero entries, if the number of errors in the set **T** exceeds *m*−*ℓ*, the modified-CS cannot recover any sparse vector, regardless of how many positions of the support are included in the set **T**. Conversely, within this bound, as more *prior* knowledge of the support is added to the set **T**, the **RP** steadily improves. Furthermore, provided sufficient *prior* knowledge of the support, the recoverability of the modified-CS can be guaranteed.