 Research
 Open Access
Computable performance guarantees for compressed sensing matrices
 Myung Cho^{1}Email author,
 Kumar Vijay Mishra^{1} and
 Weiyu Xu^{1}
https://doi.org/10.1186/s13634-018-0535-y
© The Author(s) 2018
 Received: 23 December 2016
 Accepted: 7 February 2018
 Published: 27 February 2018
Abstract
The null space condition for ℓ_{1} minimization in compressed sensing is a necessary and sufficient condition on the sensing matrices under which a sparse signal can be uniquely recovered from the observation data via ℓ_{1} minimization. However, verifying the null space condition is known to be computationally challenging. Most existing methods provide only upper and lower bounds on the proportion parameter that characterizes the null space condition. In this paper, we propose new polynomial-time algorithms to establish upper bounds on the proportion parameter. We leverage these techniques to find upper bounds and further develop a new procedure, the tree search algorithm, that is able to verify the null space condition precisely and quickly. Numerical experiments show that the execution speed and accuracy of the results obtained from our methods far exceed those of the previous methods, which rely on linear programming (LP) relaxation and semidefinite programming (SDP).
Keywords
 Compressed sensing
 Null space condition
 ℓ _{1} minimization
 Performance guarantee
 Sensing matrix
1 Introduction
Compressed sensing is an efficient signal processing technique to recover a sparse signal from fewer samples than required by the Nyquist-Shannon theorem, reducing the time and energy spent in the sampling operation. These advantages make compressed sensing attractive in various signal processing areas [1].
It has been shown that the optimal solution of ℓ_{0} minimization can be obtained by solving ℓ_{1} minimization under certain conditions (e.g., the restricted isometry property, or RIP) [2–6]. For random sensing matrices, these conditions hold with high probability. We note that RIP is a sufficient condition for sparse recovery [7].
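As context for the condition discussed next, one standard formulation of the proportion parameter in the NSC literature (following, e.g., the LP-relaxation framework of [8]; the paper's own display equation may use slightly different notation) is:

```latex
\alpha_{k} \;=\; \max_{\substack{K \subseteq \{1,\dots,n\} \\ |K| \le k}}
\;\; \max_{\substack{z \in \mathbb{R}^{n}:\; Az = 0 \\ \|z\|_{1} = 1}}
\;\|z_{K}\|_{1}
```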
where K is a subset of {1,2,…,n} with cardinality at most k. The matrix A satisfies the null space condition (NSC) for a positive integer k if and only if \(\alpha _{k}<\frac {1}{2}\). Equivalently, the NSC can be verified by computing or estimating α_{ k }. The parameter α_{ k } also plays an important role in the recovery of an approximately sparse signal x via ℓ_{1} minimization, where a smaller α_{ k } implies more robustness [8–10].
We are interested in computing α_{ k } and, especially, finding the maximum k for which \(\alpha _{k} < \frac {1}{2}\). However, computing α_{ k } to verify the NSC is extremely expensive and was reported in [7] to be NP-hard. Due to the challenges in computing α_{ k }, verifying the NSC explicitly for deterministic sensing matrices remains a relatively unexamined research area. In [3, 8, 11, 12], convex relaxations were used to establish upper or lower bounds of α_{ k } (or other parameters related to α_{ k }) instead of computing the exact α_{ k }. While [3, 11] proposed semidefinite programming-based methods, [8, 12] suggested linear programming relaxations to obtain the upper and lower bounds of α_{ k }. For both methods, computable performance guarantees on sparse signal recovery were reported via bounding α_{ k }. However, these bounds of α_{ k } could only verify the NSC with \(k=O(\!\sqrt {n})\), even though, theoretically, k can grow linearly with n.
Our work drastically departs from these prior methods [3, 8, 11, 12] that provide only upper and lower bounds. In our solution, we propose the pick-l-element algorithms (1≤l<k), which compute upper bounds of α_{ k } in polynomial time. Subsequently, we leverage these algorithms to develop the tree search algorithm (TSA), a new method to compute the exact α_{ k } while significantly reducing the computational complexity of an exhaustive search. This algorithm offers a way to control a smooth tradeoff between complexity and accuracy of the computations. In the conference precursor to this paper, we introduced the sandwiching algorithm (SWA) [13], which employs a branch-and-bound method. Although SWA can also be used to calculate the exact α_{ k }, it has the disadvantage of greater memory usage than TSA. On the other hand, TSA provides memory and performance benefits for high-dimensional matrices (e.g., up to size ∼ 6000×6000).
It is noteworthy that our methods are different from RIP or the neighborly polytope framework for analyzing the sparse recovery capability of random sensing matrices. For example, prior works such as [6, 22] employ the neighborly polytope to predict theoretical lower bounds on the recoverable sparsity k for a randomly chosen Gaussian matrix. However, our methods do not resort to a probabilistic analysis and are applicable to any given deterministic sensing matrix. Also, our algorithms provide better bounds than existing methods [3, 8, 11, 12] for a wide range of matrix sizes.
1.1 Main contributions
 (i)
Faster algorithms for high dimensions. We designed the pick-l algorithm (and its optimized version), where l is a chosen integer, to provide upper bounds on α_{ k }. We show that as l increases, the optimized pick-l algorithm provides a tighter upper bound on α_{ k }. Numerical experiments show that, even with l=2 or 3, the pick-l algorithm already provides a better bound on α_{ k } than the previous algorithms based on LP [8] and SDP [3]. For large sensing matrices, the pick-1-element algorithm can be significantly faster than the LP and SDP methods.
 (ii)
Novel formulations using branch-and-bound. Based on the pick-l algorithm, we propose a branch-and-bound tree search approach to compute tighter bounds or even the exact value of α_{ k }. To the best of our knowledge, this tree search algorithm is the first branch-and-bound algorithm to verify the NSC for ℓ_{1} minimization. This branch-and-bound approach depends heavily on the pick-l algorithm developed in this paper. For example, the LP [8] and SDP [3] methods cannot be directly adapted to provide an efficient branch-and-bound approach because they lack subset-specific upper bounds on α_{ k }. In numerical experiments, we demonstrated that the tree search algorithm reduced the execution time for precisely calculating α_{ k } by around 40–8000 times compared to the exhaustive search method.
 (iii)
Simultaneous upper and lower bounds. The branch-and-bound tree search algorithm simultaneously maintains upper and lower bounds of α_{ k } during its runtime. This approach has two benefits. Firstly, if one is interested merely in certifying the NSC for a positive k rather than obtaining the exact α_{ k }, one can terminate the TSA early to shorten the running time: as soon as the global upper (lower) bound drops below (exceeds) 1/2, one can conclude that the NSC for the positive k is satisfied (not satisfied). Secondly, consider the case when TSA is terminated early due to, say, constraints on running time. The process still yields meaningful bounds on α_{ k } via the continuously maintained upper and lower bounds.
 (iv)
New results on recoverable sparsity. For a certain l<k, we can compute α_{ l } or its upper bound by using the branch-and-bound tree search algorithm (for example, based on the pick-1-element algorithm). We introduce a novel result (Lemma 3), which can use α_{ l } to lower bound the recoverable sparsity k. This approach of lower bounding the recoverable sparsity k is useful when l is too large to perform the pick-l algorithm directly (which requires \(\binom {n}{l}\) enumerations).
1.2 Notations and preliminaries
We denote the sets of real numbers and positive integers as \(\mathbb {R}\) and \(\mathbb {Z}^{+}\), respectively. We reserve uppercase letters K and L for index sets and lowercase letters \(k, l \in \mathbb {Z}^{+}\) for their respective cardinalities. We also use |·| to denote the cardinality of a set. We assume k>l≥1 throughout the paper. For vectors or scalars, we use lowercase letters, e.g., x,k,l. For a vector \(x \in \mathbb {R}^{n}\), we use x_{ i } for its ith element. If we use an index set as a subscript of a vector, it represents the partial vector over the index set. For example, when \(x \in \mathbb {R}^{n}\) and K={1,2}, x_{ K } represents [x_{1},x_{2}]^{ T }. We reserve the uppercase letter A for a sensing matrix of dimension m×n. Since the number of columns of a sensing matrix A is n, the full index set we consider is {1,2,…,n}. In addition, we represent the \(\binom {n}{l}\) subsets of size l as L_{ i }, \(i=1,\ldots,\binom {n}{l}\), where L_{ i }⊂{1,2,…,n} and |L_{ i }|=l. We use the superscript ∗ to represent an optimal solution of an optimization problem. For instance, z^{∗} and K^{∗} are the optimal solutions of (5). Since we need to represent an optimal solution for each index set L_{ i }, we use the superscript i∗ to represent an optimal solution for an index set L_{ i }, e.g., z^{i∗}. The maximum value of k such that \(\alpha _{k} < \frac {1}{2}\) (and \(\alpha _{k+1} \geq \frac {1}{2}\)) is called the maximum recoverable sparsity and denoted by k_{ max }.
2 Pick-l-element algorithm
Consider a sensing matrix with n columns. Then, there are \(\binom {n}{k}\) subsets K, each of cardinality k. When n and k are large, an exhaustive search over these subsets to compute α_{ k } is extremely expensive. For example, when n=100 and k=10, it takes a search over 1.7310×10^{13} subsets to compute α_{ k }, a combinatorial task that is beyond the technological reach of common desktop computers. Our goal is to devise algorithms that can rapidly yield an exact value of α_{ k }. As an initial step, we develop a method to compute an upper bound of α_{ k } in polynomial time, called the pick-l-element algorithm (or simply, the pick-l algorithm), where l is a chosen integer such that 1≤l<k.
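The enumeration gap can be checked directly with the subset counts from the text (a small illustration using Python's standard library; the figures n=100, k=10, l=2 are only examples):

```python
from math import comb

# Exhaustive search over all size-k supports versus the pick-l
# enumeration over size-l subsets (l < k).
n, k, l = 100, 10, 2

exhaustive = comb(n, k)  # subsets an exhaustive search must visit
pick_l = comb(n, l)      # subsets the pick-l algorithm visits

print(exhaustive)  # 17310309456440, i.e., about 1.7310e+13
print(pick_l)      # 4950
```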
Since (7) maximizes a convex function for a given subset L, we cast (7) as 2^{ l } linear programming problems by considering all possible sign patterns of the elements of z_{ L } (e.g., if l=2 and L={1,2}, then ∥z_{ L }∥_{1}=|z_{1}|+|z_{2}| can correspond to 2^{ l }=4 possibilities: z_{1}+z_{2}, z_{1}−z_{2}, −z_{1}+z_{2}, and −z_{1}−z_{2}). α_{l,L} is equal to the maximum among the 2^{ l } objective values.
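The identity behind this decomposition is that the maximum of the 2^{l} linear objectives equals the ℓ_{1} norm of z_{L}. A minimal Python illustration of the identity itself (not of the paper's LP solver; the function name is ours):

```python
from itertools import product

def abs_sum_via_sign_patterns(z_L):
    """Return the max over all sign vectors s in {+1,-1}^l of s . z_L.

    This equals ||z_L||_1, which is why the convex maximization in (7)
    can be cast as 2**l linear programs, one per sign pattern.
    """
    l = len(z_L)
    return max(
        sum(s_i * z_i for s_i, z_i in zip(s, z_L))
        for s in product([1, -1], repeat=l)
    )

# l = 2, z_L = (3, -2): the four objectives are 1, 5, -5, -1;
# the maximum is 5 = |3| + |-2|.
print(abs_sum_via_sign_patterns([3, -2]))  # 5
```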
The pick-l algorithm uses the α_{l,L}’s obtained from different index sets to compute an upper bound of α_{ k }. Algorithm 1 shows the steps of the pick-l algorithm in detail. The following lemmata show that the pick-l algorithm provides an upper bound of α_{ k }. Firstly, we provide Lemma 1 to derive the upper bound of the proportion parameter for a fixed index set K; then, we show that the pick-l algorithm yields an upper bound of α_{ k } in Lemma 2.
Lemma 1
Proof
The inequality is from the optimal value of (6) for each index set L_{ i }. □
Lemma 2
Proof
The first inequality is from Lemma 1, and the last inequality is from the assumption that \(\alpha _{l,L_{i}}\)’s are sorted in descending order. □
Here, we note that \(\frac {1}{\binom {k-1}{l-1}} \times \binom {k}{l} = \frac {k}{l}\). Therefore, for the optimal value, the \(\binom {k}{l}\) largest \(\alpha _{l,L_{i}}\)’s are chosen, each with the coefficient \(\frac {1}{\binom {k-1}{l-1}}\).
The upshot of the pick-l algorithm is that we reduce the number of enumerations from \(\binom {n}{k}\) to \(\binom {n}{l}\). For example, when n=300, k=20, and l=2, the number of operations is reduced by around 10^{26} times. Moreover, as n increases, the reduction rate increases. With the reduced enumerations, we can still obtain nontrivial upper bounds of α_{ k } through the pick-l-element algorithm. We will present the performance of the pick-l algorithm in Section 5, showing that it provides better upper bounds than the previous research [3, 8] even when l=2. Furthermore, thanks to the pick-l algorithm, we can design a new algorithm based on a branch-and-bound search to calculate α_{ k } by using the upper bounds of α_{ k } obtained from the pick-l algorithm. It is noteworthy that the cheap upper bound introduced in Lemma 1 provides upper bounds on α_{k,K} for specific subsets K, which enables our branch-and-bound method to calculate α_{ k } or more precise bounds on α_{ k }. In contrast, the LP relaxation method [8] and the SDP method [3] do not provide upper bounds on α_{k,K} for specific subsets K, which prevents their use in the branch-and-bound method.
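The aggregation step of Lemma 2 is simple to state in code: sort the α_{l,L_i}'s, keep the \(\binom{k}{l}\) largest, and scale each by \(1/\binom{k-1}{l-1}\). A hedged sketch (the function name and the toy input values are ours, not the paper's):

```python
from math import comb

def pick_l_upper_bound(alpha_l_values, k, l):
    """Upper bound on alpha_k from the alpha_{l,L_i} values (Lemma 2).

    Takes the comb(k, l) largest values, each with coefficient
    1 / comb(k-1, l-1); note comb(k, l) / comb(k-1, l-1) == k / l.
    """
    top = sorted(alpha_l_values, reverse=True)[:comb(k, l)]
    return sum(top) / comb(k - 1, l - 1)

# Toy values: k = 3, l = 2 keeps the top comb(3,2) = 3 values
# and divides by comb(2,1) = 2, giving (0.30+0.20+0.20)/2 = 0.35.
print(pick_l_upper_bound([0.30, 0.20, 0.20, 0.10], k=3, l=2))
```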
Since we are also interested in k_{max}, we introduce the following Lemma 3 to bound the maximum recoverable sparsity k_{max}.
Lemma 3
where ⌈·⌉ is the ceiling function.
Proof
Note that there are \(\binom {k}{l}\) terms in the summation. From (13), if \(\alpha _{l} \cdot \frac {k}{l} < \frac {1}{2}\), then \(\alpha _{k} < \frac {1}{2}\). In other words, if \(k < l \cdot \frac {1/2}{\alpha _{l}}\), then \(\alpha _{k} < \frac {1}{2}\). Since k is a positive integer, when \(k = \big \lceil { l \cdot \frac {1/2}{\alpha _{l}}} \big \rceil - 1\), \(\alpha _{k} < \frac {1}{2}\). Therefore, the maximum recoverable sparsity k_{max} is larger than or equal to \(\big \lceil {l \cdot \frac {1/2}{\alpha _{l}}} \big \rceil - 1\). □
It is noteworthy that in ([8] Section 4.2.B), the authors introduced a lower bound on k based on α_{1}, i.e., k(α_{1}). In Lemma 3, we provide a more general result. Furthermore, in Lemma 3, instead of using α_{ l }, we can use an upper bound of α_{ l } to obtain a lower bound on the recoverable sparsity k; namely, \(k(UB(\alpha _{l})) = \left \lceil { l \cdot \frac {1/2}{UB(\alpha _{l})}} \right \rceil - 1 \leq k_{max}\), where UB(α_{ l }) represents an upper bound of α_{ l }. Since the proof follows the same lines as the proof of Lemma 3, we omit it.
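Lemma 3's bound is a one-line computation. A minimal sketch (the function name is ours; it accepts either α_{l} or an upper bound UB(α_{l})):

```python
from math import ceil

def recoverable_sparsity_lower_bound(alpha_l, l):
    """Lemma 3: any k with alpha_l * k / l < 1/2 satisfies alpha_k < 1/2,
    so k_max >= ceil(l * (1/2) / alpha_l) - 1.

    An upper bound UB(alpha_l) may be passed in place of alpha_l,
    which only makes the returned lower bound more conservative.
    """
    return ceil(l * 0.5 / alpha_l) - 1

# With l = 1 and alpha_1 = 0.2, every k < 2.5 is certified, so k_max >= 2.
print(recoverable_sparsity_lower_bound(0.2, l=1))  # 2
```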
Finally, we introduce the following proposition to compare our algorithm to the LP method [8] theoretically.
Proposition 1
For readability, we place the proof of Proposition 1 in Appendix A.
The LP method can provide tighter upper bounds on α_{ k } than the pick1element algorithm; however, this comes at a cost of solving a big optimization problem of design dimension mn. When m and n are large, the complexity of computing \(\alpha _{k}^{LP}\) can be prohibitive (please see Table 2).
3 Optimized pick-l algorithm
We can tighten the upper bound of α_{ k } in the pick-l algorithm by replacing the constant factor \(\frac {1}{\binom {k-1}{l-1}}\) in (9) with optimized coefficients at the cost of additional complexity; we call this the optimized pick-l algorithm. The optimized pick-l algorithm is mostly useful from a theoretical perspective. In practice, it gives improved but similar performance in calculating the upper bound of α_{ k } compared to the basic pick-l algorithm described in Section 2. As a theoretical merit of the optimized pick-l algorithm, we can show that as l increases, the upper bound of α_{ k } becomes smaller or stays the same.
In the following lemmata, we show that the optimized pick-l algorithm produces an upper bound of α_{ k } and that this bound is tighter than that of the basic pick-l algorithm introduced in (11). The last lemma establishes that as l increases, the upper bound of α_{ k } decreases or stays the same.
Lemma 4
The optimized pick-l algorithm provides an upper bound of α_{ k }.
Proof
Note that there are \(\binom {n-1}{l-1}\) sets L_{ i } that contain a given index B as a subset. Among these \(\binom {n-1}{l-1}\) γ_{ i }’s, only those whose corresponding L_{ i }’s are subsets of K^{∗} equal \(\frac {1}{\binom {k-1}{l-1}}\). Since each element in L_{ i } such that L_{ i }⊆K^{∗} appears \(\binom {k-1}{l-1}\) times in \(\left \{i:\; L_{i} \subseteq K^{*},\; 1\leq i \leq \binom {n}{l} \right \}\), the summation of the γ_{ i }’s whose corresponding L_{ i }’s are subsets of K^{∗} becomes \(\frac {1}{\binom {k-1}{l-1}}\times \binom {k-1}{l-1} = 1\), which satisfies (17). Essentially, the third constraint ensures that, for each index, the summation of the coefficients related to that index is limited to 1. In the same way, for 1<b<l, the chosen γ_{ i } is a feasible solution of (15). From this feasible solution, we have \(\frac {1}{\binom {k-1}{l-1}} \sum _{\{i:\; L_{i} \subseteq K^{*},\; |L_{i}|=l\}} \alpha _{l,L_{i}}\) for the optimal value, which is an upper bound of α_{ k } as shown in (13). □
Lemma 5
The optimized pick-l algorithm provides a tighter, or at least the same, upper bound of α_{ k } as the basic pick-l algorithm introduced in (11).
Proof
We show that the optimization problem (11) is a relaxation of (15). As in the proof of Lemma 4, for b=l, the third constraint of (15) represents (16), which is contained in the first constraint of (11). Since the third constraint of (15) also covers the other values of b such that 1≤b<l, (15) has more constraints than (11). Therefore, the optimized pick-l algorithm, i.e., (15), provides a tighter or at least the same upper bound as the basic pick-l algorithm. □
Lemma 6
The optimized pick-l algorithm provides a tighter or at least the same upper bound as the optimized pick-p algorithm when l>p.
Proof
where the second equality follows from the fact that for a fixed P_{ j }, there are \(\binom {n-p}{l-p}\) sets L_{ i } with P_{ j }⊂L_{ i }, \(i=1,\ldots,\binom {n}{l}\); and for a fixed B, there are \(\binom {n-b}{p-b}\) sets P_{ j } with B⊂P_{ j }, \(j=1,\ldots,\binom {n}{p}\), and \(\binom {n-b}{l-b}\) sets L_{ i } with B⊂L_{ i }, \(i=1,\ldots,\binom {n}{l}\). Since (19) is obtained from a relaxation of (18), the optimal value of (19) is larger than or equal to the optimal value of (18). (19) is precisely the optimized pick-p algorithm. Thus, when l>p, the optimized pick-l algorithm provides a tighter or at least the same upper bound as the optimized pick-p algorithm. □
By using a larger l in the pick-l algorithm, we can obtain a tighter upper bound of α_{ k }. However, for a given l, we need to enumerate \(\binom {n}{l}\) possibilities, which becomes infeasible when l is large. Moreover, when l<k, the pick-l algorithm only gives an upper bound of α_{ k } instead of its exact value. There is, however, a need to find tighter bounds on α_{ k }, or even the exact value of α_{ k }, when k is too large for the \(\binom {n}{k}\) enumerations of an exhaustive search [14–16]. To this end, we propose a new branch-and-bound tree search algorithm to find tighter bounds on α_{ k } than Lemma 2 provides, or even the exact α_{ k }. Our branch-and-bound tree search algorithm is enabled by the pick-l algorithms introduced in Sections 2 and 3.
4 Tree search algorithm
To find the index set K^{∗} that leads to the maximum α_{k,K} (among all possible index sets K), the tree search algorithm (TSA) performs a best-first branch-and-bound search [23] over a tree structure representing different subsets of {1,2,…,n}. In essence, for each subset J with cardinality no bigger than k, TSA calculates an upper bound of α_{k,K} that is valid for any set K (with cardinality k) such that J⊆K. If this upper bound is smaller than a lower bound of α_{ k }, TSA will not further explore any of J’s supersets, leading to reduced average-case computational complexity. For simplicity, we describe the TSA based on the pick-1-element algorithm, called the 1-Step TSA. However, we remark that the TSA can also be based on the pick-l-element (l≥2) algorithm, by calculating upper bounds of α_{k,K} from the results of the pick-l-element algorithm.
4.1 Tree structure

[R1] A parent node is a subset of each of its child node(s).

[R2] “Legitimate order”: Let P and C denote a parent node and one of its child nodes. Then, any index in P must be smaller than any index in C∖P.
4.2 Basic idea of a branch-and-bound approach for calculating α _{ k }
We use a branch-and-bound approach over the tree structure to calculate α_{ k }. This method maintains a lower bound on α_{ k } (how this lower bound is maintained is explained in Section 4.3). When the algorithm explores a tree node J, it calculates an upper bound B(J), which is no smaller than α_{k,K} for any child node K (with cardinality k) of node J. If B(J) is smaller than the lower bound on α_{ k }, the algorithm will not explore the child nodes of J.
The upper bound is given by (20): \(B(J) = \alpha _{j,J} + \sum _{i=1}^{t} \alpha _{1,\{i+max(J)\}}\), where j+t=k, max(J) represents the largest index in J, and α_{1,{1}}≥α_{1,{2}}≥…≥α_{1,{n}}. We obtain this descending order by permuting the columns of the sensing matrix A in descending order of the α_{1,{i}}’s as the precomputation step of TSA. For example, in Fig. 1, for k=2, B({1})=α_{1,{1}}+α_{1,{2}}. To justify that B(J) is an upper bound of α_{k,K} for all nodes K such that J⊆K, we provide the following lemma.
Lemma 7
Given α_{1,{1}}≥α_{1,{2}}≥…≥α_{1,{n}}, \(B(J) = \alpha _{j,J} + \sum _{i=1}^{t} \alpha _{1,\{i+max(J)\}}\), where j+t=k, and max(J) represents the largest index in J, is an upper bound of α_{k,K} for all nodes K such that J⊆K.
Proof
4.3 Best-first tree search strategy
TSA adopts a best-first tree search strategy for the branch-and-bound approach. We first describe a basic version of the best-first tree search strategy and then introduce two enhancements to this strategy in the next subsection.
In its basic version, TSA starts with a tree having only the root node and sets the global lower bound of α_{ k } to 0. In each iteration, TSA selects a leaf tree node J with the largest B(J) and expands the tree by adding the child nodes of J to the tree. For each of these newly added child nodes, say Q, TSA then calculates the upper bound B(Q) in (20). Note that if a newly added child node Q has k elements, TSA will calculate α_{k,Q}, which is a lower bound on α_{ k }. For this k-element Q, if the newly calculated α_{k,Q} is bigger than the global lower bound of α_{ k }, TSA will set the global lower bound equal to α_{k,Q}. TSA terminates if a leaf tree node J has the largest B(J) among all the leaf nodes and B(J) is no bigger than the global lower bound on α_{ k }.
From standard theory of the branch-and-bound approach, this TSA outputs the exact α_{ k }. Also, in this process, the global lower bound keeps increasing until it equals an upper bound of α_{ k } (the largest B(J) among leaf nodes).
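To illustrate the control flow of this best-first strategy (not the paper's Algorithm 2), here is a self-contained branch-and-bound sketch. As a stand-in for the LP-based quantities, it uses a toy objective in which α_{j,J} is a sum of precomputed singleton scores sorted in descending order, so the Lemma 7-style bound B(J) adds the largest admissible remaining scores; all names are ours:

```python
import heapq

def tsa_max_k_subset(weights, k):
    """Best-first branch-and-bound over the TSA tree structure.

    Toy stand-in: the node value alpha(J) is the sum of weights over J
    (in the paper, alpha_{j,J} comes from the pick-l LPs). Assumes
    weights are sorted in descending order, mirroring the column
    permutation done in TSA's precomputation step.
    """
    n = len(weights)

    def bound(J):
        # B(J) = alpha(J) + sum of the t next singleton values with
        # indices beyond max(J) (cf. Lemma 7), where len(J) + t = k.
        j = len(J)
        start = (J[-1] + 1) if J else 0
        tail = weights[start:start + (k - j)]
        if len(tail) < k - j:            # J cannot be extended to size k
            return float("-inf")
        return sum(weights[i] for i in J) + sum(tail)

    heap = [(-bound(()), ())]            # max-heap via negated bounds
    lower = float("-inf")                # global lower bound on alpha_k
    best = None
    while heap:
        neg_b, J = heapq.heappop(heap)
        if best is not None and -neg_b <= lower:
            break                        # no remaining leaf can beat incumbent
        if len(J) == k:                  # k-element node: update lower bound
            value = sum(weights[i] for i in J)
            if value > lower:
                lower, best = value, J
            continue
        start = (J[-1] + 1) if J else 0  # legitimate order [R2]
        for nxt in range(start, n):
            child = J + (nxt,)
            b = bound(child)
            if b > lower:                # prune: B(child) <= lower bound
                heapq.heappush(heap, (-b, child))
    return best, lower
```

On `weights = [5, 4, 3, 2, 1]` with `k = 2`, the search expands only a handful of nodes before certifying the optimum `{0, 1}` with value 9, pruning the rest of the tree.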
4.4 Two enhancements
For the first enhancement, the upper bound (21) for a newly attached child node Q of a node J is \(B(Q) = \alpha _{j,J} + \alpha _{1,\{max(Q)\}} + \sum _{i=1}^{t} \alpha _{1,\{i+max(Q)\}}\), where j+t+1=k, max(Q) represents the largest index in Q, and α_{1,{1}}≥α_{1,{2}}≥…≥α_{1,{n}}. Thus, without calculating α_{j+1,Q} (which involves higher computational complexity), we can still have B(Q) as an upper bound of α_{k,K} for any child node K (with cardinality k) of the node Q.
Secondly, when TSA adds a new node Q as the child of node J in the tree structure (assuming α_{j,J} has already been calculated), TSA does not need to add all of J’s child nodes to the tree at the same time. Instead, TSA only adds the node J’s unattached child node Q with the largest B(Q) as defined in (21). Namely, the index in Q∖J is no bigger than the index in Q^{′}∖J, where Q^{′} is any unattached child of the node J. We note that B(Q) is an upper bound on B(Q^{′}) (according to (21)) for any other unattached child node Q^{′} of the node J. Thus, for any child node K (of cardinality k) of node J’s unattached child nodes, B(Q) is still an upper bound of α_{k,K}.
Algorithm 2 shows detailed steps of TSA, based on the pick1element algorithm (namely, l=1, 1Step TSA). In the description, we define “expanding the tree from a node J” as follows:

[R3] “Expanding the tree from a node J”: attaching a new node Q to the node J, where B(Q) is the largest value, as defined in (21), among all of node J’s unattached child nodes.
4.5 Advantage of the tree search algorithm
Due to the nature of the branch-and-bound approach, we can obtain a global upper bound and a global lower bound of α_{ k } while TSA runs. As the number of iterations increases, TSA obtains tighter and tighter upper bounds on α_{ k }, namely the largest B(·) among the leaf nodes. By using the global upper bound of α_{ k }, we can obtain a lower bound on the recoverable sparsity k via Lemma 3. Thus, even if the complexity of TSA is too high to finish in a timely manner, we can still obtain a lower bound on the recoverable sparsity k by terminating TSA early.
We note that the methods based on LP [8] and SDP [3] also provide upper bounds on α_{ k }. However, they are unable to determine upper bounds of α_{k,K} for a specific index set K. This prevents the use of the LP and SDP methods in our branch-and-bound approach for computing α_{ k }.
5 Numerical experiments
We conducted extensive simulations to compute α_{ k } and its upper/lower bounds using the pick-l algorithms and TSA. In this section, we refer to the pick-l algorithms introduced in Sections 2 and 3 simply as the (basic) pick-l and the optimized pick-l algorithms, respectively.
For the same matrices, we compared our methods with the LP relaxation approach [8] and the SDP method [3]. We assessed the computational complexity in terms of the execution time of the algorithms.^{1} In addition, we carried out numerical experiments to demonstrate the computational complexity of TSA empirically.
For the LP method [8] and the SDP method [3], we used the Matlab codes^{2} provided by the authors. Consistent with previous research, we used CVX [17], a package for specifying and solving convex programs, for the SDP method, and MOSEK [18], a commercial LP solver, for the LP method. In our own algorithms, we used MOSEK to solve (7). Also, to be consistent with previous research, matrices were generated from the Matlab code provided by the authors of [3] at http://www.di.ens.fr/~aspremon/NSPcode.html. For valid bounds, we rounded down lower bounds on α_{ k } and exact values of α_{ k }, and rounded up upper bounds on α_{ k }, to the nearest hundredth.
5.1 Performance comparison
Firstly, we considered Gaussian matrices and partial Fourier matrices sized from n=40 to n=6144. We chose n=40 so that our results can be compared with the simulation results in [3].
5.1.1 Lowdimensional sensing matrices
5.1.1.1 Sensing matrices with n=40
We considered sensing matrices of row dimension m=0.5n, 0.6n, 0.7n, 0.8n, where n=40. For every matrix size, we randomly generated 10 different realizations of Gaussian and partial Fourier matrices. In total, we used 80 different n=40 sensing matrices for the numerical experiments in Tables 7 and 8. We normalized all of the matrix columns so that they have unit ℓ_{2}-norm. The entries of the Gaussian matrices were i.i.d. standard Gaussian \(\mathcal {N}(0,1)\). The partial Fourier matrices had m rows randomly drawn from the full Fourier matrices. We compared our algorithms (pick-1-element, pick-2-element, pick-3-element, and TSA) to the LP and SDP methods. For readability, we place the numerical results for these small sensing matrices in Appendix B.
For each matrix size and type, we increased k from 1 to 5 in unit steps. Tables 7(a) and 8(a) show the median values of α_{ k }. (To be consistent with the previous research [3], in which the authors used the median value of α_{ k } to compare the SDP method with the LP method, we provide the median values obtained from 10 random realizations of the sensing matrix.) From the median value of α_{ k }, we obtained the recoverable sparsity k_{max} such that \(\alpha _{k_{\text {max}}}< 1/2\) and \(\alpha _{k_{\text {max}}+1} > 1/2\). In addition, we calculated the arithmetic mean of the k_{max}’s: we obtained each k_{max} from each random realization and computed the arithmetic mean of the ten k_{max}’s. Compared with the LP and SDP methods, we obtained a bigger or at least the same recoverable sparsity k_{max} by using pick-2, pick-3, and TSA. It is noteworthy that we obtained the exact α_{ k } for k=1,2,…,5 by using TSA, while the LP and SDP methods only provided the exact α_{ k } for k=1. We observed several cases where α_{ k }<1/2 but the upper bound of α_{ k } exceeds 1/2, e.g., α_{5} for 32×40 Gaussian matrices, α_{4} for 28×40 Gaussian matrices, α_{3} for 24×40 Gaussian matrices, α_{3} for 20×40 partial Fourier matrices, and α_{4} for 24×40 partial Fourier matrices. This can also be seen from the arithmetic mean of k_{max} in Tables 7(a) and 8(a).
To compare the computational complexity, we calculated the geometric mean of the algorithms’ execution times, to avoid bias in the average. Tables 7(b) and 8(b) list the average execution times. We also ran the exhaustive search method (ESM) to find α_{ k } and compared its execution time with that of TSA. In calculating α_{5}, on average, the 3-Step TSA reduced the computational time by around 86 times for 20×40 Gaussian matrices and by 94 times for 20×40 partial Fourier matrices, compared to ESM. For 32×40 Gaussian and partial Fourier matrices, the speedup of the best l-Step TSA (l=1,2,3) over ESM becomes around 1760 times and 182 times, respectively. We observed that when m/n=0.5 (e.g., 20×40 sensing matrices), the 3-Step TSA generally provides the fastest result for k=5. On the other hand, for m/n=0.8 (e.g., the 32×40 case), the 2-Step TSA is the quickest in finding an exact α_{ k } for k=5; however, for k>5, the fastest l-Step TSA cannot be determined from either experiments or theory.
5.1.1.2 Sensing matrices with n=256
5.1.1.3 Sensing matrices with n=512
Lower bound on k and execution time (Gaussian matrix with n=512)
Matrix A  Pick-1  Pick-2  LP^{a} 

(a) Lower bound on k  
102×512  2  3  2 
205×512  5  7  5 
307×512  10  17  10 
410×512  14  27  14 
(b) Execution time (Unit: second)  
102×512  53.7  2.96e4  50.8 
205×512  114.8  6.36e4  105.1 
307×512  309.7  1.19e5  333.0 
410×512  133.1  5.03e4  510.0 
5.1.2 Highdimensional sensing matrices
5.1.2.1 Sensing matrix with n≥1024
Lower bound on k and execution time (Gaussian matrix with n=1024)
Matrix A  Pick-1  k(UB(α_{2})^{ b })  k(α_{1})  LP^{a} 

(a) Lower bound on k  
102×1024  2  3  2  2 
205×1024  4  4  4  4 
307×1024  5  6  5  5 
410×1024  7  8  7  7 
512×1024  9  10  9  9 
614×1024  12  13  12  12 
717×1024  16  17  15  16 
819×1024  21  23  20  21 
922×1024  32  36  30  32 
(b) Execution time (Unit: second)  
102×1024  237  24 h  237  200 
205×1024  452  24 h  452  429 
307×1024  796  24 h  796  723 
410×1024  1207  24 h  1207  1073 
512×1024  1952  24 h  1952  1600 
614×1024  2150  24 h  2150  2217 
717×1024  1337  24 h  1337  2992 
819×1024  838  24 h  838  3904 
922×1024  386  24 h  386  4730 
Lower bound on k and execution time (Gaussian matrix)
Matrix A  Pick-1  k(α_{1})  LP^{a} 

(a) Lower bound on k  
512×2048  7  6  7 
2007×2048  102  90  102 
4014×4096  152  139  N/A^{b} 
1024×6144  8  8  8 
6021×6144  190  174  N/A 
6134×6144  558  406  N/A 
(b) Execution time (Unit: second)  
512×2048  7.51e3  7.51e3  6.63e3 
2007×2048  6.71e2  6.71e2  7.19e4 
4014×4096  9.12e3  9.12e3  15 days^{c} 
1024×6144  2.18e5  2.18e5  1.61e5 
6021×6144  3.89e4  3.89e4  65.5 days^{d} 
6134×6144  1.37e4  1.37e4  41.7 days^{e} 
For extremely large sensing matrices, e.g., 4014×4096 and 6021×6144, the LP and SDP methods cannot provide any lower bound on k within a reasonable computational time. However, our pick-l algorithm can still provide a lower bound on k efficiently. Table 3 shows the lower bound on k and the execution time for these large-dimensional matrices, where our verified recoverable sparsity k can be as large as 558 for a 6134×6144 sensing matrix. We obtained the estimated time for the LP method by running the Matlab code obtained from http://www2.isye.gatech.edu/~nemirovs/, which shows the percentage of the calculation completed on screen.
5.2 Comparison between the optimized pick-l algorithm and the basic pick-l algorithm
α_{ k } comparison and execution time (Gaussian matrix)
Matrix A  Algo.  α _{4}  α _{5}  α _{6}  α _{7}  α _{8} 

(a) α_{ k } comparison  
28 × 40  Basic pick-3  0.52  0.64  0.75  0.86  0.97 
Optimized pick-3  0.52  0.63  0.75  0.85  0.96  
3-Step TSA  0.47  0.54  0.62  0.67  0.72–0.78  
40 × 50  Basic pick-3  0.40  0.48  0.57  0.65  0.72 
Optimized pick-3  0.39  0.47  0.55  0.62  0.70  
3-Step TSA  0.36  0.41  0.46  0.51  0.57–0.59  
(b) Execution time (Unit: second)  
28 × 40  Basic pick-3  249.28  249.28  249.28  249.28  249.28 
Optimized pick-3  420.97  410.43  422.14  422.41  460.52  
40 × 50  Basic pick-3  748.88  748.88  748.88  748.88  748.88 
Optimized pick-3  3.31e3  3.49e3  3.26e3  3.26e3  3.31e3 
In summary, the optimized pick-l algorithm provides upper bounds on α_{ k } that are better than or equal to those of the basic pick-l algorithm, at the cost of additional complexity. Despite this increased complexity, the optimized pick-l algorithm has an important theoretical merit, namely Lemma 6.
5.3 Complexity of tree search algorithm
5.4 Application to network tomography problem
Table 5: α_{ k } and execution time in network tomography problems

(a) α_k values
Matrix A | Algo. | α_1 | α_2 | α_3 | α_4 | α_5 | k_max
33×66 | 1-Step TSA | 0.28 | 0.41 | 0.50 | 0.57 | 0.62 | 2
 | 2-Step TSA | 0.28 | 0.41 | 0.50 | 0.57 | 0.62–0.64 | 2
 | 3-Step TSA | 0.28 | 0.41 | 0.50 | 0.57 | 0.62 | 2
53×105 | 1-Step TSA | 0.20 | 0.29 | 0.36 | 0.45 | 0.52–0.54 | 4
 | 2-Step TSA | 0.20 | 0.29 | 0.36 | 0.45 | 0.49–0.56 | 4
 | 3-Step TSA | 0.20 | 0.29 | 0.36 | 0.45 | 0.52 | 4

(b) Execution time (Unit: second)
Matrix A | Algo. | α_1 | α_2 | α_3 | α_4 | α_5
33×66 | 1-Step TSA | 0.74 | 3.62 | 28.94 | 404.11 | 5.94e4
 | 2-Step TSA | 0.74 | 3.62 | 43.94 | 541.70 | 1 day
 | 3-Step TSA | 0.74 | 3.62 | 1.69e3 | 1.73e3 | 3.70e4
 | ESM | 0.64 | 3.94 | 1.63e3 | 1.4e4^a | 1.8e5^a
53×105 | 1-Step TSA | 1.31 | 30.61 | 608.90 | 5.35e3 | 1 day
 | 2-Step TSA | 1.31 | 116.12 | 143.99 | 1.05e3 | 1 day
 | 3-Step TSA | 1.31 | 116.12 | 7.95e3 | 7.93e3 | 1.38e4
 | ESM | 1.28 | 127.28 | 8.70e3 | 9.6e4^a | 1.9e6^a
Table 6: α_{ k } and execution time in a large network model with 300 nodes and 400 edges

(a) α_k values
Algo. | α_1 | α_2 | α_3 | α_4 | α_5 | α_6
1-Step TSA | 0.07 | 0.13 | 0.15 | 0.18 | 0.20 | 0.22–0.26^a
2-Step TSA | 0.07 | 0.13 | 0.15 | 0.18 | 0.20–0.23^a | 0.22–0.28^a

(b) Execution time (Unit: second)
Algo. | α_1 | α_2 | α_3 | α_4 | α_5 | α_6
1-Step TSA | 63.37 | 65.70 | 599.96 | 5.49e3 | 8.60e4 | 1 day
2-Step TSA | 63.37 | 3.46e4 | 3.54e4 | 4.03e4 | 1 day | 1 day
ESM | 73.22 | 3.20e4 | 1.59e6^b | 1.58e8^b | 1.25e10^b | 8.22e11^b
5.5 Discussion
 1.
Comparisons with LP and SDP. Our proposed pick-1-element algorithm achieves performance similar to the LP [8] and SDP [3] methods, but it has the clear advantage of being more computationally efficient for large-dimensional sensing matrices. See Table 3, where the LP and SDP methods cannot provide performance bounds on the recoverable sparsity k due to their high computational complexity, while our pick-1-element algorithm provides such bounds efficiently. The LP method is expensive because it must solve a large convex program of design dimension mn, which becomes prohibitive when m and n are large [8].
In our pick-1-element algorithm, we proposed the novel idea of sorting the α_{1,i}'s (see Lemma 2), which leads to improved performance bounds on α_{ k } and on the recoverable sparsity k. This sorting idea, combined with Lemma 2, yields a larger bound on the recoverable sparsity k than using α_{1} alone to bound k, as in ([8] Section 4.2.B).
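To make the sorting idea concrete, here is a minimal sketch (our own Python/SciPy illustration, not the authors' code; the function names are ours) of one standard way to compute each α_{1,i} as a linear program: maximize z_i over the null space of A subject to ∥z∥_1 ≤ 1, using the split z = u − v with u, v ≥ 0. The helper `recoverable_k` then applies the sorting idea: sparsity level k is certified as long as the sum of the k largest α_{1,i} stays below 1/2 (see Lemma 2 for the precise statement).

```python
import numpy as np
from scipy.optimize import linprog

def alpha_1i(A):
    """For each i, compute alpha_{1,i} = max |z_i| s.t. Az = 0, ||z||_1 <= 1,
    as an LP via the split z = u - v with u, v >= 0."""
    m, n = A.shape
    A_eq = np.hstack([A, -A])          # A(u - v) = 0
    b_eq = np.zeros(m)
    A_ub = np.ones((1, 2 * n))         # sum(u) + sum(v) <= 1  =>  ||z||_1 <= 1
    b_ub = np.ones(1)
    alphas = np.zeros(n)
    for i in range(n):
        c = np.zeros(2 * n)
        c[i], c[n + i] = -1.0, 1.0     # minimize -(u_i - v_i) = -z_i
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=(0, None))
        alphas[i] = -res.fun           # N(A) is symmetric, so max z_i = max |z_i|
    return alphas

def recoverable_k(alphas):
    """Largest k with (sum of the k largest alpha_{1,i}) < 1/2,
    mirroring the sorting idea behind Lemma 2."""
    s = np.cumsum(np.sort(alphas)[::-1])
    return int(np.sum(s < 0.5))
```

For instance, for a 2×3 matrix whose null space is spanned by (1, −1, 1), each α_{1,i} equals 1/3 and the certified sparsity is k = 1.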
 2.
Set-specific upper bounds. Our proposed pick-l-element algorithm (l≥2) is novel and provides improved bounds on α_{ k } and the recoverable sparsity k, with computational complexity polynomial in n when l is fixed. The approach is impractical for large l, but already for l=2 and l=3 the pick-l-element algorithm improves on the performance bounds of the previous research [3, 8].
The fact that we can obtain upper bounds on α_{ k } from the results of the pick-l-element (l≥2) algorithm is new and nontrivial (see Lemmas 2, 3, and 4). For example, if we know α_{5}≤0.22, Lemma 3 gives α_{11}≤0.22×11/5<0.5.
Moreover, our pick-l-element algorithm can provide a set-specific upper bound on α_{k,K}, laying the foundation for our branch-and-bound TSA.
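The scaling in the example above can be packaged as a small helper. The sketch below is our own illustration, assuming the bound takes the form α_{k′} ≤ (k′/k)·α_{k} used in the example (see Lemma 3 for the precise statement); `certified_sparsity` inverts it to find the largest k with the implied bound on α_{ k } still below 1/2.

```python
import math

def alpha_upper_bound(alpha_k, k, k_prime):
    """Upper-bound alpha_{k'} from a known bound alpha_k, using the scaling
    alpha_{k'} <= (k'/k) * alpha_k from the paper's Lemma 3 example;
    alpha values never exceed 1, so we also cap at 1."""
    assert k_prime >= k
    return min(1.0, alpha_k * k_prime / k)

def certified_sparsity(alpha_l, l):
    """Largest k with (k/l) * alpha_l < 1/2, i.e., k-sparse recovery certified
    via the null space condition alpha_k < 1/2 (same scaling assumption)."""
    k = math.floor(l / (2 * alpha_l))
    if k / l * alpha_l >= 0.5:   # guard the boundary case (k/l)*alpha_l == 1/2
        k -= 1
    return k
```

With α_5 ≤ 0.22 this reproduces the text's conclusion: α_11 ≤ 0.484 < 0.5, so sparsity 11 is certified.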
 3.
Computational complexity of TSA. We proposed TSA to find precise values of α_{ k } with significantly lower average-case computational complexity than ESM. The computational complexity of TSA depends on n, the sparsity k, and the chosen constant l. When k, n, and l are all large, finding α_{ k } via TSA remains computationally expensive; in the worst case, TSA has the same computational complexity as ESM. However, our extensive simulations (Fig. 3 to Fig. 5 and Table 5 to Table 8) show that, on average, TSA greatly reduces the computational complexity of finding α_{ k } compared with ESM.
Table 7: α_{ k } comparison and execution time (Gaussian matrix)
(a) α_k comparison
A (m×n) | Algo. | α_1 | α_2 | α_3 | α_4 | α_5 | k_max^d
20×40 | Pick-1 | 0.28 | 0.55 | 0.81 | 1 | 1 | 1/1.1
 | Pick-2 | 0.28 | 0.45 | 0.66 | 0.85 | 1 | 2/1.9
 | Pick-3 | 0.28 | 0.45 | 0.57 | 0.76 | 0.92 | 2/1.9
 | 1-Step TSA | 0.28 | 0.45 | 0.57 | 0.67 | 0.75 | 2/1.9
 | 2-Step TSA | 0.28 | 0.45 | 0.57 | 0.67 | 0.75 | 2/1.9
 | 3-Step TSA | 0.28 | 0.45 | 0.57 | 0.67 | 0.75 | 2/1.9
 | LP^a | 0.28 | 0.50 | 0.67 | 0.84 | 0.98 | 2/1.6
 | SDP^b | 0.28 | 0.49 | 0.66 | 0.81 | 0.95 | 2/1.8
 | ESM^c | 0.28 | 0.45 | 0.57 | 0.67 | 0.75 | 2/1.9
24×40 | Pick-1 | 0.23 | 0.46 | 0.67 | 0.87 | 1 | 2/2.0
 | Pick-2 | 0.23 | 0.37 | 0.53 | 0.69 | 0.85 | 2/2.1
 | Pick-3 | 0.23 | 0.37 | 0.46 | 0.61 | 0.75 | 3/2.8
 | 1-Step TSA | 0.23 | 0.37 | 0.46 | 0.57 | 0.65 | 3/2.8
 | 2-Step TSA | 0.23 | 0.37 | 0.46 | 0.57 | 0.65 | 3/2.8
 | 3-Step TSA | 0.23 | 0.37 | 0.46 | 0.57 | 0.65 | 3/2.8
 | LP | 0.23 | 0.41 | 0.56 | 0.71 | 0.84 | 2/2.0
 | SDP | 0.23 | 0.41 | 0.55 | 0.70 | 0.82 | 2/2.0
 | ESM | 0.23 | 0.37 | 0.46 | 0.57 | 0.65 | 3/2.8
28×40 | Pick-1 | 0.18 | 0.36 | 0.53 | 0.70 | 0.86 | 2/2.0
 | Pick-2 | 0.18 | 0.31 | 0.46 | 0.59 | 0.72 | 3/3.0
 | Pick-3 | 0.18 | 0.31 | 0.41 | 0.54 | 0.66 | 3/3.0
 | 1-Step TSA | 0.18 | 0.31 | 0.41 | 0.49 | 0.57 | 4/3.5
 | 2-Step TSA | 0.18 | 0.31 | 0.41 | 0.49 | 0.57 | 4/3.5
 | 3-Step TSA | 0.18 | 0.31 | 0.41 | 0.49 | 0.57 | 4/3.5
 | LP | 0.18 | 0.34 | 0.49 | 0.61 | 0.72 | 3/3.0
 | SDP | 0.18 | 0.34 | 0.48 | 0.60 | 0.71 | 3/3.0
 | ESM | 0.18 | 0.31 | 0.41 | 0.49 | 0.57 | 4/3.5
32×40 | Pick-1 | 0.14 | 0.29 | 0.42 | 0.55 | 0.67 | 3/3.0
 | Pick-2 | 0.14 | 0.24 | 0.37 | 0.47 | 0.58 | 4/3.8
 | Pick-3 | 0.14 | 0.24 | 0.33 | 0.44 | 0.53 | 4/4.2
 | 1-Step TSA | 0.14 | 0.24 | 0.33 | 0.40 | 0.47 | 5/4.9
 | 2-Step TSA | 0.14 | 0.24 | 0.33 | 0.40 | 0.47 | 5/4.9
 | 3-Step TSA | 0.14 | 0.24 | 0.33 | 0.40 | 0.47 | 5/4.9
 | LP | 0.14 | 0.27 | 0.38 | 0.49 | 0.58 | 4/3.9
 | SDP | 0.14 | 0.27 | 0.38 | 0.48 | 0.57 | 4/4.0
 | ESM | 0.14 | 0.24 | 0.33 | 0.40 | 0.47 | 5/4.9

(b) Execution time (Unit: second)
A (m×n) | Algo. | α_1 | α_2 | α_3 | α_4 | α_5
20×40 | Pick-1 | 0.35 | 0.35 | 0.35 | 0.35 | 0.35
 | Pick-2 | 0.35 | 10.96 | 10.96 | 10.96 | 10.95
 | Pick-3 | 0.35 | 10.96 | 313.65 | 313.65 | 313.65
 | 1-Step TSA | 0.50 | 2.14 | 11.78 | 128.98 | 1.62e3
 | 2-Step TSA | 0.50 | 13.20 | 14.11 | 58.93 | 3.77e3
 | 3-Step TSA | 0.50 | 13.20 | 320.20 | 346.43 | 695.53
 | LP | 0.55 | 0.55 | 0.58 | 0.55 | 0.56
 | SDP | 56.92 | 6.02e3 | 5.14e3 | 5.12e3 | 5.61e3
 | ESM | 0.35 | 10.96 | 313.65 | 4.5e3 | 6.0e4
24×40 | Pick-1 | 0.44 | 0.44 | 0.44 | 0.44 | 0.44
 | Pick-2 | 0.44 | 13.00 | 13.00 | 13.00 | 13.00
 | Pick-3 | 0.44 | 13.00 | 311.27 | 311.27 | 311.27
 | 1-Step TSA | 0.50 | 2.05 | 9.63 | 77.45 | 429.48
 | 2-Step TSA | 0.50 | 12.92 | 13.60 | 35.08 | 634.62
 | 3-Step TSA | 0.50 | 12.92 | 319.27 | 378.10 | 481.29
 | LP | 0.84 | 0.94 | 0.88 | 0.83 | 0.82
 | SDP | 62.18 | 5.59e3 | 4.89e3 | 4.75e3 | 5.37e3
 | ESM | 0.44 | 13.00 | 311.27 | 4.6e3 | 6.4e4
28×40 | Pick-1 | 0.58 | 0.58 | 0.58 | 0.58 | 0.58
 | Pick-2 | 0.58 | 14.67 | 14.67 | 14.67 | 14.67
 | Pick-3 | 0.58 | 14.67 | 326.80 | 326.80 | 326.80
 | 1-Step TSA | 0.52 | 1.41 | 4.39 | 32.43 | 119.86
 | 2-Step TSA | 0.52 | 13.54 | 13.82 | 29.35 | 126.62
 | 3-Step TSA | 0.52 | 13.54 | 327.79 | 404.23 | 383.61
 | LP | 1.12 | 1.20 | 1.12 | 1.09 | 0.68
 | SDP | 71.27 | 5.55e3 | 4.90e3 | 4.98e3 | 4.72e3
 | ESM | 0.58 | 14.67 | 326.80 | 4.7e3 | 6.9e4
32×40 | Pick-1 | 0.42 | 0.42 | 0.42 | 0.42 | 0.42
 | Pick-2 | 0.42 | 13.29 | 13.29 | 13.29 | 13.29
 | Pick-3 | 0.42 | 13.29 | 331.80 | 331.80 | 331.80
 | 1-Step TSA | 0.55 | 1.14 | 2.89 | 13.50 | 40.67
 | 2-Step TSA | 0.55 | 14.22 | 14.32 | 18.13 | 40.35
 | 3-Step TSA | 0.55 | 14.22 | 340.87 | 336.29 | 355.06
 | LP | 0.70 | 0.71 | 0.72 | 0.70 | 0.70
 | SDP | 56.12 | 7.17e3 | 5.43e3 | 5.07e3 | 4.79e3
 | ESM | 0.42 | 13.29 | 331.80 | 4.9e3 | 7.1e4
Table 8: α_{ k } comparison and execution time (partial Fourier matrix)

(a) α_k comparison
A (m×n) | Algo. | α_1 | α_2 | α_3 | α_4 | α_5 | k_max^d
20×40 | Pick-1 | 0.19 | 0.39 | 0.59 | 0.78 | 0.98 | 2/2.0
 | Pick-2 | 0.19 | 0.36 | 0.55 | 0.73 | 0.91 | 2/2.2
 | Pick-3 | 0.19 | 0.36 | 0.47 | 0.64 | 0.80 | 3/2.7
 | 1-Step TSA | 0.19 | 0.36 | 0.47 | 0.61 | 0.70 | 3/2.7
 | 2-Step TSA | 0.19 | 0.36 | 0.47 | 0.61 | 0.70 | 3/2.7
 | 3-Step TSA | 0.19 | 0.36 | 0.47 | 0.61 | 0.70 | 3/2.7
 | LP^a | 0.19 | 0.39 | 0.59 | 0.78 | 0.98 | 2/2.0
 | SDP^b | 0.19 | 0.39 | 0.59 | 0.78 | 0.98 | 2/2.0
 | ESM^c | 0.19 | 0.36 | 0.47 | 0.61 | 0.70 | 3/2.7
24×40 | Pick-1 | 0.15 | 0.31 | 0.47 | 0.62 | 0.78 | 3/2.8
 | Pick-2 | 0.15 | 0.27 | 0.42 | 0.55 | 0.69 | 3/3.0
 | Pick-3 | 0.15 | 0.27 | 0.38 | 0.51 | 0.64 | 3/3.4
 | 1-Step TSA | 0.15 | 0.27 | 0.38 | 0.49 | 0.59 | 4/3.5
 | 2-Step TSA | 0.15 | 0.27 | 0.38 | 0.49 | 0.59 | 4/3.5
 | 3-Step TSA | 0.15 | 0.27 | 0.38 | 0.49 | 0.59 | 4/3.5
 | LP | 0.15 | 0.31 | 0.47 | 0.62 | 0.78 | 3/2.8
 | SDP | 0.15 | 0.31 | 0.47 | 0.62 | 0.78 | 3/2.8
 | ESM | 0.15 | 0.27 | 0.38 | 0.49 | 0.59 | 4/3.5
28×40 | Pick-1 | 0.12 | 0.25 | 0.37 | 0.50 | 0.62 | 4/3.6
 | Pick-2 | 0.12 | 0.23 | 0.35 | 0.47 | 0.58 | 4/4.0
 | Pick-3 | 0.12 | 0.23 | 0.32 | 0.44 | 0.54 | 4/4.0
 | 1-Step TSA | 0.12 | 0.23 | 0.32 | 0.41 | 0.50 | 4/4.0
 | 2-Step TSA | 0.12 | 0.23 | 0.32 | 0.41 | 0.50 | 4/4.1
 | 3-Step TSA | 0.12 | 0.23 | 0.32 | 0.41 | 0.50 | 4/4.1
 | LP | 0.12 | 0.25 | 0.37 | 0.50 | 0.62 | 4/3.6
 | SDP | 0.12 | 0.25 | 0.37 | 0.50 | 0.62 | 4/3.6
 | ESM | 0.12 | 0.23 | 0.32 | 0.41 | 0.50 | 4/4.1
32×40 | Pick-1 | 0.09 | 0.19 | 0.29 | 0.38 | 0.48 | 5/4.7
 | Pick-2 | 0.09 | 0.17 | 0.27 | 0.36 | 0.44 | 5/4.7
 | Pick-3 | 0.09 | 0.17 | 0.25 | 0.35 | 0.43 | 5/4.7
 | 1-Step TSA | 0.09 | 0.17 | 0.25 | 0.33 | 0.39 | 5/4.7
 | 2-Step TSA | 0.09 | 0.17 | 0.25 | 0.33 | 0.39 | 5/4.7
 | 3-Step TSA | 0.09 | 0.17 | 0.25 | 0.33 | 0.39 | 5/4.7
 | LP | 0.09 | 0.19 | 0.29 | 0.38 | 0.48 | 5/4.7
 | SDP | 0.09 | 0.19 | 0.29 | 0.38 | 0.48 | 5/4.7
 | ESM | 0.09 | 0.17 | 0.25 | 0.37 | 0.39 | 5/4.7

(b) Execution time (Unit: second)
A (m×n) | Algo. | α_1 | α_2 | α_3 | α_4 | α_5
20×40 | Pick-1 | 0.31 | 0.31 | 0.31 | 0.31 | 0.31
 | Pick-2 | 0.31 | 10.85 | 10.85 | 10.85 | 10.85
 | Pick-3 | 0.31 | 10.85 | 260.41 | 260.41 | 260.41
 | 1-Step TSA | 0.47 | 9.72 | 70.57 | 329.28 | 3.60e3
 | 2-Step TSA | 0.47 | 11.97 | 18.54 | 45.18 | 3.36e3
 | 3-Step TSA | 0.47 | 11.97 | 291.29 | 297.45 | 633.12
 | LP | 0.49 | 0.77 | 0.53 | 0.59 | 0.51
 | SDP | 33.93 | 2.34e3 | 2.65e3 | 2.91e3 | 2.60e3
 | ESM | 0.31 | 10.85 | 260.41 | 4.1e3 | 6.0e4
24×40 | Pick-1 | 0.39 | 0.39 | 0.39 | 0.39 | 0.39
 | Pick-2 | 0.39 | 11.51 | 11.51 | 11.51 | 11.51
 | Pick-3 | 0.39 | 11.51 | 302.86 | 302.86 | 302.86
 | 1-Step TSA | 0.48 | 12.12 | 76.21 | 407.67 | 2.77e3
 | 2-Step TSA | 0.48 | 12.52 | 21.46 | 107.00 | 1.83e3
 | 3-Step TSA | 0.48 | 12.52 | 306.43 | 426.17 | 1.36e3
 | LP | 0.62 | 0.56 | 0.66 | 0.59 | 0.58
 | SDP | 41.13 | 2.39e3 | 2.66e3 | 2.63e3 | 2.56e3
 | ESM | 0.39 | 11.51 | 302.86 | 4.5e3 | 6.4e4
28×40 | Pick-1 | 0.43 | 0.43 | 0.43 | 0.43 | 0.43
 | Pick-2 | 0.43 | 13.29 | 13.29 | 13.29 | 13.29
 | Pick-3 | 0.43 | 13.29 | 341.05 | 341.05 | 341.05
 | 1-Step TSA | 0.50 | 8.70 | 31.53 | 272.68 | 731.90
 | 2-Step TSA | 0.50 | 12.99 | 16.85 | 47.45 | 544.79
 | 3-Step TSA | 0.50 | 12.99 | 317.40 | 410.47 | 553.67
 | LP | 0.65 | 0.67 | 0.71 | 0.67 | 0.75
 | SDP | 40.51 | 2.17e3 | 2.29e3 | 2.80e3 | 2.63e3
 | ESM | 0.43 | 13.29 | 341.05 | 4.7e3 | 6.5e4
32×40 | Pick-1 | 0.57 | 0.57 | 0.57 | 0.57 | 0.57
 | Pick-2 | 0.57 | 17.24 | 17.24 | 17.24 | 17.24
 | Pick-3 | 0.57 | 17.24 | 385.26 | 385.26 | 385.26
 | 1-Step TSA | 0.52 | 6.39 | 22.35 | 101.67 | 451.62
 | 2-Step TSA | 0.52 | 13.38 | 18.65 | 49.46 | 372.35
 | 3-Step TSA | 0.52 | 13.38 | 326.40 | 476.55 | 1.02e3
 | LP | 0.86 | 0.89 | 0.78 | 0.75 | 0.76
 | SDP | 46.51 | 2.41e3 | 2.62e3 | 2.53e3 | 2.75e3
 | ESM | 0.57 | 17.24 | 385.26 | 4.8e3 | 6.8e4
Moreover, since TSA maintains both an upper bound and a lower bound on α_{ k } during its iterations, one can always terminate TSA early and still obtain better performance bounds on α_{ k } than those of the LP and SDP methods. Alternatively, we can use TSA to find the exact value of α_{ l } for some l<k and then use Lemma 3 to bound α_{ k }.
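The pruning logic behind a branch-and-bound search of this kind can be sketched generically. The code below is our own schematic (not the authors' Matlab TSA): `exact_value` stands in for the exact evaluation on a complete support, and `upper_bound` for a set-specific upper bound such as the pick-l bound on α_{k,K}. A node is discarded as soon as its upper bound cannot beat the incumbent, and the top of the heap always carries a global upper bound, which is what makes early termination with two-sided bounds possible.

```python
import heapq
import itertools

def branch_and_bound_max(n, k, exact_value, upper_bound):
    """Schematic branch-and-bound maximization over k-element subsets of
    {0, ..., n-1} (a sketch of the pruning idea, not the authors' TSA code).

    exact_value(S): objective value of a complete support S (|S| = k).
    upper_bound(S): an upper bound valid for every k-support containing S.
    Returns (best_value, best_support)."""
    tie = itertools.count()                    # tie-breaker for equal bounds
    best_val, best_set = float('-inf'), None
    heap = [(-upper_bound(frozenset()), next(tie), frozenset())]
    while heap:
        neg_ub, _, S = heapq.heappop(heap)
        if -neg_ub <= best_val:                # prune: cannot beat incumbent
            continue
        if len(S) == k:                        # complete support: new incumbent
            v = exact_value(S)
            if v > best_val:
                best_val, best_set = v, S
            continue
        start = max(S) + 1 if S else 0         # grow supports in index order
        for i in range(start, n - (k - len(S)) + 1):
            child = S | {i}
            ub = upper_bound(child)
            if ub > best_val:                  # push only potentially useful nodes
                heapq.heappush(heap, (-ub, next(tie), child))
    return best_val, best_set
```

On a toy problem (maximizing a sum of weights over 2-element subsets, with the obvious "add the largest remaining weights" upper bound), the skeleton visits only the nodes whose bounds survive pruning; in the worst case it still enumerates every support, matching the remark about ESM.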
 4.
Use of data structures. We used object-oriented programming (OOP) to implement TSA in Matlab [25], because OOP makes it easy to handle tree-type structures. We defined a class and created objects of that class to store the properties of each node J in the tree, e.g., B(J). To connect two tree nodes, we used a doubly linked list data structure as part of each object. For readers who would like to implement the algorithm using alternative data structures, we have provided implementation-agnostic pseudocode in Algorithm 2.
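For readers implementing the tree outside Matlab, plain object references give the same bidirectional navigation as a doubly linked list. The sketch below is our own illustration; field names such as `support` and `bound` are hypothetical stand-ins for the node's index set J and its bound B(J).

```python
class TreeNode:
    """A search-tree node: stores an index set J, a bound B(J), and
    bidirectional links (a parent pointer plus a children list)."""
    def __init__(self, support, bound, parent=None):
        self.support = frozenset(support)  # the index set J
        self.bound = bound                 # e.g., the set-specific bound B(J)
        self.parent = parent               # link toward the root
        self.children = []                 # links away from the root
        if parent is not None:
            parent.children.append(self)

    def path_to_root(self):
        """Walk the parent links back to the root (bidirectional navigation)."""
        node, path = self, []
        while node is not None:
            path.append(node.support)
            node = node.parent
        return path
```

Appending each new node to its parent's `children` list keeps both directions of the link consistent without any manual pointer bookkeeping.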
 5.
Difference from works on phase transitions. There has been extensive research on the phase transitions of various sparse recovery algorithms, such as basis pursuit (BP), orthogonal matching pursuit (OMP), and approximate message passing (AMP) [1]. Our research differs from this line of work in two respects. First, our work and the previous works [3, 8] focus on worst-case performance guarantees (recovering all possible k-sparse signals), whereas the phase transition literature considers average-case guarantees for a single k-sparse signal with fixed support and sign pattern. Second, phase transition bounds are mostly derived for random matrices; hence, they cannot be applied to a given deterministic sensing matrix.
6 Conclusion
In this paper, we considered the problem of verifying the null space condition in compressed sensing. Calculating the proportion parameter α_{ k } that characterizes the null space condition of a sensing matrix is a nonconvex optimization problem, which is known to be NP-hard [7]. To verify the null space condition, we proposed novel and simple enumeration-based algorithms, called the basic and optimized pick-l algorithms, to obtain upper bounds on α_{ k }. Building on these algorithms, we further designed the tree search algorithm (TSA) to obtain a global solution to this nonconvex optimization problem. Numerical experiments show that our algorithms outperform the previously proposed algorithms [3, 8] in both accuracy and speed.
7 Appendix
8 Proof of Proposition 1
Proof
where we can exchange the order of "maximize" and "minimize" in the last equality because \(\|e_{i_{t}} - A^{T} y_{i_{t}}\|_{\infty }\) only depends on \(y_{i_{t}}\).
□
9 Sensing matrices with n=40
We conducted our experiments on an HP Z220 CMT with an Intel Core i7-3770 CPU @ 3.4 GHz and 16 GB DDR3 RAM, using Matlab (R2013b) on Windows 7.
The LP method is from http://www2.isye.gatech.edu/~nemirovs/, and the SDP method is from http://www.di.ens.fr/~aspremon/NSPcode.html.
Declarations
Acknowledgements
We thank Alexandre d'Aspremont of CNRS at École Normale Supérieure, Anatoli Juditsky of Laboratoire Jean Kuntzmann at Université Grenoble Alpes, and Arkadi Nemirovski of the Georgia Institute of Technology for helpful discussions and for providing the codes used in the simulations of [3] and [8].
Funding
The work of Weiyu Xu is supported by Simons Foundation 318608, KAUST OCRF-2014-CRG3, NSF DMS-1418737, and NIH 1R01EB020665-01.
Availability of data and material
All the codes used for the numerical experiments are available at the following link: https://sites.google.com/view/myungcho/software/nsc.
Authors’ contributions
MC and WX designed the algorithms. MC implemented the algorithms. KVM checked the implementation of the algorithms and helped to polish the manuscript. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
1. YC Eldar, G Kutyniok, Compressed Sensing: Theory and Applications (Cambridge University Press, Cambridge, 2012).
2. EJ Candès, T Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005).
3. A d'Aspremont, L El Ghaoui, Testing the nullspace property using semidefinite programming. Math. Prog. 127(1), 123–144 (2011).
4. EJ Candès, J Romberg, T Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006).
5. EJ Candès, J Romberg, T Tao, Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math. 59(8), 1207–1223 (2006).
6. D Donoho, Neighborly polytopes and sparse solution of underdetermined linear equations. Technical report (Stanford University, Dept. of Statistics, Stanford, 2005).
7. AM Tillmann, ME Pfetsch, The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inf. Theory 60(2), 1248–1259 (2014).
8. A Juditsky, A Nemirovski, On verifiable sufficient conditions for sparse signal recovery via ℓ_{1} minimization. Math. Prog. 127(1), 57–88 (2011).
9. A Cohen, W Dahmen, R DeVore, Compressed sensing and best k-term approximation. J. American Math. Soc. 22(1), 211–231 (2009).
10. W Xu, B Hassibi, Precise stability phase transitions for ℓ_{1} minimization: a unified geometric framework. IEEE Trans. Inf. Theory 57(10), 6894–6919 (2011).
11. K Lee, Y Bresler, Computing performance guarantees for compressed sensing, in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2008), pp. 5129–5132.
12. G Tang, A Nehorai, Verifiable and computable ℓ_{∞} performance evaluation of ℓ_{1} sparse signal recovery, in Proceedings of Conference on Information Sciences and Systems (CISS) (2011), pp. 1–6.
13. M Cho, W Xu, New algorithms for verifying the null space conditions in compressed sensing, in Proceedings of Asilomar Conference on Signals, Systems and Computers (2013), pp. 1038–1042.
14. W Xu, E Mallada, A Tang, Compressive sensing over graphs, in Proceedings of IEEE International Conference on Computer Communications (INFOCOM) (2011), pp. 2087–2095.
15. MH Firooz, S Roy, Network tomography via compressed sensing, in Proceedings of IEEE Global Telecommunications Conference (GLOBECOM) (2010), pp. 1–5.
16. MJ Coates, RD Nowak, Network tomography for internal delay estimation, in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 6 (2001), pp. 3409–3412.
17. M Grant, S Boyd, CVX: Matlab software for disciplined convex programming, version 2.1 beta (2012). http://cvxr.com/cvx.
18. MOSEK ApS, The MOSEK optimization toolbox for MATLAB manual, version 7.1 (revision 31) (2015). http://docs.mosek.com/7.1/toolbox/index.html.
19. Y Tsang, M Coates, RD Nowak, Network delay tomography. IEEE Trans. Signal Process. 51(8), 2125–2136 (2003).
20. Y Vardi, Network tomography: estimating source–destination traffic intensities from link data. J. American Stat. Assoc. 91(433), 365–377 (1996).
21. R Castro, M Coates, G Liang, R Nowak, B Yu, Network tomography: recent developments. Stat. Sci. 19(3), 499–517 (2004).
22. DL Donoho, J Tanner, Precise undersampling theorems. Proc. IEEE 98(6), 913–924 (2010).
23. G Brassard, P Bratley, Fundamentals of Algorithmics (Prentice Hall, Englewood Cliffs, 1996).
24. DB Wilson, Generating random spanning trees more quickly than the cover time, in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing (1996), pp. 296–303.
25. Mathworks, Object-Oriented Programming in MATLAB. https://www.mathworks.com/discovery/object-oriented-programming.html. Accessed 26 May 2017.