Computable performance guarantees for compressed sensing matrices

The null space condition for ℓ1 minimization in compressed sensing is a necessary and sufficient condition on the sensing matrix under which a sparse signal can be uniquely recovered from the observation data via ℓ1 minimization. However, verifying the null space condition is known to be computationally challenging. Most of the existing methods can provide only upper and lower bounds on the proportion parameter that characterizes the null space condition. In this paper, we propose new polynomial-time algorithms to establish upper bounds on the proportion parameter. We leverage these bounds to develop a new procedure, the tree search algorithm, that is able to precisely and quickly verify the null space condition. Numerical experiments show that the execution speed and accuracy of the results obtained from our methods far exceed those of the previous methods, which rely on linear programming (LP) relaxation and semidefinite programming (SDP).


Introduction
Compressed sensing is an efficient signal processing technique to recover a sparse signal from fewer samples than required by the Nyquist-Shannon theorem, reducing the time and energy spent in the sampling operation. These advantages make compressed sensing attractive in various signal processing areas [1].
In compressed sensing, we are interested in recovering the sparsest vector x ∈ R^n that satisfies the underdetermined equation y = Ax. Here, R is the set of real numbers, A ∈ R^{m×n}, m < n, is a sensing matrix, and y ∈ R^m is the observation or measurement data. This is posed as an ℓ0 minimization problem:

minimize ‖x‖₀ subject to y = Ax, (1)

where ‖x‖₀ is the number of non-zero elements in the vector x. The ℓ0 minimization problem is NP-hard. Therefore, we often relax (1) to its closest convex approximation, the ℓ1 minimization problem:

minimize ‖x‖₁ subject to y = Ax. (2)

It has been shown that the optimal solution of ℓ0 minimization can be obtained by solving ℓ1 minimization under certain conditions (e.g., the restricted isometry property or RIP) [2][3][4][5][6]. For random sensing matrices, these conditions hold with high probability. We note that RIP is a sufficient condition for sparse recovery [7].

*Correspondence: myung-cho@uiowa.edu. Department of ECE, University of Iowa, Iowa City, IA, 52242, USA
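As an illustration of (2), the ℓ1 minimization can be cast as a linear program by splitting x into its positive and negative parts. The sketch below uses scipy (an assumption made for illustration only; the paper's own experiments use MOSEK and Matlab):

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, y):
    """Solve min ||x||_1 s.t. Ax = y via the standard LP reformulation:
    write x = u - v with u, v >= 0 and minimize sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)                   # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])            # A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

# A 1-sparse signal recovered from 2 measurements (3 unknowns).
A = np.array([[1., 0., 1.], [0., 1., -1.]])
print(l1_minimize(A, np.array([1., -1.])))   # close to [0, 0, 1]
```

At an LP optimum, u and v have disjoint supports, so the objective value equals ‖x‖₁.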
A necessary and sufficient condition under which a k-sparse signal x (k ≪ n) can be uniquely obtained via ℓ1 minimization is the null space condition (NSC) [3,8,9]. A matrix A satisfies NSC for a positive integer k if

‖z_K‖₁ < ‖z_{K̄}‖₁ (3)

holds true for all z ∈ {z : Az = 0, z ≠ 0} and for all subsets K ⊆ {1, 2, . . . , n} with |K| ≤ k. Here, K is an index set, |K| is the cardinality of K, z_K is the part of the vector z over the index set K, and K̄ is the complement of K. NSC is related to the proportion parameter α_k, defined as

α_k = max_{z: Az=0, z≠0} max_{K: |K|≤k} ‖z_K‖₁ / ‖z‖₁. (4)

The α_k is the optimal value of the following optimization problem:

maximize_{z, {K: |K|≤k}} ‖z_K‖₁ subject to ‖z‖₁ ≤ 1, Az = 0, (5)

where K is a subset of {1, 2, . . . , n} with cardinality at most k. The matrix A satisfies NSC for a positive integer k if and only if α_k < 1/2. Equivalently, NSC can be verified by computing or estimating α_k. The role of α_k is also important in the recovery of an approximately sparse signal x via ℓ1 minimization, where a smaller α_k implies more robustness [8][9][10].
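To make α_k concrete, consider the special case m = n − 1, where the null space of A is spanned by a single vector v. Then every z with Az = 0 is a multiple of v, and the maximization reduces to picking the k largest-magnitude entries of v. A minimal sketch (the closed form below applies only to this one-dimensional null space case; v is a hypothetical null space vector):

```python
import numpy as np

def alpha_k_one_dim(v, k):
    """alpha_k for a sensing matrix whose null space is spanned by v:
    the ratio ||z_K||_1 / ||z||_1 is scale-invariant, so it is maximized
    by taking K to be the indices of the k largest |v_i|."""
    a = np.sort(np.abs(v))[::-1]         # magnitudes, descending
    return a[:k].sum() / a.sum()

v = np.array([3., 2., 1., 1.])           # hypothetical null space vector
print(alpha_k_one_dim(v, 1))             # 3/7 < 1/2: NSC holds for k = 1
print(alpha_k_one_dim(v, 2))             # 5/7 >= 1/2: NSC fails for k = 2
```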
We are interested in computing α_k and, especially, finding the maximum k for which α_k < 1/2. However, computing α_k to verify NSC is extremely expensive and was reported in [7] to be NP-hard. Due to the challenges in computing α_k, verifying NSC explicitly for deterministic sensing matrices remains a relatively unexamined research area. In [3,8,11,12], convex relaxations were used to establish upper or lower bounds of α_k (or other parameters related to α_k) instead of computing the exact α_k. While [3,11] proposed semidefinite programming-based methods, [8,12] suggested linear programming relaxations to obtain the upper and lower bounds of α_k. For both approaches, computable performance guarantees on sparse signal recovery were reported via bounding α_k. However, these bounds of α_k could only verify NSC with k = O(√n), even though, theoretically, k can grow linearly with n.
Our work drastically departs from these prior methods [3,8,11,12], which provide only upper and lower bounds. In our solution, we propose the pick-l-element algorithms (1 ≤ l < k), which compute upper bounds of α_k in polynomial time. Subsequently, we leverage these algorithms to develop the tree search algorithm (TSA), a new method to compute the exact α_k that significantly reduces the computational complexity of an exhaustive search. This algorithm offers a way to control a smooth trade-off between the complexity and accuracy of the computations. In the conference precursor to this paper, we introduced the sandwiching algorithm (SWA) [13], which employs a branch-and-bound method. Although SWA can also be used to calculate the exact α_k, it has the disadvantage of greater memory usage than TSA. On the other hand, TSA provides memory and performance benefits for high-dimensional matrices (e.g., up to size ∼ 6000 × 6000).
It is noteworthy that our methods are different from RIP or the neighborly polytope framework for analyzing the sparse recovery capability of random sensing matrices. For example, prior works such as [6,22] employ the neighborly polytope to predict theoretical lower bounds on recoverable sparsity k for a randomly chosen Gaussian matrix. However, our methods do not resort to a probabilistic analysis and are applicable for any given deterministic sensing matrix. Also, our algorithms have the strength of providing better bounds than existing methods [3,8,11,12] for a wide range of matrix sizes.

Main contributions
We summarize our main contributions as follows:

(i) Faster algorithms for high dimensions. We designed the pick-l algorithm (and its optimized version), where l is a chosen integer, to provide upper bounds on α_k. We are able to show that as l increases, the optimized pick-l algorithm provides a tighter upper bound on α_k. Numerical experiments show that, even with l = 2 or 3, the pick-l algorithm already provides a better bound on α_k than the previous algorithms based on LP [8] and SDP [3]. For large sensing matrices, the pick-1-element algorithm can be significantly faster than the LP and SDP methods.

(ii) Novel formulations using branch-and-bound. Based on the pick-l algorithm, we propose a branch-and-bound tree search approach to compute tighter bounds or even the exact value of α_k. To the best of our knowledge, this tree search algorithm is the first branch-and-bound algorithm to verify NSC for ℓ1 minimization. This branch-and-bound approach heavily depends on the pick-l algorithm developed in this paper. For example, the LP [8] and SDP [3] methods cannot be directly adapted to provide an efficient branch-and-bound approach, due to their lack of subset-specific upper bounds on α_k. In numerical experiments, we demonstrated that the tree search algorithm reduces the execution time to precisely calculate α_k by around 40-8000 times, compared to the exhaustive search method.

(iii) Simultaneous upper and lower bounds. The branch-and-bound tree search algorithm simultaneously maintains upper and lower bounds of α_k during run-time. This approach has two benefits. Firstly, if one is interested in merely certifying NSC for a positive k rather than obtaining the exact α_k, then one can terminate the TSA early to shorten the running time. This can be done as soon as the global upper (lower) bound drops below (exceeds) 1/2, at which point we conclude that NSC for the positive k is satisfied (not satisfied). Secondly, consider the case when TSA is terminated early due to, say, constraints on running time. Then the process still yields meaningful bounds on α_k via the record of continuously maintained upper and lower bounds.

(iv) New results on recoverable sparsity. For a certain l < k, we can compute α_l or its upper bound by using the branch-and-bound tree search algorithm (for example, based on the pick-1-element algorithm). We introduce a novel result (Lemma 3), which can use α_l to lower bound the recoverable sparsity k. This approach of lower bounding the recoverable sparsity k is useful when l is too large to perform the pick-l algorithm directly (which requires C(n, l) enumerations).

Notations and preliminaries
We denote the sets of real numbers and positive integers by R and Z+ respectively. We reserve the uppercase letters K and L for index sets and the lowercase letters k, l ∈ Z+ for their respective cardinalities. We also use |·| to denote the cardinality of a set, and we write C(n, l) for the binomial coefficient "n choose l". We assume k > l ≥ 1 throughout the paper. For vectors or scalars, we use lowercase letters, e.g., x, k, l. For a vector x ∈ R^n, we use x_i for its i-th element. If we use an index set as a subscript of a vector, it represents the partial vector over the index set; for example, when x ∈ R^n and K = {1, 2}, x_K represents [x_1, x_2]^T. We reserve the uppercase A for a sensing matrix of dimension m × n. Since the number of columns of a sensing matrix A is n, the full index set we consider is {1, 2, . . . , n}. In addition, we denote the C(n, l) subsets of {1, 2, . . . , n} with cardinality l by L_1, L_2, . . . , L_{C(n,l)}. We use the superscript * to represent an optimal solution of an optimization problem; for instance, z* and K* are the optimal solutions of (5). Since we need to represent an optimal solution for each index set L_i, we use the superscript i* to represent an optimal solution for an index set L_i, e.g., z^{i*}. The maximum value of k such that both α_k < 1/2 and α_{k+1} ≥ 1/2 hold true is denoted by the maximum recoverable sparsity k_max.

Pick-l-element algorithm
Consider a sensing matrix with n columns. Then there are C(n, k) subsets K, each of cardinality k. When n and k are large, an exhaustive search over these subsets to compute α_k is extremely expensive. For example, when n = 100 and k = 10, it takes a search over about 1.73 × 10^13 subsets to compute α_k, a combinatorial task that is beyond the technological reach of common desktop computers. Our goal is to devise algorithms that can rapidly yield the exact value of α_k. As an initial step, we develop a method to compute an upper bound of α_k in polynomial time, called the pick-l-element algorithm (or simply, the pick-l algorithm), where l is a chosen integer such that 1 ≤ l < k.
Let us define the proportion parameter for a given index set L with |L| = l, denoted by α_{l,L}, as

α_{l,L} = max_{z: Az=0, z≠0} ‖z_L‖₁ / ‖z‖₁. (6)

(6) is the partial optimization problem of (4), considering only the vector z in the null space of A for a fixed index set L. We can obtain α_{l,L} by solving the following optimization problem:

maximize_z ‖z_L‖₁ subject to ‖z‖₁ ≤ 1, Az = 0. (7)

Since (7) maximizes a convex function for a given subset L, we cast (7) as 2^l linear programming problems by considering all the possible sign patterns of every element of z_L (e.g., if l = 2 and L = {1, 2}, then ‖z_L‖₁ = |z_1| + |z_2| can correspond to 2^l = 4 possibilities: z_1 + z_2, z_1 − z_2, −z_1 + z_2, and −z_1 − z_2). α_{l,L} is equal to the maximum among the 2^l objective values.
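The 2^l-LP procedure for (7) can be sketched as follows, with the constraint ‖z‖₁ ≤ 1 linearized through auxiliary variables t ≥ |z|. scipy's `linprog` stands in for the LP solver here (an assumption; the paper's experiments use MOSEK), and the index set L uses 0-based Python indices:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def alpha_l_L(A, L):
    """Solve (7): maximize ||z_L||_1 s.t. Az = 0, ||z||_1 <= 1, by running
    one LP per sign pattern of z_L (2^l LPs in total)."""
    m, n = A.shape
    # Variables x = [z; t]; constraints z - t <= 0 and -z - t <= 0 enforce
    # t_i >= |z_i|, and sum(t) <= 1 then enforces ||z||_1 <= 1.
    A_eq = np.hstack([A, np.zeros((m, n))])          # Az = 0
    A_ub = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),
                      np.hstack([-np.eye(n), -np.eye(n)]),
                      np.hstack([np.zeros((1, n)), np.ones((1, n))])])
    b_ub = np.concatenate([np.zeros(2 * n), [1.0]])
    bounds = [(None, None)] * n + [(0, None)] * n
    best = 0.0
    for signs in itertools.product([1.0, -1.0], repeat=len(L)):
        c = np.zeros(2 * n)
        for s, j in zip(signs, L):
            c[j] = -s                                # maximize s^T z_L
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
                      b_eq=np.zeros(m), bounds=bounds)
        if res.status == 0:
            best = max(best, -res.fun)
    return best

# Toy check: A = [1 1 1] has null space {z : z_1 + z_2 + z_3 = 0}.
A = np.array([[1., 1., 1.]])
print(alpha_l_L(A, [0]))        # about 0.5, e.g. z = (1/2, -1/2, 0)
print(alpha_l_L(A, [0, 1]))     # about 1.0: all l1 mass can sit on {1, 2}
```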
The pick-l algorithm uses the α_{l,L} values obtained from different index sets to compute an upper bound of α_k. Algorithm 1 shows the steps of the pick-l algorithm in detail. The following lemmata show that the pick-l algorithm provides an upper bound of α_k. Firstly, we provide Lemma 1 to derive an upper bound of the proportion parameter for a fixed index set K; then, in Lemma 2, we show that the pick-l algorithm yields an upper bound of α_k.
Algorithm 1: Pick-l-element algorithm (1 ≤ l < k) for computing an upper bound of α_k
1: Given a matrix A, calculate α_{l,L} for all the subsets L, |L| = l, via (7).
2: Sort these C(n, l) values of α_{l,L} in descending order as in (10).
3: Compute an upper bound of α_k via (9).
4: If the upper bound of α_k is larger than 1, then set the upper bound to 1. If the upper bound is less than 1/2, then NSC for k ∈ Z+ is satisfied.
Lemma 1 (Cheap Upper Bound (CUB) for a given subset K) Given a subset K with |K| = k, we have

α_{k,K} ≤ (1 / C(k−1, l−1)) Σ_{i: L_i ⊆ K} α_{l,L_i}. (8)

Proof Suppose that when z = z^{i*} and z = z^*, we achieve the optimal values of (6) for the given index sets L_i and K respectively, i.e., α_{l,L_i} = ‖z^{i*}_{L_i}‖₁ / ‖z^{i*}‖₁ and α_{k,K} = ‖z^*_K‖₁ / ‖z^*‖₁. Since every index in K appears in exactly C(k−1, l−1) of the subsets L_i ⊆ K, we obtain the following inequality:

α_{k,K} = ‖z^*_K‖₁ / ‖z^*‖₁ = (1 / C(k−1, l−1)) Σ_{i: L_i ⊆ K} ‖z^*_{L_i}‖₁ / ‖z^*‖₁ ≤ (1 / C(k−1, l−1)) Σ_{i: L_i ⊆ K} α_{l,L_i}.

The inequality is from the optimal value of (6) for each index set L_i.

Lemma 2
The pick-l algorithm provides an upper bound of α_k, namely

α_k ≤ (1 / C(k−1, l−1)) Σ_{i=1}^{C(k,l)} α_{l,L_i}, (9)

where α_{l,L_1} ≥ α_{l,L_2} ≥ · · · ≥ α_{l,L_i} ≥ · · · ≥ α_{l,L_{C(n,l)}}. (10)

Proof Without loss of generality, we assume that the α_{l,L_i}, i = 1, 2, . . . , C(n, l), are indexed in the descending order (10). It is noteworthy that α_{k,K} is defined for a fixed set K, whereas α_k is the maximum value over all the subsets with cardinality k. Suppose that when z = z^* and K = K^*, α_k is achieved in (4). From the aforementioned definitions and a similar argument as in Lemma 1, we have

α_k = α_{k,K*} ≤ (1 / C(k−1, l−1)) Σ_{i: L_i ⊆ K*} α_{l,L_i} ≤ (1 / C(k−1, l−1)) Σ_{i=1}^{C(k,l)} α_{l,L_i}.

The first inequality is from Lemma 1, and the last inequality is from the assumption that the α_{l,L_i} are sorted in descending order (K* has exactly C(k, l) subsets of cardinality l).
Steps 2 and 3 in Algorithm 1, which sort the α_{l,L} values and compute an upper bound of α_k from the sorted values via (9), can also be carried out, without any sorting operation, by solving the following optimization problem:

maximize_γ Σ_{i=1}^{C(n,l)} γ_i α_{l,L_i} subject to 0 ≤ γ_i ≤ 1/C(k−1, l−1) for all i, Σ_{i=1}^{C(n,l)} γ_i ≤ k/l. (11)

Here, we note that (1/C(k−1, l−1)) × C(k, l) = k/l. Therefore, at the optimal value, the C(k, l) largest α_{l,L_i} are chosen, each with the coefficient 1/C(k−1, l−1).
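The equivalence between the sorting procedure and the sorting-free LP can be checked numerically. A sketch, using hypothetical α_{l,L} values and scipy's `linprog` as an assumed stand-in solver:

```python
from math import comb
import numpy as np
from scipy.optimize import linprog

def bound_by_sorting(a, k, l):
    """(9): sum the C(k,l) largest alpha_{l,L} values, scaled by
    1/C(k-1,l-1)."""
    return sum(sorted(a, reverse=True)[:comb(k, l)]) / comb(k - 1, l - 1)

def bound_by_lp(a, k, l):
    """Sorting-free version: maximize sum(gamma_i * a_i) subject to
    0 <= gamma_i <= 1/C(k-1,l-1) and sum(gamma_i) <= k/l."""
    a = np.asarray(a, dtype=float)
    res = linprog(-a, A_ub=np.ones((1, a.size)), b_ub=[k / l],
                  bounds=[(0.0, 1.0 / comb(k - 1, l - 1))] * a.size)
    return -res.fun

a = [0.5, 0.4, 0.3, 0.2, 0.1]      # hypothetical alpha_{l,L} values
for k, l in [(3, 1), (3, 2), (4, 2)]:
    assert abs(bound_by_sorting(a, k, l) - bound_by_lp(a, k, l)) < 1e-9
```

The LP's budget k/l and per-coefficient cap 1/C(k−1, l−1) force its optimum onto the C(k, l) largest values, which is exactly what the sort computes.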
The upshot of the pick-l algorithm is that we reduce the number of enumerations from C(n, k) to C(n, l). For example, when n = 300, k = 20, and l = 2, the number of operations is reduced by a factor of around 10^26. Moreover, as n increases, the reduction rate increases. With the reduced enumerations, we can still obtain non-trivial upper bounds of α_k through the pick-l-element algorithm. We will present the performance of the pick-l algorithm in Section 5, showing that the pick-l algorithm provides better upper bounds than the previous research [3,8] even when l = 2. Furthermore, thanks to the pick-l algorithm, we can design a new algorithm based on a branch-and-bound search to calculate α_k by using upper bounds of α_k obtained from the pick-l algorithm. It is noteworthy that the cheap upper bound introduced in Lemma 1 provides upper bounds on α_{k,K} for specific subsets K, which enables our branch-and-bound method to calculate α_k or more precise bounds on α_k. In contrast, the LP relaxation method [8] and the SDP method [3] do not provide upper bounds on α_{k,K} for specific subsets K, which prevents them from being used within the branch-and-bound method.
Since we are also interested in k_max, we introduce the following Lemma 3 to bound the maximum recoverable sparsity k_max.

Lemma 3
The maximum recoverable sparsity k_max satisfies

k_max ≥ ⌈l/(2 α_l)⌉ − 1, (12)

where ⌈·⌉ is the ceiling function.
Proof To prove this lemma, we will show that when k = ⌈l/(2 α_l)⌉ − 1, we have α_k < 1/2. This can be concluded from the upper bound of α_k obtained by replacing every α_{l,L_i} in (9) with α_l:

α_k ≤ (1 / C(k−1, l−1)) Σ_{i=1}^{C(k,l)} α_{l,L_i} ≤ (C(k, l) / C(k−1, l−1)) α_l = (k/l) α_l. (13)

Note that there are C(k, l) terms in the summation. From (13), if α_l · (k/l) < 1/2, then α_k < 1/2. In other words, if k < l/(2 α_l), then α_k < 1/2. Since k is a positive integer, this holds when k = ⌈l/(2 α_l)⌉ − 1. Therefore, the maximum recoverable sparsity k_max is larger than or equal to ⌈l/(2 α_l)⌉ − 1.

It is noteworthy that in ([8], Section 4.2.B), the authors introduced a lower bound on k based on α_1, i.e., k(α_1). In Lemma 3, we provide a more general result. Furthermore, in Lemma 3, instead of using α_l, we can use an upper bound of α_l to lower bound the recoverable sparsity k; namely, k(UB(α_l)) = ⌈l/(2 UB(α_l))⌉ − 1 ≤ k_max, where UB(α_l) represents an upper bound of α_l. Since the proof follows the same track as the proof of Lemma 3, we omit it.
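The bound in Lemma 3 is a one-line computation; a sketch with hypothetical α_l values:

```python
from math import ceil

def recoverable_sparsity_lower_bound(alpha_l, l):
    """Lemma 3: k_max >= ceil(l / (2 * alpha_l)) - 1. The same formula
    remains valid with alpha_l replaced by any upper bound UB(alpha_l)."""
    return ceil(l / (2.0 * alpha_l)) - 1

print(recoverable_sparsity_lower_bound(0.1, 1))   # 4: alpha_1 = 0.1 certifies k_max >= 4
print(recoverable_sparsity_lower_bound(0.3, 2))   # 3: alpha_2 = 0.3 certifies k_max >= 3
```

Since the formula is monotone in α_l, feeding in a looser upper bound UB(α_l) can only shrink, never invalidate, the certified sparsity.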
Finally, we introduce the following proposition to theoretically compare our algorithm with the LP method [8].

Proposition 1 For any integer k ≥ 1, let α_k^{pick-1} be the upper bound on α_k provided by the pick-1-element algorithm according to Lemma 2, and let α_k^{LP} be the upper bound on α_k provided by the LP method [8] according to the following definition (namely, Eq. (4.25) in [8] with β = ∞):

α_k^{LP} = min_{Y ∈ R^{m×n}} max_{1≤j≤n} ‖(I − Y^T A)e_j‖_{k,1}, (14)

where e_j is the standard basis vector with the j-th element equal to 1, and ‖·‖_{k,1} stands for the sum of the k maximal magnitudes of the components of a vector. Then we have

α_k^{LP} ≤ α_k^{pick-1}.

For readability, we place the proof of Proposition 1 in Appendix A.
The LP method can provide tighter upper bounds on α_k than the pick-1-element algorithm; however, this comes at the cost of solving a large optimization problem of design dimension mn. When m and n are large, the complexity of computing α_k^{LP} can be prohibitive (please see Table 2).

Optimized pick-l algorithm
We can tighten the upper bound of α_k in the pick-l algorithm by replacing the constant factor 1/C(k−1, l−1) in (9) with optimized coefficients at the cost of additional complexity; we call the result the optimized pick-l algorithm. This optimized pick-l algorithm is mostly useful from a theoretical perspective. In practice, it gives improved but similar performance in calculating the upper bound of α_k compared to the basic pick-l algorithm described in Section 2. As a theoretical merit of the optimized pick-l algorithm, we can show that as l increases, the upper bound of α_k becomes smaller or stays the same. The optimized pick-l algorithm provides an upper bound of α_k via the following optimization problem:

maximize_γ Σ_{i=1}^{C(n,l)} γ_i α_{l,L_i}
subject to γ_i ≥ 0 for all i, Σ_{i=1}^{C(n,l)} γ_i ≤ k/l,
Σ_{i: B ⊆ L_i} γ_i ≤ C(k−b, l−b)/C(k−1, l−1) for every index set B with |B| = b, 1 ≤ b ≤ l. (15)

In the following lemmata, we show that the optimized pick-l algorithm produces an upper bound of α_k and that this bound is tighter than that of the basic pick-l algorithm introduced in (11). The last lemma establishes that as l increases, the upper bound of α_k decreases or stays the same.

Lemma 4 The optimized pick-l algorithm provides an upper bound of α k .
Proof The strategy to prove Lemma 4 is to show that a particular feasible solution of (15) attains an objective value that is an upper bound of α_k.

Let K* be the index set achieving α_k, and set γ_i = 1/C(k−1, l−1) when L_i ⊆ K*, and γ_i = 0 otherwise; one can easily check that this choice satisfies the first and second constraints of (15). For the third constraint, let us check the case b = l first. For b = l, we can choose an arbitrary index set B such that |B| = b = l. For the chosen B, there is only one L_i such that B ⊆ L_i, which is B itself, i.e., B = L_i; the same holds for every other choice of B. Hence, for b = l, the third constraint reads γ_i ≤ 1/C(k−1, l−1), which our choice satisfies. (16) For b = 1, note that there are C(n−1, l−1) sets L_i which have a given index set B as a subset. Among the corresponding C(n−1, l−1) coefficients γ_i, only those whose L_i are subsets of K* are nonzero; there are C(k−1, l−1) of them, each equal to 1/C(k−1, l−1), so their sum equals 1. (17) Basically, the third constraint ensures that, for each index, the sum of the coefficients involving that index is limited to 1. In the same way, for 1 < b < l, the chosen γ is a feasible solution of (15). This feasible solution attains the objective value (1/C(k−1, l−1)) Σ_{i: L_i ⊆ K*, |L_i|=l} α_{l,L_i}, which is an upper bound of α_k by Lemma 1; since (15) is a maximization, its optimal value is at least this quantity and is therefore also an upper bound of α_k.

Lemma 5
The optimized pick-l algorithm provides a tighter, or at least the same, upper bound of α k than the basic pick-l algorithm introduced in (11).
Proof We will show that the optimization problem (11) is a relaxation of (15). As in the proof of Lemma 4, for b = l, the third constraint of (15) reduces to (16), which is involved in the first constraint of (11). Since the third constraint of (15) additionally covers the values 1 ≤ b < l, (15) has more constraints than (11). Therefore, the optimized pick-l algorithm, which is (15), provides a tighter, or at least the same, upper bound than the basic pick-l algorithm.

Lemma 6
The optimized pick-l algorithm provides a tighter, or at least the same, upper bound than the optimized pick-p algorithm when l > p.

Proof We can upper bound the objective function of (15) by applying (8) to each α_{l,L_i}, i.e., α_{l,L_i} ≤ (1/C(l−1, p−1)) Σ_{j: P_j ⊆ L_i} α_{p,P_j}; substituting this bound into (15) yields a new problem (18). Note that in the objective function of (18), each α_{p,P_j}, 1 ≤ j ≤ C(n, p), appears C(n−p, l−p) times. Let us define

γ'_j = (1/C(l−1, p−1)) Σ_{i: P_j ⊆ L_i} γ_i.

We can relax (18) to the following problem, which turns out to be the same as the optimized pick-p algorithm with coefficients γ'_j: (19). The relaxation is shown by checking the constraints. The first constraint of (19), γ'_j ≥ 0, is trivial to obtain. For the second constraint, we can obtain the second constraint of (19) from the following relations:

Σ_j γ'_j = (1/C(l−1, p−1)) Σ_j Σ_{i: P_j ⊆ L_i} γ_i = (C(l, p)/C(l−1, p−1)) Σ_i γ_i = (l/p) Σ_i γ_i ≤ k/p,

where the second equality is obtained from the fact that γ_i, which is the coefficient of α_{l,L_i}, appears C(l, p) times in the double summation (each L_i contains C(l, p) subsets P_j). The final inequality is from the second constraint of (15). The third constraint in (19) can be deduced from the following inequality:

Σ_{j: B ⊆ P_j} γ'_j = (C(l−b, p−b)/C(l−1, p−1)) Σ_{i: B ⊆ L_i} γ_i ≤ (C(l−b, p−b)/C(l−1, p−1)) · (C(k−b, l−b)/C(k−1, l−1)) = C(k−b, p−b)/C(k−1, p−1),

where the second equality is from the fact that, for a fixed L_i containing B, there are C(l−b, p−b) sets P_j with B ⊆ P_j ⊆ L_i. Since (19) is obtained from the relaxation of (18), the optimal value of (19) is larger than or equal to the optimal value of (18), and (19) is just the optimized pick-p algorithm. Thus, when l > p, the optimized pick-l algorithm provides a tighter, or at least the same, upper bound than the optimized pick-p algorithm.
By using a larger l in the pick-l algorithm, we can obtain a tighter upper bound of α_k. However, for a given l, we need to enumerate C(n, l) possibilities, which becomes infeasible when l is large. Moreover, when l < k, the pick-l algorithm only gives an upper bound of α_k, instead of the exact value of α_k. There is, however, a need to find tighter bounds on α_k, or even the exact value of α_k, when k is too large for the C(n, k) enumerations of exhaustive search [14][15][16]. To this end, we propose a new branch-and-bound tree search algorithm to find tighter bounds on α_k than Lemma 2 provides, or even to find the exact α_k. Our branch-and-bound tree search algorithm is enabled by the pick-l algorithms introduced in Sections 2 and 3.

Tree search algorithm
To find the index set K* which leads to the maximum α_{k,K} (among all possible index sets K), the tree search algorithm (TSA) performs a best-first branch-and-bound search [23] over a tree structure representing different subsets of {1, 2, . . . , n}. In essence, for each subset J with cardinality no bigger than k, TSA calculates an upper bound of α_{k,K} that is valid for any set K (with cardinality k) such that J ⊆ K. If this upper bound is smaller than a lower bound of α_k, TSA will not further explore any of J's supersets, leading to reduced average-case computational complexity. For simplicity, we will describe the TSA based on the pick-1-element algorithm, simply called 1-Step TSA. However, we remark that we can also extend the TSA to be based on the pick-l-element (l ≥ 2) algorithm, by calculating upper bounds of α_{k,K} from the results of the pick-l-element algorithm.

Tree structure
A tree node J represents an index subset of {1, . . . , n} such that |J| ≤ k. We have the following rule: [R1] A parent node is a subset of each of its child node(s).
A node that has no child is referred to as a leaf node. We call the cardinality of the index set corresponding to J the height of J. The tree structure follows the "legitimate order," which ensures that any new index in a child node is bigger than the indices of its parent node.
[R2] "Legitimate order" -Let P and C denote the parent node and the child node. Then, any index in P must be smaller than any index in C \ P. Figure 1 illustrates this rule in a tree with k = 2 and n = 3.
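Rules [R1] and [R2] determine each node's children uniquely: a child appends exactly one index larger than max(J). A sketch of the child enumeration (nodes as tuples of 1-based indices, matching the text):

```python
def children(J, n, k):
    """All child nodes of J under [R1] and [R2]: append one index larger
    than max(J); nodes never exceed cardinality k."""
    if len(J) >= k:
        return []
    start = max(J) + 1 if J else 1
    return [J + (i,) for i in range(start, n + 1)]

# The tree of Fig. 1 (k = 2, n = 3), rooted at the empty set:
print(children((), 3, 2))      # [(1,), (2,), (3,)]
print(children((1,), 3, 2))    # [(1, 2), (1, 3)]
print(children((1, 2), 3, 2))  # []
```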

Basic idea of a branch-and-bound approach for calculating α k
We use a branch-and-bound approach over the tree structure to calculate α_k. This method maintains a lower bound on α_k (how this lower bound is maintained will be explained in Section 4.3). When the algorithm explores a tree node J, it calculates an upper bound B(J), which is no smaller than α_{k,K} for any child node K (with cardinality k) of node J. If B(J) is smaller than the lower bound on α_k, then the algorithm will not explore the child nodes of the tree node J.
In our algorithm, we calculate B(J) as

B(J) = α_{j,J} + Σ_{s=1}^{t} α_{1,{max(J)+s}}, (20)

where j = |J|, j + t = k, max(J) represents the largest index in J, and α_{1,{1}} ≥ α_{1,{2}} ≥ · · · ≥ α_{1,{n}}. We obtain this descending order by permuting the columns of the sensing matrix A in descending order of the α_{1,{i}} as the pre-computation step of TSA. To see that B(J) is an upper bound of α_{k,K} for every node K with J ⊆ K and |K| = k, note that α_{k,K} ≤ α_{j,J} + Σ_{i ∈ K\J} α_{1,{i}}, and that by the legitimate order every index in K \ J is larger than max(J), so this sum over K \ J is at most Σ_{s=1}^{t} α_{1,{max(J)+s}}.

Best-first tree search strategy
TSA adopts a best-first tree search strategy for the branchand-bound approach. We first describe a basic version of the best-first tree search strategy and then introduce two enhancements to this strategy in the next subsection.
In its basic version, TSA starts with a tree having only the root node and sets the global lower bound of α_k to 0. In each iteration, TSA selects a leaf tree node J with the largest B(J) and expands the tree by adding the child nodes of J to the tree. For each of these newly added child nodes, say Q, TSA then calculates the upper bound B(Q) in (20). Note that if a newly added child node Q has k elements, TSA will calculate α_{k,Q}, which is a lower bound on α_k. For this k-element Q, if the newly calculated α_{k,Q} is bigger than the global lower bound of α_k, TSA will set the global lower bound equal to α_{k,Q}. TSA terminates when a leaf tree node J has the largest B(J) among all the leaf nodes and this B(J) is no bigger than the global lower bound on α_k.
From the standard theory of the branch-and-bound approach, this TSA will output the exact α_k. Also, in this process, the global lower bound will keep increasing until it is equal to an upper bound of α_k (the largest B(J) among leaf nodes).
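The basic best-first search above can be sketched as follows. The `alpha` oracle returns α_{|J|,J}; for a self-contained example we use a matrix with a one-dimensional null space spanned by a hypothetical vector v, for which α_{j,J} = ‖v_J‖₁/‖v‖₁ in closed form. Columns are assumed pre-permuted so that the α_{1,{i}} are in descending order:

```python
import heapq

def tsa_basic(alpha, alpha1, n, k):
    """Best-first branch-and-bound over subsets of {1,...,n} in legitimate
    order; alpha1[i-1] = alpha_{1,{i}} (descending). Returns the exact alpha_k."""
    def bound(J):
        # B(J): alpha_{|J|,J} plus the t = k - |J| largest single-index
        # values still available, i.e., the indices just above max(J).
        t = k - len(J)
        return alpha(J) + sum(alpha1[max(J): max(J) + t])
    lower = 0.0                                  # global lower bound on alpha_k
    heap = [(-bound((i,)), (i,)) for i in range(1, n + 1)]
    heapq.heapify(heap)
    while heap:
        neg_b, J = heapq.heappop(heap)
        if -neg_b <= lower:                      # no unexplored node can win
            break
        if len(J) == k:
            lower = max(lower, alpha(J))         # leaf: update the incumbent
            continue
        for i in range(max(J) + 1, n + 1):       # expand J's children
            Q = J + (i,)
            heapq.heappush(heap, (-bound(Q), Q))
    return lower

v = [3., 2., 1., 1.]                             # hypothetical null space vector
alpha1 = [x / sum(v) for x in v]                 # already descending
alpha = lambda J: sum(v[i - 1] for i in J) / sum(v)
print(tsa_basic(alpha, alpha1, 4, 2))            # 5/7, the exact alpha_2
```

Here the node ({1}) is expanded first (largest bound), its leaf ({1, 2}) sets the incumbent to 5/7, and every remaining bound falls below it, so the search stops after a handful of nodes instead of all C(4, 2) leaves.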

Two enhancements
We incorporate two novel features into TSA in order to reduce the computational complexity. Firstly, when TSA attaches a new node Q to a node J in the tree structure, TSA computes B(Q) as

B(Q) = α_{j,J} + α_{1,{max(Q)}} + Σ_{s=1}^{t} α_{1,{max(Q)+s}}, (21)

where j + t + 1 = k, max(Q) represents the largest index in Q, and α_{j,J} + α_{1,{max(Q)}} ≥ α_{j+1,Q}. Thus, without calculating α_{j+1,Q} (which involves higher computational complexity), we can still use B(Q) as an upper bound of α_{k,K} for any child node K (with cardinality k) of the node Q. Secondly, when TSA adds a new node Q as a child of node J in the tree structure (assuming α_{j,J} has already been calculated), TSA does not need to add all of J's child nodes to the tree at the same time. Instead, TSA only adds the node J's unattached child node Q with the largest B(Q) as defined in (21); namely, the index Q \ J is no bigger than the index Q′ \ J, where Q′ is any unattached child of the node J. We note that B(Q) is an upper bound on B(Q′) (according to (21)) for any other unattached child node Q′ of the node J. Thus, for any child node K (of cardinality k) of node J's unattached child nodes, B(Q) is still an upper bound of α_{k,K}.
Algorithm 2 shows detailed steps of TSA, based on the pick-1-element algorithm (namely, l = 1, 1-Step TSA). In the description, we define "expanding the tree from a node J" as follows:

Advantage of the tree search algorithm
Due to the nature of the branch-and-bound approach, we can obtain a global upper bound and a global lower bound of α_k while TSA runs. As the number of iterations increases, TSA obtains tighter and tighter upper bounds on α_k, namely the largest B(·) among the leaf nodes. By using the global upper bound of α_k, we can obtain a lower bound on the recoverable sparsity k via Lemma 3. Thus, even if the complexity of TSA is too high to finish in a timely manner, we can still obtain a lower bound on the recoverable sparsity k by terminating TSA early. We note that the methods based on LP [8] and SDP [3] also provide upper bounds on α_k. However, they are unable to determine upper bounds of α_{k,K} for a specific index set K. This prevents the use of the LP and SDP methods in our branch-and-bound method for computing α_k.

Numerical experiments
We conducted extensive simulations to compute α_k and its upper/lower bounds using the pick-l algorithms and TSA. In this section, we refer to the pick-l algorithms introduced in Sections 2 and 3 simply as the (basic) pick-l and the optimized pick-l algorithms respectively.
For the same matrices, we compared our methods with the LP relaxation approach [8] and the SDP method [3]. We assessed the computational complexity in terms of the execution time of the algorithms. In addition, we carried out numerical experiments to demonstrate the computational complexity of TSA empirically.
For the LP method in [8] and the SDP method in [3], we used the Matlab codes provided by the authors. Consistent with previous research, we used CVX [17], a package for specifying and solving convex programs, for the SDP method, and MOSEK [18], a commercial LP solver, for the LP method. In our own algorithms, we used MOSEK to solve (7). Also, to be consistent with the previous research, matrices were generated from the Matlab code provided by the authors of [3] at http://www.di.ens.fr/~aspremon/NSPcode.html. For valid bounds, we rounded down lower bounds on α_k and exact α_k values, and rounded up upper bounds on α_k, to the nearest hundredth.

Performance comparison
Firstly, we considered Gaussian matrices and partial Fourier matrices sized from n = 40 to n = 6144. We chose n = 40 so that our results can be compared with the simulation results in [3].

Low-dimensional sensing matrices
Sensing matrices with n = 40. We considered sensing matrices of row dimension m = 0.5n, 0.6n, 0.7n, 0.8n, where n = 40. For every matrix size, we randomly generated 10 different realizations of Gaussian and partial Fourier matrices. So, in total, we used 80 different n = 40 sensing matrices for the numerical experiments in Tables 7 and 8. We normalized all of the matrix columns so that they have unit ℓ2-norm. The entries of the Gaussian matrices were i.i.d. standard Gaussian N(0, 1). The partial Fourier matrices had m rows randomly drawn from the full Fourier matrices. We compared our algorithms (pick-1-element, pick-2-element, pick-3-element, and TSA) to the LP and SDP methods. For readability, we place the numerical results for these small sensing matrices in Appendix B.
For each matrix size and type, we increased k from 1 to 5 in unit steps. Tables 7(a) and 8(a) show the median values of α_k. (To be consistent with the previous research [3], in which the authors used the median value of α_k to compare the SDP method with the LP method, we provide the median values obtained from 10 random realizations of the sensing matrix.) From the median value of α_k, we obtained the recoverable sparsity k_max such that α_{k_max} < 1/2 and α_{k_max+1} > 1/2. In addition, we calculated the arithmetic mean of the k_max values: we obtained k_max from each random realization and computed the arithmetic mean of the ten k_max values. Compared with the LP and SDP methods, we obtained a bigger, or at least the same, recoverable sparsity k_max by using pick-2, pick-3, and TSA. It is noteworthy that we obtained the exact α_k for k = 1, 2, . . . , 5 by using TSA, while the LP and SDP methods only provided the exact α_k for k = 1. We observed several cases in which α_k < 1/2 but the upper bound of α_k exceeded 1/2, e.g., α_5 for 32 × 40 Gaussian matrices, α_4 for 28 × 40 Gaussian matrices, α_3 for 24 × 40 Gaussian matrices, α_3 for 20 × 40 partial Fourier matrices, and α_4 for 24 × 40 partial Fourier matrices. This can also be seen from the arithmetic mean of k_max in Tables 7(a) and 8(a).
To compare the computational complexity, we calculated the geometric mean of the algorithms' execution times, to avoid biases in the average. Tables 7(b) and 8(b) list the average execution times. We also ran the exhaustive search method (ESM) to find α_k and compared its execution time with that of TSA. In calculating α_5, on average, 3-Step TSA reduced the computation time by around 86 times for 20 × 40 Gaussian matrices and by 94 times for 20 × 40 partial Fourier matrices, compared to ESM. For 32 × 40 Gaussian and partial Fourier matrices, the speedup of the best l-Step TSA, l = 1, 2, 3, over ESM becomes around 1760 times and 182 times respectively. We observed that when m/n = 0.5, e.g., for 20 × 40 sensing matrices, the 3-Step TSA generally provides the fastest result for k = 5. On the other hand, for m/n = 0.8 (e.g., the 32 × 40 case), the 2-Step TSA is the quickest in finding the exact α_5; however, for k > 5, the fastest l-Step TSA cannot be determined from either experiments or theory.
Sensing matrices with n = 256. We assessed the performance of the pick-l algorithm for sensing matrices with n = 256. We carried out numerical experiments on 128 × 256 Gaussian matrices (Fig. 2a) and 64 × 256 partial Fourier matrices (Fig. 2b). Here, over 10 sensing matrices, we obtained the median value of the upper bounds of α_k using the pick-l algorithm and compared the result with the LP relaxation method [8]. We omitted the SDP method [3] from this experiment due to its very high computational complexity. For the pick-3 algorithm in Fig. 2a, we calculated an upper bound of α_3 via TSA and used this result to calculate upper bounds of α_k, k = 3, 4, ..., 8, via (13). Figure 2a, b demonstrate that, with an appropriate choice of l, the upper bound of α_k obtained via the pick-l algorithm can be tighter than that from the LP relaxation method. For example, for 128 × 256 Gaussian matrices, LP relaxation often determines the maximum recoverable sparsity as 5, while the pick-2 algorithm improves it to 6. With the pick-3 algorithm, the maximum recoverable sparsity is 7 (α_7 = 0.49). For 64 × 256 partial Fourier matrices, the maximum recoverable sparsities from LP relaxation and the pick-2 algorithm are 3 and 4 respectively.
Sensing matrices with n = 512. We further conducted numerical experiments on Gaussian sensing matrices with n = 512. The simulation results in Table 1 clearly demonstrate that the pick-2 algorithm provides a larger lower bound on the recoverable sparsity k than the LP method [8]. In particular, for a 410 × 512 Gaussian sensing matrix, the lower bound on k obtained from the pick-2 algorithm is almost twice that of the LP method.

High-dimensional sensing matrices
Sensing matrices with n ≥ 1024. We conducted numerical experiments for Gaussian sensing matrices with n from 1024 to 6144. We report these experiments in Tables 2 and 3, where we calculated the lower bound on the recoverable sparsity k and recorded the corresponding execution time. The SDP method [3] was not applicable in these experiments due to its very high computational complexity. In Table 2, we ran TSA for 1 day (24 h) and obtained an upper bound of α_2, denoted by UB(α_2). With this upper bound, we obtained a lower bound on k, denoted by k(UB(α_2)), via Lemma 3. Our numerical results in Tables 2 and 3 clearly show that our pick-l algorithm outperforms the LP method in recoverable sparsity k or execution time. We note that although our pick-1-element algorithm provides the same recoverable sparsity k as the LP method [8] in Tables 2 and 3, it does so with much lower computational complexity. For extremely large sensing matrices, e.g., 4014 × 4096 and 6021 × 6144, the LP and SDP methods cannot provide any lower bound on k due to unreasonable computational time. However, our pick-l algorithm can still provide a lower bound on k efficiently. Table 3 shows the lower bound on k and the execution time for these large-dimensional matrices, where our verified recoverable sparsity k can be as large as 558 for a 6134 × 6144 sensing matrix. We obtained the estimated time for the LP method by running the Matlab code from http://www2.isye.gatech.edu/~nemirovs/, which displays the percentage of the computation completed on screen.

Comparison between the optimized pick-l algorithm and the basic pick-l algorithm
We compared the basic pick-l algorithm introduced in Section 2 to the optimized pick-l algorithm in Section 3 on 28 × 40 and 40 × 50 Gaussian sensing matrices for l = 3 and k = 4, 5, ..., 8. Table 4 demonstrates that for l = 3 and k = 4, 5, ..., 8, the optimized pick-l algorithm provided tighter upper bounds on α_k than the basic pick-l algorithm. This is because when l is large and k > l, (15) includes more constraints than when k and l are small, which shrinks the feasible set. Hence, the optimal value of (15), the result of the optimized pick-l algorithm, can be smaller than or equal to that of (11), the basic pick-l algorithm. Additionally, we provide the exact α_k values obtained from TSA in order to check how tight the bounds from the basic and optimized pick-l algorithms are. In terms of execution time, the optimized pick-l algorithm, which computes (15), was around 1.7 and 4.4 times slower than the basic pick-l algorithm on 28 × 40 and 40 × 50 Gaussian matrices respectively. In summary, the optimized pick-l algorithm provides an upper bound on α_k that is at least as tight as that of the basic pick-l algorithm, at the cost of additional complexity. Despite the increased complexity, the optimized pick-l algorithm has an important theoretical merit, namely Lemma 6.

Complexity of tree search algorithm
In this subsection, we carried out numerical experiments to demonstrate the computational complexity of TSA empirically on randomly chosen Gaussian sensing matrices. Figure 3a, b shows the distribution of execution time and the distribution of the number of height-5 nodes attached to the tree structure in TSA respectively. For m = 0.5n, we generated 100 random realizations of Gaussian matrices and computed α_5 using 3-Step TSA. The maximum number of leaf nodes whose cardinality is k is C(n, k) = C(40, 5) = 658,008. From Fig. 3b, we note that in 90% of the cases, 3-Step TSA terminated before 1.6% of all the possible height-5 nodes were attached to the tree structure.
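The search-space size quoted above, and the 1.6% of it that bounds the nodes attached in 90% of trials, can be checked directly:

```python
import math

# All possible height-5 (cardinality-5) leaf nodes for a 40-column matrix:
total = math.comb(40, 5)
print(total)               # 658008
# In 90% of trials, 3-Step TSA attached fewer than 1.6% of these nodes:
print(int(0.016 * total))  # 10528
```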
We provided the execution time of TSA for different-sized randomly chosen Gaussian matrices in Fig. 4, where we compared the execution time of TSA to that of ESM. Figure 4a shows that for k = 1, 1-Step TSA performs almost the same as ESM. This is because 1-Step TSA calculates all the α_{1,{i}}'s as a pre-computation, which is the same procedure as ESM. However, for k > l, as shown in Fig. 4b-d, TSA can find α_k with reduced computation by using all the α_{l,L}'s, whereas ESM must compute all the α_{k,K}'s. In computing α_k for k = 3, 4, we achieved a speedup of around 100 times via 2-Step TSA compared to ESM.
In addition, in Fig. 5, we compared the execution time of TSA to that of ESM by varying k with n fixed on random Gaussian matrices. To obtain the best execution time of TSA, we used different l values. For n = 40 and n = 50, 3-Step TSA reduced the execution time to find α_5 by around 100 times and 300 times respectively, compared with ESM.
Finally, Fig. 6 illustrates the values of the global lower and upper bounds for 80 × 100 and 160 × 200 Gaussian sensing matrices as the number of iterations in TSA increases. As we can see, the global upper and lower bounds converge quickly. This implies that we can sometimes terminate TSA early and still obtain tight bounds on α_k.

Application to network tomography problem
We apply the new tools introduced in this paper to verify NSC for sensing matrices in network tomography problems [14-16, 19-21]. In an undirected graph model of a communication network, the communication delay over each link can be determined by sending packets through probing paths composed of connected links. The delay of each path is measured as the sum of the delays over its links. Generally, most links are uncongested, and only a few congested links have significant delays. It is therefore natural to cast finding the link delays as a sparse recovery problem. This problem can be expressed as a system of linear equations y = Ax, where the vector y ∈ R^m contains the delays of the m paths, the vector x ∈ R^n is the delay vector for the n links, and A is a sensing matrix. The element A_ij of A is 1 if and only if path y_i, i ∈ {1, 2, ..., m}, goes through link j, j ∈ {1, 2, ..., n}; otherwise, A_ij equals 0 (see Fig. 7). The indices of the nonzero elements of x correspond to the congested links.
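The construction of A described above can be sketched as follows; `routing_matrix` is our own hypothetical helper, not the paper's code:

```python
# Build the 0/1 routing matrix A: row i corresponds to probing path i,
# and entry A[i][j] = 1 exactly when path i traverses link j.
def routing_matrix(paths, n_links):
    A = [[0] * n_links for _ in range(len(paths))]
    for i, path in enumerate(paths):
        for j in path:
            A[i][j] = 1
    return A

# Toy network with 4 links and 3 probing paths (sets of link indices):
A = routing_matrix([{0, 1}, {1, 2}, {2, 3}], 4)
print(A)  # [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]]
```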
In our numerical experiments to verify NSC in network tomography problems, the paths for sending data packets were generated by random walks of fixed length. Table 5 summarizes the results. By using TSA, one can exactly verify that a total of k = 2 and k = 4 congested link delays can be uniquely recovered by solving the ℓ1 minimization problem (2) for the randomly generated network measurement matrices of size 33 × 66 (12-node complete graph) and 53 × 105 (15-node complete graph) respectively. For ESM, we estimated the execution time by multiplying the unit time to solve (7) by the total number of cases in the exhaustive search; we obtained the unit time to solve (7) as the arithmetic mean over 100 trials. For a 53 × 105 matrix, 3-Step TSA reduced the execution time to find α_5 by around 137 times compared to ESM.
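The ESM runtime estimate described above is a simple product; the sketch below uses an illustrative per-subproblem time (the value 0.02 s is our assumption, not a measurement from the paper):

```python
import math

# Estimated ESM runtime = (mean time to solve one instance of (7))
#                         x (number of size-k supports to enumerate).
unit_time = 0.02             # seconds per subproblem (illustrative value)
n, k = 105, 5                # the 53 x 105 network matrix, k = 5
num_cases = math.comb(n, k)
estimated_seconds = unit_time * num_cases
print(num_cases)             # number of supports enumerated by ESM
print(round(estimated_seconds / 3600, 1))  # estimate in hours
```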
We further carried out numerical experiments on an even larger network model with 300 nodes and 400 edges. We created a random spanning tree for the network model using a random-walk approach [24]. For each probing path, we randomly chose a starting node among the 300 nodes and walked 100 steps along the network connections. We obtained a 320 × 400 matrix corresponding to this network model and calculated the α_k values via l-Step TSA, l = 1, 2. In Table 6, we compare the execution time of TSA with that of ESM, where the unit time to solve (7) was again obtained as the arithmetic mean over 100 trials. In particular, 1-Step TSA reduced the execution time to find α_4 by around 28,700 times compared to ESM.
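The fixed-length random-walk probing described above can be sketched as follows (a sketch under our own assumptions, not the paper's code): from a random start node, take a fixed number of steps along edges, and record the set of undirected links traversed as one probing path.

```python
import random

def random_walk_path(adjacency, n_steps, rng):
    """Walk n_steps from a random start node; return the set of
    undirected links traversed, which defines one row of A."""
    node = rng.choice(sorted(adjacency))           # random starting node
    links = set()
    for _ in range(n_steps):
        nxt = rng.choice(sorted(adjacency[node]))  # random neighbor
        links.add(frozenset((node, nxt)))          # record undirected link
        node = nxt
    return links

# Toy 3-node complete graph; a 100-step walk traverses at most 3 links.
adjacency = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path = random_walk_path(adjacency, 100, random.Random(0))
print(1 <= len(path) <= 3)  # True
```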

Discussion
In this section, we discuss the strengths and weaknesses of our proposed algorithms, compared with earlier research [3,8].

1. Comparisons with LP and SDP. Our proposed pick-1-element algorithm can achieve performance similar to the LP [8] and SDP [3] methods. However, it has the clear advantage of being more computationally efficient for large-dimensional sensing matrices. See Table 3, where the LP and SDP methods cannot provide performance bounds on the recoverable sparsity k due to high computational complexity, while our pick-1-element algorithm provides such bounds efficiently. The LP method is expensive because it must solve a large convex program of design dimension mn, which leads to prohibitive computational complexity when m and n are large [8].
In our pick-1-element algorithm, we proposed the novel idea of sorting the α_{1,{i}}'s (see Lemma 2), which leads to improved performance bounds on α_k and the recoverable sparsity k. This sorting idea, combined with Lemma 2, yields a larger recoverable sparsity bound than purely using α_1 to bound the recoverable k as in ([8], Section 4.2.B).
2. Set-specific upper bounds. Our proposed pick-l-element algorithm (l ≥ 2) is novel and can provide improved bounds on α_k and the recoverable sparsity k, with computational complexity polynomial in n when l is fixed. This approach is not practical when l is large. However, the pick-2-element and pick-3-element algorithms already provide improved performance bounds compared with the previous research [3, 8].
The fact that we can obtain upper bounds on α_k based on the results of the pick-l-element (l ≥ 2) algorithm is new and non-trivial (see Lemma 2). Our pick-l-element algorithm can provide a set-specific upper bound for α_{k,K}, laying the foundation for our branch-and-bound TSA.
3. Computational complexity of TSA. We proposed TSA to find precise values of α_k with significantly reduced average-case computational complexity compared to ESM. The computational complexity of TSA depends on n, the sparsity k, and the chosen constant l. When k, n, and l are large enough, finding α_k via TSA is still computationally expensive, and in the worst case, TSA has the same computational complexity as ESM. However, our extensive simulations, ranging from Figs. 3 to 5 and from Tables 5 to 8, show that on average TSA greatly reduces the computational cost of finding α_k compared with ESM. Moreover, since TSA maintains an upper bound and a lower bound on α_k during its iterations, one can always terminate TSA early and still obtain better performance bounds on α_k than the LP and SDP methods. We can also use TSA to find the exact value of α_l, l < k, and then use Lemma 3 to bound α_k.
4. Use of data structures. We used object-oriented programming (OOP) to implement TSA in Matlab [25], because OOP makes it easy to handle tree-type structures. We defined a class and created objects from it to store the properties of each node J in the tree, e.g., B(J). To connect two tree nodes, we used a doubly linked list as part of the object. In case readers would like to implement the algorithm using alternative data structures, we have provided implementation-agnostic pseudocode of our algorithm in Algorithm 2.
5. Comparisons with phase transition results [1]. Our research differs from the research on phase transition in two aspects.
Firstly, our work and the previous works [3, 8] focus on worst-case performance guarantees (recovering all possible k-sparse signals), while the research on phase transition considers average-case performance guarantees for a single k-sparse signal with fixed support and sign pattern. Secondly, the phase transition bounds are mostly for random matrices; hence, for a given deterministic sensing matrix, phase transition results cannot be applied to that particular matrix.
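The bound-maintaining, early-terminating behavior of TSA discussed above can be sketched as a generic best-first branch-and-bound for a maximization problem. This is our own illustrative skeleton, not the paper's Algorithm 2: `upper_bound` plays the role of the set-specific bounds, and the search stops as soon as no open node can beat the best exact value found so far. It is demonstrated on a toy problem, picking k = 2 of four values with the largest sum.

```python
import heapq

def branch_and_bound(root, upper_bound, expand, is_leaf, value, tol=0.0):
    best = float('-inf')                 # global lower bound (exact values seen)
    heap = [(-upper_bound(root), root)]  # explore the largest bound first
    while heap:
        neg_ub, node = heapq.heappop(heap)
        if -neg_ub <= best + tol:        # bound gap closed: early stop
            break
        if is_leaf(node):
            best = max(best, value(node))  # tighten the global lower bound
        else:
            for child in expand(node):
                heapq.heappush(heap, (-upper_bound(child), child))
    return best

# Toy maximization: pick k = 2 of these values with the largest sum.
values = [5, 4, 3, 1]
k = 2

def upper_bound(node):                   # sum so far + best possible completion
    start, chosen = node
    need = k - len(chosen)
    return sum(chosen) + sum(sorted(values[start:], reverse=True)[:need])

def is_leaf(node):
    return len(node[1]) == k

def value(node):
    return sum(node[1])

def expand(node):                        # children extend the chosen set
    start, chosen = node
    return [(i + 1, chosen + (values[i],)) for i in range(start, len(values))]

print(branch_and_bound((0, ()), upper_bound, expand, is_leaf, value))  # 9
```

Because the heap always pops the node with the largest remaining upper bound, the popped bound is a valid global upper bound at every iteration; once it meets the best leaf value found, the optimum is certified without enumerating the remaining supports, which mirrors the pruning that lets TSA beat ESM on average.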

Conclusion
In this paper, we considered the problem of verifying the null space condition in compressed sensing. Calculating the proportion parameter α_k that characterizes the null space condition of a sensing matrix is a non-convex optimization problem, known to be NP-hard [7]. To verify the null space condition, we proposed novel and simple enumeration-based algorithms, called the basic and optimized pick-l algorithms, to obtain upper bounds on α_k. Building on these algorithms, we further designed a new algorithm, the tree search algorithm, to obtain a global solution to the non-convex optimization problem of verifying the null space condition. Numerical experiments show that our algorithms outperform the previously proposed algorithms [3, 8] in both accuracy and speed.
1. We conducted our experiments on an HP Z220 CMT with an Intel Core i7-3770 dual-core CPU @ 3.4 GHz and 16 GB DDR3 RAM, using Matlab (R2013b) on Windows 7.
2. LP method from http://www2.isye.gatech.edu/~nemirovs/ and SDP method from http://www.di.ens.fr/~aspremon/NSPcode.html.