 Research
 Open access
Computable performance guarantees for compressed sensing matrices
EURASIP Journal on Advances in Signal Processing volume 2018, Article number: 16 (2018)
Abstract
The null space condition for ℓ_{1} minimization in compressed sensing is a necessary and sufficient condition on the sensing matrices under which a sparse signal can be uniquely recovered from the observation data via ℓ_{1} minimization. However, verifying the null space condition is known to be computationally challenging. Most of the existing methods can provide only upper and lower bounds on the proportion parameter that characterizes the null space condition. In this paper, we propose new polynomial-time algorithms to establish upper bounds on the proportion parameter. We then leverage these upper-bounding techniques to develop a new procedure, the tree search algorithm, that is able to verify the null space condition precisely and quickly. Numerical experiments show that the execution speed and accuracy of the results obtained from our methods far exceed those of the previous methods based on linear programming (LP) relaxation and semidefinite programming (SDP).
1 Introduction
Compressed sensing is an efficient signal processing technique for recovering a sparse signal from fewer samples than required by the Nyquist-Shannon sampling theorem, reducing the time and energy spent on sampling. These advantages make compressed sensing attractive in various signal processing areas [1].
In compressed sensing, we are interested in recovering the sparsest vector \(x \in \mathbb {R}^{n}\) that satisfies the underdetermined equation y=Ax. Here, \(\mathbb {R}\) is the set of real numbers, \(A \in \mathbb {R}^{m \times n}, \; m < n\), is a sensing matrix, and \(y \in \mathbb {R}^{m}\) is the observation or measurement data. This is posed as an ℓ_{0} minimization problem:

\(\min _{x} \|x\|_{0} \quad \text {subject to} \quad y=Ax, \qquad (1)\)

where ∥x∥_{0} is the number of nonzero elements in the vector x. The ℓ_{0} minimization problem is NP-hard. Therefore, we often relax (1) to its closest convex approximation, the ℓ_{1} minimization problem:

\(\min _{x} \|x\|_{1} \quad \text {subject to} \quad y=Ax. \qquad (2)\)
It has been shown that the optimal solution of ℓ_{0} minimization can be obtained by solving ℓ_{1} minimization under certain conditions (e.g., restricted isometry property or RIP) [2–6]. For random sensing matrices, these conditions hold with high probability. We note that RIP is a sufficient condition for sparse recovery [7].
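As a concrete illustration, the ℓ_{1} problem (2) can be solved as a linear program via the standard split x=u−v with u,v≥0. The following minimal sketch uses SciPy's `linprog`; the function name `l1_min` and the toy dimensions are our own choices for illustration, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Solve min ||x||_1 subject to Ax = y as an LP via the split x = u - v,
    u, v >= 0, so that ||x||_1 = sum(u + v) at the optimum."""
    m, n = A.shape
    c = np.ones(2 * n)                       # minimize sum(u) + sum(v)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Toy example: a 1-sparse signal observed through m = 4 random measurements (n = 8).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))
x_true = np.zeros(8)
x_true[3] = 1.0
x_hat = l1_min(A, A @ x_true)                # x_hat satisfies A x_hat = y
```

Any feasible x (including the true signal) upper bounds the optimal ℓ_{1} norm, so the recovered x_hat always satisfies the measurements with ∥x_hat∥_{1} ≤ ∥x_true∥_{1}.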
A necessary and sufficient condition under which a k-sparse signal x (k≪n) can be uniquely recovered via ℓ_{1} minimization is the null space condition (NSC) [3, 8, 9]. A matrix A satisfies NSC for a positive integer k if

\(\|z_{K}\|_{1} < \|z_{\overline {K}}\|_{1} \qquad (3)\)

holds true for all z∈{z: Az=0, z≠0} and for all subsets K⊆{1,2,…,n} with |K|≤k. Here, K is an index set, |K| is the cardinality of K, z_{ K } is the part of the vector z over the index set K, and \(\overline {K}\) is the complement of K. NSC is related to the proportion parameter α_{ k } defined as

\(\alpha _{k} = \max _{\{K:\, |K| \leq k\}} \; \max _{\{z:\, Az=0,\, z \neq 0\}} \frac {\|z_{K}\|_{1}}{\|z\|_{1}}. \qquad (4)\)
α_{ k } is the optimal value of the following optimization problem:

\(\max _{K,\, z} \; \|z_{K}\|_{1} \quad \text {subject to} \quad Az=0, \;\; \|z\|_{1} \leq 1, \qquad (5)\)
where K is a subset of {1,2,…,n} with cardinality at most k. The matrix A satisfies NSC for a positive integer k if and only if \(\alpha _{k}<\frac {1}{2}\). Equivalently, NSC can be verified by computing or estimating α_{ k }. The role of α_{ k } is also important in the recovery of an approximately sparse signal x via ℓ_{1} minimization where a smaller α_{ k } implies more robustness [8–10].
We are interested in computing α_{ k } and, especially, finding the maximum k for which \(\alpha _{k} < \frac {1}{2}\). However, computing α_{ k } to verify NSC is extremely expensive and was reported in [7] to be NP-hard. Due to these computational challenges, verifying NSC explicitly for deterministic sensing matrices remains a relatively unexamined research area. In [3, 8, 11, 12], convex relaxations were used to establish upper or lower bounds on α_{ k } (or other parameters related to α_{ k }) instead of computing the exact α_{ k }: [3, 11] proposed semidefinite programming-based methods, while [8, 12] suggested linear programming relaxations. For both approaches, computable performance guarantees on sparse signal recovery were reported via bounding α_{ k }. However, these bounds on α_{ k } could only verify NSC with \(k=O(\!\sqrt {n})\), even though, theoretically, k can grow linearly with n.
Our work departs drastically from these prior methods [3, 8, 11, 12], which provide only upper and lower bounds. In our solution, we propose the pick-l-element algorithms (1≤l<k), which compute upper bounds on α_{ k } in polynomial time. Subsequently, we leverage these algorithms to develop the tree search algorithm (TSA), a new method that computes the exact α_{ k } while significantly reducing the computational complexity of an exhaustive search. This algorithm offers a way to control a smooth tradeoff between the complexity and accuracy of the computations. In the conference precursor to this paper, we introduced the sandwiching algorithm (SWA) [13], which employs a branch-and-bound method. Although SWA can also be used to calculate the exact α_{ k }, it uses more memory than TSA. TSA therefore provides memory and performance benefits for high-dimensional matrices (e.g., up to size ∼ 6000×6000).
It is noteworthy that our methods are different from RIP or the neighborly polytope framework for analyzing the sparse recovery capability of random sensing matrices. For example, prior works such as [6, 22] employ the neighborly polytope to predict theoretical lower bounds on recoverable sparsity k for a randomly chosen Gaussian matrix. However, our methods do not resort to a probabilistic analysis and are applicable for any given deterministic sensing matrix. Also, our algorithms have the strength of providing better bounds than existing methods [3, 8, 11, 12] for a wide range of matrix sizes.
1.1 Main contributions
We summarize our main contributions as follows:

(i)
Faster algorithms for high dimensions. We designed the pick-l algorithm (and its optimized version), where l is a chosen integer, to provide upper bounds on α_{ k }. We show that as l increases, the optimized pick-l algorithm provides a tighter upper bound on α_{ k }. Numerical experiments show that, even with l=2 or 3, the pick-l algorithm already provides a better bound on α_{ k } than the previous algorithms based on LP [8] and SDP [3]. For large sensing matrices, the pick-1-element algorithm can be significantly faster than the LP and SDP methods.

(ii)
Novel formulations using branch-and-bound. Based on the pick-l algorithm, we propose a branch-and-bound tree search approach to compute tighter bounds or even the exact value of α_{ k }. To the best of our knowledge, this tree search algorithm is the first branch-and-bound algorithm to verify NSC for ℓ_{1} minimization. The approach depends heavily on the pick-l algorithm developed in this paper; for example, the LP [8] and SDP [3] methods cannot be directly adapted to an efficient branch-and-bound approach because they do not provide subset-specific upper bounds on α_{ k }. In numerical experiments, we demonstrate that the tree search algorithm reduces the execution time for precisely calculating α_{ k } by around 40–8000 times compared to the exhaustive search method.

(iii)
Simultaneous upper and lower bounds. The branch-and-bound tree search algorithm maintains upper and lower bounds on α_{ k } simultaneously during its runtime. This approach has two benefits. Firstly, if one is interested merely in certifying NSC for a positive integer k rather than obtaining the exact α_{ k }, one can terminate TSA early to shorten the running time: as soon as the global upper (lower) bound drops below (exceeds) 1/2, we conclude that NSC for that k is satisfied (not satisfied). Secondly, if TSA is terminated early due to, say, constraints on running time, the process still yields meaningful bounds on α_{ k } from the record of continuously maintained upper and lower bounds.

(iv)
New results on recoverable sparsity. For a certain l<k, we can compute α_{ l } or its upper bound by using the branch-and-bound tree search algorithm (for example, based on the pick-1-element algorithm). We introduce a novel result (Lemma 3) that uses α_{ l } to lower bound the recoverable sparsity k. This approach of lower bounding the recoverable sparsity k is useful when l is too large to perform the pick-l algorithm directly (which requires \(\binom {n}{l}\) enumerations).
1.2 Notations and preliminaries
We denote the sets of real numbers and positive integers as \(\mathbb {R}\) and \(\mathbb {Z}^{+}\), respectively. We reserve the uppercase letters K and L for index sets and the lowercase letters \(k, l \in \mathbb {Z}^{+}\) for their respective cardinalities. We use |·| to denote the cardinality of a set. We assume k>l≥1 throughout the paper. For vectors or scalars, we use lowercase letters, e.g., x,k,l. For a vector \(x \in \mathbb {R}^{n}\), we use x_{ i } for its ith element. If we use an index set as a subscript of a vector, it represents the partial vector over the index set. For example, when \(x \in \mathbb {R}^{n}\) and K={1,2}, x_{ K } represents [x_{1},x_{2}]^{T}. We reserve the uppercase A for a sensing matrix of dimension m×n. Since the number of columns of A is n, the full index set we consider is {1,2,…,n}. In addition, we represent the \(\binom {n}{l}\) subsets of cardinality l as L_{ i }, \(i=1,\ldots,\binom {n}{l}\), where L_{ i }⊂{1,2,…,n} and |L_{ i }|=l. We use the superscript * to represent an optimal solution of an optimization problem. For instance, z^{∗} and K^{∗} are the optimal solution of (5). Since we need to represent an optimal solution for each index set L_{ i }, we use the superscript i∗, e.g., z^{i∗}. The maximum value of k such that both \(\alpha _{k} < \frac {1}{2}\) and \(\alpha _{k+1} \geq \frac {1}{2}\) hold true is called the maximum recoverable sparsity k_{ max }.
2 Pick-l-element algorithm
Consider a sensing matrix with n columns. Then, there are \(\binom {n}{k}\) subsets K, each of cardinality k. When n and k are large, exhaustive search over these subsets to compute α_{ k } is extremely expensive. For example, when n=100 and k=10, computing α_{ k } takes a search over about 1.73×10^{13} subsets, a combinatorial task that is beyond the reach of common desktop computers. Our goal is to devise algorithms that can rapidly yield an exact value of α_{ k }. As an initial step, we develop a method to compute an upper bound on α_{ k } in polynomial time, called the pick-l-element algorithm (or simply, the pick-l algorithm), where l is a chosen integer such that 1≤l<k.
Let us define the proportion parameter for a given index set L with |L|=l, denoted by α_{l,L}, as

\(\alpha _{l,L} = \max _{\{z:\, Az=0,\, z \neq 0\}} \frac {\|z_{L}\|_{1}}{\|z\|_{1}}. \qquad (6)\)
Problem (6) is the partial optimization problem of (4), considering only vectors z in the null space of A for a fixed index set L. We can obtain α_{l,L} by solving the following optimization problem:

\(\max _{z} \; \|z_{L}\|_{1} \quad \text {subject to} \quad Az=0, \;\; \|z\|_{1} \leq 1. \qquad (7)\)
Since (7) maximizes a convex function for a given subset L, we cast (7) as 2^{l} linear programming problems by considering all possible sign patterns of the elements of z_{ L } (e.g., if l=2 and L={1,2}, then ∥z_{ L }∥_{1}=|z_{1}|+|z_{2}| corresponds to 2^{l}=4 linear objectives: z_{1}+z_{2}, z_{1}−z_{2}, −z_{1}+z_{2}, and −z_{1}−z_{2}). α_{l,L} is equal to the maximum among the 2^{l} optimal objective values.
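As a sketch of this step, the code below solves the 2^{l} LPs for a given L with SciPy's `linprog`; the normalization ∥z∥_{1} ≤ 1 is imposed through the standard split z = u − v with u, v ≥ 0. The function name `alpha_subset` is ours, and the HiGHS solver stands in for the MOSEK solver used in the paper's experiments:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def alpha_subset(A, L):
    """alpha_{l,L}: maximize ||z_L||_1 over {Az = 0, ||z||_1 <= 1} by solving
    one LP per sign pattern of z_L, using the split z = u - v, u, v >= 0."""
    m, n = A.shape
    A_eq = np.hstack([A, -A])          # A(u - v) = 0
    best = 0.0
    for signs in itertools.product([1.0, -1.0], repeat=len(L)):
        c = np.zeros(2 * n)
        for s, j in zip(signs, L):
            c[j], c[n + j] = -s, s     # minimize -s^T (u_L - v_L)
        res = linprog(c, A_ub=np.ones((1, 2 * n)), b_ub=[1.0],
                      A_eq=A_eq, b_eq=np.zeros(m),
                      bounds=[(0, None)] * (2 * n), method="highs")
        best = max(best, -res.fun)
    return best
```

For instance, with A = [[1, 1, 1]], any z in the null space satisfies |z_i| = |z_j + z_k| ≤ |z_j| + |z_k|, so every singleton L gives α_{1,L} = 1/2.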
The pick-l algorithm uses the α_{l,L}’s obtained from different index sets to compute an upper bound on α_{ k }. Algorithm 1 shows the steps of the pick-l algorithm in detail. The following lemmata show that the pick-l algorithm provides an upper bound on α_{ k }: Lemma 1 derives an upper bound on the proportion parameter for a fixed index set K, and Lemma 2 then shows that the pick-l algorithm yields an upper bound on α_{ k }.
Lemma 1
(Cheap Upper Bound (CUB) for a given subset K) Given a subset K, we have

\(\alpha _{k,K} \; \leq \; \frac {1}{\binom {k-1}{l-1}} \sum _{\{i:\; L_{i} \subseteq K,\; |L_{i}|=l\}} \alpha _{l,L_{i}}. \qquad (8)\)
Proof
Suppose that when z=z^{i∗} and z=z^{∗}, we achieve the optimal value of (6) for the given index sets L_{ i } and K respectively, i.e., \(\alpha _{l,L_{i}} = \frac {\| z^{i*}_{L_{i}} \|_{1} }{ \| z^{i*} \|_{1} }\) and \(\alpha _{k,K} = \frac {\|z^{*}_{K}\|_{1}}{\|z^{*}\|_{1}}\). Since each element of K appears \({\binom {k-1}{l-1}}\) times in {L_{ i }⊆K, |L_{ i }|=l}, we obtain the following inequality:
The inequality is from the optimal value of (6) for each index set L_{ i }. □
Lemma 2
The pick-l algorithm provides an upper bound on α_{ k }, namely

\(\alpha _{k} \; \leq \; \frac {1}{\binom {k-1}{l-1}} \sum _{i=1}^{\binom {k}{l}} \alpha _{l,L_{i}}, \qquad (9)\)

where the \(\alpha _{l,L_{i}}\)’s are sorted in descending order:

\(\alpha _{l,L_{1}} \geq \alpha _{l,L_{2}} \geq \ldots \geq \alpha _{l,L_{\binom {n}{l}}}. \qquad (10)\)
Proof
Without loss of generality, we assume that when z=z^{i∗}, \(i=1,2,\ldots,\binom {n}{l}\), the \(\alpha _{l,L_{i}}\)’s are obtained in descending order as in (10). It is noteworthy that α_{k,K} is defined for a fixed set K, whereas α_{ k } is the maximum value over all subsets of cardinality k. Suppose that when z=z^{∗} and K=K^{∗}, α_{ k } is achieved in (4). From the aforementioned definitions and an argument similar to that of Lemma 1, we have:
The first inequality is from Lemma 1, and the last inequality is from the assumption that \(\alpha _{l,L_{i}}\)’s are sorted in descending order. □
Steps 2 and 3 in Algorithm 1, which sort the α_{l,L}’s and compute an upper bound on α_{ k } from the sorted α_{l,L}’s via (9), can also be carried out, without the sorting operation, by solving the following optimization problem:

\(\max _{\gamma } \; \sum _{i=1}^{\binom {n}{l}} \gamma _{i}\, \alpha _{l,L_{i}} \quad \text {subject to} \quad 0 \leq \gamma _{i} \leq \frac {1}{\binom {k-1}{l-1}}, \;\; \sum _{i=1}^{\binom {n}{l}} \gamma _{i} \leq \frac {k}{l}. \qquad (11)\)

Here, we note that \(\frac {1}{{\binom {k-1}{l-1}}} \times {\binom {k}{l}} = \frac {k}{l}\). Therefore, at the optimal value, the \(\binom {k}{l}\) largest \(\alpha _{l,L_{i}}\)’s are chosen, each with the coefficient \(\frac {1}{{\binom {k-1}{l-1}}}\).
The upshot of the pick-l algorithm is that we reduce the number of enumerations from \(\binom {n}{k}\) to \(\binom {n}{l}\). For example, when n=300, k=20, and l=2, the number of operations is reduced by around 10^{26} times, and the reduction rate grows as n increases. Even with the reduced enumeration, we still obtain nontrivial upper bounds on α_{ k } through the pick-l-element algorithm. We present the performance of the pick-l algorithm in Section 5, showing that it provides better upper bounds than the previous research [3, 8] even when l=2. Furthermore, thanks to the pick-l algorithm, we can design a new branch-and-bound search algorithm that calculates α_{ k } by using the upper bounds of α_{ k } obtained from the pick-l algorithm. It is noteworthy that the cheap upper bound introduced in Lemma 1 provides upper bounds on α_{k,K} for specific subsets K, which is what enables our branch-and-bound method to calculate α_{ k } or more precise bounds on α_{ k }. In contrast, the LP relaxation method [8] and the SDP method [3] do not provide upper bounds on α_{k,K} for specific subsets K, which prevents their use in the branch-and-bound method.
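The combination step of the pick-l algorithm (Lemma 2) reduces to a sort and a scaled partial sum. A minimal sketch, assuming the α_{l,L_i} values have already been computed (e.g., by solving (7) for every size-l subset); the function name is hypothetical:

```python
from math import comb

def pick_l_upper_bound(alpha_vals, k, l):
    """Lemma 2 combination step: sort the alpha_{l,L_i}'s in descending
    order, keep the largest C(k,l) of them, and scale their sum by
    1 / C(k-1, l-1) to get an upper bound on alpha_k."""
    top = sorted(alpha_vals, reverse=True)[:comb(k, l)]
    return sum(top) / comb(k - 1, l - 1)
```

When all values equal a common a, the bound reduces to a·\(\binom{k}{l}/\binom{k-1}{l-1}\) = a·k/l, consistent with the identity noted above.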
Since we are also interested in k_{max}, we introduce the following Lemma 3 to lower bound the maximum recoverable sparsity k_{max}.
Lemma 3
The maximum recoverable sparsity k_{ max } satisfies

\(k_{max} \; \geq \; \left \lceil { l \cdot \frac {1/2}{\alpha _{l}}} \right \rceil - 1, \qquad (12)\)

where ⌈·⌉ is the ceiling function.
Proof
To prove this lemma, we show that when \(k = \left \lceil { l \cdot \frac {1/2}{\alpha _{l}}} \right \rceil - 1\), we have \(\alpha _{k} < \frac {1}{2}\). This follows from the upper bound on α_{ k } given as:

\(\alpha _{k} = \alpha _{k,K^{*}} \; \leq \; \frac {1}{\binom {k-1}{l-1}} \sum _{\{i:\; L_{i} \subseteq K^{*},\; |L_{i}|=l\}} \alpha _{l,L_{i}} \; \leq \; \frac {\binom {k}{l}}{\binom {k-1}{l-1}}\, \alpha _{l} = \alpha _{l} \cdot \frac {k}{l}. \qquad (13)\)

Note that there are \(\binom {k}{l}\) terms in the summation. From (13), if \(\alpha _{l} \cdot \frac {k}{l} < \frac {1}{2}\), then \(\alpha _{k} < \frac {1}{2}\). In other words, if \(k < l \cdot \frac {1/2}{\alpha _{l}}\), then \(\alpha _{k} < \frac {1}{2}\). Since k is a positive integer, when \(k = \left \lceil { l \cdot \frac {1/2}{\alpha _{l}}} \right \rceil - 1\), \(\alpha _{k} < \frac {1}{2}\). Therefore, the maximum recoverable sparsity k_{max} is larger than or equal to \(\left \lceil {l \cdot \frac {1/2}{\alpha _{l}}} \right \rceil - 1\). □
It is noteworthy that in ([8], Section 4.2.B), the authors introduced a lower bound on k based on α_{1}, i.e., k(α_{1}); Lemma 3 provides a more general result. Furthermore, in Lemma 3, instead of using α_{ l }, we can use an upper bound of α_{ l } to obtain the recoverable sparsity k; namely, \(k(UB(\alpha _{l})) = \left \lceil { l \cdot \frac {1/2}{UB(\alpha _{l})}} \right \rceil - 1 \leq k_{max}\), where UB(α_{ l }) denotes an upper bound of α_{ l }. Since the proof follows the same track as that of Lemma 3, we omit it.
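Lemma 3 translates directly into a one-line computation. A sketch (function name ours), which accepts either α_{ l } itself or an upper bound UB(α_{ l }), the latter yielding a possibly smaller but still valid lower bound:

```python
from math import ceil

def recoverable_sparsity_lb(alpha_l, l):
    """Lemma 3: k_max >= ceil(l * (1/2) / alpha_l) - 1.
    alpha_l may be replaced by any upper bound UB(alpha_l)."""
    return ceil(l * 0.5 / alpha_l) - 1
```

For example, α_{1}=0.125 certifies k_max ≥ 3, and α_{2}=0.3 likewise certifies k_max ≥ 3.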
Finally, we introduce the following proposition to compare our algorithm with the LP method [8] theoretically.
Proposition 1
For any integer k≥1, let \(\alpha _{k}^{pick1}\) be the upper bound on α_{ k } provided by the pick1element algorithm according to Lemma 2. Let \(\alpha _{k}^{LP}\) be the upper bound on α_{ k } provided by the LP method [8] according to the following definition (namely Eq. (4.25) in [8] with β=∞):
where e_{ j } is the standard basis vector with the jth element equal to 1, and ∥·∥_{k,1} stands for the sum of the k largest magnitudes of components of a vector. Then we have:

\(\alpha _{k}^{LP} \; \leq \; \alpha _{k}^{pick1}.\)
For readability, we place the proof of Proposition 1 in Appendix A.
The LP method can provide tighter upper bounds on α_{ k } than the pick1element algorithm; however, this comes at a cost of solving a big optimization problem of design dimension mn. When m and n are large, the complexity of computing \(\alpha _{k}^{LP}\) can be prohibitive (please see Table 2).
3 Optimized pick-l algorithm
We can tighten the upper bound on α_{ k } from the pick-l algorithm by replacing the constant factor \(\frac {1}{\binom {k-1}{l-1}}\) in (9) with optimized coefficients, at the cost of additional complexity; we call this the optimized pick-l algorithm. The optimized pick-l algorithm is mostly useful from a theoretical perspective: in practice, it gives improved but similar performance in calculating the upper bound of α_{ k } compared to the basic pick-l algorithm described in Section 2. As a theoretical merit of the optimized pick-l algorithm, we can show that as l increases, its upper bound on α_{ k } becomes smaller or stays the same.
The optimized pick-l algorithm provides an upper bound on α_{ k } via the following optimization problem:

\(\max _{\gamma } \; \sum _{i=1}^{\binom {n}{l}} \gamma _{i}\, \alpha _{l,L_{i}} \quad \text {subject to} \quad \gamma _{i} \geq 0, \;\; \sum _{i=1}^{\binom {n}{l}} \gamma _{i} \leq \frac {k}{l}, \;\; \sum _{\{i:\; B \subseteq L_{i}\}} \gamma _{i} \leq \frac {\binom {k-b}{l-b}}{\binom {k-1}{l-1}} \;\; \text {for every } B \text { with } |B|=b, \; 1 \leq b \leq l. \qquad (15)\)
In the following lemmata, we show that the optimized pickl algorithm produces an upper bound of α_{ k } and this bound is tighter than that of the basic pickl algorithm introduced in (11). The last lemma establishes that as l increases, the upper bound of α_{ k } decreases or stays the same.
Lemma 4
The optimized pick-l algorithm provides an upper bound on α_{ k }.
Proof
The strategy to prove Lemma 4 is to show that one feasible solution of (15) gives an upper bound on α_{ k }. Suppose that when K=K^{∗}, α_{ k } is achieved, i.e., \(\phantom {\dot {i}\!}\alpha _{k}=\alpha _{k,K^{*}}\). For a feasible solution, let us choose \(\gamma _{i} = \frac {1}{\binom {k-1}{l-1}}\) when L_{ i }⊆K^{∗}, and γ_{ i }=0 otherwise; one can easily check that this choice satisfies the first and second constraints of (15). For the third constraint, let us first check the case b=l. For b=l, we can choose an arbitrary index set B such that |B|=b=l. For this B, there is only one L_{ i } such that B⊆L_{ i }, namely B itself, i.e., B=L_{ i }; the same holds for every other such B. Hence, the third constraint represents
For b=1, the third constraint represents
Note that there are \(\binom {n-1}{l-1}\) sets L_{ i } which contain a given index set B. Among the corresponding \(\binom {n-1}{l-1}\) coefficients γ_{ i }, only those whose L_{ i }’s are subsets of K^{∗} equal \(\frac {1}{\binom {k-1}{l-1}}\). Since each element of K^{∗} appears \(\binom {k-1}{l-1}\) times in \(\left \{i:\; L_{i} \subseteq K^{*},\; 1\leq i \leq \binom {n}{l} \right \}\), the sum of the γ_{ i }’s whose L_{ i }’s are subsets of K^{∗} becomes \(\frac {1}{\binom {k-1}{l-1}}\times \binom {k-1}{l-1} = 1\), which satisfies (17). Essentially, the third constraint ensures that, for each index, the sum of the coefficients related to that index is limited to 1. In the same way, the chosen γ_{ i } is a feasible solution of (15) for 1<b<l. This feasible solution attains the objective value \(\frac {1}{{\binom {k-1}{l-1}}} \sum _{\{i:\; L_{i} \subseteq K^{*},\; |L_{i}|=l\}} \alpha _{l,L_{i}}\), so the optimal value of (15) is at least this quantity, which is an upper bound on α_{ k } as shown in (13). □
Lemma 5
The optimized pick-l algorithm provides a tighter, or at least the same, upper bound on α_{ k } as the basic pick-l algorithm introduced in (11).
Proof
We show that the optimization problem (11) is a relaxation of (15). As in the proof of Lemma 4, for b=l, the third constraint of (15) represents (16), which is contained in the first constraint of (11). Since the third constraint of (15) also covers the values 1≤b<l, (15) has more constraints than (11). Therefore, the optimized pick-l algorithm, which solves (15), provides a tighter or at least the same upper bound as the basic pick-l algorithm. □
Lemma 6
The optimized pick-l algorithm provides a tighter or at least the same upper bound as the optimized pick-p algorithm when l>p.
Proof
We can upper bound the objective function of (15) by using (8) as follows:
Note that in the objective function of (18), each \(\alpha _{p,P_{j}},\; 1\leq j\leq \binom {n}{p}\), appears \(\binom {n-p}{l-p}\) times. Let us define
We can relax (18) to the following problem, which turns out to be the same as the optimized pick-p algorithm:
The relaxation is shown by checking the constraints. The first constraint of (19) is trivial to obtain. For the second constraint, we can obtain the second constraint of (19) from the following relations:
where the second equality is obtained from the fact that γ_{ i }, the coefficient of \(\alpha _{l,L_{i}}\), appears \(\binom {l}{p}\) times in \(\sum _{j=1}^{\binom {n}{p}} \sum _{\substack {\{i:\; P_{j} \subset L_{i} \}}} \gamma _{i}\). The final inequality is from the second constraint of (18). The third constraint in (19) can be deduced from the following inequality:
where the second equality follows from the fact that, for a fixed P_{ j }, there are \(\binom {n-p}{l-p}\) sets L_{ i } with P_{ j }⊂L_{ i }, \(i=1,\ldots,\binom {n}{l}\); for a fixed B, there are \(\binom {n-b}{p-b}\) sets P_{ j } with B⊂P_{ j }, \(j=1,\ldots,\binom {n}{p}\), and \(\binom {n-b}{l-b}\) sets L_{ i } with B⊂L_{ i }, \(i=1,\ldots,\binom {n}{l}\). Since (19) is obtained by relaxing (18), the optimal value of (19) is larger than or equal to the optimal value of (18). Moreover, (19) is exactly the optimized pick-p algorithm. Thus, when l>p, the optimized pick-l algorithm provides a tighter or at least the same upper bound as the optimized pick-p algorithm. □
By using a larger l in the pick-l algorithm, we can obtain a tighter upper bound on α_{ k }. However, for a given l, we need to enumerate \(\binom {n}{l}\) possibilities, which becomes infeasible when l is large. Moreover, when l<k, the pick-l algorithm only gives an upper bound on α_{ k }, not its exact value. There remains a need to find tighter bounds on α_{ k }, or even the exact value of α_{ k }, when k is too large for the \(\binom {n}{k}\) enumerations of exhaustive search [14–16]. To this end, we propose a new branch-and-bound tree search algorithm that finds tighter bounds on α_{ k } than Lemma 2 provides, or even the exact α_{ k }. Our branch-and-bound tree search algorithm is enabled by the pick-l algorithms introduced in Sections 2 and 3.
4 Tree search algorithm
To find the index set K^{∗} that leads to the maximum α_{k,K} (among all possible index sets K), the tree search algorithm (TSA) performs a best-first branch-and-bound search [23] over a tree structure representing different subsets of {1,2,…,n}. In essence, for each subset J with cardinality no bigger than k, TSA calculates an upper bound on α_{k,K} that is valid for any set K (with cardinality k) such that J⊆K. If this upper bound is smaller than a lower bound on α_{ k }, TSA will not further explore any of J’s supersets, which reduces the average-case computational complexity. For simplicity, we describe the TSA based on the pick-1-element algorithm, called the 1-Step TSA. However, we remark that the TSA can also be based on the pick-l-element (l≥2) algorithm, by calculating upper bounds on α_{k,K} from the results of the pick-l-element algorithm.
4.1 Tree structure
A tree node J represents an index subset of {1,…,n} such that |J|≤k. We have the following rule:

[R1] A parent node is a subset of each of its child node(s).
A node that has no child is referred to as a leaf node. We call the cardinality of the index set corresponding to J the height of J. The tree structure follows the “legitimate order,” which ensures that any new index in a child node is bigger than all indices of its parent node.

[R2] “Legitimate order”: let P and C denote the parent node and the child node. Then, any index in P must be smaller than any index in C∖P.
Figure 1 illustrates this rule in a tree with k=2 and n=3.
4.2 Basic idea of a branchandbound approach for calculating α _{ k }
We use a branchandbound approach over the tree structure to calculate α_{ k }. This method maintains a lower bound on α_{ k } (how to maintain this lower bound will be explained in Section 4.3). When the algorithm explores a tree node J, the algorithm calculates an upper bound B(J), which is no smaller than α_{k,K} for any child node K (with cardinality k) of node J. If B(J) is smaller than the lower bound on α_{ k }, then the algorithm will not explore the child nodes of the tree node J.
In our algorithm, we calculate B(J) as

\(B(J) = \alpha _{j,J} + \sum _{i=1}^{t} \alpha _{1,\{i+\max (J)\}}, \qquad (20)\)
where j+t=k, max(J) represents the largest index in J, and α_{1,{1}}≥α_{1,{2}}≥…≥α_{1,{n}}. We obtain this descending order by permuting the columns of the sensing matrix A in descending order of α_{1,{i}}’s as the precomputation step of TSA. For example, in Fig. 1, for k=2, B({1})=α_{1,{1}}+α_{1,{2}}. In order to justify that B(J) is an upper bound of α_{k,K} for all node K such that J⊆K, we provide the following lemma.
Lemma 7
Given α_{1,{1}}≥α_{1,{2}}≥…≥α_{1,{n}}, \(B(J) = \alpha _{j,J} + \sum _{i=1}^{t} \alpha _{1,\{i+max(J)\}}\), where j+t=k, and max(J) represents the largest index in J, is an upper bound of α_{k,K} for all nodes K such that J⊆K.
Proof
For any subset K such that J⊆K, we can write α_{k,K}=α_{j+t,{J∪T}}, where j+t=k and T=K∖J. Then, following exactly the same line of argument as in the proof of Lemma 1, we have
and α_{t,T} is no larger than \(\sum _{j \in T} \alpha _{1,\{j\}}\). Finally, since the α_{1,{i}}’s are sorted in descending order, \(\sum _{j \in T} \alpha _{1,\{j\}} \leq \sum _{i=1}^{t} \alpha _{1,\{i+\max (J)\}}\). Note that, due to the legitimate order [R2], the smallest element of the index set T is no less than 1+max(J). In conclusion, for all nodes K such that J⊆K, B(J) is an upper bound of α_{k,K}. □
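In code, once the singleton bounds α_{1,{i}} are precomputed and sorted in descending order (with the columns of A permuted accordingly), evaluating B(J) per (20) is a cheap slice-and-sum. The sketch below uses 0-based indices, so the paper's α_{1,{i+max(J)}}, i=1,…,t, becomes the slice starting at max(J)+1; the function name is ours:

```python
def node_bound(alpha_J, J, alpha1, k):
    """B(J) of (20): alpha_{j,J} plus the t = k - |J| singleton bounds
    immediately following max(J).  alpha1 must already be sorted in
    descending order; J is a nonempty list of 0-based column indices."""
    t = k - len(J)
    start = max(J) + 1
    return alpha_J + sum(alpha1[start:start + t])
```

For example, with alpha1 = [0.5, 0.4, 0.3, 0.2] and k = 2, the node J = [0] is bounded by alpha_J plus the single next-largest singleton bound alpha1[1].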
4.3 Bestfirst tree search strategy
TSA adopts a bestfirst tree search strategy for the branchandbound approach. We first describe a basic version of the bestfirst tree search strategy and then introduce two enhancements to this strategy in the next subsection.
In its basic version, TSA starts with a tree having only the root node and sets the global lower bound of α_{ k } as 0. In each iteration, TSA selects a leaf tree node J with the largest B(J) and expands the tree by adding the child nodes of J to the tree. For each of these newly added child nodes, say Q, TSA then calculates the upper bound B(Q) in (20). Note that if a newly added child node Q has k elements, TSA will calculate α_{k,Q}, which is a lower bound on α_{ k }. For this kelement Q, if the newly calculated α_{k,Q} is bigger than the global lower bound of α_{ k }, TSA will set the global lower bound equal to α_{k,Q}. TSA will terminate if a leaf tree node J has the largest B(J) among all the leaf nodes, and that B(J) is no bigger than the global lower bound on α_{ k }.
From the standard theory of the branch-and-bound approach, TSA outputs the exact α_{ k }. In this process, the global lower bound keeps increasing until it equals an upper bound on α_{ k } (the largest B(J) among the leaf nodes).
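The basic best-first strategy above can be sketched end to end. The code below is our own simplified rendering, not the paper's Algorithm 2: it expands all children of a node at once (omitting the two enhancements of Section 4.4), computes each α_{j,J} by casting (7) as 2^{j} LPs, and uses SciPy's HiGHS solver in place of MOSEK:

```python
import heapq
import itertools
import numpy as np
from scipy.optimize import linprog

def alpha_subset(A, L):
    """alpha_{|L|,L}: maximize ||z_L||_1 over {Az = 0, ||z||_1 <= 1} by
    solving one LP per sign pattern of z_L (split z = u - v, u, v >= 0)."""
    m, n = A.shape
    A_eq = np.hstack([A, -A])
    best = 0.0
    for signs in itertools.product([1.0, -1.0], repeat=len(L)):
        c = np.zeros(2 * n)
        for s, j in zip(signs, L):
            c[j], c[n + j] = -s, s
        res = linprog(c, A_ub=np.ones((1, 2 * n)), b_ub=[1.0],
                      A_eq=A_eq, b_eq=np.zeros(m),
                      bounds=[(0, None)] * (2 * n), method="highs")
        best = max(best, -res.fun)
    return best

def tsa_alpha_k(A, k):
    """Exact alpha_k via a best-first branch-and-bound over index subsets."""
    n = A.shape[1]
    a1 = np.array([alpha_subset(A, [i]) for i in range(n)])
    order = np.argsort(-a1)            # precomputation: permute columns so
    A, a1 = A[:, order], a1[order]     # alpha_{1,{1}} >= ... >= alpha_{1,{n}}
    lb = 0.0                           # global lower bound on alpha_k
    heap = [(-a1[:k].sum(), ())]       # max-heap of (-B(J), J), root J = {}
    while heap:
        negB, J = heapq.heappop(heap)
        if -negB <= lb:                # no unexplored K can beat lb: done
            break
        hi = J[-1] if J else -1
        for q in range(hi + 1, n):     # children obey the legitimate order
            Q = J + (q,)
            t = k - len(Q)
            if t == 0:                 # height-k node: update the lower bound
                lb = max(lb, alpha_subset(A, list(Q)))
            elif q + t < n:            # enough larger indices remain
                B = alpha_subset(A, list(Q)) + a1[q + 1:q + 1 + t].sum()
                heapq.heappush(heap, (-B, Q))
    return lb
```

On tiny matrices this returns the same value as exhaustive search over all \(\binom {n}{k}\) subsets while typically solving far fewer subproblems; its worst case remains exponential, consistent with the NP-hardness of computing α_{ k }.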
4.4 Two enhancements
We incorporate two novel features into TSA in order to reduce the computational complexity. Firstly, when TSA attaches a new node Q to a node J in the tree structure, TSA computes B(Q) as in (21):

\(B(Q) = \alpha _{j,J} + \alpha _{1,\{\max (Q)\}} + \sum _{i=1}^{t} \alpha _{1,\{i+\max (Q)\}}, \qquad (21)\)
where j+t+1=k, max(Q) represents the largest index in Q, and α_{1,{1}}≥α_{1,{2}}≥…≥α_{1,{n}}. Thus, without calculating α_{j+1,Q} (which involves higher computational complexity), we can still have B(Q) as an upper bound of α_{k,K} for any child node K (with cardinality k) of the node Q.
Secondly, when TSA adds a new node Q as a child of node J in the tree structure (assuming α_{j,J} has already been calculated), TSA does not need to add all of J’s child nodes to the tree at the same time. Instead, TSA only adds the unattached child node Q of J with the largest B(Q) as defined in (21); namely, the index Q∖J is no bigger than the index Q^{′}∖J, where Q^{′} is any unattached child of the node J. We note that B(Q) is an upper bound on B(Q^{′}) (according to (21)) for any other unattached child node Q^{′} of the node J. Thus, for any child node K (of cardinality k) of any of node J’s unattached child nodes, B(Q) is still an upper bound on α_{k,K}.
Algorithm 2 shows the detailed steps of TSA based on the pick-1-element algorithm (namely, l=1, the 1-Step TSA). In the description, we define “expanding the tree from a node J” as follows:

[R3] “Expanding the tree from a node J”: attaching to the node J a new node Q such that B(Q), defined in (21), is the largest among all of the node J’s unattached child nodes.
4.5 Advantage of the tree search algorithm
Due to the nature of the branch-and-bound approach, we obtain a global upper bound and a global lower bound on α_{ k } while TSA runs. As the number of iterations increases, TSA yields tighter and tighter upper bounds on α_{ k }, given by the largest B(·) among the leaf nodes. By using the global upper bound on α_{ k }, we can obtain a lower bound on the recoverable sparsity k via Lemma 3. Thus, even if the complexity of TSA is too high for it to finish in a timely manner, we can still obtain a lower bound on the recoverable sparsity k by terminating TSA early.
We note that the methods based on LP [8] and SDP [3] also provide upper bounds on α_{ k }. However, they cannot determine upper bounds on α_{k,K} for a specific index set K, which prevents their use in our branch-and-bound method for computing α_{ k }.
5 Numerical experiments
We conducted extensive simulations to compute α_{ k } and its upper/lower bounds using the pick-l algorithms and TSA. In this section, we refer to the pick-l algorithms introduced in Sections 2 and 3 simply as the (basic) pick-l and the optimized pick-l algorithms, respectively.
For the same matrices, we compared our methods with the LP relaxation approach [8] and the SDP method [3]. We assessed computational complexity in terms of the execution time of the algorithms.^{Footnote 1} In addition, we carried out numerical experiments to demonstrate the computational complexity of TSA empirically.
For the LP method in [8] and the SDP method in [3], we used the Matlab codes^{Footnote 2} provided by the authors. Consistent with previous research, we used CVX [17], a package for specifying and solving convex programs, for the SDP method, and MOSEK [18], a commercial LP solver, for the LP method. In our own algorithms, we used MOSEK to solve (7). Also, to be consistent with the previous research, matrices were generated from the Matlab code provided by the authors of [3] at http://www.di.ens.fr/~aspremon/NSPcode.html. For valid bounds, we rounded down lower bounds on α_{ k } and exact α_{ k } values, and rounded up upper bounds on α_{ k }, to the nearest hundredth.
5.1 Performance comparison
First, we considered Gaussian matrices and partial Fourier matrices with sizes ranging from n=40 to n=6144. We chose n=40 so that our results can be compared with the simulation results in [3].
5.1.1 Lowdimensional sensing matrices
5.1.1.1 Sensing matrices with n=40
We considered sensing matrices of row dimension m=0.5n, 0.6n, 0.7n, 0.8n, where n=40. For every matrix size, we randomly generated 10 different realizations of Gaussian and partial Fourier matrices, so in total we used 80 different n=40 sensing matrices for the numerical experiments in Tables 7 and 8. We normalized all matrix columns to unit ℓ_{2}-norm. The entries of the Gaussian matrices were i.i.d. standard Gaussian \(\mathcal {N}(0,1)\). The partial Fourier matrices had m rows randomly drawn from the full Fourier matrices. We compared our algorithms—pick-1-element, pick-2-element, pick-3-element, and TSA—to the LP and SDP methods. For readability, we place the numerical results for these small sensing matrices in Appendix B.
For each matrix size and type, we increased k from 1 to 5 in unit steps. Tables 7(a) and 8(a) show the median values of α_k. (To be consistent with the previous research [3], in which the authors used the median value of α_k to compare the SDP method with the LP method, we report the median values obtained from 10 random realizations of the sensing matrix.) From the median value of α_k, we obtained the recoverable sparsity k_max such that \(\alpha _{k_{\text {max}}}< 1/2\) and \(\alpha _{k_{\text {max}}+1} > 1/2\). In addition, we calculated the arithmetic mean of the k_max's: we obtained k_max for each random realization and averaged the ten resulting values. Compared with the LP and SDP methods, the pick-2, pick-3, and TSA methods yielded a larger or equal recoverable sparsity k_max. It is noteworthy that TSA produced the exact α_k for k=1,2,…,5, while the LP and SDP methods only provided the exact α_k for k=1. In several cases, the exact α_k was below 1/2 even though its upper bound exceeded 1/2, e.g., α_5 for 32×40 Gaussian matrices, α_4 for 28×40 Gaussian matrices, α_3 for 24×40 Gaussian matrices, α_3 for 20×40 partial Fourier matrices, and α_4 for 24×40 partial Fourier matrices. This is also reflected in the arithmetic mean of k_max in Tables 7(a) and 8(a).
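The rule for reading k_max off a sequence of α_k values can be sketched as follows; the input sequence here is hypothetical, not data from the tables.

```python
def recoverable_sparsity(alphas):
    """alphas[k-1] holds alpha_k for k = 1, 2, ...
    Returns the largest k_max with alpha_{k_max} < 1/2.  Since alpha_k
    is nondecreasing in k, the first crossing of 1/2 determines k_max."""
    k_max = 0
    for k, a in enumerate(alphas, start=1):
        if a < 0.5:
            k_max = k
        else:
            break
    return k_max

# Hypothetical median values of alpha_k for k = 1..5:
print(recoverable_sparsity([0.10, 0.21, 0.33, 0.46, 0.58]))  # 4
```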
To compare computational complexity, we calculated the geometric mean of the algorithms' execution times, which is less sensitive to outliers than the arithmetic mean. Tables 7(b) and 8(b) list the average execution times. We also ran the exhaustive search method (ESM) to find α_k and compared its execution time with that of TSA. In calculating α_5, on average, 3-Step TSA reduced the computational time by around 86 times for 20×40 Gaussian matrices and by 94 times for 20×40 partial Fourier matrices, compared to ESM. For 32×40 Gaussian and partial Fourier matrices, the speedup of the best l-Step TSA, l=1,2,3, over ESM is around 1760 times and 182 times, respectively. We observed that when m/n=0.5, e.g., for 20×40 sensing matrices, the 3-Step TSA generally provides the fastest result for k=5. On the other hand, for m/n=0.8 (e.g., the 32×40 case), the 2-Step TSA is the quickest in finding the exact α_k for k=5; however, for k>5, the fastest l-Step TSA cannot be determined from either experiments or theory.
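The averaging step above can be sketched as follows; we use the log-domain form of the geometric mean, which avoids overflow from multiplying many times together.

```python
import math

def geometric_mean(times):
    """Geometric mean of execution times: exp of the mean log-time.
    A single very slow run shifts this far less than the arithmetic mean."""
    return math.exp(sum(math.log(t) for t in times) / len(times))

print(geometric_mean([1.0, 10.0, 100.0]))  # ~10.0 (vs arithmetic mean 37.0)
```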
5.1.1.2 Sensing matrices with n=256
We assessed the performance of the pick-l algorithms for sensing matrices with n=256. We carried out numerical experiments on 128×256 Gaussian matrices (Fig. 2a) and 64×256 partial Fourier matrices (Fig. 2b). Here, for 10 sensing matrices, we obtained the median value of the upper bounds on α_k using the pick-l algorithm and compared the result with the LP relaxation method [8]. We omitted the SDP method [3] from this experiment due to its very high computational complexity. For the pick-3 algorithm in Fig. 2a, we calculated an upper bound on α_3 via TSA and used this result to calculate upper bounds on α_k, k=3,4,…,8, via (13). Figure 2a, b demonstrates that, with an appropriate choice of l, the upper bound on α_k obtained via the pick-l algorithm can be tighter than that from the LP relaxation method. For example, for 128×256 Gaussian matrices, LP relaxation often determines the maximum recoverable sparsity as 5, while the pick-2 algorithm improves it to 6 and the pick-3 algorithm to 7 (α_7=0.49). For 64×256 partial Fourier matrices, the maximum recoverable sparsities from LP relaxation and the pick-2 algorithm are 3 and 4, respectively.
5.1.1.3 Sensing matrices with n=512
We further conducted numerical experiments on Gaussian sensing matrices with n=512. The simulation results in Table 1 clearly demonstrate that the pick-2 algorithm provides a larger lower bound on the recoverable sparsity k than the LP method [8]. In particular, for a 410×512 Gaussian sensing matrix, the lower bound on k obtained from the pick-2 algorithm is almost twice that of the LP method.
5.1.2 Highdimensional sensing matrices
5.1.2.1 Sensing matrices with n≥1024
We conducted numerical experiments for Gaussian sensing matrices with n from 1024 to 6144. We report these experiments in Tables 2 and 3, where we calculated the lower bound on the recoverable sparsity k and the corresponding execution time. The SDP method [3] was not applicable in these experiments due to its very high computational complexity. In Table 2, we ran TSA for 1 day (24 h) and obtained an upper bound of α_2, denoted by UB(α_2). From this upper bound, we obtained a lower bound on k, denoted by k(UB(α_2)), via Lemma 3. Our numerical results in Tables 2 and 3 clearly show that our pick-l algorithm outperforms the LP method in recoverable sparsity k or execution time. We note that although our pick-1-element algorithm provides the same recoverable sparsity k as the LP method [8] in Tables 2 and 3, the LP method can take 10 times longer than our method on m×n Gaussian matrices when m is large.
For extremely large sensing matrices, e.g., 4014×4096 and 6021×6144, the LP and SDP methods cannot provide any lower bound on k due to unreasonable computational time. However, our pick-l algorithm can still provide the lower bound on k efficiently. Table 3 shows the lower bound on k and the execution time for these large-dimensional matrices, where our verified recoverable sparsity k can be as large as 558 for a 6134×6144 sensing matrix. We obtained the estimated time for the LP method by running the Matlab code from http://www2.isye.gatech.edu/~nemirovs/, which displays the percentage of the calculation completed on screen.
5.2 Comparison between the optimized pick-l algorithm and the basic pick-l algorithm
We compared the basic pick-l algorithm introduced in Section 2 to the optimized pick-l algorithm of Section 3 on 28×40 and 40×50 Gaussian sensing matrices for l=3 and k=4,5,…,8. Table 4 demonstrates that, in this regime, the optimized pick-l algorithm provided tighter upper bounds on α_k than the basic pick-l algorithm. This is because when l is large and k>l, (15) includes more constraints than when k and l are small, which shrinks the feasible set. Hence, the optimal value of (15), the result of the optimized pick-l, can be smaller than or equal to that of (11), the basic pick-l. Additionally, we provide the exact α_k values obtained from TSA to check how tight the bounds from the basic and optimized pick-l algorithms are. In terms of execution time, the optimized pick-l algorithm, which computes (15), was around 1.7 and 4.4 times slower than the basic pick-l on the 28×40 and 40×50 Gaussian matrices, respectively.
In summary, the optimized pick-l algorithm provides an upper bound on α_k that is at least as tight as that of the basic pick-l algorithm, at the cost of additional complexity. Despite this increased complexity, the optimized pick-l algorithm has an important theoretical merit, namely Lemma 6.
5.3 Complexity of tree search algorithm
In this subsection, we carried out numerical experiments to demonstrate the computational complexity of TSA empirically on randomly chosen Gaussian sensing matrices. Figure 3a, b shows the distribution of execution time and the distribution of the number of height-5 nodes attached to the tree structure in TSA, respectively. For m=0.5n, we generated 100 random realizations of Gaussian matrices and computed α_5 using 3-Step TSA. The maximum number of leaf nodes whose cardinality is k is \(\binom {n}{k} = \binom {40}{5}=658{,}008\). From Fig. 3b, we note that in 90% of the cases, 3-Step TSA terminated before 1.6% of all possible height-5 nodes were attached to the tree structure.
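The count of candidate height-5 nodes quoted above comes directly from the binomial coefficient:

```python
from math import comb

# All index sets K with |K| = 5 out of n = 40 columns are potential
# height-5 leaf nodes of the TSA tree.
total = comb(40, 5)
print(total)  # 658008

# In 90% of the trials, 3-Step TSA attached fewer than 1.6% of them:
print(int(total * 0.016))
```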
We provide the execution times of TSA for different-sized randomly chosen Gaussian matrices in Fig. 4, compared against ESM. Figure 4a shows that when k=1, 1-Step TSA performs almost the same as ESM. This is because 1-Step TSA calculates all the α_{1,{i}}'s as a precomputation, which is the same procedure as ESM. However, for k>l, as shown in Fig. 4b–d, TSA can find α_k with reduced computation by using all the α_{l,L}'s, whereas ESM must compute all the α_{k,K}'s. In computing α_k, we achieved a speedup of around 100 times via 2-Step TSA compared to ESM for k=3,4.
In addition, in Fig. 5, we compared the execution time of TSA to ESM by varying k with n fixed on random Gaussian matrices. For the best execution time, we used different values of l in TSA. For n=40 and n=50, 3-Step TSA reduced the execution time to find α_5 by around 100 times and 300 times, respectively, compared with ESM.
Finally, Fig. 6 illustrates the global lower and upper bounds for 80×100 and 160×200 Gaussian sensing matrices as the number of iterations in TSA increases. As we can see, the global upper and lower bounds converge very quickly. This implies that we can sometimes terminate TSA early and still obtain tight bounds on α_k.
5.4 Application to network tomography problem
We applied our new tools to verify the NSC for sensing matrices in network tomography problems [14–16, 19–21]. In an undirected graph model of a communication network, the delay over each link can be determined by sending packets through probing paths composed of connected links. The delay of each path is measured as the sum of the delays over its links. Generally, most links are uncongested, and only a few congested links have significant delays. It is therefore reasonable to treat finding the link delays as a sparse recovery problem. This problem can be expressed as a system of linear equations y=Ax, where the vector \(y \in \mathbb {R}^{m}\) contains the delays of the m paths, the vector \(x\in \mathbb {R}^{n}\) is the delay vector for the n links, and A is a sensing matrix. The element A_{ij} of A is 1 if and only if path y_i, i∈{1, 2, …, m}, goes through link j, j∈{1, 2, …, n}; otherwise, A_{ij} equals 0 (see Fig. 7). The indices of the nonzero elements in x correspond to the congested links.
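The construction of A from a list of probing paths can be sketched as below; the link indices and delay values in the toy example are hypothetical, not taken from Fig. 7.

```python
def routing_matrix(paths, n_links):
    """Build the 0/1 sensing matrix A of a network tomography problem:
    A[i][j] = 1 iff probing path i traverses link j."""
    A = [[0] * n_links for _ in range(len(paths))]
    for i, path in enumerate(paths):
        for j in path:
            A[i][j] = 1
    return A

def path_delays(A, link_delays):
    # y = A x: each path delay is the sum of the delays of its links.
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, link_delays))
            for row in A]

# Toy example: 3 probing paths over 5 links (indices are hypothetical).
A = routing_matrix([[0, 1], [1, 2, 3], [3, 4]], 5)
x = [0.0, 2.5, 0.0, 0.0, 0.0]   # sparse: only link 1 is congested
print(path_delays(A, x))         # [2.5, 2.5, 0.0]
```

Recovering the sparse x from the measured y via (2) then pinpoints the congested links.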
In our numerical experiments to verify the NSC in network tomography problems, the paths for sending data packets were generated by random walks of fixed length. Table 5 summarizes the results. We note that by using TSA, one can verify exactly that a total of k=2 and k=4 congested link delays can be uniquely found by solving the ℓ_1 minimization problem (2) for the randomly generated network measurement matrices of size 33×66 (12-node complete graph) and 53×105 (15-node complete graph), respectively. For ESM, we estimated the execution time by multiplying the unit time to solve (7) by the total number of cases in the exhaustive search; we obtained the unit time by averaging 100 trials. For the 53×105 matrix, 3-Step TSA reduced the execution time to find α_5 by around 137 times compared to ESM.
We further carried out numerical experiments on an even larger network model with 300 nodes and 400 edges. We created a random spanning tree for the network model using a random-walk approach [24]. For each probing path, we randomly chose one of the 300 nodes as the starting point of the random walk and took 100 steps along the network connections. We obtained a 320×400 matrix corresponding to this network model and calculated the α_k values via l-Step TSA, l=1,2. In Table 6, we compared the execution time of TSA with that of ESM, where the unit time to solve (7) was again obtained by averaging 100 trials. In particular, 1-Step TSA reduced the execution time to find α_4 by around 28,700 times compared to ESM.
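The random-walk generation of probing paths can be sketched as follows; the toy graph and walk length are ours, chosen only to illustrate the procedure described above.

```python
import random

def random_walk_path(adj, start, steps, rng=random):
    """One probing path: a fixed-length random walk on an undirected
    graph given by adjacency lists.  Returns the set of traversed
    links, each link stored as a sorted node pair."""
    links = set()
    node = start
    for _ in range(steps):
        nxt = rng.choice(adj[node])
        links.add(tuple(sorted((node, nxt))))
        node = nxt
    return links

# Toy 4-node cycle; a 100-step walk can cover at most its 4 links.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
links = random_walk_path(adj, start=0, steps=100)
print(sorted(links))
```

Each such link set gives one row of the routing matrix A; repeating the walk m times yields the full m×n sensing matrix.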
5.5 Discussion
In this section, we discuss the strengths and weaknesses of our proposed algorithms compared with earlier research [3, 8].

1.
Comparisons with LP and SDP. Our proposed pick-1-element algorithm achieves performance similar to the LP [8] and SDP [3] methods, but with the clear advantage of being more computationally efficient for large-dimensional sensing matrices. See Table 3, where the LP and SDP methods cannot provide performance bounds on the recoverable sparsity k due to high computational complexity, while our pick-1-element algorithm provides such bounds efficiently. The LP method is expensive because it must solve a large convex program of design dimension mn, which becomes prohibitive when m and n are large [8].
In our pick-1-element algorithm, we proposed the novel idea of sorting the α_{1,{i}}'s (see Lemma 2), which leads to improved performance bounds on α_k and the recoverable sparsity k. This sorting idea, combined with Lemma 2, yields a larger recoverable sparsity bound than using α_1 alone as in ([8], Section 4.2.B).

2.
Set-specific upper bounds. Our proposed pick-l-element algorithm (l≥2) is novel and provides improved bounds on α_k and the recoverable sparsity k, with computational complexity polynomial in n when l is fixed. This approach is not practical when l is large; however, the pick-2-element and pick-3-element algorithms already provide improved performance bounds compared with the previous research [3, 8].
The fact that we can obtain upper bounds on α_k from the results of the pick-l-element (l≥2) algorithm is new and nontrivial (see Lemmas 2, 3, and 4). For example, if we know α_5≤0.22, Lemma 3 gives α_11≤0.22×11/5=0.484<0.5.
Our pick-l-element algorithm can provide a set-specific upper bound on α_{k,K}, laying the foundation for our branch-and-bound TSA.

3.
Computational complexity of TSA. We proposed TSA to find precise values of α_k with significantly lower average-case computational complexity than ESM. The complexity of TSA depends on n, the sparsity k, and a chosen constant l. When k, n, and l are large enough, finding α_k via TSA is still computationally expensive; in the worst case, TSA has the same complexity as ESM. However, our extensive simulations in Figs. 3, 4, and 5 and Tables 5, 6, 7, and 8 show that, on average, TSA greatly reduces the computational cost of finding α_k compared with ESM.
Moreover, since TSA maintains an upper bound and a lower bound on α_k during its iterations, one can always terminate TSA early and still obtain tighter bounds on α_k than the LP and SDP methods. We can also use TSA to find an exact value of α_l, l<k, and then use Lemma 3 to bound α_k.

4.
Use of data structures. We used object-oriented programming (OOP) to implement TSA in Matlab [25], because OOP makes it easy to handle tree-type structures. We defined a class and created objects from it to store the properties of each node J, e.g., B(J), in the tree. To connect two tree nodes, we used a doubly linked list as part of the object. For readers who would like to implement the algorithm with alternative data structures, we provide implementation-agnostic pseudocode in Algorithm 2.

5.
Difference from phase transition works. There has been extensive research on the phase transitions of various sparse recovery algorithms such as Basis Pursuit (BP), Orthogonal Matching Pursuit (OMP), and Approximate Message Passing (AMP) [1]. However, our research differs from the phase transition literature in two respects. First, our work and the previous works [3, 8] focus on worst-case performance guarantees (recovering all possible k-sparse signals), while phase transition research considers the average-case guarantee for a single k-sparse signal with fixed support and sign pattern. Second, phase transition bounds are mostly derived for random matrices and therefore cannot be applied to a particular deterministic sensing matrix.
6 Conclusion
In this paper, we considered the problem of verifying the null space condition in compressed sensing. Calculating the proportion parameter α_k that characterizes the null space condition of a sensing matrix is a nonconvex optimization problem, known to be NP-hard [7]. To verify the null space condition, we proposed novel and simple enumeration-based algorithms, the basic and optimized pick-l algorithms, to obtain upper bounds on α_k. Building on these algorithms, we designed the tree search algorithm, which obtains a global solution to the nonconvex optimization problem of verifying the null space condition. Numerical experiments show that our algorithms outperform the previously proposed algorithms [3, 8] in both accuracy and speed.
7 Appendix
8 Proof of Proposition 1
Proof
Let us denote the sum of k maximal magnitudes of elements of \(x \in \mathbb {R}^{n}\) as
We use i_{1}, i_{2},…, and i_{ k } (or j_{1}, j_{2},…, and j_{ k }) to denote k distinct integers between 1 and n. For a matrix, say A, we use A_{i,j} to represent its element in the ith row and jth column.
where we can exchange the order of “maximize” and “minimize” in the last equality because \(\|e_{i_{t}} - A^{T} y_{i_{t}}\|_{\infty }\) only depends on \(y_{i_{t}}\).
Moreover, according to the equations for “ α^{i}” between (4.29) and (4.30) in [8] (taking β there to be ∞),
Combining this with (22), \(\alpha _{k}^{LP}\) is no bigger than the upper bound calculated by Lemma 2 (based on the pick1element algorithm). Namely,
□
Notes
We conducted our experiments on an HP Z220 CMT with an Intel Core i7-3770 dual-core CPU @ 3.4 GHz clock speed and 16 GB DDR3 RAM, using Matlab R2013b on Windows 7.
LP method from http://www2.isye.gatech.edu/~nemirovs/ and SDP method from http://www.di.ens.fr/~aspremon/NSPcode.html.
References
YC Eldar, G Kutyniok, Compressed Sensing: Theory and Applications (Cambridge University Press, Cambridge, 2012).
EJ Candès, T Tao, Decoding by linear programming. IEEE Trans. Inf. Theory. 51(12), 4203–4215 (2005).
A d’Aspremont, L El Ghaoui, Testing the nullspace property using semidefinite programming. Math. Prog. 127(1), 123–144 (2011).
EJ Candès, J Romberg, T Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory. 52(2), 489–509 (2006).
EJ Candès, J Romberg, T Tao, Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math. 59(8), 1207–1223 (2006).
D Donoho, Neighborly polytopes and sparse solution of underdetermined linear equations. Technical report (Dept. of Statistics, Stanford University, Stanford, 2005).
AM Tillmann, ME Pfetsch, The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inf. Theory. 60(2), 1248–1259 (2014).
A Juditsky, A Nemirovski, On verifiable sufficient conditions for sparse signal recovery via ℓ _{1} minimization. Math. Prog. 127(1), 57–88 (2011).
A Cohen, W Dahmen, R DeVore, Compressed sensing and best k-term approximation. J. American Math. Soc. 22(1), 211–231 (2009).
W Xu, B Hassibi, Precise stability phase transitions for ℓ _{1} minimization: a unified geometric framework. IEEE Trans. Inf. Theory. 57(10), 6894–6919 (2011).
K Lee, Y Bresler, in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Computing performance guarantees for compressed sensing, (2008), pp. 5129–5132.
G Tang, A Nehorai, in Proceedings of Conference on Information Sciences and Systems (CISS). Verifiable and computable ℓ _{ ∞ } performance evaluation of ℓ _{1} sparse signal recovery, (2011), pp. 1–6.
M Cho, W Xu, in Proceedings of Asilomar Conference on Signals, Systems and Computers. New algorithms for verifying the null space conditions in compressed sensing, (2013), pp. 1038–1042.
W Xu, E Mallada, A Tang, in Proceedings of IEEE International Conference on Computer Communications (INFOCOM). Compressive sensing over graphs, (2011), pp. 2087–2095.
MH Firooz, S Roy, in Proceedings of IEEE Global Telecommunications Conference (GLOBECOM). Network tomography via compressed sensing, (2010), pp. 1–5.
MJ Coates, RD Nowak, in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Network tomography for internal delay estimation, vol.6, (2001), pp. 3409–3412.
M Grant, S Boyd, CVX: Matlab software for disciplined convex programming, version 2.1 beta (2012). http://cvxr.com/cvx.
MOSEK ApS, The MOSEK optimization toolbox for MATLAB manual. Version 7.1 (Revision 31) (2015). http://docs.mosek.com/7.1/toolbox/index.html.
Y Tsang, M Coates, RD Nowak, Network delay tomography. IEEE Trans. Signal Process. 51(8), 2125–2136 (2003).
Y Vardi, Network tomography: estimating sourcedestination traffic intensities from link data. J. American Stat. Assoc. 91(433), 365–377 (1996).
R Castro, M Coates, G Liang, R Nowak, B Yu, Network tomography: recent developments. Stat. Sci. 19(3), 499–517 (2004).
DL Donoho, J Tanner, Precise undersampling theorems. Proc. IEEE. 98(6), 913–924 (2010).
G Brassard, P Bratley, Fundamentals of Algorithmics (Prentice Hall, Englewood Cliffs, 1996).
DB Wilson, in Proceedings of the twenty-eighth annual ACM Symposium on Theory of Computing. Generating random spanning trees more quickly than the cover time, (1996), pp. 296–303.
Mathworks, Object-Oriented Programming in MATLAB. https://www.mathworks.com/discovery/objectorientedprogramming.html. Accessed 26 May 2017.
Acknowledgements
We thank Alexandre d’Aspremont from CNRS at École Normale Supérieure, Anatoli Juditsky from Laboratoire Jean Kuntzmann at Université Grenoble Alpes, and Arkadi Nemirovski from Georgia Institute of Technology for helpful discussions and for providing the codes for the simulations in [3] and [8].
Funding
The work of Weiyu Xu is supported by Simons Foundation 318608, KAUST OCRF-2014-CRG3, NSF DMS-1418737, and NIH 1R01EB020665-01.
Availability of data and material
All the codes used for the numerical experiments are available at the following link: https://sites.google.com/view/myungcho/software/nsc.
Author information
Authors and Affiliations
Contributions
MC and WX designed the algorithms. MC implemented the algorithms. KVM checked the implementation of the algorithms and helped to polish the manuscript. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Cho, M., Vijay Mishra, K. & Xu, W. Computable performance guarantees for compressed sensing matrices. EURASIP J. Adv. Signal Process. 2018, 16 (2018). https://doi.org/10.1186/s13634-018-0535-y