
Computable performance guarantees for compressed sensing matrices

Abstract

The null space condition for \(\ell_1\) minimization in compressed sensing is a necessary and sufficient condition on the sensing matrices under which a sparse signal can be uniquely recovered from the observation data via \(\ell_1\) minimization. However, verifying the null space condition is known to be computationally challenging. Most of the existing methods can provide only upper and lower bounds on the proportion parameter that characterizes the null space condition. In this paper, we propose new polynomial-time algorithms to establish upper bounds on the proportion parameter. We leverage these techniques to find upper bounds and further develop a new procedure—tree search algorithm—that is able to verify the null space condition precisely and quickly. Numerical experiments show that the execution speed and accuracy of the results obtained from our methods far exceed those of the previous methods, which rely on linear programming (LP) relaxation and semidefinite programming (SDP).

1 Introduction

Compressed sensing is an efficient signal processing technique to recover a sparse signal from fewer samples than required by the Nyquist-Shannon theorem, reducing the time and energy spent in the sampling operation. These advantages make compressed sensing attractive in various signal processing areas [1].

In compressed sensing, we are interested in recovering the sparsest vector \(x \in \mathbb {R}^{n}\) that satisfies the underdetermined equation y=Ax. Here, \(\mathbb {R}\) is the set of real numbers, \(A \in \mathbb {R}^{m \times n}, \; m < n\), is a sensing matrix, and \(y \in \mathbb {R}^{m}\) is the observation or measurement data. This is posed as an \(\ell_0\) minimization problem:

$$\begin{array}{*{20}l} & \text{minimize}~ \| x \|_{0} \\ & \text{subject to}~y = Ax, \end{array} $$
(1)

where \(\|x\|_0\) is the number of non-zero elements in the vector x. The \(\ell_0\) minimization problem is NP-hard. Therefore, we often relax (1) to its closest convex approximation—the \(\ell_1\) minimization problem:

$$\begin{array}{*{20}l} & \text{minimize}~ \| x \|_{1} \\ & \text{subject to}~ y = Ax. \end{array} $$
(2)

It has been shown that the optimal solution of \(\ell_0\) minimization can be obtained by solving \(\ell_1\) minimization under certain conditions (e.g., the restricted isometry property or RIP) [2–6]. For random sensing matrices, these conditions hold with high probability. We note that RIP is a sufficient condition for sparse recovery [7].

A necessary and sufficient condition under which a k-sparse signal x (\(k \ll n\)) can be uniquely obtained via \(\ell_1\) minimization is the null space condition (NSC) [3, 8, 9]. A matrix A satisfies NSC for a positive integer k if

$$\begin{array}{*{20}l} ||z_{K} ||_{1} < || z_{\overline K} ||_{1} \end{array} $$
(3)

holds true for all \(z \in \{z:\; Az=0,\; z\neq 0\}\) and for all subsets \(K \subseteq \{1,2,\ldots,n\}\) with |K|≤k. Here, K is an index set, |K| is the cardinality of K, \(z_K\) is the part of the vector z over the index set K, and \(\overline {K}\) is the complement of K. NSC is related to the proportion parameter \(\alpha_k\) defined as

$$\begin{array}{*{20}l} \alpha_{k} \triangleq \underset{\{z:\; Az=0,\;z\neq 0\}}{\text{maximize}} \underset{{\{K:\; |K|\leq k\}}}{\text{maximize}} ~\frac{\|z_{K} \|_{1}}{\|z\|_{1}}. \end{array} $$
(4)

\(\alpha_k\) is the optimal value of the following optimization problem:

$$\begin{array}{*{20}l} &\underset{z,\{K:\;|K|\leq k\}}{\text{maximize}}\;\; \| z_{K} \|_{1} \\ & ~\text{subject to}~\| z \|_{1} \leq 1,\; Az = 0, \end{array} $$
(5)

where K is a subset of {1,2,…,n} with cardinality at most k. The matrix A satisfies NSC for a positive integer k if and only if \(\alpha _{k}<\frac {1}{2}\). Equivalently, NSC can be verified by computing or estimating \(\alpha_k\). The parameter \(\alpha_k\) also plays an important role in the recovery of an approximately sparse signal x via \(\ell_1\) minimization, where a smaller \(\alpha_k\) implies more robustness [8–10].

We are interested in computing α k and, especially, finding the maximum k for which \(\alpha _{k} < \frac {1}{2}\). However, computing α k to verify NSC is extremely expensive and was reported in [7] to be NP-hard. Due to the challenges in computing α k , verifying NSC explicitly for deterministic sensing matrices remains a relatively unexamined research area. In [3, 8, 11, 12], convex relaxations were used to establish upper or lower bounds of α k (or other parameters related to α k ) instead of computing the exact α k . While [3, 11] proposed semidefinite programming-based methods, [8, 12] suggested linear programming relaxations to obtain the upper and lower bounds of α k . For both methods, computable performance guarantees on sparse signal recovery were reported via bounding α k . However, these bounds of α k could only verify NSC with \(k=O(\!\sqrt {n})\), even though theoretically, k can grow linearly with n.

Our work drastically departs from these prior methods [3, 8, 11, 12] that provide only upper and lower bounds. In our solution, we propose the pick-l-element algorithms (1≤l<k), which compute upper bounds of \(\alpha_k\) in polynomial time. Subsequently, we leverage these algorithms to develop the tree search algorithm (TSA)—a new method to compute the exact \(\alpha_k\) while significantly reducing the computational complexity of an exhaustive search. This algorithm offers a smooth trade-off between the complexity and the accuracy of the computations. In the conference precursor to this paper, we introduced the sandwiching algorithm (SWA) [13], which employs a branch-and-bound method. Although SWA can also be used to calculate the exact \(\alpha_k\), it uses more memory than TSA. On the other hand, TSA provides memory and performance benefits for high-dimensional matrices (e.g., up to size 6000×6000).

It is noteworthy that our methods are different from RIP or the neighborly polytope framework for analyzing the sparse recovery capability of random sensing matrices. For example, prior works such as [6, 22] employ the neighborly polytope to predict theoretical lower bounds on the recoverable sparsity k for a randomly chosen Gaussian matrix. In contrast, our methods do not resort to a probabilistic analysis and are applicable to any given deterministic sensing matrix. Our algorithms also have the strength of providing better bounds than the existing methods [3, 8, 11, 12] for a wide range of matrix sizes.

1.1 Main contributions

We summarize our main contributions as follows:

  1. (i)

    Faster algorithms for high dimensions. We designed the pick-l algorithm (and its optimized version), where l is a chosen integer, to provide upper bounds on \(\alpha_k\). We show that as l increases, the optimized pick-l algorithm provides a tighter upper bound on \(\alpha_k\). Numerical experiments show that, even with l=2 or 3, the pick-l algorithm already provides a better bound on \(\alpha_k\) than the previous algorithms based on LP [8] and SDP [3]. For large sensing matrices, the pick-1-element algorithm can be significantly faster than the LP and SDP methods.

  2. (ii)

    Novel formulations using branch-and-bound. Based on the pick-l algorithm, we propose a branch-and-bound tree search approach to compute tighter bounds or even the exact value of \(\alpha_k\). To the best of our knowledge, this tree search algorithm is the first branch-and-bound algorithm to verify NSC for \(\ell_1\) minimization. This branch-and-bound approach depends heavily on the pick-l algorithm developed in this paper; for example, the LP [8] and SDP [3] methods cannot be directly adapted to an efficient branch-and-bound approach because they lack subset-specific upper bounds on \(\alpha_k\). In numerical experiments, we demonstrate that the tree search algorithm reduces the execution time for precisely calculating \(\alpha_k\) by around 40–8000 times compared to the exhaustive search method.

  3. (iii)

    Simultaneous upper and lower bounds. The branch-and-bound tree search algorithm simultaneously maintains upper and lower bounds on \(\alpha_k\) during run-time. This has two benefits. Firstly, if one is interested merely in certifying NSC for a positive integer k rather than obtaining the exact \(\alpha_k\), one can terminate TSA early: as soon as the global upper (lower) bound drops below (exceeds) 1/2, we can conclude that NSC for that k is satisfied (not satisfied). Secondly, if TSA is terminated early due to, say, constraints on running time, the process still yields meaningful bounds on \(\alpha_k\) via the continuously maintained upper and lower bounds.

  4. (iv)

    New results on recoverable sparsity. For a certain l<k, we can compute \(\alpha_l\) or its upper bound by using the branch-and-bound tree search algorithm (for example, based on the pick-1-element algorithm). We introduce a novel result (Lemma 3), which uses \(\alpha_l\) to lower bound the recoverable sparsity k. This approach of lower bounding the recoverable sparsity k is useful when k is too large to perform the pick-k-element algorithm directly (which requires \(\binom {n}{k}\) enumerations).

1.2 Notations and preliminaries

We denote the sets of real numbers and positive integers as \(\mathbb {R}\) and \(\mathbb {Z}^{+}\) respectively. We reserve uppercase letters K and L for index sets and lowercase letters \(k, l \in \mathbb {Z}^{+}\) for their respective cardinalities. We also use |·| to denote the cardinality of a set. We assume k>l≥1 throughout the paper. For vectors or scalars, we use lowercase letters, e.g., x,k,l. For a vector \(x \in \mathbb {R}^{n}\), we use \(x_i\) for its i-th element. If we use an index set as a subscript of a vector, it represents the partial vector over the index set; for example, when \(x \in \mathbb {R}^{n}\) and K={1,2}, \(x_K\) represents \([x_1,x_2]^T\). We reserve the uppercase A for a sensing matrix of dimension m×n. Since the number of columns of a sensing matrix A is n, the full index set we consider is {1,2,…,n}. In addition, we denote the \(\binom {n}{l}\) subsets of cardinality l by \(L_i\), \(i=1,\ldots,\binom {n}{l}\), where \(L_i \subseteq \{1,2,\ldots,n\}\), \(|L_i|=l\). We use the superscript * to represent an optimal solution of an optimization problem; for instance, \(z^*\) and \(K^*\) are the optimal solutions of (5). Since we need to represent an optimal solution for each index set \(L_i\), we use the superscript i* to represent an optimal solution for an index set \(L_i\), e.g., \(z^{i*}\). The maximum value of k such that both \(\alpha_k < \frac{1}{2}\) and \(\alpha_{k+1} \geq \frac{1}{2}\) hold true is denoted the maximum recoverable sparsity \(k_{max}\).

2 Pick-l-element algorithm

Consider a sensing matrix with n columns. Then, there are \(\binom {n}{k}\) subsets K, each of cardinality k. When n and k are large, exhaustive search over these subsets to compute \(\alpha_k\) is extremely expensive. For example, when n=100 and k=10, it takes a search over \(\binom{100}{10} \approx 1.73\times 10^{13}\) subsets to compute \(\alpha_k\) — a combinatorial task that is beyond the technological reach of common desktop computers. Our goal is to devise algorithms that can rapidly yield an exact value of \(\alpha_k\). As an initial step, we develop a method to compute an upper bound of \(\alpha_k\) in polynomial time, which is called the pick-l-element algorithm (or simply, pick-l algorithm), where l is a chosen integer such that 1≤l<k.

Let us define the proportion parameter for a given index set L such that |L|=l, denoted by αl,L, as

$$\begin{array}{*{20}l} \alpha_{l,L} \triangleq \underset{\{z:\; Az=0,\;z\neq 0\}}{\text{maximize}} \frac{\|z_{L} \|_{1}}{\|z\|_{1}}. \end{array} $$
(6)

Problem (6) is the partial optimization of (4) for a fixed index set L, considering only vectors z in the null space of A. We can obtain \(\alpha_{l,L}\) by solving the following optimization problem:

$$\begin{array}{*{20}l} & \underset{z}{\text{maximize}}~\| z_{L} \|_{1} \\ & \text{subject to}~\| z \|_{1} \leq 1, \; Az = 0. \end{array} $$
(7)

Since (7) maximizes a convex function for a given subset L, we cast (7) as \(2^l\) linear programming problems by considering all the possible sign patterns of the elements of \(z_L\) (e.g., if l=2 and L={1,2}, then \(\|z_L\|_1=|z_1|+|z_2|\) can correspond to \(2^l=4\) possibilities: \(z_1+z_2\), \(z_1-z_2\), \(-z_1+z_2\), and \(-z_1-z_2\)). \(\alpha_{l,L}\) is equal to the maximum among the \(2^l\) objective values.
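As an illustration, the following sketch solves (7) by enumerating the \(2^l\) sign-pattern LPs. It is a minimal Python/SciPy rendering under our own naming (the paper's experiments solve (7) with MOSEK in Matlab); the split \(z = u - v\) with \(u, v \geq 0\) is a standard way to linearize the \(\ell_1\) constraint.

```python
import itertools

import numpy as np
from scipy.optimize import linprog

def alpha_lL(A, L):
    """Solve (7): maximize ||z_L||_1 s.t. ||z||_1 <= 1, Az = 0, by
    enumerating the 2^l sign patterns of z_L; each pattern is one LP.
    A hypothetical sketch, not the authors' MOSEK implementation."""
    m, n = A.shape
    best = 0.0
    for signs in itertools.product([1.0, -1.0], repeat=len(L)):
        # Split z = u - v with u, v >= 0, so that ||z||_1 <= sum(u + v) <= 1.
        c = np.zeros(2 * n)
        for s, j in zip(signs, L):
            c[j], c[n + j] = -s, s          # minimize -sum_j s_j (u_j - v_j)
        res = linprog(c,
                      A_ub=np.ones((1, 2 * n)), b_ub=[1.0],      # sum(u+v) <= 1
                      A_eq=np.hstack([A, -A]), b_eq=np.zeros(m),  # A(u-v) = 0
                      bounds=[(0, None)] * (2 * n))
        if res.success:
            best = max(best, -res.fun)
    return best
```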

The pick-l algorithm uses the \(\alpha_{l,L}\)'s obtained from different index sets to compute an upper bound of \(\alpha_k\). Algorithm 1 shows the steps of the pick-l algorithm in detail. The following lemmata show that the pick-l algorithm provides an upper bound of \(\alpha_k\). Firstly, we provide Lemma 1 to derive an upper bound of the proportion parameter for a fixed index set K; then, in Lemma 2, we show that the pick-l algorithm yields an upper bound of \(\alpha_k\).

Lemma 1

(Cheap Upper Bound (CUB) for a given subset K) Given a subset K, we have

$$\begin{array}{*{20}l} CUB(\alpha_{k,K}) \triangleq \frac{1}{{\binom{k-1}{l-1}}} \sum\limits_{\{L_{i} \subseteq K,\; |L_{i}|=l\}} \alpha_{l,L_{i}} \geq \alpha_{k,K}. \end{array} $$
(8)

Proof

Suppose that when \(z=z^{i*}\) and \(z=z^{*}\), we achieve the optimal value of (6) for the given index sets \(L_i\) and K respectively, i.e., \(\alpha _{l,L_{i}} = \frac {\| z^{i*}_{L_{i}} \|_{1} }{ \| z^{i*} \|_{1} }\) and \(\alpha _{k,K} = \frac {\|z^{*}_{K}\|_{1}}{\|z^{*}\|_{1}}\). Since each element of K appears \({\binom {k-1}{l-1}}\) times in \(\{L_i \subseteq K,\; |L_i|=l\}\), we obtain the following inequality:

$$\begin{array}{*{20}l} \alpha_{k,K} & = \frac{\| z^{*}_{K} \|_{1} }{ \| z^{*} \|_{1}} = \frac{1}{{\binom{k-1}{l-1}}} \sum_{\{L_{i} \subseteq K,\; |L_{i}|=l\}} \frac{\| z^{*}_{L_{i}} \|_{1} }{ \| z^{*} \|_{1}} \\ & \leq \frac{1}{{\binom{k-1}{l-1}}} \sum_{\{L_{i} \subseteq K,\; |L_{i}|=l\}} \frac{\| z^{i*}_{L_{i}} \|_{1} }{ \| z^{i*} \|_{1}} = CUB(\alpha_{k,K}). \end{array} $$

The inequality is from the optimal value of (6) for each index set L i . □

Lemma 2

The pick-l algorithm provides an upper bound of α k , namely

$$\begin{array}{*{20}l} \alpha_{k} \leq \frac{1}{{\binom{k-1}{l-1}}} \sum_{i=1 }^{\binom{k}{l}} \alpha_{l,L_{i}}, \end{array} $$
(9)
$$\begin{array}{*{20}l} \text{where}\;\; \alpha_{l,L_{1}} \geq \alpha_{l,L_{2}} \geq \cdots \geq \alpha_{l,L_{i}} \geq \cdots \geq \alpha_{l,L_{\binom{n}{l}}}. \end{array} $$
(10)

Proof

Without loss of generality, we assume that when \(z=z^{i*}\), \(i=1,2,\ldots,\binom {n}{l}\), the \(\alpha _{l,L_{i}}\)'s are obtained in descending order as in (10). It is noteworthy that \(\alpha_{k,K}\) is defined for a fixed set K, whereas \(\alpha_k\) is the maximum value over all subsets of cardinality k. Suppose that when \(z=z^{*}\) and \(K=K^{*}\), \(\alpha_k\) is achieved in (4). From the aforementioned definitions and a similar argument as in Lemma 1, we have:

$$\begin{array}{*{20}l} \alpha_{k} = \alpha_{k,K^{*}} \leq \frac{1}{{\binom{k-1}{l-1}}} \sum\limits_{\{L_{i} \subseteq K^{*},\; |L_{i}|=l\}} \alpha_{l,L_{i}} \leq \frac{1}{{\binom{k-1}{l-1}}} \sum\limits_{i=1 }^{\binom{k}{l}} \alpha_{l,L_{i}}. \end{array} $$

The first inequality is from Lemma 1, and the last inequality is from the assumption that \(\alpha _{l,L_{i}}\)’s are sorted in descending order. □
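Given the alpha_lL() sketch above, Lemma 2 translates into a few lines: enumerate all \(\binom{n}{l}\) subsets, sort the resulting \(\alpha_{l,L_i}\)'s in descending order, and combine the \(\binom{k}{l}\) largest ones. A hypothetical sketch of Algorithm 1, not the authors' code:

```python
from itertools import combinations
from math import comb

def pick_l_upper_bound(A, k, l):
    """Lemma 2 (basic pick-l): sum the C(k, l) largest alpha_{l,L}
    values over all C(n, l) subsets L and divide by C(k-1, l-1).
    Uses the alpha_lL() sketch above."""
    n = A.shape[1]
    vals = sorted((alpha_lL(A, L) for L in combinations(range(n), l)),
                  reverse=True)
    return sum(vals[:comb(k, l)]) / comb(k - 1, l - 1)
```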

Steps 2 and 3 in Algorithm 1, which sort the \(\alpha_{l,L}\)'s and compute an upper bound of \(\alpha_k\) from the sorted values via (9), can also be carried out without any sorting by solving the following optimization problem:

$$\begin{array}{*{20}l} & \underset{\gamma_{i},\; 1 \leq i \leq \binom{n}{l}}{ \text{maximize}} \;\; \sum_{i=1}^{\binom{n}{l}} \gamma_{i} \; \alpha_{l,L_{i}} \\ & \text{subject to}~0 \leq \gamma_{i} \leq \frac{1}{{\binom{k-1}{l-1}}},\; 1\leq i \leq \scriptstyle{\binom{n}{l}}, \\ & \quad\quad\quad\quad~\sum_{i=1}^{\binom{n}{l}} \gamma_{i} \leq \frac{k}{l}. \end{array} $$
(11)

Here, we note that \(\frac {1}{{\binom {k-1}{l-1}}} \times {\binom {k}{l}} = \frac {k}{l}\). Therefore, at the optimum, the \(\binom {k}{l}\) largest \(\alpha _{l,L_{i}}\)'s are chosen, each with coefficient \(\frac {1}{{\binom {k-1}{l-1}}}\).

The upshot of the pick-l algorithm is that we reduce the number of enumerations from \(\binom {n}{k}\) to \(\binom {n}{l}\). For example, when n=300, k=20, and l=2, the number of enumerations is reduced by a factor of around \(10^{26}\). Moreover, as n increases, the reduction rate increases. Even with the reduced enumerations, we still obtain non-trivial upper bounds of \(\alpha_k\) through the pick-l-element algorithm. We present the performance of the pick-l algorithm in Section 5, showing that it provides better upper bounds than the previous research [3, 8] even when l=2. Furthermore, thanks to the pick-l algorithm, we can design a new algorithm based on branch-and-bound search that calculates \(\alpha_k\) by using the upper bounds of \(\alpha_k\) obtained from the pick-l algorithm. It is noteworthy that the cheap upper bound introduced in Lemma 1 provides upper bounds on \(\alpha_{k,K}\) for specific subsets K, which enables our branch-and-bound method to calculate \(\alpha_k\) or more precise bounds on \(\alpha_k\). In contrast, the LP relaxation method [8] and the SDP method [3] do not provide upper bounds on \(\alpha_{k,K}\) for specific subsets K, which prevents them from being used in the branch-and-bound method.

Since we are also interested in kmax, we introduce the following Lemma 3 to bound the maximum recoverable sparsity kmax.

Lemma 3

The maximum recoverable sparsity k max satisfies

$$\begin{array}{*{20}l} k(\alpha_{l}) \triangleq \left\lceil { l \cdot \frac{1/2}{\alpha_{l}}} \right\rceil - 1 \leq k_{max}, \end{array} $$
(12)

where \(\lceil \cdot \rceil\) is the ceiling function.

Proof

To prove this lemma, we will show that when \(k = \left \lceil { l \cdot \frac {1/2}{\alpha _{l}}} \right \rceil - 1\), \(\alpha _{k} < \frac {1}{2}\). This can be concluded from the upper bound of α k given as follows:

$$\begin{array}{*{20}l} \alpha_{k} = \alpha_{k,K^{*}} & \leq \frac{1}{\binom{k-1}{l-1}} \sum\limits_{\{L_{i} \subseteq K^{*},\; |L_{i}|=l\}} \alpha_{l,L_{i}}\\ & \leq \frac{\binom{k}{l}}{\binom{k-1}{l-1}} \alpha_{l} = \alpha_{l} \cdot \frac{k}{l}. \end{array} $$
(13)

Note that there are \(\binom {k}{l}\) terms in the summation. From (13), if \(\alpha _{l} \cdot \frac {k}{l} < \frac {1}{2}\), then \(\alpha _{k} < \frac {1}{2}\). In other words, if \(k < l \cdot \frac {1/2}{\alpha _{l}}\), then \(\alpha _{k} < \frac {1}{2}\). Since k is a positive integer, when \(k = \big \lceil { l \cdot \frac {1/2}{\alpha _{l}}} \big \rceil - 1\), \(\alpha _{k} < \frac {1}{2}\). Therefore, the maximum recoverable sparsity kmax should be larger than or at least equal to \(\big \lceil {l \cdot \frac {1/2}{\alpha _{l}}} \big \rceil - 1\). □

It is noteworthy that in ([8] Section 4.2.B), the authors introduced a lower bound on k based on \(\alpha_1\), i.e., \(k(\alpha_1)\); Lemma 3 provides a more general result. Furthermore, in Lemma 3, instead of using \(\alpha_l\), we can use an upper bound of \(\alpha_l\) to lower bound the recoverable sparsity k; namely, \(k(UB(\alpha _{l})) = \left \lceil { l \cdot \frac {1/2}{UB(\alpha _{l})}} \right \rceil - 1 \leq k_{max}\), where \(UB(\alpha_l)\) represents an upper bound of \(\alpha_l\). Since the proof follows the same lines as the proof of Lemma 3, we omit it.
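Lemma 3 reduces to one arithmetic step. A small helper illustrating it (the function name is ours):

```python
from math import ceil

def recoverable_sparsity(alpha_l, l):
    """Lemma 3: k(alpha_l) = ceil(l * (1/2) / alpha_l) - 1 <= k_max.
    alpha_l may be replaced by any upper bound UB(alpha_l)."""
    return ceil(l * 0.5 / alpha_l) - 1

# Example: alpha_5 <= 0.22 certifies k_max >= ceil(2.5 / 0.22) - 1 = 11.
```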

Finally, we introduce the following proposition to compare our algorithm with the LP method [8] theoretically.

Proposition 1

For any integer k≥1, let \(\alpha _{k}^{pick1}\) be the upper bound on \(\alpha_k\) provided by the pick-1-element algorithm according to Lemma 2. Let \(\alpha _{k}^{LP}\) be the upper bound on \(\alpha_k\) provided by the LP method [8] according to the following definition (namely, Eq. (4.25) in [8] with \(\beta=\infty\)):

$$\begin{array}{*{20}l} \alpha_{k}^{LP} = \underset{Y=[y_{1},\ldots,y_{n}]\in \mathbb{R}^{m\times n}}{ \text{minimize}}\;\left\{\underset{1\leq j \leq n}{ \text{maximize}}\; ||(I-Y^{T} A)e_{j}||_{k,1}\; \right\}, \end{array} $$

where \(e_j\) is the standard basis vector with the j-th element equal to 1, and \(\|\cdot\|_{k,1}\) stands for the sum of the k maximal magnitudes of the components of a vector. Then we have:

$$\begin{array}{*{20}l} \alpha_{k}^{pick1} \geq \alpha_{k}^{LP}. \end{array} $$
(14)

For readability, we place the proof of Proposition 1 in Appendix A.

The LP method can provide tighter upper bounds on α k than the pick-1-element algorithm; however, this comes at a cost of solving a big optimization problem of design dimension mn. When m and n are large, the complexity of computing \(\alpha _{k}^{LP}\) can be prohibitive (please see Table 2).

3 Optimized pick-l algorithm

We can tighten the upper bound of \(\alpha_k\) in the pick-l algorithm by replacing the constant factor \(\frac{1}{\binom{k-1}{l-1}}\) in (9) with optimized coefficients at the cost of additional complexity; we call the result the optimized pick-l algorithm. The optimized pick-l algorithm is mostly useful from a theoretical perspective. In practice, it gives upper bounds of \(\alpha_k\) that are improved but similar to those of the basic pick-l algorithm described in Section 2. As a theoretical merit of the optimized pick-l algorithm, we can show that as l increases, the upper bound of \(\alpha_k\) becomes smaller or stays the same.

The optimized pick-l algorithm provides an upper bound of α k via the following optimization problem:

$$\begin{array}{*{20}l} & \underset{\gamma_{i},\; 1 \leq i \leq \binom{n}{l}}{ \text{maximize}} \;\; \sum_{i=1}^{\binom{n}{l}} \gamma_{i} \; \alpha_{l,L_{i}} \\ & \text{subject to} \;\; \gamma_{i} \geq 0,\; 1 \leq i \leq \scriptstyle{\binom{n}{l}}, \\ & \quad\quad\quad\quad~\sum\limits_{i=1}^{\binom{n}{l}} \gamma_{i} \leq \frac{k}{l}, \\ & \quad\quad \sum\limits_{\{i:\; B \subseteq L_{i},\; 1\leq i \leq \binom{n}{l} \}} \gamma_{i} \leq \frac{\binom{k-b}{l-b}}{\binom{k-1}{l-1}}, \,\,\overset{\forall\ b\ \in\ \mathbb{Z}^{+}\ \text{s.t.}\ 1\ \leq\ b\ \leq\ l, }{\!\!\!\!\!\!\forall\ B\ \text{with}\ |B|=b} \end{array} $$
(15)
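For concreteness, the following sketch assembles and solves (15) with SciPy's LP solver. The function and variable names are ours, and the constraint matrix is built naively (one row per pair of b and B), so the sketch is only meant for small n:

```python
from itertools import combinations
from math import comb

import numpy as np
from scipy.optimize import linprog

def optimized_pick_l_bound(alphas, subsets, n, k, l):
    """Set up and solve (15). subsets = list(combinations(range(n), l))
    and alphas[i] = alpha_{l, subsets[i]}. The number of constraints
    grows as sum_b C(n, b), so this is feasible only for small n."""
    N = len(subsets)
    rows, rhs = [np.ones(N)], [k / l]           # sum_i gamma_i <= k/l
    for b in range(1, l + 1):                   # third constraint of (15)
        bound = comb(k - b, l - b) / comb(k - 1, l - 1)
        for B in combinations(range(n), b):
            Bset = set(B)
            rows.append(np.array([1.0 if Bset <= set(L) else 0.0
                                  for L in subsets]))
            rhs.append(bound)
    res = linprog(-np.asarray(alphas), A_ub=np.vstack(rows),
                  b_ub=np.array(rhs), bounds=[(0, None)] * N)
    return -res.fun                             # maximized objective of (15)
```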

In the following lemmata, we show that the optimized pick-l algorithm produces an upper bound of α k and this bound is tighter than that of the basic pick-l algorithm introduced in (11). The last lemma establishes that as l increases, the upper bound of α k decreases or stays the same.

Lemma 4

The optimized pick-l algorithm provides an upper bound of α k .

Proof

The strategy to prove Lemma 4 is to show that one feasible solution of (15) gives an upper bound of \(\alpha_k\). Suppose that when \(K=K^{*}\), \(\alpha_k\) is achieved, i.e., \(\phantom {\dot {i}\!}\alpha _{k}=\alpha _{k,K^{*}}\). For a feasible solution, let us choose \(\gamma _{i} = \frac {1}{\binom {k-1}{l-1}}\) when \(L_i \subseteq K^{*}\), and \(\gamma_i=0\) otherwise; one can easily check that this satisfies the first and second constraints of (15). For the third constraint, let us first check the case b=l. For b=l, choose an arbitrary index set B with |B|=b=l. For this B, there is only one \(L_i\) such that \(B\subseteq L_i\), namely \(B=L_i\) itself, and the same holds for any other choice of B. Hence, for b=l, the third constraint represents

$$\begin{array}{*{20}l} \gamma_{i} \leq \frac{1}{\binom{k-1}{l-1}},\; i=1,2,\ldots,\binom{n}{l}. \end{array} $$
(16)

For b=1, the third constraint represents

$$\begin{array}{*{20}l} \sum\limits_{\{i:\; B\subseteq L_{i},\;1\leq i \leq \binom{n}{l},\; |B|=1 \}} \gamma_{i} \leq 1. \end{array} $$
(17)

Note that there are \(\binom {n-1}{l-1}\) sets \(L_i\) that contain the index set B as a subset. Among the corresponding \(\binom {n-1}{l-1}\) coefficients \(\gamma_i\), only those whose \(L_i\)'s are subsets of \(K^{*}\) equal \(\frac{1}{\binom{k-1}{l-1}}\). Since each element of \(K^{*}\) appears \(\binom {k-1}{l-1}\) times in \(\left \{i:\; L_{i} \subseteq K^{*},\; 1\leq i \leq \binom {n}{l} \right \}\), the summation of the \(\gamma_i\)'s whose corresponding \(L_i\)'s are subsets of \(K^{*}\) becomes \(\frac{1}{\binom{k-1}{l-1}}\times \binom {k-1}{l-1} = 1\), which satisfies (17). In essence, the third constraint ensures that, for any fixed index, the sum of the coefficients involving that index is at most 1. In the same way, for 1<b<l, the chosen \(\gamma_i\) is a feasible solution of (15). At this feasible solution, the objective value is \(\frac{1}{\binom{k-1}{l-1}} \sum _{\{i:\; L_{i} \subseteq K^{*},\; |L_{i}|=l\}} \alpha _{l,L_{i}}\), which is an upper bound of \(\alpha_k\) as shown in (13); hence, the optimal value of (15) is also an upper bound of \(\alpha_k\). □

Lemma 5

The optimized pick-l algorithm provides a tighter, or at least the same, upper bound of α k than the basic pick-l algorithm introduced in (11).

Proof

We will show that the optimization problem (11) is a relaxation of (15). As in the proof of Lemma 4, for b=l, the third constraint of (15) reduces to (16), which, together with \(\gamma_i \geq 0\), is the first constraint of (11). Since the third constraint of (15) also covers the values 1≤b<l, (15) has more constraints than (11). Therefore, the optimized pick-l algorithm, which solves (15), provides a tighter or at least the same upper bound as the basic pick-l algorithm. □

Lemma 6

The optimized pick-l algorithm provides a tighter or at least the same upper bound than the optimized pick-p algorithm when l>p.

Proof

We can upper bound the objective function of (15) by using (8) as follows:

$$\begin{array}{*{20}l} & \underset{\gamma_{i},\; 1 \leq i \leq \binom{n}{l}}{\text{maximize}} \;\; \frac{1}{\binom{l-1}{p-1}} \sum\limits_{i=1}^{\binom{n}{l}} \gamma_{i} \sum\limits_{\{j :\; P_{j} \subset L_{i},\; |P_{j}|=p \}} \alpha_{p,P_{j}} \\ & \text{subject to} \;\; \gamma_{i} \geq 0,\; 1 \leq i \leq \scriptstyle{\binom{n}{l}}, \\ & \quad\quad\quad\quad\;\; \sum\limits_{i=1}^{\binom{n}{l}} \gamma_{i} \leq \frac{k}{l}, \\ & \quad\quad \sum\limits_{\{i:\; B \subseteq L_{i},\; 1\leq i \leq \scriptstyle{\binom{n}{l}} \}} \gamma_{i} \leq \frac{\binom{k-b}{l-b}}{\binom{k-1}{l-1}}, \,\,\overset{\forall\ b\ \in\ \mathbb{Z}^{+}\ \text{s.t.}\ 1\ \leq\ b\ \leq\ l}{\!\!\!\!\!\forall\ {B}\ \text{with}\ |B|=b}. \end{array} $$
(18)

Note that in the objective function of (18), each \(\alpha _{p,P_{j}},\; 1\leq j\leq \binom {n}{p}\), appears \(\binom {n-p}{l-p}\) times. Let us define

$$\begin{array}{*{20}l} \gamma_{j}^{'} \triangleq \frac{1}{\binom{l-1}{p-1}} \sum\limits_{\{i:\; P_{j} \subset L_{i},\; 1\leq i \leq \binom{n}{l} \}} \gamma_{i}. \end{array} $$

We can relax (18) to the following problem, which turns out to be the same as the optimized pick-p algorithm:

$$\begin{array}{*{20}l} & \underset{\gamma_{j}^{'},\; 1 \leq j \leq \binom{n}{p}}{\text{maximize}} \;\; \sum\limits_{j=1}^{\binom{n}{p}} \gamma_{j}^{'} \; \alpha_{p,P_{j}} \\ & \text{subject to} \;\; \gamma_{j}^{'} \geq 0,\; 1 \leq j \leq \scriptstyle{\binom{n}{p}}, \\ & \quad\quad\quad\quad\;\; \sum\limits_{j=1}^{\binom{n}{p}} \gamma_{j}^{'} \leq \frac{k}{p}, \\ & \quad\quad \sum\limits_{\{j:\; B \subseteq P_{j},\; 1\leq j \leq \binom{n}{p} \}} \gamma_{j}^{'} \leq \frac{\binom{k-b}{p-b} }{ \binom{k-1}{p-1} }, \,\,\overset{\forall\ b\ \in\ \mathbb{Z}^{+}\ \text{s.t.}\ 1\ \leq\ b\ \leq\ p}{\!\!\!\!\!\forall\ B\ \text{with}\ |B|=b}. \end{array} $$
(19)

The relaxation is verified by checking the constraints. The first constraint of (19) is trivial to obtain. The second constraint of (19) follows from the relations:

$$\begin{array}{*{20}l} \sum\limits_{j=1}^{\binom{n}{p}} \gamma_{j}^{'} & = \sum\limits_{j=1}^{\binom{n}{p}} \frac{1}{\binom{l-1}{p-1}} \sum\limits_{\substack{\{i:\; P_{j} \subset L_{i}, \; 1\leq i \leq \binom{n}{l} \}}} \gamma_{i} \\ & = \frac{1}{\binom{l-1}{p-1}} \binom{l}{p} \sum\limits_{i=1}^{\binom{n}{l}} \gamma_{i} \\ & \leq \frac{1}{\binom{l-1}{p-1}} \binom{l}{p} \frac{k}{l} = \frac{k}{p}, \end{array} $$

where the second equality is obtained from the fact that γ i , which is a coefficient of \(\alpha _{l,L_{i}}\), appears \(\binom {l}{p}\) times in \(\sum _{j=1}^{\binom {n}{p}} \sum _{\substack {\{i:\; P_{j} \subset L_{i} \}}} \gamma _{i}\). The final inequality is from the second constraint of (18). The third constraint in (19) can be deduced from the following inequality:

$$\begin{array}{*{20}l} \sum\limits_{\{ j:\; B \subseteq P_{j},\; 1\leq j \leq \binom{n}{p}\}} \gamma_{j}^{'} \!\!& =\! \frac{1}{\binom{l-1}{p-1}} \sum\limits_{\{j:\; B \subseteq P_{j},\; 1\leq j \leq \binom{n}{p}\}} \sum\limits_{\{i :\; P_{j} \subset L_{i},\; 1\leq i \leq \binom{n}{l} \}} \gamma_{i} \\ & = \frac{1}{\binom{l-1}{p-1}} \frac{\binom{n-b}{p-b} \binom{n-p}{l-p} }{ \binom{n-b}{l-b}} \sum\limits_{\{i :\; B \subset L_{i},\; 1\leq i \leq \binom{n}{l} \}} \gamma_{i} \\ & \leq \frac{1}{\binom{l-1}{p-1}} \frac{\binom{n-b}{p-b} \binom{n-p}{l-p} }{ \binom{n-b}{l-b}} \frac{\binom{k-b}{l-b}}{\binom{k-1}{l-1}}, \; 1 \leq b \leq p \\ & = \frac{\binom{k-b}{p-b} }{ \binom{k-1}{p-1} }, \;1 \leq b \leq p, \end{array} $$

where the second equality is from the fact that for a fixed \(P_j\), there are \(\binom {n-p}{l-p}\) sets \(L_i\) with \(P_j \subset L_i\), \(i=1,\ldots,\binom {n}{l}\); and for a fixed B, there are \(\binom {n-b}{p-b}\) sets \(P_j\) with \(B\subseteq P_j\), \(j=1,\ldots,\binom {n}{p}\), and \(\binom {n-b}{l-b}\) sets \(L_i\) with \(B\subseteq L_i\), \(i=1,\ldots,\binom {n}{l}\). Since (19) is obtained by relaxing (18), the optimal value of (19) is larger than or equal to the optimal value of (18), and (19) is exactly the optimized pick-p algorithm. Thus, when l>p, the optimized pick-l algorithm provides a tighter or at least the same upper bound as the optimized pick-p algorithm. □

By using a larger l in the pick-l algorithm, we can obtain a tighter upper bound of \(\alpha_k\). However, for a given l, we need to enumerate \(\binom {n}{l}\) possibilities, which becomes infeasible when l is large. Moreover, when l<k, the pick-l algorithm only gives an upper bound of \(\alpha_k\) instead of its exact value. There is, however, a need to find tighter bounds on \(\alpha_k\), or even the exact value of \(\alpha_k\), when k is too large for the \(\binom {n}{k}\) enumerations of exhaustive search [14–16]. To this end, we propose a new branch-and-bound tree search algorithm to find tighter bounds on \(\alpha_k\) than Lemma 2 provides, or even the exact \(\alpha_k\). Our branch-and-bound tree search algorithm is enabled by the pick-l algorithms introduced in Sections 2 and 3.
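For reference, the exhaustive search method (ESM), against which TSA is benchmarked in Section 5, can be written directly with the alpha_lL() sketch of Section 2 by taking l = k; a hypothetical sketch:

```python
from itertools import combinations

def alpha_k_exhaustive(A, k):
    """ESM baseline: evaluate alpha_{k,K} for every one of the C(n, k)
    subsets via the alpha_lL() sketch with l = k. Only feasible for
    small n and k."""
    n = A.shape[1]
    return max(alpha_lL(A, K) for K in combinations(range(n), k))
```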

4 Tree search algorithm

To find the index set K that leads to the maximum \(\alpha_{k,K}\) (among all possible index sets K), the tree search algorithm (TSA) performs a best-first branch-and-bound search [23] over a tree structure representing different subsets of {1,2,…,n}. In essence, for each subset J with cardinality no bigger than k, TSA calculates an upper bound of \(\alpha_{k,K}\) that is valid for any set K (with cardinality k) such that \(J \subseteq K\). If this upper bound is smaller than a lower bound of \(\alpha_k\), TSA will not further explore any of J's supersets, leading to reduced average-case computational complexity. For simplicity, we describe the TSA based on the pick-1-element algorithm, simply called 1-Step TSA. However, we remark that the TSA can also be based on the pick-l-element (l≥2) algorithm, by calculating upper bounds of \(\alpha_{k,K}\) from the results of the pick-l-element algorithm.

4.1 Tree structure

A tree node J represents an index subset of {1,…,n} such that |J|≤k. We have the following rule:

  • [R1] A parent node is a subset of each of its child node(s).

A node that has no child is referred to as a leaf node. We call the cardinality of the index set corresponding to J the height of J. The tree structure follows the "legitimate order," which ensures that any new index in a child node is bigger than the indices of its parent node.

  • [R2] “Legitimate order” - Let P and C denote the parent node and the child node. Then, any index in P must be smaller than any index in \(C \setminus P\).

Figure 1 illustrates this rule in a tree with k=2 and n=3.

Fig. 1

A tree structure following the legitimate order for k=2 and n=3

4.2 Basic idea of a branch-and-bound approach for calculating α k

We use a branch-and-bound approach over the tree structure to calculate α k . This method maintains a lower bound on α k (how to maintain this lower bound will be explained in Section 4.3). When the algorithm explores a tree node J, the algorithm calculates an upper bound B(J), which is no smaller than αk,K for any child node K (with cardinality k) of node J. If B(J) is smaller than the lower bound on α k , then the algorithm will not explore the child nodes of the tree node J.

In our algorithm, we calculate B(J) as

$$\begin{array}{*{20}l} B(J) = \alpha_{j,J} + {\sum\limits_{i=1}^{t} \alpha_{1,\{i+ \text{max}(J)\}}}, \end{array} $$
(20)

where j+t=k, max(J) represents the largest index in J, and α1,{1}α1,{2}≥…≥α1,{n}. We obtain this descending order by permuting the columns of the sensing matrix A in descending order of α1,{i}’s as the pre-computation step of TSA. For example, in Fig. 1, for k=2, B({1})=α1,{1}+α1,{2}. In order to justify that B(J) is an upper bound of αk,K for all node K such that JK, we provide the following lemma.

Lemma 7

Given α1,{1}α1,{2}≥…≥α1,{n}, \(B(J) = \alpha _{j,J} + \sum _{i=1}^{t} \alpha _{1,\{i+max(J)\}}\), where j+t=k, and max(J) represents the largest index in J, is an upper bound of αk,K for all nodes K such that JK.

Proof

For any subset K such that \(J \subseteq K\), we can write \(\alpha_{k,K}=\alpha_{j+t,\,J\cup T}\), where j+t=k and \(T=K\setminus J\). Then, following exactly the same line of argument as in the proof of Lemma 1, we have

$$\alpha_{k,K} \leq \alpha_{j,J} + \alpha_{t,T}, $$

and \(\alpha_{t,T}\) is no larger than \(\sum _{j \in T} \alpha _{1,\{j\}}\). Finally, since the \(\alpha_{1,\{i\}}\)'s are sorted in descending order, \(\sum _{j \in T} \alpha _{1,\{j\}} \leq \sum _{i=1}^{t} \alpha _{1,\{i+max(J)\}}\). Note that, due to the legitimate order [R2], the smallest element of the index set T is no less than 1+max(J). In conclusion, for all nodes K such that \(J \subseteq K\), B(J) is an upper bound of \(\alpha_{k,K}\). □
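In code, (20) is a one-line bound once the \(\alpha_{1,\{i\}}\)'s are pre-sorted. A sketch with 0-based indices (the paper's indices are 1-based; the function name is ours):

```python
def node_bound(alpha_J, J, alpha1, k):
    """B(J) from (20). alpha1 must hold the alpha_{1,{i}} values in
    descending order (columns pre-permuted, 0-based indices), and
    alpha_J is the already-computed alpha_{|J|,J} for this node."""
    t = k - len(J)
    return alpha_J + sum(alpha1[max(J) + 1 : max(J) + 1 + t])
```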

4.3 Best-first tree search strategy

TSA adopts a best-first tree search strategy for the branch-and-bound approach. We first describe a basic version of the best-first tree search strategy and then introduce two enhancements to this strategy in the next subsection.

In its basic version, TSA starts with a tree having only the root node and sets the global lower bound on \(\alpha_k\) to 0. In each iteration, TSA selects the leaf node J with the largest B(J) and expands the tree by adding the child nodes of J. For each newly added child node, say Q, TSA calculates the upper bound B(Q) in (20). If a newly added child node Q has k elements, TSA instead calculates \(\alpha_{k,Q}\), which is a lower bound on \(\alpha_k\); if this newly calculated \(\alpha_{k,Q}\) is bigger than the global lower bound on \(\alpha_k\), TSA sets the global lower bound equal to \(\alpha_{k,Q}\). TSA terminates when the leaf node J with the largest B(J) among all leaf nodes satisfies that B(J) is no bigger than the global lower bound on \(\alpha_k\).

From standard theories of the branch-and-bound approach, this TSA will output the exact α k . Also, in this process, the global lower bound will keep increasing until it is equal to an upper bound of α k (the largest B(J) among leaf nodes).
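Putting the pieces together, a minimal sketch of this basic best-first strategy (without the Section 4.4 enhancements) can be written with a max-heap. It assumes the alpha_lL() and node_bound() sketches above; the function name and structure are ours, not Algorithm 2 verbatim:

```python
import heapq

def tsa_basic(A, k, alpha1):
    """Basic best-first TSA of Section 4.3. alpha1 holds the sorted
    alpha_{1,{i}} values (columns pre-permuted). Returns the exact
    alpha_k once the largest leaf bound meets the global lower bound."""
    n = A.shape[1]
    if k == 1:
        return alpha1[0]
    lb = 0.0                       # global lower bound on alpha_k
    # Max-heap via negated keys, seeded with the root's children {0},...,{n-1}.
    heap = [(-node_bound(alpha1[j], (j,), alpha1, k), (j,)) for j in range(n)]
    heapq.heapify(heap)
    while heap:
        neg_b, J = heapq.heappop(heap)
        if -neg_b <= lb:           # largest B(J) <= lower bound: done
            break
        for q in range(max(J) + 1, n):     # legitimate order [R2]
            Q = J + (q,)
            a = alpha_lL(A, Q)             # alpha_{|Q|,Q} via (7)
            if len(Q) == k:
                lb = max(lb, a)            # alpha_{k,Q} lower-bounds alpha_k
            else:
                heapq.heappush(heap, (-node_bound(a, Q, alpha1, k), Q))
    return lb
```

The two enhancements of the next subsection avoid solving (7) for every attached child and attach children one at a time, reducing this basic version's work.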

4.4 Two enhancements

We incorporate two novel features into TSA in order to reduce the computational complexity. Firstly, when TSA attaches a new node Q to a node J in the tree structure, TSA computes B(Q) as in (21):

$$\begin{array}{*{20}l} B(Q) = \alpha_{j,J} +\alpha_{1, Q\setminus J}+\sum\limits_{i=1}^{t} \alpha_{1,\{i+ \text{max}(Q)\}}, \end{array} $$
(21)

where j+t+1=k, max(Q) represents the largest index in Q, and α1,{1}α1,{2}≥…≥α1,{n}. Thus, without calculating αj+1,Q (which involves higher computational complexity), we can still have B(Q) as an upper bound of αk,K for any child node K (with cardinality k) of the node Q.

Secondly, when TSA adds a new node Q as the child of a node J in the tree structure (assuming \(\alpha_{j,J}\) has already been calculated), TSA does not need to add all of J's child nodes to the tree at the same time. Instead, TSA only adds the node J's unattached child node Q with the largest B(Q) as defined in (21); namely, the new index in \(Q\setminus J\) is no bigger than the new index in \(Q'\setminus J\), where \(Q'\) is any other unattached child of the node J. We note that B(Q) is then an upper bound on \(B(Q')\) (according to (21)) for any other unattached child node \(Q'\) of the node J. Thus, for any node K (of cardinality k) descending from one of node J's unattached child nodes, B(Q) is still an upper bound of \(\alpha_{k,K}\).
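A sketch of (21) under the same conventions as the node_bound() sketch above (0-based indices, alpha1 sorted in descending order):

```python
def child_bound(alpha_J, J, q, alpha1, k):
    """B(Q) from (21) for the child Q = J + {q}: reuses alpha_{j,J}
    plus alpha_{1,{q}} instead of computing alpha_{j+1,Q}."""
    t = k - len(J) - 1
    return alpha_J + alpha1[q] + sum(alpha1[q + 1 : q + 1 + t])
```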

Algorithm 2 shows detailed steps of TSA, based on the pick-1-element algorithm (namely, l=1, 1-Step TSA). In the description, we define “expanding the tree from a node J” as follows:

  • [R3] “Expanding the tree from a node J”—attaching to the node J a new node Q whose B(Q), defined in (21), is the largest among all of the node J's unattached child nodes.

4.5 Advantage of the tree search algorithm

Due to the nature of the branch-and-bound approach, we can obtain a global upper bound and a global lower bound of \(\alpha_k\) while TSA runs. As the number of iterations increases, we obtain tighter and tighter upper bounds on \(\alpha_k\), namely the largest B(·) among the leaf nodes. By using the global upper bound of \(\alpha_k\), we can obtain a lower bound on the recoverable sparsity k via Lemma 3. Thus, even if the complexity of TSA is too high to finish in a timely manner, we can still obtain a lower bound on the recoverable sparsity k by terminating TSA early.

We note that the methods based on LP [8] and SDP [3] also provide upper bounds on \(\alpha_k\). However, they are unable to determine upper bounds of \(\alpha_{k,K}\) for a specific index set K. This prevents the use of the LP and SDP methods in our branch-and-bound method for computing \(\alpha_k\).

5 Numerical experiments

We conducted extensive simulations to compute \(\alpha_k\) and its upper/lower bounds using the pick-l algorithms and TSA. In this section, we refer to the pick-l algorithms introduced in Sections 2 and 3 simply as the (basic) pick-l and the optimized pick-l algorithms respectively.

For the same matrices, we compared our methods with the LP relaxation approach [8] and the SDP method [3]. We assessed the computational complexity in terms of the execution time of the algorithms. In addition, we carried out numerical experiments to demonstrate the computational complexity of TSA empirically.

For the LP method in [8] and the SDP method in [3], we used the Matlab codes provided by the authors. Consistent with previous research, we used CVX [17]—a package for specifying and solving convex programs—for the SDP method, and MOSEK [18]—a commercial LP solver—for the LP method. In our own algorithms, we used MOSEK to solve (7). Also, to be consistent with the previous research, matrices were generated from the Matlab code provided by the authors of [3] at http://www.di.ens.fr/~aspremon/NSPcode.html. For valid bounds, we rounded down lower bounds on \(\alpha_k\) and exact \(\alpha_k\) values, and rounded up upper bounds on \(\alpha_k\), to the nearest hundredth.

5.1 Performance comparison

Firstly, we considered Gaussian matrices and partial Fourier matrices sized from n=40 to n=6144. We chose n=40 so that our results can be compared with the simulation results in [3].

5.1.1 Low-dimensional sensing matrices

5.1.1.1 Sensing matrices with n=40

We considered sensing matrices of row dimension m=0.5n, 0.6n, 0.7n, 0.8n, where n=40. For every matrix size, we randomly generated 10 different realizations of Gaussian and partial Fourier matrices, so in total we used 80 different n=40 sensing matrices for the numerical experiments in Tables 7 and 8. We normalized all of the matrix columns to have a unit \(\ell_2\)-norm. The entries of the Gaussian matrices were i.i.d. standard Gaussian \(\mathcal {N}(0,1)\). The partial Fourier matrices had m rows randomly drawn from the full Fourier matrices. We compared our algorithms—pick-1-element, pick-2-element, pick-3-element, and TSA—to the LP and SDP methods. For readability, we place the numerical results for these small sensing matrices in Appendix B.

For each matrix size and type, we increased k from 1 to 5 in unit steps. Tables 7(a) and 8(a) show the median values of \(\alpha_k\). (To be consistent with the previous research [3], in which the authors used the median value of \(\alpha_k\) to compare the SDP method with the LP method, we provide the median values obtained from 10 random realizations of the sensing matrix.) From the median value of \(\alpha_k\), we obtained the recoverable sparsity \(k_{max}\) such that \(\alpha _{k_{\text {max}}}< 1/2\) and \(\alpha _{k_{\text {max}}+1} > 1/2\). In addition, we calculated the arithmetic mean of the \(k_{max}\)'s: we obtained each \(k_{max}\) from each random realization and computed the arithmetic mean of the ten \(k_{max}\)'s. Compared with the LP and SDP methods, we obtained a larger or at least the same recoverable sparsity \(k_{max}\) by using pick-2, pick-3, and TSA. It is noteworthy that we obtained the exact \(\alpha_k\) for k=1,2,…,5 by using TSA, while the LP and SDP methods only provided the exact \(\alpha_k\) for k=1. We observed several cases where \(\alpha_k<1/2\) holds even though the computed upper bound of \(\alpha_k\) exceeds 1/2, e.g., \(\alpha_5\) for 32×40 Gaussian matrices, \(\alpha_4\) for 28×40 Gaussian matrices, \(\alpha_3\) for 24×40 Gaussian matrices, \(\alpha_3\) for 20×40 partial Fourier matrices, and \(\alpha_4\) for 24×40 partial Fourier matrices. This can also be seen from the arithmetic means of \(k_{max}\) in Tables 7(a) and 8(a).

To compare the computational complexity, we calculated the geometric mean of the algorithms' execution times to avoid bias in the average. Tables 7(b) and 8(b) list the average execution times. We also ran the exhaustive search method (ESM) to find \(\alpha_k\) and compared its execution time with that of TSA. In calculating \(\alpha_5\), on average, 3-Step TSA reduced the computational time by around 86 times for 20×40 Gaussian matrices and by 94 times for 20×40 partial Fourier matrices, compared to ESM. For 32×40 Gaussian and partial Fourier matrices, the speedup of the best l-Step TSA (l=1,2,3) over ESM becomes around 1760 times and 182 times respectively. We observed that when m/n=0.5 (e.g., 20×40 sensing matrices), the 3-Step TSA generally provides the fastest result for k=5. On the other hand, for m/n=0.8 (e.g., the 32×40 case), the 2-Step TSA is the quickest in finding the exact \(\alpha_k\) for k=5; for k>5, however, the fastest l-Step TSA cannot be determined from either experiments or theory.

5.1.1.2 Sensing matrices with n=256

We assessed the performance of the pick-l algorithm for sensing matrices with n=256. We carried out numerical experiments on 128×256 Gaussian matrices in Fig. 2a and 64×256 partial Fourier matrices in Fig. 2b. Here, for 10 sensing matrices, we obtained the median value of the upper bounds of \(\alpha_k\) using the pick-l algorithm and compared the result with the LP relaxation method [8]. We omitted the SDP method [3] from this experiment due to its very high computational complexity. For the pick-3 algorithm in Fig. 2a, we calculated an upper bound of \(\alpha_3\) via TSA and used this result to calculate upper bounds of \(\alpha_k\), k=3,4,…,8, via (13). Figure 2a, b demonstrates that, with an appropriate choice of l, the upper bound of \(\alpha_k\) obtained via the pick-l algorithm can be tighter than that from the LP relaxation method. For example, for 128×256 Gaussian matrices, LP relaxation often determines the maximum recoverable sparsity as 5, while the pick-2 algorithm improves it to 6, and the pick-3 algorithm gives 7 (\(\alpha_7\)=0.49). For 64×256 partial Fourier matrices, the maximum recoverable sparsity values from LP relaxation and from the pick-2 algorithm are 3 and 4 respectively.

Fig. 2

Median upper bounds of α k from the pick-l algorithm and the LP relaxation method. a 128 × 256 Gaussian matrices b 64 × 256 partial Fourier matrices

5.1.1.3 Sensing matrices with n=512

We further conducted numerical experiments on Gaussian sensing matrices with n=512. The simulation results in Table 1 clearly demonstrate that the pick-2 algorithm provides a larger lower bound on the recoverable sparsity k than the LP method [8]. In particular, for a 410×512 Gaussian sensing matrix, the lower bound on k obtained from the pick-2 algorithm is almost twice as large as that of the LP method.

Table 1 Lower bound on k and execution time (Gaussian matrix with n=512)

5.1.2 High-dimensional sensing matrices

5.1.2.1 Sensing matrix with n≥1024

We conducted numerical experiments for Gaussian sensing matrices with n from 1024 to 6144. We show these numerical experiments in Tables 2 and 3, where we calculated the lower bound on the recoverable sparsity k and the corresponding execution time. The SDP method [3] was not applicable in these experiments due to its very high computational complexity. In Table 2, we ran TSA for 1 day (24 h) and obtained an upper bound of \(\alpha_2\), denoted by \(UB(\alpha_2)\). With this upper bound, we obtained a lower bound on k, denoted by \(k(UB(\alpha_2))\), via Lemma 3. Our numerical results in Tables 2 and 3 clearly show that our pick-l algorithm outperforms the LP method in recoverable sparsity k or execution time. We note that although our pick-1-element algorithm provides the same recoverable sparsity k as the LP method [8] in Tables 2 and 3, the complexity of the LP method can be 10 times higher than that of our method on m×n Gaussian matrices with large m.

Table 2 Lower bound on k and execution time (Gaussian matrix with n=1024)
Table 3 Lower bound on k and execution time (Gaussian matrix)

For extremely large sensing matrices, e.g., 4014×4096 and 6021×6144, the LP and SDP methods cannot provide any lower bound on k within a reasonable computational time. However, our pick-l algorithm can still provide the lower bound on k efficiently. Table 3 shows the lower bound on k and the execution time for these large dimensional matrices, where our verified recoverable sparsity k can be as large as 558 for a 6134×6144 sensing matrix. We obtained the estimated time for the LP method by running the Matlab code obtained from http://www2.isye.gatech.edu/~nemirovs/, which shows the percentage of the calculation completed on screen.

5.2 Comparison between the optimized pick-l algorithm and the basic pick-l algorithm

We compared the basic pick-l algorithm introduced in Section 2 with the optimized pick-l algorithm of Section 3 on 28×40 and 40×50 Gaussian sensing matrices for l=3 and k=4,5,…,8. Table 4 demonstrates that for l=3 and k=4,5,…,8, the optimized pick-l algorithm provided tighter upper bounds on \(\alpha_k\) than the basic pick-l algorithm. This is because, when l is large and k>l, (15) includes more constraints than when k and l are small, which shrinks the feasible set. Hence, the optimal value of (15), the result of the optimized pick-l, can be smaller than or equal to that of (11), the basic pick-l. Additionally, we provide the exact \(\alpha_k\) values obtained from TSA in order to check how tight the bounds from the basic pick-l and the optimized pick-l are. In terms of execution time, the optimized pick-l algorithm, which solves (15), was around 1.7 and 4.4 times slower than the basic pick-l on the 28×40 and 40×50 Gaussian matrices respectively.

Table 4 α k comparison and execution time (Gaussian matrix)

In summary, the optimized pick-l algorithm provides an upper bound on \(\alpha_k\) that is at least as tight as that of the basic pick-l algorithm, at the cost of additional complexity. Despite the increased complexity, the optimized pick-l algorithm has an important theoretical merit, namely Lemma 6.

5.3 Complexity of tree search algorithm

In this subsection, we carried out numerical experiments to demonstrate the computational complexity of TSA empirically on randomly chosen Gaussian sensing matrices. Figure 3a, b shows the distributions of the execution time and of the number of height-5 nodes attached to the tree structure in TSA, respectively. For m=0.5n, we generated 100 random realizations of Gaussian matrices and computed \(\alpha_5\) using 3-Step TSA. The maximum number of leaf nodes of cardinality k is \(\binom {n}{k} = \binom {40}{5}=658{,}008\). From Fig. 3b, we note that in 90% of the cases, 3-Step TSA terminated before 1.6% of all the possible height-5 nodes were attached to the tree structure.

Fig. 3

Histograms of the TSA (based on the pick-3 algorithm) to find α5 on 100 randomly chosen 20×40 Gaussian sensing matrices for each method. a Execution time. b Number of nodes in height 5

We provide the execution time of TSA for different-sized randomly chosen Gaussian matrices in Fig. 4 and compare it to ESM. Figure 4a shows that when k=1, 1-Step TSA performs almost the same as ESM. This is because 1-Step TSA calculates all the \(\alpha_{1,\{i\}}\)'s as a pre-computation, which is the same procedure as ESM. However, for k>l, as shown in Fig. 4b–d, TSA can find \(\alpha_k\) with reduced computation by using all the \(\alpha_{l,L}\)'s, while ESM must compute all the \(\alpha_{k,K}\)'s. In computing \(\alpha_k\), we achieved a speedup of around 100 times via 2-Step TSA compared to ESM for k=3,4.

Fig. 4

The execution time of TSA in log scale as a function of n on randomly chosen m×n Gaussian matrices, where m=n/2. a k=1b k=2c k=3d k=4

In addition, in Fig. 5, we compared the execution time of TSA to ESM by varying k with n fixed on random Gaussian matrices. For the best execution time of TSA, we used different l values for TSA. For n=40 and n=50, 3-Step TSA reduced the execution time to find α5 by around 100 times and 300 times respectively, compared with ESM.

Fig. 5

The execution time of TSA in log scale as a function of k on randomly chosen m×n Gaussian matrices, where m=n/2. a n=40b n=50

Finally, Fig. 6 gives illustrations of the values of the global lower and upper bounds, for 80×100 and 160×200 Gaussian sensing matrices, as the number of iterations in TSA increases. As we can see, the global upper and lower bounds get close very quickly. This implies that we can sometimes terminate TSA early and still obtain tight bounds on α k .

Fig. 6

Global lower bound (GLB) and global upper bound (GUB) in TSA on Gaussian sensing matrices. a For (k,l)=(5,3), we obtained (GLB,GUB)=(0.27,0.28) after 167501 iterations. b For (k,l)=(4,2), we obtained (GLB, GUB)=(0.15,0.17) after 148101 iterations

5.4 Application to network tomography problem

We apply the new tools introduced in this paper to verify NSC for sensing matrices in network tomography problems [14–16, 19–21]. In an undirected graph model for the communication network, the communication delay over each link can be determined by sending packets through probing paths that are composed of connected links. The delay of each path is then measured by adding the delays over its links. Generally, most links are uncongested, and only a few congested links have significant delays. It is, therefore, reasonable to treat finding the link delays as a sparse recovery problem. This sparse problem can be expressed as a system of linear equations y=Ax, where the vector \(y \in \mathbb {R}^{m}\) holds the delays of the m paths, the vector \(x\in \mathbb {R}^{n}\) is the delay vector for the n links, and A is a sensing matrix. The element \(A_{ij}\) of A is 1 if and only if path i, \(i \in \{1, 2, \ldots, m\}\), goes through link j, \(j \in \{1, 2, \ldots, n\}\); otherwise, \(A_{ij}\) equals 0 (see Fig. 7). The indices of the nonzero elements of the vector x correspond to the congested links.
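For illustration, building such a 0/1 routing matrix from a list of probing paths is straightforward; the representation below (one list of link indices per path) is our own assumption, not the paper's code:

```python
import numpy as np

def path_matrix(paths, n_links):
    """Build the 0/1 sensing matrix of Section 5.4: A[i, j] = 1 iff
    probing path i traverses link j."""
    A = np.zeros((len(paths), n_links))
    for i, links in enumerate(paths):
        A[i, list(links)] = 1.0
    return A

# Toy example in the spirit of Fig. 7: 4 probing paths over 5 links.
# A = path_matrix([[0, 1], [1, 2, 3], [0, 4], [2, 4]], 5)
```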

Fig. 7

a A simple example of a network tomography graph. W, X, Y, and Z are nodes in the network, and Path1, 2, 3, and 4 are the probing paths through which the packets are sent. b The sensing matrix corresponding to the graph shown in a. The rows and columns of the matrix represent probing paths and edges respectively

In our numerical experiments to verify NSC in network tomography problems, the paths for sending data packets were generated by random walks of fixed length. Table 5 summarizes the results of our experiments. We note that by using TSA, one can exactly verify that a total of k=2 and k=4 congested link delays can be uniquely found by solving the \(\ell_1\) minimization problem (2) for the randomly generated network measurement matrices of size 33×66 (12-node complete graph) and 53×105 (15-node complete graph) respectively. For ESM, we estimated the execution time by multiplying the unit time to solve (7) by the total number of cases in the exhaustive search; we obtained the unit time to solve (7) by calculating the arithmetic mean over 100 trials. For a 53×105 matrix, 3-Step TSA substantially reduced the execution time to find \(\alpha_5\), by around 137 times compared to ESM.

Table 5 α k and execution time in network tomography problems

We further carried out numerical experiments on an even larger network model with 300 nodes and 400 edges. We created a random spanning tree for the network model by using a random walk approach [24]. For each probing path, we randomly chose one of the 300 nodes as the starting point of a random walk of 100 steps along the network connections. We obtained a 320×400 matrix corresponding to the network model and calculated \(\alpha_k\) values via l-Step TSA, l=1,2. In terms of execution time, in Table 6, we compared TSA with ESM, where the unit time to solve (7) was obtained by calculating the arithmetic mean over 100 trials. In particular, 1-Step TSA reduced the execution time to find \(\alpha_4\) by around 28,700 times compared to ESM.

Table 6 α k and execution time in a large network model having 300 nodes and 400 edges

5.5 Discussion

In this section, we discuss the strengths and weaknesses of our proposed algorithms, compared with earlier research [3, 8].

  1. 1.

    Comparisons with LP and SDP. Our proposed pick-1-element algorithm can achieve performance similar to the LP [8] and SDP [3] methods. However, our pick-1-element algorithm has the clear advantage of being more computationally efficient for large dimensional sensing matrices. See Table 3, where the LP and SDP methods cannot provide performance bounds on the recoverable sparsity k due to high computational complexity, while our pick-1-element algorithm provides such bounds efficiently. The LP method has high computational complexity because it must solve a large convex program of design dimension mn, which becomes prohibitive when m and n are large [8].

    In our pick-1-element algorithm, we proposed the novel idea of sorting the \(\alpha_{1,\{i\}}\)'s (see Lemma 2), which leads to improved performance bounds on \(\alpha_k\) and the recoverable sparsity k. This sorting idea, combined with Lemma 2, yields a larger recoverable sparsity bound than purely using \(\alpha_1\) to bound the recoverable k as in ([8] Section 4.2.B).

  2. 2.

    Set-specific upper bounds. Our proposed pick-l-element algorithm (l≥2) is novel and can provide improved bounds on \(\alpha_k\) and the recoverable sparsity k, with computational complexity polynomial in n when l is fixed. This approach is not practical when l is large. However, the pick-2-element and pick-3-element algorithms already provide improved performance bounds compared with the previous research [3, 8].

    The fact that we can obtain upper bounds on \(\alpha_k\) based on the results of the pick-l-element (l≥2) algorithm is new and non-trivial (see Lemmas 2, 3, and 4). For example, if we know \(\alpha_5 \leq 0.22\), we can use (13), as in Lemma 3, to obtain \(\alpha_{11}\leq 0.22\times 11/5<0.5\), certifying the recovery of all 11-sparse signals.

    Our pick-l-element algorithm can provide set-specific upper bound for αk,K, laying the foundation for our branch-and-bound TSA.

  3. 3.

    Computational complexity of TSA. We proposed TSA to find precise values of \(\alpha_k\) with significantly reduced average-case computational complexity compared with ESM. The computational complexity of TSA depends on n, the sparsity k, and the chosen constant l. When k, n, and l are large enough, finding \(\alpha_k\) via TSA is still computationally expensive; in the worst case, TSA has the same computational complexity as ESM. However, our extensive simulations, ranging from Fig. 3 to Fig. 5 and from Table 5 to Table 8, show that on average, TSA can greatly reduce the computational complexity of finding \(\alpha_k\) compared with ESM.

    Table 7 α k comparison and execution time - Gaussian Matrix
    Table 8 α k comparison and execution time - Partial Fourier Matrix

    Moreover, since TSA maintains an upper bound and a lower bound on \(\alpha_k\) during its iterations, one can always terminate TSA early and still obtain better bounds on \(\alpha_k\) than the LP and SDP methods provide. We can also use TSA to find an exact value of \(\alpha_l\), where l<k, and then use (13) (cf. Lemma 3) to bound \(\alpha_k\).

  4. 4.

    Use of data structures. We used object-oriented programming (OOP) to implement TSA in Matlab [25], because OOP makes it easy to handle tree-type structures. In OOP, we defined a class and created objects from the class to store the properties of each node J in the tree, e.g., B(J). In order to connect two tree nodes, we used a doubly linked list data structure as part of each object. In case readers would like to implement the algorithm using alternative data structures, we have provided implementation-agnostic pseudocode of our algorithm in Algorithm 2.

  5.

    Difference from phase transition works. There has been extensive research on the phase transitions of various sparse recovery algorithms such as Basis Pursuit (BP), Orthogonal Matching Pursuit (OMP), and Approximate Message Passing (AMP) [1]. Our research differs from the phase transition literature in two aspects. First, our work and the previous works [3, 8] focus on worst-case performance guarantees (recovering all possible k-sparse signals), whereas phase transition research considers average-case guarantees for a single k-sparse signal with a fixed support and sign pattern. Second, phase transition bounds are derived mostly for random matrices, so they cannot be applied to a particular deterministic sensing matrix.
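
To make the pick-1-element computation in item 1 concrete, the following is a minimal Matlab sketch (the matrix sizes and variable names are illustrative, not taken from our experiment code) that computes each α_{1,{i}} with CVX [17] and forms a sorted-sum bound in the spirit of Lemma 2, followed by the Lemma 3 scaling used in the α_5 example above:

    % Minimal sketch (hypothetical sizes): alpha_{1,{i}} via CVX, then the
    % sorted-sum bound of Lemma 2 and the scaling of Lemma 3.
    A = randn(10, 20);                  % illustrative sensing matrix
    n = size(A, 2);
    alpha1 = zeros(n, 1);
    for i = 1:n
        e = zeros(n, 1); e(i) = 1;
        cvx_begin quiet
            variable x(n)
            maximize( e' * x )          % alpha_{1,{i}} = max e_i'*x over the set below
            subject to
                A * x == 0;
                norm(x, 1) <= 1;
        cvx_end
        alpha1(i) = cvx_optval;
    end
    k = 3;
    s = sort(alpha1, 'descend');
    alpha_k_pick1 = sum(s(1:k));        % sum of the k largest alpha_{1,{i}} (Lemma 2)
    % Lemma 3 scaling: if alpha_l is known for some l < k, then alpha_k <= (k/l)*alpha_l,
    % e.g., alpha_5 <= 0.22 implies alpha_11 <= (11/5)*0.22 = 0.484 < 0.5.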
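
Likewise, for the tree structure mentioned in item 4, a minimal Matlab handle-class sketch of a tree node with doubly linked parent/child references might look as follows; the class and property names are hypothetical and do not reproduce our actual implementation:

    % Hypothetical node class for a TSA-style tree (saved as TreeNode.m).
    % A handle class, so parent/child references behave like pointers.
    classdef TreeNode < handle
        properties
            K         % index set J carried by this node
            B         % upper bound B(J) from the pick-l algorithm
            parent    % backward link to the parent node
            children  % forward links to child nodes (cell array)
        end
        methods
            function obj = TreeNode(K, B)
                obj.K = K; obj.B = B; obj.children = {};
            end
            function addChild(obj, child)
                child.parent = obj;             % backward link
                obj.children{end + 1} = child;  % forward link
            end
        end
    end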

6 Conclusion

In this paper, we considered the problem of verifying the null space condition in compressed sensing. Computing the proportion parameter α_k that characterizes the null space condition of a sensing matrix is a non-convex optimization problem, which is known to be NP-hard [7]. To verify the null space condition, we proposed novel and simple enumeration-based algorithms, called the basic and optimized pick-l algorithms, to obtain upper bounds on α_k. Building on these algorithms, we further designed a new algorithm, the tree search algorithm, to obtain the global solution of the non-convex optimization problem of verifying the null space condition. Numerical experiments show that our algorithms outperform the previously proposed algorithms [3, 8] in both accuracy and speed.

7 Appendix

8 Proof of Proposition 1

Proof

Let us denote the sum of the k largest magnitudes of the elements of \(x \in \mathbb {R}^{n}\) as

$$ ||x||_{k,1}= \underset{|K|\leq k}{\text{maximize}} \sum\limits_{i \in K} |x_{i}|. $$
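
For instance, with hypothetical values, this norm can be evaluated by sorting magnitudes (a minimal Matlab sketch):

    % ||x||_{k,1}: sum of the k largest magnitudes (illustrative values).
    x = [0.5; -2; 1; 0.1]; k = 2;
    s = sort(abs(x), 'descend');
    x_k1 = sum(s(1:k));                 % here 2 + 1 = 3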

We use \(i_{1}, i_{2}, \ldots, i_{k}\) (or \(j_{1}, j_{2}, \ldots, j_{k}\)) to denote k distinct integers between 1 and n. For a matrix A, we use \(A_{i,j}\) to denote its element in the i-th row and j-th column.

$$\begin{array}{*{20}l} \alpha_{k}^{LP} &= \underset{Y=[y_{1},\ldots,y_{n}]\in \mathbb{R}^{m\times n}}{ \text{minimize}}\;\left\{ \underset{1\leq j \leq n}{ \text{maximize}}\; ||\left(I-Y^{T} A\right)e_{j}||_{k,1}\; \right\} \\ & = \underset{Y=[y_{1},\ldots,y_{n}]\in \mathbb{R}^{m\times n}}{ \text{minimize}}\;\left\{ \underset{i_{1}, i_{2}, \ldots, i_{k}, j}{ \text{maximize}}\; \sum\limits_{t=1}^{k}|\left(I-Y^{T} A\right)_{i_{t},j}|\; \right\} \\ &\leq \!\underset{Y=[y_{1},\ldots,y_{n}]\in \mathbb{R}^{m\times n}}{ \text{minimize}}\;\left\{ \underset{\substack{{i_{1}, i_{2}, \ldots, i_{k},}\\{ j_{1}, j_{2}, \ldots,j_{k}}}}{ \text{maximize}}\; \sum\limits_{t=1}^{k}|\left(I-Y^{T} A\right)_{i_{t},j_{t}}|\; \right\} \\ &= \underset{Y=[y_{1},\ldots,y_{n}]\in \mathbb{R}^{m\times n}}{ \text{minimize}}\;\left\{ \underset{i_{1}, i_{2}, \ldots, i_{k}}{ \text{maximize}}\; \sum\limits_{t=1}^{k} ||e_{i_{t}} - A^{T} y_{i_{t}}||_{\infty} \; \right\} \\ &= \underset{i_{1}, i_{2}, \ldots, i_{k}}{ \text{maximize}}\; \left\{\sum_{t=1}^{k} \left(\underset{y_{i_{t}}\in \mathbb{R}^{m\times 1}}{ \text{minimize}}\; ||e_{i_{t}} - A^{T} y_{i_{t}}||_{\infty}\right)\;\right\}, \end{array} $$
(22)

where we can exchange the order of “maximize” and “minimize” in the last equality because each term \(||e_{i_{t}} - A^{T} y_{i_{t}}||_{\infty }\) depends only on \(y_{i_{t}}\), so the inner minimization decouples across the k terms.

Moreover, according to the equations for \(\alpha_{i}\) between (4.29) and (4.30) in [8] (taking β there to be ∞),

$$\begin{array}{*{20}l} &\underset{y_{i_{t}}\in \mathbb{R}^{m\times 1}}{ \text{minimize}}\; ||e_{i_{t}} - A^{T} y_{i_{t}}||_{\infty}\;\\ &= \underset{x}{ \text{maximize}} \left\{ e_{i_{t}}^{T}x\; : \; Ax=0, \; ||x||_{1} \leq 1 \right\} \\ &=\alpha_{1,\{i_{t}\}}. \end{array} $$

Combining this with (22), \(\alpha _{k}^{LP}\) is no larger than the upper bound calculated by Lemma 2 (based on the pick-1-element algorithm). Namely,

$$\begin{array}{*{20}l} \alpha_{k}^{LP} \leq \alpha_{k}^{\mathrm{pick1}}. \end{array} $$
(23)
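
As a numerical sanity check of the duality step above, the inner minimization \(\underset{y}{\text{minimize}}\; ||e_{i} - A^{T} y||_{\infty}\) can also be solved directly as a small LP. The following hedged Matlab sketch uses the epigraph reformulation with linprog (Optimization Toolbox); the sizes are illustrative:

    % Check of the dual step: min_y ||e_i - A'*y||_inf via the epigraph LP
    %   minimize t  subject to  -t*1 <= e_i - A'*y <= t*1.
    rng(1); A = randn(5, 12); [m, n] = size(A);
    i = 1; e = zeros(n, 1); e(i) = 1;
    f = [zeros(m, 1); 1];               % variables z = [y; t]; minimize t
    Aineq = [ A', -ones(n, 1);          %  A'*y - t <= e_i
             -A', -ones(n, 1)];         % -A'*y - t <= -e_i
    bineq = [e; -e];
    z = linprog(f, Aineq, bineq);
    alpha_1i = z(end);                  % optimal t equals alpha_{1,{i}}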

9 Sensing matrices with n=40

Here, we provide numerical results for small sensing matrices with n=40 to compare our methods with the LP [8] and SDP [3] methods.

Notes

  1. We conducted our experiments on an HP Z220 CMT workstation with an Intel Core i7-3770 quad-core CPU @ 3.4 GHz and 16 GB DDR3 RAM, using Matlab R2013b on Windows 7.

  2. The LP method code is from http://www2.isye.gatech.edu/~nemirovs/ and the SDP method code is from http://www.di.ens.fr/~aspremon/NSPcode.html.

References

  1. YC Eldar, G Kutyniok, Compressed Sensing: Theory and Applications (Cambridge University Press, Cambridge, 2012).

  2. EJ Candès, T Tao, Decoding by linear programming. IEEE Trans. Inf. Theory. 51(12), 4203–4215 (2005).

  3. A d’Aspremont, L El Ghaoui, Testing the nullspace property using semidefinite programming. Math. Prog. 127(1), 123–144 (2011).

  4. EJ Candès, J Romberg, T Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory. 52(2), 489–509 (2006).

  5. EJ Candès, J Romberg, T Tao, Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math. 59(8), 1207–1223 (2006).

  6. D Donoho, Neighborly polytopes and sparse solution of underdetermined linear equations. Technical report (Department of Statistics, Stanford University, Stanford, 2005).

  7. AM Tillmann, ME Pfetsch, The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inf. Theory. 60(2), 1248–1259 (2014).

  8. A Juditsky, A Nemirovski, On verifiable sufficient conditions for sparse signal recovery via ℓ1 minimization. Math. Prog. 127(1), 57–88 (2011).

  9. A Cohen, W Dahmen, R DeVore, Compressed sensing and best k-term approximation. J. American Math. Soc. 22(1), 211–231 (2009).

  10. W Xu, B Hassibi, Precise stability phase transitions for ℓ1 minimization: a unified geometric framework. IEEE Trans. Inf. Theory. 57(10), 6894–6919 (2011).

  11. K Lee, Y Bresler, in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Computing performance guarantees for compressed sensing, (2008), pp. 5129–5132.

  12. G Tang, A Nehorai, in Proceedings of Conference on Information Sciences and Systems (CISS). Verifiable and computable performance evaluation of ℓ1 sparse signal recovery, (2011), pp. 1–6.

  13. M Cho, W Xu, in Proceedings of Asilomar Conference on Signals, Systems and Computers. New algorithms for verifying the null space conditions in compressed sensing, (2013), pp. 1038–1042.

  14. W Xu, E Mallada, A Tang, in Proceedings of IEEE International Conference on Computer Communications (INFOCOM). Compressive sensing over graphs, (2011), pp. 2087–2095.

  15. MH Firooz, S Roy, in Proceedings of IEEE Global Telecommunications Conference (GLOBECOM). Network tomography via compressed sensing, (2010), pp. 1–5.

  16. MJ Coates, RD Nowak, in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Network tomography for internal delay estimation, vol.6, (2001), pp. 3409–3412.

  17. M Grant, S Boyd, CVX: Matlab software for disciplined convex programming, version 2.1 beta (2012). http://cvxr.com/cvx.

  18. MOSEK ApS, The MOSEK optimization toolbox for MATLAB manual. Version 7.1 (Revision 31) (2015). http://docs.mosek.com/7.1/toolbox/index.html.

  19. Y Tsang, M Coates, RD Nowak, Network delay tomography. IEEE Trans. Signal Process. 51(8), 2125–2136 (2003).

  20. Y Vardi, Network tomography: estimating source-destination traffic intensities from link data. J. American Stat. Assoc. 91(433), 365–377 (1996).

  21. R Castro, M Coates, G Liang, R Nowak, B Yu, Network tomography: recent developments. Stat. Sci. 19(3), 499–517 (2004).

  22. DL Donoho, J Tanner, Precise undersampling theorems. Proc. IEEE. 98(6), 913–924 (2010).

  23. G Brassard, P Bratley, Fundamentals of Algorithmics (Prentice Hall, Englewood Cliffs, 1996).

  24. DB Wilson, in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing. Generating random spanning trees more quickly than the cover time, (1996), pp. 296–303.

  25. Mathworks, Object-Oriented Programming in MATLAB. https://www.mathworks.com/discovery/object-oriented-programming.html. Accessed 26 May 2017.

Acknowledgements

We thank Alexandre d’Aspremont from CNRS at École Normale Supérieure, Anatoli Juditsky from Laboratoire Jean Kuntzmann at Université Grenoble Alpes, and Arkadi Nemirovski from Georgia Institute of Technology for helpful discussions and for providing the codes used for the simulations in [3] and [8].

Funding

The work of Weiyu Xu is supported by Simons Foundation 318608, KAUST OCRF-2014-CRG-3, NSF DMS-1418737, and NIH 1R01EB020665-01.

Availability of data and material

All the codes used for the numerical experiments are available at the following link: https://sites.google.com/view/myungcho/software/nsc.

Author information

Contributions

MC and WX designed the algorithms. MC implemented the algorithms. KVM checked the implementation of the algorithms and helped to polish the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Myung Cho.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Cho, M., Vijay Mishra, K. & Xu, W. Computable performance guarantees for compressed sensing matrices. EURASIP J. Adv. Signal Process. 2018, 16 (2018). https://doi.org/10.1186/s13634-018-0535-y
