
Error bounds of block sparse signal recovery based on q-ratio block constrained minimal singular values

Abstract

In this paper, we introduce the q-ratio block constrained minimal singular values (BCMSV) as a new measure of the measurement matrix in compressive sensing of block sparse/compressible signals and present an algorithm for computing this new measure. Both the mixed ℓ2/ℓq and the mixed ℓ2/ℓ1 norms of the reconstruction errors for stable and robust recovery using block basis pursuit (BBP), the block Dantzig selector (BDS), and the group lasso in terms of the q-ratio BCMSV are investigated. We establish a sufficient condition based on the q-ratio block sparsity for exact recovery from the noise-free BBP and develop a convex-concave procedure to solve the corresponding non-convex problem in the condition. Furthermore, we prove that for sub-Gaussian random matrices, the q-ratio BCMSV is bounded away from zero with high probability when the number of measurements is reasonably large. Numerical experiments are implemented to illustrate the theoretical results. In addition, we demonstrate that the q-ratio BCMSV-based error bounds are tighter than the block restricted isometry constant-based bounds.

1 Introduction

Compressive sensing (CS) [1, 2] aims to recover an unknown sparse signal \(\mathbf {x}\in \mathbb {R}^{N}\) from m noisy measurements \(\mathbf {y} \in \mathbb {R}^{m}\):

$$\begin{array}{*{20}l} \mathbf{y}=A\mathbf{x}+\boldsymbol{\epsilon}, \end{array} $$
(1)

where \(A\in \mathbb {R}^{m\times N}\) is a measurement matrix with m≪N, and \(\boldsymbol {\epsilon }\in \mathbb {R}^{m}\) is additive noise such that \(\lVert \boldsymbol{\epsilon}\rVert_{2}\leq \zeta\) for some ζ≥0. It has been proven that if A satisfies the (stable/robust) null space property (NSP) or restricted isometry property (RIP), (stable/robust) recovery can be achieved [3, Chapters 4 and 6]. However, it is computationally hard to verify the NSP and to compute the restricted isometry constant (RIC) for an arbitrarily chosen A [4, 5]. To overcome this drawback, a new class of measures for the measurement matrix has been developed during the last decade. To be specific, [6] introduced a new measure called the ℓ1-constrained minimal singular value (CMSV): \(\rho _{s}(A)=\min \limits _{\mathbf {z}\neq 0, \lVert \mathbf {z}\rVert _{1}^{2}/\lVert \mathbf {z}\rVert _{2}^{2}\leq s}\frac {\lVert A\mathbf {z}\rVert _{2}}{\lVert \mathbf {z}\rVert _{2}}\) and obtained ℓ2 recovery error bounds in terms of the proposed measure for the basis pursuit (BP) [7], the Dantzig selector (DS) [8], and the lasso estimator [9]. Afterwards, [10] brought in a variant of the CMSV: \(\omega _{\lozenge }(A,s)=\min \limits _{\mathbf {z}\neq 0,\lVert \mathbf {z}\rVert _{1}/\lVert \mathbf {z}\rVert _{\infty }\leq s}\frac {\lVert A\mathbf {z}\rVert _{\lozenge }}{\lVert \mathbf {z}\rVert _{\infty }}\) with \(\lVert \cdot \rVert _{\lozenge }\) denoting a general norm and expressed the recovery error bounds using this quantity. The latest progress concerning the CMSV can be found in [11, 12]. Zhou and Yu [11] generalized these two measures to a new measure called the q-ratio CMSV: \(\rho _{q,s}(A)=\min \limits _{\mathbf {z}\neq 0, (\lVert \mathbf {z}\rVert _{1}/\lVert \mathbf {z}\rVert _{q})^{q/(q-1)}\leq s}\frac {\lVert A\mathbf {z}\rVert _{2}}{\lVert \mathbf {z}\rVert _{q}}\) with q∈(1,∞] and established both ℓq and ℓ1 bounds of recovery errors. Zhou and Yu [12] investigated the geometrical properties of the q-ratio CMSV, which can be used to derive sufficient conditions and error bounds of signal recovery.

In addition to simple sparsity, a signal x can also possess a structure called block sparsity, where the non-zero elements occur in clusters. It has been shown that using block information in CS can lead to better signal recovery [13–15]. Analogous to the simple sparsity case, there are the block NSP and the block RIP to characterize the measurement matrix in order to guarantee successful recovery from (1) [16]. Nevertheless, they are still computationally hard to verify for a given A. Thus, it is desirable to develop a computable measure analogous to the CMSV used for the recovery of simple (non-block) sparse signals. Tang and Nehorai [17] proposed a new measure of the measurement matrix based on the CMSV for block sparse signal recovery and derived the mixed ℓ2/ℓ∞ and ℓ2 bounds of recovery errors. In this paper, we extend the q-ratio CMSV in [11] to the q-ratio block CMSV (BCMSV) and generalize the error bounds from the mixed ℓ2/ℓ∞ and ℓ2 norms in [17] to the mixed ℓ2/ℓq with q∈(1,∞] and mixed ℓ2/ℓ1 norms.

This work includes four main contributions to block sparse signal recovery in compressive sensing: (i) we establish a sufficient condition based on the q-ratio block sparsity for exact recovery from the noise-free block BP (BBP) and develop a convex-concave procedure to solve the corresponding non-convex problem in the condition; (ii) we introduce the q-ratio BCMSV and derive both the mixed ℓ2/ℓq and the mixed ℓ2/ℓ1 norms of the reconstruction errors for stable and robust recovery using the BBP, the block DS (BDS), and the group lasso in terms of the q-ratio BCMSV; (iii) we prove that for sub-Gaussian random matrices, the q-ratio BCMSV is bounded away from zero with high probability when the number of measurements is reasonably large; and (iv) we present an algorithm to compute the q-ratio BCMSV for an arbitrary measurement matrix and investigate its properties.

The paper is organized as follows. Section 2 presents our theoretical contributions, including properties of the q-ratio block sparsity and the q-ratio BCMSV, the mixed ℓ2/ℓq norm and the mixed ℓ2/ℓ1 norm reconstruction error bounds for the BBP, the BDS, and the group lasso, and the probabilistic result of the q-ratio BCMSV for sub-Gaussian random matrices. Numerical experiments and algorithms are described in Section 3. Section 4 is devoted to the conclusion and discussion. All proofs are given in the Appendix.

2 Theoretical methodology

2.1 q-ratio block sparsity and q-ratio BCMSV—definitions and properties

In this section, we introduce the definitions of the q-ratio block sparsity and the q-ratio BCMSV and present their fundamental properties. A sufficient condition for block sparse signal recovery via the noise-free BBP using the q-ratio block sparsity and an inequality for the q-ratio BCMSV are established.

Throughout the paper, we denote vectors by bold lower case letters or bold numbers and matrices by upper case letters. \(\mathbf{x}^{T}\) denotes the transpose of a column vector x. For any vector \(\mathbf {x}\in \mathbb {R}^{N}\), we partition it into p blocks, each of length n, so we have \(\mathbf {x}=\left [\mathbf {x}_{1}^{T}, \mathbf {x}_{2}^{T}, \cdots, \mathbf {x}_{p}^{T}\right ]^{T}\) and \(\mathbf {x}_{i}\in \mathbb {R}^{n}\) denotes the ith block of x. We define the mixed ℓ2/ℓ0 norm \(\lVert \mathbf {x}\rVert _{2,0}=\sum _{i=1}^{p} 1\{\mathbf {x}_{i}\neq \mathbf {0}\}\), the mixed ℓ2/ℓ∞ norm \(\lVert \mathbf {x}\rVert _{2,\infty }=\max _{1\leq i\leq p}\lVert \mathbf {x}_{i}\rVert _{2}\), and the mixed ℓ2/ℓq norm \(\lVert \mathbf {x}\rVert _{2,q}=\left (\sum _{i=1}^{p} \lVert \mathbf {x}_{i}\rVert _{2}^{q}\right)^{1/q}\) for 0<q<∞. A signal x is block k-sparse if \(\lVert \mathbf {x}\rVert _{2,0}\leq k\). [p] denotes the set {1,2,…,p} and |S| denotes the cardinality of a set S. Furthermore, we use Sc for the complement [p]∖S of a set S in [p]. The block support is defined by \(\text{bsupp}(\mathbf{x}):=\{i\in [p]: \lVert \mathbf{x}_{i}\rVert_{2}\neq 0\}\). If S⊆[p], then xS is the vector that coincides with x on the block indices in S and is extended to zero outside S. For any matrix \(A\in \mathbb {R}^{m\times N}\), \(\text {ker} A:=\{\mathbf {x}\in \mathbb {R}^{N}: A\mathbf {x}=\mathbf {0}\}\) and \(A^{T}\) denotes its transpose. 〈·,·〉 is the inner product.
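For concreteness, the following is a minimal numerical sketch of this block notation (our own helper functions, not part of the paper), computing the mixed ℓ2/ℓq norms and the block support of a vector partitioned into blocks of length n:

```python
import numpy as np

def blocks(x, n):
    """Row i of the result is the i-th block x_i (assumes N = p * n)."""
    return np.asarray(x, dtype=float).reshape(-1, n)

def mixed_norm(x, n, q):
    """Mixed l2/lq norm ||x||_{2,q}; q = 0 and q = inf are the limiting cases."""
    b = np.linalg.norm(blocks(x, n), axis=1)      # block l2 norms ||x_i||_2
    if q == 0:
        return float(np.count_nonzero(b))         # ||x||_{2,0}: number of non-zero blocks
    if np.isinf(q):
        return float(b.max())                     # ||x||_{2,inf}
    return float((b ** q).sum() ** (1.0 / q))     # ||x||_{2,q}

def bsupp(x, n):
    """Block support: indices of the non-zero blocks."""
    return np.nonzero(np.linalg.norm(blocks(x, n), axis=1))[0]
```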

We first introduce the definition of the q-ratio block sparsity and its properties.

Definition 1

([18]) For any non-zero \(\mathbf {x}\in \mathbb {R}^{N}\) and non-negative q∉{0,1,∞}, the q-ratio block sparsity of x is defined as

$$\begin{array}{*{20}l} k_{q}(\mathbf{x})=\left(\frac{\lVert \mathbf{x}\rVert_{2,1}}{\lVert \mathbf{x}\rVert_{2,q}}\right)^{\frac{q}{q-1}}. \end{array} $$
(2)

The cases of q∈{0,1,∞} are evaluated by limits:

$$\begin{array}{*{20}l} k_{0}(\mathbf{x})&=\lim\limits_{q\rightarrow 0} k_{q}(\mathbf{x})=\lVert \mathbf{x}\rVert_{2,0} \end{array} $$
(3)
$$\begin{array}{*{20}l} k_{1}(\mathbf{x})&=\lim\limits_{q\rightarrow 1} k_{q}(\mathbf{x})=\exp(H_{1}(\pi(\mathbf{x}))) \end{array} $$
(4)
$$\begin{array}{*{20}l} k_{\infty}(\mathbf{x})&=\lim\limits_{q\rightarrow \infty} k_{q}(\mathbf{x})=\frac{\lVert \mathbf{x}\rVert_{2,1}}{\lVert \mathbf{x} \rVert_{2,\infty}}. \end{array} $$
(5)

Here, \(\pi (\mathbf {x})\in \mathbb {R}^{p}\) with entries \(\pi _{i}(\mathbf {x})=\lVert \mathbf {x}_{i}\rVert _{2}/\lVert \mathbf {x}\rVert _{2,1}\), and H1 is the ordinary Shannon entropy \(H_{1}(\pi (\mathbf {x}))=-\sum _{i=1}^{p} \pi _{i}(\mathbf {x})\log \pi _{i}(\mathbf {x})\).

This is an extension of the sparsity measures proposed in [19, 20], where estimation and statistical inference via the α-stable random projection method were investigated. In fact, this kind of sparsity measure is based on entropy, which quantifies how the energy of x is distributed over its blocks via πi(x). Formally, we can express the q-ratio block sparsity by

$$\begin{array}{*{20}l} k_{q}(\mathbf{x})=\left\{\begin{array}{ll} \exp(H_{q}(\pi(\mathbf{x}))) &\text{if}\ \mathbf{x}\neq \mathbf{0}\\ 0 &\text{if}\ \mathbf{x}=\mathbf{0}, \end{array}\right. \end{array} $$
(6)

where Hq is the Rényi entropy of order q∈[0,∞] [21, 22]. When q∉{0,1,∞}, the Rényi entropy is given by \(H_{q}(\pi (\mathbf {x}))=\frac {1}{1-q}\log \left (\sum _{i=1}^{p} \pi _{i}(\mathbf {x})^{q}\right)\), and for the cases q∈{0,1,∞}, the Rényi entropy is evaluated by limits, which yields (3), (4), and (5), respectively.
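A hedged, self-contained sketch of the q-ratio block sparsity kq(x) of Definition 1 (the helper name q_ratio_block_sparsity is ours), with the limiting cases (3)–(5) handled explicitly:

```python
import numpy as np

def q_ratio_block_sparsity(x, n, q):
    """k_q(x) for a vector x split into blocks of length n; limits used for q in {0, 1, inf}."""
    b = np.linalg.norm(np.asarray(x, dtype=float).reshape(-1, n), axis=1)  # ||x_i||_2
    if not b.any():
        return 0.0
    if q == 0:
        return float(np.count_nonzero(b))                # k_0(x) = ||x||_{2,0}
    l21 = b.sum()                                        # ||x||_{2,1}
    if q == 1:
        pi = b[b > 0] / l21                              # pi_i(x)
        return float(np.exp(-(pi * np.log(pi)).sum()))   # exp(H_1(pi(x)))
    if np.isinf(q):
        return float(l21 / b.max())                      # k_inf(x)
    l2q = (b ** q).sum() ** (1.0 / q)                    # ||x||_{2,q}
    return float((l21 / l2q) ** (q / (q - 1.0)))         # generic q
```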

Next, we present a sufficient condition for the exact recovery via the noise-free BBP in terms of the q-ratio block sparsity. Recall that when the true signal x is block k-sparse, the sufficient and necessary condition for the exact recovery via the noise-free BBP:

$$\begin{array}{*{20}l} \min\limits_{\mathbf{z}\in\mathbb{R}^{N}}\,\,\lVert \mathbf{z}\rVert_{2,1}\,\,\,\text{s.t.}\,\,\,A\mathbf{z}=A\mathbf{x} \end{array} $$
(7)

in terms of the block NSP of order k was given by [16, 23]

$$\begin{array}{*{20}l} \lVert \mathbf{z}_{S}\rVert_{2,1}<\lVert \mathbf{z}_{S^{c}}\rVert_{2,1}, \forall \mathbf{z}\in\text{ker} A\setminus \{\mathbf{0}\}, S\subset [p]\,\text{and}\,|S|\leq k. \end{array} $$

Proposition 1

If x is block k-sparse and there exists at least one q∈(1,∞] such that k is strictly less than

$$\begin{array}{*{20}l} \min\limits_{\mathbf{z}\in\text{ker} A\setminus\{\mathbf{0}\}}\,\,2^{\frac{q}{1-q}}k_{q}(\mathbf{z}), \end{array} $$
(8)

then the unique solution to problem (7) is the true signal x.

Remark 1

The proof can be found in A.1 in Appendix. This proposition is an extension of Proposition 1 in [11] from simple sparse signals to block sparse signals. In Section 3.1, we adopt a convex-concave procedure algorithm to solve (8) approximately.

Now, we are ready to present the definition of the q-ratio BCMSV, which is developed based on the q-ratio block sparsity.

Definition 2

For any real number s∈[1,p], q∈(1,∞], and matrix \(A\in \mathbb {R}^{m\times N}\), the q-ratio block constrained minimal singular value (BCMSV) of A is defined as

$$\begin{array}{*{20}l} \beta_{q,s}(A)=\min\limits_{\mathbf{z}\neq \mathbf{0},k_{q}(\mathbf{z})\leq s}\,\,\frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q}}. \end{array} $$
(9)

Remark 2

For a measurement matrix A with unit norm columns, it is obvious that βq,s(A)≤1 since \(\lVert A\mathbf{e}_{i}\rVert_{2}=1\), \(\lVert \mathbf{e}_{i}\rVert_{2,q}=1\), and kq(ei)=1, where ei is the ith canonical basis vector of \(\mathbb {R}^{N}\). Moreover, when q and A are fixed, βq,s(A) is non-increasing with respect to s. Besides, it is worth noticing that the q-ratio BCMSV also depends on the block size n; we choose not to show this parameter for the sake of simplicity. Another interesting fact is that for any \(\alpha \in \mathbb {R}\), we have βq,s(αA)=|α|βq,s(A). This fact, together with Theorem 1 in Section 2.2, implies that when a measurement matrix αA is adopted, increasing the measurement energy through |α| proportionally reduces the mixed ℓ2/ℓq norm of the reconstruction error. Compared to the block RIP [16], the q-ratio BCMSV has three main advantages:

  • It is computable (see the algorithm in Section 3.2).

  • The proof procedures and results of recovery error bounds are more concise (details in Section 2.2).

  • The q-ratio BCMSV-based recovery bounds are smaller (better) than the block RIC-based bounds as shown in Section 3.3 (see also [11, 17], for another two specific examples).

As for different q, we have the following important inequality, which plays a crucial role in deriving the probabilistic behavior of βq,s(A) via the existing results established in [17].

Proposition 2

If \(1<q_{2}\leq q_{1}\), then for any real number \(1\leq s\leq p^{1/\tilde {q}}\) with \(\tilde {q}=\frac {q_{2}(q_{1}-1)}{q_{1}(q_{2}-1)}\), we have

$$\begin{array}{*{20}l} \beta_{q_{1},s}(A)\geq \beta_{q_{2},s^{\tilde{q}}}(A)\geq s^{-\tilde{q}} \beta_{q_{1}, s^{\tilde{q}}}(A). \end{array} $$
(10)

Remark 3

The proof can be found in A.2 in the Appendix. Letting q1=∞ and q2=2 (thus, \(\tilde {q}=2\)), we have \(\beta _{\infty,s}(A)\geq \beta _{2,s^{2}}(A)\geq \frac {1}{s^{2}}\beta _{\infty,s^{2}}(A)\). If \(q_{1}\geq q_{2}>1\), then \(\tilde {q}=\frac {q_{2}(q_{1}-1)}{q_{1}(q_{2}-1)}=1+\frac {q_{1}-q_{2}}{q_{1}(q_{2}-1)}\geq 1\), so \(\beta _{q_{2},s^{\tilde {q}}}(A)\leq \beta _{q_{2},s}(A)\). Similarly, for any t∈[1,p] we have \(\beta _{q_{2},t}(A)\geq \frac {1}{t}\beta _{q_{1},t}(A)\) by letting \(t=s^{\tilde {q}}\) in (10). Based on these facts, we cannot obtain monotonicity with respect to q when s and A are fixed. However, since kq(z)≤p for any \(\mathbf {z}\in \mathbb {R}^{N}\) with p blocks, it holds trivially that βq,p(A) is non-decreasing with respect to q, by the non-increasing property of the mixed ℓ2/ℓq norm.

2.2 Recovery error bounds

In this section, we derive the recovery error bounds in terms of the mixed ℓ2/ℓq norm and the mixed ℓ2/ℓ1 norm via the q-ratio BCMSV of the measurement matrix. We focus on three renowned convex relaxation algorithms for block sparse signal recovery from (1): the BBP, the BDS, and the group lasso.

BBP: \(\min \limits _{\mathbf {z}\in \mathbb {R}^{N}}\,\,\lVert \mathbf {z}\rVert _{2,1}\,\,\,\text {s.t.}\,\,\,\lVert \mathbf {y}-A\mathbf {z}\rVert _{2}\leq \zeta \).

BDS: \(\min \limits _{\mathbf {z}\in \mathbb {R}^{N}}\,\,\lVert \mathbf {z}\rVert _{2,1}\,\,\,\text {s.t.}\,\,\,\lVert A^{T}(\mathbf {y}-A\mathbf {z})\rVert _{2,\infty }\leq \mu \).

Group lasso: \(\min \limits _{\mathbf {z}\in \mathbb {R}^{N}}\frac {1}{2}\lVert \mathbf {y}-A\mathbf {z}\rVert _{2}^{2}+\mu \lVert \mathbf {z}\rVert _{2,1}\).
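As a concrete illustration of these programs, here is a minimal sketch of the BBP solved with CVXPY (a sketch under the assumption of equal block length n; the function name block_basis_pursuit is ours, not from the paper, and any solver available to CVXPY can be used). The BDS and the group lasso can be written analogously by swapping the constraint or the objective.

```python
import numpy as np
import cvxpy as cp

def block_basis_pursuit(A, y, n, zeta):
    """BBP: minimize ||z||_{2,1} subject to ||y - A z||_2 <= zeta."""
    m, N = A.shape
    p = N // n
    Z = cp.Variable((n, p))                                   # column i is the i-th block z_i
    z = cp.vec(Z)                                             # column-major stacking gives [z_1^T,...,z_p^T]^T
    objective = cp.Minimize(cp.sum(cp.norm(Z, 2, axis=0)))    # mixed l2/l1 norm
    constraints = [cp.norm(y - A @ z, 2) <= zeta]
    cp.Problem(objective, constraints).solve()
    return np.asarray(Z.value).flatten(order="F")
```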

In these programs, ζ and μ are parameters used in the constraints to control the noise level. We first present the main recovery error bounds for the case when the true signal x is block k-sparse.

Theorem 1

Suppose x is block k-sparse. For any q∈(1,∞], we have: 1) If \(\lVert \boldsymbol{\epsilon}\rVert_{2}\leq \zeta\), then the solution \(\hat {\mathbf {x}}\) to the BBP obeys

$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q}&\leq \frac{2\zeta}{\beta_{q,2^{\frac{q}{q-1}}k}(A)}, \end{array} $$
(11)
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}&\leq \frac{4k^{1-1/q}\zeta}{\beta_{q,2^{\frac{q}{q-1}}k}(A)}. \end{array} $$
(12)

2) If the noise ε in the BDS satisfies \(\lVert A^{T}\boldsymbol{\epsilon}\rVert_{2,\infty}\leq \mu\), then the solution \(\hat {\mathbf {x}}\) to the BDS obeys

$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q}\leq \frac{4k^{1-1/q}}{\beta_{q,2^{\frac{q}{q-1}}k}^{2}(A)}\mu, \end{array} $$
(13)
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}\leq \frac{8k^{2-2/q}}{\beta_{q,2^{\frac{q}{q-1}}k}^{2}(A)}\mu. \end{array} $$
(14)

3) If the noise ε in the group lasso satisfies \(\lVert A^{T}\boldsymbol{\epsilon}\rVert_{2,\infty}\leq \kappa\mu\) for some κ∈(0,1), then the solution \(\hat {\mathbf {x}}\) to the group lasso obeys

$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q}&\leq \frac{1+\kappa}{1-\kappa}\cdot\frac{2k^{1-1/q}}{\beta_{q,\left(\frac{2}{1-\kappa}\right)^{\frac{q}{q-1}}k}^{2}(A)}\mu, \end{array} $$
(15)
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}&\leq \frac{1+\kappa}{(1-\kappa)^{2}}\cdot\frac{4k^{2-2/q}}{\beta_{q,\left(\frac{2}{1-\kappa}\right)^{\frac{q}{q-1}}k}^{2}(A)}\mu. \end{array} $$
(16)

Remark 4

The proof can be found in A.3 in Appendix. Obviously, if \(\beta _{q,2^{\frac {q}{q-1}}k}(A)\neq 0\) in (11) and (12), then the noise free BBP (7) can uniquely recover any block k-sparse signal by letting ζ=0.

Remark 5

The mixed ℓ2/ℓq norm error bounds generalize the existing results in [17] (q=2 and ∞) to any 1<q≤∞ and those in [11] (simple sparse signal recovery) to block sparse signal recovery. The mixed ℓ2/ℓq norm error bounds depend on the q-ratio BCMSV of the measurement matrix A, which is bounded away from zero for sub-Gaussian random matrices and can be computed approximately by a specific algorithm; both points are discussed later.

Remark 6

As shown in the literature, the block RIC-based recovery error bounds for the BBP [16], the BDS [24], and the group lasso [25] are complicated. In contrast, as presented in this theorem, the q-ratio BCMSV-based bounds are much more concise, and the corresponding derivations, given in the Appendix, are much less complicated.

Next, we extend Theorem 1 to the case when the signal is block compressible, in the sense that it can be well approximated by a block k-sparse signal. Given a block compressible signal x, let the mixed ℓ2/ℓ1 error of the best block k-sparse approximation of x be \(\phi _{k}(\mathbf {x})=\underset {\mathbf {z}\in \mathbb {R}^{N},\lVert \mathbf {z}\rVert _{2,0}=k}{\inf } \lVert \mathbf {x}-\mathbf {z}\rVert _{2,1}\), which measures how close x is to a block k-sparse signal.
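Since the infimum is attained by keeping the k blocks of largest ℓ2 norm, ϕk(x) is simply the total ℓ2 mass of the remaining blocks; a hedged sketch (our own helper phi_k) is:

```python
import numpy as np

def phi_k(x, n, k):
    """Mixed l2/l1 error of the best block k-sparse approximation of x."""
    b = np.linalg.norm(np.asarray(x, dtype=float).reshape(-1, n), axis=1)  # block l2 norms
    return float(np.sort(b)[::-1][k:].sum())                               # mass outside the k largest blocks
```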

Theorem 2

Suppose that x is block compressible. For any 1<q≤∞, we have: 1) If \(\lVert \boldsymbol{\epsilon}\rVert_{2}\leq \zeta\), then the solution \(\hat {\mathbf {x}}\) to the BBP obeys

$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q}&\leq \frac{2\zeta}{\beta_{q,4^{\frac{q}{q-1}}k}(A)}+k^{1/q-1}\phi_{k}(\mathbf{x}), \end{array} $$
(17)
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}&\leq \frac{4k^{1-1/q}\zeta}{\beta_{q,4^{\frac{q}{q-1}}k}(A)}+4\phi_{k}(\mathbf{x}). \end{array} $$
(18)

2) If the noise ε in the BDS satisfies \(\lVert A^{T}\boldsymbol{\epsilon}\rVert_{2,\infty}\leq \mu\), then the solution \(\hat {\mathbf {x}}\) to the BDS obeys

$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q}&\leq \frac{8k^{1-1/q}}{\beta_{q,4^{\frac{q}{q-1}}k}^{2}(A)}\mu+k^{1/q-1}\phi_{k}(\mathbf{x}), \end{array} $$
(19)
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}&\leq \frac{16k^{2-2/q}}{\beta_{q,4^{\frac{q}{q-1}}k}^{2}(A)}\mu+4\phi_{k}(\mathbf{x}). \end{array} $$
(20)

3) If the noise ε in the group lasso satisfies \(\lVert A^{T}\boldsymbol{\epsilon}\rVert_{2,\infty}\leq \kappa\mu\) for some κ∈(0,1), then the solution \(\hat {\mathbf {x}}\) to the group lasso obeys

$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q}&\leq \frac{1+\kappa}{1-\kappa}\cdot\frac{4k^{1-1/q}}{\beta_{q,\left(\frac{4}{1-\kappa}\right)^{\frac{q}{q-1}}k}^{2}(A)}\mu+k^{1/q-1}\phi_{k}(\mathbf{x}), \end{array} $$
(21)
$$\begin{array}{*{20}l} \lVert\hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}&\leq \frac{1+\kappa}{(1-\kappa)^{2}}\cdot\frac{8k^{2-2/q}}{\beta_{q,\left(\frac{4}{1-\kappa}\right)^{\frac{q}{q-1}}k}^{2}(A)}\mu+\frac{4}{1-\kappa}\phi_{k}(\mathbf{x}). \end{array} $$
(22)

Remark 7

The proof can be found in A.4 in the Appendix. All the error bounds consist of two components: one is caused by the measurement error, and the other is due to the sparsity defect.

Remark 8

Compared to Theorem 1, we need stronger conditions to achieve valid error bounds. Concisely, in the block compressible case we require \(\beta _{q,4^{\frac {q}{q-1}}k}(A)>0\) for the BBP and the BDS and \(\beta _{q,\left (\frac {4}{1-\kappa }\right)^{\frac {q}{q-1}}k}(A)>0\) for the group lasso, while in the block sparse case the corresponding requirements are \(\beta _{q,2^{\frac {q}{q-1}}k}(A)>0\) and \(\beta _{q,\left (\frac {2}{1-\kappa }\right)^{\frac {q}{q-1}}k}(A)>0\), respectively.

2.3 Random matrices

In this section, we study the properties of the q-ratio BCMSV of sub-Gaussian random matrices. A random vector \(\mathbf {x}\in \mathbb {R}^{N}\) is called isotropic and sub-Gaussian with constant L if it holds for all \(\mathbf {u}\in \mathbb {R}^{N}\) that \(E|\langle \mathbf {x},\mathbf {u}\rangle |^{2}=\lVert \mathbf {u}\rVert _{2}^{2}\) and \(P(|\langle \mathbf {x}, \mathbf {u}\rangle |\geq t)\leq 2\exp \left (-\frac {t^{2}}{L\lVert \mathbf {u}\rVert _{2}}\right)\). Then, as shown in Theorem 2 of [17], we have the following lemma.

Lemma 1

([17]) Suppose the rows of the scaled measurement matrix \(\sqrt {m}A\) are i.i.d. isotropic and sub-Gaussian random vectors with constant L. Then, there exist constants c1 and c2 such that for any η>0 and m≥1 satisfying

$$m\geq c_{1}\frac{L^{2}s(n+\log p)}{\eta^{2}}, $$

we have

$$\mathbb{E}|1-\beta_{2,s}(A)|\leq \eta $$

and

$$\mathbb{P}(\beta_{2,s}(A)\geq 1-\eta)\geq 1-\exp\left(-c_{2}\eta^{2}\frac{m}{L^{4}}\right).$$

Then, as a direct consequence of Proposition 2 (i.e., if 1<q<2, \(\beta_{q,s}(A)\geq s^{-1}\beta_{2,s}(A)\); if \(2\leq q\leq \infty\), \(\beta _{q,s}(A)\geq \beta _{2,s^{\frac {2(q-1)}{q}}}(A)\)) and Lemma 1, we have the following probabilistic statements for βq,s(A).

Theorem 3

Under the assumptions and notations of Lemma 1, it holds that

1) When 1<q<2, there exist constants c1 and c2 such that for any η>0 and m≥1 satisfying

$$m\geq c_{1}\frac{L^{2}{s}(n+\log p)}{\eta^{2}}, $$

we have

$$\begin{array}{*{20}l} \mathbb{E}[\beta_{q,s}(A)]&\geq s^{-1}(1-\eta), \end{array} $$
(23)
$$\begin{array}{*{20}l} \mathbb{P}\big(\beta_{q,s}(A)&\geq s^{-1}(1-\eta)\big)\geq 1-\exp\left(-c_{2}\eta^2 \frac{m}{L^{4}}\right). \end{array} $$
(24)

2) When 2≤q, there exist constants c1 and c2 such that for any η>0 and m≥1 satisfying

$$m\geq c_{1}\frac{L^{2} s^{\frac{2(q-1)}{q}}(n+\log p)}{\eta^{2}}, $$

we have

$$\begin{array}{*{20}l} \mathbb{E}[\beta_{q,s}(A)]&\geq 1-\eta, \end{array} $$
(25)
$$\begin{array}{*{20}l} \mathbb{P}\big(\beta_{q,s}(A)&\geq 1-\eta\big)\geq 1-\exp\left(-c_{2}\eta^2 \frac{m}{L^{4}}\right). \end{array} $$
(26)

Remark 9

Theorem 3 shows that for sub-Gaussian random matrices, the q-ratio BCMSV is bounded away from zero as long as the number of measurements is large enough. Sub-Gaussian random matrices include the Gaussian and Bernoulli ensembles.
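For the experiments in the next section, a minimal sketch of these two ensembles (our own helpers; the 1/√m scaling is our assumption so that the rows of √m A are isotropic sub-Gaussian as required by Lemma 1):

```python
import numpy as np

def gaussian_matrix(m, N, rng=None):
    rng = rng or np.random.default_rng()
    return rng.standard_normal((m, N)) / np.sqrt(m)             # entries N(0, 1/m)

def bernoulli_matrix(m, N, rng=None):
    rng = rng or np.random.default_rng()
    return rng.choice([-1.0, 1.0], size=(m, N)) / np.sqrt(m)    # entries +/- 1/sqrt(m)
```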

3 Numerical experiments and results

In this section, we introduce a convex-concave method to solve the sufficient condition (8) so as to obtain the maximal block sparsity k, and we present an algorithm to compute the q-ratio BCMSV. We also compare the q-ratio BCMSV-based bounds with the block RIC-based bounds through the BBP.

3.1 Solving the optimization problem (8)

According to Proposition 1, given q∈(1,∞], we need to solve the optimization problem (8) to obtain the maximal block sparsity k which guarantees that all block k-sparse signals can be uniquely recovered by (7). Solving (8) is equivalent to solving the problem:

$$\begin{array}{*{20}l} \max\limits_{\mathbf{z}\in\mathbb{R}^{N}}\,\lVert \mathbf{z}\rVert_{2,q}\,\,\,\text{s.t.}\ A\mathbf{z}=0\ \text{and}\ \lVert \mathbf{z}\rVert_{2,1}\leq 1. \end{array} $$
(27)

However, maximizing the mixed ℓ2/ℓq norm over a polyhedron is non-convex. Here, we adopt the convex-concave procedure (CCP) (see [26] for details) to solve problem (27) for any q∈(1,∞]; a sketch of one such CCP iteration is given below.
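The following is a hedged sketch of one plausible CCP iteration for (27), assuming finite q (helper names and the initialization are ours, not the paper's algorithm box): at each step the convex objective \(\lVert \mathbf{z}\rVert_{2,q}\) is replaced by its linearization at the current iterate, and the resulting linear program over the constraint set is solved with CVXPY.

```python
import numpy as np
import cvxpy as cp

def mixed_norm_grad(z, n, q):
    """Gradient of ||z||_{2,q} at z != 0 (finite q), block by block."""
    Z = z.reshape(-1, n)
    b = np.linalg.norm(Z, axis=1)
    norm_q = (b ** q).sum() ** (1.0 / q)
    scale = np.zeros_like(b)
    nz = b > 0
    scale[nz] = norm_q ** (1 - q) * b[nz] ** (q - 2)
    return (Z * scale[:, None]).ravel()

def ccp_for_27(A, n, q, iters=50, seed=0):
    """Linearize ||z||_{2,q} at the iterate and maximize over {Az = 0, ||z||_{2,1} <= 1}."""
    m, N = A.shape
    p = N // n
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(N)
    z /= np.linalg.norm(z.reshape(-1, n), axis=1).sum()         # start with ||z||_{2,1} = 1
    for _ in range(iters):
        g = mixed_norm_grad(z, n, q)
        Zv = cp.Variable((n, p))
        zv = cp.vec(Zv)                                         # column-major stacking of blocks
        prob = cp.Problem(cp.Maximize(g @ zv),
                          [A @ zv == 0, cp.sum(cp.norm(Zv, 2, axis=0)) <= 1])
        prob.solve()
        if Zv.value is None:
            break
        z = np.asarray(Zv.value).flatten(order="F")
    return z    # plug the limit point into (8): the sparsity bound is 2^{q/(1-q)} k_q(z)
```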

We implement the algorithm to solve (27) under the following settings. Let A be either a Bernoulli or a Gaussian random matrix with N=256 and varying m, block size n, and q; specifically, m=64,128,192, n=1,2,4,8, and q=2,4,16,128, respectively. The results are summarized in Table 1. Note that when n=1, the algorithm is identical to the one in [11]. The main findings are as follows: (i) comparing the results between the Bernoulli and Gaussian random matrices under the same settings, there is no substantial difference, so we focus on the left part of the table, i.e., the Bernoulli random matrix part; (ii) the results are not monotone with respect to q (see the row with n=4, m=192), which verifies the conclusion in Remark 3; (iii) when m is the only variable, the maximal block sparsity increases as m increases; and (iv) conversely, when n is the only variable, the maximal block sparsity decreases as n increases, which is in line with the main result of ([27], Theorem 3.1).

Table 1 Maximal sparsity levels from the CCP algorithm for both Bernoulli and Gaussian random matrices with N=256 and different combinations of n,m, and q

3.2 Computing the q-ratio BCMSVs

Computing the q-ratio BCMSV (9) is equivalent to solving

$$\begin{array}{*{20}l} \min\limits_{\mathbf{z}\in\mathbb{R}^{N}}\,\lVert A\mathbf{z}\rVert_{2}\,\,\,\text{s.t.}\,\,\,\lVert \mathbf{z}\rVert_{2,1}\leq s^{\frac{q-1}{q}}, \lVert \mathbf{z}\rVert_{2,q}=1. \end{array} $$
(28)

Since the constraint set is not convex, this is a non-convex optimization problem. In order to solve (28), we use the Matlab function fmincon as in [11] and write \(\mathbf{z}=\mathbf{z}^{+}-\mathbf{z}^{-}\) with \(\mathbf{z}^{+}=\max(\mathbf{z},0)\) and \(\mathbf{z}^{-}=\max(-\mathbf{z},0)\). Consequently, (28) can be reformulated as:

$$\begin{array}{*{20}l} \min\limits_{\mathbf{z}^{+},\mathbf{z}^{-}\in\mathbb{R}^{N}}&\,(\mathbf{z}^{+}-\mathbf{z}^{-})^{T} A^{T} A(\mathbf{z}^{+}-\mathbf{z}^{-}) \\ &\text{s.t.}\,\,\,\lVert \mathbf{z}^{+}-\mathbf{z}^{-}\rVert_{2,1}-s^{\frac{q-1}{q}}\leq 0, \\ &\lVert \mathbf{z}^{+}-\mathbf{z}^{-}\rVert_{2,q}=1, \\ &\mathbf{z}^{+}\geq 0, \mathbf{z}^{-}\geq 0. \end{array} $$
(29)
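A hedged Python counterpart of this computation (the paper uses MATLAB's fmincon on the split form (29); the sketch below instead attacks (28) directly with SciPy's SLSQP solver from random starts, for finite q; function and parameter names are ours):

```python
import numpy as np
from scipy.optimize import minimize

def beta_qs(A, n, q, s, restarts=40, seed=0):
    """Approximate beta_{q,s}(A) by locally solving (28) from many random starts."""
    m, N = A.shape
    rng = np.random.default_rng(seed)

    def block_norms(z):
        return np.linalg.norm(z.reshape(-1, n), axis=1)

    objective = lambda z: np.linalg.norm(A @ z)
    constraints = [
        {"type": "ineq", "fun": lambda z: s ** ((q - 1) / q) - block_norms(z).sum()},   # ||z||_{2,1} <= s^{(q-1)/q}
        {"type": "eq",   "fun": lambda z: (block_norms(z) ** q).sum() ** (1 / q) - 1},  # ||z||_{2,q} = 1
    ]
    best = np.inf
    for _ in range(restarts):
        z0 = rng.standard_normal(N)
        z0 /= (block_norms(z0) ** q).sum() ** (1 / q)          # start on the ||.||_{2,q} = 1 sphere
        res = minimize(objective, z0, method="SLSQP", constraints=constraints)
        if res.success:
            best = min(best, res.fun)
    return best
```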

Due to the existence of local minima, we perform a test, shown in Fig. 1, to decide a reasonable number of repeated experiments needed to reach the “global” minimum. In the test, we calculate the q-ratio BCMSV of a fixed Bernoulli random matrix with unit norm columns of size 40×64, with n=s=4 and q=2,4,8, respectively. Fifty experiments are carried out for each q. The figure shows that after about 30 experiments, the estimate \(\hat {\beta }_{q,s}\) of βq,s stabilizes, so in the following experiments we repeat the algorithm 40 times and take the smallest value of \(\hat {\beta }_{q,s}\) as the “global” minimum. We also tested varying m, s, and n, respectively; all cases indicate that 40 is a reasonable number of repetitions (not shown).

Fig. 1

q-ratio BCMSVs calculated for a Bernoulli random matrix of size 40×64 with n=4,s=4, and q=2,4,8 as a function of number of experiments

Next, we illustrate the properties of βq,s, which were pointed out in Remarks 2 and 3, through experiments. We set N=64 with three different block sizes n=1,4,8 (i.e., number of blocks p=64,16,8), three different m=40,50,60, three different q=2,4,8, and three different s=2,4,8. Bernoulli random matrices with unit norm columns are used. The results are listed in Table 2. They are in line with the theoretical results:

(i) βq,s increases as m increases in all cases, given that the other parameters are fixed.

(ii) βq,s decreases as s increases in most cases, given that the other parameters are fixed. There are exceptions when m=40, n=8 with s=4 and s=8 under q=4 and q=8, respectively; however, the difference is about 0.0002 and is possibly caused by numerical approximation.

(iii) Monotonicity of βq,s with respect to q does not hold, even when the other parameters are fixed.

Table 2 The q-ratio BCMSVs with varying m, n, p, q, and s

3.3 Comparing error bounds

Here, we compare the q-ratio BCMSV-based bounds against the block RIC-based bounds from the BBP under different settings. The block RIC-based bound is

$$\begin{array}{*{20}l} \lVert \hat{x}-x\rVert_{2}\leq \frac{4\sqrt{1+\delta_{2k}(A)}}{1-(1+\sqrt{2})\delta_{2k}(A)}\zeta, \end{array} $$
(30)

if A satisfies the block RIP of order 2k, i.e., the block RIC \(\delta _{2k}(A)<\sqrt {2}-1\) [14, 17]. By using Hölder's inequality, one can obtain the mixed ℓ2/ℓq norm bound

$$\begin{array}{*{20}l} \lVert \hat{x}-x\rVert_{2,q}\leq \frac{4\sqrt{1+\delta_{2k}(A)}}{1-(1+\sqrt{2})\delta_{2k}(A)}k^{1/q-1/2}\zeta, \end{array} $$
(31)

for 0<q≤2.

We compare the two bounds (31) and (12). Without loss of generality, let ζ=1. δ2k(A) is approximated using Monte Carlo simulations: specifically, we randomly choose 1000 sub-matrices of \(A\in \mathbb {R}^{m\times N}\) of size m×2nk and approximate δ2k(A) by the maximum of \(\max \left (\sigma _{\text {max}}^{2}-1,1-\sigma _{\text {min}}^{2}\right)\) over all sampled sub-matrices. This approximated block RIC is always smaller than or equal to the exact block RIC; thus, the error bounds based on the exact block RIC are always larger than those based on the approximated block RIC. Therefore, it suffices to show that the q-ratio BCMSV gives a sharper error bound than the approximated block RIC.
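A hedged sketch of this Monte Carlo approximation (our own helper approx_block_ric): sample sets of 2k blocks, extract the corresponding m×2nk column sub-matrix, and record the largest deviation of its squared singular values from 1.

```python
import numpy as np

def approx_block_ric(A, n, k, num_samples=1000, seed=0):
    """Monte Carlo lower approximation of the block RIC delta_{2k}(A)."""
    m, N = A.shape
    p = N // n
    rng = np.random.default_rng(seed)
    delta = 0.0
    for _ in range(num_samples):
        chosen = rng.choice(p, size=2 * k, replace=False)                   # 2k random blocks
        cols = np.concatenate([np.arange(b * n, (b + 1) * n) for b in chosen])
        svals = np.linalg.svd(A[:, cols], compute_uv=False)
        delta = max(delta, svals.max() ** 2 - 1, 1 - svals.min() ** 2)
    return delta
```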

We use sub-matrices with unit norm columns of a row-randomly-permuted Hadamard matrix (an orthogonal Bernoulli matrix) with N=64, k=1,2,4, n=1,2, q=1.8, and a variety of m≤64 to approximate the q-ratio BCMSV and the block RIC. Besides the Hadamard matrix, we also tested Bernoulli random matrices and Gaussian random matrices with different configurations, which returned only very few qualified block RICs. In the simulation results of [17], the authors showed that in all considered cases for Gaussian random matrices \(\delta _{2k}(A)>\sqrt {2}-1\), which is consistent with our finding. Figure 2 shows that the q-ratio BCMSV-based bounds are smaller than those based on the approximated block RIC. Note that when m approaches N, βq,s(A)→1 and δ2k(A)→0; as a result, the q-ratio BCMSV-based bounds are smaller than 2.2, while the block RIC-based bounds are larger than or equal to 4.

Fig. 2

The q-ratio BCMSV-based bounds and the block RIC-based bounds for Hadamard sub-matrices with N=64,k=1,2,4,n=1,2, and q=1.8

4 Conclusion and discussion

In this study, we introduced the q-ratio block sparsity measure and the q-ratio BCMSV. Theoretically, through the q-ratio block sparsity measure and the q-ratio BCMSV, we (i) established a sufficient condition for unique noise-free BBP recovery; (ii) derived both the mixed ℓ2/ℓq norm and the mixed ℓ2/ℓ1 norm bounds of recovery errors for the BBP, the BDS, and the group lasso estimator; and (iii) proved that the q-ratio BCMSV is bounded away from zero for sub-Gaussian random matrices when the number of measurements is relatively large. Afterwards, we used numerical experiments via two algorithms to illustrate the theoretical results. In addition, we demonstrated through simulations that the q-ratio BCMSV-based error bounds are much tighter than those based on the block RIP.

There are still some issues left for future work. For example, analogous to the case of the q-ratio CMSV, the geometrical properties of the q-ratio BCMSV can be investigated to derive sufficient conditions and error bounds for block sparse signal recovery.

5 Appendix - Proofs

Basically, the main steps of the proofs follow those in [11], with extensions to block sparse signals. We list all the details here for the sake of completeness.

5.1 A.1

Proof

(Proof of Proposition 1) Suppose there exists zkerA{0} and |S|≤k such that \(\lVert \mathbf {z}_{S}\rVert _{2,1}\geq \lVert \mathbf {z}_{S^{c}}\rVert _{2,1}\), then we have

$$\begin{array}{*{20}l} &\lVert \mathbf{z}\rVert_{2,1}=\lVert \mathbf{z}_{S}\rVert_{2,1}+\lVert \mathbf{z}_{S^{c}}\rVert_{2,1}\leq 2\lVert \mathbf{z}_{S}\rVert_{2,1} \\&\leq 2k^{1-1/q}\lVert \mathbf{z}_{S}\rVert_{2,q} \leq 2k^{1-1/q}\lVert \mathbf{z}\rVert_{2,q}, ~\forall q\in (1, \infty], \end{array} $$

which is identical to \(k\geq 2^{\frac {q}{1-q}} k_{q}(\mathbf {z}),\quad \forall q\in (1, \infty ]\).

Conversely, suppose there exists q∈(1,∞] such that \(k<\min \limits _{\mathbf {z}\in \text {ker} A\setminus \{\mathbf {0}\}}\,\,2^{\frac {q}{1-q}}k_{q}(\mathbf {z})\); then \(\lVert \mathbf {z}_{S}\rVert _{2,1}<\lVert \mathbf {z}_{S^{c}}\rVert _{2,1}\) holds for all z∈ker A∖{0} and |S|≤k, which implies that the block null space property of order k is fulfilled; thus, any block k-sparse signal x can be recovered via (7). □

5.2 A.2

Proof

(Proof of Proposition 2.)

(i) Prove the left hand side of (10):

For any \(\mathbf {z}\in \mathbb {R}^{N}\setminus \{\mathbf {0}\}\) and \(1<q_{2}\leq q_{1}\), suppose \(k_{q_{1}}(\mathbf {z})\leq s\); then we can get \(\left (\frac {\lVert \mathbf {z}\rVert _{2,1}}{\lVert \mathbf {z}\rVert _{{2,q_{1}}}}\right)^{\frac {q_{1}}{q_{1}-1}}\leq s\Rightarrow \lVert \mathbf {z}\rVert _{2,1}\leq s^{\frac {q_{1}-1}{q_{1}}}\lVert \mathbf {z}\rVert _{2,q_{1}}\leq s^{\frac {q_{1}-1}{q_{1}}}\lVert \mathbf {z}\rVert _{2,q_{2}}\). Since \(\tilde {q}=\frac {q_{2}(q_{1}-1)}{q_{1}(q_{2}-1)}\) and \( \frac {\lVert \mathbf {z}\rVert _{2,1}}{\lVert \mathbf {z}\rVert _{2,q_{2}}}\leq s^{\frac {q_{1}-1}{q_{1}}}\), we have

$$k_{q_{2}}(\mathbf{z})=\left(\frac{\lVert \mathbf{z}\rVert_{2,1}}{\lVert \mathbf{z}\rVert_{2,q_{2}}}\right)^{\frac{q_{2}}{q_{2}-1}}\leq s^{\frac{q_{2}(q_{1}-1)}{q_{1}(q_{2}-1)}}=s^{\tilde{q}}, $$

from which we can infer

$$\{\mathbf{z}: k_{q_{1}}(\mathbf{z})\leq s\}\subseteq \{\mathbf{z}: k_{q_{2}}(\mathbf{z})\leq s^{\tilde{q}}\}. $$

Therefore, we can get the left hand side of (10) through

$$\begin{array}{*{20}l} \beta_{q_{1},s}(A)&=\min\limits_{\mathbf{z}\neq \mathbf{0},k_{q_{1}}(\mathbf{z})\leq s}\frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{1}}}\geq \min\limits_{\mathbf{z}\neq \mathbf{0}, k_{q_{2}}(\mathbf{z})\leq s^{\tilde{q}}}\frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{1}}} \\ &= \min\limits_{\mathbf{z}\neq \mathbf{0}, k_{q_{2}}(\mathbf{z})\leq s^{\tilde{q}}} \frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{2}}}\cdot\frac{\lVert \mathbf{z}\rVert_{2,q_{2}}}{\lVert \mathbf{z}\rVert_{2,q_{1}}} \\ &\geq \min\limits_{\mathbf{z}\neq \mathbf{0}, k_{q_{2}}(\mathbf{z})\leq s^{\tilde{q}}} \frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{2}}}=\beta_{q_{2},s^{\tilde{q}}}(A). \end{array} $$

(ii) Verify the right hand side of (10):

For any \(\mathbf {z}\in \mathbb {R}^{N}\setminus \{\mathbf {0}\}\), suppose \(k_{q_{2}}(\mathbf {z})\leq s^{\tilde {q}}\). By using the non-increasing property of the q-ratio block sparsity with respect to q and \(q_{2}\leq q_{1}\), we have the following two inequalities: \(\frac {\lVert \mathbf {z}\rVert _{2,1}}{\lVert \mathbf {z}\rVert _{2,\infty }}=k_{\infty }(\mathbf {z})\leq k_{q_{2}}(\mathbf {z})\leq s^{\tilde {q}}\) and \(k_{q_{1}}(\mathbf {z})\leq k_{q_{2}}(\mathbf {z})\leq s^{\tilde {q}}\). Since \(1<q_{2}\leq q_{1}\), the former inequality implies that \(\frac {\lVert \mathbf {z}\rVert _{2,q_{2}}}{\lVert \mathbf {z}\rVert _{2,q_{1}}}\leq \frac {\lVert \mathbf {z}\rVert _{2,1}}{\lVert \mathbf {z}\rVert _{2,\infty }}\leq s^{\tilde {q}}\Rightarrow \frac {\lVert \mathbf {z}\rVert _{2,q_{1}}}{\lVert \mathbf {z}\rVert _{2,q_{2}}}\geq s^{-\tilde {q}}\). The latter inequality implies that

$$\{\mathbf{z}: k_{q_{2}}(\mathbf{z})\leq s^{\tilde{q}}\}\subseteq \{\mathbf{z}: k_{q_{1}}(\mathbf{z})\leq s^{\tilde{q}} \}. $$

Therefore, we can obtain the right hand side of (10) through

$$\begin{array}{*{20}l} \beta_{q_{2},s^{\tilde{q}}}(A)&=\min\limits_{\mathbf{z}\neq \mathbf{0},k_{q_{2}}(\mathbf{z})\leq s^{\tilde{q}}}\frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{2}}} \\ &\geq \min\limits_{\mathbf{z}\neq \mathbf{0}, k_{q_{1}}(\mathbf{z})\leq s^{\tilde{q}}} \frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{2}}} \\ &= \min\limits_{\mathbf{z}\neq \mathbf{0}, k_{q_{1}}(\mathbf{z})\leq s^{\tilde{q}}} \frac{\lVert A\mathbf{z}\rVert_{2}}{\lVert \mathbf{z}\rVert_{2,q_{1}}}\cdot\frac{\lVert \mathbf{z}\rVert_{2,q_{1}}}{\lVert \mathbf{z}\rVert_{2,q_{2}}}\\ &\geq \beta_{q_{1}, s^{\tilde{q}}}(A)\cdot s^{-\tilde{q}}. \end{array} $$

5.3 A.3

Proof

(Proof of Theorem 1.) The proof follows arguments similar to those in [6, 10] and can be divided into two main steps.

Step 1: We first derive upper bounds of the q-ratio block sparsity of residual \(\mathbf {h}=\hat {\mathbf {x}}-\mathbf {x}\) for all algorithms. As x is block k-sparse, we assume that bsupp(x)=S and |S|≤k.

For the BBP and the BDS, since \(\lVert \hat {\mathbf {x}}\rVert _{2,1}=\lVert \mathbf {x}+\mathbf {h}\rVert _{2,1}\) is the minimum among all z satisfying the constraints of BBP and BDS (including the true signal x), we have

$$\begin{array}{*{20}l} {}\lVert \mathbf{x}\rVert_{2,1}&\!\geq\! \lVert \hat{\mathbf{x}}\rVert_{2,1}\,=\,\lVert \mathbf{x}\,+\,\mathbf{h}\rVert_{2,1}\,=\,\lVert \mathbf{x}_{S}\,+\,\mathbf{h}_{S}\rVert_{2,1}\,+\,\lVert \mathbf{x}_{S^{c}}\,+\,\mathbf{h}_{S^{c}}\rVert_{2,1} \\ &\geq \lVert \mathbf{x}_{S}\rVert_{2,1}-\lVert \mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{h}_{S^{c}}\rVert_{2,1} \\ &=\lVert \mathbf{x}\rVert_{2,1}-\lVert \mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}, \end{array} $$

which can be simplified to \(\lVert \mathbf {h}_{S^{c}}\rVert _{2,1}\leq \lVert \mathbf {h}_{S}\rVert _{2,1}\). Thereby, we can obtain the following inequality:

$$\begin{array}{*{20}l} &\lVert \mathbf{h}\rVert_{2,1}=\lVert \mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{h}_{S^{c}}\rVert_{2,1} \leq 2\lVert \mathbf{h}_{S}\rVert_{2,1}\\&\leq 2k^{1-1/q}\lVert \mathbf{h}_{S}\rVert_{2,q}\leq 2k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}, \quad \forall q\in (1,\infty], \end{array} $$

which is equivalent to

$$k_{q}(\mathbf{h})=\left(\frac{\lVert \mathbf{h}\rVert_{2,1}}{\lVert \mathbf{h}\rVert_{2,q}}\right)^{\frac{q}{q-1}}\leq 2^{\frac{q}{q-1}} k.$$

For the group lasso, since the noise ε satisfies ATε2,κμ for κ(0,1) and \(\hat {\mathbf {x}}\) is a solution of the group lasso, we have

$$\frac{1}{2}\lVert A\hat{\mathbf{x}}-\mathbf{y}\rVert_{2}^{2}+\mu\lVert \hat{\mathbf{x}}\rVert_{2,1}\leq \frac{1}{2}\lVert A\mathbf{x}-\mathbf{y}\rVert_{2}^{2}+\mu\lVert \mathbf{x}\rVert_{2,1}. $$

Substituting y by Ax+ε leads to

$$\begin{array}{*{20}l} \mu\lVert\hat{\mathbf{x}}\rVert_{2,1}&\leq \frac{1}{2}\lVert \boldsymbol{\epsilon}\rVert_{2}^{2}-\frac{1}{2}\lVert A(\hat{\mathbf{x}}-\mathbf{x})-\boldsymbol{\epsilon}\rVert_{2}^{2}+\mu\lVert \mathbf{x}\rVert_{2,1}\\ &=\frac{1}{2}\lVert \boldsymbol{\epsilon}\rVert_{2}^{2}-\frac{1}{2}\lVert A(\hat{\mathbf{x}}-\mathbf{x})\rVert_{2}^{2}+\langle A(\hat{\mathbf{x}}-\mathbf{x}),\boldsymbol{\epsilon}\rangle\\&-\frac{1}{2}\lVert \boldsymbol{\epsilon}\rVert_{2}^{2}+\mu\lVert \mathbf{x}\rVert_{2,1}\\ &\leq \langle A(\hat{\mathbf{x}}-\mathbf{x}),\boldsymbol{\epsilon}\rangle+\mu\lVert \mathbf{x}\rVert_{2,1} \\ &=\langle \hat{\mathbf{x}}-\mathbf{x}, A^{T}\boldsymbol{\epsilon}\rangle+\mu\lVert \mathbf{x}\rVert_{2,1} \\ &\leq \lVert \hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1}\lVert A^{T} \boldsymbol{\epsilon}\rVert_{2,\infty}+\mu\lVert \mathbf{x}\rVert_{2,1} \\ &\leq \kappa \mu\lVert \mathbf{h}\rVert_{2,1}+\mu\lVert \mathbf{x}\rVert_{2,1}. \end{array} $$

The second-to-last inequality follows by applying the Cauchy-Schwarz inequality block-wise, and the last inequality can be rewritten as

$$\begin{array}{*{20}l} \lVert \hat{\mathbf{x}}\rVert_{2,1}\leq \kappa\lVert \mathbf{h}\rVert_{2,1}+\lVert \mathbf{x}\rVert_{2,1}. \end{array} $$
(32)

Therefore, it holds that

$$\begin{array}{*{20}l} \lVert \mathbf{x}\rVert_{2,1}&\geq \lVert \hat{\mathbf{x}}\rVert_{2,1}-\kappa \lVert \mathbf{h}\rVert_{2,1}\\ &=\lVert \mathbf{x}+\mathbf{h}_{S^{c}}+\mathbf{h}_{S}\rVert_{2,1}-\kappa\lVert \mathbf{h}_{S^{c}}+\mathbf{h}_{S}\rVert_{2,1} \\ &\geq \lVert \mathbf{x}+\mathbf{h}_{S^{c}}\rVert_{2,1}\,-\,\lVert \mathbf{h}_{S}\rVert_{2,1}\,-\,\kappa(\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}+\lVert \mathbf{h}_{S}\rVert_{2,1})\\ &=\lVert \mathbf{x}\rVert_{2,1}+(1-\kappa)\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}-(1+\kappa)\lVert \mathbf{h}_{S}\rVert_{2,1}, \end{array} $$

which can be simplified to

$$\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}\leq \frac{1+\kappa}{1-\kappa}\lVert \mathbf{h}_{S}\rVert_{2,1}. $$

Thus, we can obtain

$$\begin{array}{*{20}l} \lVert \mathbf{h}\rVert_{2,1}&=\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}+\lVert \mathbf{h}_{S}\rVert_{2,1}\\ &\leq \frac{2}{1-\kappa}\lVert \mathbf{h}_{S}\rVert_{2,1}\\ &\leq \frac{2}{1-\kappa}k^{1-1/q}\lVert \mathbf{h}_{S}\rVert_{2,q} \\ &\leq \frac{2}{1-\kappa}k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}, \end{array} $$

which can be reformulated as

$$k_{q}(\mathbf{h})=\left(\frac{\lVert \mathbf{h}\rVert_{2,1}}{\lVert \mathbf{h}\rVert_{2,q}}\right)^{\frac{q}{q-1}}\leq \left(\frac{2}{1-\kappa}\right)^{\frac{q}{q-1}}k. $$

Step 2: Obtain an upper bound of \(\lVert A\mathbf{h}\rVert_{2}\) and then derive the mixed ℓ2/ℓq norm and the mixed ℓ2/ℓ1 norm bounds of the recovery error vector h via the q-ratio BCMSV for each algorithm.

(i) For the BBP, since both x and \(\hat {\mathbf {x}}\) satisfy the constraint \(\lVert \mathbf{y}-A\mathbf{z}\rVert_{2}\leq \zeta\), by using the triangle inequality, we can get

$$\begin{array}{*{20}l} \lVert A\mathbf{h}\rVert_{2}\,=\,\lVert A(\hat{\mathbf{x}}-\mathbf{x})\rVert_{2}&\!\leq\! \lVert A\hat{\mathbf{x}}-\mathbf{y}\rVert_{2}+\lVert \mathbf{y}-A\mathbf{x}\rVert_{2}\leq 2\zeta. \end{array} $$
(33)

Following from the definition of the q-ratio BCMSV and \(k_{q}(\mathbf {h})\leq 2^{\frac {q}{q-1}}k\), we have

$${{}\begin{aligned} \beta_{q,2^{\frac{q}{q-1}}k}(A)\lVert \mathbf{h}\rVert_{2,q}\!\leq\! \lVert A\mathbf{h}\rVert_{2}\!\leq\! 2\zeta\Rightarrow \lVert \mathbf{h}\rVert_{2,q}\leq \frac{2\zeta}{\beta_{q,2^{\frac{q}{q-1}}k}(A)}. \end{aligned}} $$

Furthermore, we can obtain \(\lVert \mathbf {h}\rVert _{2,1}\leq \frac {4k^{1-1/q}\zeta }{\beta _{q,2^{\frac {q}{q-1}}k}(A)}\) by using the property \(\lVert \mathbf{h}\rVert_{2,1}\leq 2k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}\).

(ii) Similarly, for the BDS, since both x and \(\hat {\mathbf {x}}\) satisfy the constraint \(\lVert A^{T}(\mathbf{y}-A\mathbf{z})\rVert_{2,\infty}\leq \mu\), we have

$$\begin{array}{*{20}l} {}\lVert A^{T} A\mathbf{h}\rVert_{2,\infty}\!\leq\! \lVert A^{T}(\mathbf{y}-A\hat{\mathbf{x}})\rVert_{2,\infty}\,+\,\lVert A^{T}(\mathbf{y}\!-A\mathbf{x})\rVert_{2,\infty} \leq 2\mu. \end{array} $$

By applying the Cauchy-Schwarz inequality again as in Step 1, we obtain

$$\begin{array}{*{20}l} {}&\lVert A\mathbf{h}\rVert_{2}^{2}=\langle A\mathbf{h},A\mathbf{h}\rangle=\langle \mathbf{h},A^{T}A\mathbf{h}\rangle\\&\leq \lVert \mathbf{h}\rVert_{2,1}\lVert A^{T}A\mathbf{h}\rVert_{2,\infty}\leq 2\mu\lVert \mathbf{h}\rVert_{2,1}. \end{array} $$
(34)

At last, with the definition of the q-ratio BCMSV, \(k_{q}(\mathbf {h})\leq 2^{\frac {q}{q-1}}k\), and \(\lVert \mathbf{h}\rVert_{2,1}\leq 2k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}\), we get the upper bounds of the mixed ℓ2/ℓq norm and the mixed ℓ2/ℓ1 norm of h:

$$\begin{array}{*{20}l} &\!\!\!\!\beta_{q,2^{\frac{q}{q-1}}k}^{2}(A)\lVert \mathbf{h}\rVert_{2,q}^{2}\!\leq\! \lVert A\mathbf{h}\rVert_{2}^{2}\!\leq\! 2\mu\lVert \mathbf{h}\rVert_{2,1}\!\leq\! 4\mu k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q} \\ &\Rightarrow \lVert \mathbf{h}\rVert_{2,q}\leq \frac{4k^{1-1/q}}{\beta_{q,2^{\frac{q}{q-1}}k}^{2}(A)}\mu \end{array} $$

and \(\lVert \mathbf {h}\rVert _{2,1}\leq 2k^{1-1/q}\lVert \mathbf {h}\rVert _{2,q}\leq \frac {8k^{2-2/q}}{\beta _{q,2^{\frac {q}{q-1}}k}^{2}(A)}\mu \).

(iii) For the group lasso, with \(\lVert A^{T}\boldsymbol{\epsilon}\rVert_{2,\infty}\leq \kappa\mu\), we have

$$\begin{array}{*{20}l} \lVert A^{T}A\mathbf{h}\rVert_{2,\infty}&\leq \lVert A^{T}(\mathbf{y}-A\mathbf{x})\rVert_{2,\infty}+\lVert A^{T}(\mathbf{y}-A\hat{\mathbf{x}})\rVert_{2,\infty} \\ &\leq \lVert A^{T}\boldsymbol{\epsilon}\rVert_{2,\infty} +\lVert A^{T}(\mathbf{y}-A\hat{\mathbf{x}})\rVert_{2,\infty} \\ &\leq \kappa\mu+\lVert A^{T}(\mathbf{y}-A\hat{\mathbf{x}})\rVert_{2,\infty}. \end{array} $$

Moreover, since \(\hat {\mathbf {x}}\) is the solution of the group lasso, the optimality condition yields that

$$A^{T}(\mathbf{y}-A\hat{\mathbf{x}})\in\mu\partial \lVert \hat{\mathbf{x}}\rVert_{2,1}, $$

where the sub-gradient in \(\partial \lVert \hat {\mathbf {x}}\rVert _{2,1}\) for the ith block is \(\hat {\mathbf {x}}_{i}/\lVert \hat {\mathbf {x}}_{i}\rVert _{2}\) if \(\hat {\mathbf {x}}_{i}\neq 0\) and is some vector g satisfying \(\lVert \mathbf{g}\rVert_{2}\leq 1\) if \(\hat {\mathbf {x}}_{i}= 0\) (which follows from the definition of the sub-gradient). Thus, we have \(\lVert A^{T}(\mathbf {y}-A\hat {\mathbf {x}})\rVert _{2,\infty }\leq \mu \), which leads to

$$\lVert A^{T}A\mathbf{h}\rVert_{2,\infty}\leq (\kappa+1)\mu. $$

Following the inequality (34), we get

$$\begin{array}{*{20}l} \lVert A\mathbf{h}\rVert_{2}^{2}\leq (\kappa+1)\mu\lVert \mathbf{h}\rVert_{2,1}. \end{array} $$
(35)

As a result, since \(k_{q}(\mathbf {h})\leq \left (\frac {2}{1-\kappa }\right)^{\frac {q}{q-1}}k\) and \(\lVert \mathbf {h}\rVert _{2,1}\leq \frac {2}{1-\kappa }k^{1-1/q}\lVert \mathbf {h}\rVert _{2,q}\), we can obtain

$$\begin{array}{*{20}l} \beta_{q,(\frac{2}{1-\kappa})^{\frac{q}{q-1}}k}^{2}(A)\lVert \mathbf{h}\rVert_{2,q}^{2}&\leq \lVert A\mathbf{h}\rVert_{2}^{2}\leq (\kappa+1)\mu\lVert \mathbf{h}\rVert_{2,1} \\ &\leq \mu\frac{2(\kappa+1)}{1-\kappa}k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}, \end{array} $$
(36)

which is equivalent to

$$\lVert \mathbf{h}\rVert_{2,q}\leq \frac{k^{1-1/q}}{\beta_{q,\left(\frac{2}{1-\kappa}\right)^{\frac{q}{q-1}}k}^{2}(A)}\cdot \frac{2(\kappa+1)}{1-\kappa}\mu $$

and \(\lVert \mathbf {h}\rVert _{2,1}\leq \frac {1+\kappa }{(1-\kappa)^{2}}\cdot \frac {4k^{2-2/q}}{\beta _{q,(\frac {2}{1-\kappa })^{\frac {q}{q-1}}k}^{2}(A)}\mu \). □

5.4 A.4

Proof

Since the infimum of ϕk(x) is achieved by a block k-sparse signal z whose non-zero blocks equal the k largest blocks of x, indexed by S, we have \(\phi _{k}(\mathbf {x})=\lVert \mathbf {x}_{S^{c}}\rVert _{2,1}\); let \(\mathbf {h}=\hat {\mathbf {x}}-\mathbf {x}\). As in the proof of Theorem 1, the derivation has two steps.

Step 1: For all algorithms, bound \(\lVert \mathbf{h}\rVert_{2,1}\) via \(\lVert \mathbf{h}\rVert_{2,q}\) and ϕk(x).

First for the BBP and the BDS, since \(\lVert \hat {\mathbf {x}}\rVert _{2,1}=\lVert \mathbf {x}+\mathbf {h}\rVert _{2,1}\) is the minimum among all z satisfying the constraints of the BBP and the BDS, we have

$$\begin{array}{*{20}l} \lVert \mathbf{x}_{S}\rVert_{2,1}+\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}&=\lVert \mathbf{x}\rVert_{2,1}\geq \lVert \hat{\mathbf{x}}\rVert_{2,1}=\lVert \mathbf{x}+\mathbf{h}\rVert_{2,1} \\ &=\lVert \mathbf{x}_{S}+\mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{x}_{S^{c}}+\mathbf{h}_{S^{c}}\rVert_{2,1}\\ &\geq \lVert \mathbf{x}_{S}\rVert_{2,1}-\lVert \mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}-\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}, \end{array} $$

which is equivalent to

$$\begin{array}{*{20}l} \lVert \mathbf{h}_{S^{c}}\rVert_{2,1}\leq \lVert \mathbf{h}_{S}\rVert_{2,1}+2\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}=\lVert \mathbf{h}_{S}\rVert_{2,1}+2\phi_{k}(\mathbf{x}). \end{array} $$
(37)

In consequence, we can get

$$\begin{array}{*{20}l} \lVert \mathbf{h}\rVert_{2,1}&=\lVert \mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}\\ &\leq 2\lVert \mathbf{h}_{S}\rVert_{2,1}+2\phi_{k}(\mathbf{x}) \end{array} $$
(38)
$$\begin{array}{*{20}l} &\leq 2k^{1-1/q}\lVert \mathbf{h}_{S}\rVert_{2,q}+2\phi_{k}(\mathbf{x}) \\ &\leq 2k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}+2\phi_{k}(\mathbf{x}). \end{array} $$
(39)

As for the group lasso, by using (32), we can obtain

$${\begin{aligned} \lVert \mathbf{x}_{S}\rVert_{2,1}+\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}&=\lVert \mathbf{x}\rVert_{2,1} \geq \lVert \hat{\mathbf{x}}\rVert_{2,1}-\kappa\lVert \mathbf{h}\rVert_{2,1} \\ &\geq \lVert \mathbf{x}_{S}+\mathbf{x}_{S^{c}}+\mathbf{h}_{S}+\mathbf{h}_{S^{c}}\rVert_{2,1}\\&-\kappa\lVert \mathbf{h}_{S}+\mathbf{h}_{S^{c}}\rVert_{2,1} \\ &\geq \lVert \mathbf{x}_{S}+\mathbf{h}_{S^{c}}\rVert_{2,1}-\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}\\&-\lVert \mathbf{h}_{S}\rVert_{2,1}-\kappa\lVert \mathbf{h}_{S}\rVert_{2,1}-\kappa\lVert \mathbf{h}_{S^{c}}\rVert_{2,1} \\ &=\lVert \mathbf{x}_{S}\rVert_{2,1}+(1-\kappa)\lVert \mathbf{h}_{S^{c}}\rVert_{2,1}\\&-\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}-(1+\kappa)\lVert \mathbf{h}_{S}\rVert_{2,1}, \end{aligned}} $$

which implies that

$$\begin{array}{*{20}l} \lVert \mathbf{h}_{S^{c}}\rVert_{2,1}\leq \frac{1+\kappa}{1-\kappa}\lVert \mathbf{h}_{S}\rVert_{2,1}+\frac{2}{1-\kappa}\lVert \mathbf{x}_{S^{c}}\rVert_{2,1}. \end{array} $$
(40)

Therefore, we have

$$\begin{array}{*{20}l} \lVert \mathbf{h}\rVert_{2,1}&\leq \lVert \mathbf{h}_{S}\rVert_{2,1}+\lVert \mathbf{h}_{S^{c}}\rVert_{2,1} \\ &\leq \frac{2}{1-\kappa}\lVert \mathbf{h}_{S}\rVert_{2,1}+\frac{2}{1-\kappa}\lVert \mathbf{x}_{S^{c}}\rVert_{2,1} \\ &\leq \frac{2}{1-\kappa}k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}+\frac{2}{1-\kappa}\phi_{k}(\mathbf{x}). \end{array} $$
(41)

Step 2: For each algorithm, show that if \(\lVert \mathbf{h}\rVert_{2,q}\) exceeds the part of the recovery bound caused by the measurement error, then the q-ratio block sparsity of h is bounded from below, which combined with Step 1 bounds \(\lVert \mathbf{h}\rVert_{2,q}\) by the sparsity defect ϕk(x).

(i) For the BBP, we assume that h≠0 and \(\lVert \mathbf {h}\rVert _{2,q}>\frac {2\zeta }{\beta _{q,4^{\frac {q}{q-1}}k}(A)}\); otherwise, (17) holds trivially. Since \(\lVert A\mathbf{h}\rVert_{2}\leq 2\zeta\) (see (33)), we have \(\lVert \mathbf {h}\rVert _{2,q}>\frac {\lVert A\mathbf {h}\rVert _{2}}{\beta _{q,4^{\frac {q}{q-1}}k}(A)}\). Then, it holds that

$$\frac{\lVert A\mathbf{h}\rVert_{2}}{\lVert \mathbf{h}\rVert_{2,q}}<{\beta_{q,4^{\frac{q}{q-1}}k}(A)}=\min\limits_{\mathbf{h}\neq \mathbf{0}, k_{q}(\mathbf{h})\leq 4^{\frac{q}{q-1}}k}\frac{\lVert A\mathbf{h}\rVert_{2}}{\lVert \mathbf{h}\rVert_{2,q}}, $$

which implies that

$$\begin{array}{*{20}l} k_{q}(\mathbf{h})>4^{\frac{q}{q-1}}k\Rightarrow \lVert \mathbf{h}\rVert_{2,1}>4k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}. \end{array} $$
(42)

Combining this with (39), we have \(\lVert \mathbf{h}\rVert_{2,q}<k^{1/q-1}\phi_{k}(\mathbf{x})\), which completes the proof of (17). The error bound of the mixed ℓ2/ℓ1 norm (18) follows immediately from (17) and (39).

(ii) As for the BDS, similarly we assume h≠0 and \(\lVert \mathbf {h}\rVert _{2,q}>\frac {8k^{1-1/q}}{\beta _{q,4^{\frac {q}{q-1}}k}^{2}(A)}\mu \); otherwise, (19) holds trivially. As \(\lVert A\mathbf {h}\rVert _{2}^{2}\leq 2\mu \lVert \mathbf {h}\rVert _{2,1}\) (see (34)), we have \(\lVert \mathbf {h}\rVert _{2,q}>\frac {4k^{1-1/q}}{\beta _{q,4^{\frac {q}{q-1}}k}^{2}(A)}\cdot \frac {\lVert A\mathbf {h}\rVert _{2}^{2}}{\lVert \mathbf {h}\rVert _{2,1}}\). Then, we can get

$${{}\begin{aligned} \beta_{q,4^{\frac{q}{q-1}}k}^{2}(A)\,=\,\min\limits_{\mathbf{h}\neq \mathbf{0}, k_{q}(\mathbf{h})\leq 4^{\frac{q}{q-1}}k}\frac{\lVert A\mathbf{h}\rVert_{2}^{2}}{\lVert \mathbf{h}\rVert_{2,q}^{2}} \!>\!\frac{\lVert A\mathbf{h}\rVert_{2}^{2}}{\lVert \mathbf{h}\rVert_{2,q}^{2}}\left(\frac{4^{\frac{q}{q-1}}k}{k_{q}(\mathbf{h})}\right)^{1-1/q}, \end{aligned}} $$

which implies that

$$\begin{array}{*{20}l} k_{q}(\mathbf{h})>4^{\frac{q}{q-1}}k\Rightarrow \lVert \mathbf{h}\rVert_{2,1}>4k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}. \end{array} $$
(43)

Combining this with (39), we have \(\lVert \mathbf{h}\rVert_{2,q}<k^{1/q-1}\phi_{k}(\mathbf{x})\), which completes the proof of (19). Then, (20) follows from (19) and (39).

(iii) For the group lasso, we assume that h≠0 and \(\lVert \mathbf {h}\rVert _{2,q}>\frac {1+\kappa }{1-\kappa }\cdot \frac {4k^{1-1/q}}{\beta _{q,(\frac {4}{1-\kappa })^{\frac {q}{q-1}}k}^{2}(A)}\mu \); otherwise, (21) holds trivially. Since in this case \(\lVert A\mathbf {h}\rVert _{2}^{2}\leq (1+\kappa)\mu \lVert \mathbf {h}\rVert _{2,1}\) (see (35)), we have \(\lVert \mathbf {h}\rVert _{2,q}>\frac {4k^{1-1/q}}{(1-\kappa)\beta _{q,(\frac {4}{1-\kappa })^{\frac {q}{q-1}}k}^{2}(A)}\cdot \frac {\lVert A\mathbf {h}\rVert _{2}^{2}}{\lVert \mathbf {h}\rVert _{2,1}}\), which leads to

$$\begin{array}{*{20}l} \beta_{q,(\frac{4}{1-\kappa})^{\frac{q}{q-1}}k}^{2}(A)&=\min\limits_{\mathbf{h}\neq \mathbf{0}, k_{q}(\mathbf{h})\leq (\frac{4}{1-\kappa})^{\frac{q}{q-1}}k}\frac{\lVert A\mathbf{h}\rVert_{2}^{2}}{\lVert \mathbf{h}\rVert_{2,q}^{2}} \\ &>\frac{\lVert A\mathbf{h}\rVert_{2}^{2}}{\lVert \mathbf{h}\rVert_{2,q}^{2}}\left(\frac{(\frac{4}{1-\kappa})^{\frac{q}{q-1}}k}{k_{q}(\mathbf{h})}\right)^{1-\frac{1}{q}} \\ &\Rightarrow k_{q}(\mathbf{h})>(\frac{4}{1-\kappa})^{\frac{q}{q-1}}k \\ &\Rightarrow \lVert \mathbf{h}\rVert_{2,1}>\frac{4}{1-\kappa}k^{1-1/q}\lVert \mathbf{h}\rVert_{2,q}. \end{array} $$
(44)

Combining this with (41), we have \(\lVert \mathbf{h}\rVert_{2,q}<k^{1/q-1}\phi_{k}(\mathbf{x})\), which completes the proof of (21). Consequently, (22) is obtained via (21) and (41). □

Availability of data and materials

Please contact the author for data request.

Abbreviations

BBP:

Block BP

BCMSV:

q-ratio block constrained minimal singular values

BDS:

Block DS

BP:

Basis pursuit

CMSV:

ℓ1-constrained minimal singular value

CS:

Compressive sensing

DS:

Dantzig selector

NSP:

Null space property

RIC:

Restricted isometry constant

RIP:

Restricted isometry property

References

  1. D. L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory. 52(4), 1289–1306 (2006).


  2. E. J. Candes, J. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math.59(8), 1207–1223 (2006).


  3. S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing (Springer, 2013). https://doi.org/10.1007/978-0-8176-4948-7_1.

  4. A. S. Bandeira, E. Dobriban, D. G. Mixon, W. F. Sawin, Certifying the restricted isometry property is hard. IEEE Trans. Info. Theory. 59(6), 3448–3450 (2013).


  5. A. M. Tillmann, M. E. Pfetsch, The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inf. Theory. 60(2), 1248–1259 (2014).


  6. G. Tang, A. Nehorai, Performance analysis of sparse recovery based on constrained minimal singular values. IEEE Trans. Sig. Process. 59(12), 5734–5745 (2011).


  7. S. S. Chen, D. L. Donoho, M. A. Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput.20:, 33–61 (1998).


  8. E. J. Candes, T. Tao, The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat., 2313–2351 (2007). https://doi.org/10.1214/009053606000001523.

  9. R. Tibshirani, Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser B (Methodol), 267–288 (1996). https://doi.org/10.1111/j.1467-9868.2011.00771.x.

  10. G. Tang, A. Nehorai, Computable performance bounds on sparse recovery. IEEE Trans. Sig. Process. 63(1), 132–141 (2015).


  11. Z. Zhou, J. Yu, Sparse recovery based on q-ratio constrained minimal singular values. Sig. Process. 155:, 247–258 (2019).


  12. Z. Zhou, J. Yu, On q-ratio cmsv for sparse recovery. Sig. Process (2019). https://doi.org/10.1016/j.sigpro.2019.07.003.

  13. R. G. Baraniuk, V. Cevher, M. F. Duarte, C. Hegde, Model-based compressive sensing. IEEE Trans. Inf. Theory. 56(4), 1982–2001 (2010).


  14. Y. C. Eldar, M. Mishali, Robust recovery of signals from a structured union of subspaces. IEEE Trans. Inf. Theory. 55(11), 5302–5316 (2009).


  15. H. Zamani, H. Bahrami, P. Mohseni, in Proc. IEEE Biomedical Circuits and Systems Conf. (BioCAS). On the use of compressive sensing (cs) exploiting block sparsity for neural spike recording, (2016), pp. 228–231. https://doi.org/10.1109/biocas.2016.7833773.

  16. Y. Gao, M. Ma, A new bound on the block restricted isometry constant in compressed sensing. J. Inequalities Appl.2017(1), 174–174 (2017).


  17. G. Tang, A. Nehorai, Semidefinite programming for computable performance bounds on block-sparsity recovery. IEEE Trans. Sig. Process. 64(17), 4455–4468 (2016).


  18. Z. Zhou, J. Yu, Estimation of block sparsity in compressive sensing (2017). arXiv preprint arXiv:1701.01055.

  19. M. E. Lopes, in International Conference on Machine Learning. Estimating unknown sparsity in compressed sensing, (2013), pp. 217–225. http://proceedings.mlr.press/v28/lopes13.pdf.

  20. M. E. Lopes, Unknown sparsity in compressed sensing: denoising and inference. IEEE Trans. Inf. Theory. 62(9), 5145–5166 (2016).


  21. Y. Plan, R. Vershynin, One-bit compressed sensing by linear programming. Commun. Pure Appl. Math.66(8), 1275–1297 (2013).


  22. R. Vershynin, in Sampling Theory, a Renaissance. Estimation in high dimensions: a geometric perspective (SpringerCham, 2015), pp. 3–66.


  23. M. Stojnic, F. Parvaresh, B. Hassibi, On the reconstruction of block-sparse signals with an optimal number of measurements. IEEE Trans. Sig. Process. 57:, 3075–3085 (2009).


  24. H. Liu, J. Zhang, X. Jiang, J. Liu, in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 9, ed. by Y. W. Teh, M. Titterington. The group Dantzig selector (PMLRChia Laguna Resort, Sardinia, 2010), pp. 461–468. http://proceedings.mlr.press/v9/liu10a.html.

  25. R. Garg, R. Khandekar, in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 15, ed. by G. Gordon, D. Dunson, and M. Dudík. Block-sparse solutions using kernel block rip and its application to group lasso (PMLRFort Lauderdale, 2011), pp. 296–304. http://proceedings.mlr.press/v15/garg11a.html.

  26. T. Lipp, S. Boyd, Variations and extension of the convex–concave procedure. Optim. Eng.17(2), 263–287 (2016).


  27. N. Rao, B. Recht, R. Nowak, in Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 22, ed. by N. D. Lawrence, M. Girolami. Universal measurement bounds for structured sparse signal recovery (PMLRLa Palma, 2012), pp. 942–950. http://proceedings.mlr.press/v22/rao12.html.


Acknowledgements

This work is supported by the Swedish Research Council grant (Reg.No. 340-2013-5342).

Author information


Contributions

The authors read and approved the final manuscript.

Corresponding author

Correspondence to Jianfeng Wang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Wang, J., Zhou, Z. & Yu, J. Error bounds of block sparse signal recovery based on q-ratio block constrained minimal singular values. EURASIP J. Adv. Signal Process. 2019, 57 (2019). https://doi.org/10.1186/s13634-019-0653-1
