 Research
 Open Access
Error bounds of block sparse signal recovery based on q-ratio block constrained minimal singular values
EURASIP Journal on Advances in Signal Processing volume 2019, Article number: 57 (2019)
Abstract
In this paper, we introduce the q-ratio block constrained minimal singular value (BCMSV) as a new measure of the measurement matrix in compressive sensing of block sparse/compressible signals and present an algorithm for computing this new measure. Both the mixed ℓ_{2}/ℓ_{q} and the mixed ℓ_{2}/ℓ_{1} norms of the reconstruction errors for stable and robust recovery using block basis pursuit (BBP), the block Dantzig selector (BDS), and the group lasso in terms of the q-ratio BCMSV are investigated. We establish a sufficient condition based on the q-ratio block sparsity for exact recovery from the noise-free BBP and develop a convex-concave procedure to solve the corresponding non-convex problem in the condition. Furthermore, we prove that for sub-Gaussian random matrices, the q-ratio BCMSV is bounded away from zero with high probability when the number of measurements is reasonably large. Numerical experiments are implemented to illustrate the theoretical results. In addition, we demonstrate that the q-ratio BCMSV-based error bounds are tighter than the block restricted isometry constant-based bounds.
Introduction
Compressive sensing (CS) [1, 2] aims to recover an unknown sparse signal \(\mathbf {x}\in \mathbb {R}^{N}\) from m noisy measurements \(\mathbf {y} \in \mathbb {R}^{m}\):

$$\mathbf{y} = A\mathbf{x} + \boldsymbol{\epsilon}, \tag{1}$$
where \(A\in \mathbb {R}^{m\times N}\) is a measurement matrix with m≪N, and \(\boldsymbol {\epsilon }\in \mathbb {R}^{m}\) is additive noise such that ∥ε∥_{2}≤ζ for some ζ≥0. It has been proven that if A satisfies the (stable/robust) null space property (NSP) or restricted isometry property (RIP), (stable/robust) recovery can be achieved [3, Chapters 4 and 6]. However, it is computationally hard to verify the NSP and to compute the restricted isometry constant (RIC) for an arbitrarily chosen A [4, 5]. To overcome this drawback, a new class of measures for the measurement matrix has been developed during the last decade. Specifically, [6] introduced a measure called the ℓ_{1}-constrained minimal singular value (CMSV): \(\rho _{s}(A)=\min \limits _{\mathbf {z}\neq 0, \lVert \mathbf {z}\rVert _{1}^{2}/\lVert \mathbf {z}\rVert _{2}^{2}\leq s}\frac {\lVert A\mathbf {z}\rVert _{2}}{\lVert \mathbf {z}\rVert _{2}}\) and obtained ℓ_{2} recovery error bounds in terms of this measure for the basis pursuit (BP) [7], the Dantzig selector (DS) [8], and the lasso estimator [9]. Afterwards, [10] introduced a variant of the CMSV: \(\omega _{\lozenge }(A,s)=\min \limits _{\mathbf {z}\neq 0,\lVert \mathbf {z}\rVert _{1}/\lVert \mathbf {z}\rVert _{\infty }\leq s}\frac {\lVert A\mathbf {z}\rVert _{\lozenge }}{\lVert \mathbf {z}\rVert _{\infty }}\) with \(\lVert \cdot \rVert _{\lozenge }\) denoting a general norm and expressed ℓ_{∞} recovery error bounds using this quantity. The latest progress concerning the CMSV can be found in [11, 12]. Zhou and Yu [11] generalized these two measures to a new measure called the q-ratio CMSV: \(\rho _{q,s}(A)=\min \limits _{\mathbf {z}\neq 0, (\lVert \mathbf {z}\rVert _{1}/\lVert \mathbf {z}\rVert _{q})^{q/(q-1)}\leq s}\frac {\lVert A\mathbf {z}\rVert _{2}}{\lVert \mathbf {z}\rVert _{q}}\) with q∈(1,∞] and established both ℓ_{q} and ℓ_{1} bounds of recovery errors.
Zhou and Yu [12] investigated the geometrical properties of the q-ratio CMSV, which can be used to derive sufficient conditions and error bounds for signal recovery.
In addition to simple sparsity, a signal x can also possess a structure called block sparsity, where the nonzero elements occur in clusters. It has been shown that using block information in CS can lead to better signal recovery [13–15]. Analogously to simple sparsity, there are the block NSP and the block RIP to characterize the measurement matrix in order to guarantee successful recovery through (1) [16]. Nevertheless, they are still computationally hard to verify for a given A. Thus, it is desirable to develop a computable measure like the CMSV used for recovery of simple (non-block) sparse signals. Tang and Nehorai [17] proposed a new measure of the measurement matrix based on the CMSV for block sparse signal recovery and derived the mixed ℓ_{2}/ℓ_{∞} and ℓ_{2} bounds of recovery errors. In this paper, we extend the q-ratio CMSV in [11] to the q-ratio block CMSV (BCMSV) and generalize the error bounds from the mixed ℓ_{2}/ℓ_{∞} and ℓ_{2} norms in [17] to the mixed ℓ_{2}/ℓ_{q} with q∈(1,∞] and mixed ℓ_{2}/ℓ_{1} norms.
This work makes four main contributions to block sparse signal recovery in compressive sensing: (i) we establish a sufficient condition based on the q-ratio block sparsity for exact recovery from the noise-free block BP (BBP) and develop a convex-concave procedure to solve the corresponding non-convex problem in the condition; (ii) we introduce the q-ratio BCMSV and derive both the mixed ℓ_{2}/ℓ_{q} and the mixed ℓ_{2}/ℓ_{1} norms of the reconstruction errors for stable and robust recovery using the BBP, the block DS (BDS), and the group lasso in terms of the q-ratio BCMSV; (iii) we prove that for sub-Gaussian random matrices, the q-ratio BCMSV is bounded away from zero with high probability when the number of measurements is reasonably large; and (iv) we present an algorithm to compute the q-ratio BCMSV for an arbitrary measurement matrix and investigate its properties.
The paper is organized as follows. Section 2 presents our theoretical contributions, including properties of the q-ratio block sparsity and the q-ratio BCMSV, the mixed ℓ_{2}/ℓ_{q} norm and mixed ℓ_{2}/ℓ_{1} norm reconstruction errors for the BBP, the BDS, and the group lasso, and the probabilistic result on the q-ratio BCMSV for sub-Gaussian random matrices. Numerical experiments and algorithms are described in Section 3. Section 4 is devoted to conclusion and discussion. All proofs are deferred to the Appendix.
Theoretical methodology
q-ratio block sparsity and q-ratio BCMSV: definitions and properties
In this section, we introduce the definitions of the q-ratio block sparsity and the q-ratio BCMSV and present their fundamental properties. A sufficient condition for block sparse signal recovery via the noise-free BBP using the q-ratio block sparsity and an inequality for the q-ratio BCMSV are established.
Throughout the paper, we denote vectors by bold lower case letters or bold numbers and matrices by upper case letters. x^{T} denotes the transpose of a column vector x. For any vector \(\mathbf {x}\in \mathbb {R}^{N}\), we partition it into p blocks, each of length n, so that \(\mathbf {x}=\left [\mathbf {x}_{1}^{T}, \mathbf {x}_{2}^{T}, \cdots, \mathbf {x}_{p}^{T}\right ]^{T}\), where \(\mathbf {x}_{i}\in \mathbb {R}^{n}\) denotes the ith block of x. We define the mixed ℓ_{2}/ℓ_{0} norm \(\lVert \mathbf {x}\rVert _{2,0}=\sum _{i=1}^{p} 1\{\mathbf {x}_{i}\neq \mathbf {0}\}\), the mixed ℓ_{2}/ℓ_{∞} norm ∥x∥_{2,∞}= max_{1≤i≤p}∥x_{i}∥_{2}, and the mixed ℓ_{2}/ℓ_{q} norm \(\lVert \mathbf {x}\rVert _{2,q}=\left (\sum _{i=1}^{p} \lVert \mathbf {x}_{i}\rVert _{2}^{q}\right)^{1/q}\) for 0<q<∞. A signal x is block k-sparse if ∥x∥_{2,0}≤k. [p] denotes the set {1,2,⋯,p} and |S| denotes the cardinality of a set S. Furthermore, we use S^{c} for the complement [p]∖S of a set S in [p]. The block support is defined by bsupp(x):={i∈[p]:∥x_{i}∥_{2}≠0}. If S⊂[p], then x_{S} is the vector that coincides with x on the block indices in S and is extended to zero outside S. For any matrix \(A\in \mathbb {R}^{m\times N}\), \(\text {ker} A:=\{\mathbf {x}\in \mathbb {R}^{N}: A\mathbf {x}=\mathbf {0}\}\) and A^{T} is its transpose. 〈·,·〉 is the inner product.
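The block notation above is straightforward to make concrete. The following is a minimal numpy sketch of the mixed norms and the block support; the function names are ours, chosen for illustration.

```python
import numpy as np

def blocks(x, n):
    """Split x (length N = p*n) into its p blocks of length n."""
    return np.asarray(x, float).reshape(-1, n)

def mixed_norm(x, n, q):
    """Mixed l2/lq norm ||x||_{2,q}: the lq norm of the vector of block l2 norms."""
    b = np.linalg.norm(blocks(x, n), axis=1)      # block l2 norms
    if q == 0:
        return np.count_nonzero(b)                # ||x||_{2,0}: number of nonzero blocks
    if np.isinf(q):
        return b.max()                            # ||x||_{2,inf}
    return (b ** q).sum() ** (1.0 / q)

def bsupp(x, n, tol=0.0):
    """Block support: indices of blocks with nonzero l2 norm."""
    b = np.linalg.norm(blocks(x, n), axis=1)
    return np.flatnonzero(b > tol)
```

For example, a vector with p=4 blocks of length n=2 and two nonzero blocks has ∥x∥_{2,0}=2, and ∥x∥_{2,1} is the sum of the two block norms.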
We first introduce the definition of the q-ratio block sparsity and its properties.
Definition 1
([18]) For any nonzero \(\mathbf {x}\in \mathbb {R}^{N}\) and nonnegative q∉{0,1,∞}, the q-ratio block sparsity of x is defined as

$$k_q(\mathbf{x}) = \left(\frac{\lVert \mathbf{x}\rVert_{2,1}}{\lVert \mathbf{x}\rVert_{2,q}}\right)^{\frac{q}{q-1}}. \tag{2}$$

The cases of q∈{0,1,∞} are evaluated by limits:

$$k_0(\mathbf{x}) = \lim_{q\to 0} k_q(\mathbf{x}) = \lVert \mathbf{x}\rVert_{2,0}, \tag{3}$$
$$k_1(\mathbf{x}) = \lim_{q\to 1} k_q(\mathbf{x}) = \exp\left(H_1(\pi(\mathbf{x}))\right), \tag{4}$$
$$k_\infty(\mathbf{x}) = \lim_{q\to \infty} k_q(\mathbf{x}) = \frac{\lVert \mathbf{x}\rVert_{2,1}}{\lVert \mathbf{x}\rVert_{2,\infty}}. \tag{5}$$

Here, \(\pi (\mathbf {x})\in \mathbb {R}^{p}\) with entries π_{i}(x)=∥x_{i}∥_{2}/∥x∥_{2,1} and H_{1} is the ordinary Shannon entropy \(H_{1}(\pi (\mathbf {x}))=-\sum _{i=1}^{p} \pi _{i}(\mathbf {x})\log \pi _{i}(\mathbf {x})\).
This is an extension of the sparsity measures proposed in [19, 20], where estimation and statistical inference via the α-stable random projection method were investigated. In fact, this kind of sparsity measure is based on entropy, which measures the relative energy of the blocks of x via π_{i}(x). Formally, we can express the q-ratio block sparsity by

$$k_q(\mathbf{x}) = \exp\left(H_q(\pi(\mathbf{x}))\right), \tag{6}$$
where H_{q} is the Rényi entropy of order q∈[0,∞] [21, 22]. When q∉{0,1,∞}, the Rényi entropy is given by \(H_{q}(\pi (\mathbf {x}))=\frac {1}{1-q}\log \left (\sum _{i=1}^{p} \pi _{i}(\mathbf {x})^{q}\right)\), and for the cases of q∈{0,1,∞}, the Rényi entropy is evaluated by limits and results in (3), (4), and (5), respectively.
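The entropy formulation suggests a direct way to evaluate k_q, including the limit cases. The sketch below (our own helper, not from the paper) computes k_q(x) via π(x); note that for a signal with k nonzero blocks of equal energy, k_q(x) = k for every q, which is the sense in which k_q interpolates the block sparsity level.

```python
import numpy as np

def q_ratio_block_sparsity(x, n, q):
    """q-ratio block sparsity k_q(x) = exp(H_q(pi(x))),
    where pi_i = ||x_i||_2 / ||x||_{2,1} and H_q is the Renyi entropy."""
    b = np.linalg.norm(np.asarray(x, float).reshape(-1, n), axis=1)
    pi = b / b.sum()
    pi = pi[pi > 0]                          # zero blocks do not contribute
    if np.isinf(q):                          # k_inf = ||x||_{2,1}/||x||_{2,inf}
        return 1.0 / pi.max()
    if q == 1:                               # Shannon limit: exp(-sum pi log pi)
        return np.exp(-np.sum(pi * np.log(pi)))
    if q == 0:                               # number of nonzero blocks
        return float(len(pi))
    return np.sum(pi ** q) ** (1.0 / (1.0 - q))
```

The general branch agrees with (2), since \((\sum_i \pi_i^q)^{1/(1-q)} = (\lVert \mathbf{x}\rVert_{2,1}/\lVert \mathbf{x}\rVert_{2,q})^{q/(q-1)}\).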
Next, we present a sufficient condition for exact recovery via the noise-free BBP in terms of the q-ratio block sparsity. Recall that when the true signal x is block k-sparse, the sufficient and necessary condition for exact recovery via the noise-free BBP:

$$\min_{\mathbf{z}\in\mathbb{R}^N} \lVert \mathbf{z}\rVert_{2,1} \quad \text{s.t.} \quad A\mathbf{z} = \mathbf{y}, \tag{7}$$
in terms of the block NSP of order k was given by [16, 23]:

$$\lVert \mathbf{z}_S\rVert_{2,1} < \lVert \mathbf{z}_{S^c}\rVert_{2,1}, \quad \forall\, \mathbf{z}\in \ker A\setminus\{\mathbf{0}\},\ \forall\, |S|\leq k.$$
Proposition 1
If x is block k-sparse and there exists at least one q∈(1,∞] such that k is strictly less than

$$\min_{\mathbf{z}\in \ker A\setminus\{\mathbf{0}\}}\; 2^{\frac{q}{1-q}}\, k_q(\mathbf{z}), \tag{8}$$
then the unique solution to problem (7) is the true signal x.
Remark 1
The proof can be found in A.1 in the Appendix. This proposition is an extension of Proposition 1 in [11] from simple sparse signals to block sparse signals. In Section 3.1, we adopt a convex-concave procedure algorithm to solve (8) approximately.
Now, we are ready to present the definition of the q-ratio BCMSV, which is developed based on the q-ratio block sparsity.
Definition 2
For any real number s∈[1,p], q∈(1,∞], and matrix \(A\in \mathbb {R}^{m\times N}\), the q-ratio block constrained minimal singular value (BCMSV) of A is defined as

$$\beta_{q,s}(A) = \min_{\mathbf{z}\neq \mathbf{0},\; k_q(\mathbf{z})\leq s} \frac{\lVert A\mathbf{z}\rVert_2}{\lVert \mathbf{z}\rVert_{2,q}}. \tag{9}$$
Remark 2
For a measurement matrix A with unit-norm columns, it is obvious that β_{q,s}(A)≤1, since ∥Ae_{i}∥_{2}=1, ∥e_{i}∥_{2,q}=1, and k_{q}(e_{i})=1, where e_{i} is the ith canonical basis vector of \(\mathbb {R}^{N}\). Moreover, when q and A are fixed, β_{q,s}(A) is non-increasing with respect to s. The q-ratio BCMSV also depends on the block size n; we suppress this parameter for the sake of simplicity. Another interesting observation is that for any \(\alpha \in \mathbb {R}\), we have β_{q,s}(αA)=|α|β_{q,s}(A). This fact, together with Theorem 1 in Section 2.2, implies that when a measurement matrix αA is adopted, increasing the measurement energy through α proportionally reduces the mixed ℓ_{2}/ℓ_{q} norm of the reconstruction errors. Compared to the block RIP [16], there are three main advantages of using the q-ratio BCMSV:

- It is computable (see the algorithm in Section 3.2).
- The proof procedures and results of the recovery error bounds are more concise (details in Section 2.2).
- The q-ratio BCMSV-based recovery bounds are smaller (better) than the block RIC-based bounds, as shown in Section 3.3 (see also [11, 17] for another two specific examples).
For different values of q, we have the following important inequality, which plays a crucial role in deriving the probabilistic behavior of β_{q,s}(A) from the existing results established in [17].
Proposition 2
If 1<q_{2}≤q_{1}≤∞, then for any real number \(1\leq s\leq p^{1/\tilde {q}}\) with \(\tilde {q}=\frac {q_{2}(q_{1}-1)}{q_{1}(q_{2}-1)}\), we have

$$\beta_{q_1,s}(A) \;\geq\; \beta_{q_2,s^{\tilde{q}}}(A) \;\geq\; s^{-\tilde{q}}\,\beta_{q_1,s^{\tilde{q}}}(A). \tag{10}$$
Remark 3
The proof can be found in A.2 in the Appendix. Letting q_{1}=∞ and q_{2}=2 (thus, \(\tilde {q}=2\)), we have \(\beta _{\infty,s}(A)\geq \beta _{2,s^{2}}(A)\geq \frac {1}{s^{2}}\beta _{\infty,s^{2}}(A)\). If q_{1}≥q_{2}>1, then \(\tilde {q}=\frac {q_{2}(q_{1}-1)}{q_{1}(q_{2}-1)}=1+\frac {q_{1}-q_{2}}{q_{1}(q_{2}-1)}\geq 1\), so \(\beta _{q_{2},s^{\tilde {q}}}(A)\leq \beta _{q_{2},s}(A)\). Similarly, we have \(\beta _{q_{2},t}(A)\geq \frac {1}{t}\beta _{q_{1},t}(A)\) for any \(t\in [1,p]\) by letting \(t=s^{\tilde {q}}\) in (10). Based on these facts, we cannot obtain monotonicity with respect to q when s and A are fixed. However, since k_{q}(z)≤p for any \(\mathbf {z}\in \mathbb {R}^{N}\) with p blocks, it holds trivially that β_{q,p}(A) is non-decreasing with respect to q, by the non-increasing property of the mixed ℓ_{2}/ℓ_{q} norm.
Recovery error bounds
In this section, we derive the recovery error bounds in terms of the mixed ℓ_{2}/ℓ_{q} norm and the mixed ℓ_{2}/ℓ_{1} norm via the q-ratio BCMSV of the measurement matrix. We focus on three renowned convex relaxation algorithms for block sparse signal recovery from (1): the BBP, the BDS, and the group lasso.
BBP: \(\min \limits _{\mathbf {z}\in \mathbb {R}^{N}}\,\,\lVert \mathbf {z}\rVert _{2,1}\,\,\,\text {s.t.}\,\,\,\lVert \mathbf {y}A\mathbf {z}\rVert _{2}\leq \zeta \).
BDS: \(\min \limits _{\mathbf {z}\in \mathbb {R}^{N}}\,\,\lVert \mathbf {z}\rVert _{2,1}\,\,\,\text {s.t.}\,\,\,\lVert A^{T}(\mathbf {y}A\mathbf {z})\rVert _{2,\infty }\leq \mu \).
Group lasso: \(\min \limits _{\mathbf {z}\in \mathbb {R}^{N}}\frac {1}{2}\lVert \mathbf {y}A\mathbf {z}\rVert _{2}^{2}+\mu \lVert \mathbf {z}\rVert _{2,1}\).
Here, ζ and μ are parameters used in the constraints to control the noise level. We first present the following main results on recovery error bounds for the case when the true signal x is block k-sparse.
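Of the three programs, the group lasso is the easiest to solve without a general-purpose convex solver, since its objective is a sum of a smooth term and the mixed ℓ_{2}/ℓ_{1} penalty, whose proximal operator is block soft-thresholding. The following is a minimal numpy sketch (our own, using plain ISTA; the paper does not prescribe a solver):

```python
import numpy as np

def block_soft_threshold(z, n, t):
    """Proximal operator of t*||.||_{2,1}: shrink each length-n block toward 0."""
    B = z.reshape(-1, n)
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return (scale * B).ravel()

def group_lasso(A, y, n, mu, iters=500):
    """Proximal-gradient (ISTA) solver for
    min_z 0.5*||y - A z||_2^2 + mu*||z||_{2,1}."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ z - y)
        z = block_soft_threshold(z - grad / L, n, mu / L)
    return z
```

With step size 1/L, each iteration is guaranteed not to increase the objective, so the returned iterate is at least as good as the zero initialization.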
Theorem 1
Suppose x is block k-sparse. For any q∈(1,∞], we have: 1) If ∥ε∥_{2}≤ζ, then the solution \(\hat {\mathbf {x}}\) to the BBP obeys

$$\lVert \hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q} \leq \frac{2\zeta}{\beta_{q,2^{\frac{q}{q-1}}k}(A)}, \qquad \lVert \hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1} \leq \frac{4k^{1-1/q}\zeta}{\beta_{q,2^{\frac{q}{q-1}}k}(A)}. \tag{11, 12}$$
2) If the noise ε in the BDS satisfies ∥A^{T}ε∥_{2,∞}≤μ, then the solution \(\hat {\mathbf {x}}\) to the BDS obeys

$$\lVert \hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q} \leq \frac{4k^{1-1/q}\mu}{\beta^2_{q,2^{\frac{q}{q-1}}k}(A)}, \qquad \lVert \hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1} \leq \frac{8k^{2-2/q}\mu}{\beta^2_{q,2^{\frac{q}{q-1}}k}(A)}. \tag{13, 14}$$
3) If the noise ε in the group lasso satisfies ∥A^{T}ε∥_{2,∞}≤κμ for some κ∈(0,1), then the solution \(\hat {\mathbf {x}}\) to the group lasso obeys

$$\lVert \hat{\mathbf{x}}-\mathbf{x}\rVert_{2,q} \leq \frac{1+\kappa}{1-\kappa}\cdot\frac{2k^{1-1/q}\mu}{\beta^2_{q,\left(\frac{2}{1-\kappa}\right)^{\frac{q}{q-1}}k}(A)}, \qquad \lVert \hat{\mathbf{x}}-\mathbf{x}\rVert_{2,1} \leq \frac{1+\kappa}{(1-\kappa)^2}\cdot\frac{4k^{2-2/q}\mu}{\beta^2_{q,\left(\frac{2}{1-\kappa}\right)^{\frac{q}{q-1}}k}(A)}. \tag{15, 16}$$
Remark 4
The proof can be found in A.3 in the Appendix. Obviously, if \(\beta _{q,2^{\frac {q}{q-1}}k}(A)\neq 0\) in (11) and (12), then the noise-free BBP (7) can uniquely recover any block k-sparse signal by letting ζ=0.
Remark 5
The mixed ℓ_{2}/ℓ_{q} norm error bounds generalize the existing results in [17] (q=2 and ∞) to any 1<q≤∞, and those in [11] (simple sparse signal recovery) to block sparse signal recovery. These bounds depend on the q-ratio BCMSV of the measurement matrix A, which is bounded away from zero for sub-Gaussian random matrices and can be computed approximately by a specific algorithm; both points are discussed later.
Remark 6
As shown in the literature, the block RIC-based recovery error bounds for the BBP [16], the BDS [24], and the group lasso [25] are complicated. In contrast, as presented in this theorem, the q-ratio BCMSV-based bounds are much more concise, and the corresponding derivations, given in the Appendix, are much less involved.
Next, we extend Theorem 1 to the case when the signal is block compressible, in the sense that it can be approximated well by a block k-sparse signal. Given a block compressible signal x, let the mixed ℓ_{2}/ℓ_{1} error of the best block k-sparse approximation of x be \(\phi _{k}(\mathbf {x})=\underset {\mathbf {z}\in \mathbb {R}^{N},\lVert \mathbf {z}\rVert _{2,0}=k}{\inf } \lVert \mathbf {x}-\mathbf {z}\rVert _{2,1}\), which measures how close x is to a block k-sparse signal.
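The approximation error ϕ_k(x) has a closed form: the infimum is attained by keeping the k blocks of largest ℓ_{2} norm, so ϕ_k(x) is the sum of the norms of the remaining blocks (this fact is also used at the start of the proof in A.4). A small numpy helper of our own making:

```python
import numpy as np

def phi_k(x, n, k):
    """Mixed l2/l1 error of the best block k-sparse approximation:
    the sum of the l2 norms of all but the k largest blocks of x."""
    b = np.sort(np.linalg.norm(np.asarray(x, float).reshape(-1, n), axis=1))
    return b[:-k].sum() if k > 0 else b.sum()
```

For a block k-sparse signal, ϕ_k(x)=0, and Theorem 2 then reduces to bounds of the same flavor as Theorem 1.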
Theorem 2
Suppose that x is block compressible. For any 1<q≤∞, we have 1) If ∥ε∥_{2}≤ζ, then the solution \(\hat {\mathbf {x}}\) to the BBP obeys
2) If the noise ε in the BDS satisfies ∥A^{T}ε∥_{2,∞}≤μ, then the solution \(\hat {\mathbf {x}}\) to the BDS obeys
3) If the noise ε in the group lasso satisfies ∥A^{T}ε∥_{2,∞}≤κμ for some κ∈(0,1), then the solution \(\hat {\mathbf {x}}\) to the group lasso obeys
Remark 7
The proof can be found in A.4 in the Appendix. All the error bounds consist of two components: one is caused by the measurement error, and the other is due to the sparsity defect.
Remark 8
Compared with Theorem 1, we need stronger conditions to achieve valid error bounds. Concisely, we require \(\beta _{q,4^{\frac {q}{q-1}}k}(A)>0\) for the BBP and the BDS, and \(\beta _{q,\left (\frac {4}{1-\kappa }\right)^{\frac {q}{q-1}}k}(A)>0\) for the group lasso in the block compressible case, while \(\beta _{q,2^{\frac {q}{q-1}}k}(A)>0\) and \(\beta _{q,\left (\frac {2}{1-\kappa }\right)^{\frac {q}{q-1}}k}(A)>0\) suffice in the block sparse case, respectively.
Random matrices
In this section, we study the properties of the q-ratio BCMSV of sub-Gaussian random matrices. A random vector \(\mathbf {x}\in \mathbb {R}^{N}\) is called isotropic and sub-Gaussian with constant L if for all \(\mathbf {u}\in \mathbb {R}^{N}\) it holds that \(E\langle \mathbf {x},\mathbf {u}\rangle ^{2}=\lVert \mathbf {u}\rVert _{2}^{2}\) and \(P(|\langle \mathbf {x}, \mathbf {u}\rangle |\geq t)\leq 2\exp \left (-\frac {t^{2}}{L\lVert \mathbf {u}\rVert _{2}^{2}}\right)\). Then, as shown in Theorem 2 of [17], we have the following lemma.
Lemma 1
([17]) Suppose the rows of the scaled measurement matrix \(\sqrt {m}A\) are i.i.d. isotropic and sub-Gaussian random vectors with constant L. Then, there exist constants c_{1} and c_{2} such that for any η>0 and m≥1 satisfying
we have
and
Then, as a direct consequence of Proposition 2 (i.e., if 1<q<2, β_{q,s}(A)≥s^{−1}β_{2,s}(A); if \(2\leq q\leq \infty\), \(\beta _{q,s}(A)\geq \beta _{2,s^{\frac {2(q-1)}{q}}}(A)\)) and Lemma 1, we have the following probabilistic statements for β_{q,s}(A).
Theorem 3
Under the assumptions and notations of Lemma 1, it holds that
1) When 1<q<2, there exist constants c_{1} and c_{2} such that for any η>0 and m≥1 satisfying
we have
2) When 2≤q≤∞, there exist constants c_{1} and c_{2} such that for any η>0 and m≥1 satisfying
we have
Remark 9
Theorem 3 shows that for sub-Gaussian random matrices, the q-ratio BCMSV is bounded away from zero as long as the number of measurements is large enough. Sub-Gaussian random matrices include Gaussian and Bernoulli ensembles.
Numerical experiments and results
In this section, we introduce a convex-concave method to solve the sufficient condition (8) so as to obtain the maximal block sparsity k and present an algorithm to compute the q-ratio BCMSV. We also compare the q-ratio BCMSV-based bounds with the block RIC-based bounds through the BBP.
Solving the optimization problem (8)
According to Proposition 1, given a q∈(1,∞], we need to solve the optimization problem (8) to obtain the maximal block sparsity k that guarantees that all block k-sparse signals can be uniquely recovered by (7). Solving (8) is equivalent to solving the problem:

$$\max_{\mathbf{z}\in\mathbb{R}^N}\; \lVert \mathbf{z}\rVert_{2,q} \quad \text{s.t.} \quad A\mathbf{z}=\mathbf{0},\ \lVert \mathbf{z}\rVert_{2,1}\leq 1. \tag{27}$$
However, maximizing the mixed ℓ_{2}/ℓ_{q} norm over a convex feasible set is a non-convex problem. Here, we adopt the convex-concave procedure (CCP) (see [26] for details) to solve problem (27) for any q∈(1,∞]. The algorithm is presented as follows:
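The displayed algorithm did not survive extraction; as a hedged sketch of the CCP idea, each iteration maximizes the linearization of the norm, \(\langle \nabla \lVert \mathbf{z}^t\rVert_{2,q}, \mathbf{z}\rangle\), over the feasible set. For block size n=1 (where the feasible set is a genuine polyhedron) this subproblem is a linear program; for n>1 it becomes a second-order cone program. The following n=1 sketch uses scipy and is our own reconstruction, not the authors' exact algorithm:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.linalg import null_space

def ccp_max_lq_over_kernel(A, q=2.0, iters=20, seed=0):
    """CCP sketch for max ||z||_q s.t. A z = 0, ||z||_1 <= 1 (block size n = 1).
    Each step maximizes <g, z> over the polyhedron, written as an LP
    in (u, v) with z = u - v and u, v >= 0."""
    m, N = A.shape
    rng = np.random.default_rng(seed)
    B = null_space(A)                        # orthonormal basis of ker A
    z = B @ rng.standard_normal(B.shape[1])  # random nonzero kernel vector
    z /= np.abs(z).sum()                     # scale onto the l1 sphere
    for _ in range(iters):
        g = np.sign(z) * np.abs(z) ** (q - 1.0)   # gradient direction of ||.||_q
        c = np.concatenate([-g, g])               # linprog minimizes, so negate
        res = linprog(c,
                      A_ub=np.ones((1, 2 * N)), b_ub=[1.0],   # sum(u)+sum(v) <= 1
                      A_eq=np.hstack([A, -A]), b_eq=np.zeros(m),
                      bounds=(0, None), method="highs")
        z = res.x[:N] - res.x[N:]
    k_q = (np.abs(z).sum() / np.linalg.norm(z, q)) ** (q / (q - 1.0))
    return z, 2.0 ** (q / (1.0 - q)) * k_q   # local value of the quantity in (8)
```

Since the previous iterate is always feasible for the LP, the linearized objective never decreases, which is the usual CCP monotonicity argument.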
We implement the algorithm to solve (27) under the following settings. Let A be either a Bernoulli or a Gaussian random matrix with N=256 and varying m, block size n, and q; specifically, m=64,128,192, n=1,2,4,8, and q=2,4,16,128, respectively. The results are summarized in Table 1. Note that when n=1, the algorithm is identical to the one in [11]. The main findings are as follows: (i) comparing the results between Bernoulli and Gaussian random matrices under the same settings shows no substantial difference, so we focus on the left (Bernoulli) part of the table; (ii) the results are not monotone with respect to q (see the row with n=4, m=192), which verifies the conclusion in Remark 3; (iii) when m is the only variable, the maximal block sparsity increases as m increases; and (iv) conversely, when n is the only variable, the maximal block sparsity decreases as n increases, which is in line with the main result of ([27], Theorem 3.1).
Computing the q-ratio BCMSVs
Computing the q-ratio BCMSV (9) is equivalent to solving

$$\min_{\mathbf{z}}\; \lVert A\mathbf{z}\rVert_2 \quad \text{s.t.} \quad \lVert \mathbf{z}\rVert_{2,q} = 1,\ \ \lVert \mathbf{z}\rVert_{2,1} \leq s^{\frac{q-1}{q}}. \tag{28}$$
Since the constraint set is not convex, this is a non-convex optimization problem. In order to solve (28), we use the Matlab function fmincon as in [11] and define z=z^{+}−z^{−} with z^{+}= max(z,0) and z^{−}= max(−z,0). Consequently, (28) can be reformulated as:
Due to the existence of local minima, we perform an experiment to determine a reasonable number of random initializations needed to reach the "global" minimum, shown in Fig. 1. In the experiment, we calculate the q-ratio BCMSV of a fixed Bernoulli random matrix with unit-norm columns of size 40×64, n=s=4, and varying q=2,4,8, respectively. Fifty initializations are carried out for each q. The figure shows that after about 30 initializations, the estimate \(\hat {\beta }_{q,s}\) of \(\beta _{q,s}\) stabilizes, so in the following experiments, we repeat the algorithm 40 times and choose the smallest value \(\hat {\beta }_{q,s}\) as the "global" minimum. Varying m, s, and n likewise indicates that 40 repetitions are a reasonable choice (not shown).
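A rough Python analogue of this multi-start scheme (our own sketch, mirroring but not reproducing the fmincon-based Matlab code) can be written with scipy's SLSQP solver; the constraint k_q(z) ≤ s is rewritten as ∥z∥_{2,1} ≤ s^{(q−1)/q} under the normalization ∥z∥_{2,q}=1. Because each run is only a local minimization, the returned value is an upper estimate of β_{q,s}(A).

```python
import numpy as np
from scipy.optimize import minimize

def bcmsv_estimate(A, n, q, s, restarts=40, seed=0):
    """Multi-start local minimization of ||A z||_2^2 over
    {||z||_{2,q} = 1, k_q(z) <= s}; returns an estimate of beta_{q,s}(A)."""
    N = A.shape[1]
    bn = lambda z: np.linalg.norm(z.reshape(-1, n), axis=1)  # block l2 norms
    cons = [
        {"type": "eq", "fun": lambda z: np.linalg.norm(bn(z), q) - 1.0},
        # k_q(z) <= s  <=>  ||z||_{2,1} <= s^{(q-1)/q} on the ||.||_{2,q} sphere
        {"type": "ineq", "fun": lambda z: s ** ((q - 1.0) / q) - bn(z).sum()},
    ]
    rng = np.random.default_rng(seed)
    starts = [np.eye(N)[0]] + [rng.standard_normal(N) for _ in range(restarts - 1)]
    best = np.inf
    for z0 in starts:
        z0 = z0 / np.linalg.norm(bn(z0), q)
        res = minimize(lambda z: np.sum((A @ z) ** 2), z0, method="SLSQP",
                       constraints=cons, options={"maxiter": 300})
        feasible = (abs(cons[0]["fun"](res.x)) < 1e-5
                    and cons[1]["fun"](res.x) > -1e-5)
        if feasible and res.fun < best:
            best = res.fun
    return np.sqrt(best)
```

The canonical basis vector e_1 is always feasible (k_q(e_1)=1), so at least one start is well-posed; the smallest feasible local value over all starts plays the role of the "global" minimum above.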
Next, we illustrate through experiments the properties of β_{q,s} pointed out in Remarks 2 and 3. We set N=64 with three different block sizes n=1,4,8 (i.e., number of blocks p=64,16,8), three different m=40,50,60, three different q=2,4,8, and three different s=2,4,8. Bernoulli random matrices with unit-norm columns are used. Results are listed in Table 2. They are in line with the theoretical results:
(i) β_{q,s} increases as m increases in all cases, given that the other parameters are fixed.

(ii) β_{q,s} decreases as s increases in most cases, given that the other parameters are fixed. There are exceptions when m=40, n=8 with s=4 and s=8 under q=4,8, respectively; however, the difference is about 0.0002, which is possibly caused by numerical approximation.

(iii) Monotonicity of β_{q,s} does not hold with respect to q, even when the other parameters are fixed.
Comparing error bounds
Here, we compare the q-ratio BCMSV-based bounds against the block RIC-based bounds for the BBP under different settings. The block RIC-based bound is
if A satisfies the block RIP of order 2k, i.e., the block RIC \(\delta _{2k}(A)<\sqrt {2}-1\) [14, 17]. By Hölder's inequality, one can obtain the mixed ℓ_{2}/ℓ_{q} norm bound
for 0<q≤2.
We compare the two bounds (31) and (12). Without loss of generality, let ζ=1. δ_{2k}(A) is approximated using Monte Carlo simulations: we randomly choose 1000 submatrices of \(A\in \mathbb {R}^{m\times N}\) of size m×2nk and take the maximum of \(\max \left (\sigma _{\text {max}}^{2}-1,\,1-\sigma _{\text {min}}^{2}\right)\) over all sampled submatrices. This approximated block RIC is always smaller than or equal to the exact block RIC; thus, the error bounds based on the exact block RIC are always larger than those based on the approximation. Therefore, it suffices to show that the q-ratio BCMSV gives a sharper error bound than the approximated block RIC.
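The Monte Carlo approximation of the block RIC described above can be sketched in a few lines of numpy (our own helper; to match the paper's δ_{2k}, call it with 2k blocks):

```python
import numpy as np

def approx_block_ric(A, n, k, trials=1000, seed=0):
    """Monte Carlo lower estimate of the block RIC delta_k(A):
    sample k column blocks and take max(sigma_max^2 - 1, 1 - sigma_min^2)."""
    m, N = A.shape
    p = N // n
    rng = np.random.default_rng(seed)
    delta = 0.0
    for _ in range(trials):
        S = rng.choice(p, size=k, replace=False)
        cols = np.concatenate([np.arange(i * n, (i + 1) * n) for i in S])
        sv = np.linalg.svd(A[:, cols], compute_uv=False)
        delta = max(delta, sv[0] ** 2 - 1.0, 1.0 - sv[-1] ** 2)
    return delta
```

Since sampling can only miss the worst submatrix, the estimate never exceeds the exact block RIC, which is exactly the property exploited in the comparison above.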
We use unit-norm-column submatrices of a row-randomly-permuted Hadamard matrix (an orthogonal Bernoulli matrix) with N=64, k=1,2,4, n=1,2, q=1.8, and a variety of m≤64 to approximate the q-ratio BCMSV and the block RIC. Besides the Hadamard matrix, we also test Bernoulli random matrices and Gaussian random matrices with different configurations, which yield very few qualified block RICs. In the simulation results of [17], the authors showed that under all considered cases for Gaussian random matrices, \(\delta _{2k}(A)>\sqrt {2}-1\), which is consistent with our finding. Figure 2 shows that the q-ratio BCMSV-based bounds are smaller than those based on the approximated block RIC. Note that when m approaches N, β_{q,s}(A)→1 and δ_{2k}(A)→0; as a result, the q-ratio BCMSV-based bounds are smaller than 2.2, while the block RIC-based bounds are larger than or equal to 4.
Conclusion and discussion
In this study, we introduced the q-ratio block sparsity measure and the q-ratio BCMSV. Theoretically, through the q-ratio block sparsity measure and the q-ratio BCMSV, we (i) established a sufficient condition for unique noise-free BBP recovery; (ii) derived both the mixed ℓ_{2}/ℓ_{q} norm and the mixed ℓ_{2}/ℓ_{1} norm bounds of recovery errors for the BBP, the BDS, and the group lasso estimator; and (iii) proved that the q-ratio BCMSV is bounded away from zero for sub-Gaussian random matrices when the number of measurements is relatively large. Afterwards, we used numerical experiments via two algorithms to illustrate the theoretical results. In addition, we demonstrated through simulations that the q-ratio BCMSV-based error bounds are much tighter than those based on the block RIP.
There are still some issues left for future work. For example, analogously to the case of the q-ratio CMSV, the geometrical properties of the q-ratio BCMSV could be investigated to derive sufficient conditions and error bounds for block sparse signal recovery.
Appendix  Proofs
The proofs largely follow those in [11], extended to block sparse signals. We list all the details here for the sake of completeness.
A.1
Proof
(Proof of Proposition 1) Suppose there exist z∈kerA∖{0} and |S|≤k such that \(\lVert \mathbf {z}_{S}\rVert _{2,1}\geq \lVert \mathbf {z}_{S^{c}}\rVert _{2,1}\); then we have
which is identical to \(k\geq 2^{\frac {q}{1-q}} k_{q}(\mathbf {z}),\quad \forall q\in (1, \infty ]\).
Conversely, suppose ∃ q∈(1,∞] such that \(k<\min \limits _{\mathbf {z}\in \text {ker} A\setminus \{\mathbf {0}\}}\,\,2^{\frac {q}{1-q}}k_{q}(\mathbf {z})\); then \(\lVert \mathbf {z}_{S}\rVert _{2,1}<\lVert \mathbf {z}_{S^{c}}\rVert _{2,1}\) holds for all z∈kerA∖{0} and |S|≤k, which implies that the block null space property of order k is fulfilled; thus, any block k-sparse signal x can be recovered via (7). □
A.2
Proof
(Proof of Proposition 2.)
(i) Prove the left hand side of (10):
For any \(\mathbf {z}\in \mathbb {R}^{N}\setminus \{\mathbf {0}\}\) and 1<q_{2}≤q_{1}≤∞, suppose \(k_{q_{1}}(\mathbf {z})\leq s\); then we get \(\left (\frac {\lVert \mathbf {z}\rVert _{2,1}}{\lVert \mathbf {z}\rVert _{{2,q_{1}}}}\right)^{\frac {q_{1}}{q_{1}-1}}\leq s\Rightarrow \lVert \mathbf {z}\rVert _{2,1}\leq s^{\frac {q_{1}-1}{q_{1}}}\lVert \mathbf {z}\rVert _{2,q_{1}}\leq s^{\frac {q_{1}-1}{q_{1}}}\lVert \mathbf {z}\rVert _{2,q_{2}}\). Since \(\tilde {q}=\frac {q_{2}(q_{1}-1)}{q_{1}(q_{2}-1)}\) and \( \frac {\lVert \mathbf {z}\rVert _{2,1}}{\lVert \mathbf {z}\rVert _{2,q_{2}}}\leq s^{\frac {q_{1}-1}{q_{1}}}\), we have
from which we can infer
Therefore, we can get the left hand side of (10) through
(ii) Verify the right hand side of (10):
Suppose \(k_{q_{2}}(\mathbf {z})\leq s^{\tilde {q}}\) for any \(\mathbf {z}\in \mathbb {R}^{N}\setminus \{\mathbf {0}\}\). By the non-increasing property of the q-ratio block sparsity with respect to q and q_{2}≤q_{1}≤∞, we have the following two inequalities: \(\frac {\lVert \mathbf {z}\rVert _{2,1}}{\lVert \mathbf {z}\rVert _{2,\infty }}=k_{\infty }(\mathbf {z})\leq k_{q_{2}}(\mathbf {z})\leq s^{\tilde {q}}\) and \(k_{q_{1}}(\mathbf {z})\leq k_{q_{2}}(\mathbf {z})\leq s^{\tilde {q}}\). Since 1<q_{2}≤q_{1}≤∞, the former inequality implies that \(\frac {\lVert \mathbf {z}\rVert _{2,q_{2}}}{\lVert \mathbf {z}\rVert _{2,q_{1}}}\leq \frac {\lVert \mathbf {z}\rVert _{2,1}}{\lVert \mathbf {z}\rVert _{2,\infty }}\leq s^{\tilde {q}}\Rightarrow \frac {\lVert \mathbf {z}\rVert _{2,q_{1}}}{\lVert \mathbf {z}\rVert _{2,q_{2}}}\geq s^{-\tilde {q}}\). The latter inequality implies that
Therefore, we can obtain the right hand side of (10) through
□
A.3
Proof
(Proof of Theorem 1.) The proof follows similar arguments to those in [6, 10] and can be divided into two main steps.
Step 1: We first derive upper bounds on the q-ratio block sparsity of the residual \(\mathbf {h}=\hat {\mathbf {x}}-\mathbf {x}\) for all algorithms. As x is block k-sparse, we assume that bsupp(x)=S with |S|≤k.
For the BBP and the BDS, since \(\lVert \hat {\mathbf {x}}\rVert _{2,1}=\lVert \mathbf {x}+\mathbf {h}\rVert _{2,1}\) is the minimum among all z satisfying the constraints of BBP and BDS (including the true signal x), we have
which can be simplified to \(\lVert \mathbf {h}_{S^{c}}\rVert _{2,1}\leq \lVert \mathbf {h}_{S}\rVert _{2,1}\). Thereby, we can obtain the following inequality:
which is equivalent to
For the group lasso, since the noise ε satisfies ∥A^{T}ε∥_{2,∞}≤κμ for κ∈(0,1) and \(\hat {\mathbf {x}}\) is a solution of the group lasso, we have
Substituting y by Ax+ε leads to
The second-to-last inequality follows by applying the Cauchy-Schwarz inequality blockwise, and the last inequality can be written as
Therefore, it holds that
which can be simplified to
Thus, we can obtain
which can be reformulated as
Step 2: Obtain an upper bound on ∥Ah∥_{2} and then bound the mixed ℓ_{2}/ℓ_{q} norm and the mixed ℓ_{2}/ℓ_{1} norm of the recovery error vector h via the q-ratio BCMSV for each algorithm.
(i) For the BBP, since both x and \(\hat {\mathbf {x}}\) satisfy the constraint ∥y−Az∥_{2}≤ζ, by using the triangle inequality, we can get
Following from the definition of the q-ratio BCMSV and \(k_{q}(\mathbf {h})\leq 2^{\frac {q}{q-1}}k\), we have
Furthermore, we can obtain \(\lVert \mathbf {h}\rVert _{2,1}\leq \frac {4k^{1-1/q}\zeta }{\beta _{q,2^{\frac {q}{q-1}}k}(A)}\) by using the property ∥h∥_{2,1}≤2k^{1−1/q}∥h∥_{2,q}.
(ii) Similarly for the BDS, since both x and \(\hat {\mathbf {x}}\) satisfy the constraint ∥A^{T}(y−Az)∥_{2,∞}≤μ, we have
By applying the Cauchy-Schwarz inequality again as in Step 1, we obtain
Finally, with the definition of the q-ratio BCMSV, \(k_{q}(\mathbf {h})\leq 2^{\frac {q}{q-1}}k\) and ∥h∥_{2,1}≤2k^{1−1/q}∥h∥_{2,q}, we get the upper bounds of the mixed ℓ_{2}/ℓ_{q} norm and the mixed ℓ_{2}/ℓ_{1} norm for h:
and \(\lVert \mathbf {h}\rVert _{2,1}\leq 2k^{1-1/q}\lVert \mathbf {h}\rVert _{2,q}\leq \frac {8k^{2-2/q}}{\beta _{q,2^{\frac {q}{q-1}}k}^{2}(A)}\mu \).
(iii) For the group lasso, with ∥A^{T}ε∥_{2,∞}≤κμ, we have
Moreover, since \(\hat {\mathbf {x}}\) is the solution of the group lasso, the optimality condition yields that
where the subgradient in \(\partial \lVert \hat {\mathbf {x}}\rVert _{2,1}\) for the ith block is \(\hat {\mathbf {x}}_{i}/\lVert \hat {\mathbf {x}}_{i}\rVert _{2}\) if \(\hat {\mathbf {x}}_{i}\neq 0\) and some vector g satisfying ∥g∥_{2}≤1 if \(\hat {\mathbf {x}}_{i}= 0\) (which follows from the definition of the subgradient). Thus, we have \(\lVert A^{T}(\mathbf {y}-A\hat {\mathbf {x}})\rVert _{2,\infty }\leq \mu \), which leads to
Following the inequality (34), we get
As a result, since \(k_{q}(\mathbf {h})\leq \left (\frac {2}{1-\kappa }\right)^{\frac {q}{q-1}}k\) and \(\lVert \mathbf {h}\rVert _{2,1}\leq \frac {2}{1-\kappa }k^{1-1/q}\lVert \mathbf {h}\rVert _{2,q}\), we can obtain
which is equivalent to
and \(\lVert \mathbf {h}\rVert _{2,1}\leq \frac {1+\kappa }{(1-\kappa)^{2}}\cdot \frac {4k^{2-2/q}}{\beta _{q,\left(\frac {2}{1-\kappa }\right)^{\frac {q}{q-1}}k}^{2}(A)}\mu \). □
A.4
Proof
Since the infimum of ϕ_{k}(x) is achieved by a block k-sparse signal z whose nonzero blocks equal the largest k blocks of x, indexed by S, we have \(\phi _{k}(\mathbf {x})=\lVert \mathbf {x}_{S^{c}}\rVert _{2,1}\); let \(\mathbf {h}=\hat {\mathbf {x}}-\mathbf {x}\). As in the proof of Theorem 1, the derivation has two steps.
Step 1: For all algorithms, bound ∥h∥_{2,1} via ∥h∥_{2,q} and ϕ_{k}(x).
First for the BBP and the BDS, since \(\lVert \hat {\mathbf {x}}\rVert _{2,1}=\lVert \mathbf {x}+\mathbf {h}\rVert _{2,1}\) is the minimum among all z satisfying the constraints of the BBP and the BDS, we have
which is equivalent to
In consequence, we can get
As for the group lasso, by using (32), we can obtain
which implies that
Therefore, we have
Step 2: Verify that the q-ratio block sparsity of h admits a lower bound in terms of ∥h∥_{2,q} for each algorithm, when ∥h∥_{2,q} is larger than the part of the recovery bound caused by the measurement error.
(i) For the BBP, we assume that h≠0 and \(\lVert \mathbf {h}\rVert _{2,q}>\frac {2\zeta }{\beta _{q,4^{\frac {q}{q-1}}k}(A)}\); otherwise, (17) holds trivially. Since ∥Ah∥_{2}≤2ζ (see (33)), we have \(\lVert \mathbf {h}\rVert _{2,q}>\frac {\lVert A\mathbf {h}\rVert _{2}}{\beta _{q,4^{\frac {q}{q-1}}k}(A)}\). Then, it holds that
which implies that
Combining this with (39), we have ∥h∥_{2,q}<k^{1/q−1}ϕ_{k}(x), which completes the proof of (17). The mixed ℓ_{2}/ℓ_{1} norm error bound (18) follows immediately from (17) and (39).
(ii) As for the BDS, we similarly assume h≠0 and \(\lVert \mathbf {h}\rVert _{2,q}>\frac {8k^{1-1/q}}{\beta _{q,4^{\frac {q}{q-1}}k}^{2}(A)}\mu \); otherwise, (19) holds trivially. As \(\lVert A\mathbf {h}\rVert _{2}^{2}\leq 2\mu \lVert \mathbf {h}\rVert _{2,1}\) (see (34)), we have \(\lVert \mathbf {h}\rVert _{2,q}>\frac {4k^{1-1/q}}{\beta _{q,4^{\frac {q}{q-1}}k}^{2}(A)}\cdot \frac {\lVert A\mathbf {h}\rVert _{2}^{2}}{\lVert \mathbf {h}\rVert _{2,1}}\). Then, we can get
which implies that
Combining this with (39), we have ∥h∥_{2,q}<k^{1/q−1}ϕ_{k}(x), which completes the proof of (19). Then (20) follows from (19) and (39).
(iii) For the group lasso, we assume that h≠0 and \(\lVert \mathbf {h}\rVert _{2,q}>\frac {1+\kappa }{1-\kappa }\cdot \frac {4k^{1-1/q}}{\beta _{q,(\frac {4}{1-\kappa })^{\frac {q}{q-1}}k}^{2}(A)}\mu \); otherwise, (21) holds trivially. Since in this case \(\lVert A\mathbf {h}\rVert _{2}^{2}\leq (1+\kappa)\mu \lVert \mathbf {h}\rVert _{2,1}\) (see (35)), we have \(\lVert \mathbf {h}\rVert _{2,q}>\frac {4k^{1-1/q}}{(1-\kappa)\beta _{q,(\frac {4}{1-\kappa })^{\frac {q}{q-1}}k}^{2}(A)}\cdot \frac {\lVert A\mathbf {h}\rVert _{2}^{2}}{\lVert \mathbf {h}\rVert _{2,1}}\), which leads to
Combining this with (41), we have ∥h∥_{2,q}<k^{1/q−1}ϕ_{k}(x), which completes the proof of (21). Consequently, (22) follows from (21) and (41). □
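The quantity driving Step 2 in each case is the q-ratio block sparsity \(k_{q}(\mathbf {h})=\left (\lVert \mathbf {h}\rVert _{2,1}/\lVert \mathbf {h}\rVert _{2,q}\right)^{\frac {q}{q-1}}\): bounding it by some s is what licenses the q-ratio BCMSV inequality ∥Ah∥_{2}≥β_{q,s}(A)∥h∥_{2,q}. A minimal numerical sketch, assuming equal-sized blocks (function names are illustrative):

```python
import numpy as np

def mixed_norm(x, block_size, p):
    """Mixed l_{2,p} norm: the l_p norm of the per-block l_2 norms."""
    block_norms = np.linalg.norm(x.reshape(-1, block_size), axis=1)
    return np.linalg.norm(block_norms, ord=p)

def q_ratio_block_sparsity(x, block_size, q):
    """k_q(x) = (||x||_{2,1} / ||x||_{2,q})^{q/(q-1)}, for q in (1, inf)."""
    ratio = mixed_norm(x, block_size, 1) / mixed_norm(x, block_size, q)
    return ratio ** (q / (q - 1))

# For a block 2-sparse vector whose two nonzero blocks have equal l_2
# norm, k_q equals the number of nonzero blocks (here 2) for any q > 1.
h = np.array([3.0, 4.0, -4.0, 3.0, 0.0, 0.0])
kq = q_ratio_block_sparsity(h, block_size=2, q=2.0)
```

In general, k_q(x) is at most the number of nonzero blocks, with equality when the nonzero blocks have equal norms, which is why bounds of the form k_q(h) ≤ s certify that h behaves like a block sparse vector.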
Availability of data and materials
Please contact the corresponding author for data requests.
Abbreviations
 BBP:

Block BP
 BCMSV:

q-ratio block constrained minimal singular values
 BDS:

Block DS
 BP:

Basis pursuit
 CMSV:

ℓ1-constrained minimal singular value
 CS:

Compressive sensing
 DS:

Dantzig selector
 NSP:

Null space property
 RIC:

Restricted isometry constant
 RIP:

Restricted isometry property
References
 1
D. L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory. 52(4), 1289–1306 (2006).
 2
E. J. Candes, J. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006).
 3
S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing (Springer, 2013). https://doi.org/10.1007/978-0-8176-4948-7_1.
 4
A. S. Bandeira, E. Dobriban, D. G. Mixon, W. F. Sawin, Certifying the restricted isometry property is hard. IEEE Trans. Inf. Theory. 59(6), 3448–3450 (2013).
 5
A. M. Tillmann, M. E. Pfetsch, The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inf. Theory. 60(2), 1248–1259 (2014).
 6
G. Tang, A. Nehorai, Performance analysis of sparse recovery based on constrained minimal singular values. IEEE Trans. Sig. Process. 59(12), 5734–5745 (2011).
 7
S. S. Chen, D. L. Donoho, M. A. Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20(1), 33–61 (1998).
 8
E. J. Candes, T. Tao, The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. 35(6), 2313–2351 (2007). https://doi.org/10.1214/009053606000001523.
 9
R. Tibshirani, Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 58(1), 267–288 (1996). https://doi.org/10.1111/j.1467-9868.2011.00771.x.
 10
G. Tang, A. Nehorai, Computable performance bounds on sparse recovery. IEEE Trans. Sig. Process. 63(1), 132–141 (2015).
 11
Z. Zhou, J. Yu, Sparse recovery based on q-ratio constrained minimal singular values. Sig. Process. 155, 247–258 (2019).
 12
Z. Zhou, J. Yu, On q-ratio CMSV for sparse recovery. Sig. Process. (2019). https://doi.org/10.1016/j.sigpro.2019.07.003.
 13
R. G. Baraniuk, V. Cevher, M. F. Duarte, C. Hegde, Model-based compressive sensing. IEEE Trans. Inf. Theory. 56(4), 1982–2001 (2010).
 14
Y. C. Eldar, M. Mishali, Robust recovery of signals from a structured union of subspaces. IEEE Trans. Inf. Theory. 55(11), 5302–5316 (2009).
 15
H. Zamani, H. Bahrami, P. Mohseni, in Proc. IEEE Biomedical Circuits and Systems Conf. (BioCAS). On the use of compressive sensing (CS) exploiting block sparsity for neural spike recording, (2016), pp. 228–231. https://doi.org/10.1109/biocas.2016.7833773.
 16
Y. Gao, M. Ma, A new bound on the block restricted isometry constant in compressed sensing. J. Inequal. Appl. 2017(1), 174 (2017).
 17
G. Tang, A. Nehorai, Semidefinite programming for computable performance bounds on block-sparsity recovery. IEEE Trans. Sig. Process. 64(17), 4455–4468 (2016).
 18
Z. Zhou, J. Yu, Estimation of block sparsity in compressive sensing (2017). arXiv preprint arXiv:1701.01055.
 19
M. E. Lopes, in International Conference on Machine Learning. Estimating unknown sparsity in compressed sensing, (2013), pp. 217–225. http://proceedings.mlr.press/v28/lopes13.pdf.
 20
M. E. Lopes, Unknown sparsity in compressed sensing: denoising and inference. IEEE Trans. Inf. Theory. 62(9), 5145–5166 (2016).
 21
Y. Plan, R. Vershynin, One-bit compressed sensing by linear programming. Commun. Pure Appl. Math. 66(8), 1275–1297 (2013).
 22
R. Vershynin, in Sampling Theory, a Renaissance. Estimation in high dimensions: a geometric perspective (Springer, Cham, 2015), pp. 3–66.
 23
M. Stojnic, F. Parvaresh, B. Hassibi, On the reconstruction of block-sparse signals with an optimal number of measurements. IEEE Trans. Sig. Process. 57(8), 3075–3085 (2009).
 24
H. Liu, J. Zhang, X. Jiang, J. Liu, in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 9, ed. by Y. W. Teh, M. Titterington. The group Dantzig selector (PMLR, Chia Laguna Resort, Sardinia, 2010), pp. 461–468. http://proceedings.mlr.press/v9/liu10a.html.
 25
R. Garg, R. Khandekar, in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 15, ed. by G. Gordon, D. Dunson, and M. Dudík. Block-sparse solutions using kernel block RIP and its application to group lasso (PMLR, Fort Lauderdale, 2011), pp. 296–304. http://proceedings.mlr.press/v15/garg11a.html.
 26
T. Lipp, S. Boyd, Variations and extension of the convex–concave procedure. Optim. Eng. 17(2), 263–287 (2016).
 27
N. Rao, B. Recht, R. Nowak, in Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 22, ed. by N. D. Lawrence, M. Girolami. Universal measurement bounds for structured sparse signal recovery (PMLR, La Palma, 2012), pp. 942–950. http://proceedings.mlr.press/v22/rao12.html.
Acknowledgements
This work is supported by the Swedish Research Council grant (Reg. No. 340-2013-5342).
Author information
Affiliations
Contributions
The authors read and approved the final manuscript.
Corresponding author
Correspondence to Jianfeng Wang.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Wang, J., Zhou, Z. & Yu, J. Error bounds of block sparse signal recovery based on q-ratio block constrained minimal singular values. EURASIP J. Adv. Signal Process. 2019, 57 (2019). doi:10.1186/s13634-019-0653-1
Received
Accepted
Published
DOI
Keywords
 Compressive sensing
 q-ratio block sparsity
 q-ratio block constrained minimal singular value
 Convex-concave procedure