
Semi-tensor product-based one-bit compressed sensing

Abstract

The area of one-bit compressed sensing (1-bit CS) focuses on the recovery of sparse signals from binary measurements. Over the past decade, this field has developed a mature theory. However, most of the existing literature is confined to fully random measurement matrices, such as random Gaussian and random sub-Gaussian matrices, which often incur high generation and storage costs. This paper applies semi-tensor product-based measurements to 1-bit CS. By utilizing the semi-tensor product, the proposed method can compress high-dimensional signals using lower-dimensional measurement matrices, thereby reducing the cost of generating and storing fully random measurement matrices. We propose a regularized model for this problem that admits a closed-form solution. Theoretically, we demonstrate that this solution approximates the underlying signal, with upper bounds on the recovery error. Empirically, we conduct a series of experiments on both synthetic and real-world data to demonstrate the proposed method's ability to use a lower-dimensional measurement matrix for signal compression and reconstruction with enhanced flexibility, resulting in improved recovery accuracy.

1 Introduction

Compressed sensing (CS) has been widely studied and applied in many fields, such as moderate-resolution imaging [1, 2] and magnetic resonance imaging (MRI). One-bit compressed sensing (1-bit CS) [3,4,5,6,7,8,9] has garnered significant attention in recent years as a distinctive paradigm of CS [10, 11]. In contrast to classic compressed sensing, which utilizes infinite-precision real-valued measurements \({{\varvec{b}}}={{\varvec{A}}}{{{{\varvec{x}}}}}\), where \({\varvec{A}}\in {\mathbb{R}}^{m\times n}\) is the measurement matrix and \({{\varvec{x}}}\in {\mathbb{R}}^{n}\) denotes a sparse signal, 1-bit CS records only the signs of the real-valued measurements as \({{\varvec{y}}}={\text{sign}}({\varvec{b}})={\text{sign}}({\varvec{A}}{{\varvec{x}}})\). By using binary measurements, 1-bit CS offers a cost-effective hardware implementation and high-speed sampling. Additionally, it exhibits greater resilience to various nonlinear distortions commonly encountered in input electronics [5]. For these reasons, it possesses a wide range of potential applications in massive MIMO systems [12, 13], wireless sensor networks [14], synthetic aperture radar systems [15], and other scenarios where large-scale sparse data are typically involved [16].

Mathematically, standard 1-bit CS aims to recover the direction of the underlying sparse signal \({{\varvec{x}}}\in {\mathbb{R}}^{n}\) (i.e., \({{\varvec{x}}}/\Vert {{\varvec{x}}}\Vert _{2}\)) from 1-bit measurements

$${\varvec{y}}={\text{sign}}({\varvec{A}}{{\varvec{x}}}),$$
(1)

where \({\text{sign}}(\cdot )\) applies componentwise on the real-valued measurement \({\varvec{A}}{{\varvec{x}}}\), satisfying \({\text{sign}}(v)=1\) if \(v\ge 0\), and \({\text{sign}}(v)=-1\) if \(v<0\). The standard measurement model (1) fails to capture information on the magnitude of \({{\varvec{x}}}\) (i.e., \(\Vert {{\varvec{x}}}\Vert _{2}\)), so the aforementioned studies are restricted to the unit sphere \({\mathbb{S}}_{n}:=\{{{\varvec{x}}}\in {\mathbb{R}}^{n}:\Vert {{\varvec{x}}}\Vert _{2}=1\}\). Jacques et al. [5] proposed an \(\ell _{0}\)-minimization method for this problem and proved that its optimal solution provides a robust estimate of the original signal. Although a binary iterative hard thresholding (BIHT) algorithm has been developed to approximate the solution empirically, solving the \(\ell _{0}\)-minimization model exactly remains NP-hard. Plan and Vershynin [6] utilized the \(\ell _{1}\)-norm as a means of promoting sparsity, proposing a convex programming approach that is also supported by theoretical guarantees. Moreover, their method is robust against bit flips, a prevalent form of noise in binary measurements. Zhang et al. [7] suggested the following regularized model

$$\begin{aligned} \hat{\varvec{u}}:=\mathop {\arg \min }_{\Vert \varvec{u}\Vert _{2}\le 1}\ -\frac{1}{m}\langle {\varvec{A}}^{T}{\varvec{y}},\varvec{u}\rangle +\gamma \Vert \varvec{u}\Vert _{1}, \end{aligned}$$
(2)

and presented a closed-form solution, namely \({\mathcal {S}}_{\gamma }({\varvec{A}}^{T}{\varvec{y}}/m)/\Vert {\mathcal {S}}_{\gamma }({\varvec{A}}^{T}{\varvec{y}}/m)\Vert _{2}\), where \({\mathcal {S}}_{\gamma }(\cdot )\) denotes the soft-thresholding operator [see Eq. (22)]. However, the \(\ell _{1}\)-norm is a loose approximation of the \(\ell _{0}\)-quasi-norm, often resulting in an over-penalized problem [17]. In our prior research [8], we proposed an \(\ell _{p}\)(\(0<p<1\))-minimization method and demonstrated its superiority over the \(\ell _{1}\)-norm-based counterpart in terms of reducing the required number of measurements.

The aforementioned studies have solely focused on fully random Gaussian measurement matrices, which limits their applicability in processing large-scale data. Unfortunately, Ai et al. [18] have shown that even with the commonly used fully random sub-Gaussian measurement matrices, it is impossible to recover a general sparse unit signal from (1) without additional constraints (e.g., that \({{\varvec{x}}}\) is not too sparse). To mitigate this issue, it has recently been discovered that introducing dithers can be beneficial. Let

$${\varvec{y}}={\text{sign}}({\varvec{A}}{{\varvec{x}}}+{{\varvec{\tau }}}),$$
(3)

where \({{\varvec{\tau }}}\in {\mathbb{R}}^{m}\) denotes a random dither vector. Dirksen and Mendelson [19] demonstrated that a convex program can recover the signal \({{\varvec{x}}}\in {\mathbb{R}}^{n}\) from (3) when \({\varvec{\tau }}\) is generated from a uniform distribution and \({\varvec{A}}\) is a random sub-Gaussian measurement matrix. However, fully sub-Gaussian measurements still pose challenges in terms of the high costs of generating and storing measurement matrices. Dirksen et al. [20] showed that randomly subsampled Gaussian circulant matrices can be utilized for 1-bit CS, and Liu et al. [21] provided an optimization algorithm based on such matrices under a generative prior. Dirksen et al. [22] further applied randomly subsampled sub-Gaussian circulant matrices to 1-bit CS and proved that the proposed algorithm is robust to analog pre-quantization noise as well as to adversarial bit corruptions in the quantization process. It should be noted that circulant measurement matrices are typically limited to subsampling, while oversampling is commonly employed in 1-bit CS despite being irrelevant in traditional CS problems. In practical applications of circulant matrices, we have to apply zero-padding to the original signal to accommodate oversampling, which lifts the original signal to a higher dimension; additionally, a square measurement matrix with a matching dimension has to be constructed. In summary, the research on designing measurement matrices for 1-bit CS is still in its early stages.

The concept of the semi-tensor product of matrices, a generalization of the conventional matrix product, was proposed by Cheng et al. [23,24,25]. The semi-tensor product enables matrix multiplication even when two matrices do not meet the dimension matching condition. It has found applications in various engineering fields, including but not limited to information security [26], vehicle control [27], and gene regulation [28]. In particular, Xie et al. [29] proposed a new model of signal compression and reconstruction based on the semi-tensor product, which breaks the dimension matching condition required by the traditional CS model. Their approach employs lower-dimensional sensing matrices to compress high-dimensional signals, while simultaneously reducing total reconstruction time by distributing the reconstruction phase among multiple CS decoders. Inspired by these advantages, we consider the potential application of the semi-tensor product in 1-bit CS to tackle the challenge of storing large measurement matrices. The primary contributions of this paper are as follows.

  • We first apply the semi-tensor product to the measurement model of 1-bit CS, in combination with the dithering technique. The novel measurement model significantly reduces the dimensionality of the measurement matrix compared to existing fully random counterparts in 1-bit CS.

  • We propose a regularized model with a closed-form solution and demonstrate that our proposed model, which employs an explicit selection strategy for the regularization parameter, can yield an approximate estimate of the underlying sparse signal with guaranteed convergence.

The rest of this paper is structured as follows. Section 2 introduces some notations, definitions, and useful lemmas. Section 3 clarifies the measurement model used in this paper. Section 4 elucidates the recovery model and then gives the corresponding theoretical guarantee. Section 5 presents numerical experiments, and Sect. 6 concludes this paper.

2 Preliminaries

2.1 Notations

We begin with some notations used throughout the paper. Scalars are denoted by lowercase letters, e.g., x; vectors by boldface lowercase letters, e.g., \({{\varvec{x}}}\); and matrices by boldface capital letters, e.g., \(\varvec{X}\). We denote by \(x_i\) the i-th element of the vector \({{\varvec{x}}}\). The field of real numbers is denoted by \({\mathbb{R}}\). [n] denotes the index set \(\{1,2,\ldots ,n\}\). \(\varvec{I}_{n}\) denotes the identity matrix of size \(n\times n\). The inner product between matrices \(\varvec{M}=(m_{ij})\in {\mathbb{R}}^{n_{1}\times n_{2}}\) and \(\varvec{N}=(n_{ij})\in {\mathbb{R}}^{n_{1}\times n_{2}}\) of the same size is defined as \(\langle \varvec{M},\varvec{N}\rangle ={\text{Tr}}({\varvec{M}}^{T}\varvec{N})=\sum _{ij}m_{ij}n_{ij}\). We define the \(\ell _{p}\) norm of \({{\varvec{x}}}\in {\mathbb{R}}^{n}\) as \(\Vert {{\varvec{x}}}\Vert _{p}=(\sum _{i=1}^{n}|x_{i}|^{p})^{1/p}\), where \(p\ge 1\). \(\Vert {{\varvec{x}}}\Vert _{\infty }\) denotes the largest magnitude among the elements of \({{\varvec{x}}}\).

2.2 Semi-tensor product and some definitions

Definition 1

(See [25]) Let \(\varvec{u}\in {\mathbb{R}}^{1\times nl}\) be a row vector, and \(\varvec{v}\in {\mathbb{R}}^{l}\) be a column vector. Divide \(\varvec{u}\) into l contiguous blocks, named \(\varvec{u}_{1},\ldots ,\varvec{u}_{l}\), which are all row vectors of length n. Then, the semi-tensor product (STP) between \(\varvec{u}\) and \(\varvec{v}\), denoted by \(\varvec{u}\ltimes \varvec{v}\), is defined as

$$\begin{aligned} \varvec{u}\ltimes \varvec{v}=\sum _{i=1}^{l}\varvec{u}_{i}v_{i}\in {\mathbb{R}}^{1\times n}. \end{aligned}$$

Definition 2

(See [25]) Let \({\varvec{A}}\in {\mathbb{R}}^{m\times n}\), \(\varvec{B}\in {\mathbb{R}}^{p\times q}\), \(\varvec{a}_{i}\) denote the i-th row of \({\varvec{A}}\), and \({\varvec{b}}^{j}\) denote the j-th column of \(\varvec{B}\). The STP of \({\varvec{A}}\) and \(\varvec{B}\), denoted as \(\varvec{C}={\varvec{A}}\ltimes \varvec{B}\), consists of \(m\times q\) blocks as \(\varvec{C}=(\varvec{c}_{ij})\) where

  1. (i)

    \(\varvec{c}_{ij}=\varvec{a}_{i}\ltimes {\varvec{b}}^{j}\) is of length n/p when p is a factor of n, and

  2. (ii)

    \(\varvec{c}_{ij}=({{\varvec{b}}^{j}}^{T}\ltimes \varvec{a}_{i}^{T})^{T}\) is of length p/n when n is a factor of p,

for \(i\in [m]\) and \(j\in [q]\). In particular, when \(q=1\) and n is a factor of p, \(\varvec{C}\) reduces to a vector of length mk, where \(k:=p/n\).

We also introduce the Kronecker product, which is a mathematical operation that combines two matrices to form a larger matrix with block entries.

Definition 3

The Kronecker product between matrices \({\varvec{A}}=(a_{ij})\in {\mathbb{R}}^{m\times n}\) and \(\varvec{B}\in {\mathbb{R}}^{p\times q}\) is defined as

$$\begin{aligned} {\varvec{A}}\otimes \varvec{B}=\left[ \begin{array}{ccc} a_{11}\varvec{B}&{}\quad \ldots &{}\quad a_{1n}\varvec{B} \\ \vdots &{}\quad \ddots &{}\quad \vdots \\ a_{m1}\varvec{B} &{}\quad \ldots &{}\quad a_{mn}\varvec{B} \end{array} \right] \in {\mathbb{R}}^{mp\times nq} \end{aligned}$$

It is readily checked that the Kronecker product has the following property.

Proposition 1

For any matrices \({\varvec{A}}\in {\mathbb{R}}^{m\times n}\) and \(\varvec{B}\in {\mathbb{R}}^{p\times q}\), it holds that

$$\begin{aligned} {({\varvec{A}}\otimes \varvec{B})}^{T}={{\varvec{A}}}^{T}\otimes {\varvec{B}}^{T}. \end{aligned}$$

The STP of matrices can be equivalently defined through the Kronecker product.

Definition 4

(See [24]) The STP of two matrices \({\varvec{A}}\in {\mathbb{R}}^{m\times n}\) and \(\varvec{B}\in {\mathbb{R}}^{p\times q}\) satisfies

$$\begin{aligned} {\varvec{A}}\ltimes \varvec{B}=({\varvec{A}}\otimes \varvec{I}_{t/n})(\varvec{B}\otimes \varvec{I}_{t/p}), \end{aligned}$$

where t is the least common multiple of n and p, denoted as \(t=lcm(n,p)\).
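To make Definitions 1, 2, and 4 concrete, the following minimal NumPy sketch (the function names are ours, not from the paper) computes \(\varvec{A}\ltimes \varvec{x}\) for a column vector \(\varvec{x}\) via the Kronecker identity of Definition 4, and checks it against the block-wise form of Definition 2(ii):

```python
import numpy as np

def stp(A, x):
    """A ltimes x via Definition 4: A ltimes x = (A kron I_k) x, k = len(x)/n."""
    m, n = A.shape
    assert x.size % n == 0, "n must divide the length of x"
    k = x.size // n
    return np.kron(A, np.eye(k)) @ x

def stp_blockwise(A, x):
    """Same product via Definition 2(ii): the i-th output block is
    sum_j A[i, j] * x_j, where x_j is the j-th length-k block of x."""
    m, n = A.shape
    k = x.size // n
    return (A @ x.reshape(n, k)).reshape(m * k)

rng = np.random.default_rng(0)
A, x = rng.standard_normal((4, 6)), rng.standard_normal(12)  # n = 6, k = 2
assert np.allclose(stp(A, x), stp_blockwise(A, x))           # both give length mk = 8
```

The block-wise form shows why the STP is cheap: it never materializes the \(mk\times nk\) Kronecker matrix.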

The following lemma, proved in [7], will facilitate our analysis.

Lemma 1

(See [7]) Suppose \({\varvec{b}}=\theta ({\varvec{A}}\varvec{u})\) with \(\theta (\cdot )\) satisfying (10) (understood componentwise, in conditional expectation), \(\Vert \varvec{u}\Vert _{2}=1\), and \({\varvec{A}}\in {\mathbb{R}}^{m\times n}\) is populated by i.i.d. random standard Gaussian variables. With probability at least \(1-e^{1-t}\), it holds that

$$\begin{aligned} \left\| \frac{1}{m}{\varvec{A}}^{T}{\varvec{b}}-\lambda \varvec{u}\right\| _{\infty }\le O\left( \sqrt{\frac{t+\log (n)}{m}}\right) . \end{aligned}$$
(4)

3 Problem description

Let \({\varvec{A}}\) be the measurement matrix of size \(m\times n\). We suppose the signal \({\varvec{x}}\) is of length nk for some integer \(k\ge 1\). As a result, the measurement matrix requires only 1/k of the parameters compared to traditional sensing models, in which the matrix would typically have size \(m\times nk\). The standard measurement model for 1-bit CS under the STP may be naturally given by

$$\begin{aligned} {\varvec{y}}={\text{sign}}({\varvec{A}}\ltimes {\varvec{x}}), \end{aligned}$$
(5)

where \({\varvec{y}}\in \{-1,1\}^{mk}\) denotes the sign measurements. Note that (5) loses all scaling information and is restricted to a narrow class of measurement ensembles, such as the random Gaussian one. To circumvent this issue, a mature strategy is to introduce a random dither \({\varvec{\tau }}\in {\mathbb{R}}^{mk}\) before quantization, leading to the dithered one-bit measurements

$$\begin{aligned} {\varvec{y}}={\text{sign}}(\varvec{A}\ltimes {\varvec{x}}+{\varvec{\tau }}), \end{aligned}$$
(6)

where \({\varvec{y}}\in \{-1,1\}^{mk}\). Please see Sect. 4.3 and [3] for more discussion on the dithering technique. To facilitate discussion, we define the exactly sparse set

$$\begin{aligned} {\mathbb{K}}_{s}^{*}:=\{\varvec{z}\in {\mathbb{R}}^{nk}|\Vert \varvec{z}\Vert _{2}= 1, \Vert \varvec{z}\Vert _{0}\le s\}, \end{aligned}$$
(7)

and the approximately sparse set

$$\begin{aligned} {\mathbb{K}}_{s}:=\{\varvec{z}\in {\mathbb{R}}^{nk}|\Vert \varvec{z}\Vert _{2}= 1, \Vert \varvec{z}\Vert _{1}\le \sqrt{s}\}. \end{aligned}$$
(8)

In practice, there will always be adversarial noise in the measurements during acquisition and transmission, which usually flips some signs [5]. In this case, the noisy measurements may be modeled as

$$\begin{aligned} {\varvec{y}}=\varvec{\xi }\odot {\text{sign}}(\varvec{A}\ltimes {\varvec{x}}+{\varvec{\tau }}), \end{aligned}$$
(9)

where \(\odot\) denotes the elementwise (Hadamard) product between vectors, and \(\varvec{\xi }\in \{-1,1\}^{mk}\) denotes random bit flips, populated by i.i.d. Bernoulli random variables satisfying \({{P}}(\xi _{i}=1)=p\), meaning that each sign measurement \(y_{i}\) is correct only with probability p.
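For concreteness, the measurement process (9) can be simulated as in the following sketch; the function name, the dither scale u, and the flip probability p are our own illustrative choices (Sect. 4.3 discusses a concrete choice of the dither scale):

```python
import numpy as np

def one_bit_stp_measure(A, x, u=1.0, p=0.9, rng=None):
    """Simulate model (9): y = xi * sign(A ltimes x + tau),
    with tau_i ~ Unif(-u, u) and P(xi_i = 1) = p."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    k = x.size // n
    z = np.kron(A, np.eye(k)) @ x                    # A ltimes x, length mk
    tau = rng.uniform(-u, u, size=m * k)             # random dither, Eq. (12)
    xi = np.where(rng.random(m * k) < p, 1.0, -1.0)  # random bit flips
    return xi * np.where(z + tau >= 0, 1.0, -1.0)    # sign(0) := 1, as in Sect. 1
```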

To illustrate how to construct a loss function for the noisy measurements (9), inspired by [6], we begin with a more general measurement model. We assume

$$\begin{aligned} {E}(y_{i}|z_{i})=\theta (z_{i}),\quad \forall i\in [mk], \end{aligned}$$
(10)

where \(z_{i}=(\varvec{A}\ltimes {\varvec{x}})_{i}\), \({\varvec{x}}\in {\mathbb{R}}^{nk}\) denotes the signal to be recovered, and \(\theta (\cdot )\) is some unknown or unspecified function, which necessarily satisfies \(-1\le \theta (z)\le 1\). We also assume

$$\begin{aligned} {E}(\theta (g)g):=\lambda , \end{aligned}$$
(11)

to capture the relation between \(y_{i}\) and \((\varvec{A}\ltimes {\varvec{x}}+{\varvec{\tau }})_{i}\), where g is a standard normal random variable. In (9), since \(y_{i}=\xi _{i}{\text{sign}}((\varvec{A}\ltimes {\varvec{x}})_{i}+\tau _{i})\) for each \(i\in [mk]\), it holds that \(\theta (z)={E}_{\tau _{i}}({E}_{\xi _{i}}(y_{i}|z,\tau _{i}))=2(p-1/2)(1-2{P}(\tau \le -z))\) in (10), and the correlation coefficient \(\lambda ={E}(\theta (g)g)\) can be evaluated using integration by parts, which gives

$$\begin{aligned} \lambda ={E}[\theta '(g)]=4(p-1/2){E}[f(-g)]>0, \end{aligned}$$

where \(f(\cdot )\) denotes the probability density function of the dither \(\tau\).

For instance, if \(\tau _{i}(\forall i\in [mk])\) are random variables generated from the uniform distribution

$$\begin{aligned} {\text{Unif}}(-u,u) \end{aligned}$$
(12)

where \(u>0\) denotes a constant, then

$$\begin{aligned} \lambda =2(p-1/2)\frac{2\Phi (u)-1}{u}, \end{aligned}$$
(13)

where \(\Phi (\cdot )\) denotes the distribution function of standard Gaussian variables.
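As a quick sanity check (ours, not from the paper), the closed form (13) can be verified by Monte Carlo against the definition (11):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
u, p, N = 1.0, 0.9, 10**7
g = rng.standard_normal(N)                          # standard Gaussian g
tau = rng.uniform(-u, u, N)                         # dither, Eq. (12)
xi = np.where(rng.random(N) < p, 1, -1)             # random bit flips
y = xi * np.where(g + tau >= 0, 1, -1)              # one noisy sign measurement
lam_mc = np.mean(y * g)                             # empirical E[theta(g) g], Eq. (11)
lam_cf = 2 * (p - 0.5) * (2 * norm.cdf(u) - 1) / u  # closed form, Eq. (13)
print(lam_mc, lam_cf)
```

For \(u=1\) and \(p=0.9\), both values are approximately 0.546, matching \(\lambda \approx 1.365(p-1/2)\) noted in Remark 2.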

We develop the theory based on model (10) in the following section, as it naturally encompasses model (9).

4 Main results

This section introduces the optimization model for recovering the underlying signal from model (10) and subsequently establishes its corresponding theoretical guarantee.

4.1 The proposed model

Suppose \({\varvec{x}}^{*}\in {\mathbb{R}}^{nk}\) is the underlying signal in \({\mathbb{K}}_{s}\), where k is a positive integer, and \(\varvec{y}\) satisfies (10). We reorder the entries of \({\varvec{x}}^{*}\) to define \(\tilde{{\varvec{x}}}_{*}\) such that

$$\begin{aligned} \tilde{{\varvec{x}}}_{*}^{(l)}=\left( \begin{array}{c} x^{*}_{l} \\ x^{*}_{k+l} \\ \vdots \\ x^{*}_{(n-1)k+l} \end{array} \right) \in {\mathbb{R}}^{n},\quad l\in [k], \end{aligned}$$
(14)

and also reorder entries of \(\varvec{y}\) to define \(\tilde{\varvec{y}}\) such that

$$\begin{aligned} \tilde{\varvec{y}}^{(l)}=\left( \begin{array}{c} y_{l} \\ y_{k+l} \\ \vdots \\ y_{(m-1)k+l} \end{array} \right) \in {\mathbb{R}}^{m},\quad l\in [k]. \end{aligned}$$
(15)
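In code, the reorderings (14) and (15) are simple reshapes; the following sketch (with our own names) makes this explicit:

```python
import numpy as np

def reorder_blocks(v, rows):
    """Column l-1 (0-based) of the reshaped matrix is the block
    (v_l, v_{k+l}, ..., v_{(rows-1)k+l}) of (14)/(15), with 1-based l."""
    return v.reshape(rows, -1)

# x_tilde^{(l)} = reorder_blocks(x, n)[:, l-1]
# y_tilde^{(l)} = reorder_blocks(y, m)[:, l-1]
```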

A direct application of Lemma 4 in [7] to each \(\tilde{\varvec{y}}^{(l)}\) yields the following result.

Lemma 2

Suppose \(\varvec{A}\in {\mathbb{R}}^{m\times n}\) is populated by i.i.d. random standard Gaussian variables and \(\varvec{a}_{i}\) denotes the i-th row of \(\varvec{A}\). Following the definitions given by (10) and (11), it holds that

$$\begin{aligned} {E}\left[ {\tilde{y}}^{(l)}_{i}\varvec{a}_{i}\right] =\lambda (\tilde{{\varvec{x}}}_{*}^{(l)})^{T},\ \forall i\in [m],\quad \forall l\in [k], \end{aligned}$$
(16)

where \(\tilde{{\varvec{x}}}_{*}^{(l)}\), \(\tilde{\varvec{y}}^{(l)}\) are, respectively, defined as in (14), (15), and \({\tilde{y}}^{(l)}_{i}\) represents the i-th element of \(\tilde{\varvec{y}}^{(l)}\).

By taking the inner product of both sides of (16) with \({\tilde{{\varvec{x}}}_{*}^{(l)}}\) and then summing over all \(l\in [k]\), we can derive that

$$\begin{aligned} {E}\left[ \sum _{l=1}^{k}{\tilde{y}}^{(l)}_{i}\left\langle \varvec{a}_{i},\tilde{{\varvec{x}}}_{*}^{(l)}\right\rangle \right] =\lambda \Vert {{\varvec{x}}}_{*}\Vert _{2}^{2}=\lambda ,\quad \forall i\in [m]. \end{aligned}$$
(17)

When \(\theta (\cdot )={\text{sign}}(\cdot )\), \(\lambda ={ E}(|g|)=\sqrt{2/\pi }\) achieves the maximum correlation [6]. By the law of large numbers, Eq. (17) suggests that a good estimate \(\hat{{\varvec{x}}}\) should also achieve a large value of the empirical correlation

$$\begin{aligned} \frac{1}{m}\sum _{i=1}^{m}\sum _{l=1}^{k} {\tilde{y}}^{(l)}_{i}\left\langle \varvec{a}_{i},\tilde{\hat{{\varvec{x}}}}^{(l)}\right\rangle \end{aligned}$$

under the constraint \(\Vert \hat{{\varvec{x}}}\Vert _{2}\le 1\), where \(\tilde{\hat{{\varvec{x}}}}^{(l)}\) denotes the reordering (14) applied to \(\hat{{\varvec{x}}}\). It is not hard to check that

$$\begin{aligned} \frac{1}{m}\sum _{i=1}^{m}\sum _{l=1}^{k}{\tilde{y}}^{(l)}_{i} \left\langle \varvec{a}_{i},\tilde{\hat{{\varvec{x}}}}^{(l)}\right\rangle =\frac{1}{m}{(\varvec{A}\ltimes \hat{{\varvec{x}}})}^{T}\varvec{y}. \end{aligned}$$
(18)

On the other hand, given the sparsity of \({\varvec{x}}\), it is imperative that \(\hat{{\varvec{x}}}\) also exhibits sparsity, which can be achieved through regularization using the \(\ell _{1}\)-norm \(\Vert \cdot \Vert _{1}\). By combining (18) with the \(\ell _{1}\)-norm regularizer via a regularization parameter \(\gamma >0\), we propose the optimization model

$$\begin{aligned} \min _{\Vert \varvec{u}\Vert _{2}\le 1} -\frac{1}{m}{(\varvec{A}\ltimes \varvec{u})}^{T}\varvec{y}+\gamma \Vert \varvec{u}\Vert _{1}. \end{aligned}$$
(19)

Since \(\varvec{A}\ltimes \varvec{u}=(\varvec{A}\otimes \varvec{I}_{k})\varvec{u}\) by Definition 4 and Proposition 1, we can equivalently rewrite (19) as

$$\begin{aligned} \min _{\Vert \varvec{u}\Vert _{2}\le 1} -\frac{1}{m}{\varvec{u}}^{T}(\varvec{A}^{T}\otimes \varvec{I}_{k})\varvec{y}+\gamma \Vert \varvec{u}\Vert _{1}. \end{aligned}$$
(20)

4.2 Theoretical guarantee

The following result shows that the model (20) has a closed-form solution.

Lemma 3

(See [7]) The optimal solution to (20) is given by

$$\begin{aligned} \hat{{\varvec{x}}}=\frac{S_{\gamma }\left( \frac{1}{m}\left( \varvec{A}^{T}\otimes \varvec{I}_{k}\right) \varvec{y}\right) }{\left\| S_{\gamma }\left( \frac{1}{m}\left( \varvec{A}^{T} \otimes \varvec{I}_{k}\right) \varvec{y}\right) \right\| _{2}}, \end{aligned}$$
(21)

where the soft-thresholding operator \(S_{\gamma }(\cdot )\)

$$\begin{aligned} S_{\gamma }(u)=\left\{ \begin{array}{ll} 0, &{}\quad {\text{if}}\ |u|\le \gamma , \\ {\text{sign}}(u)(|u|-\gamma ), &{}\quad {\text{otherwise}}. \end{array} \right. \end{aligned}$$
(22)

applies componentwise.
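A minimal NumPy sketch of the estimator (21)-(22) might look as follows (function names are ours; the reshape identity noted in the comment avoids forming the Kronecker product explicitly):

```python
import numpy as np

def soft_threshold(v, gamma):
    """Componentwise soft-thresholding operator S_gamma, Eq. (22)."""
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

def stp_one_bit_recover(A, y, k, gamma):
    """Closed-form estimate (21) for the regularized model (20)."""
    m = A.shape[0]
    h = np.kron(A.T, np.eye(k)) @ y / m   # (1/m)(A^T kron I_k) y; equivalently
                                          # (A.T @ y.reshape(m, k)).reshape(-1) / m
    s = soft_threshold(h, gamma)
    nrm = np.linalg.norm(s)
    return s / nrm if nrm > 0 else s      # unit-norm direction estimate
```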

To prove the main theorem, we first provide the following lemma.

Lemma 4

Suppose \(\varvec{A}\in {\mathbb{R}}^{m\times n}\) is populated by independent and identically distributed random standard Gaussian variables, \({\varvec{x}}^{*}\in {\mathbb{R}}^{nk}\) where k is a positive integer, and \(\varvec{y}=\theta (\varvec{A}\ltimes {\varvec{x}}_{*})\) with \(\theta (\cdot )\) satisfying (10). Then, with probability at least \(1-ke^{1-t}\),

$$\begin{aligned} \left\| \frac{1}{m}(\varvec{A}^{T}\otimes \varvec{I}_{k})\varvec{y} -\lambda {\varvec{x}}_{*}\right\| _{\infty }\le \kappa \sqrt{\frac{t+\log (n)}{m}}, \end{aligned}$$
(23)

where \(\kappa >0\) is a constant, and \(\lambda\) is specified by (11).

Proof

We define \(\tilde{{\varvec{x}}}_{*}\) and \(\tilde{\varvec{y}}\) as given in (14) and (15), respectively. For each \(l\in [k]\), we have \(\tilde{\varvec{y}}^{(l)}=\theta (\varvec{A}\tilde{{\varvec{x}}}_{*}^{(l)})\) (in the sense of (10)) and

$$\begin{aligned} \left\| \frac{1}{m}\big (\varvec{A}^{T}\otimes \varvec{I}_{k}\big )\varvec{y} -\lambda {\varvec{x}}_{*}\right\| _{\infty }=\max _{l\in [k]} \left\| \frac{1}{m}\varvec{A}^{T}\tilde{\varvec{y}}^{(l)} -\lambda \tilde{{\varvec{x}}}_{*}^{(l)}\right\| _{\infty }. \end{aligned}$$

Since Lemma 1 indicates that

$$\begin{aligned} \left\| \frac{1}{m}\varvec{A}^{T}\tilde{\varvec{y}}^{(l)} -\lambda \tilde{{\varvec{x}}}_{*}^{(l)}\right\| _{\infty }\le O\left( \sqrt{\frac{t+\log (n)}{m}}\right) \end{aligned}$$

holds with probability at least \(1-e^{1-t}\) for each \(l\in [k]\), taking a union bound over all l implies that

$$\begin{aligned} \left\| \frac{1}{m}\big (\varvec{A}^{T}\otimes \varvec{I}_{k}\big )\varvec{y} -\lambda {\varvec{x}}_{*}\right\| _{\infty }\le O\left( \sqrt{\frac{t+\log (n)}{m}}\right) \end{aligned}$$
(24)

holds with probability at least \(1-ke^{1-t}\). This completes the proof. \(\square\)

Now, we can present the main theoretical result.

Theorem 1

Assume \(\varvec{A}\in {\mathbb{R}}^{m\times n}\) is populated by independent and identically distributed random standard Gaussian variables, \(\varvec{y}\) is observed via (9), \(\lambda\) is defined in Eq. (13), and

$$\begin{aligned} \gamma =c\sqrt{\frac{t+\log (n)}{m}}, \end{aligned}$$
(25)

where \(c=2\kappa\) is a constant with \(\kappa\) coming from (23). If \({\varvec{x}}^{*}\in {\mathbb{R}}^{nk}\) belongs to the set (7), where k is a positive integer, then with probability at least \(1-ke^{1-t}\), the estimate \(\hat{{\varvec{x}}}\) given by (21) satisfies

$$\begin{aligned} \Vert \hat{{\varvec{x}}}-{\varvec{x}}^{*}\Vert _{2}\le \frac{3\gamma }{\lambda }\sqrt{s}=O\left( \sqrt{\frac{s\log (n)}{m}}\right) . \end{aligned}$$

If \({\varvec{x}}^{*}\in {\mathbb{R}}^{nk}\) belongs to the set (8), then with probability exceeding \(1-ke^{1-t}\), the estimate \(\hat{{\varvec{x}}}\) given by (21) satisfies

$$\begin{aligned} \Vert \hat{{\varvec{x}}}-{\varvec{x}}^{*}\Vert _{2}\le \sqrt{\frac{3\gamma \sqrt{s}}{\lambda }}=O\left( \root 4 \of {\frac{s\log (n)}{m}}\right) . \end{aligned}$$

Proof

Since \(\hat{{\varvec{x}}}\) is the optimal solution, we have

$$\begin{aligned} -\frac{1}{m}\hat{{{\varvec{x}}}}^{T}\big (\varvec{A}^{T}\otimes \varvec{I}_{k}\big )\varvec{y}+&\gamma \Vert \hat{{\varvec{x}}}\Vert _{1}\\&\le -\frac{1}{m}{{\varvec{x}}}_{*}^{T}\big (\varvec{A}^{T}\otimes \varvec{I}_{k}\big )\varvec{y}+\gamma \Vert {\varvec{x}}_{*}\Vert _{1} \end{aligned}$$

Thus one has

$$\begin{aligned}&\gamma \Vert {\varvec{x}}_{*}\Vert _{1}\\ \ge&\left\langle {\varvec{x}}_{*}-\hat{{{\varvec{x}}}}, \frac{1}{m}\big (\varvec{A}^{T}\otimes \varvec{I}_{k}\big )\varvec{y}\right\rangle + \gamma \Vert \hat{{\varvec{x}}}\Vert _{1}\\ \ge&\left\langle {\varvec{x}}_{*}-\hat{{{\varvec{x}}}},\lambda {\varvec{x}}_{*} \right\rangle + \left\langle {\varvec{x}}_{*}-\hat{{{\varvec{x}}}}, \frac{1}{m}\big (\varvec{A}^{T}\otimes \varvec{I}_{k}\big )\varvec{y}-\lambda {\varvec{x}}_{*}\right\rangle \\&+ \gamma \Vert \hat{{\varvec{x}}}\Vert _{1}\\ \ge&\lambda \big (1-\hat{{{\varvec{x}}}}^{T}{\varvec{x}}_{*}\big )-\Vert {\varvec{x}}_{*} -\hat{{{\varvec{x}}}}\Vert _{1}\left\| \frac{1}{m}\big (\varvec{A}^{T}\otimes \varvec{I}_{k}\big )\varvec{y}-\lambda {\varvec{x}}_{*}\right\| _{\infty }\\&+\gamma \Vert \hat{{\varvec{x}}}\Vert _{1} \end{aligned}$$

Combining this with Lemma 4 and noting that, by (25) with \(c=2\kappa\), \(\Vert \frac{1}{m}(\varvec{A}^{T}\otimes \varvec{I}_{k})\varvec{y}-\lambda {\varvec{x}}_{*}\Vert _{\infty }\le \gamma /2\), we have that

$$\begin{aligned} \lambda \left( 1-\hat{{{\varvec{x}}}}^{T}{\varvec{x}}_{*}\right) +\gamma \Vert \hat{{\varvec{x}}}\Vert _{1}\le \frac{\gamma }{2}\Vert {\varvec{x}}_{*}-\hat{{{\varvec{x}}}}\Vert _{1} + \gamma \Vert {\varvec{x}}_{*}\Vert _{1}. \end{aligned}$$
(26)

The remaining discussion considers the two types of sparse signals separately.

(1) When \({\varvec{x}}_{*}\) is exactly sparse, i.e., \({\varvec{x}}_{*}\in {\mathbb{K}}_{s}^{*}\), we define S as the support set of \({\varvec{x}}_{*}\) and let \(S_{\perp }=[nk]{\setminus } S\) be its complement. We also define \(P_{S}(\varvec{u})\) as

$$\begin{aligned} P_{S}(\varvec{u})=\varvec{u}_{S}, \end{aligned}$$

denoting the sub-vector of \(\varvec{u}\) indexed by the set S. Then (26) implies

$$\begin{aligned}&\lambda \left( 1-\hat{{{\varvec{x}}}}^{T}{\varvec{x}}_{*}\right) +\gamma \Vert P_{S}(\hat{{\varvec{x}}})\Vert _{1} +\gamma \Vert P_{S_{\perp }}(\hat{{\varvec{x}}})\Vert _{1}\\&\quad \le \gamma \Vert {\varvec{x}}_{*}\Vert _{1} + \frac{\gamma }{2}\Vert P_{S}({\varvec{x}}_{*} -\hat{{{\varvec{x}}}})\Vert _{1} + \frac{\gamma }{2}\Vert P_{S_{\perp }}({\varvec{x}}_{*} -\hat{{{\varvec{x}}}})\Vert _{1}\\&\quad =\gamma \Vert {\varvec{x}}_{*}\Vert _{1} + \frac{\gamma }{2}\Vert P_{S}({\varvec{x}}_{*}-\hat{{{\varvec{x}}}})\Vert _{1} + \frac{\gamma }{2}\Vert P_{S_{\perp }}(\hat{{{\varvec{x}}}})\Vert _{1}. \end{aligned}$$

Therefore, using triangle inequality yields

$$\begin{aligned}&\lambda \left( 1-{\hat{\varvec{x}}}^{T}{\varvec{x}}_{*}\right) +\frac{\gamma }{2}\Vert P_{S}({\hat{\varvec{x}}})\Vert _{1}\\ &\quad \le \gamma \Vert {\varvec{x}}_{*}\Vert _{1}-\gamma \Vert P_{S}({\hat{\varvec{x}}})\Vert _{1} +\frac{\gamma }{2}\Vert P_{S}({\varvec{x}}_{*}-{\hat{\varvec{x}}})\Vert _{1}\\ &\quad \le \frac{3\gamma }{2} \Vert P_{S}({\varvec{x}}_{*}-{\hat{\varvec{x}}})\Vert _{1}\le \frac{3\gamma }{2} {\sqrt{\Vert {\varvec{x}}_{*}\Vert _{0}}}\Vert P_{S}({\varvec{x}}_{*}-{\hat{\varvec{x}}})\Vert _{2}, \end{aligned}$$

where the final inequality follows from the Cauchy–Schwarz inequality. Recalling that \(\Vert {\varvec{x}}_{*}-\hat{{{\varvec{x}}}}\Vert _{2}^{2}\le 2(1-\hat{{{\varvec{x}}}}^{T}{\varvec{x}}_{*})\) (since \(\Vert {\varvec{x}}_{*}\Vert _{2}=1\) and \(\Vert \hat{{\varvec{x}}}\Vert _{2}\le 1\)), we obtain that

$$\begin{aligned} \Vert {\varvec{x}}_{*}-\hat{{{\varvec{x}}}}\Vert _{2}^{2}\le \frac{3\gamma }{\lambda }\sqrt{s}\Vert {\varvec{x}}_{*}-\hat{{{\varvec{x}}}}\Vert _{2}, \end{aligned}$$

i.e., \(\Vert {\varvec{x}}_{*}-\hat{{{\varvec{x}}}}\Vert _{2}\le \frac{3\gamma }{\lambda }\sqrt{s}.\)

(2) When \({\varvec{x}}_{*}\) is approximately sparse, i.e., \({\varvec{x}}_{*}\in {\mathbb{K}}_{s}\), from (26), we have

$$\begin{aligned}&\lambda \left( 1-\hat{{{\varvec{x}}}}^{T}{\varvec{x}}_{*}\right) \\&\quad \le \gamma \Vert {\varvec{x}}_{*}\Vert _{1} - \gamma \Vert \hat{{\varvec{x}}}\Vert _{1} + \frac{\gamma }{2}\Vert {\varvec{x}}_{*}\Vert _{1} + \frac{\gamma }{2}\Vert \hat{{{\varvec{x}}}}\Vert _{1}\\&\quad \le \frac{3\gamma }{2}\Vert {\varvec{x}}_{*}\Vert _{1}, \end{aligned}$$

which indicates

$$\begin{aligned} \Vert {\varvec{x}}_{*}-\hat{{{\varvec{x}}}}\Vert _{2}^{2}\le 2\left( 1-\hat{{{\varvec{x}}}}^{T}{\varvec{x}}_{*}\right) \le \frac{3\gamma }{\lambda }\Vert {\varvec{x}}_{*}\Vert _{1}\le \frac{3\gamma }{\lambda }\sqrt{s}. \end{aligned}$$

This completes the proof. \(\square\)

Remark 1

Theorem 1 offers a practical recommendation for selecting the regularization parameter \(\gamma\) in model (20). As demonstrated experimentally in Sect. 5.1, once the constant c in (25) is finely tuned, the resulting regularization parameter performs well on general sparse signals.

Remark 2

According to the suggestion put forward in [7], we select \(u=1\) in (12) for our methodology. In this case, \(\lambda \approx 1.365(p-1/2)\), meaning that recovering the signal \({\varvec{x}}_{*}\) is still possible even if each measurement is flipped with probability close to 1/2. In particular, when \(k = 1\), the measurement model (9) reduces to the traditional 1-bit CS model. Therefore, semi-tensor product-based 1-bit CS can be considered a generalization of traditional 1-bit CS, and Theorem 1 extends Theorem 1 in [7].

4.3 Discussion on dither technique

If \(\varvec{a}_{i}\) denotes the i-th row of the measurement matrix \(\varvec{A}\), then \(q_{i}={\text{sign}}(\langle \varvec{a}_{i},{\varvec{x}}\rangle )\) indicates which side of the hyperplane \(H_{\varvec{a}_{i}}:\langle \varvec{a}_{i},\varvec{u}\rangle =0\) \((i\in [m])\) the point \({\varvec{x}}\) lies on. As all \(\varvec{a}_{i}\) are generated randomly, the sign vector \(\varvec{q}={\text{sign}}(\varvec{A}{\varvec{x}})\) encodes the cell of a random hyperplane tessellation [30] in which the signal \({\varvec{x}}\) is located. A popular strategy for recovering \({\varvec{x}}\) is to search for a vector \({\varvec{x}}^{\#}\) that is quantization-consistent, i.e., \(\varvec{q}={\text{sign}}(\varvec{A}{\varvec{x}}^{\#})\). However, due to the homogeneity of these hyperplanes, it is impossible to separate any two points on a ray from the origin even when all possible hyperplanes are utilized (see Fig. 1a). In other words, the measurement \(\varvec{q}={\text{sign}}(\varvec{A}{\varvec{x}})\) loses information regarding the signal's scale. Fortunately, the issue can be circumvented by introducing a dither vector prior to quantization. In the context of random tessellations, dithering can be geometrically interpreted as adding random parallel shifts to homogeneous hyperplanes, resulting in nonhomogeneous hyperplanes \(H_{\varvec{a}_{i},\tau _{i}}:\langle \varvec{a}_{i},\varvec{u}\rangle +\tau _{i}=0\) \((i\in [m])\).

Fig. 1

Illustration of random hyperplane tessellations in \({\mathbb{R}}^{2}\). a Homogeneous hyperplanes: any two points on a ray from the origin cannot be separated even if one uses all possible hyperplanes. b Hyperplanes with random parallel shifts (induced by random dithers), which make it possible to approximate distances between signals

As illustrated in Fig. 1, the random parallel shifts enable the appropriate separation of two points on a ray originating from the origin, provided that \({\varvec{\tau }}\) is suitably selected. Although we focus on unit signals in this paper, our proof process reveals that, under semi-tensor product measurements, sparse signals are recovered segment by segment. This necessitates encoding the signal scale during quantization to prevent magnitude distortion in recovery. Consequently, we employ the dithered measurement (6) and its noisy variant (9) in this study. Indeed, Chen et al. [31] proved the following result:

Proposition 2

(Lemma 1, [31]) Let x and \(\tau\) be two independent random variables satisfying \(|x|\le B\) and \(\tau \sim {\text{Unif}}(-\gamma ,\gamma )\) with \(\gamma \ge B\); then we have

$$\begin{aligned} {\mathbb{E}}(\gamma \cdot {\text{sign}}(x+\tau ))={\mathbb{E}}(x). \end{aligned}$$

Proposition 2 states that if all elements of \({\varvec{\tau }}\) are drawn from the uniform distribution on \([-\gamma ,\gamma ]\), where the bound \(\gamma >0\) is chosen such that \(\gamma \ge \Vert \varvec{A}{\varvec{x}}\Vert _{\infty }\), then the dithered measurements \({\text{sign}}(\langle \varvec{a}_{i},{\varvec{x}}\rangle +\tau _{i})\) preserve, in expectation, the scale information of \(\langle \varvec{a}_{i},{\varvec{x}}\rangle\). By the Cauchy–Schwarz inequality, it is not hard to obtain

$$\begin{aligned} \Vert \varvec{A}{\varvec{x}}\Vert _{\infty }=\max _{i\in [m]}\ |\langle \varvec{a}_{i},{\varvec{x}}\rangle |\le \max _{i\in [m]}\ \Vert \varvec{a}_{i}\Vert _{2}\Vert {\varvec{x}}\Vert _{2}. \end{aligned}$$

Since we assume \(\Vert {\varvec{x}}\Vert _{2}=1\) in this paper, we suggest setting \(\gamma =\max _{i\in [m]}\Vert \varvec{a}_{i}\Vert _{2}\) to generate the uniformly distributed dither \({\varvec{\tau }}\). Note that in the STP setting the rows of \(\varvec{A}\otimes \varvec{I}_{k}\) have the same norms as the rows of \(\varvec{A}\), so the same bound applies.
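In code, this suggested dither scale is simply the largest row norm of the measurement matrix; a one-line sketch with illustrative sizes (for the STP model, the dither would have length mk instead of m, with the same scale):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
u = np.linalg.norm(A, axis=1).max()        # max_i ||a_i||_2; valid since ||x||_2 = 1
tau = rng.uniform(-u, u, size=A.shape[0])  # one dither entry per measurement
```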

5 Simulations

This section first conducts experiments on synthetic data to guide the selection of the regularization parameter. Subsequently, we apply the proposed method to ECG signals, MRI data, and hyperspectral images (HSIs) to evaluate its effectiveness. All experiments are conducted in MATLAB (2016a) on a platform equipped with an AMD Ryzen 7 5800H 3.20-GHz CPU and 16 GB of memory.

Practical analog-to-digital converters (ADCs) not only sample signals but also quantize each measurement to a finite number of bits. The performance of ADCs is primarily limited by the quantizer, with the sampling rate decreasing exponentially as the resolution increases linearly [32]. This implies that higher resolution per measurement results in slower sampling rates and increased costs for the ADC. The conventional CS framework mitigates this quantization bottleneck by reducing the ADC sampling rate, while the 1-bit CS framework directly addresses it in the quantization domain by decreasing the number of bits per measurement. In other words, while a sampling rate \({{\text{SR}}}>1\) is not meaningful in conventional compressed sensing, it may prove highly practical in 1-bit (binary) systems that can acquire sign measurements at extra-low bit rates [5]; here, SR denotes the ratio of the number of binary measurements to the signal length. We define the recovery error (RE) as \({\text{RE}}:=\Vert \hat{{\varvec{x}}}/\Vert \hat{{\varvec{x}}}\Vert _{2}-{\varvec{x}}/\Vert {\varvec{x}}\Vert _{2}\Vert _{2}\), where \({\varvec{x}}\) denotes the original signal and \(\hat{{\varvec{x}}}\) denotes its estimate.
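For concreteness, the following sketch (our own, with illustrative parameters) reproduces the basic synthetic-data pipeline of this section; it assumes the helper functions one_bit_stp_measure and stp_one_bit_recover sketched in Sects. 3 and 4, and the setting \(c=0.3\) recommended in Sect. 5.1:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, s, SR = 100, 5, 10, 20            # signal length nk = 500, sparsity 10
m = SR * n                              # mk binary measurements in total
x = np.zeros(n * k)
x[rng.choice(n * k, s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)                  # unit-norm sparse ground truth

A = rng.standard_normal((m, n))
y = one_bit_stp_measure(A, x, u=1.0, p=1.0, rng=rng)   # noiseless signs
gamma = 0.3 * np.sqrt((1 + np.log(n)) / m)             # Eq. (25) with c = 0.3, t = 1
x_hat = stp_one_bit_recover(A, y, k, gamma)

print(np.linalg.norm(x_hat - x))        # RE; both vectors are unit-norm
```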

5.1 Parameter selection

Theorem 1 indicates that the regularization parameter \(\gamma\) in (20) should be set as in (25). However, determining an appropriate value for the constant c remains unclear. To address this issue, we conducted a series of numerical experiments. First, we randomly generate N-dimensional signals with sparsity s and evaluate the RE of the proposed method while keeping SR fixed at 20. We consider different values of c for \((N,s)=(500,5)\), \((1000,10)\), and \((2000,20)\). The average results over one hundred repetitions are presented in Fig. 2a. Next, we randomly generate 1000-dimensional signals with sparsity 10 and evaluate the proposed method's REs at different SRs. Again, we conducted 100 trials and present the average results in Fig. 2b. It can be observed that \(c=0.3\) is a suitable setting for Eq. (25), as it yields relatively small REs across almost all cases. As the constant c does not depend on the structure of the signal, this parameter setting can be applied universally. Therefore, we maintain it throughout all remaining experiments in this article.

Fig. 2

Recovery errors obtained with different values of the constant c in (25). a shows the results for signals of different lengths when \({{\text{SR}}}=20\), and b shows the results under different sampling rates when \(N=1000\)

Next, we examine the impact of the factor k on recovery performance. We randomly generate signals of length \(N=1000\) with sparsity \(s=10\), and consider various values of k, namely \(k=1\), \(k=2\), \(k=4\), \(k=5\), and \(k=10\). We repeat 100 trials and report the average REs at different SRs in Fig. 3. In addition to reducing the scale of the measurement matrix, we unexpectedly found from the experimental results that incorporating the semi-tensor product can enhance recovery accuracy. As revealed by our proof process, the semi-tensor product-based measurement is equivalent to a segmented measurement scheme for sparse signals. This implies that an excessively large value of k may result in measuring some zero vectors. To avoid this trivial situation, we suggest empirically selecting k such that it does not exceed the sparsity s of the signal.

Fig. 3

Recovery errors obtained with different values of the factor k, where \(k=1\) corresponds to the fully random standard Gaussian measurement scheme

5.2 ECG signal recovery

In this section, we apply the proposed method to recover a 1000-length ECG signal segment randomly selected from the MIT-BIH Arrhythmia Database [33]. We compare the semi-tensor product-based measurement with other representative types of measurements in 1-bit CS, including Gaussian, symmetric Bernoulli, and circulant Gaussian measurements. By Gaussian [7], we refer to the scenario where all entries of the measurement matrix are independently and identically distributed according to the standard Gaussian distribution \({\mathcal {N}}(0,1)\). By Symm-Bernoulli [19], we mean that each entry independently takes the values 1 and \(-1\) with equal probability. By Circ-Gaussian [20], we mean a measurement matrix in which each row is obtained by cyclically shifting the preceding row one element to the right, starting from a random Gaussian row vector. Besides, we also compare the proposed algorithm with methods based on other sparsity regularizers. For \(\ell _{0}\) regularization, we compare with a hard thresholding-based method [34], denoted as HT, which solves a non-convex sparsity-constrained program. For weighted \(\ell _{1}\)-norm regularization, we compare with the inverse weights-based approach [35] (denoted as Inverse-WT) and the \(\ell _{1}\)-norm Shannon entropy weights-based method [35] (denoted as \(\ell _{1}\)-SE-WT).

The quantitative results are plotted in Fig. 4 for different sampling rates. The reconstructed results for a fraction with a length of 250, obtained at \({\text{SR}}=40\), are depicted in Fig. 5. The semi-tensor product-based measurements consistently outperform the commonly used measurements in 1-bit CS, both in quantitative indicators and in recovery outcomes, encompassing reconstruction performance and execution time. To evaluate the robustness of the proposed approach, we introduce random bit flips by flipping each entry of the binary measurements with probability 0.2. The quantitative results are plotted in Fig. 6 for different sampling rates, and the reconstructed results for a fraction with a length of 250 at \({\text{SR}}=40\) are illustrated in Fig. 7. Although the reconstructed results are slightly degraded compared to the noise-free situation, our approach still demonstrates a significant advantage over its competitors. As shown, although the weighted \(\ell _{1}\)-norm-based methods, Inverse-WT and \(\ell _{1}\)-SE-WT, obtain better reconstruction performance than most other \(\ell _{1}\)-norm-based algorithms, they are still inferior to our method. We believe that our advantage primarily stems from the utilization of semi-tensor product-based measurements. Incorporating other sparse regularizers may achieve further improved recovery, which we leave for future exploration.

Fig. 4

Recovery errors obtained by different methods at various sampling rates with noiseless measurements

Fig. 5

Reconstructed results obtained by different schemes from noiseless measurements (SR of 40)

Fig. 6

Recovery errors obtained by different methods at various sampling rates with noisy measurements

Fig. 7

Reconstructed results obtained by different schemes from noisy measurements (SR of 40)

5.3 MRI data recovery

We also utilize the proposed method to conduct further tests on MRI data, which usually exhibit sparsity under the discrete cosine transform (DCT) or a wavelet basis. We used two different datasets, Data A and Data B. Data A is a cardiac cine MRI dataset distributed by the 2013 ISMRM Recon Challenge committee (http://challenge.ismrm.org/node/66), which was collected using a 2D cine breath-held bSSFP sequence with 32-channel cardiac receiver coils. Data B is a publicly available myocardial perfusion MRI dataset from [36], which was acquired using a saturation recovery FLASH sequence. We introduce random bit flips by flipping each entry of the binary measurements with probability 0.1. To facilitate the implementation of the measurement process and enhance solving speed, we employed a block-based sampling scheme. Empirically, we partitioned the original image into \(32\times 32\)-sized parts and subsequently performed DCT on each part to obtain sparse signals through vectorization. The reconstruction is conducted block-wise. Readers interested in further information on block-based sampling may refer to [37, 38]. Two quantitative indicators, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [39], as well as execution times (in seconds), were calculated at SRs of 5, 10, 20, and 40 and are presented in Table 1, where the bolded data represent the optimal outcomes. Figure 8 displays the twentieth frame along with the reconstructed results from various methods for each of the MRI datasets obtained at \({\text{SR}}=40\). It is evident, both quantitatively and visually, that our recovery method achieves better reconstruction performance. In addition, the reduced computational complexity of the matrix-signal multiplication in semi-tensor product-based measurement results in significantly shorter execution times for our algorithm compared to the other algorithms. This aligns with MRI's requirement for fast sampling and reconstruction.
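As an illustration of the block-based preprocessing described above, the following sketch (helper names are ours) partitions an image into \(32\times 32\) blocks and applies a 2-D DCT per block; each vectorized coefficient block then plays the role of the sparse signal in the 1-bit measurement model, and reconstruction reverses the two steps:

```python
import numpy as np
from scipy.fft import dctn, idctn

def image_to_sparse_blocks(img, b=32):
    """Partition img into b x b tiles, apply a 2-D DCT to each tile, and
    vectorize: returns an array of shape (num_tiles, b * b)."""
    H, W = img.shape
    tiles = img.reshape(H // b, b, W // b, b).swapaxes(1, 2).reshape(-1, b, b)
    return np.stack([dctn(t, norm='ortho').ravel() for t in tiles])

def blocks_to_image(coeffs, H, W, b=32):
    """Invert: 2-D inverse DCT per tile, then reassemble the image."""
    tiles = np.stack([idctn(c.reshape(b, b), norm='ortho') for c in coeffs])
    return tiles.reshape(H // b, W // b, b, b).swapaxes(1, 2).reshape(H, W)
```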

Table 1 Comparison of the quantitative indices obtained by different methods on MRI data at different sampling rates
Fig. 8

Visual display of the reconstructed results obtained by different approaches on frame 20 of Data A (upper), and Data B (lower) at SR = 40. a Original image, b Gaussian, c Sym-Bern, d Circ-Gauss, e HT, f Inverse-WT, g \(\ell _{1}\)-SE-WT, h Ours

5.4 Hyperspectral images recovery

We also apply the proposed method to perform compressed sensing of HSIs. We select two typical HSI datasets, HYDICE Urbanpart and AVIRIS Moffett Field, both of size \(256\times 256\times 128\), for our experiments. To facilitate the implementation of the measurement process, we again employed a block-based sampling scheme. Empirically, we partitioned the original data into \(16\times 16\times 16\)-sized blocks and subsequently performed a three-dimensional DCT on each block to obtain sparse signals through vectorization. The recovery process is conducted block-wise. We evaluate the recovery performance by calculating the PSNR and SSIM values and the execution times at SRs of 10 and 20. The quantitative indices are reported in Table 2, where the bolded data represent the optimal outcomes, and the recovered images of all methods with bands 6-45-155 as R-G-B at SR = 20 are depicted in Fig. 9. The superiority of both \(\ell _{1}\)-SE-WT and our method in terms of recovery performance is evident, with \(\ell _{1}\)-SE-WT performing slightly better than ours. However, our method has a significant advantage in execution time.

Table 2 Comparison of the quantitative indices obtained by different methods on HSIs at different sampling rates
Fig. 9

Recovered images of all competing methods with bands 6-45-155 as R-G-B at SR = 20. The upper are results for HYDICE Urbanpart and the lower are results for AVIRIS Moffett Field. a Original image, b Gaussian, c Sym-Bern, d Circ-Gauss, e HT, f Inverse-WT, g \(\ell _{1}\)-SE-WT, h Ours

6 Conclusion

This paper proposes a model for data compression and reconstruction from one-bit measurements using the semi-tensor product. Unlike traditional measurements, the new method does not require the number of columns of the sensing matrix to equal the length of the underlying signal, thus breaking the dimension matching condition. Leveraging the theory of the semi-tensor product, this approach is capable of compressing high-dimensional signals with lower-dimensional sensing matrices. Theoretically, we provide an upper bound on the recovery error of this method. We also apply the proposed method to the compressed sensing of real-world data such as ECG signals, MRI data, and HSIs to demonstrate the low time complexity of our approach.

The selection of an optimal value for k may be influenced by the characteristics of the underlying signal, and this article presents empirical approaches to aid in its determination. In future research, we intend to explore a theoretical solution to this issue. Additionally, current compressed sensing MRI methods rely on sampling matrices derived from the partial Fourier transform, whereas, due to the limitations imposed by one-bit quantization, this paper employs a semi-tensor product measurement based on the Gaussian distribution, which differs from real-world MRI acquisition. Further research is needed on how to apply 1-bit quantization and semi-tensor product-based measurements in real MRI application scenarios. Additionally, much remains to be investigated regarding the application of semi-tensor product-based measurements in high-dimensional scenarios, particularly the compressed sensing of low-rank matrices and tensors from binary measurements.

Availability of data and materials

Please contact any of the authors for data and materials.


References

  1. H.F. Shen, X.H. Li, L.P. Zhang, D.C. Tao, C. Zeng, Compressed sensing-based inpainting of aqua moderate resolution imaging spectroradiometer Band 6 using adaptive spectrum-weighted sparse Bayesian dictionary learning. IEEE Trans. Geosci. Remote Sens. 51(2), 894–906 (2014)

  2. M. Lustig, D.L. Donoho, J.M. Santos, J.M. Pauly, Compressed sensing MRI. IEEE Signal Process. Mag. 25(2), 72–82 (2008)

  3. S. Dirksen, S. Mendelson, Non-Gaussian hyperplane tessellations and robust one-bit compressed sensing. J. Eur. Math. Soc. 23, 2913–2947 (2021)

  4. P.T. Boufounos, R.G. Baraniuk, 1-bit compressive sensing, in Proceedings of the 43rd Asilomar Conference on Signals, Systems and Computers (IEEE, Pacific Grove, 2008), pp. 16–21

  5. L. Jacques, J.N. Laska, P.T. Boufounos, R.G. Baraniuk, Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. IEEE Trans. Inf. Theory 59(4), 2082–2102 (2013)

  6. Y. Plan, R. Vershynin, Robust 1-bit compressed sensing and sparse logistic regression: a convex programming approach. IEEE Trans. Inf. Theory 59(1), 482–494 (2013)

  7. L.J. Zhang, J.F. Yi, R. Jin, Efficient algorithms for robust one-bit compressive sensing, in Proceedings of the International Conference on Machine Learning (Beijing, 2014), pp. 820–828

  8. J.Y. Hou, J.J. Wang, F. Zhang, J.W. Huang, One-bit compressed sensing via \(\ell _{p}(0<p<1)\)-minimization method. Inverse Probl. 36(5), 055005 (2020)

  9. Z. Wang, F.L. Liu, Y.H. Jia, H.Y. Yang, Y.Y. Guo, One bit compressive sensing with off-grid targets. Digit. Signal Process. 115, 103008 (2021)

  10. E.J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)

  11. R.E. Carrillo, A.B. Ramirez, G.R. Arce, K.E. Barner, B.M. Sadler, Robust compressive sensing of sparse signals: a review. EURASIP J. Adv. Signal Process. 108, 1–17 (2016)

  12. J. Mo, P. Schniter, N.G. Prelcic, R.W. Heath, Channel estimation in millimeter wave MIMO systems with one-bit quantization, in Proceedings of the 48th Asilomar Conference on Signals, Systems and Computers (IEEE, Pacific Grove, 2014), pp. 957–961

  13. Y. Noh, S. Hong, Compressed sensing based active user detection in MIMO systems with one-bit ADC. IEEE Trans. Veh. Technol. 72(1), 1313–1317 (2022)

  14. J.P. Xiong, Q.H. Tang, 1-bit compressive data gathering for wireless sensor networks. J. Sens. 2014(7), 177–183 (2014)

  15. X. Dong, Y. Zhang, A MAP approach for 1-bit compressive sensing in synthetic aperture radar imaging. IEEE Geosci. Remote Sens. Lett. 12(6), 1237–1241 (2015)

  16. J.Y. Hou, F. Zhang, H.Q. Qiu, J.J. Wang, Y. Wang, Y.D. Meng, Robust low-tubal-rank tensor recovery from binary measurements. IEEE Trans. Pattern Anal. Mach. Intell. 44(8), 4355–4373 (2022)

  17. E.J. Candès, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted \(\ell _{1}\) minimization. J. Fourier Anal. Appl. 14, 877–905 (2008)

  18. A. Ai, A. Lapanowski, Y. Plan, R. Vershynin, One-bit compressed sensing with non-Gaussian measurements. Linear Algebra Appl. 441, 222–239 (2014)

  19. S. Dirksen, S. Mendelson, Non-Gaussian hyperplane tessellations and robust one-bit compressed sensing. J. Eur. Math. Soc. 23(9), 2913–2947 (2021)

  20. S. Dirksen, H.C. Jung, H. Rauhut, One-bit compressed sensing with partial Gaussian circulant matrices. Inf. Inference 9(3), 601–626 (2020)

  21. Z.Q. Liu, S. Ghosh, J. Scarlett, Robust 1-bit compressive sensing with partial Gaussian circulant matrices and generative priors, in 2021 IEEE Information Theory Workshop (ITW) (IEEE, Kanazawa, 2021), pp. 1–6

  22. S. Dirksen, S. Mendelson, Robust one-bit compressed sensing with partial circulant matrices. Ann. Appl. Probab. 33(3), 1874–1903 (2023)

  23. D.Z. Cheng, Semi-tensor product of matrices and its application to Morgan's problem. Sci. China Ser. 44(3), 195–212 (2001)

  24. D.Z. Cheng, L.J. Zhang, On semi-tensor product of matrices and its applications. Acta Math. Appl. Sin. 19(2), 219–228 (2003)

  25. D.Z. Cheng, H.S. Qi, A.C. Xue, A survey on semi-tensor product of matrices. J. Syst. Sci. Complex. 20(2), 304–322 (2007)

  26. D.W. Zhao, H.P. Peng, L.X. Li, S.L. Hui, Y.X. Yang, Novel way to research nonlinear feedback shift register. Sci. China Ser. 57(9), 1–14 (2014)

  27. Y.H. Wu, T.L. Shen, Policy iteration approach to control residual gas fraction in IC engines under the framework of stochastic logical dynamics. IEEE Trans. Control Syst. Technol. 25, 1100–1107 (2017)

  28. K.Z. Zhang, L.J. Zhang, S.S. Mou, An application of invertibility of Boolean control networks to the control of the mammalian cell cycle. IEEE/ACM Trans. Comput. Biol. Bioinform. 14, 225–229 (2017)

  29. D. Xie, H.P. Peng, L.X. Li, Y.X. Yang, Semi-tensor compressed sensing. Digit. Signal Process. 58, 85–92 (2016)

  30. Y. Plan, R. Vershynin, Dimension reduction by random hyperplane tessellations. Discrete Comput. Geom. 51(2), 438–461 (2014)

  31. J.R. Chen, C.L. Wang, M.K. Ng, D. Wang, High dimensional statistical estimation under uniformly dithered one-bit quantization. IEEE Trans. Inf. Theory 69(8), 5151–5187 (2023)

  32. B. Le, T.W. Rondeau, J.H. Reed, C.W. Bostian, Analog-to-digital converters. IEEE Signal Process. Mag. 22(6), 69–77 (2005)

  33. G.B. Moody, R.G. Mark, The impact of the MIT-BIH arrhythmia database. IEEE Eng. Med. Biol. Mag. 20(3), 45–50 (2001)

  34. J. Shen, One-bit compressed sensing via one-shot hard thresholding, in Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (PMLR, Toronto, 2020), pp. 510–519

  35. P. Xiao, B. Zhao, Robust one-bit compressive sensing with weighted \(\ell _{1}\)-norm minimization. Signal Process. 164, 380–385 (2019)

  36. S.G. Lingala, M. Jacob, Blind compressive sensing dynamic MRI. IEEE Trans. Med. Imaging 32(6), 1132–1145 (2013)

  37. L. Gan, Block compressed sensing of natural images, in Proceedings of the 15th International Conference on Digital Signal Processing (IEEE, Cardiff, 2007), pp. 403–406

  38. J.E. Fowler, S. Mun, E.W. Tramel, Block-based compressed sensing of images and video. Found. Trends Signal Process. 4(4), 297–416 (2012)

  39. Z. Wang, A.C. Bovik, H.R. Sheikh, E.P. Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)


Acknowledgements

Not applicable.

Funding

This research was supported in part by the National Natural Science Foundation of China (Grant Nos. 12201505, 12301594), in part by the Sichuan Science and Technology Program (Grant No. 2023NSFSC0060), and in part by the Initiative Projects for Ph.D. in China West Normal University (Grant No. 22kE030).

Author information


Contributions

JYH proposed the framework of the whole method; performed the simulations, analysis and interpretation of the results; and drafted the manuscript. XLL has participated in the conception and revision of this research. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Jingyao Hou.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Hou, J., Liu, X. Semi-tensor product-based one-bit compressed sensing. EURASIP J. Adv. Signal Process. 2023, 112 (2023). https://doi.org/10.1186/s13634-023-01071-6
