# Sparse signal recovery from modulo observations

## Abstract

We consider the problem of reconstructing a signal from under-determined modulo observations (or measurements). This observation model is inspired by a relatively new imaging mechanism called modulo imaging, which can be used to extend the dynamic range of imaging systems; variations of this model have also been studied under the category of phase unwrapping. Signal reconstruction in the under-determined regime with modulo observations is a challenging ill-posed problem, and existing reconstruction methods cannot be used directly. In this paper, we propose a novel approach to solving the signal recovery problem under sparsity constraints for the special case of modulo folding limited to two periods. We show that given a sufficient number of measurements, our algorithm perfectly recovers the underlying signal. We also provide experiments validating our approach on toy signal and image data and demonstrate its promising performance.

## Introduction

The problem of reconstructing a signal (or image) from (possibly) nonlinear observations is a principal challenge in signal acquisition and imaging systems. Our focus in this paper is the problem of signal reconstruction from modulo measurements, where the modulo operation with respect to a positive real-valued parameter R returns the (fractional) remainder after division by R. See Fig. 1a for an illustration. Formally, we consider a high-dimensional signal (or image) $$\mathbf {x}^{*} \in \mathbb {R}^{n}$$. We are given modulo measurements of $$\mathbf {x}^{*}$$, that is, for each measurement vector $$\mathbf {a_{i}} \in \mathbb {R}^{n}$$, we observe:

$$y_{i}=\mod\left({\langle \mathbf{a_{i}}, \mathbf{x^{*}} \rangle},R\right), \qquad {i = 1,2,\ldots,m} \,.$$
(1)

The task is to recover $$\mathbf {x}^{*}$$ using the modulo measurements $$\mathbf {y}$$ and knowledge of the measurement matrix $$\mathbf {A}=[\mathbf {a_{1}}\ \mathbf {a_{2}}\ \ldots\ \mathbf {a_{m}}]^{\top}$$.

This specific form of signal recovery has rapidly gained interest in recent times. Recently, the use of a novel imaging sensor that wraps the data in a periodical manner has been shown to overcome certain hardware limitations of typical imaging systems. Several image acquisition systems suffer from the problem of limited dynamic range; however, real-world signals can contain a large range of intensity levels, and if tuned incorrectly, signal measurements can lie in the saturation region of the sensors, causing loss of information through signal clipping. The problem is amplified in multiplexed linear imaging systems (such as compressive cameras or coded aperture systems), where the required dynamic range is very high because each linear measurement is a weighted aggregation of the original image intensity values.

The standard solution to this issue is to improve the sensor dynamic range via enhanced hardware; this, of course, can be untenably expensive. An intriguing alternative is to deploy special digital modulo sensors. As the name suggests, such a sensor wraps each signal measurement around a scalar parameter R that reflects the dynamic range. However, this also makes the forward model (1) highly nonlinear and the reconstruction problem highly ill-posed. The approach of [6, 7] resolves this problem by assuming overcomplete observations, meaning that the number of measurements m is higher than the ambient dimension n of the signal itself. For the cases where m and n are large, this requirement puts a heavy burden on computation and storage.

In contrast, our focus is on solving the inverse problem (1) with very few samples, i.e., we are interested in the case $$m \ll n$$. While this makes the problem even more ill-posed, we show that such a barrier can be avoided if we assume that the underlying signal obeys a certain low-dimensional structure. In this paper, we focus on the special case when the underlying signal is sparse, but our techniques could be extended to other signal structures. Further, for simplicity, we assume that our observation model is limited to only two modulo fold periods, one for positive- and one for negative-valued coefficients. This does not reflect practice, but we will see that such a variation of the modulo function already inherits much of the challenging aspects of the original recovery problem. Intuitively, this simplification requires that the value of the dynamic range parameter R should be finite, but large enough that most of the measurements fall within the interval [−R,R]. We emphasize that such a requirement does not put a hard constraint on our algorithm, and successful recovery is possible even if some measurements lie outside the interval covered by two modulo periods.

### Our contributions

In this paper, we propose a recovery algorithm for exact reconstruction of sparse signals from modulo measurements of the form (1) with the modulo fold operation being limited to two periods. We refer to our algorithm as MoRAM, short for Modulo Recovery using Alternating Minimization. We observe that the modulo operation with respect to parameter R can be seen as the addition or subtraction of an integer multiple of R from the input. We refer to this integer multiple as the bin-index p. It is not hard to observe that a successful recovery of bin-index can lead to successful recovery of the input using standard sparse recovery methods. Concretely, the forward model in (1) can be written as follows:

$$y_{i}= \langle \mathbf{a_{i}}, \mathbf{x^{*}} \rangle+{p^{*}_{i}}R, \qquad i = 1,\ldots,m.$$

As we restrict our modulo operation to only two periods, the bin-index can only take two values: 0 for non-negative inputs, and 1 for negative inputs. We discuss extending this to multiple modulo fold periods in Section 4.
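To make the two-period forward model concrete, here is a minimal NumPy sketch (the function names are ours for illustration, not the authors' code):

```python
import numpy as np

def modulo_fold(t, R):
    """Two-period modulo fold: negative inputs are shifted up by R
    (bin-index p = 1); non-negative inputs pass through (p = 0),
    following the sgn(0) := 1 convention."""
    t = np.asarray(t, dtype=float)
    p = (t < 0).astype(float)          # bin-index of each entry
    return t + p * R

def modulo_measurements(A, x, R):
    """Forward model of Eq. (1): y_i = mod(<a_i, x>, R) for rows a_i of A."""
    return modulo_fold(A @ x, R)
```

For inputs lying in (−R, R), this fold agrees with the usual `np.mod`, which is exactly the regime the two-period assumption targets.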

As mentioned above, the challenge is to identify the bin-index for each measurement. Estimating the bin-index correctly lets us “unravel” the modulo transfer function, thereby enabling signal recovery. However, due to the highly nonlinear nature of the forward model, each incorrect bin-index introduces a gross additive error of magnitude R irrespective of the magnitude of the measurement. Commonly used signal recovery algorithms fail to cope with such gross corruptions of large magnitude, and perform poorly on our problem (both in theory and practice). Thus, it becomes even more important to find a good estimate of the bin-index and to employ a robust signal recovery algorithm that can recover the signal even in the presence of grossly corrupted measurements.

To this end, our proposed algorithm follows two steps. We first leverage the structure of the modulo folding operator to obtain a promising initial estimate of the bin-indices. We then plug this estimate into a robust sparse recovery formulation that gives us the final signal estimate. We provide analytical correctness proofs for both stages and show that signal recovery can be performed using an (essentially) optimal number of observations provided certain assumptions are met. To the best of our knowledge, we are the first to pursue this type of approach for modulo recovery problems with Gaussian linear measurements, distinguishing us from previous work [6, 7].

### Techniques

Our proposed algorithm (MoRAM) consists of two stages.

In the first stage, we identify a good initial bin-index estimate p0 that is relatively close to the true bin-index p. To obtain this estimate, we employ a simple bin-index estimation step that compares our observed measurements with typical density plots of Gaussian observations. This method recovers a large number of bin-indices correctly, and also provides a provable upper bound on the number of erroneous bin-indices.

In the second stage, we use the initial estimate of the bin-indices of the measurements to recover the true underlying signal. We follow an alternating minimization (AltMin) approach inspired by phase retrieval algorithms that alternately estimates the signal and the bin-indices. However, as mentioned above, any estimation errors incurred in the first step induce fairly large additive errors (proportional to the dynamic range parameter R). We resolve this issue by using a robust form of alternating minimization (specifically, using the Justice Pursuit algorithm). We prove that given a sufficient number of measurements, our Justice Pursuit-based AltMin approach succeeds provided the number of wrongly estimated bin-indices at the start is a sufficiently small fraction of the total number of measurements. This gives us a natural radius of initialization for the initial bin-index estimate and also leads to provable sample-complexity upper bounds.

### Prior work

Since signal recovery from nonlinear measurements is a very large and classical area of study, our review of prior work will unfortunately be incomplete.

#### Modulo recovery

The modulo recovery problem shares some similarities with the problem of phase unwrapping from the classical signal processing literature, which suggests applying phase unwrapping methods to modulo recovery; however, these methods do not come with provable guarantees. For example, one proposed algorithm is specialized to images and employs graph cuts for phase unwrapping from a single modulo measurement per pixel. However, the inherent assumption there is that the input image has very few sharp discontinuities, which makes it unsuitable for practical situations with textured images. Our work is motivated by recent work on high dynamic range (HDR) imaging using a modulo camera sensor. For image reconstruction using multiple measurements, they propose the multi-shot UHDR recovery algorithm, with follow-ups developed in later work. However, the multi-shot approach depends on carefully designed camera exposures, while our approach succeeds for non-designed random Gaussian linear observations; moreover, they do not include sparsity in their reconstruction model. In our previous work, we proposed a different extension based on [7, 13] for signal recovery from quantized modulo measurements, which can also be adapted for sparse measurements, but there too the measurements need to be carefully designed.

In the literature, several authors have attempted to theoretically understand the modulo recovery problem. Given modulo-transformed time-domain samples of a band-limited function, [6, 14, 15] provide a stable algorithm for signal recovery and also prove sufficiency conditions that guarantee recovery. Ordentlich et al. use non-modulo data for initialization, exploiting the statistical structure in order to undo the effects of the modulo operation. Cucuringu and Tyagi formulate and solve a QCQP with non-convex constraints for denoising the modulo-1 samples of the unknown function, along with providing a least-squares-based modulo recovery algorithm. However, both of these methods rely on the smoothness of the band-limited function as a prior structure on the signal, and as such, it is unclear how to extend their use to more complex modeling priors (such as sparsity in a given basis). In contrast, our approach does not rely on such smoothness assumptions.

More recently, an unlimited sampling algorithm for sparse signals and images has been proposed; it likewise exploits band-limitedness by considering a low-pass filtered version of the sparse signal, and thus differs from our random-measurement setup. Modulo recovery from Gaussian random measurements has also been considered; however, that work assumes the true signal is distributed according to a mixed Bernoulli-Gaussian distribution, which is not a standard assumption.

For a qualitative comparison of our MoRAM method with existing approaches, we refer the reader to Table 1. The table suggests that previous approaches vary from the Nyquist-Shannon sampling setup only along the amplitude dimension, as they rely on band-limitedness of the signal and a uniform sampling grid. We vary the sampling setup along both the amplitude and time dimensions by incorporating sparsity in our model, which enables us to work with a non-uniform sampling grid (random measurements) and achieve a provable sub-Nyquist sample complexity.

## Methods

### Preliminaries

Let us introduce some notation. We denote matrices using bold capital letters (A, B), column vectors using bold small letters (x, y, z, etc.), and scalars using non-bold letters (R, m, etc.). We use the letters C and c to represent constants that are large enough and small enough, respectively. We use $$\mathbf {x}^{\top}$$ and $$\mathbf {A}^{\top}$$ to denote the transpose of the vector x and the matrix A, respectively. The cardinality of a set S is denoted by card(S). We define the signum function as $$\text {sgn}(x) := \frac {x}{|x|}$$ for every $$x \in \mathbb {R}, x \neq 0$$, with the convention that sgn(0)=1. The ith element of the vector $$\mathbf {x} \in \mathbb {R}^{n}$$ is denoted by xi. Similarly, the ith row of the matrix $$\mathbf {A} \in \mathbb {R}^{m \times n}$$ is denoted by ai, while the element of A in the ith row and jth column is denoted by aij.

### Mathematical model

We consider the modulo operation within two periods (one in the positive half and one in the negative half). We assume that the value of the dynamic range parameter R is large enough that most of the measurements 〈ai,x〉 fall within the domain of operation of the modulo function. Rewriting in terms of the signum function, the (variation of the) modulo function under consideration can be defined as:

$$f(t) := t+\left(\frac{1-\text{sgn}(t)}{2}\right)R.$$

One can easily see that the modulo operation in this case is nothing but the addition of the scalar R if the input is negative, while non-negative inputs remain unaffected. If we divide the number line into these two bins, then the coefficient of R in the above equation can be seen as a bin-index, a binary variable that takes the value 0 when sgn(t)=1, and 1 when sgn(t)=−1. Inserting the definition of f into the measurement model of Eq. 1 gives,

$$y_{i}= \langle \mathbf{a_{i}}, \mathbf{x^{*}} \rangle+\left(\frac{1-\text{sgn}(\langle \mathbf{a_{i}}, \mathbf{x^{*}} \rangle)}{2}\right)R, \qquad i = 1,\ldots,m.$$
(2)

We can rewrite Eq. 2 using a bin-index vector $$\mathbf{p^{*}} \in \{0,1\}^{m}$$. Each element of the true bin-index vector $$\mathbf{p^{*}}$$ is given as,

$$p^{*}_{i} = \frac{1-\text{sgn}\left(\langle \mathbf{a_{i}}, \mathbf{x^{*}} \rangle\right)}{2}, \qquad i = 1,\ldots,m.$$

If we ignore the presence of the modulo operation in the above formulation, then it reduces to a standard compressive sensing reconstruction problem. In that case, the compressed measurements $$y_{c_{i}}$$ would just be equal to 〈ai,x〉. While we have access only to the compressed modulo measurements y, it is useful to write y in terms of true compressed measurements yc. Thus,

\begin{aligned} y_{i} &= {\langle \mathbf{a_{i}}, \mathbf{x^{*}} \rangle} + p^{*}_{i}R \\ &= y_{c_{i}}+p^{*}_{i}R. \end{aligned}

It is evident that if we can recover p successfully, we can calculate the true compressed measurements 〈ai, x〉 and use them to reconstruct x with any sparse recovery algorithm, such as CoSaMP or basis pursuit.

### Signal recovery

The major barrier to signal recovery is that the bin-index vector is unknown. In this section, we describe our algorithm to recover both x and p, given the modulo measurements y, the measurement matrix A, the sparsity s of the underlying signal, and the modulo parameter R. In this work, we rely on the assumption that our signal is sparse in a known domain with sparsity s. Our algorithm MoRAM (Modulo Recovery using Alternating Minimization) comprises two stages: (i) a bin-index initialization stage and (ii) a descent stage via alternating minimization.

#### Bin-index initialization

As stated earlier, if we recover the true bin-index p successfully, x can be recovered easily using any sparse recovery algorithm, as we can obtain the true compressed measurements 〈ai,x〉 from p. Thus, in the absence of p, we propose to correctly estimate a fraction of the values of p. To understand the rationale for such a procedure, we first examine the effect of the modulo operation on the linear measurements.

#### Effect of the modulo transfer function

To provide some intuition, let us first examine the relation between the distributions of Ax and mod (Ax). It is easy to see that the compressed measurements yc follow a normal distribution.

We can now divide the compressed observations yc into two sets: yc,+, which contains all the non-negative observations with bin-index =0, and yc,−, which contains all the negative observations with bin-index =1. As shown in Fig. 2, after the modulo operation, the set yc,− (green) shifts to the right by R and gets concentrated in the right half ([R/2,R]), while the set yc,+ (orange) remains unaffected and concentrated in the left half ([0,R/2]). Thus, for some of the modulo measurements, their correct bin-index can be identified by observing their magnitudes relative to the midpoint R/2. This leads us to the following estimator for bin-indices (p):

$${p}^{0}_{i} = \left\{\begin{array}{ll} 0,& \text{if}\ 0\leq y_{i} < R/2,\\ 1,& \text{if}\ R/2 \leq y_{i} \leq R. \end{array}\right.$$
(3)

The vector p0 obtained with the above method contains the correct values of bin-indices for many of the measurements, except for the ones concentrated within the ambiguous region in the center. We should highlight that the procedure in Eq. 3 will succeed only for the specific case of modulo fold operations limited to two periods, one for the positive and one for the negative cycle. Once we identify the initial values of bin-index for the modulo measurements, we can calculate corrected measurements as:

$$\begin{array}{*{20}l} \mathbf{y^{0}_{c} = y - p^{0}}R. \end{array}$$
(4)
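The initialization of Eq. (3) and the correction of Eq. (4) amount to a simple threshold at R/2; a minimal NumPy sketch (hypothetical helper names, not the paper's code):

```python
import numpy as np

def init_bin_index(y, R):
    """Eq. (3): measurements below R/2 are presumed unfolded (p = 0),
    those at or above R/2 presumed folded (p = 1)."""
    return (np.asarray(y) >= R / 2).astype(float)

def corrected_measurements(y, p, R):
    """Eq. (4): subtract the estimated modulo shift."""
    return np.asarray(y) - p * R
```

Measurements whose true linear value fell in the ambiguous central region are the ones this threshold can misclassify, which is exactly the error the robust recovery stage must absorb.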

#### Alternating minimization

Using Eq. 3, we calculate the initial estimate of the bin-index p0, in which a significant fraction of the values are estimated correctly. Starting with p0, we calculate estimates of x and p in an alternating fashion, converging to the original signal x and the true bin-index p.

With pt close to p, we calculate the corrected compressed measurements $$\mathbf {y^{t}_{c}}$$ using pt and feed $$\mathbf {y^{t}_{c}}$$ to any popular compressive recovery algorithm (such as CoSaMP or basis pursuit) to calculate the signal estimate xt. Therefore:

$$\begin{array}{*{20}l} \mathbf{y^{t}_{c}} &= \mathbf{y} - \mathbf{p^{t}}R, \end{array}$$
(5)
$$\begin{array}{*{20}l} \mathbf{{x}^{t}} &= \underset{\mathbf{x} \in \mathcal{M}_{s}}{\arg\min}\|{\mathbf{Ax} - \mathbf{y^{t}_{c}}}\|_{2}^{2}, \end{array}$$
(6)

where $$\mathcal {M}_{s}$$ denotes the set of s-sparse vectors in $$\mathbb {R}^{n}$$. Note that sparsity is only one of several signal models that can be used here, and a rather similar formulation would extend to cases where $$\mathcal {M}$$ denotes any other structured sparsity model [23, 24].

However, the bin-index estimation error $$\mathbf{d^{t}}=\mathbf{p^{t}}-\mathbf{p^{*}}$$, even if small, would significantly impact the correction step that constructs $$\mathbf {y^{t}_{c}}$$, since each incorrect bin-index adds noise of magnitude R to $$\mathbf {y^{t}_{c}}$$. Our experiments suggest that typical sparse recovery algorithms are not robust enough to cope with such large errors in $$\mathbf {y^{t}_{c}}$$. To tackle this issue, we employ an outlier-robust sparse recovery method known as Justice Pursuit.

At a high level, Justice Pursuit tackles the problem of sparse signal recovery from measurements that are corrupted by sparse but large (unbounded) corruptions. Justice Pursuit leverages the fact that the corruptions are also sparse, and reformulates the problem to recover both the sparse signal and the sparse corruptions together in the form of a concatenated sparse vector. In our case, the error dt is sparse with sparsity $$s_{d^{t}}=\|\mathbf{d^{t}}\|_{0}$$, and each erroneous element of p adds a corruption of magnitude R to $$\mathbf {y^{t}_{c}}$$. Following the Justice Pursuit construction, we augment the measurement matrix A with an identity matrix Im×m and introduce an intermediate vector $$\mathbf {u} \in \mathbb {R}^{n+m}$$ to represent our measurements at iteration t as:

$$\mathbf{Ax^{*}} + R\mathbf{I_{m} d^{t}} = \left[\begin{array}{l} \mathbf{A} ~~~~R\mathbf{I} \end{array}\right] \left[\begin{array}{l} \mathbf{x^{*}} \\ \mathbf{d^{t}} \end{array}\right] = \left[\begin{array}{l} \mathbf{A} ~~ ~~R\mathbf{I} \end{array}\right] \mathbf{u},$$
(7)

and solve for the (s+sdt)−sparse estimate $$\mathbf {\widehat {u}}$$:

$$\left[\begin{array}{l} \mathbf{\widehat{x}^{t}} \\ \mathbf{\widehat{d}^{t}} \end{array}\right] = \mathbf{\widehat{u}} = \underset{\mathbf{u}}{\arg\min} \|{\mathbf{u}}\|_{1}~~~s.t. \left[\begin{array}{l} \mathbf{A} ~~~~ R\mathbf{I} \end{array}\right] \mathbf{u} = \mathbf{y^{t}_{c}}$$
(8)

Here, the signal estimate $$\mathbf {\widehat {x}^{t}}$$ is obtained by selecting the first n elements of $$\mathbf {\widehat {u}}$$, while an estimate of the corruptions can be obtained by selecting the last m elements of $$\mathbf {\widehat {u}}$$. The problem in Eq. 8 can be solved by any stable sparse recovery algorithm such as CoSaMP or IHT; however, note that the sparsity of dt is unknown, so greedy sparse recovery methods cannot be used directly without an additional hyper-parameter. Therefore, we employ basis pursuit, which does not heavily depend on a priori knowledge of the sparsity level.

We refer to the routine that solves the program in Eq. 8 using basis pursuit as JP. Given $$\mathbf {A, y^{t}_{c}}$$, JP returns the updated signal estimate. Thus,

$$\mathbf{x^{t+1}}= JP \left(\mathbf{A}, \mathbf{y^{t}_{c}} \right).$$
(9)
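The convex program in Eq. (8) can be solved as a linear program via the standard positive/negative splitting of u; the following SciPy sketch is our illustrative implementation, not the authors' code. The test below asserts only feasibility and ℓ1-optimality of the returned vector (which hold for any correct LP solution), not exact recovery.

```python
import numpy as np
from scipy.optimize import linprog

def justice_pursuit(A, y_c, R):
    """Sketch of the Justice Pursuit step, Eq. (8):
        min ||u||_1  s.t.  [A  R*I] u = y_c,
    solved as an LP with u = u_plus - u_minus, u_plus, u_minus >= 0."""
    m, n = A.shape
    B = np.hstack([A, R * np.eye(m)])   # augmented matrix [A  R*I] of Eq. (7)
    N = n + m
    c = np.ones(2 * N)                  # minimize sum(u_plus + u_minus) = ||u||_1
    A_eq = np.hstack([B, -B])           # B (u_plus - u_minus) = y_c
    res = linprog(c, A_eq=A_eq, b_eq=y_c, bounds=(0, None), method="highs")
    u = res.x[:N] - res.x[N:]
    return u[:n], u[n:]                 # signal estimate, corruption estimate
```

Any off-the-shelf ℓ1 solver could replace `linprog` here; the LP formulation is just the most self-contained choice.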

Once the signal estimate $$\mathbf{x^{t+1}}$$ is obtained at each iteration of alternating minimization, we use it to calculate the value of the bin-index vector pt+1 as follows:

$$\mathbf{{p}^{t+1}} = \frac{\mathbf{1}-\text{sgn}\left(\mathbf{A}\mathbf{x^{t+1}} \right)}{2}.$$
(10)

Proceeding this way, we repeat the steps of sparse recovery (Eq. 8) and bin-index calculation (Eq. 10) in alternating fashion for T iterations. Under certain conditions (described in Section 2.4 below), our algorithm is able to achieve convergence to the true underlying signal.
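Putting the pieces together, the MoRAM descent stage can be sketched as follows. The `solver` argument stands in for the sparse recovery step of Eqs. (6)/(8) (Justice Pursuit in the paper), and all names are ours:

```python
import numpy as np

def update_bin_index(A, x_t):
    """Eq. (10): recompute bin-indices from the current signal estimate
    (sgn(0) := 1 convention, so A @ x_t == 0 maps to p = 0)."""
    s = np.where(A @ x_t >= 0, 1.0, -1.0)
    return (1.0 - s) / 2.0

def moram_altmin(A, y, R, solver, T=5):
    """MoRAM descent-stage skeleton: alternate the correction of Eq. (5),
    a sparse-recovery step, and the bin-index update of Eq. (10)."""
    p = (y >= R / 2).astype(float)      # initialization, Eq. (3)
    x = None
    for _ in range(T):
        y_c = y - p * R                 # Eq. (5): undo the estimated fold
        x = solver(A, y_c)              # Eq. (6)/(8): recover the signal
        p = update_bin_index(A, x)      # Eq. (10): refresh bin-indices
    return x, p
```

Any routine with signature `solver(A, y_c) -> x` can be plugged in; the test below exercises the skeleton with ordinary least squares on an overdetermined, noiseless instance where the initialization is exact, which is far simpler than the robust JP step the algorithm actually uses.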

### Mathematical analysis

In this section, we provide correctness proofs for both steps of Algorithm 1. For the first stage, we derive an upper bound on the number of incorrect estimations in p0 obtained in the bin-index initialization step. This upper bound essentially provides an upper bound on the permissible sparsity of d0. For the second stage, we calculate a sufficient number of measurements required such that the augmented matrix used in the Justice Pursuit formulation in (8) satisfies the Restricted Isometry Property (RIP), which would in turn enable a recovery guarantee.

### Bin-index initialization

In this step, we initialize the bin-index vector p0 according to Eq. 3. We can also quantify the number of correctly estimated bin-indices by calculating the area under the curve of the density plots of the measurements before and after the modulo operation. An illustration is provided in Fig. 3.

In this analysis, our goal is to characterize the distribution of the total number of measurements for which we can estimate the correct bin-index through Eq. 3. We denote this random variable by Mc. From Mc, we can calculate the sparsity of d0 as $$\|\mathbf{d^{0}}\|_{0}=m-M_{c}$$. The following lemma presents a bound on the sparsity of d0.

### Lemma 1

Let the entries of the measurement matrix be generated as $$\mathbf {A}_{ij} \sim \mathcal {N}(0,1/m)$$, and let y be the modulo measurements obtained as per Eq. 1. Let Mc be the random variable denoting the number of measurements for which the correct bin-indices are identified by the initialization method of Eq. 3. Then, with probability at least $$1 - e^{-O(m\delta ^{2})}$$:

$$M_{c} > (1 - \delta)m \left(1-2\frac{\sigma^{2}\phi(R/2)}{(R/2)} \right).$$

Here, ϕ(·) is a Gaussian density with mean μ=0 and variance $$\sigma ^{2} = \|{\mathbf {x^{*}}}\|^{2}_{2}$$.

### Proof

Observe that each element of A is i.i.d. Gaussian with mean $$\mu _{A_{ij}} = 0$$ and variance $$\sigma ^{2}_{A_{ij}}$$. Recall that

$$y_{c,i} =\langle \mathbf{a_{i}}, \mathbf{x^{*}} \rangle = \sum_{j=1}^{n}A_{ij}x^{*}_{j}.$$

Therefore, we have

$$y_{c,i} \sim \mathcal{N}\left(\mu= \sum_{j=1}^{n}x^{*}_{j}\mu_{A_{ij}} = 0,\ \sigma^{2} =\sum_{j=1}^{n}x^{*2}_{j}\sigma^{2}_{A_{ij}}\right).$$

Thus, each element of yc follows a zero-mean Gaussian distribution with variance σ2. Let Ei be the event that the random variable yc,i lies in the interval [−R/2,R/2]; this event indicates that the corresponding measurement is appropriately corrected by Eq. 4. Clearly, Ei is a Bernoulli random variable with probability $$q=P[-R/2\leq y_{c,i}\leq R/2]$$. Elementary probability calculations give us:

$$\begin{array}{*{20}l} q = 1 - 2 Q_{0, \sigma^{2}}(R/2), \end{array}$$

where $$\phantom {\dot {i}\!}Q_{0, \sigma ^{2}}(\cdot)$$ is the usual Q-function. This is not calculable in closed form; however, q can be lower bounded by upper-bounding the Q-function with the following identity (where $$\phantom {\dot {i}\!}\phi _{0, \sigma ^{2}}(\cdot)$$ is a Gaussian density function with mean zero and variance σ2):

$$Q_{0, \sigma^{2}}(t) < \sigma^{2} \frac{\phi_{0, \sigma^{2}}(t)}{t}.$$

The random variable $$M_{c} = \sum _{i=1}^{m} {E_{i}}$$ denotes the number of corrected measurements. By an application of the Chernoff bound,

$$\begin{array}{*{20}l} P\left(M_{c} \leq (1 - \delta)\mu'\right) \leq e^{-\mu'\delta^{2}/2}, \end{array}$$

for any $$\delta \in (0,1)$$, where $$\mu'$$ is the mean of Mc. Plugging in $$\mu'=mq$$ and lower-bounding q via the Q-function bound above gives the desired result. □
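Both ingredients of the proof can be sanity-checked numerically; the sketch below verifies the Q-function bound over a grid and checks the Chernoff tail bound by Monte Carlo for assumed values m=200, q=0.9, δ=0.2 (illustrative, not the paper's parameters):

```python
import math
import numpy as np

def Q(t, sigma2):
    """Gaussian tail P[X > t] for X ~ N(0, sigma2)."""
    return 0.5 * math.erfc(t / math.sqrt(2.0 * sigma2))

def phi(t, sigma2):
    """N(0, sigma2) density at t."""
    return math.exp(-t * t / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)

# Mills-ratio-style bound used in the proof: Q_{0,s2}(t) < s2 * phi(t) / t
for s2 in (0.5, 1.0, 2.0):
    for t in (0.5, 1.0, 2.0, 4.0):
        assert Q(t, s2) < s2 * phi(t, s2) / t

# Chernoff step: P(Mc <= (1 - delta) m q) <= exp(-m q delta^2 / 2),
# with Mc a sum of m i.i.d. Bernoulli(q) indicators E_i
m, q, delta = 200, 0.9, 0.2
bound = math.exp(-m * q * delta**2 / 2.0)
rng = np.random.default_rng(0)
Mc = rng.binomial(m, q, size=20000)         # 20000 Monte Carlo draws of Mc
empirical = float(np.mean(Mc <= (1 - delta) * m * q))
```

The empirical tail probability comes out far below the Chernoff bound, as expected since the bound is loose for moderate deviations.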

We now perform a theoretical analysis of the descent stage of our algorithm. We assume the availability of an initial estimate of the bin-index vector p0 that is close to p. In our case, the initialization step (in Alg. 1) provides such a p0.

We perform alternating minimization (AltMin) as described in Algorithm 1, starting with p0 calculated using Eq. 3. For simplicity, we limit our convergence analysis to a single AltMin iteration. In fact, according to our theoretical analysis, if initialized well enough, one iteration of AltMin suffices for exact signal recovery given sufficiently many measurements; however, in practice, we have observed that our algorithm performs better with multiple AltMin iterations.

### Theorem 2

Given the initial estimate of bin-index p0 obtained using Eq. 3, if the number of modulo measurements m satisfies:

$$\begin{array}{*{20}l} m \geq C_{1}\left(\|{\mathbf{x^{*}}}\|_{0} + m(1 - U + \delta U)\right) \log\left(\frac{n + m}{\|{\mathbf{x^{*}}}\|_{0} + m\left(1 - U + \delta U\right)}\right), \end{array}$$

then the first iteration of Algorithm 1 returns the true signal $$\mathbf{x^{*}}$$ with probability exceeding $$1 - e^{-O(m\delta ^{2})}$$ for small δ>0. Here, C1 depends only on the RIP constant of the augmented measurement matrix [A I], $$\phantom {\dot {i}\!}q = 1 - 2 Q_{0, \sigma ^{2}}(R/2)$$, and $$U = 1-2\sigma ^{2}\frac {\phi (R/2)}{(R/2)}$$.

### Proof

In the estimation step, Algorithm 1 recasts the problem of recovering the true signal x as a special case of sparse signal recovery from sparsely corrupted compressive measurements. The presence of the modulo operation modifies the compressive measurements by adding a constant noise of value R to a fraction of the total measurements. However, once we identify the correct bin-index for some of the measurements using Eq. 3, the remaining noise can be modeled as sparse corruptions d according to the formulation:

$$\mathbf{y^{0}_{c}} = \mathbf{Ax^{*}} + \mathbf{I_{m}R\left(p^{0}-p^{*}\right)} = \mathbf{Ax^{*}} + \mathbf{d^{0}}.$$

Here, the $$\ell_{0}$$-norm of d0 gives us the number of noisy measurements in $$\mathbf {y^{0}_{c}}$$.

If the initial bin-index vector p0 is close to the true bin-index vector p, then $$\|\mathbf{d^{0}}\|_{0}$$ is small relative to the total number of measurements m; thus, d0 can be treated as a sparse corruption. If we model this corruption as sparse noise, then we can employ JP for guaranteed recovery of the true signal provided a sufficiently large number of measurements is available. Denote by $$\|\mathbf{d^{0}}\|_{0}=m-M_{c}$$ the number of measurements for which the bin-index estimates were incorrect. Then, using Lemma 1, with probability at least $$1 - e^{-O(m\delta ^{2})}$$:

$$\begin{array}{*{20}l} \|{\mathbf{d^{0}}}\|_{0} & \leq m -(1 - \delta)mU \\ & \leq m\left(1 - U + \delta U\right),~~\text{with}\ U = \left(1-2\sigma^{2}\frac{\phi(R/2)}{(R/2)} \right). \\ \end{array}$$

Algorithm 1 is essentially the Justice Pursuit (JP) formulation described above. Exact signal recovery from sparsely corrupted measurements is a well-studied problem with uniform recovery guarantees available in the existing literature. We use the guarantee proved for Gaussian observations, which states that, given enough measurements, the augmented matrix [A I] satisfies the Restricted Isometry Property (RIP). One can then recover a sparse signal exactly by tractable $$\ell_{1}$$-minimization if the measurement matrix is known to satisfy the RIP. Thus, provided $$m \geq C\left(\|\mathbf{x^{*}}\|_{0}+\|\mathbf{d^{0}}\|_{0}\right)\log\left(\frac{n+m}{\|\mathbf{x^{*}}\|_{0}+\|\mathbf{d^{0}}\|_{0}}\right)$$, we invoke Theorem 1.1 from the Justice Pursuit analysis and replace $$\|\mathbf{d^{0}}\|_{0}$$ with m(1−U+δU) as stated above to complete the proof. □

From the theorem, we see that the number of measurements required for guaranteed recovery depends on the ratio of σ (standard deviation of the measurements) and R. In practical applications, choosing a sufficiently large R such that the interval [−R,R] covers multiple standard deviations on both sides of origin enables successful recovery.
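For intuition on choosing R, the fraction q of correctly initialized bin-indices from the analysis can be evaluated directly; a small stdlib-only sketch with an assumed σ=1 (an illustrative value, not one from the paper):

```python
import math

def q_fraction(R, sigma):
    """q = P[-R/2 <= y_c <= R/2] = 1 - 2 Q_{0,sigma^2}(R/2): the expected
    fraction of measurements whose bin-index Eq. (3) identifies correctly."""
    return math.erf((R / 2.0) / (sigma * math.sqrt(2.0)))

# with sigma = 1, R = 4, the interval [-R/2, R/2] spans two standard
# deviations on each side, so roughly 95% of bin-indices start out correct
q = q_fraction(4.0, 1.0)
```

Increasing R pushes q toward 1, which matches the theorem's message that larger R (relative to σ) makes recovery easier.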

## Experiments

In this section, we present the results of simulations of signal reconstruction using our algorithm. All numerical experiments were conducted using MATLAB R2020b on a Windows system with an Intel CPU and 16 GB RAM. Our experiments explore the performance of the MoRAM algorithm on both synthetic data and real images.

We perform experiments on a synthetic sparse signal $$\mathbf {x^{*}} \in \mathbb {R}^{n}$$ with n=1000. The sparsity level of the signal is varied in steps of 6, from 6 up to a maximum of 24. The non-zero elements of the test signal x are generated from a zero-mean Gaussian distribution $$\mathcal {N}(0, 1)$$ and normalized such that $$\|\mathbf{x^{*}}\|_{2}=1$$. The elements aij of the Gaussian measurement matrix $$\mathbf {A} \in \mathbb {R}^{m\times n}$$ are generated i.i.d. from $$\mathcal {N}(0, 1/m)$$. The number of measurements m is varied from m=100 to m=1000 in steps of 100.

Using A, x, and R, we first obtain the compressed modulo measurements y by passing the signal through the forward model described by Eq. 2. For reconstruction, Algorithm 1 is employed. We plot the variation of the relative reconstruction error $$\left (\frac {\|{\mathbf {x^{*}-x^{T}}}\|}{\|{\mathbf {x^{*}}}\|}\right)$$ with the number of measurements m for our AltMin-based sparse recovery algorithm MoRAM.

For each combination of R, m, and s, we run 10 independent Monte Carlo trials and report the mean of the relative reconstruction error over these trials. Figure 4a and b illustrate the performance of our algorithm for two values of R. Additionally, in Fig. 5, we show reconstruction performance for higher sparsity values s=20,30,40,50 as the number of measurements varies. It is evident that for each combination of R and s, our algorithm recovers the true signal (with zero relative error) given enough measurements. In all such cases, the minimum number of measurements required for exact recovery is well below the ambient dimension n of the underlying signal.

From the analysis provided in the previous section, it is clear that the success of our algorithm depends on the ratio of the standard deviation σ of the Gaussian measurements and modulo period R. When the matrix A is correctly normalized, σ directly depends on the Euclidean norm of the signal, which in turn depends on the sparsity. Thus, as discussed earlier, if R is chosen to be large enough, success of our algorithm is guaranteed.

To put this into perspective, we provide additional results where we fix the number of measurements m and the sparsity s, and vary the modulo parameter R. As shown in Fig. 6, even when R becomes small and the ratio of σ to R increases, our algorithm is able to recover the underlying signal perfectly. As R increases, recovery becomes easier for our algorithm, and its performance decays gracefully as R is reduced.

In practical scenarios, we may encounter cases where a few measurements exceed the modulo period R. If we define γ = max<sub>i</sub> |〈a<sub>i</sub>, x*〉|, then the value of γ varies across Monte Carlo runs of our experiment (due to the randomness in the measurement matrix A), and we may encounter cases with γ>R. To analyze such cases, we provide an additional experiment where we fix the number of measurements m=400, sparsity s=12, and modulo parameter R=3.2, and run our recovery algorithm multiple times with different realizations of A. Across 50 random trials, the observed values of γ had mean 3.45, variance 0.2065, maximum 4.41, and minimum 2.82. In all 50 cases, our algorithm recovered the true signal perfectly, i.e., with zero relative error. Since γ varies widely and takes values higher than R, perfect recovery in all cases shows that our algorithm is robust to measurements with magnitudes exceeding R, as long as such measurements are few in number (as guaranteed by Gaussian tail bounds on 〈a<sub>i</sub>, x*〉). In this sense, our algorithm degrades gracefully with respect to the assumption that most of the measurements lie in the interval [−R, R].
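The statistics of γ reported above can be estimated with a short simulation; the following sketch (with our own helper name `gamma_stats` and an arbitrary seed, so the resulting numbers will differ from the paper's) computes the same summary quantities across random draws of A.

```python
import numpy as np

def gamma_stats(n, m, s, trials=50, seed=0):
    """Summary statistics of gamma = max_i |<a_i, x*>| across random
    realizations of the Gaussian measurement matrix A."""
    rng = np.random.default_rng(seed)
    gammas = []
    for _ in range(trials):
        x = np.zeros(n)
        x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        gammas.append(np.max(np.abs(A @ x)))   # largest folded magnitude
    g = np.array(gammas)
    return {"mean": g.mean(), "var": g.var(),
            "max": g.max(), "min": g.min()}
```

Comparing these statistics against R quantifies how often measurements fall outside [−R, R] for a given experimental configuration.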

We also evaluated the performance of our algorithm on a real image. We obtain a sparse representation of the image by transforming it into the wavelet basis (db1, i.e., Haar). The image used in our experiment is a 128×128 (n=16384) image (Fig. 7a), which we sparsify with the Haar wavelet transform using s=800. We reconstruct the image with MoRAM using m=4000 and m=6000 compressed modulo measurements, for three values of R: 4, 4.25, and 4.5. As expected, the reconstruction performance improves with increasing R. As shown in Fig. 7 (bottom), for m=6000, the algorithm produces near-perfect recovery with high PSNR for all three values of R. Note that the blocky artifacts appearing in the recovered image are due to sparsification of the original image using the Haar wavelet transform. Since we use s=800 to obtain the ground truth image, the ground truth itself contains some compression artifacts, as depicted in Fig. 7 (a, bottom). The PSNR values are calculated with respect to the sparse ground truth image rather than the original image, since our algorithm aims to recover only the former. We expect the effect of these compression artifacts to diminish with a better choice of sparsity basis for the underlying image.
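The Haar (db1) sparsification step can be illustrated with a self-contained, single-level orthonormal Haar transform in plain NumPy (a library such as PyWavelets would normally be used; this sketch and its helper names are ours, and it assumes even image dimensions).

```python
import numpy as np

def haar2_level(img):
    """One level of the orthonormal 2-D Haar (db1) transform."""
    a = (img[0::2] + img[1::2]) / np.sqrt(2)   # row low-pass
    d = (img[0::2] - img[1::2]) / np.sqrt(2)   # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2_level(ll, lh, hl, hh):
    """Inverse of haar2_level."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2] = (ll + lh) / np.sqrt(2)
    a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d[:, 0::2] = (hl + hh) / np.sqrt(2)
    d[:, 1::2] = (hl - hh) / np.sqrt(2)
    img = np.empty((2 * a.shape[0], a.shape[1]))
    img[0::2] = (a + d) / np.sqrt(2)
    img[1::2] = (a - d) / np.sqrt(2)
    return img

def sparsify(img, s):
    """Keep the s largest-magnitude Haar coefficients and invert,
    yielding the blocky s-sparse ground truth described in the text."""
    ll, lh, hl, hh = haar2_level(img)
    c = np.concatenate([b.ravel() for b in (ll, lh, hl, hh)])
    thresh = np.sort(np.abs(c))[-s]        # s-th largest magnitude
    c[np.abs(c) < thresh] = 0.0            # hard-threshold the rest
    k = ll.size
    blocks = [c[i * k:(i + 1) * k].reshape(ll.shape) for i in range(4)]
    return ihaar2_level(*blocks)
```

Applying `sparsify(img, 800)` to a 128×128 image produces a Haar-sparse ground truth analogous to the one used in Fig. 7; the hard thresholding is what introduces the blocky compression artifacts mentioned above.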

## Conclusions

In this paper, we presented a novel algorithmic approach for sparse signal recovery from compressed modulo measurements, inspired by techniques from phase retrieval. We support our proposed algorithm with mathematical analysis and several experimental results. Our work points to a few directions for further research. While in this paper we considered only two modulo periods, extending the proposed approach to more periods (up to a theoretically infinite number) is a significant and interesting research direction. Also, instead of relying on a sparsity prior for compressed recovery, employing richer priors such as GANs is an additional direction. Moreover, our analysis is limited to the case of Gaussian measurement schemes, which may or may not be physically realizable. Extending our results to more practical measurement schemes such as Fourier-based sampling or ptychography is an interesting problem for future study.

## Availability of data and materials

Reproducible source code and datasets supporting the conclusions of this paper can be accessed via the GitHub repository http://github.com/shahviraj/MoRAM, available under the MIT License.

## References

1. V. Shah, C. Hegde, in 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP). Signal reconstruction from modulo observations (2019), pp. 1–5.

2. J. Rhee, Y. Joo, Wide dynamic range CMOS image sensor with pixel level ADC. Electron. Lett. 39, 360–361 (2010).

3. S. Kavusi, A. El Gamal, in Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications V, 5301. Quantitative study of high-dynamic-range image sensor architectures (International Society for Optics and Photonics, 2004), pp. 264–276.

4. K. Sasagawa, T. Yamaguchi, M. Haruta, Y. Sunaga, H. Takehara, H. Takehara, T. Noda, T. Tokuda, J. Ohta, An implantable CMOS image sensor with self-reset pixels for functional brain imaging. IEEE Trans. Electron Devices 63(1), 215–222 (2016).

5. T. Yamaguchi, H. Takehara, Y. Sunaga, M. Haruta, M. Motoyama, Y. Ohta, T. Noda, K. Sasagawa, T. Tokuda, J. Ohta, Implantable self-reset CMOS image sensor and its application to hemodynamic response detection in living mouse brain. Jpn. J. Appl. Phys. 55(4S), 04–02 (2016).

6. A. Bhandari, F. Krahmer, R. Raskar, in 2017 International Conference on Sampling Theory and Applications (SampTA). On unlimited sampling (2017), pp. 31–35.

7. H. Zhao, B. Shi, C. Fernandez-Cull, S. Yeung, R. Raskar, in IEEE International Conference on Computational Photography (ICCP). Unbounded high dynamic range photography using a modulo camera (2015).

8. V. Shah, M. Soltani, C. Hegde, in Proceedings of the Asilomar Conference on Signals, Systems, and Computers. Reconstruction from periodic nonlinearities, with applications to HDR imaging (IEEE, 2017), pp. 863–867.

9. P. Netrapalli, P. Jain, S. Sanghavi, in Proceedings of the Advances in Neural Information Processing Systems (NIPS). Phase retrieval using alternating minimization (2013), pp. 2796–2804.

10. J. Laska, M. Davenport, R. Baraniuk, Exact signal recovery from sparsely corrupted measurements through the pursuit of justice (2009).

11. J. Bioucas-Dias, G. Valadao, Phase unwrapping via graph cuts. IEEE Trans. Image Process. 16(3), 698–709 (2007).

12. F. Lang, T. Plötz, S. Roth, in German Conference on Pattern Recognition. Robust multi-image HDR reconstruction for the modulo camera (2017), pp. 78–89. https://arxiv.org/pdf/1707.01317v1.pdf.

13. M. Soltani, C. Hegde, in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Stable recovery of sparse vectors from random sinusoidal feature maps (2017), pp. 6384–6388.

14. O. Ordentlich, G. Tabak, P. K. Hanumolu, A. C. Singer, G. W. Wornell, A modulo-based architecture for analog-to-digital conversion. IEEE J. Sel. Top. Signal Process. 12(5), 825–840 (2018).

15. S. Rudresh, A. Adiga, B. A. Shenoy, C. S. Seelamantula, in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Wavelet-based reconstruction for unlimited sampling (IEEE, 2018), pp. 4584–4588.

16. M. Cucuringu, H. Tyagi, in International Conference on Artificial Intelligence and Statistics. On denoising modulo 1 samples of a function (2018), pp. 1868–1876.

17. A. Bhandari, F. Krahmer, R. Raskar, in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Unlimited sampling of sparse signals (2018), pp. 4569–4573.

18. O. Musa, P. Jung, N. Goertz, in 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP). Generalized approximate message passing for unlimited sampling of sparse signals (2018), pp. 336–340.

19. D. Needell, J. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Commun. ACM 53(12), 93–100 (2010).

20. S. Chen, D. Donoho, M. Saunders, Atomic decomposition by basis pursuit. SIAM Review 43(1), 129–159 (2001).

21. E. van den Berg, M. P. Friedlander, SPGL1: a solver for large-scale sparse reconstruction (2007). http://www.cs.ubc.ca/labs/scl/spgl1.

22. E. van den Berg, M. P. Friedlander, Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2), 890–912 (2008).

23. R. Baraniuk, V. Cevher, M. Duarte, C. Hegde, Model-based compressive sensing. IEEE Trans. Inf. Theory 56, 1982–2001 (2010).

24. C. Hegde, P. Indyk, L. Schmidt, Fast algorithms for structured sparsity. Bull. EATCS 3(117) (2015).

25. E. J. Candès, et al., in Proceedings of the International Congress of Mathematicians, 3. Compressive sampling (Madrid, Spain, 2006), pp. 1433–1452.

26. E. J. Candès, The restricted isometry property and its implications for compressed sensing. C. R. Acad. Sci. I 346, 589–592 (2008).

27. A. Bora, A. Jalal, E. Price, A. Dimakis, in International Conference on Machine Learning. Compressed sensing using generative models (2017), pp. 537–546.

28. V. Shah, C. Hegde, in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Solving linear inverse problems using GAN priors: an algorithm with provable guarantees (2018), pp. 4609–4613.

29. G. Jagatap, C. Hegde, in 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). High dynamic range imaging using deep image priors (2020), pp. 9289–9293.

30. G. Jagatap, Z. Chen, S. Nayer, C. Hegde, N. Vaswani, Sample efficient Fourier ptychography for structured data. IEEE Trans. Comput. Imaging 6, 344–357 (2020).

## Acknowledgements

This work was conducted when both VS and CH were at the ECE Department at Iowa State University, Ames, IA, USA. The authors thank Praneeth Narayanamurthy, Gauri Jagatap, and Thanh Nguyen, and the anonymous reviewers for helpful comments.

## Funding

This work was supported by grants CCF-1566281, CAREER CCF-1750920/2005804, and CCF-1815101 from the National Science Foundation; a faculty fellowship grant from the Black and Veatch Foundation; and a GPU grant from the NVIDIA Corporation.

## Author information


### Contributions

VS contributed to the theory developments and the experiments; CH contributed to the problem formulation, theory, and overall project guidance. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Chinmay Hegde.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A conference version of this manuscript appeared in IEEE GlobalSIP 2019 [1].
