Deterministic construction of Fourier-based compressed sensing matrices using an almost difference set
EURASIP Journal on Advances in Signal Processing volume 2013, Article number: 155 (2013)
Abstract
In this paper, a new class of Fourier-based matrices is studied for deterministic compressed sensing. Initially, a basic partial Fourier matrix is introduced by choosing the rows deterministically from the inverse discrete Fourier transform (DFT) matrix. By row/column rearrangement, the matrix is represented as a concatenation of DFT-based submatrices. Then, all or a subset of the columns of the concatenated matrix are selected to build a new M × N deterministic compressed sensing matrix, where M = p^{r} and N = L(M + 1) for prime p and positive integers r and L ≤ M − 1. Theoretically, the sensing matrix forms a tight frame with small coherence. Moreover, the matrix theoretically guarantees unique recovery of sparse signals with uniformly distributed supports. Thanks to the structure of the sensing matrix, the fast Fourier transform (FFT) technique can be applied for efficient signal measurement and reconstruction. Experimental results demonstrate that the new deterministic sensing matrix shows empirically reliable recovery performance of sparse signals under the CoSaMP algorithm.
1 Introduction
Compressed sensing (or compressive sampling) is a novel and emerging technology with a variety of applications in imaging, data compression, and communications. In compressed sensing, one can recover sparse signals of high dimension from incomplete measurements. Mathematically, measuring an N-dimensional signal $\mathbf{\text{x}}\in {\mathbb{R}}^{N}$ with an M × N measurement matrix Φ yields an M-dimensional vector y = Φx, where M < N. Under the requirement that x is s-sparse, i.e., the number of nonzero entries in x is at most s, one can recover x exactly with high probability by an l_{1}-minimization method or a greedy algorithm, both of which are computationally tractable.
Many research activities on the theory and practice of compressed sensing have been triggered since Donoho, Candes, Romberg, and Tao published their seminal theoretical works [1]–[3]. These efforts revealed that the measurement matrix Φ plays a crucial role in the recovery of s-sparse signals. Although a random matrix provides many theoretical benefits [4], it has the drawbacks [5] of high complexity and large storage in practical implementation. As an alternative, one may consider a deterministic matrix, where well-known codes and sequences have been employed for the construction, e.g., chirp sequences [6], Alltop sequences [7, 8], Kerdock and Delsarte–Goethals codes [9], second-order Reed–Muller codes [10], and BCH codes [11]. Other techniques for deterministic construction, based on finite fields, representation theory, characters and algebraic curves, and multicoset codes, can also be found in [12]–[18]. These deterministic matrices guarantee empirically reliable recovery performance while allowing fast processing and low complexity.
In this paper, we study the deterministic construction of a new class of Fourier-based compressed sensing matrices. Initially, a p^{r} × (p^{2r} − 1) basic partial Fourier matrix, equivalent to the partial Fourier codebook of [19], is introduced by selecting p^{r} rows from the (p^{2r} − 1)-point inverse discrete Fourier transform (DFT) matrix according to an almost difference set, where p is a prime number and r is a positive integer. By rearranging the rows and/or columns, we show that the matrix can be represented as a concatenation of DFT-based submatrices. Then, all or a subset of the columns of the concatenated matrix are selected to build a new M × N sensing matrix for deterministic compressed sensing, where M = p^{r} and N = L(M + 1) for L ≤ M − 1. The concatenated structure allows the new sensing matrix to offer various admissible column numbers while remaining an incoherent tight frame, and enables efficient processing for measurement and reconstruction in compressed sensing. We stress that it is not a trivial task to obtain the concatenated structure from the basic partial Fourier matrix by row/column rearrangement. With the parameters M and N, our new deterministic matrix can achieve various permissible compression ratios of $\frac{M}{N}\approx \frac{1}{L}$ for a positive integer L, 2 ≤ L ≤ M − 1.
Theoretically, the new sensing matrix forms a tight frame with small coherence. Moreover, it guarantees unique recovery of sparse signals with uniformly distributed supports with high probability. Thanks to its structure, the fast Fourier transform (FFT) technique can be applied for efficient signal measurement and reconstruction. Experimental results demonstrate that the new deterministic compressed sensing matrix, together with the CoSaMP recovery algorithm [20], empirically guarantees sparse signal recovery with high reliability. We observe that the empirical recovery performance of our new sensing matrices is similar to that of chirp sensing [6] and random partial Fourier matrices. However, our new matrices offer several practical benefits, requiring less storage and complexity than random partial Fourier matrices and providing more choices of the parameters M and N than chirp sensing codes.
The rest of this paper is organized as follows. In Section 2, we introduce basic concepts and notations to understand this work. Section 3 modifies the structure of a basic partial Fourier matrix and presents a new sensing matrix for deterministic construction. We also discuss the efficient implementation and the theoretical recovery guarantee of the new sensing matrix. Section 4 describes the signal measurement process and the CoSaMP recovery algorithm by employing the FFT technique. In Section 5, we demonstrate the empirical recovery performance of our new sensing matrices in noiseless and noisy settings. Finally, concluding remarks will be given in Section 6.
2 Preliminaries
This section introduces fundamental concepts and notations for understanding this work. In subsections 2.1 and 2.2, we briefly introduce the concepts of finite fields, trace functions, and cyclotomic cosets for signal processing researchers. For more details, see [21] and [22].
2.1 Finite fields and trace functions
Let p be prime and m > 1 a positive integer. The finite field ${\mathbb{F}}_{{p}^{m}}$ consists of 0 and the powers α^{i}, i = 0, 1, …, p^{m} − 2, i.e., ${\mathbb{F}}_{{p}^{m}}=\{0,1,\alpha ,{\alpha}^{2},\dots ,{\alpha}^{{p}^{m}-2}\}$, where α is called a primitive element and ${\alpha}^{{p}^{m}-1}=1$. The primitive element α is a root of a primitive polynomial f(x), i.e., f(α) = 0, where f(x) has degree m and its coefficients are elements of ${\mathbb{F}}_{p}=\{0,1,2,\dots ,p-1\}$.
Let k be a positive integer that divides m. A trace function is a linear mapping from ${\mathbb{F}}_{{p}^{m}}$ onto ${\mathbb{F}}_{{p}^{k}}$ defined by

$${\text{Tr}}_{k}^{m}(x)=\sum _{i=0}^{m/k-1}{x}^{{p}^{ik}},$$
where the addition is computed modulo p. The trace function algebraically defines the well-known m-sequences or pseudonoise (PN) sequences, which have been widely used in wireless communications. For instance, if p = 2 and k = 1, then $({\text{Tr}}_{1}^{m}(1),{\text{Tr}}_{1}^{m}(\alpha ),{\text{Tr}}_{1}^{m}({\alpha}^{2}),\dots ,{\text{Tr}}_{1}^{m}({\alpha}^{{2}^{m}-2}))$ is a binary m-sequence of length 2^{m} − 1, where each entry is 0 or 1. The m-sequence, defined by a trace function, can be efficiently generated by a linear feedback shift register (LFSR), which is a common method in communication standards.
Example 1. Let p = 2 and m = 4. Then, the finite field ${\mathbb{F}}_{{2}^{4}}$ is defined by the primitive polynomial f(x) = x^{4} + x + 1, whose root α is a primitive element of ${\mathbb{F}}_{{2}^{4}}$. Thus, ${\mathbb{F}}_{{2}^{4}}=\{0,1,\alpha ,{\alpha}^{2},\dots ,{\alpha}^{14}\}$, where α^{4} + α + 1 = 0 and α^{15} = 1. The trace function ${\text{Tr}}_{1}^{4}(x)$ takes on either 0 or 1, since it is a linear mapping from ${\mathbb{F}}_{{2}^{4}}$ onto ${\mathbb{F}}_{2}$. For example, using α^{4} = α + 1 and α^{8} = α^{2} + 1,

$${\text{Tr}}_{1}^{4}(\alpha )=\alpha +{\alpha}^{2}+{\alpha}^{4}+{\alpha}^{8}=\alpha +{\alpha}^{2}+(\alpha +1)+({\alpha}^{2}+1)=0,$$
where the addition is computed modulo p = 2.
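The trace computation of Example 1 is easy to reproduce in a few lines. The sketch below is ours, not the authors' code: it represents elements of ${\mathbb{F}}_{{2}^{4}}$ as 4-bit polynomials over GF(2) under the primitive polynomial f(x) = x^{4} + x + 1, so field addition is XOR, and it generates the binary m-sequence $({\text{Tr}}_{1}^{4}(1),{\text{Tr}}_{1}^{4}(\alpha ),\dots ,{\text{Tr}}_{1}^{4}({\alpha}^{14}))$ of length 15.

```python
# Minimal model of F_{2^4} from Example 1: elements are 4-bit polynomials
# over GF(2), reduced modulo the primitive polynomial f(x) = x^4 + x + 1.
MOD = 0b10011  # x^4 + x + 1

def gf16_mul(a, b):
    """Shift-and-add multiplication in F_{2^4}, reducing modulo f(x)."""
    p = 0
    while b:
        if b & 1:
            p ^= a          # field addition is XOR
        a <<= 1
        if a & 0b10000:     # degree reached 4: reduce by f(x)
            a ^= MOD
        b >>= 1
    return p

def trace(x):
    """Tr_1^4(x) = x + x^2 + x^4 + x^8; the value always lies in {0, 1}."""
    t, xi = 0, x
    for _ in range(4):
        t ^= xi
        xi = gf16_mul(xi, xi)  # Frobenius map: next conjugate by squaring
    return t

# Binary m-sequence (Tr(1), Tr(alpha), ..., Tr(alpha^14)) of length 2^4 - 1
alpha, elem, mseq = 0b0010, 0b0001, []
for _ in range(15):
    mseq.append(trace(elem))
    elem = gf16_mul(elem, alpha)
```

The balance property of m-sequences can be checked directly on `mseq`: it contains exactly 2^{m−1} = 8 ones among its 15 entries.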
2.2 Cyclotomic cosets
Let ${\mathbb{Z}}_{v}=\{0,1,\dots ,v-1\}$, where v is a positive integer. Also, let p be a prime that is relatively prime to v, i.e., gcd(v, p) = 1. For a nonnegative integer $s\in {\mathbb{Z}}_{v}$, the cyclotomic coset modulo v over p containing s is defined as

$${C}_{s}=\{s,\mathit{sp},{\mathit{sp}}^{2},\dots ,{\mathit{sp}}^{{n}_{s}-1}\}\phantom{\rule{1em}{0ex}}(\text{mod}\phantom{\rule{.3em}{0ex}}v),$$
where n_{s} is the smallest positive integer such that $s{p}^{{n}_{s}}\equiv s\phantom{\rule{.3em}{0ex}}(\text{mod}\phantom{\rule{.3em}{0ex}}v)$. It is conventional to define the coset leader of C_{s} as the smallest integer in C_{s}. Then, ${\mathbb{Z}}_{v}$ is partitioned into cyclotomic cosets, i.e., ${\mathbb{Z}}_{v}={\bigcup}_{s\in {\mathrm{\Gamma}}_{p}(v)}{C}_{s}$, where Γ_{p}(v) denotes the set of coset leaders in ${\mathbb{Z}}_{v}$. By definition, once an element is given, the cyclotomic coset containing it is easily generated by successively multiplying the element by p, which will be useful in generating the row index set for our new sensing matrix.
Example 2
Let p = 2 and v = 15. Then, the cyclotomic cosets modulo 15 over p = 2 are

$${C}_{0}=\{0\},\phantom{\rule{.5em}{0ex}}{C}_{1}=\{1,2,4,8\},\phantom{\rule{.5em}{0ex}}{C}_{3}=\{3,6,12,9\},\phantom{\rule{.5em}{0ex}}{C}_{5}=\{5,10\},\phantom{\rule{.5em}{0ex}}{C}_{7}=\{7,14,13,11\},$$
where the coset leaders are Γ_{2}(15) = {0,1,3,5,7}. With the cyclotomic cosets,${\mathbb{Z}}_{15}=\{0,1,\dots ,14\}=\bigcup _{s\in {\mathrm{\Gamma}}_{2}(15)}{C}_{s}$.
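The successive-multiplication rule described above takes only a few lines of code. The following sketch (ours, with hypothetical helper names) reproduces Example 2:

```python
def cyclotomic_coset(s, v, p):
    """C_s = {s, s*p, s*p^2, ...} modulo v, generated by successive
    multiplication by p until the sequence returns to s."""
    coset, x = [], s
    while True:
        coset.append(x)
        x = (x * p) % v
        if x == s:
            return coset

def all_cosets(v, p):
    """Partition Z_v into cyclotomic cosets; returns {leader: coset}."""
    seen, cosets = set(), {}
    for s in range(v):
        if s not in seen:
            c = cyclotomic_coset(s, v, p)
            cosets[min(c)] = c
            seen.update(c)
    return cosets

cosets = all_cosets(15, 2)   # Example 2: v = 15, p = 2
```

Here `sorted(cosets)` recovers the coset leaders Γ_{2}(15) = {0, 1, 3, 5, 7} of Example 2.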
2.3 Basic partial Fourier matrix$\stackrel{\mathbf{~}}{\mathbf{A}}$
In this subsection, we introduce the basic framework from which a new sensing matrix is developed in Section 3. Throughout this paper, we set M = p^{r} and N′ = p^{2r} − 1 = M^{2} − 1 for prime p and a positive integer r. Also, we assume that each column of a sensing matrix has unit l_{2}-norm, where the l_{2}-norm is denoted as $\parallel \mathbf{\text{x}}\parallel =\sqrt{{\sum}_{i=0}^{n-1}|{x}_{i}{|}^{2}}$ for an n-dimensional vector x = (x_{0}, x_{1}, …, x_{n−1}). Table 1 summarizes all the variables and notations used in the development of the new sensing matrix.
In [19], Yu, Feng, and Zhang presented a new class of (N′, M) near-optimal partial Fourier codebooks using an almost difference set [23]. The codebook can be equivalently translated into an M × N′ partial Fourier matrix, which contains M rows selected from the N′-point inverse DFT (IDFT) matrix according to the almost difference set.
Based on the results of [19], Proposition 1 describes the basic partial Fourier matrix and its geometric properties in the notation of this paper.
Proposition 1. For prime p and a positive integer r, let M = p^{r} and N′ = p^{2r} − 1 = M^{2} − 1. Let D = {d_{0}, d_{1}, …, d_{M−1}} be an index set defined in Lemma 2 of [19], which will be given below in Remark 1. Choosing M rows from the N′-point IDFT matrix according to D, we construct an M × N′ matrix $\stackrel{~}{\mathbf{A}}$, where each entry is given by

$${\stackrel{~}{A}}_{k,n}=\frac{1}{\sqrt{M}}exp\left(j\frac{2\pi {d}_{k}n}{{N}^{\prime}}\right)$$
for 0 ≤ k ≤ M − 1 and 0 ≤ n ≤ N′ − 1. Then, the coherence [24] of $\stackrel{~}{\mathbf{A}}$ is given by

$$\mu (\stackrel{~}{\mathbf{A}})=\underset{{n}_{1}\ne {n}_{2}}{max}\phantom{\rule{.3em}{0ex}}|{\stackrel{~}{\mathbf{a}}}_{{n}_{1}}^{H}{\stackrel{~}{\mathbf{a}}}_{{n}_{2}}|=\frac{1}{\sqrt{M}},$$
where ${\stackrel{~}{\mathbf{a}}}_{{n}_{1}}$ is a column vector of $\stackrel{~}{\mathbf{A}}$ and ${\stackrel{~}{\mathbf{a}}}_{{n}_{1}}^{H}$ denotes its conjugate transpose. The coherence nearly achieves the Welch bound equality [25] of $\sqrt{\frac{{N}^{\prime}-M}{M({N}^{\prime}-1)}}\approx \frac{1}{\sqrt{M+1}}$ for sufficiently large M. Moreover, $\stackrel{~}{\mathbf{A}}$ forms a tight frame [26], as its rows are mutually orthogonal.
The coherence and the tightness of $\stackrel{~}{\mathbf{A}}$ do not change if we select the rows from the DFT matrix instead of the IDFT matrix. In this paper, we use the IDFT matrix.
Remark 1. In the notation of this paper, the row index set D is defined by [19]
where all the operations in D are computed modulo N′. In Equation 1, D is an almost difference set, and e_{v} is a nonnegative integer satisfying ${\alpha}^{({p}^{r}+1){e}_{v}}={\text{Tr}}_{r}^{2r}({\alpha}^{v})$ for v ∈ I [19], where α is a primitive element of ${\mathbb{F}}_{{p}^{2r}}$. To determine the index set D, one needs to compute e_{v} using a trace function, which might be difficult for signal processing researchers. In Section 3, we present an alternative method that generates the indices of D by successive multiplication of predetermined values and does not require the computation of e_{v}. Therefore, it suffices to treat e_{v} simply as an integer in this paper.
3 Construction of new Fourierbased sensing matrices
To build a new M × N sensing matrix, we begin with the M × N′ basic partial Fourier matrix $\stackrel{~}{\mathbf{A}}$ and then choose N columns after a row/column rearrangement. Our approach differs from the conventional one of random or deterministic selection of M rows out of an N × N Fourier matrix, but we will show that it ultimately delivers reliable recovery performance and practical benefits in implementation.
In deterministic compressed sensing, it is desirable that a sensing matrix support a variety of admissible column numbers so as to sense signals of various lengths. For this purpose, one needs to consider how to select the columns from $\stackrel{~}{\mathbf{A}}$ for a new M × N sensing matrix with N < N′. In this section, we apply a row/column permutation to the partial Fourier matrix $\stackrel{~}{\mathbf{A}}$ to obtain a variant M × N′ matrix A′. Then, we choose all or a subset of the columns of A′ to construct a new M × N sensing matrix A, where N = (M + 1)L for a positive integer L, 2 ≤ L ≤ M − 1. The row/column rearrangement offers the following benefits for the new sensing matrix A in compressed sensing, which motivates our approach:

1. If one selects the columns arbitrarily from $\stackrel{~}{\mathbf{A}}$, the resulting sensing matrix may not be a tight frame in general. In fact, one needs to select the columns of $\stackrel{~}{\mathbf{A}}$ carefully to preserve the tightness of the resulting matrix. Through the row/column rearrangement, we will show that the new sensing matrix A has a concatenated structure of (M + 1)-point DFT-based submatrices. With this structure, A can still be a tight frame by choosing N as a multiple of M + 1, as shown in Lemma 2.

2. The concatenated structure of A also allows efficient (M + 1)-point FFT processing for measurement and recovery of sparse signals in compressed sensing. Note that if one selects the columns arbitrarily from the original $\stackrel{~}{\mathbf{A}}$, the resulting matrix generally requires N′-point FFT processing, which is computationally more expensive. Moreover, one may enjoy fast processing via parallel FFT computations using the concatenated structure, as discussed in Section 4.
3.1 Structure
Recall the partial Fourier matrix$\stackrel{~}{\mathbf{A}}$ in Proposition 1. If p = 2, we use the original index set D in Equation 1, i.e.,
On the other hand, if p > 2, we redefine the index set D by adding$\frac{M+1}{2}$ to each original index in Equation 1, i.e.,
The above modification for p > 2 ensures that each entry of D is nonzero when computed modulo M + 1, which also holds for p = 2. See the proof of Lemma 1 for the implication.
Now, we describe a column rearrangement of the original $\stackrel{~}{\mathbf{A}}$. For a given l, 0 ≤ l ≤ M − 2, let us take the M + 1 column vectors with indices n = (M − 1)t + l from $\stackrel{~}{\mathbf{A}}$, where 0 ≤ t ≤ M. With these column vectors, we obtain an M × (M + 1) submatrix ${\mathit{\sigma}}^{(l)}=\{{\sigma}_{k,t}^{(l)}\mid 0\le k\le M-1,0\le t\le M\}$, where each entry is given by

$${\sigma}_{k,t}^{(l)}={\stackrel{~}{A}}_{k,(M-1)t+l}=\frac{1}{\sqrt{M}}{\gamma}_{k}^{(l)}exp\left(j\frac{2\pi {d}_{k}t}{M+1}\right).$$(4)
In Equation 4, ${\gamma}_{k}^{(l)}$ is defined as

$${\gamma}_{k}^{(l)}=exp\left(j\frac{2\pi {d}_{k}l}{{M}^{2}-1}\right)=exp\left(j\frac{\pi {d}_{k}l}{M-1}\right)exp\left(-j\frac{\pi {d}_{k}l}{M+1}\right)$$

for each k, 0 ≤ k ≤ M − 1.
Next, we show that the submatrix σ^{(l)} has a DFT-based structure if the row indices in D are arranged in an appropriate order. In Lemma 1, we denote by ${\mathbf{\text{F}}}_{M+1}^{\prime}$ the M × (M + 1) DFT matrix without the first row, where each entry is ${F}_{k,t}^{\prime}=exp\left(-j\frac{2\pi (k+1)t}{M+1}\right)$ for 0 ≤ k ≤ M − 1 and 0 ≤ t ≤ M.
Lemma 1. In the index set of Equation 2 for p = 2 or Equation 3 for p > 2, the entries of D = {d_{0}, d_{1}, …, d_{M−1}} can be arranged such that d_{k} ≡ −(k + 1) (mod M + 1) for 0 ≤ k ≤ M − 1. With such a D, let us define ${\mathbf{\Gamma}}^{(l)}=\{{\mathrm{\Gamma}}_{k,t}^{(l)}\mid 0\le k,t\le M-1\}$ as the M × M diagonal matrix whose entries are

$${\mathrm{\Gamma}}_{k,t}^{(l)}=\left\{\begin{array}{cc}{\gamma}_{k}^{(l)},& k=t,\\ 0,& k\ne t,\end{array}\right.$$
for each l, 0 ≤ l ≤ L − 1. Then, each submatrix σ^{(l)} can be expressed as

$${\mathit{\sigma}}^{(l)}=\frac{1}{\sqrt{M}}{\mathbf{\Gamma}}^{(l)}{\mathbf{\text{F}}}_{M+1}^{\prime},$$(5)

which clearly shows the DFT-based structure of σ^{(l)}.
Proof. We investigate how the factor $exp\left(j\frac{2\pi {d}_{k}t}{M+1}\right)$ in Equation 4 behaves for p = 2 and p > 2, respectively.
Case p = 2: In this case, each element of D in Equation 2 satisfies

$${d}_{k}\equiv -(k+1)\phantom{\rule{1em}{0ex}}(\text{mod}\phantom{\rule{.3em}{0ex}}M+1)$$(6)

for 0 ≤ k ≤ M − 1. Thus, $exp\left(j\frac{2\pi {d}_{k}t}{M+1}\right)=exp\left(-j\frac{2\pi (k+1)t}{M+1}\right)$ in Equation 4. Consequently, the entries of Equation 4 form an M × (M + 1) submatrix σ^{(l)} in which each row comes from the (M + 1)-point DFT matrix excluding the all-one row, masked by ${\gamma}_{k}^{(l)}$. The structure of Equation 5 is then straightforward. □
Case p > 2: In this case, each index of D in Equation 3 is the corresponding index of Equation 1 increased by $\frac{M+1}{2}$, where $\frac{M+1}{2}\equiv -\frac{M+1}{2}\phantom{\rule{1em}{0ex}}(\text{mod}\phantom{\rule{.3em}{0ex}}M+1)$. Reordering the indices, we get

$${d}_{k}\equiv -(k+1)\phantom{\rule{1em}{0ex}}(\text{mod}\phantom{\rule{.3em}{0ex}}M+1)$$(7)

for 0 ≤ k ≤ M − 1. Then, Equation 7 yields $exp\left(j\frac{2\pi {d}_{k}t}{M+1}\right)=exp\left(-j\frac{2\pi (k+1)t}{M+1}\right)$ in Equation 4. It is now clear why we introduced the modified index set D of Equation 3 for p > 2: by ensuring d_{k} ≢ 0 (mod M + 1) for every k, the modification guarantees the same DFT-based submatrix structure as for p = 2. Finally, the entries of Equation 4 again form an M × (M + 1) submatrix σ^{(l)} in which each row comes from ${\mathbf{\text{F}}}_{M+1}^{\prime}$, masked by ${\gamma}_{k}^{(l)}$, which yields Equation 5. □
Remark 2. In both cases of p, one needs to ensure that the entries of the index set D = {d_{0}, d_{1}, …, d_{M−1}} satisfy d_{k} ≡ −(k + 1) ≡ M − k (mod M + 1) to achieve the DFT-based submatrix structure in Lemma 1. If p = 2, the original entries of Equation 2 meet the condition by Equation 6. On the other hand, if p > 2, Equation 7 shows that we have to rearrange the entries of Equation 3 by circularly shifting the order by $\frac{M+1}{2}$. If the entries of D are generated by a different method, which will be introduced in Procedure 1, the index set D should be sorted, for either p, such that the entries are in decreasing order when computed modulo M + 1, i.e., D (mod M + 1) ≡ {M, M − 1, …, 1}, to satisfy the condition.
Finally, letting l run through {0, 1, …, M − 2}, we obtain the M − 1 submatrices σ^{(l)} and construct a variant A′ = [σ^{(0)}∣σ^{(1)}∣⋯∣σ^{(M−2)}] by concatenating them. Clearly, the M × N′ matrix A′ is equivalent to the original matrix $\stackrel{~}{\mathbf{A}}$ under the row/column rearrangement. In what follows, Construction 1 presents a formal expression of the new sensing matrix A.
Construction 1. Let M = p^{r} for prime p and a positive integer r. Let D = {d_{0}, d_{1}, …, d_{M−1}} be the row index set satisfying d_{k} ≡ −(k + 1) (mod M + 1) for 0 ≤ k ≤ M − 1. Let L be a positive integer and N = (M + 1)L, where 2 ≤ L ≤ M − 1. For a given integer l, 0 ≤ l ≤ L − 1, define an M × (M + 1) submatrix ${\mathit{\sigma}}^{(l)}=\{{\sigma}_{k,t}^{(l)}\mid 0\le k\le M-1,\phantom{\rule{1em}{0ex}}0\le t\le M\}$ where

$${\sigma}_{k,t}^{(l)}=\frac{1}{\sqrt{M}}{\gamma}_{k}^{(l)}exp\left(-j\frac{2\pi (k+1)t}{M+1}\right)$$
and ${\gamma}_{k}^{(l)}=exp\left(j\frac{\pi {d}_{k}l}{M-1}\right)\times exp\left(-j\frac{\pi {d}_{k}l}{M+1}\right)$. An M × N sensing matrix A is a concatenation of the L submatrices, i.e., A = [σ^{(0)}∣σ^{(1)}∣⋯∣σ^{(L−1)}]. In particular, if L = M − 1, then A = A′ = [σ^{(0)}∣σ^{(1)}∣⋯∣σ^{(M−2)}].
Figure 1 illustrates the structure of our new sensing matrix A in Construction 1.
3.2 Implementation
In Construction 1, generating the row index set D efficiently is a key issue in implementing the deterministic sensing matrix A. In D, since ${\alpha}^{({p}^{r}+1){e}_{v}}={\text{Tr}}_{r}^{2r}({\alpha}^{v})$ is an element of a p^{r}-ary m-sequence of period p^{2r} − 1 [21], it can be computed by a 2r-stage LFSR. Therefore, each element of D in Equation 1 can be generated by an LFSR, a log operation, and other basic arithmetic over finite fields.
As computation over finite fields is not trivial, we introduce an alternative method to generate the indices of D more easily. The method uses cyclotomic cosets modulo p^{r} + 1 and p^{2r} − 1 over p, respectively, which are always valid for any prime p since gcd(p, p^{r} + 1) = gcd(p, p^{2r} − 1) = 1. In what follows, we describe the procedure, whose proof is given in the Appendix.
Procedure 1

1. Generate all cyclotomic cosets modulo p^{r} + 1 over p. When p = 2, identify the set of nonzero coset leaders Γ_{2}(2^{r} + 1)∖{0} = {u_{1}, …, u_{δ}}. If p > 2, on the other hand, identify the set of coset leaders without $\frac{{p}^{r}+1}{2}$, i.e., ${\mathrm{\Gamma}}_{p}({p}^{r}+1)\setminus \{\frac{{p}^{r}+1}{2}\}=\{{u}_{1},\dots ,{u}_{\delta}\}$. Note that u_{1} = 0 if p > 2.

2. For each u_{i}, 1 ≤ i ≤ δ, compute a positive integer ${z}_{i}\in {\mathbb{Z}}_{{p}^{2r}-1}$ such that

$${\alpha}^{{z}_{i}}=1+{\alpha}^{({p}^{r}-1){u}_{i}},$$(8)

where α is a primitive element of ${\mathbb{F}}_{{p}^{2r}}$.

3. For each z_{i}, 1 ≤ i ≤ δ, generate the cyclotomic coset modulo p^{2r} − 1 over p containing z_{i} by ${C}_{{s}_{i}}=\{{z}_{i},{z}_{i}p,\dots ,{z}_{i}{p}^{{n}_{{s}_{i}}-1}\}$, where ${n}_{{s}_{i}}$ is the smallest positive integer such that ${z}_{i}\equiv {z}_{i}{p}^{{n}_{{s}_{i}}}\phantom{\rule{1em}{0ex}}(\text{mod}\phantom{\rule{1em}{0ex}}{p}^{2r}-1)$. Note that the coset leader s_{i} is not necessarily equal to z_{i}.

4. If p = 2,

$$D=\bigcup _{1\le i\le \delta}{C}_{{s}_{i}},$$

and if p > 2,

$$D=\bigcup _{1\le i\le \delta}{C}_{{s}_{i}}+\frac{M+1}{2},$$

where the addition is performed on each element of ${\bigcup}_{1\le i\le \delta}{C}_{{s}_{i}}$ and computed modulo p^{2r} − 1. By Remark 2, the index set D should then be sorted such that the entries are in decreasing order when computed modulo M + 1, i.e., D (mod M + 1) ≡ {M, M − 1, …, 1}.
Example 3. Let p = 2 and r = 3. Also, let α be a primitive element in${\mathbb{F}}_{{2}^{6}}$ satisfying α ^{6} + α + 1 = 0. Then, Procedure 1 generates M = p ^{r} = 8 indices of D for our new sensing matrix:

1. From all cyclotomic cosets modulo 9 over p = 2, we identify the nonzero coset leaders Γ_{2}(9)∖{0} = {1, 3}, where the corresponding cosets are ${C}_{1}^{\prime}=\{1,2,4,8,7,5\}$ and ${C}_{3}^{\prime}=\{3,6\}$, respectively.

2. From u_{1} = 1, Equation 8 yields ${\alpha}^{{z}_{1}}=1+{\alpha}^{7}={\alpha}^{26}$, so z_{1} = 26. Also, from u_{2} = 3, we have ${\alpha}^{{z}_{2}}=1+{\alpha}^{21}={\alpha}^{42}$ and z_{2} = 42.

3. By successively multiplying z_{1} = 26 by 2, we obtain its cyclotomic coset C_{13} = {26, 52, 41, 19, 38, 13}, where the coset leader is s_{1} = 13. Note that the multiplication is computed modulo p^{2r} − 1 = 63. Similarly, we have C_{21} = {42, 21} from z_{2} = 42, where s_{2} = 21.

4. Finally, the index set D is given by

$$D=\{{d}_{0},{d}_{1},\dots ,{d}_{7}\}={C}_{13}\cup {C}_{21}=\{26,52,42,41,13,21,38,19\},$$

where we have sorted the indices such that they are in decreasing order when computed modulo 9, i.e., D (mod 9) ≡ {8, 7, 6, 5, 4, 3, 2, 1}.
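Procedure 1 and Example 3 can be cross-checked with a short program. The sketch below is ours, not the authors' implementation: under the same primitive polynomial x^{6} + x + 1, it builds a power table of α in ${\mathbb{F}}_{{2}^{6}}$, solves Equation 8 by table lookup, and expands and sorts the cosets.

```python
# GF(2^6) power/log tables from the primitive polynomial x^6 + x + 1,
# then Procedure 1 for p = 2, r = 3 (M = 8), reproducing Example 3.
MOD, DEG = 0b1000011, 6          # x^6 + x + 1
q = (1 << DEG) - 1               # p^{2r} - 1 = 63

power = [1] * q                  # power[i] = alpha^i as a bit vector
for i in range(1, q):
    a = power[i - 1] << 1        # multiply the previous power by alpha
    if a >> DEG:
        a ^= MOD                 # reduce when the degree reaches 6
    power[i] = a
log = {a: i for i, a in enumerate(power)}

def coset(z, v, p=2):
    """Cyclotomic coset modulo v over p containing z."""
    c, x = [], z
    while True:
        c.append(x)
        x = (x * p) % v
        if x == z:
            return c

# Step 1: nonzero coset leaders modulo 2^r + 1 = 9 are {1, 3}.
# Step 2: solve alpha^{z_i} = 1 + alpha^{(2^r - 1) u_i} via the log table
#         (field addition is XOR of bit vectors).
# Steps 3-4: expand each z_i into its coset modulo 63 and merge.
D = set()
for u in (1, 3):
    z = log[1 ^ power[(7 * u) % q]]   # 1 + alpha^{7u} in GF(2^6)
    D.update(coset(z, q))

# Remark 2: sort so that D (mod M+1) runs through {M, M-1, ..., 1}.
D = sorted(D, key=lambda d: -(d % 9))   # [26, 52, 42, 41, 13, 21, 38, 19]
```

The final list matches the index set D of Example 3, including its ordering.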
Example 4. With the index set D of Example 3, we can construct an 8 × 18 matrix A = [σ^{(0)}∣σ^{(1)}], where M = 8, N = 18, and L = 2. Denote $\omega =exp\left(-j\frac{2\pi}{9}\right)$. Then,
where Γ ^{(0)} is the 8 × 8 identity matrix. Also,
where
We can further concatenate σ^{(2)}, σ^{(3)}, …, σ^{(L−1)} for L ≤ 7 so that A can take various column numbers N = 9L.
In practice, we can precompute z_{1}, z_{2}, …, z_{δ} in steps 1 and 2 of Procedure 1 and save them in memory to avoid the algebraic computation of Equation 8 in hardware. Then, the M = p^{r} indices can be generated by steps 3 and 4 of Procedure 1. In Example 3, for instance, z_{1} = 26 and z_{2} = 42 can be precomputed; only these two elements need to be stored in memory to generate the eight row indices. Table 2 presents δ, the number of z_{i}'s to be stored in memory, for various M = p^{r}. While a random partial Fourier matrix needs to store M indices, storage for δ (≪ M) elements is sufficient for our new sensing matrix. In conclusion, constructing our new sensing matrix requires storage for δ elements and an additional circuit for successive multiplication, addition, and sorting, which presents a practical benefit over random partial Fourier matrices.
3.3 Theoretical recovery performance
In this subsection, we discuss the geometric properties and the theoretical recovery guarantee of our new sensing matrix A.
Lemma 2. The M × N sensing matrix A in Construction 1 has the following properties:

1. Its coherence is upper bounded by $1/\sqrt{M}$.

2. A forms a tight frame.

3. All of its row sums are equal to zero.
Proof. Recall that A′ is obtained by row/column rearrangement of $\stackrel{~}{\mathbf{A}}$. Since the coherence of a matrix does not change under row/column permutation, the coherence of A′ is also $1/\sqrt{M}$ by Proposition 1. Note that when p > 2, we have added the constant $\frac{M+1}{2}$ to each entry of the original D in Equation 1, which does not change the coherence [19] either. As A consists of selected columns of A′, the coherence of A is at most $1/\sqrt{M}$, so item 1 holds. Moreover, ${\mathit{\sigma}}^{(l)}{\mathit{\sigma}}^{(l)H}=\frac{(M+1)}{M}{\mathbf{\text{I}}}_{M}$ from Equation 5, where I_{M} is the M × M identity matrix. Then, we have $\mathbf{\text{A}}{\mathbf{\text{A}}}^{H}=\frac{(M+1)L}{M}{\mathbf{\text{I}}}_{M}=\frac{N}{M}{\mathbf{\text{I}}}_{M}$ by concatenating the L submatrices, so item 2 holds. Finally, Equation 5 ensures that no submatrix σ^{(l)} contains the all-one row masked by a constant factor, so all the row sums of each submatrix are equal to zero due to the DFT-based structure. Consequently, item 3 follows from the concatenation. □
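The three properties of Lemma 2 can be verified numerically for the small matrix of Example 4 (M = 8, L = 2). The following is our check, not the authors' code; it builds the columns of A directly from the basic partial Fourier entries, using the index set D of Example 3:

```python
import cmath, math

M, L = 8, 2
b, N, Np = M + 1, (M + 1) * 2, M * M - 1          # b = 9, N = 18, N' = 63
D = [26, 52, 42, 41, 13, 21, 38, 19]              # index set of Example 3

def entry(k, t, l):
    """Entry (k, t) of sigma^{(l)}: column n = (M-1)t + l of A-tilde."""
    n = (M - 1) * t + l
    return cmath.exp(2j * math.pi * D[k] * n / Np) / math.sqrt(M)

# Columns of A = [sigma^{(0)} | sigma^{(1)}]
cols = [[entry(k, t, l) for k in range(M)] for l in range(L) for t in range(b)]

def inner(u, v):
    return sum(a.conjugate() * c for a, c in zip(u, v))

# 1. Coherence is bounded by 1/sqrt(M)
mu = max(abs(inner(cols[i], cols[j]))
         for i in range(N) for j in range(i + 1, N))

# 2. Tight frame: A A^H = (N/M) I_M
gram = [[sum(cols[n][k1] * cols[n][k2].conjugate() for n in range(N))
         for k2 in range(M)] for k1 in range(M)]

# 3. Zero row sums
row_sums = [abs(sum(cols[n][k] for n in range(N))) for k in range(M)]
```

For this example, the maximum column correlation `mu` meets the bound $1/\sqrt{8}\approx 0.354$, the Gram matrix equals (18/8)I, and every row sum vanishes, as Lemma 2 predicts.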
The geometric properties of Lemma 2 meet the sufficient conditions for the new matrix A to achieve the uniqueness-guaranteed statistical restricted isometry property (UStRIP) [5]. See [27] for the proof of the UStRIP of A.
With a deterministic sensing matrix of coherence μ, one can successfully recover every s-sparse signal from its measurement as long as s = O(μ^{−1}) [24], which guarantees unique recovery of sparse signals with sparsity up to $O(\sqrt{M})$ by our new sensing matrix A. In an attempt to overcome this theoretical bottleneck, the authors of [28] discussed the average performance of compressed sensing under a generic s-sparse model, where the positions of the nonzero entries of an s-sparse signal are distributed uniformly at random and their signs are independent and equally likely to be −1 or +1. In what follows, we show that the average recovery performance under the generic s-sparse model is theoretically guaranteed by the sensing matrix A.
Theorem 1. Consider the M × N sensing matrix A in Construction 1. Let $\mathbf{x}\in {\mathbb{R}}^{N}$ be an s-sparse signal under the generic s-sparse model. Then, if $s=O\left(\frac{M}{log\phantom{\rule{.15em}{0ex}}N}\right)$, it is possible to recover x with probability 1 − N^{−1} from the measurement Ax.
Proof. From Lemma 2, A is a tight frame with coherence $\mu =O\left(\frac{1}{\sqrt{M}}\right)$. For such a matrix, Theorem 2.2 of [29] gives the average recovery guarantee that if $s=O\left(min\left\{\frac{{\mu}^{-2}}{log\phantom{\rule{.15em}{0ex}}N},\frac{M}{log\phantom{\rule{.15em}{0ex}}N}\right\}\right)=O\left(\frac{M}{log\phantom{\rule{.15em}{0ex}}N}\right)$, it is possible to recover x with probability 1 − N^{−1} from Ax, which completes the proof. □
4 FFTbased signal measurement and recovery
This section describes measurement and recovery processes with the deterministic compressed sensing matrix A in Construction 1. With the DFTbased submatrix structure, we can make use of the FFT technique in the processes.
4.1 Measurement
The measurement process of compressed sensing is y = Ax, where x = (x_{0}, x_{1}, …, x_{N−1})^{T} and y = (y_{0}, y_{1}, …, y_{M−1})^{T}. Let b = M + 1 and x_{l} = (x_{bl}, x_{bl+1}, …, x_{bl+b−1})^{T} be a segment of x of length b, where 0 ≤ l ≤ L − 1. From Equation 5, ${\mathit{\sigma}}^{(l)}{\mathbf{\text{x}}}_{l}=\frac{1}{\sqrt{M}}{\mathbf{\Gamma}}^{(l)}{\mathbf{\text{F}}}_{b}^{\prime}{\mathbf{\text{x}}}_{l}$ for each l, which implies that the matrix-vector multiplication σ^{(l)}x_{l} amounts to performing the b-point DFT of each segment x_{l} and then multiplying each DFT output by ${\gamma}_{k}^{(l)}$. Let ${\stackrel{~}{x}}_{k}^{(l)}$ be the b-point DFT of x_{l}, i.e.,

$${\stackrel{~}{x}}_{k}^{(l)}=\sum _{t=0}^{b-1}{x}_{\mathit{bl}+t}exp\left(-j\frac{2\pi \mathit{kt}}{b}\right),\phantom{\rule{1em}{0ex}}0\le k\le b-1,$$
and ${X}_{k}^{(l)}={\stackrel{~}{x}}_{k+1}^{(l)}$, 0 ≤ k ≤ M − 1, for each l. As Ax = σ^{(0)}x_{0} + ⋯ + σ^{(L−1)}x_{L−1}, the measurement Ax is equivalent to adding up the DFT outputs ${X}_{k}^{(l)}$ weighted by ${\gamma}_{k}^{(l)}$ over 0 ≤ l ≤ L − 1. In other words,

$${y}_{k}=\frac{1}{\sqrt{M}}\sum _{l=0}^{L-1}{\gamma}_{k}^{(l)}{X}_{k}^{(l)},\phantom{\rule{1em}{0ex}}0\le k\le M-1.$$
For fast implementation, the FFT algorithm can be applied to the L distinct segments of x simultaneously in a parallel fashion.
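The segment-wise measurement described above can be sketched directly. The code below is ours, using M = 8, L = 2 and the index set D of Example 3; a plain DFT sum stands in for the FFT:

```python
import cmath, math

M, L = 8, 2
b, Np = M + 1, M * M - 1     # b = 9, N' = 63
N = b * L
D = [26, 52, 42, 41, 13, 21, 38, 19]   # from Example 3

def gamma(k, l):
    """Mask gamma_k^{(l)} = exp(j 2 pi d_k l / (M^2 - 1))."""
    return cmath.exp(2j * math.pi * D[k] * l / Np)

def measure(x):
    """y = A x via the b-point DFT of each length-b segment."""
    y = [0j] * M
    for l in range(L):
        seg = x[b * l: b * (l + 1)]
        # b-point DFT; output index k+1 corresponds to row k of F'_b
        X = [sum(seg[t] * cmath.exp(-2j * math.pi * k * t / b)
                 for t in range(b)) for k in range(b)]
        for k in range(M):
            y[k] += gamma(k, l) * X[k + 1] / math.sqrt(M)
    return y

def measure_direct(x):
    """Direct multiplication with the entries of A, for comparison."""
    return [sum(cmath.exp(2j * math.pi * D[k] * ((M - 1) * t + l) / Np)
                * x[b * l + t] / math.sqrt(M)
                for l in range(L) for t in range(b)) for k in range(M)]
```

Replacing the explicit DFT sum with an FFT of length b gives the O(b log b) per-segment cost quoted at the end of Section 4, and the L segments are independent, so they can be processed in parallel.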
4.2 Reconstruction
For s-sparse signal recovery, we consider the CoSaMP algorithm presented in Algorithm 2.1 of [20], described in Algorithm 1 of this paper. At each iteration, it forms a signal proxy f and identifies a potential candidate set Ω for the signal support by locating the largest 2s components of the proxy. The algorithm then merges the candidate Ω with the support set from the previous iteration to create a new support set T. To estimate the target signal ${\hat{\mathbf{\text{x}}}}_{i}$, it solves a least-squares problem and keeps only the largest s entries of the signal approximation z. Finally, it updates the current sample vector v for the next iteration.
Algorithm 1 CoSaMP recovery algorithm[20]
In Algorithm 1, the signal proxy is f = A^{H}v = (f_{0}, f_{1}, …, f_{N−1})^{T}, where v = (v_{0}, v_{1}, …, v_{M−1})^{T} and A^{H} denotes the conjugate transpose of A. Initially, v is the (noisy) measurement vector u. At each iteration, it is updated by $\mathbf{\text{v}}=\mathbf{\text{u}}-\mathbf{\text{A}}{\hat{\mathbf{\text{x}}}}_{i}$. Considering the submatrix structure of σ^{(l)}, the matrix-vector multiplication A^{H}v is performed by the reverse operation of the measurement process, i.e., extracting the weight ${\gamma}_{k}^{(l)}$ from each measurement and then applying the b-point IDFT. For each l, 0 ≤ l ≤ L − 1, we create a demasked version of v of length b = M + 1, i.e., ${\stackrel{~}{\mathbf{\text{v}}}}^{(l)}={({\stackrel{~}{v}}_{0}^{(l)},{\stackrel{~}{v}}_{1}^{(l)},\dots ,{\stackrel{~}{v}}_{M}^{(l)})}^{T}$, where ${\stackrel{~}{v}}_{0}^{(l)}=0$ and

$${\stackrel{~}{v}}_{k+1}^{(l)}={\left({\gamma}_{k}^{(l)}\right)}^{\ast}{v}_{k},\phantom{\rule{1em}{0ex}}0\le k\le M-1,$$
where '∗' denotes the complex conjugate. Applying the b-point IDFT to ${\stackrel{~}{\mathbf{\text{v}}}}^{(l)}$ with normalization then yields a segment of f of length b, i.e., f_{l} = (f_{bl}, f_{bl+1}, …, f_{bl+b−1})^{T}, where

$${f}_{\mathit{bl}+t}=\frac{1}{\sqrt{M}}\sum _{k=0}^{M}{\stackrel{~}{v}}_{k}^{(l)}exp\left(j\frac{2\pi \mathit{kt}}{b}\right),\phantom{\rule{1em}{0ex}}0\le t\le M.$$
For fast implementation, the FFT algorithm can be applied to the L distinct demasked versions of v simultaneously in a parallel fashion. Finally, concatenating the L segments forms$\mathbf{\text{f}}={({\mathbf{\text{f}}}_{0}^{T}\mid \cdots \mid {\mathbf{\text{f}}}_{L1}^{T})}^{T}$.
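The demask-then-IDFT computation of the proxy can be sketched the same way. Again, this is our code, with M = 8, L = 2 and the index set D of Example 3:

```python
import cmath, math

M, L = 8, 2
b, Np = M + 1, M * M - 1     # b = 9, N' = 63
N = b * L
D = [26, 52, 42, 41, 13, 21, 38, 19]   # from Example 3

def gamma(k, l):
    return cmath.exp(2j * math.pi * D[k] * l / Np)

def proxy(v):
    """f = A^H v: demask each v_k by (gamma_k^{(l)})^*, place it at
    position k+1 of a length-b vector (position 0 stays zero), then
    apply the b-point IDFT with 1/sqrt(M) normalization."""
    f = []
    for l in range(L):
        vt = [0j] * b
        for k in range(M):
            vt[k + 1] = gamma(k, l).conjugate() * v[k]
        f += [sum(vt[k] * cmath.exp(2j * math.pi * k * t / b)
                  for k in range(b)) / math.sqrt(M) for t in range(b)]
    return f

def proxy_direct(v):
    """Direct A^H v from the entries of A, for comparison."""
    def entry(k, t, l):
        n = (M - 1) * t + l
        return cmath.exp(2j * math.pi * D[k] * n / Np) / math.sqrt(M)
    return [sum(entry(k, t, l).conjugate() * v[k] for k in range(M))
            for l in range(L) for t in range(b)]
```

As with the measurement, the per-segment IDFT can be replaced by an FFT and the L segments processed in parallel.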
While updating current samples at each iteration, the matrixvector multiplication$\mathbf{\text{A}}{\hat{\mathbf{\text{x}}}}_{i}$ is also performed by the FFT algorithm in a similar manner to the measurement process. One may stop the iterations of the CoSaMP algorithm if the norm of updated samples is sufficiently small or the iteration counter reaches a predetermined value.
Table 1 of [20] shows that forming the signal proxy dominates the complexity of the algorithm through the cost of matrix-vector multiplication. Thus, each iteration of the FFT-based CoSaMP recovery algorithm has complexity O(L × b log b) ≈ O(N log M), which is smaller than that of random partial Fourier matrices.
5 Empirical recovery performance
In this section, we compare our new sensing matrices with chirp sensing [6] and random partial Fourier matrices in terms of empirical recovery performance in noiseless and noisy scenarios. For comparison, we assume that a random partial Fourier matrix has the same parameters M and N = (M + 1)L as our new sensing matrix. To obtain it, we made ten trials of selecting M rows randomly from the N-point IDFT matrix, checking the coherence at each trial, and chose the candidate with the smallest coherence for our experiments. For a chirp sensing matrix, on the other hand, M is set to the prime number closest to the parameter used in our new sensing matrix, and N = ML. Each submatrix of the partial chirp sensing matrix has an alternating polarity as in [30].
In the experiments, we measured an s-sparse signal x, where the s nonzero entries are either +1 or −1 and their positions and signs are chosen uniformly at random. For signal reconstruction, the FFT-based CoSaMP algorithm was applied to a total of 2,000 sample vectors measured by each of the three sensing matrices. In Algorithm 1, the iterations are stopped if either$\|\mathbf{\text{v}}\|<1{0}^{-4}$ or the iteration counter reaches the sparsity level s.
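Test signals of this kind can be drawn as in the following sketch; the function name is ours, and the dimensions match the L = 8 setting used later in this section.

```python
import numpy as np

# Draw an s-sparse test signal as used in the experiments: s nonzero
# entries, each +1 or -1, with support and signs uniform at random.
def random_sparse_signal(N, s, rng):
    x = np.zeros(N)
    support = rng.choice(N, size=s, replace=False)  # uniformly random support
    x[support] = rng.choice([-1.0, 1.0], size=s)    # uniformly random signs
    return x

rng = np.random.default_rng(1)
x = random_sparse_signal(N=2056, s=64, rng=rng)
```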
Figure 2 displays the successful recovery rates of the three sensing matrices from noiseless measurements at various compression ratios, where the sparsity level is s = 64. In the figure, for 5 ≤ L ≤ 30, M = 256 and N = (M + 1)L for our new sensing and random partial Fourier matrices, while M = 257 and N = M L for chirp sensing matrices. With these parameters, each sensing matrix achieves compression ratios of$0.0333\le \frac{M}{N}\approx \frac{1}{L}\le 0.2$. A reconstruction is declared successful if$\|\mathbf{\text{x}}-\hat{\mathbf{\text{x}}}\|<1{0}^{-6}$ for the estimate$\hat{\mathbf{\text{x}}}$. The figure shows that our new sensing matrices have slightly higher recovery rates than the random partial Fourier matrices and almost the same recovery rates as the chirp sensing codes.
In noisy compressed sensing, a measured signal is corrupted by additive noise, i.e., u = y + n = Ax + n, where n is additive white Gaussian noise of zero mean and variance σ ^{2}. Then, the input signal-to-noise ratio (SNR) is defined as${\text{SNR}}_{\mathit{\text{input}}}(\mathit{\text{dB}})=10{log}_{10}\frac{\|\mathbf{\text{y}}\|^{2}}{{\sigma}^{2}}$. Also, to measure the recovery performance in noisy compressed sensing, we define the reconstruction SNR as${\text{SNR}}_{\mathit{\text{reconst}}}(\mathit{\text{dB}})=10{log}_{10}\frac{\|\mathbf{\text{x}}\|^{2}}{\|\mathbf{\text{x}}-\hat{\mathbf{\text{x}}}\|^{2}}$. In the experiments, we fixed L = 8, where M = 256 and N = L(M + 1) = 2,056 for our new sensing and random partial Fourier matrices, while M = 257 and N = M L = 2,056 for chirp sensing matrices. Figure 3 shows an example of original and reconstructed signals for our new sensing matrix in noisy compressed sensing, where the sparsity level is s = 15 and the input SNR is 15 dB.
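The two SNR definitions above translate directly into code; this is a small helper sketch, with function names of our own choosing.

```python
import numpy as np

def input_snr_db(y, sigma2):
    # SNR_input(dB) = 10 log10(||y||^2 / sigma^2)
    return 10 * np.log10(np.sum(np.abs(y) ** 2) / sigma2)

def reconstruction_snr_db(x, x_hat):
    # SNR_reconst(dB) = 10 log10(||x||^2 / ||x - x_hat||^2)
    num = np.sum(np.abs(x) ** 2)
    den = np.sum(np.abs(x - x_hat) ** 2)
    return 10 * np.log10(num / den)

# Example: ||x||^2 = 25 and ||x - x_hat||^2 = 0.25 give 10*log10(100) = 20 dB.
snr = reconstruction_snr_db(np.array([3.0, 4.0]), np.array([3.0, 4.5]))
```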
Figure 4 plots the reconstruction SNR of the three sensing matrices from noisy measurements, where the input SNR is 15 dB. The figure reveals that our new sensing matrix outperforms the random partial Fourier and chirp sensing matrices at high sparsity levels, although the differences are small. Figure 5 shows reconstruction SNR versus input SNR of the three matrices in noisy compressed sensing, where the sparsity level of the original signal is 70. At this sparsity level, we observed that the relationship between reconstruction and input SNR is linear for medium and high input SNR. Our new sensing matrix slightly outperforms the random partial Fourier matrix at high input SNR but shows almost the same trend as the chirp sensing code.
In addition to the above experiments, we attempted an elementary image reconstruction employing the Haar wavelet transform. An original sparsified image was measured by the three sensing matrices and then reconstructed by the CoSaMP algorithm. We observed that the successfully reconstructed images from the three matrices are hard to distinguish and show almost the same reconstruction SNR.
In conclusion, our new sensing matrix showed empirically reliable recovery performance under the CoSaMP algorithm in both noiseless and noisy scenarios, comparable to that of chirp sensing and random partial Fourier matrices.
6 Conclusions
This paper has constructed a new class of Fourier-based compressed sensing matrices using an almost difference set. We showed that a basic partial Fourier matrix, equivalent to the near-optimal partial Fourier codebook presented in [19], can be represented as a concatenation of DFT-based submatrices under row/column rearrangement. Choosing a full or a part of the columns of the concatenated matrix, we then constructed a new sensing matrix which turns out to be an incoherent tight frame. The new sensing matrix guarantees unique sparse reconstruction with high probability for sparse signals with uniformly distributed supports. Moreover, experimental results revealed that our deterministic compressed sensing matrices provide empirically reliable recovery performance.
In conclusion, compared to existing chirp sensing and random partial Fourier matrices, our new sensing matrices offer the following benefits:

1.
Our new deterministic sensing matrices support various parameters of M = p ^{r} and N = (M + 1)L for any prime p and positive integers r and L, 2 ≤ L ≤ M − 1. They are incoherent tight frames for any such M and N. Compared to chirp sensing codes, where M is generally restricted to a prime number, the new matrices therefore provide more options for the parameters M and N, permitting various compression ratios of $\frac{M}{N}\approx \frac{1}{L}$. A large number of new sensing matrices with a variety of admissible parameters may have many potential applications in compressed sensing.

2.
The deterministic row index structure requires much less storage space than random partial Fourier matrices. Moreover, while the N-point FFT is required for random partial Fourier matrices, the DFT-based submatrix structure of our new sensing matrices allows (M + 1)-point FFT processing, enabling efficient, low-complexity signal measurement and reconstruction. These implementation benefits indicate the potential of our new sensing matrices in practical compressed sensing.
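The parameter flexibility of item 1 can be illustrated by enumerating admissible (M, N) pairs; this is a sketch with helper names of our own.

```python
# Enumerate admissible parameters of the new matrices: M = p^r for a prime p,
# N = (M + 1)L with 2 <= L <= M - 1, giving compression ratio M/N ~ 1/L.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def admissible_params(p, r, Ls):
    assert is_prime(p), "p must be prime"
    M = p ** r
    return [(M, (M + 1) * L) for L in Ls if 2 <= L <= M - 1]

# Example from Section 5: p = 2, r = 8 gives M = 256, so N = 257 L.
params = admissible_params(2, 8, [5, 8, 30])  # -> [(256, 1285), (256, 2056), (256, 7710)]
```

By contrast, a chirp sensing code would need M itself prime (e.g., 257), fixing N to multiples of that prime.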
Appendix
Proof of Procedure 1
First, Lemma 3 shows that the indices of D in Equation 1 are equivalently generated by cyclotomic cosets. In the proof, we use the well-known property that${(x+y)}^{{p}^{k}}={x}^{{p}^{k}}+{y}^{{p}^{k}}$ for$x,y\in {\mathbb{F}}_{{p}^{m}}$ and any integers m, k.
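As a quick numerical sanity check of this identity in the prime field case (m = 1; the proof uses it in the extension field${\mathbb{F}}_{{p}^{2r}}$):

```python
# Verify (x + y)^(p^k) == x^(p^k) + y^(p^k) (mod p) for all x, y in F_p.
# This covers only the prime field m = 1; the appendix applies the identity
# in F_{p^{2r}}, where the same Frobenius argument holds.
p, k = 5, 2
q = p ** k
ok = all(pow(x + y, q, p) == (pow(x, q, p) + pow(y, q, p)) % p
         for x in range(p) for y in range(p))
```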
Lemma 3. Consider all cyclotomic cosets modulo p ^{r} + 1 over p. From Procedure 1, recall that if p = 2, Γ_{2}(2^{r} + 1) ∖ {0} = {u _{1},…,u _{ δ }}, and if p > 2,${\mathrm{\Gamma}}_{p}({p}^{r}+1)\setminus \{\frac{{p}^{r}+1}{2}\}=\{{u}_{1},\dots ,{u}_{\delta}\}$, respectively. For each u _{ i }, let ${C}_{{u}_{i}}^{\prime}$ be the cyclotomic coset having the coset leader u _{ i }. Also, let ${z}_{i}\in {\mathbb{Z}}_{{p}^{2r}-1}$ be an integer satisfying Equation 8 for each u _{ i }. Assume ${z}_{i}\in {C}_{{s}_{i}}$, where ${C}_{{s}_{i}}$ is a cyclotomic coset modulo p ^{2r} − 1 over p containing the coset leader s _{ i }. Then,

1.
In Equation 1 of Remark 1, $I={\bigcup}_{1\le i\le \delta}{C}_{{u}_{i}}^{\prime}$.

2.
z _{ i } = z _{ j } if and only if u _{ i } = u _{ j } for 1 ≤ i, j ≤ δ.

3.
${C}_{{s}_{i}}={C}_{{u}_{i}}^{\prime}$ for each i, 1 ≤ i ≤ δ.

4.
Finally, the index set D of Equation 1 is given by
$$D=\bigcup _{1\le i\le \delta}{C}_{{s}_{i}}.$$(9)
Proof.

1.
If p = 2, then ${\bigcup}_{1\le i\le \delta}{C}_{{u}_{i}}^{\prime}={\mathbb{Z}}_{{p}^{r}+1}\setminus \{0\}=I$ is obvious. If p > 2, on the other hand, $\frac{p({p}^{r}+1)}{2}-\frac{{p}^{r}+1}{2}=({p}^{r}+1)\times \frac{(p-1)}{2}\equiv 0\phantom{\rule{1em}{0ex}}(\text{mod}\phantom{\rule{.5em}{0ex}}{p}^{r}+1)$ for odd p. Then, we have $\frac{p({p}^{r}+1)}{2}\equiv \frac{{p}^{r}+1}{2}\phantom{\rule{1em}{0ex}}(\text{mod}\phantom{\rule{.5em}{0ex}}{p}^{r}+1)$, which means that $\frac{{p}^{r}+1}{2}$ is the only element of the cyclotomic coset containing it. Therefore, ${\bigcup}_{1\le i\le \delta}{C}_{{u}_{i}}^{\prime}={\mathbb{Z}}_{{p}^{r}+1}\setminus \{\frac{{p}^{r}+1}{2}\}=I$ is also clear.

2.
For given u _{ i }, the solution ${z}_{i}\in {\mathbb{Z}}_{{p}^{2r}-1}$ of Equation 8 is unique from the structure of the finite field ${\mathbb{F}}_{{p}^{2r}}$. From the uniqueness, z _{ i } = z _{ j } if and only if u _{ i } = u _{ j } for 1 ≤ i, j ≤ δ.

3.
For each i, let $|{C}_{{u}_{i}}^{\prime}|={n}_{{u}_{i}}$, where
$${u}_{i}\equiv {u}_{i}{p}^{{n}_{{u}_{i}}}\phantom{\rule{1em}{0ex}}(\text{mod}\phantom{\rule{.5em}{0ex}}{p}^{r}+1).$$(10)Also, let$|{C}_{{s}_{i}}|={n}_{{s}_{i}}$. For${z}_{i}\in {C}_{{s}_{i}}$,
$${z}_{i}\equiv {z}_{i}{p}^{{n}_{{s}_{i}}}\phantom{\rule{1em}{0ex}}(\text{mod}\phantom{\rule{.5em}{0ex}}{p}^{2r}-1).$$(11)Then,$1+{\alpha}^{({p}^{r}-1){u}_{i}{p}^{{n}_{{u}_{i}}}}={\left(1+{\alpha}^{({p}^{r}-1){u}_{i}}\right)}^{{p}^{{n}_{{u}_{i}}}}={\alpha}^{{z}_{i}{p}^{{n}_{{u}_{i}}}}$, where α is a primitive element in${\mathbb{F}}_{{p}^{2r}}$. Since$1+{\alpha}^{({p}^{r}-1){u}_{i}{p}^{{n}_{{u}_{i}}}}=1+{\alpha}^{({p}^{r}-1){u}_{i}}={\alpha}^{{z}_{i}}$, we have${\alpha}^{{z}_{i}{p}^{{n}_{{u}_{i}}}}={\alpha}^{{z}_{i}}$, which implies
$${z}_{i}\equiv {z}_{i}{p}^{{n}_{{u}_{i}}}\phantom{\rule{1em}{0ex}}(\text{mod}\phantom{\rule{.5em}{0ex}}{p}^{2r}-1).$$(12)From Equations 11 and 12,${n}_{{s}_{i}}$ divides${n}_{{u}_{i}}$ since${n}_{{s}_{i}}$ is the smallest positive integer satisfying Equation 11. Similarly,$1+{\alpha}^{({p}^{r}-1){u}_{i}{p}^{{n}_{{s}_{i}}}}={\left(1+{\alpha}^{({p}^{r}-1){u}_{i}}\right)}^{{p}^{{n}_{{s}_{i}}}}={\alpha}^{{z}_{i}{p}^{{n}_{{s}_{i}}}}={\alpha}^{{z}_{i}}=1+{\alpha}^{({p}^{r}-1){u}_{i}}$. Then,
$${u}_{i}\equiv {u}_{i}{p}^{{n}_{{s}_{i}}}\phantom{\rule{1em}{0ex}}(\text{mod}\phantom{\rule{.5em}{0ex}}{p}^{r}+1).$$(13)From Equations 10 and 13,${n}_{{u}_{i}}$ divides${n}_{{s}_{i}}$ since${n}_{{u}_{i}}$ is the smallest positive integer satisfying Equation 10. As${n}_{{s}_{i}}$ and${n}_{{u}_{i}}$ divide each other, it means${n}_{{u}_{i}}={n}_{{s}_{i}}$, or equivalently${C}_{{u}_{i}}^{\prime}={C}_{{s}_{i}}$ for each i,1 ≤ i ≤ δ.

4.
With ${z}_{i}\in {C}_{{s}_{i}}$ and ${u}_{i}\in {C}_{{u}_{i}}^{\prime}$ satisfying Equation 8, let us say that ${C}_{{s}_{i}}$ is associated with ${C}_{{u}_{i}}^{\prime}$. Recall that ${\alpha}^{({p}^{r}+1){e}_{v}}={\text{Tr}}_{r}^{2r}({\alpha}^{v})$ in Remark 1. For each index of D in Equation 1,
$$\begin{array}{ll}\phantom{\rule{6.5pt}{0ex}}{\alpha}^{{d}_{k}}& ={\alpha}^{({p}^{r}+1){e}_{v}-v}={\alpha}^{-v}\times {\text{Tr}}_{r}^{2r}({\alpha}^{v})\\ ={\alpha}^{-v}\times ({\alpha}^{v}+{\alpha}^{v{p}^{r}})=1+{\alpha}^{({p}^{r}-1)v}\end{array}$$(14)where v ∈ I in Equation 1. In Equation 14, if v = u _{ i }, then d _{ k } = z _{ i } from Equation 8. Moreover,
$${\alpha}^{{z}_{i}{p}^{t}}={\left(1+{\alpha}^{({p}^{r}-1){u}_{i}}\right)}^{{p}^{t}}=1+{\alpha}^{({p}^{r}-1){u}_{i}{p}^{t}}$$(15)for$1\le t\le {n}_{{u}_{i}}={n}_{{s}_{i}}$. Then, Equation 15 implies that each element of${C}_{{u}_{i}}^{\prime}=\{{u}_{i},{u}_{i}p,\dots ,{u}_{i}{p}^{{n}_{{u}_{i}}-1}\}$ induces each element of${C}_{{s}_{i}}=\{{z}_{i},{z}_{i}p,\dots ,{z}_{i}{p}^{{n}_{{s}_{i}}-1}\}$ as a solution of Equation 15. For each element$v\in I={\bigcup}_{1\le i\le \delta}{C}_{{u}_{i}}^{\prime}$, therefore, we conclude that the corresponding solutions d _{ k } of Equation 14 constitute the δ cyclotomic cosets${C}_{{s}_{1}},\dots ,{C}_{{s}_{\delta}}$, each of which is associated with${C}_{{u}_{1}}^{\prime},\dots ,{C}_{{u}_{\delta}}^{\prime}$, respectively, which yields Equation 9. From item 2,${C}_{{s}_{1}},\dots ,{C}_{{s}_{\delta}}$ are disjoint, and the set D of Equation 9 has p ^{r} distinct elements since$|D|={\sum}_{i=1}^{\delta}|{C}_{{s}_{i}}|={\sum}_{i=1}^{\delta}|{C}_{{u}_{i}}^{\prime}|=|I|={p}^{r}$.
From Lemma 3, if p = 2, Equation 9 directly presents the indices of D in Equation 2. If p > 2, on the other hand, we can simply add$\frac{M+1}{2}$ to each element of Equation 9, to obtain the indices of D in Equation 3. This verifies that Procedure 1 equivalently generates the row index set for our new sensing matrix.
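The cyclotomic cosets that Lemma 3 manipulates are straightforward to compute directly. The sketch below (the helper name is ours) builds the cosets modulo p ^{r} + 1 over p and checks the p = 2 partition used in item 1.

```python
# Compute cyclotomic cosets modulo n over p: C_u = {u, up, up^2, ...} mod n,
# keyed by coset leader (the smallest element of each coset).
def cyclotomic_cosets(n, p):
    cosets, seen = {}, set()
    for u in range(n):
        if u in seen:
            continue
        coset, v = [], u
        while v not in coset:       # multiply by p until the orbit closes
            coset.append(v)
            v = (v * p) % n
        cosets[u] = coset
        seen.update(coset)
    return cosets

# For p = 2 and r = 4, the modulus is 2^4 + 1 = 17; the cosets with nonzero
# leaders should partition Z_17 \ {0}, as in item 1 of Lemma 3.
p, r = 2, 4
cosets = cyclotomic_cosets(p ** r + 1, p)
nonzero = [c for u, c in cosets.items() if u != 0]
```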
References
 1.
Donoho DL: Compressed sensing. IEEE Trans. Inf. Theory 2006, 52(4):1289–1306.
 2.
Candes EJ, Romberg J, Tao T: Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52(2):489–509.
 3.
Candes EJ, Tao T: Near-optimal signal recovery from random projections: Universal encoding strategies. IEEE Trans. Inf. Theory 2006, 52(12):5406–5425.
 4.
Rauhut H: Compressive sensing and structured random matrices. Theoretical Foundations and Numerical Methods for Sparse Recovery, Radon Series ed. by M. Fornasier. Computational and Applied Mathematics vol. 9 (deGruyter, Berlin, 2010), pp. 1–92
 5.
Calderbank R, Howard S, Jafarpour S: Construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property. IEEE J. Sel. Topics Sig. Proc 2010, 4(2):358–374.
 6.
Applebaum L, Howard SD, Searle S, Calderbank R: Chirp sensing codes: Deterministic compressed sensing measurements for fast recovery. Appl. Comput. Harmon. Anal 2009, 26: 283–290. 10.1016/j.acha.2008.08.002
 7.
Alltop W: Complex sequences with low periodic correlations. IEEE Trans. Inf. Theory 1980, 26(3):350–354. 10.1109/TIT.1980.1056185
 8.
Strohmer T, Heath R: Grassmannian frames with applications to coding and communication. Appl. Comput. Harmon. Anal 2003, 14(3):257–275. 10.1016/S1063-5203(03)00023-X
 9.
Calderbank R, Howard S, Jafarpour S: A sublinear algorithm for sparse reconstruction with l2/l2 recovery guarantees. 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Aruba, 13–16 Dec 2009. Piscataway: IEEE; 2009:209–212.
 10.
Howard S, Calderbank R, Searle S: A fast reconstruction algorithm for deterministic compressive sensing using second order Reed-Muller codes. Conference on Information Systems and Sciences (CISS), Princeton, 19–21 March 2008. Piscataway: IEEE; 2008:11–15.
 11.
Amini A, Montazerhodjat V, Marvasti F: Matrices with small coherence using p-ary block codes. IEEE Trans. Sig. Proc 2012, 60: 172–181.
 12.
DeVore RA: Deterministic constructions of compressed sensing matrices. J. Complexity 2007, 23: 918–925. 10.1016/j.jco.2007.04.002
 13.
Gurevich S, Hadani R, Sochen N: On some deterministic dictionaries supporting sparsity. J. Fourier Anal. Appl 2008, 14(5):859–876.
 14.
Xu Z: Deterministic sampling of sparse trigonometric polynomials. J. Complexity 2011, 27: 133–140. 10.1016/j.jco.2011.01.007
 15.
Yu NY: Additive character sequences with small alphabets for compressed sensing matrices. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, 22–27 May 2011 (IEEE, Piscataway, 2011), pp. 2932–2935
 16.
Li S, Gao F, Ge G, Zhang S: Deterministic construction of compressed sensing matrices via algebraic curves. IEEE Trans. Inf. Theory 2012, 58(8):5035–5041.
 17.
Mishali M, Eldar YC: Blind multiband signal reconstruction: Compressed sensing for analog signals. IEEE Trans. Sig. Proc 2009, 57(3):993–1009.
 18.
Dominguez-Jimenez ME, Gonzalez-Prelcic N, Vazquez-Vilar G, Lopez-Valcarce R: Design of universal multicoset sampling patterns for compressed sensing of multiband sparse signals. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, 25–30 March 2012. Piscataway: IEEE; 2012:3337–3340.
 19.
Yu NY, Feng K, Zhang A: A new class of near-optimal partial Fourier codebooks from an almost difference set. Des. Codes Cryptogr 2012. 10.1007/s10623-012-9753-8
 20.
Needell D, Tropp JA: CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal 2009, 26: 301–321. 10.1016/j.acha.2008.07.002
 21.
Golomb SW, Gong G: Signal Design for Good Correlation: For Wireless Communication, Cryptography and Radar. Cambridge: Cambridge University Press; 2005.
 22.
MacWilliams FJ, Sloane NJ: The Theory of Error-Correcting Codes. Amsterdam: North-Holland; 1977.
 23.
Arasu KT, Ding C, Helleseth T, Kumar PV, Martinsen H: Almost difference sets and their sequences with optimal autocorrelation. IEEE Trans. Inf. Theory 2001, 47(7):2934–2943. 10.1109/18.959271
 24.
Donoho DL, Elad M: Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization. Proc. Natl. Acad. Sci 2003, 100: 2197–2202. 10.1073/pnas.0437847100
 25.
Welch LR: Lower bounds on the maximum cross correlation of signals. IEEE Trans. Inf. Theory 1974, IT-20: 397–399.
 26.
Kovacević J, Chebira A: An Introduction to Frames: Foundations and Trends in Signal Processing. Hannover: now Publishers; 2008.
 27.
Yu NY: On statistical restricted isometry property of a new class of deterministic partial Fourier compressed sensing matrices. International Symposium on Information Theory and its Applications (ISITA), Honolulu, 28–31 Oct 2012. Piscataway: IEEE; 2012:287–291.
 28.
Candes E, Plan Y: Near-ideal model selection by l1 minimization. Ann. Stat 2009, 37(5A):2145–2177. 10.1214/08-AOS653
 29.
Jafarpour S, Duarte MF, Calderbank R: Beyond worst-case reconstruction in deterministic compressed sensing. IEEE International Symposium on Information Theory Proceedings (ISIT), Cambridge, 1–6 July 2012. Piscataway: IEEE; 2012:1862–1866.
 30.
Ni K, Datta S, Mahanti P, Roudenko S, Cochran D: Efficient deterministic compressed sensing for images with chirps and Reed-Muller codes. SIAM J. Imaging Sci 2011, 4(3):931–953. 10.1137/100808794
Acknowledgements
This work was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Yu, N.Y., Li, Y. Deterministic construction of Fourierbased compressed sensing matrices using an almost difference set. EURASIP J. Adv. Signal Process. 2013, 155 (2013). https://doi.org/10.1186/168761802013155
Keywords
 Discrete Fourier Transform
 Tight Frame
 Sparse Signal
 Signal Proxy
 Linear Feedback Shift Register