
Deterministic construction of Fourier-based compressed sensing matrices using an almost difference set

Abstract

In this paper, a new class of Fourier-based matrices is studied for deterministic compressed sensing. Initially, a basic partial Fourier matrix is introduced by choosing the rows deterministically from the inverse discrete Fourier transform (DFT) matrix. By row/column rearrangement, the matrix is represented as a concatenation of DFT-based submatrices. Then, all or a subset of the columns of the concatenated matrix are selected to build a new M × N deterministic compressed sensing matrix, where M = p^r and N = L(M + 1) for a prime p and positive integers r and L ≤ M - 1. Theoretically, the sensing matrix forms a tight frame with small coherence. Moreover, the matrix theoretically guarantees unique recovery of sparse signals with uniformly distributed supports. Owing to the structure of the sensing matrix, the fast Fourier transform (FFT) technique can be applied for efficient signal measurement and reconstruction. Experimental results demonstrate that the new deterministic sensing matrix shows empirically reliable recovery performance for sparse signals under the CoSaMP algorithm.

1 Introduction

Compressed sensing (or compressive sampling) is a novel and emerging technology with a variety of applications in imaging, data compression, and communications. In compressed sensing, one can recover sparse signals of high dimension from incomplete measurements. Mathematically, measuring an N-dimensional signal x ∈ R^N with an M × N measurement matrix Φ yields an M-dimensional vector y = Φx, where M < N. Under the requirement that x be s-sparse, i.e., that the number of nonzero entries in x be at most s, one can recover x exactly with high probability by an ℓ1-minimization method or a greedy algorithm, both of which are computationally tractable.

Many research activities on the theory and practice of compressed sensing have been triggered since Donoho, Candes, Romberg, and Tao published their seminal theoretical works [1]-[3]. These efforts revealed that the measurement matrix Φ plays a crucial role in the recovery of s-sparse signals. Although a random matrix provides many theoretical benefits [4], it has the drawbacks of high complexity and large storage in practical implementation [5]. As an alternative, one may consider a deterministic matrix, where well-known codes and sequences have been employed for the construction, e.g., chirp sequences [6], Alltop sequences [7, 8], Kerdock and Delsarte-Goethals codes [9], second-order Reed-Muller codes [10], and BCH codes [11]. Other techniques for deterministic construction, based on finite fields, representation theory, characters and algebraic curves, and multicoset codes, can also be found in [12]-[18]. Deterministic matrices provide empirically reliable recovery performance while allowing fast processing and low complexity.

In this paper, we study the deterministic construction of a new class of Fourier-based compressed sensing matrices. Initially, a p^r × (p^{2r} - 1) basic partial Fourier matrix, equivalent to the partial Fourier codebook of [19], is introduced by selecting p^r rows from the (p^{2r} - 1)-point inverse discrete Fourier transform (DFT) matrix according to an almost difference set, where p is a prime number and r is a positive integer. By rearranging the rows and/or columns, we show that the matrix can be represented as a concatenation of DFT-based submatrices. Then, all or a subset of the columns of the concatenated matrix are selected to build a new M × N sensing matrix for deterministic compressed sensing, where M = p^r and N = L(M + 1) for L ≤ M - 1. The concatenated structure allows the new sensing matrix to offer various admissible column lengths while remaining an incoherent tight frame, and enables efficient processing for measurement and reconstruction in compressed sensing. We would like to stress that obtaining the concatenated structure from the basic partial Fourier matrix by row/column rearrangement is not a trivial task. With the parameters M and N, our new deterministic matrix can achieve various permissible compression ratios M/N = 1/L for a positive integer L, 2 ≤ L ≤ M - 1.

Theoretically, the new sensing matrix forms a tight frame with small coherence. Moreover, it guarantees unique recovery of sparse signals with uniformly distributed supports with high probability. Owing to its structure, the fast Fourier transform (FFT) technique can be applied for efficient signal measurement and reconstruction. Experimental results demonstrate that the new deterministic compressed sensing matrix, together with the CoSaMP recovery algorithm [20], empirically guarantees sparse signal recovery with high reliability. We observe that the empirical recovery performance of our new sensing matrices is similar to that of chirp sensing [6] and random partial Fourier matrices. However, our new matrices offer several practical benefits, requiring less storage and complexity than random partial Fourier matrices and providing more choices of the parameters M and N than chirp sensing codes.

The rest of this paper is organized as follows. In Section 2, we introduce basic concepts and notations needed to understand this work. Section 3 modifies the structure of a basic partial Fourier matrix and presents a new deterministically constructed sensing matrix. We also discuss the efficient implementation and the theoretical recovery guarantee of the new sensing matrix. Section 4 describes the signal measurement process and the CoSaMP recovery algorithm employing the FFT technique. In Section 5, we demonstrate the empirical recovery performance of our new sensing matrices in noiseless and noisy settings. Finally, concluding remarks are given in Section 6.

2 Preliminaries

This section introduces fundamental concepts and notations for understanding this work. In subsections 2.1 and 2.2, we briefly introduce the concepts of finite fields, trace functions, and cyclotomic cosets for signal processing researchers. For more details, see [21] and [22].

2.1 Finite fields and trace functions

Let p be prime and m > 1 a positive integer. A finite field F_{p^m} consists of 0 and the powers α^i, i = 0, 1, …, p^m - 2, i.e., F_{p^m} = {0, 1, α, α^2, …, α^{p^m - 2}}, where α is called a primitive element and α^{p^m - 1} = 1. The primitive element α is a root of a primitive polynomial f(x) of degree m whose coefficients are elements of F_p = {0, 1, 2, …, p - 1}, i.e., f(α) = 0.

Let k be a positive integer that divides m. A trace function is a linear mapping from F_{p^m} onto F_{p^k} defined by

Tr_k^m(x) = Σ_{i=0}^{m/k - 1} x^{p^{ki}},  x ∈ F_{p^m}

where the addition is computed modulo p. The trace function algebraically defines the well-known m-sequences or pseudo-noise (PN) sequences, which have been widely used in wireless communications. For instance, if p = 2 and k = 1, then (Tr_1^m(1), Tr_1^m(α), Tr_1^m(α^2), …, Tr_1^m(α^{2^m - 2})) is a binary m-sequence of length 2^m - 1, where each entry is 0 or 1. The m-sequence, defined by a trace function, is efficiently generated by a linear feedback shift register (LFSR), which is a common method in communication standards.

Example 1. Let p = 2 and m = 4. Then, the finite field F_{2^4} is defined by the primitive polynomial f(x) = x^4 + x + 1, whose root α is a primitive element of F_{2^4}. Thus, F_{2^4} = {0, 1, α, α^2, …, α^14}, where α^4 + α + 1 = 0 and α^15 = 1. The trace function Tr_1^4(x) takes on either 0 or 1, since it is a linear mapping from F_{2^4} onto F_2. For example,

Tr_1^4(1) = 1 + 1 + 1 + 1 = 0,
Tr_1^4(α) = α + α^2 + α^4 + α^8 = 0,
Tr_1^4(α^3) = α^3 + α^6 + α^12 + α^9 = 1

where the addition is computed modulo p = 2.
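The field arithmetic in Example 1 can be reproduced with a few lines of Python. This is a minimal sketch, not from the paper: field elements are represented as integer bitmasks of polynomial coefficients, and the helper names (`gf_mul`, `gf_pow`, `trace`) are our own.

```python
# A minimal sketch of GF(2^4) arithmetic and the trace function Tr_1^4,
# using the primitive polynomial f(x) = x^4 + x + 1 from Example 1.
# Field elements are integers whose bits are polynomial coefficients.

def gf_mul(a, b, poly=0b10011, m=4):
    """Multiply two GF(2^m) elements (carry-less), reducing by poly."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):          # degree reached m: reduce by f(x)
            a ^= poly
    return res

def gf_pow(a, e):
    res = 1
    for _ in range(e):
        res = gf_mul(res, a)
    return res

def trace(x, m=4):
    """Tr_1^m(x) = x + x^2 + x^4 + ... + x^(2^(m-1)); addition is XOR."""
    t, term = 0, x
    for _ in range(m):
        t ^= term
        term = gf_mul(term, term)  # squaring = Frobenius map
    return t

alpha = 0b0010                     # alpha, a root of f(x)
print(trace(1))                    # 0
print(trace(alpha))                # 0
print(trace(gf_pow(alpha, 3)))     # 1
```

The printed values match the three trace computations of Example 1.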

2.2 Cyclotomic cosets

Let Z_v = {0, 1, …, v - 1}, where v is a positive integer. Also, let p be a prime that is relatively prime to v, i.e., gcd(v, p) = 1. For a nonnegative integer s ∈ Z_v, the cyclotomic coset modulo v over p containing s is defined as

C_s = {s, sp, sp^2, …, sp^{n_s - 1}}

where n_s is the smallest positive integer such that s·p^{n_s} ≡ s (mod v). It is conventional to take the coset leader of C_s to be its smallest element. Then, Z_v is partitioned into cyclotomic cosets, i.e., Z_v = ∪_{s ∈ Γ_p(v)} C_s, where Γ_p(v) denotes the set of coset leaders in Z_v. By definition, once an element is given, the cyclotomic coset containing it can be easily generated by successively multiplying the element by p, which will be useful in generating the row index set for our new sensing matrix.

Example 2

Let p = 2 and v = 15. Then, the cyclotomic cosets modulo 15 over p = 2 are

C_0 = {0}, C_1 = {1, 2, 4, 8}, C_3 = {3, 6, 12, 9}, C_5 = {5, 10}, C_7 = {7, 14, 13, 11}

where the coset leaders are Γ_2(15) = {0, 1, 3, 5, 7}. With the cyclotomic cosets, Z_15 = {0, 1, …, 14} = ∪_{s ∈ Γ_2(15)} C_s.
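The coset generation above can be sketched in a few lines of Python, reproducing Example 2 (the helper name is ours, not from the paper):

```python
# A small sketch of cyclotomic coset generation modulo v over p,
# reproducing Example 2 (p = 2, v = 15).

def cyclotomic_cosets(v, p):
    """Partition Z_v into cyclotomic cosets {s, s*p, s*p^2, ...} mod v."""
    seen, cosets = set(), {}
    for s in range(v):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:
            coset.append(x)
            x = (x * p) % v        # successive multiplication by p
        cosets[s] = coset          # s is the smallest element: the leader
        seen.update(coset)
    return cosets

cosets = cyclotomic_cosets(15, 2)
print(sorted(cosets))              # [0, 1, 3, 5, 7]  (coset leaders)
print(cosets[3])                   # [3, 6, 12, 9]
```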

2.3 Basic partial Fourier matrix Ã

In this subsection, we introduce the basic framework from which a new sensing matrix will be developed in Section 3. Throughout this paper, we set M = p^r and N′ = p^{2r} - 1 = M^2 - 1 for prime p and a positive integer r. Also, we assume that each column of a sensing matrix has unit ℓ2-norm, where the ℓ2-norm of an n-dimensional vector x = (x_0, x_1, …, x_{n-1}) is ||x|| = (Σ_{i=0}^{n-1} |x_i|^2)^{1/2}. Table 1 summarizes all the variables and notations for the development of the new sensing matrix.

Table 1 Variables and notations for the new sensing matrix

In [19], Yu, Feng, and Zhang presented a new class of (N′, M) near-optimal partial Fourier codebooks using an almost difference set [23]. The codebook can be equivalently translated into an M × N′ partial Fourier matrix, which contains M rows selected from the N′-point inverse DFT (IDFT) matrix according to the almost difference set.

From the results of [19], Proposition 1 describes the basic partial Fourier matrix and its geometric properties with the notations of this paper.

Proposition 1. For prime p and a positive integer r, let M = p^r and N′ = p^{2r} - 1 = M^2 - 1. Let D = {d_0, d_1, …, d_{M-1}} be the index set defined in Lemma 2 of [19], which is given below in Remark 1. Choosing M rows from the N′-point IDFT matrix according to D, we construct an M × N′ matrix Ã, where each entry is given by

ã_{k,n} = (1/√M) exp(j2π d_k n / N′),  j = √(-1)

for 0 ≤ k ≤ M - 1 and 0 ≤ n ≤ N′ - 1. Then, the coherence [24] of Ã is given by

μ = max_{0 ≤ n_1 ≠ n_2 ≤ N′-1} |ã_{n_1}^H ã_{n_2}| = 1/√M

where ã_{n_1} is a column vector of Ã and ã_{n_1}^H denotes the transpose of its complex conjugate. The coherence nearly achieves the Welch bound √((N′ - M)/(M(N′ - 1))) ≈ 1/√(M + 1) [25] for sufficiently large M. Moreover, Ã forms a tight frame [26], as its rows are mutually orthogonal.

The coherence and the tightness of Ã do not change if we select the rows from the DFT matrix instead of the IDFT matrix. In this paper, we use the IDFT matrix.

Remark 1. With the notations of this paper, the row index set D is defined by [19]

D = {(M + 1)e_v - v | v ∈ I},  where I = Z_{M+1} \ {0} if p = 2, and I = Z_{M+1} \ {(M + 1)/2} if p > 2,
(1)

where all the operations in D are computed modulo N′. In Equation 1, D is an almost difference set, and e_v is a nonnegative integer satisfying α^{(p^r + 1)e_v} = Tr_r^{2r}(α^v) for v ∈ I [19], where α is a primitive element of F_{p^{2r}}. To determine the index set D, one needs to compute e_v using a trace function, which might be difficult for signal processing researchers. In Section 3, we will present an alternative method that generates the indices of D by successive multiplication of predetermined values, without computing e_v. Therefore, it suffices to regard e_v simply as an integer in this paper.

3 Construction of new Fourier-based sensing matrices

To build a new M × N sensing matrix, we begin with the M × N′ basic partial Fourier matrix Ã and then choose N columns after a row/column rearrangement. Our approach differs from the conventional one of random or deterministic selection of M rows out of an N × N Fourier matrix, but we will show that it ultimately yields reliable recovery performance and practical benefits in implementation.

In deterministic compressed sensing, it is desirable that a sensing matrix support a variety of admissible column lengths, so as to sense signals of various lengths. For this purpose, one needs to consider how to select the columns from Ã for a new M × N sensing matrix with N < N′. In this section, we apply a row/column permutation to the partial Fourier matrix Ã to obtain a variant M × N′ matrix A′. Then, we choose all or a subset of the columns of A′ to construct a new M × N sensing matrix A, where N = (M + 1)L for a positive integer L, 2 ≤ L ≤ M - 1. The row/column rearrangement offers the following benefits for the new sensing matrix A in compressed sensing, which motivates our approach:

  1. If one selects the columns arbitrarily from Ã, the resulting sensing matrix may not be a tight frame in general. In fact, one needs to be careful in selecting the columns of Ã to achieve the tightness of the resulting matrix. Through the row/column rearrangement, we will show that the new sensing matrix A has a concatenated structure of (M + 1)-point DFT-based submatrices. With this structure, A remains a tight frame if N is chosen as a multiple of M + 1, which will be shown in Lemma 2.

  2. The concatenated structure of A also allows efficient (M + 1)-point FFT processing for measurement and recovery of sparse signals in compressed sensing. Note that if one selects the columns arbitrarily from the original Ã, the resulting matrix generally requires N′-point FFT processing, which has higher computational complexity. Moreover, one may enjoy fast processing via parallel FFT computations using the concatenated structure, which will be discussed in Section 4.

3.1 Structure

Recall the partial Fourier matrix Ã in Proposition 1. If p = 2, we use the original index set D in Equation 1, i.e.,

D = {(M + 1)e_v - v | v ∈ Z_{M+1} \ {0}}.
(2)

On the other hand, if p > 2, we redefine the index set D by adding (M + 1)/2 to each original index in Equation 1, i.e.,

D = {(M + 1)e_v - v + (M + 1)/2 | v ∈ Z_{M+1} \ {(M + 1)/2}}.
(3)

The above modification for p > 2 ensures that each entry of D is nonzero when computed modulo M + 1, which also holds for p = 2. See the proof of Lemma 1 for the implication.

Now, we suggest a column rearrangement of the original Ã. For given l, 0 ≤ l ≤ M - 2, let us take the M + 1 column vectors of indices n = (M - 1)t + l from Ã, where 0 ≤ t ≤ M. With these column vectors, we then obtain an M × (M + 1) submatrix σ^(l) = {σ_{k,t}^(l) | 0 ≤ k ≤ M - 1, 0 ≤ t ≤ M}, where each entry is given by

σ_{k,t}^(l) = (1/√M) exp(j2π d_k((M - 1)t + l)/N′)
            = (1/√M) exp(j2π d_k t/(M + 1)) × exp(j2π d_k l/N′)
            = (1/√M) exp(j2π d_k t/(M + 1)) × γ_k^(l).
(4)

In Equation 4, γ_k^(l) is defined as

γ_k^(l) = exp(j2π d_k l/N′)
        = exp(j2π d_k l × (1/2)(1/(M - 1) - 1/(M + 1)))
        = exp(jπ d_k l/(M - 1)) × exp(-jπ d_k l/(M + 1))

for each k, 0 ≤ k ≤ M - 1.

Next, we show that the submatrix σ^(l) has a DFT-based structure if the row indices of D are arranged in an appropriate order. In Lemma 1, we denote by F_{M+1} the M × (M + 1) DFT matrix without the first row, where each entry is F_{k,t} = exp(-j2π(k + 1)t/(M + 1)) for 0 ≤ k ≤ M - 1 and 0 ≤ t ≤ M.

Lemma 1. In the index set of Equation 2 for p = 2 or Equation 3 for p > 2, the entries of D = {d_0, d_1, …, d_{M-1}} can be arranged such that d_k ≡ -(k + 1) (mod M + 1) for 0 ≤ k ≤ M - 1. With such D, let us define Γ^(l) = {Γ_{k,t}^(l) | 0 ≤ k, t ≤ M - 1} as an M × M diagonal matrix where each entry is

Γ_{k,t}^(l) = γ_k^(l) if k = t, and 0 if k ≠ t

for each l, 0 ≤ l ≤ L - 1. Then, each submatrix σ (l) can be expressed by

σ^(l) = (1/√M) Γ^(l) F_{M+1},
(5)

which clearly shows the DFT-based structure of σ^(l).

Proof. We investigate how the factor exp(j2π d_k t/(M + 1)) of Equation 4 behaves for p = 2 and p > 2, respectively.

Case p = 2: In this case, each element of D in Equation 2 is represented as

d_k = (M + 1)e_{k+1} - (k + 1) ≡ -(k + 1) (mod M + 1)
(6)

for 0 ≤ k ≤ M - 1. Thus, exp(j2π d_k t/(M + 1)) = exp(-j2π(k + 1)t/(M + 1)) in Equation 4. Consequently, the entries of Equation 4 form an M × (M + 1) submatrix σ^(l) whose rows come from the (M + 1)-point DFT matrix with the all-one row excluded, each masked by γ_k^(l). The structure of Equation 5 is then straightforward. □

Case p > 2: In this case, each index of D in Equation 3 is

d_k = (M + 1)e_k - k + (M + 1)/2 for 0 ≤ k < (M + 1)/2, and
d_k = (M + 1)e_{k+1} - (k + 1) + (M + 1)/2 for (M + 1)/2 ≤ k ≤ M - 1,

so that

d_k ≡ -k - (M + 1)/2 (mod M + 1) for 0 ≤ k < (M + 1)/2, and
d_k ≡ -(k + 1) + (M + 1)/2 (mod M + 1) for (M + 1)/2 ≤ k ≤ M - 1,

where -(M + 1)/2 ≡ (M + 1)/2 (mod M + 1). Reordering the indices by d_k ← d_{k + (M+1)/2} for 0 ≤ k < (M - 1)/2 and d_k ← d_{k - (M-1)/2} for (M - 1)/2 ≤ k ≤ M - 1, we get

d_k ≡ -(k + 1) (mod M + 1)
(7)

for 0 ≤ k ≤ M - 1. Then, Equation 7 yields exp(j2π d_k t/(M + 1)) = exp(-j2π(k + 1)t/(M + 1)) in Equation 4. It is now clear why we introduced the modified index set D of Equation 3 for p > 2: by ensuring d_k ≢ 0 (mod M + 1) for every k, the modification guarantees the same DFT-based submatrix structure as in the case p = 2. Finally, the entries of Equation 4 again form an M × (M + 1) submatrix σ^(l) whose rows come from F_{M+1}, each masked by γ_k^(l), which yields Equation 5. □

Remark 2. In both cases of p, one needs to ensure that the entries of the index set D = {d_0, d_1, …, d_{M-1}} satisfy d_k ≡ -(k + 1) ≡ M - k (mod M + 1), to achieve the DFT-based submatrix structure in Lemma 1. If p = 2, the original entries of Equation 2 meet the condition by Equation 6. On the other hand, if p > 2, Equation 7 shows that we have to rearrange the entries of Equation 3 by circularly shifting their order by (M + 1)/2. If the entries of D are generated by a different method, which will be introduced in Procedure 1, the index set D should be sorted, for both cases of p, such that the entries are in decreasing order when computed modulo M + 1, i.e., D (mod M + 1) ≡ {M, M - 1, …, 1}, to satisfy the condition.

Finally, letting l run through {0, 1, …, M - 2}, we obtain the M - 1 submatrices σ^(l) and construct a variant A′ = [σ^(0) σ^(1) ⋯ σ^(M-2)] by concatenating them. Clearly, the M × N′ matrix A′ is equivalent to the original matrix Ã under the row/column rearrangement. In what follows, Construction 1 presents a formal expression of the new sensing matrix A.

Construction 1. Let M = p^r for prime p and a positive integer r. Let D = {d_0, d_1, …, d_{M-1}} be the row index set satisfying d_k ≡ -(k + 1) (mod M + 1) for 0 ≤ k ≤ M - 1. Let L be a positive integer and N = (M + 1)L, where 2 ≤ L ≤ M - 1. For a given integer l, 0 ≤ l ≤ L - 1, define an M × (M + 1) submatrix σ^(l) = {σ_{k,t}^(l) | 0 ≤ k ≤ M - 1, 0 ≤ t ≤ M} where

σ_{k,t}^(l) = (1/√M) exp(-j2π(k + 1)t/(M + 1)) × γ_k^(l)

and γ_k^(l) = exp(jπ d_k l/(M - 1)) × exp(-jπ d_k l/(M + 1)). An M × N sensing matrix A is a concatenation of the L submatrices, i.e., A = [σ^(0) σ^(1) ⋯ σ^(L-1)]. In particular, if L = M - 1, then A = A′ = [σ^(0) σ^(1) ⋯ σ^(M-2)].

Figure 1 illustrates the structure of our new sensing matrix A in Construction 1.

Figure 1

Concatenated structure of new sensing matrix A. It illustrates the concatenated structure of the new sensing matrix A in Construction 1.

3.2 Implementation

In Construction 1, generating the row index set D efficiently is a key issue in implementing the deterministic sensing matrix A. In D, since α^{(p^r + 1)e_v} = Tr_r^{2r}(α^v) is an element of a p^r-ary m-sequence of period p^{2r} - 1 [21], it can be computed by a 2-stage LFSR. Therefore, each element of D in Equation 1 can be generated by an LFSR, a logarithm operation, and other basic arithmetic over finite fields.

As the computation over finite fields is not trivial, we introduce an alternative method to generate the indices of D more efficiently. The method uses cyclotomic cosets modulo p^r + 1 and modulo p^{2r} - 1 over p, respectively, which are always valid for any prime p since gcd(p, p^r + 1) = gcd(p, p^{2r} - 1) = 1. In what follows, we describe the procedure, whose proof is given in the Appendix.

Procedure 1

  1. Generate all cyclotomic cosets modulo p^r + 1 over p. When p = 2, identify the set of nonzero coset leaders Γ_2(2^r + 1) \ {0} = {u_1, …, u_δ}. If p > 2, on the other hand, identify the set of coset leaders excluding (p^r + 1)/2, i.e., Γ_p(p^r + 1) \ {(p^r + 1)/2} = {u_1, …, u_δ}. Note that u_1 = 0 if p > 2.

  2. For each u_i, 1 ≤ i ≤ δ, compute a positive integer z_i ∈ Z_{p^{2r} - 1} such that

     α^{z_i} = 1 + α^{(p^r - 1)u_i}
     (8)

     where α is a primitive element of F_{p^{2r}}.

  3. For each z_i, 1 ≤ i ≤ δ, generate the cyclotomic coset modulo p^{2r} - 1 over p containing z_i, i.e., C_{s_i} = {z_i, z_i p, …, z_i p^{n_{s_i} - 1}}, where n_{s_i} is the smallest positive integer such that z_i ≡ z_i p^{n_{s_i}} (mod p^{2r} - 1). Note that the coset leader s_i is not necessarily equal to z_i.

  4. If p = 2,

     D = ∪_{1 ≤ i ≤ δ} C_{s_i},

     and if p > 2,

     D = {∪_{1 ≤ i ≤ δ} C_{s_i}} + (M + 1)/2

     where the addition is applied to each element of ∪_{1 ≤ i ≤ δ} C_{s_i} and computed modulo p^{2r} - 1. By Remark 2, the index set D should then be sorted such that its entries are in decreasing order when computed modulo M + 1, i.e., D (mod M + 1) ≡ {M, M - 1, …, 1}.

Example 3. Let p = 2 and r = 3. Also, let α be a primitive element of F_{2^6} satisfying α^6 + α + 1 = 0. Then, Procedure 1 generates the M = p^r = 8 indices of D for our new sensing matrix:

  1. From all cyclotomic cosets modulo 9 over p = 2, we identify the nonzero coset leaders Γ_2(9) \ {0} = {1, 3}, where the cosets are C_1 = {1, 2, 4, 8, 7, 5} and C_3 = {3, 6}, respectively.

  2. From u_1 = 1, Equation 8 yields α^{z_1} = 1 + α^7 = α^26, where z_1 = 26. Also, from u_2 = 3, we have α^{z_2} = 1 + α^21 = α^42 and z_2 = 42.

  3. By successively multiplying z_1 = 26 by 2, we obtain its cyclotomic coset C_13 = {26, 52, 41, 19, 38, 13}, where the coset leader is s_1 = 13. Note that the multiplication is computed modulo p^{2r} - 1 = 63. Similarly, we have C_21 = {42, 21} from z_2 = 42, where s_2 = 21.

  4. Finally, the index set D is given by

     D = {d_0, d_1, …, d_7} = C_13 ∪ C_21 = {26, 52, 42, 41, 13, 21, 38, 19}

where we have sorted the indices in decreasing order when computed modulo 9, i.e., D (mod 9) ≡ {8, 7, 6, 5, 4, 3, 2, 1}.
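Steps 3 and 4 of Procedure 1 can be sketched in Python as follows, taking the values z_1 = 26 and z_2 = 42 of Example 3 as precomputed inputs (the helper name `coset` is ours):

```python
# Steps 3-4 of Procedure 1 for p = 2, r = 3 (Example 3): the values
# z_1 = 26 and z_2 = 42 are assumed precomputed from Equation 8.

M = 8                        # M = p^r = 2^3
mod = M * M - 1              # p^{2r} - 1 = 63

def coset(z, p, mod):
    """Cyclotomic coset modulo `mod` over p containing z (successive *p)."""
    c, x = [], z
    while x not in c:
        c.append(x)
        x = (x * p) % mod
    return c

D = coset(26, 2, mod) + coset(42, 2, mod)
# Sort so that D (mod M+1) = {M, M-1, ..., 1}, as required by Remark 2.
D.sort(key=lambda d: -(d % (M + 1)))
print(D)                     # [26, 52, 42, 41, 13, 21, 38, 19]
```

The output matches the sorted index set of Example 3.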

Example 4. With the index set D of Example 3, we can construct an 8 × 18 matrix A = [σ^(0) σ^(1)], where M = 8, N = 18, and L = 2. Denote ω = exp(-j2π/9). Then,

σ^(0) = (1/√8) Γ^(0) F_9 is the 8 × 9 matrix whose (k, t) entry is (1/√8) ω^{(k+1)t} for 0 ≤ k ≤ 7 and 0 ≤ t ≤ 8, i.e., its kth row is (1/√8)(1, ω^{k+1}, ω^{2(k+1)}, …, ω^{8(k+1)}), where Γ^(0) is the 8 × 8 identity matrix. Also, σ^(1) = (1/√8) Γ^(1) F_9, where Γ^(1) = diag(γ_0^(1), γ_1^(1), …, γ_7^(1)); that is, the kth row of σ^(1) is the kth row of (1/√8) F_9 masked by γ_k^(1), so its (k, t) entry is (1/√8) γ_k^(1) ω^{(k+1)t},

where

γ_k^(1) = exp(jπ d_k/7) × exp(-jπ d_k/9) for each d_k ∈ D = {26, 52, 42, 41, 13, 21, 38, 19}; for instance, γ_0^(1) = exp(j26π/7) × exp(-j26π/9) and γ_7^(1) = exp(j19π/7) × exp(-j19π/9).

We can further concatenate σ^(2), σ^(3), …, σ^(L-1) for L ≤ 7 so that A can take various column lengths N = 9L.
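As a numerical sanity check, the following Python sketch builds the matrix of Construction 1 for the Example 3/4 parameters and verifies the coherence and tightness claims of Lemma 2; the variable and helper names are ours, not from the paper.

```python
# A numeric check of Construction 1 and Lemma 2 for the Example 3/4
# parameters (M = 8, L = 2, N = 18), using the index set D of Example 3.
import numpy as np

M, L = 8, 2
N = (M + 1) * L
D = [26, 52, 42, 41, 13, 21, 38, 19]             # from Example 3

def gamma(k, l):
    """gamma_k^(l) = exp(j*pi*d_k*l/(M-1)) * exp(-j*pi*d_k*l/(M+1))."""
    return np.exp(1j*np.pi*D[k]*l/(M - 1)) * np.exp(-1j*np.pi*D[k]*l/(M + 1))

# sigma_{k,t}^(l) = (1/sqrt(M)) * exp(-j*2*pi*(k+1)*t/(M+1)) * gamma_k^(l)
A = np.zeros((M, N), dtype=complex)
for l in range(L):
    for k in range(M):
        for t in range(M + 1):
            A[k, (M + 1)*l + t] = (np.exp(-2j*np.pi*(k + 1)*t/(M + 1))
                                   * gamma(k, l) / np.sqrt(M))

G = A.conj().T @ A                               # Gram matrix (unit-norm columns)
coherence = np.max(np.abs(G - np.diag(np.diag(G))))
tight = np.allclose(A @ A.conj().T, (N / M) * np.eye(M))
print(coherence <= 1/np.sqrt(M) + 1e-12, tight)  # True True, per Lemma 2
```

The zero-row-sum property (item 3 of Lemma 2) can be checked the same way via `A.sum(axis=1)`.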

In practice, we can precompute z_1, z_2, …, z_δ in items 1 and 2 of Procedure 1 and save them in memory to avoid the algebraic computation of Equation 8 in hardware. Then, the M = p^r indices can be generated by items 3 and 4 of Procedure 1. In Example 3, for instance, z_1 = 26 and z_2 = 42 can be precomputed; only these two elements need to be stored in memory to generate the eight row indices. Table 2 presents δ, the number of z_i's to be stored in memory, for various M = p^r. While a random partial Fourier matrix needs to store M indices, a storage space for δ (≪ M) elements is sufficient for our new sensing matrix. In conclusion, constructing our new sensing matrix requires storage for δ elements and an additional circuit for successive multiplication, addition, and sorting, which presents a practical benefit over random partial Fourier matrices.

Table 2 The number of elements z_i to be stored in memory for various M = p^r

3.3 Theoretical recovery performance

In this subsection, we discuss the geometric properties and the theoretical recovery guarantee of our new sensing matrix A.

Lemma 2. The M × N sensing matrix A in Construction 1 has the following properties.

  1. The coherence is upper bounded by 1/√M.

  2. A forms a tight frame.

  3. All the row sums are equal to zero.

Proof. Recall that A′ is obtained by row/column rearrangement of Ã. Since the coherence of a matrix does not change under row/column permutation, the coherence of A′ is also 1/√M by Proposition 1. Note that when p > 2, we added the constant (M + 1)/2 to each entry of the original D in Equation 1, which does not change the coherence either [19]. As A consists of selected columns of A′, the coherence of A is at most 1/√M, from which item 1 follows. Moreover, σ^(l) σ^(l)H = ((M + 1)/M) I_M by Equation 5, where I_M is the M × M identity matrix. Then, we have A A^H = ((M + 1)L/M) I_M = (N/M) I_M by concatenating the L submatrices, which shows item 2. Finally, Equation 5 ensures that no submatrix σ^(l) contains the all-one row masked by a constant factor, so all the row sums of each submatrix are equal to zero due to the DFT-based structure. Consequently, item 3 follows from the concatenation. □

The geometric properties of Lemma 2 meet the sufficient conditions for the new matrix A to achieve the uniqueness-guaranteed statistical restricted isometry property (UStRIP) [5]. See [27] for the proof of the UStRIP of A.

With a deterministic sensing matrix of coherence μ, one can successfully recover every s-sparse signal from its measurement as long as s = O(μ^{-1}) [24], which guarantees unique recovery of sparse signals with sparsity up to O(√M) by our new sensing matrix A. In an attempt to overcome this theoretical bottleneck, the authors of [28] discussed the average performance of compressed sensing under a generic s-sparse model, where the positions of the nonzero entries of an s-sparse signal are distributed uniformly at random and their signs are independent and equally likely to be -1 or +1. In what follows, the average recovery performance for s-sparse signals under the generic s-sparse model is theoretically guaranteed by the sensing matrix A.

Theorem 1. Consider the M × N sensing matrix A in Construction 1. Let x ∈ R^N be an s-sparse signal under the generic s-sparse model. Then, if s = O(M/log N), it is possible to recover x with probability 1 - N^{-1} from the measurement Ax.

Proof. From Lemma 2, A is a tight frame with coherence μ = O(1/√M). For such a matrix, Theorem 2.2 of [29] gives the average recovery guarantee that if s = O(min(μ^{-2}/log N, M/log N)) = O(M/log N), it is possible to recover x with probability 1 - N^{-1} from Ax, which completes the proof. □

4 FFT-based signal measurement and recovery

This section describes measurement and recovery processes with the deterministic compressed sensing matrix A in Construction 1. With the DFT-based submatrix structure, we can make use of the FFT technique in the processes.

4.1 Measurement

The measurement process of compressed sensing is accomplished by y = Ax, where x = (x_0, x_1, …, x_{N-1})^T and y = (y_0, y_1, …, y_{M-1})^T. Let b = M + 1 and let x_l = (x_{bl}, x_{bl+1}, …, x_{bl+b-1})^T be a segment of x of length b, where 0 ≤ l ≤ L - 1. From Equation 5, σ^(l) x_l = (1/√M) Γ^(l) F_b x_l for each l, which implies that the matrix-vector multiplication σ^(l) x_l amounts to performing the b-point DFT of the segment x_l and then multiplying each DFT output by γ_k^(l). Let x̃_k^(l) be the b-point DFT of x_l, i.e.,

x̃_k^(l) = Σ_{t=0}^{b-1} x_{bl+t} e^{-j2πtk/b},  0 ≤ k ≤ b - 1

and X_k^(l) = x̃_{k+1}^(l), 0 ≤ k ≤ M - 1, for each l. As Ax = σ^(0) x_0 + ⋯ + σ^(L-1) x_{L-1}, the measurement Ax is equivalent to adding up the DFT outputs X_k^(l) weighted by γ_k^(l) for 0 ≤ l ≤ L - 1. In other words,

y_k = (1/√M) Σ_{l=0}^{L-1} X_k^(l) γ_k^(l),  0 ≤ k ≤ M - 1.

For fast implementation, the FFT algorithm can be applied to the L distinct segments of x simultaneously in a parallel fashion.
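The measurement steps above can be sketched as follows, a minimal Python/numpy sketch using the Example 3 parameters (M = 8, L = 2); the sparse test signal and helper names are illustrative, not from the paper.

```python
# A sketch of the FFT-based measurement y = Ax of Section 4.1,
# with the Example 3 parameters (M = 8, L = 2).
import numpy as np

M, L = 8, 2
b = M + 1                                       # segment length b = M + 1
N = b * L
D = [26, 52, 42, 41, 13, 21, 38, 19]            # index set from Example 3
gam = np.array([[np.exp(1j*np.pi*D[k]*l/(M - 1) - 1j*np.pi*D[k]*l/(M + 1))
                 for l in range(L)] for k in range(M)])

def measure(x):
    """y_k = (1/sqrt(M)) * sum_l X_k^(l) * gamma_k^(l), via b-point FFTs."""
    y = np.zeros(M, dtype=complex)
    for l in range(L):
        X = np.fft.fft(x[l*b:(l + 1)*b])        # b-point DFT of segment l
        y += X[1:] * gam[:, l]                  # X_k^(l) is DFT bin k + 1
    return y / np.sqrt(M)

# Cross-check against the explicit matrix of Construction 1.
A = np.array([[np.exp(-2j*np.pi*(k + 1)*(n % b)/b) * gam[k, n // b]
               for n in range(N)] for k in range(M)]) / np.sqrt(M)
x = np.zeros(N); x[3], x[12] = 1.0, -1.0        # a 2-sparse test signal
y = measure(x)
print(np.allclose(y, A @ x))                    # True
```

The loop over l can be parallelized, matching the parallel-FFT remark above.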

4.2 Reconstruction

For s-sparse signal recovery, we consider the CoSaMP algorithm presented in Algorithm 2.1 of [20], described in Algorithm 1 of this paper. At each iteration, it forms a signal proxy f and identifies a candidate set Ω for the signal support by locating the largest 2s components of the proxy. The algorithm then merges the candidate set Ω with the support from the previous iteration to create a new support set T. To estimate the target signal x̂_i, it solves a least-squares problem and keeps only the largest s entries of the signal approximation z. Finally, it updates the current samples v for the next iteration.

Algorithm 1 CoSaMP recovery algorithm [20]

In Algorithm 1, the signal proxy is f = A^H v = (f_0, f_1, …, f_{N-1})^T, where v = (v_0, v_1, …, v_{M-1})^T and A^H denotes the conjugate transpose of A. Initially, v is a (noisy) measurement vector u. At each iteration, it is updated by v = u - A x̂_i. Considering the submatrix structure of σ^(l), the matrix-vector multiplication A^H v is performed by the reverse operation of the measurement process, i.e., extracting the weight γ_k^(l) from each measurement and then applying the b-point IDFT. For each l, 0 ≤ l ≤ L - 1, we create a demasked version of v of length b = M + 1, i.e., ṽ^(l) = (ṽ_0^(l), ṽ_1^(l), …, ṽ_M^(l))^T, where ṽ_0^(l) = 0 and

ṽ_{k+1}^(l) = v_k (γ_k^(l))*,  0 ≤ k ≤ M - 1

where '*' denotes the complex conjugate. Applying the b-point IDFT to ṽ^(l) with normalization then yields a segment of f of length b, i.e., f_l = (f_{bl}, f_{bl+1}, …, f_{bl+b-1})^T, where

f_{bl+t} = (1/√M) Σ_{k=0}^{b-1} ṽ_k^(l) e^{j2πtk/b},  0 ≤ t ≤ b - 1.

For fast implementation, the FFT algorithm can be applied to the L distinct demasked versions of v simultaneously in a parallel fashion. Finally, concatenating the L segments forms f = (f_0^T ⋯ f_{L-1}^T)^T.
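The proxy computation above can be sketched in the same style as the measurement, again with the Example 3 parameters; the random test vector v is illustrative.

```python
# A sketch of the FFT-based proxy f = A^H v of Section 4.2,
# with the Example 3 parameters (M = 8, L = 2).
import numpy as np

M, L = 8, 2
b = M + 1
N = b * L
D = [26, 52, 42, 41, 13, 21, 38, 19]
gam = np.array([[np.exp(1j*np.pi*D[k]*l/(M - 1) - 1j*np.pi*D[k]*l/(M + 1))
                 for l in range(L)] for k in range(M)])

def proxy(v):
    """f = A^H v: demask by conj(gamma), prepend a zero, b-point IDFT."""
    f = np.zeros(N, dtype=complex)
    for l in range(L):
        vt = np.concatenate(([0.0], v * gam[:, l].conj()))  # v~^(l), v~_0 = 0
        f[l*b:(l + 1)*b] = b * np.fft.ifft(vt) / np.sqrt(M)
    return f

# Cross-check against the explicit conjugate transpose A^H.
A = np.array([[np.exp(-2j*np.pi*(k + 1)*(n % b)/b) * gam[k, n // b]
               for n in range(N)] for k in range(M)]) / np.sqrt(M)
v = np.random.default_rng(1).standard_normal(M)
print(np.allclose(proxy(v), A.conj().T @ v))    # True
```

Note the factor b compensating numpy's 1/b normalization of `ifft`, so the result matches the unnormalized IDFT sum above up to the 1/√M scaling.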

While updating the current samples at each iteration, the matrix-vector multiplication A x̂_i is also performed by the FFT algorithm, in a manner similar to the measurement process. One may stop the iterations of the CoSaMP algorithm when the norm of the updated samples is sufficiently small or the iteration counter reaches a predetermined value.
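For concreteness, here is a compact sketch of the CoSaMP iteration following Algorithm 2.1 of [20]; the FFT-based products described above are replaced by plain dense matrix products for readability, and the sanity-check matrix and test signal at the end are illustrative, not from the paper.

```python
# A dense-matrix sketch of the CoSaMP iteration in Algorithm 1.
import numpy as np

def cosamp(A, u, s, max_iter=20, tol=1e-6):
    """Recover an s-sparse x from u = Ax by CoSaMP (Needell and Tropp [20])."""
    N = A.shape[1]
    x_hat = np.zeros(N, dtype=complex)
    support = np.array([], dtype=int)
    v = u.astype(complex)
    for _ in range(max_iter):
        f = A.conj().T @ v                    # signal proxy
        omega = np.argsort(np.abs(f))[-2*s:]  # largest 2s proxy components
        T = np.union1d(omega, support)        # merge supports
        z, *_ = np.linalg.lstsq(A[:, T], u, rcond=None)  # least squares
        keep = np.argsort(np.abs(z))[-s:]     # prune to the s largest entries
        support = T[keep]
        x_hat[:] = 0
        x_hat[support] = z[keep]
        v = u - A @ x_hat                     # update the current samples
        if np.linalg.norm(v) < tol:           # stopping criterion
            break
    return x_hat

# Sanity check on a small random matrix with unit-norm columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64))
A /= np.linalg.norm(A, axis=0)
x = np.zeros(64); x[10] = 3.0                 # a 1-sparse test signal
x_hat = cosamp(A, A @ x, s=1)
```

In the FFT-based variant, the products `A.conj().T @ v` and `A @ x_hat` are replaced by the segment-wise b-point FFT routines of this section.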

Table 1 of [20] shows that forming the signal proxy dominates the complexity of each iteration through the cost of the matrix-vector multiplication. Thus, each iteration of the FFT-based CoSaMP recovery algorithm has complexity O(L × b log b) ≈ O(N log M), which is smaller than that of random partial Fourier matrices.

5 Empirical recovery performance

In this section, we compare our new sensing matrices to chirp sensing [6] and random partial Fourier matrices in terms of empirical recovery performance in noiseless and noisy scenarios. For comparison, we assume that a random partial Fourier matrix has the same parameters M and N = (M + 1)L as our new sensing matrix. To obtain one, we randomly selected M rows from the N-point IDFT matrix in each of ten trials, checking the coherence at each trial, and chose the matrix with the smallest coherence for our experiments. For a chirp sensing matrix, on the other hand, M is set to the prime number closest to the parameter used in our new sensing matrix, and N = M L. Each submatrix of the partial chirp sensing matrix has an alternating polarity as in [30].

In the experiments, we measured an s-sparse signal x whose s nonzero entries are either +1 or -1, with positions and signs chosen uniformly at random. For signal reconstruction, the FFT-based CoSaMP algorithm was applied to a total of 2,000 sample vectors measured by the three sensing matrices. In Algorithm 1, the iterations stop if either $\|v\| < 10^{-4}$ or the iteration counter reaches the sparsity level s.
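The test signals described above can be generated in a few lines (a sketch of the experimental setup; the function name and seed handling are ours):

```python
import numpy as np

def random_sparse_signal(N, s, rng=None):
    """Length-N signal with s nonzero entries, each +1 or -1, placed uniformly at random."""
    rng = np.random.default_rng(rng)
    x = np.zeros(N)
    support = rng.choice(N, size=s, replace=False)   # uniformly random support
    x[support] = rng.choice([-1.0, 1.0], size=s)     # uniformly random signs
    return x
```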

Figure 2 displays successful recovery rates of the three sensing matrices from noiseless measurements at various compression ratios, where the sparsity level is s = 64. In the figure, for 5 ≤ L ≤ 30, M = 256 and N = (M + 1)L for our new sensing and random partial Fourier matrices, while M = 257 and N = M L for chirp sensing matrices. With these parameters, each sensing matrix achieves compression ratios of $0.0333 \le M/N \approx 1/L \le 0.2$. A reconstruction is declared successful if $\|x - \hat{x}\| < 10^{-6}$ for the estimate $\hat{x}$. The figure shows that our new sensing matrices have slightly higher recovery rates than the random partial Fourier matrices and almost the same recovery rates as the chirp sensing matrices.

Figure 2

Successful recovery rates of the three sensing matrices from noiseless measurements. The figure displays successful recovery rates of our new class (asterisks), random partial Fourier (white circles), and chirp sensing (white triangles) matrices from noiseless measurements at various compression ratios $M/N$, where the sparsity level is s = 64, M = 256 for our new sensing and random partial Fourier matrices, and M = 257 for chirp sensing matrices.

In noisy compressed sensing, a measured signal is corrupted by additive noise, i.e., u = y + n = Ax + n, where n is additive white Gaussian noise of zero mean and variance $\sigma^2$. The input signal-to-noise ratio (SNR) is then defined as $\mathrm{SNR}_{\mathrm{input}}\,(\mathrm{dB}) = 10 \log_{10} \frac{\|y\|^2}{\sigma^2}$. Also, we define the reconstruction SNR as $\mathrm{SNR}_{\mathrm{reconst}}\,(\mathrm{dB}) = 10 \log_{10} \frac{\|x\|^2}{\|x - \hat{x}\|^2}$ to measure the recovery performance in noisy compressed sensing. In the experiments, we fixed L = 8, where M = 256 and N = L(M + 1) = 2,056 for our new sensing and random partial Fourier matrices, while M = 257 and N = M L = 2,056 for chirp sensing matrices. Figure 3 shows an example of original and reconstructed signals for our new sensing matrix in noisy compressed sensing, where the sparsity level is s = 15 and the input SNR is 15 dB.
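The two SNR definitions above translate directly into code (a small sketch; the function and variable names are ours):

```python
import numpy as np

def input_snr_db(y, sigma2):
    """SNR_input (dB) = 10 log10( ||y||^2 / sigma^2 )."""
    return 10.0 * np.log10(np.linalg.norm(y) ** 2 / sigma2)

def reconstruction_snr_db(x, x_hat):
    """SNR_reconst (dB) = 10 log10( ||x||^2 / ||x - x_hat||^2 )."""
    return 10.0 * np.log10(np.linalg.norm(x) ** 2 / np.linalg.norm(x - x_hat) ** 2)
```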

Figure 3

An example of original and reconstructed signals for our new sensing matrix in noisy compressed sensing. The figure shows an example of original (white circles) and reconstructed (asterisks) signals of length N = 2,056 from a noisy measurement of length M = 256 for our new sensing matrix, where s = 15 and SNR_input = 15 dB.

Figure 4 sketches the reconstruction SNR of the three sensing matrices from noisy measurements, where the input SNR is 15 dB. The figure reveals that our new sensing matrix marginally outperforms the random partial Fourier and chirp sensing matrices at high sparsity levels, although the differences are negligible. Figure 5 demonstrates reconstruction SNR versus input SNR of the three matrices in noisy compressed sensing, where the sparsity level of the original signal is 70. At this sparsity level, we observed that the relationship between reconstruction and input SNR is linear for medium and high input SNR. Our new sensing matrix slightly outperforms the random partial Fourier matrix for high input SNR but shows almost the same trend as the chirp sensing matrix.

Figure 4

Reconstruction SNR of the three sensing matrices in noisy compressed sensing. The figure sketches the reconstruction SNR of our new class (asterisks), random partial Fourier (white circles), and chirp sensing (white triangles) matrices from noisy measurements with SNR_input = 15 dB, where M = 256 and N = 2,056 for our new sensing and random partial Fourier matrices, while M = 257 and N = 2,056 for chirp sensing matrices.

Figure 5

Reconstruction SNR versus input SNR in noisy compressed sensing for 70-sparse input signals. The figure displays reconstruction SNR versus input SNR of our new class (asterisks), random partial Fourier (white circles), and chirp sensing (white triangles) matrices in noisy compressed sensing for 70-sparse input signals, where M = 256 and N = 2,056 for our new sensing and random partial Fourier matrices, while M = 257 and N = 2,056 for chirp sensing matrices.

In addition to the above experiments, we attempted an elementary image reconstruction employing the Haar wavelet transform. An original sparsified image was measured by the three sensing matrices and then reconstructed by the CoSaMP algorithm. We observed that the reconstructed images from the three matrices are hard to distinguish and show almost the same reconstruction SNR.

In conclusion, our new sensing matrix showed empirically reliable recovery performance by the CoSaMP algorithm in both noiseless and noisy scenarios, which is comparable to those of chirp sensing and random partial Fourier matrices.

6 Conclusions

This paper has constructed a new class of Fourier-based compressed sensing matrices using an almost difference set. We showed that a basic partial Fourier matrix, equivalent to the near-optimal partial Fourier codebook presented in [19], can be represented as a concatenation of DFT-based submatrices under row/column rearrangement. Choosing a full or a part of the columns of the concatenated matrix, we then constructed a new sensing matrix which turns out to be an incoherent tight frame. The new sensing matrix guarantees unique sparse reconstruction with high probability for sparse signals with uniformly distributed supports. Moreover, experimental results revealed that our deterministic compressed sensing matrices achieve empirically reliable recovery performance.

In conclusion, compared to existing chirp sensing and random partial Fourier matrices, our new sensing matrices offer the following benefits:

  1.

    Our new deterministic sensing matrices support various parameters of M = p^r and N = (M + 1)L for any prime p and positive integers r and L, 2 ≤ L ≤ M - 1. They are incoherent tight frames for any such M and N. Compared to chirp sensing codes, where M is generally restricted to a prime number, the new matrices therefore provide more options for the parameters M and N, permitting various compression ratios of $M/N \approx 1/L$. A large number of new sensing matrices with a variety of admissible parameters may have many potential applications in compressed sensing.

  2.

    The deterministic row index structure requires much less storage space than random partial Fourier matrices. Moreover, while the N-point FFT is required for random partial Fourier matrices, the DFT-based submatrix structure of our new sensing matrices allows (M + 1)-point FFT processing, which enables efficient signal measurement and reconstruction with low complexity and fast processing. These implementation benefits indicate the potential of our new sensing matrices in practical compressed sensing.

Appendix

Proof of procedure 1

First of all, Lemma 3 shows that the indices of D in Equation 1 are equivalently generated by cyclotomic cosets. In the proof, we use the well-known property that $(x+y)^{p^k} = x^{p^k} + y^{p^k}$ for $x, y \in \mathbb{F}_{p^m}$ and any integers $m, k$.

Lemma 3. Consider all cyclotomic cosets modulo $p^r + 1$ over $p$. From Procedure 1, recall that $\Gamma_2(2^r + 1) \setminus \{0\} = \{u_1, \ldots, u_\delta\}$ if $p = 2$, and $\Gamma_p(p^r + 1) \setminus \{\frac{p^r+1}{2}\} = \{u_1, \ldots, u_\delta\}$ if $p > 2$, respectively. For each $u_i$, let $C_{u_i}$ be the cyclotomic coset having the coset leader $u_i$. Also, let $z_i \in \mathbb{Z}_{p^{2r}-1}$ be the integer satisfying Equation 8 for each $u_i$. Assume $z_i \in C_{s_i}$, where $C_{s_i}$ is a cyclotomic coset modulo $p^{2r} - 1$ over $p$ containing the coset leader $s_i$. Then,

  1.

    In Equation 1 of Remark 1, $I = \bigcup_{1 \le i \le \delta} C_{u_i}$.

  2.

    $z_i = z_j$ if and only if $u_i = u_j$ for $1 \le i, j \le \delta$.

  3.

    $|C_{s_i}| = |C_{u_i}|$ for each $i$, $1 \le i \le \delta$.

  4.

    Finally, the index set $D$ of Equation 1 is given by

    $$D = \bigcup_{1 \le i \le \delta} C_{s_i}. \qquad (9)$$

Proof.

  1.

    If $p = 2$, then $\bigcup_{1 \le i \le \delta} C_{u_i} = \mathbb{Z}_{p^r+1} \setminus \{0\} = I$ is obvious. If $p > 2$, on the other hand, $p \cdot \frac{p^r+1}{2} - \frac{p^r+1}{2} = (p^r+1) \cdot \frac{p-1}{2} \equiv 0 \pmod{p^r+1}$ for odd $p$. Then we have $p \cdot \frac{p^r+1}{2} \equiv \frac{p^r+1}{2} \pmod{p^r+1}$, which means that $\frac{p^r+1}{2}$ is the only element of the cyclotomic coset containing it. Therefore, $\bigcup_{1 \le i \le \delta} C_{u_i} = \mathbb{Z}_{p^r+1} \setminus \{\frac{p^r+1}{2}\} = I$ is also clear.

  2.

    For a given $u_i$, the solution $z_i \in \mathbb{Z}_{p^{2r}-1}$ of Equation 8 is unique from the structure of the finite field $\mathbb{F}_{p^{2r}}$. From the uniqueness, $z_i = z_j$ if and only if $u_i = u_j$ for $1 \le i, j \le \delta$.

  3.

    For each $i$, let $|C_{u_i}| = n_{u_i}$, where

    $$u_i \equiv u_i p^{n_{u_i}} \pmod{p^r + 1}. \qquad (10)$$

    Also, let $|C_{s_i}| = n_{s_i}$. For $z_i \in C_{s_i}$,

    $$z_i \equiv z_i p^{n_{s_i}} \pmod{p^{2r} - 1}. \qquad (11)$$

    Then $\left(1 + \alpha^{(p^r-1)u_i}\right)^{p^{n_{u_i}}} = 1 + \alpha^{(p^r-1)u_i p^{n_{u_i}}} = \alpha^{z_i p^{n_{u_i}}}$, where $\alpha$ is a primitive element of $\mathbb{F}_{p^{2r}}$. Since $1 + \alpha^{(p^r-1)u_i p^{n_{u_i}}} = 1 + \alpha^{(p^r-1)u_i} = \alpha^{z_i}$, we have $\alpha^{z_i p^{n_{u_i}}} = \alpha^{z_i}$, which implies

    $$z_i \equiv z_i p^{n_{u_i}} \pmod{p^{2r} - 1}. \qquad (12)$$

    From Equations 11 and 12, $n_{s_i}$ divides $n_{u_i}$, since $n_{s_i}$ is the smallest positive integer satisfying Equation 11. Similarly, $\left(1 + \alpha^{(p^r-1)u_i}\right)^{p^{n_{s_i}}} = 1 + \alpha^{(p^r-1)u_i p^{n_{s_i}}} = \alpha^{z_i p^{n_{s_i}}} = \alpha^{z_i} = 1 + \alpha^{(p^r-1)u_i}$. Then,

    $$u_i \equiv u_i p^{n_{s_i}} \pmod{p^r + 1}. \qquad (13)$$

    From Equations 10 and 13, $n_{u_i}$ divides $n_{s_i}$, since $n_{u_i}$ is the smallest positive integer satisfying Equation 10. As $n_{s_i}$ and $n_{u_i}$ divide each other, we get $n_{u_i} = n_{s_i}$, or equivalently $|C_{u_i}| = |C_{s_i}|$ for each $i$, $1 \le i \le \delta$.

  4.

    With $z_i \in C_{s_i}$ and $u_i \in C_{u_i}$ satisfying Equation 8, let us say that $C_{s_i}$ is associated with $C_{u_i}$. Recall from Remark 1 that $\alpha^{(p^r+1)e_v} = \mathrm{Tr}_r^{2r}(\alpha^v)$. For each index of $D$ in Equation 1,

    $$\alpha^{d_k} = \alpha^{(p^r+1)e_v - v} = \alpha^{-v} \cdot \mathrm{Tr}_r^{2r}(\alpha^v) = \alpha^{-v}\left(\alpha^v + \alpha^{v p^r}\right) = 1 + \alpha^{(p^r-1)v} \qquad (14)$$

    where $v \in I$ in Equation 1. In Equation 14, if $v = u_i$, then $d_k = z_i$ from Equation 8. Moreover,

    $$\alpha^{z_i p^t} = \left(1 + \alpha^{(p^r-1)u_i}\right)^{p^t} = 1 + \alpha^{(p^r-1)u_i p^t} \qquad (15)$$

    for $1 \le t \le n_{u_i} = n_{s_i}$. Equation 15 implies that each element of $C_{u_i} = \{u_i, u_i p, \ldots, u_i p^{n_{u_i}-1}\}$ induces an element of $C_{s_i} = \{z_i, z_i p, \ldots, z_i p^{n_{s_i}-1}\}$ as a solution of Equation 15. For each element $v \in I = \bigcup_{1 \le i \le \delta} C_{u_i}$, therefore, we conclude that the corresponding solutions $d_k$ of Equation 14 constitute the $\delta$ cyclotomic cosets $C_{s_1}, \ldots, C_{s_\delta}$, each associated with $C_{u_1}, \ldots, C_{u_\delta}$, respectively, which yields Equation 9. From item 2, $C_{s_1}, \ldots, C_{s_\delta}$ are disjoint, and the set $D$ of Equation 9 has $p^r$ distinct elements, since $|D| = \sum_{i=1}^{\delta} |C_{s_i}| = \sum_{i=1}^{\delta} |C_{u_i}| = |I| = p^r$.

From Lemma 3, if $p = 2$, Equation 9 directly presents the indices of $D$ in Equation 2. If $p > 2$, on the other hand, we can simply add $\frac{M+1}{2}$ to each element of Equation 9 to obtain the indices of $D$ in Equation 3. This verifies that Procedure 1 equivalently generates the row index set for our new sensing matrix.
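As a small illustration of the coset machinery in Lemma 3, the cyclotomic cosets modulo $n$ over $p$ can be enumerated as follows. This is a generic sketch: the function name is ours, and Equation 8 itself, which requires arithmetic in $\mathbb{F}_{p^{2r}}$, is not reproduced here.

```python
def cyclotomic_cosets(p, n):
    """Partition Z_n into cyclotomic cosets {x, x*p, x*p^2, ...} (mod n) over p."""
    seen = set()
    cosets = []
    for leader in range(n):
        if leader in seen:
            continue
        coset, x = [], leader
        while x not in seen:        # multiply by p until the orbit closes
            seen.add(x)
            coset.append(x)
            x = (x * p) % n
        cosets.append(coset)
    return cosets
```

For example, with p = 3 and r = 2 (so n = p^r + 1 = 10), the element (p^r + 1)/2 = 5 forms a singleton coset {5}, exactly as item 1 of the proof asserts.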

References

  1. Donoho DL: Compressed sensing. IEEE Trans. Inf. Theory 2006, 52(4):1289-1306.

  2. Candes EJ, Romberg J, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52(2):489-509.

  3. Candes EJ, Tao T: Near-optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inf. Theory 2006, 52(12):5406-5425.

  4. Rauhut H: Compressive sensing and structured random matrices. In Theoretical Foundations and Numerical Methods for Sparse Recovery, Radon Series Comput. Appl. Math., vol. 9, ed. by M. Fornasier (de Gruyter, Berlin, 2010), pp. 1-92.

  5. Calderbank R, Howard S, Jafarpour S: Construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property. IEEE J. Sel. Topics Sig. Proc. 2010, 4(2):358-374.

  6. Applebaum L, Howard SD, Searle S, Calderbank R: Chirp sensing codes: deterministic compressed sensing measurements for fast recovery. Appl. Comput. Harmon. Anal. 2009, 26:283-290.

  7. Alltop W: Complex sequences with low periodic correlations. IEEE Trans. Inf. Theory 1980, 26(3):350-354.

  8. Strohmer T, Heath R: Grassmannian frames with applications to coding and communication. Appl. Comput. Harmon. Anal. 2003, 14(3):257-275.

  9. Calderbank R, Howard S, Jafarpour S: A sublinear algorithm for sparse reconstruction with l2/l2 recovery guarantees. In 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Aruba, 13-16 Dec 2009 (IEEE, Piscataway, 2009), pp. 209-212.

  10. Howard S, Calderbank R, Searle S: A fast reconstruction algorithm for deterministic compressive sensing using second order Reed-Muller codes. In Conference on Information Sciences and Systems (CISS), Princeton, 19-21 March 2008 (IEEE, Piscataway, 2008), pp. 11-15.

  11. Amini A, Montazerhodjat V, Marvasti F: Matrices with small coherence using p-ary block codes. IEEE Trans. Sig. Proc. 2012, 60:172-181.

  12. DeVore RA: Deterministic constructions of compressed sensing matrices. J. Complexity 2007, 23:918-925.

  13. Gurevich S, Hadani R, Sochen N: On some deterministic dictionaries supporting sparsity. J. Fourier Anal. Appl. 14(5):859-876.

  14. Xu Z: Deterministic sampling of sparse trigonometric polynomials. J. Complexity 2011, 27:133-140.

  15. Yu NY: Additive character sequences with small alphabets for compressed sensing matrices. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, 22-27 May 2011 (IEEE, Piscataway, 2011), pp. 2932-2935.

  16. Li S, Gao F, Ge G, Zhang S: Deterministic construction of compressed sensing matrices via algebraic curves. IEEE Trans. Inf. Theory 2012, 58(8):5035-5041.

  17. Mishali M, Eldar YC: Blind multiband signal reconstruction: compressed sensing for analog signals. IEEE Trans. Sig. Proc. 2009, 57(3):993-1009.

  18. Dominguez-Jimenez ME, Gonzalez-Prelcic N, Vazquez-Vilar G, Lopez-Valcarce R: Design of universal multicoset sampling patterns for compressed sensing of multiband sparse signals. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, 25-30 March 2012 (IEEE, Piscataway, 2012), pp. 3337-3340.

  19. Yu NY, Feng K, Zhang A: A new class of near-optimal partial Fourier codebooks from an almost difference set. Des. Codes Cryptogr. 2012. doi:10.1007/s10623-012-9753-8

  20. Needell D, Tropp JA: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 2009, 26:301-321.

  21. Golomb SW, Gong G: Signal Design for Good Correlation: For Wireless Communication, Cryptography, and Radar (Cambridge University Press, Cambridge, 2005).

  22. MacWilliams FJ, Sloane NJA: The Theory of Error-Correcting Codes (North-Holland, Amsterdam, 1977).

  23. Arasu KT, Ding C, Helleseth T, Kumar PV, Martinsen H: Almost difference sets and their sequences with optimal autocorrelation. IEEE Trans. Inf. Theory 2001, 47(7):2934-2943.

  24. Donoho DL, Elad M: Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization. Proc. Natl. Acad. Sci. 2003, 100:2197-2202.

  25. Welch LR: Lower bounds on the maximum cross correlation of signals. IEEE Trans. Inf. Theory 1974, IT-20:397-399.

  26. Kovacević J, Chebira A: An Introduction to Frames. Foundations and Trends in Signal Processing (now Publishers, Hannover, 2008).

  27. Yu NY: On statistical restricted isometry property of a new class of deterministic partial Fourier compressed sensing matrices. In International Symposium on Information Theory and its Applications (ISITA), Honolulu, 28-31 Oct 2012 (IEEE, Piscataway, 2012), pp. 287-291.

  28. Candes E, Plan Y: Near-ideal model selection by l1 minimization. Ann. Stat. 2009, 37(5A):2145-2177.

  29. Jafarpour S, Duarte MF, Calderbank R: Beyond worst-case reconstruction in deterministic compressed sensing. In IEEE International Symposium on Information Theory (ISIT), Cambridge, 1-6 July 2012 (IEEE, Piscataway, 2012), pp. 1862-1866.

  30. Ni K, Datta S, Mahanti P, Roudenko S, Cochran D: Efficient deterministic compressed sensing for images with chirps and Reed-Muller codes. SIAM J. Imaging Sci. 2011, 4(3):931-953.


Acknowledgements

This work was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.

Author information

Corresponding author

Correspondence to Nam Yul Yu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Yu, N.Y., Li, Y. Deterministic construction of Fourier-based compressed sensing matrices using an almost difference set. EURASIP J. Adv. Signal Process. 2013, 155 (2013). https://doi.org/10.1186/1687-6180-2013-155

Keywords

  • Discrete Fourier Transform
  • Tight Frame
  • Sparse Signal
  • Signal Proxy
  • Linear Feedback Shift Register