Deterministic construction of Fourier-based compressed sensing matrices using an almost difference set

In this paper, a new class of Fourier-based matrices is studied for deterministic compressed sensing. Initially, a basic partial Fourier matrix is introduced by choosing the rows deterministically from the inverse discrete Fourier transform (DFT) matrix. By row/column rearrangement, the matrix is represented as a concatenation of DFT-based submatrices. Then, all or a subset of the columns of the concatenated matrix are selected to build a new M × N deterministic compressed sensing matrix, where M = p^r and N = L(M + 1) for prime p, a positive integer r, and a positive integer L ≤ M − 1. Theoretically, the sensing matrix forms a tight frame with small coherence. Moreover, the matrix theoretically guarantees unique recovery of sparse signals with uniformly distributed supports. Thanks to the structure of the sensing matrix, the fast Fourier transform (FFT) technique can be applied for efficient signal measurement and reconstruction. Experimental results demonstrate that the new deterministic sensing matrix shows empirically reliable recovery performance of sparse signals under the CoSaMP algorithm.


Introduction
Compressed sensing (or compressive sampling) is a novel and emerging technology with a variety of applications in imaging, data compression, and communications. In compressed sensing, one can recover sparse signals of high dimension from incomplete measurements. Mathematically, measuring an N-dimensional signal x ∈ R^N with an M × N measurement matrix A yields the M-dimensional vector y = Ax, where M < N. Imposing the requirement that x is s-sparse, i.e., the number of nonzero entries in x is at most s, one can recover x exactly with high probability by an l_1-minimization method or a greedy algorithm, both of which are computationally tractable.
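The measurement model above can be sketched numerically as follows; this is only an illustration, using a random Gaussian matrix as a stand-in for the deterministic matrices constructed later, and all sizes here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, s = 64, 256, 5                 # compressive regime: M < N, sparsity s

# s-sparse signal: s random positions with signs +1/-1
x = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=s)

# Stand-in measurement matrix (random Gaussian, not the paper's construction)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x                            # M incomplete measurements of x

assert y.shape == (M,)
```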
Many research activities on the theory and practice of compressed sensing have been triggered since Donoho, Candes, Romberg, and Tao published their marvelous theoretical works [1][2][3]. These efforts revealed that the measurement matrix plays a crucial role in the recovery of s-sparse signals. Although a random matrix provides many theoretical benefits [4], it has the drawbacks [5] of high complexity and large storage in its practical implementation. As an alternative, we may consider a deterministic matrix, where well-known codes and sequences have been employed for the construction, e.g., chirp sequences [6], Alltop sequences [7,8], Kerdock and Delsarte-Goethals codes [9], second-order Reed-Muller codes [10], and BCH codes [11]. Other techniques for deterministic construction, based on finite fields, representation theory, characters and algebraic curves, and multicoset codes, can also be found in [12][13][14][15][16][17][18]. These deterministic matrices offer empirically reliable recovery performance while allowing fast processing and low complexity.
In this paper, we study the deterministic construction of a new class of Fourier-based compressed sensing matrices. Initially, a p^r × (p^{2r} − 1) basic partial Fourier matrix, equivalent to the partial Fourier codebook of [19], is introduced by selecting p^r rows from the (p^{2r} − 1)-point inverse discrete Fourier transform (DFT) matrix according to an almost difference set, where p is a prime number and r is a positive integer. By rearranging the rows and/or columns, we show that the matrix can be represented as a concatenation of DFT-based submatrices. Then, all or a subset of the columns of the concatenated matrix are selected to build a new M × N sensing matrix for deterministic compressed sensing, where M = p^r and N = L(M + 1) for L ≤ M − 1. The concatenated structure allows the new sensing matrix to offer various admissible column numbers while remaining an incoherent tight frame, and enables efficient processing for measurement and reconstruction in compressed sensing. We would like to stress that it is not a trivial task to obtain the concatenated structure from the basic partial Fourier matrix by row/column rearrangement. With the parameters M and N, our new deterministic matrix can achieve various permissible compression ratios of M/N ≈ 1/L for a positive integer L, 2 ≤ L ≤ M − 1.
Theoretically, the new sensing matrix forms a tight frame with small coherence. Moreover, our new sensing matrix theoretically guarantees, with high probability, unique recovery of sparse signals with uniformly distributed supports. From the structure of our new sensing matrix, the fast Fourier transform (FFT) technique can be applied for efficient signal measurement and reconstruction. Experimental results demonstrate that the new deterministic compressed sensing matrix, together with the CoSaMP recovery algorithm [20], empirically guarantees sparse signal recovery with high reliability. We observe that the empirical recovery performance of our new sensing matrices is similar to those of chirp sensing [6] and random partial Fourier matrices. However, our new matrices offer several practical benefits, requiring less storage and complexity than random partial Fourier matrices and providing more choices of the parameters M and N than chirp sensing codes.
The rest of this paper is organized as follows. In Section 2, we introduce basic concepts and notations to understand this work. Section 3 modifies the structure of a basic partial Fourier matrix and presents a new sensing matrix for deterministic construction. We also discuss the efficient implementation and the theoretical recovery guarantee of the new sensing matrix. Section 4 describes the signal measurement process and the CoSaMP recovery algorithm employing the FFT technique. In Section 5, we demonstrate the empirical recovery performance of our new sensing matrices in noiseless and noisy settings. Finally, concluding remarks are given in Section 6.

Preliminaries
This section introduces fundamental concepts and notations for understanding this work. In subsections 2.1 and 2.2, we briefly introduce the concepts of finite fields, trace functions, and cyclotomic cosets for signal processing researchers. For more details, see [21] and [22].
Let k be a positive integer that divides m. A trace function is a linear mapping from F_{p^m} onto F_{p^k} defined by Tr_k^m(x) = x + x^{p^k} + x^{p^{2k}} + ... + x^{p^{m−k}}, where the addition is performed in the field. The trace function algebraically defines the well-known m-sequences or pseudo-noise (PN) sequences, which have been widely used in wireless communications. For instance, if p = 2 and k = 1, then (Tr_1^m(α^t)), 0 ≤ t ≤ 2^m − 2, where α is a primitive element of F_{2^m}, is a binary m-sequence of length 2^m − 1, where each entry is 0 or 1. The m-sequence, defined by a trace function, is efficiently generated by a linear feedback shift register (LFSR), which is a common method in communication standards.
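As a minimal sketch of LFSR-based m-sequence generation, the following Fibonacci LFSR assumes the primitive polynomial x^4 + x + 1 (the tap positions and seed are illustrative, not tied to the paper's construction):

```python
def lfsr_msequence(taps, nbits, state=1):
    """Generate one period (2**nbits - 1 bits) of a binary m-sequence with a
    Fibonacci LFSR.  `taps` lists feedback tap positions (1-indexed from the
    output end), e.g. [4, 1] for the primitive polynomial x^4 + x + 1."""
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state & 1)            # output the low bit
        fb = 0
        for t in taps:                   # XOR the tapped stages
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return seq

seq = lfsr_msequence(taps=[4, 1], nbits=4)
assert len(seq) == 15 and sum(seq) == 8  # balance: 2^(m-1) ones per period
```

The balance check reflects a defining property of m-sequences: each period of length 2^m − 1 contains exactly 2^{m−1} ones.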

Basic partial Fourier matrix A
In this subsection, we introduce a basic framework from which a new sensing matrix will be developed in Section 3. Throughout this paper, we set M = p^r and N = p^{2r} − 1 = M^2 − 1 for prime p and a positive integer r. Also, we assume that each column of a sensing matrix has unit l_2-norm, where the l_2-norm is denoted by ‖·‖. Table 1 summarizes all the variables and notations for the development of a new sensing matrix.
In [19], Yu, Feng, and Zhang presented a new class of (N, M) near-optimal partial Fourier codebooks using an almost difference set [23]. The codebook can be equivalently translated into an M × N partial Fourier matrix, which contains M rows selected from the N-point inverse DFT (IDFT) matrix according to the almost difference set.

Table 1. Variables and notations.

Variable or notation : Meaning
M : M = p^r for prime p and a positive integer r
F_{M+1} : M × (M + 1) DFT matrix without the first row

From the results of [19], Proposition 1 describes the basic partial Fourier matrix and its geometric properties with the notations of this paper.

Proposition 1. For prime p and a positive integer r, let M = p^r and N = p^{2r} − 1 = M^2 − 1. Let D = {d_0, d_1, ..., d_{M−1}} be an index set defined in Lemma 2 of [19], which will be given below in Remark 1. Choosing M rows from the N-point IDFT matrix according to D, we construct an M × N matrix A, where each entry is given by a_{k,n} = (1/√M) exp(j2πd_k n/N) for 0 ≤ k ≤ M − 1 and 0 ≤ n ≤ N − 1. Then, the coherence [24] of A, μ(A) = max_{n_1 ≠ n_2} |a_{n_1}^H a_{n_2}|, equals 1/√M, where a_{n_1} is a column vector of A and a_{n_1}^H denotes its conjugate transpose. The coherence nearly achieves the Welch bound [25], approximately 1/√(M + 1), for sufficiently large M. Moreover, A forms a tight frame [26], as its rows are mutually orthogonal.
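The tight-frame and coherence notions of Proposition 1 can be checked numerically; the sketch below uses an arbitrary row selection in place of the almost-difference-set indices D, so the coherence value differs from Proposition 1, but the tight-frame identity A A^H = (N/M) I holds for any choice of distinct rows of the IDFT matrix:

```python
import numpy as np

M = 8
N = M**2 - 1                           # N = 63
rows = np.arange(1, M + 1)             # illustrative row indices, not the set D

# M x N partial IDFT matrix with unit-norm columns
A = np.exp(2j * np.pi * np.outer(rows, np.arange(N)) / N) / np.sqrt(M)

# Any M distinct rows of the N-point IDFT yield a tight frame
assert np.allclose(A @ A.conj().T, (N / M) * np.eye(M))

# Coherence of the unit-norm columns vs. the Welch lower bound
gram = np.abs(A.conj().T @ A)
coherence = np.max(gram - np.eye(N))   # largest off-diagonal inner product
welch = np.sqrt((N - M) / ((N - 1) * M))
assert coherence >= welch - 1e-12
```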
The coherence and the tightness of A do not change if we select the rows from the DFT matrix, instead of the IDFT matrix.In this paper, we decide to use the IDFT matrix.

Remark 1.
With the notations of this paper, the row index set D is defined by Equation 1 of [19], where all the operations in D are computed modulo N. In Equation 1, D is an almost difference set, and e_v is a nonnegative integer satisfying α^{(p^r+1)e_v} = Tr_r^{2r}(α^v), where α is a primitive element in F_{p^{2r}}. To determine the index set D, one needs to compute e_v using a trace function, which might be difficult for signal processing researchers. In Section 3, we will present an alternative method that generates the indices of D by successive multiplications of predetermined values, which does not require the computation of e_v. Therefore, it suffices to assume that e_v is simply an integer in this paper.

Construction of new Fourier-based sensing matrices
To build a new M × N sensing matrix, we begin with the M × (M^2 − 1) basic partial Fourier matrix A and then choose N of its columns after a row/column rearrangement. Our approach differs from the conventional one of random or deterministic selection of M rows out of an N × N Fourier matrix, but we will show that it ultimately provides reliable recovery performance and practical benefits in implementation.
In deterministic compressed sensing, it is desirable that a sensing matrix supports a variety of admissible column numbers, so that signals of various lengths can be sensed. For this purpose, one needs to consider how to select the columns from A for a new M × N sensing matrix with N < M^2 − 1. In this section, we apply a row/column permutation to the partial Fourier matrix A to obtain a variant M × (M^2 − 1) matrix A'. Then, we choose all or a subset of the columns of A' to construct a new M × N sensing matrix A, where N = (M + 1)L for a positive integer L, 2 ≤ L ≤ M − 1. The row/column rearrangement offers the following benefits for our new sensing matrix A in compressed sensing, which is the motivation:

1. If one selects the columns arbitrarily from A, the resulting sensing matrix may not be a tight frame in general. In fact, one needs to be careful in selecting the columns of A to achieve the tightness of the resulting matrix. Through the row/column rearrangement, we will show that the new sensing matrix A has a concatenated structure of (M + 1)-point DFT-based submatrices. With this structure, A remains a tight frame provided N is a multiple of M + 1, which will be shown in Lemma 2.

2. The concatenated structure of A also allows efficient (M + 1)-point FFT processing for measurement and recovery of sparse signals in compressed sensing. Note that if one selects the columns arbitrarily from the original A, the resulting matrix generally requires (M^2 − 1)-point FFT processing, which has higher computational complexity. Moreover, one may enjoy fast processing via parallel FFT computations using the concatenated structure, which will be discussed in Section 4.

Structure
Recall the partial Fourier matrix A in Proposition 1. If p = 2, we use the original index set D in Equation 1, as given in Equation 2. On the other hand, if p > 2, we redefine the index set D by adding (M + 1)/2 to each original index in Equation 1, as given in Equation 3. This modification for p > 2 ensures that each entry of D is nonzero when computed modulo M + 1, which also holds for p = 2. See the proof of Lemma 1 for the implication. Now, we suggest a column rearrangement of the original A. For given l, 0 ≤ l ≤ M − 2, let us take the M + 1 column vectors of indices n = (M − 1)t + l from A, where 0 ≤ t ≤ M. With the column vectors, we then obtain an M × (M + 1) submatrix σ^(l), where each entry is given by

σ^(l)_{k,t} = (1/√M) γ^(l)_k exp(j2πd_k t/(M + 1)), 0 ≤ k ≤ M − 1, 0 ≤ t ≤ M. (4)

In Equation 4, γ^(l)_k is defined as γ^(l)_k = exp(j2πd_k l/N). Next, we show that the submatrix σ^(l) has a DFT-based structure if the row indices of D are arranged in appropriate order. In Lemma 1, we denote by F_{M+1} the M × (M + 1) DFT matrix without the first row, where each entry is exp(−j2π(k + 1)t/(M + 1)) for 0 ≤ k ≤ M − 1 and 0 ≤ t ≤ M.

Lemma 1. In the index set of Equation 2 for p = 2 or Equation 3 for p > 2, the entries of D = {d_0, d_1, ..., d_{M−1}} can be arranged such that d_k ≡ −(k + 1) (mod M + 1) for 0 ≤ k ≤ M − 1.
Then, each submatrix σ^(l) can be expressed by

σ^(l) = (1/√M) Γ^(l) F_{M+1}, (5)

where Γ^(l) = diag(γ^(l)_0, γ^(l)_1, ..., γ^(l)_{M−1}), which clearly shows the DFT-based structure of σ^(l).
Proof. We investigate how exp(j2πd_k t/(M + 1)) in Equation 4 behaves for p = 2 and p > 2, respectively.

Case p = 2: In this case, each element of D in Equation 2 can be rewritten as in Equation 6, which yields d_k ≡ −(k + 1) (mod M + 1) in Equation 4. Consequently, the entries of Equation 4 form an M × (M + 1) submatrix σ^(l), where each row is taken from the (M + 1)-point DFT matrix, excluding the all-one row, and then masked by γ^(l)_k. The structure of Equation 5 is then straightforward.
Case p > 2: In this case, each index of D in Equation 3 is as given there. Reordering the indices, we get d_k ≡ −(k + 1) (mod M + 1) for 0 ≤ k ≤ M − 1, as shown in Equation 7. Then, Equation 7 yields exp(j2πd_k t/(M + 1)) = exp(−j2π(k + 1)t/(M + 1)) in Equation 4. It is now clear why we introduced the modified index set D of Equation 3 for p > 2: by ensuring d_k ≢ 0 (mod M + 1) for every k, the modification guarantees the same DFT-based submatrix structure as that of p = 2. Finally, the entries of Equation 4 again form an M × (M + 1) submatrix σ^(l), where each row is taken from F_{M+1} and then masked by γ^(l)_k, which yields Equation 5.
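The factorization of Equation 5 can be verified numerically. The sketch below uses a hypothetical index set satisfying d_k ≡ −(k + 1) (mod M + 1), not the paper's actual almost difference set, to show that Equation 4 then collapses to a masked DFT:

```python
import numpy as np

M, b = 4, 5                       # b = M + 1
N0 = M**2 - 1                     # basic matrix length, N0 = 15
l = 2                             # submatrix index, 0 <= l <= M - 2

# Hypothetical row indices with d_k ≡ -(k+1) (mod M+1)
d = np.array([(-(k + 1)) % N0 for k in range(M)])

# Equation 4: sigma^(l)_{k,t} = (1/sqrt(M)) gamma^(l)_k exp(j 2 pi d_k t / b)
t = np.arange(b)
gamma = np.exp(2j * np.pi * d * l / N0)          # gamma^(l)_k = exp(j 2 pi d_k l / N0)
sigma = gamma[:, None] * np.exp(2j * np.pi * np.outer(d, t) / b) / np.sqrt(M)

# Equation 5: sigma^(l) = (1/sqrt(M)) Gamma^(l) F_{M+1},
# where F_{M+1} is the b-point DFT matrix without its first (all-one) row
F = np.exp(-2j * np.pi * np.outer(np.arange(1, M + 1), t) / b)
assert np.allclose(sigma, np.diag(gamma) @ F / np.sqrt(M))
```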
Remark 2. In both cases of p, one needs to ensure that the entries of the index set D satisfy d_k ≡ −(k + 1) (mod M + 1) in order to achieve the DFT-based submatrix structure of Lemma 1. If p = 2, the original entries of Equation 2 meet the condition from Equation 6. On the other hand, if p > 2, Equation 7 shows that we have to rearrange the entries of Equation 3 by circularly shifting the order by (M + 1)/2. If the entries of D are generated by a different method, which will be introduced in Procedure 1, the index set D should be sorted for both p such that the entries are in decreasing order when computed modulo M + 1, i.e., D (mod M + 1) ≡ {M, M − 1, ..., 1}, to satisfy the condition.

Implementation
In Construction 1, generating the row index set D efficiently is a key issue in implementing the deterministic sensing matrix A. In D, since α^{(p^r+1)e_v} = Tr_r^{2r}(α^v) is an element of a p^r-ary m-sequence of period p^{2r} − 1 [21], we can compute it by a 2-stage LFSR. Therefore, each element of D in Equation 1 can be generated by an LFSR, a logarithm operation, and other basic arithmetic over finite fields.
As the computation over finite fields is not trivial, we introduce an alternative method to generate the indices of D more efficiently. In this method, we use cyclotomic cosets modulo p^r + 1 and p^{2r} − 1 over p, respectively, which are always valid for any prime p, since gcd(p, p^r + 1) = gcd(p, p^{2r} − 1) = 1. In what follows, we describe the procedure, where the proof will be given in the Appendix.
When p = 2, identify a set of nonzero coset leaders {z_1, ..., z_δ}, where α is a primitive element in F_{p^{2r}}. For each z_i, 1 ≤ i ≤ δ, generate a cyclotomic coset modulo p^{2r} − 1 over p containing z_i by C_{s_i} = {z_i, z_i p, ..., z_i p^{n_{s_i}−1}}, where n_{s_i} is the smallest positive integer such that z_i ≡ z_i p^{n_{s_i}} (mod p^{2r} − 1). Note that the coset leader s_i is not necessarily equal to z_i. The indices of D are then obtained by an addition applied to each element of ∪_{1≤i≤δ} C_{s_i}, computed modulo p^{2r} − 1. From Remark 2, the index set D should finally be sorted such that the entries are in decreasing order when computed modulo M + 1, i.e., D (mod M + 1) ≡ {M, M − 1, ..., 1}.
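The cyclotomic cosets used in the procedure are straightforward to generate; the following sketch (with the M = 4, i.e., p^{2r} − 1 = 15 case as an illustrative example) shows the coset construction only, not the remaining steps of Procedure 1:

```python
def cyclotomic_coset(z, p, n):
    """Cyclotomic coset modulo n over p containing z:
    {z, z*p, z*p^2, ...} reduced mod n, until the values repeat."""
    coset, x = [], z % n
    while x not in coset:
        coset.append(x)
        x = (x * p) % n
    return coset

# Cosets modulo p^{2r} - 1 = 15 over p = 2
assert cyclotomic_coset(1, 2, 15) == [1, 2, 4, 8]
assert cyclotomic_coset(3, 2, 15) == [3, 6, 12, 9]
```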

Theoretical recovery performance
In this subsection, we discuss the geometric properties and the theoretical recovery guarantee of our new sensing matrix A.

Lemma 2. The M × N sensing matrix A in Construction 1 has the following properties.
1. The coherence is upper bounded by 1/√M.
2. A forms a tight frame.
3. All the row sums are equal to zero.
Proof. Recall that the new sensing matrix A consists of selected columns of the matrix A' obtained from the basic matrix by row/column rearrangement. Since the coherence of a matrix does not change under row/column permutation, the coherence of A' is also 1/√M from Proposition 1. Note that when p > 2, we have added the constant (M + 1)/2 to each entry of the original D in Equation 1, which does not change the coherence [19] either. As A is a set of selected columns of A', the coherence of A is at most 1/√M, from which item 1 is true. Moreover, σ^(l) σ^(l)H = ((M + 1)/M) I_M from Equation 5, where I_M is the M × M identity matrix. Then, we have AA^H = ((M + 1)L/M) I_M = (N/M) I_M by concatenating the L submatrices, which shows that item 2 is true. Finally, Equation 5 ensures that no submatrix σ^(l) contains the all-one row masked by a constant factor, which concludes that all the row sums of each submatrix are equal to zero, due to the DFT-based structure. Consequently, item 3 is true from the concatenation.
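The tight-frame and zero-row-sum properties of Lemma 2 can be checked on a small concatenated matrix. The index set and masks below are hypothetical stand-ins (any d_k with d_k ≡ −(k + 1) (mod M + 1) and unit-modulus masks exhibit the same structure); the paper's actual D comes from an almost difference set:

```python
import numpy as np

M, b, L = 4, 5, 3                 # b = M + 1, L concatenated submatrices
N0, N = M**2 - 1, (M + 1) * 3     # basic length 15, new length N = (M+1)L

# Hypothetical indices with d_k ≡ -(k+1) (mod M+1)
d = np.array([(-(k + 1)) % N0 for k in range(M)])

F = np.exp(-2j * np.pi * np.outer(np.arange(1, M + 1), np.arange(b)) / b)
subs = []
for l in range(L):
    gamma = np.exp(2j * np.pi * d * l / N0)       # unit-modulus masks
    subs.append(np.diag(gamma) @ F / np.sqrt(M))  # sigma^(l) = Gamma F / sqrt(M)
A = np.hstack(subs)                               # concatenated M x N matrix

assert np.allclose(A @ A.conj().T, (N / M) * np.eye(M))  # tight frame
assert np.allclose(A.sum(axis=1), 0)                     # zero row sums
```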
The geometric properties of Lemma 2 meet the sufficient conditions for the new matrix A to achieve the uniqueness-guaranteed statistical restricted isometry property (UStRIP) [5]. See [27] for the proof of the UStRIP of A.
With a deterministic sensing matrix of coherence μ, one can successfully recover every s-sparse signal from its measurement as long as s = O(μ^{−1}) [24], which guarantees unique recovery of sparse signals with sparsity up to O(√M) by our new sensing matrix A. In an attempt to overcome this theoretical bottleneck, the authors of [28] discussed the average performance of compressed sensing under a generic s-sparse model, where the positions of the nonzero entries of an s-sparse signal are distributed uniformly at random and their signs are independent and equally likely to be −1 or +1. For such a model, Theorem 1 shows that if s = O(M/log N), it is possible to recover x with probability 1 − N^{−1} from Ax.

FFT-based signal measurement and recovery
This section describes the measurement and recovery processes with the deterministic compressed sensing matrix A in Construction 1. With the DFT-based submatrix structure, we can make use of the FFT technique in both processes.

Measurement
The measurement process of compressed sensing is accomplished by y = Ax, where x = (x_0, x_1, ..., x_{N−1})^T and y = (y_0, y_1, ..., y_{M−1})^T. Let b = M + 1 and x_l = (x_{bl}, x_{bl+1}, ..., x_{bl+b−1})^T be a segment of x of length b, where 0 ≤ l ≤ L − 1. From Equation 5, σ^(l) x_l = (1/√M) Γ^(l) F_b x_l for each l, which implies that the matrix-vector multiplication σ^(l) x_l amounts to performing the b-point DFT of the segment x_l and then multiplying each DFT output by the weight γ^(l)_k, for 0 ≤ l ≤ L − 1. In other words, y = Σ_{l=0}^{L−1} σ^(l) x_l = (1/√M) Σ_{l=0}^{L−1} Γ^(l) F_b x_l. For fast implementation, the FFT algorithm can be applied to the L distinct segments of x simultaneously in a parallel fashion.
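The FFT-based measurement can be sketched as follows, with hypothetical unit-modulus masks in place of the paper's γ^(l)_k; each segment's b-point FFT is computed and its DC output (the excluded all-one row of F_b) is dropped:

```python
import numpy as np

def measure(x, gammas, M):
    """FFT-based measurement y = (1/sqrt(M)) sum_l Gamma^(l) F_b x_l, b = M+1.
    gammas[l] holds the mask (gamma^(l)_0, ..., gamma^(l)_{M-1})."""
    b = M + 1
    y = np.zeros(M, dtype=complex)
    for l, g in enumerate(gammas):
        seg = x[l * b:(l + 1) * b]
        y += g * np.fft.fft(seg)[1:]   # b-point DFT, dropping the DC output
    return y / np.sqrt(M)

# Check against explicit multiplication by the concatenated matrix
rng = np.random.default_rng(1)
M, b, L = 4, 5, 3
gammas = [np.exp(2j * np.pi * rng.random(M)) for _ in range(L)]
F = np.exp(-2j * np.pi * np.outer(np.arange(1, M + 1), np.arange(b)) / b)
A = np.hstack([np.diag(g) @ F / np.sqrt(M) for g in gammas])
x = rng.standard_normal(b * L)
assert np.allclose(measure(x, gammas, M), A @ x)
```

In practice the L segment FFTs are independent and can run in parallel, as noted above.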

Reconstruction
For s-sparse signal recovery, we consider the CoSaMP algorithm presented in Algorithm 2.1 of [20], which is described in Algorithm 1 of this paper. At each iteration, it forms a signal proxy f and identifies a potential candidate of the signal support by locating the largest 2s components of the proxy. The algorithm then merges the candidate with the support from the previous iteration to create a new support set T. To estimate the target signal, it solves a least-squares problem over T and keeps only the largest s entries of the signal approximation z. Finally, it updates the current samples v for the next iteration.
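The steps just described can be sketched generically, using dense matrix-vector products rather than the FFT-based implementation of this section (sizes and the Gaussian test matrix below are illustrative):

```python
import numpy as np

def cosamp(A, u, s, max_iter=50, tol=1e-6):
    """Sketch of CoSaMP (Algorithm 2.1 of Needell-Tropp) for an M x N
    matrix A, measurement u, and sparsity level s."""
    M, N = A.shape
    x = np.zeros(N, dtype=complex)
    v = u.astype(complex)
    for _ in range(max_iter):
        f = A.conj().T @ v                        # form signal proxy
        omega = np.argsort(np.abs(f))[-2 * s:]    # largest 2s proxy components
        T = np.union1d(omega, np.nonzero(x)[0])   # merge supports
        z = np.linalg.lstsq(A[:, T], u, rcond=None)[0]  # least squares on T
        x = np.zeros(N, dtype=complex)
        keep = np.argsort(np.abs(z))[-s:]         # prune to the s largest
        x[T[keep]] = z[keep]
        v = u - A @ x                             # update current samples
        if np.linalg.norm(v) < tol:               # halting criterion
            break
    return x
```

For example, with a 64 × 128 Gaussian matrix and a 4-sparse ±1 signal, the iteration typically recovers the signal to machine precision from noiseless measurements.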
Algorithm 1: CoSaMP recovery algorithm [20]
Input: sensing matrix A, measurement vector u, sparsity level s
Initialize: x^0 ← 0, v ← u, i ← 0
repeat
  i ← i + 1
  Form signal proxy: f ← A^H v
  Identify: Ω ← indices of the largest 2s components of f
  Merge supports: T ← Ω ∪ supp(x^{i−1})
  Estimate by least squares: z|_T ← argmin ||u − A_T z||, z|_{T^c} ← 0
  Prune: x^i ← z restricted to its s largest entries
  Update current samples: v ← u − Ax^i
until a halting criterion is true

In Algorithm 1, the signal proxy is f = A^H v, where A^H denotes the conjugate transpose of A. Initially, v is a (noisy) measurement vector u. At each iteration, it is updated by v = u − Ax^i. Considering the submatrix structure of σ^(l), the matrix-vector multiplication A^H v is performed by the reverse operation of the measurement process, i.e., extracting the weight γ^(l)_k from each measurement and then applying the b-point IDFT. For each l, 0 ≤ l ≤ L − 1, the demasked vector is v^(l) = (γ^(l)*_0 v_0, γ^(l)*_1 v_1, ..., γ^(l)*_{M−1} v_{M−1})^T, where '*' denotes the complex conjugate. Applying the b-point IDFT to v^(l) with normalization then yields a segment of f of length b, i.e., f_l = (f_{bl}, f_{bl+1}, ..., f_{bl+b−1})^T. For fast implementation, the FFT algorithm can be applied to the L distinct demasked versions of v simultaneously in a parallel fashion. Finally, concatenating the L segments yields the proxy f. While updating the current samples at each iteration, the matrix-vector multiplication Ax^i is also performed by the FFT algorithm in a manner similar to the measurement process. One may stop the iterations of the CoSaMP algorithm if the norm of the updated samples is sufficiently small or the iteration counter reaches a predetermined value.
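The proxy computation just described, f = A^H v via demasking and b-point IDFTs, can be sketched as follows (again with hypothetical masks standing in for the paper's γ^(l)_k):

```python
import numpy as np

def proxy(v, gammas, M):
    """Signal proxy f = A^H v via b-point IFFTs, b = M + 1: demask each copy
    of v by conj(gamma^(l)) and apply the inverse DFT."""
    b = M + 1
    segs = []
    for g in gammas:
        w = np.concatenate(([0.0], np.conj(g) * v))  # re-insert the missing DC bin
        segs.append(b * np.fft.ifft(w))              # b*ifft(w) = F_b^H (demasked v)
    return np.concatenate(segs) / np.sqrt(M)

# Check against explicit A^H v for the concatenated structure
rng = np.random.default_rng(2)
M, b, L = 4, 5, 2
gammas = [np.exp(2j * np.pi * rng.random(M)) for _ in range(L)]
F = np.exp(-2j * np.pi * np.outer(np.arange(1, M + 1), np.arange(b)) / b)
A = np.hstack([np.diag(g) @ F / np.sqrt(M) for g in gammas])
v = rng.standard_normal(M) + 1j * rng.standard_normal(M)
assert np.allclose(proxy(v, gammas, M), A.conj().T @ v)
```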
Table 1 of [20] indicates that forming the signal proxy dominates the algorithm complexity through the cost of matrix-vector multiplication. Thus, each iteration of the FFT-based CoSaMP recovery algorithm has complexity O(L · b log b) ≈ O(N log M), which is smaller than that of random partial Fourier matrices.

Empirical recovery performance
In this section, we compare our new sensing matrices to chirp sensing [6] and random partial Fourier matrices in terms of empirical recovery performance in noiseless and noisy scenarios. For comparison, we assume that a random partial Fourier matrix has the same parameters M and N = (M + 1)L as those of our new sensing matrix. To obtain it, we made ten trials of selecting M rows randomly from the N-point IDFT matrix, where the coherence was checked at each trial. Then, we chose the one with the smallest coherence for our experiments. For a chirp sensing matrix, on the other hand, M is set to the prime number closest to the parameter used in our new sensing matrix, and N = ML. Each submatrix of the partial chirp sensing matrix has an alternating polarity as in [30]. Through the experiments, we measured an s-sparse signal x, where the s nonzero entries are either +1 or −1, and their positions and signs are chosen uniformly at random. For signal reconstruction, the FFT-based CoSaMP algorithm was applied to a total of 2,000 sample vectors measured by the three sensing matrices. In Algorithm 1, the iterations are stopped if either ||v|| < 10^{−4} or the iteration counter reaches the sparsity level s.
Figure 2 displays successful recovery rates of the three sensing matrices from noiseless measurements at various compression ratios, where the sparsity level is s = 64. In the figure, for 5 ≤ L ≤ 30, M = 256 and N = (M + 1)L for our new sensing and random partial Fourier matrices, while M = 257 and N = ML for chirp sensing matrices. With these parameters, each sensing matrix achieves compression ratios from about 0.0333 to 0.2. A success is declared in reconstruction if ||x − x̂|| < 10^{−6} for the estimate x̂. The figure shows that our new sensing matrices have slightly higher recovery rates than the random partial Fourier matrices but almost the same recovery rates as those of chirp sensing codes.
In noisy compressed sensing, a measured signal is corrupted by additive noise, i.e., u = y + n = Ax + n, where n is additive white Gaussian noise of zero mean and variance σ^2, and the input signal-to-noise ratio (SNR) is denoted by SNR_input. The reconstruction SNR, defined as SNR_recon (dB) = 10 log_10(||x||^2/||x − x̂||^2), measures the recovery performance in noisy compressed sensing. In the experiments, we fixed L = 8, where M = 256 and N = L(M + 1) = 2,056 for our new sensing and random partial Fourier matrices, while M = 257 and N = ML = 2,056 for chirp sensing matrices. Figure 3 shows an example of original and reconstructed signals for our new sensing matrix in noisy compressed sensing, where the sparsity level is s = 15 and the input SNR is 15 dB.
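The noisy measurement and the SNR figures of merit can be sketched as follows; the input-SNR convention here (noise power set relative to the average measurement power) is an assumption for illustration:

```python
import numpy as np

def reconstruction_snr_db(x, x_hat):
    """Reconstruction SNR in dB: 10*log10(||x||^2 / ||x - x_hat||^2)."""
    return 10 * np.log10(np.sum(np.abs(x) ** 2) /
                         np.sum(np.abs(x - x_hat) ** 2))

def add_awgn(y, snr_db, rng):
    """Add circularly symmetric white Gaussian noise at a target input SNR,
    measured against the average measurement power (an assumed convention)."""
    sigma2 = np.mean(np.abs(y) ** 2) / 10 ** (snr_db / 10)
    n = rng.normal(scale=np.sqrt(sigma2 / 2), size=(2, len(y)))
    return y + n[0] + 1j * n[1]
```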
Figure 4 sketches the reconstruction SNR of the three sensing matrices from noisy measurements. In the figure, the input SNR is 15 dB. The figure reveals that our new sensing matrix outperforms the random partial Fourier and chirp sensing matrices at high sparsity levels, although the differences are small. Figure 5 demonstrates the reconstruction SNR versus the input SNR of the three matrices in noisy compressed sensing, where the sparsity level of the original signal is 70. At this sparsity level, we observed that the relationship between the reconstruction and input SNR is linear for medium and high input SNR. Our new sensing matrix slightly outperforms the random partial Fourier matrix for high input SNR and shows almost the same trend as the chirp sensing code.
In addition to the above experiments, we attempted an elementary image reconstruction employing the Haar wavelet transform. An original sparsified image was measured by the three sensing matrices and then reconstructed by the CoSaMP algorithm. We observed that the successfully reconstructed images from the three different matrices are hard to distinguish and show almost the same reconstruction SNR.
In conclusion, our new sensing matrix showed empirically reliable recovery performance by the CoSaMP algorithm in both noiseless and noisy scenarios, which is comparable to those of chirp sensing and random partial Fourier matrices.

Conclusions
This paper has constructed a new class of Fourier-based compressed sensing matrices using an almost difference set. We showed that a basic partial Fourier matrix, equivalent to the near-optimal partial Fourier codebook presented in [19], can be represented as a concatenation of DFT-based submatrices under row/column rearrangement. Choosing all or a subset of the columns of the concatenated matrix, we then constructed a new sensing matrix, which turns out to be an incoherent tight frame. The new sensing matrix guarantees, with high probability, unique sparse reconstruction of sparse signals with uniformly distributed supports. Moreover, experimental results revealed that our deterministic compressed sensing achieves empirically reliable recovery performance.
In conclusion, compared to existing chirp sensing and random partial Fourier matrices, our new sensing matrices have the benefits summarized below:

Appendix
Proof of Procedure 1. First of all, Lemma 3 shows that the indices of D in Equation 1 are equivalently generated by cyclotomic cosets. In the proof, we use the well-known property that (x + y)^{p^k} = x^{p^k} + y^{p^k} for x, y ∈ F_{p^m} and any integers m, k.
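The Frobenius property used in the proof can be sanity-checked in the prime-field special case F_p (the general statement over F_{p^m} requires extension-field arithmetic, which this sketch does not implement):

```python
# (x + y)^p ≡ x^p + y^p (mod p) for all x, y in F_p; here p = 7 as an example
p = 7
for x in range(p):
    for y in range(p):
        assert pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
```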

Construction 1.
by concatenating them. Clearly, the rearranged M × (M^2 − 1) matrix is equivalent to the original matrix A under the row/column rearrangement. In what follows, Construction 1 presents a formal expression of the new sensing matrix A. Let M = p^r for prime p and a positive integer r. Let D = {d_0, d_1, ..., d_{M−1}} be the row index set which satisfies d_k ≡ −(k + 1) (mod M + 1) for 0 ≤ k ≤ M − 1. Let L be a positive integer and N = (M + 1)L, where 2 ≤ L ≤ M − 1.

Figure 1
Figure 1. Concatenated structure of the new sensing matrix A. It illustrates the concatenated structure of the new sensing matrix A in Construction 1.

Theorem 1.
independent and equally likely to be −1 or +1. In what follows, the average recovery performance of s-sparse signals under the generic s-sparse model is theoretically guaranteed by the sensing matrix A. Consider the M × N sensing matrix A in Construction 1. Let x ∈ R^N be an s-sparse signal with the generic s-sparse model. Then, if s = O(M/log N), it is possible to recover x with probability 1 − N^{−1} from the measurement Ax. Proof. From Lemma 2, A is a tight frame with coherence μ = O(1/√M). For such a matrix, Theorem 2.2 of [29] guarantees recovery of x with probability 1 − N^{−1} whenever s = O(M/log N), which completes the proof.

Figure 2 Figure 3
Figure 2. Successful recovery rates of the three sensing matrices from noiseless measurements. The figure displays successful recovery rates of our new class (asterisks), random partial Fourier (white circle), and chirp sensing (white triangle) matrices from noiseless measurements at various compression ratios M/N, where the sparsity level is s = 64, M = 256 for our new sensing and random partial Fourier matrices, and M = 257 for chirp sensing codes.

Figure 4 Figure 5
Figure 4. Reconstruction SNR of the three sensing matrices in noisy compressed sensing. The figure sketches the reconstruction SNR of our new class (asterisks), random partial Fourier (white circle), and chirp sensing (white triangle) matrices from noisy measurements with SNR_input = 15 dB, where M = 256 and N = 2,056 for our new sensing and random partial Fourier matrices, while M = 257 and N = 2,056 for chirp sensing codes.

1.
Our new deterministic sensing matrices support various parameters of M = p^r and N = (M + 1)L for any prime p and positive integers r and L, 2 ≤ L ≤ M − 1. They are incoherent tight frames for any such M and N. Compared to chirp sensing codes, where M is generally restricted to a prime number, the new matrices therefore provide more options for the parameters M and N, permitting various compression ratios of M/N ≈ 1/L. A large number of new sensing matrices with a variety of admissible parameters may have many potential applications in compressed sensing.

2. The deterministic row index structure requires much less storage space than random partial Fourier matrices. Moreover, while the N-point FFT is required for random partial Fourier matrices, the DFT-based submatrix structure of our new sensing matrices allows (M + 1)-point FFT processing, which enables efficient signal measurement and reconstruction with low complexity and fast processing. These implementation benefits indicate the potential of our new sensing matrices in practical compressed sensing.