  • Research
  • Open Access

Fast algorithm for designing periodic/aperiodic sequences with good correlation and stopband properties

EURASIP Journal on Advances in Signal Processing 2018, 2018:57

https://doi.org/10.1186/s13634-018-0579-z

  • Received: 30 March 2017
  • Accepted: 22 August 2018
  • Published:

Abstract

Periodic/aperiodic sequences with low autocorrelation sidelobes are widely used in many fields, such as communication and radar systems. Besides the correlation property, the frequency stopband property is often considered in sequence design when systems operate in a crowded electromagnetic environment. In this paper, we aim to design periodic/aperiodic sequences with low autocorrelation sidelobes and arbitrary frequency stopbands, and propose an efficient algorithm named the FFT (fast Fourier transform)-based conjugate gradient algorithm. To calculate the step size efficiently, a method based on Taylor series expansion is developed. By changing the number of FFT points, the proposed algorithm can easily switch between generating periodic and aperiodic sequences. Since the gradient and step size can be implemented with FFT operations and Hadamard products, the whole algorithm is computationally efficient and can be used to design very long sequences. Numerical experiments show that the proposed algorithm outperforms state-of-the-art algorithms in terms of running time.

Keywords

  • Autocorrelation
  • Integrated sidelobe level
  • Stopband property
  • Gradient
  • Periodic/aperiodic sequence design

1 Introduction

Sequences with low periodic or aperiodic autocorrelation sidelobes have been studied since the 1950s. The applications of such sequences cover both military and civil fields, such as pulse compression radar and wireless communication [1, 2]. In pulse compression radar, the output of the pulse compression can be regarded as the convolution of each scattering point with the waveform’s autocorrelation. As traditional waveforms (e.g., the linear frequency modulation (LFM) signal) suffer from high range sidelobes, the detection performance and imaging quality of radar systems are limited. Thus, sequences with low autocorrelation sidelobes are applied to suppress the sidelobe jamming from adjacent strong scattering points and thereby improve the detection of weak targets [3]. Moreover, sequences with low autocorrelation sidelobes have many advantages in code division multiple access (CDMA) systems [4], for instance, reducing the influence of the multipath effect and improving the synchronization precision.

Owing to the extensive applications and great practical value of sequences with low autocorrelation sidelobes, many scholars have shown great interest in generating such sequences. The related studies can be classified into two categories. The first category is the design of binary sequences using search methodologies, such as the exhaustive search method [5] and evolutionary algorithms [6]. The second category is the design of polyphase sequences [7–13]. In recent years, with the development of optimization theory, the problem of designing polyphase sequences with low autocorrelation sidelobes has become a research hotspot. Stochastic optimization algorithms [7, 8] were applied to design such sequences at an early stage. However, because of their high computational burden, these algorithms are impractical and unable to design long sequences. To address this problem more efficiently, two algorithms based on the fast Fourier transform (FFT), CAN (cyclic algorithm new) and Accelerated-MISL (accelerated monotonic minimizer for integrated sidelobe level), were proposed in [9, 10]. In [9], the CAN algorithm was proposed to design long unimodular sequences by minimizing a criterion that is “almost equivalent” to the integrated sidelobe level (ISL) metric. Different from the CAN algorithm, the Accelerated-MISL algorithm [10] was derived via the majorization-minimization (MM) method. Since the Accelerated-MISL algorithm employs an acceleration scheme that greatly reduces the iteration number, it outperforms the CAN algorithm in both the merit factor (MF) and computational efficiency. Besides, there is some literature on designing sequences with a zero correlation zone [11–13] or low periodic autocorrelation [14].

Apart from the correlation property, the stopband property (i.e., suppressing specified frequency bands) is often considered in sequence design, either to suppress narrowband interference or to avoid certain frequency bands reserved for other applications, such as navigation and military communications [15–19]. Since discontinuities in the spectrum of the transmit waveform lead to high autocorrelation sidelobes, the correlation and stopband properties must be considered jointly. At an early stage, the cyclic algorithms SCAN (stopband CAN) and WeSCAN (weighted SCAN) were proposed in [20] to design sequences with good correlation and stopband properties. However, since WeSCAN requires N FFT operations and an eigenvalue decomposition at each iteration, its convergence is slow. Afterwards, the improved cyclic algorithms SDCA (steepest descent-based cyclic algorithm) [21] and MMSE-WISL [22] were proposed for sparse frequency waveform design on the basis of the CAN and WeCAN (weighted CAN) algorithms [9]. However, like the WeSCAN algorithm, these two improved algorithms are also time-consuming and not suitable for long waveform design [22]. In recent years, some efficient algorithms, such as the majorization-minimization (MM) method [10], gradient-based algorithms [23, 24], the Lagrange programming neural network (LPNN) [25], alternating projection [26], and the alternating direction method of multipliers (ADMM) [27], have also been applied to unimodular sequence design with the stopband property, where LPNN and ADMM can generate both periodic and aperiodic waveforms. It is worth noting that sidelobe suppression amounts to flattening the sequence spectrum, whereas stopband suppression requires notches in it; the two requirements conflict, which makes the joint design harder than sidelobe suppression alone.

In this paper, we consider the problem of designing periodic/aperiodic sequences with low autocorrelation sidelobes and arbitrary frequency stopbands. By using the relationship between the autocorrelation function and the power spectrum density (PSD), the problem is formulated as an unconstrained minimization problem with respect to the sequence phase in the frequency domain. To solve this problem, an efficient algorithm named the FFT-based conjugate gradient algorithm (which we call FCGA) is proposed. Unlike traditional gradient algorithms, the search step size of the FCGA, which is hard to calculate directly, is derived via Taylor series expansion. As the gradient and step size of the FCGA can be implemented with FFT operations and Hadamard products, the algorithm is efficient and can design very long sequences. Moreover, by selecting the number of FFT points, the proposed algorithm can be easily applied to design periodic or aperiodic sequences.

The remaining sections of the paper are organized as follows. In Section 2, the sequence design problem that incorporates both the correlation and stopband properties is formulated. In Section 3, the phase gradient and step size are derived, and then, a modified descent direction is given to guarantee the monotonicity of the FCGA. To show the effectiveness of the proposed algorithm, several numerical experiments are presented in Section 4. Finally, Section 5 gives the conclusion.

Notation: Boldface upper case letters denote matrices while boldface lower case letters denote column vectors. (·)*, (·)T, and (·)H denote the complex conjugate, transpose, and conjugate transpose, respectively. ∥·∥ denotes the Euclidean norm of a vector. ∘ denotes the Hadamard product. X(m,n) denotes the (m,n)th element of matrix X. \({\mathop {\mathrm Re}\nolimits } (\cdot)\) and \({\mathop {\mathrm Im}\nolimits } (\cdot)\) denote the real and imaginary parts, respectively. Diag(x) is a diagonal matrix formed from x. \({{\mathcal {F}}_{N}}(\cdot),{\mathcal {F}}_{N}^{- 1}(\cdot)\) denote the N-point FFT and inverse FFT (IFFT) operations, respectively.

2 Problem formulation

As mentioned in Section 1, this paper focuses on the problem of designing sequences with good correlation and frequency stopband properties. That is to say, the sequences should satisfy two conditions: one is low autocorrelation sidelobes (here we minimize the power of the sidelobes, i.e., the ISL metric); the other is low stopband power. In the following, two criteria are established to measure the autocorrelation and the stopband power, and then we formulate the design problem as an unconstrained minimization problem.

2.1 Correlation property

Let \(\left \{ {{x_{n}}} \right \}_{n = 0}^{N-1}\) denote the complex sequence to be designed. The vector form of the sequence can be expressed as
$$ {\textbf{x} = {\left[ {{x_{0}},...,{x_{N-1}}} \right]^{T}}}. $$
(1)
The periodic and aperiodic autocorrelations of the sequence are respectively defined as
$$ \begin{aligned} {{\tilde r}_{k}} &= {\sum\limits_{n = 0}^{N-1}} {{x_{n}}x_{(n - k)\bmod N}^{*}} = \tilde r_{- k}^{*}, \\ {r_{k}} &= {\sum\limits_{n = k}^{N-1}} {{x_{n}}x_{n - k}^{*}} = r_{- k}^{*},k = 0,...,N - 1. \end{aligned} $$
(2)
Then, the corresponding integrated sidelobe levels (ISLs) which express the goodness of the periodic and aperiodic correlation properties are given by
$$ {I_{per}}{\mathrm{=}}\sum\limits_{k = 1}^{N - 1} {{{\left| {{{\tilde r}_{k}}} \right|}^{2}}},\;\;\;{I_{aper}}{\mathrm{=}}\sum\limits_{k = 1}^{N - 1} {{{\left| {{r_{k}}} \right|}^{2}}}. $$
(3)
Let
$$ \begin{aligned} {{{\mathbf{r}}_{1}}} &{= {\left[ {{{\tilde r}_{0}},{{\tilde r}_{1}},...,{{\tilde r}_{N - 1}}} \right]^{T}},}\\ {{{\mathbf{r}}_{2}}} &{= {\left[ {{r_{0}},{r_{1}},...,{r_{N - 1}},0,r_{N - 1}^{*},...,r_{1}^{*}} \right]^{T}}.} \end{aligned} $$
(4)
denote the periodic and aperiodic autocorrelation vectors, respectively. Then, Iper and Iaper can be written more compactly as
$$ \begin{aligned} {I_{per}} &{= {\sum\limits_{k = 0}^{N - 1} {{{\left| {{{\tilde r}_{k}}} \right|}^{2}}} - {{\left| {{{\tilde r}_{0}}} \right|}^{2}}} = {\textbf{r}_{1}^{H}{\textbf{r}_{1}} - {N^{2}}},}\\ {I_{aper}} &{= \frac{1}{2}\left({\sum\limits_{k = 1 - N}^{N - 1} {{{\left| {{r_{k}}} \right|}^{2}}} - {{\left| {{r_{0}}} \right|}^{2}}} \right) = \frac{1}{2}\left({\textbf{r}_{2}^{H}{\textbf{r}_{2}} - {N^{2}}} \right),} \end{aligned} $$
(5)
where we assume \({\tilde r_{0}}={r_{0}} = \sum \limits _{n = 0}^{N-1} {{x_{n}}x_{n}^{*}} = N\). Let \({\textbf {F}_{\tilde N}}\) be the \({\tilde N} \times {\tilde N}\) discrete Fourier transform (DFT) matrix with the following expression
$$ {{{\mathbf{F}}_{\tilde N}}\left({m,n} \right) = {e^{- j\frac{{2mn\pi}}{{\tilde{N}}}}},0 \le m,n < {\tilde N}.} $$
(6)
Then, the \({\tilde N} \times {\tilde N}\) inverse discrete Fourier transform (IDFT) matrix is \({{\textbf {F}_{\tilde N}^{H}}/{\tilde N}}\). It is well known that the autocorrelation function is the inverse Fourier transform of the power spectrum density (PSD); thus, we have the following relationship [13, 14]:
$$ \begin{aligned} {{{\mathbf{r}}_{1}}} &{= {\mathcal{F}}_{N}^{- 1}\left({{{\mathbf{p}}_{1}}} \right) = \frac{1}{N}{\mathbf{F}}_{N}^{H}{{\mathbf{p}}_{1}},}\\ {{{\mathbf{r}}_{2}}} &{= {\mathcal{F}}_{2N}^{- 1}\left({{{\mathbf{p}}_{2}}} \right) = \frac{1}{{2N}}{\mathbf{F}}_{2N}^{H}{{\mathbf{p}}_{2}}.} \end{aligned} $$
(7)
where p1 and p2 are the PSDs of length N and 2N, respectively. By substituting (7) into (5), the frequency domain expressions of Iper and Iaper are given by
$$ {}\begin{aligned} {{I_{per}}} &{= \frac{1}{{{N^{2}}}}{\mathbf{p}}_{1}^{H}{{\mathbf{F}}_{N}}{\mathbf{F}}_{N}^{H}{{\mathbf{p}}_{1}} - {N^{2}} = \frac{1}{N}\left({{\mathbf{p}}_{1}^{H}{{\mathbf{p}}_{1}} - {N^{3}}} \right),}\\ {{I_{aper}}} &{= \frac{1}{2}\left(\! {\frac{1}{{4{N^{2}}}}{\mathbf{p}}_{2}^{H}{{\mathbf{F}}_{2N}}{\mathbf{F}}_{2N}^{H}{{\mathbf{p}}_{2}} - {N^{2}}}\! \right) = \frac{1}{{4N}}\! \left({{\mathbf{p}}_{2}^{H}{{\mathbf{p}}_{2}}\! - 2{N^{3}}} \right),} \end{aligned} $$
(8)

where \({{\textbf {F}}_{\tilde N}}{\textbf {F}}_{\tilde N}^{H} = \tilde N{{\textbf {I}}_{\tilde N}},\tilde N = N,2N\), and \({{\textbf {I}}_{\tilde N}}\) denotes the \(\tilde N \times \tilde N\) identity matrix.
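As a sanity check on (7)–(8), the direct and frequency-domain ISL computations can be compared numerically. The following sketch uses Python/NumPy (an assumption; the paper's own experiments use Matlab) with a random unimodular sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = np.exp(2j * np.pi * rng.random(N))   # random unimodular sequence

# Aperiodic ISL directly from the definitions (2)-(3).
# np.correlate(x, x, "full") returns lags -(N-1)..(N-1); index N-1 is r_0 = N.
r = np.correlate(x, x, mode="full")
isl_aper = np.sum(np.abs(r[N:]) ** 2)    # positive lags k = 1..N-1

# Aperiodic ISL from the 2N-point PSD, Eq. (8).
p2 = np.abs(np.fft.fft(x, 2 * N)) ** 2
isl_aper_f = (np.sum(p2 ** 2) - 2 * N ** 3) / (4 * N)

# Periodic ISL both ways; the periodic autocorrelation is the IFFT of the PSD, Eq. (7).
p1 = np.abs(np.fft.fft(x)) ** 2
rt = np.fft.ifft(p1)
isl_per = np.sum(np.abs(rt[1:]) ** 2)
isl_per_f = (np.sum(p1 ** 2) - N ** 3) / N
```

Both pairs of values agree up to floating-point error, confirming that the constant terms in (8) are consistent with the definitions in (2)–(3).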

Generally, the problem of designing periodic/aperiodic sequences with low autocorrelation sidelobes can be regarded as a minimization problem of the periodic/aperiodic ISL metrics. From (8), it is easy to see that the metrics of the periodic and aperiodic autocorrelations have the same form. Thus, ignoring the constant terms in (8), the periodic/aperiodic ISL metrics reduce to the following unified criterion:
$$ {J_{ACF}} = \frac{1}{{2\tilde N}}{\textbf{p}^{H}}\textbf{p}, $$
(9)

where \({\textbf {p}} = {[{p_{0}},...,{p_{\tilde N-1}}]^{T}}\) and \(\tilde N\) is the number of FFT points. When \(\tilde N = N\), (9) is the criterion for the periodic autocorrelation, and when \(\tilde N = 2N\), (9) becomes the criterion for the aperiodic autocorrelation.

2.2 Stopband property

In a crowded electromagnetic environment, the signal spectrum is often polluted by interference or signals from other users. Thus, designing sequences with frequency stopbands is sometimes quite necessary. Without loss of generality, we consider normalized frequencies here.

Assume the set of frequency stopbands Ω ⊂ [0,1] can be written as
$$ \Omega = \mathop \cup \limits_{k = 1}^{{N_{s}}} \left({{f_{k1}},{f_{k2}}} \right), $$
(10)
where (fk1,fk2) denotes one stopband and Ns denotes the number of the stopbands. Considering the \(\tilde N{\mathrm {- point}}\) FFT operations, the corresponding point set of the stopband set can be expressed as
$$ {{\Omega_{1}} = \mathop \cup \limits_{k = 1}^{{N_{s}}} \left[ {\left\lfloor {\tilde N{f_{k1}}} \right\rfloor,\left\lceil {\tilde N{f_{k2}}} \right\rceil} \right] \subset \left[ {0,\tilde N} \right],} $$
(11)
where ⌊·⌋ and ⌈·⌉ denote the floor and ceiling operations, respectively. Therefore, the suppression of the stopband set Ω is equivalent to minimizing the PSD in Ω1, i.e., p(k), k ∈ Ω1. Let us define the frequency weight vector as
$$ \begin{aligned} {{\mathbf{w}}_{f}} &= {{\left[ {{{\bar w}_{0}},...,{{\bar w}_{\tilde N-1}}} \right]^{T}}},\\ {{\bar w}_{p}} &= \left\{ {\begin{array}{*{20}{l}} {1,}&{p \in {\Omega_{1}}}\\ {0,}&{{\text{otherwise}}} \end{array}} \right.. \end{aligned} $$
(12)
As the PSD is nonnegative, the criterion for the stopband property can be constructed as the square of the Euclidean norm of the weighted PSD:
$$ {J_{PSD}} = {\left\| {{{\mathbf{w}}_{f}} \circ {\mathbf{p}}} \right\|^{2}} = {{\mathbf{p}}^{H}}{\text{Diag}}\left({{{\mathbf{w}}_{f}}} \right)\textbf{p}. $$
(13)

Actually, the criterion JPSD sums the squared PSD samples over the frequency stopbands and thus measures the stopband power. By minimizing (13), the spectral power in Ω can be suppressed.
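To make (11)–(13) concrete, the sketch below builds the binary weight vector and evaluates JPSD. It assumes Python/NumPy, and the stopbands (0.20, 0.30) and (0.70, 0.75) are illustrative values, not taken from the paper:

```python
import numpy as np

def stopband_weights(stopbands, Nt):
    """Binary frequency weights w_f of Eq. (12): 1 on the stopband bins (11), 0 elsewhere."""
    wf = np.zeros(Nt)
    for f1, f2 in stopbands:
        lo = int(np.floor(Nt * f1))      # floor in Eq. (11)
        hi = int(np.ceil(Nt * f2))       # ceiling in Eq. (11)
        wf[lo:hi + 1] = 1.0              # inclusive bin range
    return wf

rng = np.random.default_rng(0)
N, Nt = 64, 128
x = np.exp(2j * np.pi * rng.random(N))

wf = stopband_weights([(0.20, 0.30), (0.70, 0.75)], Nt)   # illustrative stopbands
p = np.abs(np.fft.fft(x, Nt)) ** 2       # PSD samples of x
J_psd = np.sum((wf * p) ** 2)            # Eq. (13): stopband criterion
```

Because the weights are binary, (13) reduces to summing the squared PSD over the stopband bins only.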

2.3 The minimization problem

In order to suppress both the autocorrelation sidelobes and the stopband power, JACF and JPSD should both be minimized. Thus, the design problem can be regarded as a bi-objective (or Pareto) optimization problem. However, since a frequency stopband inevitably raises the sidelobe level, it is in general impossible to find one solution that minimizes JACF and JPSD simultaneously. To make a compromise between correlation sidelobes and frequency stopbands, we apply the scalarization procedure, i.e., we turn the bi-objective optimization problem into a single-objective one. After scalarization, the single objective function that incorporates (9) and (13) can be expressed as
$$ \begin{aligned} {J_{T}} &= \lambda {J_{PSD}} + \left({1 - \lambda} \right){J_{ACF}}\\ &= \lambda {{\mathbf{p}}^{H}}{\text{Diag}}\left({{{\mathbf{w}}_{f}}} \right){\mathbf{p}} + \frac{{1 - \lambda}}{{2\tilde N}}{{\mathbf{p}}^{H}}{\mathbf{p}}\\ & = {{\mathbf{p}}^{H}}{\text{Diag}}\left({\mathbf{w}} \right){\mathbf{p}}. \end{aligned} $$
(14)

where λ ∈ [0,1] is a weight coefficient that balances JACF against JPSD, and \({\mathbf {w}} = \lambda {{\mathbf {w}}_{f}} + \frac {{1 - \lambda }}{{2\tilde N}}{\mathbf{1}}\), with 1 denoting the all-ones vector.

Defining \(\textbf {f} = {\left [ {{f_{0}},...,{f_{\tilde N-1}}} \right ]^{T}}\) as the \(\tilde N{\mathrm {- point}}\) DFT of x (with x zero-padded to length \(\tilde N\) when \(\tilde N > N\)), and
$$ {{}{\textbf{a}_{p}} = {\left[ {1,{e^{j{\omega_{p}}}},...,{e^{j{\omega_{p}}(N - 1)}}} \right]^{T}},{\omega_{p}} = \frac{{2\pi}}{{\tilde N}}p,p = 0,...,\tilde N-1,} $$
(15)
then one can get
$$ \begin{aligned} {\textbf{f}} &{= {\left[ {\textbf{a}_{0}^{H}\textbf{x},...,\textbf{a}_{\tilde N-1}^{H}\textbf{x}} \right]^{T}}.}\\ {\textbf{p}} &{= {\textbf{f}^{*}} \circ \textbf{f} = {\left[ {{\textbf{x}^{H}}{\textbf{a}_{0}}\textbf{a}_{0}^{H}\textbf{x},...,{\textbf{x}^{H}}{\textbf{a}_{\tilde N-1}}\textbf{a}_{\tilde N-1}^{H}\textbf{x}} \right]^{T}}.} \end{aligned} $$
(16)
By substituting (16) into (14), the objective function JT can be reformulated as
$$ {J_{T}} = {\sum\limits_{p = 0}^{\tilde N-1}} {{w_{p}}{{\left({{\textbf{x}^{H}}{\textbf{a}_{p}}\textbf{a}_{p}^{H}\textbf{x}} \right)}^{2}}}, $$
(17)

where \({w_{p}} = \lambda {\bar w_{p}} + \frac {{1 - \lambda }}{{2\tilde N}}\) is the pth element of the weight vector w.

In general, transmitter components such as the power amplifier have a maximum amplitude clip [2]. To maximize the transmitted power, it is desirable for the sequence amplitude to reach this clip level, which means that the sequence should be unimodular. Therefore, considering the unimodular constraint, the problem of interest can be written as
$$ \begin{aligned} \min \;\;&{J_{T}} = {\sum\limits_{p = 0}^{\tilde N-1}} {{w_{p}}{{\left({{\textbf{x}^{H}}{\textbf{a}_{p}}\textbf{a}_{p}^{H}\textbf{x}} \right)}^{2}}} \\ {\mathrm{s}}{\mathrm{.t}}{\mathrm{.}}\;\;&\left| {{x_{n}}} \right| = 1,\;\;n = {0,...,N-1}. \end{aligned} $$
(18)
From (18), we can see that the unimodular constraint is the constraint related to the sequence amplitude. Thus, the minimization problem can be solved by optimizing the sequence phase. Let
$$ {\boldsymbol{\phi}} = {\left[ {{\phi_{0}},...,{\phi_{N-1}}} \right]^{T}} $$
(19)
denote the phase vector of the sequence x, then we have
$$ \textbf{x} = {{\left[ {{e^{j{\phi_{0}}}},...,{e^{j{\phi_{N-1}}}}} \right]^{T}}.} $$
(20)
Thus, the problem (18) can be seen as an unconstrained minimization problem on ϕ:
$$ \mathop {\min}\limits_{\boldsymbol{\phi}} \;\;{J_{T}}\left({\boldsymbol{\phi}}\right) = {\sum\limits_{p = 0}^{\tilde N-1}} {{w_{p}}{{\left({{\textbf{x}^{H}}{\textbf{a}_{p}}\textbf{a}_{p}^{H}\textbf{x}} \right)}^{2}}}. $$
(21)

3 The FCGA algorithm

A good property of the Accelerated-MISL algorithm in [10] is its monotone decreasing property. However, that algorithm minimizes a majorization function rather than the original objective. To minimize the objective function (21) directly while guaranteeing the monotone decreasing property, the conjugate gradient algorithm (CGA) is applied here. In this section, we first derive the gradient and the step size, which are the two key ingredients of the CGA, and then summarize the FCGA algorithm.

3.1 Phase gradient

To derive the phase gradient ∇ϕJT, we first work out the derivative of the function JT(ϕ) with respect to the phase ϕi, i=0,...,N−1:
$$ {\frac{{\partial {J_{T}}}}{{\partial {\phi_{i}}}} = 2\sum\limits_{p = 0}^{\tilde N-1} {{w_{p}}\left({{\textbf{x}^{H}}{\textbf{a}_{p}}\textbf{a}_{p}^{H}\textbf{x}} \right)\frac{{\partial {\textbf{x}^{H}}{\textbf{a}_{p}}\textbf{a}_{p}^{H}\textbf{x}}}{{\partial {\phi_{i}}}}}}. $$
(22)
Let
$$ {{}\textbf{A}_{p}} = {\textbf{a}_{p}}\textbf{a}_{p}^{H},\;\;{\textbf{A}_{p}}\left({m{+1},n{+1}} \right) = a_{mn}^{p},\;\;{0} \le m,n \le {N-1}. $$
(23)
Then, the gradient ∇ϕJT can be written as
$$ {\nabla_{\boldsymbol{\phi}}}{J_{T}} = 2\sum\limits_{p = 0}^{\tilde N-1} {{w_{p}}\left({{\textbf{x}^{H}}{\textbf{A}_{p}}\textbf{x}} \right){\nabla_{\boldsymbol{\phi}}}\left({{\textbf{x}^{H}}{\textbf{A}_{p}}\textbf{x}} \right)}. $$
(24)
In the following, we derive the explicit expression of ∇ϕ(xHApx). According to (20) and (23), the expansion of xHApx is given by
$$ {\textbf{x}^{H}}{\textbf{A}_{p}}\textbf{x} = {\sum\limits_{m = 0}^{N-1} {\sum\limits_{n = 0}^{N-1} {a_{mn}^{p}{e^{j\left({{\phi_{n}} - {\phi_{m}}} \right)}}}}}. $$
(25)
Then, the derivative of xHApx with respect to ϕi can be denoted as
$$ \begin{aligned} &{\frac{{\partial {\textbf{x}^{H}}{\textbf{A}_{p}}\textbf{x}}}{{\partial {\phi_{i}}}} = j{x_{i}}\sum\limits_{m = 0}^{N-1} {a_{mi}^{p}x_{m}^{*}} - jx_{i}^{*}\sum\limits_{n = 0}^{N-1} {a_{in}^{p}{x_{n}}}}. \end{aligned} $$
(26)
Since \({\textbf {A}_{p}} = \textbf {A}_{p}^{H}\), we have \(a_{mi}^{p} = {\left ({a_{im}^{p}} \right)^{*}}\). Thus, the derivative (26) can be rewritten as
$$ {\frac{{\partial {\textbf{x}^{H}}{\textbf{A}_{p}}\textbf{x}}}{{\partial {\phi_{i}}}} = 2{\mathop{\mathrm Re}\nolimits} \left({ - jx_{i}^{*}\sum\limits_{n = 0}^{N-1} {a_{in}^{p}{x_{n}}}} \right)}. $$
(27)
By stacking (27) in a vector, the gradient ∇ϕ(xHApx) is given by
$$ {\nabla_{\boldsymbol{\phi}}}\left({{\textbf{x}^{H}}{\textbf{A}_{p}}\textbf{x}} \right) = 2{\mathop{\mathrm Re}\nolimits} \left({ - j{\textbf{x}^{*}} \circ \left({{\textbf{A}_{p}}\textbf{x}}\right)} \right). $$
(28)
According to (24) and (28), the explicit expression of the phase gradient is given by (see Appendix 1)
$$ {\nabla_{\boldsymbol{\phi}}}{J_{T}} = 4\tilde N{\mathop{\mathrm Im}\nolimits} \left({{\textbf{x}^{*}} \circ {\textbf{y}_{1:N}}} \right), $$
(29)
where y1:N denotes the first N elements of y, and
$$ {\textbf{y} = {\mathcal{F}}_{\tilde N}^{- 1}\left({\textbf{w} \circ {{\left| {{{\mathcal{F}}_{\tilde N}}\left(\textbf{x} \right)} \right|}^{2}} \circ {{\mathcal{F}}_{\tilde N}}\left(\textbf{x} \right)} \right).} $$
(30)

3.2 Step size calculation via Taylor series expansion

Traditional methods for obtaining the step size are line search methods. Since line search costs many iterations and much computation, it is preferable to calculate the step size directly. In this subsection, we adopt the Taylor series expansion to derive an approximate step size.

Assume the present iteration point is xk, and the corresponding iteration direction is \({\textbf {d}^{k}} {= \left [ {d_{0}^{k},...,d_{N - 1}^{k}} \right ]^{T}}\). Then, the next iteration point can be expressed as
$$ {\textbf{x}^{k + 1}} = {\textbf{x}^{k}} \circ {e^{j\mu {\textbf{d}^{k}}}}, $$
(31)
where μ is the step size. Essentially, the line search problem is a minimization problem with respect to μ, which can be expressed as follows:
$$ \mathop {\min}\limits_{\mu > 0} \;\;{J_{T}}\left(\mu \right) = {\sum\limits_{p = 0}^{\tilde N-1}} {{w_{p}}{{\left({{{\left({{\textbf{x}^{k + 1}}} \right)}^{H}}{\textbf{A}_{p}}{\textbf{x}^{k + 1}}} \right)}^{2}}}. $$
(32)
Let
$$ {{\mathbf{z}}^{k + 1}} = {\left[ {{{\left({{\textbf{x}^{k + 1}}} \right)}^{H}}{\textbf{A}_{{0}}}{\textbf{x}^{k + 1}},...,{{\left({{{\mathbf{x}}^{k + 1}}} \right)}^{H}}{\textbf{A}_{{\tilde N-1}}}{\textbf{x}^{k + 1}}} \right]^{T}}, $$
(33)
then the derivative of JT(μ) with respect to μ can be written as
$$ \begin{aligned} {\frac{{\partial {J_{T}}\left(\mu \right)}}{{\partial \mu}}} &= \frac{{\partial \left({{\textbf{w}^{T}}\left({{\textbf{z}^{k + 1}} \circ {\textbf{z}^{k + 1}}} \right)} \right)}}{{\partial \mu}}\\ &= 2{\textbf{w}^{T}}\left({{\textbf{z}^{k + 1}} \circ \frac{{\partial {\textbf{z}^{k + 1}}}}{{\partial \mu}}} \right). \end{aligned} $$
(34)
To simplify (34), we first expand (xk+1)HApxk+1 as follows:
$$ {\left({{\textbf{x}^{k + 1}}} \right)^{H}}{\textbf{A}_{p}}{\textbf{x}^{k + 1}} = {\sum\limits_{m = 0}^{N-1} \sum\limits_{n = 0}^{N-1}} {a_{mn}^{p}{{\left({x_{m}^{k}} \right)}^{*}}x_{n}^{k}{e^{j\mu \left({d_{n}^{k} - d_{m}^{k}} \right)}}}, $$
(35)
where \(x_{n}^{k},\;d_{n}^{k}\) are respectively the (n+1)th element of xk and dk. By using the Taylor series expansion and then keeping the first three terms, the exponential term \({e^{j\mu \left ({d_{n}^{k} - d_{m}^{k}} \right)}}\) in (35) can be approximated as
$$ {e^{j\mu \left({d_{n}^{k} - d_{m}^{k}} \right)}} \approx 1 + j\mu \left({d_{n}^{k} - d_{m}^{k}} \right) - \frac{{{\mu^{2}}}}{2}{\left({d_{n}^{k} - d_{m}^{k}} \right)^{2}}. $$
(36)
By substituting (36) into (35), (xk+1)HApxk+1 can be rewritten as
$$ \begin{aligned} &{\left({{\textbf{x}^{k + 1}}} \right)^{H}}{\textbf{A}_{p}}{\textbf{x}^{k + 1}} \\ &= {\left({{\textbf{x}^{k}}} \right)^{H}}{\textbf{A}_{p}}{\textbf{x}^{k}} - 2\mu {\mathop{\mathrm Im}\nolimits} \left({{{\left({{\textbf{x}^{k}}} \right)}^{H}}{\textbf{A}_{p}}\textbf{x}_{1}^{k}} \right) \\ &~~~~- {\mu^{2}}\left({{\mathop{\mathrm Re}\nolimits} \left({{{\left({{\textbf{x}^{k}}} \right)}^{H}}{\textbf{A}_{p}}\textbf{x}_{2}^{k}} \right) - {{\left({\textbf{x}_{1}^{k}} \right)}^{H}}{\textbf{A}_{p}}\textbf{x}_{1}^{k}} \right), \end{aligned} $$
(37)
where \(\textbf {x}_{1}^{k} = {\textbf {x}^{k}} \circ {\textbf {d}^{k}},\;\textbf {x}_{2}^{k} = {\textbf {x}^{k}} \circ {\textbf {d}^{k}} \circ {\textbf {d}^{k}}\). Let us define
$$ \begin{aligned} {\textbf{f}^{k}} &= {{\mathcal{F}}_{\tilde N}}\left({{\textbf{x}^{k}}} \right) = {\left[ {\textbf{a}_{{0}}^{H}{\textbf{x}^{k}},...,\textbf{a}_{\tilde N{-1}}^{H}{\textbf{x}^{k}}} \right]^{T}},\\ \textbf{f}_{1}^{k} &= {{\mathcal{F}}_{\tilde N}}\left({\textbf{x}_{1}^{k}} \right) = {\left[ {\textbf{a}_{{0}}^{H}\textbf{x}_{1}^{k},...,\textbf{a}_{\tilde N{-1}}^{H}\textbf{x}_{1}^{k}} \right]^{T}},\\ \textbf{f}_{2}^{k} &= {{\mathcal{F}}_{\tilde N}}\left({\textbf{x}_{2}^{k}} \right) = {\left[ {\textbf{a}_{{0}}^{H}\textbf{x}_{2}^{k},...,\textbf{a}_{\tilde N{-1}}^{H}\textbf{x}_{2}^{k}} \right]^{T}}. \end{aligned} $$
(38)
Then, according to (33), (37), and (38), we have
$$ {\textbf{z}^{k + 1}} = {\left| {{\textbf{f}^{k}}} \right|^{2}} - 2\mu {\textbf{t}_{2}} - {\mu^{2}}{\textbf{t}_{1}}, $$
(39)
where
$$ \begin{aligned} {\textbf{t}_{1}} &= {\mathop{\mathrm Re}\nolimits} \left({{{\left({{\textbf{f}^{k}}} \right)}^{*}} \circ \textbf{f}_{2}^{k}} \right) - {\left({\textbf{f}_{1}^{k}} \right)^{*}} \circ \textbf{f}_{1}^{k},\\ {\textbf{t}_{2}} &= {\mathop{\mathrm Im}\nolimits} \left({{{\left({{\textbf{f}^{k}}} \right)}^{*}} \circ \textbf{f}_{1}^{k}} \right). \end{aligned} $$
(40)
By substituting (39) into (34), the derivative in (34) can be written more compactly as
$$ \begin{aligned} &{\frac{{\partial {J_{T}}\left(\mu \right)}}{{\partial \mu}}}\\ &= - 4{\textbf{w}^{T}}\left({\left({{{\left| {{\textbf{f}^{k}}} \right|}^{2}} - 2\mu {\textbf{t}_{2}} - {\mu^{2}}{\textbf{t}_{1}}} \right) \circ \left({{\textbf{t}_{2}} + \mu {\textbf{t}_{1}}} \right)} \right)\\ &= - 4\left({a{\mu^{3}} + b{\mu^{2}} + c\mu + d} \right), \end{aligned} $$
(41)
where
$$ \begin{aligned} a &= - {\textbf{w}^{T}}\left({{\textbf{t}_{1}} \circ {\textbf{t}_{1}}} \right),\\ b &= - 3{\textbf{w}^{T}}\left({{\textbf{t}_{1}} \circ {\textbf{t}_{2}}} \right),\\ c &= - 2{\textbf{w}^{T}}\left({{\textbf{t}_{2}} \circ {\textbf{t}_{2}}} \right) + {\textbf{w}^{T}}\left({{{\left| {{\textbf{f}^{k}}} \right|}^{2}} \circ {\textbf{t}_{1}}} \right),\\ d &= {\textbf{w}^{T}}\left({{{\left| {{\textbf{f}^{k}}} \right|}^{2}} \circ {\textbf{t}_{2}}} \right). \end{aligned} $$
(42)
Setting ∂JT(μ)/∂μ = 0, we have
$$ a{\mu^{3}} + b{\mu^{2}} + c\mu + d = 0. $$
(43)

It is well known that a cubic equation with real coefficients has at least one real root. Thus, an approximate step size can be calculated directly by solving (43). Since the search direction is a descent direction, problem (32) has at least one minimum point greater than 0. We therefore choose the positive root of (43) that is closest to zero as the step size.
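The whole step-size computation (38)–(43) can be sketched as follows (Python/NumPy; the stopband bins and λ are illustrative). The check exploits the fact that at μ=0 the Taylor truncation (36) is exact to first order, so the derivative (41) must equal the true derivative there:

```python
import numpy as np

def JT(x, w, Nt):
    p = np.abs(np.fft.fft(x, Nt)) ** 2
    return np.sum(w * p ** 2)

def step_size(x, d, w, Nt):
    """Approximate step size: smallest positive real root of the cubic (43)."""
    f  = np.fft.fft(x, Nt)                    # FFTs of Eq. (38)
    f1 = np.fft.fft(x * d, Nt)
    f2 = np.fft.fft(x * d * d, Nt)
    t1 = np.real(np.conj(f) * f2) - np.abs(f1) ** 2   # Eq. (40)
    t2 = np.imag(np.conj(f) * f1)
    pf = np.abs(f) ** 2
    a = -(w @ (t1 * t1))                      # coefficients of Eq. (42)
    b = -3 * (w @ (t1 * t2))
    c = -2 * (w @ (t2 * t2)) + w @ (pf * t1)
    d0 = w @ (pf * t2)
    roots = np.roots([a, b, c, d0])
    pos = [r.real for r in roots if abs(r.imag) < 1e-8 * (1 + abs(r)) and r.real > 0]
    return (min(pos) if pos else 0.0), d0

rng = np.random.default_rng(3)
N, Nt, lam = 32, 64, 0.5
wf = np.zeros(Nt)
wf[20:26] = 1.0                               # illustrative stopband bins
w = lam * wf + (1 - lam) / (2 * Nt)
x = np.exp(2j * np.pi * rng.random(N))

# Descent direction: negative phase gradient of (29), scaled to unit max amplitude.
f = np.fft.fft(x, Nt)
y = np.fft.ifft(w * np.abs(f) ** 2 * f)
g = 4 * Nt * np.imag(np.conj(x) * y[:N])
d = -g / np.max(np.abs(g))

mu, d0 = step_size(x, d, w, Nt)

# dJ_T/dmu at mu = 0 must equal -4*d0 (the first-order Taylor term is exact).
eps = 1e-6
fd = (JT(x * np.exp(1j * eps * d), w, Nt) - JT(x * np.exp(-1j * eps * d), w, Nt)) / (2 * eps)
```

For a descent direction, the constant coefficient d0 is positive and the leading coefficient a is nonpositive, so a positive real root always exists.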

3.3 Algorithm summary

After deriving the phase gradient and the step size, it is easy to solve the unconstrained problem (21) using the conjugate gradient algorithm (CGA). To calculate the search direction effectively, we apply the classical Polak-Ribiere-Polyak CGA (PRP-CGA), whose search direction is computed from the gradient as
$$ {\textbf{d}^{k + 1}} = - {\textbf{g}^{k + 1}} + \frac{{{{\left({{\textbf{g}^{k + 1}} - {\textbf{g}^{k}}} \right)}^{T}}{\textbf{g}^{k + 1}}}}{{{{|| {{\textbf{g}^{k}}} ||}^{2}}}}{\textbf{d}^{k}}, $$
(44)
where gk and dk are the gradient and search direction at the kth iteration, respectively. Since the step size is only approximate, (gk+1)Tdk may not be zero. Thus, dk+1 may not be a descent direction, i.e.,
$$ \begin{aligned} &{\left({\textbf{g}^{k + 1}}\right){~}^{T}}{\textbf{d}^{k + 1}} \\ &= - {||{\textbf{g}^{k + 1}}||{~}^{2}} + \frac{{{{\left({{\textbf{g}^{k + 1}} - {\textbf{g}^{k}}} \right)}^{T}}{\textbf{g}^{k + 1}}}}{{{{||{\textbf{g}^{k}} ||}{~}^{2}}}}{\left({\textbf{g}^{k + 1}}\right)^{T}}{\textbf{d}^{k}} < 0 \end{aligned} $$
(45)
is not always satisfied. To guarantee that the search direction is a descent direction, we apply the following modified direction:
$$ \begin{aligned} {{\tilde{\mathbf{d}}}^{k + 1}} &= - {\textbf{g}^{k + 1}} + \frac{{{{\left({{\textbf{g}^{k + 1}} - {\textbf{g}^{k}}} \right)}^{T}}{\textbf{g}^{k + 1}}}}{{{{|| {{\textbf{g}^{k}}} ||}^{2}}}}{\textbf{d}^{k}},\\ {\textbf{d}^{k + 1}} &= \left\{ {\begin{array}{*{20}{l}} {{{\tilde{\mathbf{d}}}^{k + 1}},}&{{{\left({\textbf{g}^{k + 1}}\right)}^{T}}{{\tilde{\mathbf{d}}}^{k + 1}} < 0}\\ { - {\textbf{g}^{k + 1}},}&{{{\left({\textbf{g}^{k + 1}}\right)}^{T}}{{\tilde{\mathbf{d}}}^{k + 1}} \ge 0} \end{array}} \right.. \end{aligned} $$
(46)

From (46), we can see that when the approximate step size keeps the PRP-CGA direction \({\tilde {\mathbf {d}}^{k + 1}}\) a descent direction (i.e., \({{{\left ({\textbf {g}^{k + 1}}\right)}^{T}}{{\tilde {\mathbf {d}}}^{k + 1}} < 0}\)), \({{\tilde {\mathbf {d}}}^{k + 1}}\) is selected as the search direction. When the step size makes \({{\tilde {\mathbf {d}}}^{k + 1}}\) no longer a descent direction (i.e., \({{{\left ({\textbf {g}^{k + 1}}\right)}^{T}}{{\tilde {\mathbf {d}}}^{k + 1}} \ge 0}\)), the negative gradient is chosen instead. In this way, a descent direction is always guaranteed. On the basis of the above derivation, the FCGA is summarized in Algorithm 1.
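The modified direction rule (44)–(46) is a few lines of code; in the sketch below (Python/NumPy), random vectors stand in for actual gradients:

```python
import numpy as np

def prp_direction(g_new, g_old, d_old):
    """Modified PRP direction of Eq. (46): fall back to the negative
    gradient whenever the PRP direction is not a descent direction."""
    beta = (g_new - g_old) @ g_new / (g_old @ g_old)
    d = -g_new + beta * d_old                 # Eq. (44)
    return d if g_new @ d < 0 else -g_new     # Eq. (46)

rng = np.random.default_rng(2)
for _ in range(1000):
    g_new, g_old, d_old = rng.normal(size=(3, 8))
    d = prp_direction(g_new, g_old, d_old)
    assert g_new @ d < 0                      # always a descent direction
```

The fallback branch guarantees descent because the negative gradient always satisfies (g)T(−g) = −∥g∥² < 0.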

From Algorithm 1, we can see that the proposed algorithm can be easily implemented with FFT operations and Hadamard products. To analyze the per iteration computation of the algorithm, we count the numbers of additions and multiplications for each step in Algorithm 1; the result is shown in Table 1. As shown in the table, the total number of additions and multiplications is \(21\tilde N + 12N+8\tilde N\log \tilde N\leq 33\tilde N +8\tilde N\log \tilde N\). As \(\tilde N\) increases, the total approaches \(8\tilde N\log \tilde N\), i.e., the computation of four FFT operations. Thus, the computational complexity of each iteration is about \({\mathcal {O}}(\tilde N\log \tilde N)\). Like the other FFT-based algorithms (such as the CAN and Accelerated-MISL algorithms), the FCGA is computationally efficient.
Table 1 The numbers of additions and multiplications for each step in Algorithm 1

| Step  | Addition                                   | Multiplication                         |
|-------|--------------------------------------------|----------------------------------------|
| 1     | \(3N + 2\tilde N\log \tilde N\)            | \(2\tilde N\log \tilde N\)             |
| 2     | \(3 \tilde N\)                             | \(\tilde N\)                           |
| 3     | \(10 \tilde N\)                            | \(5 \tilde N\)                         |
| 4     | 0                                          | 0                                      |
| 5     | \(N+\tilde N\log \tilde N\)                | \(\tilde N\log \tilde N\)              |
| 6     | \(2\tilde N+\tilde N\log \tilde N\)        | \(\tilde N\log \tilde N\)              |
| 7     | N                                          | 0                                      |
| 8     | 4N                                         | 3N                                     |
| Total | \(15\tilde N + 9N+4\tilde N\log \tilde N\) | \(6\tilde N + 3N+4\tilde N\log \tilde N\) |

In fact, the per iteration computation is not the only factor that affects the computational efficiency of an algorithm; the iteration number also matters. For example, the per iteration computations of the CAN (two FFT operations per iteration) and Accelerated-MISL (four FFT operations per iteration) algorithms are \(4\tilde N\log \tilde N\) and \(15\tilde N+8\tilde N\log \tilde N\) [9, 10], respectively. Although the per iteration computation of the Accelerated-MISL is larger than that of the CAN, the Accelerated-MISL is more efficient overall [10]. Compared with these two algorithms, the per iteration computation of the FCGA is slightly larger, but this does not mean that the FCGA is slower. In the next section, we provide several experiments to validate the effectiveness of the proposed algorithm; in our tests, the FCGA is in fact more efficient because it needs fewer iterations.
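Putting the pieces together, the overall loop might look as follows. This is an illustrative Python/NumPy reconstruction, not the paper's Algorithm 1 or its Matlab implementation: the stopband, λ, iteration budget, and stopping rule are all assumptions made for the sketch:

```python
import numpy as np

def fcga(N, Nt, w, n_iter=50, seed=0):
    """Simplified FCGA loop: FFT-based gradient (29), cubic step size (43),
    and the modified PRP direction (46). Illustrative reconstruction only."""
    rng = np.random.default_rng(seed)
    phi = 2 * np.pi * rng.random(N)

    def J(phi):
        p = np.abs(np.fft.fft(np.exp(1j * phi), Nt)) ** 2
        return np.sum(w * p ** 2)

    def grad(phi):
        x = np.exp(1j * phi)
        f = np.fft.fft(x, Nt)
        y = np.fft.ifft(w * np.abs(f) ** 2 * f)
        return 4 * Nt * np.imag(np.conj(x) * y[:N])

    def step(x, d):
        f, f1, f2 = (np.fft.fft(v, Nt) for v in (x, x * d, x * d * d))
        t1 = np.real(np.conj(f) * f2) - np.abs(f1) ** 2
        t2 = np.imag(np.conj(f) * f1)
        pf = np.abs(f) ** 2
        coef = [-(w @ (t1 * t1)), -3 * (w @ (t1 * t2)),
                -2 * (w @ (t2 * t2)) + w @ (pf * t1), w @ (pf * t2)]
        pos = [r.real for r in np.roots(coef)
               if abs(r.imag) < 1e-8 * (1 + abs(r)) and r.real > 0]
        return min(pos) if pos else 0.0

    g = grad(phi)
    d = -g
    hist = [J(phi)]
    for _ in range(n_iter):
        mu = step(np.exp(1j * phi), d)        # approximate step size, Eq. (43)
        phi_new = phi + mu * d
        g_new = grad(phi_new)
        beta = (g_new - g) @ g_new / (g @ g)  # PRP coefficient, Eq. (44)
        d_new = -g_new + beta * d
        if g_new @ d_new >= 0:                # Eq. (46): enforce descent
            d_new = -g_new
        phi, g, d = phi_new, g_new, d_new
        hist.append(J(phi))
    return np.exp(1j * phi), np.array(hist)

N, Nt, lam = 64, 128, 0.4
wf = np.zeros(Nt)
wf[int(0.2 * Nt):int(0.3 * Nt)] = 1.0         # illustrative stopband
w = lam * wf + (1 - lam) / (2 * Nt)
x, hist = fcga(N, Nt, w)
```

Per iteration the loop uses only a handful of \(\tilde N\)-point FFT/IFFT calls and Hadamard products, matching the \({\mathcal {O}}(\tilde N\log \tilde N)\) per iteration cost discussed above.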

4 Simulation results and discussion

To illustrate the effectiveness of the proposed FCGA and compare its performance with existing algorithms, several numerical experiments are presented in this section. In the first two experiments, we design periodic and aperiodic sequences with low autocorrelation sidelobes, respectively. In the third experiment, we then consider the problem of designing sequences with the stopband property. All the experiments are performed on a PC with a 3.60-GHz i7-4790 CPU and 8-GB RAM. The software environment is Matlab 2012b. In the following experiments, unless otherwise stated, all the algorithms are initialized with the unimodular sequence \({\textbf {x}^{0}} = \left \{ {{e^{j2\pi {\phi _{n}}}}} \right \}_{n = {0}}^{N{-1}}\), where \(\left \{ {{\phi _{n}}} \right \}_{n = {0}}^{N{-1}}\) are independent random variables uniformly distributed in [0,1].

4.1 Periodic sequence design with low autocorrelation sidelobes

First, we apply the proposed algorithm to generate periodic sequences of length N=128 with an impulse-like autocorrelation function (i.e., the case \(\lambda = 0,\tilde N = N\)) and compare its performance with the classical PeCAN [14] and the state-of-the-art ADMM [27] algorithms. Both algorithms are efficient at generating periodic sequences with impulse-like autocorrelation. The Matlab code of the PeCAN was downloaded from the website [28] of the book [2]. The stopping criteria of the PeCAN and ADMM algorithms are the same as those applied in [2] and [27], and the FCGA is stopped when \(\left\| {\textbf{x}^{k+1}} - {\textbf{x}^{k}} \right\| \le 10^{-14}\). In this experiment, we initialize the three algorithms with 1000 random unimodular sequences and thus generate 1000 sequences for each algorithm. The normalized autocorrelation levels (NAL) of the sequences designed by the PeCAN, ADMM, and FCGA are shown in Figs. 1, 2, and 3, respectively, where the NAL is defined as
$$ {r_{norm}}(k) = 20{\log_{10}}\left| {\frac{{{r_{k}}}}{{{r_{0}}}}} \right|,\quad k = 1 - N,\ldots,N - 1,\; k \ne 0. $$
(47)
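The NAL of (47) is straightforward to evaluate. A minimal pure-Python sketch, using the length-13 Barker code as a stand-in input (an assumption for illustration, not the paper's designed sequence); its aperiodic sidelobes are all 0 or 1, so every finite NAL value equals \(20\log_{10}(1/13)\approx-22.3\) dB:

```python
import math

def aperiodic_acf(x):
    # r_k = sum_n x[n+k] * conj(x[n]) for k = 0..N-1, evaluated directly.
    n = len(x)
    return [sum(x[t + k] * x[t].conjugate() for t in range(n - k)) for k in range(n)]

def nal_db(x):
    # Normalized autocorrelation level of Eq. (47): 20*log10(|r_k / r_0|), k != 0.
    r = aperiodic_acf(x)
    r0 = abs(r[0])
    return [20 * math.log10(abs(rk) / r0) if abs(rk) > 0 else -math.inf
            for rk in r[1:]]

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
finite = [v for v in nal_db(barker13) if v != -math.inf]
print(round(max(finite), 2))   # -22.28: every nonzero sidelobe is 1/13 of the peak
```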
Fig. 1 Autocorrelation level of the sequences generated by PeCAN

Fig. 2 Autocorrelation level of the sequences generated by ADMM

Fig. 3 Autocorrelation level of the sequences generated by FCGA

It can be observed that, in all these figures, the sidelobe distribution falls into two distinct ranges. This is because, for each algorithm, the returned sequence is only a local optimum. When the algorithms are initialized with random sequences, the generated sequence may be nearly globally optimal (with all sidelobes below − 300 dB) or not. Comparing Figs. 1, 2, and 3, we can see that the FCGA generates the best sequence, whose peak sidelobe level (PSL) is about − 318 dB.
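For context, numerically perfect periodic autocorrelation is indeed attainable at some lengths; for example, a Frank sequence of length N = L² has zero periodic sidelobes. A minimal pure-Python check of the \(\tilde N = N\) (periodic) case, with naive O(N²) DFTs standing in for the FFT:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def periodic_acf(x):
    # Wiener-Khinchin on the cyclic grid: r = IDFT(|DFT(x)|^2).
    return idft([abs(v) ** 2 for v in dft(x)])

# Frank sequence of length N = L^2: x[m*L + k] = exp(j*2*pi*m*k/L).
L = 4
frank = [cmath.exp(2j * cmath.pi * m * k / L) for m in range(L) for k in range(L)]

r = periodic_acf(frank)
peak_sidelobe = max(abs(v) for v in r[1:])
print(abs(r[0]), peak_sidelobe)   # mainlobe N = 16, sidelobes ~ 0
```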

To compare the sidelobe performance more thoroughly, we calculate the ISLs and PSLs of all 1000 sequences for each algorithm and record the running time of each trial. The cumulative distribution functions (CDF) of the ISLs, PSLs, and running times are presented in Fig. 4. Since both the ISL and the PSL are criteria related to the autocorrelation, the CDFs of the ISLs and PSLs are similar, as can be seen from Fig. 4a, b. In Fig. 4b, we can see that about 55.7% of the FCGA sequences have a PSL below − 285 dB, while the PeCAN and ADMM algorithms can hardly produce sequences with such a PSL. Compared with the PeCAN, the overall sidelobe performance of the FCGA is better. The only drawback of the proposed algorithm is that about 41.8% of the PSLs of the FCGA sequences lie in [−100,−50] dB, while the corresponding proportion for the ADMM is only 10.3%. However, as shown in Fig. 4c, the running time of the FCGA is about an order of magnitude smaller than that of the PeCAN and ADMM algorithms, which indicates that the former is more computationally efficient. It is worth noting that, since the ADMM is stopped after \(2\times10^{5}\) iterations, the running time of the ADMM is basically unchanged across trials.
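The CDF comparison above reduces to evaluating empirical CDFs over the 1000 trials. A minimal sketch with made-up PSL values (purely illustrative placeholders, not the measured data):

```python
def ecdf(samples):
    # Empirical CDF: sorted sample points and the fraction of samples <= each point.
    xs = sorted(samples)
    n = len(xs)
    return xs, [(i + 1) / n for i in range(n)]

# Illustrative PSL values in dB (made up for the sketch, not measured results).
psl_db = [-310, -295, -288, -80, -60, -301, -75, -290, -55, -292]
xs, cdf = ecdf(psl_db)
frac_deep = sum(1 for v in psl_db if v <= -285) / len(psl_db)
print(frac_deep)   # 0.6: six of the ten illustrative trials reached the deep-sidelobe regime
```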
Fig. 4 The cumulative distribution functions of the (a) ISLs, (b) PSLs, and (c) running time

4.2 Aperiodic sequence design with low autocorrelation sidelobes

In this subsection, we investigate the performance of the proposed algorithm by designing aperiodic sequences (i.e., the case \(\lambda = 0,\tilde N = 2N\)). To show the advantage of the proposed algorithm, we compare its performance with that of several algorithms: the CAN [9], the Accelerated-MISL [10], and the gradient descent (GD) algorithm in [23]. Among the three contrast algorithms, the CAN is the classical algorithm for this design problem, and the other two were proposed recently. All these algorithms are good at designing aperiodic sequences with low sidelobes and also have high computational efficiency. Since the ADMM is inferior to the CAN and Accelerated-MISL algorithms in terms of the aperiodic autocorrelation sidelobes [27], we do not take the ADMM into account here. The stopping criterion is set to \(\left\| {\textbf{x}^{k+1}} - {\textbf{x}^{k}} \right\| \le 10^{-3}\) for all algorithms.
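The \(\tilde N = 2N\) setting reflects how aperiodic correlations are computed: zero-pad x to 2N so that circular correlation via the DFT reproduces the aperiodic lags. A small pure-Python consistency check (naive DFTs in place of FFTs; the quadratic-phase test sequence is an arbitrary assumption):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def acf_via_2n_dft(x):
    # Zero-pad to N_tilde = 2N: the circular correlation of the padded sequence
    # equals the aperiodic correlation of x at lags 0..N-1.
    n = len(x)
    r = idft([abs(v) ** 2 for v in dft(list(x) + [0] * n)])
    return r[:n]

def acf_direct(x):
    n = len(x)
    return [sum(x[t + k] * x[t].conjugate() for t in range(n - k)) for k in range(n)]

# Arbitrary unimodular (quadratic-phase) test sequence of length N = 8.
x = [cmath.exp(2j * cmath.pi * 0.37 * t * t) for t in range(8)]
err = max(abs(a - b) for a, b in zip(acf_via_2n_dft(x), acf_direct(x)))
print(err < 1e-9)   # True: both routes agree to numerical precision
```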

In this experiment, we compare the quality of the designed sequences and the running times of these algorithms. Generally, the sequence quality can be measured by the merit factor (MF) (the larger the better), which is defined as
$$ {\mathrm{MF =}}\frac{{{{\left| {{r_{0}}} \right|}^{2}}}}{{2\sum\limits_{k = 1}^{N - 1} {{{\left| {{r_{k}}} \right|}^{2}}}}} = \frac{{{N^{2}}}}{{2{\text{ISL}}}}. $$
(48)
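Equation (48) in code: a quick sketch verified on the length-13 Barker code (an assumed test input), whose MF of 169/12 ≈ 14.08 is the largest known for a binary sequence:

```python
def merit_factor(x):
    # MF = |r_0|^2 / (2 * sum_{k=1}^{N-1} |r_k|^2), as in Eq. (48).
    n = len(x)
    r = [sum(x[t + k] * x[t].conjugate() for t in range(n - k)) for k in range(n)]
    return abs(r[0]) ** 2 / (2 * sum(abs(rk) ** 2 for rk in r[1:]))

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
print(merit_factor(barker13))   # 169/12 ≈ 14.083
```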
First of all, we use random sequences to initialize the algorithms. To obtain the average values of the MF and running time, we run each of the four algorithms 300 times for each of the following lengths: \(N=2^{5},2^{6},\ldots,2^{13}\). Figure 5 shows the performance metrics (average merit factor, average running time, and average iteration number) of the different algorithms. As we can see in Fig. 5a, the average merit factors of the sequences designed by the FCGA, GD, and Accelerated-MISL are basically the same. This is because these three algorithms are all gradient-based, unlike the CAN. In fact, the MF of the sequences designed by the FCGA is slightly larger than that of the GD and Accelerated-MISL. From Fig. 5b, it can easily be observed that the FCGA is faster than the other three algorithms, especially when the sequence length is very large. Although the FCGA costs slightly more computation than the CAN and Accelerated-MISL algorithms at each iteration, its superiority comes from the reduction of the iteration number. Figure 5c shows the average iteration numbers of the four algorithms. From this figure, we can see that the average iteration number of the FCGA is the smallest. This may be because (i) the FCGA minimizes the objective function (ISL) directly, rather than the looser surrogate objective that the Accelerated-MISL employs, and (ii) in our tests, the step size of the FCGA appears more accurate than that of the GD in [23], which gives the FCGA a faster convergence speed. Taking both the merit factor and the running time into account, the proposed FCGA is better than the other three algorithms.
Fig. 5 (a–c) The performance metrics of different algorithms initialized by random sequences

Next, we use the Frank sequence and the Golomb sequence to initialize the four algorithms; the results are shown in Figs. 6 and 7, respectively. It is well known that both the Frank sequence and the Golomb sequence have good autocorrelation properties. Thus, when initialized with these two kinds of sequences, the algorithms can produce sequences with better sidelobe performance. In Figs. 6a and 7a, the MFs of the different algorithms are almost the same and far greater than the results in Fig. 5a. This indicates that the Frank sequence and the Golomb sequence lead to a higher MF than random sequences. Moreover, from Figs. 6b and 7b, we can see that the FCGA remains an efficient and competitive algorithm when initialized with the Frank sequence or the Golomb sequence.
Fig. 6 (a–c) The performance metrics of different algorithms initialized by Frank sequence

Fig. 7 (a–c) The performance metrics of different algorithms initialized by Golomb sequence

4.3 Sequence design with stopband property

To further verify the effectiveness of the FCGA, we consider here the problem of designing aperiodic sequences with low autocorrelation sidelobes and the stopband property. The contrast algorithms are the SCAN [20], Spectral-MISL [10], ADMM, and GD [23], where the ADMM is the modified version for the synthesis of aperiodic unimodular sequences. Assume that the sequence length is N=128 and that the normalized frequency stopband considered here is [0.04,0.21]∪[0.23,0.25]∪[0.28,0.37]∪[0.39,0.49]∪[0.52,0.56]. In this experiment, we choose \(\left\| {\textbf{x}^{k+1}} - {\textbf{x}^{k}} \right\| \le 10^{-3}\) as the stopping criterion of the SCAN, Spectral-MISL, and FCGA algorithms, and the weight coefficient λ in these three algorithms is set to 0.9, \(10^{4}\), and 0.9, respectively. The parameters of the ADMM are the same as those in [27]. For the GD algorithm, we set the maximum iteration number T=3000, \(\alpha_{acf}=0.1\), and \(\alpha_{spec}=0.9\).
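For reference, the peak stopband power reported below can be evaluated by sampling the spectrum on a dense grid and taking the maximum over the stopband bins. A hedged pure-Python sketch (the 256-point grid, the peak normalization, and the single-tone test input are illustrative assumptions, not the paper's setup):

```python
import cmath, math

def spectrum_db(x, n_tilde):
    # Power spectrum on an n_tilde-point frequency grid, normalized to its peak.
    X = [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n_tilde) for t in range(len(x)))
         for k in range(n_tilde)]
    p = [abs(v) ** 2 for v in X]
    pmax = max(p)
    return [10 * math.log10(v / pmax) if v > 0 else -math.inf for v in p]

def peak_stopband_power(x, bands, n_tilde=256):
    # PSP: largest normalized spectral power among grid frequencies k/n_tilde
    # that fall inside one of the stopbands.
    s = spectrum_db(x, n_tilde)
    return max(s[k] for k in range(n_tilde)
               if any(lo <= k / n_tilde <= hi for lo, hi in bands))

bands = [(0.04, 0.21), (0.23, 0.25), (0.28, 0.37), (0.39, 0.49), (0.52, 0.56)]
# Single tone at normalized frequency 0.6, i.e., outside every stopband.
x = [cmath.exp(2j * cmath.pi * 0.6 * t) for t in range(32)]
psp = peak_stopband_power(x, bands)
print(psp < 0)   # True: only spectral leakage reaches the stopbands
```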

Figures 8 and 9 give an example of the autocorrelation level and the spectral power of the sequences designed by the five algorithms. The PSLs of the sequences designed by the SCAN, Spectral-MISL, ADMM, GD, and FCGA algorithms are − 14.01 dB, − 14.09 dB, − 15.64 dB, − 13.80 dB, and − 15.81 dB, respectively, and the corresponding peak stopband powers (PSP) are − 16.12 dB, − 15.71 dB, − 20.13 dB, − 17.77 dB, and − 20.73 dB. Like the other four algorithms, the proposed algorithm can suppress the autocorrelation sidelobes and the spectral power in the frequency stopbands simultaneously.
Fig. 8 Autocorrelation level of the sequences generated by five different algorithms

Fig. 9 Spectral power of sequences generated by five different algorithms

Next, to eliminate randomness, we run each algorithm 100 times and record the PSL, PSP, iteration number N_I, and running time t. Figure 10 shows the cumulative distribution functions of the four performance parameters (PSL, PSP, N_I, t), and the average values are provided in Table 2. From Fig. 10 and Table 2, we can see that the PSLs of these algorithms are basically the same, while the PSPs of the ADMM and the proposed algorithm (which have comparable performance) are lower than those of the other three algorithms. Moreover, the running times in Fig. 10 indicate that the proposed algorithm is more computationally efficient than the other four algorithms, owing to its fewer iterations.
Fig. 10 The cumulative distribution functions of the PSL, PSP, iteration number, and running time

Table 2 Comparison of the performance parameters between the five algorithms

Algorithm       PSL (dB)   PSP (dB)    N_I     t (s)
SCAN            −14.20     −14.96      1887    0.071
Spectral-MISL   −14.07     −16.55      3953    0.084
ADMM            −14.14     −20.14     50000    7.097
GD              −14.09     −17.21      3000    0.646
FCGA            −14.13     −20.78       269    0.028

5 Conclusions

In this paper, we have proposed an efficient algorithm, named FCGA, for designing periodic/aperiodic sequences with low autocorrelation sidelobes and low frequency stopband power. The FCGA is derived from the conjugate gradient method, which guarantees the monotonic decrease of the objective function. By changing the number of FFT points, the design of periodic and aperiodic sequences can be switched easily. Since the main steps of the FCGA can be implemented by FFT operations and Hadamard products, the proposed algorithm is computationally efficient and can be used to design very long sequences. Several numerical experiments have been provided to demonstrate the superiority of the proposed algorithm.

6 Appendix 1: Derivation of phase gradient

By substituting (23) and (28) into (24), the gradient \({\nabla_{\boldsymbol{\phi}}}{J_{T}}\) can be rewritten as
(49)
Defining \(\textbf {A} = \left [ {{\textbf {a}_{0}},...,{\textbf {a}_{\tilde N-1}}} \right ]\), then we have
$$ \begin{aligned} {\kern-18.5pt}{\left[ {\begin{array}{*{20}{c}} {{w_{0}}{{\left| {\textbf{a}_{0}^{H}\textbf{x}} \right|}^{2}}\textbf{a}_{0}^{H}\textbf{x}}\\ \vdots \\ {{w_{\tilde N - 1}}{{\left| {\textbf{a}_{\tilde N - 1}^{H}\textbf{x}} \right|}^{2}}\textbf{a}_{\tilde N - 1}^{H}\textbf{x}} \end{array}} \right]} &{= \left[ {\begin{array}{*{20}{c}} {{w_{0}}}\\ \vdots \\ {{w_{\tilde N - 1}}} \end{array}} \right] \circ \left[ {\begin{array}{*{20}{c}} {{{\left| {\textbf{a}_{0}^{H}\textbf{x}} \right|}^{2}}}\\ \vdots \\ {{{\left| {\textbf{a}_{\tilde N - 1}^{H}\textbf{x}} \right|}^{2}}} \end{array}} \right] \circ \left[ {\begin{array}{*{20}{c}} {\textbf{a}_{0}^{H}\textbf{x}}\\ \vdots \\ {\textbf{a}_{\tilde N - 1}^{H}\textbf{x}} \end{array}} \right]}\\ &{= \textbf{w} \circ {\left| {{\textbf{A}^{H}}\textbf{x}} \right|^{2}} \circ \left({{\textbf{A}^{H}}\textbf{x}} \right),} \end{aligned} $$
(50)
where the function |·|2 is applied element-wise to the vector. According to (49) and (50), \({\nabla_{\boldsymbol{\phi}}}{J_{T}}\) can be expressed as
(51)
From the expression of A, we can see that the \(\tilde N \times N\) matrix \(\textbf{A}^{H}\) is composed of the first N columns of the \(\tilde N \times \tilde N\) DFT matrix \(\textbf {F}_{\tilde N}\), i.e., \(\textbf{A}^{H} = \textbf{F}_{:,1:N}\) (to facilitate the derivation, we omit the subscript of \(\textbf {F}_{\tilde N}\) hereafter). Substituting F for A, (51) can be rewritten as
(52)
Therefore, the phase gradient (52) can be implemented by FFT operations. Let
$$ {\textbf{y} = {\mathcal{F}}_{\tilde N}^{- 1}\left({\textbf{w} \circ {{\left| {{{\mathcal{F}}_{\tilde N}}\left(\textbf{x} \right)} \right|}^{2}} \circ {{\mathcal{F}}_{\tilde N}}\left(\textbf{x} \right)} \right),} $$
(53)
then the phase gradient can be further simplified as
(54)

where \(\textbf{y}_{1:N}\) denotes the first N elements of y.
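The quantity y of (53) can be formed with three transforms and two Hadamard products. A pure-Python sketch (naive O(N²) transforms standing in for the FFT; the \(\tilde N = N\) case is shown, and for the aperiodic case x would be zero-padded to \(\tilde N = 2N\) first). The cross-check, for w equal to all ones, uses the convolution theorem, under which y reduces to the circular convolution of x with its periodic autocorrelation:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def y_of(x, w):
    # Eq. (53): y = IDFT( w o |DFT(x)|^2 o DFT(x) ), "o" = Hadamard product.
    X = dft(x)
    return idft([wk * abs(Xk) ** 2 * Xk for wk, Xk in zip(w, X)])

n = 8
x = [cmath.exp(2j * cmath.pi * 0.31 * t * t) for t in range(n)]  # arbitrary test input
w = [1.0] * n
y = y_of(x, w)

# Cross-check: with w = 1, y equals the circular convolution of x with its
# periodic autocorrelation c (convolution theorem).
c = idft([abs(v) ** 2 for v in dft(x)])
y_ref = [sum(c[m] * x[(t - m) % n] for m in range(n)) for t in range(n)]
err = max(abs(a - b) for a, b in zip(y, y_ref))
print(err < 1e-9)   # True: the two constructions coincide
```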

Abbreviations

ADMM: 

Alternating direction method of multipliers

CAN: 

Cyclic algorithm new

CDF: 

Cumulative distribution function

CDMA: 

Code division multiple access

FCGA: 

FFT-based conjugate gradient algorithm

(I)DFT: 

(Inverse) discrete Fourier transform

(I)FFT: 

(Inverse) fast Fourier transform

ISL: 

Integrated sidelobe level

LFM: 

Linear frequency modulation

LPNN: 

Lagrange programming neural network

MF: 

Merit factor

MISL: 

Monotonic minimizer for integrated sidelobe level

MM: 

Majorization-minimization

MMSE: 

Minimum mean square error

NAL: 

Normalized autocorrelation level

PeCAN: 

Periodic-correlation cyclic algorithm new

PRP-CGA: 

Polak-Ribiere-Polyak-CGA

PSD: 

Power spectrum density

PSL: 

Peak sidelobe level

PSP: 

Peak stopband power

SCAN: 

Stopband cyclic algorithm new

SDCA: 

Steepest descent-based cyclic algorithm

WeCAN: 

Weighted cyclic algorithm new

WeSCAN: 

Weighted stopband cyclic algorithm new

Declarations

Acknowledgements

The authors want to thank XiaoLi Sun for her great help in Latex editing and the anonymous reviewers for their valuable comments.

Funding

This work is supported by the Key Projects in the National Science & Technology Pillar Program during the Twelfth Five-year Plan Period (No. 2014BAK12B00) and the National Natural Science Foundation of China (No. 61101186 and No. 61101179).

Availability of data and materials

Please contact the author for data (Matlab codes) requests.

Authors’ contributions

LT conceived and devised the idea. LT and YZ designed and performed the experiments. YZ and QF analyzed the experiment results. All authors contributed to writing and editing the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
National University of Defense Technology, Deya Road, Changsha, China

References

  1. S. W. Golomb, G. Gong, Signal design for good correlation: for wireless communication, cryptography, and radar (Cambridge University Press, Cambridge, 2005).
  2. H. He, J. Li, P. Stoica, Waveform design for active sensing systems: a computational approach (Cambridge University Press, Cambridge, 2012).
  3. N. Levanon, E. Mozeson, Radar signals (Wiley, New York, 2004).
  4. D. Tse, P. Viswanath, Fundamentals of wireless communication (Cambridge University Press, New York, 2005).
  5. S. Mertens, Exhaustive search for low-autocorrelation binary sequences. J. Phys. A Math. General 29(18), 473–481 (1996).
  6. W. H. Mow, K. L. Du, W. H. Wu, New evolutionary search for long low autocorrelation binary sequences. IEEE Trans. Aerosp. Electron. Syst. 51(1), 290–303 (2015).
  7. C. J. Nunn, G. E. Coxson, Polyphase pulse compression codes with optimal peak and integrated sidelobes. IEEE Trans. Aerosp. Electron. Syst. 45(2), 775–781 (2009).
  8. P. Borwein, R. Ferguson, Polyphase sequences with low autocorrelation. IEEE Trans. Inf. Theory 51(4), 1564–1567 (2005).
  9. P. Stoica, H. He, J. Li, New algorithms for designing unimodular sequences with good correlation properties. IEEE Trans. Signal Process. 57(4), 1415–1425 (2009).
  10. J. X. Song, P. Babu, D. P. Palomar, Optimization methods for designing sequences with low autocorrelation sidelobes. IEEE Trans. Signal Process. 63(15), 3998–4009 (2015).
  11. F. C. Li, Y. N. Zhao, X. L. Qiao, A waveform design method for suppressing range sidelobes in desired intervals. Signal Process. 96, 203–211 (2014).
  12. J. X. Song, P. Babu, D. Palomar, Sequence design to minimize the weighted integrated and peak sidelobe levels. IEEE Trans. Signal Process. 64(8), 2051–2064 (2016).
  13. H. Wu, Z. Y. Song, H. Q. Fan, Q. Fu, Designing sequence with low sidelobe levels at specified intervals based on PSD fitting. Electron. Lett. 51(1), 99–100 (2015).
  14. P. Stoica, H. He, J. Li, On designing sequences with impulse-like periodic correlation. IEEE Signal Process. Lett. 16(8), 703–706 (2009).
  15. A. Aubry, A. De Maio, Y. Huang, M. Piezzo, A. Farina, A new radar waveform design algorithm with improved feasibility for spectral coexistence. IEEE Trans. Aerosp. Electron. Syst. 51(2), 1029–1038 (2015).
  16. W. Rowe, P. Stoica, J. Li, Spectrally constrained waveform design. IEEE Signal Process. Mag. 31(3), 157–162 (2014).
  17. M. J. Lindenfeld, Sparse frequency transmit and receive waveform design. IEEE Trans. Aerosp. Electron. Syst. 40(3), 851–861 (2004).
  18. Z. T. Luo, K. Lu, X. Y. Chen, Z. S. He, Wideband signal design for over-the-horizon radar in cochannel interference. EURASIP J. Adv. Signal Process. 2014(1), 159 (2014).
  19. P. Ge, G. L. Cui, L. J. Kong, J. Y. Yang, Unimodular sequence design under frequency hopping communication compatibility requirements. EURASIP J. Adv. Signal Process. 2016(1), 137 (2016).
  20. H. He, P. Stoica, J. Li, Waveform design with stopband and correlation constraints for cognitive radar, in 2nd International Workshop on Cognitive Information Processing (CIP) (IEEE, Elba, 2010), pp. 344–349.
  21. G. Wang, Y. Lu, Designing single/multiple sparse frequency waveforms with sidelobe constraint. IET Radar Sonar Navig. 5(1), 32–38 (2011).
  22. H. Wu, Z. Y. Song, H. Q. Fan, Y. X. Li, Q. Fu, A new algorithm for sparse frequency waveform design with range sidelobes constraint. Chinese J. Electron. 24(3), 604–610 (2015).
  23. B. O'Donnell, J. M. Baden, Fast gradient descent for multi-objective waveform design, in 2016 IEEE Radar Conference (IEEE, Philadelphia, 2016), pp. 1–5.
  24. D. Zhao, Y. Wei, Y. Liu, Spectrum optimization via FFT-based conjugate gradient method for unimodular sequence design. Signal Process. 142, 354–365 (2018).
  25. J. L. Liang, H. C. So, C. S. Leung, J. Li, A. Farina, Waveform design with unit modulus and spectral shape constraints via Lagrange programming neural network. IEEE J. Sel. Topics Signal Process. 9(8), 1377–1386 (2015).
  26. X. Feng, Y. C. Song, Z. Q. Zhou, Y. N. Zhao, Designing unimodular waveform with low range sidelobes and stopband for cognitive radar via relaxed alternating projection. Int. J. Antennas Propag. 2016, 1–9 (2016).
  27. J. L. Liang, H. C. So, J. Li, A. Farina, Unimodular sequence design based on alternating direction method of multipliers. IEEE Trans. Signal Process. 64(20), 5367–5381 (2016).
  28. The Matlab code of the PeCAN. http://www.sal.ufl.edu/book/.

Copyright

© The Author(s) 2018
