Cognitive radar ambiguity function optimization for unimodular sequence

Abstract

An important characteristic of a cognitive radar is the capability to adjust its transmitted waveform to adapt to the radar environment. This adaptation requires an effective framework to synthesize waveforms sharing a desired ambiguity function (AF). Because of the volume-invariant property of the AF, the integrated sidelobe level (ISL) can only be minimized in a certain area of the time delay and Doppler frequency shift plane. In this paper, we propose a new algorithm for unimodular sequences that minimizes the ISL of an AF in a certain area, based on the phase-only conjugate gradient and the phase-only Newton’s method. To improve the detection performance of a moving target detecting (MTD) radar system, the slow-time ambiguity function (STAF) is defined, and the proposed algorithm is extended to optimize the range-Doppler response. We also devise a cognitive approach for a MTD radar that adaptively alters the sidelobe distribution of its STAF. At the simulation stage, the performance of the proposed algorithm is assessed to show its capability to properly shape the AF and STAF of the transmitted waveform.

1 Introduction

Cognitive radar (CR) is built on the notion of a cognitive cycle, in which the two key aspects are perception of the environment and control exercised on the environment by virtue of feedback of the information learnt through perception. Figure 1 summarizes the essence of cognitive radar in its most basic form. In a cognitive radar system, how the transmitted waveform adapts in response to information about the radar environment is a key enabling step [1]. Many research efforts have been devoted to radar waveform optimization methods, which have been developed based on different performance objectives. For detecting a particular target in the presence of additive signal-dependent noise, the waveform optimization method developed by Guerci [2, 3] is evaluated in terms of the signal-to-interference-plus-noise ratio (SINR) under a particular model of the system, interference, clutter, and targets. For estimating the parameters of a target from a given ensemble, the radar waveform should be designed to maximize the mutual information (MI) between the received signal and the target ensemble [4]. Besides, by exploiting a variety of knowledge sources, the radar can locate the range-Doppler bins where strong unwanted returns are predicted and synthesize a waveform whose ambiguity function (AF) exhibits low values in those interfering bins. In previous work [5], the idea of designing the slow-time ambiguity function (STAF) of the transmit waveform in a CR system has been discussed.

Fig. 1 Cognitive radar in its most basic form

In radar systems, unimodular (i.e., constant modulus) sequences are usually exploited and optimized for transmission. The integrated sidelobe level (ISL) of the autocorrelation function (ACF) is often used to express the goodness of the correlation properties of a given sequence. A transmitted sequence with a low ISL value reduces the risk that the echo of a weak target of interest is drowned in the sidelobes of a strong target or in clutter interference [6]. Additionally, a unimodular sequence has a low peak-to-average power ratio (PAR), which is especially desirable for the transmitter [7]. A large body of literature has focused on the topic of unimodular sequence synthesis with good properties, in particular an ACF with low ISL values. These synthesis methods can be summarized into two types. The first is to use well-known sequences, such as the Golomb sequence [8], the Frank sequence [9], and pseudo-random sequences, which have been proved to have low sidelobes and have been applied successfully in radar systems. The second is to synthesize a sequence that minimizes the ISL metric by optimization algorithms [10–12]. Because the problem of reducing the ISL metric may have multiple local minima, an exhaustive search algorithm has been proposed in [12]. The computational burden of this kind of algorithm increases significantly as the sequence length increases. Some optimization algorithms have been designed as local minimization algorithms to overcome this drawback [13–17]. Most of these algorithms converge quickly along descent directions and provide fast solutions. It is worthwhile to mention that the cyclic algorithms proposed in [13] can design long unimodular sequences that have virtually zero autocorrelation sidelobes in a specified lag interval.

In this paper, we mainly consider the ambiguity function synthesis problem for unimodular sequences. According to Woodward’s definition, AF is a two-dimensional function defined on the time delay and Doppler frequency shift plane. The AF is defined as follows:

$$ \chi(\tau, f_{d}) = \int_{-\infty}^{\infty} s(t) s^{*}(t+\tau) e^{j 2\pi f_{d} t} dt, $$
((1))

where τ and f_d denote the time delay and Doppler frequency shift, respectively, and s(t) is the radar waveform. The AF describes the matched filter response to the target signature. Its shape indicates the range and Doppler resolutions of the radar system; it also determines the matched filter output with respect to the interference produced by unwanted returns. It should come as no surprise that extensive research on AF synthesis exists in the literature [18–23]. Despite this effort, few methods can synthesize a desired AF successfully. In [22], the cross ambiguity function was considered instead of the AF, and a pair consisting of the waveform and the receiving filter was designed simultaneously. Aubry et al. [23] deal with the design of a phase-coded pulse train that approximately maximizes the detection performance, with a similarity constraint between the ambiguity functions of the devised waveform and of the pulse train encoded with a prefixed sequence. De Maio et al. [5] also discuss the design problem of phase-coded pulse trains: the average value of the STAF of the transmitted signal over some range-Doppler bins is minimized with prior information.

Note that the volume of an AF, which is defined as

$$ V=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |\chi(\tau, f_{d})|^{2} d\tau df_{d} $$
((2))

is equal to the squared energy of s(t). This volume-invariant property of the AF prevents the synthesis of an ideal AF that has a high narrow peak at the origin and zero sidelobes everywhere else. In this paper, we mainly focus on the synthesis of an AF that has a clear area close to the origin or a minimized ISL in a certain area of the time delay and Doppler frequency shift plane.

Additionally, it is known that a moving target detecting (MTD) radar system is designed to observe the target in range-Doppler bins [6]. Its detection performance is considerably affected by the range-Doppler response of the waveform used to illuminate the operation environment. Considering that a MTD radar transmits a burst of pulses in slow time, the STAF is defined to evaluate the range-Doppler response.

The main contributions of this paper are as follows:

  1. For optimizing the shape of an AF, an optimization algorithm is proposed based on the phase-only conjugate gradient (POCG) and the phase-only Newton’s method (PONM), which have been successfully applied in optimizing phased array radar beam patterns.

  2. We extend the proposed algorithm and present a method for optimizing the shape of the STAF.

  3. A cognitive approach for a MTD radar system is also provided in this work. The radar system can adaptively alter the sidelobe distribution of its STAF according to the interested area and the clutter distribution on the time delay and Doppler frequency shift plane. This scheme is especially attractive for detecting a target with a small radar cross section (RCS) in a heavy clutter scenario.

The rest of this work is organized as follows. Section 2 formulates the ambiguity function synthesis problem for unimodular sequences and proposes the optimization method based on POCG and PONM. Section 3 defines the STAF and extends the optimization algorithm to shape the STAF of a MTD radar system; a cognitive workflow is also given. Several numerical examples are presented in Section 4. Finally, concluding remarks and directions for future research are presented in Section 5.

2 Ambiguity function synthesis

2.1 Problem formulation

We consider a monostatic radar that transmits a burst of pulses. The transmit signal can be written as

$$ s(t) = \sum_{k=1}^{N} s_{k} p_{k}(t), $$
((3))

where N denotes the number of subpulses, s k is the sequence code of the kth subpulse, and p k (t) is the pulse-shaping function. The typical form of p k (t) is the rectangular pulse and can be expressed as

$$ p_{k}(t) = \frac{1}{\sqrt{t_{p}}} \text{rect} \left(\frac{t-(k-1)t_{p}}{t_{p}} \right), $$
((4))

where t p is the time duration of subpulses and

$$ \text{rect}(t) = \left\{ \begin{array}{ll} 1, & 0 \leq t \leq 1; \\ 0, & \text{elsewhere.} \end{array} \right. $$
((5))

Under the above assumptions, the AF of the transmit signal s(t) can be given by

$$ \begin{aligned} \chi_{s}\left(\tau, f_{d}\right) & = \int_{-\infty}^{\infty} s(t) s^{*}(t+\tau) e^{j 2 \pi f_{d} t }dt \\ & = \sum_{k=1}^{N} \sum_{l=1}^{N} s_{k} s_{l}^{*} \chi_{p}^{(k,l)}\left(\tau, f_{d}\right), \end{aligned} $$
((6))

where

$$ \begin{aligned} \chi_{p}^{(k,l)}\left(\tau, f_{d}\right) & = \int p_{k}(t) p_{l}^{*}(t+\tau) e^{j 2 \pi f_{d} t}dt \\ &= e^{j \pi f_{d} (2k-1) t_{p}} \cdot \frac{t_{p} - \left|\tau-(k-l)t_{p}\right|}{t_{p}}\\ &\quad\times\frac{\text{sin} \pi f_{d}\left(t_{p} - \left|\tau-(k-l)t_{p}\right|\right)}{\pi f_{d} \left(t_{p} - \left|\tau-(k-l)t_{p}\right|\right)}, \\ & \quad (k-l-1)t_{p} \leq \tau \leq (k-l+1)t_{p} \\ \end{aligned} $$
((7))

denotes the cross ambiguity function (CAF) of the pulse-shaping functions p k (t) and p l (t).

The AF χ s (τ,f d ) can be rewritten as

$$ \chi_{s}(\tau, f_{d}) = \mathbf{s}^{H} \mathbf{R}(\tau, f_{d}) \mathbf{s}, $$
((8))

where \(\mathbf {s}=[s_{1}, s_{2}, \ldots, s_{N}]^{T} \in \mathbb {C}^{N}\), \(\mathbb {C}^{N}\) denotes the complex N-space, (·)T and (·)H indicate transpose and conjugate transpose of a vector or matrix, respectively, and

$$ \mathbf{R}(\tau, f_{d}) = \left(\begin{array}{ccc} \chi_{p}^{(1,1)}(\tau, f_{d}) & \cdots & \chi_{p}^{(1,N)}(\tau, f_{d}) \\ \vdots & & \vdots \\ \chi_{p}^{(N,1)}(\tau, f_{d}) & \cdots & \chi_{p}^{(N,N)}(\tau, f_{d}) \\ \end{array}\right) $$
((9))

is the subpulse CAF matrix, which is fixed once the pulse-shaping function and the number of subpulses N are given. Therefore, the shape of the AF χ s (τ,f d ) is directly determined by the sequence codes \(\{s_{k}\}_{k=1}^{N}\) or the phase variables \(\{\phi _{k}\}_{k=1}^{N}\) of \(\{s_{k}\}_{k=1}^{N}\).

For convenience, the time delay and Doppler frequency shift plane, i.e., the τ–f_d plane, is discretized into grids with sufficient precision. The spacing of the grid is t_p along the time delay axis and 1/(Mt_p) along the Doppler frequency shift axis. By substituting τ=nt_p and f_d=m/(Mt_p) in Eq. (7), we have

$$ \chi_{p}^{(k,l)}(n, m) = \left\{ \begin{array}{ll} e^{j \pi (2k-1) m/M}, & \text{\(k=l+n\);} \\ 0, & \text{elsewhere} \end{array} \right. $$
((10))

and obtain the discretized AF (DAF), which can be expressed as

$$ \chi_{s}(n,m) = \mathbf{s}^{H} \mathbf{U}_{n,m} \mathbf{s}, $$
((11))

where U n,m =R(nt p ,m/(Mt p )).
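
For concreteness, the discretized AF of Eqs. (10) and (11) can be evaluated directly once the code vector s is fixed. The following Python sketch (a minimal illustration, not the authors' code; the sizes N and M below are placeholder values) builds U_{n,m} from Eq. (10) and evaluates χ_s(n,m) = s^H U_{n,m} s.

```python
import numpy as np

def U_matrix(n, m, N, M):
    """Subpulse CAF matrix of Eq. (10): nonzero only where k = l + n."""
    U = np.zeros((N, N), dtype=complex)
    for k in range(1, N + 1):            # k, l are 1-based as in the paper
        l = k - n
        if 1 <= l <= N:
            U[k - 1, l - 1] = np.exp(1j * np.pi * (2 * k - 1) * m / M)
    return U

def daf(s, n, m, M):
    """Discretized ambiguity function chi_s(n, m) = s^H U_{n,m} s (Eq. (11))."""
    N = len(s)
    return np.vdot(s, U_matrix(n, m, N, M) @ s)

# example: a random unimodular sequence
N, M = 100, 100
s = np.exp(1j * 2 * np.pi * np.random.rand(N))
print(abs(daf(s, 0, 0, M)))   # equals N at the origin, since U_{0,0} = I
print(abs(daf(s, 3, 5, M)))   # one sidelobe sample
```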

In this work, we aim at synthesizing the AF with a clear area close to the origin or minimized ISL in a certain area on τf d plane. Considering that the shape of AF can be controlled by the shape of DAF, we exploit the ISL metric of DAF, which is described as

$$ \text{ISL} = \sum_{(n,m) \subset I_{\Omega}} |\chi_{s}(n,m)|^{2}, $$
((12))

where I Ω is the subset of the range and Doppler bins (nt p ,m/Mt p ) on τf d plane. Additionally, the synthesized sequence should have constant modulus, i.e.,

$$ s_{k} = e^{j \phi_{k}}, k = 1,\ldots,N $$
((13))

where ϕ k is the phase of the kth sequence code s k . Therefore, we can think of synthesizing the unimodular sequence s as minimizing the ISL metric in Eq. (12) over the unimodular sequence set.

The ambiguity function synthesis problem in this paper can be formulated as

$$ \begin{aligned} & \text{min} \sum_{(n,m) \subset I_{\Omega}} |\mathbf{s}^{H} \mathbf{U}_{n,m} \mathbf{s}|^{2} \\ & \mathrm{s.t.} \quad |s_{k}| = 1, \quad k=1,2,\ldots,N \\ \end{aligned} $$
((14))

The objective function in Eq. (14) is a quartic form, which is relatively difficult to tackle. With the conclusions in [24], the objective function is also a non-convex function. Moreover, the constraint set is a non-convex set. Hence, this problem is a non-convex optimization problem. The paper [25] has suggested that maximum block improvement (MBI) algorithms are capable of providing some good-quality solutions to this kind of problem in polynomial time. A simplified and more practical method relies on the exploitation of a simpler criterion (in particular, a quadratic function) to replace the quartic function [26].
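
Before turning to the optimization itself, the ISL objective of Eqs. (12) and (14) for a given bin set I_Ω can be evaluated directly. A minimal sketch, reusing U_matrix and daf from the previous snippet (the chosen bins are arbitrary examples, not taken from the paper):

```python
# Illustrative evaluation of the ISL metric of Eq. (12)/(14) over a bin set I_Omega.
def isl(s, bins, M):
    return sum(abs(daf(s, n, m, M)) ** 2 for (n, m) in bins)

I_Omega = [(n, m) for n in range(-5, 6) for m in range(-2, 3) if (n, m) != (0, 0)]
print(isl(s, I_Omega, M))
```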

In general, constrained optimization problems such as this one are difficult to deal with because we must simultaneously perform the optimization and satisfy the constraint. It is worthwhile to point out that unconstrained gradient-based algorithms can be generalized to the constant modulus constraint case, so the constrained optimization problem can be transformed into an unconstrained one over the phases. With the derivatives of the objective function with respect to the phases, a local optimum can be obtained by gradient-based algorithms, such as the conjugate gradient method and Newton’s method. A local minimum can be found from the gradient equation by successive iterations provided that the Hessian matrix is (semi) positive definite. Furthermore, the resulting iterative algorithm is computationally efficient and easy to implement.

Based on the above considerations, and accounting for the complicated form of the objective function, we obtain a local optimum from its first-order and second-order derivatives instead. When the Hessian matrix is not (semi) positive definite, we can exploit the diagonal loading technique to make it so.

2.2 Optimization analysis

As already highlighted, a highly multi-modal optimization objective inevitably appears in Eq. (14), and it is hard to obtain the global optimum either analytically or numerically. In this section, we therefore seek a local optimum of the problem in Eq. (14) and propose a computationally efficient approach.

The first-order and second-order derivatives of the objective function in Eq. (14) can be respectively given by

$$ \frac{\partial \text{ISL}}{\partial \mathbf{\phi}} = \sum_{(n,m) \subset I_{\Omega}} \text{Re} \left[\left(\mathbf{s}^{H} \mathbf{U}_{n,m} \mathbf{s}\right)^{*} \text{Im} \left(\mathbf{s}^{*} \odot \mathbf{U}_{n,m} \mathbf{s}\right)\right] $$
((15))
$$ \begin{aligned} \frac{\partial^{2} \text{ISL}}{\partial\mathbf{\phi} \partial\mathbf{\phi}^{T}} = \sum_{(n,m) \subset I_{\Omega}} & \text{Re} \left[\left(\mathbf{s}^{H} \mathbf{U}_{n,m} \mathbf{s}\right)^{*} \mathbf{U}_{n,m} \odot \mathbf{s} \mathbf{s}^{H}\right] \\ & + \text{Im} \left(\mathbf{s}^{*} \odot \mathbf{U}_{n,m} \mathbf{s}\right) \text{Im} \left(\mathbf{s}^{*} \odot \mathbf{U}_{n,m} \mathbf{s}\right)^{H}, \\ \end{aligned} $$
((16))

where \(\mathbf {\phi } = [\phi _{1}, \phi _{2}, \ldots, \phi _{N}]^{T} \in \mathbb {R}^{N}\), \(\mathbb {R}^{N} \) denotes real N-space and Re(·) and Im(·) represent the real and imaginary part of a complex number, respectively (see the derivation in Appendix B).
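
For illustration, Eqs. (15) and (16) can be transcribed directly into code as printed in the main text (a sketch reusing U_matrix from the earlier snippet; constant scaling factors do not affect the stationary points):

```python
def grad_isl(s, bins, M):
    """Gradient of the ISL w.r.t. the phases, transcribing Eq. (15)."""
    N = len(s)
    g = np.zeros(N)
    for (n, m) in bins:
        U = U_matrix(n, m, N, M)
        chi = np.vdot(s, U @ s)                  # s^H U s
        g += np.real(np.conj(chi) * np.imag(np.conj(s) * (U @ s)))
    return g

def hess_isl(s, bins, M):
    """Hessian of the ISL w.r.t. the phases, transcribing Eq. (16)."""
    N = len(s)
    H = np.zeros((N, N))
    for (n, m) in bins:
        U = U_matrix(n, m, N, M)
        chi = np.vdot(s, U @ s)
        v = np.imag(np.conj(s) * (U @ s))        # Im(s* ⊙ U s), a real vector
        H += np.real(np.conj(chi) * (U * np.outer(s, np.conj(s))))  # Re[(s^H U s)* U ⊙ s s^H]
        H += np.outer(v, v)
    return H
```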

The set of local minima (including the global optima) is simply a subset of the stationary points \(\{ \widetilde {\mathbf {s}} \}\), which are characterized by

$$ \left.\frac{\partial \text{ISL} }{\partial \mathbf{\phi}} \right|_{\mathbf{s} = \widetilde{\mathbf{s}}} = 0. $$
((17))

Moreover, \(\widetilde {\mathbf {s}}\) is also a local minimum if and only if

$$ \left.\frac{\partial^{2} \text{ISL} }{ \partial \mathbf{\phi} \partial \mathbf{\phi}^{T}} \right|_{\mathbf{s} = \widetilde{\mathbf{s}}} \geq 0. $$
((18))

Namely, the Hessian matrix of the ISL metric is required to be (semi) positive definite at \(\widetilde {\mathbf {s}}\). When \(\frac {\partial ^{2} \text {ISL} }{ \partial \mathbf {\phi } \partial \mathbf {\phi }^{T}}\) is positive definite, the stationary points \(\{ \widetilde {\mathbf {s}} \}\) are local minima.

In the following discussion, we denote the Hessian matrix in Eq. (18) by \(\mathbf{U}^{\ddag}\). Note that this matrix can be made (semi) positive definite using the diagonal loading technique, which implies

$$ \mathbf{U}^{\ddag} + \lambda N^{2} \mathbf{I} \geq 0, $$
((19))

where I is the identity matrix, λ is a constant coefficient that should satisfy \(\lambda + \delta_{\min}(\mathbf{U}^{\ddag})/N^{2} \geq 0\), and \(\delta_{\min}(\mathbf{U}^{\ddag})\) denotes the smallest eigenvalue of the Hessian matrix \(\mathbf{U}^{\ddag}\).
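
As an illustration of the loading rule in Eq. (19), a simple way to pick λ from the spectrum of the current Hessian is sketched below; the symmetrization step and the small margin are implementation details assumed here, not taken from the paper.

```python
import numpy as np

def loaded_hessian(H, N, margin=1e-6):
    """Diagonally load H so that H + lam*N^2*I is positive semidefinite, per Eq. (19)."""
    Hs = 0.5 * (H + H.T)                       # symmetrize before the eigen-analysis
    eigmin = np.linalg.eigvalsh(Hs).min()      # most negative eigenvalue sets the offset
    lam = max(0.0, (-eigmin + margin) / N ** 2)
    return Hs + lam * N ** 2 * np.eye(N), lam
```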

We also note that (see the proof in Appendix C)

$$ \mathbf{s}^{H} \mathbf{U}_{0,0} \mathbf{s} = N^{2}, $$
((20))

and

$$ \begin{aligned} & \text{Re} [ (\mathbf{s}^{H} \mathbf{U}_{0,0} \mathbf{s})^{*} \mathbf{U}_{0,0} \odot \mathbf{s} \mathbf{s}^{H} ] \\ & + \text{Im} (\mathbf{s}^{*} \odot \mathbf{U}_{0,0} \mathbf{s}) \text{Im} (\mathbf{s}^{*} \odot \mathbf{U}_{0,0} \mathbf{s})^{H} = N^{2} \mathbf{I}, \\ \end{aligned} $$
((21))

where ⊙ denotes the Hadamard (element-wise) product of matrices and \(\mathbf{U}_{0,0} = \mathbf{I}\). Hence, the corresponding optimization problem in (14) can be transformed into

$$ \begin{aligned} \text{min} \quad \rho = &\sum_{(n,m) \subset I_{\Omega}} \left|\mathbf{s}^{H} \mathbf{U}_{n,m} \mathbf{s}\right|^{2} + \lambda \left|\mathbf{s}^{H} \mathbf{U}_{0,0}\mathbf{s}\right|^{2} \\ & \mathrm{s.t.} \quad |s_{k}| = 1, \quad k=1,2,\ldots,N. \\ \end{aligned} $$
((22))

The first-order and second-order derivatives of the objective function in Eq. (22) can be respectively given by

$$ \begin{aligned} \frac{\partial \mathrm{\rho}}{\partial \mathbf{\phi}} = \sum_{(n,m) \subset I_{\Omega}} \text{Re} \left[\left(\mathbf{s}^{H} \mathbf{U}_{n,m} \mathbf{s}\right)^{*} \text{Im} \left(\mathbf{s}^{*} \odot \mathbf{U}_{n,m} \mathbf{s}\right)\right]\\ \end{aligned} $$
((23))
$$ \begin{aligned} \frac{\partial^{2} \mathrm{\rho}}{\partial\mathbf{\phi} \partial\mathbf{\phi}^{T}} &= \sum_{(n,m) \subset I_{\Omega}} \text{Re} \left[\left(\mathbf{s}^{H} \mathbf{U}_{n,m} \mathbf{s}\right)^{*} \mathbf{U}_{n,m} \odot \mathbf{s} \mathbf{s}^{H}\right] \\ &\quad + \text{Im} \left(\mathbf{s}^{*} \odot \mathbf{U}_{n,m} \mathbf{s}\right) \text{Im} \left(\mathbf{s}^{*} \odot \mathbf{U}_{n,m} \mathbf{s}\right)^{H} +\lambda N^{2} \mathbf{I}.\\ \end{aligned} $$
((24))

Since such a diagonal loading does not change the solutions of the gradient equation in Eq. (15), the local minimum \(\overline {\mathbf {s}}\) can now be obtained by

$$ \sum_{(n,m) \subset I_{\Omega}} \text{Re} \left[\left(\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)^{*} \text{Im} \left(\mathbf{\overline{s}}^{*} \odot \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)\right] =0 \\ $$
((25))

over the constant modulus set. This equation is equivalent to the following expression (see the proof in Appendix D):

$$ \sum_{(n,m) \subset I_{\Omega}} \text{Re}\left[j \left(\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)^{*} \left(\mathbf{\overline{s}}^{*} \odot \mathbf{U}_{n,m} \mathbf{\overline{s}}\right) \right] =0. $$
((26))

Consequently, a local minimum \(\overline {\mathbf {s}}\) can be characterized by

$$ \sum_{(n,m) \subset I_{\Omega}} \left(\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)^{*} \mathbf{U}_{n,m} \mathbf{\overline{s}} = v \mathbf{\overline{s}} $$
((27))

or

$$ \sum_{(n,m) \subset I_{\Omega}} \left(\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)^{*} \mathbf{U}_{n,m} = v \mathbf{I}, $$
((28))

where v is a real number.

2.3 Optimization method

The conjugate gradient method is used to compute the optimizing value of a function defined on a vector space using only first-derivative information. The only difference between the standard conjugate gradient method and the phase-only conjugate gradient method is that lines in Euclidean space are replaced by lines on the N-torus, i.e., for a phase-only vector s, direction \(\mathbf{h}_{c} = \partial \rho / \partial \mathbf{\phi}\), and step size t, the Euclidean update is replaced as

$$ \mathbf{s}+t \mathbf{h}_{c} \quad \rightarrow \quad e^{j t \text{Diag}(\mathbf{h}_{c})} \mathbf{s}. $$
((29))

Newton’s method can provide quadratic convergence to an optimum solution. However, the Hessian matrix must be computed at every step, and there is a possibility of converging to a non-optimal critical point. Common practice when applying such algorithms is to first use the conjugate gradient method to get close to a solution and then use Newton’s method to reach the solution within machine accuracy. Newton’s iteration is obtained by moving in the direction

$$ \mathbf{h}_{n} = - \left(\frac{\partial^{2} \mathrm{\rho}}{\partial\mathbf{\phi} \partial\mathbf{\phi}^{T}} \right)^{-1} \frac{\partial \mathrm{\rho}}{\partial \mathbf{\phi}} = - \mathbf{U}^{\ddag (-1)} \mathbf{h}_{c}. $$
((30))

Let s i and s i+1 be the sequence at the ith and (i+1)th iteration. The detailed steps incorporating the phase-only conjugate gradient method and the phase-only Newton’s method are given as follows:

  1. Select \(\phi _{0} \in \mathbb {R}^{N}\), compute \(\mathbf{g}_{0}=\mathbf{h}_{0}=\partial \rho(\mathbf{s}_{0})/\partial \mathbf{\phi}\), and set i=0.

  2. For i=0,1,…,N_c, compute t_i such that

    $$ \rho\left(e^{jt_{i} \text{Diag}\left(\mathbf{h}_{i}\right)} \mathbf{s}_{i}\right) > \rho\left(e^{jt \text{Diag}\left(\mathbf{h}_{i}\right)} \mathbf{s}_{i}\right) $$

    for all t>0 (line optimization).

  3. Set \(\mathbf {s}_{i+1} = e^{jt_{i} \text {Diag}\left (\mathbf {h}_{i}\right)} \mathbf {s}_{i}\).

  4. Set

    $$ \mathbf{g}_{i+1} = \frac{\partial \rho\left(\mathbf{s}_{i+1}\right)}{\partial \mathbf{\phi}} $$
    $$ \mathbf{h}_{i+1} = \mathbf{g}_{i+1} + \gamma_{i} \mathbf{h}_{i} $$
    $$ \gamma_{i} = \frac{\left(\mathbf{g}_{i+1}- \mathbf{g}_{i}\right)^{T} \mathbf{g}_{i+1}}{\| \mathbf{g}_{i} \|^{2}}. $$
  5. Set i=i+1; if i<N_c, go to Step 2; otherwise, go to Step 6.

  6. Compute

    $$ \mathbf{U}^{\ddag}(\mathbf{s}_{i}) = \frac{\partial^{2} \mathrm{\rho(\mathbf{s}_{i})}}{\partial\mathbf{\phi} \partial\mathbf{\phi}^{T}} $$
    $$ \textbf{h}_{i} = - \mathbf{U}^{\ddag(-1)}(\mathbf{s}_{i}) \mathbf{g}_{i}. $$
  7. Set i=i+1 and repeat Step 6 until \(\| \rho (\mathbf {s}_{i+1}) -\rho (\mathbf {s}_{i}) \|_{2}^{2} < \varepsilon \), where ε is a predefined parameter.

The POCG algorithm requires on the order of 8ℓN²+ℓN real floating point operations (flops) to form the gradient vector, where ℓ is the number of samples available. Per iteration, it requires 8ℓN²+ℓN flops to compute the gradient and 2N flops to compute the updated search direction.

The PONM algorithm requires on the order of 8ℓN²+N² flops to form the Hessian matrix, 2N³/3+N²/4+2N flops to perform the matrix inversion, and 4N(N−1) flops to compute the product of a matrix and a vector.
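
To make the overall procedure concrete, the following sketch strings the previous helpers together in the spirit of Steps 1–7. The crude grid-based line search, the fixed iteration counts, and the use of the negative gradient as the descent direction for this minimization are simplifying assumptions of this illustration, not the authors' implementation.

```python
def synthesize(s0, bins, M, n_cg=50, n_newton=20, eps=1e-3):
    """Sketch of the POCG + PONM procedure with phase-only updates s <- exp(j*t*h) ⊙ s."""
    s = s0.copy()
    N = len(s)
    g_prev = h = None
    ts = np.linspace(0.0, 1.0, 21)[1:]          # crude grid for the line search

    def line_step(s, d):
        # pick the step t minimizing the ISL along the torus line exp(j*t*Diag(d)) s
        cands = [np.exp(1j * t * d) * s for t in ts]
        return min(cands, key=lambda x: isl(x, bins, M))

    for _ in range(n_cg):                        # phase-only conjugate gradient stage
        g = -grad_isl(s, bins, M)                # descent direction (negative gradient)
        if h is None:
            h = g
        else:
            gamma = np.dot(g - g_prev, g) / (np.dot(g_prev, g_prev) + 1e-16)
            h = g + gamma * h
        g_prev = g
        s = line_step(s, h)

    prev = isl(s, bins, M)
    for _ in range(n_newton):                    # phase-only Newton refinement
        H, _ = loaded_hessian(hess_isl(s, bins, M), N)
        d = -np.linalg.solve(H, grad_isl(s, bins, M))
        s = line_step(s, d)
        cur = isl(s, bins, M)
        if abs(prev - cur) ** 2 < eps:           # stopping rule with the predefined eps
            break
        prev = cur
    return s
```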

2.4 Selection of parameter λ

In the POCG and PONM optimization algorithms, the local/global optimum is obtained by successive iterations. It should be pointed out that the Hessian matrix \(\mathbf{U}^{\ddag}\) varies with the synthesized sequence during the optimization process, and the parameter λ_i should change with the smallest eigenvalue of \(\mathbf{U}^{\ddag}(\mathbf{s}_{i})\) to guarantee the positive definiteness of the Hessian matrix.

Two methods can be used to make the Hessian matrix positive definite. The first is to use a large enough constant value for λ. The second is to calculate the eigenvalues and eigenvectors of \(\mathbf{U}^{\ddag}(\mathbf{s}_{i})\) at every iteration. Note that the matrix inversion of \(\mathbf{U}^{\ddag}(\mathbf{s}_{i})\) is also required at every iteration, and \(\mathbf {U}^{\ddag }(\mathbf {s}_{i}) = \sum _{l=1}^{w_{i}} \delta _{l} \mathbf {v}_{i}^{l} \mathbf {v}_{i}^{l H}\), where \(w_{i} =\text{rank}(\mathbf{U}^{\ddag}(\mathbf{s}_{i}))\). The matrix inversion of \(\mathbf{U}^{\ddag}(\mathbf{s}_{i})\) after diagonal loading by λ_i can be given by

$$ \mathbf{U}^{\ddag(-1)}(\mathbf{s}_{i}) = \sum_{l=1}^{w_{i}} \frac{1}{\delta_{l} + \lambda_{i}} \mathbf{v}_{i}^{l} \mathbf{v}_{i}^{l H}, $$
((31))

where \(\lambda_{i} + \delta_{\min}(\mathbf{U}^{\ddag}(\mathbf{s}_{i})) > 0\).
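
Equation (31) amounts to inverting the loaded Hessian through its eigendecomposition; a minimal sketch (the numerical rank threshold is an assumption of this illustration):

```python
import numpy as np

def loaded_inverse(H, lam):
    """Inverse of the diagonally loaded Hessian via its eigendecomposition, per Eq. (31)."""
    Hs = 0.5 * (H + H.T)                       # symmetrize before decomposing
    w, V = np.linalg.eigh(Hs)                  # w: eigenvalues delta_l, V: eigenvectors v_l
    keep = np.abs(w) > 1e-10                   # drop the (numerically) null space
    return (V[:, keep] / (w[keep] + lam)) @ V[:, keep].T
```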

3 Slow-time ambiguity function synthesis in cognitive MTD radar

Motivated by higher performance requirements, radar systems can now exploit different sources of environmental information, such as geographic information databases, meteorological data, previous scans, electromagnetic reflectivity data, and spectral clutter models [27]. In this paper, we consider a cognitive MTD radar system that can locate the range and Doppler bins where clutter or interference is foreseen. The radar can then transmit a burst of waveforms whose STAF presents low sidelobe values in those bins.

3.1 STAF optimization

Now, we consider a monostatic MTD radar system which transmits a coherent burst of P slow-time pulses. The transmitted pulses can be written as

$$ x(t) = \sum_{i=0}^{P-1} s(t-iT_{r}), $$
((32))

where T_r is the pulse repetition interval and T_r > Nt_p.

From the viewpoint of matched filtering and MTD processing, we define the slow-time ambiguity function 𝜗(τ,f d ) as

$$ \begin{aligned} \vartheta(\tau, f_{d}) & = \int_{-\infty}^{\infty} x(t) x^{*}(t-\tau) e^{j 2 \pi f_{d} t} d t \\ & = \sum_{i=0}^{P-1} e^{-j 2 \pi i f_{d} T_{r}} \int_{-\infty}^{\infty} s(t) s^{*}(t-\tau) e^{j 2 \pi f_{d} t}dt \\ & = \sum_{i=0}^{P-1} e^{-j 2 \pi i f_{d} T_{r}} \chi_{s}(\tau,f_{d}), \\ & \quad -(T_{r}-Nt_{p}) \leq \tau \leq (T_{r}-Nt_{p}). \\ \end{aligned} $$
((33))

Note that

$$ \sum_{i=0}^{P-1} e^{-j 2 \pi i f_{d} T_{r}} = e^{-j \pi (P-1) f_{d} T_{r}} \frac{\text{sin}\left(\pi P f_{d} T_{r}\right)}{\text{sin}\left(\pi f_{d} T_{r}\right)}. $$
((34))

𝜗(τ,f d ) can also be written as

$$ \vartheta\left(\tau, f_{d}\right) = e^{-j \pi (P-1) f_{d} T_{r}} \frac{\text{sin}\left(\pi P f_{d} T_{r}\right)}{\text{sin}\left(\pi f_{d} T_{r}\right)} \chi_{s}(\tau,f_{d}). $$
((35))

Hence, the STAF 𝜗(τ,f d ) can be regarded as the product of the Doppler weighted function and the AF χ s (τ,f d ).

By substituting τ=nt p and f d =m/(Mt p ), the discretized form of 𝜗(τ,f d ) is given by

$$ \begin{aligned} \vartheta(n, m) & = \sum_{i=0}^{P-1} e^{-j 2 \pi i f_{d} T_{r}} \chi_{s}(n,m) \\ & = e^{-j \pi m (P-1) T_{r}/Mt_{p}} \frac{\text{sin}(\pi mPT_{r}/Mt_{p})}{\text{sin}(\pi mT_{r}/Mt_{p})} \chi_{s}(n,m). \end{aligned} $$
((36))
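
Under this discretization, the STAF magnitude is simply the DAF scaled by the Doppler weighting (the Dirichlet-kernel ratio in Eq. (36), later denoted ρ_m in Eq. (40)). A short sketch reusing the daf helper from Section 2; P, T_r, M, and t_p are left as parameters, and the unit-modulus phase factor is dropped since only |ϑ(n,m)| enters the ISL:

```python
def doppler_weight(m, P, Tr, M, tp):
    """Slow-time Doppler weighting rho_m (the sine ratio in Eq. (36)/Eq. (40))."""
    x = np.pi * m * Tr / (M * tp)
    if abs(np.sin(x)) < 1e-12:                 # limit at the Dirichlet-kernel peaks
        return P
    return np.sin(P * x) / np.sin(x)

def staf(s, n, m, M, P, Tr, tp):
    """Discretized slow-time AF of Eq. (36), up to its unit-modulus phase factor."""
    return doppler_weight(m, P, Tr, M, tp) * daf(s, n, m, M)
```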

In this section, we intend to synthesize the STAF 𝜗(n,m) with minimized ISL in the range-Doppler bins where the clutter exists. The ISL metric for STAF can be expressed as

$$ \text{ISL} = \sum_{(n,m) \subset I_{C}} \left|\vartheta(n, m)\right|^{2}, $$
((37))

where I C is the subset of the range and Doppler bins, whose sidelobes are desired to be suppressed as much as possible at the output of the MTD processor.

The interested area and the clutter area are depicted on the discretized time delay and Doppler frequency shift plane in Fig. 2. Without loss of generality, the center of the interested area can be assumed to be at the origin of the range-Doppler plane. This means that the matched filter and MTD processing response to a clutter return depends on the difference between its time delay and Doppler frequency shift and those of the center of the interested area.

Fig. 2 Interested area and clutter area on the time delay and Doppler frequency shift plane

Taking into account that the synthesized waveform should have constant modulus, the STAF optimization problem for a MTD radar system can be summarized as

$$ \begin{aligned} & \text{min} \sum_{(n,m) \subset I_{C}} |\vartheta(n, m)|^{2} \\ & \mathrm{s.t.} |s_{k}|=1, k=1,2,\ldots,N. \\ \end{aligned} $$
((38))

Using Eq. (36), problem (38) is equivalent to

$$ \begin{aligned} & \text{min} \sum_{(n,m) \subset I_{C}} |\rho_{m}\chi(n, m)|^{2} \\ & \mathrm{s.t.} |s_{k}|=1, k=1,2,\ldots,N, \\ \end{aligned} $$
((39))

where

$$ \rho_{m} = \frac{\text{sin}\left(\pi mPT_{r}/Mt_{p}\right)}{\text{sin}\left(\pi mT_{r}/Mt_{p}\right)}. $$
((40))

With the derivation in the previous section, the formulation can also be given by

$$ \begin{aligned} & \text{min} \quad \sum_{(n,m) \subset I_{C}} \left| \rho_{m} \mathbf{s}^{H} \mathbf{U}_{n,m} \mathbf{s} \right|^{2} + \lambda \left|\mathbf{s}^{H} \mathbf{U}_{0,0} \mathbf{s}\right|^{2} \\ & \mathrm{s.t.} \quad |s_{k}|=1, \quad k=1,2,\ldots,N. \end{aligned} $$
((41))

The proposed optimization algorithm in Section 2.3 can also be used to solve this problem.

3.2 Workflow of a cognitive MTD radar

In Fig. 3, the workflow of a cognitive MTD radar is given. When the MTD radar begins to work, it utilizes unimodular sequences with good AF or ACF properties for transmission. Then, range-Doppler processing is carried out for information extraction. With the extracted information associated with the target and clutter, the radar system synthesizes a unimodular sequence by the proposed algorithm. In the next coherent processing interval (CPI), the MTD radar transmits the newly designed sequence. The above process is repeated, and the MTD radar system can operate in a dynamic environment with cognitive capability. This framework is especially attractive for the confirmation process: once a target has been found by a standard radar waveform, the detection can be confirmed reliably by transmitting the optimized waveform, which is matched to the operating scenario of the radar system.
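
The closed loop of Fig. 3 can be summarized schematically as below. The front-end and clutter-estimation helpers are crude stand-ins used only to make the loop self-contained, synthesize is the sketch from Section 2.3 (so, for simplicity, the unweighted ISL of Section 2 is reused in place of the ρ_m-weighted objective), and none of this is the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder stand-ins for the radar front end and the clutter estimator.
def transmit_and_receive(s):
    return s + 0.1 * (rng.standard_normal(len(s)) + 1j * rng.standard_normal(len(s)))

def estimate_clutter_bins(echoes):
    # placeholder: pretend the clutter occupies a fixed block of range-Doppler bins (I_C)
    return [(n, m) for n in range(30, 51) for m in range(4, 7)]

def cognitive_mtd_loop(s_init, n_cpi, M):
    s = s_init                                   # start from a sequence with good AF/ACF
    for _ in range(n_cpi):
        echoes = transmit_and_receive(s)         # one CPI of (simulated) raw data
        bins = estimate_clutter_bins(echoes)     # range-Doppler bins flagged as clutter
        s = synthesize(s, bins, M)               # re-shape the STAF sidelobes (Section 3.1)
    return s
```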

Fig. 3 Workflow of a cognitive MTD radar system

4 Numerical examples

In order to verify the effectiveness of the proposed algorithms, we present several numerical examples, including AF synthesis, STAF synthesis, and the detection performance of a cognitive MTD radar. In all of the following examples, the unimodular sequence has N=100 subpulses with rectangular pulse shaping. The time duration of each subpulse is t_p and that of the total waveform is T=100t_p. The pulse repetition interval is T_r=10T, and the number of pulses in a CPI is P=64. In the AF and STAF plots, the time delay axis τ is normalized by T and the Doppler frequency axis f_d is normalized by 1/T. The convergence of the proposed algorithm is tested using randomly generated sequences for initialization. In the iteration process, the parameter ε is set to 10^{-3}.

4.1 AF synthesis

Suppose that \(\Omega=\{(\tau,f_{d}) \mid |\tau|<0.2, |f_{d}|<0.01, (\tau,f_{d}) \neq (0,0)\}\) is the interested area, which is near the origin but excludes the origin of the τ–f_d plane. With a randomly generated sequence for initialization, the proposed algorithm is applied to minimize the ISL metric of the AF of the synthesized sequence.
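
For reference, the corresponding bin set on the (n, m) grid can be built as below, reading the condition as excluding only the origin and using the Section 4 normalizations (τ in units of T = N t_p, f_d in units of 1/T). The Doppler grid size M is left as a free parameter since it is not specified in the text.

```python
# Bin set for the region Omega of this example (an illustrative reconstruction).
def omega_bins(N, M, tau_max=0.2, fd_max=0.01):
    n_max = int(tau_max * N)                     # |tau|/T < 0.2  ->  |n| < 0.2*N
    m_max = int(fd_max * M / N)                  # |f_d|*T < 0.01 ->  |m| < 0.01*M/N
    return [(n, m) for n in range(-n_max, n_max + 1)
                   for m in range(-m_max, m_max + 1)
                   if (n, m) != (0, 0)]
```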

The AFs of the initial sequence and the synthesized sequence are shown in Fig. 4 a, b. The AF in Fig. 4 a presents high sidelobe values on the whole τ–f_d plane, whereas the desired low sidelobes in the interested area are clearly obtained in Fig. 4 b. Therefore, the synthesized sequence has a good capability of separating and detecting closely spaced targets.

Fig. 4 AF synthesis for unimodular sequence. a The AF of the initialization sequence. b The AF of the synthesized unimodular sequence. c The zero-Doppler range profile cut of AF. d The zero-delay Doppler profile cut of AF

Figure 4 c, d gives the zero-Doppler range profile cut and the zero-delay Doppler profile cut of the AF in Fig. 4 b. The sidelobes in the interested area are suppressed to about −40 dB along the time delay axis for |τ|<0.2. Because the synthesized sequence has constant modulus, the zero-delay Doppler profile cut is a sinc function.

4.2 STAF synthesis

The STAF can also be optimized by the algorithm proposed in Section 3. Two types of STAFs are examined in this example. The first type has a clear area close to the origin of the time delay and Doppler frequency shift plane and is especially attractive for detecting closely spaced targets. The specified area can be described as

$$ \Omega_{1} = \left\{\left(\tau,f_{d}\right) \,|\, \left|\tau\right|<0.1, \left|f_{d}\right|<5\times10^{-4} \right\}. $$
((42))

Figure 5 a, b shows the desired and synthesized STAFs of the first type on a log scale. The ISL of the STAF in Ω_1 is minimized, and the average sidelobe level of the obtained sequence is suppressed to about −50 dB in Fig. 5 b.

Fig. 5 STAF synthesis for unimodular sequence

The second type has a minimized ISL in a certain area, which is given by

$$ \begin{aligned} \Omega_{2} &= \left\{\left(\tau,f_{d}\right) | 0.3 <|\tau|<0.5, \right.\\ &\left. 4\times10^{-4} <|f_{d}|<6\times10^{-4} \right\}. \\ \end{aligned} $$
((43))

The desired and synthesized STAFs of the second type are plotted in Fig. 5 c, d. The ISL of the STAF in Ω_2 is reduced, and the average sidelobe level of the obtained sequence is suppressed to about −70 dB in Fig. 5 d.

4.3 STAF synthesis in a cognitive MTD radar system

In this example, a MTD radar system is designed as a CR system. The target and clutter distributions within the radar scene should be dynamically deciphered from the received backscattered signal, and these deciphered distributions over the STAF could then be used for the proposed synthesis approach. In Fig. 6 a, the clutter distribution on the τf d plane is plotted and a strong clutter block lies in

$$ \begin{aligned} \Omega_{C} = \{(\tau,f_{d})& | 0.3 <|\tau|<0.5, \\ & 4\times10^{-4} <|f_{d}|<6\times10^{-4} \}. \end{aligned} $$
((44))
Fig. 6 Processing results for a cognitive MTD radar

For ease of simulation, the clutter in every range-Doppler bin can be treated as a stationary scattering point. Hence, the whole clutter return is the superposition of all the returns from every range-Doppler scattering point.

We also assume that the target distribution can be described as

$$ \begin{aligned} \Omega_{T} & = \{(\tau,f_{d}) | |\tau|<0.1, |f_{d}|<5\times10^{-4} \} \end{aligned} $$
((45))

and consider the underlying RCS scintillation based on different Swerling models for the moving target. The optimized STAF is plotted in Fig. 6 b, in which low sidelobes are obtained in the target and heavy clutter areas.

According to the Swerling models, the RCS of a reflecting target can be described by a chi-square probability density function with a specific number of degrees of freedom. In this example, the Swerling I and III models are used to evaluate the detection performance of a cognitive MTD radar system. Both models describe a target whose backscattered magnitude is relatively constant during the dwell time: the RCS is constant from pulse to pulse but varies independently from scan to scan. For the Swerling I model, the RCS varies according to a chi-square probability density function with two degrees of freedom, i.e., the exponential density

$$ P(\sigma) = \frac{1}{\sigma_{\text{average}}} \cdot \text{exp} \left\{ \frac{-\sigma}{\sigma_{\text{average}}} \right\}. $$
((46))

The Swerling III model is defined like Swerling I but with four degrees of freedom. The scan-to-scan fluctuation follows the probability density

$$ P(\sigma) = \frac{4 \sigma}{\left(\sigma_{\text{average}}\right)^{2}} \cdot \text{exp} \left\{ \frac{-2\sigma}{\sigma_{\text{average}}} \right\}. $$
((47))

In Eqs. (46) and (47), σ is the value of RCS, and σ average is the mean value of RCS.
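
Scan-to-scan RCS values following Eqs. (46) and (47) can be drawn as below; the Swerling III case is implemented via a Gamma distribution with shape 2 and mean σ_average, which reproduces the density in Eq. (47) (a minimal sketch, not the simulation code of the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

def swerling_rcs(sigma_avg, n_scans, model=1):
    """Draw per-scan RCS values per Eq. (46) (Swerling I) or Eq. (47) (Swerling III)."""
    if model == 1:
        return rng.exponential(sigma_avg, n_scans)        # chi-square with 2 DOF
    if model == 3:
        return rng.gamma(2.0, sigma_avg / 2.0, n_scans)   # chi-square with 4 DOF
    raise ValueError("only Swerling I and III are considered here")
```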

In order to evaluate the detection performance, signal-to-clutter ratio (SCR) is defined as

$$ \mathrm{SCR }= \frac{P T_{r} \sigma_{\text{average}}^{2}}{Mt_{p} \int \int C(\tau,f_{d}) d\tau df_{d}}, $$
((48))

where C(τ,f d ) is the clutter distribution. In this definition, the average scattering power of the Swerling target model is compared with the average power of all the clutter scattering points.
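
A direct numerical reading of Eq. (48), approximating the double integral by a Riemann sum over a sampled clutter map (the grid steps d_tau and d_fd are assumptions of this sketch):

```python
import numpy as np

def scr(sigma_avg, clutter_map, P, Tr, M, tp, d_tau, d_fd):
    """SCR of Eq. (48); the clutter integral is approximated by a Riemann sum over a
    sampled clutter distribution C(tau, f_d) with grid steps d_tau and d_fd."""
    clutter_power = np.sum(clutter_map) * d_tau * d_fd
    return P * Tr * sigma_avg ** 2 / (M * tp * clutter_power)
```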

In Fig. 6 c, d, for the radar scene in Fig. 6 a, the detection probability versus SCR is given for the Swerling I and III target models, and the detection probabilities of the optimized, Frank, and Golomb sequences are compared. As expected, the optimized sequence outperforms the Frank and Golomb sequences, achieving a higher detection probability and suppressing the interference of the clutter returns at the output of the MTD processing. Furthermore, as the SCR increases, the detection probability rises accordingly for both the Swerling I and III models. These two figures highlight the capability of the proposed algorithm to suitably shape the STAF of the transmitted waveform.

5 Conclusions

An algorithm was proposed to synthesize a unimodular sequence by minimizing the sidelobe values of the AF in certain areas of the time delay and Doppler frequency shift plane. The algorithm is convergent both theoretically and in practice and has been shown to be useful for ISL minimization of the AF and STAF of the synthesized unimodular sequence.

A cognitive approach to devise waveforms for a MTD radar system was also put forward in this work. With this approach, the MTD radar system can adaptively optimize the STAF of its transmit waveform by minimizing the ISL metric of the interested area and clutter area on the time delay and Doppler frequency shift plane. The numerical example shows that better detection performance can be achieved by our proposed approach.

We note further that the computational efficiency of Newton’s method is limited by the matrix inversion, so the algorithm is practical for sequences with a length no longer than about 10^4. Therefore, in future work, we will try to find a more computation-saving approach.

6 Appendix

6.1 Subpulse cross ambiguity function

In order to verify the subpulse CAF in Eq. (7), we rewrite the kth and lth subpulse CAF expressions as follows:

$$ \chi_{p}^{(k,l)}(\tau, f_{d}) = \int_{-\infty}^{\infty} p_{k}(t) p_{l}^{*}(t+\tau) e^{j 2 \pi f_{d} t}dt, $$

where

$$ p_{k}(t) = \text{rect} \left(\frac{t-(k-1)t_{p}}{t_{p}}\right), \quad (k-1)t_{p} \leq t \leq kt_{p} $$
$$ \begin{aligned} p_{l}(t+\tau) &=\text{rect} \left(\frac{t+\tau-(l-1)t_{p}}{t_{p}}\right),\\ &\quad(l-1)t_{p} \leq (t+\tau) \leq lt_{p}. \end{aligned} $$

Note that the two subsets of t overlap with each other only when τ=(k−l)t_p+τ′, with |τ′|≤t_p. The integral in Eq. (7) can be calculated in two cases.

Case 1.

−t_p ≤ τ′ < 0

$$ \begin{aligned} \chi_{p}^{(k,l)}(\tau, f_{d}) & = \frac{1}{t_{p}}\int_{(k-1)t_{p}}^{kt_{p}+\tau'} e^{j 2 \pi f_{d} t}dt \\ & = \frac{1}{t_{p}} \cdot \frac{e^{j 2 \pi f_{d} t}}{j 2 \pi f_{d}} {\big|}_{(k-1)t_{p}}^{kt_{p}+\tau'} \\ & = e^{j \pi f_{d} (2k-1) t_{p}} e^{j \pi f_{d} \tau'} \frac{\text{sin} \pi f_{d} (t_{p} + \tau')}{\pi f_{d} (t_{p} + \tau')} \frac{t_{p} + \tau'}{t_{p}}. \end{aligned} $$

With f_d τ′ ≪ 1, the above equation can be simplified to

$$ \chi_{p}^{(k,l)}(\tau, f_{d}) = e^{j \pi f_{d} (2k-1) t_{p}} \frac{\text{sin} \pi f_{d} (t_{p} + \tau')}{\pi f_{d} (t_{p} + \tau')} \frac{t_{p} + \tau'}{t_{p}}. $$

Case 2.

0 ≤ τ′ ≤ t_p

$$ \chi_{p}^{(k,l)}(\tau, f_{d}) = e^{j \pi f_{d} (2k-1) t_{p}} \frac{\text{sin} \pi f_{d} (t_{p} - \tau')}{\pi f_{d} (t_{p} - \tau')} \frac{t_{p} - \tau'}{t_{p}}. $$

Therefore, the subpulse CAF can be summarized with the following expression as

$$ \begin{aligned} \chi_{p}^{(k,l)}(\tau, f_{d}) = & e^{j \pi f_{d} (2k-1) t_{p}} \\ &\cdot \frac{\text{sin} \pi f_{d} \left(t_{p} - |\tau-(k-l)t_{p}|\right)}{\pi f_{d} \left(t_{p} - |\tau-(k-l)t_{p}|\right)} \\ & \cdot \frac{t_{p} - |\tau-(k-l)t_{p}|}{t_{p}}, \\ & (k-l-1)t_{p} \leq \tau \leq (k-l+1)t_{p}. \\ \end{aligned} $$
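
As a quick sanity check of the closed form above, the expression can be evaluated on a small (τ, f_d) grid; the sketch below confirms that its magnitude peaks at τ=(k−l)t_p, f_d=0 (the chosen k, l, and grid are arbitrary examples, not from the paper).

```python
import numpy as np

def chi_p(k, l, tau, fd, tp):
    """Closed-form subpulse CAF derived above (under the f_d*tau' << 1 approximation)."""
    arg = tp - abs(tau - (k - l) * tp)
    if arg <= 0:
        return 0.0j                              # outside the support
    return np.exp(1j * np.pi * fd * (2 * k - 1) * tp) * np.sinc(fd * arg) * arg / tp

k, l, tp = 3, 1, 1.0
taus = np.linspace((k - l - 1) * tp, (k - l + 1) * tp, 41)
fds = np.linspace(-0.5 / tp, 0.5 / tp, 21)
mags = np.array([[abs(chi_p(k, l, t, f, tp)) for t in taus] for f in fds])
i, j = np.unravel_index(mags.argmax(), mags.shape)
print(taus[j], fds[i])                           # peak at tau = (k-l)*tp, f_d = 0
```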

6.2 Derivatives of ISL

It is assumed that \(\mathbf {s} = \left (e^{j \phi _{1}}, e^{j \phi _{2}},\ldots, e^{j \phi _{N}}\right)^{T}\) and noted that

$$ \begin{aligned} \text{ISL} & = \sum_{(n,m) \subset I_{\Omega}} \left|\mathbf{s}^{H} \mathbf{U}_{n,m} \mathbf{s}\right|^{2} \\ & = \sum_{(n,m) \subset I_{\Omega}} \sum_{k,l} \left| \mathbf{U}_{n,m}(k,l) e^{j(\phi_{l} - \phi_{k})} \right|^{2}. \end{aligned} $$

Let \(\gamma_{n,m}(\mathbf{s})=\mathbf{s}^{H} \mathbf{U}_{n,m} \mathbf{s}\) and \(1 \leq k_{0} \leq N\); we have

$$ \begin{aligned} \frac{\partial \text{ISL}}{\partial \phi_{k0}} & = \sum_{(n,m) \subset I_{\Omega}} \left\{ \frac{\partial \gamma_{n,m}(\mathbf{s}) }{\partial \phi_{k0}} \gamma_{n,m}^{*}(\mathbf{s}) + \frac{\partial \gamma_{n,m}^{*}(\mathbf{s}) }{\partial \phi_{k0}} \gamma_{n,m}(\mathbf{s}) \right\} \\ & = \sum_{(n,m) \subset I_{\Omega}} 2\text{Re} \left\{ \frac{\partial \gamma_{n,m}(\mathbf{s}) }{\partial \phi_{k0}} \gamma_{n,m}^{*}(\mathbf{s}) \right\}, \\ \end{aligned} $$

where

$$ \frac{\partial \gamma_{n,m}(\mathbf{s})}{\partial \phi_{k0}} = \text{Im} \big(e^{-j \phi_{k0}} \sum_{l} \mathbf{U}_{n,m}(k0,l) e^{j \phi_{l}} \big). $$

The first order derivative of ISL with respect to ϕ can be given by

$$ \begin{aligned} \frac{\partial \text{ISL}}{\partial \phi} = & 2 \sum_{(n,m) \subset I_{\Omega}} \text{Re} \left\{ \left(\mathbf{s}^{H} \mathbf{U}_{n,m} \mathbf{s}\right)^{*} \text{Im} \left(\mathbf{s}^{*} \odot \mathbf{U}_{n,m} \mathbf{s}\right) \right\}. \\ \end{aligned} $$

Similarly, the second order derivative can also be obtained by

$$ \begin{aligned} \frac{\partial^{2} \text{ISL}}{\partial \phi \partial \phi^{T}} = &\sum_{(n,m) \subset I_{\Omega}} \left\{ \frac{\partial^{2} \gamma_{n,m}(\mathbf{s}) }{\partial \phi \partial \phi^{T}} \gamma_{n,m}^{*}(\mathbf{s}) + \frac{\partial^{2} \gamma_{n,m}^{*}(\mathbf{s}) }{\partial \phi \partial \phi^{T}} \gamma_{n,m}(\mathbf{s}) \right.\\ &\left. + \frac{\partial \gamma_{n,m}(\mathbf{s}) }{\partial \phi} \frac{\partial \gamma_{n,m}^{*}(\mathbf{s})}{\partial \phi^{T}} + \frac{\partial \gamma_{n,m}^{*}(\mathbf{s}) }{\partial \phi} \frac{\partial \gamma_{n,m}(\mathbf{s})}{\partial \phi^{T}} \right\}, \\ \end{aligned} $$

where

$$ \frac{\partial^{2} \gamma_{n,m}(\mathbf{s}) }{\partial \phi \partial \phi^{T}} = \mathbf{U}_{n,m} \odot \mathbf{s} \mathbf{s}^{H}. $$

It can be simplified to

$$ \begin{aligned} \frac{\partial^{2} \text{ISL}}{\partial \phi \partial \phi^{T}} =&\, 2 \sum_{(n,m) \subset I_{\Omega}} \text{Re} \left[\left(\mathbf{s}^{H} \mathbf{U}_{n,m} \mathbf{s}\right)^{*} \mathbf{U}_{n,m} \odot \mathbf{s} \mathbf{s}^{H}\right] \\ & + \text{Im} \left(\mathbf{s}^{*} \odot \mathbf{U}_{n,m} \mathbf{s}\right) \text{Im} \left(\mathbf{s}^{*} \odot \mathbf{U}_{n,m} \mathbf{s}\right)^{H}. \\ \end{aligned} $$

6.3 Proof of Eqs. (20) and (21)

With the definition of the matrix U n,m , we have

$$ \mathbf{U}_{0,0} = \mathbf{I} $$

and

$$ \mathbf{s}^{H} \mathbf{U}_{0,0} \mathbf{s} = N^{2}. $$

Hence, the first term in Eq. (21) can be simplified and rewritten as

$$ \text{Re}\left[\left(\mathbf{s}^{H} \mathbf{U}_{0,0} \mathbf{s}\right)^{*} \mathbf{U}_{0,0} \odot \mathbf{s} \mathbf{s}^{H}\right] = N^{2} \mathbf{I}. $$

We also note that

$$ \mathbf{s}^{*} \odot \mathbf{U}_{0,0} \mathbf{s} = \mathbf{1}, $$

where 1=[1,1,…,1]^T. The second term in Eq. (21) can also be expressed as

$$ \text{Im} \left(\mathbf{s}^{*} \odot \mathbf{U}_{0,0} \mathbf{s}\right) \text{Im} \left(\mathbf{s}^{*} \odot \mathbf{U}_{0,0} \mathbf{s}\right)^{H} = \mathbf{0}. $$

With the above two equations, we obtain the equality in Eq. (21), which is expressed as

$$ \begin{aligned} &\text{Re}\left[\left(\mathbf{s}^{H} \mathbf{U}_{0,0} \mathbf{s}\right)^{*} \mathbf{U}_{0,0} \odot \mathbf{s} \mathbf{s}^{H} \right] + \text{Im}\left(\mathbf{s}^{*} \odot \mathbf{U}_{0,0} \mathbf{s}\right)\\ &\text{Im} \left(\mathbf{s}^{*} \odot \mathbf{U}_{0,0} \mathbf{s}\right)^{H} = N^{2} \mathbf{I}. \end{aligned} $$

6.4 Equality proof

As indicated in Section “Derivatives of ISL”, we have

$$ \begin{aligned} \frac{\partial \gamma_{n,m}(\mathbf{s})}{\partial \phi} & = -j\left(\mathbf{s}^{* }\odot \mathbf{U}_{n,m} \mathbf{s}\right) + j\left(\mathbf{s}^{*} \odot \mathbf{U}_{n,m} \mathbf{s}\right)^{*} \\ & = \text{Im} \left(\mathbf{s}^{*} \odot \mathbf{U}_{n,m} \mathbf{s}\right). \\ \end{aligned} $$

Eq. (26) can be rewritten as

$$ \begin{aligned} & \sum_{(n,m) \subset I_{\Omega}} \text{Re}\left[\left(\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)^{*} \text{Im} \left(\mathbf{\overline{s}}^{*} \odot \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)\right]\\ &\quad= \sum_{(n,m) \subset I_{\Omega}} \left\{ -\text{Re} \left[j \left(\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)^{*} \left(\mathbf{s}^{* }\odot \mathbf{U}_{n,m} \mathbf{s}\right)\right]\right.\\ &\qquad\left.+ \text{Re} \left[j\left(\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)^{*} \left(\mathbf{s}^{* }\odot \mathbf{U}_{n,m} \mathbf{s} \right)^{*} \right] \right\} =0. \\ \end{aligned} $$

The following equality can be obtained from the above equation:

$$ \begin{aligned} & \sum_{(n,m) \subset I_{\Omega}} \text{Re} \left[j \left(\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)^{*} \left(\mathbf{s}^{* }\odot \mathbf{U}_{n,m} \mathbf{s} \right) \right]\\ &\quad = \sum_{(n,m) \subset I_{\Omega}} \text{Re} \left[j \left(\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)^{*} \left(\mathbf{s}^{* }\odot \mathbf{U}_{n,m} \mathbf{s} \right)^{*} \right]. \end{aligned} $$

This expression can be expanded in terms of the real and imaginary parts of \(\mathbf {\overline {s}}^{H} \mathbf {U}_{n,m} \mathbf {\overline {s}}\) and \(\mathbf{s}^{*} \odot \mathbf{U}_{n,m} \mathbf{s}\), giving

$$ \begin{aligned} &\sum_{(n,m) \subset I_{\Omega} }-\text{Im}\left[\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right] \text{Re}\left[\mathbf{s}^{*}\odot \mathbf{U}_{n,m} \mathbf{s}\right]\\& \quad+ \text{Re}\left[\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right] \text{Im}\left[\mathbf{s}^{* }\odot \mathbf{U}_{n,m} \mathbf{s}\right] \\ &=\sum_{(n,m) \subset I_{\Omega}} \text{Im}\left[\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right] \text{Re}\left[\mathbf{s}^{* }\odot \mathbf{U}_{n,m} \mathbf{s}\right]\\ &\quad+ \text{Re}\left[\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right] \text{Im}\left[\mathbf{s}^{* }\odot \mathbf{U}_{n,m} \mathbf{s}\right] \end{aligned} $$

which implies

$$ \sum_{(n,m) \subset I_{\Omega}} \text{Im}\left[\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right] \text{Re}\left[\mathbf{s}^{* }\odot \mathbf{U}_{n,m} \mathbf{s}\right] = 0. $$

Note that from

$$ \begin{aligned} &\sum_{(n,m) \subset I_{\Omega}} \text{Re} \left[\left(\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)^{*} \text{Im} \left(\mathbf{\overline{s}}^{*} \odot \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)\right]\\ &= \sum_{(n,m) \subset I_{\Omega}} \text{Re} \left[\left(\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)^{*}\right] \text{Im} \left[\left(\mathbf{\overline{s}}^{*} \odot \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)\right] = 0, \end{aligned} $$

we can obtain

$$ \begin{aligned} & \sum_{(n,m) \subset I_{\Omega}} \left\{ \text{Im}\left[\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right] \text{Re}\left[\mathbf{s}^{* }\odot \mathbf{U}_{n,m} \mathbf{s}\right]\right.\\& \quad\left.+ \text{Re} \left[\left(\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)^{*}\right] \text{Im} \left[\left(\mathbf{\overline{s}}^{*} \odot \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)\right] \right\} \\ & = \sum_{(n,m) \subset I_{\Omega}} \text{Re}\left[j\left(\mathbf{\overline{s}}^{H} \mathbf{U}_{n,m} \mathbf{\overline{s}}\right)^{*} \left(\mathbf{\overline{s}}^{*} \odot \mathbf{U}_{n,m} \mathbf{\overline{s}}\right) \right]. \end{aligned} $$

References

  1. S Haykin, Cognitive radar: a way of the future. IEEE Signal Process. Mag. 23(1), 30–40 (2006).

  2. PG Grieve, JR Guerci, Optimum matched illumination reception radar. U.S. Patent S517552 (1992).

  3. SU Pillai, HS OH, DC Youla, JR Guerci, Optimum transmit-receiver design in the presence of signal-dependent interference and channel noise. IEEE Trans. Inf. Theory. 46(2), 577–584 (2000).

  4. MR Bell, Information theory and radar waveform. IEEE Trans. Inf. Theory. 39(5), 1578–1597 (1993).

  5. A De Maio, S De Nicola, Y Huang, ZQ Luo, S Zhang, Design of phase codes for radar performance optimization with a similarity constraint. IEEE Trans. Signal Process. 57(2), 30–40 (2009).

  6. M Skolnik, Radar Handbook, 3rd ed (McGraw Hill, New York, 2008).

  7. N Levanon, E Mozeson, Radar Signals (NY, Wiley, 2004).

  8. N Zhang, SW Golomb, Polyphase sequence with low autocorrelations. IEEE Trans. Inf. Theory. 39(3), 1085–1089 (1993).

  9. R Frank, Polyphase codes with good nonperiodic correlation properties. IEEE Trans. Inf. Theory. 9(1), 43–45 (1963).

  10. CD Groot, D Wurtz, KH Hoffmann, Low autocorrelation binary sequences: exact enumeration and optimization by evolutionary strategies. Optimization. 23(4), 369–384 (1992).

  11. HD Schotten, HD Luke, On the search for low correlated binary sequences. Int. J. Electron. Commun. 59(2), 67–78 (2005).

  12. S Mertens, Exhaustive search for low-autocorrelation binary sequences. J. Phys. A. 29:, 473–481 (1996).

  13. P Stoica, H He, J Li, New algorithms for designing unimodular sequences with good correlation properties. IEEE Trans. Signal Process. 57(4), 1415–1425 (2009).

  14. J Li, P Stoica, X Zheng, Signal synthesis and receiver design for MIMO radar imaging. IEEE Trans. Signal Process. 56(8), 3959–3968 (2008).

  15. M Soltanalian, P Stoica, Computational design of sequences with good correlation properties. IEEE Trans. Signal Process. 60(5), 2180–2193 (2012).

  16. P Stoica, H He, J Li, On designing sequences with impulse-like periodic correlation. IEEE Trans. Signal Process. Lett. 16(8), 703–706 (2009).

  17. M Soltanalian, P Stoica, Designing unimodular codes via quadratic optimization is not always hard. IEEE Trans. Signal Process. 57(6), 1221–1234 (2009).

  18. S Sussman, Least-square synthesis of radar ambiguity functions. IEEE Trans. Inf. Theory. 8(3), 246–254 (1962).

  19. JD Wolf, GM Lee, CE Suyo, Radar waveform synthesis by mean-square optimization techniques. IEEE Trans. on Aero. Elec. Sys. 5(4), 611–619 (1968).

  20. I Gladkova, D Chebanov, in International Conference on Radar Systems, Toulouse. On the synthesis problem for a waveform having a nearly ideal ambiguity functions (France, 2004), pp. 1–5.

  21. YI Abramovich, BG Danilov, AN Meleshkevich, Application of integer programming to problems of ambiguity function optimization. Radio Eng. Elect. Phys. 22(5), 48–52 (1977).

  22. H He, P Stoica, in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. On synthesizing cross ambiguity functions (Prague, 2011), pp. 3536–3539.

  23. A Aubry, A De Maio, B Jiang, S Zhang, Ambiguity function shaping for cognitive radar via complex quartic optimization. IEEE Trans. Signal Process. 61(22), 5603–5619 (2013).

  24. S Boyd, L Vandenberghe, Convex Optimization (Cambridge Univ. Press, Cambridge, 2004).

  25. B Chen, S He, Z Li, S Zhang, Maximum block improvement and polynomial optimization. SIAM J. Optimiz. 22(11), 87–107 (2012).

  26. M Soltanalian, P Stoica, Designing unimodular codes via quadratic optimization. IEEE Trans. Signal Process. 62(5), 1221–1234 (2014).

  27. JR Guerci, Cognitive Radar, The Knowledge-Aided Fully Adaptive Approach (Artech House, Norwood, MA, 2010).

Acknowledgements

This work was supported by the National Natural Science Foundation of China under grant 61201367, the Natural Science Foundation of Jiangsu Province under grant BK2012382, the Aeronautical Science Foundation of China under grant 20142052019, the Fundamental Research Funds for Central Universities under grant NS2016042, and the Cooperative Innovation Foundation Project in Jiangsu Province under grant BY2014003-5.

Author information

Corresponding author

Correspondence to Xiaoyan Qiu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Zhang, J., Qiu, X., Shi, C. et al. Cognitive radar ambiguity function optimization for unimodular sequence. EURASIP J. Adv. Signal Process. 2016, 31 (2016). https://doi.org/10.1186/s13634-016-0325-3
