
# Target maneuver discrimination using ISAR image in interception

*EURASIP Journal on Advances in Signal Processing*
**volume 2016**, Article number: 24 (2016)

## Abstract

Discriminating target maneuver magnitude and direction switching during the endgame is significant for interception performance improvement. Inverse synthetic aperture radar (ISAR) images carry information related to target motion parameters, so it is feasible to use them to discriminate the maneuver. An imaging model in interception is first formulated. The principle of maneuver discrimination using ISAR images is then fully explored. A novel and practical discriminator is developed with a rigorous analysis of the scenario characteristics. The discriminator parameter selection and some important factors affecting the discrimination performance are discussed comprehensively. Finally, a simulation environment with software tools capable of generating target-realistic ISAR images is developed. The simulation results confirm the rationality of the design procedure and demonstrate that the proposed discriminator performs better than the classical innovation-based maneuver discriminator.

## Introduction

The interception of a highly maneuverable target is a representative optimal control problem, and target maneuver is one of the main error sources for nonzero miss distance, as guidance theory points out [1]. Fast response to a maneuver onset and exact discrimination of the acceleration change are important for interception performance improvement. Since target maneuvers are independently controlled and target acceleration cannot be measured directly by existing sensors, acceleration can only be acquired by means of state estimation.

Conventional works commonly adopt the acceleration magnitude change from zero to nonzero as the maneuver indication [2, 3]. However, a large number of simulation studies and flight tests have demonstrated that maneuvering in a fixed direction (constant acceleration, denoted as a type I maneuver) does not usually pose a real challenge to the interceptor guidance system. Rather, the more difficult problem (handled by the target acceleration estimator) is to detect a single, randomly timed maneuver direction switch (MDS) during the endgame [4] (denoted as a type II maneuver). It has also been found that such bang-bang evasion maneuvers are optimal for interception avoidance [1, 5]. Hence, this paper focuses on the maneuver discrimination for a single MDS in interception.

According to the information employed in the maneuver discriminator, current techniques can be mainly classified into two categories: innovation-based and feature-based. The former methods depend on the innovation information of the Kalman filter or its variations [6]. Due to the *Q* effect [2], it is difficult for both single-model and multiple-model methods to achieve a short discrimination delay while maintaining high correct probabilities. This inherent drawback becomes more serious when a mismatch occurs between the target acceleration and the predesigned models. By employing maneuver information embedded in the features from sensors, feature-based techniques [6] overcome this obstacle. For instance, in an air-to-air interception scenario where a fighter plane takes a bank-to-turn (BTT) maneuver [4], an estimator using the bank angle measurement from images can greatly reduce the detection delay compared to the classical innovation-based estimator. Many similar studies exploit maneuver information from optical sensors by directly estimating target orientation [7] or indirectly extracting image fluctuation features [8]. Unfortunately, optical sensors cannot directly measure target range and velocity, which are helpful for state estimation and guidance. Radar sensors can also capture maneuver information, because of the modulation effect of target motion on electromagnetic scattering, and they easily obtain target range and velocity. Narrowband radar features including glint, radar cross section (RCS), and high-resolution Doppler profile (HRDP) have been exploited to detect type I maneuvers [6, 9, 10]. But their validity needs to be further verified for MDS discrimination, which is not involved in the aforementioned researches.

Generally, compared with low-resolution radar features, high-resolution features from high-range resolution (HRR) profiles or inverse synthetic aperture radar (ISAR) images offer more advantages for maneuver information extraction. As stated in [11], the relative orientation of missile-to-target in interception can be approximated by a turntable model, which makes maneuver discrimination using ISAR images possible. In fact, the estimation of motion parameters per se, including translational velocity, rotation velocity, and direction, is key to autofocus and cross-range scaling in ISAR imaging, and many signal-domain and image-domain methods have been proposed [12–15]. In essence, these methods mostly involve offline data processing regardless of application background and need an iterative optimization "matching." Notice that they do not comprehensively analyze the relationship between maneuver parameters and ISAR images, nor the factors affecting the estimation error, such as resolution and the relative orientation of missile-to-target. Yang et al. [16] derive the relationship between target ISAR image slope and turn rate in ground moving target indicator (GMTI) radar surveillance. In this paper, we extend the air-to-ground scenario to an air-to-air interception scenario and estimate the lateral acceleration instead of the maneuvering turn. Moreover, the estimation in a quasi-steady state is extended to the transient period with a determination of the maneuver switch instant.

Considering the interception of a skid-to-turn (STT) cruise missile taking a horizontal, planar "S" maneuver (a type II maneuver) in penetration [17, 18], in which the sideslip angle changes, a novel and practical target maneuver discriminator using ISAR images is proposed with rigorous analysis. For simplicity, we assume the target translational motion is well compensated and employ image-domain features, in consideration of the effect of echo quality on signal-domain parameter estimation. The paper makes an elaborate and systematic analysis of the maneuver discrimination principle in Section 2. Section 3 discusses some important factors affecting the discrimination performance and then proposes a maneuver discriminator using ISAR images. Simulation results are presented in Section 4, and conclusions are drawn in Section 5.

## Maneuver discrimination principle

Taking the initial line-of-sight (LOS) coordinate system as the inertial coordinate system, the geometry of the two-dimensional interception scenario is shown in Fig. 1, where *R* is the distance between target and missile, *q* is the LOS angle, and *γ*_{M}, *a*_{M}, and *V*_{M} and *γ*_{T}, *a*_{T}, and *V*_{T} are the missile and target path angles, accelerations (perpendicular to the respective velocities), and speeds, respectively. Assuming first-order dynamics for both target and missile and that both velocities are nearly constant, the following dynamics equations are satisfied:

where *i* = *M*, *T* refers to missile and target, \( {a}_i^{\mathrm{c}} \) is the acceleration command, and *τ*_{i} is the time constant. The relationship between motion parameters and relative orientation of missile-to-target in a three-dimensional interception scenario has been derived in [11]. A two-dimensional simplified analysis is presented to establish the ISAR imaging model as follows.
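Since (1) is not reproduced here, a minimal simulation sketch can assume the standard first-order lag form \( \dot{a}_i = (a_i^{\mathrm{c}} - a_i)/\tau_i \); the 15 g command levels, time constant, and switch time below are illustrative, not values from the paper:

```python
import numpy as np

def first_order_lag(a0, a_cmd, tau, dt, steps):
    """Euler integration of the assumed first-order lag a_dot = (a_cmd - a) / tau."""
    a = np.empty(steps + 1)
    a[0] = a0
    for k in range(steps):
        a[k + 1] = a[k] + dt * (a_cmd(k * dt) - a[k]) / tau
    return a

# Bang-bang acceleration command switching at t_sw = 1 s (type II maneuver)
a_cmd = lambda t: -15.0 * 9.81 if t < 1.0 else 15.0 * 9.81
a_T = first_order_lag(a0=-15.0 * 9.81, a_cmd=a_cmd, tau=0.2, dt=0.01, steps=300)
```

The lag makes the realized acceleration trail the command after the switch, which is exactly the gap between *t*_{sw} and *t*_{dir} discussed later.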

### Imaging model

A right-handed coordinate system is attached to the target, where the *x*-axis points out of the nose, the *y*-axis to the left, and the *z*-axis to the top. Assuming the target velocity is along the *x*-axis, the pose angle *ψ* relative to the LOS and its rate *ω* are formulated as

The positive pose angle is prescribed to the left, and the positive turn is also to the left. The considerable disparities of *ψ* and *ω* in MDS are the basis of maneuver discrimination. In (2), \( {\overset{.}{\gamma}}_{\mathrm{T}} \) and \( \overset{.}{q} \) represent the pose angle variation introduced by the target motion and by the relative motion of missile-to-target, respectively. It has been demonstrated in [11] that

This conclusion also can be explained by the expression of \( \overset{.}{q} \) in proportional navigation (PN) guidance law [19]:

In fact, the numerator of (4) is the "collision triangle" (the dotted triangle in Fig. 1) condition [19]. If the relative motion keeps the condition satisfied throughout the interception, i.e., \( {V}_{\mathrm{T}} \sin \left({\gamma}_{\mathrm{T}}-q\right)={V}_{\mathrm{M}} \sin \left({\gamma}_{\mathrm{M}}-q\right) \), then \( \overset{.}{q}=0 \) is straightforward. In a real interception scenario, \( \overset{.}{q} \) is influenced by many factors, including target maneuver, the guidance law adopted, initial heading error, and estimation error. But the guidance law always keeps \( \overset{.}{q} \) within a small neighborhood of zero before the seeker head reaches its blind range. Hence, after translational motion compensation, the imaging model in two-dimensional interception can be approximated as a planar rotating object in Fig. 2, where the rotation angular velocity is equal to the pose angle rate,

Furthermore, taking the target centroid *O* as the origin, a scatterer on the target at radial distance *L* is mapped onto the range-Doppler plane as follows under the far-field condition:

where *v* is the rotation linear velocity, *λ* is the wavelength, and *η*_{r} and *η*_{f} are the range and Doppler resolutions, respectively, defined by

In (7), *c* is the speed of light, *B* is the signal bandwidth, and *T*_{img} is the imaging time. A typical target ISAR image is also illustrated in Fig. 2, which shows the scatterer distribution in the range-Doppler plane.
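Since (6) and (7) are not reproduced above, the mapping can only be sketched from the surrounding definitions. The sketch below assumes the scatterer's range coordinate is *L* cos *ψ*, its Doppler is 2*ωL* sin *ψ*/*λ*, and the resolutions are *η*_{r} = *c*/2*B* and *η*_{f} = 1/*T*_{img}; these are plausible readings, not the paper's exact expressions, and the seeker parameters are illustrative:

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def scatterer_cell(L, psi, omega, wavelength, B, T_img):
    """Map a scatterer at radial distance L (m) along the target axis to a
    (fractional) range/Doppler cell under the assumed projections."""
    rng = L * np.cos(psi)                    # projection onto the LOS
    fd = 2.0 * omega * L * np.sin(psi) / wavelength  # Doppler of the rotating scatterer
    eta_r = C / (2.0 * B)                    # assumed range resolution
    eta_f = 1.0 / T_img                      # assumed Doppler resolution
    return rng / eta_r, fd / eta_f

# X-band seeker, 28 deg/s rotation rate (the value seen in Section 4's single run)
r_bin, d_bin = scatterer_cell(L=3.0, psi=np.radians(30), omega=np.radians(28),
                              wavelength=0.03, B=500e6, T_img=0.2)
```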

The above imaging model in interception differs significantly from that in surveillance by a ground-based radar (or other location-fixed radar) [6, 16]. In the latter situation, an ISAR image can be obtained even if the target does not maneuver. However, from Fig. 2, we can see that the imaging condition can be satisfied only when the target takes a lateral maneuver (perpendicular to velocity), i.e., a rotation. This disparity provides the feasibility of maneuver discrimination using ISAR images in interception.

### Relationship between maneuver parameters and images

The changes of target acceleration *a*_{T} and acceleration command \( {a}_{\mathrm{T}}^{\mathrm{c}} \) in a type II maneuver are shown in Fig. 3.

In maneuver discrimination, we focus on the acceleration command switch instant *t*_{sw} and the acceleration direction switch instant *t*_{dir}. *t*_{sw} is usually set to be the starting instant, which is the reference for the discrimination delay evaluation. We follow this metric in our paper. By substituting (1) into (5), *ω* and its time derivative can be expressed as

From (8), *ω* and \( \overset{.}{\omega } \) encapsulate the full information of the target maneuver. On the other hand, from (6), *ω* can be derived from the scatterer positions in the range-Doppler plane. Consequently, the maneuver information can in theory be extracted by matching the target scatterers in ISAR images at different instants [13, 14]. For example, in [14], *ω* of an airplane rotation is estimated by comparing the differences in the geometrical relationships of corresponding scatterers in two adjacent images. Actually, the number of equivalent scatterers of a missile-class target is significantly smaller than that of a ship or airplane due to its smaller size. So the matching process needs high-resolution images, and the scatterer association across images must be well handled. Fortunately, the target's "shaft-like" shape makes it possible to extract line features from its ISAR images.

From (6), the slope of the target ISAR image can be expressed as

where *f*_{0} is the center frequency and *K* = −*f*_{s}/*T*_{img}*f*_{0} is a known constant. By substituting (8) into the time derivative of *s*, we obtain

Thus, the relationship between *s* and *ω*, as well as between their time derivatives, is established. Note that these relationships also depend on the pose angle *ψ*. We first assume *ψ* > 0 in the following analysis; other situations are discussed later.

For the sake of clarity, type II maneuver is further divided into two types considering the different switch directions, i.e., type P and type N. According to Fig. 3, a detailed summary of maneuver parameters and slope features in type II maneuver is listed in Table 1.

According to the results summarized in Table 1, the value of \( \overset{.}{s} \) and the sign of *s* change after the time instants *t*_{sw} and *t*_{dir}, respectively. In particular, if *ψ* satisfies (11) in the type P maneuver, the MDS can be easily discriminated by the sign of \( \overset{.}{s} \) alone.

Notice that the above derivation entails the assumption that the rotation center of the target is known, which is very difficult to fulfill in reality. As a matter of fact, the slope estimation can be realized with any two scatterers along the target radial axis or by elaborate line-extraction algorithms in image processing. Both are independent of the position of the rotation center. A more detailed scheme is given in the next section.
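The independence from the rotation center can be checked with a two-scatterer toy example: translating both image points by the same unknown offset leaves the slope unchanged, because the offset cancels in both differences (coordinates below are arbitrary illustrative values):

```python
import numpy as np

def slope(p1, p2):
    """Slope of the line through two scatterer positions (range, Doppler)."""
    (r1, f1), (r2, f2) = p1, p2
    return (f2 - f1) / (r2 - r1)

# Two scatterers along the target's radial axis, in arbitrary image coordinates
a, b = (2.0, 5.0), (6.0, 13.0)
shift = np.array([3.7, -1.2])                      # unknown rotation-center offset
s0 = slope(a, b)
s1 = slope(np.array(a) + shift, np.array(b) + shift)
# s0 == s1: a common translation of both points cancels in the differences
```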

### Discussion

We discuss the influence of *ψ* on the maneuver discrimination performance herein. As we know, most missile targets are axially symmetric. According to Fig. 2, the ISAR image acquired when *ψ* < 0 and *ω* < 0 is the same as that when *ψ* > 0 and *ω* > 0. In other words, only the information of |*ψ*| and |*ω*| is available from the ISAR images. Thereupon, some further remarks are made as follows (the sign of a variable is denoted as sgn[⋅] for simplicity):

First of all, the conclusions in Table 1 are reversed when *ψ* < 0 according to (9) and (10). This does not affect the detection of *t*_{sw} but misleads the MDS discrimination. Actually, sgn[*s*] only indicates whether the target is turning toward or away from the LOS without knowledge of sgn[*ψ*].

Secondly, if *ψ* traverses zero in interception, either + → 0 → − or − → 0 → +, a "ghost phenomenon" is produced, which means *ω* appears to change from *ω* < 0 to *ω* > 0. As a result, both sgn[*s*] and the value of \( \overset{.}{s} \) vary even though no MDS occurs. In this instance, the discrimination of the type N maneuver suffers from invalidation or ambiguity.

Thirdly, from the expression of \( \overset{.}{s} \) before the MDS occurs, i.e., \( \overset{.}{s}=\frac{-K}{{ \sin}^2\psi } \), we know that the value of \( \overset{.}{s} \) varies along with *ψ* even when there is no MDS. If the value of \( \overset{.}{s} \) is used as the test statistic, a floating threshold is needed, generated from a large amount of statistics at different *ψ*. But this is very hard to realize in practice.

Finally, the performance of line extraction from the images strongly depends on the value of *ψ*. For example, within a small neighborhood of *ψ* = 0, the target image remains approximately perpendicular to the Doppler axis even if there is a rotation. The reason is that the Doppler frequencies developed on both sides of the target, from the front to the rear, have the same small values with different signs. At the same time, the ISAR image spreads over several Doppler resolution bins due to the target width. Therefore, both sgn[*s*] and the value of \( \overset{.}{s} \) vary unpredictably as a result of line-extraction errors.

In summary, the estimation of sgn[*ψ*] and a de-ambiguity processing are necessary for the former two situations in maneuver discrimination. From an implementation point of view, only \( \operatorname{sgn}\left[\overset{.}{s}\right] \) and sgn[*s*] are selected in our paper as the indications of MDS. Although (11) should be satisfied in the type P maneuver, it holds in most cases during [*t*_{sw}, *t*_{dir}]. This is verified in Section 4, where the effect of *ψ* on discrimination performance is also explained more thoroughly.

## Discriminator design

### Image pre-processing

The ISAR image series can be obtained by the sliding-window method. The window length, i.e., *T*_{img}, determines the accumulated rotation angle for each ISAR image and thus determines its Doppler resolution (inversely proportional to *T*_{img} in (7)). But a long *T*_{img} implies a large discrimination delay. Hence, a tradeoff between delay and resolution should be considered. On the other hand, the sliding step length (denoted as Δ*T*) determines the aspect angle difference between neighboring ISAR images for a given *ω*. In practice, Δ*T* should be less than the missile control period but not too small; otherwise, the estimation of \( \overset{.}{s} \) is not reliable due to the slight difference between neighboring aspect angles.
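The sliding-window generation of the image series can be sketched as follows; the pulse repetition frequency `prf` and the array sizes are illustrative assumptions, while *T*_{img} = 0.2 s and Δ*T* = 0.01 s match the values used in Section 4:

```python
import numpy as np

def sliding_windows(echoes, prf, T_img, dT):
    """Yield overlapping slow-time windows of pulse echoes for ISAR imaging.
    echoes: (num_pulses, num_range_bins) complex array."""
    win = int(round(T_img * prf))    # pulses accumulated per image
    step = int(round(dT * prf))      # pulses advanced per slide
    for start in range(0, echoes.shape[0] - win + 1, step):
        yield echoes[start:start + win]   # one ISAR image is formed per window

# 1 kHz PRF, T_img = 0.2 s, dT = 0.01 s -> 200-pulse windows sliding by 10 pulses
echoes = np.zeros((1000, 128), dtype=complex)
frames = list(sliding_windows(echoes, prf=1000.0, T_img=0.2, dT=0.01))
```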

Then, a target image is segmented from the scene, indicating which pixels are on-target. This can usually be done using CFAR detection followed by a sequence of binary morphological operations [16]. For the air-to-air endgame application in our paper, the signal-to-noise ratio (SNR) is quite high due to the low clutter and sufficient radiated power. The major noise sources are scatterer amplitude and position fluctuations. So the target is segmented simply by the following two-level thresholds:

where *I*(*x*_{i}, *y*_{j}) is the original image pixel value, *N*_{D} and *N*_{R} are the numbers of pixels in the Doppler and range directions, respectively, *I*′(*x*_{i}, *y*_{j}) is the image pixel value filtered by the first-level threshold TH_{1}, and *C* < 1 is a scaling factor.
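Since (12) is not reproduced above, a segmentation sketch can only assume plausible definitions: here TH_{1} is taken as the mean over the *N*_{D} × *N*_{R} pixels and TH_{2} = *C* · max of the filtered image, which matches the described two-level structure but not necessarily the paper's exact thresholds:

```python
import numpy as np

def segment_target(I, C=0.3):
    """Two-level threshold segmentation of an ISAR magnitude image (sketch).
    Assumed: TH1 = image mean, TH2 = C * max of the first-level result."""
    TH1 = I.mean()                      # first level: suppress the background
    I1 = np.where(I > TH1, I, 0.0)
    TH2 = C * I1.max()                  # second level: keep strong scatterers
    return I1 > TH2                     # boolean on-target mask

rng = np.random.default_rng(0)
img = rng.rayleigh(0.1, size=(64, 64))  # noise floor
img[30:34, 20:44] += 5.0                # a "shaft-like" target region
mask = segment_target(img)
```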

### Estimation of *s* and \( \overset{.}{s} \)

Many methods in curve fitting, image processing, and ISAR cross-range scaling can be utilized to estimate *s* [12–15]. Least squares (LS) and total least squares (TLS) methods are easy to operate but would be invalidated if shadowing or obstructions appeared in the ISAR image [20]. Algorithms such as the Radon and Hough transforms [13] and polar mapping [15] can solve this problem by an iterative optimization implementation of exhaustive searching over a wide angle range. However, they impose a heavy computational burden for real-time missile-borne application.

As analyzed earlier, only sgn[*s*] and \( \operatorname{sgn}\left[\overset{.}{s}\right] \) are needed in most cases. Herein, the major axis direction in the image-domain analysis is used as the estimation of *s* [21], namely

where *α* is the oblique angle and *μ*_{pq} is the (*p* + *q*)-th central moment of the image, defined as

and *m*_{pq} is the (*p* + *q*)-th geometrical moment of the image, defined as

From (13) to (15), we see that the major axis direction can be obtained with only some multiplications and additions in one cycle. The estimate of \( \overset{.}{s} \), denoted as \( \widehat{\overset{.}{s}} \), can be deduced from two neighboring frames of the image sequence.
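Assuming (13)–(15) follow the standard image-moment definitions, with the usual orientation formula *α* = ½·atan2(2*μ*_{11}, *μ*_{20} − *μ*_{02}), the major-axis direction indeed needs only multiplications and additions:

```python
import numpy as np

def major_axis_angle(I):
    """Major-axis orientation from image moments, assuming the standard
    form of (13)-(15): alpha = 0.5 * atan2(2*mu11, mu20 - mu02)."""
    y, x = np.indices(I.shape).astype(float)
    m00 = I.sum()
    xc, yc = (I * x).sum() / m00, (I * y).sum() / m00   # centroid from m10, m01
    mu11 = (I * (x - xc) * (y - yc)).sum()
    mu20 = (I * (x - xc) ** 2).sum()
    mu02 = (I * (y - yc) ** 2).sum()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

# A 45-degree line of scatterers should yield alpha = pi/4
img = np.zeros((64, 64))
for k in range(10, 50):
    img[k, k] = 1.0
alpha = major_axis_angle(img)
```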

### Estimation of *ψ*

With the knowledge of sgn[*ψ*], the real maneuver switch direction could be confirmed. An estimation method of *ψ* based on the target velocity vector **v**_{T} and the LOS vector **r** is proposed in [16], namely

From (16), accurate estimation is difficult because both the position and the velocity of missile-to-target need to be estimated. As mentioned previously, the "collision triangle" condition holds approximately throughout the interception. \( \widehat{\psi} \) can also be obtained from this constraint:

The target state estimation per se is quite accurate, so only the constant target speed needs to be estimated. Obviously, the estimation error of (17) is much smaller than that of (16).
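Equation (17) is not reproduced above, but under the collision-triangle condition and *ψ* ≈ *γ*_{T} − *q*, one plausible form is \( \widehat{\psi} = \arcsin\!\big(V_{\mathrm{M}} \sin(\gamma_{\mathrm{M}} - q)/V_{\mathrm{T}}\big) \), which needs only the missile's own states and the (assumed constant) target speed; a sketch under this assumption, with illustrative speeds:

```python
import numpy as np

def pose_angle_estimate(V_T, V_M, gamma_M, q):
    """Pose angle from the collision-triangle condition
    V_T*sin(gamma_T - q) = V_M*sin(gamma_M - q), with psi ~ gamma_T - q.
    This is an assumed form of (17), not the paper's exact expression."""
    return np.arcsin(np.clip(V_M * np.sin(gamma_M - q) / V_T, -1.0, 1.0))

# Missile leading the LOS by 10 degrees at a 2:1 speed ratio
psi_hat = pose_angle_estimate(V_T=300.0, V_M=600.0,
                              gamma_M=np.radians(10.0), q=0.0)
```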

### De-ambiguity

In theory, the value of \( \widehat{\psi} \) can be used to resolve the ambiguity caused by the ghost phenomenon mentioned for the type N maneuver. But an estimation delay still exists in (17) because it is essentially an innovation-based method, so it is difficult to resolve the ambiguity based on the value of \( \widehat{\psi} \) alone. It is noted, however, that |*ω*| is maximal while *ψ* traverses zero even though there is no MDS, and thus *ŝ* is quite accurate due to the high Doppler resolution. Therefore, if a change of sgn[*ŝ*] from positive to negative is detected, the current frame or several frames before it can be utilized to obtain *ŝ*. The value of *ŝ* depends on *ψ* when an MDS occurs, but it is almost constant when *ψ* is traversing zero. This disparity provides a feasible way to resolve the ambiguity.

Finally, the flow chart of the proposed maneuver discrimination scheme is provided in Fig. 4.

Figure 4 shows that a two-stage detection is implemented, employing \( \operatorname{sgn}\left[\widehat{\overset{.}{s}}\right] \) and sgn[*ŝ*] at the different instants *t*_{sw} and *t*_{dir}, respectively. Meanwhile, in order to increase the reliability of \( \widehat{\overset{.}{s}} \), a "binary integration detection" or "*N*_{B}/*M*_{B} detection" scheme is adopted [22]. The detections of \( \operatorname{sgn}\left[\widehat{\overset{.}{s}}\right] \) and sgn[*ŝ*] can be directly used as the indication of *t*_{sw} and of whether the target is turning toward or away from the LOS. After \( \widehat{\psi} \) is integrated, the real maneuver switch direction (denoted as *D* in Fig. 4) is obtained.
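The *N*_{B}/*M*_{B} binary integration stage can be sketched as a sliding vote over the per-frame sign decisions; the threshold values and the convention that −1 indicates a reversal of \( \widehat{\overset{.}{s}} \) are illustrative:

```python
from collections import deque

def mb_nb_detector(signs, N_B, M_B):
    """N_B/M_B binary integration [22]: declare a switch at the first frame
    where at least N_B of the last M_B sign decisions indicate a reversal
    (here, sign == -1). Returns the frame index, or None if never reached."""
    window = deque(maxlen=M_B)
    for k, s in enumerate(signs):
        window.append(1 if s < 0 else 0)
        if len(window) == M_B and sum(window) >= N_B:
            return k
    return None

# Noisy sgn[s_dot] sequence: positive, one outlier, then persistently negative
signs = [1, 1, 1, -1, 1, 1, -1, -1, -1, -1, -1, -1, -1, -1]
k_detect = mb_nb_detector(signs, N_B=7, M_B=7)
```

A single outlier (the lone −1 at frame 3) does not trigger a detection; only a persistent run of reversals does, which is the reliability gain the text describes.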

## Simulation results

### Simulation environment

The block diagram of the simulation environment is shown in Fig. 5. A trajectory is generated in a missile-target interception scenario, and the target pose angle is calculated. The scatterer phase center data, that is, the position and complex RCS of each of the scatterers composing the target at different pose angles and frequencies, are obtained through the high-frequency electromagnetic simulation software RadBase [6]. Based on these data, the ISAR images are generated, taking the noise and clutter data into account. Finally, the performance of the maneuver discrimination is evaluated.

The simulation parameters are listed in Table 2, and the final time of the interception is about 3 s. The initial target heading angle is uniformly distributed between ±15°, which means the missile is located in the head-on zone of the target. The differential game guidance law (DGL1) and the step-frequency wideband waveform [23] are adopted by the missile. Regardless of the specific performance of the estimator used, only the target acceleration estimation delay is considered, namely *Δ*_{est} = 0.2 s. Estimation errors of the other target states are assumed to be white Gaussian noises whose standard deviations are also listed in Table 2. Besides, the window sliding step length is set equal to the missile control period, namely *ΔT* = *T*_{c} = 0.01 s.

### Simulation results

#### A single run trial

A single simulation run of the missile-target interception trajectory is depicted in Fig. 6a, where the target performs a single MDS from −15 to 15 g at *t*_{sw} = 1 s. Figure 6b illustrates the pose angle *ψ* and the target path angle *γ*_{T} during the whole interception. Note that *ψ* ≈ *γ*_{T} is almost satisfied except in the last phase, when the variations of the LOS angle *q* are considerable. So the planar rotation imaging model is reasonable in interception. The estimate of *ψ* from (17) is also added in Fig. 6b. It can be seen that the estimation delay is evident, but the sign estimation is quite accurate.

On the other hand, the oblique angle *α* in (13), instead of *ŝ*, is shown in Fig. 6c for clarity, where six sample time instants are marked in turn. The ISAR images and *ŝ* corresponding to these time instants are illustrated in Fig. 7. The imaging time *T*_{img} = 0.2 s and SNR = 10 dB [24] are prescribed in ISAR imaging. Combined with the variation of *ψ* in Fig. 6b, some cases are outlined below. In these images, most of the target scatterers are visible due to the fast rotation rate (|*ω*| ≈ 28°/s in Fig. 7a, d), so the target turns about 5.6° in each imaging time interval, which makes it easier to estimate the slope of the target. In the neighborhood of *t*_{dir} (Fig. 7b, c), the Doppler resolution degradation may adversely affect the accuracy of *ŝ*, but sgn[*ŝ*] still reverses rapidly; sgn[*ŝ*] also reverses from positive to negative when *ψ* is traversing zero (Fig. 7e, f), but the Doppler resolution is distinctly higher than in Fig. 7b, c. Another remarkable and important observation is the persistent decrease of *ŝ* during [*t*_{sw}, *t*_{2}] in Fig. 6c; that is, \( \operatorname{sgn}\left[\widehat{\overset{.}{s}}\right] \) reverses after the MDS occurs. All of these cases are consistent with the analysis and discussion in the preceding sections.

#### Discriminator parameter design

As analyzed earlier, the discriminator parameter design strongly depends on *ψ*. So the influence of various discriminator parameters on discrimination performance is examined through a large number of Monte Carlo simulations at different *ψ*. Note that a type P maneuver at *ψ* > 0 is equivalent to a type N maneuver at *ψ* < 0 and vice versa. Hence, only a single MDS from 15 to −15 g is considered. In the simulation, the angle interval is 5°, and 1000 runs at each interval are picked out by varying *t*_{sw} and *γ*_{T}(0). From an implementation point of view, *γ*_{M} should be smaller than the seeker gimbal angle [25] (35° in this paper) to make sure that the target does not fly out of the field of view. Herein, *ψ* is limited to 60° according to the "collision triangle" condition.

Figure 8 shows the detection probability *P*_{d} and the false alarm probability *P*_{fa} of *t*_{sw} for *N*_{B}/*M*_{B} detection of \( \widehat{\overset{.}{s}} \) when *T*_{img} = 0.2 s. The detection performance degrades when *ψ* < 15° because the estimation error of *s* increases and (11) is hardly satisfied. Both the detection and false alarm performance are good when *ψ* > 30°. Since the detection of \( \widehat{\overset{.}{s}} \) is just the first stage of MDS discrimination, a minimal *P*_{fa} is the top priority for the *N*_{B}/*M*_{B} selection. On the other hand, considering that the delay increases as *N*_{B} and *M*_{B} increase, the *N*_{B}/*M*_{B} selection is a tradeoff between delay and *P*_{fa}.

Similarly, the performance of de-ambiguity is shown in Fig. 9. As mentioned in Section 3.4, the mean value of *α* over the ten frames before sgn[*ŝ*] changes from positive to negative is chosen as a threshold (denoted as *AM_index*). In Fig. 9a, a cluster of detection probability curves of the type N maneuver (type P maneuver when *ψ* < 0) at different thresholds is exhibited. In contrast, the curves of false alarm probability in the type P maneuver when *ψ* > 0 are illustrated in Fig. 9b. Different from the *N*_{B}/*M*_{B} selection, the *AM_index* selection should pursue the maximum of *P*_{d} − *P*_{fa}.

Finally, the detection probability for different window lengths is shown in Fig. 10. For simplicity, the imaging time *T*_{img} is normalized by the window sliding step length *ΔT*, namely WL = *T*_{img}/*ΔT*. The *N*_{B}/*M*_{B} is "7/7" in all three cases, and the *AM_index* values are 65.88°, 87.2°, and 87.5°, respectively. The detection probability for WL = 10 (*T*_{img} = 0.1 s) is low due to the low Doppler resolution. The detection probabilities for WL = 15 and WL = 20 are close, but WL = 15 is the better choice due to its smaller delay. It is also apparent that the discrimination performance is extremely poor in the neighborhood of *ψ* = 0 ([−5°, 5°]), which justifies the conclusion of the former section.

#### Discrimination statistics

The total discrimination performance for *t*_{sw} ∈ [0.2, 2.8] s in interception is shown in Fig. 11; sets of 1000 Monte Carlo runs with random noise and random initial positions at each time interval are used. The window length is WL = 15, and the other discriminator parameters are the same as given in Fig. 10. The detection probability *P*_{d}, false alarm probability *P*_{fa}, miss probability *P*_{m}, and correct direction discrimination probability *P*_{c} (integrating \( \operatorname{sgn}\left[\widehat{\psi}\right] \)) are illustrated in Fig. 11a. It can be seen that the total successful discrimination probability in interception is quite good with the exception of *t*_{sw} = 2.8 s. Since sufficient information cannot be collected to deliver a statistically significant decision at this time instant when WL = 15, *P*_{m} increases rapidly. Besides, the performance degrades slightly, especially *P*_{c} < *P*_{d}, when *t*_{sw} < 1 s, because the pose angle *ψ* is often in the neighborhood of zero when the MDS occurs.

From Fig. 11b, the minimum discrimination delay is 0.07 s, achieved by the "7/7" detection of \( \widehat{\overset{.}{s}} \), and the maximum delay does not exceed 0.26 s. The mean delay stays at about 0.15 s for all switch instants. By comparison, for the classical innovation-based maneuver detectors, such as the adaptive-H0 and standard GLR detectors in [26], the mean delay at the same detection probability is an almost linear function of *t*_{sw}, monotonically decreasing from 0.35 s (at *t*_{sw} = 0.2 s) to 0.16 s (at *t*_{sw} = 0.8 s). The reason can be attributed to the constant angular noise, with the displacement noise proportional to the range. The results are not shown here for conciseness.

#### Applicability summary

According to the pose angle *ψ* and maneuver type, an applicability summary of the proposed discriminator based on the analysis and simulation results is illustrated as follows:

In Fig. 12, the features or test statistics used in the discriminator are the same within a zone of the same color, and a deeper color means a shorter delay. In reality, the MDS often occurs in the hatched zone due to the initial head-on geometry of missile-to-target and the short flight time in the endgame, so sgn[*s*] and \( \operatorname{sgn}\left[\overset{.}{s}\right] \) are sufficient for discrimination. The upper bound is determined by the gimbal angle of the missile under the "collision triangle" condition, and the lower bound depends on the estimation error of *s* (the red mesh boundary). Of course, discrimination of the real switch direction and de-ambiguity are necessary with the help of other information. Note that the applicability analysis in Fig. 12 is based on the particular scenario of this paper. Generally, the feature selection and the discriminator parameter design should be closely associated with the application characteristics.

## Conclusions

Discriminating target maneuvers using ISAR images is feasible because of the embedded information related to target motion parameters. This paper first sets up the imaging model in interception and mathematically derives the relationship between the bang-bang maneuver parameters and the ISAR image slope. Then, the principle of maneuver discrimination using ISAR images is explored, and some important factors affecting the discrimination performance are discussed. A novel and practical discriminator is developed afterwards, whose parameters are designed carefully based on the endgame scenario characteristics. Finally, the simulation results provide operational guidelines for choosing the discriminator parameters in practice and demonstrate that the proposed discriminator performs better than the classical innovation-based maneuver discriminator.

Compared with conventional maneuver detectors, the proposed discriminator additionally provides the maneuver direction switch information, which has been successfully used in both an estimator [4] and a guidance law [27]. In fact, as analyzed in this paper, *ω* can be estimated directly from the ISAR images or integrated into a conventional innovation-based estimator, which will certainly enhance the estimation performance. Moreover, although the analysis in this paper is based on an STT maneuvering target, it is also feasible to extract the maneuver parameters of a BTT target. For example, the rotation of the wings when an aircraft performs a BTT maneuver is similar to the rotation of the missile body. In this situation, maneuver discrimination based on ISAR images is a very attractive research direction.

## References

- 1. J Shinar, T Vladimir, What happens when certainty equivalence is not valid? Is there an optimal estimator for terminal guidance? Annu. Rev. Control. **27**, 119–130 (2003). doi:10.1016/j.arcontrol.2003.10.001
- 2. HQ Fan, S Wang, Q Fu, Survey of algorithms of target maneuver detection. Syst. Eng. Electron. **31**(5), 1064–1070 (2009)
- 3. JF Ru, VP Jikov, XR Li, A Bashi, Detection of target maneuver onset. IEEE Trans. Aerosp. Electron. Syst. **45**(2), 536–554 (2009)
- 4. Y Oshman, D Arad, Enhanced air-to-air missile tracking using target orientation observations. AIAA J. Guid. Control. Dyn. **27**(4), 595–606 (2004). doi:10.2514/1.11155
- 5. J Shinar, T Shima, Robust missile guidance law against highly maneuvering targets. Paper presented at the 7th Mediterranean conference on control and automation, Haifa, Israel, 28–30 June 1999
- 6. YL Zhu, HQ Fan, JP Fan, ZQ Lu, Q Fu, Target turning maneuver detection using high resolution Doppler profile. IEEE Trans. Aerosp. Electron. Syst. **48**(1), 762–779 (2012). doi:10.1109/TAES.2012.6129669
- 7. DD Sworder, RG Hutchins, Maneuver estimation using measurements of orientation. IEEE Trans. Aerosp. Electron. Syst. **26**(4), 625–638 (1990)
- 8. S Shetty, AT Alouani, A multisensor tracking system with an image-based maneuver detector. IEEE Trans. Aerosp. Electron. Syst. **32**(1), 167–181 (1996)
- 9. EJ Hughes, M Leyland, Target manoeuvre detection using radar glint. Electron. Lett. **34**(17), 1695–1696 (1998)
- 10. HQ Fan, Dissertation, National University of Defense Technology, 2008
- 11. SJ Fan, HQ Fan, HT Xiao, JP Fan, Q Fu, Three-dimensional analysis of relationship between relative orientation and motion modes. Chin. J. Aeronaut. **27**(6), 1495–1504 (2014). doi:10.1016/j.cja.2014.10.016
- 12. ZW Xu, L Zhang, MD Xing, Precise cross-range scaling for ISAR images using feature registration. IEEE Geosci. Remote Sens. Lett. **11**(10), 1792–1796 (2014). doi:10.1109/LGRS.2014.2309604
- 13. CM Yeh, J Xu, YN Peng, XT Wang, J Yang, XG Xia, Cross-range scaling for ISAR via optical flow analysis. IEEE Aerosp. Electron. Syst. Mag. **27**(2), 14–22 (2012). doi:10.1109/MAES.2012.6163609
- 14. CM Yeh, J Xu, YN Peng, XM Shan, Rotational motion estimation for ISAR via triangle pose difference on two range-Doppler images. IET Radar Sonar Navig. **4**(4), 528–536 (2010). doi:10.1049/iet-rsn.2009.0042
- 15. SH Park, HT Kim, KT Kim, Cross-range scaling algorithm for ISAR images using 2-D Fourier transform and polar mapping. IEEE Trans. Geosci. Remote Sens. **49**(2), 868–877 (2011). doi:10.1109/TGRS.2010.2060731
- 16. C Yang, W Garber, R Mitchell, E Blasch, A simple maneuver indicator from target range-Doppler image. Paper presented at the 10th international conference on information fusion, Quebec, Canada, 9–16 July 2007
- 17. B Etkin, LD Reid, *Dynamics of Flight: Stability and Control* (Wiley, New York, 1996)
- 18. JR Cloutier, TS Donald, Nonlinear hybrid bank-to-turn/skid-to-turn missile autopilot design. Paper presented at the AIAA guidance, navigation, and control conference and exhibit, Montreal, Canada, 6–9 August 2001, pp. 705–715
- 19. NF Palumbo, RA Blauwkamp, JM Lloyd, Modern homing missile guidance theory and techniques. J. Hopkins APL Tech. Dig. **29**(1), 42–59 (2010)
- 20. TK Moon, WC Stirling, *Mathematical Methods and Algorithms for Signal Processing* (Prentice Hall Press, Upper Saddle River, 2000)
- 21. JX Sun, *Image Analysis* (Chinese Science Press, Beijing, 2004), pp. 123–126
- 22. JV Harrington, An analysis of the detection of repeated signals in noise by binary integration. IEEE Trans. IT **4**(1), 1–9 (1955)
- 23. Z Bao, MD Xing, T Wang, *Technology on Radar Imaging* (Publishing House of Electronics Industry Press, Beijing, 2005)
- 24. JX Zhou, ZG Shi, X Cheng, Q Fu, Automatic target recognition of SAR images based on global scattering center model. IEEE Trans. Geosci. Remote Sens. **49**(10), 3713–3729 (2011). doi:10.1109/TGRS.2011.2162526
- 25. MIL-HDBK-1211(MI), *Missile Flight Simulation, Part One: Surface-to-Air Missiles* (US Department of Defense, Falls Church, VA, 1995)
- 26. D Dionne, H Michalska, Y Oshman, J Shinar, Novel adaptive generalized likelihood ratio detector with application to maneuvering target tracking. AIAA J. Guid. Control. Dyn. **29**(2), 465–474 (2006). doi:10.2514/1.13447
- 27. Y Oshman, D Arad, Differential-game-based guidance law using target orientation observations. IEEE Trans. Aerosp. Electron. Syst. **42**(1), 316–326 (2006). doi:10.1109/TAES.2006.1603425

## Acknowledgements

This work was supported in part by the China National Science Foundation under Grant 61101186 and the Specialized Research Fund for the Doctoral Program of China Higher Education under Grant 20134307110012. The authors thank Dr. Zhou J. X. for her valuable suggestions. The authors would also like to thank the anonymous reviewers for their valuable suggestions on improving this paper.


## Additional information

### Competing interests

The authors declare that they have no competing interests.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Fan, SJ., Xiao, HT., Fan, HQ. *et al.* Target maneuver discrimination using ISAR image in interception. *EURASIP J. Adv. Signal Process.* **2016**, 24 (2016). https://doi.org/10.1186/s13634-016-0319-1


### Keywords

- Maneuver target
- Motion discrimination
- Inverse synthetic aperture radar
- Interception