 Research
 Open Access
Imaging targets moving in formation using parametric compensation
EURASIP Journal on Advances in Signal Processing volume 2014, Article number: 8 (2014)
Abstract
When conventional motion compensation algorithms designed for a single target are applied to the imaging of cooperative targets, a well-focused image cannot be obtained because of the low correlation between adjacent returned signals. In this paper, a parametric compensation method is proposed for the imaging of cooperative targets. First, the imaging problem is formulated by analyzing the translational motion of a target moving along a rectilinear flight path and by assuming a signal model for cooperative targets imaging. A bulk image is then obtained by parametric compensation of the linear and quadratic phase terms, which is performed by estimating the translational motion parameters through the fractional Fourier transform. Next, the number of targets in the bulk image is estimated by clustering number estimation, and segmented images are separated from the bulk image by the normalized cuts. Finally, well-focused images are obtained by refined parametric compensation of the residual quadratic and cubic phase terms, which is carried out by estimating the parameters through maximizing the image contrast. Simulation results demonstrate the effectiveness of the proposed method.
1. Introduction
High-resolution radar imaging has been a widely addressed topic in recent years [1–3]. It can produce well-focused images in all weather conditions, day and night. When high-resolution radar imaging is applied to multiple targets moving in the same radar beam, the conventional motion compensation algorithms, which are suitable for a single moving target, cannot obtain a well-focused image because of the low correlation between adjacent returned signals [4–7]. High-resolution radar imaging of multiple targets has therefore become an important issue in the radar imaging field.
The current techniques for multiple targets imaging can be categorized into two classes: direct imaging and separated imaging. The former is based on time-frequency transformations [2, 8, 9]; however, it carries a heavy computational burden and must suppress cross terms. The latter is based on separating the returned signals, for which there are two basic techniques. In the first, the returned signal of each target is obtained by parameter separation [10–13]; however, this approach is likely to fail when the differences between motion parameters are small, as in formation flying. In the second, the returned signals are separated from the target regions in the image domain [14, 15]; however, the trajectory in [14] is only coarsely estimated because phase information is not used, and in [15] the error of the first-order phase-term compensation increases severely when the blind velocity is large.
For concise expression, multiple targets moving in a formation with almost identical velocities and accelerations are defined as cooperative targets. In cooperative targets imaging, motion compensation is inherently difficult because the returned signals have low correlation and overlap in time. Motion compensation of the cooperative targets is therefore critical to image quality. In order to reconstruct a well-focused image of cooperative targets, this paper proposes an imaging method using parametric compensation. The proposed method is carried out in three steps. In the first step, the translational motion of a target moving along a rectilinear flight path is analyzed, and the translational motion parameters are estimated by means of the fractional Fourier transform (FrFT). Linear and quadratic phase compensations are then carried out to obtain a bulk image of the cooperative targets. The second step is image segmentation, in which the region of each target in the bulk image is determined by clustering number estimation and the normalized cuts. The third step is refined parametric compensation, in which well-focused images are obtained by refined compensation of the residual quadratic and cubic phase terms, carried out by estimating the parameters through maximizing the image contrast. Compared with existing imaging methods based on parameter separation, the proposed method can yield well-focused images of cooperative targets. Moreover, since the quadratic phase term is coarsely compensated and the cubic phase term is small, no large search range is needed for the residual quadratic and cubic parameters, which reduces the computational burden.
The remainder of this paper is organized as follows. Section 2 analyzes the translational motion and introduces the signal model of cooperative targets imaging. Section 3 describes the proposed imaging method. In Section 4, simulation results are presented to prove the effectiveness of the proposed method. Section 5 provides the conclusions.
2. Signal model
2.1. Analysis of translational motion
In the imaging interval, suppose the target undergoes uniformly accelerated rectilinear motion. At time t = 0, the distance from the radar to the target is R(0). By referring to the geometry shown in Figure 1, the distance between the target and the radar at a generic instant t can be expressed as
where v and a denote the target's velocity and acceleration, respectively.
The target's radial velocity v_{ R }(t) and acceleration a_{ R }(t) can be expressed as
When a = 0, that is, when the target is in uniform rectilinear motion, Equation (2) reduces to
From Equations (2) and (3), it is worth noting that the translational motion of the target can be viewed as uniformly accelerated rectilinear motion, regardless of whether the target is in uniform rectilinear motion or in uniformly accelerated rectilinear motion. In the imaging interval, R(t) can be approximated by its second-order Taylor polynomial around t = 0, which can be expressed as
where ${v}_{R0}=\dot{R}\left(0\right)$ and ${a}_{R0}=\ddot{R}\left(0\right)$ are the target's initial radial velocity and acceleration, respectively.
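As a sanity check on this second-order truncation, the following sketch compares the exact range history with its Taylor polynomial over a 1-s imaging interval. The geometry is an assumption echoing the simulation in Section 4: radar at the origin, target starting at (0, 8,000) m and moving along x with v = 270 m/s and a = 10 m/s².

```python
import numpy as np

# Assumed geometry: radar at the origin, target at (0, 8000) m at t = 0,
# moving along x with v = 270 m/s and a = 10 m/s^2 (rectilinear path).
p0 = np.array([0.0, 8000.0])
v = np.array([270.0, 0.0])
a = np.array([10.0, 0.0])

def R(t):
    """Exact radar-target distance at time t."""
    p = p0 + v * t + 0.5 * a * t**2
    return float(np.hypot(p[0], p[1]))

# Initial radial velocity and acceleration via central differences.
h = 1e-4
vR0 = (R(h) - R(-h)) / (2 * h)
aR0 = (R(h) - 2 * R(0) + R(-h)) / h**2

def R_taylor(t):
    """Second-order Taylor polynomial of R(t) around t = 0 (Equation (4))."""
    return R(0) + vR0 * t + 0.5 * aR0 * t**2

# Maximum approximation error over a 1-s imaging interval.
err = max(abs(R(t) - R_taylor(t)) for t in np.linspace(0.0, 1.0, 100))
print(vR0, aR0, err)
```

For this broadside geometry v_R0 ≈ 0 and a_R0 ≈ v²/R(0) ≈ 9.11 m/s², and the Taylor error stays a fraction of a range cell, which supports keeping only the second-order terms.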
2.2. Signal model
The imaging geometry is shown in Figure 2. Coordinate (U, V) is the radar coordinate system. The target is described in coordinate (x, y), called the target coordinate, with its origin located at the geometric center of the target. A scatterer P is located on the target. (X, Y) is the coordinate system translated from the radar and is used to describe rotations of the target, and ϕ is the azimuth angle of the target with respect to the (U, V) coordinate.
When ϕ = 0, the range from the radar to the scatterer P is [8]
where ω is the target angular velocity in the imaging interval.
In the case of cooperative targets, we assume that the returned signals in fast-slow time can be expressed as
where $p\left(t\right)=\mathrm{rect}\left(\frac{t}{{T}_{0}}\right)\exp\left(j\pi\gamma{t}^{2}\right)$ denotes the chirp pulse, γ is the chirp rate, $\widehat{t}=t-{t}_{m}$ and t_{ m } = mT_{ r } are known as the fast time and slow time, respectively, K is the number of targets, q_{ i } is the number of scatterers of target i, A_{i,j} is the strength of scatterer j in target i, f_{ c } is the carrier frequency, R_{i,j}(t_{ m }) = R_{i,0}(t_{ m }) + x_{i,j} − y_{i,j}ωt_{ m } is the distance between the radar and scatterer j of target i, T_{0} is the pulse length, T_{ a } is the imaging interval, and c is the speed of light.
After down-conversion, in the range frequency-slow time domain, Equation (6) can be expressed as
where B is the bandwidth.
If Equation (4) is used in Equation (7), we obtain
For f = f_{0}, that is, in a specific range cell, the Doppler frequency shift induced by the translational motion of target i is
where ${f}_{i,d0}=\frac{2\left({f}_{c}+{f}_{0}\right)}{c}\left({v}_{i,R0}-{y}_{i,j}\omega \right)$ is the initial frequency and ${k}_{i,0}=\frac{2\left({f}_{c}+{f}_{0}\right)}{c}{a}_{i,R0}$ denotes the chirp rate. It is worth pointing out that the Doppler frequency shift induced by the translational motion is approximated as a linear frequency modulation (LFM) signal in a range cell. If the initial frequency and the chirp rate are estimated, the translational motion parameters can also be coarsely estimated.
By performing the Fourier transformation on Equation (8) with respect to t_{ m } and applying the principle of stationary phase, we have
The first phase term is the Doppler chirp term as well as the range-Doppler coupling term, which should be compensated to eliminate image defocusing. The second phase term is the Doppler term and reflects the cross range of scatterer j of target i. The third phase term is the range shift term, which should be compensated to reduce image deformation. The fourth phase term is the range term and reflects the range of scatterer j of target i. Therefore, in order to obtain a well-focused image, it is necessary to estimate the translational parameters and eliminate their influence on the image.
3. Imaging algorithm
3.1. Parameter estimation by the FrFT
The FrFT of signal x(t) with angle α is defined by [16]
where K_{ α }(t,u) represents the transformation kernel which can be defined as
where δ(t) denotes the Dirac function.
Supposing that the LFM signal x(t) is
where f_{ dc } and k_{ d } denote the initial frequency and the chirp rate, respectively. They can be estimated from the peak position of its fractional Fourier spectrum as follows
In the case of cooperative targets imaging, the differences in radial velocities of different targets are small, and from Equation (2) the radial accelerations of different targets can be considered equal. From Equation (9), the returned signals in a range cell can be approximated as LFM signals. For a proper angle α, the fractional spectrum values of the returned signals in a range cell reach their maxima almost simultaneously owing to the nearly identical radial accelerations. By selecting the strongest spectral value, the initial frequency and the chirp rate can be estimated by the FrFT. Meanwhile, the radial acceleration can be obtained as follows
Since y_{i,j}ω is always on the order of one [15], (v_{i,R0} − y_{i,j}ω) can be approximated as v_{i,R0}. Consequently, the radial velocity can be coarsely estimated by
Therefore, the radial acceleration ${\widehat{a}}_{R0}$ and velocity ${\widehat{v}}_{R0}$ of the cooperative targets can be coarsely equated with ${\widehat{a}}_{i,R0}$ and ${\widehat{v}}_{i,R0}$, respectively.
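As an illustration of this estimation step, the sketch below generates a noisy LFM slow-time signal and recovers its initial frequency and chirp rate. A dechirp-and-FFT search over candidate chirp rates stands in for locating the peak of the fractional Fourier spectrum (the two play the same role here); all numerical values, and the sign conventions in the final mapping to the radial parameters, are illustrative assumptions rather than the paper's exact formulas.

```python
import numpy as np

np.random.seed(0)

# Simulated slow-time signal in one range cell: an LFM chirp whose initial
# frequency f_dc and chirp rate k_d encode v_R0 and a_R0 (Equation (9)).
prf, n = 2000.0, 2000
tm = np.arange(n) / prf
f_dc, k_d = -300.0, 120.0          # assumed true values (Hz, Hz/s)
x = np.exp(1j * 2 * np.pi * (f_dc * tm + 0.5 * k_d * tm**2))
x += 0.1 * (np.random.randn(n) + 1j * np.random.randn(n))  # noise

# Dechirp search: remove each candidate quadratic phase and look for the
# sharpest spectral peak -- the role the FrFT peak plays in the text.
freqs = np.fft.fftfreq(n, d=1.0 / prf)
best_peak, k_hat, f_hat = -np.inf, None, None
for k in np.linspace(0.0, 250.0, 251):
    spec = np.abs(np.fft.fft(x * np.exp(-1j * np.pi * k * tm**2)))
    if spec.max() > best_peak:
        best_peak, k_hat, f_hat = spec.max(), k, freqs[np.argmax(spec)]

# Coarse translational parameters, in the spirit of Equations (14)-(15);
# signs depend on the adopted Doppler convention.
c, fc = 3e8, 9e9
a_hat = c * k_hat / (2 * fc)       # radial acceleration estimate
v_hat = -c * f_hat / (2 * fc)      # radial velocity, y*omega term neglected
print(k_hat, f_hat, a_hat, v_hat)
```

The dechirp grid step (1 Hz/s) and the FFT bin width (1 Hz) set the resolution of the estimates, mirroring how the FrFT angle and spectral grids limit the accuracy discussed in Section 4.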
3.2. Parametric compensation
Ignoring the higher-order terms, the phase term in the imaging interval is a second-order polynomial in Equation (10). In order to obtain the bulk image of the cooperative targets, compensation of the linear and quadratic phase terms should be carried out to eliminate the influence of translational motion.
3.2.1. The linear phase term compensation
To eliminate the influence of the linear phase term, the radial velocity of the cooperative targets is used for linear phase-term compensation, and the corresponding compensation term can be expressed as
After compensation, we obtain
where $\mathrm{\Delta}{v}_{i}={v}_{i,R0}-{\widehat{v}}_{R0}$ is the residual radial velocity, which still induces a range shift. The keystone transform is then utilized to eliminate the influence of the residual radial velocity. The keystone transform is [17]
If Equation (19) is used in Equation (18), we obtain
Thus, the linear coupling between f and t_{ m } can be removed. However, residual coupling still exists in the quadratic phase term, which induces image defocusing and should be compensated.
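The decoupling effect of the keystone rescaling can be demonstrated numerically. The sketch below builds the linear range-walk term exp(−j4π(f_c + f)v t_m/c), interpolates each range-frequency bin onto the rescaled slow-time grid t_m = f_c/(f_c + f)·τ, and checks that the dependence of the phase on f disappears. The parameter values and the simple linear interpolator are illustrative choices, not the paper's implementation.

```python
import numpy as np

c, fc = 3e8, 9e9
prf, n_pulse, n_freq = 8000.0, 256, 64
tm = np.arange(n_pulse) / prf                   # slow-time grid (also tau)
f = np.linspace(-150e6, 150e6, n_freq)          # range-frequency axis
v = 5.0                                         # residual radial velocity (m/s)

# Linear range-walk term: couples range frequency f with slow time t_m.
S = np.exp(-1j * 4 * np.pi * (fc + f[:, None]) * v * tm[None, :] / c)

# Keystone transform: per-bin interpolation onto t_m = fc/(fc+f) * tau.
S_key = np.empty_like(S)
for i in range(n_freq):
    t_src = fc / (fc + f[i]) * tm               # sample locations for bin i
    S_key[i] = (np.interp(t_src, tm, S[i].real)
                + 1j * np.interp(t_src, tm, S[i].imag))

# Every bin should now carry the same Doppler history exp(-j4*pi*fc*v*tau/c).
ref = np.exp(-1j * 4 * np.pi * fc * v * tm / c)
m = 240                     # skip edge samples clamped by the interpolation
err_before = np.max(np.abs(S[:, :m] - ref[None, :m]))
err_after = np.max(np.abs(S_key[:, :m] - ref[None, :m]))
print(err_before, err_after)
```

Before the transform the bins deviate from the common Doppler history by a large phase error; after it, the residual is only the interpolation error.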
3.2.2. The quadratic phase term compensation
After the linear phase term compensation, the quadratic phase term can be compensated by the following expression
Then, the returned signals can be written as follows
where $\mathrm{\Delta}{a}_{i,R0}={a}_{i,R0}-{\widehat{a}}_{R0}$ denotes the residual radial acceleration. The last phase term in Equation (22) represents the uncorrected quadratic phase term, which will induce image blur. However, this blur is negligible if the residual radial acceleration Δa_{i,R0} satisfies $\mathrm{\Delta}{a}_{i,R0}{T}_{a}^{2}<2{\lambda}_{c}$, where λ_{ c } is the carrier wavelength [18]. In this situation, the image in the range-Doppler domain is
where r denotes range and f_{ d } denotes Doppler frequency.
After compensation of the linear and quadratic phase terms, we can obtain the bulk image of cooperative targets and denote the absolute value of the image by I_{0}(r, f_{ d }).
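Using the radar parameters from the simulation in Section 4 (9-GHz carrier, 2,000 pulses at a 2-kHz PRF, hence T_a = 1 s), the tolerance that the blur criterion Δa T_a² < 2λ_c places on the residual radial acceleration can be evaluated directly:

```python
c, fc = 3e8, 9e9
lambda_c = c / fc              # carrier wavelength, ~0.033 m
Ta = 2000 / 2000.0             # imaging interval: 2,000 pulses at 2 kHz PRF
da_max = 2 * lambda_c / Ta**2  # criterion da * Ta^2 < 2 * lambda_c from [18]
print(da_max)                  # about 0.067 m/s^2
```

A residual acceleration below roughly 0.07 m/s² therefore leaves negligible blur for this configuration, which motivates the refined search in Section 3.4 when the coarse FrFT estimate is not that accurate.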
3.3. Image segmentation
Since the residual quadratic and higher-order phase terms are not entirely eliminated by the parametric compensation, the image is not well focused and cannot be used for target identification. Refined parametric compensation is therefore needed to improve the image quality. Considering the differences in the residual quadratic and higher-order motion parameters, refined parametric compensation should be carried out target by target. Therefore, in order to perform refined parametric compensation, the bulk image is segmented by clustering number estimation and the normalized cuts.
3.3.1. Clustering number estimation
Cooperative targets generally cannot be resolved in the range domain. However, they can be resolved in the azimuth domain because large separations exist there. By estimating the number of target centers in the bulk image, the clustering number can be determined as follows [15]:

Step 1: Calculate normalized histograms of I_{0}(r, f_{ d }) along r and f_{ d }, respectively.

Step 2: For each histogram, obtain its smoothed envelope by low-pass filtering.

Step 3: Calculate the positions $\left\{\left({P}_{r},{P}_{{f}_{d}}\right)\right\}$ of local maxima of smoothed envelopes.

Step 4: For each $\left({P}_{r},{P}_{{f}_{d}}\right)$, if the average of pixel values around it is above the threshold, it is viewed as a true target center. Finally, the clustering number is equal to the number of true target centers.
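The four steps above can be sketched on a synthetic bulk image; the image size, target placement, smoothing width, and threshold are all illustrative assumptions.

```python
import numpy as np

np.random.seed(1)

# Synthetic "bulk image": noise floor plus three targets along Doppler.
img = np.random.rand(64, 256) * 0.05
for center in (60, 128, 200):
    img[28:36, center - 8:center + 8] += 1.0

# Step 1: normalized histogram along the Doppler axis.
hist = img.sum(axis=0)
hist /= hist.max()

# Step 2: smoothed envelope via low-pass (moving-average) filtering.
env = np.convolve(hist, np.ones(15) / 15, mode="same")

# Step 3: local maxima of the envelope.
peaks = [i for i in range(1, len(env) - 1)
         if env[i] > env[i - 1] and env[i] >= env[i + 1]]

# Step 4: keep maxima above a threshold, merging near-duplicates so a
# flat-topped target is not counted twice; the count is the clustering number.
centers = []
for p in peaks:
    if env[p] > 0.3 and (not centers or p - centers[-1] > 20):
        centers.append(p)
print(len(centers))
```

On this toy image the detected count is the true number of targets, three; a real implementation would run the same procedure along both r and f_d and intersect the results, as in Step 3 of the text.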
3.3.2. Normalized cuts
A graph G = (V, E) can be partitioned into two disjoint sets A and B, with A ∩ B = ∅ and A ∪ B = V. In graph-theoretic language, the degree of dissimilarity between these two sets is called the cut
where w(u,v) is the graph edge weight between nodes u and v. The normalized cuts Ncut(A,B) can be defined by [19]
Shi and Malik showed that the optimal partition can be found by computing
where y = {a, b}^{N} is a binary indicator vector with y_{ i } = a if pixel i ∈ A and y_{ i } = b if pixel i ∈ B, N is the number of pixels, W is the association matrix with W_{ ij } = w(i, j), and D is the diagonal matrix with D_{ ii } = ∑ _{ j }W_{ ij }.
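A minimal numerical sketch of this eigenvector relaxation, on a toy six-node graph rather than image pixels, assuming the standard reduction of (D − W)y = λDy to the symmetric normalized Laplacian:

```python
import numpy as np

# Toy graph: two 3-node clusters joined by one weak edge. The partition is
# read off the sign of the second-smallest generalized eigenvector of
# (D - W) y = lambda * D y (Shi & Malik [19]).
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0       # strong intra-cluster edges
W[2, 3] = W[3, 2] = 0.1           # weak bridge between clusters

D = np.diag(W.sum(axis=1))
Dh = np.diag(1.0 / np.sqrt(np.diag(D)))
# Symmetric normalized Laplacian; its eigenvectors are D^{1/2} y.
L_sym = Dh @ (D - W) @ Dh
vals, vecs = np.linalg.eigh(L_sym)   # eigenvalues in ascending order
y = Dh @ vecs[:, 1]                  # Fiedler-style indicator vector
labels = (y > 0).astype(int)
print(labels)                        # nodes {0,1,2} vs {3,4,5}
```

The weak bridge carries the smallest normalized cut, so the sign pattern of y separates the two clusters; in the paper the same machinery runs on pixel affinities of the bulk image.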
According to the image segmentation, the region of each target can be determined from the bulk image, and the range-slow time data of each target can be obtained by taking an IFFT in each range bin of each segmented range-Doppler image. Therefore, refined parametric compensation can be carried out for each target to obtain a well-focused image.
3.4. Enhancement of image
The image contrast reaches a maximum if the image is well focused [20]. Therefore, the image contrast can be utilized to compensate the residual second-order and higher-order phase terms. Since the effect of the fourth-order phase term on image quality is negligible [14], the compensation phase term can be written as follows
where $\widehat{\beta}$ is an estimate of the residual radial acceleration β and $\widehat{\gamma}$ is an estimate of the radial jerk γ.
The image contrast is defined as follows
where ${I}_{0}\left(r,{f}_{d},\widehat{\beta},\widehat{\gamma}\right)$ is obtained by compensating the returned signals through Equation (27), and the operator A(⋅) denotes the image spatial mean. When $C\left(\widehat{\beta},\widehat{\gamma}\right)$ reaches a maximum, β and γ are accurately estimated and the image is well focused. According to the above imaging process, the flowchart of cooperative targets imaging is shown in Figure 3.
3.5. Computational complexity analyses
In this part, the computational complexity of the major steps of the proposed method is analyzed in terms of the number of operations. Suppose that the number of pulses, range sampling cells, search points for the residual radial acceleration, search points for the radial jerk, and range frequency cells are N, M, N_{ β }, N_{ γ }, and W, respectively. The FrFT needs WMN multiplications and WM(N − 1) additions, so its computational complexity is O(WMN). The parametric compensation needs 2MN multiplications and 2M(N − 1) additions, so its computational complexity is O(MN). The computational complexity of the image segmentation is O(kMN), where k is the number of steps the Lanczos method takes to converge. The image contrast needs N_{ β }N_{ γ }(8MN + 2MN log _{2}MN) multiplications and N_{ β }N_{ γ }(4MN + 3MN log _{2}MN) additions, so its computational complexity is O(MN log _{2}MN).
4. Simulation results
In this section, the returned signals of three targets separated along the x-axis on the same x-y plane are generated to prove the effectiveness of the proposed method. We assume that the initial velocities and accelerations of the three targets moving in a formation are v_{1} = 268 m/s, a_{1} = 11 m/s^{2}, v_{2} = 270 m/s, a_{2} = 10 m/s^{2}, v_{3} = 272 m/s, and a_{3} = 9 m/s^{2}, respectively. The angle θ is 10°. Target 1, target 2, and target 3 are located at (65 m, 0 m), (10 m, 0 m), and (40 m, 0 m), respectively. The radar is located at (0 m, 8,000 m). The pulse repetition frequency (PRF) is 2,000 Hz, the carrier frequency is 9 GHz, the bandwidth is 300 MHz, the number of sampling data is 1,024, and the echo number for coherent imaging processing is 2,000. The target model is shown in Figure 4. Complex white Gaussian noise is added to the returned signals.
A Monte Carlo simulation with 500 runs was performed to verify the performance of the parameter estimation method. The root-mean-square (RMS) errors of the radial velocity and acceleration of target 2 are shown in Figure 5. It can be seen from Figure 5 that the RMS error of the estimated radial velocity is large, while that of the estimated radial acceleration is close to zero for different SNRs. The reason is that the radial acceleration can be estimated accurately, whereas the radial velocity is only coarsely estimated. Fortunately, the influence of the residual radial velocity can be eliminated by the keystone transform. When the signal-to-noise ratio (SNR) is 10 dB, the averages of the radial velocity and acceleration estimated by the FrFT are 45.6752 m/s and 10.5218 m/s^{2}, respectively.
After the linear and quadratic phase-term compensations, a resolved bulk image is shown in Figure 6. When the conventional imaging method using envelope cross-correlation and the phase gradient autofocus algorithm is applied, it yields the image in Figure 7. It is evident that Figure 7 is not a resolved image, which demonstrates that the conventional imaging method does not work well for cooperative targets imaging.
Figures 8 and 9 show the normalized histograms along range and azimuth (solid lines) and their envelopes (dashed lines), respectively. According to Figures 8 and 9, the coordinates of the local maxima of the envelopes are (143, 895), (143, 1,022), and (143, 1,137), which are likely the target centers. Table 1 lists the coordinates of potential target centers and the average of pixel values around each coordinate of the local maxima. According to the normalized average, the number of target centers is three, so the clustering number is three.
After image segmentation, the image contrasts of the segmented images versus β and γ are shown in Figure 10a,b,c, respectively, where β ∈ [−5, 5] and γ ∈ [0, 1] [14]. It is important to note that the image contrast is mainly affected by β, which indicates that the quadratic coefficient plays an important role in the imaging quality. After refined parametric compensation with the (β, γ) corresponding to the image contrast maximum, the enhanced images of the three targets in decibels are shown in Figure 11a,b,c, respectively. It is evident that each image is better focused than its original image in Figure 6.
When the SNR is 0 dB, the averages of the radial velocity and acceleration estimated by the FrFT are 49.8649 m/s and 10.3582 m/s^{2}, respectively. The well-focused images of the three targets in decibels are shown in Figure 12a,b,c, respectively. From this simulation, we believe that the proposed method can still image cooperative targets in low SNR scenarios.
5. Conclusions
In this paper, we have proposed a method to solve the radar imaging problem of cooperative targets. The method utilizes parametric compensation of the linear and quadratic phase terms to obtain a bulk image. Image segmentation is used to separate the bulk image into different images. Refined parametric compensation of the residual quadratic and cubic phase terms is then carried out to obtain well-focused images. The simulation results demonstrate that the proposed method can successfully image cooperative targets.
In future work, experiments based on measured data will be conducted to generate well-focused images of cooperative targets. Furthermore, our further study will focus on the imaging of cooperative targets with more complicated motion forms.
References
 1.
Zhang Q, Jin YQ: Aspects of radar imaging using frequencystepped chirp signals. EURASIP J. Adv. Sig. Pr. 2006, 2006: 4343.
 2.
Chen VC, Shie Q: Joint time-frequency transform for radar range-Doppler imaging. IEEE T Aero. Elec. Sys. 1998, 34: 486–499. 10.1109/7.670330
 3.
Zhang L, Sheng JL, Duan J, Xing MD, Qiao ZJ, Bao Z: Translational motion compensation for ISAR imaging under low SNR by minimum entropy. EURASIP J. Adv. Sig. Pr. 2013, 2013: 119. 10.1186/1687-6180-2013-1
 4.
Chen CC, Andrews CC: Target-motion-induced radar imaging. IEEE T Aero. Elec. Sys. 1980, 16: 1–14.
 5.
Itoh T, Sueda H, Watanabe Y: Motion compensation for ISAR via centroid tracking. IEEE T Aero. Elec. Sys. 1996, 32: 1191–1197.
 6.
Li J, Wu R, Chen VC: Robust autofocus algorithm for ISAR imaging of moving targets. IEEE T Aero. Elec. Sys. 2001, 37: 1056–1069. 10.1109/7.953256
 7.
Wu H, Grenier D, Delisle GY, Fang DG: Translational motion compensation in ISAR image processing. IEEE Trans. Image Pr. 1995, 4: 1561–1571. 10.1109/83.469937
 8.
Chen VC: Time-frequency transforms for radar imaging and signal analysis. Boston: Artech House Radar Library; 2002.
 9.
Chen VC, Lu ZZ: Radar imaging of multiple moving targets. In Proceedings of the SPIE Radar Processing, Technology, and Application. San Diego; 1997. 31 July–1 August
 10.
Li YN, Fu YW, Li X, Le L, Wei L: ISAR imaging of multiple targets using particle swarm optimisation-adaptive joint time frequency approach. IET Sig. Pr. 2010, 4: 343–351. 10.1049/iet-spr.2009.0046
 11.
Yamamoto K, Iwamoto M, Fujisaka T: An ISAR imaging algorithm for multiple targets of different radial velocity. ELECTR Commun. JPN 2003, 86: 1–10.
 12.
Park SH, Park KK, Jung JH, Kim HT, Kim KT: ISAR imaging of multiple targets using edge detection and Hough transform. J. Electromagnet. Wave. 2008, 2: 365–373.
 13.
Choi G, Park S, Kim H, Kim K: ISAR imaging of multiple targets based on particle swarm optimization and Hough transform. J. Electromagnet. Wave. 2009, 23: 1825–1834. 10.1163/156939309789932322
 14.
Park SH, Kim HT, Kim KT: Segmentation of ISAR images of targets moving in formation. IEEE T Geosci. Remote. 2010, 48: 2099–2108.
 15.
Bai XR, Zhou F, Xing MD, Bao Z: A novel method for imaging of group targets moving in a formation. IEEE T Geosci. Remote. 2012, 50: 221–231.
 16.
Sejdić E, Djurović I, Stanković L: Fractional Fourier transform as a signal processing tool: an overview of recent developments. Sig. Pr. 2011, 91: 1351–1369. 10.1016/j.sigpro.2010.10.008
 17.
Perry RP, DiPietro RC, Fante RL: SAR imaging of moving targets. IEEE T Aero. Elec. Sys. 1999, 35: 188–200. 10.1109/7.745691
 18.
Franceschetti G, Tatoian J, Dutt B: Aberrations in the SAR image of a moving target. Alta. Frequenza. 1989, 58: 175–183.
 19.
Shi J, Malik J: Normalized cuts and image segmentation. IEEE T Pattern Anal. 2000, 22: 888–905. 10.1109/34.868688
 20.
Martorella M, Berizzi F, Haywood B: Contrast maximisation based technique for 2-D ISAR autofocusing. IET Sig. Pr. 2005, 152: 253–262.
Acknowledgements
The authors are grateful to the anonymous reviewers for the provided feedback. This work was supported by the National Natural Science Foundation of China under grant no. 61372159.
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Chen, J., Xiao, H., Song, Z. et al. Imaging targets moving in formation using parametric compensation. EURASIP J. Adv. Signal Process. 2014, 8 (2014). https://doi.org/10.1186/1687-6180-2014-8
Keywords
 Formation flight
 Radar imaging
 Image segmentation
 Parameter estimation
 Keystone transform