Based on the signal model in Section 2, a 3D reconstruction algorithm is proposed for space targets observed by multistatic ISAR systems. The algorithm uses the projection equations between the target 3D geometry and the ISAR images provided by multiple, properly spaced sensors. The transformation from the target 3D geometry to the ISAR images is expressed in matrix form as
$$\begin{array}{@{}rcl@{}} \begin{array}{l} \left[ {\begin{array}{c} {{\mathbf{R}_{n}}}\\ {{\mathbf{F}_{n}}} \end{array}} \right]{\mathbf{p}_{k}} = \left[ {\begin{array}{c} {{r_{k,n}}}\\ {{f_{k,n}}} \end{array}} \right]\\ {\mathbf{R}_{n}} = {\left[ {\begin{array}{c} {\cos {\varphi_{n}}\cos {\beta_{n}}}\\ {\cos {\varphi_{n}}\sin {\beta_{n}}}\\ {\sin {\varphi_{n}}} \end{array}} \right]^{\mathrm{T}}}\\ {\mathbf{F}_{n}} = \frac{2}{\lambda }\!{\left[\! {\begin{array}{c} { - {{\dot \varphi }_{n}}\cos {\beta_{n}}\sin {\varphi_{n}} \!- \!{{\dot \beta }_{n}}\sin {\beta_{n}}\cos {\varphi_{n}}}\\ {{{\dot \beta }_{n}}\cos {\beta_{n}}\cos {\varphi_{n}} \!- \!{{\dot \varphi }_{n}}\sin {\beta_{n}}\sin {\varphi_{n}}}\\ {{{\dot \varphi }_{n}}\cos {\varphi_{n}}} \end{array}} \right]^{\mathrm{T}}} \end{array} \end{array} $$
(5)
where pk(=[xk,yk,zk]T) is the 3D-reconstructed position of the kth scattering center and Dn(=[Rn,Fn]T) is the projection matrix of the nth sensor. For a multistatic ISAR system, a set of such equations can be expressed as follows:
$$ \left[ {\begin{array}{c} {{\mathbf{D}_{1}}}\\ \vdots \\ {{\mathbf{D}_{N}}} \end{array}} \right]{\mathbf{p}_{k}} = \left[ {\begin{array}{c} {{r_{k,1}}}\\ {{f_{k,1}}}\\ \vdots \\ {\begin{array}{c} {{r_{k,N}}}\\ {{f_{k,N}}} \end{array}} \end{array}} \right]. $$
(6)
Then the over-determined equations are solved in the least-squares sense.
$$\begin{array}{@{}rcl@{}} {\mathbf{p}_{k}} &= &{\left({{\mathbf{D}^{\mathrm{T}}}\mathbf{D}} \right)^{- 1}}{\mathbf{D}^{\mathrm{T}}}\mathbf{I}\\ \mathbf{D} &=& \left[ {\begin{array}{c} {{\mathbf{D}_{1}}}\\ \vdots \\ {{\mathbf{D}_{N}}} \end{array}} \right], \mathbf{I} = \left[ {\begin{array}{c} {{r_{k,1}}}\\ {{f_{k,1}}}\\ \vdots \\ {\begin{array}{c} {{r_{k,N}}}\\ {{f_{k,N}}} \end{array}} \end{array}} \right]. \end{array} $$
(7)
In (7), the reconstructed position pk of the scattering center is calculated from the composite projection matrix D and the trajectory matrix I. Notably, the proposed algorithm needs no cross-range scaling, and the trajectory matrix of scattering centers is calculated directly from the gravity model. The process of obtaining the composite projection matrix D and the trajectory matrix I is outlined in Fig. 2. First, the observation angles of each sensor relative to the space target are estimated using kinematic formulas and coordinate system transformations, and the composite projection matrix D is built from these observation angles (azimuth and elevation). Second, assuming that the scattering centers are sufficiently separated that each peak in an ISAR image corresponds to a single scattering center, a watershed algorithm is adopted to segment each ISAR image into high-energy regions [15]; the range and Doppler frequency of each scattering center are then extracted from the maximum of every region. Because the projected positions of a scattering center may vary widely across images in a multistatic ISAR system, scattering centers must be associated between different ISAR images. In this paper, an association cost function based on projective transform and epipolar geometry is developed. Since the cost function is an assignment problem with 0–1 linear programming, the Jonker-Volgenant algorithm is used to build a one-to-one correspondence between scattering centers. Once the scattering centers from different images are associated, the ranges and Doppler frequencies of the same scattering centers in different images are used to build the matrix I. The projection matrix Dn is discussed in Section 3.1 and scattering center association is analyzed in Section 3.2.
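As a concrete illustration of (6) and (7), the stacked system can be solved with an ordinary least-squares routine. The following is a minimal NumPy sketch with illustrative names; `np.linalg.lstsq` returns the same solution as the normal equations of (7), but computes it more stably.

```python
import numpy as np

def reconstruct_position(D_list, measurements):
    """Solve the over-determined system (6) for one scattering center
    in the least-squares sense of (7).  D_list holds the 2x3 projection
    matrices D_1..D_N; measurements holds the stacked pairs
    (r_k1, f_k1, ..., r_kN, f_kN) of the same scattering center."""
    D = np.vstack(D_list)                      # 2N x 3 composite projection matrix
    I = np.asarray(measurements, dtype=float)  # 2N-element trajectory vector
    # Least-squares solution p_k = (D^T D)^{-1} D^T I, via lstsq for stability.
    p_k, *_ = np.linalg.lstsq(D, I, rcond=None)
    return p_k
```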
3.1 Projection matrix
For space targets on a steady trajectory, we can use orbital elements to establish the projection matrix. As shown in Fig. 3, a satellite orbits the Earth in its regular orbit, and we assume that sensors on the Earth’s surface receive the returned signal during the observation time. The processing steps are as follows:
(1) Transform the location of the sensor from the Earth-centered, Earth-fixed (ECEF) reference system to the Earth-centered inertial (ECI) reference system, as shown in Fig. 3. By coordinate system transformation, the location of the sensor is given by
$$ {\bar{\mathbf{r}}_{ECI}} = {\mathbf{R}_{{Z_{I}}}}\left({{\alpha_{G}}} \right){\bar{\mathbf{r}}_{ECEF}} $$
(8)
where \({\mathbf {R}_{{Z_{I}}}}\) is the rotation matrix rotating around the ZI-axis [16], αG is the Greenwich Hour Angle, and \({\bar {\mathbf {r}}_{ECEF}}\) is the location of the sensor in the ECEF reference system, defined by its longitude and latitude.
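Step (1) can be sketched in NumPy as follows; the function names are illustrative, a right-handed rotation convention is assumed, and [16] should be consulted for the exact sign convention of the rotation.

```python
import numpy as np

def rot_z(angle):
    """Right-handed rotation matrix about the Z-axis (angle in radians)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def ecef_to_eci(r_ecef, alpha_g):
    """Eq. (8): rotate a sensor position from the ECEF frame to the
    ECI frame by the Greenwich Hour Angle alpha_g."""
    return rot_z(alpha_g) @ np.asarray(r_ecef, dtype=float)
```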
(2) Transform the location of the sensor from the ECI reference system to the orbit-plane (O′,Xa,Ya,Za) reference system, which is used to describe the motion of the satellite. The Xa-axis points along the flight direction of the satellite, the Za-axis points at the sub-satellite point, and the Ya-axis is normal to the orbit plane. The transformation consists of two steps: a coordinate system rotation and a coordinate system translation. First, a temporary coordinate system, centered at the Earth’s core and parallel to the orbit reference system, is built by rotating the ECI reference system
$$ {\bar{\mathbf{r'}}_{a}} = {\mathbf{R}_{{Z_{a}}}}\left(\mu \right){\mathbf{R}_{{X_{a}}}}\left(i \right){\mathbf{R}_{{Z_{a}}}}\left(\Omega \right){\bar{\mathbf{r}}_{ECI}} $$
(9)
where \({\mathbf {R}_{{Z_{a}}}}\) is the rotation matrix rotating around the Za-axis, \({\mathbf {R}_{{X_{a}}}}\) is the rotation matrix rotating around the Xa-axis, μ is the argument of perigee, i is the orbit inclination, and Ω is the right ascension of the ascending node. Second, in the temporary coordinate system, the location of the satellite is given by applying Kepler’s laws and the two-body kinematics equations
$$ {\bar{\mathbf{r}}_{s}} = {\left[ {\begin{array}{cccc} {\frac{{\rho \cos \left(\gamma \right)}}{{1 + e\cos \left(\gamma \right)}}}&{\frac{{\rho \sin \left(\gamma \right)}}{{1 + e\cos \left(\gamma \right)}}}&0 \end{array}} \right]^{\mathrm{T}}} $$
(10)
where ρ denotes the semi-latus rectum, e denotes the eccentricity, and γ is the true anomaly [17]. Then, the location of the sensor in the orbit (O′,Xa,Ya,Za) reference system is obtained by translating the origin of the temporary coordinate system to the center of the satellite
$$ {\bar{\mathbf{r}}_{a}} = \mathbf{B}{\bar{\mathbf{r'}}_{a}} + {\left[ {\begin{array}{ccc} 0&0&{ - \left\| {{\bar{\mathbf{r}}_{s}}} \right\|} \end{array}} \right]^{\mathrm{T}}} $$
(11)
where \(\mathbf {B} = \left [ {\begin {array}{ccc} 0&1&0\\ 0&0&{ - 1}\\ { - 1}&0&0 \end {array}} \right ]\) is used to adjust the coordinate axis.
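Steps (9)–(11) can be sketched as follows; the names are illustrative, all angles are in radians, and the rotation helpers are restated so the snippet is self-contained.

```python
import numpy as np

def rot_x(a):
    """Right-handed rotation matrix about the X-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_z(a):
    """Right-handed rotation matrix about the Z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def sensor_in_orbit_frame(r_eci, mu, inc, Omega, rho, ecc, gamma):
    """Eqs. (9)-(11): rotate the ECI sensor position into the temporary
    Earth-centered frame, then translate its origin to the satellite
    position given by the Kepler orbit (rho: semi-latus rectum,
    ecc: eccentricity, gamma: true anomaly)."""
    # Eq. (9): rotate by right ascension, inclination, argument of perigee.
    r_tmp = rot_z(mu) @ rot_x(inc) @ rot_z(Omega) @ np.asarray(r_eci, dtype=float)
    # Eq. (10): satellite position in the temporary frame.
    r = rho / (1.0 + ecc * np.cos(gamma))
    r_s = np.array([r * np.cos(gamma), r * np.sin(gamma), 0.0])
    # Eq. (11): axis adjustment B, then translation along -Z_a by ||r_s||.
    B = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, -1.0], [-1.0, 0.0, 0.0]])
    return B @ r_tmp + np.array([0.0, 0.0, -np.linalg.norm(r_s)])
```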
(3) Transform the location of the sensor from the orbit (O′,Xa,Ya,Za) reference system to the body (O′,Xn,Yn,Zn) reference system. In the body reference system, the location of the sensor is given by
$$ {\bar{\mathbf{r}}_{n}} = {\mathbf{R}_{{X_{n}}}}\left({{\psi_{s}}} \right){\mathbf{R}_{{Y_{n}}}}\left({{\gamma_{s}}} \right){\mathbf{R}_{{Z_{n}}}}\left({{\phi_{s}}} \right){\bar{\mathbf{r}}_{a}} $$
(12)
where \({\bar {\mathbf {r}}_{n}}\left ({ = {{\left [ {{x_{n}}, {y_{n}}, {z_{n}}} \right ]}^{\mathrm {T}}}} \right)\) is the coordinate value of the sensor in the body reference system; \({\mathbf {R}_{{X_{n}}}}\), \({\mathbf {R}_{{Y_{n}}}}\), and \({\mathbf {R}_{{Z_{n}}}}\) are the rotation matrices rotating around the Xn, Yn, and Zn axes; and (ψs,γs,ϕs) are the roll, pitch, and yaw angles of the satellite. When ψs=0, γs=0, and ϕs=0, the body reference system coincides with the orbit reference system. We use the method in [18] to estimate the roll, pitch, and yaw angles; the basic idea is to exploit the phase history of the strongest scatterers in different images. First, the brightest spots in the different images, taken as corresponding to the same scatterer of the target, are associated using a simple nearest-neighbor method. Second, since the rotation vector is embedded in the Doppler frequency of the scatterer, a Doppler-matching-based estimation scheme is applied to recover the Doppler frequency of the scatterer, and the yaw, pitch, and roll rotation motions are then estimated by matching the Doppler frequency. Finally, the elevation angle φn and azimuth angle βn of the nth sensor relative to the satellite are obtained by
$$\begin{array}{*{20}l} {\varphi_{n}} = &\arcsin \left({\frac{{{z_{n}}}}{{\left\| {{\bar{\mathbf{r}}_{n}}} \right\|}}} \right)\\ {\beta_{n}} =& \arccos \left({\frac{{{x_{n}}}}{{\left\| {{\bar{\mathbf{r}}_{n}}} \right\|}}\frac{1}{{\cos {\varphi_{n}}}}} \right). \end{array} $$
(13)
Generally, ISAR targets are non-cooperative with unknown motion. Specific space targets, however, usually move steadily along their orbits, so orbital elements can be used to estimate the observation angles of the sensor. From the elevation and azimuth angles, the projection matrix can be constructed.
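The two quantities of this subsection, the observation angles of (13) and the projection matrix of (5), can be sketched in NumPy as follows; names are illustrative and all angles and rates are in radians and radians per second.

```python
import numpy as np

def look_angles(r_n):
    """Eq. (13): elevation and azimuth of the sensor as seen from the
    satellite, from its body-frame coordinates r_n = [x_n, y_n, z_n]."""
    x, y, z = r_n
    rho = np.linalg.norm(r_n)
    phi = np.arcsin(z / rho)                   # elevation angle
    beta = np.arccos(x / (rho * np.cos(phi)))  # azimuth angle
    return phi, beta

def projection_matrix(phi, beta, dphi, dbeta, wavelength):
    """Eq. (5): build the 2x3 projection matrix D_n from the elevation
    angle phi, azimuth angle beta, their time derivatives, and the
    radar wavelength."""
    # Range row R_n: unit line-of-sight vector.
    R = np.array([np.cos(phi) * np.cos(beta),
                  np.cos(phi) * np.sin(beta),
                  np.sin(phi)])
    # Doppler row F_n: time derivative of R_n scaled by 2/lambda.
    F = (2.0 / wavelength) * np.array([
        -dphi * np.cos(beta) * np.sin(phi) - dbeta * np.sin(beta) * np.cos(phi),
         dbeta * np.cos(beta) * np.cos(phi) - dphi * np.sin(beta) * np.sin(phi),
         dphi * np.cos(phi)])
    return np.vstack([R, F])
```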
3.2 Scattering center association
Because a multistatic ISAR system views a space target, and hence its scattering centers, from different orientations, the radar data are in general not associated, and the projected positions of a scattering center may vary widely between ISAR images. To tackle the problem of associating scattering centers between different images, we propose a new association method based on projective transform and epipolar geometry. Scattering center association can be considered an assignment procedure that assigns each unassociated scattering center to its associated one. To minimize the assignment cost, we establish a cost function between the mth and nth images as follows:
$$\begin{array}{*{20}l} \min &\sum\limits_{p = 1}^{P} {\sum\limits_{q = 1}^{Q} {{k_{m,n}}\left({p,q} \right)\left({{g_{m,n}}\left({p,q} \right) + {e_{m,n}}\left({p,q} \right)} \right)}} \\ {\mathrm{s}}{\mathrm{.t}}{\mathrm{.}}&\sum\limits_{p = 1}^{P} {{k_{m,n}}\left({p,q} \right)} = 1\\ &\sum\limits_{q = 1}^{Q} {{k_{m,n}}\left({p,q} \right)} = 1\\ &{k_{m,n}}\left({p,q} \right) \in \left\{ {0,1} \right\} \end{array} $$
(14)
where P is the number of scattering centers in the mth image and Q is the number in the nth image; the mth image is obtained by the mth sensor and the nth image by the nth sensor. km, n(p, q) is the control variable, called the association matrix, between the mth and nth images. When the pth scattering center in the mth image is associated with the qth scattering center in the nth image, km, n(p, q) equals 1; otherwise, it equals 0. The quantities gm, n(p, q) and em, n(p, q) are the geometry coefficient and the error coefficient, which are analyzed below.
3.2.1 Geometry coefficient gm, n(p, q)
Epipolar geometry describes the mapping relationship between two images: the 3D structure corresponding to a scattering center in one image must project onto a line in another image. The possible location of the corresponding scattering center in the other imaging plane is therefore restricted. The geometry coefficient gm, n(p, q) is the Euclidean distance between the probable projected position of the pth scattering center in the nth image and the position of the qth scattering center. Its mathematical expression is derived as follows.
First, the position of the 3D structure corresponding to the pth scattering center can be obtained by
$$\begin{array}{@{}rcl@{}} \begin{array}{c} \left[ {\begin{array}{c} {{{\tilde{x}}_{p}}\left(\tilde{t} \right)}\\ {{\tilde{y}_{p}}\left(\tilde{t} \right)}\\ {{\tilde{z}_{p}}\left(\tilde{t} \right)} \end{array}} \right] = \left({{\mathbf{R}_{m}} \times {\mathbf{F}_{m}}} \right)\tilde{t} + {\left[ {\begin{array}{c} {{\mathbf{R}_{m}}}\\ {{\mathbf{F}_{m}}}\\ {{\mathbf{R}_{m}} \times {\mathbf{F}_{m}}} \end{array}} \right]^{- 1}}\\ \cdot \left[ {\begin{array}{c} {{r_{p,m}}}\\ {{f_{p,m}}}\\ 0 \end{array}} \right], \tilde{t} \in \left[ {{\tilde{t}_{a}},{\tilde{t}_{b}}} \right] \end{array} \end{array} $$
(15)
where \(\left [ {{\tilde {x}_{p}}\left (\tilde {t} \right),{\tilde {y}_{p}}\left (\tilde {t} \right),{\tilde {z}_{p}}\left (\tilde {t} \right)} \right ]\) is the probable position of the 3D structure, (rp, m,fp, m) are the range and Doppler frequency of the pth scattering center in the mth image, and “ ×” denotes the cross product. Since the target extent is bounded, the parameter \(\tilde {t}\) is limited to the user-defined interval from \(\tilde {t}_{a}\) to \(\tilde {t}_{b}\). The projection of this 3D structure on the nth image is the epipolar line segment \(\vec l_{n}^{p,m}\)
$$ \vec l_{n}^{p,m}:\left[ {\begin{array}{c} {{\tilde{r}_{p,n}}\left(\tilde{t} \right)}\\ {{\tilde{f}_{p,n}}\left(\tilde{t} \right)} \end{array}} \right] = {\mathbf{D}_{n}}\left[ {\begin{array}{c} {{\tilde{x}_{p}}\left(\tilde{t} \right)}\\ {{\tilde{y}_{p}}\left(\tilde{t} \right)}\\ {{\tilde{z}_{p}}\left(\tilde{t} \right)} \end{array}} \right] $$
(16)
where \(\left ({{\tilde {r}_{p,n}}\left (\tilde {t} \right),{\tilde {f}_{p,n}}\left (\tilde {t} \right)} \right)\) are the projected range and Doppler frequency of the pth scattering center in the nth image. When the pth scattering center is associated with the qth scattering center, the qth scattering center should lie on the line segment \(\vec l_{n}^{p,m}\). In practice, because of deviations caused by noise and other errors, the qth scattering center may not lie exactly on the line segment \(\vec l_{n}^{p,m}\), but only near it. Finally, the geometry coefficient gm, n(p, q) is given by
$$ {g_{m,n}}\left({p,q} \right) = \min {\left\| {\begin{array}{c} {{\tilde{r}_{p,n}}\left(\tilde{t} \right) - {r_{q,n}}}\\ {\frac{{\lambda {T_{p}}}}{2}\left({{\tilde{f}_{p,n}}\left(\tilde{t} \right) - {f_{q,n}}} \right)} \end{array}} \right\|_{2}} $$
(17)
where min(∙) denotes the minimization and \(\frac {{\lambda {T_{p}}}}{2}\) keeps the range and Doppler frequency in the same units. In (17), the geometry coefficient gm, n(p, q) indicates the difference between the pth and qth scattering centers in the same projection plane. A smaller value of gm, n(p, q) indicates a higher probability that the two scattering centers correspond. However, when the target is complex, more than one scattering center may lie near \(\vec l_{n}^{p,m}\). The error coefficient em, n(p, q) is therefore introduced to evaluate the association probability of these scattering centers.
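The computation of (15)–(17) can be sketched by sampling the 3D line and projecting the samples onto image n; the names, the default bounds on \(\tilde{t}\), and the sampling density are all illustrative choices.

```python
import numpy as np

def geometry_coefficient(D_m, D_n, r_pm, f_pm, r_qn, f_qn,
                         scale, t_a=-10.0, t_b=10.0, n_samples=201):
    """Eqs. (15)-(17): sample the 3D line consistent with the pth
    scattering center of image m, project it onto image n, and return
    the minimum scaled distance to the qth scattering center.
    scale = lambda * T_p / 2 converts Doppler units to range units;
    t_a and t_b are the user-defined bounds on the target extent."""
    R_m, F_m = np.asarray(D_m)[0], np.asarray(D_m)[1]
    d = np.cross(R_m, F_m)                  # direction of the 3D line
    A = np.vstack([R_m, F_m, d])            # invertible when R_m, F_m are independent
    p0 = np.linalg.solve(A, np.array([r_pm, f_pm, 0.0]))  # anchor point of eq. (15)
    t = np.linspace(t_a, t_b, n_samples)
    pts = p0[None, :] + t[:, None] * d[None, :]           # candidate 3D positions
    proj = pts @ np.asarray(D_n).T          # eq. (16): epipolar segment on image n
    dr = proj[:, 0] - r_qn
    df = scale * (proj[:, 1] - f_qn)
    return float(np.min(np.hypot(dr, df)))  # eq. (17)
```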
3.2.2 Error coefficient em, n(p, q)
In (7), the 3D position of the scattering center is reconstructed by the least-squares solution with minimum mean squared error. If the trajectory matrix I contains a large error, for example from inaccurate scattering center association, there may be a large reconstruction error between the 3D-reconstructed position and the true 3D position. The error coefficient em, n(p, q) therefore regards the reconstruction error as an assignment cost. Its mathematical expression is derived as follows.
First, based on the projective transform, the 3D-reconstructed position can be obtained by tentatively associating the pth scattering center with the qth one
$$\begin{array}{*{20}l} {{\tilde{\mathbf{p}}}_{p,q}} = & {\left({{\tilde{\mathbf{D}}^{\mathrm{T}}}\tilde{\mathbf{D}}} \right)^{- 1}}{\tilde{\mathbf{D}}^{\mathrm{T}}}\tilde{\mathbf{I}} \\ \tilde{\mathbf{D}} = &\left[ {\begin{array}{c} {{\mathbf{D}_{m}}}\\ {{\mathbf{D}_{n}}} \end{array}} \right], \tilde{\mathbf{I}} = \left[ {\begin{array}{c} {{r_{p,m}}}\\ {{f_{p,m}}}\\ {{r_{q,n}}}\\ {{f_{q,n}}} \end{array}} \right] \end{array} $$
(18)
where Dm and Dn denote the projection matrices of the mth and nth sensors, and rp, m and fp, m are the range and Doppler frequency of the pth scattering center in the mth image. Similarly, rq, n and fq, n are the range and Doppler frequency of the qth scattering center in the nth image. Then the projected ranges and Doppler frequencies of \({\tilde {\mathbf {p}}_{p,q}}\) on the two images are expressed by
$$\begin{array}{*{20}l} \left[ {\begin{array}{c} {{\tilde{r}_{p,m}}}\\ {{\tilde{f}_{p,m}}} \end{array}} \right] = &{\mathbf{D}_{m}}{{\tilde{\mathbf{p}}}_{p,q}}\\ \left[ {\begin{array}{c} {{\tilde{r}_{q,n}}}\\ {{\tilde{f}_{q,n}}} \end{array}} \right] = &{\mathbf{D}_{n}}{{\tilde{\mathbf{p}}}_{p,q}}. \end{array} $$
(19)
According to (19), the error coefficient em, n(p, q) is given by
$$\begin{array}{@{}rcl@{}} \begin{array}{c} {e_{m,n}}\left({p,q} \right) ={\left\| {\left[ {\begin{array}{*{20}{c}} {{\tilde{r}_{p,m}}}\\ {{\tilde{r}_{q,n}}} \end{array}} \right] - \left[ {\begin{array}{l} {{r_{p,m}}}\\ {{r_{q,n}}} \end{array}} \right]} \right\|_{2}}\\ + \frac{{\lambda {T_{p}}}}{2}{\left\| {\left[ {\begin{array}{l} {{\tilde{f}_{p,m}}}\\ {{\tilde{f}_{q,n}}} \end{array}} \right] - \left[ {\begin{array}{l} {{f_{p,m}}}\\ {{f_{q,n}}} \end{array}} \right]} \right\|_{2}}. \end{array} \end{array} $$
(20)
In (20), based on the projective transform, the error coefficient em, n(p, q) measures, in the image domain, the difference between the 3D-reconstructed position and the true one. The smaller the error coefficient em, n(p, q), the higher the association probability.
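Equations (18)–(20) can be sketched as a two-view least-squares reconstruction followed by reprojection; the function and variable names are illustrative.

```python
import numpy as np

def error_coefficient(D_m, D_n, r_pm, f_pm, r_qn, f_qn, scale):
    """Eqs. (18)-(20): tentatively associate the pth and qth scattering
    centers, reconstruct a 3D position in the least-squares sense,
    reproject it onto both images, and return the reprojection error.
    scale = lambda * T_p / 2 converts Doppler units to range units."""
    D = np.vstack([D_m, D_n])                      # 4x3 composite projection matrix
    I = np.array([r_pm, f_pm, r_qn, f_qn])
    p_hat, *_ = np.linalg.lstsq(D, I, rcond=None)  # eq. (18)
    proj_m = np.asarray(D_m) @ p_hat               # eq. (19)
    proj_n = np.asarray(D_n) @ p_hat
    e_r = np.hypot(proj_m[0] - r_pm, proj_n[0] - r_qn)          # range residual
    e_f = scale * np.hypot(proj_m[1] - f_pm, proj_n[1] - f_qn)  # Doppler residual
    return float(e_r + e_f)                        # eq. (20)
```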
The next step is to optimize the cost function. The cost function is an assignment problem with 0–1 linear programming, which is complex to optimize; enumeration methods can solve it, but they require excessive computation. In this paper, the linear assignment problem is solved efficiently by the Jonker-Volgenant algorithm [19], a joint optimization process that reduces the computational cost.
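Assuming SciPy is available, its `linear_sum_assignment` routine (which implements a modified Jonker-Volgenant algorithm) solves exactly this kind of linear assignment; a minimal sketch with an illustrative cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # modified Jonker-Volgenant solver

def associate(cost):
    """Solve the assignment problem of eq. (14): cost[p, q] holds
    g_mn(p, q) + e_mn(p, q), and the returned (p, q) pairs are the
    entries where the association matrix k_mn(p, q) equals 1."""
    rows, cols = linear_sum_assignment(np.asarray(cost))
    return list(zip(rows.tolist(), cols.tolist()))
```

For P = Q the result is a one-to-one correspondence between the two images; for a rectangular cost matrix the routine matches every scattering center on the smaller side.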
As analyzed above, the proposed association method reduces the association problem between two images to one between a line and an image. However, this method requires images with high range and Doppler resolution; to improve performance, an interpolation method [20] can be used to enhance image resolution.