An integrative synchronization and imaging approach for bistatic spaceborne/stratospheric SAR with a fixed receiver
EURASIP Journal on Advances in Signal Processing volume 2013, Article number: 165 (2013)
Abstract
Bistatic spaceborne/stratospheric synthetic aperture radar (SAR) with a fixed receiver is a novel hybrid bistatic SAR system in which a spaceborne SAR serves as the transmitter of opportunity, while a fixed receiver is mounted on a stratospheric platform. This paper presents an integrative synchronization and imaging approach for this particular system. First, a novel synchronization method using the direct-path signal, which can be collected by a dedicated antenna, is proposed and applied. The synchronization error can be completely removed by the proposed method. However, as a cost of synchronization, the range history of the synchronized echo becomes quite different from that of general bistatic SAR data. To focus this particular synchronized data, its 2D spectrum is derived under linear approximations, and then a frequency-domain imaging algorithm using the two-dimensional inverse scaled Fourier transform (2D-ISFT) is proposed. Finally, the proposed integrative synchronization and imaging algorithm is verified by simulations.
1. Introduction
Bistatic synthetic aperture radar (SAR) has been an active research area in the last decade, and configurations based on a spaceborne illuminator appear particularly attractive. Several bistatic SAR experiments with spaceborne illuminators have been conducted by numerous organizations, with receivers that are spaceborne, airborne, or stationary on the ground. The promising imagery results show the great potential and capabilities of bistatic SAR as an innovative imaging system [1–6].
This paper discusses a particular subclass of the spaceborne hybrid configuration, where a spaceborne SAR, e.g. TerraSAR-X, serves as the transmitter of opportunity, while a fixed receiver is mounted on a stratospheric platform, e.g. a stratospheric aerostat. Its advantages include low cost, reduced vulnerability to countermeasures, high operational flexibility, and a wide observation scene [7, 8]. Such a system is a good tool for high-resolution imaging and high-precision height information extraction, with potential for future missions in both civil and military applications.
Time and phase synchronization is essential and foremost for such a bistatic system, since independent local oscillators (LO) are used and a common time reference between the transmitter and the receiver is missing. The synchronization errors result not only in range cell migration (RCM) errors but also in distortion of the azimuth-dependent phase history [9–12]. This implies that synchronization errors will degrade the quality of the bistatic SAR images, and therefore corresponding compensation must be implemented.
The synchronization scheme has been well studied for cooperative bistatic SAR systems, in which a dedicated synchronization link can be constructed [13–17]. An echo-domain phase synchronization approach using the correlation of bistatic echoes was proposed for bistatic SAR in alternating bistatic/ping-pong mode in [18]. For uncooperative bistatic SAR systems, a common approach is to achieve synchronization using the direct-path signal [19–23].
Assuming that the direct-path channel and the reflected-path channel are previously balanced and that a common LO is used for both channels, the reflected signal and the direct-path signal will be contaminated by the same synchronization errors. However, without any other auxiliary data, it is quite difficult to estimate the phase synchronization error with high precision [22]. On the one hand, the accuracy of the satellite's trajectory is not sufficient for separating the phase synchronization error from the nominal phase caused by the direct-path range history. On the other hand, the phase synchronization error caused by the LO phase noise is a random component, which is difficult to estimate [24].
In this paper, we propose a new synchronization method that uses the direct-path signal's time delays and peak phases, without isolating the synchronization errors, to directly compensate the corresponding components in the reflected signal. In this way, the time and phase synchronization errors can be completely removed, and the method is quite simple and fast.
However, because the direct-path signal's time delays and peak phases contain both the synchronization error component and the nominal range component, the range history of the direct-path signal is removed from the synchronized echo data as well. This means that the system impulse response of the synchronized data will be quite different from that of general bistatic SAR data. It is most straightforward to use time-domain algorithms such as the back-projection algorithm (BPA) to focus the synchronized data [25–27]. However, BPA requires high-precision position measurements and suffers from a severe computational load. Therefore, a frequency-domain algorithm is the preferred choice, but it has to be re-engineered accordingly.
The calculation of an analytical bistatic point-target reference spectrum (BPTRS) is the key to developing a frequency-domain imaging algorithm, since the bistatic range history loses its hyperbolic form [28]. Based on approximate BPTRS, including the Loffeld bistatic formula (LBF) [29, 30] and the series reversion [31], several imaging algorithms have been proposed. The 2D-ISFT algorithm [32] and the chirp scaling (CS) algorithm [33] were developed from the LBF for the corresponding bistatic configurations. Based on the series reversion method, a nonlinear CS (NLCS) process was applied to equalize the azimuth chirp rate in bistatic SAR focusing for the general configuration [34], and a range Doppler (RD) algorithm was proposed to handle the azimuth-invariant bistatic case [35]. In the bistatic SAR with a fixed receiver configuration, a two-dimensional NLCS processor was applied in [36] to deal with the large bistatic angle case. In [37], a highly accurate bistatic range migration algorithm (RMA) was proposed for an asymmetric bistatic SAR system with a fixed receiver. A modified range Doppler algorithm was presented for space-surface bistatic SAR (SS-BSAR) [38]. In the spaceborne/airborne configuration, a frequency-domain processing method based on the 2D-ISFT was proposed in [39, 40].
According to the properties of the synchronized data, a frequency-domain imaging algorithm based on the 2D-ISFT is proposed in this paper. First, owing to the three square-root terms in the range history, a Taylor series is applied to obtain the stationary phase point when deriving the BPTRS. Further, under a properly designed linear approximation of the BPTRS, the 2D spectrum of the synchronized scene data is derived. It is found that the dominant component of this 2D spectrum is the 2D scaled Fourier transform of the target's bistatic backscattering coefficient. Finally, a frequency-domain imaging algorithm based on the 2D-ISFT is proposed.
This paper is organized as follows. In Section 2, the geometry of the bistatic spaceborne/stratospheric SAR with a fixed receiver is given, and the signal model of the echo data with synchronization errors is derived. In Section 3, the synchronization implementation using the direct-path signal is presented. Section 4 derives the BPTRS and the 2D spectrum for the synchronized data. The analysis of approximation errors in the derivation of the 2D spectrum is presented in Section 5. Then, a frequency-domain imaging algorithm using the 2D-ISFT is proposed in Section 6. To validate the proposed algorithm, simulation experiments are carried out in Section 7. Finally, in Section 8, some conclusions are presented.
2. Geometry and signal model
Figure 1 shows the geometry of the considered bistatic spaceborne/stratospheric SAR with a fixed receiver, where the spaceborne transmitter T moves along a straight line with velocity v, while the stratospheric receiver R remains stationary. It is worth pointing out that the receiver is designed with two antennas: one is the radar channel antenna, used to collect the reflected signal, while the other is the direct-path channel antenna, used to collect the direct-path signal. As can be seen in Figure 1, for the point target P, the reflected signal travels along r _{T} + r _{R} to be received by the radar channel antenna, while the direct-path signal travels along r _{D} to be received by the dedicated direct-path channel antenna.
As shown in Figure 1, the coordinate of the receiver is (0,0,z _{R}), and the coordinate of the transmitter is (x _{T},y _{T},z _{T}). In this geometry, z _{R}, x _{T}, and z _{T} are constant, and y _{T}(t) satisfies
where t is the azimuth (slow) time. Assuming that the coordinate of the point target P is (x, y, 0), the transmitter-to-target range and the target-to-receiver range for P can be expressed as
To make the following derivation easier, we use the transmitter-referenced coordinate to define the target space. Assuming that
where t _{0T} is the zero Doppler time for P, and r _{0T} is the slant range from the transmitter to the target at t _{0T}. Therefore, (2) and (3) can be rewritten as
where {r}_{0\mathrm{R}}=\sqrt{{x}^{2}+{z}_{\mathrm{R}}^{2}}. From (5), we can see that x is a function of the variable r _{0T}; hence, r _{0R} can be expressed as r _{0R}(r _{0T}).
Meanwhile, the slant range history of the direct-path signal can be written as
where {r}_{0\mathrm{d}}=\sqrt{{x}_{\mathrm{T}}^{2}+{\left({z}_{\mathrm{T}}-{z}_{\mathrm{R}}\right)}^{2}} is the closest distance from the transmitter to the receiver.
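As a numerical illustration of this geometry, the sketch below evaluates r _{T}, r _{R}, and r _{D}, assuming y _{T}(t) = v·t (the transmitter moves along a straight line with velocity v) and illustrative platform coordinates; the values are placeholders, not those of Table 1:

```python
import numpy as np

# Illustrative (not Table 1) geometry: transmitter at (x_T, v*t, z_T),
# receiver fixed at (0, 0, z_R), point target P at (x, y, 0).
v = 7600.0                  # transmitter velocity along y, m/s
x_T, z_T = 100e3, 700e3     # transmitter x offset and height, m
z_R = 20e3                  # receiver height, m

def r_T(t, x, y):
    """Transmitter-to-target range."""
    return np.sqrt((x - x_T)**2 + (y - v * t)**2 + z_T**2)

def r_R(x, y):
    """Target-to-receiver range (the receiver is stationary)."""
    return np.sqrt(x**2 + y**2 + z_R**2)

def r_D(t):
    """Direct-path range from the transmitter to the receiver."""
    return np.sqrt(x_T**2 + (v * t)**2 + (z_T - z_R)**2)

# The closest direct-path distance r_0d is reached at zero Doppler time t = 0:
r_0d = np.sqrt(x_T**2 + (z_T - z_R)**2)
```

At t = 0 the direct-path range r_D(0) equals r_0d, and it grows symmetrically for |t| > 0, matching the square-root range histories above.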
Supposing that the transmitted signal is
where rect[·] is the window function with rectangular shape, τ is the range (fast) time, T _{P} is the duration time of the transmitted pulse, f _{0} is the radar carrier frequency, and k is the chirp rate. Therefore, the ideal reflected raw signal from the point target P at the receiver can be given as
where σ(·) is the bistatic backscattering coefficient of the point target P, t _{delay} = (r _{T}(t,t _{0T},r _{0T}) + r _{R}(t _{0T},r _{0T}))/c is the time delay corresponding to the time it takes the signal to travel the transmitter-target-receiver distance, c is the speed of light, and T _{S} is the synthetic aperture time for the point target P.
In the bistatic spaceborne/stratospheric SAR system, there is no common time reference between the transmitter and the receiver. Moreover, independent oscillators are used in the transmitter and the receiver; thus, the phase noise of the oscillator cannot be cancelled out as in monostatic SAR. Therefore, time and phase synchronization errors must be considered in this system.
First, considering only the time synchronization error e(t), the practical reflected signal can be rewritten as
Next, assuming that the oscillator in the spaceborne transmitter has an ideal performance, the combined phase synchronization error φ _{e}(t) is assigned to the oscillator in the receiver [14–22]. Thus, omitting the amplitude and the initial phase, which are inessential, the output signal of the receiver's oscillator can be modelled as
After the quadrature demodulation process, the practical signal reflected from the point target P, considering time and phase synchronization errors, can be given as
As can be seen from (13), the time and phase synchronization errors result not only in a drift of the echo sampling windows, which causes RCM errors, but also in distortion of the azimuth-dependent phase history. From the knowledge of SAR imaging, we can conclude that the time and phase synchronization errors will degrade the quality of the bistatic SAR image [9]. Therefore, time and phase synchronization compensation must be implemented.
Assuming that the direct-path channel and the radar channel are previously balanced and a common LO is used for both channels, the radar channel signal and the direct-path channel signal will be contaminated by the same synchronization errors. Thus, the direct-path signal after demodulation can be written as
where t _{D_delay} = r _{D}(t)/c.
3. Synchronization using the directpath signal
As stated before, extracting the synchronization errors from the direct-path signal is a straightforward idea for bistatic SAR synchronization. It is possible to extract the time synchronization error from the direct-path signal [22]. However, it seems difficult to estimate and extract the phase synchronization error with high precision if no other auxiliary data are available. The reason for this is twofold. On the one hand, the accuracy of the satellite's trajectory is not sufficient to separate the phase synchronization error from the nominal phase caused by the direct-path range history. On the other hand, the phase synchronization error caused by the LO phase noise is a random component, which is difficult to estimate [24]. Therefore, a different approach has to be applied here.
Owing to its high SNR and clean phase, the time delays and peak phases of the direct-path signal can be precisely extracted after range compression. Ignoring trivial constant offsets, the extracted time delays and peak phases can be formulated as
As can be seen from (15) and (16), the extracted time delays and peak phases contain both the nominal range component and the synchronization error component. Instead of separating the synchronization error component from the nominal range component, we use the whole extracted time delay \hat{t}_{\mathrm{D\_delay}}(t) and peak phase \hat{\varphi}_{\mathrm{D}}(t) to compensate the corresponding terms in the reflected signal. This can be done by applying an opposite time-delay shift and by multiplying an opposite phase term on each pulse.
At first, performing the range FT with respect to the variable τ, we can transform (13) into the range-frequency/azimuth-time domain:
where f is the range frequency. Then, according to (15), establishing the reference function of time synchronization yields
Thus, the reflected signal after time synchronization, in the range-frequency/azimuth-time domain, can be given as
and then transforming (19) into 2D time domain gives
Similarly, the reference function of phase synchronization according to (16) can be given as
Finally, multiplying (20) by the reference function (21) yields
where
The combination of extraction and compensation can be viewed as the whole synchronization process. Its block diagram is shown in Figure 2.
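The extract-and-compensate process above can be sketched numerically for a single pulse. The chirp parameters, delays, and phases below are hypothetical; the extracted direct-path delay and peak phase are assumed to be already available, as in (15) and (16):

```python
import numpy as np

fs, Tp, k = 100e6, 2e-6, 25e12   # hypothetical sampling rate, pulse width, chirp rate
n = 1024
tau = np.arange(n) / fs          # fast time

def chirp(t0):
    """Baseband chirp pulse delayed by t0."""
    w = (tau >= t0) & (tau <= t0 + Tp)
    return w * np.exp(1j * np.pi * k * (tau - t0)**2)

# Extracted direct-path time delay and peak phase; both already contain the
# synchronization errors e(t) and phi_e(t), so no separation is attempted.
t_hat, phi_hat = 1.0e-6, 0.7

# The reflected pulse carries the same errors; its delay exceeds the
# direct-path one by (r_T + r_R - r_D)/c, here 0.5 us.
echo = chirp(t_hat + 0.5e-6) * np.exp(-1j * phi_hat)

# Time synchronization: opposite delay shift via a linear phase in range frequency.
f = np.fft.fftfreq(n, 1 / fs)
echo = np.fft.ifft(np.fft.fft(echo) * np.exp(1j * 2 * np.pi * f * t_hat))

# Phase synchronization: multiply by the opposite peak phase.
echo_sync = echo * np.exp(1j * phi_hat)

# After compensation, a matched filter should peak at the residual delay
# (r_T + r_R - r_D)/c with (near) zero residual phase.
mf = np.fft.ifft(np.fft.fft(echo_sync) * np.conj(np.fft.fft(chirp(0.0))))
```

The peak of |mf| lands at the residual bistatic-minus-direct delay, and its phase is no longer affected by phi_hat, illustrating that the compensation simultaneously removes the synchronization errors and the direct-path range history.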
From (22), we can see that after the implementation of synchronization, the synchronization errors are completely removed. However, as a cost of synchronization, the range history of the point target P, r _{T} + r _{R}, is replaced by r _{T} + r _{R} − r _{D}. We note here that this range history is quite different from that of general bistatic SAR. Thus, some special processing approaches have to be implemented, which will be discussed in the following sections.
4. 2D spectrum of synchronized data
To understand the features of the synchronized data, this section will describe the processing steps performed to obtain its 2D spectrum, which is based on the signal model (22) and the principle of stationary phase (POSP).
4.1 Derivation of BPTRS
The BPTRS can be obtained by performing the Fourier transform (FT) with respect to the variable τ and the variable t, respectively. Firstly, performing the range FT with respect to the variable τ, we can transform (22) into the range-frequency/azimuth-time domain. Using the POSP, the result can be given as
Then we can transform (24) into the range-frequency/azimuth-frequency domain by performing the azimuth FT with respect to the variable t.
where f _{a} is the azimuth frequency, and the phase term considered in the integral is
It can be seen from (6) to (8) and (23) that there are three square-root terms in φ _{a}(f _{a},f,t,t _{0T},r _{0T}). This makes it difficult to obtain the BPTRS by applying the POSP, since it seems impossible to obtain an analytical expression of the stationary point for (26). To overcome this limitation, a Taylor series expansion is used to simplify the phase terms in (26), with a sufficient precision of approximation, which has been verified in [39]. Expanding the three square-root terms in Taylor series and keeping terms up to the second order gives
Based on (27) to (29), we can calculate some key parameters of the Doppler signal. Assuming that the transmitted signal is a narrowband signal, i.e. \frac{f+{f}_{0}}{c}\approx \frac{1}{\lambda}, the Doppler centroid and the Doppler bandwidth can be derived as
Therefore, the Doppler time-bandwidth product (TBP) is {\mathrm{TBP}}_{\mathrm{D}}=\frac{{v}^{2}{T}_{\mathrm{s}}^{2}}{\lambda {r}_{0\mathrm{T}}{r}_{0\mathrm{d}}}\cdot \left|{r}_{0\mathrm{T}}-{r}_{0\mathrm{d}}\right|. For the point target located at the scene centre, using the parameters listed in Table 1, we get v = 7,600 m/s, λ = 0.031 m, r _{0T} = 726.9 km, r _{0d} = 645.8 km, and T _{s} = 0.484 s, and then TBP_{D} ≈ 78. This means that the accuracy of the POSP is sufficient to obtain an analytical expression of the spectrum. Thus, the phase term shown in (26) can be rewritten as
Setting \frac{\partial {\phi}_{\mathrm{a}}\left({f}_{\mathrm{a}},f,t,{t}_{0\mathrm{T}},{r}_{0\mathrm{T}}\right)}{\partial t}=0, we can obtain the stationary point
Substituting t _{P} for t in the integral of (25) yields
where the phase term ψ (f _{a} ,f,t _{0T},r _{0T}), which represents the significant features of BPTRS, can be given as
The first and second terms in (35) represent the RCM, which is not only azimuth-variant but also range-variant. The third term is a linear function of the azimuth frequency f _{a}, expressing the azimuth position of the point target in the focused image. The fourth term stands for the cross coupling between azimuth and range, and the last one is responsible for the range modulation. We can see from (35) that the BPTRS depends on both the azimuth coordinate t _{0T} and the range coordinate r _{0T} of the point target P.
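The POSP-validity argument above rests on the Doppler TBP being large. As a quick numerical check, it can be reproduced from the quoted scene-centre values (a sketch; with these rounded inputs the product evaluates to about 75, close to the quoted TBP_D ≈ 78, which presumably uses the exact Table 1 parameters):

```python
# Doppler time-bandwidth product TBP_D = v^2 * Ts^2 * |r0T - r0d| / (lambda * r0T * r0d),
# evaluated with the scene-centre values quoted in the text.
v, lam, Ts = 7600.0, 0.031, 0.484        # m/s, m, s
r0T, r0d = 726.9e3, 645.8e3              # m

TBP_D = v**2 * Ts**2 * abs(r0T - r0d) / (lam * r0T * r0d)
# TBP_D is on the order of 75-80: large enough for the POSP to be accurate.
```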
4.2 Derivation of 2D spectrum of the scene data
The 2D spectrum of a complete scene data can be obtained by integrating the BPTRS over all the point target spectra [41, 42]:
As shown in (35), the phase term ψ (f _{a},f,t _{0T},r _{0T}) is a complicated function of the position of the target, i.e., of the variables t _{0T} and r _{0T}. To obtain an analytical solution of the integral, a linear approximation of ψ (f _{a},f,t _{0T},r _{0T}) with regard to the variables t _{0T} and r _{0T} has to be carried out.
First of all, even though both the first term and the second term of ψ (f _{a},f,t _{0T},r _{0T}) represent the RCM shift, the second term is a quadratic function of t _{0T}. Therefore, the second term is omitted in the linear approximation expression. The impact of this approximation will be given in the following section.
Then, expanding (35) in a Taylor series at t _{0T} = 0 and r _{0T} = r _{0} and keeping terms only up to the first order gives
where {r}_{0}=\sqrt{{\left({x}_{0}-{x}_{\mathrm{T}}\right)}^{2}+{z}_{\mathrm{T}}^{2}}, (x _{0},0) is the coordinate of the target located at the scene centre, and
where (t _{0T} = 0, r _{0T} = r _{0}) denotes the scene centre point. In (37), the first term is the space-invariant phase component, representing the space-invariant range modulation, RCM, and azimuth modulation. The last two terms in (37) represent the space-variant component of ψ _{L} (f _{a},f,t _{0T},r _{0T}). After some tedious algebraic manipulations, the corresponding result can be given as
where
We can see that there is a cross coupling between azimuth and range in (43). To remove the coupling, another linear approximation can be made to (43). Expanding ψ _{rL} (f _{a},f,0,r _{0}) in a Taylor series of f and keeping terms up to the first order, ψ _{rL} (f _{a},f,0,r _{0}) becomes
where
where λ = c/f _{0} is the wavelength. In (45), the first term is a pure Doppler phase term which represents the range-variant azimuth modulation, and the second term is the scaled range-frequency phase term which expresses the residual space-variant RCM component.
At last, letting r = r _{0T} − r _{0} and substituting (37) and (45) into (36) yields the 2D spectrum of the synchronized scene data
where
The exponential term in the integral of (48), as it is linear with respect to t _{0T} and r _{0T}, can be interpreted as a Fourier kernel and thus we can obtain
However, due to the expressions of ψ _{tL}(f _{a}, f, 0, r _{0}) and ψ _{rL}(f _{a}, f, 0, r _{0}), σ(ψ _{tL}, ψ _{rL}) should be viewed as the 2D scaled Fourier transform of the bistatic backscattering coefficient σ(t _{0T}, r _{0T}).
5. Analysis of approximation errors
Examining the derivation of (50), it can be seen that two linear approximations were made to the phase term ψ (f _{a},f,t _{0T},r _{0T}). To validate these operations, the approximation errors should be analysed in detail.
Firstly, as shown in (45), ψ _{rL} (f _{a},f,0,r _{0}) is approximated by a linear function with respect to the variable f. This implies that the second-order and higher-order terms with respect to the variable f are neglected. According to (43) and (45), the approximation error can be given as
For the case of narrowband signal (f ≪ f _{0}), it is clear that the quadratic term is the most dominant component. Therefore, only the quadratic term needs to be considered in the following analysis. Considering the worst case when the range frequency f reaches its maximum value, the approximation phase error is calculated and shown in Figure 3.
Using the parameters listed in Table 1, we get r _{0} = 726.9 km, r _{0d} = 645.8 km, and f = 25 MHz. The azimuth frequency f _{a} and the slant range deviation r = r _{0T} − r _{0} are used as independent variables in the simulation. From Figure 3, we can see that Δψ _{E1} ≪ π/8 is satisfied. This implies that the linear approximation of (45) is accurate enough.
Secondly, as shown in (37), ψ (f _{a},f,t _{0T},r _{0T}) is approximated by ψ _{L}(f _{a}, f, t _{0T}, r _{0T}) which is a linear function with respect to variables t _{0T} and r _{0T}. Therefore, the approximation error can be expressed as
Substituting (35) and (37) into (52) gives
In (53), the first term is due to the azimuth displacement error. The second and third terms represent the residual RCM error and may result in a range displacement error. The last term is the residual coupling phase, which may result in defocusing.
From (53), the azimuth displacement error can be given as
From (35), the correct range position of the point target in the focused image can be given as
where vt _{0T} represents the azimuth coordinate of the point target, indicating the azimuth-variant RCM shift. However, after this approximation, the range position of the point target in the focused image will be
Thus, the range displacement error can be expressed as
We can see that a geometric distortion is introduced by this linear approximation. However, this distortion can be corrected by interpolation after image focusing [43]. Moreover, geometric correction is a necessary step for frequency-domain SAR imaging, and thus this process does not increase the computational load.
Therefore, only the last phase error term in (53) is considered here. Under the narrowband assumption, i.e. c/(f + f _{0}) ≈ λ, expanding it in a Taylor series at r _{0T} = r _{0} gives
In (58), the ratio of the second term to the first term is \frac{1}{2}\cdot \left|\frac{{r}_{0\mathrm{T}}-{r}_{0}}{{r}_{0\mathrm{d}}-{r}_{0}}\right|, which is smaller than 0.04 for the same parameters as before. Furthermore, according to the principle of the Taylor expansion, the higher the order of the term, the smaller the error value. Now, we can see that the dominant phase error is a quadratic function of f _{a} and r = r _{0T} − r _{0}. Thus, the phase error increases quickly when both f _{a} and r increase. If we use π/8 as the tolerable threshold, the constraint for this approximation can be given as
Using the same parameters as before, we get f _{a}·r ≤ 4.87 × 10^{5}. For example, if the scene width is assumed to be 10 km in the slant-range direction, i.e. max(r) = 5,000 m, the azimuth frequency needs to satisfy f _{a} ≤ 97.4 Hz.
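The π/8 constraint above is easy to evaluate for any swath width (a sketch; the limit 4.87 × 10^5 Hz·m is the value derived in the text):

```python
LIMIT = 4.87e5   # Hz*m, the pi/8 bound on f_a * r derived in the text

def max_azimuth_freq(swath_m):
    """Largest tolerable |f_a| for a slant-range swath of the given width,
    where the maximum deviation r from the reference range r_0 is half
    the swath."""
    return LIMIT / (swath_m / 2.0)
```

For the 10 km swath of the example, max_azimuth_freq(10e3) returns 97.4 Hz, matching the text; doubling the swath halves the tolerable azimuth frequency.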
However, from (34), we can see that f _{a} is restricted by \mathrm{rect}\left[\frac{{f}_{\mathrm{a}}-{f}_{\mathrm{DC}}}{{B}_{\mathrm{a}}}\right]. Moreover, from (30), we note that the Doppler centroid f _{DC} depends on the azimuth coordinate t _{0T} of the point target. Assuming that the scene width is W _{a} in the azimuth direction, the azimuth frequency of the synchronized scene data satisfies
Using the parameters listed in Table 1, we get B _{a} = 158.99 Hz. If we assume that W _{a} = 2,000 m, then max(f _{a}) = 458.02 Hz, which is beyond the restriction given above. This means that a compromise between f _{a} and r is needed to meet the requirement.
According to the discussion above, Figure 4 gives an example of such a compromise, in which max(f _{a}·r) = 4.5 × 10^{5}. The size of the corresponding scene block is about 1 km × 3 km (azimuth direction × slant-range direction).
If we intend to focus a larger scene, the synchronized data can be processed in small blocks in both the range and azimuth directions. For different range blocks, the corresponding reference slant range r _{0} is used. For different azimuth blocks, Doppler centroid correction (DCC) is applied by multiplying the synchronized echo with the DCC function in the 2D time domain. The DCC function can be given as
where
In (62), t _{0TN} is the azimuth coordinate of the centre point in the block N. According to the shifting theorem of FT, the corresponding 2D spectrum can be given as
In this way, the 2D spectrum of the block N will satisfy the constraint denoted by (59). However, the blocks have to overlap, so the efficiency of the algorithm decreases as the number of blocks increases.
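The per-block DCC step can be sketched as follows. The block-centre Doppler shift f_aN is taken here as a known input, since its exact expression (62) depends on the block-centre azimuth coordinate t_0TN; the sign convention below is an assumption:

```python
import numpy as np

def apply_dcc(block, t, f_aN):
    """Doppler centroid correction for one azimuth block: multiply by
    exp(-j*2*pi*f_aN*t) in the time domain to shift the block's Doppler
    centroid towards zero (a sketch of the H_DCC multiplication)."""
    return block * np.exp(-1j * 2 * np.pi * f_aN * t)

# Demonstration on a pure azimuth tone at 120 Hz (PRF 1 kHz):
prf = 1000.0
t = np.arange(1024) / prf
tone = np.exp(1j * 2 * np.pi * 120.0 * t)
shifted = apply_dcc(tone, t, 120.0)
```

By the shifting theorem of the FT, the spectrum of `shifted` peaks at zero Doppler, which is how each block's spectrum can be brought inside the constraint (59).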
6. Imaging process using 2DISFT
6.1 Analysis of 2D spectrum
As pointed out above, σ(ψ _{tL},ψ _{rL}) can be interpreted as a 2D scaled Fourier transform of the bistatic backscattering coefficient σ(t _{0T},r _{0T}); thus, we rewrite σ(ψ _{tL},ψ _{rL}) as
It can be seen that both the azimuth frequency and the range frequency are scaled by coefficients. In particular, the azimuth frequency is scaled by r _{0d}/(r _{0d} − r _{0}). Meanwhile, from (31), we can see that the Doppler bandwidth can be rewritten as
where {B}_{\mathrm{a}}^{\prime}=\frac{{v}^{2}}{\lambda {r}_{0\mathrm{T}}}\cdot {T}_{\mathrm{s}} is the Doppler bandwidth for the traditional bistatic SAR with a fixed receiver configuration. In this case, if we apply the inverse Fourier transform along the azimuth direction directly, the azimuth resolution of the resulting image will be {\rho}_{\mathrm{a}}=\left|\frac{{r}_{0\mathrm{d}}}{{r}_{0\mathrm{d}}-{r}_{0}}\right|\cdot \frac{v}{{B}_{\mathrm{a}}^{\prime}}. Compared with the azimuth resolution of the traditional bistatic SAR with a fixed receiver configuration, which is {\rho}_{\mathrm{a}0}=v/{B}_{\mathrm{a}}^{\prime}, the resulting azimuth resolution is scaled by r _{0d}/(r _{0d} − r _{0}). When the magnitude of r _{0d}/(r _{0d} − r _{0}) is many times larger than 1, the resulting image would suffer a very poor azimuth resolution. Using the same parameters as before, we get ρ _{a0} = 5.31 m and |r _{0d}/(r _{0d} − r _{0})| = 7.96, and then ρ _{a} ≈ 42.3 m. Similarly, the range frequency suffers from the same problem, though less seriously than the azimuth one. To circumvent this limitation, we propose to apply the ISFT in both the range and azimuth directions, as presented in the next section.
6.2 Principle of ISFT
The inverse scaled Fourier transform, abbreviated as ISFT, was introduced to focus monostatic SAR data in [41] and was compared with chirp scaling method in [42]. The ISFT was also applied for bistatic SAR data imaging [32, 40].
The principle of the ISFT is shown below [41]. As shown in Figure 5, assuming s(t) ↔ S(f) is a Fourier transform pair, if the input frequency-domain signal is S(a · f), then after the ISFT implementation the output time-domain signal will be \frac{1}{\left|a\right|}\cdot s\left(t\right). The digital implementation of the ISFT can be found in [42].
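A direct-sum sketch of this principle: the input samples live on the nominal frequency grid f, but the inverse kernel uses the scaled frequency a·f, so a spectrum S(a·f) comes back as s(t)/|a| without any interpolation (this is a naive O(N²) illustration, not the efficient digital implementation of [42]). The Gaussian pair s(t) = exp(−πt²) ↔ S(f) = exp(−πf²) serves as a check:

```python
import numpy as np

def isft(samples, f, a, t):
    """Inverse scaled Fourier transform (direct-sum sketch): given samples
    of S(a*f) on the nominal grid f, integrate against exp(j*2*pi*a*f*t) df.
    For a Fourier pair s <-> S this returns s(t)/|a|."""
    df = f[1] - f[0]
    kernel = np.exp(1j * 2 * np.pi * a * np.outer(t, f))
    return kernel @ samples * df

a = 2.0
f = np.arange(-8.0, 8.0, 0.01)           # nominal frequency grid
S_scaled = np.exp(-np.pi * (a * f)**2)   # available data: S(a*f) for the Gaussian
t = np.array([0.0, 0.3, 0.7])
out = isft(S_scaled, f, a, t)            # should equal exp(-pi*t^2) / |a|
```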
6.3 Imaging process
From (50), we can see that the 2D spectrum of the synchronized scene data can be viewed as the consequence of a multiplication of the spaceinvariant phase term H _{0}(f _{a},f) and the 2D scaled Fourier transform of the 'brightness’ of the point target σ(ψ _{tL}(f _{a}, f, 0, r _{0}), ψ _{rL}(f _{a}, f, 0, r _{0})). Meanwhile, the imaging can be interpreted as the process of obtaining the backscattering coefficient σ(t _{0T},r _{0T}) from the 2D spectrum H(f _{a},f). Therefore, in this paper, a 2DISFT imaging algorithm is proposed to obtain the focused image. The implementation steps are listed below, and the block diagram of proposed algorithm is shown in Figure 6.

1.
According to the constriction (59), the whole synchronized scene data is divided into smaller blocks.

2.
DCC was performed by multiplying the DCC function in 2D timedomain for different azimuth blocks.
{s}_{\mathrm{DCC}}\left(t,\tau ,{t}_{0\mathrm{T}},{r}_{0\mathrm{T}}\right)=s\left(t,\tau ,{t}_{0\mathrm{T}},{r}_{0\mathrm{T}}\right)\xb7{H}_{\mathrm{DCC}}\left(t\right).(66)

3.
The 2D timedomain data was transformed into 2D frequency domain using Fourier transform.

4.
Bulk RCM correction (RCMC) and 2D compression by multiplying the reference function (RF). As is pointed out, H _{0}(f _{a}, f) represents the spaceinvariant range modulation, RCM and azimuth modulation. Therefore, RF should be given as the conjugate of H _{0}(f _{a},f). However, to keep the phase which can be used in the interferometric application, RF can be expressed as
\begin{array}{ll}\phantom{\rule{.5em}{0ex}}{H}_{\mathrm{RF}}\left({f}_{\mathrm{a}},f\right)& ={H}_{0}^{\ast}({f}_{\mathrm{a}},f)\xb7exp\left[j2\pi {f}_{0}\xb7\frac{{r}_{0}+{r}_{0\mathrm{R}}\left({r}_{0}\right){r}_{0\mathrm{d}}}{c}\right]\\ =\mathrm{rect}\left[\frac{f}{k\xb7{T}_{\mathrm{p}}}\right]\xb7exp\left[\mathit{j\pi}\frac{{f}^{2}}{k}\right]\\ \phantom{\rule{1em}{0ex}}\xb7exp\left[j2\mathit{\pi f}\xb7\frac{{r}_{0}+{r}_{0\mathrm{R}}\left({r}_{0}\right){r}_{0\mathrm{d}}}{c}\right]\\ \phantom{\rule{1em}{0ex}}\xb7exp\left[\mathit{j\pi}\xb7\frac{{f}_{\mathrm{a}}^{2}c{r}_{0}{r}_{0\mathrm{d}}}{{v}^{2}\left({r}_{0\mathrm{d}}{r}_{0}\right)\left(f+{f}_{0}\right)}\right].\end{array}(67)After this implementation, the remaining signal can be given by
\begin{array}{ll}\phantom{\rule{.5em}{0ex}}{H}_{1}\left({f}_{\mathrm{a}},f\right)& =H({f}_{\mathrm{a}},f)\xb7{H}_{\mathrm{RF}}({f}_{\mathrm{a}},f)\\ =exp\left[j2\pi {f}_{0}\xb7\frac{{r}_{0}+{r}_{0\mathrm{R}}\left({r}_{0}\right){r}_{0\mathrm{d}}}{c}\right]\\ \phantom{\rule{1em}{0ex}}\xb7\sigma \left[\frac{{r}_{0\mathrm{d}}}{{r}_{0\mathrm{d}}{r}_{0}}\xb7{f}_{\mathrm{a}},{\psi}_{\mathrm{rL}1}\left({f}_{\mathrm{a}},0,{r}_{0}\right)+{\psi}_{\mathrm{rL}2}\left({f}_{\mathrm{a}},0,{r}_{0}\right)\xb7f\right].\end{array}(68) 
5.
ISFT with regard to the range frequency, which can be expressed as
\begin{array}{ll}\phantom{\rule{1em}{0ex}}{H}_{2}\left({f}_{\mathrm{a}},r\right)=& {\displaystyle \int {H}_{1}\left({f}_{\mathrm{a}},f\right)}exp\left[j2\pi {E}_{\mathrm{RFS}}\left({f}_{\mathrm{a}},0,{r}_{0}\right)\xb7{f}_{r}\xb7r\right]\\ \times d\left({E}_{\mathrm{RFS}}\right({f}_{\mathrm{a}},0,{r}_{0})\xb7{f}_{\mathrm{r}})=\sigma \left[\frac{{r}_{0\mathrm{d}}}{{r}_{0\mathrm{d}}{r}_{0}}\xb7{f}_{\mathrm{a}},r\right]\\ \xb7exp\left[j2\pi {\psi}_{\mathrm{rL}1}\left({f}_{\mathrm{a}},0,{r}_{0}\right)r\right]\\ \xb7exp\left[j2\pi {f}_{0}\xb7\frac{{r}_{0}+{r}_{0\mathrm{R}}\left({r}_{0}\right){r}_{0\mathrm{d}}}{c}\right],\end{array}(69)where
\begin{array}{ll}\phantom{\rule{.5em}{0ex}}{E}_{\mathrm{RFS}}\left({f}_{\mathrm{a}},{r}_{0}\right)& =\frac{c}{1+M}\xb7{\psi}_{\mathrm{rL}2}\left({f}_{\mathrm{a}},0,{r}_{0}\right)\\ =1+\frac{c}{1+M}\xb7\frac{{f}_{\mathrm{a}}^{2}\lambda {r}_{0\mathrm{d}}^{2}}{2{v}^{2}{\left({r}_{0\mathrm{d}}{r}_{0}\right)}^{2}{f}_{0}}.\end{array}(70)In the implementation of (69), it is worth noting that f _{r} is the frequency variable with respect to the variable r, while f is the frequency variable with respect to the fasttime variable τ. The relation between f _{r} and f is {f}_{\mathrm{r}}=f\xb7\frac{1+M}{c}.

6. Residual azimuth compression by multiplying the range-variant phase function in the range-time/azimuth-frequency domain:
$$
\begin{aligned}
H_{3}\left(f_{\mathrm{a}},r\right) &= H_{2}\left(f_{\mathrm{a}},r\right)\cdot\exp\left[j2\pi\,\psi_{\mathrm{rL}1}\left(f_{\mathrm{a}},0,r_{0}\right)r\right]\\
&= \sigma\left[\frac{r_{0\mathrm{d}}}{r_{0\mathrm{d}}-r_{0}}\cdot f_{\mathrm{a}},\;r\right]\cdot\exp\left[j2\pi f_{0}\cdot\frac{r_{0}+r_{0\mathrm{R}}\left(r_{0}\right)-r_{0\mathrm{d}}}{c}\right].
\end{aligned}
\tag{71}
$$
7. ISFT with regard to the azimuth frequency $f_{\mathrm{a}}$ to remove the azimuth scaling:
$$
\begin{aligned}
s_{\mathrm{out}}\left(t_{0\mathrm{T}},r\right) &= \int H_{3}\left(f_{\mathrm{a}},r\right)\exp\left[j2\pi\frac{r_{0\mathrm{d}}}{r_{0\mathrm{d}}-r_{0}}\cdot f_{\mathrm{a}}\cdot t_{0\mathrm{T}}\right]\mathrm{d}\left(\frac{r_{0\mathrm{d}}}{r_{0\mathrm{d}}-r_{0}}\cdot f_{\mathrm{a}}\right)\\
&= \sigma\left(t_{0\mathrm{T}},r\right)\cdot\exp\left[j2\pi f_{0}\cdot\frac{r_{0}+r_{0\mathrm{R}}\left(r_{0}\right)-r_{0\mathrm{d}}}{c}\right].
\end{aligned}
\tag{72}
$$
8. According to (54) and (57), applying geometric correction by interpolation.

9. Stitching the divided blocks into a whole SAR image.
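The core of steps 5 and 7 above is the inverse scaled Fourier transform: an inverse transform evaluated on a frequency axis stretched by a range- or azimuth-dependent factor. The following one-dimensional sketch (direct-sum evaluation, illustrative parameters only, not the paper's implementation) shows how the scaling factor relocates the focused peak:

```python
import numpy as np

def isft(H, f, scale, r):
    """Inverse scaled Fourier transform: integrate H(f) e^{j 2*pi*scale*f*r} d(scale*f)."""
    df = f[1] - f[0]
    # direct evaluation of the scaled inverse transform on the output grid r
    return np.array([np.sum(H * np.exp(2j * np.pi * scale * f * ri)) * scale * df
                     for ri in r])

f = np.linspace(-50.0, 50.0, 1001)       # frequency grid [Hz]
t0 = 0.2                                  # true delay of the pulse [s]
H = np.exp(-2j * np.pi * f * t0)          # spectrum of an impulse at t0
E = 1.5                                   # scaling factor (plays the role of E_RFS)

r = np.linspace(-0.5, 0.5, 2001)          # output grid
s = isft(H, f, E, r)
r_peak = r[np.argmax(np.abs(s))]          # peak appears near t0 / E
```

The peak of |s| lands at r ≈ t0/E, i.e. the scaling factor compresses the output axis, mirroring the role played by the scaling terms in (69) and (72).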
7. Simulations
To validate the proposed algorithm, simulations are carried out. The simulation parameters are listed in Table 1. The transmitter's parameters refer to TerraSAR-X, and the receiver's parameters refer to a stratospheric aerostat.
From the knowledge of synchronization, the linear component dominates the time synchronization error [9]. For the phase synchronization error, both the fixed carrier frequency offset and the phase noise are considered [17, 22]. The slope of the time synchronization error and the corresponding parameters of the phase synchronization error are therefore listed in Table 1, where 1 ppm means that the fixed carrier frequency offset is ∆f = f _{c} · 10^{−6}, and the Allan variance σ(τ = 1 s) = 1 × 10^{−11} can be regarded as representative of the ultra-stable oscillators (USO) of current spaceborne SAR systems [10].
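As a rough illustration of this error model (all numerical values below are assumptions for the sketch, not the entries of Table 1), the fixed carrier frequency offset produces a linear phase ramp across the aperture, while the time error grows linearly with slow time:

```python
import numpy as np

# Sketch of the synchronization error model (illustrative values only)
f_c  = 9.65e9             # assumed X-band carrier frequency [Hz]
ppm  = 1e-6               # 1 ppm fixed carrier frequency offset
df   = f_c * ppm          # fixed frequency offset: df = f_c * 1e-6 [Hz]
T    = 1.0                # assumed synthetic aperture time [s]
prf  = 3000.0             # assumed pulse repetition frequency [Hz]

t = np.arange(0.0, T, 1.0 / prf)         # slow-time axis
slope = 1e-9                              # assumed slope of the linear time error [s/s]
time_err  = slope * t                     # dominant linear time synchronization error
phase_err = 2.0 * np.pi * df * t          # phase ramp caused by the fixed offset
```

A linear phase ramp of this kind shifts the whole Doppler spectrum by ∆f, which is why the fixed offset must be compensated before azimuth focusing.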
7.1 Echo characteristics before and after synchronization
At first, the point target located at the scene centre is used to show the characteristics of the echo data before and after the proposed synchronization algorithm is performed. As shown in Figure 7, before synchronization, the echo support domain is skewed both in the time domain and in the frequency domain, and the 2D spectrum exhibits an azimuth displacement. After synchronization, the echo support domain becomes straight, which benefits the subsequent focusing process. However, it can be seen from Figure 7d that the azimuth bandwidth is r _{0d}/(r _{0d} − r _{0}) times narrower than before synchronization, which verifies the analysis in the previous section.
7.2 Imaging simulation for small scene
When the scene size satisfies the constraint condition (59), the proposed algorithm can be applied without block division. As shown earlier, a reasonable scene size is 1 km × 3 km (azimuth direction × slant-range direction). Because the incidence angle of the transmitter is 45°, the scene width along the X-axis (the ground-range direction) is 3 km/cos(45°) ≈ 4.24 km. Here, an imaged scene of size 4 km × 1 km (X-axis × Y-axis) is assumed. According to the geometry depicted in Figure 1 and the parameters listed in Table 1, the coordinate of the point target (Target 5) located at the scene centre is (97.9796 km, 0, 0). To make the following discussion clearer, as shown in Figure 8, a new XY coordinate frame is created with its origin located at (97.9796 km, 0, 0) in the old frame. Nine point targets are placed in the scene on a 3 × 3 grid. The proposed algorithm is then used to process the simulated echo. The focused image after geometric correction is shown in Figure 9.
It can be seen from Figure 9 that all targets are focused precisely and located at the right positions. To further demonstrate the performance of the proposed algorithm, the profiles of targets 1, 5 and 9 are shown in Figure 10, and the quality parameters of the imaging result are listed in Table 2.
As shown in Figure 10, the profiles of targets 1, 5 and 9 are very close to the ideal results, which implies that these three targets are well focused by the proposed algorithm. It can also be seen from Figure 10a,c that the first side lobes are slightly asymmetric for the two edge points. The reason is the approximation error shown in (58).
The resolutions of bistatic SAR are 2D dependent [44]. Therefore, to demonstrate the performance of the proposed algorithm, the resolution before geometric correction is presented in Table 2. According to the parameters listed in Table 1, the theoretical value of both the azimuth resolution and the slant-range resolution is 5.31 m. The theoretical peak side lobe ratio (PSLR) is −13.26 dB, and the theoretical integrated side lobe ratio (ISLR) is −9.72 dB [40].
From Table 2, both the azimuth resolution and the range resolution deviate by less than 0.08 m from the theoretical value. The maximum deviation of the measured azimuth PSLR from the theoretical value is less than 0.49 dB, and that of the measured range PSLR is less than 0.14 dB. For the ISLR, the maximum deviations are less than 0.48 dB and 0.65 dB along the azimuth and range directions, respectively.
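The theoretical PSLR of −13.26 dB quoted above is the first-sidelobe level of an unweighted sinc-shaped compressed pulse, which can be verified numerically:

```python
import numpy as np

# Measure the PSLR of an ideal (unweighted) compressed pulse, i.e. a sinc.
x = np.linspace(-10.0, 10.0, 200001)
p = np.abs(np.sinc(x))                    # np.sinc(x) = sin(pi*x)/(pi*x)

main = p.max()                            # mainlobe peak (= 1 at x = 0)
# the first sidelobe of the sinc lies near |x| = 1.43
side = p[(np.abs(x) > 1.0) & (np.abs(x) < 2.0)].max()
pslr_db = 20.0 * np.log10(side / main)    # approximately -13.26 dB
```

A measured PSLR close to this value therefore indicates that the compressed pulse is essentially an unweighted sinc, i.e. no significant defocusing is introduced by the processing.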
7.3 Imaging simulation for large scene
According to (60), to avoid Doppler ambiguity, the Doppler frequency should satisfy max(f _{a}) < PRF/2. In other words, the azimuth width of the imaged scene should satisfy

$$W_{\mathrm{a}}<\frac{\lambda r_{0\mathrm{d}}}{v}\cdot\left(\mathrm{PRF}-B_{\mathrm{a}}\right).$$

Based on the parameters listed in Table 1, we get W _{a} < 4.86 km. To extend the azimuth width of the target scene, a higher PRF must be applied.
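Evaluating this bound is a one-line computation. The parameter values below are placeholders chosen for illustration (Table 1 is not reproduced here), so the result does not exactly match the paper's 4.86 km figure:

```python
# Unambiguous azimuth scene width: W_a < (lambda * r0d / v) * (PRF - B_a).
# All values below are assumptions for illustration, not the paper's Table 1.
wavelength = 0.031      # assumed X-band wavelength [m]
r0d        = 600e3      # assumed direct-path range [m]
v          = 7600.0     # assumed transmitter velocity [m/s]
prf        = 3800.0     # assumed pulse repetition frequency [Hz]
B_a        = 1800.0     # assumed azimuth Doppler bandwidth [Hz]

W_a_max = wavelength * r0d / v * (prf - B_a)   # maximum azimuth width [m]
```

The bound scales linearly with PRF − B_a, which is why raising the PRF directly widens the azimuth scene that can be imaged without ambiguity.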
If the scene size exceeds the constraint condition (59), the scene should be divided into small blocks in the imaging process. Block processing along the range direction is simple, since only the reference slant range r _{0} is replaced for different range blocks. However, as stated before, DCC has to be applied for different azimuth blocks. Assuming that the azimuth coordinate of a block is 1,000 m, the Doppler centroid of this block is 378.53 Hz. The 2D spectra of the point target located at (0, 1,000 m) before and after DCC are shown in Figure 11.
It can be seen from Figure 11 that the Doppler frequency is shifted to baseband using DCC. Therefore, the same imaging process can be applied for this block.
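The DCC operation is simply a complex mixing of the azimuth signal by the block's Doppler centroid. A toy demonstration (assumed PRF, synthetic single-tone azimuth signal standing in for the block's data):

```python
import numpy as np

# Doppler centroid compensation (DCC) sketch: remove the block's Doppler
# centroid so the azimuth spectrum sits at baseband.
prf  = 3000.0                          # assumed PRF [Hz]
f_dc = 378.53                          # Doppler centroid of the block [Hz]
t = np.arange(2048) / prf              # slow-time axis

sig = np.exp(2j * np.pi * f_dc * t)    # stand-in for the block's azimuth signal
sig_dcc = sig * np.exp(-2j * np.pi * f_dc * t)   # DCC: mix down by f_dc

freqs = np.fft.fftfreq(t.size, 1.0 / prf)
peak_before = freqs[np.argmax(np.abs(np.fft.fft(sig)))]       # near f_dc
peak_after  = freqs[np.argmax(np.abs(np.fft.fft(sig_dcc)))]   # at 0 Hz
```

After the mixing step, the spectral peak sits at baseband, so the same imaging process can be reused unchanged for every azimuth block.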
An imaged scene of size 8 km × 4 km (X-axis × Y-axis) is then assumed. According to the constraint condition (59), the whole scene is divided into 3 × 5 (X-axis × Y-axis) blocks, allowing small overlaps between blocks. Three point targets are placed in this scene, at (−4,000 m, −2,000 m), (0, 0) and (4,000 m, 2,000 m). Using the proposed algorithm to process the data, the final focused image and zoomed contour graphs of these three targets are shown in Figure 12.
It can be seen from Figure 12 that the three targets are well focused and located at the correct positions, which demonstrates the capability of the proposed algorithm for large-scene imaging.
8. Conclusion
Bistatic spaceborne/stratospheric SAR has the potential to play an important role in future missions. The crucial steps for such a system are synchronization and imaging. Quite a number of studies have been published in these two fields; however, the combined treatment of synchronization and imaging has not been developed. This paper proposes an integrative synchronization and imaging approach for a particular configuration: bistatic spaceborne/stratospheric SAR with a fixed receiver.
In published methods, the direct-path signal is a common choice for the synchronization of bistatic SAR systems, since it offers high SNR and clean phase. However, it is still very difficult to extract synchronization errors with high precision from the direct-path signal if no other auxiliary data are available. In this paper, as a novel idea, the time delays and peak phases of the direct-path signal are utilized to directly compensate the corresponding components of the reflected signal. Time and phase synchronization errors can be completely removed using the presented method. Meanwhile, as the cost of synchronization, the system impulse response of the synchronized echo becomes quite different from that of general bistatic SAR. To focus this particular synchronized data, its 2D spectrum is derived under linear approximations, and a frequency-domain imaging algorithm is then proposed. The theoretical analysis and simulation results show that the proposed approach provides accurate time and phase synchronization and well-focused images.
Based on the presented method, we will continue our research on bistatic spaceborne/stratospheric SAR. The next research steps are manifold: one is the study of synchronization and imaging for the general bistatic spaceborne/stratospheric SAR configuration; the other is the study of interferometric applications of such a bistatic system.
References
Cherniakov M, Saini R, Zuo R, Antoniou M: Space-surface bistatic synthetic aperture radar with global navigation satellite system transmitter of opportunity - experimental results. IET Radar Sonar Navigat. 2007, 1(6):447–458. 10.1049/iet-rsn:20060172
Rodriguez-Cassola M, Baumgartner SV, Krieger G, Moreira A: Bistatic TerraSAR-X/F-SAR spaceborne-airborne SAR experiment: description, data processing, and results. IEEE Trans. Geosci. Remote Sens. 2010, 48(2):781–794.
Walterscheid I, Espeter T, Brenner AR, Klare J, Ender JH, Nies H, Wang R, Loffeld O: Bistatic SAR experiments with PAMIR and TerraSAR-X: setup, processing and image results. IEEE Trans. Geosci. Remote Sens. 2010, 48(8):3268–3279.
Sanz-Marcos J, Lopez-Dekker P, Mallorqui JJ, Aguasca A, Prats P: SABRINA: a SAR bistatic receiver for interferometric applications. IEEE Geosci. Remote Sens. Lett. 2007, 4(2):307–311.
Goh AS, Preiss M, Stacy NJS, Gray DA: Bistatic SAR experiment with the Ingara imaging radar. IET Radar Sonar Navigat. 2010, 4(3):426–437. 10.1049/iet-rsn.2009.0103
Behner F, Reuter S: HITCHHIKER: hybrid bistatic high resolution SAR experiment using a stationary receiver and TerraSAR-X. In EUSAR Conference. Aachen; 2010.
Krieger G, Moreira A: Spaceborne bi- and multistatic SAR: potential and challenges. IEE Proc. Radar Sonar Navigat. 2006, 153(3):184–198. 10.1049/ip-rsn:20045111
Moccia A, Salzillo G, D'Errico M, Rufino G, Alberti G: Performance of spaceborne bistatic synthetic aperture radar. IEEE Trans. Aerosp. Electron. Syst. 2005, 41(4):1383–1395. 10.1109/TAES.2005.1561891
Weiß M: Synchronisation of bistatic radar systems. In IEEE Geoscience and Remote Sensing Symposium. Anchorage; 2004.
Krieger G: Impact of oscillator noise in bistatic and multistatic SAR. IEEE Geosci. Remote Sens. Lett. 2006, 3(3):424–428. 10.1109/LGRS.2006.874164
Eineder M: Oscillator clock drift compensation in bistatic interferometric SAR. In IEEE Geoscience and Remote Sensing Symposium. Toulouse; 2003.
Krieger G, De Zan F: Relativistic effects in bistatic SAR processing and system synchronization. In EUSAR Conference. Oberpfaffenhofen; 2012.
Wang W: GPS-based time and phase synchronization processing for distributed SAR. IEEE Trans. Aerosp. Electron. Syst. 2009, 45(3):1040–1051.
Krieger G, Moreira A, Fiedler H, Hajnsek I, Werner M, Younis M, Zink M: TanDEM-X: a satellite formation for high-resolution SAR interferometry. IEEE Trans. Geosci. Remote Sens. 2007, 45(11):3317–3341.
Tian W, Hu S, Zeng T: A frequency synchronization scheme based on PLL for BiSAR and experiment result. In 9th International Conference on Signal Processing. Beijing; 2008.
Younis M, Metzig R, Krieger G: Performance prediction of a phase synchronization link for bistatic interferometric SAR system. IEEE Geosci. Remote Sens. Lett. 2006, 3(3):429–433. 10.1109/LGRS.2006.874163
Wang W: Approach of adaptive synchronization for bistatic SAR real-time imaging. IEEE Trans. Geosci. Remote Sens. 2007, 45(9):2695–2700.
He Z, He F, Li J, Huang H, Dong Z, Liang D: Echo-domain phase synchronization algorithm for bistatic SAR in alternating bistatic/ping-pong mode. IEEE Geosci. Remote Sens. Lett. 2012, 9(4):604–608.
Saini R, Zuo R, Cherniakov M: Signal synchronization in SS-BSAR based on GLONASS satellite emission. In IET International Conference on Radar Systems. Edinburgh; 2007.
Saini R, Zuo R, Cherniakov M: Problem of signal synchronization in space-surface bistatic synthetic aperture radar based on global navigation satellite emissions - experimental results. IET Radar Sonar Navigat. 2010, 4(1):110–125. 10.1049/iet-rsn.2008.0121
Espeter T, Walterscheid I, Klare J, Gierull C, Brenner AR, Ender JH, Loffeld O: Progress of hybrid bistatic SAR: synchronization experiments and first imaging results. In EUSAR Conference. Friedrichshafen; 2008.
Lopez-Dekker P, Mallorqui JJ, Serra-Morales P, Sanz-Marcos J: Phase synchronization and Doppler centroid estimation in fixed receiver bistatic SAR systems. IEEE Trans. Geosci. Remote Sens. 2008, 46(11):3459–3471.
Duque S, Lopez-Dekker P, Mallorqui JJ: Single-pass bistatic SAR interferometry using fixed-receiver configurations: theory and experimental validation. IEEE Trans. Geosci. Remote Sens. 2010, 48(6):2740–2749.
Rutman J, Walls FL: Characterization of frequency stability in precision frequency sources. Proc. IEEE 1991, 79(7):952–960. 10.1109/5.84972
Ulander LMH, Hellsten H, Stenstrom G: Synthetic-aperture radar processing using fast factorized back-projection. IEEE Trans. Aerosp. Electron. Syst. 2003, 39(3):760–776. 10.1109/TAES.2003.1238734
Ding Y, Munson DC Jr: A fast back-projection algorithm for bistatic SAR imaging. In ICIP 2002, 2:449–452.
Hu C, Zeng T, Long T, Chen J: Fast backprojection algorithm for bistatic SAR with parallel trajectory. In EUSAR Conference. Germany; 2006.
Walterscheid I, Espeter T, Brenner AR, Klare J, Ender JH, Nies H, Wang R, Loffeld O: Bistatic SAR processing and experiments. IEEE Trans. Geosci. Remote Sens. 2006, 44(10):2710–2717.
Loffeld O, Nies H, Peters V, Knedlik S: Models and useful relations for bistatic SAR processing. IEEE Trans. Geosci. Remote Sens. 2004, 42(10):2031–2038.
Wang R, Loffeld O, Nies H, Knedlik S, Ender JHG: A bistatic point target reference spectrum for general bistatic SAR processing. IEEE Geosci. Remote Sens. Lett. 2008, 5(3):517–521.
Neo YL, Wong FH, Cumming IG: A two-dimensional spectrum for bistatic SAR processing using series reversion. IEEE Geosci. Remote Sens. Lett. 2007, 4(1):93–96.
Natroshvili K, Loffeld O, Nies H, Medrano-Ortiz A, Knedlik S: Focusing of general bistatic SAR configuration data with 2D inverse scaled FFT. IEEE Trans. Geosci. Remote Sens. 2006, 44(10):2718–2727.
Wang R, Loffeld O, Nies H, Knedlik S, Ender JHG: Chirp scaling algorithm for the bistatic SAR data in the constant-offset configuration. IEEE Trans. Geosci. Remote Sens. 2009, 47(3):952–964.
Wong FH, Cumming IG, Neo YL: Focusing bistatic SAR data using the nonlinear chirp scaling algorithm. IEEE Trans. Geosci. Remote Sens. 2008, 46(9):2493–2505.
Neo YL, Wong FH, Cumming IG: Processing of azimuth-invariant bistatic SAR data using the range Doppler algorithm. IEEE Trans. Geosci. Remote Sens. 2008, 46(1):14–21.
Qiu X, Hu D, Ding C: An improved NLCS algorithm with capability analysis for one-stationary BiSAR. IEEE Trans. Geosci. Remote Sens. 2008, 46(10):3179–3186.
Zeng T, Liu F, Hu C, Long T: Image formation algorithm for asymmetric bistatic SAR systems with a fixed receiver. IEEE Trans. Geosci. Remote Sens. 2012, 50(11):4684–4698.
Antoniou M, Saini R, Cherniakov M: Results of a space-surface bistatic SAR image formation algorithm. IEEE Trans. Geosci. Remote Sens. 2007, 45(11):3359–3371.
Wang R, Loffeld O, Nies H, Ul-Ann Q, Medrano-Ortiz A, Knedlik S, Samarah A: Analysis and processing of spaceborne/airborne bistatic SAR data. In IEEE Geoscience and Remote Sensing Symposium. Boston; 2008.
Wang R, Loffeld O, Nies H, Knedlik S, Ul-Ann Q, Medrano-Ortiz A: Frequency-domain bistatic SAR processing for spaceborne/airborne configuration. IEEE Trans. Aerosp. Electron. Syst. 2010, 46(3):1329–1345.
Loffeld O, Hein A: SAR processing by scaled inverse Fourier transformation. In EUSAR Conference. 1996.
Loffeld O, Hein A, Schneider F: SAR focusing: scaled inverse Fourier transformation and chirp scaling. In IEEE Geoscience and Remote Sensing Symposium. Seattle; 1998.
Li Y, Zhu D: The geometric-distortion correction algorithm for circular-scanning SAR imaging. IEEE Geosci. Remote Sens. Lett. 2010, 7(2):376–380.
Zeng T, Cherniakov M, Long T: Generalized approach to resolution analysis in BSAR. IEEE Trans. Aerosp. Electron. Syst. 2005, 41(2):461–474. 10.1109/TAES.2005.1468741
Acknowledgements
The authors thank Dr. Zeng Zhanfan for his proofreading and helpful discussions. We also thank the reviewers for their very patient work and kind suggestions, which helped correct many mistakes and improve the readability and quality of this paper.
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Zhang, Q., Chang, W. & Li, X. An integrative synchronization and imaging approach for bistatic spaceborne/stratospheric SAR with a fixed receiver. EURASIP J. Adv. Signal Process. 2013, 165 (2013). https://doi.org/10.1186/168761802013165