
An integrative synchronization and imaging approach for bistatic spaceborne/stratospheric SAR with a fixed receiver

Abstract

Bistatic spaceborne/stratospheric synthetic aperture radar (SAR) with a fixed receiver is a novel hybrid bistatic SAR system, in which a spaceborne SAR serves as the transmitter of opportunity, while a fixed receiver is mounted on a stratospheric platform. This paper presents an integrative synchronization and imaging approach for this particular system. First, a novel synchronization method using the direct-path signal, which can be collected by a dedicated antenna, is proposed and applied. The synchronization error can be completely removed using the proposed method. However, as the cost of synchronization, the range history of the synchronized echo becomes quite different from that of general bistatic SAR data. To focus this particular synchronized data, its 2-D spectrum is derived under linear approximations, and then a frequency-domain imaging algorithm using the two-dimensional inverse scaled Fourier transform (2-DISFT) is proposed. Finally, the proposed integrative synchronization and imaging algorithm is verified by simulations.

1. Introduction

Bistatic synthetic aperture radar (SAR) has been an active research area in the last decade; in particular, configurations based on a spaceborne illuminator appear especially attractive. Several bistatic SAR experiments with spaceborne illuminators have been conducted by numerous organizations. In these experiments, the receiver is either spaceborne, airborne, or stationary on the ground. The promising imagery results show the great potential and capabilities of bistatic SAR as an innovative imaging system [1–6].

This paper discusses a particular sub-class of the spaceborne hybrid configuration, where a spaceborne SAR, e.g. TerraSAR-X, serves as the transmitter of opportunity, while a fixed receiver is mounted on a stratospheric platform, e.g. a stratospheric aerostat. Its advantages include low cost, reduced vulnerability to countermeasures, high operational flexibility and a wide observation scene [7, 8]. Such a system is a good tool for high-resolution imaging and high-precision height information extraction, with potential for future missions in both civil and military applications.

Time and phase synchronization is essential and foremost for such a bistatic system, since independent local oscillators (LO) are used, and a common time reference between the transmitter and the receiver is missing. The synchronization errors result not only in range cell migration (RCM) errors but also in a distortion of the azimuth-dependent phase history [9–12]. This implies that synchronization errors will degrade the quality of the bistatic SAR images, and therefore, corresponding compensation must be implemented.

The synchronization scheme has been well studied for cooperative bistatic SAR systems, in which a dedicated synchronization link can be constructed [13–17]. An echo-domain phase synchronization approach using the correlation of bistatic echoes has been proposed for bistatic SAR in the alternating bistatic/ping-pong mode [18]. For non-cooperative bistatic SAR systems, synchronization is commonly achieved using the direct-path signal [19–23].

Assuming that the direct-path channel and the reflected-path channel are previously balanced and that a common LO is used for both channels, the reflected signal and the direct-path signal will be contaminated by the same synchronization errors. However, without any other auxiliary data, it is quite difficult to estimate the phase synchronization error with high precision [22]. On the one hand, the accuracy of the satellite's trajectory is not sufficient for separating the phase synchronization error from the nominal phase caused by the direct-path range history. On the other hand, the phase synchronization error caused by the LO phase noise is a random component, which is difficult to estimate [24].

In this paper, we propose a new synchronization method that uses the direct-path signal's time delays and peak phases, without isolating the synchronization errors, to compensate the corresponding components in the reflected signal directly. In this way, the time and phase synchronization errors can be completely removed, and the method is simple and fast.

However, because the direct-path signal's time delays and peak phases contain both the synchronization error component and the nominal range component, the range history of the direct-path signal is removed from the synchronized echo data as well. This means that the system impulse response of the synchronized data will be quite different from that of general bistatic SAR data. It is most straightforward to use time-domain algorithms such as the back-projection algorithm (BPA) to focus the synchronized data [25–27]. However, this requires high-precision position measurements and suffers from a severe computational load. Therefore, a frequency-domain algorithm is the preferred choice, but it has to be re-engineered accordingly.

The calculation of an analytical bistatic point-target reference spectrum (BPTRS) is the key to developing a frequency-domain imaging algorithm, since the bistatic range history loses its hyperbolic form [28]. Based on approximate BPTRS formulations, including the Loffeld bistatic formula (LBF) [29, 30] and series reversion [31], several image formation algorithms have been proposed. The 2-DISFT algorithm [32] and the chirp scaling (CS) algorithm [33] were developed from the LBF for the corresponding bistatic configurations. Based on the series reversion method, a non-linear CS (NLCS) process was applied to equalize the azimuth chirp rate in bistatic SAR focusing for the general configuration [34], and a range Doppler (RD) algorithm was proposed to handle the azimuth-invariant bistatic case [35]. In the bistatic SAR with a fixed receiver configuration, a two-dimensional NLCS processor was applied in [36] to deal with the large bistatic angle case. In [37], a highly accurate bistatic range migration algorithm (RMA) was proposed for an asymmetric bistatic SAR system with a fixed receiver. A modified range Doppler algorithm was presented for space-surface bistatic SAR (SS-BSAR) [38]. In the spaceborne/airborne configuration, a frequency-domain processing method based on the 2-DISFT was proposed in [39, 40].

According to the properties of the synchronized data, a frequency-domain imaging algorithm based on the 2-DISFT is proposed in this paper. First, owing to the triple square-root terms in the range history, a Taylor series expansion is applied to obtain the stationary phase point when deriving the BPTRS. Further, under a properly designed linear approximation of the BPTRS, the 2-D spectrum of the synchronized scene data is derived. It is found that the dominant component of the 2-D spectrum is the 2-D scaled Fourier transform of the target's bistatic backscattering coefficient. Finally, a frequency-domain imaging algorithm based on the 2-DISFT is proposed.

This paper is organized as follows. In Section 2, the geometry of the bistatic spaceborne/stratospheric SAR with a fixed receiver is given, and the signal model of the echo data with synchronization errors is derived. In Section 3, the synchronization implementation using direct-path signal is presented. Section 4 derives the BPTRS and 2-D spectrum for synchronized data. The analysis of approximation errors for the derivation of 2-D spectrum is presented in Section 5. Then, a frequency-domain imaging algorithm using 2-DISFT is proposed in Section 6. To validate the proposed algorithm, simulation experiments are carried out in Section 7. Finally, in Section 8, some conclusions are presented.

2. Geometry and signal model

Figure 1 shows the geometry of the bistatic spaceborne/stratospheric SAR with a fixed receiver considered here, where the spaceborne transmitter T moves along a straight line with velocity v, while the stratospheric receiver R remains stationary. It is worth pointing out that the receiver is designed with two antennas. One is the radar channel antenna, which is used to collect the reflected signal, while the other is the direct-path channel antenna, which is used to collect the direct-path signal. As can be seen in Figure 1, for the point target P, the reflected signal travels along r_T + r_R and is received by the radar channel antenna, while the direct-path signal travels along r_D and is received by the dedicated direct-path channel antenna.

Figure 1

Geometry of the bistatic spaceborne/stratospheric SAR with a fixed receiver.

As shown in Figure 1, the coordinate of the receiver is (0, 0, z_R), and the coordinate of the transmitter is (x_T, y_T, z_T). According to this geometry, z_R, x_T, and z_T are constant, and y_T(t) satisfies

y_T(t) = v \cdot t,
(1)

where t is the azimuth (slow) time. Assuming that the coordinate of the point target P is (x, y, 0), then the transmitter-to-target range and the target-to-receiver range for P can be expressed as

r_T(t, x, y) = \sqrt{(x - x_T)^2 + (y - vt)^2 + z_T^2}
(2)
r_R(x, y) = \sqrt{x^2 + y^2 + z_R^2}.
(3)

To make the following derivation easier, we use the transmitter-referenced coordinate to define the coordinate of target space. Assuming that

t_{0T} = y / v
(4)
r_{0T} = \sqrt{(x - x_T)^2 + z_T^2},
(5)

where t_0T is the zero-Doppler time for P, and r_0T is the slant range from the transmitter to the target at t_0T. Therefore, (2) and (3) can be rewritten as

r_T(t, t_{0T}, r_{0T}) = \sqrt{r_{0T}^2 + v^2 (t - t_{0T})^2}
(6)
r_R(t_{0T}, r_{0T}) = \sqrt{r_{0R}^2(r_{0T}) + v^2 t_{0T}^2},
(7)

where r_{0R} = \sqrt{x^2 + z_R^2}. From (5), we can see that x is a function of the variable r_0T; hence, r_0R can be expressed as r_0R(r_0T).

Meanwhile, the slant range history of direct-path signal can be written as

r_D(t) = \sqrt{x_T^2 + v^2 t^2 + (z_T - z_R)^2} = \sqrt{r_{0d}^2 + v^2 t^2},
(8)

where r_{0d} = \sqrt{x_T^2 + (z_T - z_R)^2} is the closest distance from the transmitter to the receiver.
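The range histories in (6) to (8) are straightforward to evaluate numerically. The following sketch (Python with NumPy) illustrates this for a single point target; the geometry values are placeholders chosen only for illustration and are not the parameters of Table 1.

```python
import numpy as np

# Illustrative geometry (placeholder values; Table 1 is not reproduced in the text)
v = 7600.0                     # transmitter velocity (m/s)
z_R = 20.0e3                   # receiver (stratospheric platform) altitude (m)
x_T, z_T = -500.0e3, 514.0e3   # transmitter x and z coordinates (m, assumed)
x, y = 100.0e3, 0.0            # point-target coordinates (m)
c = 299792458.0

t = np.linspace(-0.25, 0.25, 1001)   # azimuth (slow) time (s)

# Transmitter-referenced target coordinates, (4) and (5)
t_0T = y / v
r_0T = np.sqrt((x - x_T)**2 + z_T**2)
r_0R = np.sqrt(x**2 + z_R**2)
r_0d = np.sqrt(x_T**2 + (z_T - z_R)**2)

# Range histories (6) to (8)
r_T = np.sqrt(r_0T**2 + v**2 * (t - t_0T)**2)    # transmitter to target
r_R = np.sqrt(r_0R**2 + v**2 * t_0T**2)          # target to receiver (constant in t)
r_D = np.sqrt(r_0d**2 + v**2 * t**2)             # direct path

t_delay = (r_T + r_R) / c        # bistatic delay used in (10)
t_d = (r_T + r_R - r_D) / c      # synchronized delay, cf. (23)
```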

Supposing that the transmitted signal is

s_t(t, \tau) = \mathrm{rect}\left[\frac{\tau}{T_P}\right] \cdot \exp\left(j 2\pi f_0 (t + \tau) + j\pi k \tau^2\right),
(9)

where rect[·] is the window function with rectangular shape, τ is the range (fast) time, T P is the duration time of the transmitted pulse, f 0 is the radar carrier frequency, and k is the chirp rate. Therefore, the ideal reflected raw signal from the point target P at the receiver can be given as

s_{raw}(t, \tau) = \sigma(t_{0T}, r_{0T}) \cdot \mathrm{rect}\left[\frac{t - t_{0T}}{T_s}\right] \cdot \mathrm{rect}\left[\frac{\tau - t_{delay}}{T_P}\right] \cdot \exp\left(j 2\pi f_0 (t + \tau - t_{delay}) + j\pi k (\tau - t_{delay})^2\right),
(10)

where σ(·) is the bistatic backscattering coefficient of the point target P, t delay = (r T(t,t 0T,r 0T) + r R(t 0T,r 0T))/c is the time delay corresponding to the time it takes the signal to travel the transmitter-target-receiver distance, c is the speed of light, and T S is the synthetic aperture time for the point target P.

In the bistatic spaceborne/stratospheric SAR system, there is no common time reference between the transmitter and the receiver. Moreover, independent oscillators are used in the transmitter and the receiver, so the phase noise of the oscillator cannot be cancelled out as in monostatic SAR. Therefore, time and phase synchronization errors must be considered in this system.

First, considering only the time synchronization error e(t), the practical reflected signal can be rewritten as

s_{raw\_prac}(t, \tau) = \sigma(t_{0T}, r_{0T}) \cdot \mathrm{rect}\left[\frac{t - t_{0T}}{T_s}\right] \cdot \mathrm{rect}\left[\frac{\tau - t_{delay} - e(t)}{T_P}\right] \cdot \exp\left(j 2\pi f_0 \left(t + \tau - t_{delay} - e(t)\right) + j\pi k \left(\tau - t_{delay} - e(t)\right)^2\right).
(11)

Next, assuming that the oscillator in the spaceborne transmitter has ideal performance, the combined phase synchronization error ϕ_e(t) is assigned to the oscillator in the receiver [14–22]. Thus, omitting the amplitude and the initial phase, which are inessential, the output signal of the receiver's oscillator can be modelled as

s_o = \exp\left(j 2\pi f_0 t + j\phi_e(t)\right).
(12)

After the quadrature demodulation process, the practical signal reflected from the point target P, considering time and phase synchronization errors, can be given as

s_r(t, \tau) = \sigma(t_{0T}, r_{0T}) \cdot \mathrm{rect}\left[\frac{t - t_{0T}}{T_s}\right] \cdot \mathrm{rect}\left[\frac{\tau - t_{delay} - e(t)}{T_P}\right] \cdot \exp\left(j\pi k \left(\tau - t_{delay} - e(t)\right)^2\right) \cdot \exp\left(-j 2\pi f_0 \left(t_{delay} + e(t)\right)\right) \cdot \exp\left(j\phi_e(t)\right).
(13)

As can be seen from (13), the time and phase synchronization errors result not only in a drift of the echo sampling windows, which causes RCM errors, but also in a distortion of the azimuth-dependent phase history. From the knowledge of SAR imaging, we can conclude that the time and phase synchronization errors will degrade the quality of the bistatic SAR image [9]. Therefore, time and phase synchronization compensation must be implemented.

Assuming that the direct-path channel and the radar channel are previously balanced and that a common LO is used for both channels, the radar channel signal and the direct-path channel signal will be contaminated by the same synchronization errors. Thus, the direct-path signal after demodulation can be written as

s_d(t, \tau) = \mathrm{rect}\left[\frac{\tau - t_{D\_delay} - e(t)}{T_P}\right] \cdot \exp\left(j\pi k \left(\tau - t_{D\_delay} - e(t)\right)^2\right) \cdot \exp\left(-j 2\pi f_0 \left(t_{D\_delay} + e(t)\right)\right) \cdot \exp\left(j\phi_e(t)\right),
(14)

where t_{D\_delay} = r_D(t)/c.

3. Synchronization using the direct-path signal

As stated before, extracting the synchronization errors from the direct-path signal is a straightforward idea for bistatic SAR synchronization. It is possible to extract the time synchronization error from the direct-path signal [22]. However, it is difficult to estimate and extract the phase synchronization error with high precision if no other auxiliary data are available. The reason for this is twofold. On the one hand, the accuracy of the satellite's trajectory is not sufficient to separate the phase synchronization error from the nominal phase caused by the direct-path range history. On the other hand, the phase synchronization error caused by the LO phase noise is a random component, which is difficult to estimate [24]. Therefore, a different approach has to be applied here.

Taking advantage of the direct-path signal's high SNR and clean phase, its time delays and peak phases can be precisely extracted after range compression. Ignoring trivial constant offsets, the extracted time delays and peak phases can be formulated as

\hat{t}_{D\_delay}(t) = t_{D\_delay} + e(t)
(15)
\phi_D(t) = -2\pi f_0 \left(t_{D\_delay} + e(t)\right) + \phi_e(t).
(16)

As can be seen from (15) and (16), the extracted time delays and peak phases contain both the nominal range component and the synchronization error component. Instead of separating the synchronization error component from the nominal range component, we use the whole extracted time delay \hat{t}_{D\_delay}(t) and peak phase \phi_D(t) to compensate the corresponding terms in the reflected signal. This can be done by applying an opposite time-delay shift and by multiplying by an opposite phase term on each pulse.

At first, performing the range FT with respect to the variable τ, we can transform (13) into the range-frequency/azimuth-time domain:

S_r(t, f, t_{0T}, r_{0T}) = \sigma(t_{0T}, r_{0T}) \cdot \mathrm{rect}\left[\frac{t - t_{0T}}{T_s}\right] \cdot \mathrm{rect}\left[\frac{f}{k T_p}\right] \cdot \exp\left(-j\pi\frac{f^2}{k}\right) \cdot \exp\left(-j 2\pi (f + f_0)\left(t_{delay} + e(t)\right)\right) \cdot \exp\left(j\phi_e(t)\right),
(17)

where f is the range frequency. Then, according to (15), establishing the reference function of time synchronization yields

S_{rf\_t}(t, f) = \exp\left(j 2\pi f\, \hat{t}_{D\_delay}(t)\right) = \exp\left(j 2\pi f \left(t_{D\_delay} + e(t)\right)\right).
(18)

Thus, the reflected signal after time synchronization, in the range-frequency/azimuth-time domain, can be given as

S_{r\_ts}(t, f, t_{0T}, r_{0T}) = S_r(t, f, t_{0T}, r_{0T}) \cdot S_{rf\_t}(t, f) = \sigma(t_{0T}, r_{0T}) \cdot \mathrm{rect}\left[\frac{t - t_{0T}}{T_s}\right] \cdot \mathrm{rect}\left[\frac{f}{k T_p}\right] \cdot \exp\left(-j\pi\frac{f^2}{k}\right) \cdot \exp\left(-j 2\pi f \left(t_{delay} - t_{D\_delay}\right)\right) \cdot \exp\left(-j 2\pi f_0 \left(t_{delay} + e(t)\right)\right) \cdot \exp\left(j\phi_e(t)\right),
(19)

and then transforming (19) into 2-D time domain gives

s_{r\_ts}(t, \tau) = \sigma(t_{0T}, r_{0T}) \cdot \mathrm{rect}\left[\frac{t - t_{0T}}{T_s}\right] \cdot \mathrm{rect}\left[\frac{\tau - \left(t_{delay} - t_{D\_delay}\right)}{T_P}\right] \cdot \exp\left(j\pi k \left(\tau - \left(t_{delay} - t_{D\_delay}\right)\right)^2\right) \cdot \exp\left(-j 2\pi f_0 \left(t_{delay} + e(t)\right)\right) \cdot \exp\left(j\phi_e(t)\right).
(20)

Similarly, the reference function of phase synchronization according to (16) can be given as

s_{rf\_p}(t) = \exp\left(-j\phi_D(t)\right) = \exp\left(j 2\pi f_0 \left(t_{D\_delay} + e(t)\right) - j\phi_e(t)\right).
(21)

Finally, multiplying (20) by the reference function (21) yields

s(t, \tau, t_{0T}, r_{0T}) = s_{r\_ts}(t, \tau) \cdot s_{rf\_p}(t) = \sigma(t_{0T}, r_{0T}) \cdot \mathrm{rect}\left[\frac{t - t_{0T}}{T_s}\right] \cdot \mathrm{rect}\left[\frac{\tau - t_d(t, t_{0T}, r_{0T})}{T_p}\right] \cdot \exp\left(j\pi k \left(\tau - t_d(t, t_{0T}, r_{0T})\right)^2\right) \cdot \exp\left(-j 2\pi f_0\, t_d(t, t_{0T}, r_{0T})\right),
(22)

where

t_d(t, t_{0T}, r_{0T}) = t_{delay} - t_{D\_delay} = \frac{r_T(t, t_{0T}, r_{0T}) + r_R(t_{0T}, r_{0T}) - r_D(t)}{c}.
(23)

The combination of extraction and compensation can be viewed as the whole synchronization process. Its block diagram is shown in Figure 2.

Figure 2

The block diagram of synchronization process.

From (22), we can see that after the implementation of synchronization, the synchronization errors are completely removed. However, as the cost of synchronization, the range history of the point target P, r_T + r_R, is replaced by r_T + r_R - r_D. We note here that this range history is quite different from that of general bistatic SAR. Thus, some special processing approaches have to be implemented, which will be discussed in the following sections.
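In practice, the synchronization of (15) to (21) reduces to a per-pulse correction: range-compress the direct-path channel, read off the peak delay and peak phase, and apply the opposite range-frequency ramp and conjugate phase to the radar channel. A minimal sketch of this procedure is given below; it is an illustration under simplifying assumptions (the peak delay is taken at integer-sample precision, whereas a practical implementation would interpolate, and the array layout and function name are ours, not the paper's).

```python
import numpy as np

def synchronize(radar, direct, fs, k, Tp):
    """Per-pulse synchronization using the direct-path channel, cf. (15)-(21).
    radar, direct : complex arrays of shape (n_pulses, n_samples)
    fs : range sampling rate (Hz), k : chirp rate (Hz/s), Tp : pulse width (s)."""
    n_pulses, n_samples = radar.shape
    f = np.fft.fftfreq(n_samples, d=1.0 / fs)        # range-frequency axis

    # Matched filter for range compression of the direct-path channel
    tau = np.arange(int(round(Tp * fs))) / fs
    REF = np.conj(np.fft.fft(np.exp(1j * np.pi * k * tau**2), n_samples))

    out = np.empty_like(radar)
    for p in range(n_pulses):
        dp = np.fft.ifft(np.fft.fft(direct[p]) * REF)  # range-compressed direct path
        i_pk = int(np.argmax(np.abs(dp)))
        t_hat = i_pk / fs                              # extracted delay, cf. (15)
        phi_hat = np.angle(dp[i_pk])                   # extracted peak phase, cf. (16)

        # Time synchronization: opposite delay shift in the range-frequency domain, (18)
        R = np.fft.fft(radar[p]) * np.exp(1j * 2 * np.pi * f * t_hat)
        # Phase synchronization: multiply by the opposite peak phase, (21)
        out[p] = np.fft.ifft(R) * np.exp(-1j * phi_hat)
    return out
```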

4. 2-D spectrum of synchronized data

To understand the features of the synchronized data, this section will describe the processing steps performed to obtain its 2-D spectrum, which is based on the signal model (22) and the principle of stationary phase (POSP).

4.1 Derivation of BPTRS

The BPTRS can be obtained by performing the Fourier transform (FT) with respect to the variable τ and the variable t, respectively. Firstly, performing the range FT with respect to the variable τ, we can transform (22) into the range-frequency/azimuth-time domain. Using POSP, the result can be given as

S(t, f, t_{0T}, r_{0T}) = \sigma(t_{0T}, r_{0T}) \cdot \mathrm{rect}\left[\frac{t - t_{0T}}{T_s}\right] \cdot \mathrm{rect}\left[\frac{f}{k T_p}\right] \cdot \exp\left(-j\pi\frac{f^2}{k}\right) \cdot \exp\left(-j 2\pi (f + f_0)\, t_d(t, t_{0T}, r_{0T})\right).
(24)

Then we can transform (24) into the range-frequency/azimuth-frequency domain by performing the azimuth FT with respect to the variable t.

S(f_a, f, t_{0T}, r_{0T}) = \sigma(t_{0T}, r_{0T}) \cdot \mathrm{rect}\left[\frac{f}{k T_p}\right] \cdot \exp\left(-j\pi\frac{f^2}{k}\right) \cdot \int \mathrm{rect}\left[\frac{t - t_{0T}}{T_s}\right] \cdot \exp\left(-j \varphi_a(f_a, f, t, t_{0T}, r_{0T})\right)\, dt,
(25)

where f a is the azimuth frequency, and the phase term considered in the integral is

\varphi_a(f_a, f, t, t_{0T}, r_{0T}) = 2\pi (f + f_0)\, t_d(t, t_{0T}, r_{0T}) + 2\pi f_a t.
(26)

It can be seen from (6) to (8) and (23) that there are three square root terms in φ a(f a,f,t,t 0T,r 0T). This makes it difficult to obtain BPTRS by applying POSP, since it seems to be impossible to obtain the analytical expression of the stationary point for (26). To overcome this limitation, the Taylor series expansion is used to simplify the phase terms in (26), with a sufficient precision of approximation which has been verified in [39]. Expanding the three square root terms in Taylor series and keeping terms up to the second order gives

r_T(t, t_{0T}, r_{0T}) = \sqrt{r_{0T}^2 + v^2 (t - t_{0T})^2} \approx r_{0T} + \frac{v^2 (t - t_{0T})^2}{2 r_{0T}}
(27)
r_R(t_{0T}, r_{0T}) = \sqrt{r_{0R}^2(r_{0T}) + v^2 t_{0T}^2} \approx r_{0R}(r_{0T}) + \frac{v^2 t_{0T}^2}{2 r_{0R}(r_{0T})}
(28)
r_D(t) = \sqrt{r_{0d}^2 + v^2 t^2} \approx r_{0d} + \frac{v^2 t^2}{2 r_{0d}}.
(29)

Based on (27) to (29), we can calculate some key parameters of the Doppler signal. Assuming that the transmitted signal is narrowband, i.e. (f + f_0)/c ≈ 1/λ, the Doppler centroid and the Doppler bandwidth can be derived as

f_{DC} \approx -\frac{1}{\lambda} \cdot \frac{\partial\left[r_T(t, t_{0T}, r_{0T}) + r_R(t_{0T}, r_{0T}) - r_D(t)\right]}{\partial t}\bigg|_{t = t_{0T}} = \frac{v^2 t_{0T}}{\lambda r_{0d}}
(30)
B_a \approx \frac{1}{\lambda} \cdot \left|\frac{\partial^2\left[r_T(t, t_{0T}, r_{0T}) + r_R(t_{0T}, r_{0T}) - r_D(t)\right]}{\partial t^2}\right| \cdot T_s = \frac{v^2 (r_{0T} - r_{0d})}{\lambda\, r_{0T}\, r_{0d}} \cdot T_s.
(31)

Therefore, the Doppler time-bandwidth product (TBP) is \mathrm{TBP}_D = \frac{v^2 T_s^2 (r_{0T} - r_{0d})}{\lambda\, r_{0T}\, r_{0d}}. For the point target located at the scene centre, using the parameters listed in Table 1, we get v = 7,600 m/s, λ = 0.031 m, r_0T = 726.9 km, r_0d = 645.8 km, and T_s = 0.484 s, and then TBP_D ≈ 78. This means that the accuracy of the POSP is sufficient to obtain an analytical expression of the spectrum. Thus, the phase term in (26) can be rewritten as

\varphi_a(f_a, f, t, t_{0T}, r_{0T}) \approx \frac{2\pi (f + f_0)}{c} \cdot \left[ r_{0T} + r_{0R}(r_{0T}) - r_{0d} + \frac{v^2 (t - t_{0T})^2}{2 r_{0T}} + \frac{v^2 t_{0T}^2}{2 r_{0R}(r_{0T})} - \frac{v^2 t^2}{2 r_{0d}} \right] + 2\pi f_a t.
(32)
Table 1 Simulation parameters

Setting \partial \varphi_a(f_a, f, t, t_{0T}, r_{0T}) / \partial t = 0, we can obtain the stationary point

t_P = \frac{v^2 t_{0T} r_{0d} (f + f_0) - r_{0T} f_a c\, r_{0d}}{v^2 (r_{0d} - r_{0T})(f + f_0)}.
(33)

Substituting t P for t in the integral of (25) yields

S(f_a, f, t_{0T}, r_{0T}) = \sigma(t_{0T}, r_{0T}) \cdot \mathrm{rect}\left[\frac{f_a - f_{DC}}{B_a}\right] \cdot \mathrm{rect}\left[\frac{f}{k T_p}\right] \cdot \exp\left(j\psi(f_a, f, t_{0T}, r_{0T})\right),
(34)

where the phase term ψ (f a ,f,t 0T,r 0T), which represents the significant features of BPTRS, can be given as

\psi(f_a, f, t_{0T}, r_{0T}) = -\frac{2\pi (f + f_0)}{c}\left(r_{0T} + r_{0R}(r_{0T}) - r_{0d}\right) - \frac{\pi (f + f_0)}{c} \cdot \frac{r_{0T} + r_{0R}(r_{0T}) - r_{0d}}{(r_{0T} - r_{0d})\, r_{0R}(r_{0T})} \cdot v^2 t_{0T}^2 - 2\pi \cdot \frac{r_{0d}}{r_{0d} - r_{0T}} \cdot f_a t_{0T} + \frac{\pi f_a^2 c\, r_{0T} r_{0d}}{v^2 (r_{0d} - r_{0T})(f + f_0)} - \frac{\pi f^2}{k}.
(35)

The first and the second terms in (35) represent the RCM, which is not only azimuth-variant but also range-variant. The third term is a linear function of the azimuth frequency f_a, expressing the azimuth position of the point target in the focused image. The fourth term stands for the cross coupling between azimuth and range, and the last one is responsible for the range modulation. We can see from (35) that the BPTRS depends on both the azimuth coordinate t_0T and the range coordinate r_0T of the point target P.
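Before moving to the scene spectrum, the Doppler parameters (30) and (31) and the TBP estimate can be checked with a few lines of code. The values below are those quoted in the text; the small differences from the quoted figures (e.g. B_a = 158.99 Hz, TBP_D ≈ 78) come from rounding of the inputs, since Table 1 is not reproduced here.

```python
v, lam = 7600.0, 0.031                     # values quoted in the text
r_0T, r_0d, T_s = 726.9e3, 645.8e3, 0.484

B_a = v**2 * (r_0T - r_0d) / (lam * r_0T * r_0d) * T_s   # Doppler bandwidth, (31)
TBP = B_a * T_s                                           # Doppler time-bandwidth product
print(B_a, TBP)            # roughly 156 Hz and 75 with these rounded inputs

t_0T = 1000.0 / v                                         # target at y = 1 km
f_DC = v**2 * t_0T / (lam * r_0d)                         # Doppler centroid, (30)
print(f_DC)                # roughly 380 Hz, cf. the 378.53 Hz quoted in Section 7.3
```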

4.2 Derivation of 2-D spectrum of the scene data

The 2-D spectrum of a complete scene data can be obtained by integrating the BPTRS over all the point target spectra [41, 42]:

H(f_a, f) = \iint S(f_a, f, t_{0T}, r_{0T})\, dt_{0T}\, dr_{0T}.
(36)

As shown in (35), the phase term ψ(f_a, f, t_0T, r_0T) is a complicated function of the target position, i.e., of the variables t_0T and r_0T. To obtain an analytical solution of the integral, a linear approximation of ψ(f_a, f, t_0T, r_0T) with respect to t_0T and r_0T has to be carried out.

First of all, even though both the first and the second terms of ψ(f_a, f, t_0T, r_0T) represent the RCM shift, the second term is a quadratic function of t_0T. Therefore, the second term is automatically omitted in the linear approximation. The impact of this approximation will be analysed in the following section.

Then, expanding (35) in a Taylor series at t_0T = 0 and r_0T = r_0 and keeping terms only up to the first order gives

\psi_L(f_a, f, t_{0T}, r_{0T}) = \psi_{0L}(f_a, f, 0, r_0) - 2\pi \cdot \psi_{tL}(f_a, f, 0, r_0) \cdot t_{0T} - 2\pi \cdot \psi_{rL}(f_a, f, 0, r_0) \cdot (r_{0T} - r_0),
(37)

where r_0 = \sqrt{(x_0 - x_T)^2 + z_T^2}, (x_0, 0, 0) is the coordinate of the target located at the scene centre, and

\psi_{0L}(f_a, f, 0, r_0) = \psi(f_a, f, t_{0T}, r_{0T})\big|_{t_{0T} = 0,\, r_{0T} = r_0}
(38)
\psi_{tL}(f_a, f, 0, r_0) = -\frac{1}{2\pi} \cdot \frac{\partial \psi(f_a, f, t_{0T}, r_{0T})}{\partial t_{0T}}\bigg|_{t_{0T} = 0,\, r_{0T} = r_0}
(39)
\psi_{rL}(f_a, f, 0, r_0) = -\frac{1}{2\pi} \cdot \frac{\partial \psi(f_a, f, t_{0T}, r_{0T})}{\partial r_{0T}}\bigg|_{t_{0T} = 0,\, r_{0T} = r_0},
(40)

where (t_0T = 0, r_0T = r_0) denotes the scene centre point. In (37), the first term is the space-invariant phase component, representing the space-invariant range modulation, RCM, and azimuth modulation. The last two terms in (37) represent the space-variant component of ψ_L(f_a, f, t_0T, r_0T). After some tedious algebraic manipulation, the corresponding results can be given as

\psi_{0L}(f_a, f, 0, r_0) = -\frac{\pi f^2}{k} - \frac{2\pi (f + f_0)}{c} \times \left[ r_0 + r_{0R}(r_0) - r_{0d} - \frac{f_a^2 c^2\, r_0 r_{0d}}{2 v^2 (r_{0d} - r_0)(f + f_0)^2} \right]
(41)
\psi_{tL}(f_a, f, 0, r_0) = \frac{r_{0d}}{r_{0d} - r_0} \cdot f_a
(42)
\psi_{rL}(f_a, f, 0, r_0) = \frac{(f + f_0)(1 + M)}{c} - \frac{f_a^2 c\, r_{0d}^2}{2 v^2 (r_{0d} - r_0)^2 (f + f_0)},
(43)

where

M = \frac{\partial r_{0R}(r_{0T})}{\partial r_{0T}}\bigg|_{t_{0T} = 0,\, r_{0T} = r_0} = \frac{x_0}{\sqrt{x_0^2 + z_R^2}} \cdot \frac{r_0}{x_0 - x_T}.
(44)

We can see that there is a cross coupling between azimuth and range in (43). To remove this coupling, another linear approximation can be made to (43). Expanding ψ_rL(f_a, f, 0, r_0) in a Taylor series in f and keeping terms up to the first order, ψ_rL(f_a, f, 0, r_0) becomes

\psi_{rL}(f_a, f, 0, r_0) \approx \psi_{rL1}(f_a, 0, r_0) + \psi_{rL2}(f_a, 0, r_0) \cdot f,
(45)

where

\psi_{rL1}(f_a, 0, r_0) = \frac{1 + M}{\lambda} - \frac{f_a^2 \lambda r_{0d}^2}{2 v^2 (r_{0d} - r_0)^2}
(46)
\psi_{rL2}(f_a, 0, r_0) = \frac{1 + M}{c} + \frac{f_a^2 \lambda r_{0d}^2}{2 v^2 (r_{0d} - r_0)^2 f_0},
(47)

where λ = c/f 0 is the wavelength. In (45), the first term is a pure Doppler phase term which represents the range-variant azimuth modulation, and the second term is the scaled range-frequency phase term which expresses the residual space-variant RCM component.

Finally, letting r = r_0T - r_0 and substituting (37) and (45) into (36) yields the 2-D spectrum of the synchronized scene data

H(f_a, f) \approx H_0(f_a, f) \cdot \iint \sigma(t_{0T}, r_{0T}) \cdot \mathrm{rect}\left[\frac{f_a - f_{DC}}{B_a}\right] \cdot \exp\Big(-j 2\pi \big[\psi_{tL}(f_a, f, 0, r_0) \cdot t_{0T} + \psi_{rL}(f_a, f, 0, r_0) \cdot r\big]\Big)\, dt_{0T}\, dr,
(48)

where

H_0(f_a, f) = \mathrm{rect}\left[\frac{f}{k T_p}\right] \cdot \exp\left(j \cdot \psi_{0L}(f_a, f, 0, r_0)\right).
(49)

The exponential term in the integral of (48), as it is linear with respect to t_0T and r, can be interpreted as a Fourier kernel, and thus we can obtain

H(f_a, f) \approx H_0(f_a, f) \cdot \sigma\left(\psi_{tL}(f_a, f, 0, r_0),\; \psi_{rL}(f_a, f, 0, r_0)\right).
(50)

However, due to the expressions of ψ tL(f a, f, 0, r 0) and ψ rL(f a, f, 0, r 0), σ(ψ tL, ψ rL) should be viewed as the 2-D scaled Fourier transform of the bistatic backscattering coefficient σ(t 0T, r 0T).

5. Analysis of approximation errors

Examining the derivation of (50), it can be seen that two linear approximations are made to the phase term ψ (f a,f,t 0T,r 0T). To validate these operations, these approximation errors should be analysed in detail.

Firstly, as shown in (45), ψ rL (f a,f,0,r 0) is approximated by a linear function with respect to the variable f. This implies that the second-order term and higher-order terms are neglected with respect to the variable f. According to (43) and (45), the approximation error can be given as

\Delta\psi_{E1} = \frac{\pi}{2!} \cdot \frac{f_a^2 \lambda r_{0d}^2 (r_{0T} - r_0)}{v^2 (r_{0d} - r_0)^2} \cdot \left(\frac{f}{f_0}\right)^2 - \frac{\pi}{3!} \cdot \frac{f_a^2 \lambda r_{0d}^2 (r_{0T} - r_0)}{v^2 (r_{0d} - r_0)^2} \cdot \left(\frac{f}{f_0}\right)^3 + \cdots
(51)

For a narrowband signal (f ≪ f_0), it is clear that the quadratic term is the dominant component. Therefore, only the quadratic term needs to be considered in the following analysis. Considering the worst case, when the range frequency f reaches its maximum value, the approximation phase error is calculated and shown in Figure 3.

Figure 3

Approximation phase error given by (51).

Using the parameters listed in Table 1, we get r_0 = 726.9 km, r_0d = 645.8 km, and f = 25 MHz. The azimuth frequency f_a and the slant-range deviation r = r_0T - r_0 are used as independent variables in the simulation. From Figure 3, we can see that |Δψ_E1| ≤ π/8 is satisfied. This implies that the linear approximation in (45) is accurate enough.
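The worst-case evaluation behind Figure 3 can be reproduced by sweeping the dominant quadratic term of (51) over a grid of f_a and r at the maximum range frequency. The sketch below does this; the carrier frequency and the f_a span are assumptions (an X-band carrier consistent with λ = 0.031 m, and a grid roughly covering the Doppler support), not values taken from Table 1.

```python
import numpy as np

v, lam, r_0, r_0d = 7600.0, 0.031, 726.9e3, 645.8e3   # values quoted in the text
f0 = 9.65e9            # carrier frequency (assumed; lambda = c/f0 is about 0.031 m)
f_max = 25e6           # maximum range frequency quoted above

f_a = np.linspace(-250.0, 250.0, 501)[:, None]        # azimuth frequency (Hz, assumed span)
r = np.linspace(-5.0e3, 5.0e3, 501)[None, :]          # slant-range deviation r_0T - r_0 (m)

# Dominant (quadratic) term of the approximation error (51) at f = f_max
dpsi = (np.pi / 2) * f_a**2 * lam * r_0d**2 * r / (v**2 * (r_0d - r_0)**2) * (f_max / f0)**2

print(np.abs(dpsi).max(), np.pi / 8)   # worst case stays far below the pi/8 threshold
```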

Secondly, as shown in (37), ψ (f a,f,t 0T,r 0T) is approximated by ψ L(f a, f, t 0T, r 0T) which is a linear function with respect to variables t 0T and r 0T. Therefore, the approximation error can be expressed as

\Delta\psi_{E2} = \psi(f_a, f, t_{0T}, r_{0T}) - \psi_L(f_a, f, t_{0T}, r_{0T}).
(52)

Substituting (35) and (37) into (52) gives

\Delta\psi_{E2} = -2\pi \cdot f_a \cdot \frac{r_{0d}(r_{0T} - r_0)}{(r_{0d} - r_0)(r_{0d} - r_{0T})} \cdot t_{0T} - 2\pi \cdot \frac{f + f_0}{c} \cdot \frac{r_{0T} + r_{0R}(r_{0T}) - r_{0d}}{2(r_{0T} - r_{0d})\, r_{0R}(r_{0T})} \cdot (v t_{0T})^2 - 2\pi \cdot \frac{f + f_0}{c} \cdot \left[ r_{0R}(r_{0T}) - r_{0R}(r_0) - M \cdot (r_{0T} - r_0) \right] - \pi \cdot \frac{f_a^2 c}{v^2 (f + f_0)} \cdot \frac{r_{0d}^2 (r_{0T} - r_0)^2}{(r_{0d} - r_{0T})(r_{0d} - r_0)^2}.
(53)

In (53), the first term corresponds to an azimuth displacement error. The second and third terms represent the residual RCM error and may result in a range displacement error. The last term is the residual coupling phase, which may cause defocusing.

From (53), the azimuth displacement error can be given as

\Delta x = \frac{r_{0d}(r_{0T} - r_0)}{(r_{0d} - r_0)(r_{0d} - r_{0T})} \cdot v t_{0T}.
(54)

From (35), the correct range position of the point target in the focused image can be given as

r_P = \left( r_{0T} + r_{0R}(r_{0T}) - r_{0d} \right) \cdot \left[ 1 + \frac{(v t_{0T})^2}{2(r_{0T} - r_{0d})\, r_{0R}(r_{0T})} \right],
(55)

where vt 0T represents the azimuth coordinate of the point target, indicating the azimuth-variant RCM shift. However, after this approximation, the range position of the point target in the focused image will be

r_P' = r_{0T} + r_{0R}(r_0) + M \cdot (r_{0T} - r_0) - r_{0d}.
(56)

Thus, the range displacement error can be expressed as

\Delta r = \frac{r_{0T} + r_{0R}(r_{0T}) - r_{0d}}{2(r_{0T} - r_{0d})\, r_{0R}(r_{0T})} \cdot (v t_{0T})^2 + r_{0R}(r_{0T}) - r_{0R}(r_0) - M \cdot (r_{0T} - r_0).
(57)

We can see that the geometric distortion will be introduced by this linear approximation. However, this geometric distortion can be corrected by interpolating after image focusing [43]. Moreover, the geometric correction is a necessary step for SAR imaging in frequency domain, and thus this process will not increase the computational load.

Therefore, only the last phase error term in (53) is considered here. Under the narrowband assumption, i.e. c/(f + f_0) ≈ λ, expanding it in a Taylor series at r_0T = r_0 gives

\Delta\psi_{E2} \approx -\pi \cdot \frac{f_a^2 \lambda r_{0d}^2}{v^2} \cdot \frac{(r_{0T} - r_0)^2}{(r_{0d} - r_0)^3} - \frac{\pi}{2} \cdot \frac{f_a^2 \lambda r_{0d}^2}{v^2} \cdot \frac{(r_{0T} - r_0)^3}{(r_{0d} - r_0)^4} - \cdots
(58)

In (58), the ratio of the second term to the first term is \frac{1}{2} \cdot \frac{r_{0T} - r_0}{r_{0d} - r_0}, which is smaller than 0.04 for the same parameters as before. Furthermore, according to the principle of the Taylor expansion, the higher the order of the term, the smaller the error value. Now, we can see that the dominant phase error is a quadratic function of f_a and r = r_0T - r_0. Thus, the phase error increases quickly when both f_a and r increase. If we use π/8 as the tolerable threshold, the constraint for this approximation can be given as

\left| f_a \cdot r \right| \le \sqrt{\frac{\left| r_{0d} - r_0 \right|^3 v^2}{8\, r_{0d}^2\, \lambda}}.
(59)

Using the same parameters as before, we get |f_a · r| ≤ 4.87 × 10^5. For example, if the scene width is assumed to be 10 km in the slant-range direction, i.e. max(r) = 5,000 m, the azimuth frequency needs to satisfy |f_a| ≤ 97.4 Hz.

However, from (34), we can see that f_a is restricted by rect[(f_a - f_DC)/B_a]. Moreover, from (30), we note that the Doppler centroid f_DC depends on the azimuth coordinate t_0T of the point target. Assuming that the scene width in the azimuth direction is W_a, the azimuth frequency of the synchronized scene data satisfies

\max(|f_a|) = \frac{1}{2}\left( \frac{v}{\lambda r_{0d}} \cdot W_a + B_a \right).
(60)

Using the parameters listed in Table 1, we get B_a = 158.99 Hz. If we assume W_a = 2,000 m, then max(|f_a|) = 458.02 Hz, which is beyond the restriction given above. This means that a compromise between f_a and r is needed to meet the requirement.
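The compromise between f_a and r can be examined directly from (59) and (60), as in the sketch below. The inputs are the values quoted in the text; the computed bound (about 5.5 × 10^5) differs somewhat from the quoted 4.87 × 10^5, presumably because the exact Table 1 parameters are not reproduced here.

```python
import math

v, lam, T_s = 7600.0, 0.031, 0.484          # values quoted in the text
r_0, r_0d = 726.9e3, 645.8e3

# Constraint (59): bound on |f_a * r| that keeps the dominant term of (58) below pi/8
bound = math.sqrt(abs(r_0d - r_0)**3 * v**2 / (8 * r_0d**2 * lam))
print(bound)                                # about 5.5e5 with these rounded inputs

# Maximum azimuth frequency (60) for an azimuth scene width W_a = 2,000 m
B_a = v**2 * (r_0 - r_0d) / (lam * r_0 * r_0d) * T_s
f_a_max = 0.5 * (v * 2000.0 / (lam * r_0d) + B_a)
print(f_a_max)                              # about 458 Hz, as quoted above

# Admissible slant-range block half-width at that maximum azimuth frequency
print(bound / f_a_max)                      # roughly 1.2 km
```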

According to the discussion above, Figure 4 gives an example of such a compromise, in which max(|f_a · r|) = 4.5 × 10^5. The size of the corresponding scene block is about 1 km × 3 km (azimuth direction × slant-range direction).

Figure 4

Approximation phase error given by (58).

If we intend to focus a larger scene, the synchronized data can be processed in small blocks in both the range and azimuth directions. For different range blocks, the corresponding reference slant range r_0 is used. For different azimuth blocks, Doppler centroid correction (DCC) is applied by multiplying the synchronized echo by the DCC function in the 2-D time domain. The DCC function can be given as

H_{DCC}(t) = \exp\left(-j 2\pi f_{DN} t\right),
(61)

where

f_{DN} = \frac{v^2}{\lambda r_{0d}} \cdot t_{0TN}.
(62)

In (62), t 0TN is the azimuth coordinate of the centre point in the block N. According to the shifting theorem of FT, the corresponding 2-D spectrum can be given as

H_{DCC}(f_a, f) = H(f_a + f_{DN}, f).
(63)

In this way, the 2-D spectrum of block N will satisfy the constraint (59). However, blocks have to overlap, so the efficiency of the algorithm decreases as the number of blocks increases.
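The azimuth-block DCC of (61) and (62) amounts to a per-pulse phase ramp in slow time. A minimal sketch follows, assuming the block is stored as a (slow time × fast time) complex array; the function name is ours.

```python
import numpy as np

def doppler_centroid_correction(block, t, v, lam, r_0d, t_0TN):
    """Shift an azimuth block to baseband before focusing, cf. (61)-(63).
    block : complex array (slow time x fast time)
    t     : slow-time axis (s)
    t_0TN : azimuth coordinate (zero-Doppler time) of the block centre (s)."""
    f_DN = v**2 * t_0TN / (lam * r_0d)            # block Doppler centroid, (62)
    return block * np.exp(-1j * 2 * np.pi * f_DN * t)[:, None]   # (61)
```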

6. Imaging process using 2-DISFT

6.1 Analysis of 2-D spectrum

As is pointed out, σ(ψ tL,ψ rL) can be interpreted as a 2-D scaled Fourier transform of the bistatic backscattering coefficient σ(t 0T,r 0T); thus, we rewrite σ(ψ tL,ψ rL) as

\sigma(\psi_{tL}, \psi_{rL}) = \sigma\left( \frac{r_{0d}}{r_{0d} - r_0} \cdot f_a,\; \psi_{rL1}(f_a, 0, r_0) + \psi_{rL2}(f_a, 0, r_0) \cdot f \right).
(64)

It can be seen that both the azimuth frequency and range frequency are scaled by coefficients. In particular, the azimuth frequency is scaled by |r 0d/(r 0d - r 0)|. Meanwhile, from (31), we can see that the Doppler bandwidth can be rewritten as

B_a = B_a' \Big/ \left| \frac{r_{0d}}{r_{0d} - r_0} \right|,
(65)

where B_a' = \frac{v^2 T_s}{\lambda r_{0T}} is the Doppler bandwidth of the traditional bistatic SAR with a fixed-receiver configuration. In this case, if we apply the inverse Fourier transform along the azimuth direction directly, the azimuth resolution of the resulting image will be ρ_a = |r_0d/(r_0d - r_0)| · v/B_a'. Compared with the azimuth resolution of the traditional bistatic SAR with a fixed-receiver configuration, which is ρ_a0 = v/B_a', the resulting azimuth resolution is scaled by |r_0d/(r_0d - r_0)|. When |r_0d/(r_0d - r_0)| is many times larger than 1, the resulting image suffers a very poor azimuth resolution. Using the same parameters as before, we get ρ_a0 = 5.31 m and |r_0d/(r_0d - r_0)| = 7.96, so ρ_a ≈ 42.3 m. The range frequency suffers from the same problem, but not as seriously as the azimuth one. To circumvent this limitation, we propose to apply the ISFT along both the range and azimuth directions, as presented in the next section.

6.2 Principle of ISFT

The inverse scaled Fourier transform, abbreviated as ISFT, was introduced to focus monostatic SAR data in [41] and was compared with chirp scaling method in [42]. The ISFT was also applied for bistatic SAR data imaging [32, 40].

The principle of the ISFT is shown below [41]. As shown in Figure 5, assuming that s(t) ↔ S(f) is a Fourier transform pair, if the input frequency-domain signal is S(a · f), the output time-domain signal after the ISFT will be (1/a) · s(t). The digital implementation of the ISFT can be found in [42].

Figure 5

The principle of ISFT.
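To make the principle concrete, a brute-force digital ISFT can be written directly from the definition: the input samples lie on the scaled frequency grid a·f, and the inverse transform is evaluated against the scaled kernel so that the output approximates (1/a)·s(t). The sketch below is an O(N²) illustration with a simple Gaussian self-check; the efficient chirp-based implementation described in [42] is not reproduced here.

```python
import numpy as np

def isft(S_scaled, f, a, t):
    """Brute-force inverse scaled Fourier transform.
    S_scaled : samples of S(a*f) on the frequency grid f
    a        : scaling factor
    t        : output time grid
    Returns an approximation of (1/a) * s(t)."""
    df = f[1] - f[0]
    kernel = np.exp(1j * 2 * np.pi * np.outer(t, a * f))   # exp(j 2*pi*(a f)*t)
    return kernel @ S_scaled * df                          # integration over f

# Small self-check with a Gaussian pulse and its analytic spectrum
a, sigma = 2.0, 0.05
t = np.linspace(-1.0, 1.0, 1024)
f = np.linspace(-40.0, 40.0, 4096)
S = np.sqrt(2 * np.pi) * sigma * np.exp(-2 * (np.pi * sigma * a * f)**2)  # S(a*f)
s_rec = isft(S, f, a, t)
s_ref = np.exp(-t**2 / (2 * sigma**2)) / a          # expected (1/a) * s(t)
print(np.max(np.abs(s_rec - s_ref)))                # small residual
```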

6.3 Imaging process

From (50), we can see that the 2-D spectrum of the synchronized scene data can be viewed as the product of the space-invariant phase term H_0(f_a, f) and the 2-D scaled Fourier transform of the 'brightness' of the point target, σ(ψ_tL(f_a, f, 0, r_0), ψ_rL(f_a, f, 0, r_0)). Meanwhile, imaging can be interpreted as the process of recovering the backscattering coefficient σ(t_0T, r_0T) from the 2-D spectrum H(f_a, f). Therefore, a 2-DISFT imaging algorithm is proposed in this paper to obtain the focused image. The implementation steps are listed below (a short sketch of the reference-function construction follows the list), and the block diagram of the proposed algorithm is shown in Figure 6.

Figure 6

The block diagram of 2-DISFT imaging process.

  1. According to the constraint (59), the whole synchronized scene data set is divided into smaller blocks.

  2. DCC is performed for each azimuth block by multiplying by the DCC function in the 2-D time domain:

s_{DCC}(t, \tau, t_{0T}, r_{0T}) = s(t, \tau, t_{0T}, r_{0T}) \cdot H_{DCC}(t).
(66)

  3. The 2-D time-domain data are transformed into the 2-D frequency domain using the Fourier transform.

  4. Bulk RCM correction (RCMC) and 2-D compression are performed by multiplying by the reference function (RF). As pointed out above, H_0(f_a, f) represents the space-invariant range modulation, RCM and azimuth modulation. Therefore, the RF should be the conjugate of H_0(f_a, f). However, to retain the phase that can be used in interferometric applications, the RF is expressed as

H_{RF}(f_a, f) = H_0^{*}(f_a, f) \cdot \exp\left(-j 2\pi f_0 \cdot \frac{r_0 + r_{0R}(r_0) - r_{0d}}{c}\right) = \mathrm{rect}\left[\frac{f}{k T_p}\right] \cdot \exp\left(j\pi \frac{f^2}{k}\right) \cdot \exp\left(j 2\pi f \cdot \frac{r_0 + r_{0R}(r_0) - r_{0d}}{c}\right) \cdot \exp\left(-j\pi \frac{f_a^2\, c\, r_0\, r_{0d}}{v^2 (r_{0d} - r_0)(f + f_0)}\right).
(67)

After this step, the remaining signal can be given by

H_1(f_a, f) = H(f_a, f) \cdot H_{RF}(f_a, f) = \exp\left(-j 2\pi f_0 \cdot \frac{r_0 + r_{0R}(r_0) - r_{0d}}{c}\right) \cdot \sigma\left(\frac{r_{0d}}{r_{0d} - r_0} \cdot f_a,\; \psi_{rL1}(f_a, 0, r_0) + \psi_{rL2}(f_a, 0, r_0) \cdot f\right).
(68)

  5. ISFT with respect to the range frequency, which can be expressed as

H_2(f_a, r) = \int H_1(f_a, f)\, \exp\left(j 2\pi E_{RFS}(f_a, 0, r_0) \cdot f_r \cdot r\right)\, d\left(E_{RFS}(f_a, 0, r_0) \cdot f_r\right) = \sigma\left(\frac{r_{0d}}{r_{0d} - r_0} \cdot f_a,\; r\right) \cdot \exp\left(-j 2\pi \psi_{rL1}(f_a, 0, r_0)\, r\right) \cdot \exp\left(-j 2\pi f_0 \cdot \frac{r_0 + r_{0R}(r_0) - r_{0d}}{c}\right),
(69)

where

E_{RFS}(f_a, r_0) = \frac{c}{1 + M} \cdot \psi_{rL2}(f_a, 0, r_0) = 1 + \frac{c}{1 + M} \cdot \frac{f_a^2 \lambda r_{0d}^2}{2 v^2 (r_{0d} - r_0)^2 f_0}.
(70)

In the implementation of (69), it is worth noting that f_r is the frequency variable with respect to the variable r, while f is the frequency variable with respect to the fast-time variable τ. The relation between f_r and f is f_r = f \cdot \frac{1 + M}{c}.

  6. Residual azimuth compression is carried out by multiplying by the range-variant phase function in the range-time/azimuth-frequency domain:

H_3(f_a, r) = H_2(f_a, r) \cdot \exp\left(j 2\pi \psi_{rL1}(f_a, 0, r_0)\, r\right) = \sigma\left(\frac{r_{0d}}{r_{0d} - r_0} \cdot f_a,\; r\right) \cdot \exp\left(-j 2\pi f_0 \cdot \frac{r_0 + r_{0R}(r_0) - r_{0d}}{c}\right).
(71)

  7. ISFT with respect to the azimuth frequency f_a removes the azimuth scaling:

s_{out}(t_{0T}, r) = \int H_3(f_a, r)\, \exp\left(j 2\pi \frac{r_{0d}}{r_{0d} - r_0} \cdot f_a \cdot t_{0T}\right)\, d\left(\frac{r_{0d}}{r_{0d} - r_0} \cdot f_a\right) = \sigma(t_{0T}, r) \cdot \exp\left(-j 2\pi f_0 \cdot \frac{r_0 + r_{0R}(r_0) - r_{0d}}{c}\right).
(72)

  8. According to (54) and (57), geometric correction is applied by interpolation.

  9. The divided blocks are stitched into a whole SAR image.
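As an illustration of step 4, the sketch below constructs the reference function (67) on a 2-D frequency grid. All parameter values and grid sizes are placeholders (Table 1 is not reproduced here), the rect[f/(kT_p)] envelope is omitted, and steps 5 and 7 would then apply a digital ISFT (Section 6.2) along the range and azimuth directions.

```python
import numpy as np

# Illustrative parameters (placeholders, not the paper's Table 1)
c, f0 = 299792458.0, 9.65e9          # speed of light, assumed X-band carrier (Hz)
v, k = 7600.0, 1.0e12                # platform speed (m/s), chirp rate (Hz/s, assumed)
r_0, r_0R0, r_0d = 726.9e3, 100.0e3, 645.8e3   # r_0R0 stands for r_0R(r_0) (assumed)

f_a = np.linspace(-250.0, 250.0, 512)[:, None]    # azimuth-frequency grid (Hz)
f = np.linspace(-12.5e6, 12.5e6, 1024)[None, :]   # range-frequency grid (Hz)

R0 = r_0 + r_0R0 - r_0d

# Reference function (67): conjugate of H_0 in (49), retaining the interferometric
# phase; the rect[f/(k*T_p)] envelope is omitted for brevity
H_RF = (np.exp(1j * np.pi * f**2 / k)
        * np.exp(1j * 2 * np.pi * f * R0 / c)
        * np.exp(-1j * np.pi * f_a**2 * c * r_0 * r_0d
                 / (v**2 * (r_0d - r_0) * (f + f0))))

# Step 4 is then the element-wise product H_1 = H * H_RF of (68); steps 5 and 7
# apply a digital ISFT (Section 6.2) along range and azimuth, respectively.
```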

7. Simulations

To validate the proposed algorithm, simulations are carried out. The simulation parameters are listed in Table 1. The transmitter's parameters refer to TerraSAR-X, and the receiver's parameters refer to a stratospheric aerostat.

From the knowledge of synchronization, the linear component is the dominant component of the time synchronization error [9]. For the phase synchronization error, both a fixed carrier frequency offset and phase noise are considered [17, 22]. Therefore, the slope of the time synchronization error is given, and the corresponding parameters of the phase synchronization error are also listed in Table 1, where 1 ppm means that the fixed carrier frequency offset is ∆f = f_c · 10^-6, and the Allan variance σ(τ = 1 s) = 1 × 10^-11 can be regarded as representative of the ultra-stable oscillators (USO) of current spaceborne SAR systems [10].
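For completeness, one crude way to generate synchronization errors of the kind described above is sketched below: a linear time drift, a fixed 1-ppm carrier frequency offset, and a random-walk phase as a rough stand-in for the USO noise (a white-frequency-noise model scaled from the quoted Allan deviation). The PRF, the drift slope and the carrier value are placeholders, since Table 1 is not reproduced here, and the noise model is our simplification rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

f_c = 9.65e9          # carrier frequency (assumed X-band)
PRF = 3000.0          # pulse repetition frequency (placeholder)
n_pulses = 2000
t = np.arange(n_pulses) / PRF

# Time synchronization error: dominant linear drift (slope is a placeholder)
e_t = 1.0e-9 * t

# Phase synchronization error: fixed 1-ppm carrier frequency offset ...
phi_offset = 2 * np.pi * (f_c * 1e-6) * t

# ... plus a random-walk phase as a crude white-frequency-noise stand-in,
# scaled from the quoted Allan deviation sigma_y(1 s) = 1e-11
sigma_y = 1e-11
dphi = 2 * np.pi * f_c * sigma_y * np.sqrt(1.0 / PRF) * rng.standard_normal(n_pulses)
phi_e = phi_offset + np.cumsum(dphi)      # total phase synchronization error
```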

7.1 Echo characteristics before and after synchronization

First, the point target located at the scene centre is used to show the characteristics of the echo data before and after the proposed synchronization. As shown in Figure 7, before synchronization, the echo support domain is skewed both in the time domain and in the frequency domain, and the 2-D spectrum shows an azimuth displacement in the frequency domain. After synchronization, the echo support domain becomes straight, which benefits the following focusing process. However, it can be seen from Figure 7d that the azimuth bandwidth is |r_0d/(r_0d - r_0)| times narrower than before synchronization, which verifies the analysis in the previous section.

Figure 7

Echo characteristics of the point target located at the scene centre before and after synchronization. (a) Time-domain echo before synchronization. (b) Frequency-domain echo before synchronization. (c) Time-domain echo after synchronization. (d) Frequency-domain echo after synchronization.

7.2 Imaging simulation for small scene

When the scene size satisfies the constraint condition (59), the proposed algorithm can be applied without block division. As shown earlier, a reasonable scene size is 1 km × 3 km (azimuth direction × slant-range direction). Because the incidence angle of the transmitter is 45°, the scene width along the X-axis (the ground-range direction) is 3 km/cos(45°) ≈ 4.24 km. Here, an imaged scene with size 4 km × 1 km (X-axis × Y-axis) is assumed. According to the geometry depicted in Figure 1 and the parameters listed in Table 1, the coordinate of the point target (Target 5) located at the scene centre is (97.9796 km, 0, 0). To make the following discussion clearer, as shown in Figure 8, a new X-Y coordinate frame is created with its origin located at (97.9796 km, 0, 0) in the old frame. Nine point targets are located in the scene, arranged in a 3 × 3 matrix. Then, the proposed algorithm is used to process the simulated echo. The focused image after geometric correction is shown in Figure 9.

Figure 8

The scene with nine point targets for imaging simulation.

Figure 9

Imaging simulation result using the proposed algorithm.

It can be seen from Figure 9 that all targets are focused precisely and located in the right positions. To further demonstrate the performance of the proposed algorithm, the profiles of targets 1, 5 and 9 are shown in Figure 10, and the quality parameters of imaging result are listed in Table 2.

Figure 10

Two-dimensional profiles of point targets 1, 5 and 9 (from left to right). (a) Azimuth profile of point target 1, (b) azimuth profile of point target 5, (c) azimuth profile of point target 9, (d) range profile of point target 1, (e) range profile of point target 5, (f) range profile of point target 9.

Table 2 Quality parameters of the imaging results

As shown in Figure 10, the profiles of targets 1, 5 and 9 are very close to those of the ideal results. This implies that these three targets are focused well with the proposed algorithm. It can also be found from Figure 10 (a),(c) that the first side lobes are slightly asymmetric for the two edge points. The reason for this is the approximation error shown in (58).

The resolutions of bistatic SAR are 2-D dependent [44]. Therefore, to demonstrate the performance of the proposed algorithm, the resolution before geometric correction is presented in Table 2. According to the parameters listed in Table 1, the theoretical value of both the azimuth resolution and the slant-range resolution is 5.31 m. The theoretical peak sidelobe ratio (PSLR) is -13.26 dB, and the theoretical integrated sidelobe ratio (ISLR) is -9.72 dB [40].

From Table 2, we find that both the azimuth and range resolutions deviate by less than 0.08 m from the theoretical value. The maximum deviation of the measured azimuth PSLR from the theoretical value is less than 0.49 dB, and that of the measured range PSLR is less than 0.14 dB. For the ISLR, the maximum deviations are less than 0.48 and 0.65 dB along the azimuth and range directions, respectively.

7.3 Imaging simulation for large scene

According to (60), to avoid Doppler ambiguity, the Doppler frequency should satisfy max(|f_a|) < PRF/2. In other words, the azimuth width of the imaged scene should satisfy W_a < \frac{\lambda r_{0d}}{v} \cdot (\mathrm{PRF} - B_a). Based on the parameters listed in Table 1, we get W_a < 4.86 km. If we want to extend the azimuth width of the target scene, a higher PRF should be applied.
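This relation is easy to check numerically; the sketch below solves it for the PRF implied by the quoted limit W_a < 4.86 km and also evaluates the admissible azimuth width for a hypothetical higher PRF (the 3-kHz value is an assumption for illustration).

```python
v, lam, r_0d, B_a = 7600.0, 0.031, 645.8e3, 158.99   # values quoted in the text

# PRF implied by the quoted azimuth-width limit W_a < 4.86 km
PRF = 4.86e3 * v / (lam * r_0d) + B_a
print(PRF)                                  # about 2 kHz

# Admissible azimuth width for a hypothetical higher PRF of 3 kHz
print(lam * r_0d * (3000.0 - B_a) / v)      # about 7.5 km
```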

If the scene size extends beyond the constraint condition (59), small blocks should be used in the imaging process. The block processing along the range direction is simple, since only the reference slant range r_0 is replaced for different range blocks. However, as stated before, DCC has to be applied for the different azimuth blocks. Assuming that the azimuth coordinate of a block centre is 1,000 m, the Doppler centroid of this block is 378.53 Hz. The 2-D spectrum of the point target located at (0, 1,000 m), before and after DCC, is shown in Figure 11.

Figure 11

The 2-D spectrum of the point target located at the block centre. (a) Before DCC and (b) after DCC.

It can be seen from Figure 11 that the Doppler frequency is shifted to baseband using DCC. Therefore, the same imaging process can be applied for this block.

Then, an imaged scene with size 8 km × 4 km (X-axis × Y-axis) is assumed. According to the constraint condition (59), we divide the whole scene into 3 × 5 (X-axis × Y-axis) blocks, with small overlaps between blocks. Three point targets are located in this scene, with coordinates (-4,000, -2,000), (0, 0) and (4,000, 2,000) (in metres). Using the proposed algorithm to process the data, the final focused image and zoomed contour graphs of these three targets are shown in Figure 12.

Figure 12

Simulation result for large scene using the proposed algorithm.

It can be seen from Figure 12 that the three targets are well focused and located at the correct positions. This demonstrates the capability of the proposed algorithm for large-scene imaging.

8. Conclusion

Bistatic spaceborne/stratospheric SAR has the potential to play an important role in future missions. The crucial steps for such a system are synchronization and imaging. There have been quite a number of published studies in these two fields. However, a combined treatment of synchronization and imaging has not been developed. This paper proposes an integrative synchronization and imaging approach for a particular configuration: bistatic spaceborne/stratospheric SAR with a fixed receiver.

In published methods, the direct-path signal is a common choice for the synchronization of bistatic SAR systems, since it takes advantage of high SNR and clean phase. However, it is still very difficult to extract the synchronization errors with high precision from the direct-path signal if no other auxiliary data are available. In this paper, as a novel idea, the time delays and peak phases of the direct-path signal are utilized to compensate the corresponding components of the reflected signal directly. Time and phase synchronization errors can be completely removed using the presented method. Meanwhile, as the cost of synchronization, the system impulse response of the synchronized echo becomes quite different from that of general bistatic SAR. To focus this particular synchronized data, the 2-D spectrum of the synchronized data is derived under linear approximations, and then a frequency-domain imaging algorithm is proposed. The theoretical analysis and simulation results show that the proposed approach can provide accurate time and phase synchronization and well-focused images.

Based on the presented method, we continue our research on bistatic spaceborne/stratospheric SAR. The next research steps are twofold. One is the study of synchronization and imaging for the general bistatic spaceborne/stratospheric SAR configuration. The other is the study of interferometric applications of such a bistatic system.

References

  1. Cherniakov M, Saini R, Zuo R, Antoniou M: Space-surface bistatic synthetic aperture radar with global navigation satellite system transmitter of opportunity-experimental results. IET Radar Sonar Navigat. 2007, 1(6):447-458. 10.1049/iet-rsn:20060172

  2. Rodriguez-Cassola M, Baumgartner SV, Krieger G, Moreira A: Bistatic TerraSAR-X/F-SAR spaceborne-airborne SAR experiment: description, data processing, and results. IEEE Trans. Geosci. Remote Sens. 2010, 48(2):781-794.

  3. Walterscheid I, Espeter T, Brenner AR, Klare J, Ender JH, Nies H, Wang R, Loffeld O: Bistatic SAR experiments with PAMIR and TerraSAR-X: setup, processing and image results. IEEE Trans. Geosci. Remote Sens. 2010, 48(8):3268-3279.

  4. Sanz-Marcos J, Lopez-Dekker P, Mallorqui JJ, Aguasca A, Prats P: SABRINA: a SAR bistatic receiver for interferometric applications. IEEE Geosci. Remote Sens. Lett. 2007, 4(2):307-311.

  5. Goh AS, Preiss M, Stacy NJS, Gray DA: Bistatic SAR experiment with the Ingara imaging radar. IET Radar Sonar Navigat. 2010, 4(3):426-437. 10.1049/iet-rsn.2009.0103

  6. Behner F, Reuter S: HITCHHIKER: hybrid bistatic high resolution SAR experiment using a stationary receiver and TerraSAR-X. In EUSAR Conference. Aachen; 2010.

  7. Krieger G, Moreira A: Spaceborne bi- and multistatic SAR: potential and challenges. IEE Proc. Radar Sonar Navigat. 2006, 153(3):184-198. 10.1049/ip-rsn:20045111

  8. Moccia A, Salzillo G, D'Errico M, Rufino G, Alberti G: Performance of spaceborne bistatic synthetic aperture radar. IEEE Trans. Aerosp. Electron. Syst. 2005, 41(4):1383-1395. 10.1109/TAES.2005.1561891

  9. Weiß M: Synchronisation of bistatic radar systems. In IEEE Geoscience and Remote Sensing Symposium. Anchorage; 2004.

  10. Krieger G: Impact of oscillator noise in bistatic and multistatic SAR. IEEE Geosci. Remote Sens. Lett. 2006, 3(3):424-428. 10.1109/LGRS.2006.874164

  11. Eineder M: Oscillator clock drift compensation in bistatic interferometric SAR. In IEEE Geoscience and Remote Sensing Symposium. Toulouse; 2003.

  12. Krieger G, De Zan F: Relativistic effects in bistatic SAR processing and system synchronization. In EuSAR Conference. Oberpfaffenhofen; 2012.

  13. Wang W: GPS-based time and phase synchronization processing for distributed SAR. IEEE Trans. Aerosp. Electron. Syst. 2009, 45(3):1040-1051.

  14. Krieger G, Moreira A, Fiedler H, Hajnsek I, Werner M, Younis M, Zink M: TanDEM-X: a satellite formation for high-resolution SAR interferometry. IEEE Trans. Geosci. Remote Sens. 2007, 45(11):3317-3341.

  15. Tian W, Hu S, Zeng T: A frequency synchronization scheme based on PLL for BiSAR and experiment result. In 9th International Conference on Signal Processing. Beijing; 2008.

  16. Younis M, Metzig R, Krieger G: Performance prediction of a phase synchronization link for bistatic interferometric SAR system. IEEE Geosci. Remote Sens. Lett. 2006, 3(3):429-433. 10.1109/LGRS.2006.874163

  17. Wang W: Approach of adaptive synchronization for bistatic SAR real-time imaging. IEEE Trans. Geosci. Remote Sens. 2007, 45(9):2695-2700.

  18. He Z, He F, Li J, Huang H, Dong Z, Liang D: Echo-domain phase synchronization algorithm for bistatic SAR in alternating bistatic/ping-pong mode. IEEE Geosci. Remote Sens. Lett. 2012, 9(4):604-608.

  19. Saini R, Zuo R, Cherniakov M: Signal synchronization in SS-BSAR based on GLONASS satellite emission. In IET International Conference on Radar System. Edinburgh; 2007.

  20. Saini R, Zuo R, Cherniakov M: Problem of signal synchronization in space-surface bistatic synthetic aperture radar based on global navigation satellite emissions-experimental results. IET Radar Sonar Navigat. 2010, 4(1):110-125. 10.1049/iet-rsn.2008.0121

  21. Espeter T, Walterscheid I, Klare J, Gierull C, Brenner AR, Ender JH, Loffeld O: Progress of hybrid bistatic SAR: synchronization experiments and first imaging results. In EuSAR Conference. Friedrichshafen; 2008.

  22. Lopez-Dekker P, Mallorqui JJ, Serra-Morales P, Sanz-Marcos J: Phase synchronization and Doppler centroid estimation in fixed receiver bistatic SAR systems. IEEE Trans. Geosci. Remote Sens. 2008, 46(11):3459-3471.

  23. Duque S, Lopez-Dekker P, Mallorqui JJ: Single-pass bistatic SAR interferometry using fixed-receiver configurations: theory and experimental validation. IEEE Trans. Geosci. Remote Sens. 2010, 48(6):2740-2749.

  24. Rutman J, Walls FL: Characterization of frequency stability in precision frequency sources. Proceedings of the IEEE 1991, 79(7):952-960. 10.1109/5.84972

  25. Ulander LMH, Hellsten H, Stenstrom G: Synthetic-aperture radar processing using fast factorized back-projection. IEEE Trans. Aerosp. Electron. Syst. 2003, 39(3):760-776. 10.1109/TAES.2003.1238734

  26. Ding Y, Munson DC Jr: A fast back-projection algorithm for bistatic SAR imaging. ICIP 2002, 2:449-452.

  27. Hu C, Zeng T, Long T, Chen J: Fast back-projection algorithm for bistatic SAR with parallel trajectory. In EuSAR Conference. Germany; 2006.

  28. Walterscheid I, Espeter T, Brenner AR, Klare J, Ender JH, Nies H, Wang R, Loffeld O: Bistatic SAR processing and experiments. IEEE Trans. Geosci. Remote Sens. 2006, 44(10):2710-2717.

  29. Loffeld O, Nies H, Peters V, Knedlik S: Models and useful relations for bistatic SAR processing. IEEE Trans. Geosci. Remote Sens. 2004, 42(10):2031-2038.

  30. Wang R, Loffeld O, Nies H, Knedlik S, Ender JHG: A bistatic point target reference spectrum for general bistatic SAR processing. IEEE Geosci. Remote Sens. Lett. 2008, 5(3):517-521.

  31. Neo YL, Wong FH, Cumming IG: A two-dimensional spectrum for bistatic SAR processing using series reversion. IEEE Geosci. Remote Sens. Lett. 2007, 4(1):93-96.

  32. Natroshvili K, Loffeld O, Nies H, Medrano A, Knedlik S: Focusing of general bistatic SAR configuration data with 2-D inverse scaled FFT. IEEE Trans. Geosci. Remote Sens. 2006, 44(10):2718-2727.

  33. Wang R, Loffeld O, Nies H, Knedlik S, Ender JHG: Chirp scaling algorithm for the bistatic SAR data in the constant-offset configuration. IEEE Trans. Geosci. Remote Sens. 2009, 47(3):952-964.

  34. Wong FH, Cumming IG, Neo YL: Focusing bistatic SAR data using the nonlinear chirp scaling algorithm. IEEE Trans. Geosci. Remote Sens. 2008, 46(9):2493-2505.

  35. Neo YL, Wong FH, Cumming IG: Processing of azimuth-invariant bistatic SAR data using the range Doppler algorithm. IEEE Trans. Geosci. Remote Sens. 2008, 46(1):14-21.

  36. Qiu X, Hu D, Ding C: An improved NLCS algorithm with capability analysis for one-stationary BiSAR. IEEE Trans. Geosci. Remote Sens. 2008, 46(10):3179-3186.

  37. Zeng T, Liu F, Hu C, Long T: Image formation algorithm for asymmetric bistatic SAR systems with a fixed receiver. IEEE Trans. Geosci. Remote Sens. 2012, 50(11):4684-4698.

  38. Antoniou M, Saini R, Cherniakov M: Results of a space-surface bistatic SAR image formation algorithm. IEEE Trans. Geosci. Remote Sens. 2007, 45(11):3359-3371.

  39. Wang R, Loffeld O, Nies H, Ul-Ann Q, Medrano-Ortiz A, Knedlik S, Samarah A: Analysis and processing of spaceborne/airborne bistatic SAR data. In IEEE Geoscience and Remote Sensing Symposium. Boston; 2008.

  40. Wang R, Loffeld O, Nies H, Knedlik S, Ul-Ann Q, Medrano-Ortiz A: Frequency-domain bistatic SAR processing for spaceborne/airborne configuration. IEEE Trans. Aerosp. Electron. Syst. 2010, 46(3):1329-1345.

  41. Loffeld O, Hein A: SAR processing by scaled inverse Fourier transformation. In EuSAR Conference. Seattle; 1996.

  42. Loffeld O, Hein A, Schneider F: SAR focusing: scaled inverse Fourier transformation and chirp scaling. In IEEE Geoscience and Remote Sensing Symposium. Seattle; 1998.

  43. Li Y, Zhu D: The geometric-distortion correction algorithm for circular-scanning SAR imaging. IEEE Geosci. Remote Sens. Lett. 2010, 7(2):376-380.

  44. Zeng T, Cherniakov M, Long T: Generalized approach to resolution analysis in BSAR. IEEE Trans. Aerosp. Electron. Syst. 2005, 41(2):461-474. 10.1109/TAES.2005.1468741


Acknowledgements

The authors express their appreciation to Dr. Zeng Zhanfan for his proofreading and helpful discussions. We would also like to thank the reviewers for their very patient work and kind suggestions, which helped a lot in correcting many mistakes, increasing the readability, and improving the quality of this paper.

Author information

Correspondence to Qilei Zhang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Zhang, Q., Chang, W. & Li, X. An integrative synchronization and imaging approach for bistatic spaceborne/stratospheric SAR with a fixed receiver. EURASIP J. Adv. Signal Process. 2013, 165 (2013). https://doi.org/10.1186/1687-6180-2013-165

