
RFI suppression in SAR based on filtering interpretation of SSA and fast implementation

Abstract

Synthetic aperture radar (SAR) has proven to be a powerful remote sensing instrument for underground and obscured object detection. However, SAR echoes are often contaminated by radio frequency interference (RFI) from multiple broadcasting stations. Essentially, the RFI suppression problem is a one-dimensional stationary time series denoising problem. This article proposes a novel RFI suppression algorithm based on singular spectrum analysis (SSA), viewed from a linear time-invariant systems perspective. It is shown that the SSA algorithm is equivalent to a bank of finite impulse response (FIR) filters. In addition, this article introduces, for the first time in this context, two approximation methods that can remarkably speed up the spectral decomposition, the Nyström method and the Column-Sampling approximation, to obtain the coefficients of the above SSA FIR filters. Simulation results and imaging results on measured data demonstrate the efficiency and validity of the algorithm.

1. Introduction

Synthetic aperture radar (SAR) operates in the P-, L-, C-, and X-bands to image the Earth's surface along the carrier platform's flight path, with the antenna oriented perpendicular to the flight direction and pointed obliquely downward. However, these frequency bands are not reserved exclusively for SAR applications. They are also occupied by other services, such as radio and television stations, as well as telecommunication systems. When the carrier platform passes over these broadcasting stations, the SAR receiver picks up their signals as well. These interfering signals are called radio frequency interferences (RFIs); they overlay the SAR information and become visible in the SAR image [1]. Sometimes they even saturate the SAR receiver because their power exceeds the receiver's dynamic range.

RFI suppression has therefore remained an active research topic for decades. Early methods are mainly based on coherent estimation and subtraction of the RFIs [2]. Their performance relies heavily on complicated data models and on the accuracy of parameter estimation, which makes them cumbersome and not robust. Another commonly used approach is notch filtering [3]; its main drawback is degradation in the range domain of the SAR imagery when multiple RFI sources are present. More attractive methods are based on least mean square (LMS) adaptive filters [4]; however, they involve a tradeoff between the filter length and the numerical sensitivity of the adaptation. Later, some researchers proposed subspace filters that estimate the RFI signals with a Gram-Schmidt orthogonalization procedure or an eigenvalue decomposition (ED) [5, 6]. These methods are robust, but their large computational burden limits their application in practice.

This article proposes a novel RFI suppression algorithm based on subspace projection, making use of singular spectrum analysis (SSA). The roles of signal and noise in the proposed algorithm are exchanged, just as in the LMS mechanism. Each SAR echo received during a pulse repetition interval can be considered a one-dimensional time series. Thus, the RFI suppression problem is in essence a one-dimensional stationary time series denoising problem, and SSA is a prevalent algorithm when clustering subspace models are applied to time series datasets [7]. The aim of SSA here is to separate an RFI cluster from a 'wideband noise' cluster. It can be viewed as three steps: an 'embedding' step, a 'spectral decomposition' step, and a 're-embedding' step. Interestingly, these steps can be realized through a joint analysis-synthesis group of finite impulse response (FIR) filters [8-10]. Based on this equivalence, a special kind of filtering mechanism for RFI suppression can be established.

However, the large volume and rich information content of SAR data increase the storage and processing burden of the SAR imaging system. For this reason, this article introduces two random sampling techniques, the Nyström method [11-15] and the Column-Sampling approximation [14-17], as powerful tools to approximate the coefficients of the proposed SSA-FIR filter banks. These techniques have recently been used in the machine learning community [13] and analyzed in the theoretical computer science community [13, 16, 17]. Their reconstruction error bounds and application guidelines are elaborated in [14-17]. For the first time, we combine them with SAR signal processing and RFI suppression.

The remainder of this article is organized as follows. In Section 2, the signal model of RFI sources contaminating the SAR signal is formulated, followed by the implementation details of the FIR filtering method that is equivalent to the SSA algorithm. In Section 3, we give two fast approaches, the Nyström method and the Column-Sampling approximation, for the filtering method of Section 2. In Section 4, the proposed algorithm is tested by means of numerical simulations and P-band real SAR data, and the results are compared and analyzed. Finally, Section 5 draws the conclusions.

2. Signal model and filtering interpretation of SSA

Assume that the received SAR signal s(t_r, t_a) is measured at discrete fast-time samples t_r = 0, 1, ..., M-1 and discrete aperture (slow-time) samples t_a = 0, 1, ..., N-1. It is the SAR echo signal s_echo(t_r, t_a) contaminated by the RFI signal s_RFI(t_r, t_a) and the system noise s_Noise(t_r, t_a):

$$ s(t_r, t_a) = s_{\mathrm{echo}}(t_r, t_a) + s_{\mathrm{RFI}}(t_r, t_a) + s_{\mathrm{Noise}}(t_r, t_a) $$
(1)

Generally, for distributed targets (e.g., in a real scene) s_echo(t_r, t_a) resembles additive white Gaussian noise (AWGN), and the system noise s_Noise(t_r, t_a) is AWGN as well. The RFI signal has a comparatively narrow band and can be modeled as, but is not limited to, a sum of complex exponentials. Previous studies [1-6] pointed out that the power of the RFI signals is tens of dB greater than that of the SAR echoes. Therefore, s_echo(t_r, t_a) and s_Noise(t_r, t_a) can jointly be considered as wideband background noise that is independent of the RFI sources, denoted s_WB(t_r, t_a) = s_echo(t_r, t_a) + s_Noise(t_r, t_a).

For a given t_a, s(t_r, t_a) is a one-dimensional time series, denoted s = (s[0], s[1], ..., s[M-1]) for simplicity. Normally, the mean value is removed from the original data vector beforehand. According to the first step of the SSA algorithm [7], the embedding step, we choose a window length L and construct K = M-L+1 lagged vectors:

$$ \mathbf{s}_k = \bigl( s[k+L-2], \ldots, s[k-1] \bigr)^{\mathrm{T}} = \mathbf{s}_{\mathrm{RFI},k} + \mathbf{s}_{\mathrm{WB},k}, \quad k = 1, \ldots, K, $$
(2)

where superscript 'T' denotes transpose. Then, an L × K matrix with Toeplitz structure is composed:

$$ \mathbf{S} = \bigl[ \mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_K \bigr] = \begin{bmatrix} s[L-1] & s[L] & \cdots & s[M-1] \\ s[L-2] & s[L-1] & \cdots & s[M-2] \\ \vdots & \vdots & \ddots & \vdots \\ s[0] & s[1] & \cdots & s[M-L] \end{bmatrix} $$
(3)

Note that G = SS^H is an L × L symmetric positive semi-definite (SPSD) matrix, where superscript 'H' denotes conjugate transpose. We may express it in spectral decomposition form as G = UΛU^H, where U is an orthogonal matrix whose columns are the eigenvectors u_1, ..., u_L of G, and Λ = diag(λ_1, λ_2, ..., λ_L) is a diagonal matrix whose diagonal entries are the ordered eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_L of G. Based on Equation (1), we suppose that the r leading eigenpairs $\{\lambda_i, \mathbf{u}_i\}_{i=1}^{r}$, which must be separated from the other L-r eigenpairs to reconstruct a 'clean' data matrix without 'noise', are related to the RFI subspace. If the signal and noise subspaces are to be separated sufficiently well, the noise has to be white and zero-mean. In the preceding paragraph, we stated the preconditions of the proposed algorithm: radar echoes from distributed targets have white Gaussian noise characteristics, and the mean value has been removed from the original record beforehand. Then, the reconstructed RFI cluster can be obtained via

$$ \tilde{\mathbf{S}}_{\mathrm{RFI}} = \mathbf{U}\mathbf{P}\mathbf{U}^{H}\mathbf{S} = \mathbf{U}\bigl(\mathbf{P}\mathbf{U}^{H}\mathbf{S}\bigr) = \mathbf{U}\mathbf{Q} $$
(4)

where P is a diagonal selection matrix whose i-th diagonal entry satisfies P_ii = 1 if the i-th row of Q = PU^H S is selected and P_ii = 0 if it is discarded. Thus, we select the first r = rank(G) entries, corresponding to the dimension of the RFI subspace, i.e., $\{P_{ii} = 1\}_{i=1}^{r}$.
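To make the embedding and projection steps concrete, a minimal numerical sketch follows (Python/NumPy; the function names, the toy signal, and the chosen L and r are illustrative assumptions, not the authors' implementation). It builds the Toeplitz trajectory matrix of Equation (3), forms G = SS^H, and applies the projection of Equation (4) for a given rank r.

```python
import numpy as np

def embed_toeplitz(s, L):
    """Build the L x K trajectory matrix of Eq. (3): row i holds s[L-1-i], ..., s[M-1-i]."""
    M = len(s)
    return np.array([s[L - 1 - i : M - i] for i in range(L)])

def rfi_subspace_projection(s, L, r):
    """Project the embedded data onto the r leading eigenvectors of G = S S^H (Eq. (4))."""
    S = embed_toeplitz(s, L)
    G = S @ S.conj().T                       # L x L SPSD matrix
    lam, U = np.linalg.eigh(G)               # eigh returns ascending eigenvalues
    U, lam = U[:, ::-1], lam[::-1]           # reorder so that lambda_1 >= lambda_2 >= ...
    Ur = U[:, :r]                            # basis of the assumed RFI subspace
    S_rfi = Ur @ (Ur.conj().T @ S)           # U P U^H S with P selecting the first r entries
    return S_rfi, U, lam

# Toy usage: one strong complex sinusoid (the 'RFI') buried in white noise (values illustrative)
rng = np.random.default_rng(0)
n = np.arange(1024)
s = 100 * np.exp(2j * np.pi * 0.07 * n) + rng.standard_normal(1024)
s = s - s.mean()                             # remove the mean beforehand, as the algorithm requires
S_rfi, U, lam = rfi_subspace_projection(s, L=64, r=1)
```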

Note that there may be a varying number of RFI sources in the region the carrier platform transits. That is, the dimension r of the RFI subspace varies over the synthetic aperture duration and needs to be determined. Historically, several methods have been proposed for selecting the rank r, such as information-theoretic criteria (AIC, the Akaike Information Criterion, and MDL, Minimum Description Length [18]) or ESPRIT-type criteria based on the shift invariance equation (ESTER, ESTimation ERror [19], and SAMOS, Subspace-based Automatic Model Order Selection [20]). The method we adopt here is instead based on the eigenvalue distribution of the approximated sample covariance matrix [21]. It uses several results from random matrix theory, which provides a set of remarkable tools for rank estimation and has progressed substantially over the last 10 years. In short, this method makes use of the theories developed in [22-24] to determine a specific eigenvalue threshold. Here, we just give the rank estimation formula:

$$ r = \mathrm{rank}(\mathbf{G}) = \arg\min_{j} \Bigl\{ \lambda_j \le \sigma_j^2 \bigl[ \mu(L, K-j) + \tau(\varsigma)\,\delta(L, K-j) \bigr] \Bigr\} - 1, \quad 1 \le j \le \min(L, K), $$
(5)

where σ_j^2 is the noise level, whose calculation procedure is also given in [21]; τ(ς) is the corresponding value computed by inverting the Tracy-Widom distribution [21, 23]; and ς is the eigenvalue significance level, normally a small positive number (in our raw-data experiment we set ς = 0.05). Besides, μ(L, K-j) and δ(L, K-j) are the centering and scaling quantities, respectively:

$$ \mu(L, K-j) = \bigl( \sqrt{L} + \sqrt{K-j} \bigr)^{2} $$
(6)
$$ \delta(L, K-j) = \bigl( \sqrt{L} + \sqrt{K-j} \bigr) \left( \frac{1}{\sqrt{L}} + \frac{1}{\sqrt{K-j}} \right)^{1/3} $$
(7)
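The thresholding logic of Equations (5)-(7) can be sketched as follows; the noise-level estimate sigma2 and the Tracy-Widom quantile tau(ς) are assumed to be supplied externally (their computation follows [21, 23] and is not reproduced here), so this is only a schematic illustration of the rule, not the full procedure of [21].

```python
import numpy as np

def mu(L, n):
    """Centering quantity of Eq. (6)."""
    return (np.sqrt(L) + np.sqrt(n)) ** 2

def delta(L, n):
    """Scaling quantity of Eq. (7)."""
    return (np.sqrt(L) + np.sqrt(n)) * (1.0 / np.sqrt(L) + 1.0 / np.sqrt(n)) ** (1.0 / 3.0)

def estimate_rank(eigvals, L, K, sigma2, tau):
    """Eq. (5): first eigenvalue that falls below the noise threshold fixes the rank.

    eigvals : eigenvalues of G sorted in descending order
    sigma2  : noise-level estimate(s), scalar or per-j array (computed as in [21])
    tau     : Tracy-Widom quantile tau(varsigma) for the chosen significance level
    Assumes L < K (as in the article, where K = M - L + 1 >> L), so K - j stays positive.
    """
    sigma2 = np.broadcast_to(np.asarray(sigma2, dtype=float), (min(L, K),))
    for j in range(1, min(L, K) + 1):
        threshold = sigma2[j - 1] * (mu(L, K - j) + tau * delta(L, K - j))
        if eigvals[j - 1] <= threshold:
            return j - 1          # the first j-1 eigenpairs are attributed to the RFI subspace
    return min(L, K)              # no eigenvalue fell below the threshold
```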

Now, recalling Equation (4), denote the i-th row of Q by q_i = P_ii u_i^H S, i = 1, ..., L; then (4) can be written as

$$ \tilde{\mathbf{S}}_{\mathrm{RFI}} = \mathbf{u}_1 P_{11}\mathbf{u}_1^{H}\mathbf{S} + \mathbf{u}_2 P_{22}\mathbf{u}_2^{H}\mathbf{S} + \cdots + \mathbf{u}_r P_{rr}\mathbf{u}_r^{H}\mathbf{S} = \mathbf{u}_1\mathbf{q}_1 + \mathbf{u}_2\mathbf{q}_2 + \cdots + \mathbf{u}_r\mathbf{q}_r = \sum_{i=1}^{r} \tilde{\mathbf{S}}_{\mathrm{RFI},i} $$
(8)

Interestingly, the 1 × K row vector q_i = P_ii u_i^H S can be considered a filtered version of the original data sequence [8, 9]. Each sample of q_i can be written as follows:

$$ q_i[n] = P_{ii} \sum_{j=1}^{L} u_{j,i}\, s[n-j+1], \quad L-1 \le n < M, $$
(9)

There are K = M-(L-1) samples of the time series q_i[n] in q_i. Equation (9) is the convolution sum of the eigenvector u_i with the (n-L+2)-th original lagged data vector. Thus, the entries of the eigenvector u_i correspond to the coefficients of an FIR filter, namely the analysis filter. Introducing the z-transform $\mathcal{Z}$, we obtain the transfer function H_i(z) of the analysis filter

$$ H_i(z) = \frac{Q_i(z)}{S(z)} = P_{ii} \sum_{j=1}^{L} u_{j,i}\, z^{-(j-1)} = P_{ii} \bigl( u_{1,i} + u_{2,i}\, z^{-1} + \cdots + u_{L,i}\, z^{-(L-1)} \bigr) $$
(10)

where $Q_i(z) = \mathcal{Z}\{q_i[n]\}$, $S(z)\, z^{-(j-1)} = \mathcal{Z}\{s[n-j+1]\}$, and z is a complex variable.
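The equivalence between the projection row q_i = u_i^H S and an FIR filtering of the raw sequence, stated by Equations (9)-(10), can be checked numerically; the following self-contained sketch (the toy signal and window length are assumptions for illustration only) compares the matrix product with a direct convolution whose taps are the eigenvector entries.

```python
import numpy as np

# Illustrative setup: s is a raw sequence, u1 the leading eigenvector of G = S S^H
rng = np.random.default_rng(0)
n = np.arange(1024)
s = 10 * np.exp(2j * np.pi * 0.07 * n) + rng.standard_normal(1024)
s = s - s.mean()
L, M = 64, len(s)
S = np.array([s[L - 1 - i : M - i] for i in range(L)])   # Toeplitz trajectory matrix, Eq. (3)
_, U = np.linalg.eigh(S @ S.conj().T)
u1 = U[:, -1]                                            # leading eigenvector (eigh is ascending)

# Eq. (9): q_1[n] = sum_j u_{j,1} s[n-j+1] is an FIR filtering of s with taps u_{j,1}
q_matrix = u1.conj() @ S                                 # projection row of Q = U^H S
q_filter = np.convolve(s, u1.conj(), mode="valid")       # the same K values via convolution
print(np.allclose(q_matrix, q_filter))                   # True: SSA projection == FIR filtering
```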

In the sequel, we perform the re-embedding step on the reconstructed RFI-embedded matrix $\tilde{\mathbf{S}}_{\mathrm{RFI}}$ to recover a one-dimensional time series. However, $\tilde{\mathbf{S}}_{\mathrm{RFI}}$ no longer has Toeplitz structure. By substituting all entries on each diagonal with their respective average value, we obtain a new Toeplitz matrix, and these averaged values constitute the final pure RFI signal [7]. Taking $\tilde{\mathbf{S}}_{\mathrm{RFI},i} = \mathbf{u}_i \mathbf{q}_i$ as an example, the procedure can be described as follows.

$$ \tilde{\mathbf{S}}_{\mathrm{RFI},i} = \mathbf{u}_i \mathbf{q}_i = \begin{bmatrix} u_{1,i}\, q_i[L-1] & u_{1,i}\, q_i[L] & \cdots & u_{1,i}\, q_i[M-1] \\ u_{2,i}\, q_i[L-1] & u_{2,i}\, q_i[L] & \cdots & u_{2,i}\, q_i[M-1] \\ \vdots & \vdots & \ddots & \vdots \\ u_{L,i}\, q_i[L-1] & u_{L,i}\, q_i[L] & \cdots & u_{L,i}\, q_i[M-1] \end{bmatrix} $$
(11)

The diagonal averaging process of this rank one matrix is given by

$$ \tilde{s}_i[n] = \frac{1}{N_d} \sum_{j=a}^{b} u_{j,i}\, q_i[n+j-1], \quad n = 0, \ldots, M-1, $$
(12)

where N_d is the number of entries on the current diagonal, and a and b depend on the location of the current diagonal. For the lower-left corner of the matrix in Equation (11), N_d = n+1, a = L-N_d, b = L; for the upper-right corner of the same matrix, N_d = M-n, a = 1, b = L-N_d. Obviously, Equation (12) can also be seen as a convolution sum that can be interpreted as an anti-causal FIR filter. Accordingly, both of the above corner cases can be unified by formally setting q_i[n] = 0 for 0 ≤ n ≤ L-2 or M ≤ n ≤ M+L-2. Therefore, the synthesis transfer function in the steady-state case is given by

$$ F_i(z) = \frac{\tilde{S}_i(z)}{Q_i(z)} = \frac{1}{L} \sum_{j=1}^{L} u_{j,i}\, z^{\,j-1} = \frac{1}{L} \bigl( u_{1,i} + u_{2,i}\, z + \cdots + u_{L,i}\, z^{\,L-1} \bigr) $$
(13)

where $\tilde{S}_i(z) = \mathcal{Z}\{\tilde{s}_i[n]\}$ and $Q_i(z)\, z^{\,j-1} = \mathcal{Z}\{q_i[n+j-1]\}$.
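The diagonal-averaging re-embedding of Equations (11)-(12) amounts to averaging each diagonal of the rank-one component matrix; a sketch is given below (an illustrative helper, not the authors' code), with the index bookkeeping written for the Toeplitz embedding of Equation (3).

```python
import numpy as np

def diagonal_average(S_tilde):
    """Average the diagonals of an L x K component matrix back into a length-M series (Eq. (12)).

    With the Toeplitz embedding of Eq. (3), entry (j, k) carries time index n = (L-1-j) + k
    (0-based), so all entries belonging to the same n are averaged.
    """
    L, K = S_tilde.shape
    M = L + K - 1
    out = np.zeros(M, dtype=S_tilde.dtype)
    count = np.zeros(M)
    for j in range(L):
        n = (L - 1 - j) + np.arange(K)       # time indices covered by row j
        out[n] += S_tilde[j, :]
        count[n] += 1
    return out / count

# Example with a rank-one component matrix (u and q are illustrative placeholders)
L, K = 8, 25
u = np.random.default_rng(1).standard_normal(L)
q = np.random.default_rng(2).standard_normal(K)
s_component = diagonal_average(np.outer(u, q))   # length M = L + K - 1 reconstructed series
```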

Consequently, the whole SSA procedure, which comprises projection, reconstruction, and diagonal averaging, can be described by the global transfer function below

$$ T_i(z) = H_i(z) F_i(z) = \frac{Q_i(z)}{S(z)} \cdot \frac{\tilde{S}_i(z)}{Q_i(z)} = \frac{\tilde{S}_i(z)}{S(z)} = \sum_{j=-(L-1)}^{L-1} t_{j,i}\, z^{j} $$
(14)

By substituting (10) and (13) into (14), we can see that t_{j,i} = t_{-j,i}, j = 1, ..., L-1. Letting $z = e^{j\omega}$, $j = \sqrt{-1}$, the related frequency response arises via

$$ T_i\bigl(e^{j\omega}\bigr) = t_{0,i} + 2 \sum_{j=1}^{L-1} t_{j,i} \cos(j\omega) $$
(15)

This is a zero-phase filter whose output signal is always in phase with the input signal, so the proposed RFI suppression algorithm is phase preserving. Moreover, it can be observed that |H_i(e^{jω})| and |F_i(e^{jω})| differ only in a scaling factor and in the sign of the exponent of the complex exponential argument. Hence, T_i(e^{jω}) resembles H_i(e^{jω}) in shape, and its magnitude is given by |T_i(e^{jω})| = |H_i(e^{jω})|^2 / L. The pattern of the global FIR filter group is shown in Figure 1.
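The zero-phase property claimed by Equations (14)-(15) is easy to verify numerically: the global impulse response is the convolution of the analysis taps with the (scaled, time-reversed) synthesis taps, and it comes out symmetric. The sketch below uses a surrogate real eigenvector, so it only illustrates the mechanism.

```python
import numpy as np

# Illustrative check of Eqs. (14)-(15): the global filter t = h * f is symmetric (zero phase)
rng = np.random.default_rng(3)
L = 32
u = rng.standard_normal(L)
u /= np.linalg.norm(u)                     # surrogate (real) eigenvector

h = u                                      # analysis taps, Eq. (10), with P_ii = 1
f = u[::-1] / L                            # synthesis taps, Eq. (13), written as a causal sequence
t = np.convolve(h, f)                      # global impulse response, 2L-1 taps, lag 0 at index L-1
print(np.allclose(t, t[::-1]))             # True: t_{j,i} = t_{-j,i}, hence zero phase

# Frequency response via Eq. (15)
omega = np.linspace(0, np.pi, 512)
lags = np.arange(1, L)
T = t[L - 1] + 2 * np.cos(np.outer(omega, lags)) @ t[L:]
```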

Figure 1: FIR eigenfilter description: H_i(z) are analysis filters and F_i(z) are synthesis filters.

Besides, if the alternative embedding procedure is adopted, in which the trajectory matrix S has Hankel rather than Toeplitz structure (see [8, 9] for example), the properties of H_i(e^{jω}) and F_i(e^{jω}) interchange, i.e., the former corresponds to an anti-causal filter whereas the latter becomes a causal filter.

3. Approximation of spectral decomposition

From Equations (10), (13), and (14), we learn that the entries of the eigenvector u_i correspond to the coefficients of the proposed SSA FIR filter. Traditionally, eigenvectors are obtained via a spectral decomposition algorithm, either SVD of S or ED of G. Nevertheless, these are prohibitive for large L. Here, we introduce two approximate spectral decomposition methods for large dense matrices: the Nyström method [11-15] and the Column-Sampling approximation [14-17]. Both are based on random sampling techniques which operate only on a subset of the matrix and provide a powerful alternative for the approximate spectral decomposition problem.

As described in the previous section, the proposed algorithm works on the L × L SPSD matrix G = SS^H, whose spectral decomposition is G = UΛU^H. Suppose we randomly (i.e., uniformly) sample l ≪ L columns of G without replacement. Let C denote the L × l matrix formed by these sampled columns and W the l × l matrix consisting of the intersection of these l columns with the corresponding l rows of G. Based on this sampling, the columns and rows of G are rearranged so that, without loss of generality, G and C can be written as follows:

$$ \mathbf{G} = \begin{bmatrix} \mathbf{W} & \mathbf{G}_{21}^{H} \\ \mathbf{G}_{21} & \mathbf{G}_{22} \end{bmatrix} \quad \text{and} \quad \mathbf{C} = \begin{bmatrix} \mathbf{W} \\ \mathbf{G}_{21} \end{bmatrix} $$
(16)

Note that W is also SPSD since G is SPSD. By performing ED or SVD on the much smaller matrices W and C, we can generate approximations of the eigenpairs U and Λ, denoted $\tilde{\mathbf{U}}$ and $\tilde{\boldsymbol{\Lambda}}$ respectively, as we now describe.

The Nyström method was originally introduced to obtain numerical solutions to integral equations [25]. The same idea was later applied to extend the solution of a reduced matrix eigen-decomposition problem to approximate the eigenvectors of an SPSD matrix. The Nyström method provides an approximation of G using W and C as follows:

$$ \mathbf{G} \approx \tilde{\mathbf{G}} = \mathbf{C}\mathbf{W}^{-1}\mathbf{C}^{H} = \begin{bmatrix} \mathbf{W} & \mathbf{G}_{21}^{H} \\ \mathbf{G}_{21} & \mathbf{G}_{21}\mathbf{W}^{-1}\mathbf{G}_{21}^{H} \end{bmatrix} $$
(17)

where the superscript -1 denotes the matrix inverse. Now define the spectral decomposition $\mathbf{W} = \mathbf{U}_w \boldsymbol{\Lambda}_w \mathbf{U}_w^{H}$; accordingly, the approximations of the eigenpairs are

$$ \tilde{\mathbf{U}} := \sqrt{\frac{l}{L}}\, \mathbf{C}\, \mathbf{U}_w \boldsymbol{\Lambda}_w^{-1} $$
(18a)
$$ \tilde{\boldsymbol{\Lambda}} := \frac{L}{l}\, \boldsymbol{\Lambda}_w $$
(18b)
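A compact sketch of the Nyström approximation of Equations (16)-(18) is given below (uniform sampling without replacement; the function name and the truncation remark are our own additions, not part of the original description).

```python
import numpy as np

def nystrom_eigs(G, l, rng=None):
    """Nystrom approximation of the eigenpairs of an L x L SPSD matrix G (Eqs. (16)-(18))."""
    rng = np.random.default_rng(rng)
    L = G.shape[0]
    idx = rng.choice(L, size=l, replace=False)     # uniform sampling without replacement
    C = G[:, idx]                                  # L x l sampled columns
    W = C[idx, :]                                  # l x l intersection block
    lam_w, U_w = np.linalg.eigh(W)
    lam_w, U_w = lam_w[::-1], U_w[:, ::-1]         # descending order
    # Eqs. (18a)-(18b); near-zero eigenvalues of W should be truncated in practice
    U_tilde = np.sqrt(l / L) * C @ (U_w / lam_w)
    lam_tilde = (L / l) * lam_w
    return U_tilde, lam_tilde
```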

Unlike the Nyström method, the Column-Sampling method approximates the eigenpairs of G by using the SVD of C directly. It was initially motivated by the search for a simple and intuitive algorithm that computes a description of a low-rank approximation to a very large matrix qualitatively faster than the SVD [17]. Suppose the SVD of C is $\mathbf{C} = \mathbf{U}_c \boldsymbol{\Lambda}_c \mathbf{V}_c^{H}$; then the approximated eigenpairs of G are obtained by

$$ \tilde{\mathbf{U}} := \mathbf{U}_c $$
(19a)
$$ \tilde{\boldsymbol{\Lambda}} := \sqrt{\frac{L}{l}}\, \boldsymbol{\Lambda}_c $$
(19b)

Accordingly, the approximation of G is generated by substituting (19a) and (19b) into $\mathbf{G} \approx \tilde{\mathbf{G}} = \tilde{\mathbf{U}}\tilde{\boldsymbol{\Lambda}}\tilde{\mathbf{U}}^{H}$:

$$ \mathbf{G} \approx \tilde{\mathbf{G}} = \tilde{\mathbf{U}}\tilde{\boldsymbol{\Lambda}}\tilde{\mathbf{U}}^{H} = \mathbf{U}_c \sqrt{\tfrac{L}{l}}\, \boldsymbol{\Lambda}_c \mathbf{U}_c^{H} = \sqrt{\tfrac{L}{l}}\, \mathbf{C}\mathbf{V}_c\boldsymbol{\Lambda}_c^{-1}\boldsymbol{\Lambda}_c \bigl( \mathbf{C}\mathbf{V}_c\boldsymbol{\Lambda}_c^{-1} \bigr)^{H} = \sqrt{\tfrac{L}{l}}\, \mathbf{C}\mathbf{V}_c\boldsymbol{\Lambda}_c^{-1}\mathbf{V}_c^{H}\mathbf{C}^{H} = \mathbf{C} \left[ \sqrt{\tfrac{l}{L}} \bigl( \mathbf{C}^{H}\mathbf{C} \bigr)^{1/2} \right]^{-1} \mathbf{C}^{H} $$
(20)

Here, we use $\mathbf{U}_c = \mathbf{C}\mathbf{V}_c\boldsymbol{\Lambda}_c^{-1}$ and $\mathbf{C}^{H}\mathbf{C} = \mathbf{V}_c\boldsymbol{\Lambda}_c^{2}\mathbf{V}_c^{H}$. Compared with (17), it is obvious that the two methods have similar forms: the latter replaces W in (17) with $\sqrt{l/L}\,\bigl(\mathbf{C}^{H}\mathbf{C}\bigr)^{1/2}$.
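The Column-Sampling counterpart of Equations (19)-(20) differs only in taking the SVD of C directly; a matching sketch, with an illustrative synthetic usage, follows.

```python
import numpy as np

def column_sampling_eigs(G, l, rng=None):
    """Column-Sampling approximation of the eigenpairs of G via the SVD of C (Eqs. (19a)-(19b))."""
    rng = np.random.default_rng(rng)
    L = G.shape[0]
    idx = rng.choice(L, size=l, replace=False)     # uniform sampling without replacement
    C = G[:, idx]                                  # L x l sampled columns
    U_c, sig_c, _ = np.linalg.svd(C, full_matrices=False)
    return U_c, np.sqrt(L / l) * sig_c             # Eq. (19a) and Eq. (19b)

# Small synthetic check (illustrative only): approximate the leading eigenvectors of a random SPSD G
rng = np.random.default_rng(0)
A = rng.standard_normal((256, 512))
G = A @ A.T
U_col, lam_col = column_sampling_eigs(G, l=64, rng=1)
```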

The above sampling-based approximations adopt the most basic uniform sampling approach to pick l columns from the original matrix G. Alternatively, some researchers derived non-uniform sampling approaches, e.g., selecting the i-th column with a weight proportional to either the corresponding diagonal element G_ii or the squared column norm [13, 17]. However, in recent articles that provide guidance for applying sampling-based approximations to various problems, Kumar et al. [14, 15] pointed out that, for large dense matrices, uniform sampling without replacement, in addition to being more efficient in both time and space, produces more effective approximations. Furthermore, Kumar et al. [15] observed that low-rank approximation problems can be classified into two groups, matrix projection and spectral reconstruction, and that the former approximations tend to be more accurate than the latter when sampling-based approximation methods are adopted. According to Equation (4), the RFI signal reconstruction belongs to the first kind. Based on the above analysis, the uniform sampling approach is a good choice.

In the basic SSA algorithm, the most time-consuming step is the spectral decomposition of the L × L matrix G (or of the L × K trajectory matrix S). The computational cost of the SVD of S, usually computed by means of the Golub-Reinsch algorithm [26], is O(L²K + LK² + K³) multiplications. ED of G requires O(L³) multiplications, which is still prohibitive for large L. In contrast, the Nyström method only needs to compute the eigenpairs of the sampled submatrix of G, which requires O(l³ + l²L) operations with r ≤ l (l³ for the ED of W and l²L for the multiplication with C), and the Column-Sampling approximation needs O(L²l + Ll² + l³) multiplications for the SVD of C. Beyond counting floating-point operations, we supply a comparison of concrete CPU times when implementing the different spectral decomposition algorithms on the P-band real dataset, shown in Section 4.
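As a rough illustration of this cost gap (not a reproduction of the CPU timings reported in Section 4, and reusing the nystrom_eigs and column_sampling_eigs sketches above), one can time the full ED against the sampled decompositions on a synthetic SPSD matrix:

```python
import time
import numpy as np

def timed(fn, *args):
    """Return fn(*args) together with its wall-clock time in seconds."""
    t0 = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - t0

rng = np.random.default_rng(0)
A = rng.standard_normal((1024, 2048))
G = A @ A.T                                        # stand-in for the L x L matrix G = S S^H

_, t_ed = timed(np.linalg.eigh, G)                 # full ED, O(L^3)
_, t_nys = timed(nystrom_eigs, G, 128)             # ED of an l x l block plus one multiplication
_, t_col = timed(column_sampling_eigs, G, 128)     # SVD of an L x l slice
print(f"ED: {t_ed:.3f}s  Nystrom: {t_nys:.3f}s  Column-Sampling: {t_col:.3f}s")
```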

4. Numerical experiments

In this section, we first perform numerical simulations on a time series to illustrate the methods discussed above. We consider a linear frequency modulation (LFM) signal, namely a chirp signal, which is often used as the SAR transmitted pulse [27]: s_echo = exp(-jπKt²), -T/2 ≤ t ≤ T/2, where the pulse width is T = 32 μs, the bandwidth B = |K·T| is 9.6 MHz, the center frequency is at zero, and the sampling frequency is 39.6 MHz. All these parameters are typical for a SAR transmitted pulse except the sampling frequency. According to the Nyquist principle, 12 MHz would be enough for the given bandwidth, but we adopt a higher sampling frequency in order to obtain a comparatively 'long' time series, which has 1,844 sample points. Three real sinusoidal signals with center frequencies of 1.8, 3.2, and 3.5 MHz, respectively, are added as RFIs. Two of the RFIs are given close frequencies so that we can examine the RFI suppression capability of the SSA-FIR filter banks when RFIs with close frequencies are present; the signal-to-noise ratio is 40 dB. A sketch of this simulated test signal is given below.
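The simulated test signal can be generated as follows (a sketch with the stated chirp and RFI parameters; the exact record length and the RFI amplitude scaling used by the authors are not specified, so those are assumptions).

```python
import numpy as np

fs = 39.6e6                                   # sampling frequency [Hz]
T = 32e-6                                     # pulse width [s]
B = 9.6e6                                     # chirp bandwidth [Hz]
Kr = B / T                                    # chirp rate (K in the text)
t = np.arange(-T / 2, T / 2, 1 / fs)          # time axis over the pulse

s_echo = np.exp(-1j * np.pi * Kr * t**2)      # baseband LFM pulse s_echo = exp(-j*pi*K*t^2)
f_rfi = [1.8e6, 3.2e6, 3.5e6]                 # RFI centre frequencies [Hz]
rfi = sum(np.sin(2 * np.pi * f * t) for f in f_rfi)
a_rfi = np.sqrt(2 * 1e4 / len(f_rfi))         # amplitude giving ~40 dB RFI-to-chirp power (assumed scaling)
s_mix = s_echo + a_rfi * rfi                  # contaminated test signal
```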

We adopt the above LFM signal as the calibrated signal, whose time domain waveform and corresponding power spectrum are shown in Figure 2a, b. After contamination by the RFIs (three sinusoidal signals), the time domain waveform and power spectrum of the mixed signal are shown in Figure 2c, d. One can see that the power of the RFIs is about 40 dB higher than that of the calibrated signal. The method described in [5], which we call the 'ED method' for convenience, and the notch filtering method in [3] are chosen as reference RFI suppression algorithms. We choose L = 460, and because a single real sinusoid $\sin x = \frac{1}{2j}\bigl( e^{jx} - e^{-jx} \bigr)$ corresponds to rank 2, the rank is r = 6.

Figure 2: Time domain signal waveforms and corresponding normalized power spectra, in order: the chirp signal; the chirp and sinusoid mixed signal; and the reconstructed signals obtained by the ED method, the notch filtering method, the Column-Sampling approximation, and the Nyström method with 57 and with 115 sampling columns, respectively.

The signals reconstructed using the exact ED method are shown in Figure 2e, f in the time and frequency domains, respectively, and Figure 2g, h presents the performance of the notch filtering method. Then, we first set l = floor(L/8) = 57; under this condition, the signals reconstructed using the Column-Sampling approximation and the Nyström method are shown in Figure 2i-l. One can see that the Column-Sampling approximation works better, approaching the ED reconstruction performance. In contrast, the performance of the Nyström method is unsatisfactory: about 10 dB of RFI energy remains, and the time domain waveform is 'chaotic'. Next, we set l = floor(L/4) = 115; the signal reconstructed using the Nyström method is shown in Figure 2m, n. This time it works better and more RFI energy is removed, but its performance is still slightly poorer than that of the Column-Sampling approximation, even though the latter uses half as many sampling columns. We will give the reason through further numerical analysis. Nevertheless, the Nyström method with l = floor(L/8) = 57 sampling columns still removes more RFI energy than the notch filtering method does. We can see that the notch filter fractures the frequency spectrum and loses much of the useful signal.

In the following, we monitor two quantifiable criteria of the two approximation methods compared with the exact spectral decomposition, restricted to the eigenvectors $\{\tilde{\mathbf{u}}_i\}_{i=1}^{l}$ that constitute the matrix $\tilde{\mathbf{U}}$ in Equations (18a) and (19a). First, we compute the cosines of the principal angles between the exact and the approximate eigenvectors, $\cos(\mathrm{Angle}) = \dfrac{\|\mathbf{u}_i\|_2^2 + \|\tilde{\mathbf{u}}_i\|_2^2 - \|\mathbf{u}_i - \tilde{\mathbf{u}}_i\|_2^2}{2\,\|\mathbf{u}_i\|_2\,\|\tilde{\mathbf{u}}_i\|_2}$, i = 1, ..., l, which should be close to 1 for a good approximation. Here, $\|\cdot\|_2$ denotes the 2-norm. The error curves for varying numbers of sampling columns are shown in Figure 3 (in which we give only the top 40 results). We can see that, as the number of samples increases, both the Column-Sampling and the Nyström method generate more accurate eigenvectors, and their accuracies are very close.

Figure 3: For different numbers of samples (l), cosines of the principal angles between the exact and the approximated eigenvectors obtained by the Column-Sampling approximation and the Nyström method, respectively.

But why are their RFI suppression results so different? In Table 1, we give the orthonormality error for different numbers of samples, measured by the Frobenius norm $10\log_{10}\bigl\|\tilde{\mathbf{U}}\tilde{\mathbf{U}}^{H} - \mathbf{I}\bigr\|_F$. We repeat the exercise 50 times and then compute the average deviation values. The mean deviation from orthonormality decreases as the number of samples increases for both methods, but the orthonormality error of the Nyström method is much larger than that of the Column-Sampling approximation. This is the reason for the poor performance of the Nyström method: when it extrapolates the eigenvectors of G from the eigenvectors of W, the Nyström products lose orthonormality. Both diagnostics are sketched after Table 1.

Table 1 Mean deviation from orthonormality for different numbers of samples
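Both diagnostics are straightforward to reproduce; the sketch below computes the principal-angle cosines and the orthonormality deviation for an approximate eigenvector matrix. Note that the deviation is evaluated here on the l × l Gram matrix of the approximate eigenvectors, which is one reading of the Table 1 metric, and the random test matrices are purely illustrative.

```python
import numpy as np

def principal_angle_cosines(U, U_tilde):
    """Law-of-cosines form used in the text: values close to 1 indicate good eigenvector approximations."""
    nu = np.linalg.norm(U, axis=0)
    nt = np.linalg.norm(U_tilde, axis=0)
    nd = np.linalg.norm(U - U_tilde, axis=0)
    return (nu**2 + nt**2 - nd**2) / (2 * nu * nt)

def orthonormality_error_db(U_tilde):
    """Deviation from orthonormality (Table 1 metric), evaluated on U~^H U~ - I_l (assumed reading)."""
    l = U_tilde.shape[1]
    return 10 * np.log10(np.linalg.norm(U_tilde.conj().T @ U_tilde - np.eye(l), "fro"))

# Illustrative usage with a random orthonormal basis and a perturbed copy of it
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((256, 32)))
U_tilde = U + 0.05 * rng.standard_normal(U.shape)
print(principal_angle_cosines(U, U_tilde)[:5], orthonormality_error_db(U_tilde))
```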

The frequency responses of the filters associated with the first eight filter banks are shown in Figure 4. We adopt l = floor(L/4) = 115 columns for the Nyström method and half as many columns for the Column-Sampling approximation. Figure 4a-f shows that the first six filters have band-pass characteristics, so they mostly capture the three sinusoidal harmonics of the mixed signal. In contrast, the other two filters shown in Figure 4g, h are much less important when reconstructing the signal. This illustrates that the two approximation methods are effective for constructing the SSA-FIR filter groups.

Figure 4: Frequency responses of the first eight zero-phase FIR filter banks constructed by the ED method, the Column-Sampling approximation, and the Nyström method, respectively.

Then, we give the pulse compression results to illustrate the target resolution capability after applying the above RFI suppression methods. All the reconstructed signals are filtered by a matched filter H_match = exp(jπKt²), -T/2 ≤ t ≤ T/2 [27]. When the calibrated signal passes through the matched filter, it generates a waveform whose main-lobe peak is at least 13 dB higher than the highest pair of sidelobes, and the magnitudes of the other sidelobe pairs decrease from the center outward; thus, the expected target can be distinguished. However, when RFIs are present, the main lobe cannot be distinguished after matched filtering. We compare the compression results in pairs; the contrasting waveforms are shown in Figure 5. One can see that some sidelobes are still higher than the -13.4 dB line when the notch filtering method is used. Using the Nyström method with 115 samples we obtain a satisfactory waveform, similar to the one obtained with the Column-Sampling approximation. With only 57 samples, however, the Nyström method beats the notch filtering method only by a narrow margin.
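Pulse compression of a reconstructed record amounts to correlating it with the reference chirp; a minimal sketch using the simulation parameters above is given below (the input variable is hypothetical, standing for any RFI-suppressed record).

```python
import numpy as np

def pulse_compress(s_rec, fs=39.6e6, T=32e-6, B=9.6e6):
    """Correlate a reconstructed record with the reference chirp H_match = exp(j*pi*K*t^2)."""
    Kr = B / T
    t = np.arange(-T / 2, T / 2, 1 / fs)
    h_match = np.exp(1j * np.pi * Kr * t**2)           # matched filter for exp(-j*pi*K*t^2)
    y = np.convolve(s_rec, h_match, mode="same")
    return 20 * np.log10(np.abs(y) / np.abs(y).max())  # normalised compressed output [dB]

# e.g. profile = pulse_compress(s_reconstructed)   # s_reconstructed: hypothetical RFI-suppressed record
```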

Figure 5: Comparison of the pulse compression results.

Next, we test the proposed algorithm in a practical scenario using real data collected by a P-band airborne SAR. The dataset includes 16,384 pulses along the azimuth direction, and each pulse has 10,240 sample points; that is, each time series has M = 10,240 entries. The main system parameters are reported in Table 2. The estimated Doppler center frequency is 6.1 Hz. The Range-Doppler image focusing algorithm [27] is adopted, combined with motion compensation processing.

Table 2 Main system parameters

Our experiments are carried out on a PC equipped with an Intel Core i3 dual-core CPU at 2.13 GHz and 4 GB of RAM. We adopt L = 2048 as the filter order, l = L/8 = 256 sampling columns for the Column-Sampling approximation and the Nyström method, and an additional setting of l = L/4 = 512 samples for the latter. The ED method and the notch filtering method again serve as reference RFI suppression algorithms. For each pulse, the rank of the RFI subspace is given by Equations (5)-(7). Figure 6a shows multiple narrowband sources present in the P band. The focused images after RFI suppression using the two reference methods, the ED method and the notch filtering method, are shown in Figure 6b, c, respectively, and the other three images in the next row are obtained using the Column-Sampling approximation, the Nyström method with 256 samples, and the Nyström method with 512 samples, respectively. Figure 6b is considered as the reference image. One can see that Figure 6c is blurred and the remaining interference is visible. Surprisingly, Figure 6e shows that the Nyström method with 256 samples performs much better than the notch filtering, although the simulation results in Figure 5a showed it to be only slightly better. One possible reason is that more RFI sources exist in the real scenario, and the notch filtering method is so inflexible that it creates more frequency gaps. As shown in Figure 6d, f, the Column-Sampling approximation and the Nyström method with 512 samples still have similar suppression effects. In addition, we use the MATLAB commands 'tic' and 'toc' to measure the CPU running time required by the different spectral decomposition methods. The experimental results show that, for a single pulse, the ED method runs for 26.5694 s, the Column-Sampling approximation with l = 256 samples needs 1.7012 s, the Nyström method with l = 512 samples needs 1.9712 s, and the Nyström method with l = 256 samples needs only 0.3771 s.

Figure 6: P-band image degraded by RF interference in (a); the same image after suppressing the RF interference with the ED method in (b), the notch filtering method in (c), the Column-Sampling approximation in (d), and the Nyström method with 256 samples in (e) and with 512 samples in (f), respectively.

5. Conclusions

In this article, we have shown that the RFI suppression problem in SAR can be considered a one-dimensional stationary time series denoising problem. Furthermore, by applying linear time-invariant system theory, the RFI suppression task can be accomplished conveniently with a zero-phase FIR filter whose coefficients are related to the eigenvectors of the input signal covariance matrix. Because the outputs are in phase with the inputs, the proposed algorithm is phase preserving. Moreover, in view of the large volume of raw SAR data and the high complexity of computing eigenvalues and eigenvectors, we introduce, for the first time, two random sampling methods to speed up the construction of the SSA-FIR filter banks. The results of simulations and practical experiments demonstrate that the proposed filtering algorithm is both efficient and valid.

References

1. Soumekh M, Nobles D, Wicks MC, Gerard JG: Signal processing of wide bandwidth and wide beam width P-3 SAR data. IEEE Trans Aerosp Electron Syst 2001, 37(4):1122-1141.
2. Miller T, Potter L, McCorkle J: RFI suppression for ultra wideband radar. IEEE Trans Aerosp Electron Syst 1997, 33(4):1142-1156.
3. Kimura H, Nakamura T, Papathanassiou KP: Suppression of ground radar interference in JERS-1 SAR data. IEICE Trans Commun 2004, E87-B:3759-3765.
4. Luo X, Ulander LMH, Askne J, Smith G, Frolind P-O: RFI suppression in ultra-wideband SAR systems using LMS filters in frequency domain. IET Electron Lett 2001, 37(4):241-243. 10.1049/el:20010153
5. Nguyen L, Ton T, Wong D, Soumekh M: Adaptive coherent suppression of multiple wide-bandwidth RFI sources in SAR. Proc SPIE 2004, 5427:1-16.
6. Wang W, Wyatt LR: Radio frequency interference cancellation for sea-state remote sensing by high-frequency radar. IET Radar Sonar Navigat 2011, 5(4):405-415. 10.1049/iet-rsn.2010.0041
7. Golyandina N, Nekrutkin V, Zhigljavsky A: Analysis of Time Series Structure: SSA and Related Techniques. Chapman & Hall, London; 2001.
8. Harris TJ, Yuan H: Filtering and frequency interpretations of singular spectrum analysis. Physica D 2010, 239:1958-1967. 10.1016/j.physd.2010.07.005
9. Hansen PC, Jensen SH: FIR filter representations of reduced-rank noise reduction. IEEE Trans Signal Process 1998, 46(6):1737-1741. 10.1109/78.678511
10. Dologlou I, Carayannis G: Physical interpretation of signal reconstruction from reduced rank matrices. IEEE Trans Signal Process 1991, 39(7):1681-1682. 10.1109/78.134407
11. Williams CKI, Seeger M: Using the Nyström method to speed up kernel machines. NIPS 2001, 13:682-688.
12. Zhang K, Tsang I, Kwok J: Improved Nyström low-rank approximation and error analysis. Proc ICML'08, New York, USA 2008, 273-297.
13. Drineas P, Mahoney MW: On the Nyström method for approximating a Gram matrix for improved kernel-based learning. JMLR 2005, 6:2153-2175.
14. Kumar S, Mohri M, Talwalkar A: Sampling techniques for the Nyström method. Conference on Artificial Intelligence and Statistics, JMLR: W&CP 5, Clearwater Beach, Florida, USA 2009, 304-311.
15. Kumar S, Mohri M, Talwalkar A: On sampling-based approximate spectral decomposition. Proc 26th Annual International Conference on Machine Learning, Montreal, Canada 2009, 553-560.
16. Deshpande A, Rademacher L, Vempala S, Wang G: Matrix approximation and projective clustering via volume sampling. Theory Comput 2006, 2:225-247.
17. Drineas P, Kannan R, Mahoney MW: Fast Monte Carlo algorithms for matrices II: computing a low-rank approximation to a matrix. SIAM J Comput 2006, 36(1):158-183. 10.1137/S0097539704442696
18. Wax M, Kailath T: Detection of signals by information theoretic criteria. IEEE Trans Acoust Speech Signal Process 1985, 33(2):387-392. 10.1109/TASSP.1985.1164557
19. Badeau R, David B, Richard G: Selecting the modeling order for the ESPRIT high resolution method: an alternative approach. IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, Quebec, Canada 2004, 2:1025-1028.
20. Papy J-M, De Lathauwer L, Van Huffel S: A shift invariance-based order-selection technique for exponential data modeling. IEEE Signal Process Lett 2007, 14(7):473-476.
21. Kritchman S, Nadler B: Determining the number of components in a factor model from limited noisy data. Chemomet Intell Lab Syst 2008, 94(1):19-32. 10.1016/j.chemolab.2008.06.002
22. Johansson K: Shape fluctuations and random matrices. Commun Math Phys 2000, 209(2):437-476. 10.1007/s002200050027
23. Johnstone IM: On the distribution of the largest eigenvalue in principal components analysis. Ann Statist 2001, 29(2):295-327. 10.1214/aos/1009210544
24. Baik J, Ben Arous G, Péché S: Phase transition of the largest eigenvalue for non-null complex sample covariance matrices. Ann Probab 2005, 33(5):1643-1697. 10.1214/009117905000000233
25. Delves LM, Mohamed JL: Computational Methods for Integral Equations. Cambridge University Press, Cambridge; 1985.
26. Golub G, Reinsch C: Singular value decomposition and least squares solutions. Numer Math 1970, 14:403-420. 10.1007/BF02163027
27. Cumming IG, Wong FH: Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation. Artech House, Boston; 2005.



Corresponding author

Correspondence to Xiaoyu Wang.


Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Wang, X., Yu, W., Qi, X. et al. RFI suppression in SAR based on filtering interpretation of SSA and fast implementation. EURASIP J. Adv. Signal Process. 2012, 103 (2012). https://doi.org/10.1186/1687-6180-2012-103
