Open Access

SAR moving object imaging using sparsity imposing priors

EURASIP Journal on Advances in Signal Processing 2017, 2017:10

DOI: 10.1186/s13634-016-0442-z

Received: 3 August 2016

Accepted: 23 December 2016

Published: 23 January 2017


Synthetic aperture radar (SAR) returns from a scene containing motion can be viewed as data from a stationary scene corrupted by motion-induced phase errors. Based on this perspective, we formulate SAR imaging of motion-containing scenes as a problem of joint imaging and phase error compensation. The proposed method minimizes a cost function involving sparsity-imposing regularization terms both on the reflectivity field to be imaged, which is assumed to admit a sparse representation, and on the spatial structure of the motion-related phase errors, reflecting the assumption that only a small fraction of the scene contains moving objects. To incorporate the spatial structure of the phase errors into the problem, we provide three different sparsity-enforcing prior terms. To achieve computational gains, we also present a two-step version of our approach, which first determines regions of interest that are likely to contain moving objects and then applies our sparsity-driven approach for joint image reconstruction and autofocusing in this spatially constrained setting. Our preliminary experiments demonstrate the effectiveness of this new moving-target SAR imaging approach.


Keywords: SAR, Moving object, Sparsity, Group sparsity, Low-rank sparse decomposition

1 Introduction

Moving object tracking and imaging is an important problem in a wide range of radar systems and applications, including emerging systems such as radars built from commercial off-the-shelf (COTS) components or software-defined radio (SDR)-based radars, which have attracted interest in recent years. Imaging of moving objects is a challenging problem for synthetic aperture radar (SAR) as an imaging radar. Moving objects in the scene cause phase errors in the SAR data and, subsequently, defocusing in images reconstructed under a stationary-scene assumption. This defocusing exhibits space-variant characteristics, i.e., it arises only in the parts of the image containing the moving objects, whereas the stationary background remains focused. This type of defocusing can be removed by estimating two sets of unknowns: the locations of the moving objects in the scene, and the velocities of the objects or the corresponding phase errors.

For monostatic spotlight-mode SAR, which is the modality of interest in this paper, most published work aims first to form the smeared imagery of moving objects and then to focus the smeared parts of the image [1–4]. These approaches are based on post-processing of an image reconstructed conventionally, e.g., by the polar-format algorithm [5]. As in many other imaging problems, sparsity-based approaches have recently been considered in the context of SAR moving object imaging as well. In [6–8], compressed sensing (CS) techniques are used to search for a solution over an overcomplete dictionary consisting of basis elements for several velocity-position combinations. The method proposed in [6] for multistatic radar imaging of moving objects linearizes the nonlinear problem of target scattering and motion estimation and then solves it as a larger, unified regularized inversion problem subject to sparsity constraints. Focusing on scenarios with low signal-to-clutter ratio, the approach in [7] first applies a clutter cancellation procedure and then solves an optimization problem similar to the one in [6]. In [8], which concentrates on targets with micro-motions such as rotation or vibration, generalized Gaussian and Student's t prior models are used to enforce sparsity. The approach in [9] uses a Radon transform and a CS-based method to estimate the motion parameters of moving targets under the Doppler spectrum ambiguity and Doppler centroid frequency ambiguity encountered in SAR systems with low pulse repetition frequency (PRF). The sparsity information is used in [10] within a Bayesian framework to determine velocities in a multi-target scenario for low-PRF wideband radar.
In [11], which also takes a Bayesian approach, not only is the target signature estimated using a prior distribution on the target trajectory, but parameters related to nuisances such as clutter and antenna miscalibration are estimated as well.

We handle the problem in the context of sparsity-driven imaging as well. Our method is based on simultaneous imaging and phase error compensation. Considering that in SAR imaging the underlying field usually exhibits a sparse structure, we previously proposed a sparsity-driven technique for joint SAR imaging of stationary scenes and space-invariant focusing within a nonquadratic regularization-based framework [12]. That work was motivated by defocusing due to, e.g., platform position uncertainties. Here, through a significant extension of that framework, we propose a method for joint sparsity-driven imaging and space-variant focusing to correct phase errors caused by moving objects. Preliminary pieces of this work have been presented in [13, 14]. We formulate an optimization problem over the reflectivities and the potential motion-induced phase errors across the scene and solve it iteratively. In this formulation, we not only exploit the sparsity of the reflectivity field but also impose a constraint on the spatial sparsity of the phase errors, based on the assumption that motion in the scene is limited to a small number of spatial locations. This constraint on the phase errors helps to automatically determine and focus the moving points in the scene. We also discuss two possible extensions of this primary approach, through two alternative choices for the regularization term on the motion field. The first extension uses a group-sparsity-enforcing regularization term to impose the sparse structure. The second is based on a low-rank sparse decomposition of the phase error matrix. More importantly, to reduce the computational complexity of this problem, we propose a second approach within the same framework that makes these ideas practically applicable to relatively large scenes.
In this second approach, our aim is to improve the computational efficiency of the phase error estimation procedure by first determining regions of interest (ROI) for potential motion using a fast procedure and then performing phase error estimation only in these regions. Note that both approaches also provide imaging advantages such as increased resolution, reduced sidelobes, and reduced speckle, thanks to regularization-based image formation, which can alleviate challenges caused by incomplete data or sparse apertures [15] as well.

In Section 2, the observation model used for SAR moving object imaging is presented. In Section 3, the proposed method is described in detail, covering our primary approach, its extensions based on group sparsity and low-rank sparse decomposition, and the ROI-based approach. After providing some additional remarks on the practical implementation of the proposed approaches in Section 4, we present our experimental results in Section 5. We conclude the paper in Section 6.

2 SAR imaging model

The discrete SAR imaging model of interest including all returned signals is given by [15]:
$$\begin{array}{@{}rcl@{}} \underbrace{\left[ \begin{array}{c} \mathbf{r_{1}}\\ \vdots\\ \mathbf{r_{M}} \end{array} \right]}_{\mathbf{r}} = \underbrace{\left[ \begin{array}{c} \mathbf{C_{1}}\\ \vdots\\ \mathbf{C_{M}} \end{array} \right]}_{\mathbf{C}} \begin{array}{c} \mathbf{f} \end{array} \end{array} $$

Here, r_m is the vector of observed samples, C_m is a discretized approximation to the continuous observation kernel at the mth aperture position, f is a vector representing the unknown sampled reflectivity image, and M is the total number of aperture positions. The vector r is the SAR phase history data of all points in the scene. It is also possible to view r as the sum of the SAR data contributions of the individual points in the scene.

$$\begin{array}{@{}rcl@{}} \mathbf{r}=\underbrace{\mathbf{C_{cl-1}} \mathbf{f}(1)}_{\mathbf{p_{1}}}+\underbrace{\mathbf{C_{cl-2}} \mathbf{f}(2)}_{\mathbf{p_{2}}}+...+\underbrace{\mathbf{C_{cl-I}} \mathbf{f}(I)}_{\mathbf{p_{I}}} \end{array} $$

Here, C_{cl-i} is the ith column of the model matrix C, and f(i) and p_i represent the complex reflectivity at the ith point of the scene and the corresponding SAR data it produces, respectively. I is the total number of points in the scene.

The cross-range component of the target velocity causes the image of the target to be defocused in the cross-range direction, whereas the range component causes shifting in the cross-range direction and defocusing in both the cross-range and range directions [1, 2]. In this paper, we particularly focus on motions that result in cross-range defocusing. Now, let us view the ith point in the scene as a point target whose motion results in defocusing along the cross-range direction. The SAR data of this target can be expressed as [1, 2]:
$$ \left[ \begin{array}{c} \mathbf{p}_{\mathbf{i}_{\mathbf{1}_{e}}} \\ \vdots \\ \mathbf{p}_{\mathbf{i}_{\mathbf{M}_{e}}} \end{array} \right] = \left[ \begin{array}{c} e^{j \boldsymbol{\phi}_{i}(1)}~ \mathbf{p}_{\mathbf{i}_{1}} \\ \vdots \\ e^{j \boldsymbol{\phi}_{i}(M)}~ \mathbf{p}_{\mathbf{i}_{M}} \end{array} \right] $$
Here, ϕ_i represents the phase error caused by the motion of the target, and \(\mathbf{p}_{\mathbf{i}_{m}}\) and \(\mathbf{p}_{\mathbf{i}_{\mathbf{m}_{e}}}\) are the phase history data for the stationary and the moving point target, respectively, at aperture position m. Similarly, this relation can be expressed in terms of the model matrix C as follows:
$$ \left[ \begin{array}{c } \mathbf{C_{cl-i_{1}}}(\phi)\\ \vdots \\ \mathbf{C_{cl-i_{M}}}(\phi) \end{array} \right] = \left[ \begin{array}{c} e^{j \boldsymbol{\phi}_{i}(1)}~ \mathbf{C_{cl-i_{1}}}\\ \vdots \\ e^{j \boldsymbol{\phi}_{i}(M)}~ \mathbf{C_{cl-i_{M}}} \end{array} \right] $$
Here, C_{cl-i}(ϕ) is the ith column of the model matrix C(ϕ), which takes the movement of the objects into account, and \(\mathbf{C}_{\mathbf{cl-i_{m}}}(\phi)\) is the part of C_{cl-i}(ϕ) corresponding to the mth aperture position. In the presence of additional observation noise, the observation model for the overall system becomes
$$\begin{array}{@{}rcl@{}} \mathbf{g}=\mathbf{C}(\phi)\mathbf{f}+\mathbf{v} \end{array} $$

where v is the observation noise. In this way, we have turned the moving object imaging problem into the problem of imaging a stationary scene with phase corrupted data. Here, the aim is to estimate f and ϕ from the noisy observation g.
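A small numerical sketch of this model may help; the observation kernel below is a random stand-in for the true spotlight-mode SAR kernel, and all dimensions, names, and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

I, M, S = 6, 4, 8              # scene points, aperture positions, samples/aperture
f = np.zeros(I, complex)
f[1], f[4] = 1.0, 0.8          # two scatterers; point 4 will be "moving"

# Stand-in observation kernel: one S x I block C_m per aperture position.
C_blocks = [rng.standard_normal((S, I)) + 1j * rng.standard_normal((S, I))
            for m in range(M)]

# Motion-induced phase errors phi_i(m): zero for stationary points.
phi = np.zeros((I, M))
phi[4, :] = 0.3 * np.arange(M) ** 2     # e.g., quadratic error on the moving point

# C(phi): the i-th column of C_m is multiplied by exp(j * phi_i(m)).
C_phi = np.vstack([C_blocks[m] * np.exp(1j * phi[:, m])[None, :]
                   for m in range(M)])

v = 0.01 * (rng.standard_normal(M * S) + 1j * rng.standard_normal(M * S))
g = C_phi @ f + v               # phase-corrupted, noisy phase history
```

Note that only the columns of C corresponding to moving points are modified, which is exactly the space-variant character of the defocusing.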

3 Sparsity-driven moving target SAR imaging

Using the observation model formulated in the previous section, we handle the imaging and motion correction problem as an optimization problem. Besides the data fidelity term, our cost function involves sparsity-imposing side constraints on both the field and the motion-induced phase errors. Phase errors are represented by a vector β of size K×1, where K=MI. The vector β includes the phase errors corresponding to all points in the scene, for all aperture positions, as follows:
$$ \boldsymbol{\beta}^{T}=\left[\boldsymbol{\beta}_{\mathbf{1}}^{T} \boldsymbol{\beta}_{\mathbf{2}}^{T} \hdots \boldsymbol{\beta}_{\mathbf{M}}^{T}\right] $$
Here, β m is the vector of phase errors for the mth aperture position and has the following form:
$$\begin{array}{@{}rcl@{}} \boldsymbol{\beta}_{\mathbf{m}}=\left[e^{j\boldsymbol{\phi}_{\mathbf{1}}(m)}, \hdots, e^{j\boldsymbol{\phi}_{\mathbf{I}}(m)}\right]^{T} \end{array} $$
Our proposed cost function which is minimized with respect to the field f and the phase error vector β is as follows:
$$\begin{array}{*{20}l} J(\mathbf{f},\boldsymbol{\beta})=\left\|\mathbf{g}-\mathbf{C(\phi)f}\right\|^{2}_{2}+ \lambda_{1} \left\|\mathbf{f}\right\|_{1}+\lambda_{2}\left\|\boldsymbol{\beta}-\mathbf{1}\right\|_{1} \\ ~s.t. ~\left|\boldsymbol{\beta}(k)\right|=1~ \forall k \end{array} $$

Here, 1 is a K×1 vector of ones and β(k) denotes the kth element of β. Since the number of moving points is usually much smaller than the total number of points in the scene, most of the ϕ values underlying the vector β are zero. Since the elements of β are of the form e^{jϕ}, the elements of β corresponding to stationary scene points equal 1, whereas the elements corresponding to moving points take various values depending on the amount of the phase error. Therefore, this sparsity of the phase errors is incorporated into the problem through the regularization term \(\left\|\boldsymbol{\beta}-\mathbf{1}\right\|_{1}\).

This problem is solved similarly to the optimization problem in [16]. In the first step of the (n+1)st iteration, the cost function J(f,β) is minimized with respect to the field f.
$$\begin{array}{*{20}l} \hat{\mathbf{f}}^{(n+1)} &=\arg\min_{\mathbf{f}} J\left(\mathbf{f},\hat {\boldsymbol{\beta}}^{(n)}\right) \\ &=\arg\min_{\mathbf{f}} \left\|\mathbf{g}-\mathbf{C}(\hat{\phi}^{(n)})\mathbf{f}\right\|^{2}_{2}+ \lambda_{1} \left\|\mathbf{f}\right\|_{1} \end{array} $$
To avoid problems due to the nondifferentiability of the ℓ1 norm at the origin, a smooth approximation is used [15]:
$$ \left\|\mathbf{f}\right\|_{1}\approx\sum\limits^{I}_{i=1}\left(\left|\mathbf{f}(i)\right|^{2}+\sigma\right)^{1/2} $$
where σ is a small positive constant. In each iteration, the field estimate is updated as follows:
$$\begin{array}{*{20}l} \hat{\mathbf{f}}^{(n+1)} &= \left(\mathbf{C}\left(\hat{\phi}^{(n)}\right)^{H} \mathbf{C} \left(\hat{\phi}^{(n)}\right) \right. \\ & \quad\left. + \lambda_{1} \mathbf{W}\left(\hat{f}^{(n)}\right)\!{\vphantom{\left(\hat{\phi}^{(n)}\right)^{H}}}\right)^{-1} \mathbf{C}\left(\hat{\phi}^{(n)}\right)^{H}\mathbf{g} \end{array} $$
where \(\mathbf {W}(\hat {f}^{(n)})\) is a diagonal matrix:
$$\begin{array}{@{}rcl@{}} \mathbf{W}\left(\hat{f}^{(n)}\right)=\text{diag}\left\{1/\left(\left|\hat{\mathbf{f}}^{(n)}(i)\right|^{2}+\sigma\right)^{1/2}\right\} \end{array} $$
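The two expressions above define a fixed-point iteration for the field update, in which W is recomputed from the current estimate at each pass. A minimal sketch (the function name and parameter values are illustrative, and C here stands for the current C(ϕ̂)):

```python
import numpy as np

def field_update(C, g, lam1=0.1, sigma=1e-6, n_iter=20):
    """Fixed-point iteration for min_f ||g - C f||_2^2 + lam1 ||f||_1,
    with |f(i)| smoothed as (|f(i)|^2 + sigma)^(1/2)."""
    f = np.linalg.lstsq(C, g, rcond=None)[0]       # least-squares initialization
    CtC = C.conj().T @ C
    Ctg = C.conj().T @ g
    for _ in range(n_iter):
        w = 1.0 / np.sqrt(np.abs(f) ** 2 + sigma)  # diagonal of W(f)
        f = np.linalg.solve(CtC + lam1 * np.diag(w), Ctg)
    return f
```

Entries driven toward zero receive ever larger weights in W, which is how the iteration promotes sparsity.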
In the second step of each iteration, we use the field estimate \(\hat {\mathbf {f}}\) from the first step and estimate the phase errors by minimizing the following cost function for each aperture position:
$$ {}\begin{aligned} \hat{\boldsymbol{\beta}}_{\mathbf{m}}^{(n+1)} &= \arg\min_{\boldsymbol{\beta}_{\mathbf{m}}} J(\hat{\mathbf{f}}^{(n+1)},\boldsymbol{\beta}_{\mathbf{m}}) ~s.t. ~ \left|\boldsymbol{\beta}_{\mathbf{m}}(i)\right|=1~ \forall i \\ J(\hat{\mathbf{f}}^{(n+1)},\boldsymbol{\beta}_{\mathbf{m}}) &= \left\|\mathbf{g_{m}}-\mathbf{C_{m}T}^{(n+1)}\boldsymbol{\beta}_{\mathbf{m}}\right\|^{2}_{2}+ \lambda_{2}\left\|\boldsymbol{\beta}_{\mathbf{m}}-\mathbf{1}\right\|_{1} \end{aligned} $$
Here, 1 is an I×1 vector of ones and T is a diagonal matrix with the entries \(\hat {\mathbf {f}}(i)\) on its main diagonal, as follows:
$$\begin{array}{@{}rcl@{}} \mathbf{T}^{(n+1)}=\text{diag}\left\{\hat{\mathbf{f}}^{(n+1)}(i)\right\} \end{array} $$
The constrained optimization problem in (13) is replaced by the following unconstrained problem, which incorporates a penalty term on the magnitudes of the elements β_m(i):
$$ {{}\begin{aligned} \hat{\boldsymbol{\beta}}_{\mathbf{m}}^{(n+1)} &=\arg\min_{\boldsymbol{\beta}_{\mathbf{m}}} \left\|\mathbf{g_{m}}-\mathbf{C_{m}T}^{(n+1)}\boldsymbol{\beta}_{\mathbf{m}}\right\|^{2}_{2}+\lambda_{2}\left\|\boldsymbol{\beta}_{\mathbf{m}}-\mathbf{1}\right\|_{1}\\ & \quad + \lambda_{3}\sum\limits_{i=1}^{I}\left(\left|\boldsymbol{\beta}_{\mathbf{m}}(i)\right|-1\right)^{2}~ \forall m \end{aligned}} $$
The expression in (15) can be rewritten as follows:
$$ {{}\begin{aligned} \hat{\boldsymbol{\beta}}_{\mathbf{m}}^{(n+1)} &= \arg\min_{\boldsymbol{\beta}_{\mathbf{m}}} \left\|\mathbf{g_{m}}-\mathbf{C_{m}T}^{(n+1)}\boldsymbol{\beta}_{\mathbf{m}}\right\|^{2}_{2} + \lambda_{2}\left\|\boldsymbol{\beta}_{\mathbf{m}}-\mathbf{1}\right\|_{1}\\ & \quad + \lambda_{3}\left\|\boldsymbol{\beta}_{\mathbf{m}}\right\|_{2}^{2}-2\lambda_{3}\left\|\boldsymbol{\beta}_{\mathbf{m}}\right\|_{1}~ \forall m \end{aligned}} $$
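Expanding the magnitude penalty shows the equivalence of (15) and (16), up to the additive constant λ₃I, which does not affect the minimizer:

```latex
\lambda_{3}\sum_{i=1}^{I}\left(\left|\boldsymbol{\beta}_{\mathbf{m}}(i)\right|-1\right)^{2}
=\lambda_{3}\sum_{i=1}^{I}\left(\left|\boldsymbol{\beta}_{\mathbf{m}}(i)\right|^{2}
-2\left|\boldsymbol{\beta}_{\mathbf{m}}(i)\right|+1\right)
=\lambda_{3}\left\|\boldsymbol{\beta}_{\mathbf{m}}\right\|_{2}^{2}
-2\lambda_{3}\left\|\boldsymbol{\beta}_{\mathbf{m}}\right\|_{1}+\lambda_{3}I
```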
This optimization problem is solved by using the same technique as in the field estimation step. Using the estimate \(\hat {\boldsymbol {\beta }}_{\mathbf {m}}^{(n+1)}\), the following matrix is created:
$$\begin{array}{@{}rcl@{}} \mathbf{B_{m}}^{(n+1)}=\text{diag}\left\{\hat{\boldsymbol{\beta}}_{\mathbf{m}}^{(n+1)}(i)\right\} \end{array} $$
which is used to update the model matrix for the mth aperture position.
$$\begin{array}{@{}rcl@{}} \mathbf{C_{m}}(\phi^{n+1})=\mathbf{C_{m}B_{m}}^{(n+1)} \end{array} $$

After these phase estimation and model matrix update procedures have been completed for all aperture positions, the algorithm moves on to the next iteration.
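The per-aperture model-matrix update C_m(ϕ) = C_m B_m can be sketched as follows; the function name is ours, and the unit-magnitude normalization of β (made explicit for the group-sparsity variant in Section 3.1.1) is applied here as a safeguard:

```python
import numpy as np

def update_model_matrix(C_blocks, beta_blocks):
    """C_m(phi) = C_m B_m with B_m = diag(beta_m); beta_m is normalized
    to unit magnitude so that only its phase enters the update."""
    updated = []
    for C_m, b_m in zip(C_blocks, beta_blocks):
        b_unit = b_m / np.abs(b_m)              # keep phase, drop magnitude
        updated.append(C_m * b_unit[None, :])   # right-multiply by diag(b_unit)
    return updated
```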

3.1 Extensions

Within the same framework, we present two additional methods for the phase estimation step. Both can be regarded as extensions of our main method. One is based on the idea of using group-sparsity constraints, whereas the other is based on a low-rank sparse matrix decomposition of the phase error matrix.

3.1.1 Group-sparsity based regularization

Let us convert the phase error vector β of the previous section to a matrix whose columns are the β_m vectors, as follows:
$$\begin{array}{*{20}l} \mathbf{Q}&= \left[ \begin{array}{c c c c} ~\boldsymbol{\beta}_{\mathbf{1}} &~\boldsymbol{\beta}_{\mathbf{2}} &~\ldots &~\boldsymbol{\beta}_{\mathbf{M}} \end{array} \right] \\ &=\left[ \begin{array}{cccc} e^{j \boldsymbol{\phi}_{\mathbf{1}}(1)} & e^{j \boldsymbol{\phi}_{\mathbf{1}}(2)}& \ldots & e^{j \boldsymbol{\phi}_{\mathbf{1}}(M)}\\ e^{j \boldsymbol{\phi}_{\mathbf{2}}(1)} & e^{j \boldsymbol{\phi}_{\mathbf{2}}(2)}& \ldots & e^{j \boldsymbol{\phi}_{\mathbf{2}}(M)} \\ \vdots & \vdots & \ddots & \vdots \\ e^{j \boldsymbol{\phi}_{\mathbf{I}}(1)} & e^{j \boldsymbol{\phi}_{\mathbf{I}}(2)} &\ldots & e^{j \boldsymbol{\phi}_{\mathbf{I}}(M)} \end{array} \right]_{I\times M} \end{array} $$

Here, Q is the matrix of phase errors; each row of Q consists of the phase error values at all aperture positions for a particular point in the scene. We expect each column of Q to be sparse across the rows, reflecting the expectation that there is a small number of moving pixels in the scene. However, no such sparsity is expected in general across the columns. This structure motivates imposing sparsity in a group-wise fashion, where the groups in our setting correspond to the rows of Q.

The method is performed by minimizing the following cost function with respect to the field and phase errors.
$$ {\begin{aligned} J(\mathbf{f},\boldsymbol{\beta}) &= \left\|\mathbf{g}-\mathbf{C(\phi)f}\right\|^{2}_{2}+ \lambda_{1} \left\|\mathbf{f}\right\|_{1} \\ & \quad+\lambda_{2}\sum\limits^{I}_{i=1}\left(\sum\limits^{M}_{m=1}\left|\mathbf{Q}(i,m)-1\right|^{2}\right)^{1/2} \end{aligned}} $$

Since the number of moving points is much smaller than the total number of points in the scene, most of the ϕ values underlying the vector β, and hence the matrix Q, are zero. Since the elements of Q are of the form e^{jϕ}, the elements of rows corresponding to stationary scene points equal 1, whereas the elements of rows corresponding to moving points take various values depending on the amount of the phase error. Therefore, this group-sparse structure of the phase errors is incorporated into the problem through the regularization term \(\sum ^{I}_{i=1}\left (\sum ^{M}_{m=1}\left |\mathbf {Q}(i,m)-1 \right |^{2}\right)^{1/2}\).
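The regularization term is a mixed ℓ2/ℓ1 (group) norm of Q − 1, with one group per row. A minimal sketch of its evaluation (the function name is ours):

```python
import numpy as np

def group_sparsity_penalty(Q):
    """sum_i ( sum_m |Q(i,m) - 1|^2 )^(1/2): a mixed l2/l1 norm of Q - 1
    in which each row of Q (one scene point, all apertures) is a group."""
    return float(np.sum(np.sqrt(np.sum(np.abs(Q - 1.0) ** 2, axis=1))))
```

For a stationary scene, Q is all ones and the penalty vanishes; a moving point adds a single row-wise contribution regardless of how its phase error is distributed over the aperture.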

The field estimation step remains the same as in the previous section. In the second step of each iteration, we use the field estimate \(\hat {\mathbf {f}}\) from the first step and estimate the phase errors by minimizing the following cost function:
$$ {\begin{aligned} \hat{\boldsymbol{\beta}}^{(n+1)} & = \arg\min_{\boldsymbol{\beta}} J\left(\hat{\mathbf{f}}^{(n+1)},\boldsymbol{\beta}\right)\\ & = \arg\min_{\boldsymbol{\beta}} \left\|\mathbf{g}-\mathbf{HD}^{(n+1)}\boldsymbol{\beta}\right\|^{2}_{2}\\ & \quad + \lambda_{2}\sum\limits^{I}_{i=1}\left(\sum\limits^{M}_{m=1}\left|\mathbf{Q}(i,m)-1\right|^{2}\right)^{1/2} \end{aligned}} $$
Here, H and D are matrices having the following forms
$$\begin{array}{@{}rcl@{}} \mathbf{H}= \left[ \begin{array}{ccccc} \mathbf{C_{1}} &\mathbf{0} & \ldots & \ldots & \mathbf{0}\\ \mathbf{0} & \mathbf{C_{2}}& \mathbf{0} & \ldots & \mathbf{0}\\ \vdots & \vdots & \vdots & \ddots&\vdots \\ \vdots & \vdots & \vdots & \ddots&\vdots \\ \mathbf{0} & \mathbf{0}& \ldots & \mathbf{0}& \mathbf{C_{M}} \end{array} \right] \end{array} $$
where C_m denotes the block of the model matrix corresponding to the mth aperture position.
$$\begin{array}{@{}rcl@{}} \mathbf{D}^{(n+1)}= \left[ \begin{array}{ccccc} \mathbf{T}^{(n+1)} & \mathbf{0} & \ldots & \ldots &\mathbf{0}\\ \mathbf{0} &\mathbf{T}^{(n+1)}& \mathbf{0} & \ldots & \mathbf{0}\\ \vdots & \vdots & \vdots & \ddots&\vdots \\ \vdots & \vdots & \vdots & \ddots&\vdots \\ \mathbf{0} & \mathbf{0}& \ldots & \mathbf{0} &\mathbf{T}^{(n+1)} \end{array} \right] \end{array} $$
Here, T is a diagonal matrix, with the entries \(\hat {\mathbf {f}}(i)\) on its main diagonal, as follows:
$$\begin{array}{@{}rcl@{}} \mathbf{T}^{(n+1)}=\text{diag}\left\{\hat{\mathbf{f}}^{(n+1)}(i)\right\} \end{array} $$
The convex optimization problem in (21) can be solved efficiently via second-order cone programming [17]. For simplicity of the optimization process, in (20) we have not used an additional constraint forcing the magnitudes of the elements of β to be 1. Consequently, since in this step we want to use only the phase information and suppress the effect of the magnitudes, the estimate \(\hat {\boldsymbol {\beta }}\) is first normalized, and then for every aperture position the following matrix is created,
$$\begin{array}{@{}rcl@{}} \mathbf{B_{m}}^{(n+1)}=\text{diag}\left\{\hat{\boldsymbol{\beta}}_{\mathbf{m}}^{(n+1)}(i)\right\} \end{array} $$
which is used to update the corresponding part of the model matrix.
$$\begin{array}{@{}rcl@{}} \mathbf{C_{m}}(\phi^{n+1})=\mathbf{C_{m}B_{m}}^{(n+1)} \end{array} $$

3.1.2 Regularization via low-rank sparse decomposition

The phase error matrix Q defined in (19) can be expressed as the sum of a low-rank matrix and a sparse matrix. Let us illustrate with an example. If the nth and kth (n<k) points in the scene are moving and the rest of the scene is stationary, then Q can be expressed as the sum of a low-rank matrix L and a sparse matrix S as follows:
$$ {{}\begin{aligned} \underbrace{\left[ \begin{array}{clrr} 1& \ldots & 1\\ \vdots& \ldots & \vdots\cr e^{j \boldsymbol{\phi}_{\mathbf{n}}(1)} & \ldots & e^{j \boldsymbol{\phi}_{\mathbf{n}}(M)} \\ 1& \ldots & 1\\ \vdots& \ldots & \vdots\\ e^{j \boldsymbol{\phi}_{\mathbf{k}}(1)} &\ldots & e^{j \boldsymbol{\phi}_{\mathbf{k}}(M)}\\ 1& \ldots & 1 \end{array} \right]}_{\mathbf{Q}} = \underbrace{\left[ \begin{array}{clrr} 1& \ldots & 1\\ \vdots& \ldots & \vdots\\ 0 & \ldots & 0\\ 1& \ldots & 1\\ \vdots& \ldots & \vdots\\ 0 &\ldots & 0\\ 1& \ldots & 1 \end{array} \right]}_{\mathbf{L}}+ \underbrace{\left[ \begin{array}{clrr} 0&\ldots & 0\\ \vdots& \ldots & \vdots\\ e^{j \boldsymbol{\phi}_{\mathbf{n}}(1)} & \ldots & e^{j \boldsymbol{\phi}_{\mathbf{n}}(M)} \\ 0&\ldots & 0\\ \vdots& \ldots & \vdots\\ e^{j \boldsymbol{\phi}_{\mathbf{k}}(1)} &\ldots & e^{j \boldsymbol{\phi}_{\mathbf{k}}(M)}\\ 0&\ldots & 0 \end{array} \right]}_{\mathbf{S}} \end{aligned}} $$
Incorporating this structure of the matrix Q as a constraint to the optimization problem, we obtain the following cost function:
$$ {{}\begin{aligned} \arg\min_{\mathbf{f},\boldsymbol{\beta},\mathbf{L}, \mathbf{S}} J(\mathbf{f},\boldsymbol{\beta}) & = \arg\min_{\mathbf{f}, \boldsymbol{\beta}, \mathbf{L}, \mathbf{S}} \left\|\mathbf{g}-\mathbf{C(\phi)f}\right\|^{2}_{2}+ \lambda_{1} \left\|\mathbf{f}\right\|_{1} \\ & \quad +\lambda_{L} \left\|\mathbf{L}\right\|_{*}+\lambda_{S} \left\|\mathbf{S}\right\|_{1} \\ & \qquad\quad {s.t.}~ \mathbf{Q}=\mathbf{L}+\mathbf{S} \end{aligned}} $$
Here, β is the vector created by stacking the columns of the matrix Q, and λ_L and λ_S are the regularization parameters. ‖L‖_* denotes the nuclear norm (trace norm) of the low-rank matrix L. Using the field estimate \(\hat {\mathbf {f}}\) from the first step, we estimate the phase errors by minimizing the following cost function:
$$\begin{array}{*{20}l} {}\hat{\boldsymbol{\beta}}^{(n+1)},\hat{\mathbf{L}}^{(n+1)},\hat{\mathbf{S}}^{(n+1)} &=\arg\min_{\boldsymbol{\beta},\mathbf{L},\mathbf{S}} J(\hat{\mathbf{f}}^{(n+1)},\boldsymbol{\beta},\mathbf{L},\mathbf{S}) \\ & =\arg\min_{\boldsymbol{\beta},\mathbf{L},\mathbf{S}} \left\|\mathbf{g}-\mathbf{HD}^{(n+1)}\boldsymbol{\beta}\right\|^{2}_{2} \\ & \quad+\lambda_{L} \left\|\mathbf{L}\right\|_{*}+\lambda_{S} \left\|\mathbf{S}\right\|_{1} \\ &s.t.~\mathbf{Q}=\mathbf{L}+\mathbf{S} \end{array} $$
The augmented Lagrangian form of this cost function can be expressed as follows:
$$\begin{array}{*{20}l} {}L\left(\mathbf{Q},\mathbf{L},\mathbf{S},\mathbf{A}\right)=\left\|\mathbf{g}-\mathbf{HD}^{(n+1)}\boldsymbol{\beta}\right\|^{2}_{2} +\lambda_{L} \left\|\mathbf{L}\right\|_{*}+\lambda_{S} \left\|\mathbf{S}\right\|_{1} \\ +\left\langle \mathbf{A},\mathbf{Q}-\mathbf{L}-\mathbf{S}\right\rangle+ \frac{\gamma}{2}\left\|\mathbf{Q}-\mathbf{L}-\mathbf{S}\right\|^{2}_{F} \end{array} $$

where A is the Lagrange multiplier and γ>0 penalizes the violation of the constraint. To solve this minimization problem, we use the alternating direction method of multipliers (ADMM) [18]; the problem is solved similarly to the optimization problem in [19].
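As a standalone illustration of the L/S splitting step, the following sketch applies ADMM to a fixed Q, decoupled from the data-fidelity term for clarity; the function name, parameter values, and the fixed nuclear-norm weight are our illustrative choices:

```python
import numpy as np

def lowrank_sparse_split(Q, lam_S=0.3, lam_L=1.0, gamma=1.0, n_iter=100):
    """ADMM sketch for min lam_L ||L||_* + lam_S ||S||_1  s.t.  Q = L + S."""
    L = np.zeros_like(Q)
    S = np.zeros_like(Q)
    A = np.zeros_like(Q)                      # Lagrange multiplier
    for _ in range(n_iter):
        # L-step: singular-value soft-thresholding of Q - S + A/gamma
        U, sv, Vh = np.linalg.svd(Q - S + A / gamma, full_matrices=False)
        L = (U * np.maximum(sv - lam_L / gamma, 0.0)) @ Vh
        # S-step: element-wise (complex) soft-thresholding of Q - L + A/gamma
        R = Q - L + A / gamma
        mag = np.abs(R)
        S = R / np.maximum(mag, 1e-12) * np.maximum(mag - lam_S / gamma, 0.0)
        # Dual ascent on the constraint Q = L + S
        A = A + gamma * (Q - L - S)
    return L, S
```

On a Q whose rows are all ones except for a few rows of phase-error terms, the splitting assigns the stationary background to L and concentrates the deviations of the moving rows in S.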

3.2 Motion compensation using ROI

The approach described in the previous section looks for potential motion everywhere in the scene, i.e., it handles each point separately, considering that each point may have a different motion. However, moving points usually exist together in limited regions of a scene. Consider a scene containing a few linearly moving vehicles: all points belonging to a particular vehicle have the same motion. To exploit such structure, both for computational gains and for improved robustness, we present a modified version of our method. First, we determine the range lines that are likely to contain moving objects. This generates regions of interest which we use to estimate the phase errors. Assuming that the moving points in each of these regions have the same motion, we perform space-invariant phase error estimation and compensation for each region. Let us now describe the overall phase error estimation step in detail.

Let F be the 2D conventional image (reconstructed by the polar-format algorithm). Since we assume the field to be imaged has a sparse structure (strong scatterers on a background of weak reflectivities), range lines with much higher reflectivity values than the others are likely to contain strong scatterers (belonging to moving and/or stationary objects). To find the range lines with strong scatterers, we first calculate the mean and standard deviation of the reflectivities over the conventional image. The range lines having at least one pixel with reflectivity greater than the mean plus one standard deviation are selected as potentially containing objects. To decide which of these range lines include moving objects, we use the idea of [20], which is based on the mapdrift autofocus technique [21]. First, two images are reconstructed from the data corresponding to the two half apertures. While stationary objects lie in the same position in both images, moving objects do not, since the phase errors caused by moving objects differ between the two sub-aperture data sets. Therefore, if we compute the correlation coefficient of these range lines between the two sub-aperture images, we obtain small correlation coefficients for range lines containing moving objects. Consequently, range lines with a correlation coefficient below a pre-determined threshold are declared to be range lines with potential moving objects, i.e., the ROI. We have empirically chosen this threshold to be 0.7. When one is unsure how to choose this threshold, a large value, erring on the side of declaring more range lines as potentially containing moving objects, is the safe choice, at the price of a smaller reduction in computational cost relative to the original version of our approach.
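The ROI selection just described (mean-plus-one-standard-deviation screening followed by a sub-aperture correlation test) can be sketched as follows, assuming the conventional image and the two half-aperture images are already available; the function name is illustrative:

```python
import numpy as np

def detect_roi(img_full, img_half1, img_half2, corr_thresh=0.7):
    """Return indices of range lines (rows) likely to contain moving objects."""
    mag = np.abs(img_full)
    level = mag.mean() + mag.std()
    # Range lines with at least one pixel above mean + one standard deviation
    strong = np.where((mag > level).any(axis=1))[0]
    roi = []
    for r in strong:
        a = np.abs(img_half1[r])
        b = np.abs(img_half2[r])
        # Mapdrift-style test: moving objects decorrelate the two
        # half-aperture images along their range line.
        if np.corrcoef(a, b)[0, 1] < corr_thresh:
            roi.append(r)
    return roi
```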

After this simple region determination process, the framework constructed earlier in this paper can be used. While the field estimation step remains the same, phase error estimation is performed region-wise. We assume that there is a single object in each distinct ROI and that adjacent range lines correspond to the same object. Accordingly, we apply space-invariant focusing [12] for each distinct ROI. This reduces the number of unknown phase error terms significantly compared to our original approach and leads to improved robustness in cases where the single-motion assumption for each ROI is valid. One questionable point here is the background clutter level: the single-motion assumption in each ROI applies to all pixels in that region, so for this model to be accurate, the clutter reflectivities in the ROI must be sufficiently small. In many cases, however, the clutter does not affect the phase error correction performance.

For a simple description of the ROI-based phase error estimation procedure, let us first assume that there is only one moving object in the scene. Let the parts of the model matrix and the field corresponding to the ROI be C_roi and f_roi, and let the parts corresponding to the outside of this region be C_out and f_out, respectively. Then, the phase error ϕ_roi is estimated by minimizing the following cost function for every aperture position:
$$\begin{array}{*{20}l} {}\hat{\boldsymbol{\phi}}_{\mathbf{roi}}^{(n+1)}(m) \!= \arg\min_{\boldsymbol{\phi}_{\mathbf{roi}}(m)} \!\left\|\mathbf{g}^{(n+1)}_{\mathbf{{roi}_{m}}} \!- e^{j \boldsymbol{\phi}_{\mathbf{roi}}(m)}\mathbf{C_{{roi}_{m}}}\hat{\mathbf{f}}^{(n+1)}_{\mathbf{roi}}\!\right\|^{2}_{2} \;\forall m \end{array} $$
where g_roi is the phase history data corresponding to the ROI and is given by:
$$\begin{array}{@{}rcl@{}} \mathbf{g}^{(n+1)}_{\mathbf{roi}}=\mathbf{g}-\mathbf{C_{out}}\hat{\mathbf{f}}^{(n+1)}_{\mathbf{out}} \end{array} $$
The problem in (19) is solved in closed form for every aperture position [12]. Using the phase error estimate, the corresponding part of the model matrix is updated.
$$\begin{array}{@{}rcl@{}} \mathbf{C_{{roi}_{m}}}(\hat{\boldsymbol{\phi}}_{\mathbf{roi}}^{(n+1)}(m))=e^{j \hat{\boldsymbol{\phi}}_{\mathbf{roi}}^{(n+1)}(m)} \mathbf{C_{{roi}_{m}}} ~\forall m \end{array} $$

If there are multiple moving objects in the scene, this procedure is applied to all regions with a potentially moving object, sequentially. After the model matrix has been updated, the algorithm passes to the next iteration by incrementing n and returning to the field estimation step.

4 Additional remarks

Before presenting experimental results, we find it valuable to mention some issues related to the proposed algorithms. Like other existing autofocus techniques, the proposed algorithms are insensitive to constant and linear phase errors (as a function of the aperture position). A constant phase on the data has no effect on the reconstructed image [22]; a linear phase causes a spatial shift in the reconstructed image without blurring it. Although the proposed method can remove all types of phase errors (parametric or random) that cause blurring, it cannot handle the shifts arising from linearly varying phase terms. Such a phase error can be compensated by appropriate spatial operations on the scene [23]. For our ROI-based approach, we have applied such a spatial operation to move the focused but shifted objects to their true positions. This operation is based on determining the weighted centroid of the binarized reflectivities in each distinct ROI of the conventionally reconstructed defocused image. Here, we make two assumptions: first, that each ROI contains only one moving object, and second, that the motion of the object causes a slowly varying phase error, e.g., a quadratic phase error, which causes smearing-like blurring. Quadratic phase errors are very common: a constant velocity in the cross-range direction induces a quadratic phase error function in the data. Non-constant velocities can also be handled reasonably well by this operation if the data collection duration is relatively short. Note that the object centroid estimation procedure does not always give the exact true position of the object, but it provides quite a good approximation. In the next section, we demonstrate examples of the application of this procedure.
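The centroid-based repositioning can be sketched as follows; the binarization threshold (half the peak magnitude) is an illustrative choice not specified in the text, and the function name is ours:

```python
import numpy as np

def roi_centroid(img_roi, thresh_frac=0.5):
    """Weighted centroid of the binarized reflectivities in one ROI,
    used as an estimate of the (shifted) object position."""
    mag = np.abs(img_roi)
    mask = mag > thresh_frac * mag.max()      # binarize
    w = mag * mask                            # weights: masked reflectivities
    rows, cols = np.indices(mag.shape)
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total
```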

5 Experimental results

We present experimental results on various scenes consisting of synthetic or real clutter together with synthetic moving or stationary objects. For comparison, images reconstructed by conventional imaging and by sparsity-driven imaging under a stationary-scene assumption [15] are presented as well. Before turning to the results, let us first establish the physical relationship between the phase errors and the velocity of an object moving at constant speed in the cross-range direction. The SAR system parameters for our experiments are given in Table 1.
Table 1

SAR system parameters for the experiments in Fig. 1

  Range resolution ρ_r and cross-range resolution ρ_cr       1 m
  Wavelength λ_w                                             0.02 m
  Distance between SAR platform and patch center d_0         30,000 m
  Platform velocity v_p                                      300 m/s
  Aperture time T = λ_w d_0 / (2 v_p ρ_cr)                   1 s

In the first experiment, the scene contains many stationary point objects and two moving objects with constant cross-range velocities of 2 and 4 m/s. For the two moving objects, the quadratic phase error induced by the cross-range velocity is computed as follows [2]:
$$\phi(t_{s}) = \frac{4 \pi v_{cr} v_{p} t_{s}^{2}}{\lambda_{w} d_{0}} $$
Here, t_s is the slow-time variable (the continuous variable along the aperture) and v_cr is the constant cross-range velocity of the object. According to this relationship, the objects with velocities of 2 m/s and 4 m/s induce quadratic phase errors, defined over the aperture −T/2 ≤ t_s ≤ T/2, with center-to-edge amplitudes of π and 2π radians, respectively. Figure 1 displays the results for this experiment. In the results for conventional imaging and for sparsity-driven imaging without any phase error correction, the defocusing and artifacts caused by the moving objects can be clearly observed. In contrast, images reconstructed by the proposed method are well focused and exhibit the advantages of sparsity-driven imaging, such as high resolution, reduced speckle, and reduced sidelobes. For the images of the first experiment, we also provide the corresponding colormap. For improved visibility, the logarithm of the intensities is displayed, and the colormap interval is accordingly chosen as [−40, 0]. All images in the paper are displayed using the same colormap.
Fig. 1

a Original scene. b Image reconstructed by conventional imaging. c Image reconstructed by sparsity-driven imaging. d Image reconstructed by the proposed method
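The center-to-edge amplitudes quoted above can be verified numerically from the phase error expression and the parameters in Table 1. This is a small sketch; the variable names are our own.

```python
import numpy as np

# SAR system parameters from Table 1
lambda_w = 0.02    # wavelength (m)
d0 = 30000.0       # distance between platform and patch center (m)
v_p = 300.0        # platform velocity (m/s)
rho_cr = 1.0       # cross-range resolution (m)

# Aperture time T = lambda_w * d0 / (2 * v_p * rho_cr)
T = lambda_w * d0 / (2.0 * v_p * rho_cr)   # = 1 s

def center_to_edge_phase(v_cr):
    """Quadratic phase error phi(t_s) = 4*pi*v_cr*v_p*t_s^2 / (lambda_w*d0),
    evaluated at the aperture edge t_s = T/2."""
    return 4.0 * np.pi * v_cr * v_p * (T / 2.0) ** 2 / (lambda_w * d0)

print(center_to_edge_phase(2.0) / np.pi)  # → 1.0, i.e., pi radians
print(center_to_edge_phase(4.0) / np.pi)  # → 2.0, i.e., 2*pi radians
```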

In the following four experiments, we employ our ROI-based approach. For these experiments, we use real clutter scenes of size 256×256, 512×512, and 1024×1024 from the TerraSAR-X public data set. These scenes are produced by placing synthetic targets on patches from real SAR images. In these experiments, the SAR data are simulated by taking the 2D discrete Fourier transform (DFT) of the scene, i.e., we use a 2D DFT matrix as the model matrix. The scene in Fig. 2 a contains two small strong-scattering objects. The phase history data of these two objects are corrupted by quadratic phase errors with different center-to-edge amplitudes. The conventionally reconstructed image without any motion compensation is displayed in Fig. 2 b, and the result obtained by the proposed approach in Fig. 2 c. Although the objects are well focused, they are displaced along the cross-range direction due to a linear phase term to which our algorithm is insensitive, as mentioned in the previous section. To shift the objects to their true positions, we use the target centroid estimates obtained from the defocused image, as seen in Fig. 2 d, and from the image reconstructed by the ROI-based approach, as seen in Fig. 2 e. The final image with the shifted objects is shown in Fig. 2 f.
Fig. 2

a Original scene. b Image reconstructed by conventional imaging. c Image reconstructed by our proposed ROI-based approach. d Target centroids on the conventional defocused image. e Target centroids on the image reconstructed by the ROI-based approach. f Final image with shifted objects
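The data simulation used in these ROI experiments can be sketched as below. This is a minimal illustration under our own assumptions, not the authors' exact simulator: the function names are hypothetical, and we apply the quadratic phase error along the first dimension of the 2D DFT data, taken here to be the aperture (cross-range) dimension, to the moving object's contribution only.

```python
import numpy as np

def quadratic_phase_error(n, phi_max):
    """Quadratic phase error over the aperture with center-to-edge
    amplitude `phi_max` (radians), in the FFT's frequency ordering."""
    t = np.fft.fftfreq(n)                # normalized aperture positions
    return phi_max * (2.0 * t) ** 2      # 0 at the center, phi_max at the edges

def simulate_sar_data(static_scene, moving_scene, phi_max):
    """Simulate phase-history data as the 2D DFT of the scene; only the
    moving object's contribution is corrupted by the phase error."""
    n = static_scene.shape[0]
    err = np.exp(1j * quadratic_phase_error(n, phi_max))[:, None]
    return np.fft.fft2(static_scene) + err * np.fft.fft2(moving_scene)
```

A conventional image is then obtained by an inverse 2D DFT of this data; the moving object appears defocused while the stationary background remains sharp.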

In the next experiment, whose results are presented in Fig. 3, we have applied our ROI-based approach to another scene containing one larger rigid-body object. The phase history data of this object are corrupted by a quadratic phase error. As seen in Fig. 3 c, the proposed method corrects the phase errors effectively and produces a well-focused image. However, the object is displaced. To shift the target to its true position, we again benefit from the target centroid information extracted from the defocused image. In this case, the centroid estimate (Fig. 3 b) is not exact, but it is a good estimate of the true position of the object. The final image with the shifted object is displayed in Fig. 3 d.
Fig. 3

a Original scene. b Conventional defocused image with target centroid. c Image reconstructed by the ROI-based approach. d Final image with shifted object

In Fig. 4, the results of our ROI-based method on a 512×512 scene are displayed. The phase history data of the distributed object in the scene are corrupted by a quadratic phase error function with a center-to-edge amplitude of 35π radians. Figure 4 a shows the original image, with the target marked by a red rectangle. Figure 4 b, c display, respectively, the conventional image with the defocused target and the image produced by sparsity-driven imaging without any motion compensation. As seen in Fig. 4 c, sparsity-driven imaging by itself cannot handle the motion-induced smearing in the image. Figure 4 d shows the result of our ROI-based method: the artifacts caused by the moving object are completely removed, and a focused image of the object is obtained.
Fig. 4

a Original scene. b Conventional defocused image. c Image reconstructed by sparsity-driven imaging. d Image reconstructed by the ROI-based method with shifted object

Figure 5 demonstrates the results of the ROI-based method on a 1024×1024 scene containing an extended target. The phase history data of the extended target are corrupted by a quadratic phase error function with a center-to-edge amplitude of 50π radians. Figure 5 a shows the original image; Fig. 5 b, c display the conventional image with the defocused target and the image produced by sparsity-driven imaging. As seen in Fig. 5 d, the ROI-based method produces promising results even on larger scenes containing extended targets.
Fig. 5

a Original scene. b Conventional defocused image. c Image reconstructed by sparsity-driven imaging. d Image reconstructed by the ROI-based method with shifted target

Finally, in Fig. 6, we present preliminary results on a toy example for the group sparsity and low-rank sparse decomposition approaches, to provide a basic proof of principle for these extensions. Due to the high computational cost of our current implementation of both algorithms, we perform the experiment on a small scene containing five point targets, two of which are stationary. The phase history data of two of the three moving targets are corrupted with a quadratic phase error function of center-to-edge amplitude π, and the phase history data of the third are corrupted with a quadratic phase error function of center-to-edge amplitude 1.5π. The results show that both approaches are capable of correcting the phase errors. In this example, the only visible difference between the reconstructed images is that the target-to-background ratio obtained by the group-sparsity approach is better than that obtained by the low-rank sparse decomposition approach, which may be due to non-optimal parameter selection.
Fig. 6

a Original scene. b Conventional defocused image. c Image reconstructed by the group sparsity approach. d Image reconstructed by the low-rank sparse decomposition approach

6 Conclusions

We have presented a sparsity-driven framework for SAR moving target imaging. In this framework, sparsity information about both the field and the phase errors is incorporated into the problem. To enforce the sparsity of the phase errors, three different regularization terms are proposed within the same framework. The method produces high-resolution images thanks to its sparsity-driven nature and simultaneously removes phase errors that cause defocusing in the cross-range direction. Additionally, we provide an ROI-based variant of the method for reduced computational cost and efficient phase error estimation.



Acknowledgements

This work was partially supported by the Scientific and Technological Research Council of Turkey under Grant 105E090, and by a Turkish Academy of Sciences Distinguished Young Scientist Award.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

Faculty of Engineering, Turkish-German University
Faculty of Engineering and Natural Sciences, Sabancı University


References

  1. CV Jakowatz Jr, DE Wahl, PH Eichel, Refocus of constant-velocity moving targets in synthetic aperture radar imagery. Proc. SPIE 3370, Algorithms for Synthetic Aperture Radar Imagery V, 85–95 (1998).
  2. JR Fienup, Detecting moving targets in SAR imagery by focusing. IEEE Trans. Aerosp. Electron. Syst. 37(3), 794–809 (2001).
  3. JK Jao, Theory of synthetic aperture radar imaging of a moving target. IEEE Trans. Geosci. Remote Sens. 39(9), 1984–1992 (2001).
  4. MJ Minardi, LA Gorham, EG Zelnio, Ground moving target detection and tracking based on generalized SAR processing and change detection (invited paper). Proc. SPIE 5808, Algorithms for Synthetic Aperture Radar Imagery XII, 156–165 (2005).
  5. WG Carrara, RM Majewski, RS Goodman, Spotlight Synthetic Aperture Radar: Signal Processing Algorithms (Artech House, Norwood, MA, 1995).
  6. I Stojanovic, WC Karl, Imaging of moving targets with multi-static SAR using an overcomplete dictionary. IEEE J. Sel. Topics Signal Process. 4(1), 164–176 (2010).
  7. AS Khwaja, J Ma, Applications of compressed sensing for SAR moving-target velocity estimation and image compression. IEEE Trans. Instrum. Meas. 60(8), 2848–2860 (2011).
  8. S Zhu, A Mohammad-Djafari, H Wang, B Deng, X Li, J Mao, Parameter estimation for SAR micromotion target based on sparse signal representation. EURASIP J. Adv. Signal Process. 2012(1) (2012).
  9. Q Wu, M Xing, C Qiu, B Liu, Z Bao, TS Yeo, Motion parameter estimation in the SAR system with low PRF sampling. IEEE Geosci. Remote Sens. Lett. 7(3), 450–454 (2010).
  10. S Bidon, JY Tourneret, L Savy, Sparse representation of migrating targets in low PRF wideband radar. IEEE Radar Conference (RADAR), 314–319 (2012).
  11. G Newstadt, E Zelnio, A Hero, Moving target inference with Bayesian models in SAR imagery. IEEE Trans. Aerosp. Electron. Syst. 50(3), 2004–2018 (2014).
  12. NO Önhon, M Çetin, A sparsity-driven approach for joint SAR imaging and phase error correction. IEEE Trans. Image Process. 21(4), 2075–2088 (2012).
  13. NO Önhon, M Çetin, SAR moving target imaging in a sparsity-driven framework. Proc. SPIE 8138, Wavelets and Sparsity XIV, 813806 (2011).
  14. NO Önhon, M Çetin, Sparsity-driven image formation and space-variant focusing for SAR. IEEE Int. Conf. on Image Processing (ICIP), 173–176 (2011).
  15. M Çetin, WC Karl, Feature-enhanced synthetic aperture radar image formation based on nonquadratic regularization. IEEE Trans. Image Process. 10(4), 623–631 (2001).
  16. S Samadi, M Çetin, MA Masnadi-Shirazi, Sparse representation-based synthetic aperture radar imaging. IET Radar Sonar Navig. 5(2), 182–193 (2011).
  17. S Boyd, L Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004).
  18. S Boyd, N Parikh, E Chu, B Peleato, J Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011).
  19. A Soganli, M Çetin, Low-rank sparse matrix decomposition for sparsity-driven SAR image reconstruction. 3rd Int. Workshop on Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa), 239–243 (2015).
  20. JR Moreira, W Keydel, A new MTI-SAR approach using the reflectivity displacement method. IEEE Trans. Geosci. Remote Sens. 33(5), 1238–1244 (1995).
  21. TM Calloway, GW Donohoe, Subaperture autofocus for synthetic aperture radar. IEEE Trans. Aerosp. Electron. Syst. 30(2), 617–621 (1994).
  22. CV Jakowatz Jr, DE Wahl, PH Eichel, DC Ghiglia, PA Thompson, Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach (Springer, New York, 1996).
  23. JR Fienup, Synthetic-aperture radar autofocus by maximizing sharpness. Opt. Lett. 25(4), 221–223 (2000).


© The Author(s) 2017