
# Aperture undersampling using compressive sensing for synthetic aperture stripmap imaging

*EURASIP Journal on Advances in Signal Processing*
**volume 2014**, Article number: 156 (2014)

## Abstract

Synthetic aperture imaging is a high-resolution imaging technique employed in radar and sonar applications, which constructs a large aperture by continually transmitting pulses while moving along a scene of interest. In order to avoid azimuth image ambiguities, spatial sampling requirements have to be fulfilled along the aperture trajectory. These requirements, however, limit the maximum speed and, therefore, the coverage rate of the imaging system. This paper applies the emerging field of compressive sensing to stripmap synthetic aperture imaging using transceiver as well as single-transmitter and multi-receiver systems so as to overcome the spatial Nyquist criterion. As a consequence, future imaging systems will be able to significantly reduce their mission time due to an increase in coverage rate. We demonstrate the capability of our proposed compressive sensing approach to at least double the maximum sensor speed based on synthetic and real data examples. Simultaneously, azimuth image ambiguities are successfully suppressed. The real acoustical measurements are obtained by a small-scale ultrasonic synthetic aperture laboratory system.

## 1 Introduction

Synthetic aperture imaging[1, 2] is a technique to produce high-resolution reflectivity maps of a scene of interest, e.g., of the Earth's surface for reconnaissance missions[3] or for glacier monitoring[4, 5] using synthetic aperture radar (SAR). While the resolution is almost identical to that of optical satellite images[6], SAR imaging is advantageous due to its weather-independent and daytime-independent deployment.

For imaging large areas in underwater applications, the use of optical sensors is inadequate given the attenuation of electromagnetic waves in water. Instead, synthetic aperture sonar (SAS)[7–9] systems are deployed to achieve highly improved coverage rates compared to normal side-scan sonars[8]. In the context of SAS, mine hunting applications are of broad concern[10, 11]. The main synthetic aperture modes encompass spotlight and stripmap operation. While spotlight operation is typically favored over stripmap due to its improved resolution capabilities in SAR, the stripmap mode is commonly used in SAS[8].

The principle of synthetic aperture techniques is to synthesize a large aperture by constantly transmitting pulses (pings) using a single transceiver in SAR or a single-transmitter and multi-receiver configuration in SAS[8]. The physical aperture is mounted on an imaging platform, which travels along a pre-determined rectilinear trajectory past the area of interest, the so-called along-track dimension. Given the limited beamwidth of the physical aperture, the synthetic aperture length dynamically adjusts itself proportionally to the focusing range. Thus, a constant along-track resolution is ideally maintained for the entire scenery, leading to high-resolution images[1, 2, 12].

However, post-mission image reconstruction requires the coherent processing of numerous consecutive echo signals, which have to be stored in memory during data collection. Hence, current synthetic aperture systems produce a remarkable amount of data during only a few hours of operation, which leads to issues with respect to data storage, data transportation, and data processing[13]. Additionally, the advance per ping, i.e., the traveling distance of the imaging platform between two consecutive transmission times, is dictated by the spatial sampling theorem[1, 2]. Violating this requirement leads to the occurrence of azimuth image ambiguities, also called grating lobes or ghost targets, in the reconstructed image. The latter may mask important image content such as objects or their shadows, which are then irretrievably lost. Moreover, the advance per ping influences the coverage rate of the imaging system and hence determines the mission time. Consequently, alternative processing methods are of utmost interest so as to avoid collecting this massive amount of data and simultaneously to suppress azimuth image ambiguities while reducing the mission time.

The emerging field of compressive sensing (CS) introduces a novel sampling framework[14, 15], which is able to lower the sampling rate significantly below the Nyquist rate if the captured signal has a sparse representation in some domain. For example, the image domain itself can be sparse, considering a few man-made objects lying on the seafloor. Moreover, an incoherence criterion between the measurement and sparsity domains has to be fulfilled. Roughly speaking, the criterion states that the measurement and sparsity domains have to be highly uncorrelated for CS to work. Feasible applications cover diverse areas. Among others, the CS framework has been successfully applied in the context of digital imaging[16], medical scanners[17], as well as in various radar and radar imaging applications[18–24], which will be briefly discussed subsequently to point out their difference to our proposed approach.

In[18], the need for a matched-filter operation is avoided by using CS to focus the echo signals in range direction while simultaneously reducing the sampling rate. While the use of a specially designed waveform, the Alltop sequence, is suggested in[19] to design a high-resolution radar using CS, the author in[23] chooses a stepped-frequency signal model. There, CS is applied to reduce the recording time caused by the sequential transmission of numerous mono-chromatic signals in the application of radar pulse compression. Similarly, a CS stepped-frequency approach is suggested in[24] in the context of spotlight SAR to decrease the recording time and data storage requirements. In contrast, the authors in[20–22] use the common linear frequency modulated (LFM) pulse sequence for CS-based SAR imaging using the spotlight mode. Promising results have been achieved using both synthetic data and real radar measurements. In[20, 21], the narrowband as well as the far-field assumption is applied, and thus, range migration[1, 12], which is of major concern in stripmap sonar imaging systems, is not taken into account. In particular, the assumption in[21] of two separate 1-D processing steps is not feasible in near-field scenarios as typically given for SAS systems. The authors of[22] motivate synthetic aperture undersampling in their CS framework to reduce data storage and to obtain a wider swath width, again for a spotlight operation mode assuming the tomographic formulation[2, 25]. Interestingly, a randomized transmit scheme is used to lower azimuth ambiguities.

To the best of our knowledge, a general description of how to apply CS to stripmap synthetic aperture imaging has not yet been addressed in the literature. In this paper, we use the linear system model of the received echo signals as provided in[26] and link it to the CS framework. We demonstrate, based on synthetic data as well as on real data measurements, that by regularly undersampling the synthetic aperture, CS is capable of successfully suppressing the occurrence of azimuth image ambiguities. This allows the area coverage rate, a key parameter, to be improved by increasing the speed of the imaging platform while maintaining the pulse repetition interval. We extend the proposed reconstruction scheme to single-transmitter and multi-receiver synthetic aperture systems, as commonly applied in SAS to relax sampling constraints and achieve useful coverage rates.

The remainder of the paper is organized as follows: Section 2 provides a brief overview of the signal model of stripmap synthetic aperture systems. Additionally, it addresses the vector-matrix notation of the introduced model, the spatial sampling requirements for the synthetic aperture, as well as the single-transmitter and multi-receiver extension. Section 3 outlines the conventional imaging technique and our proposed CS imaging technique. Moreover, we introduce undersampling schemes. Section 4 provides synthetic data results, and Section 5 shows real data results using our ultrasonic laboratory synthetic aperture system. Finally, we discuss the results in Section 6.

## 2 Synthetic aperture stripmap data model

The principle of forming a synthetic array is to transmit pulses at index times *p* and to receive the echo signals at each sensor element position $\boldsymbol{a}_p=[0,\,p\Delta^{A},\,h^{\text{og}}]^{T}$, where $h^{\text{og}}$ is the height over ground and $\Delta^{A}$ denotes the advance per ping of the imaging platform. A typical geometrical setup of a synthetic aperture imaging system operating in stripmap mode is depicted in Figure 1, where the direction of wave propagation and the traveling direction of the imaging platform are called range, *x*, and along-track, *y*, respectively.

Here, the target scene *f*(*x*,*y*) consists of a set of *D* stationary point targets, each with a target reflectivity $\sigma_d$ and located at position $\boldsymbol{q}_d=[x_d,\,y_d,\,0]^{T}$, with $d=1,\ldots,D$. To simplify matters, the reflectivity $\sigma_d$ is assumed to be independent of the frequency and of the angle of incidence of the impinging wave. Moreover, any spreading losses are incorporated into $\sigma_d$. Then, the ideal target reflectivity function[1] of the scene of interest is given by

$$f(x,y)=\sum_{d=1}^{D}\sigma_d\,\delta(x-x_d,\,y-y_d) \qquad (1)$$

where *δ*(*x*,*y*) is the two-dimensional delta function of range direction *x* and along-track direction *y*. Given the distance between target *d* located at $\boldsymbol{q}_d$ and the imaging platform at position $\boldsymbol{a}_p$ as

$$R_{d,p}=\left\lVert\boldsymbol{q}_d-\boldsymbol{a}_p\right\rVert_2 \qquad (2)$$

where ∥·∥_{2} denotes the Euclidean norm, the discretized echo signals of a mono-static synthetic aperture system under the stop-and-hop assumption[12] can be expressed as the superposition of the individual target responses:

$$e_p(n)=\sum_{d=1}^{D}\sigma_d\,b(\theta_{d,p})\,s\!\left(n-\frac{\eta_{d,p}}{T_s}\right) \qquad (3)$$

In (3), the round trip delay $\eta_{d,p}$ between the sensor at its current position $\boldsymbol{a}_p$ and target *d* is given by

$$\eta_{d,p}=\frac{2R_{d,p}}{c} \qquad (4)$$

and the function $b(\theta_{d,p})$ is an indicator function resembling an ideal beam pattern of the transceiver. It determines whether the target at location $\boldsymbol{q}_d$ is seen by the sensor at position $\boldsymbol{a}_p$ and can be expressed as

$$b(\theta_{d,p})=\begin{cases}1, & |\theta_{d,p}|\le\theta_0/2\\[2pt] 0, & \text{otherwise}\end{cases} \qquad (5)$$

Here, $\theta_0$ is the beamwidth of the physical sensor and $\theta_{d,p}$ denotes the aspect angle between the *d*-th target and the sensor location $\boldsymbol{a}_p$. Moreover, *c* denotes the propagation speed of the wave in the respective medium, e.g., the speed of light, or the speed of sound in water or air, $T_s$ is the sampling interval, and *s*(*n*) characterizes the transmitted signal pulse form in discrete time. Typical radar and sonar systems use an LFM pulse for *s*(*n*) due to its favorable properties w.r.t. range resolution and Doppler shift insensitivity during pulse compression[12].
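To make the data model concrete, the following minimal pure-Python sketch simulates one ping of the echo model (2) to (5) for a set of point targets. This is an illustrative reimplementation, not the authors' code: the function names, the complex baseband LFM pulse, and all parameter values are our own assumptions, and the aspect angle is computed in the ground plane only.

```python
import math
import cmath

def lfm_pulse(n_samples, bandwidth, fs):
    """Complex baseband LFM (chirp) pulse s(n), swept over +-bandwidth/2."""
    T = n_samples / fs
    k = bandwidth / T  # chirp rate in Hz/s
    return [cmath.exp(1j * math.pi * k * (n / fs - T / 2) ** 2)
            for n in range(n_samples)]

def echo_ping(p, targets, adv, h_og, c, fs, pulse, n_fast, theta0):
    """One ping e_p(n): superposition of delayed, beam-limited pulses (eq. 3-5).

    targets: list of (x_d, y_d, sigma_d); adv: advance per ping Delta^A.
    """
    a = (0.0, p * adv, h_og)                    # sensor position a_p
    e = [0j] * n_fast
    for (x_d, y_d, sigma) in targets:
        dx, dy, dz = x_d - a[0], y_d - a[1], 0.0 - a[2]
        r = math.sqrt(dx * dx + dy * dy + dz * dz)   # R_{d,p}, eq. (2)
        theta = math.atan2(dy, dx)                   # aspect angle (ground plane)
        if abs(theta) > theta0 / 2:                  # indicator b(theta), eq. (5)
            continue
        m_eta = int((2 * r / c) * fs)                # round trip delay in samples, eq. (4)
        for n, s in enumerate(pulse):
            if m_eta + n < n_fast:
                e[m_eta + n] += sigma * s
    return e
```

For instance, a unit-reflectivity target at 7.5 m range with `c` = 1500 m/s and `fs` = 100 kHz produces echo energy starting at fast-time sample 1000.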

In order to obtain a more realistic model, we introduce the term $v_p(n)$ in (3), which models additive white sensor noise. Figure 2 illustrates a noise-free example, i.e., $v_p(n)\equiv 0$, of the phase response of synthetic echo measurements along the synthetic aperture (Figure 2a) resulting from a single point scatterer as depicted in Figure 2b. As can be seen from Figure 2a, the received echo data can be represented in matrix form as

$$\boldsymbol{E}=\big[e_p(n)\big]\in\mathbb{C}^{M_p\times M_n} \qquad (6)$$

where *p* is the discrete along-track ping index (slow time) of a total number of $M_p$ pings, i.e., the number of rows, and $M_n$ is the number of range bins (fast time), i.e., the number of columns.

### 2.1 Data model in matrix-vector notation

This section outlines how to rewrite the echo data model of (3) and its matrix representation of (6) into a system of linear equations[26] given by

$$\boldsymbol{e}=\boldsymbol{S}\boldsymbol{\sigma}+\boldsymbol{v} \qquad (7)$$

such that $\boldsymbol{e}=\text{vec}(\boldsymbol{E}^{T})$, where vec(·) is the operator that vectorizes a matrix column after column. In (7), the vector $\boldsymbol{v}$ describes additive sensor noise and the vector $\boldsymbol{\sigma}=[\sigma_1,\ldots,\sigma_D]^{T}$ contains the reflectivities of all *D* targets. The target reflectivity vector $\boldsymbol{\sigma}$ is multiplied by the pulse system matrix

$$\boldsymbol{S}=\big[\boldsymbol{S}_0^{T},\ldots,\boldsymbol{S}_{M_p-1}^{T}\big]^{T} \qquad (8)$$

which is a stacked matrix consisting of the individual pulse matrices $\boldsymbol{S}_p$ with $p=0,\ldots,M_p-1$. An individual pulse matrix describes the echo signals of all *D* targets received at position $\boldsymbol{a}_p$ during the *p*-th ping. It can be expressed as

$$\boldsymbol{S}_p=\big[\boldsymbol{s}_{1,p},\ldots,\boldsymbol{s}_{D,p}\big] \qquad (9)$$

where $\boldsymbol{s}_{d,p}$ is the delayed version of the transmitted pulse, which has been reflected by the *d*-th target. It is given by

$$\boldsymbol{s}_{d,p}=\big[\boldsymbol{0}_{1\times M_{\eta(d,p)}},\ \boldsymbol{s},\ \boldsymbol{0}_{1\times \tilde{M}_{\eta(d,p)}}\big]^{T} \qquad (10)$$

where $\boldsymbol{0}_{1\times M}$ denotes a row vector of zeros of size *M*. Here, $\boldsymbol{s}=[s(0),\ldots,s(M_s-1)]$ denotes the transmitted waveform vector with a pulse length of $M_s$ samples. Its position index within the vector $\boldsymbol{s}_{d,p}$ depends on the number of samples of the round trip delay, which can be expressed as

$$M_{\eta(d,p)}=\left\lfloor\frac{\eta_{d,p}}{T_s}\right\rfloor \qquad (11)$$

where ⌊·⌋ rounds towards the next smaller integer value. Since a total number of $M_n$ fast-time samples is recorded, the vector $\boldsymbol{s}_{d,p}$ must be zero-padded with $\tilde{M}_{\eta(d,p)}=M_n-\left(M_{\eta(d,p)}+M_s\right)$ trailing zeros. Please note that the number of fast-time samples $M_n$ is related to the maximum range $R_{\max}$ of the imaging system.
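The zero-padded column construction of (10) and (11) can be sketched in a few lines of Python. This is an illustrative helper of our own, not the paper's code; columns are stored as plain lists, and the sample delay `m_eta` is assumed to be precomputed via (11).

```python
def pulse_column(pulse, m_eta, n_fast):
    """Column s_{d,p}: m_eta leading zeros, the pulse s, then trailing zeros (eq. 10)."""
    m_trail = n_fast - (m_eta + len(pulse))   # \tilde{M}_{eta(d,p)}, trailing zeros
    if m_trail < 0:
        raise ValueError("pulse exceeds the recording window of n_fast samples")
    return [0j] * m_eta + list(pulse) + [0j] * m_trail

def pulse_matrix(delays_samples, pulse, n_fast):
    """Pulse matrix S_p (eq. 9) as a list of columns, one per target."""
    return [pulse_column(pulse, m, n_fast) for m in delays_samples]
```

Stacking the per-ping matrices returned by `pulse_matrix` row-wise then yields the pulse system matrix of (8).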

### 2.2 Sampling requirements

Spatially sampling a synthetic aperture is achieved by moving the imaging sensor by a distance *Δ*^{A} between two consecutive pings as depicted in Figure1. As a consequence, there is a relation between the speed *v*, the advance per ping of the sensor *Δ*^{A}, and the time interval between two pings *T*_{PRI}, which is given by *Δ*^{A}=*v* *T*_{PRI}[1, 12].

In order to avoid azimuth image ambiguities, which either reduce the contrast of the reconstructed image or are mistakenly interpreted as targets, the synthetic aperture has to be sampled correctly. This is achieved by moving the sensor by at most the maximum advance per ping $\Delta_{\max}^{A}$, i.e., $\Delta^{A}\le\Delta_{\max}^{A}$, before transmitting the next pulse. To find an expression for $\Delta_{\max}^{A}$, let us assume an ideal point scatterer at position $[x_d,\,y_d]^{T}$ as depicted in Figure 2b. Then, the relationships between the wavenumbers (i.e., spatial frequencies) in range direction, $k_x$, and along-track direction, $k_y$, and the wavenumber in the direction of wave propagation, $k_r$, assuming far-field conditions, are given by

$$k_x=2k_r\cos(\theta_{d,p}),\qquad k_y=2k_r\sin(\theta_{d,p}) \qquad (12)$$

where the factor of two accounts for the two-way signal path of the mono-static configuration,

and $\theta_{d,p}$ represents the aspect angle between the sensor and the target. Clearly, due to sampling in the *y*-direction, we have to consider the relation for $k_y$ to find a requirement on the spatial sampling interval. Given the relation between wavenumber and wavelength, $k_r=2\pi/\lambda$, the spatial sampling interval $\Delta_{\max}^{A}$ according to the Nyquist theorem is given by

$$\Delta_{\max}^{A}=\frac{\lambda_{\max}}{4\sin(\theta_{d,\max})} \qquad (13)$$

where $\lambda_{\max}$ denotes the maximum wavelength of the transmitted signal. For a stripmap imaging system, the maximum aspect angle $\theta_{d,\max}$ equals half of the beamwidth angle $\theta_0$ of the physical aperture[1]. In the case of a planar aperture, the half-beamwidth angle $\theta_0/2$ is given by

$$\sin\!\left(\frac{\theta_0}{2}\right)=\frac{\lambda_{\max}}{2D_y} \qquad (14)$$

where $D_y$ is the physical aperture diameter in the *y*-direction[1, 27]. Substituting (14) into (13) leads to the maximum advance per ping, and thus the spatial sampling constraint can be expressed as

$$\Delta_{\max}^{A}=\frac{D_y}{2} \qquad (15)$$

Thus, the maximum advance per ping depends on the size of the physical aperture. Violating the condition in (15) yields azimuth image ambiguities, which degrade the image quality and may be misinterpreted as real targets. Note that simply increasing the physical aperture size $D_y$ to enlarge the maximum advance per ping $\Delta_{\max}^{A}$ conflicts with the along-track synthetic aperture resolution, which is given by $\delta_y=D_y/2$[1].
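As a quick numeric sanity check of the derivation above, the following sketch evaluates the Nyquist bound via the half-beamwidth relation and confirms that the wavelength cancels, leaving half the aperture diameter. The values used below are hypothetical, chosen only for illustration.

```python
def max_advance(wavelength, aperture_d):
    """Max advance per ping: Nyquist bound (13) with sin(theta0/2) = lambda/(2*D_y) (14).

    Substituting (14) into (13) cancels the wavelength, leaving D_y / 2 (15).
    """
    sin_half_bw = wavelength / (2.0 * aperture_d)   # eq. (14)
    return wavelength / (4.0 * sin_half_bw)         # eq. (13) -> D_y / 2
```

For example, a 2 cm aperture permits an advance per ping of at most 1 cm, regardless of wavelength.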

### 2.3 Multi-receiver sampling and data model

Due to the slow speed of sound in water (compared to the speed of light), the transceiver sensor is replaced in sonar applications by a single-transmitter and multi-receiver system to relax the spatial sampling constraints and achieve useful coverage rates[8]. Typically, the multi-receiver configuration consists of a uniform linear array (ULA) of $N_{\text{rx}}$ elements. Then, the maximum achievable advance per ping without causing azimuth image ambiguities is given by[8]

$$\Delta_{\max}^{\text{ULA}}=\frac{L_{\text{phy}}}{2} \qquad (16)$$

where $L_{\text{phy}}$ describes the length of the physical array. Each receiver, $u=1,\ldots,N_{\text{rx}}$, records its own echo data along the synthetic aperture, which is denoted by the single-receiver echo matrix $\boldsymbol{E}_u$. This matrix is similar to the transceiver scenario of (6) except that the round trip delay of (4) changes. It is now related to the distance from the transmitter to the point scatterer and back to the receiver element and is expressed as

$$\eta_{d,p,u}=\frac{\left\lVert\boldsymbol{q}_d-\boldsymbol{a}_p^{\text{tx}}\right\rVert_2+\left\lVert\boldsymbol{q}_d-\boldsymbol{a}_p^{\text{rx}}(u)\right\rVert_2}{c} \qquad (17)$$

where $\boldsymbol{a}_p^{\text{tx}}$ and $\boldsymbol{a}_p^{\text{rx}}(u)$ denote the Cartesian coordinates of the transmitter and of the *u*-th receiver location, respectively.
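The bistatic delay of (17) amounts to summing two Euclidean distances; a minimal helper, with our own naming, could look as follows.

```python
import math

def bistatic_delay(q, a_tx, a_rx, c):
    """Round trip delay eq. (17): transmitter -> target q -> receiver element.

    q, a_tx, a_rx are Cartesian coordinate tuples; c is the propagation speed.
    """
    return (math.dist(q, a_tx) + math.dist(q, a_rx)) / c
```

Setting `a_rx` equal to `a_tx` recovers the mono-static delay of (4).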

## 3 Synthetic aperture imaging

The objective of synthetic aperture imaging is to focus the received echo signals in range direction and along-track direction to obtain an estimate of the reflectivity of the target area. Typically, image reconstruction techniques can be classified into time-domain and frequency-domain approaches. In this paper, we concentrate on a classical time-domain approach known as time-domain correlation[1, 8, 26]. This technique correlates the echo data with the 2-D signature of the synthetic aperture for each grid point in the target scene. In the following, we provide a description of the time-domain correlation technique in vector-matrix notation and demonstrate that the inversion problem can be solved via CS.

### 3.1 Conventional focusing technique

In order to apply the time-domain correlation scheme using vector-matrix manipulations, a focusing matrix $\boldsymbol{G}$ is required. It relates the target area to the received echo signals and is identical to the pulse matrix $\boldsymbol{S}$ in (8), except that it covers the entire grid $\boldsymbol{g}_{kl}=[x_k,\,y_l]^{T}$ of the discretized target scene, for $k=1,\ldots,N_x$ and $l=1,\ldots,N_y$, rather than only the target coordinates $\boldsymbol{q}_d$, with $d=1,\ldots,D$. Thus, the focusing matrix $\boldsymbol{G}\in\mathbb{C}^{M_nM_p\times N_yN_x}$ is a stacked matrix of ping-based focusing matrices $\boldsymbol{G}_p\in\mathbb{C}^{M_n\times N_yN_x}$ with $p=0,\ldots,M_p-1$ that are similar to (9) and given by

$$\boldsymbol{G}_p=\big[\boldsymbol{s}_{p,1,1},\ldots,\boldsymbol{s}_{p,N_x,N_y}\big] \qquad (18)$$

The ping-based focusing matrix $\boldsymbol{G}_p$ describes the mapping between all grid points $\boldsymbol{g}_{kl}$ and the sensor position $\boldsymbol{a}_p$. The position index of the received pulse $\boldsymbol{s}_{p,k,l}$ within each column depends on the number of samples of the round trip delay in (11), substituting the target coordinate $\boldsymbol{q}_d$ by the grid point location $\boldsymbol{g}_{kl}$. After discretizing the target area *f*(*x*,*y*) into matrix form as

$$\boldsymbol{F}=\big[\sigma_{kl}\big]\in\mathbb{C}^{N_x\times N_y} \qquad (19)$$

where each element of the matrix represents the reflectivity of the corresponding grid point $\boldsymbol{g}_{kl}$, the data model for the reconstruction can be found. Similarly to (7), it can be written in vector-matrix notation as

$$\boldsymbol{e}=\boldsymbol{G}\boldsymbol{f}+\boldsymbol{v} \qquad (20)$$

where $\boldsymbol{f}=\text{vec}(\boldsymbol{F})$. Next, we can formulate the time-domain correlation technique using vector-matrix operations[26] as

$$\widehat{\boldsymbol{f}}=\boldsymbol{G}^{H}\boldsymbol{e} \qquad (21)$$

to estimate the reflectivity of the target scene, where $(\cdot)^{H}$ denotes the Hermitian transpose. The estimate $\widehat{\boldsymbol{f}}$ is a stacked vector, which has to be reshaped to obtain a presentable reconstructed image, i.e., $\widehat{\boldsymbol{F}}=\text{vec}^{-1}(\widehat{\boldsymbol{f}})$, where $\text{vec}^{-1}(\cdot)$ denotes the reshape operation that recovers a matrix from a stacked vector. While this time-domain approach is not very efficient in terms of computational complexity, it solves the inverse reconstruction problem without approximations; simultaneously, it facilitates the use of arbitrary path deviations. The latter is extremely important for motion compensation techniques such as micronavigation in synthetic aperture sonar[1, 8].
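The correlation step of (21) reduces to one Hermitian inner product per grid point. The sketch below assumes, as in our earlier helpers, that $\boldsymbol{G}$ is stored column-wise as a list of lists; it is an illustration, not the authors' implementation.

```python
def correlate_focus(G_cols, e):
    """Time-domain correlation, eq. (21): f_hat = G^H e.

    G_cols: columns of the focusing matrix G (one column per grid point);
    e: stacked echo vector. Returns the stacked scene estimate f_hat.
    """
    return [sum(g.conjugate() * v for g, v in zip(col, e)) for col in G_cols]
```

Each output entry correlates the echo data with the full 2-D synthetic aperture signature of one grid point, which is why arbitrary (measured) sensor trajectories can simply be baked into the columns of $\boldsymbol{G}$.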

Given a single-transmitter and multi-receiver synthetic aperture system, we can rewrite the focusing matrix $\boldsymbol{G}$. For each single receiver element, $u=1,\ldots,N_{\text{rx}}$, the sample round trip delay of the transceiver model in (11) is substituted by its equivalent delay of a transmitter-receiver pair. In this case, we replace the focusing matrix $\boldsymbol{G}$ by its single-receiver counterpart $\boldsymbol{G}^{u}$ in (21), which leads to the reconstruction of the single-receiver image $\widehat{\boldsymbol{f}}_u$ of the *u*-th receiver. Coherently combining all single-receiver images as

$$\widehat{\boldsymbol{f}}=\sum_{u=1}^{N_{\text{rx}}}\widehat{\boldsymbol{f}}_u \qquad (22)$$

leads to the synthetic aperture image $\widehat{\boldsymbol{f}}$. Note that the azimuth image ambiguities occurring in the single-receiver images are canceled out during the coherent summation. In the following section, we show how CS reconstruction can be performed.

### 3.2 Focusing using compressive sensing

This section introduces the basics of CS theory in the context of synthetic aperture imaging and outlines how CS can be used for image reconstruction. CS allows sensing a signal in a low-dimensional form; for this purpose, however, the captured signal is required to be sparse in a certain domain[14, 15]. Consider the signal $\boldsymbol{f}=[\sigma_1,\ldots,\sigma_N]^{T}$ with $N=N_xN_y$ that can be sparsely represented by means of the transform matrix $\boldsymbol{\Psi}$, which describes an $N\times N$ unitary basis such that

$$\boldsymbol{f}=\boldsymbol{\Psi}\boldsymbol{\chi} \qquad (23)$$

where $\boldsymbol{\chi}=[\chi_1,\ldots,\chi_N]^{T}$ denotes the *K*-sparse coefficient vector. The *K*-sparse property states that only *K* coefficients are unequal to zero, with $K\ll N$[28]. For $\boldsymbol{\Psi}=\boldsymbol{I}$, where $\boldsymbol{I}$ is the identity matrix, the sparse coefficient vector $\boldsymbol{\chi}$ equals the signal $\boldsymbol{f}$. Thus, the signal $\boldsymbol{f}$ itself is assumed sparse. Subsequently, we assume that the sparse basis transform equals the identity matrix. In other words, we consider the target scene itself to be sparse, e.g., a few point-like objects lying on the seafloor. Instead of measuring the echo signal vector $\boldsymbol{e}$ of (20), which is of size $M=M_pM_n$, CS aims at reducing the measurements to $\tilde{M}<M$. Since the reconstruction of $\boldsymbol{f}$ is of interest, an under-determined system of linear equations has to be solved. The latter is feasible and yields a unique solution given the sparsity of $\boldsymbol{f}$. We undersample the received echo signals $\boldsymbol{e}$ by multiplying with a selection matrix $\boldsymbol{\Sigma}$ as

$$\boldsymbol{e}^{\text{cs}}=\boldsymbol{\Sigma}\boldsymbol{e} \qquad (24)$$

where $\boldsymbol{e}^{\text{cs}}$ denotes the spatially and/or temporally undersampled vector of raw echo signals of size $\tilde{M}$. Note that the selection matrix $\boldsymbol{\Sigma}$ is a fat matrix of dimension $\tilde{M}\times M$. It resembles an identity matrix with deleted rows for spatially undersampled along-track positions (see Section 3.3). Due to the presence of measurement noise, we formulate the reconstruction of the target scene as a basis pursuit denoising (BPDN)[29] optimization problem as follows:

$$\widehat{\boldsymbol{f}}^{\text{cs}}=\underset{\boldsymbol{f}}{\arg\min}\ \tfrac{1}{2}\left\lVert\boldsymbol{e}^{\text{cs}}-\boldsymbol{\Sigma}\boldsymbol{G}\boldsymbol{f}\right\rVert_2^2+\Lambda^{\text{cs}}\left\lVert\boldsymbol{f}\right\rVert_1 \qquad (25)$$

which can be solved using, e.g., the SpaRSA algorithm[30], which is directly capable of dealing with complex data. Here, $\Lambda^{\text{cs}}$ represents the regularization parameter of the optimization problem. Again, the result of the reconstruction is a stacked vector $\widehat{\boldsymbol{f}}^{\text{cs}}$, which has to be reshaped to obtain a presentable image of the target scene $\widehat{\boldsymbol{F}}^{\text{cs}}$. As for conventional imaging, the focusing matrix $\boldsymbol{G}$ can be substituted by its single-receiver counterpart $\boldsymbol{G}^{u}$ in (25) to obtain the aliased single-receiver image $\widehat{\boldsymbol{f}}_u^{\text{cs}}$. Then, a coherent summation of the individual CS images $\widehat{\boldsymbol{f}}_u^{\text{cs}}$, with $u=1,\ldots,N_{\text{rx}}$, leads to the synthetic aperture image $\widehat{\boldsymbol{f}}^{\text{cs}}$ of a single-transmitter and multi-receiver system, similar to (22). Alternatively, the overall focusing matrix $\boldsymbol{G}$ could be constructed by stacking the individual receiver focusing matrices $\boldsymbol{G}^{u}$ and solving the optimization problem of (25) using the complete data model. On the one hand, this may lead to better imaging results due to a sparser content of the reconstructed scene. On the other hand, this approach increases the computational complexity due to the larger size of the stacked focusing matrix $\boldsymbol{G}$. Thus, we trade off computational complexity against imaging performance.
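The BPDN problem of (25) is solved in the paper with SpaRSA[30]. As an illustrative stand-in only, the following sketch implements plain ISTA (iterative shrinkage-thresholding) with complex soft-thresholding; the function names, step-size heuristic, and iteration count are our own assumptions, not the authors' solver.

```python
def soft(z, t):
    """Complex soft-threshold: shrink the magnitude of z by t, keep its phase."""
    m = abs(z)
    return z * (1 - t / m) if m > t else 0j

def ista_bpdn(A_rows, e_cs, lam, n_iter=200, step=None):
    """Minimal ISTA for eq. (25): min_f 0.5*||e_cs - A f||_2^2 + lam*||f||_1,
    with A = Sigma*G given row-wise as a list of lists (illustrative stand-in
    for SpaRSA)."""
    m, n = len(A_rows), len(A_rows[0])
    if step is None:
        # crude, safe step size from the squared Frobenius norm bound on ||A||^2
        step = 1.0 / sum(abs(v) ** 2 for row in A_rows for v in row)
    f = [0j] * n
    for _ in range(n_iter):
        # residual r = A f - e_cs, gradient A^H r, then proximal (shrinkage) step
        r = [sum(A_rows[i][j] * f[j] for j in range(n)) - e_cs[i] for i in range(m)]
        grad = [sum(A_rows[i][j].conjugate() * r[i] for i in range(m)) for j in range(n)]
        f = [soft(f[j] - step * grad[j], step * lam) for j in range(n)]
    return f
```

With an identity measurement matrix, the iteration converges to the element-wise soft-thresholded data, which matches the known closed-form BPDN solution for that special case.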

### 3.3 Area coverage rate and undersampling schemes

The main focus of this paper is to demonstrate that CS has the potential to increase the advance per ping $\Delta^{A}$ by enlarging the platform speed *v* while maintaining the pulse repetition time $T_{\text{PRI}}$. Consequently, this leads to an improved area coverage rate, while the image quality is maintained due to ambiguity suppression. The area coverage rate of a synthetic aperture system is given by

$$A_{\text{cr}}=v\,R_{\max} \qquad (26)$$

where $R_{\max}=0.5\,c\,T_{\text{PRI}}$ describes the relation between the maximum range and the pulse repetition interval in order to avoid range ambiguities[12]. Hence, an increase in the advance per ping $\Delta^{A}$ corresponds to an increase either in the pulse repetition time $T_{\text{PRI}}$ or in the platform speed *v*. However, a larger $T_{\text{PRI}}$ affects the maximum range, which is at the same time limited by the signal-to-noise ratio (SNR). Thus, given a maximum range $R_{\max}$ of the imaging system, the area coverage rate $A_{\text{cr}}$ is solely determined by the speed *v*.

In the following, we introduce two basic undersampling schemes, namely a regular along-track sampling scheme as well as a combined regular along-track and random range sampling scheme similar to[31]. Note that CS typically shows the best performance for random downsampling matrices[32, 33]. On the contrary, a purely random sampling in the along-track direction without skipping entire spatial sampling positions would not lead to an improvement in coverage rate but only to a reduced amount of data. The two schemes are illustrated in Figure 3. Both subplots show a matrix of slow-time and fast-time samples with gray and white boxes, where the latter mean that the corresponding samples have been dropped. In the case of regularly undersampling the synthetic aperture in Figure 3a, every second slow-time position $\boldsymbol{a}_p$ is dropped, which is denoted by $\Delta^{A}=2\Delta_{\max}^{A}$. This means that the actual sampling interval is twice as large as required by the sampling theorem, and therefore, the platform speed can be increased by the same factor. In other words, the selection matrix $\boldsymbol{\Sigma}$ resembles an identity matrix of size *M* in which every second row is deleted. Thus, the actual dimension of the selection matrix $\boldsymbol{\Sigma}$ is given by $\tilde{M}\times M$ with $\tilde{M}=0.5M$.

Figure 3b additionally shows how the range direction is randomly undersampled by dropping fast-time samples with a pre-defined ratio $\varrho_n$, e.g., $\varrho_n=0.25$ in Figure 3b. This scheme is an extension of the along-track undersampling scheme and additionally leads to savings in storage capacity. However, compared to the first scheme, it requires some changes in the data acquisition hardware of the imaging system and a different notation for reducing the measurements than a basic matrix multiplication. Instead, we can consider an element-wise reduction operation in (24) depending on the binary value of the undersampling scheme as depicted in Figure 3b. Subsequently, we apply the introduced schemes for CS reconstruction on synthetic data examples.
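In an implementation it is often more convenient to represent both schemes as boolean keep/drop masks instead of forming the fat selection matrix explicitly; the sketch below does exactly that. It is our own illustration, with a fixed random seed for reproducibility.

```python
import random

def along_track_mask(n_pings, keep_every=2):
    """Regular along-track undersampling (Fig. 3a): keep every keep_every-th ping."""
    return [p % keep_every == 0 for p in range(n_pings)]

def range_mask(n_fast, drop_ratio, rng=None):
    """Random fast-time undersampling (Fig. 3b) with nominal drop rate rho_n."""
    rng = rng or random.Random(0)   # fixed seed: illustrative, reproducible
    return [rng.random() >= drop_ratio for _ in range(n_fast)]
```

Applying the along-track mask corresponds to deleting the respective rows of the identity matrix to obtain the selection matrix of (24); the range mask realizes the element-wise reduction described above.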

## 4 Synthetic data examples

This section demonstrates the capability of CS to suppress azimuth image ambiguities during the reconstruction of synthetic aperture imagery, thus facilitating an increase in the speed of the imaging platform. The reconstructed scenery is based on synthetic data of three homogeneous point scatterers. The system parameters used to generate the synthetic echo signals for the following examples are listed in Table 1. Note that the system parameters are chosen identical to those used in Section 5 to record the real ultrasound measurements. Therefore, the sampling rate highly oversamples the lowpass echo signals due to digital demodulation. However, the echo signals are downsampled to the Nyquist rate for the outlined processing steps.

The corresponding reconstruction results are depicted in Figures 4 and 5 for the conventional time-domain method and the proposed CS method, respectively, with a dynamic range of 30 dB. In each case, the subplots (a)-(c) illustrate the reconstructed images for an increasing spatial undersampling of the synthetic aperture as outlined in the undersampling scheme of Figure 3. For the CS reconstruction in Figure 5b,c, an additional undersampling drop rate of $\varrho_n=0.7$ and $\varrho_n=0.8$ is chosen, respectively.

The occurrence of symmetric azimuth image ambiguities is obvious in Figure 4b,c due to the regular undersampling. On the contrary, the CS method is capable of suppressing the azimuth ambiguities in all three cases. Moreover, there is no notable difference in the quality of the reconstruction of the individual point scatterers, although the range undersampling ratio $\varrho_n$ has been increased. The regularization parameter, which is a trade-off measure between data fidelity and sparsity, has been chosen empirically and set to $\Lambda^{\text{cs}}=0.3\left\lVert(\boldsymbol{\Sigma}\boldsymbol{G})^{H}\boldsymbol{e}^{\text{cs}}\right\rVert_{\infty}$, where $\lVert\cdot\rVert_{\infty}$ denotes the maximum norm. This is similar to the heuristic used in[34]. Note that choosing the regularization parameter is a common problem for sparse reconstruction, e.g., in direction-of-arrival estimation[35], and is still under current research for imaging techniques.

Subsequently, we show the extension of the proposed CS imaging technique applied to a synthetic aperture system consisting of a ULA with $N_{\text{rx}}=4$ receiving elements. The spatial sampling interval $\Delta^{\text{ULA}}$ is set to the Nyquist limit $\Delta_{\max}^{\text{ULA}}$ as stated in (16). The corresponding reconstruction result for a correctly sampled synthetic aperture is depicted in Figure 6a. It shows the three point targets as in Figure 4a. In contrast to Figure 6a, azimuth image ambiguities are noticeable in Figure 6b,c. The spatial sampling interval has been set to $\Delta^{\text{ULA}}=2\Delta_{\max}^{\text{ULA}}$ and $\Delta^{\text{ULA}}=3\Delta_{\max}^{\text{ULA}}$ for Figure 6b,c, respectively. Next, the proposed CS imaging technique is applied to each receiver element *u* to obtain a single-receiver image $\widehat{\boldsymbol{f}}_u^{\text{cs}}$. The coherent combination of these individual reconstruction results then leads to the images shown in Figure 7a,b,c. While Figure 7b with $\Delta^{\text{ULA}}=2\Delta_{\max}^{\text{ULA}}$ shows a reconstruction result identical to Figure 7a, increasing the undersampling by a factor of three causes a small spreading of the point spread function of the target at along-track position *y*=-0.2 m, as depicted in Figure 7c. However, azimuth image ambiguities are also successfully suppressed for the multi-receiver configuration.

### 4.1 Simulation

In order to obtain a meaningful assessment of the maximum undersampling ratios for which the proposed CS imaging method still produces nearly identical reconstruction results compared to a correctly sampled synthetic aperture, we have conducted $N_{\text{MC}}=200$ Monte Carlo simulations for different sets of undersampling ratios. Each set consists of a factor *κ* with $\Delta^{A}=\kappa\,\Delta_{\max}^{A}$ and a factor *ζ* with $\varrho_n=1-1/\zeta$, where $\varrho_n$ is the nominal drop rate of the $M_n$ fast-time samples. The average of the actual drop rates $\widehat{\varrho}(\kappa,\zeta)$ is depicted in Figure 8a, where values with $\widehat{\varrho}(\kappa,\zeta)<0.9$ have been clipped. The actual drop rates have been determined by thresholding the magnitude of the raw echo data, converting it to binary values, and counting the non-zero values before and after undersampling.

For evaluating the image degradation due to azimuth image ambiguities caused by undersampling, the structural similarity (SSIM) measure [36] is applied. It compares an image under test, i.e., the CS reconstructed image *F̂*_{κ,ζ}^{cs} for undersampled echo data, with a high-quality full-reference image, where the latter is given by the CS image *F̂*_{κ₁,ζ₁}^{cs} obtained by Nyquist sampling with *κ*_{1}=1 and *ζ*_{1}=1. The SSIM measure *Ξ*(*κ*,*ζ*) is defined for different undersampling factors *κ* and *ζ* as the product

*Ξ*(*κ*,*ζ*) = *L*_{u}(*F̂*_{κ,ζ}^{cs}, *F̂*_{κ₁,ζ₁}^{cs}) · *C*_{o}(*F̂*_{κ,ζ}^{cs}, *F̂*_{κ₁,ζ₁}^{cs}) · *S*_{t}(*F̂*_{κ,ζ}^{cs}, *F̂*_{κ₁,ζ₁}^{cs}),

where the functions *L*_{u}(·), *C*_{o}(·), and *S*_{t}(·) describe the luminance, contrast, and structure measures between two image matrices, respectively; we refer to [36] for more detailed information. For the SSIM measure, a value of *Ξ*=1 means that both images are identical, and *Ξ*=0 means that there is no similarity. However, when assessing a sparse target scene, the homogeneous background influences the SSIM. This effect is reduced by downsizing the area under test to *x*∈[0.4,1.2] m in range and *y*∈[-0.6,0.6] m in along-track direction, where the latter boundary is determined by the occurrence of grating lobes in Figure 4c. The average simulation outcome of SSIM values *Ξ*(*κ*,*ζ*) for varying undersampling factors *κ* and *ζ* is illustrated in Figure 8b, where values with *Ξ*(*κ*,*ζ*)<0.6 are clipped. CS reconstructed images with an SSIM value of *Ξ*(*κ*,*ζ*)<0.7 are considered affected by undersampling the raw echo data. Consequently, relating the amount of discarded data in Figure 8a with the SSIM as a performance measure for successful CS reconstruction in Figure 8b, the simulation has shown a data reduction of up to 95%.
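The SSIM product of luminance, contrast, and structure terms can be sketched as below. Note that this is a simplified global-statistics variant (one set of statistics per image) rather than the locally windowed implementation of [36]; all names and constants are illustrative:

```python
import numpy as np

def ssim_global(img_test, img_ref, c1=1e-4, c2=9e-4):
    """Global-statistics SSIM: the product of luminance, contrast,
    and structure terms (a simplified, non-windowed variant)."""
    x, y = img_test.ravel().astype(float), img_ref.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()
    c3 = c2 / 2.0
    lum = (2 * mx * my + c1) / (mx**2 + my**2 + c1)   # luminance L_u
    con = (2 * sx * sy + c2) / (sx**2 + sy**2 + c2)   # contrast C_o
    struct = (sxy + c3) / (sx * sy + c3)              # structure S_t
    return lum * con * struct

rng = np.random.default_rng(1)
ref = rng.random((32, 32))                   # stand-in reference image
degraded = ref + 0.3 * rng.random((32, 32))  # stand-in ambiguous image
score_same = ssim_global(ref, ref)           # identical images -> 1
score_degraded = ssim_global(degraded, ref)  # degraded image -> < 1
```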

## 5 Ultrasonic synthetic aperture system

This section briefly describes the ultrasonic laboratory system used to record the real acoustical data before discussing the CS reconstruction results. The laboratory system is based on a single-transmitter and multi-receiver configuration, which is operated as a stripmap synthetic aperture system using ultrasound; it is similar to the system in [37].

However, due to the non-calibrated array, the system is only employed as a bi-static transmitter/receiver system. Photographs of the laboratory setup are shown in Figure 9. The transmitter is the rightmost element of the imaging platform in Figure 9a and sends LFM pulses with the signal parameters specified in Table 1. To the left of the transmitter, three equally spaced receivers are mounted on the moving platform.

Note that the mono-static model as outlined in the previous sections has to be replaced by a transmitter-receiver pair, i.e., (17) is used instead of (2) with *N*_{rx}=1. The platform is moved along a metal rail by a motor at an approximately constant speed *v*, as shown in Figure 9b. The received signals are recorded using a National Instruments (NI) data acquisition card and are processed on a PC using MATLAB. The same system parameters as in Table 1 are used for the laboratory system. Furthermore, a speed of *v*=0.05 m/s and a pulse repetition time of *T*_{PRI}=0.12 s are set to meet the spatial sampling requirements discussed in Section 2.2. A high temporal oversampling rate is used to allow discrete-time demodulation of the received echo signals; the discrete-time signals are then downsampled to the temporal Nyquist rate of the transmitted pulse. The imaging scene (Figure 9c) consists of three ping-pong balls placed similarly to the point targets of the synthetic data examples.
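The demodulate-then-downsample receive chain can be sketched as follows. This is a toy example under assumed parameters (the carrier, sampling rate, decimation factor, and moving-average low-pass are illustrative and not the laboratory system's actual values):

```python
import numpy as np

def demodulate_and_decimate(echo, fc, fs, dec):
    """Mix a real passband echo down to baseband, low-pass it with a
    simple moving average, and decimate so the discrete-time rate is
    matched to the pulse bandwidth (temporal Nyquist rate)."""
    n = np.arange(echo.size)
    baseband = echo * np.exp(-2j * np.pi * fc / fs * n)  # complex mix-down
    kernel = np.ones(dec) / dec                          # crude low-pass
    filtered = np.convolve(baseband, kernel, mode='same')
    return filtered[::dec]                               # downsample by dec

# Toy continuous-wave echo at the carrier: after demodulation it should
# reduce to an (almost) constant complex amplitude of 0.5.
fs, fc, dec = 400e3, 40e3, 4
t = np.arange(4000) / fs
echo = np.cos(2 * np.pi * fc * t)
bb = demodulate_and_decimate(echo, fc, fs, dec)
```

A practical system would use a proper anti-alias filter before decimation; the moving average here only illustrates the ordering of the steps.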

### 5.1 CS image reconstruction results

Subsequently, we apply both imaging methods to the experimental acoustical data. In the case of aperture undersampling, the platform speed is increased to *v*=0.1 m/s and *v*=0.15 m/s, which is equivalent to undersampling factors of 2 and 3, respectively. The regular along-track undersampling scheme depicted in Figure 3a is used, and the CS regularization parameter is set as for the synthetic data examples. The corresponding reconstruction results are depicted in Figures 10 and 11 for the conventional and the CS-based imaging method, respectively. The dynamic range of all images is 30 dB. In Figure 10a, a clean image reconstruction of the three ping-pong balls can be seen. Note that the along-track resolution is better than the range resolution due to the relatively small bandwidth of the ultrasound sensors. Increasing the platform speed, however, again yields azimuth ambiguities for the conventional imaging method, which are of varying strength on both sides of the true target location. This is related to a non-straight alignment of the ultrasound sensors used in the laboratory system. Moreover, the non-calibrated sensors currently hinder the use of the entire array as a single-transmitter and multi-receiver system.

Considering the reconstruction results using the proposed CS method, it becomes apparent that the azimuth image ambiguities are successfully suppressed for the real data measurements and that the images are of almost identical quality (compare Figure 11a with Figure 11b). Hence, the platform speed can be doubled without any loss in image quality. For a higher speed (*v*=0.15 m/s), the CS reconstruction result starts to suffer from ambiguities. Also note that we have chosen a target scenario with closely spaced targets to keep the target strength variability small; otherwise, weaker targets might be suppressed by enforcing the sparsity of the scene. Moreover, the achievable undersampling ratio is significantly smaller than for the synthetic data results, but still twice as large as for proper Nyquist sampling.

## 6 Conclusions

In this paper, we have proposed a CS imaging technique for synthetic aperture systems operating in stripmap mode, using either a transceiver or a single-transmitter and multi-receiver system to synthesize the aperture. The technique is based on the conventional time-domain correlation method. We have demonstrated its capability to suppress azimuth image ambiguities for synthetic data as well as for real acoustical measurements. Especially for synthetic data, a large data reduction has been achieved, owing to the perfect match between the data model and the CS reconstruction model. In contrast, significantly less undersampling has been feasible for the laboratory system, most likely due to mismatches between our target scene and the assumption of point targets. Nevertheless, we have been able to double the speed of the imaging platform while maintaining the image quality.

Currently, we still face open challenges that have to be solved before CS can be employed in a real, non-laboratory system. In particular, this involves handling target scenes consisting of heterogeneous target reflectivities as well as extended targets rather than point targets. While heterogeneous target scenes may be addressed by improved echo data modeling, the challenge of extended targets may be handled by choosing a different sparsity transform. Additionally, the echo data model should be adapted to mitigate the stop-and-hop assumption. Moreover, an automatic procedure for selecting the regularization parameter is necessary. Finally, a jittered pulsing scheme could be applied to randomize the undersampling in order to weaken the amplitude of azimuth image ambiguities.
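The jittered pulsing idea can be sketched as follows. This is a hypothetical scheme (function name, jitter distribution, and parameters are our own illustration, not a design from the paper): perturbing each pulse instant randomizes the along-track sample positions, so that undersampling ambiguities tend to smear into a noise-like floor instead of forming coherent grating lobes:

```python
import numpy as np

def jittered_pulse_positions(n_pulses, v, t_pri, jitter_frac=0.25, seed=0):
    """Hypothetical jittered pulsing: perturb each pulse instant by a
    uniform offset of up to +/- jitter_frac * t_pri and return the
    resulting along-track positions of the pulses."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_pulses) * t_pri                       # nominal instants
    t = t + rng.uniform(-jitter_frac, jitter_frac, n_pulses) * t_pri
    return v * t                                          # positions in m

# 64 pulses at doubled platform speed; jitter_frac < 0.5 keeps the
# pulse order (and hence the positions) strictly increasing.
pos = jittered_pulse_positions(64, v=0.1, t_pri=0.12)
```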

## References

1. Soumekh M: *Synthetic Aperture Radar Signal Processing: with MATLAB Algorithms*. Wiley & Sons, New York; 1999.
2. Jakowatz JC, Wahl DE, Eichel PH, Ghiglia DC, Thompson PA: *Spotlight Mode Synthetic Aperture Radar: a Signal Processing Approach*. Kluwer, Boston; 1996.
3. Soumekh M: Reconnaissance with slant plane circular SAR imaging. *IEEE Trans. Image Process.* 1996, 5(8):1252-1265. doi:10.1109/83.506760
4. Rott H, Mätzler C: Possibilities and limits of synthetic aperture radar for snow and glacier surveying. *Ann. Glaciology* 1987, 9(1):195-199.
5. Shi J, Dozier J, Rott H: Snow mapping in alpine regions with synthetic aperture radar. *IEEE Trans. Geosci. Remote Sensing* 1994, 32(1):152-158. doi:10.1109/36.285197
6. Boissin MB, Gleyzes A, Tinel C: The Pléiades system and data distribution. *Proceedings IEEE International Geoscience and Remote Sensing Symposium (IGARSS)*, Munich, 22–27 July 2012, 7098-7101.
7. Gough PT, Hawkins D: A short history of synthetic aperture sonar. *Proceedings IEEE International Geoscience and Remote Sensing Symposium (IGARSS)*, vol. 2, Seattle, 6–10 July 1998, 618-620.
8. Hayes MP, Gough PT: Synthetic aperture sonar: a review of current status. *IEEE J. Oceanic Eng.* 2009, 34(3):207-224.
9. Chatillon J, Bouhier M-E, Zakharia ME: Synthetic aperture sonar for seabed imaging: relative merits of narrow-band and wide-band approaches. *IEEE J. Oceanic Eng.* 1992, 17(1):95-105. doi:10.1109/48.126958
10. Groen J, Coiras E, Del Rio Vera J, Evans B: Model-based sea mine classification with synthetic aperture sonar. *IET Radar Sonar Navigation* 2010, 4(1):62-73. doi:10.1049/iet-rsn.2009.0071
11. Fandos R, Zoubir AM, Siantidis K: Unified design of a feature-based ADAC system for mine hunting using synthetic aperture sonar. *IEEE Trans. Geosci. Remote Sensing* 2014, 52(5):2413-2426.
12. Richards MA: *Fundamentals of Radar Signal Processing*. McGraw-Hill, New York; 2005.
13. Bhattacharya S, Blumensath T, Mulgrew B, Davies M: Synthetic aperture radar raw data encoding using compressed sensing. *Proceedings IEEE Radar Conf. (RADAR)*, Rome, 26–30 May 2008, 1-5.
14. Donoho DL: Compressed sensing. *IEEE Trans. Inform. Theory* 2006, 52(4):1289-1306.
15. Candès EJ, Romberg J, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. *IEEE Trans. Inform. Theory* 2006, 52(2):489-509.
16. Duarte MF, Davenport MA, Takhar D, Laska JN, Sun T, Kelly KF, Baraniuk RG: Single-pixel imaging via compressive sampling. *IEEE Signal Process. Mag.* 2008, 25(2):83-91.
17. Lustig M, Donoho DL, Santos JM, Pauly JM: Compressed sensing MRI. *IEEE Signal Process. Mag.* 2008, 25(2):72-82.
18. Baraniuk R, Steeghs P: Compressive radar imaging. *Proceedings IEEE Radar Conf. (RADAR)*, Boston, 17–20 Apr 2007, 128-133.
19. Herman MA, Strohmer T: High-resolution radar via compressed sensing. *IEEE Trans. Signal Process.* 2009, 57(6):2275-2284.
20. Stojanovic I, Karl W, Cetin M: Compressed sensing of mono-static and multi-static SAR. *Proceedings SPIE Defense and Security Symposium, Algorithms for Synthetic Aperture Radar Imagery XVI*, Orlando, 13–17 Apr 2009.
21. Tello Alonso M, Lopez-Dekker P, Mallorqui JJ: A novel strategy for radar imaging based on compressive sensing. *IEEE Trans. Geosci. Remote Sensing* 2010, 48(12):4285-4295.
22. Patel VM, Easley GR, Healy D, Chellappa R: Compressed synthetic aperture radar. *IEEE J. Selected Topics Signal Process.* 2010, 4(2):244-254.
23. Ender JHG: On compressive sensing applied to radar. *Signal Process.* 2010, 90(5):1402-1414. doi:10.1016/j.sigpro.2009.11.009
24. Yang J, Thompson J, Huang X, Jin T, Zhou Z: Random-frequency SAR imaging based on compressed sensing. *IEEE Trans. Geosci. Remote Sensing* 2013, 51(2):983-994.
25. Carrara WG, Goodman RS, Majewski RM: *Spotlight Synthetic Aperture Radar: Signal Processing Algorithms*. Artech House, Boston; 1995.
26. Gunther J, West R, Crookston N, Moon T: Maximum likelihood synthetic aperture radar image formation for highly nonlinear flight tracks. *Proceedings IEEE Digital Signal Processing Workshop and IEEE Signal Processing Education Workshop (DSP/SPE)*, Sedona, 4–7 Jan 2011, 449-454.
27. Van Trees HL: *Detection, Estimation, and Modulation Theory, Optimum Array Processing*. Wiley & Sons, New York; 2002.
28. Candès EJ, Wakin MB: An introduction to compressive sampling. *IEEE Signal Process. Mag.* 2008, 25(2):21-30.
29. Chen SS, Donoho DL, Saunders MA: Atomic decomposition by basis pursuit. *SIAM Rev.* 2001, 43(1):129-159. doi:10.1137/S003614450037906X
30. Wright SJ, Nowak RD, Figueiredo MAT: Sparse reconstruction by separable approximation. *IEEE Trans. Signal Process.* 2009, 57(7):2479-2493.
31. Yoon Y-S, Amin MG: Compressed sensing technique for high-resolution radar imaging. *Proceedings SPIE Signal Processing, Sensor Fusion, and Target Recognition XVII*, Orlando, 16 Mar 2008.
32. Baraniuk RG: Compressive sensing [lecture notes]. *IEEE Signal Process. Mag.* 2007, 24(4):118-121.
33. Baraniuk R, Davenport M, DeVore R, Wakin M: A simple proof of the restricted isometry property for random matrices. *Constr. Approximation* 2008, 28(3):253-263. doi:10.1007/s00365-007-9003-x
34. Figueiredo MA, Nowak RD, Wright SJ: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. *IEEE J. Selected Topics Signal Process.* 2007, 1(4):586-597.
35. Panahi A, Viberg M: Maximum a posteriori based regularization parameter selection. *Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, Prague, 22–27 May 2011, 2452-2455.
36. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP: Image quality assessment: from error visibility to structural similarity. *IEEE Trans. Image Process.* 2004, 13(4):600-612. doi:10.1109/TIP.2003.819861
37. Vincent F, Mouton B, Chaumette E, Nouals C, Besson O: Synthetic aperture radar demonstration kit for signal processing education. *Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, Honolulu, 15–20 Apr 2007, 709-712.

## Acknowledgements

This research work is part of a collaboration with ATLAS ELEKTRONIK GmbH in Bremen, Germany. The authors would like to thank Dr. J. Groen from ATLAS ELEKTRONIK GmbH for his valuable feedback on the paper and M. Leigsnering for all the fruitful discussions.

## Author information

## Additional information

### Competing interests

The authors declare that they have no competing interests.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Leier, S., Zoubir, A.M. Aperture undersampling using compressive sensing for synthetic aperture stripmap imaging.
*EURASIP J. Adv. Signal Process.* **2014**, 156 (2014). https://doi.org/10.1186/1687-6180-2014-156

### Keywords

- Compressive sensing
- Synthetic aperture radar
- Synthetic aperture sonar
- Stripmap mode
- Aperture undersampling