 Research
 Open Access
Resolution-enhanced radar/SAR imaging: an experiment design framework combined with neural network-adapted variational analysis regularization
EURASIP Journal on Advances in Signal Processing volume 2011, Article number: 85 (2011)
Abstract
The convex optimization-based descriptive experiment design regularization (DEDR) method is aggregated with the neural network (NN)-adapted variational analysis (VA) approach for adaptive high-resolution sensing into a unified DEDR-VA-NN framework that puts into a single optimization frame high-resolution radar/SAR image formation in uncertain operational scenarios, adaptive despeckling, and dynamic scene image enhancement for a variety of sensing modes. The DEDR-VA-NN method outperforms the existing adaptive radar imaging techniques both in resolution and in convergence rate. Simulation examples are included to illustrate the efficiency of the proposed DEDR-VA-related imaging techniques.
1. Introduction
In this article, we consider the problem of enhanced remote sensing (RS) imaging stated and treated as an ill-posed nonlinear inverse problem with model uncertainties. The problem at hand is to perform high-resolution reconstruction of the power spatial spectrum pattern (SSP) of the wavefield scattered from the extended remotely sensed scene via space-time adaptive processing of finite recordings of the imaging radar/SAR data distorted in a stochastic uncertain measurement channel. The SSP is defined as the spatial distribution of the power (i.e., the second-order statistics) of the random wavefield backscattered from the remotely sensed scene observed through the integral transform operator [1, 2]. Such an operator is explicitly specified by the employed radar/SAR signal modulation and is traditionally referred to as the signal formation operator (SFO) [2, 3]. The operational uncertainties are attributed to inevitable random signal perturbations in an inhomogeneous propagation medium with unknown statistics, possible imperfect radar calibration, and uncontrolled sensor displacements or carrier trajectory deviations in the SAR case. Classical imaging with an array radar or SAR implies application of the method called "matched spatial filtering (MSF)" to process the recorded data signals [2, 3]. A number of approaches have been proposed to design constrained regularization techniques for improving the resolution in the SSP obtained by means different from the MSF, e.g., [1–9], but without aggregating the minimum risk (MR) descriptive estimation strategies with convex projection regularization. In [7], an approach was proposed to treat the uncertain RS imaging problems that unifies the MR spectral estimation strategy with the worst-case statistical performance (WCSP) optimization-based convex regularization, resulting in the descriptive experiment design regularization (DEDR) method.
Next, the variational analysis (VA) framework was combined with the DEDR in [2, 9] to satisfy the desirable descriptive properties of the reconstructed RS images, namely: (i) convex optimization-based maximization of spatial resolution balanced with noise suppression, (ii) consistency, (iii) positivity, and (iv) continuity and agreement with the data. In this study, we extend the developments of the DEDR and VA techniques originated in [2, 7, 9] by aggregating the DEDR and VA paradigms and then putting the RS image enhancement/reconstruction tasks into a unified neural network (NN)-adapted computational frame addressed as the unified DEDR-VA-NN method. We have designed a family of such significantly speeded-up DEDR-VA-related algorithms and performed simulations to illustrate the effectiveness of the proposed high-resolution DEDR-VA-NN-based image enhancement/fusion approach.
The rest of the article is organized as follows. In Section 2, we provide the formalism of the radar/SAR inverse imaging problem at hand with the necessary experiment design considerations. In Section 3, we adapt the celebrated maximum likelihood (ML) inspired amplitude-phase estimation (APES) technique for array sensor/SAR imaging. The unified DEDR-VA framework for high-resolution radar/SAR imaging in uncertain scenarios is conceptualized in Section 4 and adapted to the NN-oriented sensor systems/methods fusion mode in Section 5; this is followed by illustrative simulations in Section 6 and the conclusion in Section 7.
2. Problem formalism
The general mathematical formalism of the problem at hand is similar in notation and structural framework to that described in [2, 7, 9], and some crucial elements are repeated for the convenience of the reader. Following [1, 2, 9], we define the model of the observation RS wavefield u by specifying the stochastic equation of observation (EO) in operator form, $u = \mathcal{S}e + n$, where $e = e(\mathbf{r})$ represents the complex scattering function over the probing surface $R \ni \mathbf{r}$, $n$ is the additive noise, $u = u(\mathbf{p})$ is the observation field, and $\mathbf{p} = (t, \boldsymbol{\rho})$ defines the time ($t$)-space ($\boldsymbol{\rho}$) points in the temporal-spatial observation domain $\mathbf{p} \in P = T \times \mathrm{P}$ ($t \in T$, $\boldsymbol{\rho} \in \mathrm{P}$); in the SAR case, $\boldsymbol{\rho} = \boldsymbol{\rho}(t)$ specifies the carrier trajectory [7]. The kernel-type integral SFO $\mathcal{S}: E(R) \to U(P)$ defines a mapping of the source signal space $E(R)$ onto the observation signal space $U(P)$. The metrics structures in the corresponding Hilbert signal spaces $U(P)$, $E(R)$ are imposed by the scalar products $[u, u']_U = \int_P u(\mathbf{p})\,u'^{*}(\mathbf{p})\,d\mathbf{p}$ and $[e, e']_E = \int_R e(\mathbf{r})\,e'^{*}(\mathbf{r})\,d\mathbf{r}$, respectively [1]. The functional kernel $S(\mathbf{p}, \mathbf{r})$ of the SFO $\mathcal{S}$ is referred to as the unit signal [2] determined by the time-space modulation employed in a particular RS system. In the case of uncertain operational scenarios, the SFO is randomly perturbed [7], i.e., $\tilde{\mathcal{S}} = \mathcal{S} + \Delta\mathcal{S}$, where $\Delta\mathcal{S}$ pertains to the random uncontrolled perturbations, usually with unknown statistics. The fields $e$, $n$, $u$ are assumed to be zero-mean complex-valued Gaussian random fields [1, 7].
Next, since in all RS applications the regions of high correlation of $e(\mathbf{r})$ are always small in comparison with the resolution element on the probing scene [1–3], the signals $e(\mathbf{r})$ scattered from different directions $\mathbf{r}, \mathbf{r}' \in R$ of the remotely sensed scene $R$ are assumed to be uncorrelated, with the correlation function $R_e(\mathbf{r}, \mathbf{r}') = \langle e(\mathbf{r})e^{*}(\mathbf{r}') \rangle = b(\mathbf{r})\,\delta(\mathbf{r} - \mathbf{r}')$; $\mathbf{r}, \mathbf{r}' \in R$, where $b(\mathbf{r}) = \langle e(\mathbf{r})e^{*}(\mathbf{r}) \rangle = \langle |e(\mathbf{r})|^{2} \rangle$; $\mathbf{r} \in R$ represents the power SSP of the scattered field [1]. The problem of high-resolution RS imaging is to develop a framework and related method(s) that perform optimal estimation of the SSP (referred to as a scene image) from the available radar/SAR data measurements. It is noted that in this study we develop and follow the unified DEDR-VA-NN framework.
The RS radar/SAR system-oriented finite-dimensional (i.e., discrete-form) approximation of the EO is given by [7]
in which the disturbed $M \times K$ SFO matrix $\tilde{\mathbf{S}} = \mathbf{S} + \boldsymbol{\Delta}$ is the discrete-form approximation of the integral SFO for the uncertain operational scenario, and $\mathbf{e}$, $\mathbf{n}$, $\mathbf{u}$ represent zero-mean vectors composed of the sample (decomposition) coefficients $\{e_k, n_m, u_m;\ k = 1,\ldots,K;\ m = 1,\ldots,M\}$, respectively [1–3]. These vectors are characterized by the correlation matrices $\mathbf{R}_\mathbf{e} = \mathbf{D} = \mathbf{D}(\mathbf{b}) = \mathrm{diag}(\mathbf{b})$ (a diagonal matrix with vector $\mathbf{b}$ on its principal diagonal), $\mathbf{R}_\mathbf{n}$, and $\mathbf{R}_\mathbf{u} = \langle \tilde{\mathbf{S}}\mathbf{R}_\mathbf{e}\tilde{\mathbf{S}}^{+} \rangle_{p(\boldsymbol{\Delta})} + \mathbf{R}_\mathbf{n}$, respectively, where $\langle\cdot\rangle_{p(\boldsymbol{\Delta})}$ defines the averaging performed over the randomness of $\boldsymbol{\Delta}$ characterized by the usually unknown probability density function $p(\boldsymbol{\Delta})$, and superscript "$+$" stands for Hermitian conjugate. The vector $\mathbf{b}$ composed of the elements $\{b_k = \mathcal{B}\{e_k\} = \langle e_k e_k^{*} \rangle = \langle |e_k|^{2} \rangle;\ k = 1,\ldots,K\}$ is referred to as the $K$-D vector-form approximation of the SSP, where $\mathcal{B}$ represents the second-order statistical ensemble averaging operator [1, 2]. The SSP vector $\mathbf{b}$ is associated with the lexicographically ordered pixel-framed image [1, 7]. The corresponding conventional $K_y \times K_x$ rectangular frame-ordered scene image $\mathbf{B} = \{b(k_x, k_y);\ k_x = 1,\ldots,K_x;\ k_y = 1,\ldots,K_y\}$ relates to its lexicographically ordered vector-form representation $\mathbf{b} = \mathcal{L}\{\mathbf{B}\} = \{b(k);\ k = 1,\ldots,K = K_y \times K_x\}$ via the standard row-by-row concatenation (i.e., lexicographical reordering) procedure, $\mathbf{B} = \mathcal{L}^{-1}\{\mathbf{b}\}$ [1]. It is noted that in the simple case of a certain operational scenario [2, 3], the discrete-form (i.e., matrix-form) SFO $\mathbf{S}$ is assumed to be deterministic, i.e., the random perturbation term in (3) is irrelevant, $\boldsymbol{\Delta} = \mathbf{0}$.
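The lexicographical reordering $\mathbf{b} = \mathcal{L}\{\mathbf{B}\}$ and its inverse described above can be sketched minimally as follows; the row-by-row (row-major) convention follows the text, while the function names are ours:

```python
import numpy as np

def lex_order(B):
    """Lexicographical reordering L{B}: flatten the K_y x K_x frame B into a
    K-D vector b via row-by-row concatenation (row-major order)."""
    return B.reshape(-1)

def lex_inverse(b, K_y, K_x):
    """Inverse reordering L^{-1}{b}: map the K-D vector back to the frame."""
    return b.reshape(K_y, K_x)

# Round-trip check on a small 2 x 3 frame
B = np.arange(6, dtype=float).reshape(2, 3)
b = lex_order(B)
assert np.array_equal(lex_inverse(b, 2, 3), B)
```

The round trip $\mathbf{B} = \mathcal{L}^{-1}\{\mathcal{L}\{\mathbf{B}\}\}$ is exact, so the pixel-framed image and its vector form carry identical information.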
The enhanced RS imaging problem is stated generally as follows: to map the scene pixel-framed image $\widehat{\mathbf{B}}$ via lexicographical reordering $\widehat{\mathbf{B}} = \mathcal{L}^{-1}\{\widehat{\mathbf{b}}\}$ of the SSP vector estimate $\widehat{\mathbf{b}}$ reconstructed from whatever available measurements of independent realizations of the recorded data (1). The reconstructed SSP vector $\widehat{\mathbf{b}}$ is an estimate of the second-order statistics of the scattering vector $\mathbf{e}$ observed through the perturbed SFO and contaminated with noise; hence, the imaging problem at hand must be qualified and treated as a statistical nonlinear uncertain inverse problem [1, 7, 9]. Enhanced high-resolution imaging implies solution of such an inverse problem in some optimal way. As noted above, in this article we intend to develop and follow the unified DEDR-VA framework, next adapted to an NN-based computational implementation.
3. Adaptation of APES technique for array sensor/SAR imaging
In this section, we perform an extension of the recently proposed high-resolution ML-inspired APES, i.e., the ML-APES method [6], for solving the SSP reconstruction inverse problem via its modification adapted to radar imaging of distributed RS scenes. In the considered low-snapshot sample case (e.g., one recorded SAR trajectory data signal in a single-look SAR sensing mode [7]), the sample data covariance matrix $\mathbf{Y} = (1/J)\sum_{j=1}^{J}\mathbf{u}_{(j)}\mathbf{u}_{(j)}^{+}$ is rank deficient (rank 1 in the single radar snapshot and single-look SAR sensing modes, $J = 1$). The convex optimization problem of minimization of the negative likelihood function $\ln\det\{\mathbf{R}_\mathbf{u}\} + \mathrm{tr}\{\mathbf{R}_\mathbf{u}^{-1}\mathbf{Y}\}$ with respect to the SSP vector $\mathbf{b}$ subject to the convexity-guaranteed nonnegativity constraint results in the celebrated APES estimator [6]
In the APES terminology (as well as in the minimum variance distortionless response (MVDR) and other ML-related approaches [1, 4, 6]), $\mathbf{s}_k$ represents the so-called steering vector in the $k$th look direction, which in our notational conventions is essentially the $k$th column vector of the regular SFO matrix $\mathbf{S}$. The numerical implementation of the APES algorithm (2) assumes application of an iterative fixed-point technique, building the model-based estimate $\widehat{\mathbf{R}}_\mathbf{u} = \mathbf{R}_\mathbf{u}(\widehat{\mathbf{b}}_{[i]})$ of the unknown covariance $\mathbf{R}_\mathbf{u}$ from the latest ($i$th) iterative SSP estimate $\widehat{\mathbf{b}}_{[i]}$ with the zero-step initialization $\widehat{\mathbf{b}}_{[0]} = \widehat{\mathbf{b}}_{\mathrm{MSF}}$ computed applying the conventional MSF estimator [2].
In vector form, the algorithm (2) can be expressed as
where $\odot$ defines the Schur-Hadamard [1] (element-wise) vector/matrix product and $\mathbf{F}_{\mathrm{APES}} = \mathbf{F}^{(1)} = \widehat{\mathbf{D}}\mathbf{S}^{+}\mathbf{R}_\mathbf{u}^{-1}(\widehat{\mathbf{b}})$ represents the APES matrix-form solution operator (SO), in which
where the operator $\{\cdot\}_{\mathrm{diag}}$ returns the vector of the principal diagonal of the embraced matrix. The algorithmic structure of the vector-form nonlinear (i.e., solution-dependent) APES estimator (3) guarantees positivity but does not guarantee consistency. In real-world uncertain (rank-deficient) RS operational scenarios, the inconsistency inevitably results in speckle-corrupted images unacceptable for further processing and interpretation. To overcome these limitations, in the next section we extend the unified DEDR-VA framework of [2, 9] to the uncertain operational scenarios considered here to guarantee consistency and to significantly speed up convergence.
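The fixed-point ML-APES-type iteration described above can be sketched as follows. The per-pixel update used here is one common ML-inspired form and an assumption on our part (the article's exact estimator (2) is not reproduced in this extraction); all function names are ours:

```python
import numpy as np

def apes_fixed_point(S, Y, R_n, n_iter=10):
    """Sketch of an ML/APES-type fixed-point SSP iteration.
    S   : M x K matrix-form SFO (columns s_k are the steering vectors),
    Y   : M x M sample data covariance matrix,
    R_n : M x M noise covariance.
    Builds R_u(b) = S diag(b) S^+ + R_n from the latest SSP estimate and
    refines b per pixel; the MSF estimate is the zero-step initialization."""
    M, K = S.shape
    # zero-step MSF initialization: b_k = s_k^+ Y s_k
    b = np.real(np.einsum('mk,mn,nk->k', S.conj(), Y, S))
    b = np.maximum(b, 1e-12)
    for _ in range(n_iter):
        R_u = (S * b) @ S.conj().T + R_n          # model-based covariance R_u(b)
        R_inv = np.linalg.inv(R_u)
        # assumed ML-inspired update: s_k^+ R^-1 Y R^-1 s_k / (s_k^+ R^-1 s_k)^2
        num = np.real(np.einsum('mk,mn,np,pq,qk->k',
                                S.conj(), R_inv, Y, R_inv, S))
        den = np.real(np.einsum('mk,mn,nk->k', S.conj(), R_inv, S)) ** 2
        b = np.maximum(num / den, 0.0)            # positivity of the power SSP
    return b
```

Note that the update keeps the estimate nonnegative at every step, matching the positivity property claimed for (3), but nothing in the iteration enforces consistency, which motivates the POCS regularization of the next section.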
4. Unified DEDR-VA framework for high-resolution radar/SAR imaging in uncertain scenarios
4.1. DEDR-VA approach
The DEDR-VA-optimal SSP estimate $\widehat{\mathbf{b}}$ is to be found as the regularized solution to the nonlinear equation [7]
where $\mathbf{F}_{\mathrm{DEDR}}$ represents the adaptive (i.e., dependent on the SSP estimate $\widehat{\mathbf{b}}$) matrix-form DEDR SO and $\mathcal{P}$ is the VA-inspired regularizing projector onto convex solution sets (POCS). Two fundamental issues constitute the benchmarks of the modified DEDR-VA estimator (5) that distinguish it from the previously developed kernel SSP reconstruction algorithm [2], the DEDR method [7, 9], and the APES estimator (3) detailed above. First, we reformulate the strategy for determining the DEDR SO $\mathbf{F}_{\mathrm{DEDR}}$ in (5) in the MR-inspired WCSP convex optimization setting [1, 7], i.e., as the MR-WCSP constrained DEDR convex optimization problem (specified by [7, Equations 8 and 11]), to provide robustness of the SSP vector estimates against possible model uncertainties. The second issue relates to the VA-inspired problem-oriented co-design of the POCS regularization operator $\mathcal{P}$ in (5) aimed at satisfying intrinsic and desirable properties of the solution such as positivity, consistency, model agreement (e.g., adaptive despeckling with edge preservation), and rapid convergence [1, 8]. The solution to the MR-WCSP conditioned optimization problem [7, Equation 43] yields the DEDR-optimal SO
where $\mathbf{K} = (\mathbf{S}^{+}\mathbf{R}_{\Sigma}^{-1}\mathbf{S} + \alpha\mathbf{A}^{-1})^{-1}$ defines the so-called reconstruction operator (with the regularization parameter $\alpha$ and stabilizer $\mathbf{A}^{-1}$), and $\mathbf{R}_{\Sigma}^{-1}$ is the inverse of the diagonally loaded noise correlation matrix [7] $\mathbf{R}_{\Sigma} = N_{\Sigma}\mathbf{I}$ with the composite noise power $N_{\Sigma} = N_0 + \beta$: the additive observation noise power $N_0$ augmented by the loading factor $\beta = \gamma\eta/\alpha \ge 0$ adjusted to the regularization parameter $\alpha$, the Loewner ordering factor $\gamma > 0$ of the SFO $\mathbf{S}$ [1], and the uncertainty bound $\eta$ imposed by the MR-WCSP conditional maximization (see [7, 8] for details).
It is noted that other feasible adjustments of the processing-level degrees of freedom $\{\alpha, N_{\Sigma}, \mathbf{A}\}$ summarized in [7, 8] specify the family of relevant POCS-regularized DEDR-related (DEDR-POCS) techniques that we unify here in the following general form
where $\mathbf{Q} = \mathbf{S}^{+}\mathbf{Y}\mathbf{S}$ defines the MSF measurement statistics matrix, independent of the solution $\widehat{\mathbf{b}}$, and different (say, $P$) reconstruction operators $\{\mathbf{K}^{(p)};\ p = 1,\ldots,P\}$ specified for $P$ different feasible assignments of the processing degrees of freedom $\{\alpha, N_{\Sigma}, \mathbf{A}\}$ define the corresponding DEDR-POCS estimators (7) with the relevant SOs $\{\mathbf{F}^{(p)} = \mathbf{K}^{(p)}\mathbf{S}^{+};\ p = 1,\ldots,P\}$.
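The construction of the reconstruction operator $\mathbf{K}$ and the related SO $\mathbf{F} = \mathbf{K}\mathbf{S}^{+}$ can be sketched directly from the definitions above; the identity stabilizer $\mathbf{A} = \mathbf{I}$ used as a default is our assumption, as are the function names:

```python
import numpy as np

def dedr_solution_operator(S, N0, alpha, gamma, eta, A=None):
    """Sketch of the DEDR reconstruction operator
        K = (S^+ R_Sigma^{-1} S + alpha A^{-1})^{-1},   R_Sigma = N_Sigma I,
    with composite noise power N_Sigma = N0 + beta, beta = gamma*eta/alpha,
    and the related solution operator F = K S^+ (cf. (6), (7)).
    A = I is an assumed default stabilizer."""
    M, K_dim = S.shape
    if A is None:
        A = np.eye(K_dim)
    beta = gamma * eta / alpha          # diagonal loading factor
    N_sigma = N0 + beta                 # composite noise power
    R_sigma_inv = np.eye(M) / N_sigma   # (N_Sigma I)^{-1}
    K_op = np.linalg.inv(S.conj().T @ R_sigma_inv @ S
                         + alpha * np.linalg.inv(A))
    F = K_op @ S.conj().T               # DEDR solution operator F = K S^+
    return K_op, F
```

Different assignments of $\{\alpha, N_{\Sigma}, \mathbf{A}\}$ passed to this routine generate the corresponding members $\mathbf{F}^{(p)}$ of the DEDR-POCS family.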
4.2. Convergence guarantees
Following the VA regularization formalism [1, 7, 9], the POCS regularization operator $\mathcal{P}$ in (7) can be constructed as a composition of projectors $\{\mathcal{P}_n\}$ onto convex sets $\mathbb{C}_n$; $n = 1,\ldots,N$ with nonempty intersection, in which case (7) is guaranteed to converge to a point in the intersection of the sets $\{\mathbb{C}_n\}$ regardless of the initialization $\widehat{\mathbf{b}}_{[0]}$; this is a direct consequence of the fundamental theorem of POCS (see [7, Part I, Appendix B]). Also, any operator that acts in the same convex set, e.g., a kernel-type windowing operator (WO), can be incorporated into such a composite regularization operator $\mathcal{P}$ to guarantee consistency [1]. The RS system-oriented experiment design task is to make use of the POCS regularization paradigm (5) employing practical imaging radar/SAR-motivated considerations, which we perform in the next section.
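The composition of projectors just described can be sketched as alternating projections onto convex sets. The positivity projector follows the text; the box constraint is our illustrative stand-in for a second convex set (e.g., a windowing constraint), and the function names are ours:

```python
import numpy as np

def project_positive(b):
    """POCS projector P_+ onto the nonnegative convex set: clips negatives."""
    return np.maximum(b, 0.0)

def project_box(b, b_max):
    """Projector onto the convex box {0 <= b <= b_max} -- an illustrative
    second convex set; any convex set with nonempty intersection works."""
    return np.clip(b, 0.0, b_max)

def pocs_iterate(b0, projectors, n_iter=50):
    """Alternating projections: by the fundamental POCS theorem, the sequence
    converges to a point in the intersection of the convex sets, regardless
    of the initialization b0."""
    b = b0.copy()
    for _ in range(n_iter):
        for P in projectors:
            b = P(b)
    return b
```

In the full DEDR-VA scheme (5), such a composite $\mathcal{P}$ wraps the data-dependent SO update rather than acting alone, but the convergence guarantee quoted above rests on exactly this alternating-projection mechanism.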
4.3. VA-motivated POCS regularization
To approach the superresolution performances in the resulting SSP estimates (5), (7), we propose to follow the VA-inspired approach [2, 7, 9] to specify the composite POCS regularizing operator
The $\mathcal{P}_2$ in (8) represents the convergence-guaranteed projector onto the nonnegative convex solution set (the POCS operator) specified as the positivity operator, $\mathcal{P}_2 = \mathcal{P}_{+}$, which has the effect of clipping off all negative values [1], and $\mathcal{P}_1$ is an anisotropic WO that we construct here following the VA formalism [2, 9] as a metrics-inducing operator
that specifies the metrics structure in the $K$-D solution/image space $B_{(K)} \ni \mathbf{b}$ defined by the squared norm [2, 9]
The second sum on the right-hand side of (10) is recognized to be a 4-nearest-neighbors difference-form approximation of the Laplacian operator $\nabla_{\mathbf{r}}^{2}$ over the spatial coordinate $\mathbf{r}$, while $m^{(0)}$ and $m^{(1)}$ represent the nonnegative real-valued scalars that control the balance between the two metrics measures defined by the first and second sums on the right-hand side of (10). In the equibalanced case, $m^{(0)} = m^{(1)} = 1$, the same importance is assigned to both metrics measures, in which case (9) specifies the discrete-form approximation to the Sobolev metrics-inducing operator $\mathcal{M} = m^{(0)}\mathcal{I} + m^{(1)}\nabla_{\mathbf{r}}^{2}$ in the relevant continuous-form solution space $B(R) \ni b(\mathbf{r})$, where $\mathcal{I}$ defines the identity operator [2]. Incorporating in (9) $\mathcal{P}_1 = \mathcal{M}$ for the continuous model and $\mathcal{P}_1 = \mathbf{M}$ for the discrete-form image model, respectively, specifies the consistency-guaranteed anisotropic kernel-type windowing [2, 9], because it controls not only the SSP (image) discrepancy measure but also its gradient flow over the scene.
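A minimal discrete sketch of the Sobolev metrics-inducing operator $\mathbf{M} = m^{(0)}\mathbf{I} + m^{(1)}\nabla_{\mathbf{r}}^{2}$ with the 4-nearest-neighbors Laplacian follows; the edge-replicated boundary handling is our assumption, since the article does not specify a boundary convention:

```python
import numpy as np

def metrics_operator(b_frame, m0=1.0, m1=1.0):
    """Apply M = m0*I + m1*Lap to a K_y x K_x image frame, where Lap is the
    4-nearest-neighbors difference approximation of the Laplacian nabla_r^2.
    Edge-replicated borders are an assumed boundary convention."""
    p = np.pad(b_frame, 1, mode='edge')
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * b_frame)
    return m0 * b_frame + m1 * lap
```

On a constant frame the Laplacian term vanishes, so the operator reduces to $m^{(0)}$ times the identity, matching the Lebesgue-metrics limit $m^{(1)} = 0$.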
4.4. DEDR-VA-optimal dynamic SSP reconstruction
The transformation of (5) into the contractive iterative mapping format yields
initialized by the conventional low-resolution MSF image
with the relaxation parameter $\tau$ and the solution-dependent point spread function (PSF) matrix operator
Associating in (11) the iterations $i + 1 \to t + \Delta t$; $i \to t$; $\tau \to \Delta t$ with "evolution time" ($\Delta t \to dt$; $t + \Delta t \to t + dt$) and considering the continuous 2-D rectangular scene frame $R \ni \mathbf{r} = (x, y)$ with the corresponding initial MSF scene image $q(\mathbf{r}) = \widehat{b}(\mathbf{r}; 0)$ and the "evolutionary"-enhanced SSP estimate $\widehat{b}(\mathbf{r}; t)$, respectively, we proceed from (11) to the equivalent asymptotic dynamic scheme [2]
where $\Phi_{\widehat{b}}(\mathbf{r}, \mathbf{r}'; t)$ represents the kernel PSF in evolution time $t$ corresponding to the continuous-form dynamic generalization of the PSF matrix $\mathbf{\Phi}_{\mathbf{D}[i]}$ specified by (13), and $\mathcal{M}$ defines the metrics-inducing operator. For the adopted $\mathcal{M} = m^{(0)}\mathcal{I} + m^{(1)}\nabla_{\mathbf{r}}^{2}$, (14) is transformed into the VA dynamic process defined by the partial differential equation (PDE)
For the purpose of generality, instead of the relaxation parameter $\tau$ and balancing coefficients $m^{(0)}$ and $m^{(1)}$, we incorporate into the PDE (15) three regularizing factors $c_0$, $c_1$, and $c_2$, respectively, to balance noise smoothing against edge enhancement [2, 9]. These are viewed as additional VA-level user-controlled degrees of freedom.
4.5. Family of numerical DEDR-VA-related techniques for SSP reconstruction
The discrete-form approximation of the PDE (15) in "iterative time" {i = 0, 1, 2,...} yields the contractive mapping iterative numerical procedure [2]
$i = 0, 1, 2, \ldots$, with the same MSF initialization (12). Different feasible assignments of the user-controlled degrees of freedom (i.e., the balancing factors $c_0$, $c_1$, $c_2$) in (16) specify the family of corresponding DEDR-VA-related SSP reconstruction techniques that produce the relevant RS images. Extending the previous studies on the DEDR-VA topic [2, 9], below we exemplify the following ones.

(i)
The simplest case relates to the specifications $c_0 = 0$, $c_1 = 0$, $c_2 = \mathrm{const} = c$, $c > 0$, and $\Phi(\mathbf{r}, \mathbf{r}'; t) = \delta(\mathbf{r} - \mathbf{r}')$ with the projector $\mathcal{P}_{+}$ excluded. In this case, the PDE (15) reduces to the isotropic diffusion (so-called heat diffusion) equation $\partial\widehat{b}(\mathbf{r};t)/\partial t = c\nabla_{\mathbf{r}}^{2}\widehat{b}(\mathbf{r};t)$. We reject the isotropic diffusion because of its resolution-deteriorating nature [1].

(ii)
The previous assignments but with the anisotropic conduction factor $c_2 = c(\mathbf{r}; t) \ge 0$ specified as a monotonically decreasing function of the magnitude of the image gradient distribution [4], i.e., a function $c(\mathbf{r}, |\nabla_{\mathbf{r}}\widehat{b}(\mathbf{r};t)|) \ge 0$, transform (15) into the anisotropic diffusion (AD) PDE, $\partial\widehat{b}(\mathbf{r};t)/\partial t = c(\mathbf{r}, |\nabla_{\mathbf{r}}\widehat{b}(\mathbf{r};t)|)\,\nabla_{\mathbf{r}}^{2}\widehat{b}(\mathbf{r};t)$, which specifies the celebrated Perona-Malik AD method [4] that sharpens the edge map on the low-resolution MSF images.

(iii)
For the Lebesgue metrics specification $c_0 = 1$ with $c_1 = c_2 = 0$, the PDE (15) involves only the first term on the right-hand side, resulting in the locally selective robust adaptive spatial filtering (RASF) approach investigated in detail in our previous studies [7, 9].

(iv)
The alternative assignments $c_0 = 0$ with $c_1 = c_2 = 1$ combine the isotropic diffusion with the anisotropic gain controlled by the Laplacian edge map. This approach is addressed as a selective information fusion method [5] that manifests almost the same performance as the DEDR-related RASF method [7].

(v)
The aggregated approach that we address here as the unified DEDR-VA method involves all three terms on the right-hand side of the PDE (15) with the equibalanced $c_0 = c_1 = c_2 = \mathrm{const}$ (one for simplicity); hence, it combines the isotropic diffusion (specified by the second term on the right-hand side of (16)) with the composite anisotropic gain dependent both on the evolution of the synthesized SSP frame and on its Laplacian edge map [2]. This produces a balanced compromise between the anisotropic reconstruction-fusion and locally selective image despeckling with adaptive anisotropic kernel windowing that preserves and even sharpens the image edge map [2].
All the techniques exemplified above, with different feasible specifications of the user-controllable degrees of freedom, compose a family of DEDR-VA-related iterative techniques for SSP reconstruction/enhancement. The general-form DEDR-VA framework is shown in Figure 1. It is noted that the progressive contractive mapping procedure (16) can be performed separately along the range (y) and azimuth (x) directions in a parallel fashion, making optimal use of the PSF sparseness properties of real-world RS imaging systems. These features of the POCS-regularized DEDR-VA-related algorithms generalized by (16) result in drastically decreased algorithmic computational complexity (e.g., up to ~10^3 times for the typical large-scale 10^3 × 10^3 SAR pixel image formats [8]).
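One iteration of a contractive mapping in the spirit of (16) can be sketched as follows. The concrete arrangement of the three $c$-terms below is our plausible reading (a data-agreement term, isotropic diffusion, and a gradient-modulated anisotropic gain), not the article's exact discretization, and the Perona-Malik-type conduction function is an assumed choice:

```python
import numpy as np

def laplacian4(b):
    """4-nearest-neighbors difference approximation of nabla_r^2
    (edge-replicated borders assumed)."""
    p = np.pad(b, 1, mode='edge')
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * b

def dedr_va_step(b, q, c0=1.0, c1=1.0, c2=1.0, tau=0.1):
    """One sketch iteration: c0 pulls the estimate toward the MSF image q,
    c1 applies isotropic diffusion, and c2 applies an anisotropic gain that
    decreases with the local gradient magnitude (edge preservation).
    The final clipping is the POCS positivity projector P_+."""
    gy, gx = np.gradient(b)
    grad_mag = np.hypot(gx, gy)
    aniso = np.exp(-grad_mag) * laplacian4(b)   # Perona-Malik-type conduction
    b_new = b + tau * (c0 * (q - b) + c1 * laplacian4(b) + c2 * aniso)
    return np.maximum(b_new, 0.0)
```

Setting $c_0 = 0$, $c_2 = 0$ recovers pure isotropic (heat) diffusion from case (i), while $c_1 = c_0 = 0$ with the conduction factor recovers the Perona-Malik-type behavior of case (ii).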
Next, several RS images formed by different sensor systems or by applying different image formation techniques can be aggregated into an enhanced fused RS image employing the NN computational framework [10]. We are now ready to proceed with the construction of such NN-adapted DEDR-VA-related techniques.
5. Radar/SAR image enhancement via sensor and method fusion
5.1. Fusion problem formulation
Consider the set of equations
which model the data $\{\mathbf{q}^{(p)}\}$ acquired by $P$ RS imaging systems that employ the image formation methods from the DEDR-VA-related family specified in the previous section. In (17), $\mathbf{b}$ represents the original $K$-D image vector, $\{\mathbf{\Phi}^{(p)}\}$ are the RS image formation operators referred to as the PSF operators of the corresponding DEDR-VA-related imaging systems (or methods), where we have omitted the subindex D for notational simplicity, and $\{\boldsymbol{\nu}^{(p)}\}$ represent the system noise, with the further assumption that these are uncorrelated from system to system.
Define the discrepancies between the actually formed images $\{\mathbf{q}^{(p)}\}$ and the true original image $\mathbf{b}$ as the $l_2$ squared norms, $J_p(\mathbf{b}) = \|\mathbf{q}^{(p)} - \mathbf{\Phi}^{(p)}\mathbf{b}\|^{2}$; $p = 1,\ldots,P$. Let us next adopt the VA-inspired proposition [10] that the smoothness properties of the desired image are controlled by the second-order Tikhonov stabilizer, $J_{P+1}(\mathbf{b}) = \mathbf{b}^{\mathrm{T}}\mathcal{P}_1\mathbf{b}$, where $\mathcal{P}_1 = \mathbf{M} = m^{(0)}\mathbf{I} + m^{(1)}\nabla^{2}$ is the VA-based metrics-inducing (regularizing) operator specified previously by (9). We further define the image entropy as
Then, the contrivance for aggregating the imaging systems (methods), when solving the fusion problem, is the formation of the augmented objective (or augmented ME cost) function
and seeking a fused restored image $\widehat{\mathbf{b}}$ that minimizes the objective function (19), in which $\boldsymbol{\lambda} = (\lambda_1, \ldots, \lambda_P, \lambda_{P+1})^{\mathrm{T}}$ represents the vector of weight parameters, commonly referred to as the fusion regularization parameters [10]. Hence, in the frame of the aggregate regularization approach to decentralized fusion [2, 6], the restored image is to be found as the solution of the convex optimization problem
for the assigned values of the regularization parameters $\boldsymbol{\lambda}$. A proper selection of $\boldsymbol{\lambda}$ is next associated with parametrical optimization [10] of such an aggregated fusion process.
5.2. NN-adapted fusion algorithm
The Hopfield-type dynamical NN, which we propose to employ to solve the fusion problem (20), is an expansion of the maximum entropy NN (MENN) proposed in our previous study [10]. We consider the multistate Hopfield-type (i.e., dynamic) NN [10, 11] with the $K$-D state vector $\mathbf{x}$ and $K$-D output vector $\mathbf{z} = \mathrm{sgn}(\mathbf{W}\mathbf{x} + \boldsymbol{\theta})$, where $\mathbf{W}$ and $\boldsymbol{\theta}$ are the matrix of synaptic weights and the vector of the corresponding bias inputs of the NN, respectively. The energy function of such an NN is expressed as [10]
The proposed idea for solving the RS system/method fusion problem (20) using the dynamical NN is based on an extension of the following cognitive processing proposition invoked from [10]: if the energy function of the NN represents the objective function of a mathematical minimization problem over a parameter space, then the state of the NN represents the parameters, and the stationary point of the network represents a local minimum of the original minimization problem. Hence, utilizing the concept of the dynamical net, we may translate our image reconstruction/enhancement problem with RS system/method fusion into the corresponding problem of minimization of the energy function (21) of the related MENN. Therefore, we define the parameters of the MENN in such a fashion as to aggregate the corresponding parameters of the RS systems/methods to be fused, i.e.,
$\forall k, i = 1,\ldots,K$, where we have redefined $\{x_k = b_k\}$ and ignored the constant term $E_{\mathrm{const}}$ in $E(\mathbf{x})$ that does not involve the state vector $\mathbf{x}$. The regularization parameters $\{\lambda_p\}$ in (22), (23) should be specified by an observer or pre-estimated invoking, for example, the VA-inspired resolution-over-noise-suppression balancing method developed in [10, Section 3]. In the latter case, the result of the enhancement-fusion becomes a balanced tradeoff between the gained spatial resolution and the noise suppression in the resulting fused enhanced image with the POCS-based regularizing stabilizer.
Next, we propose to find a minimum of the energy function (21) as follows. The states of the network are updated as $\mathbf{x}'' = \mathbf{x}' + \Delta\mathbf{x}$ using a properly designed update rule $\Re(\mathbf{z})$ for computing the change $\Delta\mathbf{x}$ of the state vector $\mathbf{x}$, where the superscripts $'$ and $''$ correspond to the state values before and after the network state updating (at each iteration), respectively. To simplify the design of such a state update rule, we assume that all $x_k \gg 1$, which enables us to approximate the change of the energy function due to updating neuron $k$ as [10]
We now redefine the outputs of the neurons as $\{z_k = \mathrm{sgn}(\sum_{i=1}^{K} W_{ki}x_i' + \theta_k' - 1)\ \forall k = 1,\ldots,K\}$. Using these definitions, and adopting the equibalanced fusion regularization weights $\lambda_p = 1\ \forall p = 1,\ldots,P$, we next design the desired state update rule $\Re(\mathbf{z})$, which guarantees nonpositive values of the energy changes $\Delta E$ at each updating step, as follows:
where $\Delta$ is the preassigned step-size parameter. If no changes of $\Delta E(\Delta\mathbf{x})$ are observed while approaching the stationary point of the network, then the step-size parameter $\Delta$ may be decreased, which enables us to monitor the updating process as it progresses, setting a compromise between the desired accuracy of finding the NN's stationary point and the computational complexity [10]. To satisfy the condition $x_k \gg 1$, some constant $x^{0}$ may be added to the gray level of every original image pixel, and after restoration the same constant should be subtracted from the gray level of every restored image pixel; hence, the selection of a particular value of $x^{0}$ is not critical [10]. Consequently, the restored image $\widehat{\mathbf{b}}$ corresponds to the state vector $\widehat{\mathbf{x}}$ of the NN at its stationary point as $\widehat{\mathbf{b}} = \widehat{\mathbf{x}} - x^{0}\mathbf{1}$, where $\mathbf{1} = (1\ 1 \ldots 1)^{\mathrm{T}} \in \mathbb{R}^{K}$ is the $K \times 1$ vector of ones. The computational structures of such an MENN and of its single neuron are presented in Figures 2 and 3, respectively.
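The energy-descent state updating with step-size shrinking described above can be sketched as follows. The standard Hopfield energy form used here is an assumption (the article's (21) may differ by constant terms), as is the acceptance-test realization of the rule $\Re(\mathbf{z})$:

```python
import numpy as np

def energy(x, W, theta):
    """Hopfield-type energy E(x) = -(1/2) x^T W x - theta^T x
    (a standard form assumed here; constant terms are ignored)."""
    return -0.5 * x @ W @ x - theta @ x

def menn_update(x, W, theta, step=0.1, n_iter=300):
    """Sketch of the multistate update rule: each sweep moves every state
    component by +/- step in the sign direction of (W x + theta); candidate
    updates that would increase the energy are rejected and the step size is
    halved, so Delta E <= 0 at every accepted step."""
    x = x.copy()
    for _ in range(n_iter):
        z = np.sign(W @ x + theta)     # neuron outputs
        x_new = x + step * z           # candidate update Delta x = step * z
        if energy(x_new, W, theta) <= energy(x, W, theta):
            x = x_new                  # accept: energy is nonincreasing
        else:
            step *= 0.5                # shrink step near the stationary point
    return x
```

Because $\mathrm{sgn}(\mathbf{W}\mathbf{x} + \boldsymbol{\theta})$ points along the negative energy gradient, a sufficiently small step always decreases $E$, which is the mechanism behind the nonpositive $\Delta E$ guarantee quoted above.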
6. Simulations
We simulated a fractional side-looking imaging SAR operating in an uncertain scenario [7]. We adopted a triangular shape of the imaging SAR range ambiguity function (AF) and a Gaussian shape of the corresponding azimuth AF [2, 12]. Simulation results are presented in Figures 4 and 5. The figure captions specify each particular simulated image formation/enhancement method (p = 1,...,P = 5). Aggregation of the locally selective robust spatial filtering (RSF) technique [5] with the DEDR-VA-optimal algorithm (16) was considered in the simulations of the NN-based fused enhancement mode. Next, Figure 6 reports the convergence rates for the three most prominent VA-related enhanced RS imaging approaches: the APES [6], the DEDR, and the developed NN-adapted DEDR-VA-optimal method (16) implemented via the MENN technique (20)–(25).
We employ two quality metrics for performance assessment of the reconstructive methods developed in this article. The traditional quantitative quality metric [7] for RS images is the so-called improvement in the output signal-to-noise ratio (IOSNR), which quantifies, on a dB scale, the performance gains attained with the different employed estimators
where $b_k$ represents the value of the $k$th element (pixel) of the original SSP, $\widehat{b}_k^{(\mathrm{MSF})}$ represents the value of the $k$th element (pixel) of the rough SSP estimate formed applying the conventional low-resolution MSF technique (12), and $\widehat{b}_k^{(p)}$ represents the value of the $k$th element (pixel) of the enhanced SSP estimate formed applying the $p$th enhanced imaging method ($p = 1,\ldots,P$), correspondingly. We consider and compare here five (i.e., $P = 5$) RS image enhancement/reconstruction methods, in which $p = 1$ corresponds to Lee's local statistics-based adaptive despeckling technique [2], $p = 2$ corresponds to the Perona-Malik AD method [5], $p = 3$ corresponds to the DEDR-related locally selective RASF technique [7], $p = 4$ corresponds to the APES method [6], and $p = 5$ corresponds to the NN-fused RSF and DEDR-VA methods, respectively.
The second employed quality metric is the l_{1} total mean absolute error (MAE) metric [13]

\mathrm{MAE}^{(p)} = \frac{1}{K}\sum_{k=1}^{K}\left|b_{k}-\widehat{b}_{k}^{(p)}\right|, \quad p = 1,\dots,P. \qquad (27)
The quality metrics specified by (26) and (27) allow us to quantify the performance of the developed DEDR-VA-related high-resolution reconstructive methods (enumerated above by p = 1,...,P = 5) and, also, the NN fusion quality.
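As a concrete illustration, both metrics can be computed with a few lines of NumPy. The function and array names below (`iosnr_db`, `mae_l1`, `b_true`, `b_msf`, `b_hat`) are hypothetical helpers, not part of the original article; the formulas follow definitions (26) and (27):

```python
import numpy as np

def iosnr_db(b_true, b_msf, b_hat):
    """IOSNR metric (26), in dB: ratio of the MSF estimation error
    energy to the error energy of the enhanced SSP estimate."""
    num = np.sum((b_msf - b_true) ** 2)
    den = np.sum((b_hat - b_true) ** 2)
    return 10.0 * np.log10(num / den)

def mae_l1(b_true, b_hat):
    """l1 total mean absolute error metric (27), averaged over the K pixels."""
    return np.mean(np.abs(b_hat - b_true))
```

A positive IOSNR^{(p)} indicates that the p-th enhanced method reduced the reconstruction error energy relative to the rough MSF estimate.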
The quantitative image enhancement/reconstruction performance gains achieved with the particular employed DEDR-RSF method [7], the APES algorithm [6], and the DEDR-VA-NN technique (16) for different SNRs, evaluated with the two quality metrics (26) and (27), are reported in Tables 1 and 2, respectively. The numerical simulations verify that the MENN-implemented DEDR-VA method outperforms the most prominent existing high-resolution RS imaging techniques [1–7] (both without fusion and in the fused version) in the attainable resolution enhancement as well as in the convergence rate.
7. Concluding remarks
The extended DEDR method combined with the dynamic VA regularization has been adapted to the NN computational framework for perceptually enhanced and considerably accelerated reconstruction of the RS imagery acquired with imaging array radar and/or fractional SAR systems operating in an uncertain RS environment. Connections have been drawn between different types of enhanced RS imaging approaches, and it has been established that the convex optimization-based unified DEDR-VA-NN framework provides an indispensable toolbox for high-resolution RS imaging system design, offering the observer the possibility to control the order, the type, and the amount of the employed two-level regularization (at the DEDR level and at the VA level, correspondingly). Algorithmically, this task is performed via construction of the proper POCS operators that unify the desirable image metrics properties in the convex image/solution sets with the employed radar/SAR-motivated data processing considerations. The addressed family of efficient contractive progressive mapping iterative DEDR-VA-related techniques has been particularly adapted for the NN computing mode with sensor system/method fusion. The efficiency of the proposed fusion-based enhancement of the fractional SAR imagery has been verified for the two-method fusion example in the reported simulation experiments. Our algorithmic developments and simulations revealed that, with the NN-adapted POCS-regularized DEDR-VA techniques, the overall RS imaging performances are improved compared with those obtained using separately the most prominent despeckling, AD, or locally selective RS image reconstruction methods in the literature that do not unify the DEDR, VA, and NN-adapted method fusion considerations.
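The contractive POCS-regularized iteration described above can be sketched numerically. This is a minimal illustration, not the article's exact operators: the two convex sets used here (nonnegativity of the SSP and a box bound on pixel amplitude) and the helper names `project_nonneg`, `project_box`, and `pocs_iterate` are assumptions chosen for simplicity:

```python
import numpy as np

def project_nonneg(b):
    # Projection onto the convex set of nonnegative SSP estimates.
    return np.maximum(b, 0.0)

def project_box(b, b_max):
    # Projection onto the convex set {b : 0 <= b_k <= b_max}.
    return np.clip(b, 0.0, b_max)

def pocs_iterate(b0, projections, n_iter=50):
    """Alternating projections onto convex sets: each sweep applies
    every projector in turn. Because each projection is nonexpansive,
    the composition is contractive toward the sets' intersection."""
    b = np.asarray(b0, dtype=float).copy()
    for _ in range(n_iter):
        for proj in projections:
            b = proj(b)
    return b
```

For instance, starting from an estimate with a negative pixel and an overshoot, the iteration drives both violations onto the admissible set while leaving already-feasible pixels untouched.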
Therefore, the developed unified DEDR-VA-NN framework puts radar/SAR image formation, speckle reduction, and adaptive dynamic scene image enhancement/fusion in a single optimization frame, performed in the rapidly convergent NN-adapted computational fashion.
References
Barrett HH, Myers KJ: Foundations of Image Science. Wiley, NY; 2004.
Shkvarko YV, Tuxpan J, Santos S: Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems. Sensors 2011, 11:4483–4511.
Shkvarko YV: Unifying regularization and Bayesian estimation methods for enhanced imaging with remotely sensed data. Part I: theory; Part II: implementation and performance issues. IEEE Trans Geosci Remote Sens 2004, 42(5):923–940.
Perona P, Malik J: Scale-space and edge detection using anisotropic diffusion. IEEE Trans Pattern Anal Mach Intell 1990, 12(7):629–639. 10.1109/34.56205
John S, Vorontsov MA: Multi-frame selective information fusion from robust error theory. IEEE Trans Image Process 2005, 14(5):577–584.
Yardibi T, Li J, Stoica P, Xue M, Baggeroer AB: Source localization and sensing: a nonparametric iterative adaptive approach based on weighted least squares. IEEE Trans Aerosp Electron Syst 2010, 46(1):425–443.
Shkvarko Y: Unifying experiment design and convex regularization techniques for enhanced imaging with uncertain remote sensing data. Part I: theory; Part II: adaptive implementation and performance issues. IEEE Trans Geosci Remote Sens 2010, 48(1):82–111.
Castillo-Atoche A, Torres-Roman D, Shkvarko YV: Experiment design regularization-based hardware/software co-design for real-time enhanced imaging in uncertain remote sensing environment. EURASIP J Adv Signal Process 2010, 2010(254040):1–21.
Shkvarko YV, Castillo B, Tuxpan J, Castro D: High-resolution radar/SAR imaging: an experiment design framework combined with variational analysis regularization. In IPCV 2011, Proceedings of the 2011 International Conference on Image Processing, Computer Vision, & Pattern Recognition. Volume II. Las Vegas, USA; 2011:652–658.
Shkvarko YV, Shmaliy YS, Jaime-Rivas R, Torres-Cisneros M: System fusion in passive sensing using a modified Hopfield network. J Franklin Inst 2001, 338:405–427. 10.1016/S0016-0032(00)00084-3
Henderson FM, Lewis AV: Principles and Applications of Imaging Radar. Manual of Remote Sensing. Volume 3. 3rd edition. Wiley, NY; 1998.
Wehner DR: High-Resolution Radar. 2nd edition. Artech House, Boston, MA; 1994.
Ponomaryov V, Rosales A, Gallegos F, Loboda I: Adaptive vector directional filters to process multichannel images. IEICE Trans Commun 2007, E90-B:429–430. 10.1093/ietcom/e90b.2.429
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Shkvarko, Y., Santos, S. & Tuxpan, J. Resolutionenhanced radar/SAR imaging: an experiment design framework combined with neural networkadapted variational analysis regularization. EURASIP J. Adv. Signal Process. 2011, 85 (2011). https://doi.org/10.1186/16876180201185
Keywords
 SAR system
 image enhancement
 image reconstruction
 neural network
 remote sensing