Features extraction from SAR interferograms for tectonic applications
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 155 (2012)
Abstract
In this article, a new technique for feature extraction from SAR interferograms is presented. The technique combines the properties of autoassociative neural networks with those of more traditional approaches such as the discrete Fourier transform or the discrete wavelet transform. The feature extraction is chained to another neural module performing the estimation of the fault parameters characterizing a seismic event. The whole procedure has been validated with the experimental data acquired for the analysis of the dramatic L’Aquila earthquake which occurred in Italy in 2009. The results show the effectiveness of the approach both in terms of dimensionality reduction and in terms of retrieval capabilities.
Introduction
Cross-track radar interferometry is a processing technique of synthetic aperture radar (SAR) data based on the generation of an interferogram from two complex images of the same area acquired with slightly different look angles (for a more detailed treatment refer to Bürgmann et al. [1]). Since its first applications in the 1990s, the SAR interferometry (InSAR) technique has been applied to several geophysical problems, among which we find seismology, volcanology, hydrogeology, glaciology, subsidence studies, and topographic mapping. SAR interferograms are generally affected by different types of errors [1]. Phase noise in interferometry is introduced by the radar system, by the propagation path through the variably refractive atmosphere, and by spatial decorrelation of the electromagnetic fields scattered back from the surface elements. In most cases, such as DEM generation, where pixel-based information is required, multilook processing is frequently implemented to reduce noise by averaging neighboring pixels. However, in other InSAR applications the pixel-based information is less important than the overall spatial fringe distribution observed over the area of interest. In the Earth Sciences domain, active tectonics is a framework where the application of InSAR has achieved rather significant results. Indeed, this technique is used by seismologists to better detect and measure the surface displacement field originated by a seismic event. More specifically, the retrieval problem is focused on the estimation of the fault parameters from the InSAR differential interferogram. The latter is generated by computing the phase difference of two radar images, acquired before and after an earthquake, on a pixel-by-pixel basis. The phase component is wrapped modulo 2π, being characterized by the phase cycles caused by the surface displacement.
Elements such as the shape and periodicity of the fringes, the number of lobes, and their orientation represent the information carried by the interferogram. In [2], a neural network (NN) approach for the retrieval of tectonic parameters from an acquired SAR interferogram has been introduced. It has been shown that once the network is trained, it can perform the inversion automatically, directly from wrapped data, hence in a fast and objective way, which represents a considerable advantage over more standard techniques discussed in the specialized literature. Although the results obtained are very encouraging and represent a significant step towards the automation of the retrieval process, some improvements can be made, especially in the design of the network performing the inversion task. In fact, in the original approach each differential interferogram, before being used as input to the NN algorithm, was sampled taking 1 pixel every 10. If, on one side, this choice reduces the complexity of the network topology without discarding significant information, on the other side it still involves rather large network topologies with numbers of connections of the order of millions. An additional reduction of the input dimensionality could further increase the NN mapping ability and computational efficiency. A network with fewer inputs has fewer adaptive parameters to be determined, which require a smaller training set to be properly constrained [3]. This leads to a network with improved generalization properties providing smoother mappings. In addition, a network with fewer weights may be faster to train. All these benefits make the reduction of the input data dimension a standard procedure when designing NNs, even for a relatively low-dimensional input space.
Starting from these motivations, in this article we present a new technique to extract the essential features contained in a SAR interferogram image. The method consists of combining AutoAssociative Neural Networks (AANN) with harmonic analysis approaches based on the discrete Fourier transform (DFT) and the discrete wavelet transform (DWT). The new complete inversion algorithm has been tested on the coseismic interferogram of the dramatic earthquake that occurred in the province of L’Aquila (central Italy) in April 2009 [4]. Although seismic parameter estimation is considered as the field of application, the same technique can be used for other scenarios where the fringe spatial distribution is the critical information of the image and an approach for dimensionality reduction is required.
Methods
Harmonic analysis
As far as we know, no specific techniques for dimensionality reduction applied to SAR interferograms have been presented in the literature. On the other hand, many algorithms have been developed for image filtering or denoising. Among them, harmonic analysis [5], a field of mathematics that studies the representation of functions as superpositions of fundamental waveforms, is recognized to be one of the most effective approaches. It is known that a multivariate function f can be well approximated by the linear combination of the elements of a given basis:

$f \approx \sum_{k} a_{k} \gamma_{k} \quad (1)$
where a_{ k } are the coefficients which express the correlation of f with the basis functions γ_{ k }. This type of operation is called harmonic analysis. When harmonic analysis is applied to image data, the discrete image can be seen as a two-dimensional signal and it is possible to consider a set of mathematical tools that perform a transformation suitable to extract some features otherwise difficult to identify. This can be done by means of particular functional operators like the DFT and the DWT. Both transforms express the signal as coefficients in a function space spanned by a set of basis functions. The basis of the DFT consists of complex exponential functions, representing sinusoid functions in the real domain, and the multiplying coefficients are complex numbers as well. The basis of the DWT consists of scaled and shifted versions of a real-valued mother wavelet function. In this case, the coefficients have real values. It has to be noted that more sophisticated transformations than the DFT and DWT exist, which are characterized by properties of invariance to changes in rotation and shift. However, such properties are not useful for the purpose of this study, where the orientation and the position of the fringes represent crucial pieces of information. Low-pass filtering of the DFT and DWT transformed images can be considered for the extraction of low spatial frequency features. An energy conservation criterion can be adopted to guide such an extraction of the transformation coefficients.
In Figure 1, the phase spectrum of an interferometric image computed by means of a 2D DFT is shown. The original interferogram has a fixed dimension of 1500 × 1500 pixels and, as can be seen, large areas of the image are uniform. In such areas a high correlation between the values of locally near pixels is observed. The performance of a filter for dimensionality reduction operating in the spatial domain, such as the one adopted in [2], should in principle be improved by considering a transformation to a domain, such as frequency, where these kinds of redundancy can be more effectively removed. The coefficients computed by means of the DFT are complex and can be visualized as images corresponding to the amplitude and the phase spectrum.
From Figure 1, the symmetry of both spectra is evident (the image is a real signal); this means that from about half of the samples it is possible to reconstruct the other half. An additional comment is that the components with higher values in the amplitude spectrum, representing the spatial frequencies that carry more energy, are low-frequency components. Therefore, considering that the information of interest mostly appears at large-scale variations, an approach conserving the low-frequency coefficients of the DFT seems appropriate. Moreover, the modulus of the DFT is primarily a measure of local contrast variation of the image. The phase mainly contains information about the feature locations. For example, an image shift adds a linear term to the phase and has no effect on the modulus. From these considerations, we can assume that, in general, the phase carries information on the structure of the image, while the modulus is associated with the intensity of individual image elements. Since the information of interest is related to the shape, the orientation, and the periodicity of the fringes, the feature extraction has been implemented using only the phase spectrum, omitting the information contained in the amplitude spectrum, which is considered less significant.
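As an illustration, the low-pass phase-spectrum extraction can be sketched as follows (the function name, the toy fringe pattern, and the distance-based frequency ranking are our own assumptions, not the paper's implementation):

```python
import numpy as np

def dft_phase_features(interferogram, n_coeffs=5000):
    """Extract the low-spatial-frequency phase-spectrum coefficients
    of an interferogram (sketch of the DFT step described in the text)."""
    # 2-D DFT; fftshift moves the zero-frequency component to the centre
    spectrum = np.fft.fftshift(np.fft.fft2(interferogram))
    phase = np.angle(spectrum)          # keep only the phase spectrum

    # Rank frequency bins by distance from the centre (low frequency first)
    rows, cols = interferogram.shape
    v, u = np.mgrid[0:rows, 0:cols]
    dist = (v - rows // 2) ** 2 + (u - cols // 2) ** 2
    order = np.argsort(dist.ravel())

    # Low-pass selection: keep the n_coeffs lowest spatial frequencies
    return phase.ravel()[order[:n_coeffs]]

# Toy example on a small synthetic wrapped-phase fringe pattern
x = np.linspace(0, 4 * np.pi, 64)
fringes = np.angle(np.exp(1j * (x[:, None] + 0.5 * x[None, :])))
features = dft_phase_features(fringes, n_coeffs=500)
print(features.shape)  # (500,)
```

The selected phase values then form the reduced feature vector passed on to the subsequent processing stages.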
A similar processing has been applied by means of the 2D DWT. Signals showing nonstationary characteristics could be missed by classical Fourier analysis because of the unlimited time support of its basis functions. Instead, the harmonic analysis performed by the DWT provides a representation of the signal that analyzes the frequency content in a multiscale domain. The coefficients can be visualized in a wavelet power spectrum (WPS) plot where the two horizontal axes represent the scale factors in the two image dimensions. Each WPS point is associated with a color representing the magnitude of the coefficient; the higher the magnitude, the higher the correlation between the signal and the wavelet elements (at the given scale factors). In the 2D case, the lower scaling factors are placed in the lower left quadrant. Figure 2 shows the WPS obtained for the same interferogram considered in Figure 1. Note that the Daubechies function [6] has been used as mother wavelet.
We see that the coefficients with the highest values of correlation are in the lower left quadrant of the WPS, corresponding to lower values of the scale factors. This is in agreement with what was seen using the DFT, where the highest energy content was located at the low frequencies. A further consideration regards the DWT's capability, compared with the DFT, of better detecting higher spatial frequency features.
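A single-level 2-D DWT can be sketched as follows (a Haar wavelet is used here for brevity, whereas the paper employs a Daubechies function; the energy concentration in the approximation band mirrors the WPS behavior described above):

```python
import numpy as np

def dwt2_haar(img):
    """One level of an orthonormal 2-D Haar DWT. Returns the
    approximation subband cA and the detail subbands (cH, cV, cD)."""
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)   # rows: low pass
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)   # rows: high pass
    cA = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)      # columns: low pass
    cH = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    cV = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    cD = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return cA, (cH, cV, cD)

# Smooth fringe-like pattern: energy concentrates in the approximation
# band, i.e. the low scale-factor quadrant of the WPS
x = np.linspace(0, 2 * np.pi, 64)
img = np.sin(x[:, None] + 0.5 * x[None, :])
cA, (cH, cV, cD) = dwt2_haar(img)
e = [float(np.sum(c ** 2)) for c in (cA, cH, cV, cD)]
print(e[0] > sum(e[1:]))   # True: the approximation band dominates
```

Because the transform is orthonormal, the total energy of the four subbands equals that of the original image, which is what makes the energy-based selection criterion of the next paragraph meaningful.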
A metric to measure the performance of the transformation and to cut off the less significant components can stem from the analysis of the signal energy in the transformed domain. To this aim, the cumulative energy (CE) associated with the number of considered coefficients can be used. In Figure 3, an example of CE curves for the DFT (left) and the DWT (right) is shown.
A preliminary analysis has been performed considering a set of 120 synthetic interferograms (equal to 10% of the whole dataset) to identify the average number of coefficients that retain 80% of the total CE, for the DFT and DWT, respectively. Then, considering these average measures, the first 5,000 coefficients of the DFT and the first 500 coefficients of the DWT have been extracted for each interferogram.
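The CE-based selection can be sketched as follows (the function name and the toy coefficient vector are our own illustrations):

```python
import numpy as np

def n_coeffs_for_energy(coeffs, fraction=0.8):
    """Number of largest-magnitude coefficients retaining `fraction`
    of the total cumulative energy (CE)."""
    energy = np.sort(np.abs(coeffs).ravel() ** 2)[::-1]  # descending
    ce = np.cumsum(energy) / energy.sum()                # normalised CE curve
    return int(np.searchsorted(ce, fraction) + 1)

# A signal whose energy is concentrated in a few components needs
# only a few coefficients to reach 80% CE
c = np.array([10.0, 5.0, 1.0, 0.5, 0.1, 0.1])
print(n_coeffs_for_energy(c, 0.8))  # 2
```

Applied to the transform coefficients of each synthetic interferogram, this criterion yields the average counts quoted above (about 5,000 for the DFT and 500 for the DWT).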
Note that due to its spatial-frequency localization property the DWT enables the representation of an interferogram by means of fewer coefficients. In Figure 4, the original interferogram and its reconstructions obtained by inverting both the DFT and DWT, considering 80% and 60% of the CE, are presented. For a comparison with the method used in [2], the interferogram obtained with the spatial sampling is also shown in Figure 4. We see that the threshold on CE assures that the main patterns of the fringe distribution are preserved. This no longer holds if the number of considered coefficients is significantly smaller. Two different methods to judge the effectiveness of a reduction criterion can be considered at this stage. A first metric consists of computing the compression ratio (C_{ r }), i.e., the ratio of the number of bytes necessary to represent the image before and after the compression. A second performance index can be represented by an objective fidelity criterion.
In (2), the expression of the compression ratio is shown:

$C_{r} = \frac{n_{1}}{n_{2}} \quad (2)$

where n_{1} and n_{2} are the numbers of bytes representing the image before and after the compression, respectively. As far as the objective fidelity criterion is concerned, the root mean square error (RMSE) between the original interferogram and the reconstructed one has been adopted:

$\mathrm{RMSE} = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[f(i,j)-\widehat{f}(i,j)\right]^{2}} \quad (3)$
In the above equation, $\widehat{f}$ is the reconstructed copy of the interferogram f of dimension M × N. The C_{ r } and RMSE values for the DFT and DWT obtained for the synthetic interferogram of Figure 1 are shown in Table 1; the values corresponding to the spatial sampling applied in [2] are also reported.
The results reported in Table 1 show the effectiveness of the harmonic analysis approaches in comparison to the spatial sampling technique. With the transforms the C_{ r } increases and the RMSE decreases.
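The two metrics can be sketched as follows (the byte counts in the example are illustrative, not the paper's):

```python
import numpy as np

def compression_ratio(n1_bytes, n2_bytes):
    """C_r = n1 / n2, as in Eq. (2)."""
    return n1_bytes / n2_bytes

def rmse(f, f_hat):
    """Root mean square error between an M x N interferogram and its
    reconstruction, as in Eq. (3)."""
    return np.sqrt(np.mean((np.asarray(f) - np.asarray(f_hat)) ** 2))

# Example: a 1500 x 1500 float32 interferogram represented by 500
# float32 DWT coefficients (illustrative numbers only)
print(compression_ratio(1500 * 1500 * 4, 500 * 4))  # 4500.0

f = np.zeros((4, 4))
f_hat = f + 0.1
print(round(rmse(f, f_hat), 3))  # 0.1
```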
Whatever inversion algorithm is used to characterize the seismic source, the quality of the DInSAR data is also a concern. In particular, the loss of coherence between the two SAR acquisitions generates decorrelation in some areas of the interferogram, and this is one of the principal factors affecting the final results. With the harmonic analysis approach, however, such a problem can be mitigated. Indeed, the low-pass approach filters out the high-frequency noise, which is the one due to the lack of coherence (γ). As proven by Lee et al. [7], the interferometric phase noise in the real domain can be characterized by an additive noise model:

$\varphi = \varphi_{x} + v \quad (4)$
where ϕ is the measured phase, ϕ_{ x } is the original phase without noise, and v represents a zero-mean noise depending on γ and the number of looks L. Bamler and Hartl [8] define this dependency for a single-look (L = 1) interferogram as a function of the absolute value of the complex coherence γ as

${\sigma}_{\varphi,1}^{2} = \frac{\pi^{2}}{3} - \pi \arcsin\left(\left|\gamma\right|\right) + \arcsin^{2}\left(\left|\gamma\right|\right) - \frac{\mathrm{Li}_{2}\left(\left|\gamma\right|^{2}\right)}{2} \quad (5)$
In the above equation, ${\sigma}_{\varphi,1}^{2}$ is the phase variance, related to the phase noise, and $\mathrm{Li}_{2}$ is Euler's dilogarithm, defined as

$\mathrm{Li}_{2}(x) = \sum_{k=1}^{\infty} \frac{x^{k}}{k^{2}} \quad (6)$
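A numerical sketch of the single-look phase standard deviation as a function of coherence, with the dilogarithm evaluated by a truncated series (function names are our own):

```python
import numpy as np

def dilog(x, terms=200):
    """Euler's dilogarithm Li_2(x) = sum_{k>=1} x^k / k^2 by truncated
    series (valid for |x| <= 1)."""
    k = np.arange(1, terms + 1)
    return float(np.sum(x ** k / k ** 2))

def phase_std_singlelook(gamma):
    """Single-look (L = 1) interferometric phase standard deviation as a
    function of the coherence magnitude |gamma| (Bamler and Hartl model)."""
    g = abs(gamma)
    var = (np.pi ** 2) / 3.0 - np.pi * np.arcsin(g) \
          + np.arcsin(g) ** 2 - dilog(g ** 2) / 2.0
    return np.sqrt(var)

# Phase noise grows as coherence drops (the three coherence values
# match those used for the simulations in Figure 5)
for g in (0.8, 0.6, 0.4):
    print(g, round(phase_std_singlelook(g), 3))
```

For γ → 1 the standard deviation tends to zero, while for γ = 0 it reaches π/√3, the value of a phase uniformly distributed over one cycle.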
In Figure 5, three cases of simulated interferograms with decreasing coherence and the corresponding results obtained by means of spatial sampling, DFT, and DWT, are shown. The additive phase noise has been computed considering coherence mean values equal to 0.8, 0.6, and 0.4, respectively.
By comparing these results with those of Figure 4, it can be noted that the two proposed harmonic approaches are less sensitive to decorrelation than the spatial sampling. This consideration has been validated by computing the RMSE values for the reconstructed noisy interferograms, reported in Table 2.
From the results summarized in Table 2, it can be noted that the two transform approaches show a better behavior than the spatial sampling on noisy data.
Nonlinear PCA
AANNs have already been successfully used in remote sensing for data dimensionality reduction in different applications, such as atmospheric microwave radiometry [9] and the processing of hyperspectral data [10]. A nonlinear PCA can be implemented by means of a multilayer NN with a particular architecture called autoassociative [11]. The latter is characterized by a symmetric topology in which the input layer and the output layer have the same number of elements and three more hidden layers are present. As shown in Figure 6, the central layer has a smaller dimension than the input-output layers, hence such a layer can be seen as a bottleneck layer.
Through the AANN scheme the input pattern is mapped onto itself applying an unsupervised learning based on the minimization of the sum of quadratic errors:

$E = \frac{1}{2}\sum_{n=1}^{d}\sum_{k=1}^{N}\left[y_{k}\left(x^{n}\right) - x_{k}^{n}\right]^{2} \quad (7)$
where $y_{k}\left(x^{n}\right)$ is the output of the network and ${x}_{k}^{n}$ is the target pattern (equal to the input), while the double sum is computed over the dataset dimensionality N and over the d different patterns of the dataset. The two symmetric sections of the AANN implement two distinct functional mappings F_{ 1 } and F_{ 2 }. The first mapping projects the original vector x^{n} onto a subspace $\mathcal{S}$ of dimensionality m < n, defined by the activations of the units in the bottleneck layer. This mapping, due to the first hidden layer of nonlinear elements, is essentially arbitrary and in particular is not restricted to the linear case. The F_{ 2 } mapping reprojects the m-dimensional space $\mathcal{S}$ onto the n-dimensional starting space. Therefore, F_{ 2 } defines, through a nonlinear transformation, how $\mathcal{S}$ is embedded in the original space of input vectors x^{n}. An AANN actually performs a nonlinear principal component analysis (NLPCA), containing linear PCA as a particular case. It has the advantage of not being limited to linear transformations; however, the dimensionality of the subspace $\mathcal{S}$ must be defined before the training process, which involves the implementation and comparison of multiple networks with different values of m.
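A compact illustration of the autoassociative scheme (the layer sizes and the synthetic data are illustrative assumptions; scikit-learn's MLPRegressor stands in for the AANN described in the text):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Autoassociative training: a symmetric MLP learns to reproduce its
# input through a small bottleneck layer (unsupervised, target = input)
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))               # 2 underlying factors
X = np.column_stack([latent, latent ** 2, latent[:, :1] * latent[:, 1:]])
# X has 5 correlated columns driven by only 2 degrees of freedom

aann = MLPRegressor(hidden_layer_sizes=(30, 2, 30), activation='tanh',
                    max_iter=3000, random_state=0)
aann.fit(X, X)                                   # map the input onto itself
rec_err = np.mean((aann.predict(X) - X) ** 2)
print(rec_err < np.var(X))   # reconstruction beats a trivial mean predictor
```

The two-unit central layer plays the role of the bottleneck: its activations are the nonlinear principal components used as the reduced representation.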
The hybrid approach
In the previous sections, we have reviewed the potential of different dimensionality reduction approaches for their application to interferograms. In our case, the final purpose is to handle a vector of rather limited dimensionality to be used for the retrieval of tectonic parameters from the SAR interferogram. Considering the harmonic analysis (DFT or DWT), we have seen that, in the best case (DWT), at least about 500 coefficients have to be used to keep the most significant information content. Even if the reduction from the initial dimensionality is dramatic, 500 components may still represent a number of inputs involving a rather complicated MLP topology, with thousands of adaptive coefficients to be determined. In fact, such a topology can still cause overfitting during the training phase. On the other hand, a straightforward use of NLPCA would have the advantage of yielding an input vector consisting of a rather limited number of components, but the AANN performing this task, receiving as input the whole SAR interferometric image, would again be characterized by a highly complex topology, hence the computational burden would still be difficult to manage. We then propose the hybrid approach shown in Figure 7, where a first processing step relies on the harmonic analysis and, in a second step, the AANN is applied. Such an approach should in principle lead to a considerably reduced number of components for the interferogram representation without involving the training of huge AANN topologies. Note that DFT and DWT are considered as two alternative possibilities to implement the harmonic analysis.
Experimental setup
An ensemble of synthetic differential interferograms has been generated by a recursive implementation of the Okada formulation [12], which expresses in closed analytic form the surface deformation caused by a seismic event through a dislocation model in an elastic half space:

$u_{i} = \frac{1}{F}\iint_{\Sigma} \Delta u_{j} \left[\lambda \delta_{jk} \frac{\partial u_{i}^{n}}{\partial \xi_{n}} + \mu \left(\frac{\partial u_{i}^{j}}{\partial \xi_{k}} + \frac{\partial u_{i}^{k}}{\partial \xi_{j}}\right)\right] \nu_{k} \, d\Sigma \quad (8)$
In the above equation, u_{ i }(x_{1}, x_{2}, x_{3}) is the displacement field due to a dislocation Δu_{j}(ξ_{1}, ξ_{2}, ξ_{3}) across a surface Σ in an isotropic medium, δ_{ jk } is the Kronecker delta, λ and μ are Lamé's coefficients specifying the elastic medium, and ν_{ k } is the direction cosine of the normal to the surface element dΣ. The term u_{ i }^{j} is the i-th component of the displacement at (x_{1}, x_{2}, x_{3}) due to the j-th direction point force of magnitude F at (ξ_{1}, ξ_{2}, ξ_{3}). To obtain the synthetic interferogram, the displacement vector u computed by Equation (8) is projected onto the satellite line of sight using two angles: the radar incidence angle (from vertical) and the azimuth of the satellite ground track (from North). The computed phase is finally wrapped applying the operator W{·}:

$\varphi_{w} = W\left\{\varphi\right\} = \mathrm{mod}\left(\varphi + \pi, 2\pi\right) - \pi \quad (9)$
In Figure 8, some examples of the generated synthetic data are shown. The generated dataset has been used to train the NNs performing the information retrieval. Depending on the seismic source mechanism, faults can be schematically classified into three main groups: normal fault, strike slip fault, and reverse fault (thrust). The main parameters defining the fault geometry are length (km), width (km), bottom depth (km), strike angle (deg), which measures the angle between the fault and the N-S direction, and dip angle (deg), which measures the inclination of the fault plane with respect to the surface. In this study, we have considered constant slip values along the fault plane, allowing fault length, width, dip and strike angles, and bottom depth to vary within predefined ranges. The synthetic dataset on which the NNs have been trained is composed of 1,200 interferograms of 1500 × 1500 pixels. In Table 3, the ranges of variation of the different fault parameters for the three fault mechanisms are shown, while Table 4 shows the statistical characterization of the synthetic dataset.
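The line-of-sight projection and wrapping step can be sketched numerically (the wavelength and viewing angles below are illustrative Envisat-like values, not taken from the paper):

```python
import numpy as np

# From displacement to wrapped interferometric phase: project the 3-D
# surface displacement u onto the radar line of sight, convert to phase
# (two-way path), and wrap into one phase cycle as in Eq. (9)
wavelength = 0.0562                 # C-band wavelength (m), illustrative
inc = np.deg2rad(23.0)              # radar incidence angle from vertical
heading = np.deg2rad(-166.0)        # azimuth of the ground track from North

# Unit line-of-sight vector in (East, North, Up) for a right-looking SAR
los = np.array([-np.sin(inc) * np.cos(heading),
                 np.sin(inc) * np.sin(heading),
                 np.cos(inc)])

u = np.array([0.02, -0.01, -0.15])  # sample displacement vector (m)
phi = -4.0 * np.pi / wavelength * u.dot(los)     # unwrapped phase (rad)
phi_wrapped = np.angle(np.exp(1j * phi))         # W{phi}, in (-pi, pi]
print(-np.pi < phi_wrapped <= np.pi)             # True
```

Evaluating this projection on the full Okada displacement field, pixel by pixel, yields the synthetic wrapped interferograms of Figure 8.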
The steps of the proposed inversion method can be summarized as follows:

1. Synthetic interferogram generation through the Okada formulation, considering a set of fault parameters spanning the variations of the phenomena behavior.

2. DFT or DWT feature extraction.

3. NLPCA, developed by means of an AANN, performing an additional dimensionality reduction starting from the selected DFT or DWT coefficients.

4. Training and testing of the NN for the classification (hereafter NN1) of the fault mechanism (normal, strike slip, or thrust).

5. Training and testing of three NNs (hereafter NN2), one for each fault mechanism, for the retrieval of the fault parameters (length, width, depth, dip angle, strike angle).
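The chain above can be illustrated end-to-end on toy data; here two fringe orientations stand in for fault mechanisms, low-frequency DFT amplitude coefficients stand in for the feature extraction (the paper retains the phase spectrum), and linear PCA replaces the AANN for brevity:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def fringes(freq, angle, n=32):
    """Toy stand-in for an Okada-generated interferogram: a wrapped
    phase ramp with a given fringe frequency and orientation."""
    y, x = np.mgrid[0:n, 0:n] / n
    ramp = 2 * np.pi * freq * (x * np.cos(angle) + y * np.sin(angle))
    return np.angle(np.exp(1j * ramp))

# 1) synthetic dataset: two "mechanisms" = two fringe orientations
X_img, labels = [], []
for k in range(200):
    cls = k % 2
    ang = (0.2 if cls == 0 else 1.2) + 0.1 * rng.normal()
    X_img.append(fringes(3 + rng.uniform(0, 2), ang))
    labels.append(cls)

# 2) harmonic feature extraction: low-frequency DFT coefficients
feats = np.array([np.abs(np.fft.fft2(im))[:6, :6].ravel() for im in X_img])

# 3) further reduction: linear PCA stands in for the AANN/NLPCA step
z = PCA(n_components=5).fit_transform(feats)

# 4) classification network (NN1) on the reduced features
nn1 = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(z[:150], labels[:150])
print(nn1.score(z[150:], labels[150:]))   # held-out accuracy
```

Step 5 of the chain would attach regression networks of the same kind to each predicted mechanism class.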
After different trials, the two architectures for NN1 and NN2 that showed the best performance have configurations of 50-30-10-3 and 50-30-10-5 neurons per layer, respectively.
The test of the NNs' performance, which can also be regarded as a test of the effectiveness of the preprocessing feature extraction phase, has been developed by means of an independent subset of synthetic data. In Table 5, the classification accuracy of NN1 considering the DFT and the DWT feature extraction is shown. Finally, in Table 6, we report the values of the RMSE obtained over the different fault types and for the two preprocessing algorithms.
In general, we see that both considered approaches (using DFT or DWT) are characterized by good estimation capabilities. While the DFT shows better accuracy in terms of fault classification, the DWT seems more precise in the retrieval task. This can be explained considering that the DWT yields a more effective dimensionality reduction, which can be significant when the inversion task, as in the parameter retrieval case, is more easily affected by the overfitting problem.
A comparison with the scheme obtained considering, as in [2], a simple data dimensionality reduction technique based on spatial sampling has also been carried out. We observed that, with the new technique, the time necessary for training the NN performing the inversion decreases dramatically, by a factor of 20. Improvements in the accuracy of the classification and of the parameter retrievals have also been noted.
L’Aquila earthquake test case
The procedure described above has been tested on a differential interferogram imaging the seismic event that occurred in central Italy near the city of L'Aquila. On April 6th, 2009 (01:32 GMT), the Abruzzi region (central Italy) was struck by an M_{ w } 6.3 earthquake. The earthquake heavily hit the main city of the province, L’Aquila, and strongly damaged its historical heritage, causing the partial or complete collapse of a significant number of highly vulnerable, recent, and historical buildings. The mainshock, located at a depth of approximately 9 km, was followed in the next week by seven aftershocks with M_{ w } > 5, the largest of which (M_{ w } = 5.6) occurred on April 7th, 15 km SE of the mainshock and 5 km deeper (Figure 9). The focal mechanism of the mainshock indicates a pure NW-SE normal fault dipping SW [13], in agreement with the extensional tectonics of the Apennines. Results already presented in the literature, obtained with different approaches and algorithms based on geodetic measurements from GPS, leveling, or InSAR, are shown in Table 7.
We applied DInSAR techniques to descending-orbit C-band Envisat images (April 27th 2008-April 12th 2009), obtaining the interferogram used as input for the proposed procedure. The computed SAR interferogram is shown in Figure 10, while the results from the NN retrieval scheme are synthesized in Table 8.
Both the classification problem and the retrieval problem were satisfactorily managed with both preprocessing stages. Indeed, the NN correctly associated the L’Aquila interferogram to a normal slip mechanism. Furthermore, the estimated geometric parameters were consistent with the results in Table 7.
The energy released by the earthquake has been estimated computing the M_{ w } (moment magnitude) by using Kanamori’s formulation [17]:

$M_{w} = \frac{2}{3}\log_{10} M_{o} - 10.7 \quad (10)$
where M_{o} = μWLδ is the seismic moment, μ ≈ 3.2 × 10^{11} dyne/cm^{2} is the shear modulus, W and L are the fault width and length, and δ is the slip. The M_{ w } values computed for the two sets of retrieved parameters are reported in Table 8 and are quite compatible with the assessments from seismological measurements.
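The magnitude computation can be sketched with illustrative values of the order of the L'Aquila estimates (not the paper's retrieved parameters):

```python
import numpy as np

# Moment magnitude via Kanamori's formulation, Eq. (10):
# M_w = (2/3) log10(M_o) - 10.7, with M_o = mu * W * L * delta in dyne*cm
mu = 3.2e11          # shear modulus (dyne/cm^2), as quoted in the text
L = 12.0e5           # fault length: 12 km expressed in cm (illustrative)
W = 10.0e5           # fault width: 10 km expressed in cm (illustrative)
slip = 60.0          # average slip in cm (illustrative)

M_o = mu * W * L * slip                  # seismic moment (dyne*cm)
M_w = (2.0 / 3.0) * np.log10(M_o) - 10.7
print(round(M_w, 1))  # 6.2
```

With fault dimensions and slip of this order, the computed magnitude is consistent with the seismologically determined M_{ w } 6.3.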
Conclusion
In this study, we addressed the problem of feature extraction from SAR interferograms in the particular framework of the analysis of tectonic events. Both the harmonic analysis, based on DFT and DWT, and a neural approach, based on AANN, have been considered for the final objective.
We found that a hybrid approach chaining the harmonic analysis and the neural technique was the most effective one. Indeed, the harmonic analysis alone is not capable of shrinking the original interferogram image dimensionality down to the desired level. On the other hand, applying the AANN alone would have involved the use of overly complex network architectures. The results obtained with the implemented processing chain are rather satisfactory, since the time necessary to train the networks performing the final inversion (split into a classification stage and a parameter retrieval stage) was dramatically reduced and, at the same time, the parameter estimation accuracy improved with respect to the case where dimensionality reduction is performed simply by interferogram subsampling. The capability of the harmonic analysis to mitigate the effect of decorrelation characterizing a SAR interferometric pair has also been demonstrated. The complete processing chain has finally been validated with the real case of the L’Aquila earthquake of April 2009. The obtained results are in good agreement with the conclusions of other analyses presented in the literature. The moment magnitude M_{ w }, computed with Kanamori’s formulation utilizing the retrieved geometric parameters of the fault plane, is compatible with the value obtained by seismological analysis. The fault classification results, obtained by means of the synthetic dataset, have shown a better behavior of the DFT with respect to the DWT. This can be due to the fact that in this case the method selected for limiting the number of coefficients to be used, relying on the CE measure, might not be the optimal one, so further investigations are required. On the other hand, in the parameter retrieval results, where probably the overfitting risk increases, the DWT + AANN performs slightly better than the DFT + AANN.
In fact, the values of C_{ r } and RMSE show better performance of DWT in the pure dimensionality reduction task.
The choice of considering uniform distributions for the slip vector components along the fault plane can be considered a limitation of the whole inversion procedure. Such a limitation, however, can be removed, or reduced, by a refined implementation of the forward problem model. On the other hand, several advantages can be put forward: in particular, the possibility of retrieving the fault parameters directly from wrapped data, and a certain degree of tolerance to phase noise, as shown in Section 2. Moreover, once the classification and retrieval NNs have been trained, they can rapidly be applied to invert data with a high level of objectivity.
References
1. Bürgmann R, Rosen PA, Fielding EJ: Synthetic aperture radar interferometry to measure earth’s surface topography and its deformation. Ann. Rev. Earth Plan. Sci. 2000, 28: 169-209. 10.1146/annurev.earth.28.1.169
2. Stramondo S, Del Frate F, Picchiani M, Schiavon G: Seismic source quantitative parameters retrieval from InSAR data and neural networks. IEEE Trans. Geosci. Remote Sens. 2011, 49: 96-104. 10.1109/TGRS.2010.2050776
3. Bishop CM: Neural Networks for Pattern Recognition. Oxford University Press, Oxford; 1995:374-375.
4. Chiarabba C, Amato A, Anselmi M, Baccheschi P, Bianchi I, Cattaneo M, Cecere G, Chiaraluce L, Ciaccio MG, De Gori P, De Luca G, Di Bona M, Di Stefano R, Faenza L, Govoni A, Improta I, Lucente FP, Marchetti A, Margheriti L, Mele F, Michelini A, Monachesi G, Moretti M, Pastori M, Piana Agostinetti N, Piccinini D, Roselli P, Seccia D, Valoroso L: The 2009 L’Aquila (central Italy) Mw 6.3 earthquake: main shock and aftershocks. Geophys. Res. Lett. 2009, 36: L18308. 10.1029/2009GL039627
5. Harris FJ: On the use of windows for harmonic analysis with the discrete Fourier transform. Proc. IEEE 1978, 66(1):51-83.
6. Daubechies I: Orthonormal bases of compactly supported wavelets. Commun. Pure Appl. Math. 1988, 41: 909-996. 10.1002/cpa.3160410705
7. Lee J, Papathanassiou K, Ainsworth T, Grunes M, Reigber A: A new technique for phase noise filtering of SAR interferometric phase images. IEEE Trans. Geosci. Remote Sens. 1998, 36(5):1456-1465. 10.1109/36.718849
8. Bamler R, Hartl P: Synthetic aperture radar interferometry. Inverse Probl. 1998, 14: R1-R54. 10.1088/0266-5611/14/4/001
9. Del Frate F, Schiavon G: Nonlinear principal component analysis for the radiometric inversion of atmospheric profiles by using neural networks. IEEE Trans. Geosci. Remote Sens. 1999, 37(5):2335-2342. 10.1109/36.789630
10. Licciardi G, Del Frate F: Pixel unmixing in hyperspectral data by means of neural networks. IEEE Trans. Geosci. Remote Sens. 2011, 49(11):4163-4172. 10.1109/TGRS.2011.2160950
11. Kramer M: Nonlinear principal component analysis using autoassociative neural networks. AIChE J. 1991, 37: 233. 10.1002/aic.690370209
12. Okada Y: Surface deformation due to shear and tensile faults in a half space. Bull. Seism. Soc. Am. 1985, 75(4):1135-1154.
13. Pondrelli S, Salimbeni S, Morelli A, Ekström G, Olivieri M, Boschi E: Seismic moment tensors of the April 2009, L’Aquila (Central Italy), earthquake sequence. Geophys. J. Int. 2009, 180(1):238-242. 10.1111/j.1365-246X.2009.04418.x
14. Atzori S, Hunstad I, Chini M, Salvi S, Tolomei C, Bignami C, Stramondo S, Trasatti E, Antonioli A, Boschi E: Finite fault inversion of DInSAR coseismic displacement of the 2009 L’Aquila earthquake (central Italy). Geophys. Res. Lett. 2009, 36: L15305. 10.1029/2009GL039293
15. Walters RJ, Elliott JR, D’Agostino N, England PC, Hunstad I, Jackson JA, Parsons B, Phillips RJ, Roberts G: The 2009 L’Aquila earthquake (central Italy): a source mechanism and implications for seismic hazard. Geophys. Res. Lett. 2009, 36(6):L17312. 10.1029/2009GL039337
16. Anzidei M, Boschi E, Cannelli V, Devoti R, Esposito A, Galvani A, Melini D, Pietrantonio G, Riguzzi F, Sepe V, Serpelloni E: Coseismic deformation of the destructive April 6, 2009 L’Aquila earthquake (central Italy) from GPS data. Geophys. Res. Lett. 2009, 36: L17307. 10.1029/2009GL039145
17. Kanamori H: Magnitude scale and quantification of earthquakes. Tectonophysics 1983, 93: 185-199. 10.1016/0040-1951(83)90273-1
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Picchiani, M., Del Frate, F., Schiavon, G. et al. Features extraction from SAR interferograms for tectonic applications. EURASIP J. Adv. Signal Process. 2012, 155 (2012). https://doi.org/10.1186/1687-6180-2012-155
Keywords
 Neural networks
 Nonlinear PCA
 SAR interferometry
 Discrete Fourier transform
 Discrete wavelet transform