- Open Access
A novel linear feature detector for SAR images
© He et al.; licensee BioMed Central Ltd. 2012
Received: 25 November 2011
Accepted: 12 October 2012
Published: 12 November 2012
A new linear feature detector for synthetic aperture radar (SAR) images is presented in this article by embedding a three-region filter into the wedgelet analysis framework. One of its main features is that it can detect linear features with a range of widths and orientations in the same image by changing the direction and size of the detector mask within a multiscale framework. In addition, this detector takes into account both statistical and geometrical characteristics to detect line segments directly instead of detecting target pixels. To show its effectiveness, the detector is applied to extract one of the most important linear features: roads. Results and comparisons with several multiscale analysis techniques, as well as with the ratio correlation detector, on DLR E-SAR images reveal its advantages.
In the past 30 years, many approaches have been developed to extract linear features from synthetic aperture radar (SAR) images, such as line extraction using a Markov random field (MRF) model, the fusion of SAR intensity and coherence data proposed by Hellwich et al., and the network snakes model, which bridges the gap between low-level feature extraction or segmentation and high-level geometric representation of objects. Since roads are typical linear features in SAR imagery and their importance for construction goes without saying, many scholars have focused on road extraction from SAR imagery. Numerous methods for automatic road extraction have been put forward: some use context information[4, 5], proceeding from rural areas to built-up areas and taking advantage of existing context objects to acquire the corresponding roads; some adapt methods originally developed for optical imagery, such as the TUM approach; others employ a mask filter containing different homogeneous regions to detect target pixels or segments[7–9]. However, choosing an appropriate detector mask size to accommodate variable target widths, for example the widths of main roads and side roads in the same image, remains one of the main challenges[10, 11]. In[12, 13], the authors suggest a possible solution employing multi-resolution techniques to expand the search range for width. The method works in three steps: (1) create an image pyramid by recursively averaging the amplitudes of 2×2 pixel blocks, (2) extract features at each level of the pyramid, and (3) merge the features across levels. However, lines are easily degraded by excessive smoothing (down-sampling), and the merging step may affect location accuracy. In this article, we address the problem by introducing multiscale analysis techniques that adjust the detector (or filter) mask instead of the image data.
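Step (1) of the multi-resolution approach described above can be sketched as follows. This is a minimal NumPy illustration of recursive 2×2 amplitude averaging; the function name and cropping behaviour for odd dimensions are our own choices, not part of the cited method.

```python
import numpy as np

def build_pyramid(img, levels):
    """Build an image pyramid by recursively averaging 2x2 pixel blocks.

    Each level halves both dimensions; odd-sized images are cropped to
    even dimensions before averaging (an illustrative convention).
    """
    pyramid = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape
        # Crop to even dimensions, then average each 2x2 block.
        top = pyramid[-1][: h - h % 2, : w - w % 2]
        down = top.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(down)
    return pyramid
```

Because each level is a plain block average, fine linear structures one or two pixels wide are quickly smoothed away, which is exactly the degradation the article points out.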
Wavelets provide a robust representation for one-dimensional piecewise smooth signals, but they are poorly adapted to higher-dimensional phenomena such as edges and contours. Donoho and Huo suggested a multiscale image analysis framework named “beamlet analysis”, in which line segments play a role analogous to that played by points in wavelet analysis. Beamlets provide a multiscale structure to represent linear or curvilinear features, but they exploit only geometrical properties, leading to disappointing performance on radar images. Wedgelets are widely used to detect edges or contours[17, 18], but they are not well suited to extracting line segments, since they employ a two-region filter mask that effectively detects only step-structure edges.
At each pixel, if the maximum response over all possible directions and widths is larger than a threshold, the pixel is considered a target pixel.
where W is the set of all possible wedgelets on the square.
Here #C is the number of elements in the set C, and e(s) is computed according to (4). The parameter λ is the complexity-penalty coefficient. A smaller λ makes the decomposition retain more details (more squares), whereas a larger λ keeps only the general structures in the image.
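The trade-off controlled by λ can be illustrated with a small quadtree sketch: a square is subdivided only when the children's total cost (fit error plus λ per square) beats the parent's. Here e(s) is approximated by the squared deviation from the block mean, an illustrative stand-in for the wedgelet fit error in (4); the function names are our own.

```python
import numpy as np

def cpd(img, lam, min_size=2):
    """Complexity-penalized quadtree decomposition sketch.

    Returns a list of (row, col, size) leaf squares. A square is split
    only if the four children's total error-plus-penalty is smaller
    than the parent's, so larger lam yields coarser partitions.
    """
    def err(block):
        # Stand-in for e(s): squared deviation from the block mean.
        return float(((block - block.mean()) ** 2).sum())

    def rec(r0, c0, size):
        block = img[r0:r0 + size, c0:c0 + size]
        if size <= min_size:
            return [(r0, c0, size)], err(block) + lam
        half = size // 2
        kids, cost = [], 0.0
        for dr in (0, half):
            for dc in (0, half):
                k, c = rec(r0 + dr, c0 + dc, half)
                kids += k
                cost += c
        parent_cost = err(block) + lam
        if cost < parent_cost:
            return kids, cost
        return [(r0, c0, size)], parent_cost

    leaves, _ = rec(0, 0, img.shape[0])
    return leaves
```

On a constant image the penalty dominates and the root square is kept whole; on a detailed image with small λ the decomposition splits down to the smallest squares.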
Multiscale linear feature detector
Wedgelets provide nearly minimax estimation of boundaries or contours, but they are not well designed to extract linear features, see Figure 2. This is because the two-region mask they use detects only step-structure edges, while linear features usually appear as ridge edges. In addition, wedgelets carry no information about the width of a line segment[7, 19]. The ratio correlation detector, on the other hand, has a three-region mask but lacks the flexibility to change mask size and direction adaptively, which results in a failure to detect features with varying widths, as shown in Figure 2. In this article, we propose a method that embeds a three-region filter mask, that of the ratio correlation detector, in a multiscale framework. We name this detector the multiscale linear feature detector (MLFD). The MLFD can be used to extract curvilinear targets from SAR data. Its three central components, the detector mask, the detector response, and the linear feature detection procedure, together with an efficient computation method, are detailed in the remainder of this section.
The pair of endpoints v_1 and v_2 can be set according to flexible rules. For example, they can be any two points on the square, to detect discontinuities in any direction, or points at several specified directions, to limit the computational load.
The width w of the central region can increase from 1 to an upper bound w_max that depends on the scale. As a result, the method can extract linear features with different widths. In particular, if the scale is adjusted adaptively, the search range changes with it.
μ_1, μ_2, and μ_3 are the piecewise-constant values of the three regions, determining the profile, i.e., the size and offset of the line within the mask. They have the same meaning as in the ratio correlation detector.
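As an illustration of how a response can be computed from these three region means, the following sketch uses the classical ratio form of Tupin et al., which the article's response is compared against; the exact formula used by the MLFD may differ, and the function name is our own.

```python
def ratio_response(mu1, mu2, mu3):
    """Ratio-type line response for a three-region mask (assumed form).

    mu1 is the mean of the central region, mu2 and mu3 the means of the
    two side regions. The response is close to 1 only when the centre
    contrasts with BOTH sides, which is what distinguishes a ridge
    (line) from a single step edge.
    """
    r12 = 1.0 - min(mu1 / mu2, mu2 / mu1)  # contrast against side 2
    r13 = 1.0 - min(mu1 / mu3, mu3 / mu1)  # contrast against side 3
    return min(abs(r12), abs(r13))
```

Note that a step edge (centre matching one side) scores near zero, since the minimum over the two side contrasts is taken; this is why a three-region mask suits ridge-like features where a two-region wedgelet mask does not.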
The central region is subdivided into three subregions to test uniformity. This check ensures that the mean values of those subregions are close.
where μ_a, μ_b, and μ_c are the mean values of the three subregions. A higher uniformity coefficient indicates that the central region of the detector mask is a homogeneous area.
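A simple way to express such a coefficient, shown here purely as an illustration since the article's exact formula is not reproduced in this excerpt, is the ratio of the smallest to the largest subregion mean:

```python
def uniformity(mu_a, mu_b, mu_c):
    """Illustrative uniformity coefficient for the three subregions of
    the central region (assumed definition: min/max ratio of means).

    A value near 1 indicates the central region is homogeneous along
    its length; a low value flags a central region that spans different
    surfaces and should not be accepted as a single line segment.
    """
    means = (mu_a, mu_b, mu_c)
    return min(means) / max(means)
```

Gating the detector response with such a coefficient suppresses false alarms where the central region crosses several distinct surfaces.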
Though the response (7) is similar to the response of the ratio correlation detector (3), there are three main differences between the MLFD and the ratio correlation detector: (1) the former extracts linear features directly whereas the latter detects target points, (2) the former can detect lines in any direction while the latter can detect lines only in several specified directions, and (3) the width (or height) of the mask is adjustable rather than fixed during detection.
Linear feature detection
Here r:s means that the detector mask r is on the square s. λ and #C are the same as in (5), determining the decomposition resolution. In the decomposition, each block contains a linear feature candidate, see Figure 2.
where T_m(r_c,i) is the maximum response of the i-th child square and T_m(r_p) is that of its corresponding parent square. If the condition holds, the four child squares are retained and the parent's maximum response is updated to the value on the left-hand side of the formula; otherwise they are discarded, that is, the parent square is not subdivided in the multiscale decomposition. This pruning process is repeated at each level until the root node is reached. The endpoints (v_1, v_2) and the central-region width w of the detector constitute the detected linear features.
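The bottom-up pruning rule can be sketched as follows. The tree layout (a dict with a response and an optional list of four children) is our own illustration; the retained/discarded decision follows the comparison of maximum responses described above.

```python
def prune(node):
    """Bottom-up pruning of a response quadtree (illustrative sketch).

    node = {'resp': float, 'children': [four child nodes] or None}.
    A parent keeps its four children only if the best child's maximum
    response exceeds the parent's own; in that case the parent's
    maximum response is updated to the best child value. Otherwise the
    children are discarded and the parent square is not subdivided.
    """
    if node['children'] is None:
        return node
    kids = [prune(c) for c in node['children']]
    best = max(c['resp'] for c in kids)
    if best > node['resp']:
        node['resp'] = best      # propagate the stronger response upward
        node['children'] = kids  # retain the subdivision
    else:
        node['children'] = None  # parent square is not subdivided
    return node
```

Applied from the leaves up to the root, this keeps fine subdivisions only where a child square contains a stronger linear feature than its parent, which is how the detector adapts its scale locally.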
This reduces the computation by 16N²(N − ϕ)/δ flops.
Road extraction using MLFD
Here d_i and l_i are, respectively, the average response and the label (1 for a road segment and 0 otherwise) of a segment. V(d_i|l_i) is defined as the potential of a segment to be a road segment, while V_c(l) is the clique potential expressing a priori knowledge of roads. The algorithm is tested on two different DLR E-SAR images.
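The structure of such an energy can be sketched as below. The concrete potentials are illustrative assumptions, since the article's exact forms of V(d_i|l_i) and V_c(l) are not reproduced in this excerpt: the data term rewards labelling high-response segments as roads, and a simple Potts-style clique term encourages neighbouring segments to agree.

```python
def energy(responses, labels, edges, lam=1.0):
    """Illustrative MRF energy for road labelling (assumed potentials).

    responses[i] -- average detector response d_i of segment i
    labels[i]    -- l_i (1 for a road segment, 0 otherwise)
    edges        -- pairs (i, j) of neighbouring segments
    lam          -- weight of the clique (smoothness) term

    Lower energy is better: labelling a high-response segment as road
    lowers the data term, and disagreeing neighbours pay a penalty.
    """
    data = sum(-r if l == 1 else r for r, l in zip(responses, labels))
    smooth = sum(1.0 for i, j in edges if labels[i] != labels[j])
    return data + lam * smooth
```

Minimizing such an energy over all labelings (e.g., by simulated annealing or graph cuts) yields the globally consistent road network rather than isolated high-response segments.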
In this article, a new multiscale linear feature detector named the MLFD and its application to road network extraction from SAR images are presented. Owing to its multiscale framework, the MLFD can extract linear features with a range of widths and orientations in the same scene without specifying the size or orientation of the detector mask. This is of great use, since a growing number of remote sensing images of varying resolutions are available that cover large scenes and contain linear targets of different widths. The global optimization model employed in this article is the MRF model, which can be replaced with other models to meet different requirements. For example, the junction-aware MRF model could be used to improve the overall performance at road intersections.
Future study will focus on improving the calculation of responses of different linear features in different scenes based on the detector scale. The multiscale analysis framework also provides opportunities to develop road grouping algorithms[28, 29].
The authors would like to thank several authorities for providing data for the experiments, especially Intermap Corp. and Bayrisches Landesvermessungsamt Muenchen, IHR/DLR (E-SAR data), Aerosensing GmbH (AeS-1 data) as well as the DDRE (EmiSAR data). This study was supported by the National Basic Research Program of China (973 program) (No. 2013CB733404), NSFC Grant (Nos. 60702041, 41174120), the China Postdoctoral Science Foundation funded project and the LIESMARS Special Research Funding.
- Hellwich O, Mayer H: Extraction of line features from synthetic aperture radar (SAR) scenes using a Markov random field model. In Proc. IEEE International Conference on Image Processing (ICIP), vol 3 (Lausanne, 1996), pp. 883–886
- Hellwich O: Fusion of SAR intensity and coherence data in a Markov random field model for line extraction. In Proc. International Geoscience and Remote Sensing Symposium (IEEE Computer Society Press, Seattle, 1998), pp. 165–167
- Butenuth M, Heipke C: Network snakes: graph-based object delineation with active contour models. Mach. Vision Appl. 2011, 23: 91–109
- Wessel B, Hinz S: Context-supported road extraction from SAR imagery: transition from rural to built-up areas. In Proc. 5th European Conference on Synthetic Aperture Radar (EUSAR), vol 1 (VDE, Berlin, 2004), pp. 399–403
- Wessel B, Wiedemann C, Ebner H: The role of context for road extraction from SAR imagery. In Proc. IEEE Int. Geosci. Remote Sens. Symp. 2003, 5: 4025–4027
- Wessel B, Wiedemann C: Analysis of automatic road extraction results from airborne SAR imagery. In Proc. ISPRS Conference “Photogrammetric Image Analysis”, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2003, pp. 105–110
- Tupin F, Maitre H, Mangin JF, Nicolas JM, Pechersky E: Detection of linear features in SAR images: application to road network extraction. IEEE Trans. Geosci. Remote Sens. 1998, 36(10): 434–453
- Huber R, Lang K: Road extraction from high-resolution airborne SAR using operator fusion. In Proc. IGARSS, vol 6 (Sydney, Australia, 2001), pp. 2813–2815
- Gamba P, Dell’Acqua F, Lisini G: Improving urban road extraction in high-resolution images exploiting directional filtering, perceptual grouping, and simple topological concepts. IEEE Geosci. Remote Sens. Lett. 2006, 3(3): 387–391
- Hedman K, Stilla U, Lisini G, Gamba P: Road network extraction in VHR SAR images of urban and suburban areas by means of class-aided feature-level fusion. IEEE Trans. Geosci. Remote Sens. 2010, 48(3): 1294–1296
- Butenuth M: Geometric refinement of road networks using network snakes and SAR images. In Proc. IGARSS (Honolulu, Hawaii, USA, 2010), pp. 449–452
- Lisini G, Tison C, Tupin F, Gamba P: Feature fusion to improve road network extraction in high-resolution SAR images. IEEE Geosci. Remote Sens. Lett. 2006, 3(2): 217–221
- Tupin F, Houshmand B, Datcu M: Road detection in dense urban areas using SAR imagery and the usefulness of multiple views. IEEE Trans. Geosci. Remote Sens. 2002, 40(11): 2405–2414
- Kozaitis SP, Cofer RH: Linear feature detection using multiresolution wavelet filters. Photogramm. Eng. Remote Sens. 2005, 71(6): 689–697
- Candes EJ, Donoho DL: Ridgelets: a key to higher-dimensional intermittency. Phil. Trans. R. Soc. Lond. A 1999, 357: 2495–2509
- Donoho DL, Huo X: Beamlets and multiscale image analysis. In Multiscale and Multiresolution Methods, Lecture Notes in Computational Science and Engineering, vol 20 (Springer, 2002), pp. 149–196
- Donoho DL: Wedgelets: nearly-minimax estimation of edges. Ann. Statist. 1999, 27: 859–897
- Romberg JK, Wakin M, Baraniuk R: Multiscale wedgelet image analysis: fast decompositions and modeling. In Proc. IEEE Int. Conf. Image Process. 2002, 2: 585–588
- Zhou G, Cui Y, Chen Y, Yang J, Rashvand H, Yamaguchi Y: Linear feature detection in polarimetric SAR images. IEEE Trans. Geosci. Remote Sens. 2011, 49(4): 1453–1463
- Friedrich F, Demaret L, Fuhr H, Wicker K: Efficient moment computation over polygonal domains with an application to rapid wedgelet approximation. SIAM J. Sci. Comput. 2007, 29(2): 842–863
- Xu G, Sun H, Yang W, Shuai Y: An improved road extraction method based on MRFs in rural areas for SAR images. In Proc. 1st Asian Pacific Conf. Synthetic Aperture Radar (Huangshan City, Anhui, China, 2007), pp. 489–492
- Steger C: An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20(2): 113–125
- Negri M, Gamba P, Lisini G, Tupin F: Junction-aware extraction and regularization of urban road networks in high-resolution SAR images. IEEE Trans. Geosci. Remote Sens. 2006, 44(10): 2962–2971
- Hedman K, Hinz S, Stilla U: Road extraction from SAR multi-aspect data supported by a statistical context-based fusion. In Proc. Urban Remote Sensing Joint Event (Paris, France, 2007), pp. 1–6
- Amberg V, Coulon M, Marthon P, Spigai M: Improvement of road extraction in high resolution SAR data by a context-based approach. In Proc. IGARSS (Seoul, Korea, 2005), pp. 490–493
- Mayer H, Hinz S, Bacher U, Baltsavias E: A test of automatic road extraction approaches. In International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol 36 (2005), pp. 209–214
- Wiedemann C, Ebner H: Automatic completion and evaluation of road networks. In Int. Arch. Photogramm. Remote Sens., vol XXXIII, part 3B (Amsterdam, The Netherlands, 2000), pp. 979–986
- Hinz S, Baumgartner A: Automatic extraction of urban road networks from multi-view aerial imagery. ISPRS J. Photogramm. Remote Sens. 2000, 58: 83–98
- Baumgartner A, Hinz S: Multi-scale road extraction using local and global grouping criteria. In International Archives of Photogrammetry and Remote Sensing, vol 33, part B3 (Elsevier, 2003), pp. 58–65
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.