Research | Open Access

# A novel linear feature detector for SAR images

Chu He^{1,2} (Email author), Zixian Liao^{1}, Fang Yang^{1}, Xinping Deng^{3} and Mingsheng Liao^{2}

*EURASIP Journal on Advances in Signal Processing* **2012**:235

https://doi.org/10.1186/1687-6180-2012-235

© He et al.; licensee BioMed Central Ltd. 2012

**Received:** 25 November 2011 · **Accepted:** 12 October 2012 · **Published:** 12 November 2012

## Abstract

A new linear feature detector for synthetic aperture radar (SAR) images is presented in this article by embedding a three-region filter into the wedgelet analysis framework. One of its main features is that it can detect linear features with a range of varying widths and orientations in the same image by changing the direction and size of the detector mask within a multiscale framework. In addition, this detector takes into account both statistical and geometrical characteristics to detect line segments directly instead of detecting target pixels. To show its effectiveness, the detector is applied to extract one of the most important linear features: roads. Results and comparisons with several multiscale analysis techniques as well as ratio correlation detector on DLR E-SAR images reveal its advantages.

## Keywords

- Road Segment
- Synthetic Aperture Radar
- Markov Random Field
- Synthetic Aperture Radar Image
- Linear Feature

## Introduction

In the past 30 years, plenty of approaches have been developed to extract linear features from synthetic aperture radar (SAR) images, such as the line extraction method using an MRF (Markov random field)[1], the fusion of SAR intensity and coherence data proposed by Hellwich et al.[2], the network snakes model bridging the gap between low-level feature extraction or segmentation and high-level geometric representation of objects[3], and so on. Since roads are typical linear features in SAR imagery, and their importance for construction goes without saying, many scholars are keeping an eye on road extraction from SAR imagery. A large number of methods for automatically extracting roads have been put forward. For instance, some use context information[4, 5], starting from rural areas and moving to built-up areas, taking advantage of existing context objects to acquire the corresponding roads; some adapt methods originally developed for optical imagery, such as the TUM approach[6]; others employ a mask filter containing different homogeneous regions to detect target pixels or segments[7–9]. However, choosing an appropriate size of the detector mask to accommodate variable target widths (for example, the widths of main roads and side roads in the same image) remains one of the main challenges[10, 11]. In[12, 13], the authors suggest a possible solution employing multi-resolution techniques to expand the search range for width. The method works in three steps: (1) create an image pyramid by averaging the amplitudes of 2×2 pixel blocks recursively, (2) extract features at each level of the pyramid, and (3) merge the features at different levels. However, lines can easily be degraded as a result of too much smoothing (downsampling)[14]. In addition, the merging step may affect location accuracy. In this article, we try to solve the problem by introducing multiscale analysis techniques that adjust the detector (or filter) mask instead of the image data.

Wavelets provide a robust representation for one-dimensional piecewise smooth signals, but they are poorly adapted to higher-dimensional phenomena such as edges and contours[15]. Donoho and Huo[16] suggest a multiscale image analysis framework named “beamlet analysis” in which line segments play a role analogous to that played by points in wavelet analysis. Beamlets provide a multiscale structure to represent linear or curvilinear features, but they exploit only geometrical properties, which leads to disappointing performance on radar images. Wedgelets are widely used to detect edges or contours[17, 18], but they are not well designed to extract line segments, as they employ a two-region filter mask that detects only step-structure edges effectively.

## Related study

### Fusion operator

Let *μ*_{i} and *n*_{i} be the radiometric empirical mean and the pixel count, respectively, of region *i*. The response of D1 is defined as

*r*_{ij} = 1 − min(*μ*_{i} / *μ*_{j}, *μ*_{j} / *μ*_{i})  (1)

and the response of D2 is the normalized cross-correlation coefficient *ρ*_{ij}, computed from the empirical means, variances, and pixel counts of regions *i* and *j* (2). By merging the two values using the symmetrical sum

*σ*(*x*, *y*) = *xy* / (1 − *x* − *y* + 2*xy*),

the response of the ratio correlation detector can be written as *σ*(*r*, *ρ*) (3).

At each pixel, if the maximum response within all possible directions and widths is larger than a threshold, this pixel is considered as a target pixel.
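To make the fusion concrete, the following is a minimal Python sketch of the D1 ratio response (1) and the symmetrical-sum fusion; the three-region helper and the minimum over the two sides follow the line-detector convention of Tupin et al.[7], and all function names here are ours, not the paper's.

```python
def d1_response(mu_i: float, mu_j: float) -> float:
    """Ratio response between two regions: r_ij = 1 - min(mu_i/mu_j, mu_j/mu_i)."""
    if mu_i <= 0 or mu_j <= 0:
        return 0.0
    return 1.0 - min(mu_i / mu_j, mu_j / mu_i)

def symmetric_sum(x: float, y: float) -> float:
    """Associative symmetrical sum used to merge two responses in [0, 1)."""
    return (x * y) / (1.0 - x - y + 2.0 * x * y)

def line_response(mu_center: float, mu_left: float, mu_right: float) -> float:
    """Three-region line mask: a line is present only if the central region
    contrasts with BOTH side regions, hence the min over the two sides."""
    return min(d1_response(mu_center, mu_left),
               d1_response(mu_center, mu_right))
```

For example, a bright central region (mean 120) between two dark sides (means 40 and 45) yields a response limited by the weaker of the two contrasts.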

### Wedgelet

A wedgelet is a function on a square *S* that is piecewise constant on either side of a line *l* through *S*, see Figure 2. It can be denoted by *w*(*v*_{1}, *v*_{2}, *c*_{a}, *c*_{b}), with the points *v*_{1} and *v*_{2} representing the endpoints of *l* and the constants *c*_{a} and *c*_{b} representing the mean amplitude values in the two regions. Let *I* be an image; the wedgelet approximation error over the square *S* can be written as

*e*(*s*) = min_{*w*∈*W*} Σ_{(*x*,*y*)∈*S*} (*I*(*x*, *y*) − *w*(*x*, *y*))²  (4)

where *W* is the set of all possible wedgelets on the square.

The wedgelet decomposition of an image is a partition into dyadic squares *C* that solves the optimization problem

min_{*C*} Σ_{*s*∈*C*} *e*(*s*) + *λ* · #*C*  (5)

Here #*C* is the number of elements in the set *C*, and *e*(*s*) is computed according to (4). The parameter *λ* is the complexity-penalty coefficient: a smaller *λ* makes the decomposition contain more details (more squares), whereas a larger *λ* retains only the general structures of the image.
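The inner fit in (4) can be sketched by brute force: try every line through two boundary pixels of the square and keep the best piecewise-constant fit. This is a minimal Python illustration, not the fast moment-based algorithm of[20]; the boundary discretization and strict side test are our conventions.

```python
import itertools
import numpy as np

def boundary_points(n: int):
    """All pixel coordinates on the boundary of an n-by-n square."""
    pts = set()
    for k in range(n):
        pts.update({(0, k), (n - 1, k), (k, 0), (k, n - 1)})
    return sorted(pts)

def wedgelet_error(block: np.ndarray) -> float:
    """Approximation error e(s): best two-region piecewise-constant fit over
    all splits of the square by a line through two boundary points."""
    n = block.shape[0]
    ys, xs = np.mgrid[0:n, 0:n]
    # degenerate wedgelet: a single constant region
    best = float(np.sum((block - block.mean()) ** 2))
    for (y1, x1), (y2, x2) in itertools.combinations(boundary_points(n), 2):
        # strict signed-side test of the line v1 -> v2 for every pixel
        side = (x2 - x1) * (ys - y1) - (y2 - y1) * (xs - x1) > 0
        if side.all() or (~side).all():
            continue  # line does not split the square
        ca, cb = block[side].mean(), block[~side].mean()
        err = float(np.sum((block[side] - ca) ** 2)
                    + np.sum((block[~side] - cb) ** 2))
        best = min(best, err)
    return best
```

A block that truly consists of two constant half-planes along a representable line fits with zero error, while a pattern no single line can separate keeps a positive residual.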

## Multiscale linear feature detector

Wedgelets provide nearly minimax estimation of boundaries or contours, but they are not well designed to extract linear features, see Figure 2. This is because the two-region mask they use detects step-structure edges only, while linear features usually appear as ridge edges. In addition, wedgelets lack information about the width of a line segment[7, 19]. The ratio correlation detector, on the other hand, has a three-region mask but lacks the flexibility to change mask size and direction adaptively, which results in a failure to detect features with varying widths, as shown in Figure 2. In this article, we propose a method that embeds a three-region filter mask, like that of the ratio correlation detector, into a multiscale framework. We name this detector the multiscale linear feature detector (MLFD). The MLFD can be used to extract curvilinear targets from SAR data. Three central components make up the MLFD: the detector mask, the detector response, and the linear feature detection procedure; these, together with an efficient computation method, are detailed in the remainder of this section.

### Detector mask

We denote the detector mask *r*(*v*_{1}, *v*_{2}, *w*, *μ*_{1}, *μ*_{2}, *μ*_{3}) formally, where *v*_{1}, *v*_{2} are the endpoints of the central line and *w* is the width of the central region; *μ*_{1}, *μ*_{2}, and *μ*_{3} are the mean values of the three regions. The scale *s* is defined as the sidelength of the corresponding square (the mask). During the detection, the scale is changed according to *s* = 2^{j}, *j* = 0, 1, 2, …, *n*. There are several features of this detector:

1. The pair of endpoints *v*_{1} and *v*_{2} can be set according to flexible rules. For example, they can be any two points on the square, to detect discontinuities in any direction, or points at several specified directions, to avoid a large computational load.
2. The width *w* of the central region can increase from 1 to an upper bound *w*_{max} that depends on the scale. As a result, this method can extract linear features with different widths. In particular, if the scale can be adjusted adaptively, the search range will be changed adaptively.
3. *μ*_{1}, *μ*_{2}, and *μ*_{3} are piecewise constants of the three regions determining the profile, i.e., the size and offset of the line in the mask. They have the same meaning as those in the ratio correlation detector.
4. The central region is subdivided into three subregions to test uniformity. This makes sure the mean values in those subregions are close.

In our implementation, *v*_{1} and *v*_{2} are tested over any two points on the square, and we set *w*_{max} to *s*/*δ*, with *δ* referring to the minimum scale in the detection. Thus the number of different detector masks at scale *s* is approximately

(4*s* choose 2) · (*s*/*δ*) ≈ 8*s*³/*δ*  (6)

since each pair of the roughly 4*s* boundary points defines a central line and the width ranges over *s*/*δ* values.
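This count can be checked with a few lines of Python. The exact boundary-pixel convention (4*s* − 4 pixels on a discrete square) is our assumption; whatever constants the paper's closed form uses, the count scales like *s*³/*δ*, consistent with the flop estimates later in this section.

```python
import math

def mask_count(s: int, delta: int) -> int:
    """Count distinct three-region masks on an s-by-s square: every pair of
    boundary pixels defines a central line, and the central width runs
    from 1 to w_max = s / delta."""
    boundary = 4 * s - 4              # pixels on the square's boundary
    pairs = math.comb(boundary, 2)    # candidate central lines
    w_max = s // delta                # candidate widths
    return pairs * w_max
```

Doubling the scale roughly multiplies the count by eight, which is why the multiscale decomposition is dominated by the largest squares.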

### Detector response

With *l* denoting the length and *α* denoting the uniformity coefficient of the central region, the detector response is defined as

*T*(*r*) = *l* · *α* · *σ*(*r*, *ρ*)  (7)

where *r* and *ρ* are computed according to (1) and (2). The factor *l* gets rid of lines that are too short, i.e., the ones over the four corners, by reducing their responses. Furthermore, it keeps the response of a line segment invariant whether or not the segment is divided into several parts, which is important during the multiscale decomposition. The uniformity coefficient *α* is employed to evaluate the continuity of the central region[8]. It is computed according to

*α* = min(*μ*_{a}, *μ*_{b}, *μ*_{c}) / max(*μ*_{a}, *μ*_{b}, *μ*_{c})  (8)

where *μ*_{a}, *μ*_{b}, and *μ*_{c} are the mean values of the three subregions. A higher uniformity coefficient ensures that the central region of the detector mask is a homogeneous area.

Though the response (7) is similar to the response of the ratio correlation detector (3), there are three main differences between the MLFD and the ratio correlation detector: (1) the former extracts linear features directly whereas the latter detects target points, (2) the former can detect lines in any direction while the latter can detect lines only in several specified directions, and (3) the size of the mask is adjustable rather than fixed during detection.
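A sketch of how a response of this kind might be assembled in code. Since the displayed equations are not fully reproduced here, the product form and the min/max uniformity ratio below are our assumptions, chosen so that close subregion means give a coefficient near 1; the helper names are hypothetical.

```python
def symmetric_sum(x: float, y: float) -> float:
    """Symmetrical sum merging the ratio and correlation responses."""
    return (x * y) / (1.0 - x - y + 2.0 * x * y)

def uniformity(mu_a: float, mu_b: float, mu_c: float) -> float:
    """Uniformity coefficient of the central region: close subregion means
    give a value near 1 (min/max ratio, one common choice)."""
    lo, hi = min(mu_a, mu_b, mu_c), max(mu_a, mu_b, mu_c)
    return lo / hi if hi > 0 else 0.0

def detector_response(length: float, mu_a: float, mu_b: float, mu_c: float,
                      r: float, rho: float) -> float:
    """Length and uniformity scale the fused ratio/correlation responses;
    the exact combination in the paper's (7) may differ."""
    return length * uniformity(mu_a, mu_b, mu_c) * symmetric_sum(r, rho)
```

With a perfectly uniform central region, the response reduces to the segment length times the fused contrast measure.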

### Linear feature detection

The detection decomposes an image into a set of dyadic squares *C* at different scales and computes responses according to (7) on each block. The set *C* satisfies

*C* = arg max_{*C*} { Σ_{*s*∈*C*} max_{*r*:*s*} *T*(*r*) − *λ* · #*C* }  (9)

Here *r*:*s* means that the detector mask *r* is on the square *s*; *λ* and #*C* are the same as those in (5), determining the decomposition resolution. In the decomposition, each block contains a linear feature candidate, see Figure 2.

The optimal set is found by pruning the quadtree bottom-up. If

Σ_{*i*=1}^{4} *T*_{m}(*r*_{c,i}) − 3*λ* > *T*_{m}(*r*_{p})  (10)

where *T*_{m}(*r*_{c,i}) is the maximum response of the *i*th child square and *T*_{m}(*r*_{p}) is that of its corresponding parent square, the four child squares are retained and the parent’s maximum response is updated to the value on the left side of the above formula; otherwise, they are discarded, that is, the parent square is not subdivided in the multiscale decomposition. This pruning process is repeated at each level until the root node is reached. The endpoints (*v*_{1}, *v*_{2}) and the width of the central region (*w*) of the detector constitute the detected linear features.

Suppose *N* is the width of an image (with height equal to width) and is dyadic. Then the image can be divided into (*N*/*n*)² dyadic blocks at scale *n*. Let *δ* be the minimum scale in the decomposition; then, according to (6), the count of all possible detector masks over all scales is

Σ_{*n*} (*N*/*n*)² · 8*n*³/*δ* = (8*N*²/*δ*) Σ_{*n*} *n* ≈ 16*N*³/*δ*  (11)

where *n* runs over the dyadic scales *δ*, 2*δ*, …, *N*, so that Σ *n* ≈ 2*N*.

When *N* is large, we divide the image into patches of width *ϕ* (*ϕ* < *N*), as a long line segment will in any case be broken into several parts during the decomposition, and apply the decomposition to each patch. According to (11), the total count of flops is then

(*N*/*ϕ*)² · 16*ϕ*³/*δ* = 16*N*²*ϕ*/*δ*  (12)

This cuts down 16*N*²(*N* − *ϕ*)/*δ* computation flops.
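The saving is easy to verify numerically, assuming the ≈16*N*³/*δ* total from (11):

```python
def flops_full(N: int, delta: int) -> int:
    """Mask evaluations for decomposing the full N x N image: ~16*N^3/delta."""
    return 16 * N**3 // delta

def flops_patched(N: int, phi: int, delta: int) -> int:
    """Same image processed as (N/phi)^2 independent patches of width phi."""
    return (N // phi) ** 2 * (16 * phi**3 // delta)

# With N = 1024, phi = 256, delta = 2 the patched decomposition saves
# flops_full - flops_patched = 16 * N^2 * (N - phi) / delta evaluations,
# the figure quoted in the text.
```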

## Road extraction using MLFD

Roads are identified among the detected segments by a Markov random field (MRF) model. The average response of a segment is defined as *T*(*r*)/*l*_{c}, with *T*(*r*) referring to the response (7) of the detector and *l*_{c} denoting the segment length. The energy function can be formalized as

*U*(*d*|*l*) = Σ_{*i*} *V*(*d*_{i}|*l*_{i}) + Σ_{*c*} *V*_{c}(*l*)  (13)

Here *d*_{i} and *l*_{i}, respectively, are the average response and the label (1 for a road segment and 0 for others) of a segment. *V*(*d*_{i}|*l*_{i}) is defined as the potential of a segment to be a road segment, while *V*_{c}(*l*) is the clique potential expressing a priori knowledge of roads. The algorithm is tested on two different DLR E-SAR images.

The results are evaluated by completeness and correctness measures, where *L*_{r} is the total length of the extracted roads that match the ground truth, *L*_{gt} is the ground truth (or reference data) length, *L*_{N} is the total extracted road length, and *L*_{ugt} is the length of actual roads that are unmatched with the extracted roads. Our method is compared with Tupin’s[7] and Steger’s[22] methods here. As we can see from Table 1, our method is generally better than the other two. However, results for the first image (Figure 3) are worse than results for the second image. This is mainly because of the complexity of the scene in Figure 3, which contains different types of roads (road, rail, and airstrip), forests, farmland, and buildings. Furthermore, the quality of this image is relatively low, i.e., the brightness is not even.
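From the quantities above, the two evaluation measures are naturally computed as length ratios; the formulas below follow the standard completeness/correctness definitions of Wiedemann and Ebner[27], which we assume match the paper's table.

```python
def completeness(L_r: float, L_gt: float) -> float:
    """Fraction of the reference network that is explained: L_r / L_gt."""
    return L_r / L_gt

def correctness(L_r: float, L_N: float) -> float:
    """Fraction of the extracted network that is correct: L_r / L_N."""
    return L_r / L_N
```

For instance, 80 km of matched extraction against 100 km of reference and 120 km of total extraction gives completeness 0.8 and correctness 2/3.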

## Conclusion

In this article, a new multiscale linear feature detector named the MLFD and its application to road network extraction from SAR images are presented. Thanks to its multiscale structure, the MLFD can extract linear features with a range of widths and orientations in the same scene without specifying the size or orientation of the detector mask in advance. This is of great use, since a growing number of remote sensing images of varying resolutions, covering large scenes and containing linear targets of different widths, are available. The global optimization model employed in this article is the MRF model, which can be replaced with other models to meet different requirements; for example, a junction-aware MRF model could be used to improve the overall performance at road intersections.

Future study will focus on improving the calculation of responses of different linear features in different scenes based on the detector scale. The multiscale analysis framework also provides opportunities to develop road grouping algorithms[28, 29].

## Declarations

### Acknowledgements

The authors would like to thank several authorities for providing data for the experiments, especially Intermap Corp. and Bayrisches Landesvermessungsamt Muenchen, IHR/DLR (E-SAR data), Aerosensing GmbH (AeS-1 data) as well as the DDRE (EmiSAR data). This study was supported by the National Basic Research Program of China (973 program) (No. 2013CB733404), NSFC Grant (Nos. 60702041, 41174120), the China Postdoctoral Science Foundation funded project and the LIESMARS Special Research Funding.

## Authors’ Affiliations

## References

- Hellwich O, Mayer H: Extraction of line features from synthetic aperture radar (SAR) scenes using a Markov random field model. In *IEEE International Conference on Image Processing (ICIP), Proceedings*, vol 3 (Lausanne, 1996), pp. 883–886
- Hellwich O: Fusion of SAR intensity and coherence data in a Markov random field model for line extraction. In *International Geoscience and Remote Sensing Symposium 98* (IEEE Computer Society Press, Seattle, 1998), pp. 165–167
- Butenuth M, Heipke C: Network snakes: graph-based object delineation with active contour models. *Mach. Vision Appl.* 2011, 23: 91–109
- Wessel B, Hinz S: Context-supported road extraction from SAR imagery: transition from rural to built-up areas. In *Proc. 5th European Conference on Synthetic Aperture Radar (EUSAR)*, vol 1 (VDE, Berlin, 2004), pp. 399–403
- Wessel B, Wiedemann C, Ebner H: The role of context for road extraction from SAR imagery. *Proc. IEEE Int. Geosci. Remote Sens. Symp.* 2003, 5: 4025–4027
- Wessel B, Wiedemann C: Analysis of automatic road extraction results from airborne SAR imagery. In *Proceedings of the ISPRS Conference “Photogrammetric Image Analysis”*, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2003, pp. 105–110
- Tupin F, Maitre H, Mangin JF, Nicolas JM, Pechersky E: Detection of linear features in SAR images: application to road network extraction. *IEEE Trans. Geosci. Remote Sens.* 1998, 36(10): 434–453
- Huber R, Lang K: Road extraction from high-resolution airborne SAR using operator fusion. In *Proc. IGARSS*, vol 6 (Sydney, Australia, 2001), pp. 2813–2815
- Gamba P, Dell’Acqua F, Lisini G: Improving urban road extraction in high-resolution images exploiting directional filtering, perceptual grouping, and simple topological concepts. *IEEE Geosci. Remote Sens. Lett.* 2006, 3(3): 387–391
- Hedman K, Stilla U, Lisini G, Gamba P: Road network extraction in VHR SAR images of urban and suburban areas by means of class-aided feature-level fusion. *IEEE Trans. Geosci. Remote Sens.* 2010, 48(3): 1294–1296
- Butenuth M: Geometric refinement of road networks using network snakes and SAR images. In *Proc. IGARSS* (Honolulu, Hawaii, USA, 2010), pp. 449–452
- Lisini G, Tison C, Tupin F, Gamba P: Feature fusion to improve road network extraction in high-resolution SAR images. *IEEE Geosci. Remote Sens. Lett.* 2006, 3(2): 217–221
- Tupin F, Houshmand B, Datcu M: Road detection in dense urban areas using SAR imagery and the usefulness of multiple views. *IEEE Trans. Geosci. Remote Sens.* 2002, 40(11): 2405–2414
- Kozaitis SP, Cofer RH: Linear feature detection using multiresolution wavelet filters. *Photogramm. Eng. Remote Sens.* 2005, 71(6): 689–697
- Candes EJ, Donoho DL: Ridgelets: a key to higher-dimensional intermittency. *Phil. Trans. R. Soc. Lond. A* 1999, 357: 2495–2509
- Donoho DL, Huo X: Beamlets and multiscale image analysis. In *Multiscale and Multiresolution Methods*, Lecture Notes in Computational Science and Engineering, vol 20 (Springer, 2002), pp. 149–196
- Donoho DL: Wedgelets: nearly-minimax estimation of edges. *Ann. Statist.* 1999, 27: 859–897
- Romberg JK, Wakin M, Baraniuk R: Multiscale wedgelet image analysis: fast decompositions and modeling. *IEEE Int. Conf. Image Proc.* 2002, 2: 585–588
- Zhou G, Cui Y, Chen Y, Yang JY, Rashvand H, Yamaguchi Y: Linear feature detection in polarimetric SAR images. *IEEE Trans. Geosci. Remote Sens.* 2011, 49(4): 1453–1463
- Friedrich F, Demaret L, Fuhr H, Wicker K: Efficient moment computation over polygonal domains with an application to rapid wedgelet approximation. *SIAM J. Sci. Comput.* 2007, 29(2): 842–863
- Xu G, Sun H, Yang W, Shuai Y: An improved road extraction method based on MRFs in rural areas for SAR images. In *Proc. 1st Asian Pacific Conf. Synthetic Aperture Radar* (Huangshan, Anhui, China, 2007), pp. 489–492
- Steger C: An unbiased detector of curvilinear structures. *IEEE Trans. Pattern Anal. Mach. Intell.* 1998, 20(2): 113–125
- Negri M, Gamba P, Lisini G, Tupin F: Junction-aware extraction and regularization of urban road networks in high-resolution SAR images. *IEEE Trans. Geosci. Remote Sens.* 2006, 44(10): 2962–2971
- Hedman K, Hinz S, Stilla U: Road extraction from SAR multi-aspect data supported by a statistical context-based fusion. In *Proc. Urban Remote Sensing Joint Event* (Paris, France, 2007), pp. 1–6
- Amberg V, Coulon M, Marthon P, Spigai M: Improvement of road extraction in high resolution SAR data by a context-based approach. In *Proc. IGARSS* (Seoul, Korea, 2005), pp. 490–493
- Mayer H, Hinz S, Bacher U, Baltsavias E: A test of automatic road extraction approaches. *International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences*, vol 36 (2005), pp. 209–214
- Wiedemann C, Ebner H: Automatic completion and evaluation of road networks. *Proc. Int. Arch. Photogramm. Remote Sens.*, vol XXXIII, part 3B (Amsterdam, Netherlands, 2000), pp. 979–986
- Hinz S, Baumgartner A: Automatic extraction of urban road networks from multi-view aerial imagery. *ISPRS J. Photogramm. Remote Sens.* 2000, 58: 83–98
- Baumgartner A, Hinz S: Multi-scale road extraction using local and global grouping criteria. *International Archives of Photogrammetry and Remote Sensing*, vol 33, part B3 (2003), pp. 58–65

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.