
A novel linear feature detector for SAR images


A new linear feature detector for synthetic aperture radar (SAR) images is presented in this article, built by embedding a three-region filter into the wedgelet analysis framework. One of its main features is that it can detect linear features with a range of widths and orientations in the same image by changing the direction and size of the detector mask within a multiscale framework. In addition, the detector takes into account both statistical and geometrical characteristics, detecting line segments directly instead of detecting target pixels. To show its effectiveness, the detector is applied to extract one of the most important linear features: roads. Results and comparisons with several multiscale analysis techniques as well as the ratio correlation detector on DLR E-SAR images reveal its advantages.


Introduction

In the past 30 years, many approaches have been developed to extract linear features from synthetic aperture radar (SAR) images, such as line extraction using a Markov random field (MRF) model[1], the fusion of SAR intensity and coherence data proposed by Hellwich et al.[2], and the network snakes model, which bridges the gap between low-level feature extraction or segmentation and high-level geometric representation of objects[3]. Since roads are typical linear features in SAR imagery, and their importance for construction goes without saying, many scholars have focused on road extraction from SAR imagery. A large number of automatic road extraction methods have been put forward. Some use context information[4, 5], proceeding from rural areas to built-up areas and taking advantage of existing context objects to acquire the corresponding roads; some adapt methods originally developed for optical imagery, such as the TUM approach[6]; others employ a mask filter containing different homogeneous regions to detect target pixels or segments[7–9]. However, choosing an appropriate detector mask size to cover variable target widths, for example the widths of main roads and side roads in the same image, remains one of the main challenges[10, 11]. In[12, 13], the authors suggest a possible solution employing multi-resolution techniques to expand the search range for width. The method works in three steps: (1) create an image pyramid by recursively averaging the amplitudes of 2×2 pixel blocks, (2) extract features at each level of the pyramid, and (3) merge the features across levels. However, lines are easily degraded by too much smoothing (down-sampling)[14], and the merging step may affect location accuracy. In this article, we address the problem by using multiscale analysis techniques to adjust the detector (filter) mask instead of the image data.

Wavelets provide a robust representation for one-dimensional piecewise smooth signals, but they are poorly adapted to higher-dimensional phenomena such as edges and contours[15]. Donoho and Huo[16] suggest a multiscale image analysis framework named “beamlet analysis” in which line segments play a role analogous to that played by points in wavelet analysis. Beamlets provide a multiscale structure to represent linear or curvilinear features, but they exploit only geometrical properties and thus perform disappointingly on radar images. Wedgelets are widely used to detect edges or contours[17, 18], but they are not well suited to extracting line segments because they employ a two-region filter mask that detects only step-structure edges effectively.

In this article, a multiscale linear feature detector (MLFD) is presented which extracts line segments directly from SAR images. It was inspired mainly by two sources: (i) the ratio correlation detector of Tupin et al.[7], and (ii) the multiscale framework of wedgelets[17]. Figure 1 shows the flowchart of our algorithm. The MLFD embeds a three-region mask filter into the wedgelet analysis framework, and the central region is subdivided into three sub-regions to test its uniformity. The detector has several advantages: (1) it can change the size of the mask adaptively, (2) it can change the width and direction of the central region adaptively, (3) it detects linear features directly instead of detecting pixels, and (4) it takes into account both statistical and geometrical properties. In[19], another multiscale framework, curvelets, is employed to extract linear features from SAR images; however, that approach uses curvelets only to locate the regions containing linear targets, and the size of the detector mask must be fixed before detection.

Figure 1

The overview of linear feature detection using MLFD. The partition subdivides the image into blocks at different scales. A tree pruning process determines whether or not to subdivide a square based on the detector responses. In the linear feature detection, the lines connecting v1 and v2 and the width of the central region make up the results.

Related study

Fusion operator

The fusion operator[7], proposed by Tupin et al. and shown in Figure 2, exploits a three-region mask filter to detect pixels on linear targets. Its response is the result of fusing the responses of a ratio line detector (D1) and a cross-correlation line detector (D2). Let μ_i and n_i be the empirical radiometric mean and the pixel count, respectively, of region i. The response of D1 is defined as

r = min(r_12, r_13)        (1)

where r_ij = 1 − min(μ_i/μ_j, μ_j/μ_i), and the response of D2 is computed according to

ρ = min(ρ_12, ρ_13)        (2)
Figure 2

Comparison of linear feature detection results among the fusion operator (first row), wedgelets (second row), and MLFD (last row). The first column shows the masks employed by each method. The second and third columns show results on synthesized images with horizontal and vertical lines of different widths, respectively; the red line segments are the detected results. The last column shows results on real SAR data.


ρ_ij² = 1 / (1 + (n_i + n_j)(n_i σ_i² + n_j σ_j²) / (n_i n_j (μ_i − μ_j)²))

with σ_i² denoting the variance of amplitudes in region i. By merging the two values with the symmetrical sum, the response of the ratio correlation detector can be written as

γ = rρ / (1 − r − ρ + 2rρ),  r, ρ ∈ [0, 1]        (3)

At each pixel, if the maximum response over all possible directions and widths is larger than a threshold, the pixel is considered a target pixel.
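The two responses and their symmetrical sum can be sketched in a few lines of Python. This is a minimal illustration, not the implementation of [7]: the function name is ours, each region is assumed to be handed in as a flat array of amplitude samples, and the symmetrical-sum denominator 1 − r − ρ + 2rρ is the form reconstructed from (3).

```python
import numpy as np

def fusion_response(central, side1, side2):
    """Ratio-correlation response of a three-region mask.

    `central` is the central region, `side1`/`side2` the two side
    regions, each given as a 1-D array of amplitude samples."""
    regions = [np.asarray(a, float) for a in (central, side1, side2)]
    n = [len(a) for a in regions]
    mu = [a.mean() for a in regions]
    var = [a.var() for a in regions]

    def r_ij(i, j):                       # ratio line detector D1
        return 1.0 - min(mu[i] / mu[j], mu[j] / mu[i])

    def rho_ij(i, j):                     # cross-correlation detector D2
        num = n[i] * n[j] * (mu[i] - mu[j]) ** 2
        if num == 0.0:
            return 0.0
        den = (n[i] + n[j]) * (n[i] * var[i] + n[j] * var[j])
        return np.sqrt(1.0 / (1.0 + den / num))

    r = min(r_ij(0, 1), r_ij(0, 2))
    rho = min(rho_ij(0, 1), rho_ij(0, 2))
    gamma = r * rho / (1.0 - r - rho + 2.0 * r * rho)   # symmetrical sum
    return r, rho, gamma
```

A bright homogeneous central stripe over dark sides yields responses near 1, while three identical regions yield 0, which is the behaviour the thresholding step above relies on.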


Wedgelet decomposition

A wedgelet[17] is a function on a square S that is piecewise constant on either side of a line l through S, see Figure 2. It can be denoted by w(v1, v2, c_a, c_b), with the points v1 and v2 representing the endpoints of l and the constants c_a and c_b representing the mean amplitude values in the two regions. Let I be an image; the wedgelet approximation error over the square S can be written as

e(S) = min_{w ∈ W} ||I(S) − w||²        (4)

where W is the set of all possible wedgelets on the square.

Given an image, the wedgelet decomposition subdivides it into a collection of non-overlapping dyadic squares C that solves the optimization problem

min_C { Σ_{s ∈ C} e(s) + λ #C }        (5)

Here #C is the number of elements in the set C, and e(s) is computed according to (4). The parameter λ is the complexity-penalty coefficient: a smaller λ makes the decomposition contain more details (more squares), whereas a larger λ retains only the general structures in the image.

Multiscale linear feature detector

Wedgelets provide nearly minimax estimation of boundaries or contours, but they are not well designed to extract linear features, see Figure 2. This is because the two-region mask they use detects only step-structure edges, while linear features usually appear as ridge edges. In addition, wedgelets carry no information about the width of a line segment[7, 19]. The ratio correlation detector, on the other hand, has a three-region mask but lacks the flexibility to change the mask size and direction adaptively, which makes it fail on features with varying widths, as shown in Figure 2. In this article, we propose a method that applies a three-region filter mask, that of the ratio correlation detector, in a multiscale manner. As already mentioned, we name this detector the multiscale linear feature detector (MLFD). The MLFD can be used to extract curvilinear targets from SAR data. Its three central components, the detector mask, the detector response, and the linear feature detection, together with an efficient computation method, are detailed in the remainder of this section.

Detector mask

As shown in Figure 2, the detector employs a three-region mask denoted formally by r(v1, v2, w, μ1, μ2, μ3), where v1 and v2 are the endpoints of the central line and w is the width of the central region; μ1, μ2, and μ3 are the mean values of the three regions. The scale s is defined as the sidelength of the corresponding square (the mask). During detection, the scale is changed according to s = 2^j, j = 0, 1, 2, …, n. This detector has several features:

  1. The pair of endpoints v1 and v2 can be set according to flexible rules. For example, they can be any two points on the square to detect discontinuities in any direction, or points at several specified directions to avoid a large computational load.

  2. The width w of the central region can increase from 1 to an upper bound w_max that depends on the scale. As a result, the method can extract linear features with different widths. In particular, if the scale is adjusted adaptively, the search range changes adaptively as well.

  3. μ1, μ2, and μ3 are the piecewise constants of the three regions determining the profile, i.e., the size and offset of the line in the mask. They have the same meaning as in the ratio correlation detector.

  4. The central region is subdivided into three subregions to test uniformity, ensuring that the mean values in those subregions are close.

In this article, the endpoints v1 and v2 may be any two points on the square, and we set w_max to s/δ, with δ referring to the minimum scale in the detection. Thus the number of different detector masks at scale s is

N(s) ≈ (4s)²/2 × w_max = 8s³/δ.        (6)

Detector response

With l denoting the length and α denoting the uniformity coefficient of the central region, the detector response is defined as

T(r) = lα rρ / (1 − r − ρ + 2rρ),  r, ρ ∈ [0, 1]        (7)

where r and ρ are computed according to (1) and (2). The factor l suppresses lines that are too short, i.e., those cutting across the four corners, by reducing their responses. Furthermore, it keeps the response of a line segment invariant whether or not the segment is divided into several parts, which is important during the multiscale decomposition. The uniformity coefficient α is employed to evaluate the continuity of the central region[8]. It is computed according to the following equation:

α = min(μ_a/μ_b, μ_b/μ_a) × min(μ_b/μ_c, μ_c/μ_b)        (8)

where μ_a, μ_b, and μ_c are the mean values of the three subregions. A high uniformity coefficient ensures that the central region of the detector mask is a homogeneous area.
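The uniformity coefficient and the length- and uniformity-weighted response can be written out directly. This is a sketch under our reading of the formulas: the symmetrical-sum denominator 1 − r − ρ + 2rρ is assumed from (3), and the subregion means are assumed to be precomputed.

```python
def uniformity(mu_a, mu_b, mu_c):
    """Uniformity coefficient of the central region, computed from the
    mean amplitudes of its three subregions; equals 1 only when the
    three means are identical."""
    return (min(mu_a / mu_b, mu_b / mu_a) *
            min(mu_b / mu_c, mu_c / mu_b))

def mlfd_response(length, alpha, r, rho):
    """MLFD response: the symmetrical sum of the D1 and D2 responses
    (both in [0, 1]), weighted by segment length and uniformity."""
    return length * alpha * r * rho / (1.0 - r - rho + 2.0 * r * rho)
```

Note how the length weight makes a long segment score the same whether it is measured whole or as the sum of its parts, which is the invariance the decomposition relies on.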

Though the response (7) is similar to the response of the ratio correlation detector (3), there are three main differences between the MLFD and the ratio correlation detector: (1) the former extracts linear features directly whereas the latter detects target points, (2) the former can detect lines in any direction while the latter can detect lines only in several specified directions, and (3) the width (or height) of the mask is adjustable rather than fixed during detection.

Linear feature detection

Linear feature detection is accomplished by a multiscale decomposition that subdivides an image into a set of non-overlapping dyadic blocks C at different scales and computes the response (7) on each block. The set C satisfies

max_C { Σ_{s ∈ C} max_{r:s} T(r) − λ #C }.        (9)

Here r:s means that the detector mask r lies on the square s; λ and #C are the same as in (5), determining the resolution of the decomposition. In the decomposition, each block contains a linear feature candidate, see Figure 2.

To solve (9), a bottom-up tree pruning process[17] is employed, as shown in Figure 1. The image is first subdivided recursively into a series of squares (each parent square is subdivided into 2×2 child squares with equal sidelengths), producing a complete quadtree. Then the responses of all possible detector masks over each square in the quadtree are calculated, but only the maximum response is saved. Finally, starting from the bottom level, if

Σ_{i=1}^{4} T_m(r_{c,i}) − 4λ > T_m(r_p) − λ        (10)

where T_m(r_{c,i}) is the maximum response of the i-th child square and T_m(r_p) is that of its parent square, the four child squares are retained and the parent’s maximum response is updated to the left-hand side of the above inequality; otherwise, the children are discarded, that is, the parent square is not subdivided in the multiscale decomposition. This pruning process is repeated at each level until the root node is reached. The endpoints (v1, v2) and the width of the central region (w) of the detector constitute the detected linear features.
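The bottom-up pruning can be sketched as a recursion over the quadtree. This is a minimal sketch with a hypothetical node encoding of our own, a `(response, children)` pair with `children` either four child nodes or `None` at the finest scale; the split test is the inequality (10), since each child contributes T_m − λ and four of them give Σ T_m − 4λ.

```python
def prune(node, lam):
    """Bottom-up tree pruning of a complete quadtree.

    `node` is (response, children); `children` is a list of four child
    nodes, or None at the finest scale. Returns (value, leaves): the
    best penalized response of the subtree and the retained squares."""
    resp, children = node
    if children is None:
        return resp - lam, [node]
    vals, leaves = zip(*(prune(c, lam) for c in children))
    split_val = sum(vals)            # = sum of child T_m minus 4*lam
    if split_val > resp - lam:       # inequality (10): keep the children
        return split_val, [leaf for group in leaves for leaf in group]
    return resp - lam, [node]        # otherwise keep the parent square
```

With a small penalty the strong children survive and the parent is subdivided; with a large penalty the decomposition stops at the parent, exactly the trade-off λ controls in (9).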

Suppose the image is square with dyadic width N. Then the image can be divided into (N/n)² dyadic blocks at scale n. Let δ be the minimum scale in the decomposition; then, according to (6), the count of all possible detector masks is

B_δ^N = Σ_{n = 2^j δ ≤ N} (N/n)² × 8n³/δ = 8N²(2N/δ − 1),  j = 0, 1, 2, …        (11)

For each mask, computing the response with (7) requires considerable computation; as (1) and (2) show, most of the time is spent computing moments. In[20], a fast moment computation algorithm is proposed which relies on the idea of considering polygonal domains with a fixed angular resolution, combined with an efficient implementation of a discrete version of Green’s theorem. We adopt it here to accelerate the response computation. In addition, when the image width N is large, we divide the image into patches of width ϕ (ϕ < N), since a long line segment is in any case broken into several parts during the decomposition, and apply the decomposition to each patch. According to (11), the total count of flops is then

N_δ^ϕ = (N/ϕ)² B_δ^ϕ = 8N²(2ϕ/δ − 1).        (12)

This cuts the computation by 16N²(N − ϕ)/δ flops.
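The mask counts can be checked numerically by summing (6) over the dyadic scales. The closed forms 8N²(2N/δ − 1) and the saving 16N²(N − ϕ)/δ are the expressions reconstructed from (11) and (12); the sketch below assumes N, ϕ, and δ are dyadic with δ | ϕ | N.

```python
def mask_count(width, delta):
    """Total number of detector masks over all dyadic scales for a
    dyadic image of the given width, by summing (width/n)^2 * 8 n^3/delta
    over n = delta, 2*delta, ..., width."""
    total = 0
    n = delta
    while n <= width:
        total += (width // n) ** 2 * 8 * n ** 3 // delta
        n *= 2
    return total

# Full-image count versus the patched scheme of (12):
N, delta, phi = 1024, 4, 64
full = mask_count(N, delta)                       # whole image at once
patched = (N // phi) ** 2 * mask_count(phi, delta)  # (N/phi)^2 patches
```

Running this confirms both closed forms exactly, and shows the patched scheme is cheaper by a factor of roughly N/ϕ for large images.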

Road extraction using MLFD

In this section, we apply the MLFD to the extraction of one of the most important linear features: road networks. The procedure works in two steps. First, the detector is applied to the input image, generating road candidates. Then we identify the real road segments among the candidates using Markov random fields (MRFs)[7, 21]. The MRFs are adopted directly without much modification, except that the average response of a candidate is replaced with T(r)/l_c, where T(r) is the detector response (7) and l_c is the segment length. The energy function can be formalized as

U(l|d) = Σ_{i=1}^{N} V(d_i | l_i) + Σ_{c ∈ C} V_c(l)        (13)

Here d_i and l_i are, respectively, the average response and the label (1 for a road segment, 0 otherwise) of a segment. V(d_i|l_i) is the potential of a segment to be a road segment, while V_c(l) is the clique potential expressing a priori knowledge of roads. The algorithm is tested on two different DLR E-SAR images.
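The shape of the energy (13) can be illustrated with a toy labelling. The potentials below are hypothetical stand-ins, not those of [7, 21]: the unary term simply rewards labelling high-response segments as road, and the clique term is a plain smoothness prior over connected segment pairs.

```python
def mrf_energy(responses, labels, edges, beta=1.0):
    """Toy energy U(l|d) = sum_i V(d_i | l_i) + sum_c V_c(l).

    `responses` are average detector responses d_i, `labels` the binary
    labels l_i (1 = road), and `edges` the pairs of connected segments
    forming the cliques. Lower energy = more plausible labelling."""
    # unary potentials: a road label is cheap where the response is high
    unary = sum(-d if l == 1 else d for d, l in zip(responses, labels))
    # clique potentials: penalize label changes along connected segments
    clique = sum(0.0 if labels[i] == labels[j] else beta
                 for i, j in edges)
    return unary + clique
```

Minimizing such an energy over all labelings (by ICM, simulated annealing, or graph cuts in practice) selects the road segments; the toy version already prefers labelling a connected chain of strong responses as road.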

Both E-SAR datasets used in our experiments are L-band with 0.92 × 1.4 m pixel size and 3.00 × 2.20 m resolution. The first test image, Figure 3a, is an 18-look amplitude image of 1300 × 1200 pixels representing the scene around the Oberpfaffenhofen airport in Gauting, Germany. This experiment mainly aims at comparing the detection results of the proposed detector, the fusion operator, wedgelets, beamlets[16], and the unbiased detector proposed by Steger[22]. As we can see from Figure 3b, both the main roads and the airstrips are extracted by the MLFD. The fusion operator performs poorly on the wide airstrips when using a smaller template size (13 × 15), as shown in Figure 3c, and it leaves large gaps on the “thin” roads when using a larger template size (25 × 29), as shown in Figure 3d. In Figure 3e, Steger’s method, implemented with the HALCON software, is applied to extract road centerlines as a comparison. Although many of the segments are extracted, the result is still unsatisfying because many undesired segments are extracted as well. Figure 3f shows that wedgelets detect the boundaries rather than the centerlines of roads, and they also generate many noise segments. In Figure 3g, we present the detection results of beamlets; in fact, beamlets are more suitable for the road grouping step than for detecting linear features directly. From Figure 3h, we can see that after global optimization by MRFs the linear and curvilinear roads are appropriately approximated with sequences of linear elements. Figure 3i is the manually labeled ground truth.

Figure 3

The road extraction results. (a) The first E-SAR image, representing the scene around an airport. (b) The road candidates detected by MLFD; in the decomposition, the complexity-penalty coefficient λ is set to 0.6 and segments with a response less than 0.6 are discarded. (c) The detection results of the fusion operator[7]; the response threshold is set to 0.3 and segments shorter than 5 are removed. (d) Another result of the fusion operator with a larger template size. (e) The result of the unbiased detector[22]. (f) The detection results of wedgelets; the complexity-penalty coefficient and threshold are 5 and 1, respectively. (g) The segments detected by beamlets. (h) The road network reconstructed after MRF optimization. (i) The ground truth.

Figure 4a shows another test image representing an urban scene, and Figure 4g is the corresponding reference data extracted manually. This experiment aims at validating the ability of the MLFD to approximate linear and curvilinear targets and to extract “thin” roads as well as main roads. Figure 4b shows the candidates detected by the MLFD, and Figure 4c shows the detection results of the fusion operator, which again generates many unnecessary segments. Figure 4d is the result of the fusion operator with a larger template size, which misses many road segments. Figure 4e is the result of Steger’s method, again implemented with the HALCON software; most correct segments are extracted, but much noise is extracted too. Figure 4f shows the road network after MRF-based optimization. As we can observe, “clean” roads of different widths are extracted, which validates the assertion that the presented detector can adjust the width (or height) of the mask adaptively to fit features of different widths. However, the method cannot handle complex road intersections and roads in built-up areas very well. This may be improved by employing the junction-aware MRF model[23] and introducing context-based information[24, 25], which will be considered in further study.

Figure 4

The road extraction results on the second test image. (a) The input image, containing roads of different widths. (b) The road candidates extracted by the MLFD, with the complexity-penalty coefficient λ and response threshold set to 0.9 and 0.5, respectively. (c) The detection results of the fusion operator[7]. (d) Another result of the fusion operator with a larger template size. (e) The result of the unbiased detector[22]. (f) The road network reconstructed after MRF optimization. (g) The ground truth.

There are many approaches to evaluating the results of automatic road extraction[26]; we choose three common indices[27], defined in (14):

completeness = L_r / L_gt,  correctness = L_r / L_N,  quality = L_r / (L_N + L_ugt)        (14)

Here L_r is the total length of the extracted roads that match the ground truth, L_gt is the length of the ground truth (reference data), L_N is the total extracted road length, and L_ugt is the length of actual roads unmatched by the extracted roads. Our method is compared here with Tupin’s[7] and Steger’s[22] methods. As Table 1 shows, our method is generally better than the other two. However, the results for the first image (Figure 3) are worse than those for the second. This is mainly because of the complexity of the scene in Figure 3, which contains different types of roads (road, rail, and airstrip), forests, farmland, and buildings. Furthermore, the quality of this image is relatively low, i.e., the brightness is uneven.
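The three indices of (14) are a direct translation into code; the lengths are assumed to be precomputed by matching extracted segments against the reference within a buffer, as in [27].

```python
def road_metrics(matched_length, gt_length, extracted_length, unmatched_gt):
    """Evaluation indices of (14) for road network extraction.

    matched_length   -- L_r,  extracted road length matching the ground truth
    gt_length        -- L_gt, total ground-truth (reference) length
    extracted_length -- L_N,  total extracted road length
    unmatched_gt     -- L_ugt, ground-truth length left unmatched"""
    completeness = matched_length / gt_length
    correctness = matched_length / extracted_length
    quality = matched_length / (extracted_length + unmatched_gt)
    return completeness, correctness, quality
```

For example, 80 units of matching extraction against a 100-unit reference, with 120 units extracted in total and 20 reference units missed, gives completeness 0.8 and correctness 2/3; quality is the strictest index, penalizing both false alarms and misses at once.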

Table 1 Quantitative evaluation of the results


Conclusion

In this article, a new multiscale linear feature detector named MLFD and its application to road network extraction from SAR images have been presented. Thanks to its multiscale design, the MLFD can extract linear features with a range of widths and orientations in the same scene without specifying the size or orientation of the detector mask. This is of great use as a growing number of remote sensing images of varying resolutions, covering large scenes and containing linear targets of different widths, become available. The global optimization model employed in this article is the MRF model, which can be replaced with other models to meet different requirements; for example, the junction-aware MRF model could be used to improve performance at road intersections.

Future study will focus on improving the calculation of responses of different linear features in different scenes based on the detector scale. The multiscale analysis framework also provides opportunities to develop road grouping algorithms[28, 29].


References

  1. Hellwich O, Mayer H: Extraction of line features from synthetic aperture radar (SAR) scenes using a Markov random field model. In Proc. IEEE International Conference on Image Processing (ICIP), vol 3 (Lausanne, 1996), pp. 883–886

  2. Hellwich O: Fusion of SAR intensity and coherence data in a Markov random field model for line extraction. In Proc. International Geoscience and Remote Sensing Symposium (IEEE Computer Society Press, Seattle, 1998), pp. 165–167

  3. Butenuth M, Heipke C: Network snakes: graph-based object delineation with active contour models. Mach. Vision Appl. 2011, 23: 91–109

  4. Wessel B, Hinz S: Context-supported road extraction from SAR imagery: transition from rural to built-up areas. In Proc. 5th European Conference on Synthetic Aperture Radar (EUSAR), vol 1 (VDE, Berlin, 2004), pp. 399–403

  5. Wessel B, Wiedemann C, Ebner H: The role of context for road extraction from SAR imagery. In Proc. IEEE International Geoscience and Remote Sensing Symposium 2003, 5: 4025–4027

  6. Wessel B, Wiedemann C: Analysis of automatic road extraction results from airborne SAR imagery. In Proc. ISPRS Conference “Photogrammetric Image Analysis”, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences (2003), pp. 105–110

  7. Tupin F, Maitre H, Mangin JF, Nicolas JM, Pechersky E: Detection of linear features in SAR images: application to road network extraction. IEEE Trans. Geosci. Remote Sens. 1998, 36(2): 434–453

  8. Huber R, Lang K: Road extraction from high-resolution airborne SAR using operator fusion. In Proc. IGARSS, vol 6 (Sydney, Australia, 2001), pp. 2813–2815

  9. Gamba P, Dell’Acqua F, Lisini G: Improving urban road extraction in high-resolution images exploiting directional filtering, perceptual grouping, and simple topological concepts. IEEE Geosci. Remote Sens. Lett. 2006, 3(3): 387–391

  10. Hedman K, Stilla U, Lisini G, Gamba P: Road network extraction in VHR SAR images of urban and suburban areas by means of class-aided feature-level fusion. IEEE Trans. Geosci. Remote Sens. 2010, 48(3): 1294–1296

  11. Butenuth M: Geometric refinement of road networks using network snakes and SAR images. In Proc. IGARSS (Honolulu, Hawaii, USA, 2010), pp. 449–452

  12. Lisini G, Tison C, Tupin F, Gamba P: Feature fusion to improve road network extraction in high-resolution SAR images. IEEE Geosci. Remote Sens. Lett. 2006, 3(2): 217–221

  13. Tupin F, Houshmand B, Datcu M: Road detection in dense urban areas using SAR imagery and the usefulness of multiple views. IEEE Trans. Geosci. Remote Sens. 2002, 40(11): 2405–2414

  14. Kozaitis SP, Cofer RH: Linear feature detection using multiresolution wavelet filters. Photogramm. Eng. Remote Sens. 2005, 71(6): 689–697

  15. Candes EJ, Donoho DL: Ridgelets: a key to higher-dimensional intermittency. Phil. Trans. R. Soc. Lond. A 1999, 357: 2495–2509

  16. Donoho DL, Huo X: Beamlets and multiscale image analysis. In Multiscale and Multiresolution Methods, Lecture Notes in Computational Science and Engineering, vol 20 (Springer, 2002), pp. 149–196

  17. Donoho DL: Wedgelets: nearly-minimax estimation of edges. Ann. Statist. 1999, 27: 859–897

  18. Romberg JK, Wakin M, Baraniuk R: Multiscale wedgelet image analysis: fast decompositions and modeling. In Proc. IEEE International Conference on Image Processing 2002, 2: 585–588

  19. Zhou G, Cui Y, Chen Y, Yang J, Rashvand H, Yamaguchi Y: Linear feature detection in polarimetric SAR images. IEEE Trans. Geosci. Remote Sens. 2011, 49(4): 1453–1463

  20. Friedrich F, Demaret L, Fuhr H, Wicker K: Efficient moment computation over polygonal domains with an application to rapid wedgelet approximation. SIAM J. Sci. Comput. 2007, 29(2): 842–863

  21. Xu G, Sun H, Yang W, Shuai Y: An improved road extraction method based on MRFs in rural areas for SAR images. In Proc. 1st Asian and Pacific Conference on Synthetic Aperture Radar (Huangshan, Anhui, China, 2007), pp. 489–492

  22. Steger C: An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20(2): 113–125

  23. Negri M, Gamba P, Lisini G, Tupin F: Junction-aware extraction and regularization of urban road networks in high-resolution SAR images. IEEE Trans. Geosci. Remote Sens. 2006, 44(10): 2962–2971

  24. Hedman K, Hinz S, Stilla U: Road extraction from SAR multi-aspect data supported by a statistical context-based fusion. In Proc. Urban Remote Sensing Joint Event (Paris, France, 2007), pp. 1–6

  25. Amberg V, Coulon M, Marthon P, Spigai M: Improvement of road extraction in high resolution SAR data by a context-based approach. In Proc. IGARSS (Seoul, Korea, 2005), pp. 490–493

  26. Mayer H, Hinz S, Bacher U, Baltsavias E: A test of automatic road extraction approaches. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol 36 (2005), pp. 209–214

  27. Wiedemann C, Ebner H: Automatic completion and evaluation of road networks. In Proc. Int. Arch. Photogramm. Remote Sens., vol XXXIII, part 3B (Amsterdam, Netherlands, 2000), pp. 979–986

  28. Hinz S, Baumgartner A: Automatic extraction of urban road networks from multi-view aerial imagery. ISPRS J. Photogramm. Remote Sens. 2000, 58: 83–98

  29. Baumgartner A, Hinz S: Multi-scale road extraction using local and global grouping criteria. International Archives of Photogrammetry and Remote Sensing, vol 33, part B3 (Elsevier, 2003), pp. 58–65



Acknowledgements

The authors would like to thank several authorities for providing data for the experiments, especially Intermap Corp. and Bayrisches Landesvermessungsamt Muenchen, IHR/DLR (E-SAR data), Aerosensing GmbH (AeS-1 data), as well as the DDRE (EmiSAR data). This study was supported by the National Basic Research Program of China (973 Program) (No. 2013CB733404), NSFC Grants (Nos. 60702041, 41174120), a China Postdoctoral Science Foundation funded project, and the LIESMARS Special Research Funding.

Author information



Corresponding author

Correspondence to Chu He.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

He, C., Liao, Z., Yang, F. et al. A novel linear feature detector for SAR images. EURASIP J. Adv. Signal Process. 2012, 235 (2012).
