Object representation for multi-beam sonar image using local higher-order statistics
- Haisen Li^{1, 2},
- Jue Gao^{1, 2},
- Weidong Du^{1, 2},
- Tian Zhou^{1, 2},
- Chao Xu^{1, 2} and
- Baowei Chen^{1, 2}
DOI: 10.1186/s13634-016-0439-7
© The Author(s). 2017
Received: 16 June 2016
Accepted: 17 December 2016
Published: 13 January 2017
Abstract
Multi-beam sonar imaging has been widely used in various underwater tasks such as object recognition and object tracking. Problems remain, however, when the sonar images are characterized by low signal-to-noise ratio (SNR), low resolution, and amplitude alterations due to viewpoint changes. This paper investigates the capacity of local higher-order statistics (HOS) to represent objects in multi-beam sonar images. The Weibull distribution is used for modeling the background of the image. Local HOS involving skewness are estimated using a sliding computational window, generating a local skewness image in which a square structure is associated with a potential object. The ability to represent objects with different SNRs between object and background is analyzed, and the choice of the computational window size is discussed. For objects with high SNR, a novel algorithm based on background estimation is proposed to reduce side lobes while retaining object regions. The performance of object representation has been evaluated using real data, which provided encouraging results for objects with low amplitude, high side lobes, or largely fluctuating amplitude. In conclusion, local HOS provides more reliable and stable information relating to the potential object and improves object representation in multi-beam sonar images.
Keywords
Higher-order statistics; Object representation; Side lobe suppression; Multi-beam sonar imaging; Acoustic image; Skewness; Weibull distribution

1 Introduction
Acoustic images acquired by a sonar system, such as side-scan sonar, forward-looking sonar, synthetic aperture sonar (SAS), or multi-beam echosounder, are used for many applications, including survey of the surrounding environment [1], obstacle avoidance [2], and underwater object detection [3]. Typical sonar images are generally composed of three types of regions [4, 5]: highlight, shadow, and bottom reverberation (referred to as the background). When no shadow is available, the highlight area produced by acoustic wave reflection from an object is the only clue indicating the presence of the object. Due to small size, amplitude similar to the background, or a large fluctuation of amplitude, a potential object may correspond to an indistinguishable highlight area. Thus, object representation [6, 7], i.e., characterizing an object with the sonar image information, is not easy.
In previous work, a common strategy for object representation is to segment the image directly, relating the highlighted area to an object. D. Y. Dai et al. [8] presented a method for segmenting moving and static objects in sector-scan sonar imagery, based on filtering the data in the temporal domain. J. P. Stitt et al. [9] developed a fuzzy C-means (FCM) algorithm that segments the echo of an object and its acoustic shadow in the presence of reverberation noise. M. Mignotte et al. [10] presented a hierarchical Markov random field (MRF) model for high-resolution sonar image segmentation. Another strategy for object representation is based on classification, which extracts features characterizing the objects to generate the training set for the learning process of a classifier. G. J. Dobeck [11] implemented a matched filter to detect mine-like objects, after which both a K-nearest neighbor neural network classifier and a discriminatory filter classifier are used to classify the objects as mine or not-mine. S. Reed et al. [12] presented a model-based approach to mine classification using side-scan sonar. D. Williams [13] proposed a Bayesian data fusion approach for seabed classification using multi-view SAS imagery. In some cases, segmentation and classification are fused [14, 15] to represent objects in the sonar image.
Both the segmentation and classification strategies require local analysis for revealing trends, breakdown points, and self-similarities [5]. Thus, corresponding local features are explored for representing the object. Recent research involving local features for sonar images includes local Fourier histogram features [16], local invariant features [17], and undecimated discrete wavelet transform features [5]. Higher-order statistics (HOS) are widely used in image processing when first- and second-order statistics fail [18, 19]. Considering the object as a discontinuity in the local background distribution, local HOS can be used as local features for object representation. The most relevant work regarding local HOS is by F. Maussang et al. [20]: a detection method based on HOS is applied to real SAS data, the influence of the signal-to-noise ratio (SNR) on the results is studied in the Gaussian case, and mathematical expressions of the estimators and of the expected performances are derived and experimentally confirmed. In this paper, we further investigate the capacity of local HOS to represent objects in multi-beam sonar images. The new contribution of this paper lies in the development of a set of integrated methods, including the choice of the statistical background model, the choice of the computational window size, and side lobe suppression in the case of high SNR. Moreover, the influences of objects with different SNRs and different shapes on the results are studied in the case of a Weibull distribution. The performance of object representation has been evaluated using real data, which provided encouraging results for objects with low amplitude, high side lobes, or largely fluctuating amplitude.
This paper is organized as follows. Section 2 introduces the local properties of the HOS for a Weibull background. Section 3 describes the local HOS for object representation in detail. Section 4 provides experimental results on real data, and conclusions are presented in Section 5.
2 Local higher-order statistics for Weibull background
Kolmogorov distance and χ ^{2} error when approximating the image of Fig. 1a

Statistical model | Kolmogorov | χ ^{2} error
---|---|---
Rayleigh | 0.2237 | 13474.67
Log-normal | 0.0211 | 708.98
Weibull | 0.0105 | 112.95
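As an illustration of how such a model comparison can be carried out, the sketch below computes the Kolmogorov distance between an empirical amplitude distribution and two candidate background models. All parameters here (Weibull shape k = 1.4, scale λ = 1, Rayleigh σ = 0.5) are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def kolmogorov_distance(samples, model_cdf):
    """Largest absolute gap between the empirical CDF and a candidate model CDF."""
    x = np.sort(np.asarray(samples, dtype=float))
    ecdf = np.arange(1, x.size + 1) / x.size
    return float(np.max(np.abs(ecdf - model_cdf(x))))

# Candidate background models (CDFs); parameters are illustrative assumptions.
weibull_cdf = lambda x: 1.0 - np.exp(-(x / 1.0) ** 1.4)          # k = 1.4, lam = 1
rayleigh_cdf = lambda x: 1.0 - np.exp(-x ** 2 / (2 * 0.5 ** 2))  # sigma = 0.5

rng = np.random.default_rng(0)
background = rng.weibull(1.4, size=50_000)  # synthetic Weibull background pixels

d_weibull = kolmogorov_distance(background, weibull_cdf)
d_rayleigh = kolmogorov_distance(background, rayleigh_cdf)
# The matching Weibull model gives a markedly smaller distance.
```

The same comparison applied to real multi-beam amplitudes (with fitted rather than assumed parameters) would reproduce the kind of ranking shown in the table above.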
According to Eq. (2), m _{B(r)} can be calculated from k and λ. Substituting Eq. (5) into Eq. (11), the object amplitude A can be replaced by the SNR. Thus, the local skewness S _{W} is a function of both α and the SNR.
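Since the skewness of a pure Weibull background depends only on the shape parameter k and not on the scale λ, it can be written in closed form from the first three raw moments. A minimal sketch using the standard moment formulas (not the paper's Eqs. (2)–(11), which are not reproduced here):

```python
from math import gamma

def weibull_skewness(k, lam=1.0):
    """Skewness of a Weibull(k, lam) background via its first three raw moments."""
    m1 = lam * gamma(1 + 1 / k)        # E[X]
    m2 = lam ** 2 * gamma(1 + 2 / k)   # E[X^2]
    m3 = lam ** 3 * gamma(1 + 3 / k)   # E[X^3]
    var = m2 - m1 ** 2
    return (m3 - 3 * m1 * var - m1 ** 3) / var ** 1.5
```

For k = 1 (exponential) the skewness is exactly 2, and changing λ leaves it unchanged, which is why only the shape of the background and the object's SNR enter the local skewness analysis.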
3 Local higher-order statistics for object representation
3.1 Object modeling by local HOS
where n is the total number of pixels in the image. In the local skewness image, the object is represented by a square structure, shown in Fig. 3b. The details of the square structure are shown in Fig. 3c: it is composed of lower values in the middle, higher values on the edge, and the highest values in the corners. The local skewness reaches the highest value Ŝ _{W} = 3.48, corresponding to the case where a single pixel of the object region is included in the computational window, while the theoretical highest value is S _{W} = 3.53. This special structure is due to the variation of α shown in Fig. 3a.
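The sliding-window estimation described above can be sketched as follows. The image size, object amplitude, and window size are illustrative assumptions, but the corner effect (a single object pixel entering the window produces the highest skewness) is visible in the result.

```python
import numpy as np

def local_skewness_image(img, win):
    """Sample skewness inside each win x win sliding window (valid positions only)."""
    h, w = img.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + win, j:j + win].ravel()
            m, s = patch.mean(), patch.std()
            out[i, j] = ((patch - m) ** 3).mean() / s ** 3 if s > 0 else 0.0
    return out

# Weibull background with one bright square object (hypothetical amplitudes).
rng = np.random.default_rng(0)
img = rng.weibull(1.5, size=(40, 40))
img[18:22, 18:22] = 20.0              # object pixels, far above the background
skew_img = local_skewness_image(img, win=7)
# Highest values occur where the window overlaps just one object pixel (corners).
```

The maximum of `skew_img` lands on the corner windows around the object, reproducing the square structure described for Fig. 3b, c.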
3.2 Representing the object with different SNR
Performance of object representation (SNR_{1} = 40 dB, SNR_{2} = 25 dB, and SNR_{3} = 10 dB)
| h _{ T } | C | σ _{ B '}
---|---|---|---
SNR_{1} | 6.7473 | 0.9650 | 0.3072
SNR_{2} | 5.7620 | 4.6754 | 0.3072
SNR_{3} | 0.8063 | 5.0513 | 0.3072
3.3 Choice of the computational window size
Performance of object representation with different computational window sizes (SNR = 10 dB)
T _{ W } | h _{ T } | C | σ _{ B '} |
---|---|---|---|
3 | 1.5792 | 7.9350 | 0.5167 |
6 | 0.8916 | 5.4218 | 0.3393 |
9 | 0.6354 | 4.4079 | 0.2365 |
Performance of object representation with different computational window sizes for three objects with different sizes and shapes (SNR = 20 dB)
T _{ W } | C (Object I) | C (Object II) | C (Object III) | σ _{ B '}
---|---|---|---|---
4 | 4.9057 | 4.9647 | 4.9954 | 0.4075
7 | 5.8858 | 6.2262 | 6.6973 | 0.2827
10 | 5.3601 | 5.7749 | 6.2597 | 0.1785
In conclusion, the α _{min} derived from the computational window size T _{ W } should correspond to the α ' that yields the highest h _{ T }; however, a trade-off between the highest object skewness h _{ T } and the background standard deviation σ _{ B '} has to be made when selecting a suitable computational window size. Moreover, a large window size generally makes it difficult to locate the object.
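The trade-off can be illustrated numerically. The sketch below estimates the spread of the background skewness, playing the role of σ _{ B '}, for two window sizes over a pure Weibull background; the shape parameter and trial count are arbitrary choices, not values from the paper.

```python
import numpy as np

def skew(patch):
    """Sample skewness of a flattened patch."""
    a = patch.ravel()
    m, s = a.mean(), a.std()
    return ((a - m) ** 3).mean() / s ** 3

def background_skew_std(win, trials=2000, k=1.5, seed=1):
    """Std of the local skewness estimate over pure Weibull background patches."""
    rng = np.random.default_rng(seed)
    vals = [skew(rng.weibull(k, size=(win, win))) for _ in range(trials)]
    return float(np.std(vals))

s3 = background_skew_std(3)   # 9-pixel windows: very noisy skewness estimates
s9 = background_skew_std(9)   # 81-pixel windows: a much smoother background map
```

Larger windows average over more pixels and so flatten the background of the skewness image, matching the decreasing σ _{ B '} column in the tables above, while making the object harder to localize.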
3.4 Side lobe suppression for object with high SNR
- (1) Calculate the normalized amplitude probability distribution of X, obtaining the max distribution point l _{ m } and the max inflection point l _{ V }.
- (2) Define X _{ V } as the max points along the direction of sampling number, from which the points larger than l _{ V } are labeled as X _{ S }.
- (3) Save the object regions between the two valley points around each point of X _{ S }; calculate the correction factor d as the ratio between the max side lobe peak l _{ s } and the max point l _{ m }; then multiply the side lobe by d for offsetting.
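A loose 1-D sketch of steps (1)–(3), under simplifying assumptions: peaks are found by a simple local-maximum test against the threshold l _{ V }, the two valley points are found by walking downhill from each retained peak, and l _{ m } is approximated by the global maximum of the profile rather than the max point of the amplitude distribution.

```python
import numpy as np

def suppress_side_lobes(profile, l_v):
    """Keep the region between the two valleys around each peak above l_v;
    scale everything else by d = l_s / l_m (l_m approximated as the global max)."""
    x = np.asarray(profile, dtype=float)
    keep = np.zeros(x.size, dtype=bool)
    # step (2): local maxima above the inflection threshold l_v
    peaks = [i for i in range(1, x.size - 1)
             if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > l_v]
    for p in peaks:                          # step (3): walk out to the valleys
        lo = p
        while lo > 0 and x[lo - 1] < x[lo]:
            lo -= 1
        hi = p
        while hi < x.size - 1 and x[hi + 1] < x[hi]:
            hi += 1
        keep[lo:hi + 1] = True               # object region is retained intact
    out = x.copy()
    if keep.any() and not keep.all():
        d = x[~keep].max() / x.max()         # correction factor d = l_s / l_m
        out[~keep] *= d                      # offset the side lobes
    return out

# Toy beam profile: one object peak at index 5, side lobes at indices 1 and 9.
profile = [0.0, 1.0, 0.2, 0.1, 0.3, 5.0, 0.3, 0.1, 0.2, 1.0, 0.0]
cleaned = suppress_side_lobes(profile, l_v=2.0)
```

With these assumptions, the object peak and its flanks are preserved exactly while the side lobes are scaled down by d.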
4 Results and discussion
The proposed approach is verified on several real sonar images obtained from a multi-beam sonar developed by Harbin Engineering University. The sonar covers a region of 140 (vertical) × 2.5 (horizontal), with an operating frequency of 300 kHz. The emitted signal is a continuous wave (CW) with a pulse width of 0.1 ms, and the receiver is a 64-element uniform linear array with a sampling frequency of 48 kHz. A large number of datasets collected during several trials were processed by beam forming and scan conversion methods [32–35], generating image sequences with a resolution grid of 0.05 × 0.05 m^{2}; two of them are selected as examples in the following.
Performance comparisons between the original image and local skewness image for partial frames of image sequence I
| Frame 1: SNR (dB) | σ _{ B } | SNR ' (dB) | σ _{ B '} | Frame 2: SNR (dB) | σ _{ B } | SNR ' (dB) | σ _{ B '}
---|---|---|---|---|---|---|---|---
Object 1 | 28.63 | 0.2791 | 32.73 | 0.2680 | 31.54 | 0.2809 | 42.30 | 0.2648
Object 2 | 29.79 | 0.2791 | 36.84 | 0.2680 | 25.58 | 0.2809 | 35.08 | 0.2648
Performance comparisons between the original image and local skewness image for partial frames of image sequence II
| Frame 1: SNR (dB) | σ _{ B } | SNR ' (dB) | σ _{ B '} | Frame 2: SNR (dB) | σ _{ B } | SNR ' (dB) | σ _{ B '}
---|---|---|---|---|---|---|---|---
Object 1 | 29.32 | 0.3322 | 34.97 | 0.3545 | 41.19 | 0.3397 | 36.96 | 0.3587
Object 2 | 39.46 | 0.3322 | 43.74 | 0.3545 | 39.38 | 0.3397 | 43.37 | 0.3587
Object 3 | 42.55 | 0.3322 | 40.73 | 0.3545 | 49.54 | 0.3397 | 46.44 | 0.3587
Object 4 | 50.40 | 0.3322 | 45.61 | 0.3545 | 48.93 | 0.3397 | 42.38 | 0.3587
5 Conclusions
This paper investigates the capacity of local higher-order statistics (HOS) to represent objects in multi-beam sonar images. Local skewness is estimated using a sliding computational window applied to a sonar image, generating a local skewness image in which a square structure is associated with a potential object. The findings are: (1) The Weibull distribution proves a better choice for modeling the background of a multi-beam sonar image than the log-normal and Rayleigh distributions. (2) The square structure is composed of lower values in the middle, higher values on the edge, and the highest values in the corners, which makes the object easily identifiable. (3) In mapping from the original image to the local skewness image, an object with lower SNR achieves a higher object contrast C, whereas an object with higher SNR obtains a lower object contrast C; the robustness of object representation is thereby improved, especially for an object with a large SNR fluctuation. (4) To select a suitable sliding computational window size, the α _{min} derived from the computational window size T _{ W } should correspond to the α ' that yields the highest h _{ T }; however, a trade-off between a higher object skewness h _{ T } and a lower background standard deviation σ _{ B '} has to be made. (5) For an object with high SNR, an algorithm based on background estimation is able to significantly reduce the side lobes while completely retaining object regions. Local HOS can provide local features relating to the potential object for segmentation, detection, and classification tasks; however, the robustness of these local features should be further tested and improved for shape recognition. In the future, we plan to extend this work to multiple-object tracking in complex scenes.
Declarations
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant No. 41327004, 41376103, 41306038, and 41506115) and the Fundamental Research Funds for the Central Universities (Grant No. HEUCF160510). The authors would like to thank the anonymous reviewers for the constructive comments on the manuscript.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
- SM Simmons, DR Parsons, JL Best, O Orfeo, SN Lane, R Kostaschuk, RJ Hardy, G West, C Malzone, J Marcus, P Pocwiardowski, Monitoring suspended sediment dynamics using MBES. J. Hydraul. Eng. 136(1), 45–49 (2010)
- I Quidu, L Jaulin, A Bertholom, Y Dupas, Robust multitarget tracking in forward-looking sonar image sequences using navigational data. IEEE J. Ocean. Eng. 37(3), 417–430 (2012)
- GG Acosta, SA Villar, Accumulated CA–CFAR process in 2-D for online object detection from sidescan sonar data. IEEE J. Ocean. Eng. 40(3), 558–569 (2015)
- XF Ye, ZH Zhang, PX Liu, HL Guan, Sonar image segmentation based on GMRF and level-set models. Ocean Eng. 37(10), 891–901 (2010)
- T Celik, T Tjahjadi, A novel method for sidescan sonar image segmentation. IEEE J. Ocean. Eng. 36(2), 186–194 (2011)
- RJ Campbell, PJ Flynn, A survey of free-form object representation and recognition techniques. Comput. Vis. Image Underst. 81(2), 166–210 (2001)
- B Moghaddam, A Pentland, Probabilistic visual learning for object representation. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 786–793 (2005)
- D Dai, MJ Chantler, DM Lane, N Williams, A spatial-temporal approach for segmentation of moving and static objects in sector scan sonar image sequences, in Proceedings of the International Conference on Image Processing and Its Applications, 1995, pp. 163–167
- JP Stitt, RL Tutwiler, AS Lewis, Fuzzy c-means image segmentation of side-scan sonar images, in Proceedings of the IASTED International Conference on Signal and Image Processing, 2001, pp. 27–32
- M Mignotte, C Collet, P Perez, P Bouthemy, Sonar image segmentation using an unsupervised hierarchical MRF model. IEEE Trans. Image Process. 9(7), 1216–1231 (2000)
- GJ Dobeck, JC Hyland, L Smedley, Automated detection/classification of sea mines in sonar imagery, in Proceedings of the International Society for Optical Engineering (SPIE), 1997, pp. 90–110
- S Reed, Y Petillot, J Bell, Model-based approach to the detection and classification of mines in sidescan sonar. Appl. Opt. 43(2), 237–246 (2004)
- DP Williams, Bayesian data fusion of multiview synthetic aperture sonar imagery for seabed classification. IEEE Trans. Image Process. 18(6), 1239–1254 (2009)
- CM Ciany, W Zurawski, Performance of computer aided detection/computer aided classification and data fusion algorithms for automated detection and classification of underwater mines, in Proceedings of the Oceans MTS/IEEE Conference & Exhibition, 2001, pp. 277–284
- S Reed, IT Ruiz, C Capus, Y Petillot, The fusion of large scale classified side-scan sonar image mosaics. IEEE Trans. Image Process. 15(7), 2049–2060 (2006)
- GR Cutter Jr, Y Rzhanov, LA Mayer, Automated segmentation of seafloor bathymetry from multibeam echosounder data using local Fourier histogram texture features. J. Exp. Mar. Biol. Ecol. 285(2), 355–370 (2003)
- A Mahiddine, J Seinturier, JM Boï, P Drap, D Merad, Performances analysis of underwater image preprocessing techniques on the repeatability of SIFT and SURF descriptors, in Proceedings of the 20th International Conference on Computer Graphics, Visualization and Computer Vision, 2012, pp. 275–282
- S Lyu, H Farid, Steganalysis using higher-order image statistics. IEEE Trans. Inf. Forensic Secur. 1(1), 111–119 (2006)
- A Briassouli, I Kompatsiaris, Robust temporal activity templates using higher order statistics. IEEE Trans. Image Process. 18(12), 2756–2768 (2009)
- F Maussang, J Chanussot, A Hetet, M Amate, Higher-order statistics for the detection of small objects in a noisy background application on sonar imaging. EURASIP J. Adv. Signal Process. 2007(1), 1–17 (2007)
- F Maussang, J Chanussot, A Hetet, M Amate, Mean–standard deviation representation of sonar images for echo detection: application to SAS images. IEEE J. Ocean. Eng. 32(4), 956–970 (2007)
- S Kuttikkad, R Chellappa, Non-Gaussian CFAR techniques for target detection in high resolution SAR images, in Proceedings of the IEEE International Conference on Image Processing, 1994, pp. 910–914
- JM Gelb, RE Heath, GL Tipple, Statistics of distinct clutter classes in midfrequency active sonar. IEEE J. Ocean. Eng. 35(2), 220–229 (2010)
- DA Abraham, JM Gelb, AW Oldag, Background and clutter mixture distributions for active sonar statistics. IEEE J. Ocean. Eng. 36(2), 231–247 (2011)
- IR Joughin, DB Percival, DP Winebrenner, Maximum likelihood estimation of K distribution parameters for SAR data. IEEE Trans. Geosci. Remote Sens. 31(5), 989–999 (1993)
- DR Iskander, AM Zoubir, B Boashash, A method for estimating the parameters of the K distribution. IEEE Trans. Signal Process. 47(4), 1147–1151 (1999)
- M Mignotte, C Collet, P Pérez, P Bouthemy, Three-class Markovian segmentation of high-resolution sonar images. Comput. Vis. Image Underst. 76(3), 191–204 (1999)
- DN Joanes, CA Gill, Comparing measures of sample skewness and kurtosis. J. R. Stat. Soc. Ser. D 47(1), 183–189 (1998)
- JF Synnevag, A Austeng, S Holm, A low-complexity data-dependent beamformer. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 58(2), 281–289 (2011)
- CC Gaudes, I Santamaría, J Vía, EM Gómez, TS Paules, Robust array beamforming with sidelobe control using support vector machines. IEEE Trans. Signal Process. 55(2), 574–584 (2007)
- M Wax, Y Anu, Performance analysis of the minimum variance beamformer. IEEE Trans. Signal Process. 44(4), 928–937 (1996)
- C Xu, HS Li, BW Chen, T Zhou, Multibeam interferometric seafloor imaging technology. J. Harbin Eng. Univ. 34(9), 1159–1164 (2013)
- X Liu, HS Li, T Zhou, C Xu, B Yao, Multibeam seafloor imaging technology based on the multiple sub-array detection method. J. Harbin Eng. Univ. 33(2), 197–202 (2012)
- A Trucco, M Garofalo, S Repetto, G Vernazza, Processing and analysis of underwater acoustic images generated by mechanically scanned sonar systems. IEEE Trans. Instrum. Meas. 58(7), 2061–2071 (2009)
- R Schettini, S Corchs, Underwater image processing: state of the art of restoration and image enhancement methods. EURASIP J. Adv. Signal Process. 2010(3), 1–14 (2010)