Object representation for multibeam sonar image using local higher-order statistics
EURASIP Journal on Advances in Signal Processing volume 2017, Article number: 7 (2017)
Abstract
Multibeam sonar imaging has been widely used in various underwater tasks such as object recognition and object tracking. Problems remain, however, when the sonar images are characterized by low signal-to-noise ratio, low resolution, and amplitude alterations due to viewpoint changes. This paper investigates the capacity of local higher-order statistics (HOS) to represent objects in multibeam sonar images. The Weibull distribution is used to model the background of the image. A local HOS, the skewness, is estimated using a sliding computational window, generating a local skewness image in which a square structure is associated with a potential object. The ability to represent objects with different signal-to-noise ratios (SNRs) between object and background is analyzed, and the choice of the computational window size is discussed. For objects with high SNR, a novel algorithm based on background estimation is proposed to reduce side lobes while retaining object regions. The performance of object representation has been evaluated using real data, providing encouraging results for objects with low amplitude, high side lobes, or strongly fluctuating amplitude. In conclusion, local HOS provides more reliable and stable information about potential objects and improves object representation in multibeam sonar images.
Introduction
Acoustic images acquired by a sonar system, such as side-scan sonar, forward-looking sonar, synthetic aperture sonar (SAS), or multibeam echosounder, are used for many applications, including surveying the surrounding environment [1], obstacle avoidance [2], and underwater object detection [3]. Typical sonar images are generally composed of three types of regions [4, 5]: highlight, shadow, and bottom reverberation (referred to as the background). When no shadow is available, the highlight area, produced by acoustic wave reflection from an object, is the only clue indicating the presence of the object. Due to a small size, an amplitude similar to the background, or large amplitude fluctuations, a potential object may correspond to an indistinguishable highlight area. Thus, object representation [6, 7], i.e., characterizing an object using the sonar image information, is not easy.
In previous work, a common strategy for object representation is direct image segmentation, relating the highlight area to an object. D. Y. Dai et al. [8] presented a method for segmenting moving and static objects in sector-scan sonar imagery, based on filtering the data in the temporal domain. J. P. Stitt et al. [9] developed a fuzzy C-means (FCM) algorithm that segments the echo of an object and its acoustic shadow in the presence of reverberation noise. M. Mignotte et al. [10] presented a hierarchical Markov random field (MRF) model for high-resolution sonar image segmentation. Another strategy for object representation is based on classification: features characterizing the objects are extracted to generate the training set for the learning process of a classifier. G. J. Dobeck et al. [11] implemented a matched filter to detect mine-like objects, after which both a K-nearest neighbor neural network classifier and a discriminatory filter classifier are used to classify the objects as mine or not-mine. S. Reed et al. [12] presented a model-based approach to mine classification using side-scan sonar. D. P. Williams [13] proposed a Bayesian data fusion approach for seabed classification using multi-view SAS imagery. In some cases, segmentation and classification are fused [14, 15] to represent objects in the sonar image.
Both segmentation and classification strategies require local analysis to reveal trends, breakdown points, and self-similarities [5]. Thus, local features are explored for representing the object. Recent research on local features for sonar images includes local Fourier histogram features [16], local invariant features [17], and undecimated discrete wavelet transform features [5]. Higher-order statistics (HOS) are widely used in image processing when first- and second-order statistics fail [18, 19]. Considering the object as a discontinuity in the local background distribution, local HOS can serve as local features for object representation. The work most relevant to the application of local HOS is that of F. Maussang et al. [20]: a detection method based on HOS is applied to real SAS data, the influence of the signal-to-noise ratio (SNR) on the results is studied in the Gaussian case, and mathematical expressions for the estimators and the expected performances are derived and experimentally confirmed. In this paper, we further investigate the capacity of local HOS to represent objects in multibeam sonar images. The new contribution of this paper lies in the development of a set of integrated methods, including the choice of the statistical background model, the choice of the computational window size, and side lobe suppression in the case of high SNR. Moreover, the influences of objects with different SNRs and shapes on the results are studied in the case of a Weibull distribution. The performance of object representation has been evaluated using real data, providing encouraging results for objects with low amplitude, high side lobes, or strongly fluctuating amplitude.
This paper is organized as follows. Section 2 introduces the local properties of the HOS for a Weibull background. Section 3 describes the local HOS for object representation in detail. Section 4 provides experimental results on real data, and conclusions are presented in Section 5.
Local higher-order statistics for Weibull background
In order to introduce the local HOS for object representation, it is necessary to assume a statistical model of the background in multibeam sonar images. The classic description of the background follows a Rayleigh distribution; however, it usually fails to fit distributions with large tails and a large deviation-to-mean ratio [21, 22]. Several non-Rayleigh distributions, including the lognormal, Weibull, and K-distributions, have been used to model background statistics [20, 22–24]. The K-distribution provides a good description of the background; however, the estimation of its parameters is computationally complex and time-consuming [21, 25, 26]. A comparison is made among the lognormal, Rayleigh, and Weibull statistical models, using a real sonar image. The details of the real data are given in Section 4. Figure 1a shows the real sonar image without any object. Figure 1b presents the normalized amplitude distribution and the estimated distributions. As observed visually, the real background data is better described by the Weibull distribution than by the other distributions. This is confirmed by the quantitative measures presented in Table 1, according to the Kolmogorov distance and the χ² criterion [21, 27]. As a consequence, the Weibull distribution, which lies between the two extremes of lognormal and Rayleigh, appears to be a good choice for modeling the background in multibeam sonar images.
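As a sketch of this model comparison, the snippet below fits the three candidate distributions to a synthetic Weibull-like background and ranks them by Kolmogorov distance; the real data are not reproduced here, so the sample is drawn using the shape and scale values estimated from the real dataset later in the paper (an illustrative stand-in, not the actual measurement):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the background of a real multibeam sonar image;
# shape/scale match the values estimated from the paper's dataset.
rng = np.random.default_rng(0)
background = stats.weibull_min.rvs(6.67, scale=0.45, size=5000, random_state=rng)

# Fit each candidate model (location fixed at 0) and score it with the
# Kolmogorov distance used in the paper's Table 1 comparison.
candidates = {
    "Weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
    "Rayleigh": stats.rayleigh,
}
distances = {}
for name, dist in candidates.items():
    params = dist.fit(background, floc=0)
    distances[name] = stats.kstest(background, dist.cdf, args=params).statistic

best = min(distances, key=distances.get)
print(best, distances)
```

On Weibull-like data the Weibull fit attains the smallest Kolmogorov distance, mirroring the ranking reported in Table 1.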
The statistics of the Weibull-distributed background B are described by the probability density function:

$$ f_B(x) = \frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1}\exp\left[-\left(\frac{x}{\lambda}\right)^{k}\right], \quad x \ge 0 \quad (1) $$
where k is the shape parameter and λ is the scale parameter. The r-th order origin moment is defined as:

$$ m_{B(r)} = \lambda^{r}\,\Gamma\!\left(1+\frac{r}{k}\right) \quad (2) $$
where Γ is the gamma function. The mean μ_B and standard deviation σ_B of the background are then given by:

$$ \mu_B = \lambda\,\Gamma\!\left(1+\frac{1}{k}\right) \quad (3) $$

$$ \sigma_B = \lambda\sqrt{\Gamma\!\left(1+\frac{2}{k}\right)-\Gamma^{2}\!\left(1+\frac{1}{k}\right)} \quad (4) $$
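The closed-form moments above can be checked with a quick Monte Carlo sketch (again assuming the Weibull parameters estimated from the real data):

```python
import numpy as np
from math import gamma
from scipy import stats

k, lam = 6.67, 0.45  # shape and scale estimated from the real data

def weibull_moment(r):
    """r-th order origin moment of the Weibull background: lam^r * Gamma(1 + r/k)."""
    return lam ** r * gamma(1.0 + r / k)

mu_B = weibull_moment(1)
sigma_B = np.sqrt(weibull_moment(2) - mu_B ** 2)

# Monte Carlo cross-check against a large Weibull sample
samples = stats.weibull_min.rvs(k, scale=lam, size=200_000, random_state=1)
print(mu_B, samples.mean())
print(sigma_B, samples.std())
```

The analytical mean and standard deviation agree with the sample estimates to within Monte Carlo error.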
Setting the object amplitude A, the SNR can be defined as the ratio between the mean power of the object echo and the mean power of the background echo:

$$ \mathrm{SNR} = 10\log_{10}\frac{A^{2}}{m_{B(2)}} \quad (5) $$
To investigate the local property, the HOS is estimated within an image using a sliding computational window, and α denotes the proportion of object within the computational window. When α = 0, the sliding computational window is entirely composed of background. The r-th order origin moment computed over the whole window, m_{W(r)}, is defined [20] as:

$$ m_{W(r)} = \alpha\, m_{O(r)} + (1-\alpha)\, m_{B(r)} \quad (6) $$
where m_{O(r)} is the r-th order origin moment computed within the object region. For an object of constant amplitude A, m_{O(r)} is

$$ m_{O(r)} = A^{r} \quad (7) $$
Derived from the third moment, the skewness computed on the computational window is given by:

$$ S_W = \frac{m^{\prime}_{W(3)}}{\left(m^{\prime}_{W(2)}\right)^{3/2}} \quad (8) $$
where \( m^{\prime}_{W(r)} \) is the r-th order central moment. The relations between the origin and central moments of second and third order are

$$ m^{\prime}_{W(2)} = m_{W(2)} - m_{W(1)}^{2} \quad (9) $$

$$ m^{\prime}_{W(3)} = m_{W(3)} - 3\,m_{W(1)}\,m_{W(2)} + 2\,m_{W(1)}^{3} \quad (10) $$
Combining Eqs. (6), (9), and (10), Eq. (8) can be rewritten as:

$$ S_W = \frac{m_{W(3)} - 3\,m_{W(1)}\,m_{W(2)} + 2\,m_{W(1)}^{3}}{\left(m_{W(2)} - m_{W(1)}^{2}\right)^{3/2}} \quad (11) $$
According to Eq. (2), m_{B(r)} can be calculated from k and λ. Substituting Eq. (5) into Eq. (11), the object amplitude A can be replaced by the SNR. Thus, the local skewness S_W is a function of α as well as of the SNR.
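The dependence of S_W on α and the SNR can be sketched directly from Eqs. (2), (5), (6), and (11). The snippet below assumes a deterministic object echo with m_O(r) = A^r; the authors' exact object model may differ, so only the qualitative behavior (pure-background skewness at α = 0, a sharp positive peak at small α for a strong object) should be read from it:

```python
import numpy as np
from math import gamma

K, LAM = 6.67, 0.45  # Weibull shape and scale from the real data

def m_B(r):
    # r-th origin moment of the Weibull background, Eq. (2)
    return LAM ** r * gamma(1.0 + r / K)

def local_skewness(alpha, snr_db):
    """Theoretical local skewness for a window containing a fraction `alpha`
    of object pixels. Assumes a constant object amplitude A with
    m_O(r) = A**r (an illustrative assumption, not necessarily the
    authors' exact object model)."""
    A = np.sqrt(m_B(2) * 10 ** (snr_db / 10.0))            # invert the SNR definition
    m_W = lambda r: alpha * A ** r + (1 - alpha) * m_B(r)  # mixture moment, Eq. (6)
    m1, m2, m3 = m_W(1), m_W(2), m_W(3)
    c2 = m2 - m1 ** 2                                      # 2nd central moment
    c3 = m3 - 3 * m1 * m2 + 2 * m1 ** 3                    # 3rd central moment
    return c3 / c2 ** 1.5

# alpha = 0 recovers the (negative, for this shape parameter) skewness of the
# pure Weibull background, while a small object fraction at 20 dB SNR drives
# the skewness sharply positive.
alphas = np.linspace(0.001, 0.5, 500)
s = np.array([local_skewness(a, 20.0) for a in alphas])
print(local_skewness(0.0, 20.0), alphas[s.argmax()], s.max())
```
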
Local higher-order statistics for object representation
Object modeling by local HOS
Let us consider a simulated sonar image of 100 × 100 pixels with a background that follows a Weibull distribution. The shape parameter k = 6.67 and the scale parameter λ = 0.45 are estimated from the real dataset. The local skewness S_W displayed in Fig. 2a is a two-dimensional surface with respect to α and SNR. The local skewness function takes higher values when α is low and the SNR is high, and lower values when α is high and the SNR is around 0 dB. In the computational window, a zero S_W indicates that the tails on both sides of the mean balance out, which is the case for a symmetric distribution. A negative S_W indicates that the tail on the left side of the probability density function is longer than on the right side. Conversely, a positive S_W indicates that the tail on the right side is longer. Therefore, a large S_W can be regarded as a clue to a potential object, which disrupts the local background distribution. Along the SNR axis, the α corresponding to the maximal S_W is denoted α'. The SNR versus α' is shown in Fig. 2b: α' = 0 when the SNR is below 3 dB, α' = 0.005 when the SNR is above 25 dB, and α' lies between 0.01 and 0.125 when the SNR is between 3 and 25 dB.
Given an object with SNR = 20 dB, the local skewness S_W versus α is shown in Fig. 3a. The local skewness reaches the maximum S_W = 3.53 at α = 0.02 and drops as α increases. Modeling a square object with SNR = 20 dB and size T_O = 3, the local skewness is estimated using a sliding computational window of size T_W = 7. Note that the units of T_O and T_W are pixels throughout this paper. A bias-corrected estimator of the local skewness S_W [28] is

$$ \hat{S}_W = \frac{\sqrt{n(n-1)}}{n-2}\cdot\frac{\hat{m}_3}{\hat{m}_2^{3/2}} \quad (12) $$

in which \( \hat{m}_2 \) and \( \hat{m}_3 \) are the second and third sample central moments computed within the window,
and n is the number of pixels in the computational window. In the local skewness image, the object is represented by the square structure shown in Fig. 3b. The details of the square structure are shown in Fig. 3c: it is composed of lower values in the middle, higher values on the edges, and the highest values in the corners. The local skewness reaches its highest value Ŝ_W = 3.48, corresponding to the case where a single object pixel is included in the computational window, while the theoretical highest value is S_W = 3.53. This special structure is due to the variation of α shown in Fig. 3a.
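A minimal simulation of this square-structure effect (assuming a constant-amplitude object on a Weibull background, parameters as before) can be written as a sliding-window pass using the bias-corrected skewness estimator available in scipy:

```python
import numpy as np
from math import gamma
from scipy.stats import skew, weibull_min

# Simulated scene: Weibull background with a 3x3 constant-amplitude object
# at 20 dB SNR (an illustrative stand-in for the paper's Fig. 3b).
k, lam, T_O, T_W = 6.67, 0.45, 3, 7
A = np.sqrt(lam ** 2 * gamma(1 + 2 / k) * 10 ** (20 / 10))
img = weibull_min.rvs(k, scale=lam, size=(100, 100), random_state=2)
img[48:48 + T_O, 48:48 + T_O] = A

# Local skewness image: bias-corrected sample skewness (Joanes & Gill G1)
# computed in each T_W x T_W window.
half = T_W // 2
S = np.zeros_like(img)
for i in range(half, img.shape[0] - half):
    for j in range(half, img.shape[1] - half):
        win = img[i - half:i + half + 1, j - half:j + half + 1]
        S[i, j] = skew(win.ravel(), bias=False)

peak = np.unravel_index(S.argmax(), S.shape)
print(peak, S.max())
```

The skewness image spikes in the neighborhood of the inserted object, with the largest values where the window contains only a corner of it, reproducing the square structure qualitatively.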
Representing objects with different SNRs
The ability to represent objects with different SNRs is investigated by introducing object I with SNR_1 = 40 dB, object II with SNR_2 = 25 dB, and object III with SNR_3 = 10 dB into a simulated image. The object size is T_O = 3 and the computational window size is T_W = 7. The relation between the minimal proportion of object within the computational window, α_min, and the computational window size T_W can be defined as:

$$ \alpha_{\min} = \frac{1}{T_W^{2}} \quad (13) $$
According to Eq. (13), T_W = 7 yields α_min = 0.02. In the case of α = 0.02, the local skewness S_W versus SNR is shown in Fig. 4a. The local skewness remains low for SNRs below 0 dB and saturates at a high value for SNRs above 40 dB, but grows rapidly for SNRs between 0 and 40 dB. In the simulated image shown in Fig. 4b, object I with high SNR is clear, object II with medium SNR is somewhat obscure, and object III with low SNR is totally mixed with the background. In the local skewness image shown in Fig. 4c, all objects are visually characterized by a square structure. The higher the SNR, the larger the maximum value of the square structure and the more distinct its outline.
To evaluate the discrimination of the object between the original image and the local skewness image, the object contrast C is defined as follows:
where h_T is the maximum value of the square structure. The mean μ_B' and standard deviation σ_B' of the background in the local skewness image are computed over the N background pixels:

$$ \mu_{B'} = \frac{1}{N}\sum_{(i,j)\in B'} S_W(i,j), \qquad \sigma_{B'} = \sqrt{\frac{1}{N}\sum_{(i,j)\in B'}\bigl(S_W(i,j)-\mu_{B'}\bigr)^{2}} $$
Table 2 presents the performance of object representation. The object with low SNR (SNR_3 = 10 dB) gets C = 5.0513, the object with medium SNR (SNR_2 = 25 dB) gets C = 4.6754, and the object with high SNR (SNR_1 = 40 dB) gets C = 0.9650. In addition, h_T for all objects is close to the theoretical values shown in Fig. 4a. Mapping from the original image to the local skewness image, an object with lower SNR achieves a higher C, whereas an object with higher SNR obtains a lower C due to the saturated h_T.
Choice of the computational window size
As suggested by the curve in Fig. 2b and Eq. (13), the α_min derived from the computational window size T_W should correspond to the α' for the highest h_T. For example, for an object with SNR = 10 dB, the suitable computational window size is expected to be T_W = 3. In order to confirm this assumption, a square object with an SNR of 10 dB and a size of T_O = 3 is inserted in the simulated image shown in Fig. 5a, where the object is completely indistinguishable. The local skewness images with computational window sizes T_W = 3, T_W = 6, and T_W = 9 are shown in Fig. 5b–d, respectively. The performance with different computational window sizes is presented in Table 3. A high h_T and C are obtained with T_W = 3; however, the distinction between object and background is obscured by the high σ_B', as shown in Fig. 5b. By contrast, a low σ_B' is obtained with T_W = 9, but false alarms hinder the object determination due to the low h_T and C, as shown in Fig. 5d. With a computational window size of T_W = 6, the local skewness image shown in Fig. 5c provides satisfactory results, with a compromise between the contrast C and the standard deviation σ_B'.
To investigate the influence of object size and shape, three objects (SNR = 20 dB), including square object I of size T_O = 3, square object II of size T_O = 6, and spherical object III of diameter T_O = 8, are added to the simulated image shown in Fig. 6a. The local skewness images with computational window sizes T_W = 4, T_W = 7, and T_W = 10 are shown in Fig. 6b–d, respectively. One finds that the computational window sizes T_W = 4, T_W = 7, and T_W = 10 are best for describing the structure of objects of size T_O = 3, T_O = 6, and T_O = 8, respectively. The performance reported in Table 4 confirms that the computational window size T_W = 7, corresponding to α = 0.02, obtains the highest C and a moderate σ_B', which is considered the best result for the case of SNR = 20 dB. It is concluded that a suitable window size, slightly larger than the object size, is able to represent the object accurately, and the outline of the spherical object can be described by the high values at the edges. However, shape recognition needs further study.
In conclusion, the α_min derived from the computational window size T_W should correspond to the α' for the highest h_T; however, a trade-off between the highest object skewness h_T and the background standard deviation σ_B' has to be made when selecting a suitable computational window size. Moreover, a large window size generally makes it difficult to locate the object.
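The background side of this trade-off is easy to demonstrate numerically: on a pure-background simulation, enlarging the window shrinks the dispersion σ_B' of the skewness estimate (roughly as the square root of the inverse pixel count). A sketch with odd (centered) window sizes:

```python
import numpy as np
from scipy.stats import skew, weibull_min

# Pure Weibull background: no object present.
img = weibull_min.rvs(6.67, scale=0.45, size=(60, 60), random_state=3)

def sigma_B_prime(img, T_W):
    """Standard deviation of the local skewness over all T_W x T_W windows
    (odd T_W, so windows are centered on a pixel)."""
    half = T_W // 2
    vals = [skew(img[i - half:i + half + 1, j - half:j + half + 1].ravel(),
                 bias=False)
            for i in range(half, img.shape[0] - half)
            for j in range(half, img.shape[1] - half)]
    return float(np.std(vals))

for T_W in (3, 7, 11):
    print(T_W, sigma_B_prime(img, T_W))
```

The dispersion falls monotonically with window size, which is why large windows suppress false alarms at the cost of a flattened object response.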
Side lobe suppression for object with high SNR
In the case of high SNR, an object can be observed visually, but the image may be contaminated by high side lobes, which can occlude nearby objects. A side lobe suppression algorithm is required to attenuate the strong interferences in their directions of arrival (DOA) while keeping the desired signal distortionless. Adaptive beamforming [29, 30], such as the minimum-variance distortionless response (MVDR) beamformer [31], has shown good performance. However, a compromise has to be made between resolution and contrast under a limited computational budget. Therefore, we develop an algorithm based on background estimation, which identifies and offsets the high side lobes statistically. Consider a sonar amplitude image X = {x(i, j) | 1 ≤ i ≤ U, 1 ≤ j ≤ V} of U × V pixels. The proposed algorithm comprises the three main stages below; a pseudocode description is given in Fig. 7.
1. Calculate the normalized amplitude probability distribution of X, obtaining the max distribution point l_m and the max inflection point l_V.
2. Define X_V as the max points along the direction of sampling number, from which the points larger than l_V are labeled as X_S.
3. Save the object regions between the two valley points around each point of X_S, calculate the correction factor d as the ratio of the max distribution point l_m to the max side lobe peak l_s (d = l_m/l_s), then multiply the side lobe regions by d to offset them.
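A simplified, runnable sketch of these three stages is given below. The histogram mode stands in for l_m; for brevity, l_V is taken as a high amplitude quantile rather than the max inflection point, and the valley points are found by descending from each flagged peak. It is therefore an illustration of the idea, not the authors' exact implementation:

```python
import numpy as np

def suppress_sidelobes(img, tail_q=0.999):
    """Background-estimation side lobe suppression (simplified sketch).
    img: amplitude image with rows = beams, cols = sampling numbers."""
    out = img.astype(float).copy()
    # Stage 1: amplitude distribution -> max distribution point l_m;
    # a high quantile stands in for the max inflection point l_V.
    hist, edges = np.histogram(img, bins=100)
    b = hist.argmax()
    l_m = 0.5 * (edges[b] + edges[b + 1])
    l_V = np.quantile(img, tail_q)
    # Stage 2: max amplitude per sampling number; flag columns above l_V.
    flagged = np.where(img.max(axis=0) > l_V)[0]
    # Stage 3: keep the object run between the valley points around each
    # flagged peak; scale the remaining (side lobe) samples by d = l_m / l_s.
    for j in flagged:
        col = img[:, j]
        peak = int(col.argmax())
        lo = peak
        while lo > 0 and col[lo - 1] < col[lo]:
            lo -= 1
        hi = peak
        while hi < len(col) - 1 and col[hi + 1] < col[hi]:
            hi += 1
        mask = np.ones(len(col), dtype=bool)
        mask[lo:hi + 1] = False          # object region is preserved
        if mask.any():
            l_s = col[mask].max()        # max side lobe peak
            d = min(1.0, l_m / l_s)      # correction factor
            out[mask, j] = col[mask] * d
    return out
```

On values like those in Fig. 8 (l_m = 0.62, l_s = 0.86), this scheme would scale the side lobes by d ≈ 0.72 while leaving the object run untouched.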
As an illustration, a real multibeam sonar image containing a metal cube tied with ropes is displayed in Fig. 9a. The sonar amplitude image is shown in Fig. 8a, with a size of 1024 (beam number) × 350 (sampling number), and the corresponding normalized amplitude distribution is presented in Fig. 8b. From Fig. 8b, the max distribution point l_m = 0.62 and the max inflection point l_V = 0.87 are obtained. The max amplitude for each sampling number is shown in Fig. 8c. The amplitude points X_S that are larger than l_V (dashed line), including the highest point N (sampling number 250), are extracted from X_V. For each point belonging to X_S, the object regions are saved while the side lobe regions are offset with the correction factor d. The object region of N (between the two dashed lines) and the max side lobe peak are shown in Fig. 8d. The max side lobe peak l_s = 0.86 yields d = 0.72.
Local skewness images before and after side lobe suppression are displayed in Fig. 9b, c, using a computational window size of T_W = 14. The comparison reveals that a considerable improvement in object representation is achieved with the proposed algorithm: the side lobes are reduced, and the object regions are retained with a square structure.
Another real multibeam sonar image (shown in Fig. 12a), containing two groups of objects, is used to compare the algorithms' performance. Figure 10a shows the side lobe suppression with the MVDR beamformer, and Fig. 10b shows the side lobe suppression with background estimation. Both algorithms adequately reduce the side lobes. A significant SNR enhancement is achieved by the MVDR beamformer, whereas the boundaries of potential objects are described in greater detail by the proposed algorithm. Furthermore, the proposed algorithm is about 200 times faster than the MVDR beamformer, a significant improvement in terms of real-time performance.
Results and discussion
The proposed approach is verified on several real sonar images obtained with a multibeam sonar developed by Harbin Engineering University. The sonar covers a sector of 140° (vertical) × 2.5° (horizontal), with an operating frequency of 300 kHz. The emitted signal is a continuous wave (CW) with a pulse width of 0.1 ms, and the receiver is a 64-element uniform linear array with a sampling frequency of 48 kHz. A large number of datasets collected during several trials are processed by beamforming and scan conversion methods [32–35], generating image sequences with a resolution grid of 0.05 × 0.05 m²; two of them are selected as examples in the following.
Dataset I was acquired from a trial in an indoor tank at Harbin Engineering University, China. The corresponding image sequence I has a size of 111 × 241 pixels. Each frame describes a water-column scene of 7 × 10 m², in which a plastic ball and a metal block move together in the horizontal direction. Two typical frames are presented in Fig. 11a, b, where the objects are hardly visible due to their small size and their amplitude being similar to the background. The local skewness is estimated using a computational window size of T_W = 12, and the results are shown in Fig. 11c, d. Both objects are represented by the square structure and clearly distinguished from the background. Table 5 gives the performance results, in which SNR' is the SNR in the local skewness image. The results show that the local skewness image obtains a higher SNR' and a lower σ_B' than the original image.
Dataset II was obtained from a trial at Songhua Lake, Jilin province, China. The corresponding image sequence II has a size of 361 × 601 pixels. Each frame describes a water-column scene of 20 × 30 m², in which two groups of objects move relative to each other in the vertical direction. Each group is composed of a plastic ball and a metal block. A typical frame is displayed in Fig. 12a, where the two groups of objects are far apart: the object with low SNR can hardly be identified, and the object with high SNR has high side lobes. Another frame is shown in Fig. 12b, where the two groups are close and the side lobes caused by the high-SNR objects occlude the other objects. After side lobe suppression, the local skewness is estimated using a computational window size of T_W = 14. The results are presented in Fig. 12c, d, where all objects are apparent with the square structure and the influence of the side lobes is reduced. Table 6 gives the performance results. Object 1 has a fluctuation of 11.87 dB between the original frames, whereas the corresponding fluctuation is only 1.99 dB between the local skewness frames. This confirms that the local skewness is robust for object representation, especially in the case of an object with a large SNR fluctuation between consecutive frames. Furthermore, the boundary of the high-SNR object is made distinct by the proposed side lobe suppression algorithm.
Conclusions
This paper investigates the capacity of local higher-order statistics (HOS) to represent objects in multibeam sonar images. The local skewness is estimated by applying a sliding computational window to a sonar image, generating a local skewness image in which a square structure is associated with a potential object. One finds that: (1) The Weibull distribution proves to be a better choice for modeling the background of multibeam sonar images than the lognormal and Rayleigh distributions. (2) The square structure is composed of lower values in the middle, higher values on the edges, and the highest values in the corners, making the object easily identifiable. (3) Mapping from the original image to the local skewness image, an object with lower SNR achieves a higher object contrast C, whereas an object with higher SNR obtains a lower C; the robustness of object representation is thus improved, especially in the case of an object with a large SNR fluctuation. (4) In selecting a suitable sliding computational window size, the α_min derived from the window size T_W should correspond to the α' for the highest h_T; however, a trade-off between a high object skewness h_T and a low background standard deviation σ_B' has to be made. (5) In the case of an object with high SNR, an algorithm based on background estimation is able to significantly reduce the side lobes while completely retaining the object regions. Local HOS can provide local features relating to potential objects for segmentation, detection, and classification tasks; however, the robustness of these local features should be further tested and improved for shape recognition. In the future, we plan to extend this work to multiple-object tracking in complex scenes.
References
 1.
SM Simmons, DR Parsons, JL Best, O Orfeo, SN Lane, R Kostaschuk, RJ Hardy, G West, C Malzone, J Marcus, P Pocwiardowski, Monitoring suspended sediment dynamics using MBES. J. Hydraul. Eng. 136(1), 45–49 (2010)
 2.
I Quidu, L Jaulin, A Bertholom, Y Dupas, Robust multitarget tracking in forwardlooking sonar image sequences using navigational data. IEEE J. Ocean. Eng. 37(3), 417–430 (2012)
 3.
GG Acosta, SA Villar, Accumulated CA–CFAR process in 2D for online object detection from sidescan sonar data. IEEE J. Ocean. Eng. 40(3), 558–569 (2015)
 4.
XF Ye, ZH Zhang, PX Liu, HL Guan, Sonar image segmentation based on GMRF and levelset models. Ocean Eng. 37(10), 891–901 (2010)
 5.
T Celik, T Tjahjadi, A novel method for sidescan sonar image segmentation. IEEE J. Ocean. Eng. 36(2), 186–194 (2011)
 6.
RJ Campbell, PJ Flynn, A survey of freeform object representation and recognition techniques. Comput. Vis. Image Underst. 81(2), 166–210 (2001)
 7.
B Moghaddam, A Pentland, Probabilistic visual learning for object representation. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 696–710 (1997)
 8.
D Dai, MJ Chantler, DM Lane, N Williams, Proceedings of International Conference on Image Processing and Its Applications. A spatialtemporal approach for segmentation of moving and static objects in sector scan sonar image sequences, 1995, pp. 163–167
 9.
JP Stitt, RL Tutwiler, AS Lewis, Proceedings of the IASTED International Conference on Signal and Image Processing. Fuzzy cmeans image segmentation of sidescan sonar images, 2001, pp. 27–32
 10.
M Mignotte, C Collet, P Perez, P Bouthemy, Sonar image segmentation using an unsupervised hierarchical MRF model. IEEE Trans. Image Process. 9(7), 1216–1231 (2000)
 11.
GJ Dobeck, JC Hyland, L Smedley, Proceedings of the International Society for Optical Engineering (SPIE). Automated detection/classification of sea mines in sonar imagery, 1997, pp. 90–110
 12.
S Reed, Y Petillot, J Bell, Modelbased approach to the detection and classification of mines in sidescan sonar. Appl Opt 43(2), 237–246 (2004)
 13.
DP Williams, Bayesian data fusion of multiview synthetic aperture sonar imagery for seabed classification. IEEE Trans. Image Process. 18(6), 1239–1254 (2009)
 14.
CM Ciany, W Zurawski, Proceedings of Oceans Mts/IEEE Conference & Exhibition. Performance of computer aided detection/computer aided classification and data fusion algorithms for automated detection and classification of underwater mines, 2001, pp. 277–284
 15.
S Reed, IT Ruiz, C Capus, Y Petillot, The fusion of large scale classified sidescan sonar image mosaics. IEEE Trans. Image Process. 15(7), 2049–2060 (2006)
 16.
GR Cutter Jr, Y Rzhanov, LA Mayer, Automated segmentation of seafloor bathymetry from multibeam echosounder data using local Fourier histogram texture features. J. Exp. Mar. Biol. Ecol. 285(2), 355–370 (2003)
 17.
A Mahiddine, J Seinturier, JM Boï, P Drap, D Merad, Proceedings of 20th International Conference on Computer Graphics, Visualization and Computer Vision. Performances Analysis of Underwater Image Preprocessing Techniques on the Repeatability of SIFT and SURF Descriptors, 2012, pp. 275–282
 18.
S Lyu, H Farid, Steganalysis using higherorder image statistics. IEEE Trans. Inf. Forensic Secur. 1(1), 111–119 (2006)
 19.
A Briassouli, I Kompatsiaris, Robust temporal activity templates using higher order statistics. IEEE Trans. Image Process. 18(12), 2756–2768 (2009)
 20.
F Maussang, J Chanussot, A Hetet, M Amate, Higherorder statistics for the detection of small objects in a noisy background application on sonar imaging. EURASIP J. Adv. Signal Process. 2007(1), 1–17 (2007)
 21.
F Maussang, J Chanussot, A Hetet, M Amate, Mean–standard deviation representation of sonar images for echo detection: application to SAS images. IEEE J. Ocean. Eng. 32(4), 956–970 (2007)
 22.
S Kuttikkad, R Chellappa, Proceedings of IEEE International Conference on Image Processing. NonGaussian CFAR techniques for target detection in high resolution SAR images, 1994, pp. 910–914
 23.
JM Gelb, RE Heath, GL Tipple, Statistics of distinct clutter classes in midfrequency active sonar. IEEE J. Ocean. Eng. 35(2), 220–229 (2010)
 24.
DA Abraham, JM Gelb, AW Oldag, Background and clutter mixture distributions for active sonar statistics. IEEE J. Ocean. Eng. 36(2), 231–247 (2011)
 25.
IR Joughin, DB Percival, DP Winebrenner, Maximum likelihood estimation of K distribution parameters for SAR data. IEEE Trans. Geosci. Remote Sens. 31(5), 989–999 (1993)
 26.
DR Iskander, AM Zoubir, B Boashash, A method for estimating the parameters of the K distribution. IEEE Trans. Signal Process 47(4), 1147–1151 (1999)
 27.
M Mignotte, C Collet, P Pérez, P Bouthemy, Three-class Markovian segmentation of high-resolution sonar images. Comput. Vis. Image Underst. 76(3), 191–204 (1999)
 28.
DN Joanes, CA Gill, Comparing measures of sample skewness and kurtosis. J. R. Stat. Soc. Ser. D 47(1), 183–189 (1998)
 29.
JF Synnevag, A Austeng, S Holm, A lowcomplexity datadependent beamformer. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 58(2), 281–289 (2011)
 30.
CC Gaudes, I Santamaría, J Vía, EM Gómez, TS Paules, Robust array beamforming with sidelobe control using support vector machines. IEEE Trans. Signal Process. 55(2), 574–584 (2007)
 31.
M Wax, Y Anu, Performance analysis of the minimum variance beamformer. IEEE Trans. Signal Process 44(4), 928–937 (1996)
 32.
C Xu, HS Li, BW Chen, T Zhou, Multibeam interferometric seafloor imaging technology. J. Harbin Eng. Univ. 34(9), 1159–1164 (2013)
 33.
X Liu, HS Li, T Zhou, C Xu, B Yao, Multibeam seafloor imaging technology based on the multiple subarray detection method. J. Harbin Eng. Univ. 33(2), 197–202 (2012)
 34.
A Trucco, M Garofalo, S Repetto, G Vernazza, Processing and analysis of underwater acoustic images generated by mechanically scanned sonar systems. IEEE Trans. Instrum. Meas. 58(7), 2061–2071 (2009)
 35.
R Schettini, S Corchs, Underwater image processing: state of the art of restoration and image enhancement methods. EURASIP J. Adv. Signal Process. 2010(3), 1–14 (2010)
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant No. 41327004, 41376103, 41306038, and 41506115) and the Fundamental Research Funds for the Central Universities (Grant No. HEUCF160510). The authors would like to thank the anonymous reviewers for the constructive comments on the manuscript.
Competing interests
The authors declare that they have no competing interests.
Additional information
An erratum to this article is available at http://dx.doi.org/10.1186/s13634-017-0471-2.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Li, H., Gao, J., Du, W. et al. Object representation for multibeam sonar image using local higher-order statistics. EURASIP J. Adv. Signal Process. 2017, 7 (2017). https://doi.org/10.1186/s13634-016-0439-7
Keywords
 Higher-order statistics
 Object representation
 Side lobe suppression
 Multibeam sonar imaging
 Acoustic image
 Skewness
 Weibull distribution