Open Access

High-Resolution Sonars: What Resolution Do We Need for Target Recognition?

EURASIP Journal on Advances in Signal Processing 2010, 2010:205095

Received: 23 December 2009

Accepted: 1 December 2010

Published: 9 December 2010


Target recognition in sonar imagery has long been an active research area in the maritime domain, especially in the mine countermeasure context. Recently it has received even more attention as new sensors with increased resolution have been developed; new threats to critical maritime assets and a new paradigm for target recognition based on autonomous platforms have emerged. With the recent introduction of Synthetic Aperture Sonar systems and high-frequency sonars, sonar resolution has dramatically increased and noise levels decreased. Sonar images are distance images, but at high resolution they tend to appear visually as optical images. Traditionally, algorithms have been developed specifically for imaging sonars because of their limited resolution and high noise levels. With high-resolution sonars, algorithms developed in the image processing field for natural images become applicable. However, the lack of large datasets has hampered the development of such algorithms. Here we present a fast and realistic sonar simulator enabling development and evaluation of such algorithms. We develop a classifier and then analyse its performance using our simulated synthetic sonar images. Finally, we discuss sensor resolution requirements to achieve effective classification of various targets and demonstrate that with high resolution sonars target highlight analysis is the key for target recognition.

1. Introduction

Target recognition in sonar imagery has long been an active research area in the maritime domain. Recently, however, it has received increased attention, in part due to the development of new generations of sensors with increased resolution and in part due to the emergence of new threats to critical maritime assets and a new paradigm for target recognition based on autonomous platforms. The recent introduction of operational Synthetic Aperture Sonar (SAS) systems [1, 2] and the development of ultrahigh resolution acoustic cameras [3] have increased the resolution of the images available for target recognition tenfold, as demonstrated in Figure 1. In parallel, traditional dedicated ships are being replaced by small, low cost, autonomous platforms easily deployable by any vessel of opportunity. This creates new sensing and processing challenges, as the classification algorithms need to be fully automatic and run in real time on the platforms. The platforms' behaviours must also be adapted autonomously online, to guarantee that appropriate detection performance is met, sometimes on very challenging terrains. This creates a direct link between sensing and mission planning, sometimes called active perception, where the data acquisition is directly controlled by the scene interpretation.
Figure 1

Example of Target in Synthetic Aperture Sonar (a) and Acoustic Camera (b). Images are courtesy of the NATO Undersea Research Centre (a) and Soundmetrics Ltd (b).

Detection and identification techniques have tended to focus on saliency (global rarity or local contrast) [4–6], model-based detection [7–15] or supervised learning [16–22]. Alternative approaches that investigate the internal structure of objects using wideband acoustics [23, 24] are showing some promise, but it is now widely acknowledged that current techniques are reaching their limits. Their performance does not enable rapid and effective mine clearance, and false alarm rates remain prohibitively high [4–22]. This is not a critical problem when operators can validate the outputs of the algorithms directly, as the algorithms still enable a very high data compression rate by dramatically reducing the amount of information that an operator has to review. The increasing use of autonomous platforms raises fundamentally different challenges. Underwater communication is very poor due to the very low bandwidth of the medium (the data transfer rate is typically around 300 bits/s) and does not permit online operator visualisation or intervention. For this reason, the use of multiple collaborating platforms requires robust and accurate on-board decision making.

The question of resolution has been raised again by the advent of very high resolution sidescan, forward-look and SAS systems. These change the quality of the images markedly producing near-optical images. This paper looks at whether the resolution is now high enough to apply optical image processing techniques to take advantage of advances made in other fields.

To improve performance, the MCM (Mine Countermeasures) community has focused on improving the resolution of the sensors, and high resolution sonars are now a reality. However, these sensors are very expensive and very limited data (if any) are available to the research community. This has hampered the development of new algorithms for effective on-board decision making.

In this paper, we present tools and algorithms to address the challenges for the development of improved target detection algorithms using high resolution sensors. We focus on two key challenges.
  1. (i) The development of fast simulation tools for high resolution sensors: this will enable us to tackle the current lack of real datasets to develop and evaluate new algorithms, including generative models for target identification. It will also provide a ground-truth simulation environment to evaluate potential active perception strategies.

  2. (ii) What resolution do we need? The development of new sensors has been driven by the need for increased resolution, but the resolution actually required for effective classification has yet to be established.


The remainder of the paper is organized as follows: In Section 2, a fast and realistic sonar simulator is described. In Sections 3 and 4, the simulator is used to explore the resolution issue. Its flexibility enables the generation of realistic sonar images at various resolutions and the exploration of the effects of resolution on classification performance. Extensive simulations provide a database of synthetic images on various seabed types. Algorithms can be developed and evaluated using the database. The importance of the pixel resolution for image-based algorithms is analysed as well as the amount of information contained in the target shadow.

2. Sidescan Simulator

Sonar images are difficult and expensive to obtain. A realistic simulator offers an alternative to develop and test MCM algorithms. High-frequency sonars and SAS increase the resolution of the sonar image from tens of cm to a few cm (3 to 5 cm). The resulting sonar images become closer to optical images. By increasing the resolution of the image the objects become sharper. Our objective here is to produce a simulator that can realistically reproduce such images in real time.

There is an existing body of research into sonar simulation [25, 26]. The simulators are generally based on ray-tracing techniques [27] or on a solution to the full wave equation [28]. SAS simulation takes into account the SAS processing and is, in general, highly complex [26]. Critically, in all cases, the algorithms are extremely slow (one hour to several days to compute a synthetic sidescan image on a desktop computer). When high frequencies are used, the path of the acoustic waves can be approximated by straight lines. In this case, classical ray-tracing techniques combined with a careful and detailed modeling of the energy-based sonar equation can be used. The results obtained are very similar to those obtained using more complex propagation models, yet they are much faster to compute and produce very realistic images.

Note that this simulator is a high-precision sidescan simulator, which can be equally well applied to forward looking sonar. SAS images differ from sidescan images in two main respects: a constant pixel resolution at all ranges and a blur in the object shadows [29]. The simulator can cope with the constant range resolution, so synthetic target highlights will appear similar. A fully representative SAS shadow model remains to be implemented, but the analyses are still relevant for identification of targets from highlights in SAS imagery.

The simulator presented here first generates a realistic synthetic 3D environment. The 3D environment is divided into three layers: a partition layer which assigns a seabed type to each area, an elevation profile corresponding to the general variation of the seabed, and a 3D texture that models each seabed structure. Figure 2 displays snapshots of four different types of seabed (flat sediment, sand ripples, rocky seabed and a cluttered environment) that can be generated by the simulator. All these natural structures can be well modeled using fractal representations. The simulator can also take into account various compositions of the seabed in terms of scattering strengths. The boundaries between each seabed type are also modeled using fractals.
Figure 2

Snapshots of four different types of seabed: (a) flat seabed, (b) sand ripples, (c) rocky seabed and (d) cluttered environment.

Objects of different shapes and different materials can be inserted into the environment. For MCM algorithms, several types of mines have been modeled such as the Manta (truncated cone shape), Rockan and cylindrical mines.

The resulting 3D environment is a heightmap, meaning that each location corresponds to a unique elevation. Objects floating in midwater, for example, cannot be modelled here.

The sonar images are produced from this 3D environment, taking into account a particular trajectory of the sensor (mounted on a vessel or an autonomous platform). The seabed reflectivity is computed using state-of-the-art models developed by APL-UW in the High-Frequency Ocean Environmental Acoustic Models Handbook [30], and the reflectivity of the targets is based on a Lambertian model. A pseudo ray-tracing algorithm is performed and the sonar equation is solved for each insonified area, giving the backscattered energy. Note that the shadows are automatically taken into account by the pseudo ray-tracing algorithm. The processing time required to compute a sonar image of 50 m by 50 m using a 2 GHz Intel Core 2 Duo with 2 GB of memory is approximately 7 seconds. The remainder of the section details each of the modules required to perform the simulation.

2.1. 3D Digital Terrain Model Generator

The aim of this module is to generate realistic 3D seabed environments. It should be able to handle several types of seabed, to generate a realistic model for each seabed type, and to synthesize a realistic 3D elevation. For these reasons, the final 3D structure is built by superposition of three different layers: a partition layer, an elevation layer and a texture layer. Figure 3 shows an example of the three different layers which form the final 3D environment.
Figure 3

Decomposition of the 3D representation of the seafloor in 3 layers: partition between the different types of seabed, global elevation, roughness and texture.
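The superposition of the three layers can be sketched as follows. This is an illustrative reconstruction under our own naming, not the simulator's actual code: the partition map selects, per pixel, which texture is added onto the global elevation.

```python
import numpy as np

def compose_seabed(elevation, textures, partition):
    """Superpose the three layers described above: the integer partition
    map picks texture k wherever partition == k, and the chosen texture
    is added onto the global elevation profile.
    (Illustrative sketch; function and argument names are assumptions.)"""
    texture = np.choose(partition, textures)
    return elevation + texture
```

For example, a binary partition map with two texture layers yields a terrain where each pixel carries the elevation plus the texture of its assigned seabed type.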

In the late seventies, mathematicians such as Mandelbrot [31] linked the symmetry patterns and self-similarity found in nature to mathematical objects called fractals [32–35]. Fractals have been used to model realistic textures and heightmap terrains [33]. A quick way to generate realistic 3D fractal heightmap terrains is by using a pink noise generator [33]. A pink noise is characterized by its power spectral density decreasing as 1/f^β, where the exponent β > 0 controls the decay.
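A minimal spectral-synthesis sketch of such a generator is given below. The paper does not specify an implementation; the FFT-based approach and the function name are our assumptions.

```python
import numpy as np

def fractal_heightmap(n, beta, seed=0):
    """Generate an n x n fractal heightmap by spectral synthesis:
    random phases are shaped so the power spectral density falls off
    as 1/f^beta (pink noise). Illustrative sketch, not the paper's code."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)
    fy = np.fft.fftfreq(n)
    f = np.hypot(*np.meshgrid(fx, fy))
    f[0, 0] = 1.0                     # avoid division by zero at DC
    amplitude = f ** (-beta / 2.0)    # amplitude ~ f^(-beta/2) => PSD ~ f^(-beta)
    amplitude[0, 0] = 0.0             # zero-mean terrain
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    spectrum = amplitude * np.exp(1j * phase)
    # Taking the real part keeps a real-valued height field.
    height = np.real(np.fft.ifft2(spectrum))
    return height / (np.abs(height).max() + 1e-12)  # normalise to [-1, 1]
```

Larger β yields smoother terrain, smaller β rougher terrain, which is how the roughness tuning described below can operate.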

2.1.1. The Partition Layer

In the simulator, various types of seabeds can be chosen (up to three for a given image). The boundaries between the seabed types are computed using fractal borders.

2.1.2. Elevation Layer

This layer contains two types of possible elevation: a linear slope characterizing coastal seabeds and a random 3D elevation. The random elevation is a smoothing of a pink noise process. The exponent β is used to tune the roughness of the seabed.

2.1.3. Texture Layer

Four different textures have been created to model four kinds of seabed. Once again the textures are synthesized by fractal models derived from pink noise models.

(a) Flat Seabed

A simple flat floor is used for the flat seabed. No texture is needed in this case. Differences in reflectivity and scattering between sediment types are handled by the Image Generator module.

(b) Sand Ripples

The sand ripples are characterized by the periodicity and the direction of the ripples. A modified pink noise is used here, in which the frequency decay is anisotropic. The amplitude of the magnitude of the Fourier transform follows (1), which is parameterized by the frequency and the direction of the ripples. The phase is modeled by a uniform distribution.
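One way to realise such an anisotropic spectrum is sketched below. Since the exact form of (1) is not reproduced here, the Gaussian radial profile, the cosine angular lobe and all parameter names are our assumptions.

```python
import numpy as np

def ripple_texture(n, f0, theta0, sharpness=40.0, seed=0):
    """Sand-ripple texture by anisotropic spectral shaping: the Fourier
    magnitude is concentrated near radial frequency f0 (cycles/sample)
    along direction theta0 (radians), with uniformly random phase.
    Illustrative stand-in for the unspecified equation (1)."""
    rng = np.random.default_rng(seed)
    fx, fy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
    f = np.hypot(fx, fy)
    ang = np.arctan2(fy, fx)
    radial = np.exp(-sharpness * (f - f0) ** 2 / max(f0, 1e-6) ** 2)
    angular = np.abs(np.cos(ang - theta0))   # favours ripples across theta0
    amplitude = radial * angular
    amplitude[0, 0] = 0.0                    # zero-mean texture
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    tex = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    return tex / (np.abs(tex).max() + 1e-12)
```

Varying f0 changes the ripple wavelength and varying theta0 rotates the ripple field, mirroring the two parameters of (1).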

(c) Rocky Seabed

The magnitude of the Fourier transform of the rocky seabed is modeled by (2), which includes a factor modeling the anisotropic erosion of the rock due to underwater currents.

(d) Cluttered Environment

The cluttered environment is characterized by a random distribution of small rocks. A Poisson distribution has been chosen for the spatial distribution of the rocks on the seabed, as the mean number of occurrences is relatively small.
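A Poisson clutter field of this kind can be sketched as follows; the density value and the rock-size range are illustrative assumptions, as the paper gives neither.

```python
import numpy as np

def scatter_rocks(area_m2, density_per_m2, seed=0):
    """Draw the number of rocks from a Poisson law with mean
    density * area, then place them uniformly over a square patch.
    (Illustrative sketch; density and radii ranges are assumptions.)"""
    rng = np.random.default_rng(seed)
    n_rocks = rng.poisson(density_per_m2 * area_m2)
    side = np.sqrt(area_m2)
    positions = rng.uniform(0.0, side, size=(n_rocks, 2))  # (x, y) in metres
    radii = rng.uniform(0.1, 0.5, size=n_rocks)            # rock radii in metres
    return positions, radii
```

Each rock can then be stamped onto the heightmap as a small bump at its drawn position.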

2.2. Targets

A separate module is provided for adding targets into the environment. Figure 4 displays the 3D models of 6 different targets. Location, size and material composition can be adjusted by the user. The resulting sidescan images offer a large data base for detection and classification algorithms.
Figure 4

3D models of the different targets and minelike objects.

Nonmine targets can also be generated by varying parameters in this module. Several are used to test the algorithms with results presented in Section 4.1.2.

2.3. The Sonar Image Generator

The sonar module computes the sidescan image from a given 3D environment. The simulator is ray-tracing-based and solves the sonar equation [36] (given in (3)). Because (3) is an energetic equation, phenomena such as multipaths are not taken into account. For a monostatic sonar system, the sound propagation can be expressed from an energetic point of view as

EL = SL − 2TL + TS − (NL − DI + RL), (3)

where EL is the excess level, that is, the backscattered energy, SL is the Source Level of the projector, DI is the Directivity Index, TL is the Transmission Loss, NL is the Noise Level, RL is the Reverberation Level and TS is the Target Strength. All the parameters are measured in decibels (dB) relative to the standard reference intensity of a 1 μPa plane wave.

In a wide range of cases, a good approximation to transmission loss can be made by considering the process as a combination of free-field spherical spreading and an added absorption loss. This working rule can be expressed as

TL = 20 log10(r) + αr, (4)

where r is the transmission range and α is an attenuation coefficient expressed in dB/m. The attenuation coefficient can be expressed as the sum of two chemical relaxation processes and the absorption of pure water. It can be computed numerically thanks to the Francois-Garrison formula [37].
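The spreading-plus-absorption rule can be computed as below. The Francois-Garrison computation of α is not reimplemented here; the default value (~0.03 dB/m, plausible in the 100–300 kHz band) is an assumption standing in for it.

```python
import numpy as np

def transmission_loss(range_m, alpha_db_per_m=0.03):
    """One-way transmission loss: spherical spreading plus linear
    absorption, TL = 20 log10(r) + alpha * r (dB). The alpha default
    is an illustrative stand-in for the Francois-Garrison formula."""
    r = np.asarray(range_m, dtype=float)
    return 20.0 * np.log10(r) + alpha_db_per_m * r
```

At 100 m range with α = 0.03 dB/m this gives 40 dB of spreading loss plus 3 dB of absorption.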

Reverberation Level is an important restricting factor in the detection process, especially in the context of MCM. At short ranges, it represents the most significant noise factor. The surface reverberation can be developed as drawn in Figure 5, where the elementary surface dA subtended by the horizontal angle dθ is dependent on the pulse length and range. Returns from the front and rear ends of the pulse determine the size of the elementary surface element dA. So, for the seabed contribution to reverberation level, we can write (5), where a is the altitude of the sonar, r is the range to the seabed along the main axis of the transducer beam and t is time.
Figure 5

Definitions for surface reverberation modeling.

Three types of seabed have been implemented: Rough Rock, Sandy Gravel and Very Fine Sand. A theoretical Bottom Scattering Strength (the bottom scattering term in (5)) can be computed thanks to [30].

The source level SL is the power of the transmitter. It is a constant given by the sonar manufacturer; for sidescan sonars, SL is typically between 200 and 230 dB.

The Directivity Index (DI) is a sonar-dependent factor associated with the directionality of the transducer system. The simulator includes a simple beam pattern derived from a continuous line array model of length l. The beam pattern function can be computed as

b(θ) = [sin(πl sin(θ)/λ) / (πl sin(θ)/λ)]², (6)

where λ is the acoustic wavelength.

Any measured transducer beam pattern can also be integrated into the simulator.
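The continuous line array pattern can be evaluated directly; this is a sketch of the standard far-field expression, with our own parameter names and an assumed sound speed of 1500 m/s.

```python
import numpy as np

def line_array_pattern(theta_rad, length_m, freq_hz, c=1500.0):
    """Far-field power beam pattern of a continuous line array of
    length l: b(theta) = sinc^2(l sin(theta) / lambda), normalised to 1
    at broadside. np.sinc(x) computes sin(pi x)/(pi x)."""
    lam = c / freq_hz
    x = length_m * np.sin(theta_rad) / lam
    return np.sinc(x) ** 2
```

For a 0.5 m array at 100 kHz (λ = 1.5 cm) the main lobe is under two degrees wide, consistent with high along-track resolution.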

In our model, the targets form part of the 3D environment. The Target Strength (TS) is computed using a Lambertian model. The reflectance factor in the Lambertian law is associated with the acoustic impedance. The simulator takes into account the acoustic impedance of the target, given by Z = ρc, where ρ is the density of the material and c the longitudinal sound speed in the material.

The sidescan simulator is designed for validity in the range of frequencies from 80 kHz to 300 kHz. We only consider one contribution to the ambient Noise Level: the thermal noise. For thermal agitation, the equivalent noise spectrum level is given by the empirical formula [36]:

NL = −15 + 20 log10(f), (7)

with f expressed in kHz.
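The thermal-noise rule (the standard empirical formula found in Urick-style references) is a one-liner; this helper, with our own naming, shows it over the simulator's 80–300 kHz validity band.

```python
import numpy as np

def thermal_noise_level(freq_khz):
    """Equivalent thermal-noise spectrum level in dB re 1 uPa:
    NL = -15 + 20 log10(f), with f in kHz (standard empirical rule)."""
    return -15.0 + 20.0 * np.log10(freq_khz)
```

At 100 kHz this gives 25 dB re 1 μPa, rising by 6 dB per octave, so thermal noise grows steadily across the simulator's frequency band.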
The trajectory of the sonar platform is tuneable (as shown in Figure 6). This allows multiview sidescan images of the same environment. Figure 7 displays sonar images of the same scene with two different angles of view.
Figure 6

The trajectory of the sonar platform can be placed into the 3D environment.

Figure 7

Display of the resulting sidescan images ((a) and (b)) of the same scene along different trajectories. The seafloor is composed of two sand ripple fields at different frequencies on different sediments (VeryFineSand for the high frequency ripples and VeryCoarseSand for the low frequency ripples). A Manta mine has been placed in the centre of the map.

Further examples of typical images obtained for the various types of seabed are shown in Figure 8.
Figure 8

Examples of simulated sonar images for different seabed types (clutter, flat, ripples), 3D elevation and scattering strength. (a) represents a smooth seabed with some small variations, (b) represents a mixture of flat and cluttered seabed and (c) represents a rippled seabed.

3. Classifier

3.1. Target Recognition Using Sonar Images

Target recognition in sonar imagery is a long-standing problem which has attracted considerable attention [715]. However, the resolution of the sensors available has limited not only the spectrum of techniques applicable, but also their performances. Most techniques for detection rely on matched filtering [38] or statistical modeling [11, 14], whilst recognition is mainly model-based [10, 13, 15].

The limitations of current sidescan technology are highlighted in Figure 9. It would seem from this figure that only SAS systems can give large area coverage while still providing the high resolution needed for identification. However, the boundaries drawn between detection and identification are more the result of general wisdom than of solid scientific evidence.
Figure 9

Ability to detect and identify targets as a function of resolution and coverage rate (Nm/h: nautical mile per hour) for the best sidescan and synthetic aperture sonars. The SAS sonars here are a typical 100–300 kHz sonar in optimal conditions for synthetic aperture.

New high resolution sonars such as SAS produce images which get closer to traditional optical imagery. This is also opening a new era of algorithm development for acoustics, as techniques recently developed in computer vision become more applicable. For example, the SAS system developed by NURC (MUSCLE) can achieve a 3 to 5 cm pixel resolution, almost independent of range. Thanks to this resolution, direct analysis of the target echo, rather than traditional techniques based on its shadow, becomes possible.

Identifying the resolution required to perform target classification is not a simple problem. In sonar, this has been attempted by various authors [39–42], generally looking at the minimum resolution required to distinguish a sphere from a cube and using information theory approaches. These techniques provide a lower bound on the minimum resolution required but tend to be overoptimistic. We focus here on modern subspace algorithms based on PCA (Principal Component Analysis) as a mechanism to analyze the resolution needs for classification. Why focus on such techniques? The main reason is that they are very versatile and have been applied successfully to a variety of classical target identification problems, as demonstrated recently on face recognition [43] and land-based object detection [44].

3.2. Principal Component Analysis

The algorithm used in this paper for classification is based on the eigenfaces algorithm. The PCA-based eigenfaces approach has been used for face recognition purposes [45, 46] and is still close to the state of the art for this application [43].

Assume the training set for a given target is composed of P images. Each target image is an N × M matrix, which is converted into a vector of dimension NM. A mean image of the target is computed using (8).

The training vectors are centered and normalized according to (9). In the training set, the target is observed at various ranges (from 5 m to 50 m from the sonar), so the contrast and illumination change drastically across the training set; normalizing each image by its standard deviation reduces this effect.

Let X be the preprocessed training set, of dimension NM × P. The covariance matrix of X is calculated, its largest eigenvalues are computed, and the corresponding eigenvectors form the decomposition base of the target. The subspace spanned by these eigenvectors is called the target space. The number of retained eigenvalues has been set to 20.

The classifier projects the test target image onto each target space. Denoting by ŷ the projection of the test image y onto a given target space, the estimated target is the one corresponding to the minimum distance between y and ŷ, as expressed in (10). The target space yielding the minimum distance is the most compact space representing the object under inspection.
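The training and classification steps above can be sketched as follows. This is a minimal eigenfaces-style reconstruction under our own naming (SVD in place of an explicit covariance eigendecomposition, a small component count for the toy test), not the authors' implementation.

```python
import numpy as np

def train_target_space(images, n_components=20):
    """Build one PCA target space from a P x N x M stack of target images:
    flatten, centre and normalise each image by its own mean and standard
    deviation, then keep the leading principal directions via SVD."""
    X = images.reshape(len(images), -1).astype(float)
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-12)
    mean = X.mean(axis=0)
    U, _, _ = np.linalg.svd((X - mean).T, full_matrices=False)
    return mean, U[:, :n_components]

def classify(image, spaces):
    """Assign the test image to the target space with the smallest
    reconstruction distance ||y - y_hat||, as in (10)."""
    y = image.reshape(-1).astype(float)
    y = (y - y.mean()) / (y.std() + 1e-12)
    dists = []
    for mean, basis in spaces:
        coeffs = basis.T @ (y - mean)
        y_hat = mean + basis @ coeffs
        dists.append(np.linalg.norm(y - y_hat))
    return int(np.argmin(dists))
```

In use, one target space is trained per candidate object, and a detected snippet is labelled with the space that reconstructs it best.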

4. Results

In previous work [15, 16, 47, 48], target classification algorithms using standard sidescan sonars have mainly been based on the analysis of the targets' shadows. With high resolution sonars, more information should be exploitable from the target's highlight. In this section, we investigate the resolution needed for the PCA image-based classifier described earlier to classify using only the information carried by the highlight.

The sidescan simulator presented in Section 2 provides synthetic data to train and test the PCA image-based classifier. All the sidescan images are generated with a randomly selected seafloor (from flat seabed, ripples and cluttered environment), random sonar altitude (from 2 to 10 metres) and random range for the targets (from 5 to 50 metres).

For each experiment, two separate sets of sonar images have been computed, one specifically for training (in order to compute the target spaces) and one specifically for testing. At each sonar resolution and for each target, 80 synthetic target images at random ranges, random altitudes and with randomly selected seafloors have been used for training. A larger set of 40000 synthetic target images is used to test the classifier. The classifier is trained and tested according to the algorithm described in Section 3.2.

4.1. What Precision Is Needed?

4.1.1. Identification

In this first experiment the PCA classifier is trained for identification. Assuming a minelike object has been detected and classified as a mine, the algorithm identifies which kind of mine the target is. Four targets have been chosen: a Manta mine (truncated cone, 98 cm lower diameter, 49 cm upper diameter, 47 cm height), a Rockan mine (L × W × H: 100 cm × 50 cm × 40 cm), a cuboid with dimensions 100 cm × 30 cm × 30 cm and a cylinder 100 cm long and 30 cm in diameter.

Figure 10 displays snapshots of the four different targets for a 5 cm sonar resolution.
Figure 10

Snapshot of the four targets. (a) Manta, on sand ripples, (b) Rockan on cluttered environment, (c) Cuboid on flat seabed, (d) Cylinder on sand ripples. The pixel size in these targets images is 5 cm.

The pixel resolution is tunable in the simulator. Sidescan simulation/classification processes have been run for 15 different pixel resolutions, from 3 cm (high resolution sonar) to 30 cm (low resolution sonar), covering the detection and classification range of side looking sonars. Figure 11 displays the misidentification rate of the four targets against the pixel resolution.
Figure 11

Misidentification of the four targets as a function of the pixel resolution. This is considering the highlight of the targets.
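In the paper this sweep re-simulates the imagery at each pixel size; an approximate alternative on fixed high-resolution snippets is to block-average pixels before training and testing. This helper is ours, not part of the described experiment.

```python
import numpy as np

def coarsen(img, factor):
    """Emulate a coarser sonar pixel by block-averaging factor x factor
    groups (e.g. factor=4 turns 5 cm pixels into 20 cm pixels).
    Edge rows/columns that do not fill a block are dropped."""
    n = (img.shape[0] // factor) * factor
    m = (img.shape[1] // factor) * factor
    blocks = img[:n, :m].reshape(n // factor, factor, m // factor, factor)
    return blocks.mean(axis=(1, 3))
```

Applying the classifier to coarsened copies of one image set gives a quick, if approximate, misidentification-versus-resolution curve.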

As expected, the image-based classifier fails at low resolutions. Between 15 and 20 cm resolution, which corresponds to the majority of standard sonar systems, classification based on the highlights is poor (between 50% and 80% correct classification). The results stabilize at around 5 cm resolution, reaching around 95% correct classification.

Previous work on face recognition has shown that PCA techniques are not very robust to rotation [49]. The algorithm can be optimized by using multiple subspaces for each nonsymmetric target, each subspace covering a limited angular range.

4.1.2. Classification

In this section we extend the PCA classifier for underwater object classification purposes. A larger set of seven targets has been chosen, with three minelike objects (the Manta, the Rockan and a cylinder 100 cm long and 30 cm in diameter) and four nonmine objects (a cuboid with dimensions 100 cm × 50 cm × 40 cm, two hemispheres with diameters of 100 cm and 50 cm, respectively, and a box with dimensions 70 cm × 70 cm × 40 cm). Note that the nonmine targets have been chosen such that the dimensions of the large hemisphere match those of the Manta, and the dimensions of the box match those of the Rockan. Figure 12 provides snapshots of the different targets.
Figure 12

Snapshot of the targets used for classification. On the first line, the minelike targets: the Manta, the Rockan and the cylinder. On the second line, the nonmine targets: the cuboid, the two hemispheres and the box-shaped target. The pixel size in these target images is 5 cm.

As described in Section 4.1.1, two data sets for training and testing have been produced. The target classification relies on two steps: first the target is identified following the same process as in Section 4.1.1, and then it is classified into two classes: minelike and nonmine.

Figure 13(a) displays the results of the identification step. The misidentification curves for each target follow the general pattern described in Section 4.1.1, with low misidentification (below 5%) for pixel resolutions finer than 5 cm. Figure 13(b) shows the results of the classification between minelike and nonmine. Contrary to the identification process, the classification curves stabilise at a coarser pixel resolution (around 10 cm), at 2-3% misclassification.
Figure 13

(a) Misidentification of the seven targets as a function of the pixel resolution. (b) Misclassification of the target as function of the pixel resolution.

These examples show that the identification task needs a higher pixel resolution than the classification task to reach the same performance (95% correct identification/classification).

4.2. Identification with Shadow

As mentioned earlier, current sidescan ATR algorithms depend strongly on the target shadow for detection and classification. The usual assumption is that, at low resolution, the information relative to the target is mostly contained in its shadow. In this section we aim to confirm this statement by applying the classifier described in Section 3.2 directly to the target shadows.

We study here the quantity of information contained in the shape of the shadow, and how retrievable this information is as a function of the pixel resolution.

Shadows are the result of the directional acoustic illumination of a 3D target. They are therefore range dependent. For the purposes of this experiment, in order to remove the effect of the range dependence of the shadows, the targets are positioned at a fixed range of 25 m from the sensor. Image segments containing the target shadows are extracted from the data. Figure 14 displays snapshots of target shadows with different orientations and backgrounds for a 5 cm pixel resolution. We process the target shadow images in exactly the same way as the target highlight images in the previous sections. For each sonar resolution, 80 target shadows per object are used for training the classifier, and a set of 40000 shadow images is used for testing.
Figure 14

Snapshot of the shadows of the four targets (from left to right: Manta, Rockan, Cuboid and Cylinder) to classify, with different orientations and backgrounds. The pixel size in these target images is 5 cm. The size of each snapshot is 1.25 m × 2.75 m.

In total, 15 training/classification simulations have been run, one for each of 15 sonar pixel resolutions (from 5 cm to 30 cm). Figure 15 shows the percentage of misidentification versus the pixel resolution for the various target types.
Figure 15

Percentage of misidentification versus the pixel resolution for various target types. This considers the shadow of the target and not its echo.

Concerning the Cylinder and Cuboid targets, their shadows are very similar due to their similar geometry. In Figure 14 it is almost impossible to distinguish visually between the two objects looking only at their shadows. At broadside, for example, the two shadows have exactly the same rectangular shape, which explains why the confusion between these two objects is high.

For the Manta and Rockan targets, the misidentification curves stabilize near 0% at resolutions finer than 20 cm. Therefore, for standard sidescan systems with a resolution in the 10–30 cm range, the target information can be extracted from the shadow with an excellent probability of correct identification. In comparison, correct identification using the target highlights at 20 cm resolution is about 50% (cf. Figure 11).

5. Conclusions and Future Work

In this paper, a new real-time realistic sidescan simulator has been presented. Thanks to the flexibility of this numerical tool, realistic synthetic data can be generated at different pixel resolutions. A subspace target identification technique based on PCA has been developed and used to evaluate the ability of modern sonar systems to identify a variety of targets.

The results from processing shadow images back up the widely accepted idea that identification from current sonars at 10–20 cm resolution is reaching its performance limit. The advent of much higher resolution sonars has now made it possible to bring in techniques new to the field from optical image processing. The PCA analyses presented here, operating on the highlight as opposed to solely the shadow, show that these techniques can give a significant improvement in target identification and classification performance, opening the way for a reinvigorated effort in this area.

The emergence of very high resolution sonar systems such as SAS and acoustic cameras will enable more advanced target identification techniques to be used very soon. The next phase of this work will be to validate and confirm these results using real SAS data. We are currently undertaking this phase in collaboration with the NATO Undersea Research Centre and DSTL under the UK Defence Research Centre program.



Acknowledgements

This work was supported by EPSRC and DSTL under research contracts EP/H012354/1 and EP/F068956/1. The authors also acknowledge support from the Scottish Funding Council for the Joint Research Institute in Signal and Image Processing between the University of Edinburgh and Heriot-Watt University, which is a part of the Edinburgh Research Partnership in Engineering and Mathematics (ERPem).

Authors’ Affiliations

School of Engineering and Physical Sciences, Ocean Systems Laboratory, Heriot-Watt University


References

  1. Bellettini A: Design and experimental results of a 300-kHz synthetic aperture sonar optimized for shallow-water operations. IEEE Journal of Oceanic Engineering 2009, 34(3):285-293.
  2. Ferguson BG, Wyber RJ: Generalized framework for real aperture, synthetic aperture, and tomographic sonar imaging. IEEE Journal of Oceanic Engineering 2009, 34(3):225-238.
  3. Belcher EO, Lynn DC, Dinh HQ, Laughlin TJ: Beamforming and imaging with acoustic lenses in small, high-frequency sonars. Proceedings of the Oceans Conference, September 1999, 1495-1499.
  4. Goldman A, Cohen I: Anomaly subspace detection based on a multi-scale Markov random field model. Signal Processing 2005, 85(3):463-479. doi:10.1016/j.sigpro.2004.10.013
  5. Maussang F, Chanussot J, Hétet A, Amate M: Higher-order statistics for the detection of small objects in a noisy background application on sonar imaging. EURASIP Journal on Advances in Signal Processing 2007, 2007, 17 pages.
  6. Calder BR, Linnett LM, Carmichael DR: Spatial stochastic models for seabed object detection. Detection and Remediation Technologies for Mines and Minelike Targets II, April 1997, Proceedings of SPIE, 172-182.
  7. Mignotte M, Collet C, Perez P, Bouthemy P: Hybrid genetic optimization and statistical model-based approach for the classification of shadow shapes in sonar imagery. IEEE Transactions on Pattern Analysis and Machine Intelligence 2000, 22(2):129-141. doi:10.1109/34.825752
  8. Calder B: Bayesian spatial models for sonar image interpretation. Ph.D. dissertation, Heriot-Watt University; September 1997.
  9. Dobeck GJ, Hyland JC, Smedley LED: Automated detection and classification of sea mines in sonar imagery. Detection and Remediation Technologies for Mines and Minelike Targets II, April 1997, Proceedings of SPIE, 90-110.
  10. Quidu I, Malkasse JPH, Burel G, Vilbe P: Mine classification based on raw sonar data: an approach combining Fourier descriptors, statistical models and genetic algorithms. Proceedings of the Oceans Conference, September 2000, 285-290.
  11. Calder BR, Linnett LM, Carmichael DR: Bayesian approach to object detection in sidescan sonar. IEE Proceedings: Vision, Image and Signal Processing 1998, 145(3):221-228. doi:10.1049/ip-vis:19982038
  12. Balasubramanian R, Stevenson M: Pattern recognition for underwater mine detection. Proceedings of the Computer-Aided Classification/Computer-Aided Design Conference, November 2001, Halifax, Canada.
  13. Reed S, Petillot Y, Bell J: Automated approach to classification of mine-like objects in sidescan sonar using highlight and shadow information. IEE Proceedings: Radar, Sonar and Navigation 2004, 151(1):48-56. doi:10.1049/ip-rsn:20040117
  14. Reed S, Petillot Y, Bell J: Model-based approach to the detection and classification of mines in sidescan sonar. Applied Optics 2004, 43(2):237-246. doi:10.1364/AO.43.000237
  15. Dura E, Bell J, Lane D: Superellipse fitting for the recovery and classification of mine-like shapes in sidescan sonar images. IEEE Journal of Oceanic Engineering 2008, 33(4):434-444.
  16. Zerr B, Bovio E, Stage B: Automatic mine classification approach based on AUV manoeuvrability and the COTS side scan sonar. Proceedings of the Autonomous Underwater Vehicle and Ocean Modelling Networks Conference (GOATS '00), 2001, 315-322.
  17. Azimi-Sadjadi M, Jamshidi A, Dobeck G: Adaptive underwater target classification with multi-aspect decision feedback. Proceedings of the Computer-Aided Classification/Computer-Aided Design Conference, November 2001, Halifax, Canada.
  18. Quidu I, Malkasse JPH, Burel G, Vilbe P: Mine classification using a hybrid set of descriptors. Proceedings of the Oceans Conference, September 2000, 291-297.
  19. Fawcett J: Image-based classification of side-scan sonar detections. Proceedings of the Computer-Aided Classification/Computer-Aided Design Conference, November 2001, Halifax, Canada.
  20. Perry S, Guan L: Detection of small man-made objects in multiple range sector scan imagery using neural networks. Proceedings of the Computer-Aided Classification/Computer-Aided Design Conference, November 2001, Halifax, Canada.
  21. Ciany C, Zurawski W: Performance of computer aided detection/computer aided classification and data fusion algorithms for automated detection and classification of underwater mines. Proceedings of the Computer-Aided Classification/Computer-Aided Design Conference, November 2001, Halifax, Canada.
  22. Ciany CM, Huang J: Computer aided detection/computer aided classification and data fusion algorithms for automated detection and classification of underwater mines. Proceedings of the Oceans Conference, September 2000, 277-284.
  23. Pailhas Y, Capus C, Brown K, Moore P: Analysis and classification of broadband echoes using bio-inspired dolphin pulses. Journal of the Acoustical Society of America 2010, 127(6):3809-3820. doi:10.1121/1.3372754
  24. Capus C, Pailhas Y, Brown K: Classification of bottom-set targets from wideband echo responses to bio-inspired sonar pulses. Proceedings of the 4th International Conference on Bio-acoustics, 2007.
  25. Bell J: A model for the simulation of sidescan sonar. Ph.D. dissertation, Heriot-Watt University; August 1995.
  26. Hunter AJ, Hayes MP, Gough PT: Simulation of multiple-receiver, broadband interferometric SAS imagery. Proceedings of the IEEE Oceans Conference, September 2003, 2629-2634.
  27. Bell JM: Application of optical ray tracing techniques to the simulation of sonar images. Optical Engineering 1997, 36(6):1806-1813. doi:10.1117/1.601325
  28. Elston GR, Bell JM: Pseudospectral time-domain modeling of non-Rayleigh reverberation: synthesis and statistical analysis of a sidescan sonar image of sand ripples. IEEE Journal of Oceanic Engineering 2004, 29(2):317-329. doi:10.1109/JOE.2004.828206
  29. Pinto M: Design of synthetic aperture sonar systems for high-resolution seabed imaging. Proceedings of the MTS/IEEE Oceans Conference, 2006, Boston, Mass, USA.
  30. Applied Physics Laboratory, University of Washington: High-Frequency Ocean Environmental Acoustic Models Handbook. October 1994, APL-UW TR 9407.
  31. Mandelbrot B: The Fractal Geometry of Nature. W. H. Freeman; 1982.
  32. Pentland AP: Fractal-based description of natural scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence 1984, 6(6):661-674.
  33. Voss RF: Random fractal forgeries. In Fundamental Algorithms for Computer Graphics, R. A. Earnshaw, Ed. Springer, Berlin, Germany; 1985.
  34. Burrough PA: Fractal dimensions of landscapes and other environmental data. Nature 1981, 294(5838):240-242. doi:10.1038/294240a0
  35. Lovejoy S: Area-perimeter relation for rain and cloud areas. Science 1982, 216(4542):185-187. doi:10.1126/science.216.4542.185
  36. Urick RJ: Principles of Underwater Sound. 3rd edition. McGraw-Hill, New York, NY, USA; 1975.
  37. Francois RE: Sound absorption based on ocean measurements: Part I: pure water and magnesium sulfate contributions. The Journal of the Acoustical Society of America 1982, 72(3):896-907. doi:10.1121/1.388170
  38. Aridgides T, Fernandez MF, Dobeck GJ: Adaptive three-dimensional range-crossrange-frequency filter processing string for sea mine classification in side scan sonar imagery. Detection and Remediation Technologies for Mines and Minelike Targets II, April 1997, Proceedings of SPIE, 111-122.
  39. Pinto M: Performance index for shadow classification in minehunting sonar. Proceedings of the UDT Conference, 1997.
  40. Myers V, Pinto M: Bounding the performance of sidescan sonar automatic target recognition algorithms using information theory. IET Radar, Sonar and Navigation 2007, 1(4):266-273. doi:10.1049/iet-rsn:20060182
  41. Kessel RT: Estimating the limitations that image resolution and contrast place on target recognition. Automatic Target Recognition XII, April 2002, Proceedings of SPIE, 316-327.
  42. Florin F, Van Zeebroeck F, Quidu I, Le Bouffant N: Classification performance of minehunting sonar: theory, practical results and operational applications. Proceedings of the UDT Conference, 2003.
  43. Wright J, Yang AY, Ganesh A, Sastry SS, Ma Y: Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 2009, 31(2):210-227.
  44. Nayak A, Trucco E, Ahmad A, Wallace AM: SimBIL: appearance-based simulation of burst-illumination laser sequences. IET Image Processing 2008, 2(3):165-174. doi:10.1049/iet-ipr:20070207
  45. Sirovich L, Kirby M: Low-dimensional procedure for the characterization of human faces. Journal of the Optical Society of America A 1987, 4(3):519-524. doi:10.1364/JOSAA.4.000519
  46. Etemad K, Chellappa R: Discriminant analysis for recognition of human face images. Journal of the Optical Society of America A 1997, 14(8):1724-1733. doi:10.1364/JOSAA.14.001724
  47. Reed S, Petillot Y, Bell J: An automatic approach to the detection and extraction of mine features in sidescan sonar. IEEE Journal of Oceanic Engineering 2003, 28(1):90-105. doi:10.1109/JOE.2002.808199
  48. Myers VL: Image segmentation using iteration and fuzzy logic. Proceedings of the Computer-Aided Classification/Computer-Aided Design Conference, 2001.
  49. Turk M, Pentland A: Face recognition using eigenfaces. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1991, 586-591.


© Yan Pailhas et al. 2010

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.