# High-Resolution Sonars: What Resolution Do We Need for Target Recognition?

Yan Pailhas^{1}, Yvan Petillot^{1} and Chris Capus^{1}

**2010**:205095

https://doi.org/10.1155/2010/205095

© Yan Pailhas et al. 2010

**Received: **23 December 2009

**Accepted: **1 December 2010

**Published: **9 December 2010

## Abstract

Target recognition in sonar imagery has long been an active research area in the maritime domain, especially in the mine countermeasure context. Recently it has received even more attention as new sensors with increased resolution have been developed; new threats to critical maritime assets and a new paradigm for target recognition based on autonomous platforms have emerged. With the recent introduction of Synthetic Aperture Sonar systems and high-frequency sonars, sonar resolution has dramatically increased and noise levels decreased. Sonar images are distance images, but at high resolution they tend to appear visually as optical images. Traditionally, algorithms have been developed specifically for imaging sonars because of their limited resolution and high noise levels. With high-resolution sonars, algorithms developed in the image processing field for natural images become applicable. However, the lack of large datasets has hampered the development of such algorithms. Here we present a fast and realistic sonar simulator enabling the development and evaluation of such algorithms. We develop a classifier and then analyse its performance using our simulated synthetic sonar images. Finally, we discuss the sensor resolution required to achieve effective classification of various targets and demonstrate that with high-resolution sonars, target highlight analysis is the key to target recognition.

## 1. Introduction

Detection and identification techniques have tended to focus on saliency (global rarity or local contrast) [4–6], model-based detection [7–15] or supervised learning [16–22]. Alternative approaches investigating the internal structure of objects using wideband acoustics [23, 24] are showing some promise, but it is now widely acknowledged that current techniques are reaching their limits. Their performance does not enable rapid and effective mine clearance, and false alarm rates remain prohibitively high [4–22]. This is not a critical problem when operators can validate the outputs of the algorithms directly, as the algorithms still provide a very high data compression rate by dramatically reducing the amount of information an operator has to review. The increasing use of autonomous platforms, however, raises fundamentally different challenges. Underwater communication is very poor due to the very low bandwidth of the medium (the data transfer rate is typically around 300 bits/s) and does not permit online operator visualisation or intervention. For this reason the use of collaborating multiple platforms requires robust and accurate on-board decision making.

The question of resolution has been raised again by the advent of very high resolution sidescan, forward-look and SAS systems. These change the quality of the images markedly producing near-optical images. This paper looks at whether the resolution is now high enough to apply optical image processing techniques to take advantage of advances made in other fields.

In order to improve this performance, the MCM (Mine Countermeasures) community has focused on improving the resolution of the sensors, and high resolution sonars are now a reality. However, these sensors are very expensive and very limited data (if any) are available to the research community. This has hampered the development of new algorithms for effective on-board decision making. This paper addresses two key issues:

- (i)
The development of fast simulation tools for high resolution sensors: this will enable us to tackle the current lack of real datasets to develop and evaluate new algorithms including generative models for target identification. It will also provide a ground-truth simulation environment to evaluate potential active perception strategies.

- (ii)
What resolution do we need? The development of new sensors has been driven by the need for increased resolution, but the resolution actually required for robust target recognition has not been clearly established.

The remainder of the paper is organized as follows: In Section 2, a fast and realistic sonar simulator is described. In Sections 3 and 4, the simulator is used to explore the resolution issue. Its flexibility enables the generation of realistic sonar images at various resolutions and the exploration of the effects of resolution on classification performance. Extensive simulations provide a database of synthetic images on various seabed types. Algorithms can be developed and evaluated using the database. The importance of the pixel resolution for image-based algorithms is analysed as well as the amount of information contained in the target shadow.

## 2. Sidescan Simulator

Sonar images are difficult and expensive to obtain. A realistic simulator offers an alternative to develop and test MCM algorithms. High-frequency sonars and SAS increase the resolution of the sonar image from tens of cm to a few cm (3 to 5 cm). The resulting sonar images become closer to optical images. By increasing the resolution of the image the objects become sharper. Our objective here is to produce a simulator that can realistically reproduce such images in real time.

There is an existing body of research into sonar simulation [25, 26]. Simulators are generally based on ray-tracing techniques [27] or on a solution to the full wave equation [28]. SAS simulation takes into account the SAS processing and is, in general, highly complex [26]. Critically, in all cases, the algorithms are extremely slow (from one hour to several days to compute a synthetic sidescan image on a desktop computer). When high frequencies are used, the path of the acoustic waves can be approximated by straight lines. In this case, classical ray-tracing techniques combined with careful and detailed modeling of the energy-based sonar equation can be used. The results obtained are very similar to those obtained using more complex propagation models, yet they are much faster to compute and produce very realistic images.

Note that this simulator is a high-precision sidescan simulator, which can be equally well applied to forward-looking sonar. SAS images differ from sidescan images in two main respects: a constant pixel resolution at all ranges and blurring in the object shadows [29]. The simulator can cope with the constant range resolution, so synthetic target highlights will appear similar. A fully representative SAS shadow model remains to be implemented, but the analyses are still relevant for identification of targets from highlights in SAS imagery.

Objects of different shapes and different materials can be inserted into the environment. For MCM algorithms, several types of mines have been modeled such as the Manta (truncated cone shape), Rockan and cylindrical mines.

The resulting 3D environment is a heightmap, meaning that each location corresponds to a single elevation. Objects floating in midwater, for example, cannot therefore be modelled.

The sonar images are produced from this 3D environment, taking into account a particular trajectory of the sensor (mounted on a vessel or an autonomous platform). The seabed reflectivity is computed using state-of-the-art models developed by APL-UW in the High-Frequency Ocean Environmental Acoustic Models Handbook [30], and the reflectivity of the targets is based on a Lambertian model. A pseudo ray-tracing algorithm is performed and the sonar equation is solved for each insonified area, giving the backscattered energy. Note that the shadows are automatically taken into account by the pseudo ray-tracing algorithm. The processing time required to compute a sonar image of 50 m by 50 m using a 2 GHz Intel Core 2 Duo with 2 GB of memory is approximately 7 seconds. The remainder of the section details each of the modules required to perform the simulation.

### 2.1. 3D Digital Terrain Model Generator

In the late seventies, mathematicians such as Mandelbrot [31] linked the symmetry patterns and self-similarity found in nature to mathematical objects called fractals [32–35]. Fractals have been used to model realistic textures and heightmap terrains [33]. A quick way to generate realistic 3D fractal heightmap terrains is to use a pink noise generator [33]. A pink noise is characterized by a power spectral density decreasing as $1/f^{\beta}$, where the exponent $\beta$ controls the roughness.
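
As an illustration, spectral synthesis is one common way to generate such a $1/f^{\beta}$ heightmap: shape white Gaussian noise in the frequency domain so its power spectrum falls off as $1/f^{\beta}$, then invert the FFT. This is a minimal sketch, not the paper's actual implementation; the grid size and exponent value are arbitrary choices.

```python
import numpy as np

def fractal_heightmap(n=256, beta=1.8, seed=0):
    """Synthesize an n x n fractal terrain by spectral (1/f^beta) shaping."""
    rng = np.random.default_rng(seed)
    # Radial spatial-frequency grid (avoid division by zero at DC).
    f = np.fft.fftfreq(n)
    fx, fy = np.meshgrid(f, f)
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0
    # White Gaussian noise shaped so the PSD falls off as 1/f^beta.
    spectrum = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    spectrum *= radius ** (-beta / 2.0)
    spectrum[0, 0] = 0.0  # zero-mean terrain
    h = np.fft.ifft2(spectrum).real
    return (h - h.min()) / (h.max() - h.min())  # normalise to [0, 1]

terrain = fractal_heightmap()
```

Larger values of $\beta$ concentrate energy at low spatial frequencies and produce smoother, more rolling terrain; smaller values give rougher seabeds.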

#### 2.1.1. The Partition Layer

In the simulator, various types of seabeds can be chosen (up to three for a given image). The boundaries between the seabed types are computed using fractal borders.

#### 2.1.2. Elevation Layer

This layer contains two types of possible elevation: a linear slope characterizing coastal seabeds and a random 3D elevation. The random elevation is a smoothing of a pink noise process. The parameter $\beta$ is used to tune the roughness of the seabed.

#### 2.1.3. Texture Layer

Four different textures have been created to model four kinds of seabed. Once again the textures are synthesized by fractal models derived from pink noise models.

*(a) Flat Seabed*

A simple flat floor is used for the flat seabed. No texture is needed in this case. Differences in reflectivity and scattering between sediment types are handled by the Image Generator module.

*(b) Sand Ripples*

*(c) Rocky Seabed*

*(d) Cluttered Environment*

The cluttered environment is characterized by a random distribution of small rocks. A Poisson distribution has been chosen for the spatial distribution of the rocks on the seabed, as the mean number of occurrences is relatively small.
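
A homogeneous spatial Poisson process of this kind can be sketched as follows: the rock count over a patch is drawn from a Poisson law with mean proportional to the patch area, and positions are then placed uniformly. The density and rock sizes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def scatter_rocks(extent_m=50.0, density_per_m2=0.02, seed=1):
    """Homogeneous spatial Poisson process: rock count ~ Poisson(lambda * area),
    positions independently uniform over the square patch."""
    rng = np.random.default_rng(seed)
    n_rocks = rng.poisson(density_per_m2 * extent_m ** 2)
    xy = rng.uniform(0.0, extent_m, size=(n_rocks, 2))   # rock centres (m)
    radii = rng.uniform(0.05, 0.3, size=n_rocks)          # assumed small-rock sizes (m)
    return xy, radii

xy, radii = scatter_rocks()
```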

### 2.2. Targets

Nonmine targets can also be generated by varying parameters in this module. Several are used to test the algorithms with results presented in Section 4.1.2.

### 2.3. The Sonar Image Generator

The backscattered energy for each insonified area is obtained by solving the active sonar equation

$$\mathrm{EL} = \mathrm{SL} + \mathrm{DI} - 2\,\mathrm{TL} - (\mathrm{NL} + \mathrm{RL}) + \mathrm{TS},$$

where EL is the excess level, that is, the backscattered energy; SL is the Source Level of the projector; DI is the Directivity Index; TL is the Transmission Loss; NL is the Noise Level; RL is the Reverberation Level; and TS is the Target Strength. All the parameters are measured in decibels (dB) relative to the standard reference intensity of a 1 *μ*Pa plane wave.
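
Assuming the standard active-sonar form of the equation with the terms listed above, the excess level reduces to a simple sum in decibels (a sketch; the simulator's exact bookkeeping of the terms may differ):

```python
def excess_level(sl, di, tl, nl, rl, ts):
    """Active sonar equation (all terms in dB re 1 uPa):
    EL = SL + DI - 2*TL - (NL + RL) + TS."""
    return sl + di - 2.0 * tl - (nl + rl) + ts
```

Because all terms are logarithmic, multiplicative physical effects (spreading, absorption, scattering) become additions and subtractions of dB values.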

The transmission loss is modelled as $\mathrm{TL} = 20\log_{10} r + \alpha r$, where $r$ is the transmission range and $\alpha$ is an attenuation coefficient expressed in dB/m. The attenuation coefficient can be expressed as the sum of two chemical relaxation processes and the absorption of pure water; it can be computed numerically using the Francois-Garrison formula [37].
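
This spreading-plus-absorption loss can be computed directly (a sketch assuming spherical spreading, i.e. a $20\log_{10} r$ term; the Francois-Garrison computation of $\alpha$ itself is omitted and $\alpha$ is passed in as a known value):

```python
import math

def transmission_loss(r_m, alpha_db_per_m):
    """One-way transmission loss in dB:
    spherical spreading (20*log10 r) plus linear absorption (alpha * r)."""
    return 20.0 * math.log10(r_m) + alpha_db_per_m * r_m
```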

The surface reverberation can be developed as drawn in Figure 5, where $dA$ defines the elementary surface subtended by the horizontal angle $d\theta$ and is dependent on the pulse length and range. Returns from the front and rear ends of the pulse determine the size of the elementary surface element $dA$. So, for the seabed contribution to the reverberation level, we can write $\mathrm{RL} = \mathrm{BS} + 10\log_{10} dA$, where BS is the bottom scattering strength.
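
As a rough sketch of the seabed contribution, the elementary insonified area can be approximated by $dA = r\,d\theta\,(c\tau/2)$ for pulse length $\tau$ at range $r$ (an assumed geometry; the exact expression depends on the grazing angle and the construction in Figure 5):

```python
import math

def reverberation_level(bs_db, r_m, pulse_len_s, beamwidth_rad, c=1500.0):
    """Seabed reverberation sketch: RL = BS + 10*log10(A), with the
    insonified elementary area A = r * dtheta * (c * tau / 2)."""
    area = r_m * beamwidth_rad * (c * pulse_len_s / 2.0)
    return bs_db + 10.0 * math.log10(area)
```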

Three types of seabed have been implemented: Rough Rock, Sandy Gravel and Very Fine Sand. A theoretical Bottom Scattering Strength (BS in (5)) can be computed using the models in [30].

The source level SL is the power of the transmitter. It is a constant given by the sonar manufacturer. For sidescan sonars, SL is typically between 200 and 230 dB.

Any transducer beam pattern can also be integrated into the simulator.

In our model, the targets form part of the 3D environment. The Target Strength (TS) is computed using a Lambertian model. The reflectance factor in the Lambertian law is associated with the acoustic impedance. The simulator takes into account the acoustic impedance of the target, given by $Z = \rho c$, where $\rho$ is the density of the material and $c$ the longitudinal sound speed in the material.
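
A minimal version of such a Lambertian target-strength model might weight Lambert's law by a normal-incidence reflection coefficient derived from the impedance mismatch $Z = \rho c$. The seawater properties and grazing angle below are assumptions for illustration, not the paper's parameters:

```python
import math

def lambertian_ts(rho, c_long, rho_water=1000.0, c_water=1500.0, grazing_deg=45.0):
    """Target-strength sketch: Lambert's law weighted by the plane-wave
    reflection coefficient from the impedance mismatch Z = rho * c."""
    z_t = rho * c_long          # target acoustic impedance
    z_w = rho_water * c_water   # seawater acoustic impedance (assumed values)
    refl = (z_t - z_w) / (z_t + z_w)  # normal-incidence reflection coefficient
    mu = refl ** 2                     # reflectance factor in Lambert's law
    g = math.radians(grazing_deg)
    return 10.0 * math.log10(mu * math.sin(g) ** 2)
```

For a dense material such as steel the impedance mismatch with water is large, so the reflectance factor approaches 1 and the TS is dominated by the Lambertian angular term.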

## 3. Classifier

### 3.1. Target Recognition Using Sonar Images

Target recognition in sonar imagery is a long-standing problem which has attracted considerable attention [7–15]. However, the resolution of the sensors available has limited not only the spectrum of techniques applicable, but also their performances. Most techniques for detection rely on matched filtering [38] or statistical modeling [11, 14], whilst recognition is mainly model-based [10, 13, 15].

New high-resolution sonars such as SAS produce images which are closer to traditional optical imagery. This is opening a new era of algorithm development for acoustics, as techniques recently developed in computer vision become more applicable. For example, the SAS system developed by NURC (MUSCLE) can achieve a 3 to 5 cm pixel resolution, almost independent of range. Thanks to this resolution, direct analysis of the target echo, rather than traditional techniques based on its shadow, becomes possible.

Identifying the resolution required to perform target classification is not a simple problem. In sonar, this has been attempted by various authors [39–42], generally looking at the minimum resolution required to distinguish a sphere from a cube and using information-theoretic approaches. These techniques provide a lower bound on the minimum resolution required but tend to be over-optimistic. We focus here on modern subspace algorithms based on PCA (Principal Component Analysis) as a mechanism to analyze the resolution needed for classification. Why focus on such techniques? The main reason is that they are very versatile and have been applied successfully to a variety of classical target identification problems, as demonstrated recently on face recognition [43] and land-based object detection problems [44].

### 3.2. Principal Component Analysis

The algorithm used in this paper for classification is based on the eigenfaces algorithm. The PCA-based eigenfaces approach has been used for face recognition purposes [45, 46] and is still close to the state of the art for this application [43].

Let $\{x_1, \ldots, x_M\}$ be the preprocessed training set, each image reshaped into a vector of dimension $N$. The covariance matrix $C$ of the training set is calculated. The largest eigenvalues of $C$ are computed, and the corresponding eigenvectors form the decomposition base of the target. The subspace spanned by these eigenvectors is called the target space. The number of retained eigenvalues has been set to 20.

The target space yielding the minimum reconstruction distance provides the most compact representation of the object under inspection and determines its class.
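
The training and classification steps above can be sketched with an SVD-based PCA (a hedged reimplementation of the eigenfaces idea, not the authors' code; `train_target_space`, `reconstruction_distance` and `classify` are hypothetical helper names):

```python
import numpy as np

def train_target_space(images, n_components=20):
    """Build a PCA 'target space' from flattened training images.
    Returns the mean image and the top principal axes."""
    x = np.asarray(images, dtype=float)
    mean = x.mean(axis=0)
    xc = x - mean
    # SVD of the centred data yields the eigenvectors of the covariance matrix.
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_distance(image, mean, basis):
    """Distance between an image and its projection onto the target space."""
    d = np.asarray(image, dtype=float) - mean
    proj = basis.T @ (basis @ d)
    return np.linalg.norm(d - proj)

def classify(image, spaces):
    """Assign the image to the target space with minimum reconstruction distance."""
    return min(spaces, key=lambda k: reconstruction_distance(image, *spaces[k]))
```

One target space is trained per class; at test time the reconstruction distance to each space is compared and the closest space wins, mirroring the minimum-distance rule described above.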

## 4. Results

In previous work [15, 16, 47, 48], target classification algorithms using standard sidescan sonars have mainly been based on the analysis of the targets' shadows. With high resolution sonars, we note that more information should be exploitable from the target's highlight. In this section, we investigate the resolution needed for the PCA image-based classifier described earlier to classify using only the information carried by the highlight.

The sidescan simulator presented in Section 2 provides synthetic data to train and test the PCA image-based classifier. All the sidescan images are generated with a randomly selected seafloor (flat seabed, ripples or cluttered environment), random sonar altitude (2 to 10 metres) and random target range (5 to 50 metres).

For each experiment, two separate sets of sonar images have been computed, one specifically for training (in order to compute the target space) and one specifically for testing. At each sonar resolution and for each target, 80 synthetic target images at random ranges, random altitudes and with randomly selected seafloors have been used for training. A larger set of 40000 synthetic target images is used to test the classifier. The classifier is trained and tested according to the algorithm described in Section 3.2.
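
The evaluation over the held-out set then reduces to counting correct decisions per class (a generic sketch; the `classify` callable is a hypothetical stand-in for the PCA classifier of Section 3.2):

```python
def evaluate(classify, test_images, test_labels, classes):
    """Per-class correct-classification rates from a labelled test set."""
    counts = {c: 0 for c in classes}
    correct = {c: 0 for c in classes}
    for img, label in zip(test_images, test_labels):
        counts[label] += 1
        if classify(img) == label:
            correct[label] += 1
    return {c: correct[c] / counts[c] for c in classes if counts[c]}
```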

### 4.1. What Precision Is Needed?

#### 4.1.1. Identification

In this first experiment the PCA classifier is trained for identification. Assuming a minelike object has been detected and classified as a mine, the algorithm identifies which kind of mine the target is. Four targets have been chosen: a Manta mine (truncated cone with dimensions 98 cm lower diameter, 49 cm upper diameter, 47 cm height), a Rockan mine (L × W × H: 100 cm × 50 cm × 40 cm), a cuboid with dimensions 100 cm × 30 cm × 30 cm and a cylinder 100 cm long and 30 cm in diameter.

As expected, the image-based classifier fails at low resolutions. Between 15 and 20 cm resolution, which corresponds to the majority of standard sonar systems, classification based on the highlights is poor (between 50% and 80% correct classification). The results stabilize at around 5 cm resolution, reaching around 95% correct classification.

Previous work on face recognition has shown that PCA techniques are not very robust to rotation [49]. The algorithm can be optimized by using multiple subspaces for each nonsymmetric target, each subspace covering a limited angular range.

#### 4.1.2. Classification

As described in Section 4.1.1, two data sets for training and testing have been produced. The target classification relies on two steps: first the target is identified following the same process as in Section 4.1.1, and then classified into two classes, *minelike* and *nonmine*. Contrary to the identification process, the classification curves stabilise at a coarser pixel resolution (around 10 cm) at 2-3% misclassification.

These examples show that the identification task needs a higher pixel resolution than the classification task to match the same performance (95% correct identification/classification).

### 4.2. Identification with Shadow

As mentioned earlier, current sidescan ATR algorithms depend strongly on the target shadow for detection and classification. The usual assumption made is: *at low resolution the information relative to the target is mostly contained in its shadow*. In this section we aim to confirm this statement by using the classifier described in Section 3.2 directly on the target shadows.

We study here the quantity of information contained in the shape of the shadow, and how retrievable this information is as a function of the pixel resolution.

Concerning the Cylinder and Cuboid targets, their shadows are very similar due to their similar geometry. In Figure 14 it is almost impossible to distinguish visually between the two objects looking only at their shadows. At broadside, for example, the two shadows have exactly the same rectangular shape, explaining why the confusion between these two objects is high.

For the Manta and Rockan targets, the misidentification curves stabilize near 0% misidentification below 20 cm sonar resolution. Therefore, for standard sidescan systems with a resolution in the 10–30 cm range, the target information can be extracted from the shadow with an excellent probability of correct identification. In comparison, correct identification using the target highlights at 20 cm resolution is about 50% (cf. Figure 11).

## 5. Conclusions and Future Work

In this paper, a new real-time realistic sidescan simulator has been presented. Thanks to the flexibility of this numerical tool, realistic synthetic data can be generated at different pixel resolutions. A subspace target identification technique based on PCA has been developed and used to evaluate the ability of modern sonar systems to identify a variety of targets.

The results from processing shadow images back up the widely accepted idea that identification from current sonars at 10–20 cm resolution is reaching its performance limit. The advent of much higher resolution sonars has now made it possible to bring in and apply techniques new to the field from optical image processing. The PCA analyses presented here, operating on the highlight as opposed to solely the shadow, show that these techniques can give a significant improvement in target identification and classification performance, opening the way for reinvigorated effort in this area.

The emergence of very high resolution sonar systems such as SAS and acoustic cameras will enable more advanced target identification techniques to be used very soon. The next phase of this work will be to validate these results using real SAS data. We are currently undertaking this phase in collaboration with the NATO Undersea Research Centre and DSTL under the UK Defence Research Centre program.

## Declarations

### Acknowledgments

This work was supported by EPSRC and DSTL under research contracts EP/H012354/1 and EP/F068956/1. The authors also acknowledge support from the Scottish Funding Council for the Joint Research Institute in Signal and Image Processing between the University of Edinburgh and Heriot-Watt University, which is a part of the Edinburgh Research Partnership in Engineering and Mathematics (ERPem).

## References

1. Bellettini A: Design and experimental results of a 300-kHz synthetic aperture sonar optimized for shallow-water operations. *IEEE Journal of Oceanic Engineering* 2009, 34(3):285-293.
2. Ferguson BG, Wyber RJ: Generalized framework for real aperture, synthetic aperture, and tomographic sonar imaging. *IEEE Journal of Oceanic Engineering* 2009, 34(3):225-238.
3. Belcher EO, Lynn DC, Dinh HQ, Laughlin TJ: Beamforming and imaging with acoustic lenses in small, high-frequency sonars. *Proceedings of the Oceans Conference, September 1999*, 1495-1499.
4. Goldman A, Cohen I: Anomaly subspace detection based on a multi-scale Markov random field model. *Signal Processing* 2005, 85(3):463-479.
5. Maussang F, Chanussot J, Hétet A, Amate M: Higher-order statistics for the detection of small objects in a noisy background application on sonar imaging. *EURASIP Journal on Advances in Signal Processing* 2007, 2007.
6. Calder BR, Linnett LM, Carmichael DR: Spatial stochastic models for seabed object detection. *Detection and Remediation Technologies for Mines and Minelike Targets II, April 1997, Proceedings of SPIE*, 172-182.
7. Mignotte M, Collet C, Perez P, Bouthemy P: Hybrid genetic optimization and statistical model-based approach for the classification of shadow shapes in sonar imagery. *IEEE Transactions on Pattern Analysis and Machine Intelligence* 2000, 22(2):129-141.
8. Calder B: *Bayesian Spatial Models for Sonar Image Interpretation*, Ph.D. dissertation. Heriot-Watt University; September 1997.
9. Dobeck GJ, Hyland JC, Smedley LED: Automated detection and classification of sea mines in sonar imagery. *Detection and Remediation Technologies for Mines and Minelike Targets II, April 1997, Proceedings of SPIE*, 90-110.
10. Quidu I, Malkasse JPH, Burel G, Vilbe P: Mine classification based on raw sonar data: an approach combining Fourier descriptors, statistical models and genetic algorithms. *Proceedings of the Oceans Conference, September 2000*, 285-290.
11. Calder BR, Linnett LM, Carmichael DR: Bayesian approach to object detection in sidescan sonar. *IEE Proceedings: Vision, Image and Signal Processing* 1998, 145(3):221-228.
12. Balasubramanian R, Stevenson M: Pattern recognition for underwater mine detection. *Proceedings of the Computer-Aided Classification/Computer-Aided Design Conference, November 2001, Halifax, Canada*.
13. Reed S, Petillot Y, Bell J: Automated approach to classification of mine-like objects in sidescan sonar using highlight and shadow information. *IEE Proceedings: Radar, Sonar and Navigation* 2004, 151(1):48-56.
14. Reed S, Petillot Y, Bell J: Model-based approach to the detection and classification of mines in sidescan sonar. *Applied Optics* 2004, 43(2):237-246.
15. Dura E, Bell J, Lane D: Superellipse fitting for the recovery and classification of mine-like shapes in sidescan sonar images. *IEEE Journal of Oceanic Engineering* 2008, 33(4):434-444.
16. Zerr B, Bovio E, Stage B: Automatic mine classification approach based on AUV manoeuverability and the COTS side scan sonar. *Proceedings of the Autonomous Underwater Vehicle and Ocean Modelling Networks Conference (GOATS '00), 2001*, 315-322.
17. Azimi-Sadjadi M, Jamshidi A, Dobeck G: Adaptive underwater target classification with multi-aspect decision feedback. *Proceedings of the Computer-Aided Classification/Computer-Aided Design Conference, November 2001, Halifax, Canada*.
18. Quidu I, Malkasse JPH, Burel G, Vilbe P: Mine classification using a hybrid set of descriptors. *Proceedings of the Oceans Conference, September 2000*, 291-297.
19. Fawcett J: Image-based classification of side-scan sonar detections. *Proceedings of the Computer-Aided Classification/Computer-Aided Design Conference, November 2001, Halifax, Canada*.
20. Perry S, Guan L: Detection of small man-made objects in multiple range sector scan imagery using neural networks.
21. Ciany C, Zurawski W: Performance of computer aided detection/computer aided classification and data fusion algorithms for automated detection and classification of underwater mines.
22. Ciany CM, Huang J: Computer aided detection/computer aided classification and data fusion algorithms for automated detection and classification of underwater mines. *Proceedings of the Oceans Conference, September 2000*, 277-284.
23. Pailhas Y, Capus C, Brown K, Moore P: Analysis and classification of broadband echoes using bio-inspired dolphin pulses. *Journal of the Acoustical Society of America* 2010, 127(6):3809-3820.
24. Capus C, Pailhas Y, Brown K: Classification of bottom-set targets from wideband echo responses to bio-inspired sonar pulses. *Proceedings of the 4th International Conference on Bio-acoustics, 2007*.
25. Bell J: *A Model for the Simulation of Sidescan Sonar*, Ph.D. dissertation. Heriot-Watt University; August 1995.
26. Hunter AJ, Hayes MP, Gough PT: Simulation of multiple-receiver, broadband interferometric SAS imagery. *Proceedings of IEEE Oceans Conference, September 2003*, 2629-2634.
27. Bell JM: Application of optical ray tracing techniques to the simulation of sonar images. *Optical Engineering* 1997, 36(6):1806-1813.
28. Elston GR, Bell JM: Pseudospectral time-domain modeling of non-Rayleigh reverberation: synthesis and statistical analysis of a sidescan sonar image of sand ripples. *IEEE Journal of Oceanic Engineering* 2004, 29(2):317-329.
29. Pinto M: Design of synthetic aperture sonar systems for high-resolution seabed imaging. *Proceedings of MTS/IEEE Oceans Conference, 2006, Boston, Mass, USA*.
30. Applied Physics Laboratory, University of Washington: *High-Frequency Ocean Environmental Acoustic Models Handbook*. October 1994, APL-UW TR 9407.
31. Mandelbrot B: *The Fractal Geometry of Nature*. W. H. Freeman; 1982.
32. Pentland AP: Fractal-based description of natural scenes. *IEEE Transactions on Pattern Analysis and Machine Intelligence* 1984, 6(6):661-674.
33. Voss RF: Random fractal forgeries. In *Fundamental Algorithms for Computer Graphics*, R. A. Earnshaw, Ed. Springer, Berlin, Germany; 1985.
34. Burrough PA: Fractal dimensions of landscapes and other environmental data. *Nature* 1981, 294(5838):240-242.
35. Lovejoy S: Area-perimeter relation for rain and cloud areas. *Science* 1982, 216(4542):185-187.
36. Urick RJ: *Principles of Underwater Sound*. 3rd edition. McGraw-Hill, New York, NY, USA; 1975.
37. Francois RE: Sound absorption based on ocean measurements: Part I: pure water and magnesium sulfate contributions. *The Journal of the Acoustical Society of America* 1982, 72(3):896-907.
38. Aridgides T, Fernandez MF, Dobeck GJ: Adaptive three-dimensional range-crossrange-frequency filter processing string for sea mine classification in side scan sonar imagery. *Detection and Remediation Technologies for Mines and Minelike Targets II, April 1997, Proceedings of SPIE*, 111-122.
39. Pinto M: Performance index for shadow classification in minehunting sonar. *Proceedings of the UDT Conference, 1997*.
40. Myers V, Pinto M: Bounding the performance of sidescan sonar automatic target recognition algorithms using information theory. *IET Radar, Sonar and Navigation* 2007, 1(4):266-273.
41. Kessel RT: Estimating the limitations that image resolution and contrast place on target recognition. *Automatic Target Recognition XII, April 2002, Proceedings of SPIE*, 316-327.
42. Florin F, Van Zeebroeck F, Quidu I, Le Bouffant N: Classification performance of minehunting sonar: theory, practical results and operational applications. *Proceedings of the UDT Conference, 2003*.
43. Wright J, Yang AY, Ganesh A, Sastry SS, Ma YI: Robust face recognition via sparse representation. *IEEE Transactions on Pattern Analysis and Machine Intelligence* 2009, 31(2):210-227.
44. Nayak A, Trucco E, Ahmad A, Wallace AM: SimBIL: appearance-based simulation of burst-illumination laser sequences. *IET Image Processing* 2008, 2(3):165-174.
45. Sirovich L, Kirby M: Low-dimensional procedure for the characterization of human faces. *Journal of the Optical Society of America A* 1987, 4(3):519-524.
46. Etemad K, Chellappa R: Discriminant analysis for recognition of human face images. *Journal of the Optical Society of America A* 1997, 14(8):1724-1733.
47. Reed S, Petillot Y, Bell J: An automatic approach to the detection and extraction of mine features in sidescan sonar. *IEEE Journal of Oceanic Engineering* 2003, 28(1):90-105.
48. Myers VL: Image segmentation using iteration and fuzzy logic. *Proceedings of the Computer-Aided Classification/Computer-Aided Design Conference, 2001*.
49. Turk M, Pentland A: Face recognition using eigenfaces. *Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1991*, 586-591.

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.