
Underwater Image Processing: State of the Art of Restoration and Image Enhancement Methods


The underwater image processing area has received considerable attention over the last decades, showing important achievements. In this paper we review some of the most recent methods that have been specifically developed for the underwater environment. These techniques are capable of extending the range of underwater imaging and of improving image contrast and resolution. After considering the basic physics of light propagation in the water medium, we focus on the different algorithms available in the literature. The conditions for which each of them was originally developed are highlighted, as well as the quality assessment methods used to evaluate their performance.

1. Introduction

In order to deal with underwater image processing, we first have to consider the basic physics of light propagation in the water medium. Physical properties of the medium cause degradation effects that are not present in normal images taken in air. Underwater images are essentially characterized by their poor visibility because light is exponentially attenuated as it travels through the water, so the scenes appear poorly contrasted and hazy. Light attenuation limits the visibility distance to about twenty meters in clear water and five meters or less in turbid water. The attenuation process is caused by absorption (which removes light energy) and scattering (which changes the direction of the light path). The absorption and scattering of light in water influence the overall performance of underwater imaging systems. Forward scattering (light randomly deviated on its way from an object to the camera) generally leads to blurring of the image features. On the other hand, backward scattering (the fraction of light reflected by the water towards the camera before it actually reaches the objects in the scene) generally limits the contrast of the images, generating a characteristic veil that superimposes itself on the image and hides the scene. Absorption and scattering effects are due not only to the water itself but also to other components such as dissolved organic matter or small observable floating particles. The presence of floating particles known as "marine snow" (highly variable in kind and concentration) increases absorption and scattering. The visibility range can be increased with artificial lighting, but such sources not only suffer from the difficulties described above (scattering and absorption) but also tend to illuminate the scene in a non-uniform fashion, producing a bright spot in the center of the image surrounded by a poorly illuminated area.
Finally, as the amount of light is reduced when we go deeper, colors drop off one by one depending on their wavelengths. Blue light, having the shortest wavelength, travels the farthest in water, so underwater images are dominated essentially by blue. In summary, the images we are interested in can suffer from one or more of the following problems: limited visibility range, low contrast, non-uniform lighting, blurring, bright artifacts, diminished colors (bluish appearance) and noise. Therefore, applying standard computer vision techniques to underwater imaging requires dealing first with these added problems.

Underwater image processing can be addressed from two different points of view: as an image restoration technique or as an image enhancement method:

  1. (i)

    Image restoration aims to recover a degraded image using a model of the degradation and of the original image formation; it is essentially an inverse problem. These methods are rigorous, but they require many model parameters (such as the attenuation and diffusion coefficients that characterize the water turbidity) which are only scarcely tabulated and can be extremely variable. Another important parameter required is the depth of a given object in the scene.

  2. (ii)

    Image enhancement uses qualitative subjective criteria to produce a more visually pleasing image, without relying on any physical model of the image formation. These kinds of approaches are usually simpler and faster than deconvolution methods.

In what follows we give a general view of some of the most recent methods addressing underwater image processing, providing an introduction to the problem and enumerating the difficulties to be found. Our aim is to give the reader, in particular one who is not a specialist in the field and who has a specific problem to address and solve, an indication of the available methods, focusing on the imaging conditions for which they were developed (lighting conditions, depth, environment where the approach was tested, quality evaluation of the results) and considering the model characteristics and assumptions of each approach. In this way we wish to guide the reader towards the technique that best suits his or her problem or application.

In Section 2 we briefly review the optical properties of the light propagation in water and the image formation model of Jaffe-McGlamery, following in Section 3 with a report of the image restoration methods that take into account this image model. In Section 4, works addressing image enhancement and color correction in underwater environment are presented. We include a brief description of some of the most recent methods. When possible, some examples (images before and after correction) that illustrate these approaches are also included. Section 5 considers the lighting problems and Section 6 focuses on image quality metrics. Finally the conclusions are sketched in Section 7.

2. Propagation of Light in the Water

In this section we focus on the special transmission properties of light in water. Light interacts with the water medium through two processes: absorption and scattering. Absorption is the loss of power as light travels in the medium and depends on the refractive index of the medium. Scattering refers to any deflection from a straight-line propagation path. In the underwater environment, deflections can be due to particles of size comparable to the wavelength of the travelling light (diffraction) or to particulate matter whose refractive index differs from that of the water (refraction).

According to the Lambert-Beer empirical law, the decay of light intensity is related to the properties of the material through which the light is travelling via an exponential dependence. The irradiance E at position r can be modeled as

E(r) = E(0) e^{-cr},   (1)

where c is the total attenuation coefficient of the medium. This coefficient is a measure of the light loss from the combined effects of scattering and absorption over a unit length of travel in an attenuating medium. Typical attenuation coefficients for deep ocean water, coastal water and bay water are 0.05 m^{-1}, 0.2 m^{-1}, and 0.33 m^{-1}, respectively.
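As a quick numerical illustration of the Lambert-Beer decay, the following sketch (plain Python; the path length and unit irradiance are illustrative choices, the coefficients are the typical values quoted above) evaluates the fraction of light surviving a 10 m path in each water type:

```python
import math

def irradiance(e0, c, r):
    """Lambert-Beer decay: irradiance after travelling r metres in water
    with total attenuation coefficient c (in 1/m)."""
    return e0 * math.exp(-c * r)

# Typical total attenuation coefficients (1/m) quoted in the text.
WATER_TYPES = {"deep ocean": 0.05, "coastal": 0.2, "bay": 0.33}

# Fraction of light surviving a 10 m path in each water type.
survival = {name: irradiance(1.0, c, 10.0) for name, c in WATER_TYPES.items()}
```

As expected, turbid bay water transmits far less light over the same path than clear ocean water.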

Assuming an isotropic, homogeneous medium, the total attenuation coefficient c can be further decomposed as the sum of two quantities a and b, the absorption and scattering coefficients of the medium, respectively:

c = a + b.   (2)
The total scattering coefficient b is the superposition of scattering events at all angles through the volume scattering function β(θ), which gives the probability for a ray of light to be deviated by an angle θ from its direction of propagation:

b = 2π ∫₀^π β(θ) sin θ dθ.   (3)

The parameters a, b, c, and β(θ) represent the inherent optical properties of the medium, and their knowledge should theoretically permit us to predict the propagation of light in the water. However, all these parameters depend on the location (in three-dimensional space) and also on time. The corresponding measurements are therefore a complex task, and computational modeling is needed.
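The relation between the total scattering coefficient and the volume scattering function can be checked numerically. The sketch below integrates an assumed isotropic VSF β(θ) = b₀/(4π) with the trapezoidal rule; for that particular choice the integral must return b₀ (the isotropic VSF is a test assumption, not a realistic ocean phase function):

```python
import math

def total_scattering(beta, n=10000):
    """Trapezoidal-rule evaluation of b = 2*pi * integral from 0 to pi of
    beta(theta) * sin(theta) d(theta), with beta the volume scattering
    function."""
    h = math.pi / n
    total = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        theta = i * h
        total += w * beta(theta) * math.sin(theta)
    return 2.0 * math.pi * total * h

# For an isotropic VSF beta(theta) = b0/(4*pi) the integral returns b0;
# here b0 = 1, so the result should be close to 1.
b = total_scattering(lambda theta: 1.0 / (4.0 * math.pi))
```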

McGlamery [1] laid out the theoretical foundations of the optical image formation model while Jaffe [2] extended the model and applied it to design different subsea image acquisition systems. Modeling of underwater imaging has also been carried out by Monte Carlo techniques [3].

In this section we follow the image formation model of Jaffe-McGlamery. According to this model, the underwater image can be represented as the linear superposition of three components (see Figure 1). An underwater imaging experiment consists of tracing the progression of light from a light source to a camera. The light received by the camera is composed of three components: (i) the direct component E_d (light reflected directly by the object that has not been scattered in the water), (ii) the forward-scattered component E_fs (light reflected by the object that has been scattered at a small angle) and (iii) the backscatter component E_bs (light reflected by objects not in the target scene that nevertheless enters the camera, for example due to floating particles). The total irradiance therefore reads

E_T = E_d + E_fs + E_bs.   (4)
Spherical spreading and attenuation of the source light beam are assumed in order to model the illumination incident upon the target plane. The reflected illumination is then computed as the product of the incident illumination and the reflectance map. Assuming a Lambertian reflector, geometric optics is used to compute the image of the direct component in the camera plane. The reflected light is also scattered at small angles on its way to the camera; a fraction of the resulting blurred image is then added to the direct component. The backscatter component is the most computationally demanding to calculate. The model partitions three-dimensional space into planes parallel to the camera plane, and the radiation scattered toward the camera is computed by superposing small volume elements weighted by an appropriate volume scattering function. The detailed derivation of each of the components of (4) can be found in [2]. We report here the final results, as they appear in Jaffe's article. The direct component reads (see Figure 2 for the coordinate system)

E_d(x', y') = E_I(x', y', z') e^{-c R_c} M(x', y', z') (T_l cos⁴θ / (4 f_n²)) ((R_c − F_l)/R_c)²,   (5)

where E_I is the irradiance on the scene surface at point (x', y', z'), R_c is the distance from (x', y', z') to the camera, and the function M represents the surface reflectance map. We note that 0 ≤ M ≤ 1, and typical values for objects of oceanographic interest are given in [4]. The camera system is characterized by f_n (f-number of the lens), T_l (lens transmittance) and F_l (focal length). The angle θ is the angle between the reflectance map and a line between the position (x', y', z') and the camera. The forward-scatter component E_fs is calculated from the direct component via convolution with a point spread function g; its derivation is valid under the small angle scattering approximation:

E_fs(x', y') = E_d(x', y') * g(x', y' | R_c, G, c, B),   (6)
where the function g is given by

g(x, y | R_c, G, c, B) = (e^{-G R_c} − e^{-c R_c}) F^{-1}{e^{-B R_c f}},   (7)

with G an empirical factor such that |G| ≤ c, and B a damping function determined empirically. F^{-1} indicates the inverse Fourier transform and f is the radial frequency. Experimental measurements of the point spread function validate the use of the small angle scattering theory [5, 6]. For the calculation of the backscatter component the small angle approximation is no longer valid, as the backscattered light enters the camera from a large distribution of angles. The model takes into account the light contributions from the volume of water between the scene and the camera. The three-dimensional space is divided into a large number of differential volumes ΔV. The backscatter component is a linear superposition of these illuminated volumes of water, weighted by the volume scattering function:

E_bs(x', y') = Σ_i E_bs,d(x', y', z_i),   (8)
where E_bs,d is the direct component of the backscattered irradiance, evaluated as

E_bs,d(x', y') = β(θ) E_s(x, y, z) e^{-c Z_b} (T_l cos⁴θ / (4 f_n²)) ((Z_b − F_l)/Z_b)² ΔZ_b,   (9)

with ΔZ_b the thickness of the backscattering volume V_b and Z_b the distance from a point in the camera plane to the center of the backscatter slab; β(θ) is the volume scattering function and E_s is the irradiance in the three-dimensional space propagating away from the light source.
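To make the roles of the terms in the direct component concrete, here is a toy single-point evaluation: exponential attenuation over the camera distance, the reflectance map value, and the lens factors. All numerical parameter values below (lens transmittance, f-number, focal length) are illustrative assumptions, not Jaffe's measured data:

```python
import math

def direct_component(E_I, M, R_c, c, T_l=0.8, f_n=8.0, F_l=0.035, theta=0.0):
    """Toy single-point evaluation of the Jaffe-McGlamery direct component:
    scene irradiance E_I, reflectance M, camera distance R_c (m) and total
    attenuation coefficient c (1/m); lens parameters are illustrative."""
    geometry = ((R_c - F_l) / R_c) ** 2          # lens geometry term
    optics = T_l * math.cos(theta) ** 4 / (4.0 * f_n ** 2)
    return E_I * math.exp(-c * R_c) * M * optics * geometry
```

Doubling the camera distance reduces the received direct irradiance mainly through the exponential attenuation factor.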

Figure 1
figure 1

The three components of underwater optical imaging: direct component (straight line), forward component (dashed line) and backward scatter component (dash-dot line).

Figure 2
figure 2

Coordinate system of the Jaffe-McGlamery model.

In Jaffe's work [2, 7] the relationships between image range, camera-light separation and the limiting factors in underwater imaging are considered. If only short ranges are desired (one attenuation length), a simple conventional system with close positioning of camera and lights can yield good results, but such configurations are contrast-limited at greater ranges. If longer distances are desired (2-3 attenuation lengths), systems with separated camera and lights are preferred, although backscattering problems appear as the distance increases. For greater distances more sophisticated technology is required, for example laser range-gated systems and synchronous scan imaging.

3. Image Restoration

A possible approach to deal with underwater images is to consider the image transmission in water as a linear system [8].

Image restoration aims at recovering the original image f(x, y) from the observed image g(x, y), using (if available) explicit knowledge about the degradation function h(x, y) (also called the point spread function, PSF) and the noise characteristics n(x, y):

g(x, y) = f(x, y) * h(x, y) + n(x, y),   (10)
where * denotes convolution. The degradation function includes the response of the imaging system itself and the effects of the medium (water in our case). In the frequency domain we have

G(u, v) = F(u, v) H(u, v) + N(u, v),   (11)
where (u, v) are the spatial frequencies and G, F, H, and N are the Fourier transforms of g, f, h, and n, respectively. The system response function H in the frequency domain is referred to as the optical transfer function (OTF), and its magnitude as the modulation transfer function (MTF). Usually, the system response is expressed as the direct product of the responses of the optical system itself and of the medium:

H_sys(u, v) = H_optics(u, v) H_medium(u, v).   (12)
The better the knowledge we have about the degradation function, the better are the results of the restoration. However, in practical cases, there is insufficient knowledge about the degradation and it must be estimated and modeled. In our case, the source of degradation in underwater imaging includes turbidity, floating particles and the optical properties of light propagation in water. Therefore, underwater optical properties have to be incorporated into the PSF and MTF. The presence of noise from various sources further complicates these techniques.
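The degradation model just described can be simulated directly. The sketch below (pure Python, 1-D for brevity) produces an observed signal g from an ideal signal f by circular convolution with a PSF h plus optional additive Gaussian noise; the signal and PSF values are illustrative:

```python
import random

def degrade(f, h, noise_sigma=0.0, seed=0):
    """1-D toy of the degradation model g = f * h + n: circular
    convolution of the ideal signal f with the PSF h, plus optional
    additive Gaussian noise of standard deviation noise_sigma."""
    rng = random.Random(seed)
    n = len(f)
    g = []
    for x in range(n):
        acc = sum(h[k] * f[(x - k) % n] for k in range(len(h)))
        g.append(acc + rng.gauss(0.0, noise_sigma))
    return g

# An impulse blurred by a normalized 3-tap PSF: energy is preserved
# but spread over neighbouring samples.
f = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
g = degrade(f, [0.25, 0.5, 0.25])
```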

Recently, Hou et al. [9-11] incorporated the underwater optical properties into the traditional image restoration approach. They assume that blurring is caused by strong scattering due to the water and its constituents, which include various-sized particles. To address this issue, they incorporated measured in-water optical properties into the point spread function in the spatial domain and the modulation transfer function in the frequency domain. The authors modeled the medium transfer function for circularly symmetric response systems (2-dimensional space) as an exponential function:

H_medium(ψ, r) = e^{-D(ψ) r}.   (13)
The exponent D(ψ) is the decay transfer function obtained by Wells [12] for seawater within the small angle approximation:

D(ψ) = c − b (1 − e^{-2π θ₀ ψ})/(2π θ₀ ψ),   (14)

where θ₀ is the mean square angle, and b and c are the total scattering and attenuation coefficients, respectively. The system (camera/lens) response was measured directly from calibrated imagery at various spatial frequencies. In-water optical properties were measured during the experiment: absorption and attenuation coefficients, particle size distributions and volume scattering functions. The authors implemented an automated framework termed Image Restoration via Denoised Deconvolution. To determine the quality of the restored images, an objective quality metric was implemented: a wavelet-decomposed and denoised perceptual metric constrained by a power spectrum ratio (see Section 6). Image restoration is carried out and the medium optical properties are estimated. Both modeled and measured optical properties are taken into account in the framework, and the images are restored using PSFs derived from both (see Figure 3).
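In code, Wells' decay transfer function and the resulting medium MTF can be sketched as follows (the coefficient values in the comments are illustrative assumptions; note that at zero frequency D reduces to the absorption coefficient a = c − b, and at high frequency it approaches c):

```python
import math

def wells_decay(psi, b, c, theta0):
    """Wells' decay transfer function D(psi) within the small angle
    approximation: psi is the angular spatial frequency, b and c the
    total scattering and attenuation coefficients, theta0 the mean
    square angle."""
    x = 2.0 * math.pi * theta0 * psi
    if x == 0.0:
        return c - b          # limit psi -> 0: D(0) = c - b = a
    return c - b * (1.0 - math.exp(-x)) / x

def medium_mtf(psi, r, b, c, theta0):
    """MTF of the water medium over path length r: exp(-D(psi) * r)."""
    return math.exp(-wells_decay(psi, b, c, theta0) * r)
```

The MTF decays faster at high spatial frequencies, which is exactly the loss of fine detail observed in scattered underwater imagery.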

Figure 3
figure 3

Image taken at 7.5 m depth in Florida. The original (a), the restored image based on the measured MTF (b) and the restored image based on the modeled MTF (c). Courtesy of Hou et al. [9].

Trucco and Olmos [13] presented a self-tuning restoration filter based on a simplified version of the Jaffe-McGlamery image formation model. Two assumptions are made in order to design the restoration filter. The first is uniform illumination (direct sunlight in shallow waters); the second is to consider only the forward-scattered component E_fs of the image model as the major degradation source, ignoring the backscatter component E_bs and the direct component E_d. This appears reasonable whenever the concentration of particulate matter generating backscatter in the water column is limited. A further simplification considers the difference of exponentials in the forward scatter model (6) as an experimental constant K (with typical values between 0.2 and 0.9):

e^{-G R_c} − e^{-c R_c} ≈ K.   (15)
Within these assumptions, from (7), a simple inverse filter in the frequency domain is designed as follows (the parameter B is approximated by c):

W(f) = 1 / (K e^{-c R_c f}).   (16)
Optimal values of these parameters were estimated automatically for each individual image by optimizing a quality criterion based on a global contrast measure (optimality is defined as achieving minimum blur). Therefore, low-backscatter and shallow-water conditions represent the optimal environment for this technique. The authors assessed the performance of the restoration filter both qualitatively (by visual inspection) and quantitatively. In particular, they quantified the benefits of the self-tuning filter as a preprocessor for image classification: images were classified as containing man-made objects or not [14, 15]. Quantitative tests with a large number of frames from real videos show an important improvement in the task of detecting man-made objects on the seafloor. The training videos were acquired in different environments: an instrumented tank, and shallow and turbid water conditions in the sea.

Liu et al. [16] measured the PSF and MTF of seawater in the laboratory by means of image transmission theory and used Wiener filters to restore the blurred underwater images. The degradation function is measured in a water tank, in an experiment with a slit image and a light source. In a first step, the one-dimensional light intensity distributions of the slit images at different water path lengths are obtained, and the one-dimensional PSF of seawater is derived by deconvolution. Then, exploiting the circular symmetry of the PSF of seawater, the two-dimensional PSF can be calculated mathematically. In a similar way, MTFs are derived. These measured functions are used for blurred image restoration with the standard Wiener deconvolution process. The transfer function reads

H_w(u, v) = H*(u, v) / (|H(u, v)|² + P_n(u, v)/P_f(u, v)),   (17)
where P_n and P_f are the power spectra of the noise and of the original image, respectively, and H* is the complex conjugate of H (the measured result described previously). The noise is regarded as white noise, so P_n is a constant that can be estimated from the blurred noisy images, while P_f is estimated as

P_f(u, v) = P_g(u, v) − P_n,   (18)
where P_g is the power spectrum of the blurred image. The spectrum of the restored image is then

F̂(u, v) = H_w(u, v) G(u, v).   (19)

A parametric Wiener filter is also used by the authors, and both deconvolution methods are compared.
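A minimal 1-D Wiener deconvolution can be sketched with NumPy as below. For simplicity the spectrum ratio P_n/P_f is replaced by a constant noise-to-signal ratio `nsr` (an illustrative simplification of the estimation procedure described above, not Liu et al.'s measured quantities):

```python
import numpy as np

def wiener_deconvolve(g, h, nsr=0.01):
    """1-D Wiener deconvolution: F_hat = H* G / (|H|^2 + nsr), where the
    constant nsr stands in for the Pn/Pf power-spectrum ratio."""
    G = np.fft.fft(g)
    H = np.fft.fft(h, n=len(g))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(W * G))

# Blur an impulse with a 3-tap PSF (circular convolution via FFT),
# then restore it with the same PSF.
f = np.zeros(32)
f[10] = 1.0
h = np.array([0.25, 0.5, 0.25])
g = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h, 32)))
f_hat = wiener_deconvolve(g, h, nsr=1e-4)
```

The restored impulse returns to its original position; a larger `nsr` trades restoration sharpness for noise suppression.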

Schechner and Karpel [17] exploit polarization effects in underwater scattering to compensate for visibility degradation. The authors claim that image blur is not the dominant cause of image contrast degradation; instead, they identify the veiling (backscattered) light as the prime visibility disturbance to be removed. The Jaffe-McGlamery image formation model is applied under natural underwater lighting, exploiting the fact that veiling light is partially polarized horizontally [18]. The algorithm is based on a pair of images taken through a polarizer at different orientations. Even when the raw images have very low contrast, their slight differences provide the key to visibility improvement. The method automatically accounts for dependencies on object distance and estimates a distance map of the scene. A quantitative estimate of the visibility improvement is defined as a logarithmic function of the backscatter component. Additionally, an algorithm to compensate for the strong blue hue is applied. Experiments conducted in the sea show improvements of scene contrast and color correction, nearly doubling the underwater visibility range. In Figure 4 a raw image and its recovered version are shown.

Figure 4
figure 4

Underwater scene at the Red-Sea at 26 m below the water surface. Left, raw image; right, recovered image. Image courtesy of Schechner and Karpel [17].

Recently, Treibitz and Schechner [19] used a similar polarization-based method for visibility enhancement and distance estimation in scattering media. They studied the formation of images under wide-field (non-scanning) artificial illumination. Based on empirically obtained backscattered-light characteristics, they presented a visibility recovery approach which also yields a rough estimate of the 3D scene structure. The method is simple and requires compact hardware, using active wide-field polarized illumination. Two images of the scene are taken instantly, with different states of a camera-mounted polarizer. The authors used the approach to demonstrate recovery of object signals and significant visibility enhancement in experiments in various sea environments at night. The distance reconstruction is effective in a range of 1-2 m. In Figure 5, an underwater image taken in the Mediterranean sea with two artificial light sources is shown together with the corresponding de-scattered image [19].

Figure 5
figure 5

Raw image (a), de-scattered image (b). From Treibitz and Schechner [19].

4. Image Enhancement and Color Correction

These methods make no use of the image formation process, and no a priori knowledge of the environment is needed (they do not require attenuation or scattering coefficients, for instance). They are usually simpler and faster than the image restoration techniques.

Regarding color correction, as depth increases, colors drop off one by one depending on their wavelength. Red disappears first, at a depth of approximately 3 m; at 5 m, orange is lost; most of the yellow goes off at 10 m, and finally green and purple disappear at further depths. Blue light, having the shortest wavelength, travels the farthest in water, so underwater images are dominated by a blue-green hue. Light source variations also affect color perception. As a consequence, a strong and non-uniform color cast characterizes typical underwater images.

Bazeille et al. [20, 21] propose an algorithm to pre-process underwater images. It reduces underwater perturbations and improves image quality. It is composed of several successive independent processing steps which correct non-uniform illumination (homomorphic filtering), suppress noise (wavelet denoising), enhance edges (anisotropic filtering) and adjust colors (equalizing the RGB channels to suppress the predominant color). The algorithm is automatic and requires no parameter adjustment. The method was used as a preliminary step for edge detection. Its robustness was analyzed using gradient magnitude histograms, and the criterion proposed by Arnold-Bos et al. [22] was also applied. This criterion assumes that a well-contrasted and noise-free image has a gradient magnitude histogram distributed close to exponential, and it attributes a mark from zero to one. In Figure 6, pairs of images are shown before and after Bazeille et al.'s processing [20].
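The colour-adjustment idea can be illustrated with a minimal stand-in that scales each RGB channel so that all three share the same mean, suppressing a predominant colour cast (this grey-world-style sketch is a simplification for illustration, not Bazeille et al.'s exact operator):

```python
def equalize_channel_means(pixels):
    """Scale each RGB channel of a list of (r, g, b) pixels so that all
    three channels share the same mean, suppressing a global colour
    cast; output values are clipped to 255."""
    n = len(pixels)
    means = [sum(p[ch] for p in pixels) / n for ch in range(3)]
    grey = sum(means) / 3.0
    return [tuple(min(255.0, p[ch] * grey / means[ch]) for ch in range(3))
            for p in pixels]

# A bluish "image": the blue mean is twice the red mean.
img = [(40, 60, 80), (80, 120, 160)]
balanced = equalize_channel_means(img)
```

After balancing, the two pixels become neutral greys: the bluish cast is gone while relative brightness is preserved.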

Figure 6
figure 6

Pairs of images before (a) and after (b) Bazeille et al.'s processing. Image courtesy of Bazeille et al. [20].

Chambah et al. [23] proposed a color correction method based on the ACE model, an unsupervised color equalization algorithm developed by Rizzi et al. [24]. ACE is a perceptual approach inspired by some adaptation mechanisms of the human visual system, in particular lightness constancy and color constancy. ACE was applied to videos taken in an aquatic environment that present a strong and non-uniform color cast due to the depth of the water and the artificial illumination. The images were taken from the tanks of an aquarium. The inner parameters of the ACE algorithm were tuned to meet the requirements of image and histogram-shape naturalness and to deal with these kinds of aquatic images. In Figure 7, two example original images and their restored ACE versions are shown.

Figure 7
figure 7

Original images (a), after correction with ACE (b). Image courtesy of Chambah et al. [23].

Iqbal et al. [25] presented an underwater image enhancement method using an integrated color model. They proposed an approach based on slide stretching: first, contrast stretching of the RGB channels is used to equalize the color contrast in the images; second, saturation and intensity stretching in HSI space is applied to increase the true color and solve the lighting problem. The blue component in the image is controlled through saturation and intensity to create a range from pale blue to deep blue, and the contrast ratio is controlled by decreasing or increasing its value. In Figure 8, two example images before and after Iqbal et al.'s technique are shown.
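The basic operation behind slide stretching is a linear remapping of a channel's observed range onto the full output range. A minimal sketch (applied here to a flat list of intensities; the choice of target range [0, 255] is the usual 8-bit assumption):

```python
def contrast_stretch(channel, low=0.0, high=255.0):
    """Linear 'slide stretching' of one colour channel: map the observed
    [min, max] range onto [low, high]."""
    cmin, cmax = min(channel), max(channel)
    if cmax == cmin:
        return [low] * len(channel)  # flat channel: nothing to stretch
    scale = (high - low) / (cmax - cmin)
    return [low + (v - cmin) * scale for v in channel]

# A low-contrast channel occupying [50, 150] is expanded to [0, 255].
stretched = contrast_stretch([50, 100, 150])
```

In Iqbal et al.'s pipeline the same stretching idea is applied per RGB channel and then to the saturation and intensity components in HSI space.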

Figure 8
figure 8

Original images (a), images after enhancement using Iqbal et al.'s technique (b). Image courtesy of Iqbal et al. [25].

Arnold-Bos et al. [22, 26] presented a complete preprocessing framework for underwater images. They investigated the possibility of addressing the whole range of noise present in underwater images by using a combination of deconvolution and enhancement methods. First, a contrast equalization scheme is proposed to reject backscattering, attenuation and lighting inequalities. If I is the original image and I_low its low-pass version, a contrast-equalized version of I is I_eq = I / I_low. Contrast equalization is followed by histogram clipping and expansion of the image range. The method is relevant because backscattering is a slowly varying spatial function. Backscattering is the first noise addressed in the algorithm, but contrast equalization also corrects the effect of the exponential light attenuation with distance. Remaining noise, corresponding to sensor noise, floating particles and miscellaneous quantification errors, is suppressed using a generic self-tuning wavelet-based algorithm. The use of the adaptive smoothing filter significantly improves edge detection in the images. Results on simulated and real data are presented.
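The division of an image by its low-pass version can be sketched in one dimension: a constant (slowly varying) background is flattened to 1 while local structure stands out. The moving-average low pass below is an illustrative stand-in for whatever smoothing kernel is actually used:

```python
def contrast_equalize(image, radius=1):
    """Contrast equalization for a 1-D signal: divide the image by a
    local-mean (low-pass) version of itself. Slowly varying backscatter
    is flattened; local structure is preserved."""
    n = len(image)
    out = []
    for x in range(n):
        lo = max(0, x - radius)
        hi = min(n, x + radius + 1)
        local_mean = sum(image[lo:hi]) / (hi - lo)
        out.append(image[x] / local_mean if local_mean else 0.0)
    return out
```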

Color recovery is also analyzed by Torres-Mendez and Dudek [27], but from a different perspective: it is formulated as an energy minimization problem using learned constraints. The idea on which the approach is based is that an image can be modeled as a sample function of a stochastic process known as a Markov Random Field. Color correction is considered as the task of assigning to each pixel of the input image the color value that best describes its surrounding structure, using the training image patches. This model uses multi-scale representations of the color-corrected and color-depleted (bluish) images to construct a probabilistic algorithm that improves the color of underwater images. Experimental results on a variety of underwater scenes are shown.

Ahlen et al. [28] apply underwater hyperspectral data for color correction purposes. They develop a mathematical stability model which gives a range of wavelengths that should be used to compute attenuation coefficient values that are as stable as possible in terms of variation with depth. Their main goal is to monitor coral reefs and marine habitats. Spectrometer measurements of a colored plate at various depths are performed. The hyperspectral data are then color corrected with a formula derived from Beer's law:

I(d₀) = I(d) e^{α (d − d₀)},   (20)

where I(d) is the pixel intensity in the image at depth d and α is the corresponding attenuation coefficient calculated from the spectral data. In this way, they obtain images as if they were taken at a much shallower depth d₀ than in reality. All hyperspectral images are "lifted up" to a depth of 1.8 m, where almost all wavelengths are still present (they have not yet been absorbed by the water column). The data are finally brought back into the original RGB space.
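The "lifting up" operation is a one-line application of Beer's law: compensate the attenuation accumulated between the target depth and the acquisition depth. A minimal sketch (the coefficient value used for the check is illustrative):

```python
import math

def lift_to_depth(intensity, alpha, depth, target_depth=1.8):
    """Beer's-law colour correction: estimate the intensity a pixel
    would have had at target_depth, given its value at `depth` and a
    per-wavelength attenuation coefficient alpha (1/m)."""
    return intensity * math.exp(alpha * (depth - target_depth))
```

A pixel recorded deeper than the target depth is brightened; one recorded exactly at the target depth is left unchanged.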

Another approach to improve color rendition is proposed by Petit et al. [29]. The method is based on light attenuation inversion after a color space contraction computed using quaternions. Applied to the white vector in RGB space, the attenuation gives a hue vector characterizing the water color:

(e^{-c_R d}, e^{-c_G d}, e^{-c_B d}),   (21)

where c_R, c_G, and c_B are the attenuation coefficients for the red, green and blue wavelengths, respectively, and d is the travelled distance. Using this reference axis, geometrical transformations in the color space are computed with quaternions. Pixels of water areas in the processed images are moved to gray or to colors with low saturation, whereas the objects remain fully colored. In this way object contrast is enhanced and the bluish aspect of the images is removed. Two example images before and after correction by Petit et al.'s algorithm are shown in Figure 9.
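The starting point, attenuating the white vector channel by channel, can be sketched as follows. The coefficient values below are illustrative assumptions (red attenuates fastest underwater); for a real water body they must be supplied from measurements:

```python
import math

def water_hue_vector(c_r, c_g, c_b, distance):
    """Per-channel attenuation of the white vector (1, 1, 1) over a given
    distance (m), yielding the hue axis characterizing the water colour."""
    return (math.exp(-c_r * distance),
            math.exp(-c_g * distance),
            math.exp(-c_b * distance))

# With red attenuated fastest, the axis leans towards blue, matching
# the bluish cast of underwater scenes.
hue = water_hue_vector(0.6, 0.2, 0.05, 3.0)
```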

Figure 9
figure 9

Original image (a), corrected by Petit et al.'s algorithm (b). Image courtesy of Petit et al. [29].

5. Lighting Problems

In this section we summarize the articles that have been specifically focused on solving lighting problems. Even if this aspect was already taken into account in some of the methods presented in the previous sections, we review here the works that have addressed in particular this kind of problem, proposing different lighting correction strategies.

Garcia et al. [30] analyzed how to solve the lighting problems in underwater imaging and reviewed different techniques. The starting point is the illumination-reflectance model, where the image f(x, y) sensed by the camera is considered as the product of the illumination i(x, y), the reflectance function r(x, y) and a gain factor g(x, y), plus an offset term o(x, y):

f(x, y) = g(x, y) i(x, y) r(x, y) + o(x, y).   (22)
The multiplicative factor g(x, y) i(x, y), due to the light sources and the camera sensitivity, can be modeled as a smooth function (the offset term is ignored). In order to model the non-uniform illumination, a Gaussian-smoothed version f_s(x, y) of the image is proposed. The smoothed image is intended as an estimate of how much the illumination field (and camera sensitivity) affects every pixel. The acquired image is corrected by a point-by-point division by the smoothed image f_s(x, y), giving rise to an estimate of the ideal image:

f̂(x, y) = C f(x, y) / f_s(x, y),   (23)

where C is a normalization constant. Next, the contrast of the resulting image is emphasized, giving rise to an equalized version of f̂.
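The point-by-point correction can be sketched in one dimension. A moving average stands in for the Gaussian smoothing (an illustrative simplification): dividing the image by its smoothed version flattens a slowly varying illumination field while leaving a uniformly lit scene untouched:

```python
def smooth(image, radius=2):
    """Crude stand-in for the Gaussian-smoothed image: a moving average
    over a (2*radius + 1)-sample window, shrunk at the borders."""
    n = len(image)
    return [sum(image[max(0, i - radius):min(n, i + radius + 1)]) /
            (min(n, i + radius + 1) - max(0, i - radius)) for i in range(n)]

def lighting_correct(image, C=1.0, radius=2):
    """Point-by-point division of the acquired image by its smoothed
    version, an estimate of the multiplicative illumination field."""
    fs = smooth(image, radius)
    return [C * f / s for f, s in zip(image, fs)]
```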

Some authors compensate for the effects of non-uniform lighting by applying local equalization to the images [31, 32]. The non-uniformity of the lighting demands a special treatment for the different areas of the image, depending on the amount of light they receive. The strategy consists in defining an n×n neighborhood, computing the histogram of this area and applying an equalization function, but modifying only the central point of the neighborhood [33]. A similar strategy is used by Zuiderveld [34].

An alternative model consists of applying homomorphic filtering [30]. This approach assumes that the illumination factor varies smoothly through the field of view, generating low frequencies in the Fourier transform of the image (the offset term is ignored). Taking the logarithm of (22), the multiplicative effect is converted into an additive one (with the gain factor folded into the illumination):

ln f(x, y) = ln i(x, y) + ln r(x, y).   (24)
Taking the Fourier transform of (24), we obtain

F(u, v) = I(u, v) + R(u, v),   (25)

where F, I, and R are the Fourier transforms of ln f, ln i, and ln r, respectively. The low frequencies can be suppressed by multiplying these components by a high-pass homomorphic filter H given by

H(u, v) = s (1 − e^{-(u² + v²)/ω₀²}) + o,   (26)

where ω₀ is the cutoff frequency, s is a multiplicative factor and o is an offset term. This filter not only attenuates non-uniform illumination but also enhances the high frequencies, sharpening the edges.
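The gain profile of such a high-pass homomorphic filter can be sketched as a function of spatial frequency (the parameter values below are illustrative, and the Gaussian-style form is one common parameterization): low frequencies are attenuated towards the offset, high frequencies are boosted towards the sum of factor and offset:

```python
import math

def homomorphic_gain(u, v, omega0=0.5, s=1.5, o=0.5):
    """High-pass homomorphic filter gain at spatial frequency (u, v):
    omega0 is the cutoff frequency, s a multiplicative factor, o an
    offset term. Gain rises from o at DC to s + o at high frequency."""
    return s * (1.0 - math.exp(-(u * u + v * v) / omega0 ** 2)) + o
```

Applied in the log domain, a gain below 1 at low frequencies suppresses the smooth illumination field, while the above-1 gain at high frequencies sharpens edges.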

Rzhanov et al. [35] disregard the multiplicative factor, considering the lighting of the scene as an additive term which should be subtracted from the original image:

r(x, y) = f(x, y) − i_p(x, y) + c,

where i_p(x, y) is a two-dimensional polynomial spline approximating the illumination field and c is a normalization constant.
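A sketch of this additive correction; a plain least-squares polynomial surface stands in here for the two-dimensional polynomial spline of [35], and the mean is used as the normalization constant:

```python
import numpy as np

def additive_illumination_correction(f, degree=2):
    """Fit a low-order 2-D polynomial surface to the image by least squares
    and subtract it as the additive lighting term (a plain polynomial is a
    stand-in for the polynomial spline of the original method)."""
    f = f.astype(np.float64)
    h, w = f.shape
    y, x = np.mgrid[0:h, 0:w]
    x = x.ravel() / w
    y = y.ravel() / h
    # Design matrix of monomials x^i * y^j with i + j <= degree
    cols = [x**i * y**j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, f.ravel(), rcond=None)
    lighting = (A @ coeffs).reshape(h, w)        # fitted illumination surface
    c = f.mean()                                 # normalization constant
    return f - lighting + c
```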

Garcia et al. [30] tested and compared the different lighting-correction strategies in two typical underwater situations. The first one considers images acquired in shallow waters at sundown (simulating the deep ocean), where the vehicle carries its own light, producing a bright spot in the center of the image. The second sequence of images was acquired in shallow waters on a sunny day. The evaluation methodology for the comparison is qualitative. The best results were obtained by the homomorphic filtering and by the point-by-point division by the smoothed image. The authors emphasize that both methods treat the illumination field as multiplicative rather than additive.

6. Quality Assessment

In recent years many different methods for image quality assessment have been proposed and analyzed with the goal of developing a quality metric that correlates with perceived quality measurements (for a detailed review see [36]). Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE) are the most widely used objective image quality/distortion metrics. In the last decades, however, a great effort has been made to develop new objective image quality methods which incorporate perceptual quality measures by considering human visual system characteristics. Wang et al. [37] propose a Structural Similarity (SSIM) index that treats image degradation not as an error measurement but as a structural distortion measurement.
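For illustration, the structure of the SSIM formula can be shown with a single global window; note that the published index of [37] instead averages a local, sliding-window computation:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM sketch computed over the whole image, using the
    standard constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2. This global
    form only illustrates the structure of the formula."""
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()                      # luminance terms
    vx, vy = x.var(), y.var()                        # contrast terms
    cxy = ((x - mx) * (y - my)).mean()               # structure (covariance)
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))
```

The index equals 1 only for identical images and decreases as the structure (covariance) of the two images diverges, which is exactly the distortion notion SSIM was designed to capture.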

Objective image quality metrics are classified into three groups: full-reference (an original image exists with which the distorted image is to be compared), no-reference or "blind" quality assessment, and reduced-reference quality assessment (the reference image is only partially available, in the form of a set of extracted features).

In the present case of underwater image processing, no original image is available for comparison, and therefore no-reference metrics are necessary. Among the enhancement and restoration methods cited above, many of the authors use subjective quality measurements to evaluate the performance of their methods. In what follows we focus on the quantitative metrics used by some of the authors to evaluate algorithm performance and image quality in the specific case of underwater images.

Besides visual comparison, Hou and Weidemann [38] also propose an objective quality metric for the typical scattering-blurred underwater images. They measure image quality by its sharpness, using the gradient or slope of edges. Wavelet transforms are used to remove the effect of scattering when locating edges, and the transformed results are further applied in constraining the perceptual metric. Images are first decomposed by a wavelet transform to remove random and medium noise. The sharpness of the edges is determined by linear regression, obtaining the slope angle of the grayscale values of edge pixels versus location. The overall sharpness of the image is the average of the measured grayscale angles weighted by the ratio of the power of the high-frequency components of the image to the total power of the image (the WGSA metric). The metric has been used in their automated image restoration program and the results demonstrate consistency for different optical conditions and attenuation ranges.
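A greatly simplified stand-in for this idea can make the weighting explicit; here plain gradient magnitudes at the strongest edges replace the wavelet-domain slope angles of the actual WGSA metric, so this is only a structural sketch:

```python
import numpy as np

def weighted_sharpness(img, thresh_pct=90):
    """Simplified WGSA-style score: average edge strength (gradient
    magnitude at the strongest edges, standing in for regression slope
    angles) weighted by the ratio of high-frequency power to total power."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    edges = mag >= np.percentile(mag, thresh_pct)   # keep strongest edges
    sharpness = mag[edges].mean()
    # Power spectrum, with a crude radial split into low/high frequencies
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    r = np.hypot(y - h / 2, x - w / 2)
    hf = spec[r > min(h, w) / 4].sum()              # high-frequency power
    return sharpness * hf / spec.sum()
```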

Focusing on underwater video processing algorithms, Arredondo and Lebart [39] propose a methodology to quantitatively assess the robustness and behavior of algorithms in the face of underwater noise. The principle is to degrade test images with simulated underwater perturbations; the focus is to isolate and assess independently the effects of the different perturbations, which are simulated with varying degrees of severity. The Jaffe-McGlamery model is used to simulate blur and unequal illumination. Different levels of blurring are simulated using the forward-scattered component of images taken at different distances from the scene: the camera-to-scene distance in (6) is increased over a range of values at fixed intervals. The non-uniform lighting is simulated by placing the camera at increasing distances from the scene; in order to isolate the effect of non-uniform lighting, only the direct component is taken into account. The lack of contrast is simulated by histogram manipulation. As a specific application, different optical flow algorithms are compared under underwater conditions. A well-known ground-truth synthetic sequence is used for the experiments: since the true motion of the sequence is known, it is possible to measure quantitatively the effect of the degradations on the optical flow estimates. In [39] the different available methods are compared by measuring the angular deviation between the estimated velocity and the correct one, using an attenuation coefficient typical of the deep ocean. It is shown that the angular error increases linearly with the Gaussian noise for all the methods compared.
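The benchmark principle, applying one controlled perturbation at a time, can be sketched with generic stand-ins for the Jaffe-McGlamery simulation (simple Gaussian blur, contrast compression and additive noise; all parameter values are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(img, blur_sigma=2.0, contrast=0.5, noise_std=5.0, seed=0):
    """Apply controlled, parameterized perturbations to a test image so an
    algorithm's behavior can be measured against severity. These generic
    degradations only stand in for the physics-based simulation of [39]."""
    rng = np.random.default_rng(seed)
    out = gaussian_filter(img.astype(np.float64), blur_sigma)  # scattering blur
    out = out.mean() + contrast * (out - out.mean())           # contrast loss
    out = out + rng.normal(0.0, noise_std, out.shape)          # additive noise
    return np.clip(out, 0, 255)
```

Sweeping one parameter while holding the others fixed reproduces the paper's strategy of isolating each perturbation's effect on, for example, optical flow error.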

In order to assess the quality of their adaptive smoothing method for underwater image denoising, Arnold-Bos et al. [26] proposed a simple criterion based on a general result by Pratt [40]: for most well-contrasted and noise-free images, the distribution of the gradient magnitude histogram is closely exponential, except for a small peak at low gradients corresponding to homogeneous zones. They define a robustness index between 0 and 1 (linked to the variance of the linear regression of the gradient magnitude histogram) that measures the closeness of the histogram to an exponential distribution. The same index was also used by Bazeille et al. [20] to evaluate the performance of their algorithm.
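A possible implementation of such an index: an exponential histogram is a straight line in log scale, so the residual variance of a linear regression measures the deviation. The exact mapping from regression variance to [0, 1] used in [26] is not specified here, so the scaling below is an assumption:

```python
import numpy as np

def robustness_index(img, bins=64):
    """Fit log(histogram) of the gradient magnitude with a line; the
    closer the histogram is to exponential, the smaller the regression
    residuals and the closer the index is to 1."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    hist, edges = np.histogram(mag, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = hist > 0                        # log only defined on non-empty bins
    log_h = np.log(hist[keep])
    # Linear regression of log-counts versus gradient magnitude
    slope, intercept = np.polyfit(centers[keep], log_h, 1)
    residuals = log_h - (slope * centers[keep] + intercept)
    return 1.0 / (1.0 + residuals.var())   # near 1 for a closely exponential fit
```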

In Table 1 we summarize the articles reviewed above, indicating the model assumptions and imaging conditions for which they have been developed and tested, as well as the image quality assessment method used to evaluate the corresponding results.

Table 1 Brief description of the algorithms.

A quantitative comparison of the above-cited methods, judging which of them gives the best or worst results, is beyond the scope of this article. In fact, such a quantitative comparison would require a common database on which the corresponding algorithms could be tested according to specific criteria. To our knowledge, no such underwater database exists at present; building one could therefore be one of the future research lines from which the underwater community would certainly benefit. However, we have pointed out how each of the algorithms has been evaluated by its own authors: subjectively (by visual inspection) or objectively (by the implementation of an objective image quality measure). The majority of the algorithms reviewed here have been evaluated using subjective visual inspection of their results.

7. Conclusions

The difficulty of obtaining visibility of objects at long or short distances in underwater scenes presents a challenge to the image processing community. Although numerous approaches for image enhancement are available, they are mainly limited to ordinary images, and few approaches have been specifically developed for underwater images. In this article we have reviewed some of them with the intention of bringing the information together for a better comprehension and comparison of the methods. We have summarized the available methods for image restoration and image enhancement, focusing on the conditions for which each of the algorithms was originally developed. We have also analyzed the methodology used to evaluate the algorithms' performance, highlighting the works where a quantitative quality metric has been used.

As pointed out by our analysis, to boost underwater image processing, a common suitable database of test images for different imaging conditions, together with standard criteria for qualitative and/or quantitative assessment of the results, is still required.

Nowadays, leading advancements in optical imaging technology [41, 42] and the use of sophisticated sensing techniques are rapidly increasing the ability to image objects in the sea. Emerging underwater imaging techniques and technologies make it necessary to adapt and extend the above-cited methods to, for example, handle data from multiple sources that can extract three-dimensional scene information. On the other hand, studying the vision systems of underwater animals (their physical optics, photoreceptors and neurophysiological mechanisms) will certainly give us new insights into the processing of underwater images.


References

1. McGlamery B: A computer model for underwater camera system. In Ocean Optics VI, Duntley SQ (Ed.), Proceedings of SPIE, vol. 208, pp. 221–231; 1979.
2. Jaffe JS: Computer modeling and the design of optimal underwater imaging systems. IEEE Journal of Oceanic Engineering 1990, 15(2):101–111. 10.1109/48.50695
3. Funk C, Bryant S, Heckman P: Handbook of Underwater Imaging System Design. Naval Undersea Center, San Diego, Calif, USA; 1972.
4. Dixon TH, Pivirotto TJ, Chapman RF, Tyce RC: A range-gated laser system for ocean floor imaging. Marine Technology Society Journal 1983, 17.
5. McLean J, Voss K: Point spread functions in ocean water: comparison between theory and experiment. Applied Optics 1991, 30:2027–2030. 10.1364/AO.30.002027
6. Voss K: Simple empirical model of the oceanic point spread function. Applied Optics 1991, 30:2647–2651. 10.1364/AO.30.002647
7. Jaffe J, Moore K, McLean J, Strand M: Underwater optical imaging: status and prospects. Oceanography 2001, 14:66–76.
8. Mertens J, Replogle F: Use of point spread and beam spread functions for analysis of imaging systems in water. Journal of the Optical Society of America 1977, 67:1105–1117. 10.1364/JOSA.67.001105
9. Hou W, Gray DJ, Weidemann AD, Fournier GR, Forand JL: Automated underwater image restoration and retrieval of related optical properties. Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '07), 2007, 1889–1892.
10. Hou W, Weidemann AD, Gray DJ, Fournier GR: Imagery-derived modulation transfer function and its applications for underwater imaging. Applications of Digital Image Processing, August 2007, San Diego, Calif, USA, Proceedings of SPIE 6696.
11. Hou W, Gray DJ, Weidemann AD, Arnone RA: Comparison and validation of point spread models for imaging in natural waters. Optics Express 2008, 16(13):9958–9965. 10.1364/OE.16.009958
12. Wells W: Theory of Small Angle Scattering. North Atlantic Treaty Organization; 1973.
13. Trucco E, Olmos A: Self-tuning underwater image restoration. IEEE Journal of Oceanic Engineering 2006, 31(2):511–519. 10.1109/JOE.2004.836395
14. Olmos A, Trucco E: Detecting man-made objects in unconstrained subsea videos. Proceedings of the British Machine Vision Conference, 2002, 517–526.
15. Olmos A, Trucco E, Lane D: Automatic man-made object detection with intensity cameras. Proceedings of the IEEE Oceans Conference Record, 2002, 3:1555–1561.
16. Liu Z, Yu Y, Zhang K, Huang H: Underwater image transmission and blurred image restoration. Optical Engineering 2001, 40(6):1125–1131. 10.1117/1.1364500
17. Schechner YY, Karpel N: Recovery of underwater visibility and structure by polarization analysis. IEEE Journal of Oceanic Engineering 2005, 30(3):570–587. 10.1109/JOE.2005.850871
18. Koennen G: Polarized Light in Nature. Cambridge University Press, Cambridge, UK; 1985.
19. Treibitz T, Schechner YY: Active polarization descattering. IEEE Transactions on Pattern Analysis and Machine Intelligence 2009, 31(3):385–399.
20. Bazeille S, Quidu I, Jaulin L, Malkasse JP: Automatic underwater image pre-processing. Proceedings of the Caracterisation du Milieu Marin (CMM '06), 2006.
21. Bazeille S: Vision sous-marine monoculaire pour la reconnaissance d'objets, Ph.D. thesis. Université de Bretagne Occidentale; 2008.
22. Arnold-Bos A, Malkasse JP, Kerven G: A pre-processing framework for automatic underwater images denoising. Proceedings of the European Conference on Propagation and Systems, March 2005, Brest, France.
23. Chambah M, Semani D, Renouf A, Courtellemont P, Rizzi A: Underwater color constancy: enhancement of automatic live fish recognition. Color Imaging IX: Processing, Hardcopy, and Applications, January 2004, San Jose, Calif, USA, Proceedings of SPIE 5293:157–168.
24. Rizzi A, Gatta C, Marini D: A new algorithm for unsupervised global and local color correction. Pattern Recognition Letters 2003, 24:1663–1677. 10.1016/S0167-8655(02)00323-9
25. Iqbal K, Abdul Salam R, Osman A, Zawawi Talib A: Underwater image enhancement using an integrated color model. International Journal of Computer Science 2007, 34:2.
26. Arnold-Bos A, Malkasset J-P, Kervern G: Towards a model-free denoising of underwater optical images. Proceedings of the IEEE Europe Oceans Conference, June 2005, Brest, France, 1:527–532.
27. Torres-Mendez LA, Dudek G: Color correction of underwater images for aquatic robot inspection. In Proceedings of the 5th International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR '05), November 2005, St. Augustine, Fla, USA. Lecture Notes in Computer Science, vol. 3757, Rangarajan A, Vemuri BC, Yuille AL (Eds.), Springer; 60–73.
28. Ahlen J, Sundgren D, Bengtsson E: Application of underwater hyperspectral data for color correction purposes. Pattern Recognition and Image Analysis 2007, 17(1):170–173. 10.1134/S105466180701021X
29. Petit F, Capelle-Laizé A-S, Carré P: Underwater image enhancement by attenuation inversion with quaternions. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '09), 2009, Taiwan, 1177–1180.
30. Garcia R, Nicosevici T, Cufi X: On the way to solve lighting problems in underwater imaging. Proceedings of the IEEE Oceans Conference Record, 2002, 2:1018–1024.
31. Singh H, Howland J, Yoerger D, Whitcomb L: Quantitative photomosaicing of underwater imaging. Proceedings of the IEEE Oceans Conference, 1998, 1:263–266.
32. Eustice R, Singh H, Howland J: Image registration underwater for fluid flow measurements and mosaicking. Proceedings of the IEEE Oceans Conference Record, 2000, 3:1529–1534.
33. Pizer SM, Amburn EP, Austin JD, et al.: Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing 1987, 39(3):355–368. 10.1016/S0734-189X(87)80186-X
34. Zuiderveld K: Contrast limited adaptive histogram equalization. In Graphics Gems IV, Heckbert P (Ed.), Academic Press; 1994.
35. Rzhanov Y, Linnett LM, Forbes R: Underwater video mosaicing for seabed mapping. Proceedings of the IEEE International Conference on Image Processing, 2000, 1:224–227.
36. Wang Z, Bovik A: Modern Image Quality Assessment. Morgan & Claypool; 2006.
37. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 2004, 13(4):600–612. 10.1109/TIP.2003.819861
38. Hou W, Weidemann AD: Objectively assessing underwater image quality for the purpose of automated restoration. Visual Information Processing XVI, April 2007, Orlando, Fla, USA, Proceedings of SPIE 6575.
39. Arredondo M, Lebart K: A methodology for the systematic assessment of underwater video processing algorithms. Proceedings of the IEEE Europe Oceans Conference, 2005, 1:362–367.
40. Pratt W: Digital Image Processing. John Wiley & Sons, New York, NY, USA; 1991.
41. Kocak DM, Caimi FM: The current art of underwater imaging—with a glimpse of the past and vision of the future. Marine Technology Society Journal 2005, 39(3):5–26. 10.4031/002533205787442576
42. Kocak DM, Dalgleish FR, Caimi FM, Schechner YY: A focus on recent developments and trends in underwater imaging. Marine Technology Society Journal 2008, 42(1):52–67. 10.4031/002533208786861209



The authors acknowledge Gianluigi Ciocca for a critical reading of the manuscript and the reviewers for their critical observations. We also gratefully acknowledge F. Petit, W. Hou, K. Iqbal, Y. Schechner, A. Rizzi, M. Chambah and S. Bazeille who kindly made their figures available upon our request.

Author information



Corresponding author

Correspondence to Silvia Corchs.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Schettini, R., Corchs, S. Underwater Image Processing: State of the Art of Restoration and Image Enhancement Methods. EURASIP J. Adv. Signal Process. 2010, 746052 (2010).
