# Visibility enhancement using an image filtering approach

Yong-Qin Zhang^{1, 2}, Yu Ding^{2}, Jin-Sheng Xiao^{3} (corresponding author), Jiaying Liu^{1} and Zongming Guo^{1}

*EURASIP Journal on Advances in Signal Processing* **2012**:220

https://doi.org/10.1186/1687-6180-2012-220

© Zhang et al.; licensee Springer. 2012

**Received: **15 March 2012

**Accepted: **17 September 2012

**Published: **11 October 2012

## Abstract

Misty, foggy, or hazy weather conditions cause color distortion and reduce the resolution and contrast of objects observed in outdoor scene acquisition. In order to detect and remove haze, this article proposes a novel and effective algorithm for visibility enhancement from a single gray or color image. Since the haze can be considered to concentrate mainly in one component of a multilayer image, the haze-free image is reconstructed through haze-layer estimation based on an image filtering approach using both a low-rank technique and an overlap averaging scheme. By applying parallel analysis with Monte Carlo simulation to the coarse atmospheric veil obtained by the median filter, a refined smooth haze layer is acquired that contains less texture while retaining depth changes. With the dark channel prior, the normalized transmission coefficient is then calculated to restore the fog-free image. Experimental results show that the proposed algorithm is a simple and efficient method for clarity improvement and contrast enhancement from a single foggy image. Moreover, it is comparable with the state-of-the-art methods, and in some cases outperforms them.

## Keywords

## Introduction

Visibility is the ability to see through air, independent of sunlight or moonlight. Clear, clean air offers better visibility than air polluted with dust particles or water droplets. A number of factors affect visibility, including precipitation, fog, mist, haze, smoke, and, in coastal areas, sea spray; these are generally composed principally of water droplets or particles whose size is not negligible relative to the wavelength of light. The difference between fog, mist, and haze can be quantified by the visibility distance. Visibility degradation is caused by the absorption and scattering of light by particles and gases in the atmosphere. Scattering by particulates impairs visibility more severely than absorption. Visibility is mainly reduced by scattering from particles between an observer and a distant object: particles scatter light from the sun and the rest of the sky into the line of sight of the observer, thereby decreasing the contrast between the object and the background sky.

Images and videos acquired by surveillance, traffic, and remote-sensing systems are often affected by poor visibility, due to light scattering and absorption by atmospheric particles and water droplets. For the rest of the paper, the atmospheric particles and water droplets from mist, haze, fog, smog, and cloud are not distinguished, for convenience. Visibility enhancement methods for degraded outdoor images fall into two main categories. The first category is non-model-based methods, such as histogram equalization [1, 2], Retinex theory [3], and the wavelet transform [4]. The shortcomings of these methods are that they are less effective at maintaining color fidelity, and they also seriously affect regions that are already clear. The second category is model-based methods, which can achieve better results by modeling the scattering and absorption, but usually need additional assumptions about the imaging environment or imaging system, such as scene depth [5–7] or multiple images [8–11]. Nonetheless, when these assumptions are not accurate, the effectiveness is greatly compromised. Consequently, visibility enhancement using a single image for haze removal has become a key focus of ongoing studies in image restoration in recent years. Recently, Sun et al. [12] provided a de-fogging method based on Poisson matting, which uses a boosting brush to tune the transmission coefficient in a selected region and generates the fogless image by solving Poisson equations, combining local operations including channel selection and local filtering. In this respect, least-squares filters [13, 14] and the patch-based method [15] have attracted great attention in visibility enhancement. But this de-fogging method needs the user to manually adjust the local scattering coefficient and scene depth. In subsequent years, the four approaches in [16–21] were proposed for single-image dehazing without the need for any extra information.
The algorithm in [16] is based on color information under the assumption that the surface shading and transmission functions are locally uncorrelated; this method can only be applied to color images. The algorithms in [17, 18, 20] can be applied to both gray and color images, but they are computationally intensive. The results of the algorithm in [17] depend on the scene saturation, and both of the algorithms in [18, 20] introduce the dark channel prior to remove haze, but sometimes produce very serious color distortion and poor results. Tarel et al. [19, 21] then proposed a visibility restoration method with low complexity for gray and color images, which adopts white balance, gamma correction, and tone mapping to maintain color fidelity. However, some degraded images processed by the algorithm in [19, 21] still show a high level of residual haze.

Inspired by a multilayer image formed from several types of measurement on the same detection area covered by a single pixel [11, 22], a hazy image can be seen as a combination of a multilayer structure consisting of the clean attenuated component image and the haze layer, which mainly result from medium absorption and atmospheric scattering, respectively. Since the atmospheric veil has almost no specific edges or textures, we may adopt an image filtering approach to estimate the haze layer from the hazy image. In this article, differently from the existing visibility restoration methods in previous studies, and with the help of a dimension-reduction technique, the proposed method uses an image filtering approach consisting of the median filter and the truncated singular value decomposition to estimate the atmospheric veil, together with the dark channel prior, to restore the haze-free image. A comparison of the proposed approach with the state-of-the-art methods is also presented.

The rest of this article is organized as follows. In the “Image degradation model” section, we briefly review the outdoor degraded bilayer image model. We adopt this model for visibility enhancement from a single hazy image in “The image filtering approach” section, where the proposed scheme of the image filtering approach using the low-rank technique to produce the haze-layer approximation is described. The experimental results and an analysis comparing the developed approach with the state-of-the-art methods are presented in the “Results and analysis” section, and the conclusion and future work are given in the last section.

## Image degradation model

In a hazy atmosphere, the observed image is widely modeled as

$$\mathbf{I}(x)=\mathbf{J}(x)t(x)+\mathbf{A}\left(1-t(x)\right)\qquad(1)$$

where *x* denotes pixel position, I∈ C^{M×N} is the observed image, **J** is the scene radiance, **A** is the global atmosphere light, and *t* is the medium transmission. **J**(*x*)*t*(*x*) is the direct attenuation term, representing the decay of scene radiation in the medium; **A**(1−*t*(*x*)) is the airlight term, describing the light scattered from atmospheric particles, which induces color distortion.

Recovering **J** from Equation (1) is ill-posed, since it involves both the unknown medium transmission *t* and global atmosphere light **A**. Therefore, the degraded image model (1) cannot directly be used to achieve contrast enhancement. So, we take the airlight term on the right-hand side of Equation (1) as the haze layer:

$$\mathbf{B}(x)=\mathbf{A}\left(1-t(x)\right)\qquad(2)$$

so that the observed image becomes

$$\mathbf{I}(x)=\mathbf{J}(x)t(x)+\mathbf{B}(x)\qquad(3)$$

and inverting Equation (3) yields the modified restoration model

$$\mathbf{J}(x)=\frac{\mathbf{I}(x)-\mathbf{B}(x)}{t(x)}\qquad(4)$$

As a consequence, instead of estimating the medium transmission coefficient *t*, the haze removal algorithm can be decomposed into several steps: inference of **B**(*x*) from **I**(*x*), estimation of **A**, and derivation of **J**(*x*) after inverting Equation (3).
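The bilayer model and its inversion can be sketched numerically as follows (a minimal illustration, not the authors' code; the transmission floor `t_min` is our own safeguard against division by zero):

```python
import numpy as np

def synthesize_hazy(J, t, A):
    """Equation (1): I = J*t + A*(1 - t)."""
    return J * t + A * (1.0 - t)

def remove_haze(I, B, A, t_min=0.1):
    """Equations (2)-(4): given the haze layer B = A*(1 - t),
    recover t = 1 - B/A and then J = (I - B) / t."""
    t = np.clip(1.0 - B / A, t_min, 1.0)
    return (I - B) / t

# Round-trip check on a synthetic scene.
rng = np.random.default_rng(0)
J = rng.uniform(0.0, 1.0, size=(8, 8))   # scene radiance
t = rng.uniform(0.3, 1.0, size=(8, 8))   # medium transmission
A = 1.0                                  # global atmosphere light
I = synthesize_hazy(J, t, A)
B = A * (1.0 - t)                        # the (here known) haze layer
J_hat = remove_haze(I, B, A)
print(np.allclose(J, J_hat))             # True
```

In practice **B** is of course unknown; the rest of the article is about estimating it from **I** alone.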

## The image filtering approach

Generally, the global atmosphere light **A** is assumed to be isotropic. Therefore, the estimation of the transmission coefficient *t* can be replaced by the estimation of the airlight term **A**(1−*t*(*x*)), also called the haze layer **B**(*x*), under the assumed condition of constant atmosphere light **A**. According to the mathematical derivation in the “Image degradation model” section, for the modified image degradation model in Equation (4), visibility enhancement consists of two steps. The first step of image restoration is to infer the haze layer **B**(*x*) at each pixel position *x*. From observation of mist, haze, or fog, haze density is proportional to scene depth. Owing to its physical properties, the haze layer satisfies two constraints: 0 ≤ *B*(*x*), and *B*(*x*) is not more than the minimal component of *I*(*x*). The whitened image **g**(*x*), taken as the minimal channel value at each pixel of the gray or color image, is derived as [19]:

$$\mathbf{g}(x)=\min_{c}\,I^{c}(x)$$

The coarse haze layer **F**(*x*) is then generated from the whitened image **g**(*x*) by the median filter as follows:

$$\mathbf{F}(x)=\underset{y\in \Omega(x)}{\operatorname{median}}\,\mathbf{g}(y)$$

where *Ω* denotes the local neighborhood at each pixel.
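This coarse estimate can be sketched as follows (our own illustration; the 15×15 window size comes from the experimental section, and the reflect padding at the borders is an assumption):

```python
import numpy as np

def whiten(I):
    """Channel-wise minimum at each pixel (gray images pass through)."""
    return I if I.ndim == 2 else I.min(axis=2)

def median_filter(g, size=15):
    """Sliding-window median with reflect padding: the coarse haze layer F."""
    r = size // 2
    gp = np.pad(g, r, mode="reflect")
    F = np.empty_like(g)
    H, W = g.shape
    for i in range(H):
        for j in range(W):
            F[i, j] = np.median(gp[i:i + size, j:j + size])
    return F

I = np.random.default_rng(1).uniform(size=(32, 32, 3))  # toy hazy image
F = median_filter(whiten(I), size=15)
```

The median filter smooths away fine texture while, unlike a Gaussian, largely preserving the large jumps that correspond to depth discontinuities.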

It is observed that the haze layer is smooth while retaining the depth changes. To remove the redundant image texture affecting the haze layer, we first explain how the low-rank technique can be applied to the coarse haze layer **F**(*x*) to extract the corrected haze layer. Let *F*_{i} be the *i*th sample in **F** and $\stackrel{\u0304}{\mathbf{f}}$ be a column-stacked representation of **F**(*x*), i.e., $\stackrel{\u0304}{\mathbf{f}}$ is a matrix of size *MN* × *L*, each of whose rows contains the $\sqrt{L}\times \sqrt{L}$ patch around the location of *F*_{i} in the image **F**. Its eigendecomposition satisfies

$${\stackrel{\u0304}{\mathbf{f}}}^{T}\stackrel{\u0304}{\mathbf{f}}=\mathbf{U}\,{\mathrm{\Sigma}}^{2}\,{\mathbf{U}}^{T}$$

where **U** is the matrix of eigenvectors derived from ${\stackrel{\u0304}{\mathbf{f}}}^{T}\stackrel{\u0304}{\mathbf{f}}$, $\mathrm{\Sigma}=\text{diag}\left\{{\sigma}_{1},{\sigma}_{2},\dots,{\sigma}_{r}\right\}$, $r=\text{rank}\left({\stackrel{\u0304}{\mathbf{f}}}^{T}\stackrel{\u0304}{\mathbf{f}}\right)$, and the diagonal singular values satisfy *σ*_{1} ≥ *σ*_{2} ≥ ⋯ ≥ *σ*_{r} > 0.

By projecting onto the first *K* eigenvectors in **U**, the reformed matrix $\widehat{\mathbf{f}}$ is obtained, where the *K* eigenvectors are chosen based on their eigenvalues. Since the parameter *K* should be both large enough to fit the characteristics of the data and small enough to filter out the non-relevant noise and redundancy, the *K* largest values are selected by parallel analysis (PA) with Monte Carlo simulation. Much of the literature [23, 24] shows that PA is one of the most successful methods for determining the number of true principal components. In our algorithm, without assuming a given random distribution, we generate the artificial data by randomly permuting each element across each patch in the image **F**. The improved PA with Monte Carlo simulation then estimates the data dimensionality by retaining the components for which

$${\sigma}_{p}>{\alpha}_{p}$$

where *σ*_{p} and *α*_{p} are the singular values of the raw image $\stackrel{\u0304}{\mathbf{f}}$ and of the simulated data, respectively. The intuition is that *α*_{p} is a threshold for *σ*_{p} below which the *p*th component is judged to have occurred due to chance. It is currently recommended to use the singular values corresponding to a given percentile, such as the 95th, of the distribution of singular values derived from the random data. From the diagonal singular values, *K* (*K* ≪ *r*) principal components are retained as *σ*_{i} (*i* = 1,2,…,*K*).
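The selection of *K* can be sketched as follows (our own illustration of PA with a permutation-based null model; we permute within each column, a standard PA variant, and the number of Monte Carlo runs is an arbitrary choice, while the 95th-percentile threshold follows the recommendation above):

```python
import numpy as np

def parallel_analysis(f_bar, n_runs=20, percentile=95, seed=0):
    """Estimate the number K of true principal components of the
    patch matrix f_bar (MN x L) by comparing its singular values
    with those of randomly permuted surrogate data."""
    rng = np.random.default_rng(seed)
    sigma = np.linalg.svd(f_bar, compute_uv=False)
    null = np.empty((n_runs, sigma.size))
    for run in range(n_runs):
        # Permute each column independently: this destroys the
        # correlations while keeping the marginal distributions.
        perm = np.column_stack(
            [rng.permutation(f_bar[:, j]) for j in range(f_bar.shape[1])])
        null[run] = np.linalg.svd(perm, compute_uv=False)
    alpha = np.percentile(null, percentile, axis=0)  # chance thresholds
    return int(np.sum(sigma > alpha))                # keep sigma_p > alpha_p

# Rank-2 signal plus weak noise: PA should keep only a few components.
rng = np.random.default_rng(1)
u, v = rng.normal(size=16), rng.normal(size=16)
f_bar = (np.outer(rng.normal(size=500), u)
         + np.outer(rng.normal(size=500), v)
         + 0.01 * rng.normal(size=(500, 16)))
K = parallel_analysis(f_bar)
```

Components whose singular value does not exceed the chance threshold *α*_{p} are attributed to texture and noise and discarded.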

These principal components decompose the image into a sum of components from the primary to the secondary. To remove residual image textures, we develop a novel filtering scheme based on projection onto the signal subspace spanned by the first *K* eigenvectors, with noise removal and texture reduction. The straightforward way to restore a corrected haze layer is to directly project the *MN* × *L* matrix $\stackrel{\u0304}{\mathbf{f}}$ onto the subspace spanned by the top *K* eigenvectors. The projected weight matrix on the signal subspace basis is computed with the matrix left-division operator ∖, which, for *l*,*p* = 1,2,…,*L*, returns a basic solution with *K* non-zero components, where *K* is less than the rank of the eigenvector matrix **U**. The projected matrix $\widehat{\mathbf{f}}$ is then reconstructed from the weighted subspace basis, for *i* = 1,2,…,*MN*; *p* = 1,2,…,*L*; and *l* = 1,2,…,*K*. Then, through the overlap averaging scheme, the reorganized reconstructed refined haze-layer approximation **V**_{c} is reshaped by averaging, at each pixel, the contributions of all overlapping patches, where *N*_{x} is the number of times a pixel is used in the patch stacks for the whole image, and the variables *p*, *L* are defined as above.
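A compact sketch of this projection-and-overlap-averaging step (our own illustration; because the eigenvectors are orthonormal, the left division reduces to multiplication by the transposed basis, and the dense patch grid is an assumption):

```python
import numpy as np

def extract_patches(F, w):
    """Stack every w x w patch of F as a row of f_bar."""
    H, W = F.shape
    rows = [F[i:i + w, j:j + w].ravel()
            for i in range(H - w + 1) for j in range(W - w + 1)]
    return np.asarray(rows)

def lowrank_overlap_average(F, w=5, K=3):
    """Project patches onto the top-K eigenvectors, then reassemble the
    refined haze layer V_c by averaging overlapping patch pixels."""
    H, W = F.shape
    f_bar = extract_patches(F, w)
    # Eigenvectors of f_bar^T f_bar are the right singular vectors of f_bar.
    _, _, Vt = np.linalg.svd(f_bar, full_matrices=False)
    Uk = Vt[:K].T                        # L x K orthonormal signal basis
    f_hat = f_bar @ Uk @ Uk.T            # projection onto signal subspace
    V = np.zeros((H, W))
    N = np.zeros((H, W))                 # N_x: per-pixel usage count
    idx = 0
    for i in range(H - w + 1):
        for j in range(W - w + 1):
            V[i:i + w, j:j + w] += f_hat[idx].reshape(w, w)
            N[i:i + w, j:j + w] += 1.0
            idx += 1
    return V / N                         # overlap averaging

# A smooth ramp (depth-like structure) survives the low-rank projection.
F = np.tile(np.linspace(0.2, 0.8, 20), (20, 1))
Vc = lowrank_overlap_average(F, w=5, K=3)
```

Smooth, depth-like structure lies in the span of the leading eigenvectors and passes through unchanged, while fine texture falls in the discarded components.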

The final haze layer is then taken as a fraction of the refined approximation, **B**(*x*) = *q* **V**_{c}(*x*), where the parameter *q* ∈ (0,1) controls the strength of the haze removal.

The atmosphere light **A** is usually estimated from the pixels with the densest haze. However, the brightest pixels may belong to white objects. In our approach, the dark channel prior in [18, 20] is therefore also used to improve the approximation of the atmosphere light. For simplicity, let B_{a} = *B*(*x*)/**A**. Considering that the result of the division *B*(*x*)/**A** may exceed one, normalization is necessary. Therefore, with B_{a} normalized into [0, 1], the medium transmission coefficient is calculated as *t*(*x*) = 1 − B_{a}(*x*).

Through substituting Equations (14) and (15) into Equation (4), the final haze-free image *J*(*x*) after contrast enhancement is acquired from a single hazy image.
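The final restoration step can be sketched as follows (our own illustration; estimating **A** from the haziest pixels of the dark/whitened channel and the transmission floor `t_min` are assumptions, not the paper's exact procedure):

```python
import numpy as np

def estimate_A(I, B, top=0.001):
    """Pick A from the pixels with the densest haze (largest B)."""
    g = I if I.ndim == 2 else I.min(axis=2)  # dark/whitened channel
    n = max(1, int(top * B.size))
    idx = np.argsort(B.ravel())[-n:]         # haziest pixels
    return float(g.ravel()[idx].max())

def restore(I, B, A, t_min=0.1):
    """t = 1 - B_a with B_a = B/A clipped into [0,1]; then J = (I - B)/t."""
    Ba = np.clip(B / A, 0.0, 1.0)
    t = np.clip(1.0 - Ba, t_min, 1.0)
    if I.ndim == 3:
        t = t[..., None]
        B = B[..., None]
    return (I - B) / t

# Round-trip on a synthetic color scene.
rng = np.random.default_rng(2)
J = rng.uniform(0.0, 0.8, size=(16, 16, 3))
t = rng.uniform(0.4, 1.0, size=(16, 16))
A = 1.0
I = J * t[..., None] + A * (1.0 - t[..., None])
J_hat = restore(I, A * (1.0 - t), A)
print(np.allclose(J, J_hat))                 # True
```

The floor on *t* prevents the division from amplifying noise in the most heavily veiled regions.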

## Results and analysis

In all experiments, the parameter *q* = 0.90, the sliding window size of the median filter is 15×15 pixels, and the patch size *L* = 25. Table 1 compares the computation time of the proposed algorithm with the methods in the literature [20, 21], using Matlab version 7.8 on a Pentium(R) Dual-Core CPU E5800 @ 3.20 GHz with 2-GB cache, for 300 × 200 and 250 × 400 test images, respectively. Compared with the state-of-the-art methods, the proposed algorithm based on the image filtering approach has a slightly longer computation time than that of [21], and a much shorter one than that of [20]. Figure 1 shows the mist removal results of the current excellent methods [20, 21] and the proposed algorithm for visibility enhancement from a single degraded gray image. The resolution improvement and contrast enhancement for three color misty images with different haze densities, containing different scenes such as road, tree, car, and house, are demonstrated in Figures 2 and 3, where the images restored by the proposed algorithm are compared with those of the authors of [20, 21]. Figure 4 demonstrates a comparison of the visibility enhancement results from a color image with high-density haze between the proposed algorithm and the two popular methods [20, 21]. Besides computation speed and visual effect, several objective evaluation criteria are also used to analyze the experimental results. The evaluation criteria in [19], including the three indicators $e,\stackrel{\u0304}{r},$ and *H*, which denote the rate of newly visible edges after restoration, the average visibility enhancement, and the image information entropy, are used here to compare two gray-level or color images: the input image and the restored image. A quantitative evaluation and comparative study of He et al. [20], Tarel et al. [21], and our algorithm has also been carried out on three test images in this experiment. Table 2 shows that the proposed method gives similar or better quality results than the other two popular algorithms. For these assessment indicators, a higher value means greater visibility of the restored image and a better dehazing effect. However, according to the visual results shown in Figures 1, 2, 3 and 4, we find that these objective metrics, i.e., $e,\stackrel{\u0304}{r}$ and *H* introduced in [19], are not exactly consistent with human subjective perception. The proposed method works well for a wide variety of outdoor foggy images and can remove more haze and restore clearer images with more details. As seen from the experimental results, the proposed algorithm, with less computation time, is comparable with the state-of-the-art methods and can even be more effective at haze removal.
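Of the three indicators, the image information entropy *H* is straightforward to reproduce (a minimal sketch based on the standard Shannon entropy of the gray-level histogram; *e* and $\stackrel{\u0304}{r}$ require the visible-edge analysis of [19] and are not shown):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy H of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty bins (0*log 0 := 0)
    return float(-(p * np.log2(p)).sum())

flat = np.full((64, 64), 0.5)             # constant image: no information
noisy = np.random.default_rng(3).uniform(size=(64, 64))
print(entropy(flat))                      # 0.0
print(entropy(noisy) > entropy(flat))     # True
```

A dehazed image with recovered detail spreads its intensities over more gray levels, which is what a higher *H* rewards.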

**Table 2** Rate *e* of new visible edges, mean ratio $\stackrel{\u0304}{r}$ of the gradients at visible edges, and the image information entropy *H* for these methods on three test images

## Conclusion and future work

We have proposed a simple but powerful algorithm based on median filtering with a low-rank technique for visibility enhancement from a single hazy image. We analyzed and compared the experimental results in terms of visual effect, speed, and objective evaluation criteria; through this comparison, we demonstrated the advantages and disadvantages of these methods. Since the computational complexity of the low-rank technique is low, the proposed approach to haze removal is fast, and can even achieve better results than the state-of-the-art methods in single-image dehazing.

However, the proposed approach may not work well for far scenes with heavy fog and large depth jumps: the restored image shows halos or residual haze at depth discontinuities, as can be observed in the experimental results. Another shortcoming is the inability to obtain the actual value of the global atmosphere light **A**. To overcome these limitations of the current method, we intend to incorporate a better edge-preserving image filtering method with low complexity, together with other techniques, in future research.

## Declarations

### Acknowledgements

The authors would like to thank the anonymous editor and reviewers for their generous help. This work was supported by the National Basic Research Program (973 Program) of China under Contract No. 2009CB320907, the National Natural Science Foundation of China under Contract No. 61201442, and the Doctoral Fund of the Ministry of Education of China under Contract No. 20110001120117.

## Authors’ Affiliations

## References

1. Pizer SM, et al.: Adaptive histogram equalization and its variations. *Comput. Vis. Graph. Image Process.* 1987, 39: 355-368. doi:10.1016/S0734-189X(87)80186-X
2. Stark JA: Adaptive image contrast enhancement using generalizations of histogram equalization. *IEEE Trans. Image Process.* 2000, 9(5): 889-896. doi:10.1109/83.841534
3. Rahman Z, Jobson DJ, Woodell GA: Retinex processing for automatic image enhancement. *J. Electron. Imaging* 2004, 13(1): 100-110. doi:10.1117/1.1636183
4. Scheunders P: A multivalued image wavelet representation based on multiscale fundamental forms. *IEEE Trans. Image Process.* 2002, 10(5): 568-575.
5. Oakley JP, Satherley BL: Improving image quality in poor visibility conditions using a physical model for contrast degradation. *IEEE Trans. Image Process.* 1998, 7(2): 167-179. doi:10.1109/83.660994
6. Tan KK, Oakley JP: Physics-based approach to color image enhancement in poor visibility conditions. *J. Opt. Soc. Am. A* 2001, 18(10): 2460-2467. doi:10.1364/JOSAA.18.002460
7. Tan KK, Oakley JP: Enhancement of color image in poor visibility conditions. In *IEEE International Conference on Image Processing (ICIP 2000)*, Vancouver, Canada, Sept 2000: 788-791.
8. Narasimhan SG, Nayar SK: Contrast restoration of weather degraded images. *IEEE Trans. Pattern Anal. Mach. Intell.* 2003, 25(6): 713-724. doi:10.1109/TPAMI.2003.1201821
9. Narasimhan SG, Nayar SK: Vision and the atmosphere. *Int. J. Comput. Vis.* 2002, 48(3): 233-254. doi:10.1023/A:1016328200723
10. Schechner YY, Narasimhan SG, Nayar SK: Polarization based vision through haze. *Appl. Opt.* 2003, 42(3): 511-525. doi:10.1364/AO.42.000511
11. Pandian PS, Kumaravel M, Singh M: Multilayer imaging and compositional analysis of human male breast by laser reflectometry and Monte Carlo simulation. *Med. Biol. Eng. Comput.* 2009, 47(11): 1197-1206. doi:10.1007/s11517-009-0531-3
12. Sun J, Jia J, Tang CK, Shum HY: Poisson matting. *ACM Trans. Graph.* 2004, 23(3): 315-321. doi:10.1145/1015706.1015721
13. Shao L, Zhang H, de Haan G: An overview and performance evaluation of classification-based least squares trained filters. *IEEE Trans. Image Process.* 2008, 17(10): 1772-1782.
14. Shao L, Wang J, Ihor OK, de Haan G: Quality adaptive least squares filters for compression artifacts removal using a no-reference block visibility metric. *J. Visual Commun. Image Represent.* 2011, 22(1): 23-32. doi:10.1016/j.jvcir.2010.09.007
15. Yan RM, Shao L, Cvetkovic SD, Klijn J: Improved nonlocal means based on pre-classification and invariant block matching. *IEEE/OSA J. Disp. Technol.* 2012, 8(4): 212-218.
16. Fattal R: Single image dehazing. *ACM Trans. Graph.* 2008, 27(3): 988-992.
17. Tan RT: Visibility in bad weather from a single image. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008)*, Anchorage, AK, USA, Jun 2008: 2347-2354.
18. He K, Sun J, Tang X: Single image haze removal using dark channel prior. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009)*, Miami Beach, FL, USA, Jun 2009: 1956-1963.
19. Tarel JP, Hautière N: Fast visibility restoration from a single color or gray level image. In *IEEE International Conference on Computer Vision (ICCV 2009)*, Kyoto, Japan, Oct 2009: 2201-2208.
20. He K, Sun J, Tang X: Single image haze removal using dark channel prior. *IEEE Trans. Pattern Anal. Mach. Intell.* 2011, 33(12): 2341-2353.
21. Tarel JP, Hautière N, Caraffa L, Cord A, Halmaoui H, Gruyer D: Vision enhancement in homogeneous and heterogeneous fog. *IEEE Intell. Transport. Syst. Mag.* 2012, 4(2): 6-20.
22. Hokmabadi MP, Rostami A: Novel TMM for analyzing evanescent waves and optimized subwavelength imaging in a multilayer structure. *Optik* 2012, 123(2): 147-151. doi:10.1016/j.ijleo.2011.03.013
23. Horn JL: A rationale and test for the number of factors in factor analysis. *Psychometrika* 1965, 30(2): 179-185. doi:10.1007/BF02289447
24. Glorfeld LW: An improvement on Horn’s parallel analysis methodology for selecting the correct number of factors to retain. *Educ. Psychol. Meas.* 1995, 55(3): 377-393. doi:10.1177/0013164495055003002

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.