
Visibility enhancement using an image filtering approach

Abstract

Misty, foggy, or hazy weather conditions cause color distortion and reduce the resolution and contrast of observed objects in outdoor scene acquisition. In order to detect and remove haze, this article proposes a novel, effective algorithm for visibility enhancement from a single gray or color image. Since the haze can be considered to concentrate mainly in one component of the multilayer image, the haze-free image is reconstructed through haze layer estimation based on an image filtering approach using both the low-rank technique and an overlap averaging scheme. By applying parallel analysis with Monte Carlo simulation to the coarse atmospheric veil produced by the median filter, a refined smooth haze layer is acquired that contains less texture while retaining depth changes. With the dark channel prior, the normalized transmission coefficient is calculated to restore the fog-free image. Experimental results show that the proposed algorithm is a simple and efficient method for clarity improvement and contrast enhancement from a single foggy image. Moreover, it is comparable with state-of-the-art methods and in some cases produces better results.

Introduction

Visibility is the ability to see through air, independent of sunlight or moonlight. Clear, clean air affords better visibility than air polluted with dust particles or water droplets. A number of factors affect visibility, including precipitation, fog, mist, haze, smoke, and, in coastal areas, sea spray; these are generally composed principally of water droplets or particles whose size is not negligible relative to the wavelength of light. The difference between fog, mist, and haze can be quantified by the visibility distance. Visibility degradation is caused by the absorption and scattering of light by particles and gases in the atmosphere. Scattering by particulates impairs visibility more severely than absorption. Visibility is mainly reduced by scattering from particles between an observer and a distant object. Particles scatter light from the sun and the rest of the sky into the line of sight of the observer, thereby decreasing the contrast between the object and the background sky.

Images or videos acquired by surveillance, traffic, and remote sensing systems are often affected by poor visibility due to light scattering and absorption by atmospheric particles and water droplets. For the rest of this paper, the atmospheric particles and water droplets associated with mist, haze, fog, smog, and cloud are not distinguished, for convenience. Visibility enhancement methods for degraded outdoor images fall into two main categories. The first category is non-model-based methods, such as histogram equalization [1, 2], Retinex theory [3], and the wavelet transform [4]. However, these methods are less effective at maintaining color fidelity and also seriously affect regions that are already clear. The second category is model-based methods, which can achieve better results by modeling the scattering and absorption, but usually need additional assumptions about the imaging environment or imaging system, such as scene depth [5–7] or multiple images [8–11]. Nonetheless, when these assumptions are not accurate, the effectiveness is greatly compromised. Consequently, visibility enhancement using a single image for haze removal has become a key focus of ongoing studies in image restoration in recent years. Recently, Sun et al. [12] provided a de-fogging method based on Poisson matting, which uses a boosting brush to tune the transmission coefficient in a selected region and generates the fog-free image by solving Poisson equations, combining local operations including channel selection and local filtering. In this respect, least-squares filters [13, 14] and the patch-based method [15] have attracted great attention in visibility enhancement. However, this de-fogging method needs the user to manually adjust the local scattering coefficient and scene depth. In subsequent years, the approaches in [16–21] were proposed for single-image dehazing without the need for any extra information. The algorithm in [16] is based on color information under the assumption that the surface shading and transmission functions are locally uncorrelated; this method can only be applied to color images. The algorithms in [17, 18, 20] can be applied to both gray and color images, but they are computationally intensive. The results of the algorithm in [17] depend on the scene saturation, and the algorithms in [18, 20] introduce the dark channel prior to remove haze but sometimes produce serious color distortion and poor results. Then, Tarel et al. [19, 21] proposed a low-complexity visibility restoration method for gray and color images, which adopts white balance, gamma correction, and tone mapping to maintain color fidelity. However, the results of some degraded images processed by the algorithms in [19, 21] still show a high level of residual haze.

Inspired by a multilayer image formed from several types of measurement on the same detection area covered by a single pixel [11, 22], the hazy image can be seen as a combination of a multilayer structure consisting of the clean attenuated component image and the haze layer, which result mainly from medium absorption and atmospheric scattering, respectively. Since the atmospheric veil has almost no specific edges or textures, we may adopt an image filtering approach to estimate the haze layer from the hazy image. In this article, differently from the existing visibility restoration methods, with the help of a dimension reduction technique, the proposed method utilizes an image filtering approach consisting of the median filter and the truncated singular value decomposition to estimate the atmospheric veil, together with the dark channel prior, to restore the haze-free image. A comparison of the proposed approach with state-of-the-art methods is also presented.

The rest of this article is organized as follows. In the “Image degradation model” section, we briefly review the outdoor degraded bilayer image model. We adopt this model for visibility enhancement from a single hazy image in “The image filtering approach” section, where the proposed scheme of the image filtering approach using the low-rank technique to produce the haze layer approximation is described. The experimental results and an analysis of the comparison between the developed approach and state-of-the-art methods are shown in the “Results and analysis” section, and the conclusion and future work are given in the last section.

Image degradation model

Assuming that the atmosphere is homogeneous in space, the image degradation model under hazy weather conditions is often described as [16, 19, 20]:

$I(x) = J(x)\,t(x) + A\,(1 - t(x)) \qquad (1)$

where $x$ denotes the pixel position, $I \in C^{M \times N}$ is the observed image, $J$ is the scene radiance, $A$ is the global atmosphere light, and $t$ is the medium transmission. $J(x)t(x)$ is the direct attenuation term, representing the decay of the scene radiation in the medium; $A(1 - t(x))$ is the airlight term, describing the light scattered by atmospheric particles, which induces color distortion.

In fact, since the atmospheric layer depends on the object depths, visibility restoration is related to the estimation of the true colors of the objects, the haze properties, and the scene depth map. Due to the lack of scene structure or scene depth information in a single gray or color image, it is impossible to accurately distinguish the transmission coefficient $t$ and the global atmosphere light $A$. Therefore, the degradation model (1) cannot be used directly for contrast enhancement. So, we take the airlight term on the right-hand side of Equation (1) as the haze layer:

$B(x) = A\,(1 - t(x)) \qquad (2)$

Then Equation (1) can be written in another form:

$I(x) = J(x)\left(1 - \frac{B(x)}{A}\right) + B(x) \qquad (3)$

By inverting Equation (3), the final image after visibility enhancement is obtained as

$J(x) = \frac{I(x) - B(x)}{1 - B(x)/A} \qquad (4)$

As a consequence, instead of estimating the medium transmission coefficient t, the haze removal algorithm can be decomposed into several steps: inference of B(x) from I(x), estimation of A, and derivation of J(x) after inverting Equation (3).
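To make these steps concrete, the following sketch inverts Equation (3) once an estimate of the haze layer is available; the function name, the use of NumPy, and the small floor on the denominator are illustrative choices rather than part of the original formulation.

```python
import numpy as np

def restore_from_haze_layer(I, B, A, t_min=0.1):
    """Invert Equation (3): J(x) = (I(x) - B(x)) / (1 - B(x)/A).

    I : observed hazy image as a float array in [0, 1]
    B : estimated haze layer (atmospheric veil), broadcastable to I
    A : global atmosphere light (a scalar here, for simplicity)
    t_min : illustrative floor on the denominator to avoid amplifying noise
    """
    denom = np.maximum(1.0 - B / A, t_min)  # 1 - B(x)/A, kept away from zero
    J = (I - B) / denom                     # Equation (4)
    return np.clip(J, 0.0, 1.0)             # keep the radiance in a valid range
```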

The image filtering approach

From the multilayer image model (1), each pixel of the observed image $I$ is composed of two components: the scene radiance and the airlight. Assume that the global atmosphere light $A$ is isotropic. Therefore, the estimation of the transmission coefficient $t$ can be replaced by estimation of the airlight term $A(1 - t(x))$, also called the haze layer $B(x)$, under the assumption of a constant atmosphere light $A$. According to the derivation in the “Image degradation model” section, for the modified image degradation model in Equation (4), visibility enhancement proceeds in two steps. The first step of image restoration is to infer the haze layer $B(x)$ at each pixel position $x$. Observation of mist, haze, or fog shows that the haze density is proportional to the scene depth. Owing to its physical properties, the haze layer has two constraints: $B(x) \ge 0$, and $B(x)$ is not greater than the minimal component of $I(x)$. The whiteness image, defined as the minimal channel value at each pixel of the gray or color image, is derived as [19]:

$g(x) = \min_{c} I^{c}(x) \qquad (5)$

Tarel and Hautiere [19] adopted a fast visibility restoration algorithm that uses the median filter to compute the atmospheric veil from the whiteness image. However, this algorithm brings about relatively severe atmospheric veil discontinuities. To tackle this problem, we propose a dimension reduction technique to correct the preliminary haze layer estimate. First, the median filter is used to calculate the coarse haze layer prediction $F(x)$ from the whiteness image $g(x)$ as follows:

$F(x) = \operatorname{median}_{\Omega}\big(g(x)\big) \qquad (6)$

where Ω denotes the local neighborhood at each pixel.
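A minimal sketch of Equations (5) and (6) is given below, assuming a floating-point image and SciPy's median filter; the 15 × 15 window matches the setting reported in the “Results and analysis” section, but the function name and interface are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def coarse_haze_layer(I, window=15):
    """Equations (5)-(6): whiteness image and coarse atmospheric veil.

    I : H x W (gray) or H x W x 3 (color) float image in [0, 1]
    window : side of the square median-filter neighborhood Omega
             (15 x 15 is the window size used in the experiments)
    """
    g = I if I.ndim == 2 else I.min(axis=2)   # Equation (5): per-pixel channel minimum
    F = median_filter(g, size=window)         # Equation (6): median over Omega
    return g, F
```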

It is observed that the haze layer is smooth while retaining the depth changes. To remove the redundant image texture affecting the haze layer, we first explain how the low-rank technique can be applied to the coarse haze layer $F(x)$ to extract the corrected haze layer. Let $F_i$ be the $i$th sample in $F$ and $f$ be a column-stacked representation of $F(x)$, i.e., $f$ is a matrix of size $MN \times L$, each row of which contains the $L \times L$ patch around the location of $F_i$ in the image $F$.

By removing the column mean (i.e., the mean over all $MN$ patch samples) from each entry, the difference matrix $\bar{f}$ is derived as

$\bar{f}(i,p) = f(i,p) - \frac{1}{MN}\sum_{i=1}^{MN} f(i,p) \qquad (7)$

To reduce calculation time, the matrix $\bar{f}^{T}\bar{f}$ can be decomposed in the following form:

$\bar{f}^{T}\bar{f} = U\,\Sigma^{2}\,U^{T} \qquad (8)$

where $U$ is the matrix of eigenvectors derived from $\bar{f}^{T}\bar{f}$, $\Sigma = \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)$ with $r = \operatorname{rank}(\bar{f}^{T}\bar{f})$, and the diagonal singular values satisfy $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$.
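The patch-matrix construction and the eigen-decomposition of Equations (7) and (8) can be sketched as follows. We assume each row of the $MN \times L$ matrix holds one vectorized square patch (so $L$ is the number of pixels per patch); this reading, along with the NumPy-based implementation and the function name, is illustrative rather than the authors' exact procedure.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def patch_matrix_and_basis(F, patch_side=5):
    """Equations (7)-(8): patch matrix, mean removal, and eigen-decomposition.

    F : coarse haze layer, H x W float array
    patch_side : side of the square patch; each row of the patch matrix is one
                 vectorized patch, so L = patch_side**2 (assumed layout)
    """
    pad = patch_side // 2
    Fp = np.pad(F, pad, mode="edge")                      # one patch per pixel
    patches = sliding_window_view(Fp, (patch_side, patch_side))
    f = patches.reshape(-1, patch_side * patch_side)      # MN x L patch matrix
    f_bar = f - f.mean(axis=0, keepdims=True)             # Equation (7): remove column means
    evals, U = np.linalg.eigh(f_bar.T @ f_bar)            # Equation (8): eigen-decomposition
    order = np.argsort(evals)[::-1]                       # sort eigenvalues in descending order
    sigma = np.sqrt(np.maximum(evals[order], 0.0))        # singular values of f_bar
    return f, f_bar, U[:, order], sigma
```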

After projecting $\bar{f}$ onto the new basis $U$, the transformed matrix $\hat{f}$ is

$\hat{f} = \bar{f}\,U \qquad (9)$

Therefore, the new axes are the eigenvectors of the correlation matrix of the original variables, which capture the similarities of the original variables based on how the data samples are projected onto them. If the eigenvalues are very small and the size of the image patch from a single hazy image is large enough, the less significant components can be ignored without losing much information. Only the first $K$ eigenvectors are chosen, based on their eigenvalues. The parameter $K$ should be large enough to fit the characteristics of the data and small enough to filter out non-relevant noise and redundancy; therefore, the $K$ largest values are selected by parallel analysis (PA) with Monte Carlo simulation. Several studies [23, 24] show that PA is one of the most successful methods for determining the number of true principal components. In our algorithm, without assuming a given random distribution, we generate artificial data by randomly permuting each element across each patch in the image $F$. The improved PA with Monte Carlo simulation then estimates the data dimensionality as

$K = \max\{\, p \mid 1 \le p \le r,\ \sigma_p > \alpha_p \,\} \qquad (10)$

where $\sigma_p$ and $\alpha_p$ are the singular values of the raw data $\bar{f}$ and of the simulated data, respectively. The intuition is that $\alpha_p$ is a threshold for $\sigma_p$ below which the $p$th component is judged to have occurred by chance. It is commonly recommended to use the singular value corresponding to a given percentile, such as the 95th, of the distribution of singular values derived from the random data. From the diagonal singular values, the $K$ ($K \le r$) principal components $\sigma_i$ ($i = 1, 2, \ldots, K$) are retained.
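A possible implementation of the parallel analysis step in Equation (10) is sketched below. The number of Monte Carlo permutations and the per-column permutation scheme are our assumptions about how the artificial data are generated; only the 95th-percentile threshold follows the text, and the function name is illustrative.

```python
import numpy as np

def parallel_analysis_k(f_bar, sigma, n_sim=20, percentile=95, rng=None):
    """Equation (10): choose K by parallel analysis with Monte Carlo simulation.

    f_bar : mean-removed MN x L patch matrix
    sigma : singular values of f_bar in descending order
    n_sim : number of Monte Carlo permutations (illustrative default)
    percentile : percentile of the simulated singular values used as alpha_p
    """
    rng = np.random.default_rng() if rng is None else rng
    MN, L = f_bar.shape
    sim_sigma = np.empty((n_sim, min(MN, L)))
    for s in range(n_sim):
        # destroy inter-pixel structure by permuting each column independently
        permuted = np.column_stack([rng.permutation(f_bar[:, p]) for p in range(L)])
        sim_sigma[s] = np.linalg.svd(permuted, compute_uv=False)
    alpha = np.percentile(sim_sigma, percentile, axis=0)   # threshold alpha_p
    keep = sigma > alpha[: len(sigma)]                     # components above chance level
    return int(np.nonzero(keep)[0].max()) + 1 if keep.any() else 1
```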

The eigenvectors of the matrix $\bar{f}$ can be used for multivariate analysis of the coarse layer $F$: the image is decomposed into a sum of components ordered from the primary to the secondary. To remove residual image textures, we develop a novel filtering scheme based on projection onto the signal subspace spanned by the first $K$ eigenvectors, which removes noise and reduces texture. The straightforward way to restore a corrected haze layer is to project the $MN \times L$ matrix $\bar{f}$ directly onto the subspace spanned by the top $K$ eigenvectors. The projected weight matrix on the signal subspace basis is

$W_{l,p} = \hat{f}_{\cdot,l} \,\backslash\, \bar{f}_{\cdot,p} \qquad (11)$

where $l, p = 1, 2, \ldots, L$, and the matrix left-division operator $\backslash$ returns a basic solution with $K$ non-zero components, $K$ being less than the rank of the eigenvector matrix $U$. The projected matrix is then reconstructed from the weighted subspace basis:

$R_{i,p} = \hat{f}_{i,l} \times W_{l,p} + \frac{1}{MN}\sum_{i=1}^{MN} f_{i,p} \qquad (12)$

where $i = 1, 2, \ldots, MN$; $p = 1, 2, \ldots, L$; and $l = 1, 2, \ldots, K$. Then, through the overlap averaging scheme, the reconstructed refined haze layer approximation $V_c$ is reshaped as

$V_c(x) = \frac{1}{N_x}\sum_{p=1}^{L} R_{i,p} \qquad (13)$

where $N_x$ is the number of times pixel $x$ is used in the patch stacks over the whole image, and the variables $p$ and $L$ are defined as above.
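Equations (11) through (13) can be sketched as follows, with the left division realized as a least-squares solve and the overlap averaging written explicitly for the assumed vectorized-patch layout; the helper name and bookkeeping details are illustrative.

```python
import numpy as np

def refined_haze_layer(f, f_bar, U, K, shape, patch_side=5):
    """Equations (11)-(13): subspace projection, reconstruction, overlap averaging.

    f, f_bar : raw and mean-removed MN x L patch matrices
    U        : L x L eigenvector matrix, columns sorted by eigenvalue
    K        : number of components retained by parallel analysis
    shape    : (H, W) of the coarse haze layer
    patch_side : patch side length, so L = patch_side**2 (assumed layout)
    """
    H, W = shape
    f_hat = f_bar @ U[:, :K]                              # Equation (9), truncated to K columns
    Wgt, *_ = np.linalg.lstsq(f_hat, f_bar, rcond=None)   # Equation (11): left division as least squares
    R = f_hat @ Wgt + f.mean(axis=0, keepdims=True)       # Equation (12): reconstruct, add back means
    # Equation (13): overlap-average the reconstructed patches into an image
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    pad = patch_side // 2
    for p in range(patch_side * patch_side):
        oy, ox = p // patch_side - pad, p % patch_side - pad  # offset of column p from the patch centre
        pred = R[:, p].reshape(H, W)                          # value each patch predicts at that offset
        y0, y1 = max(0, oy), min(H, H + oy)
        x0, x1 = max(0, ox), min(W, W + ox)
        acc[y0:y1, x0:x1] += pred[y0 - oy:y1 - oy, x0 - ox:x1 - ox]
        cnt[y0:y1, x0:x1] += 1.0
    return acc / cnt                                      # V_c, the refined haze layer approximation
```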

Since the strength of the image degradation caused by haze is less than the minimum intensity of the image pixels, the final revised atmospheric veil can be obtained as

$B(x) = q \cdot \max\big(V_c(x),\, 0\big) \qquad (14)$

where the parameter $q \in (0, 1)$.

In single-image dehazing applications, the global atmosphere light $A$ is usually estimated from the pixels with the densest haze. However, the brightest pixels may belong to white objects. In our approach, the dark channel prior in [18, 20] is also used to improve the approximation of the atmosphere light. For simplicity, let $B_a(x) = B(x)/A$. Since the result of the division $B(x)/A$ may exceed one, normalization is necessary. Therefore, the medium transmission coefficient is calculated as

$t(x) = 1 - q \cdot \frac{B_a(x)}{\max_{x} B_a(x)} \qquad (15)$

By substituting Equations (14) and (15) into Equation (4), the final haze-free image $J(x)$ after contrast enhancement is obtained from a single hazy image.
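Finally, a sketch of Equations (14) and (15) followed by the restoration step is given below; treating $A$ as a scalar, applying a small lower bound on $t(x)$, and restoring via $J(x) = (I(x) - A(1 - t(x)))/t(x)$ (equivalent to Equation (4) for $B = A(1 - t)$) are our simplifications, and the function name is illustrative.

```python
import numpy as np

def dehaze(I, V_c, A, q=0.9, t_min=0.1):
    """Equations (14)-(15) and the final restoration.

    I   : observed hazy image, H x W or H x W x 3 float array in [0, 1]
    V_c : refined haze-layer approximation from the overlap averaging step
    A   : global atmosphere light estimated with the dark channel prior
          (a scalar here, for simplicity)
    q   : attenuation factor in (0, 1); q = 0.90 is the reported setting
    t_min : illustrative lower bound on the transmission
    """
    B = q * np.maximum(V_c, 0.0)              # Equation (14): final atmospheric veil
    B_a = B / A
    t = 1.0 - q * B_a / B_a.max()             # Equation (15): normalized transmission
    t = np.maximum(t, t_min)
    if I.ndim == 3:                           # broadcast over the color channels
        t = t[..., None]
    J = (I - A * (1.0 - t)) / t               # invert the degradation model (Equations (1)/(4))
    return np.clip(J, 0.0, 1.0)
```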

Results and analysis

Degraded gray and color images caused by haze were used for both the subjective and the objective evaluation of the dehazing performance of the proposed algorithm, based on the image filtering approach using the low-rank technique, against the state-of-the-art methods [20, 21]. For different types of test images, numerous restoration experiments were carried out and the results compared to validate the proposed method. In our experimental setting, the parameter q = 0.90, the sliding window of the median filter is 15 × 15 pixels, and the patch size L = 25. Table 1 compares the computation time of the proposed algorithm and the methods in [20, 21], using Matlab 7.8 on a Pentium(R) Dual-Core CPU E5800 @ 3.20 GHz with 2-GB cache, for 300 × 200 and 250 × 400 test images, respectively. Compared with the state-of-the-art methods, the proposed algorithm has a slightly longer computation time than that of [21] and a much shorter computation time than that of [20].

Figure 1 shows the mist removal results of the current leading methods [20, 21] and the proposed algorithm for visibility enhancement from a single degraded gray image. The resolution improvement and contrast enhancement achieved by the proposed method on three color misty images with different haze densities, containing scenes such as roads, trees, cars, and houses, are demonstrated in Figures 2 and 3, where the images restored by the proposed algorithm are compared with those of [20, 21]. Figure 4 presents a comparison of visibility enhancement results on a color image with high-density haze between the proposed algorithm and the two popular methods [20, 21].

In addition to computation speed and visual effect, several objective evaluation criteria are used to analyze the experimental results. The evaluation criteria in [19], comprising three indicators $e$, $\bar{r}$, and $H$, which denote the rate of newly visible edges after restoration, the average visibility enhancement, and the image information entropy, respectively, are used here to compare two gray-level or color images: the input image and the restored image. The quantitative evaluation and comparative study of He et al. [20], Tarel et al. [21], and our algorithm were also carried out on three test images in this experiment. Table 2 shows that the proposed method achieves similar or better quality than the other two popular algorithms. For these assessment indicators, higher values mean greater visibility of the restored image and a better dehazing effect. However, according to the visual results shown in Figures 1, 2, 3 and 4, these objective metrics, i.e., $e$, $\bar{r}$, and $H$ introduced in [19], are not entirely consistent with human subjective perception. The proposed method works well for a wide variety of outdoor foggy images and can remove more haze and restore clearer images with more details. As seen from the experimental results, the proposed algorithm, with less computation time, is comparable with the state-of-the-art methods and in some cases achieves better haze removal.

Figure 1

Visual comparison of the dehazing results of these methods for a misty gray image. The methods are [20, 21] and the proposed algorithm using the image filtering approach. (a) Original image, (b) [20], (c) [21], (d) the proposed algorithm.

Figure 2

Visual comparison of the dehazing results of these methods for a foggy color image. The methods are [20, 21] and the proposed algorithm using the image filtering approach. (a) Original image, (b) [20], (c) [21], (d) the proposed algorithm.

Figure 3

Visual comparison of the dehazing results of these methods for a color image with heavy fog. The methods are [20, 21] and the proposed algorithm using the image filtering approach. (a) Original image, (b) [20], (c) [21], (d) the proposed algorithm.

Figure 4

Visual comparison of the dehazing results of these methods for a color image with dense fog. The methods are [20, 21] and the proposed algorithm using the image filtering approach. (a) Original image, (b) [20], (c) [21], (d) the proposed algorithm.

Table 1 Comparison of computation time between [20, 21] and the proposed algorithm (unit: seconds)
Table 2 Rate $e$ of new visible edges, mean ratio $\bar{r}$ of the gradients at visible edges, and the image information entropy $H$ for these methods on three test images

Conclusion and future work

We have analyzed and compared the experimental results in terms of visual effect, speed, and objective evaluation criteria. Through this comparison, we have demonstrated the advantages and disadvantages of these methods. We have proposed a simple but powerful algorithm based on median filtering and the low-rank technique for visibility enhancement from a single hazy image. Since the computational complexity of the low-rank technique is low, the proposed approach to haze removal is fast and can even achieve better results than the state-of-the-art methods in single-image dehazing.

However, the proposed approach may not work well for distant scenes with heavy fog and large depth jumps. The restored image may contain halos or residual haze at depth discontinuities, as can be observed in the experimental results. Another shortcoming is that the actual value of the global atmosphere light $A$ cannot be obtained. To overcome these limitations of the current method, we intend to incorporate a better low-complexity edge-preserving image filtering method and other techniques in future research.

References

  1. Pizer SM, et al.: Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process 1987, 39: 355-368. 10.1016/S0734-189X(87)80186-X


  2. Stark JA: Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Trans. Image Process 2000, 9(5):889-896. 10.1109/83.841534


  3. Rahman Z, Jobson DJ, Woodell GA: Retinex processing for automatic image enhancement. J. Electron. Imaging 2004, 13(1):100-110. 10.1117/1.1636183


  4. Scheunders P: A multivalued image wavelet representation based on multiscale fundamental forms. IEEE Trans. Image Process 2002, 10(5):568-575.


  5. Oakley JP, Satherley BL: Improving image quality in poor visibility conditions using a physical model for contrast degradation. IEEE Trans. Image Process 1998, 7(2):167-179. 10.1109/83.660994


  6. Tan KK, Oakley JP: Physics-based approach to color image enhancement in poor visibility conditions. J. Opt. Soc. Am. A 2001, 18(10):2460-2467. 10.1364/JOSAA.18.002460


  7. Tan KK, Oakley JP: Enhancement of color image in poor visibility conditions. In IEEE International Conference on Image Processing (ICIP 2000). Vancouver, Canada; Sept 2000:788-791.


  8. Narasimhan SG, Nayar SK: Contrast restoration of weather degraded images. IEEE Trans. Pattern Anal. Mach. Intell 2003, 25(6):713-724. 10.1109/TPAMI.2003.1201821


  9. Narasimhan SG, Nayar SK: Vision and the atmosphere. Int. J. Comput. Vis 2002, 48(3):233-254. 10.1023/A:1016328200723


  10. Schechner YY, Narasimhan SG, Nayar SK: Polarization based vision through haze. Appl. Opt 2003, 42(3):511-525. 10.1364/AO.42.000511


  11. Pandian PS, Kumaravel M, Singh M: Multilayer imaging and compositional analysis of human male breast by laser reflectometry and Monte Carlo simulation. Med. Biol. Eng. Comput 2009, 47(11):1197-1206. 10.1007/s11517-009-0531-3


  12. Sun J, Jia J, Tang CK, Shum HY: Poisson matting. ACM Trans Graph 2004, 23(3):315-321. 10.1145/1015706.1015721


  13. Shao L, Zhang H, de Haan G: An overview performance evaluation of classification based least squares trained filters. IEEE Trans. Image Process 2008, 17(10):1772-1782.


  14. Shao L, Wang J, Ihor OK, de Haan G: Quality adaptive least squares filters for compression artifacts removal using a no-reference block visibility metric. J. Visual Commun. Image Represent 2011, 22(1):23-32. 10.1016/j.jvcir.2010.09.007


  15. Yan RM, Shao L, Cvetkovic SD, Klijn J: Improved nonlocal means based on pre-classification and invariant block matching. IEEE/OSA J. Disp. Technol 2012, 8(4):212-218.


  16. Fattal R: Single image dehazing. ACM Trans. Graph 2008, 27(3):988-992.


  17. Tan RT: Visibility in bad weather from a single image. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008). Anchorage, AK, USA; Jun 2008:2347-2354.


  18. He K, Sun J, Tang X: Single image haze removal using dark channel prior. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009). Miami Beach, FL, USA; Jun 2009:1956-1963.


  19. Tarel JP, Hautiere N: Fast visibility restoration from a single color or gray level image. In IEEE International Conference on Computer Vision (ICCV 2009). Kyoto, Japan; Oct 2009:2201-2208.


  20. He K, Sun J, Tang X: Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell 2011, 33(12):2341-2353.


  21. Tarel JP, Hautiere N, Caraffa L, Cord A, Halmaoui H, Gruyer D: Vision enhancement in homogeneous and heterogeneous fog. IEEE Intell. Transport. Syst. Mag 2012, 4(2):6-20.


  22. Hokmabadi MP, Rostami A: Novel TMM for analyzing evanescent waves and optimized subwavelength imaging in a multilayer structure. Optik 2012, 123(2):147-151. 10.1016/j.ijleo.2011.03.013


  23. Horn JL: A rationale and test for the number of factors in factor analysis. Psychometrika 1965, 30(2):179-185. 10.1007/BF02289447


  24. Glorfeld LW: An improvement on Horn's parallel analysis methodology for selecting the correct number of factors to retain. Educ. Psychol. Meas 1995, 55(3):377-393. 10.1177/0013164495055003002



Acknowledgements

The authors would like to thank the generous help of the anonymous editor and reviewers. This work was supported by National Basic Research Program (973 Program) of China under Contract No.2009CB320907, National Natural Science Foundation of China under contract No.61201442, and Doctoral Fund of Ministry of Education of China under contract No.20110001120117.

Author information

Corresponding author

Correspondence to Jin-Sheng Xiao.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Zhang, YQ., Ding, Y., Xiao, JS. et al. Visibility enhancement using an image filtering approach. EURASIP J. Adv. Signal Process. 2012, 220 (2012). https://doi.org/10.1186/1687-6180-2012-220

