From the multilayer image model (1), each pixel of the observed image I is composed of two components: the scene radiance and the airlight. Assume that the global atmospheric light A is isotropic. Under this assumption of constant atmospheric light A, the estimation of the transmission coefficient t can be replaced by the estimation of the airlight term A(1 − t(x)), also called the haze layer B(x). According to the mathematical derivation in the "Image degradation model" section, for the modified image degradation model in Equation (4), visibility enhancement proceeds in two steps. The first step of image restoration is to infer the haze layer B(x) at pixel position x. Observation of mist, haze, or fog shows that haze density is proportional to scene depth. Owing to its physical properties, the haze layer satisfies two constraints: 0 ≤ B(x), and B(x) is not more than the minimal component of I(x). The whitened image, defined as the channel-wise minimum at each pixel of a gray or color image, is derived as[19]

$$g(x) = \min_{c \in \{r,g,b\}} I_c(x). \tag{5}$$
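As a concrete illustration, the channel-minimum operation of Equation (5) can be sketched in a few lines of NumPy (the function name and the assumption that the image is a float array in [0, 1] are ours, not from the original text):

```python
import numpy as np

def whiten_image(I):
    """Channel-wise minimum at each pixel (Eq. 5): g(x) = min_c I_c(x)."""
    if I.ndim == 2:       # gray image: the minimum over one channel is the image
        return I.copy()
    return I.min(axis=2)  # minimum over the color channels of an H x W x C image
```

For a gray image the operation is the identity, so the same function covers both cases mentioned in the text.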
Tarel and Hautiere[19] adopted a fast visibility restoration algorithm that uses a median filter to compute the atmospheric veil from the whitened image. However, this algorithm introduces relatively severe discontinuities in the atmospheric veil. To tackle this problem, we propose a dimension-reduction technique to correct the preliminary haze layer estimate. First, the median filter is used to calculate the coarse haze layer prediction F(x) from the whitened image g(x) as follows:
$$F(x) = \operatorname*{median}_{y \in \Omega(x)} g(y), \tag{6}$$
where Ω denotes the local neighborhood at each pixel.
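A minimal sketch of Equation (6), assuming a square window for Ω and using SciPy's `median_filter` (the default window side is our assumption; the paper does not fix it here):

```python
import numpy as np
from scipy.ndimage import median_filter

def coarse_haze_layer(g, size=11):
    """Eq. (6): F(x) is the median of the whitened image g over the local
    neighborhood Omega around x; the window side `size` is an assumption."""
    return median_filter(g, size=size)
```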
It is observed that the haze layer is smooth while retaining the depth changes. To remove the redundant image texture affecting the haze layer, we first explain how the low-rank technique can be applied to the coarse haze layer F(x) to extract the corrected haze layer. Let $F_i$ be the i-th sample in F and $\mathbf{F}$ be a column-stacked representation of F(x), i.e., $\mathbf{F}$ is a matrix of size MN × L, each row of which contains the patch around the location of $F_i$ in the image F.
By removing the mean value from each row, the difference matrix $\bar{\mathbf F}$ is derived as

$$\bar{\mathbf F}_{il} = \mathbf F_{il} - \frac{1}{L}\sum_{l'=1}^{L} \mathbf F_{il'}. \tag{7}$$
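Patch stacking and row-mean removal can be sketched as follows (the patch side and the edge padding at the image border are assumptions of this sketch, not choices stated in the text):

```python
import numpy as np

def patch_matrix(F, patch=3):
    """Stack the patch around every pixel of F into the rows of an MN x L
    matrix and subtract each row's mean (Eq. 7). The patch side and the edge
    padding at the image border are assumptions of this sketch."""
    r = patch // 2
    Fp = np.pad(F, r, mode='edge')           # replicate borders
    M, N = F.shape
    L = patch * patch
    rows = np.empty((M * N, L))
    k = 0
    for i in range(M):
        for j in range(N):
            rows[k] = Fp[i:i + patch, j:j + patch].ravel()
            k += 1
    means = rows.mean(axis=1, keepdims=True)
    return rows - means, means               # difference matrix and row means
```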
To reduce calculation time, the matrix $\bar{\mathbf F}$ can be decomposed by singular value decomposition:

$$\bar{\mathbf F} = Q\,\Sigma\,U^{\mathsf T}, \tag{8}$$

where U is the matrix of eigenvectors derived from $\bar{\mathbf F}^{\mathsf T}\bar{\mathbf F}$, and Σ holds the diagonal singular values σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_r > 0. After the projection of $\bar{\mathbf F}$ onto the new basis U, the reformed matrix is

$$\tilde{\mathbf F} = \bar{\mathbf F}\,U. \tag{9}$$
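The decomposition and projection steps amount to a thin SVD; a minimal sketch (function name is ours):

```python
import numpy as np

def decompose(D):
    """Eqs. (8)-(9): thin SVD D = Q @ diag(s) @ U.T of the zero-mean patch
    matrix; the columns of U are the eigenvectors of D.T @ D and s holds the
    singular values in decreasing order. D @ U projects onto the new basis."""
    Q, s, Ut = np.linalg.svd(D, full_matrices=False)
    U = Ut.T
    return U, s, D @ U
```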
Therefore, the new axes are the eigenvectors of the correlation matrix of the original variables, which capture the similarities of the original variables based on how the data samples are projected onto them. If the eigenvalues are very small and the number of image patches from a single hazy image is large enough, the less significant components can be ignored without losing much information. Only the first K eigenvectors are chosen based on their eigenvalues. The parameter K should be large enough to fit the characteristics of the data, yet small enough to filter out non-relevant noise and redundancy; therefore, the K largest values are selected by parallel analysis (PA) with a Monte Carlo simulation. Many studies[23, 24] show that PA is one of the most successful methods for determining the number of true principal components. In our algorithm, without assuming a given random distribution, we generate the artificial data by randomly permuting the elements within each patch of the image F. The improved PA with Monte Carlo simulation estimates the data dimensionality as
$$K = \max\{\, p : \sigma_p > \alpha_p \,\}, \tag{10}$$

where σ_p and α_p are the singular values of the raw image and of the simulated data, respectively. The intuition is that α_p is a threshold for σ_p below which the p-th component is judged to have occurred by chance. It is currently recommended to use the singular value that corresponds to a given percentile, such as the 95th, of the distribution of singular values derived from the random data. From the diagonal singular values, the K (K ≪ r) principal components σ_i (i = 1, 2, …, K) are retained.
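The PA selection of Equation (10) can be sketched as follows; the number of simulations, the seed, and the per-row permutation scheme are assumptions of this sketch based on the description above:

```python
import numpy as np

def parallel_analysis_K(D, n_sim=20, percentile=95, seed=0):
    """Eq. (10): retain the components whose singular values exceed the
    `percentile`-th percentile of the singular values of permuted data.
    Each row (patch) is permuted independently; `n_sim` and `seed` are
    assumptions of this sketch."""
    rng = np.random.default_rng(seed)
    s = np.linalg.svd(D, compute_uv=False)
    sims = np.empty((n_sim, s.size))
    for t in range(n_sim):
        perm = np.array([rng.permutation(row) for row in D])
        sims[t] = np.linalg.svd(perm, compute_uv=False)
    alpha = np.percentile(sims, percentile, axis=0)  # 95th-percentile threshold
    return max(int(np.sum(s > alpha)), 1)
```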
The eigenvectors of the matrix $\bar{\mathbf F}^{\mathsf T}\bar{\mathbf F}$ can be used for multivariate analysis of the coarse layer F. The image is decomposed into a sum of components from the primary to the secondary. To remove residual image textures, we develop a novel filtering scheme based on projection onto the signal subspace spanned by the first K eigenvectors, with noise removal and texture reduction. The straightforward way to restore a corrected haze layer is to directly project the MN × L matrix $\bar{\mathbf F}$ onto the subspace spanned by the top K eigenvectors. The projected weight matrix on the signal-subspace basis is
$$\mathbf W^{\mathsf T} = U \backslash \bar{\mathbf F}^{\mathsf T}, \tag{11}$$

where l, p = 1, 2, …, L; the matrix left-division operator ∖ returns a basic solution with K non-zero components, where K is less than the rank of the eigenvector matrix U. The projected matrix is then reconstructed on the weighted subspace basis:
$$\hat{\mathbf F}_{ip} = \sum_{l=1}^{K} \mathbf W_{il}\, U_{pl}, \tag{12}$$

where i = 1, 2, …, MN; p = 1, 2, …, L; and l = 1, 2, …, K. Then, through the overlap averaging scheme, the reorganized reconstructed refined haze layer approximation V_c is reshaped as
$$V_c(x) = \frac{1}{N_x} \sum_{(i,p)\,:\,x \in \Omega_i} \hat{\mathbf F}_{ip}, \tag{13}$$

where N_x is the number of times pixel x is used in the patch stacks for the whole image, Ω_i denotes the patch of the i-th sample, and the variables p, L are defined as above.
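Equations (11)-(13) together form the refinement pipeline; a self-contained sketch under the same assumptions as earlier (patch side, edge padding, and a fixed K that would in practice come from Eq. 10):

```python
import numpy as np

def refine_haze_layer(F, patch=3, K=2):
    """Project each zero-mean patch of the coarse layer F onto the top-K
    eigenvectors (Eq. 11), reconstruct (Eq. 12), and overlap-average the
    patches back onto the image grid (Eq. 13). Patch side, edge padding, and
    the fixed K are assumptions of this sketch."""
    r = patch // 2
    M, N = F.shape
    Fp = np.pad(F, r, mode='edge')
    rows = np.empty((M * N, patch * patch))
    k = 0
    for i in range(M):
        for j in range(N):
            rows[k] = Fp[i:i + patch, j:j + patch].ravel()
            k += 1
    means = rows.mean(axis=1, keepdims=True)
    D = rows - means
    U = np.linalg.svd(D, full_matrices=False)[2].T  # eigenvector basis (Eq. 8)
    Uk = U[:, :K]
    W = D @ Uk                 # Eq. (11): for an orthonormal Uk this equals
                               # the least-squares solution (U_K \ D^T)^T
    Dhat = W @ Uk.T + means    # Eq. (12), with the row means restored
    acc = np.zeros_like(Fp)
    cnt = np.zeros_like(Fp)
    k = 0
    for i in range(M):
        for j in range(N):
            acc[i:i + patch, j:j + patch] += Dhat[k].reshape(patch, patch)
            cnt[i:i + patch, j:j + patch] += 1
            k += 1
    return (acc / cnt)[r:r + M, r:r + N]   # Eq. (13): overlap averaging
```

Because the top-K basis is orthonormal, the left division of Equation (11) reduces to a single matrix product, which is the design choice used here.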
Since the strength of the image degradation caused by haze is less than the minimum intensity of the image pixels, the final revised atmospheric veil can be obtained as

$$B(x) = \max\big(\min\big(q\,V_c(x),\; g(x)\big),\, 0\big), \tag{14}$$

where the parameter q ∈ (0, 1).
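A one-line sketch of the veil clipping as reconstructed in Equation (14) (the default q is our assumption):

```python
import numpy as np

def atmospheric_veil(Vc, g, q=0.95):
    """Eq. (14) as reconstructed here: scale the refined layer by q in (0, 1),
    cap it by the whitened image g, and keep it non-negative."""
    return np.maximum(np.minimum(q * Vc, g), 0.0)
```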
In single-image dehazing applications, the global atmospheric light A is usually estimated from the pixels with the densest haze. However, the brightest pixels may belong to white objects. In our approach, the dark channel prior[18, 20] is also used to improve the approximation of the atmospheric light. For simplicity, let B_a(x) = B(x)/A. Since the result of the division B(x)/A may exceed one, normalization is necessary. Therefore, the medium transmission coefficient is calculated as
$$t(x) = 1 - \frac{B_a(x)}{\max\big(\max_x B_a(x),\, 1\big)}. \tag{15}$$
By substituting Equations (14) and (15) into Equation (4), the final haze-free image J(x) after contrast enhancement is acquired from the single hazy image.
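Assuming the modified model of Equation (4) takes the form I(x) = J(x)·t(x) + B(x), which is consistent with B(x) = A(1 − t(x)), the final recovery step can be sketched as follows (the transmission floor t0 is a common safeguard and an assumption of this sketch):

```python
import numpy as np

def dehaze(I, B, A, t0=0.1):
    """Recover J from the model I = J*t + B with t = 1 - B_a (Eqs. 15 and 4).
    B_a = B/A is normalized by its maximum when it exceeds one, and the floor
    t0 (an assumption) keeps the transmission away from zero."""
    Ba = B / A
    Ba = Ba / max(Ba.max(), 1.0)       # normalization when B/A exceeds one
    t = np.clip(1.0 - Ba, t0, 1.0)     # Eq. (15), clamped from below
    if I.ndim == 3:                    # the scalar veil applies to each channel
        t = t[..., None]
        B = B[..., None]
    return np.clip((I - B) / t, 0.0, 1.0)
```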