 Research
 Open Access
A generalized gamma correction algorithm based on the SLIP model
EURASIP Journal on Advances in Signal Processing volume 2016, Article number: 69 (2016)
Abstract
Traditional contrast enhancement techniques were developed to enhance the dynamic range of images with narrow histograms. However, it is not unusual that an image with a broad histogram still suffers from low contrast in both the shadow and highlight areas. In this paper, we first develop a unified framework called the generalized gamma correction for the enhancement of these two types of images. The generalization is based on the interpretation of the gamma correction algorithm as a special case of the scalar multiplication of a generalized linear system (GLS). By using the scalar multiplication based on other GLS, we obtain the generalized gamma correction algorithm. We then develop an algorithm based on the generalized gamma correction algorithm which uses the recently developed symmetric logarithmic image processing (SLIP) model. We demonstrate that the proposed algorithm can be configured to enhance both types of images by adaptively choosing the mapping function and the multiplication factor. Experimental results and comparisons with classical contrast enhancement and state-of-the-art adaptive gamma correction algorithms demonstrate that the proposed algorithm is an effective and efficient tool for the enhancement of images with either narrow or broad histograms.
Introduction
Problem formulation
The contrast of an image is one of the most important factors influencing its subjective quality. An image, which is subjectively rated as low contrast, is usually associated with a limited dynamic range. In practice, pixels of an image can be broadly classified as either in the areas of shadow, midtone, or highlight. They correspond to pixels in the lower end, middle part, and the higher end of the histogram, respectively. An image, which is classified as global low contrast, can have a narrow histogram in one of these areas. On the other hand, pixels of an image can be distributed mostly in the shadow and highlight areas which have limited dynamic ranges. Such an image is classified as local low contrast. Figure 1 shows histograms of the six test images. The first three are typical cases of images with global low contrast, while the other three are typical cases of local low contrast.
To enhance images of global low contrast, we can use classical dynamic range stretching algorithms such as histogram equalization, gamma correction, and linear contrast stretching [1]. However, since an image of local low contrast usually has a broad histogram, these algorithms may not produce the desired result.
In this work, we focus on the following problem: to develop a unified framework such that it can be used to enhance the two types of images.
A brief review of related works
Image enhancement is an active research area which has accumulated many papers on contrast enhancement. Contrast enhancement can be broadly classified as the following: histogram-based methods such as many different ways of performing histogram equalization; linear contrast stretching; nonlinear signal transformation such as gamma correction; and transform-domain-based methods such as performing enhancement in the wavelet or Fourier transform domain. Since this paper is on the generalization of the gamma correction algorithm, we will only provide a brief review of some related works. Computationally intensive image enhancement algorithms such as the Retinex [2] and its variants, including optimization through variational methods [3, 4], are not discussed.
In the following discussion, a pixel of an image is represented by x, where the spatial location of the pixel is omitted to simplify the notation. It is also assumed that the pixel gray scale has been normalized such that x∈(0,1).
The logarithmic image processing model
In [5], the scalar multiplication operation of the logarithmic image processing (LIP) model is used to enhance an image as follows:
\(y = \gamma_{1} \otimes x = \phi_{\text{LIP}}^{-1}\left(\gamma_{1}\phi_{\text{LIP}}(x)\right) = 1 - (1 - x)^{\gamma_{1}}\)
where \(\phi_{\text{LIP}}(x)=\log(1-x)\) and γ _{1} is an image-dependent adaptive gain. It is determined by maximizing the dynamic range of the processed image. The optimal γ _{1} is given by
where μ and σ are the mean and standard deviation of the transformed image data \(\phi _{\text {LIP}}(x)=\log (1-x)\). However, a drawback of this result is that it only works for images for which the condition μ>σ is satisfied.
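For normalized pixels, the LIP scalar multiplication has the closed form 1−(1−x)^γ, which follows from the generating function above (and is derived again in the Summary subsection). A minimal sketch:

```python
def lip_scalar_mult(x, gamma):
    """LIP scalar multiplication for a normalized pixel x in (0, 1),
    using the generating function phi(x) = log(1 - x):
    phi_inv(gamma * phi(x)) = 1 - (1 - x)**gamma."""
    return 1.0 - (1.0 - x) ** gamma
```

With γ>1 dark pixels are brightened, e.g., lip_scalar_mult(0.5, 2) = 0.75.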
The parametric logratio model
In [6], the scalar multiplication operation of the parametric log-ratio (PLR) model is used to enhance an image as follows:
where
The multiplication factor γ _{2} and the model parameter η are determined by the user-specified mapping of the two input pixels denoted (x _{1},x _{2}) to the corresponding two output pixels (y _{1},y _{2}) by solving the following equations:
and
The local color correction algorithm
The local color correction (LCC) algorithm [7, 8] is defined as follows:
where
and f(x) is the pixel value after Gaussian low-pass filtering of the original image.
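The LCC equations are not reproduced above. As a hedged illustration only, a commonly cited normalized form of local color correction [7, 8] raises each pixel to an exponent driven by the low-pass mask; the exact exponent below is an assumption, not this paper's equation:

```python
def lcc_pixel(x, mask):
    """Local color correction for one normalized pixel x in [0, 1].
    `mask` is the corresponding value of the Gaussian low-pass filtered
    image f(x).  In this commonly cited form the exponent is
    2**((mask - 0.5) / 0.5): dark neighbourhoods (mask < 0.5) give an
    exponent below 1 and brighten the pixel; bright neighbourhoods
    give an exponent above 1 and darken it."""
    return x ** (2.0 ** ((mask - 0.5) / 0.5))
```

When the mask equals the midtone value 0.5, the exponent is 1 and the pixel is left unchanged.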
Adaptive gamma correction
In the adaptive gamma correction (AGC) algorithm [9], the gain parameter is based on the modified histogram which is defined as follows:
where h(k) (k=0:255 for an 8-bit/pixel image) is the normalized histogram, \(h_{M}=\max \{h(k)\}\), \(h_{m}=\min \{h(k)\}\), and ε is a normalizing factor to ensure \(\sum _{k=0}^{255}p(k)=1\). For a pixel with value k (k is an 8-bit integer), the gamma correction is performed by \(255\times (k/255)^{\gamma _{4}(k)}\), where
Since h(k) is the probability distribution function (PDF) of pixels in an image, p(k) can be regarded as a modified PDF. As such, γ _{4}(k) is the complementary cumulative distribution function with respect to p(k).
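As a sketch of the idea (not the exact formulas of [9], whose weighting exponent is omitted here): assuming p(k) is the min-max rescaled histogram, renormalized by ε, and γ_4(k) is its complementary CDF, the AGC lookup table can be computed as follows:

```python
import numpy as np

def agc_gamma(hist):
    """Per-level gamma for adaptive gamma correction (a sketch).
    `hist` is the normalized 256-bin histogram h(k).  Here p(k) is
    assumed to be the min-max rescaled histogram, renormalized to sum
    to 1 (the epsilon normalization); gamma_4(k) is its complementary
    cumulative distribution."""
    h = np.asarray(hist, dtype=float)
    p = (h - h.min()) / (h.max() - h.min() + 1e-12)  # rescale to [0, 1]
    p /= p.sum()                                     # epsilon normalization
    return 1.0 - np.cumsum(p)                        # complementary CDF

def agc_enhance(img_u8):
    """Apply 255 * (k / 255) ** gamma_4(k) through a lookup table."""
    hist = np.bincount(img_u8.ravel(), minlength=256) / img_u8.size
    gamma = agc_gamma(hist)
    k = np.arange(256) / 255.0
    lut = np.clip(255.0 * k ** gamma, 0, 255).astype(np.uint8)
    return lut[img_u8]
```

Because γ_4(k) decreases with k, frequently occurring dark levels receive a small exponent and are brightened.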
Summary
The LCC and AGC algorithms are actually the classical gamma correction algorithm with different ways of adaptively calculating the gain γ. Although the LIP model-based algorithm is not directly related to the gamma correction, their relationship can be seen by rewriting the LIP scalar multiplication as follows:
\(y = 1 - (1 - x)^{\gamma_{1}}\)
A further simplification shows
\(\bar{y} = \bar{x}^{\gamma_{1}}\)
where \(\bar {y}=1-y\) and \(\bar {x}=1-x\). Thus, the LIP model-based algorithm can be regarded as the gamma correction algorithm operating on the negative image (1−x), and the result is an enhanced negative image \(\bar {y}\). The desired result is then obtained by the inverse \(y=1-\bar {y}\). As such, the LIP model-based scalar multiplication can be regarded as a generalized gamma correction algorithm. This motivates us to explore a principled approach for the generalization. The PLR model-based algorithm can also be considered a generalized gamma correction algorithm. This will be discussed in Section 2.2.
Computationally, all of the above-mentioned algorithms use the exponential operation. The difference in complexity is largely due to the different ways of calculating the exponent in each algorithm. In one extreme case, where the exponent is fixed, the complexity is the lowest. In the other extreme case, where the exponent is adaptively calculated for each pixel, the complexity is the highest. Depending on the available hardware resources, a trade-off between computational complexity and performance has to be made.
Contribution of this paper
The motivation is to extend the idea of the scalar multiplication based on the LIP model, which has had limited success due to the constraint μ>σ. The novelty of this work is the development of the generalized gamma correction algorithm, by which both types of low-contrast images can be enhanced. This is in contrast to the classical gamma correction algorithm, which can only enhance underexposed or overexposed images. These images belong to the broad class of global low contrast.
There are two key contributions in this work. (1) We demonstrate that the scalar multiplication operation of a generalized linear system is a principled way to develop the unified framework which is called the generalized gamma correction. This is because the gamma correction algorithm is shown to be a special case of the scalar multiplication. We also show that a natural extension of the LIP model is to use the recently developed symmetric LIP (SLIP) model. An important feature of the SLIP model is that it is the same as the LIP model when the signal value is in (0,1). However, in the SLIP model, the signal is defined in (−1,1), while in the LIP model, the signal is defined in \((-\infty,1)\). We demonstrate that such a difference is essential for the SLIP model to be used as the basis for the development of the generalized gamma correction framework. (2) Based on the generalized gamma correction and the SLIP model, we propose an algorithm which is an effective and efficient tool for the enhancement of images suffering from either local or global low contrast. We also develop a simple method for the classification of low-contrast images into either local or global low contrast. As such, automatic image enhancement with default parameter settings can be performed.
The organization of this paper is as follows. In Section 2, after a brief review of the concept of a GLS, we define and compare several systems based on their generating functions. We then show that the classical gamma correction algorithm is a special case of the scalar multiplication of a GLS for which the generating function is the logarithmic function. This leads naturally to the development of the generalized gamma correction algorithm in which the logarithmic function is replaced by other generating functions. By comparing the properties of scalar multiplication operations of different systems, we show that the SLIP model can be configured to enhance the two types of low-contrast images effectively. In Section 3, we describe the proposed dynamic range enhancement algorithm using the SLIP model-based scalar multiplication. The proposed algorithm has three key elements: (1) pre-mapping of the signal from (0,1) to (−1,1), (2) determining the multiplication factor and performing the scalar multiplication, and (3) post-mapping the signal from (−1,1) to (0,1). In Section 4, we test the proposed algorithm using six images. We study the effect of parameter setting and compare the performance of the proposed algorithm with those of the state-of-the-art adaptive gamma correction algorithms and the classical contrast enhancement algorithms including linear contrast stretching and contrast-limited histogram equalization. Experimental results show that the proposed algorithm successfully enhances the two types of images, while other algorithms considered in this paper can only enhance one type of image well. In Section 5, we summarize the main result of this paper.
The generalized gamma correction algorithm
After a brief review and the definition of the GLS, we develop the generalized gamma correction algorithm which is the scalar multiplication operation of a GLS.
The generalized linear system
A brief review
The GLS, such as the multiplicative homomorphic system (MHS) [10, 11], generalized mean filter [12], the log-ratio (LR) model [13], and the logarithmic image processing (LIP) model [14], has been studied since the late 1960s. The LIP model has been applied to many practical problems [5, 15–26]. Its operations have been justified from the perspectives of the physical image formation model, human vision models [15], and information theory [27]. Based on a new imaging device model [28], a generalized LIP (GLIP) model has been developed [29]. Other extensions of the LIP model include the parametric [21, 30], the pseudo and the harmonic LIP models [31, 32], and the symmetric extension [33]. The LR model has also been recently extended from two perspectives: the Bregman divergence [34] and the triangular norm [6]. The same idea of the LR model has also been further explored in [35] to study other generalized linear systems.
Definition
The block diagram of a GLS is shown in Fig. 2, where ϕ is called the generating function of the system. The generating function is strictly monotonically increasing and is a one-to-one and onto mapping. For example, let the input signal set S={x∣x∈(m,M)}, where m and M are the lower and upper bounds of the signal values, respectively. The mapping ϕ(x) has the property \(\phi (x)\in (-\infty,\infty)\). As such, the inverse mapping has the property \(\phi ^{-1}:(-\infty,\infty)\rightarrow (m,M)\).
Different generating functions result in different systems. Despite their differences, generalized linear systems have two fundamental operations: vector addition ⊕ and scalar multiplication ⊗, which are defined by using the generating function as follows:
\(x \oplus y = \phi^{-1}\left(\phi(x) + \phi(y)\right)\)
and
\(\gamma \otimes x = \phi^{-1}\left(\gamma\phi(x)\right)\)
where x,y∈S and γ∈R. An important property of the GLS is that it is closed under the vector addition and scalar multiplication, i.e., x⊕y∈(m,M) and γ⊗x∈(m,M). This closure property ensures that an image processed by a GLS will not have the out-of-range problem.
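These definitions translate directly into code. The helper below builds both operations from a generating function; instantiating it with ϕ(x)=log(x) (the MHF) recovers ordinary gamma correction, since exp(γ log x) = x^γ:

```python
import math

def gls_ops(phi, phi_inv):
    """Build the two GLS operations from a generating function phi and
    its inverse phi_inv."""
    def add(x, y):
        # x (+) y = phi^{-1}(phi(x) + phi(y))
        return phi_inv(phi(x) + phi(y))
    def smul(gamma, x):
        # gamma (*) x = phi^{-1}(gamma * phi(x))
        return phi_inv(gamma * phi(x))
    return add, smul

# With phi = log, scalar multiplication reduces to x**gamma.
add, smul = gls_ops(math.log, math.exp)
```

Any strictly increasing, one-to-one generating function can be substituted to obtain a different system with the same closure property.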
A special value in the signal set is the additive identity element, denoted by I, which is defined as follows:
\(\phi(I) = 0, \quad \text{i.e.,} \quad I = \phi^{-1}(0)\)
A useful property of the identity element is that it is preserved under the scalar multiplication:
\(\gamma \otimes I = \phi^{-1}\left(\gamma\phi(I)\right) = \phi^{-1}(0) = I\)
This property will be used to develop the proposed algorithm.
Examples
Prominent examples of generalized linear systems include the multiplicative homomorphic filter (MHF), the parametric LR model, the LIP model, and the SLIP model. Key elements of these systems are summarized in Table 1.
Generalized gamma correction
To develop the generalized gamma correction algorithm, let us rewrite the gamma correction as follows:
\(y = x^{\gamma} = \exp\left(\gamma\log(x)\right) = \phi^{-1}\left(\gamma\phi(x)\right)\)
where x>0, γ is a real number, and \(\phi (x)=\log (x)\). As such, the gamma correction is written as the scalar multiplication of a particular GLS which is the MHF.
The concept of the GLS provides a theoretical framework for generalizing the gamma correction algorithm by using other generating functions. We will thus call the scalar multiplication operation of a GLS a generalized gamma correction. In the past, image enhancement using the scalar multiplication operation of the LIP model [5] and of the PLR model [6] has been studied. In this work, we study the application of the SLIP model.
Generalized gamma correction using the SLIP model
The generalized gamma correction due to the SLIP-based scalar multiplication is as follows:
\(y = \gamma \otimes x = \text{sign}(x)\left[1 - (1 - |x|)^{\gamma}\right], \quad x \in (-1,1)\)
In Fig. 3 c, d, we demonstrate the effects of setting different values of γ. Because in the SLIP model the signal is defined in the interval (−1,1) while the grayscale value of the image is usually in the interval (0,1), a preprocessing step which maps the interval (0,1) to (−1,1) is required. Details of this mapping will be discussed in the next section. After the mapping, the scalar multiplication using the SLIP model can be configured (γ<1) to enhance the dynamic range of both shadow and highlight areas at the cost of compressing the dynamic range of the midtone. It can also be configured (γ>1) to enhance the dynamic range of an image with a narrow histogram at the cost of compressing the dynamic ranges of both shadow and highlight. This can be clearly seen in Fig. 3 c, d.
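A sketch of the SLIP scalar multiplication, assuming the normalized generating function ϕ(x) = −sign(x)log(1−|x|), which coincides with the LIP form on (0,1) and extends it as an odd function to (−1,1):

```python
import math

def slip_scalar_mult(x, gamma):
    """SLIP scalar multiplication for x in (-1, 1), assuming the
    normalized generating function phi(x) = -sign(x) * log(1 - |x|).
    On (0, 1) this reduces to the LIP form 1 - (1 - x)**gamma, and it
    is extended to (-1, 0) as an odd function."""
    if x == 0.0:
        return 0.0  # the additive identity is preserved: gamma (*) 0 = 0
    s = math.copysign(1.0, x)
    return s * (1.0 - (1.0 - abs(x)) ** gamma)
```

The slope of this mapping at 0 is γ, while near ±1 it behaves like γ(1−|x|)^{γ−1}; hence γ<1 compresses the midtone and expands the shadow/highlight ends, and γ>1 does the opposite, matching the description of Fig. 3 c, d.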
In Fig. 3, we also demonstrate the main differences between the generalized gamma correction algorithms due to different generating functions. We can see that gamma correction (using the MHF) can be configured (γ<1) to enhance the dynamic range of the shadow area at the cost of compressing that of the highlight area, and vice versa (γ>1). The LIP scalar multiplication has a similar effect to that of gamma correction. Obviously, these two algorithms are not capable of enhancing the dynamic ranges of the shadow and the highlight simultaneously. The PLR model (defined in Table 1) has a parameter η (η>0). In actual applications, it is easier to indirectly specify η by using the identity element of the vector addition operation, denoted by I _{0}, which is given by I _{0}=(1+η)^{−1} [6]. We can see from Fig. 3 e, f that setting different values of I _{0} leads to asymmetrical enhancement or compression effects on the shadow, midtone, and highlight areas. For example, when γ<1, it will enhance the shadow or the highlight or both areas depending on the setting of the identity element. On the other hand, when γ>1, it will enhance the dynamic range of the narrow histogram, which can be centered at different pixel values depending on the setting of the identity element.
Proposed algorithm
The general structure
Let the input image to be processed be denoted as x and the final output image be denoted as y. Using the SLIP model, the signal set is defined as S={x∣x∈(−1,1)}. Since, after proper normalization, the gray scale of digital images is in the interval (0,1), the first step of the proposed SLIP-based generalized gamma correction algorithm is to determine a function
\(u = f(x)\)
such that \(f:(0,1)\rightarrow (-1,1)\). The second step is the application of the generalized gamma correction algorithm
\(v = \gamma \otimes u\)
where v∈(−1,1). The third step is to determine a function
\(y = g(v)\)
such that \(g:(-1,1)\rightarrow (0,1)\). The general structure of the proposed algorithm is shown in Fig. 4.
In the following, we define x _{m} and x _{M} as the minimum and maximum values of the input image x. We also define the ρ-quantile value denoted by x(ρ) as Pr(x<x(ρ))=ρ, where x _{m}≤x(ρ)≤x _{M} and 0≤ρ≤1. In the limiting cases, we have \(x_{m}=x(0)\) and \(x_{M}=x(1)\). For image v, we also define v _{ m }, v _{ M }, and v(ρ) in the same way as the corresponding terms in image x.
Enhancing global lowcontrast image
Determine the function f(x)
The dynamic range of the image can be defined as
\(R_{x} = x_{2} - x_{1}\)
where x _{2}=x(ρ _{2}) and x _{1}=x(ρ _{1}). For example, we can set ρ _{2}=0.995 and ρ _{1}=0.005 such that 99 % of the pixels are in the interval [x _{1},x _{2}]. Since images of global low contrast usually have a narrow histogram, one way to map the interval (0,1) to (−1,1) is by using
\(u = f(x) = x - b\)
where x _{m}<b<x _{M}. As such, we have u _{1}=x _{1}−b and u _{2}=x _{2}−b.
To determine the parameter b, it is reasonable to assume that after the scalar multiplication the results are
\(v_{1} = \gamma \otimes u_{1} = -c\)
and
\(v_{2} = \gamma \otimes u_{2} = c\)
where c is a positive constant close to 1. As such, the dynamic range of the image v is given by
\(R_{v} = v_{2} - v_{1} = 2c\)
This assumption ensures that the dynamic range of the image is increased after the scalar multiplication. Using Eqs. 23 and 24 and the generating function of the SLIP model, we can derive the following result:
\(b = \frac{x_{1} + x_{2}}{2}\)
Determine the gain γ
Referring to Fig. 3 d, the enhancement of the image is through the scalar multiplication with γ>1. Setting a larger value for γ will lead to more contrast enhancement. We leave this as a parameter for the user to adjust to achieve the desired result.
Determine the function g(v)
The final output image is obtained by a simple linear mapping
\(y = \frac{v - v(\rho_{1})}{v(\rho_{2}) - v(\rho_{1})}\)
with the result clipped to [0,1], where ρ _{1} and ρ _{2} are user-specified parameters with the property 0≤ρ _{1}≤ρ _{2}≤1. One simple way to set these two parameters is to use the minimum v _{m} and the maximum v _{M} instead of v(ρ _{1}) and v(ρ _{2}).
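Assuming the simple quantile-based linear stretch described above (the exact form of the mapping is not reproduced here), g can be sketched as:

```python
def g_linear(v, v1, v2):
    """Post-mapping from (-1, 1) back to (0, 1): a linear stretch that
    sends v1 -> 0 and v2 -> 1 (e.g., v1 = v(rho_1) and v2 = v(rho_2),
    or simply v_m and v_M), clipping values outside [v1, v2]."""
    y = (v - v1) / (v2 - v1)
    return min(max(y, 0.0), 1.0)
```

Using v_m and v_M for v1 and v2 guarantees that no clipping actually occurs.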
Enhancing local lowcontrast image
Determine the function f(x)
We assume that the local low-contrast image has a broad histogram such as the one shown in Fig. 1. We determine the mapping function f(x) based on the following considerations. Referring to Fig. 3 c, in order for the scalar multiplication to effectively expand the dynamic range of the image content in both the shadow and highlight areas, the function f(x) should have the property that f(x _{m})=−c and f(x _{M})=c, where c>0 is a constant very close to 1, e.g., c=0.999. As such, the grayscale values in the shadow area are mapped to (−1,u _{1}), where u _{1}<0 is an image-dependent constant. Similarly, the grayscale values in the highlight area are mapped to (u _{2},1), where u _{2}>0 is an image-dependent constant. The center of the midtone should be mapped to 0.
We consider a linear mapping function
\(u = f(x) = ax + d\)
When it satisfies the above conditions, it can be shown that
\(a = \frac{2c}{x_{M} - x_{m}}\)
and
\(d = -\frac{c(x_{M} + x_{m})}{x_{M} - x_{m}}\)
In the simplest case, if we assume x _{m}=0, x _{M}=1, and c=1, then the mapping will be
\(u = 2x - 1\)
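The conditions f(x_m)=−c, f(x_M)=c, and linearity determine the map completely; a sketch:

```python
def f_linear(x, x_m, x_M, c=0.999):
    """Linear pre-mapping satisfying f(x_m) = -c and f(x_M) = c, so the
    grayscale range [x_m, x_M] is sent into (-1, 1) with its centre
    mapped to 0."""
    return c * (2.0 * x - x_m - x_M) / (x_M - x_m)
```

In the simplest case (x_m=0, x_M=1, c=1), this reduces to 2x−1.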
Determine the gain γ
Referring to Fig. 3 c, since our goal is to enhance the dynamic ranges of the shadow and the highlight, the dynamic range of the midtone has to be compressed. The gain γ can be determined by the amount of compression of the dynamic range of the midtone. More specifically, the midtone can be specified by the probability Pr(|u|<u _{0})=ρ _{0}. Once ρ _{0} (0<ρ _{0}<1) is specified, u _{0} can be determined and the dynamic range [−u _{0},u _{0}] will be compressed based on a simple scaling v _{0}=τ u _{0}, where τ (0<τ<1) is a user-specified parameter. From Eq. 19, we can determine the scaling factor denoted γ _{0} as follows:
\(1 - (1 - u_{0})^{\gamma_{0}} = \tau u_{0}\)
and
\(\gamma_{0} = \frac{\log(1 - \tau u_{0})}{\log(1 - u_{0})}\)
We set γ=γ _{0} to enhance the image.
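Assuming the SLIP form of the scalar multiplication, the condition 1−(1−u_0)^{γ_0} = τu_0 can be solved for γ_0 in closed form; a sketch:

```python
import math

def compression_gain(u0, tau):
    """Gain gamma_0 such that the SLIP scalar multiplication maps the
    midtone boundary u0 to v0 = tau * u0.  Solving the assumed SLIP
    relation 1 - (1 - u0)**gamma_0 = tau * u0 gives the closed form
    below; any 0 < tau < 1 yields gamma_0 < 1 (midtone compression)."""
    return math.log(1.0 - tau * u0) / math.log(1.0 - u0)
```

For example, u0=0.5 and τ=0.5 give γ_0 = log(0.75)/log(0.5) ≈ 0.415.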
In practice, as in most image processing software packages, the user can directly specify γ to achieve a desired outcome. However, by obtaining γ using Eq. 33, the user has more control over the trade-off between the compression of the midtone and the stretching of the dynamic ranges of the shadow and highlight.
Determine the function g(v)
For simplicity, we will adopt the same mapping function as that stated in Eq. 27.
Automatic enhancement
Image classification
In this section, we describe a simple method to classify an image as either global or local low contrast. The first task is to calculate the dynamic range of the input image using Eq. 21. If it is smaller than a predefined threshold τ, i.e., R _{ x }<τ, then the image is classified as global low contrast. If R _{ x }≥τ, then a further test is performed to see if the image is of local low contrast. The assumption behind the test is that a global low-contrast image usually has a unimodal histogram, while a local low-contrast image has a bimodal histogram. This assumption is supported by the histograms of real images shown in Fig. 1.
The test is based on the median absolute deviation (MAD) δ, which is defined for a set of data {d _{ n }}_{ n=1:N } as follows:
\(\delta = \text{median}\{|d_{n} - \tilde{d}|\}\)
where \(\tilde{d}=\text{median}\{d_{n}\}\) is the median of the data set. The MAD is a robust measure of the spread of the data [36]. For an image, all pixels form a set of data denoted as {x _{ n }}_{ n=1:N }. We use Otsu's method [37] to partition pixels of the image into two sets: {u _{ n }}_{ n=1:J } and {v _{ n }}_{ n=1:K }, where J+K=N. We then calculate the MAD for the whole image δ _{0}, and the MADs for the two sets of pixels δ _{ u } and δ _{ v }.
The image is classified as local low contrast when \(\delta _{0}>\max (\delta _{u},\delta _{v})\). This classification rule is based on the observation that for an image with a bimodal histogram, pixels can be robustly classified into two classes. The MAD of each class is usually smaller than the MAD of the whole image. The proposed classification method is confirmed in Table 2 which shows the values for images shown in Fig. 1.
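The classification test can be sketched end to end. For simplicity, Otsu's method is applied here directly to the raw pixel values rather than to a 256-bin histogram; this is an implementation shortcut, not the exact procedure of [37]:

```python
import statistics

def mad(data):
    """Median absolute deviation: median(|d_n - median(d)|)."""
    m = statistics.median(data)
    return statistics.median(abs(d - m) for d in data)

def otsu_threshold(data):
    """Otsu's method on raw values: pick the threshold that maximizes
    the between-class variance of the two resulting groups."""
    vals = sorted(set(data))
    best_t, best_var = vals[0], -1.0
    n = len(data)
    for t in vals[:-1]:
        lo = [d for d in data if d <= t]
        hi = [d for d in data if d > t]
        w0, w1 = len(lo) / n, len(hi) / n
        mu0 = sum(lo) / len(lo)
        mu1 = sum(hi) / len(hi)
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_t, best_var = t, between
    return best_t

def is_local_low_contrast(pixels):
    """Classify as local low contrast when the whole-image MAD exceeds
    the MAD of both Otsu classes (the bimodal-histogram test)."""
    t = otsu_threshold(pixels)
    lo = [p for p in pixels if p <= t]
    hi = [p for p in pixels if p > t]
    return mad(pixels) > max(mad(lo), mad(hi))
```

For a bimodal histogram the two classes are tight around their own modes, so δ_u and δ_v are both small compared to δ_0; for a peaked unimodal histogram δ_0 itself is small and the test fails.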
Automatic enhancement
Once the low-contrast image is classified, the proposed algorithms developed in this paper can be applied. To enhance a global low-contrast image, the algorithm has one parameter γ (γ>1), which can be set to a default value γ=2. In actual applications, the user can then adjust the value to achieve the desired result. Similarly, the proposed algorithm for a local low-contrast image has one parameter γ (γ<1), which can be specified through the trade-off between the compression of the midtone and the expansion of the shadow and highlight. It can also be set to a default value, e.g., γ=0.6. The user can then adjust the value to achieve the desired result.
Results and comparison
For a color image, we first convert it from the RGB color space to the HSI space. Only the intensity component is processed; the result is then converted back to the RGB space. For comparison, we have processed the images using the following algorithms, of which the first two are classical and well established, and the other three are recently published and have demonstrated good results.

- LCS: linear contrast stretching using the MATLAB function imadjust with default settings
- CLAHE: contrast-limited adaptive histogram equalization [38] using the MATLAB function adapthisteq with the contrast parameter set to 0.004
- LCC: local color correction algorithm [7, 8] with default parameter settings^{1}
- AGC: adaptive gamma correction^{2} [9]
- PLR: parametric log-ratio model [6]
We use the standard deviation (σ), the mean (μ), and the entropy (\(\mathcal {H}\)) of the intensity component as numerical measures to compare images. The standard deviation has been used as an indicator of the contrast of the image [39]. The mean is used to measure the overall brightness of the image. The entropy is a measure of the flatness of the probability distribution of the grayscale values in an image. When the distribution is uniform, the entropy attains its upper bound of 8 bits for an image with grayscale values quantized to 8 bits. Achieving higher entropy is one of the goals in image enhancement, e.g., histogram equalization [1]. However, it should be noted that none of these numerical measures can replace human subjective evaluation. Subjective evaluation depends on many factors such as the viewing environment, the physical characteristics of the display device, and, most importantly, differences in viewers' individual preferences for contrast, sharpness, color, etc. As such, it is a subject currently under intense investigation and is outside the scope of this paper.
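The three measures can be computed directly from the 8-bit intensity values (entropy in bits, so the upper bound for a uniform distribution is 8):

```python
import math

def image_metrics(pixels_u8):
    """Mean, standard deviation, and entropy (in bits) of a list of
    8-bit gray levels."""
    n = len(pixels_u8)
    mu = sum(pixels_u8) / n
    sigma = (sum((p - mu) ** 2 for p in pixels_u8) / n) ** 0.5
    hist = [0] * 256                      # 256-bin histogram
    for p in pixels_u8:
        hist[p] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in hist if c > 0)
    return mu, sigma, entropy
```

An image using each of the 256 levels equally often attains the entropy bound of exactly 8 bits.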
Enhancement of global lowcontrast images
We use the Rachmaninov image to test the proposed algorithm. This image has a narrow histogram which is a typical image of global low contrast. Referring to Section 3.2, we have tested three settings: γ=2,5,7. Results are shown in Fig. 5 and in Table 3. From these results, we can make the following observations. Using the proposed algorithm, the standard deviation of the processed image increases with the increase of the parameter γ. Compared to the original image, the contrast of the processed image is significantly enhanced. This is confirmed visually and numerically. Since the same image may appear to have different contrast on different display devices, the parameter γ can be set by the observer to produce a satisfactory result. The entropy of the processed images is also greater than that of the original image. This is an indication that the probability distribution of pixels in the processed image is closer to the uniform distribution than that of the original image.
Compared with the other algorithms, we can see that the proposed algorithm produces results visually quite close to that of the classical LCS. All algorithms tested, except the LCC algorithm, have improved the image quality to some extent. This is confirmed from their respective values of σ, μ, and \(\mathcal {H}\). The AGC also enhances the contrast, but the overall brightness of the processed image is also increased. The CLAHE and PLR algorithms produce similar results in which the enhancement in contrast is less than those produced by the proposed algorithm and the LCS. This is confirmed by the standard deviation of these images. The LCC algorithm increases the overall brightness of the image, but it does not enhance the contrast of the image. As a result, the subjective quality of these images is not as satisfactory as that of the proposed algorithm and the LCS.
To further investigate the performance of the proposed algorithm, we run the same test using an overexposed flower image. It has a narrow histogram which is concentrated in the highlight area. Results are shown in Fig. 6 and in Table 4. From these results, we can make very similar observations to those with the Rachmaninov image. Visually, the quality of the image produced by the proposed algorithm is similar to that produced by the PLR algorithm but is better than those of the other algorithms tested.
To demonstrate the robustness of the algorithm, we perform an experiment on the church image. We have tested three settings: γ=2,5,7. Results are shown in Fig. 7 and in Table 5. From these results, we can clearly see that the low contrast of the original image is due to a haze-like effect rather than inaccurate exposure (as in the flower image) or the aging effect (as in the Rachmaninov image). The proposed algorithm has successfully enhanced the contrast of this image by removing the haze effect. Similar results have been achieved by using linear contrast stretching. The numerical results shown in Table 5 support these observations.
Overall, for the three test images, the proposed algorithm has produced images with the largest values of standard deviation and entropy among all algorithms tested.
Enhancement of local lowcontrast images
We use the “iris” image to test the proposed algorithm for the enhancement of images of local low contrast. We test the proposed algorithm (refer to Section 3.3) by setting ρ _{0}=0.1 and τ=0.5, by which the dynamic range of 10 % of the pixels in the midtone will be compressed by a scaling factor of 0.5. To test the performance of the proposed algorithm with a user-specified γ, we also set γ=1.2γ _{0} and γ=0.8γ _{0}.
Results are shown in Fig. 8 and in Table 6. From these results, we can make the following observations. Using the proposed algorithm, the mean of the processed image increases with the increase of the parameter γ. This is an indication that the brightness of the shadow area has been enhanced. The standard deviation of the processed image increases with the increase of γ but is always smaller than that of the original image. This is because the original image has excessive contrast, such that details in the shadow and highlight areas cannot be clearly seen. The proposed algorithm enhances the image by stretching the dynamic ranges of both areas towards the midtone. This results in a smaller standard deviation. The entropy of the processed image is roughly the same as that of the original image. Compared with the original image, the improvement in image quality can be observed in both the shadow area (e.g., the leaves of the plant and the heater below the window) and the highlight area (e.g., the clouds in the sky). A negative effect of compressing the midtone can be observed in the part of the image where there are trees. Compared with the original image, the details of the trees in the processed image seem to be smoothed. This is because that part of the image is in the midtone and its dynamic range is compressed. This results in a loss of contrast, which leads to a loss of details.
Compared with the other algorithms, we can see that the proposed algorithm produces results visually quite close to those of the LCC and PLR algorithms. The adaptive histogram equalization (CLAHE) significantly enhances the highlight area, but it does not enhance the shadow area. The AGC enhances the shadow area, but it does not enhance the highlight area. The classical linear contrast stretching algorithm enhances neither area. These observations can be explained from the standard deviation point of view. Referring to Table 6, the standard deviations of the images produced by the LCS, CLAHE, LCC, and AGC algorithms are quite close to that of the original image. This indicates that these algorithms do not reduce the excess contrast of the original image. From the same point of view, we can understand why the quality of the image produced by the PLR algorithm is similar to that produced by the proposed algorithm: their standard deviations are quite close to each other.
To further investigate the performance of the proposed algorithm, we process another image which is used in [9]. We test the proposed algorithm (refer to Section 3.3) by setting ρ _{0}=0.15 and τ=0.7, by which the dynamic range of 15 % of the pixels in the midtone will be compressed by a scaling factor of 0.7. To test the performance of the proposed algorithm with a user-specified γ, we also set γ=1.2γ _{0} and γ=0.8γ _{0}.
Results are shown in Fig. 9 and in Table 7. From these results, we can make the following observations which are quite similar to those with the “iris” image. Using the proposed algorithm, the standard deviation increases with the increase of the parameter γ. However, similar to the case of the “iris” image, the standard deviation of the processed image is smaller than that of the original image. This is because the original image has an excessive contrast with dark shadow areas such as the street inside the building and bright highlight areas such as the sky. The proposed algorithm enhances the image by stretching the dynamic range of both areas towards the midtone. As a result, the details of the dark areas can be easily seen, while the contrast of the sky is preserved.
Compared with the other algorithms, the proposed algorithm again produces results visually quite close to those of the LCC and PLR algorithms. This can be explained from the standard-deviation point of view: the standard deviations of these images are quite close to each other. The adaptive histogram equalization (CLAHE) significantly enhances the highlight area but not the shadow area, while the AGC enhances the shadow area but not the highlight area. The classical linear contrast enhancement algorithm enhances neither area.
To demonstrate the robustness of the proposed algorithm, we test it using the “ferrari” image^{3}, which has a broad histogram. We set the parameters for the proposed algorithm as follows: ρ _{0}=0.2 and τ=0.65. As such, the dynamic range of 20 % of the pixels in the midtone is compressed by a scaling factor of 0.65. To test the performance of the proposed algorithm with a user-specified γ, we also set γ=1.2γ _{0} and γ=0.8γ _{0}, where γ _{0} is determined by the settings ρ _{0}=0.2 and τ=0.65. Results are shown in Fig. 10 and Table 8. From this figure, we can clearly see that the results from the proposed algorithm are similar to those produced by the other algorithms. In fact, when γ=γ _{0}, the proposed algorithm achieves a good balance between retaining the contrast of the sky and enhancing the contrast of the bonnet of the car.
For the above three test images, the entropy of the images processed by all algorithms is about the same as that of the original image. This is because these test images have broad histograms, and the aim of the proposed algorithm is not to further broaden the histogram. As such, the entropy of the processed image does not change much from that of the original image. In contrast, a global low-contrast image has a narrow histogram. As a result of enhancing the dynamic range of such an image, the proposed algorithm broadens the histogram, leading to an increase in entropy.
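The link between histogram width and entropy described above can be made concrete with a small sketch. The following code computes the Shannon entropy of a gray-level histogram, the quality measure used in the tables; the function name, bin count, and synthetic narrow/broad test images are illustrative assumptions, not the paper’s exact evaluation code:

```python
import numpy as np

def histogram_entropy(img, bins=256):
    """Shannon entropy (in bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                     # empty bins contribute nothing
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
# Narrow histogram: gray levels crammed into [0.4, 0.6] (global low contrast)
narrow = rng.uniform(0.4, 0.6, size=(64, 64))
# Broad histogram: gray levels spread over the full range [0, 1]
broad = rng.uniform(0.0, 1.0, size=(64, 64))

print(f"narrow-histogram entropy: {histogram_entropy(narrow):.2f} bits")
print(f"broad-histogram entropy:  {histogram_entropy(broad):.2f} bits")
```

Stretching the narrow histogram over the full range populates more bins and so raises the entropy, whereas an image whose histogram is already broad leaves little room for such an increase, consistent with the results in the tables.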
Conclusions
In this paper, based on the concept of the generalized linear system (GLS), we first proposed the generalized gamma correction algorithm as the scalar multiplication of a GLS. We then proposed an image enhancement algorithm using the recently developed symmetric LIP (SLIP) model. We showed that the proposed algorithm can be configured to effectively and efficiently enhance images of either global low contrast or local low contrast. While the classical gamma correction algorithm can only enhance underexposed or overexposed images, the generalized gamma correction algorithm can be used to enhance images with low contrast in areas of shadow, midtone, highlight, or a combination of them. This expansion of the capability of the gamma correction algorithm constitutes a novel contribution of this paper. Experimental results and comparisons with classical and recently developed image enhancement algorithms demonstrate that the proposed generalized gamma correction algorithm is an effective tool.
Endnotes
^{1} The LCC algorithm is run remotely from the web site http://www.ipol.im/pub/art/2011/gl_lcc/
^{2} The source code is kindly provided by the authors.
^{3} Available from http://www.ipol.im/pub/art/2011/gl_lcc/
References
1. RC Gonzalez, RE Woods, Digital Image Processing, 3rd edn. (Prentice-Hall, Upper Saddle River, NJ, USA, 2006).
2. EH Land, JJ McCann, Lightness and the retinex theory. J. Opt. Soc. Am. 61, 1–11 (1971).
3. R Kimmel, M Elad, D Shaked, R Keshet, I Sobel, A variational framework for retinex. Int. J. Comput. Vis. 52, 7–23 (2003).
4. L Wang, L Xiao, H Liu, Z Wei, Variational Bayesian method for retinex. IEEE Trans. Image Process. 23, 3381–3396 (2014).
5. M Jourlin, JC Pinoli, Image dynamic range enhancement and stabilization in the context of the logarithmic image processing model. Signal Process. 41, 225–237 (1995).
6. G Deng, Parametric generalized linear system based on the notion of the t-norm. IEEE Trans. Image Process. 22(7), 2903–2910 (2013).
7. N Moroney, in IS&T/SID 8th Color Imaging Conference. Local color correction using non-linear masking (IS&T/SID, Scottsdale, Arizona, 2000), pp. 108–111.
8. JGG Salas, JL Lisani, Local color correction. Image Process. On Line (2011). http://www.ipol.im/pub/art/2011/gl_lcc/.
9. SC Huang, FC Cheng, YS Chiu, Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Trans. Image Process. 22(3), 1032–1041 (2013).
10. AV Oppenheim, RW Schafer, TG Stockham Jr, Nonlinear filtering of multiplied and convolved signals. IEEE Trans. Audio Electroacoust. 16(3), 437–466 (1968).
11. TG Stockham Jr, Image processing in the context of a visual model. Proc. IEEE 60(7), 828–842 (1972).
12. I Pitas, A Venetsanopoulos, Nonlinear mean filters in image processing. IEEE Trans. Acoust. Speech Signal Process. 34(3), 573–584 (1986).
13. H Shvaytser, S Peleg, Inversion of picture operators. Pattern Recogn. Lett. 5(1), 49–61 (1987).
14. M Jourlin, JC Pinoli, A model for logarithmic image processing. J. Microsc. 149, 21–35 (1988).
15. JC Pinoli, A general comparative study of the multiplicative homomorphic, log-ratio and logarithmic image processing approaches. Signal Process. 58(1), 11–45 (1997).
16. G Deng, JC Pinoli, Differentiation-based edge detection using the logarithmic image processing model. J. Math. Imaging Vis. 8(2), 161–180 (1998).
17. G Courbebaisse, F Trunde, M Jourlin, Wavelet transform and LIP model. Image Anal. Stereol. 21, 121–125 (2002).
18. M Lievin, F Luthon, Nonlinear color space and spatiotemporal MRF for hierarchical segmentation of face features in video. IEEE Trans. Image Process. 13(1), 63–71 (2004).
19. JM Palomares, J Gonzalez, ER Vidal, A Prieto, General logarithmic image processing convolution. IEEE Trans. Image Process. 15(11), 3602–3608 (2006).
20. JC Pinoli, J Debayle, Logarithmic adaptive neighborhood image processing (LANIP): introduction, connections to human brightness perception, and application issues. EURASIP J. Adv. Signal Process. 2007, Article ID 36105 (2007). doi:10.1155/2007/36105.
21. K Panetta, E Wharton, S Agaian, Human visual system-based image enhancement and logarithmic contrast measure. IEEE Trans. Syst. Man Cybern. B 38(1), 174–188 (2008).
22. K Panetta, EJ Wharton, SS Agaian, Logarithmic edge detection with applications. J. Comput. 3, 11–19 (2008).
23. H Gouinaud, Y Gavet, J Debayle, JC Pinoli, in Proc. 7th Int. Symposium on Image and Signal Processing and Analysis. Color correction in the framework of color logarithmic image processing (IEEE, Dubrovnik, Croatia, 2011), pp. 129–133.
24. M Jourlin, J Breugnot, F Itthirad, M Bouabdellah, B Close, in Advances in Imaging and Electron Physics, vol. 168, ed. by PW Hawkes. Logarithmic image processing for color images (Elsevier, 2011), pp. 65–107.
25. JC Pinoli, J Debayle, Adaptive generalized metrics, distance maps and nearest neighbor transforms on gray tone images. Pattern Recognit. 45(7), 2758–2768 (2012).
26. M Jourlin, E Couka, B Abdallah, J Corvo, J Breugnot, Asplünd’s metric defined in the logarithmic image processing (LIP) framework: a new way to perform double-sided image probing for non-linear grayscale pattern matching. Pattern Recognit. 47(9), 2908–2924 (2014).
27. G Deng, An entropy interpretation of the logarithmic image processing model with application to contrast enhancement. IEEE Trans. Image Process. 18(5), 1135–1140 (2009).
28. L Sbaiz, Y Feng, E Charbon, S Susstrunk, M Vetterli, in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing. The gigavision camera (IEEE, Taiwan, 2009), pp. 1093–1096.
29. G Deng, A generalized logarithmic image processing model based on the gigavision sensor model. IEEE Trans. Image Process. 21(3), 1406–1414 (2012).
30. SC Nercessian, K Panetta, SS Agaian, Multiresolution decomposition schemes using the parameterized logarithmic image processing model with application to image fusion. EURASIP J. Adv. Signal Process. 2011, Article ID 515084 (2011). doi:10.1155/2011/515084.
31. V Patrascu, in IEEE Int. Conf. on Fuzzy Systems. Fuzzy enhancement method using logarithmic models (IEEE, Budapest, 2004), pp. 1431–1436.
32. C Vertan, A Oprea, C Florea, L Florea, in Advanced Concepts for Intelligent Vision Systems. A pseudo-logarithmic image processing framework for edge detection (Springer, Juan-les-Pins, France, 2008), pp. 637–644.
33. L Navarro, G Deng, G Courbebaisse, The symmetric logarithmic image processing model. Digital Signal Process. 23(5), 1337–1343 (2013).
34. G Deng, A generalized unsharp masking algorithm. IEEE Trans. Image Process. 20(5), 1249–1261 (2011).
35. R Vorobel, in Proc. 2010 Int. Kharkov Symposium on Physics and Engineering of Microwaves, Millimeter and Submillimeter Waves. Logarithmic type image processing algebras (IEEE, Kharkiv, Ukraine, 2010), pp. 1–3.
36. DC Hoaglin, F Mosteller, JW Tukey, Understanding Robust and Exploratory Data Analysis (John Wiley & Sons, 1983).
37. N Otsu, A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979).
38. K Zuiderveld, Contrast limited adaptive histogram equalization, in Graphics Gems IV, ed. by PS Heckbert (Academic Press Professional, San Diego, CA, USA, 1994), pp. 474–485.
39. E Peli, Contrast in complex images. J. Opt. Soc. Am. A 7, 2032–2040 (1990).
Author information
Affiliations
Corresponding author
Additional information
Competing interests
The author declares that he has no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Deng, G. A generalized gamma correction algorithm based on the SLIP model. EURASIP J. Adv. Signal Process. 2016, 69 (2016). https://doi.org/10.1186/s13634-016-0366-7
Received:
Accepted:
Published:
Keywords
 Symmetric LIP model
 Generalized linear system
 Contrast enhancement