A parallel nonlinear adaptive enhancement algorithm for low- or high-intensity color images
EURASIP Journal on Advances in Signal Processing volume 2014, Article number: 70 (2014)
Abstract
This article addresses the problem of color image enhancement for images with low or high intensity and poor contrast (LIPC or HIPC). A parallel nonlinear adaptive enhancement (PNAE) algorithm using information from the local neighborhood is presented to resolve the problem in parallel. The PNAE algorithm consists of three steps. First, a red-green-blue (RGB) color image is converted to an intensity image; then an adaptive intensity adjustment with local contrast enhancement is performed in parallel; and finally, colors are restored. The PNAE algorithm can be adjusted to control separately the level of enhancement of the overall lightness and of the contrast achieved at the output. Most of the parameters used in PNAE are robust for LIPC and HIPC color image enhancement. Experimental results show that PNAE outperforms two popular methods in both computational efficiency and overall content preservation while improving local contrast for LIPC and HIPC image enhancement.
1 Introduction
Color images obtained by image acquisition devices such as digital cameras usually suffer from certain defects, such as low or high intensity with poor contrast and noise, and these defects result in poor visual quality. The study of image enhancement to improve visual quality has gained increasing attention and become an active area in image and video processing [1]. Reviews of various image enhancement techniques can be found in [2, 3]. This article focuses on processing two kinds of poor-contrast color images: low intensity with poor contrast (LIPC) images and high intensity with poor contrast (HIPC) images. Some LIPC images are poor-contrast images with only dark regions (Figure 1a), and some are poor-contrast images with both bright and dark regions (Figure 1b,c). HIPC images usually result from overexposure and have an overall high intensity with poor contrast (Figure 1d).
The objective of LIPC and HIPC image enhancement is to improve the perception of information contained in an image for human viewers, or to provide ‘better’ inputs for other automated image processing systems. The main requirement to achieve this objective, and the focus of this paper, is to properly adjust the intensity and enhance the local contrast simultaneously.
Traditional image enhancement methods, including logarithmic compression, gamma correction, and histogram equalization [4], have certain intensity adjustment abilities, but their abilities for contrast enhancement or detail protection are weak. Their limited performance results in features being lost or left unenhanced [5]. In addition, they may not enhance all regions proportionately. For example, with logarithmic enhancement, the low-intensity pixel values can be enhanced only at the loss of high-intensity values [6]; with histogram equalization, the equalization may over-enhance the image, resulting in an undesired loss of visual data, quality, and intensity scale. Enhancement results suffer from local detail losses due to the global treatment of the images [2]. Since global processing is often the basic idea of these techniques, they are not sophisticated enough to preserve or enhance significant image details.
There are some image enhancement algorithms that can adjust the intensity and enhance the contrast at the same time. Retinex-based algorithms, such as Multiscale Retinex (MSR), are capable of providing better-than-observed imagery, especially where scene content is greatly obscured, as in the case of rain, fog, or severe haze [7]. The Multiscale Retinex with Color Restoration (MSRCR) [8] is an effective technique that achieves intensity adjustment, local contrast enhancement, and color consistency simultaneously. However, a common problem of Retinex-based algorithms is that separate nonlinear processing is needed for each of the three color bands, and the color restoration is nonlinear. This not only produces artifacts at the boundaries but also makes the algorithm computationally intensive [6]. In 2005, Tao and Asari proposed a more promising algorithm called the adaptive and integrated neighborhood-dependent approach for nonlinear enhancement (AINDANE) [9], which is more effective than MSRCR. The AINDANE method is composed of two processes: an adaptive intensity enhancement that adjusts the intensity of the image, followed by an adaptive contrast enhancement that restores the contrast after the intensity enhancement. The AINDANE method usually performs well for low-illuminated images, but it may over-enhance the dark regions of an image and provides no solution for overexposed images. In 2006, an algorithm called optimal fuzzy transformation (OFT) was proposed [10]. The OFT is an effective technique that achieves better visualization of details in images with poor contrast, regardless of the dark or light background of these details, but its two processes of intensity adjustment and contrast enhancement are not parallel.
To provide a solution for images captured under extremely non-uniform lighting conditions, methods like the multilevel windowed inverse sigmoid (MWIS) [11] and the space-variant luminance map (SVLM) [12] were proposed in 2006 and 2010, respectively. The major contribution of MWIS is the use of a multilevel windowed inverse sigmoid function to render images captured under extremely non-uniform lighting conditions. The major contribution of SVLM is a two-dimensional gamma correction developed to adjust the intensity of dark regions and bright regions in the luminance domain. The two algorithms reveal the details of the original image and minimize the loss of edge sharpness under non-uniform and low lighting conditions. Two innovative techniques named locally tuned sine nonlinear enhancement (LTSNE) [5] and neighborhood-dependent nonlinear enhancement (NDNE) [6] were proposed in 2008 and 2010, respectively. LTSNE and NDNE can also recover fine details of the original image. The major contribution of LTSNE is the simultaneous enhancement and compression of dark and bright pixels using a nonlinear sine squared function with image-dependent parameters. The major contribution of NDNE, an improved version of LTSNE, is that the computations of the image-dependent parameters are simplified: the processing time is reduced, and the visual quality of the processed image is improved. Although the algorithms mentioned above are adaptive processing methods based on the local neighborhood, they share a common disadvantage: the two processes of intensity adjustment and contrast enhancement are not parallel. From the implementation point of view, parallel processing is faster on multiprocessors and improves the computational efficiency in practical applications. To further improve computational efficiency, a simultaneous dynamic range compression and local contrast enhancement (SDRCLCE) algorithm [1] was proposed in 2011.
The major contributions of the SDRCLCE algorithm are its parallelization property and its ability to be combined with any continuously differentiable intensity mapping function. However, the SDRCLCE algorithm employs a complicated hyperbolic tangent function as the intensity mapping function, which lowers the computational efficiency because its first-order derivative must also be computed. Moreover, the hyperbolic tangent function cannot be used to decrease image intensity, so it cannot enhance HIPC images.
To sum up, the image enhancement algorithms discussed above have certain deficiencies:

1.
Some algorithms are based on global processing and cannot effectively enhance local contrast.

2.
Some algorithms are not suited to a parallel structure.

3.
Some algorithms can only be used to enhance LIPC images, but not HIPC images.

4.
For some algorithms, the intensity mapping functions are complicated, or the normalization methods for the intensity values in the enhanced images are ineffective.
A parallel nonlinear adaptive enhancement (PNAE) algorithm based on the local neighborhood is proposed in this paper. The algorithm enhances both LIPC and HIPC images. We compare PNAE with NDNE and SDRCLCE both quantitatively and visually on LIPC and HIPC images, because these two popular algorithms were shown in [1, 6] to be more effective than many commonly used enhancement algorithms for such images. The main differences between PNAE and SDRCLCE/NDNE are summarized as follows:

1.
SDRCLCE and NDNE employ a complicated hyperbolic tangent function and a sine function as the intensity mapping function, respectively, which reduce the computational efficiency. The proposed PNAE algorithm employs a simple power function as the intensity mapping function, which can be used to enhance LIPC and HIPC images with higher computational efficiency.

2.
A new, simple, and effective normalization method is proposed in the PNAE algorithm, improving on the normalization method of SDRCLCE in both enhancement effect and computational efficiency.

3.
PNAE has the same parallel processing ability as SDRCLCE, which NDNE lacks.
In the following section, the PNAE algorithm is discussed in detail. Experimental results of the algorithm are discussed in Section 3, followed by the conclusions and discussions of future work in Section 4.
2 The PNAE algorithm
The PNAE algorithm for LIPC and HIPC image enhancement consists of three steps. First, a red-green-blue (RGB) color image is converted to an intensity image; then an adaptive intensity adjustment with contrast enhancement based on the local neighborhood is performed in parallel on the intensity image; and finally, colors are restored to produce the enhancement result. The primary step is the adaptive intensity adjustment with contrast enhancement based on the local neighborhood. Because the contrast may be degraded in the intensity-adjusted image, a contrast enhancement process is applied simultaneously to improve the visual effect. Finally, an enhanced color image is obtained by performing a linear color restoration process using the chromatic information of the original image. The structure of PNAE is shown in Figure 2, where the shaded blocks represent the primary step of the PNAE algorithm.
2.1 Adaptive intensity adjustment based on the local neighborhood
As mentioned above, we first need to convert an RGB color image to an intensity image. To compute the input intensity image, several existing methods can be employed, such as the NTSC standard method [13] and the intensity value V of the hue-saturation-value (HSV) [14] color model. According to [15], the HSV intensity value is suggested as it achieves color consistency in RGB color image enhancement. However, considering the computational efficiency and the convenience of comparison with the NDNE and SDRCLCE algorithms, the NTSC standard is used in this paper to obtain the intensity. The intensity formula is given by
I(x, y) = 0.299I_{ r }(x, y) + 0.587I_{ g }(x, y) + 0.114I_{ b }(x, y), (1)
where I_{ r }(x, y), I_{ g }(x, y), and I_{ b }(x, y) are the red, green, and blue components of a pixel located at (x, y) in the RGB color image. The intensity image is further normalized to
I_{in}(x, y) = I(x, y)/255. (2)
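Under the NTSC weights above, the conversion and normalization steps can be sketched as follows (a minimal sketch; the function names are ours):

```python
def ntsc_intensity(r, g, b):
    """NTSC-standard luma of an 8-bit RGB pixel; the weights sum to 1."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def normalize_intensity(i):
    """Map an 8-bit intensity value into [0, 1], giving I_in(x, y)."""
    return i / 255.0
```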
To decrease the intensity values of high-illumination pixels and increase the intensity values of low-illumination pixels simultaneously, the intensity image is treated with compression and enhancement processes, respectively, using a specifically designed nonlinear sine intensity mapping function defined in [6] as
The parameter q in (3) corresponds to the local mean intensity value of the pixel. By the Taylor expansion given in the Appendix, the sine function can be approximated by a power function, yielding formula (4). Normalizing formula (4) and letting p = 2q, we get the new normalized intensity mapping function
T(I_{in}(x, y)) = [I_{in}(x, y)]^{p}. (5)
The power p is given as
where I_{ave} (x, y) ∈ [0,1] is the normalized local mean intensity value of the pixel at location (x, y), c_{1} and c_{2} are constants determined empirically, and ϵ = 0.01 is a numerical stability factor introduced to avoid division by zero when I_{ave}(x, y) = 1.
It is easy to see that p is increasing in I_{ave}(x, y). Curves of function (5) are plotted in Figure 3 for p = 0.2, 0.4, 0.7, 1, 2, 3, and 8. As shown in Figure 3, the intensity mapping function (5) is an increasing concave function for p < 1 and an increasing convex function for p > 1. Notice that if a pixel lies in a dark neighborhood, then I_{ave}(x, y) is small and, with appropriate c_{1} and c_{2}, p is less than 1, so T(I_{in}(x, y)) is larger than I_{in}(x, y). Hence, the intensity of a pixel in a dark neighborhood is pulled up. Conversely, the intensity of a pixel in a bright neighborhood is pulled down. Therefore, the intensity mapping function (5) can adjust image intensity adaptively, based on the local neighborhood.
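The behavior of the mapping function (5) can be sketched directly; the exponent p would come from Eq. (6) and its empirical constants c_{1} and c_{2}, so here it is simply passed in:

```python
def intensity_map(i_in, p):
    """Power-function intensity mapping of Eq. (5): T(I) = I**p, I in [0, 1].

    In the paper, p depends on the local mean intensity via Eq. (6);
    here it is taken as a given parameter.
    """
    return i_in ** p

# A pixel in a dark neighborhood (p < 1) is pulled up,
# one in a bright neighborhood (p > 1) is pulled down:
lifted = intensity_map(0.2, 0.5)      # greater than 0.2
compressed = intensity_map(0.9, 2.0)  # less than 0.9
```

The endpoints 0 and 1 are fixed for every p, so the mapping never pushes values out of the normalized range.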
Generally, noise may also be enhanced as I_{ave}(x, y) approaches 0, but the enhancement of noise in extremely dark regions can be restrained by the parameter c_{2} in formula (6). The parameter c_{1} is used to avoid an excessive lowering of pixel values in extremely bright regions caused by a very large value of I_{ave}(x, y)/(1 − I_{ave}(x, y)). The effects of c_{1} and c_{2} will be discussed in detail in Section 3.
In PNAE, the local average of the image I_{ave}(x, y) in formula (6) is computed by
where the operator ⊗ denotes the 2D convolution operation, and F_{LPF}(x, y) denotes a spatial low-pass filter kernel function subject to the condition
In PNAE, F_{LPF}(x, y) is a Gaussian smoothing operator, and I_{ave}(x, y) is computed by
where (x, y) is the center pixel of the M × M neighborhood Ω, I_{in}(m, n) is the intensity value of the pixel in the location (m, n) of the original intensity image, and ω_{ mn } is the weight of the pixel in the location (m, n) given by
where σ is the standard deviation of ω_{ mn }, and K is the normalization factor given by
Formula (9) is a discrete form of formula (7) with a discrete spatial low-pass filter, the Gaussian kernel function in (10). In NDNE and SDRCLCE, a multi-scale and a single-scale Gaussian smoothing operator, respectively, is used to produce the mean intensity image. Considering the computational efficiency, a single-scale Gaussian smoothing operator with one neighborhood is used in PNAE. The effects of the neighborhood radius R (M = 2R + 1) and σ will also be discussed in detail in Section 3.
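Equation (9) amounts to a Gaussian-weighted average over an M × M window. A direct sketch follows; the border handling by coordinate clamping is our assumption, as the paper does not specify it:

```python
import math

def gaussian_local_mean(img, x, y, radius=1, sigma=1.0):
    """Gaussian-weighted local mean I_ave (Eq. 9) over an M x M window,
    M = 2*radius + 1; img is a 2D list of normalized intensities.

    Border pixels are handled by clamping coordinates (an assumption).
    """
    h, w = len(img), len(img[0])
    acc, k = 0.0, 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Gaussian weight omega_mn of Eq. (10)
            wgt = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
            m = min(max(y + dy, 0), h - 1)
            n = min(max(x + dx, 0), w - 1)
            acc += wgt * img[m][n]
            k += wgt  # normalization factor K of Eq. (11)
    return acc / k
```

On a constant image the weighted mean reproduces the constant, which is a quick sanity check on the normalization by K.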
2.2 Adaptive contrast enhancement based on the local neighborhood
The formula of intensity adjustment with local contrast enhancement used in SDRCLCE is given by
where ${\mathit{C}}_{\mathrm{out}}^{\mathrm{enh}}\left[{\mathit{I}}_{\mathrm{in}}\left(\mathit{x},\mathit{y}\right)\right]$ denotes the result of intensity adjustment with local contrast enhancement for I_{in}(x, y), $\overline{\mathit{I}}\left(\mathit{x},\mathit{y}\right)$ is given by
and T′[I_{in}(x, y)] denotes the first-order derivative of the mapping function (5), which is T′[I_{in}(x, y)] = p[I_{in}(x, y)]^{p − 1}. In formula (12), the term C_{out1} adjusts the intensity of the original image, and the term C_{out2} enhances the local contrast. Moreover, C_{out1} and C_{out2} do not depend on each other and can be computed independently and simultaneously; i.e., formula (12) is a parallel process for intensity adjustment and local contrast enhancement on a dual-core processor.
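Formula (12) itself is not reproduced in this extract. Purely as an illustration of the parallel split described above (our assumption, not the authors' exact equation), the sketch below builds an intensity term from T and a contrast term from T′ times the deviation of the pixel from its local mean, and evaluates the two concurrently, since they do not depend on each other:

```python
from concurrent.futures import ThreadPoolExecutor

def enhance_pixel(i_in, i_mean, p):
    """Sketch of the two-term parallel structure of formula (12).

    c_out1 adjusts intensity via the power mapping T(I) = I**p; c_out2 adds
    a local-contrast term built from T'(I_in) and (I_in - I_mean). The
    exact combination in the paper may differ from this sketch.
    """
    def c_out1():
        return i_in ** p                              # intensity term
    def c_out2():
        return p * i_in ** (p - 1) * (i_in - i_mean)  # contrast term
    with ThreadPoolExecutor(max_workers=2) as ex:
        f1, f2 = ex.submit(c_out1), ex.submit(c_out2)
        return f1.result() + f2.result()
```

When a pixel equals its local mean, the contrast term vanishes and the output reduces to the plain intensity mapping.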
In formula (12), the value of ${\mathit{C}}_{\mathrm{out}}^{\mathrm{enh}}\left[{\mathit{I}}_{\mathrm{in}}\left(\mathit{x},\mathit{y}\right)\right]$ can be out of the range of [0, 1]. In PNAE, the proposed method to normalize the value of ${\mathit{C}}_{\mathrm{out}}^{\mathrm{enh}}\left[{\mathit{I}}_{\mathrm{in}}\left(\mathit{x},\mathit{y}\right)\right]$ is given by
where ${\mathit{C}}_{\mathrm{outnorm}}^{\mathrm{enh}}\left[{\mathit{I}}_{\mathrm{in}}\left(\mathit{x},\mathit{y}\right)\right]$ denotes the normalized value for the output value of I_{in}(x, y). Though quite simple, the proposed normalization method is effective and has a higher computational efficiency than the normalization method in SDRCLCE, as confirmed by our experiments in Section 3.
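The paper's normalization formula (14) is not reproduced in this extract. Purely as a placeholder consistent with the description (quite simple and cheaper than the SDRCLCE method), the sketch below clamps each enhanced value back into [0, 1]; this is our assumption, not necessarily the authors' exact method:

```python
def normalize_output(values):
    """Hypothetical per-pixel clamp into [0, 1] (assumed stand-in for Eq. 14).

    A clamp needs no global statistics of the image, which is one way a
    normalization step can be cheaper than a global rescaling pass.
    """
    return [min(max(v, 0.0), 1.0) for v in values]
```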
2.3 Color restoration
In order to recover the enhanced RGB color image, a simplified multiplicative model based on the chromatic information of the original image can be applied with minimum color distortion. If ${\mathit{P}}_{\mathrm{in}}^{\mathrm{RGB}}=\left[{\mathit{R}}_{\mathrm{in}},{\mathit{G}}_{\mathrm{in}},{\mathit{B}}_{\mathrm{in}}\right]$ and ${\mathit{P}}_{\mathrm{out}}^{\mathrm{RGB}}=\left[{\mathit{R}}_{\mathrm{out}},{\mathit{G}}_{\mathrm{out}},{\mathit{B}}_{\mathrm{out}}\right]$ denote the input and output color values of each pixel in RGB color space, respectively, then the multiplicative model of linear color remapping in RGB color space is expressed as [1]
where β(x, y) is given by
β(x, y) = C_{outnorm}^{enh}[I_{in}(x, y)]/(I_{in}(x, y) + ϵ), (16)
and ϵ = 0.01 is a numerical stability factor introduced to avoid division by zero when I_{in}(x, y) = 0.
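The multiplicative remapping of Eqs. (15) and (16) can be sketched as follows; the form of β used here (enhanced intensity divided by I_{in}(x, y) + ϵ) is inferred from the surrounding text, so treat it as a hedged reconstruction:

```python
def restore_color(rgb_in, i_in, c_out_norm, eps=0.01):
    """Linear color restoration: every RGB channel is scaled by the same
    factor beta = enhanced intensity / (original intensity + eps), so the
    channel ratios (and hence the chroma) of the input are preserved.
    """
    beta = c_out_norm / (i_in + eps)
    return tuple(beta * c for c in rgb_in)
```

Because a single scalar multiplies all three channels, hue is unchanged and only lightness moves, which is why the model introduces minimal color distortion.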
3 Results and discussion
In this section, we focus on five issues that include feasibility test and parameter influence discussion of the proposed method, demonstrations of LIPC and HIPC image enhancement results, visual comparisons with NDNE and SDRCLCE, computational speed evaluation, and quantitative comparisons with the results produced by these methods.
3.1 Feasibility test and parameter influences on PNAE
We use a LIPC image (Figure 4a) acquired with a digital camera in daylight and the LIPC image (Figure 5a) from the Test Images Database of the Computer Vision Group [16] to test the feasibility of PNAE. The result of each step of PNAE is shown in Figures 4 and 5. For the most important step, the intensity adjustment with local contrast enhancement, Figures 4d and 5d show remarkably high-quality results. The great visual improvement demonstrated by comparing Figures 4e and 5e with their original images confirms the feasibility of the PNAE algorithm.
There are four parameters in the PNAE algorithm: the neighborhood radius R (M = 2R + 1), c_{1}, c_{2}, and σ. In order to study the effects of the parameters, we design four parameter tweaking experiments below:

1.
Tweaking σ with fixed c_{1}, c_{2}, and R (Figure 6A)

2.
Tweaking R with fixed c_{1}, c_{2}, and σ (Figure 6B)

3.
Tweaking c_{1} with fixed R, c_{2}, and σ (Figure 6C)

4.
Tweaking c_{2} with fixed R, c_{1}, and σ (Figure 6D)
The parameter values of the above four experiments are shown in Table 1. Figure 6(a) is a LIPC image from the Test Images Database of the Computer Vision Group [16]. As shown in part A of Figure 6, the algorithm is robust to the parameter σ. As shown in part B, a larger R value leads to a more obvious contrast enhancement result; to avoid over-enhancing the image and to improve the computational efficiency, R = 1 is suitable in most experiments. As shown in part C, a larger c_{1} value leads to a lower overall lightness, and c_{1} ∈ (0, 0.4] is suitable in most experiments. As shown in part D, a larger c_{2} value also leads to a lower overall lightness. The parameter c_{2} ∈ [0.3, 0.5] is suitable for LIPC image enhancement, and c_{2} > 1 is suitable for HIPC image enhancement, as verified by extensive experiments. In the next section, we will see that the parameters of the PNAE algorithm are fairly robust and that only the parameter c_{2} needs to be tweaked for LIPC and HIPC image enhancement.
3.2 LIPC and HIPC image enhancement result demonstration
The PNAE algorithm has been applied to a large number of LIPC and HIPC images captured under varying lighting conditions, as well as images from the Test Images Database of the Computer Vision Group [16], for performance evaluation. In this section, we test the PNAE algorithm with three LIPC images and one HIPC image. Figure 7b shows the enhancement result for a LIPC image with only dark regions. Figure 7d,f shows the enhancement results for LIPC images with both bright and dark regions. Figure 7h shows the enhancement result for an overexposed HIPC image. A larger value of c_{2} is needed for the overexposed image shown in Figure 7g in order to compress the overexposed area with the specifically designed nonlinear intensity mapping function (5). The parameter settings for all PNAE experiments are given in Table 2. It can be seen from Table 2 that the parameters R, c_{1}, and σ are fairly robust and that it is sufficient to tweak only the parameter c_{2} for LIPC and HIPC image enhancement. Figure 7b,d,f achieves better visual effects. The letters in the red rectangle in Figure 7h are much clearer, while they cannot be seen clearly in the original image.
To provide a fair comparison, we use the same intensity mapping function (5) and the same Gaussian smoothing operator to calculate I_{ave}(x, y) for the proposed PNAE algorithm and SDRCLCE algorithm in the following experiments with only different normalization methods.
3.3 The visual quality comparisons with NDNE and SDRCLCE
Figures 8 and 9 are provided for visual quality comparisons with NDNE and SDRCLCE. To provide a fair comparison, the parameter settings of PNAE and SDRCLCE are the same. The parameter settings of NDNE are given in Table 3. In Figures 8 and 9, NDNE produces somewhat over-enhanced results. The enhancement results in Figure 8c,d by SDRCLCE and PNAE, respectively, have almost the same visual effects, but the situation is different in Figure 9. The enhancement result in Figure 9d by PNAE is visually better than those in Figure 9b,c, especially in the rectangular regions. The different results in the rectangular regions of Figure 9c,d are caused by the different normalization methods used in PNAE and SDRCLCE, respectively. Figure 9 shows that the image enhanced by PNAE has an overall higher visual quality in both the low-intensity and high-intensity regions.
3.4 Computational speed evaluation
Table 4 lists the results of comparisons of PNAE with NDNE and SDRCLCE on average computation time, respectively, using Matlab version 7.9 (R2009b) on the platform of Intel Core2 processor, running at 2.00 GHz with 1.93 GB of memory, with twenty 360 × 240 and twenty 460 × 350 test color images.
Table 4 shows that the average processing time of PNAE is less than that of SDRCLCE and much shorter than that of NDNE: the PNAE algorithm requires approximately 60% of the average processing time of NDNE and 80% of that of SDRCLCE. PNAE requires less processing time than NDNE because NDNE uses a complicated intensity mapping function and cannot be parallelized within its sequential processing framework. PNAE requires less processing time than SDRCLCE because it uses a more efficient and simpler normalization method. The average processing times of PNAE and SDRCLCE are both much shorter than that of NDNE because PNAE and SDRCLCE are based on a parallel processing architecture.
3.5 The quantitative comparisons with NDNE and SDRCLCE
A quantitative assessment of image enhancement is not an easy task, as an improved perception is difficult to quantify owing to the lack of a priori knowledge of the most favorable enhanced image. It is therefore necessary to establish a basis on which a good measure of enhancement can be defined [17]. In this section, the visually optimal (VO) region, the EMEE, and the average discrete entropy difference DE_{ave} are used as quantitative measures to analyze the experimental results, applied to the intensity channel only of each original color image and its enhanced counterpart.
In [18], a statistical method is used to compare the performance of different enhancement algorithms. Figure 10 illustrates the concept of the statistics of visual representation, which is composed of the global mean of the image and the global mean of the regional standard deviation of the image. The VO region approximately ranges from 40 to 80 for the mean of standard deviation and from 100 to 200 for the intensity mean of the image.
Recently, some practically efficient image enhancement measures based on the human visual system have been proposed, such as AWC [19, 20] and EMEE [20, 21]. The contrast measure AWC is based on the Weber contrast law and was later developed into the EMEE, based on Fechner's law and the well-known entropy concept. We use the EMEE to measure our results. The EMEE is calculated by dividing an image into k_{1} × k_{2} blocks, processing each block with Equation 17, and averaging the results. The EMEE is summarized by the following formula:
where Φ is a given enhancement algorithm; par denotes parameters in the enhancement algorithm; k_{1} and k_{2} are the numbers of horizontal and vertical blocks in an image, which are related to the blocks and the image size; ${\mathit{I}}_{max;\mathit{k},\mathit{l}}^{\mathit{W}}$ and ${\mathit{I}}_{min;\mathit{k},\mathit{l}}^{\mathit{W}}$ are the maximum and minimum intensity values of the block, respectively; and c is a small constant to avoid dividing by 0. A higher EMEE value indicates an image with a higher contrast.
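Equation 17 is not reproduced in this extract; the sketch below follows the standard EMEE construction (Agaian's entropy-style block contrast measure, consistent with the description above), with the exponent alpha and the constant c taken as assumed parameters:

```python
import math

def emee(img, block=2, alpha=1.0, c=1e-4):
    """Blockwise EMEE sketch: split the image into block x block tiles,
    compute r = Imax / (Imin + c) per tile, accumulate the entropy-style
    term alpha * r**alpha * ln(r), and average over all tiles.
    """
    h, w = len(img), len(img[0])
    total, nblocks = 0.0, 0
    for y0 in range(0, h - block + 1, block):
        for x0 in range(0, w - block + 1, block):
            vals = [img[y][x] for y in range(y0, y0 + block)
                              for x in range(x0, x0 + block)]
            r = max(vals) / (min(vals) + c)   # c avoids division by zero
            total += alpha * (r ** alpha) * math.log(r)
            nblocks += 1
    return total / nblocks
```

As the text states, a higher EMEE value indicates a higher-contrast image; a higher-contrast test image should score above a flatter one.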
The discrete entropy DE of an image X measures its content, where a higher value indicates an image with richer details [17]. It is defined as
where p(x_{1}) is the probability of pixel intensity x_{1} that is estimated from the normalized histogram. The average absolute discrete entropy difference DE_{ave} between the input image X_{ i } and the output image Y_{ i } is defined as
For an enhancement algorithm, a smaller DE_{ave} value indicates a better ability to preserve the overall content of the input image while improving its contrast [17].
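The entropy measures in (18) and (19) can be sketched as follows (pixel intensities are assumed to be discrete values, so the normalized histogram reduces to simple counts):

```python
import math
from collections import Counter

def discrete_entropy(pixels):
    """Discrete entropy (Eq. 18): DE = -sum p(x) * log2 p(x) over the
    normalized histogram of a flat list of pixel intensities."""
    n = len(pixels)
    counts = Counter(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def de_ave(inputs, outputs):
    """Average absolute discrete entropy difference (Eq. 19) over
    corresponding input/output image pairs."""
    return sum(abs(discrete_entropy(x) - discrete_entropy(y))
               for x, y in zip(inputs, outputs)) / len(inputs)
```

An image with two equally frequent intensity values has entropy exactly 1 bit, and identical input/output pairs give DE_{ave} = 0, matching the interpretation of DE_{ave} as a content-preservation measure.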
In this section, we use four typical original images, Figures 7a,e, 8a, and 9a, to calculate the global intensity mean value M and the regional mean value of the standard deviation $\overline{\mathit{D}}$. The results for NDNE, SDRCLCE, and PNAE are shown in Table 5. We also calculate the EMEE (with a block size of 8 × 8), DE, and DE_{ave} values for all images in Table 6, using their original normalized intensity images and the corresponding normalized intensity-adjusted, contrast-enhanced images. The results of the comparisons among NDNE, SDRCLCE, and PNAE are shown in Table 6.
As shown in Table 5, Figures 7a,e, 8a, and 9a are all enhanced to the VO region by PNAE and SDRCLCE, but Figures 8a and 9a are not enhanced to the VO region by NDNE because the regional standard deviation $\overline{\mathit{D}}$ is more than 80. The M results of PNAE are similar to those of NDNE because of their approximately equivalent intensity mapping functions. The M and $\overline{\mathit{D}}$ values of PNAE are similar to those of SDRCLCE on the whole because of the only difference on the normalization method.
As shown in Table 6, except for Figure 7c,e, the EMEE values of PNAE for the remaining six images are greater than the EMEE values of both NDNE and SDRCLCE. The average absolute discrete entropy differences DE_{ave} for PNAE, NDNE, and SDRCLCE are 0.3548, 0.5418, and 0.4038, respectively. Since the EMEE and the DE_{ave} are related to the ability of contrast enhancement and the preservation of overall image content, one can say that the proposed PNAE preserves the overall content of the image better than NDNE and SDRCLCE while improving its local contrast.
4 Conclusions
This paper proposes an adaptive algorithm of intensity adjustment and local contrast enhancement for LIPC and HIPC color image enhancement. The proposed PNAE algorithm, inspired by the algorithm SDRCLCE, is able to simultaneously enhance the image intensity and local contrast with better visual effects. The performance of the proposed PNAE algorithm has been compared with two popular methods both quantitatively and visually. Experimental results show that the PNAE algorithm not only outperforms those two in terms of computational efficiency, but also provides better visual representation in quantitative comparisons. Compared with NDNE and SDRCLCE, PNAE has the following merits:

1.
PNAE has a higher computational efficiency than NDNE and SDRCLCE because PNAE uses a simpler intensity mapping function and a simpler normalization method.

2.
Some parameters in PNAE are fairly robust for LIPC and HIPC color image enhancement and make the algorithm adjustable to separately control the level of enhancement of the overall lightness and of the contrast achieved at the output.

3.
The proposed PNAE preserves the overall content of the image better than NDNE and SDRCLCE while improving its local contrast.
Moreover, the PNAE algorithm is amenable to parallel processing like the SDRCLCE algorithm and can be used to enhance both LIPC and HIPC color images like the NDNE algorithm. The acceleration of PNAE and the optimal design of its parameters are left to future study.
5 Consent
Written informed consent was obtained from the patient’s guardian/parent/next of kin for the publication of this report and any accompanying images.
Appendix
Using the Taylor mean value theorem, we have
where $R_{2m}(x) = \frac{\sin^{(2m+1)}(\theta x)}{(2m+1)!}\,x^{2m+1}$, $0 < \theta < 1$.
Let m = 1; according to (20), there is
and the error is
According to (21), there is
Authors' information
ZZ received his M.S. degree in computational mathematics from Chengdu University of Technology, Chengdu, China, in 2006. He is now a Ph.D. candidate in the Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, China. His research interests include digital image processing and pattern recognition.
NS was born in 1968. He is a Ph.D. degree holder and a professor in National Key Laboratory of Science & Technology on Multispectral Information Processing, Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology (HUST). He is a vice dean of the automation institute, HUST. His research interest covers computational modeling of biological vision perception, and applications in computer vision, image analysis and object recognition based on statistical learning, medical image processing and analysis, interpretation of remote sensing images, and intelligent video surveillance.
XH was born in 1973 and earned her Ph.D. at the Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology in 2008. Now she is a professor and a vice dean in the School of Mathematics and Computer Science, Wuhan Textile University, People's Republic of China. Her research interests include image processing, virtual reality technology, and computer vision.
References
 1.
Tsai CY, Chou CH: A novel simultaneous dynamic range compression and local contrast enhancement algorithm for digital video cameras. EURASIP Journal on Image and Video Processing 2011, 2011:6. https://link.springer.com/article/10.1186/1687-5281-2011-6
 2.
Bedi SS, Khandelwal R: Various image enhancement techniques - a critical review. International Journal of Advanced Research in Computer and Communication Engineering 2013, 2(3):1605-1609.
 3.
Saini MK, Narang D: Review on image enhancement in spatial domain. In Proc. of Int. Conf. on Advances in Signal Processing and Communication, Lucknow, India, 21-22 June 2013; 76-79.
 4.
Kim TK, Paik JK, Kang BS: Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering. IEEE Transactions on Consumer Electronics 1998, 44(1):82-87. 10.1109/30.663733
 5.
Arigela S, Asari KV: A locally tuned nonlinear technique for color image enhancement. WSEAS Transactions on Signal Processing 2008, 4(8):514–519.
 6.
Patel R, Asari VK: A neighborhood dependent nonlinear technique for color image enhancement. In Image Analysis and Recognition, Lecture Notes in Computer Science 2010, Volume 6111: 23–34. https://doi.org/10.1007/978-3-642-13772-3_3
 7.
Woodell G, Jobson D, Rahman Z, Hines G: Advanced image processing of aerial imagery. In Proc. SPIE Visual Information Processing XIV. Kissimmee, FL; May 2006.
 8.
Jobson D, Rahman Z, Woodell G: A multiscale Retinex for bridging the gap between color images and human observation of scenes. IEEE Trans. Image Process. 1997, 6(7):965–976. https://doi.org/10.1109/83.597272
 9.
Tao L, Asari VK: Adaptive and integrated neighborhood-dependent approach for nonlinear enhancement of color images. J. Electron. Imaging 2005, 14(4):043006-1–043006-14.
 10.
Vorobel R, Berehulyak O: Gray image contrast enhancement by optimal fuzzy transformation. In Artificial Intelligence and Soft Computing – ICAISC 2006, Lecture Notes in Computer Science 2006, Volume 4029: 860–869. https://doi.org/10.1007/11785231_90
 11.
Asari VK, Oguslu E, Arigela S: Nonlinear enhancement of extremely high contrast images for visibility improvement. In Computer Vision, Graphics and Image Processing, Lecture Notes in Computer Science 2006, Volume 4338: 240–251. https://doi.org/10.1007/11949619_22
 12.
Lee S, Kwon H, Han H, Lee G, Kang B: A space-variant luminance map based color image enhancement. IEEE Transactions on Consumer Electronics 2010, 56(4):2636–2643.
 13.
Valensi G: Color television system. US Patent 3,534,153; 1970.
 14.
Marsi S, Impoco G, Ukovich A, Carrato S, Ramponi G: Video enhancement and dynamic range control of HDR sequences for automotive applications. EURASIP Journal on Advances in Signal Processing 2007, 2007(080971):1–9.
 15.
Tao L, Tompkins R, Asari VK: An illuminance-reflectance model for nonlinear enhancement of color images. In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Diego, CA, USA; 20–26 June 2005:159–166.
 16.
Computer Vision Group: Test Image Database. http://decsai.ugr.es/cvg/index2.php. Accessed 13 Mar 2014
 17.
Celik T, Tjahjadi T: Automatic image equalization and contrast enhancement using Gaussian mixture modeling. IEEE Transactions on Image Processing 2012, 21(1):145–156.
 18.
Jobson DJ, Rahman Z, Woodell GA: Statistics of visual representation. SPIE Proceedings 2002, 4736: 25–35. https://doi.org/10.1117/12.477589
 19.
Agaian SS: Visual morphology. In SPIE Proceedings, Nonlinear Image Processing X. Volume 3646. San Jose, CA; 1999:139–150. https://doi.org/10.1117/12.341081
 20.
Agaian SS, Silver B, Panetta KA: Transform coefficient histogram-based image enhancement algorithms using contrast entropy. IEEE Transactions on Image Processing 2007, 16(3):741–758.
 21.
Agaian SS, Panetta K, Grigoryan AM: A new measure of image enhancement. In Proc. IASTED Int. Conf. Signal Processing and Communication. Marbella, Spain; 19–22 Sept 2000.
Acknowledgements
This work was partially supported by the National Natural Science Foundation of China under grant no. 61103085.
Author information
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Zhou, Z., Sang, N. & Hu, X. A parallel nonlinear adaptive enhancement algorithm for low or high-intensity color images. EURASIP J. Adv. Signal Process. 2014, 70 (2014). https://doi.org/10.1186/1687-6180-2014-70
Received:
Accepted:
Published:
Keywords
 High intensity
 Low intensity
 Adaptive enhancement
 Parallel
 Statistics of visual representation