
# A parallel nonlinear adaptive enhancement algorithm for low- or high-intensity color images

Zhigang Zhou^{1, 2}, Nong Sang^{1} (corresponding author), and Xinrong Hu^{2}

*EURASIP Journal on Advances in Signal Processing* **2014**:70

https://doi.org/10.1186/1687-6180-2014-70

© Zhou et al.; licensee Springer. 2014

**Received:** 24 January 2014 · **Accepted:** 25 April 2014 · **Published:** 16 May 2014

## Abstract

This article addresses the problem of color image enhancement for images with low or high intensity and poor contrast (LIPC or HIPC). A parallel nonlinear adaptive enhancement (PNAE) algorithm using information from the local neighborhood is presented to solve the problem in parallel. The PNAE algorithm consists of three steps. First, a red-green-blue (RGB) color image is converted to an intensity image; then, an adaptive intensity adjustment with local contrast enhancement is performed in parallel; finally, the colors are restored. The PNAE algorithm can be adjusted to separately control the level of enhancement of the overall lightness and of the contrast achieved at the output. Most of the parameters used in PNAE are robust for LIPC and HIPC color image enhancement. Experimental results show that PNAE outperforms two popular methods in both computational efficiency and overall image content preservation while improving local contrast for LIPC and HIPC image enhancement.

## Keywords

- High intensity
- Low intensity
- Adaptive enhancement
- Parallel
- Statistics of visual representation

## 1 Introduction

The objective of LIPC and HIPC image enhancement is to improve the perception of the information contained in an image for human viewers, or to provide 'better' inputs for other automated image processing systems. The main challenge in achieving this objective is to properly adjust the intensity and enhance the local contrast simultaneously, which is the focus of this paper.

Traditional image enhancement methods have some ability to adjust intensity, but they are weak at contrast enhancement and detail preservation. These methods include logarithmic compression, gamma correction, histogram equalization [4], etc. Their limited performance results in features being lost or left unenhanced [5]. In addition, they may not enhance all regions proportionately. For example, logarithmic enhancement boosts low-intensity pixel values at the expense of high-intensity values [6]; histogram equalization may over-enhance the image, resulting in an undesired loss of visual data, quality, and intensity scale. Because these techniques treat the image globally, their results suffer from local detail losses [2]; they are not sophisticated enough to preserve or enhance significant image details.
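As a point of reference, the global methods discussed above can be sketched in a few lines of NumPy. This is illustrative only and not part of the paper's algorithm; note how each mapping is derived from global statistics, which is why local details can be over- or under-enhanced:

```python
import numpy as np

def gamma_correction(img, gamma=0.5):
    """Global gamma correction on a normalized [0, 1] image.
    gamma < 1 brightens, gamma > 1 darkens; every pixel is mapped
    by the same curve regardless of its neighborhood."""
    return np.clip(img, 0.0, 1.0) ** gamma

def histogram_equalization(img, bins=256):
    """Global histogram equalization on a normalized [0, 1] image.
    The mapping comes from the whole-image histogram, so dark and
    bright regions cannot be treated differently."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                               # normalize CDF to [0, 1]
    idx = np.minimum((img * (bins - 1)).astype(int), bins - 1)
    return cdf[idx]

dark = np.random.default_rng(0).uniform(0.0, 0.2, size=(32, 32))
print(gamma_correction(dark).mean() > dark.mean())  # gamma < 1 brightens
```

Both transforms apply one fixed curve to the whole image, which illustrates the global-processing limitation described above.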

There are some image enhancement algorithms that can adjust the intensity and enhance the contrast at the same time. Retinex-based algorithms, such as the multi-scale Retinex (MSR), are capable of providing better-than-observed imagery, especially where scene content is greatly obscured, as in the case of rain, fog, or severe haze [7]. The multi-scale Retinex with color restoration (MSRCR) [8] is an effective technique that achieves intensity adjustment, local contrast enhancement, and color consistency simultaneously. However, a common problem of Retinex-based algorithms is that separate nonlinear processing is needed for each of the three color bands, and the color restoration is nonlinear. This not only produces artifacts at the boundaries but also makes the algorithms computationally intensive [6]. In 2005, Tao and Asari proposed a more promising algorithm called the adaptive and integrated neighborhood-dependent approach for nonlinear enhancement (AINDANE) [9], which is more effective than MSRCR. The AINDANE method is composed of two processes: an adaptive intensity enhancement that adjusts the intensity of the image, followed by an adaptive contrast enhancement that restores the contrast after the intensity adjustment. AINDANE usually performs well on poorly illuminated images, but it may over-enhance the dark regions of an image, and it provides no solution for overexposed images. In 2006, an algorithm called optimal fuzzy transformation (OFT) was proposed [10]. OFT achieves better visualization of details in images with poor contrast, regardless of whether the background of those details is dark or light, but its two processes of intensity adjustment and contrast enhancement are not parallel.
To provide a solution to images captured under extremely nonuniform lighting conditions, methods like multilevel windowed inverse sigmoid (MWIS) [11] and space-variant luminance map (SVLM) [12] were proposed in 2006 and 2010, respectively. The major contribution of MWIS is using a multilevel windowed inverse sigmoid function to render images captured under extremely nonuniform lighting conditions. The major contribution of SVLM is that a two-dimensional gamma correction is developed to adjust the intensity in dark regions and bright regions in the luminance domain. The two algorithms reveal the details of the original image as well as minimize the loss of the edge sharpness in the nonuniform and low lighting conditions. Two innovative techniques named locally tuned sine nonlinear enhancement (LTSNE) [5] and neighborhood-dependent nonlinear enhancement (NDNE) [6] were proposed in 2008 and 2010, respectively. LTSNE and NDNE can also obtain fine details of the original image. The major contribution of LTSNE is the simultaneous enhancement and compression of dark and bright pixels using a nonlinear sine squared function with image-dependent parameters. For the NDNE algorithm, as an improved algorithm of LTSNE, its major contribution is that the computations of the image-dependent parameters are simplified. The processing time is reduced, and the visual quality of the processed image is improved. Although the algorithms mentioned above are adaptive processing methods based on local neighborhood, they have a common disadvantage that the two processes, intensity adjustment and the contrast enhancement, are not parallel. From the implementation point of view, parallel processing is faster on multiprocessors and improves the computational efficiency in practical applications. In order to further improve the computational efficiency, a simultaneous dynamic range compression and local contrast enhancement (SDRCLCE) algorithm [1] was proposed in 2011. 
The major contributions of the SDRCLCE algorithm are its parallelizable structure and its generality: it can be combined with any continuously differentiable intensity mapping function. However, SDRCLCE employs a complicated hyperbolic tangent function as the intensity mapping function, which reduces the computational efficiency because its first-order derivative must also be computed. Moreover, the hyperbolic tangent function cannot be used to decrease image intensity, and so cannot enhance HIPC images.

In summary, the algorithms reviewed above suffer from one or more of the following shortcomings:

1. Some algorithms are based on global processing and cannot effectively enhance local contrast.
2. Some algorithms are not suited to a parallel structure.
3. Some algorithms can only enhance LIPC images, not HIPC images.
4. For some algorithms, the intensity mapping functions are complicated, or the normalization methods for the intensity values in the enhanced images are ineffective.

To address these problems, we propose the PNAE algorithm. Compared with SDRCLCE and NDNE, its main advantages are as follows:

1. SDRCLCE and NDNE employ a complicated hyperbolic tangent function and a sine function, respectively, as the intensity mapping function, which reduces their computational efficiency. The proposed PNAE algorithm employs a simple power function as the intensity mapping function, which can be used to enhance both LIPC and HIPC images with higher computational efficiency.
2. A new, simple, and effective normalization method is proposed in the PNAE algorithm that improves on the normalization method of SDRCLCE in both enhancement effect and computational efficiency.
3. PNAE has the same parallel processing ability as SDRCLCE, while NDNE does not.

In the following section, the PNAE algorithm is discussed in detail. Experimental results of the algorithm are discussed in Section 3, followed by the conclusions and discussions of future work in Section 4.

## 2 The PNAE algorithm

### 2.1 Adaptive intensity adjustment based on the local neighborhood

*I*_{ r }(*x*, *y*), *I*_{ g }(*x*, *y*), and *I*_{ b }(*x*, *y*) are the red, green, and blue components of the pixel located at (*x*, *y*) in the RGB color image. The intensity image obtained from them is further normalized to [0, 1]. The exponent *q* in (3) corresponds to the local mean intensity value of the pixel. According to the mathematical analysis (see Appendix), we set *p* = 2*q* and obtain the new normalized intensity mapping function (5), in which the exponent *p* is given by (6),

where *I*_{ave} (*x*, *y*) ∈ [0,1] is the normalized local mean intensity value of the pixel at location (*x*, *y*), *c*_{1} and *c*_{2} are constants determined empirically, and *ϵ* = 0.01 is a numerical stability factor introduced to avoid division by zero when *I*_{ave}(*x*, *y*) = 1.

The exponent *p* is increasing in *I*_{ave}(*x*, *y*). The curves of the mapping function (5) are drawn in Figure 3 for *p* = 0.2, 0.4, 0.7, 1, 2, 3, and 8. As shown in Figure 3, the intensity mapping function (5) is an increasing concave function for *p* < 1 and an increasing convex function for *p* > 1. Notice that if a pixel lies in a dark neighborhood, then *I*_{ave}(*x*, *y*) is small and, with appropriate *c*_{1} and *c*_{2}, *p* is less than 1, so *T*(*I*_{in}(*x*, *y*)) is larger than *I*_{in}(*x*, *y*). Hence, the intensity of a pixel in a dark neighborhood is pulled up. Conversely, the intensity of a pixel in a bright neighborhood is pulled down. Therefore, the intensity mapping function (5) can adjust image intensity adaptively, based on the pixel's local neighborhood.

Generally, noise may also be amplified when *I*_{ave}(*x*, *y*) is close to 0, but the amplification of noise in extremely dark regions can be restrained by the parameter *c*_{2} in formula (6). The parameter *c*_{1} is used to prevent pixel values in extremely bright regions from being lowered too much by the very large value of $\frac{{\mathit{I}}_{\mathrm{ave}}\left(\mathit{x},\mathit{y}\right)}{1-{\mathit{I}}_{\mathrm{ave}}\left(\mathit{x},\mathit{y}\right)}$. The effects of *c*_{1} and *c*_{2} are discussed in detail in Section 3.
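Since formulas (5) and (6) are not reproduced here, the sketch below uses a plausible stand-in consistent with the text: the paper states that the mapping is a simple power function, so we take *T*(*I*) = *I*^*p* and assume an exponent of the form *p* = *c*_{1} + *c*_{2} · *I*_{ave}/(1 − *I*_{ave} + *ϵ*), built from the quantities named above; the paper's exact coefficients may differ:

```python
import numpy as np

EPS = 0.01  # numerical stability factor from the text

def adaptive_exponent(i_ave, c1=0.3, c2=0.4):
    """Hypothetical form of (6): an exponent p that increases with
    the local mean intensity i_ave in [0, 1]."""
    return c1 + c2 * i_ave / (1.0 - i_ave + EPS)

def intensity_mapping(i_in, i_ave, c1=0.3, c2=0.4):
    """Power-function mapping T(I) = I**p: p < 1 brightens pixels in
    dark neighborhoods, p > 1 darkens pixels in bright ones."""
    p = adaptive_exponent(i_ave, c1, c2)
    return np.clip(i_in, 0.0, 1.0) ** p

dark_pixel = intensity_mapping(np.array(0.1), np.array(0.1))
print(dark_pixel > 0.1)   # pulled up in a dark neighborhood
```

With these assumed defaults, a dark-neighborhood pixel (*I*_{ave} small, *p* < 1) is raised, and a bright-neighborhood pixel (*I*_{ave} near 1, *p* > 1) is lowered, matching the qualitative behavior described for (5) and (6).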

The local mean intensity *I*_{ave}(*x*, *y*) in formula (6) is computed by (7), where *F*_{LPF}(*x*, *y*) denotes a spatial low-pass filter kernel function and is subject to the condition (8). Here, *F*_{LPF}(*x*, *y*) is a Gaussian smoothing operator, and *I*_{ave}(*x*, *y*) is computed by (9), where (*x*, *y*) is the center pixel of the *M* × *M* neighborhood Ω, *I*_{in}(*m*, *n*) is the intensity value of the pixel at location (*m*, *n*) of the original intensity image, and *ω*_{ mn } is the weight of the pixel at location (*m*, *n*), given by (10), where *σ* is the standard deviation of *ω*_{ mn } and *K* is the normalization factor given by (11).

Formula (9) is the discrete form of formula (7), with the Gaussian kernel function in (10) as the discrete spatial low-pass filter. In NDNE and SDRCLCE, a multiscale and a single-scale Gaussian smoothing operator, respectively, are used to produce the mean intensity image. For computational efficiency, PNAE uses a single-scale Gaussian smoothing operator with one neighborhood. The effects of the neighborhood radius *R* (*M* = 2*R* + 1) and *σ* are also discussed in detail in Section 3.
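The Gaussian-weighted local mean described around (9) to (11) can be sketched directly, assuming the standard normalized Gaussian weights (the parameters `radius` and `sigma` play the roles of the paper's *R* and *σ*; border handling by edge replication is our choice, not the paper's):

```python
import numpy as np

def gaussian_local_mean(i_in, radius=1, sigma=1.0):
    """Local mean intensity over an M x M neighborhood (M = 2R + 1)
    using Gaussian weights normalized to sum to 1, in the spirit of
    (9)-(11).  Border pixels are handled by edge replication."""
    m = 2 * radius + 1
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    w = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    w /= w.sum()  # the normalization factor K makes the weights sum to 1
    padded = np.pad(i_in, radius, mode="edge")
    out = np.zeros_like(i_in, dtype=np.float64)
    for dy in range(m):          # accumulate the weighted shifted copies
        for dx in range(m):
            out += w[dy, dx] * padded[dy:dy + i_in.shape[0],
                                      dx:dx + i_in.shape[1]]
    return out

flat = np.full((5, 5), 0.5)
print(np.allclose(gaussian_local_mean(flat), 0.5))  # mean of a flat image
```

Because the weights sum to 1, a constant image is left unchanged, which is a quick sanity check on the normalization condition (8).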

### 2.2 Adaptive contrast enhancement based on the local neighborhood

Here $\overline{\mathit{I}}\left(\mathit{x},\mathit{y}\right)$ denotes the local mean of *I*_{in}(*x*, *y*), and *T*′[*I*_{in}(*x*, *y*)] denotes the first-order derivative of the mapping function (5), which is *T*′[*I*_{in}(*x*, *y*)] = *p*[*I*_{in}(*x*, *y*)]^{p - 1}. In formula (12), the term *C*_{out1} adjusts the intensity of the original image, and the term *C*_{out2} enhances the local contrast. Moreover, *C*_{out1} and *C*_{out2} do not depend on each other and can be computed independently and simultaneously; that is, formula (12) performs intensity adjustment and local contrast enhancement in parallel on a dual-core processor.

where ${\mathit{C}}_{\mathrm{outnorm}}^{\mathrm{enh}}\left[{\mathit{I}}_{\mathrm{in}}\left(\mathit{x},\mathit{y}\right)\right]$ denotes the normalized output value for *I*_{in}(*x*, *y*). Though quite simple, the proposed normalization method is effective and has a higher computational efficiency than the normalization method in SDRCLCE, as confirmed by our experiments in Section 3.
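The parallel structure of formula (12) can be illustrated with a two-worker sketch. The paper describes the mapping as a power function, so the mapping and its derivative below use *T*(*I*) = *I*^*p*; the exponent form and the constants are assumptions, and the thread pool stands in for the dual-core execution:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

EPS = 0.01

def exponent(i_ave, c1=0.3, c2=0.4):
    # Hypothetical form of (6): p increases with the local mean.
    return c1 + c2 * i_ave / (1.0 - i_ave + EPS)

def c_out1(i_in, p):
    """Intensity-adjustment term: the mapped intensity T(I) = I**p."""
    return i_in ** p

def c_out2(i_in, i_mean, p):
    """Local-contrast term: T'(I) times the high-pass residual."""
    return p * i_in ** (p - 1.0) * (i_in - i_mean)

rng = np.random.default_rng(1)
i_in = rng.uniform(0.05, 0.95, size=(64, 64))
i_ave = i_in.mean() * np.ones_like(i_in)   # stand-in for the Gaussian mean
p = exponent(i_ave)

# The two terms share no intermediate results, so they can run on
# separate cores and simply be summed afterwards, as in formula (12).
with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(c_out1, i_in, p)
    f2 = pool.submit(c_out2, i_in, i_ave, p)
    enhanced = f1.result() + f2.result()

print(enhanced.shape)  # (64, 64)
```

The point of the sketch is structural: because *C*_{out1} and *C*_{out2} are independent, neither worker waits on the other, which is the property PNAE exploits.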

### 2.3 Color restoration

Here *β*(*x*, *y*) is the color restoration factor, and *ϵ* = 0.01 is a numerical stability factor introduced to avoid division by zero when *I*_{in}(*x*, *y*) = 0.
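The color restoration step can be sketched as follows. The linear scaling below, with *β* as the ratio of enhanced to original intensity, is a common form consistent with the symbols in the text; the paper's exact definition of *β* is not reproduced here, so treat this as an assumption:

```python
import numpy as np

EPS = 0.01  # stability factor from the text

def restore_colors(rgb, i_in, i_enh):
    """Linear color restoration: scale each RGB channel by
    beta = I_enh / (I_in + eps), so channel ratios (hue) are kept
    while lightness follows the enhanced intensity.  One common
    form; the paper's exact beta may differ."""
    beta = i_enh / (i_in + EPS)
    return np.clip(rgb * beta[..., np.newaxis], 0.0, 1.0)

rgb = np.array([[[0.1, 0.05, 0.02]]])        # one dark pixel
i_in = rgb.mean(axis=-1)                     # simple intensity image
i_enh = np.sqrt(i_in)                        # brightened intensity
out = restore_colors(rgb, i_in, i_enh)
print(out[0, 0])   # channel ratios preserved while lightness rises
```

Because all three channels are multiplied by the same *β*, the red-to-green and green-to-blue ratios of the pixel are unchanged, which is what keeps the restored colors consistent with the original hue.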

## 3 Results and discussion

In this section, we address five issues: a feasibility test and a discussion of parameter influences for the proposed method, demonstrations of LIPC and HIPC image enhancement results, visual comparisons with NDNE and SDRCLCE, a computational speed evaluation, and quantitative comparisons with the results produced by these methods.

### 3.1 Feasibility test and parameter influences on PNAE

The PNAE algorithm has four adjustable parameters: the neighborhood radius *R* (*M* = 2*R* + 1), *c*_{1}, *c*_{2}, and *σ*. In order to study the effects of these parameters, we designed the four parameter-tweaking experiments below:

1. Tweaking *σ* with fixed *c*_{1}, *c*_{2}, and *R* (Figure 6A)
2. Tweaking *R* with fixed *c*_{1}, *c*_{2}, and *σ* (Figure 6B)
3. Tweaking *c*_{1} with fixed *σ*, *c*_{2}, and *R* (Figure 6C)
4. Tweaking *c*_{2} with fixed *σ*, *c*_{1}, and *R* (Figure 6D)

As shown in B of Figure 6, a larger *R* value leads to a more obvious contrast enhancement result. To avoid over-enhancing the image and to improve computational efficiency, *R* = 1 is suitable for most experiments. As shown in C of Figure 6, a larger *c*_{1} value leads to a lower overall lightness, and *c*_{1} ∈ (0, 0.4] is suitable for most experiments. As shown in D of Figure 6, a larger *c*_{2} value also leads to a lower overall lightness. As verified by many experiments, *c*_{2} ∈ [0.3, 0.5] is suitable for LIPC image enhancement, and *c*_{2} > 1 is suitable for HIPC image enhancement. In the next section, we will see that the parameters of the PNAE algorithm are fairly robust, and only the parameter *c*_{2} needs to be tweaked between LIPC and HIPC image enhancement.

**The parameter values of Figure 6**

### 3.2 LIPC and HIPC image enhancement result demonstration

A larger *c*_{2} is needed for the overexposed image shown in Figure 7g in order to compress the overexposed area via the specifically designed nonlinear intensity mapping function (5). The parameter settings for all PNAE experiments are given in Table 2. It can be seen from Table 2 that the parameters *R*, *c*_{1}, and *σ* are fairly robust, and it is sufficient to tweak only the parameter *c*_{2} for LIPC and HIPC image enhancement. Figure 7b,d,f achieves better visual effects. The letters in the red rectangle in Figure 7h are much clearer, while they cannot be seen clearly in the original image.

**The parameter values in all experiments of PNAE**

To provide a fair comparison, we use the same intensity mapping function (5) and the same Gaussian smoothing operator to calculate *I*_{ave}(*x*, *y*) for both the proposed PNAE algorithm and the SDRCLCE algorithm in the following experiments; only the normalization methods differ.

### 3.3 The visual quality comparisons with NDNE and SDRCLCE

### 3.4 Computational speed evaluation

**Comparisons of average processing times of NDNE, SDRCLCE, and PNAE (unit: seconds)**

| Color image size (pixels) | NDNE | SDRCLCE | PNAE |
|---|---|---|---|
| 360 × 240 | 0.18 | 0.12 | 0.11 |
| 460 × 350 | 0.30 | 0.19 | 0.17 |

Table 4 shows that the average processing time of PNAE is less than that of SDRCLCE and much shorter than that of NDNE: PNAE requires approximately 60% of the average processing time of NDNE and 80% of that of SDRCLCE. PNAE requires less processing time than NDNE because NDNE uses a complicated intensity mapping function and cannot be parallelized within its sequential processing framework. PNAE requires less processing time than SDRCLCE because PNAE uses a simpler and more efficient normalization method. The processing times of PNAE and SDRCLCE are both much shorter than that of NDNE because both are based on a parallel processing architecture.

### 3.5 The quantitative comparisons with NDNE and SDRCLCE

A quantitative assessment of image enhancement is not an easy task, as improved perception is difficult to quantify owing to the lack of *a priori* knowledge of the most favorable enhanced image. It is therefore necessary to establish a basis for defining a good measure of enhancement [17]. In this section, the visually optimal (VO) region, the EMEE, and the average discrete entropy difference DE_{ave} are used as quantitative measures to analyze the experimental results, computed on the intensity channel of each original color image and its enhanced counterpart.

The EMEE is computed by dividing the image into *k*_{1} × *k*_{2} blocks, processing each block with Equation 17, and averaging the results. The EMEE is summarized by the following formula:

where Φ is a given enhancement algorithm; par denotes parameters in the enhancement algorithm; *k*_{1} and *k*_{2} are the numbers of horizontal and vertical blocks in an image, which are related to the blocks and the image size; ${\mathit{I}}_{max;\mathit{k},\mathit{l}}^{\mathit{W}}$ and ${\mathit{I}}_{min;\mathit{k},\mathit{l}}^{\mathit{W}}$ are the maximum and minimum intensity values of the block, respectively; and *c* is a small constant to avoid dividing by 0. A higher EMEE value indicates an image with a higher contrast.
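The EMEE formula itself is not reproduced above; the sketch below uses the standard Agaian-style definition (an α-weighted logarithm of the per-block max/min ratio), which matches the symbols in the surrounding text but should be checked against the paper's Equation 17:

```python
import numpy as np

def emee(img, block=8, alpha=0.5, c=1e-4):
    """EMEE over non-overlapping block x block windows, using the
    standard Agaian-style form: mean of alpha * r**alpha * ln(r),
    where r = (block max) / (block min + c).  Assumed form; check
    against the paper's Equation 17."""
    h, w = img.shape
    k1, k2 = h // block, w // block
    total = 0.0
    for k in range(k1):
        for l in range(k2):
            win = img[k * block:(k + 1) * block, l * block:(l + 1) * block]
            r = win.max() / (win.min() + c)
            total += alpha * r**alpha * np.log(r)
    return total / (k1 * k2)

flat = np.full((32, 32), 0.5)
bumpy = flat.copy()
bumpy[::2, ::2] = 0.9          # add high-contrast texture
print(emee(bumpy) > emee(flat))  # higher contrast -> higher EMEE
```

As the text states, a higher EMEE value indicates higher contrast: the textured image scores above the flat one because every block's max/min ratio grows.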

The discrete entropy DE(*X*) of an image *X* measures its content, where a higher value indicates an image with richer details [17]. It is defined as $\mathrm{DE}\left(\mathit{X}\right)=-{\sum}_{{\mathit{x}}_{1}}\mathit{p}\left({\mathit{x}}_{1}\right){\mathrm{log}}_{2}\mathit{p}\left({\mathit{x}}_{1}\right)$, where *p*(*x*_{1}) is the probability of pixel intensity *x*_{1}, estimated from the normalized histogram. The average absolute discrete entropy difference DE_{ave} between the input images *X*_{ i } and the output images *Y*_{ i } is defined as the mean of $\left|\mathrm{DE}\left({\mathit{X}}_{\mathit{i}}\right)-\mathrm{DE}\left({\mathit{Y}}_{\mathit{i}}\right)\right|$ over all test images.

For an enhancement algorithm, a smaller DE_{ave} value indicates a better ability to preserve the overall content of the input image while improving its contrast [17].
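The two entropy measures above translate directly into code (histogram-based entropy is standard; only the bin count is our choice):

```python
import numpy as np

def discrete_entropy(img, bins=256):
    """Discrete entropy DE(X) in bits, estimated from the
    normalized histogram of a [0, 1] image."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins (0 * log 0 = 0)
    return float(-(p * np.log2(p)).sum())

def de_ave(inputs, outputs):
    """Mean absolute entropy difference over paired test images."""
    return float(np.mean([abs(discrete_entropy(x) - discrete_entropy(y))
                          for x, y in zip(inputs, outputs)]))

uniform = np.random.default_rng(0).uniform(0.0, 1.0, size=(64, 64))
constant = np.full((64, 64), 0.5)
print(discrete_entropy(constant))            # 0.0 bits: no detail
print(discrete_entropy(uniform) > 4.0)       # rich detail, high entropy
```

A constant image carries no detail and has zero entropy, while a uniformly random one approaches the maximum, which is why a small DE_{ave} signals that the enhancement left the image's information content largely intact.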

The mean intensity *M* and the regional mean value of the standard deviation $\overline{\mathit{D}}$ were computed for each image; the results for NDNE, SDRCLCE, and PNAE are shown in Table 5. We also calculated the EMEE (with a block size of 8 × 8), DE, and DE_{ave} values for all images in Table 6, using the original normalized intensity images and their corresponding normalized intensity-adjusted, contrast-enhanced images. The results of the comparisons among NDNE, SDRCLCE, and PNAE are shown in Table 6.

**Values of *M* and $\overline{\mathit{D}}$ for NDNE, SDRCLCE, and PNAE**

| Image | Original *M* | Original $\overline{\mathit{D}}$ | PNAE *M* | PNAE $\overline{\mathit{D}}$ | NDNE *M* | NDNE $\overline{\mathit{D}}$ | SDRCLCE *M* | SDRCLCE $\overline{\mathit{D}}$ |
|---|---|---|---|---|---|---|---|---|
| Figure 7a | 23.87 | 15.60 | 104.43 | 43.61 | 104.87 | 41.01 | 103.14 | 45.40 |
| Figure 7e | 84.21 | 35.84 | 148.1 | 54.92 | 149.4 | 53.86 | 146.3 | 55.19 |
| Figure 8a | 58.19 | 40.28 | 120.9 | 60.74 | 121.1 | 85.20 | 117.1 | 59.01 |
| Figure 9a | 64.54 | 35.30 | 127.8 | 58.30 | 127.2 | 90.11 | 125.3 | 55.18 |

**Values of EMEE, DE, and DE_{ave} for NDNE, SDRCLCE, and PNAE**

| Image | Original EMEE | PNAE EMEE | NDNE EMEE | SDRCLCE EMEE | Original DE(*X*) | PNAE DE(*Y*) | NDNE DE(*Y*) | SDRCLCE DE(*Y*) |
|---|---|---|---|---|---|---|---|---|
| Figure 4a | 1,823.0 | 14,223 | 13,448 | 14,165 | 5.1464 | 5.3010 | 5.3208 | 5.3120 |
| Figure 5a | 137.78 | 7,351.5 | 7,296.2 | 7,129.3 | 4.5923 | 4.7295 | 4.9543 | 4.8431 |
| Figure 7a | 16.302 | 2,567.4 | 2,433.8 | 2,500.2 | 3.7005 | 4.3670 | 5.1030 | 4.3539 |
| Figure 7c | 3.6401 | 1,168.7 | 1,181.0 | 1,121.9 | 5.4420 | 5.1024 | 5.2352 | 5.2021 |
| Figure 7e | 5.2005 | 2,316.8 | 2,414.0 | 2,459.1 | 5.2545 | 5.3686 | 5.4309 | 5.5001 |
| Figure 7g | 0.0664 | 1.3242 | 1.2350 | 1.2012 | 2.3721 | 3.2316 | 3.5171 | 3.2416 |
| Figure 8a | 1,383.2 | 25,625 | 25,513 | 25,401 | 4.8774 | 5.1549 | 5.2464 | 5.2402 |
| Figure 9a | 13.596 | 5,487.4 | 5,474.3 | 5,327.6 | 4.9469 | 5.2360 | 5.4450 | 5.3893 |
| DE_{ave} | | | | | | 0.3548 | 0.5418 | 0.4038 |

As shown in Table 5, Figures 7a,e, 8a, and 9a are all enhanced into the VO region by PNAE and SDRCLCE, but Figures 8a and 9a are not enhanced into the VO region by NDNE because their regional standard deviation $\overline{\mathit{D}}$ exceeds 80. The *M* results of PNAE are similar to those of NDNE because their intensity mapping functions are approximately equivalent. The *M* and $\overline{\mathit{D}}$ values of PNAE are similar to those of SDRCLCE on the whole because the only difference between them is the normalization method.

As shown in Table 6, except for Figure 7c,e, the EMEE values of PNAE for the remaining six images are greater than the EMEE values of both NDNE and SDRCLCE. The average absolute discrete entropy differences DE_{ave} for PNAE, NDNE, and SDRCLCE are 0.3548, 0.5418, and 0.4038, respectively. Since the EMEE and DE_{ave} reflect the ability of contrast enhancement and of overall image content preservation, respectively, one can say that the proposed PNAE preserves the overall content of the image better than NDNE and SDRCLCE while improving its local contrast.

## 4 Conclusions

The main conclusions of this work are as follows:

1. PNAE has a higher computational efficiency than NDNE and SDRCLCE because it uses a simpler intensity mapping function and a simpler normalization method.
2. Some parameters in PNAE are robust for LIPC and HIPC color image enhancement, and they make the algorithm adjustable so that the level of enhancement of the overall lightness and of the contrast achieved at the output can be controlled separately.
3. The proposed PNAE preserves the overall content of the image better than NDNE and SDRCLCE while improving its local contrast.

Moreover, the PNAE algorithm is amenable to parallel processing like the SDRCLCE algorithm and, like the NDNE algorithm, can enhance both LIPC and HIPC color images. The acceleration of PNAE and the optimal design of its parameters are left for future study.

## 5 Consent

Written informed consent was obtained from the patient’s guardian/parent/next of kin for the publication of this report and any accompanying images.

## Appendix

where ${\mathit{R}}_{2\mathit{m}}\left(\mathit{x}\right)=\frac{{sin}^{\left(2\mathit{m}+1\right)}\left(\mathit{\theta x}\right)}{\left(2\mathit{m}+1\right)!}{\mathit{x}}^{2\mathit{m}+1},0<\mathit{\theta}<1$.

*m*= 1; according to (20), there is


## Authors' information

ZZ received his M.S. degree in computational mathematics from Chengdu University of Technology, Chengdu, China, in 2006. He is now a Ph.D. candidate in the Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, China. His research interests include digital image processing and pattern recognition.

NS was born in 1968. He is a Ph.D. degree holder and a professor in National Key Laboratory of Science & Technology on Multi-spectral Information Processing, Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology (HUST). He is a vice dean of the automation institute, HUST. His research interest covers computational modeling of biological vision perception, and applications in computer vision, image analysis and object recognition based on statistical learning, medical image processing and analysis, interpretation of remote sensing images, and intelligent video surveillance.

XH was born in 1973 and earned her Ph.D. at the Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology in 2008. Now she is a professor and a vice dean in the School of Mathematics and Computer Science, Wuhan Textile University, People's Republic of China. Her research interests include image processing, virtual reality technology, and computer vision.

## Declarations

### Acknowledgements

This work was partially supported by the National Science Foundation of China under grant no. 61103085.

## Authors’ Affiliations

## References

1. Tsai CY, Chou CH: A novel simultaneous dynamic range compression and local contrast enhancement algorithm for digital video cameras. *EURASIP Journal on Image and Video Processing* 2011, 6:1-19.
2. Bedi SS, Khandelwal R: Various image enhancement techniques - a critical review. *International Journal of Advanced Research in Computer and Communication Engineering* 2013, 2(3):1605-1609.
3. Saini MK, Narang D: Review on image enhancement in spatial domain. In *Proc. of Int. Conf. on Advances in Signal Processing and Communication*. Lucknow, India, 21-22 June 2013:76-79.
4. Kim TK, Paik JK, Kang BS: Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering. *IEEE Transactions on Consumer Electronics* 1998, 44(1):82-87. doi:10.1109/30.663733
5. Arigela S, Asari KV: A locally tuned nonlinear technique for color image enhancement. *WSEAS Transactions on Signal Processing* 2008, 4(8):514-519.
6. Rupal P, Vijayan K. Asari: A neighborhood dependent nonlinear technique for color image enhancement. *Image Analysis and Recognition, Lecture Notes in Computer Science* 2010, 6111:23-34. doi:10.1007/978-3-642-13772-3_3
7. Woodell G, Jobson D, Rahman Z, Hines G: Advanced image processing of aerial imagery. In *Proc. SPIE Visual Information Processing XIV*. Kissimmee, FL, May 2006.
8. Jobson D, Rahman Z, Woodell G: A multiscale Retinex for bridging the gap between color images and human observation of scenes. *IEEE Trans. Image Process.* 1997, 6(7):965-976. doi:10.1109/83.597272
9. Tao L, Asari VK: Adaptive and integrated neighborhood-dependent approach for nonlinear enhancement of color images. *J. Electron. Imaging* 2005, 14(4):043006.
10. Vorobel R, Berehulyak O: Gray image contrast enhancement by optimal fuzzy transformation. *Artificial Intelligence and Soft Computing - ICAISC 2006, Lecture Notes in Computer Science* 2006, 4029:860-869. doi:10.1007/11785231_90
11. Vijayan Asari K, Ender O, Saibabu A: Nonlinear enhancement of extremely high contrast images for visibility improvement. *Computer Vision, Graphics and Image Processing, Lecture Notes in Computer Science* 2006, 4338:240-251. doi:10.1007/11949619_22
12. Lee S, Kwon H, Han H, Lee G, Kang B: A space-variant luminance map based color image enhancement. *IEEE Transactions on Consumer Electronics* 2010, 56(4):2636-2643.
13. Valensi G: Color television system. US Patent 3534153, 1970.
14. Marsi S, Impoco G, Ukovich A, Carrato S, Ramponi G: Video enhancement and dynamic range control of HDR sequences for automotive applications. *EURASIP Journal on Advances in Signal Processing* 2007, 2007(080971):1-9.
15. Tao L, Tompkins R, Asari VK: An illuminance-reflectance model for nonlinear enhancement of color images. In *Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition*. San Diego, CA, USA, 20-26 June 2005:159-166.
16. Computer Vision Group: Test Image Database. http://decsai.ugr.es/cvg/index2.php. Accessed 13 Mar 2014.
17. Celik T, Tjahjadi T: Automatic image equalization and contrast enhancement using Gaussian mixture modeling. *IEEE Transactions on Image Processing* 2012, 21(1):145-156.
18. Jobson DJ, Rahman Z, Woodell GA: Statistics of visual representation. *SPIE Proceedings* 2002, 4736:25-35. doi:10.1117/12.477589
19. Agaian SS: Visual morphology. In *SPIE Proceedings, Nonlinear Image Processing X*, Volume 3646. San Jose, CA; 1999:139-150. doi:10.1117/12.341081
20. Agaian SS, Silver B, Panetta KA: Transform coefficient histogram-based image enhancement algorithms using contrast entropy. *IEEE Transactions on Image Processing* 16:741-758.
21. Agaian SS, Panetta K, Grigoryan AM: A new measure of image enhancement. In *IASTED Int. Conf. Signal Processing Communication*. Marbella, Spain, 19-22 Sept 2000.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.