Shadow detection in color aerial images based on HSI space and color attenuation relationship
© Shi and Li; licensee Springer. 2012
Received: 25 October 2011
Accepted: 11 June 2012
Published: 12 July 2012
Many problems in image processing and computer vision arise from shadows in a single color aerial image. This article presents a new algorithm for extracting shadows from a single color aerial image. In addition to the ratio of hue over intensity used in some state-of-the-art algorithms, this article introduces a second ratio map, obtained by dividing the saturation by the intensity. Candidate shadow and nonshadow regions are separated by applying Otsu's thresholding method. A color attenuation relationship, which describes how the illumination attenuation differs across the color channels, is derived from Planck's blackbody irradiance law. For each region, the color attenuation relationship and other determination conditions are applied iteratively to segment it into smaller sub-regions and to identify whether each sub-region is a true shadow region. Compared with previous methods, the proposed algorithm achieves better shadow detection accuracy in images that contain dark green lawns, rivers, or low-brightness shadow regions. The experimental results demonstrate the advantage of the proposed algorithm.
Shadows in digital images can be either helpful or troublesome in image processing and pattern recognition. The shadows in an aerial image provide visible evidence of the existence of objects: they can be used to recognize and track objects in video surveillance and to estimate the height and/or position of buildings. However, shadows also cause undesirable problems. For example, they may cause objects to merge or shapes to distort, resulting in information loss or distortion of objects [1, 2]. On the one hand, a shadow attached to a detected object may cause the object and its shadow to be misclassified together as a single erroneous object. On the other hand, shape distortion makes segmentation methods less reliable. Therefore, detecting shadows in an aerial image is of great practical significance.
Shadow detection methods are classified into two types: feature-based methods and model-based methods. The first type uses intensity values, chromaticity information, or geometric characteristics to detect shadows. For example, if the intensity of a region is lower than that of the surrounding pixels, the region is detected as a shadow region in gray-level aerial images; several algorithms have been presented for this kind of image [4–6]. However, a problem arises: some nonshadow regions with low-intensity surface features, such as black cars or buildings, may be misidentified as shadows. Color images are therefore introduced into shadow detection, where the chromaticity information in color aerial images is used to improve detection accuracy. Finlayson et al. [7–10] proposed a method to locate the shadows in a single RGB image by using an invariant color model. In this method, the scene is assumed to be lit by Planckian light, the surfaces of the objects in the image are assumed to be Lambertian, and the image is assumed to be captured by a narrow-band camera. It may therefore not work well for color aerial images, because these conditions may not be satisfied. Tsai analyzed the intensity and color properties of shadows in color aerial images, based on color spaces such as HSI, HSV, YCbCr, HCV, and YIQ, and proposed a method that calculates the ratio of hue over intensity for each pixel to build a ratio map; the ratio map is then thresholded to extract shadow regions. However, this method tends to misclassify dark blue and dark green surfaces as shadow regions. Chung et al. proposed a successive thresholding scheme for detecting shadows in color aerial images. This algorithm has better shadow detection accuracy than Tsai's, because the image segmentation is performed locally using different thresholds.
The second type, the model-based method, needs some prior knowledge about the scene, such as the sun altitude, sun angle, or NDVI index. However, it is difficult to obtain such measurements for an arbitrary scene, since the sunlight changes at different hours and in different places. Makarau et al. and Tian et al. use a blackbody radiator model for shadow detection. Their algorithms are fully motivated by the physical process of shadow formation, but they need a prior measurement of the color temperature or of the center wavelength of the sensor in each channel.
This article is inspired by Tsai, Chung et al., and Tian et al. Instead of using only the single ratio map proposed in their articles, we propose a novel method for shadow detection in color aerial images that uses two ratio maps plus a color attenuation relationship derived from the blackbody radiator model. The new ratio map is obtained by dividing the saturation by the intensity. The color attenuation relationship describes the relationship between the value attenuation in the R and B channels. Although both the proposed method and Tian's study are derived from the blackbody radiator model, the attenuation in the proposed method is evaluated against shadow sub-region pixels, rather than against nonshadow sub-region pixels as in Tian's study. This is because, in this study, we first suppose a tested sub-region to be a shadow sub-region, and then judge whether this supposition holds by other properties. In Tian's study, the pixels whose values are larger than the mean value are taken as the nonshadow pixels. Although their algorithm is automatic and simple, it depends to some extent on the accuracy of the prior segmentation result and the selection of global thresholds. The experimental results show subjective and objective evaluations of six test images, and demonstrate that our proposed method has better shadow detection performance than the algorithms proposed by Tsai and Chung et al.
In general, the result of shadow detection is followed by the shadow region removal process. Methods for shadow removal in a single image can be found in [2, 17–19]. In this article, we focus on the detection of shadows.
The rest of the article is organized as follows. In the following section, the shadow detection algorithm proposed by Chung et al. is reviewed. In the section “Our proposed shadow detection algorithm”, our HSI space and color attenuation relationship-based algorithm is presented. In the section “Experiment results”, the performance comparison among our proposed algorithm, Chung et al.’s algorithm, and Tsai’s algorithm is given. Finally, the article ends with the conclusion section.
2 Previous shadow detection work by Chung et al.
In their implementation, T_s is determined when the stated cumulative-probability condition holds, where P(i) denotes the probability of the ratio value i in RM and σ is calculated from the histogram of RM. The value of P_s is set to 0.95, empirically.
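Chung et al.'s exact condition is given by an equation not reproduced here; a plausible realization, assuming T_s is the smallest ratio level at which the cumulative probability reaches P_s, can be sketched as follows (the function name and the toy ratio map are illustrative):

```python
import numpy as np

def cumulative_threshold(ratio_map, p_s=0.95):
    """Pick the smallest gray level T_s whose cumulative histogram
    probability reaches p_s (a plausible reading of Chung et al.'s
    condition; the exact form is given in their paper)."""
    hist, _ = np.histogram(ratio_map, bins=256, range=(0, 256))
    p = hist / hist.sum()           # P(i): probability of ratio value i
    cdf = np.cumsum(p)
    return int(np.searchsorted(cdf, p_s))

# Toy ratio map: most mass at low ratio values, a few bright outliers.
rm = np.array([[10, 10, 200], [10, 30, 220], [10, 30, 240]], dtype=np.uint8)
t = cumulative_threshold(rm, 0.95)
```

With this toy map, the cumulative probability only reaches 0.95 once the last histogram bin with mass (gray level 240) is included.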
Based on the modified ratio map, a global thresholding process is performed to separate the input image into candidate shadow and nonshadow pixels. A connected component process is performed to identify the candidate shadow regions. Then, a local thresholding process is applied iteratively to each candidate shadow region to detect true shadow pixels. After that, a fine-shadow determination process is applied to extract true shadows from candidate shadows. The remaining candidate shadows are reclassified as nonshadows.
However, such detection is not accurate in the images that contain some dark green areas or low brightness shadow regions.
3 Our proposed shadow detection algorithm
In this section, we propose a novel algorithm to detect shadows in a single color aerial image. We use two ratio maps, the ratio of hue over intensity and the ratio of saturation over intensity, to obtain candidate shadow and nonshadow regions. The two kinds of regions are constructed by applying Otsu's thresholding method. A color attenuation relationship, which describes how the attenuation differs between the color channels, is derived from Planck's blackbody irradiance law. The color attenuation relationship and other determination conditions are applied iteratively to segment each region into smaller sub-regions, and each sub-region is then identified as a true shadow region or not.
A. Hue singularity and two ratio maps
We extract the singular pixels with R = G = B or R + G + B < T_sum from the original image. They are classified into shadow or nonshadow regions after the other pixels in the image have been identified. T_sum is set to 3, empirically.
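The singular-pixel extraction described above can be sketched directly (a minimal NumPy version; the function name is ours):

```python
import numpy as np

def hue_singularity_mask(rgb, t_sum=3):
    """Flag pixels where hue is undefined (R = G = B) or the pixel is
    nearly black (R + G + B < t_sum); these pixels are set aside and
    classified later from their neighbors. t_sum = 3 follows the paper."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    gray = (r == g) & (g == b)          # hue singularity
    dark = (r + g + b) < t_sum          # near-zero intensity
    return gray | dark

img = np.array([[[5, 5, 5], [10, 20, 30]],
                [[0, 1, 1], [200, 100, 50]]], dtype=np.uint8)
mask = hue_singularity_mask(img)
```

Here the gray pixel (5, 5, 5) and the near-black pixel (0, 1, 1) are flagged, while the two chromatic pixels are not.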
We also adopt the modified ratio map defined in (3) to calculate the two modified ratio maps in our proposed method. A denoising filter is applied to both modified ratio maps to alleviate the noise effect while preserving the edges between shadow and nonshadow regions.
where P(i) is the probability of pixels with gray level i in the corresponding modified ratio map.
CP(x, y) = 1 indicates that the pixel at position (x, y) is a candidate shadow pixel, while CP(x, y) = 0 means that the pixel at that position is a candidate nonshadow pixel. Based on the candidate shadow and nonshadow pixels, the candidate shadow and nonshadow regions are identified by using the connecting component analysis.
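The connected component step can be illustrated with a small labeling routine (a self-contained sketch; the paper does not specify the connectivity, so 4-connectivity is an assumption):

```python
import numpy as np
from collections import deque

def connected_regions(cp):
    """4-connected component labeling of the binary candidate map CP.
    Returns an integer label image (0 = background) and the region count."""
    labels = np.zeros(cp.shape, dtype=int)
    current = 0
    for i in range(cp.shape[0]):
        for j in range(cp.shape[1]):
            if cp[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                q = deque([(i, j)])
                while q:                       # breadth-first flood fill
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < cp.shape[0] and 0 <= nx < cp.shape[1]
                                and cp[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current

cp = np.array([[1, 1, 0, 0],
               [0, 1, 0, 1],
               [0, 0, 0, 1]], dtype=bool)
lab, n = connected_regions(cp)
```

The toy map contains two candidate regions: one in the upper-left corner and one along the right edge.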
B. Color attenuation relationship
The simplest way to judge a candidate region relies on the following two basic features of a shadow region: its intensity is lower than that of the neighboring pixels, and its chromaticity is similar to that of its neighboring pixels. However, considering only these two features will sometimes lead to the misclassification of shadow regions. In this article, the relationship between the attenuation of the R component and that of the B component in shadow regions is applied to distinguish shadow from nonshadow regions.
When sunlight passes through the atmosphere, it is partly absorbed by the atmosphere and partly scattered in the air. Rayleigh scattering occurs when the diameter of the particle is much smaller than the wavelength of the light, and its strength increases sharply as the wavelength decreases. Therefore, blue light, whose wavelength is shorter, is scattered much more strongly than red and green light. The main illumination of shadow regions is the scattered blue light from the atmosphere.
This phenomenon has been used to detect shadow pixels whose blue component is larger than their red component and whose intensity in a color aerial image is low. However, this consideration takes only the spectral irradiance into account and neglects the surface reflectance.
where the two quantities are the R-channel and B-channel intensity values of a given pixel in shadow. The relationship is derived in the Appendix.
We can also derive the relationship between ΔR and ΔG, or between ΔG and ΔB, but for simplicity we use only the relationship between ΔR and ΔB in the following steps. At this point, we do not know whether the candidate region is a shadow region or not. We therefore regard the tested candidate region as a shadow region and consider its neighboring pixels as a nonshadow region. We define the neighboring pixels as the band 5 pixels wide around the candidate region, excluding pixels that have already been detected as shadow pixels. We can then judge whether the tested candidate region is a shadow region by using formula (9).
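Assuming formula (9) amounts to requiring that the drop in the R channel exceed the drop in the B channel (our reading, consistent with the Rayleigh-scattering argument above), the neighborhood construction and the test can be sketched as follows; the function names and the dilation-based ring construction are illustrative:

```python
import numpy as np

def neighbor_ring(region_mask, shadow_mask, width=5):
    """Pixels within `width` of the region, excluding the region itself
    and pixels already detected as shadow (per the paper's definition)."""
    m = region_mask.copy()
    for _ in range(width):                    # iterative 4-connected dilation
        d = m.copy()
        d[1:, :] |= m[:-1, :]
        d[:-1, :] |= m[1:, :]
        d[:, 1:] |= m[:, :-1]
        d[:, :-1] |= m[:, 1:]
        m = d
    return m & ~region_mask & ~shadow_mask

def satisfies_attenuation(rgb, region_mask, ring_mask):
    """Taking the region as shadow and the ring as nonshadow, check that
    the drop in R exceeds the drop in B (our reading of formula (9))."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    d_r = r[ring_mask].mean() - r[region_mask].mean()
    d_b = b[ring_mask].mean() - b[region_mask].mean()
    return d_r > d_b

# Toy example: one dark bluish pixel inside a warm, bright background.
rgb = np.full((7, 7, 3), (200, 150, 120), dtype=np.uint8)
region = np.zeros((7, 7), dtype=bool)
region[3, 3] = True
rgb[region] = (60, 80, 100)
ring = neighbor_ring(region, np.zeros_like(region), width=2)
ok = satisfies_attenuation(rgb, region, ring)
```

In this example ΔR = 140 and ΔB = 20, so the candidate pixel passes the test, as a shadow pixel should.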
C. Our proposed method
Although we have separated the original image into candidate shadow and nonshadow regions, there are also some shadow sub-regions in the candidate nonshadow regions, and some nonshadow sub-regions in the candidate shadow regions. Therefore, we should distinguish true shadow regions from these two candidate regions.
After extracting the hue singularity pixels from the original image and classifying the pixels in the image as candidate shadow and nonshadow regions, the following two properties are useful to distinguish true shadow regions from candidate shadow and nonshadow regions.
Property 1: A shadow region usually has lower mean intensity values than its neighboring pixels.
Property 2: A shadow region usually has similar chromaticity values to its neighboring pixels.
These two properties together with the color attenuation relationship can be performed iteratively to identify whether each sub-region is a true shadow region or not.
The four thresholds are set to 12, 8, 9, and 6, empirically; the selection is based on experiments similar to the one shown in Figure 3. These thresholds determine whether the mean chromaticity value and standard deviation of each candidate region are similar to those of its neighboring pixels. With smaller thresholds, more candidate regions are identified as nonshadow regions, but some true shadow regions may also be identified as nonshadow regions. However, if the thresholds are too large, some true shadow regions will not be recovered from the candidate nonshadow regions.
If the mean intensity and chromaticity values of a candidate region satisfy the relationships illustrated above, and the color attenuation relationship satisfies formula (9), the candidate region is classified as a shadow region. Otherwise, the region is separated into smaller sub-regions by Otsu's thresholding method, and the sub-regions are further tested under the conditions mentioned above. The iteration is executed until the standard deviation of the sub-region is smaller than a threshold T_σ, which is set to 5, empirically.
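The iterative Otsu subdivision can be sketched in one dimension on a region's intensity values (a simplified illustration; the shadow/nonshadow tests of the full method are omitted, and the `max_depth` guard is our addition):

```python
import numpy as np

def otsu(values):
    """Otsu's threshold on a 1-D array of gray levels (0..255)."""
    hist, _ = np.histogram(values, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                          # class means
        m1 = (sum_all - sum0) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def split_until_uniform(values, t_sigma=5.0, depth=0, max_depth=8):
    """Recursively split a region's intensity values at Otsu's threshold
    until each sub-region has standard deviation below T_sigma."""
    if values.std() < t_sigma or depth >= max_depth:
        return [values]
    t = otsu(values)
    low, high = values[values <= t], values[values > t]
    if len(low) == 0 or len(high) == 0:
        return [values]
    return (split_until_uniform(low, t_sigma, depth + 1, max_depth)
            + split_until_uniform(high, t_sigma, depth + 1, max_depth))

vals = np.array([10, 11, 12, 10, 200, 201, 199, 202], dtype=np.uint8)
parts = split_until_uniform(vals)
```

The bimodal toy region is split once into two tight clusters, each of which then satisfies the standard-deviation stopping condition.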
Our multi-step algorithm for detecting shadow in a color aerial image is as follows:
Step 1 Extract the hue singularity pixels, for which R = G = B or R + G + B is near zero, from the image, and store them in a set Q.
Step 2 Transfer the color space from RGB color model to HSI color model by formula (4), and construct two ratio maps by formula (5).
Step 3 Modify the ratio maps by formula (3) to obtain the two modified ratio maps.
Step 4 Apply a denoising filter to both modified ratio maps to alleviate the noise effect.
Step 5 Segment the image into candidate shadow and nonshadow pixels by formula (6). Apply the well-known Otsu’s thresholding method and formula (7) to both the modified ratio maps.
Step 6 Conduct a connected component analysis to classify the candidate shadow and nonshadow pixels as candidate shadow regions and candidate nonshadow regions.
Step 7 Verify whether each candidate region satisfies the color attenuation relationship (9) and the relationships described by Figures 2 and 4. If it satisfies these conditions, the region is identified as a shadow region; otherwise, calculate the standard deviation of the region. If the standard deviation is small enough, i.e., smaller than T_σ, the region is identified as a nonshadow region. Otherwise, segment the region into smaller sub-regions and repeat the classification from Step 5, until all regions are classified as shadow or nonshadow regions.
Step 8 Calculate the number of shadow and nonshadow pixels in the neighborhood of each hue singularity pixel in set Q. If the number of shadow pixels is greater than that of the nonshadow pixels, identify the hue singularity pixel as a shadow pixel, and vice versa.
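The front end of the pipeline, HSI conversion (Step 2) and the two ratio maps, can be sketched as follows. The exact scaling in the paper's formulas (4) and (5) is not reproduced here, so the +1 offsets and the rescaling to [0, 255] are assumptions:

```python
import numpy as np

def hsi(rgb):
    """RGB (uint8) -> H, S, I, each in [0, 1] (standard HSI conversion).
    Hue is undefined for R = G = B; such pixels are handled separately
    as hue singularities in the algorithm above."""
    r, g, b = [rgb[..., k].astype(float) / 255.0 for k in range(3)]
    i = (r + g + b) / 3.0
    mn = np.minimum(np.minimum(r, g), b)
    s = np.where(i > 0, 1.0 - mn / np.maximum(i, 1e-12), 0.0)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta) / (2 * np.pi)
    return h, s, i

def ratio_maps(rgb):
    """Two ratio maps: hue/intensity (as in Tsai) plus saturation/intensity.
    The +1 offsets avoid division by zero; shadow pixels (low intensity)
    yield large ratio values."""
    h, s, i = hsi(rgb)
    rm_h = (h * 255 + 1) / (i * 255 + 1)
    rm_s = (s * 255 + 1) / (i * 255 + 1)
    return rm_h, rm_s

# A dark bluish pixel next to a bright pixel of the same hue.
img = np.array([[[20, 20, 40], [200, 200, 240]]], dtype=np.uint8)
rm_h, rm_s = ratio_maps(img)
```

Both ratio maps are larger for the dark pixel, which is what allows thresholding on them to isolate candidate shadows.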
4 Experiment results
In this section, representative image results from the experiments are selected to compare the shadow detection algorithms under consideration. Our proposed method is compared with Tsai's and Chung et al.'s methods in terms of shadow detection accuracy. Subjective and objective evaluations of the three algorithms are presented as follows.
A. Subjective evaluation
For the first four image groups, shown in Figures 5, 6, 7 and 8, the shadow detection accuracy is quite similar for Chung et al.'s algorithm and our proposed algorithm, and both are much better than Tsai's algorithm. With Tsai's algorithm, the greensward in Figure 5c is identified as a shadow region, and the river and the green lawn in Figure 6c are regarded as shadows. With our proposed algorithm, some erroneous shadow detections on the green lawn in the upper-right corner of the image are eliminated, as a comparison of Figures 6d and 6e shows. Similarly, in Figures 7 and 8, some vegetation regions are mistakenly detected as shadows by Tsai's and Chung et al.'s algorithms, whereas our proposed method retains the true shadow areas and effectively eliminates the falsely detected regions.
The other two test images, in Figures 9 and 10, show clearly that, of the three algorithms, ours has the best shadow detection accuracy; its detection results are close to the ideal shadow detection images shown in Figures 9b and 10b. In Figure 9c, the shadows are over-detected compared with the ideal shadow images, while in Figure 9d, some low-brightness shadow areas are not detected. In Figures 10c and 10d, the river is regarded as shadow by Tsai's and Chung et al.'s algorithms.
The subjective evaluation demonstrates that our proposed algorithm has the best accuracy among the three algorithms.
B. Objective evaluation
The performance of shadow detection algorithms is usually assessed objectively by several metrics. In this section, we adopt three commonly used metrics to evaluate the three shadow detection algorithms: the producer's accuracy, the user's accuracy, and the overall accuracy.
where the true positive (TP) denotes the number of true shadow pixels correctly identified; the false negative (FN) denotes the number of true shadow pixels identified as nonshadow pixels; the false positive (FP) denotes the number of nonshadow pixels identified as shadow pixels; and the true negative (TN) denotes the number of nonshadow pixels correctly identified. The parameters η s and η n stand for the ratio of the correctly detected true shadow and nonshadow over the total true shadow and nonshadow, respectively.
where the parameters p s and p n stand for the ratio of the correctly detected true shadow and nonshadow over the totally detected true shadow and nonshadow, respectively.
where TP + TN stands for the number of the correctly detected true shadow and nonshadow pixels, and TP + TN + FP + FN stands for the total number of pixels in the image.
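The three accuracy measures can be computed directly from the four confusion counts (a straightforward implementation of the definitions above; the function name is ours):

```python
def detection_accuracy(tp, fn, fp, tn):
    """Producer's (eta_s, eta_n), user's (p_s, p_n), and overall (tau)
    accuracies, all as percentages, from the confusion counts:
    TP/FN/FP/TN as defined in the text."""
    eta_s = 100.0 * tp / (tp + fn)            # producer's, shadow
    eta_n = 100.0 * tn / (tn + fp)            # producer's, nonshadow
    p_s = 100.0 * tp / (tp + fp)              # user's, shadow
    p_n = 100.0 * tn / (tn + fn)              # user's, nonshadow
    tau = 100.0 * (tp + tn) / (tp + tn + fp + fn)  # overall
    return eta_s, eta_n, p_s, p_n, tau

# Hypothetical counts for a 1000-pixel image.
scores = detection_accuracy(tp=90, fn=10, fp=20, tn=880)
```

For these hypothetical counts, η_s = 90.0% and the overall accuracy τ = 97.0%.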
Detection accuracy of the three methods (Tsai's, Chung et al.'s, and ours) for Figures 5, 6, 7, 8, 9 and 10 is reported in six tables in terms of η_s (%), η_n (%), p_s (%), and p_n (%); the numeric table entries are not recoverable here.
Tsai's algorithm performs worst because it uses only one global threshold to separate the shadow and nonshadow regions. In the ratio maps, however, the value of shadow pixels in one place may equal the value of nonshadow pixels in another. Therefore, the local threshold strategy adopted in this article and in Chung et al.'s study makes more sense.
The shadow pixels in the upper-left part of Figure 9d are not detected because of hue singularity pixels in this region. When the hue singularity pixels are separated and classified independently by the proposed algorithm, better detection performance is achieved, as in Figure 9e. In Figure 10d, only one ratio map is used for shadow detection; the result in Figure 10e is better than that in Figure 10d because an additional constraint, the new ratio map of saturation over intensity, is combined with the ratio of hue over intensity. The color attenuation relationship also improves the shadow detection performance in many detail regions compared with Tsai's and Chung et al.'s algorithms in Figures 5, 6, 7, 8, 9 and 10.
C. Limitation of the proposed algorithm
5 Conclusion
This article has addressed the problem of shadow detection in color aerial images. Hue singularity pixels are first extracted. The candidate shadow and nonshadow regions are then constructed on the basis of the modified ratio maps by using Otsu's thresholding method and connected component analysis. The intensity and chromaticity properties of shadow areas, together with the color attenuation relationship derived from Planck's blackbody irradiance law, are used iteratively to segment each candidate region into smaller sub-regions and to identify whether each sub-region is a true shadow region. The extracted hue singularity pixels are classified on the basis of their neighboring pixels. The experimental results show that our proposed shadow detection algorithm achieves the best shadow detection accuracy when compared with Tsai's and Chung et al.'s algorithms. Future work needs to be done to solve the automatic threshold selection problem.
where σ is the exposure time, and λ is the wavelength.
where the two quantities (k = R, G, B) are the k-channel intensity values of a given pixel in nonshadow and shadow regions, respectively.
We thank the referees for their comments that have significantly improved the manuscript of this work.
- Chung K-L, Lin Y-R, Huang Y-H: Efficient shadow detection of color aerial images based on successive thresholding scheme. IEEE Trans. Geosci. Remote Sens. 2009, 47: 671-682.
- Su Y-F, Chen HH: A three-stage approach to shadow field estimation from partial boundary information. IEEE Trans. Image Process. 2010, 19: 2749-2760.
- Yang J, Zhao Z, Yang J: A shadow removal method for high resolution remote sensing image. Geomatics Inf. Sci. Wuhan Univ. 2008, 33: 17-20.
- Chao X, Yanjun L, Ke Z: Shadow detecting using PSO and Kolmogorov test. Sixth International Conference on Natural Computation (ICNC), Yantai, Shandong, China; August 2010, 2: 572-576.
- Yu H-Y, Sun J-G, Liu L-N: MSER based shadow detection in high resolution remote sensing image. International Conference on Machine Learning and Cybernetics (ICMLC), Qingdao, China; July 2010, 6: 780-783.
- Zhu J, Samuel KGG, Masood SZ, Tappen MF: Learning to recognize shadows in monochromatic natural images. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco; 2010: 223-230.
- Finlayson GD, Fredembach C, Drew MS: Detecting illumination in images. IEEE 11th International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil; 2007: 1-8.
- Finlayson GD, Drew MS, Lu C: Intrinsic images by entropy minimization. European Conference on Computer Vision (ECCV), Prague, Czech Republic; 2004: 582-595, vol. 3023/2004.
- Finlayson GD, Hordley SD, Lu C, Drew MS: On the removal of shadows from images. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28: 59-68.
- Finlayson GD, Hordley SD, Drew MS: Removing shadows from images. European Conference on Computer Vision (ECCV), Copenhagen, Denmark; 2002: 129-132, vol. 2353/2006.
- Tsai VJD: A comparative study on shadow compensation of color aerial images in invariant color models. IEEE Trans. Geosci. Remote Sens. 2006, 44: 1661-1671.
- Li Y, et al.: Integrated shadow removal based on photogrammetry and image analysis. Int. J. Remote Sens. 2005, 26: 3911-3929.
- Liu W, Yamazaki F: Shadow extraction and correction from QuickBird images. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Honolulu, Hawaii, USA; 2010: 2206-2209.
- Cai D, Li M, Bao Z, Chen Z, Wei W, Zhang H: Study on shadow detection method on high resolution remote sensing image based on HIS space transformation and NDVI index. 18th International Conference on Geoinformatics, Beijing, China; 2010: 1-4.
- Makarau A, Richter R, Muller R, Reinartz P: Adaptive shadow detection using a blackbody radiator model. IEEE Trans. Geosci. Remote Sens. 2011, 49: 2049-2059.
- Tian J, Sun J, Tang Y: Tricolor attenuation model for shadow detection. IEEE Trans. Image Process. 2009, 18: 2355-2363.
- Arbel E, Hel-Or H: Shadow removal using intensity surfaces and texture anchor points. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33: 1202-1216.
- McFeely R, Hughes C, Jones E, Glavin M: Removal of nonuniform complex and compound shadows from textured surfaces using adaptive directional smoothing and the thin plate model. IET Image Process. 2011, 5: 233-248.
- Liu Q, Cao X, Deng C, Guo X: Identifying image composites through shadow matte consistency. IEEE Trans. Inf. Forensics Security 2011, 6(3): 1111-1122.
- Gonzalez RC, Eddins SL: Digital Image Processing Using MATLAB. Pearson Education, India; 2004.
- Gevers T, Smeulders AWM: PicToSeek: combining color and shape invariant features for image retrieval. IEEE Trans. Image Process. 2000, 9(1): 102-119.
- Polidorio AM, Flores FC, Imai NN, Tommaselli AMG, Franco C: Automatic shadow segmentation in aerial color images. XVI Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI), Sao Carlos, Brazil; 2003: 270-277.
- Rudin LI, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 1992, 60: 259-268.
- Otsu N: A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9(1): 62-66.
- Wyszecki G, Stiles WS: Color Science: Concepts and Methods, Quantitative Data and Formulas. Wiley, New York; 1967.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.