# An Efficient and Robust Moving Shadow Removal Algorithm and Its Applications in ITS

Chin-Teng Lin^{1}, Chien-Ting Yang^{1}, Yu-Wen Shou^{2} (corresponding author), and Tzu-Kuei Shen^{1}

**2010**:945130

https://doi.org/10.1155/2010/945130

© Chin-Teng Lin et al. 2010

**Received: **1 December 2009

**Accepted: **10 May 2010

**Published: **8 June 2010

## Abstract

We propose an efficient algorithm for removing the shadows of moving vehicles caused by non-uniform distributions of light reflections in the daytime. This paper presents a complete structure of feature combination and analysis for locating and labeling moving shadows, so that the defined foreground objects can be extracted more easily from each snapshot of video files acquired in real traffic situations. We use a Gaussian Mixture Model (GMM) for background removal and for detecting moving shadows in our tested images, and we define two indices for characterizing non-shadowed regions: one indicates the characteristics of lines, and the other is derived from the gray-scale information of images, which helps us build a newly defined set of darkening ratios (modified darkening factors) based on Gaussian models. To prove the effectiveness of our moving shadow removal algorithm, we apply it to a practical ITS (Intelligent Transportation System) application, traffic flow detection by vehicle counting. In the experimental results of vehicle counting, our algorithm achieves a fast processing speed of 13.84 ms/frame and improves the accuracy rate by 4% to 10% on our three tested videos.

## Keywords

## 1. Introduction

In recent years, research on intelligent video surveillance has increased noticeably. Foreground object detection can be considered one of the fundamental and critical techniques in this field. Among conventional methods, background subtraction and temporal difference have been widely used for foreground extraction with stationary cameras. Continuing improvements in background modeling techniques have led to many new and fascinating applications such as event detection, object behavior analysis, suspicious object detection, and traffic monitoring. However, factors such as dynamic backgrounds and moving shadows may affect the results of foreground detection and make the problems more complicated. Dynamic backgrounds, such as escalators and swaying trees, may be mistakenly detected and treated as foreground regions. Moving shadows, which occur when light is blocked by moving objects, usually cause foreground regions to be misclassified. This paper focuses on the study of moving shadows and aims at developing an efficient and robust moving shadow removal algorithm as well as its related applications.

Regarding studies on the influence of moving shadows, Zhang et al. [1] classified the relevant techniques into four categories: color model, statistical model, textural model, and geometric model. The color model uses the difference in color between shaded and non-shaded pixels. Cucchiara et al. [2] removed moving shadows by using the observation, in HSV color space, that the hue component of shaded pixels varies within a smaller range while the saturation component decreases more obviously. Some researchers proposed shadow detection methods based on the RGB and normalized-RGB color spaces. Yang et al. [3] observed that the ratio of intensities between a shaded pixel and its neighboring shaded pixel in the current image is close to the corresponding ratio in the background image. They also made use of the slight change of intensities on the normalized R and G channels between the current and background images. Cavallaro et al. [4] found that when shadows occur, the color components do not change their order and the photometric invariant features vary only slightly. Besides the color model, the statistical model uses probabilistic functions to determine whether or not a pixel belongs to the shadows. Zhang et al. [1] introduced an illumination-invariant feature, analyzed and modeled shadows as a Chi-square distribution, and classified each moving pixel into shadow or foreground object by performing a significance test. Song and Tai [5] applied a Gaussian model to represent the constant RGB color ratios, and determined whether a moving pixel belonged to the shadow or the foreground object by setting ±1.5 standard deviations as a threshold. Martel-Brisson and Zaccarin [6] proposed the GMSM (Gaussian Mixture Shadow Model) for shadow detection, integrated into a GMM-based background detection algorithm. They tested whether the mean of a distribution could describe a shaded region and, if so, selected that distribution to update the corresponding Gaussian mixture shadow model.

The texture model assumes that the texture of the foreground object is totally different from that of the background, and that the textures are distributed uniformly inside the shaded region. Joshi and Papanikolopoulos [7, 8] proposed an algorithm that learns and detects shadows by using a support vector machine (SVM). They defined four image features: intensity ratio, color distortion, edge magnitude distortion, and edge gradient distortion. They introduced a co-training architecture in which two SVM classifiers help each other in the training process, requiring only a small set of labeled shadow samples before training the SVM classifiers for different video sequences. Leone et al. [9] presented a shadow detection method using Gabor features. Mohammed Ibrahim and Anupama [10] proposed a method based on division image analysis and projection histogram analysis: an image division operation is performed on the current and reference frames to highlight the homogeneous property of shadows, and the remaining pixels on the shadow boundaries are then eliminated by both column- and row-projection histogram analyses. The geometric model attempts to remove shadowed regions, or the shadowing effect, by observing the geometric information of objects. Hsieh et al. [11] used histograms of vehicles and the calculated lane center to detect lane markings, and developed a horizontal and vertical line-based method that removes shadows by exploiting the characteristics of those lane markings. This method may become ineffective when there are no lane markings.

As for approaches that combine the above models, Benedek and Szirányi [12] proposed a method based on the LUV color model. They used a "darkening factor", the distortion of the U and V channels, and microstructural responses as the determinative features, where the microstructural responses represent a local texture feature. Their algorithm integrates all those features using Gaussian models and segments foreground objects, backgrounds, and shadows by calculating their probabilities. Xiao et al. [13] proposed a shadow removal method for traffic scenes based on edge information, consisting in sequence of an edge extraction technique, morphological operations for removing the edges of shadows, and an analysis of spatial properties for separating vehicles occluded as a result of shadows. They could reconstruct the size of each object and determine the actual shadowed regions. However, this method intrinsically cannot deal with textured shadowed regions such as regions with lane markings, and more complicated occlusion cases, such as concave vehicle shapes, make it fail to separate vehicles by the use of spatial properties.

The color information from color cameras may give fine results for shadow removal, but B/W (black and white) cameras provide better resolution and more sensitive image quality under low illumination, which is why B/W cameras rather than color cameras are much more popular for outdoor applications. Shadow removal methods based on the color model may not work in such situations. Without color information, the texture model can still achieve good results under unstable illumination; however, it may give the poorest performance for textureless objects. The geometric model can be more adaptive to specific scenes owing to its dependency on the geometric relations between objects and scenes, but so far it has prevailingly been applied in simulated environments, and its heavy computational loading obviously restricts its use in real-time cases. Considering all these methods, we propose a fast moving shadow removal scheme that combines the texture and statistical models. Our proposed method is experimentally proved to be stable; it uses the texture model instead of the color model to simplify our system efficiently, and it further employs statistical methods to improve performance by successfully dealing with textureless objects. The rest of this paper is organized as follows: the overall architecture, foreground object extraction, feature combination, experimental results of both our proposed moving shadow removal algorithm and the additional application (vehicle counting), and conclusions.

## 2. Our Systematic Architecture for Moving Shadow Removal

## 3. Foreground-Object Extraction

As Figure 1 shows, a sequence of gray-level images is taken as the input of the foreground object extraction process, and the moving object with its minimum bounding rectangle is the output of the algorithm. We incorporate the Gaussian Mixture Model (GMM), a representative approach to background subtraction, into our mechanism for constructing the background image. Considering the pros and cons of the two typical approaches, background subtraction is more appropriate than temporal difference for extracting foreground objects: the former does a better job of extracting all relevant pixels, and this paper aims at tackling problems in traffic monitoring systems where cameras are usually fixed. Since previous studies [14, 15] have proposed a standard process of background construction, we put a higher premium on the following two parts: foreground-pixel extraction by edge-based shadow removal and by gray level-based shadow removal.
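The classify-then-update cycle of background subtraction can be illustrated with a much-simplified sketch. The paper uses a full per-pixel Gaussian mixture [14]; the version below keeps only a single Gaussian per pixel to show the idea, and the names `SingleGaussianBG`, `alpha`, and `k` are our own illustrative choices, not the paper's.

```python
import numpy as np

class SingleGaussianBG:
    """Simplified background model: one Gaussian per pixel (not the full GMM)."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mu = first_frame.astype(float)            # running per-pixel mean
        self.var = np.full(first_frame.shape, 100.0)   # running per-pixel variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        f = frame.astype(float)
        # A pixel is foreground if it deviates more than k sigma from the model.
        fg = np.abs(f - self.mu) > self.k * np.sqrt(self.var)
        # Update the model only where the pixel still looks like background.
        bg = ~fg
        self.mu[bg] += self.alpha * (f - self.mu)[bg]
        self.var[bg] += self.alpha * ((f - self.mu) ** 2 - self.var)[bg]
        return fg
```

A pixel that jumps far from its modeled mean (e.g., a vehicle entering the view) is flagged as foreground, while the slowly adapting mean and variance absorb gradual illumination changes.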

### 3.1. Foreground-Pixel Extraction by Edge-Based Shadow Removal

To demonstrate the necessity of our proposed procedure, we circle the edges of lane markings and textures on the ground surface with red ellipses in Figures 7 and 8(f). By using the max-operation, we can apparently reduce the effect caused by lane markings or ground-surface textures while keeping the homogeneous property inside the shadows.
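The intuition behind the max-operation can be sketched as follows. This is a hedged illustration, not the paper's exact implementation: `grad_mag` and `moving_object_edges` are our own names, and a simple central-difference gradient stands in for whatever edge detector the authors used. Because shadowed pixels are darker than the background, a pixel-by-pixel maximum of the current frame and the background restores shadowed areas (including any lane markings inside them) to their background appearance, so subtracting the background edge map leaves mainly the edges of real moving objects.

```python
import numpy as np

def grad_mag(img):
    # Central-difference gradient magnitude (a stand-in edge-strength measure).
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy)

def moving_object_edges(frame, background):
    # Pixel-by-pixel maximization: shadowed pixels are darker than the
    # background, so max() replaces them with their background values and the
    # textures inside the shadow (lane markings, etc.) cancel out against the
    # background edge map; edges of genuinely new (object) pixels survive.
    maxed = np.maximum(frame.astype(float), background.astype(float))
    e_max, e_bg = grad_mag(maxed), grad_mag(background)
    return np.clip(e_max - e_bg, 0.0, None)
```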

Following the adaptive binarization of Sauvola and Pietikäinen [16] with an added suppression term, the per-pixel threshold is

$$T(x, y) = m(x, y)\left[1 + k\left(\frac{\sigma(x, y)}{R} - 1\right)\right] + C,$$

where $m(x, y)$ and $\sigma(x, y)$ are the mean and standard deviation of the mask centered at the pixel $(x, y)$, respectively, $R$ is the maximum value of the standard deviation (in a gray-level image, $R = 128$), $k$ is a parameter with positive values in the range [0.2, 0.5], and $C$ is a suppression term whose value is set to 50 empirically in this paper.
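This adaptive threshold (a Sauvola-style rule [16] plus the suppression term described above) can be sketched directly. The per-pixel loop below is written for clarity rather than speed; [17] shows how integral images make this efficient. The function name and the choice `k = 0.35` are illustrative assumptions.

```python
import numpy as np

def adaptive_binarize(img, win=7, k=0.35, R=128.0, C=50.0):
    """Sauvola-style adaptive binarization with an empirical suppression term C."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    r = win // 2
    for y in range(h):
        for x in range(w):
            # Local mean and standard deviation over the window (clipped at borders).
            patch = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            m, s = patch.mean(), patch.std()
            T = m * (1.0 + k * (s / R - 1.0)) + C   # per-pixel threshold
            out[y, x] = img[y, x] > T
    return out
```

The suppression term C raises the threshold everywhere, so nearly uniform dark regions (e.g., shadow interiors) binarize to zero instead of producing noise.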

### 3.2. Foreground-Pixel Extraction by Gray Level-Based Shadow Removal

#### 3.2.1. Gaussian Darkening Factor Model Updating

- (1) The pixels must belong to the foreground objects, since shadowed pixels should ideally be part of the foreground.

- (2) The intensity of a pixel in the current frame should be smaller than that in the background frame, since shadowed pixels must be darker than background pixels.

- (3) The pixels obtained from the foreground-pixel extraction by edge-based shadow removal should be excluded, to reduce the number of pixels that might be classified as non-shadowed pixels.

We set a threshold as the minimum number of updating times; the number of updates of each Gaussian model must exceed this threshold to ensure the model's stability. Besides, in order to reduce the computational loading of the updating procedure, we limit each Gaussian model to at most 200 updates per frame.
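The updating scheme described above might look like the following sketch: one Gaussian per background gray level tracks the darkening ratio I_current / I_background of candidate shadow pixels, with a per-frame update cap. Apart from the 200-update cap and the ±1.5σ-style test, the names and constants (`alpha`, `min_updates`, the initial mean and variance) are illustrative assumptions, not the paper's values.

```python
import numpy as np

class DarkeningFactorModel:
    """One Gaussian per background gray level over the ratio I_cur / I_bg."""

    def __init__(self, alpha=0.05, min_updates=30, per_frame_cap=200):
        self.mean = np.full(256, 0.5)          # initial guess for the ratio
        self.var = np.full(256, 0.05)
        self.count = np.zeros(256, dtype=int)  # total updates per gray level
        self.alpha, self.min_updates, self.cap = alpha, min_updates, per_frame_cap

    def update(self, cur, bg, candidate_mask):
        # Spend at most `cap` candidate pixels per gray level per frame,
        # mirroring the paper's computation-saving limit of 200 updates.
        budget = np.full(256, self.cap, dtype=int)
        ys, xs = np.nonzero(candidate_mask & (cur < bg))   # shadows are darker
        for y, x in zip(ys, xs):
            g = int(bg[y, x])
            if budget[g] == 0 or bg[y, x] == 0:
                continue
            budget[g] -= 1
            r = cur[y, x] / bg[y, x]
            d = r - self.mean[g]
            self.mean[g] += self.alpha * d                 # running mean
            self.var[g] += self.alpha * (d * d - self.var[g])  # running variance
            self.count[g] += 1

    def is_shadow(self, cur_val, bg_val, k=1.5):
        g = int(bg_val)
        if self.count[g] < self.min_updates:
            return False    # model not yet stable enough to decide
        r = cur_val / max(bg_val, 1)
        return abs(r - self.mean[g]) <= k * np.sqrt(self.var[g])
```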

#### 3.2.2. Determination of Non-Shadowed Pixels

## 4. Feature Combination

After that, the connected-component labeling approach can be applied to the dilated images. As our defined rule for this procedure, we take the minimum bounding rectangle of each independent region; if any two minimum bounding rectangles are close to each other, we merge them, and we iteratively check and merge the previous results until no rectangles can be merged.
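The iterative merge rule above can be sketched as follows; rectangles are (x0, y0, x1, y1) tuples, and the closeness threshold `gap` (in pixels) is an assumed parameter, not a value from the paper.

```python
def close(a, b, gap=5):
    # Two rectangles are "close" if their gap-expanded extents overlap.
    return not (a[2] + gap < b[0] or b[2] + gap < a[0] or
                a[3] + gap < b[1] or b[3] + gap < a[1])

def merge_rects(rects, gap=5):
    """Iteratively merge close minimum bounding rectangles until stable."""
    rects = list(rects)
    merged = True
    while merged:                      # repeat until no pair can be merged
        merged = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if close(rects[i], rects[j], gap):
                    a, b = rects[i], rects[j]
                    # Replace the pair with their combined bounding rectangle.
                    rects[i] = (min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3]))
                    del rects[j]
                    merged = True
                    break
            if merged:
                break
    return rects
```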

After labeling and grouping, we use a size filter to eliminate any minimum bounding rectangle whose width and height are both smaller than a threshold, as depicted in Algorithm 1. The subscript "*k*" indicates the *k*th minimum bounding rectangle. Figure 25 shows some examples of the final located real objects; the green and light blue rectangles indicate the foreground objects and the final located real objects, respectively.

**Algorithm 1:** Size filter eliminating every minimum bounding rectangle whose width and height are both smaller than a threshold.
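A minimal sketch of the size filter described by Algorithm 1; the thresholds `min_w` and `min_h` are assumed values, not the paper's, and rectangles are (x0, y0, x1, y1) tuples.

```python
def size_filter(rects, min_w=15, min_h=15):
    """Drop rectangles whose width AND height are both below the thresholds."""
    return [r for r in rects
            if not ((r[2] - r[0]) < min_w and (r[3] - r[1]) < min_h)]
```

Note the conjunction: a rectangle survives if it is large in either dimension, so long thin regions (e.g., partially occluded vehicles) are not discarded.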

## 5. Experimental Results

In this section, we demonstrate the results of our proposed shadow removal algorithm. We implemented the algorithm on a PC platform with a P4 3.0 GHz CPU and 1 GB of RAM, using Borland C++ Builder on Windows XP. All of the testing inputs are uncompressed AVI video files. The resolution of each video frame is . The average processing time is 13.84 milliseconds per frame.

### 5.1. Experimental Results of Our Shadow Removal Algorithm

#### 5.1.1. Occlusions Caused by Shadows

#### 5.1.2. Discussions of Gray Level-Based Method

### 5.2. Vehicle Counting

Descriptions of testing videos for vehicle counting.

Testing video | Scene | Shadow description | Video FPS |
---|---|---|---|
Video1 | Highway | Obvious and large | 30 |
Video2 | Highway | Light and large | 30 |
Video3 | Expressway | Obvious and large | 25 |

Number of passing vehicles in each lane.

Testing video | Partition number | Lane 1 (vehicles) | Lane 2 (vehicles) | Lane 3 (vehicles) | Lane 4 (vehicles) |
---|---|---|---|---|---|
Video1 | 6 | 102 | 189 | 116 | 89 |
Video2 | 13 | 464 | 505 | 373 | 261 |
Video3 | 2 | 58 | 75 | — | — |

Vehicle counting results.

Testing video | Compared method | Lane 1 | Lane 2 | Lane 3 | Lane 4 |
---|---|---|---|---|---|
Video1 | Without shadow removal | 81.58% | 97.50% | 96.57% | 82.29% |
Video1 | With proposed algorithm | 100% | 99.02% | 97.22% | 100% |
Video2 | Without shadow removal | 92.88% | 96.27% | 95.55% | 89.51% |
Video2 | With proposed algorithm | 97.59% | 99.31% | 99.68% | 99.26% |
Video3 | Without shadow removal | 95.16% | 97.14% | — | — |
Video3 | With proposed algorithm | 100% | 100% | — | — |

Entries are average accuracy rates per lane.

Average accuracy of all lanes in each video.

Testing video | Compared method | Average accuracy rate |
---|---|---|
Video1 | Without shadow removal | 89.49% |
Video1 | With proposed algorithm | 99.06% |
Video2 | Without shadow removal | 93.55% |
Video2 | With proposed algorithm | 98.96% |
Video3 | Without shadow removal | 96.15% |
Video3 | With proposed algorithm | 100% |

## 6. Conclusions

In this paper, we present a real-time and efficient moving shadow removal algorithm based on versatile uses of GMM, including background removal and the development of features by Gaussian models. Our algorithm innovatively exploits the homogeneous property inside shadowed regions, and hierarchically detects the foreground objects through edge-based feature extraction, gray level-based feature extraction, and feature combination. Our approach is characterized by several original procedures, such as "pixel-by-pixel maximization", subtraction of edges from background images in the corresponding regions, adaptive binarization, boundary elimination, an automatic selection mechanism for shadow-potential regions, and a Gaussian darkening factor model for each gray level.

Among these procedures, "pixel-by-pixel maximization" and the subtraction of edges from background images in the corresponding regions deal with the problems arising from shadowed regions that contain edges. Adaptive binarization and boundary elimination are developed to extract the foreground pixels of non-shadowed regions. Most significantly, we propose a Gaussian darkening factor model for each gray level to extract non-shadow pixels from foreground objects by using gray-level information, and we integrate all the useful features to locate the real objects without shadows. In comparison with previous approaches, the experimental results show that our proposed algorithm accurately detects and locates foreground objects in different scenes and under various types of shadows. Moreover, we apply the presented algorithm to vehicle counting to prove its capability and effectiveness. Our algorithm indeed improves the results of vehicle counting, and it is also verified to be efficient, with a prompt processing speed.

## Declarations

### Acknowledgment

This work was supported in part by the Aiming for the Top University Plan of National Chiao Tung University, the Ministry of Education, Taiwan, under Contract 99W962, and supported in part by the National Science Council, Taiwan, under Contracts NSC 99-3114-E-009 -167 and NSC 98-2221-E-009 -167.

## Authors’ Affiliations

## References

1. Zhang W, Fang XZ, Yang XK, Wu QMJ: Moving cast shadows detection using ratio edge. *IEEE Transactions on Multimedia* 2007, 9(6):1202-1214.
2. Cucchiara R, Grana C, Piccardi M, Prati A: Detecting moving objects, ghosts, and shadows in video streams. *IEEE Transactions on Pattern Analysis and Machine Intelligence* 2003, 25(10):1337-1342. doi:10.1109/TPAMI.2003.1233909
3. Yang M-T, Lo K-H, Chiang C-C, Tai W-K: Moving cast shadow detection by exploiting multiple cues. *IET Image Processing* 2008, 2(2):95-104. doi:10.1049/iet-ipr:20070113
4. Cavallaro A, Salvador E, Ebrahimi T: Shadow-aware object-based video processing. *IEE Proceedings: Vision, Image and Signal Processing* 2005, 152(4):398-406. doi:10.1049/ip-vis:20045108
5. Song K-T, Tai J-C: Image-based traffic monitoring with shadow suppression. *Proceedings of the IEEE* 2007, 95(2):413-426.
6. Martel-Brisson N, Zaccarin A: Learning and removing cast shadows through a multidistribution approach. *IEEE Transactions on Pattern Analysis and Machine Intelligence* 2007, 29(7):1133-1146.
7. Joshi AJ, Papanikolopoulos NP: Learning to detect moving shadows in dynamic environments. *IEEE Transactions on Pattern Analysis and Machine Intelligence* 2008, 30(11):2055-2063.
8. Joshi AJ, Papanikolopoulos N: Learning of moving cast shadows for dynamic environments. *Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '08)*, May 2008, 987-992.
9. Leone A, Distante C, Buccolieri F: A texture-based approach for shadow detection. *Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS '05)*, September 2005, 371-376.
10. Mohammed Ibrahim M, Anupama R: Scene adaptive shadow detection algorithm. *Proceedings of World Academy of Science, Engineering and Technology* 2005, 2:1307-6884.
11. Hsieh J-W, Yu S-H, Chen Y-S, Hu W-F: Automatic traffic surveillance system for vehicle tracking and classification. *IEEE Transactions on Intelligent Transportation Systems* 2006, 7(2):179-187.
12. Benedek C, Szirányi T: Bayesian foreground and shadow detection in uncertain frame rate surveillance videos. *IEEE Transactions on Image Processing* 2008, 17(4):608-621.
13. Xiao M, Han C-Z, Zhang L: Moving shadow detection and removal for traffic sequences. *International Journal of Automation and Computing* 2007, 4(1):38-46. doi:10.1007/s11633-007-0038-z
14. Stauffer C, Grimson WEL: Adaptive background mixture models for real-time tracking. *Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '99)*, June 1999, 2:246-252.
15. Suo P, Wang Y: An improved adaptive background modeling algorithm based on Gaussian mixture model. *Proceedings of the 9th International Conference on Signal Processing (ICSP '08)*, October 2008, 1436-1439.
16. Sauvola J, Pietikäinen M: Adaptive document image binarization. *Pattern Recognition* 2000, 33(2):225-236. doi:10.1016/S0031-3203(99)00055-2
17. Shafait F, Keysers D, Breuel TM: Efficient implementation of local adaptive thresholding techniques using integral images. *Document Recognition and Retrieval XV*, January 2008, San Jose, Calif, USA, Proceedings of SPIE, 6815.
18. Rosin PL, Ellis T: Image difference threshold strategies and shadow detection. *Proceedings of the 6th British Machine Vision Conference*, 1994.
19. Wang Y, Loe K-F, Wu J-K: A dynamic conditional random field model for foreground and shadow segmentation. *IEEE Transactions on Pattern Analysis and Machine Intelligence* 2006, 28(2):279-289.
20. Mikić I, Cosman PC, Kogut GT, Trivedi MM: Moving shadow and object detection in traffic scenes. *Proceedings of the 15th International Conference on Pattern Recognition*, 2000, 1:321-324.

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.