
An Efficient and Robust Moving Shadow Removal Algorithm and Its Applications in ITS

Abstract

We propose an efficient algorithm for removing the shadows of moving vehicles caused by non-uniform distributions of light reflection in the daytime. The paper presents a complete framework of feature combination and analysis for locating and labeling moving shadows, so that the defined foreground objects can be extracted more easily from each frame of video acquired in real traffic situations. We use a Gaussian Mixture Model (GMM) for background removal and for detecting moving shadows in the test images, and we define two indices for characterizing non-shadowed regions: one describes line characteristics, and the other is derived from gray-scale information and is used to build a newly defined set of darkening ratios (modified darkening factors) based on Gaussian models. To prove the effectiveness of the moving shadow removal algorithm, we apply it to a practical application of traffic flow detection in ITS (Intelligent Transportation Systems): vehicle counting. The algorithm runs at 13.84 ms per frame and improves the counting accuracy by 4%–10% on our three test videos.

1. Introduction

In recent years, research on intelligent video surveillance has increased noticeably. Foreground object detection is one of the fundamental and critical techniques in this field. Among conventional methods, background subtraction and temporal differencing have been widely used for foreground extraction with stationary cameras. Continuing improvements in background modeling techniques have enabled many new applications such as event detection, object behavior analysis, suspicious object detection, and traffic monitoring. However, factors such as dynamic backgrounds and moving shadows complicate foreground detection. Dynamic backgrounds, for example, can cause escalators and swaying trees to be detected and treated as foreground regions. Moving shadows, which occur when light is blocked by moving objects, are often misclassified as foreground. This paper focuses on moving shadows and aims at developing an efficient and robust moving shadow removal algorithm together with related applications.

Regarding studies on the influence of moving shadows, Zhang et al. [1] classified the techniques into four categories: color models, statistical models, textural models, and geometric models. Color models exploit the difference in color between shaded and non-shaded pixels. Cucchiara et al. [2] removed moving shadows using the observation, in HSV color space, that the hue component of shaded pixels varies within a small range while the saturation component decreases markedly. Other researchers proposed shadow detection methods based on RGB and normalized-RGB color spaces. Yang et al. [3] observed that the ratio of intensities between a shaded pixel and a neighboring shaded pixel in the current image is close to the corresponding ratio in the background image; they also exploited the slight change of intensities in the normalized R and G channels between the current and background images. Cavallaro et al. [4] found that when shadows occur, the color components do not change their order and the photometric invariant features vary only slightly. Besides color models, statistical models use probabilistic functions to determine whether a pixel belongs to a shadow. Zhang et al. [1] introduced an illumination-invariant feature, modeled shadows with a Chi-square distribution, and classified each moving pixel as shadow or foreground object by performing a significance test. Song and Tai [5] applied a Gaussian model to represent the constant RGB-color ratios and classified a moving pixel as shadow or foreground object using a threshold of ±1.5 standard deviations. Martel-Brisson and Zaccarin [6] proposed the Gaussian Mixture Shadow Model (GMSM) for shadow detection, integrated into a GMM-based background detection algorithm; they tested whether the mean of a distribution could describe a shaded region and, if so, selected that distribution to update the corresponding Gaussian mixture shadow model.

Texture models assume that the texture of the foreground object differs completely from that of the background and that textures are distributed uniformly inside the shaded region. Joshi and Papanikolopoulos [7, 8] proposed an algorithm that learns and detects shadows using a support vector machine (SVM). They defined four image features (intensity ratio, color distortion, edge magnitude distortion, and edge gradient distortion) and introduced a co-training architecture in which two SVM classifiers assist each other during training; the method requires a small set of labeled shadow samples before the SVM classifiers can be trained for different video sequences. Leone et al. [9] presented a shadow detection method using Gabor features. Mohammed Ibrahim and Anupama [10] proposed a method based on division image analysis and projection histogram analysis: an image division operation on the current and reference frames highlights the homogeneous property of shadows, after which the remaining pixels on shadow boundaries are eliminated by column- and row-projection histogram analyses. Geometric models attempt to remove shadowed regions, or the shadowing effect, by observing the geometric information of objects. Hsieh et al. [11] detected lane markings using vehicle histograms and the computed lane center, and developed a horizontal and vertical line-based method to remove shadows using the characteristics of those lane markings; this method can become ineffective when no lane markings exist.

As for approaches that combine the above models, Benedek and Szirányi [12] proposed a method based on the LUV color space. They used the "darkening factor", the distortion of the U and V channels, and microstructural responses (a local texture feature) as the determinative features; their algorithm integrates these features with a Gaussian model and segments foreground objects, background, and shadows by computing their probabilities. Xiao et al. [13] proposed an edge-based shadow removal method for traffic scenes consisting, in sequence, of an edge extraction technique, morphological operations for removing shadow edges, and an analysis of spatial properties for separating vehicles occluded because of shadow influences; the method reconstructs the size of each object and determines the actual shadow regions. However, it intrinsically cannot handle textured shadow regions, such as regions with lane markings, and more complicated occlusion cases, such as concave vehicle shapes, cause its spatial-property-based separation of vehicles to fail.

The color information from color cameras can give fine results for shadow removal, but B/W (black and white) cameras provide better resolution and more sensitive image quality under low-illumination conditions, which is why B/W cameras are much more popular than color cameras for outdoor applications; shadow removal methods based on color models may not work in such situations. Similarly, texture models can give better results under unstable illumination without color information, but they tend to perform worst on textureless objects. Geometric models depend on the geometric relations between objects and scenes, so they can adapt well to specific scenes but have so far been applied mainly in simulated environments; their heavy computational load clearly restricts their use in real-time cases. Considering all these methods, we propose a fast moving shadow removal scheme that combines texture and statistical models. The proposed method is experimentally shown to be stable; it uses a texture model instead of a color model to simplify the processing pipeline efficiently, and it employs statistical methods to handle textureless objects and thereby improve system performance. The remainder of this paper is organized as follows: the overall architecture, foreground object extraction, feature combination, experimental results for both the proposed moving shadow removal algorithm and an additional application (vehicle counting), and conclusions.

2. Our Systematic Architecture for Moving Shadow Removal

In this section, we introduce the overall architecture of the moving shadow removal algorithm, which consists of five blocks: foreground object extraction, foreground-pixel extraction by edge-based shadow removal, foreground-pixel extraction by gray level-based shadow removal, feature combination, and practical applications. The architecture diagram of the proposed algorithm is shown in Figure 1.

Figure 1: Architecture of the proposed shadow removal algorithm.

3. Foreground-Object Extraction

As Figure 1 shows, a sequence of gray-level images is taken as the input of the foreground object extraction process, and the moving object with its minimum bounding rectangle is the output. We incorporate a Gaussian Mixture Model (GMM), a representative approach to background subtraction, into our mechanism to construct the background image. Weighing the pros and cons of the two typical approaches, background subtraction is more appropriate than temporal differencing for extracting foreground objects here: background subtraction extracts all relevant pixels more completely, and this paper targets traffic monitoring systems in which cameras are usually fixed. Since previous studies [14, 15] have established a standard process of background construction, we concentrate on the following two parts: foreground-pixel extraction by edge-based and by gray level-based shadow removal. A brief sketch of this stage appears below.
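As an illustration of this stage, the following sketch uses OpenCV's MOG2 background subtractor in place of the paper's GMM background model [14, 15]; the function names, parameter values, and the largest-component heuristic are our assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

# MOG2 is a GMM-based background subtractor; detectShadows=False keeps the
# mask strictly binary (shadow handling happens later in the pipeline).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

def extract_foreground(gray_frame):
    """Return the binary foreground mask and the minimum bounding
    rectangle (MBR) of the largest moving blob, or None if none exists."""
    fg_mask = subtractor.apply(gray_frame)
    fg_mask = cv2.medianBlur(fg_mask, 5)  # suppress speckle noise
    num, _, stats, _ = cv2.connectedComponentsWithStats(fg_mask)
    if num < 2:                           # label 0 is the background
        return fg_mask, None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    x, y, w, h = stats[largest, :4]
    return fg_mask, (int(x), int(y), int(w), int(h))
```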

3.1. Foreground-Pixel Extraction by Edge-Based Shadow Removal

The main idea of extracting foreground pixels from edge information is that, when the homogeneity inside the shadow region varies only within a small range, the edges of the object of interest can be identified, and the shadow edges removed, much more easily; conversely, we can also obtain edge features for the non-shadow regions. The flowchart of foreground-pixel extraction by edge-based shadow removal is shown in Figure 2. Concretely, we first use Sobel operations to extract the edges of both the GMM-based background image and the foreground object. Figures 3 and 4 show the results of Sobel edge extraction, denoted $E_{B}$ and $E_{F}$, which represent the edges extracted from the background image and the foreground object, respectively; both are computed only inside the minimum bounding rectangle (MBR), the only region we have to process.

Figure 2: The flowchart of foreground-pixel extraction by edge-based shadow removal.

Figure 3: Sobel edge extraction from the background image.

Figure 4: Sobel edge extraction from the moving (foreground) object.

To avoid extracting undesired edges, for example, the edges of lane markings or textures on the road surface, we apply a pixel-by-pixel max-operation to the extracted edges of the background image and the foreground object, expressed in (1):

$$E_{\max}(x, y) = \max\big(E_{B}(x, y),\, E_{F}(x, y)\big), \tag{1}$$

where $(x, y)$ represents the coordinate of the pixel; one example of $E_{\max}$ is illustrated in Figure 5.

Figure 5: The result of the max-operation, $E_{\max}$.

Then, we subtract $E_{B}$ from $E_{\max}$ to obtain $E_{\mathit{sub}} = E_{\max} - E_{B}$. Figure 6 shows the result of $E_{\mathit{sub}}$; for comparison, Figure 7 shows the result of subtracting $E_{B}$ directly from $E_{F}$ without the proposed procedure. The extracted edges of lane markings are clearly reduced when the proposed procedure is applied. Figure 8 indicates that the procedure also works well on images with textured roads. A sketch of this step follows.
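A sketch of the edge extraction, the max-operation of (1), and the subtraction that yields $E_{\mathit{sub}}$, written with OpenCV; the Sobel-magnitude formulation and kernel size are assumptions:

```python
import cv2
import numpy as np

def edge_magnitude(gray_patch):
    """Sobel gradient magnitude, scaled back to 8-bit."""
    gx = cv2.Sobel(gray_patch, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_patch, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.convertScaleAbs(cv2.magnitude(gx, gy))

def nonshadow_edge_map(bg_mbr, fg_mbr):
    """Compute E_sub = max(E_B, E_F) - E_B inside the MBR."""
    e_b = edge_magnitude(bg_mbr)     # edges of the background patch
    e_f = edge_magnitude(fg_mbr)     # edges of the foreground object
    e_max = np.maximum(e_b, e_f)     # pixel-by-pixel max-operation, Eq. (1)
    # Background edges (lane markings, road texture) cancel out here:
    return cv2.subtract(e_max, e_b)
```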

Figure 6: The result of subtracting $E_{B}$ from $E_{\max}$.

Figure 7: The result of subtracting $E_{B}$ from $E_{F}$ (without the max-operation).

Figure 8: Examples of a ground surface with textures (foreground object; background image; result of subtracting $E_{B}$ from $E_{\max}$).

To demonstrate the necessity of the proposed procedure, the edges of lane markings and road-surface textures are circled with red ellipses in Figures 7 and 8(f). The max-operation clearly reduces the effects caused by lane markings and surface textures while preserving the homogeneous property inside the shadows.

After edge extraction, we apply an adaptive binarization method to obtain a binary image from $E_{\mathit{sub}}$. We adopt Sauvola's method [16, 17], a local binarization method that gives good results even under non-uniform luminance. Sauvola's method slides an $n \times n$ mask over the image in each scanning iteration and computes the local mean and standard deviation of the pixel intensities inside the mask, so as to determine a threshold adapted to the contrast in the local neighborhood of a pixel. In regions of high contrast, the standard deviation approaches its maximum and the threshold approaches the local mean, so weak, unimportant edges may survive binarization. To reduce their influence, we add a suppression term $C$ to Sauvola's equation; (2) shows the revised equation:

$$T(x, y) = m(x, y)\left[1 + k\left(\frac{s(x, y)}{R} - 1\right)\right] + C, \tag{2}$$

where $m(x, y)$ and $s(x, y)$ are the mean and standard deviation of the mask centered at pixel $(x, y)$, respectively, $R$ is the maximum value of the standard deviation (for gray-level images, $R = 128$), $k$ is a positive parameter in the range $[0.2, 0.5]$, and $C$ is the suppression term, set empirically to 50 in this paper.

Once $T(x, y)$ has been obtained, we binarize $E_{\mathit{sub}}$ at each location $(x, y)$ according to (3):

$$B(x, y) = \begin{cases} 1, & \text{if } E_{\mathit{sub}}(x, y) \geq T(x, y), \\ 0, & \text{otherwise}, \end{cases} \tag{3}$$

where $B(x, y)$ represents the result after binarization. Figure 9 shows the result of applying this binarization to $E_{\mathit{sub}}$. Another advantage of adaptive binarization over a fixed threshold is that users need not manually set a proper threshold for each video scene, which is a significant factor for automatic monitoring systems.
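A sketch of the revised Sauvola binarization of (2) and (3), using box filters for the local statistics; the mask size $n = 15$ and $k = 0.3$ are assumed values (the paper only constrains $k$ to $[0.2, 0.5]$):

```python
import cv2
import numpy as np

def sauvola_binarize(edge_img, n=15, k=0.3, R=128.0, C=50.0):
    """Adaptive binarization with the suppression term C of Eq. (2)."""
    img = edge_img.astype(np.float32)
    mean = cv2.boxFilter(img, cv2.CV_32F, (n, n))          # m(x, y)
    sq_mean = cv2.boxFilter(img * img, cv2.CV_32F, (n, n))
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))  # s(x, y)
    threshold = mean * (1.0 + k * (std / R - 1.0)) + C     # Eq. (2)
    return ((img >= threshold) * 255).astype(np.uint8)     # Eq. (3)
```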

Figure 9: Examples of $B(x, y)$: binarization of Figure 6; binarization of Figure 8(e).

We also have to remove the outer borders of $B(x, y)$ for the following reasons. In Figure 9, the shadow region and the real foreground object share the same motion vectors, so they are always adjacent to each other. In addition, the interior of a shadowed region should be homogeneous (textureless, or nearly edgeless), whereas the interior of a foreground object is dominated by edges; this implies that shadow edges appear at the outer borders of the detected foreground objects. Considering these two properties, removing shadows can be treated as eliminating the outer borders while preserving the remaining edges, which belong to the real foreground objects. The second property is not always satisfied: the interior of a shadow is sometimes slightly textured (e.g., by lane markings), as in the example shown in Figure 10. The procedures described earlier handle this case; as Figure 11(b) shows, although the interior of the shadow retains a few edge points after binarization, these noise-like points are easily removed by a specific filter in the subsequent processing.

Figure 10: Homogeneous property of shadows, example 1 (foreground object).

Figure 11: Homogeneous property of shadows, example 2 (moving object).

As mentioned above, we use a small scanning mask to achieve boundary elimination. According to our observations, the widths of edges detected in shadows are very similar regardless of whether the edges are extracted from far or near perspectives in the foreground images, and the edge width is less than 3 pixels in most conditions. As Figure 12 illustrates, we place the mask (green) on each binarized edge point (yellow) of $B(x, y)$ and scan every point. If the region covered by the mask belongs completely to the foreground object (white points), we keep the point (red); otherwise, we eliminate it (light blue). After this outer-boundary elimination we obtain the feature points of non-shadowed pixels, denoted $F_{\mathit{edge}}$. Figure 13 shows an actual example with $F_{\mathit{edge}}$ drawn as red points. A sketch of this step follows.
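Since a point survives only when the scanning mask lies entirely inside the foreground region, the boundary elimination is equivalent to masking the binarized edges with an eroded foreground mask; a sketch under that reading, with the 3-pixel mask size taken from the observation above:

```python
import cv2
import numpy as np

def eliminate_outer_borders(binary_edges, fg_mask, mask_size=3):
    """Drop edge points on the outer border of the object (where shadow
    edges appear) and keep the interior edge points as F_edge."""
    kernel = np.ones((mask_size, mask_size), np.uint8)
    interior = cv2.erode(fg_mask, kernel)           # strips the outer border
    return cv2.bitwise_and(binary_edges, interior)  # surviving F_edge pixels
```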

Figure 12: Illustrations of boundary elimination: scanning with the mask; the mask covering a non-foreground region; the mask inside the foreground region; the final result of outer-border elimination.

Figure 13: An example of boundary elimination.

3.2. Foreground-Pixel Extraction by Gray Level-Based Shadow Removal

We integrate a gray level-based foreground-pixel extraction into the shadow removal algorithm; this arrangement enhances and stabilizes the performance of the edge-based scheme alone. Figure 14 shows the flowchart of gray level-based shadow removal. In essence, we develop a modified "constant ratio" rule based on Gaussian models: we select pixels belonging to shadow-potential regions from the foreground objects, calculate their darkening factors as training data, and build one Gaussian model for each gray level. Once the Gaussian models are trained, they determine whether each pixel inside a foreground object belongs to a shadowed region or not.

Figure 14: Flowchart of gray level-based shadow removal foreground-pixel extraction.

Here we briefly review the original "constant ratio" rule so as to set up our modifications. Several studies [12, 18–20] use the constant ratio property for shadow detection and express the intensity of the pixel at coordinate $(x, y)$ by (4):

$$I(x, y) = \int \rho(x, y, \lambda)\, E(x, y, \lambda)\, S(\lambda)\, d\lambda, \tag{4}$$

where $\lambda$ is the wavelength parameter, $E(x, y, \lambda)$ is the illumination function, $\rho(x, y, \lambda)$ is the spectral reflectance, and $S(\lambda)$ is the sensitivity of the camera sensor. The illumination term $E$ captures the difference between non-shadowed and shadowed regions: in the background, $E$ comprises both the direct and the diffuse-reflected light components, whereas in a shadowed area it contains only the diffuse-reflected component. This difference implies the constant ratio property shown in (5):

$$\alpha = \frac{I_{SH}(x, y)}{I_{BG}(x, y)}, \tag{5}$$

where $I_{SH}(x, y)$ and $I_{BG}(x, y)$ represent the intensities of a shadowed pixel and of the corresponding non-shadowed background pixel, respectively, and $\alpha$, called the darkening factor, is a constant over the whole image.

3.2.1. Gaussian Darkening Factor Model Updating

As Figure 15 shows, rather than modeling the darkening factor per pixel, we model it with one Gaussian per gray level. We first select shadow-potential pixels as the updating data for the Gaussian models according to the three conditions below; a sketch of this selection follows the list.

(1) The pixel must belong to a foreground object, since shadowed pixels are ideally part of the foreground.

(2) The intensity of the pixel in the current frame must be smaller than that in the background frame, since shadowed pixels are darker than the background pixels.

(3) Pixels already obtained by the edge-based foreground-pixel extraction are excluded, which reduces the number of pixels that would be classified as non-shadowed.
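Assuming NumPy arrays for the current gray frame, the background image, the foreground mask, and the edge-based feature map $F_{\mathit{edge}}$, the three conditions combine into a boolean selection mask (a minimal sketch; the names are ours):

```python
import numpy as np

def select_shadow_potential(gray, background, fg_mask, f_edge):
    """Boolean mask of shadow-potential pixels for model updating:
    (1) inside a foreground object, (2) darker than the background,
    (3) not already marked non-shadow by the edge-based extraction."""
    inside_foreground = fg_mask > 0
    darker_than_bg = gray < background
    not_edge_feature = f_edge == 0
    return inside_foreground & darker_than_bg & not_edge_feature
```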

Figure 15: Illustration: each gray level has its own Gaussian model.

For the practical case shown in Figure 16, the red pixels are the points selected for Gaussian model updating.

Figure 16: The selected pixels for Gaussian model updating (foreground object; red points are the selected pixels).

After the updating pixels are selected, we update the mean and standard deviation of the corresponding Gaussian model. Figure 17 displays the flowchart of the updating process of the Gaussian darkening factor model. The darkening factor is calculated as in (6):

$$\alpha(x, y) = \frac{I_{F}(x, y)}{I_{B}(x, y)}, \tag{6}$$

where $I_{F}(x, y)$ is the intensity of the selected pixel at $(x, y)$ and $I_{B}(x, y)$ is the intensity of the background pixel at $(x, y)$. After calculating the darkening factor, we update the $I_{B}(x, y)$th Gaussian model.

Figure 17: Gaussian darkening factor model updating procedure.

We set a threshold on the minimum number of updates: the update count of each Gaussian model must exceed this threshold to ensure the stability of the model. Moreover, to reduce the computational load of the updating procedure, each Gaussian model may be updated at most 200 times per frame. A sketch of this updating procedure follows.
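A compact sketch of the per-gray-level updating, using Welford's running mean/variance update; the 200-updates-per-frame cap follows the text, while the minimum update count is an assumed value because the paper does not state its threshold:

```python
import numpy as np

class DarkeningFactorModels:
    """One running Gaussian (mean, variance) of the darkening factor for
    each of the 256 background gray levels."""
    MIN_UPDATES = 50      # assumed stability threshold (not given in the paper)
    MAX_PER_FRAME = 200   # per-frame update cap from the text

    def __init__(self):
        self.count = np.zeros(256, np.int64)
        self.mean = np.zeros(256, np.float64)
        self.m2 = np.zeros(256, np.float64)   # sum of squared deviations

    def update_frame(self, gray, background, selected):
        """Update the models from one frame's shadow-potential pixels."""
        per_frame = np.zeros(256, np.int32)
        for y, x in zip(*np.nonzero(selected)):
            level = int(background[y, x])
            if level == 0 or per_frame[level] >= self.MAX_PER_FRAME:
                continue
            alpha = float(gray[y, x]) / level      # darkening factor, Eq. (6)
            per_frame[level] += 1
            self.count[level] += 1                 # Welford running update
            delta = alpha - self.mean[level]
            self.mean[level] += delta / self.count[level]
            self.m2[level] += delta * (alpha - self.mean[level])

    def trained(self, level):
        return self.count[level] >= self.MIN_UPDATES

    def stats(self, level):
        var = self.m2[level] / max(self.count[level] - 1, 1)
        return self.mean[level], float(np.sqrt(var))
```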

3.2.2. Determination of Non-Shadowed Pixels

We now describe how to extract non-shadowed pixels using the trained Gaussian darkening factor models. As Figure 18 illustrates, we compute the difference between the mean of the Gaussian model and the pixel's darkening factor and check whether it is smaller than 3 standard deviations. If so, the pixel is classified as shadowed; otherwise, it is considered non-shadowed and is kept as a feature point.

Figure 18: Illustration of the determination rule.

Figure 19 describes the determination of non-shadowed pixels. If the Gaussian model for a pixel's gray level has not yet been trained, we check whether nearby Gaussian models are marked as trained; in our implementation we examine the 6 nearest Gaussian models and choose the closest trained one, if any exists. Figure 20 gives an example in which the pixels labeled in red are the feature pixels extracted by this determination; we denote this set of pixels as $F_{\mathit{gray}}$. A sketch follows.
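A sketch of the determination step of Figures 18 and 19, reusing the DarkeningFactorModels class from the previous sketch; the search over the 6 nearest gray levels follows the text:

```python
import numpy as np

def nonshadow_pixels(gray, background, fg_mask, models, reach=6):
    """Return the gray level-based feature map F_gray: a foreground pixel
    is kept as non-shadow when its darkening factor falls outside 3
    standard deviations of the Gaussian for its background gray level."""
    f_gray = np.zeros(gray.shape, np.uint8)
    offsets = sorted(range(-reach, reach + 1), key=abs)  # 0, -1, 1, -2, ...
    for y, x in zip(*np.nonzero(fg_mask)):
        b = int(background[y, x])
        if b == 0:
            continue
        # Use the model for level b, or the nearest trained neighbor.
        level = next((b + d for d in offsets
                      if 0 <= b + d < 256 and models.trained(b + d)), None)
        if level is None:
            continue                       # no usable model yet
        mean, std = models.stats(level)
        alpha = float(gray[y, x]) / b
        if abs(alpha - mean) > 3.0 * std:  # outside 3 sigma: non-shadow
            f_gray[y, x] = 255
    return f_gray
```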

Figure 19: Flowchart of the non-shadowed pixel determination.

Figure 20: An example of gray level-based non-shadow foreground-pixel extraction (foreground object).

4. Feature Combination

We combine the two kinds of features introduced in Section 3 to extract foreground objects more accurately. Figure 21 exhibits the flowchart of the feature combination: the two feature sets are integrated with a pixel-wise "OR" operation. Figure 22 demonstrates a real example of the combined features, called the feature-integration image in this paper.

Figure 21: Flowchart of feature combination.

Figure 22: An example after integration (foreground object; feature-integration image).

After acquiring the feature-integration image, we locate the real foreground objects, namely, the foreground objects excluding the shadowed regions. We use connected-component labeling with minimum bounding rectangles to locate them. Unlike common applications, we perform some necessary preprocessing before the connected-component labeling: a median filter operation and a morphological dilation operation, each applied once. In Figure 23, some isolated points can be seen in the left part of the feature-integration image, obviously due to the influence of lane markings (see Figure 23(a)); these pixels are eliminated by the filtering step (Figure 23(b)). We then apply the dilation operation to the filtered result in order to consolidate the remaining feature pixels, as shown in Figure 24 (a sketch follows).
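The preprocessing can be sketched as one median filtering pass followed by one dilation, mirroring Figures 23 and 24; the kernel sizes are assumptions since the paper does not state them:

```python
import cv2
import numpy as np

def preprocess_features(feature_img):
    """Drop isolated lane-marking points, then consolidate the surviving
    feature pixels before connected-component labeling."""
    filtered = cv2.medianBlur(feature_img, 3)          # one filtering pass
    kernel = np.ones((3, 3), np.uint8)
    return cv2.dilate(filtered, kernel, iterations=1)  # one dilation pass
```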

Figure 23: Filtering the feature-integration image with a median filter (feature-integration image; after filtering).

Figure 24: The result of dilation operations on Figure 23(b).

Connected-component labeling is then applied to the dilated image. Our rules for this procedure are as follows: compute the minimum bounding rectangle of each independent region; if any two minimum bounding rectangles are close to each other, merge them; and iteratively check and merge the results until no rectangles can be merged, as sketched below.
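A sketch of these labeling-and-merging rules, where the closeness threshold `gap` is an assumed parameter:

```python
import cv2

def rects_close(a, b, gap):
    """True when two (x, y, w, h) rectangles are within `gap` pixels."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return not (ax > bx + bw + gap or bx > ax + aw + gap or
                ay > by + bh + gap or by > ay + ah + gap)

def merge_rects(a, b):
    """Smallest rectangle enclosing both inputs."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x, y = min(ax, bx), min(ay, by)
    return (x, y, max(ax + aw, bx + bw) - x, max(ay + ah, by + bh) - y)

def label_and_merge(dilated, gap=10):
    """Label connected components, then iteratively merge nearby minimum
    bounding rectangles until no pair can be merged."""
    num, _, stats, _ = cv2.connectedComponentsWithStats(dilated)
    rects = [tuple(int(v) for v in stats[i, :4]) for i in range(1, num)]
    merged = True
    while merged:
        merged = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if rects_close(rects[i], rects[j], gap):
                    rects[i] = merge_rects(rects[i], rects[j])
                    del rects[j]
                    merged = True
                    break
            if merged:
                break
    return rects
```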

After labeling and grouping, a size filter eliminates any minimum bounding rectangle whose width and height are both smaller than a threshold, as depicted in Algorithm 1, where the subscript k indicates the kth minimum bounding rectangle. Figure 25 shows examples of the final located real objects: the green and light blue rectangles mark the foreground objects and the final located real objects, respectively.

Algorithm 1

for each minimum bounding rectangle MBR_k
   if width_k < T and height_k < T then
      eliminate MBR_k
   end
end

Figure 25: Examples of final located real objects.

5. Experimental Results

In this section, we demonstrate the results of the proposed shadow removal algorithm. The algorithm was implemented on a PC platform (P4 3.0 GHz, 1 GB RAM) using Borland C++ Builder on Windows XP. All of the testing inputs are uncompressed AVI video files. The resolution of each video frame is . The average processing time is 13.84 milliseconds per frame.

5.1. Experimental Results of Our Shadow Removal Algorithm

We first show experimental results in different scenes without occlusion. For comparison against results obtained without our algorithm, "red" and "green" rectangles indicate the objects detected with and without the proposed shadow removal algorithm, respectively. In Figure 26, the proposed algorithm successfully detects the real objects and neutralizes the negative influence of shadows, since the intensity of the shadowed regions is low enough to be distinguished from that of the background. The algorithm also copes with larger-scale shadows and provides satisfactory results for different sizes of vehicles, such as the trucks shown in Figures 26(d) and 26(f). Figure 27 demonstrates the processed results for different kinds of shadows: smaller-scale shadows (Figure 27(a)), low contrast between the intensities of shadows and background (Figure 27(c)), and shadows that are both small and of low contrast (Figure 27(e)). Clearly, the proposed method works robustly regardless of the size and visibility of the shadows. Figure 28 gives testing results for another scene, in which motorcycles and their riders are precisely detected (Figures 28(c) and 28(d)). Figure 29 compares the results of the proposed algorithm with representative methods publicly accessible on the internet for the highway video sequences; the proposed algorithm (right column) yields clearly better results.

Figure 26: Experimental results of foreground object detection.

Figure 27: Experimental results for different kinds of shadows.

Figure 28: Experimental results of foreground object detection.

Figure 29: Experimental results of foreground object detection.

5.1.1. Occlusions Caused by Shadows

We next demonstrate examples of occlusions caused by the influence of shadows. In Figures 30(a), 30(e), 30(g), and 30(i), two vehicles (or a motorcycle and a vehicle) are detected inside the same rectangle due to the influence of shadows, and Figure 30(c) indicates an even worse case in which vehicles are framed together on account of light shadows. As all the figures in the right column of Figure 30 show, our method correctly detects the foreground objects under these shadow influences. Moreover, we still obtain correct detection results (Figure 30(j)) in a more difficult case (Figure 30(i)) where three shadowed regions were detected as one foreground object. In other words, the proposed algorithm handles the shadow-induced occlusion problems that have long been considered difficult.

Figure 30: Experimental results under the occlusion situation.

5.1.2. Discussions of Gray Level-Based Method

In Section 3.2, darkening factors were introduced to enhance the performance and reliability of the proposed algorithm. Here we compare experimental results with and without the approach of Section 3.2. Figure 31 shows a conspicuous example in which green/red rectangles represent the foreground objects/detected objects. Figures 31(a), 31(c), and 31(e) are the results without the gray level-based foreground-pixel extraction; in other words, the only features used for extracting foreground objects come from the edge-based shadow removal introduced in Section 3.1. This causes detection failures when the objects are edgeless (or textureless). As the left column of Figure 31 shows, car roofs can be regarded as nearly edgeless, so the detected objects fall apart into pieces or only the rear bumper is detected. Figures 31(b), 31(d), and 31(f) exhibit the better results obtained with our feature-combination structure, which includes both edge-based and gray level-based shadow removal.

Figure 31: Comparison of results without and with gray level-based shadow removal foreground-pixel extraction.

5.2. Vehicle Counting

To prove the validity and versatility of the proposed approach, we applied the developed algorithm to vehicle counting, one of the popular applications in ITS. Three videos were tested; the scene and shadow properties of each video are summarized in Table 1, and Figure 32 shows the corresponding scenes. To present the comparative statistics more conveniently, we divided Video1 into 6 sectors, Video2 into 13 sectors, and Video3 into 2 sectors, each sector lasting about 2 minutes. Video1 and Video2 contain 4 lanes each, and Video3 contains 2 lanes. Table 2 lists the manually counted number of passing vehicles in each lane of each video.

Table 1 Descriptions of testing videos for vehicle counting.
Table 2 Number of passing vehicles in each lane.
Figure 32: Scenes of videos for vehicle counting (Video1, Video2, Video3).

To make the comparisons more equitable, we calculated the accuracy rate of each sector by (7):

$$\text{accuracy rate} = \left(1 - \frac{|N_m - N_p|}{N_m}\right) \times 100\%, \tag{7}$$

where $N_m$ and $N_p$ represent the numbers of vehicles obtained from manual counting and from our program, respectively. Table 3 compares the average accuracy rates of vehicle counting when the foreground object-detection results with and without the proposed algorithm are used as inputs, and Table 4 gives the average accuracy rate over all lanes of each video.

From the testing results, Figure 33 lists two kinds of failure cases observed in the vehicle counting application. The first possible condition (Figure 33(a)) arises from overly complicated textures within the detected shadows: the algorithm fails to count correctly because the shadow of a specific object is not successfully detected and eliminated. Shadow detection and removal based purely on image processing remains a difficult problem, since no information from sensors other than cameras is used. Moreover, this paper aims at a practical image processing algorithm that efficiently removes the shadowing effect before the ITS application runs, so such failures have limited impact on the shadow removal itself and depend on the specific application; because these conditions appear rarely in longer video recordings, the counting error rate stays within a satisfactory range. The second possible condition (Figure 33(b)) is an occlusion of two cars that causes a false counting result, because the shadows are too faint to be correctly detected and the two detected objects move simultaneously. This problem belongs to another research field of image processing and is not the issue we focus on in this paper; since it may result from many other conditions, such as the dynamic behavior of moving vehicles, different kinds of shadows, and the influence of light reflection, we concentrated instead on a more practical and automatic shadow removal algorithm. Overall, the experimental results reveal that the proposed algorithm not only works well for vehicle counting but can also improve the performance of other applications constrained by shadows.
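A minimal helper implementing the sector accuracy of (7) as reconstructed above:

```python
def accuracy_rate(n_manual, n_program):
    """Accuracy rate of a sector, Eq. (7): one minus the relative counting
    error, in percent."""
    return (1.0 - abs(n_manual - n_program) / n_manual) * 100.0

# Example: 96 vehicles counted against 100 counted manually -> 96.0%
```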

Table 3 Vehicle counting results.
Table 4 Average accuracy of all lanes in each video.
Figure 33: Failed examples of image frames for (a) the much-textured case and (b) the special occlusion case.

6. Conclusions

In this paper we have presented a real-time and efficient moving shadow removal algorithm based on versatile uses of GMM, covering both background removal and the development of features by Gaussian models. The algorithm innovatively exploits the homogeneous property inside shadowed regions and hierarchically detects foreground objects by extracting edge-based and gray level-based features and combining them. The approach is characterized by several original procedures: "pixel-by-pixel maximization", subtraction of background-image edges in the corresponding regions, adaptive binarization, boundary elimination, an automatic selection mechanism for shadow-potential regions, and a Gaussian darkening factor model for each gray level.

Among these procedures, "pixel-by-pixel maximization" and the subtraction of background-image edges in the corresponding regions address the problems arising from shadowed regions that contain edges. Adaptive binarization and boundary elimination extract the foreground pixels of non-shadowed regions. Most significantly, we propose the Gaussian darkening factor model for each gray level to extract non-shadow pixels from foreground objects using gray-level information, and we integrate all the useful features to locate the real objects without shadows. Compared with previous approaches, the experimental results show that the proposed algorithm accurately detects and locates foreground objects in different scenes and under various types of shadows. Moreover, applying the presented algorithm to vehicle counting proves its capability and effectiveness: it improves the counting results while maintaining a fast processing speed.

References

1. Zhang W, Fang XZ, Yang XK, Wu QMJ: Moving cast shadows detection using ratio edge. IEEE Transactions on Multimedia 2007, 9(6):1202-1214.

2. Cucchiara R, Grana C, Piccardi M, Prati A: Detecting moving objects, ghosts, and shadows in video streams. IEEE Transactions on Pattern Analysis and Machine Intelligence 2003, 25(10):1337-1342.

3. Yang M-T, Lo K-H, Chiang C-C, Tai W-K: Moving cast shadow detection by exploiting multiple cues. IET Image Processing 2008, 2(2):95-104.

4. Cavallaro A, Salvador E, Ebrahimi T: Shadow-aware object-based video processing. IEE Proceedings: Vision, Image and Signal Processing 2005, 152(4):398-406.

5. Song K-T, Tai J-C: Image-based traffic monitoring with shadow suppression. Proceedings of the IEEE 2007, 95(2):413-426.

6. Martel-Brisson N, Zaccarin A: Learning and removing cast shadows through a multidistribution approach. IEEE Transactions on Pattern Analysis and Machine Intelligence 2007, 29(7):1133-1146.

7. Joshi AJ, Papanikolopoulos NP: Learning to detect moving shadows in dynamic environments. IEEE Transactions on Pattern Analysis and Machine Intelligence 2008, 30(11):2055-2063.

8. Joshi AJ, Papanikolopoulos N: Learning of moving cast shadows for dynamic environments. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '08), May 2008, 987-992.

9. Leone A, Distante C, Buccolieri F: A texture-based approach for shadow detection. Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS '05), September 2005, 371-376.

10. Mohammed Ibrahim M, Anupama R: Scene adaptive shadow detection algorithm. Proceedings of World Academy of Science, Engineering and Technology 2005, 2.

11. Hsieh J-W, Yu S-H, Chen Y-S, Hu W-F: Automatic traffic surveillance system for vehicle tracking and classification. IEEE Transactions on Intelligent Transportation Systems 2006, 7(2):179-187.

12. Benedek C, Szirányi T: Bayesian foreground and shadow detection in uncertain frame rate surveillance videos. IEEE Transactions on Image Processing 2008, 17(4):608-621.

13. Xiao M, Han C-Z, Zhang L: Moving shadow detection and removal for traffic sequences. International Journal of Automation and Computing 2007, 4(1):38-46.

14. Stauffer C, Grimson WEL: Adaptive background mixture models for real-time tracking. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '99), June 1999, 2:246-252.

15. Suo P, Wang Y: An improved adaptive background modeling algorithm based on Gaussian mixture model. Proceedings of the 9th International Conference on Signal Processing (ICSP '08), October 2008, 1436-1439.

16. Sauvola J, Pietikäinen M: Adaptive document image binarization. Pattern Recognition 2000, 33(2):225-236.

17. Shafait F, Keysers D, Breuel TM: Efficient implementation of local adaptive thresholding techniques using integral images. Document Recognition and Retrieval XV, Proceedings of SPIE 6815, January 2008, San Jose, Calif, USA.

18. Rosin PL, Ellis T: Image difference threshold strategies and shadow detection. Proceedings of the 6th British Machine Vision Conference, 1995.

19. Wang Y, Loe K-F, Wu J-K: A dynamic conditional random field model for foreground and shadow segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 2006, 28(2):279-289.

20. Mikić I, Cosman PC, Kogut GT, Trivedi MM: Moving shadow and object detection in traffic scenes. Proceedings of the 15th International Conference on Pattern Recognition, 2000, 1:321-324.


Acknowledgment

This work was supported in part by the Aiming for the Top University Plan of National Chiao Tung University, the Ministry of Education, Taiwan, under Contract 99W962, and in part by the National Science Council, Taiwan, under Contracts NSC 99-3114-E-009-167 and NSC 98-2221-E-009-167.

Author information

Correspondence to Yu-Wen Shou.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Lin, C.-T., Yang, C.-T., Shou, Y.-W. et al. An Efficient and Robust Moving Shadow Removal Algorithm and Its Applications in ITS. EURASIP J. Adv. Signal Process. 2010, 945130 (2010). https://doi.org/10.1155/2010/945130