Robust Background Subtraction with Foreground Validation for Urban Traffic Video

Abstract

Identifying moving objects in a video sequence is a fundamental and critical task in many computer-vision applications. Background subtraction techniques are commonly used to separate foreground moving objects from the background. Most background subtraction techniques assume a single rate of adaptation, which is inadequate for complex scenes such as a traffic intersection where objects are moving at different and varying speeds. In this paper, we propose a foreground validation algorithm that first builds a foreground mask using a slow-adapting Kalman filter, and then validates individual foreground pixels by a simple moving object model built using both the foreground and background statistics as well as the frame difference. Ground-truth experiments with urban traffic sequences show that our proposed algorithm significantly improves upon results using only Kalman filter or frame-differencing, and outperforms other techniques based on mixture of Gaussians, median filter, and approximated median filter.
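To make the two-stage idea concrete, the sketch below shows a minimal, illustrative pipeline: a slow-adapting, Kalman-filter-style recursive background update produces a raw foreground mask, and each foreground pixel is then validated against independent motion evidence. The parameter values, function names, and the use of a plain frame difference as the validation cue are assumptions for illustration only; the paper's actual validation model is built from foreground and background statistics together with the frame difference, and is not reproduced here.

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the paper).
ALPHA_BG = 0.01   # slow adaptation rate for background pixels
ALPHA_FG = 0.001  # even slower adaptation where foreground is detected
TAU_BG = 25.0     # threshold on |frame - background| for the raw mask
TAU_FD = 15.0     # threshold on the frame difference used for validation

def update_background(bg, frame, fg_mask):
    """Kalman-filter-style recursive background update with two gains:
    pixels flagged as foreground adapt more slowly than background pixels."""
    gain = np.where(fg_mask, ALPHA_FG, ALPHA_BG)
    return bg + gain * (frame - bg)

def raw_foreground(bg, frame):
    """Initial foreground mask from the slow-adapting background model."""
    return np.abs(frame - bg) > TAU_BG

def validate_foreground(fg_mask, frame, prev_frame):
    """Keep only foreground pixels supported by independent motion evidence.
    Here a simple frame difference stands in for the paper's moving-object
    model built from foreground and background statistics."""
    moving = np.abs(frame - prev_frame) > TAU_FD
    return fg_mask & moving

def process_sequence(frames):
    """Run the pipeline over a grayscale sequence of shape (T, H, W)."""
    bg = frames[0].astype(np.float64)
    prev = frames[0].astype(np.float64)
    masks = []
    for frame in frames[1:]:
        frame = frame.astype(np.float64)
        fg = raw_foreground(bg, frame)
        fg = validate_foreground(fg, frame, prev)
        bg = update_background(bg, frame, fg)
        prev = frame
        masks.append(fg)
    return masks
```

The key design point this sketch captures is that detection and adaptation are decoupled: validation removes spurious foreground before it is fed back into the background update, so slowly moving or stopped vehicles are not absorbed into the background at the same rate as true background changes.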

Author information

Corresponding author

Correspondence to Sen-Ching S. Cheung.

Cite this article

Cheung, SC.S., Kamath, C. Robust Background Subtraction with Foreground Validation for Urban Traffic Video. EURASIP J. Adv. Signal Process. 2005, 726261 (2005). https://doi.org/10.1155/ASP.2005.2330

Keywords and phrases

  • background subtraction
  • foreground validation
  • urban traffic video