Video analysis-based vehicle detection and tracking using an MCMC sampling framework
- Jon Arróspide^{1},
- Luis Salgado^{1} and
- Marcos Nieto^{2}
https://doi.org/10.1186/1687-6180-2012-2
© Arrospide et al.; licensee Springer. 2012
Received: 15 May 2011
Accepted: 6 January 2012
Published: 6 January 2012
Abstract
This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Alternatively, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, in this study a likelihood model that combines appearance analysis with information from motion parallax is introduced. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
1 Introduction
Signal processing techniques have been widely used in sensing applications to automatically characterize the environment and understand the scene. Typical problems include ego-motion estimation, obstacle detection, and object localization, monitoring, and tracking, which are usually addressed by processing the information coming from sensors such as radar, LIDAR, GPS, or video-cameras. Specifically, methods based on video analysis play an important role due to their low cost, the striking increase of processing capabilities, and the significant advances in the field of computer vision.
Naturally, object localization and monitoring are crucial for a good understanding of the scene. However, they play an especially critical role in safety applications, where the objects may constitute a threat to the observer or to other individuals. In particular, the tracking of vehicles in traffic scenarios from an on-board camera constitutes a major focus of scientific and commercial interest, as vehicles cause the majority of traffic accidents.
Video-based vehicle detection and tracking have been addressed in a variety of ways in the literature. The former aims at localizing vehicles by exhaustive search in the images, whereas the latter aims to keep track of already detected vehicles. As regards vehicle detection, since exhaustive image search is costly, most of the methods in the literature proceed in a two-stage fashion: hypothesis generation, and hypothesis verification. The first usually involves a rapid search, so that the image regions that do not match an expected feature of the vehicle are disregarded, and only a small number of regions potentially containing vehicles are further analyzed. Typical features include edges [1], color [2, 3], and shadows [4]. Many techniques based on stereovision have also been proposed (e.g., [5, 6]), although they involve a number of drawbacks compared to monocular methods, especially in terms of cost and flexibility.
Verification of hypotheses is usually addressed through model-based or appearance-based techniques. The former exploit a priori knowledge of the structure of the vehicles to generate a description (i.e., the model) that can be matched with the hypotheses to decide whether they are vehicles or not. Both rigid (e.g., [7]) and deformable (e.g., [8]) vehicle models have been proposed. Appearance-based techniques, in contrast, involve a training stage in which features are extracted from a set of positive and negative samples to design a classifier. Neural networks [9] and support vector machines (SVM) [10, 11] are extensively used for classification, while many different techniques have been proposed for feature extraction. Among others, histograms of oriented gradients (HOG) [12, 13], principal component analysis [14], Gabor filters [11] and Haar-like features [15, 16] have been applied to derive the feature set for classification.
Direct use of many of these techniques is very time-consuming and thus unrealistic in real-time applications. Therefore, in this study we propose a vehicle detection method that exploits the intrinsic structure of the vehicles in order to achieve good detection results while involving a small feature space (and hence low computational overhead). The method combines prior knowledge on the structure of the vehicle, based on the analysis of vertical symmetry of the rear, with appearance-based feature training using a new HOG-based descriptor and SVM. Additionally, a new database containing vehicle and non-vehicle images has been generated and made public, which is used to train the classifier. The database distinguishes between vehicle instances depending on their relative position with respect to the camera, and hence allows for an adaptation of the feature selection and the classifier in the training phase according to the vehicle pose.
In regard to object tracking, feature-based and model-based approaches have been traditionally utilized. The former aim to characterize objects by a set of features (e.g., corners [17] and edges [18] have been used to represent vehicles) and to subsequently track them through inter-frame feature matching. In contrast, model-based tracking uses a template that represents a typical instance of the object, which is often dynamically updated [19, 20]. Unfortunately, both approaches are prone to errors in traffic environments due to the difficulty in extracting reliable features or in providing a canonical pattern of the vehicle.
To deal with these problems, many recent approaches to object tracking adopt a probabilistic framework. In particular, the Bayesian approach [21, 22], especially in the form of particle filtering, has been used in many recent studies (e.g., [23–25]) to model the inherent degree of uncertainty in the information obtained from image analysis. Bayesian tracking of multiple objects can be found in the literature both using individual Kalman or particle filters (PF) for each object [24, 26] and using a joint filter for all of the objects [27, 28]. The latter is better suited for applications in which there is some degree of interaction among objects, as it allows the relations among objects to be controlled in a common dynamic model (these are much harder to handle through individual PFs [29]). Nevertheless, the computational complexity of traditional joint-state importance sampling strategies grows exponentially with the number of objects, which results in degraded performance with respect to independent PF-based tracking when there are several participants (as occurs in a traffic scenario). Some recent studies, especially relating to radar/sonar tracking applications [30], resort to finite set statistics (FISST) and use random sets rather than vectors to model the multiple-object state, which is especially suitable for cases where the number of objects is unknown.
On the other hand, PF-based object tracking methods found in the literature resort to appearance information for the definition of the observation model. For instance, in [23], a likelihood model comprising edge and silhouette observations is employed to track the motion of humans. In turn, the appearance-based model used in [27] for ant tracking consists of simple intensity templates. However, appearance-only models are bound to succeed only under controlled scenarios, such as those in which the background is static. In contrast, the considered on-board traffic monitoring scenarios entail a dynamically changing background and varying illumination conditions, which affect the appearance of the vehicles.
In this study, we present a new framework for vehicle tracking which combines efficient sampling, handling of vehicle interaction, and reliable observation modeling. The proposed method is based on a Markov chain Monte Carlo (MCMC) approach to sampling (instead of traditional importance sampling), which renders joint-state modeling of the objects affordable, while also making interaction modeling easy to accommodate. In effect, driver decisions are affected by neighboring vehicle trajectories (vehicles tend to occupy free space), and thus an interaction model based on Markov random fields (MRF) [31] is introduced to manage inter-vehicle relations. In addition, an enriched observation model is proposed, which fuses appearance information with motion information. Indeed, motion is an inherent feature of vehicles and is considered here through the geometric analysis of the scene. Specifically, the projective transformation relating the road plane between consecutive time points is instantaneously derived and filtered temporally within a Kalman estimation framework. The difference between the current image and the previous image warped with this projectivity allows for the detection of regions likely featuring motion. Most importantly, the combination of appearance and motion-based information provides robust tracking even if one of the sources is temporarily unreliable or unavailable. The proposed system has been proven to successfully track vehicles in a wide variety of challenging driving situations and to outperform existing methods.
2 Problem statement and proposed framework
The aim is to estimate the posterior density of the state given all available measurements, which obeys the standard Bayesian recursion

$$p\left({\mathbf{s}}_{k}|{\mathbf{z}}_{1:k}\right)=\frac{p\left({\mathbf{z}}_{k}|{\mathbf{s}}_{k}\right)\phantom{\rule{0.3em}{0ex}}p\left({\mathbf{s}}_{k}|{\mathbf{z}}_{1:k-1}\right)}{p\left({\mathbf{z}}_{k}|{\mathbf{z}}_{1:k-1}\right)}\phantom{\rule{2em}{0ex}}\left(1\right)$$

where z_{1:k} integrates all the measurements up to time k [21]. Unfortunately, the analytical solution is intractable except for a set of restrictive cases. In particular, when the state sequence evolves according to a known linear process with Gaussian noise and the measurement is a known linear function of the state (also with Gaussian noise), the Kalman filter constitutes the optimal algorithm to solve the Bayesian tracking problem. However, these conditions are highly restrictive and do not hold for many practical applications. Hence, a number of suboptimal algorithms have been developed to approximate the analytical solution. Among them, particle filters (also known as bootstrap filtering or the condensation algorithm) play an outstanding role and have been used extensively to solve problems of very different natures. The key idea of particle filters is to represent the posterior probability density function by a set of random discrete samples (called particles). In the most common approach to particle filtering, known as importance sampling, the samples are drawn independently from a proposal distribution q(·), called the importance density.
where the state of the r-th particle at time k is denoted ${\mathbf{s}}_{k}^{\left(r\right)}$, N is the number of particles, and c is the inverse of the evidence factor in the denominator of (1). As opposed to importance sampling, a record of the current state is kept, and each new sample is generated from a proposal distribution that depends on the current sample, thus forming a Markov chain. The proposal distribution is usually chosen to be simple so that samples can easily be drawn. The advantage of MCMC methods is that the complexity increases only linearly with the number of objects, in contrast to importance sampling, in which the complexity grows exponentially [27]. This implies that, using the same computational resources, MCMC will be able to generate a larger number of particles and hence better approximate the posterior distribution. The potential of MCMC has been shown for processing data from different sensors, e.g., for target tracking in radar [32] or video-based ant tracking [27]. An MCMC framework is thus used in this study for vehicle tracking.
This framework requires that the observation model, p(z_{ k } | s _{ k }), and the dynamic or motion model, p(s_{ k } |s_{k - 1}), be defined. Selection of these models is a key aspect of the performance of the framework. In particular, in order to define a scheme that can lead to improved performance in an MCMC-based Bayesian framework, we first identify the weaknesses of state-of-the-art methods in the definition of these models. Regarding the observation model, as stated in Section 1, most methods in the literature resort to appearance-based models, typically using templates or some features that characterize the objects of interest. Although these kinds of models perform well when applied to controlled scenarios, they prove insufficient for the traffic scenario. In this environment the background changes dynamically, and so do weather and illumination conditions, which limits the effectiveness of appearance-only models. In addition, the appearance of vehicles themselves is very heterogeneous (e.g., in color and size), thus making their modeling much more challenging.
These limitations in the design of the observation model are addressed twofold. First, rather than the usual template matching methods, a probabilistic approach is taken to define the appearance-based observation model using the Expectation-Maximization technique for likelihood function optimization. Additionally, we extend the observation model so that it not only includes a set of appearance-based features, but also considers a feature that is inherent to vehicles, i.e., their motion, so that it is more robust to changes in the appearance of the objects. In particular, the model for the observation of motion is based on the temporal alignment of the images in the sequence through the analysis of multiple-view geometry.
As regards the motion model, it is designed under the assumption that vehicle velocity can be approximated as locally constant, which is valid in highway environments. As a result, the evolution of a vehicle's position can be described by a first-order linear model. However, linearity is lost due to the perspective effect in the acquired image sequence. To preserve linearity we resort to a plane rectification technique, usually known as inverse perspective mapping (IPM) [33]. This computes the projective transformation, T, that produces an aerial or bird's-eye view of the scene from the original image. The image resulting from plane rectification will be referred to as the rectified domain or the transformed domain. In the rectified domain, the motion of vehicles can be safely described by a first-order linear equation with added random noise.
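To make the rectification step concrete, the following minimal sketch maps image points into the bird's-eye domain with a projectivity T; the matrix values are illustrative placeholders, not a calibrated IPM.

```python
import numpy as np

def to_birds_eye(points, T):
    """Map image points to the rectified (bird's-eye) domain via homography T.

    points: (N, 2) array of pixel coordinates in the original image.
    T: 3x3 projective transformation (the IPM homography).
    """
    pts = np.asarray(points, dtype=float)
    hom = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    mapped = hom @ T.T                               # apply the projectivity
    return mapped[:, :2] / mapped[:, 2:3]            # dehomogenize

# Hypothetical IPM matrix (placeholder values, not from a real calibration).
T = np.array([[1.0, 0.0,   0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.001, 1.0]])
rectified = to_birds_eye([[320.0, 400.0]], T)
```

Back-projection to the original domain simply uses `np.linalg.inv(T)` in place of T, mirroring T^{-1} in the text.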
One important limitation regarding the dynamic model in existing methods is the interaction treatment. Most approaches to multiple vehicle tracking involve independent motion models for each vehicle. However, this requires an external method for handling of interaction, and often this is simply disregarded. In contrast, we have designed an MRF-based interaction model that can be easily integrated with the above-mentioned individual vehicle dynamic model.
Finally, a method is necessary to detect new vehicles in the scene, so that they can be integrated in the tracking framework. This is addressed in the current work by using a two-step procedure composed of an initial hypothesis generation and a subsequent hypothesis verification. In particular, candidates are verified using a supervised classification strategy over a new descriptor based on HOG features. The proposed feature descriptor and the classification strategy are explained in Section 6.
3 Vehicle tracking algorithm
The designed vehicle tracking algorithm aims at estimating the position of the vehicles present at each time step of the image sequence. Hence, the state vector is defined to comprise the positions of all the vehicles, ${\mathbf{s}}_{k}={\left\{{s}_{i,k}\right\}}_{i=1}^{M}$, where s_{i,k} denotes the position of vehicle i, and M is the number of vehicles existing in the image at time k. As stated, the position of a vehicle is defined in the rectified domain given by the transformation T, although back-projection to the original domain is naturally possible via the inverse projective transformation T^{- 1}.
This implies that, if the posterior probability of the candidate sample is larger than that of ${\mathbf{s}}_{k}^{\left(\tau \right)}$, the candidate sample is accepted; if it is smaller, it is accepted with probability equal to the ratio between them. The latter case can be readily simulated by selecting a random number t from a uniform distribution over the interval (0, 1), and then accepting the candidate sample if $A\left({{\mathbf{s}}^{\prime}}_{k},\phantom{\rule{2.77695pt}{0ex}}{\mathbf{s}}_{k}^{\left(\tau \right)}\right)>t$. In the case of acceptance, ${\mathbf{s}}_{k}^{\left(\tau +1\right)}={{\mathbf{s}}^{\prime}}_{k}$. Otherwise, the previous sample is repeated, ${\mathbf{s}}_{k}^{\left(\tau +1\right)}={\mathbf{s}}_{k}^{\left(\tau \right)}$.
3.1 Summary of the sampling algorithm
- (1)
The average of the particles at the previous time step is taken as the initial state of the Markov chain: ${\mathbf{s}}_{k}^{0}={\sum}_{r}{\mathbf{s}}_{k-1}^{\left(r\right)}/N$.
- (2)
To generate each new sample of the chain, ${\mathbf{s}}_{k}^{\left(\tau +1\right)}$, an object i is picked randomly and a new state ${s}_{i,k}^{\prime}$ is proposed for it by sampling from the proposal distribution, $Q\left({s}_{i,k}^{\prime}|{s}_{i,k}^{\left(\tau \right)}\right)=\mathcal{N}\left({s}_{i,k}^{\prime}|\phantom{\rule{0.3em}{0ex}}{s}_{i,k}^{\left(\tau \right)},\phantom{\rule{2.77695pt}{0ex}}{\sigma}_{q}\right)$. Since the other targets remain unchanged, the candidate joint state is ${{s}^{\prime}}_{k}=\left({\mathbf{s}}_{\backslash i,k}^{\left(\tau \right)},\phantom{\rule{2.77695pt}{0ex}}{s}_{i,k}^{\prime}\right)$.
- (3)
The posterior probability estimate of the proposed sample, $p\left({{\mathbf{s}}^{\prime}}_{k}|{\mathbf{z}}_{1:k}\right)$, is computed according to the Equation (3), which depends on both the motion and the observation models. The motion model, p(s _{ k }|s _{k-1}), is given by Equation (9), while the observation model for a vehicle, p(z_{ i,k } | s_{ i,k } ), is specified in (22).
- (4)
The candidate sample ${{\mathbf{s}}^{\prime}}_{k}$ is accepted with probability $A\left({{\mathbf{s}}^{\prime}}_{k},\phantom{\rule{2.77695pt}{0ex}}{\mathbf{s}}_{k}^{\left(\tau \right)}\right)$ computed as in Equation (4). In the case of acceptance, the new sample of the Markov chain is ${\mathbf{s}}_{k}^{\left(\tau +1\right)}={{\mathbf{s}}^{\prime}}_{k}$, otherwise the previous sample is copied, ${\mathbf{s}}_{k}^{\left(\tau +1\right)}={\mathbf{s}}_{k}^{\left(\tau \right)}$.
- (5)
Finally, only one of every L samples is retained to avoid excessive correlation (thinning), and the first B samples are discarded (burn-in). The final set of N samples provides an estimate of the posterior distribution, and the vehicle position estimates are computed as the average of the samples as in Equation (5).
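The five steps above can be sketched as a single Metropolis-Hastings loop. The posterior below is a toy stand-in for the paper's motion and observation models, and all parameter values (σ_q, thinning factor, burn-in length) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcmc_track(prev_particles, log_posterior, sigma_q=1.0,
               n_keep=100, thin=5, burn_in=50):
    """Metropolis-Hastings sketch of the sampling loop in Section 3.1.

    prev_particles: (N, M, 2) particle set from time k-1 (M vehicles, 2-D positions).
    log_posterior:  callable giving log p(s_k | z_{1:k}) up to a constant
                    (a placeholder for the motion/observation models).
    """
    state = prev_particles.mean(axis=0)       # step 1: chain starts at the mean
    log_p = log_posterior(state)
    samples = []
    for step in range(burn_in + n_keep * thin):
        i = rng.integers(state.shape[0])      # step 2: pick one vehicle at random
        cand = state.copy()
        cand[i] += rng.normal(0.0, sigma_q, size=2)  # Gaussian single-target proposal
        log_p_cand = log_posterior(cand)      # step 3: evaluate the posterior
        # step 4: accept with probability min(1, p(cand)/p(state))
        if np.log(rng.uniform()) < log_p_cand - log_p:
            state, log_p = cand, log_p_cand
        # step 5: discard burn-in and thin by a factor of `thin`
        if step >= burn_in and (step - burn_in) % thin == 0:
            samples.append(state.copy())
    samples = np.array(samples)
    return samples, samples.mean(axis=0)      # particle set and position estimate

# Toy usage: two "vehicles" whose posterior peaks at (0,0) and (5,5).
targets = np.array([[0.0, 0.0], [5.0, 5.0]])
logp = lambda s: -0.5 * np.sum((s - targets) ** 2)
prev = rng.normal(targets, 0.5, size=(50, 2, 2))
particles, estimate = mcmc_track(prev, logp)
```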
4 Motion and interaction model
where ${\sigma}_{m}=\left({\sigma}_{m}^{x},\phantom{\rule{2.77695pt}{0ex}}{\sigma}_{m}^{y}\right)$.
Modeling of vehicle interaction thus requires only the evaluation of an additional factor in the posterior distribution, while producing significant gain in the tracking performance.
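As an illustration of how such a pairwise term can enter the posterior, the sketch below evaluates the log of a product of pairwise MRF potentials that penalize vehicles coming closer than a clearance distance. The exponential-overlap form and the parameter values are assumptions of this sketch, not the paper's calibrated model.

```python
import numpy as np

def interaction_log_potential(positions, lam=1.0, clearance=4.0):
    """Log of prod_{i<j} psi(s_i, s_j) for an MRF pairwise interaction term.

    Each potential psi = exp(-lam * overlap) penalises pairs of vehicles
    that come closer than `clearance` (rectified-domain units); lam and
    clearance are illustrative parameters.
    """
    log_psi = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            overlap = max(0.0, clearance - d)  # intrusion into the clearance zone
            log_psi += -lam * overlap
    return log_psi

far = np.array([[0.0, 0.0], [10.0, 0.0]])   # well separated: no penalty
near = np.array([[0.0, 0.0], [1.0, 0.0]])   # overlapping clearance: penalised
```

In an MCMC setting, this additional factor is simply added to the log-posterior of each candidate joint state, so only the terms involving the moved vehicle need re-evaluation.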
5 Observation model
5.1 Appearance-based analysis
The first part of the observation model deals with the appearance of the objects. The aim is to obtain the probability p_{ a } (z_{ i,k } | s_{ i,k } ) of the current appearance observation given the object state s_{ i,k } (note the subscript a that denotes "appearance"). In other words, we would like to know if the current appearance-related measurements support the hypothesized object state. In order to derive the probability p_{ a } (z_{ i,k } | s_{ i,k } ) we will proceed in two levels. First, the probability that a pixel belongs to a vehicle will be defined according to the observation for that pixel. Second, by analyzing the pixel-wise information around the position given by s_{ i,k } , the final observation model will be defined at region level.
The pixel-wise model aims to provide the probability that a pixel belongs to a vehicle. This will be addressed as a classification problem, and it is therefore necessary to define the different categories expected in the image. In particular, the rectified image (see the example in Figure 2) contains mainly three types of elements: vehicles, road pavement, and lane markings. A fourth class will also be included in the model to account for any other kind of element (such as median stripes or guard rails).
where p(z_{ x } | X_{ i } ) is the likelihood function, P(X_{ i } ) is the prior probability of class X_{ i } , and P (z_{ x } ) is the evidence, computed as $P\left({z}_{x}\right)={\sum}_{i\in \mathcal{S}}p\phantom{\rule{0.3em}{0ex}}\left({z}_{x}|{X}_{i}\right)P\left({X}_{i}\right)$, which is a scale factor that ensures that the posterior probabilities sum to one. Likelihoods and prior probabilities are defined in the following section.
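Numerically, the posterior for one pixel is just the likelihood-prior products normalized by the evidence, as the equation states. A minimal worked example (the numbers are illustrative, not from the paper):

```python
import numpy as np

def pixel_posteriors(likelihoods, priors):
    """Bayes' rule for the pixel classes (vehicle, pavement, lane marking, other).

    likelihoods: p(z_x | X_i) for each class i at one pixel.
    priors:      P(X_i); the evidence P(z_x) = sum_i p(z_x|X_i) P(X_i) normalises.
    """
    joint = np.asarray(likelihoods) * np.asarray(priors)
    return joint / joint.sum()

# Illustrative numbers: a pixel whose observation the vehicle likelihood
# explains best, ordered as (vehicle, pavement, lane marking, other).
post = pixel_posteriors([0.5, 0.1, 0.01, 0.3], [0.2, 0.6, 0.1, 0.1])
```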
5.1.1 Likelihood functions
where Θ_{I,i} = {µ_{I,i}, σ_{I,i}} and Θ_{I} = {Θ_{I,i}}_{i∈{P,L,V}}. Observe that the prior probabilities have been substituted by factors ω_{I,i} to adopt the notation typical of mixture models. The set of unknown parameters is composed of the parameters of the densities and of the mixing coefficients, Θ = {Θ_{I,i}, ω_{I,i}}_{i∈{P,L,V}}. Thereby, the parameters resulting from the final EM iteration are fed into the Bayesian model defined in Equations (12)-(15). The process is completely analogous for the feature R_{ x }.
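A minimal 1-D EM fit of a three-component Gaussian mixture over intensity, in the spirit of the likelihood fitting described above, can be sketched as follows. The synthetic intensity values, the initialization, and the iteration count are illustrative choices of this sketch, not the paper's.

```python
import numpy as np

def em_gmm_1d(z, n_comp=3, n_iter=50):
    """Minimal 1-D EM for a Gaussian mixture: the mixing weights play the
    role of the omega_i factors, and (mu_i, sigma_i) parametrise each class
    likelihood."""
    mu = np.linspace(z.min(), z.max(), n_comp)   # spread initial means over the range
    sigma = np.full(n_comp, z.std())
    w = np.full(n_comp, 1.0 / n_comp)
    for _ in range(n_iter):
        # E-step: responsibilities proportional to w_i N(z_n | mu_i, sigma_i)
        g = w * np.exp(-0.5 * ((z[:, None] - mu) / sigma) ** 2) / sigma
        g /= g.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = g.sum(axis=0)
        w = nk / len(z)
        mu = (g * z[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((g * (z[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return w, mu, sigma

# Synthetic intensities: dark vehicles, grey pavement, bright lane markings
# (means and counts are invented for the example).
rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(30, 5, 300),
                    rng.normal(120, 10, 500),
                    rng.normal(220, 8, 200)])
w, mu, sigma = em_gmm_1d(z)
```

The fitted (w, mu, sigma) would then feed the Bayesian pixel model, exactly as the final EM iterates do in the text.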
5.1.2 Appearance-based likelihood model
where R_{ a } is the region of size (w + 1) × h/ 2 above s_{ i,k } , R_{ a } = {x_{ i,k } - w/ 2 ≤ x < x_{ i,k } + w/ 2; y_{ i,k } - h/ 2 ≤ y < y_{ i,k } }, and R_{ b } is the region of the same size below s_{ i,k } , R_{ b } = {x_{ i,k } - w/ 2 ≤ x < x_{ i,k } + w/ 2; y_{ i,k } < y ≤ y_{ i,k } + h/ 2}.
5.2 Motion-based analysis
As mentioned above, the second source of information for the definition of the likelihood model is motion analysis. Two-view geometry fundamentals are used to relate the previous and current views of the scene. In particular, the homography (i.e., projective transformation) of the road plane is estimated between these two points in time. This allows us to generate a prediction of the road plane appearance in future instants. However, vehicles (which are generally the only objects moving on the road plane) feature inherent motion in time, hence their projected position in the plane differs from that observed. The regions involving motion are identified through image alignment of the current image and the previous image warped with the homography. These regions will correspond to vehicles with high probability.
5.2.1 Homography calculation
The first step toward image alignment is the calculation of the road plane homography between consecutive frames. As shown in [37], the homography that relates the points of a plane between two different views can be obtained from a minimum of four feature correspondences by means of the direct linear transformation (DLT). Indeed, in many applications the texture of the planar object makes it possible to obtain numerous feature correspondences using standard feature extraction and matching techniques, and to subsequently find a good approximation to the underlying homography. However, this is not the case in traffic environments: the road plane is highly homogeneous, and hence most of the points delivered by feature detectors applied on the images belong to background elements or vehicles, and few correspond to the road plane. Therefore, the resulting dominant homography (even if using robust estimation techniques) is in general not that of the road plane.
To overcome this problem, we propose to exploit the specific nature of the environment. In particular, highways are expected to have different kind of markings (mostly lane markings) painted on the road. Therefore, we propose to first use a standard lane marking detector (such as the ones described in [33–35]) and then to restrict the feature search area in extended regions around lane markings. Nevertheless, the resulting set of correspondences will still typically be scarce, and some of them may be incorrect or inaccurate, depending on the sharpness of the lane marking corners and on the resolution of the image around them. Hence, the instantaneous homography computed from feature correspondences using DLT might be highly unreliable (errors in one of the points will have a large impact in the solution to the equation system of DLT), and sometimes the number of points is not even sufficient to compute it.
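For reference, the DLT estimate from a set of correspondences is the null vector of the stacked constraint matrix, obtainable via SVD. The example below checks against a synthetic road-plane homography (matrix values invented for the check); in practice a robust estimator such as RANSAC would wrap this step to reject the mismatches discussed above.

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transformation: estimate the 3x3 homography H mapping
    src -> dst from >= 4 point correspondences (e.g., lane-marking corners)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)       # solution: right singular vector of the
    H = Vt[-1].reshape(3, 3)          # smallest singular value
    return H / H[2, 2]                # fix the scale ambiguity

# Synthetic check: points generated by a known road-plane homography.
H_true = np.array([[1.1,  0.02,  3.0],
                   [0.01, 0.95, -2.0],
                   [1e-4, 2e-4,  1.0]])
src = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.], [50., 30.]])
hom = np.hstack([src, np.ones((5, 1))]) @ H_true.T
dst = hom[:, :2] / hom[:, 2:3]
H_est = dlt_homography(src, dst)
```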
The process and measurement noise, w_{ k } and v _{ k }, are assumed to be given by independent Gaussian distributions, p(w) ~ N(0, Q) and p(v) ~ N(0, R). Observe that the measurement vector is composed of the elements of the instantaneous homography matrix, H ^{ k } , computed from image correspondences. As stated above, measurements are expected to be prone to error due to the usually small set of correspondences available, hence the measurement error should be tuned to be larger than the process noise (in the proposed configuration it is Q = 10^{-6}, R = 10^{-3}).
The designed filter provides corrected estimates for the homography at time k, ${\widehat{H}}^{k}$, built from the posterior estimate of the filter state, ${\widehat{\mathbf{x}}}_{k}$. Most importantly, this measure can be used as a prediction for the homography in the next time point. This prediction provides an effective reference to evaluate whether or not the computed instantaneous measurement may be erroneous. Indeed, at the current time k, we can compare the instantaneous homography H^{ k } to the prediction made in the previous time instant ${\widehat{H}}^{k-1}$: if H^{ k } is close to the expected value ${\widehat{H}}^{k-1}$ then the filter equations will be conveniently updated; in contrast, if the matrices are significantly different, then it is natural to think that noisy correspondences were involved in the calculation of H^{ k } .
The distance between matrices is measured according to the norm of the matrix of differences. Specifically, the norm induced by the 2-norm of a Euclidean space is used. This is obtained by performing singular value decomposition (SVD) of the matrix and retaining its largest singular value [38]. Incoming matrices are accepted and introduced into the Kalman filtering framework only if $\left|\right|{H}^{k}-{\widehat{H}}^{k-1}\left|\right|\phantom{\rule{2.77695pt}{0ex}}<\phantom{\rule{2.77695pt}{0ex}}{t}_{a}$. Otherwise, the measured homography is deemed unreliable and the predicted homography is used. The threshold t_{ a } modulates the maximum acceptable distance to the predicted matrix, which depends on the kinematic restrictions of the platform on which the camera is mounted.
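A gated filtering step along these lines might look as follows. The random-walk state model and the element-wise scalar gain are simplifying assumptions of this sketch (the paper does not spell out its full state-space model here), while Q = 10^{-6}, R = 10^{-3}, and t_a = 60 follow the values given in the text.

```python
import numpy as np

def spectral_norm(M):
    """2-norm of a matrix: its largest singular value, via SVD."""
    return np.linalg.svd(M, compute_uv=False)[0]

def gated_kalman_step(x, P, H_meas, Q=1e-6, R=1e-3, t_a=60.0):
    """One gated Kalman step over the 9 homography elements (sketch).

    x, P: per-element state estimate and variance from the previous step.
    H_meas: instantaneous homography computed from correspondences.
    """
    x_pred, P_pred = x, P + Q                    # predict (random-walk model)
    H_pred = x_pred.reshape(3, 3)
    if spectral_norm(H_meas - H_pred) < t_a:     # gate against the prediction
        z = H_meas.ravel()
        K = P_pred / (P_pred + R)                # per-element scalar gain
        x = x_pred + K * (z - x_pred)            # update with the measurement
        P = (1.0 - K) * P_pred
    else:                                        # reject the noisy measurement
        x, P = x_pred, P_pred                    # and keep the prediction
    return x, P

x = np.eye(3).ravel()                            # initial homography estimate
P = np.full(9, 1e-3)
H_good = np.eye(3) + 1e-3                        # consistent measurement: accepted
H_bad = np.eye(3) * 100.0                        # wildly inconsistent: gated out
x1, P1 = gated_kalman_step(x, P, H_good)
x2, P2 = gated_kalman_step(x1, P1, H_bad)
```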
In the case of highways, vehicle dynamics are bounded by the maximum speed, the maximum turning angle (i.e., yaw angle, β), and the maximum variation in the pitch angle, α, for a given frame rate. The maximum velocity is considered to be v = 120 km/h (33.3 m/s), as enforced by most national governments. Additionally, a maximum pitch angle variation of α = ± 5° is considered, and an upper bound of β = ± 3° is inferred for the turning angle according to standard road geometry design rules. Taking into account these bounds, and assuming an image processing rate of at least 1 fps, the threshold is experimentally found to be t_{ a } = 60.
5.2.2 Motion-based likelihood model
As suggested in the overview of Section 5.2, the reason for image alignment is that all elements in the road plane (except for the points of the vehicles that belong to this plane) are static, and thus their actual position matches that projected by the homography. In contrast, vehicles are moving, hence their positions in the road plane at time k significantly differ from those projected by the homography, which assumes they are static. Therefore, the differences between the image at time k and the image at time k - 1 warped with ${\widehat{H}}^{k}$ shall be null for all the elements of the road plane except for the contact zones of the vehicles with the road. The differences in these regions are more significant when the velocity of the vehicles is high. Figure 4d illustrates the difference between the current image (Figure 4b) and the previous image warped (Figure 4c) for the example referred to below. As can be observed, whiter pixels, indicating significant difference, appear in the areas of motion of the vehicles on the road. The transformation of the elements outside the road is naturally not well represented by ${\widehat{H}}^{k}$ (this is the homography of the road plane), and thus random regions of strong differences arise in the background, which will be considered clutter.
The pixel-wise difference between the current image and the warped previous image provides information on the likelihood of the current object state candidate, s_{ i,k }. Analogously to the appearance-based likelihood modeling, the region around the vehicle position indicated by s_{ i,k } will be evaluated in order to derive its likelihood. Also, to preserve the duality with the appearance-based analysis, the processing is shifted to the rectified domain using the transformation T defined in Section 2. The resulting image, denoted D_{ r }, is illustrated in Figure 4e for the previous example. In particular, the likelihood of belonging to a region of motion is maximum at x_{max} = argmax_{x}(D_{ r }(x)), hence a map of probabilities that the pixel x belongs to a moving region, denoted p(m|x), can be inferred for the whole image as p(m|x) = D_{ r }(x)/D_{ r }(x_{max}).
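The normalization of the motion map can be sketched directly; the toy frames below stand in for the rectified current image and the warped previous image.

```python
import numpy as np

def motion_probability_map(curr, prev_warped):
    """Pixel-wise motion cue: absolute difference between the current frame
    and the previous frame warped with the road homography, normalised so
    that the strongest difference gets probability 1, i.e.
    p(m|x) = D_r(x) / D_r(x_max)."""
    D = np.abs(curr.astype(float) - prev_warped.astype(float))
    return D / D.max()

# Toy frames: a bright "vehicle" patch that moved two pixels between frames.
prev_warped = np.zeros((10, 10)); prev_warped[4:6, 2:4] = 200
curr = np.zeros((10, 10));        curr[4:6, 4:6] = 200
pmap = motion_probability_map(curr, prev_warped)
```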
Note that, although the product of likelihoods could have been used instead, the mean is preferred in order to prevent the calculation from being biased by excessively small likelihood values.
6 Vehicle detection
Up to this point, the method for vehicle tracking has been explained. However, in normal driving situations it is natural that vehicles come in and out of the field of view of the camera throughout the sequence of images. While management of outgoing vehicles is fairly straightforward (the track simply exceeds the limits of the image), a method for incoming vehicles must be designed. The method proposed in this study follows a two-step approach. In the first stage, hypotheses for vehicle positions are made using the results of appearance-based classification explained in Section 5.1. In the second, they are verified according to the analysis of a set of features in their associated regions in the original domain.
6.1 Hypothesis generation
Exhaustive search of a certain pattern in the entire image is too time-consuming for applications requiring real-time operation. Hence, it is common to perform some kind of fast pre-processing that restricts the search areas. In this case, we exploit the information extracted from the construction of likelihood models for tracking and use it to generate a set of candidate regions that will be further analyzed. In particular, two types of inputs could be used that correspond to the appearance-based analysis in Section 5.1 and the motion-based analysis in Section 5.2. As referred to in the corresponding section, the latter usually involves noise due to background structures, thus appearance-based information is more suitable for hypothesis generation.
Naturally, regions corresponding to the tracked vehicles should exist in B_{ m } . Any additional region in B_{ m } is regarded as a potential new vehicle in the image and is further analyzed in the hypothesis verification stage. In particular, in the example in Figure 5.c three regions are obtained: the upper two regions correspond to existing vehicles, labeled 1 and 2, while the small region in the lower left corner constitutes a potential new vehicle (in this case it is actually a vehicle, as can be observed in Figure 5.a). Since only the lower part of the vehicles is reliable in the rectified domain, candidates are characterized by their position and width. As potential vehicles are verified according to their appearance in the original image, their position and width in this domain are computed by means of the inverse transformation T^{-1}. Finally, a 1:1 aspect ratio is initially assumed for the vehicle so that a bounding box R_{ h } can be hypothesized for vehicle verification.
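Hypothesis generation from B_m can be sketched with standard connected-component labeling, e.g. via scipy.ndimage; the names below (generate_hypotheses, tracked_boxes) are illustrative, not from the paper:

```python
import numpy as np
from scipy import ndimage

def generate_hypotheses(b_m, tracked_boxes):
    """Label connected regions in the binary map B_m and keep those not
    overlapping any currently tracked vehicle as new-vehicle candidates.

    b_m: binary mask in the rectified domain.
    tracked_boxes: list of (row_slice, col_slice) for existing tracks.
    Candidates are (x_center, bottom_row, width), since only the lower
    part of the vehicle is reliable in this domain.
    """
    labels, _ = ndimage.label(b_m)
    candidates = []
    for rows, cols in ndimage.find_objects(labels):
        overlaps = any(rows.start < tr.stop and tr.start < rows.stop and
                       cols.start < tc.stop and tc.start < cols.stop
                       for tr, tc in tracked_boxes)
        if overlaps:
            continue  # region explained by an existing track
        width = cols.stop - cols.start
        x_center = (cols.start + cols.stop) / 2.0
        candidates.append((x_center, rows.stop, width))
    return candidates

b_m = np.zeros((10, 10), dtype=int)
b_m[1:3, 1:3] = 1   # region explained by a tracked vehicle
b_m[6:8, 6:9] = 1   # unexplained region -> new-vehicle hypothesis
hyps = generate_hypotheses(b_m, tracked_boxes=[(slice(1, 3), slice(1, 3))])
# hyps == [(7.5, 8, 3)]: center x, bottom row, width of the new region
```

From the recovered position and width, the bounding box R_h in the original image would then be obtained with T^{-1} and the assumed 1:1 aspect ratio.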
6.2 Hypothesis verification
Vehicle verification relies on a supervised classification stage using SVM. A database of vehicle rear images is generated for the training of the classifier, as explained in Section 6.2.2. Most importantly, the database separates images according to the region in which the vehicle is found (close/middle range in the front, close/middle range in the left, close/middle range in the right, and far range). Indeed, the view of the vehicle rear changes across these areas, which affects its intrinsic features. This is taken into account in the design of the feature description, which adapts to the particularities of the different areas. Besides, a different classifier is trained for each of them using the corresponding subset of images in the database.
As for the feature description, a new descriptor is proposed based on two characteristics that are inherent to the vehicles: high edge content and symmetry. Indeed, the method automatically adapts the area for feature extraction according to a vertical symmetry-based local window refinement. This allows for the correction of position offsets in the hypothesis generation stage and for the adaptation to the vehicle rear contour. Regarding the feature extraction within the refined region, a new descriptor that exploits the inherently rectangular structures of the vehicle rear is designed. The descriptor, called CR-HOG, is based on the analysis of HOG in concentric rectangles around the center of symmetry.
6.2.1 CR-HOG feature extraction
HOGs evaluate local histograms of image gradient orientations in a dense grid. The underlying idea is that the local appearance and shape of the objects can often be well characterized by the distribution of the local edge directions, even if the corresponding edge positions are not accurately known.
This idea is implemented by dividing the image into small regions called cells. Then, for each cell, a histogram of the gradient orientations over the pixels is extracted. The original HOG technique, proposed by Dalal and Triggs [12], presents two different kinds of configurations, called Rectangular HOG (R-HOG) and circular HOG (C-HOG), depending on the geometry of the cells used. Specifically, the former involves a grid of rectangular spatial cells and the latter uses cells partitioned in a log-polar fashion.
In practice, the hypothesized region for vehicle verification, R_{ h } , may not perfectly fit the actual bounding box of the vehicle in terms of size and alignment. Specifically, it is often the case that the vehicle is not perfectly centered in R_{ h } , especially along the horizontal axis. Therefore, direct application of CR-HOG (or of standard HOG) over R_{ h } may result in degraded performance. Instead, we refine the region likely containing the vehicle through the analysis of vertical symmetry in the intensity of the region. In particular, the subregion within R_{ h } giving the maximum degree of vertical symmetry is kept for HOG computation. Vertical symmetry is calculated using the method in [39]. As a result, we obtain the axis of vertical symmetry, x_{ s } , and the width of the region that maximizes the symmetry measure, w_{ s } . The height h_{ s } of the window for HOG application is taken as that of R_{ h } , and its center is thus given by c_{ s } = (x_{ s } , h_{ s } /2).
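The window refinement can be sketched as an exhaustive search over candidate widths and offsets; note that the symmetry score below is a simplified mean-absolute-difference measure against the horizontally mirrored window, not the exact method of [39]:

```python
import numpy as np

def refine_by_symmetry(gray, min_width=8):
    """Find the vertical symmetry axis x_s and width w_s maximizing a
    simple intensity-symmetry score inside the hypothesized region.

    Simplified sketch: score = negative mean |window - mirror(window)|.
    Returns (x_s, w_s) as in the text; the height is kept as that of R_h.
    """
    h, w = gray.shape
    best = (-np.inf, w // 2, w)
    for ws in range(min_width, w + 1, 2):      # even candidate widths
        for x0 in range(0, w - ws + 1):
            win = gray[:, x0:x0 + ws].astype(float)
            score = -np.abs(win - win[:, ::-1]).mean()
            if score > best[0]:
                best = (score, x0 + ws // 2, ws)
    _, x_s, w_s = best
    return x_s, w_s

# A mirrored block (columns 2-9) embedded in an asymmetric row
row = [9, 8, 1, 2, 3, 4, 4, 3, 2, 1, 7, 6]
patch = np.array([row] * 3, dtype=float)
x_s, w_s = refine_by_symmetry(patch)
# the symmetric sub-window is found: x_s = 6, w_s = 8
```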
Figure 6.b illustrates the window adaptation approach based on symmetry analysis. Observe that the refined vertical side limits (colored in red) fit the bounding edges of the vehicle rears much better. In practice, the area for calculation of CR-HOG is extended by 10% so that the outer edges of the vehicle are also accounted for in the descriptor.
The bins have been shifted so that the vertical and horizontal orientations, which are very frequent in the rear of vehicles, are in the middle of their respective bins. This way, small fluctuations around 0° and 90° will not affect the descriptor. The histogram of each cell is finally normalized by the area of the cell so that histograms of different cells are in the same order of magnitude. The CR-HOG descriptor is composed of the normalized histogram of orientations of all the cells. The optimal configuration of the number of orientation bins, n, and the number of cells, b, is discussed in Section 7 for each region of the image.
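A possible reading of the CR-HOG computation (concentric rectangular cells, half-bin orientation shift, per-cell area normalization) is sketched below; the exact cell geometry is our assumption where the text does not pin it down, and here cells are b nested rectangular rings of equal relative thickness with unsigned orientations:

```python
import numpy as np

def cr_hog(gray, n_bins=8, n_cells=2):
    """Sketch of a CR-HOG descriptor: gradient-orientation histograms
    over concentric rectangular cells around the window center."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    bin_w = 180.0 / n_bins
    # half-bin shift keeps the frequent 0/90 degree edges mid-bin
    idx = np.floor((ang + bin_w / 2.0) / bin_w).astype(int) % n_bins

    h, w = gray.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # ring index: 0 = innermost rectangle, n_cells - 1 = outermost
    r = np.maximum(np.abs(yy - cy) / (h / 2.0), np.abs(xx - cx) / (w / 2.0))
    cell = np.minimum((r * n_cells).astype(int), n_cells - 1)

    desc = np.zeros(n_cells * n_bins)
    for c in range(n_cells):
        mask = cell == c
        hist = np.bincount(idx[mask], weights=mag[mask], minlength=n_bins)
        desc[c * n_bins:(c + 1) * n_bins] = hist / mask.sum()  # area norm
    return desc

gray = np.arange(64.0).reshape(8, 8)
desc = cr_hog(gray, n_bins=8, n_cells=2)
# one 8-bin histogram per concentric cell -> 16 features
```

Note the small feature count (b · n, e.g. 16 values) compared to a dense-grid HOG, which is what makes the descriptor attractive for real-time use.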
6.2.2 Classification stage
The CR-HOG descriptors are fed into a Support Vector Machine-based classifier. A new database containing 4,000 positive vehicle images and 4,000 negative images is used to train and test the classifier (it is publicly accessible on the Internet [40]). The core of the database is composed of images from our own collection. Additionally, images extracted from the Caltech [41, 42] and the TU Graz-02 [43, 44] databases have also been included in the data set. The joint database consists of images of resolution 64 × 64 acquired from a vehicle-mounted forward-looking camera. Each image provides a view of the rear of a single vehicle. Some images contain the vehicle completely, while others have been cropped to contain it only partially (all images contain at least 50% of the vehicle rear) in order to simulate putative results of the hypothesis generator.
Images involve many different viewpoints of the vehicle rear corresponding to vehicles in different locations relative to the vehicle in which the camera is mounted. Specifically, the space is divided into four main regions: close/middle range in the front, close/middle range in the left, close/middle range in the right, and far range. The database contains 1,000 images of each of these views. A set of 4,000 images not containing vehicles has also been used to train and test the classifier. The images are selected in such a way that the variability in terms of vehicle appearance, pose, and acquisition conditions (e.g., weather conditions, lighting) is maximized. A classifier based on SVM with a linear kernel is used for each of the four image regions.
7 Experiments and discussion
Experiments on the proposed method have been performed in two parts. On the one hand, the performance of the novel CR-HOG-based approach for vehicle detection is tested on the database described in Section 6.2.2. On the other hand, experiments have been carried out on the complete system integrating vehicle detection and tracking over a wide variety of video sequences.
7.1 Experiments on vehicle detection
The SVM-based classifier for vehicle detection explained in Section 6 has been trained and tested in Matlab using the Bioinformatics Toolbox. The method involves two design parameters, namely the number of orientation bins in the histogram, n, and the number of cells, b. Experiments have been performed on the database for values n = 8, 12, 18 and b = 2, 3, 4. A cross-validation procedure is used to test the method. Specifically, 50% of the images are randomly selected for the training set and the remaining 50% are used for the testing set. This process is repeated 5 times and the average is computed.
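The validation protocol (five random 50/50 splits, averaged accuracy) can be sketched generically; the nearest-centroid scorer below is a hypothetical stand-in for the paper's SVM, used only to keep the example self-contained:

```python
import numpy as np

def repeated_holdout_accuracy(features, labels, train_and_score,
                              n_repeats=5, seed=0):
    """Average accuracy over n_repeats random 50/50 train/test splits,
    mirroring the validation protocol described above."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    scores = []
    for _ in range(n_repeats):
        perm = rng.permutation(n)
        tr, te = perm[:n // 2], perm[n // 2:]
        scores.append(train_and_score(features[tr], labels[tr],
                                      features[te], labels[te]))
    return float(np.mean(scores))

def nearest_centroid(X_tr, y_tr, X_te, y_te):
    """Hypothetical stand-in for the linear-kernel SVM classifier."""
    c0, c1 = X_tr[y_tr == 0].mean(0), X_tr[y_tr == 1].mean(0)
    pred = (np.linalg.norm(X_te - c1, axis=1) <
            np.linalg.norm(X_te - c0, axis=1)).astype(int)
    return float((pred == y_te).mean())

# Toy two-class data standing in for CR-HOG descriptors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
y = np.repeat([0, 1], 50)
acc = repeated_holdout_accuracy(X, y, nearest_centroid)
```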
Classification accuracy rates of CR-HOG (%)

|        | Close/middle |       |       | Left  |       |       | Right |       |       | Far   |       |       |
|--------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
|        | b = 2        | b = 3 | b = 4 | b = 2 | b = 3 | b = 4 | b = 2 | b = 3 | b = 4 | b = 2 | b = 3 | b = 4 |
| n = 8  | 94.88        | 94.98 | 94.68 | 91.04 | 91.18 | 91.16 | 88.58 | 89.14 | 87.94 | 85.92 | 85.86 | 85.76 |
| n = 12 | 94.96        | 94.80 | 95.14 | 91.46 | 91.82 | 91.46 | 89.28 | 89.42 | 88.16 | 85.32 | 85.24 | 85.16 |
| n = 18 | 94.78        | 93.96 | 93.24 | 91.98 | 91.60 | 91.06 | 89.34 | 88.84 | 88.10 | 85.76 | 85.22 | 84.60 |
As a first conclusion of the experiments, we can infer that the accuracy decreases for b = 4 in all the areas of the image. As for the number of orientation bins, a different behavior is observed for the frontal and the side views. Specifically, for the central close/middle and far ranges similar results are obtained for n = 8 and n = 12, while the performance decreases notably for n = 18. In contrast, for the left and right areas a significant accuracy increase is observed from n = 8 to n = 12; a further increase to n = 18 does not bring an additional gain. This contrast is indeed reasonable since, from a completely orthogonal viewpoint, the edges of a vehicle are fairly invariant and mostly vertical and horizontal; conversely, in the side views the upright edges corresponding to the back window and its contour (especially those furthest from the image center) tend to deviate from verticality. Consequently, more variability is found in the gradient orientation map, and therefore more bins are necessary to capture fine detail.
A good trade-off between complexity and performance is achieved by selecting (b, n) = (2, 8) for the close/middle and far ranges, and (b, n) = (3, 12) for the left and right views. This yields respective detection accuracies of 94.88, 85.92, 91.82, and 89.42%, which results in an average correct detection rate of 90.51%. The rate difference between left and right views is due to the particularities of the traffic in the right lane, which usually includes slow vehicles (buses, trucks, vans, etc.). These exhibit a great variety in appearance and hence make classification much more challenging. Naturally, the worst classification rate is obtained for the furthest vehicles, in which the edge details are degraded. The results improve to an overall accuracy of 92.77% when using a Gaussian radial basis function kernel (instead of the linear kernel), with respective correct detection rates of 96.14, 89.92, 94.14, and 90.86% for the different areas. However, the proposed method continuously generates hypotheses for potential vehicles; hence, even if a vehicle is not detected in a given frame, it is usually detected in the following frames. Therefore, the small latency incurred by the linear kernel-based classification is usually negligible and it is not necessary to use more complex kernels.
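As a quick arithmetic check, the 90.51% overall rate quoted above is the plain mean of the four per-region accuracies:

```python
# Per-region accuracies (%) at the selected (b, n) configurations:
# close/middle (2, 8), far (2, 8), left (3, 12), right (3, 12)
rates = [94.88, 85.92, 91.82, 89.42]
average = sum(rates) / len(rates)
# average == 90.51 (to two decimals), the overall correct detection rate
```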
7.2 Experiments on vehicle tracking
In regard to vehicle tracking, the designed method has been tested on a wide variety of sequences recorded in Madrid, Brussels, and Turin. These sequences, which were acquired in several sessions under different driving conditions (i.e., illumination, weather, pavement color, traffic density, etc.), amount to 22:38 min. Test sequences were acquired from a forward-looking digital video camera installed near the rear-view mirror of a vehicle driven on highways. The method is able to operate in near real-time at 10 fps on average on an Intel(R) Core(TM) i5 processor running at 2.67 GHz. The implementation is done in C++.
The above-mentioned test sequences are used to compare the performance of the proposed tracking method with the two methods most widely used in the literature: independent tracking of multiple objects with a Kalman filter assigned to each object (referred to as KF-based tracking), and joint tracking using a particle filter based on importance sampling (SIS-based tracking for short). For the implementation of KF-based tracking, appearance-based region labeling through connected-component analysis is used as in Section 6.1 to locate vehicles in every frame, and tracks are formed over time by matching the regions according to a minimum-distance criterion. As for SIS-based tracking, a sequential resampling scheme is used (see details of the algorithm in [21]). Additionally, the motion model used for SIS-based tracking is exactly that designed for the proposed method, while KF-based tracking uses the same dynamic model for independent motion of vehicles but cannot accommodate any interaction model. Other parameters of the dynamic model are w_{ l } = 90 and d_{ s } = 96. Regarding the observation model, the dimensions of the local windows R_{ a } and R_{ b } are set to w = h = 10. Also, the standard deviation of the proposal density is optimally calculated for the proposed method and for SIS-based tracking as σ_{ q } = (2, 3) and σ_{ q } = (5, 8), respectively. Finally, the same number of samples, N = 250, is used for both methods.
Summary of tracking results

| Method             | Tracking failures | Number of frames | Number of vehicles |
|--------------------|-------------------|------------------|--------------------|
| KF-based tracking  | 36                | 33,454           | 120                |
| SIS-based tracking | 31                | 33,454           | 120                |
| Proposed method    | 9                 | 33,454           | 120                |
7.3 Analysis of computational complexity
Average processing time of the proposed algorithm per block

|                      | Appearance analysis | Motion analysis | Vehicle detection | Sampling | Total |
|----------------------|---------------------|-----------------|-------------------|----------|-------|
| Processing time (ms) | 44.33               | 9.15            | 19.54             | 23.04    | 96.06 |
| Processing load (%)  | 46.15               | 9.52            | 20.35             | 23.98    | 100   |
Comparison of time complexity between SIS-based tracking and the proposed method

|                     | SIS-based tracking | Proposed method |
|---------------------|--------------------|-----------------|
| Number of vehicles  | M                  | M               |
| Number of particles | C^{M}              | C · M           |
| Time complexity     | Ω(C^{M})           | Ω(C · M)        |
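The contrast between exponential and linear particle counts can be made concrete with a toy calculation (C and M as defined here; the function names are ours):

```python
def sis_particles(C, M):
    """Joint-state importance sampling: keeping a fixed per-target
    sampling density C requires a particle count that grows
    exponentially with the number of tracked vehicles M."""
    return C ** M

def mcmc_particles(C, M):
    """MCMC sampling updates one target at a time, so the cost grows
    only linearly with M."""
    return C * M

# e.g. with C = 50 samples per target and M = 3 vehicles:
# SIS needs 50**3 = 125000 particles, MCMC only 50 * 3 = 150
```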
Performance comparison between SIS-based tracking and the proposed method

| Method             | Number of particles | Processing time for sampling (ms) | Average time per frame (ms) | Frame processing rate (fps) | Tracking failures |
|--------------------|---------------------|-----------------------------------|-----------------------------|-----------------------------|-------------------|
| SIS-based tracking | 250                 | 23.55                             | 96.56                       | 10.36                       | 31                |
| SIS-based tracking | 1000                | 114.33                            | 187.34                      | 5.34                        | 22                |
| Proposed method    | 250                 | 23.04                             | 96.06                       | 10.41                       | 9                 |
7.4 Discussion and examples
The process of sample generation in the framework of the Markov chain is superimposed on Figure 10.a3 and Figure 10.a4. In particular, the segment between the previous sample and the proposed sample is colored in green whenever the latter is accepted and in red if it is rejected. As can be observed, accepted samples concentrate in the area of high likelihood (i.e., the transition between road and vehicles), while samples diverging from this area are rejected. The final estimates for vehicles positions are indicated in Figure 10.a5 with white segments underlining the vehicle rears.
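The accept/reject behavior described here follows the Metropolis-Hastings rule; a generic random-walk sketch on a toy log-likelihood (not the paper's joint multi-vehicle state with its MRF interaction prior) is:

```python
import numpy as np

def metropolis_hastings(log_likelihood, x0, sigma_q, n_samples, seed=0):
    """Random-walk Metropolis-Hastings: a proposal drawn from a Gaussian
    around the previous sample is accepted with probability
    min(1, p_new / p_old); otherwise the chain keeps the old sample."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_likelihood(x)
    chain, accepted = [x.copy()], 0
    for _ in range(n_samples):
        prop = x + rng.normal(0.0, sigma_q, size=x.shape)
        lp_prop = log_likelihood(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accepted (green segment)
            x, lp = prop, lp_prop
            accepted += 1
        chain.append(x.copy())                   # rejected keeps x (red)
    return np.array(chain), accepted / n_samples

# Samples concentrate in the high-likelihood area, as in Figure 10
chain, rate = metropolis_hastings(lambda v: -0.5 * np.sum(v ** 2),
                                  x0=[4.0, 4.0], sigma_q=1.0,
                                  n_samples=2000)
```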
As stated, dual modeling from two sources prevents the method from losing track whenever one of the sources is unreliable. This is illustrated in Figure 10.b and Figure 10.c, where the sampling process is depicted analogously to Figure 10.a for the images in Figure 10.b1 and Figure 10.c1. In particular, in Figure 10.b the motion-based observation provides no measurement for the right vehicle (Figure 10.b4); however, this is compensated by the correct appearance-based observation, which avoids particle dispersion. Therefore, the vehicle is correctly tracked, as shown in Figure 10.b5. In addition, the particles for the left vehicle overcome an inaccurate initialization and converge to its actual position due to good appearance-based and motion-based observations. The opposite case is illustrated in Figure 10.c, in which the appearance-based model fails to detect the furthest vehicle (see Figure 10.c3), whereas the region of motion is still observable in the difference between aligned images in Figure 10.c4. The combination of the two sources results in the correct tracking of the vehicle, as shown in Figure 10.c5.
8 Conclusions
In this article, a new probabilistic framework for vehicle detection and tracking has been presented based on MCMC. As regards vehicle detection, a new descriptor, CR-HOG, has been defined based on the extraction of gradient features in concentric rectangular bins. The descriptor has proven to have good discriminative properties using a reduced number of features in a simple linear-kernel SVM classifier, and is thus ideally suited for real-time applications. In addition, the tracker is proven to perform better than state-of-the-art methods based on Kalman and particle filtering in terms of tracking failures. The power of the algorithm lies in the fusion of information of different nature, especially regarding the observation model. In effect, aside from appearance, the method incorporates the analysis of another feature that is inherent to vehicles: their motion. In addition, the MCMC framework is exploited to perform efficient sampling and to avoid the performance degradation of particle filter-based approaches in multiple-object tracking arising from the curse of dimensionality. In summary, the method is able to overcome the common limitations of particle filter-based approaches and to provide robust vehicle tracking in a wide variety of driving situations and environment conditions.
Declarations
Acknowledgments
This study was supported in part by the Ministerio de Ciencia e Innovación of the Spanish Government under projects TEC2007-67764 (SmartVision) and TEC2010-20412 (Enhanced 3DTV).
References
1. Song GY, Lee KY, Lee JW: Proceedings of the IEEE Intelligent Vehicles Symposium. Eindhoven, The Netherlands; 2008.
2. Gao L, Li C, Fang T, Xiong Z: International Conference on Image Analysis and Recognition, Póvoa de Varzim, June 2008. Lecture Notes in Computer Science, vol. 5112. Edited by: Campilho A, Kamel M. Springer, Heidelberg; 2008:142-150.
3. Tsai L-W, Hsieh J-W, Fan K-C: IEEE Trans Image Process. 2007, 16(3):850-864.
4. Liu W, Wen X, Duan B, Yuan H, Wang N: Proceedings of the IEEE Intelligent Vehicles Symposium. Istanbul, Turkey; 2007.
5. Ess A, Leibe B, Schindler K, van Gool L: IEEE Trans Pattern Anal Mach Intell. 2009, 31(10):1831-1846.
6. Toulminet G, Bertozzi M, Mousset S, Bensrhair A, Broggi A: IEEE Trans Image Process. 2006, 15(8):2364-2375.
7. Tan TN: IEEE Trans Image Process. 2000, 9(8):1343-1356. doi:10.1109/83.855430
8. Collado JM, Hilario C, de la Escalera A, Armingol JM: Proceedings of the IEEE Intelligent Vehicles Symposium. University of Parma, Italy; 2004.
9. Ha DM, Lee J-M, Kim Y-D: Image Vis Comput. 2004, 22:899-907. doi:10.1016/j.imavis.2004.05.006
10. Sun Z, Bebis G, Miller R: Proceedings of the International Conference on Digital Signal Processing. Santorini, Greece; 2002.
11. Sun Z, Bebis G, Miller R: IEEE Trans Image Process. 2006, 15(7):2019-2034.
12. Dalal N, Triggs B: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Diego, California; 2005.
13. Negri P, Clady X, Hanif SM, Prevost L: A cascade of boosted generative and discriminative classifiers for vehicle detection. EURASIP J Adv Signal Process 2008. doi:10.1155/2008/782432
14. Wang C-CR, Lien J-JJ: IEEE Trans Intell Transport Syst. 2008, 9(1):83-96.
15. Papageorgiou C, Poggio T: Int J Comput Vis. 2000, 38(1):15-33. doi:10.1023/A:1008162616689
16. Lienhart R, Maydt J: Proceedings of the IEEE International Conference on Image Processing. Rochester, New York; 2002.
17. Arrospide J, Salgado L, Nieto M, Jaureguizar F: Proceedings of the IEEE International Conference on Image Processing. San Diego, California; 2008.
18. Chen Y, Das M, Bajpai D: Proceedings of the Canadian Conference on Computer and Robot Vision. Montreal, Quebec; 2007.
19. Goecke R, Pettersson N, Petersson L: Proceedings of the IEEE Intelligent Vehicles Symposium. Istanbul, Turkey; 2007.
20. Asadi M, Monti F, Regazzoni CS: Feature classification for robust shape-based collaborative tracking and model updating. EURASIP J Image Video Process 2008. doi:10.1155/2008/274349
21. Arulampalam MS, Maskell S, Gordon N, Clapp T: IEEE Trans Signal Process. 2002, 50(2):174-188. doi:10.1109/78.978374
22. Dore A, Soto M, Regazzoni CS: IEEE Signal Process Mag. 2010, 27(5):46-55.
23. Pantrigo JJ, Sánchez A, Montemayor AS, Duarte A: Pattern Recogn Lett. 2008, 29:1160-1174. doi:10.1016/j.patrec.2007.12.012
24. Gao T, Liu Z-G, Gao W-C, Zhang J: 15th International Conference on Neural Information Processing, Auckland, November 2008. Lecture Notes in Computer Science, vol. 5507. Edited by: Köppen M, Kasabov N, Coghill G. Springer, Heidelberg; 2008:695-702.
25. Luo C, Cai X, Zhang J: Proceedings of the IEEE Workshop on Multimedia Signal Processing. Cairns, Australia; 2008.
26. Wang J, Ma Y, Li C, Wang H, Liu J: Proceedings of the World Congress on Computer Science and Information Engineering. Los Angeles, California; 2009.
27. Khan Z, Balch T, Dellaert F: IEEE Trans Pattern Anal Mach Intell. 2005, 27(11):1805-1819.
28. Wang J, Yin Y, Man H: Multiple human tracking using particle filter with Gaussian process dynamical model. EURASIP J Image Video Process 2008. doi:10.1155/2008/969456
29. Smith K, Gatica-Perez D, Odobez J-M: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Diego, California; 2005.
30. Vo B-N, Vo B-T, Pham N-T, Suter D: IEEE Trans Signal Process. 2010, 58(10):5129-5141.
31. Bishop CM: Pattern Recognition and Machine Learning. Springer, New York; 2006.
32. Pang SK, Li J, Godsill SJ: IEEE Trans Aerosp Electron Syst. 2005, 47(1):472-502.
33. Bertozzi M, Broggi A: Comput Vis Image Understand. 1998, 113(6):743-749.
34. Danescu R, Nedevschi S, Meinecke M-M, To T-B: Proceedings of the IEEE Intelligent Vehicles Symposium. Eindhoven, The Netherlands; 2008.
35. Nieto M, Salgado L, Jaureguizar F, Cabrera J: Proceedings of the IEEE Intelligent Vehicles Symposium. Istanbul, Turkey; 2007.
36. Bilmes JA: A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models. Technical Report TR-97-021; 1998.
37. Hartley RI, Zisserman A: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge; 2000.
38. Moon TK, Stirling WC: Mathematical Methods and Algorithms for Signal Processing. Prentice Hall, Englewood Cliffs, NJ; 1999.
39. Zielke T, Brauckmann T, Von Seelen W: CVGIP: Image Understand. 1993, 58(2):177-190. doi:10.1006/ciun.1993.1037
40. The GTI-UPM Vehicle Image Database. http://www.gti.ssr.upm.es/data
41. The Caltech Database (Computational Vision at California Institute of Technology, Pasadena). http://www.vision.caltech.edu/html-files/archive.html
42. Fergus R, Perona P, Zisserman A: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Madison, Wisconsin; 2003.
43. The TU Graz-02 Database (Graz University of Technology). http://www.emt.tugraz.at/~pinz/data/GRAZ_02/
44. Opelt A, Pinz A: Proceedings of the 14th Scandinavian Conference on Image Analysis. Joensuu, Finland; 2005.
Copyright
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.