
Multi-camera multi-object voxel-based Monte Carlo 3D tracking strategies

Abstract

This article presents a new approach to the problem of simultaneously tracking several people in low-resolution sequences from multiple calibrated cameras. Redundancy among cameras is exploited to generate a discrete 3D colored representation of the scene, which is the starting point of the processing chain. We review how the initiation and termination of tracks influence the overall tracker performance, and present a Bayesian approach to efficiently create and destroy tracks. Two Monte Carlo-based schemes adapted to the incoming 3D discrete data are introduced. First, a particle filtering technique is proposed, relying on a volume likelihood function that takes into account both occupancy and color information. Sparse sampling is then presented as an alternative, based on sampling the surface voxels in order to estimate the centroid of the tracked people. In this case, the likelihood function is based on local neighborhood computations, dramatically decreasing the computational load of the algorithm. A discrete 3D re-sampling procedure is introduced to drive these samples along time. Multiple targets are tracked by means of multiple filters, and interaction among them is modeled through a 3D blocking scheme. Tests over the CLEAR annotated database yield quantitative results showing the effectiveness of the proposed algorithms in indoor scenarios, and a fair comparison with other state-of-the-art algorithms is presented. We also consider the real-time performance of the proposed algorithms.

1 Introduction

Tracking multiple objects and keeping a record of their identities over time in a cluttered dynamic scene is a major research topic in computer vision, largely fostered by the number of applications that benefit from the retrieved information. For instance, multi-person tracking has been found useful for automatic scene analysis [1], human-computer interfaces [2], and detection of unusual behaviors in security applications [3].

A number of methods for camera-based multi-person 3D tracking have been proposed in the literature [4-7]. A common goal in these systems is robustness under the occlusions created by the multiple objects cluttering the scene when estimating the position of a target. Single-camera approaches [8] have been widely employed, but they are vulnerable to occlusions, rotation, and scale changes of the target. In order to avoid these drawbacks, multi-camera tracking techniques exploit spatial redundancy among different views and provide 3D information at the actual scale of the objects in the real world. Integration of data extracted from multiple cameras has been proposed in terms of fusion at the feature level, using image correspondences [9] or multi-view histograms [10], among others. Information fusion at the data or raw level has been achieved by means of voxel reconstructions [11], polygon meshes [12], etc.

Most multi-camera approaches rely on a separate analysis of each camera view, followed by a feature fusion process that finally generates an output. Exploiting the underlying epipolar geometry of a multi-camera setup to find the most coherent feature correspondence among views was first tackled by Mikič et al. [13] using algebraic methods together with a Kalman filter, and further developed by Focken et al. [14]. Exploiting epipolar consistency within a robust Bayesian framework was also presented by Canton-Ferrer et al. [9]. Other systems rely on detecting semantically relevant patterns among multiple cameras to feed the tracking algorithm, as done in [15] by detecting faces. Particle filtering (PF) [16] has been a commonly employed algorithm because of its ability to deal with problems involving multi-modal distributions and non-linearities. Lanz et al. [10] proposed a multi-camera PF tracker exploiting foreground and color information, and several contributions have followed this path [4, 7]. Occlusions, a common problem in feature fusion methods, have been addressed in [17] using HMMs to model the temporal evolution of occlusions within a PF algorithm. Information about the tracking scenario can also be exploited to detect and manage occlusions, as done in [18] by modeling the occluding elements, such as furniture, in a training phase before tracking. It must be noted that, in this article, we assume that all cameras cover the area under study. Other approaches to multi-camera/multi-person tracking do not require maximizing the overlap of the fields of view of multiple cameras, leading to non-overlapped multi-camera tracking algorithms [19].

Multi-camera/multi-person tracking algorithms based on data fusion prior to any analysis were pioneered by López et al. [20] using a voxel^a reconstruction of the scene. This idea was further developed by the authors in [5, 21], finally leading to the present article. To the best of our knowledge, this is the first approach to multi-person tracking that exploits data fusion from multiple cameras as the input of the algorithms. In this article, we first introduce a methodology for multi-person tracking based on a colored voxel representation of the scene as the start of the processing chain. The contribution of this article is twofold. First, we emphasize the importance of the initiation and termination of tracks, usually neglected in most tracking algorithms, which indeed has an impact on the performance of the overall system. A general technique for the initiation/termination of tracks is presented. The second contribution is the filtering step, where two techniques are introduced. The first applies PF to the input voxels to estimate the centroid of the tracked targets. However, this process is far from real-time performance, so we propose an alternative that we call Sparse Sampling (SS). SS aims at decreasing the computation time by means of a novel tracking technique based on the seminal PF principle. Particles no longer sample the state space but instead a magnitude whose expectation produces the centroid of the tracked person: the surface voxels. The likelihood evaluation, relying on occupancy and color information, is computed on local neighborhoods, thus dramatically decreasing the computational load of the overall algorithm. Finally, the effectiveness of the proposed techniques is assessed by means of objective metrics defined in the framework of the CLEAR [22] multi-target tracking database. Computational performance is reviewed to establish the real-time operation of the SS algorithm. Fair comparisons with state-of-the-art methods evaluated on the same database are also presented and discussed.

2 Tracker design methodology

Typically, a multi-target tracking system can be depicted as in Figure 1 and comprises a number of elementary modules. Although most articles present techniques that contribute to the filtering module, the overall architecture is rarely addressed, and it is assumed that the remaining blocks are already available. In this section, this scheme is analyzed and proposals for each module are presented. The filtering step, our major contribution, is addressed in a separate section.

Figure 1. Multi-person tracking scheme.

2.1 Input and output data

When addressing the problem of multi-person tracking within a multi-camera environment, a strategy for how to process this information is needed. Many approaches analyze the images separately and then combine the results using some geometric constraints [10]. This approach is denoted as information combination by fusion of decisions. However, a major issue with this procedure is dealing with occlusion and perspective effects. A more efficient way to combine information is data fusion [23]. In our case, data fusion combines the information from all images to build up a new data representation, and the algorithms are applied directly on these data. Several data representations aggregating the information of multiple views have been proposed in the literature, such as voxel reconstructions [11, 24], level sets [25], polygon meshes [12], conexels [26], depth maps [27], etc. In our research, we opted for a colored voxel representation due to both its fast computation and its accuracy.

For a given frame in the video sequence, a set of $N_C$ images is obtained from the $N_C$ cameras (see a sample in Figure 2(a)). Each camera is modeled using a pinhole camera model based on perspective projection, with camera calibration information available. Foreground regions of the input images are obtained using a segmentation algorithm based on the Stauffer-Grimson background learning and subtraction technique [28], as shown in Figure 2(b).

Figure 2. Input data generation example. (a) A sample of the original images. (b) Foreground segmentation of the input images employed by the SfS algorithm. (c) Example of the binary 3D voxel reconstruction. (d) The final colored version shown over a background image.

Redundancy among cameras is exploited by means of a Shape-from-Silhouette (SfS) technique [11]. This process generates a discrete occupancy representation of the 3D space (voxels). A voxel is labeled as foreground or background by checking the spatial consistency of its projection on the $N_C$ segmented silhouettes, finally obtaining the 3D binary reconstruction shown in Figure 2(c). We denote this raw voxel reconstruction as $V$. The visibility of a surface voxel from a given camera is assessed by computing the discrete ray from the camera's optical center to the center of the voxel using Bresenham's algorithm, and testing whether this ray intersects any other foreground voxel. The most saturated color among the pixels of the cameras that see a surface voxel is assigned to it. A colored representation of the surface voxels of the scene is thus obtained, denoted as $V^C$. An example of this process is depicted in Figure 2(d). It should be noted that, without loss of generality, other background/foreground segmentation and 3D reconstruction algorithms may be used to generate the input data for the tracking algorithm presented in this article.
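To make the voxel labeling concrete, the following minimal sketch (Python with NumPy) checks the projection of each voxel center against every segmented silhouette. The strict all-views consistency test, the function name, and the array layout are assumptions of this sketch, not the authors' exact implementation.

```python
import numpy as np

def shape_from_silhouette(grid_pts, proj_mats, silhouettes):
    """Label each voxel center as foreground if it projects onto the
    foreground silhouette of every camera (hypothetical strict variant;
    real systems may tolerate a few inconsistent views).

    grid_pts    : (N, 3) array of voxel centers in world coordinates.
    proj_mats   : list of (3, 4) projection matrices, one per camera.
    silhouettes : list of 2D binary masks (H, W), one per camera.
    """
    occupied = np.ones(len(grid_pts), dtype=bool)
    homog = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])  # (N, 4)
    for P, mask in zip(proj_mats, silhouettes):
        pix = homog @ P.T                       # (N, 3) homogeneous pixels
        u = (pix[:, 0] / pix[:, 2]).round().astype(int)
        v = (pix[:, 1] / pix[:, 2]).round().astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        occupied &= hit                         # consistency across all views
    return occupied
```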

The resulting colored 3D scene reconstruction is fed to the proposed system, which assigns a tracker to each target, and the obtained tracks are processed by a higher-level semantic analysis module. Information about the environment (dimensions of the room, furniture, etc.) allows assessing the validity of tracked volumes and discarding false detections.

Finally, the output of the overall tracking algorithm will be a number of hypotheses for the centroid position of each of the targets present in the scene.

2.2 Tracker state and filtering

One of the major challenges in multi-target tracking is the estimation of the number of targets and their positions in the scene, based on a set of uncertain observations. This issue can be addressed from two perspectives. The first extends single-target algorithms to multiple targets, defining the working state space $\mathcal{X}$ as the concatenation of the positions of all $N_T$ targets, $\mathcal{X} = [x_1, x_2, \ldots, x_{N_T}]$. The difficulty here is the time-varying dimensionality of this space. Monte Carlo approaches, and specifically PF approaches, to this problem face an exponential dependency between the number of particles required by the filter and the dimension of $\mathcal{X}$, turning out to be computationally infeasible. Recently, a solution based on random finite sets achieving linear complexity has been presented [29].

Multi-target tracking can also be tackled by tracking each target independently, that is, by maintaining $N_T$ trackers with state spaces $\mathcal{X}_i = x_i$. In this case, the system attains a linear complexity with the number of targets, thus allowing feasible implementations. However, interactions among targets must be modeled in order to ensure the most independent set of tracks. This approach to multi-person tracking is adopted in our research.

2.3 Track initiation and termination

A crucial factor in the performance of a tracking system is the module that addresses the initiation and termination of tracks. The initiation of a new tracker is independent of the employed filtering technique and relies only on the input data and the current state (position) of the tracks in the scene. On the other hand, the termination of an existing track is driven by the performance of its tracker.

The initialization of a new filter is determined by the correct detection of a person in the analyzed scene. This process is crucial, and its correct operation drives the overall system's accuracy. However, despite the importance of this step, little attention is paid to it in the design of multi-object trackers in the literature. Only a few articles explicitly mention this process, such as [30], which employs a face detector to detect a person, or [31], which uses scout particle filters to explore the 3D space for new targets. Moreover, it is commonly assumed that all targets in the scene are of interest, i.e., people, without accounting for spurious objects, e.g., furniture, shadows, etc. In this section, we introduce a method to properly handle the initiation and termination of filters from a Bayesian perspective.

2.3.1 Track initiation criteria

The 3D input data $V$ fed to the tracking system are usually corrupted and present a number of inaccuracies, such as objects that are not reconstructed, mergings among adjacent blobs, spurious blobs, etc. Hence, defining a track initialization criterion based solely on the presence of a blob might lead to poor system performance. For instance, objects such as furniture might be wrongly detected as foreground, reconstructed, and tracked. Instead, a classification of the blobs based on probabilistic criteria can be applied during this initialization process, aiming at a more robust operation. Training of this classifier is based on the development set of the employed database, together with the available ground truth describing the positions of the tracked objects.

Let $\mathcal{X}_{GT} = \{x_1, \ldots, x_{N_{GT}}\}$ be the ground truth positions of the $N_{GT}$ targets present in the scene of the development set at a given instant. Once the reconstruction $V$ is available, a connected component analysis is performed over these data, obtaining a set of $K$ disjoint components, $C_i$, fulfilling:

$$ V = \bigcup_{i=1}^{K} C_i. \tag{1} $$

We consider the region of influence of a target with centroid $\mathbf{c}$ to be the ellipsoid $E(\mathbf{c}, \mathbf{s})$ with axis sizes $\mathbf{s} = (s_x, s_y, s_z)$ centered at $\mathbf{c}$.

A mapping is defined such that every $x_j \in \mathcal{X}_{GT}$ is assigned a component $C_i$. Let us denote by $[\mathbf{x}]_{\{x,y,z\}}$ the $x$, $y$, or $z$ coordinate of vector $\mathbf{x}$. The assignment process is defined as follows: first, a region of influence $E(x_j, \mathbf{s})$ with size $\mathbf{s} = (s_x, s_y, [x_j]_z)$ centered at $\mathbf{c} = x_j$ is placed in the 3D space. The radii $s_x$ and $s_y$ are chosen to contain an average person, $s_x = s_y = 30$ cm. Let us define the operator $|\cdot|$, applied to a volume, as the number of non-zero voxels contained in it. Then, the assignment is defined as

$$ x_j \mapsto \arg\max_i \left| E(x_j, \mathbf{s}) \cap C_i \right|, \tag{2} $$

that is, $x_j$ is assigned to the component with the largest volume enclosed in the region of influence. It must be noted that some $x_j$ might have no associated $C_i$, due to a wrong segmentation or a faulty reconstruction of the target. Moreover, the set of components not associated with any ground truth position can be identified as spurious objects, reconstructed shadows, etc.
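A hedged sketch of this assignment rule (Equation 2) follows; the representation of components as arrays of voxel centers in meters, and the function name, are assumptions of the sketch:

```python
import numpy as np

def assign_component(x_gt, components, sx=0.30, sy=0.30):
    """Assign a ground-truth centroid x_gt to the connected component with
    the largest voxel count inside its ellipsoid of influence (Equation 2).
    `components` is a list of (M_i, 3) arrays of voxel centers; the
    vertical semi-axis is the target height [x_gt]_z, as in the text."""
    semi_axes = np.array([sx, sy, x_gt[2]])
    best, best_count = None, 0
    for i, C in enumerate(components):
        d = (C - x_gt) / semi_axes
        count = int(np.sum(np.sum(d ** 2, axis=1) <= 1.0))  # |E(x_j, s) ∩ C_i|
        if count > best_count:
            best, best_count = i, count
    return best  # None if no component intersects the ellipsoid
```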

Finally, the set of connected components $C_i$ is grouped into two categories: person and non-person. A set of features is extracted from each of these components, forming the feature vector that is used to train a person/no-person binary classifier. The extracted features are described in Table 1.

Table 1. Features employed by the person/no-person classifier, where $[V]_{\{x,y,z\}}$ denotes the $x$, $y$, or $z$ coordinate of voxel $V$.

In order to characterize the objects to be tracked and to decide on the best classifier, we have performed an exploratory data analysis [32], which allows us to contrast the underlying hypotheses of the classifiers with the actual data. Histograms of these features are computed, as shown in Figure 3, and scatter plots depicting the cross dependencies among all features are generated. Observing Figure 3, we see that some variables are easily separable, e.g., weight, height, and bounding box size. Moreover, they show a low cross dependency with the other features.

Figure 3. Normalized histograms of the variables forming the feature vector employed by the person/no-person classifier.

A number of standard binary classifiers have been tested and their performances evaluated, namely Gaussian, Mixture of Gaussians (MoG), Neural Networks, K-Means, PCA, Parzen, and Decision Trees [33, 34]. Due to the aforementioned properties of the statistical distributions of the features, some classifiers are unable to achieve a good performance, e.g., Gaussian and PCA. Other classifiers require a large number of characterizing elements, such as K-Means, MoG, or Parzen. Decision trees [33] reported the best results. Separable variables such as height, weight, and bounding box size are automatically selected to build a decision tree that yields a high recognition rate, with a precision of 0.98 and a recall of 0.99 on our test database.
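As an illustration, a person/no-person classifier of this kind could be trained as follows. This is a sketch using scikit-learn's DecisionTreeClassifier, which is an assumption (the article does not name an implementation), and the feature list follows Table 1 only loosely:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def blob_features(voxels):
    """Feature vector in the spirit of Table 1 for one connected component:
    weight (number of voxels), height (top z coordinate), and bounding box
    sizes. `voxels` is an (M, 3) array of voxel centers; the exact feature
    list is an assumption of this sketch."""
    bbox = voxels.max(axis=0) - voxels.min(axis=0)
    return np.array([len(voxels), voxels[:, 2].max(), *bbox])

# Training on the development set: X stacks one feature vector per blob and
# y holds the person (1) / no-person (0) labels obtained from the ground
# truth mapping of Equation 2.
# clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
# is_person = bool(clf.predict([blob_features(component)])[0])
```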

Another complementary criterion employed in the initiation of new tracks is based on the current state of the tracker: a new track is not created if its distance to the closest existing target is below a threshold.

2.3.2 Track termination criteria

A target will be deleted if one of the following conditions is fulfilled (a schematic check combining these criteria is sketched after the list):

  • If two or more tracks fall too close to one another, this indicates that they might be tracking the same target, hence only one will be kept alive while the rest will be removed.

  • If a tracker's efficiency becomes very low, it might indicate that the target has disappeared, and the track should be removed.

  • The person/no-person classifier is applied to the set of features extracted from the voxels assigned to a target. If the classifier outputs a no-person verdict for a number of frames, the target will be considered as lost.
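A schematic combination of these three criteria might look as follows; all names, attributes, and thresholds are hypothetical, since the article does not specify them numerically:

```python
import numpy as np

def should_terminate(track, other_tracks, min_dist=0.5,
                     min_efficiency=0.1, max_no_person_frames=25):
    """Sketch of the three termination criteria. `track.pos` is the current
    centroid estimate, `track.efficiency` some tracker-quality measure, and
    `track.no_person_frames` counts consecutive no-person verdicts of the
    classifier; all are hypothetical attributes and thresholds."""
    too_close = any(np.linalg.norm(track.pos[:2] - o.pos[:2]) < min_dist
                    for o in other_tracks)
    low_efficiency = track.efficiency < min_efficiency
    lost_person = track.no_person_frames > max_no_person_frames
    return too_close or low_efficiency or lost_person
```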

3 Voxel-based solutions

The filtering block shown in Figure 1 addresses the problem of keeping consistent trajectories of the tracked objects, resolving crossings among targets and mergings with spurious objects (e.g., shadows), and producing an accurate estimation of the centroid of each target based on the input voxel information. Although a number of papers address the problem of multi-camera/multi-person tracking, very few contributions have been based on voxel analysis [20, 21].

3.1 PF tracking

PF is an approximation technique for estimation problems where the variables involved do not follow Gaussian uncertainty models and linear dynamics. The current tracking scenario can be tackled by means of this algorithm to estimate the 3D position of a person $x_t = (x, y, z)_t$ at time $t$, taking as observation a set of colored voxels representing the 3D scene up to time $t$, denoted as $z_{1:t}$. For a given target $x_t$, PF approximates the posterior density $p(x_t \mid z_{1:t})$ as a sum of $N_p$ Dirac functions:

$$ p(x_t \mid z_{1:t}) \approx \sum_{j=1}^{N_p} w_t^j \, \delta\!\left(x_t - x_t^j\right), \tag{3} $$

where $w_t^j$ are the weights associated with the particles, fulfilling $\sum_j w_t^j = 1$, and $x_t^j$ their positions. For this type of tracking problem, a sampling importance re-sampling (SIR) PF is applied to drive the particles along time [16]. Assuming the importance density to be equal to the prior density, the weight update is recursively computed as

$$ w_t^j \propto w_{t-1}^j \, p\!\left(z_t \mid x_t^j\right). \tag{4} $$

SIR PF avoids the particle degeneracy problem by re-sampling at every time step. In this case, the weights are set to $w_{t-1}^j = N_p^{-1}$, $\forall j$; therefore,

$$ w_t^j \propto p\!\left(z_t \mid x_t^j\right). \tag{5} $$

Hence, the weights are proportional to the likelihood function that will be computed over the incoming volume z t .

Finally, the best state at time $t$, $\tilde{x}_t$, is derived from the discrete approximation of Equation 3. The most common solution is the Monte Carlo approximation of the expectation,

$$ \tilde{x}_t = E[x_t \mid z_{1:t}] \approx \sum_{j=1}^{N_p} w_t^j x_t^j. \tag{6} $$

Basically, two steps must be defined in the PF operation loop: likelihood evaluation and particle propagation. In the following, we present our proposal for the PF implementation.
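The SIR loop of Equations 3-6 can be summarized in the following sketch, where `likelihood` stands for the function of Equation 7, described in the next subsection, and the isotropic Gaussian propagation is a simplification of the model in Section 3.1.2:

```python
import numpy as np

def sir_step(particles, likelihood, sigma=0.05, rng=None):
    """One SIR iteration (Equations 4-6): propagate particles with Gaussian
    noise, reweight them with the likelihood, form the weighted-mean
    centroid estimate, and resample so weights return to 1/N_p."""
    rng = rng or np.random.default_rng()
    n = len(particles)
    particles = particles + rng.normal(0.0, sigma, particles.shape)
    w = np.array([likelihood(x) for x in particles])   # Equation 5
    w /= w.sum()
    estimate = w @ particles                           # Equation 6
    idx = rng.choice(n, size=n, p=w)                   # SIR re-sampling
    return particles[idx], estimate
```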

3.1.1 Likelihood evaluation

The binary and color information contained in $z_t$ is employed to define the likelihood function $p(z_t \mid x_t^j)$ relating the observation $z_t$ with the human body instance given by particle $x_t^j$, $1 \leq j \leq N_p$. Two partial likelihood functions, $p_{\mathrm{Raw}}(V_t \mid x_t^j)$ and $p_{\mathrm{Color}}(V_t^C \mid x_t^j)$, are linearly combined to produce $p(z_t \mid x_t^j)$ as:

$$ p\!\left(z_t \mid x_t^j\right) = \lambda \, p_{\mathrm{Raw}}\!\left(V_t \mid x_t^j\right) + (1 - \lambda) \, p_{\mathrm{Color}}\!\left(V_t^C \mid x_t^j\right). \tag{7} $$

The factor $\lambda$ controls the influence of each term (foreground and color information) in the overall likelihood function. Empirical tests have shown that $\lambda = 0.8$ provides satisfactory results. A more detailed review of the impact of color information on the overall performance of the algorithm is given in Section 5.1.

The likelihood associated with the raw data is defined as the ratio of overlap between the input data $V_t$ and the ellipsoid $E_t^j$ defined by particle $x_t^j$ (see Section 2.3.1):

$$ p_{\mathrm{Raw}}\!\left(V_t \mid x_t^j\right) = \frac{\left| V_t \cap E_t^j \right|}{\left| E_t^j \right|}. \tag{8} $$

For a given target $k$, an adaptive reference histogram $H_t^k$ of the colored surface voxels is maintained. This histogram is constructed in the YCbCr color space due to its robustness against light variations. The number of bins per channel drives the ability of the system to distinguish between different color blobs; for our experiments, 21 bins per channel were set empirically. The color likelihood function is constructed as

$$ p_{\mathrm{Color}}\!\left(V_t^C \mid x_t^j\right) = B\!\left(H_t^k, H\!\left(V_t^C \cap E_t^j\right)\right), \tag{9} $$

where $B(\cdot)$ is the Bhattacharyya distance and $H(\cdot)$ stands for the color histogram extraction operation over the enclosed volume. The reference histogram is updated linearly following the rule:

$$ H_t^k = \alpha H_{t-1}^k + (1 - \alpha)\, H\!\left(V_t^C \cap E_t^{\tilde{x}}\right), \tag{10} $$

where $E_t^{\tilde{x}}$ stands for the ellipsoid placed at the centroid estimate $\tilde{x}_t$ and $\alpha$ is the adaptation coefficient. In our experiments, $\alpha = 0.9$ provided satisfactory results.
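Putting Equations 7-10 together, a hedged sketch of the particle likelihood could be as follows. The ellipsoid-capacity approximation of $|E|$ and the use of the Bhattacharyya coefficient (rather than a distance, so that higher values mean higher likelihood) are assumptions of this sketch:

```python
import numpy as np

def p_raw(fg_voxels, center, s, voxel_side):
    """Equation 8: |V_t ∩ E| / |E|. |E| is approximated by the ellipsoid
    volume in voxel units (an assumption). fg_voxels: (N, 3) centers."""
    d = (fg_voxels - center) / np.asarray(s)
    overlap = np.sum(np.sum(d ** 2, axis=1) <= 1.0)
    capacity = (4.0 / 3.0) * np.pi * np.prod(s) / voxel_side ** 3
    return min(overlap / capacity, 1.0)

def p_color(ref_hist, local_hist):
    """Equation 9 via the Bhattacharyya coefficient between normalized
    (flattened 21x21x21 YCbCr) histograms."""
    return float(np.sum(np.sqrt(ref_hist * local_hist)))

def likelihood(fg_voxels, center, s, voxel_side, ref_hist, local_hist,
               lam=0.8):
    """Equation 7 with lambda = 0.8, as in the text."""
    return (lam * p_raw(fg_voxels, center, s, voxel_side)
            + (1.0 - lam) * p_color(ref_hist, local_hist))

def update_hist(ref_hist, new_hist, alpha=0.9):
    """Equation 10: linear adaptation of the reference histogram."""
    return alpha * ref_hist + (1.0 - alpha) * new_hist
```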

3.1.2 Particle propagation

The propagation model is Gaussian noise added to the state of the particles after the re-sampling step: $x_{t+1}^j = x_t^j + \mathbf{n}$, with $\mathbf{n} \sim \mathcal{N}(\mathbf{0}, \mathbf{P})$. The covariance matrix $\mathbf{P}$ is proportional to the maximum variation of the centroid of the target, and this information is obtained from the development part of the testing dataset. More sophisticated schemes employ previously learnt motion priors to drive the particles more efficiently [6]. However, this would penalize the efficiency of the system when tracking unmodeled motion patterns and, since our algorithm is intended for arbitrary motion, no dynamical model is adopted.

3.1.3 Interaction model

Let us assume that there are $N_T$ independently tracked targets. They are not fully independent, however, since each tracker can consider voxels from other targets in both the likelihood evaluation and the 3D re-sampling step, resulting in target merging or identity mismatches. In order to achieve the most independent set of trackers, a blocking method to model interactions is considered. Some blocking proposals can be found in 2D tracking studies [6]; here, an extension to the 3D domain is proposed. Blocking methods rely on penalizing particles whose associated ellipsoid model overlaps with another target's ellipsoid, as shown in Figure 4.

Figure 4. Particles from tracker A (yellow ellipsoid) falling into the exclusion zone of tracker B (green ellipsoid) are penalized by a multiplicative factor $\alpha \in [0, 1]$.

Blocking information is then incorporated when computing the particle weights of the $k$-th target as

$$ w_t^{k,j} = p\!\left(z_t \mid x_t^{k,j}\right) \prod_{\substack{l=1 \\ l \neq k}}^{N_T} \phi\!\left(\tilde{x}_{t-1}^k, \tilde{x}_{t-1}^l\right), \tag{11} $$

where $\tilde{x}_{t-1}^k$ stands for the estimate of the PF at time $t-1$ for target $k$, and $\phi(\cdot, \cdot)$ is the blocking function defining exclusion zones that penalize particles of target $k$ falling into the exclusion zones of the other targets. In this particular case, considering that people in the room are always sitting or standing up, this zone can be constrained to the $xy$ plane. The proposed function is

$$ \phi\!\left(\tilde{x}_{t-1}^k, \tilde{x}_{t-1}^l\right) = 1 - \exp\!\left(-k \left\| \left[\tilde{x}_{t-1}^k\right]_{x,y} - \left[\tilde{x}_{t-1}^l\right]_{x,y} \right\|^2\right), \tag{12} $$

where $k \propto s_x^{-2}$ is the parameter that drives the sensitivity of the exclusion zone.
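A sketch of the blocking factor of Equations 11 and 12 follows; the unit proportionality constant in $k \propto s_x^{-2}$ and the function names are assumptions:

```python
import numpy as np

def blocking_factor(est_k, est_l, s_x=0.30):
    """Equation 12: approaches 0 when the xy projections of the two
    estimates coincide and 1 when they are far apart."""
    k = 1.0 / s_x ** 2   # k ∝ s_x^{-2}; unit constant assumed
    d2 = float(np.sum((np.asarray(est_k[:2]) - np.asarray(est_l[:2])) ** 2))
    return 1.0 - np.exp(-k * d2)

def blocked_weight(particle_likelihood, est_k, other_estimates):
    """Equation 11: the particle likelihood of target k is multiplied by
    the blocking factors against every other target's previous estimate."""
    w = particle_likelihood
    for est_l in other_estimates:
        w *= blocking_factor(est_k, est_l)
    return w
```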

3.2 SS tracking

In the presented PF tracking algorithm, the likelihood evaluation can be computationally expensive, rendering the approach unsuitable for real-time systems. Moreover, the data are usually noisy and may contain merged blobs corresponding to different targets. A new technique, SS, is proposed as an efficient and flexible alternative to PF.

For a homogeneous 3D object, it can be proved that the centroid can be computed exactly from the surface voxels alone, since the interior voxels provide no additional information. This centroid can be estimated through a discrete version of Green's theorem applied to the surface voxels [35, 36], while other approaches obtain an accurate approximation of the centroid using feature points (see [37] for a review). A common assumption of these techniques is that the surface data have been extracted beforehand, hence a labeling of the voxels in the scene should be available. Assuming that the object under study presents a central symmetry in the $xy$ plane, the centroid can be computed as an average of the positions of the surface voxels:

$$ \tilde{x}_t = \frac{\sum_{V \in V_t} [V]_x}{|V_t|} = \frac{\sum_{V \in V_t^s} [V]_x}{|V_t^s|}. \tag{13} $$

3.2.1 Degree of mass and degree of surfaceness

Let us model the human body as an ellipsoid, as previously done in the PF approach. In order to test the robustness of the centroid computation of Equation 13 against missing data, we studied the error committed when only a fraction of the input data is employed. A number of voxels (surface or interior voxels, in each case) are randomly selected and employed to compute the centroid. The resulting error shows that the surface-based estimate is more sensitive to missing data than the estimate using interior voxels (see Figure 5). Nevertheless, this experiment proves that the centroid can be computed from a number of randomly selected surface voxels while still achieving a satisfactory performance. This idea is the underlying principle of the SS algorithm.

Figure 5. Centroid estimation error when computed with a fraction of the surface or interior voxels. The employed ellipsoid had radii $\mathbf{s} = (30, 30, 100)$ cm, and voxels with $s_v = 2$ cm were used.

Let us estimate the centroid of an object by analyzing a randomly selected set of voxels from the whole scene, denoted as $\mathcal{W}_t$. An approach to the computation of the centroid would be

$$ \tilde{x}_t \approx \frac{\sum_{W \in \mathcal{W}_t} \rho(W)\,[W]_x}{\sum_{W \in \mathcal{W}_t} \rho(W)}, \qquad \rho(W) = \begin{cases} 1 & \text{if } W \in V_t, \\ 0 & \text{if } W \notin V_t, \end{cases} \tag{14} $$

where $\rho(W)$ gives the mass density of voxel $W$. Since all voxels are assumed to have the same mass, this is a binary function that checks the occupancy of a given voxel. Hence, only the fraction of the (randomly selected) voxels inside the object contributes to the computation of the centroid. Equation 14 can be rewritten as

$$ \tilde{x}_t \approx \sum_{W \in \mathcal{W}_t} \frac{\rho(W)}{\sum_{W' \in \mathcal{W}_t} \rho(W')}\,[W]_x = \sum_{W \in \mathcal{W}_t} \tilde{\rho}(W)\,[W]_x, \tag{15} $$

where $\tilde{\rho}(W)$ can be considered the normalized mass contribution of voxel $W$ to the computation of the centroid. If the function $\rho(W)$ takes values in the range $[0, 1]$, we may consider it the "degree of mass" of $W$, i.e., the importance of voxel $W$ in the calculation of $\tilde{x}_t$; $\tilde{\rho}(W)$ may then be considered a normalized weight assigned to $W$. Since the centroid can also be computed using the surface voxels alone, Equation 13 can likewise be posed as

$$ \tilde{x}_t \approx \sum_{W \in \mathcal{W}_t} \frac{\rho_s(W)}{\sum_{W' \in \mathcal{W}_t} \rho_s(W')}\,[W]_x = \sum_{W \in \mathcal{W}_t} \tilde{\rho}_s(W)\,[W]_x, \tag{16} $$

where $\rho_s(W) \in [0, 1]$ measures the "degree of surfaceness" of voxel $W$. In this context, the functions $\rho(\cdot)$ and $\rho_s(\cdot)$ may be understood as pseudo-likelihood functions, and Equations 15 and 16 as a sample-based representation of an estimation problem.

3.2.2 Difference with particle filters

There is an obvious similarity between these representations and the formulation of particle filters, but there is a significant difference. While particles in PF represent an instance of the whole body, our samples $W \in \mathcal{W}_t$ are points in 3D space. Moreover, particle likelihoods are computed over all the data, while sample pseudo-likelihoods are computed on a local domain.

The presented concepts are applied to define the SS algorithm. Let $y_t^i \in \mathbb{R}^3$ be a point in 3D space and $\omega_t^i$ its associated weight, measuring the pseudo-likelihood of this position being part of the object or of its surface. Under the stated assumptions, the centroid can be computed as

$$ \tilde{x}_t \approx \sum_{i=1}^{N_s} \omega_t^i \, y_t^i, \tag{17} $$

where $N_s$ is the number of sampling points. When using SS, we are no longer sampling the state space, since $y_t^i$ cannot be considered an instance of the centroid of the target, as happened with the particles $x_t^j$ in PF. Hence, we talk about samples instead of particles, and we refer to $\{(y_t^i, \omega_t^i)\}_{i=1}^{N_s}$ as the sampling set. This set approximates the surface of the $k$-th target, $V_t^{S,k}$, and fulfills the sparsity condition $N_s \ll |V_t^{S,k}|$.

4 SS implementation

In order to define a method to recursively estimate $\tilde{x}_t$ from the sampling set $\{(y_t^i, \omega_t^i)\}_{i=1}^{N_s}$, a filtering strategy has to be set. Essentially, the proposal is to follow the PF analysis loop (re-sampling, propagation, evaluation, and estimation) with some appropriate modifications that ensure the convergence of the algorithm.

4.1 Pseudo-likelihood evaluation

The weight $\omega_t^i$ associated with a sample $y_t^i$ measures the likelihood of that 3D position being part of the surface of the tracked target. When computing the pseudo-likelihood, the surface has been chosen instead of the interior voxels based on the efficiency with which surface samples propagate, as explained in the next section. As in the PF likelihood function, two partial pseudo-likelihood functions^b, $p_{\mathrm{Raw}}(V_t \mid y_t^i)$ and $p_{\mathrm{Color}}(V_t^C \mid y_t^i)$, are linearly combined to form $p(z_t \mid y_t^i)$ as

$$ p\!\left(z_t \mid y_t^i\right) = \lambda \, p_{\mathrm{Raw}}\!\left(V_t \mid y_t^i\right) + (1 - \lambda) \, p_{\mathrm{Color}}\!\left(V_t^C \mid y_t^i\right). \tag{18} $$

The partial likelihoods are computed on a local domain centered at the position $y_t^i$. Let $C(y_t^i, q, r)$ be a neighborhood of radius $r$, over a connectivity-$q$ domain on the 3D orthogonal grid, around a sample placed at voxel position $y_t^i$. We then define the occupancy and color neighborhoods around $y_t^i$ as $O_t^i = V_t \cap C(y_t^i, q, r)$ and $C_t^i = V_t^C \cap C(y_t^i, q, r)$, respectively.

For a given sample $y_t^i$ occupying a single voxel, its weight associated with the raw data measures its likelihood of belonging to the surface of an object. It can be modeled as

$$ p_{\mathrm{Raw}}\!\left(V_t \mid y_t^i\right) = 1 - \left| \frac{2\,|O_t^i|}{|C(y_t^i, q, r)|} - 1 \right|. \tag{19} $$

Ideally, when the sample $y_t^i$ is placed on a surface, half of its associated occupancy neighborhood is occupied and the other half empty. The proposed expression attains its maximum when this condition is fulfilled.

The function $p_{\mathrm{Color}}(V_t^C \mid y_t^i)$ can be defined as the likelihood of a sample belonging to the surface of the $k$-th target, characterized by an adaptive reference color histogram $H_t^k$:

$$ p_{\mathrm{Color}}\!\left(V_t^C \mid y_t^i\right) = D\!\left(H_t^k, C_t^i\right), \tag{20} $$

where $D(\cdot, \cdot)$ is constructed to measure the likelihood between this local colored region and $H_t^k$, since $C_t^i$ contains only local color information referenced against the global histogram. For every voxel in $C_t^i$, it is decided whether it is similar to $H_t^k$ by selecting the histogram value for the tested color and checking whether it is above a threshold $\gamma$. The ratio between the number of similar-color voxels and the total number of voxels in the neighborhood gives the color similarity score. Since the reference histogram is updated and changes over time, a variable threshold $\gamma$ is computed so that 80% of the values of $H_t^k$ are taken into account.

One of the advantages of the SS algorithm is its computational efficiency. The complexity of computing $p(z_t \mid y_t^i)$ is greatly reduced, since only a local neighborhood around the sample is evaluated, in contrast with the computational load required to evaluate the likelihood of a particle in the PF algorithm. This point is quantitatively addressed in Section 5.2.

The parameters defining the neighborhood were set to $q = 26$ and $r = 2$, yielding satisfactory results. Larger values of the radius $r$ did not significantly improve the overall performance but increased the computational complexity.
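A sketch of the pseudo-likelihoods of Equations 19 and 20 on a boolean occupancy grid, with $q = 26$ and $r = 2$ as above, follows. Edge handling is omitted (samples are assumed to lie at least $r$ voxels inside the grid), and the bin-index representation of the color neighborhood is an assumption:

```python
import numpy as np

def p_raw_surface(occupancy, y, r=2):
    """Equation 19: a sample scores highest when exactly half of its
    (2r+1)^3 cubic neighborhood (26-connectivity, r = 2) is occupied,
    i.e., when it sits on the surface. `occupancy` is a boolean 3D grid
    and `y` an integer voxel coordinate."""
    x0, y0, z0 = y
    cube = occupancy[x0 - r:x0 + r + 1,
                     y0 - r:y0 + r + 1,
                     z0 - r:z0 + r + 1]
    frac = cube.mean()                 # |O_t^i| / |C(y, q, r)|
    return 1.0 - abs(2.0 * frac - 1.0)

def p_color_surface(ref_hist, neighborhood_bins, gamma):
    """Equation 20 sketch: fraction of the neighborhood's colored voxels
    whose bin has reference-histogram mass above the threshold gamma.
    `neighborhood_bins` holds the histogram bin index of each voxel."""
    return float(np.mean(ref_hist[neighborhood_bins] > gamma))
```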

4.2 Sample propagation and 3D discrete resampling

A sample $y_t^i$ placed near a surface will have an associated weight $\omega_t^i$ with a high value. It is reasonable to assume that some surrounding positions may also be part of this surface. Hence, placing a number of new samples in the vicinity of $y_t^i$ contributes to progressively exploring the surface of a voxel set. This idea leads to the spatial re-sampling and propagation scheme that drives the samples along the surface of the tracked target over time.

Given the discrete nature of the 3D voxel space, it is assumed that every sample occupies a single voxel (discrete 3D coordinate) and that no two samples can be placed in the same location. Re-sampling mimics PF: a number of replicas proportional to the normalized weight of each sample is generated. These new samples are then propagated, and discrete noise is added to their positions, meaning that the new positions are also constrained to discrete 3D coordinates (see an example in Figure 6). Two re-sampled and propagated samples may nevertheless fall in the same voxel location, as shown in Figure 6. In that case, one of them randomly explores the adjacent voxels until reaching an empty location; if no suitable location is found, the sample is dismissed.

Figure 6. Example of discrete re-sampling and propagation (in 2D). (a) A sample is re-sampled and its replicas are randomly placed, each occupying a single voxel. (b) Two re-sampled samples fall in the same position (red cell) and one of them (blue) performs a random search through the adjacent voxels to find an empty location.

The choice of sampling the surface voxels of the object instead of its interior voxels is motivated by the fact that propagating samples along the surface rapidly spreads them all around the object, as depicted in Figure 7. Propagating samples on the surface is equivalent to propagating them on a 2D domain, so the condition of not placing two samples in the same voxel makes them explore the surface faster (see Figure 6). Interior voxels, on the other hand, propagate in a 3D domain, having more space to explore and therefore spreading around the volume more slowly (see Figure 6). Although both (pseudo-)likelihoods should produce a fair estimate of the object's centroid, both sampling sets must be randomly spread around the object, otherwise the centroid estimate will be biased.

Figure 7. Sample position evolution and centroid estimation. Likelihood based on: (a) interior voxels, or (b) surface voxels.
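The discrete re-sampling and propagation step of Figure 6 can be sketched as follows; the ±1-voxel jitter and the probe limit of 10 are assumptions of this sketch:

```python
import numpy as np

def resample_propagate(samples, weights, rng=None):
    """Discrete 3D re-sampling and propagation (cf. Figure 6): replicas
    are drawn in proportion to the normalized weights, each replica is
    jittered to an adjacent integer voxel, and a collision is resolved by
    randomly probing neighbors until a free voxel is found (the replica
    is dismissed after a few failed probes)."""
    rng = rng or np.random.default_rng()
    n = len(samples)
    counts = rng.multinomial(n, weights / weights.sum())
    taken, new_samples = set(), []
    for s, c in zip(samples, counts):
        for _ in range(c):
            pos = tuple(s + rng.integers(-1, 2, size=3))  # discrete noise
            for _ in range(10):                           # collision probes
                if pos not in taken:
                    break
                pos = tuple(np.asarray(pos) + rng.integers(-1, 2, size=3))
            else:
                continue          # no free voxel found: dismiss the replica
            taken.add(pos)
            new_samples.append(pos)
    return np.array(new_samples)  # may hold fewer than n samples
```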

4.2.1 Interaction model

The flexibility of a sample-based analysis may sometimes lead to situations where samples spread out too far from the computed centroid. In order to cope with this problem, an intra-target sample interaction model is devised. If a sample is placed at a position such that $\left\| [y_t^i]_{x,y} - [\tilde{x}_{t-1}]_{x,y} \right\| > \delta$, it is removed (that is, $\omega_t^i = 0$). We set the threshold as $\delta = \alpha s_x$, with $s_x = 30$ cm; the factor $\alpha = 1.5$ produced accurate results in our experiments.

The interaction among targets is modeled in a similar way as in the PF approach: Equations 11 and 12 are applied to the samples with an appropriately scaled parameter $k$.

5 Results and evaluation

In order to assess the performance of the proposed tracking systems, they have been tested on the set of benchmarking image sequences provided by the CLEAR Evaluation Campaign 2007 [22]. Typically, these evaluation sequences involve up to five people moving around in a meeting room. The benchmarking set is formed by two separate datasets, development and evaluation, containing sequences recorded by five of the participating partners. A sample of these data can be seen in Figure 8. The development set consists of 5 sequences of approximately 20 min each, while the evaluation set is formed by 40 sequences of 5 min each, adding up to 5 h of data. Each sequence was recorded with four cameras placed in the corners of the room and a zenithal camera placed on the ceiling. All cameras were calibrated and had resolutions ranging from 640 × 480 to 756 × 576 pixels at an average frame rate of $f_R = 25$ fps. The test environments were rooms of approximately 5 × 4 m containing occluding elements such as tables and chairs. Images of the empty rooms were also provided to train the background/foreground segmentation algorithms.

Figure 8. CLEAR [22] evaluation dataset sample. Images from several partners showing a common indoor conference room configuration involving several participants.

Metrics proposed in [4] for multi-person tracking evaluation have been adopted, namely the Multiple Object Tracking Precision (MOTP), which shows tracker's ability to estimate precise object positions, and the Multiple Object Tracking Accuracy (MOTA), which expresses its performance at estimating the number of objects, and at keeping consistent trajectories. MOTP scores the average metric error when estimating multiple target 3D centroids, while MOTA evaluates the percentage of frames where targets have been missed, wrongly detected or mismatched.

The aim of a tracking system is to produce high values of MOTA and low values of MOTP, indicating its ability to correctly track all targets and to estimate their positions accurately. When comparing two algorithms, the one with the highest MOTA score is preferred.

5.1 Results

To demonstrate the effectiveness of the proposed multi-person tracking approaches, a set of experiments was conducted over the CLEAR 2007 database. The development part of the dataset was used to train the track initiation/termination modules, as described in Section 2.3, and the remaining test part was used for our experiments.

First, the multi-camera data are pre-processed: foreground/background segmentation is performed, followed by the 3D voxel reconstruction algorithm. In order to analyze the dependency of the tracker's performance on the resolution of the 3D reconstruction, several voxel sizes were employed, $s_v = \{2, 5, 10, 15\}$ cm. A colored version of these voxel reconstructions was also generated, according to the technique introduced in Section 2.1. These data were then fed as input to the proposed PF and SS approaches.

In both types of filters, SS and PF, three parameters drive the performance of the algorithm: the voxel size $s_v$, the number of samples $N_s$ or particles $N_p$, and the usage of color information. The experiments carried out explore the influence of these parameters on the MOTP (precision, in cm) and MOTA (tracker accuracy, in % of correctly tracked targets), shown in Figure 9. Some remarks can be drawn:

Figure 9. MOTP and MOTA scores for the SS and PF techniques using raw and colored voxels. Several voxel sizes $s_v = \{2, 5, 10, 15\}$ cm have been used in the experiments.

  • Number of samples/particles: There is a dependency between the MOTP score and the number of particles/samples, especially for the SS algorithm. The contribution of a new sample to the estimation of the centroid in SS has less impact than the addition of a new particle in PF, hence the slower decay of the MOTP curves for SS than for PF. Regarding the MOTA score, there is no significant dependency on $N_s$ or $N_p$. Two factors drive the MOTA of an algorithm: the track initiation/termination modules, which mainly contribute to the ratio of misses and false positives, and the filtering step, which impacts the mismatch ratio. The low dependency of MOTA on $N_s$ or $N_p$ shows that most of the algorithm's impact on this score is due to the particle/sample propagation and interaction strategies rather than to the number of particles/samples itself. Moreover, the influence on the MOTA score is tightly correlated with the track initiation/termination policy. This assumption was experimentally validated by testing several classification methods (Mixture of Gaussians, PCA, Parzen, and K-Means) in the initiation/termination modules, yielding a drop in the MOTA score proportional to their ability to correctly classify a blob as person/no-person.

  • Voxel size: Scenes reconstructed with a large voxel size do not capture all spatial details well and may miss some objects, thus decreasing the performance of the system (both for SS and PF). It can be observed that the MOTP and MOTA scores improve as the voxel size decreases.

  • Color features: Color information improves the performance of SS and PF in both MOTP and MOTA scores. First, there is an improvement when using color information for a given voxel size, especially for the SS algorithm. Moreover, the smaller the voxel size, the more noticeable the difference between the experiments using raw and color features. This effect is supported by the fact that color characteristics are better captured with small voxel sizes. The performance improvement when using color in the SS algorithm is more noticeable, since the samples are placed in the regions with a high likelihood of being part of the target. For instance, this effect is most evident when the subject is sitting and the samples concentrate in the upper body, disregarding the chair. In the SS algorithm, the MOTP score benefits from this efficient sample placement. The PF algorithm is constrained to evaluate the color likelihood over the ellipsoid defined in Equation 9, and thus cannot differentiate between parts of the blob that do not belong to the tracked target. Color information used within the filtering loop leads to a better distinguishability among blobs, thus reducing the mismatch ratio and slightly improving the MOTA score. Merging of adjacent blobs and complex crossings among targets are also correctly resolved. An example of the impact of color information is shown in Figure 10, where the usage of color avoids a mismatch between two targets. This effect is more noticeable when the targets in the scene are dressed in different colors.

Figure 10. Zenithal view of two comparative experiments showing the influence of color in the SS algorithm. The cross-over between two targets is correctly tackled when using color information, whereas using only raw features leads to a mismatch and, afterwards, a track loss (white ellipsoid) and the initiation of a new one (cyan ellipsoid).

We can compare the results obtained by SS and PF with other algorithms evaluated on the same CLEAR 2007 database, whose scores are reported in Table 2. Most of these methods exploited multi-view information, with the exception of [31], which used only the zenithal camera, facing the associated distortion and perspective problems. PF is the most employed technique, due to its suitability to the characteristics of this problem, although the Kalman filter used by [15] provided fair results when fed with higher-level semantic features extracted from the input data (in this case, faces). Note the low FP score of this system, a consequence of the unlikely event of detecting a face in a spurious object. A 3D voxel reconstruction was used as input data in [5], together with a simple track management system. The remaining methods [7, 31] relied on a fixed human body appearance model similar to the ellipsoidal region of interest used in our PF proposal. However, the novelty of these methods lies in the strategies for combining the information coming from the analysis of the different views without performing any 3D reconstruction. Comparing the best previously proposed tracking system [31]^c with our two approaches, we obtain relative improvements of $\Delta(\mathrm{MOTP}, \mathrm{MOTA})_{\mathrm{SS}} = (7.63, 17.13)\%$ and $\Delta(\mathrm{MOTP}, \mathrm{MOTA})_{\mathrm{PF}} = (5.16, 7.15)\%$.

Table 2. Results presented at CLEAR 2007 [22] by several partners.

In order to visually show the performance of the SS algorithm, some videos corresponding to the most challenging tracking scenarios have been made available at http://www.cristiancanton.org.

5.2 Computational performance

Comparing metrics among different algorithms gives an idea of their performance in a scenario where computational complexity is not taken into account. An analysis of the operation time of several algorithms under the same conditions, together with the produced MOTP/MOTA metrics, provides a more informative and fairer comparison. Although there is no standard procedure to measure the computational performance of a tracking process, we devised a method to assess the computational efficiency of our algorithms and present a comparative study.

The real-time factor (RTF) associated with a performance measure, MOTP or MOTA (on the two vertical axes), of the SS and PF algorithms when dealing with raw and colored input voxels is presented in Figure 11. This factor is a proportional measure of the speed of the algorithm, where RTF = 1 stands for real-time operation, while RTF > 1 and RTF < 1 indicate faster and slower than real-time performance, respectively. Each point of every curve is the result of an experiment conducted over the whole CLEAR dataset for a given number of samples/particles of each algorithm.

Figure 11. Computational performance comparison between PF and SS using several voxel sizes $s_v = \{2, 5, 10, 15\}$ cm and features (raw or colored voxels). MOTP and MOTA scores are related to the real-time factor (RTF), showing the computational load required by each algorithm to attain a given tracking performance.

The first noticeable characteristic of these charts is that, due to the computational complexity of each algorithm, when comparing SS and PF under the same operating conditions, the RTF associated with SS is always higher than that associated with PF. Similarly, the computational load is higher when analyzing colored rather than raw inputs. All the plotted curves attain lower RTF values as the voxel size $s_v$ decreases, since the amount of data to process increases (note the different RTF scale ranges for each voxel size in Figure 11). Regarding the MOTP/MOTA metrics, there is a common tendency toward a decrease in MOTP and an increase in MOTA as the RTF decreases. The separation between the SS and PF curves grows as the voxel size decreases, since the PF algorithm has to evaluate a larger amount of data.

These results lead to the conclusion that the SS algorithm is able to produce similar and, in some cases, better results than the PF algorithm at a lower computational cost. For example, using $s_v = 5$ cm, a MOTP score of around 165 mm can be obtained using SS with an RTF ten times larger than when using PF, and similarly for the MOTA score.

6 Conclusions

In this article, we have presented a number of contributions to the multi-person tracking task in a multi-camera environment. A block representation of the whole tracking process allowed us to identify the performance bottlenecks of the system and to address efficient solutions for each of them. Real-time performance of the system was a major goal, hence efficient tracking algorithms have been produced, together with an analysis of their performance.

The performance of these systems has been thoroughly tested over the CLEAR database and quantitatively compared through two scores: MOTP and MOTA. A number of experiments have been conducted to explore the influence of the resolution of the 3D reconstruction and of the color information. Results have been compared with other state-of-the-art algorithms evaluated with the same metrics on the same test data.

The relevance of the initiation and termination of filters has been proved, since these modules have a major impact on the MOTA score. However, most articles in the literature do not specifically address the operation of these modules. We proposed a statistical classifier based on classification trees as a way to discriminate blobs between the person and no-person classes. Training of this classifier was done using the data available in the development part of the employed database, and a number of features (namely weight, height, top in the z-axis, and bounding box size) were extracted and provided as input to the classifier. Another criterion, proximity to already existing tracks, was also employed to create or destroy a track. The performance scores in Table 2 for the PF and SS systems present the lowest values for the false positive (FP) and missed target (Miss) ratios, supporting the relevance of the track initiation and termination modules.

Two proposals for the filtering step of the tracking system have been presented: PF and SS. An independent tracker was assigned to every target, and an interaction model was defined. The PF technique proved to be robust and led to state-of-the-art results, but its computational load is unaffordable for small voxel sizes. As an alternative, the SS algorithm has been presented, achieving similar and, on some occasions, better performance than PF at a smaller computational cost. Its sample-based estimation of the centroid allows a better adaptation to noisy data and a better distinguishability among merged blobs. In both PF and SS, color information provided a useful cue to increase the robustness of the system against track mismatches, thus increasing the MOTA score. In SS, color information also allowed a better placement of the samples, distinguishing between parts belonging to the tracked object and parts merged in from a spurious object, leading to a better MOTP score.

Future research on this topic involves multi-modal fusion with audio data, to improve the precision of the tracker (MOTP) and to avoid mismatches among targets, thus improving the MOTA score.

End notes

^a Analogously to the pixel (picture element), defined as the minimum information unit in a discrete image, the voxel (volume element) is defined as the minimum information unit in a 3D discrete representation of a volume.

^b For the sake of notational simplicity, pseudo-likelihood functions will be denoted as p(·) instead of introducing a specific notation for them.

^c When selecting the best system, the MOTA score is regarded as the most significant value.

Competing interests

The authors declare that they have no competing interests.

References

  1. Park S, Trivedi MM: Understanding human interactions with track and body synergies captured from multiple views. Comput Vis Image Understand 2008, 111(1):2-20.

  2. Project CHIL--Computers in the Human Interaction Loop, 2004. [http://chil.server.de]

  3. Haritaoglu I, Harwood D, Davis LS: W4: real-time surveillance of people and their activities. IEEE Trans Pattern Anal Mach Intell 2000, 22(8):809-830.

  4. Bernardin K, Elbs A, Stiefelhagen R: Multiple object tracking performance metrics and evaluation in a smart room environment. Proceedings of the IEEE International Workshop on Visual Surveillance 2006.

  5. Canton-Ferrer C, Salvador J, Casas JR: Multi-person tracking strategies based on voxel analysis. In Proceedings of the Classification of Events, Activities and Relationships Evaluation and Workshop. Volume 4625, Lecture Notes in Computer Science; 2007:91-103.

  6. Khan Z, Balch T, Dellaert F: Efficient particle filter-based tracking of multiple interacting targets using an MRF-based motion model. Proceedings of the International Conference on Intelligent Robots and Systems 2003, 1(1):254-259.

  7. Lanz O, Chippendale P, Brunelli R: An appearance-based particle filter for visual tracking in smart rooms. In Proceedings of the Classification of Events, Activities and Relationships Evaluation and Workshop. Volume 4625, Lecture Notes in Computer Science; 2007:57-69.

  8. Yilmaz A, Javed O, Shah M: Object tracking: a survey. ACM Comput Surv 2006, 38(4):1-45.

  9. Canton-Ferrer C, Casas JR, Pardàs M: Towards a Bayesian approach to robust finding correspondences in multiple view geometry environments. In Proceedings of the 4th International Workshop on Computer Graphics and Geometric Modelling. Volume 3515, Lecture Notes in Computer Science; 2005:281-289.

  10. Lanz O: Approximate Bayesian multibody tracking. IEEE Trans Pattern Anal Mach Intell 2006, 28(9):1436-1449.

  11. Cheung GKM, Kanade T, Bouguet JY, Holler M: A real time system for robust 3D voxel reconstruction of human motions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2000, 2:714-720.

  12. Isidoro J, Sclaroff S: Stochastic refinement of the visual hull to satisfy photometric and silhouette consistency constraints. Proceedings of the IEEE International Conference on Computer Vision 2003, 2:1335-1342.

  13. Mikič I, Santini S, Jain R: Tracking objects in 3D using multiple camera views. Proceedings of the Asian Conference on Computer Vision 2000.

  14. Focken D, Stiefelhagen R: Towards vision-based 3-D people tracking in a smart room. Proceedings of the IEEE International Conference on Multimodal Interfaces 2002, 400-405.

  15. Katsarakis N, Talantzis F, Pnevmatikakis A, Polymenakos L: The AIT 3D audiovisual person tracker for CLEAR 2007. In Proceedings of the Classification of Events, Activities and Relationships Evaluation and Workshop. Volume 4625, Lecture Notes in Computer Science; 2007:35-46.

  16. Arulampalam MS, Maskell S, Gordon N, Clapp T: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans Signal Process 2002, 50(2):174-188.

  17. Lien K, Huang C: Multiview-based cooperative tracking of multiple human objects. EURASIP J Image Video Process 2008, 8(2):1-13.

  18. Osawa T, Wu X, Sudo K, Wakabayashi K, Arai H: MCMC based multi-body tracking using full 3D model of both target and environment. Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance 2007, 224-229.

  19. Black J, Ellis T, Rosin P: Multi view image surveillance and tracking. Proceedings of the Workshop on Motion and Video Computing 2002, 169-174.

  20. López A, Canton-Ferrer C, Casas JR: Multi-person 3D tracking with particle filters on voxels. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing 2007, 1:913-916.

  21. Canton-Ferrer C, Sblendido R, Casas JR, Pardàs M: Particle filtering and sparse sampling for multi-person 3D tracking. Proceedings of the IEEE International Conference on Image Processing 2008, 2644-2647.

  22. CLEAR--Classification of Events, Activities and Relationships Evaluation and Workshop, 2007. [http://www.clear-evaluation.org]

  23. Hall DL, McMullen SAH: Mathematical Techniques in Multisensor Data Fusion. Artech House; 2004.

  24. Kutulakos KN, Seitz SM: A theory of shape by space carving. Int J Comput Vis 2000, 38(3):199-218.

  25. Faugeras O, Keriven R: Variational principles, surface evolution, PDE's, level set methods and the stereo problem. Proceedings of the 5th IEEE EMBS International Summer School on Biomedical Imaging 2002.

  26. Casas JR, Salvador J: Image-based multi-view scene analysis using conexels. Proceedings of the HCSNet Workshop on Use of Vision in Human-Computer Interaction 2006, 19-28.

  27. Kolmogorov V, Zabih R: What energy functions can be minimized via graph cuts? IEEE Trans Pattern Anal Mach Intell 2004, 26(2):147-159.

  28. Stauffer C, Grimson W: Adaptive background mixture models for real-time tracking. Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition 1999, 252-259.

  29. Maggio E, Piccardo E, Regazzoni C, Cavallaro A: Particle PHD filtering for multi-target visual tracking. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing 2007, 1:1101-1104.

  30. Talantzis F, Pnevmatikakis A, Constantinides AG: Audio-visual active speaker tracking in cluttered indoors environments. IEEE Trans Syst Man Cybern B 2008, 38(3):799-807.

  31. Bernardin K, Gehrig T, Stiefelhagen R: Multi-level particle filter fusion of features and cues for audio-visual person tracking. In Proceedings of the Classification of Events, Activities and Relationships Evaluation and Workshop. Volume 4625, Lecture Notes in Computer Science; 2007:70-81.

  32. Tukey JW: Exploratory Data Analysis. Addison-Wesley; 1977.

  33. Breiman L, Friedman JH, Olshen RA, Stone CJ: Classification and Regression Trees. Chapman and Hall; 1993.

  34. Duda RO, Hart PE, Stork DG: Pattern Classification. Wiley-Interscience; 2000.

  35. Crisco JJ, McGovern RD: Efficient calculation of mass moments of inertia for segmented homogeneous three-dimensional objects. J Biomech 1998, 31(1):97-101.

  36. Leu JG: Computing a shape's moments from its boundary. Pattern Recogn 1991, 24(10):116-122.

  37. Yang L, Albregtsen F: Fast and exact computation of Cartesian geometric moments using discrete Green's theorem. Pattern Recogn 1996, 29(7):1061-1073.


Author information

Correspondence to Cristian Canton-Ferrer.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Canton-Ferrer, C., Casas, J.R., Pardàs, M. et al. Multi-camera multi-object voxel-based Monte Carlo 3D tracking strategies. EURASIP J. Adv. Signal Process. 2011, 114 (2011). https://doi.org/10.1186/1687-6180-2011-114