Multi-camera multi-object voxel-based Monte Carlo 3D tracking strategies
© Canton-Ferrer et al; licensee Springer. 2011
Received: 15 May 2011
Accepted: 23 November 2011
Published: 23 November 2011
This article presents a new approach to the problem of simultaneous tracking of several people in low-resolution sequences from multiple calibrated cameras. Redundancy among cameras is exploited to generate a discrete 3D colored representation of the scene, which is the starting point of the processing chain. We review how the initiation and termination of tracks influences the overall tracker performance, and present a Bayesian approach to efficiently create and destroy tracks. Two Monte Carlo-based schemes adapted to the incoming 3D discrete data are introduced. First, a particle filtering technique is proposed relying on a volume likelihood function taking into account both occupancy and color information. Sparse sampling is presented as an alternative based on a sampling of the surface voxels in order to estimate the centroid of the tracked people. In this case, the likelihood function is based on local neighborhood computations, thus dramatically decreasing the computational load of the algorithm. A discrete 3D re-sampling procedure is introduced to drive these samples along time. Multiple targets are tracked by means of multiple filters, and interaction among them is modeled through a 3D blocking scheme. Tests over the CLEAR-annotated database yield quantitative results showing the effectiveness of the proposed algorithms in indoor scenarios, and a fair comparison with other state-of-the-art algorithms is presented. We also consider the real-time performance of the proposed algorithm.
Tracking multiple objects and keeping a record of their identities along time in a cluttered dynamic scene is a major research topic in computer vision, basically fostered by the number of applications that benefit from the retrieved information. For instance, multi-person tracking has been found useful for automatic scene analysis, human-computer interfaces, and detection of unusual behaviors in security applications.
A number of methods for camera-based multi-person 3D tracking have been proposed in the literature [4–7]. A common goal in these systems is robustness under occlusions created by the multiple objects cluttering the scene when estimating the position of a target. Single-camera approaches  have been widely employed, but they are vulnerable to occlusions, rotation, and scale changes of the target. In order to avoid these drawbacks, multi-camera tracking techniques exploit spatial redundancy among different views and provide 3D information at the actual scale of the objects in the real world. Integration of data extracted from multiple cameras has been proposed in terms of a fusion at feature level as image correspondences  or multi-view histograms  among others. Information fusion at data or raw level has been achieved by means of voxel reconstructions , polygon meshes , etc.
Most multi-camera approaches rely on a separate analysis of each camera view, followed by a feature fusion process to finally generate an output. Exploiting the underlying epipolar geometry of a multi-camera setup toward finding the most coherent feature correspondence among views was first tackled by Mikič et al.  using algebraic methods together with a Kalman filter, and further developed by Focken et al. . Exploiting epipolar consistency within a robust Bayesian framework was also presented by Canton-Ferrer et al. . Other systems rely on detecting semantically relevant patterns among multiple cameras to feed the tracking algorithm as done in  by detecting faces. Particle filtering (PF)  has been a commonly employed algorithm because of its ability to deal with problems involving multi-modal distributions and non-linearities. Lanz et al.  proposed a multi-camera PF tracker exploiting foreground and color information, and several contributions have also followed this path: [4, 7]. Occlusions, being a common problem in feature fusion methods, have been addressed in  using HMM to model the temporal evolution of occlusions within a PF algorithm. Information about the tracking scenario can also be exploited toward detecting and managing occlusions as done in  by modeling the occluding elements, such as furniture, in a training phase before tracking. It must be noted that, in this article, we assume that all cameras will be covering the area under study. Other approaches to multi-camera/multi-person tracking do not require maximizing the overlap of the field of view of multiple cameras, leading to the non-overlapped multi-camera tracking algorithms .
Multi-camera/multi-person tracking algorithms based on data fusion prior to any analysis were pioneered by Lopez et al.  using a voxela reconstruction of the scene. This idea was further developed by the authors in [5, 21], finally leading to the present article. To the best of our knowledge, this is the first approach to multi-person tracking exploiting data fusion from multiple cameras as the input of the algorithms. In this article, we first introduce a methodology for multi-person tracking based on a colored voxel representation of the scene as the start of the processing chain. The contribution of this article is twofold. First, we emphasize the importance of the initiation and termination of tracks, usually neglected in most tracking algorithms, which indeed has an impact on the performance of the overall system. A general technique for the initiation/termination of tracks is presented. The second contribution is the filtering step, where two techniques are introduced. The first technique applies PF to the input voxels to estimate the centroids of the tracked targets. However, this process is far from real-time performance, so an alternative is proposed, which we call Sparse Sampling (SS). SS aims at decreasing the computation time by means of a novel tracking technique based on the seminal PF principle. Particles no longer sample the state space but instead a magnitude whose expectation produces the centroid of the tracked person: the surface voxels. The likelihood evaluation, relying on occupancy and color information, is computed on local neighborhoods, thus dramatically decreasing the computational load of the overall algorithm. Finally, the effectiveness of the proposed techniques is assessed by means of objective metrics defined in the framework of the CLEAR  multi-target tracking database. Computational performance is reviewed toward proving the real-time operation of the SS algorithm.
Fair comparisons with state-of-the-art methods evaluated using the same database are also presented and discussed.
2 Tracker design methodology
2.1 Input and output data
When addressing the problem of multi-person tracking within a multi-camera environment, a strategy about how to process this information is needed. Many approaches perform an analysis of the images separately, and then combine the results using some geometric constraints . This approach is denoted as an information combination by fusion of decisions. However, a major issue in this procedure is dealing with occlusion and perspective effects. A more efficient way to combine information is data fusion . In our case, data fusion leads to a combination of information from all images to build up a new data representation, and to apply the algorithms directly on these data. Several data representations aggregating the information of multiple views have been proposed in the literature such as voxel reconstructions [11, 24], level sets , polygon meshes , conexels , depth maps , etc. In our research, we opted for a colored voxel representation due to both its fast computation and accuracy.
Redundancy among cameras is exploited by means of a Shape-from-Silhouette (SfS) technique . This process generates a discrete occupancy representation of the 3D space (voxels). A voxel is labeled as foreground or background by checking the spatial consistency of its projection on the NC segmented silhouettes, and finally obtaining the 3D binary reconstruction shown in Figure 2(c). We will denote this raw voxel reconstruction as . The visibility of a surface voxel onto a given camera is assessed by computing the discrete ray originating from its optical center to the center of this voxel using Bresenham's algorithm and testing whether this ray intersects with any other foreground voxel. The most saturated color among pixels of the set of cameras that see a surface voxel is assigned to it. A colored representation of surface voxels of the scene is obtained, denoted as . An example of this process is depicted in Figure 2(d). It should be taken into account that, without loss of generality, other background/foreground and 3D reconstruction algorithms may be used to generate the input data to the tracking algorithm presented in this article.
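The spatial-consistency test at the heart of SfS can be sketched as follows. This is a minimal sketch, not the paper's implementation: `project` is an assumed calibration helper mapping a 3D point to pixel coordinates in a given camera (or `None` when outside the image), and the Bresenham-based visibility test and color assignment are omitted.

```python
import numpy as np

def shape_from_silhouette(voxel_centers, silhouettes, project):
    """Label each voxel as foreground if its projection falls on the
    segmented foreground silhouette in every camera that sees it.

    voxel_centers: iterable of 3D voxel centers.
    silhouettes:   list of 2D binary masks, one per camera.
    project:       project(cam_idx, xyz) -> (u, v) pixel coords, or
                   None if the point falls outside that camera's image.
    """
    fg = np.ones(len(voxel_centers), dtype=bool)
    for v_idx, xyz in enumerate(voxel_centers):
        for cam_idx, mask in enumerate(silhouettes):
            uv = project(cam_idx, xyz)
            if uv is None:
                continue  # voxel outside this camera's field of view
            u, v = uv
            if not mask[v, u]:  # projection misses the silhouette
                fg[v_idx] = False
                break
    return fg
```

With a real setup, `project` would apply the calibrated projection matrix of camera `cam_idx`; here any callable with that shape works.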
The resulting colored 3D scene reconstruction is fed to the proposed system that assigns a tracker to each target and the obtained tracks are processed by a higher semantic analysis module. Information about the environment (dimensions of the room, furniture, etc.) allows assessing the validity of tracked volumes and discarding false volume detections.
Finally, the output of the overall tracking algorithm will be a number of hypotheses for the centroid position of each of the targets present in the scene.
2.2 Tracker state and filtering
One of the major challenges in multi-target tracking is the estimation of the number of targets and their positions in the scene, based on a set of uncertain observations. This issue can be addressed from two perspectives. First, extending the theory of single-target algorithms to multiple targets. This approach defines the working state space as the concatenation of the positions of all NT targets as . The difficulty here is the time variant dimensionality of this space. Monte Carlo approaches, and specifically PF approaches, to this problem have to face the exponential dependency between the number of particles required by the filter and the dimension of , turning out to be computationally infeasible. Recently, a solution based on random finite sets achieving linear complexity has been presented .
Multi-target tracking can also be tackled by tracking each target independently, that is to maintain NT trackers with a state space . In this case, the system attains a linear complexity with the number of targets, thus allowing feasible implementations. However, interactions among targets must be modeled in order to ensure the most independent set of tracks. This approach to multi-person tracking will be adopted in our research.
2.3 Track initiation and termination
A crucial factor in the performance of a tracking system is the module that addresses the initiation and termination of tracks. The initiation of a new tracker is independent of the employed filtering technique and relies only on the input data and the current state (position) of the tracks in the scene. On the other hand, the termination of an existing tracking filter is driven by the performance of the tracker.
The initialization of a new filter is determined by the correct detection of a person in the analyzed scene. This process is crucial when tracking, and its correct operation will drive the overall system's accuracy. However, despite the importance of this step, little attention is paid to it in the design of multi-object trackers in the literature. Only a few articles explicitly mention this process, such as  that employs a face detector to detect a person or  that uses scout particle filters to explore the 3D space for new targets. Moreover, it is assumed that all targets in the scene are of interest, i.e., people, not accounting for spurious objects, e.g., furniture, shadows, etc. In this section, we introduce a method to properly handle the initiation and termination of filters from a Bayesian perspective.
2.3.1 Track initiation criteria
The 3D input data fed to the tracking system are usually corrupted and present a number of inaccuracies such as objects not being reconstructed, mergings among adjacent blobs, spurious blobs, etc. Hence, defining a track initialization criterion based solely on the presence of a blob might lead to poor performance of the system. For instance, objects such as furniture might be wrongly detected as foreground, reconstructed, and tracked. Instead, a classification of the blobs based on probabilistic criteria can be applied during this initialization process, aiming at a more robust operation. Training of this classifier is based on the development set of the used database, together with the available ground truth describing the positions of the tracked objects.
We will consider the region of influence of a target with centroid x as the ellipsoid with axis sizes s = (s_x, s_y, s_z) centered at x.
that is, to assign x j to the component with the largest volume enclosed in the region of influence. It must be noted that some x j might not have any associated component due to a wrong segmentation or faulty reconstruction of the target. Moreover, the set of components not associated with any ground truth position can be identified as spurious objects, reconstructed shadows, etc.
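The association rule above can be sketched as follows. This is a minimal illustration under stated assumptions: connected components are given as voxel-coordinate arrays, the ellipsoid is axis-aligned, and "volume enclosed" is approximated by the voxel count inside the region of influence.

```python
import numpy as np

def in_ellipsoid(voxels, c, s):
    """Boolean mask of voxels inside the axis-aligned ellipsoid of
    semi-axes s = (sx, sy, sz) centered at c."""
    d = (np.asarray(voxels, float) - np.asarray(c, float)) / np.asarray(s, float)
    return (d ** 2).sum(axis=1) <= 1.0

def associate(gt_centroids, components, s):
    """Assign each ground-truth centroid to the connected component with
    the most voxels inside its region of influence; None when nothing
    overlaps (faulty segmentation or reconstruction)."""
    assoc = []
    for c in gt_centroids:
        counts = [int(in_ellipsoid(comp, c, s).sum()) for comp in components]
        assoc.append(int(np.argmax(counts)) if max(counts) > 0 else None)
    return assoc
```

Components left unassociated after this step are exactly the candidate spurious objects mentioned above.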
Features employed by the person/no-person classifier, where "magnitude" denotes the x, y, or z coordinate of a voxel
ρ = 1.1 g/cm³
A number of standard binary classifiers have been tested and their performances evaluated, namely Gaussian, Mixture of Gaussians, Neural Networks, K-Means, PCA, Parzen, and Decision Trees [33, 34]. Due to the aforementioned properties of the statistical distributions of the features, some classifiers (e.g., Gaussian, PCA) are unable to achieve a good performance. Other classifiers, such as K-Means, MoG, or Parzen, require a large number of characterizing elements. Decision trees  have reported the best results. Separable variables such as height, weight, and bounding box size are automatically selected to build up a decision tree that yields a high recognition rate, with a precision of 0.98 and a recall of 0.99 on our test database.
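A sketch of the feature extraction and a toy stand-in for the learned tree follows. The density ρ = 1.1 g/cm³ is from the features table; the exact feature definitions, and all thresholds in `is_person`, are illustrative assumptions, not the values induced from the CLEAR development set.

```python
import numpy as np

RHO = 1.1  # g/cm^3, approximate human body density (from the paper)

def blob_features(voxels, voxel_side_cm):
    """Features fed to the person/no-person classifier: height, an
    estimated weight (occupied volume times density RHO), and the
    bounding-box diagonal. Exact definitions are assumptions."""
    v = np.asarray(voxels, float) * voxel_side_cm  # voxel indices -> cm
    height_cm = float(v[:, 2].max() - v[:, 2].min())
    weight_kg = len(v) * voxel_side_cm ** 3 * RHO / 1000.0
    bbox_diag_cm = float(np.linalg.norm(v.max(axis=0) - v.min(axis=0)))
    return height_cm, weight_kg, bbox_diag_cm

def is_person(height_cm, weight_kg, bbox_diag_cm):
    """Toy decision-tree stand-in; thresholds are illustrative only."""
    if height_cm < 80.0:         # too short: furniture, shadows
        return False
    if weight_kg < 15.0:         # too light: small spurious blobs
        return False
    return bbox_diag_cm < 300.0  # implausibly large blobs rejected
```

In practice the tree would be induced from the development-set features and ground truth with any standard decision-tree learner.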
Another complementary criterion employed in the initiation of new tracks is based on the current state of the tracker: a new track will not be created if its distance to the closest existing target is below a threshold.
2.3.2 Track termination criteria
- If two or more tracks fall too close to one another, they might be tracking the same target; hence only one will be kept alive while the rest will be removed.
- If a tracker's efficiency becomes very low, it might indicate that the target has disappeared, and the track should be removed.
- The person/no-person classifier is applied to the set of features extracted from the voxels assigned to a target. If the classifier outputs a no-person verdict for a number of frames, the target will be considered as lost.
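The three termination criteria above can be combined into one check per track. In this sketch, `efficiency` stands for whatever tracker-health measure is used (the paper does not pin it down here, so it is abstracted), and the duplicate test keeps the older, lower-index track alive; both choices are assumptions.

```python
import numpy as np

def should_terminate(tracks, k, min_sep_cm, efficiency, eff_thresh,
                     noperson_frames, max_noperson):
    """Return True if track k should be destroyed."""
    # (1) duplicate tracks: too close to a longer-lived track
    for j, x in enumerate(tracks):
        if j < k and np.linalg.norm(np.subtract(x, tracks[k])) < min_sep_cm:
            return True
    # (2) very low efficiency: the target has probably disappeared
    if efficiency < eff_thresh:
        return True
    # (3) classifier voted 'no person' for too many frames
    return noperson_frames > max_noperson
```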
3 Voxel-based solutions
The filtering block shown in Figure 1 addresses the problem of keeping consistent trajectories of the tracked objects, resolving crossings among targets, mergings with spurious objects (i.e., shadows) and producing an accurate estimation of the centroid of the target based on the input voxel information. Although there is a number of papers addressing the problem of multi-camera/multi-person tracking, very few contributions have been based on voxel analysis [20, 21].
3.1 PF tracking
Hence, the weights are proportional to the likelihood function that will be computed over the incoming volume z t .
Basically, in the PF operation loop two steps must be defined: likelihood evaluation and particles propagation. In the following, we present our proposal for the PF implementation.
3.1.1 Likelihood evaluation
Factor λ controls the influence of each term (foreground and color information) in the overall likelihood function. Empirical tests have shown that λ = 0.8 provides satisfactory results. A more detailed review of the impact of color information on the overall performance of the algorithm is addressed in Section 5.1.
The likelihood associated with the raw data is defined as the ratio of overlap between the input data and the ellipsoid defined by the particle (see Section 2.3.1)
where stands for the ellipsoid placed in the centroid estimation and α is the adaptation coefficient. In our experiments, α = 0.9 provided satisfactory results.
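A sketch of this likelihood with the paper's λ = 0.8 and α = 0.9 follows. Several details are assumptions made for illustration: the two terms are combined as a weighted sum, the overlap ratio is taken as the fraction of the blob's foreground voxels inside the particle's ellipsoid, and the color term uses a Bhattacharyya coefficient between normalized histograms rather than the paper's exact distance.

```python
import numpy as np

LAMBDA = 0.8  # foreground vs. color weight (value from the paper)
ALPHA = 0.9   # reference-histogram adaptation coefficient (from the paper)

def foreground_likelihood(voxels, center, s):
    """Overlap ratio: fraction of foreground voxels inside the
    particle's ellipsoid of semi-axes s centered at `center`."""
    d = ((np.asarray(voxels, float) - center) / s) ** 2
    return float((d.sum(axis=1) <= 1.0).mean()) if len(voxels) else 0.0

def color_likelihood(hist_local, hist_ref):
    """Bhattacharyya coefficient between normalized histograms (one
    common choice; the paper's exact color distance is abstracted)."""
    hl, hr = np.asarray(hist_local, float), np.asarray(hist_ref, float)
    return float(np.sqrt(hl * hr).sum())

def likelihood(voxels, center, s, hist_local, hist_ref, lam=LAMBDA):
    return lam * foreground_likelihood(voxels, center, s) + \
        (1 - lam) * color_likelihood(hist_local, hist_ref)

def update_reference(hist_ref, hist_est, alpha=ALPHA):
    """Adapt the reference color model with the current estimate."""
    return alpha * np.asarray(hist_ref, float) + \
        (1 - alpha) * np.asarray(hist_est, float)
```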
3.1.2 Particle propagation
The propagation model has been chosen to be a Gaussian noise added to the state of the particles after the re-sampling step: . The covariance matrix P corresponding to N is proportional to the maximum variation of the centroid of the target, and this information is obtained from the development part of the testing dataset. More sophisticated schemes employ previously learnt motion priors to drive the particles more efficiently . However, this would penalize the efficiency of the system when tracking unmodeled motion patterns and, since our algorithm is intended for tracking any motion, no dynamical model is adopted.
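The propagation step reduces to a Gaussian drift, as in this sketch (the covariance P would be set from the development data as described above):

```python
import numpy as np

def propagate(particles, P, seed=None):
    """Add zero-mean Gaussian noise with covariance P to each particle
    after resampling; no learned motion prior is used."""
    rng = np.random.default_rng(seed)
    noise = rng.multivariate_normal(np.zeros(particles.shape[1]), P,
                                    size=len(particles))
    return particles + noise
```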
3.1.3 Interaction model
where  is the parameter that drives the sensitivity of the exclusion zone.
3.2 SS tracking
In the presented PF tracking algorithm, likelihood evaluation can be computationally expensive, thus rendering this approach unsuitable for real-time systems. Moreover, data are usually noisy and may contain merged blobs corresponding to different targets. A new technique, SS, is proposed as an efficient and flexible alternative to PF.
3.2.1 Degree of mass and degree of surfaceness
where measures the "degree of surfaceness" of voxel . Within this context, functions ρ(·) and ρS(·) might be understood as pseudo-likelihood functions and Equations 16 and 15 as a sample-based representation of an estimation problem.
3.2.2 Difference with particle filters
There is an obvious similarity between this representation and the formulation of particle filters, but there is a significant difference. While particles in PF represent an instance of the whole body, our samples are points in the 3D space. Moreover, particle likelihoods are computed over all the data, while sample pseudo-likelihoods will be computed in a local domain.
where Ns is the number of sampling points. When using SS, we are no longer sampling the state space, since  cannot be considered an instance of the centroid of the target as happened with particles, , in PF. Hence, we will talk about samples instead of particles and we will refer to  as the sampling set. This set will approximate the surface of the kth target, , and will fulfill the sparsity condition .
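The centroid estimate produced from the sampling set can be sketched as a weighted average of the surface samples; for roughly symmetric targets the surface centroid approximates the volume centroid. The normalization by the weight sum is the usual sample-based expectation, assumed here to match the paper's estimator.

```python
import numpy as np

def centroid_estimate(samples, weights):
    """Centroid of the target as the weighted average of the surface
    samples; weights are the (unnormalized) pseudo-likelihoods."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return (np.asarray(samples, float) * w[:, None]).sum(axis=0)
```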
4 SS implementation
In order to define a method to recursively estimate  from the sampling set , a filtering strategy has to be set. Essentially, the proposal is to follow the PF analysis loop (re-sampling, propagation, evaluation, and estimation) with some appropriate modifications to ensure the convergence of the algorithm.
4.1 Pseudo-likelihood evaluation
Partial likelihoods will be computed on a local domain centered at the position . Let  be a neighborhood of radius r over a connectivity-q domain on the 3D orthogonal grid around a sample placed at a voxel position . Then, we define the occupancy and color neighborhoods around  as  and , respectively.
Ideally, when the sample is placed on a surface, half of its associated occupancy neighborhood will be occupied and the other half empty. The proposed expression attains its maximum when this condition is fulfilled.
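One plausible form of such an expression (an assumption, since the paper's exact formula is not reproduced here) peaks when the fill ratio of the occupancy neighborhood is one half:

```python
import numpy as np

def surfaceness(occupancy_neighborhood):
    """Occupancy pseudo-likelihood: 1 - |2f - 1| for fill ratio f,
    maximal (1.0) when exactly half the neighborhood is foreground."""
    f = np.asarray(occupancy_neighborhood, bool).mean()
    return 1.0 - abs(2.0 * f - 1.0)
```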
Since  contains only local color information relative to the global histogram , the distance D(·) is constructed to give a measure of the likelihood between this local colored region and . For every voxel in , it is decided whether it is similar to  by selecting the histogram value for the tested color and checking whether it is above a threshold γ or not. Finally, the ratio between the number of similarly colored voxels and the total number of voxels in the neighborhood gives the color similarity score. Since the reference histogram  is updated and changes over time, a variable threshold γ is computed so that 80% of the values of  are taken into account.
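This color score can be sketched as below. Interpreting "80% of the values are taken into account" as γ being the 20th percentile of the nonzero histogram bins is an assumption; voxel colors are represented as bin indices into the reference histogram.

```python
import numpy as np

def color_score(voxel_color_bins, ref_hist):
    """Fraction of neighborhood voxels whose color bin in the reference
    histogram reaches the adaptive threshold gamma."""
    h = np.asarray(ref_hist, float)
    nz = h[h > 0]
    gamma = np.percentile(nz, 20)  # ~80% of the values stay above gamma
    similar = h[np.asarray(voxel_color_bins)] >= gamma
    return float(similar.mean())
```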
One of the advantages of the SS algorithm is its computational efficiency. The cost of computing  is low, since only a local neighborhood around the sample is evaluated, in contrast with the computational load required to evaluate the likelihood of a particle in the PF algorithm. This point will be quantitatively addressed in Section 5.2.
The parameters defining the neighborhood were set to q = 26 and r = 2, yielding satisfactory results. Larger values of the radius r did not significantly improve the overall algorithm performance but increased its computational complexity.
4.2 Sample propagation and 3D discrete resampling
A sample placed near a surface will have an associated weight with a high value. It is a valid assumption to consider that some surrounding positions might also be part of this surface. Hence, placing a number of new particles in the vicinity of  would contribute to progressively exploring the surface of a voxel set. This idea leads to the spatial re-sampling and propagation scheme that will drive samples along time on the surface of the tracked target.
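The discrete re-sampling and propagation can be sketched in one step: draw positions with replacement proportionally to weight, then jitter each inside its discrete neighborhood. Combining both operations, and using a uniform integer jitter of the given radius, are simplifying assumptions.

```python
import numpy as np

def discrete_resample(samples, weights, n_out, radius=2, seed=None):
    """Weight-proportional resampling followed by a discrete jitter on
    the voxel grid, so new samples explore the surface around good ones."""
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, float)
    idx = rng.choice(len(samples), size=n_out, p=w / w.sum())
    jitter = rng.integers(-radius, radius + 1, size=(n_out, 3))
    return np.asarray(samples)[idx] + jitter
```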
4.2.1 Interaction model
The flexibility of a sample-based analysis may sometimes lead to situations where samples spread out too much from the computed centroid. In order to cope with this problem, an intra-target sample interaction model is devised. If a sample is placed farther from the centroid than a threshold δ, it will be removed (that is, its weight is set to zero). We set the threshold as δ = αs x , with s x = 30 cm; a factor α = 1.5 produced accurate results in our experiments.
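With the stated values (s_x = 30 cm, α = 1.5, hence δ = 45 cm), this pruning rule reduces to:

```python
import numpy as np

def prune_spread(samples, weights, centroid, s_x=30.0, alpha=1.5):
    """Zero the weight of any sample farther than delta = alpha * s_x
    (45 cm with the paper's values) from the current centroid."""
    delta = alpha * s_x
    d = np.linalg.norm(np.asarray(samples, float) - centroid, axis=1)
    w = np.asarray(weights, float).copy()
    w[d > delta] = 0.0
    return w
```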
5 Results and evaluation
Metrics proposed in  for multi-person tracking evaluation have been adopted, namely the Multiple Object Tracking Precision (MOTP), which shows tracker's ability to estimate precise object positions, and the Multiple Object Tracking Accuracy (MOTA), which expresses its performance at estimating the number of objects, and at keeping consistent trajectories. MOTP scores the average metric error when estimating multiple target 3D centroids, while MOTA evaluates the percentage of frames where targets have been missed, wrongly detected or mismatched.
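Given per-frame match distances and error counts, the two CLEAR scores follow their standard definitions: MOTP is the total matched-centroid error over the number of matches, and MOTA is one minus the ratio of misses, false positives, and mismatches to the number of ground-truth objects.

```python
def motp_mota(frames):
    """Compute MOTP/MOTA from per-frame counts.
    frames: list of dicts with keys 'dist_sum' (sum of matched centroid
    errors, mm), 'matches', 'misses', 'false_pos', 'mismatches', and
    'gt' (number of ground-truth objects)."""
    dist = sum(f['dist_sum'] for f in frames)
    matches = sum(f['matches'] for f in frames)
    errors = sum(f['misses'] + f['false_pos'] + f['mismatches']
                 for f in frames)
    gt = sum(f['gt'] for f in frames)
    motp = dist / matches if matches else float('inf')
    mota = 1.0 - errors / gt if gt else 1.0
    return motp, mota
```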
The aim of a tracking system would be to produce high values of MOTA and low values of MOTP thus indicating its ability to correctly track all targets and estimate their positions accurately. When comparing two algorithms, there will be a preference to choose the one outputting the highest MOTA score.
To demonstrate the effectiveness of the proposed multi-person tracking approaches, a set of experiments were conducted over the CLEAR 2007 database. The development part of the dataset was used to train the initiation/termination of tracks modules as described in Section 2.3 and the remaining test part was used for our experiments.
First, the multi-camera data are pre-processed, performing the foreground/background segmentation and the 3D voxel reconstruction algorithm. In order to analyze the dependency of the tracker's performance on the resolution of the 3D reconstruction, several voxel sizes (in cm) were employed. A colored version of these voxel reconstructions was also generated, according to the technique introduced in Section 2.1. These data were then fed to the proposed PF and SS approaches.
Number of samples/particles: There is a dependency between the MOTP score and the number of particles/samples, especially for the SS algorithm. The contribution of a new sample to the estimation of the centroid in SS has less impact than the addition of a new particle in PF, hence the slower decay of the MOTP curves for SS than for PF. Regarding the MOTA score, there is no significant dependency on Ns or Np. Two factors drive the MOTA of an algorithm: the track initiation/termination modules, which mainly contribute to the ratio of misses and false positives, and the filtering step, which has an impact on the mismatch ratio. The low dependency of MOTA on Ns or Np shows that most of the impact of the algorithm on this score is due to the particle/sample propagation and interaction strategies rather than the quantity of particles/samples itself. Moreover, the influence on the MOTA score is tightly correlated with the track initiation/termination policy. This assumption was experimentally validated by testing several classification methods (mixture of Gaussians, PCA, Parzen, and K-Means) in the initiation/termination modules, yielding a drop in the MOTA score proportional to their ability to correctly classify a blob as person/no-person.
Voxel size: Scenes reconstructed with a large voxel size do not capture all spatial details well and may miss some objects, thus decreasing the performance of the system (both in SS and PF). It can be observed that MOTP and MOTA scores improve as the voxel size decreases.
Color features: Color information improves the performance of SS and PF in both MOTP and MOTA scores. First, there is an improvement when using color information for a given voxel size, especially for the SS algorithm. Moreover, the smaller the voxel size, the more noticeable the difference between the experiments using raw and color features. This effect is supported by the fact that color characteristics are better captured when using small voxel sizes. The performance improvement when using color in the SS algorithm is more noticeable, since samples are placed in the regions with a high likelihood of being part of the target. For instance, this effect is more evident in cases where the subject is sitting and the particles concentrate in the upper body part, disregarding the part corresponding to the chair. In the SS algorithm, the MOTP score benefits from this efficient sample placement. The PF algorithm is constrained to evaluate the color likelihood in the ellipsoid defined in Equation 9, thus not being able to differentiate between parts of the blob that do not belong to the tracked target. Color information used within the filtering loop leads to a better distinguishability among blobs, thus reducing the mismatch ratio and slightly improving the MOTA score. Merging of adjacent blobs or complex crossings among targets are also correctly resolved. An example of the impact of color information is shown in Figure 10, where the usage of color avoids a mismatch between two targets. This effect is more noticeable when targets in the scene are dressed in different colors.
Results presented at the CLEAR 2007 evaluation  by several partners:
- Face detection + Kalman filtering 
- Appearance models + PF 
- Upper body detection + PF 
- Zenithal camera analysis + PF 
- Voxel analysis + Heuristic tracker 
- Voxel analysis + PF (best case)
- Voxel analysis + SS (best case)
In order to visually show the performance of the SS algorithm, some videos corresponding to the most challenging tracking scenarios have been made available at http://www.cristiancanton.org.
5.2 Computational performance
Comparing the metrics obtained by different algorithms gives an idea of their performance in a scenario where computational complexity is not taken into account. An analysis of the operation time of several algorithms under the same conditions, together with the produced MOTP/MOTA metrics, provides a more informative and fairer comparison tool. Although there is no standard procedure to measure the computational performance of a tracking process, we devised a method to assess the computational efficiency of our algorithms and present a comparative study.
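One convention consistent with the discussion that follows (higher RTF meaning faster operation, RTF dropping as the data volume grows) is the ratio of sequence duration to processing time; the paper's exact definition of the real-time factor is assumed here, not quoted.

```python
def real_time_factor(processing_seconds, sequence_seconds):
    """Real-time factor as sequence duration over processing time:
    RTF >= 1 means the tracker runs at least at real time."""
    return sequence_seconds / processing_seconds
```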
The first noticeable characteristic of these charts is that, due to the computational complexity of each algorithm, when comparing the SS and PF algorithms under the same operating conditions, the RTF associated with SS is always higher than that associated with PF. Similarly, the computational load is higher when analyzing colored rather than raw inputs. All the plotted curves attain lower RTF values as the size of the voxel decreases, since the amount of data to process increases (note the different RTF scale ranges for each voxel size in Figure 11). Regarding the MOTP/MOTA metrics, there is a common tendency toward a decrease in MOTP and an increase in MOTA as the RTF decreases. The separation between the SS and PF curves grows as the voxel size decreases, since the PF algorithm has to evaluate a larger amount of data.
The observation of these results yields the conclusion that the SS algorithm is able to produce similar and, in some cases, better results than the PF algorithm at a lower computational cost. For example, using a voxel size of  cm, a MOTP score of around 165 mm can be obtained using SS with an RTF ten times larger than when using PF, and similarly for the MOTA score.
In this article, we have presented a number of contributions to the multi-person tracking task in a multi-camera environment. A block representation of the whole tracking process allowed us to identify the performance bottlenecks of the system and to address efficient solutions for each of them. Real-time performance of the system was a major goal; hence, efficient tracking algorithms have been produced, together with an analysis of their performance.
The performance of these systems has thoroughly been tested over the CLEAR database and quantitatively compared through two scores: MOTP and MOTA. A number of experiments have been conducted toward exploring the influence of the resolution of the 3D reconstruction and the color information. Results have been compared with other state-of-the-art algorithms evaluated with the same metrics using the same testing data.
The relevance of the initiation and termination of filters has been demonstrated, since these modules have a major impact on the MOTA score. However, most articles in the literature do not specifically address the operation of these modules. We proposed a statistical classifier based on classification trees as a way to discriminate blobs between the person/no-person classes. Training of this classifier was done using data available in the development part of the employed database, and a number of features (namely weight, height, top in the z-axis, bounding box size) were extracted and provided as input to the classifier. Another criterion, such as proximity to already existing tracks, was employed to create or destroy a track. Performance scores in Table 2 for the PF and SS systems present the lowest values for the false positive (FP) and missed target (Miss) ratios, hence supporting the relevance of the initiation and termination of tracks modules.
Two proposals for the filtering step of the tracking system have been presented: PF and SS. An independent tracker was assigned to every target, and an interaction model was defined. The PF technique proved to be robust and led to state-of-the-art results, but its computational load was unaffordable for small voxel sizes. As an alternative, the SS algorithm has been presented, achieving similar and, on some occasions, better performance than PF at a smaller computational cost. Its sample-based estimation of the centroid allowed a better adaptation to noisy data and distinguishability among merged blobs. In both PF and SS, color information provided a useful cue to increase the robustness of the system against track mismatches, thus increasing the MOTA score. In SS, color information also allowed a better placement of the samples, distinguishing between parts belonging to the tracked object and parts of a merging with a spurious object, leading to a better MOTP score.
Future research on this topic involves multi-modal fusion with audio data toward improving the precision of the tracker (MOTP) and avoiding mismatches among targets, thus improving the MOTA score.
(a) Analogously to the pixel definition (picture element) as the minimum information unit in a discrete image, the voxel (volume element) is defined as the minimum information unit in a 3D discrete representation of a volume.
(b) For the sake of simplicity in the notation, pseudo-likelihood functions will be denoted as p(·) instead of defining a specific notation for them.
(c) When selecting the best system, the MOTA score is regarded as the most significant value.
The authors declare that they have no competing interests.
- Park S, Trivedi MM: Understanding human interactions with track and body synergies captured from multiple views. Comput Vis Image Understand 2008, 111(1):2-20. doi:10.1016/j.cviu.2007.10.005
- Project CHIL--Computers in the Human Interaction Loop, 2004. [http://chil.server.de]
- Haritaoglu I, Harwood D, Davis LS: W4: real-time surveillance of people and their activities. IEEE Trans Pattern Anal Mach Intell 2000, 22(8):809-830. doi:10.1109/34.868683
- Bernardin K, Elbs A, Stiefelhagen R: Multiple object tracking performance metrics and evaluation in a smart room environment. Proceedings of IEEE International Workshop on Visual Surveillance 2006.
- Canton-Ferrer C, Salvador J, Casas JR: Multi-person tracking strategies based on voxel analysis. In Proceedings of Classification of Events, Activities and Relationships Evaluation and Workshop. Volume 4625. Lecture Notes in Computer Science; 2007:91-103.
- Khan Z, Balch T, Dellaert F: Efficient particle filter-based tracking of multiple interacting targets using an MRF-based motion model. Proceedings of International Conference on Intelligent Robots and Systems 2003, 1(1):254-259.
- Lanz O, Chippendale P, Brunelli R: An appearance-based particle filter for visual tracking in smart rooms. In Proceedings of Classification of Events, Activities and Relationships Evaluation and Workshop. Volume 4625. Lecture Notes in Computer Science; 2007:57-69.
- Yilmaz A, Javed O, Shah M: Object tracking: a survey. ACM Comput Surv 2006, 38(4):1-45.
- Canton-Ferrer C, Casas JR, Pardàs M: Towards a Bayesian approach to robust finding correspondences in multiple view geometry environments. In Proceedings of 4th International Workshop on Computer Graphics and Geometric Modelling. Volume 3515. Lecture Notes in Computer Science; 2005:281-289.
- Lanz O: Approximate Bayesian multibody tracking. IEEE Trans Pattern Anal Mach Intell 2006, 28(9):1436-1449.
- Cheung GKM, Kanade T, Bouguet JY, Holler M: A real time system for robust 3D voxel reconstruction of human motions. IEEE Conference on Computer Vision and Pattern Recognition 2000, 2:714-720.
- Isidoro J, Sclaroff S: Stochastic refinement of the visual hull to satisfy photometric and silhouette consistency constraints. Proceedings of IEEE International Conference on Computer Vision 2003, 2:1335-1342.
- Mikič I, Santini S, Jain R: Tracking objects in 3D using multiple camera views. Proceedings of Asian Conference on Computer Vision 2000.
- Focken D, Stiefelhagen R: Towards vision-based 3-D people tracking in a smart room. Proceedings of IEEE International Conference on Multimodal Interfaces 2002, 400-405.
- Katsarakis N, Talantzis F, Pnevmatikakis A, Polymenakos L: The AIT 3D audiovisual person tracker for CLEAR 2007. In Proceedings of Classification of Events, Activities and Relationships Evaluation and Workshop. Volume 4625. Lecture Notes in Computer Science; 2007:35-46.
- Arulampalam MS, Maskell S, Gordon N, Clapp T: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans Signal Process 2002, 50(2):174-188. doi:10.1109/78.978374
- Lien K, Huang C: Multiview-based cooperative tracking of multiple human objects. EURASIP J Image Video Process 2008, 8(2):1-13.
- Osawa T, Wu X, Sudo K, Wakabayashi K, Arai H: MCMC based multi-body tracking using full 3D model of both target and environment. Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance 2007, 224-229.
- Black J, Ellis T, Rosin P: Multi view image surveillance and tracking. Proceedings of Workshop on Motion and Video Computing 2002, 169-174.
- López A, Canton-Ferrer C, Casas JR: Multi-person 3D tracking with particle filters on voxels. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 2007, 1:913-916.
- Canton-Ferrer C, Sblendido R, Casas JR, Pardàs M: Particle filtering and sparse sampling for multi-person 3D tracking. Proceedings of IEEE International Conference on Image Processing 2008, 2644-2647.
- CLEAR--Classification of Events, Activities and Relationships Evaluation and Workshop, 2007. [http://www.clear-evaluation.org]
- Hall DL, McMullen SAH: Mathematical Techniques in Multisensor Data Fusion. Artech House; 2004.
- Kutulakos KN, Seitz SM: A theory of shape by space carving. Int J Comput Vis 2000, 38(3):199-218. doi:10.1023/A:1008191222954
- Faugeras O, Keriven R: Variational principles, surface evolution, PDEs, level set methods and the stereo problem. Proceedings of IEEE EMBS International Summer School on Biomedical Imaging 2002.
- Casas JR, Salvador J: Image-based multi-view scene analysis using conexels. Proceedings of HCSNet Workshop on Use of Vision in Human-Computer Interaction 2006, 19-28.
- Kolmogorov V, Zabih R: What energy functions can be minimized via graph cuts? IEEE Trans Pattern Anal Mach Intell 2004, 26(2):147-159. doi:10.1109/TPAMI.2004.1262177
- Stauffer C, Grimson W: Adaptive background mixture models for real-time tracking. Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition 1999, 252-259.
- Maggio E, Piccardo E, Regazzoni C, Cavallaro A: Particle PHD filtering for multi-target visual tracking. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 2007, 1:1101-1104.
- Talantzis F, Pnevmatikakis A, Constantinides AG: Audio-visual active speaker tracking in cluttered indoor environments. IEEE Trans Syst Man Cybern B 2008, 38(3):799-807.
- Bernardin K, Gehrig T, Stiefelhagen R: Multi-level particle filter fusion of features and cues for audio-visual person tracking. In Proceedings of Classification of Events, Activities and Relationships Evaluation and Workshop. Volume 4625. Lecture Notes in Computer Science; 2007:70-81.
- Tukey JW: Exploratory Data Analysis. Addison-Wesley; 1977.
- Breiman L, Friedman JH, Olshen RA, Stone CJ: Classification and Regression Trees. Chapman and Hall; 1993.
- Duda RO, Hart PE, Stork DG: Pattern Classification. Wiley-Interscience; 2000.
- Crisco JJ, McGovern RD: Efficient calculation of mass moments of inertia for segmented homogeneous three-dimensional objects. J Biomech 1998, 31(1):97-101.
- Leu JG: Computing a shape's moments from its boundary. Pattern Recogn 1991, 24(10):116-122.
- Yang L, Albregtsen F: Fast and exact computation of Cartesian geometric moments using discrete Green's theorem. Pattern Recogn 1996, 29(7):1061-1073. doi:10.1016/0031-3203(95)00147-6
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.