Automated target tracking and recognition using coupled view and identity manifolds for shape representation
EURASIP Journal on Advances in Signal Processing volume 2011, Article number: 124 (2011)
Abstract
We propose a new couplet of identity and view manifolds for multiview shape modeling that is applied to automated target tracking and recognition (ATR). The identity manifold captures both interclass and intraclass variability of target shapes, while a hemispherical view manifold is involved to account for the variability of viewpoints. Combining these two manifolds via a nonlinear tensor decomposition gives rise to a new target generative model that can be learned from a small training set. Not only can this model deal with arbitrary view/pose variations by traveling along the view manifold, it can also interpolate the shape of an unknown target along the identity manifold. The proposed model is tested against the recently released SENSIAC ATR database and the experimental results validate its efficacy both qualitatively and quantitatively.
1 Introduction
Automated target tracking and recognition (ATR) is an important capability in many military and civilian applications. In this work, we mainly focus on tracking and recognition techniques for infrared (IR) imagery, which is a preferred imaging modality for most military applications. A major challenge in vision-based ATR is how to cope with the variations of target appearances due to different viewpoints and underlying 3D structures. Both factors, identity in particular, are usually represented by discrete variables in existing practical ATR algorithms [1–3]. In this paper we account for both factors in a continuous manner by using view and identity manifolds. Coupling the two manifolds for target representation facilitates the ATR process by allowing us to meaningfully synthesize new target appearances to deal with previously unknown targets, as well as both known and unknown targets under previously unseen viewpoints.
Common IR target representations are nonparametric in nature, including templates [1], histograms [4], and edge features [5]. In [5] the target is represented by intensity and shape features and a self-organizing map is used for classification. Histogram-based representations were shown to be simple yet robust under difficult tracking conditions [4, 6], but such representations cannot effectively discriminate among different target types due to the lack of higher-order structure. In [7], the shape variability due to different structures and poses is characterized explicitly using a deformable and parametric model that must be optimized for localization and recognition. This method requires high-resolution images where salient edges of a target can be detected, and may not be appropriate for ATR in practical IR imagery. On the other hand, some ATR approaches [8, 1, 9] depend on the use of multiview exemplar templates to train a classifier. Such methods normally require a dense set of training views for successful ATR tasks and they are often limited in dealing with unknown targets.
In this work, we propose a new couplet of identity and view manifolds for multiview shape modeling. As shown in Figure 1, the 1D identity manifold captures both interclass and intraclass shape variability. The 2D hemispherical view manifold is used to deal with view variations for ground vehicles. We use a nonlinear tensor decomposition technique to integrate these two manifolds into a compact generative model. Because the two variables, view and identity, are continuous in nature and defined along their respective manifolds, the ATR inference can be efficiently implemented by means of a particle filter where tracking and recognition are accomplished jointly in a seamless fashion. We evaluate this new target model against the ATR database recently released by the Military Sensing Information Analysis Center (SENSIAC) [10], which contains a rich set of IR imagery depicting various military and civilian vehicles. To examine the efficacy of the proposed target model, we develop four ATR algorithms based on different ways of handling the view and identity factors. The experimental results demonstrate the advantages of coupling the view and identity manifolds for shape interpolation, both qualitatively and quantitatively.
The remainder of this paper is organized as follows. In Section 2, we review some related work in the area of 3D object representation. In Section 3, we present our generative model where the identity and view manifolds are discussed in detail. In Section 4, we discuss the implementation of the particle filter based inference algorithm that incorporates the proposed target model for ATR tasks. In Section 5, we report experimental results of target tracking and recognition on both IR sequences from the SENSIAC dataset and some visibleband video sequences, and we also discuss the limitations and possible extensions of the proposed generative model. Finally, we present our conclusions in Section 6.
2 Related Work
This section begins with a review of different ways to represent a 3D object and the reasons for our choice of a multiview silhouette-based method. Then we focus on several existing shape representation methods by examining their ability to parameterize shape variations, their ability to interpolate, and the ease of parameter estimation.
There are two commonly used approaches to represent 3D rigid objects. The first approach suggests a set of representative 2D snapshots [11, 12] captured from multiple viewpoints. These snapshots may be represented in the form of simple shape silhouettes, contours, or complex features such as SIFT, HOG, or image patches. The second approach involves an explicit 3D object model [13] where common representations vary from simple polyhedrons to complex 3D meshes. In the first case, unknown views can be interpolated from the given sample set, whereas in the second case, the 3D model is used to match the observed view via 3Dto2D projection. Accordingly, most object recognition methods can be categorized into one of two groups: those involving 2D multiview images [14–19] and those supported by explicit 3D models [20–23]. There are also hybrid methods [24] that make use of both the 3D shape and 2D appearances/features.
In this work, we choose to represent a target by its representative 2D views for two main reasons. First, this is theoretically supported by the psychophysical evidence presented in [25], which suggests that the human visual system is better described as recognizing objects by 2D view interpolation than by alignment or other methods that rely on object-centered 3D models. Second, it could be practically cumbersome to store and reference a large collection of detailed 3D models of different target types in a practical ATR system. Moreover, it is worth noting that many robust features (HOG, SIFT) used to represent objects were developed mainly for visible-band images, and their use is limited by factors such as image quality and resolution. In IR imagery, the targets are often small and frequently lack sufficient resolution to support robust features. Finally, the IR sensors in the SENSIAC database are static, facilitating target segmentation by background subtraction. Thus the ability to efficiently extract target silhouettes and the simplicity of silhouette-based shape representation motivate us to use the silhouette for multiview target representation.
There are two related issues for shape representation. One is how to effectively represent the shape variation, and the other is how to infer the underlying shape variables, i.e., view and identity. As pointed out in [26], feature vectors obtained from common shape descriptors, such as shape contexts [27] and moment descriptors [28], are usually assumed to lie in a Euclidean space to facilitate shape modeling and recognition. However, in many cases the underlying shape space may be better described by a nonlinear low-dimensional (LD) manifold that can be learned by nonlinear dimensionality reduction (DR) techniques, where the learned manifold structures are often either target-dependent or view-dependent [29]. Another trend is to explore a shape space where every point represents a plausible shape and a curve between two points in this space represents a deformation path between two shapes. Though this method was shown successful in applications such as action recognition [26] and shape clustering [30], it is difficult to explicitly separate the identity and view factors during shape deformation, as is necessary in the context of ATR applications.
This brings us to the point of learning the LD embedding of the latent factors, e.g., view and identity, from the high-dimensional (HD) data, e.g., silhouettes. In an early work [31], PCA was used to find two separate eigenspaces for visual learning of 3D objects, one for the identity and one for the pose. The bilinear models [32] and tensor analysis [33] provide a more systematic multifactor representation by decomposing HD data into several independent factors. In [34], the view variable is related to the appearance through shape submanifolds which have to be learned for each object class. All of these methods are limited to a discrete identity variable where each object is associated with a separate view manifold. Our work draws inspiration from [35], where a nonlinear tensor decomposition method is used to learn an identity-independent view manifold for multiview dynamic motion data. A torus manifold was also proposed in [36, 37] for the same purpose, which is a product of two circular-shaped manifolds, i.e., the view and pose manifolds. In [36, 37, 35], the style factor of body shape (i.e., the identity) is a continuous variable defined in a linear space.
Our work presented in this paper is distinct from that in [36, 37, 35] in terms of two main original contributions. The first is our couplet of view and identity manifolds for multiview shape modeling: unlike [36, 37, 35], where the identity is treated linearly, for the first time we propose a 1D identity manifold to support a continuous nonlinear identity variable. Also, the view and pose manifolds in [36, 37, 35] have well-defined topologies due to their sequential nature. However, in our IR ATR application the topology of the identity manifold is not clear, owing to a lack of understanding of the intrinsic LD structure spanning a diverse set of targets. Finding an appropriate ordering relationship among a set of targets is the key to learning a valid identity manifold for effective shape interpolation. To better support ATR tasks, the view manifold used here involves both the azimuth and elevation angles, compared with the case of a single variable in [36, 37, 35]. The second contribution is the development of a particle filter-based ATR approach that integrates the proposed model for shape interpolation and matching. This new approach supports joint tracking and recognition for both known and unknown targets and achieves superior results compared with traditional template-based methods in both IR and visible-band image sequences.
3 Target Generative Models
Our generative model is learned using silhouettes from a set of targets of different classes observed from multiple viewpoints. The learning process identifies a mapping from the HD data space to two LD manifolds corresponding to the shape variations represented in terms of view and identity. In the following, we first discuss the identity and view manifolds. Then we present a nonlinear tensor decomposition method that integrates the two manifolds into a generative model for multiview shape modeling, as shown in Figure 2.
3.1 Identity manifold
The identity manifold that plays a central role in our work is intended to capture both interclass and intraclass shape variability among training targets. In particular, the continuous nature of the proposed identity manifold makes it possible to interpolate valid target shapes between known targets in the training data. There are two important questions to be addressed in order to learn an identity manifold with the desired interpolation capability. The first one is which space this identity manifold should span. In other words, should it be learned from the HD silhouette space or a LD latent space? We expect traversal along the identity manifold to result in gradual shape transition and valid shape interpolation between known targets. This would ideally require the identity manifold to span a space that is devoid of all other factors that contribute to the shape variation. Therefore the identity manifold should be learned in a LD latent space with only the identity factor rather than in the HD data space where the view and identity factors are coupled together. The second important question is how to learn a semantically valid identity manifold that supports meaningful shape interpolation for an unknown target. In other words, what kind of constraint should be imposed on the identity manifold to ensure that interpolated shapes correspond to feasible realworld targets? We defer further discussion of the first issue to Section 3.3 and focus here on the second one that involves the determination of an appropriate topology for the identity manifold.
The topology determines the span of a manifold with respect to its connectivity and dimensionality. In this work, we suggest a 1D closed-loop structure to represent the identity manifold, and there are several important considerations to support this seemingly arbitrary but actually practical choice. First, the learning of a higher-dimensional manifold requires a large set of training samples that may not be available for a specific ATR application, where only a relatively small candidate pool of possible targets of interest is available. Second, this identity manifold is assumed to be closed rather than open, because all targets in our ATR problem are man-made ground vehicles that share some degree of similarity, with extreme disparity unlikely. Third, the 1D closed structure greatly facilitates the inference process for online ATR tasks. As a result, the manifold topology is reduced to a specific ordering relationship of training targets along the 1D closed identity manifold. Ideally, we want targets of the same class, or those with similar shapes, to lie closer together on the identity manifold than dissimilar ones. Thus we introduce a class-constrained shortest-closed-path method to find a unique ordering relationship for the training targets. This method requires a view-independent distance or dissimilarity measure between two targets. For example, we could use the shape dissimilarity between two 3D target models, approximated by the accumulated mean square errors of multiview silhouettes.
Assume we have a set of training silhouettes from N target types, belonging to one of Q classes, imaged under M different views. Let $\mathbf{y}_m^k$ denote the vectorized silhouette of target k under view m (after the distance transform [29]) and let $L_k \in [1, Q]$ denote its class label (Q is the number of target classes and each class has multiple target types). Also assume that we have identified a LD identity latent space where the k-th target is represented by the vector $\mathbf{i}^k$, $k \in \{1, \ldots, N\}$ (N is the total number of target types). Let the topology of the manifold spanning the space of $\{\mathbf{i}^k \mid k = 1, \ldots, N\}$ be denoted by $T = [t_1\ t_2\ \cdots\ t_{N+1}]$, where $t_i \in [1, N]$ and $t_i \neq t_j$ for $i \neq j$, with the exception of $t_1 = t_{N+1}$ to enforce a closed-loop structure. Then the class-constrained shortest-closed-path criterion can be written as

$$T^{*} = \arg\min_{T} \sum_{i=1}^{N} D\left(\mathbf{i}^{t_i}, \mathbf{i}^{t_{i+1}}\right), \qquad (1)$$

where $D(\mathbf{i}^u, \mathbf{i}^v)$ is defined as

$$D\left(\mathbf{i}^u, \mathbf{i}^v\right) = \frac{1}{M}\sum_{m=1}^{M} \left\Vert \mathbf{y}_m^u - \mathbf{y}_m^v \right\Vert^2 + \beta\left(1 - \delta(L_u, L_v)\right), \qquad (2)$$

where $\Vert\cdot\Vert$ denotes the Euclidean norm, $\delta(\cdot,\cdot)$ is the Kronecker delta, and β is a constant. The first term in (2) is a view-independent shape similarity measure between targets u and v, as it is averaged over all training views. The second term is a penalty that encourages targets belonging to the same class to be grouped together. The manifold topology $T^{*}$ defined in (1) tends to group targets of similar 3D shapes and/or the same class together, enforcing the best local semantic smoothness along the identity manifold, which is essential for valid shape interpolation between target types.
It is worth mentioning that the identity manifold to be learned according to $T^{*}$ will encompass multiple target classes, each of which has several subclasses. For example, we consider six classes of vehicles in this work, each of which includes six subclass types. Although it is easy to understand the feasibility and necessity of shape interpolation within a class to accommodate intraclass variability, the validity of shape interpolation between two different classes may seem less clear. Actually, $T^{*}$ not only defines the ordering relationship within each class but also the neighboring relationship between two different classes. For example, the six classes considered in this paper are ordered as: Armored Personnel Carriers (APCs) → Tanks → Pickup Trucks → Sedan Cars → Minivans → SUVs → APCs. Although APCs may not look like Tanks or SUVs in general, APCs are indeed located between Tanks and SUVs along the identity manifold according to $T^{*}$. This occurs because (1) finds an APC-Tank pair and an APC-SUV pair that have the least shape dissimilarity compared with all other pairs. Thus this ordering still supports sensible interclass shape interpolation, although it may not be as smooth as intraclass interpolation, as will be shown later in the experiments.
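The ordering step above can be sketched in code. The following is a minimal illustration, not the paper's implementation: `pairwise_dissimilarity` follows the structure of (2) (mean squared silhouette distance over all views plus a class penalty β), and the closed tour is found by exhaustive search, which is feasible only for the small candidate pools assumed throughout; all function names and the synthetic data layout are our own.

```python
import numpy as np
from itertools import permutations

def pairwise_dissimilarity(silhouettes, labels, beta=10.0):
    """D(u, v): mean squared silhouette distance averaged over the M
    training views, plus a penalty beta when the class labels differ."""
    N = len(silhouettes)            # silhouettes[k]: (M, d) array for target k
    D = np.zeros((N, N))
    for u in range(N):
        for v in range(N):
            shape_term = np.mean((silhouettes[u] - silhouettes[v]) ** 2)
            class_term = beta * (labels[u] != labels[v])
            D[u, v] = shape_term + class_term
    return D

def shortest_closed_path(D):
    """Brute-force search for the closed tour of minimal total dissimilarity.
    Only feasible for a small candidate pool (N <= ~10), which matches the
    small training sets targeted by the model."""
    N = D.shape[0]
    best_cost, best_tour = np.inf, None
    # Fix target 0 as the start to remove rotational duplicates of the loop.
    for perm in permutations(range(1, N)):
        tour = (0,) + perm + (0,)   # closed loop: t_1 = t_{N+1}
        cost = sum(D[tour[i], tour[i + 1]] for i in range(N))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_tour, best_cost
```

With a large β, same-class targets end up adjacent on the tour, which is exactly the grouping behavior the penalty term of (2) is meant to induce.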
3.2 Conceptual view manifold
We need a view manifold to accommodate the view-induced shape variability for different targets. A common approach is to use nonlinear DR techniques, such as LLE or Laplacian eigenmaps, to find the LD view manifold for each target type [29]. One main drawback of using identity-dependent view manifolds is that they may lie in different latent spaces and have to be aligned together in a common latent space for general multiview modeling. Therefore, the view manifold here is designed to be a hemisphere that embraces almost all possible viewing angles around a ground vehicle, as shown in Figure 1, and is characterized by two parameters: the azimuth and elevation angles Θ = {θ, ϕ}. This conceptual manifold provides a unified and intuitive representation of the view space and supports efficient dynamic view estimation.
3.3 Nonlinear Tensor Decomposition
We extend the nonlinear tensor decomposition in [35] to develop the proposed generative model. The key is to find a view-independent space for learning the identity manifold through the commonly shared conceptual view manifold (addressing the first question raised in Section 3.1).
Let $\mathbf{y}_m^k \in \mathbb{R}^d$ be the d-dimensional, vectorized, distance-transformed silhouette observation of target k under view m, and let $\mathbf{\Theta}_m = [\theta_m, \phi_m]$, $0 \le \theta_m \le 2\pi$, $0 \le \phi_m \le \pi$, denote the point corresponding to view m on the LD view manifold. For each target type k, we can learn a nonlinear mapping between $\mathbf{y}_m^k$ and the point $\mathbf{\Theta}_m$ using the generalized radial basis function (GRBF) kernel as

$$\mathbf{y}_m^k = f^k(\mathbf{\Theta}_m), \qquad (3)$$

$$f^k(\mathbf{\Theta}_m) = \sum_{l=1}^{N_c} \mathbf{w}_l^k\, \kappa\left(\Vert \mathbf{\Theta}_m - \mathbf{S}_l \Vert\right) + \mathbf{b}^k \left[1\ \mathbf{\Theta}_m\right]^T, \qquad (4)$$

where κ(·) represents the Gaussian kernel, $\{\mathbf{S}_l \mid l = 1, \ldots, N_c\}$ are $N_c$ kernel centers that are usually chosen to coincide with the training views on the view manifold, $\mathbf{w}_l^k$ are the target-specific weights of each kernel, and $\mathbf{b}^k$ holds the coefficients of the linear polynomial term $[1\ \mathbf{\Theta}_m]$ included for regularization. This mapping can be written in matrix form as

$$\mathbf{y}_m^k = B^k\, \psi(\mathbf{\Theta}_m), \qquad (5)$$

where $B^k$ is a $d \times (N_c + 3)$ target-dependent linear mapping composed of the weight terms $\mathbf{w}_l^k$ in (4), and

$$\psi(\mathbf{\Theta}_m) = \left[\kappa\left(\Vert \mathbf{\Theta}_m - \mathbf{S}_1 \Vert\right), \ \cdots, \ \kappa\left(\Vert \mathbf{\Theta}_m - \mathbf{S}_{N_c} \Vert\right), \ 1, \ \mathbf{\Theta}_m\right]^T$$

is a target-independent nonlinear kernel mapping. Since $\psi(\mathbf{\Theta}_m)$ depends only on the view angle, we reason that the identity-related information is contained within the term $B^k$. Given N training targets, we obtain their corresponding mapping functions $B^k$ for $k = 1, \ldots, N$ and stack them together to form a tensor $C = [B^1\ B^2\ \cdots\ B^N]$ that contains the information regarding the identity. We can use the high-order singular value decomposition (HOSVD) [38] to determine the basis vectors of the identity space corresponding to the data tensor C. The application of HOSVD to C results in the following decomposition:

$$C = A \times_3 \left[\mathbf{i}^1\ \mathbf{i}^2\ \cdots\ \mathbf{i}^N\right]^T, \qquad (6)$$
where $\{\mathbf{i}^k \in \mathbb{R}^N \mid k = 1, \ldots, N\}$ are the identity basis vectors, A is the core tensor of dimensionality $d \times (N_c + 3) \times N$ that captures the coupling effect between the identity and view factors, and $\times_j$ denotes the mode-j tensor product. Using this decomposition, the training silhouette corresponding to the k-th target under each training view can be reconstructed according to

$$\mathbf{y}_m^k = A \times_2 \psi(\mathbf{\Theta}_m) \times_3 \mathbf{i}^k. \qquad (7)$$
This equation supports shape interpolation along the view manifold, which is possible due to the interpolation-friendly nature of RBF kernels and the well-defined structure of the view manifold. However, it cannot be said with certainty that an arbitrary vector $\mathbf{i} \in \mathrm{span}(\mathbf{i}^1, \ldots, \mathbf{i}^N)$ will result in a valid shape interpolation, due to the sparse nature of the training set in terms of identity variation.
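The learning pipeline just described can be sketched with a few lines of NumPy. This is a minimal sketch under our own naming and data-layout assumptions, not the paper's code: `psi` stacks the Gaussian RBF responses with the [1 Θ] polynomial part, each $B^k$ is fitted by least squares against the training views, and the identity basis is obtained from an SVD of the mode-3 unfolding of the stacked tensor (the slice of a full HOSVD relevant here).

```python
import numpy as np

def psi(theta, centers, sigma=1.0):
    """Kernel feature map psi(Theta): Gaussian RBF responses at the
    view-manifold centers plus the linear polynomial part [1, theta, phi]."""
    k = np.exp(-np.sum((centers - theta) ** 2, axis=1) / (2 * sigma ** 2))
    return np.concatenate([k, [1.0], theta])      # length N_c + 3

def learn_mappings(Y, views, centers, sigma=1.0):
    """Least-squares fit of the per-target linear map B^k so that
    y_m^k = B^k psi(Theta_m).  Y: (N, M, d), views: (M, 2)."""
    Psi = np.stack([psi(v, centers, sigma) for v in views])   # (M, N_c+3)
    # Solve Psi @ (B^k)^T = Y^k for each target k.
    return np.stack([np.linalg.lstsq(Psi, Yk, rcond=None)[0].T for Yk in Y])

def identity_basis(B):
    """Factorize the stacked tensor C = [B^1 ... B^N] along the identity
    mode: unfold and take the SVD, as HOSVD does per mode."""
    N = B.shape[0]
    C3 = B.reshape(N, -1)          # mode-3 unfolding: (N, d*(N_c+3))
    U, S, Vt = np.linalg.svd(C3, full_matrices=False)
    A = U.T @ C3                   # core tensor in unfolded form
    return U, A                    # rows of U: identity vectors i^k
```

With the kernel centers placed on the training views, the fitted maps reproduce the training silhouettes exactly, and `U @ A` recovers the unfolded tensor, mirroring (5)-(7).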
To support meaningful shape interpolation, we constrain the identity space to a 1D structure that includes only those points on a closed B-spline curve connecting the identity basis vectors $\{\mathbf{i}^k \mid k = 1, \ldots, N\}$ according to the manifold topology defined in (1). We refer to this 1D structure as the identity manifold, denoted by $\mathcal{M} \subset \mathbb{R}^N$. Then an arbitrary identity vector $\mathbf{i} \in \mathcal{M}$ would be semantically meaningful due to its proximity to the basis vectors, and should support a valid shape interpolation. Although the identity manifold $\mathcal{M}$ has an intrinsic 1D closed-loop structure, it is still defined in the tensor space $\mathbb{R}^N$. To facilitate the inference process, we introduce an intermediate representation, i.e., a unit circle, as an equivalent of $\mathcal{M}$ parameterized by a single variable. First, we map the identity basis vectors $\{\mathbf{i}^k \mid k = 1, \ldots, N\}$ onto a set of angles uniformly distributed along the unit circle, $\{\alpha_k = (k - 1) \cdot 2\pi/N \mid k = 1, \ldots, N\}$.
Then, as shown in Figure 3, for any α′ ∈ [0, 2π) that lies between $\alpha_j$ and $\alpha_{j+1}$ along the unit circle, we can obtain its corresponding identity vector $\mathbf{i}(\alpha') \in \mathcal{M}$ from the two closest basis vectors $\mathbf{i}^j$ and $\mathbf{i}^{j+1}$ via spline interpolation along $\mathcal{M}$, while maintaining the distance ratio defined below:

$$\frac{\mathcal{D}\left(\mathbf{i}^j, \mathbf{i}(\alpha') \mid \mathcal{M}\right)}{\mathcal{D}\left(\mathbf{i}^j, \mathbf{i}^{j+1} \mid \mathcal{M}\right)} = \frac{\alpha' - \alpha_j}{\alpha_{j+1} - \alpha_j}, \qquad (8)$$
where $\mathcal{D}(\cdot, \cdot \mid \mathcal{M})$ is a distance function defined along $\mathcal{M}$. Now (7) can be generalized for shape interpolation as

$$\mathbf{y}(\alpha, \mathbf{\Theta}) = A \times_2 \psi(\mathbf{\Theta}) \times_3 \mathbf{i}(\alpha), \qquad (9)$$
where α ∈ [0, 2π) is the identity variable and $\mathbf{i}(\alpha) \in \mathcal{M}$ is its corresponding identity vector along the identity manifold in $\mathbb{R}^N$. Thus (9) defines a generative model for multiview shape modeling that is controlled by two continuous variables, α and Θ, each defined along its own manifold.
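The angular parameterization and the generative synthesis can be made concrete with a small sketch. This is our own illustration under simplifying assumptions: piecewise-linear interpolation between consecutive basis vectors stands in for the closed B-spline of the paper, and the core tensor is handled in its mode-3 unfolded form; all names are hypothetical.

```python
import numpy as np

def psi(theta, centers, sigma=1.0):
    """Kernel map psi(Theta): Gaussian RBF responses at the view-manifold
    centers, followed by the linear polynomial part [1, theta, phi]."""
    k = np.exp(-np.sum((centers - theta) ** 2, axis=1) / (2 * sigma ** 2))
    return np.concatenate([k, [1.0], theta])

def identity_vector(alpha, basis):
    """Map alpha in [0, 2*pi) to a point on the closed identity manifold.
    Piecewise-linear interpolation between consecutive basis vectors stands
    in for the closed B-spline; the loop wraps via the modulo index."""
    N = len(basis)
    step = 2 * np.pi / N
    j = int(alpha // step) % N
    t = (alpha - j * step) / step           # distance ratio, as in (8)
    return (1 - t) * basis[j] + t * basis[(j + 1) % N]

def synthesize(alpha, theta, A_unfolded, basis, centers, d, sigma=1.0):
    """Generative model y(alpha, Theta) = A x_2 psi(Theta) x_3 i(alpha),
    with the core tensor given in mode-3 unfolded form (identity mode as
    rows): collapsing the identity mode yields an interpolated B matrix."""
    B_interp = (identity_vector(alpha, basis) @ A_unfolded).reshape(d, -1)
    return B_interp @ psi(theta, centers, sigma)
```

At a basis angle $\alpha_k$ and a training view, this reduces to the reconstruction of the k-th training silhouette, consistent with (7).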
4 Inference Algorithm
We develop an inference algorithm to sequentially estimate the target state, including the 3D position and the identity, from a sequence of segmented target silhouettes $\{z_t \mid t = 1, \ldots, T\}$. We cast this problem in the probabilistic graphical model shown in Figure 4. Specifically, the state vector $X_t = [x_t\ y_t\ z_t\ \phi_t\ v_t]$ represents the target's position along the horizon, elevation, and range directions, its heading direction (with respect to the sensor's optical axis), and its velocity in a 3D coordinate system. $P_t$ is the camera projection matrix. Because the camera in the SENSIAC dataset is static, we set $P_t = P$. We let $\alpha_t \in [0, 2\pi)$ denote the angular identity variable.
In addition to $\alpha_t$, the generative model defined in (9) also needs the view parameter Θ, which can be computed from $X_t$ and $P_t$, in order to synthesize a target shape $\mathbf{y}_t$. Target silhouettes used in training the generative model are obtained by imaging a 3D target model at a fixed distance from a virtual camera. Therefore $\mathbf{y}_t$ must be appropriately scaled to account for different imaging ranges. In summary, the synthesized silhouette $\mathbf{y}_t$ is a function of three factors: $\alpha_t$, $P_t$, and $X_t$. Given an observed target silhouette $z_t$, the ATR problem becomes that of sequentially estimating the posterior probability $p(\alpha_t, X_t \mid z_t)$. Due to the nonlinear nature of this inference problem, we resort to the particle filtering approach [39], which requires the dynamics of the two variables, $p(X_t \mid X_{t-1})$ and $p(\alpha_t \mid \alpha_{t-1})$, as well as a likelihood function $p(z_t \mid \alpha_t, X_t)$ (the conditioning on $P_t$ is omitted due to the assumption of a static camera in this work). Since the targets considered here are all ground vehicles, it is appropriate to employ a simple white-noise motion model to represent the dynamics of $X_t$ according to

$$\begin{aligned} \phi_t &= \phi_{t-1} + w_t^{\phi}, & v_t &= v_{t-1} + w_t^{v}, \\ x_t &= x_{t-1} + v_{t-1}\,\Delta t\, \sin\phi_{t-1} + w_t^{x}, & y_t &= y_{t-1} + w_t^{y}, \\ z_t &= z_{t-1} + v_{t-1}\,\Delta t\, \cos\phi_{t-1} + w_t^{z}, \end{aligned} \qquad (10)$$
where Δt is the time interval between two adjacent frames. The process noise associated with the target kinematics is Gaussian, i.e., $w_t^{\phi} \sim N(0, \sigma_{\phi}^2)$, $w_t^{v} \sim N(0, \sigma_{v}^2)$, $w_t^{x} \sim N(0, \sigma_{x}^2)$, $w_t^{y} \sim N(0, \sigma_{y}^2)$, and $w_t^{z} \sim N(0, \sigma_{z}^2)$. The Gaussian variances should be chosen to reflect the expected target dynamics and ground conditions. For example, if the candidate pool includes highly maneuvering targets, then large values of $\sigma_{\phi}^2$ and $\sigma_{v}^2$ are needed, while tracking on a rough or uneven ground plane requires a larger value of $\sigma_{y}^2$.
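One propagation step of this motion model can be sketched as follows. The state layout [x, y, z, phi, v] follows the paper, but the sin/cos decomposition of the velocity along the horizon and range axes is our reading of the sensor-centered geometry (phi measured from the optical/range axis), so treat it as an assumption.

```python
import numpy as np

def propagate_state(X, dt, sigmas, rng):
    """One step of a white-noise motion model: heading phi and speed v
    follow random walks, and position is advanced by the previous velocity
    resolved along the heading (x: horizon, z: range, y: elevation)."""
    x, y, z, phi, v = X
    return np.array([
        x + v * dt * np.sin(phi) + rng.normal(0.0, sigmas['x']),
        y + rng.normal(0.0, sigmas['y']),
        z + v * dt * np.cos(phi) + rng.normal(0.0, sigmas['z']),
        phi + rng.normal(0.0, sigmas['phi']),
        v + rng.normal(0.0, sigmas['v']),
    ])
```

With all noise variances set to zero, the target simply advances along its heading, which is the deterministic skeleton of (10).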
Although the target identity does not change, the estimated identity value along the identity manifold could vary due to uncertainty and ambiguity in the observations. We define the dynamics of $\alpha_t$ as a simple random walk:

$$\alpha_t = \left(\alpha_{t-1} + w_t^{\alpha}\right) \bmod 2\pi, \qquad (11)$$
where $w_t^{\alpha} \sim N(0, \sigma_{\alpha}^2)$. This model allows the estimated identity value to evolve along the identity manifold and converge to the correct one during sequential estimation. There are two possible future improvements to make this approach more efficient. One is to add an annealing treatment to reduce $\sigma_{\alpha}^2$ over time, and the other is to make $\sigma_{\alpha}^2$ view-dependent. In other words, the variance can be reduced near the side view, where the target is more discriminative, and increased near the front/rear views, where it is more ambiguous.
Given the hypotheses on $X_t$ and $\alpha_t$ in the t-th frame, as well as $P_t$, the corresponding synthesized shape $\mathbf{y}_t$ can be created by the generative model (9), followed by a scaling factor reflecting the range $z_t \in X_t$. The likelihood function that measures the similarity between $\mathbf{y}_t$ and $z_t$ is defined as

$$p(z_t \mid \alpha_t, X_t) \propto \exp\left(-\frac{\Vert z_t - \mathbf{y}_t \Vert^2}{\sigma^2}\right), \qquad (12)$$
where $\sigma^2$ controls the sensitivity of shape matching and $\Vert\cdot\Vert^2$ gives the mean squared error between the observed and hypothesized shape silhouettes. Pseudocode for the particle filter-based inference algorithm is given in Table 1.
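A single sequential importance resampling step of such a filter can be sketched as below. This is a hedged illustration rather than the algorithm of Table 1: `synthesize` is a hypothetical stand-in for the generative model plus range scaling, and for brevity the velocity-advance term of the motion model is folded into the kinematic noise.

```python
import numpy as np

def particle_filter_step(particles, z_obs, synthesize, sigmas, rng):
    """One SIR step of the joint tracking/recognition filter.  Each
    particle is a pair (X, alpha); the identity random walk wraps around
    the unit circle and the likelihood is Gaussian in the silhouette MSE."""
    n = len(particles)
    scale = np.array([sigmas['x'], sigmas['y'], sigmas['z'],
                      sigmas['phi'], sigmas['v']])
    predicted = []
    for X, alpha in particles:
        Xn = X + rng.normal(0.0, scale)                       # kinematics noise
        an = (alpha + rng.normal(0.0, sigmas['alpha'])) % (2 * np.pi)
        predicted.append((Xn, an))
    # likelihood: exp(-MSE / sigma^2), computed in log space for stability
    log_w = np.array([-np.mean((z_obs - synthesize(X, a)) ** 2) / sigmas['lik']
                      for X, a in predicted])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # systematic resampling
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    return [predicted[i] for i in idx], np.full(n, 1.0 / n)
```

Particles whose synthesized silhouette matches the observation dominate the resampling step, which is how the identity estimate can converge over a sequence.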
5 Experimental results
We have developed four particle filter-based ATR algorithms that share the same inference framework shown in Figure 4, by which we can evaluate the effectiveness of shape interpolation. Method-I uses the proposed target generative model involving both the view and identity manifolds for shape interpolation (i.e., both the identity and view variables are continuous). Method-II applies a simplified version where only the view manifold is involved for shape interpolation (i.e., the identity variable is discrete). Method-III involves shape interpolation along the identity manifold only (i.e., the view variable is discrete). Finally, Method-IV is a traditional template-based method that only uses the training data for shape matching without shape interpolation (i.e., both the view and identity variables are discrete).
We report three major experimental results in the following. First, we present the learning of the proposed generative model along with some simulated results of shape interpolation. Then we introduce the SENSIAC dataset [10], followed by detailed results on a set of IR sequences of various targets at multiple ranges. We also include three visible-band video sequences for algorithm evaluation, among which two were captured from remote-controlled toy vehicles in a room and one came from a real-world surveillance video. Background subtraction [40] was applied to all testing sequences to obtain the initial target segmentation in each frame, and the distance transform [29] was applied to create the observation sequences used for shape matching.
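The preprocessing chain can be sketched as follows. This is a simplified stand-in, not the cited methods [40, 29]: a per-pixel temporal median serves as the static-camera background model, and a brute-force Euclidean distance transform replaces the optimized algorithms used in practice; function names are our own.

```python
import numpy as np

def segment_target(frames, frame, thresh=30.0):
    """Static-camera background subtraction: the per-pixel temporal median
    of the frame stack is the background model, and pixels deviating from
    it by more than `thresh` are marked as target."""
    background = np.median(frames, axis=0)
    return (np.abs(frame - background) > thresh).astype(np.uint8)

def distance_transform(mask):
    """Distance of every non-silhouette pixel to the nearest silhouette
    pixel, computed by brute force (adequate for small target chips)."""
    fg = np.argwhere(mask > 0)
    out = np.zeros(mask.shape, dtype=float)
    if len(fg) == 0:
        return out
    for (r, c), v in np.ndenumerate(mask):
        if v == 0:
            out[r, c] = np.sqrt(((fg - (r, c)) ** 2).sum(axis=1)).min()
    return out
```

In a production pipeline one would substitute an optimized routine (e.g., a two-pass chamfer or exact Euclidean distance transform) for the brute-force loop.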
5.1 Generative Model Learning
We acquired six 3D CAD models for each of the six target classes (APCs, tanks, pickups, cars, minivans, SUVs) for model learning, as shown in Figure 5. All 3D models were scaled to similar sizes, and those in the same class share the same scaling factor. This class-dependent scaling is useful for learning the unified generative model and for estimating the range information in a 3D scene. For each 3D model, we generated a set of silhouettes corresponding to training viewpoints selected on the view manifold. For simplicity, we only considered elevation angles in the range 0° ≤ ϕ < 45° and azimuth angles in the range 0° ≤ θ < 360°. Specifically, 150 training viewpoints were selected by setting 12° and 10° intervals along the azimuth and elevation angles, respectively, leading to non-uniformly distributed viewpoints on the view manifold. Ideally, fewer training views are needed when the elevation angle is large (close to the top-down view), which would reduce the redundancy of the training data. Our method of selecting training viewpoints is directly related to the kernel parameters set in (4) to ensure that model learning is effective and efficient. After model learning, we evaluated the generative model in terms of its shape interpolation capability through three experiments.
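Before turning to these experiments, the training-view sampling just described can be enumerated in a few lines (a minimal sketch; the function name and return convention are our own):

```python
import numpy as np

def training_viewpoints(az_step_deg=12, el_step_deg=10, el_max_deg=45):
    """Enumerate the training views on the hemispherical view manifold:
    azimuth sampled over [0, 360) degrees and elevation over
    [0, el_max_deg) degrees, returned as (theta, phi) pairs in radians."""
    az = np.deg2rad(np.arange(0, 360, az_step_deg))
    el = np.deg2rad(np.arange(0, el_max_deg, el_step_deg))
    return np.array([(a, e) for e in el for a in az])
```

With the paper's 12° and 10° intervals this yields 30 azimuth samples at each of 5 elevations, i.e., the 150 training viewpoints.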

Shape interpolation along the view manifold: We selected one target from each of the six classes and created three interpolated shapes (after thresholding) between three training views, as shown in Figure 6(a). We observe smooth transitions between the interpolated shapes and training shapes, especially around the wheels of the targets.

Shape interpolation along the identity manifold within the same class: We generated six interpolated shapes along the identity manifold between three adjacent training targets for each of the six classes, as shown in Figure 6(b). Despite the fact that the three training targets are quite different in terms of their 3D structures, the interpolated shapes blend the spatial features from the two adjacent training targets in a natural way.

Shape interpolation along the identity manifold between two adjacent classes: It is also interesting to see the shape interpolation results between two adjacent target classes, as shown in Figure 6(c). Although the series of shape variations may not be as smooth as that in Figure 6(b), the generative model still produces intermediate shapes between two vehicle classes that are realistic looking.
The above results show that the target model supports semantically meaningful shape interpolation along the two manifolds, making it possible to handle not only a known target seen from a new view but also an unknown target seen from arbitrary views. Also, the continuous nature of the view and identity variables facilitates the ATR inference process.
5.2 Tests on the SENSIAC database
The SENSIAC ATR database contains a large collection of visible and mid-wave IR (MWIR) imagery of six military and two civilian vehicles (Figure 7). The vehicles were driven along a continuous circle, 100 m in diameter, marked on the ground. They were imaged at a frame rate of 30 Hz for one minute from distances of 1,000 m to 5,000 m (in 500 m increments) during both day and night conditions. We chose 48 night-time IR sequences of the eight vehicles at six ranges (1000 m, 1500 m, 2000 m, 2500 m, 3000 m, and 3500 m); each sequence has approximately 1000 frames. In the four ATR algorithms, we set σ_ϕ² = 0.1, σ_v² = 1, σ_x² = 0.1, σ_y² = 0.1, and σ_z² = 1 in (10), and σ_α² = 0.01 in (11). Additionally, the SENSIAC database includes a rich set of metadata for each frame of every sequence, including the true-north offsets of the sensor (in azimuth and elevation, Figure 8(a)), the target type, the target speed, the range and slant range from the sensor to the target (Figure 8(b)), the pixel location of the target centroid, the heading direction with respect to true north, and the aspect orientation of the vehicle (Figure 8(c)). Furthermore, we defined a sensor-centered 3D world coordinate system (Figure 8(d)) and developed a pinhole camera calibration technique to obtain the ground-truth 3D position of the target in each frame. The tracking performance is evaluated based on the errors in the estimated 3D position and aspect orientation.
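The ground-truth recovery step can be illustrated with a minimal ideal pinhole projection. The focal length and principal point below are placeholders, not SENSIAC calibration values, and lens distortion is ignored; this is a sketch of the geometric relation, not the calibration technique itself.

```python
def project_pinhole(point_3d, focal_px=500.0, principal_point=(320.0, 240.0)):
    """Project a sensor-centered 3D point (x, y, z), with z > 0 in front
    of the camera, to pixel coordinates with an ideal pinhole model.
    The intrinsics here are hypothetical placeholders."""
    x, y, z = point_3d
    u = focal_px * x / z + principal_point[0]
    v = focal_px * y / z + principal_point[1]
    return u, v
```

Inverting this relation (given a calibrated camera, the known ground plane, and the target centroid pixel from the metadata) is what yields the ground-truth 3D position per frame.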
5.2.1 Tracking Evaluation
We computed the errors in the estimated 3D target positions along the x (horizon) and z (range) axes shown in Figure 8(d), as well as the error in the aspect orientation of the target (Figure 8(c)). All tracking trials were initialized with the ground-truth data in the first frame. The overall tracking performance, averaged over the eight targets at each range, is shown in Figure 9. All four algorithms achieved comparable errors of less than one meter along the horizon direction, with Method-I delivering performance gains of 10%, 20%-40%, and 30%-50% over Methods-II, -III, and -IV, respectively. Method-I also outperforms the other three methods on range and aspect estimation, with improvements of 10%-50% and 20%-80%, respectively. These results indicate that shape interpolation along the view manifold matters more than interpolation along the identity manifold, and that using both yields the best tracking performance. Even at a range of 3500 m, the average horizontal/range/aspect errors of Method-I are only 0.5 m, 25 m, and 0.5 rad (28.7°), compared with 0.9 m, 45 m, and 1.1 rad (63.1°) for Method-IV. We also present tracking results of Method-I on four 1000 m sequences in Figure 10, where the interpolated shapes are overlaid on the targets according to the estimated 3D position and aspect angle, together with the given camera model. All of these results demonstrate the general usefulness of the generative model in interpolating target shapes along the view and identity manifolds for realistic ATR tasks.
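For the aspect-orientation error in particular, the angular difference between estimate and ground truth must be wrapped around the circle so that, e.g., 0.05 rad and 2π − 0.05 rad are treated as nearly identical headings. A small helper of the kind we have in mind (our sketch, not the paper's code):

```python
import math

def aspect_error(est_rad, truth_rad):
    """Smallest angular difference between the estimated and ground-truth
    aspect angles, in radians, wrapped to [0, pi]."""
    d = abs(est_rad - truth_rad) % (2 * math.pi)
    return min(d, 2 * math.pi - d)
```

With this convention, the reported 0.5 rad average error corresponds to a worst-case-free 28.7° heading discrepancy regardless of where the wrap-around occurs.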
5.2.2 Recognition Evaluation
As mentioned before, the 1D closed-loop identity manifold learned from the tensor coefficient space can be mapped onto a unit circle to ease the inference process. The identity variable then becomes an angular one, α ∈ [0, 2π). Correspondingly, the six target classes, i.e., tanks, APCs, SUVs, pickups, minivans, and cars, can be represented by six angular sections along the circularly shaped identity manifold (as shown in Figure 1). Since the target type is estimated frame by frame during tracking, we define the overall recognition accuracy as the percentage of frames where the target is correctly classified in terms of the six classes. It is also interesting to examine the two best-matched training targets found along the identity manifold for a given sequence. The overall recognition results of the four methods on the 48 sequences are shown in Table 2, where the accuracy for tanks is averaged over the T72, ZSU23, and 2S3 target types and that for APCs is averaged over the BTR70, BMP2, and BRDM2 target types. Overall, Method-I outperforms the other three methods, again showing the usefulness of shape interpolation along both manifolds. The improvements of Method-I are more significant for long-range sequences, where the targets are small and shape interpolation matters more for correct recognition. The recognition accuracies below 80% for tanks and APCs at long ranges (≥ 2500 m) are mainly due to small target sizes and poor segmentation results, as shown in Figure 11, which presents the targets in the original IR sequences as well as the segmented silhouettes. We used a simple morphological opening operation to clean up the segmentation results. However, when the targets are small, the opening has to be moderate to preserve the target shapes, which also results in noisier segmentations.
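Frame-level classification on the circular identity manifold can be sketched as follows. We assume equal 60° class sectors purely for illustration; the sectors on the learned manifold need not be uniform, and the class order below is taken from the text.

```python
import math

# Class order along the circular identity manifold (order from the text;
# uniform sector widths are an illustrative simplification).
CLASSES = ["tank", "APC", "SUV", "pickup", "minivan", "car"]

def identity_to_class(alpha):
    """Map an identity angle alpha in [0, 2*pi) to one of the six
    classes, assuming equal angular sectors."""
    alpha = alpha % (2 * math.pi)
    sector = 2 * math.pi / len(CLASSES)
    return CLASSES[int(alpha / sector)]

def recognition_accuracy(alphas, true_class):
    """Overall recognition accuracy for a sequence: the fraction of
    frames whose estimated identity angle falls in the correct sector."""
    return sum(identity_to_class(a) == true_class for a in alphas) / len(alphas)
```

Because α is continuous, an estimate can also lie near a sector boundary, which is consistent with the best-matched training targets often being neighbors of the true class on the manifold.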
More details on the recognition results of Method-I for the eight 1000 m sequences are shown in Figure 12, which presents not only the frame-by-frame target recognition results but also the two best-matched training targets. In most frames, the estimated identity values fall in the correct class region; misclassification usually occurs around the front/rear views, where the target is not very distinguishable. Interestingly, the two best matches for the BTR70 and ISUZU-SUV sequences include the exact target model. The best matches for the other sequences include a similar target model; for example, BMP1, T72, BRDM1, and AS90 are among the two best matches for BMP2, T80, BRDM2, and 2S3, respectively.^1 We do not have 3D models for the Ford pickup and the ZSU23 in our training set, but their best matches (Chevy/Toyota pickups and T62/T80 tanks) still resemble the actual targets in the SENSIAC sequences.
5.3 Results on Visibleband Sequences
We also tested the four ATR methods on three visible-band video sequences. Two of them (the car and the SUV) were captured indoors using a remote-controlled toy vehicle, where both the target pose and the 3D position were estimated by making use of the camera calibration information. The third was a real-world surveillance video (the cargo van) for which no camera calibration is available, so only pose estimation was performed from the normalized silhouette sequences. To compare the four methods, we used an overlap metric [41] to quantify the overlap between the interpolated shapes and the segmented target. Let A and B represent the tracking gate and the ground-truth bounding box, respectively. Then the overlap ratio ζ is defined as

ζ = #(A ∩ B) / #(A ∪ B),

where # denotes the number of pixels. A larger ζ implies better tracking performance, as shown in Figure 13. The overlap ratios of all four methods on the three visible-band sequences are shown in Figure 14. Method-I is again clearly superior to the other three methods.
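For axis-aligned boxes, the ratio above reduces to a standard intersection-over-union computation. A minimal sketch follows; the (x0, y0, x1, y1) box convention is our own choice for illustration.

```python
def overlap_ratio(box_a, box_b):
    """Overlap (Jaccard) ratio between two axis-aligned pixel boxes
    given as (x0, y0, x1, y1): #(A intersect B) / #(A union B)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0, min(ax1, bx1) - max(ax0, bx0))   # intersection width
    ih = max(0, min(ay1, by1) - max(ay0, by0))   # intersection height
    inter = iw * ih
    area_a = (ax1 - ax0) * (ay1 - ay0)
    area_b = (bx1 - bx0) * (by1 - by0)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

The ratio is 1 for a perfect overlap, 0 for disjoint boxes, and degrades smoothly as the tracking gate drifts off the ground-truth box.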
We now focus on the recognition results of Method-I, as shown in Figure 15. Although the three targets were previously unknown, the recognition accuracy is still 100% for the first two sequences and close to 100% (97%) for the last one. Note that the two best matches do indeed resemble the unknown target in each sequence. In particular, the cargo van in the third sequence is very different from all minivan training models, yet the two best matches, VW Samba and Nissan Elgrand, give a reasonable approximation. Detailed tracking results for the three sequences are shown in Figure 16. Although the target segmentation results are not ideal in many frames, especially for the cargo van, the estimated pose trajectories along the view manifold are still smooth and represent the actual pose variation of the target during each sequence. Moreover, the interpolated shapes match reasonably well with the segmented targets, indicating correct estimation of both the view and identity variables.
5.4 Discussion and Limitations
Although these results are promising, we still consider this work preliminary for three main reasons. First, the computational complexity of the proposed algorithm (Method-I) is relatively high due to the shape interpolation using the generative model. Our experimental results are based on a non-optimized Matlab implementation: shape interpolation requires approximately 0.03 s on an i7 PC (without parallel computation), and the inference with 200 particles requires about 6.9 s per frame. A faster implementation is still needed to support real-time processing. Second, we use a silhouette-based shape representation that requires target segmentation. The background subtraction used here assumes that the camera platform is stationary; with a moving camera platform, the initial target segmentation could become a challenging issue. Third, we did not consider occlusion, which has to be accounted for in any practical ATR system. The silhouette is a global feature that can be sensitive to occlusion. An extension to more salient and robust features, such as SIFT and HOG, would increase the applicability of the proposed method in real-world applications. Nevertheless, our main contribution is a new shape-based target model where, for the first time, both the view and identity variables are continuous and defined along their own respective manifolds.
6 Conclusion and Future Work
We have presented a new shape-based generative model that incorporates two continuous manifolds for multiview target modeling. Specifically, the identity manifold was proposed to capture both interclass and intraclass shape variability among different target types, while the hemispherical view manifold is designed to cover nearly all possible viewpoints. A particle filter-based ATR algorithm that adopts the new target model for joint tracking and recognition was also presented. The experiments on both IR and visible-band video sequences show the advantages of shape interpolation along both the view and identity manifolds.
However, the current work only considers a silhouette-based shape representation, which may not be sufficiently distinctive in some challenging cases. This work could be extended to more salient and robust features, making the proposed model more promising for real-world applications. Another issue that needs further research is the structure and dimensionality of the identity manifold. In some sense, the 1D identity manifold used here is a practical simplification, where a small set of training models (six models for each of the six classes, 36 in total) is used for learning the generative model. Given sufficient training data, a 2D or even 3D identity manifold could be learned for more generalized target modeling. However, there are two major challenges in going to a higher-dimensional space. One is how to learn an appropriate manifold topology in 2D or 3D, which is much harder than the 1D learning considered here. The other is how to infer the identity variable effectively on a 2D or 3D identity manifold. Both the complexity and the efficiency require balanced consideration when using the couplet of view and identity manifolds in real-world ATR applications.
Notes
^1 Both AS90 and 2S3 are self-propelled howitzers.
References
1. Mei X, Zhou SK, Wu H: Integrated Detection, Tracking and Recognition for IR Video-Based Vehicle Classification. Proc IEEE International Conference on Acoustics, Speech and Signal Processing 2006.
2. Miller MI, Grenander U, O'Sullivan JA, Snyder DL: Automatic target recognition organized via jump-diffusion algorithms. IEEE Trans Image Processing 1997, 6: 157-174. 10.1109/83.552104
3. Venkataraman V, Fan X, Fan G: Integrated Target Tracking and Recognition using Joint Appearance-Motion Generative Models. Proc IEEE Workshop on Object Tracking and Classification Beyond Visible Spectrum (OTCBVS'08), in conjunction with CVPR'08, 2008.
4. Venkataraman V, Fan G, Fan X: Target Tracking with Online Feature Selection in FLIR Imagery. Proc IEEE Workshop on Object Tracking and Classification Beyond Visible Spectrum (OTCBVS'07), in conjunction with CVPR'07, 2007.
5. Shaik J, Iftekharuddin K: Automated tracking and classification of infrared images. Proc International Joint Conference on Neural Networks 2003.
6. Venkataraman V, Fan G, Fan X, Havlicek J: Appearance Learning by Adaptive Kalman Filters for FLIR Tracking. Proc IEEE Workshop on Object Tracking and Classification Beyond Visible Spectrum (OTCBVS'09), in conjunction with CVPR'09, 2009.
7. Zhang Z, Dong W, Huang K, Tan T: EDA Approach for Model Based Localization and Recognition of Vehicles. Proc IEEE International Conference on Computer Vision and Pattern Recognition 2007.
8. Chan L, Nasrabadi N: Modular wavelet-based vector quantization for automatic target recognition. Proc International Conference on Multisensor Fusion and Integration for Intelligent Systems 1996.
9. Wang L, Der S, Nasrabadi N: Automatic target recognition using a feature-decomposition and data-decomposition modular neural network. IEEE Trans Image Processing 1998, 7(8): 1113-1121. 10.1109/83.704305
10. Military Sensing Information Analysis Center (SENSIAC) 2008. [https://www.sensiac.org/]
11. Poggio T, Edelman S: A network that learns to recognize three-dimensional objects. Nature 1990, 343: 263-266. 10.1038/343263a0
12. Ullman S, Basri R: Recognition by Linear Combinations of Models. IEEE Trans Pattern Analysis and Machine Intelligence 1991, 13: 992-1006. 10.1109/34.99234
13. Ullman S: An Approach to Object Recognition: Aligning Pictorial Descriptions. Cognition 1989, 32: 193-254. 10.1016/0010-0277(89)90036-X
14. Khan S, Cheng H, Matthies D, Sawhney H: 3D model based vehicle classification in aerial imagery. Proc IEEE International Conference on Computer Vision and Pattern Recognition 2010.
15. Kushal A, Schmid C, Ponce J: Flexible object models for category-level 3D object recognition. Proc IEEE International Conference on Computer Vision and Pattern Recognition 2007.
16. Su H, Sun M, Fei-Fei L, Savarese S: Learning a dense multi-view representation for detection, viewpoint classification and synthesis of object categories. Proc IEEE International Conference on Computer Vision 2009.
17. Ozcanli O, Tamrakar A, Kimia B: Augmenting shape with appearance in vehicle category recognition. Proc IEEE International Conference on Computer Vision and Pattern Recognition 2006.
18. Savarese S, Fei-Fei L: Multi-view Object Categorization and Pose Estimation. Computer Vision, Volume 285 of Studies in Computational Intelligence, Springer 2010.
19. Toshev A, Makadia A, Daniilidis K: Shape-based object recognition in videos using 3D synthetic object models. Proc IEEE International Conference on Computer Vision and Pattern Recognition 2009.
20. Lou J, Tan T, Hu W, Yang H, Maybank S: 3D model-based vehicle tracking. IEEE Trans Image Processing 2005, 14: 1561-1569.
21. Leotta M, Mundy J: Predicting high resolution image edges with a generic, adaptive, 3D vehicle model. Proc IEEE International Conference on Computer Vision and Pattern Recognition 2009.
22. Sandhu R, Dambreville S, Yezzi A, Tannenbaum A: Non-rigid 2D-3D pose estimation and 2D image segmentation. Proc IEEE International Conference on Computer Vision and Pattern Recognition 2009.
23. Tsin Y, Genc Y, Ramesh V: Explicit 3D modeling for vehicle monitoring in non-overlapping cameras. Proc IEEE International Conference on Advanced Video and Signal based Surveillance 2009.
24. Liebelt J, Schmid C: Multi-view object class detection with a 3D geometric model. Proc IEEE International Conference on Computer Vision and Pattern Recognition 2010.
25. Bülthoff H, Edelman S: Psychophysical support for a 2D view interpolation theory of object recognition. Proc of the National Academy of Sciences 1992, 89: 60-64. 10.1073/pnas.89.1.60
26. Abdelkader M, Abd-Almageed W, Srivastava A, Chellappa R: Silhouette-based gesture and action recognition via modeling trajectories on Riemannian shape manifolds. Computer Vision and Image Understanding 2011, 115(3): 439-455. 10.1016/j.cviu.2010.10.006
27. Belongie S, Malik J, Puzicha J: Shape matching and object recognition using shape contexts. IEEE Trans Pattern Analysis and Machine Intelligence 2002, 24(4): 509-522. 10.1109/34.993558
28. Hu M: Visual pattern recognition by moment invariants. IRE Trans Information Theory 1962, 8(2): 179-187. 10.1109/TIT.1962.1057692
29. Elgammal A, Lee CS: Separating style and content on a nonlinear manifold. Proc IEEE International Conference on Computer Vision and Pattern Recognition 2004.
30. Srivastava A, Joshi S, Mio W, Liu X: Statistical shape analysis: clustering, learning, and testing. IEEE Trans Pattern Analysis and Machine Intelligence 2005, 27(4): 590-602.
31. Murase H, Nayar S: Visual learning and recognition of 3D objects from appearance. International Journal of Computer Vision 1995, 14: 5-24. 10.1007/BF01421486
32. Tenenbaum J, Freeman WT: Separating style and content with bilinear models. Neural Computation 2000, 12: 1247-1283. 10.1162/089976600300015349
33. Vasilescu MAO, Terzopoulos D: Multilinear analysis of image ensembles: TensorFaces. Proc European Conference on Computer Vision 2002.
34. Gosch C, Fundana K, Heyden A, Schnörr C: View point tracking of rigid objects based on shape sub-manifolds. Proc European Conference on Computer Vision 2008.
35. Lee C, Elgammal A: Modeling View and Posture Manifolds for Tracking. Proc IEEE International Conference on Computer Vision 2007.
36. Elgammal A, Lee CS: Tracking people on a torus. IEEE Trans Pattern Analysis and Machine Intelligence 2009, 31: 520-538.
37. Lee C, Elgammal A: Simultaneous Inference of View and Body Pose using Torus Manifolds. Proc IEEE International Conference on Pattern Recognition 2006.
38. Vasilescu MAO, Terzopoulos D: Multilinear image analysis for facial recognition. Proc IEEE International Conference on Pattern Recognition 2002.
39. Arulampalam S, Maskell S, Gordon N, Clapp T: A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking. IEEE Trans Signal Processing 2002, 50(2): 174-188. 10.1109/78.978374
40. Zivkovic Z, van der Heijden F: Efficient adaptive density estimation per image pixel for the task of background subtraction. Pattern Recognition Letters 2006, 27: 773-780. 10.1016/j.patrec.2005.11.005
41. She K, Bebis G, Gu H, Miller R: Vehicle Tracking Using On-line Fusion of Color and Shape Features. Proc IEEE International Conference on Intelligent Transportation Systems 2004.
Acknowledgements
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions that helped us improve this paper.
This work was supported in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under grants W911NF-04-1-0221 and W911NF-08-1-0293, the National Science Foundation under grant IIS-0347613, and an OHRS award (HR09-030) from the Oklahoma Center for the Advancement of Science and Technology.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Venkataraman, V., Fan, G., Yu, L. et al. Automated target tracking and recognition using coupled view and identity manifolds for shape representation. EURASIP J. Adv. Signal Process. 2011, 124 (2011). https://doi.org/10.1186/1687-6180-2011-124
Keywords
 tracking and recognition
 shape representation
 shape interpolation
 manifold learning