# Human tracking with an infrared camera using a curve matching framework

Suk Jin Lee^{1}, Gaurav Shah^{1}, Arka Aloke Bhattacharya^{2} and Yuichi Motai^{1} (email author)

**2012**:99

https://doi.org/10.1186/1687-6180-2012-99

© Lee et al; licensee Springer. 2012

**Received: **16 May 2011

**Accepted: **3 May 2012

**Published: **3 May 2012

## Abstract

The Kalman filter (KF) has been improved for human tracking by a mobile robot. The proposed algorithm combines a curve matching framework with the KF to enhance the prediction accuracy of target tracking. Compared to other methods using the normal KF, the Curve Matched Kalman Filter (CMKF) predicts the next movement of the human by taking into account not only his present motion characteristics, but also the previous history of target behavior patterns. The CMKF thus acquires the motion characteristics of a particular human and provides a computationally inexpensive framework for a human-tracking system. The proposed method demonstrates improved target tracking using a heuristic weighted mean of two predictions, i.e., the curve matching framework and the KF. We conducted the experimental tests in an indoor environment using an infrared camera mounted on a mobile robot. Experimental results validate that the proposed CMKF increases prediction accuracy by more than 30% compared to the normal KF when the characteristic patterns of target motion are repeated in the target trajectory.

### Keywords

object tracking; far-infrared imaging; human target; curve matching; Kalman filter

## 1. Introduction

The Kalman filter (KF) is a set of mathematical equations that updates the underlying system state through a repeated cycle of prediction and correction, minimizing the estimated error covariance. Its predictions are based on the last state alone, not on the history of the target motion in itself [1]. Each target motion may have a prominent moving pattern, or a cyclic motion pattern, considered as a basis in [2–4], which may not be detected accurately by the KF alone, even if these characteristic patterns are repeated many times in the target trajectory. Such repetition may provide computational benefits when it is integrated with more advanced and complex motion analysis systems.

We assume in this article that such a repetition of certain characteristic patterns exists in the target trajectory. The more time elapses, the more often a characteristic motion pattern recorded in the past trajectory is repeated. Noting and acquiring these distinctive marks can yield more accurate prediction performance compared to any standalone implementation of the KF. We set up an indoor experimental environment to demonstrate the prediction accuracy of the developed algorithm. The experimental setup includes a human-tracking mobile robot used in an indoor environment, fitted with an infrared camera and a laser range finder.

The main contribution of this article is to implement the Curve Matched Kalman Filter (CMKF) by using curve matching framework for checking the repetition of characteristic motion patterns. This article takes KF as a basic implementation model, and shows through experimental results how curve matching can improve its prediction accuracy when it is used for the target tracking of human subjects. By combining KF with a curve matching framework, we are essentially increasing the number of states upon which the prediction process depends. Therefore, this article may provide a state-of-the-art target tracking method with the development of a filter design which improves the prediction accuracy of any filter aimed at tracking human subjects.

The rest of this article is organized as follows. In Section 2, we present related work pertaining to this study. In Section 3, we present the proposed human tracking using the CMKF. Section 4 describes the experimental setup prepared to test the new concept and presents experimental results of target tracking with different target conditions over multiple trials, compared to other methods including the KF, arbitrate OFKF, particle filter, and optical flow. In Section 5, we present the prediction analysis of the improvement gained by using the CMKF and show a further application where the CMKF far outperforms a normal KF. Finally, Section 6 concludes this study and describes future work.

## 2. Related studies

There are two categories of articles related to the study reported in this article. The first group of articles is concerned with tracking targets with the help of mobile robots with or without the use of KF. The second group of articles deals with curve matching. They describe how to match curves efficiently and show how these concepts have been applied to target tracking in the past.

### 2.1. Human tracking by mobile robot

Human tracking by mobile robots has been an area of intense research for many years. The human body can be tracked by several methods including processing 2D data or 3D positional data by using normal KF [5, 6], by segmentation of the main target from the background [7], by multi-sensor fusion data and laser-based leg-detection techniques using on-board laser range finder [8], by techniques such as the hierarchical KF [9], or by the use of quaternions [10]. Humans have also been tracked by tracking multiple features [11], or by localizing the position of the robot itself [12], with respect to its environment. There has been work which deals with detecting and classifying abnormal gaits [13], and human recognition at a distance through gait energy image [14], gait detection using feature extraction from spatial and temporal templates [15], or identification of humans through spatial temporal silhouette analysis [16]. None of these algorithms implement any technique where the target's trajectory is marked and learnt.

In works that have implemented the target trajectory, robots have been trained to learn from human motions as in [17, 18], and to follow the way a human walks [19]. In other cases, robots have been made to model unknown environments as in [20]. However, these behavior analyses for human motion have not been specifically implemented in any prediction algorithms of target's trajectory.

### 2.2. KF-based tracking

The extended KF has widely been used for tracking moving people [21, 22]. We have further developed the extended KF into an interacting multiple model [23, 24] successfully. The KF provides a general solution to the recursive minimized mean square estimation problem within the class of linear estimators [1, 25–27]. Use of the KF will minimize the mean squared error as long as the target dynamics and the measurement noise are accurately modeled. Consider a discrete-time linear dynamic system with additive white Gaussian noise that models unpredictable disturbances. A limitation of this KF framework is the assumption that the target behavior follows a random variable distribution [1], which may degrade the prediction accuracy of human localization. For example, acquiring the characteristics of human motion by curve matching has not been implemented in any of the aforementioned articles.

### 2.3. Curve matching

Curve matching has often been implemented in robotics but to a different end. Most of the works on curve matching deal with identifying some known curve from an image, and then initiating various algorithms to follow it [28–34], while [35] deals only with a parameter-free approach to register images using elasticity theory. Further work deals with how best to find out contours from an image to identify and classify a target [36–41].

Another extensive area of work has been the field of curve matching itself. There have been many articles which deal with curve matching using splines [42–46]. Other algorithms for curve matching use polygonal arc methods [47], matching the curve structure on the most significant scales [48], unary and binary measurements invariant for curves [49], fuzzy logic [50] or Sethian's Fast Marching method [51].

The CMKF concentrates on using curve matching for checking motion repetition characteristics and partially bases its curve matching algorithm on [52]. All of the mentioned articles achieve curve matching at a high computational cost. A new and straightforward model has thus been developed in this article, which not only improves the performance of the KF, but also keeps the time required for the extra computation as low as possible.

## 3. Proposed human tracking using a CMKF

### 3.1. Concept

This section proposes a new model called the CMKF. The model intends to track the movements of a human by essentially increasing the number of states on which the prediction depends. This approach uses mathematical curve matching to relate the motion of the human at the present instant to some similar motion in the past, and thus make a better prediction of the man's state.

We consider two curves-*P* denoting the man's trajectory in the past or the past curve, and *C* denoting the man's current trajectory or the current curve. We search in *P* for stretches where the present curve *C* matches. When a strong match is found we try to predict the next movement of the man based on how it had behaved in the past. At each instant of time, a weight is assigned to the curve matching algorithm (CMA) and the KF depending on whether the present motion of the man strongly resembles his past motion. The weighted mean of the CMA and the KF is then calculated, according to which the mobile robot is moved. To facilitate finding matches of curve *C* in curve *P*, we encode them as strings *C*' and *P*', respectively, and apply computationally light string matching algorithms to calculate the extent and degree of matches.

### 3.2. Justification for using curve matching

In the experimental setup, we consider a mobile robot always centered on the human target. Whenever the human makes a move, the robot head moves accordingly to bring the human back to the center of its sensor image. Under such conditions, if the human target suddenly changes direction, the KF cannot catch up immediately, because from the last few states alone it cannot predict that the body was going to make such a move. Under more general circumstances, it is impossible to predict if a human will make a sudden move from his present position. However, an alternate solution is to utilize past records of motion to help predict complicated movements of human subjects. That is why we extend the KF to the area of mathematical curve matching, in order to increase the state-space which may be used for prediction making.

It is also noted that the current trajectory may not always significantly match any motion in the past. Hence, the CMA prediction should only be used when there is ample evidence that a repetition in trajectory is taking place. The algorithm will supply the robot with a weighted average of the KF prediction as stated above and the value predicted by the CMA. The better the present motion curve matches one in the past, the higher the weight of the prediction returned by the CMA. Also, converting the curves into strings ensures lightweight computation, making the method viable for implementation in real-time systems. In the following sections, we describe each aspect of the CMA in more detail.

### 3.3. Converting the curve into a string

Two curves are initially calculated. The first curve *C* denotes the current motion of the human, and the second curve *P* denotes his past motion. Each curve is taken and converted into a corresponding string denoted as *C*' and *P*', respectively. These two strings are then matched. The length of *C*' represents the length of time through which the present motion has been following a past motion. The string matching algorithm also calculates the extent to which the strings have matched. Initially, both *P*' and *C*' are null. As the robot starts tracking, the motion of the human is gradually stored in *P*'. Thus, *P*' gives the entire history of the motion which the robot has tracked up to the present moment. *C*' stores only the ending portion of *P*' that can be matched to an earlier instant in *P*'. Thus, *C*' contains only the last few frames of data which can be matched to a previous motion of the man. If the current motion does not match any previous motion already stored in *P*', we re-initialize *C*' to null.

The curve to be encoded is the target's position recorded against time, sampled at specified intervals into values *s*_{ i }. The relative difference of the neighboring sample points is calculated (as shown in Figure 1),

Φ_{ i } = *s*_{ i+1 } - *s*_{ i },

where Φ is the string of integers to which the curve is being converted, and *i* denotes a particular sampled point corresponding to a particular character in the string. The encoding is computed over some fixed *j* and *k*, and *δ* is some fixed non-negative real number. Each Φ_{ i } is then multiplied by a constant *M* to space out the values obtained, and converted to the nearest integer,

Φ'_{ i } = int(*M* Φ_{ i }),

where int is the integer function, which finds the greatest integer less than or equal to the number. The sequence Φ'_{1}, Φ'_{2},..., Φ'_{ k } thus becomes the desired string encoding of the motion curve.

Note that many special features of the curve, such as rotationally invariant points, etc., need not be taken for our purpose of curve matching. This procedure helps reduce time taken for the algorithm to run.
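The encoding above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the scaling constant `M = 10` and uniform sampling are assumptions.

```python
import math

def encode_curve(samples, M=10):
    """Encode a sampled motion curve as a sequence of integers.

    Takes the relative difference of neighboring sample points,
    spaces the values out with a scaling constant M, and floors
    each result to an integer (a sketch of the encoding described
    in the text; M and the sampling interval are assumptions).
    """
    diffs = [samples[i + 1] - samples[i] for i in range(len(samples) - 1)]
    return [math.floor(M * d) for d in diffs]
```

For example, `encode_curve([0.0, 0.5, 0.3])` encodes a rise then a small drop as the two-character string `[5, -2]`.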

### 3.4. Matching the two strings

The two strings are compared character by character within a tolerance *ε*. There is a match between corresponding characters if

|*P*'_{ k } - *C*'_{ k' }| ≤ *ε*,

where *P*' and *C*' are the two strings denoting the previous history and the current motion, respectively, and *k* and *k*' are the indices where they match.
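A tolerance-based character match can be sketched as follows; the tolerance value `eps = 1` is an illustrative assumption standing in for *ε*.

```python
def chars_match(p_char, c_char, eps=1):
    """Two integer characters match if they differ by at most eps
    (the tolerance plays the role of ε in the text)."""
    return abs(p_char - c_char) <= eps

def find_matches(P, c_char, eps=1):
    """All indices k in the history string P' where c_char matches."""
    return [k for k, p in enumerate(P) if chars_match(p, c_char, eps)]
```

With `eps = 1`, `find_matches([5, -2, 5, 7], 5)` returns the indices `[0, 2]`, since 7 differs from 5 by more than the tolerance.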

### 3.5. Update of the strings

Initially, the string *C*' includes the latest move only. *P*' and *C*' are then matched and the points where they have matched are stored. The next move is checked at exactly these points in the previous memory string *P*'. If there is a match with the next move as well, *C'* (which denotes how much the present move has matched some action in its previous history) is augmented to include the present move, otherwise *C*' is re-initialized to contain the latest character only.

Whenever a portion of the string is matched, the curve matching prediction returned is the immediate next move taken by the human at the point where the curve was matched. If there is more than one place where the present motion curve has matched, then we take the average of the readings. Also, as will be seen in Section 3.7, the weight of the CMA is automatically reduced if the body has been shown to act differently after each such matching motion.

If the augmented *C*' curve is to match, it will match only at those indices where the unaugmented *C*' had previously matched. This way, the entirety of *P*' need not be scanned for a match every time the current string is updated. Figure 2 demonstrates how much of the present move will be matched to the previous motion. The present move will constitute *C* and the remaining curve will constitute *P*.
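The incremental update can be sketched as below: a new character is checked only at the positions immediately following the previous match points, and *C*' is re-initialized when no match survives. The function name and the `(match positions, length of C')` state representation are illustrative assumptions.

```python
def update_matches(P, match_ends, c_len, new_char, eps=1):
    """Advance the match state when a new encoded move arrives.

    match_ends holds the indices in P' where the current string C'
    last matched; the new character is checked only at the positions
    immediately following those, so the whole of P' is not rescanned.
    If no match survives, C' is re-initialized to the latest
    character alone (a sketch of the scheme described in the text).
    """
    survivors = [k + 1 for k in match_ends
                 if k + 1 < len(P) and abs(P[k + 1] - new_char) <= eps]
    if survivors:
        return survivors, c_len + 1          # C' augmented by one character
    fresh = [k for k, p in enumerate(P) if abs(p - new_char) <= eps]
    return fresh, 1 if fresh else 0          # C' re-initialized
```

For instance, if the first character matched at indices 0 and 3 of `P = [1, 2, 3, 1, 2, 9]`, a new character `2` keeps both matches alive (now ending at indices 1 and 4) and grows *C*' to length 2.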

### 3.6. Complexity of the CMA

Curve matching is computationally expensive, so a number of algorithmic procedures are adopted in order to lessen the computation required.

Let the length of *P*' be *n*, and assume that character comparisons take place in a constant time *c*_{1}. The first value of *C*' is matched in *O*(*n*) time by a naïve string matching algorithm. Thus, the first step has a complexity of

*T*(*n*) = *c*_{1} *n*,

where *T*(*n*) is the time taken to run. Let the first character of *C*' match in *p* separate locations in the recorded database, with *p* ≤ *n*. The next time a character comes in, we only need to check it against the *p* positions, and not all of the *n* positions in the database. The time required per iteration (as only one new data point is obtained in one iteration) is now

*T*(*n*) = *c*_{1} *p*.

The value of *p* is non-increasing for a particular match. As more of *C*' is matched, the number of positions in *P*', where it has matched will get reduced. Generally, *C*' will initially match in only some select indices in *P*', hence, it can be assumed that the entire process will require less time per cycle, even in the worst case. The time taken is an important metric as this algorithm has to be online.

For improvements in running time when *P*' becomes large, *P*' is made to contain only a limited amount of history (such as only those portions of the motion which have been matched repeatedly). The amount of history which is to be stored can be optimized based on the frame rate of the sensor used, i.e., the time required to make the CMKF prediction should not exceed the frame rate of the sensor. This will help in restricting the continuous increase in the length of the stored string *P*', and also help reduce the running time.
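A bounded history keeps the per-frame cost predictable. The sketch below uses a fixed-capacity buffer; the capacity of 4096 characters is an illustrative assumption, to be tuned against the sensor's 15-Hz frame budget as described above.

```python
from collections import deque

# Bound the stored history P' so per-frame matching stays within the
# sensor's frame budget (15 Hz gives roughly 66 ms per frame).
# The capacity here is an assumed value, not one from the experiments.
MAX_HISTORY = 4096
p_history = deque(maxlen=MAX_HISTORY)

def record_move(ch):
    """Append a new encoded move; the oldest entry is silently
    dropped once the buffer is full, capping the matching cost."""
    p_history.append(ch)
```

A smarter policy could instead retain only the portions of the motion that have been matched repeatedly, as the text suggests.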

### 3.7. Assigning a weight to the CMA

Three cases need to be taken into account when deciding the weight of the prediction from the CMA.

#### Case (i) : Length of the match

The first aspect is the extent to which the curve matches exactly. This is very important, as this will govern whether exactly such a motion has indeed taken place in the past. If the curve only matches to a very small extent, then it will be evident that the present motion is not a strong repetition of its past motion.

The contribution from the length of the match may be expressed as

*w*_{CMA} ∝ (*l*_{C',exact} / *l*_{C',total}) · *f*(*l*_{C',total}),

where *w*_{CMA} is the weight of the CMA, *l*_{C',exact} is the number of characters in *C*' that have matched exactly the corresponding characters in *P*', and *l*_{C',total} is the total number of characters in *C*'. If *l*_{C',exact} equals *l*_{C',total}, then the curves have matched exactly, and the weight given to the CMA is high. The function *f* has a value close to 1 when *l*_{C',total} is high; this gives more importance to longer matched curves. The function *f* can be chosen to be any suitable function, such as the sigmoid function.

#### Case (ii): Frequency of the match in history

The second aspect to be taken into consideration is the number of times *C*' has been matched in *P*'. There may be a case where the present motion has been repeated more than once in the past.

where *m* is the number of matches. As the values of *l*_{C',exact} and *m* increase, the weight of the CMA also increases.

#### Case (iii): Motion maintenance after similar trajectory

where *a*_{+} is the number of actions in the positive direction (according to the chosen reference), and *a*_{-} is the number of actions in the negative direction, following matched motions. Here, the delta function δ(·) takes the value 1 for δ(0), and |*a*| for δ(|*a*|) otherwise. Note that the weight is highest if the motion is maintained, and low in cases of inconsistent behavior.
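One way to combine the three cases is sketched below. The product form, the *m*/(*m* + 1) frequency term, and the consistency term built from *a*_{+} and *a*_{-} are assumed functional shapes, chosen only to rise with match length and frequency and to fall when the target behaved inconsistently after past matches; they are not the paper's exact formulas.

```python
def cma_weight(l_exact, l_total, m, a_pos, a_neg):
    """Combine cases (i)-(iii) into a single CMA weight in [0, 1].

    A sketch under assumed functional forms: length ratio (case i),
    a frequency term growing with the number of matches m (case ii),
    and a consistency term from the post-match actions (case iii).
    """
    if l_total == 0 or m == 0:
        return 0.0
    length_term = l_exact / l_total                   # case (i)
    freq_term = m / (m + 1.0)                         # case (ii)
    total = a_pos + a_neg                             # case (iii)
    consistency = abs(a_pos - a_neg) / total if total else 1.0
    return length_term * freq_term * consistency

def kf_weight(w_cma):
    """The KF receives the remaining weight, so the two sum to 1."""
    return 1.0 - w_cma
```

For a perfect 4-character match seen 3 times, always followed by a move in the same direction, this gives a CMA weight of 0.75 and leaves 0.25 to the KF.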

### 3.8. Flowchart of entire process

CMKF consists of taking the KF prediction, the CMA prediction and assigning a weight to each of them, depending on their importance to the context. Then, the weighted average of their predictions is taken to move the robot. The entire process is described by the following six steps:

**Step 1 :** The human's position is calculated using sensors.

**Step 2 :** The KF prediction is obtained.

**Step 3 :** The CMA prediction is obtained.

**Step 4 :** The weight of the CMA is calculated as the product of the three terms shown in cases (i), (ii), and (iii).

*w*_{CMA} can have a maximum value of 1 (the value of the constant of proportionality *k* is adjusted in the appropriate manner), and only in the most ideal case. The weight attached to the KF is

*w*_{KF} = 1 - *w*_{CMA},

where *w*_{KF} is the weight of the prediction of the KF. It is evident that in a majority of cases the weight of the KF far outweighs the weight of the CMA. Only in those cases where the motion is highly repetitive will *w*_{CMA} be very high.

**Step 5 :** Make the robot move according to the weighted average of the KF prediction and the prediction based on CMA.

**Step 6 :** The position of the human is obtained, the Kalman matrices are updated and the entire process beginning from Step 1 is repeated.
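The six steps can be sketched as one loop iteration. The `SimpleKF` and `SimpleCMA` classes below are deliberately minimal stand-ins (a fixed-gain filter and a repeat-the-last-matched-move predictor), not the filters used in the experiments; only the weighted-mean structure reflects the steps above.

```python
class SimpleKF:
    """Minimal fixed-gain Kalman-style filter (illustrative stand-in)."""
    def __init__(self):
        self.x = 0.0
    def predict(self):
        return self.x
    def update(self, z):
        self.x = self.x + 0.5 * (z - self.x)   # assumed constant gain

class SimpleCMA:
    """Curve-matching stand-in that repeats the last recorded move."""
    def __init__(self):
        self.history = []
    def predict(self):
        if not self.history:
            return 0.0, 0.0                    # no history: zero weight
        return self.history[-1], 0.5           # assumed fixed weight
    def record(self, z):
        self.history.append(z)

def cmkf_step(sensor_angle, kf, cma):
    kf_pred = kf.predict()                # step 2: KF prediction
    cma_pred, w_cma = cma.predict()       # steps 3-4: CMA prediction + weight
    w_kf = 1.0 - w_cma                    # weights sum to 1
    command = w_kf * kf_pred + w_cma * cma_pred   # step 5: weighted mean
    kf.update(sensor_angle)               # step 6: update with measurement
    cma.record(sensor_angle)
    return command
```

Run twice on a constant measurement of 10°, the first command is 0.0 (no history, empty filter) and the second blends the filter's 5.0 with the CMA's 10.0 into 7.5.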

## 4. Experimental results

Section 4.1 shows the experimental environment setting. Sections 4.2, 4.3, and 4.4 show the advantages of CMKF over the KF by analyzing in detail an instance where the mobile robot was made to track a human in an indoor environment for about 400 s. Other runs, with various degrees of motion repetition, are analyzed in Section 4.5. Finally, computational time is evaluated for the complexity in Section 4.6.

### 4.1. Experimental setting for target tracking

We implemented a human-tracking mobile robot in an indoor environment, with an infrared camera and a laser range finder mounted on the Active Media robot. Online video was collected using a Raytheon Infrared Camera operated at a 15-Hz frame rate and an image size of 640 × 480 pixels. The video stream was processed at 15 Hz using a PC for real-time analysis. We claim that since the CMKF gives an improved performance on this version of the KF, it will similarly improve the performance when implemented in conjunction with other varieties of KFs, such as those using different dynamical models. This is because the concept developed is independent of the efficiency and type of the KF implemented. We try to predict the angle by which the robot should be turned so as to bring the man back to the center if and when the man makes some motion. The range finder is used to find the distance of the man from the robot so as to estimate the relative angular position of the human accurately and also to ensure that the robot remains within a certain specified distance of the man.

### 4.2. Target tracking by CMKF

The ground truth is the actual angle subtended by the target at the (*t* + 1)th second, and it is compared to the prediction of the CMKF made from data up to the *t*th second. In Figure 4, the ground truth is labeled as the original curve. Once the prediction is made, we update the CMKF equations with the true angle subtended and then rotate the robot by the CMKF prediction to center on the human. We also store this value in the motion history of the target, i.e., in *P*'.

It may be argued that even if the human repeats his motion, his angular positions may not be exactly the same as they were previously. Moreover, the camera may have been rotated through slightly different angles now. Hence, the ground truth that we have taken to update our equations and augment *P*' may also change slightly. However, the variation in these values will never exceed a certain threshold which is bounded by the average error which the implemented KF has. Thus, choosing the *ε* in Equation (4) based on the average error of the version of the KF implemented helps us circumvent the problem.

Figure 5a shows an example of the target behavior with the length of the current motion string, *C*', versus time. One of the most important criteria on which the weight of the CMA depends is the length of *C*'. If the present motion matches for a long period of time, the weight of the curve matching becomes high. Figure 5a shows the portion of the curve matched at any instant of time. It may be noted that around the time of 16 s, the present motion matches some past motion for a long period. During this period, the weight of the CMA dominates over that of the KF, as seen in Figure 5b.

### 4.3. Comparison of error between CMKF and KF

Details of tracking

| | Mean error | Standard deviation of error | Mean time required per cycle (s) | Duration of the tracking correctness within 3-sigma bound (%) |
|---|---|---|---|---|
| KF | 0.4868 | 5.2748 | 1.5992e-4 | 15 |
| CMKF | 0.0697 | 0.0662 | 7.2918e-5 | 32 |

In Figure 6, we find that the error in the portion in which the curves match is reduced by close to 95%. This results in extremely accurate tracking, and also brings down the entire mean error of the process.

### 4.4. Comparison of error among other methods using different target conditions

In Figure 7, we find that optical flow has a significant angle error at the beginning of the prediction and that the particle filter also has a significant angle error at transient states. The angle errors of arbitrate OFKF are fluctuating over all the observation periods. The standard deviations of angle error for optical flow, arbitrate OFKF, particle filter, and CMKF are 2.25, 1.93, 3.01, and 0.25, respectively, as shown in Figure 7.

Methodological comparison of tracking

| | Duration of the tracking correctness within 3-sigma bound (%) | Std. dev. of error (ground truth, fast) | (ground truth, med) | (ground truth, slow) | (noisy, fast) | (noisy, med) | (noisy, slow) | Bias for residual error during the stable periods |
|---|---|---|---|---|---|---|---|---|
| CMKF | 32 | 0.07 | 0.08 | 0.08 | 0.07 | 0.08 | 0.08 | 0.02 |
| Arbitrate OFKF | 3.5 | 239.36 | 1.77 | 1.83 | 2.50 | 2.51 | 5.56 | 0.41 |
| Particle filter | 3.02 | 0.82 | 0.42 | 0.31 | 1.10 | 1.25 | 0.52 | 0.45 |
| Optical flow | 5.01 | 7.58 | 0.76 | 0.60 | 0.63 | 0.36 | 0.74 | 1.76 |

### 4.5. Multiple trials with Monte Carlo simulation

Multiple trials of tracking with different speed using CMKF

| Trials of different speed target movement | Mean error | Standard deviation of error | Mean time required per cycle (s) |
|---|---|---|---|
| Slow | -0.0228 | 0.0809 | 8.1074e-5 |
| Medium | 0.0153 | 0.0870 | 8.5299e-5 |
| Fast | 0.0129 | 0.0753 | 8.5866e-5 |

Figure 8 shows the comparison of tracking error among the multiple trials of different speed target movements. In the fast target movement, we find some significant tracking errors at changes in angular acceleration, but the overall angle errors of the multiple trials remain within 1.5° across the different speeds using the CMKF. We summarize the mean error, the standard deviation of error, and the mean time required per cycle in Table 3.

### 4.6. Comparison of time complexity

It has also been determined that the time required by the proposed algorithm to make its prediction from a particular frame does not differ significantly from the time taken by the normal Kalman prediction algorithm. Due to the efficiency of the proposed algorithm, the extra computation required for the CMA remains small.

Comparison of average mean time with other methods

| | CMKF | KF | Arbitrate OFKF | Particle filter | Optical flow |
|---|---|---|---|---|---|
| Average mean time (s) | 2.3429e-4 | 1.6232e-4 | 6.4351e-5 | 0.0013 | 4.1963e-5 |

## 5. Prediction analysis via evaluation

### 5.1. Analysis of improvement after using CMKF

The algorithm works when tracking humans with a tendency toward repetitive motion. Since each human has his own characteristic motion, given a long enough track (such as 20 s with 90 repetitions in the movement), the robot can match the human's present motion to some motion in the past, and can make much more accurate decisions than the KF implementation.

It can be seen that with more repetition in movement, the error with which the CMKF predicts the position of the human decreases. The error also decreases with time, as there is a larger database against which to compare the actions, and a higher likelihood of the present motion of the human matching some motion in its past history. The only trade-off is the time required for the algorithm to predict, which is not much of a drawback as it is of the order of 10^{-4} s. An optimized equation may be drawn between the frame rate of the available sensor and the time taken for the CMKF to predict, and the size of the database may be restricted accordingly (as mentioned in Section 3.6).

### 5.2. Experimental verification

Figure 13a shows that for about 20 frames after the robot loses track of its human, the error in its prediction based on curve matching and the man's actual motion lies within 20°. As the number of frames in which the human is absent begins to increase, the error in the prediction of the CMA also increases. Figure 13b shows the mean and standard deviation of the error of the predictions made with the assumption that the human is out of sight. It can be seen that for the first few frames after the robot has lost track of the human, it makes accurate guesses about the position of the human, and can be assumed to get the human back into view within a short amount of time.

CMKF will outperform KF in cases where the human has shown a tendency to change its direction while moving, which is normally what happens. KF will not be able to predict the changes in direction of motion of the human, whereas the CMKF will. In situations where the human shows a tendency of smooth motion, the CMKF will give as good a result as the KF because the CMKF will predict a smooth motion matching some portion in the human's past history.

The robot used has a pan-vision of about 45° on each side of its optical axis. If the error is less than 45° at any instant, the robot can get the human back in sight after making a prediction, but if the error exceeds 45°, then the rotation of the robot will not bring the human into view at all. This happens around frame number 40. Forty frames from the sensor correspond to roughly 3 s (the frame rate of the infrared sensor used in the experiment is 15 Hz); so, if the robot loses track of the human for more than 3 s, we can assume that the human is lost.
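The frame-to-seconds conversion and the pan-range check above amount to two small predicates; the function names and the exact 3-s threshold placement are illustrative assumptions.

```python
FRAME_RATE_HZ = 15      # infrared camera frame rate from the experiment
PAN_LIMIT_DEG = 45.0    # pan range on each side of the optical axis

def target_lost(frames_without_target, max_seconds=3.0):
    """Declare the target lost after about 3 s without a sighting
    (roughly frame number 40 at 15 Hz)."""
    return frames_without_target / FRAME_RATE_HZ > max_seconds

def can_recover(prediction_error_deg):
    """The robot can bring the human back into view only while the
    prediction error stays within its pan range."""
    return abs(prediction_error_deg) < PAN_LIMIT_DEG
```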

When the object goes out of view, for whatever reason, the CMKF will do a much more efficient job of predicting where the human might be than the KF, which has only the last few values of motion from which to extrapolate the curve of motion.

## 6. Conclusion

When curve matching is added to an existing KF model in an integrated manner, it has been shown to improve the performance of the KF, as displayed in the experiments based on infrared image analysis. When the percentage of repeated moves is around 20%, the improvement in mean error is almost 12%.

If the proposed hybrid algorithm is used to track objects or humans with a continuous tendency of repetitive motion, the mean error improves by 50-70% over a traditional implementation of the KF. Also, the algorithm uses exclusively the dataset obtained from the human it is tracking, and hence does not need previously stored training datasets. Training datasets, if available, will always boost the performance of the CMKF. This top-down and bottom-up algorithm is thus expected to perform much better than ordinary methods even if the tracked target has a unique but repetitive motion, because it is capable of acquiring the characteristics of the target in real time. More accurate CMAs and better human segmentation methods will further enhance the performance of the CMKF on human tracking, as will fusing more sensors to make the CMKF more accurate. These areas can be probed for further development of this algorithm.

## Declarations

### Acknowledgements

This study was supported in part by the dean's office of the School of Engineering at Virginia Commonwealth University, and NSF ECCS #1054333.

## References

1. Bar-Shalom Y, Li X, Kirubarajan T: *Estimation With Applications to Tracking and Navigation*. Wiley, New York; 2001.
2. Nelson C, Polana R: Detection and recognition of periodic, non-rigid motion. *Int J Comput Vis* 1997, 23: 261-282.
3. Albu AB, Bergevin R, Quirion S: Generic temporal segmentation of cyclic human motion. *Pattern Recognit* 2008, 41: 6-21.
4. Tsai PS, Shah M, Keiter K, Kasparis T: Cyclic motion detection for motion based recognition. *Pattern Recognit* 1994, 27: 1591-1603.
5. Mikic I, Trivedi M, Hunter E, Cosman P: Human body model acquisition and tracking using voxel data. *Int J Comput Vis* 2003, 53: 199-223.
6. Jang DS, Jang SW, Choi HI: 2D human body tracking with structural Kalman filter. *Pattern Recognit* 2002, 35: 2041-2049.
7. Rastogi AK, Chatterji BN, Ray AK: Design of a real-time tracking system for fast-moving objects. *IETE J Res* 1997, 43: 359-369.
8. Bellotto N, Hu H: Multisensor-based human detection and tracking for mobile service robots. *IEEE Trans Syst Man Cybern B Cybern* 2009, 39: 167-181.
9. Jung S, Wohn K: Tracking and motion estimation of the articulated object: a hierarchical Kalman filter approach. *Real-Time Imag* 1997, 3: 415-432.
10. Yun XP, Bachmann ER: Design, implementation, and experimental results of a quaternion-based Kalman filter for human body motion tracking. *IEEE Trans Robot* 2006, 22: 1216-1227.
11. Yang MT, Wang SC, Lin YY: A multimodal fusion system for people detection and tracking. *Int J Imag Syst Technol* 2005, 15: 131-142.
12. Jetto L, Longhi S, Venturini G: Development and experimental validation of an adaptive extended Kalman filter for the localization of mobile robots. *IEEE Trans Robot Autom* 1999, 15: 219-229.
13. Bauckhage C, Tsotsos JK, Bunn FE: Automatic detection of abnormal gait. *Image Vis Comput* 2009, 27: 108-115.
14. Zhou X, Bhanu B: Integrating face and gait for human recognition at a distance in video. *IEEE Trans Syst Man Cybern B Cybern* 2007, 37: 1119-1137.
15. Huang PS: Automatic gait recognition via statistical approaches for extended template features. *IEEE Trans Syst Man Cybern B Cybern* 2001, 31: 818-824.
16. Wang L, Tan T, Ning HZ, Hu W: Silhouette analysis-based gait recognition for human identification. *IEEE Trans Pattern Anal Mach Intell* 2003, 25: 1505-1518.
17. Yeasin M, Chaudhuri S: Toward automatic robot programming: learning human skill from visual data. *IEEE Trans Syst Man Cybern B Cybern* 2000, 30: 180-185.
18. Lerasle F, Rives G, Dhome M: Tracking of human limbs by multiocular vision. *Comput Vis Image Understand* 1999, 75: 229-246.
19. Yeasin M, Chaudhuri S: Development of an automated image processing system for kinematic analysis of human gait. *Real-Time Imag* 2000, 6: 55-67.
20. Weckesser P, Dillmann R: Modeling unknown environments with a mobile robot. *Robot Auton Syst* 1998, 23: 293-300.
21. Rosales R, Sclaroff S: A framework for heading-guided recognition of human activity. *Comput Vis Image Understand* 2003, 91: 335-367.
22. Wu SG, Hong L: Hand tracking in a natural conversational environment by the interacting multiple model and probabilistic data association (IMM-PDA) algorithm. *Pattern Recognit* 2005, 38: 2143-2158.
23. Himberg RH, Motai Y: Head orientation prediction: delta quaternions versus quaternions. *IEEE Trans Syst Man Cybern B Cybern* 2009, 39(6): 1382-1392.
24. Barrios C, Motai Y: Improving estimation of vehicle's trajectory using the latest global positioning system with Kalman filtering. *IEEE Trans Instrum Meas* 2011, 60(12): 3747-3755.
25. Welch G, Bishop G: An introduction to the Kalman filter. *SIGGRAPH* 2001 (course notes).
26. Maybeck P: *Stochastic Models, Estimation and Control*. Academic Press, New York; 1979.
27. Bohg J: Real-time structure from motion using Kalman filtering. *PhD Dissertation* 2005.
28. Saint-Marc P, Chen JS, Medioni G: Adaptive smoothing: a general tool for early vision. *IEEE Trans Pattern Anal Mach Intell* 1991, 13: 514-529.
29. Knoll C, Alcaniz M, Grau V, Monserrat C, Juan MC: Outlining of the prostate using snakes with shape restrictions based on the wavelet transform. *Pattern Recognit* 1999, 32: 1767-1781.
30. Porrill J, Pollard S: Curve matching and stereo calibration. *Image Vis Comput* 1991, 9: 45-50.
31. Wang JY, Cohen FS: 3-D object recognition and shape estimation from image contours using B-splines, shape invariant matching and neural network. *IEEE Trans Pattern Anal Mach Intell* 1994, 16: 13-23.
32. Bigun J, Bigun T, Nilsson K: Recognition by symmetry derivatives and the generalized structure tensor. *IEEE Trans Pattern Anal Mach Intell* 2004, 26: 1590-1605.
33. Wang YP, Lee SL, Toraichi K: Multiscale curvature-based shape representation using B-spline wavelets. *IEEE Trans Image Process* 1999, 8: 1586-1592.
34. Paglieroni DW, Ford GE, Tsujimoto EM: The position-orientation masking approach to parametric search for template matching. *IEEE Trans Pattern Anal Mach Intell* 1994, 16: 740-747.
35. Cohen LD: Auxiliary variables and two-step iterative algorithms in computer vision problems. *J Math Imag Vis* 1996, 6: 59-83.
36. Sebastian TB, Kimia BB: Curves vs. skeletons in object recognition. *Signal Process* 2005, 85: 247-263.
37. Super BJ: Fast correspondence-based system for shape retrieval. *Pattern Recognit Lett* 2004, 25: 217-225.
38. Gope C, Kehtarnavaz N, Hillman G, Wursig B: An affine invariant curve matching method for photo-identification of marine mammals. *Pattern Recognit* 2005, 38: 125-132.
39. Ross A, Dass SC, Jain AK: Fingerprint warping using ridge curve correspondences. *IEEE Trans Pattern Anal Mach Intell* 2006, 28: 19-30.
40. Orrite C, Herrero JE: Shape matching of partially occluded curves invariant under projective transformation. *Comput Vis Image Understand* 2004, 93: 34-64.
41. Olson CF: A general method for geometric feature matching and model extraction. *Int J Comput Vis* 2001, 45: 39-54.
42. Cohen FS, Huang ZH, Yang ZW: Invariant matching and identification of curves using B-spline curve representation. *IEEE Trans Image Process* 1995, 4: 1-17.
43. Huang ZH, Cohen FS: Affine-invariant B-spline moments for curve matching. *IEEE Trans Image Process* 1996, 5: 1473-1480.
44. Kishon E, Hastie T, Wolfson H: 3-D curve matching using splines. *J Robot Syst* 1991, 8: 723-743.
45. Avrithis Y, Xirouhakis Y, Kollias S: Affine-invariant curve normalization for object shape representation, classification, and retrieval. *Mach Vis Appl* 2001, 13: 80-94.
46. Gu YH, Tjahjadi T: Coarse-to-fine planar object identification using invariant curve features and B-spline modeling. *Pattern Recognit* 2000, 33: 1411-1422.
47. Kamgar-Parsi B: Matching sets of 3D line segments with application to polygonal arc matching. *IEEE Trans Pattern Anal Mach Intell* 1997, 19: 1090-1099.
48. Rosin PL: Representing curves at their natural scales. *Pattern Recognit* 1992, 25: 1315-1325.
49. Shan Y, Zhang ZY: New measurements and corner-guidance for curve matching with probabilistic relaxation. *Int J Comput Vis* 2002, 46: 157-171.
50. Marques JS: A fuzzy algorithm for curve and surface alignment. *Pattern Recognit Lett* 1998, 19: 797-803.
51. Frenkel M, Basri R: Curve matching using the fast marching method. In *Proceedings of Energy Minimization Methods in Computer Vision and Pattern Recognition*, Portugal; 2003, 2683: 35-51.
52. Wolfson HJ: On curve matching. *IEEE Trans Pattern Anal Mach Intell* 1990, 12: 483-489.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.