Open Access

Adaptive parameter particle CBMeMBer tracker for multiple maneuvering target tracking

EURASIP Journal on Advances in Signal Processing 2016, 2016:61

https://doi.org/10.1186/s13634-016-0363-x

Received: 1 March 2016

Accepted: 10 May 2016

Published: 17 May 2016

Abstract

The cardinality-balanced multi-target multi-Bernoulli (CBMeMBer) filter has been demonstrated to be a promising algorithm for multi-target tracking, and the multi-model (MM) method has been incorporated into the CBMeMBer filter to address multiple maneuvering target tracking. However, it is difficult to construct a proper set of models when the maneuvering parameters of the targets are unknown. Moreover, the number of models may grow exponentially when more unknown parameters must be taken into account to match the target motion modes, which may lead to prohibitive computational complexity. To address this issue, this paper incorporates the adaptive parameter estimation (APE) method into the framework of the CBMeMBer filter, so that a model with unknown maneuvering parameters can be modified adaptively using the selected parameter particles. Moreover, a particle labeling technique is introduced in the proposed algorithm in order to obtain individual target tracks, resulting in the adaptive parameter particle filter CBMeMBer (APPF-CBMeMBer) tracker. Simulation results show that the proposed algorithm can effectively track multiple maneuvering targets with abruptly changing parameters and exhibits better robustness than well-known MM-based approaches.

Keywords

Multi-target multi-Bernoulli filter; Adaptive parameter estimation (APE); Multi-model (MM) method; Multiple maneuvering target tracking

1 Introduction

In recent years, the random finite set (RFS) formulation [1–3] has emerged as an elegant approach to the multi-target tracking (MTT) problem and has generated substantial interest through the development of the probability hypothesis density (PHD) filter [2] and the cardinalized PHD (CPHD) filter [3]. The PHD and CPHD filters approximate the multi-target posterior by Poisson RFS densities and estimate the target states by recursively propagating the first-order moment of the multi-target posterior probability distribution. Existing implementations of the PHD filter mainly include the particle filter PHD (PF-PHD) [4, 5] and the Gaussian mixture PHD (GM-PHD) [6] filters, which have opened the door to numerous novel extensions and applications, as shown in [7–13]. In contrast to the PHD and CPHD filters, the multi-target multi-Bernoulli (MeMBer) recursion [1] was recently proposed by Mahler as a tractable approximation to the Bayesian multi-target recursion under low clutter density, achieving multi-target tracking by directly propagating the approximate posterior density of the targets. Unfortunately, the MeMBer filter has a significant cardinality bias. To tackle this problem, the cardinality-balanced MeMBer (CBMeMBer) filter and its improved versions, such as the δ-generalized labeled multi-Bernoulli (δ-GLMB) and LMB filters, were developed in [14–18]. They eliminate the posterior cardinality bias by modifying the measurement-updated track parameters. These algorithms exhibit good MTT performance only when the model parameters are known precisely. For scenarios with unknown measurement noise variances, clutter, and detection probability, improved RFS filters capable of jointly estimating the target states and the unknown parameters have been proposed (see, e.g., [9, 15, 19] and references therein). However, these methods still suffer from performance degradation when the targets maneuver with unknown, abruptly changing parameters.

For maneuvering target tracking, the jump Markov system (JMS), which switches among a set of candidate models in a Markovian fashion, has proved to be an effective approach [20, 21]. Pasha et al. introduced the linear JMS into the PHD filter and derived a closed-form solution for the PHD recursion in [22]. Furthermore, the unscented transform (UT) and the linear fractional transformation (LFT) were combined with this closed-form solution for nonlinear jump Markov multi-target models in [23, 24]. In [25], a GM-PHD filter for jump Markov models was developed by employing the best-fitting Gaussian (BFG) approximation. These algorithms assume Gaussianity of the PHD distribution, which may limit the scope of their applications. The multiple-model PHD (MM-PHD) and MM-CPHD filters implemented using the sequential Monte Carlo (SMC) method were presented in [26, 27], and a corrected version, also known as the jump Markov multi-target Bayes filter, was later proposed in [28]. The MMP-CBMeMBer filter [29], however, achieves higher accuracy than the MM-PHD filter because the multi-Bernoulli-based method propagates a parameterized approximation of the posterior cardinality distribution. Most MM-based filters track multiple maneuvering targets through the interaction of multiple models, realized by combining estimates from different models according to their respective model likelihoods. The difficulty of applying them to targets with abruptly changing parameters lies in the need to specify the set of candidate models a priori. The number of models may grow exponentially when more unknown parameters must be taken into account to match the target motion modes, which may lead to prohibitive computational complexity.

In this work, we incorporate the adaptive parameter estimation (APE) technique into the framework of the CBMeMBer filter to address the problem of multiple maneuvering target tracking. The adaptive Liu and West (LW) filter is adopted to propagate the posterior marginal of the time-varying parameters as a mixture of multivariate Gaussian distributions [30–32]. The resulting adaptive parameter particle filter CBMeMBer (APPF-CBMeMBer) tracker can handle multiple maneuvering targets in the presence of unknown model parameters. Simulation results show that the proposed algorithm exhibits better robustness and improved tracking performance over the MMP-CBMeMBer algorithm. Furthermore, in order to obtain individual target tracks, a particle labeling technique is introduced in the proposed algorithm.

The remainder of the paper is organized as follows. Section 2 formulates the problem of tracking a target in the presence of unknown model parameters. It also briefly reviews the APE technique and the CBMeMBer filter. Section 3 proposes the APPF-CBMeMBer algorithm with the closed-form solution and describes the track maintenance method. Simulation results are given in Section 4. Finally, conclusions are provided in Section 5.

2 Backgrounds

2.1 Formulation of the problem

The state-space models for tracking a single target moving on a two-dimensional plane are given by
$$ {\boldsymbol{x}}_{k+1}=\mathbf{F}{\boldsymbol{x}}_k+\mathbf{G}{\boldsymbol{v}}_k $$
(1)
$$ {\boldsymbol{y}}_k=h\left({\boldsymbol{x}}_k\right)+{\boldsymbol{w}}_k $$
(2)
where \( {\boldsymbol{x}}_k={\left[{x}_k,{v}_{x_k},{y}_k,{v}_{y_k}\right]}^{\mathrm{T}} \) denotes the target state at time k, with \( \left({x}_k,{y}_k\right) \) and \( \left({v}_{x_k},{v}_{y_k}\right) \) denoting its position and velocity, respectively. F is the state transition matrix and G is the process noise gain matrix. y k is the measurement vector. v k and w k are the process noise and the measurement noise, respectively; they are independent of each other and modeled as zero-mean Gaussian random vectors with covariances Q k and R k .
In many practical applications, the state-space model in (1) and (2) may contain unknown parameters. For example, if the motion of a target follows a coordinated-turn (CT) model [26], the state transition matrix would become
$$ \mathbf{F}\left(\omega \right)=\left[\begin{array}{cccc}1& \frac{ \sin \omega T}{\omega}& 0& -\frac{1- \cos \omega T}{\omega}\\ {}0& \cos \omega T& 0& - \sin \omega T\\ {}0& \frac{1- \cos \omega T}{\omega}& 1& \frac{ \sin \omega T}{\omega}\\ {}0& \sin \omega T& 0& \cos \omega T\end{array}\right] $$
(3)

where T is the sampling period. The maneuvering parameter (the turn rate ω) may be unknown and time-varying. In this case, the posterior distribution of the target state and the unknown maneuvering parameter must be estimated jointly from the measurements.
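For concreteness, the CT transition matrix in (3) can be assembled numerically. The sketch below (the function name and the constant-velocity fallback for near-zero turn rates are choices of this sketch, not from the paper) builds F(ω) and propagates a state one step:

```python
import numpy as np

def ct_transition_matrix(omega, T):
    """Coordinated-turn (CT) transition matrix F(omega) of Eq. (3) for
    the state x = [x, vx, y, vy]^T, turn rate omega (rad/s), period T (s)."""
    if abs(omega) < 1e-9:  # near-zero turn rate: fall back to constant velocity
        return np.array([[1.0, T, 0.0, 0.0],
                         [0.0, 1.0, 0.0, 0.0],
                         [0.0, 0.0, 1.0, T],
                         [0.0, 0.0, 0.0, 1.0]])
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([[1.0,  s / omega,       0.0, -(1 - c) / omega],
                     [0.0,  c,               0.0, -s],
                     [0.0, (1 - c) / omega,  1.0,  s / omega],
                     [0.0,  s,               0.0,  c]])

# Propagate one step with a 5 deg/s turn at T = 1 s; the CT model
# rotates the velocity vector, so the speed is preserved.
x = np.array([0.0, 10.0, 0.0, 0.0])          # moving along +x at 10 m/s
x_next = ct_transition_matrix(np.deg2rad(5.0), 1.0) @ x
```

Note that the velocity block of F(ω) is a rotation, which is why a wrong assumed ω degrades the predicted heading rather than the predicted speed.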

Let θ k be a time-varying parameter in the state-space model. The posterior probability density function (PDF) of the target state vector x k and the unknown parameter vector θ k conditioned on the measurements up to time k is, according to Bayes’ rule,
$$ p\left({\boldsymbol{x}}_k,{\theta}_k\Big|{\boldsymbol{y}}_{1:k}\right)=\frac{p\left({\boldsymbol{y}}_k\Big|{\boldsymbol{x}}_k,{\theta}_k\right)p\left({\boldsymbol{x}}_k,{\theta}_k\Big|{\boldsymbol{y}}_{1:k-1}\right)}{{\displaystyle \int p\left({\boldsymbol{y}}_k\Big|{\boldsymbol{x}}_k,{\theta}_k\right)p\left({\boldsymbol{x}}_k,{\theta}_k\Big|{\boldsymbol{y}}_{1:k-1}\right)d{\boldsymbol{x}}_kd{\theta}_k}} $$
(4)
where p(x k , θ k |y 1 : k − 1) is the predictive PDF and can be expressed as
$$ p\left({\boldsymbol{x}}_k,{\theta}_k\Big|{\boldsymbol{y}}_{1:k-1}\right)={\displaystyle \int p\left({\boldsymbol{x}}_k\Big|{\boldsymbol{x}}_{k-1},{\theta}_{k-1}\right)p\left({\boldsymbol{x}}_{k-1},{\theta}_{k-1}\Big|{\boldsymbol{y}}_{1:k-1}\right)d{\boldsymbol{x}}_{k-1}d{\theta}_{k-1}} $$
(5)

Deriving exact recursive solutions for the posterior distribution p(x k , θ k |y 1 : k ) from (4) and (5) is in general intractable; as a result, approximate solutions are usually resorted to. One such approach is the particle filter (PF) [4, 14, 31].
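The recursion (4)–(5) can be sketched as a single bootstrap-PF step: sample from the transition density (prediction), then reweight by the measurement likelihood (Bayes update). Here `propagate` and `loglik` are user-supplied model callables, an assumption of this sketch:

```python
import numpy as np

def bootstrap_pf_step(x, w, propagate, loglik):
    """One bootstrap-PF approximation of (4)-(5): draw particles from
    the transition density, multiply the prior weights by the
    measurement likelihood, and normalize."""
    x = propagate(x)                 # x_k^i ~ p(x_k | x_{k-1}^i)
    logw = np.log(w) + loglik(x)     # prior weight times likelihood
    logw -= logw.max()               # stabilize before exponentiating
    w = np.exp(logw)
    return x, w / w.sum()
```

For a linear-Gaussian toy model the weighted particle mean converges to the Kalman posterior mean as the particle count grows.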

2.2 Adaptive parameter estimation

In [30, 31], the Liu and West (LW) filter was proposed for the joint identification of static parameters and the target states. In particular, the marginal posterior distribution of the unknown parameters is approximated and propagated using a mixture of multivariate Gaussian distributions. In [32], the particle learning technique was introduced into the LW filter. The obtained APE filter can handle both static and time-varying parameters.

The development of the APE method starts with factorizing the predicting PDF p(x k , θ k |y 1 : k − 1) into
$$ p\left({\boldsymbol{x}}_k,{\theta}_k\Big|{\boldsymbol{y}}_{1:k-1}\right)=p\left({\boldsymbol{x}}_k\Big|{\boldsymbol{y}}_{1:k-1},{\theta}_{k-1}\right)p\left({\theta}_k\Big|{\boldsymbol{y}}_{1:k-1}\right) $$
(6)
The predicting distribution p(θ k |y 1 : k − 1) of the time-varying parameter vector θ k can be approximated via
$$ p\left({\theta}_k\Big|{\boldsymbol{y}}_{1:k-1}\right)\approx \left\{\begin{array}{l}{\displaystyle \sum_{i=1}^N{\omega}_{k-1}^i}N\left({\theta}_k\Big|{m}_{k-1}^i,{h}^2{V}_{k-1}\right),\ \mathrm{with}\ \mathrm{probability}\ 1-\beta \\ {}{p}_{\theta}\left({\theta}_0\right),\kern8em \mathrm{with}\ \mathrm{probability}\ \beta \end{array}\right. $$
(7)
where \( N\left({\theta}_k\Big|{m}_{k-1}^i,{h}^2{V}_{k-1}\right) \) is a Gaussian component with mean \( {m}_{k-1}^i \) and covariance \( {h}^2{V}_{k-1} \), and \( {\omega}_{k-1}^i \) is the associated weight. Here, β is introduced to model the temporal evolution of θ k . It is defined as the probability that θ k is subject to an abrupt change at time k, or equivalently, that time instant k is a changepoint [32]. The time-varying vector θ k is assumed to be piecewise constant between two neighboring changepoints. As shown in (7), if there is no abrupt change in θ k , its predicting PDF follows a Gaussian mixture model of N components. The mean and covariance of each component are obtained by
$$ {m}_{k-1}^i=\alpha {\theta}_{k-1}^i+\left(1-\alpha \right){\overline{\theta}}_{k-1} $$
(8)
$$ {V}_{k-1}={\displaystyle \sum_{i=1}^N{\omega}_{k-1}^i\left({\theta}_{k-1}^i-{\overline{\theta}}_{k-1}\right){\left({\theta}_{k-1}^i-{\overline{\theta}}_{k-1}\right)}^T} $$
(9)
where \( {\overline{\theta}}_{k-1}={\displaystyle \sum_{i=1}^N{\omega}_{k-1}^i{\theta}_{k-1}^i} \) is the minimum mean square error (MMSE) estimate of θ k − 1 at time k − 1, and \( \alpha =\sqrt{1-{h}^2} \) is the shrinkage factor suggested in [33] to correct for the over-dispersion of the Gaussian mixture model. In the case that time instant k is a changepoint, the predicting distribution of the time-varying vector θ k will be reset to p θ (θ 0), its prior distribution.
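A minimal sketch of the kernel construction in (8)–(9) for a scalar parameter, illustrating why the shrinkage factor is used: the shrunk mixture keeps both the mean and the variance of the particle cloud, instead of over-dispersing it. Function and variable names are choices of this sketch:

```python
import numpy as np

def shrinkage_kernels(theta, w, h=0.1):
    """Kernel means m_i (Eq. 8) and common covariance V (Eq. 9) for the
    Liu-West Gaussian-mixture approximation of p(theta_k | y_{1:k-1}).
    theta: (N,) parameter particles, w: (N,) normalized weights."""
    alpha = np.sqrt(1.0 - h ** 2)            # shrinkage factor of [33]
    theta_bar = np.sum(w * theta)            # MMSE estimate of theta_{k-1}
    m = alpha * theta + (1.0 - alpha) * theta_bar
    V = np.sum(w * (theta - theta_bar) ** 2)  # scalar-parameter covariance
    return m, V, theta_bar
```

Since α² + h² = 1, the mixture variance α²V + h²V equals V exactly, which is the over-dispersion correction mentioned above.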
With the predicting PDF given in (7), the APE filter utilizes the PF to produce an approximation of the posterior distribution p(x k , θ k |y 1 : k ) in (4). Suppose at time k − 1, the posterior distribution is represented by N particles \( {\left\{{\boldsymbol{x}}_{k-1}^i,{\theta}_{k-1}^i\right\}}_{i=1}^N \) with weights \( {\omega}_{k-1}^i \). At time k, each particle is given two weights [32]
$$ \begin{array}{cccc}\hfill {\omega}_{k,1}^i\propto p\left({\boldsymbol{y}}_k\Big|{\mu}_k^i,{\theta}_{k-1}^i\right),\hfill & \hfill \mathrm{where}\hfill & \hfill {\theta}_k^i\sim N\left({\xi}_k\Big|{m}_{k-1}^i,{h}^2{V}_{k-1}\right),\hfill & \hfill {\mu}_k^i=E\left[{\boldsymbol{x}}_k^i\Big|{\boldsymbol{x}}_{k-1}^i,{\theta}_{k-1}^i\right]\hfill \end{array} $$
(10)
$$ \begin{array}{cccc}\hfill {\omega}_{k,2}^i\propto p\left({\boldsymbol{y}}_k\Big|{\mu}_k^i,{\gamma}_k^i\right),\hfill & \hfill \mathrm{where}\hfill & \hfill {\gamma}_k^i\sim {p}_{\theta}\left({\theta}_{{}^0}\right),\hfill & \hfill {\mu}_k^i=E\left[{\boldsymbol{x}}_k^i\Big|{\boldsymbol{x}}_{k-1}^i,{\gamma}_k^i\right]\hfill \end{array} $$
(11)
which essentially leads to 2N particles. \( {\omega}_{k,1}^i \) and \( {\omega}_{k,2}^i \) correspond to the probability of the current measurement y k when there is no changepoint and when there is a changepoint, respectively. In the former case, the value of the time-varying parameter vector \( {\theta}_k^i \) is drawn from the Gaussian distribution \( N\left({\xi}_k\Big|{m}_{k-1}^i,{h}^2{V}_{k-1}\right) \), while in the latter case, its value \( {\gamma}_k^i \) is produced using the prior distribution p θ (θ 0) (see also (7)). Resampling is then performed on the basis of the weights \( \left(1-\beta \right){\omega}_{k,1}^i \) and \( \beta {\omega}_{k,2}^i \) to select N particles out of the 2N particles, which are propagated to approximate the posterior p(x k , θ k |y 1 : k ) at time k. For more details on the APE filter for tracking a single maneuvering target, please refer to [32].
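The two-weight selection of (10)–(11) can be sketched as follows for a scalar parameter; `prior_sample` and `loglik` stand in for the prior p θ (θ 0) and the measurement likelihood (both assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def ape_select(m, V, prior_sample, loglik, beta, h=0.1):
    """APE changepoint handling for N scalar-parameter particles: draw a
    'no-change' candidate from each shrunk kernel (cf. Eq. 10) and a
    'change' candidate from the prior (cf. Eq. 11), weight both by the
    measurement likelihood, then keep N of the 2N candidates with
    probabilities proportional to (1-beta)*w1 and beta*w2."""
    N = len(m)
    keep = rng.normal(m, h * np.sqrt(V))   # theta_k^i ~ N(m_i, h^2 V)
    new = prior_sample(N)                  # gamma_k^i ~ p_theta(theta_0)
    w = np.concatenate([(1 - beta) * np.exp(loglik(keep)),
                        beta * np.exp(loglik(new))])
    idx = rng.choice(2 * N, size=N, p=w / w.sum())
    return np.concatenate([keep, new])[idx]
```

When the measurement strongly contradicts the current kernels (a maneuver onset), nearly all selected particles come from the prior branch, which is how the filter recovers from an abrupt parameter change.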

2.3 Cardinality-balanced MeMBer filter

The CBMeMBer filter was proposed in [14]; it eliminates the posterior cardinality bias present in the MeMBer filter [1] by modifying the measurement-updated track parameters. The CBMeMBer recursion is summarized as follows.

Prediction: Assume the posterior multi-target density at time k − 1 can be presented by the multi-Bernoulli parameter set, i.e.,
$$ {\pi}_{k-1}={\left\{\left({r}_{k-1}^{(i)},{p}_{k-1}^{(i)}\right)\right\}}_{i=1}^{M_{k-1}} $$
(12)
where \( {r}_{k-1}^{(i)}\in \left(0,1\right) \) and \( {p}_{k-1}^{(i)} \) denote the existence probability and probability density of the i th Bernoulli component, respectively. M k − 1 denotes the number of the posterior hypothesized tracks at time k − 1.
Then, the predicted multi-target density π k|k − 1 can also be expressed by the multi-Bernoulli parameter set and is given by
$$ {\pi}_{k\Big|k-1}={\left\{\left({r}_{P,k\Big|k-1}^{(i)},{p}_{P,k\Big|k-1}^{(i)}\right)\right\}}_{i=1}^{M_{k-1}}\cup {\left\{\left({r}_{\varGamma, k}^{(i)},{p}_{\varGamma, k}^{(i)}\right)\right\}}_{i=1}^{M_{\varGamma, k}} $$
(13)
where \( {\left\{\left({r}_{P,k\Big|k-1}^{(i)},{p}_{P,k\Big|k-1}^{(i)}\right)\right\}}_{i=1}^{M_{k-1}} \) and \( {\left\{\left({r}_{\varGamma, k}^{(i)},{p}_{\varGamma, k}^{(i)}\right)\right\}}_{i=1}^{M_{\varGamma, k}} \) denote the parameter sets of the multi-Bernoulli RFS of the surviving targets and the spontaneous births, respectively. M k − 1 and M Γ,k denote the predicted hypothesized track number of the surviving targets and the spontaneous births, respectively.
$$ {r}_{P,k\Big|k-1}^{(i)}={r}_{k-1}^{(i)}\left\langle {p}_{k-1}^{(i)},{p}_{S,k}\right\rangle $$
(14)
$$ {p}_{P,k\Big|k-1}^{(i)}(x)=\frac{\left\langle {f}_{k\Big|k-1}\left(x\Big|\cdot \right),{p}_{k-1}^{(i)}{p}_{S,k}\right\rangle }{\left\langle {p}_{k-1}^{(i)},{p}_{S,k}\right\rangle } $$
(15)
where f k|k − 1(x|·) denotes the single target transition density and p S,k denotes the target survival probability.

As can be seen from Eq. (13), in essence, the multi-Bernoulli parameter set for the predicted multi-target density π k|k − 1 is formed by the union of the multi-Bernoulli parameter sets for the surviving targets and the spontaneous births. The total number of predicted hypothesized tracks is M k|k − 1 = M k − 1 + M Γ,k .
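As a sketch of this prediction step with particle approximations (all numerical values below are illustrative, not from the paper):

```python
import numpy as np

def predict_existence(r, w, p_s):
    """Eq. (14): predicted existence probability of one Bernoulli
    component, r_P = r * <p, p_S>, with the inner product approximated
    by the weighted particle sum sum_j w_j * p_S(x_j)."""
    return r * float(np.sum(np.asarray(w) * np.asarray(p_s)))

# Eq. (13): the predicted parameter set is the union of the surviving
# components (existence shrunk by the survival probability) and the
# birth components (appended as given).
r_surviving = [predict_existence(0.9, [0.25] * 4, [0.98] * 4),
               predict_existence(0.4, [0.25] * 4, [0.98] * 4)]
r_birth = [0.03, 0.03]
r_predicted = r_surviving + r_birth   # M_k|k-1 = M_{k-1} + M_{Gamma,k}
```

The sum of the predicted existence probabilities is the expected predicted cardinality, which is why a constant survival probability scales it directly.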

Update: Assume the predicted multi-target density at time k can be expressed by a known multi-Bernoulli parameter set as follows:
$$ {\pi}_{k\Big|k-1}={\left\{\left({r}_{k\Big|k-1}^{(i)},{p}_{k\Big|k-1}^{(i)}\right)\right\}}_{i=1}^{M_{k\Big|k-1}} $$
(16)
Then, the posterior multi-target density can be approximated by the union of the multi-Bernoulli parameter sets for the legacy tracks [the first term in Eq. (17)] and measurement-corrected tracks [the second term in Eq. (17)], i.e.,
$$ {\pi}_k\approx {\left\{\left({r}_{L,k}^{(i)},{p}_{L,k}^{(i)}\right)\right\}}_{i=1}^{M_{k\Big|k-1}}\cup {\left\{\left({r}_{U,k}^{*}(y),{p}_{U,k}^{*}\left(\cdot ;y\right)\right)\right\}}_{y\in {Y}_k} $$
(17)
where
$$ {r}_{L,k}^{(i)}={r}_{k\Big|k-1}^{(i)}\frac{1-\left\langle {p}_{k\Big|k-1}^{(i)},{p}_{D,k}\right\rangle }{1-{r}_{k\Big|k-1}^{(i)}\left\langle {p}_{k\Big|k-1}^{(i)},{p}_{D,k}\right\rangle } $$
(18)
$$ {p}_{L,k}^{(i)}={p}_{k\Big|k-1}^{(i)}(x)\frac{1-{p}_{D,k}(x)}{1-\left\langle {p}_{k\Big|k-1}^{(i)},{p}_{D,k}\right\rangle } $$
(19)
$$ {r}_{U,k}^{*}(y)=\frac{{\displaystyle {\sum}_{i=1}^{M_{k\Big|k-1}}\frac{r_{k\Big|k-1}^{(i)}\left(1-{r}_{k\Big|k-1}^{(i)}\right)\left\langle {p}_{k\Big|k-1}^{(i)},{\psi}_{k,y}\right\rangle }{{\left(1-{r}_{k\Big|k-1}^{(i)}\left\langle {p}_{k\Big|k-1}^{(i)},{p}_{D,k}\right\rangle \right)}^2}}}{\kappa_k(y)+{\displaystyle {\sum}_{i=1}^{M_{k\Big|k-1}}\frac{r_{k\Big|k-1}^{(i)}\left\langle {p}_{k\Big|k-1}^{(i)},{\psi}_{k,y}\right\rangle }{1-{r}_{k\Big|k-1}^{(i)}\left\langle {p}_{k\Big|k-1}^{(i)},{p}_{D,k}\right\rangle }}} $$
(20)
$$ {p}_{U,k}^{*}\left(x;y\right)=\frac{{\displaystyle {\sum}_{i=1}^{M_{k\Big|k-1}}\frac{r_{k\Big|k-1}^{(i)}}{1-{r}_{k\Big|k-1}^{(i)}}{p}_{k\Big|k-1}^{(i)}(x){\psi}_{k,y}(x)}}{{\displaystyle {\sum}_{i=1}^{M_{k\Big|k-1}}\frac{r_{k\Big|k-1}^{(i)}}{1-{r}_{k\Big|k-1}^{(i)}}\left\langle {p}_{k\Big|k-1}^{(i)},{\psi}_{k,y}\right\rangle }} $$
(21)
$$ {\psi}_{k,y}(x)={p}_k\left(y\Big|x\right){p}_{D,k}(x) $$
(22)
p k (y|x) is the single target measurement likelihood, p D,k (x) is the target detection probability, Y k is the measurement set, and κ k (y) is the intensity of the Poisson clutter. The total number of posterior hypothesized tracks is M k  = M k|k − 1 + |Y k |.
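A sketch of the existence-probability updates (18) and (20), with the inner products ⟨p, p D ⟩ and ⟨p, ψ k,y ⟩ already reduced to scalars per component (function names are choices of this sketch):

```python
import numpy as np

def legacy_existence(r, pd):
    """Eq. (18) with a constant detection probability pd: a missed
    detection shrinks the existence probability of a legacy track."""
    return r * (1 - pd) / (1 - r * pd)

def updated_existence(rs, psis, pds, kappa):
    """Eq. (20) for one measurement y.  rs: predicted existence
    probabilities; psis: <p_i, psi_{k,y}>; pds: <p_i, p_D>; kappa:
    clutter intensity at y."""
    rs, psis, pds = map(np.asarray, (rs, psis, pds))
    num = np.sum(rs * (1 - rs) * psis / (1 - rs * pds) ** 2)
    den = kappa + np.sum(rs * psis / (1 - rs * pds))
    return num / den
```

With a high detection probability, a missed detection drives the legacy existence probability down sharply, while a measurement well explained by one component (and weak clutter at y) drives the measurement-corrected existence probability toward one.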

3 APPF-CBMeMBer tracker

3.1 Adaptive parameter particle filter

In this section, we propose the adaptive parameter particle filter to implement the CBMeMBer filter. Notice that the particles consist of the state and maneuvering parameter with associated weights. The detailed processes of particle implementation are described as follows.

(1) Prediction: Suppose that at time k − 1, the posterior multi-target density is described as \( {\pi}_{k-1}={\left\{\left({r}_{k-1}^{(i)},{p}_{k-1}^{(i)}\left(x,\theta \right)\right)\right\}}_{i=1}^{M_{k-1}} \), where θ denotes the maneuvering parameter of a Bernoulli component. \( {p}_{k-1}^{(i)}\left(x,\theta \right) \) is comprised of a set of weighted samples \( {\left\{{w}_{k-1}^{\left(i,j\right)},{x}_{k-1}^{\left(i,j\right)},{\theta}_{k-1}^{\left(i,j\right)}\right\}}_{j=1}^{L_{{}_{k-1}}^{(i)}} \), i.e.,
$$ {p}_{k-1}^{(i)}\left(x,\theta \right)={\displaystyle \sum_{j=1}^{L_{k-1}^{(i)}}{w}_{k-1}^{\left(i,j\right)}\delta \left(x-{x}_{k-1}^{\left(i,j\right)},\theta -{\theta}_{k-1}^{\left(i,j\right)}\right)} $$
(23)
\( {L}_{k-1}^{(i)} \) denotes the number of particles of the i th Bernoulli component. Then, the predicted multi-target density π k|k − 1 can be expressed as \( {\pi}_{k\Big|k-1}={\left\{\left({r}_{P,k\Big|k-1}^{(i)},{p}_{P,k\Big|k-1}^{(i)}\left(x,\theta \right)\right)\right\}}_{i=1}^{M_{k-1}}\cup {\left\{\left({r}_{\varGamma, k}^{(i)},{p}_{\varGamma, k}^{(i)}\left(x,\theta \right)\right)\right\}}_{i=1}^{M_{\varGamma, k}} \).

(2) Parameter particle selection:

(2.1) Predict parameter particles \( {\left\{{w}_{k-1}^{\left(i,j\right)},{x}_{k-1}^{\left(i,j\right)},{\theta}_{k-1}^{\left(i,j\right)}\right\}}_{j=1}^{L_{{}_{k-1}}^{(i)}} \), where \( {\theta}_{k-1}^{\left(i,j\right)}\sim N\left(\cdot \Big|{\overline{\theta}}_{k-1}^{(i)},{h}^2{V}_{k-1}^{(i)}\right) \); \( {\overline{\theta}}_{k\hbox{-} 1}^{(i)} \) and \( {V}_{k-1}^{(i)} \) denote the mean and covariance of the maneuvering parameter of the i th component at time k − 1 and are obtained in the parameter update step (see Eqs. (47)–(49)). Given importance densities \( {q}_k\left(\left.{\mathbf{x}}_k\right|{\mathbf{x}}_{k-1}^{\left(i,j\right)},{\theta}_{k-1}^{\left(i,j\right)},{\boldsymbol{Y}}_k\right) \), the steps of parameter particle prediction are as follows:
$$ {\mathbf{x}}_{P,k\Big|k-1}^{\left(i,j\right)}\sim {q}_k\left(\left.{\mathbf{x}}_k\right|{\mathbf{x}}_{k-1}^{\left(i,j\right)},{\theta}_{k-1}^{\left(i,j\right)},{\boldsymbol{Y}}_k\right),\kern0.6em i=1,\dots, {M}_{k-1},\ j=1,\dots, {L}_{k-1}^{(i)} $$
(24)
$$ {\theta}_{P,k\Big|k-1}^{\left(i,j\right)}={\theta}_{k-1}^{\left(i,j\right)} $$
(25)
$$ {\omega}_{P,k\Big|k-1}^{\left(i,j\right)}\propto \frac{f_{k\Big|k-1}\left({\mathbf{x}}_{P,k\Big|k-1}^{\left(i,j\right)}\Big|{\mathbf{x}}_{k-1}^{\left(i,j\right)},{\theta}_{k-1}^{\left(i,j\right)}\right){p}_{S,k}\left({\mathbf{x}}_{k-1}^{\left(i,j\right)}\right)}{q_k\left(\left.{\mathbf{x}}_{P,k\Big|k-1}^{\left(i,j\right)}\right|{\mathbf{x}}_{k-1}^{\left(i,j\right)},{\theta}_{k-1}^{\left(i,j\right)},{\boldsymbol{Y}}_k\right)}{w}_{k-1}^{\left(i,j\right)} $$
(26)
At time k, each particle is given another weight, which is proportional to the predictive likelihood under the no-changepoint parameter \( {\theta}_{k-1}^{\left(i,j\right)} \), i.e.,
$$ {\omega}_1^{\left(i,j\right)}\propto p\left({\boldsymbol{Y}}_k\Big|{\mathbf{x}}_{P,k\Big|k-1}^{\left(i,j\right)},{\theta}_{k-1}^{\left(i,j\right)}\right) $$
(27)
(2.2) In order to obtain better parameter particles, produce new parameter particles with random parameter sampled from the initial distribution, i.e., \( {\left\{{w}_{k-1}^{\left(i,j\right)},{x}_{k-1}^{\left(i,j\right)},{\gamma}_k^{\left(i,j\right)}\right\}}_{j={L}_{{}_{k-1}}^{(i)}+1}^{2{L}_{{}_{k-1}}^{(i)}} \), where \( {\gamma}_k^{\left(i,j\right)}\sim {p}_{\theta_0}\left(\cdot \right) \). The steps of new parameter particle prediction are as follows:
$$ {\mathbf{x}}_{P,k\Big|k-1}^{\left(i,j\right)}\sim {q}_k\left(\left.{\mathbf{x}}_k\right|{\mathbf{x}}_{k-1}^{\left(i,j\right)},{\gamma}_k^{\left(i,j\right)},{\boldsymbol{Y}}_k\right),\kern0.6em i=1,\dots, {M}_{k-1},\ j={L}_{k-1}^{(i)}+1,\dots, 2{L}_{k-1}^{(i)} $$
(28)
$$ {\omega}_{P,k\Big|k-1}^{\left(i,j\right)}\propto \frac{f_{k\Big|k-1}\left({\mathbf{x}}_{P,k\Big|k-1}^{\left(i,j\right)}\Big|{\mathbf{x}}_{k-1}^{\left(i,j\right)},{\gamma}_k^{\left(i,j\right)}\right){p}_{S,k}\left({\mathbf{x}}_{k-1}^{\left(i,j\right)}\right)}{q_k\left(\left.{\mathbf{x}}_{P,k\Big|k-1}^{\left(i,j\right)}\right|{\mathbf{x}}_{k-1}^{\left(i,j\right)},{\gamma}_k^{\left(i,j\right)},{\boldsymbol{Y}}_k\right)}{w}_{k-1}^{\left(i,j\right)} $$
(29)
At time k, each particle is also given another weight, which is proportional to the predictive likelihood under the changepoint parameter \( {\gamma}_k^{\left(i,j\right)} \), i.e.,
$$ {\omega}_2^{\left(i,j\right)}\propto p\left({\boldsymbol{Y}}_k\Big|{\mathbf{x}}_{P,k\Big|k-1}^{\left(i,j\right)},{\gamma}_k^{\left(i,j\right)}\right) $$
(30)
(2.3) We then select \( {L}_{k-1}^{(i)} \) particles out of the \( 2{L}_{k-1}^{(i)} \) obtained particles. Denote their indices by \( {l}^j\in \left\{1,\dots, 2{L}_{k-1}^{(i)}\right\} \), \( j=1,\cdots, {L}_{k-1}^{(i)} \); the selection process is as follows.

(a) For \( j=1,\cdots, {L}_{k-1}^{(i)} \), select index l j with probability \( \left(1-\beta \right){\omega}_1^{\left(i,j\right)} \) from \( \left[1,\dots, {L}_{k-1}^{(i)}\right] \) and \( \beta {\omega}_2^{\left(i,j\right)} \) from \( \left[{L}_{k-1}^{(i)}+1,\dots, 2{L}_{k-1}^{(i)}\right] \), where β is the probability that an abrupt change occurred and is assumed to be known.

(b) If \( {l}^j\in \left\{1,\dots, {L}_{k-1}^{(i)}\right\} \), update the time-varying parameter particles by \( {\theta}_{P,k\Big|k-1}^{\left(i,j\right)}={\theta}_{P,k\Big|k-1}^{\left(i,{l}^j\right)} \).

(c) If \( {l}^j\in \left\{{L}_{k-1}^{(i)}+1,\dots, 2{L}_{k-1}^{(i)}\right\} \), set the time-varying parameter particles to \( {\theta}_{P,k\Big|k-1}^{\left(i,j\right)}={\gamma}_k^{\left(i,{l}^j\right)} \).
(2.4) Relabel the selected particles with indices \( j=1,\cdots, {L}_{k-1}^{(i)} \), i.e., \( {\mathbf{x}}_{P,k\Big|k-1}^{\left(i,j\right)}={\mathbf{x}}_{P,k\Big|k-1}^{\left(i,{l}^j\right)},\ {\omega}_{P,k\Big|k-1}^{\left(i,j\right)}={\omega}_{P,k\Big|k-1}^{\left(i,{l}^j\right)} \), and sample \( {L}_{\varGamma, k}^{(i)} \) new-born particles from the proposal distribution \( {b}_k\left(\left.\cdot \right|{\theta}_{\varGamma, k}^{\left(i,j\right)},{\boldsymbol{Y}}_k\right) \) via
$$ {\theta}_{\varGamma, k}^{\left(i,j\right)}\sim {p}_{\theta_0}\left(\cdot \right),\kern0.6em i=1,\dots, {M}_{k-1},j=1,\dots, {L}_{\varGamma, k}^{(i)} $$
(31)
$$ {\mathbf{x}}_{\varGamma, k}^{\left(i,j\right)}\sim {b}_k\left(\left.\cdot \right|{\theta}_{\varGamma, k}^{\left(i,j\right)},{\boldsymbol{Y}}_k\right) $$
(32)
$$ {\omega}_{\varGamma, k}^{\left(i,j\right)}\propto \frac{p_{\varGamma, k}\left({\mathbf{x}}_{\varGamma, k}^{\left(i,j\right)}\right)}{b_k\left(\left.{\mathbf{x}}_{\varGamma, k}^{\left(i,j\right)}\right|{\theta}_{\varGamma, k}^{\left(i,j\right)},{\boldsymbol{Y}}_k\right)} $$
(33)
(3) Update: Assume the predicted multi-target density at time k is \( {\pi}_{k\Big|k-1}={\left\{\left({r}_{k\Big|k-1}^{(i)},{p}_{k\Big|k-1}^{(i)}\left(x,\theta \right)\right)\right\}}_{i=1}^{M_{k\Big|k-1}} \), where M k|k − 1 = M k − 1 + M Γ,k . Each \( {p}_{k\Big|k-1}^{(i)}\left(x,\theta \right) \), i = 1, …, M k|k − 1, is comprised of a set of weighted samples \( {\left\{{w}_{k\Big|k-1}^{\left(i,j\right)},{x}_{k\Big|k-1}^{\left(i,j\right)},{\theta}_{k\Big|k-1}^{\left(i,j\right)}\right\}}_{j=1}^{L_{{}_{k\Big|k-1}}^{(i)}} \), i.e.,
$$ {p}_{k\Big|k-1}^{(i)}\left(x,\theta \right)={\displaystyle \sum_{j=1}^{L_{k\Big|k-1}^{(i)}}{w}_{k\Big|k-1}^{\left(i,j\right)}\delta \left(x-{x}_{k\Big|k-1}^{\left(i,j\right)},\theta -{\theta}_{k\Big|k-1}^{\left(i,j\right)}\right)} $$
(34)
Then, the updated multi-target density \( {\pi}_k={\left\{\left({r}_{L,k}^{(i)},{p}_{L,k}^{(i)}\left(x,\theta \right)\right)\right\}}_{i=1}^{M_{k\Big|k-1}}\cup {\left\{\left({r}_{U,k}^{*}(y),{p}_{U,k}^{*}\left(x,\theta ;y\right)\right)\right\}}_{y\in {Y}_k} \) can be computed as follows:
$$ {r}_{L,k}^{(i)}={r}_{k\Big|k-1}^{(i)}\frac{1-{\rho}_{L,k}^{(i)}}{1-{r}_{k\Big|k-1}^{(i)}{\rho}_{L,k}^{(i)}} $$
(35)
$$ {p}_{L,k}^{(i)}\left(x,\theta \right)={\displaystyle \sum_{j=1}^{L_{k\Big|k-1}^{(i)}}{\tilde{w}}_{L,k}^{\left(i,j\right)}\delta \left(x-{x}_{k\Big|k-1}^{\left(i,j\right)},\theta -{\theta}_{k\Big|k-1}^{\left(i,j\right)}\right)} $$
(36)
$$ {r}_{U,k}^{*}(y)=\frac{{\displaystyle \sum_{i=1}^{M_{k\Big|k-1}}\frac{r_{k\Big|k-1}^{(i)}\left(1-{r}_{k\Big|k-1}^{(i)}\right){\rho}_{U,k}^{(i)}(y)}{{\left(1-{r}_{k\Big|k-1}^{(i)}{\rho}_{L,k}^{(i)}\right)}^2}}}{\kappa_k(y)+{\displaystyle \sum_{i=1}^{M_{k\Big|k-1}}\frac{r_{k\Big|k-1}^{(i)}{\rho}_{U,k}^{(i)}(y)}{1-{r}_{k\Big|k-1}^{(i)}{\rho}_{L,k}^{(i)}}}} $$
(37)
$$ {p}_{U,k}^{*}\left(x,\theta; y\right)={\displaystyle \sum_{i=1}^{M_{k\Big|k-1}}{\displaystyle \sum_{j=1}^{L_{k\Big|k-1}^{(i)}}{\tilde{w}}_{U,k}^{\left(i,j\right)}(y)}}\delta \left(x-{x}_{k\Big|k-1}^{\left(i,j\right)},\theta -{\theta}_{k\Big|k-1}^{\left(i,j\right)}\right) $$
(38)
where
$$ {\rho}_{L,k}^{(i)}={\displaystyle \sum_{j=1}^{L_{k\Big|k-1}^{(i)}}{w}_{k\Big|k-1}^{\left(i,j\right)}{p}_{D,k}\left({x}_{k\Big|k-1}^{\left(i,j\right)}\right)} $$
(39)
$$ {\tilde{w}}_{L,k}^{\left(i,j\right)}={w}_{L,k}^{\left(i,j\right)}/{\displaystyle \sum_{j=1}^{L_{k\Big|k-1}^{(i)}}{w}_{L,k}^{\left(i,j\right)}} $$
(40)
$$ {w}_{L,k}^{\left(i,j\right)}={w}_{k\Big|k-1}^{\left(i,j\right)}\left(1-{p}_{D,k}\left({x}_{k\Big|k-1}^{\left(i,j\right)}\right)\right) $$
(41)
$$ {\rho}_{U,k}^{(i)}(y)={\displaystyle \sum_{j=1}^{L_{k\Big|k-1}^{(i)}}{w}_{k\Big|k-1}^{\left(i,j\right)}{\psi}_{k,y}\left({x}_{k\Big|k-1}^{\left(i,j\right)},{\theta}_{k\Big|k-1}^{\left(i,j\right)}\right)} $$
(42)
$$ {\tilde{w}}_{U,k}^{\left(i,j\right)}(y)={w}_{U,k}^{\left(i,j\right)}(y)/{\displaystyle \sum_{i=1}^{M_{k\Big|k-1}}{\displaystyle \sum_{j=1}^{L_{k\Big|k-1}^{(i)}}{w}_{U,k}^{\left(i,j\right)}(y)}} $$
(43)
$$ {w}_{U,k}^{\left(i,j\right)}(y)={w}_{k\Big|k-1}^{\left(i,j\right)}\frac{r_{k\Big|k-1}^{(i)}{\psi}_{k,y}\left({x}_{k\Big|k-1}^{\left(i,j\right)},{\theta}_{k\Big|k-1}^{\left(i,j\right)}\right)}{1-{r}_{k\Big|k-1}^{(i)}} $$
(44)
$$ {\psi}_{k,y}\left({x}_{k\Big|k-1}^{\left(i,j\right)},{\theta}_{k\Big|k-1}^{\left(i,j\right)}\right)={p}_k\left(y\Big|{x}_{k\Big|k-1}^{\left(i,j\right)},{\theta}_{k\Big|k-1}^{\left(i,j\right)}\right){p}_{D,k}\left({x}_{k\Big|k-1}^{\left(i,j\right)}\right) $$
(45)

\( {p}_k\left(y\Big|{x}_{k\Big|k-1}^{\left(i,j\right)},{\theta}_{k\Big|k-1}^{\left(i,j\right)}\right)={p}_k\left(y\Big|{x}_{k\Big|k-1}^{\left(i,j\right)}\right) \) is the likelihood function; the equality holds because the measurement model does not depend on the maneuvering parameter.

(4) Resampling: To alleviate particle degeneracy, the updated particle set \( {\left\{{\tilde{w}}_k^{\left(i,j\right)},{x}_k^{\left(i,j\right)},{\theta}_k^{\left(i,j\right)}\right\}}_{j=1}^{L_{{}_{k\Big|k-1}}^{(i)}} \) is resampled to obtain \( {\left\{{w}_k^{\left(i,j\right)},{x}_k^{\left(i,j\right)},{\theta}_k^{\left(i,j\right)}\right\}}_{j=1}^{L_{{}_k}^{(i)}} \). The resampling step eliminates particles with low weights and multiplies particles with high weights, focusing computation on the important zones of the state space. The resampling process is similar to that of the CBMeMBer filter [14]. Notice that the number of particles grows due to the spontaneous births in the prediction and the measurement-corrected tracks in the update. Therefore, the hypothesized tracks are pruned by discarding those with existence probabilities below a threshold η, which effectively reduces the number of particles.
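A minimal sketch of this resampling-and-pruning step; the component layout (dicts with keys 'r', 'w', 'x') and the use of plain multinomial resampling are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def resample_and_prune(components, eta=1e-3):
    """Prune hypothesized tracks whose existence probability falls below
    eta, then multinomially resample each survivor's particles so that
    all weights become equal."""
    kept = [c for c in components if c['r'] >= eta]
    for c in kept:
        w = np.asarray(c['w'], dtype=float)
        w /= w.sum()
        idx = rng.choice(len(w), size=len(w), p=w)  # multinomial resampling
        c['x'] = np.asarray(c['x'])[idx]
        c['w'] = np.full(len(w), 1.0 / len(w))      # equal weights afterwards
    return kept
```

Lower-variance schemes (systematic or stratified resampling) could be dropped in for `rng.choice` without changing the surrounding logic.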

The posterior density of each Bernoulli component can be obtained by
$$ {p}_k^{(i)}\left(x,\theta \right)={\displaystyle \sum_{j=1}^{L_k^{(i)}}{w}_k^{\left(i,j\right)}\delta \left(x-{x}_k^{\left(i,j\right)},\theta -{\theta}_k^{\left(i,j\right)}\right)} $$
(46)
(5) Parameter update: The mean and covariance of each component are obtained by
$$ {\overline{\theta}}_k^{(i)}={\displaystyle \sum_{j=1}^{L_k^{(i)}}{w}_k^{\left(i,j\right)}}{\theta}_k^{\left(i,j\right)} $$
(47)
$$ {V}_k^{(i)}={\displaystyle \sum_{j=1}^{L_k^{(i)}}{w}_k^{\left(i,j\right)}\left({\theta}_k^{\left(i,j\right)}-{\overline{\theta}}_k^{(i)}\right){\left({\theta}_k^{\left(i,j\right)}-{\overline{\theta}}_k^{(i)}\right)}^T} $$
(48)
The parameter of each particle can be updated by
$$ {\theta}_k^{\left(i,j\right)}=\alpha {\theta}_k^{\left(i,j\right)}+\left(1-\alpha \right){\overline{\theta}}_k^{(i)} $$
(49)
where \( \alpha =\sqrt{1-{h}^2} \) is the shrinkage factor suggested in [33] to correct for the over-dispersion of the Gaussian mixture model.
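Equations (47)–(49) amount to a few lines of weighted array arithmetic. The sketch below assumes a fixed kernel bandwidth h (the paper does not commit to a specific value); the function name and signature are illustrative:

```python
import numpy as np

def shrink_parameters(weights, thetas, h=0.1):
    """Kernel-shrinkage parameter update in the spirit of Eqs. (47)-(49).

    weights : (L,) normalized particle weights of one Bernoulli component
    thetas  : (L, d) parameter particles (e.g., turn rate, d = 1)
    h       : kernel bandwidth; alpha = sqrt(1 - h^2) is the shrinkage factor
    """
    w = np.asarray(weights, float)
    th = np.atleast_2d(np.asarray(thetas, float))
    theta_bar = w @ th                                   # Eq. (47): weighted mean
    diff = th - theta_bar
    V = (w[:, None] * diff).T @ diff                     # Eq. (48): weighted covariance
    alpha = np.sqrt(1.0 - h ** 2)
    th_new = alpha * th + (1.0 - alpha) * theta_bar      # Eq. (49): shrink toward the mean
    return th_new, theta_bar, V
```

Because each particle is pulled toward the weighted mean by the same factor, the weighted mean of the shrunk particles is unchanged, which is exactly the over-dispersion correction of [33].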
(6) State estimation: The estimated number of the targets is the cardinality mean, which can be obtained by
$$ {\widehat{N}}_k={\displaystyle \sum_{i=1}^{M_{k\Big|k-1}}{r}_{L,k}^{(i)}}+{\displaystyle \sum_{y\in {Y}_k}{r}_{U,k}^{*}(y)} $$
(50)

Individual state estimates can be obtained by calculating the means of the posterior densities of the hypothesized tracks with existence probabilities exceeding a given threshold (e.g., 0.5) [14], which is inexpensive and scales linearly with the number of hypothesized tracks.
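Equation (50) and this extraction rule can be sketched as follows; the dict-based track layout and the function name are assumptions for illustration:

```python
import numpy as np

def estimate_states(tracks, threshold=0.5):
    """Multi-target state extraction from a multi-Bernoulli posterior.

    The cardinality estimate is the sum of the existence probabilities
    (Eq. (50)); individual states are the weighted posterior means of the
    tracks whose existence probability exceeds `threshold`.
    Each track is a dict {"r": float, "w": (L,), "x": (L, dim)}.
    """
    n_hat = sum(t["r"] for t in tracks)                  # cardinality mean
    states = [np.asarray(t["w"]) @ np.asarray(t["x"])    # posterior mean per track
              for t in tracks if t["r"] > threshold]
    return n_hat, states
```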

3.2 Track maintenance

Since the MMP-CBMeMBer algorithm cannot provide individual target tracks, a track maintenance scheme is proposed here by introducing a particle labeling method, which effectively achieves track continuity in multiple maneuvering target tracking. The track maintenance procedure is described in detail as follows.

(1) Prediction: Suppose at time k − 1 (k ≥ 2), the particle label of each multi-Bernoulli component can be described as
$$ {L}_{k-1}={\left\{{L}_{k-1}^{(j)}\right\}}_{j=1}^{J_{k-1}}={\left\{{l}_{k-1}^{(j)(1)},{l}_{k-1}^{(j)(2)},\cdots, {l}_{k-1}^{(j)\left({N}_j\right)}\right\}}_{j=1}^{J_{k-1}} $$
(51)
where J k − 1 is the number of Bernoulli components at time k − 1, and N j denotes the number of particles of the j th Bernoulli component.
The labels of the prediction Bernoulli components can be expressed as
$$ {L}_{k\Big|k-1}={L}_{k-1}\cup {L}_{\gamma } $$
(52)
where L γ denotes the labels of the Bernoulli components of the spontaneous births and can be expressed as
$$ {L}_{\gamma }={\left\{{L}_{\gamma}^{(i)}\right\}}_{i=1}^{J_{\gamma }}={\left\{{l}_{k-1}^{(i)(1)},{l}_{k-1}^{(i)(2)},\cdots, {l}_{k-1}^{(i)\left({N}_{\gamma}\right)}\right\}}_{i=1}^{J_{\gamma }} $$
(53)
where J γ denotes the number of the Bernoulli components of the spontaneous births, and N γ denotes the number of the particles of each Bernoulli component.
(2) Update: In the update step, |Y k | + 1 Bernoulli components appear due to the measurements, where |Y k | denotes the number of measurements. At time k, each measurement-updated component is assigned the label of its predicted track, i.e., the labels can be expressed as
$$ {L}_{k\Big|k}={L}_{k\Big|k-1}\cup {L}_{k\Big|k-1}^1\cup \cdots \cup {L}_{k\Big|k-1}^{\left|{Y}_k\right|} $$
(54)
where \( {L}_{k\Big|k-1}^n={L}_{k\Big|k-1},\ n=1,\cdots, \left|{Y}_k\right| \).
(3) Resampling: Resample each of the |Y k | + 1 Bernoulli components. The resampled particles keep the same labels as their parent particles, and the labels of the remaining Bernoulli components are given as
$$ {L}_k={L}_k^1\cup \cdots \cup {L}_k^{J_k} $$
(55)
where J k is the number of the remaining Bernoulli components at time k,
$$ {L}_k^j=\left\{{L}_k^{(j)(1)},{L}_k^{(j)(2)},\cdots, {L}_k^{(j)\left({N}_j\right)}\right\},\ j=1,\cdots, {J}_k $$
(56)

(4) Track continuity: Track continuity is achieved from the particle labels by the data association technique [34], i.e., each track is obtained by comparing the numbers of particles carrying the same label in each component.
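A minimal sketch of this majority-label rule (the function names and the per-component label lists are illustrative, not from the paper):

```python
from collections import Counter

def dominant_label(particle_labels):
    """Assign a track identity to one Bernoulli component by majority vote
    over its particle labels (track continuity, step (4))."""
    return Counter(particle_labels).most_common(1)[0][0]

def link_tracks(components):
    """components: list of per-component particle-label lists.
    Returns the track label associated with each confirmed component."""
    return [dominant_label(labels) for labels in components]
```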

3.3 Simulations

In order to demonstrate the performance of the proposed APPF-CBMeMBer algorithm, a two-dimensional tracking example is simulated. The benchmark technique is the MMP-CBMeMBer algorithm [29]. In the considered scenario, the measurements are obtained at four stationary sensors located at (0, 0) m, (0, 1 × 10^4) m, (1 × 10^4, 0) m, and (1 × 10^4, 1 × 10^4) m. At time k, each sensor outputs the measured bearing of the received signal, which is given by
$$ {\boldsymbol{y}}_k^{S_i}={\tan}^{-1}\left(\frac{z_k-{z}_{S_i}}{x_k-{x}_{S_i}}\right)+{\boldsymbol{w}}_k $$
(57)
where \( \left({x}_{{S}_{i}},{z}_{{S}_{i}}\right) \) denotes the location of the i-th sensor, i = 1, 2, 3, 4, and w k is the zero-mean Gaussian distributed measurement noise with variance \( {\sigma}_{\boldsymbol{w}}^2=1\times {10}^{-4}{\mathrm{rad}}^2 \).
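The bearing model of Eq. (57) is easy to reproduce. The sketch below uses the scenario's four sensor positions and noise variance, but replaces the plain arctangent with `atan2` so the bearing is unambiguous in all four quadrants (an implementation choice of this sketch, not stated in the paper):

```python
import numpy as np

# Sensor positions from the simulation scenario (metres).
SENSORS = np.array([[0.0, 0.0], [0.0, 1e4], [1e4, 0.0], [1e4, 1e4]])
SIGMA_W = 1e-2   # bearing noise std in rad (variance 1e-4 rad^2, Eq. (57))

def bearings(target_xz, rng=None):
    """Noisy bearings of one target as seen by the four sensors."""
    dx = target_xz[0] - SENSORS[:, 0]
    dz = target_xz[1] - SENSORS[:, 1]
    theta = np.arctan2(dz, dx)           # four-quadrant arctangent
    if rng is not None:
        theta = theta + rng.normal(0.0, SIGMA_W, size=theta.shape)
    return theta
```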
There are four maneuvering targets. Targets 1 and 2 are present throughout the tracking process, traveling from their initial positions (−1 × 10^3, 4 × 10^3) m and (1.4 × 10^4, 1 × 10^4) m, respectively. Target 3 is a spontaneous birth at the 10th minute with initial position (2 × 10^3, 10.5 × 10^3) m and disappears at the 50th minute. Target 4 is a spontaneous birth at the 13th minute with initial position (1.3 × 10^4, 8 × 10^3) m and disappears at the 53rd minute. The true trajectories of the four targets are depicted in Fig. 1.
Fig. 1 True target tracks

We model the birth process using a Poisson RFS with intensity
$$ {\varGamma}_k\left(\mathbf{x}\right)={\displaystyle \sum_{i=1}^4 0.2\,\mathcal{N}\left(\mathbf{x};{\mathbf{m}}_{\varGamma}^{(i)},{\mathbf{P}}_{\varGamma}^{(i)}\right)} $$
(58)
where \( {\mathbf{m}}_{\varGamma}^{(1)}=\left(-1\times {10}^3\mathrm{m},0\mathrm{m}/\mathrm{s},4\times {10}^3\mathrm{m},0\mathrm{m}/\mathrm{s}\right) \), \( {\mathbf{m}}_{\varGamma}^{(2)}=\left(1.4\times {10}^4\mathrm{m},0\mathrm{m}/\mathrm{s},1\times {10}^4\mathrm{m},0\mathrm{m}/\mathrm{s}\right) \), \( {\mathbf{m}}_{\varGamma}^{(3)}=\left(2\times {10}^3\mathrm{m},0\mathrm{m}/\mathrm{s},10.5\times {10}^3\mathrm{m},0\mathrm{m}/\mathrm{s}\right) \), \( {\mathbf{m}}_{\varGamma}^{(4)}=\left(1.3\times {10}^4\mathrm{m},0\mathrm{m}/\mathrm{s},8\times {10}^3\mathrm{m},0\mathrm{m}/\mathrm{s}\right) \), and \( {\mathbf{P}}_{\varGamma}^{(1)}={\mathbf{P}}_{\varGamma}^{(2)}={\mathbf{P}}_{\varGamma}^{(3)}={\mathbf{P}}_{\varGamma}^{(4)}=\mathrm{diag}\left(400,1,400,1\right) \), so that each birth mean corresponds to the initial position of one target. The clutter is modeled as a Poisson RFS with a mean rate of r = 10 over the observation space. The probabilities of target survival and detection are p_{S,k} = 0.99 and p_{D,k} = 0.98.
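Drawing birth particles from the Gaussian-mixture birth intensity of Eq. (58) can be sketched as follows; the per-component particle count and the restriction to two components are illustrative choices, not scenario parameters:

```python
import numpy as np

def sample_births(means, cov, n_per_comp, rng=np.random.default_rng(1)):
    """Draw birth particles from a Gaussian-mixture birth intensity:
    one batch of particles per mixture component, shared covariance."""
    cov = np.asarray(cov, float)
    return [rng.multivariate_normal(np.asarray(m, float), cov, size=n_per_comp)
            for m in means]

# Two of the scenario's birth means: state is (x, vx, z, vz).
means = [(-1e3, 0.0, 4e3, 0.0), (1.4e4, 0.0, 1e4, 0.0)]
P = np.diag([400.0, 1.0, 400.0, 1.0])
births = sample_births(means, P, n_per_comp=100)
```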
To contrast the performance of the different algorithms, two performance metrics are used. One is the statistics of the target number estimates. The other is the optimal subpattern assignment (OSPA) distance [35], defined as
$$ {\overline{d}}_p^{(c)}\left(X,Y\right)={\left(\frac{1}{n}\left(\underset{\pi \in {\prod}_n}{ \min }{\displaystyle \sum_{i=1}^m{d}^{(c)}{\left({\mathbf{x}}_i,{\mathbf{y}}_{\pi (i)}\right)}^p+{c}^p\left(n-m\right)}\right)\right)}^{1/p} $$
(59)
where X = {x 1, …, x m } and Y = {y 1, …, y n } are arbitrary finite sets with m ≤ n, 1 ≤ p < ∞, c > 0, and m, n ∈ N 0 = {0, 1, 2, …}. If m > n, \( {\overline{d}}_p^{(c)}\left(X,Y\right)={\overline{d}}_p^{(c)}\left(Y,X\right) \). In the simulation, the OSPA parameters are set to p = 2 and c = 1000. The simulation results are averaged over 200 Monte Carlo runs.
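The OSPA metric of Eq. (59) can be computed by brute force for the small target sets used here. The function name is illustrative, and in practice the permutation search would be replaced by a Hungarian-algorithm assignment solver:

```python
import itertools
import math

def ospa(X, Y, p=2.0, c=1000.0):
    """OSPA distance of order p with cut-off c (Eq. (59)).

    Brute-force minimization over assignments, so only suitable for
    small sets; X and Y are lists of point tuples.
    """
    if len(X) > len(Y):
        X, Y = Y, X                        # by definition d(X, Y) = d(Y, X)
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    def d_c(x, y):
        return min(c, math.dist(x, y))     # cut-off base distance
    best = min(sum(d_c(x, Y[j]) ** p for x, j in zip(X, perm))
               for perm in itertools.permutations(range(n), m))
    return ((best + c ** p * (n - m)) / n) ** (1.0 / p)
```

Note how the cardinality penalty c^p (n − m) makes a missed or spurious target cost the maximum per-point distance c.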
In this experiment, we compare the tracking performance of the different algorithms on multiple abruptly maneuvering targets. The turn rate ω is treated as an unknown, time-varying model parameter by the proposed APPF-CBMeMBer algorithm. The MMP-CBMeMBer algorithm, by contrast, uses a constant velocity (CV) model and two coordinated-turn (CT) models whose turn rates are assumed known a priori. We compare the proposed APPF-CBMeMBer algorithm with the MMP-CBMeMBer (ω = 0, ±4) and MMP-CBMeMBer (ω = 0, ±7) algorithms. Note that in this scenario, the true turn rates are ω = ±7. The simulation results for this experiment are shown in Figs. 2, 3, 4, 5, 6, and 7.
Fig. 2 Target number estimates

Fig. 3 OSPA distance statistics

Fig. 4 Average RMSEs of target numbers

Fig. 5 Average OSPA distances with different numbers of particles

Fig. 6 Average run time

Fig. 7 Track estimates of the proposed algorithm

Figure 2 shows the averaged target number estimates of the APPF-CBMeMBer and MMP-CBMeMBer algorithms. The proposed APPF-CBMeMBer algorithm provides more accurate target number estimates than the benchmark MMP-CBMeMBer algorithm. The underlying reason is that the proposed algorithm jointly estimates the unknown model parameter ω, so the motion model can be well matched to each target. For the MMP-CBMeMBer algorithm, the tracking accuracy instead depends on how well the a priori designed model set matches the real target motion models. Since the model parameters are unknown a priori, the models may be mismatched with the real motion model of each target (as with ω = 0, ±4). Moreover, although the MMP-CBMeMBer (ω = 0, ±7) algorithm includes the true turn rates, its tracking accuracy is slightly lower than that of the proposed algorithm due to the mutual disturbance among the models.

Figure 3 compares the OSPA distances of the two simulated algorithms, and the proposed algorithm again outperforms the MMP-CBMeMBer algorithm. This is because the proposed method can adapt to the temporal evolution of the target maneuvering parameters by estimating them jointly with the target states.

Figures 4 and 5 show the average RMSEs of the target number estimates and the average OSPA distances for different numbers of particles. The tracking accuracy of both algorithms improves as the number of particles increases, and the accuracy of the APPF-CBMeMBer algorithm is consistently higher than that of the MMP-CBMeMBer algorithm. However, as Fig. 6 shows, the proposed algorithm has a higher run time than the MMP-CBMeMBer algorithm, because it must select the parameter particles from the doubled particle set and the parameter update steps take additional time. For MM-based methods, on the other hand, the computational complexity may become prohibitive, since the number of required models grows exponentially as more unknown parameters have to be taken into account to match the possible target motion modes.

Figure 7 shows the track estimates of the proposed APPF-CBMeMBer algorithm. The proposed algorithm exhibits good track maintenance performance, which follows from its accurate state and target number estimates.

4 Conclusions

In this paper, we developed a new multiple maneuvering target tracking algorithm, referred to as the APPF-CBMeMBer tracker, to handle unknown and time-varying maneuvering parameters. In the proposed algorithm, the APE technique is incorporated to achieve online maneuvering parameter estimation, and the selected parameter particles are utilized to derive an approximate closed-form solution. Furthermore, a particle labeling technique is introduced to obtain the individual target tracks. Simulations showed that the proposed algorithm offers higher tracking accuracy than the existing MMP-CBMeMBer algorithm when tracking multiple maneuvering targets.

In future work, we shall consider introducing the APE technique into the LMB filter to obtain a better algorithm for tracking multiple targets with unknown but abruptly changing parameters.

Declarations

Acknowledgements

This paper is supported by the National Natural Science Foundation of China (Nos. 61305017, 61304264) and the Natural Science Foundation of Jiangsu Province (No. BK20130154).

Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Internet of Things Engineering, Jiangnan University, Wuxi, China
(2)
Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Wuxi, China

References

  1. R Mahler, Statistical Multisource-Multitarget Information Fusion (Artech House, Norwood, MA, 2007)
  2. R Mahler, Multi-target Bayes filtering via first-order multi-target moments. IEEE Trans. Aerosp. Electron. Syst. 39(4), 1152–1178 (2003)
  3. R Mahler, PHD filters of higher order in target number. IEEE Trans. Aerosp. Electron. Syst. 43(4), 1523–1543 (2007)
  4. BN Vo, S Singh, A Doucet, Sequential Monte Carlo methods for multi-target filtering with random finite sets. IEEE Trans. Aerosp. Electron. Syst. 41(4), 1224–1245 (2005)
  5. D Clark, J Bell, Convergence results for the particle PHD filter. IEEE Trans. Signal Process. 54(7), 2652–2661 (2006)
  6. BN Vo, WK Ma, The Gaussian mixture probability hypothesis density filter. IEEE Trans. Signal Process. 54(11), 4091–4104 (2006)
  7. BT Vo, BN Vo, A Cantoni, Analytic implementations of the cardinalized probability hypothesis density filter. IEEE Trans. Signal Process. 55(7), 3553–3567 (2007)
  8. R Mahler, Approximate multisensor CPHD and PHD filters, in Proceedings of the 13th International Conference on Information Fusion (Edinburgh, UK, 2010), pp. 1152–1178
  9. R Mahler, BT Vo, BN Vo, CPHD filtering with unknown clutter rate and detection profile. IEEE Trans. Signal Process. 59(8), 3497–3513 (2011)
  10. N Whiteley, S Singh, S Godsill, Auxiliary particle implementation of probability hypothesis density filter. IEEE Trans. Aerosp. Electron. Syst. 46(3), 1437–1454 (2010)
  11. M Yazdian-Dehkordi, Z Azimifar, MA Masnadi-Shirazi, Penalized Gaussian mixture probability hypothesis density filter for multiple target tracking. Signal Process. 92(5), 1230–1242 (2012)
  12. ZX Liu, LJ Li, WX Xie et al., Sequential measurement-driven multi-target Bayesian filter. EURASIP J. Adv. Signal Process. 43, 1–9 (2015)
  13. ZX Liu, LJ Li, WX Xie et al., Two implementations of marginal distribution Bayes filter for nonlinear Gaussian models. AEU Int. J. Electron. Commun. 69(9), 1297–1304 (2015)
  14. BT Vo, BN Vo, A Cantoni, The cardinality balanced multi-target multi-Bernoulli filter and its implementations. IEEE Trans. Signal Process. 57(2), 409–423 (2009)
  15. BT Vo, BN Vo, R Hoseinnezhad et al., Robust multi-Bernoulli filtering. IEEE J. Sel. Top. Signal Process. 7(3), 399–409 (2013)
  16. BT Vo, BN Vo, Labeled random finite sets and multi-object conjugate priors. IEEE Trans. Signal Process. 61(13), 3460–3475 (2013)
  17. S Reuter, BT Vo, BN Vo et al., The labeled multi-Bernoulli filter. IEEE Trans. Signal Process. 62(12), 3246–3260 (2014)
  18. MA Beard, BT Vo, BN Vo, Bayesian multi-target tracking with merged measurements using labelled random finite sets. IEEE Trans. Signal Process. 63(6), 1433–1447 (2015)
  19. JL Yang, HW Ge, An improved multi-target tracking algorithm based on CBMeMBer filter and variational Bayesian approximation. Signal Process. 93(9), 2510–2515 (2013)
  20. ML Hernandez, B Ristic, A Farina et al., Performance measure for Markovian switching systems using best-fitting Gaussian distributions. IEEE Trans. Aerosp. Electron. Syst. 44(2), 724–747 (2008)
  21. XR Li, VP Jilkov, Survey of maneuvering target tracking. Part V: multiple-model methods. IEEE Trans. Aerosp. Electron. Syst. 41(4), 1255–1321 (2005)
  22. A Pasha, BN Vo, HD Tuan et al., Closed-form PHD filtering for linear jump Markov models, in Proceedings of the 9th International Conference on Information Fusion (Florence, Italy, 2006)
  23. SA Pasha, BN Vo, HD Tuan et al., A Gaussian mixture PHD filter for jump Markov system models. IEEE Trans. Aerosp. Electron. Syst. 45(3), 919–936 (2009)
  24. SA Pasha, HD Tuan, P Apkarian, The LFT based PHD filter for nonlinear jump Markov models in multi-target tracking, in Proceedings of the IEEE Conference on Decision and Control (Shanghai, China, 2009), pp. 5478–5483
  25. WL Li, YM Jia, Gaussian mixture PHD filter for jump Markov models based on best-fitting Gaussian approximation. Signal Process. 91(4), 1036–1042 (2011)
  26. K Punithakumar, T Kirubarajan, A Sinha, Multiple-model probability hypothesis density filter for tracking maneuvering targets. IEEE Trans. Aerosp. Electron. Syst. 44(1), 87–98 (2008)
  27. R Georgescu, P Willett, The multiple model CPHD tracker. IEEE Trans. Signal Process. 60(4), 1741–1751 (2012)
  28. R Mahler, On multitarget jump-Markov filters, in Proceedings of the 15th International Conference on Information Fusion (Singapore, 2012), pp. 149–156
  29. JL Yang, HB Ji, HW Ge, Multi-model particle cardinality-balanced multi-target multi-Bernoulli algorithm for multiple manoeuvring target tracking. IET Radar Sonar Navig. 7(2), 101–112 (2013)
  30. J Liu, M West, Combined parameter and state estimation in simulation-based filtering, in Sequential Monte Carlo Methods in Practice (Springer, New York, 2001), pp. 197–223
  31. C Carvalho, M Johannes, H Lopes et al., Particle learning and smoothing. Stat. Sci. 25(1), 88–106 (2010)
  32. C Nemeth, P Fearnhead, L Mihaylova, Sequential Monte Carlo methods for state and parameter estimation in abruptly changing environments. IEEE Trans. Signal Process. 62(5), 1245–1255 (2014)
  33. M West, Approximating posterior distributions by mixture. J. R. Stat. Soc. Ser. B 55(2), 409–422 (1993)
  34. DE Clark, J Bell, Multi-target state estimation and track continuity for the particle PHD filter. IEEE Trans. Aerosp. Electron. Syst. 43(4), 1441–1452 (2007)
  35. D Schuhmacher, BT Vo, BN Vo, A consistent metric for performance evaluation of multi-object filters. IEEE Trans. Signal Process. 56(8), 3447–3457 (2008)

Copyright

© Yang et al. 2016
