4.1 Algorithm for tracking large-batch, multi-structure group targets when SDVs are known
Tracking large-batch, multi-structure group targets is computationally expensive. To reduce this cost, the group can be divided into multiple subgroups that are tracked serially, as shown in Fig. 2: at each time step, the n subgroups are tracked successively, so the tracker is run n times per step.
4.1.1 Estimate number of subgroups
(1) Estimate the deviation matrix
\({Z}_{k}=\left\{ {z}_{k,1},\ldots ,{z}_{k,n_k}\right\}\) represents the measurements of the group members at time k. The deviation matrix is introduced to represent the differences between the state estimates of the targets:
$$\begin{aligned} \varvec{D_{k}({Z}_{k})}=\begin{bmatrix}0 &{} d_{k}(1,2) &{}\cdots &{} d_{k}(1,n_{k})\\ d_{k}(2,1) &{} 0&{} \cdots &{}d_{k}(2,n_{k}) \\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ d_{k}(n_{k},1)&{}d_{k}(n_{k},2)&{}\cdots &{}0\end{bmatrix} \end{aligned}$$
(28)
where \({d_{k}(i,j)}\) is defined as the 2-norm of the difference between measurements i and j:
$$\begin{aligned} d_{k}(i,j)=\parallel {z}_{k,i}-{z}_{k,j}\parallel _{2},\qquad i\ne j\ \end{aligned}.$$
(29)
In this paper, the adjacency matrix at each time step is obtained by:
$$\begin{aligned} {\hat{A}}_{dk}(i,j)={\left\{ \begin{array}{ll}1, &{} d_{k}(i,j) \le d_{\lambda },\,i\ne j\\ 0, &{} {\text{ otherwise }}\end{array}\right. } \end{aligned}$$
(30)
where \({d_{\lambda }}\) is the threshold on \({d_{k}(i,j)}\): when the 2-norm between target i and target j does not exceed \({d_{\lambda }}\), the two targets are considered to belong to the same subgroup.
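For illustration, Eqs. (28)–(30) can be sketched in a few lines of NumPy; the function name and this particular vectorized realization are ours, not part of the original algorithm listing:

```python
import numpy as np

def adjacency_from_measurements(Z, d_lambda):
    """Build the deviation matrix D_k (Eqs. 28-29) from an (n, dim) array
    of measurements and threshold it into the adjacency matrix (Eq. 30)."""
    # pairwise 2-norms d_k(i, j) = ||z_i - z_j||_2
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    A = (D <= d_lambda).astype(int)
    np.fill_diagonal(A, 0)  # A(i, i) = 0 by definition (i != j in Eq. 30)
    return D, A
```

Two nearby measurements then share an edge while a distant one remains isolated, matching the subgroup interpretation above.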
(2) Estimation of the number of subgroups
Due to the cooperative relationship between group members, the state of one member can be represented by the states of other members, and this relationship can be described by linear dependence. The linear dependence can be expressed by the formula:
$$\begin{aligned} v_{n}={k_{1}}{v_{1}+{k_{2}}v_{2}+\cdots +{k_{n-1}}}v_{n-1} \end{aligned}$$
(31)
where \({\left\{ k_{1},k_{2},\ldots ,k_{n-1}\right\} }\) is the set of weight coefficients, and at least one of them is not equal to zero; \({\left\{ v_{1},v_{2},\ldots ,v_{n}\right\} }\) is the set of matrices or vectors.
Neglecting the process noise term \({B_{k,i}\omega_{k,i}}\), Eq. (17) can be expanded as follows:
$$\begin{aligned} x_{k+1,i}=\omega _{1}\left[ F_{k,1}x_{k,1}+b_{k}(1,i)\right] +\omega _{2}\left[ F_{k,2}x_{k,2}+b_{k}(2,i)\right]\\ +\cdots +\omega _{n}\left[ F_{k,n}x_{k,n}+b_{k}(n,i)\right] \end{aligned}$$
(32)
Let \({F_{k,m}x_{k,m}+b_{k}(m,i)=x_{m},\,\,m=1,2,\ldots ,n}\); then Eq. (32) reduces to:
$$\begin{aligned} x_{k+1,i}=\omega _{1}x_{1}+\omega _{2}x_{2}+\cdots +\omega _{n}x_{n} \end{aligned}.$$
(33)
From Eqs. (31) and (33), it can be seen that the members of the same subgroup are linearly dependent.
Since eigenvectors corresponding to different eigenvalues are linearly independent, the number of subgroups can be estimated by counting the linearly independent eigenvectors of the deviation matrix.
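One practical spectral realization of this idea (our choice, not stated in the original text) counts the connected components of the graph defined by the adjacency matrix of Eq. (30): the component count equals the multiplicity of the zero eigenvalue of the graph Laplacian, so each subgroup contributes one independent eigenvector.

```python
import numpy as np

def estimate_num_subgroups(A, tol=1e-9):
    """Estimate the number of subgroups from the adjacency matrix A.
    The number of connected components of an undirected graph equals
    the multiplicity of eigenvalue 0 of its Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    eigvals = np.linalg.eigvalsh(L)  # Laplacian is symmetric
    return int(np.sum(np.abs(eigvals) < tol))
```

For a graph made of two disjoint pairs, this returns 2, i.e., two subgroups.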
4.1.2 Estimation of group members
The GLMB filter can estimate the number, states, and trajectories of independent targets. However, there are dependencies between group members. In other words, the states of targets are not independent of each other. In order to analyze the relationship between Eqs. (17) and (20), the true deviation variable \({{\check{b}}_{k}(l,i)}\) is introduced [25]. The relationship between true deviation vector and the SDV is shown in Fig. 3.
In Fig. 3, the ellipse represents the difference between true deviation vector and the SDV. \({{\check{b}}_{k}(l,i)}\) represents the real offset between the target i and its parent node l at time k, that is:
$$\begin{aligned} x_{k,i}=x_{k,l}+{\check{b}}_{k}(l,i) \end{aligned}$$
(34)
Combining Eqs. (34) and (17) gives:
$$\begin{aligned} x_{k+1,i}=\sum _{l\in {{P}(i)}}\omega _{k}(l,i)F_{k,l}x_{k,i}+\sum _{l\in {{P}(i)}}\omega _{k}(l,i)\left[ b_{k}(l,i)-F_{k,l}(x_{k,i}-x_{k,l})\right] +B_{k,i}\omega _{k,i} \end{aligned}$$
(35)
In general, we assume that all targets in the same subgroup share the same state transition matrix, i.e., \({F_{k,l}=F_{k}}\). The above formula then simplifies to:
$$\begin{aligned}&\ x_{k+1,i}=F_{k}x_{k,i}+\triangle b_{k,i}+B_{k,i}\omega _{k,i}\\&\triangle b_{k,i}=\sum _{l\in {{P}(i)}}\omega _{k}(l,i)\left[ b_{k}(l,i)-F_{k}{\check{b}}_{k}(l,i)\right] \end{aligned}$$
(36)
where \({\triangle b_{k,i}}\) can be seen as a displacement noise between group members, which reflects the dependency between them. Different from Eq. (20), Eq. (36) involves both the process noise \({\omega _{k,i}}\) and the displacement noise \({\triangle b_{k,i}}\). Together, the two are called the collaboration noise \({\omega _{k,i}^o}\), i.e., \({\omega _{k,i}^o=\triangle b_{k,i}+B_{k,i}\omega _{k,i}}\). Replacing the original state noise \({\omega _{k,i}}\) with the collaboration noise \({\omega _{k,i}^o}\) allows the GLMB filter to be applied directly.
Since the general GLMB recursion is computationally expensive, we use the \({\delta }\)-GLMB filter for computational convenience:
$$\begin{aligned} \pi ({{\textbf{X}}})=\triangle ({{\textbf{X}}})\sum _{(I,\xi )\in {{{\mathcal{F}}}}({\mathbb{L}})\times \varXi }w^{(I,\xi )}\times \delta _{I}({\mathcal{L}}({{\textbf{X}}}))\left[ p^{(\xi )}\right] ^{{{\textbf{X}}}} \end{aligned}.$$
(37)
The \({\delta }\)-GLMB filter tracks multiple targets in two parts: the prediction step and the update step.
Prediction step:
$$\begin{aligned} \pi _{+}({{\textbf{X}}}_{+})=\triangle ({{\textbf{X}}}_{+})\sum _{(I_{+},\xi )\in F({\mathbb{L}}_{+})\times \varXi }w_{+}^{(I_{+},\xi )}\times \delta _{I_{+}}({\mathcal{L}}({{\textbf{X}}}_{+}))\left[ p_+^{(\xi )}\right] ^{{{\textbf{X}}}_{+}} \end{aligned}$$
(38)
where
$$\begin{aligned} {{w_ + }^{\left( {{I_ + },\xi } \right) } = {w_B}\left( {{I_ + } \cap {\mathbb{B}}} \right) {w_s}^{\left( \xi \right) }\left( {{I_ + } \cap {\mathbb{L}}} \right) } \end{aligned}$$
(39)
$$\begin{aligned} {{p_ + }^{\left( \xi \right) }\left( {x,\ell } \right) = {1_{{\mathbb{L}}}}\left( \ell \right) p_S^{\left( \xi \right) }\left( {x,\ell } \right) + \left( {1 - {1_{{\mathbb{L}}}}\left( \ell \right) } \right) {p_B}\left( {x,\ell } \right) } \end{aligned}$$
(40)
$$\begin{aligned} {p_S^{\left( \xi \right) }\left( {x,\ell } \right) = \frac{{\left\langle {{p_S}\left( { \cdot ,\ell } \right) f\left( {x| \cdot ,\ell } \right) ,{p^{\left( \xi \right) }}\left( { \cdot ,\ell } \right) } \right\rangle }}{{\eta _S^{\left( \xi \right) }\left( \ell \right) }}} \end{aligned}$$
(41)
$$\begin{aligned} {\eta _S^{\left( \xi \right) }\left( \ell \right) = \int {\left\langle {{p_S}\left( { \cdot ,\ell } \right) f\left( {x| \cdot ,\ell } \right) ,{p^{\left( \xi \right) }}\left( { \cdot ,\ell } \right) } \right\rangle {\text{ d }}x} } \end{aligned}$$
(42)
$$\begin{aligned} {{w_S}^{\left( \xi \right) }\left( L \right) = {{\left[ {\eta _S^{\left( \xi \right) }} \right] }^L}\sum \limits _{I \subseteq {\mathbb{L}}} {{1_I}\left( L \right) } {{\left[ {{q_S}^{\left( \xi \right) }} \right] }^{I - L}}{w^{\left( {I,\xi } \right) }}} \end{aligned}$$
(43)
$$\begin{aligned} {{q_S}^{\left( \xi \right) }\left( \ell \right) = \left\langle {{q_S}\left( { \cdot ,\ell } \right) ,p_S^{\left( \xi \right) }\left( { \cdot ,\ell } \right) } \right\rangle } \end{aligned}$$
(44)
where \({w_B}\left( {{I_ + } \cap {\mathbb{B}}} \right)\) denotes the weight of the newborn labels \({ {{I_ + } \cap {\mathbb{B}}}}\), and \({w_s}^{\left( \xi \right) }\left( {{I_ + } \cap {\mathbb{L}}} \right)\) denotes the weight of the surviving labels \({ {{I_ + } \cap {\mathbb{L}}}}\). \({{p_B}\left( {\cdot ,\ell } \right) }\) denotes the probability density of a newborn target, the density \({{p_S^{\left( \xi \right) }\left( {x,\ell } \right) }}\) of a surviving target is obtained from the survival probability \({{p_S}\left( {\cdot ,\ell } \right) }\) and the prior density \({p^{\left( \xi \right)}\left( {\cdot ,\ell } \right)}\), and \({f\left( {x| \cdot ,\ell } \right) }\) denotes the single-target state transition density.
Update step:
$$\begin{aligned} \pi \left( {{{{{\textbf{X}}}|{Z}}}} \right) \mathrm{{ = }}&\Delta \left( {{{{\textbf{X}}}}} \right) \sum \limits _{\left( {I,\xi } \right) \in {{{\mathcal{F}}}}\left( {{\mathbb{L}}} \right) \times \Xi } {\sum \limits _{\theta \in \Theta } {{w^{\left( {I,\xi ,\theta } \right) }}\left( Z \right) } }\times {\delta _I}\left( {{{{\mathcal{L}}}}\left( {{{{\textbf{X}}}}} \right) } \right) {\left[ {{p^{\left( {\xi ,\theta } \right) }}\left( { \cdot |Z} \right) } \right] ^{{{{\textbf{X}}}}}} \end{aligned}$$
(45)
where \(\Theta\) is the space of association maps \(\theta\): \({\mathbb{L}}\rightarrow \left\{ {0,1,\ldots ,\left| Z \right| } \right\}\) such that \(\theta \left( i \right) = \theta \left( {i'} \right) > 0\) implies \(i = i'\). \({ \Theta ^{(M)}=\left\{ \theta ^{(1)},\ldots ,\theta ^{(M)}\right\} }\) denotes the M elements of \({\Theta }\) with the largest weights \({w^{(I,\xi ,\theta ^{(i)})}}\) for a fixed \({(I,\xi )}\). The associated parameters are defined as follows:
$$\begin{aligned} {w^{\left( {I,\xi ,\theta } \right) }}\left( Z \right) = \frac{{{\delta _{{\theta ^{ - 1}}\left( {\left\{ {0:\left| Z \right| } \right\} } \right) }}\left( I \right) {w^{\left( {I,\xi } \right) }}{{\left[ {\eta _Z^{\left( {\xi ,\theta } \right) }} \right] }^I}}}{{\sum \limits _{\left( {I,\xi } \right) \in {{{\mathcal{F}}}}\left( {{\mathbb{L}}} \right) \times \Xi } {\sum \limits _{\theta \in \Theta } {{\delta _{{\theta ^{ - 1}}\left( {\left\{ {0:\left| Z \right| } \right\} } \right) }}\left( I \right) {w^{\left( {I,\xi } \right) }}{{\left[ {\eta _Z^{\left( {\xi ,\theta } \right) }} \right] }^I}} } }} \end{aligned}$$
(46)
$$\begin{aligned} {p^{\left( {\xi ,\theta } \right) }}\left( {x,\ell |Z} \right) = \frac{{{p^{\left( \xi \right) }}\left( {x,\ell } \right) {\psi _Z}\left( {x,\ell ;\theta } \right) }}{{\eta _Z^{\left( {\xi ,\theta } \right) }\left( \ell \right) }} \end{aligned}$$
(47)
$$\begin{aligned} {\eta _Z^{\left( {\xi ,\theta } \right) }\left( \ell \right) } = \left\langle {{p^{\left( \xi \right) }}\left( { \cdot ,\ell } \right) ,{\psi _Z}\left( { \cdot ,\ell ;\theta } \right) } \right\rangle \end{aligned}$$
(48)
$$\begin{aligned} {\psi _Z}\left( {x,\ell ;\theta } \right) =&{\delta _0}\left( {\theta \left( \ell \right) } \right) {q_D}\left( {x,\ell } \right) +\left( {1 - {\delta _0}\left( {\theta \left( \ell \right) } \right) } \right) \frac{{{p_D}\left( {x,\ell } \right) g\left( {{z_{\theta \left( \ell \right) }}|x,\ell } \right) }}{{\kappa \left( {{z_{\theta \left( \ell \right) }}} \right) }} \end{aligned}$$
(49)
where \({p_{D}(x,\ell )}\) is the detection probability, \({q_{D}(x,\ell )=1-p_{D}(x,\ell )}\) is the corresponding missed-detection probability, \({g(\cdot \mid x,\ell )}\) is the measurement likelihood function, and \({\kappa (\cdot )}\) is the clutter intensity.
4.1.3 Estimation of group structure
The adjacency matrix obtained in Sect. 4.1.1 is symmetric, and the graph obtained from a symmetric adjacency matrix is undirected, so it cannot describe the parent–child relationships between members. In this paper, we use the inner product of a node's velocity and the position offset vector between nodes to further describe the parent–child relationship: if the inner product is positive, the target is the parent node, and if it is negative, the target is the child node. That is:
$$\begin{aligned} V_{i}=[\dot{P}_{i,x},\dot{P}_{i,y}]^{T} \end{aligned}$$
(50)
$$\begin{aligned} \triangle d_{i,j}=[P_{i,x}-P_{j,x},P_{i,y}-P_{j,y}]^{T} \end{aligned}$$
(51)
$$\begin{aligned} \alpha _{k}=\langle V_{i},\triangle d_{i,j} \rangle \end{aligned}$$
(52)
where \({V_{i}}\) denotes the velocity of target i, \({\triangle d_{i,j}}\) denotes the position offset vector between target i and target j, and \({\alpha _{k}}\) is their inner product. The result falls into three cases:
$$\begin{aligned} \alpha _{k}={\left\{ \begin{array}{ll}>0 &{} {\text{ parent }}\, {\text{ node }}\\ <0 &{} {\text{ child }} \,{\text{ node }}\\ 0&{} {\text{ unknown }}\end{array}\right. } \end{aligned}.$$
(53)
For example, in Fig. 4, target A is the parent node of target B.
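The sign test of Eqs. (50)–(53) can be sketched as follows; the function name and 2D position/velocity layout are our own illustrative choices:

```python
import numpy as np

def parent_child(v_i, p_i, p_j):
    """Classify target i relative to target j by the sign of the inner
    product alpha = <V_i, delta_d_ij> (Eqs. 50-53)."""
    delta = np.asarray(p_i, float) - np.asarray(p_j, float)  # Eq. (51)
    alpha = float(np.dot(v_i, delta))                        # Eq. (52)
    if alpha > 0:
        return "parent"
    if alpha < 0:
        return "child"
    return "unknown"
```

A target moving away from its neighbor along the offset direction is thus labeled the parent node, matching the example of Fig. 4.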
Definition 1:
Let \({G_{1}=(V_{1},E_{1})}\) and \({G_{2}=(V_{2},E_{2})}\) be two graphs. We call \({G_{1}}\) and \({G_{2}}\) isomorphic, and write \({G_{1}\simeq G_{2}}\), if there exists a bijection \({\phi }\): \({V_{1}\rightarrow V_{2}}\) such that \({v_{i}v_{j}\in E_{1}\Leftrightarrow \phi (v_{i})\phi (v_{j})\in E_{2}}\) for all \({v_{i},v_{j}\in V_{1}}\).
Since the ordering of the members in the target state set carries no significance, the same group targets may have different adjacency matrices. We therefore introduce the concept of isomorphism to judge whether two different adjacency matrices describe the same group. Isomorphism of adjacency matrices can be judged via matrix similarity: if adjacency matrices A and B are isomorphic, there exists a permutation matrix P with \({{\text{ PAP }}^{-1}=B}\).
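A brute-force check of this similarity condition can be sketched as below; since \({P^{-1}=P^{T}}\) for a permutation matrix, we test \({PAP^{T}=B}\) over all permutations, which is only feasible for small subgroups (a practical system would use a dedicated graph-isomorphism routine):

```python
import numpy as np
from itertools import permutations

def isomorphic(A, B):
    """Test whether adjacency matrices A and B describe the same group
    by searching for a permutation matrix P with P A P^{-1} = B."""
    n = A.shape[0]
    if B.shape != A.shape:
        return False
    for perm in permutations(range(n)):
        P = np.eye(n, dtype=int)[list(perm)]  # permutation matrix
        if np.array_equal(P @ A @ P.T, B):
            return True
    return False
```

For example, two differently ordered 3-node paths are recognized as the same structure, while a path and a triangle are not.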
Pseudocode of the serial tracking algorithm for large-batch and multi-structure group targets is provided in Algorithm 1.
4.2 Joint estimation algorithm of SDVs and target states
In this paper, a two-stage estimation algorithm is proposed to jointly estimate the states of the group members and the SDVs. In the first stage, the states of the subgroup centers are estimated. In the second stage, the states of the group members and the SDVs are estimated. The specific flow of the algorithm is shown in Fig. 5.
4.2.1 Tracking of subgroup centers based on k-means clustering and GLMB algorithm
In this paper, k-means clustering and the GLMB algorithm are used to estimate the states of the subgroup centers. At time k, assume that the group targets' measurement set is \({Z_{k}^{g}=\left\{ z_{k,1}^{g},\ldots ,z_{k,m_k}^{g} \right\} }\); the mixture distribution of the ith measurement is:
$$\begin{aligned} f\left( z_{k,i}^{g}\left| \varTheta _k \right. \right) =\omega _{k,1}f\left( z_{k,i}^{g}\left| \theta _{k,1} \right. \right) +\cdot \cdot \cdot +\omega _{k,{\overline{g}}_k}f\left( z_{k,i}^{g}\left| \theta _{k,{\overline{g}}_k} \right. \right) \end{aligned}$$
(54)
where \({\left\{ \theta _{k,1},\ldots ,\theta _{k,{\overline{g}}_k} \right\} }\) represents the parameters of each distribution element, and \({\left\{ \omega _{k,1},\ldots ,\omega _{k,{\overline{g}}_k} \right\} }\) represents the mixture weights of each element, \({\overline{g}}_k\) represents the number of subgroups at time k.
It is assumed that all measurements originate from either group members or clutter, i.e., \({Z_k=Z_{k}^{g}\cup Z_{k}^{c}}\), where \({Z_{k}^{c}}\) represents the set of clutter measurements. The group members are assumed to obey a mixture of Gaussian distributions, and the clutter a uniform distribution. The mixture distribution of the ith measurement can then be expressed as:
$$\begin{aligned} f\left( z_{k,i}\left| \varTheta _k \right. \right) =\omega _{k,0}U\left( z_{k,i}\left| V_k \right. \right) +\omega _{k,1}N\left( z_{k,i};\mu _{k,1},D_{k,1} \right) +\cdot \cdot \cdot +\\ \omega _{k,{\overline{g}}_k}N\left( z_{k,i};\mu _{k,{\overline{g}}_k,}D_{k,{\overline{g}}_k} \right) \end{aligned}.$$
(55)
The purpose of k-means clustering is to assign the measurements to their most probable classes. Introducing the label set \({\mathcal{E}}_k=\left\{ 0,1,\ldots ,{\overline{g}}_k\right\}\), where label 0 denotes clutter, the complete measurement set can be expressed as \(Z_{k}=\left\{ \left( z_{k,1},e_{k,1} \right) ,\ldots ,\left( z_{k,M_k},e_{k,M_k} \right) \right\}\), where \({\left( z_{k,j},e_{k,j} \right) }\) indicates that the jth measurement at time k originates from class \({e_{k,j}}\). The specific steps of the k-means algorithm [34] are as follows:
(1). Initialize the positions of the \({{\overline{g}}_k}\) clustering centers at time k: \({{c}_{k,1},\ldots ,{c}_{k,{\overline{g}}_k}}\).
(2). Let \({\mathcalligra{f}}\): \(Z_k \rightarrow {\mathcal{E}}_k\) be the mapping from the measurement RFS to the label space. Compute the class to which the jth measurement belongs:
$$\begin{aligned} {\mathcalligra{f}}(z_{k,j},e_{k,j})={e_{k,j}},\quad e_{k,j}={\left\{ \begin{array}{ll}\underset{g}{{\text{ argmin }}}\,\Vert z_{k,j}-{c}_{k,g} \Vert ^2 &{} {\text{ if }} \ \underset{g}{\min }\,\Vert z_{k,j}-{c}_{k,g} \Vert ^2\le d_\eta \\ 0 &{} {\text{ otherwise }}\end{array}\right. }\quad g=1,\ldots ,{\overline{g}}_k,\ j=1,\ldots, M_k \end{aligned}.$$
(56)
where \(d_{\eta }\) denotes the threshold on the squared distance between a measurement and a group center.
(3). Update the positions of the clustering centers:
$$\begin{aligned} {c}_{k,g}=\frac{\sum _{j=1}^{M_k}{z_{k,j}}\,\delta _{g}\left( {\mathcalligra{f}}(z_{k,j},e_{k,j})\right) }{\sum _{j=1}^{M_k}\delta _{g}\left( {\mathcalligra{f}}(z_{k,j},e_{k,j})\right) },\qquad g=1,\ldots ,{\overline{g}}_k \end{aligned}$$
(57)
(4). Repeat (2) and (3) until the objective function J converges:
$$\begin{aligned} J_g =\sum _{j=1}^{m_k}{\Vert z_{k,j}-{c}_{k,g}} \Vert ^2\ \ {\text{ for }}\ e_{k,j}={g}, \ \ \ g=1,\ldots ,{\overline{g}}_k \end{aligned}.$$
(58)
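Steps (1)–(4) above can be sketched as follows; the function and variable names are ours, and the threshold is applied to the squared distance as in Eq. (56):

```python
import numpy as np

def kmeans_with_clutter(Z, centers, d_eta, max_iter=50):
    """K-means of Eqs. (56)-(58): assign each measurement to its nearest
    center if within the squared-distance threshold d_eta, otherwise to
    the clutter class 0, then recompute the centers until convergence."""
    Z = np.asarray(Z, float)
    centers = np.array(centers, dtype=float)
    for _ in range(max_iter):
        # squared distances of every measurement to every center
        d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        g = d2.argmin(axis=1)  # index of nearest center (0-based)
        labels = np.where(d2.min(axis=1) <= d_eta, g + 1, 0)  # 0 = clutter
        new_centers = centers.copy()
        for gi in range(len(centers)):
            members = Z[labels == gi + 1]
            if len(members):
                new_centers[gi] = members.mean(axis=0)  # Eq. (57)
        if np.allclose(new_centers, centers):  # objective (58) converged
            break
        centers = new_centers
    return labels, centers
```

On two well-separated clusters plus a distant outlier, the outlier receives the clutter label 0 and the centers settle on the cluster means.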
We assume that the subgroups are independent of each other; if the center of each subgroup is regarded as a special target, its state can be estimated by the GLMB algorithm. The algorithm framework is shown in Fig. 6, where \({x_{m,k}^o}\) represents the state of the mth subgroup center at time k, \({{\bar{x}}_{m,k+1}^o}\) denotes its GLMB-predicted state at time \({k+1}\), and \({{{\hat{x}}}_{m,k+1}^o}\) its GLMB-updated state at time \({k+1}\).
4.2.2 Joint estimation algorithm of SDVs and target states based on RLS
Before estimating the target states and the SDVs, the displacement vectors between the targets and the subgroup centers need to be estimated. In the gth subgroup, \({x_{k,g}^m}\) represents the state of the mth target at time k, \({x_{k,g}^o}\) denotes the state of the subgroup center at time k, and \({{b^{\prime }}_{k,g}^m}\) represents the displacement vector between the mth target and the subgroup center at time k. \({Z_{k,g}}\) denotes the set of all measurements from the gth subgroup at time k, with \(z_{k,g}^m\in Z_{k,g}\) the measurement generated by target m at time k. The relationship between them can be described as:
$$\begin{aligned} {x}_{k,g}^m=x_{k,g}^o+{b^{\prime }}_{k,g}^m+\omega _k^\prime \end{aligned}$$
(59)
$$\begin{aligned} {z}_{k,g}^m=H{x}_{k,g}^m+\nu _k^\prime \end{aligned}$$
(60)
where \({\omega _k^\prime }\) and \({\nu _k^\prime }\) represent the process noise and observation noise at time k, respectively. Substituting Eq. (59) into Eq. (60) gives:
$$\begin{aligned} {z}_{k,g}^m=H(x_{k,g}^o+{b^{\prime }}_{k,g}^m+\omega _k^\prime )+\nu _k^\prime =Hx_{k,g}^o+H{b^{\prime }}_{k,g}^m+H\omega _k^\prime +\nu _k^\prime \end{aligned}.$$
(61)
Treating \({H\omega _k^\prime +\nu _k^\prime }\) as a combined noise \({\omega _k^{\prime \prime }}\):
$$\begin{aligned} {z}_{k,g}^m=Hx_{k,g}^o+H{b^{\prime }}_{k,g}^m+\omega _k^{\prime \prime } \end{aligned}$$
(62)
$$\begin{aligned} {z}_{k,g}^m-Hx_{k,g}^o=H{b^{\prime }}_{k,g}^m+\omega _k^{\prime \prime } \end{aligned}.$$
(63)
The least-squares estimation of the displacement vector is described by the cost function:
$$\begin{aligned} J(\hat{b^{\prime }}_{k,g}^m)=({z}_{k,g}^m-Hx_{k,g}^o-H\hat{b^{\prime }}_{k,g}^m)^{T}({z}_{k,g}^m-Hx_{k,g}^o-H\hat{b^{\prime }}_{k,g}^m) \end{aligned}$$
(64)
Setting \({\frac{\partial J(\hat{b^{\prime }}_{k,g}^m)}{\partial \hat{b^{\prime }}_{k,g}^m}=0}\) gives:
$$\begin{aligned} \frac{\partial [({z}_{k,g}^m-Hx_{k,g}^o-H\hat{b^{\prime }}_{k,g}^m)^{T}({z}_{k,g}^m-Hx_{k,g}^o-H\hat{b^{\prime }}_{k,g}^m)]}{\partial \hat{b^{\prime }}_{k,g}^m}=0 \end{aligned}$$
(65)
$$\begin{aligned} -2H^{T}({z}_{k,g}^m-Hx_{k,g}^o-H\hat{b^{\prime }}_{k,g}^m)=0 \end{aligned}$$
(66)
that is:
$$\begin{aligned} H^{T}({z}_{k,g}^m-Hx_{k,g}^o-H\hat{b^{\prime }}_{k,g}^m)=0 \end{aligned}$$
(67)
$$\begin{aligned} H^{T}({z}_{k,g}^m-Hx_{k,g}^o)=H^{T}H\hat{b^{\prime }}_{k,g}^m \end{aligned}.$$
(68)
Assuming that \({H^{T}H}\) is full rank, left-multiplying both sides of Eq. (68) by \({(H^{T}H)^{-1}}\) gives:
$$\begin{aligned} \hat{b^{\prime }}_{k,g}^m=(H^{T}H)^{-1}H^{T}(z_{k,g}^m-Hx_{k,g}^o) \end{aligned}.$$
(69)
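Eq. (69) is a one-line computation in practice; the sketch below (with hypothetical variable names) uses a linear solve rather than an explicit inverse, under the same assumption that \({H^{T}H}\) is invertible:

```python
import numpy as np

def estimate_displacement(z, x_center, H):
    """Least-squares estimate of the displacement vector (Eq. 69):
    b' = (H^T H)^{-1} H^T (z - H x_center).
    Assumes H^T H is full rank, as stated in the text."""
    r = z - H @ x_center          # residual of the measurement w.r.t. the center
    return np.linalg.solve(H.T @ H, H.T @ r)
```

With a noise-free measurement and an identity observation matrix, the true displacement is recovered exactly.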
The state of the mth target in the gth subgroup at time k can then be estimated as:
$$\begin{aligned} {\hat{x}}_{k,g}^m=x_{k,g}^o+\hat{b^{\prime }}_{k,g}^m \end{aligned}.$$
(70)
The SDV between the mth member and the nth member can be estimated as:
$$\begin{aligned} {\hat{b}}_{k}(m,n)=\hat{b^{\prime }}_{k,g}^m-\hat{b^{\prime }}_{k,g}^n \end{aligned}.$$
(71)
Pseudocode of the two-stage estimation algorithm is provided in Algorithm 2.