3.1 The first step calculation
Define the auxiliary vector \(\boldsymbol {\theta }^{o}_{1} = {\left [\boldsymbol {v}^{oT}, r^{o}_{1}, \boldsymbol {\dot {v}}^{oT}, \dot {r}^{o}_{1}, \ddot {r}^{o}_{1} \right ]}^{T}\), where \(\boldsymbol {v}^{o} = \boldsymbol {u}^{o} - \boldsymbol {s}_{1}\) and \(\boldsymbol {\dot {v}}^{o} = \boldsymbol {\dot {u}}^{o} - \boldsymbol {\dot {s}}_{1}\). This auxiliary vector \(\boldsymbol {\theta }^{o}_{1}\) carries the information about \(\boldsymbol {u}^{o}\) and \(\boldsymbol {\dot {u}}^{o}\) of the unknown source, together with the three nuisance variables \(r^{o}_{1}, \dot {r}^{o}_{1}\) and \(\ddot {r}^{o}_{1}\). Rewrite the first equation in (4) as \(r^{o}_{i1} + r^{o}_{1} = r^{o}_{i}\), square both sides, and expand to obtain (7):
$$ \begin{aligned} &2(\boldsymbol{s}_{i} - \boldsymbol{s}_{1})^{T}{(\boldsymbol{u}^{o} - \boldsymbol{s}_{1})} + 2{r^{o}_{i1}}{r^{o}_{1}}\\ &= {(\boldsymbol{s}_{i} - \boldsymbol{s}_{1})^{T}}{(\boldsymbol{s}_{i} - \boldsymbol{s}_{1})} - r^{o2}_{i1} \end{aligned} $$
(7)
Taking the time derivative of (7) yields (8), which contains both TDOA and FDOA terms:
$$ \begin{aligned} &(\boldsymbol{\dot{s}}_{i} - \boldsymbol{\dot{s}}_{1})^{T}{(\boldsymbol{u}^{o} - \boldsymbol{s}_{1})} + {\dot{r}^{o}_{i1}}{r^{o}_{1}} + (\boldsymbol{s}_{i} - \boldsymbol{s}_{1})^{T}{(\boldsymbol{\dot{u}}^{o} - \boldsymbol{\dot{s}}_{1})} \\ &+ {r^{o}_{i1}}{\dot{r}^{o}_{1}} = {(\boldsymbol{\dot{s}}_{i} - \boldsymbol{\dot{s}}_{1})^{T}}{(\boldsymbol{s}_{i} - \boldsymbol{s}_{1})} - {r^{o}_{i1}}{\dot{r}^{o}_{i1}} \end{aligned} $$
(8)
Then, taking the time derivative of (8) results in (9), which contains TDOA, FDOA, and DDR:
$$ \begin{aligned} &{\ddot{r}^{o}_{i1}}{r^{o}_{1}} + 2{(\boldsymbol{\dot{s}}_{i} - \boldsymbol{\dot{s}}_{1})^{T}}{(\boldsymbol{\dot{u}}^{o} - \boldsymbol{\dot{s}}_{1})} + 2{\dot{r}^{o}_{i1}}{\dot{r}^{o}_{1}} + {r^{o}_{i1}}{\ddot{r}^{o}_{1}} \\ &= {(\boldsymbol{\dot{s}}_{i} - \boldsymbol{\dot{s}}_{1})^{T}}{(\boldsymbol{\dot{s}}_{i} - \boldsymbol{\dot{s}}_{1})} - {\ddot{r}^{o}_{i1}}{r^{o}_{i1}} - {\dot{r}^{o2}_{i1}} \end{aligned} $$
(9)
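As a sanity check (not part of the original derivation), the identities (7), (8), and (9) can be verified numerically for a randomly drawn constant-velocity geometry, with the range, range rate, and DDR computed from their definitions; all names and values below are illustrative:

```python
import numpy as np

# Sanity check of identities (7)-(9) on a random constant-velocity geometry.
# r_i = ||u - s_i||; the range rate and DDR follow by differentiation.
rng = np.random.default_rng(0)
u, du = rng.normal(size=3) * 100, rng.normal(size=3)   # source position/velocity
s = rng.normal(size=(5, 3)) * 10                        # sensor positions
ds = rng.normal(size=(5, 3))                            # sensor velocities

r = np.linalg.norm(u - s, axis=1)                               # ranges
dr = np.einsum('ij,ij->i', du - ds, u - s) / r                  # range rates
ddr = (np.einsum('ij,ij->i', du - ds, du - ds) - dr**2) / r     # DDRs

i = 2                                                   # any sensor i > 1
r_i1, dr_i1, ddr_i1 = r[i] - r[0], dr[i] - dr[0], ddr[i] - ddr[0]

# (7)
lhs7 = 2 * (s[i] - s[0]) @ (u - s[0]) + 2 * r_i1 * r[0]
rhs7 = (s[i] - s[0]) @ (s[i] - s[0]) - r_i1**2
# (8)
lhs8 = ((ds[i] - ds[0]) @ (u - s[0]) + dr_i1 * r[0]
        + (s[i] - s[0]) @ (du - ds[0]) + r_i1 * dr[0])
rhs8 = (ds[i] - ds[0]) @ (s[i] - s[0]) - r_i1 * dr_i1
# (9)
lhs9 = (ddr_i1 * r[0] + 2 * (ds[i] - ds[0]) @ (du - ds[0])
        + 2 * dr_i1 * dr[0] + r_i1 * ddr[0])
rhs9 = (ds[i] - ds[0]) @ (ds[i] - ds[0]) - ddr_i1 * r_i1 - dr_i1**2
```

The three identities hold to machine precision for any geometry, which is a quick way to catch sign errors when implementing the equations.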
When TDOA, FDOA, and DDR are noisy, the noisy counterparts of the true variables \(r^{o}_{i1}, {\dot {r}}^{o}_{i1}\) and \({\ddot {r}}^{o}_{i1}\) are \(r_{i1}, \dot {r}_{i1}\) and \(\ddot {r}_{i1}\), respectively. Replacing \(r^{o}_{i1}, {\dot {r}}^{o}_{i1}\) and \({\ddot {r}}^{o}_{i1}\) by \(r_{i1} - \Delta {r_{i1}}, \dot {r}_{i1} - \Delta {\dot {r}_{i1}}\) and \(\ddot {r}_{i1} - \Delta {\ddot {r}_{i1}}\), (7), (8), and (9) can be rewritten as
$$ \begin{aligned} &-2{r^{o}_{i}}{\Delta{r_{i1}}} = {(\boldsymbol{s}_{i} - \boldsymbol{s}_{1})^{T}}{(\boldsymbol{s}_{i} - \boldsymbol{s}_{1})} - r^{2}_{i1} \\ &- 2(\boldsymbol{s}_{i} - \boldsymbol{s}_{1})^{T}{(\boldsymbol{u}^{o} - \boldsymbol{s}_{1})} - 2{r_{i1}}{r^{o}_{1}} \end{aligned} $$
(10)
$$ \begin{aligned} &-{\dot{r}^{o}_{i}}{\Delta{r_{i1}}} - {r^{o}_{i}}{\Delta{\dot{r}_{i1}}} = {(\boldsymbol{\dot{s}}_{i} - \boldsymbol{\dot{s}}_{1})^{T}}{(\boldsymbol{s}_{i} - \boldsymbol{s}_{1})} \\ &- {r_{i1}}{\dot{r}_{i1}} - (\boldsymbol{\dot{s}}_{i} - \boldsymbol{\dot{s}}_{1})^{T}{(\boldsymbol{u}^{o} - \boldsymbol{s}_{1})} - {\dot{r}_{i1}}{r^{o}_{1}} \\ &- (\boldsymbol{s}_{i} - \boldsymbol{s}_{1})^{T}{(\boldsymbol{\dot{u}}^{o} - \boldsymbol{\dot{s}}_{1})} - {r_{i1}}{\dot{r}^{o}_{1}} \end{aligned} $$
(11)
$$ \begin{aligned} &-{\ddot{r}^{o}_{i}}{\Delta{r}_{i1}} - 2{\dot{r}^{o}_{i}}{\Delta{\dot{r}_{i1}}} - {r^{o}_{i}}{\Delta{\ddot{r}_{i1}}} = \\ &{(\boldsymbol{\dot{s}}_{i} - \boldsymbol{\dot{s}}_{1})^{T}}{(\boldsymbol{\dot{s}}_{i} - \boldsymbol{\dot{s}}_{1})} - {\ddot{r}_{i1}}{r_{i1}} - {\dot{r}^{2}_{i1}} - {\ddot{r}_{i1}}{r^{o}_{1}} \\ &- 2{(\boldsymbol{\dot{s}}_{i} - \boldsymbol{\dot{s}}_{1})^{T}}{(\boldsymbol{\dot{u}}^{o} - \boldsymbol{\dot{s}}_{1})} - 2{\dot{r}_{i1}}{\dot{r}^{o}_{1}} - {r_{i1}}{\ddot{r}^{o}_{1}} \end{aligned} $$
(12)
In the above derivation, the second-order error terms are neglected. Substituting \(i = 2, 3, \ldots, M\) into (10), (11), and (12) and stacking the results yields the following linear system of equations:
$$ {\boldsymbol{\epsilon}}_{1} = {\boldsymbol{G}_{1}}{\boldsymbol{\theta}^{o}_{1}}  \boldsymbol{h}_{1} $$
(13)
where ε_{1} is:
$$ \boldsymbol{\epsilon}_{1} = {\boldsymbol{B}_{1}} {\left[ {\Delta{\boldsymbol{r}^{T}}},{\Delta{\boldsymbol{\dot{r}}^{T}}},{\Delta{\boldsymbol{\ddot{r}}^{T}}} \right]}^{T}={\boldsymbol{B}_{1}}\Delta\boldsymbol{\alpha} $$
(14)
$$ \boldsymbol{B}_{1} = \left[ \begin{array}{ccc} 2\boldsymbol{B} & \boldsymbol{O} & \boldsymbol{O} \\ \boldsymbol{\dot{B}} & \boldsymbol{B} & \boldsymbol{O} \\ \boldsymbol{\ddot{B}} & 2\boldsymbol{\dot{B}} & \boldsymbol{B} \end{array} \right] $$
(15)
$$\begin{array}{*{20}l} \boldsymbol{B} = diag \left\{ r^{o}_{2},r^{o}_{3},...,r^{o}_{M} \right\} \\ \boldsymbol{\dot{B}} = diag \left\{ \dot{r}^{o}_{2},\dot{r}^{o}_{3},...,\dot{r}^{o}_{M} \right\} \\ \boldsymbol{\ddot{B}} = diag \left\{ \ddot{r}^{o}_{2},\ddot{r}^{o}_{3},...,\ddot{r}^{o}_{M} \right\} \end{array} $$
(16)
The other two variables in (13) are expressed as the following:
$$ \boldsymbol{h}_{1} = \left[ \begin{array}{c} {(\boldsymbol{s}_{2} - \boldsymbol{s}_{1})^{T}}{(\boldsymbol{s}_{2} - \boldsymbol{s}_{1})} - r^{2}_{21} \\ \vdots \\ {(\boldsymbol{s}_{M} - \boldsymbol{s}_{1})^{T}}{(\boldsymbol{s}_{M} - \boldsymbol{s}_{1})} - r^{2}_{M1} \\ {(\boldsymbol{\dot{s}}_{2} - \boldsymbol{\dot{s}}_{1})^{T}}{(\boldsymbol{s}_{2} - \boldsymbol{s}_{1})} - {r_{21}}{\dot{r}_{21}} \\ \vdots \\ {(\boldsymbol{\dot{s}}_{M} - \boldsymbol{\dot{s}}_{1})^{T}}{(\boldsymbol{s}_{M} - \boldsymbol{s}_{1})} - {r_{M1}}{\dot{r}_{M1}} \\ {(\boldsymbol{\dot{s}}_{2} - \boldsymbol{\dot{s}}_{1})^{T}}{(\boldsymbol{\dot{s}}_{2} - \boldsymbol{\dot{s}}_{1})} - {\ddot{r}_{21}}{r_{21}} - {\dot{r}^{2}_{21}} \\ \vdots \\ {(\boldsymbol{\dot{s}}_{M} - \boldsymbol{\dot{s}}_{1})^{T}}{(\boldsymbol{\dot{s}}_{M} - \boldsymbol{\dot{s}}_{1})} - {\ddot{r}_{M1}}{r_{M1}} - {\dot{r}^{2}_{M1}} \end{array} \right] $$
(17)
and
$$ \boldsymbol{G}_{1} = \left[ \begin{array}{ccccc} 2(\boldsymbol{s}_{2} - \boldsymbol{s}_{1})^{T} & 2r_{21} & \boldsymbol{0}_{1\times3} & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots\\ 2(\boldsymbol{s}_{M} - \boldsymbol{s}_{1})^{T} & 2r_{M1} & \boldsymbol{0}_{1\times3} & 0 & 0 \\ (\boldsymbol{\dot{s}}_{2} - \boldsymbol{\dot{s}}_{1})^{T} & \dot{r}_{21} & (\boldsymbol{s}_{2} - \boldsymbol{s}_{1})^{T} & r_{21} & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ (\boldsymbol{\dot{s}}_{M} - \boldsymbol{\dot{s}}_{1})^{T} & \dot{r}_{M1} & (\boldsymbol{s}_{M} - \boldsymbol{s}_{1})^{T} & r_{M1} & 0 \\ \boldsymbol{0}_{1\times3} & \ddot{r}_{21} & 2(\boldsymbol{\dot{s}}_{2} - \boldsymbol{\dot{s}}_{1})^{T} & 2\dot{r}_{21} & r_{21} \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \boldsymbol{0}_{1\times3} & \ddot{r}_{M1} & 2(\boldsymbol{\dot{s}}_{M} - \boldsymbol{\dot{s}}_{1})^{T} & 2\dot{r}_{M1} & r_{M1} \end{array} \right]. $$
(18)
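The stacking of (10), (11), and (12) into (13) can be sketched numerically: with noise-free TDOA, FDOA, and DDR, the residual \(\boldsymbol{\epsilon}_{1} = \boldsymbol{G}_{1}\boldsymbol{\theta}^{o}_{1} - \boldsymbol{h}_{1}\) must vanish. The geometry below is made up for illustration:

```python
import numpy as np

# With noise-free TDOA/FDOA/DDR, epsilon_1 = G_1 theta_1^o - h_1 in (13)
# must be zero. M = 5 sensors; all geometry values are illustrative.
rng = np.random.default_rng(1)
M = 5
u, du = rng.normal(size=3) * 100, rng.normal(size=3)
s, ds = rng.normal(size=(M, 3)) * 10, rng.normal(size=(M, 3))

r = np.linalg.norm(u - s, axis=1)
dr = np.einsum('ij,ij->i', du - ds, u - s) / r
ddr = (np.einsum('ij,ij->i', du - ds, du - ds) - dr**2) / r
r1, dr1, ddr1 = r - r[0], dr - dr[0], ddr - ddr[0]   # differences to sensor 1

# theta_1^o = [v^o; r_1^o; v_dot^o; r_1_dot^o; r_1_ddot^o]
theta1 = np.concatenate([u - s[0], [r[0]], du - ds[0], [dr[0]], [ddr[0]]])

ss, dss = s[1:] - s[0], ds[1:] - ds[0]               # s_i - s_1 and its rate
h1 = np.concatenate([                                 # (17)
    np.einsum('ij,ij->i', ss, ss) - r1[1:]**2,
    np.einsum('ij,ij->i', dss, ss) - r1[1:] * dr1[1:],
    np.einsum('ij,ij->i', dss, dss) - ddr1[1:] * r1[1:] - dr1[1:]**2,
])
Z, z = np.zeros((M - 1, 3)), np.zeros((M - 1, 1))
G1 = np.block([                                       # (18)
    [2 * ss, 2 * r1[1:, None], Z, z, z],
    [dss, dr1[1:, None], ss, r1[1:, None], z],
    [Z, ddr1[1:, None], 2 * dss, 2 * dr1[1:, None], r1[1:, None]],
])
eps1 = G1 @ theta1 - h1                               # (13), zero when noiseless

# B_1 of (15)-(16), used for the weight W_1 = B_1 Q B_1^T in (24)
B, dB, ddB = np.diag(r[1:]), np.diag(dr[1:]), np.diag(ddr[1:])
O = np.zeros((M - 1, M - 1))
B1 = np.block([[2 * B, O, O], [dB, B, O], [ddB, 2 * dB, B]])
```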
Meanwhile, \(r_{1}=\lVert \boldsymbol {u}-\boldsymbol {s}_{1}\rVert\), \(\dot {r}_{1}\), and \(\ddot {r}_{1}\) are computed in the same way as \(r^{o}_{1}, \dot {r}^{o}_{1}\), and \(\ddot {r}^{o}_{1}\) in (1), (2), and (3) (replacing \(\boldsymbol {u}^{o}\) by \(\boldsymbol {u}\)). By introducing \(\boldsymbol {\theta }_{1} = {\left [ \boldsymbol {v}^{T}, r_{1}, \boldsymbol {\dot {v}}^{T}, \dot {r}_{1}, \ddot {r}_{1} \right ]}^{T}\) (\(\boldsymbol {v} = \boldsymbol {u} - \boldsymbol {s}_{1}, \boldsymbol {\dot {v}} = \boldsymbol {\dot {u}} - \boldsymbol {\dot {s}}_{1}\)), the relations among \(r_{1}, \dot {r}_{1}\), and \(\ddot {r}_{1}\) are arranged in the matrix form:
$$ {\boldsymbol{\theta}^{T}_{1}}{\boldsymbol{M}_{1}}{\boldsymbol{\theta}_{1}} = 0 $$
(19)
$$ {\boldsymbol{\theta}^{T}_{1}}{\boldsymbol{M}_{2}}{\boldsymbol{\theta}_{1}} = 0 $$
(20)
$$ {\boldsymbol{\theta}^{T}_{1}}{\boldsymbol{M}_{3}}{\boldsymbol{\theta}_{1}} = 0 $$
(21)
where M_{1}, M_{2}, and M_{3} are given in Appendix A.
Combining (19), (20), and (21) gives (22):
$$ {\boldsymbol{\theta}^{T}_{1}}{\boldsymbol{M}}{\boldsymbol{\theta}_{1}} = 0 $$
(22)
where M=M_{1}+M_{2}+M_{3}.
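Appendix A is not reproduced here; the sketch below uses one consistent choice of symmetric \(\boldsymbol{M}_{1}, \boldsymbol{M}_{2}, \boldsymbol{M}_{3}\) (an assumption on our part, not the paper's exact matrices) that encodes the identities \(\boldsymbol{v}^{T}\boldsymbol{v} - r^{2}_{1} = 0\), \(\boldsymbol{\dot{v}}^{T}\boldsymbol{v} - \dot{r}_{1}r_{1} = 0\), and \(\boldsymbol{\dot{v}}^{T}\boldsymbol{\dot{v}} - \ddot{r}_{1}r_{1} - \dot{r}^{2}_{1} = 0\) behind (19), (20), and (21):

```python
import numpy as np

# ASSUMED forms of M_1, M_2, M_3 (Appendix A is not shown) for
# theta_1 = [v; r1; v_dot; r1_dot; r1_ddot] (9 x 1):
#   theta' M_1 theta = v'v - r1^2
#   theta' M_2 theta = v_dot'v - r1_dot * r1
#   theta' M_3 theta = v_dot'v_dot - r1_ddot * r1 - r1_dot^2
I3 = np.eye(3)
M1 = np.zeros((9, 9)); M1[:3, :3] = I3; M1[3, 3] = -1.0
M2 = np.zeros((9, 9)); M2[:3, 4:7] = I3 / 2; M2[4:7, :3] = I3 / 2
M2[3, 7] = M2[7, 3] = -0.5
M3 = np.zeros((9, 9)); M3[4:7, 4:7] = I3; M3[7, 7] = -1.0
M3[3, 8] = M3[8, 3] = -0.5
Mc = M1 + M2 + M3                    # combined constraint matrix M of (22)

# Any consistent theta_1 must satisfy all four quadratic constraints.
rng = np.random.default_rng(2)
u, du = rng.normal(size=3) * 100, rng.normal(size=3)
s1, ds1 = rng.normal(size=3) * 10, rng.normal(size=3)
v, dv = u - s1, du - ds1
r1 = np.linalg.norm(v)
dr1 = dv @ v / r1
ddr1 = (dv @ dv - dr1**2) / r1
theta1 = np.concatenate([v, [r1], dv, [dr1], [ddr1]])
residuals = [theta1 @ Mk @ theta1 for Mk in (M1, M2, M3, Mc)]
```

Splitting the off-diagonal terms symmetrically (the 1/2 factors) keeps each matrix symmetric, which is what the derivation after (26) relies on.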
The localization problem is now to solve the linear system of equations in (13) under the constraint of (22). In the works of Hu [32] and Liu [33], when the WLS algorithm is employed to solve (13), the elements of θ_{1} are assumed to be independent, although they are in fact correlated. This can degrade the accuracy of the initial solution and thus the refinement in the second step.
In this paper, the correlation among the elements of θ_{1} is imposed through the constraint of (22), so the first-step localization problem is to solve (13) subject to (22). Introducing the Lagrange multiplier λ, this constrained problem can be solved by minimizing the following cost function:
$$ \boldsymbol{J}{(\boldsymbol{\theta}_{1}, \lambda)} = {({\boldsymbol{G}_{1}}{\boldsymbol{\theta}_{1}} - \boldsymbol{h}_{1})^{T}}{\boldsymbol{W}^{-1}_{1}}{({\boldsymbol{G}_{1}}{\boldsymbol{\theta}_{1}} - \boldsymbol{h}_{1})} + {\lambda}{{\boldsymbol{\theta}^{T}_{1}}{\boldsymbol{M}}{\boldsymbol{\theta}_{1}}} $$
(23)
where W_{1} is the weight matrix in WLS:
$$ \boldsymbol{W}_{1} = \boldsymbol{E}\left[ {\boldsymbol{\epsilon}_{1}}{\boldsymbol{\epsilon}^{T}_{1}} \right] = \boldsymbol{B}_{1}\boldsymbol{Q}{\boldsymbol{B}^{T}_{1}} $$
(24)
Note that in obtaining W_{1}, the covariance matrix Q of Δα is used.
To minimize (23), the derivative of J(θ_{1},λ) with respect to θ_{1} is set to zero:
$$ \begin{aligned} \frac{\partial{\boldsymbol{J}{(\boldsymbol{\theta}_{1}, \lambda)}}}{\partial{\boldsymbol{\theta}_{1}}} &= 2({\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{G}_{1}} + {\lambda}{\boldsymbol{M}}){\boldsymbol{\theta}_{1}} - 2{\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{h}_{1}} = \boldsymbol{0} \end{aligned} $$
(25)
The solution of (25) is
$$ \widehat{\boldsymbol{\theta}}_{1} = {({\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{G}_{1}} + {\lambda}{\boldsymbol{M}})^{-1}}{\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{h}_{1}} $$
(26)
As W_{1} and M are symmetric, \({\boldsymbol {G}^{T}_{1}}{\boldsymbol {W}^{-1}_{1}}{\boldsymbol {G}_{1}} + {\lambda }{\boldsymbol {M}}\) is also symmetric. Substituting the estimate of θ_{1} in (26) into the constraint \({\boldsymbol {\theta }^{T}_{1}}{\boldsymbol {M}}{\boldsymbol {\theta }_{1}} = 0\), the following equations are obtained:
$$ \begin{aligned} &{({\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{h}_{1}})^{T}}{({\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{G}_{1}} + {\lambda}{\boldsymbol{M}})^{-T}}{\boldsymbol{M}}\cdot \\ &{({\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{G}_{1}} + {\lambda}{\boldsymbol{M}})^{-1}}{\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{h}_{1}} = \\ &{({\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{h}_{1}})^{T}}{\boldsymbol{M}^{-1}}{({\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{G}_{1}}{\boldsymbol{M}^{-1}} + \lambda{\boldsymbol{I}})^{-1}}\cdot \\ &{({\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{G}_{1}}{\boldsymbol{M}^{-1}} + \lambda{\boldsymbol{I}})^{-1}}{\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{h}_{1}} = 0 \end{aligned} $$
(27)
Employing the eigenvalue factorization, \({\boldsymbol {G}^{T}_{1}}{\boldsymbol {W}^{-1}_{1}}{\boldsymbol {G}_{1}}{\boldsymbol {M}^{-1}}\) can be diagonalized as
$$ {\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{G}_{1}}{\boldsymbol{M}^{-1}} = {\boldsymbol{U}}{\boldsymbol{\varLambda}}{\boldsymbol{U}^{-1}} $$
(28)
where Λ=diag{η_{1},η_{2},…,η_{9}}. Substitute (28) into (27):
$$ \begin{aligned} f(\lambda) &= {\boldsymbol{p}^{T}}{(\boldsymbol{\varLambda} + \lambda{\boldsymbol{I}})^{-2}}{\boldsymbol{q}} = \sum_{i = 1}^{9}{\frac{{p_{i}}{q_{i}}}{(\lambda + \eta_{i})^{2}}} = 0 \end{aligned} $$
(29)
where \(\boldsymbol {p} = \left [ p_{1}, p_{2}, \ldots, p_{9} \right ]^{T} = {\boldsymbol {U}^{T}}{\boldsymbol {M}^{-T}}{\boldsymbol {G}^{T}_{1}}{\boldsymbol {W}^{-1}_{1}}{\boldsymbol {h}_{1}}\) and \(\boldsymbol {q} = \left [ q_{1}, q_{2}, \ldots, q_{9} \right ]^{T} = {\boldsymbol {U}^{-1}}{\boldsymbol {G}^{T}_{1}}{\boldsymbol {W}^{-1}_{1}}{\boldsymbol {h}_{1}}\). Multiplying both sides by \({\prod _{j = 1}^{9}{(\lambda + \eta _{j})^{2}}}\) clears the denominators and yields a polynomial equation of degree 16:
$$ {\sum_{i = 1}^{9}{{p_{i}}{q_{i}}{\prod_{j \neq i}{(\lambda + \eta_{j})^{2}}}}} = 0 $$
(30)
The polynomial equation (30) can be solved efficiently, as shown in [20].
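A minimal sketch of this root-finding step: clearing the denominators of (29) gives the polynomial (30), whose real roots are the candidate multipliers. For brevity the illustration uses 3 eigenvalues instead of 9 (so the polynomial has degree 4 rather than 16); p, q, and η here are made-up numbers, not from the paper:

```python
import numpy as np

# Candidate Lagrange multipliers: form the polynomial of (30) and keep the
# real roots. With n eigenvalues the degree is 2(n - 1): 16 for n = 9,
# 4 for the n = 3 toy case below.
eta = np.array([1.0, 2.0, 3.0])     # stand-ins for the eigenvalues eta_i of (28)
pq = np.array([1.0, 1.0, -1.0])     # stand-ins for the products p_i * q_i

poly = np.poly1d([0.0])
for i in range(len(eta)):
    term = np.poly1d([pq[i]])
    for j in range(len(eta)):
        if j != i:
            term = term * np.poly1d([1.0, eta[j]])**2   # (lambda + eta_j)^2
    poly = poly + term                # sum_i p_i q_i prod_{j != i} (lambda + eta_j)^2

roots = np.roots(poly.coeffs)
real_roots = roots[np.abs(roots.imag) < 1e-8].real       # candidate multipliers

def f(lam):
    """f(lambda) of (29); it vanishes at every real root of (30)."""
    return np.sum(pq / (lam + eta)**2)
```

Each real root is then substituted into (26), and the one minimizing (23) is kept, as described in Section 3.3.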
3.2 The second step calculation
In the first step, a system of linear equations is constructed by introducing the auxiliary vector \(\boldsymbol {\theta }^{o}_{1}\), and the initial estimate \(\widehat {\boldsymbol {\theta }}_{1}\) is obtained using the Lagrange multiplier. In this second step, the correlation between the redundant variables \(r^{o}_{1}, \dot {r}^{o}_{1}, \ddot {r}^{o}_{1}\) and \(\boldsymbol {u}^{o}, \boldsymbol {\dot {u}}^{o}\) in (1), (2), and (3) is used again to further refine the initial solution. The relationship between the redundant variables and the source is as follows:
$$\begin{array}{*{20}l} r^{o2}_{1} = (\boldsymbol{u}^{o} - \boldsymbol{s}_{1})^{T}(\boldsymbol{u}^{o} - \boldsymbol{s}_{1}) \\ {\dot{r}^{o}_{1}}{r^{o}_{1}} = (\boldsymbol{\dot{u}}^{o} - \boldsymbol{\dot{s}}_{1})^{T}(\boldsymbol{u}^{o} - \boldsymbol{s}_{1}) \\ {\ddot{r}^{o}_{1}}{r^{o}_{1}} + \dot{r}^{o2}_{1} = (\boldsymbol{\dot{u}}^{o} - \boldsymbol{\dot{s}}_{1})^{T}(\boldsymbol{\dot{u}}^{o} - \boldsymbol{\dot{s}}_{1}) \end{array} $$
(31)
Define \(\boldsymbol {\widehat {\theta }}^{\prime }_{1}\) containing the estimated values as \(\boldsymbol {\widehat {\theta }}^{\prime }_{1} = {\left [ \boldsymbol {\widehat {u}}^{T}, \widehat {r}_{1}, \boldsymbol {\widehat {\dot {u}}}^{T}, \widehat {\dot {r}}_{1}, \widehat {\ddot {r}}_{1} \right ]}^{T}\). Substituting \(\boldsymbol {u}^{o} = \boldsymbol {\widehat {u}} - \Delta {\boldsymbol {u}}, \boldsymbol {\dot {u}}^{o} = \boldsymbol {\widehat {\dot {u}}} - \Delta {\boldsymbol {\dot {u}}}, r^{o}_{1} = \widehat {r}_{1} - \Delta {r}_{1}, \dot {r}^{o}_{1} = \widehat {\dot {r}}_{1} - \Delta {\dot {r}}_{1}\) and \(\ddot {r}^{o}_{1} = \widehat {\ddot {r}}_{1} - \Delta {\ddot {r}}_{1}\) into (31) (again ignoring second-order error terms), the following equations are obtained:
$$ 2{\boldsymbol{\widehat{u}}^{T}}{\Delta{\boldsymbol{u}}} - 2{\widehat{r}_{1}}{\Delta{r_{1}}} = ({\boldsymbol{\widehat{u}}^{T}}{\boldsymbol{\widehat{u}}} + {\boldsymbol{s}^{T}_{1}}{\boldsymbol{s}_{1}} - \widehat{r}^{2}_{1}) - 2{\boldsymbol{s}^{T}_{1}}{\boldsymbol{u}^{o}} $$
(32)
$$ \begin{aligned} &{\boldsymbol{\widehat{\dot{u}}}^{T}}{\Delta{\boldsymbol{u}}} - {\widehat{\dot{r}}_{1}}{\Delta{r_{1}}} + {\boldsymbol{\widehat{u}}^{T}}{\Delta{\boldsymbol{\dot{u}}}} - {\widehat{r}_{1}}{\Delta{\dot{r}_{1}}} = \\ &({\boldsymbol{\widehat{\dot{u}}}^{T}}{\boldsymbol{\widehat{u}}} + {\boldsymbol{\dot{s}}^{T}_{1}}{\boldsymbol{s}_{1}} - {\widehat{\dot{r}}_{1}}{\widehat{r}_{1}}) - {\boldsymbol{\dot{s}}^{T}_{1}}{\boldsymbol{u}^{o}} - {\boldsymbol{s}^{T}_{1}}{\boldsymbol{\dot{u}}^{o}} \end{aligned} $$
(33)
$$ \begin{aligned} &-{\widehat{\ddot{r}}_{1}}{\Delta{r_{1}}} + 2{\boldsymbol{\widehat{\dot{u}}}^{T}}{\Delta{\boldsymbol{\dot{u}}}} - 2{\widehat{\dot{r}}_{1}}{\Delta{\dot{r}}_{1}} - {\widehat{r}_{1}}{\Delta{\ddot{r}}_{1}} \\ &= ({\boldsymbol{\dot{s}}^{T}_{1}}{\boldsymbol{\dot{s}}_{1}} + {\boldsymbol{\widehat{\dot{u}}}^{T}}{\boldsymbol{\widehat{\dot{u}}}} - {\widehat{\ddot{r}}_{1}}{\widehat{r}_{1}} - \widehat{\dot{r}}^{2}_{1}) - 2{\boldsymbol{\dot{s}}^{T}_{1}}{\boldsymbol{\dot{u}}^{o}} \end{aligned} $$
(34)
Similar to the process in obtaining (13), the following linear system of equations can be obtained:
$$ \begin{aligned} \boldsymbol{\epsilon}_{2} &= \boldsymbol{B}_{2} {\left[ \Delta{\boldsymbol{u}}^{T},\Delta{r_{1}},\Delta{\dot{\boldsymbol{u}}}^{T},\Delta{\dot{r}_{1}},\Delta{\ddot{r}_{1}} \right]}^{T} \\ &= \boldsymbol{B}_{2}{\Delta{\boldsymbol{\widehat{\theta}}^{\prime}_{1}}} \\ &= \boldsymbol{h}_{2}  {\boldsymbol{G}_{2}}{\boldsymbol{\theta}^{o}_{2}} \end{aligned} $$
(35)
where \(\boldsymbol {\theta }^{o}_{2} = { \left [ \boldsymbol {u}^{oT}, \boldsymbol {\dot {u}}^{oT} \right ]}^{T}\) and
$$\begin{array}{*{20}l} &\boldsymbol{h}_{2} = \left[ \begin{array}{c} \widehat{\boldsymbol{u}} \\ {\widehat{\boldsymbol{u}}^{T}}{\widehat{\boldsymbol{u}}} + {\boldsymbol{s}^{T}_{1}}{\boldsymbol{s}_{1}} - \widehat{r}^{2}_{1} \\ \widehat{\dot{\boldsymbol{u}}} \\ {\widehat{\dot{\boldsymbol{u}}}^{T}}{\widehat{\boldsymbol{u}}} + {\boldsymbol{\dot{s}}^{T}_{1}}{\boldsymbol{s}_{1}} - {\widehat{\dot{r}}_{1}}{\widehat{r}_{1}} \\ {\boldsymbol{\dot{s}}^{T}_{1}}{\boldsymbol{\dot{s}}_{1}} + {\widehat{\dot{\boldsymbol{u}}}^{T}}{\widehat{\dot{\boldsymbol{u}}}} - {\widehat{\ddot{r}}_{1}}{\widehat{r}_{1}} - \widehat{\dot{r}}^{2}_{1} \end{array} \right], \boldsymbol{G}_{2} = \left[ \begin{array}{cc} \boldsymbol{I}_{3\times3} & \boldsymbol{O}_{3\times3} \\ 2\boldsymbol{s}^{T}_{1} & \boldsymbol{0}_{1\times3} \\ \boldsymbol{O}_{3\times3} & \boldsymbol{I}_{3\times3} \\ \boldsymbol{\dot{s}}^{T}_{1} & \boldsymbol{s}^{T}_{1} \\ \boldsymbol{0}_{1\times3} & 2\boldsymbol{\dot{s}}^{T}_{1} \end{array} \right] \\ &\boldsymbol{B}_{2} = \left[ \begin{array}{ccccc} \boldsymbol{I}_{3\times3} & \boldsymbol{0}_{3\times1} & \boldsymbol{O}_{3\times3} & \boldsymbol{0}_{3\times1} & \boldsymbol{0}_{3\times1} \\ 2\boldsymbol{\widehat{u}}^{T} & -2\widehat{r}_{1} & \boldsymbol{0}_{1\times3} & 0 & 0 \\ \boldsymbol{O}_{3\times3} & \boldsymbol{0}_{3\times1} & \boldsymbol{I}_{3\times3} & \boldsymbol{0}_{3\times1} & \boldsymbol{0}_{3\times1} \\ \boldsymbol{\widehat{\dot{u}}}^{T} & -\widehat{\dot{r}}_{1} & \boldsymbol{\widehat{u}}^{T} & -\widehat{r}_{1} & 0 \\ \boldsymbol{0}_{1\times3} & -\widehat{\ddot{r}}_{1} & 2\boldsymbol{\widehat{\dot{u}}}^{T} & -2\widehat{\dot{r}}_{1} & -\widehat{r}_{1} \end{array} \right] \end{array} $$
(36)
Based on WLS, the solution of \(\boldsymbol {\widehat {\theta }}_{2}\) is directly obtained as
$$ \boldsymbol{\widehat{\theta}}_{2} = ({\boldsymbol{G}^{T}_{2}}{\boldsymbol{W}^{-1}_{2}}{\boldsymbol{G}_{2}})^{-1}{\boldsymbol{G}^{T}_{2}}{\boldsymbol{W}^{-1}_{2}}{\boldsymbol{h}_{2}} $$
(37)
where the weighting matrix W_{2} is
$$ \boldsymbol{W}_{2} = \boldsymbol{E}\left[ {\boldsymbol{\epsilon}_{2}}{\boldsymbol{\epsilon}^{T}_{2}} \right] = \boldsymbol{B}_{2}{\boldsymbol{E}\left[ \Delta{\boldsymbol{\theta}^{\prime}_{1}}\Delta{\boldsymbol{\theta}^{{\prime}T}_{1}} \right]}{\boldsymbol{B}^{T}_{2}} $$
(38)
According to [33], the covariance matrix of \(\boldsymbol {\widehat {\theta }}^{\prime }_{1}\) is
$$ cov(\boldsymbol{\widehat{\theta}}^{\prime}_{1}) = \boldsymbol{E} \left[ \Delta{\boldsymbol{\theta}^{\prime}_{1}}\Delta{\boldsymbol{\theta}^{{\prime}T}_{1}} \right] = ({\boldsymbol{G}^{T}_{1}}{\boldsymbol{W}^{-1}_{1}}{\boldsymbol{G}_{1}})^{-1} $$
(39)
Then (38) can be rewritten as:
$$ \boldsymbol{W}_{2} = {\boldsymbol{B}_{2}}{cov(\boldsymbol{\widehat{\theta}}^{\prime}_{1})}{\boldsymbol{B}^{T}_{2}} $$
(40)
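The second step (35)-(40) can be sketched as follows. For a clean check, the first-step "estimate" is taken noise-free, so the WLS solution (37) must recover \({[\boldsymbol{u}^{T}, \boldsymbol{\dot{u}}^{T}]}^{T}\) exactly; W_{2} is set to the identity, since in the noiseless case any positive definite weight yields the exact solution. All numbers are illustrative:

```python
import numpy as np

# Second step with a noise-free first-step "estimate": (37) must then
# recover [u; u_dot] exactly. Identity weight used (noiseless case).
rng = np.random.default_rng(3)
u, du = rng.normal(size=3) * 100, rng.normal(size=3)
s1, ds1 = rng.normal(size=3) * 10, rng.normal(size=3)
r1 = np.linalg.norm(u - s1)
dr1 = (du - ds1) @ (u - s1) / r1
ddr1 = ((du - ds1) @ (du - ds1) - dr1**2) / r1
uh, duh, r1h, dr1h, ddr1h = u, du, r1, dr1, ddr1   # noise-free "estimates"

h2 = np.concatenate([                               # h_2 of (36)
    uh,
    [uh @ uh + s1 @ s1 - r1h**2],
    duh,
    [duh @ uh + ds1 @ s1 - dr1h * r1h],
    [ds1 @ ds1 + duh @ duh - ddr1h * r1h - dr1h**2],
])
G2 = np.vstack([                                    # G_2 of (36), 9 x 6
    np.hstack([np.eye(3), np.zeros((3, 3))]),
    np.hstack([2 * s1, np.zeros(3)]),
    np.hstack([np.zeros((3, 3)), np.eye(3)]),
    np.hstack([ds1, s1]),
    np.hstack([np.zeros(3), 2 * ds1]),
])
W2inv = np.eye(9)                                   # identity weight (noiseless)
theta2 = np.linalg.solve(G2.T @ W2inv @ G2, G2.T @ W2inv @ h2)  # (37)
```

With noisy data, W2inv would instead be the inverse of W_{2} from (40), built from B_{2} and the first-step covariance (39).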
3.3 The working procedure
The procedure for the proposed algorithm is as follows:

1. Set W_{1}=Q.

2. Transform (29) into a standard polynomial as in (30) and find its roots. The candidate Lagrange multipliers λ are the real roots.

3. Substitute each candidate λ into (26) to obtain \(\widehat {\boldsymbol {\theta }}_{1}\), which is fed to (23). The \(\widehat {\boldsymbol {\theta }}_{1}\) that minimizes J(θ_{1},λ) is selected for the next step.

4. Update \(\boldsymbol {W}^{-1}_{1}\) in (24) using \(\widehat {\boldsymbol {\theta }}_{1}\) from the previous step.

5. Repeat Step 2, Step 3, and Step 4 to refine \(\widehat {\boldsymbol {\theta }}_{1}\).

6. Find \(cov(\boldsymbol {\widehat {\theta }}^{\prime }_{1})\) in (39).

7. Compute W_{2} from (40).

8. Calculate \(\boldsymbol {\widehat {\theta }}_{2}\) in (37) as the final estimate.