The objective of this section is to develop effective TDOA localization methods in the presence of synchronization offsets and sensor location errors when a set of calibration emitters is available. The presented methods are based on the Taylor-series expansion. To decrease the number of variables involved in the iteration, we devise dimension-reduction Taylor-series iterative algorithms. More specifically, the novel approach consists of two stages. The first stage estimates the clock bias and refines the sensor positions based on the calibration RDOA measurements and the prior knowledge of the sensor locations. The second stage estimates the source location by combining the RDOA measurements of the target signal with the estimates from the first phase. Moreover, the sensor locations and the clock bias can be refined beyond their first-stage estimates.
5.1 Stage 1 of the proposed methods
In the first stage, the measurement vectors \( {\hat{\mathbf{r}}}_{\mathrm{c}} \) and \( \hat{\mathbf{v}} \) are combined to estimate ρ and w. To obtain optimum accuracy, the ML criterion is adopted, and the corresponding minimization problem can be formulated as:
$$ \underset{\mathbf{w},\boldsymbol{\uprho}}{\min}\left\{{\left\Vert \left[\begin{array}{c}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\hat{\mathbf{r}}}_{\mathrm{c}}\\ {}{\mathbf{P}}^{-1/2}\hat{\mathbf{v}}\end{array}\right]-\left[\begin{array}{c}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\left(\overline{\mathbf{f}}\left(\mathbf{w}\right)+\overline{\boldsymbol{\Gamma}}\boldsymbol{\uprho} \right)\\ {}{\mathbf{P}}^{-1/2}\mathbf{w}\end{array}\right]\right\Vert}_2^2\right\} $$
(26)
The conventional Taylor-series iterative algorithm, as discussed in [8, 88], can certainly be used to solve (26) and jointly estimate ρ and w. In this subsection, however, we present an alternative Taylor-series iterative algorithm that reduces the number of parameters involved in the iteration. Note that the objective function in (26) is quadratic in ρ; hence, the optimal solution for ρ can be obtained in closed form as:
$$ {\hat{\boldsymbol{\uprho}}}_{\mathrm{f},\mathrm{opt}}={\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\left({\hat{\mathbf{r}}}_{\mathrm{c}}-\overline{\mathbf{f}}\left(\mathbf{w}\right)\right) $$
(27)
where subscript “f” is added to emphasize that this is the solution in the first stage. Inserting (27) back into (26) results in the following concentrated objective function:
$$ \underset{\mathbf{w}}{\min}\left\{{\left\Vert \left[\begin{array}{c}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\hat{\mathbf{r}}}_{\mathrm{c}}\\ {}{\mathbf{P}}^{-1/2}\hat{\mathbf{v}}\end{array}\right]-\left[\begin{array}{c}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{f}}\left(\mathbf{w}\right)\\ {}{\mathbf{P}}^{-1/2}\mathbf{w}\end{array}\right]\right\Vert}_2^2\right\} $$
(28)
The only unknown to be optimized in (28) is w, and this minimization problem can be solved by the traditional Taylor-series iterative algorithm. The corresponding iterative formula is given by
$$ {\displaystyle \begin{array}{c}{\hat{\mathbf{w}}}_{\mathrm{f}}^{\left(k+1\right)}={\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}+{\left({\left(\overline{\mathbf{F}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{F}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}\right)+{\mathbf{P}}^{-1}\right)}^{-1}\\ {}\times \left({\left(\overline{\mathbf{F}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\left({\hat{\mathbf{r}}}_{\mathrm{c}}-\overline{\mathbf{f}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}\right)\right)+{\mathbf{P}}^{-1}\left(\hat{\mathbf{v}}-{\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}\right)\right)\end{array}} $$
(29)
where superscript k indexes the iteration number and \( {\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)} \) denotes the estimate at the kth iteration. If the sequence \( {\left\{{\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}\right\}}_{1\le k\le +\infty } \) converges to \( {\hat{\mathbf{w}}}_{\mathrm{f}} \), then this vector is taken as the estimate of the sensor locations in the first phase.
When the iteration process in (29) is terminated, the solution of ρ can be immediately determined by
$$ {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}={\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\left({\hat{\mathbf{r}}}_{\mathrm{c}}-\overline{\mathbf{f}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right) $$
(30)
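To make the stage-1 procedure concrete, the following Python sketch implements (27)–(30) under stated assumptions: `f_bar(w)` returns the calibration RDOA model \( \overline{\mathbf{f}}\left(\mathbf{w}\right) \), `F_bar(w)` its Jacobian \( \overline{\mathbf{F}}\left(\mathbf{w}\right) \), and `Gamma_bar`, `Q_c`, `P` hold \( \overline{\boldsymbol{\Gamma}} \), \( {\mathbf{Q}}_{\mathrm{c}} \), and P; all names are illustrative, not from the original. Any factor S satisfying \( {\mathbf{S}}^{\mathrm{T}}\mathbf{S}={\mathbf{Q}}_{\mathrm{c}}^{-1} \) may serve as \( {\mathbf{Q}}_{\mathrm{c}}^{-1/2} \), since the concentrated criterion is invariant to this choice.

```python
import numpy as np

def stage1_estimate(r_c_hat, v_hat, f_bar, F_bar, Gamma_bar, Q_c, P,
                    max_iter=15, tol=1e-9):
    """Dimension-reduction Taylor-series iteration of stage 1, eqs. (27)-(30).

    A sketch under stated assumptions: f_bar/F_bar are user-supplied
    callables for the calibration RDOA model and its Jacobian w.r.t. w.
    """
    S = np.linalg.inv(np.linalg.cholesky(Q_c))            # one valid Q_c^{-1/2}
    G = S @ Gamma_bar                                     # whitened clock-bias subspace
    Pi_perp = np.eye(G.shape[0]) - G @ np.linalg.pinv(G)  # Pi^perp[Q_c^{-1/2} Gamma_bar]
    P_inv = np.linalg.inv(P)

    w = v_hat.copy()                                      # w^(0) = prior sensor positions
    for _ in range(max_iter):
        Fw = S @ F_bar(w)                                 # whitened Jacobian
        res = S @ (r_c_hat - f_bar(w))                    # whitened residual
        H = Fw.T @ Pi_perp @ Fw + P_inv                   # normal matrix of (29)
        g = Fw.T @ Pi_perp @ res + P_inv @ (v_hat - w)
        step = np.linalg.solve(H, g)                      # Gauss-Newton correction
        w = w + step
        if np.linalg.norm(step) < tol:
            break

    # Closed-form clock-bias solution (30)
    A = Gamma_bar.T @ np.linalg.solve(Q_c, Gamma_bar)
    b = Gamma_bar.T @ np.linalg.solve(Q_c, r_c_hat - f_bar(w))
    rho = np.linalg.solve(A, b)
    return w, rho
```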
At this point, we make two important remarks about the proposed algorithm in the first stage.
5.1.1 Remark 5
In the procedure stated above, the estimation of w and ρ is decoupled: the two quantities are estimated sequentially, which lowers the computational complexity.
5.1.2 Remark 6
Both \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \) are asymptotically efficient solutions because their performance attains the CRB derived in Section 4.2. We prove this result analytically in Section 5.2.
5.2 MSE analysis in the first stage
The aim of this subsection is to derive the MSE expressions of \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \) by employing first-order perturbation analysis. Moreover, the two MSEs are shown to asymptotically attain the corresponding CRBs given in Section 4.2. It is worth emphasizing that the MSE expressions for the first-phase estimates are important because they are used in the second stage.
5.2.1 MSE expression of \( {\hat{\mathbf{w}}}_{\mathrm{f}} \)
The theoretical MSE of \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) is derived here. Taking the limit on both sides of (29) produces
$$ {\left(\overline{\mathbf{F}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\left({\hat{\mathbf{r}}}_{\mathrm{c}}-\overline{\mathbf{f}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)+{\mathbf{P}}^{-1}\left(\hat{\mathbf{v}}-{\hat{\mathbf{w}}}_{\mathrm{f}}\right)={\mathbf{O}}_{3M\times 1} $$
(31)
where \( {\hat{\mathbf{w}}}_{\mathrm{f}}=\underset{k\to +\infty }{\lim }{\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)} \). Substituting (5) and (7) into (31) and ignoring the second- and higher-order error terms leads to
$$ {\displaystyle \begin{array}{c}{\mathbf{O}}_{3M\times 1}={\left(\overline{\mathbf{F}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\left(\overline{\mathbf{f}}\left(\mathbf{w}\right)-\overline{\mathbf{f}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}\right)+\overline{\boldsymbol{\Gamma}}\boldsymbol{\uprho} +{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}\right)+{\mathbf{P}}^{-1}\left(\mathbf{w}-{\hat{\mathbf{w}}}_{\mathrm{f}}+\boldsymbol{\upxi} \right)\\ {}\approx {\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\left({\boldsymbol{\upvarepsilon}}_{\mathrm{c}}-\overline{\mathbf{F}}\left(\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\right)+{\mathbf{P}}^{-1}\left(\boldsymbol{\upxi} -\Delta {\mathbf{w}}_{\mathrm{f}}\right)\end{array}} $$
(32)
where \( \Delta {\mathbf{w}}_{\mathrm{f}}={\hat{\mathbf{w}}}_{\mathrm{f}}-\mathbf{w} \) is the estimation error in \( {\hat{\mathbf{w}}}_{\mathrm{f}} \). Note that the second (approximate) equality in (32) exploits the relation \( {\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}={\mathbf{O}}_{D\left(M-1\right)\times \left(N-1\right)} \). It follows immediately from (32) that
$$ \Delta {\mathbf{w}}_{\mathrm{f}}\approx {\left({\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\right)}^{-1}\left({\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}+{\mathbf{P}}^{-1}\boldsymbol{\upxi} \right) $$
(33)
which yields
$$ \mathbf{MSE}\left({\hat{\mathbf{w}}}_{\mathrm{f}}\right)=\mathrm{E}\left[\Delta {\mathbf{w}}_{\mathrm{f}}{\left(\Delta {\mathbf{w}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right]={\left({\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\right)}^{-1}={\mathbf{CRB}}_{\mathrm{c}}\left(\mathbf{w}\right) $$
(34)
Therefore, the estimate \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) is asymptotically efficient.
5.2.2 MSE expression of \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \)
This subsection is devoted to deriving the analytical MSE expression of \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \). Inserting (7) into (30) and neglecting the second- and higher-order error terms, we obtain
$$ {\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\left(\boldsymbol{\uprho} +\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\right)\approx {\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\left(\overline{\boldsymbol{\Gamma}}\boldsymbol{\uprho} +{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}-\overline{\mathbf{F}}\left(\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\right)\Rightarrow \Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\approx {\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}-{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}} $$
(35)
where \( \Delta {\boldsymbol{\uprho}}_{\mathrm{f}}={\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}-\boldsymbol{\uprho} \) is the estimation error in \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \). It can be checked from (35) that
$$ \mathbf{MSE}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\right)=\mathrm{E}\left[\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}{\left(\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right]={\mathbf{A}}_1+{\mathbf{A}}_2+{\mathbf{A}}_3+{\mathbf{A}}_3^{\mathrm{T}} $$
(36)
where
$$ \left\{\begin{array}{l}{\mathbf{A}}_1={\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}\\ {}{\mathbf{A}}_2={\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right){\mathbf{CRB}}_{\mathrm{c}}\left(\mathbf{w}\right){\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}\\ {}{\mathbf{A}}_3=-{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\cdot \mathrm{E}\left[{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}{\left(\Delta {\mathbf{w}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right]\cdot {\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}\end{array}\right. $$
(37)
Appendix 3 shows that \( {\mathbf{A}}_3={\mathbf{O}}_{\left(N-1\right)\times \left(N-1\right)} \), which, together with (36) and (37), implies
$$ \mathbf{MSE}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\right)={\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}+{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right){\mathbf{CRB}}_{\mathrm{c}}\left(\mathbf{w}\right){\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}={\mathbf{CRB}}_{\mathrm{c}}\left(\boldsymbol{\uprho} \right) $$
(38)
It can be seen from (38) that \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \) is also asymptotically efficient.
We conclude this subsection with two remarks.
5.2.3 Remark 7
The asymptotic optimality of \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \) holds only when the RDOA measurements of the target source are not used. Hence, the estimation accuracy of the sensor locations and the clock bias may be further improved in the subsequent phase.
5.2.4 Remark 8
Applying (35), the cross-covariance matrix between \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \) and \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) is given by
$$ \mathbf{\operatorname{cov}}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)=\mathrm{E}\left[\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}{\left(\Delta {\mathbf{w}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right]={\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\cdot \mathrm{E}\left[{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}{\left(\Delta {\mathbf{w}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right]-{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)\cdot \mathrm{E}\left[\Delta {\mathbf{w}}_{\mathrm{f}}{\left(\Delta {\mathbf{w}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right] $$
(39)
Putting (87), (34), and (39) together produces
$$ \mathbf{\operatorname{cov}}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)=-{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right){\mathbf{CRB}}_{\mathrm{c}}\left(\mathbf{w}\right) $$
(40)
It can be verified from matrix identity (II) in Table 2 that \( \mathbf{\operatorname{cov}}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}},{\hat{\mathbf{w}}}_{\mathrm{f}}\right) \) equals the lower-left block of \( {\mathbf{CRB}}_{\mathrm{c}}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right) \). So, using (34) and (38), we have
$$ \mathbf{MSE}\left(\left[\begin{array}{c}{\hat{\mathbf{w}}}_{\mathrm{f}}\\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]\right)=\mathrm{E}\left(\left[\begin{array}{c}\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]{\left[\begin{array}{c}\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]}^{\mathrm{T}}\right)=\left[\begin{array}{cc}{\mathbf{CRB}}_{\mathrm{c}}\left(\mathbf{w}\right)& {\left(\mathbf{\operatorname{cov}}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{\mathrm{T}}\\ {}\mathbf{\operatorname{cov}}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)& {\mathbf{CRB}}_{\mathrm{c}}\left(\boldsymbol{\uprho} \right)\end{array}\right]={\mathbf{CRB}}_{\mathrm{c}}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right)=\boldsymbol{\Phi} $$
(41)
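As a minimal numerical sketch, Φ in (41) can be assembled directly from (34), (38), and (40) without running the estimator; the names below (`F_bar_w` for \( \overline{\mathbf{F}}\left(\mathbf{w}\right) \), etc.) are illustrative assumptions.

```python
import numpy as np

def assemble_Phi(F_bar_w, Gamma_bar, Q_c, P):
    """Assemble Phi = CRB_c([w; rho]) per (41) from (34), (38), and (40)."""
    Qc_inv = np.linalg.inv(Q_c)
    A1 = np.linalg.inv(Gamma_bar.T @ Qc_inv @ Gamma_bar)       # first term of (37)
    # Q_c^{-1/2} Pi^perp[Q_c^{-1/2} Gamma_bar] Q_c^{-1/2} in closed form
    W = Qc_inv - Qc_inv @ Gamma_bar @ A1 @ Gamma_bar.T @ Qc_inv
    CRB_w = np.linalg.inv(F_bar_w.T @ W @ F_bar_w + np.linalg.inv(P))  # (34)
    B = A1 @ Gamma_bar.T @ Qc_inv @ F_bar_w
    cov_rho_w = -B @ CRB_w                                     # (40)
    CRB_rho = A1 + B @ CRB_w @ B.T                             # (38)
    return np.block([[CRB_w, cov_rho_w.T], [cov_rho_w, CRB_rho]])
```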
5.3 Stage 2 of the proposed methods
In the second phase, we combine the measurement vector \( \hat{\mathbf{r}} \) with the first-stage estimates of w and ρ, denoted by \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \), to locate the target source. Moreover, the estimates from the first step can be further refined. As in the first stage, the ML criterion is utilized to obtain asymptotically optimum performance. Two dimension-reduction Taylor-series iterative algorithms are proposed in this second step.
5.4 The first algorithm
The first algorithm not only determines the position of the target source but also further improves the estimates of the sensor locations and the clock bias provided in the first stage. Since the measurement error in \( \hat{\mathbf{r}} \) is independent of the estimation errors Δwf and Δρf, the corresponding ML estimator can be formulated as:
$$ \underset{\mathbf{u},\mathbf{w},\boldsymbol{\uprho}}{\min}\left\{{\left\Vert \left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\hat{\mathbf{r}}\\ {}{\boldsymbol{\Psi}}_1{\hat{\mathbf{w}}}_{\mathrm{f}}+{\boldsymbol{\Psi}}_2{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]-\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\left(\mathbf{f}\left(\mathbf{u},\mathbf{w}\right)+\boldsymbol{\Gamma} \boldsymbol{\uprho} \right)\\ {}{\boldsymbol{\Psi}}_1\mathbf{w}+{\boldsymbol{\Psi}}_2\boldsymbol{\uprho} \end{array}\right]\right\Vert}_2^2\right\} $$
(42)
where Ψ1 and Ψ2 are defined as follows:
$$ {\boldsymbol{\Phi}}^{-1/2}=\left[\underset{\left(3M+N-1\right)\times 3M}{\underbrace{{\boldsymbol{\Psi}}_1}}\kern0.5em \underset{\left(3M+N-1\right)\times \left(N-1\right)}{\underbrace{{\boldsymbol{\Psi}}_2}}\right] $$
(43)
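One way to realize the split in (43) is via the symmetric inverse square root of Φ, computed by eigendecomposition; a minimal sketch with illustrative names:

```python
import numpy as np

def split_Psi(Phi, M):
    """Compute a symmetric Phi^{-1/2} and split it column-wise per (43)."""
    vals, vecs = np.linalg.eigh(Phi)              # Phi is symmetric positive definite
    Phi_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Psi1 = Phi_inv_sqrt[:, :3 * M]                # (3M+N-1) x 3M
    Psi2 = Phi_inv_sqrt[:, 3 * M:]                # (3M+N-1) x (N-1)
    return Psi1, Psi2
```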
This minimization problem can be solved efficiently by the conventional Taylor-series iterative technique. However, to reduce the number of parameters in the iterative procedure, we again exploit the dimension-reduction strategy. Because (42) is a quadratic minimization problem with respect to ρ, the optimal solution of ρ can be written in closed form as:
$$ {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1,\mathrm{opt}}={\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\left(\hat{\mathbf{r}}-\mathbf{f}\left(\mathbf{u},\mathbf{w}\right)\right)+{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_1\left({\hat{\mathbf{w}}}_{\mathrm{f}}-\mathbf{w}\right)+{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\right) $$
(44)
where subscript “s1” is used to highlight that this is the solution in the second phase for the first algorithm. Putting (44) back into (42) yields the following concentrated minimization problem:
$$ \underset{\mathbf{u},\mathbf{w}}{\min}\left\{{\left\Vert {\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\hat{\mathbf{r}}\\ {}{\boldsymbol{\Psi}}_1{\hat{\mathbf{w}}}_{\mathrm{f}}+{\boldsymbol{\Psi}}_2{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]-{\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\mathbf{f}\left(\mathbf{u},\mathbf{w}\right)\\ {}{\boldsymbol{\Psi}}_1\mathbf{w}\end{array}\right]\right\Vert}_2^2\right\} $$
(45)
The set of unknown parameters in (45) consists of u and w, which cannot be decoupled. As a result, they must be jointly estimated by the traditional Taylor-series iterative technique. The associated update formula is given by
$$ {\displaystyle \begin{array}{c}\left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}^{\left(k+1\right)}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}^{\left(k+1\right)}\end{array}\right]=\left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\end{array}\right]+{\left({\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]\right)}^{-1}\\ {}\times {\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\left(\hat{\mathbf{r}}-\mathbf{f}\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)\right)\\ {}{\boldsymbol{\Psi}}_1\left({\hat{\mathbf{w}}}_{\mathrm{f}}-{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)+{\boldsymbol{\Psi}}_2{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]\end{array}} $$
(46)
where superscript k denotes the iteration number; \( {\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)} \) and \( {\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)} \) are the estimates of u and w at the kth iteration, respectively. Let \( \underset{k\to +\infty }{\lim }{\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)}={\hat{\mathbf{u}}}_{\mathrm{s}1} \) and \( \underset{k\to +\infty }{\lim }{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}={\hat{\mathbf{w}}}_{\mathrm{s}1} \). Then, \( {\hat{\mathbf{u}}}_{\mathrm{s}1} \) and \( {\hat{\mathbf{w}}}_{\mathrm{s}1} \) are the final estimates of the target position and the sensor locations, respectively. Moreover, once the convergence criterion is satisfied and the iteration procedure is terminated, the final estimate of the clock bias can be obtained explicitly as:
$$ {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1}={\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\left(\hat{\mathbf{r}}-\mathbf{f}\left({\hat{\mathbf{u}}}_{\mathrm{s}1},{\hat{\mathbf{w}}}_{\mathrm{s}1}\right)\right)+{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_1\left({\hat{\mathbf{w}}}_{\mathrm{f}}-{\hat{\mathbf{w}}}_{\mathrm{s}1}\right)+{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\right) $$
(47)
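For illustration, here is a Python sketch of the joint iteration (46) followed by the closed-form clock-bias update (47); `f(u, w)`, `F1(u, w)`, and `F2(u, w)` denote the target RDOA model and its Jacobians with respect to u and w, and all names are assumptions of this sketch. Ψ1 and Ψ2 are held fixed here; per Remark 9 below, they may instead be refreshed at every iteration. As in stage 1, any whitening square root of Q−1 yields the same iterates.

```python
import numpy as np

def stage2_alg1(r_hat, w_f, rho_f, f, F1, F2, Gamma, Q, Psi1, Psi2,
                u0, max_iter=15, tol=1e-9):
    """Stage-2 iteration of the first algorithm, eqs. (46)-(47); a sketch."""
    S = np.linalg.inv(np.linalg.cholesky(Q))      # one valid Q^{-1/2}
    n1, n2 = Q.shape[0], Psi1.shape[0]
    # Projector onto the complement of the stacked clock-bias subspace
    C = np.vstack([S @ Gamma, Psi2])
    Pi_perp = np.eye(n1 + n2) - C @ np.linalg.pinv(C)

    u, w = u0.copy(), w_f.copy()                  # initialize per Remark 14
    for _ in range(max_iter):
        # Stacked Jacobian of (46)
        J = np.block([[S @ F1(u, w), S @ F2(u, w)],
                      [np.zeros((n2, u.size)), Psi1]])
        res = np.concatenate([S @ (r_hat - f(u, w)),
                              Psi1 @ (w_f - w) + Psi2 @ rho_f])
        step = np.linalg.solve(J.T @ Pi_perp @ J, J.T @ Pi_perp @ res)
        u = u + step[:u.size]
        w = w + step[u.size:]
        if np.linalg.norm(step) < tol:
            break

    # Closed-form clock-bias update (47)
    A = Gamma.T @ np.linalg.solve(Q, Gamma) + Psi2.T @ Psi2
    b = (Gamma.T @ np.linalg.solve(Q, r_hat - f(u, w))
         + Psi2.T @ Psi1 @ (w_f - w) + Psi2.T @ Psi2 @ rho_f)
    rho = np.linalg.solve(A, b)
    return u, w, rho
```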
Before proceeding, we make three important remarks concerning the procedure described above.
5.4.1 Remark 9
In optimization problem (45), the matrices Ψ1 and Ψ2 are not known exactly because they depend on w, which is to be estimated. To overcome this difficulty, we can use the iteration vector \( {\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)} \) in place of the true sensor locations, which means that Ψ1 and Ψ2 are updated at every iteration step. Denote the approximations of Ψ1 and Ψ2 at the kth iteration by \( {\hat{\boldsymbol{\Psi}}}_1^{(k)} \) and \( {\hat{\boldsymbol{\Psi}}}_2^{(k)} \), respectively. The performance analysis in Section 6.1.1 shows that this approximation does not affect the asymptotic properties of the estimator. Extensive simulation results also indicate that the estimation accuracy is relatively insensitive to the noise in these two matrices.
5.4.2 Remark 10
Since Φ−1 = Φ−1/2Φ−1/2 = (Φ−1/2)TΦ−1/2, putting (22), (41), and (43) together produces
$$ {\boldsymbol{\Phi}}^{-1}=\left[\begin{array}{cc}{\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_1& {\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_2\\ {}{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_1& {\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\end{array}\right]={\left({\mathbf{CRB}}_{\mathrm{c}}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right)\right)}^{-1}=\left[\begin{array}{cc}{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}& {\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\\ {}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)& {\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right] $$
(48)
which implies
$$ {\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_1={\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\kern1em ;\kern1em {\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_2={\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\kern1em ;\kern1em {\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2={\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}} $$
(49)
These matrices are used in (46) and (47). They are also required for the theoretical analysis in Section 6.
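In implementation, these Gram blocks can be formed directly from (49), with no matrix square root required; a minimal sketch (illustrative names):

```python
import numpy as np

def psi_gram_blocks(F_bar_w, Gamma_bar, Q_c, P):
    """Gram blocks of Psi = Phi^{-1/2} per (49), avoiding Phi^{-1/2} itself."""
    Qc_inv = np.linalg.inv(Q_c)
    P1tP1 = F_bar_w.T @ Qc_inv @ F_bar_w + np.linalg.inv(P)  # Psi1^T Psi1
    P1tP2 = F_bar_w.T @ Qc_inv @ Gamma_bar                   # Psi1^T Psi2
    P2tP2 = Gamma_bar.T @ Qc_inv @ Gamma_bar                 # Psi2^T Psi2
    return P1tP1, P1tP2, P2tP2
```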
5.4.3 Remark 11
Section 6.1.1 proves that the joint estimate \( \left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}\end{array}\right] \) can asymptotically attain the CRB computed by (81). Moreover, Section 6.1.2 shows that the solution \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1} \) is also asymptotically efficient because its performance can achieve the CRB given by (85) before the threshold effect occurs.
5.4.4 The second algorithm
The aim of this subsection is to present an alternative dimension-reduction Taylor-series iterative formula in which the iteration variable consists of u only. The basic idea behind this algorithm is to automatically mitigate the effects of the sensor location errors produced in the first stage, rather than to further refine the sensor positions. As a result, the computational load is reduced.
For this purpose, we use a first-order Taylor-series expansion, leading to the following approximation:
$$ \hat{\mathbf{r}}\approx \mathbf{f}\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)+\boldsymbol{\Gamma} \boldsymbol{\uprho} +\left(\boldsymbol{\upvarepsilon} -{\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\right) $$
(50)
where the second- and higher-order terms of the estimation error Δwf are ignored. The last term \( {\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\Delta {\mathbf{w}}_{\mathrm{f}} \) can thus be regarded as an additional measurement error, just like ε. The ML estimator is therefore given by
$$ \underset{\mathbf{u},\boldsymbol{\uprho}}{\min}\left\{{\left[\begin{array}{c}\hat{\mathbf{r}}-\mathbf{f}\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)-\boldsymbol{\Gamma} \boldsymbol{\uprho} \\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}-\boldsymbol{\uprho} \end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1}\left[\begin{array}{c}\hat{\mathbf{r}}-\mathbf{f}\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)-\boldsymbol{\Gamma} \boldsymbol{\uprho} \\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}-\boldsymbol{\uprho} \end{array}\right]\right\} $$
(51)
where
$$ \boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)=\mathrm{E}\left(\left[\begin{array}{c}\boldsymbol{\upvarepsilon} -{\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]{\left[\begin{array}{c}\boldsymbol{\upvarepsilon} -{\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]}^{\mathrm{T}}\right)=\left[\begin{array}{cc}\mathbf{Q}+{\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right){\boldsymbol{\Phi}}_1{\left({\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{\mathrm{T}}& -{\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right){\boldsymbol{\Phi}}_2\\ {}-{\boldsymbol{\Phi}}_2^{\mathrm{T}}{\left({\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{\mathrm{T}}& {\boldsymbol{\Phi}}_3\end{array}\right] $$
(52)
in which Φ1, Φ2, and Φ3 are defined as follows:
$$ \boldsymbol{\Phi} =\left[\begin{array}{cc}\underset{3M\times 3M}{\underbrace{{\boldsymbol{\Phi}}_1}}& \underset{3M\times \left(N-1\right)}{\underbrace{{\boldsymbol{\Phi}}_2}}\\ {}\underset{\left(N-1\right)\times 3M}{\underbrace{{\boldsymbol{\Phi}}_2^{\mathrm{T}}}}& \underset{\left(N-1\right)\times \left(N-1\right)}{\underbrace{{\boldsymbol{\Phi}}_3}}\end{array}\right] $$
(53)
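The matrix Ω(u, ŵf) in (52) follows mechanically from the partition (53); a minimal sketch assuming `F2_uw` holds \( {\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right) \) (names illustrative):

```python
import numpy as np

def build_Omega(F2_uw, Q, Phi, M):
    """Assemble Omega(u, w_f-hat) of (52) from the partition of Phi in (53)."""
    Phi1 = Phi[:3 * M, :3 * M]          # 3M x 3M
    Phi2 = Phi[:3 * M, 3 * M:]          # 3M x (N-1)
    Phi3 = Phi[3 * M:, 3 * M:]          # (N-1) x (N-1)
    return np.block([[Q + F2_uw @ Phi1 @ F2_uw.T, -F2_uw @ Phi2],
                     [-Phi2.T @ F2_uw.T, Phi3]])
```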
Likewise, the optimal solution of ρ in (51) can be written explicitly as:
$$ {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2,\mathrm{opt}}={\left({\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right)}^{-1}{\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1}\left[\begin{array}{c}\hat{\mathbf{r}}-\mathbf{f}\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right] $$
(54)
where subscript “s2” is used to emphasize that this is the solution in the second phase for the second algorithm. Substituting (54) into (51), we obtain the following concentrated minimization problem:
$$ \underset{\mathbf{u}}{\min}\left\{{\left\Vert {\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\hat{\mathbf{r}}\\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]-{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\mathbf{f}\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 1}\end{array}\right]\right\Vert}_2^2\right\} $$
(55)
Similar to (28) and (45), the Taylor-series iterative formula for solving (55) can be expressed as:
$$ {\displaystyle \begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}2}^{\left(k+1\right)}={\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)}+{\left({\left({\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{-1}\\ {}\times {\left({\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\hat{\mathbf{r}}-\mathbf{f}\left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]\end{array}} $$
(56)
where superscript k stands for the iteration number and \( {\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)} \) is the estimate of u at the kth iteration. The limit of the sequence \( {\left\{{\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)}\right\}}_{1\le k\le +\infty } \), denoted by \( {\hat{\mathbf{u}}}_{\mathrm{s}2} \), is taken as the final estimate of the target position for the second algorithm. Moreover, once the iteration process is completed, the final solution of the clock bias can be expressed in closed form as:
$$ {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2}={\left({\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right)}^{-1}{\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1}\left[\begin{array}{c}\hat{\mathbf{r}}-\mathbf{f}\left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right] $$
(57)
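A sketch of the second algorithm, implementing the iteration (56) and the closed-form solution (57) and reusing the Ω construction of (52)–(53); all function and variable names are illustrative assumptions, not part of the original.

```python
import numpy as np

def stage2_alg2(r_hat, w_f, rho_f, f, F1, F2, Gamma, Q, Phi,
                u0, max_iter=15, tol=1e-9):
    """Stage-2 iteration of the second algorithm, eqs. (54)-(57); a sketch."""
    M = w_f.size // 3
    Nm1 = rho_f.size
    B = np.vstack([Gamma, np.eye(Nm1)])           # stacked clock-bias mapping

    def omega(u):                                 # Omega(u, w_f-hat) per (52)-(53)
        F2_uw = F2(u, w_f)
        Phi1, Phi2, Phi3 = Phi[:3*M, :3*M], Phi[:3*M, 3*M:], Phi[3*M:, 3*M:]
        return np.block([[Q + F2_uw @ Phi1 @ F2_uw.T, -F2_uw @ Phi2],
                         [-Phi2.T @ F2_uw.T, Phi3]])

    u = u0.copy()
    for _ in range(max_iter):
        vals, vecs = np.linalg.eigh(omega(u))
        S = vecs @ np.diag(vals ** -0.5) @ vecs.T # symmetric Omega^{-1/2}
        SB = S @ B
        Pi_perp = np.eye(SB.shape[0]) - SB @ np.linalg.pinv(SB)
        J = S @ np.vstack([F1(u, w_f), np.zeros((Nm1, u.size))])
        res = S @ np.concatenate([r_hat - f(u, w_f), rho_f])
        step = np.linalg.solve(J.T @ Pi_perp @ J, J.T @ Pi_perp @ res)  # (56)
        u = u + step
        if np.linalg.norm(step) < tol:
            break

    # Closed-form clock-bias solution (57)
    Om_inv = np.linalg.inv(omega(u))
    A = B.T @ Om_inv @ B
    b = B.T @ Om_inv @ np.concatenate([r_hat - f(u, w_f), rho_f])
    rho = np.linalg.solve(A, b)
    return u, rho
```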
At the end of this subsection, two remarks about the second algorithm are in order.
5.4.5 Remark 12
The sensor-location estimates obtained in the first stage are not further refined in this algorithm.
5.4.6 Remark 13
Similar to the solutions \( {\hat{\mathbf{u}}}_{\mathrm{s}1} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1} \) obtained by the first algorithm, the estimates \( {\hat{\mathbf{u}}}_{\mathrm{s}2} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2} \) are also asymptotically efficient. Section 6.2 proves that, under mild conditions, the estimation accuracy of \( {\hat{\mathbf{u}}}_{\mathrm{s}2} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2} \) attains the CRBs given by (18) and by the third equality in (13), respectively.
5.5 Summary of the proposed methods
Based on the descriptions in Sections 5.1 and 5.3, we obtain two novel TDOA localization algorithms for the case where TDOA measurements from calibration emitters are available. Both require two stages, and the first stages of the two algorithms share the same computational procedure. In the sequel, we summarize the two newly proposed algorithms, referred to as algorithm I and algorithm II, respectively (Figs. 2 and 3).
We make the following two remarks about the proposed algorithms described above.
5.5.1 Remark 14
In the first stage of algorithm I, the initial value \( {\hat{\mathbf{w}}}_{\mathrm{f}}^{(0)} \) can be set to the available (erroneous) sensor positions \( \hat{\mathbf{v}} \). In the second phase of algorithm I, the initial solution \( {\hat{\mathbf{w}}}_{\mathrm{s}1}^{(0)} \) can be chosen as \( {\hat{\mathbf{w}}}_{\mathrm{f}} \), and the initial guess \( {\hat{\mathbf{u}}}_{\mathrm{s}1}^{(0)} \) can be obtained from the non-iterative algebraic solution proposed in [59]. Simulation results in Section 7 show that these initial solutions lead to asymptotically efficient performance. Moreover, this initialization method is also suitable for algorithm II. From our simulation results, fifteen iterations generally suffice to meet the convergence criterion.
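To illustrate this initialization, a hypothetical driver combining the earlier sketches might read as follows, assuming the measurement vectors and model callables defined there are in scope; `closed_form_initializer` merely stands in for the algebraic solution of [59] and is not defined here.

```python
# Stage 1: iterate from w^(0) = v_hat (done inside stage1_estimate)
w_f, rho_f = stage1_estimate(r_c_hat, v_hat, f_bar, F_bar, Gamma_bar, Q_c, P,
                             max_iter=15)

# Stage 2 of algorithm I: initialize w at w_f and u at an algebraic solution
u0 = closed_form_initializer(r_hat, w_f)          # placeholder for [59]
u_s1, w_s1, rho_s1 = stage2_alg1(r_hat, w_f, rho_f, f, F1, F2, Gamma, Q,
                                 Psi1, Psi2, u0, max_iter=15)
```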
5.5.2 Remark 15
As mentioned in Section 5.4.4, algorithm II involves less computation than algorithm I because the sensor locations are not refined in the second stage of algorithm II. In Appendix 4, we provide the numerical complexities of the two algorithms, expressed in terms of the number of multiplications.