
Bias reduction for TDOA localization in the presence of receiver position errors and synchronization clock bias

Abstract

Time difference of arrival (TDOA) localization does not require time stamping of the source signal and plays an increasingly important role in passive localization. In addition to measurement noise, receiver position errors and synchronization clock bias are two important factors that degrade the performance of TDOA positioning. This paper proposes a bias-reduced solution for passive source localization using TDOA measurements in the presence of receiver position errors and synchronization clock bias. Like the original two-step weighted least-squares solution, the new technique has two stages. In the first stage, the proposed method expands the parameter space in the weighted least-squares (WLS) formulation and imposes a quadratic constraint to suppress the bias. In the second stage, an effective WLS estimator is given to reduce the bias generated by nonlinear operations. With the aid of second-order error analysis, theoretical biases for the original solution and the proposed bias-reduced solution are derived, and it is proved that the proposed method attains the Cramér–Rao lower bound under moderate Gaussian noise while having smaller bias than the original algorithm. Simulation results show that, as the measurement noise or receiver position error increases, the proposed method exhibits smaller estimation bias and better robustness for all estimates, including the source position, the refined receiver positions, and the clock bias vector.

1 Introduction

The problem of passive localization has received wide attention in recent decades and has been studied intensively in many fields, such as passive radar [1,2,3], wireless communication [4,5,6], sensor networks [7, 8], and underwater acoustics [9, 10]. Most localization techniques use two-step processing, in which the positioning parameters are first extracted (or estimated) and the source position is then determined from these estimated parameters. The positioning parameters are nonlinear functions of the source position and are typically the received signal strength (RSS) [11,12,13,14,15,16], gain ratios of arrival [17, 18], time of arrival (TOA) [19, 20], time difference of arrival (TDOA) [21,22,23,24,25,26,27], frequency difference of arrival (FDOA) [28, 29], and angle of arrival (AOA) [30, 31]. Among these, TDOA localization is perhaps the most frequently used scheme, because it offers superior positioning performance and does not require the time stamp of the source signal. This paper focuses on the localization of a single source using TDOA measurements obtained at spatially separated receivers.

A number of TDOA localization algorithms have been developed during the past few decades. Many methods are iterative owing to the highly nonlinear relationship between the unknowns and the TDOA measurements. The Taylor series method begins with an initial guess and uses local linear least-sum-square-error corrections to improve the estimation accuracy in each iteration [24, 32]. The constrained total least-squares (CTLS) algorithm [22] applies Newton iteration to estimate the source position. These methods have high localization accuracy when a good initial guess close to the true value is available; however, such prior information is not readily available in practice, and convergence is therefore difficult to guarantee. To overcome this drawback of iterative algorithms, several closed-form methods using TDOAs have been proposed, such as the total least-squares (TLS) algorithm [25] and the two-step weighted least-squares (TSWLS) positioning algorithm [21, 29, 33]. It has been shown both theoretically and by simulation that the above methods can achieve the Cramér–Rao lower bound (CRLB) under small Gaussian noise. Compared with iterative algorithms, closed-form methods are clearly more attractive because they do not require an initial guess and avoid the problem of divergence. We concentrate on closed-form methods in this paper.

Most existing TDOA localization algorithms require the receiver locations to be accurately known and the receivers to be strictly synchronized in sampling the received signals, but these requirements are unlikely to be satisfied in practice. For example, receivers (or sensors) may be mounted on vessels or aircraft, or randomly deployed over a region, so the available receiver positions are contaminated by position errors. In addition, when the receivers are far away from each other, it is difficult to achieve strict clock synchronization for all receivers. Many studies have shown that both receiver position errors and synchronization clock bias play important roles in TDOA localization because they deteriorate the positioning accuracy [29, 32, 34]. Indeed, the problem of jointly suppressing receiver position errors and synchronization clock bias has been studied intensively in recent years. A joint synchronization and source localization algorithm with erroneous receiver positions has been proposed [35], where the clock bias is assumed to be known up to random errors; however, such prior information on the synchronization clock bias is not available in practice. To overcome this drawback, a closed-form solution in which the clock bias is treated as a deterministic parameter has been developed, and the algebraic solutions of the source location, receiver positions, and synchronization offsets are obtained sequentially [36]. This method is practical and effective: it not only jointly suppresses receiver position errors and synchronization clock bias but also attains the CRLB under low noise.

However, the original algorithm proposed in [36] has a drawback in that the bias of its estimates is large, owing to the noise correlation between the regressor and regressand in the weighted least-squares (WLS) formulation and to several nonlinear operations. In particular, when the noise level is high or the localization geometry is poor, the bias becomes large and seriously degrades the localization performance. Moreover, in some modern applications, multiple independent measurements can be obtained within a short time period, and the localization performance can be improved by averaging the estimates from these measurements. Nevertheless, this operation only reduces the variance, not the bias. In tracking applications, the bias problem remains because the measurements made at different instants are coherent [37]. It is therefore necessary to reduce the bias to improve the localization performance. Over the years, many studies have reduced the bias of estimators using TDOAs [38,39,40,41,42,43,44,45]. Two methods for reducing the bias of the closed-form solution using TDOAs have been proposed [38], but receiver position errors and synchronization clock bias were not taken into account. One study [39] proposed a bias-reduced method for a two-sensor (or two-receiver) positioning system based on TDOA and AOA measurements in the presence of sensor errors, and simulations validate its effectiveness. Moreover, an improved algebraic solution employing new stage-2 processing for TDOA localization with sensor position errors has been proposed [40]. Simulation results show lower estimation bias; however, this method only improves the stage-2 processing, and the bias introduced in stage 1 remains to be reduced.

Inspired by previous works [38,39,40,41,42,43,44,45], this paper proposes a method for reducing the bias of the estimates from [36] using TDOAs in the presence of receiver position errors and synchronization clock bias. The study begins with a bias analysis for the original TSWLS solution. The results show that the bias of the original algorithm mainly comes from the noise correlation of the WLS problem in the first stage and the nonlinear operations in the second stage. On this basis, the proposed method introduces an augmented matrix and imposes a quadratic constraint in the first stage. Generalized singular value decomposition (GSVD) is then used to obtain the stage-1 solution. A new WLS estimator is designed to correct the stage-2 solution and avoid the use of nonlinear operations. Moreover, this paper derives the theoretical bias for the proposed method, and performance analysis indicates that the proposed bias-reduced method effectively reduces the bias without increasing the estimation covariance. Finally, simulation results verify the validity of the theoretical derivation and the superiority of the proposed method.

Compared with the previous works related to bias reduction, the major contributions of this paper are as follows.

  1. Different from most existing bias reduction methods [38,39,40,41,42,43,44,45], the proposed method considers both receiver position errors and synchronization clock bias.

  2. Through second-order error analysis, [38] investigated the bias of the classical TSWLS method [33]. The present paper extends the bias analysis to a more realistic positioning model [36], which considers both receiver position errors and synchronization clock bias.

  3. All previous studies [38,39,40,41,42,43,44,45] aim at reducing the bias of the source position estimate only. We develop a bias-reduced method that effectively reduces not only the bias of the source position but also the biases of the refined receiver positions and the estimated clock bias vector.

  4. Previous works [43, 44] reduced the bias, but the estimation variance was higher than that of the original solution. The method proposed in this paper reduces the bias of the solution without increasing the root mean square error (RMSE).

  5. The performance of the proposed bias-reduced method is derived theoretically, showing that it effectively reduces the bias without increasing the estimation variance.

The remainder of this paper is organized as follows. The measurement model in the presence of synchronization clock bias and the original TSWLS solution are described in Section 2. Section 3 presents the performance analysis of the original TSWLS solution. Section 4 develops the bias-reduced solution. In Section 5, the theoretical bias of the proposed method is derived with the aid of second-order error analysis. Simulation results are presented in Section 6, and conclusions are drawn in Section 7. The main notations used in this paper are listed in Table 1.

Table 1 Mathematical notation

2 Measurement model and original TSWLS method

2.1 TDOA measurement model in the presence of receiver position errors and synchronization clock bias

Consider a three-dimensional localization scenario in which M stationary receivers at \( {\mathbf{s}}_m^o,m=1,2,\cdots, M \) receive the signal emitted from a point source whose unknown location, denoted \( {\mathbf{u}}^o \), is to be determined. Similar to [36], the receivers are separated into N groups. Within each group, the receivers share a common local clock. However, the local clocks of different receiver groups are not the same, and there are thus clock offsets among the groups. Assuming that the first n receiver groups together contain \( M_n \) receivers, the nth group has \( M_n-M_{n-1} \) receivers, where \( M_0=0 \) and \( M_N=M \). The receiver grouping is shown in Fig. 1.

Fig. 1 The receiver grouping diagram

The clock offset of group n with respect to group 1 is denoted \( \tau_n \), n = 1, 2, …, N, where \( \tau_1=0 \). The first receiver is chosen as the reference, and the TDOA measurement for receiver pair m and 1 is denoted \( t_{m1} \). Its relationship with the range difference of arrival (RDOA) measurement \( r_{m1} \) is \( r_{m1}=c\cdot t_{m1} \), where c is the signal propagation speed. For convenience, we work directly with the RDOAs in the following derivation. The RDOAs can be modeled as

$$ {r}_{m1}={r}_{m1}^o+{\delta}_n+\Delta {r}_{m1},\kern0.5em m={M}_{n-1}+1,{M}_{n-1}+2,\dots, {M}_n,n=1,2,\dots, N, $$
(1)

where \( \Delta r_{m1} \) represents the measurement noise, \( \delta_n=c\tau_n \) (\( \delta_1=0 \)), and the true value \( {r}_{m1}^o \) is

$$ {r}_{m1}^o=\left\Vert {\mathbf{u}}^o-{\mathbf{s}}_m^o\right\Vert -\left\Vert {\mathbf{u}}^o-{\mathbf{s}}_1^o\right\Vert, \kern0.5em m=2,3,\cdots M. $$
(2)

Rewriting (1) in vector form, we obtain

$$ \mathbf{r}={\mathbf{r}}^o+\boldsymbol{\Gamma} \boldsymbol{\updelta} +\Delta \mathbf{r}, $$
(3)

where \( \mathbf{r}={\left[{r}_{21},{r}_{31},\cdots, {r}_{M1}\right]}^{\mathrm{T}} \), \( {\mathbf{r}}^o={\left[{r}_{21}^o,{r}_{31}^o,\cdots, {r}_{M1}^o\right]}^{\mathrm{T}} \), and \( \Delta \mathbf{r}={\left[\Delta {r}_{21},\Delta {r}_{31},\cdots, \Delta {r}_{M1}\right]}^{\mathrm{T}} \) comprise all measurements, true values, and measurement noise, respectively. \( \boldsymbol{\updelta} ={\left[{\delta}_2,{\delta}_3,\cdots, {\delta}_N\right]}^{\mathrm{T}} \) is the clock bias vector, which is modeled as deterministic. \( \boldsymbol{\Gamma} =\left[\begin{array}{c}{\mathbf{O}}_{\left({M}_1-1\right)\times \left(N-1\right)}\\ {}\mathrm{blkdiag}\left\{{\mathbf{1}}_{\left({M}_2-{M}_1\right)\times 1},{\mathbf{1}}_{\left({M}_3-{M}_2\right)\times 1},\cdots, {\mathbf{1}}_{\left({M}_N-{M}_{N-1}\right)\times 1}\right\}\end{array}\right]\in {\mathbf{R}}^{\left(M-1\right)\times \left(N-1\right)} \) is a column full-rank matrix; i.e., rank[Γ] = N − 1. The RDOA noise vector Δr is assumed to follow a zero-mean Gaussian distribution with covariance matrix Q1.
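To make the structure of Γ concrete, the following is a minimal numpy sketch (ours, not from [36]) that assembles Γ from the cumulative group sizes; build_gamma and group_ends are hypothetical names introduced only for illustration.

```python
import numpy as np

def build_gamma(group_ends):
    """Assemble the clock-bias mapping matrix Gamma of (3).

    group_ends = [M_1, M_2, ..., M_N] are cumulative receiver counts,
    so group n holds M_n - M_{n-1} receivers (M_0 = 0).  Gamma maps
    delta = [delta_2, ..., delta_N]^T onto the M - 1 RDOA equations."""
    M, N = group_ends[-1], len(group_ends)
    gamma = np.zeros((M - 1, N - 1))
    row = group_ends[0] - 1          # first M_1 - 1 rows (group 1) stay zero
    for n in range(1, N):            # groups 2, ..., N
        size = group_ends[n] - group_ends[n - 1]
        gamma[row:row + size, n - 1] = 1.0
        row += size
    return gamma

# Example: three groups with M_1 = 3, M_2 = 5, M_3 = 8 give a 7 x 2 matrix
# whose last five rows carry the ones for delta_2 and delta_3.
print(build_gamma([3, 5, 8]))
```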

Similar to [21, 23, 24], the receiver positions are not known exactly. The available receiver position for receiver m is expressed as

$$ {\mathbf{s}}_m={\mathbf{s}}_m^o+\Delta {\mathbf{s}}_m,\kern0.5em m=1,2,\cdots, M, $$
(4)

where Δsm represents random errors having covariance matrix \( {\mathbf{Q}}_{s_m} \). Rewriting (4) in vector format, we have

$$ \mathbf{s}={\mathbf{s}}^o+\Delta \mathbf{s}, $$
(5)

where \( \mathbf{s}={\left[{\mathbf{s}}_1^{\mathrm{T}},{\mathbf{s}}_2^{\mathrm{T}},\cdots, {\mathbf{s}}_M^{\mathrm{T}}\right]}^{\mathrm{T}} \), \( {\mathbf{s}}^o={\left[{\mathbf{s}}_1^{\mathrm{oT}},{\mathbf{s}}_2^{\mathrm{oT}},\cdots, {\mathbf{s}}_M^{\mathrm{oT}}\right]}^{\mathrm{T}} \) and \( \Delta \mathbf{s}={\left[\Delta {\mathbf{s}}_1^{\mathrm{T}},\Delta {\mathbf{s}}_2^{\mathrm{T}},\cdots, \Delta {\mathbf{s}}_M^{\mathrm{T}}\right]}^{\mathrm{T}} \) is the receiver position error vector, which is assumed to have a zero mean and be Gaussian distributed with covariance matrix \( {\mathbf{Q}}_2=\mathrm{blkdiag}\left\{{\mathbf{Q}}_{s_1},{\mathbf{Q}}_{s_2},\cdots, {\mathbf{Q}}_{s_M}\right\} \). Measurement noise Δr and receiver position error Δs are independent of each other.

The localization problem is to estimate \( {\mathbf{u}}^o \), \( {\mathbf{s}}^o \), and \( \boldsymbol{\updelta} \) as accurately as possible from the available measurements r and the receiver positions s.

Remark 1: In practice, the clock-offset grouping can be implemented according to the distances between receivers. When the receivers are close to each other, synchronization is easily performed using a single piece of hardware with multichannel acquisition capability. If the receivers are far apart, synchronous sampling is a significant challenge [36]. Therefore, receivers that are relatively close to each other are put into the same group.

2.2 Original TSWLS method

For the TDOA positioning problem in the presence of receiver position errors and synchronization clock bias, [36] proposed a computationally efficient method in which the algebraic solutions of the source location, receiver positions, and synchronization clock bias are estimated sequentially. The method has two stages for target location estimation. The first stage introduces N nuisance variables \( {d}_{M_{n-1}+1}^o=\left\Vert {\mathbf{u}}^o-{\mathbf{s}}_{M_{n-1}+1}\right\Vert, \kern0.5em n=1,2,\cdots N \) to obtain an initial solution for the source location and these nuisance variables. In the second stage, the relationship between \( {\mathbf{u}}^o \) and \( {d}_{M_{n-1}+1}^o \) is used to improve the precision of the estimated source position. The final estimate of the source location is obtained by remapping the stage-2 solution. Moreover, the solution is valid when M − N ≥ N + 3 (i.e., the number of equations is greater than or equal to the number of unknowns). The algorithm is summarized in the following, and details of the derivation can be found in [36].

Stage 1:

$$ {\boldsymbol{\upvarphi}}_1={\left({\mathbf{G}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{G}}_1\right)}^{-1}{\mathbf{G}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{h}}_1, $$
(6)

where \( {\boldsymbol{\upvarphi}}_1={\left[{\mathbf{u}}^{\mathrm{T}},{d}_1,{d}_{M_1+1},\cdots, {d}_{M_{N-1}+1}\right]}^{\mathrm{T}} \) represents the stage-1 solution, consisting of the estimated target location and the nuisance variables, and

$$ {\mathbf{G}}_1=-2{\left[\begin{array}{ccccc}{\left({\mathbf{s}}_2-{\mathbf{s}}_1\right)}^{\mathrm{T}}& {r}_{21}& 0& \cdots & 0\\ {}\vdots & \vdots & \vdots & & \vdots \\ {}{\left({\mathbf{s}}_{M_1}-{\mathbf{s}}_1\right)}^{\mathrm{T}}& {r}_{M_1,1}& 0& \cdots & 0\\ {}\vdots & \vdots & \ddots & & \vdots \\ {}{\left({\mathbf{s}}_{M_{N-1}+2}-{\mathbf{s}}_{M_{N-1}+1}\right)}^{\mathrm{T}}& 0& 0& \ddots & {r}_{M_{N-1}+2,{M}_{N-1}+1}\\ {}\vdots & \vdots & \vdots & & \vdots \\ {}{\left({\mathbf{s}}_{M_N}-{\mathbf{s}}_{M_{N-1}+1}\right)}^{\mathrm{T}}& 0& 0& \cdots & {r}_{M_N,{M}_{N-1}+1}\end{array}\right]}_{\left(M-N\right)\times \left(N+3\right)}, $$
(7)
$$ {\mathbf{h}}_1={\left[\begin{array}{c}{r}_{21}^2-{\mathbf{s}}_2^{\mathrm{T}}{\mathbf{s}}_2+{\mathbf{s}}_1^{\mathrm{T}}{\mathbf{s}}_1\\ {}\vdots \\ {}{r}_{M_1,1}^2-{\mathbf{s}}_{M_1}^{\mathrm{T}}{\mathbf{s}}_{M_1}+{\mathbf{s}}_1^{\mathrm{T}}{\mathbf{s}}_1\\ {}\vdots \\ {}{r}_{M_{N-1}+2,{M}_{N-1}+1}^2-{\mathbf{s}}_{M_{N-1}+2}^{\mathrm{T}}{\mathbf{s}}_{M_{N-1}+2}+{\mathbf{s}}_{M_{N-1}+1}^{\mathrm{T}}{\mathbf{s}}_{M_{N-1}+1}\\ {}\vdots \\ {}{r}_{M_N,{M}_{N-1}+1}^2-{\mathbf{s}}_{M_N}^{\mathrm{T}}{\mathbf{s}}_{M_N}+{\mathbf{s}}_{M_{N-1}+1}^{\mathrm{T}}{\mathbf{s}}_{M_{N-1}+1}\end{array}\right]}_{\left(M-N\right)\times 1}. $$
(8)
$$ {\mathbf{W}}_1={\left({\mathbf{B}}_1{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}{\mathbf{B}}_1+{\mathbf{D}}_1{\mathbf{Q}}_2{\mathbf{D}}_1^{\mathrm{T}}\right)}^{-1}. $$
(9)

The compositions in (9) are expressed as

$$ \left\{\begin{array}{l}\mathbf{A}=\mathrm{blkdiag}\left\{{\mathbf{Z}}_1,{\mathbf{Z}}_2,\cdots, {\mathbf{Z}}_N\right\}\\ {}{\mathbf{B}}_1=2\operatorname{diag}\left\{{r}_2^o,{r}_3^o,\cdots, {r}_{M_1}^o,{r}_{M_1+2}^o,\cdots, {r}_{M_2}^o,\cdots, {r}_{M_{N-1}+2}^o,\cdots, {r}_{M_N}^o\right\}\\ {}{\mathbf{D}}_1=\mathrm{blkdiag}\left\{{\mathbf{D}}_{1,1},{\mathbf{D}}_{1,2},\cdots, {\mathbf{D}}_{1,N}\right\}\end{array}\right., $$
(10)

where

$$ \left\{\begin{array}{l}{\mathbf{Z}}_1={\mathbf{I}}_{M_1-1};{\mathbf{Z}}_j=\left[-{\mathbf{1}}_{M_j-{M}_{j-1}-1},{\mathbf{I}}_{M_j-{M}_{j-1}-1}\right],j=2,3\cdots N\\ {}{r}_m^o=\left\Vert {\mathbf{u}}^o-{\mathbf{s}}_m^o\right\Vert \\ {}{\mathbf{D}}_{1,n}=2{\left[\begin{array}{ccccc}-{\left[\left({\mathbf{u}}^o-{\mathbf{s}}_{M_{n-1}+1}\right)+{r}_{M_{n-1}+2,{M}_{n-1}+1}{\boldsymbol{\uprho}}_{M_{n-1}+1}\right]}^{\mathrm{T}}& {\left({\mathbf{u}}^o-{\mathbf{s}}_{M_{n-1}+2}\right)}^{\mathrm{T}}& {\mathbf{0}}_{1\times 3}& \cdots & {\mathbf{0}}_{1\times 3}\\ {}-{\left[\left({\mathbf{u}}^o-{\mathbf{s}}_{M_{n-1}+1}\right)+{r}_{M_{n-1}+3,{M}_{n-1}+1}{\boldsymbol{\uprho}}_{M_{n-1}+1}\right]}^{\mathrm{T}}& {\mathbf{0}}_{1\times 3}& {\left({\mathbf{u}}^o-{\mathbf{s}}_{M_{n-1}+3}\right)}^{\mathrm{T}}& \cdots & {\mathbf{0}}_{1\times 3}\\ {}\vdots & \vdots & \vdots & \ddots & \vdots \\ {}-{\left[\left({\mathbf{u}}^o-{\mathbf{s}}_{M_{n-1}+1}\right)+{r}_{M_n,{M}_{n-1}+1}{\boldsymbol{\uprho}}_{M_{n-1}+1}\right]}^{\mathrm{T}}& {\mathbf{0}}_{1\times 3}& {\mathbf{0}}_{1\times 3}& \cdots & {\left({\mathbf{u}}^o-{\mathbf{s}}_{M_n}\right)}^{\mathrm{T}}\end{array}\right]}_{\left({M}_n-{M}_{n-1}-1\right)\times \left(3\cdot \left({M}_n-{M}_{n-1}\right)\right)}\\ {}{\boldsymbol{\uprho}}_m=\left({\mathbf{u}}^o-{\mathbf{s}}_m\right)/\left\Vert {\mathbf{u}}^o-{\mathbf{s}}_m\right\Vert \end{array}\right.. $$
(11)

Note that the true receiver locations \( {\mathbf{s}}_m^o \) in B1 can be replaced by their noisy versions sm. Additionally, both B1 and D1 in W1 contain the true source location, which is unknown. To overcome this problem, W1 is first set to the identity matrix and an initial solution \( {\widehat{\boldsymbol{\upvarphi}}}_1 \) is obtained from (6); an approximate W1 is then computed from \( {\widehat{\boldsymbol{\upvarphi}}}_1 \), which yields the stage-1 solution. The error due to the approximation of W1 is negligible [36].
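This bootstrapping of W1 reduces to two WLS passes. The sketch below (ours) assumes a hypothetical callable build_W1 that evaluates (9) at a candidate solution; it illustrates the logic rather than reproducing the reference implementation.

```python
import numpy as np

def tswls_stage1(G1, h1, build_W1):
    """Two-pass WLS of (6): start with W1 = I, re-derive W1 from the
    initial estimate via (9), then solve again for the stage-1 solution."""
    W1 = np.eye(G1.shape[0])
    phi1 = np.linalg.solve(G1.T @ W1 @ G1, G1.T @ W1 @ h1)  # initial guess
    W1 = build_W1(phi1)                                     # approximate (9)
    phi1 = np.linalg.solve(G1.T @ W1 @ G1, G1.T @ W1 @ h1)  # stage-1 solution
    return phi1, W1
```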

Stage 2:

$$ {\boldsymbol{\upvarphi}}_2={\left({\mathbf{G}}_2^{\mathrm{T}}{\mathbf{W}}_2{\mathbf{G}}_2\right)}^{-1}{\mathbf{G}}_2^{\mathrm{T}}{\mathbf{W}}_2{\mathbf{h}}_2, $$
(12)

where \( {\boldsymbol{\upvarphi}}_2=\mathbf{u}\odot \mathbf{u} \) represents the stage-2 solution, the Schur (element-wise) product of the target location estimate with itself, and

$$ {\mathbf{G}}_2={\left[\begin{array}{c}{\mathbf{I}}_3\\ {}{\mathbf{1}}_3^{\mathrm{T}}\\ {}\vdots \\ {}{\mathbf{1}}_3^{\mathrm{T}}\end{array}\right]}_{\left(N+3\right)\times 3},{\mathbf{h}}_2={\boldsymbol{\upvarphi}}_1\odot {\boldsymbol{\upvarphi}}_1+{\left[\begin{array}{c}{\mathbf{0}}_{3\times 1}\\ {}2{\mathbf{s}}_1^{\mathrm{T}}{\boldsymbol{\upvarphi}}_1\left(1:3\right)-{\mathbf{s}}_1^{\mathrm{T}}{\mathbf{s}}_1\\ {}2{\mathbf{s}}_{M_1+1}^{\mathrm{T}}{\boldsymbol{\upvarphi}}_1\left(1:3\right)-{\mathbf{s}}_{M_1+1}^{\mathrm{T}}{\mathbf{s}}_{M_1+1}\\ {}\vdots \\ {}2{\mathbf{s}}_{M_{N-1}+1}^{\mathrm{T}}{\boldsymbol{\upvarphi}}_1\left(1:3\right)-{\mathbf{s}}_{M_{N-1}+1}^{\mathrm{T}}{\mathbf{s}}_{M_{N-1}+1}\end{array}\right]}_{\left(N+3\right)\times 1}, $$
(13)
$$ {\mathbf{W}}_2={\mathbf{B}}_2^{-\mathrm{T}}\left({\mathbf{G}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{G}}_1\right){\mathbf{B}}_2^{-1}, $$
(14)

in which

$$ {\mathbf{B}}_2=2{\left[\begin{array}{cccc}\operatorname{diag}\left\{{\boldsymbol{\upvarphi}}_1\left(1:3\right)\right\}& 0& \cdots & 0\\ {}{\mathbf{s}}_1^{\mathrm{T}}& {\boldsymbol{\upvarphi}}_1(4)& \cdots & 0\\ {}\vdots & \vdots & \ddots & \vdots \\ {}{\mathbf{s}}_{M_{N-1}+1}^{\mathrm{T}}& 0& \cdots & {\boldsymbol{\upvarphi}}_1\left(3+N\right)\end{array}\right]}_{\left(N+3\right)\times \left(N+3\right)}. $$
(15)

The final source position solution is

$$ \mathbf{u}=\boldsymbol{\Pi} \sqrt{{\boldsymbol{\upvarphi}}_2\left(1:3\right)}, $$
(16)

where

$$ \boldsymbol{\Pi} =\operatorname{diag}\left\{\operatorname{sgn}\left({\boldsymbol{\upvarphi}}_1\left(1:3\right)\right)\right\}. $$
(17)
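In code, the remapping (16)–(17) is only a few lines; the following sketch (ours) adds an abs() guard against small negative entries of φ2 under noise, which is an implementation assumption rather than part of the original formulation.

```python
import numpy as np

def recover_position(phi1, phi2):
    """Final mapping (16)-(17): element-wise square root of phi2,
    with the signs taken from the stage-1 estimate phi1(1:3)."""
    signs = np.sign(phi1[:3])                 # Pi in (17)
    return signs * np.sqrt(np.abs(phi2[:3]))  # abs() guards noise-induced negatives
```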

According to [38], the source position estimate of the classical TSWLS method [33] has appreciable bias when the noise level is high or the localization geometry is poor. It can therefore be expected that the bias of the original TSWLS algorithm [36] under receiver position errors and synchronization clock bias is also large. To solve this problem, the present paper designs a new bias-reduced estimator for this scenario. We first derive the expression for the bias of the original TSWLS algorithm.

3 Performance analysis of the original TSWLS method

This section analyzes the performance of the original TSWLS solution using second-order error analysis. Two basic assumptions are made in our analysis. (1) The noise level is moderate, so error terms of order higher than two can be ignored. (2) The source is sufficiently far from each receiver that the performance loss due to the approximation of W1 is negligible.

3.1 Bias analysis for φ1

Subtracting the true value \( {\boldsymbol{\upvarphi}}_1^o \) from both sides of (6), the estimation error in φ1 can be expressed as

$$ \Delta {\boldsymbol{\upvarphi}}_1={\left({\mathbf{G}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{G}}_1\right)}^{-1}{\mathbf{G}}_1^{\mathrm{T}}{\mathbf{W}}_1\left({\mathbf{h}}_1-{\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1^o\right). $$
(18)

According to the definitions of G1 in (7) and h1 in (8), and ignoring error terms of order higher than two, \( {\mathbf{h}}_1-{\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1^o \) can be expressed as

$$ {\mathbf{h}}_1-{\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1^o={\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}+{\tilde{\mathbf{D}}}_1\Delta \mathbf{s}+\left(\mathbf{A}\Delta \mathbf{r}\right)\odot \left(\mathbf{A}\Delta \mathbf{r}\right)+\mathbf{E}\left(\Delta \mathbf{s}\odot \Delta \mathbf{s}\right)-2\mathbf{T}\left(\Delta \mathbf{s}\right)\mathbf{A}\Delta \mathbf{r}-2\mathbf{R}\left(\Delta \mathbf{s}\right)\mathbf{A}{\mathbf{r}}^o, $$
(19)

where

$$ \left\{\begin{array}{l}{\tilde{\mathbf{D}}}_1=\mathrm{blkdiag}\left\{{\tilde{\mathbf{D}}}_{1,1},{\tilde{\mathbf{D}}}_{1,2},\cdots, {\tilde{\mathbf{D}}}_{1,N}\right\}\\ {}\mathbf{E}=\mathrm{blkdiag}\left\{{\mathbf{E}}_1,{\mathbf{E}}_2,\cdots, {\mathbf{E}}_N\right\}\\ {}\mathbf{T}\left(\Delta \mathbf{s}\right)=\mathrm{blkdiag}\left\{{\boldsymbol{\uprho}}_1^{\mathrm{T}}\Delta {\mathbf{s}}_1\cdot {\mathbf{I}}_{M_1-1},{\boldsymbol{\uprho}}_{M_1+1}^{\mathrm{T}}\Delta {\mathbf{s}}_{M_1+1}\cdot {\mathbf{I}}_{M_2-{M}_1-1},\cdots, {\boldsymbol{\uprho}}_{M_{N-1}+1}^{\mathrm{T}}\Delta {\mathbf{s}}_{M_{N-1}+1}\cdot {\mathbf{I}}_{M_N-{M}_{N-1}-1}\right\}\\ {}\mathbf{R}\left(\Delta \mathbf{s}\right)=\mathrm{blkdiag}\left\{\Delta {\mathbf{s}}_1^{\mathrm{T}}{\mathbf{P}}_1\Delta {\mathbf{s}}_1\cdot {\mathbf{I}}_{M_1-1},\Delta {\mathbf{s}}_{M_1+1}^{\mathrm{T}}{\mathbf{P}}_{M_1+1}\Delta {\mathbf{s}}_{M_1+1}\cdot {\mathbf{I}}_{M_2-{M}_1-1},\cdots, \Delta {\mathbf{s}}_{M_{N-1}+1}^{\mathrm{T}}{\mathbf{P}}_{M_{N-1}+1}\Delta {\mathbf{s}}_{M_{N-1}+1}\cdot {\mathbf{I}}_{M_N-{M}_{N-1}-1}\right\}\end{array}\right., $$
(20)

in which

$$ \left\{\begin{array}{l}{\tilde{\mathbf{D}}}_{1,n}=2{\left[\begin{array}{ccccc}-{\left[\left({\mathbf{u}}^o-{\mathbf{s}}_{M_{n-1}+1}^o\right)+{r}_{M_{n-1}+2,{M}_{n-1}+1}^o{\boldsymbol{\uprho}}_{M_{n-1}+1}\right]}^{\mathrm{T}}& {\left({\mathbf{u}}^o-{\mathbf{s}}_{M_{n-1}+2}^o\right)}^{\mathrm{T}}& {\mathbf{0}}_{1\times 3}& \cdots & {\mathbf{0}}_{1\times 3}\\ {}-{\left[\left({\mathbf{u}}^o-{\mathbf{s}}_{M_{n-1}+1}^o\right)+{r}_{M_{n-1}+3,{M}_{n-1}+1}^o{\boldsymbol{\uprho}}_{M_{n-1}+1}\right]}^{\mathrm{T}}& {\mathbf{0}}_{1\times 3}& {\left({\mathbf{u}}^o-{\mathbf{s}}_{M_{n-1}+3}^o\right)}^{\mathrm{T}}& \cdots & {\mathbf{0}}_{1\times 3}\\ {}\vdots & \vdots & \vdots & \ddots & \vdots \\ {}-{\left[\left({\mathbf{u}}^o-{\mathbf{s}}_{M_{n-1}+1}^o\right)+{r}_{M_n,{M}_{n-1}+1}^o{\boldsymbol{\uprho}}_{M_{n-1}+1}\right]}^{\mathrm{T}}& {\mathbf{0}}_{1\times 3}& {\mathbf{0}}_{1\times 3}& \cdots & {\left({\mathbf{u}}^o-{\mathbf{s}}_{M_n}^o\right)}^{\mathrm{T}}\end{array}\right]}_{\left({M}_n-{M}_{n-1}-1\right)\times \left(3\cdot \left({M}_n-{M}_{n-1}\right)\right)}\\ {}{\mathbf{E}}_n=\left[{\mathbf{1}}_{M_n-{M}_{n-1}-1},-{\mathbf{I}}_{M_n-{M}_{n-1}-1}\right]\otimes {\mathbf{1}}_3^{\mathrm{T}}\kern5.25em \\ {}{\mathbf{P}}_m=\frac{{\left\Vert {\mathbf{u}}^o-{\mathbf{s}}_m\right\Vert}^2{\mathbf{I}}_3-\left({\mathbf{u}}^o-{\mathbf{s}}_m\right){\left({\mathbf{u}}^o-{\mathbf{s}}_m\right)}^{\mathrm{T}}}{2{\left\Vert {\mathbf{u}}^o-{\mathbf{s}}_m\right\Vert}^3}\\ {}\end{array}\right.. $$
(21)

Note that (19) uses the approximation \( {r}_m^o=\left\Vert {\mathbf{u}}^o-{\mathbf{s}}_m^o\right\Vert \approx \left\Vert {\mathbf{u}}^o-{\mathbf{s}}_m\right\Vert +{\boldsymbol{\uprho}}_m^{\mathrm{T}}\Delta {\mathbf{s}}_m+\Delta {\mathbf{s}}_m^{\mathrm{T}}{\mathbf{P}}_m\Delta {\mathbf{s}}_m \).

In (18), G1 is also noisy and can be decomposed as \( {\mathbf{G}}_1={\mathbf{G}}_1^o+\Delta {\mathbf{G}}_1 \). Hence, we obtain

$$ \Delta {\mathbf{G}}_1=-2\left[{\mathbf{A}}_2\Delta \tilde{\mathbf{s}},\boldsymbol{\Lambda} \left({\mathbf{I}}_N\otimes \left(\mathbf{A}\Delta \mathbf{r}\right)\right)\right], $$
(22)

where \( {\mathbf{A}}_2=\mathrm{blkdiag}\left\{{\mathbf{A}}_{2,1},{\mathbf{A}}_{2,2},\cdots, {\mathbf{A}}_{2,N}\right\} \), \( \boldsymbol{\Lambda} ={\left[{\boldsymbol{\Lambda}}_1,{\boldsymbol{\Lambda}}_2,\cdots, {\boldsymbol{\Lambda}}_N\right]}^{\mathrm{T}} \), and \( \Delta \tilde{\mathbf{s}}={\left[\Delta {\mathbf{s}}_1,\Delta {\mathbf{s}}_2,\cdots, \Delta {\mathbf{s}}_M\right]}^{\mathrm{T}} \) is the receiver position error matrix, in which \( {\mathbf{A}}_{2,n}=\left[-{\mathbf{1}}_{M_n-{M}_{n-1}-1},{\mathbf{I}}_{M_n-{M}_{n-1}-1}\right] \) and \( {\boldsymbol{\Lambda}}_n=\mathrm{blkdiag}\left\{{\mathbf{0}}_{M_{n-1}-n+1},{\mathbf{I}}_{M_n-{M}_{n-1}-1},{\mathbf{0}}_{M-N-{M}_n+n}\right\} \).

Letting \( {\mathbf{U}}_1={\mathbf{G}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{G}}_1 \) yields

$$ \left\{\begin{array}{l}{\mathbf{U}}_1^o={\mathbf{G}}_1^{\mathrm{oT}}{\mathbf{W}}_1{\mathbf{G}}_1^o\\ {}\Delta {\mathbf{U}}_1={\mathbf{G}}_1^{\mathrm{oT}}{\mathbf{W}}_1\Delta {\mathbf{G}}_1+\Delta {\mathbf{G}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{G}}_1^o\end{array}\right.. $$
(23)

According to the Neumann expansion [46], we have

$$ {\mathbf{U}}_1^{-1}\approx \left(\mathbf{I}-{\mathbf{U}}_1^{o-1}\Delta {\mathbf{U}}_1\right){\mathbf{U}}_1^{o-1}. $$
(24)
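As a quick numerical sanity check of this first-order approximation, one can verify with arbitrary well-conditioned test matrices (standing in for the localization quantities) that the residual of (24) scales quadratically with the perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)
U0 = np.eye(4) + 0.1 * rng.standard_normal((4, 4))  # well-conditioned "true" matrix
dU = 1e-3 * rng.standard_normal((4, 4))             # small perturbation

exact = np.linalg.inv(U0 + dU)
U0_inv = np.linalg.inv(U0)
approx = (np.eye(4) - U0_inv @ dU) @ U0_inv         # first-order Neumann, as in (24)
# relative residual is O(||dU||^2), i.e., around 1e-6 here
print(np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```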

Substituting (19), (22), and (24) into (18) yields

$$ {\displaystyle \begin{array}{l}\Delta {\boldsymbol{\upvarphi}}_1=\left({\mathbf{U}}_1^{o-1}-{\mathbf{U}}_1^{o-1}\Delta {\mathbf{U}}_1{\mathbf{U}}_1^{o-1}\right)\left({\mathbf{G}}_1^{\mathrm{o}\mathrm{T}}+\Delta {\mathbf{G}}_1^{\mathrm{T}}\right){\mathbf{W}}_1\left({\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}+{\tilde{\mathbf{D}}}_1\Delta \mathbf{s}+\left(\mathbf{A}\Delta \mathbf{r}\right)\odot \left(\mathbf{A}\Delta \mathbf{r}\right)+\mathbf{E}\left(\Delta \mathbf{s}\odot \Delta \mathbf{s}\right)-2\mathbf{T}\left(\Delta \mathbf{s}\right)\mathbf{A}\Delta \mathbf{r}-2\mathbf{R}\left(\Delta \mathbf{s}\right)\mathbf{A}{\mathbf{r}}^o\right)\\ {}\kern1.5em ={\mathbf{H}}_1\left({\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}+{\tilde{\mathbf{D}}}_1\Delta \mathbf{s}+\left(\mathbf{A}\Delta \mathbf{r}\right)\odot \left(\mathbf{A}\Delta \mathbf{r}\right)+\mathbf{E}\left(\Delta \mathbf{s}\odot \Delta \mathbf{s}\right)-2\mathbf{T}\left(\Delta \mathbf{s}\right)\mathbf{A}\Delta \mathbf{r}-2\mathbf{R}\left(\Delta \mathbf{s}\right)\mathbf{A}{\mathbf{r}}^o\right)\\ {}\kern2.5em +{\mathbf{U}}_1^{o-1}\Delta {\mathbf{G}}_1^{\mathrm{T}}{\mathbf{W}}_1\left(\mathbf{I}-{\mathbf{G}}_1^{\mathrm{o}}{\mathbf{H}}_1\right){\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}+{\mathbf{U}}_1^{o-1}\Delta {\mathbf{G}}_1^{\mathrm{T}}{\mathbf{W}}_1\left(\mathbf{I}-{\mathbf{G}}_1^{\mathrm{o}}{\mathbf{H}}_1\right){\tilde{\mathbf{D}}}_1\Delta \mathbf{s}-{\mathbf{H}}_1\Delta {\mathbf{G}}_1{\mathbf{H}}_1{\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}-{\mathbf{H}}_1\Delta {\mathbf{G}}_1{\mathbf{H}}_1{\tilde{\mathbf{D}}}_1\Delta \mathbf{s},\end{array}} $$
(25)

where \( {\mathbf{H}}_1={\left({\mathbf{G}}_1^{\mathrm{oT}}{\mathbf{W}}_1{\mathbf{G}}_1^o\right)}^{-1}{\mathbf{G}}_1^{\mathrm{oT}}{\mathbf{W}}_1 \). Taking the expectation of (25) yields

$$ {\displaystyle \begin{array}{l}\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\right]={\mathbf{H}}_1\mathrm{vecd}\left[{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}\right]+{\mathbf{H}}_1\mathbf{E}\mathrm{vecd}\left[{\mathbf{Q}}_2\right]-2{\mathbf{H}}_1\cdot \mathrm{blkdiag}\Big\{\mathrm{tr}\left\{{\mathbf{Q}}_{s_1}{\mathbf{P}}_1\right\}\cdot {\mathbf{I}}_{M_1-1},\mathrm{tr}\left\{{\mathbf{Q}}_{s_{M_1+1}}{\mathbf{P}}_{M_1+1}\right\}\cdot {\mathbf{I}}_{M_2-{M}_1-1},\\ {}\kern3.75em \cdots, \mathrm{tr}\left\{{\mathbf{Q}}_{s_{M_{N-1}+1}}{\mathbf{P}}_{M_{N-1}+1}\right\}\cdot {\mathbf{I}}_{M_N-{M}_{N-1}-1}\Big\}\cdot \mathbf{A}{\mathbf{r}}^o\\ {}\kern3.75em +2{\mathbf{U}}_1^{o-1}\left[\begin{array}{c}\begin{array}{l}{\mathbf{Q}}_{s_1}{\mathbf{X}}_1{\left(1,1:3\right)}^{\mathrm{T}}+{\mathbf{Q}}_{s_2}{\mathbf{X}}_1{\left(2,4:6\right)}^{\mathrm{T}}\\ {}+\cdots +{\mathbf{Q}}_{s_M}{\mathbf{X}}_1{\left(M,3M-2:3M\right)}^{\mathrm{T}}\end{array}\\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_1{\mathbf{W}}_1\left({\mathbf{G}}_1^{\mathrm{o}}{\mathbf{H}}_1-\mathbf{I}\right){\mathbf{B}}_1{\mathbf{A}\mathbf{Q}}_1\right\}\\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_2{\mathbf{W}}_1\left({\mathbf{G}}_1^{\mathrm{o}}{\mathbf{H}}_1-\mathbf{I}\right){\mathbf{B}}_1{\mathbf{A}\mathbf{Q}}_1\right\}\\ {}\vdots \\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_N{\mathbf{W}}_1\left({\mathbf{G}}_1^{\mathrm{o}}{\mathbf{H}}_1-\mathbf{I}\right){\mathbf{B}}_1{\mathbf{A}\mathbf{Q}}_1\right\}\end{array}\right]+2{\mathbf{H}}_1{\mathbf{A}}_2\left[\begin{array}{c}\mathrm{tr}\left\{{\mathbf{X}}_2\left(1:3,1:3\right){\mathbf{Q}}_{s_1}\right\}\\ {}\mathrm{tr}\left\{{\mathbf{X}}_2\left(1:3,4:6\right){\mathbf{Q}}_{s_2}\right\}\\ {}\vdots \\ {}\mathrm{tr}\left\{{\mathbf{X}}_2\left(1:3,3M-2:3M\right){\mathbf{Q}}_{s_M}\right\}\end{array}\right]\\ {}\kern3.5em +2{\mathbf{H}}_1\left({\boldsymbol{\Lambda}}_1{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}{\mathbf{B}}_1^{\mathrm{T}}{\mathbf{H}}_1{\left(4,:\right)}^{\mathrm{T}}+{\boldsymbol{\Lambda}}_2{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}{\mathbf{B}}_1^{\mathrm{T}}{\mathbf{H}}_1{\left(5,:\right)}^{\mathrm{T}}+\cdots +{\boldsymbol{\Lambda}}_N{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}{\mathbf{B}}_1^{\mathrm{T}}{\mathbf{H}}_1{\left(3+N,:\right)}^{\mathrm{T}}\right),\end{array}} $$
(26)

which is proved in Appendix A. X1 and X2 are given by (A.7) in Appendix A. The first two components \( {\mathbf{H}}_1\mathrm{vecd}\left[{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}\right] \) and \( {\mathbf{H}}_1\mathbf{E}\mathrm{vecd}\left[{\mathbf{Q}}_2\right] \) come from second-order error terms due to the squaring of the measurements and receiver positions in h1, respectively. The third component comes from the second-order error term \( \Delta {\mathbf{s}}_m^{\mathrm{T}}{\mathbf{P}}_m\Delta {\mathbf{s}}_m \) in the Taylor series expansion of \( {d}_{M_{n-1}+1}^o \). The remaining components come from the measurement noise and receiver position errors in the regressor G1.

3.2 Bias analysis for φ2

Subtracting the true value \( {\boldsymbol{\upvarphi}}_2^o \) from both sides of (12) yields

$$ \Delta {\boldsymbol{\upvarphi}}_2={\left({\mathbf{G}}_2^{\mathrm{T}}{\mathbf{W}}_2{\mathbf{G}}_2\right)}^{-1}{\mathbf{G}}_2^{\mathrm{T}}{\mathbf{W}}_2\left({\mathbf{h}}_2-{\mathbf{G}}_2{\boldsymbol{\upvarphi}}_2^o\right). $$
(27)

According to the definitions of G2 and h2 in (13), and ignoring error terms of order higher than two, \( {\mathbf{h}}_2-{\mathbf{G}}_2{\boldsymbol{\upvarphi}}_2^o \) can be expressed in terms of Δφ1 as

$$ {\mathbf{h}}_2-{\mathbf{G}}_2{\boldsymbol{\upvarphi}}_2^o={\mathbf{B}}_2^o\Delta {\boldsymbol{\upvarphi}}_1+\Delta {\boldsymbol{\upvarphi}}_1\odot \Delta {\boldsymbol{\upvarphi}}_1, $$
(28)

where

$$ {\mathbf{B}}_2^o=2{\left[\begin{array}{cccc}\operatorname{diag}\left\{{\mathbf{u}}^o\right\}& 0& \cdots & 0\\ {}{\mathbf{s}}_1^{\mathrm{T}}& {d}_1^o& \cdots & 0\\ {}\vdots & \vdots & \ddots & \vdots \\ {}{\mathbf{s}}_{M_{N-1}+1}^{\mathrm{T}}& 0& \cdots & {d}_{M_{N-1}+1}^o\end{array}\right]}_{\left(N+3\right)\times \left(N+3\right)}. $$
(29)

In (27), G2 is a constant matrix, whereas W2 is noisy because G1 and B2 within it contain noise. According to the definitions of B2 in (15) and \( {\mathbf{B}}_2^o \) in (29), we have \( \Delta {\mathbf{B}}_2=2\operatorname{diag}\left\{\Delta {\boldsymbol{\upvarphi}}_1\right\} \). Adopting the Neumann expansion [46], we have

$$ {\mathbf{B}}_2^{-1}\approx {\mathbf{B}}_2^{o-1}-{\mathbf{B}}_2^{o-1}\Delta {\mathbf{B}}_2{\mathbf{B}}_2^{o-1}. $$
(30)

Substituting (23) and (30) into (14) and ignoring error terms of order higher than one yields

$$ {\displaystyle \begin{array}{l}{\mathbf{W}}_2={\left({\mathbf{B}}_2^{o-1}-{\mathbf{B}}_2^{o-1}\Delta {\mathbf{B}}_2{\mathbf{B}}_2^{o-1}\right)}^{\mathrm{T}}\left({\mathbf{U}}_1^o+\Delta {\mathbf{U}}_1\right)\left({\mathbf{B}}_2^{o-1}-{\mathbf{B}}_2^{o-1}\Delta {\mathbf{B}}_2{\mathbf{B}}_2^{o-1}\right)\\ {}\kern1em \approx {\mathbf{B}}_2^{o-\mathrm{T}}{\mathbf{U}}_1^o{\mathbf{B}}_2^{o-1}+{\mathbf{B}}_2^{o-\mathrm{T}}\Delta {\mathbf{U}}_1{\mathbf{B}}_2^{o-1}-{\mathbf{B}}_2^{o-\mathrm{T}}\Delta {\mathbf{B}}_2{\mathbf{B}}_2^{o-\mathrm{T}}{\mathbf{U}}_1^o{\mathbf{B}}_2^{o-1}-{\mathbf{B}}_2^{o-\mathrm{T}}{\mathbf{U}}_1^o{\mathbf{B}}_2^{o-1}\Delta {\mathbf{B}}_2{\mathbf{B}}_2^{o-1}.\end{array}} $$
(31)

Letting \( {\mathbf{W}}_2^o={\mathbf{B}}_2^{o-\mathrm{T}}{\mathbf{U}}_1^o{\mathbf{B}}_2^{o-1} \), we have

$$ {\mathbf{W}}_2\approx {\mathbf{W}}_2^o+{\mathbf{B}}_2^{o-\mathrm{T}}\Delta {\mathbf{U}}_1{\mathbf{B}}_2^{o-1}-{\mathbf{B}}_2^{o-\mathrm{T}}\Delta {\mathbf{B}}_2{\mathbf{W}}_2^o-{\mathbf{W}}_2^o\Delta {\mathbf{B}}_2{\mathbf{B}}_2^{o-1}={\mathbf{W}}_2^o+\Delta {\mathbf{W}}_2. $$
(32)

Letting \( {\mathbf{U}}_2={\mathbf{G}}_2^{\mathrm{T}}{\mathbf{W}}_2{\mathbf{G}}_2 \) yields

$$ {\mathbf{U}}_2={\mathbf{U}}_2^o+\Delta {\mathbf{U}}_2,\kern0.5em {\mathbf{U}}_2^o={\mathbf{G}}_2^{\mathrm{T}}{\mathbf{W}}_2^o{\mathbf{G}}_2,\kern0.5em \mathrm{and}\kern0.5em \Delta {\mathbf{U}}_2={\mathbf{G}}_2^{\mathrm{T}}\Delta {\mathbf{W}}_2{\mathbf{G}}_2. $$
(33)

Applying the Neumann expansion [46] again, we have \( {\mathbf{U}}_2^{-1}\approx {\mathbf{U}}_2^{o-1}-{\mathbf{U}}_2^{o-1}\Delta {\mathbf{U}}_2{\mathbf{U}}_2^{o-1} \). Substituting (28) and (32) into (27) and ignoring error terms of order higher than two yields

$$ {\displaystyle \begin{array}{l}\Delta {\boldsymbol{\upvarphi}}_2=\left({\mathbf{U}}_2^{o-1}-{\mathbf{U}}_2^{o-1}\Delta {\mathbf{U}}_2{\mathbf{U}}_2^{o-1}\right){\mathbf{G}}_2^{\mathrm{T}}\left({\mathbf{W}}_2^o+\Delta {\mathbf{W}}_2\right)\left({\mathbf{B}}_2^o\Delta {\boldsymbol{\upvarphi}}_1+\Delta {\boldsymbol{\upvarphi}}_1\odot \Delta {\boldsymbol{\upvarphi}}_1\right)\ \\ {}\kern1.5em ={\mathbf{H}}_2\left({\mathbf{B}}_2^o\Delta {\boldsymbol{\upvarphi}}_1+\Delta {\boldsymbol{\upvarphi}}_1\odot \Delta {\boldsymbol{\upvarphi}}_1\right)+{\mathbf{U}}_2^{o-1}{\mathbf{G}}_2^{\mathrm{T}}\Delta {\mathbf{W}}_2\left(\mathbf{I}-{\mathbf{G}}_2{\mathbf{H}}_2\right){\mathbf{B}}_2^o\Delta {\boldsymbol{\upvarphi}}_1\\ {}\kern1.5em ={\mathbf{H}}_2\left({\mathbf{B}}_2^o\Delta {\boldsymbol{\upvarphi}}_1+\Delta {\boldsymbol{\upvarphi}}_1\odot \Delta {\boldsymbol{\upvarphi}}_1\right)+{\mathbf{U}}_2^{o-1}{\mathbf{G}}_2^{\mathrm{T}}\left({\mathbf{B}}_2^{o-\mathrm{T}}\Delta {\mathbf{U}}_1{\mathbf{B}}_2^{o-1}-{\mathbf{B}}_2^{o-\mathrm{T}}\Delta {\mathbf{B}}_2{\mathbf{W}}_2^o-{\mathbf{W}}_2^o\Delta {\mathbf{B}}_2{\mathbf{B}}_2^{o-1}\right)\left(\mathbf{I}-{\mathbf{G}}_2{\mathbf{H}}_2\right){\mathbf{B}}_2^o\Delta {\boldsymbol{\upvarphi}}_1\\ {}\kern1.5em ={\mathbf{H}}_2\left({\mathbf{B}}_2^o\Delta {\boldsymbol{\upvarphi}}_1+\Delta {\boldsymbol{\upvarphi}}_1\odot \Delta {\boldsymbol{\upvarphi}}_1\right)+{\mathbf{U}}_2^{o-1}{\mathbf{G}}_2^{\mathrm{T}}\left(\begin{array}{l}{\mathbf{B}}_2^{o-\mathrm{T}}\Delta {\mathbf{U}}_1{\mathbf{B}}_2^{o-1}\left(\mathbf{I}-{\mathbf{G}}_2{\mathbf{H}}_2\right){\mathbf{B}}_2^o\Delta {\boldsymbol{\upvarphi}}_1-{\mathbf{B}}_2^{o-\mathrm{T}}\Delta {\mathbf{B}}_2{\mathbf{W}}_2^o\left(\mathbf{I}-{\mathbf{G}}_2{\mathbf{H}}_2\right){\mathbf{B}}_2^o\Delta {\boldsymbol{\upvarphi}}_1\\ {}-{\mathbf{W}}_2^o\Delta {\mathbf{B}}_2{\mathbf{B}}_2^{o-1}\left(\mathbf{I}-{\mathbf{G}}_2{\mathbf{H}}_2\right){\mathbf{B}}_2^o\Delta {\boldsymbol{\upvarphi}}_1\end{array}\right),\end{array}} $$
(34)

where \( {\mathbf{H}}_2={\left({\mathbf{G}}_2^{\mathrm{T}}{\mathbf{W}}_2^o{\mathbf{G}}_2\right)}^{-1}{\mathbf{G}}_2^{\mathrm{T}}{\mathbf{W}}_2^o={\mathbf{U}}_2^{o-1}{\mathbf{G}}_2^{\mathrm{T}}{\mathbf{W}}_2^o \).

According to \( {\mathbf{Q}}_1=\mathrm{E}\left[\Delta \mathbf{r}\Delta {\mathbf{r}}^{\mathrm{T}}\right] \), \( {\mathbf{Q}}_2=\mathrm{E}\left[\Delta \mathbf{s}\Delta {\mathbf{s}}^{\mathrm{T}}\right] \), (23), and (25), we have

$$ \mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\odot \Delta {\boldsymbol{\upvarphi}}_1\right]=\mathrm{vecd}\left[\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\Delta {{\boldsymbol{\upvarphi}}_1}^{\mathrm{T}}\right]\right], $$
(35)
$$ {\displaystyle \begin{array}{l}\boldsymbol{\upalpha} =\mathrm{E}\left[\Delta {\mathbf{U}}_1{\mathbf{B}}_2^{o-1}\left(\mathbf{I}-{\mathbf{G}}_2{\mathbf{H}}_2\right){\mathbf{B}}_2^o\Delta {\boldsymbol{\upvarphi}}_1\right]\\ {}=-2{\mathbf{G}}_1^{\mathrm{o}\mathrm{T}}{\mathbf{W}}_1\left({\boldsymbol{\Lambda}}_1{\mathbf{A}\mathbf{Q}}_1{\mathbf{X}}_3{\left(4,:\right)}^{\mathrm{T}}+{\boldsymbol{\Lambda}}_2{\mathbf{A}\mathbf{Q}}_1{\mathbf{X}}_3{\left(5,:\right)}^{\mathrm{T}}+\cdots +{\boldsymbol{\Lambda}}_N{\mathbf{A}\mathbf{Q}}_1{\mathbf{X}}_3{\left(3+N,:\right)}^{\mathrm{T}}\right)\\ {}-2{\mathbf{G}}_1^{\mathrm{o}\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_2\left[\begin{array}{c}\mathrm{tr}\left\{{\mathbf{X}}_5\left(1:3,1:3\right){\mathbf{Q}}_{s_1}\right\}\\ {}\mathrm{tr}\left\{{\mathbf{X}}_5\left(1:3,4:6\right){\mathbf{Q}}_{s_2}\right\}\\ {}\vdots \\ {}\mathrm{tr}\left\{{\mathbf{X}}_5\left(1:3,3M-2:3M\right){\mathbf{Q}}_{s_M}\right\}\end{array}\right]-2\left[\begin{array}{c}\begin{array}{l}{\mathbf{Q}}_{s_1}{\mathbf{X}}_6{\left(1,1:3\right)}^{\mathrm{T}}+{\mathbf{Q}}_{s_2}{\mathbf{X}}_6{\left(2,4:6\right)}^{\mathrm{T}}\\ {}+\cdots +{\mathbf{Q}}_{s_M}{\mathbf{X}}_6{\left(M,3M-2:3M\right)}^{\mathrm{T}}\end{array}\\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_1{\mathbf{W}}_1{\mathbf{G}}_1^{\mathrm{o}}{\mathbf{X}}_3{\mathbf{Q}}_1\right\}\\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_2{\mathbf{W}}_1{\mathbf{G}}_1^{\mathrm{o}}{\mathbf{X}}_3{\mathbf{Q}}_1\right\}\\ {}\vdots \\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_N{\mathbf{W}}_1{\mathbf{G}}_1^{\mathrm{o}}{\mathbf{X}}_3{\mathbf{Q}}_1\right\}\end{array}\right],\end{array}} $$
(36)
$$ \boldsymbol{\upbeta} =\mathrm{E}\left[\Delta {\mathbf{B}}_2{\mathbf{W}}_2^o\left(\mathbf{I}-{\mathbf{G}}_2{\mathbf{H}}_2\right){\mathbf{B}}_2^o\Delta {\boldsymbol{\upvarphi}}_1\right]=2\mathrm{vecd}\left[{\mathbf{W}}_2^o\left(\mathbf{I}-{\mathbf{G}}_2{\mathbf{H}}_2\right){\mathbf{B}}_2^o\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\Delta {{\boldsymbol{\upvarphi}}_1}^{\mathrm{T}}\right]\right], $$
(37)
$$ \boldsymbol{\upgamma} =\mathrm{E}\left[\Delta {\mathbf{B}}_2{\mathbf{B}}_2^{o-1}\left(\mathbf{I}-{\mathbf{G}}_2{\mathbf{H}}_2\right){\mathbf{B}}_2^o\Delta {\boldsymbol{\upvarphi}}_1\right]=2\mathrm{vecd}\left[{\mathbf{B}}_2^{o-1}\left(\mathbf{I}-{\mathbf{G}}_2{\mathbf{H}}_2\right){\mathbf{B}}_2^o\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\Delta {{\boldsymbol{\upvarphi}}_1}^{\mathrm{T}}\right]\right], $$
(38)

where \( \mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\Delta {{\boldsymbol{\upvarphi}}_1}^{\mathrm{T}}\right]\approx {\left({\mathbf{G}}_1^{\mathrm{oT}}{\mathbf{W}}_1{\mathbf{G}}_1^o\right)}^{-1}={\mathbf{U}}_1^{o-1} \), \( {\mathbf{X}}_3={\mathbf{B}}_2^{o-1}\left(\mathbf{I}-{\mathbf{G}}_2{\mathbf{H}}_2\right){\mathbf{B}}_2^o{\mathbf{H}}_1{\mathbf{B}}_1\mathbf{A} \), \( {\mathbf{X}}_4={\mathbf{B}}_2^{o-1}\left(\mathbf{I}-{\mathbf{G}}_2{\mathbf{H}}_2\right){\mathbf{B}}_2^o{\mathbf{H}}_1{\tilde{\mathbf{D}}}_1 \), \( {\mathbf{X}}_5={\mathbf{X}}_4\left(1:3,:\right) \), and \( {\mathbf{X}}_6={\mathbf{A}}_2^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{G}}_1^o{\mathbf{X}}_4 \).

Hence, taking the expectation of (34) yields

$$ \mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_2\right]={\mathbf{H}}_2\left({\mathbf{B}}_2^o\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\right]+\mathrm{vecd}\left[\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\Delta {{\boldsymbol{\upvarphi}}_1}^{\mathrm{T}}\right]\right]\right)+{\mathbf{U}}_2^{o-1}{\mathbf{G}}_2^{\mathrm{T}}\left({\mathbf{B}}_2^{o-\mathrm{T}}\boldsymbol{\upalpha} -{\mathbf{B}}_2^{o-\mathrm{T}}\boldsymbol{\upbeta} -{\mathbf{W}}_2^o\boldsymbol{\upgamma} \right). $$
(39)

In summary, the first component \( {\mathbf{H}}_2{\mathbf{B}}_2^o\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\right] \) comes from the bias in the stage-1 solution. The second component \( {\mathbf{H}}_2\mathrm{vecd}\left[\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\Delta {{\boldsymbol{\upvarphi}}_1}^{\mathrm{T}}\right]\right] \) arises from the squaring of φ1 in h2, while the remaining components come from the measurement noise and receiver position errors in G1 and from Δφ1 in B2.

3.3 Bias analysis for u

Recalling that φ2 = u ⊙ u, we express \( {\boldsymbol{\upvarphi}}_2={\boldsymbol{\upvarphi}}_2^o+\Delta {\boldsymbol{\upvarphi}}_2 \) and \( \mathbf{u}={\mathbf{u}}^o+\Delta \mathbf{u} \). The error in u can then be expressed as

$$ \Delta \mathbf{u}={\mathbf{B}}_3^{o-1}\left(\Delta {\boldsymbol{\upvarphi}}_2-\Delta \mathbf{u}\odot \Delta \mathbf{u}\right), $$
(40)

where \( {\mathbf{B}}_3^o=2\operatorname{diag}\left\{{\mathbf{u}}^o\right\} \). Taking the expectation of (40) yields the bias in u as

$$ \mathrm{E}\left[\Delta \mathbf{u}\right]={\mathbf{B}}_3^{o-1}\left(\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_2\right]-\mathrm{vecd}\left[\mathrm{E}\left[\Delta \mathbf{u}\Delta {\mathbf{u}}^{\mathrm{T}}\right]\right]\right), $$
(41)

where \( \mathrm{E}\left[\Delta \mathbf{u}\Delta {\mathbf{u}}^{\mathrm{T}}\right]\approx {\mathbf{B}}_3^{o-1}\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_2\Delta {\boldsymbol{\upvarphi}}_2^{\mathrm{T}}\right]{\mathbf{B}}_3^{o-1}={\mathbf{B}}_3^{o-1}{\left({\mathbf{G}}_2^{\mathrm{T}}{\mathbf{W}}_2^o{\mathbf{G}}_2\right)}^{-1}{\mathbf{B}}_3^{o-1} \). Substituting (26) and (39) into (41) gives the bias of the source position solution. Note that the first component \( {\mathbf{B}}_3^{o-1}\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_2\right] \) comes from the bias in the stage-2 solution, and the second component \( -{\mathbf{B}}_3^{o-1}\mathrm{vecd}\left[\mathrm{E}\left[\Delta \mathbf{u}\Delta {\mathbf{u}}^{\mathrm{T}}\right]\right] \) arises from the square-root operation in (16).
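Once the stage-2 bias (39) and covariance have been evaluated, (41) is straightforward to compute. The following short sketch (ours) assumes those two quantities are already available as arrays.

```python
import numpy as np

def source_bias(u_true, bias_phi2, cov_phi2):
    """Evaluate the theoretical source-position bias (41).

    bias_phi2 : E[dphi2] from (39), shape (3,)
    cov_phi2  : E[dphi2 dphi2^T] = (G2^T W2o G2)^{-1}, shape (3, 3)"""
    B3_inv = np.linalg.inv(2.0 * np.diag(u_true))
    cov_u = B3_inv @ cov_phi2 @ B3_inv        # E[du du^T], as given below (41)
    return B3_inv @ (bias_phi2 - np.diag(cov_u))
```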

4 Proposed bias-reduced method

The proposed bias-reduced technique has two stages as follows.

4.1 Stage 1

According to the analysis in Section 3.1, the bias in the stage-1 solution φ1 mainly comes from the noise correlation between the regressor G1 and the regressand h1 in the WLS formulation. The purpose of this stage is to find a better φ1 with small bias. The main idea is to introduce an augmented matrix and impose a quadratic constraint so that the expectation of the cost function attains its minimum when the unknown equals the true value.

From [36], the first-stage error equation is \( {\boldsymbol{\upvarepsilon}}_1={\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}+{\mathbf{D}}_1\Delta \mathbf{s}={\mathbf{h}}_1-{\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1^o, \) where \( {\boldsymbol{\upvarphi}}_1^o \) is the true value of φ1. The cost function of this WLS problem is

$$ J={\left({\mathbf{h}}_1-{\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1\right)}^{\mathrm{T}}{\mathbf{W}}_1\left({\mathbf{h}}_1-{\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1\right). $$
(42)

Defining an augmented matrix A1 = [−G1, h1] and augmented vector \( \mathbf{v}={\left[{\boldsymbol{\upvarphi}}_1^{\mathrm{T}},1\right]}^{\mathrm{T}} \), (42) can be rewritten as

$$ J={\mathbf{v}}^{\mathrm{T}}{\mathbf{A}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_1\mathbf{v}. $$
(43)

A1 contains measurement noise and receiver position errors, and can be decomposed as

$$ {\mathbf{A}}_1={\mathbf{A}}_1^o+\Delta {\mathbf{A}}_1. $$
(44)

According to the definition of A1, after subtracting the true value \( {\mathbf{A}}_1^o=\left[-{\mathbf{G}}_1^o,{\mathbf{h}}_1^o\right] \), and ignoring the second-order noise terms, we have

$$ \Delta {\mathbf{A}}_1=2\left[{\mathbf{A}}_2\Delta \tilde{\mathbf{s}},\boldsymbol{\Lambda} \left({\mathbf{I}}_N\otimes \left(\mathbf{A}\Delta \mathbf{r}\right)\right),{\tilde{\mathbf{B}}}_1\mathbf{A}\Delta \mathbf{r}+{\mathbf{C}}_1\Delta \mathbf{s}\right], $$
(45)

where A2, \( \Delta \tilde{\mathbf{s}} \), and Λ are defined below (22), and

$$ {\tilde{\mathbf{B}}}_1=\operatorname{diag}\left\{\mathbf{A}{\mathbf{r}}^o\right\}=\operatorname{diag}\left\{{r}_{2,1}^o,{r}_{3,1}^o,\cdots, {r}_{M_1,1}^o,{r}_{M_1+2,{M}_1+1}^o,\cdots, {r}_{M_2,{M}_1+1}^o,\cdots, {r}_{M_{N-1}+2,{M}_{N-1}+1}^o,\cdots, {r}_{M_N,{M}_{N-1}+1}^o\right\}, $$
(46)
$$ \left\{\begin{array}{l}{\mathbf{C}}_1=\mathrm{blkdiag}\left\{{\mathbf{C}}_{1,1},{\mathbf{C}}_{1,2},\cdots, {\mathbf{C}}_{1,N}\right\}\\ {}{\mathbf{C}}_{1,n}={\left[\begin{array}{ccccc}{\mathbf{s}}_{M_{n-1}+1}^{\mathrm{oT}}& -{\mathbf{s}}_{M_{n-1}+2}^{\mathrm{oT}}& \mathbf{0}& \cdots & \mathbf{0}\\ {}{\mathbf{s}}_{M_{n-1}+1}^{\mathrm{oT}}& \mathbf{0}& -{\mathbf{s}}_{M_{n-1}+3}^{\mathrm{oT}}& \cdots & \mathbf{0}\\ {}\vdots & \vdots & \vdots & \ddots & \vdots \\ {}{\mathbf{s}}_{M_{n-1}+1}^{\mathrm{oT}}& \mathbf{0}& \mathbf{0}& \cdots & -{\mathbf{s}}_{M_n}^{\mathrm{oT}}\end{array}\right]}_{\left({M}_n-{M}_{n-1}-1\right)\times \left(3\cdot \left({M}_n-{M}_{n-1}\right)\right)}\end{array}\right.. $$
(47)

Substituting (44) into (43) yields the cost function

$$ J={\mathbf{v}}^{\mathrm{T}}{\mathbf{A}}_1^{\mathrm{o}\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_1^{\mathrm{o}}\mathbf{v}+{\mathbf{v}}^{\mathrm{T}}\Delta {\mathbf{A}}_1^{\mathrm{T}}{\mathbf{W}}_1\Delta {\mathbf{A}}_1\mathbf{v}+2{\mathbf{v}}^{\mathrm{T}}\Delta {\mathbf{A}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_1^o\mathbf{v}. $$
(48)

Taking the expectation yields

$$ \mathrm{E}\left[J\right]={\mathbf{v}}^{\mathrm{T}}{\mathbf{A}}_1^{\mathrm{o}\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_1^{\mathrm{o}}\mathbf{v}+{\mathbf{v}}^{\mathrm{T}}\mathrm{E}\left[\Delta {\mathbf{A}}_1^{\mathrm{T}}{\mathbf{W}}_1\Delta {\mathbf{A}}_1\right]\mathbf{v}. $$
(49)

The third term in (48) vanishes under the expectation because ΔA1 has zero mean. When E[J] is minimized with respect to v, the second term on the right-hand side of (49) is the cause of bias, because the first term is zero at v = vo (\( {\mathbf{A}}_1^{\mathrm{o}}{\mathbf{v}}^o=\mathbf{0} \)). If we impose a constraint that keeps the second term constant, E[J] attains its minimum at v = vo. We thus find v by solving

$$ \min \kern0.5em {\mathbf{v}}^{\mathrm{T}}{\mathbf{A}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_1\mathbf{v}\kern0.75em \mathrm{s}.\mathrm{t}.\kern0.5em {\mathbf{v}}^{\mathrm{T}}\boldsymbol{\Omega} \mathbf{v}=k, $$
(50)

where \( \boldsymbol{\Omega} =\mathrm{E}\left[\Delta {\mathbf{A}}_1^{\mathrm{T}}{\mathbf{W}}_1\Delta {\mathbf{A}}_1\right] \), and the constant k can take any value. The constrained minimization problem (50) can be solved by the method of Lagrange multipliers. Using the Lagrange multiplier λ, we obtain the auxiliary cost function \( {\mathbf{v}}^{\mathrm{T}}{\mathbf{A}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_1\mathbf{v}+\lambda \left(k-{\mathbf{v}}^{\mathrm{T}}\boldsymbol{\Omega} \mathbf{v}\right) \). Taking the derivative with respect to v and setting it to zero, we have

$$ \left({\mathbf{A}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_1\right)\mathbf{v}=\lambda \boldsymbol{\Omega} \mathbf{v}. $$
(51)

Premultiplying both sides of (51) by \( {\mathbf{v}}^{\mathrm{T}} \) and using the equality constraint \( {\mathbf{v}}^{\mathrm{T}}\boldsymbol{\Omega} \mathbf{v}=k \), we obtain

$$ \lambda ={\mathbf{v}}^{\mathrm{T}}\left({\mathbf{A}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_1\right)\mathbf{v}/k. $$
(52)

Observe that (52) has the same form as the objective function in (50); hence, we only need to minimize λ. According to (51), λ is a generalized eigenvalue of the pair \( \left({\mathbf{A}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_1,\boldsymbol{\Omega} \right) \). The estimate v is therefore the generalized eigenvector corresponding to the minimum generalized eigenvalue of this pair. The stage-1 solution can be expressed as

$$ {\boldsymbol{\upvarphi}}_1=\mathbf{v}\left(1:3+N\right)/\mathbf{v}\left(4+N\right). $$
(53)
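In implementation terms, (50)–(53) reduce to a single generalized eigendecomposition. The following is a minimal scipy sketch (the function name is ours), assuming Ω is nonsingular:

```python
import numpy as np
from scipy.linalg import eig

def stage1_bias_reduced(A1, W1, Omega):
    """Solve (50)-(53): take the generalized eigenvector of the pair
    (A1^T W1 A1, Omega) with the smallest generalized eigenvalue and
    normalize its last entry to 1 to recover phi1."""
    w, V = eig(A1.T @ W1 @ A1, Omega)                     # eigenpairs of (51)
    vals = np.where(np.isfinite(w.real), w.real, np.inf)  # guard pathological pairs
    v = V[:, np.argmin(vals)].real                        # minimum-eigenvalue vector
    return v[:-1] / v[-1]                                 # (53)
```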

We now derive the formula for Ω. Substituting (45) into the definition of Ω yields

$$ \boldsymbol{\Omega} =\mathrm{E}\left[\Delta {\mathbf{A}}_1^{\mathrm{T}}{\mathbf{W}}_1\Delta {\mathbf{A}}_1\right]=4{\left[\begin{array}{ccc}{\boldsymbol{\Omega}}_{1,1}& {\mathbf{0}}_{3\times N}& {\boldsymbol{\Omega}}_{1,3}\\ {}{\mathbf{0}}_{N\times 3}& {\boldsymbol{\Omega}}_{2,2}& {\boldsymbol{\Omega}}_{2,3}\\ {}{\boldsymbol{\Omega}}_{3,1}& {\boldsymbol{\Omega}}_{3,2}& {\boldsymbol{\Omega}}_{3,3}\end{array}\right]}_{\left(N+4\right)\times \left(N+4\right)}, $$
(54)

where

$$ {\boldsymbol{\Omega}}_{1,1}=\mathrm{E}\left[\Delta {\tilde{\mathbf{s}}}^{\mathrm{T}}{\mathbf{A}}_2^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_2\Delta \tilde{\mathbf{s}}\right]={f}_{1,1}{\mathbf{Q}}_{s_1}+{f}_{2,2}{\mathbf{Q}}_{s_2}+\cdots +{f}_{M,M}{\mathbf{Q}}_{s_M}, $$
(55)
$$ {\boldsymbol{\Omega}}_{1,3}=\mathrm{E}\left[\Delta {\tilde{\mathbf{s}}}^{\mathrm{T}}{\mathbf{A}}_2^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{C}}_1\Delta \mathbf{s}\right]={\mathbf{Q}}_{s_1}{\mathbf{y}}_1^{\mathrm{T}}+{\mathbf{Q}}_{s_2}{\mathbf{y}}_2^{\mathrm{T}}+\cdots +{\mathbf{Q}}_{s_M}{\mathbf{y}}_M^{\mathrm{T}}, $$
(56)
$$ {\displaystyle \begin{array}{l}{\boldsymbol{\Omega}}_{2,2}=\mathrm{E}\left[{\left(\boldsymbol{\Lambda} \left({\mathbf{I}}_N\otimes \left(\mathbf{A}\Delta \mathbf{r}\right)\right)\right)}^{\mathrm{T}}{\mathbf{W}}_1\left(\boldsymbol{\Lambda} \left({\mathbf{I}}_N\otimes \left(\mathbf{A}\Delta \mathbf{r}\right)\right)\right)\right]\ \\ {}\kern1.75em ={\left[\begin{array}{cccc}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_1{\mathbf{W}}_1{\boldsymbol{\Lambda}}_1{\mathbf{A}\mathbf{Q}}_1\right\}& \mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_1{\mathbf{W}}_1{\boldsymbol{\Lambda}}_2{\mathbf{A}\mathbf{Q}}_1\right\}& \cdots & \mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_1{\mathbf{W}}_1{\boldsymbol{\Lambda}}_N{\mathbf{A}\mathbf{Q}}_1\right\}\\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_2{\mathbf{W}}_1{\boldsymbol{\Lambda}}_1{\mathbf{A}\mathbf{Q}}_1\right\}& \mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_2{\mathbf{W}}_1{\boldsymbol{\Lambda}}_2{\mathbf{A}\mathbf{Q}}_1\right\}& \cdots & \mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_2{\mathbf{W}}_1{\boldsymbol{\Lambda}}_N{\mathbf{A}\mathbf{Q}}_1\right\}\\ {}\vdots & \vdots & \ddots & \vdots \\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_N{\mathbf{W}}_1{\boldsymbol{\Lambda}}_1{\mathbf{A}\mathbf{Q}}_1\right\}& \mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_N{\mathbf{W}}_1{\boldsymbol{\Lambda}}_2{\mathbf{A}\mathbf{Q}}_1\right\}& \cdots & \mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_N{\mathbf{W}}_1{\boldsymbol{\Lambda}}_N{\mathbf{A}\mathbf{Q}}_1\right\}\end{array}\right]}_{N\times N},\end{array}} $$
(57)
$$ {\displaystyle \begin{array}{l}{\boldsymbol{\Omega}}_{2,3}=\mathrm{E}\left[{\left(\boldsymbol{\Lambda} \left({\mathbf{I}}_N\otimes \left(\mathbf{A}\Delta \mathbf{r}\right)\right)\right)}^{\mathrm{T}}{\mathbf{W}}_1{\tilde{\mathbf{B}}}_1\mathbf{A}\Delta \mathbf{r}\right]\\ {}\kern1.5em ={\left[\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_1{\mathbf{W}}_1{\tilde{\mathbf{B}}}_1{\mathbf{A}\mathbf{Q}}_1\right\},\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_2{\mathbf{W}}_1{\tilde{\mathbf{B}}}_1{\mathbf{A}\mathbf{Q}}_1\right\},\cdots, \mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_N{\mathbf{W}}_1{\tilde{\mathbf{B}}}_1{\mathbf{A}\mathbf{Q}}_1\right\}\right]}^{\mathrm{T}},\end{array}} $$
(58)
$$ {\boldsymbol{\Omega}}_{3,1}=\mathrm{E}\left[\Delta {\mathbf{s}}^{\mathrm{T}}{\mathbf{C}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_2\Delta \tilde{\mathbf{s}}\right]={\mathbf{z}}_1^{\mathrm{T}}{\mathbf{Q}}_{s_1}+{\mathbf{z}}_2^{\mathrm{T}}{\mathbf{Q}}_{s_2}+\cdots +{\mathbf{z}}_M^{\mathrm{T}}{\mathbf{Q}}_{s_M}, $$
(59)
$$ {\displaystyle \begin{array}{l}{\boldsymbol{\Omega}}_{3,2}=\mathrm{E}\left[\Delta {\mathbf{r}}^{\mathrm{T}}{\mathbf{A}}^{\mathrm{T}}{\tilde{\mathbf{B}}}_1{\mathbf{W}}_1\left(\boldsymbol{\Lambda} \left({\mathbf{I}}_N\otimes \left(\mathbf{A}\Delta \mathbf{r}\right)\right)\right)\right]\\ {}\kern1.5em =\left[\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\tilde{\mathbf{B}}}_1{\mathbf{W}}_1{\boldsymbol{\Lambda}}_1{\mathbf{A}\mathbf{Q}}_1\right\},\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\tilde{\mathbf{B}}}_1{\mathbf{W}}_1{\boldsymbol{\Lambda}}_2{\mathbf{A}\mathbf{Q}}_1\right\},\cdots, \mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\tilde{\mathbf{B}}}_1{\mathbf{W}}_1{\boldsymbol{\Lambda}}_N{\mathbf{A}\mathbf{Q}}_1\right\}\right],\end{array}} $$
(60)
$$ {\boldsymbol{\Omega}}_{3,3}=\mathrm{E}\left[\Delta {\mathbf{r}}^{\mathrm{T}}{\mathbf{A}}^{\mathrm{T}}{\tilde{\mathbf{B}}}_1{\mathbf{W}}_1{\tilde{\mathbf{B}}}_1\mathbf{A}\Delta \mathbf{r}+\Delta {\mathbf{s}}^{\mathrm{T}}{\mathbf{C}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{C}}_1\Delta \mathbf{s}\right]=\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\tilde{\mathbf{B}}}_1{\mathbf{W}}_1{\tilde{\mathbf{B}}}_1{\mathbf{A}\mathbf{Q}}_1\right\}+\mathrm{tr}\left\{{\mathbf{C}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{C}}_1{\mathbf{Q}}_2\right\}, $$
(61)

where \( \mathbf{F}={\mathbf{A}}_2^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_2 \), \( {f}_{i,j} \) is the ijth element of F, \( \mathbf{Y}={\mathbf{A}}_2^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{C}}_1 \), \( {\mathbf{y}}_i=\mathbf{Y}\left(i,3i-2:3i\right) \), \( \mathbf{Z}={\mathbf{C}}_1^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_2 \), and \( {\mathbf{z}}_i=\mathbf{Z}\left(3i-2:3i,i\right) \).
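To illustrate how these trace expressions translate into code, the hypothetical helper below (ours) assembles the N × N block Ω2,2 of (57); the other submatrices follow the same pattern.

```python
import numpy as np

def omega_22(A, Lambdas, W1, Q1):
    """Entry (i, j) of Omega_{2,2} in (57) is tr{A^T Lambda_i W1 Lambda_j A Q1};
    Lambdas is the list [Lambda_1, ..., Lambda_N] defined below (22)."""
    N = len(Lambdas)
    O22 = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            O22[i, j] = np.trace(A.T @ Lambdas[i] @ W1 @ Lambdas[j] @ A @ Q1)
    return O22
```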

Remark 2: Both submatrices Ω2, 3 in (58) and Ω3, 2 in (60) depend on \( {\tilde{\mathbf{B}}}_1 \), which is unknown. To facilitate implementation, the true values in \( {\tilde{\mathbf{B}}}_1 \) are replaced by the measurements, and the performance loss due to this approximation is negligible.

Remark 3: The weight matrix W1 is also approximated through the procedure described below (11). The loss due to this approximation is negligible when the source is far from each receiver. The proposed bias-reduced method is thus more suitable for distant source localization.

4.2 Stage 2

The performance analysis in Subsections 3.2 and 3.3 reveals that some nonlinear operations, including the squaring and square root operations in stage 2 of the original method, increase the estimation bias. To reduce the use of these nonlinear operations, a new version of stage 2 is developed in this subsection. The main idea of this stage is to estimate the estimation error of the stage-1 solution \( \widehat{\mathbf{u}}={\boldsymbol{\upvarphi}}_1\left(1:3\right) \) and then correct \( \widehat{\mathbf{u}} \) using this error estimate.

For the N nuisance variables, we expand them around \( \widehat{\mathbf{u}} \), retaining terms up to linear order in \( \Delta \widehat{\mathbf{u}} \):

$$ {d}_{M_{n-1}+1}^o=\left\Vert {\mathbf{u}}^o-{\mathbf{s}}_{M_{n-1}+1}\right\Vert \approx \left\Vert \widehat{\mathbf{u}}-{\mathbf{s}}_{M_{n-1}+1}\right\Vert -{\tilde{\boldsymbol{\uprho}}}_{M_{n-1}+1}^{\mathrm{T}}\Delta \widehat{\mathbf{u}},\kern0.5em n=1,2,\cdots, N, $$
(62)

where \( {\tilde{\boldsymbol{\uprho}}}_{M_{n-1}+1}=\left(\widehat{\mathbf{u}}-{\mathbf{s}}_{M_{n-1}+1}\right)/\left\Vert \widehat{\mathbf{u}}-{\mathbf{s}}_{M_{n-1}+1}\right\Vert \) and \( \widehat{\mathbf{u}}={\mathbf{u}}^o+\Delta \widehat{\mathbf{u}} \). Let \( {\widehat{d}}_{M_{n-1}+1}={\boldsymbol{\upvarphi}}_1\left(3+n\right),\kern0.5em n=1,2,\cdots, N \). Substituting (62) into \( {\widehat{d}}_{M_{n-1}+1}={d}_{M_{n-1}+1}^o+\Delta {\widehat{d}}_{M_{n-1}+1} \) yields

$$ {\widehat{d}}_{M_{n-1}+1}=\left\Vert \widehat{\mathbf{u}}-{\mathbf{s}}_{M_{n-1}+1}\right\Vert -{\tilde{\boldsymbol{\uprho}}}_{M_{n-1}+1}^{\mathrm{T}}\Delta \widehat{\mathbf{u}}+\Delta {\widehat{d}}_{M_{n-1}+1}\Rightarrow \Delta {\widehat{d}}_{M_{n-1}+1}={\widehat{d}}_{M_{n-1}+1}-\left\Vert \widehat{\mathbf{u}}-{\mathbf{s}}_{M_{n-1}+1}\right\Vert +{\tilde{\boldsymbol{\uprho}}}_{M_{n-1}+1}^{\mathrm{T}}\Delta \widehat{\mathbf{u}},\kern0.5em n=1,2,\cdots, N. $$
(63)

Similar to the analysis in [40], \( \Delta {\boldsymbol{\upvarphi}}_1={\boldsymbol{\upvarphi}}_1-{\boldsymbol{\upvarphi}}_1^o \) is approximately zero-mean. Following Sorenson’s method [47], we have

$$ {\mathbf{0}}_{3\times 1}=\Delta \widehat{\mathbf{u}}-\Delta \widehat{\mathbf{u}}. $$
(64)

Combining (64) and the second equation in (63) gives

$$ \left[\begin{array}{c}-\Delta \widehat{\mathbf{u}}\\ {}\Delta {\widehat{d}}_1\\ {}\Delta {\widehat{d}}_{M_1+1}\\ {}\vdots \\ {}\Delta {\widehat{d}}_{M_{N-1}+1}\end{array}\right]=\left[\begin{array}{cc}-{\mathbf{I}}_3& \mathbf{0}\\ {}\mathbf{0}& {\mathbf{I}}_N\end{array}\right]\left[\begin{array}{c}\Delta \widehat{\mathbf{u}}\\ {}\Delta {\widehat{d}}_1\\ {}\Delta {\widehat{d}}_{M_1+1}\\ {}\vdots \\ {}\Delta {\widehat{d}}_{M_{N-1}+1}\end{array}\right]={\tilde{\mathbf{B}}}_2\Delta {\boldsymbol{\upvarphi}}_1=\left[\begin{array}{c}{\mathbf{0}}_{3\times 1}\\ {}{\widehat{d}}_1-\left\Vert \widehat{\mathbf{u}}-{\mathbf{s}}_1\right\Vert \\ {}{\widehat{d}}_{M_1+1}-\left\Vert \widehat{\mathbf{u}}-{\mathbf{s}}_{M_1+1}\right\Vert \\ {}\vdots \\ {}{\widehat{d}}_{M_{N-1}+1}-\left\Vert \widehat{\mathbf{u}}-{\mathbf{s}}_{M_{N-1}+1}\right\Vert \end{array}\right]-\left[\begin{array}{c}{\mathbf{I}}_3\\ {}-{\tilde{\boldsymbol{\uprho}}}_1^{\mathrm{T}}\\ {}-{\tilde{\boldsymbol{\uprho}}}_{M_1+1}^{\mathrm{T}}\\ {}\vdots \\ {}-{\tilde{\boldsymbol{\uprho}}}_{M_{N-1}+1}^{\mathrm{T}}\end{array}\right]\Delta \widehat{\mathbf{u}}={\tilde{\mathbf{h}}}_2-{\tilde{\mathbf{G}}}_2\Delta \widehat{\mathbf{u}}, $$
(65)

where

$$ \Delta {\boldsymbol{\upvarphi}}_1={\left[\begin{array}{c}\Delta \widehat{\mathbf{u}}\\ {}\Delta {\widehat{d}}_1\\ {}\Delta {\widehat{d}}_{M_1+1}\\ {}\vdots \\ {}\Delta {\widehat{d}}_{M_{N-1}+1}\end{array}\right]}_{\left(N+3\right)\times 1},{\tilde{\mathbf{B}}}_2=\left[\begin{array}{cc}-{\mathbf{I}}_3& \mathbf{0}\\ {}\mathbf{0}& {\mathbf{I}}_N\end{array}\right],{\tilde{\mathbf{h}}}_2={\left[\begin{array}{c}{\mathbf{0}}_{3\times 1}\\ {}{\widehat{d}}_1-\left\Vert \widehat{\mathbf{u}}-{\mathbf{s}}_1\right\Vert \\ {}{\widehat{d}}_{M_1+1}-\left\Vert \widehat{\mathbf{u}}-{\mathbf{s}}_{M_1+1}\right\Vert \\ {}\vdots \\ {}{\widehat{d}}_{M_{N-1}+1}-\left\Vert \widehat{\mathbf{u}}-{\mathbf{s}}_{M_{N-1}+1}\right\Vert \end{array}\right]}_{\left(N+3\right)\times 1},{\tilde{\mathbf{G}}}_2={\left[\begin{array}{c}{\mathbf{I}}_3\\ {}-{\tilde{\boldsymbol{\uprho}}}_1^{\mathrm{T}}\\ {}-{\tilde{\boldsymbol{\uprho}}}_{M_1+1}^{\mathrm{T}}\\ {}\vdots \\ {}-{\tilde{\boldsymbol{\uprho}}}_{M_{N-1}+1}^{\mathrm{T}}\end{array}\right]}_{\left(N+3\right)\times 3}. $$
(66)

It is worth emphasizing that Δφ1 represents the estimation error of the stage-1 solution.

Then, using the WLS formulation, the desired estimate of \( \Delta \widehat{\mathbf{u}} \) can be obtained as

$$ {\tilde{\boldsymbol{\upvarphi}}}_2={\left({\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{W}}}_2{\tilde{\mathbf{G}}}_2\right)}^{-1}{\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{W}}}_2{\tilde{\mathbf{h}}}_2, $$
(67)

where \( {\tilde{\mathbf{W}}}_2={\tilde{\mathbf{B}}}_2^{-\mathrm{T}}{\left(\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\Delta {{\boldsymbol{\upvarphi}}_1}^{\mathrm{T}}\right]\right)}^{-1}{\tilde{\mathbf{B}}}_2^{-1}={\tilde{\mathbf{B}}}_2^{-1}\left({\mathbf{G}}_1^{\mathrm{oT}}{\mathbf{W}}_1{\mathbf{G}}_1^o\right){\tilde{\mathbf{B}}}_2^{-1} \) and the expression of E[Δφ1Δφ1T] is derived in (80).

The final solution can be obtained by subtracting \( {\tilde{\boldsymbol{\upvarphi}}}_2 \) from the stage-1 solution \( \widehat{\mathbf{u}} \):

$$ \overline{\mathbf{u}}=\widehat{\mathbf{u}}-{\tilde{\boldsymbol{\upvarphi}}}_2. $$
(68)
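To make the stage-2 procedure concrete, the following minimal NumPy sketch builds \( {\tilde{\mathbf{G}}}_2 \) and \( {\tilde{\mathbf{h}}}_2 \) from (66), solves the WLS problem (67), and applies the correction (68). All inputs (the stage-1 estimate, the range estimates, the reference receiver positions, and the weight) are illustrative placeholders, not values from the paper's simulations.

```python
import numpy as np

def stage2_correction(u_hat, d_hat, ref_positions, W2):
    """u_hat: (3,) stage-1 source estimate; d_hat: (N,) stage-1 range estimates
    to the N reference receivers; ref_positions: (N, 3); W2: (N+3, N+3) weight."""
    diffs = u_hat[None, :] - ref_positions              # u_hat - s_{M_{n-1}+1}
    ranges = np.linalg.norm(diffs, axis=1)
    rho = diffs / ranges[:, None]                       # unit vectors from (62)
    G2 = np.vstack([np.eye(3), -rho])                   # G2 tilde of (66)
    h2 = np.concatenate([np.zeros(3), d_hat - ranges])  # h2 tilde of (66)
    dphi2 = np.linalg.solve(G2.T @ W2 @ G2, G2.T @ W2 @ h2)  # WLS step (67)
    return u_hat - dphi2                                # correction (68)

# Toy usage with an identity weight standing in for the W2 tilde of (67).
u_hat = np.array([15000.0, 16000.0, 17000.0])
refs = np.array([[0.0, 0.0, 0.0], [2000.0, 500.0, 100.0], [-1500.0, 800.0, 300.0]])
d_hat = np.linalg.norm(u_hat - refs, axis=1) + np.array([5.0, -3.0, 4.0])
print(stage2_correction(u_hat, d_hat, refs, np.eye(3 + len(refs))))
```

In practice, \( {\tilde{\mathbf{W}}}_2 \) would be computed from the stage-1 covariance as described below (67); the identity weight above is used only to keep the sketch self-contained.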

The proposed bias-reduced method using TDOAs in the presence of receiver position errors and synchronization clock bias is summarized in Algorithm 1.

Algorithm 1 The proposed bias-reduced method

Remark 4: Although the proposed bias-reduced method directly improves only the source position solution, it still reduces the bias of the subsequent estimates, including the refined receiver positions and the synchronization clock bias vector.

4.3 Complexity analysis

This subsection investigates the computational complexity of the proposed bias-reduced method in terms of the number of multiplications. The numerical complexity is summarized in Table 2.

Table 2 Complexity of proposed method

We next compare the computational complexity of the proposed bias-reduced method with that of the original TSWLS method [36]. Through analysis, the total computational complexity of the original TSWLS method is O((M − N)³) + O((N + l)³) + O(l³) + (M − N)M²l² + (M − N)²Ml + 2(M − N)²(M − 1) + (M − N)(M − 1)² + (M − N)³ + 3(N + l)(M − N)² + 3(N + l)²(M − N) + (N + l)(M − N) + 2(N + l)³ + 2l²(N + l) + 2l(N + l)² + l(N + l) + 2Nl² + Ml + M + l. The dominant complexities of the proposed method and the original TSWLS algorithm are respectively O((M − N)³) + O((N + l + 1)³) + O(l³) and O((M − N)³) + O((N + l)³) + O(l³). By comparison, we find that the computational complexity of the proposed bias-reduced method is comparable to (slightly larger than) that of the original TSWLS algorithm.

5 Performance analysis of the proposed bias-reduced method

This section analyzes the theoretical performance of the proposed bias-reduced method. The bias and covariance matrix of the solution are derived according to second-order error analysis. The same two assumptions described in Section 3 are made.

5.1 Stage 1

We denote the solution of (50) as v. The stage-1 solution of the proposed method is φ1 = v(1 : 3 + N)/v(4 + N). The equation error A1v can be expressed as

$$ {\mathbf{A}}_1\mathbf{v}=\left[-{\mathbf{G}}_1,{\mathbf{h}}_1\right]\cdot \left[\begin{array}{c}{\boldsymbol{\upvarphi}}_1\\ {}1\end{array}\right]\cdot \mathbf{v}\left(4+N\right)=\left(-{\mathbf{G}}_1^o{\boldsymbol{\upvarphi}}_1+{\mathbf{h}}_1-\Delta {\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1\right)\cdot \mathbf{v}\left(4+N\right). $$
(69)

Letting \( {\overset{\smile }{\mathbf{A}}}_1=\left[-{\mathbf{G}}_1^o,{\mathbf{h}}_1-\Delta {\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1\right] \), we have \( {\overset{\smile }{\mathbf{A}}}_1\mathbf{v}={\mathbf{A}}_1\mathbf{v} \). The optimization problem (50) is therefore equivalent to

$$ \min \kern0.5em {\mathbf{v}}^{\mathrm{T}}{\overset{\smile }{\mathbf{A}}}_1^{\mathrm{T}}{\mathbf{W}}_1{\overset{\smile }{\mathbf{A}}}_1\mathbf{v}\kern0.75em \mathrm{s}.\mathrm{t}.\kern0.5em {\mathbf{v}}^{\mathrm{T}}\overset{\smile }{\boldsymbol{\Omega}}\mathbf{v}=k, $$
(70)

where \( \overset{\smile }{\boldsymbol{\Omega}}=\mathrm{E}\left[\Delta {\overset{\smile }{\mathbf{A}}}_1^{\mathrm{T}}{\mathbf{W}}_1\Delta {\overset{\smile }{\mathbf{A}}}_1\right] \), \( \Delta {\overset{\smile }{\mathbf{A}}}_1={\overset{\smile }{\mathbf{A}}}_1-{\overset{\smile }{\mathbf{A}}}_1^o \) and \( {\overset{\smile }{\mathbf{A}}}_1^o=\left[-{\mathbf{G}}_1^o,{\mathbf{h}}_1^o\right] \). Similar to the solution of problem (50), the solution of (70) satisfies

$$ \left({\overset{\smile }{\mathbf{A}}}_1^{\mathrm{T}}{\mathbf{W}}_1{\overset{\smile }{\mathbf{A}}}_1\right)\mathbf{v}=\lambda \overset{\smile }{\boldsymbol{\Omega}}\mathbf{v}. $$
(71)
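As an aside, constrained quadratic problems of the form (70) are typically solved by taking the generalized eigenvector in (71) associated with the smallest generalized eigenvalue, since the cost equals λk at any stationary point. A hedged SciPy sketch of this generic recipe follows; the matrices are random placeholders, and note that the paper's \( \overset{\smile }{\boldsymbol{\Omega}} \) is rank-deficient, which a practical implementation must handle separately.

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n = 5
M_mat = rng.standard_normal((n, n)); M_mat = M_mat.T @ M_mat                   # stands in for A^T W1 A
S_mat = rng.standard_normal((n, n)); S_mat = S_mat.T @ S_mat + n * np.eye(n)   # SPD stand-in for Omega

vals, vecs = eig(M_mat, S_mat)        # generalized eigenpairs M v = lambda S v
idx = np.argmin(vals.real)            # v^T M v = lambda * k, so the smallest lambda minimizes the cost
v = vecs[:, idx].real
v /= np.sqrt(v @ S_mat @ v)           # rescale so the constraint v^T S v = k holds (k = 1 here)
print(v @ M_mat @ v, vals.real[idx])  # these agree up to numerical error
```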

According to the definition of h1 in (8), we have \( \Delta {\mathbf{h}}_1={\mathbf{h}}_1-{\mathbf{h}}_1^o={\tilde{\mathbf{B}}}_1\mathbf{A}\Delta \mathbf{r}+{\mathbf{C}}_1\Delta \mathbf{s}+\left(\mathbf{A}\Delta \mathbf{r}\right)\odot \left(\mathbf{A}\Delta \mathbf{r}\right)+\mathbf{E}\left(\Delta \mathbf{s}\odot \Delta \mathbf{s}\right) \),

where A, \( {\tilde{\mathbf{B}}}_1 \), C1, and E are respectively defined by (10), (46), (47), and (20).

We next derive the expressions for \( {\overset{\smile }{\mathbf{A}}}_1^{\mathrm{T}}{\mathbf{W}}_1{\overset{\smile }{\mathbf{A}}}_1 \) and \( \overset{\smile }{\boldsymbol{\Omega}} \) as

$$ {\displaystyle \begin{array}{l}{\overset{\smile }{\mathbf{A}}}_1^{\mathrm{T}}{\mathbf{W}}_1{\overset{\smile }{\mathbf{A}}}_1={\left[-{\mathbf{G}}_1^o,{\mathbf{h}}_1-\Delta {\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1\right]}^{\mathrm{T}}{\mathbf{W}}_1\left[-{\mathbf{G}}_1^o,{\mathbf{h}}_1-\Delta {\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1\right]\\ {}\kern3.25em =\left[\begin{array}{cc}{\mathbf{G}}_1^{o\mathrm{T}}{\mathbf{W}}_1{\mathbf{G}}_1^o& -{\mathbf{G}}_1^{o\mathrm{T}}{\mathbf{W}}_1\left({\mathbf{h}}_1-\Delta {\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1\right)\\ {}-{\left({\mathbf{h}}_1-\Delta {\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1\right)}^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{G}}_1^o& {\left({\mathbf{h}}_1-\Delta {\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1\right)}^{\mathrm{T}}{\mathbf{W}}_1\left({\mathbf{h}}_1-\Delta {\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1\right)\end{array}\right],\end{array}} $$
(72)
$$ \overset{\smile }{\boldsymbol{\Omega}}=\mathrm{E}\left[\Delta {\overset{\smile }{\mathbf{A}}}_1^{\mathrm{T}}{\mathbf{W}}_1\Delta {\overset{\smile }{\mathbf{A}}}_1\right]=\left[\begin{array}{cc}{\mathbf{0}}_{\left(N+3\right)\times \left(N+3\right)}& {\mathbf{0}}_{\left(N+3\right)\times 1}\\ {}{\mathbf{0}}_{1\times \left(N+3\right)}& {\left\{\ast \right\}}_{1\times 1}\end{array}\right]. $$
(73)

From (72) and (73), we find that \( {\overset{\smile }{\mathbf{A}}}_1^{\mathrm{T}}{\mathbf{W}}_1{\overset{\smile }{\mathbf{A}}}_1 \) and \( \overset{\smile }{\boldsymbol{\Omega}} \) have the same block partition. Substituting (72) and (73) into (71) yields \( \left[{\mathbf{G}}_1^{o\mathrm{T}}{\mathbf{W}}_1{\mathbf{G}}_1^o,-{\mathbf{G}}_1^{o\mathrm{T}}{\mathbf{W}}_1\left({\mathbf{h}}_1-\Delta {\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1\right)\right]\mathbf{v}=\mathbf{0} \). Dividing both sides of this equation by v(4 + N) yields

$$ {\mathbf{G}}_1^{o\mathrm{T}}{\mathbf{W}}_1{\mathbf{G}}_1^o{\boldsymbol{\upvarphi}}_1-{\mathbf{G}}_1^{o\mathrm{T}}{\mathbf{W}}_1\left({\mathbf{h}}_1-\Delta {\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1\right)=\mathbf{0}. $$
(74)

According to (19), h1 − ΔG1φ1 can be expressed as

$$ {\displaystyle \begin{array}{l}{\mathbf{h}}_1-\Delta {\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1={\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1^o+{\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}+{\tilde{\mathbf{D}}}_1\Delta \mathbf{s}+\left(\mathbf{A}\Delta \mathbf{r}\right)\odot \left(\mathbf{A}\Delta \mathbf{r}\right)+\mathbf{E}\left(\Delta \mathbf{s}\odot \Delta \mathbf{s}\right)-2\mathbf{T}\left(\Delta \mathbf{s}\right)\mathbf{A}\Delta \mathbf{r}-2\mathbf{R}\left(\Delta \mathbf{s}\right){\mathbf{Ar}}^o-\Delta {\mathbf{G}}_1{\boldsymbol{\upvarphi}}_1\ \\ {}\kern4.5em ={\mathbf{G}}_1^o{\boldsymbol{\upvarphi}}_1^o-\Delta {\mathbf{G}}_1\Delta {\boldsymbol{\upvarphi}}_1+{\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}+{\tilde{\mathbf{D}}}_1\Delta \mathbf{s}+\left(\mathbf{A}\Delta \mathbf{r}\right)\odot \left(\mathbf{A}\Delta \mathbf{r}\right)+\mathbf{E}\left(\Delta \mathbf{s}\odot \Delta \mathbf{s}\right)-2\mathbf{T}\left(\Delta \mathbf{s}\right)\mathbf{A}\Delta \mathbf{r}-2\mathbf{R}\left(\Delta \mathbf{s}\right){\mathbf{Ar}}^o,\end{array}} $$
(75)

where \( \Delta {\boldsymbol{\upvarphi}}_1={\boldsymbol{\upvarphi}}_1-{\boldsymbol{\upvarphi}}_1^o \) represents the error in φ1. Substituting (75) into (74) and expressing φ1 as \( {\boldsymbol{\upvarphi}}_1^o+\Delta {\boldsymbol{\upvarphi}}_1 \) yields

$$ {\mathbf{G}}_1^{o\mathrm{T}}{\mathbf{W}}_1{\mathbf{G}}_1^o\Delta {\boldsymbol{\upvarphi}}_1-{\mathbf{G}}_1^{o\mathrm{T}}{\mathbf{W}}_1\left(-\Delta {\mathbf{G}}_1\Delta {\boldsymbol{\upvarphi}}_1+{\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}+{\tilde{\mathbf{D}}}_1\Delta \mathbf{s}+\left(\mathbf{A}\Delta \mathbf{r}\right)\odot \left(\mathbf{A}\Delta \mathbf{r}\right)+\mathbf{E}\left(\Delta \mathbf{s}\odot \Delta \mathbf{s}\right)-2\mathbf{T}\left(\Delta \mathbf{s}\right)\mathbf{A}\Delta \mathbf{r}-2\mathbf{R}\left(\Delta \mathbf{s}\right){\mathbf{Ar}}^o\right)=\mathbf{0}. $$
(76)

With some algebraic manipulations, we have the equality relationship

$$ \left(\mathbf{I}+{\mathbf{H}}_1\Delta {\mathbf{G}}_1\right)\Delta {\boldsymbol{\upvarphi}}_1={\mathbf{H}}_1\left({\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}+{\tilde{\mathbf{D}}}_1\Delta \mathbf{s}+\left(\mathbf{A}\Delta \mathbf{r}\right)\odot \left(\mathbf{A}\Delta \mathbf{r}\right)+\mathbf{E}\left(\Delta \mathbf{s}\odot \Delta \mathbf{s}\right)-2\mathbf{T}\left(\Delta \mathbf{s}\right)\mathbf{A}\Delta \mathbf{r}-2\mathbf{R}\left(\Delta \mathbf{s}\right){\mathbf{Ar}}^o\right), $$
(77)

where H1 is defined below (25). When the noise level is small, Δφ1 can be expressed as

$$ {\displaystyle \begin{array}{l}\Delta {\boldsymbol{\upvarphi}}_1=\left(\mathbf{I}-{\mathbf{H}}_1\Delta {\mathbf{G}}_1\right){\mathbf{H}}_1\left({\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}+{\tilde{\mathbf{D}}}_1\Delta \mathbf{s}+\left(\mathbf{A}\Delta \mathbf{r}\right)\odot \left(\mathbf{A}\Delta \mathbf{r}\right)+\mathbf{E}\left(\Delta \mathbf{s}\odot \Delta \mathbf{s}\right)-2\mathbf{T}\left(\Delta \mathbf{s}\right)\mathbf{A}\Delta \mathbf{r}-2\mathbf{R}\left(\Delta \mathbf{s}\right){\mathbf{Ar}}^o\right)\\ {}\kern1.5em ={\mathbf{H}}_1\left({\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}+{\tilde{\mathbf{D}}}_1\Delta \mathbf{s}+\left(\mathbf{A}\Delta \mathbf{r}\right)\odot \left(\mathbf{A}\Delta \mathbf{r}\right)+\mathbf{E}\left(\Delta \mathbf{s}\odot \Delta \mathbf{s}\right)-2\mathbf{T}\left(\Delta \mathbf{s}\right)\mathbf{A}\Delta \mathbf{r}-2\mathbf{R}\left(\Delta \mathbf{s}\right){\mathbf{Ar}}^o\right)\\ {}\kern2.25em -{\mathbf{H}}_1\Delta {\mathbf{G}}_1{\mathbf{H}}_1{\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}-{\mathbf{H}}_1\Delta {\mathbf{G}}_1{\mathbf{H}}_1{\tilde{\mathbf{D}}}_1\Delta \mathbf{s}.\end{array}} $$
(78)

According to (A.1)–(A.7) in Appendix A, the bias E[Δφ1] can be obtained as

$$ {\displaystyle \begin{array}{l}\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\right]={\mathbf{H}}_1\mathrm{vecd}\left[{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}\right]+{\mathbf{H}}_1\mathbf{E}\mathrm{vecd}\left[{\mathbf{Q}}_2\right]-2{\mathbf{H}}_1\cdot \operatorname{diag}\left\{\mathrm{tr}\left\{{\mathbf{Q}}_{s_1}{\mathbf{P}}_1\right\},\mathrm{tr}\left\{{\mathbf{Q}}_{s_{M_1+1}}{\mathbf{P}}_{M_1+1}\right\},\cdots, \mathrm{tr}\left\{{\mathbf{Q}}_{s_{M_{N-1}+1}}{\mathbf{P}}_{M_{N-1}+1}\right\}\right\}\cdot {\mathbf{A}\mathbf{r}}^o\ \\ {}\kern3.5em +2{\mathbf{H}}_1\left({\boldsymbol{\Lambda}}_1{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}{\mathbf{B}}_1^{\mathrm{T}}{\mathbf{H}}_1{\left(4,:\right)}^{\mathrm{T}}+{\boldsymbol{\Lambda}}_2{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}{\mathbf{B}}_1^{\mathrm{T}}{\mathbf{H}}_1{\left(5,:\right)}^{\mathrm{T}}+\cdots +{\boldsymbol{\Lambda}}_N{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}{\mathbf{B}}_1^{\mathrm{T}}{\mathbf{H}}_1{\left(3+N,:\right)}^{\mathrm{T}}\right)\\ {}\kern3.5em +2{\mathbf{H}}_1{\mathbf{A}}_2\left[\begin{array}{c}\mathrm{tr}\left\{{\mathbf{X}}_2\left(1:3,1:3\right){\mathbf{Q}}_{s_1}\right\}\\ {}\mathrm{tr}\left\{{\mathbf{X}}_2\left(1:3,4:6\right){\mathbf{Q}}_{s_2}\right\}\\ {}\vdots \\ {}\mathrm{tr}\left\{{\mathbf{X}}_2\left(1:3,3M-2:3M\right){\mathbf{Q}}_{s_M}\right\}\end{array}\right].\end{array}} $$
(79)

Keeping only the first-order error terms, multiplying (78) by its transpose and taking the expectation yields the covariance matrix

$$ {\displaystyle \begin{array}{l}\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\Delta {\boldsymbol{\upvarphi}}_1^{\mathrm{T}}\right]\approx \mathrm{E}\left[\left({\mathbf{H}}_1\left({\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}+{\tilde{\mathbf{D}}}_1\Delta \mathbf{s}\right)\right){\left({\mathbf{H}}_1\left({\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}+{\tilde{\mathbf{D}}}_1\Delta \mathbf{s}\right)\right)}^{\mathrm{T}}\right]\ \\ {}\kern5em ={\mathbf{H}}_1\left({\mathbf{B}}_1{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}{\mathbf{B}}_1^{\mathrm{T}}+{\tilde{\mathbf{D}}}_1{\mathbf{Q}}_2{\tilde{\mathbf{D}}}_1^{\mathrm{T}}\right){\mathbf{H}}_1^{\mathrm{T}}={\left({\mathbf{G}}_1^{\mathrm{oT}}{\mathbf{W}}_1{\mathbf{G}}_1^o\right)}^{-1}.\end{array}} $$
(80)
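The last equality in (80) can be checked numerically: whenever the weight equals the inverse covariance of the equation-error vector, the WLS covariance \( {\mathbf{H}}_1\left(\cdot \right){\mathbf{H}}_1^{\mathrm{T}} \) collapses to \( {\left({\mathbf{G}}_1^{\mathrm{oT}}{\mathbf{W}}_1{\mathbf{G}}_1^o\right)}^{-1} \). A small sketch with random placeholder matrices illustrates this:

```python
import numpy as np

rng = np.random.default_rng(5)
m, p = 8, 4                                  # toy numbers of equations and parameters
G = rng.standard_normal((m, p))              # stands in for G1^o
Croot = rng.standard_normal((m, m))
C = Croot @ Croot.T + np.eye(m)              # SPD stand-in for B1*A*Q1*A^T*B1^T + D1*Q2*D1^T
W = np.linalg.inv(C)                         # W1 = C^{-1}
H = np.linalg.solve(G.T @ W @ G, G.T @ W)    # H1 = (G^T W G)^{-1} G^T W
print(np.allclose(H @ C @ H.T, np.linalg.inv(G.T @ W @ G)))  # True
```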

5.2 Stage 2

Subtracting the true value \( {\tilde{\boldsymbol{\upvarphi}}}_2^o \) from both sides of (67) yields

$$ \Delta {\tilde{\boldsymbol{\upvarphi}}}_2={\left({\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{W}}}_2{\tilde{\mathbf{G}}}_2\right)}^{-1}{\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{W}}}_2\left({\tilde{\mathbf{h}}}_2-{\tilde{\mathbf{G}}}_2{\tilde{\boldsymbol{\upvarphi}}}_2^o\right). $$
(81)

According to the definitions of \( {\tilde{\mathbf{h}}}_2 \) and \( {\tilde{\mathbf{G}}}_2 \) in (66), we obtain

$$ {\tilde{\mathbf{h}}}_2-{\tilde{\mathbf{G}}}_2{\tilde{\boldsymbol{\upvarphi}}}_2^o={\tilde{\mathbf{B}}}_2\Delta {\boldsymbol{\upvarphi}}_1. $$
(82)

In (81), \( {\tilde{\mathbf{W}}}_2 \) is a noisy version because the G1 it contains is corrupted by measurement noise and receiver position errors. Using (23) and the definition of \( {\tilde{\mathbf{W}}}_2 \) given below (67), we have

$$ {\tilde{\mathbf{W}}}_2={\tilde{\mathbf{B}}}_2^{-1}{\mathbf{U}}_1{\tilde{\mathbf{B}}}_2^{-1}={\tilde{\mathbf{B}}}_2^{-1}\left({\mathbf{U}}_1^o+\Delta {\mathbf{U}}_1\right){\tilde{\mathbf{B}}}_2^{-1}={\tilde{\mathbf{W}}}_2^o+\Delta {\tilde{\mathbf{W}}}_2, $$
(83)

where \( {\tilde{\mathbf{W}}}_2^o={\tilde{\mathbf{B}}}_2^{-1}{\mathbf{U}}_1^o{\tilde{\mathbf{B}}}_2^{-1} \) and \( \Delta {\tilde{\mathbf{W}}}_2={\tilde{\mathbf{B}}}_2^{-1}\Delta {\mathbf{U}}_1{\tilde{\mathbf{B}}}_2^{-1} \). Letting \( {\tilde{\mathbf{U}}}_2={\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{W}}}_2{\tilde{\mathbf{G}}}_2 \) yields

$$ {\tilde{\mathbf{U}}}_2={\tilde{\mathbf{U}}}_2^o+\Delta {\tilde{\mathbf{U}}}_2,\kern0.5em {\tilde{\mathbf{U}}}_2^o={\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{W}}}_2^o{\tilde{\mathbf{G}}}_2,\kern0.5em \mathrm{and}\kern0.5em \Delta {\tilde{\mathbf{U}}}_2={\tilde{\mathbf{G}}}_2^{\mathrm{T}}\Delta {\tilde{\mathbf{W}}}_2{\tilde{\mathbf{G}}}_2. $$
(84)

Adopting the Neumann expansion [46], we have the approximation \( {\tilde{\mathbf{U}}}_2^{-1}\approx {\tilde{\mathbf{U}}}_2^{o-1}-{\tilde{\mathbf{U}}}_2^{o-1}\Delta {\tilde{\mathbf{U}}}_2{\tilde{\mathbf{U}}}_2^{o-1} \). Substituting (82) and (83) into (81) and ignoring error terms above second order yields

$$ {\displaystyle \begin{array}{l}\Delta {\tilde{\boldsymbol{\upvarphi}}}_2=\left({\tilde{\mathbf{U}}}_2^{o-1}-{\tilde{\mathbf{U}}}_2^{o-1}\Delta {\tilde{\mathbf{U}}}_2{\tilde{\mathbf{U}}}_2^{o-1}\right){\tilde{\mathbf{G}}}_2^{\mathrm{T}}\left({\tilde{\mathbf{W}}}_2^o+\Delta {\tilde{\mathbf{W}}}_2\right){\tilde{\mathbf{B}}}_2\Delta {\boldsymbol{\upvarphi}}_1\\ {}\kern1.5em ={\tilde{\mathbf{H}}}_2{\tilde{\mathbf{B}}}_2\Delta {\boldsymbol{\upvarphi}}_1+{\tilde{\mathbf{U}}}_2^{o-1}{\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{B}}}_2^{-1}\Delta {\mathbf{U}}_1{\tilde{\mathbf{B}}}_2^{-1}\left(\mathbf{I}-{\tilde{\mathbf{G}}}_2{\tilde{\mathbf{H}}}_2\right){\tilde{\mathbf{B}}}_2\Delta {\boldsymbol{\upvarphi}}_1,\end{array}} $$
(85)

where \( {\tilde{\mathbf{H}}}_2={\left({\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{W}}}_2^o{\tilde{\mathbf{G}}}_2\right)}^{-1}{\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{W}}}_2^o={\tilde{\mathbf{U}}}_2^{o-1}{\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{W}}}_2^o \). With some algebraic manipulations, we have

$$ {\displaystyle \begin{array}{l}\tilde{\boldsymbol{\upalpha}}=\mathrm{E}\left[\Delta {\mathbf{U}}_1{\tilde{\mathbf{B}}}_2^{-1}\left(\mathbf{I}-{\tilde{\mathbf{G}}}_2{\tilde{\mathbf{H}}}_2\right){\tilde{\mathbf{B}}}_2\Delta {\boldsymbol{\upvarphi}}_1\right]\\ {}=-2{\mathbf{G}}_1^{\mathrm{o}\mathrm{T}}{\mathbf{W}}_1\left({\boldsymbol{\Lambda}}_1{\mathbf{A}\mathbf{Q}}_1{\tilde{\mathbf{X}}}_3{\left(4,:\right)}^{\mathrm{T}}+{\boldsymbol{\Lambda}}_2{\mathbf{A}\mathbf{Q}}_1{\tilde{\mathbf{X}}}_3{\left(5,:\right)}^{\mathrm{T}}+\cdots +{\boldsymbol{\Lambda}}_N{\mathbf{A}\mathbf{Q}}_1{\tilde{\mathbf{X}}}_3{\left(3+N,:\right)}^{\mathrm{T}}\right)\\ {}-2{\mathbf{G}}_1^{\mathrm{o}\mathrm{T}}{\mathbf{W}}_1{\mathbf{A}}_2\left[\begin{array}{c}\mathrm{tr}\left\{{\tilde{\mathbf{X}}}_5\left(1:3,1:3\right){\mathbf{Q}}_{s_1}\right\}\\ {}\mathrm{tr}\left\{{\tilde{\mathbf{X}}}_5\left(1:3,4:6\right){\mathbf{Q}}_{s_2}\right\}\\ {}\vdots \\ {}\mathrm{tr}\left\{{\tilde{\mathbf{X}}}_5\left(1:3,3M-2:3M\right){\mathbf{Q}}_{s_M}\right\}\end{array}\right]-2\left[\begin{array}{c}\begin{array}{l}{\mathbf{Q}}_{s_1}{\tilde{\mathbf{X}}}_6{\left(1,1:3\right)}^{\mathrm{T}}+{\mathbf{Q}}_{s_2}{\tilde{\mathbf{X}}}_6{\left(2,4:6\right)}^{\mathrm{T}}\\ {}+\cdots +{\mathbf{Q}}_{s_M}{\tilde{\mathbf{X}}}_6{\left(M,3M-2:3M\right)}^{\mathrm{T}}\end{array}\\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_1{\mathbf{W}}_1{\mathbf{G}}_1^{\mathrm{o}}{\tilde{\mathbf{X}}}_3{\mathbf{Q}}_1\right\}\\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_2{\mathbf{W}}_1{\mathbf{G}}_1^{\mathrm{o}}{\tilde{\mathbf{X}}}_3{\mathbf{Q}}_1\right\}\\ {}\vdots \\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_N{\mathbf{W}}_1{\mathbf{G}}_1^{\mathrm{o}}{\tilde{\mathbf{X}}}_3{\mathbf{Q}}_1\right\}\end{array}\right],\end{array}} $$
(86)

where \( {\tilde{\mathbf{X}}}_3={\tilde{\mathbf{B}}}_2^{-1}\left(\mathbf{I}-{\tilde{\mathbf{G}}}_2{\tilde{\mathbf{H}}}_2\right){\tilde{\mathbf{B}}}_2{\mathbf{H}}_1{\mathbf{B}}_1\mathbf{A},\kern0.5em {\tilde{\mathbf{X}}}_4={\tilde{\mathbf{B}}}_2^{-1}\left(\mathbf{I}-{\tilde{\mathbf{G}}}_2{\tilde{\mathbf{H}}}_2\right){\tilde{\mathbf{B}}}_2{\mathbf{H}}_1{\tilde{\mathbf{D}}}_1,\kern0.5em {\tilde{\mathbf{X}}}_5={\tilde{\mathbf{X}}}_4\left(1:3,:\right) \), and \( {\tilde{\mathbf{X}}}_6={\mathbf{A}}_2^{\mathrm{T}}{\mathbf{W}}_1{\mathbf{G}}_1^o{\tilde{\mathbf{X}}}_4 \). Taking the expectation of (85) and using (86) yields

$$ \mathrm{E}\left[\Delta {\tilde{\boldsymbol{\upvarphi}}}_2\right]={\tilde{\mathbf{H}}}_2{\tilde{\mathbf{B}}}_2\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\right]+{\tilde{\mathbf{U}}}_2^{o-1}{\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{B}}}_2^{-1}\tilde{\boldsymbol{\upalpha}}. $$
(87)
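The first-order Neumann approximation invoked before (85) is easy to sanity-check numerically; the following sketch (with a random, well-conditioned placeholder matrix) confirms that the residual of the approximation is second order in the perturbation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
U0 = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned base matrix
dU = 1e-3 * rng.standard_normal((n, n))            # small perturbation

U0inv = np.linalg.inv(U0)
approx = U0inv - U0inv @ dU @ U0inv                # first-order Neumann expansion
err = np.linalg.norm(np.linalg.inv(U0 + dU) - approx)
print(err)                                         # second order in ||dU||, i.e., tiny here
```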

Keeping only the first-order error terms, multiplying (85) by its transpose and taking the expectation yields the covariance matrix of \( {\tilde{\boldsymbol{\upvarphi}}}_2 \) as

$$ \mathrm{E}\left[\Delta {\tilde{\boldsymbol{\upvarphi}}}_2\Delta {\tilde{\boldsymbol{\upvarphi}}}_2^{\mathrm{T}}\right]\approx \mathrm{E}\left[\left({\tilde{\mathbf{H}}}_2{\tilde{\mathbf{B}}}_2\Delta {\boldsymbol{\upvarphi}}_1\right){\left({\tilde{\mathbf{H}}}_2{\tilde{\mathbf{B}}}_2\Delta {\boldsymbol{\upvarphi}}_1\right)}^{\mathrm{T}}\right]={\tilde{\mathbf{H}}}_2{\tilde{\mathbf{B}}}_2\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\Delta {\boldsymbol{\upvarphi}}_1^{\mathrm{T}}\right]{\tilde{\mathbf{B}}}_2{\tilde{\mathbf{H}}}_2^{\mathrm{T}}={\left({\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{W}}}_2^o{\tilde{\mathbf{G}}}_2\right)}^{-1}, $$
(88)

where \( \mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\Delta {\boldsymbol{\upvarphi}}_1^{\mathrm{T}}\right]={\left({\mathbf{G}}_1^{\mathrm{oT}}{\mathbf{W}}_1{\mathbf{G}}_1^o\right)}^{-1} \) is derived in (80).

The estimation error in the final solution \( \overline{\mathbf{u}} \) can be expressed as

$$ \Delta \overline{\mathbf{u}}=\overline{\mathbf{u}}-{\mathbf{u}}^o=\widehat{\mathbf{u}}-{\tilde{\boldsymbol{\upvarphi}}}_2-{\mathbf{u}}^o=\widehat{\mathbf{u}}-\left({\tilde{\boldsymbol{\upvarphi}}}_2^o+\Delta {\tilde{\boldsymbol{\upvarphi}}}_2\right)-{\mathbf{u}}^o=-\Delta {\tilde{\boldsymbol{\upvarphi}}}_2, $$
(89)

where the equation \( \widehat{\mathbf{u}}-{\tilde{\boldsymbol{\upvarphi}}}_2^o={\mathbf{u}}^o \) is used. The bias and covariance matrix of \( \overline{\mathbf{u}} \) are therefore

$$ \mathrm{E}\left[\Delta \overline{\mathbf{u}}\right]=-\mathrm{E}\left[\Delta {\tilde{\boldsymbol{\upvarphi}}}_2\right]=-\left({\tilde{\mathbf{H}}}_2{\tilde{\mathbf{B}}}_2\mathrm{E}\left[\Delta {\boldsymbol{\upvarphi}}_1\right]+{\tilde{\mathbf{U}}}_2^{o-1}{\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{B}}}_2^{-1}\tilde{\boldsymbol{\upalpha}}\right), $$
(90)
$$ \mathrm{E}\left[\Delta \overline{\mathbf{u}}\Delta {\overline{\mathbf{u}}}^{\mathrm{T}}\right]=\mathrm{E}\left[\Delta {\tilde{\boldsymbol{\upvarphi}}}_2\Delta {\tilde{\boldsymbol{\upvarphi}}}_2^{\mathrm{T}}\right]={\left({\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{W}}}_2^o{\tilde{\mathbf{G}}}_2\right)}^{-1}, $$
(91)

respectively. We now state the following equation that is proved in Appendix B:

$$ \mathrm{E}\left[\Delta \overline{\mathbf{u}}\Delta {\overline{\mathbf{u}}}^{\mathrm{T}}\right]={\left({\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{W}}}_2^o{\tilde{\mathbf{G}}}_2\right)}^{-1}={\mathbf{B}}_3^{o-1}{\left({\mathbf{G}}_2^{\mathrm{T}}{\mathbf{W}}_2^o{\mathbf{G}}_2\right)}^{-1}{\mathbf{B}}_3^{o-1}=\mathrm{E}\left[\Delta \mathbf{u}\Delta {\mathbf{u}}^{\mathrm{T}}\right]. $$
(92)

The following conclusions can be drawn from the above performance analysis.

  1. Comparing the bias (79) in stage 1 of the proposed method with that of the original algorithm (26), we find that the proposed method removes the bias terms (A.4) and (A.5) generated by the noise correlation between G1 and h1. According to the discussion following Eq. (26), the proposed method does not completely decorrelate the noise from G1 and h1, and retains the small bias terms (A.6) and (A.7). Nevertheless, the removed terms account for most of the bias caused by this noise correlation.

  2. The bias of the final source position estimate is obtained by combining (79) and (90). Compared with the bias of the original algorithm (41), the bias expression in (90) is more concise because our method avoids the use of nonlinear operations in stage 2, including squaring and square root operations.

  3. From (92), we see that the proposed method has the same covariance matrix as the original algorithm, which indicates that under moderate noise, the proposed bias-reduced method can achieve the CRLB like the original algorithm. The proposed method therefore reduces the bias of the solution without increasing values in the covariance matrix.

Remark 5: For the proposed bias-reduced method, the main idea in stage 1 is to introduce an augmented matrix and impose a quadratic constraint, so that the expectation of the cost function E[J] (the ideal bias) reaches its minimum value of zero at v = vo. Stage 2 designs an effective WLS estimator to further reduce the bias. The remaining bias terms are therefore negligible compared with the removed ones, as verified in the following simulations.

6 Simulation results and discussion

This section conducts several simulation experiments to verify the superiority of the proposed bias-reduced algorithm and the validity of the theoretical derivation. We conduct L = 10000 Monte Carlo (MC) experiments and evaluate the localization accuracy in terms of the RMSE, \( \mathrm{RMSE}\left(\mathbf{u}\right)=\sqrt{\frac{1}{L}{\sum \limits}_{l=1}^L{\left\Vert {\widehat{\mathbf{u}}}^{(l)}-{\mathbf{u}}^o\right\Vert}^2}, \) and the bias, \( \mathrm{bias}\left(\mathbf{u}\right)=\left\Vert {\sum \limits}_{l=1}^L\left({\widehat{\mathbf{u}}}^{(l)}-{\mathbf{u}}^o\right)\right\Vert /L \). Note that the RMSE and bias for the receiver position and clock bias vector are defined in the same manner.
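For reference, a minimal sketch of these Monte Carlo metrics (with a synthetic, hypothetical batch of estimates standing in for the algorithm output) is:

```python
import numpy as np

def mc_metrics(estimates, u_true):
    """estimates: (L, 3) Monte Carlo position estimates; u_true: (3,) truth."""
    errors = estimates - u_true[None, :]
    rmse = np.sqrt(np.mean(np.sum(errors**2, axis=1)))  # RMSE(u) as defined above
    bias = np.linalg.norm(errors.mean(axis=0))          # bias(u) as defined above
    return rmse, bias

rng = np.random.default_rng(3)
u_true = np.array([15000.0, 16000.0, 17000.0])
estimates = u_true + rng.normal(scale=10.0, size=(10000, 3)) + 2.0  # toy runs with a 2-m offset
print(mc_metrics(estimates, u_true))  # RMSE somewhat above sqrt(3)*10 m; bias near 2*sqrt(3) m
```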

6.1 Comparison of localization performance with the original TSWLS method

The experiment considers a three-dimensional localization scenario. We assume there are 17 available receivers having the positions listed in Table 3. The source is placed at uo = [15, 16, 17]T km. The localization geometry is shown in Fig. 2. Moreover, the receivers are separated into five groups according to differences in local clocks; group 1 comprises receivers 1–6, group 2 comprises receivers 7–10, group 3 comprises receivers 11–13, group 4 comprises receivers 14–15, and group 5 comprises receivers 16–17. The following simulation results show the RMSEs and biases for the proposed bias-reduced method (see Section 4) and the original TSWLS method [36]. To verify the theoretical analysis presented in the text, the following graphs also show the theoretical bias curves of the two algorithms (see Sections 3 and 5) and the corresponding CRLBs.

Table 3 Location of the receivers (units: m)
Fig. 2 The localization geometry

Assume that the RDOAs and receiver positions are contaminated by zero-mean Gaussian noise with covariance matrices \( {\mathbf{Q}}_1={\sigma}_{\mathrm{RDOA}}^2\tilde{\mathbf{I}} \) and \( {\mathbf{Q}}_2={\sigma}_{\mathrm{s}}^2\mathbf{I} \), where \( \tilde{\mathbf{I}} \) has diagonal elements equal to unity and off-diagonal elements of 0.5. σRDOA and σs respectively represent the noise level of the RDOAs and receiver positions. We first set the clock bias vector δ = [40 60 80 100]T m and the receiver position noise level σs = 2 m. Let σRDOA vary from 0.6 to 12 m in intervals of 0.6 m. The RMSEs of the source position, receiver position, and clock bias vector versus σRDOA are presented in Figs. 3, 4, and 5, while the biases of these estimates are shown in Figs. 6, 7, and 8. We next assume δ = [40 60 80 100]T m and σRDOA = 2 m, and let σs vary from 0.2 to 4 m in intervals of 0.2 m. The simulation results are depicted in Figs. 9, 10, 11, 12, 13, and 14.
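A small sketch of this noise model follows; the sizes are assumptions read off the scenario above (M = 17 receivers in N = 5 groups give M − N = 12 RDOAs), not a prescribed implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
n_rdoa = 12                       # M - N = 17 - 5 RDOA measurements (assumed)
n_pos = 3 * 17                    # stacked 3-D coordinates of the 17 receivers
sigma_rdoa, sigma_s = 2.0, 2.0    # noise levels in meters

I_tilde = 0.5 * (np.ones((n_rdoa, n_rdoa)) + np.eye(n_rdoa))  # unit diagonal, 0.5 off-diagonal
Q1 = sigma_rdoa**2 * I_tilde
Q2 = sigma_s**2 * np.eye(n_pos)

# one realization of correlated RDOA noise and i.i.d. receiver position noise
rdoa_noise = rng.multivariate_normal(np.zeros(n_rdoa), Q1)
pos_noise = rng.multivariate_normal(np.zeros(n_pos), Q2)
```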

Fig. 3 The RMSE of the source position as measurement noise varies

Fig. 4 The RMSE of the receiver position vector as measurement noise varies

Fig. 5 The RMSE of the clock bias vector as measurement noise varies

Fig. 6 The bias of the source position as measurement noise varies

Fig. 7 The bias of the receiver position vector as measurement noise varies

Fig. 8 The bias of the clock bias vector as measurement noise varies

Fig. 9 The RMSE of the source position as receiver position error varies

Fig. 10 The RMSE of the receiver position vector as receiver position error varies

Fig. 11 The RMSE of the clock bias vector as receiver position error varies

Fig. 12 The bias of the source position as receiver position error varies

Fig. 13 The bias of the receiver position vector as receiver position error varies

Fig. 14 The bias of the clock bias vector as receiver position error varies

We draw the following conclusions from Figs. 3–14.

  1. The RMSEs of all estimates (including the source position, receiver position, and clock bias vector) for both the proposed bias-reduced method and the original TSWLS algorithm can achieve the corresponding CRLBs at low noise levels, which verifies the theoretical derivation in Section 5.

  2. As the measurement noise or receiver position error increases, the original TSWLS algorithm gradually deviates from the CRLB after the thresholding effect occurs. However, the noise endurance threshold of the proposed method is always higher than that of the original TSWLS algorithm, which indicates that the proposed method is more robust to high noise levels than the original TSWLS algorithm.

  3. The bias of the source position solution for both the proposed bias-reduced method and the original TSWLS algorithm coincides with the corresponding theoretical value under moderate noise levels, which validates the theoretical derivation in Sections 3 and 5.

  4. Figures 6, 7, 8, 12, and 14 show that the proposed bias-reduced method can effectively reduce not only the bias of the source position but also the bias of the refined receiver positions and clock bias vector.

  5. As the measurement noise or receiver position error increases, the bias reduction of the proposed method becomes more pronounced relative to the original TSWLS algorithm. For example, Fig. 6 shows that the bias of the source position estimated using the proposed method is 288 m smaller than that of the original TSWLS solution when σs = 2 m and σRDOA = 6 m.

6.2 Study of localization performance for different source ranges

According to the analysis in Remark 3, the proposed bias-reduced method is more effective for a far-field source. In this subsection, we examine the localization performance of the proposed method for different source ranges from the receivers. We fix the measurement noise level and receiver position error level as σRDOA = 2 m and σs = 2 m, respectively. Let the source position be \( \overline{\mathbf{u}}=\mu {\mathbf{u}}^o \), where μ is a distance factor that varies from 0.1 to 2. The other simulation conditions are the same as described in Subsection 6.1. The simulation results for the RMSE and bias are shown in Figs. 15, 16, 17, 18, 19, and 20.

Fig. 15 The RMSE of the source position as the distance factor varies

Fig. 16 The RMSE of the receiver position vector as the distance factor varies

Fig. 17 The RMSE of the clock bias vector as the distance factor varies

Fig. 18 The bias of the source position as the distance factor varies

Fig. 19 The bias of the receiver position vector as the distance factor varies

Fig. 20 The bias of the clock bias vector as the distance factor varies

As expected, the bias improvement of the proposed method is not obvious when the source is close to the receivers or even inside the receiver array (i.e., when the distance factor μ is small). However, as the distance factor μ increases, the superiority of the proposed algorithm in bias reduction is gradually revealed. Moreover, the RMSE and bias of the source position solution for the proposed method coincide well with the corresponding CRLB and theoretical value, respectively, which again verifies the theoretical derivation in Section 5.

6.3 Study of localization performance for different source positions

To highlight the superiority of the proposed bias-reduced method, this subsection examines the localization performance for 30 randomly placed sources. Assume that the source is randomly placed in a 5 × 5 × 5 km cubic region around the point [15, 15, 15]T km. Fix the clock bias vector δ = [40 60 80 100]T m and the receiver position noise level σs = 2 m. Let σRDOA vary from 0.6 to 6 m in intervals of 0.6 m. The receiver positions and grouping are the same as described in Subsection 6.1. Figures 21, 22, and 23 show the boxplots [48] of the bias calculated using the 30 random source positions; for each source, we conduct L = 10000 MC experiments.
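A sketch of this random-source setup (a direct reading of the cube description above; the seed and printout are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
center = np.array([15e3, 15e3, 15e3])                        # [15, 15, 15]^T km in meters
sources = center + rng.uniform(-2.5e3, 2.5e3, size=(30, 3))  # 30 sources in a 5 km cube
print(sources.min(axis=0), sources.max(axis=0))              # all within +/- 2.5 km of the center
```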

Fig. 21 The bias of the source position for 30 randomly placed sources as measurement noise varies. (a) The original algorithm. (b) The proposed method

Fig. 22 The bias of the receiver position vector for 30 randomly placed sources as measurement noise varies. (a) The original algorithm. (b) The proposed method

Fig. 23 The bias of the clock bias vector for 30 randomly placed sources as measurement noise varies. (a) The original algorithm. (b) The proposed method

These simulation results again validate that the proposed method has a smaller bias than the original algorithm for all estimates including the source position, refined receiver positions, and clock bias vector. Moreover, this improvement in reducing bias does not depend on the localization geometry.

7 Conclusions

This paper proposes a bias-reduced version of the well-known TSWLS solution using TDOAs in the presence of receiver position errors and synchronization clock bias. The new technique has two stages. In stage 1, by introducing an augmented matrix and imposing a quadratic constraint, the proposed method reduces the bias caused by the noise correlation of the WLS problem. Stage 2 develops an effective WLS estimator to correct the stage-1 solution, thereby avoiding the nonlinear operations that increase the bias in the original algorithm. Subsequently, the theoretical performance of the proposed method is derived via second-order error analysis, demonstrating theoretically the effectiveness of the proposed method in reducing the bias and achieving the CRLB under moderate noise for a far-field source. Finally, several simulation experiments are conducted to verify the superiority of the proposed method and the validity of the theoretical derivation. Several important conclusions can be drawn from the simulation results. (i) The RMSEs of all estimates (including the source position, receiver position, and clock bias vector) for the proposed method can achieve the corresponding CRLBs under moderate noise levels. (ii) The proposed method reduces the bias of the solution without increasing the RMSE. (iii) The proposed method effectively reduces not only the bias of the source position but also the bias of the refined receiver positions and estimated clock bias vector. (iv) As the source range increases, the bias reduction of the proposed method becomes more obvious. (v) The improvement of the proposed method in terms of reducing bias does not depend on the localization geometry.

Currently, the proposed method only uses TDOA information of the emitted signal from a single target. Our future work will extend the proposed bias-reduced method to the following aspects:

  1. Hybrid TDOA/FDOA localization

  2. Localization in the multiple-source scenario

Abbreviations

3D: Three-dimensional

AOA: Angle of arrival

CRLB: Cramér–Rao lower bound

CTLS: Constrained total least-squares

FDOA: Frequency difference of arrival

GROA: Gain ratios of arrival

GSVD: Generalized singular value decomposition

MC: Monte Carlo

RDOA: Range difference of arrival

RMSE: Root mean square error

RSS: Received signal strength

TDOA: Time difference of arrival

TLS: Total least-squares

TOA: Time of arrival

TS: Taylor series

TSWLS: Two-step weighted least-squares

WLS: Weighted least-squares

References

  1. A. Noroozi, M.A. Sebt, Target localization from bistatic range measurements in multi-transmitter multi-receiver passive radar. IEEE Signal Process. Lett. 22(12), 2445–2449 (2015)

  2. H. Ma, M. Antoniou, D. Pastina, F. Santi, F. Pieralice, M. Bucciarelli, M. Cherniakov, Maritime moving target indication using passive GNSS-based bistatic radar. IEEE Trans. Aerosp. Electron. Syst. 54(1), 115–130 (2018)

  3. R. Amiri, F. Behnia, M.A.M. Sadr, Efficient positioning in MIMO radars with widely separated antennas. IEEE Commun. Lett. 21(7), 1569–1572 (2017)

  4. Y.L. Wang, Y. Wu, An efficient semidefinite relaxation algorithm for moving source localization using TDOA and FDOA measurements. IEEE Commun. Lett. 21(1), 80–83 (2017)

  5. C.W. Luo, L. Cheng, M.C. Chan, Y. Gu, J.Q. Li, Pallas: self-bootstrapping fine-grained passive indoor localization using WiFi monitors. IEEE Trans. Mob. Comput. 16(2), 466–481 (2017)

  6. T. Tirer, A.J. Weiss, Performance analysis of a high-resolution direct position determination method. IEEE Trans. Signal Process. 65(3), 544–554 (2017)

  7. S. Tomic, M. Beko, R. Dinis, 3-D target localization in wireless sensor networks using RSS and AOA measurements. IEEE Trans. Veh. Technol. 66(4), 3197–3210 (2017)

  8. C. Liu, D.Y. Fang, Z. Yang, H.B. Jiang, et al., RSS distribution-based passive localization and its application in sensor networks. IEEE Trans. Wirel. Commun. 15(4), 2883–2895 (2016)

  9. P.A. Forero, P.A. Baxley, L. Straatemeier, A multitask learning framework for broadband source-location mapping using passive sonar. IEEE Trans. Signal Process. 63(14), 3599–3614 (2015)

  10. T. Chen, C.S. Liu, Y.V. Zakharov, Source localization using matched-phase matched-field processing with phase descent search. IEEE J. Ocean. Eng. 37(2), 261–270 (2012)

  11. Y.C. Hu, G. Leus, Robust differential received signal strength-based localization. IEEE Trans. Signal Process. 65(12), 3261–3276 (2017)

  12. R.M. Vaghefi, M.R. Gholami, R.M. Buehrer, E.G. Strom, Cooperative received signal strength-based sensor localization with unknown transmit powers. IEEE Trans. Signal Process. 61(6), 1389–1403 (2013)

  13. R. Niu, P.K. Varshney, Joint detection and localization in sensor networks based on local decisions, in Proceedings of the Fortieth Asilomar Conference on Signals, Systems and Computers (IEEE Press, Pacific Grove, 2006), pp. 525–529

  14. D. Ciuonzo, P.S. Rossi, Distributed detection of a non-cooperative target via generalized locally-optimum approaches. Inf. Fusion 36, 261–274 (2017)

  15. D. Ciuonzo, P.S. Rossi, P. Willett, Generalized Rao test for decentralized detection of an uncooperative target. IEEE Signal Process. Lett. 24(5), 678–682 (2017)

  16. D. Ciuonzo, P.S. Rossi, Quantizer design for generalized locally-optimum detectors in wireless sensor networks. IEEE Wireless Commun. Lett. 7(2), 162–165 (2018)

  17. K.C. Ho, M. Sun, Passive source localization using time differences of arrival and gain ratios of arrival. IEEE Trans. Signal Process. 56(2), 464–477 (2008)

  18. J.A. Luo, S.W. Pan, D.L. Peng, Z. Wang, Y.J. Li, Source localization in acoustic sensor networks via constrained least-squares optimization using AOA and GROA measurements. Sensors 18(4), 937 (2018)

  19. W. Wang, G. Wang, J. Zhang, Y.M. Li, Robust weighted least squares method for TOA-based localization under mixed LOS/NLOS conditions. IEEE Commun. Lett. 21(10), 2226–2229 (2017)

  20. N. Wu, W.J. Yuan, H. Wang, J.M. Kuang, TOA-based passive localization of multiple targets with inaccurate receivers based on belief propagation on factor graph. Digit. Signal Process. 49(C), 14–23 (2016)

  21. L. Yang, K.C. Ho, An approximately efficient TDOA localization algorithm in closed-form for locating multiple disjoint sources with erroneous sensor positions. IEEE Trans. Signal Process. 57(12), 4598–4615 (2009)

  22. K. Yang, J.P. An, X.Y. Bu, G.C. Sun, Constrained total least-squares location algorithm using time-difference-of-arrival measurements. IEEE Trans. Veh. Technol. 59(3), 1558–1562 (2010)

  23. K.H. Yang, G. Wang, Z.Q. Luo, Efficient convex relaxation methods for robust target localization by a sensor network using time differences of arrivals. IEEE Trans. Signal Process. 57(7), 2775–2784 (2009)

  24. D. Wang, The geolocation performance analysis for the constrained Taylor-series iteration in the presence of satellite orbit perturbations. Sci. Sin. Inf. 44(2), 231–253 (2014)

  25. Z. Huang, J. Lu, Total least squares and equilibration algorithm for range difference location. Electron. Lett. 40(5), 121–122 (2004)

  26. Y. Wang, K.C. Ho, TDOA positioning irrespective of source range. IEEE Trans. Signal Process. 65(6), 1447–1460 (2017)

  27. G. Wang, A.M.C. So, Y.M. Li, Robust convex approximation methods for TDOA-based localization under NLOS conditions. IEEE Trans. Signal Process. 64(13), 3281–3296 (2016)

  28. X.M. Qu, L.H. Xie, W.R. Tan, Iterative constrained weighted least squares source localization using TDOA and FDOA measurements. IEEE Trans. Signal Process. 65(15), 3990–4003 (2017)

  29. K.C. Ho, X. Lu, L. Kovavisaruch, Source localization using TDOA and FDOA measurements in the presence of receiver location errors: analysis and solution. IEEE Trans. Signal Process. 55(2), 684–696 (2007)

  30. Z. Wang, J.A. Luo, X.P. Zhang, A novel location-penalized maximum likelihood estimator for bearing-only target localization. IEEE Trans. Signal Process. 60(12), 6166–6181 (2012)

  31. S. Xu, K. Dogancay, Optimal sensor placement for 3D angle-of-arrival target localization. IEEE Trans. Aerosp. Electron. Syst. 53(3), 1196–1211 (2017)

  32. L. Kovavisaruch, K.C. Ho, Modified Taylor-series method for source and receiver localization using TDOA measurements with erroneous receiver positions, in Proceedings of the IEEE International Symposium on Circuits and Systems (IEEE Press, Kobe, 2005), pp. 2295–2298

  33. Y.T. Chan, K.C. Ho, A simple and efficient estimator for hyperbolic location. IEEE Trans. Signal Process. 42(4), 1905–1915 (1994)

  34. X. Lu, K.C. Ho, Analysis of the degradation in source location accuracy in the presence of sensor location error, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE Press, Toulouse, 2006), pp. 14–19

  35. Y. Wang, J. Huang, L. Yang, Y. Xue, TOA-based joint synchronization and source localization with random errors in sensor positions and sensor clock biases. Ad Hoc Netw. 27(C), 99–111 (2015)

  36. Y. Wang, K.C. Ho, TDOA source localization in the presence of synchronization clock bias and sensor position errors. IEEE Trans. Signal Process. 61(18), 4532–4544 (2013)

  37. K. Dogancay, Bias compensation for the bearings-only pseudolinear target track estimator. IEEE Trans. Signal Process. 54(1), 59–68 (2006)

  38. K.C. Ho, Bias reduction for an explicit solution of source localization using TDOA. IEEE Trans. Signal Process. 60(5), 2101–2114 (2012)

  39. Y. Zhao, Z. Li, B.J. Hao, J.B. Si, P.W. Wan, Bias reduced method for TDOA and AOA localization in the presence of sensor errors, in Proceedings of the IEEE International Conference on Communications (IEEE Press, Paris, 2017), pp. 1–6

  40. Y. Liu, F.C. Guo, L. Yang, W.L. Jiang, An improved algebraic solution for TDOA localization with sensor position errors. IEEE Commun. Lett. 19(12), 2218–2221 (2015)

  41. G. Wang, S. Cai, Y.M. Li, N. Ansari, A bias-reduced nonlinear WLS method for TDOA/FDOA-based source localization. IEEE Trans. Veh. Technol. 65(10), 8603–8615 (2016)

  42. B.J. Hao, Z. Li, P.H. Qi, L. Guan, Effective bias reduction methods for passive source localization using TDOA and GROA. Sci. China Inf. Sci. 56(7), 1–12 (2013)

  43. R.J. Barton, D. Rao, Performance capabilities of long-range UWB-IR TDOA localization systems. EURASIP J. Adv. Signal Process. 2008, 236791 (2007)

  44. Y.T. Huang, J. Benesty, G.W. Elko, R.M. Mersereau, Real-time passive source localization: a practical linear-correction least-squares approach. IEEE Trans. Speech Audio Process. 9(8), 943–956 (2001)

  45. Y.M. Ji, C.B. Yu, J.M. Wei, B. Anderson, Localization bias reduction in wireless sensor networks. IEEE Trans. Ind. Electron. 62(5), 3004–3016 (2015)

  46. T.K. Moon, W.C. Stirling, Mathematical Methods and Algorithms for Signal Processing (Prentice-Hall, Upper Saddle River, 2000)

  47. H.W. Sorenson, Parameter Estimation: Principles and Problems (Marcel Dekker, New York, 1980)

  48. M. Frigge, D.C. Hoaglin, B. Iglewicz, Some implementations of the boxplot. Am. Stat. 43(1), 50–54 (1989)

  49. S.G. Wang, M.X. Wu, Z.Z. Jia, Matrix Inequality (Science Press, Beijing, 2006)


Acknowledgements

The authors would like to thank the editorial board and the reviewers for their consideration of this manuscript and for their helpful comments. Meanwhile, we thank Glenn Pennycook, MSc, from Liwen Bianji, Edanz Group China (www.liwenbianji.cn/ac), for editing the English text of a draft of this manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (Grant Nos. 61201381, 61401513, and 61772548), the China Postdoctoral Science Foundation (Grant No. 2016M592989), the Self-Topic Foundation of Information Engineering University (Grant No. 2016600701), and the Outstanding Youth Foundation of Information Engineering University (Grant No. 2016603201).

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.

Author information


Contributions

XC and DW derived and developed the algorithm. XC and JY conceived of and designed the simulations. XC and YW performed the simulations. DW analyzed the results. XC wrote the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ding Wang.

Ethics declarations

Ethics approval and consent to participate

All data and procedures performed in this paper were in accordance with the ethical standards of the research community. This paper does not contain any studies with human participants or animals performed by any of the authors.

Consent for publication

Informed consent was obtained from all authors included in the study.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A

According to Q1 = E[ΔrΔrT], Q2 = E[ΔsΔsT] and (22), we have

$$ \mathrm{E}\left[{\mathbf{H}}_1\left(\mathbf{A}\Delta \mathbf{r}\right)\odot \left(\mathbf{A}\Delta \mathbf{r}\right)\right]={\mathbf{H}}_1\mathrm{vecd}\left[{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}\right] $$
(A.1)
$$ \mathrm{E}\left[{\mathbf{H}}_1\mathbf{E}\left(\Delta \mathbf{s}\odot \Delta \mathbf{s}\right)\right]={\mathbf{H}}_1\mathbf{E}\mathrm{vecd}\left[{\mathbf{Q}}_2\right] $$
(A.2)
$$ \mathrm{E}\left[-2{\mathbf{H}}_1\mathbf{R}\left(\Delta \mathbf{s}\right){\mathbf{Ar}}^o\right]=-2{\mathbf{H}}_1\cdot \mathrm{blkdiag}\left\{\mathrm{tr}\left\{{\mathbf{Q}}_{s_1}{\mathbf{P}}_1\right\}\cdot {\mathbf{I}}_{M_1-1},\mathrm{tr}\left\{{\mathbf{Q}}_{s_{M_1+1}}{\mathbf{P}}_{M_1+1}\right\}\cdot {\mathbf{I}}_{M_2-{M}_1-1},\cdots, \mathrm{tr}\left\{{\mathbf{Q}}_{s_{M_{N-1}+1}}{\mathbf{P}}_{M_{N-1}+1}\right\}\cdot {\mathbf{I}}_{M_N-{M}_{N-1}-1}\right\}\cdot {\mathbf{Ar}}^o $$
(A.3)
$$ \mathrm{E}\left[{\mathbf{U}}_1^{o-1}\Delta {\mathbf{G}}_1^{\mathrm{T}}{\mathbf{W}}_1\left(\mathbf{I}-{\mathbf{G}}_1^{\mathrm{o}}{\mathbf{H}}_1\right){\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}\right]=2{\mathbf{U}}_1^{o-1}\left[\begin{array}{c}{\mathbf{0}}_{3\times 1}\\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_1{\mathbf{W}}_1\left({\mathbf{G}}_1^{\mathrm{o}}{\mathbf{H}}_1-\mathbf{I}\right){\mathbf{B}}_1{\mathbf{A}\mathbf{Q}}_1\right\}\\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_2{\mathbf{W}}_1\left({\mathbf{G}}_1^{\mathrm{o}}{\mathbf{H}}_1-\mathbf{I}\right){\mathbf{B}}_1{\mathbf{A}\mathbf{Q}}_1\right\}\\ {}\vdots \\ {}\mathrm{tr}\left\{{\mathbf{A}}^{\mathrm{T}}{\boldsymbol{\Lambda}}_N{\mathbf{W}}_1\left({\mathbf{G}}_1^{\mathrm{o}}{\mathbf{H}}_1-\mathbf{I}\right){\mathbf{B}}_1{\mathbf{A}\mathbf{Q}}_1\right\}\end{array}\right] $$
(A.4)
$$ {\displaystyle \begin{array}{l}\mathrm{E}\left[{\mathbf{U}}_1^{o-1}\Delta {\mathbf{G}}_1^{\mathrm{T}}{\mathbf{W}}_1\left(\mathbf{I}-{\mathbf{G}}_1^{\mathrm{o}}{\mathbf{H}}_1\right){\tilde{\mathbf{D}}}_1\Delta \mathbf{s}\right]=2{\mathbf{U}}_1^{o-1}\mathrm{E}\left[\begin{array}{c}\Delta {\tilde{\mathbf{s}}}^{\mathrm{T}}{\mathbf{A}}_2^{\mathrm{T}}{\mathbf{W}}_1\left({\mathbf{G}}_1^{\mathrm{o}}{\mathbf{H}}_1-\mathbf{I}\right){\tilde{\mathbf{D}}}_1\Delta \mathbf{s}\\ {}{\mathbf{0}}_{N\times 1}\end{array}\right]\\ {}=2{\mathbf{U}}_1^{o-1}\left[\begin{array}{c}{\mathbf{Q}}_{s_1}{\mathbf{X}}_1{\left(1,1:3\right)}^{\mathrm{T}}+{\mathbf{Q}}_{s_2}{\mathbf{X}}_1{\left(2,4:6\right)}^{\mathrm{T}}+\cdots +{\mathbf{Q}}_{s_M}{\mathbf{X}}_1{\left(M,3M-2:3M\right)}^{\mathrm{T}}\\ {}{\mathbf{0}}_{N\times 1}\end{array}\right]\end{array}} $$
(A.5)
$$ {\displaystyle \begin{array}{l}\mathrm{E}\left[-{\mathbf{H}}_1\Delta {\mathbf{G}}_1{\mathbf{H}}_1{\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}\right]=2{\mathbf{H}}_1\mathrm{E}\left[\Big[{\mathbf{A}}_2\Delta \tilde{\mathbf{s}},\boldsymbol{\Lambda} \left({\mathbf{I}}_N\otimes \left(\mathbf{A}\Delta \mathbf{r}\right)\right)\Big]{\mathbf{H}}_1{\mathbf{B}}_1\mathbf{A}\Delta \mathbf{r}\right]\\ {}=2{\mathbf{H}}_1\left(\begin{array}{l}{\boldsymbol{\Lambda}}_1{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}{\mathbf{B}}_1^{\mathrm{T}}{\mathbf{H}}_1{\left(4,:\right)}^{\mathrm{T}}+{\boldsymbol{\Lambda}}_2{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}{\mathbf{B}}_1^{\mathrm{T}}{\mathbf{H}}_1{\left(5,:\right)}^{\mathrm{T}}\\ {}+\cdots +{\boldsymbol{\Lambda}}_N{\mathbf{A}\mathbf{Q}}_1{\mathbf{A}}^{\mathrm{T}}{\mathbf{B}}_1^{\mathrm{T}}{\mathbf{H}}_1{\left(3+N,:\right)}^{\mathrm{T}}\end{array}\right)\end{array}} $$
(A.6)
$$ \mathrm{E}\left[-{\mathbf{H}}_1\Delta {\mathbf{G}}_1{\mathbf{H}}_1{\tilde{\mathbf{D}}}_1\Delta \mathbf{s}\right]=2{\mathbf{H}}_1\mathrm{E}\left[{\mathbf{A}}_2\Delta \tilde{\mathbf{s}}{\mathbf{H}}_1\left(1:3,:\right){\tilde{\mathbf{D}}}_1\Delta \mathbf{s}\right]=2{\mathbf{H}}_1{\mathbf{A}}_2\left[\begin{array}{c}\mathrm{tr}\left\{{\mathbf{X}}_2\left(1:3,1:3\right){\mathbf{Q}}_{s_1}\right\}\\ {}\mathrm{tr}\left\{{\mathbf{X}}_2\left(1:3,4:6\right){\mathbf{Q}}_{s_2}\right\}\\ {}\vdots \\ {}\mathrm{tr}\left\{{\mathbf{X}}_2\left(1:3,3M-2:3M\right){\mathbf{Q}}_{s_M}\right\}\end{array}\right] $$
(A.7)

where \( {\mathbf{X}}_1={\mathbf{A}}_2^{\mathrm{T}}{\mathbf{W}}_1\left({\mathbf{G}}_1^{\mathrm{o}}{\mathbf{H}}_1-\mathbf{I}\right){\tilde{\mathbf{D}}}_1 \) and \( {\mathbf{X}}_2={\mathbf{H}}_1\left(1:3,:\right){\tilde{\mathbf{D}}}_1 \). Using (A.1)–(A.7) and taking the expectation of (25) yields the bias (26) of φ1.

Appendix B

According to the definitions of \( {\tilde{\mathbf{W}}}_2^o \) and \( {\mathbf{U}}_1^o \) below (83) and in (23), we have

$$ \mathrm{E}\left[\Delta \overline{\mathbf{u}}\Delta {\overline{\mathbf{u}}}^{\mathrm{T}}\right]={\left({\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{W}}}_2^o{\tilde{\mathbf{G}}}_2\right)}^{-1}={\left({\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{B}}}_2^{-1}{\mathbf{U}}_1^o{\tilde{\mathbf{B}}}_2^{-1}{\tilde{\mathbf{G}}}_2\right)}^{-1}={\left({\tilde{\mathbf{G}}}_2^{\mathrm{T}}{\tilde{\mathbf{B}}}_2^{-1}\left({\mathbf{G}}_1^{\mathrm{oT}}{\mathbf{W}}_1{\mathbf{G}}_1^o\right){\tilde{\mathbf{B}}}_2^{-1}{\tilde{\mathbf{G}}}_2\right)}^{-1} $$
(B.1)

Using the definitions of \( \mathbf{W}_2^o \) (given below (31)) and \( \mathbf{U}_1^o \) in (23), \( \mathrm{E}\left[\Delta\mathbf{u}\,\Delta\mathbf{u}^{\mathrm{T}}\right] \) can be reformulated as

$$ \mathrm{E}\left[\Delta\mathbf{u}\,\Delta\mathbf{u}^{\mathrm{T}}\right]=\mathbf{B}_3^{o-1}\left(\mathbf{G}_2^{\mathrm{T}}\mathbf{W}_2^{o}\mathbf{G}_2\right)^{-1}\mathbf{B}_3^{o-1}=\left(\mathbf{B}_3^{o}\mathbf{G}_2^{\mathrm{T}}\mathbf{B}_2^{o-\mathrm{T}}\left(\mathbf{G}_1^{o\mathrm{T}}\mathbf{W}_1\mathbf{G}_1^{o}\right)\mathbf{B}_2^{o-1}\mathbf{G}_2\mathbf{B}_3^{o}\right)^{-1} $$
(B.2)

We now derive expressions for \( \tilde{\mathbf{G}}_2^{\mathrm{T}}\tilde{\mathbf{B}}_2^{-1} \) and \( \mathbf{B}_3^o\mathbf{G}_2^{\mathrm{T}}\mathbf{B}_2^{o-\mathrm{T}} \). Using the definitions of \( \tilde{\mathbf{G}}_2 \) and \( \tilde{\mathbf{B}}_2 \) in (66), we have

$$ \tilde{\mathbf{G}}_2^{\mathrm{T}}\tilde{\mathbf{B}}_2^{-1}=\left[\mathbf{I}_3,\ -\tilde{\boldsymbol{\uprho}}_1,\ -\tilde{\boldsymbol{\uprho}}_{M_1+1},\ \cdots,\ -\tilde{\boldsymbol{\uprho}}_{M_{N-1}+1}\right]\cdot\left[\begin{array}{cc}-\mathbf{I}_3 & \mathbf{0}\\ \mathbf{0} & \mathbf{I}_N\end{array}\right]=-\left[\mathbf{I}_3,\ \tilde{\boldsymbol{\uprho}}_1,\ \tilde{\boldsymbol{\uprho}}_{M_1+1},\ \cdots,\ \tilde{\boldsymbol{\uprho}}_{M_{N-1}+1}\right] $$
(B.3)

Applying the partitioned matrix inversion formula [49] to the definition of \( \mathbf{B}_2^o \) in (29), we obtain

$$ \mathbf{B}_2^{o-1}=\frac{1}{2}\left[\begin{array}{cccc}\left(\operatorname{diag}\left\{\mathbf{u}^{o}\right\}\right)^{-1} & \mathbf{0} & \cdots & \mathbf{0}\\ -\dfrac{\mathbf{s}_1^{\mathrm{T}}\left(\operatorname{diag}\left\{\mathbf{u}^{o}\right\}\right)^{-1}}{d_1^{o}} & \dfrac{1}{d_1^{o}} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ -\dfrac{\mathbf{s}_{M_{N-1}+1}^{\mathrm{T}}\left(\operatorname{diag}\left\{\mathbf{u}^{o}\right\}\right)^{-1}}{d_{M_{N-1}+1}^{o}} & 0 & \cdots & \dfrac{1}{d_{M_{N-1}+1}^{o}}\end{array}\right] $$
(B.4)
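
As a concrete check of this step, the sketch below (with generic invertible diagonal blocks standing in for \( 2\operatorname{diag}\left\{\mathbf{u}^{o}\right\} \) and the range terms; none of the paper's data) verifies numerically the partitioned inversion formula used to obtain (B.4):

```python
# Minimal numerical sketch (illustrative only) of the partitioned inversion
# formula behind (B.4): for block lower-triangular M = [[A, 0], [C, D]]
# with A, D invertible,  M^{-1} = [[A^{-1}, 0], [-D^{-1} C A^{-1}, D^{-1}]].
import numpy as np

rng = np.random.default_rng(1)
A = np.diag(rng.uniform(1.0, 5.0, size=3))   # stand-in for 2*diag{u^o}
D = np.diag(rng.uniform(1.0, 5.0, size=4))   # stand-in for 2*diag{d_i^o}
C = rng.standard_normal((4, 3))              # arbitrary lower-left block

M = np.block([[A, np.zeros((3, 4))], [C, D]])
Ai, Di = np.linalg.inv(A), np.linalg.inv(D)
M_inv = np.block([[Ai, np.zeros((3, 4))], [-Di @ C @ Ai, Di]])

# The closed form matches direct inversion to machine precision.
print(np.allclose(M_inv, np.linalg.inv(M)))   # True
```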

Using (B.4), the definition of \( \mathbf{G}_2 \) in (13), and the definition of \( \mathbf{B}_3^o \) (given below (40)), \( \mathbf{B}_3^o\mathbf{G}_2^{\mathrm{T}}\mathbf{B}_2^{o-\mathrm{T}} \) can be reformulated as

$$ \begin{aligned}\mathbf{B}_3^{o}\mathbf{G}_2^{\mathrm{T}}\mathbf{B}_2^{o-\mathrm{T}}&=2\operatorname{diag}\left\{\mathbf{u}^{o}\right\}\cdot\left[\mathbf{I}_3,\ \mathbf{1}_3,\ \cdots,\ \mathbf{1}_3\right]\cdot\frac{1}{2}\left[\begin{array}{cccc}\left(\operatorname{diag}\left\{\mathbf{u}^{o}\right\}\right)^{-1} & -\dfrac{\left(\operatorname{diag}\left\{\mathbf{u}^{o}\right\}\right)^{-1}\mathbf{s}_1}{d_1^{o}} & \cdots & -\dfrac{\left(\operatorname{diag}\left\{\mathbf{u}^{o}\right\}\right)^{-1}\mathbf{s}_{M_{N-1}+1}}{d_{M_{N-1}+1}^{o}}\\ \mathbf{0} & \dfrac{1}{d_1^{o}} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ \mathbf{0} & 0 & \cdots & \dfrac{1}{d_{M_{N-1}+1}^{o}}\end{array}\right]\\ &=\left[\mathbf{I}_3,\ \frac{\mathbf{u}^{o}-\mathbf{s}_1}{d_1^{o}},\ \frac{\mathbf{u}^{o}-\mathbf{s}_{M_1+1}}{d_{M_1+1}^{o}},\ \cdots,\ \frac{\mathbf{u}^{o}-\mathbf{s}_{M_{N-1}+1}}{d_{M_{N-1}+1}^{o}}\right]\approx\left[\mathbf{I}_3,\ \tilde{\boldsymbol{\uprho}}_1,\ \tilde{\boldsymbol{\uprho}}_{M_1+1},\ \cdots,\ \tilde{\boldsymbol{\uprho}}_{M_{N-1}+1}\right]=-\tilde{\mathbf{G}}_2^{\mathrm{T}}\tilde{\mathbf{B}}_2^{-1}\end{aligned} $$
(B.5)

Substituting (B.5) into (B.2) yields

$$ \mathrm{E}\left[\Delta\mathbf{u}\,\Delta\mathbf{u}^{\mathrm{T}}\right]=\left(\tilde{\mathbf{G}}_2^{\mathrm{T}}\tilde{\mathbf{B}}_2^{-1}\left(\mathbf{G}_1^{o\mathrm{T}}\mathbf{W}_1\mathbf{G}_1^{o}\right)\tilde{\mathbf{B}}_2^{-1}\tilde{\mathbf{G}}_2\right)^{-1}=\mathrm{E}\left[\Delta\overline{\mathbf{u}}\,\Delta\overline{\mathbf{u}}^{\mathrm{T}}\right] $$
(B.6)

This completes the proof.
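
The equality in (B.6) is an instance of a general weighted least-squares fact: when the weighting matrix is the inverse of the noise covariance, the estimator covariance equals \( \left(\mathbf{G}^{\mathrm{T}}\mathbf{W}\mathbf{G}\right)^{-1} \), regardless of how the pair (G, W) is factored between the two formulations. A minimal Monte Carlo sketch with a generic linear model (our own G and W, not the paper's matrices) illustrates this:

```python
# Minimal Monte Carlo sketch (illustrative only) of the WLS covariance fact
# behind (B.1), (B.2), and (B.6): for du = (G^T W G)^{-1} G^T W e with
# E[e e^T] = W^{-1}, the estimator covariance is (G^T W G)^{-1}.
import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 3
G = rng.standard_normal((m, n))              # generic design matrix
L = rng.standard_normal((m, m))
Qe = L @ L.T                                 # noise covariance
W = np.linalg.inv(Qe)                        # optimal WLS weighting

P = np.linalg.inv(G.T @ W @ G) @ G.T @ W     # WLS estimation matrix
e = rng.multivariate_normal(np.zeros(m), Qe, size=200_000)
du = e @ P.T                                 # one WLS error vector per draw

print(np.round(np.cov(du.T), 4))                 # empirical covariance
print(np.round(np.linalg.inv(G.T @ W @ G), 4))   # theoretical covariance
```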

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.




Cite this article

Chen, X., Wang, D., Yin, J. et al. Bias reduction for TDOA localization in the presence of receiver position errors and synchronization clock bias. EURASIP J. Adv. Signal Process. 2019, 7 (2019). https://doi.org/10.1186/s13634-019-0602-z

