
On the use of calibration emitters for TDOA source localization in the presence of synchronization clock bias and sensor location errors

Abstract

Time difference of arrival (TDOA) positioning is one of the most widely applied techniques for locating an emitting source. Unfortunately, synchronization clock bias and random sensor location perturbations are known to degrade the TDOA localization accuracy significantly. This paper studies the use of a set of calibration sources, whose locations are accurately known to the estimator, to reduce the loss in localization accuracy caused by synchronization offsets and sensor location errors. Under the Gaussian noise assumption, we first derive the Cramér–Rao bound (CRB) for parameter estimation with the use of calibration emitters. Some explicit CRB expressions are obtained, and the performance improvement due to the introduction of the calibration sources is quantified through the CRB analysis. In order to achieve the optimum localization accuracy, we then propose new localization methods that use the TDOA measurements from both the target source and the calibration emitters. Specifically, two dimension-reduction Taylor-series iterative algorithms are developed, each consisting of two stages. The first stage estimates the clock bias and refines the sensor positions by using the calibration TDOA measurements and the prior knowledge of the sensor locations. The second stage estimates the source location by combining the TDOA measurements of the target signal with the estimates obtained in the first stage. By applying a first-order perturbation analysis, the mean square errors (MSEs) of the proposed methods are shown analytically to achieve the corresponding CRB. Simulations corroborate the theoretical development in this paper.

1 Introduction

Passive localization of an emitting source is a fundamental research topic in numerous applications including signal processing, wireless communications, wireless sensor networks, sonar, surveillance, navigation, passive radar, and vehicular technology. Most localization systems require two estimation steps. In the first step, some intermediate parameters embedded in the received signals are extracted at several stations or over different time slots through signal processing techniques. These intermediate parameters are functions of the emitter location and are usually the angle of arrival (AOA) [1,2,3], time difference of arrival (TDOA) [4,5,6,7,8,9,10,11,12,13,14,15,16], time of arrival (TOA) [17,18,19,20,21,22], frequency difference of arrival (FDOA) [23,24,25,26,27,28,29,30,31,32], frequency of arrival (FOA) [33], received signal strength (RSS) [34,35,36,37,38], gain ratios of arrival (GROA) [38,39,40,41], etc. In the second step, the transmitter’s position is determined by finding the coordinate that best fits the lines of position (LOP) associated with the parameters obtained in the first step. This two-step procedure can be classified as a decentralized processing approach [42]. It is worth pointing out that source localization can be achieved not only by using a single signal parameter but also by combining multiple signal parameters. In [43,44,45,46,47], localization approaches that use TOA/TDOA are proposed. In [48,49,50], localization problems that combine range or range difference with signal strength are studied. In addition, some more general methods for localization in mobile contexts based on moving sensors are developed in [51,52,53,54].

Perhaps the most common technique for locating a stationary emitter is to measure the TDOAs of the radiated signal at a number of spatially separated sensors. Each TDOA defines a hyperbola on which the emitter must lie, and the intersection of the hyperbolae gives the source location estimate. As mentioned in [11, 55], TOA and TDOA measurements generally yield more accurate position estimates than the other intermediate parameters. Moreover, it should be highlighted that the TOA localization approach requires knowledge of the transmit time of the signal, whereas the TDOA positioning technique does not rely on this parameter. Hence, the latter is more suitable for passive localization. In this paper, we focus on the TDOA-based location scheme.

In the TDOA localization problem, finding the solution of the hyperbolic location equations is generally not a trivial task due to their non-linear and non-convex nature. Moreover, the non-linear hyperbolic equations become inconsistent when the TDOA measurements are corrupted by noise, so that the hyperbolae no longer intersect at a single point. During the past few decades, a number of methods for TDOA positioning have become available in the literature. These approaches can be divided into two categories. The methods in the first class require iteration to obtain an accurate location estimate. The most important iterative methods include the Taylor-series iterative algorithm [8], the constrained total least squares (CTLS) algorithm [11, 28], the quadratic constraint least squares (QCLS) algorithm [6, 14, 27, 31, 38], and the interior-point algorithm [9, 16, 29]. The second category provides explicit solutions for the target position, and typical closed-form approaches include the spherical-interpolation (SI) algorithm [4], the two-step weighted least squares (TWLS) algorithm [5, 10, 15, 23, 24, 26, 30, 32, 39, 40], and the multidimensional scaling (MDS) algorithm [25]. Both classes of methods are able to attain the Cramér–Rao bound (CRB) accuracy when the noise condition is favorable. Generally, the closed-form methods are computationally attractive and, unlike the iterative techniques, do not suffer from local minima and divergence problems. However, the iterative approaches generally tolerate a higher noise level than the closed-form solutions, provided they converge to the global optimal solution with the help of a good initial guess. A possible reason is that the closed-form algorithms may generate complex values when taking square roots [12]. Although there is no rigorous proof, the experimental results in [11, 27, 28, 31] support this conclusion. Indeed, the two kinds of methods can be combined to enhance the reliability of the produced position estimates. For example, the closed-form solution can be taken as the initial guess of the iterative method, to avoid local convergence and to achieve a higher level of noise tolerance before the thresholding effect takes place.

In addition to the TDOA measurement errors, synchronization offsets and sensor position uncertainties can also degrade the localization accuracy considerably, regardless of the algorithm used for source localization. In recent years, much attention has been paid to the emitter location problem in the presence of sensor location errors and/or synchronization clock bias. In [24, 56], the mean square error (MSE) of source position estimate is derived when an optimum estimator assumes the sensor positions are exact but, in fact, they have errors. In [57], the effects of synchronization errors on TDOA localization accuracy are analyzed in terms of estimation bias, MSE, and success probability. In [58, 59], the degradation in localization accuracy caused by synchronization offsets and sensor position perturbations is examined through the CRB analysis. Both theoretical and experimental results reveal that the TDOA positioning accuracy is very sensitive to the two types of errors, and the CRB performance cannot be achieved if an estimator does not take the two errors into account.

On the other hand, most of the TDOA localization algorithms mentioned above can be extended to the scenario where either or both of these two error types are present. In general, the sensor position errors are modeled as random variables. Moreover, there exist two classes of methods that can remove the effects of the uncertainties in sensor locations. The first one incorporates the prior statistical distribution of the noisy sensor positions into the localization procedure [9, 15, 16, 24, 26, 29], and the other performs joint estimation of the emitter and sensor locations [8, 10, 20, 30, 40]. The latter is usually more computationally demanding, but it is able to improve the sensor locations and has a higher level of noise tolerance before the thresholding effect starts to occur. Different from sensor position errors, the clock bias is generally regarded as a deterministic parameter, and it can be estimated together with the emitter location. A number of effective methods for joint time synchronization and source localization are presented in [60,61,62,63,64,65,66,67,68,69,70]. Additionally, it is worth emphasizing that asynchronous sampling does not always occur among different sensors. When the sensors are close to each other, it is relatively simple to achieve synchronous sampling through the use of a single hardware unit with multichannel acquisition capability. Consequently, the synchronization errors should be considered only when the sensors are widely separated. In [59], the spatially separated sensors used for TDOA positioning are divided into several groups; the sensors are synchronized within each group, but synchronization timing offsets exist among different groups. In other words, these sensors are partially synchronized. Based on this localization scenario, some efficient localization approaches are developed in [57, 59, 70].

In order to further improve the source position estimate in the presence of timing synchronization offsets and/or sensor location errors, we can utilize a set of calibration emitters whose positions are accurately or approximately known. Indeed, the introduction of calibration sources can provide a considerable performance gain [71,72,73,74,75,76,77,78]. The reason is that the TDOA measurements from the target source and the calibration emitters are subject to the same sensor position displacement and synchronization clock bias. In [71], a single calibration source with accurately known location is exploited to reduce the loss in localization accuracy due to the uncertainties in sensor locations. The gain in localization accuracy resulting from a calibration signal is examined through the CRB analysis. An asymptotically efficient solution for the source location estimate is also presented in [71], which uses TDOA measurements from both the target and the calibration source. References [72,73,74] extend this work to the TDOA/FDOA positioning scenario where the target source is moving. In addition, the study in [71] has also been generalized to more practical situations, for example, where the accurate position of the calibration emitter is not available [75], or where multiple target sources and calibration emitters exist simultaneously [76, 77]. Some efficient solutions are developed in [75,76,77]. Besides, the effects of sensor position errors and the placement of the calibration emitter on source localization are studied in [78]. Theoretical and experimental results show that it is possible to eliminate the effects of sensor position errors on source localization by properly exploiting calibration emitters, even if their positions are not known exactly.

It is noteworthy that none of the works in [71,72,73,74,75,76,77,78] takes the synchronization errors into consideration in the presence of calibration emitters. Indeed, it can be expected intuitively that the negative effect of clock bias on localization accuracy can also be mitigated through the utilization of calibration emitters. Therefore, this work focuses on the use of calibration emitters for TDOA source localization when both clock offsets and sensor position uncertainties exist. To the best of our knowledge, this is the first time this problem is addressed.

In this paper, the localization scenario is similar to the one presented in [59], where the sensors are partially synchronized. The study begins with a CRB investigation that examines the performance improvement provided by the calibration emitters over the case where no calibration sources are exploited. Some explicit and useful CRB expressions are obtained. The insight gained from the CRB indicates that the calibration sources can significantly reduce the effects of clock bias and sensor position errors. In order to achieve the optimum estimation accuracy, we develop new TDOA localization methods using the measurements from both the target source and the calibration emitters. Specifically, two dimension-reduction Taylor-series iterative algorithms are proposed, each consisting of two stages. The first stage estimates the clock offsets and refines the sensor positions by using the calibration TDOA measurements as well as the statistical characteristics of the noisy sensor locations. The second stage estimates the source location by combining the TDOA measurements of the target signal with the estimates obtained in the first stage. The theoretical MSEs of the proposed methods are deduced based on a first-order perturbation analysis. Moreover, the proposed solutions are proved analytically to attain the CRB accuracy under moderate noise levels. The paper closes with simulations that support the theoretical development. The novelty and technical contributions of the paper are summarized as follows:

  (1)

    The exact CRB expressions for TDOA source localization in the presence of synchronization clock bias and sensor location errors are obtained for the first time. The performance improvement due to the use of the calibration sources is quantified through the CRB analysis.

  (2)

    Aiming at the localization problem addressed here, we propose two efficient dimension-reduction Taylor-series iterative algorithms based on the property of the orthogonal projection matrix. Both of them can significantly reduce the loss in localization accuracy caused by synchronization offsets and sensor location errors.

  (3)

    The estimation MSEs of the two proposed solutions are derived by applying the first-order perturbation analysis. Moreover, the analytical expressions for the MSEs are proved to equal the CRB by making use of the property of the orthogonal projection matrix. More importantly, our performance analysis is performed in a general mathematical framework that is not limited to a specific signal metric. The obtained analytical results reveal that the new methods are able to provide asymptotically optimal estimation accuracy for source localization.

The rest of this paper is organized as follows. Section 2 lists the notational conventions and matrix identities that will be used throughout the paper. In Section 3, the localization scenario is described and the measurement model is formulated. The CRB analyses are performed in Section 4. Section 5 develops the proposed TDOA localization methods. The asymptotic efficiency of the proposed estimators is proved in Section 6. Section 7 provides the simulation results which can corroborate the theoretical analysis as well as the good performance of the proposed solutions. Conclusions are drawn in Section 8. The proofs of the main results are shown in Appendices 1, 2, 3, 4, 5, 6, 7, and 8.

2 Notational conventions and matrix identities

In this paper, lowercase and uppercase boldface letters are used to denote vectors and matrices, respectively. The notational conventions used throughout this paper are listed in Table 1, where the right column defines the mathematical notation shown in the left column.

Table 1 Notational conventions

Table 2 shows four matrix identities that are useful for the theoretical development in this paper. It contains two orthogonal projection matrix formulas for a full-column-rank matrix, the partitioned matrix inversion formula for a symmetric matrix, and the matrix inversion lemma.

Table 2 Matrix identities

3 Measurement model and problem formulation

3.1 Measurement model for target source

We consider a three-dimensional (3D) localization scenario, in which M sensors are used to capture the signal radiated from an emitting point source. The sensors are located at positions \( {\mathbf{w}}_m={\left[{x}_{\mathrm{r},m}\kern0.5em {y}_{\mathrm{r},m}\kern0.5em {z}_{\mathrm{r},m}\right]}^{\mathrm{T}} \) (1 ≤ m ≤ M). The TDOAs of the received signals with respect to the signal at the reference sensor, say sensor 1, are estimated to determine the location of the target source. The unknown position of the transmitter is denoted by \( \mathbf{u}={\left[x\kern0.5em y\kern0.5em z\right]}^{\mathrm{T}} \).

As discussed in [59], perfect synchronization of all sensors may not be feasible if the sensors are widely separated or there are a large number of sensors. However, it is relatively simple to perform synchronous sampling for sensors that are close to each other. For this reason, we divide the sensors into N groups. The number of sensors in the nth group is denoted by \( {M}_n \), which implies \( M={\sum}_{n=1}^N{M}_n \). Within each group, the signals received at the sensors are sampled synchronously with respect to a common local clock. However, the clocks of different sensor groups are not the same, and clock offsets exist among them. Without loss of generality, the sensors are grouped in the manner shown in Fig. 1.

In Fig. 1, \( {S}_m \) stands for the mth sensor and \( {S}_1 \) represents the reference sensor. Since the reference sensor belongs to the first group, the clock bias of this group can be assumed to be zero.

Fig. 1 Sketch map of sensor grouping

It is well known that a TDOA measurement can easily be converted to a range difference of arrival (RDOA) measurement given the signal propagation speed. Therefore, TDOA and RDOA are used interchangeably throughout this paper. Based on the above assumptions, the RDOAs can be modeled as:

$$ \left\{\begin{array}{l}{\hat{r}}_{m1}={r}_{m1}+{\varepsilon}_{m1}={\left\Vert \mathbf{u}-{\mathbf{w}}_m\right\Vert}_2-{\left\Vert \mathbf{u}-{\mathbf{w}}_1\right\Vert}_2+{\varepsilon}_{m1}={f}_m\left(\mathbf{u},\mathbf{w}\right)+{\varepsilon}_{m1}\kern0.5em \left(2\le m\le {M}_1\right)\\ {}{\hat{r}}_{m1}={r}_{m1}+{\varepsilon}_{m1}={\left\Vert \mathbf{u}-{\mathbf{w}}_m\right\Vert}_2-{\left\Vert \mathbf{u}-{\mathbf{w}}_1\right\Vert}_2+{\rho}_n+{\varepsilon}_{m1}={f}_m\left(\mathbf{u},\mathbf{w}\right)+{\rho}_n+{\varepsilon}_{m1}\ \left({\tilde{M}}_{n-1}+1\le m\le {\tilde{M}}_n;2\le n\le N\right)\end{array}\right. $$
(1)

where \( {\hat{r}}_{m1} \) is the noisy measurement, \( {r}_{m1} \) is the true value in the presence of synchronization errors, \( {\varepsilon}_{m1} \) is the additive noise, \( {\rho}_n \) is the range offset due to the clock bias of group n with respect to group 1, \( {\tilde{M}}_n={\sum}_{j=1}^n{M}_j \) with \( {\tilde{M}}_N=M \), and \( {f}_m\left(\mathbf{u},\mathbf{w}\right)={\left\Vert \mathbf{u}-{\mathbf{w}}_m\right\Vert}_2-{\left\Vert \mathbf{u}-{\mathbf{w}}_1\right\Vert}_2 \).
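To make the model concrete, the following minimal Python sketch (not from the paper) simulates the RDOA measurements in (1) for a hypothetical geometry; the sensor positions, group sizes, range offsets, and noise level are illustrative assumptions.

```python
# Minimal simulation of the RDOA model (1); all numerical values are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
W = rng.uniform(-1000.0, 1000.0, size=(7, 3))   # M = 7 sensor positions (m); row 0 is sensor 1
u = np.array([2000.0, 1500.0, 500.0])           # target source position (m)
group_sizes = [3, 2, 2]                         # M_1, M_2, M_3 (reference sensor is in group 1)
rho = np.array([30.0, -20.0])                   # range offsets rho_2, rho_3 (m)
sigma = 1.0                                     # RDOA noise standard deviation (m)

def f(u, W):
    """f(u, w): ||u - w_m||_2 - ||u - w_1||_2 for m = 2, ..., M."""
    d = np.linalg.norm(u - W, axis=1)
    return d[1:] - d[0]

# per-sensor range offset: 0 for group 1, rho_n for group n >= 2
offsets = np.concatenate([np.zeros(group_sizes[0]), np.repeat(rho, group_sizes[1:])])
r_hat = f(u, W) + offsets[1:] + sigma * rng.standard_normal(W.shape[0] - 1)   # noisy RDOAs
```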

For notational simplicity, we collect \( {\left\{{\hat{r}}_{m1}\right\}}_{2\le m\le M} \) to form an (M − 1) × 1 RDOA vector as follows:

$$ \hat{\mathbf{r}}=\mathbf{r}+\boldsymbol{\upvarepsilon} =\mathbf{f}\left(\mathbf{u},\mathbf{w}\right)+\boldsymbol{\Gamma} \boldsymbol{\uprho} +\boldsymbol{\upvarepsilon} $$
(2)

where

$$ \left\{\begin{array}{l}\hat{\mathbf{r}}={\left[{\hat{r}}_{21}\kern1em {\hat{r}}_{31}\kern1em \cdots \kern1em {\hat{r}}_{M1}\right]}^{\mathrm{T}}\kern1em ,\kern1em \mathbf{r}={\left[{r}_{21}\kern1em {r}_{31}\kern1em \cdots \kern1em {r}_{M1}\right]}^{\mathrm{T}}\kern1em ,\kern1em \boldsymbol{\upvarepsilon} ={\left[{\varepsilon}_{21}\kern1em {\varepsilon}_{31}\kern1em \cdots \kern1em {\varepsilon}_{M1}\right]}^{\mathrm{T}}\kern1em ,\kern1em \boldsymbol{\uprho} ={\left[{\rho}_2\kern1em {\rho}_3\kern1em \cdots \kern1em {\rho}_N\right]}^{\mathrm{T}}\\ {}\mathbf{f}\left(\mathbf{u},\mathbf{w}\right)={\left[{f}_2\left(\mathbf{u},\mathbf{w}\right)\kern1em {f}_3\left(\mathbf{u},\mathbf{w}\right)\kern1em \cdots \kern1em {f}_M\left(\mathbf{u},\mathbf{w}\right)\right]}^{\mathrm{T}}\kern1em ,\kern1em \boldsymbol{\Gamma} =\left[{\mathbf{O}}_{\left(N-1\right)\times \left({M}_1-1\right)}\right.\kern0.5em \mathrm{blkdiag}{\left.\left[\begin{array}{ccc}{\mathbf{1}}_{1\times {M}_2}& \begin{array}{cc}{\mathbf{1}}_{1\times {M}_3}& \cdots \end{array}& {\mathbf{1}}_{1\times {M}_N}\end{array}\right]\right]}^{\mathrm{T}}\end{array}\right. $$
(3)

It is assumed that the error vector ε follows a zero-mean Gaussian distribution with covariance matrix \( \mathbf{Q}=\mathrm{E}\left[\boldsymbol{\upvarepsilon} {\boldsymbol{\upvarepsilon}}^{\mathrm{T}}\right] \). Note that ρ is the clock bias vector, and its unit is meters rather than seconds because RDOA is used instead of TDOA. Besides, from the definition of the matrix Γ in (3), it can easily be checked that rank[Γ] = N − 1.
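As a quick check on the structure of Γ in (3), the sketch below builds it from hypothetical group sizes and verifies that rank[Γ] = N − 1; the helper name build_gamma and the group sizes are illustrative assumptions.

```python
# Build the (M-1) x (N-1) clock-bias mapping matrix Gamma defined in (3)
# from a hypothetical list of group sizes [M_1, ..., M_N].
import numpy as np

def build_gamma(group_sizes):
    M, N = sum(group_sizes), len(group_sizes)
    Gamma = np.zeros((M - 1, N - 1))
    row = group_sizes[0] - 1               # sensors 2..M_1 (group 1) carry no offset
    for n in range(1, N):                  # groups 2..N map to the columns of Gamma
        Gamma[row:row + group_sizes[n], n - 1] = 1.0
        row += group_sizes[n]
    return Gamma

Gamma = build_gamma([3, 2, 2])             # M = 7 sensors in N = 3 groups
assert np.linalg.matrix_rank(Gamma) == 3 - 1   # rank[Gamma] = N - 1
```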

On the other hand, the accurate sensor locations \( {\left\{{\mathbf{w}}_m\right\}}_{1\le m\le M} \) are not known, and only noisy versions of them, denoted by \( {\left\{{\hat{\mathbf{v}}}_m\right\}}_{1\le m\le M} \), are available. Mathematically, we have

$$ {\hat{\mathbf{v}}}_m={\mathbf{w}}_m+{\boldsymbol{\upxi}}_m\kern1em \left(1\le m\le M\right) $$
(4)

where \( {\boldsymbol{\upxi}}_m \) is the position error in \( {\hat{\mathbf{v}}}_m \). The collection of \( {\left\{{\hat{\mathbf{v}}}_m\right\}}_{1\le m\le M} \) forms a 3M × 1 sensor location vector as below:

$$ \hat{\mathbf{v}}=\mathbf{w}+\boldsymbol{\upxi} $$
(5)

where \( \hat{\mathbf{v}}={\left[{\hat{\mathbf{v}}}_1^{\mathrm{T}}\kern1em {\hat{\mathbf{v}}}_2^{\mathrm{T}}\kern1em \cdots \kern1em {\hat{\mathbf{v}}}_M^{\mathrm{T}}\right]}^{\mathrm{T}} \) and \( \boldsymbol{\upxi} ={\left[{\boldsymbol{\upxi}}_1^{\mathrm{T}}\kern1em {\boldsymbol{\upxi}}_2^{\mathrm{T}}\kern1em \cdots \kern1em {\boldsymbol{\upxi}}_M^{\mathrm{T}}\right]}^{\mathrm{T}} \). It is assumed that ξ is Gaussian distributed with zero mean and covariance matrix \( \mathbf{P}=\mathrm{E}\left[\boldsymbol{\upxi} {\boldsymbol{\upxi}}^{\mathrm{T}}\right] \). Moreover, ξ is independent of ε.

3.2 Measurement model for calibration source

Assume that there exist some calibration emitters that are not far from the target. Moreover, the locations of the calibration sources are accurately known. The RDOAs of the calibration signals are also measured using the sensors shown in Fig. 1. These measurements are helpful in reducing the effects of synchronization offsets and sensor location errors.

The number of calibration emitters is set to D, and the position of the dth calibration source is denoted as \( {\mathbf{u}}_{\mathrm{c},d}={\left[{x}_{\mathrm{c},d}\kern0.5em {y}_{\mathrm{c},d}\kern0.5em {z}_{\mathrm{c},d}\right]}^{\mathrm{T}} \) (1 ≤ d ≤ D). Since the timing synchronization offsets are caused by the differences among the local clocks of different sensor groups, it is reasonable to assume that the clock bias vector ρ remains the same for different signals [79]. As a consequence, the RDOA vector of the dth calibration signal can be written as:

$$ {\hat{\mathbf{r}}}_{\mathrm{c},d}={\mathbf{r}}_{\mathrm{c},d}+{\boldsymbol{\upvarepsilon}}_{\mathrm{c},d}=\mathbf{f}\left({\mathbf{u}}_{\mathrm{c},d},\mathbf{w}\right)+\boldsymbol{\Gamma} \boldsymbol{\uprho} +{\boldsymbol{\upvarepsilon}}_{\mathrm{c},d}\kern1em \left(1\le d\le D\right) $$
(6)

where \( {\boldsymbol{\upvarepsilon}}_{\mathrm{c},d} \) is the measurement error, which is modeled as a zero-mean Gaussian random vector with covariance matrix \( {\mathbf{Q}}_{\mathrm{c},d} \). Besides, \( {\boldsymbol{\upvarepsilon}}_{\mathrm{c},{d}_1} \) and \( {\boldsymbol{\upvarepsilon}}_{\mathrm{c},{d}_2} \) are statistically independent for \( {d}_1\ne {d}_2 \), and \( {\left\{{\boldsymbol{\upvarepsilon}}_{\mathrm{c},d}\right\}}_{1\le d\le D} \) are uncorrelated with ε and ξ.

Putting all the D equations in (6) together yields

$$ {\hat{\mathbf{r}}}_{\mathrm{c}}={\mathbf{r}}_{\mathrm{c}}+{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}=\overline{\mathbf{f}}\left(\mathbf{w}\right)+\overline{\boldsymbol{\Gamma}}\boldsymbol{\uprho} +{\boldsymbol{\upvarepsilon}}_{\mathrm{c}} $$
(7)

where

$$ \left\{\begin{array}{l}\overline{\boldsymbol{\Gamma}}={\mathbf{1}}_{D\times 1}\otimes \boldsymbol{\Gamma} \kern1em ,\kern1em \overline{\mathbf{f}}\left(\mathbf{w}\right)={\left[{\left(\mathbf{f}\left({\mathbf{u}}_{\mathrm{c},1},\mathbf{w}\right)\right)}^{\mathrm{T}}\kern1em {\left(\mathbf{f}\left({\mathbf{u}}_{\mathrm{c},2},\mathbf{w}\right)\right)}^{\mathrm{T}}\kern1em \cdots \kern1em {\left(\mathbf{f}\left({\mathbf{u}}_{\mathrm{c},D},\mathbf{w}\right)\right)}^{\mathrm{T}}\right]}^{\mathrm{T}}\\ {}{\hat{\mathbf{r}}}_{\mathrm{c}}={\left[{\hat{\mathbf{r}}}_{\mathrm{c},1}^{\mathrm{T}}\kern1em {\hat{\mathbf{r}}}_{\mathrm{c},2}^{\mathrm{T}}\kern1em \cdots \kern1em {\hat{\mathbf{r}}}_{\mathrm{c},D}^{\mathrm{T}}\right]}^{\mathrm{T}}\kern1em ,\kern1em {\mathbf{r}}_{\mathrm{c}}={\left[{\mathbf{r}}_{\mathrm{c},1}^{\mathrm{T}}\kern1em {\mathbf{r}}_{\mathrm{c},2}^{\mathrm{T}}\kern1em \cdots \kern1em {\mathbf{r}}_{\mathrm{c},D}^{\mathrm{T}}\right]}^{\mathrm{T}}\kern1em ,\kern1em {\boldsymbol{\upvarepsilon}}_{\mathrm{c}}={\left[{\boldsymbol{\upvarepsilon}}_{\mathrm{c},1}^{\mathrm{T}}\kern1em {\boldsymbol{\upvarepsilon}}_{\mathrm{c},2}^{\mathrm{T}}\kern1em \cdots \kern1em {\boldsymbol{\upvarepsilon}}_{\mathrm{c},D}^{\mathrm{T}}\right]}^{\mathrm{T}}\end{array}\right. $$
(8)

It follows from the above assumptions that \( {\boldsymbol{\upvarepsilon}}_{\mathrm{c}} \) is a zero-mean Gaussian random vector with covariance matrix \( {\mathbf{Q}}_{\mathrm{c}}=\mathrm{blkdiag}\left[{\mathbf{Q}}_{\mathrm{c},1}\kern0.5em {\mathbf{Q}}_{\mathrm{c},2}\kern0.5em \cdots \kern0.5em {\mathbf{Q}}_{\mathrm{c},D}\right] \).
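Numerically, the stacked quantities in (7) and (8) follow directly from the Kronecker-product and block-diagonal definitions. The sketch below is a generic construction under assumed inputs (a given Gamma, a list of per-calibration covariances, and calibration positions Uc); it is an illustration, not the authors' implementation.

```python
# Stack the calibration model (7)-(8): Gamma_bar = 1_{D x 1} (kron) Gamma,
# Qc = blkdiag(Qc_1, ..., Qc_D), and f_bar(w) formed from f(u_{c,d}, w).
import numpy as np
from scipy.linalg import block_diag

def stack_calibration(Gamma, Qc_list):
    """Gamma: (M-1) x (N-1); Qc_list: D covariance matrices, each (M-1) x (M-1)."""
    D = len(Qc_list)
    Gamma_bar = np.kron(np.ones((D, 1)), Gamma)      # 1_{D x 1} (kron) Gamma
    Qc = block_diag(*Qc_list)                        # blkdiag[Qc_1 ... Qc_D]
    return Gamma_bar, Qc

def fbar(w_flat, Uc):
    """f_bar(w) in (8): the D(M-1) x 1 stack of f(u_{c,d}, w), d = 1..D.
    w_flat is the 3M x 1 stacked sensor vector, Uc is D x 3 calibration positions."""
    W = w_flat.reshape(-1, 3)
    d_mat = np.linalg.norm(Uc[:, None, :] - W[None, :, :], axis=2)   # D x M distances
    return (d_mat[:, 1:] - d_mat[:, :1]).ravel()
```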

3.3 Determination of covariance matrices

Before proceeding further, it must be noted that the theoretical development requires knowledge of the covariance matrices Q, Qc, and P, which may not be known in practice. Fortunately, they can be well estimated in real-life applications.

First, we discuss how to obtain the covariance matrix of the TDOA measurements. According to [80], the TDOA estimated by generalized cross-correlation with Gaussian data is asymptotically normally distributed. Besides, it follows from [81] that the TDOA noise vector in Hahn and Tretter’s estimator is also asymptotically normal. If the noise power spectral densities are similar at the sensors, the covariance matrices can be replaced by a matrix with diagonal elements equal to 1 and all other elements equal to 0.5, according to the analysis in [5]. If this condition is not satisfied, then for many estimation methods (such as the maximum likelihood (ML) estimator), Q and Qc approach the CRB under Gaussian noise, which has an explicit form as shown in [82, 83].
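As a concrete illustration of this structure (diagonal elements equal to 1 and all other elements equal to 0.5), one possible construction is sketched below; the sensor count and the common noise scale sigma are assumptions made only for illustration.

```python
# RDOA covariance with the structure discussed above: unit diagonal entries and
# off-diagonal entries of 0.5, scaled here by an assumed common noise power.
import numpy as np

M = 7              # number of sensors (illustrative assumption)
sigma = 1.0        # assumed common RDOA noise scale (m)
Q = sigma**2 * (0.5 * np.eye(M - 1) + 0.5 * np.ones((M - 1, M - 1)))
# The calibration covariances Qc,d can be formed in the same way when the
# calibration signals have a similar noise level (also an assumption).
```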

Second, the covariance matrix of sensor location errors can be obtained by a large number of off-line observations. Specifically, it can be estimated during calibration by using a source of known location and by measuring the amounts of perturbations in the sensor positions. The detailed estimation method can be found in [84, 85]. On the other hand, according to the discussion in [24], some scattering models from the environment may also help to determine the covariance matrix.

Although there is no mathematically substantiated argument showing that the approximation of the covariance matrices does not introduce much loss in accuracy, some previous work indicates that the performance degradation due to the approximation is insignificant [86, 87]. The detailed sensitivity of the proposed estimators to inaccurate knowledge of Q, Qc, and P can be considered as a subject for further study. Once the covariance matrices mentioned above are obtained, or well estimated by the methods above, we can use them for source localization.

3.4 Problem formulation

In this work, two important problems need to be studied. First, we intend to determine the performance gain in parameter estimation accuracy due to the introduction of calibration emitters. This problem can be solved by deriving and analyzing the CRB expressions. Second, it is necessary to present an effective method that provides estimates of u, w, and ρ that are as accurate as possible with the help of the RDOA measurements from the calibration sources.

4 CRB derivation and analysis

It is well known that the CRB establishes a lower bound on the error covariance matrix for any unbiased estimate of a parameter vector. It is often used to investigate the optimality of parametric estimators. This section is devoted to the derivation of the CRB on the estimation of the parameters of interest. The obtained results can provide some valuable insights into the performance gain for source localization through the introduction of calibration signals. Additionally, they can also be considered as a performance benchmark for the proposed solutions in Section 5.

4.1 CRB derivation and analysis based on all the RDOA measurements

In this subsection, the CRB on the covariance matrix of the parameter estimates is deduced based on the RDOA measurements from both the target source and the calibration emitters. In this situation, the observations consist of \( \hat{\mathbf{r}} \), \( \hat{\mathbf{v}} \), and \( {\hat{\mathbf{r}}}_{\mathrm{c}} \), and the unknowns include u, ρ, and w. Hence, we define the data vector \( \boldsymbol{\upeta} ={\left[{\hat{\mathbf{r}}}^{\mathrm{T}}\kern1em {\hat{\mathbf{v}}}^{\mathrm{T}}\kern1em {\hat{\mathbf{r}}}_{\mathrm{c}}^{\mathrm{T}}\right]}^{\mathrm{T}} \) as well as the parameter vector \( \boldsymbol{\upmu} ={\left[{\mathbf{u}}^{\mathrm{T}}\kern1em {\mathbf{w}}^{\mathrm{T}}\kern1em {\boldsymbol{\uprho}}^{\mathrm{T}}\right]}^{\mathrm{T}} \). Under the assumptions stated in Section 3, the logarithm of the probability density function (PDF) of η parameterized on μ is given by

$$ {\displaystyle \begin{array}{l}\ln \left(p\left(\boldsymbol{\upeta} |\boldsymbol{\upmu} \right)\right)=L-\frac{1}{2}{\left(\hat{\mathbf{r}}-\mathbf{f}\left(\mathbf{u},\mathbf{w}\right)-\boldsymbol{\Gamma} \boldsymbol{\uprho} \right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\left(\hat{\mathbf{r}}-\mathbf{f}\left(\mathbf{u},\mathbf{w}\right)-\boldsymbol{\Gamma} \boldsymbol{\uprho} \right)-\frac{1}{2}{\left(\hat{\mathbf{v}}-\mathbf{w}\right)}^{\mathrm{T}}{\mathbf{P}}^{-1}\left(\hat{\mathbf{v}}-\mathbf{w}\right)\\ {}\kern5.62em -\frac{1}{2}{\left({\hat{\mathbf{r}}}_{\mathrm{c}}-\overline{\mathbf{f}}\left(\mathbf{w}\right)-\overline{\boldsymbol{\Gamma}}\boldsymbol{\uprho} \right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\left({\hat{\mathbf{r}}}_{\mathrm{c}}-\overline{\mathbf{f}}\left(\mathbf{w}\right)-\overline{\boldsymbol{\Gamma}}\boldsymbol{\uprho} \right)\end{array}} $$
(9)

where L is a constant independent of μ. It can be readily verified from (9) that

$$ \frac{\partial \ln \left(p\left(\boldsymbol{\upeta} |\boldsymbol{\upmu} \right)\right)}{\partial \boldsymbol{\upmu}}=\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\upvarepsilon} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\upvarepsilon} +{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}+{\mathbf{P}}^{-1}\boldsymbol{\upxi} \\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\upvarepsilon} +{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}\end{array}\right] $$
(10)

where \( {\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)=\frac{\partial \mathbf{f}\left(\mathbf{u},\mathbf{w}\right)}{\partial {\mathbf{u}}^{\mathrm{T}}} \), \( {\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)=\frac{\partial \mathbf{f}\left(\mathbf{u},\mathbf{w}\right)}{\partial {\mathbf{w}}^{\mathrm{T}}} \), and \( \overline{\mathbf{F}}\left(\mathbf{w}\right)=\frac{\partial \overline{\mathbf{f}}\left(\mathbf{w}\right)}{\partial {\mathbf{w}}^{\mathrm{T}}} \). Using (10), we can obtain the CRB matrix for μ as

$$ {\displaystyle \begin{array}{c}\mathbf{CRB}\left(\boldsymbol{\upmu} \right)={\left(\mathrm{E}\left[\frac{\partial \ln \left(p\left(\boldsymbol{\upeta} |\boldsymbol{\upmu} \right)\right)}{\partial \boldsymbol{\upmu}}{\left(\frac{\partial \ln \left(p\left(\boldsymbol{\upeta} |\boldsymbol{\upmu} \right)\right)}{\partial \boldsymbol{\upmu}}\right)}^{\mathrm{T}}\right]\right)}^{-1}\\ {}={\left[\begin{array}{ccc}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)& {\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& \begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}+{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\end{array}& {\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)+{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)& {\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]}^{-1}\end{array}} $$
(11)

For convenience, we define the matrices

$$ \left\{\begin{array}{l}\mathbf{X}={\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\kern1em ,\kern1em \mathbf{Y}={\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\left[{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\kern1em \boldsymbol{\Gamma} \right]\\ {}\mathbf{Z}=\left[\begin{array}{cc}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)+{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}& {\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)+{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)& {\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]\end{array}\right. $$
(12)

Then, applying the matrix identities I and II in Table 2 yields

$$ \left\{\begin{array}{l}\mathbf{CRB}\left(\mathbf{u}\right)={\mathbf{X}}^{-1}+{\mathbf{X}}^{-1}\mathbf{Y}{\left(\mathbf{Z}-{\mathbf{Y}}^{\mathrm{T}}{\mathbf{X}}^{-1}\mathbf{Y}\right)}^{-1}{\mathbf{Y}}^{\mathrm{T}}{\mathbf{X}}^{-1}\\ {}\mathbf{CRB}\left(\mathbf{w}\right)=\left[{\mathbf{I}}_{3M}\kern1em {\mathbf{O}}_{3M\times \left(N-1\right)}\right]{\mathbf{Z}}^{-1}\left[\begin{array}{c}{\mathbf{I}}_{3M}\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3M}\end{array}\right]+\left[{\mathbf{I}}_{3M}\kern1em {\mathbf{O}}_{3M\times \left(N-1\right)}\right]{\mathbf{Z}}^{-1}{\mathbf{Y}}^{\mathrm{T}}{\left(\mathbf{X}-{\mathbf{Y}\mathbf{Z}}^{-1}{\mathbf{Y}}^{\mathrm{T}}\right)}^{-1}{\mathbf{Y}\mathbf{Z}}^{-1}\left[\begin{array}{c}{\mathbf{I}}_{3M}\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3M}\end{array}\right]\\ {}\mathbf{CRB}\left(\boldsymbol{\uprho} \right)=\left[{\mathbf{O}}_{\left(N-1\right)\times 3M}\kern1em {\mathbf{I}}_{N-1}\right]{\mathbf{Z}}^{-1}\left[\begin{array}{c}{\mathbf{O}}_{3M\times \left(N-1\right)}\\ {}{\mathbf{I}}_{N-1}\end{array}\right]+\left[{\mathbf{O}}_{\left(N-1\right)\times 3M}\kern1em {\mathbf{I}}_{N-1}\right]{\mathbf{Z}}^{-1}{\mathbf{Y}}^{\mathrm{T}}{\left(\mathbf{X}-{\mathbf{Y}\mathbf{Z}}^{-1}{\mathbf{Y}}^{\mathrm{T}}\right)}^{-1}{\mathbf{Y}\mathbf{Z}}^{-1}\left[\begin{array}{c}{\mathbf{O}}_{3M\times \left(N-1\right)}\\ {}{\mathbf{I}}_{N-1}\end{array}\right]\end{array}\right. $$
(13)
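The block structure of (11)-(13) maps directly into code. The sketch below assembles X, Y, and Z from generic Jacobians F1, F2, and Fbar (placeholders for the derivatives defined above) and extracts CRB(u), CRB(w), and CRB(ρ); it is a numerical illustration, not the paper's software.

```python
# Assemble X, Y, Z of (12) from generic Jacobians F1 = df/du^T, F2 = df/dw^T,
# Fbar = dfbar/dw^T and mapping matrices Gamma, Gamma_bar, then evaluate the
# CRBs in (13). All inputs are placeholders supplied by the user.
import numpy as np

def crb_all(F1, F2, Fbar, Gamma, Gamma_bar, Q, Qc, P, M):
    Qi, Qci, Pi = np.linalg.inv(Q), np.linalg.inv(Qc), np.linalg.inv(P)
    X = F1.T @ Qi @ F1
    Y = np.hstack([F1.T @ Qi @ F2, F1.T @ Qi @ Gamma])
    Z = np.block([
        [F2.T @ Qi @ F2 + Fbar.T @ Qci @ Fbar + Pi,
         F2.T @ Qi @ Gamma + Fbar.T @ Qci @ Gamma_bar],
        [Gamma.T @ Qi @ F2 + Gamma_bar.T @ Qci @ Fbar,
         Gamma.T @ Qi @ Gamma + Gamma_bar.T @ Qci @ Gamma_bar],
    ])
    Xi, Zi = np.linalg.inv(X), np.linalg.inv(Z)
    crb_u = Xi + Xi @ Y @ np.linalg.inv(Z - Y.T @ Xi @ Y) @ Y.T @ Xi
    T = Zi + Zi @ Y.T @ np.linalg.inv(X - Y @ Zi @ Y.T) @ Y @ Zi   # CRB of [w; rho]
    return crb_u, T[:3 * M, :3 * M], T[3 * M:, 3 * M:]             # CRB(u), CRB(w), CRB(rho)
```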

At this point, we quantify the gain in source localization accuracy due to the introduction of calibration sources. For this purpose, it is necessary to compare CRB(u) with the CRB of u for the case without calibration emitters. The latter is denoted as CRBo(u), and it can be written as [24]

$$ {\mathbf{CRB}}_{\mathrm{o}}\left(\mathbf{u}\right)={\mathbf{X}}^{-1}+{\mathbf{X}}^{-1}\mathbf{Y}{\left({\mathbf{Z}}_{\mathrm{o}}-{\mathbf{Y}}^{\mathrm{T}}{\mathbf{X}}^{-1}\mathbf{Y}\right)}^{-1}{\mathbf{Y}}^{\mathrm{T}}{\mathbf{X}}^{-1} $$
(14)

where

$$ {\mathbf{Z}}_{\mathrm{o}}=\left[\begin{array}{cc}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)+{\mathbf{P}}^{-1}& {\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)& {\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \end{array}\right] $$
(15)

It is straightforward to verify from the third equality in (12) and (15) that

$$ \mathbf{Z}={\mathbf{Z}}_{\mathrm{o}}+\left[\begin{array}{c}{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}\end{array}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1}{\left[\begin{array}{c}{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}} $$
(16)

which combined with the first equality in (13), (14) and the matrix identity (II) in Table 2 leads to

$$ {\displaystyle \begin{array}{c}{\mathbf{CRB}}_{\mathrm{o}}\left(\mathbf{u}\right)-\mathbf{CRB}\left(\mathbf{u}\right)={\mathbf{X}}^{-1}\mathbf{Y}{\left({\mathbf{Z}}_{\mathrm{o}}-{\mathbf{Y}}^{\mathrm{T}}{\mathbf{X}}^{-1}\mathbf{Y}\right)}^{-1}\left[\begin{array}{c}{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}\end{array}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\left({\mathbf{I}}_{D\left(M-1\right)}+{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\left[\begin{array}{c}{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}{\left({\mathbf{Z}}_{\mathrm{o}}-{\mathbf{Y}}^{\mathrm{T}}{\mathbf{X}}^{-1}\mathbf{Y}\right)}^{-1}\left[\begin{array}{c}{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}\end{array}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\right)}^{-1}\\ {}\times {\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\left[\begin{array}{c}{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}{\left({\mathbf{Z}}_{\mathrm{o}}-{\mathbf{Y}}^{\mathrm{T}}{\mathbf{X}}^{-1}\mathbf{Y}\right)}^{-1}{\mathbf{Y}}^{\mathrm{T}}{\mathbf{X}}^{-1}\end{array}} $$
(17)

Before proceeding, some remarks are in order.

4.1.1 Remark 1

The term on the right side of (17) represents the performance improvement resulting from the utilization of the calibration sources. It is clear that if \( {\mathbf{Q}}_{\mathrm{c}}^{-1/2}\to \mathbf{O} \), then CRBo(u) → CRB(u), which means that there is no improvement in the localization accuracy. This is not unexpected, because \( {\mathbf{Q}}_{\mathrm{c}}^{-1/2}\to \mathbf{O} \) implies that the RDOA measurements of the calibration emitters are so noisy that they become useless.

4.1.2 Remark 2

It is easy to check that the term on the right side of (17) has a symmetric structure and is a positive semidefinite matrix. Therefore, using the calibration emitters is helpful in improving the attainable localization accuracy. In fact, the resulting performance gain is considerable at typical measurement error levels, as illustrated in the simulation section.

4.1.3 Remark 3

In Appendix 1, we provide an alternative expression for CRB(u) as follows:

$$ \mathbf{CRB}\left(\mathbf{u}\right)={\left({\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\left(\mathbf{Q}+{\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}\boldsymbol{\Phi} \left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]\right)}^{-1}{\mathbf{F}}_1\Big(\mathbf{u},\mathbf{w}\Big)\right)}^{-1} $$
(18)

where

$$ \boldsymbol{\Phi} ={\left[\begin{array}{cc}{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}& {\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\\ {}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)& {\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]}^{-1} $$
(19)

Note that this CRB expression is useful for the performance analysis in Section 6.2.1.

4.1.4 Remark 4

In Appendix 2, the explicit expressions for \( \mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{u}\\ {}\mathbf{w}\end{array}\right]\right) \), \( \mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right) \), and CRB(ρ) are shown. They are useful in the subsequent theoretical development. In addition, it is worth pointing out that all the CRBs obtained above are independent of ρ.

4.2 CRB derivation and analysis based on the RDOA measurements of calibration signals

The aim of this subsection is to derive the CRB on the estimation of the parameters ρ and w based on the RDOA measurements from the calibration emitters only. The associated CRB matrix is denoted by \( {\mathbf{CRB}}_{\mathrm{c}}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right) \). In this situation, the observation and parameter vectors are defined as \( {\boldsymbol{\upeta}}_{\mathrm{c}}={\left[{\hat{\mathbf{v}}}^{\mathrm{T}}\kern1em {\hat{\mathbf{r}}}_{\mathrm{c}}^{\mathrm{T}}\right]}^{\mathrm{T}} \) and \( {\boldsymbol{\upmu}}_{\mathrm{c}}={\left[{\mathbf{w}}^{\mathrm{T}}\kern1em {\boldsymbol{\uprho}}^{\mathrm{T}}\right]}^{\mathrm{T}} \), respectively. It follows from the assumptions described in Section 3 that the logarithm of the PDF of ηc conditioned on μc can be written as:

$$ \ln \left({p}_{\mathrm{c}}\left({\boldsymbol{\upeta}}_{\mathrm{c}}|{\boldsymbol{\upmu}}_{\mathrm{c}}\right)\right)={L}_{\mathrm{c}}-\frac{1}{2}{\left({\hat{\mathbf{r}}}_{\mathrm{c}}-\overline{\mathbf{f}}\left(\mathbf{w}\right)-\overline{\boldsymbol{\Gamma}}\boldsymbol{\uprho} \right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\left({\hat{\mathbf{r}}}_{\mathrm{c}}-\overline{\mathbf{f}}\left(\mathbf{w}\right)-\overline{\boldsymbol{\Gamma}}\boldsymbol{\uprho} \right)-\frac{1}{2}{\left(\hat{\mathbf{v}}-\mathbf{w}\right)}^{\mathrm{T}}{\mathbf{P}}^{-1}\left(\hat{\mathbf{v}}-\mathbf{w}\right) $$
(20)

where Lc is a constant that is unrelated to μc. From (20), we have

$$ \frac{\partial \ln \left({p}_{\mathrm{c}}\left({\boldsymbol{\upeta}}_{\mathrm{c}}|{\boldsymbol{\upmu}}_{\mathrm{c}}\right)\right)}{\partial {\boldsymbol{\upmu}}_{\mathrm{c}}}=\left[\begin{array}{c}{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}+{\mathbf{P}}^{-1}\boldsymbol{\upxi} \\ {}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}\end{array}\right] $$
(21)

which implies that the CRB matrix for \( {\boldsymbol{\upmu}}_{\mathrm{c}}={\left[{\mathbf{w}}^{\mathrm{T}}\kern1em {\boldsymbol{\uprho}}^{\mathrm{T}}\right]}^{\mathrm{T}} \) is equal to

$$ {\mathbf{CRB}}_{\mathrm{c}}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right)={\left(\mathrm{E}\left[\frac{\partial \ln \left({p}_{\mathrm{c}}\left({\boldsymbol{\upeta}}_{\mathrm{c}}|{\boldsymbol{\upmu}}_{\mathrm{c}}\right)\right)}{\partial {\boldsymbol{\upmu}}_{\mathrm{c}}}{\left(\frac{\partial \ln \left({p}_{\mathrm{c}}\left({\boldsymbol{\upeta}}_{\mathrm{c}}|{\boldsymbol{\upmu}}_{\mathrm{c}}\right)\right)}{\partial {\boldsymbol{\upmu}}_{\mathrm{c}}}\right)}^{\mathrm{T}}\right]\right)}^{-1}={\left[\begin{array}{cc}{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}& {\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\\ {}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)& {\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]}^{-1}=\boldsymbol{\Phi} $$
(22)

Combining (22) and (84) leads to

$$ {\left({\mathbf{CRB}}_{\mathrm{c}}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right)\right)}^{-1}-{\left(\mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right)\right)}^{-1}=\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]{\mathbf{Q}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right]{\mathbf{Q}}^{-1/2}{\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}\ge \mathbf{O}\Rightarrow {\mathbf{CRB}}_{\mathrm{c}}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right)\le \mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right) $$
(23)

which means that the RDOA measurements from the target source can be exploited to improve the optimum estimation accuracy for w and ρ.

On the other hand, applying the matrix identities (I–III) in Table 2 yields

$$ {\mathbf{CRB}}_{\mathrm{c}}\left(\mathbf{w}\right)={\left({\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\right)}^{-1} $$
(24)
$$ {\mathbf{CRB}}_{\mathrm{c}}\left(\boldsymbol{\uprho} \right)={\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}+{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right){\mathbf{CRB}}_{\mathrm{c}}\left(\mathbf{w}\right){\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1} $$
(25)

It is noteworthy that these two CRB expressions are useful for the theoretical analysis in Section 5.2. In addition, it can be easily observed from (24) and (25) that both CRBc(w) and CRBc(ρ) are independent of ρ.
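For a numerical evaluation of (24) and (25), the weighted projector \( {\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2} \) can be written in the algebraically equivalent form \( {\mathbf{Q}}_{\mathrm{c}}^{-1}-{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1} \), which avoids forming a matrix square root. The sketch below follows this route; its inputs are placeholders for the quantities defined above, not the authors' code.

```python
# Evaluate CRB_c(w) and CRB_c(rho) in (24)-(25); the weighted projector
# Qc^{-1/2} Pi_perp[Qc^{-1/2} Gamma_bar] Qc^{-1/2} is formed without square roots.
import numpy as np

def crb_calibration_only(Fbar, Gamma_bar, Qc, P):
    Qci, Pi = np.linalg.inv(Qc), np.linalg.inv(P)
    G = Qci @ Gamma_bar
    W_perp = Qci - G @ np.linalg.solve(Gamma_bar.T @ G, G.T)       # weighted projector
    crb_w = np.linalg.inv(Fbar.T @ W_perp @ Fbar + Pi)             # Eq. (24)
    B = np.linalg.solve(Gamma_bar.T @ G, G.T @ Fbar)               # (Gb^T Qc^-1 Gb)^-1 Gb^T Qc^-1 Fbar
    crb_rho = np.linalg.inv(Gamma_bar.T @ G) + B @ crb_w @ B.T     # Eq. (25)
    return crb_w, crb_rho
```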

5 Proposed TDOA localization methods

The objective of this section is to develop effective TDOA localization methods in the presence of synchronization offsets and sensor location errors when a set of calibration emitters is available. The presented methods are based on the Taylor-series expansion. In order to decrease the number of variables involved in the iteration, we devise dimension-reduction Taylor-series iterative algorithms. More specifically, the novel approach consists of two stages. The first stage estimates the clock bias and refines the sensor positions based on the calibration RDOA measurements as well as the prior knowledge of the sensor locations. The second stage estimates the source location by combining the RDOA measurements of the target signal with the estimates obtained in the first stage. Moreover, the sensor locations and the clock bias can be further refined relative to the estimates from the first stage.

5.1 Stage 1 of the proposed methods

In the first stage, the measurement vectors \( {\hat{\mathbf{r}}}_{\mathrm{c}} \) and \( \hat{\mathbf{v}} \) are combined to estimate ρ and w. In order to obtain the optimum accuracy, the ML criterion is adopted, and the corresponding minimization problem can be formulated as:

$$ \underset{\mathbf{w},\boldsymbol{\uprho}}{\min}\left\{{\left\Vert \left[\begin{array}{c}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\hat{\mathbf{r}}}_{\mathrm{c}}\\ {}{\mathbf{P}}^{-1/2}\hat{\mathbf{v}}\end{array}\right]-\left[\begin{array}{c}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\left(\overline{\mathbf{f}}\left(\mathbf{w}\right)+\overline{\boldsymbol{\Gamma}}\boldsymbol{\uprho} \right)\\ {}{\mathbf{P}}^{-1/2}\mathbf{w}\end{array}\right]\right\Vert}_2^2\right\} $$
(26)

There is no doubt that the conventional Taylor-series iterative algorithm, as discussed in [8, 88], can be used to solve (26) and jointly estimate ρ and w. However, in this subsection, we would like to present an alternative Taylor-series iterative algorithm, which is able to reduce the number of parameters involved in the iteration. Note that the objective function in (26) is quadratic in ρ; hence, the optimal solution to ρ can be obtained in closed form as below:

$$ {\hat{\boldsymbol{\rho}}}_{\mathrm{f},\mathrm{opt}}={\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\left({\hat{\mathbf{r}}}_{\mathrm{c}}-\overline{\mathbf{f}}\left(\mathbf{w}\right)\right) $$
(27)

where subscript “f” is added to emphasize that this is the solution in the first stage. Inserting (27) back into (26) results in the following concentrated objective function:

$$ \underset{\mathbf{w}}{\min}\left\{{\left\Vert \left[\begin{array}{c}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\hat{\mathbf{r}}}_{\mathrm{c}}\\ {}{\mathbf{P}}^{-1/2}\hat{\mathbf{v}}\end{array}\right]-\left[\begin{array}{c}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{f}}\left(\mathbf{w}\right)\\ {}{\mathbf{P}}^{-1/2}\mathbf{w}\end{array}\right]\right\Vert}_2^2\right\} $$
(28)

The only unknown to be optimized in (28) is w, and this minimization problem can be solved by the traditional Taylor-series iterative algorithm. The corresponding iterative formula is given by

$$ {\displaystyle \begin{array}{c}{\hat{\mathbf{w}}}_{\mathrm{f}}^{\left(k+1\right)}={\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}+{\left({\left(\overline{\mathbf{F}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{F}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}\right)+{\mathbf{P}}^{-1}\right)}^{-1}\\ {}\times \left({\left(\overline{\mathbf{F}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\left({\hat{\mathbf{r}}}_{\mathrm{c}}-\overline{\mathbf{f}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}\right)\right)+{\mathbf{P}}^{-1}\left(\hat{\mathbf{v}}-{\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}\right)\right)\end{array}} $$
(29)

where the superscript k indexes the iteration number and \( {\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)} \) denotes the estimate at the kth iteration. If the sequence \( {\left\{{\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)}\right\}}_{1\le k\le +\infty } \) converges to \( {\hat{\mathbf{w}}}_{\mathrm{f}} \), then this vector can be regarded as the solution for the sensor locations in the first stage.

When the iteration process in (29) is terminated, the solution of ρ can be immediately determined by

$$ {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}={\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\left({\hat{\mathbf{r}}}_{\mathrm{c}}-\overline{\mathbf{f}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right) $$
(30)
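The two steps of Stage 1, iterating (29) on w and then evaluating (30), can be sketched as follows. The function handles fbar and Fbar (returning \( \overline{\mathbf{f}}\left(\mathbf{w}\right) \) and its Jacobian \( \overline{\mathbf{F}}\left(\mathbf{w}\right) \)), the initialization at \( \hat{\mathbf{v}} \), and the fixed iteration count are assumptions for illustration; a convergence test on the update norm could replace the latter.

```python
# Stage-1 sketch: dimension-reduced Taylor-series iteration (29) for w,
# followed by the closed-form clock-bias estimate (30). fbar(w) and Fbar(w)
# are user-supplied value/Jacobian functions; this is not the authors' code.
import numpy as np

def stage1(r_c_hat, v_hat, fbar, Fbar, Gamma_bar, Qc, P, n_iter=20):
    Qci, Pi = np.linalg.inv(Qc), np.linalg.inv(P)
    G = Qci @ Gamma_bar
    W_perp = Qci - G @ np.linalg.solve(Gamma_bar.T @ G, G.T)   # Qc^{-1/2} Pi_perp[.] Qc^{-1/2}
    w = v_hat.copy()                                           # start from the noisy sensor positions
    for _ in range(n_iter):                                    # iteration (29)
        J = Fbar(w)
        H = J.T @ W_perp @ J + Pi
        g = J.T @ W_perp @ (r_c_hat - fbar(w)) + Pi @ (v_hat - w)
        w = w + np.linalg.solve(H, g)
    rho = np.linalg.solve(Gamma_bar.T @ G, G.T @ (r_c_hat - fbar(w)))   # Eq. (30)
    return w, rho
```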

At this point, we make two important remarks about the proposed algorithm in the first stage.

5.1.1 Remark 5

In the procedure stated above, the estimation of w and ρ is decoupled: the two are estimated sequentially, which lowers the computational complexity.

5.1.2 Remark 6

Both \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \) are asymptotically efficient solutions because their performance can attain the CRB derived in Section 4.2. We prove this result analytically in Section 5.2.

5.2 MSE analysis in the first stage

The aim of this subsection is to deduce the MSE expressions of \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \) by employing a first-order perturbation analysis. Moreover, the two MSEs are proved to asymptotically reach the corresponding CRBs given in Section 4.2. It is worth emphasizing that the MSE expressions for the estimates in the first stage are important because they are used in the second stage.

5.2.1 MSE expression of \( {\hat{\mathbf{w}}}_{\mathrm{f}} \)

The theoretical MSE of \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) is derived here. Taking the limit on both sides of (29) produces

$$ {\left(\overline{\mathbf{F}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\left({\hat{\mathbf{r}}}_{\mathrm{c}}-\overline{\mathbf{f}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)+{\mathbf{P}}^{-1}\left(\hat{\mathbf{v}}-{\hat{\mathbf{w}}}_{\mathrm{f}}\right)={\mathbf{O}}_{3M\times 1} $$
(31)

where \( {\hat{\mathbf{w}}}_{\mathrm{f}}=\underset{k\to +\infty }{\lim }{\hat{\mathbf{w}}}_{\mathrm{f}}^{(k)} \). Substituting (5) and (7) into (31) and ignoring the second- and higher-order error terms leads to

$$ {\displaystyle \begin{array}{c}{\mathbf{O}}_{3M\times 1}={\left(\overline{\mathbf{F}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\left(\overline{\mathbf{f}}\left(\mathbf{w}\right)-\overline{\mathbf{f}}\left({\hat{\mathbf{w}}}_{\mathrm{f}}\right)+\overline{\boldsymbol{\Gamma}}\boldsymbol{\uprho} +{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}\right)+{\mathbf{P}}^{-1}\left(\mathbf{w}-{\hat{\mathbf{w}}}_{\mathrm{f}}+\boldsymbol{\upxi} \right)\\ {}\approx {\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\left({\boldsymbol{\upvarepsilon}}_{\mathrm{c}}-\overline{\mathbf{F}}\left(\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\right)+{\mathbf{P}}^{-1}\left(\boldsymbol{\upxi} -\Delta {\mathbf{w}}_{\mathrm{f}}\right)\end{array}} $$
(32)

where \( \Delta {\mathbf{w}}_{\mathrm{f}}={\hat{\mathbf{w}}}_{\mathrm{f}}-\mathbf{w} \) is the estimation error in \( {\hat{\mathbf{w}}}_{\mathrm{f}} \). Note that the second (approximate) equality in (32) exploits the relation \( {\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}={\mathbf{O}}_{D\left(M-1\right)\times \left(N-1\right)} \). It follows immediately from (32) that

$$ \Delta {\mathbf{w}}_{\mathrm{f}}\approx {\left({\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\right)}^{-1}\left({\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}+{\mathbf{P}}^{-1}\boldsymbol{\upxi} \right) $$
(33)

which yields

$$ \mathbf{MSE}\left({\hat{\mathbf{w}}}_{\mathrm{f}}\right)=\mathrm{E}\left[\Delta {\mathbf{w}}_{\mathrm{f}}{\left(\Delta {\mathbf{w}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right]={\left({\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\right)}^{-1}={\mathbf{CRB}}_{\mathrm{c}}\left(\mathbf{w}\right) $$
(34)

Therefore, the estimate \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) is asymptotically efficient.
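To make the expression in (34) concrete, the following minimal NumPy sketch evaluates the projector \( {\boldsymbol{\Pi}}^{\perp}\left[\cdot \right] \) and the resulting MSE/CRB matrix from user-supplied matrices. All dimensions and matrices below are illustrative placeholders and are not taken from the simulation setup of this paper.

```python
import numpy as np

def orth_proj_complement(A):
    # Pi_perp(A) = I - A (A^T A)^{-1} A^T; pinv is used for numerical robustness
    return np.eye(A.shape[0]) - A @ np.linalg.pinv(A)

def sym_inv_sqrt(S):
    # Symmetric inverse square root of a positive definite matrix
    evals, evecs = np.linalg.eigh(S)
    return evecs @ np.diag(evals ** -0.5) @ evecs.T

# Placeholder sizes: M sensors, N-1 clock biases, n_c calibration TDOAs
rng = np.random.default_rng(0)
M, Nm1, n_c = 5, 3, 12
F_bar   = rng.standard_normal((n_c, 3 * M))   # Jacobian of f_bar(.) w.r.t. w
Gam_bar = rng.standard_normal((n_c, Nm1))     # clock-bias mapping matrix
Q_c     = 0.01 * np.eye(n_c)                  # calibration TDOA covariance
P       = 0.25 * np.eye(3 * M)                # prior sensor-position covariance

Qc_inv_sqrt = sym_inv_sqrt(Q_c)
Pi_perp = orth_proj_complement(Qc_inv_sqrt @ Gam_bar)
A = Qc_inv_sqrt @ F_bar
mse_w_f = np.linalg.inv(A.T @ Pi_perp @ A + np.linalg.inv(P))  # cf. (34)
print(mse_w_f.shape)   # (3M, 3M)
```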

5.2.2 MSE expression of \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \)

This subsection is devoted to deriving the analytical MSE expression of \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \). Inserting (7) into (30) and neglecting the second- and higher-order error terms, we obtain

$$ {\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\left(\boldsymbol{\uprho} +\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\right)\approx {\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\left(\overline{\boldsymbol{\Gamma}}\boldsymbol{\uprho} +{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}-\overline{\mathbf{F}}\left(\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\right)\Rightarrow \Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\approx {\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}-{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}} $$
(35)

where \( \Delta {\boldsymbol{\uprho}}_{\mathrm{f}}={\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}-\boldsymbol{\uprho} \) is the estimation error in \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \). It can be checked from (35) that

$$ \mathbf{MSE}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\right)=\mathrm{E}\left[\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}{\left(\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right]={\mathbf{A}}_1+{\mathbf{A}}_2+{\mathbf{A}}_3+{\mathbf{A}}_3^{\mathrm{T}} $$
(36)

where

$$ \left\{\begin{array}{l}{\mathbf{A}}_1={\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}\\ {}{\mathbf{A}}_2={\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right){\mathbf{CRB}}_{\mathrm{c}}\left(\mathbf{w}\right){\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}\\ {}{\mathbf{A}}_3=-{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\cdot \mathrm{E}\left[{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}{\left(\Delta {\mathbf{w}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right]\cdot {\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}\end{array}\right. $$
(37)

Appendix 3 shows that \( {\mathbf{A}}_3={\mathbf{O}}_{\left(N-1\right)\times \left(N-1\right)} \), which, together with (36) and (37), implies

$$ \mathbf{MSE}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\right)={\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}+{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right){\mathbf{CRB}}_{\mathrm{c}}\left(\mathbf{w}\right){\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}={\mathbf{CRB}}_{\mathrm{c}}\left(\boldsymbol{\uprho} \right) $$
(38)

It can be seen from (38) that \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \) also has asymptotic efficiency.

Two remarks are in order at the end of this subsection.

5.2.3 Remark 7

The asymptotic optimality of \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \) only holds when the RDOA measurements of the target source are not used. Hence, the estimation accuracy of the sensor locations and clock bias may be further improved in the subsequent phase.

5.2.4 Remark 8

Applying (35), the cross-covariance matrix between \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \) and \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) is given by

$$ \mathbf{\operatorname{cov}}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)=\mathrm{E}\left[\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}{\left(\Delta {\mathbf{w}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right]={\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\cdot \mathrm{E}\left[{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}{\left(\Delta {\mathbf{w}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right]-{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)\cdot \mathrm{E}\left[\Delta {\mathbf{w}}_{\mathrm{f}}{\left(\Delta {\mathbf{w}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right] $$
(39)

Putting (87), (34), and (39) together produces

$$ \mathbf{\operatorname{cov}}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)=-{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right){\mathbf{CRB}}_{\mathrm{c}}\left(\mathbf{w}\right) $$
(40)

It can be verified from the matrix identity (II) in Table 2 that \( \mathbf{\operatorname{cov}}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}},{\hat{\mathbf{w}}}_{\mathrm{f}}\right) \) is equal to the lower-left-hand corner of \( {\mathbf{CRB}}_{\mathrm{c}}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right) \). So, using (34) and (38), we have

$$ \mathbf{MSE}\left(\left[\begin{array}{c}{\hat{\mathbf{w}}}_{\mathrm{f}}\\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]\right)=\mathrm{E}\left(\left[\begin{array}{c}\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]{\left[\begin{array}{c}\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]}^{\mathrm{T}}\right)=\left[\begin{array}{cc}{\mathbf{CRB}}_{\mathrm{c}}\left(\mathbf{w}\right)& {\left(\mathbf{\operatorname{cov}}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{\mathrm{T}}\\ {}\mathbf{\operatorname{cov}}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{f}},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)& {\mathbf{CRB}}_{\mathrm{c}}\left(\boldsymbol{\uprho} \right)\end{array}\right]={\mathbf{CRB}}_{\mathrm{c}}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right)=\boldsymbol{\Phi} $$
(41)
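Numerically, the three blocks of Φ in (41) need not be formed one by one via (34), (38), and (40); they can be read off from the inverse of the joint Fisher information matrix of w and ρ, whose explicit form is written out later in (48). The sketch below follows this route with placeholder matrices only.

```python
import numpy as np

rng = np.random.default_rng(0)
M, Nm1, n_c = 5, 3, 12                        # placeholder sizes
F_bar   = rng.standard_normal((n_c, 3 * M))
Gam_bar = rng.standard_normal((n_c, Nm1))
Qc_inv  = np.linalg.inv(0.01 * np.eye(n_c))
P_inv   = np.linalg.inv(0.25 * np.eye(3 * M))

# Joint FIM of [w; rho] as in (48); its inverse is Phi = CRB_c([w; rho]) in (41)
fim = np.block([
    [F_bar.T @ Qc_inv @ F_bar + P_inv, F_bar.T @ Qc_inv @ Gam_bar],
    [Gam_bar.T @ Qc_inv @ F_bar,       Gam_bar.T @ Qc_inv @ Gam_bar],
])
Phi = np.linalg.inv(fim)
crb_w     = Phi[:3 * M, :3 * M]    # MSE(w_hat_f), cf. (34)
crb_rho   = Phi[3 * M:, 3 * M:]    # MSE(rho_hat_f), cf. (38)
cov_rho_w = Phi[3 * M:, :3 * M]    # cov(rho_hat_f, w_hat_f), cf. (40)
print(crb_w.shape, crb_rho.shape, cov_rho_w.shape)
```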

5.3 Stage 2 of the proposed methods

In the second phase, we combine the measurement vector \( \hat{\mathbf{r}} \) with the estimates of w and ρ obtained in the first stage, denoted by \( {\hat{\mathbf{w}}}_{\mathrm{f}} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{f}} \), to locate the target source. Moreover, the estimates in the first stage can be further refined. As in the first stage, the ML criterion is utilized again to obtain asymptotically optimum performance. It is important to note that two dimension-reduction Taylor-series iterative algorithms are proposed in the second step.

5.4 The first algorithm

The first algorithm not only determines the position of the target source, but also further refines the estimates of sensor locations and clock bias provided in the first stage. Since the measurement error in \( \hat{\mathbf{r}} \) is independent of the estimation errors Δwf and Δρf, the corresponding ML estimator can be formulated as:

$$ \underset{\mathbf{u},\mathbf{w},\boldsymbol{\uprho}}{\min}\left\{{\left\Vert \left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\hat{\mathbf{r}}\\ {}{\boldsymbol{\Psi}}_1{\hat{\mathbf{w}}}_{\mathrm{f}}+{\boldsymbol{\Psi}}_2{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]-\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\left(\mathbf{f}\left(\mathbf{u},\mathbf{w}\right)+\boldsymbol{\Gamma} \boldsymbol{\uprho} \right)\\ {}{\boldsymbol{\Psi}}_1\mathbf{w}+{\boldsymbol{\Psi}}_2\boldsymbol{\uprho} \end{array}\right]\right\Vert}_2^2\right\} $$
(42)

where Ψ1 and Ψ2 are defined as follows:

$$ {\boldsymbol{\Phi}}^{-1/2}=\left[\underset{\left(3M+N-1\right)\times 3M}{\underbrace{{\boldsymbol{\Psi}}_1}}\kern0.5em \underset{\left(3M+N-1\right)\times \left(N-1\right)}{\underbrace{{\boldsymbol{\Psi}}_2}}\right] $$
(43)

This minimization problem can be solved through the conventional Taylor-series iterative technique. However, in order to reduce the number of parameters in the iterative procedure, we again exploit a dimension-reduction Taylor-series iterative algorithm. Because (42) is a quadratic minimization problem with respect to ρ, its optimal solution can be written in closed form as:

$$ {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1,\mathrm{opt}}={\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\left(\hat{\mathbf{r}}-\mathbf{f}\left(\mathbf{u},\mathbf{w}\right)\right)+{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_1\left({\hat{\mathbf{w}}}_{\mathrm{f}}-\mathbf{w}\right)+{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\right) $$
(44)

where subscript “s1” is used to highlight that this is the solution in the second phase for the first algorithm. Putting (44) back into (42) yields the following concentrated minimization problem:

$$ \underset{\mathbf{u},\mathbf{w}}{\min}\left\{{\left\Vert {\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\hat{\mathbf{r}}\\ {}{\boldsymbol{\Psi}}_1{\hat{\mathbf{w}}}_{\mathrm{f}}+{\boldsymbol{\Psi}}_2{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]-{\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\mathbf{f}\left(\mathbf{u},\mathbf{w}\right)\\ {}{\boldsymbol{\Psi}}_1\mathbf{w}\end{array}\right]\right\Vert}_2^2\right\} $$
(45)

The set of unknown parameters in (45) consists of u and w, which cannot be decoupled. As a result, they should be jointly estimated by the traditional Taylor-series iterative technique. The associated update formula for parameter estimation is given by

$$ {\displaystyle \begin{array}{c}\left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}^{\left(k+1\right)}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}^{\left(k+1\right)}\end{array}\right]=\left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\end{array}\right]+{\left({\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]\right)}^{-1}\\ {}\times {\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\left(\hat{\mathbf{r}}-\mathbf{f}\left({\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)\right)\\ {}{\boldsymbol{\Psi}}_1\left({\hat{\mathbf{w}}}_{\mathrm{f}}-{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}\right)+{\boldsymbol{\Psi}}_2{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]\end{array}} $$
(46)

where superscript k denotes iteration number; \( {\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)} \) and \( {\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)} \) are the estimates of u and w at the kth iteration, respectively. Let \( \underset{k\to +\infty }{\lim }{\hat{\mathbf{u}}}_{\mathrm{s}1}^{(k)}={\hat{\mathbf{u}}}_{\mathrm{s}1} \) and \( \underset{k\to +\infty }{\lim }{\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)}={\hat{\mathbf{w}}}_{\mathrm{s}1} \). Then, \( {\hat{\mathbf{u}}}_{\mathrm{s}1} \) and \( {\hat{\mathbf{w}}}_{\mathrm{s}1} \) can be regarded as the final estimates for target position and sensor locations, respectively. Besides, once the convergence criterion is satisfied and the iteration procedure is terminated, the final estimate for clock bias can be explicitly obtained as:

$$ {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1}={\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\left(\hat{\mathbf{r}}-\mathbf{f}\left({\hat{\mathbf{u}}}_{\mathrm{s}1},{\hat{\mathbf{w}}}_{\mathrm{s}1}\right)\right)+{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_1\left({\hat{\mathbf{w}}}_{\mathrm{f}}-{\hat{\mathbf{w}}}_{\mathrm{s}1}\right)+{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\right) $$
(47)
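As an illustration of the closed-form clock-bias solution (44) (and of (47), which has the same structure), the following sketch implements it with NumPy. All matrices are random placeholders of mutually consistent sizes, not quantities taken from this paper's simulation setup.

```python
import numpy as np

def clock_bias_closed_form(Gam, Q_inv, Psi1, Psi2, r_hat, f_uw, w_hat_f, w, rho_hat_f):
    # Closed-form minimizer over rho of (42), cf. (44)/(47)
    lhs = Gam.T @ Q_inv @ Gam + Psi2.T @ Psi2
    rhs = (Gam.T @ Q_inv @ (r_hat - f_uw)
           + Psi2.T @ Psi1 @ (w_hat_f - w)
           + Psi2.T @ Psi2 @ rho_hat_f)
    return np.linalg.solve(lhs, rhs)

# Tiny synthetic example with placeholder dimensions: M-1 = 4, N-1 = 3, 3M = 15
rng = np.random.default_rng(1)
Mm1, Nm1, dim_w = 4, 3, 15
Gam   = rng.standard_normal((Mm1, Nm1))
Q_inv = np.linalg.inv(0.01 * np.eye(Mm1))
Psi1  = rng.standard_normal((dim_w + Nm1, dim_w))
Psi2  = rng.standard_normal((dim_w + Nm1, Nm1))
rho_hat = clock_bias_closed_form(Gam, Q_inv, Psi1, Psi2,
                                 rng.standard_normal(Mm1),     # r_hat
                                 rng.standard_normal(Mm1),     # f(u, w)
                                 rng.standard_normal(dim_w),   # w_hat_f
                                 rng.standard_normal(dim_w),   # current w iterate
                                 rng.standard_normal(Nm1))     # rho_hat_f
print(rho_hat.shape)   # (N-1,)
```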

Before we proceed, three important remarks concerning the procedure described above are in order.

5.4.1 Remark 9

In the optimization problem (45), the matrices Ψ1 and Ψ2 are not accurately known because they depend on w, which is to be estimated. To overcome this difficulty, we can use the iteration vector \( {\hat{\mathbf{w}}}_{\mathrm{s}1}^{(k)} \) instead of the true sensor locations, which means that Ψ1 and Ψ2 are updated at every iteration step. Denote the approximations of Ψ1 and Ψ2 at the kth iteration by \( {\hat{\boldsymbol{\Psi}}}_1^{(k)} \) and \( {\hat{\boldsymbol{\Psi}}}_2^{(k)} \), respectively. The performance analysis in Section 6.1.1 shows that such an approximation does not affect the asymptotic properties of the estimator. Extensive simulation results also indicate that the estimation accuracy is relatively insensitive to the noise in these two matrices.

5.4.2 Remark 10

Since \( {\boldsymbol{\Phi}}^{-1}={\boldsymbol{\Phi}}^{-1/2}{\boldsymbol{\Phi}}^{-1/2}={\left({\boldsymbol{\Phi}}^{-1/2}\right)}^{\mathrm{T}}{\boldsymbol{\Phi}}^{-1/2} \), putting (22), (41), and (43) together produces

$$ {\boldsymbol{\Phi}}^{-1}=\left[\begin{array}{cc}{\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_1& {\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_2\\ {}{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_1& {\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\end{array}\right]={\left({\mathbf{CRB}}_{\mathrm{c}}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right)\right)}^{-1}=\left[\begin{array}{cc}{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}& {\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\\ {}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)& {\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right] $$
(48)

which implies

$$ {\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_1={\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\kern1em ;\kern1em {\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_2={\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\kern1em ;\kern1em {\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2={\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}} $$
(49)

These matrices are used in (46) and (47). Besides, they are also required for the theoretical analysis in Section 6.
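In practice, Ψ1 and Ψ2 can be obtained by computing a symmetric inverse square root of Φ and partitioning its columns as in (43). The minimal sketch below (with an arbitrary positive definite placeholder standing in for Φ) also verifies the identities in (49).

```python
import numpy as np

rng = np.random.default_rng(2)
dim_w, Nm1 = 15, 3                       # placeholder sizes: 3M and N-1
A = rng.standard_normal((dim_w + Nm1, dim_w + Nm1))
Phi = A @ A.T + np.eye(dim_w + Nm1)      # any symmetric positive definite stand-in

# Symmetric inverse square root Phi^{-1/2}, then the column partition of (43)
evals, evecs = np.linalg.eigh(Phi)
Phi_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
Psi1, Psi2 = Phi_inv_sqrt[:, :dim_w], Phi_inv_sqrt[:, dim_w:]

# Check the identities in (49): the blocks of Phi^{-1}
Phi_inv = np.linalg.inv(Phi)
assert np.allclose(Psi1.T @ Psi1, Phi_inv[:dim_w, :dim_w])
assert np.allclose(Psi1.T @ Psi2, Phi_inv[:dim_w, dim_w:])
assert np.allclose(Psi2.T @ Psi2, Phi_inv[dim_w:, dim_w:])
```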

5.4.3 Remark 11

Section 6.1.1 proves that the joint estimate \( \left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}\end{array}\right] \) can asymptotically attain the CRB computed by (81). Moreover, Section 6.1.2 shows that the solution \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1} \) is also asymptotically efficient because its performance can achieve the CRB given by (85) before the thresholding effect occurs.

5.4.4 The second algorithm

The aim of this subsection is to present an alternative dimension-reduction Taylor-series iterative formula in which the iteration variable consists of u only. The basic idea behind this algorithm is to automatically mitigate the effects of the sensor location errors remaining after the first stage, rather than to further refine the sensor positions. As a result, the computational load can be reduced.

For this purpose, we use a first-order Taylor-series expansion, leading to the following approximation:

$$ \hat{\mathbf{r}}\approx \mathbf{f}\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)+\boldsymbol{\Gamma} \boldsymbol{\uprho} +\left(\boldsymbol{\upvarepsilon} -{\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\right) $$
(50)

where the second- and higher-order terms of the estimation error Δwf are ignored. The last term \( {\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\Delta {\mathbf{w}}_{\mathrm{f}} \) can thus be regarded as an additional measurement error, in the same way as ε. The ML estimator is therefore given by

$$ \underset{\mathbf{u},\boldsymbol{\uprho}}{\min}\left\{{\left[\begin{array}{c}\hat{\mathbf{r}}-\mathbf{f}\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)-\boldsymbol{\Gamma} \boldsymbol{\uprho} \\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}-\boldsymbol{\uprho} \end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1}\left[\begin{array}{c}\hat{\mathbf{r}}-\mathbf{f}\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)-\boldsymbol{\Gamma} \boldsymbol{\uprho} \\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}-\boldsymbol{\uprho} \end{array}\right]\right\} $$
(51)

where

$$ \boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)=\mathrm{E}\left(\left[\begin{array}{c}\boldsymbol{\upvarepsilon} -{\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]{\left[\begin{array}{c}\boldsymbol{\upvarepsilon} -{\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]}^{\mathrm{T}}\right)=\left[\begin{array}{cc}\mathbf{Q}+{\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right){\boldsymbol{\Phi}}_1{\left({\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{\mathrm{T}}& -{\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right){\boldsymbol{\Phi}}_2\\ {}-{\boldsymbol{\Phi}}_2^{\mathrm{T}}{\left({\mathbf{F}}_2\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{\mathrm{T}}& {\boldsymbol{\Phi}}_3\end{array}\right] $$
(52)

in which Φ1, Φ2, and Φ3 are defined as follows:

$$ \boldsymbol{\Phi} =\left[\begin{array}{cc}\underset{3M\times 3M}{\underbrace{{\boldsymbol{\Phi}}_1}}& \underset{3M\times \left(N-1\right)}{\underbrace{{\boldsymbol{\Phi}}_2}}\\ {}\underset{\left(N-1\right)\times 3M}{\underbrace{{\boldsymbol{\Phi}}_2^{\mathrm{T}}}}& \underset{\left(N-1\right)\times \left(N-1\right)}{\underbrace{{\boldsymbol{\Phi}}_3}}\end{array}\right] $$
(53)
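For completeness, the covariance matrix Ω(u, ŵf) in (52) can be assembled from its ingredients as sketched below. The matrices here are placeholders only; in particular, the positive definite matrix standing in for Φ is arbitrary and merely plays the role of the first-stage covariance in (41).

```python
import numpy as np

def build_Omega(Q, F2, Phi1, Phi2, Phi3):
    # Covariance of the stacked error vector in (52); Phi_i follow the partition (53)
    top_left  = Q + F2 @ Phi1 @ F2.T
    top_right = -F2 @ Phi2
    return np.block([[top_left,    top_right],
                     [top_right.T, Phi3     ]])

rng = np.random.default_rng(3)
Mm1, dim_w, Nm1 = 4, 15, 3                    # placeholder sizes
Q    = 0.01 * np.eye(Mm1)                     # target TDOA covariance
F2   = rng.standard_normal((Mm1, dim_w))      # Jacobian of f w.r.t. w
A    = rng.standard_normal((dim_w + Nm1, dim_w + Nm1))
Phi  = A @ A.T + np.eye(dim_w + Nm1)          # stand-in for CRB_c([w; rho])
Phi1 = Phi[:dim_w, :dim_w]
Phi2 = Phi[:dim_w, dim_w:]
Phi3 = Phi[dim_w:, dim_w:]
Omega = build_Omega(Q, F2, Phi1, Phi2, Phi3)
print(Omega.shape)                            # (M-1 + N-1, M-1 + N-1)
```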

Likewise, the optimal solution of ρ in (51) can also be written explicitly as follows:

$$ {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2,\mathrm{opt}}={\left({\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right)}^{-1}{\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1}\left[\begin{array}{c}\hat{\mathbf{r}}-\mathbf{f}\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right] $$
(54)

where subscript “s2” is used to emphasize that this is the solution in the second phase for the second algorithm. By substituting (54) into (51), we obtain the following concentrated minimization problem

$$ \underset{\mathbf{u}}{\min}\left\{{\left\Vert {\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\hat{\mathbf{r}}\\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]-{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\mathbf{f}\left(\mathbf{u},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 1}\end{array}\right]\right\Vert}_2^2\right\} $$
(55)

Similar to (28) and (45), the Taylor-series iterative formula for solving (55) can be expressed as:

$$ {\displaystyle \begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}2}^{\left(k+1\right)}={\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)}+{\left({\left({\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{-1}\\ {}\times {\left({\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\hat{\mathbf{r}}-\mathbf{f}\left({\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]\end{array}} $$
(56)

where superscript k stands for the iteration number and \( {\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)} \) is the estimate of u at the kth iteration. The limit of the sequence \( {\left\{{\hat{\mathbf{u}}}_{\mathrm{s}2}^{(k)}\right\}}_{1\le k\le +\infty } \), denoted by \( {\hat{\mathbf{u}}}_{\mathrm{s}2} \), is taken as the final estimate of the target position for the second algorithm. Moreover, once the iteration process is completed, the final solution of the clock bias can be expressed in closed form as:

$$ {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2}={\left({\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right)}^{-1}{\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1}\left[\begin{array}{c}\hat{\mathbf{r}}-\mathbf{f}\left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right] $$
(57)
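The update (56) and the closed-form solution (57) can be coded compactly. The sketch below is a simplified, self-contained illustration: the range-difference model, the sensor positions, the noise levels, and the identity clock-bias mapping Γ are placeholder choices made only so that the code runs, and should not be read as the simulation configuration of Section 7.

```python
import numpy as np

def orth_proj_complement(A):
    # Pi_perp(A) = I - A (A^T A)^{-1} A^T
    return np.eye(A.shape[0]) - A @ np.linalg.pinv(A)

def sym_inv_sqrt(S):
    evals, evecs = np.linalg.eigh(S)
    return evecs @ np.diag(evals ** -0.5) @ evecs.T

def alg2_update(u_k, r_hat, rho_hat_f, Gam, Omega, f_fun, F1_fun):
    # One Taylor-series update of (56); f_fun(u) plays the role of f(u, w_hat_f)
    # and F1_fun(u) is its Jacobian with respect to u.
    Nm1 = Gam.shape[1]
    W = sym_inv_sqrt(Omega)
    B = W @ np.vstack([Gam, np.eye(Nm1)])
    Pp = orth_proj_complement(B)
    J = W @ np.vstack([F1_fun(u_k), np.zeros((Nm1, u_k.size))])
    res = W @ np.concatenate([r_hat - f_fun(u_k), rho_hat_f])
    return u_k + np.linalg.solve(J.T @ Pp @ J, J.T @ Pp @ res)

# Toy setup: 5 sensors, RDOAs relative to sensor 0, identity clock-bias mapping
rng = np.random.default_rng(4)
sensors = rng.uniform(-50.0, 50.0, size=(5, 3))   # plays the role of w_hat_f
u_true = np.array([120.0, -80.0, 60.0])
def f_fun(u):
    d = np.linalg.norm(sensors - u, axis=1)
    return d[1:] - d[0]
def F1_fun(u):
    g = (u - sensors) / np.linalg.norm(sensors - u, axis=1)[:, None]
    return g[1:] - g[0]

Gam = np.eye(4)                                   # placeholder clock-bias mapping
Omega = 0.01 * np.eye(8)                          # placeholder covariance, cf. (52)
r_hat = f_fun(u_true) + 0.05 * rng.standard_normal(4)
rho_hat_f = np.zeros(4)                           # placeholder first-stage estimate

u = u_true + np.array([8.0, -6.0, 5.0])           # rough initial guess
for _ in range(15):                               # cf. Remark 14
    u = alg2_update(u, r_hat, rho_hat_f, Gam, Omega, f_fun, F1_fun)

# Final clock-bias estimate, cf. (57)
Om_inv = np.linalg.inv(Omega)
G = np.vstack([Gam, np.eye(4)])
rho_s2 = np.linalg.solve(G.T @ Om_inv @ G,
                         G.T @ Om_inv @ np.concatenate([r_hat - f_fun(u), rho_hat_f]))
print(u, rho_s2)
```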

At the end of this subsection, two remarks about the second algorithm are in order.

5.4.5 Remark 12

The estimates of sensor locations obtained in the first stage cannot be further refined in this algorithm.

5.4.6 Remark 13

Similar to the solutions \( {\hat{\mathbf{u}}}_{\mathrm{s}1} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1} \) obtained by the first algorithm, the estimates \( {\hat{\mathbf{u}}}_{\mathrm{s}2} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2} \) are also asymptotically efficient. Section 6.2 proves that, under mild conditions, the estimation accuracy of \( {\hat{\mathbf{u}}}_{\mathrm{s}2} \) and \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2} \) can attain the CRBs given by (18) and the third equality in (13), respectively.

5.5 Summary of the proposed methods

According to the descriptions in Sections 5.1 and 5.3, we obtain two novel TDOA localization algorithms when the TDOA measurements from the calibration emitters are available. Both of them consist of two stages, and the first stages of the two algorithms share the same computational procedure. In the sequel, we summarize the two newly proposed algorithms, which are referred to as algorithm I and algorithm II, respectively (Figs. 2 and 3).

Fig. 2 Logical flowchart of algorithm I

Fig. 3 Logical flowchart of algorithm II

We make the following two remarks about the proposed algorithms described above.

5.5.1 Remark 14

In the first stage of algorithm I, the initial value \( {\hat{\mathbf{w}}}_{\mathrm{f}}^{(0)} \) can be set to the available erroneous sensor positions \( \hat{\mathbf{v}} \). In the second phase of algorithm I, the initial solution \( {\hat{\mathbf{w}}}_{\mathrm{s}1}^{(0)} \) can be chosen as \( {\hat{\mathbf{w}}}_{\mathrm{f}} \), and the initial guess \( {\hat{\mathbf{u}}}_{\mathrm{s}1}^{(0)} \) can be obtained by the non-iterative algebraic solution proposed in [59]. Simulation results in Section 7 show that these initial solutions enable the algorithms to achieve asymptotically efficient performance. Moreover, this initialization method is also suitable for algorithm II. From our simulation results, it can be observed that fifteen iterations are generally sufficient to satisfy the convergence criterion.

5.5.2 Remark 15

As mentioned in Section 5.4.4, algorithm II involves a smaller amount of computation than algorithm I because the sensor locations are not refined in the second stage of algorithm II. In Appendix 4, we provide the computational complexities of the two algorithms, expressed in terms of the number of multiplication operations.

6 Performance analysis of the proposed methods

This section is devoted to deriving the theoretical MSE of the TDOA localization methods presented in Section 5. Besides, we prove analytically that the theoretical performance of the proposed solutions can reach the corresponding CRB accuracy under mild conditions.

6.1 Performance analysis of algorithm I

6.1.1 MSE expression of joint estimates \( {\hat{\mathbf{u}}}_{\mathrm{s}1} \) and \( {\hat{\mathbf{w}}}_{\mathrm{s}1} \)

The aim of this subsection is to deduce the theoretical MSE of the joint estimate \( \left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}\end{array}\right] \). We start the derivation by taking the limit on both sides of (46) as follows:

$$ {\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}1},{\hat{\mathbf{w}}}_{\mathrm{s}1}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left({\hat{\mathbf{u}}}_{\mathrm{s}1},{\hat{\mathbf{w}}}_{\mathrm{s}1}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\hat{\boldsymbol{\Psi}}}_1\end{array}\right]}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\hat{\boldsymbol{\Psi}}}_2\end{array}\right]\right)\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\left(\hat{\mathbf{r}}-\mathbf{f}\left({\hat{\mathbf{u}}}_{\mathrm{s}1},{\hat{\mathbf{w}}}_{\mathrm{s}1}\right)\right)\\ {}{\hat{\boldsymbol{\Psi}}}_1\left({\hat{\mathbf{w}}}_{\mathrm{f}}-{\hat{\mathbf{w}}}_{\mathrm{s}1}\right)+{\hat{\boldsymbol{\Psi}}}_2{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]={\mathbf{O}}_{\left(3M+3\right)\times 1} $$
(58)

where \( {\hat{\boldsymbol{\Psi}}}_1=\underset{k\to +\infty }{\lim }{\hat{\boldsymbol{\Psi}}}_1^{(k)} \) and \( {\hat{\boldsymbol{\Psi}}}_2=\underset{k\to +\infty }{\lim }{\hat{\boldsymbol{\Psi}}}_2^{(k)} \), and we replace Ψ1 and Ψ2 with \( {\hat{\boldsymbol{\Psi}}}_1^{(k)} \) and \( {\hat{\boldsymbol{\Psi}}}_2^{(k)} \), respectively, according to the statement in Remark 9. Putting (2) into (58) and neglecting the second- and higher-order error terms gives

$$ {\displaystyle \begin{array}{c}{\mathbf{O}}_{\left(3M+3\right)\times 1}={\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}1},{\hat{\mathbf{w}}}_{\mathrm{s}1}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left({\hat{\mathbf{u}}}_{\mathrm{s}1},{\hat{\mathbf{w}}}_{\mathrm{s}1}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\hat{\boldsymbol{\Psi}}}_1\end{array}\right]}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\hat{\boldsymbol{\Psi}}}_2\end{array}\right]\right)\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\left(\mathbf{f}\left(\mathbf{u},\mathbf{w}\right)-\mathbf{f}\left({\hat{\mathbf{u}}}_{\mathrm{s}1},{\hat{\mathbf{w}}}_{\mathrm{s}1}\right)+\boldsymbol{\Gamma} \boldsymbol{\uprho} +\boldsymbol{\upvarepsilon} \right)\\ {}{\hat{\boldsymbol{\Psi}}}_1\left(\Delta {\mathbf{w}}_{\mathrm{f}}-\Delta {\mathbf{w}}_{\mathrm{s}1}\right)+{\hat{\boldsymbol{\Psi}}}_2\left(\boldsymbol{\uprho} +\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\right)\end{array}\right]\\ {}\approx {\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\left(\boldsymbol{\upvarepsilon} -{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{u}}_{\mathrm{s}1}-{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{s}1}\right)\\ {}{\boldsymbol{\Psi}}_1\left(\Delta {\mathbf{w}}_{\mathrm{f}}-\Delta {\mathbf{w}}_{\mathrm{s}1}\right)+{\boldsymbol{\Psi}}_2\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\end{array}} $$
(59)

where \( \Delta {\mathbf{u}}_{\mathrm{s}1}={\hat{\mathbf{u}}}_{\mathrm{s}1}-\mathbf{u} \) and \( \Delta {\mathbf{w}}_{\mathrm{s}1}={\hat{\mathbf{w}}}_{\mathrm{s}1}-\mathbf{w} \) are the estimation errors in \( {\hat{\mathbf{u}}}_{\mathrm{s}1} \) and \( {\hat{\mathbf{w}}}_{\mathrm{s}1} \), respectively. Moreover, it is worth mentioning that the second (approximate) equality in (59) makes use of the relation \( {\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\hat{\boldsymbol{\Psi}}}_2\end{array}\right]\right)\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\hat{\boldsymbol{\Psi}}}_2\end{array}\right]={\mathbf{O}}_{\left(4M+N-2\right)\times \left(N-1\right)} \). From (59), we have

$$ {\displaystyle \begin{array}{c}\left[\begin{array}{c}\Delta {\mathbf{u}}_{\mathrm{s}1}\\ {}\Delta {\mathbf{w}}_{\mathrm{s}1}\end{array}\right]\approx {\left({\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]\right)}^{-1}\\ {}\times {\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\upvarepsilon} \\ {}{\boldsymbol{\Psi}}_1\Delta {\mathbf{w}}_{\mathrm{f}}+{\boldsymbol{\Psi}}_2\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\end{array}} $$
(60)

Besides, it follows from (41) and (43) that

$$ \mathrm{E}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\upvarepsilon} \\ {}{\boldsymbol{\Psi}}_1\Delta {\mathbf{w}}_{\mathrm{f}}+{\boldsymbol{\Psi}}_2\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]{\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\upvarepsilon} \\ {}{\boldsymbol{\Psi}}_1\Delta {\mathbf{w}}_{\mathrm{f}}+{\boldsymbol{\Psi}}_2\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]}^{\mathrm{T}}\right)=\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{Q}\mathbf{Q}}^{-1/2}& {\mathbf{O}}_{\left(M-1\right)\times \left(3M+N-1\right)}\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times \left(M-1\right)}& {\boldsymbol{\Phi}}^{-1/2}{\boldsymbol{\Phi} \boldsymbol{\Phi}}^{-1/2}\end{array}\right]={\mathbf{I}}_{4M+N-2} $$
(61)

which together with (60) produces

$$ \mathbf{MSE}\left(\left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}\end{array}\right]\right)=\mathrm{E}\left(\left[\begin{array}{c}\Delta {\mathbf{u}}_{\mathrm{s}1}\\ {}\Delta {\mathbf{w}}_{\mathrm{s}1}\end{array}\right]{\left[\begin{array}{c}\Delta {\mathbf{u}}_{\mathrm{s}1}\\ {}\Delta {\mathbf{w}}_{\mathrm{s}1}\end{array}\right]}^{\mathrm{T}}\right)={\left({\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]\right)}^{-1} $$
(62)

In Appendix 5, we prove that \( \mathbf{MSE}\left(\left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}\end{array}\right]\right)=\mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{u}\\ {}\mathbf{w}\end{array}\right]\right) \), which immediately implies that the joint estimate \( \left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}\end{array}\right] \) can asymptotically achieve the optimum performance.
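The equality between the MSE and the CRB proved above can also be checked empirically by averaging squared estimation errors over Monte Carlo runs, exactly as done in Section 7. The sketch below illustrates the check on a generic linear Gaussian model, which is only a stand-in for the linearized estimators of this paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_mc, dim_theta, n_meas = 2000, 5, 20           # placeholder sizes
H = rng.standard_normal((n_meas, dim_theta))    # stand-in observation matrix
R = 0.01 * np.eye(n_meas)                       # measurement covariance
R_inv = np.linalg.inv(R)
theta = rng.standard_normal(dim_theta)          # true parameter
crb = np.linalg.inv(H.T @ R_inv @ H)            # theoretical covariance / CRB

errors = np.empty((n_mc, dim_theta))
for i in range(n_mc):
    z = H @ theta + rng.multivariate_normal(np.zeros(n_meas), R)
    theta_hat = crb @ (H.T @ R_inv @ z)         # ML / weighted LS estimate
    errors[i] = theta_hat - theta
emp_mse = errors.T @ errors / n_mc
print(np.trace(emp_mse), np.trace(crb))         # the two traces should be close
```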

6.1.2 MSE expression of \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1} \)

Here, the analytical MSE of \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1} \) is derived. Inserting (2) into (47) and neglecting the second- and higher-order error terms leads to

$$ {\displaystyle \begin{array}{l}\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\hat{\boldsymbol{\Psi}}}_2^{\mathrm{T}}{\hat{\boldsymbol{\Psi}}}_2\right)\left(\boldsymbol{\uprho} +\Delta {\boldsymbol{\uprho}}_{\mathrm{s}1}\right)\approx {\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\left(\boldsymbol{\Gamma} \boldsymbol{\uprho} +\boldsymbol{\upvarepsilon} -{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{u}}_{\mathrm{s}1}-{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{s}1}\right)+{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_1\left(\Delta {\mathbf{w}}_{\mathrm{f}}-\Delta {\mathbf{w}}_{\mathrm{s}1}\right)+{\hat{\boldsymbol{\Psi}}}_2^{\mathrm{T}}{\hat{\boldsymbol{\Psi}}}_2\left(\boldsymbol{\uprho} +\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\right)\\ {}\Rightarrow \Delta {\boldsymbol{\uprho}}_{\mathrm{s}1}\approx {\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\upvarepsilon} -{\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{u}}_{\mathrm{s}1}-\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)+{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_1\right)\Delta {\mathbf{w}}_{\mathrm{s}1}+{\boldsymbol{\Psi}}_2^{\mathrm{T}}\left({\boldsymbol{\Psi}}_1\Delta {\mathbf{w}}_{\mathrm{f}}+{\boldsymbol{\Psi}}_2\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\right)\right)\end{array}} $$
(63)

where \( \Delta {\boldsymbol{\uprho}}_{\mathrm{s}1}={\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1}-\boldsymbol{\uprho} \) is the estimation error in \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1} \). It is important to note that, in obtaining the second line of (63), \( {\hat{\boldsymbol{\Psi}}}_1 \) and \( {\hat{\boldsymbol{\Psi}}}_2 \) are replaced by Ψ1 and Ψ2, respectively, since the resulting difference contributes only second- and higher-order error terms.

Putting (43) and (63) together yields

$$ \Delta {\boldsymbol{\uprho}}_{\mathrm{s}1}\approx {\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\upvarepsilon} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Phi}}^{-1/2}\left[\begin{array}{c}\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]-{\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_2\end{array}\right]}^{\mathrm{T}}\left[\begin{array}{c}\Delta {\mathbf{u}}_{\mathrm{s}1}\\ {}\Delta {\mathbf{w}}_{\mathrm{s}1}\end{array}\right]\right) $$
(64)

Then, we have

$$ \mathbf{MSE}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1}\right)=\mathrm{E}\left[\Delta {\boldsymbol{\uprho}}_{\mathrm{s}1}{\left(\Delta {\boldsymbol{\uprho}}_{\mathrm{s}1}\right)}^{\mathrm{T}}\right]={\mathbf{B}}_1+{\mathbf{B}}_2+{\mathbf{B}}_3+{\mathbf{B}}_3^{\mathrm{T}} $$
(65)

where

$$ \left\{\begin{array}{l}{\mathbf{B}}_1={\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}\\ {}{\mathbf{B}}_2={\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}{\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_2\end{array}\right]}^{\mathrm{T}}\mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{u}\\ {}\mathbf{w}\end{array}\right]\right)\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_2\end{array}\right]{\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}\\ {}{\mathbf{B}}_3=-{\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}\cdot \mathrm{E}\left(\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\upvarepsilon} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Phi}}^{-1/2}\left[\begin{array}{c}\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\right){\left[\begin{array}{c}\Delta {\mathbf{u}}_{\mathrm{s}1}\\ {}\Delta {\mathbf{w}}_{\mathrm{s}1}\end{array}\right]}^{\mathrm{T}}\right)\cdot \left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_2\end{array}\right]{\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}\end{array}\right. $$
(66)

It is shown in Appendix 6 that \( {\mathbf{B}}_3={\mathbf{O}}_{\left(N-1\right)\times \left(N-1\right)} \), which, combined with (65) and (66), produces

$$ {\displaystyle \begin{array}{c}\mathbf{MSE}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1}\right)={\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}+{\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}{\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_2\end{array}\right]}^{\mathrm{T}}\\ {}\times \mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{u}\\ {}\mathbf{w}\end{array}\right]\right)\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_2\end{array}\right]{\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}\end{array}} $$
(67)

From (85), (49), and (67), we have \( \mathbf{MSE}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1}\right)=\mathbf{CRB}\left(\boldsymbol{\uprho} \right) \), which means that the solution \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}1} \) is asymptotically efficient.

6.2 Performance analysis of algorithm II

6.2.1 MSE expression of \( {\hat{\mathbf{u}}}_{\mathrm{s}2} \)

The objective of this subsection is to derive the theoretical MSE of \( {\hat{\mathbf{u}}}_{\mathrm{s}2} \). Taking the limit on both sides of (56) gives

$$ {\left({\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\hat{\mathbf{r}}-\mathbf{f}\left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\hat{\boldsymbol{\uprho}}}_{\mathrm{f}}\end{array}\right]={\mathbf{O}}_{3\times 1} $$
(68)

Inserting (2) into (68) and neglecting the second- and higher-order error terms leads to

$$ {\displaystyle \begin{array}{c}{\mathbf{O}}_{3\times 1}={\left({\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\mathbf{f}\left(\mathbf{u},\mathbf{w}\right)-\mathbf{f}\left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)+\boldsymbol{\Gamma} \boldsymbol{\uprho} +\boldsymbol{\upvarepsilon} \\ {}\boldsymbol{\uprho} +\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\\ {}\approx {\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\upvarepsilon} -{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{u}}_{\mathrm{s}2}-{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\end{array}} $$
(69)

where \( \Delta {\mathbf{u}}_{\mathrm{s}2}={\hat{\mathbf{u}}}_{\mathrm{s}2}-\mathbf{u} \) is the estimation error in \( {\hat{\mathbf{u}}}_{\mathrm{s}2} \). Additionally, the second (approximate) equality in (69) exploits the relation \( {\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]={\mathbf{O}}_{\left(M+N-2\right)\times \left(N-1\right)} \). From (69), we can obtain

$$ {\displaystyle \begin{array}{c}\Delta {\mathbf{u}}_{\mathrm{s}2}\approx {\left({\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{-1}\\ {}\times {\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\upvarepsilon} -{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\end{array}} $$
(70)

which combined with (52) results in

$$ \mathbf{MSE}\left({\hat{\mathbf{u}}}_{\mathrm{s}2}\right)=\mathrm{E}\left[\Delta {\mathbf{u}}_{\mathrm{s}2}{\left(\Delta {\mathbf{u}}_{\mathrm{s}2}\right)}^{\mathrm{T}}\right]={\left({\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\right)}^{-1} $$
(71)

In Appendix 7, it is proved that \( \mathbf{MSE}\left({\hat{\mathbf{u}}}_{\mathrm{s}2}\right)=\mathbf{CRB}\left(\mathbf{u}\right) \), which indicates that the estimate \( {\hat{\mathbf{u}}}_{\mathrm{s}2} \) has asymptotically optimal accuracy.

6.2.2 MSE expression of \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2} \)

Here, we derive the analytical MSE of \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2} \). Substituting (2) into (57) and neglecting the second- and higher-order error terms produces

$$ {\displaystyle \begin{array}{l}{\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\left(\boldsymbol{\uprho} +\Delta {\boldsymbol{\uprho}}_{\mathrm{s}2}\right)\approx {\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left({\hat{\mathbf{u}}}_{\mathrm{s}2},{\hat{\mathbf{w}}}_{\mathrm{f}}\right)\right)}^{-1}\left[\begin{array}{c}\boldsymbol{\Gamma} \boldsymbol{\uprho} +\boldsymbol{\upvarepsilon} -{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{u}}_{\mathrm{s}2}-{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\boldsymbol{\uprho} +\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\\ {}\Rightarrow \Delta {\boldsymbol{\uprho}}_{\mathrm{s}2}\approx {\left({\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right)}^{-1}{\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1}\left(\left[\begin{array}{c}\boldsymbol{\upvarepsilon} -{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]-\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\Delta {\mathbf{u}}_{\mathrm{s}2}\right)\end{array}} $$
(72)

where \( \Delta {\boldsymbol{\uprho}}_{\mathrm{s}2}={\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2}-\boldsymbol{\uprho} \) is the estimation error in \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2} \).

Combining (52) and (72) leads to

$$ \mathbf{MSE}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2}\right)=\mathrm{E}\left[\Delta {\boldsymbol{\uprho}}_{\mathrm{s}2}{\left(\Delta {\boldsymbol{\uprho}}_{\mathrm{s}2}\right)}^{\mathrm{T}}\right]={\mathbf{C}}_1+{\mathbf{C}}_2+{\mathbf{C}}_3+{\mathbf{C}}_3^{\mathrm{T}} $$
(73)

where

$$ \left\{\begin{array}{c}{\mathbf{C}}_1={\left({\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right)}^{-1}\\ {}{\mathbf{C}}_2={\mathbf{C}}_1{\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1}\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\mathbf{CRB}\left(\mathbf{u}\right){\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]{\mathbf{C}}_1\\ {}{\mathbf{C}}_3=-{\mathbf{C}}_1{\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1}\cdot \mathrm{E}\left(\left[\begin{array}{c}\boldsymbol{\upvarepsilon} -{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\Delta {\mathbf{u}}_{\mathrm{s}2}^{\mathrm{T}}\right)\cdot {\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]{\mathbf{C}}_1\end{array}\right. $$
(74)

In Appendix 8, we prove that \( {\mathbf{C}}_3={\mathbf{O}}_{\left(N-1\right)\times \left(N-1\right)} \), which together with (73) and (74) produces

$$ \mathbf{MSE}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2}\right)={\mathbf{C}}_1+{\mathbf{C}}_1{\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1}\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\mathbf{CRB}\left(\mathbf{u}\right){\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]{\mathbf{C}}_1 $$
(75)

Combining (12), (19), and (52) and performing some algebraic manipulations leads to

$$ \left\{\begin{array}{l}{\mathbf{C}}_1=\left[{\mathbf{O}}_{\left(N-1\right)\times 3M}\kern1em {\mathbf{I}}_{N-1}\right]{\mathbf{Z}}^{-1}\left[\begin{array}{c}{\mathbf{O}}_{3M\times \left(N-1\right)}\\ {}{\mathbf{I}}_{N-1}\end{array}\right]\\ {}{\mathbf{C}}_1{\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1}\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]=\left[{\mathbf{O}}_{\left(N-1\right)\times 3M}\kern1em {\mathbf{I}}_{N-1}\right]{\mathbf{Z}}^{-1}{\mathbf{Y}}^{\mathrm{T}}\end{array}\right. $$
(76)

Inserting (76) into (75) and using \( \mathbf{CRB}\left(\mathbf{u}\right)={\left(\mathbf{X}-{\mathbf{Y}\mathbf{Z}}^{-1}{\mathbf{Y}}^{\mathrm{T}}\right)}^{-1} \), we have

$$ \mathbf{MSE}\left({\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2}\right)=\left[{\mathbf{O}}_{\left(N-1\right)\times 3M}\kern1em {\mathbf{I}}_{N-1}\right]{\mathbf{Z}}^{-1}\left[\begin{array}{c}{\mathbf{O}}_{3M\times \left(N-1\right)}\\ {}{\mathbf{I}}_{N-1}\end{array}\right]+\left[{\mathbf{O}}_{\left(N-1\right)\times 3M}\kern1em {\mathbf{I}}_{N-1}\right]{\mathbf{Z}}^{-1}{\mathbf{Y}}^{\mathrm{T}}{\left(\mathbf{X}-{\mathbf{Y}\mathbf{Z}}^{-1}{\mathbf{Y}}^{\mathrm{T}}\right)}^{-1}{\mathbf{Y}\mathbf{Z}}^{-1}\left[\begin{array}{c}{\mathbf{O}}_{3M\times \left(N-1\right)}\\ {}{\mathbf{I}}_{N-1}\end{array}\right]=\mathbf{CRB}\left(\boldsymbol{\uprho} \right) $$
(77)

which implies that the solution \( {\hat{\boldsymbol{\uprho}}}_{\mathrm{s}2} \) is also asymptotically efficient.
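As a remark, the last equality in (77) can be read directly from the standard partitioned-matrix inversion formula. With the blocks X, Y, and Z appearing in (76) and (77) (so that CRB(u) is the upper-left block of the inverse FIM), the formula reads

$$ {\left[\begin{array}{cc}\mathbf{X}& \mathbf{Y}\\ {}{\mathbf{Y}}^{\mathrm{T}}& \mathbf{Z}\end{array}\right]}^{-1}=\left[\begin{array}{cc}{\left(\mathbf{X}-{\mathbf{Y}\mathbf{Z}}^{-1}{\mathbf{Y}}^{\mathrm{T}}\right)}^{-1}& \ast \\ {}\ast & {\mathbf{Z}}^{-1}+{\mathbf{Z}}^{-1}{\mathbf{Y}}^{\mathrm{T}}{\left(\mathbf{X}-{\mathbf{Y}\mathbf{Z}}^{-1}{\mathbf{Y}}^{\mathrm{T}}\right)}^{-1}{\mathbf{Y}\mathbf{Z}}^{-1}\end{array}\right] $$

where the blocks marked by ∗ are not needed here. The right-hand side of (77) is then recognized as the lower-right (N − 1) × (N − 1) corner of the lower-right block of this inverse, extracted by the selection matrix \( \left[{\mathbf{O}}_{\left(N-1\right)\times 3M}\kern1em {\mathbf{I}}_{N-1}\right] \), which is precisely CRB(ρ).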

Finally, we would like to stress that the performance analysis described above is performed in a general mathematical framework, which is not limited to a specific signal metric.

7 Numerical experiments

This section presents a set of Monte Carlo simulations to examine the behavior of the proposed TDOA localization algorithms. The root-mean-square error (RMSE) and the radius of circular error probability (CEP) are chosen as performance metrics. All the simulation results are averaged over 2000 independent noise realizations. It should be pointed out that, to the best of our knowledge, there is no existing algorithm that utilizes calibration emitters to improve the localization accuracy for the scenario where both synchronization clock bias and sensor location errors are present. As a result, there are relatively few algorithms that can be directly used for a fair performance comparison. Note that, as mentioned in [71], the differential calibration (DC) technique is commonly used in global positioning systems (GPS) to mitigate the effects of uncertainties in the satellite positions and of various errors caused by satellite clock mismatch and the tropospheric and ionospheric layers. Moreover, this method can be easily extended to the localization scenario studied here, so it is reasonable to compare our methods with the DC approach. Additionally, we also compare the performance of the proposed solutions with the TWLS algorithm in [59], the Taylor-series iterative algorithm extended from [8], and the algorithm in [70], none of which makes use of the calibration emitters. From this comparison, we can clearly observe the performance improvement resulting from the use of the calibration sources. On the other hand, the CRBs derived in Section 4 are also used as an important performance benchmark, which can corroborate the asymptotic efficiency of the new algorithms.
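For concreteness, the two metrics can be computed from the Monte Carlo estimates as in the following sketch (Python/NumPy; the array names are illustrative, and the CEP radius is taken here, as is conventional, as the radius of the circle centered at the true planar position that contains 50% of the location estimates):

```python
import numpy as np

def rmse(estimates, truth):
    """Root-mean-square error over the Monte Carlo runs.

    estimates: (K, d) array with one estimated parameter vector per run
    truth:     (d,) true parameter vector
    """
    err = estimates - truth
    return np.sqrt(np.mean(np.sum(err**2, axis=1)))

def cep_radius(planar_estimates, planar_truth, prob=0.5):
    """Radius of the circle, centered at the true planar position,
    containing a fraction `prob` (default 50%) of the estimates."""
    dists = np.linalg.norm(planar_estimates - planar_truth, axis=1)
    return np.quantile(dists, prob)

# Example with a synthetic scatter of 2000 runs in the xy plane
rng = np.random.default_rng(0)
u_true_xy = np.array([4000.0, 4000.0])
u_hat_xy = u_true_xy + rng.normal(scale=30.0, size=(2000, 2))
print("RMSE (m):", rmse(u_hat_xy, u_true_xy))
print("CEP radius (m):", cep_radius(u_hat_xy, u_true_xy))
```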

In the first set of experiments, we compare the CEP radii of the two proposed algorithms with that of the TWLS algorithm in [59], which does not utilize the TDOA measurements obtained from the calibration signals. The simulation scenario contains an unknown source located at u = [4000 4000 4000]T (m), and the localization task is performed by an array of M = 15 passive sensors. The actual sensor locations are tabulated in Table 3, which shows that the sensors are divided into 5 groups. The clock offset vector is set as ρ = [20 − 25 15  − 10]T (m). In addition, two calibration emitters are deployed near the target source; they are located at uc, 1 = [3200 5800 2500]T (m) and uc, 2 = [6200 3800 5600]T (m). In generating the simulation results, the RDOA measurements from the target source and the calibration emitters are obtained by adding to the true values zero-mean Gaussian noise with covariance matrices \( \mathbf{Q}={\sigma}_{\mathrm{RDOA}}^2\mathbf{T} \) and \( {\mathbf{Q}}_{\mathrm{c}}={\sigma}_{\mathrm{RDOA}}^2{\mathbf{T}}_{\mathrm{c}} \), respectively. T is an (M − 1) × (M − 1) matrix with diagonal elements equal to 1 and all other elements equal to 0.5; Tc is a D(M − 1) × D(M − 1) block diagonal matrix whose diagonal blocks are equal to T. The erroneous sensor positions are created in a similar way using the covariance matrix \( \mathbf{P}={\sigma}_{\mathrm{P}}^2{\mathbf{I}}_{3M} \). σRDOA and σP denote the standard deviations of the RDOA measurement errors and the sensor position perturbations, respectively.
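A minimal sketch of how such noisy data can be generated is given below (Python/NumPy; the variable names are illustrative and the noise-free vectors are left as placeholders). It draws correlated RDOA noise with covariances σ²RDOA T and σ²RDOA Tc via a Cholesky factor and perturbs the stacked sensor positions with covariance σ²P I3M:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
M, D = 15, 2                      # numbers of sensors and calibration emitters
sigma_rdoa, sigma_p = 10.0, 5.0   # noise standard deviations (m)

# T: unit diagonal, 0.5 everywhere else; Tc: D diagonal copies of T
T = 0.5 * (np.eye(M - 1) + np.ones((M - 1, M - 1)))
Q = sigma_rdoa**2 * T                       # covariance of target RDOA noise
Qc = sigma_rdoa**2 * block_diag(*[T] * D)   # covariance of calibration RDOA noise
P = sigma_p**2 * np.eye(3 * M)              # covariance of sensor position errors

def gaussian_noise(cov, rng):
    """One zero-mean Gaussian vector with the prescribed covariance."""
    return np.linalg.cholesky(cov) @ rng.standard_normal(cov.shape[0])

# Placeholders for the noise-free quantities (true RDOAs and sensor positions)
r_true = np.zeros(M - 1)
rc_true = np.zeros(D * (M - 1))
w_true = np.zeros(3 * M)

r_meas = r_true + gaussian_noise(Q, rng)     # noisy target RDOAs
rc_meas = rc_true + gaussian_noise(Qc, rng)  # noisy calibration RDOAs
w_meas = w_true + gaussian_noise(P, rng)     # erroneous sensor positions
```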

Table 3 Sensor positions in the unit of meters

We set σRDOA = 10 (m) and σP = 5 (m), and the number of Monte Carlo runs equals 2000. Figs. 4 and 5 show the scatter plots of the source location estimates in the xy and yz planes, respectively. The CEP radii of the three localization algorithms are also provided in the figures.

Fig. 4 Scatter plots of the source location estimates in the xy plane. a Location estimation results for algorithm I in the xy plane. b Location estimation results for algorithm II in the xy plane. c Location estimation results for the TWLS algorithm in the xy plane

Fig. 5 Scatter plots of the source location estimates in the yz plane. a Location estimation results for algorithm I in the yz plane. b Location estimation results for algorithm II in the yz plane. c Location estimation results for the TWLS algorithm in the yz plane

It can be readily observed from Figs. 4 and 5 that the estimation performances of the two new algorithms are essentially the same, since they yield the same CEP radius. Moreover, the two proposed algorithms achieve much higher localization accuracy than the TWLS algorithm in [59]. This performance improvement in location accuracy results mainly from the utilization of the calibration emitters.

The second set of experiments evaluates the estimation RMSEs of the different localization algorithms mentioned above. The simulation parameters are the same as those of the previous experiments, except that σRDOA, σP, and the norm of ρ (i.e., ‖ρ‖2) are varied. First, the standard deviation of the sensor position errors is set to σP = 5 (m) and the clock offset vector is fixed at ρ = [20  − 25 15  − 10]T (m); Figs. 6, 7, and 8 depict the RMSEs of the estimated target location, sensor position, and clock offset versus σRDOA, respectively. Subsequently, we set σRDOA = 10 (m) and ρ = [20  − 25 15  − 10]T (m); Figs. 9, 10, and 11 show the RMSEs of the target location, sensor position, and clock offset estimates as a function of σP, respectively. Next, σRDOA and σP are fixed at 10 (m) and 5 (m), respectively, and the direction of the clock offset vector is kept the same as that of ρ = [20  − 25 15  − 10]T (m); the RMSEs of the estimated target location, sensor position, and clock offset against ‖ρ‖2 are plotted in Figs. 12, 13, and 14, respectively.
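The RMSE curves in these figures are produced by repeating the Monte Carlo experiment at every noise level; the bookkeeping is sketched below (Python/NumPy), where `locate` is a hypothetical stand-in for any of the compared estimators and is replaced by a dummy perturbation of the true position only so that the loop is runnable:

```python
import numpy as np

rng = np.random.default_rng(2)
u_true = np.array([4000.0, 4000.0, 4000.0])
sigma_rdoa_grid = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]  # swept noise levels (m)
num_runs = 2000                                      # Monte Carlo runs per level

def locate(sigma_rdoa, rng):
    """Hypothetical stand-in for a localization algorithm: a real
    implementation would generate noisy RDOAs with this sigma and return
    the estimated source position."""
    return u_true + 3.0 * sigma_rdoa * rng.standard_normal(3)

for sigma in sigma_rdoa_grid:
    err = np.array([locate(sigma, rng) - u_true for _ in range(num_runs)])
    rmse = np.sqrt(np.mean(np.sum(err**2, axis=1)))
    print(f"sigma_RDOA = {sigma:5.1f} m -> target location RMSE = {rmse:7.2f} m")
```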

Fig. 6 RMSE of target location estimate versus σRDOA

Fig. 7 RMSE of sensor location estimate versus σRDOA

Fig. 8 RMSE of clock offset estimate versus σRDOA

Fig. 9 RMSE of target location estimate versus σP

Fig. 10 RMSE of sensor location estimate versus σP

Fig. 11 RMSE of clock offset estimate versus σP

Fig. 12 RMSE of target location estimate versus ‖ρ‖2

Fig. 13 RMSE of sensor location estimate versus ‖ρ‖2

Fig. 14 RMSE of clock offset estimate versus ‖ρ‖2

From Figs. 6, 7, 8, 9, 10, 11, 12, 13, and 14, it can be seen that the two new algorithms are both asymptotically efficient, since they attain the relevant CRBs obtained in Section 4.1. This confirms the effectiveness of the theoretical derivation carried out in Section 6. Moreover, the superiority of the proposed algorithms over the TWLS algorithm in [59], the Taylor-series iterative algorithm extended from [8], and the algorithm in [70] is noticeable. The reason is that the latter three algorithms do not exploit the measurements from the calibration sources; in other words, this significant performance gap is mainly due to the use of the calibration emitters. In addition, all three of these algorithms attain the CRB for the case of no calibration signal before the nonlinear effect dominates the performance. We also observe that the TWLS algorithm deviates from the CRB earlier than the other two algorithms. A possible reason is that the TWLS algorithm may generate complex values when taking the square root in its second stage. On the other hand, the TWLS algorithm is more computationally efficient and does not require initialization or iteration. Furthermore, the proposed algorithms outperform the DC algorithm, and the RMSE improvement grows as σRDOA increases. The RMSE of the DC algorithm is clearly larger than the associated CRB, which is consistent with the analytical result in [71]. More importantly, the DC algorithm can neither refine the sensor locations nor provide an estimate of the clock bias, whereas the proposed algorithms can. Finally, we would like to point out that the estimation accuracies of all the above algorithms do not depend on the norm of the clock offset vector, which is consistent with the CRB analysis in Section 4.

The third set of experiments studies the effect of the number of calibration sources. The 3D localization scenario comprises 16 sensors whose true locations are given in Table 4; they are used to locate a source through the RDOA measurements from both the target source and the calibration emitters. The covariance matrices of the RDOA measurements and sensor location errors are chosen in the same way as previously specified. As shown in Table 4, the sensors are separated into 5 sets and the sensors within the same set are close to each other. The clock offset vector is set as ρ = [−18 22  − 15 24]T (m). The target source is located at u = [5000 5000 5000]T (m), and its position is estimated with the help of the calibration signals. The estimation accuracies of the proposed algorithms are examined in three cases. In the first case, a single calibration source is used for target localization, and it is located at uc = [6800 5800 4500]T (m). The second case assumes that there are two calibration emitters, located at uc, 1 = [6800 5800 4500]T (m) and uc, 2 = [7200 5500 5600]T (m), respectively. In the third case, three calibration sources are deployed for locating the target, and their locations are set as uc, 1 = [6800 5800 4500]T (m), uc, 2 = [7200 5500 5600]T (m), and uc, 3 = [4000 3800 4200]T (m), respectively. Figures 15, 16, and 17 illustrate the RMSEs of the estimated target location, sensor position, and clock offset versus σRDOA, respectively, when σP = 5 (m). Figures 18, 19, and 20 plot the RMSEs of the target location, sensor position, and clock offset estimates as a function of σP, respectively, when σRDOA = 10 (m).

Table 4 Sensor positions in the unit of meters
Fig. 15 RMSE of target location estimate versus σRDOA for different numbers of calibration emitters

Fig. 16 RMSE of sensor location estimate versus σRDOA for different numbers of calibration emitters

Fig. 17 RMSE of clock offset estimate versus σRDOA for different numbers of calibration emitters

Fig. 18 RMSE of target location estimate versus σP for different numbers of calibration emitters

Fig. 19 RMSE of sensor location estimate versus σP for different numbers of calibration emitters

Fig. 20 RMSE of clock offset estimate versus σP for different numbers of calibration emitters

From Figs. 15, 16, 17, 18, 19, and 20, we can observe that the proposed algorithms yield solutions that attain the CRB accuracy under moderate noise levels. This finding further supports the theoretical development in Section 6. It can also be concluded that, as expected, the parameter estimation accuracy improves when more calibration sources are available for target localization. Moreover, the higher the noise level, the greater the performance gain in localization accuracy resulting from the increase in the number of calibration signals.

In the fourth experiment, we compare the running time of the proposed algorithms with that of the other TDOA localization algorithms mentioned above. The simulations are carried out using MATLAB R2017b on a ThinkPad laptop equipped with an Intel Core i7-7500 CPU and 8 GB RAM. The simulation settings are the same as those used to produce Figs. 4 and 5. Table 5 compares the average CPU time of the considered localization algorithms.
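The average running times reported in Table 5 can be tallied as in the brief sketch below (Python; `solver` is a hypothetical placeholder for any of the compared algorithms), i.e., by averaging the wall-clock time of one solver call over the Monte Carlo runs:

```python
import time

def average_runtime(solver, num_runs=2000):
    """Average wall-clock time (seconds) of one call to `solver`."""
    start = time.perf_counter()
    for _ in range(num_runs):
        solver()
    return (time.perf_counter() - start) / num_runs

# Dummy solver standing in for a localization algorithm
print(f"average time per run: {average_runtime(lambda: sum(range(10000))):.6f} s")
```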

Table 5 Comparison of the running time

The results in Table 5 show that the TWLS algorithm in [59] takes the least computation time, followed by the Taylor-series iterative algorithm extended from [8] and the algorithm in [70]. This observation is not unexpected, because none of these three algorithms takes advantage of the measurements from the calibration emitters. Moreover, the DC algorithm is more computationally efficient than the two proposed algorithms, since it neither refines the sensor locations nor estimates the clock bias. Finally, algorithm II is less computationally demanding than algorithm I because algorithm II does not refine the sensor locations in its second stage.

Finally, we would like to point out that although our proposed algorithms require more computation than the other algorithms, their computational complexity is still acceptable when they are executed onboard a node of a wireless sensor network (WSN). There are at least three reasons. First, the new algorithms have a fast convergence rate. Second, the dimension of the variables involved in the iteration is reduced in the proposed algorithms. Third, the algorithms can be implemented on a graphics processing unit (GPU), which supports parallel operations and offers much higher computation speed than a CPU. Implementing the new algorithms in a practical WSN is one of our future research directions.

8 Results and discussions

From the simulation results described above, we can observe that the two proposed algorithms are both asymptotically efficient because they attain the relevant CRBs given in Section 4.1. The superiority of the proposed algorithms over the TWLS algorithm in [59], the Taylor-series iterative algorithm extended from [8], and the algorithm in [70] is noticeable. The reason is that the latter three algorithms do not exploit the measurements from the calibration sources; in other words, this significant performance gap is mainly due to the use of the calibration emitters. In addition, the proposed algorithms outperform the DC algorithm, and the RMSE improvement grows as σRDOA increases, which is consistent with the analytical result in [71]. More importantly, the DC algorithm can neither refine the sensor locations nor provide an estimate of the clock bias, whereas the proposed algorithms can. It can also be concluded that, as expected, the parameter estimation accuracy improves when more calibration sources are available for target localization, and the higher the noise level, the greater the performance gain in localization accuracy resulting from the increase in the number of calibration signals.

9 Conclusions

This paper concentrates on the use of a set of calibration emitters at known locations when both clock offsets and sensor position errors are present. The localization scenario is similar to the one presented in [59], where the sensors are partially synchronized. The study begins with a CRB analysis that quantifies the performance gain provided by the calibration emitters over the case where they are not available. Some explicit and useful CRB expressions are obtained. The insight gained from the CRB indicates that the calibration sources can significantly reduce the effects of clock bias and sensor position errors. In order to obtain the optimum estimation accuracy, new TDOA localization methods using the measurements from both the target source and the calibration emitters are developed. Specifically, we propose two dimension-reduction Taylor-series iterative algorithms, both of which have two stages. The first stage estimates the clock offsets and refines the sensor positions based on the calibration TDOA measurements, and the statistical characteristics of the noisy sensor locations are incorporated into this computation. The second stage yields the estimate of the source location by combining the TDOA measurements of the target signal with the estimates obtained in the first stage. The theoretical MSEs of the two proposed algorithms are deduced by applying the first-order perturbation analysis. Moreover, the proposed methods are proven analytically to attain the CRB accuracy before the thresholding effect takes place. Simulations are conducted to support our theoretical development and to demonstrate the superiority of the proposed algorithms over the existing localization algorithms.

Finally, it should be mentioned that the present study assumes that the locations of the calibration sources are accurately known. In our future work, we intend to extend the proposed algorithms to more practical situations where the exact positions of the calibration emitters are not available. In addition, we intend to implement the new algorithms in a practical WSN through parallel GPU acceleration techniques.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.

Abbreviations

AOA: Angle of arrival

CEP: Circular error probability

CRB: Cramér–Rao bound

CTLS: Constrained total least squares

DC: Differential calibration

FDOA: Frequency difference of arrival

FIM: Fisher information matrix

FOA: Frequency of arrival

GPS: Global positioning systems

GPU: Graphics processing unit

GROA: Gain ratios of arrival

LOP: Lines of position

MDS: Multidimensional scaling

ML: Maximum likelihood

MSEs: Mean square errors

PDF: Probability density function

QCLS: Quadratic constraint least squares

RDOA: Range difference of arrival

RMSE: Root-mean-square error

RSS: Received signal strength

SI: Spherical interpolation

TDOA: Time difference of arrival

TOA: Time of arrival

TWLS: Two-step weighted least squares

WSN: Wireless sensor network

References

  1. K. Doğançay, Bearings-only target localization using total least squares [J]. Signal Processing 85(9), 1695–1710 (2005)

  2. K. Doğançay, H. Hmam, Optimal angular sensor separation for AOA localization [J]. Signal Processing 88(5), 1248–1260 (2008)

  3. Y. Wang, K.C. Ho, An asymptotically efficient estimator in closed-form for 3D AOA localization using a sensor network [J]. IEEE Transactions on Wireless Communications 14(12), 6524–6535 (2015)

  4. J.O. Smith, J.S. Abel, Closed-form least-squares source location estimation from range-difference measurements [J]. IEEE Transactions on Acoustics, Speech and Signal Processing 35(11), 1661–1669 (1987)

  5. Y.T. Chan, K.C. Ho, A simple and efficient estimator by hyperbolic location [J]. IEEE Transactions on Signal Processing 42(4), 1905–1915 (1994)

  6. Y. Huang, J. Benesty, G.W. Elko, R.M. Mersereau, Real-time passive source localization: a practical linear-correction least-squares approach [J]. IEEE Transactions on Speech and Audio Processing 9(8), 943–956 (2001)

  7. Z. Huang, J. Liu, Total least squares and equilibration algorithm for range difference location [J]. Electronics Letters 40(5), 121–122 (2004)

  8. L. Kovavisaruch, K.C. Ho, Modified Taylor-series method for source and receiver localization using TDOA measurements with erroneous receiver positions [A]. Proceedings of the IEEE International Symposium on Circuits and Systems [C]. (IEEE Press, Kobe, 2005), pp. 2295–2298

  9. K.H. Yang, G. Wang, Z.Q. Luo, Efficient convex relaxation methods for robust target localization by a sensor network using time differences of arrivals [J]. IEEE Transactions on Signal Processing 57(7), 2775–2784 (2009)

  10. L. Yang, K.C. Ho, An approximately efficient TDOA localization algorithm in closed-form for locating multiple disjoint sources with erroneous sensor positions [J]. IEEE Transactions on Signal Processing 57(12), 4598–4615 (2009)

  11. K. Yang, J.P. An, X.Y. Bu, G.C. Sun, Constrained total least-squares location algorithm using time-difference-of-arrival measurements [J]. IEEE Transactions on Vehicular Technology 59(3), 1558–1562 (2010)

  12. G. Wang, H.Y. Chen, An importance sampling method for TDOA-based source localization [J]. IEEE Transactions on Wireless Communications 10(5), 1560–1568 (2011)

  13. K.C. Ho, Bias reduction for an explicit solution of source localization using TDOA [J]. IEEE Transactions on Signal Processing 60(5), 2101–2114 (2012)

  14. L.X. Lin, H.C. So, F.K.W. Chan, Y.T. Chan, K.C. Ho, A new constrained weighted least squares algorithm for TDOA-based localization [J]. Signal Processing 93(11), 2872–2878 (2013)

  15. Y. Liu, F.C. Guo, L. Yang, W.L. Jiang, An improved algebraic solution for TDOA localization with sensor position errors [J]. IEEE Communications Letters 19(12), 2218–2221 (2015)

  16. Y.B. Zou, H.P. Liu, W. Xie, Q. Wan, Semidefinite programming methods for alleviating sensor position error in TDOA localization [J]. IEEE Access 5, 23111–23120 (2017)

  17. K.W. Cheung, H.C. So, Y.T. Chan, Least squares algorithms for time-of-arrival-based mobile location [J]. IEEE Transactions on Signal Processing 52(4), 1121–1128 (2004)

  18. F.K.W. Chan, H.C. So, J. Zheng, K.W.K. Lui, Best linear unbiased estimator approach for time-of-arrival based localisation [J]. IET Signal Processing 2(2), 156–162 (2008)

  19. Z.H. Ma, K.C. Ho, TOA localization in the presence of random sensor position errors [A]. Proceedings of the IEEE International Conference on Acoustic, Speech and Signal Processing [C] (IEEE Press, Prague, 2011), pp. 2468–2471

  20. M. Sun, L. Yang, K.C. Ho, Efficient joint source and sensor localization in closed-form [J]. IEEE Signal Processing Letters 19(7), 399–402 (2012)

  21. J.Z. Li, K.C. Ho, F.C. Guo, W.L. Jiang, Improving the projection method for TOA source localization in the presence of sensor position errors [A]. Proceedings of the IEEE Sensor Array and Multichannel Signal Processing Workshop [C] (IEEE Press, A Coruna, 2014), pp. 45–48

  22. N.H. Nguyen, K. Doğançay, Optimal geometry analysis for multistatic TOA localization [J]. IEEE Transactions on Signal Processing 64(16), 4180–4193 (2016)

  23. K.C. Ho, W. Xu, An accurate algebraic solution for moving source location using TDOA and FDOA measurements [J]. IEEE Transactions on Signal Processing 52(9), 2453–2463 (2004)

  24. K.C. Ho, X. Lu, L. Kovavisaruch, Source localization using TDOA and FDOA measurements in the presence of receiver location errors: analysis and solution [J]. IEEE Transactions on Signal Processing 55(2), 684–696 (2007)

  25. H.W. Wei, R. Peng, Q. Wan, Z.X. Chen, S.F. Ye, Multidimensional scaling analysis for passive moving target localization with TDOA and FDOA measurements [J]. IEEE Transactions on Signal Processing 58(3), 1677–1688 (2010)

  26. M. Sun, K.C. Ho, An asymptotically efficient estimator for TDOA and FDOA positioning of multiple disjoint sources in the presence of sensor location uncertainties [J]. IEEE Transactions on Signal Processing 59(7), 3434–3440 (2011)

  27. F.C. Guo, K.C. Ho, A quadratic constraint solution method for TDOA and FDOA localization [A]. Proceedings of the IEEE International Conference on Acoustic, Speech and Signal Processing [C] (IEEE Press, Prague, 2011), pp. 2588–2591

  28. H.G. Yu, G.M. Huang, J. Gao, B. Liu, An efficient constrained weighted least squares algorithm for moving source location using TDOA and FDOA measurements [J]. IEEE Transactions on Wireless Communications 11(1), 44–47 (2012)

  29. G. Wang, Y.M. Li, N. Ansari, A semidefinite relaxation method for source localization using TDOA and FDOA measurements [J]. IEEE Transactions on Vehicular Technology 62(2), 853–862 (2013)

  30. B.J. Hao, Z. Li, J.B. Si, L. Guan, Joint source localisation and sensor refinement using time differences of arrival and frequency differences of arrival [J]. IET Signal Processing 8(6), 588–600 (2014)

  31. X.M. Qu, L.H. Xie, W.R. Tan, Iterative constrained weighted least squares source localization using TDOA and FDOA measurements [J]. IEEE Transactions on Signal Processing 65(15), 3990–4003 (2017)

  32. A. Noroozi, A.H. Oveis, S.M. Hosseini, M.A. Sebt, Improved algebraic solution for source localization from TDOA and FDOA measurements [J]. IEEE Transactions on Wireless Communications 7(3), 352–355 (2018)

  33. J. Mason, Algebraic two-satellite TOA/FOA position solution on an ellipsoidal earth [J]. IEEE Transactions on Aerospace and Electronic Systems 40(7), 1087–1092 (2004)

  34. K.W. Cheung, H.C. So, W.K. Ma, Y.T. Chan, Received signal strength based mobile positioning via constrained weighted least squares [A]. Proceedings of the IEEE International Conference on Acoustic, Speech and Signal Processing [C] (IEEE Press, Hong Kong, 2003), pp. 137–140

  35. K.C. Ho, M. Sun, An accurate algebraic closed-form solution for energy-based source localization [J]. IEEE Transactions on Audio, Speech and Language Processing 15(8), 2542–2550 (2007)

  36. M.R. Gholami, R.M. Vaghefi, E.G. Ström, RSS-based sensor localization in the presence of unknown channel parameters [J]. IEEE Transactions on Signal Processing 61(15), 3752–3759 (2013)

  37. S. Tomic, M. Beko, R. Dinis, RSS-based localization in wireless sensor networks using convex relaxation: noncooperative and cooperative schemes [J]. IEEE Transactions on Vehicular Technology 64(5), 2037–2050 (2015)

  38. K.W. Cheung, H.C. So, W.K. Ma, Y.T. Chan, A constrained least squares approach to mobile positioning: algorithms and optimality [J]. EURASIP Journal on Applied Signal Processing, 1–23 (2006)

  39. K.C. Ho, M. Sun, Passive source localization using time differences of arrival and gain ratios of arrival [J]. IEEE Transactions on Signal Processing 56(2), 464–477 (2008)

  40. B.J. Hao, Z. Li, J.B. Si, W.Y. Yin, Y.M. Ren, Passive multiple disjoint sources localization using TDOAs and GROAs in the presence of sensor location uncertainties [A]. Proceedings of the IEEE Conference on Communications [C] (IEEE Press, Ottawa, 2012), pp. 47–52

  41. B.J. Hao, Z. Li, Y.M. Ren, W.Y. Yin, On the Cramer-Rao bound of multiple sources localization using RDOAs and GROAs in the presence of sensor location uncertainties [A]. Proceedings of the IEEE Wireless Communications and Networking Conference [C] (IEEE Press, Shanghai, 2012), pp. 3117–3122

  42. M. Wax, T. Kailath, Decentralized processing in sensor arrays [J]. IEEE Transactions on Signal Processing 33(4), 1123–1129 (1985)

  43. D. Dardari, A. Conti, U.J. Ferner, A. Giorgetti, M.Z. Win, Ranging with ultrawide bandwidth signals in multipath environments [J]. Proceedings of the IEEE 97(2), 404–426 (2009)

  44. Y. Shen, M.Z. Win, Fundamental limits of wideband localization-part I: a general framework [J]. IEEE Transactions on Information Theory 56(10), 4956–4980 (2010)

  45. Y. Shen, H. Wymeersch, M.Z. Win, Fundamental limits of wideband localization-part II: cooperative networks [J]. IEEE Transactions on Information Theory 56(10), 4981–5000 (2010)

  46. A. Coluccia, F. Ricciato, G. Ricci, Positioning based on signals of opportunity [J]. IEEE Communications Letters 18(2), 356–359 (2014)

  47. F. Bandiera, A. Coluccia, G. Ricci, F. Ricciato, D. Spano, TDOA localization in asynchronous WSNs [A]. Proceedings of the IEEE International Conference on Embedded and Ubiquitous Computing [C] (IEEE Press, Milano, 2014), pp. 193–196

  48. A. Catovic, Z. Sahinoglu, The Cramer-Rao bounds of hybrid TOA/RSS and TDOA/RSS location estimation schemes [J]. IEEE Communications Letters 8(10), 626–628 (2004)

  49. M. Laaraiedh, L. Yu, S. Avrillon, B. Uguen, Comparison of hybrid localization schemes using RSSI, TOA, and TDOA [A]. Proceedings of the European Wireless Conference [C] (IEEE Press, Vienna, 2011), pp. 1–5

  50. A. Coluccia, A. Fascista, On the hybrid TOA/RSS range estimation in wireless sensor networks [J]. IEEE Transactions on Wireless Communications 17(1), 361–371 (2018)

  51. A. Coluccia, F. Ricciato, Maximum Likelihood trajectory estimation of a mobile node from RSS measurements [A]. Proceedings of the IEEE/IFIP Annual Conference on Wireless On-Demand Network Systems and Services [C] (IEEE Press, Courmayeur, 2012), pp. 151–158

  52. A. Tahat, G. Kaddoum, S. Yousefi, S. Valaee, F. Gagnon, A look at the recent wireless positioning techniques with a focus on algorithms for moving receivers [J]. IEEE Access 4, 6652–6680 (2016)

  53. A. Fascista, G. Ciccarese, A. Coluccia, G. Ricci, Angle-of-arrival based cooperative positioning for smart vehicles [J]. IEEE Transactions on Intelligent Transportation Systems 19(9), 2880–2892 (2018)

  54. A. Guerra, F. Guidi, D. Dardari, Single-anchor localization and orientation performance limits using massive arrays: MIMO vs. Beamforming [J]. IEEE Transactions Wireless Communications 17(8), 5241–5255 (2018)

  55. K. Pahlavan, P. Krishnamurthy, Principles of wireless networks-a unified approach (Prentice-Hall, Englewood Cliffs, 2002)

  56. X.N. Lu, K.C. Ho, Analysis of the degradation in source location accuracy in the presence of sensor location error [A]. Proceedings of the IEEE International Conference on Acoustic, Speech and Signal Processing [C] (IEEE Press, Toulouse, France, USA, 2006), pp. 14–19

  57. D. Wang, J.X. Yin, T. Tang, X. Chen, Z.D. Wu, Quadratic constrained weighted least squares method for TDOA source localization in the presence of clock synchronization bias: analysis and solution [J]. Digital Signal Processing 82, 237–257 (2018)

  58. Y. Wang, J. Huang, L. Yang, Y. Xue, TOA-based joint synchronization and source localization with random errors in sensor positions and sensor clock biases [J]. Ad Hoc Networks 27(C), 99–111 (2015)

  59. Y. Wang, K.C. Ho, TDOA source localization in the presence of synchronization clock bias and sensor position errors [J]. IEEE Transactions on Signal Processing 61(18), 4532–4544 (2013)

  60. J. Zheng, Y.C. Wu, Joint time synchronization and localization of an unknown node in wireless sensor networks [J]. IEEE Transactions on Signal Processing 58(3), 1309–1320 (2010)

  61. M.R. Gholami, S. Gezici, E.G. Ström, TDOA based positioning in the presence of unknown clock skew [J]. IEEE Transactions on Communications 61(6), 2522–2534 (2013)

  62. A. Ahmad, E. Serpedin, H. Nounou, M. Nounou, Joint node localization and time-varying clock synchronization in wireless sensor networks [J]. IEEE Transactions on Wireless Communications 12(10), 5322–5333 (2013)

  63. L.Y. Rui, K.C. Ho, Algebraic solution for joint localization and synchronization of multiple sensor nodes in the presence of beacon uncertainties [J]. IEEE Transactions on Wireless Communications 13(9), 5196–5210 (2014)

  64. R.M. Vaghefi, R.M. Buehrer, Cooperative joint synchronization and localization in wireless sensor networks [J]. IEEE Transactions on Signal Processing 58(3), 1309–1320 (2015)

  65. H. Xiong, Z.Y. Chen, B.Y. Yang, R.P. Ni, TDOA localization algorithm with compensation of clock offset for wireless sensor networks [J]. China Communication 12(10), 193–201 (2015)

  66. H. Xiong, Z.Y. Chen, W. An, B.Y. Yang, Robust TDOA localization algorithm for asynchronous wireless sensor networks [J]. International Journal of Distributed Sensor Networks 2015(10), 1–10 (2015)

  67. H. Naseri, V. Koivunen, Cooperative joint synchronization and localization using time delay measurements [A]. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing [C] (IEEE Press, Shanghai, 2016), pp. 3146–3150

  68. X.P. Wu, Z.H. Gu, A joint time synchronization and localization method without known clock parameters [J]. Pervasive and Mobile Computing 37(7), 154–170 (2017)

  69. Y. Zou, Q. Wan, H. Liu, Semidefinite programming for TDOA localization with locally synchronized anchor nodes [A]. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing [C] (IEEE Press, Calgary, Canada, April 2018)

  70. X. Chen, D. Wang, J.X. Yin, C.G. Jia, Y. Wu, Bias reduction for TDOA localization in the presence of receiver position errors and synchronization clock bias [J]. EURASIP Journal on Advances in Signal Processing 7, 1–26 (2019)

  71. K.C. Ho, On the use of a calibration emitter for source localization in the presence of sensor position uncertainty [J]. IEEE Transactions on Signal Processing 56(12), 5758–5772 (2008)

  72. J.Z. Li, F.C. Guo, W.L. Jiang, Source localization and calibration using TDOA and FDOA measurements in the presence of sensor location uncertainty [J]. Science China Information Sciences 57(4), 1–12 (2014)

  73. J.Z. Li, F.C. Guo, L. Yang, W.L. Jiang, H.W. Pang, On the use of calibration sensors in source localization using TDOA and FDOA measurements [J]. Digital Signal Processing 27(4), 33–43 (2014)

  74. L. Zhang, D. Wang, H.Y. Yu, A ML method for TDOA and FDOA localization in the presence of receiver and calibration source location errors [A]. Proceedings of the IEEE Conference on Information and Communications Technologies [C] (IEEE Press, Nanjing, 2014), pp. 1–5

  75. L. Yang, K.C. Ho, Alleviating sensor position error in source localization using calibration emitters at inaccurate locations [J]. IEEE Transactions on Signal Processing 58(1), 67–83 (2010)

  76. L. Yang, K.C. Ho, On using multiple calibration emitters and their geometric effects for removing sensor position errors in TDOA localization [A]. Proceedings of the IEEE International Conference on Acoustic, Speech and Signal Processing[C] (IEEE Press, Dallas, 2010), pp. 2702–2705

  77. J.Z. Li, F.C. Guo, W.L. Jiang, Z. Liu, Multiple disjoint sources localization with the use of calibration emitters [A]. Proceedings of the IEEE Radar Conference [C] (IEEE Press, Atlanta, 2012), pp. 34–39

  78. Z.H. Ma, K.C. Ho, A study on the effects of sensor position error and the placement of calibration emitter for source localization [J]. IEEE Transactions on Wireless Communications 13(10), 5440–5452 (2014)

  79. O. Jean, A.J. Weiss, Passive localization and synchronization using arbitrary signals [J]. IEEE Transactions on Signal Processing 62(8), 2143–2150 (2014)

  80. G.C. Carter, Time delay estimation for passive sonar signal processing [J]. IEEE Transactions on Acoustics, Speech, and Signal Processing 29(3), 463–470 (1981)

  81. W.R. Hahn, S.A. Tretter, Optimum processing for delay-vector estimation in passive signal arrays [J]. IEEE Transactions on Information Theory 19(5), 608–614 (1973)

  82. E. Weinstein, Kletter. Delay and Doppler estimation by time-space partition of the array data [J]. IEEE Transactions on Aerospace and Electronic Systems 31(6), 1523–1535 (1983)

  83. B. Friedlander, On the Cramer-Rao bound for time delay and Doppler estimation [J]. IEEE Transactions on Information Theory 30(3), 575–580 (1984)

  84. T. Strutz, Data Fitting and Uncertainty: A Practical Introduction to Weighted Least Squares and Beyond (2nd edition) (Vieweg, Springer, 2016)

  85. C.G. Jia, J.X. Yin, D. Wang, Y.L. Wang, L. Zhang, Semidefinite relaxation algorithm for multisource localization using TDOA measurements with range constraints [J]. Wireless Communications and Mobile Computing, Article ID 9430180 (2018)

  86. M. Viberg, B. Ottersten, Sensor array processing based on subspace fitting [J]. IEEE Transactions on Signal Processing 39(5), 1110–1121 (1991)

  87. M. Viberg, A.L. Swindlehurst, A Bayesian approach to auto-calibration for parametric array signal processing [J]. IEEE Transactions on Signal Processing 42(12), 3495–3507 (1994)

  88. W.H. Foy, Position-location solution by Taylor-series estimation [J]. IEEE Transactions on Aerospace and Electronic Systems 12(2), 187–194 (1976)

Acknowledgements

The authors acknowledge the support from the National Natural Science Foundation of China (Grant No. 61201381, No. 61401513, and No. 61772548), the China Postdoctoral Science Foundation (Grant No. 2016M592989), the Self-Topic Foundation of Information Engineering University (Grant No. 2016600701), and the Outstanding Youth Foundation of Information Engineering University (Grant No. 2016603201).

Author information

Authors and Affiliations

Authors

Contributions

DW and JY derived and developed the algorithms. XC and CJ conceived of and designed the simulations. XC and JY performed the simulations. DW and FW analyzed the results. DW wrote the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jiexin Yin.

Ethics declarations

Ethics approval and consent to participate

All data and procedures performed in this paper were in accordance with the ethical standards of the research community. This paper does not contain any studies with human participants or animals performed by any of the authors.

Consent for publication

Informed consent was obtained from all authors included in the study.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1

1.1 Proof of (18)

Combining (11), (12) and the matrix identity (I) in Table 2 produces

$$ \mathbf{CRB}\left(\mathbf{u}\right)={\left({\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\left({\mathbf{Q}}^{-1}-{\mathbf{Q}}^{-1}{\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}{\mathbf{Z}}^{-1}\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]{\mathbf{Q}}^{-1}\right){\mathbf{F}}_1\Big(\mathbf{u},\mathbf{w}\Big)\right)}^{-1} $$
(78)

In addition, it can be checked from the third equality in (12) and (19) that

$$ \mathbf{Z}=\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]{\mathbf{Q}}^{-1}{\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}+{\boldsymbol{\Phi}}^{-1} $$
(79)

which combined with the matrix identity (II) in Table 2 gives

$$ {\left(\mathbf{Q}+{\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}\boldsymbol{\Phi} \left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]\right)}^{-1}={\mathbf{Q}}^{-1}-{\mathbf{Q}}^{-1}{\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}{\mathbf{Z}}^{-1}\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]{\mathbf{Q}}^{-1} $$
(80)

Putting (80) back into (78) proves (18).
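As a side remark, (80) is an instance of the matrix inversion lemma, which in generic notation reads

$$ {\left(\mathbf{A}+\mathbf{U}\boldsymbol{\Phi} {\mathbf{U}}^{\mathrm{T}}\right)}^{-1}={\mathbf{A}}^{-1}-{\mathbf{A}}^{-1}\mathbf{U}{\left({\boldsymbol{\Phi}}^{-1}+{\mathbf{U}}^{\mathrm{T}}{\mathbf{A}}^{-1}\mathbf{U}\right)}^{-1}{\mathbf{U}}^{\mathrm{T}}{\mathbf{A}}^{-1} $$

Setting \( \mathbf{A}=\mathbf{Q} \) and \( {\mathbf{U}}^{\mathrm{T}}=\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right] \), the inner inverse becomes exactly Z in (79), which yields (80).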

Appendix 2

1.1 Expressions of some CRB matrices

First, using the matrix identity (I) in Table 2 produces

$$ \mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{u}\\ {}\mathbf{w}\end{array}\right]\right)={\left(\begin{array}{c}\left[\begin{array}{cc}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)+{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\end{array}\right]\\ {}-\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]{\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]}^{\mathrm{T}}\end{array}\right)}^{-1} $$
(81)
$$ \mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right)={\left(\begin{array}{c}\left[\begin{array}{cc}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)+{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}& {\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)+{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)& {\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]\\ {}-\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right){\left({\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{-1}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}\end{array}\right)}^{-1} $$
(82)
$$ \mathbf{CRB}\left(\boldsymbol{\uprho} \right)={\left(\begin{array}{c}{\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}-{\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]}^{\mathrm{T}}\\ {}\times {\left[\begin{array}{cc}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& \begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}+{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\end{array}\end{array}\right]}^{-1}\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]\end{array}\right)}^{-1} $$
(83)

Subsequently, from (82) and the matrix identity (III) in Table 2, we have

$$ \mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{w}\\ {}\boldsymbol{\uprho} \end{array}\right]\right)={\left(\left[\begin{array}{cc}{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}& {\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\\ {}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)& {\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]-\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]{\mathbf{Q}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right]{\mathbf{Q}}^{-1/2}{\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}\right)}^{-1} $$
(84)

Combining (83) and the matrix identity (II) in Table 2 yields

$$ {\displaystyle \begin{array}{c}\mathbf{CRB}\left(\boldsymbol{\uprho} \right)={\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}+{\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]}^{\mathrm{T}}\mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{u}\\ {}\mathbf{w}\end{array}\right]\right)\\ {}\times \left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]{\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}\end{array}} $$
(85)

Appendix 3

1.1 Proof of \( {\mathbf{A}}_3={\mathbf{O}}_{\left(N-1\right)\times \left(N-1\right)} \)

It follows from (33) that

$$ {\displaystyle \begin{array}{c}\mathrm{E}\left[{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}{\left(\Delta {\mathbf{w}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right]=\mathrm{E}\left[{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}^{\mathrm{T}}\right]\cdot {\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{F}}\left(\mathbf{w}\right){\left({\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\right)}^{-1}\\ {}={\mathbf{Q}}_{\mathrm{c}}^{1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{F}}\left(\mathbf{w}\right){\left({\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\right)}^{-1}\end{array}} $$
(86)

Then, we have

$$ {\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\cdot \mathrm{E}\left[{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}{\left(\Delta {\mathbf{w}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right]={\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{F}}\left(\mathbf{w}\right){\left({\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\right)}^{-1}={\mathbf{O}}_{\left(N-1\right)\times 3M} $$
(87)

which follows from the relation \( {\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}={\mathbf{O}}_{D\left(M-1\right)\times \left(N-1\right)} \). Inserting (87) into the expression of A3 produces

$$ {\mathbf{A}}_3=-{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\cdot \mathrm{E}\left[{\boldsymbol{\upvarepsilon}}_{\mathrm{c}}{\left(\Delta {\mathbf{w}}_{\mathrm{f}}\right)}^{\mathrm{T}}\right]\cdot {\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}{\left({\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}={\mathbf{O}}_{\left(N-1\right)\times \left(N-1\right)} $$
(88)

which completes the derivation.
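As a remark, the vanishing relation used in (87) relies only on the defining property of the orthogonal projection matrix. Assuming the usual definition for a matrix A with full column rank,

$$ {\boldsymbol{\Pi}}^{\perp}\left[\mathbf{A}\right]=\mathbf{I}-\mathbf{A}{\left({\mathbf{A}}^{\mathrm{T}}\mathbf{A}\right)}^{-1}{\mathbf{A}}^{\mathrm{T}},\kern2em {\boldsymbol{\Pi}}^{\perp}\left[\mathbf{A}\right]\mathbf{A}=\mathbf{O} $$

applying this with \( \mathbf{A}={\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}} \) gives \( {\boldsymbol{\Pi}}^{\perp}\left[{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}\right]{\mathbf{Q}}_{\mathrm{c}}^{-1/2}\overline{\boldsymbol{\Gamma}}={\mathbf{O}}_{D\left(M-1\right)\times \left(N-1\right)} \).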

Appendix 4

1.1 Numerical complexities of the two proposed algorithms

Tables 6 and 7 list the numerical complexities of the two proposed algorithms, respectively, expressed in the number of multiplication operations.

Table 6 Computational complexity of algorithm I
Table 7 Computational complexity of algorithm II

Appendix 5

1.1 Proof of \( \mathbf{MSE}\left(\left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}\end{array}\right]\right)=\mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{u}\\ {}\mathbf{w}\end{array}\right]\right) \)

Putting (62) and the matrix identity (III) in Table 2 together gives

$$ {\displaystyle \begin{array}{c}{\left(\mathbf{MSE}\left(\left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}\end{array}\right]\right)\right)}^{-1}=\left[\begin{array}{cc}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)+{\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_1\end{array}\right]\\ {}-\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_2\end{array}\right]{\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Psi}}_2\right)}^{-1}{\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\boldsymbol{\Psi}}_1^{\mathrm{T}}{\boldsymbol{\Psi}}_2\end{array}\right]}^{\mathrm{T}}\end{array}} $$
(89)

Substituting (49) into (89) and using (81), we have

$$ {\displaystyle \begin{array}{c}{\left(\mathbf{MSE}\left(\left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}\end{array}\right]\right)\right)}^{-1}=\left[\begin{array}{cc}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)+{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\mathbf{F}}\left(\mathbf{w}\right)+{\mathbf{P}}^{-1}\end{array}\right]\\ {}-\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]{\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\overline{\boldsymbol{\Gamma}}}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\right)}^{-1}{\left[\begin{array}{c}{\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} \\ {}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\Gamma} +{\left(\overline{\mathbf{F}}\left(\mathbf{w}\right)\right)}^{\mathrm{T}}{\mathbf{Q}}_{\mathrm{c}}^{-1}\overline{\boldsymbol{\Gamma}}\end{array}\right]}^{\mathrm{T}}\\ {}={\left(\mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{u}\\ {}\mathbf{w}\end{array}\right]\right)\right)}^{-1}\end{array}} $$
(90)

which implies \( \mathbf{MSE}\left(\left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}\end{array}\right]\right)=\mathbf{CRB}\left(\left[\begin{array}{c}\mathbf{u}\\ {}\mathbf{w}\end{array}\right]\right) \), thereby completing the proof.
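The manipulation of (89) and (90) rests, at its core, on the standard partitioned-inverse (Schur complement) identity. The short numerical check below uses a random positive-definite matrix rather than the paper's Fisher information matrix; it confirms that the top-left block of the inverse equals the inverse of the Schur complement of the nuisance block.

```python
import numpy as np

# Illustrative check (not from the paper) of the block-inversion/Schur-complement fact
# behind (89)-(90): for a partitioned positive-definite matrix J = [[J11, J12], [J12^T, J22]],
# the inverse of the top-left block of J^{-1} equals J11 - J12 J22^{-1} J12^T, i.e. the
# "CRB" of the parameters of interest is the full block minus the nuisance-parameter term.
rng = np.random.default_rng(1)
k, n = 3, 8                                   # 3 parameters of interest, 5 nuisance terms
A = rng.standard_normal((n, n))
J = A @ A.T + n * np.eye(n)                   # random symmetric positive-definite matrix
J11, J12, J22 = J[:k, :k], J[:k, k:], J[k:, k:]

crb_block = np.linalg.inv(J)[:k, :k]          # top-left block of the full inverse
schur = J11 - J12 @ np.linalg.solve(J22, J12.T)
print(np.linalg.norm(np.linalg.inv(crb_block) - schur))   # ~1e-12
```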

Appendix 6

1.1 Proof of \( {\mathbf{B}}_3={\mathbf{O}}_{\left(N-1\right)\times \left(N-1\right)} \)

It can be verified from (60) and (62) that

$$ {\displaystyle \begin{array}{l}\mathrm{E}\left(\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\upvarepsilon} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Phi}}^{-1/2}\left[\begin{array}{c}\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\right){\left[\begin{array}{c}\Delta {\mathbf{u}}_{\mathrm{s}1}\\ {}\Delta {\mathbf{w}}_{\mathrm{s}1}\end{array}\right]}^{\mathrm{T}}\right)\\ {}=\mathrm{E}\left(\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\upvarepsilon} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Phi}}^{-1/2}\left[\begin{array}{c}\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\right){\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\upvarepsilon} \\ {}{\boldsymbol{\Psi}}_1\Delta {\mathbf{w}}_{\mathrm{f}}+{\boldsymbol{\Psi}}_2\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]}^{\mathrm{T}}\right){\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]\mathbf{MSE}\left(\left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}\end{array}\right]\right)\end{array}} $$
(91)

In addition, from (43), we obtain

$$ \mathrm{E}\left(\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\upvarepsilon} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Phi}}^{-1/2}\left[\begin{array}{c}\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\right){\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\upvarepsilon} \\ {}{\boldsymbol{\Psi}}_1\Delta {\mathbf{w}}_{\mathrm{f}}+{\boldsymbol{\Psi}}_2\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]}^{\mathrm{T}}\right)={\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]}^{\mathrm{T}} $$
(92)

The substitution of (92) into (91) yields

$$ \mathrm{E}\left(\left({\boldsymbol{\Gamma}}^{\mathrm{T}}{\mathbf{Q}}^{-1}\boldsymbol{\upvarepsilon} +{\boldsymbol{\Psi}}_2^{\mathrm{T}}{\boldsymbol{\Phi}}^{-1/2}\left[\begin{array}{c}\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\right){\left[\begin{array}{c}\Delta {\mathbf{u}}_{\mathrm{s}1}\\ {}\Delta {\mathbf{w}}_{\mathrm{s}1}\end{array}\right]}^{\mathrm{T}}\right)={\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]}^{\mathrm{T}}{\boldsymbol{\Pi}}^{\perp}\left(\left[\begin{array}{c}{\mathbf{Q}}^{-1/2}\boldsymbol{\Gamma} \\ {}{\boldsymbol{\Psi}}_2\end{array}\right]\right)\left[\begin{array}{cc}{\mathbf{Q}}^{-1/2}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)& {\mathbf{Q}}^{-1/2}{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(3M+N-1\right)\times 3}& {\boldsymbol{\Psi}}_1\end{array}\right]\mathbf{MSE}\left(\left[\begin{array}{c}{\hat{\mathbf{u}}}_{\mathrm{s}1}\\ {}{\hat{\mathbf{w}}}_{\mathrm{s}1}\end{array}\right]\right)={\mathbf{O}}_{\left(N-1\right)\times \left(3M+3\right)} $$
(93)

Inserting (93) into the expression of B3 yields \( {\mathbf{B}}_3={\mathbf{O}}_{\left(N-1\right)\times \left(N-1\right)} \), which completes the derivation.
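The statistical content of B3 = O is that the first-stage weighted residual carries no information about the estimation error. The self-contained sketch below reproduces this orthogonality exactly for a generic linear Gaussian model; it is an illustration of the same algebra, not the TDOA model of the paper.

```python
import numpy as np

# Generic illustration (not the paper's TDOA model) of the fact behind B_3 = O:
# in a linear Gaussian model y = A theta + n with cov(n) = Q, the weighted
# least-squares estimation error is uncorrelated with the fitted residual, because
# the cross-covariance reduces to a term containing (Q^{-1/2} A)^T P_perp(Q^{-1/2} A) = O.
rng = np.random.default_rng(2)
m, k = 10, 3
A = rng.standard_normal((m, k))
L = rng.standard_normal((m, m))
Q = L @ L.T + m * np.eye(m)                   # arbitrary positive-definite noise covariance
Qi = np.linalg.inv(Q)

G = np.linalg.solve(A.T @ Qi @ A, A.T @ Qi)   # estimator gain: theta_hat - theta = G n
R = np.eye(m) - A @ G                         # residual map:   y - A theta_hat = R n
cross_cov = G @ Q @ R.T                       # E[(theta_hat - theta)(y - A theta_hat)^T]
print(np.linalg.norm(cross_cov))              # ~1e-15: error and residual are uncorrelated
```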

Appendix 7

1.1 Proof of \( \mathbf{MSE}\left({\hat{\mathbf{u}}}_{\mathrm{s}2}\right)=\mathbf{CRB}\left(\mathbf{u}\right) \)

First, it can be verified from (52) and (53) that

$$ \mathbf{Q}+{\left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}\boldsymbol{\Phi} \left[\begin{array}{c}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\\ {}{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]={\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}\left[\begin{array}{cc}\mathbf{Q}+{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right){\boldsymbol{\Phi}}_1{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}& -{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right){\boldsymbol{\Phi}}_2\\ {}-{\boldsymbol{\Phi}}_2^{\mathrm{T}}{\left({\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}& {\boldsymbol{\Phi}}_3\end{array}\right]\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]={\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right] $$
(94)

Inserting (94) into (18) gives

$$ \mathbf{CRB}\left(\mathbf{u}\right)={\left({\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\left({\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]\right)}^{-1}{\mathbf{F}}_1\Big(\mathbf{u},\mathbf{w}\Big)\right)}^{-1} $$
(95)

On the other hand, from (71), we obtain

$$ \mathbf{MSE}\left({\hat{\mathbf{u}}}_{\mathrm{s}2}\right)={\left({\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}\left({\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}{\mathbf{O}}_{\left(N-1\right)\times \left(M-1\right)}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}{\mathbf{O}}_{\left(N-1\right)\times \left(M-1\right)}\end{array}\right]\right){\mathbf{F}}_1\Big(\mathbf{u},\mathbf{w}\Big)\right)}^{-1} $$
(96)

Moreover, it can be checked that

$$ \left\{\begin{array}{l}{\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right)}^{\mathrm{T}}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{1/2}\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]\right)={\boldsymbol{\Gamma}}^{\mathrm{T}}-{\boldsymbol{\Gamma}}^{\mathrm{T}}={\mathbf{O}}_{\left(N-1\right)\times \left(M-1\right)}\\ {}\operatorname{rank}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right)=N-1\kern1em ,\kern1em \operatorname{rank}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{1/2}\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]\right)=M-1\end{array}\right. $$
(97)

which implies

$$ \mathrm{range}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right)\perp \mathrm{range}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{1/2}\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]\right) $$
(98)

It follows from (98) that

$$ {\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right)=\boldsymbol{\Pi} \left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{1/2}\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]\right)={\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{1/2}\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]{\left({\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]\right)}^{-1}{\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{1/2} $$
(99)

Inserting (99) into (96) and using (95), we get

$$ \mathbf{MSE}\left({\hat{\mathbf{u}}}_{\mathrm{s}2}\right)={\left({\left({\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\right)}^{\mathrm{T}}{\left({\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]}^{\mathrm{T}}\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\left[\begin{array}{c}{\mathbf{I}}_{M-1}\\ {}-{\boldsymbol{\Gamma}}^{\mathrm{T}}\end{array}\right]\right)}^{-1}{\mathbf{F}}_1\Big(\mathbf{u},\mathbf{w}\Big)\right)}^{-1}=\mathbf{CRB}\left(\mathbf{u}\right) $$
(100)

which completes the proof.
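The decisive step is (99), which identifies the orthogonal-complement projector of the first block matrix in (97) with the orthogonal projector onto the range of the second. The sketch below checks this identity numerically for a random positive-definite Ω and a random Γ of compatible sizes; all symbols in the code are illustrative stand-ins rather than the paper's quantities.

```python
import numpy as np

# Illustrative check (not from the paper) of the complementary-range argument in
# (97)-(99): with Omega positive definite and Gamma of size (M-1) x (N-1), the
# orthogonal-complement projector of Omega^{-1/2} [Gamma; I] coincides with the
# orthogonal projector onto the range of Omega^{1/2} [I; -Gamma^T].
rng = np.random.default_rng(3)
M, N = 6, 4
Gamma = rng.standard_normal((M - 1, N - 1))
L = rng.standard_normal((M + N - 2, M + N - 2))
Omega = L @ L.T + (M + N) * np.eye(M + N - 2)

# Symmetric square roots of Omega via its eigendecomposition
w, V = np.linalg.eigh(Omega)
Om_h = V @ np.diag(np.sqrt(w)) @ V.T          # Omega^{1/2}
Om_mh = V @ np.diag(1.0 / np.sqrt(w)) @ V.T   # Omega^{-1/2}

def proj(X):
    """Orthogonal projector onto range(X)."""
    return X @ np.linalg.solve(X.T @ X, X.T)

U = Om_mh @ np.vstack([Gamma, np.eye(N - 1)])            # Omega^{-1/2} [Gamma; I]
W = Om_h @ np.vstack([np.eye(M - 1), -Gamma.T])          # Omega^{ 1/2} [I; -Gamma^T]
P_perp = np.eye(M + N - 2) - proj(U)
print(np.linalg.norm(U.T @ W))                           # ~1e-13: ranges orthogonal, cf. (97)-(98)
print(np.linalg.norm(P_perp - proj(W)))                  # ~1e-13: Pi_perp(U) = Pi(W), i.e. (99)
```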

Appendix 8

1.1 Proof of \( {\mathbf{C}}_3={\mathbf{O}}_{\left(N-1\right)\times \left(N-1\right)} \)

Using (70) and (71), we obtain

$$ {\displaystyle \begin{array}{c}\mathrm{E}\left(\left[\begin{array}{c}\boldsymbol{\upvarepsilon} -{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\Delta {\mathbf{u}}_{\mathrm{s}2}^{\mathrm{T}}\right)\\ {}=\mathrm{E}\left(\left[\begin{array}{c}\boldsymbol{\upvarepsilon} -{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]{\left[\begin{array}{c}\boldsymbol{\upvarepsilon} -{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]}^{\mathrm{T}}\right){\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\mathbf{MSE}\left({\hat{\mathbf{u}}}_{\mathrm{s}2}\right)\\ {}={\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{1/2}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\mathbf{MSE}\left({\hat{\mathbf{u}}}_{\mathrm{s}2}\right)\end{array}} $$
(101)

where the second equality follows from (52). From (101), it can be checked that

$$ {\displaystyle \begin{array}{l}{\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1}\cdot \mathrm{E}\left(\left[\begin{array}{c}\boldsymbol{\upvarepsilon} -{\mathbf{F}}_2\left(\mathbf{u},\mathbf{w}\right)\Delta {\mathbf{w}}_{\mathrm{f}}\\ {}\Delta {\boldsymbol{\uprho}}_{\mathrm{f}}\end{array}\right]\Delta {\mathbf{u}}_{\mathrm{s}2}^{\mathrm{T}}\right)\\ {}={\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]}^{\mathrm{T}}{\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}{\boldsymbol{\Pi}}^{\perp}\left({\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}\boldsymbol{\Gamma} \\ {}{\mathbf{I}}_{N-1}\end{array}\right]\right){\left(\boldsymbol{\Omega} \left(\mathbf{u},\mathbf{w}\right)\right)}^{-1/2}\left[\begin{array}{c}{\mathbf{F}}_1\left(\mathbf{u},\mathbf{w}\right)\\ {}{\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}\right]\mathbf{MSE}\left({\hat{\mathbf{u}}}_{\mathrm{s}2}\right)={\mathbf{O}}_{\left(N-1\right)\times 3}\end{array}} $$
(102)

Substituting (102) into the expression of C3 yields \( {\mathbf{C}}_3={\mathbf{O}}_{\left(N-1\right)\times \left(N-1\right)} \), which completes the derivation.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Wang, D., Yin, J., Chen, X. et al. On the use of calibration emitters for TDOA source localization in the presence of synchronization clock bias and sensor location errors. EURASIP J. Adv. Signal Process. 2019, 37 (2019). https://doi.org/10.1186/s13634-019-0629-1


Keywords