# Localization based on two-stage treatment for dealing with noisy and biased distance measurements

## Abstract

Localization can be performed by trilateration, in which the coordinates of a target are calculated by using the coordinates of reference points and the distances between each reference point and the target. Because the distances are measured on the basis of the time-of-flight of various kinds of signals, they contain errors, namely noise and bias. The presence of bias can become a major problem because its magnitude is generally unknown. In this article, we propose an algorithm that combines the Kalman filter (KF) and the least square (LS) algorithm to treat noisy and biased distances measured by the chirp spread spectrum ranging defined in IEEE 802.15.4a. By using the KF, we remove the noise in the measured distance; hence, the noise-eliminated distance, which still contains bias, is obtained. The next step consists of the calculation of the target coordinates by using the weighted LS algorithm. This algorithm uses the noise-eliminated distance obtained by the KF, and its weighting parameters are determined so as to reduce the effects of bias. To confirm the accuracy of the proposed algorithm, we present the results of indoor localization experiments.

## Introduction

Trilateration is a method used to calculate the coordinates of a target. For two-dimensional localization, it requires at least three reference points whose coordinates are known in advance and involves the determination of the distance between the target and the individual reference points. Geometrically, there is only one point of intersection of all the circles whose radii and center coordinates are equal to the measured distances and the coordinates of the reference points, respectively. The coordinates of this intersection are those of the target.

In practice, since all the distances are mostly measured on the basis of the time-of-flight of various kinds of signals, namely laser, ultrasonic, and electromagnetic signals, they are only approximately equal to the true distances. Furthermore, it is often the case that not all circles intersect one another because of the errors in the measured distances. As a consequence, the least square (LS) algorithm and the extended Kalman filter (EKF) have been applied to the trilateration method.

In the vast majority of studies, it is assumed that the errors in the measured distance are caused by zero-mean white Gaussian noise. In this case, the LS algorithm and the EKF give the optimal solution. However, in actual situations, the measured distances are not only corrupted by noise but also deteriorated by bias, which is caused by non-line-of-sight conditions and multipath propagation of the ranging signal. When only zero-mean noise is added to the measured distance, the expected value of the measured distance is equal to the true distance. On the other hand, when bias exists, the expected value is not equal to the true distance. The combination of noise and bias can be regarded as nonzero-mean noise with an unknown mean value. Therefore, the bias can result in errors even if the LS algorithm or the EKF is used.

The use of the Kalman filter (KF) for the bias has been extensively studied[12, 13]. In these studies, the bias is regarded as an additional state to be estimated, and the augmented state vector is defined as the combination of the original states and the bias. The general KF estimates the augmented state vector directly, whereas the two-stage KF, which consists of a bias-free filter and a bias filter, estimates it separately. The estimates of the two-stage KF are the same as those of the general KF, but the former has a numerical advantage. Furthermore, a two-stage EKF, the extension for estimating the state vector of nonlinear state space (SS) equations, has been developed[14, 15].

However, the EKF cannot deal with the bias in the trilateration model because of an observability problem. When the SS equations that model trilateration are unobservable, the estimate does not converge to a meaningful solution owing to the lack of measurement information. The observability of the equations can be investigated by using the Fisher information matrix[17, 18]. According to our analysis, the equations are unobservable, which means that most conventional approaches, such as the EKF, cannot accurately estimate all of the original states and the biases.

In this article, therefore, we propose an algorithm for treating both noise and bias in distances measured by chirp spread spectrum (CSS) ranging. The CSS is defined in IEEE 802.15.4a as an alternative physical layer, and we can measure distances by CSS ranging[8, 9, 19–21]. However, CSS ranging is biased, so a localization algorithm to solve the bias problem has been proposed. That algorithm uses an approximated bias model of CSS ranging and applies the model to the EKF; therefore, when the model is modified, the equations of the algorithm have to be changed. In contrast, the proposed algorithm is divided into two stages. In the first stage, the noise in the distance measurement is eliminated by using the KF. In the second stage, the target coordinates are calculated on the basis of the noise-eliminated distances obtained in the first stage. For this calculation, we use a weighted least square (WLS) algorithm, and the weighting parameters play an important role in reducing the effect of bias. Therefore, when the characteristics of the bias are reconsidered, we only need to change the equation that determines the weighting parameters. To confirm the accuracy of the proposed algorithm, we present the results of indoor localization experiments based on CSS ranging.

This article is organized as follows. In Section “Conventional algorithms for trilateration”, we provide an introduction to trilateration and conventional methods used for solving trilateration problems. In Section “Biased distance model and its observability”, we describe the EKF algorithm used for biased measurements. We also argue that the EKF algorithm is not suitable for treating the bias in the case of trilateration because of the observability problem. In Section “Proposed algorithm”, we present the proposed algorithm. In Section “Experimental results”, we present the results of indoor localization experiments based on the CSS ranging. Lastly, in Section “Conclusion”, we conclude our study.

## Conventional algorithms for trilateration

### Mathematical expression for trilateration

On a 2D plane, four reference points are symmetrically located as shown in Figure 1. The coordinates of reference points 1, 2, 3, and 4 are $[A\ A]^T$, $[-A\ A]^T$, $[-A\ -A]^T$, and $[A\ -A]^T$ in vector form, respectively. At time instant t, when a target is positioned at $x(t)=[x_1(t)\ x_2(t)]^T$, the distances between this target and each reference point, denoted as y1(t), y2(t), y3(t), and y4(t), are given by the following expressions:

$y_1(t)=\sqrt{\left(x_1(t)-A\right)^2+\left(x_2(t)-A\right)^2}+v_1(t)$
(1)
$y_2(t)=\sqrt{\left(x_1(t)+A\right)^2+\left(x_2(t)-A\right)^2}+v_2(t)$
(2)
$y_3(t)=\sqrt{\left(x_1(t)+A\right)^2+\left(x_2(t)+A\right)^2}+v_3(t)$
(3)
$y_4(t)=\sqrt{\left(x_1(t)-A\right)^2+\left(x_2(t)+A\right)^2}+v_4(t),$
(4)

where v1(t), v2(t), v3(t), and v4(t) are the noises in the measurements of these four distances, which are generally assumed to be zero-mean white Gaussian. The first term on the right-hand side of each equation corresponds to the true distance. These equations can be expressed more simply in vector form,

$y(t)=h\left(x(t)\right)+v(t),$
(5)

where

$y(t)=\begin{bmatrix} y_1(t) & y_2(t) & y_3(t) & y_4(t) \end{bmatrix}^T$
(6)
$v(t)=\begin{bmatrix} v_1(t) & v_2(t) & v_3(t) & v_4(t) \end{bmatrix}^T$
(7)

and

$h\left(x(t)\right)=\begin{bmatrix} \sqrt{\left(x_1(t)-A\right)^2+\left(x_2(t)-A\right)^2} \\ \sqrt{\left(x_1(t)+A\right)^2+\left(x_2(t)-A\right)^2} \\ \sqrt{\left(x_1(t)+A\right)^2+\left(x_2(t)+A\right)^2} \\ \sqrt{\left(x_1(t)-A\right)^2+\left(x_2(t)+A\right)^2} \end{bmatrix}.$
(8)

Here, y(t), v(t), and h(x(t)) are the vectors of the measured distances, the measurement noises, and the true distances, respectively. The true distance vector h(x(t)) is a function of the vector x(t) of the target coordinates.
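As a concrete illustration, the measurement model (5) can be sketched in a few lines of Python with NumPy. The half side length A = 4.5 m and the noise level are assumed values for illustration only:

```python
import numpy as np

A = 4.5  # half side length of the reference square (an assumed value, in metres)

# Reference points 1..4 at [A, A], [-A, A], [-A, -A], [A, -A], as in the text.
REFS = np.array([[A, A], [-A, A], [-A, -A], [A, -A]])

def h(x):
    """True distance vector h(x(t)): distance from x to each reference point."""
    return np.linalg.norm(REFS - x, axis=1)

def measure(x, sigma_v, rng):
    """One measurement y(t) = h(x(t)) + v(t), v(t) zero-mean white Gaussian."""
    return h(x) + rng.normal(0.0, sigma_v, size=len(REFS))

rng = np.random.default_rng(0)
x_true = np.array([2.25, 2.25])
print(h(x_true))                   # true distances to the four reference points
print(measure(x_true, 0.1, rng))   # one noisy distance vector y(t)
```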

### LS algorithms for trilateration

The ordinary LS algorithm is one method for calculating the coordinates of a target from Equations (1)–(4). To transform these into linear equations, the terms corresponding to the measurement noises are neglected, and the equations are rearranged to obtain[7, 22]:

$y_1^2(t)=x_1^2(t)+x_2^2(t)-2Ax_1(t)-2Ax_2(t)+2A^2$
(9)
$y_2^2(t)=x_1^2(t)+x_2^2(t)+2Ax_1(t)-2Ax_2(t)+2A^2$
(10)
$y_3^2(t)=x_1^2(t)+x_2^2(t)+2Ax_1(t)+2Ax_2(t)+2A^2$
(11)
$y_4^2(t)=x_1^2(t)+x_2^2(t)-2Ax_1(t)+2Ax_2(t)+2A^2.$
(12)

Subtracting (12) from each of (9)–(11) yields:

$y_1^2(t)-y_4^2(t)=\left(-2A+2A\right)x_1(t)+\left(-2A-2A\right)x_2(t)=-4Ax_2(t)$
(13)
$y_2^2(t)-y_4^2(t)=\left(2A+2A\right)x_1(t)+\left(-2A-2A\right)x_2(t)=4Ax_1(t)-4Ax_2(t)$
(14)
$y_3^2(t)-y_4^2(t)=\left(2A+2A\right)x_1(t)+\left(2A-2A\right)x_2(t)=4Ax_1(t).$
(15)

These equations can be rewritten in matrix and vector form as follows:

$Φx(t)=θ(t),$
(16)

where

$\Phi=\begin{bmatrix} 0 & -4A \\ 4A & -4A \\ 4A & 0 \end{bmatrix}$
(17)

and

$\theta(t)=\begin{bmatrix} y_1^2(t)-y_4^2(t) \\ y_2^2(t)-y_4^2(t) \\ y_3^2(t)-y_4^2(t) \end{bmatrix}.$
(18)

The exact solution x(t) of the linear Equation (16) does not exist because of the presence of noise. Instead, the LS solution $\hat{x}(t)$ is calculated by using the ordinary LS algorithm[7, 22]:

$\hat{x}(t)=\left(\Phi^T\Phi\right)^{-1}\Phi^T\theta(t).$
(19)
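The closed-form solution (19) is easy to check numerically. Below is a minimal sketch in Python with NumPy, assuming the symmetric reference layout of Figure 1 with A = 4.5 m (an illustrative value):

```python
import numpy as np

A = 4.5  # assumed half side length of the reference square (metres)

# Phi from (17): one row per difference y1^2-y4^2, y2^2-y4^2, y3^2-y4^2.
Phi = np.array([[0.0,   -4 * A],
                [4 * A, -4 * A],
                [4 * A,  0.0]])

def ls_position(y):
    """Ordinary LS solution (19) from the four measured distances y = [y1..y4]."""
    theta = np.array([y[0]**2 - y[3]**2,
                      y[1]**2 - y[3]**2,
                      y[2]**2 - y[3]**2])
    return np.linalg.solve(Phi.T @ Phi, Phi.T @ theta)

# Noise-free sanity check: distances from [1, 2] to the four reference points.
refs = np.array([[A, A], [-A, A], [-A, -A], [A, -A]])
y = np.linalg.norm(refs - np.array([1.0, 2.0]), axis=1)
print(ls_position(y))  # recovers [1, 2]
```

With noise-free distances the linear system (16) is consistent, so the LS solution reproduces the target coordinates exactly.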

As a different approach, the nonlinear LS algorithm can be utilized. It does not require the rearrangement of Equation (5) and is based on the following minimization criterion:

$\hat{x}(t)=\arg\min_{x(t)}\, r^T\left(x(t)\right)r\left(x(t)\right),$
(20)

where r(x(t)) = y(t) − h(x(t)) is the residual vector. To obtain the solution that satisfies this minimization criterion, the nonlinear LS problem is solved with the Gauss–Newton method, a recursive algorithm for nonlinear LS problems. Given the vector y(t) of the measured distances and an initial value $\hat{x}_0(t)$, the Gauss–Newton method is expressed as

$\hat{x}_i(t)=\hat{x}_{i-1}(t)+\alpha p_i(t)r\left(\hat{x}_{i-1}(t)\right),$
(21)

where

$r\left(\hat{x}_{i-1}(t)\right)=y(t)-h\left(\hat{x}_{i-1}(t)\right)$
(22)
$p_i(t)=-\left(J_i^T(t)J_i(t)\right)^{-1}J_i^T(t),$
(23)

and α is the step size, which satisfies 0 < α ≤ 1. The vector $\hat{x}_i(t)$ is the nonlinear LS solution of x(t) at the i-th iteration. Here, the matrix J i (t) is the Jacobian matrix of h(x), defined as

$J_i(t)=\left.\frac{\partial h\left(x\right)}{\partial x}\right|_{x=\hat{x}_{i-1}(t)}.$
(24)
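The Gauss–Newton iteration can be sketched as follows in Python with NumPy; A = 4.5 m is an assumed value. Note the sign convention used in this sketch: with the residual r = y − h(x), the correction that decreases the squared residual is written as +(JᵀJ)⁻¹Jᵀr, i.e. the sign of ∂r/∂x = −J is absorbed into the step:

```python
import numpy as np

A = 4.5  # assumed half side length of the reference square (metres)
REFS = np.array([[A, A], [-A, A], [-A, -A], [A, -A]])

def h(x):
    """True distance vector: distance from x to each reference point."""
    return np.linalg.norm(REFS - x, axis=1)

def jacobian(x):
    """Jacobian of h at x: row m is (x - ref_m) / ||x - ref_m||."""
    d = x - REFS
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def gauss_newton(y, x0, alpha=1.0, iters=20):
    """Gauss-Newton iteration for (20): repeatedly linearize h and solve the
    normal equations for the correction step."""
    x = x0.astype(float)
    for _ in range(iters):
        r = y - h(x)
        J = jacobian(x)
        x = x + alpha * np.linalg.solve(J.T @ J, J.T @ r)
    return x

y = h(np.array([2.25, 2.25]))                 # noise-free distances
print(gauss_newton(y, np.array([0.0, 0.0])))  # converges to [2.25, 2.25]
```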

### EKF for trilateration

The EKF is a state estimator for nonlinear SS equations. When the measured distance vector y(t) is successively measured over time, the EKF can be applied to solve the trilateration problem. For the EKF, the nonlinear SS equations of the trilateration model are defined as follows:

$x(t+1)=x(t)$
(25)
$y(t)=h\left(x(t)\right)+v(t).$
(26)

The measurement Equation (26) is the same as (5). In this article, we deal with two-dimensional localization of a fixed object. As shown in the process Equation (25), the coordinate vector x(t) does not change over time, which means that the target object is stationary. In the SS equations, the coordinate vector is called the state vector. The correlation matrix of v(t) is defined as R(t) and is assumed to be known. With initial values $\hat{x}(0)$ and P(0), the EKF is given by the following equations:

$e(t)=y(t)-h\left(\hat{x}(t-1)\right)$
(27)
$K(t)=P(t-1)H^T(t)\left[H(t)P(t-1)H^T(t)+R(t)\right]^{-1}$
(28)
$\hat{x}(t)=\hat{x}(t-1)+K(t)e(t)$
(29)
$P(t)=\left[I-K(t)H(t)\right]P(t-1),$
(30)

where

$H(t)=\left.\frac{\partial h\left(x\right)}{\partial x}\right|_{x=\hat{x}(t-1)}.$
(31)

Here, I is the identity matrix with appropriate dimensions, and P(t) is the estimation error covariance matrix. The estimated state vector at time instant t is denoted as $\hat{x}(t)$, which is the estimate based on the measured distance vectors y(1), y(2), …, y(t). With the application of the EKF algorithm, a new y(t) is continuously required at every time instant t, whereas the LS algorithms require only a single measured distance vector y(t) to obtain an LS solution.
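The recursion (27)–(31) can be sketched in Python with NumPy as below. The geometry, noise level, and initial values are assumed for illustration; with unbiased noise the estimate settles near the true position:

```python
import numpy as np

A = 4.5  # assumed half side length of the reference square (metres)
REFS = np.array([[A, A], [-A, A], [-A, -A], [A, -A]])

def h(x):
    return np.linalg.norm(REFS - x, axis=1)

def H_jac(x):
    d = x - REFS
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def ekf(ys, x0, P0, R):
    """EKF recursion (27)-(31) for the stationary model x(t+1) = x(t)."""
    x, P = x0.astype(float), P0.astype(float)
    I = np.eye(2)
    for y in ys:                            # one update per time instant t
        Hm = H_jac(x)                       # linearization about x_hat(t-1)
        e = y - h(x)                        # innovation (27)
        K = P @ Hm.T @ np.linalg.inv(Hm @ P @ Hm.T + R)  # gain (28)
        x = x + K @ e                       # state update (29)
        P = (I - K @ Hm) @ P                # covariance update (30)
    return x, P

rng = np.random.default_rng(1)
x_true = np.array([2.25, 2.25])
sigma = 0.1                                 # assumed noise level (metres)
ys = h(x_true) + rng.normal(0.0, sigma, size=(200, 4))
x_hat, _ = ekf(ys, np.zeros(2), np.eye(2), sigma**2 * np.eye(4))
print(x_hat)   # close to [2.25, 2.25] (noise only, no bias)
```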

## Biased distance model and its observability

### Biased model

In (5), the noises are assumed to be zero-mean, but this is not always the case in actual situations. In this article, the nonzero mean value of the noises is denoted as the bias. The biased model is expressed as follows:

$y(t)=h\left(x(t)\right)+b(t)+v(t),$
(32)

where $b(t)=[b_1(t)\ b_2(t)\ b_3(t)\ b_4(t)]^T$ is the bias vector, which is added to the vector of the true distances in the same manner as the measurement noise vector. However, the bias vector is a deterministic error source, whereas the noise vector is stochastic. The bias can also be assumed to be constant, because it varies much more slowly than the random variation of the noise. If the bias vector were known in advance, its effect could easily be eliminated by subtracting it from the measured distance vector. In practical situations, however, the bias vector is unknown, so it becomes a major problem.

### EKF for biased distance model

In this section, we describe the SS equations of the biased distances. First, we can rewrite the nonlinear SS equations of trilateration with the biased distance measurements (32) as:

$x(t+1)=x(t)$
(33)
$y(t)=h\left(x(t)\right)+b(t)+v(t).$
(34)

As the bias vector must be estimated in order to eliminate its effects, an augmented state vector is defined as follows:

$z(t)=\begin{bmatrix} x(t)^T & b(t)^T \end{bmatrix}^T.$
(35)

In this definition, the biases are regarded as states, as well as the original state vector x(t). Subsequently, the nonlinear SS equations can be rewritten using z(t) as:

$z(t+1)=z(t)$
(36)
$y(t)=g\left(z(t)\right)+v(t),$
(37)

where

$g\left(z(t)\right)=\begin{bmatrix} \sqrt{\left(x_1(t)-A\right)^2+\left(x_2(t)-A\right)^2}+b_1(t) \\ \sqrt{\left(x_1(t)+A\right)^2+\left(x_2(t)-A\right)^2}+b_2(t) \\ \sqrt{\left(x_1(t)+A\right)^2+\left(x_2(t)+A\right)^2}+b_3(t) \\ \sqrt{\left(x_1(t)-A\right)^2+\left(x_2(t)+A\right)^2}+b_4(t) \end{bmatrix}$
(38)
$=h\left(x(t)\right)+b(t).$
(39)

As mentioned, the bias vector b(t) is assumed to be constant, so b(t + 1) = b(t). Therefore, the augmented state vector z(t) does not change over time in (36). Also, g(z(t)) is defined as the sum of the true distance vector and the bias vector, and is referred to as the biased true distance vector.

In general, the augmented state vector can be estimated using various algorithms, including the two-stage EKF algorithms[14, 15]. However, these approaches are not applicable in the case of trilateration because of the observability problem.

### Observability of biased distance model

When SS equations are unobservable, a meaningful estimate cannot be derived owing to the lack of measurement information. In this section, we investigate the observability of the SS Equations (36) and (37) by using the Fisher information matrix (FIM). In these SS equations, which form the trilateration model with biases, only y(t) is the measurement information available for estimating z(t), because the process Equation (36) does not contain any measurement information. Therefore, based on (37), the FIM of the trilateration model is given by

$F=\nabla_z^T g\left(z\right)R^{-1}\nabla_z g\left(z\right),$
(40)

where

$\nabla_z g\left(z\right)=\frac{\partial g\left(z\right)}{\partial z}=\left[\frac{\partial g_j(z)}{\partial z_k}\right]_{j,k}$
(41)

for 1 ≤ j ≤ 4 and 1 ≤ k ≤ 6. Here, the FIM is denoted as F, and the covariance matrix of the measurement noise v(t) is written as R. For convenience, we omit the time instant t, i.e., z(t) = z and R(t) = R. The inverse of the FIM is the Cramér–Rao lower bound of any unbiased estimator. Further, if the FIM is singular, i.e., not invertible, the model is unobservable.

Based on this knowledge, we can judge that the SS model is unobservable if F is a singular matrix. To test the singularity of F, we investigate its rank: if F is not of full rank, it is not invertible and hence singular. From (35) and (38), we have

$z=\begin{bmatrix} z_1 & z_2 & z_3 & z_4 & z_5 & z_6 \end{bmatrix}^T$
(42)
$=\begin{bmatrix} x_1 & x_2 & b_1 & b_2 & b_3 & b_4 \end{bmatrix}^T$
(43)

and

$g_1\left(z\right)=\sqrt{\left(x_1-A_{11}\right)^2+\left(x_2-A_{12}\right)^2}+b_1$
(44)
$g_2\left(z\right)=\sqrt{\left(x_1-A_{21}\right)^2+\left(x_2-A_{22}\right)^2}+b_2$
(45)
$g_3\left(z\right)=\sqrt{\left(x_1-A_{31}\right)^2+\left(x_2-A_{32}\right)^2}+b_3$
(46)
$g_4\left(z\right)=\sqrt{\left(x_1-A_{41}\right)^2+\left(x_2-A_{42}\right)^2}+b_4.$
(47)

Here, we use non-symmetric reference points for the generality of our derivation. Their coordinates are denoted as $A_{pq}$ for 1 ≤ p ≤ 4 and 1 ≤ q ≤ 2. By these definitions, $\nabla_z g\left(z\right)$ is given by

$\nabla_z g\left(z\right)=\frac{\partial g\left(z\right)}{\partial z}$
(48)
$=\begin{bmatrix} \frac{x_1-A_{11}}{\Sigma_1} & \frac{x_2-A_{12}}{\Sigma_1} & 1 & 0 & 0 & 0 \\ \frac{x_1-A_{21}}{\Sigma_2} & \frac{x_2-A_{22}}{\Sigma_2} & 0 & 1 & 0 & 0 \\ \frac{x_1-A_{31}}{\Sigma_3} & \frac{x_2-A_{32}}{\Sigma_3} & 0 & 0 & 1 & 0 \\ \frac{x_1-A_{41}}{\Sigma_4} & \frac{x_2-A_{42}}{\Sigma_4} & 0 & 0 & 0 & 1 \end{bmatrix}$
(49)
$=\begin{bmatrix} a_{11} & a_{12} & 1 & 0 & 0 & 0 \\ a_{21} & a_{22} & 0 & 1 & 0 & 0 \\ a_{31} & a_{32} & 0 & 0 & 1 & 0 \\ a_{41} & a_{42} & 0 & 0 & 0 & 1 \end{bmatrix},$
(50)

where

$\Sigma_1=\sqrt{\left(x_1-A_{11}\right)^2+\left(x_2-A_{12}\right)^2}$
(51)
$\Sigma_2=\sqrt{\left(x_1-A_{21}\right)^2+\left(x_2-A_{22}\right)^2}$
(52)
$\Sigma_3=\sqrt{\left(x_1-A_{31}\right)^2+\left(x_2-A_{32}\right)^2}$
(53)
$\Sigma_4=\sqrt{\left(x_1-A_{41}\right)^2+\left(x_2-A_{42}\right)^2}.$
(54)

In (50), the parameters $a_{mn}$ for m = 1, 2, 3, 4 and n = 1, 2 are used to simplify (49). If we suppose $R=\sigma_v^2 I$ under the assumption that v(t) is white, the FIM is expressed as (55).

$F=\sigma_v^{-2}\begin{bmatrix} a_{11}^2+a_{21}^2+a_{31}^2+a_{41}^2 & a_{11}a_{12}+a_{21}a_{22}+a_{31}a_{32}+a_{41}a_{42} & a_{11} & a_{21} & a_{31} & a_{41} \\ a_{11}a_{12}+a_{21}a_{22}+a_{31}a_{32}+a_{41}a_{42} & a_{12}^2+a_{22}^2+a_{32}^2+a_{42}^2 & a_{12} & a_{22} & a_{32} & a_{42} \\ a_{11} & a_{12} & 1 & 0 & 0 & 0 \\ a_{21} & a_{22} & 0 & 1 & 0 & 0 \\ a_{31} & a_{32} & 0 & 0 & 1 & 0 \\ a_{41} & a_{42} & 0 & 0 & 0 & 1 \end{bmatrix}.$
(55)

Subsequently, we execute the following row reductions:

1. Multiply the 3rd, 4th, 5th, and 6th rows of F by $a_{11}$, $a_{21}$, $a_{31}$, and $a_{41}$, respectively, and then subtract each of them from the first row.

2. Multiply the 3rd, 4th, 5th, and 6th rows of F by $a_{12}$, $a_{22}$, $a_{32}$, and $a_{42}$, respectively, and then subtract each of them from the second row.

Through these successive row reductions, we obtain the row-reduced matrix (56) and find that the rank of F is at most 4, regardless of z. This means that F is not a full-rank matrix, because the dimension of F is 6 × 6. Hence, F is singular and the SS model is unobservable. To conclude, we cannot guarantee the accuracy of the state vector estimated by the EKF.

$\text{Row-reduced } F=\sigma_v^{-2}\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ a_{11} & a_{12} & 1 & 0 & 0 & 0 \\ a_{21} & a_{22} & 0 & 1 & 0 & 0 \\ a_{31} & a_{32} & 0 & 0 & 1 & 0 \\ a_{41} & a_{42} & 0 & 0 & 0 & 1 \end{bmatrix}.$
(56)
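The rank argument can also be verified numerically. A small Python/NumPy check with arbitrary (non-symmetric) reference points and a random augmented state, all values assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
refs = rng.uniform(-5.0, 5.0, size=(4, 2))   # arbitrary non-symmetric reference points
x = rng.uniform(-3.0, 3.0, size=2)           # target coordinates x1, x2
b = rng.uniform(0.0, 1.0, size=4)            # biases b1..b4 (their values do not affect the rank)

d = x - refs
a = d / np.linalg.norm(d, axis=1, keepdims=True)   # the a_mn entries of (50)
grad = np.hstack([a, np.eye(4)])                   # nabla_z g(z): a 4x6 matrix, as in (50)

sigma_v = 0.2                                      # assumed noise standard deviation
F = grad.T @ grad / sigma_v**2                     # FIM (40) with R = sigma_v^2 I

print(F.shape, np.linalg.matrix_rank(F))           # (6, 6) with rank 4: singular
```

Because the 4×6 gradient can have rank at most 4, the 6×6 FIM is rank-deficient for any choice of z, matching the row-reduction result.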

## Proposed algorithm

### Concept of separation for treatments of noise and bias

In this section, we explain the causes of deterioration of the distance measurements, namely the noise and the bias. The noise is generally assumed to be zero-mean white Gaussian, and the bias is an unknown quantity that remains approximately constant. However, neither can be treated by using only the EKF because of the observability problem. Therefore, in this article, we propose an algorithm that combines the KF with the WLS algorithm for estimating the position of a specific target when the distance measurements are noisy and biased.

The proposed method is divided into two stages: noise elimination and bias reduction. In the first stage, the noise in the distance measurements is removed with the use of the KF, and then noise-eliminated distances are obtained. In the second stage, the coordinates of the target are calculated by using the WLS algorithm. For this purpose, we use the noise-eliminated distances obtained in the first stage. Subsequently, we propose a method to derive the weighting parameters for the WLS algorithm. As the noise has already been removed in the first stage, these parameters are determined based only on the characteristics of the bias. It should be noted that the weighting parameters affect the estimation of the target coordinates with the use of the WLS algorithm.

In the following sections, we explain the proposed algorithm in three steps: noise elimination, target coordinates calculation, and determination of the weighting parameters.

### Measurement noise elimination by KF

To remove the noise from the distance measurements, we define new SS equations as

$g(t+1)=g(t)$
(57)
$y(t)=g(t)+v(t),$
(58)

where g(t) is equal to the biased true distance vector g(z(t)) in (37). In these SS equations, however, we regard g(t) as a state vector, because the purpose of the first stage is not to estimate z(t) but to estimate the biased true distance vector g(t) itself. The expression of constant g(t) in (57) is valid, since g(t) depends only on z(t), which is constant as shown in (36).

The SS Equations (57) and (58) are linear, so g(t) can readily be estimated by the KF rather than the EKF. The KF as a constant-parameter estimator is well suited to noise elimination, because it is an optimal estimator and robust against noise. With initial values $\hat{g}(0)$ and P(0), the KF for estimating g(t) is given by the following equations:

$e(t)=y(t)-\hat{g}(t-1)$
(59)
$K(t)=P(t-1)\left[P(t-1)+R(t)\right]^{-1}$
(60)
$\hat{g}(t)=\hat{g}(t-1)+K(t)e(t)$
(61)
$P(t)=\left[I-K(t)\right]P(t-1),$
(62)

where $\hat{g}(t)$ is the estimated biased true distance vector, in other words, the estimated noise-eliminated distance vector. If we have no a priori information for initialization, P(0) can be set to an infinitely large matrix. In this case, the KF reduces to

$\hat{g}(t)=\hat{g}(t-1)+\frac{1}{t}\left(y(t)-\hat{g}(t-1)\right).$
(63)

Equation (63) is identical to the recursive average, and $\hat{g}(t)$ can be estimated by using it. The vector $\hat{g}(t)$ is used by the WLS algorithm in the next stage to calculate the coordinates of the target.
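The first stage can be sketched in Python with NumPy as below. The distance values and noise level are illustrative assumptions; note how, with a very large P(0), the recursion (59)–(62) behaves like the recursive average (63), removing the noise while keeping the bias:

```python
import numpy as np

def kf_noise_elimination(ys, R):
    """First-stage KF (59)-(62) for the constant-state model g(t+1) = g(t)."""
    n = ys.shape[1]
    g = np.zeros(n)                       # g_hat(0); forgotten after the first update
    P = 1e12 * np.eye(n)                  # effectively "infinite" initial covariance
    I = np.eye(n)
    for y in ys:
        e = y - g                         # innovation (59)
        K = P @ np.linalg.inv(P + R)      # gain (60)
        g = g + K @ e                     # state update (61)
        P = (I - K) @ P                   # covariance update (62)
    return g

rng = np.random.default_rng(3)
g_true = np.array([2.3, 4.6, 6.9, 9.2])            # biased true distances (illustrative)
ys = g_true + rng.normal(0.0, 0.2, size=(400, 4))  # noisy measurements
g_hat = kf_noise_elimination(ys, R=0.04 * np.eye(4))
print(g_hat)   # close to g_true: noise removed, bias retained
```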

### Target coordinate calculation by using nonlinear WLS algorithm

In the second stage, the coordinates of a target are calculated based on the noise-eliminated distances obtained with the use of the KF in the first stage. Based on the relationship in (39), the nonlinear equation for the second stage is given by

$g(t)=h\left(x(t)\right)+b(t),$
(64)

where b(t) is regarded as the error source. In (64), the true value of g(t) is actually unknown. Therefore, instead of g(t), the estimate obtained in the first stage is used, and the residual vector is defined as

$r\left(x(t)\right)=\hat{g}(t)-h\left(x(t)\right).$
(65)

By using (65), the criterion of the WLS algorithm is given by

$\hat{x}(t)=\arg\min_{x(t)}\, r^T\left(x(t)\right)W(t)r\left(x(t)\right),$
(66)

where W(t) is the matrix of the weighting parameters, or simply the weight matrix. Unlike in the conventional LS problem, the bias vector is not zero-mean noise, so the weight matrix must be selected differently. We consider the assignment of the weight matrix in the following section.

The solution of (66) is obtained by using the Gauss–Newton method which has already been mentioned in Section “Conventional algorithms for trilateration”. With the given noise-eliminated distance vector$g ̂ (t)$ and initial value of$x ̂ 0 (t)$, the algorithm is expressed as the following recursive equations:

$\hat{x}_i(t)=\hat{x}_{i-1}(t)+\alpha p_i(t)r\left(\hat{x}_{i-1}(t)\right),$
(67)

where

$r\left(\hat{x}_{i-1}(t)\right)=\hat{g}(t)-h\left(\hat{x}_{i-1}(t)\right)$
(68)
$p_i(t)=-\left(J_i^T(t)W(t)J_i(t)\right)^{-1}J_i^T(t)W(t),$
(69)

0 < α ≤ 1, and $\hat{x}_i(t)$ is the nonlinear LS solution of x(t) at the i-th iteration. Here, the matrix J i (t) is the Jacobian matrix of h, i.e.

$J_i(t)=\left.\frac{\partial h\left(x\right)}{\partial x}\right|_{x=\hat{x}_{i-1}(t)}.$
(70)

While the Gauss–Newton algorithm is running, a new $\hat{g}(t)$ is recursively estimated by the KF. In other words, while the Gauss–Newton algorithm recursively updates $\hat{x}_i(t)$, the previous noise-eliminated distance vector $\hat{g}(t)$ is continuously replaced by a new one, i.e., $\hat{g}(t+1)$. Replacing $\hat{g}(t)$ with $\hat{g}(t+1)$ can alter the minimum point of (66), so the Gauss–Newton algorithm tries to track the new minimum point, which is x(t + 1). The coordinates estimated by the Gauss–Newton algorithm will therefore also vary as the noise-eliminated distance vector changes. However, under the assumption that the bias remains approximately constant, the noise-eliminated distance vector estimated by the KF does not change rapidly. Even though the bias may change slightly, it varies much more slowly than the Gauss–Newton algorithm converges, which implies that the Gauss–Newton algorithm is not required to converge quickly. Accordingly, the Gauss–Newton algorithm can be synchronized with the iterations of the KF. Consequently, we remove the iteration index i from the Gauss–Newton algorithm and rewrite it recursively with respect to the time instant t. Hence, with the noise-eliminated distance vector given by the first stage and an initial value $\hat{x}(0)$, the second stage of the proposed algorithm is expressed as

$\hat{x}(t)=\hat{x}(t-1)+\alpha p(t)r\left(\hat{x}(t-1)\right),$
(71)

where

$r\left(\hat{x}(t-1)\right)=\hat{g}(t)-h\left(\hat{x}(t-1)\right)$
(72)
$p(t)=-\left(J^T(t)W(t)J(t)\right)^{-1}J^T(t)W(t),$
(73)

0 < α ≤ 1, and $\hat{x}(t)$ is the nonlinear LS solution at time instant t. Here, the matrix J(t) is the Jacobian matrix of h,

$J(t)=\left.\frac{\partial h\left(x\right)}{\partial x}\right|_{x=\hat{x}(t-1)}.$
(74)

### Determining weight matrix

Unlike in the conventional LS problem, the bias vector is not zero-mean noise. Therefore, we must derive a proper weight matrix W for the bias vector when (64) is applied to the WLS algorithm.

First, we review the weight matrix for the conventional LS problem. Consider the linear equation

$y=\Phi x+v.$
(75)

In the conventional LS algorithm, the cause of error is the zero-mean white Gaussian noise v. In this case, according to the WLS algorithm theory, the correlation matrix of the estimation error is expressed as

$E\left[\left(\hat{x}-x\right)\left(\hat{x}-x\right)^T\right]$
(76)
$=\left(\Phi^TW\Phi\right)^{-1}\Phi^TWR_vW\Phi\left(\Phi^TW\Phi\right)^{-1},$
(77)

where

$R_v=E\left[vv^T\right].$
(78)

When $W=R_v^{-1}$, the correlation matrix is minimized, i.e.

$E\left[\left(\hat{x}-x\right)\left(\hat{x}-x\right)^T\right]=\left(\Phi^TW\Phi\right)^{-1}.$
(79)

In (64), however, the cause of error is not noise but bias. In light of this, the weight matrix cannot be determined as in the conventional algorithms; it must be assigned according to the properties of the bias. By using the first-order Taylor series approximation, (64) is linearized around x* as follows:

$g(t)\approx h\left(x^*\right)+H\left(x(t)-x^*\right)+b(t),$
(80)

where

$H=\left.\frac{\partial h\left(x\right)}{\partial x}\right|_{x=x^*}.$
(81)

This equation can be rearranged as

$\bar{g}(t)=Hx(t)+b(t),$
(82)

where

$\bar{g}(t)=g(t)-h\left(x^*\right)+Hx^*.$
(83)

The WLS solution of the linearized equation is

$\hat{x}(t)=\left(H^TW(t)H\right)^{-1}H^TW(t)\bar{g}(t).$
(84)

The estimation error, i.e.,$x ̂ (t)−x(t)$, is approximately equal to

$\hat{x}(t)-x(t)$
(85)
$=\left(H^TW(t)H\right)^{-1}H^TW(t)\bar{g}(t)-x(t)$
(86)
$=\left(H^TW(t)H\right)^{-1}H^TW(t)\left(Hx(t)+b(t)\right)-x(t)$
(87)
$=\left(H^TW(t)H\right)^{-1}H^TW(t)b(t).$
(88)

Therefore, the correlation matrix for the estimation error is written as follows:

$E\left[\left(\hat{x}(t)-x(t)\right)\left(\hat{x}(t)-x(t)\right)^T\right]$
(89)
$=\left(H^TW(t)H\right)^{-1}H^TW(t)R_bW(t)H\left(H^TW(t)H\right)^{-1},$
(90)

where

$R_b=E\left[b(t)b^T(t)\right]\approx b(t)b^T(t).$
(91)

In (91), the approximation is reasonable under the assumption of a constant bias. To minimize the correlation matrix of the estimation error, the weight matrix W(t) has to be determined as the inverse matrix of R b . However, R b cannot be calculated because the bias vector is still unknown. To solve this problem, we consider the following situation. Under the assumption that the measurements are carried out in a homogeneous environment, the biases of two node pairs placed at different positions are the same as long as the true distances of the two pairs are identical. In addition, the bias of the distance measured between a closer node pair is never greater than that of a pair whose nodes are placed farther apart in the same homogeneous environment. In summary, the bias is proportional to the true distance. Multipath and non-line-of-sight signal propagation can be the causes of the bias, because they make the signal travel a longer distance than a signal passing along the direct path. Therefore, the longer the distance between a node pair is, the greater the bias can be, especially indoors, where a number of obstacles exist. Based on this observation, we model the bias vector as

$b(t)=\beta h\left(x(t)\right),$
(92)

where β is the proportionality constant and x(t) is the true coordinate vector, so that h(x(t)) is the true distance vector. Using h(x(t)) = g(t) − b(t), we can rewrite (92) as

$b(t)=γg(t),$
(93)

where $\gamma=\frac{\beta}{1+\beta}$. The direction of b(t) is equal to that of g(t) because of the proportionality, so the normalized b(t) and g(t) are the same, i.e.

$\frac{b(t)}{\left\|b(t)\right\|_2}=\frac{g(t)}{\left\|g(t)\right\|_2},$
(94)

where $\left\|\cdot\right\|_2$ is the 2-norm operator. From (94), we see that b(t) can be expressed as

$b(t)=\frac{\left\|b(t)\right\|_2}{\left\|g(t)\right\|_2}g(t)$
(95)

and γ is given by $\frac{\left\|b(t)\right\|_2}{\left\|g(t)\right\|_2}$. To calculate the right-hand side of (95), we use $r\left(\hat{x}(t-1)\right)$ and $\hat{g}(t)$ instead of b(t) and g(t), respectively. Hence, the estimated bias vector can be derived as

$\hat{b}(t)=\frac{\left\|r\left(\hat{x}(t-1)\right)\right\|_2}{\left\|\hat{g}(t)\right\|_2}\hat{g}(t),$
(96)

where $\hat{b}(t)$ is the estimated bias vector. If $\hat{x}(t-1)$ were equal to the true coordinates, the bias vector would be exactly $r\left(\hat{x}(t-1)\right)$. In actual situations, however, $\hat{x}(t-1)$ is not equal to the true coordinates, so we cannot guarantee the equality between $r\left(\hat{x}(t-1)\right)$ and the bias vector. Therefore, we use only the magnitude of $r\left(\hat{x}(t-1)\right)$ to calculate the estimated bias vector. By the proportionality assumption, the direction of the estimated bias vector is given by the direction of $\hat{g}(t)$, as shown in (96).

Finally, the weight matrix is estimated as

$W(t)=\left(\hat{b}(t)\hat{b}^T(t)+\varepsilon I\right)^{-1},$
(97)

where ε is a small scalar value inserted to avoid the numerical problem of the matrix inversion, because $\hat{b}(t)\hat{b}^T(t)$ is a singular matrix.
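The weight-matrix construction (96)–(97) amounts to a few lines of Python with NumPy. The distance values, residual, and ε below are assumed purely for illustration:

```python
import numpy as np

def weight_matrix(r, g_hat, eps=1e-3):
    """Weight matrix (96)-(97): the estimated bias takes its magnitude from the
    residual r and its direction from g_hat; eps*I makes the rank-one matrix
    b_hat b_hat^T invertible."""
    b_hat = (np.linalg.norm(r) / np.linalg.norm(g_hat)) * g_hat              # (96)
    return np.linalg.inv(np.outer(b_hat, b_hat) + eps * np.eye(len(g_hat)))  # (97)

g_hat = np.array([2.3, 4.6, 6.9, 9.2])  # illustrative noise-eliminated distances (metres)
r = 0.05 * g_hat                        # residual proportional to distance, per the bias model
W = weight_matrix(r, g_hat)

# Directions parallel to g_hat (where the bias is assumed to live) get a small
# weight, while directions orthogonal to it keep a large weight of about 1/eps.
u = g_hat / np.linalg.norm(g_hat)
print(u @ W @ u)   # small quadratic form: the bias direction is de-emphasized
```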

The proposed algorithm is summarized in Algorithm 1, and its block diagram is represented in Figure 2.

### Algorithm 1 Summary of the proposed algorithm

Initial values: $\hat{x}(0)$, $\hat{g}(0)$, and P(0).

Given parameters: ε, α, and R(t).

For t = 1,2,3,…

When y(t) is obtained by ranging

//START First Stage - Noise elimination

$e(t)=y(t)-\hat{g}(t-1)$

$K(t)=P(t-1)\left[P(t-1)+R(t)\right]^{-1}$

$\hat{g}(t)=\hat{g}(t-1)+K(t)e(t)$

$P(t)=\left[I-K(t)\right]P(t-1)$

//END First Stage - Noise elimination

//START Second Stage

$r\left(\hat{x}(t-1)\right)=\hat{g}(t)-h\left(\hat{x}(t-1)\right)$

//START Determination of the weight matrix

$\hat{\gamma}=\frac{\left\|r\left(\hat{x}(t-1)\right)\right\|_2}{\left\|\hat{g}(t)\right\|_2}$

$\hat{b}(t)=\hat{\gamma}\hat{g}(t)$

$W(t)=\left(\hat{b}(t)\hat{b}^T(t)+\varepsilon I\right)^{-1}$

//END Determination of the weight matrix

//START Calculation of the target coordinates

$J(t)=\left.\frac{\partial h\left(x\right)}{\partial x}\right|_{x=\hat{x}(t-1)}$

$p(t)=-\left(J^T(t)W(t)J(t)\right)^{-1}J^T(t)W(t)$

$\hat{x}(t)=\hat{x}(t-1)+\alpha p(t)r\left(\hat{x}(t-1)\right)$

//END Calculation of the target coordinates

//END Second Stage

End
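Algorithm 1 can be prototyped end to end as sketched below in Python with NumPy. This is a sketch under assumptions, not the authors' implementation: A, the bias proportionality constant β, the noise level, α, ε, and R are all assumed values, the simulated bias follows the proportional model (92), and the step is written as +(JᵀWJ)⁻¹JᵀWr for r = ĝ − h(x̂):

```python
import numpy as np

A = 4.5  # assumed half side length of the reference square (metres)
REFS = np.array([[A, A], [-A, A], [-A, -A], [A, -A]])

def h(x):
    return np.linalg.norm(REFS - x, axis=1)

def jac(x):
    d = x - REFS
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def two_stage_localize(ys, x0, alpha=0.5, eps=1e-3, R=None):
    """Sketch of Algorithm 1: stage 1 smooths the distances with the KF
    (59)-(62); stage 2 builds W(t) from (96)-(97) and takes one weighted
    Gauss-Newton step per time instant."""
    n = ys.shape[1]
    R = 0.04 * np.eye(n) if R is None else R
    g, P, x = ys[0].astype(float), np.eye(n), x0.astype(float)
    for y in ys:
        # First stage: noise elimination.
        K = P @ np.linalg.inv(P + R)
        g = g + K @ (y - g)
        P = (np.eye(n) - K) @ P
        # Second stage: bias-aware weight matrix, then one Gauss-Newton step.
        r = g - h(x)
        b = (np.linalg.norm(r) / np.linalg.norm(g)) * g
        W = np.linalg.inv(np.outer(b, b) + eps * np.eye(n))
        J = jac(x)
        x = x + alpha * np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return x

rng = np.random.default_rng(4)
x_true = np.array([2.25, 2.25])
beta = 0.05                                          # assumed bias proportionality constant
ys = (1 + beta) * h(x_true) + rng.normal(0.0, 0.2, size=(300, 4))
est = two_stage_localize(ys, x0=np.zeros(2))
print(est)
```

Because the weight matrix de-emphasizes the residual component along ĝ(t), where the distance-proportional bias lies, the estimate stays near the true coordinates despite the bias.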

## Experimental results

### Noise elimination by using KF

First, we present the results obtained with the use of the KF for noise elimination and show the relationship between the true distance and the bias. Under the assumption that the measurement noise is uncorrelated, the off-diagonal terms of R(t) were set to zero. The diagonal terms of R(t) were set to 0.0346, a value calculated from a number of measurements; accordingly, R(t) was given by 0.0346 × I. We also set the initial value $\hat{g}(0)$ to the distance vector measured at the first iteration, and P(0) was set to the identity matrix. In this experiment, we arranged four pairs of CSS nodes, labeled pairs 1, 2, 3, and 4, indoors. The actual distances of pairs 1, 2, 3, and 4 were set to 2.1, 4.2, 6.3, and 8.4 m, respectively. We successively measured the distances by using the CSS ranging, and the measured distances were fed into the KF.

The dash-dotted lines in Figures 3, 4, 5, and 6 correspond to the successively measured distances of pairs 1, 2, 3, and 4, respectively. The measured distances vary considerably because of the presence of noise. The solid lines correspond to the noise-eliminated distances calculated with the use of the KF; they converge to the averages of the measured distances, denoted by the dashed lines. The averages were calculated over the 600 measured distances. As a result, we can conclude that the KF accurately estimates the biased true distance vector g(t) in (57) and (58). Also, the variation of the noise-eliminated distances after removal of the noise v(t) by the KF is much smaller than that of the measured distances. These results confirm our assumption of constant bias.

Moreover, the difference between the true distance and the average of the measured distances can be regarded as the bias. From the graphs, we can see the relationship between the bias and the true distance. Although the bias is not perfectly proportional to the true distance, there is a trend that the bias becomes larger as the true distance grows. Therefore, the proportionality assumption can model the bias approximately. If the mismatch between the actual and modeled bias becomes considerable, the localization error of the proposed algorithm can increase. However, the proposed algorithm can reduce the effect of the bias as long as this trend holds.

### Coordinates calculation by using LS algorithm

To confirm the accuracy of the target coordinates estimated with the use of the proposed algorithm, our next step was to compare them with the coordinates estimated by using only the EKF. The use of the EKF alone for trilateration has already been discussed in Section “Conventional algorithms for trilateration”. The EKF does not consider the effect of bias.

In our localization experiments, we assigned a two-dimensional Cartesian coordinate system and installed four CSS nodes at [4.5 4.5]T, [−4.5 4.5]T, [−4.5 −4.5]T, and [4.5 −4.5]T, numbered as reference positions 1, 2, 3, and 4, respectively. These are the vertices of a 9 × 9 square plane; the unit of all these values is meters. The target CSS node was fixed at four different positions: [2.25 2.25]T, [−2.25 2.25]T, [−2.25 −2.25]T, and [2.25 −2.25]T. The target CSS node performed ranging with the CSS nodes at the four reference positions. After all four distances were measured, the proposed algorithm calculated the coordinates of the target node. The ranging operations were iteratively performed, and the proposed algorithm was executed at every iteration. It should also be noted that the localization experiment at each position was independently executed. The initial value P(0) was set equal to the identity matrix. The initial state vector $\hat{x}(0)$ was set equal to the LS solution of (19), obtained by using the first measured distances. The initial value $\hat{g}(0)$ was set equal to the distance vector measured at the first iteration.
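Eq. (19) is not reproduced in this excerpt, so the initialization below uses a common closed-form LS linearization (subtracting the last reference's circle equation from the others), which we assume plays the same role; the function name and example data are illustrative.

```python
import math

def ls_trilateration(refs, dists):
    """Closed-form LS position estimate (assumed standard linearization).

    Subtracting reference n's circle equation (x-x_n)^2 + (y-y_n)^2 = d_n^2
    from each of the others yields a linear system A [x y]^T = b."""
    (xn, yn), dn = refs[-1], dists[-1]
    A, b = [], []
    for (xi, yi), di in zip(refs[:-1], dists[:-1]):
        A.append([2.0 * (xi - xn), 2.0 * (yi - yn)])
        b.append(dn * dn - di * di + xi * xi - xn * xn + yi * yi - yn * yn)
    # Solve the overdetermined system via the 2x2 normal equations
    ata = [[sum(row[m] * row[n] for row in A) for n in range(2)] for m in range(2)]
    atb = [sum(A[k][m] * b[k] for k in range(len(b))) for m in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (ata[1][1] * atb[0] - ata[0][1] * atb[1]) / det
    y = (-ata[1][0] * atb[0] + ata[0][0] * atb[1]) / det
    return [x, y]

# Reference geometry from the experiments, with exact distances for illustration
refs = [[4.5, 4.5], [-4.5, 4.5], [-4.5, -4.5], [4.5, -4.5]]
true = [2.25, 2.25]
dists = [math.hypot(true[0] - rx, true[1] - ry) for rx, ry in refs]
x0 = ls_trilateration(refs, dists)
```

With error-free distances this recovers the true position exactly; with measured distances it only provides the starting point $\hat{x}(0)$ for the iterative second stage.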

With different combinations of the parameters α and ε, the errors in the estimation of the target coordinates are shown in Figures 7, 8, 9, and 10. These errors are computed as the Euclidean distance between the true position and the estimated position. In all the graphs, the results obtained with the use of the proposed algorithm are more accurate than those of the EKF. When both α and ε are set to 0.1, the algorithm proposed in this article keeps the errors in the estimation of the target position below about 0.2 m. On the other hand, the errors obtained with the use of the EKF differ from case to case.

In the proposed algorithm, the parameter α affects the convergence rate of the algorithm because it plays the role of a step size. As α decreases toward zero, the convergence rate of the proposed algorithm decreases; conversely, as α increases toward one, the convergence rate increases. The other parameter, ε, was originally introduced only to keep the matrix inversion well posed, but it also affects the accuracy of the proposed algorithm in the steady state. To minimize its effect, ε should be selected as small as possible. In some cases, however, a large ε can give better results, because it incidentally compensates for the error in the estimated bias.
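We read ε as a ridge-style regularizer, i.e., p(t) computed with $(J^T W J + \varepsilon I)^{-1}$ in place of $(J^T W J)^{-1}$; this placement is our assumption, since the exact expression with ε is not reproduced in this excerpt. The minimal sketch below shows why such a term keeps the inversion well posed even for a singular $J^T W J$ (e.g., collinear reference positions).

```python
def regularized_inverse_2x2(m, eps):
    """Inverse of (m + eps*I) for a 2x2 matrix m (assumed role of ε)."""
    a = [[m[0][0] + eps, m[0][1]],
         [m[1][0], m[1][1] + eps]]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]   # nonzero for eps > 0 (m PSD)
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

# A singular J^T W J: plain inversion would divide by a zero determinant
m = [[1.0, 1.0], [1.0, 1.0]]
inv = regularized_inverse_2x2(m, 0.1)
```

The larger ε is, the further the regularized inverse departs from the true one, which is why a small ε is preferred in the steady state.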

These experimental results suggest that the proposed algorithm can be used for ubiquitous applications such as navigation, context-aware services, warehouse management, and mobile robots, since position is essential information for implementing such applications.

## Conclusion

In this article, we proposed a localization algorithm that combines the KF and the WLS algorithm. The proposed algorithm consists of two stages: noise elimination by using the KF and calculation of the target coordinates by the WLS algorithm. More specifically, the KF estimates a noise-eliminated distance vector, which is subsequently used for the calculation of the target coordinates as well as for the determination of the weight matrix. The results of localization experiments demonstrate the accuracy of the proposed algorithm.


## Acknowledgements

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0004145). This work was supported by the Brain Korea 21 Project in 2012.

## Author information

### Corresponding author

Correspondence to Sang Woo Kim.

### Competing interests

The authors declare that they have no competing interests.

Cho, H., Lee, J., Kim, D. et al. Localization based on two-stage treatment for dealing with noisy and biased distance measurements. EURASIP J. Adv. Signal Process. 2012, 164 (2012). https://doi.org/10.1186/1687-6180-2012-164 