
Upper bounds on position error of a single location estimate in wireless sensor networks

Abstract

This paper studies upper bounds on the position error of a single estimate of an unknown target node position based on distance estimates in wireless sensor networks. We investigate a number of approaches to confine the target node position to bounded sets in different scenarios. First, if at least one distance estimate error is positive, we derive a simple, but potentially loose, upper bound that is always valid. In addition, assuming that the probability density of the measurement noise is nonzero for positive values and that a sufficiently large number of distance estimates are available, we propose an upper bound that is valid with high probability. Second, if a reasonable lower bound on the negative measurement errors is known a priori, we manipulate the distance estimates to obtain a new set with positive measurement errors. In general, we formulate the bounds as nonconvex optimization problems. To solve these problems, we employ a relaxation technique and obtain semidefinite programs. We also propose a simple approach to find the bounds in closed form. Simulation results show reasonable tightness for the different bounds in various situations.

1 Introduction

Position information is often one of the vital requirements for wireless sensor networks (WSNs), especially for location-aware services [1]. Position information can be extracted via GPS but also from the network itself [2]. During the last few years, a vast number of positioning algorithms have been proposed in the literature [1, 3–6], just to cite a few. Such algorithms can be assessed in different ways, for example, on the basis of complexity, accuracy, or coverage [6]. Accuracy is one of the performance measures commonly used to evaluate positioning algorithms. In the literature, the accuracy metric has been studied widely through the position error, defined as the norm of the difference between the estimated and the true position [1, 6]. For instance, the Cramér-Rao lower bound, employed to evaluate position estimates, provides a lower bound on the variance of any unbiased estimator ([7], chap. 3).

In addition to the lower bound assessment, for some applications, it may be useful to know the maximum position error contained in an estimate of the target node position [1, 8, 9]. For example, it can be imagined that a specific service is offered to a user only if its maximum location error is smaller than a predetermined threshold. The worst-case position error, or a reasonable upper bound on the position error, can also be used in traffic-safety applications to decrease the number of collisions between vehicles [9]. A powerful approach to find the worst-case position error, when an estimate of the target node position is available, is to confine the target node position to a bounded set. For instance, by defining a confidence region for an estimate [10], a target node position can be confined to a set, e.g., an ellipsoid, with a certain probability, say, in 95% of the cases. This approach has been employed in characterizing GPS position errors [11] and in studying a positioning algorithm [12]. The confidence region mainly depends on the covariance of the position estimate and may span a large volume, depending on the accuracy of the estimate. To determine a confidence region, the distribution of the position estimate must be known. Although some distributions, such as the Gaussian distribution, may be suitable to model the position estimate, it is in general difficult to identify the distribution of the position estimate. This approach, therefore, may have some limitations in practical scenarios.

The geometric approach developed in [9, 13] can be used to confine the target node position to a bounded set if certain conditions are satisfied. In fact, regardless of the accuracy of an estimate, one can obtain a bounded set from the measurements in which the target node position resides. For instance, for range-based positioning, it can be concluded that the target node position lies in a bounded convex set (a feasible set) if the distance measurement errors are positive. This set is obtained from the intersection of a number of balls derived from the measurements. When a single estimate is available, it is reasonable to define the worst-case position error as the maximum distance from the estimate to a point in the feasible set. Based on this interpretation, one can also design a geometric algorithm that gives an estimate inside the feasible set. For example, the well-known projection onto convex sets (POCS) technique has been successfully applied to the positioning problem on the basis of this geometric interpretation (for details of the POCS approach for positioning, see, e.g., [13–17]). In the previous study in [9], it has also been argued that the maximum position error (when distance estimation errors are positive) is difficult to compute because the problem is formulated as a nonconvex problem; however, by means of a relaxation technique, an upper bound on the position error can be obtained efficiently [9]. Although the assumption that the measurement error is positive is fulfilled in some situations, e.g., in non-line-of-sight conditions, in which the measured distances (with high probability) are larger than the actual distances, there are also situations in which not all measurement errors are positive. In that case, the intersection derived from the distance measurements no longer contains the target node position, even if the intersection happens to be nonempty. Hence, the approach introduced in [9] may fail to give an upper bound on the position error.

In this paper, we relax the assumption on measurement noise considered in [9] to a general case in which a fraction of the range measurement errors may also be negative. For a single position estimate, we consider three approaches to define upper bounds on the position error. We first assume that a subset of the measurements (at least one) has positive errors; thus, we are able to confine the target node position to a set derived from this subset, i.e., the set of distance measurements having positive errors among all distance measurements. Since this subset is unknown in advance, the largest of the maximum distances from the estimate to the individual balls (derived from the measurements) can be taken as an upper bound on the position error. Although this approach may result in a loose upper bound in general, we use it to assess the other bounds.

Secondly, we assume that the noise is bounded from below, meaning that the realization of the measurement error cannot be an arbitrarily large negative value. This assumption is implicitly considered as positive distance estimates in the positioning literature, e.g., in [18]. After that, we enlarge the distance estimate by adding a positive value, the absolute value of a lower bound on the measurement error (assuming it is known in advance), and then obtain distance measurements that have positive errors. Therefore, the intersection of balls derived from the new set of the measurements definitely contains the target node position.

Finally, we propose a method to confine a target node to a bounded set when a reasonable lower bound on the measurement errors is unknown a priori, for instance, in a practical scenario in which it is difficult to find such a lower bound. The idea relies on the fact that the probability density function (pdf) of the measurement error is nonzero for positive values. Hence, if we take a sufficiently large number of distance estimates for every link, i.e., between the target node and a reference node, then with high probability at least one distance estimate has a positive error. By taking the maximum among all distance estimates for a link, we obtain a distance estimate that has a positive error (with high probability). Thus, the target node position can be confined, with high probability, to a bounded set derived from the new set of measurements.

The second and third bounds are formulated as nonconvex problems. We then employ a relaxation technique to approximately solve the problems. We further derive a simple bound as the maximum distance from the estimate to the ball corresponding to the reference node with the minimum distance estimate to the target node. Simulation results show that the proposed upper bounds are reasonably tight in some situations. The results also reveal a tradeoff between the tightness and validity of the third bound in terms of the number of samples. Note that for the new intersection derived from the new set of measurements (for both the second and third approaches), we may apply a geometric technique to take a point inside the new feasible set as an estimate.

In summary, the main contributions of this study are as follows:

  • generalizing the idea of upper bounding a single position error, as opposed to bounding a statistical measure of the ensemble of position errors such as the mean squared error, introduced in [9], to the case of range-based positioning when some of the distance estimation errors can be negative;

  • formulating upper bounds on the position error that are always valid if (a) at least one distance error is positive or (b) a reasonable lower bound on the distance estimation errors is known in advance;

  • formulating an upper bound that holds with high probability if there is a sufficiently large number of distance estimates between each reference node and the target node and if the distance estimation errors are not always negative.

The rest of the paper is organized as follows: Section 2 reviews some preliminary requirements, and Section 3 describes the signal model considered in this study. Different upper bounds are derived in Sections 4 to 6. Simulation results are discussed in Section 7. Finally, in Section 8, some concluding remarks are made.

2 Preliminaries

2.1 Notation

The following notation is used in this paper: lowercase and bold lowercase letters denote scalars and vectors, respectively. Matrices are written using bold uppercase letters. By $\mathbf{0}_{m\times m}$ we denote the $m\times m$ zero matrix, and $\mathbf{0}_m$ denotes the $m$-vector of zeros. $\mathbf{1}_M$ and $\mathbf{I}_M$ denote the vector of $M$ ones and the $M\times M$ identity matrix, respectively. The operator $\operatorname{tr}(\cdot)$ denotes the trace of a square matrix. The Euclidean norm is denoted by $\|\cdot\|_2$. Given two matrices $\mathbf{A}$ and $\mathbf{B}$, $\mathbf{A}\succ\mathbf{B}$ ($\mathbf{A}\succeq\mathbf{B}$) means that $\mathbf{A}-\mathbf{B}$ is positive definite (positive semidefinite). $\mathbb{S}^m$, $\mathbb{R}^m$, and $\mathbb{R}_+^m$ denote the set of all $m\times m$ symmetric matrices, the set of all $m\times 1$ real vectors, and the set of all $m\times 1$ vectors with nonnegative real entries, respectively.

2.2 Semidefinite relaxation

Let us consider a quadratically constrained quadratic program (QCQP) as

$$\begin{array}{ll} \underset{\mathbf{x}\in\mathbb{R}^{m}}{\text{maximize}} & \mathbf{x}^{T}\mathbf{A}_{0}\mathbf{x}+2\mathbf{b}_{0}^{T}\mathbf{x}+c_{0}\\[2pt] \text{subject to} & \mathbf{x}^{T}\mathbf{A}_{i}\mathbf{x}+2\mathbf{b}_{i}^{T}\mathbf{x}+c_{i}\le 0,\quad i=1,\ldots,N \end{array}$$
(1)

for $\mathbf{A}_i\in\mathbb{S}^m$, $\mathbf{b}_i\in\mathbb{R}^m$, and $c_i\in\mathbb{R}$. For the nonconvex QCQP in (1), we employ a relaxation technique and derive a semidefinite programming (SDP) problem as follows. Let us rewrite the problem in (1) as

$$\begin{array}{ll} \underset{\mathbf{x}\in\mathbb{R}^{m}}{\text{maximize}} & \operatorname{tr}(\mathbf{A}_{0}\mathbf{x}\mathbf{x}^{T})+2\mathbf{b}_{0}^{T}\mathbf{x}+c_{0}\\[2pt] \text{subject to} & \operatorname{tr}(\mathbf{A}_{i}\mathbf{x}\mathbf{x}^{T})+2\mathbf{b}_{i}^{T}\mathbf{x}+c_{i}\le 0,\quad i=1,\ldots,N. \end{array}$$
(2)
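As a quick numerical check (not from the paper), the rewriting that takes (1) to (2) rests on the identity $\mathbf{x}^T\mathbf{A}\mathbf{x}=\operatorname{tr}(\mathbf{A}\mathbf{x}\mathbf{x}^T)$. The short sketch below verifies it for a small example with illustrative values:

```python
# Sanity check of the lifting identity x^T A x = tr(A x x^T) used to pass
# from (1) to (2). The matrix and vector below are illustrative only.

def quad_form(A, x):
    """Compute x^T A x for a small dense symmetric matrix A and vector x."""
    m = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(m) for j in range(m))

def trace_lifted(A, x):
    """Compute tr(A Z) with Z = x x^T, the rank-one variable before relaxation."""
    m = len(x)
    Z = [[x[i] * x[j] for j in range(m)] for i in range(m)]
    # tr(A Z) = sum_i (A Z)_{ii} = sum_{i,j} A[i][j] * Z[j][i]
    return sum(A[i][j] * Z[j][i] for i in range(m) for j in range(m))

A = [[2.0, 1.0], [1.0, 3.0]]
x = [1.5, -0.5]
assert abs(quad_form(A, x) - trace_lifted(A, x)) < 1e-12
```

The SDP relaxation in (3) then simply drops the rank-one requirement on $\mathbf{Z}$, keeping only $\mathbf{Z}\succeq\mathbf{x}\mathbf{x}^T$.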

Now, by replacing $\mathbf{Z}=\mathbf{x}\mathbf{x}^T$ and then relaxing this equality to $\mathbf{Z}\succeq\mathbf{x}\mathbf{x}^T$, we obtain an SDP as [19]

$$\begin{array}{ll} \underset{\mathbf{x}\in\mathbb{R}^{m},\,\mathbf{Z}\in\mathbb{S}^{m}}{\text{maximize}} & \operatorname{tr}(\mathbf{A}_{0}\mathbf{Z})+2\mathbf{b}_{0}^{T}\mathbf{x}+c_{0}\\[2pt] \text{subject to} & \operatorname{tr}(\mathbf{A}_{i}\mathbf{Z})+2\mathbf{b}_{i}^{T}\mathbf{x}+c_{i}\le 0,\quad i=1,\ldots,N\\[2pt] & \begin{bmatrix}\mathbf{Z} & \mathbf{x}\\ \mathbf{x}^{T} & 1\end{bmatrix}\succeq \mathbf{0}. \end{array}$$
(3)

Using the Schur complement ([20], Appendix B), we express the constraint $\mathbf{Z}\succeq\mathbf{x}\mathbf{x}^T$ as a linear matrix inequality in (3). To refer to the QCQP formulated in (1) throughout this paper, we use $\mathrm{QP}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N$. Similarly, to refer to the SDP relaxation in (3) originating from the QCQP in (1), we use $\mathrm{SDP}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N$. For the optimal values of the objective functions of the QCQP in (1) and the corresponding SDP relaxation in (3), we use $v_{\mathrm{qp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N$ and $v_{\mathrm{sdp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N$, respectively. By adopting the relaxation in (3), we expand the feasible set; therefore, the objective function in (3) is maximized over a larger set than in (1), and thus

$$v_{\mathrm{qp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^{N}\le v_{\mathrm{sdp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^{N}.$$
(4)

That is, the optimal value in (3) gives an upper bound on the optimal value in (1).

3 System model

Let us consider an $m$-dimensional network, $m=2$ or $3$, with $N$ reference nodes at known positions $\mathbf{a}_i=[a_{i,1}\,\ldots\,a_{i,m}]^T\in\mathbb{R}^m$, $i=1,\ldots,N$. Suppose that a target node is placed at an unknown position $\mathbf{x}=[x_1\,\ldots\,x_m]^T\in\mathbb{R}^m$. The range measurement between the target node and reference node $i$ is modeled as [6, 21]

$$\hat{d}_i = d(\mathbf{x},\mathbf{a}_i)+\varepsilon_i,\quad i=1,\ldots,N,$$
(5)

where $d(\mathbf{x},\mathbf{a}_i)=\|\mathbf{x}-\mathbf{a}_i\|_2$ is the actual Euclidean distance between the target node and reference node $i$, and $\varepsilon_i$ is the measurement error.

In the literature, the measurement error is commonly modeled as a zero-mean Gaussian random variable [1, 3, 5]. In some scenarios, however, other distributions, e.g., an exponential, uniform, or Laplacian distribution, seem to be more reasonable to describe the model in (5) [22–24]. In previous work [9, 13, 17], it is assumed that all measurement errors are positive, and it is then concluded that a target node can definitely be confined to a bounded convex set derived from the measurements. In fact, the intersection of a number of balls (with centers $\mathbf{a}_i$ and radii $\hat{d}_i$), which is nonempty for range measurements with positive errors, definitely contains the target node position. When at least one distance error is negative, the intersection derived from the measurements does not contain the target node position, even if the intersection happens to be nonempty. In this study, we generalize the technique introduced in [9] to upper bound the position error in cases where the intersection does not contain the target's location.

Before a detailed discussion of the upper bounds proposed in this work, we first review the concept of geometrically upper bounding the position error when distance estimate errors are positive. For a general discussion of upper bounds on estimation errors, we refer the reader to [9] and also to Appendix A.

Let us consider a geometric interpretation of the positioning problem. In the absence of measurement errors, i.e., $\hat{d}_i=d(\mathbf{x},\mathbf{a}_i)$, the geometric locus of the target node position is a sphere with radius $d(\mathbf{x},\mathbf{a}_i)$ and center $\mathbf{a}_i$. If the measurement error is positive, the ball with radius $\hat{d}_i$ and center $\mathbf{a}_i$ definitely contains the target node position. Therefore, the intersection of a number of balls defines a set in which the target node position resides. Let us define the ball $\mathcal{B}_i$ centered at $\mathbf{a}_i$ as

$$\mathcal{B}_i \triangleq \left\{\mathbf{x}\in\mathbb{R}^m : \|\mathbf{x}-\mathbf{a}_i\|_2\le \hat{d}_i\right\},\quad i=1,\ldots,N.$$
(6)

It is then reasonable to consider every point in the intersection $\mathcal{B}$ (a bounded set) of the balls $\mathcal{B}_i$ as an estimate $\hat{\mathbf{x}}$ of $\mathbf{x}$, i.e.,

$$\hat{\mathbf{x}}\in\mathcal{B}\triangleq\bigcap_{i=1}^{N}\mathcal{B}_i.$$
(7)

We call $\mathcal{B}$ a feasible set and every point in $\mathcal{B}$ a feasible point. Based on the geometric interpretation in (7), we can derive estimators that pick one point inside the intersection as an estimate. For example, the POCS or outer-approximation approach picks one feasible point as an estimate [13, 15, 17]. Regardless of the type of estimator, if an estimate of the target node position is available, we can define an upper bound on the position error $e$ with respect to the feasible set [9]. Namely, we consider the following definition:

$$e\triangleq\|\hat{\mathbf{x}}-\mathbf{x}\|_2\le v_{\max}(\hat{\mathbf{x}},\mathcal{B})\triangleq\max_{\mathbf{y}\in\mathcal{B}}\|\hat{\mathbf{x}}-\mathbf{y}\|_2,$$
(8)

where x ̂ is an estimate of the target node position x given by a positioning algorithm. For details of defining different upper bounds, we refer the reader to [9].

The validity of the definition in (8) relies on the fact that the target node position is confined to a bounded set. (For an unbounded feasible set $\mathcal{B}$, the upper bound defined in (8) is trivial.) If at least one distance estimate error is negative, the intersection no longer contains the target node position, and the bound defined in (8) may not be valid. Figure 1 shows an example of a network consisting of three reference nodes and one target node in which different cases are observed for the intersection derived from the range-based measurements.

Figure 1

The intersection for a network consisting of three reference nodes and one target node. (a) Measurement one has a positive error and the two others have negative errors; the intersection is nonempty and does not contain the target node position. (b) All measurement errors are positive and the intersection definitely contains the target node position. (c) All measurements have negative errors; the intersection is nonempty and does not contain the target node position. (d) All measurement errors are negative and the intersection is empty.

4 An upper bound for both positive and negative measurement errors

In this section, we derive a simple upper bound, expressed in closed form, which is valid if at least one distance estimate error is positive. Based on the previous discussion, when a distance estimate error is positive, the ball defined for that measurement definitely contains the target node position. Then, we can confine the target node position to a bounded convex set as follows:

$$\mathbf{x}\in\mathcal{B}_{\mathcal{I}}=\bigcap_{i\in\mathcal{I}}\mathcal{B}_i,$$
(9)

where the set $\mathcal{I}$ ($|\mathcal{I}|\le N$) indicates the measurements that have positive errors, e.g., in Figure 1a, $\mathcal{I}=\{1\}$. If the set $\mathcal{I}$ is known in advance, we can use the same strategy as in the previous section and derive an upper bound on the position error with respect to the set $\mathcal{B}_{\mathcal{I}}$. In general, however, finding the set $\mathcal{I}$ is a difficult task. We therefore relax the problem to find an upper bound when the set is unknown, but nonempty. Since the target node position belongs to at least one ball (at least one measurement has a positive error), we can compute the maximum distance to the different balls to find an upper bound. In fact, instead of the intersection of balls in (9), we consider each ball individually and determine an upper bound as

$$e\le v_{\max}(\hat{\mathbf{x}},\mathcal{B}_{\mathcal{I}})\le\max_{i=1,\ldots,N}\;\max_{\mathbf{y}\in\mathcal{B}_i}\|\hat{\mathbf{x}}-\mathbf{y}\|_2=\max\left\{\|\hat{\mathbf{x}}-\mathbf{a}_1\|_2+\hat{d}_1,\ldots,\|\hat{\mathbf{x}}-\mathbf{a}_N\|_2+\hat{d}_N\right\}.$$
(10)

As an example, Figure 2 shows how the position error can be upper bounded for a network consisting of three reference nodes and one target node in which one distance estimate has a positive error. If the set $\mathcal{I}=\{1\}$ is known in advance, we simply find an upper bound on the position error as $\|\hat{\mathbf{x}}-\mathbf{a}_1\|_2+\hat{d}_1$. Note that the bound defined in (10) may not be tight, since it depends on, for instance, the geometry and the size of the network.

Figure 2

The intersection is empty and measurement one has a positive error. The actual error and an upper bound on the position error are shown in this figure.

In Proposition 4.1, we show that a simple test can be used to determine whether a target node position belongs to at least one ball.

Proposition 4.1.

Suppose $\mathbf{x}$ is in the convex hull of $\mathbf{a}_1,\ldots,\mathbf{a}_N$, where $\mathbf{a}_i\in\mathbb{R}^m$. Let $\mathcal{B}_i=\{\mathbf{z}\in\mathbb{R}^m:\|\mathbf{z}-\mathbf{a}_i\|_2\le\hat{d}_i\}$, $i=1,\ldots,N$. If $\bigcap_{i=1}^{N}\mathcal{B}_i$ is nonempty, then $\mathbf{x}\in\mathcal{B}_\ell$ for some $\ell\in\{1,\ldots,N\}$.

Proof

The convex hull of the reference nodes is a convex polyhedron with vertices $\mathbf{a}_j$, $j\in J=\{j_1,\ldots,j_l\}\subseteq\{1,\ldots,N\}$. Consider a point $\mathbf{y}$ inside the convex hull such that $\|\mathbf{y}-\mathbf{a}_j\|_2\le\hat{d}_j$, $j\in J$. Now, we partition the convex hull into a number of simplexes. The target node position $\mathbf{x}$ then belongs to one of these simplexes, say the simplex formed by vertices $\mathbf{a}_n$, $\mathbf{y}$, $\mathbf{a}_p$, and $\mathbf{a}_q$ with $n,p,q\in J$. Therefore, we have

$$\|\mathbf{x}-\mathbf{a}_n\|_2+\|\mathbf{x}-\mathbf{a}_p\|_2+\|\mathbf{x}-\mathbf{a}_q\|_2\le\|\mathbf{y}-\mathbf{a}_n\|_2+\|\mathbf{y}-\mathbf{a}_p\|_2+\|\mathbf{y}-\mathbf{a}_q\|_2\le\hat{d}_n+\hat{d}_p+\hat{d}_q.$$
(11)

From (11), it is easily concluded that $\|\mathbf{x}-\mathbf{a}_n\|_2\le\hat{d}_n$, $\|\mathbf{x}-\mathbf{a}_p\|_2\le\hat{d}_p$, or $\|\mathbf{x}-\mathbf{a}_q\|_2\le\hat{d}_q$. Hence, the target node position belongs to at least one of the $n$th, $p$th, or $q$th balls. Thus, the proposition is proved.

Proposition 4.1 implies that the convex hull of the reference nodes is a subset of the union of the balls $\mathcal{B}_j$, $j=1,\ldots,N$, if the intersection in (7) is nonempty.

In summary, the bound in (10) is valid if there is at least one distance estimate with a positive error.
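The closed-form bound in (10) can be sketched in a few lines of code. This is a minimal illustration (not from the paper); the network coordinates and distances below are hypothetical:

```python
# Bound 1, Equation (10): the largest of the maximum distances from the
# estimate x_hat to each ball B_i. Valid whenever at least one distance
# estimate has a positive error.
from math import dist  # Euclidean distance, Python >= 3.8

def bound1(x_hat, anchors, d_hat):
    """Upper bound (10): max_i ( ||x_hat - a_i||_2 + d_hat_i )."""
    return max(dist(x_hat, a) + d for a, d in zip(anchors, d_hat))

# Hypothetical 2-D network: three reference nodes and a position estimate.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
d_hat = [5.0, 7.0, 6.0]
x_hat = (4.0, 3.0)
print(bound1(x_hat, anchors, d_hat))  # largest of the per-ball bounds
```

If the set $\mathcal{I}$ were known, the same maximum would instead be taken only over $i\in\mathcal{I}$, giving a tighter value.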

5 An upper bound for bounded measurement errors

The bound derived in (10) may not be tight, as will be observed in the simulations. In the sequel, we investigate another derivation of an upper bound for bounded measurement errors. In a practical application, it is realistic to assume that the measurement error is bounded. For instance, assuming that the estimated distance $\hat{d}_i$ is nonnegative, we can conclude that the absolute value of a negative noise realization is at most equal to the actual distance $d(\mathbf{x},\mathbf{a}_i)$. Therefore, it is reasonable to assume a bounded measurement error. Let the measurement error in (5) be bounded, i.e., $-\rho_{1i}\le\varepsilon_i\le\rho_{2i}$, with $\rho_{1i},\rho_{2i}\in\mathbb{R}_+$. This means that the measured distances can be bounded as

$$d(\mathbf{x},\mathbf{a}_i)-\rho_{1i}\le\hat{d}_i\le d(\mathbf{x},\mathbf{a}_i)+\rho_{2i}.$$
(12)

Therefore, geometrically, we can conclude that the target node position belongs to the intersection of a number of rings (annuli) derived from the measurements, one ring for every measurement, where the ring for measurement $i$ is bounded by the two spheres centered at $\mathbf{a}_i$ with radii $\hat{d}_i-\rho_{2i}$ and $\hat{d}_i+\rho_{1i}$. Now, let us consider a new set of measurements as

$$\tilde{d}_i=\hat{d}_i+\rho_{1i}\ge d(\mathbf{x},\mathbf{a}_i),$$
(13)

where we assume that the values of ρ 1 i are known in advance. It is observed that the new measurements, i.e., d ~ i , have positive errors. Let us form a new intersection as

$$\tilde{\mathcal{B}}=\bigcap_{i=1}^{N}\tilde{\mathcal{B}}_i,$$
(14)

where $\tilde{\mathcal{B}}_i\triangleq\{\mathbf{x}:\|\mathbf{x}-\mathbf{a}_i\|_2\le\tilde{d}_i\}$.

We can deduce that the nonempty feasible set $\tilde{\mathcal{B}}$ definitely contains the target node position. For example, Figure 3 shows a scenario in which all distance errors are negative and the intersection derived from the measurements is empty. By adding a positive value (the absolute value of a lower bound on the error) to the estimated distances $\hat{d}_1$, $\hat{d}_2$, and $\hat{d}_3$, a set of larger distances $\tilde{d}_1$, $\tilde{d}_2$, and $\tilde{d}_3$ is obtained, which have positive errors; thus, the balls with radii $\tilde{d}_1$, $\tilde{d}_2$, and $\tilde{d}_3$ definitely contain the target node position.

Figure 3

Every distance measurement $\hat{d}_i$ is replaced by a larger value $\tilde{d}_i$. The intersection drawn in green now definitely contains the target node position.

According to the discussion in section 3, let us define an optimization problem to derive an upper bound on the position error as

$$e\le v_{\max}(\hat{\mathbf{x}},\tilde{\mathcal{B}})=\max\left\{\|\mathbf{y}-\hat{\mathbf{x}}\|_2:\mathbf{y}\in\tilde{\mathcal{B}}\right\}.$$
(15)

Note that when the lower bound on the distance estimate error ρ 1 i tends to zero, the bound obtained in (15) covers the one in [9] as a special case.

The problem in (15) is a nonconvex problem and can be approximately solved by adopting a relaxation similar to the approaches explained in [9]. That is, we first consider the following optimization problem:

$$\begin{array}{ll}\underset{\mathbf{y}\in\mathbb{R}^m}{\text{maximize}} & \mathbf{y}^T\mathbf{y}-2\hat{\mathbf{x}}^T\mathbf{y}+\hat{\mathbf{x}}^T\hat{\mathbf{x}}\\[2pt] \text{subject to} & \mathbf{y}^T\mathbf{y}-2\mathbf{a}_i^T\mathbf{y}+\mathbf{a}_i^T\mathbf{a}_i\le\tilde{d}_i^{\,2},\quad i=1,\ldots,N.\end{array}$$
(16)

In fact, the problem in (16) is a QCQP, $\mathrm{QP}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N$, with parameters

$$\mathbf{A}_i=\mathbf{I}_m,\qquad \mathbf{b}_i=\begin{cases}-\hat{\mathbf{x}}, & \text{if } i=0,\\ -\mathbf{a}_i, & \text{otherwise},\end{cases}\qquad c_i=\begin{cases}\|\hat{\mathbf{x}}\|_2^2, & \text{if } i=0,\\ \|\mathbf{a}_i\|_2^2-\tilde{d}_i^{\,2}, & \text{otherwise}.\end{cases}$$
(17)

It is clear that $v_{\max}^2(\hat{\mathbf{x}},\tilde{\mathcal{B}})=v_{\mathrm{qp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N$, since the objective in (16) is the squared distance $\|\mathbf{y}-\hat{\mathbf{x}}\|_2^2$.

Following a similar procedure as explained in Section 2.2, we can obtain a relaxed SDP problem $\mathrm{SDP}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N$ with the parameters defined in (17), and the maximum position error can be upper bounded as

$$e^2\le v_{\max}^2(\hat{\mathbf{x}},\tilde{\mathcal{B}})=v_{\mathrm{qp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N\le v_{\mathrm{sdp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N.$$
(18)

To investigate the tightness of the right-most bound in (18), we can derive a lower bound on the maximum position error $v_{\max}(\hat{\mathbf{x}},\tilde{\mathcal{B}})$ from the upper bound in (18), considering the methods proposed in [25, 26]. Following similar procedures as in [9], we can bound $v_{\mathrm{qp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N$ from below as

$$\alpha\, v_{\mathrm{sdp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N\le v_{\mathrm{qp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N,$$
(19)

where

$$\alpha=\frac{1}{2\ln\left(2(N+1)\mu\right)},\qquad \mu=\min\{N+1,\,m+1\}.$$
(20)

Therefore, the squared maximum position error $v_{\max}^2(\hat{\mathbf{x}},\tilde{\mathcal{B}})$ can be upper and lower bounded as

$$\alpha\, v_{\mathrm{sdp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N\le v_{\max}^2(\hat{\mathbf{x}},\tilde{\mathcal{B}})\le v_{\mathrm{sdp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N.$$
(21)
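The approximation factor in (20) is a simple scalar and can be evaluated directly; the helper below is illustrative (not from the paper's code):

```python
# Evaluate the approximation factor alpha in (20) used in the bound (21).
from math import log

def alpha(N, m):
    """alpha = 1 / (2 ln(2 (N+1) mu)), with mu = min{N+1, m+1}."""
    mu = min(N + 1, m + 1)
    return 1.0 / (2.0 * log(2 * (N + 1) * mu))

# For the 3-D simulated networks with N = 15 reference nodes:
print(alpha(15, 3))  # roughly 0.103, so the relaxation gap is bounded
```

A value of about 0.1 means the SDP optimum overestimates the QCQP optimum by at most roughly a factor of ten in this configuration.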

Remark 1.

Considering a reasonable lower bound on the distance estimation errors, we can derive a positioning algorithm based on, e.g., the least squares approach. In fact, we can solve the least squares problem over the feasible set derived by enlarging the distance measurements (a constrained least squares). We can also apply a modified version of POCS to find an estimate of the target node position: instead of projecting onto every ball with radius $\hat{d}_i$, we now project onto the larger ball with radius $\tilde{d}_i$. Note that to handle an empty intersection, a relaxed approach is used for POCS algorithms in the literature, e.g., see [13, 15]. By increasing the estimated distances, we can also apply unrelaxed POCS without facing any convergence problems. We can further apply a method based on projection onto rings [13] to find an estimate of the target node position. For details of projection onto rings, see, e.g., [13, 27].

Remark 2.

Considering that the new feasible set includes the target node position, we may design an algorithm that takes a point inside the feasible set as an estimate (Remark 1). Therefore, we can define new upper bounds on the position error as the maximum distance between two points in the intersection containing the target node position. To find the maximum length of the intersection, we can follow the approaches studied in [9].

Remark 3.

For Gaussian noise, we can consider a bounded interval in which noise samples reside with high probability. For instance, for zero-mean Gaussian noise, if we define $\rho_{1i}=\rho_{2i}=\rho=3\sigma$, noise samples belong to the set $\{\alpha\in\mathbb{R}:-3\sigma\le\alpha\le 3\sigma\}$ with probability 0.9973.
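The probability quoted in Remark 3 follows from the standard normal CDF and can be checked with the standard library; the snippet is a quick verification, not part of the paper:

```python
# Pr(|epsilon| <= 3 sigma) for zero-mean Gaussian noise equals
# erf(3 / sqrt(2)) ~= 0.9973, independent of the value of sigma.
from math import erf, sqrt

p = erf(3 / sqrt(2))
assert abs(p - 0.9973) < 1e-4
print(p)
```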

In the rest of this section, we combine the preceding bounds and derive another simple upper bound on the position error for bounded measurement errors. The main reason for deriving this new upper bound is that it is simple and can be expressed in closed form. Suppose that the measurement errors are bounded from below. We first form a new set of measurements as in (13). We then know that every ball includes the target node position. Hence, the maximum distance to every ball defines an upper bound on the position error, namely

$$e\le v_{\max}(\hat{\mathbf{x}},\tilde{\mathcal{B}})\le\max_{\mathbf{y}\in\tilde{\mathcal{B}}_i}\|\hat{\mathbf{x}}-\mathbf{y}\|_2=\|\hat{\mathbf{x}}-\mathbf{a}_i\|_2+\tilde{d}_i,\quad i=1,\ldots,N.$$
(22)

Since (22) holds for every $i$, the tightest such bound is obtained by minimizing over $i$:

$$e\le v_{\max}(\hat{\mathbf{x}},\tilde{\mathcal{B}})\le\min_{i=1,\ldots,N}\;\max_{\mathbf{y}\in\tilde{\mathcal{B}}_i}\|\hat{\mathbf{x}}-\mathbf{y}\|_2=\min\left\{\|\hat{\mathbf{x}}-\mathbf{a}_1\|_2+\tilde{d}_1,\ldots,\|\hat{\mathbf{x}}-\mathbf{a}_N\|_2+\tilde{d}_N\right\},$$
(23)

which provides a simple upper bound in closed form.
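The closed-form bound (23) can be sketched as follows. This is a minimal illustration (not from the paper); the coordinates, distances, and lower bounds $\rho_{1i}$ are hypothetical:

```python
# Bound (23): enlarge each distance estimate by the known lower bound rho_1i
# on its error as in (13), then take the smallest of the per-ball maximum
# distances from the estimate.
from math import dist  # Euclidean distance, Python >= 3.8

def bound2_closed_form(x_hat, anchors, d_hat, rho1):
    """Upper bound (23): min_i ( ||x_hat - a_i||_2 + d_hat_i + rho_1i )."""
    d_tilde = [d + r for d, r in zip(d_hat, rho1)]   # new measurements (13)
    return min(dist(x_hat, a) + d for a, d in zip(anchors, d_tilde))

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
d_hat = [5.0, 7.0, 6.0]
rho1 = [1.0, 1.0, 1.0]       # assumed lower bounds on the negative errors
x_hat = (4.0, 3.0)
print(bound2_closed_form(x_hat, anchors, d_hat, rho1))
```

In contrast to (10), which maximizes over the balls, (23) minimizes over them, which is what makes it a much tighter single-ball bound once every ball is guaranteed to contain the target.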

6 An upper bound for an unknown measurement noise

For practical measurements, we can use the upper bounds developed in (18) and (23) if a reasonable lower bound on the measurement errors is available. For scenarios in which it is difficult to obtain such a lower bound, the bounds introduced in Section 5 may not be applicable. In this section, we introduce a new bound that can be applied when the realizations of the distance errors are not always negative.

Suppose a target node measures the distance between a reference node and itself $K$ times. Namely, we have

$$\hat{d}_i^{\,k}=d(\mathbf{x},\mathbf{a}_i)+\varepsilon_i^{\,k},\quad i=1,\ldots,N,\;k=1,\ldots,K.$$
(24)

Without any particular assumption on the measurement noise $\varepsilon_i^{\,k}$, except that $\Pr(\varepsilon_i^{\,k}\ge 0)>0$, the set $\omega_i=\{k:\varepsilon_i^{\,k}\ge 0,\;k=1,\ldots,K\}$ is nonempty with high probability when $K$ is sufficiently large. Then, we can write

$$d(\mathbf{x},\mathbf{a}_i)\le\hat{d}_i^{\,k},\quad k\in\omega_i.$$
(25)

This means that at least one distance measurement has a positive error. We then form a new set of measurements as

$$\bar{d}_i\triangleq\max\left\{\hat{d}_i^{\,k}:k=1,\ldots,K\right\},\quad i=1,\ldots,N.$$
(26)

Therefore, we can show that $\lim_{K\to\infty}\Pr\left(\bar{d}_i-d(\mathbf{x},\mathbf{a}_i)\ge 0\right)=1$. Hence, the new distance $\bar{d}_i$ asymptotically has a positive error. Consequently, we can define a feasible set in which the target node position resides as

$$\mathbf{x}\in\bar{\mathcal{B}}=\bigcap_{i=1}^{N}\bar{\mathcal{B}}_i,$$
(27)

where $\bar{\mathcal{B}}_i=\{\mathbf{x}\in\mathbb{R}^m:\|\mathbf{x}-\mathbf{a}_i\|_2\le\bar{d}_i\}$.
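The construction of the enlarged distances (26) can be sketched with hand-picked error samples (hypothetical values, so the behavior is easy to follow); it also shows why a moderate $K$ usually suffices:

```python
# Enlarged distances (26) from K repeated measurements per link. With at least
# one nonnegative error per link, d_bar_i >= d(x, a_i), so every ball
# B_bar_i contains the target node position.
true_d = [5.0, 7.2]                        # hypothetical true distances d(x, a_i)
errors = [[-0.5, 0.2, -0.1],               # K = 3 error samples for link 1
          [-0.3, -0.6, 0.4]]               # K = 3 error samples for link 2

d_bar = [max(d + e for e in errs) for d, errs in zip(true_d, errors)]
assert all(db >= d for db, d in zip(d_bar, true_d))

# If an error is nonnegative with probability p, all K samples on a link are
# negative with probability (1 - p) ** K, which decays quickly in K.
p, K = 0.5, 10
print((1 - p) ** K)  # -> 0.0009765625
```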

According to the discussion in section 3, an upper bound on the position error can then be obtained as

$$e\le v_{\max}(\hat{\mathbf{x}},\bar{\mathcal{B}}).$$
(28)

Similarly, we consider a QCQP, $\mathrm{QP}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N$, to find an upper bound, where

$$\mathbf{A}_i=\mathbf{I}_m,\qquad \mathbf{b}_i=\begin{cases}-\hat{\mathbf{x}}, & \text{if } i=0,\\ -\mathbf{a}_i, & \text{otherwise},\end{cases}\qquad c_i=\begin{cases}\|\hat{\mathbf{x}}\|_2^2, & \text{if } i=0,\\ \|\mathbf{a}_i\|_2^2-\bar{d}_i^{\,2}, & \text{otherwise}.\end{cases}$$
(29)

Following a similar approach, we can derive a relaxed SDP, $\mathrm{SDP}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N$, parameterized as in (29). An upper bound on the position error can then be derived by solving this SDP, i.e.,

$$e^2\le v_{\max}^2(\hat{\mathbf{x}},\bar{\mathcal{B}})=v_{\mathrm{qp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N\le v_{\mathrm{sdp}}\{\mathbf{A}_i,\mathbf{b}_i,c_i\}_{i=0}^N.$$
(30)

We can also find a lower bound similar to (19) on the upper bound $v_{\max}(\hat{\mathbf{x}},\bar{\mathcal{B}})$. Finally, a simple upper bound similar to (23) can be obtained as

$$v_{\max}(\hat{\mathbf{x}},\bar{\mathcal{B}})\le\min_{i=1,\ldots,N}\;\max_{\mathbf{y}\in\bar{\mathcal{B}}_i}\|\hat{\mathbf{x}}-\mathbf{y}\|_2=\min\left\{\|\hat{\mathbf{x}}-\mathbf{a}_1\|_2+\bar{d}_1,\ldots,\|\hat{\mathbf{x}}-\mathbf{a}_N\|_2+\bar{d}_N\right\}.$$
(31)

Note that a tighter upper bound can be derived if the set $\omega_i$ is known in advance. In fact, we can then form a new set of measurements that have positive errors as

$$\bar{d}_i\triangleq\min\left\{\hat{d}_i^{\,k}:k\in\omega_i\right\},\quad i=1,\ldots,N.$$
(32)

Remark 4.

It is clear that for a very large $K$, we may observe large positive outliers, and the intersection obtained in (27) can have a large volume, which yields a loose bound. On the contrary, a small number of samples for every link may not guarantee that at least one distance has a positive error. However, as reported in [28], distance estimates tend to have positive errors in practice. Therefore, a moderate number of samples seems to be enough to obtain a valid and tight bound.

Table 1 summarizes different bounds formulated in this study.

Table 1 Summary of bounds

7 Simulation results

Computer simulations are conducted to study the validity and tightness of the proposed bounds. In the simulations, we consider a network consisting of $N$ reference nodes randomly distributed in a 10×10×10 m³ cube. One target node is randomly placed inside the volume. To generate noisy distances, we add noise to the actual distances between the reference nodes and the target node. We consider both Gaussian and truncated Gaussian measurement errors. Specifically, we consider $\rho_{1i}=\rho_{2i}=\rho$ in (12). In the simulations, we also pick the set of distance estimates with positive errors, i.e., the set $\mathcal{I}$, form an intersection as in (9), and then derive an upper bound for this nonempty intersection by solving a relaxed SDP; in this case, an upper bound is derived using (18) in which $\tilde{\mathcal{B}}$ is replaced by $\mathcal{B}_{\mathcal{I}}$. To solve the optimization problems formulated in this study, we use the CVX toolbox [29].

We define the tightness of a bound $v$ as $t_v\triangleq v-e$, where $e$ is the true position error. We also define the relative tightness as $\tau_v\triangleq(v-e)/e$. To illustrate how the tightness varies with, e.g., network deployment, measurement noise, or estimator parameters, we study the cumulative distribution functions (CDFs) of $t_v$ and $\tau_v$, i.e., $\Pr\{t_v\le x\}$ and $\Pr\{\tau_v\le x\}$, where the randomness comes from, e.g., selecting the deployment in a random fashion. In the following, we generate $e$ from the unrelaxed POCS estimates (for details of POCS and different relaxations, see, e.g., [13], ([30], chap. 5)).
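The tightness metrics and their empirical CDF can be sketched directly from these definitions; the snippet below uses made-up numbers purely for illustration:

```python
# Tightness t_v = v - e, relative tightness tau_v = (v - e) / e, and an
# empirical CDF over Monte Carlo realizations. All values are hypothetical.
def tightness(v, e):
    return v - e

def relative_tightness(v, e):
    return (v - e) / e

def empirical_cdf(samples, x):
    """Pr(sample <= x) estimated from a list of realizations."""
    return sum(s <= x for s in samples) / len(samples)

errors = [1.0, 2.0, 0.5, 1.5]            # true position errors e
bounds = [3.0, 2.5, 2.0, 4.5]            # an upper bound v per realization
taus = [relative_tightness(v, e) for v, e in zip(bounds, errors)]
print(empirical_cdf(taus, 1.0))          # fraction of cases with tau_v <= 1
```

A valid bound always has $t_v\ge 0$; realizations with $t_v<0$ are exactly the invalid cases discussed for small $\rho$ below.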

Figure 4 shows the CDF of the relative tightness for bounded Gaussian measurement errors when an estimate of the target node is obtained via the POCS algorithm. In the simulations, we generate Gaussian samples in the interval [−3σ, 3σ], i.e., the distance error pdf is a scaled Gaussian pdf inside the interval [−3σ, 3σ] and zero elsewhere. The standard deviation of the distribution is set to 1.5 m. As observed from the figure, Bound 1, i.e., Equation 10, is always the loosest bound, as expected. It is also concluded that the upper bound for known $\mathcal{I}$ becomes the tightest bound as the number of reference nodes increases. For few reference nodes, Bound 2 shows comparable tightness to the bound for known $\mathcal{I}$. The results also demonstrate that for low-density networks, Bound 2 and Bound 2 (relaxed) are close to one another. This figure also shows that the behaviors of Bound 2 and Bound 2 (relaxed) remain almost the same for different network densities.

Figure 4

Comparison between the CDF of the relative tightness of upper bounds versus the POCS position error. (a) 5 Reference nodes, (b) 10 reference nodes, (c) 15 reference nodes, and (d) 20 reference nodes.

For further investigation, we study the validity of the bounds for Gaussian measurement errors and different values of ρ (ρ = σ, 2σ, 3σ, and 4σ). In the simulations, we set σ = 1 m. Figure 5 shows the CDF of the relative tightness for the different bounds. As can be seen, Bound 2 and Bound 2 (relaxed) may not be valid in all cases, but increasing ρ improves their validity. For instance, from this figure we can conclude that for ρ ≥ 3σ the bounds are valid with high probability. For small ρ, e.g., ρ = σ, the intersection derived from the measurements may be empty or may not include the target node position. For example, in Figure 5a, the intersection is empty in 20% of the cases, and in almost 30% of the cases Bound 2 is not valid even though the intersection is nonempty. Note that if the intersection is empty, the SDP algorithm returns infinity to indicate an infeasible problem.
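The validity check behind these statistics reduces to a membership test: the true position must lie in every ball enlarged by ρ, otherwise the intersection-based bound cannot hold for that realization. A sketch of the test (our notation; `refs` are the reference positions a_i and `d_hat` the distance estimates):

```python
import numpy as np

def intersection_contains(refs, d_hat, rho, x):
    """True when x lies in every enlarged ball ||x - a_i|| <= d_hat_i + rho.
    If the true target position fails this test (or the intersection is
    empty), the intersection-based bound is invalid for that realization."""
    dist = np.linalg.norm(np.asarray(refs, float) - np.asarray(x, float), axis=1)
    return bool(np.all(dist <= np.asarray(d_hat, float) + rho))
```

Counting failures of this test over many realizations reproduces the invalidity percentages discussed for Figure 5a.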

Figure 5

CDF comparison of relative tightness for the POCS position error with Gaussian measurement errors (σ = 1 m, N = 15). (a) ρ = σ, (b) ρ = 2σ, (c) ρ = 3σ, and (d) ρ = 4σ.

In the next simulations, we compare the upper bounds with the maximum position error. To compare the different upper bounds, we again employ the POCS method. For every realization of the network, we run POCS from 200 random initializations and take the maximum position error; the upper bounds are then computed for the estimate that gives this maximum POCS position error. Figure 6 shows the CDF of the relative tightness for the different bounds. It is again observed that Bound 1 is the loosest bound. For low-density networks, Bound 2 and Bound 2 for known index set I are close to each other. It is also observed that Bound 2 (relaxed) shows good tightness for low-density networks.
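The worst-case selection over initializations can be sketched as follows. The POCS iteration itself is not reproduced here; `estimator` is a placeholder for one full POCS run started from a given initial guess.

```python
import numpy as np

def worst_case_run(estimator, target, n_init=200, side=10.0, rng=None):
    """Run a localization routine from n_init random initial guesses
    and keep the estimate with the largest position error."""
    rng = np.random.default_rng() if rng is None else rng
    worst_est, worst_err = None, -1.0
    for _ in range(n_init):
        x0 = rng.uniform(0.0, side, size=np.shape(target))
        x_hat = np.asarray(estimator(x0), float)
        err = float(np.linalg.norm(x_hat - target))
        if err > worst_err:
            worst_est, worst_err = x_hat, err
    return worst_est, worst_err
```

The upper bounds are then evaluated at `worst_est`, matching the procedure used for Figure 6.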

Figure 6

Comparison between CDF of relative tightness of upper bounds versus maximum POCS position error. (a) 5 Reference nodes, (b) 10 reference nodes, (c) 15 reference nodes, and (d) 20 reference nodes.

We now evaluate Bound 3 for different networks when K distance samples are available for every link. In the simulations, we set σ = 1 m. Figure 7 shows the CDFs of the relative tightness for Bound 3 and Bound 3 (relaxed) with respect to the POCS estimates. Among the K POCS position estimates, we pick the one that gives the largest position error. As can be seen, increasing the number of samples K decreases the tightness of both bounds due to the appearance of large outliers: the per-link maximum grows with K, so the intersection becomes a large region. From this figure, it is also observed that Bound 3 is more sensitive to K than Bound 3 (relaxed). In addition, for a fixed K, Bound 3 (relaxed) shows almost the same behavior for different numbers of reference nodes. It is also seen that bounds obtained for small K may not always be valid when the number of reference nodes increases; for instance, for N = 20 and K = 5, in a small percentage of cases the intersection does not contain the position of the target node and, therefore, Bound 3 may not be valid.
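The construction behind Bound 3 is simply the per-link maximum over the K samples, plus the observation that the chance of having no positive-error sample decays geometrically in K. A minimal sketch (our function names):

```python
import numpy as np

def max_distance_per_link(samples):
    """samples: (N, K) array holding K distance estimates per link.
    The per-link maximum is, with high probability, an estimate with
    positive error, which is the idea behind Bound 3."""
    return np.asarray(samples, float).max(axis=1)

def prob_no_positive_error(p_nonpos, K):
    """Probability that none of K i.i.d. samples has a positive error,
    given per-sample probability p_nonpos of a nonpositive error."""
    return p_nonpos ** K
```

For symmetric zero-mean noise, p_nonpos = 0.5, so already K = 5 leaves only about a 3% chance per link of missing a positive-error sample, consistent with the small invalidity rates seen for small K.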

Figure 7

CDF of relative tightness for Bound 3 and Bound 3 (relaxed) for different numbers of distance estimates for every link, K. (a) 5 Reference nodes, (b) 10 reference nodes, (c) 15 reference nodes, and (d) 20 reference nodes.

Finally, we consider a moving target, e.g., a vehicle, and use the available estimates and upper bounds on the position errors to compute confidence regions containing the position of the target at different locations. In particular, we consider a 2D network in which a number of reference nodes are placed on the lines y = 0 and y = 100; see Figures 8 and 9 for details. At every position marked by a red circle in the figures, we measure noisy distances, run the POCS algorithm as before, and obtain an estimate of the target position. The target moves along a trajectory given by the quadratic curve y = −0.0007x² + 0.4x + 10 in the xy plane (see the red curves in Figures 8 and 9).
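The waypoints along this trajectory can be generated directly from the stated curve; the sampling of 11 waypoints below is our choice for illustration, not the paper's.

```python
import numpy as np

def trajectory_y(x):
    """Target trajectory y = -0.0007 x^2 + 0.4 x + 10 from the simulation."""
    x = np.asarray(x, float)
    return -0.0007 * x**2 + 0.4 * x + 10.0

# Sample a few waypoints along the curve in the xy plane.
xs = np.linspace(0.0, 100.0, 11)
waypoints = np.column_stack((xs, trajectory_y(xs)))
```

At each waypoint, one POCS estimate and one upper bound together define a confidence disc, as shown in Figures 8 and 9.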

Figure 8

Confidence discs formed by POCS estimates and corresponding upper bounds (Bound 2, Equation 18). Discs contain the location of a target moving according to a trajectory.

Figure 9

Confidence discs formed by POCS estimates and corresponding upper bound (Bound 3 in Equation 30, K=6 ). Discs are formed for the same trajectory used in Figure 8.

Figure 8 shows discs centered at the POCS estimates with radii equal to the upper bounds (Bound 2, Equation (18)) on the position errors. In the simulation, we assume the same truncated Gaussian distribution as before. From the figure, it is observed that the discs are reasonably tight and indeed contain the target locations.

In Figure 9, we plot confidence discs using Bound 3 in Equation (30). In the simulation, we set K=6 for the Gaussian measurement errors. From the figure, it is observed that the corresponding discs for different estimates contain the target positions. In some locations, the discs are quite small, which implies that the bound is tight for those locations. For small K, the corresponding discs can be quite small, but in some situations the intersection might be empty and thus the algorithm may fail to provide an upper bound on the position error. For large K, the upper bound can be quite large resulting in larger discs. In the simulation, we observe that K=6 or K=7 provides satisfactory performance.

8 Conclusions

In this paper, we have studied the possibility of upper bounding a single position error in wireless sensor networks based on range measurements, including cases where some distance estimates have negative errors.

To derive an upper bound on the position error, we have followed the technique introduced in [9] to find a bounded set that contains the target node position. We have first assumed that at least one measurement error is positive and concluded that the target node position lies inside at least one ball derived from the measurements. Since the set of measurements with positive errors is unknown in advance, we have proposed to take the largest of the maximum distances from the estimate to each ball, Equation (10). This bound, called Bound 1, is not very tight, as we have observed through simulations. We have further assumed that the measurement errors are bounded from below, with a reasonable lower bound known a priori, and have enlarged the measurements by the absolute value of this lower bound to obtain a new set of distance estimates with positive errors. We have argued that the target node position lies in a feasible set derived from the new set of measurements and have then defined the maximum distance from the estimate to any point in this feasible set as an upper bound on the position error. Consequently, we have derived two upper bounds on the position error: Bound 2 in Equation (18) and Bound 2 (relaxed) in Equation (23).

Since a reasonable lower bound on the measurement errors may not be known in practical scenarios, we have further assumed that a number of distance estimates between the target node and every reference node are available. If the distance errors are positive with nonzero probability and enough estimates are collected for a link, then with high probability at least one distance error is positive. Therefore, we take the maximum over the different estimates as a distance estimate with positive error. Hence, we are able to confine the target node position to a bounded convex set that contains it with high probability; the resulting bound is called Bound 3.

Simulation results show that the upper bounds are reasonably tight in a number of situations. For instance, for dense networks, Bound 2 shows acceptable tightness compared to the bound obtained when the set of distance estimates with positive errors is known in advance. Numerical results also show that Bound 3 is reasonably tight when a reasonable number of distance measurements is available for every link. For a large number of distance estimates, Bound 3 can be loose, while for very few samples, it may not always be valid.

Endnote

a In fact, we only need to assume that the measurement error is bounded from below.

Appendix A

An instantaneous upper bound on the estimation error

Let us consider an unknown parameter vector x ∈ ℝ^n and define the set of possible values of x as

𝒳 ≜ {possible values of x} ⊆ ℝ^n.

Suppose we aim at estimating x from a random measurement vector M. Let m be the observed realization of M. Given the event M = m, the set of possible values of x changes to

𝒳(m) ≜ {possible values of x : M = m} ⊆ 𝒳.

Let us denote by x̂(m) ∈ ℝ^n an estimate of x as a function of the observed data m. We can define an upper bound on the ℓ₂ norm of the estimation error e ≜ ‖x̂(m) − x‖₂ as

e_{u1}(x̂(m)) ≜ sup_{x ∈ 𝒳(m)} ‖x̂(m) − x‖₂.
(33)

Remark 5.

It is clear that the tightness of the bound depends on how large the feasible set 𝒳(m) is and on how accurate the estimate x̂(m) is. If the bound is tight enough, it provides a good measure for evaluating the worst-case estimation error.
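Definition (33) can be approximated numerically when the feasible set is represented by a finite grid of candidate points. This is an illustrative sketch of ours, not the paper's SDP machinery; `feasible` marks which grid points belong to 𝒳(m).

```python
import numpy as np

def error_bound_on_grid(x_hat, grid_points, feasible):
    """Approximate e_u1(x_hat) = sup over x in X(m) of ||x_hat - x||_2
    by maximizing the distance from x_hat over the feasible grid points."""
    pts = np.asarray(grid_points, float)[np.asarray(feasible, bool)]
    return float(np.linalg.norm(pts - np.asarray(x_hat, float), axis=1).max())
```

On a fine enough grid this lower-bounds the true supremum; the SDP relaxations in the paper instead upper-bound it over the continuous set.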

References

  1. Mao G, Fidan B: Localization Algorithms and Strategies for Wireless Sensor Networks. New York: Information Science Reference; 2009.

  2. Gezici S, Tian Z, Giannakis GB, Kobayashi H, Molisch AF, Poor HV, Sahinoglu Z: Localization via ultra-wideband radios: a look at positioning aspects for future sensor networks. IEEE Signal Process. Mag. 2005, 22(4):70-84.

  3. Gezici S: A survey on wireless position estimation. Wireless Pers. Commun. 2008, 44(3):263-282. doi:10.1007/s11277-007-9375-z

  4. Sayed AH, Tarighat A, Khajehnouri N: Network-based wireless location: challenges faced in developing techniques for accurate wireless location information. IEEE Signal Process. Mag. 2005, 22(4):24-40.

  5. Patwari N, Ash J, Kyperountas S, Hero AO, Correal NC: Locating the nodes: cooperative localization in wireless sensor networks. IEEE Signal Process. Mag. 2005, 22(4):54-69.

  6. Gholami MR: Positioning algorithms for wireless sensor networks. Licentiate thesis, Chalmers University of Technology; 2011. http://publications.lib.chalmers.se/records/fulltext/138669.pdf. Accessed 15 March 2011.

  7. Kay SM: Fundamentals of Statistical Signal Processing: Estimation Theory. Englewood Cliffs: Prentice-Hall; 1993.

  8. Slijepcevic SS, Megerian S, Potkonjak M: Location errors in wireless embedded sensor networks: sources, models, and effects on applications. SIGMOBILE Mobile Comput. Commun. Rev. 2002, 6:67-78. doi:10.1145/581291.581301

  9. Gholami MR, Ström EG, Wymeersch H, Rydström M: On geometric upper bounds for positioning algorithms in wireless sensor networks. arXiv preprint arXiv:1201.2513; 2012.

  10. Wasserman L: All of Statistics: A Concise Course in Statistical Inference. Heidelberg: Springer; 2004.

  11. Schön S, Kutterer H: Realistic uncertainty measures for GPS observations. In Proceedings of the IUGG2003 General Meeting. Heidelberg: Springer; 2004.

  12. Costa JA, Patwari N, Hero AO: Distributed weighted-multidimensional scaling for node localization in sensor networks. ACM Trans. Sensor Netw. 2006, 2(1):39-64. doi:10.1145/1138127.1138129

  13. Gholami MR, Wymeersch H, Ström EG, Rydström M: Wireless network positioning as a convex feasibility problem. EURASIP J. Wireless Commun. Netw. 2011, 2011:161. doi:10.1186/1687-1499-2011-161

  14. Hero AO, Blatt D: Sensor network source localization via projection onto convex sets (POCS). In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 3. Philadelphia, USA; 19-23 March 2005:689-692.

  15. Blatt D, Hero AO: Energy-based sensor network source localization via projection onto convex sets. IEEE Trans. Signal Process. 2006, 54(9):3614-3619.

  16. Gholami MR, Gezici S, Ström EG, Rydström M: A distributed positioning algorithm for cooperative active and passive sensors. In Proc. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). Istanbul, Turkey; 26-29 September 2010.

  17. Gholami MR, Wymeersch H, Ström EG, Rydström M: Robust distributed positioning algorithms for cooperative networks. In Proc. IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). San Francisco, USA; 26-29 June 2011:156-160.

  18. Beck A, Stoica P, Li J: Exact and approximate solutions of source localization problems. IEEE Trans. Signal Process. 2008, 56(5):1770-1778.

  19. Vandenberghe L, Boyd S: Semidefinite programming. SIAM Rev. 1996, 38(1):49-95. doi:10.1137/1038003

  20. Boyd S, Vandenberghe L: Convex Optimization. Cambridge: Cambridge University Press; 2004.

  21. Patwari N: Location estimation in sensor networks. Ph.D. dissertation, University of Michigan, Ann Arbor; 2005.

  22. Chen P-C: A non-line-of-sight error mitigation algorithm in location estimation. In Proc. IEEE Wireless Communications and Networking Conference, vol. 1. Piscataway: IEEE; 1999:316-320.

  23. Liang C, Piche R: Mobile tracking and parameter learning in unknown non-line-of-sight conditions. In Proc. 13th International Conference on Information Fusion. Edinburgh, UK; 26-29 July 2010.

  24. Oguz-Ekim P, Gomes J, Xavier J, Oliveira P: Robust localization of nodes and time-recursive tracking in sensor networks using noisy range measurements. IEEE Trans. Signal Process. 2011, 59(8):3930-3942.

  25. Ben-Tal A, Nemirovski A: Lectures on Modern Convex Optimization. 2012. http://www2.isye.gatech.edu/~nemirovs/Lect_ModConvOpt.pdf. Accessed 15 March 2011.

  26. Nemirovski A, Roos C, Terlaky T: On maximization of quadratic form over intersection of ellipsoids with common center. Math. Program. 1999, 86:463-473. doi:10.1007/s101070050100

  27. Gholami MR, Ström EG, Rydström M: Indoor sensor node positioning using UWB range measurements. In Proc. 17th European Signal Processing Conference (EUSIPCO). 2009:1943-1947.

  28. Wymeersch H, Maranò S, Gifford WM, Win MZ: A machine learning approach to ranging error mitigation for UWB localization. IEEE Trans. Commun. 2012, 60(6):1719-1728.

  29. Grant M, Boyd S: CVX: Matlab software for disciplined convex programming, version 1.21. February 2011. http://cvxr.com/cvx. Accessed 15 March 2011.

  30. Censor Y, Zenios SA: Parallel Optimization: Theory, Algorithms, and Applications. New York: Oxford University Press; 1997.

Acknowledgements

The simulations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at C3SE. This work was supported in part by the European Commission in the framework of the FP7 Network of Excellence in Wireless COMmunications NEWCOM# (contract no. 318306) and in part by the Swedish Research Council (contract no. 2007-6363).

Author information

Corresponding author

Correspondence to Mohammad Reza Gholami.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Gholami, M.R., Ström, E.G., Wymeersch, H. et al. Upper bounds on position error of a single location estimate in wireless sensor networks. EURASIP J. Adv. Signal Process. 2014, 4 (2014). https://doi.org/10.1186/1687-6180-2014-4

Keywords

  • Wireless sensor networks
  • Positioning problem
  • Projection onto convex set
  • Convex feasibility problem
  • Semidefinite relaxation
  • Position error
  • Worst-case position error