A practical approach for outdoors distributed target localization in wireless sensor networks

Abstract

Wireless sensor networks are posed as a new communication paradigm in which small, low-complexity, and low-power devices are preferred over costly centralized systems. The spectrum of potential applications of sensor networks is very wide, including monitoring, surveillance, and localization, among others. Localization is a key application in sensor networks, and the use of simple, efficient, and distributed algorithms is of paramount practical importance. Combining convex optimization tools with consensus algorithms, we propose a distributed localization algorithm for scenarios where received signal strength indicator readings are used. We approach the localization problem by formulating an alternative problem that uses distance estimates locally computed at each node. The formulated problem is solved through a relaxed version obtained via a semidefinite relaxation technique. Conditions under which the relaxed problem yields the same solution as the original problem are given, and a distributed consensus-based implementation of the algorithm is proposed based on an augmented Lagrangian approach and primal-dual decomposition methods. Although suboptimal, the proposed approach is well suited for implementation in real sensor networks: it is scalable, robust against node failures, and requires only local communication among neighboring nodes. Simulation results show that running an additional local search around the found solution can yield performance close to the maximum likelihood estimate.

1 Introduction

The deployment of a large number of scattered sensors over a certain area constitutes a very powerful tool for sensing and retrieving information from the environment (e.g., temperature, humidity, motion). The main feature of wireless sensor networks (WSNs) is a large number of low-cost nodes with limited computational and power resources. WSNs must also be scalable and robust against changes in topology (i.e., node failure or addition of new nodes), as well as energy efficient. These are the major design issues in WSNs, and they make the development of simple and efficient algorithms a challenging problem. These limitations also make centralized approaches not very suitable for WSNs. Localization is a key task (often mandatory) in many applications [1] and, therefore, distributed localization algorithms are of high practical importance.

There exist different measurement sources that can be fused in order to obtain an estimate of the target's position [1, 2], such as time of arrival (TOA), time difference of arrival (TDOA), angle of arrival (AOA), or received signal strength indicator (RSSI). In this article, we focus on single-antenna nodes without tight synchronization abilities, which leads us to the use of RSSI measurements for the localization task. One of the main challenges when using RSSI measurements is that the mapping between the measurement and the target's position is nonlinear and, hence, finding a suitable solution becomes more challenging. Some approaches to deal with non-linearities are based on particle filtering principles [3]. In the context of WSNs, particle filtering approaches have also been proposed for localization and tracking using RSSI measurements [4-8]. In general, particle filtering approaches have shown very good performance when dealing with RSSI measurements, but they are centralized and suffer from a high computational cost; hence, their applicability in a real scenario is questionable.

A recent approach based on convex optimization concepts has been proposed in [9, 10] for the node localization problem. In [9], a semidefinite relaxation approach is used to cast the localization problem into a semidefinite program (SDP) that can be solved efficiently via interior point methods, see [11] and references therein. The position estimate obtained through the SDP is then further refined via an iterative algorithm. Although the proposed methods provide near-optimal results (i.e., close to the Cramer-Rao bound), they are centralized, so their application to WSNs may be limited. The problem of source localization using energy measurements has also been treated in [12], where a distributed algorithm based on projections onto convex sets is presented. The algorithm is shown to asymptotically approach the maximum likelihood (ML) estimate as the number of nodes increases, provided the target lies in the convex hull defined by the nodes' coordinates. In [13], an alternative approach is presented that can handle variations in the path-loss exponent. However, in both approaches no restrictions are imposed on the communication among nodes. In real applications this will cause rapid battery depletion if far-away nodes are to communicate. Further, in both approaches the estimation is performed only by a subset of nodes that are selected according to their received signal-to-noise ratio. The main drawback is that such a subset must be known to every node in the network. In a real scenario, the signaling and routing overhead necessary for node coordination may limit their application.

In this contribution, we propose a distributed algorithm for localization in WSNs by fusing RSSI measurements. We approach the ML estimation problem by solving a simplified and more tractable problem that allows the use of convex optimization tools for its distributed solution. More precisely, we use an augmented Lagrangian approach with a primal-dual decomposition [14, 15]. The developed approach offers an advantage over centralized approaches as it is scalable, robust against changes in the network's topology, and requires only local communication among neighboring nodes. These are key properties that are very desirable in the context of WSNs.

The article is organized as follows: Section 2 introduces the localization problem and the underlying propagation model. In Section 3, we present the localization approach based on RSSI readings, and its distributed implementation is presented in Section 4. Section 5 describes how the obtained estimate can be refined toward the ML solution by means of a local search. Simulations are provided in Section 6, while some comments and concluding remarks are given in Section 7.

Notation

Bold lower- and upper-case letters denote vectors and matrices, respectively. For vector quantities the operator $\|\cdot\|$ denotes the Euclidean norm, while for matrices it refers to the Frobenius norm. The symbol $\mathbf{0}$ denotes a matrix of appropriate dimensions whose entries are all zeros. The symbol $\mathbf{I}$ is used to denote the identity matrix of appropriate dimensions. The optimal value of a variable $\mathbf{x}$ in an optimization problem is denoted by $\mathbf{x}^*$. The symbol $\mathbb{R}^n$ is used to denote the set of real $n$-dimensional vectors, while $\mathbb{S}_+^n$ is used to denote the set of symmetric $n \times n$ positive semi-definite matrices.

2 Problem formulation and definitions

Consider a wireless sensor network, such as the one depicted in Figure 1, consisting of M nodes randomly deployed over a certain area (in the same x-y plane). Nodes are static and able to communicate with adjacent nodes that lie within a given communication range. Nodes are aware of their own location but not of the location of any other element in the network. Assume the presence of a target node that emits beacon frames that can be heard by all nodes in the network. The goal is to determine the location of the target node in the x-y plane.

Figure 1. Wireless sensor network. Schematic representation of the network used for the simulations. Blue circles represent the nodes and a link between them indicates that those nodes can communicate. The target is represented as a red square.

To obtain estimates of the target position, nodes employ RSSI measurements. The use of RSSI readings is of practical convenience when working with real hardware as they do not impose tight synchronization requirements. We assume that the RSSI follows a linear relationship with the received power (we assume they are equal). Let $r_m$ denote the received power at node $m$. A common assumption, see [2] and references therein, is that the received power follows a log-normal distribution with a distance-dependent mean as

$$r_m = p_m - 10\,\alpha_m \log_{10}\frac{d_m}{d_0} + n_m,$$
(1)

where $p_m$ is the received power (in dB) at reference distance $d_0$, $\alpha_m$ is the path-loss exponent, $d_m$ is the true distance between the target and the $m$th node, and $n_m \sim \mathcal{N}(0,\sigma_m^2)$ is a Gaussian random variable of zero mean and variance $\sigma_m^2$. The received power $r_m$ will be used to obtain an estimate of the true target position.
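
As a point of reference (not part of the paper), the following minimal NumPy sketch draws synthetic RSSI readings from the log-normal model (1); the parameter values (reference power, reference distance, path-loss exponent, noise level) are example choices, loosely matching those used later in the simulations.

```python
# Illustrative sketch: sample RSSI readings r_m from the log-normal model (1).
import numpy as np

rng = np.random.default_rng(0)

M = 20                                          # number of nodes (example value)
area = 100.0                                    # side of the square deployment area (m)
nodes = rng.uniform(0.0, area, size=(M, 2))     # node coordinates c_m
target = rng.uniform(0.0, area, size=2)         # true target position x

p = -40.0       # received power (dB) at the reference distance d0 (example value)
d0 = 1.0        # reference distance (m)
alpha = 2.0     # path-loss exponent
sigma = 3.0     # shadowing standard deviation (dB), i.e., sigma_m^2 = 9

d_true = np.linalg.norm(nodes - target, axis=1)          # true distances d_m
r = p - 10.0 * alpha * np.log10(d_true / d0) \
    + sigma * rng.standard_normal(M)                      # RSSI readings r_m per (1)
```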

3 Localization strategies

In this section, we present different localization strategies using the received power (in dB) at the nodes. We first consider the (centralized) ML estimate and then propose a suboptimal strategy based on local distance estimates at each node. We show that the proposed localization strategy can be implemented in a fully distributed way using only local communication among neighboring nodes.

3.1 ML estimation

Consider the presence of a centralized unit that gathers all measurements coming from the nodes. Let $\mathbf{r} = [r_1, \ldots, r_M]^T$ be a vector whose components are the different measurements taken by each sensor, and denote by $\mathbf{x} = [x_t\ y_t]^T \in \mathbb{R}^2$ the target's position. The true distance between the target and the $m$th sensor can then be expressed as

$$d_m = \|\mathbf{x} - \mathbf{c}_m\|,$$
(2)

where $\mathbf{c}_m = [x_m\ y_m]^T \in \mathbb{R}^2$ are the coordinates of node $m$, with $m = 1, \ldots, M$. The vector of measurements $\mathbf{r}$ can now be written as

$$\mathbf{r} = \begin{bmatrix} p_1 - 10\,\alpha_1 \log_{10}\dfrac{\|\mathbf{x}-\mathbf{c}_1\|}{d_0} \\ \vdots \\ p_M - 10\,\alpha_M \log_{10}\dfrac{\|\mathbf{x}-\mathbf{c}_M\|}{d_0} \end{bmatrix} + \begin{bmatrix} n_1 \\ \vdots \\ n_M \end{bmatrix} = \boldsymbol{\mu}(\mathbf{x}) + \mathbf{n}$$
(3)

where the vector $\mathbf{n} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})$ is jointly Gaussian with zero mean and covariance $\boldsymbol{\Sigma}$. It is easy to see that $\mathbf{r}$ will follow a Gaussian distribution with mean $\boldsymbol{\mu}(\mathbf{x})$ and covariance $\boldsymbol{\Sigma}$, that is

$$p(\mathbf{r};\mathbf{x}) = \frac{1}{(2\pi)^{M/2}\det(\boldsymbol{\Sigma})^{1/2}} \exp\left(-\tfrac{1}{2}\left(\mathbf{r}-\boldsymbol{\mu}(\mathbf{x})\right)^T \boldsymbol{\Sigma}^{-1}\left(\mathbf{r}-\boldsymbol{\mu}(\mathbf{x})\right)\right)$$
(4)

where $p(\mathbf{r};\mathbf{x})$ is the probability density function of $\mathbf{r}$ with parameter $\mathbf{x}$. The ML estimate of the target position is then

$$\hat{\mathbf{x}}_{\mathrm{ML}} = \arg\max_{\mathbf{x}}\ p(\mathbf{r};\mathbf{x}),$$
(5)

which is equivalent to maximizing the logarithm of $p(\mathbf{r};\mathbf{x})$. Neglecting all terms that do not depend on $\mathbf{x}$, it is easy to see that

$$\hat{\mathbf{x}}_{\mathrm{ML}} = \arg\min_{\mathbf{x}}\ \boldsymbol{\mu}(\mathbf{x})^T\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}(\mathbf{x}) - 2\mathbf{r}^T\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}(\mathbf{x}).$$
(6)

The objective to be minimized in (6) is not convex and, therefore, finding the global optimum is not an easy task. Figure 2 (left) illustrates what the objective in (6) looks like for a network of 20 nodes over a normalized square area. It is clear that the function is not convex and that several local minima and saddle points may exist. Instead of dealing directly with the ML estimate, we propose to use a suboptimal approach that offers reasonably good performance and allows for distributed computation.

Figure 2. Contour plot of the objective function. The left panel shows the contour plot of the log of the original ML cost function in (6), while the right panel shows the modified (suboptimal) cost function in (11).

3.2 Practical approach

In this section, we propose to estimate the target's position based on local distance estimates computed at each node. The use of local distance estimates allows the derivation of simple and distributed estimators of the target's position. For the propagation model (1), the ML estimate of the distance between the $m$th node and the target is given by

$$\hat{d}_m = d_0\, 10^{\frac{p_m - r_m}{10\alpha_m}}.$$
(7)
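
A one-line implementation of the distance estimate (7) could then look as follows; this is only an illustrative sketch, and the default parameter values are assumptions (matching the earlier snippet).

```python
# Illustrative sketch: per-node ML distance estimate (7) from an RSSI reading in dB.
import numpy as np

def ml_distance(r_m, p_m=-40.0, alpha_m=2.0, d0=1.0):
    """ML estimate of the node-target distance from one RSSI reading (dB)."""
    return d0 * 10.0 ** ((p_m - r_m) / (10.0 * alpha_m))

# Vectorized usage over all nodes (r is the array of readings from the earlier sketch):
# d_hat = ml_distance(r)
```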

Squaring both sides of (2) and expanding, it is easy to see that the following equations must be satisfied:

$$\begin{aligned} d_1^2 &= \mathbf{x}^T\mathbf{x} - 2\mathbf{c}_1^T\mathbf{x} + \|\mathbf{c}_1\|^2 \\ &\ \,\vdots \\ d_M^2 &= \mathbf{x}^T\mathbf{x} - 2\mathbf{c}_M^T\mathbf{x} + \|\mathbf{c}_M\|^2 \end{aligned}$$
(8)

Rearranging terms we can express (8) in a more compact form as

$$\begin{bmatrix} d_1^2 - \|\mathbf{c}_1\|^2 \\ \vdots \\ d_M^2 - \|\mathbf{c}_M\|^2 \end{bmatrix} = (\mathbf{x}^T\mathbf{x})\,\mathbf{1} - 2\underbrace{\begin{bmatrix} \mathbf{c}_1^T \\ \vdots \\ \mathbf{c}_M^T \end{bmatrix}}_{\mathbf{C}}\mathbf{x},$$
(9)

where $\mathbf{1}$ is an $M \times 1$ vector of all ones. However, we do not have the actual distances to the target but a noisy version of them as per (7). Define the vector $\mathbf{b} = \left[\|\mathbf{c}_1\|^2 - \hat{d}_1^2, \ldots, \|\mathbf{c}_M\|^2 - \hat{d}_M^2\right]^T$ and the vector-valued cost function $\mathbf{f}(\mathbf{x}): \mathbb{R}^2 \to \mathbb{R}^M$ as

$$\mathbf{f}(\mathbf{x}) = (\mathbf{x}^T\mathbf{x})\,\mathbf{1} - 2\mathbf{C}\mathbf{x} + \mathbf{b}.$$
(10)

We can then obtain an estimate of the target's position by minimizing the norm of (10). In order to incorporate robustness and make the localization task more applicable to realistic scenarios, we propose to use a weighted version of the cost function (10). In a WSN it may happen that some of the nodes exhibit a bias in their measurements due to the presence of obstacles. Additionally, nodes do not have precise information about their own locations; instead, some errors may be present. The incorporation of weights will mitigate the effects of biased nodes and uncertainties in the nodes' positions. We therefore compute an estimate $\hat{\mathbf{x}}$ of the true target's position $\mathbf{x}$ as the solution to the following non-linear (weighted) least-squares problem

$$\hat{\mathbf{x}} = \operatorname*{minimize}_{\mathbf{x}}\ \|\mathbf{D}\mathbf{f}(\mathbf{x})\|,$$
(11)

where $\mathbf{D} = \operatorname{diag}(\gamma_1, \ldots, \gamma_M)$ is a diagonal weighting matrix with $\gamma_m \geq 0$ for all $m = 1, \ldots, M$. A proper choice for the weights would be inversely proportional to the variance of the measurements. As we are assuming the log-normal model for the measurements, it is well known that the variance of the ML estimate (7) is proportional to the square of the true distance to be estimated [2, 16]. With this consideration in mind, we may choose to weight our measurements inversely proportionally to the measured distance, that is, $\gamma_m = 1/\hat{d}_m$.
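
The following sketch (an illustration, not the authors' code) builds $\mathbf{b}$, $\mathbf{C}$, and the weights $\gamma_m = 1/\hat{d}_m$, and evaluates the weighted cost $\|\mathbf{D}\mathbf{f}(\mathbf{x})\|$ of (11); the arrays `nodes` and `d_hat` are assumed to come from the earlier snippets.

```python
# Illustrative sketch: evaluate the weighted cost ||D f(x)|| in (10)-(11).
import numpy as np

def weighted_cost(x, nodes, d_hat):
    """Evaluate ||D f(x)|| with weights gamma_m = 1 / d_hat_m."""
    C = nodes                                   # stacked node coordinates c_m^T
    b = np.sum(C**2, axis=1) - d_hat**2         # b_m = ||c_m||^2 - d_hat_m^2
    f = np.dot(x, x) - 2.0 * C @ x + b          # f(x) = (x^T x)1 - 2Cx + b
    gamma = 1.0 / d_hat                         # weights inversely prop. to distance
    return np.linalg.norm(gamma * f)            # ||D f(x)||, with D = diag(gamma)
```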

This problem has been studied in [17, 18], where a distributed version of the Gauss-Newton method is used for its solution. In this study, we present a more flexible approach that uses concepts from convex optimization theory. The proposed approach has better convergence properties and also makes it straightforward to include additional constraints that may prevent the problem from instabilities. In order to proceed, let us write problem (11) as the following equivalent problem

$$\hat{\mathbf{x}} = \operatorname*{minimize}_{\mathbf{x}}\ \sum_{m=1}^M \gamma_m\left(\mathbf{x}^T\mathbf{x} - 2\mathbf{c}_m^T\mathbf{x} + b_m\right)^2$$
(12)

Note that, although (11) and (12) are equivalent problems (i.e., with the same solution), they are different because in the latter case we are minimizing the squared norm of $\mathbf{D}\mathbf{f}(\mathbf{x})$. The minimization of the squared norm is motivated by the fact that it allows a simple distributed implementation, as can be inferred from the structure of (12). The use of the objective in (12) is further motivated by the fact that we obtain a smoother surface, at the cost of introducing some bias with respect to the ML solution (see Figure 2 (right)). If the bias is small, we may still reach the ML estimate by performing a local search around the solution of (12). However, in order to use convex optimization methods we need problem (12) to be convex. Unfortunately, the objective function is not convex because we are adding the squares of quadratic functions that are convex but not necessarily positive [11]. It would be interesting to exploit some hidden convexity of the problem so that convex optimization methods can be applied.

A possible approach to make the problem convex is to use a semidefinite relaxation technique. Let $\mathbf{X} = \mathbf{x}\mathbf{x}^T$ and note that $\operatorname{Tr}(\mathbf{X}) = \|\mathbf{x}\|^2$, where $\operatorname{Tr}(\cdot)$ is the trace operator. We can rewrite the problem as

$$\begin{array}{ll} \operatorname*{minimize}_{\mathbf{X},\,\mathbf{x}} & \displaystyle\sum_{m=1}^M \gamma_m\left(\operatorname{Tr}(\mathbf{X}) - 2\mathbf{c}_m^T\mathbf{x} + b_m\right)^2 \\ \text{subject to} & \mathbf{X} = \mathbf{x}\mathbf{x}^T \end{array}$$
(13)

We now have that the objective is convex, as it is the composition of an affine function of $\mathbf{X}$ and $\mathbf{x}$ with a convex function [11]. However, the above problem is still non-convex due to the non-linear constraint $\mathbf{X} = \mathbf{x}\mathbf{x}^T$. We can then relax the equality constraint by replacing it with a semidefinite constraint. As a result we end up with the following (convex) SDP

$$\begin{array}{ll} \operatorname*{minimize}_{\mathbf{X},\,\mathbf{x}} & \displaystyle\sum_{m=1}^M \gamma_m\left(\operatorname{Tr}(\mathbf{X}) - 2\mathbf{c}_m^T\mathbf{x} + b_m\right)^2 \\ \text{subject to} & \mathbf{X} - \mathbf{x}\mathbf{x}^T \succeq \mathbf{0} \\ & \mathbf{X} \in \mathbb{S}_+^2 \end{array}$$
(14)

where $\mathbb{S}_+^2$ is the set of $2 \times 2$ symmetric positive semi-definite matrices. As we are allowing a larger feasible set, the optimal value of problem (14) provides a lower bound on the optimal value of the original problem (12). However, if the optimal solution $\mathbf{X}^*$ of (14) is of rank one, the semidefinite relaxation is not a relaxation at all and the found solution $\mathbf{x}^*$ of (14) is also optimal for (12).

It would be interesting to give conditions under which (14) provides a rank-one solution, so that the obtained solution is also optimal for the original problem. To that end, define the matrix $\mathbf{A}$ as

$$\mathbf{A} = \begin{bmatrix} 2\sum_m \gamma_m x_m & 2\sum_m \gamma_m y_m & -\sum_m \gamma_m \\ 2\sum_m \gamma_m x_m^2 & 2\sum_m \gamma_m y_m x_m & -\sum_m \gamma_m x_m \\ 2\sum_m \gamma_m y_m x_m & 2\sum_m \gamma_m y_m^2 & -\sum_m \gamma_m y_m \end{bmatrix}$$
(15)

and let the vector $\boldsymbol{\delta}$ be given by

$$\boldsymbol{\delta} = \begin{bmatrix} \sum_m \gamma_m b_m \\ \sum_m \gamma_m b_m x_m \\ \sum_m \gamma_m b_m y_m \end{bmatrix}$$
(16)

We then define the following feasibility problem

$$\begin{array}{ll} \text{find} & \{\mathbf{z}, t\} \\ \text{subject to} & \|\mathbf{z}\|^2 \leq t \\ & \mathbf{A}\begin{bmatrix}\mathbf{z} \\ t\end{bmatrix} = \boldsymbol{\delta} \end{array}$$
(17)

with variables $\mathbf{z} \in \mathbb{R}^2$ and $t \in \mathbb{R}_+$. The above problem is convex since it belongs to the class of second-order cone programs (SOCPs) [11]. Based on the feasibility problem (17) we can state the following result:

Proposition 1. Assume problem (14) has at least one strictly feasible point. If problem (17) is not feasible, then the optimal solution $\mathbf{x}^*$ of the semidefinite relaxed problem (14) is also optimal for the original problem (12).

Proof. See the Appendix. □

Corollary 2. If matrix $\mathbf{A}$ is singular, then the solution $\mathbf{X}^*$ of (14) is of rank one with $\mathbf{X}^* = \mathbf{x}^*\mathbf{x}^{*T}$, and $\mathbf{x}^*$ is also optimal for (12).

Proof. It follows directly from Proposition 1. If $\mathbf{A}$ is singular, then problem (17) is infeasible (because matrix $\mathbf{A}$ is not invertible), so the relaxed problem is not a relaxation at all. □

It is worth mentioning that the feasibility of problem (17) can be easily checked without the need for an optimization solver. If matrix $\mathbf{A}$ is singular, the problem is infeasible and we are done. If, however, matrix $\mathbf{A}$ is full rank, we compute $\mathbf{A}^{-1}\boldsymbol{\delta}$ (which is unique) and check whether it satisfies the second-order cone (SOC) constraint $\|\mathbf{z}\|^2 \leq t$. If the constraint is not met, we conclude that the problem is infeasible.
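
Such a check could be coded, for instance, as follows; this is an illustrative sketch in which `gamma`, `nodes`, and `b` are assumed to be available as in the earlier snippets, and the tolerance is an arbitrary choice.

```python
# Illustrative sketch: solver-free feasibility check of (17) per Proposition 1.
import numpy as np

def relaxation_is_tight(gamma, nodes, b, tol=1e-9):
    """Return True if (17) is infeasible, i.e., the relaxation (14) is tight."""
    xs, ys = nodes[:, 0], nodes[:, 1]
    A = np.array([
        [2*np.sum(gamma*xs),    2*np.sum(gamma*ys),    -np.sum(gamma)],
        [2*np.sum(gamma*xs**2), 2*np.sum(gamma*ys*xs), -np.sum(gamma*xs)],
        [2*np.sum(gamma*xs*ys), 2*np.sum(gamma*ys**2), -np.sum(gamma*ys)],
    ])
    delta = np.array([np.sum(gamma*b), np.sum(gamma*b*xs), np.sum(gamma*b*ys)])

    if np.linalg.matrix_rank(A) < 3:        # A singular -> (17) infeasible (Corollary 2)
        return True
    z1, z2, t = np.linalg.solve(A, delta)   # unique candidate point A^{-1} delta
    return z1**2 + z2**2 > t + tol          # is the constraint ||z||^2 <= t violated?
```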

4 Distributed algorithm

One of the main advantages of the considered approach is that it allows for a distributed implementation. We assume that nodes communicate with their one-hop neighbors as dictated by the communication graph $G(V, E)$, where $V$ is the set of vertices and $E$ is the set of edges of the graph. Through local communication exchanges only, nodes can agree on some desired (global) quantity using consensus algorithms [19].

Let us proceed by reformulating problem (14) as an SDP with a single matrix variable $\mathbf{Z} \in \mathbb{S}_+^3$. For that purpose, consider the following problem

$$\begin{array}{ll} \operatorname*{minimize}_{\mathbf{Z}} & \displaystyle\sum_{m=1}^M \gamma_m\left(\operatorname{Tr}(\mathbf{M}_m\mathbf{Z}) + b_m - 1\right)^2 \\ \text{subject to} & \mathbf{Z}(3,3) = 1 \\ & \mathbf{Z} \in \mathbb{S}_+^3 \end{array}$$
(18)

where $\mathbf{M}_m = \mathbf{I} - 2\begin{bmatrix}\mathbf{0} & \mathbf{0} \\ \mathbf{c}_m^T & 0\end{bmatrix}$ and $\mathbf{I}$ is the identity matrix. With the above problem definition we have the following equivalence:

Lemma 1. The two problems (14) and (18) are equivalent. Further, if we denote by $\mathbf{Z}^*$ the optimal solution to (18), then the optimal solution $\mathbf{x}^*$ of (14) is given by $\mathbf{x}^* = [\mathbf{Z}^*(3,1), \mathbf{Z}^*(3,2)]^T$.

Proof. See the Appendix. □
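
For reference, problem (18) can be solved centrally with an off-the-shelf modeling package; the sketch below uses CVXPY (an assumption for illustration, not a tool referenced in the paper) and assumes `gamma`, `nodes`, and `b` as in the earlier snippets.

```python
# Illustrative sketch: centralized solution of the SDP (18) with CVXPY.
import cvxpy as cp
import numpy as np

def solve_sdp(gamma, nodes, b):
    M_nodes = nodes.shape[0]
    Z = cp.Variable((3, 3), PSD=True)                  # Z in S_+^3
    cost = 0
    for m in range(M_nodes):
        Mm = np.eye(3)
        Mm[2, :2] -= 2.0 * nodes[m]                    # M_m = I - 2 [[0, 0], [c_m^T, 0]]
        cost += gamma[m] * cp.square(cp.trace(Mm @ Z) + b[m] - 1)
    prob = cp.Problem(cp.Minimize(cost), [Z[2, 2] == 1])
    prob.solve()
    return np.array([Z.value[2, 0], Z.value[2, 1]])    # x* = [Z*(3,1), Z*(3,2)]^T (Lemma 1)
```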

Now that we have established the equivalence between problems (14) and (18) through Lemma 1, we show how to solve (18) in a distributed way. For that purpose we use the optimization framework for consensus-networked systems proposed in [15], which relies on an augmented Lagrangian approach [14]. The framework in [15] generalizes the previous work of [20] as it can also handle convex, but not necessarily strictly convex, objective functions. The augmented Lagrangian method adds a quadratic penalty term to the objective function that is zero at the optimal solution. The resulting problem is then equivalent to the original problem, as both of them have the same solution. Augmented Lagrangian methods are also attractive because they offer better convergence properties than standard primal-dual decomposition methods. A detailed treatment of augmented Lagrangian methods and their properties can be found in [14].

In order to derive a distributed solution, consider first the introduction of $M$ new variables and a global consensus constraint into the problem as

$$\begin{array}{ll} \operatorname*{minimize}_{\mathbf{Z},\,\{\mathbf{Z}_m\}} & \displaystyle\sum_{m=1}^M \gamma_m\left(\operatorname{Tr}(\mathbf{M}_m\mathbf{Z}_m) + b_m - 1\right)^2 \\ \text{subject to} & \mathbf{Z}(3,3) = 1 \\ & \mathbf{Z}_m = \mathbf{Z}, \quad m = 1, \ldots, M \\ & \mathbf{Z} \in \mathbb{S}_+^3 \end{array}$$
(19)

The problem is now separable in the objective function (as it is the sum of $M$ terms, each one depending on one node), but we still have the coupling "consensus" constraint $\mathbf{Z}_m = \mathbf{Z}$. However, we do not need to impose that all nodes agree on a common global variable; instead, we can force nodes to agree only with their one-hop neighbors. Letting $\mathcal{N}_m$ be the set of neighbors of node $m$, we can then reformulate the problem as

$$\begin{array}{ll} \operatorname*{minimize}_{\{\mathbf{Z}_m\}} & \displaystyle\sum_{m=1}^M \gamma_m\left(\operatorname{Tr}(\mathbf{M}_m\mathbf{Z}_m) + b_m - 1\right)^2 \\ \text{subject to} & \mathbf{Z}_m(3,3) = 1 \\ & \mathbf{Z}_m = \mathbf{Z}_j, \quad j \in \mathcal{N}_m \\ & \mathbf{Z}_m \in \mathbb{S}_+^3 \end{array}$$
(20)

The two problems (19) and (20) are equivalent provided that the underlying graph is strongly connected [20]. We can now use the framework developed in [15] to derive an augmented Lagrangian method for the distributed solution of (20). Consider, then, the introduction of the additional variables $\mathbf{W}_{m,j} \in \mathbb{S}_+^3$ and formulate the equivalent problem

$$\begin{array}{ll} \operatorname*{minimize}_{\{\mathbf{Z}_m\},\,\{\mathbf{W}_{m,j}\}} & \displaystyle\sum_{m=1}^M \gamma_m\left(\operatorname{Tr}(\mathbf{M}_m\mathbf{Z}_m) + b_m - 1\right)^2 \\ \text{subject to} & \mathbf{Z}_m(3,3) = 1 \\ & \mathbf{Z}_m = \mathbf{W}_{m,j}, \quad j \in \mathcal{N}_m \\ & \mathbf{Z}_j = \mathbf{W}_{m,j}, \quad j \in \mathcal{N}_m \\ & \mathbf{Z}_m \in \mathbb{S}_+^3 \end{array}$$
(21)

The penalized problem [14] can be written as

$$\begin{array}{ll} \operatorname*{minimize}_{\{\mathbf{Z}_m\},\,\{\mathbf{W}_{m,j}\}} & \displaystyle\sum_{m=1}^M \gamma_m\left(\operatorname{Tr}(\mathbf{M}_m\mathbf{Z}_m) + b_m - 1\right)^2 + \frac{c}{2}\sum_{m=1}^M\sum_{j\in\mathcal{N}_m}\left(\|\mathbf{Z}_m - \mathbf{W}_{m,j}\|^2 + \|\mathbf{Z}_j - \mathbf{W}_{m,j}\|^2\right) \\ \text{subject to} & \mathbf{Z}_m(3,3) = 1 \\ & \mathbf{Z}_m = \mathbf{W}_{m,j}, \quad j \in \mathcal{N}_m \\ & \mathbf{Z}_j = \mathbf{W}_{m,j}, \quad j \in \mathcal{N}_m \\ & \mathbf{Z}_m \in \mathbb{S}_+^3 \end{array}$$
(22)

where $c > 0$ is a constant that controls the penalization of the disagreement among neighbors. In general, $c$ could be a sequence, as long as it is non-decreasing. The choice of $c$ has a direct impact on the rate of convergence of the distributed algorithm [14]. There is no general rule to choose $c$, and its value will vary depending on the problem at hand. It is clear from the formulation of problem (22) that the penalty term is zero at the optimum and, therefore, the optimal solution to (22) is also optimal for (20). We can now find a solution of (22) by solving its dual problem. By relaxing the consensus constraints we form the partial augmented Lagrangian $L_c$ as

$$L_c\left(\{\boldsymbol{\Gamma}_{m,j}\},\{\boldsymbol{\Phi}_{m,j}\},\{\mathbf{Z}_m\},\{\mathbf{W}_{m,j}\}\right) = \sum_{m=1}^M \gamma_m\left(\operatorname{Tr}(\mathbf{M}_m\mathbf{Z}_m) + b_m - 1\right)^2 + \sum_{m=1}^M\sum_{j\in\mathcal{N}_m}\operatorname{Tr}\left(\boldsymbol{\Gamma}_{m,j}(\mathbf{Z}_m - \mathbf{W}_{m,j})\right) + \sum_{m=1}^M\sum_{j\in\mathcal{N}_m}\operatorname{Tr}\left(\boldsymbol{\Phi}_{m,j}(\mathbf{Z}_j - \mathbf{W}_{m,j})\right) + \frac{c}{2}\sum_{m=1}^M\sum_{j\in\mathcal{N}_m}\left(\|\mathbf{Z}_m - \mathbf{W}_{m,j}\|^2 + \|\mathbf{Z}_j - \mathbf{W}_{m,j}\|^2\right)$$
(23)

where $\boldsymbol{\Gamma}_{m,j}$ and $\boldsymbol{\Phi}_{m,j}$ are the Lagrange multipliers. We then have that the dual problem is given by

$$\begin{array}{ll} \operatorname*{maximize}_{\{\boldsymbol{\Gamma}_{m,j}\},\,\{\boldsymbol{\Phi}_{m,j}\}} & \displaystyle\inf_{\{\mathbf{Z}_m\},\{\mathbf{W}_{m,j}\}} L_c\left(\{\boldsymbol{\Gamma}_{m,j}\},\{\boldsymbol{\Phi}_{m,j}\},\{\mathbf{Z}_m\},\{\mathbf{W}_{m,j}\}\right) \\ \text{subject to} & \mathbf{Z}_m(3,3) = 1 \\ & \mathbf{Z}_m \in \mathbb{S}_+^3 \end{array}$$
(24)

The problem is now separable and strictly convex, which allows its solution by alternating the minimization over $\mathbf{Z}_m$ and $\mathbf{W}_{m,j}$ as

$$\begin{array}{ll} \mathbf{Z}_m^{(k+1)} = \operatorname*{minimize}_{\mathbf{Z}_m} & L_c\left(\{\boldsymbol{\Gamma}_{m,j}^{(k)}\},\{\boldsymbol{\Phi}_{m,j}^{(k)}\},\{\mathbf{Z}_m\},\{\mathbf{W}_{m,j}^{(k)}\}\right) \\ \text{subject to} & \mathbf{Z}_m(3,3) = 1 \\ & \mathbf{Z}_m \in \mathbb{S}_+^3 \end{array}$$
(25)
$$\mathbf{W}_{m,j}^{(k+1)} = \operatorname*{minimize}_{\mathbf{W}_{m,j}}\ L_c\left(\{\boldsymbol{\Gamma}_{m,j}^{(k)}\},\{\boldsymbol{\Phi}_{m,j}^{(k)}\},\{\mathbf{Z}_m^{(k+1)}\},\{\mathbf{W}_{m,j}\}\right)$$
(26)

and then performing an update of the Lagrange multipliers using a subgradient step

$$\boldsymbol{\Gamma}_{m,j}^{(k+1)} = \boldsymbol{\Gamma}_{m,j}^{(k)} + c\left(\mathbf{Z}_m - \mathbf{W}_{m,j}\right)$$
(27)
$$\boldsymbol{\Phi}_{m,j}^{(k+1)} = \boldsymbol{\Phi}_{m,j}^{(k)} + c\left(\mathbf{Z}_j - \mathbf{W}_{m,j}\right)$$
(28)

where the superscript $(k)$ denotes the $k$th iteration. Following the same steps as in [15], it can be shown that

$$\boldsymbol{\Gamma}_{m,j}^{(k+1)} = -\boldsymbol{\Phi}_{m,j}^{(k+1)}$$
(29)
$$\mathbf{W}_{m,j}^{(k+1)} = \tfrac{1}{2}\left(\mathbf{Z}_m^{(k+1)} + \mathbf{Z}_j^{(k+1)}\right)$$
(30)

Let $\boldsymbol{\Delta}_{m,j} = \boldsymbol{\Gamma}_{m,j} = -\boldsymbol{\Phi}_{m,j}$ and define $\boldsymbol{\Psi}_m = \boldsymbol{\Delta}_{m,j} - \boldsymbol{\Delta}_{j,m}$. Based on the above results, it can easily be shown that the solution to problem (24) is obtained in a distributed way by alternating between the following two updates

$$\begin{array}{ll} \mathbf{Z}_m^{(k+1)} = \operatorname*{minimize}_{\mathbf{Z}_m} & \gamma_m\left(\operatorname{Tr}(\mathbf{M}_m\mathbf{Z}_m) + b_m - 1\right)^2 + \operatorname{Tr}\left(\boldsymbol{\Psi}_m\mathbf{Z}_m\right) + c\displaystyle\sum_{j\in\mathcal{N}_m}\left\|\mathbf{Z}_m - \tfrac{1}{2}\left(\mathbf{Z}_m^{(k)} + \mathbf{Z}_j^{(k)}\right)\right\|^2 \\ \text{subject to} & \mathbf{Z}_m(3,3) = 1 \\ & \mathbf{Z}_m \in \mathbb{S}_+^3 \end{array}$$
(31)

and

$$\boldsymbol{\Psi}_m^{(k+1)} = \boldsymbol{\Psi}_m^{(k)} + c\sum_{j\in\mathcal{N}_m}\left(\mathbf{Z}_m^{(k+1)} - \mathbf{Z}_j^{(k+1)}\right)$$
(32)

with $\boldsymbol{\Psi}_m^{(0)} = \mathbf{0}$ and $m = 1, \ldots, M$.

The network then operates as follows. At the beginning of the $k$th iteration, each node locally solves (31). Then, nodes broadcast the computed estimates $\mathbf{Z}_m^{(k+1)}$ to their neighbors. With the local estimates of the corresponding neighbors at hand, each node updates its multipliers as in (32). The process is repeated until all nodes converge to the same solution which, in turn, is the same as in the centralized case.
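
A simulation of these updates on a single machine could look as follows; this is only an illustrative sketch (again using CVXPY, which is an assumption), where `neighbors` is a list of neighbor index sets describing the communication graph and `gamma`, `nodes`, `b` are as before.

```python
# Illustrative sketch: alternating the local updates (31) and (32) over the network.
import cvxpy as cp
import numpy as np

def distributed_localization(gamma, nodes, b, neighbors, c=0.05, iters=100):
    M_nodes = nodes.shape[0]
    Z = [np.eye(3) for _ in range(M_nodes)]            # local estimates Z_m^(0)
    Psi = [np.zeros((3, 3)) for _ in range(M_nodes)]   # multipliers Psi_m^(0) = 0

    Mm = []
    for m in range(M_nodes):
        tmp = np.eye(3)
        tmp[2, :2] -= 2.0 * nodes[m]                   # M_m = I - 2 [[0, 0], [c_m^T, 0]]
        Mm.append(tmp)

    for _ in range(iters):
        Z_new = []
        for m in range(M_nodes):                       # local solve of (31) at node m
            Zm = cp.Variable((3, 3), PSD=True)
            cost = gamma[m] * cp.square(cp.trace(Mm[m] @ Zm) + b[m] - 1)
            cost += cp.trace(Psi[m] @ Zm)
            for j in neighbors[m]:
                cost += c * cp.sum_squares(Zm - 0.5 * (Z[m] + Z[j]))
            cp.Problem(cp.Minimize(cost), [Zm[2, 2] == 1]).solve()
            Z_new.append(Zm.value)
        for m in range(M_nodes):                       # multiplier update (32)
            Psi[m] = Psi[m] + c * sum(Z_new[m] - Z_new[j] for j in neighbors[m])
        Z = Z_new
    return [np.array([Zm[2, 0], Zm[2, 1]]) for Zm in Z]   # local position estimates
```

In this sketch every node solves its small local SDP and then exchanges only its 3 × 3 estimate with its one-hop neighbors, mirroring the broadcast-based operation described above.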

5 Approaching the ML estimate

So far, we have shown how to solve (11) by formulating the relaxed problem (14). We have also provided conditions under which the solutions to (11) and (14) coincide. Further, the solution can be computed in a distributed fashion using convex optimization tools. However, the performance of the approach in (11) is below that of the ML estimate (6). In order to come closer to the ML solution, we can perform an additional local search that improves the estimate obtained from the solution of (14). The idea is to run a distributed optimization routine, taking the solution of (14) as the starting point, to solve (6). If the estimate previously computed by solving (14) is close to the ML estimate, we may converge to it by optimizing in the neighborhood of the solution of (14); otherwise, we will converge to a local optimum while still improving performance.

Observe that the ML estimate (6) can be cast as a non-linear least-squares problem of the form of (11). Assuming that $\boldsymbol{\Sigma}$ is positive definite, we can write the ML estimation problem (6) as the following unconstrained optimization problem

$$\hat{\mathbf{x}}_{\mathrm{ML}} = \operatorname*{minimize}_{\mathbf{x}}\ \|\mathbf{f}^{\mathrm{ML}}(\mathbf{x})\|^2,$$
(33)

where $\mathbf{f}^{\mathrm{ML}}(\mathbf{x}) = \mathbf{S}(\mathbf{r} - \boldsymbol{\mu}(\mathbf{x}))$ and $\mathbf{S}$ is the Cholesky factor of the inverse covariance matrix, i.e., $\mathbf{S}^T\mathbf{S} = \boldsymbol{\Sigma}^{-1}$. A local minimum of the non-linear least-squares problem (33) can be found using an iterative descent algorithm such as the Gauss-Newton method [11, 21]. The standard (centralized) Gauss-Newton procedure is given in Algorithm 1, where $\mathbf{h}_{\mathrm{gn}}$ represents the descent direction (i.e., a direction that reduces the value of the cost function) and $\mathbf{J}^{(k)} = \mathbf{J}(\mathbf{x}^{(k)})$, with $\mathbf{J}(\mathbf{x}) \in \mathbb{R}^{M \times 2}$ denoting the Jacobian matrix of $\mathbf{f}^{\mathrm{ML}}(\mathbf{x}) = [f_1^{\mathrm{ML}}(\mathbf{x}), \ldots, f_M^{\mathrm{ML}}(\mathbf{x})]^T$, whose entries are given by

$$[\mathbf{J}(\mathbf{x})]_{ij} = \frac{\partial f_i^{\mathrm{ML}}}{\partial x_j}(\mathbf{x}), \quad i = 1, \ldots, M,\ j = 1, 2.$$
(34)

Algorithm 1 Gauss-Newton method

1: $\mathbf{x}^{(0)} \leftarrow \mathbf{x}_0$, $k \leftarrow 0$ {Initialization}

2: while not found and $k < k_{\max}$ do

3:   $\mathbf{h}_{\mathrm{gn}} \leftarrow -\left(\mathbf{J}^{(k)T}\mathbf{J}^{(k)}\right)^{-1}\mathbf{J}^{(k)T}\mathbf{f}^{\mathrm{ML}}(\mathbf{x}^{(k)})$ {Descent direction}

4:   if $\|\mathbf{h}_{\mathrm{gn}}\| \leq \epsilon$ then

5:     found = true

6:   end if

7:   $\mathbf{x}^{(k+1)} \leftarrow \mathbf{x}^{(k)} + \mathbf{h}_{\mathrm{gn}}$ {Update}

8:   k ← k + 1

9: end while
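
For illustration, a compact NumPy version of Algorithm 1 for the cost (33) is sketched below under i.i.d. noise with a common standard deviation; the parameter defaults are assumptions matching the earlier snippets. Note that this sketch keeps the $\ln 10$ factor that arises from differentiating the $\log_{10}$ term, uses undamped steps as in Algorithm 1, and assumes a reasonably good starting point (e.g., the estimate obtained from (14)).

```python
# Illustrative sketch: centralized Gauss-Newton iterations for the cost (33).
import numpy as np

def gauss_newton(x0, nodes, r, p=-40.0, alpha=2.0, d0=1.0, sigma=3.0,
                 k_max=50, eps=1e-6):
    """Minimize ||f_ML(x)||^2 with f_ML(x) = (r - mu(x)) / sigma (i.i.d. noise)."""
    x = np.array(x0, dtype=float)
    for _ in range(k_max):
        diff = x - nodes                                   # rows: (x - c_m)^T
        dist = np.linalg.norm(diff, axis=1)
        mu = p - 10.0 * alpha * np.log10(dist / d0)        # mean mu_m(x)
        f = (r - mu) / sigma                               # residuals f_m^ML(x)
        # Jacobian: d f_m / dx = (10 alpha / (sigma ln 10)) (x - c_m)^T / ||x - c_m||^2
        J = (10.0 * alpha / (sigma * np.log(10.0))) * diff / dist[:, None]**2
        h = -np.linalg.solve(J.T @ J, J.T @ f)             # descent direction h_gn
        x = x + h
        if np.linalg.norm(h) <= eps:                       # stop when the step is small
            break
    return x
```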

If the covariance matrix $\boldsymbol{\Sigma}$ has no special structure, then problem (33) requires a central entity that gathers all the information coming from the nodes in order to solve it. However, it is reasonable to assume independence of the noise processes among the nodes, so that $\boldsymbol{\Sigma}$ has a diagonal structure, say $\boldsymbol{\Sigma} = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_M^2)$. In that case, the matrix $\mathbf{S} = \operatorname{diag}(1/\sigma_1, \ldots, 1/\sigma_M)$ is also diagonal and we can exploit the problem structure in order to find a distributed implementation of the Gauss-Newton procedure given in Algorithm 1. Note that for a distributed implementation of Algorithm 1, it suffices to compute the descent direction $\mathbf{h}_{\mathrm{gn}}$ in a distributed fashion. For that purpose, first note that $\mathbf{J}(\mathbf{x})$ has a block-wise structure given by

$$\mathbf{J}(\mathbf{x}) = \begin{bmatrix} -\dfrac{10\alpha_1}{\sigma_1\|\mathbf{x}-\mathbf{c}_1\|^2}(\mathbf{x}-\mathbf{c}_1)^T \\ \vdots \\ -\dfrac{10\alpha_M}{\sigma_M\|\mathbf{x}-\mathbf{c}_M\|^2}(\mathbf{x}-\mathbf{c}_M)^T \end{bmatrix} = \begin{bmatrix} \mathbf{J}_1(\mathbf{x}) \\ \vdots \\ \mathbf{J}_M(\mathbf{x}) \end{bmatrix}.$$
(35)

Based on the block-wise structure of matrix J(x) it is easy to note that

$$\mathbf{J}(\mathbf{x})^T\mathbf{J}(\mathbf{x}) = \sum_m \mathbf{J}_m(\mathbf{x})^T\mathbf{J}_m(\mathbf{x})$$
(36)
$$\mathbf{J}(\mathbf{x})^T\mathbf{f}^{\mathrm{ML}}(\mathbf{x}) = \sum_m \mathbf{J}_m(\mathbf{x})^T f_m^{\mathrm{ML}}(\mathbf{x})$$
(37)

and, therefore, the above quantities can be computed in a distributed fashion by means of average consensus [19]. Once we have computed the products (36) and (37), it is straightforward to compute the descent direction $\mathbf{h}_{\mathrm{gn}}$.

Based on these observations, we propose a fully distributed algorithm, shown as Algorithm 2, which asymptotically approaches the same result as the centralized case using only local information and the exchange of low-volume intermediate results within each node's one-hop neighborhood. We immediately note that Steps 3-6 and 11-12 can all be performed locally by each node. The only communication occurs in Steps 8 and 9 via standard average consensus algorithms [19]. Since $\boldsymbol{\Delta}_m^{(k)} \in \mathbb{R}^{2\times 2}$ is symmetric and $\boldsymbol{\gamma}_m^{(k)} \in \mathbb{R}^2$, each consensus round requires a broadcast of only five real values.

Algorithm 2 Distributed Gauss-Newton localization

1: $\hat{\mathbf{x}}^{(0)} \leftarrow$ same initial value, $\forall m \in \mathcal{M}$ {Initialization}

2: for k = 0 to K - 1 do

3:   $\mathbf{J}_m^{(k)} \leftarrow -\dfrac{10\alpha_m}{\sigma_m\|\hat{\mathbf{x}}^{(k)} - \mathbf{c}_m\|^2}\left(\hat{\mathbf{x}}^{(k)} - \mathbf{c}_m\right)^T$

4:   $f_m^{\mathrm{ML}}(\hat{\mathbf{x}}^{(k)}) \leftarrow \sigma_m^{-1}\left(r_m - p_m + 5\alpha_m\log_{10}\|\hat{\mathbf{x}}^{(k)} - \mathbf{c}_m\|^2 - 10\alpha_m\log_{10} d_0\right)$

5:   $\boldsymbol{\Delta}_m^{(k)} \leftarrow \mathbf{J}_m^{(k)T}\mathbf{J}_m^{(k)}$

6:   $\boldsymbol{\gamma}_m^{(k)} \leftarrow \mathbf{J}_m^{(k)T} f_m^{\mathrm{ML}}(\hat{\mathbf{x}}^{(k)})$

7:   begin consensus

8:      $\boldsymbol{\Delta}_*^{(k)} \leftarrow \frac{1}{M}\sum_{m=1}^M \boldsymbol{\Delta}_m^{(k)} = \frac{1}{M}\mathbf{J}^{(k)T}\mathbf{J}^{(k)}$

9:      $\boldsymbol{\gamma}_*^{(k)} \leftarrow \frac{1}{M}\sum_{m=1}^M \boldsymbol{\gamma}_m^{(k)} = \frac{1}{M}\mathbf{J}^{(k)T}\mathbf{f}^{\mathrm{ML}}(\hat{\mathbf{x}}^{(k)})$

10:   end consensus

11:   $\mathbf{h}^{(k)} \leftarrow -\boldsymbol{\Delta}_*^{(k)\,-1}\boldsymbol{\gamma}_*^{(k)} = -\left(\mathbf{J}^{(k)T}\mathbf{J}^{(k)}\right)^{-1}\mathbf{J}^{(k)T}\mathbf{f}^{\mathrm{ML}}(\hat{\mathbf{x}}^{(k)})$

12:   $\hat{\mathbf{x}}^{(k+1)} \leftarrow \hat{\mathbf{x}}^{(k)} + \mathbf{h}^{(k)}$

13: end for
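
The consensus rounds in Steps 8 and 9 can be realized with standard average-consensus iterations [19]; the sketch below is illustrative, and the step size is assumed to be small enough for the node degrees of the given graph.

```python
# Illustrative sketch: average consensus over the one-hop neighborhoods, as used in
# Steps 8-9 of Algorithm 2 to average the local quantities Delta_m and gamma_m.
import numpy as np

def average_consensus(values, neighbors, epsilon=0.1, rounds=50):
    """Each node repeatedly mixes its value with those of its one-hop neighbors;
    for a connected graph and a sufficiently small step size, every local value
    converges to the network-wide average of the initial values."""
    v = [np.array(val, dtype=float) for val in values]
    for _ in range(rounds):
        v = [v[m] + epsilon * sum(v[j] - v[m] for j in Nm)
             for m, Nm in enumerate(neighbors)]
    return v

# Usage within iteration k of Algorithm 2 (names are hypothetical):
#   Delta_avg = average_consensus(Delta_local, neighbors)   # approx. (1/M) J^T J
#   gamma_avg = average_consensus(gamma_local, neighbors)   # approx. (1/M) J^T f_ML
```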

6 Numerical simulations

In this section, we provide several numerical examples in order to evaluate the performance of the proposed approach. For the simulations we consider a network of randomly deployed nodes over an area of 100 × 100 square meters. We have used the same propagation model for all the nodes, with reference power $p_m = -40$ dB at reference distance $d_0 = 1$ m and path-loss exponent $\alpha_m = 2$ for $m = 1, \ldots, M$. We further assume that the noise processes are independent and identically distributed with $n_m \sim \mathcal{N}(0, \sigma_{\mathrm{dB}}^2)$ for all $m$.

In Figure 3, we have simulated the distributed localization task using a network of 50 nodes (see Figure 1) with a randomly located target. We have performed distributed estimation of the target's position by alternating between (31) and (32) with penalty parameter $c = 0.05$. In Figure 3 we plot the error between each node's local estimate and the centralized solution of the problem. As can be appreciated, the distributed algorithm converges to the optimal centralized solution as the number of iterations increases.

Figure 3. Error versus iteration. Norm of the difference between the local (distributed) and the centralized estimate as a function of the iteration number.

We have also simulated the localization task over the same network of Figure 1. For the propagation model, the measurement noise variance has been set to $\sigma_{\mathrm{dB}}^2 = 9$. We have evaluated the performance of the proposed approach with and without weights (labeled wSDP and SDP, respectively) over 1000 random target locations (test points). We compare our approach with the multilateration localization approach [2]. As shown in [2], if one node is chosen as a reference, then the localization problem can be linearized. Therefore, it allows a distributed implementation by means of consensus, since it can be cast as a least-squares problem where each node locally contributes to the global cost function. Note, however, that nodes must first agree on a common reference. For the simulations, we have chosen the node with the smallest distance estimate as the reference. We also provide the performance of the ML estimate for comparison purposes. The empirical cumulative distribution function (CDF) of the localization error is represented in Figure 4. As can be observed in Figure 4, the proposed scheme outperforms multilateration. Further, we observe that the use of weights improves the localization accuracy of the algorithm, bringing us closer to the ML estimate.

Figure 4. Empirical CDF of the localization error. Empirical CDF of the localization error for the considered methods, based on 1000 realizations.

The performance of the algorithm has also been tested for different values of the measurement noise variance. The results are displayed in Figure 5, where the average error over 1000 locations is depicted as a function of the measurement noise standard deviation. Again, we can appreciate the performance improvement of the proposed approach compared to multilateration. We have also displayed the results when combining the proposed distributed localization approach with a local search (wSDP+local). The local search is performed in a distributed fashion following the steps in Algorithm 2. As can be observed, such a combination provides close-to-ML performance. This implies that our method is capable of providing good estimates that can be used to initialize a local solver in order to come close to the ML estimate. Although not guaranteed to converge to the ML solution, the local search can only improve the estimates (in the ML sense).

Figure 5. Error versus noise. Average localization error as a function of the measurement noise standard deviation for the considered methods. The average is taken over 1000 different realizations.

7 Conclusions

We have presented a distributed localization approach for sensor networks using consensus and convex optimization. An alternative to the ML position estimation problem has been proposed, based on local ML distance estimates at each node. In order to circumvent the non-convexity of the problem, a semidefinite relaxation technique has been employed and conditions that guarantee zero gap between the relaxed and the original problem have been given. A distributed algorithm based on an augmented Lagrangian approach using primal-dual decompositions has been proposed and shown to converge to the centralized solution. The approach is suitable for real implementation in WSNs as it is scalable, robust against changes in topology, and energy efficient through the use of only local broadcast-type communication among nodes. Another interesting property of the proposed algorithm is that it allows the introduction of additional convex constraints to the localization problem in a straightforward manner.

The proposed algorithm is intended to be usable in real networks, and its suitability in terms of accuracy would be determined by the application at hand. However, if higher accuracy is required, we can run an additional optimization step around the found solution. We have verified by means of simulations that the combination of our suboptimal method with a local search provides a localization error close to that of the ML estimate.

It is worth mentioning that the proposed approach also has a direct application to distributed tracking in WSNs. The tracking procedure would be based on the jointly estimated target's position. As all nodes share the same estimate, they could use it to locally run a tracking filter in order to follow the movement of the target.

Appendix 1: Proof of Proposition 1

To show the validity of Proposition 1, consider first the Lagrangian of (14) which is given by

$$L(\mathbf{X}, \mathbf{x}) = \sum_{m=1}^M \gamma_m\left(\operatorname{Tr}(\mathbf{X}) - 2\mathbf{c}_m^T\mathbf{x} + b_m\right)^2 - \operatorname{Tr}\left(\boldsymbol{\Psi}\left(\mathbf{X} - \mathbf{x}\mathbf{x}^T\right)\right)$$
(38)

with $\boldsymbol{\Psi} \succeq \mathbf{0}$ being the Lagrange multiplier. Since problem (14) is convex and there exists, by assumption, at least one strictly feasible point, Slater's constraint qualification is satisfied and, therefore, strong duality holds. Moreover, from duality theory we have that, at the optimum, the derivatives of the Lagrangian with respect to $\mathbf{X}$ and $\mathbf{x}$ must be zero, that is

$$\nabla_{\mathbf{X}} L(\mathbf{X},\mathbf{x}) = 2\sum_{m=1}^M \gamma_m\left(\operatorname{Tr}(\mathbf{X}) - 2\mathbf{c}_m^T\mathbf{x} + b_m\right)\mathbf{I} - \boldsymbol{\Psi} = \mathbf{0}$$
(39)

and

$$\nabla_{\mathbf{x}} L(\mathbf{X},\mathbf{x}) = -4\sum_{m=1}^M \gamma_m\left(\operatorname{Tr}(\mathbf{X}) - 2\mathbf{c}_m^T\mathbf{x} + b_m\right)\mathbf{c}_m - 2\boldsymbol{\Psi}\mathbf{x} = \mathbf{0}$$
(40)

From Equation (39) it becomes clear that Ψ must be a diagonal matrix. This fact, together with the complementary slackness condition

$$\operatorname{Tr}\left(\boldsymbol{\Psi}\left(\mathbf{X} - \mathbf{x}\mathbf{x}^T\right)\right) = 0$$
(41)

implies that the off-diagonal elements of $\mathbf{X}$ must equal those of $\mathbf{x}\mathbf{x}^T$. However, this does not necessarily mean that $\mathbf{X}$ is of rank one. From the complementary slackness condition (41) we have that $\mathbf{X}$ will equal $\mathbf{x}\mathbf{x}^T$ whenever the constraint is active (i.e., $\boldsymbol{\Psi} \neq \mathbf{0}$). So by finding under which conditions $\boldsymbol{\Psi} \neq \mathbf{0}$, we will find the conditions that guarantee that the solution of (14) coincides with the solution of the original problem (12). For that purpose, if we set $\boldsymbol{\Psi} = \mathbf{0}$ we have from (39) and (40) that

$$\sum_{m=1}^M \gamma_m\left(\operatorname{Tr}(\mathbf{X}) - 2\mathbf{c}_m^T\mathbf{x} + b_m\right) = 0$$
(42)
$$\sum_{m=1}^M \gamma_m\left(\operatorname{Tr}(\mathbf{X}) - 2\mathbf{c}_m^T\mathbf{x} + b_m\right)\mathbf{c}_m = \mathbf{0}$$
(43)

Let $t = \operatorname{Tr}(\mathbf{X})$ and $\mathbf{z} = [z_1, z_2]^T = \mathbf{x}$. Keeping in mind that $\mathbf{c}_m = [x_m, y_m]^T$, we can rewrite the above equations as

$$2\sum_m \gamma_m x_m z_1 + 2\sum_m \gamma_m y_m z_2 - \sum_m \gamma_m t = \sum_m \gamma_m b_m$$
(44)
$$2\sum_m \gamma_m x_m^2 z_1 + 2\sum_m \gamma_m y_m x_m z_2 - \sum_m \gamma_m x_m t = \sum_m \gamma_m b_m x_m$$
(45)
$$2\sum_m \gamma_m x_m y_m z_1 + 2\sum_m \gamma_m y_m^2 z_2 - \sum_m \gamma_m y_m t = \sum_m \gamma_m b_m y_m$$
(46)

which can be expressed in a compact way as

$$\mathbf{A}\begin{bmatrix}\mathbf{z} \\ t\end{bmatrix} = \boldsymbol{\delta}$$
(47)

Therefore, $\boldsymbol{\Psi}$ will be equal to $\mathbf{0}$ only if (47) has a solution with $\mathbf{x} = \mathbf{z}$ and $\operatorname{Tr}(\mathbf{X}) = t$. Additionally, for the solution to be a feasible point it must satisfy $\operatorname{Tr}(\mathbf{X} - \mathbf{x}\mathbf{x}^T) \geq 0$, which implies that $\operatorname{Tr}(\mathbf{X}) \geq \|\mathbf{x}\|^2$ or, equivalently, $\|\mathbf{z}\|^2 \leq t$. This implies that if problem (17) is infeasible, then $\boldsymbol{\Psi} \neq \mathbf{0}$ and, hence, $\mathbf{X} = \mathbf{x}\mathbf{x}^T$, so that the solution of the relaxed problem (14) coincides with that of the original problem (12), which proves Proposition 1.

Appendix 2: Proof of Lemma 1

By the Schur complement we have that

$$\mathbf{X} - \mathbf{x}\mathbf{x}^T \succeq \mathbf{0} \iff \begin{bmatrix}\mathbf{X} & \mathbf{x} \\ \mathbf{x}^T & 1\end{bmatrix} \succeq \mathbf{0}$$
(48)

Let us introduce a new variable $\mathbf{Z} = \begin{bmatrix}\mathbf{X} & \mathbf{x} \\ \mathbf{x}^T & 1\end{bmatrix}$. We then have that $\operatorname{Tr}(\mathbf{X}) = \operatorname{Tr}(\mathbf{Z}) - 1$ and that

$$\mathbf{c}_m^T\mathbf{x} = [\mathbf{0}^T\ 1]\,\mathbf{Z}\begin{bmatrix}\mathbf{c}_m \\ 0\end{bmatrix} = \operatorname{Tr}\left(\mathbf{Z}\begin{bmatrix}\mathbf{0} & \mathbf{0} \\ \mathbf{c}_m^T & 0\end{bmatrix}\right)$$
(49)

By rearranging terms and using the previous conditions on $\mathbf{Z}$, we end up with problem (18), and the equivalence is established. The relation between the two solutions follows directly from the definition of $\mathbf{Z}$.

References

  1. Sayed AH, Tarighat A, Khajehnouri N: Network-based wireless location. IEEE Signal Process Mag 2005, 22(4):24-40.

  2. Patwari N, Ash JN, Kyperountas S, Hero AO, Moses RL, Correal NS: Locating the nodes. IEEE Signal Process Mag 2005, 22(4):54-68.

  3. Arulampalam MS, Maskell S, Gordon N, Clapp T: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans Signal Process 2002, 50(2):174-188. 10.1109/78.978374

  4. Zàruba GV, Huber M, Kamangar FA, Chlamtac I: Indoor location tracking using RSSI readings from a single Wi-Fi access point. Wirel Netw 2007, 13(2):221-235. 10.1007/s11276-006-5064-1

  5. Tai Y, Bo Y: Collaborative target tracking in wireless sensor network. Proc 9th Int Conf Electronic Measurement & Instruments ICEMI'09, 16-19 Aug 2009; Beijing 2009, 2-1005-2-1010.

  6. Wu H, Tian G, Huang B: Multi-robot collaborative localization methods based on wireless sensor network. Proc IEEE Int Conf Automation and Logistics ICAL, 1-3 Sep 2008; Qingdao 2008, 2053-2058.

  7. Ren H, Meng MQH: Power adaptive localization algorithm for wireless sensor networks using particle filter. IEEE Trans Veh Technol 2009, 58(5):2498-2508.

  8. Aounallah F, Amara R, Alouane MTH: Particle filtering based on sign of innovation for tracking a jump Markovian motion in a binary WSN. Proc Third Int Conf Sensor Technologies and Applications SENSORCOMM'09, 18-23 Jun 2009; Athens 2009, 252-255.

  9. Lui KWK, Ma WK, So HC, Chan FKW: Semi-definite programming algorithms for sensor network node localization with uncertainties in anchor positions and/or propagation speed. IEEE Trans Signal Process 2009, 57(2):752-763.

  10. Luo Z, So AMC, Ye Y, Zhang S: Semidefinite relaxation of quadratic optimization problems. IEEE Signal Process Mag 2010, 27(3):20-34.

  11. Boyd S, Vandenberghe L: Convex Optimization. Cambridge University Press, New York; 2004.

  12. Blatt D, Hero AO: Energy-based sensor network source localization via projection onto convex sets. IEEE Trans Signal Process 2006, 54(9):3614-3619.

  13. Shi Q, He C: A new incremental optimization algorithm for ML-based source localization in sensor networks. IEEE Signal Process Lett 2008, 15: 45-48.

  14. Bertsekas DP: Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York; 1982.

  15. Li J, Elhamifar E, Wang IJ, Vidal R: Consensus with robustness to outliers via distributed optimization. Proc 49th IEEE Conf Decision and Control (CDC), 15-17 Dec 2010; Atlanta 2010, 2111-2117.

  16. Patwari N, Hero IAO, Perkins M, Correal NS, O'Dea RJ: Relative location estimation in wireless sensor networks. IEEE Trans Signal Process 2003, 51(8):2137-2148. 10.1109/TSP.2003.814469

  17. Bejar B, Belanovic P, Zazo S: Distributed Gauss-Newton method for localization in Ad-Hoc networks. 43rd Asilomar Conference on Signals, Systems, and Computers, 7-10 Nov 2010; Pacific Grove 2010, 1452-1454.

  18. Bejar B, Belanovic P, Zazo S: Distributed consensus-based tracking in wireless sensor networks: a practical approach. Proceedings of the European Signal Processing Conference, EUSIPCO, 29 Aug - 1 Sep 2011; Barcelona 2011, 2019-2023.

  19. Olfati-Saber R, Murray R: Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans Autom Control 2004, 49(9):1520-1533. 10.1109/TAC.2004.834113

  20. Rabbat MG, Nowak RD, Bucklew JA: Generalized consensus computation in networked systems with erasure links. Proc IEEE 6th Workshop Signal Process Adv Wirel Commun, 5 - 8 Jun 2005; New York 1088-1092.

  21. Madsen K, Nielsen HB, Tingleff O: Methods for Non-Linear Least Squares Problems. Richard Petersens Plads, Kgs. Lyngby; 2004. [http://www2.imm.dtu.dk/pubdb/p.php?660]


Acknowledgements

The authors would like to thank the anonymous reviewers for their useful comments and suggestions that led to a significant improvement in the clarity of exposition and motivation of the present study. This study was supported in part by the Spanish Ministry of Science and Innovation under the grant TEC2009-14219-C03-01; El Consejo Social de la UPM; the Spanish Ministry of Science and Innovation in the program CONSOLIDER-INGENIO 2010 under the grant CSD2008-00010 COMONSENS; the European Commission under the grant FP7-ICT-2009-4-248894-WHERE-2; the European Commission under the grant FP7-ICT-223994-N4C and the Spanish Ministry of Science and Innovation under the complementary action grant TEC 2008-04644-E; Spanish Ministry of Science and Innovation under the grant TEC2010-21217-C02-02-CR4HFDVL.

Author information

Corresponding author

Correspondence to Benjamín Béjar.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

BB proposed and developed the solution to the problem, derived the distributed algorithm for its computation, and ran the numerical simulations. SZ conceived the study and helped draft the manuscript. All authors read and approved the manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Béjar, B., Zazo, S. A practical approach for outdoors distributed target localization in wireless sensor networks. EURASIP J. Adv. Signal Process. 2012, 95 (2012). https://doi.org/10.1186/1687-6180-2012-95
