
Dimension expansion and customized spring potentials for sensor localization


The spring model algorithm is an important distributed algorithm for solving wireless sensor network (WSN) localization problems. This article proposes several improvements to the spring model algorithm for solving WSN localization problems with anchors. First, the two-dimensional (2D) localization problem is solved in a three-dimensional (3D) space. This “dimension expansion” technique can effectively prevent the spring model algorithm from falling into local minima, which is verified both theoretically and empirically. Second, the Hooke spring force, or quadratic potential function, is generalized into L_p potential functions. The optimality of different values of p is considered under different noise environments. Third, a customized spring force function, whose strength is larger when the estimated distance between two sensors is close to the true length of the spring, is proposed to increase the speed of convergence. These techniques significantly improve the robustness and efficiency of the spring model algorithm, as demonstrated by multiple simulations. They are particularly effective in scenarios where the anchor points have a longer broadcasting radius than the other sensors.

1 Introduction

Localization is an essential tool in many sensor network applications. Over the years, a rich literature has been developed to solve the sensor localization problem from different perspectives[1–3]. Franceschini et al. characterized them according to four criteria in their survey article[1]: anchor-based vs. anchor-free, incremental vs. concurrent, fine-grained vs. coarse-grained, and centralized vs. distributed.

In this article, we propose several techniques to improve the robustness and computational efficiency of an anchor based, concurrent, fine-grained, and distributed spring-model-based sensor localization algorithm. We start our discussion by reviewing several representative distributed sensor localization algorithms.

2 Related study

Savarese et al.[4] proposed an incremental and anchor-based localization algorithm that uses a two-phase approach, including an initial position estimation stage and a second stage for position refinement using trilateration. It is accurate compared to traditional incremental methods, which are prone to error propagation. However, the method can produce dramatically incorrect node displacement when measurement noise is present.

To increase accuracy in a noisy environment, Moore et al.[5] argue that measurement noise can cause flip ambiguities during trilateration. They proposed a robust incremental algorithm, dramatically reducing the amount of error propagation. However, this algorithm has relatively high computational complexity, and its third stage can hardly be handled without a centralized node[1]. Furthermore, Franceschini et al.[1] point out that, under large measurement noise, the algorithm may fail to localize enough nodes.

Ihler et al.[6] proposed a distributed localization algorithm from a different perspective. It is based on graph models and requires a prior distribution for the sensor locations. It is accurate and fully distributed. Furthermore, in[7], a graph model based cooperative algorithm is proposed and tested with real measurements. The algorithm requires knowledge of the distributions of range measurements between pairs of sensors. However, prior knowledge of the sensor deployment and range distribution might not be available in many applications.

Algorithms based on machine learning techniques have also been proposed[8, 9]. These algorithms typically assume that a large number of anchor points are present, which is impractical in many cases.

The algorithms reviewed above are all range-based algorithms. The range information in sensor networks can be obtained with various methods, such as time of arrival (TOA) based ranging techniques, even for multipath environments[10]. Nonetheless, many methods also try to take advantage of additional/alternative information beyond range/distance measurements, including sector/bearing information[11], information about connectivity among sensor nodes[12], information about connectivity to anchor nodes[13] and measurement errors (vertex errors and edge errors)[14].

These methods use the additional/alternative information above to improve accuracy, accelerate convergence and reduce computational complexity. Because of the diversity of information available for sensor network localization, Shen et al. proposed a theory to quantify the limits of localization accuracy for wideband wireless networks in[15] and[16], by introducing the concept of squared position error bound.

Priyantha et al.[17] proposed a two stage anchor-free distributed algorithm (AFL). The first stage, the initialization stage, produces a qualitative network node graph, while the second stage refines the result by using a spring model, which is the most relevant to our algorithm.

In the spring model method, any pair of sensors that know their mutual distance are considered to be nodes connected by a spring whose natural length equals that measured distance. The force each spring applies to its nodes depends on the difference between the inter-node estimated distance and the actual distance. Given sufficient connections amongst the sensors, the system has a unique zero-energy state with all the sensors in correct relative position, up to global translation, rotation, and mirror transformation. Furthermore, if enough anchor nodes (sensors that know their true coordinates) are introduced, the absolute coordinates of the sensors can also be estimated.

One of the advantages of the spring model is the ease with which it can be implemented as a fully distributed algorithm, i.e., each sensor needs to communicate only with its neighbors, repeatedly updating its estimated location until convergence. However, there are several drawbacks to the spring model.

The most prominent problem is the “folding” phenomenon, in which the spring system falls into an energy local minimum and cannot unfold itself. To tackle this problem, many pre-processing methods have been proposed, such as the first stage of the AFL algorithm, which may fail due to an insufficient number of nodes or to measurement noise. Gotsman et al.[18] improved the first stage of the AFL algorithm by solving an eigenvector problem. Their method can be implemented in a distributed manner, but requires many iterations, thus incurring significant communication cost.

Another problem with the spring model (with Hooke’s law) is that the potential energy, measured by the squared error between the true distance and the estimated distance, is sensitive to measurement error, and hence is not robust enough for many applications.

Moreover, in[17] and[18], all scenarios are established on relative coordinates with no anchor. In the case of the presence of anchors, or if absolute coordinates are required, another round of information propagation is needed to transform the coordinates of each sensor to the corresponding absolute coordinates.

In this article, we propose a new spring model algorithm that resolves the problems mentioned above in the presence of anchors. First, we solve a two-dimensional (2D) sensor localization problem in a three-dimensional (3D) space, which significantly reduces the occurrence of “folding”. This algorithm requires no preprocessing, operates in a distributed manner, and converges at a rate competitive with the algorithms in[17] and[18]. Moreover, to improve the algorithm’s resistance to range measurement noise, we investigate the estimation error of different types of spring potential functions under different noise distributions. Furthermore, we design a customized spring force that gives the spring a larger strength coefficient once the estimated distance between two sensors is close to the measured distance between them. This customized spring can effectively accelerate the convergence of the system and also reduce the estimation error.

The rest of this article is organized as follows. In Section 3, the spring model sensor localization problem is defined. In Section 3.2, a distributed algorithm is proposed to solve the spring model sensor network localization problem. The idea of dimension expansion is explained and simulated in Section 4. In Section 5, the optimal spring potential function under different noise distributions is explored via extensive simulations. In Section 6, a customized spring force is proposed to increase convergence speed as well as to reduce estimation error, and Section 7 concludes our article.

3 Spring model with anchors

3.1 Basic model

Suppose we have N sensors, and each sensor has a communication radius r. We denote the set of sensors within the communication radius r of the j-th sensor by N_j(r), and the measured distance between sensors i and j by d_ij (assuming sensors i and j have reached consensus, i.e., d_ij = d_ji). If the distance between the estimated locations x_i of sensor i and x_j of sensor j is denoted ∥x_i − x_j∥, and the potential energy E_ij caused by the difference between the measured distance and the estimated distance is defined by the potential function

$$E_{ij} = P\left( \| x_i - x_j \| - d_{ij} \right), \qquad (1)$$

then the localization problem can be described as

$$\mathop{\arg\min}_{x_i,\; i = 1, \dots, N} \; \frac{1}{2} \sum_{i=1}^{N} \sum_{j \in N_i(r)} P\left( \| x_i - x_j \| - d_{ij} \right). \qquad (2)$$

Assume that A is the set of indices of the anchor sensors; then their coordinates should be excluded from the variables of the optimization problem. Therefore, (2) can be specialized as

$$\mathop{\arg\min}_{x_i,\; i \in A^{c}} \; \frac{1}{2} \sum_{i=1}^{N} \sum_{j \in N_i(r)} P\left( \| x_i - x_j \| - d_{ij} \right), \qquad (3)$$

given {x_i, i ∈ A} known and fixed.
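As a concrete reference, the anchored objective above can be sketched in Python as follows. This is an illustrative implementation of our own (the name `spring_energy` and the dictionary-based neighbor representation are not from the original), with the Hooke potential P(e) = e² as the default:

```python
import numpy as np

def spring_energy(x, d, neighbors, potential=lambda e: e ** 2):
    """Total potential energy of the spring system:
    0.5 * sum_i sum_{j in N_i(r)} P(||x_i - x_j|| - d_ij).

    x         : (N, dim) array of estimated coordinates
    d         : dict, (i, j) -> measured distance d_ij (with d_ij = d_ji)
    neighbors : dict, i -> iterable of neighbor indices N_i(r)
    potential : P, the spring potential (Hooke: P(e) = e**2)
    """
    total = 0.0
    for i, nbrs in neighbors.items():
        for j in nbrs:
            err = np.linalg.norm(x[i] - x[j]) - d[(i, j)]
            total += potential(err)
    # every spring is visited from both endpoints, hence the 1/2 factor
    return 0.5 * total
```

With anchors, the minimization runs only over the rows of `x` with indices outside A, while the anchor rows stay fixed.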

Next, we will introduce an iterative, distributed algorithm that solves (2), and further specialize this algorithm to solve its derived problems for different network configurations and noise types.

3.2 Distributed algorithm solving the spring model for sensor localization

The basic idea of the spring model is to run the system in a physical world with energy dissipation until it reaches its lowest potential state. Each sensor is deemed to be a ball of mass m and negligible size, and the entire system is immersed in a damping liquid of viscosity β. At time t, each sensor is pushed/pulled by the force exerted by the tension of the spring connected to its neighbors, and also dragged by the damping force opposite to its velocity. Under these assumptions, a distributed, iterative algorithm is proposed to find a solution to (2). The algorithm is summarized as Algorithm 1 below:

The basic idea of Algorithm 1 is to set up a physical environment with appropriate damping for the system to run freely until the kinetic and elastic potential energy dissipates. The steady state of the system is a local minimum of the objective function, which is likely to be the desirable solution of the localization problem. The convergence to the global minimum is not guaranteed. However, with proper control of parameters and wise choice of the initial condition, we can improve the possibility of success.

Algorithm 1 Distributed algorithm for spring model

Moreover, Algorithm 1 is purely distributed. There are two stages in the algorithm:

  1. Local communication stage: Each sensor collects the current estimates of the coordinates from its neighbors. This process is purely local within the radius r.

  2. Local computation stage: Each sensor updates the estimate of its coordinates (and the corresponding virtual speed and acceleration) based on the information collected locally.
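The two stages above can be sketched as one synchronous update per iteration. The following is our own minimal implementation, assuming Hooke springs, a viscous drag −βv, and semi-implicit Euler integration (the Algorithm 1 listing itself is not reproduced in this text, so these integration details are our assumptions):

```python
import numpy as np

def spring_model_step(x, v, d, neighbors, anchors,
                      m=1.0, k=5.0, beta=1.0, dt=0.01):
    """One synchronous iteration of the damped spring dynamics.

    Each non-anchor sensor feels Hooke forces from the springs to its
    neighbors plus a viscous drag -beta * v, then takes a semi-implicit
    Euler step. Anchors behave as if they had infinite mass: they never
    move.
    """
    f = np.zeros_like(x)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            diff = x[i] - x[j]
            dist = np.linalg.norm(diff)
            if dist > 0:
                # a stretched spring (dist > d_ij) pulls i toward j
                f[i] -= k * (dist - d[(i, j)]) * diff / dist
    f -= beta * v                        # drag from the damping liquid
    v_new = v + (f / m) * dt
    x_new = x + v_new * dt
    v_new[anchors] = 0.0                 # anchors: "infinite mass",
    x_new[anchors] = x[anchors]          # coordinates stay put
    return x_new, v_new
```

Repeatedly applying this step until the velocities die out leaves the system at a (possibly local) minimum of the potential energy.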

Most importantly, this model is highly configurable by setting the mass, viscosity, spring strength, and spring force type of the system. Next, we will discuss a special formulation that introduces anchor nodes into the system.

3.3 Introducing anchors: long-range anchor scenario

Intuitively, when the anchors’ coordinates are known accurately, they can be deemed to be ordinary sensor nodes, except with infinite mass, so that they will not update their coordinates. When the anchors’ coordinates are known up to a confidence level, they can be modeled as ordinary sensors, connected to an imaginary anchor (with infinite mass and set at the believed location) by a hard spring, with strength k much higher than others, reflecting the confidence level of the anchor in its coordinates. In our simulations, we assume that the anchors have accurate absolute coordinates, and hence do not update their own coordinates at all.

Moreover, since the anchors do not need to update their coordinates, they are not part of the iterative algorithm. Their only task is to broadcast their coordinates and distance measurements to their neighbors once. This gives rise to an interesting scenario, discussed next, in which the anchors have longer broadcasting ranges than ordinary sensors.

Since we are interested in finding the absolute coordinates of all the sensors in a distributed manner, the speed at which the information provided by the anchors spreads throughout the network is essential to the convergence speed of our algorithm. In principle, only when each sensor knows, directly or indirectly, the coordinates of at least three anchors, and the relative distance to them, can it estimate its absolute coordinates. Therefore, it is desirable for more sensors to know their distances to at least one anchor, and this can greatly accelerate the speed at which anchor information spreads throughout the network.

Following the arguments described previously, we introduce the scenario shown in Figure 1, in which the four anchors have longer communication ranges than the ordinary sensors, and do not enter into the iterations of the algorithm.

Figure 1

Anchor setup. A case of four long-range anchors with large communication radii.

In Figure 1, the four anchors are located at the four vertices of the unit square, with communication radius approximately r_anchor = 0.35. All the other sensors have a smaller communication radius of r = 0.1. The red region is completely out of the range of the anchors. Simulations in Section 6, in which we combine all techniques to improve the spring model algorithm for sensor localization, employ this configuration.
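This configuration can be generated as follows (a sketch under our own naming; distances are noise-free here, and a link is assumed whenever a pair lies within the larger radius applicable to it):

```python
import numpy as np

rng = np.random.default_rng(0)

def long_range_anchor_setup(n_sensors=500, r=0.1, r_anchor=0.35):
    """The Figure-1 scenario: four long-range anchors at the corners of
    the unit square plus uniformly scattered ordinary sensors. Returns
    positions, the (noise-free) distance dictionary, and neighbor lists."""
    anchors = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    sensors = rng.uniform(0.0, 1.0, size=(n_sensors, 2))
    pos = np.vstack([anchors, sensors])
    n = len(pos)
    d, neighbors = {}, {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(pos[i] - pos[j])
            # a link exists within the ordinary radius r, or within
            # r_anchor when one endpoint is a long-range anchor
            radius = r_anchor if min(i, j) < 4 else r
            if dist <= radius:
                d[(i, j)] = d[(j, i)] = dist
                neighbors[i].append(j)
                neighbors[j].append(i)
    return pos, d, neighbors
```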

4 Dimension expansion

4.1 “Folding” phenomenon and dimension expansion

The most prominent problem when running Algorithm 1 is the “folding” phenomenon. In a 2D space, two parts of the system could fold together, with a few springs strained, becoming stuck in an energy local minimum. Figure 2 illustrates a typical example of the folding phenomenon.

Figure 2

Illustration of the folding phenomenon. Left: correct displacement of the sensors, global minimum; middle: folding phenomenon, system reaches local minimum—forces within the 2-D space cannot unfold the system; right: the system unfolds easily with the introduction of the third dimension.

For example, in Figure 2, the left graph represents the global minimum, while the middle graph shows a “folding” scenario, with four springs strained. Unfortunately, once the spring system falls into a folding local minimum in a 2D space, it is rather difficult for the system to recover on its own, without external perturbation, since all forces lie within the same plane. However, if the 2D spring system is allowed to evolve in a 3D space, a small displacement in the third dimension can effectively unfold a system that has fallen into a “folding” local minimum, as shown in the right part of Figure 2. We refer to this technique as dimension expansion.

By initializing the third dimension of the sensor coordinates according to a uniform distribution, the probability of the “folded” sensors lying exactly on the same 2D plane is zero. As a result, dimension expansion can resolve many folding scenarios and hence circumvent local minima (more detailed discussion can be found in Section 4.2). Moreover, since the anchors are located on the same 2D plane, the global minimum of the relaxed system remains the same as the original problem (given that all the distance measurements are accurate), with all the sensors lying in the same 2D plane as the anchors.

However, these benefits come at a price. First, computationally, dimension expansion adds one more coordinate that the sensors need to maintain and communicate. Second, when measurement noise is present, the third dimension may overfit the noise.

Nonetheless, our simulations demonstrate that the dimension expansion technique has almost the same convergence rate as its 2D counterpart while significantly reducing the chances of “folding”, and performs well for large noise scenarios when there is a sufficient number of anchor points. These properties will be demonstrated via simulations next and in Section 6.
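In code, dimension expansion amounts to a pair of small helpers (our own sketch): lift the 2D estimates into 3D with a small random third coordinate, run the same spring dynamics in 3D while the anchors stay fixed at z = 0, and drop the third coordinate after convergence:

```python
import numpy as np

rng = np.random.default_rng(1)

def expand_dimension(x2d, eps=0.1):
    """Lift a 2D configuration into 3D by drawing the third coordinate
    uniformly from [-eps, eps]; with probability one no set of "folded"
    sensors remains exactly coplanar, so out-of-plane forces can unfold it."""
    z = rng.uniform(-eps, eps, size=(len(x2d), 1))
    return np.hstack([x2d, z])

def project_back(x3d):
    """After convergence the sensors settle into the anchors' plane (z = 0
    when the measurements are noise-free); discard the auxiliary coordinate."""
    return x3d[:, :2]
```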

Before demonstrating more empirical results, we will first explore a simple case of a three-body scenario to explain the theoretical foundation of the efficacy of dimension expansion.

4.2 Why dimension expansion works

Although simulations in later sections will confirm our intuition that dimension expansion resolves the “folding” problem, in this section we provide a mathematical explanation of this intuition to better understand the capability and limitations of dimension expansion.

Since the derivation is very intricate for high dimensional cases, in this subsection, we illustrate the mathematical foundation of dimension expansion using a 1D to 2D expansion with a three-body model. Although this simplified model has some differences from our 2D to 3D simulation and application, it still sheds some light on the fundamental ideas of dimension expansion.

4.2.1 Model and setup

Suppose that we have three balls numbered 1, 2, and 3. They are connected by springs of intrinsic lengths l_12, l_13, and l_23. Without loss of generality, we assume that x_1 < x_2 < x_3, where x_i denotes the coordinate of the i-th ball. Under the assumption that the three balls are confined to a 1D space and there is no observational error, we have the extra constraint

$$l_{12} + l_{23} = l_{13}. \qquad (4)$$

Therefore, the elastic potential energy of the entire system can be expressed as

$$L(x_1, x_2, x_3) = \sum_{1 \le i < j \le 3} \frac{k}{2} \left( |x_i - x_j| - l_{ij} \right)^2. \qquad (5)$$

In the globally optimal state, i.e., when the “correct” configuration of the relative positions of the balls are reached, the potential energy in (5) is 0. However, it can be shown that, if the balls are confined in 1D, then there exist stable configurations that are not globally optimal (i.e., they are local minima), and hence lead us to “incorrect” configurations if we use the spring model without discretion.

4.2.2 Existence of stable suboptimal equilibrium (SSE)

Without loss of generality, we assume k = 2, l_13 = 1, and l_12 = α, where 0 < α < 1. According to (4), l_23 = 1 − α.

Any equilibrium configuration of the system needs to satisfy

$$\frac{\partial L}{\partial x_i} = 0, \qquad i \in \{1, 2, 3\}. \qquad (6)$$

Hence, by substituting (5) into (6), we obtain

$$\begin{aligned} 2x_1 - x_2 - x_3 - l_{12}\,\operatorname{sgn}(x_1 - x_2) - l_{13}\,\operatorname{sgn}(x_1 - x_3) &= 0 \\ 2x_2 - x_1 - x_3 - l_{12}\,\operatorname{sgn}(x_2 - x_1) - l_{23}\,\operatorname{sgn}(x_2 - x_3) &= 0 \\ 2x_3 - x_1 - x_2 - l_{13}\,\operatorname{sgn}(x_3 - x_1) - l_{23}\,\operatorname{sgn}(x_3 - x_2) &= 0. \end{aligned} \qquad (7)$$

Equations (7) provide us with the necessary condition for a stable equilibrium. However, for stability to be guaranteed, we also need the Hessian matrix of L(x_1, x_2, x_3) to be positive semi-definite. Deriving the second-order partial derivatives of L(x_1, x_2, x_3), we obtain

$$\frac{\partial^2 L}{\partial x_i^2} = 2 - 2 \sum_{j \ne i} l_{ij}\, \delta(x_i - x_j) \qquad (8)$$


$$\frac{\partial^2 L}{\partial x_i \partial x_j} = -1 + 2 l_{ij}\, \delta(x_i - x_j), \qquad i \ne j, \qquad (9)$$

where δ is the Dirac delta function, and i, j ∈ {1, 2, 3}.

From (8) and (9), we can conclude that as long as (7) does not yield solutions for which

$$\exists\, (i, j) \ \text{s.t.}\ x_i = x_j, \qquad (10)$$

the Hessian matrix H(x_1, x_2, x_3) is the constant matrix

$$H(x_1, x_2, x_3) = \begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix}. \qquad (11)$$

The Hessian matrix in (11) is positive semi-definite, which guarantees that almost any equilibrium given by a solution of Equation (7) is stable, as long as condition (10) is not satisfied. Therefore, we can solve (7) to show that sub-optimal solutions do exist.

Since only the relative coordinates of the balls matter, without loss of generality, we fix ball 1 at the origin, i.e., let x_1 = 0. Then, there are three unique non-trivial scenarios, excluding symmetrically equivalent ones:

  • Scenario 1: 0 < x_2 < x_3;

  • Scenario 2: 0 < x_3 < x_2;

  • Scenario 3: x_2 < 0 < x_3.

Compared to Scenario 1, which has the global optimum as the solution to (7), i.e.,

$$x_2 = \alpha, \qquad x_3 = 1, \qquad (12)$$

Scenarios 2 and 3 are more of interest to us. Under the assumption of Scenario 2, the solution to (7) turns out to be

$$x_2 = \frac{\alpha + 2}{3}, \qquad x_3 = \frac{2\alpha + 1}{3}. \qquad (13)$$

Obviously, x_2 ≠ x_3, so (13) is a stable equilibrium, which is a concrete example of the “folding phenomenon” in 1D space. Similarly, we can find the solution in Scenario 3 as

$$x_2 = -\frac{\alpha}{3}, \qquad x_3 = 1 - \frac{2\alpha}{3}. \qquad (14)$$

By arguments similar to those for Scenario 2, (14) is also a stable equilibrium.

Therefore, we have shown the existence of stable sub-optimal equilibrium (SSE) configurations for the three body spring model in 1D space.
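These equilibria are easy to verify numerically. The sketch below (our own code, with k = 2 and α = 0.4) evaluates the energy (5) at the Scenario 2 equilibrium (13): the energy is strictly positive, i.e., sub-optimal, yet no small 1D perturbation decreases it:

```python
import numpy as np

def energy_1d(x, alpha, k=2.0):
    """Elastic energy (5) of the 1D three-body chain with
    l12 = alpha, l23 = 1 - alpha, l13 = 1."""
    l = {(0, 1): alpha, (1, 2): 1 - alpha, (0, 2): 1.0}
    return sum(k / 2 * (abs(x[i] - x[j]) - lij) ** 2
               for (i, j), lij in l.items())

alpha = 0.4
# Scenario 2 equilibrium (13): x1 = 0, x2 = (alpha+2)/3, x3 = (2*alpha+1)/3
x_sse = np.array([0.0, (alpha + 2) / 3, (2 * alpha + 1) / 3])
e0 = energy_1d(x_sse, alpha)
assert e0 > 1e-6                  # sub-optimal: the energy is not zero ...
for i in range(3):                # ... yet every small 1D move increases it
    for s in (-1e-4, 1e-4):
        xp = x_sse.copy()
        xp[i] += s
        assert energy_1d(xp, alpha) > e0
```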

4.2.3 Eliminating SSEs by dimension expansion

Although SSEs exist for the three body model in 1D space, they can be eliminated by dimension expansion, which, in this case, is to allow the 1D system to evolve in a 2D space.

Denoting the coordinates of the balls by (x_i, y_i), i ∈ {1, 2, 3}, the potential function in (5) can be rewritten as

$$L(x_1, x_2, x_3, y_1, y_2, y_3) = \sum_{1 \le i < j \le 3} \frac{k}{2} \left( \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} - l_{ij} \right)^2. \qquad (15)$$

The standard method to find all equilibria of (15) is to solve the six equations derived from

$$\frac{\partial L}{\partial x_i} = 0, \quad \frac{\partial L}{\partial y_i} = 0, \qquad i \in \{1, 2, 3\}. \qquad (16)$$

However, a simple argument can save us the effort: We claim that the three-body spring model has an equivalent set of equilibria in 2D space, as in 1D space, with possible changes in stability.

The reasoning behind this claim is as follows. First, notice the fact that each ball in the three-body system is pulled/pushed by exactly two springs. For each ball to be in equilibrium, the two forces must both be 0, or co-linear. The former situation is achieved only in the unique globally optimal state, in which the three balls are co-linear. Therefore, the three balls must always be co-linear (located in a 1D space) in any equilibrium, and hence the 2D equilibrium solutions must be equivalent to the 1D ones.

Although the 2D equilibria are equivalent to the 1D ones, the stability of those equilibria may not necessarily be the same. And the essence of dimension expansion is to expand the system into a higher dimension so that the stable local optimal configurations in the lower dimensional space become unstable in the higher dimensional space, which is exactly what happens in this case.

To investigate the stability of an equilibrium, we need to examine the positive semi-definiteness of the Hessian matrix at the equilibrium. To show that the Hessian matrix is no longer positive semi-definite, it is sufficient to show that one of its diagonal elements is negative.

Let us first take Scenario 2 as an example. Without loss of generality, we assume that the equilibrium has y 1 = y 2 = y 3 = 0. Further assuming k = 2, from (15), we can derive

$$\frac{\partial^2 L}{\partial y_1^2} = 2 - \frac{l_{12} (x_1 - x_2)^2}{\left[ (x_1 - x_2)^2 + (y_1 - y_2)^2 \right]^{3/2}} - \frac{l_{13} (x_1 - x_3)^2}{\left[ (x_1 - x_3)^2 + (y_1 - y_3)^2 \right]^{3/2}}. \qquad (17)$$

Substituting x_1, x_2, x_3, y_1, y_2, y_3 from the solution of Scenario 2, which is

$$x_1 = 0, \quad x_2 = \frac{\alpha + 2}{3}, \quad x_3 = \frac{2\alpha + 1}{3}, \quad y_1 = y_2 = y_3 = 0,$$

we obtain

$$\frac{\partial^2 L}{\partial y_1^2} = -\frac{2 (\alpha - 1)^2}{(\alpha + 2)(2\alpha + 1)}. \qquad (18)$$

By the assumption that 0 < α < 1, (18) is strictly negative, which means that the Hessian at the Scenario 2 equilibrium is no longer positive semi-definite, i.e., the equilibrium is no longer stable. Consequently, by dimension expansion from 1D to 2D, we eliminate an SSE configuration of the three-body system.

A geometric explanation of this phenomenon is that, in 1D space, an equilibrium can be a stable local optimum, while in the 2D situation, the local optimum becomes a saddle point.

Similarly, by substituting x_1, x_2, x_3, y_1, y_2, y_3 from the solution of Scenario 3, we obtain

$$\frac{\partial^2 L}{\partial y_1^2} = -1 - \frac{3}{3 - 2\alpha},$$

which is also strictly negative, and the same argument follows.

Therefore, both SSE configurations are unstable in the 2D case, which leaves us with a unique solution—the desired global optimum.
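The sign of (18) can likewise be checked numerically. In the sketch below (our own code; with k = 2 the computed value is a constant multiple of the normalized expression in (18), and only its sign matters), a centered second difference along y_1 at the lifted Scenario 2 equilibrium comes out negative, confirming that the SSE turns into a saddle point in 2D:

```python
import numpy as np

def energy_2d(p, alpha, k=2.0):
    """Elastic energy (15) of the three-body chain embedded in 2D;
    p is a (3, 2) array of ball coordinates."""
    l = {(0, 1): alpha, (1, 2): 1 - alpha, (0, 2): 1.0}
    return sum(k / 2 * (np.linalg.norm(p[i] - p[j]) - lij) ** 2
               for (i, j), lij in l.items())

alpha = 0.4
# Scenario 2 SSE, lifted into the plane with y = 0
p = np.array([[0.0, 0.0],
              [(alpha + 2) / 3, 0.0],
              [(2 * alpha + 1) / 3, 0.0]])
e0 = energy_2d(p, alpha)
# centered second difference along y_1 approximates d^2 L / d y_1^2
h = 1e-4
pp, pm = p.copy(), p.copy()
pp[0, 1] += h
pm[0, 1] -= h
curvature = (energy_2d(pp, alpha) - 2 * e0 + energy_2d(pm, alpha)) / h ** 2
assert curvature < 0   # the 1D-stable equilibrium is a saddle point in 2D
```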

4.2.4 Conclusions

With the analysis of the 1D-to-2D dimension expansion using a three-body model, we have reached two important conclusions:

  1. For a 1D three-body spring model with no measurement noise, the “folding phenomenon” always exists, since there are two non-trivial SSE configurations; and

  2. If we expand the space from 1D to 2D, all the SSE configurations become unstable and the system will evolve into the unique globally optimal configuration.

Although the model for the analysis above is simple compared to more general cases, the conclusion is quite insightful: by allowing the system to have one more degree of freedom, the local optima are converted into saddle points and hence eliminated, which is the essence of dimension expansion.

To further demonstrate the efficacy of dimension expansion in more general cases, we resort to numerical simulations in the following section.

4.3 Efficacy of dimension expansion

The theoretical model only shows us the situation in the 1D/2D case with a simple three-body model. To fully explore the capability of dimension expansion, we compare the performance of the spring system with and without dimension expansion in a 2D/3D configuration.

We assume that the springs obey the Hooke force in the experiment, i.e., the potential function is P(e) = k e². N = 500 sensors are scattered at random in a 1 × 1 2D square. At each vertex, at the midpoint of each side, and at the center of the square, there is an anchor (nine in total). Assuming all measurements of distance within the communication radius r are accurate, we run a series of simulations for different values of r, with m = 1 for all sensors, k = 5, Δt = 0.01, and β = 1, where m is the mass of each sensor and β is the viscosity of the liquid, as described in Section 3. The simulation results are shown in Figure 3.

Figure 3

Comparison of accuracy for 2D and 3D algorithms. Average localization errors of 100 trials for 2D and 3D algorithms.

The initial location of each sensor is a uniformly distributed random point in the unit square. For the 3D algorithm, the third coordinate is initialized as a uniformly distributed value on [−0.1, 0.1]. Assuming that x_i^0 is the true location of sensor i, the localization error E is measured by

$$E = \frac{1}{N} \sum_{i=1}^{N} \left\| x_i - x_i^0 \right\|^2.$$

For small values of r, some sensors do not have sufficient constraints to specify their absolute coordinates. Hence, both the 2D and 3D algorithms have large estimation errors. When r grows larger, the 3D algorithm performs very well, consistently converging to the desired global minimum. However, as the links among the sensors increase, the 2D algorithm frequently falls into local minima and hence incurs a large estimation error. Therefore, this experiment confirms the efficacy of dimension expansion.

Another important issue is the convergence speed. As is common for many optimization algorithms, the selection of parameters is essential to the convergence speed. The values of k and m can be selected arbitrarily, because the effect of the ratio k/m can be compensated by the choice of β and Δt. The choice of Δt should follow the general rules for choosing step sizes in optimization problems: too large a Δt leads to poor accuracy in the vicinity of the optimal region, and too small a Δt results in very slow convergence. Lastly, β is chosen following the guidance of damped harmonic oscillation of springs. A large β leads to over-damping, which results in slow convergence and susceptibility to local minima; on the other hand, a small β leads to under-damping, which causes prolonged oscillation and slow convergence. In practice, β is selected below the critical damping factor 2√(km) (in our setup, the critical damping factor is approximately 4.47, and β is chosen as 1), to avoid unnecessarily prolonged oscillation while providing the system with enough freedom to avoid bad local minima.
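The damping trade-off can be illustrated on a single damped oscillator (a toy sketch of our own; the actual network couples many such modes):

```python
import numpy as np

def settle_steps(beta, k=5.0, m=1.0, dt=0.01, tol=1e-3, max_steps=20000):
    """First step at which a 1D damped oscillator m x'' = -k x - beta x',
    started from x = 1 (one sensor held by one spring), has both |x| and
    |x'| below tol."""
    x, v = 1.0, 0.0
    for n in range(max_steps):
        a = -(k * x + beta * v) / m
        v += a * dt                 # semi-implicit Euler step
        x += v * dt
        if abs(x) < tol and abs(v) < tol:
            return n
    return max_steps

beta_c = 2 * np.sqrt(5.0 * 1.0)     # critical damping 2*sqrt(k*m) ~= 4.47
```

With k = 5 and m = 1, a β near (but below) the critical value settles fastest, while a strongly over-damped (e.g. β = 40) or under-damped (e.g. β = 0.1) choice takes many times longer.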

Figure 4 compares the convergence rates for the case of N = 1000 and r = 0.15, for both the 2D and 3D algorithms. The graph shows that they converge at similar speeds despite the fact that the 3D algorithm has one more dimension.

Figure 4

Comparison of convergence speeds for 2D and 3D algorithms.

In these and other examples, we find that the 3D algorithm, though having one more dimension, does not have an observable disadvantage in terms of the rate of reducing the localization estimation error.

5 Optimal spring potential

Besides changing the dimension of the space in which the algorithm runs, the distributed spring model algorithm has the additional advantage that it can easily adapt to different types of noise by merely changing the force/potential (i.e., using a non-Hooke force). Various types of potentials can be designed to adapt to different types of noise.

To simplify the problem, in this section, we analyze the group of potentials of the form

$$E_{ij} = P\left( \| x_i - x_j \| - d_{ij} \right) = \left| \, \| x_i - x_j \| - d_{ij} \, \right|^p, \qquad (19)$$

where p ≥ 1, to identify the optimal spring potential, i.e., we seek optimal values of p for different types of noise. For simplicity, we henceforth call the potential function represented in Equation (19) the L_p potential.

Let us first consider a simplified model with N anchors and only one sensor to be localized, with noisy distance/range measurements. Let A_i, i = 1,…,N, be anchors located uniformly on a unit circle in two-dimensional space, with locations denoted x_{A_i}, ∥x_{A_i}∥ = 1. Suppose the only sensor to be localized, S, is located at the center of the circle. The distance measurements between S and the anchors A_i, i = 1,…,N, are corrupted by different types of noise with mean 0 and standard deviation σ.

Given the range measurements and x_{A_i}, i = 1,…,N, sensor S can easily estimate its location x_S using the distributed spring model algorithm described in Section 3, with the desired force/potential. For each specific type of noise and potential, sensor S solves the following optimization problem:

$$\min_{x_S} \; \sum_{i=1}^{N} \left| \, \| x_{A_i} - x_S \| - (1 + \xi_i) \, \right|^p, \qquad (20)$$

where ξ_i, i = 1,…,N, are independent and identically distributed (i.i.d.) random variables with mean 0 and standard deviation σ.

Given a particular p and the joint distribution of ξ i ,i = 1,…,N, the optimization result of (20) forms a distribution dist p (x S ). In order to investigate L p potentials with different values of p, we compare the group of distributions dist p (x S ),p ≥ 1, to find the optimal potential, in terms of accuracy, for different types of noise.

Intuitively, we desire the distribution dist_p(x_S) to concentrate around the origin as closely as possible. Specifically, we desire a potential function, as specified by the parameter p, that yields a spiky dist_p(x_S) centered around the true location (0,0), since this demonstrates the robustness of the spring potential against noise. Therefore, a natural measure of performance is the q-quantile of the distribution of the distance from our estimate to the true location, and we deem the potential with the smallest q-quantile the q-optimal L_p potential for the particular noise.

To be more specific, the distance from our estimate to the true location (0,0) is simply ∥x_S∥; so from the distribution dist_p(x_S) we can derive the distribution function of ∥x_S∥, i.e.,

$$\Phi_p(r) = P\{ \| x_S \| \le r \}.$$

For a given system and noise distribution, the distribution function Φ p (r) is a function of the parameter p only. Therefore, the q-optimal value of p is defined as

$$p^\ast = \mathop{\arg\min}_p \; \Phi_p^{-1}(q),$$

where Φ_p^{-1}(q) is the inverse of the distribution function of ∥x_S∥.

Since the optimization problem (20) is non-convex, it is difficult to find the analytical form of dist p (x S ) and thus the q-optimal spring potential. Therefore we turn to numerical methods to investigate the q-optimal potential for different types of noise.

In the numerical experiments, N anchors are fixed uniformly on the unit circle, at coordinates (cos(2πi/N), sin(2πi/N)). A large number of rounds of simulation are then run to generate the distribution dist_p(x_S) for a specific p. In each round, the distance measurements between S and the anchors A_i, i = 1, …, N, are corrupted by the desired additive noise. To find the localization result of sensor S, i.e., the global minimum of the optimization problem (20), we search over a fine grid within the square circumscribed about the unit circle. The number of times each grid point attains the global minimum over all rounds is counted, and dist_p(x_S) is then represented by the frequency of “hits” received at each grid point.
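The Monte Carlo procedure above can be sketched as follows. This is an illustrative implementation under the stated setup (anchors on the unit circle, true sensor location at the origin, so each true range is 1), with all names our own:

```python
import numpy as np

def simulate_dist_p(p, N=10, sigma=0.2, grid=100, rounds=10_000, rng=None):
    """Monte Carlo estimate of dist_p(x_S) on a grid of candidate locations.

    Anchors sit at (cos(2*pi*i/N), sin(2*pi*i/N)); the sensor's true
    location is the origin, so each true range is 1.  Returns a
    (grid, grid) array of "hit" counts at the grid points.
    """
    rng = np.random.default_rng(rng)
    i = np.arange(N)
    anchors = np.column_stack((np.cos(2 * np.pi * i / N),
                               np.sin(2 * np.pi * i / N)))
    # Grid over the square circumscribed about the unit circle.
    xs = np.linspace(-1, 1, grid)
    X, Y = np.meshgrid(xs, xs)
    pts = np.column_stack((X.ravel(), Y.ravel()))            # (grid^2, 2)
    # Distance from every grid point to every anchor, computed once.
    d = np.linalg.norm(pts[:, None, :] - anchors[None, :, :], axis=2)
    hits = np.zeros(grid * grid)
    for _ in range(rounds):
        measured = 1.0 + rng.normal(0.0, sigma, size=N)      # noisy ranges
        cost = np.sum(np.abs(d - measured) ** p, axis=1)     # objective (20)
        hits[np.argmin(cost)] += 1                           # global minimum
    return hits.reshape(grid, grid)
```

Precomputing the grid-to-anchor distance matrix makes the per-round global-minimum search a single vectorized argmin.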

Figure 5 shows the setting and the resulting dist_p(x_S) for a particular run of the experiment with p = 2. In this experiment, the range measurements are corrupted by additive Gaussian noise with mean 0 and standard deviation 0.2. Ten anchors are used, and ten thousand rounds of simulation are run to generate the distribution dist_2(x_S); in each round, a 100-by-100 grid is searched for the global minimum of (20).

Figure 5

Illustration of the distribution dist_p(x_S). Setting and resulting dist_2(x_S) of a particular run of the experiment, where the range measurements are corrupted by additive Gaussian noise: ξ_i ∼ N(0, 0.2), i = 1, …, 10. Ten anchors are used and 10,000 rounds of simulations are run to generate dist_2(x_S). Left: anchor positions and color plot of “hits” received by each grid point; Right: 3D mesh plot of the left-hand graph, representing dist_2(x_S).

On the left-hand graph of Figure 5, ten dots on the unit circle show the positions of the anchors, while color represents the number of “hits” received by each grid point. The right-hand graph is a 3D mesh plot of the left-hand graph, which shows dist_2(x_S) under Gaussian noise.

Given dist_p(x_S) represented by the frequency of “hits” received at each grid point, the distribution function Φ_p(r) can be naturally interpreted as the frequency of “hits” received at the grid points located within radius r of the origin.
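Given such a hit-count grid, Φ_p(r) and the q-optimal p can be estimated empirically. The helpers below are an illustrative sketch; their names and the radius discretization are our own:

```python
import numpy as np

def phi_p(hits, r):
    """Empirical Phi_p(r): fraction of hits within radius r of the origin.

    hits: (grid, grid) array of counts over the square [-1, 1] x [-1, 1].
    """
    g = hits.shape[0]
    xs = np.linspace(-1, 1, g)
    X, Y = np.meshgrid(xs, xs)
    inside = np.hypot(X, Y) <= r
    return hits[inside].sum() / hits.sum()

def q_optimal_p(hits_by_p, q):
    """Pick the p whose distance distribution has the smallest q-quantile.

    hits_by_p: dict mapping each candidate p to its (grid, grid) hit counts.
    """
    radii = np.linspace(0, 1.5, 200)  # covers the whole square (max dist sqrt(2))
    def quantile(hits):
        cdf = np.array([phi_p(hits, r) for r in radii])
        return radii[np.searchsorted(cdf, q)]  # smallest r with Phi_p(r) >= q
    return min(hits_by_p, key=lambda p: quantile(hits_by_p[p]))
```

The empirical CDF is non-decreasing in r, so `searchsorted` returns the smallest radius whose coverage reaches the target quantile q.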

Figure 6 shows a group of Φ_p(r) curves for distributions dist_p(x_S) with various p values, where the x-axis represents the radius r. In this experiment, the range measurements are corrupted by additive Gaussian noise (left graph) and Laplacian noise (right graph), respectively, with mean 0 and standard deviation 0.2. The anchor locations are the same as in the previous simulation, and 10,000 rounds of simulations are run to generate Φ_p(r) for each p value under each type of noise.

Figure 6

Comparison of Φ_p(r) for distributions dist_p(x_S). Comparison of Φ_p(r) for distributions dist_p(x_S) with various p values, where the x-axis represents the radius r. Left: Φ_p(r) under Gaussian noise; Right: Φ_p(r) under Laplacian noise.

A distribution whose Φ_p(r) is close to the upper-left corner of the graph is desired, since it is spikier and demonstrates the robustness of the corresponding spring potential with respect to the specific type of noise. As shown in Figure 6, the spring potential with p = 2 works best under Gaussian noise, while a potential with p around 1.3 performs well under Laplacian noise.

We further compare the q-quantiles of the distributions, to find the q-optimal L_p potential as described above, under Gaussian and Laplacian noise. The simulation results are shown in Figure 7, where the x-axis represents the p values and the y-axis shows the radius r. The left- and right-hand graphs show the quantile comparison under Gaussian and Laplacian noise, respectively. The same simulation settings are used as in the previous experiment, and the 20%, 50%, 80%, and 90% quantiles of distributions with various p values are compared.

Figure 7

Comparison of quantiles for distributions dist_p(x_S). Comparison of quantiles for distributions dist_p(x_S), where the x-axis represents the p values and the y-axis shows the radius r of the circle. The 20%, 50%, 80%, and 90% quantiles of distributions with various p values are compared. Left: quantile comparison under Gaussian noise; Right: quantile comparison under Laplacian noise.

It is easy to read the q-optimal spring potential from Figure 7. For example, p = 2 is the 90%-optimal spring potential under Gaussian noise within the group of L_p potentials, while p = 1.3 is the 80%-optimal spring potential under Laplacian noise.

To conclude, the distributed spring model algorithm has the advantage that it can easily adapt to different types of noise, and the appropriate spring potential can be chosen according to the numerical scheme described in this section.

6 Customizing the potential for faster convergence

In the previous section, all the potential functions that we discussed are generic L_p norms. Different types of potential functions work for different types of noise. However, practical scenarios can be more complicated, and searching for the optimal value of p may not be enough. Moreover, as the system grows larger, with more sensors to localize, convergence speed becomes a more important measure of the algorithm’s performance.

6.1 Spring with “lock-in” mode

A commonly observed phenomenon in the spring model algorithm is that, even when part of the system has approximately reached the correct configuration, the soft springs and under-damped environment cause the system to continue oscillating before it eventually converges to a low-estimation-error state. This can significantly prolong the convergence time. Increasing the spring strength or the viscous coefficient of the environment may not be effective either, because these changes may cause the system to fall into local minima and converge prematurely.

To cure this problem, one simple idea is to design the springs so that they “lock in” when the actual distance between two sensors is approximately equal to the length of the spring. More specifically, we would like the strength coefficient to be significantly higher when the spring approaches its natural state. Then, once the sensors have approximately reached the correct configuration, the system will “harden up” to hold that configuration and avoid being disturbed into disorder again. If handled properly, the newly designed spring system can reach the correct configuration faster than the vanilla spring model algorithm.

This idea can also be viewed from another perspective: the incremental methods[4, 5] correspond to the case in which all springs are rigid (with infinite strength), so that every locally resolved configuration is kept unchanged. The concurrent method (e.g., the vanilla spring model) is the other extreme, in which nothing is kept fixed until the very end. The new spring model with “lock-in” mode lies in between: we make well-configured parts of the system more rigid than the ill-configured parts, so that we can reap the benefits of both approaches.

To make things simple and concrete, we take the new spring to be a Hooke spring (linear force), with different strengths for different values of Δd = ‖x_i − x_j‖ − d_ij. For |Δd| greater than a critical distance ρ, the spring is softer, with strength coefficient k_0; for |Δd| no greater than ρ, the spring is harder, with strength coefficient k_1, where k_1 > k_0. The force function is therefore:

$$f(x) = \begin{cases} k_1\, x, & |x| \le \rho,\\ k_0\, x, & |x| > \rho. \end{cases}$$

A typical potential function for the customized spring is shown in Figure 8. Note that there is a “potential well” of radius ρ: when |Δd| is no more than ρ, the two sensors are locked in by a strong bond.
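The customized force can be sketched in a few lines. The function name is our own; the default coefficients k_0 = 50, k_1 = 250, and ρ = 0.05 are taken from the simulation settings in Section 6.3:

```python
def lockin_force(delta, k0=50.0, k1=250.0, rho=0.05):
    """Customized Hooke force with a "lock-in" mode.

    delta: signed length error of the spring,
           delta = ||x_i - x_j|| - d_ij.
    Returns the restoring force k * delta, with the stiffer coefficient
    k1 inside the potential well |delta| <= rho and the softer k0
    outside it.
    """
    k = k1 if abs(delta) <= rho else k0
    return k * delta
```

Because the stiffness jumps from k_0 to k_1 exactly at |Δd| = ρ, small residual errors are pulled back much harder than large ones, which is what holds a well-configured pair of sensors in place.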

Figure 8

An example potential function with customized strength.

6.2 Numerical caveats

Ideally, the “lock-in” strength coefficient k_1 should be very high compared with the normal strength, so that the spring length is effectively fixed. However, since we evolve the spring model numerically with a fixed time step Δt, a very high spring strength can lead to numerical instability. Therefore, in our simulations, the lock-in spring strength is chosen to be 3 to 5 times the normal spring strength. Despite this moderate k_1 value, the customized spring potential proves effective in the simulation results below.

Meanwhile, the choice of ρ is also essential to fully exploit the benefits of the customized spring potential. Too small a value of ρ has almost no effect on computational efficiency, since it is then difficult for the springs to fall into the very narrow “lock-in” mode. Too large a ρ leads to premature locking of too many springs, which forces the system into an undesirable local minimum. In practice, a rule of thumb is to select ρ to be of the same order of magnitude as the noise level.

6.3 Simulation results

To demonstrate the efficacy of the spring model algorithm with “lock-in” mode, we compare the convergence speed of the following three cases:

  1. Hooke spring with soft strength k_0;

  2. Hooke spring with hard strength k_1;

  3. Customized spring potential with strength k_1 within ρ and k_0 beyond that range.

It is worth noting that dimension expansion is applied in all three cases.

The setup with long-range anchors discussed in Section 3 (shown in Figure 1) is employed: 800 normal sensors with communication radius 0.1, and four anchors with broadcast radius √2/2. We choose viscosity β = 1, critical radius ρ = 0.05, and k_1 = 5k_0 = 250. The three cases are compared under two scenarios: range measurements with no noise, and range measurements corrupted by additive Gaussian noise with mean 0 and standard deviation 0.1.
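As an illustration of the damped spring dynamics being simulated, a single-sensor update step might look like the sketch below. The explicit discretization and unit sensor mass are our assumptions (the article’s actual distributed update rule is given in Section 3), while β = 1, ρ = 0.05, and k_1 = 5k_0 = 250 match the settings above:

```python
import numpy as np

def spring_step(pos, vel, neighbors, d, dt=0.01, beta=1.0,
                k0=50.0, k1=250.0, rho=0.05):
    """One damped-dynamics update for a single sensor (illustrative sketch).

    pos, vel:  current position and velocity of the sensor.
    neighbors: (M, dim) positions of neighboring sensors.
    d:         length-M measured distances to those neighbors.
    """
    force = np.zeros_like(pos)
    for xj, dij in zip(neighbors, d):
        diff = pos - xj
        dist = np.linalg.norm(diff)
        delta = dist - dij                     # spring length error
        k = k1 if abs(delta) <= rho else k0    # lock-in stiffness
        force -= k * delta * diff / dist       # pull toward natural length
    force -= beta * vel                        # viscous damping
    vel = vel + dt * force                     # unit mass assumed
    pos = pos + dt * vel                       # explicit Euler position update
    return pos, vel
```

Each sensor repeats this step using only its neighbors’ positions, which is what makes the scheme fully distributed.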

Figure 9 illustrates the comparison for one particular run of the three cases. It is clear from the figure that the customized spring case converges at a much faster rate, with or without noise. In both situations, the customized spring potential reaches the same level of estimation error roughly 150 iterations sooner than its counterparts.

Figure 9

Comparison of convergence rates. Left: range measurements with no noise; Right: range measurements corrupted with Gaussian noise with zero mean and σ = 0.1.

More detailed simulation results are shown in Table 1, which records the average error rate over 100 independent simulations at iterations that are multiples of 100. The results in Table 1 demonstrate that the algorithm with the customized spring potential consistently outperforms the other two cases.

Table 1 Comparison of average convergence rates under different noise levels

Note that there are rises and falls in the aggregated localization error in Table 1 because of oscillations in the spring model. To better quantify the difference in convergence speed, we compute the integrated localization error of the entire run of the algorithm, i.e., the ratio between the total area under the convergence curve and the total number of iterations (800 in this experiment), and depict the results in Figure 10. Under all noise levels, the customized spring model significantly outperforms the other two cases: its average localization error throughout the convergence process is only about 60% of those of its counterparts.
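The integrated localization error (total area under the convergence curve divided by the number of iterations) can be computed from a per-iteration error trace. The function name and input format below are our own:

```python
import numpy as np

def integrated_error(error_curve):
    """Integrated localization error of one run.

    error_curve: per-iteration localization errors (length = number of
    iterations, 800 in the experiment above).  Returns the area under
    the convergence curve, estimated by the trapezoidal rule, divided
    by the number of iterations.
    """
    errors = np.asarray(error_curve, dtype=float)
    area = np.sum((errors[:-1] + errors[1:]) / 2.0)  # trapezoidal rule
    return area / len(errors)
```

A curve that drops quickly and stays low yields a small integrated error, so this single number rewards both fast convergence and low final error.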

Figure 10

Comparison of integrated convergence rates. Comparison of integrated convergence rate under Gaussian noise with standard deviation 0.00, 0.05, and 0.10.

It is worth mentioning that both extreme cases, with soft and hard springs, underperform the customized spring case, which is a compromise between the two types of springs. It is the combination of the two that effectively improves convergence speed without compromising the final localization error.

7 Conclusion

In this article, we have discussed several techniques to improve a spring-model-based algorithm that solves sensor localization problems in a fully distributed manner. Our prototype algorithm, with no prior knowledge of the coordinates of the sensors except a few anchor points, iteratively localizes the sensors using only short-range communication among neighboring sensors.

The first technique improves the prototype algorithm by expanding the two-dimensional problem into three dimensions. This dimension expansion technique greatly reduces the occurrence of the “folding” phenomenon and hence prevents the algorithm from falling into local minima.

Second, we have investigated the optimality of different types of spring potential functions under different types of noise. Using a simplified model, our simulations demonstrate that the L_p potential works best for Gaussian noise when p ≈ 2, while p ≈ 1.3 works best for Laplacian noise. This simulation-based conclusion sheds some light on spring-model selection under different noise conditions.

Third, we have discussed the customized design of a spring model beyond L_p potential functions to accelerate convergence. The customized spring model has higher strength when the distance between sensors is close to the true length of the spring. Extensive simulations demonstrate that this customized spring model can effectively “lock in” correctly localized sensors and hence make the iterative algorithm more efficient. From another perspective, this customized spring model can be viewed as a compromise between incremental and concurrent sensor localization algorithms.


  1. Franceschini F, Galetto M, Maisano DA, Mastrogiacomo L: A review of localization algorithms for distributed wireless sensor networks in manufacturing. Int. J. Comput. Integrat. Manufactur. 2009, 22(7):698-716.

  2. Patwari N, Ash J, Kyperountas S, Hero AO III, Moses R, Correal N: Locating the nodes: cooperative localization in wireless sensor networks. IEEE Signal Process. Mag. 2005, 22(4):54-69.

  3. Langendoen K, Reijers N: Distributed localization in wireless sensor networks: a quantitative comparison. Comput. Netw. 2003, 43(4):499-518.

  4. Savarese C, Rabaey JM, Langendoen K: Robust positioning algorithms for distributed ad-hoc wireless sensor networks. In Proceedings of the General Track of the USENIX Annual Technical Conference, ATEC ’02. Berkeley, CA, USA: USENIX Association; 2002:317-327.

  5. Moore D, Leonard J, Rus D, Teller S: Robust distributed network localization with noisy range measurements. In Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, SenSys ’04. New York: ACM; 2004:50-61.

  6. Ihler AT, Fisher JW III, Moses RL, Willsky AS: Nonparametric belief propagation for self-localization of sensor networks. IEEE J. Sel. Areas Commun. 2005, 23(4):809-819.

  7. Wymeersch H, Lien J, Win M: Cooperative localization in wireless networks. Proc. IEEE 2009, 97(2):427-450.

  8. Nguyen X, Jordan MI, Sinopoli B: A kernel-based learning approach to ad hoc sensor network localization. ACM Trans. Sen. Netw. 2005, 1:134-152.

  9. Tran D, Nguyen T: Localization in wireless sensor networks based on support vector machines. IEEE Trans. Parallel Distrib. Syst. 2008, 19(7):981-994.

  10. Dardari D, Conti A, Ferner U, Giorgetti A, Win M: Ranging with ultrawide bandwidth signals in multipath environments. Proc. IEEE 2009, 97(2):404-426.

  11. Chintalapudi K, Dhariwal A, Govindan R, Sukhatme G: Ad-hoc localization using ranging and sectoring. In Proceedings of INFOCOM 2004, the Twenty-third Annual Joint Conference of the IEEE Computer and Communications Societies. Hong Kong; 2004:2662-2672.

  12. Shang Y, Ruml W, Zhang Y, Fromherz MPJ: Localization from mere connectivity. In Proceedings of the 4th ACM International Symposium on Mobile Ad Hoc Networking & Computing, MobiHoc ’03. New York: ACM; 2003:201-212.

  13. Baggio A, Langendoen K: Monte Carlo localization for mobile wireless sensor networks. Ad Hoc Netw. 2008, 6(5):718-733.

  14. Liu J, Zhang Y, Zhao F: Robust distributed node localization with error management. In Proceedings of the 7th ACM International Symposium on Mobile Ad Hoc Networking and Computing, MobiHoc ’06. New York: ACM; 2006:250-261.

  15. Shen Y, Win M: Fundamental limits of wideband localization–Part I: a general framework. IEEE Trans. Inf. Theory 2010, 56(10):4956-4980.

  16. Shen Y, Wymeersch H, Win M: Fundamental limits of wideband localization–Part II: cooperative networks. IEEE Trans. Inf. Theory 2010, 56(10):4981-5000.

  17. Priyantha NB, Balakrishnan H, Demaine E, Teller S: Anchor-free distributed localization in sensor networks. In Proceedings of the 1st International Conference on Embedded Networked Sensor Systems, SenSys ’03. New York: ACM; 2003:340-341.

  18. Gotsman C, Koren Y: Distributed graph layout for sensor networks. In Graph Drawing, Lecture Notes in Computer Science. Berlin/Heidelberg: Springer; 2005:273-284.



Acknowledgements

This research was supported in part by the Office of Naval Research under Grant N00014-12-1-0767, by the Center for Science of Information (CSoI), an NSF Science and Technology Center, under Grant CCF-0939370, and by the U.S. Army Research Office under Grant W911NF-07-1-0185. This study was presented in part at the 4th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), San Juan, Puerto Rico, December 2011.

Author information



Corresponding author

Correspondence to Jieqi Yu.


Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Yu, J., Kulkarni, S.R. & Poor, H.V. Dimension expansion and customized spring potentials for sensor localization. EURASIP J. Adv. Signal Process. 2013, 20 (2013).
