
Cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging


The implementation challenges of cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging are discussed, and work on the subject is reviewed. System architecture and sensor fusion are identified as key challenges. A partially decentralized system architecture based on step-wise inertial navigation and step-wise dead reckoning is presented. This architecture is argued to reduce the computational cost and required communication bandwidth by around two orders of magnitude while only giving negligible information loss in comparison with a naive centralized implementation. This makes joint global state estimation feasible for up to a platoon-sized group of agents. Furthermore, robust and low-cost sensor fusion for the considered setup, based on state space transformation and marginalization, is presented. The transformation and marginalization are used to give the necessary flexibility for the presented sampling-based updates for the inter-agent ranging and for the ranging-free fusion of the two feet of an individual agent. Finally, the characteristics of the suggested implementation are demonstrated with simulations and a real-time system implementation.

1 Introduction

High-accuracy, robust, and infrastructure-free pedestrian localization is a highly desired capability for, among others, military, security personnel, and first responders. Localization and communication are key capabilities to achieve situational awareness and to support, manage, and automate the actions and interactions of individual agents or agent groups. See [1–8] for reviews on the subject. The fundamental information sources for the localization are proprioception, exteroception, and motion models. Without infrastructure, the exteroception must depend on prior or acquired knowledge about the environment [9]. Unfortunately, in general, little or no prior knowledge of the environment is available, and exploiting acquired knowledge without revisiting locations is difficult. Therefore, the localization should preferably rely primarily on proprioception and motion models. Proprioception can take place on the agent level, providing the possibility to perform dead reckoning, or on the inter-agent level, providing the means to perform cooperative localization. Pedestrian dead reckoning can be implemented in a number of different ways [10]. However, foot-mounted inertial navigation, with motion models providing zero-velocity updates, constitutes a unique, robust, and high-accuracy pedestrian dead reckoning capability [11–14]. With open-source implementations [15–17] and several products appearing on the market [18–21], dead reckoning by foot-mounted inertial sensors is a readily available technology. In turn, the most straightforward and well-studied inter-agent measurement, and means of cooperative localization, is ranging [22–25]. Here too, multiple (radio) ranging implementations are available in the research literature [26–30] and as products on the market [31–33]. Finally, suitable infrastructure-free communication equipment for inter-agent communication is available off-the-shelf, e.g., [34–37], and processing platforms are available in abundance.
Together, this suggests that the setup with foot-mounted inertial sensors and inter-agent ranging, as illustrated in Figure 1, is suitable as a base setup for any infrastructure-free pedestrian localization system. However, despite the mature components and the in-principle straightforward combination, cooperative localization with this sensor setup remains challenging: only a few similar systems can be found in the literature [38–42], and no off-the-shelf products are available.

Figure 1

Illustration of the considered localization setup. A group of agents are cooperatively localizing themselves without any infrastructure. For this purpose, each agent is equipped with dual foot-mounted inertial sensors, a ranging device, and a communication (com.) and processing (proc.) device.

The challenges of the localization setup lie in the system architecture and the sensor fusion. The inter-agent ranging and the lack of anchor nodes mean that some global state estimation is required, with a potentially prohibitively large computational cost. The global state estimation, the distributed measurements, and the (required) high sampling rates of the inertial sensors mean that potentially substantial communication is needed and that the system may be sensitive to incomplete or varying connectivity. The feet are poor placements for inter-agent ranging devices, and preferably inertial sensors are used on both feet, meaning that the sensors of an individual agent will not be collocated. This gives a high system state dimensionality, makes relating the sensory data from different sensors to each other difficult, and requires local communication links on each agent. Further, inter-agent ranging errors, as well as sensor separations, often have far from white, stationary, and Gaussian characteristics. Together, this makes fusing ranging and dead reckoning in a high-integrity and recursive Bayesian manner at a reasonable computational cost difficult.

Unfortunately, the mentioned challenges are inherent to the system setup. Therefore, they have to be addressed in any practical implementation. However, to our knowledge, the implementation issues have only been sparsely covered in isolation in the literature, and no complete satisfactory solution has been presented. Therefore, in this article, we present solutions to key challenges of the system setup and a complete localization system implementation. More specifically, the considered overall problem is tracking, i.e., recursively estimating, the positions of a group of agents with the equipment setup of Figure 1. The available measurements for the tracking are inertial measurements from the dual foot-mounted inertial sensors and inter-agent range measurements. The position tracking is illustrated in Figure 2. The measurements will be treated as localized to the respective sensors, and the necessary communication will be handled as an integral part of the overall problem. However, we will not consider specific communication technologies, but only communication constraints that naturally arise in the considered scenario (low bandwidth and varying connectivity). See [43–47] and references therein for treatments of related networking and communication technologies. Also, for brevity, the issues of initialization and time synchronization will not be considered. See [48, 49] for the solutions used in the system implementation.

Figure 2

Illustration of the localization estimation problem and the desired output from the localization system. The problem is to track (i.e., recursively estimate) the positions of multiple agents in three dimensions by inter-agent range measurements and inertial measurements from the foot-mounted inertial sensors.

To arrive at the key challenges and the solutions, the implementation challenges are first discussed in more detail in Section 2, and the related work is reviewed. Following this, we address the key challenges and present a cooperative localization system implementation based on dual foot-mounted inertial sensors and inter-agent ranging. The implementation is based on a partially decentralized system architecture, statistical marginalization, and sampling-based measurement updates. In Section 3, the architecture is presented and argued to reduce the computational cost and required communication by around two orders of magnitude, and to make the system robust to varying connectivity, while only giving negligible information loss. Thereafter, in Section 4, the sampling-based measurement updates, with the required state space transformation and marginalization, are presented and shown to give a robust and low-computational-cost sensor fusion. Subsequently, in Section 5, the characteristics of the suggested implementation are illustrated via simulations and a real-time system implementation. The cooperative localization is found to give a bounded relative position mean square error (MSE) and an absolute position MSE inversely proportional to the number of agents in the worst-case scenario, and a bounded position MSE in the best-case scenario. Finally, Section 6 concludes the article.

2 Implementation challenges

The lack of anchor nodes, the distributed nature of the system, the error characteristics of the different sensors, and the non-collocated sensors of individual agents pose a number of implementation challenges for the cooperative localization. Broadly speaking, these challenges can be divided into two groups: those related to designing an overall system architecture that minimizes the required communication and computational cost, while making the system robust to varying connectivity and retaining sufficient information about the coupling of non-collocated parts of the system; and those related to fusing the information of different parts of the system, given the constraints imposed by the system architecture and a finite computational power, while retaining performance and a high system integrity. In the following two subsections, these two overall challenges are discussed in more detail, and the related previous work is reviewed.

2.1 System architecture and state estimation

The system architecture has a strong connection to the position/state estimation and the required communication. The range of potential system architectures and estimation solutions goes from the completely decentralized, in which each agent only estimates its own states, to the completely centralized, in which all states of all agents are estimated jointly.

A completely decentralized architecture is often used in combination with inherently decentralized belief propagation estimation techniques [38, 50, 51]. The advantage of this is that it makes the localization scalable and robust to varying and incomplete connectivity between the agents. Unfortunately, belief propagation discards information about the coupling between agents, leading to reduced performance [51–54]. See [52] for an explicit treatment of the subject. Moreover, as will be shown in Section 5, in a system with dead reckoning, inter-agent ranging, and no anchor nodes, the errors in the position estimates of the different agents may become almost perfectly correlated. Consequently, discarding these couplings/correlations between agents can significantly deteriorate the localization performance and integrity.

In contrast, with a centralized architecture and estimation, all correlations can be considered, but instead the state dimensionalities of all the agents add up. Unfortunately, due to the lack of collocation of the sensors of the individual agents, the state dimensionality of each individual agent is high. Together, this means computationally expensive filter updates. Further, the distributed nature of the system means that information needs to be gathered to perform the sensor fusion. Therefore, communication links are needed, both locally on each agent and on the group level. Inter-agent communication links are naturally wireless. However, the foot-mounting of the inertial sensors makes cabled connections impractical, calling for battery powering and local wireless links for the sensors as well [55, 56]. Unfortunately, the expensive filter updates, the wireless communication links, and the battery powering combine poorly with the required high sampling rates of the inertial sensors. With an increasing number of agents, the computational cost and the required communication bandwidth will eventually become a problem. Moreover, an agent that loses contact with the fusion center cannot, unless state statistics are continually provided, easily carry on the estimation of its own states by itself. Also, to recover from an outage when the contact is restored, a significant amount of data would have to be stored, transferred, and processed.

Obviously, neither of the extreme cases, the completely decentralized and the completely centralized architectures, is acceptable. The related problems suggest that some degree of decentralization of the estimation is required to cope with the state dimensionality and communication problems, while some global bookkeeping is also required to handle the information coupling. Multiple approximate and exact distributed implementations of global state estimation have been demonstrated; see [54, 57–59] and references therein. However, these methods suffer from either a high computational cost or a requirement for guaranteed high-bandwidth communication, and they are not adapted to the considered sensor setup with high update rates, local communication links, and lack of sensor collocation. Therefore, in Section 3, we suggest and motivate a system architecture with partially decentralized estimation, based on a division of the foot-mounted inertial navigation into a step-wise inertial navigation and a step-wise dead reckoning. This architecture does not completely solve the computational cost issue but makes it manageable for up to a platoon-sized group of agents. For larger groups, some cellular structure is needed [39, 58]. However, the architecture is largely independent of how the global state estimation is implemented, and a distributed implementation is conceivable.

The idea of dividing the filtering is not completely new. A similar division is presented in an application-specific context in [60] and used to fuse data from foot-mounted inertial sensors with maps, or to build the maps themselves, in [61–63]. However, the described division is heuristically motivated, and the statistical relation between the different parts is not made clear. Also, no physical decentralization of the processing is exploited to reduce the communication requirements.

2.2 Robust and low computational cost sensor fusion

The sensor fusion firstly poses the problem of how to model the relation between the tracked inertial sensors and the range measurements. Secondly, it poses the problem of how to condition the state statistic estimates on provided information while retaining reasonable computational costs.

The easiest way of handling the non-collocated sensors of individual agents is to assume that they are collocated (or have a fixed relation) [38, 64–66]. While simple, this can clearly introduce modeling errors, resulting in suboptimal performance and questionable integrity. Instead, explicitly modeling the relative motion of the feet has been suggested in [67]. However, making an accurate and general model of human motion is difficult, to say the least. As an alternative, multiple publications suggest explicitly measuring the relation between the sensors [14, 68–70]. The added information can improve the localization performance but unfortunately introduces the need for additional hardware and measurement models. Also, it works best in situations with line-of-sight between the measurement points, and therefore it is probably only a viable solution for foot-to-foot ranging on clear, not too rough, and vegetation/obstacle-free ground [71]. Instead of modeling or measuring the relation between the navigation points of an individual agent, the constraint that the spatial separation between them has an upper limit may be used. This side information obviously has an almost perfect integrity, and results in [72] indicate that the performance loss in comparison with ranging is transitory. For inertial navigation, it has been demonstrated that a range constraint can be used to fuse the information from two foot-mounted systems while only propagating the mean and the covariance [73, 74]. Unfortunately, the suggested methods depend on numerical solvers and only apply the constraint to the mean, giving questionable statistical properties. Therefore, in Section 4, based on the work in [72], we suggest a simpler and numerically more attractive solution for using range constraints to perform the sensor fusion, based on marginalization and sampling.

The naive approach to the sensor fusion of the foot-mounted inertial navigation and the inter-agent ranging is simply to use traditional Kalman filter measurement updates for the ranging [38]. However, radio ranging errors are often far from Gaussian, typically with heavy tails and non-stationary and spatially correlated errors [75–80]. This can cause unexpected behavior in many localization algorithms, and therefore statistically more robust methods are desirable [79–81]. See [82] and references therein for a general treatment of the statistical robustness concept. The heavy tails and spatially correlated errors could potentially be handled by a flat likelihood function, as suggested in [75, 83]. However, while giving a high integrity, this also ignores a substantial amount of information and requires multi-hypothesis filtering (a particle filter) with an unacceptably high computational cost. Using a more informative likelihood function is not hard to imagine. Unfortunately, only a small set of likelihood functions can easily be used without resorting to multi-hypothesis filtering methods. Some low-cost fusion techniques for special classes of heavy-tailed distributions and for H∞ criteria have been suggested in the literature [84–88]. However, ideally, we would like more flexibility to handle measurement errors and non-collocated sensors. Therefore, in Section 4, we propose a marginalization- and sample-based measurement update for the inter-agent ranging, providing the necessary flexibility to handle an arbitrary likelihood function. A suitable likelihood function is proposed, taking the lack of collocation, statistical robustness, and correlated errors into account, and is shown to provide a robust and low-computational-cost sensor fusion.
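The general mechanism of such a marginalization- and sample-based ranging update can be sketched as follows. This is a minimal illustration, not the article's method: it assumes the relevant position marginal is Gaussian and uses an illustrative Cauchy-shaped likelihood; the function name, the sample size, and the scale `gamma` are placeholders.

```python
import numpy as np

def robust_range_update(mu, P, r_meas, rng, n=2000, gamma=0.5):
    """Sample-based conditioning of a position marginal on an inter-agent
    range measurement. mu, P: mean and covariance of the stacked positions
    [p_a, p_b] of the two ranging devices, marginalized out of the full
    state. The Cauchy-shaped likelihood and its scale gamma are
    illustrative robust choices, not the article's likelihood."""
    samples = rng.multivariate_normal(mu, P, size=n)
    ranges = np.linalg.norm(samples[:, 0:3] - samples[:, 3:6], axis=1)
    # Heavy-tailed weight: downweights, rather than rejects, outliers.
    w = 1.0 / (1.0 + ((ranges - r_meas) / gamma) ** 2)
    w /= w.sum()
    # Refit a Gaussian (mean and covariance) to the weighted samples;
    # this conditioned marginal can then be mapped back into the joint
    # state estimate.
    mu_post = w @ samples
    diff = samples - mu_post
    P_post = (w[:, None] * diff).T @ diff
    return mu_post, P_post
```

Because the likelihood only enters through the sample weights, it can be replaced by an arbitrary (e.g., flat-topped or correlated-error-aware) function without changing the filter structure.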

3 Decentralized estimation architecture

To get around the problems of the centralized architecture, the state estimation somehow needs to be partially decentralized. However, as previously argued, some global state estimation is necessary. Consequently, the challenge is to perform the decentralization in a way that does not lead to an unacceptable loss of information coupling, and hence poor performance and integrity, while still solving the issues of computational cost, communication bandwidth, and robustness to varying connectivity. In the following subsections, it is shown how this can be achieved by dividing the filtering associated with the foot-mounted inertial sensors into a step-wise inertial navigation and a step-wise dead reckoning. Pseudo-code for the related processing is found in Algorithm 1.

3.1 Zero-velocity-update-aided inertial navigation

To track the position of an agent equipped with foot-mounted inertial sensors, the sensors are used to implement an inertial navigation system aided by so-called zero-velocity updates (ZUPTs). The inertial navigation essentially consists of the inertial sensors combined with mechanization equations. In the simplest form, the mechanization equations are

\[
\begin{bmatrix} p_k \\ v_k \\ q_k \end{bmatrix}
=
\begin{bmatrix}
p_{k-1} + v_{k-1}\,dt \\
v_{k-1} + \bigl(q_{k-1}\, f_k\, q_{k-1}^{*} - g\bigr)\,dt \\
\Omega(\omega_k\, dt)\, q_{k-1}
\end{bmatrix},
\qquad (1)
\]

where $k$ is a time index, $dt$ is the time difference between measurement instances, $p_k$ is the position, $v_k$ is the velocity, $f_k$ is the specific force, $g = [0, 0, g]$ is the gravity, and $\omega_k$ is the angular rate (all in $\mathbb{R}^3$). Further, $q_k$ is the quaternion describing the orientation of the system, the triple product $q_{k-1} f_k q_{k-1}^{*}$ denotes the rotation of $f_k$ by $q_{k-1}$, and $\Omega(\cdot)$ is the quaternion update matrix. For a detailed treatment of inertial navigation, see [89, 90]. For analytical convenience, we will interchangeably represent the orientation $q_k$ with the equivalent Euler angles (roll, pitch, yaw) $\theta_k = [\phi_k, \theta_k, \psi_k]$. Note that $[\cdot,\ldots]$ is used to denote a column vector.
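As a concrete illustration, one propagation step of (1) might be sketched as follows. This is not the article's implementation: the z-up navigation frame, the Hamilton $[w, x, y, z]$ quaternion convention, and the first-order quaternion increment standing in for $\Omega(\omega_k\,dt)$ are assumptions.

```python
import numpy as np

def quat_mult(a, b):
    """Hamilton product of quaternions in [w, x, y, z] convention."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(q, v):
    """Triple product q v q*: rotate a body-frame vector v into the
    navigation frame."""
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mult(quat_mult(q, qv), q_conj)[1:]

def mechanize(p, v, q, f, w, dt, g=np.array([0.0, 0.0, 9.81])):
    """One step of the mechanization equations (1). A first-order
    quaternion increment stands in for Omega(w*dt); the multiplication
    order assumes body-frame angular rates (conventions vary)."""
    p_new = p + v * dt
    v_new = v + (rotate(q, f) - g) * dt
    dq = np.concatenate(([1.0], 0.5 * w * dt))
    q_new = quat_mult(q, dq)
    return p_new, v_new, q_new / np.linalg.norm(q_new)
```

At rest with a level orientation, the measured specific force cancels gravity and the state is unchanged, which is the property the ZUPTs below exploit.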

The mechanization equations (1), together with measurements of the specific force $\tilde{f}_k$ and the angular rates $\tilde{\omega}_k$ provided by the inertial sensors, are used to propagate position $\hat{p}_k$, velocity $\hat{v}_k$, and orientation $\hat{q}_k$ state estimates. Unfortunately, due to the integrative nature of (1), small measurement errors in $\tilde{f}_k$ and $\tilde{\omega}_k$ accumulate, giving rapidly growing estimation errors. Fortunately, these errors can be modeled and estimated with ZUPTs. A first-order error model of (1) is given by

\[
\begin{bmatrix} \delta p_k \\ \delta v_k \\ \delta\theta_k \end{bmatrix}
=
\begin{bmatrix}
I & I\,dt & 0 \\
0 & I & [q_{k-1} f_k q_{k-1}^{*}]_{\times}\,dt \\
0 & 0 & I
\end{bmatrix}
\begin{bmatrix} \delta p_{k-1} \\ \delta v_{k-1} \\ \delta\theta_{k-1} \end{bmatrix},
\qquad (2)
\]

where $\delta(\cdot)_k$ denotes the error states, $I$ and $0$ are the $3 \times 3$ identity and zero matrices, respectively, and $[\cdot]_{\times}$ is the cross-product matrix. As argued in [91], one should be cautious about estimating systematic sensor errors in the current setup. Indeed, remarkable dead reckoning performance has been demonstrated exploiting dual foot-mounted sensors without any sensor error state estimation [92]. Therefore, in contrast to many publications, no additional sensor bias states are used.
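With no bias states, the time update reduces to propagating a 9-state error covariance with the transition matrix of (2). A sketch, where the construction of the process noise `Q` from the inertial error models is left as an assumption:

```python
import numpy as np

def cross_mat(a):
    """Cross-product matrix [a]x."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def error_transition(f_nav, dt):
    """State transition matrix of the first-order error model (2);
    f_nav is the specific force rotated into the navigation frame,
    i.e., the triple product appearing in (2)."""
    I3, Z3 = np.eye(3), np.zeros((3, 3))
    return np.block([[I3, I3 * dt, Z3],
                     [Z3, I3, cross_mat(f_nav) * dt],
                     [Z3, Z3, I3]])

def propagate_cov(P, F, Q):
    """Time update of the 9x9 error-state covariance, P <- F P F' + Q,
    with Q derived from the statistical models of the errors in the
    inertial measurements."""
    return F @ P @ F.T + Q
```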

Together with statistical models for the errors in $\tilde{f}_k$ and $\tilde{\omega}_k$, (2) is used to propagate the statistics of the error states. To estimate the error states, stationary time instances are detected based on the condition $Z(\{\tilde{f}_\kappa, \tilde{\omega}_\kappa\}_{W_k}) < \gamma_Z$, where $Z(\cdot)$ is some zero-velocity test statistic, $\{\tilde{f}_\kappa, \tilde{\omega}_\kappa\}_{W_k}$ are the inertial measurements over some time window $W_k$, and $\gamma_Z$ is a zero-velocity detection threshold. See [93, 94] for further details. The implied zero-velocities are used as pseudo-measurements

\[
\tilde{y}_k = \hat{v}_k \quad \forall\, k : Z\bigl(\{\tilde{f}_\kappa, \tilde{\omega}_\kappa\}_{W_k}\bigr) < \gamma_Z,
\qquad (3)
\]

which are modeled in terms of the error states as

\[
\tilde{y}_k = H \begin{bmatrix} \delta p_k \\ \delta v_k \\ \delta\theta_k \end{bmatrix} + n_k,
\qquad (4)
\]

where $H = [0\;\, I\;\, 0]$ is the measurement matrix and $n_k$ is a measurement noise, i.e., $\tilde{y}_k = \delta v_k + n_k$. A similar detector is also used to lock the system when it is completely stationary. See [95] for further details. Given the error model (2) and the measurement model (4), the measurements (3) can be used to estimate the error states with a Kalman-type filter. See [11, 94, 96, 97] for further details and variations, and [98] for a general treatment of aided navigation. Since there is no reason to propagate errors, as soon as there are any non-zero estimates $\delta\hat{p}_k$, $\delta\hat{v}_k$, or $\delta\hat{\theta}_k$, they are fed back to correct the navigational states,

\[
\begin{bmatrix} \hat{p}_k \\ \hat{v}_k \end{bmatrix}
:= \begin{bmatrix} \hat{p}_k \\ \hat{v}_k \end{bmatrix}
+ \begin{bmatrix} \delta\hat{p}_k \\ \delta\hat{v}_k \end{bmatrix}
\quad\text{and}\quad
\hat{q}_k := \Omega(\delta\hat{\theta}_k)\, \hat{q}_k,
\qquad (5)
\]

and consequently the error state estimates are set to zero, i.e., $\delta\hat{p}_k := 0_{3\times 1}$, $\delta\hat{v}_k := 0_{3\times 1}$, and $\delta\hat{\theta}_k := 0_{3\times 1}$, where $:=$ denotes assignment.
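A single ZUPT measurement update with the feedback of (5) can be sketched as below. Sign conventions for error states vary between references; this sketch takes the error as truth minus estimate, so the innovation of the zero-velocity pseudo-measurement is $-\hat{v}_k$, and the noise level `sigma_v` is a placeholder, not a value from the article.

```python
import numpy as np

def _qmult(a, b):
    """Hamilton quaternion product, [w, x, y, z] convention."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def zupt_update(p, v, q, P, sigma_v=0.01):
    """Kalman measurement update for a detected zero-velocity instance,
    followed by the feedback correction. Error states are ordered
    [dp, dv, dtheta]."""
    H = np.hstack([np.zeros((3, 3)), np.eye(3), np.zeros((3, 3))])
    R = sigma_v**2 * np.eye(3)                  # cov of n_k
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    dx = K @ (-v)                               # [dp_hat, dv_hat, dtheta_hat]
    P = (np.eye(9) - K @ H) @ P
    # Feedback: correct the navigation states; the error estimates are
    # thereby implicitly reset to zero (small-angle attitude correction).
    p = p + dx[0:3]
    v = v + dx[3:6]
    dq = np.concatenate(([1.0], 0.5 * dx[6:9]))
    q = _qmult(dq, q)
    return p, v, q / np.linalg.norm(q), P
```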

Unfortunately, not all (error) states are observable from the ZUPTs. During periods of consecutive ZUPTs, the system (2) becomes essentially linear and time invariant. Zero velocity for consecutive time instances means no acceleration, and ideally $f_k = q_k^{*} g\, q_k$. This gives the system and observability matrices

\[
F = \begin{bmatrix}
I & I\,dt & 0 \\
0 & I & [g]_{\times}\,dt \\
0 & 0 & I
\end{bmatrix}
\quad\text{and}\quad
\begin{bmatrix} H \\ HF \\ HF^2 \end{bmatrix}
= \begin{bmatrix}
0 & I & 0 \\
0 & I & [g]_{\times}\,dt \\
0 & I & 2[g]_{\times}\,dt
\end{bmatrix}.
\]

Obviously, the position (error) is not observable, while the velocity is. Since

\[
[g]_{\times} =
\begin{bmatrix}
0 & g & 0 \\
-g & 0 & 0 \\
0 & 0 & 0
\end{bmatrix},
\]

the roll and pitch are observable, while the heading (yaw) of the system is not. Ignoring the process noise, this implies that the covariances of the observable states decay as one over the number of consecutive ZUPTs. Note that there is no difference between the covariances of the error states and the states themselves. Consequently, during standstill, after a reasonable number of ZUPTs, the state estimate covariance becomes

\[
\mathrm{cov}(\hat{p}_k, \hat{v}_k, \hat{\theta}_k) \approx
\begin{bmatrix}
P_{p_k} & 0_{3\times 5} & P_{p_k,\psi_k} \\
0_{5\times 3} & 0_{5\times 5} & 0_{5\times 1} \\
P_{p_k,\psi_k}^{\top} & 0_{1\times 5} & P_{\psi_k,\psi_k}
\end{bmatrix},
\qquad (6)
\]

where $P_{x,y} = \mathrm{cov}(x, y)$, $P_x = \mathrm{cov}(x) = \mathrm{cov}(x, x)$, $(\cdot)^{\top}$ denotes the transpose, and $0_{n\times m}$ denotes a zero matrix of size $n \times m$.

3.2 Step-wise dead reckoning

The covariance matrix (6) tells us that the errors of $\hat{p}_k$ and $\hat{\psi}_k$ are uncorrelated with those of $\hat{v}_k$ and $[\hat{\phi}_k, \hat{\theta}_k]$. Together with the Markovian assumption of the state space models and the translational and in-plane rotational invariance of (1) to (4), this means that future errors of $\hat{v}_k$ and $[\hat{\phi}_k, \hat{\theta}_k]$ are uncorrelated with those of the current $\hat{p}_k$ and $\hat{\psi}_k$. Consequently, future ZUPTs cannot be used to deduce information about the current position and heading errors. In turn, this means that, considering only the ZUPTs, it makes no difference if we reset the system and add the new relative position and heading to those before the reset. However, for other information sources, we must keep track of the global (total) error covariance of the position and heading estimates.

Resetting the system means setting the position $\hat{p}_k$ and heading $\hat{\psi}_k$, and the corresponding covariances, to zero. Denote the position and heading estimates at a reset by $d_{p_\ell}$ and $d_{\psi_\ell}$, where $\ell$ is the step (reset) index. These values can be used to drive the step-wise dead reckoning

\[
\begin{bmatrix} x_\ell \\ \chi_\ell \end{bmatrix}
= \begin{bmatrix} x_{\ell-1} \\ \chi_{\ell-1} \end{bmatrix}
+ \begin{bmatrix} R_{\ell-1}\, d_{p_\ell} \\ d_{\psi_\ell} \end{bmatrix}
+ w_\ell,
\qquad (7)
\]

where $x_\ell$ and $\chi_\ell$ are the global position in three dimensions and the heading in the horizontal plane of the inertial navigation system relative to the navigation frame,

\[
R_{\ell} =
\begin{bmatrix}
\cos(\chi_\ell) & -\sin(\chi_\ell) & 0 \\
\sin(\chi_\ell) & \cos(\chi_\ell) & 0 \\
0 & 0 & 1
\end{bmatrix}
\]

is the rotation matrix from the local coordinate frame of the last reset to the navigation frame, and $w_\ell$ is (by assumption) white noise with covariance

\[
\mathrm{cov}(w_\ell) = \mathrm{cov}\bigl([R_{\ell-1}\, d_{p_\ell},\; d_{\psi_\ell}]\bigr)
=
\begin{bmatrix}
R_{\ell-1} P_{p_\ell} R_{\ell-1}^{\top} & R_{\ell-1} P_{p_\ell,\psi_\ell} \\
P_{p_\ell,\psi_\ell}^{\top} R_{\ell-1}^{\top} & P_{\psi_\ell,\psi_\ell}
\end{bmatrix}.
\qquad (8)
\]

The noise $w_\ell$ in (7) represents the accumulated uncertainty in position and heading since the last reset, i.e., the essentially non-zero elements in (6) transformed to the navigation frame. The dead reckoning (7) can trivially be used to estimate $x_\ell$ and $\chi_\ell$, and their error covariances, from $d_{p_\ell}$ and $d_{\psi_\ell}$ and the related covariances. The relation between the step-wise inertial navigation and the step-wise dead reckoning is illustrated in Figure 3.
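One step of (7) with its covariance bookkeeping can be sketched as follows. This is a minimal sketch that treats $w_\ell$ as additive white noise, exactly as (7) states; a full implementation might additionally account for the coupling of the accumulated heading error into future rotated displacements.

```python
import numpy as np

def rot_z(chi):
    """Rotation matrix from the frame of the last reset to the
    navigation frame (the R matrix of the text)."""
    c, s = np.cos(chi), np.sin(chi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def dead_reckon_step(x, chi, P, d_p, d_psi, P_p, P_p_psi, P_psi):
    """One step of the step-wise dead reckoning (7). x, chi: global
    position and heading; P: their 4x4 covariance; d_p, d_psi and the
    P_* arguments: the per-step displacement/heading change and
    covariances delivered by the step-wise inertial navigation."""
    R = rot_z(chi)
    x_new = x + R @ d_p
    chi_new = chi + d_psi
    # Step noise covariance cov(w): the per-step uncertainty rotated
    # into the navigation frame.
    Q = np.zeros((4, 4))
    Q[0:3, 0:3] = R @ P_p @ R.T
    Q[0:3, 3] = R @ P_p_psi
    Q[3, 0:3] = Q[0:3, 3]
    Q[3, 3] = P_psi
    return x_new, chi_new, P + Q
```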

Figure 3

Illustration of the step-wise inertial navigation and the step-wise dead reckoning. The displacement and heading change over a step given by the inertial navigation is used to perform the step-wise dead reckoning.

To get $[d_{p_\ell}, d_{\psi_\ell}]$ from the inertial navigation, reset instances need to be determined, i.e., the decoupled situation (6) needs to be detected. However, detecting it is not enough. If it holds for one time instance $k$, it is likely to hold for the next time instance, and resetting at nearby time instances is not desirable. Instead, we want to reset once at every step, or at some regular interval if the system is stationary for a longer period of time. The latter requirement is necessary to distinguish between extended stationary periods and extended dynamic periods. Further, to allow for real-time processing, the detection needs to be done in a recursive manner. The longer the stationary period, the smaller the cross-coupling terms in (6). This means that the system should be reset as late as possible in a stationary period. However, if the stationary period is too short, we may not want to reset at all, since then the cross-terms in (6) may not have converged.

In summary, the necessary conditions for a reset are a sufficiently low cross-coupling and a minimum elapsed time since the last reset. If these hold, there is a pending reset. In principle, the cross-coupling terms in (6) should be used to determine the first requirement. However, in practice, all elements fall off together, and a threshold $\gamma_p$ on, e.g., the covariance of the first velocity component can be used. To assess the second requirement, a counter $c_p$, which is incremented at each time instance, is needed, giving the pending reset condition

\[
(P_{v_{x,k}} < \gamma_p) \wedge (c_p > c_{\min}),
\qquad (9)
\]

where $c_{\min}$ is the minimum number of samples between resets. A pending reset is to be performed if the stationary period comes to an end or if a maximum time with a pending reset has elapsed. To assess the latter condition, a counter $c_d$ is needed, which is incremented while (9) holds. A reset is then performed if

\[
Z\bigl(\{\tilde{f}_\kappa, \tilde{\omega}_\kappa\}_{W_k}\bigr) \ge \gamma_Z \;\vee\; (c_d > c_{\max}),
\qquad (10)
\]

where $c_{\max}$ is the maximum number of samples of a pending reset. Together, (9) and (10) make up the sufficient conditions for a reset. When a reset is performed, the counters are reset, $c_p := 0$ and $c_d := 0$. This gives a recursive step segmentation. Pseudo-code for the inertial navigation with recursive step segmentation (i.e., step-wise inertial navigation) and the step-wise dead reckoning is found in Algorithm 1.
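The recursive reset logic of (9) and (10) can be sketched as a small per-sample update; the threshold values are placeholders, not values from the article.

```python
def update_segmentation(P_vx, zupt_stat, c_p, c_d,
                        gamma_p=1e-4, gamma_Z=30.0,
                        c_min=100, c_max=300):
    """One recursion of the step segmentation, implementing the pending
    reset condition (9) and the reset condition (10). P_vx is the
    covariance of the first velocity component, zupt_stat the
    zero-velocity test statistic Z(.). Returns (do_reset, c_p, c_d);
    all threshold defaults are illustrative placeholders."""
    c_p += 1
    pending = (P_vx < gamma_p) and (c_p > c_min)                   # (9)
    if pending:
        c_d += 1
    # Perform a pending reset when the stationary period ends or the
    # pending time runs out.
    do_reset = pending and (zupt_stat >= gamma_Z or c_d > c_max)   # (10)
    if do_reset:
        c_p, c_d = 0, 0
    return do_reset, c_p, c_d
```

The caller invokes this once per sample and, on `do_reset`, emits $[d_{p_\ell}, d_{\psi_\ell}]$ and zeroes the position, heading, and corresponding covariances.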

In order not to lose performance in comparison with a sensor fusion approach based on centralized estimation, the step-wise inertial navigation combined with the step-wise dead reckoning needs to reproduce the same state statistics (mean and covariance) as those of the indefinite (no-reset) ZUPT-aided inertial navigation. If the models (1), (2), and (7) had been linear with Gaussian noise, and the cross-coupling terms of (6) were perfectly zero, then the divided filtering would reproduce the full filter behavior perfectly. Unfortunately, they are not. However, as shown by the example trajectory in Figure 4, in practice, the differences are marginal, and the mean and covariance estimates of the position and heading can be reproduced from only $[d_{p_\ell}, d_{\psi_\ell}]$ and the corresponding covariances. Due to linearization and modeling errors of the ZUPTs, the step-wise dead reckoning can even be expected to improve performance, since it confines these effects to single steps [91, 99]. Indeed, resetting appropriate covariance elements (which has effects similar to performing the step-wise dead reckoning) has empirically been found to improve performance [100].

Figure 4

Comparison of step-wise dead reckoning and indefinite ZUPT-aided inertial navigation. The plots show the trajectory (top), the position error covariances (middle), and the covariance between the position and heading errors (bottom) as estimated by an extended Kalman filter-based indefinite ZUPT-aided inertial navigation (solid lines) and a step-wise inertial navigation and dead reckoning (crossed lines). The disagreement between the systems is far below the accuracy and integrity of the former system.

3.3 Physical decentralization of state estimation

The step-wise inertial navigation and dead reckoning described in Algorithm 1 can be used to implement a decentralized architecture and state estimation. The ranging, as well as most additional information, depends only on position, not on the full state vector $[p_k, v_k, \theta_k]$. Further, as argued in the previous subsection, the errors of $\hat{v}_k$ and $[\hat{\phi}_k, \hat{\theta}_k]$ are weakly correlated with those of $\hat{p}_k$ and $\hat{\psi}_k$. Therefore, only the states $[x_\ell, \chi_\ell]$ (for all feet) have to be estimated jointly, and only line 19 needs to be executed centrally. The step-wise inertial navigation, i.e., Algorithm 1 apart from line 19, can be implemented locally in the foot-mounted units, and thereby only $[d_{p_\ell}, d_{\psi_\ell}]$ and the related covariances need to be transmitted from the feet. This way, the required communication is significantly lower than in the case in which all inertial data would have to be transmitted. Also, since the computational cost of propagating (7) is marginal, this can be done both locally on the processing device of each agent and in the global state estimation. This way, if an agent loses contact with whoever performs the global state estimation, it can still perform the dead reckoning and thereby keep an estimate of where it is. Since the amount of data in the displacement and heading changes is small, if contact is reestablished, all data can easily be transferred, and the agent's states in the global state estimation updated. The other way around, if corrections to the estimates of $[x_\ell, \chi_\ell]$ are made in the central state estimation, these corrections can be transferred down to the agent. Since the recursion in (7) is pure dead reckoning (no statistical conditioning), the corrections can directly be used to correct the local estimates of $[x_\ell, \chi_\ell]$. This way, the local and the global estimates can be kept consistent.

The straightforward way of implementing the global state estimation is with one (or multiple) central fusion center, to which all dead reckoning data are transmitted (potentially by broadcasting). The fusion center may be carried by an agent or reside in a vehicle or something similar. Range measurements relative to other agents are only meaningful if the corresponding position estimates and their statistics are known. Therefore, all ranging information is transferred to the central fusion center. This processing architecture, with its three layers of estimation (foot, processing device of the agent, and common fusion center), is illustrated in Figure 5. However, the division into step-wise inertial navigation and dead reckoning is independent of the structure with a fusion center, and some decentralized global state estimation could potentially be used.

Figure 5

Comparison of step-wise dead reckoning and indefinite ZUPT-aided inertial navigation. The plots show the trajectory (top), the position error covariances (middle), and the covariance between the position and heading errors (bottom) as estimated by an extended Kalman filter-based indefinite ZUPT-aided inertial navigation (solid lines) and a step-wise inertial navigation and dead reckoning (crossed lines). The disagreement between the systems is far below the accuracy and integrity of the former system.

3.4 Computational cost and required communication

The step-wise dead reckoning is primarily motivated and justified by the reduction in computational cost and required communication bandwidth. With a completely centralized sensor fusion, [f̃_k, ω̃_k], in total six measurement values, need to be transferred to the central fusion center at a sampling rate f_IMU in the order of hundreds of Hz, with each measurement value typically consisting of some 12 to 16 bits. With the step-wise dead reckoning, [d_p, d_ψ], P_p, P_{p,ψ}, and P_{ψ,ψ}, in total 14 values, need to be transferred to the central fusion center at a rate of f_sw ≈ 1 Hz (normal gait frequency [101]). In practice, the 14 values can be reduced to 8 values, since cross-covariances may be ignored, and the numerical ranges are such that each value can reasonably be fitted in some 12 to 16 bits. In the other direction, some four correction values need to be transferred back to the agent. Together, this gives a required-communication ratio of (6 · f_IMU)/(12 · f_sw) ≈ 10², a two-order-of-magnitude reduction. In turn, the computational cost scales linearly with the update rates f_IMU and f_sw. In addition, the computational cost has a cubic scaling (for covariance-based filters) with the state dimensionality. Therefore, the reduction in the computational cost at the central fusion center is at most f_IMU/f_sw · (9/4)³ ≈ 10³. However, at higher update rates, updates may be bundled together. Consequently, a somewhat lower reduction may be expected in practice, again giving a reduction of around two orders of magnitude.
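The bandwidth and computation figures above can be checked with simple arithmetic. The sketch below uses an assumed representative IMU rate of 250 Hz for the "hundreds of Hz" quoted in the text.

```python
# Back-of-the-envelope check of the bandwidth and computation figures
# quoted in the text. Assumed values: f_imu = 250 Hz as a representative
# rate in the "hundreds of Hz" range; f_sw = 1 Hz (normal gait frequency).
f_imu = 250.0
f_sw = 1.0

values_centralized = 6    # [f, w]: 3 specific forces + 3 angular rates per sample
values_up = 8             # displacement, heading change, and covariances
                          # (cross-covariances ignored, 14 -> 8 values)
values_down = 4           # correction values sent back to the agent

bandwidth_ratio = (values_centralized * f_imu) / ((values_up + values_down) * f_sw)
# Covariance-based filters scale cubically with the state dimension:
# 9 states ([p, v, theta]) versus 4 states ([x, chi]).
compute_ratio = (f_imu / f_sw) * (9 / 4) ** 3

print(round(bandwidth_ratio))  # 125, i.e., around two orders of magnitude
print(round(compute_ratio))    # 2848, i.e., up to three orders of magnitude
```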

4 Robust and low-cost sensor fusion

The step-wise dead reckoning provides a low-dimensional and low-update rate interface to the foot-mounted inertial navigation. With this interface, the global state of the localization system (the system state as conceived by the global state estimation) becomes

\[
x = [\, x_\alpha, \chi_\alpha, x_\beta, \chi_\beta, x_\zeta, \chi_\zeta, \ldots ]\,,
\]

where x j and χ j are the positions and headings of the agents’ feet with dropped time indices. Other auxiliary states may also be put in the state vector. Our desire is to fuse the provided dead reckoning with that of the other foot and that of the other agents via inter-agent ranging. This fusion is primarily challenging because of (1) the high dimensionality of the global system, (2) the non-collocated sensors of individual agents, and (3) the potentially malign error characteristic of the ranging. The high dimensionality is tackled by only propagating the mean and covariance estimates and by marginalization of the state space. The lack of collocation is handled by imposing range constraints between sensors. Finally, the error characteristic of the ranging is handled by sampling-based updates. In the following subsections, these approaches are described. The pseudo-code for the sensor fusion is found in Algorithms 2 and 3 in the final subsection.

4.1 Marginalization

New information (e.g., range measurements) introduced in the system is only dependent on a small subset of the states. Assume that the state vector can be decomposed as z = [z_1, z_2], such that some introduced information π is only dependent on z_1. Then, with a Gaussian prior with mean and covariance,

\[
\hat{z} = \begin{bmatrix} \hat{z}_1 \\ \hat{z}_2 \end{bmatrix}
\quad\text{and}\quad
P_z = \begin{bmatrix} P_{z_1} & P_{z_1 z_2} \\ P_{z_1 z_2}^\top & P_{z_2} \end{bmatrix},
\tag{11}
\]

the conditional (with respect to π) mean of z 2 and the conditional covariance can be expressed as[72]

\[
\begin{aligned}
\hat{z}_{2|\pi} &= V + U \hat{z}_{1|\pi} \\
P_{z_1|\pi} &= C_{z_1|\pi} - \hat{z}_{1|\pi}\, \hat{z}_{1|\pi}^\top \\
P_{z_2|\pi} &= P_{z_2} - U P_{z_1 z_2} + V V^\top + Z + U C_{z_1|\pi} U^\top - \hat{z}_{2|\pi}\, \hat{z}_{2|\pi}^\top \\
P_{z_1 z_2|\pi} &= \hat{z}_{1|\pi} V^\top + C_{z_1|\pi} U^\top - \hat{z}_{1|\pi}\, \hat{z}_{2|\pi}^\top ,
\end{aligned}
\tag{12}
\]

where \(U = P_{z_1 z_2}^\top P_{z_1}^{-1}\), \(V = \hat{z}_2 - U \hat{z}_1\), \(Z = U \hat{z}_{1|\pi} V^\top + V \hat{z}_{1|\pi}^\top U^\top\), and \(C_{z_1|\pi}\) is the conditional second-order moment of z_1. Note that this holds for any information π dependent only on z_1, not just the range constraints studied in [72]. Consequently, the relations (12) give the desired marginalization. To impose the information π on (11), only the first and second conditional moments, \(\hat{z}_{1|\pi}\) and \(C_{z_1|\pi}\), need to be calculated. If π is linearly dependent on z_1 and has Gaussian errors, this is equivalent to a Kalman filter measurement update. This may trivially be used to introduce information about individual agents. However, as we show in the following subsections, it can also be used to couple multiple navigation points of individual agents without any further measurement and to introduce non-Gaussian ranging between agents.
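The marginalization can be sketched as a routine that, given the conditional moments of z_1 under some information π, returns the conditioned mean and covariance of the full state; `marginalized_update` is an illustrative name, and the second moment `C1_cond` is E[z_1 z_1^T | π], not the covariance.

```python
import numpy as np

def marginalized_update(z_hat, P_z, n1, z1_cond, C1_cond):
    """Condition a Gaussian state [z1, z2] on information that depends only
    on z1, given z1_cond = E[z1 | pi] and the second moment
    C1_cond = E[z1 z1^T | pi]. A sketch of the marginalization relations."""
    z1, z2 = z_hat[:n1], z_hat[n1:]
    P1, P12, P2 = P_z[:n1, :n1], P_z[:n1, n1:], P_z[n1:, n1:]

    U = P12.T @ np.linalg.inv(P1)   # regression of z2 on z1
    V = z2 - U @ z1
    Z = np.outer(U @ z1_cond, V) + np.outer(V, U @ z1_cond)

    z2_cond = V + U @ z1_cond
    P1_cond = C1_cond - np.outer(z1_cond, z1_cond)
    P2_cond = (P2 - U @ P12 + np.outer(V, V) + Z
               + U @ C1_cond @ U.T - np.outer(z2_cond, z2_cond))
    P12_cond = np.outer(z1_cond, V) + C1_cond @ U.T - np.outer(z1_cond, z2_cond)

    z_new = np.concatenate([z1_cond, z2_cond])
    P_new = np.block([[P1_cond, P12_cond], [P12_cond.T, P2_cond]])
    return z_new, P_new
```

A useful sanity check is that vacuous information (conditional moments equal to the prior moments of z_1) leaves the full mean and covariance unchanged.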

4.2 Fusing dead reckoning from dual foot-mounted units

The positions of the feet, x_a and x_b, of an agent (in general, two navigation points of an agent) have a bounded spatial separation. This can be used to fuse the related dead reckoning without any further measurements. In practice, the constraint will often have different extents in the vertical and horizontal directions. This can be expressed as the range constraint

\[
\| D_\gamma ( x_a - x_b ) \| \le \gamma_{xy} ,
\tag{13}
\]

where D_γ is a diagonal scaling matrix with γ = [1, 1, γ_xy/γ_z] on the diagonal, and γ_xy and γ_z are the constraints in the horizontal and vertical directions. Unfortunately, there is no standard way of imposing such a constraint in a Kalman-like framework [102]. Also, the position states being at arbitrary locations in the global state vector, i.e., x = […, x_a, …, x_b, …], means that the state vector is not on the form of z. Further, since the constraint (13) has unbounded support, the conditional means \(\hat{x}_{a|\gamma}\), \(\hat{x}_{b|\gamma}\) and covariances cov([x_{a|γ}, x_{b|γ}]) cannot easily be evaluated. Moreover, since the states are updated asynchronously (steps occurring at different time instances), the state estimates \(\hat{x}_a\) and \(\hat{x}_b\) may not refer to the same time instance. The latter problem can be handled by adjusting the constraint γ_xy by the time difference of the states. In principle, this means that an upper limit on the speed by which an agent moves is imposed. The former problems can be solved with the state transformation

\[
z = T_\gamma x
\quad\text{where}\quad
T_\gamma = \left( \begin{bmatrix} D_\gamma & -D_\gamma \\ D_\gamma & D_\gamma \end{bmatrix} \oplus I_{m-6} \right) \Pi ,
\]

where I_{m-6} is the identity matrix of size m − 6, m is the dimension of x, ⊕ denotes the direct sum of matrices, and Π is a permutation matrix fulfilling

\[
\Pi [ \ldots, x_a, \ldots, x_b, \ldots ] = [ x_a, x_b, \ldots ]
\]

and z_1 = D_γ(x_a − x_b). With the state transformation T_γ, the mean and covariance of z become \(\hat{z} = T_\gamma \hat{x}\) and \(P_z = T_\gamma P_x T_\gamma^\top\). Inversely, since T_γ is invertible, the conditional mean and covariance of x become \(\hat{x}_{|\gamma} = T_\gamma^{-1} \hat{z}_{|\gamma}\) and \(P_{x|\gamma} = T_\gamma^{-1} P_{z|\gamma} T_\gamma^{-\top}\). Therefore, if \(\hat{z}_{1|\gamma}\) and \(C_{z_1|\gamma}\) are evaluated, (12) gives \(\hat{z}_{|\gamma}\) and P_{z|γ}, and thereby also \(\hat{x}_{|\gamma}\) and P_{x|γ}. Fortunately, with z_1 = D_γ(x_a − x_b), the constraint (13) becomes

\[
\| z_1 \| \le \gamma_{xy} .
\]

In contrast to (13), this constraint has bounded support. Therefore, as suggested in [72], the conditional moments can be approximated by sampling and projecting sigma points:

\[
\hat{z}_{1|\gamma} \approx \sum_i w^{(i)} z_1^{(i)}
\quad\text{and}\quad
C_{z_1|\gamma} \approx \sum_i w^{(i)} z_1^{(i)} \bigl( z_1^{(i)} \bigr)^\top ,
\]

where ≈ denotes approximate equality and

\[
z_1^{(i)} =
\begin{cases}
s^{(i)}, & \| s^{(i)} \| \le \gamma_{xy} \\[4pt]
\gamma_{xy} \dfrac{s^{(i)}}{\| s^{(i)} \|}, & \| s^{(i)} \| > \gamma_{xy} .
\end{cases}
\]

Here \(s^{(i)}\) and \(w^{(i)}\) are sigma points and weights

\[
\{ s^{(i)}, w^{(i)} \} =
\begin{cases}
\{ \hat{z}_1 ,\; 1 - 3/\eta \}, & i = 0 \\
\{ \hat{z}_1 + \eta^{1/2} l_i ,\; 1/(2\eta) \}, & i \in \{1, \dots, 3\} \\
\{ \hat{z}_1 - \eta^{1/2} l_{i-3} ,\; 1/(2\eta) \}, & i \in \{4, \dots, 6\} ,
\end{cases}
\]

where l_i is the i-th column of the Cholesky factor L of \(P_{z_1} = L L^\top\), and the scalar η reflects the portion of the prior to which the constraint is applied. See [72] for further details. The application of the constraint for the two-dimensional case is illustrated in Figure 6.
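The constraint update can be sketched as below, assuming the straightforward generalization 1 − n/η of the center weight to an n-dimensional z_1; `constrained_moments` is an illustrative name.

```python
import numpy as np

def constrained_moments(z1_hat, P_z1, gamma_xy, eta=3.0):
    """Approximate the first two conditional moments of z1 under the
    separation constraint ||z1|| <= gamma_xy by projecting sigma points
    onto the constraint (eta controls the sigma-point spread)."""
    n = z1_hat.size                  # n = 3 in the text
    L = np.linalg.cholesky(P_z1)
    # Sigma points: the mean, plus/minus scaled Cholesky columns.
    pts = [z1_hat]
    w = [1.0 - n / eta]              # assumed generalization of 1 - 3/eta
    for i in range(n):
        pts.append(z1_hat + np.sqrt(eta) * L[:, i])
        pts.append(z1_hat - np.sqrt(eta) * L[:, i])
        w += [1.0 / (2 * eta)] * 2
    # Project points violating the constraint onto the sphere of radius gamma_xy.
    proj = [p if np.linalg.norm(p) <= gamma_xy
            else gamma_xy * p / np.linalg.norm(p) for p in pts]
    z1_cond = sum(wi * p for wi, p in zip(w, proj))
    C1_cond = sum(wi * np.outer(p, p) for wi, p in zip(w, proj))
    return z1_cond, C1_cond
```

With an inactive constraint the moments reproduce the prior; with a tight constraint the second moment shrinks, which is the mechanism by which the two feet's dead reckoning is fused.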

Figure 6

Illustration of the separation constraint update for two feet in the horizontal plane. The plots show (upper left) the prior position and covariance estimates, (upper right) the transformed system with the sigma points (blue crosses) and the constraint (dashed circle), (lower left) the projected sigma points and the conditional mean and covariance, and finally (lower-right) the conditioned result in the original domain with the prior covariances indicated with thinner lines.

4.3 Inter-agent range measurement updates

Similar to the range constraint between the feet of an individual agent, the geometry of the inter-agent ranging gives the constraints

\[
r - ( \gamma_a + \gamma_b ) \le \| x_a - x_b \| \le r + ( \gamma_a + \gamma_b ),
\tag{16}
\]

where r is the (true) range between the agents' ranging devices, γ_a and γ_b are the maximum spatial separations between the respective ranging devices and the positions x_a and x_b, which in this case are the positions of one foot of each agent. The range being dependent only on x_a − x_b means that the state transformation z = T_1 x, where 1 = [1, 1, 1] and z_1 = x_a − x_b, and the corresponding mean and covariance transformations explained in the previous subsection can be used to exploit the marginalization (12).

The inter-agent ranging gives measurements r̃ of the range r. As reviewed in Section 2.2, the malign attributes of r̃ that we have to deal with are potentially heavy-tailed error distributions and non-stationary, spatially correlated errors due to diffraction, multi-path, or similar. These can be modeled as r̃ = r + v + v′, where v is a heavy-tailed error component and v′ is a uniformly distributed component intended to cover the assumed bounded correlated errors, in a manner similar to that of [75]. Combining the model with (16) and the state transformation T_1 gives the measurement model

\[
\tilde{r} - v - \gamma_r \le \| z_1 \| \le \tilde{r} - v + \gamma_r ,
\tag{17}
\]

where γ_r is chosen to cover the bounds in (16), the asynchrony between \(\hat{x}_a\) and \(\hat{x}_b\), and the correlated errors v′. In practice, γ_r will be a system parameter trading integrity for information.

To update the global state estimate with the range measurement r̃, the state estimate \(\hat{z}_1\) and covariance estimate \(P_{z_1}\) must be conditioned on r̃ via (17). Due to the stochastic term v, we cannot use hard constraints as with the feet of a single agent. However, by assigning a uniform prior to the constraint in (17), the likelihood function of r̃ given z_1 becomes

\[
f( \tilde{r} \mid z_1 ) = \mathcal{U}( -\gamma_r, \gamma_r ) * \mathcal{V}( \| z_1 \| - \tilde{r}, \sigma_r ),
\tag{18}
\]

where \(\mathcal{U}(-\gamma_r, \gamma_r)\) is a uniform distribution over the interval [−γ_r, γ_r], \(\mathcal{V}(\|z_1\| - \tilde{r}, \sigma_r)\) is the distribution of v with mean ‖z_1‖ − r̃ and some scale σ_r, and ∗ denotes convolution. Then, with the assumed Gaussian prior \(z_1 \sim \mathcal{N}(\hat{z}_1, P_{z_1})\), the conditional distribution of z_1 given r̃, \(\hat{z}_1\), and \(P_{z_1}\) is

\[
f( z_1 \mid \tilde{r} ) \propto f( \tilde{r} \mid z_1 )\, \mathcal{N}( z_1 ; \hat{z}_1, P_{z_1} ).
\]

Since z_1 is low-dimensional, the conditional moments \(\hat{z}_{1|\tilde{r}}\) and \(C_{z_1|\tilde{r}}\) can be evaluated by sampling. With the marginalization (12) and the inverse transformation \(T_1^{-1}\), this gives the conditional mean and covariance of x.

Since the likelihood function (18) is typically heavy-tailed, it cannot easily be described by a small set of samples. However, since the prior is (assumed) Gaussian, the sampling can efficiently be implemented with the eigenvalue decomposition. With sample points u^{(i)} of the standard Gaussian distribution, the corresponding sample points of the prior are given by

\[
s^{(i)} = \hat{z}_1 + Q \Lambda^{1/2} u^{(i)},
\]

where \(P_{z_1} = Q \Lambda Q^\top\) is the eigenvalue decomposition of \(P_{z_1}\). With the sample points s^{(i)}, the associated prior weights become dependent only on u^{(i)} (apart from normalization), since

\[
w_{\mathrm{pr}}^{(i)} \propto e^{ -\frac{1}{2} ( s^{(i)} - \hat{z}_1 )^\top P_{z_1}^{-1} ( s^{(i)} - \hat{z}_1 ) } = e^{ -\frac{1}{2} \| u^{(i)} \|^2 }
\]

and can therefore be precalculated. Reweighting with the likelihood function, \(\tilde{w}_{\mathrm{po}}^{(i)} = w_{\mathrm{pr}}^{(i)} \cdot f(\tilde{r} \mid s^{(i)})\), and normalizing the weights, \(w_{\mathrm{po}}^{(i)} = \tilde{w}_{\mathrm{po}}^{(i)} \bigl( \sum_j \tilde{w}_{\mathrm{po}}^{(j)} \bigr)^{-1}\), with suitably chosen u^{(i)}, the conditional moments can be approximated by

\[
\hat{z}_{1|\tilde{r}} \approx \sum_i w_{\mathrm{po}}^{(i)} s^{(i)}
\quad\text{and}\quad
C_{z_1|\tilde{r}} \approx \sum_i w_{\mathrm{po}}^{(i)} s^{(i)} \bigl( s^{(i)} \bigr)^\top .
\]

Consequently, as long as the likelihood function can be efficiently evaluated, any likelihood function may be used. For analytical convenience, we have typically let \(\mathcal{V}(\cdot,\cdot)\) be Cauchy distributed, giving the heavy-tailed likelihood function

\[
f( \tilde{r} \mid s^{(i)} ) \propto
\arctan\!\left( \frac{ \tilde{r} - \| s^{(i)} \| + \gamma_r }{ \sigma_r } \right)
- \arctan\!\left( \frac{ \tilde{r} - \| s^{(i)} \| - \gamma_r }{ \sigma_r } \right).
\tag{20}
\]

The sampling-based range update with this likelihood function and u^{(i)} from a square sampling lattice is illustrated in Figure 7. Potentially more elaborate techniques for choosing the sample points can be found in [103].
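The eigenvalue-decomposition sampling and likelihood reweighting can be sketched as below. The square lattice of standard-Gaussian points and the name `range_update_moments` are illustrative choices, not the paper's implementation.

```python
import numpy as np

def range_update_moments(z1_hat, P_z1, r_meas, gamma_r=2.0, sigma_r=0.5,
                         grid=np.linspace(-3.0, 3.0, 7)):
    """Sampling-based conditioning of z1 = x_a - x_b on a range measurement:
    sample the Gaussian prior via an eigenvalue decomposition, reweight with
    the heavy-tailed (Cauchy-convolved-uniform) likelihood, and form the
    first two conditional moments. 'grid' is an illustrative square lattice."""
    n = z1_hat.size
    eigval, Q = np.linalg.eigh(P_z1)
    # Standard-Gaussian lattice points and their (unnormalized) prior weights.
    U = np.array(np.meshgrid(*([grid] * n))).reshape(n, -1).T
    w_pr = np.exp(-0.5 * np.sum(U**2, axis=1))
    S = z1_hat + (Q @ np.diag(np.sqrt(eigval)) @ U.T).T   # prior samples s^(i)
    # Heavy-tailed likelihood: difference of arctangents.
    rho = np.linalg.norm(S, axis=1)
    lik = (np.arctan((r_meas - rho + gamma_r) / sigma_r)
           - np.arctan((r_meas - rho - gamma_r) / sigma_r))
    w = w_pr * lik
    w /= w.sum()
    z1_cond = w @ S                    # conditional mean
    C1_cond = (S * w[:, None]).T @ S   # conditional second moment
    return z1_cond, C1_cond
```

The resulting moments can then be pushed through the marginalization and the inverse transformation to condition the full global state, as described in the text.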

Figure 7

Illustration of the suggested range update in two dimensions. From left to right, the plots show the Gaussian prior given by the mean z ̂ 1 (blue dot) and the covariance P z 1 (blue ellipse), and with indicated samples (red dots) and eigenvectors/values, the (one-dimensional) likelihood function given by the range measurement r ~ and used to reweight the samples, and the resulting posterior with conditional mean z ̂ 1 | r and covariance P z 1 | r calculated from the reweighted samples.

The presented ranging update gives robustness to outliers in the measurement data. In Figure 8, the influence functions for the sample-based update and the traditional Kalman measurement update are shown for the ranging likelihood function (20) with γ_r = 2 m and σ_r = 0.5 m, and position covariance values of \(P_{z_1} = I\) m² and \(P_{z_1} = 0.3 I\) m². Comparing the blue solid and the red dashed-dotted lines shows that when the position and ranging error covariances are of the same size, the suggested ranging update behaves like the Kalman update up to around three standard deviations, where it gracefully starts to neglect the range measurement. In addition, comparing the blue dashed and the red dotted lines shows that for smaller position error covariances, in contrast to the Kalman update, the suggested range update neglects ranging measurements with small errors (the flat spot in the middle of the influence function). This has the effect that multiple ranging updates will not make the position error covariance collapse. It captures the fact that, due to correlated errors, multiple range measurements during standstill contain a diminishing amount of information, and that during motion, the range measurements should only 'herd' the dead reckoning.

Figure 8

Influence functions for range updates for \(\hat{z}_1 = 10\) m. The different functions correspond to the suggested method (blue solid/dashed lines) and a traditional Kalman measurement update (red dashed-dotted/dotted lines) with \(P_{z_1} = I\) m² and \(P_{z_1} = 0.3 I\) m², respectively. For the suggested update, γ_r = 2 m and v was Cauchy distributed with σ_r = 0.5 m; for the Kalman measurement update, the measurement error variance was 1 m².

With slight modifications, the ranging updates can be used to incorporate information from many other sources. Ranging to anchor nodes whose positions are not kept in x, or position updates (from a GNSS receiver or similar), may trivially be implemented as range updates (zero range in the case of a position update) with z_1 = x_a − x_b replaced by z_1 = x_a − x_c, where x_c is the position of the anchor node or the position measurement. Fusion of pressure measurements may be implemented as range updates in the vertical direction, either relative to other agents or relative to a reference pressure.

4.4 Summary of sensor fusion

The central sensor fusion, as described in Section 3.3, keeps the position and heading of all feet in the global state vector x. From all agents, it receives dead reckoning updates,[d p ,d ψ ], P p , P p , ψ , and P ψ , ψ , and inter-agent range measurements r ~ . The dead reckoning updates are used to propagate the corresponding states and covariances according to (7). At each dead reckoning update, the range constraint is imposed on the state space as described in subsection 4.2, and the corrections are sent back to the agent. The inter-agent range measurements are used to condition the state space as described in subsection 4.3. Pseudo-code for conditioning the state mean and covariance estimates on the range constraint and range measurements is shown in Algorithms 2 and 3.

5 Experimental results

To demonstrate the characteristics of the sensor fusion presented in the previous section, in the following subsection, we first show numerical simulations giving a quantitative description of the fusion. Subsequently, to demonstrate the practical feasibility of the suggested architecture and sensor fusion, a real-time localization system implementation is briefly presented.

5.1 Simulations

The cooperative localization by foot-mounted inertial sensors and inter-agent ranging is non-linear, and the behavior of the system will be highly dependent on the trajectories. Therefore, we cannot give an analytical expression for the performance. Instead, to demonstrate the system characteristics, two extreme scenarios are simulated. For both scenarios, the agents move with 1 m steps at 1 Hz. Gaussian errors with standard deviations of 0.01 m and 0.2° were added to the step displacements and the heading changes, respectively, and heavy-tailed, Cauchy-distributed errors of scale 1 m were added to the range measurements. The ranging is time-multiplexed in a round-robin fashion at a total rate of 1 Hz.
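The simulated error processes above could be generated as follows (distribution parameters from the text; the seed and step count are arbitrary).

```python
import numpy as np

# Illustrative generation of the simulated measurement errors described
# in the text; rng seed and number of steps are arbitrary choices.
rng = np.random.default_rng(0)
n_steps = 100

# Step displacement errors: zero-mean Gaussian, std 0.01 m per step.
disp_err = rng.normal(0.0, 0.01, size=(n_steps, 3))
# Heading-change errors: zero-mean Gaussian, std 0.2 degrees per step.
head_err = rng.normal(0.0, np.deg2rad(0.2), size=n_steps)
# Ranging errors: heavy-tailed, Cauchy distributed with scale 1 m.
range_err = rng.standard_cauchy(n_steps) * 1.0
```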

5.1.1 Straight-line march

N agents are marching beside each other in straight lines with an agent separation of 10 m. The straight line is the worst-case scenario for the dead reckoning, and the position errors will be dominated by errors induced by the heading errors. In Figure 9, examples of the estimated trajectories of the right (blue) and left (green) feet are shown for three agents: without any further information, with range constraints between the feet, and with range constraints and inter-agent ranging. The absolute and relative root-mean-square errors (RMSEs) as functions of the walked distance, and for different numbers of agents, are shown in Figure 10. The relative errors are naturally bounded by the inter-agent ranging. However, the heading RMSE grows linearly with time/distance, and therefore, the absolute position error is seen to grow with distance. Similar behavior can be observed in the experimental data in [64, 91]. Naturally, the heading error, and therefore also the absolute position RMSE, drops as \(1/\sqrt{N}\), where N is the number of agents. This is shown in Figure 11. We may also note that the position errors of the different agents become strongly correlated. The correlation coefficients for two agents as functions of distance are shown in Figure 12.

Figure 9

Illustration of the gain of dual foot-mounted sensors and inter-agent ranging. The upper plot shows the step-wise dead reckoning of the individual feet (in blue and green) without any further information. The middle plot shows the step-wise dead reckoning with range constraints between the feet of the individual agents. The lower plot shows the complete cooperative localization with step-wise dead reckoning, range constraints, and inter-agent ranging.

Figure 10

Absolute position RMSEs (blue lines) and relative position RMSEs (red lines) as functions of distance. The different blue lines correspond, in ascending order, to an increasing number of agents and are the results of 100 Monte Carlo runs. Clearly, the relative error is bounded by the inter-agent ranging, while the absolute error grows more slowly the larger the number of agents. The final position RMSEs as a function of the number of agents are shown in Figure 11.

Figure 11

Final position RMSEs as a function of the number of agents (blue crossed line). From the fit to \(1/\sqrt{N}\) (dashed red line), the position error is seen to decay as the square root of the number of agents. The RMSEs are the results of 100 Monte Carlo runs.

Figure 12

Position error correlation coefficients for two agents in the straight-line march scenario. The lines correspond to the x direction (solid blue), y direction (dashed green), and z direction (dotted/dashed red). Clearly, the position errors of the different agents become strongly correlated with the increasing distance traveled.

5.1.2 Three static agents

Three non-collinear agents are standing still. This will be perceived by the foot-mounted inertial navigation, and therefore, they essentially become anchor nodes. This is obviously the best-case scenario. A fourth agent walks around them in a circle. An example of an estimated trajectory is shown in Figure13, and the RMSE as a function of time is shown in Figure14. Since anchor nodes are essentially present in the system, the errors are bounded. See[104] for further discussions. The non-zero RMSE reflects the range constraints in the system.

Figure 13

Estimated trajectory from the scenario with three static agents and a fourth mobile agent. Clearly, the position estimation errors are bounded.

Figure 14

Position RMSEs of the mobile unit for the three static agents scenario. The RMSEs are the results of 100 Monte Carlo runs. Being static, the three stationary agents essentially become anchor nodes, and therefore, the RMSE is bounded.

From the two scenarios, we can conclude that the relative position errors are kept bounded by the inter-agent ranging, while the absolute position errors (relative to the starting location) are bounded in the best case (stationary agents), and that the error growth is reduced by a factor of \(1/\sqrt{N}\) in the worst case.

5.2 Real-time implementation

The decentralized system architecture has been realized with OpenShoe units [15] and Android smartphones and tablets (Samsung Galaxy S III and Tab 2 10.1, Samsung Electronics Co., Ltd., Suwon, Korea) in the in-house developed tactical locator system TOR. The communication is done over an IEEE 802.11 WLAN. Synthetic inter-agent ranging is generated from position measurements from a Ubisense system (Ubisense Research & Development Package, Ubisense Group plc., Cambridge, UK) installed in the KTH R1 reactor hall [105]. The intention is to replace the Ubisense system with in-house developed UWB radios [26]. The equipment for a single agent is shown in Figure 15. The multi-agent setup with additional equipment for sensor mounting is shown in Figure 16.

Figure 15

Agent equipment carried by each agent in the prototype implementation. The OpenShoe units are connected to the USB-hub. Radio tags and a Ubisense real-time location system are used to generate synthetic range measurements between agents.

Figure 16

Four agents with equipment displayed in Figure 15 . The OpenShoe units are integrated in the soles of the shoes, and the radio tags are attached to the helmets. The cables and the USB hubs are not displayed.

The step-wise inertial navigation and the associated transfer of displacements and heading changes have been implemented in the OpenShoe units. The agent filtering has been implemented as Android applications together with graphical user interfaces. A screenshot of the graphical user interface, with trajectories from a 10-min search of the reactor hall and adjacent rooms (built-up walls not displayed) by three smoke divers, is shown in Figure 17. The central sensor fusion has been implemented as a separate Android application running on one agent's Android platform. Recently, voice radio communication and 3D audio have been integrated into the localization system [106].

Figure 17

Screenshot of the cooperative localization system user interface overlaid on a photograph of the Android phone. The interface shows the recursively estimated positions (trajectories) of three smoke divers, a smoke diving pair (magenta) and a smoke diving leader (blue), during a 10-min search of the R1 reactor hall and adjacent rooms. The built-up rooms/walls are not displayed but can clearly be seen in the search pattern. At the time of the screenshot, the smoke diving pair has advanced to the second floor, as can be seen from the height estimates displayed on the left side.

6 Conclusions

Key implementation challenges of cooperative localization by foot-mounted inertial sensors and inter-agent ranging are (1) designing an overall system architecture that minimizes the required communication and computational cost while retaining performance and robustness to varying connectivity, and (2) fusing the information from the system under the constraints of the system architecture while retaining high integrity and accuracy. A solution to the former problem has been presented in the partially decentralized system architecture based on the division and physical separation of the step-wise inertial navigation and the step-wise dead reckoning. A solution to the latter problem has been presented in the marginalization and sample-based spatial separation constraint and ranging updates. By simulations, it has been shown that in the worst-case scenario, the absolute localization RMSE improves as the square root of the number of agents, and the relative errors are bounded. In the best-case scenario, both the relative and the absolute errors are bounded. Finally, the feasibility of the suggested architecture and sensor fusion has been demonstrated with simulations and a real-time system implementation featuring four agents and meter-level accuracy for operation times of tens of minutes in a harsh industrial environment.


  1. 1.

    Rantakokko J, Rydell J, Strömbäck P, Händel P, Callmer J, Törnqvist D, Gustafsson F, Jobs M, Grudén M: Accurate and reliable soldier and first responder indoor positioning: multisensor systems and cooperative localization. IEEE Wireless Commun 2011, 18: 10-18.

    Article  Google Scholar 

  2. 2.

    Fuchs C, Aschenbruck N, Martini P, Wieneke M: Indoor tracking for mission critical scenarios: a survey. Pervasive and Mobile Comput 2011, 7: 1-15.

    Article  Google Scholar 

  3. 3.

    Pahlavan K, Li X, Makela J: Indoor geolocation science and technology. IEEE Commun. Mag 2002, 40(2):112-118.

    Article  Google Scholar 

  4. 4.

    Renaudin V, Yalak O, Tomé P, Merminod B: Indoor navigation of emergency agents. Eu. J. Navigation 2007, 5: 36-42.

    Google Scholar 

  5. 5.

    Glanzer G: Personal and first-responder positioning: state of the art and future trends. In Ubiquitous Positioning, Indoor Navigation, and Location Based Service (UPINLBS). Helsinki, Finland; 3–4 Oct 2012.

    Google Scholar 

  6. 6.

    Fischer C, Gellersen H: Location and navigation support for emergency responders: a survey. IEEE Pervasive Comput 2010, 9: 38-47.

    Article  Google Scholar 

  7. 7.

    Hightower J, Borriello G: Location systems for ubiquitous computing. Computer 2001, 34(8):57-66.

    Article  Google Scholar 

  8. 8.

    Angermann M, Khider M, Robertson P: Towards operational systems for continuous navigation of rescue teams. IEEE/ION Position, Location and Navigation Symposium (PLANS) 5–8 May 2008.

    Google Scholar 

  9. 9.

    Mourikis A, Roumeliotis S: On the treatment of relative-pose measurements for mobile robot localization. In IEEE International Conference on Robotics and Automation. Orlando, FL, USA; 15–19 May 2006.

    Google Scholar 

  10. 10.

    Harle R: A survey of indoor inertial positioning systems for pedestrians. IEEE Commun. Surv. Tutor 2013, PP(99):1-13.

    Google Scholar 

  11. 11.

    Foxlin E: Pedestrian tracking with shoe-mounted inertial sensors. IEEE Comput. Graph. Appl 2005, 25: 38-46.

    Article  Google Scholar 

  12. 12.

    Rantakokko J, Emilsson E, Strömbäck P, Rydell J: Scenario-based evaluations of high-accuracy personal positioning systems. In IEEE/ION Position, Location and Navigation Symposium (PLANS). Myrtle Beach, SC, USA; 23–26 Apr 2012.

    Google Scholar 

  13. 13.

    Godha S, Lachapelle G, Cannon ME: Integrated GPS/INS system for pedestrian navigation in a signal degraded environment. In ION GNSS. Fort Worth, TX, USA; 26–29 Sept 2006.

    Google Scholar 

  14. 14.

    Laverne M, George M, Lord D, Kelly A, Mukherjee T: Experimental validation of foot to foot range measurements in pedestrian tracking. In ION GNSS. Portland, OR, USA; 19–23 Sept 2011.

    Google Scholar 

  15. 15.

    Nilsson JO, Skog I, Händel P, Hari K: Foot-mounted inertial navigation for everybody—an open-source embedded implementation. In IEEE/ION Position, Location and Navigation Symposium (PLANS). Myrtle Beach, SC, USA; 23–26 Apr 2012.

    Google Scholar 

  16. 16.

    Fischer C, Talkad Sukumar P, Hazas M: Tutorial: implementing a pedestrian tracker using inertial sensors. IEEE Pervasive Comput 2013, 12(2):17-27.

    Article  Google Scholar 

  17. 17.

    Open Source Aided Inertial Navigation ToolKit 2011. . Accessed 30 Oct 2012

  18. 18.

    InterSense, Inc.: NavShoe™ . Accessed 12 Jul 2012

  19. 19.

    AIONAV Systems, Ltd: AIONAV-F/M/P/L . Accessed 26 Nov 2012

  20. 20.

    National Robotics Engineering Center(NREC): Micro-inertial navigation technology (MINT): Technology by Carnegie Mellon, The Robotics Institute . Accessed 31 Jan 2013

  21. 21.

    Worcester Polytechnic Institute: Precision indoor personnel location & tracking annual international technology workshop (2006–2012). Worcester, Massachusetts . Accessed 27 Feb 2013

  22. 22.

    Gezici S, Tian Z, Giannakis G, Kobayashi H, Molisch A, Poor H, Sahinoglu Z: Localization via ultra-wideband radios: a look at positioning aspects for future sensor networks. IEEE Signal Process. Mag 2005, 22(4):70-84.

    Article  Google Scholar 

  23. 23.

    Patwari N, Ash J, Kyperountas S, Hero III A, Moses R, Correal N: Locating the nodes: cooperative localization in wireless sensor networks. IEEE Signal Process. Mag 2005, 22(4):54-69.

    Article  Google Scholar 

  24. 24.

    Win M, Conti A, Mazuelas S, Shen Y, Gifford W, Dardari D, Chiani M: Network localization and navigation via cooperation. IEEE Commun. Mag 2011, 49(5):56-62.

    Article  Google Scholar 

  25. 25.

    Wymeersch H, Lien J, Win M: Cooperative localization in wireless networks. Proc. IEEE 2009, 97(2):427-450.

    Article  Google Scholar 

  26. 26.

    De Angelis A, Dwivedi S, Händel P: Characterization of a flexible UWB sensor for indoor localization. IEEE Trans. Instrum. Meas 2013, 62(5):905-913.

    Article  Google Scholar 

  27. 27.

    De Angelis A, Dionigi M, Moschitta A, Carbone P: A low-cost ultra-wideband indoor ranging system. IEEE Trans. Instrum. Meas 2009, 58(12):3935-3942.

    Article  Google Scholar 

  28. 28.

    Moragrega A, Artiga X, Gavrincea C, Ibars C, Navarro M, Najar M, Miskovsky P, Mira F, di Renzo M: Ultra-wideband testbed for 6.0-8.5 GHz ranging and low data rate communication. In European Radar Conference. Rome, Italy; 30 Sept–2 Oct 2009.

    Google Scholar 

  29. 29.

    Karbownik P, Krukar G, Pietrzyk MM, Franke N, von der Grün T: Ultra-wideband technology-based localization platform—architecture & experimental validation. In International Conference on Indoor Positioning and Indoor Navigation (IPIN). Sydney, Australia; 13–15 Nov 2012.

    Google Scholar 

  30. 30.

    Cazzorla A, De Angelis G, Moschitta A, Dionigi M, Alimenti F, Carbone P: A 5.6-GHz UWB position measurement system. IEEE Trans. Instrum. Meas 2013, 62(3):675-683.

    Article  Google Scholar 

  31. 31.

    Time Domain: PulsOn™ . Accessed 24 Sept 2012

  32. 32.

    Farnsworth BD, Taylor DW: High-precision 2.4 GHz DSSS RF ranging. White Paper, Ensco Inc. . Accessed 27 Oct 2012

  33. 33.

    Nanotron Technologies: Product range . Accessed 1 Oct 2012

  34. Motorola, Inc.: XIR P8660/APX 7000. Accessed 31 Jan 2013

  35. Interspiro: SpiroLink. Accessed 31 Jan 2013

  36. Thales Communications, Inc.: AN/PRC-148 MBITR/JEM and AN/PRC-154 Rifleman Radio. Accessed 31 Jan 2013

  37. Zephyr Technology Corp.: Tactical radio comms. Accessed 13 Feb 2013

  38. Strömbäck P, Rantakokko J, Wirkander SL, Alexandersson M, Fors K, Skog I, Händel P: Foot-mounted inertial navigation and cooperative sensor fusion for indoor positioning. In ION International Technical Meeting (ITM). San Diego, CA, USA; 25–27 Jan 2010.

  39. Hawkinson W, Samanant P, McCroskey C, Ingvalson R, Kulkarni A, Haas L, English B: GLANSER: Geospatial location, accountability, and navigation system for emergency responders—system concept and performance assessment. In IEEE/ION Position, Location and Navigation Symposium (PLANS). Myrtle Beach, SC, USA; 24–26 Apr 2012.

  40. Kloch K, Lukowicz P, Fischer C: Collaborative PDR localisation with mobile phones. In 15th Annual International Symposium on Wearable Computers (ISWC). San Francisco, CA, USA; 12–15 Jun 2011.

  41. Kamisaka D, Watanabe T, Muramatsu S, Kobayashi A, Yokoyama H: Estimating position relation between two pedestrians using mobile phones. In Pervasive Computing: Lecture Notes in Computer Science. Edited by: J Kay, P Lukowicz, H Tokuda, P Olivier, A Krüger. Berlin: Springer; 2012:307-324.

  42. Harris M: The way through the flames. IEEE Spectrum 2013, 9: 26-31.

  43. Topics in Ad Hoc and Sensor Networks: series in the IEEE Communications Magazine. 2005–present

  44. Ad Hoc Networks: Elsevier Journal. 2003–present

  45. Bruno R, Conti M, Gregori E: Mesh networks: commodity multihop ad hoc networks. IEEE Commun. Mag 2005, 43(3):123-131.

  46. Blazevic L, Le Boudec JY, Giordano S: A location-based routing method for mobile ad hoc networks. IEEE Trans. Mobile Comput 2005, 4(2):97-110.

  47. Mauve M, Widmer J, Hartenstein H: A survey on position-based routing in mobile ad hoc networks. IEEE Network 2001, 15(6):30-39.

  48. Nilsson JO, Händel P: Recursive Bayesian initialization of localization based on ranging and dead reckoning. In IEEE/RSJ International Conference on Intelligent Robots and Systems. Tokyo, Japan; 3–7 Nov 2013.

  49. Nilsson JO, Händel P: Time synchronization and temporal ordering of asynchronous sensor measurements of a multi-sensor navigation system. In IEEE/ION Position, Location and Navigation Symposium (PLANS). Palm Springs, CA, USA; 3–6 May 2010.

  50. Kschischang F, Frey B, Loeliger HA: Factor graphs and the sum-product algorithm. IEEE Trans. Inform. Theory 2001, 47(2):498-519.

  51. Ihler A, Fisher JW, Moses R, Willsky A: Nonparametric belief propagation for self-localization of sensor networks. IEEE J. Select. Areas Commun 2005, 23(4):809-819.

  52. Mazuelas S, Shen Y, Win M: Information coupling in cooperative localization. IEEE Commun. Lett 2011, 15(7):737-739.

  53. Bahr A, Walter M, Leonard J: Consistent cooperative localization. In IEEE International Conference on Robotics and Automation. Kobe, Japan; 12–17 May 2009.

  54. Nerurkar E, Roumeliotis S: Asynchronous multi-centralized cooperative localization. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Taipei, Taiwan; 18–22 Oct 2010.

  55. Romanovas M, Goridko V, Al-Jawad A, Schwaab M, Klingbeil L, Traechtler M, Manoli Y: A study on indoor pedestrian localization algorithms with foot-mounted sensors. In International Conference on Indoor Positioning and Indoor Navigation (IPIN). Sydney, Australia; 13–15 Nov 2012.

  56. Gadeke T, Schmid J, Zahnlecker M, Stork W, Muller-Glaser KD: Smartphone pedestrian navigation by foot-IMU sensor fusion. In Ubiquitous Positioning, Indoor Navigation, and Location Based Service (UPINLBS). Helsinki, Finland; 3–4 Oct 2012.

  57. Roumeliotis S, Bekey G: Distributed multirobot localization. IEEE Trans. Robot. Automat 2002, 18(5):781-795.

  58. Martinelli A: Improving the precision on multi robot localization by using a series of filters hierarchically distributed. In IEEE/RSJ International Conference on Intelligent Robots and Systems. San Diego, CA, USA; 29 Oct–2 Nov 2007.

  59. Bahr A, Leonard JJ, Fallon MF: Cooperative localization for autonomous underwater vehicles. Int. J. Robotics Res 2009, 28: 714-728.

  60. Krach B, Robertson P: Integration of foot-mounted inertial sensors into a Bayesian location estimation framework. In 5th Workshop on Positioning, Navigation and Communication. Hannover, Germany; 27 Mar 2008.

  61. Angermann M, Robertson P: FootSLAM: Pedestrian simultaneous localization and mapping without exteroceptive sensors—hitchhiking on human perception and cognition. Proc. IEEE 2012, 100: 1840-1848.

  62. Robertson P, Angermann M, Krach B, Khider M: SLAM dance—inertial-based joint mapping and positioning for pedestrian navigation. Inside GNSS Mag 2010, 5(3):48-59.

  63. Pinchin J, Hide C, Moore T: A particle filter approach to indoor navigation using a foot mounted inertial navigation system and heuristic heading information. In International Conference on Indoor Positioning and Indoor Navigation (IPIN). Sydney, Australia; 13–15 Nov 2012.

  64. Rantakokko J, Strömbäck P, Emilsson E, Rydell J: Soldier positioning in GNSS-denied operations. In Navigation Sensors and Systems in GNSS Denied Environments (SET-168). Izmir, Turkey; 8–9 Oct 2012.

  65. Godha S, Lachapelle G: Foot mounted inertial system for pedestrian navigation. Meas. Sci. Tech 2008, 19(7). doi:10.1088/0957-0233/19/7/075202

  66. Bird J, Arden D: Indoor navigation with foot-mounted strapdown inertial navigation and magnetic sensors (emerging opportunities for localization and tracking). IEEE Wireless Commun 2011, 18(2):28-35.

  67. Bancroft JB, Lachapelle G, Cannon ME, Petovello MG: Twin IMU-HSGPS integration for pedestrian navigation. In ION GNSS. Savannah, GA, USA; 15–16 Sept 2008.

  68. Brand T, Phillips R: Foot-to-foot range measurement as an aid to personal navigation. In Proc. 59th Annual Meeting of The Institute of Navigation and CIGTF 22nd Guidance Test Symposium. Albuquerque, NM, USA; 23–25 Jun 2003.

  69. Hung TN, Suh YS: Inertial sensor-based two feet motion tracking for gait analysis. Sensors 2013, 13: 5614-5629.

  70. Saarinen J, Suomela J, Heikkila S, Elomaa M, Halme A: Personal navigation system. In IEEE/RSJ International Conference on Intelligent Robots and Systems. Sendai, Japan; 28 Sept–2 Oct 2004.

  71. Zhang YP, Bin L, Qi C: Characterization of on-human body UWB radio propagation channel. Microw. Opt. Technol. Lett 2007, 49(6):1365-1371.

  72. Zachariah D, Skog I, Jansson M, Händel P: Bayesian estimation with distance bounds. IEEE Signal Processing Lett 2012, 19(12):880-883.

  73. Skog I, Nilsson JO, Zachariah D, Händel P: Fusing information from two navigation systems using an upper bound on their maximum spatial separation. In International Conference on Indoor Positioning and Indoor Navigation (IPIN). Sydney, Australia; 13–15 Nov 2012.

  74. Prateek GV, Girisha R, Hari K, Händel P: Data fusion of dual foot-mounted INS to reduce the systematic heading drift. In 4th International Conference on Intelligent Systems, Modelling and Simulation. Bangkok, Thailand; 29.

  75. Lee S, Kim B, Kim H, Ha R, Cha H: Inertial sensor-based indoor pedestrian localization with minimum 802.15.4a configuration. IEEE Trans. Ind. Informatics 2011, 7(3):455-466.

  76. Abrudan TE, Paula LM, Barros J, Cunha JPS, de Carvalho NB: Indoor location estimation and tracking in wireless sensor networks using a dual frequency approach. In International Conference on Indoor Positioning and Indoor Navigation (IPIN). Guimarães, Portugal; 21–23 Sept 2011.

  77. Hashemi H: The indoor radio propagation channel. Proc. IEEE 1993, 81(7):943-968.

  78. Pahlavan K, Akgul FO, Heidari M, Hatami A, Elwell JM, Tingley RD: Indoor geolocation in the absence of direct path. IEEE Wireless Commun 2006, 13(6):50-58.

  79. Yoon C, Cha H: Experimental analysis of IEEE 802.15.4a CSS ranging and its implications. Comput. Commun 2011, 34(11):1361-1374.

  80. Olson E, Leonard J, Teller S: Robust range-only beacon localization. IEEE J. Oceanic Eng 2006, 31(4):949-958.

  81. Whitehouse K, Culler D: A robustness analysis of multi-hop ranging-based localization approximations. In 5th International Conference on Information Processing in Sensor Networks. Nashville, TN, USA; 19–21 Apr 2006.

  82. Zoubir A, Koivunen V, Chakhchoukh Y, Muma M: Robust estimation in signal processing: a tutorial-style treatment of fundamental concepts. IEEE Signal Process. Mag 2012, 29(4):61-80.

  83. Zampella F, Angelis AD, Skog I, Zachariah D, Jiménez A: A constraint approach for UWB and PDR fusion. In International Conference on Indoor Positioning and Indoor Navigation (IPIN). Sydney, Australia; 13–15 Nov 2012.

  84. Sornette D, Ide K: The Kalman-Lévy filter. Phys. D: Nonlinear Phenomena 2001, 151(2-4):142-174.

  85. Vilà-Valls J, Fernández-Prades C, Closas P, Fernandez-Rubio JA: Bayesian filtering for nonlinear state-space models in symmetric α-stable measurement noise. In 19th European Signal Processing Conference (EUSIPCO 2011). Barcelona, Spain; 29 Aug–2 Sept 2011.

  86. Gordon N, Percival J, Robinson M: The Kalman-Levy filter and heavy-tailed models for tracking manoeuvring targets. In Sixth International Conference on Information Fusion. Cairns, Australia; 8–11 Jul 2003.

  87. Wang D, Zhang C, Zhao X: Multivariate Laplace filter: a heavy-tailed model for target tracking. In 19th International Conference on Pattern Recognition. Tampa, FL, USA; 8–11 Dec 2008.

  88. Allen R, Lin KC, Xu C: Robust estimation of a maneuvering target from multiple unmanned air vehicles’ measurements. In International Symposium on Collaborative Technologies and Systems (CTS). Chicago, IL, USA; 17–21 May 2010.

  89. Britting KR: Inertial Navigation Systems Analysis. New York: Wiley; 1971.

  90. Jekeli C: Inertial Navigation Systems with Geodetic Applications. Berlin: de Gruyter; 2001.

  91. Nilsson JO, Skog I, Händel P: A note on the limitations of ZUPTs and the implications on sensor error modeling. In International Conference on Indoor Positioning and Indoor Navigation (IPIN). Sydney, Australia; 13–15 Nov 2012.

  92. Kelly A: Personal navigation system based on dual shoe mounted IMUs and intershoe ranging. In Precision Indoor Personnel Location & Tracking Annual International Technology Workshop. Worcester, MA, USA; 1 Aug 2011.

  93. Skog I, Händel P, Nilsson JO, Rantakokko J: Zero-velocity detection: an algorithm evaluation. IEEE Trans. Biomed. Eng 2010, 57(11):2657-2666.

  94. Skog I, Nilsson JO, Händel P: Evaluation of zero-velocity detectors for foot-mounted inertial navigation systems. In International Conference on Indoor Positioning and Indoor Navigation (IPIN). Zürich, Switzerland; 15–17 Sept 2010.

  95. Nilsson JO, Händel P: Standing still with inertial navigation. In International Conference on Indoor Positioning and Indoor Navigation (IPIN). Montbéliard-Belfort, France; 28–31 Oct 2013.

  96. Bebek Ö, Suster M, Rajgopal S, Fu M, Huang X, Çavuşoğlu M, Young D, Mehregany M, van den Bogert A, Mastrangelo C: Personal navigation via high-resolution gait-corrected inertial measurement units. IEEE Trans. Instrum. Meas 2010, 59(11):3018-3027.

  97. Jiménez A, Seco F, Prieto J, Guevara J: Indoor pedestrian navigation using an INS/EKF framework for yaw drift reduction and a foot-mounted IMU. In 7th Workshop on Positioning Navigation and Communication (WPNC). Dresden, Germany; 11–12 Mar 2010.

  98. Farrell JA: Aided Navigation. New York: McGraw Hill; 2008.

  99. Bageshwar VL, Gebre-Egziabher D, Garrard WL, Georgiou TT: Stochastic observability test for discrete-time Kalman filters. J. Guid. Control. Dynam 2009, 32(4):1356-1370.

  100. de la Rubia E, Diaz-Estrella A: Improved pedestrian tracking through Kalman covariance error selective reset. Electron. Lett 2013, 49(7):464-465.

  101. Öberg T, Karsznia A, Öberg K: Basic gait parameters: reference data for normal subjects, 10-79 years of age. J. Rehabil. Res. Dev 1993, 30(2):210-223.

  102. Simon D: Kalman filtering with state constraints: a survey of linear and nonlinear algorithms. IET Control Theory Applications 2010, 4(8):1303-1318.

  103. Huber MF, Hanebeck UD: Gaussian filter based on deterministic sampling for high quality nonlinear estimation. In 17th IFAC World Congress (IFAC 2008). Seoul, South Korea; 6–11 Jul 2008.

  104. Kurazume R, Nagata S, Hirose S: Cooperative positioning with multiple robots. IEEE Int. Conf. Robot. Automat 1994, 2: 1250-1257.

  105. De Angelis A, Händel P, Rantakokko J: Measurement report: laser total station campaign in KTH R1 for Ubisense system accuracy evaluation. Tech. Rep., KTH Royal Institute of Technology; 2012.

  106. Nilsson JO, Schüldt C, Händel P: Voice radio communication, 3D audio and the tactical use of pedestrian localization. In International Conference on Indoor Positioning and Indoor Navigation (IPIN). Montbéliard-Belfort, France; 28–31 Oct 2013.



Acknowledgements

Parts of this work have been funded by the Swedish Agency for Innovation Systems (VINNOVA).

Author information



Corresponding author

Correspondence to John-Olof Nilsson.

Additional information

Competing interests

The authors have no connection to any company whose products are referenced in the article. The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Nilsson, JO., Zachariah, D., Skog, I. et al. Cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging. EURASIP J. Adv. Signal Process. 2013, 164 (2013).

Keywords


  • Cooperative localization
  • Pedestrian localization
  • Pedestrian dead reckoning
  • Inertial navigation
  • Infrastructure-free localization