
Distributed tracking with consensus on noisy time-varying graphs with incomplete data

Abstract

In this paper, we formulate a problem of distributed tracking with consensus on a time-varying graph with incomplete data and noisy communication links. We develop a framework to handle a time-varying network topology in which not every node has local observations to generate its own local tracking estimates (incomplete data). A distributed tracking-with-consensus algorithm that is suitable for such a noisy, time-varying graph is proposed. We establish the graph conditions under which distributed consensus can be achieved in the presence of noisy communication links when the effective network graph is time-varying. The steady-state performance of the proposed distributed tracking with consensus algorithm is also analyzed and compared with that of distributed local Kalman filtering with centralized fusion and the centralized Kalman filter. Simulation results and performance analysis of the proposed algorithm are given, showing that the proposed distributed tracking with consensus algorithm performs almost the same as distributed local Kalman filtering with centralized fusion on noisy time-varying graphs with incomplete data, while also offering the additional advantages of robustness and scalability.

1 Introduction

Multisensor tracking problems have attracted the attention of many researchers in robotics, systems, and control theory over the past three decades [1–4]. Target tracking problems are of great importance in surveillance, security, and information systems for monitoring the behavior of agents using sensor networks, such as tracking pallets in warehouses, vehicles on roadways, or firefighters in burning buildings [5, 6]. With the introduction of the concept of consensus, distributed tracking and coordination without any fusion center have also received considerable attention in recent years [7, 8].

Distributed consensus estimation in sensor networks has been studied with both fixed and time-varying communication topologies, taking into account issues such as link failures, packet losses, and quantization or additive channel noise [8–20]. Olfati-Saber and Murray [9] considered average consensus for first-order integrator networks with fixed and switching topologies. Kingston and Beard [10] extended the results of [9] to discrete-time models and relaxed the graph condition to instantaneously balanced, connected-over-time networks. Xiao and Boyd [11] considered discrete-time distributed averaging consensus over fixed and undirected graphs. They designed the weighted adjacency matrix to optimize the convergence rate by semidefinite programming. Huang and Manton [17] considered discrete-time average consensus with fixed topologies and measurement noises. They introduced a decreasing step size in the protocol to attenuate the noise. Li and Zhang [18–20] considered continuous-time average consensus with time-varying topologies and communication noises, where time-varying consensus gains are adopted. They gave a necessary and sufficient condition for mean square average consensus with fixed graph topologies, and sufficient conditions for mean square average consensus and almost sure consensus with time-varying graph topologies.

On the other hand, distributed consensus tracking over networks with noiseless or noisy communication links among nodes has also been considered [21–26]. Recent work in [21, 22] considered distributed consensus tracking over a fixed graph with noiseless communication among nodes. A distributed Kalman filter with embedded consensus filters was proposed in [21] and further extended to heterogeneous and non-linear sensing models in [22]. Distributed Kalman filtering using one-step weighted averaging was considered in [23]. Each node desires an estimate of the observed system and communicates its local estimate with its neighbors in the network. A new estimate is then formed as a weighted average of the neighboring estimates, where the weights are optimized to yield a small estimation error covariance. In [24], the authors presented a distributed Kalman filter to estimate the state of a sparsely connected, large-scale dynamical system. The complex large-scale system is decomposed spatially into low-order overlapping subsystems. A fusion algorithm using bipartite fusion graphs and local average consensus algorithms is applied to fuse observations for those overlapping subsystems. A tracking control problem for multiagent consensus with an active leader and variable interconnection topology was considered in [25], where the state of the leader keeps changing and may not be measured. A neighbor-based local controller together with a neighbor-based state-estimation rule is given for each agent to track such a leader. In [26], the authors proposed a greedy stepsize sequence design to guarantee the convergence of distributed estimation consensus over a network with noisy links.

Distributed tracking with consensus, addressed in this paper and in previous work [27, 28], refers to the problem in which a group of nodes needs to reach agreement on the state of a dynamical system by exchanging tracking estimates over a network. For instance, space-object tracking with a satellite surveillance network, consisting of fixed nodes that are connected together and mobile nodes that can only have active links with other nodes within their communication radius, could benefit from such distributed tracking with consensus, because individual sensor nodes may not have enough observations of sufficient quality [29]. Thus, different sensor nodes may arrive at different local estimates regarding the same space object of interest [29]. Information exchange among nodes may improve the quality of local estimates, and consensus estimates may help avoid conflicting and inefficient distributed decisions. Other applications of this problem include flocking and formation control, real-time monitoring, target tracking, and the global positioning system (GPS) [29, 30]. In [28], the performance analysis of distributed tracking with consensus on noisy time-varying graphs was addressed, and later, the algorithm of distributed tracking with consensus with incomplete data was proposed, without theoretical proof, in [27].

The contributions of this work are as follows: (1) We formulate the problem of distributed tracking with consensus on a time-varying graph with incomplete data and noisy communication links. (2) We develop an algorithm that combines distributed Kalman filtering with consensus updates to handle a time-varying network in which not every node has local observations to generate its own local tracking estimates (incomplete data). (3) We establish the graph conditions under which distributed consensus can be achieved when the graph topology is time-varying and the communication links are noisy. (4) We analyze the steady-state performance of the distributed tracking with consensus on both fixed and time-varying graphs and compare it with that of distributed local Kalman filtering with centralized fusion and the centralized Kalman filter.

The following notation will be used in this paper. At time j, an undirected graph is denoted by G(j) = (V, E(j)) for j ≥ 0, where V = {1, 2, ..., N} is the node set and E(j) ⊆ V × V denotes the edge set. A random graph in which the existence of an edge between a pair of vertices in the set V = {1, 2, ..., N} is determined randomly and independently of other edges with probability p ∈ (0, 1] is denoted by G(N, p). The neighborhood of node n at time j is denoted by Ω_n(j) = {l ∈ V | {l, n} ∈ E(j)}. Node n is said to have degree d_n(j) = |Ω_n(j)|, where | · | denotes the cardinality of a set. Let the degree matrix be the diagonal matrix D(j) = diag(d_1(j), ..., d_N(j)), where diag(d_1, ..., d_N) represents a diagonal matrix with (d_1, ..., d_N) on its main diagonal. The adjacency matrix of the graph G(j) is denoted by A(j) = [A_{ln}(j)], where A_{ln}(j) = 1 if {l, n} ∈ E(j), and A_{ln}(j) = 0 otherwise. The graph Laplacian matrix is denoted by L(j) = D(j) - A(j). The Laplacian is a positive semidefinite matrix, so its eigenvalues can always be ordered as 0 = λ_1(L) ≤ λ_2(L) ≤ ··· ≤ λ_N(L). The smallest eigenvalue λ_1(L) is always equal to zero, with 1 being the corresponding eigenvector, where 1 is the vector of all ones of suitable length. For a connected graph, the second eigenvalue λ_2(L) > 0 and is termed the algebraic connectivity or Fiedler value of the network [15, 31]. Let p(l, n) be the probability that link {l, n} of the graph exists (for the random graphs considered in this paper, we will always assume that p(l, n) = p for l, n ∈ V). For a directed graph, let (l, n) denote the directed edge from node l to node n. The direct sum of an N × N matrix B and an M × M matrix C is an (N + M) × (N + M) matrix, denoted by B ⊕ C, whereas the Kronecker product of an N × N matrix B and an M × M matrix C is an NM × NM matrix, denoted by B ⊗ C. From the properties of the Kronecker product and eigenvalues [32], for an N × N matrix B and an M × M matrix C, if λ_n and ω_m are eigenvalues of B and C, respectively, for 1 ≤ n ≤ N and 1 ≤ m ≤ M, then λ_n ω_m is an eigenvalue of B ⊗ C. We denote the N-dimensional Euclidean space by ℝ^N. The N × N identity matrix is denoted by I_N, while 1_N denotes the column vector of all ones and e_m denotes the column vector with the m th element equal to one and the rest zeros. The operator || · || applied to a vector denotes the standard Euclidean 2-norm, while applied to a matrix it denotes the induced 2-norm, which is equivalent to the matrix spectral radius for symmetric matrices.
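As a small illustration of this graph notation (not part of the paper; a minimal Python/NumPy sketch with an assumed 6-node example), the following builds the adjacency, degree, and Laplacian matrices of a random graph G(N, p) and checks connectivity through the algebraic connectivity λ_2(L):

import numpy as np

def random_graph_laplacian(N, p, rng=None):
    """Sample an undirected random graph G(N, p) and return (A, D, L)."""
    rng = np.random.default_rng() if rng is None else rng
    A = np.zeros((N, N))
    for l in range(N):
        for n in range(l + 1, N):
            if rng.random() < p:            # edge {l, n} exists with probability p
                A[l, n] = A[n, l] = 1.0
    D = np.diag(A.sum(axis=1))              # degree matrix
    return A, D, D - A                      # Laplacian L = D - A

A, D, L = random_graph_laplacian(N=6, p=0.5)
lambda2 = np.sort(np.linalg.eigvalsh(L))[1]   # algebraic connectivity (Fiedler value)
print("lambda_2(L) =", lambda2, "; connected:", lambda2 > 1e-9)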

The remainder of this paper is organized as follows: Section 2 introduces our assumed system and network model and the proposed distributed tracking with consensus algorithm. In Section 3, conditions for achieving distributed consensus are discussed and the rate of convergence is quantified. The steady-state performance of the proposed distributed tracking with consensus algorithm is also analyzed in Section 3. Section 4 provides detailed simulation results and performance comparison of the proposed distributed tracking with consensus algorithm and that of distributed local Kalman filtering with centralized fusion and centralized Kalman filter. Finally, conclusions are given in Section 5.

2 Problem formulation and the proposed distributed tracking with consensus algorithm

A System model

Consider an N-node sensor network with a connectivity graph G(j) = (V, E(j)) at time j. Assume that the graph G(j) is undirected but time-varying, due to nodes moving out of each other's communication range or ceasing transmissions to save battery power. The objective is to perform distributed tracking of a target while exchanging tracking estimates over noisy communication links and trying to reach consensus over the network. The tracking updates are performed at discrete instants indexed by the tracking time step k (k = 0, 1, ...). Consensus updates are performed between every two tracking updates, where 0 ≤ j < J denotes the consensus iteration number and J is the number of consensus iterations per tracking update (assumed to be fixed). The dynamics of the target evolve according to

x(k+1) = F x(k) + w(k), \qquad x(0) \sim \mathcal{N}\left( \bar{x}(0), P_0 \right).
(1)

The sensing model of the n th sensor is

y_n(k) = H_n x(k) + v_n(k), \qquad y_n(k) \in \mathbb{R}^{l}.
(2)

Note that the observation matrices H_n can be different for each node. Both w(k) and v_n(k) are assumed to be zero-mean white Gaussian noise (WGN), and x(0) ∈ ℝ^M is the initial state of the target. The second-order statistics of the process and measurement noise are given by

E\left[ w(k) w(k')^T \right] = Q \, \delta_{k k'}, \qquad E\left[ v_n(k) v_{n'}(k')^T \right] = R_n \, \delta_{k k'} \delta_{n n'},

where δ_{kk'} = 1 if k = k' and δ_{kk'} = 0 otherwise. Note that the above system model is linear, while the system model assumed in [29] is highly non-linear, making it difficult to obtain a theoretical performance characterization.
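For concreteness, a minimal simulation sketch of the system model (1)-(2) is shown below (not from the paper); it assumes the scalar parameter values used later in Section 4's first setup (F = 1, Q = 1, H_n = 1, R_n = 0.25), and the variable names are illustrative only:

import numpy as np

rng = np.random.default_rng(0)
F, Q = 1.0, 1.0                      # scalar target dynamics x(k+1) = F x(k) + w(k)
N, K = 20, 50                        # number of nodes and tracking steps
H = np.ones(N)                       # H_n = 1 for every node
R = 0.25 * np.ones(N)                # R_n = 0.25 for every node

x = 0.0                              # x(0) = 0
states, observations = [], []
for k in range(K):
    x = F * x + rng.normal(scale=np.sqrt(Q))              # process noise w(k) ~ N(0, Q)
    y = H * x + rng.normal(scale=np.sqrt(R), size=N)      # y_n(k) = H_n x(k) + v_n(k)
    states.append(x)
    observations.append(y)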

Figure 1 shows the system model of distributed tracking with consensus on a time-varying graph with incomplete data and noisy communication links. Let x̄_n(k, j) denote node n's updated tracking estimate at the j th consensus iteration following the k th tracking update step, with x̄_n(k, 0) = x̂_n(k|k), where x̂_n(k|k) is the n th node's filtered tracking estimate in the k th tracking update. The received data at node n from node l, for n ≠ l, at iteration j can be written as

Figure 1. Block diagram of distributed tracking with consensus on a time-varying graph with incomplete data and noisy communication links.

z_{n,l}(k, j) = \bar{x}_l(k, j) + \phi_{n,l}(j), \qquad \text{for } 0 \le j < J,
(3)

where φ_{n,l}(j) denotes the receiver noise at node n in receiving the estimate of node l at iteration j, and z_{n,n}(k, j) = x̄_n(k, j). Assume that E[φ_{n,l}(j)] = 0_M and E[φ_{n,l}(j)φ_{n,l}^T(j)] = σ_{n,l}² I_M, with sup_{n,l,j} E[||φ_{n,l}(j)||²] = u < ∞.

As depicted in Figure 1, at the end of the k th tracking update, each node n which has an observation of the target will have a filtered estimate x ^ n ( k | k ) with associated covariance matrix P ^ n ( k | k ) . In order to improve the tracking estimate accuracy, it will exchange this filtered estimate with its neighbors over noisy communication links and try to reach consensus over the network. Note that, the goal here is to obtain a consensus tracking estimate over the local estimates at each tracking time step k, and thus, the consensus problem is essentially a problem of consensus in estimation.

Due to the time-varying topology of the network, at any given tracking time step k, not all nodes may observe the target. These nodes will then not have local tracking estimates, which is what we refer to as incomplete data. In the previous consensus literature [8–20], all node estimates are taken into account in forming consensus estimates. However, the same method does not extend to the incomplete data case, since the nodes that have no observation (y_n(k) = v_n(k)) would exchange their predicted filtered estimates with others. Those predicted tracking estimates would be treated as valid estimates and taken into account in forming consensus estimates, which results in inaccurate estimates and worsens the sensor network performance. By considering incomplete data here, the nodes that do not have data will not communicate their invalid tracking estimates (instead, they set x̂_n(k|k) = 0 and P̂_n(k|k) = ϵI_M for some ϵ > 0). By introducing the active node set and the effective network graph, each node knows which nodes have data in the current consensus iteration. Only the estimates from active nodes are used in forming consensus estimates. The estimate of a non-active node is not considered until that node forms its updated estimate by fusing the filtered estimates of neighboring active nodes. Since the non-active nodes join the consensus process without contributing invalid tracking estimates, a faster consensus process can be achieved while the network performance is still maintained.

In the space object tracking problem treated in [29], each node observes the target and locally processes its data during a data sampling period. After forming local estimates, each node shares its information with neighboring nodes during an information sharing period. Here, the information sharing rate is much higher than the data sampling rate, so that nodes may exchange their local estimates many times between data samples, which may conceivably lead to better consensus estimates. The distributed tracking with consensus problem as formulated above may have other applications beyond the space object tracking problem, such as multitarget tracking with a group of autonomous robots [33], battlefield life-sign detection by unmanned aerial vehicles (UAVs) [34], and package tracking in warehouses by sensor networks [35].

B Network model

We define the active node set S_k^j in a time-varying graph G(j) as the set of nodes that have updated local estimates to be shared with others in the j th consensus iteration after the k th tracking update [29]. Define the effective network graph of a network G(j) = (V(j), E(j)) with active node set S_k^j as G̃(j) = (V(j), ℰ(j)), where ℰ(j) = E(j) \ ⋃_{n ∉ S_k^j} ϒ_n^out(j), and ϒ_n^out(j) = {(n, l) | (n, l) ∈ E(j)} denotes the set of directed edges with initial vertex n at iteration j. Note that, the effective network graph G̃(j) is a directed graph, obtained by removing the outgoing edges of the nodes that do not have data in G(j). For a static graph G(j) = G(V, E), ℰ(j) can be written as ℰ(j) = ℰ(j-1) ∪ ⋃_{l ∈ S_k^{j-1}} (⋃_{n ∈ Ω_l} ϒ_n^out), where ℰ(0) = E \ ⋃_{n ∉ S_k^0} ϒ_n^out. Note that, the nodes that do not observe the target will not have updated local estimates to share at the beginning of the consensus update process (at j = 0). However, as the information exchange among nodes progresses, some of these nodes may be able to form their own updated local estimates at consensus iteration j for j > 0. Therefore, the active node set S_k^j is time-varying and S_k^j = S_k^{j-1} ∪ ⋃_{l ∈ S_k^{j-1}} Ω_l(j-1), where S_k^0 is the set of nodes that have observations of the target in the k th tracking update step as in Figure 1. Figure 2 shows the relation between the connectivity graph G(j) and the effective network graph G̃(j) for a graph of 6 nodes with active node set S_k^j = {1, 2, 4, 6}, where solid circles denote active nodes.
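A small sketch of the active-set recursion S_k^j = S_k^{j-1} ∪ ⋃_{l ∈ S_k^{j-1}} Ω_l(j-1) is given below (Python; the helper name is hypothetical and the neighborhoods correspond to the 6-node example of Figure 2):

def update_active_set(S_prev, neighbors):
    """One step of S_k^j = S_k^{j-1} U (union over l in S_k^{j-1} of Omega_l).

    S_prev is the active set at iteration j-1; neighbors[l] is Omega_l(j-1).
    """
    S = set(S_prev)
    for l in S_prev:
        S |= set(neighbors[l])
    return S

# Figure 2 example: nodes {1, 2, 4, 6} are active at j = 0
neighbors = {1: {2, 4, 5, 6}, 2: {1, 3, 4}, 3: {2, 5},
             4: {1, 2, 5}, 5: {1, 3, 4, 6}, 6: {1, 5}}
print(update_active_set({1, 2, 4, 6}, neighbors))   # -> {1, 2, 3, 4, 5, 6}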

Figure 2. Connectivity graph and effective network graph.

Let I S k j denote an N × N matrix generated from the active node set S k j as follows:

\left[ I_{S_k^j} \right]_{n n'} = \begin{cases} 1, & \text{if } n = n' \text{ and } n \in S_k^j \\ 0, & \text{otherwise.} \end{cases}

Note that, I_{S_k^j} is a diagonal matrix with the n th diagonal element equal to zero for n ∈ (S_k^j)^c, where (·)^c denotes the set complement. By combining the connectivity graph G(j) with the active node set S_k^j, we obtain the effective network graph G̃(j). Thus, the adjacency matrix of the effective network graph is given by 𝒜(j) = A(j)I_{S_k^j}. The corresponding degree matrix 𝒟(j) can then be obtained from 𝒜(j), and the Laplacian matrix is ℒ(j) = 𝒟(j) - 𝒜(j) by definition.

As an example, consider the same network model as in Figure 2. The matrix I_{S_k^j} = diag(1, 1, 0, 1, 0, 1). The Laplacian matrices of the connectivity graph and the effective network graph are as follows:

L(j) = \begin{bmatrix} 4 & -1 & 0 & -1 & -1 & -1 \\ -1 & 3 & -1 & -1 & 0 & 0 \\ 0 & -1 & 2 & 0 & -1 & 0 \\ -1 & -1 & 0 & 3 & -1 & 0 \\ -1 & 0 & -1 & -1 & 4 & -1 \\ -1 & 0 & 0 & 0 & -1 & 2 \end{bmatrix}
\qquad \text{and} \qquad
\mathcal{L}(j) = \begin{bmatrix} 3 & -1 & 0 & -1 & 0 & -1 \\ -1 & 2 & 0 & -1 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 & 0 \\ -1 & -1 & 0 & 2 & 0 & 0 \\ -1 & 0 & 0 & -1 & 3 & -1 \\ -1 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}.
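These effective network graph quantities can be reproduced with the short sketch below (Python/NumPy, not from the paper; it uses the 6-node adjacency implied by L(j) above and the active node set {1, 2, 4, 6}):

import numpy as np

# Adjacency A(j) of the 6-node connectivity graph of Figure 2 (recovered from L(j) above)
A = np.array([[0, 1, 0, 1, 1, 1],
              [1, 0, 1, 1, 0, 0],
              [0, 1, 0, 0, 1, 0],
              [1, 1, 0, 0, 1, 0],
              [1, 0, 1, 1, 0, 1],
              [1, 0, 0, 0, 1, 0]], dtype=float)

active = [0, 1, 3, 5]                 # active node set S_k^j = {1, 2, 4, 6} (0-based indices)
I_S = np.zeros((6, 6))
I_S[active, active] = 1.0             # I_S = diag(1, 1, 0, 1, 0, 1)

A_eff = A @ I_S                       # effective adjacency: outgoing edges of inactive nodes removed
D_eff = np.diag(A_eff.sum(axis=1))    # effective degree matrix
L_eff = D_eff - A_eff                 # effective Laplacian, matching the second matrix above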

C Distributed tracking with consensus algorithm

In this section, we propose a distributed tracking and consensus algorithm for the above distributed tracking problem over a time-varying graph with incomplete data and noisy communication links. This algorithm is based on the architecture that was first proposed in [29] in the special context of consensus tracking in a satellite sensor network for situational awareness.

Figure 3 shows the timing diagram of the tracking and consensus update process in the proposed distributed tracking with consensus algorithm. As in Figure 3, at tracking time step k, node n is assumed to have completed its consensus iterations corresponding to time k - 1. If the output of this consensus update following the (k - 1)th tracking update step is x̄_n(k-1, J) with associated covariance matrix P̄_n(k-1, J), then node n sets x̄_n(k-1|k-1) = x̄_n(k-1, J) and P̄_n(k-1|k-1) = P̄_n(k-1, J). Next, at the k th tracking update step, each node n with n ∈ S_k^0 passes its observation y_n(k) through its local Kalman filter as follows [1]:

Figure 3. Timing diagram of tracking and consensus updates in the proposed algorithm for distributed tracking with consensus.

\hat{x}_n(k|k-1) = F \bar{x}_n(k-1|k-1), \quad
\hat{P}_n(k|k-1) = F \bar{P}_n(k-1|k-1) F^T + Q, \quad
K_n(k) = \hat{P}_n(k|k-1) H_n^T \left[ H_n \hat{P}_n(k|k-1) H_n^T + R_n \right]^{-1}, \quad
\hat{x}_n(k|k) = \hat{x}_n(k|k-1) + K_n(k) \left[ y_n(k) - H_n \hat{x}_n(k|k-1) \right], \quad
\hat{P}_n(k|k) = \left[ I - K_n(k) H_n \right] \hat{P}_n(k|k-1),
(4)

where x̄_n(k-1|k-1) = x̄_n(k-1, J) with x̄_n(0, J) = x̄(0), and P̄_n(k-1|k-1) = P̄_n(k-1, J) with P̄_n(0, J) = P_0. Let X̄(k-1, j) = [x̄_1(k-1, j)^T, x̄_2(k-1, j)^T, ..., x̄_N(k-1, j)^T]^T and denote by P̄(k-1, j) the covariance matrix corresponding to X̄(k-1, j). The P̄_n(k-1, J) in (4) can be obtained by extracting the n th M × M main diagonal block of P̄(k-1, J).

Node n uses the filtered estimate x̂_n(k|k) obtained in the above tracking update step as the initial estimate for the consensus update exchanges by setting x̄_n(k, 0) = x̂_n(k|k), with initial covariance matrix P̄(k, 0) = P̂_1(k|k) ⊕ P̂_2(k|k) ⊕ ··· ⊕ P̂_N(k|k),^a where ⊕ denotes the direct sum. On the other hand, for nodes n ∈ (S_k^j)^c, we may arbitrarily set x̂_n(k|k) = 0 and P̂_n(k|k) = ϵI_M for some ϵ > 0.
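The tracking update (4) at an active node is a standard Kalman filter step; a minimal sketch is shown below (scalar-state specialization and function name are illustrative assumptions, not the paper's implementation):

def local_kf_update(x_bar, P_bar, y, F, Q, H, R):
    """One tracking update (4) of node n's local Kalman filter, scalar state.

    x_bar, P_bar are the consensus outputs x̄_n(k-1|k-1), P̄_n(k-1|k-1) fed back
    from the previous consensus stage; y is the node's observation y_n(k).
    The matrix case replaces the division by a matrix inverse.
    """
    x_pred = F * x_bar                        # x̂_n(k|k-1)
    P_pred = F * P_bar * F + Q                # P̂_n(k|k-1)
    K = P_pred * H / (H * P_pred * H + R)     # Kalman gain K_n(k)
    x_filt = x_pred + K * (y - H * x_pred)    # x̂_n(k|k)
    P_filt = (1.0 - K * H) * P_pred           # P̂_n(k|k)
    return x_filt, P_filt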

During the (j + 1)th consensus update, each node n forms a linear estimate of the following form as its consensus estimate:

\bar{x}_n(k, j+1) = \bar{x}_n(k, j) + \gamma_n(j) \sum_{l=1}^{N} \mathcal{A}_{n,l}(j) \left[ z_{n,l}(k, j) - z_{n,n}(k, j) \right],
(5)

where γ_n(j) is the n th node's weight coefficient at iteration j and 0 ≤ j < J. We set γ_n(j) = γ(j) for n ∈ S_k^j, and γ_n(j) = 1/Σ_{l=1}^N 𝒜_{n,l}(j) for n ∉ S_k^j with Σ_{l=1}^N 𝒜_{n,l}(j) ≠ 0. For a node n that does not have a local tracking estimate, we thus assume that it generates its estimate by averaging the tracking estimates received from its neighbors.^b

By defining X̄(k, j) = [x̄_1(k, j)^T, x̄_2(k, j)^T, ..., x̄_N(k, j)^T]^T, the consensus update dynamics can be written in vector form as follows:

\bar{X}(k, j+1) = \bar{X}(k, j) - \left[ \left( \Gamma(j) \mathcal{L}(j) \right) \otimes I_M \right] \bar{X}(k, j) - \left( \Gamma(j) \otimes I_M \right) \bar{\Phi}(j),
(6)

where Γ(j) = diag(γ_1(j), ..., γ_N(j)), Φ̄(j) = [φ_1(j)^T, ..., φ_N(j)^T]^T, and φ_n(j) = -Σ_{l ∈ Ω_n(j)} φ_{n,l}(j). Note that, from (3), E[Φ̄(j)] = 0 and sup_j E[||Φ̄(j)||²] = η ≤ N(N-1)u < ∞.
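A per-node sketch of the consensus update (5) with noisy links and the incomplete-data weighting described above is given below (Python, scalar state; the helper name and explicit noise injection are illustrative assumptions under the model (3)):

import numpy as np

def consensus_iteration(X, A_eff, active, gamma, sigma, rng):
    """One consensus iteration (5) for a scalar state over the effective graph.

    X is a NumPy array with X[n] = x̄_n(k, j); A_eff is the effective adjacency;
    active[n] is True for n in S_k^j. Active nodes use the common decreasing
    gain gamma, while inactive nodes average their active neighbours' estimates.
    """
    N = len(X)
    X_new = X.copy()
    for n in range(N):
        deg = A_eff[n].sum()
        if deg == 0:
            continue                                   # no active neighbours yet
        gamma_n = gamma if active[n] else 1.0 / deg
        # z_{n,l} = x̄_l + phi_{n,l}: neighbour estimates received over noisy links
        innovation = sum(A_eff[n, l] * ((X[l] + rng.normal(scale=sigma)) - X[n])
                         for l in range(N))
        X_new[n] = X[n] + gamma_n * innovation
    return X_new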

Let us define ē(k, j) to be the error vector at the j th consensus iteration after the k th tracking update: ē(k, j) ≜ X̄(k, j) - (1 ⊗ I_M)x(k). From (6), it follows that

\bar{e}(k, j+1) = \left( A(j) \otimes I_M \right) \bar{e}(k, j) - \left( \Gamma(j) \otimes I_M \right) \bar{\Phi}(j) + \left[ \left( A(j) \otimes I_M \right) - I \right] \left( \mathbf{1} \otimes I_M \right) x(k),
(7)

where A(j) = I_N - Γ(j)ℒ(j). Note that, this coefficient matrix A(j) is slightly different from the one proposed in [29]. In [29], A(j) = Ĩ(j) - γ(j)L̃(j), where Ĩ(j) and L̃(j) are modified identity and Laplacian matrices. That modification, however, does not lend itself to a convenient relation between the original matrices and the modified ones that can be used in mathematical derivations.

Note that, if the filtered estimate x̂_n(k|k) at the end of the measurement update stage is an unbiased estimate, then x̄_n(k, 0) is also unbiased for all n ∈ S_k^j. From (5), since x̄_n(k, j+1) = (1/Σ_{l=1}^N 𝒜_{n,l}(j)) Σ_{l=1}^N 𝒜_{n,l}(j)(x̄_l(k, j) + φ_{n,l}(j)) for n ∈ (S_k^j)^c, x̄_n(k, j+1) is also unbiased for n ∈ (S_k^j)^c if x̄_l(k, j) is unbiased for l ∈ S_k^j. From (7), it can be shown that the unbiasedness of the consensus estimate X̄(k, j) is maintained if the matrix A(j) satisfies the condition [(A(j) ⊗ I_M) - I](1 ⊗ I_M) = 0, which is equivalent to requiring (A(j) - I_N)1 ⊗ I_M = 0. It follows that the unbiasedness of the consensus estimate X̄(k, j) requires 0 to be an eigenvalue of the Laplacian matrix ℒ(j) with the associated eigenvector 1.^c Define the covariance matrix corresponding to X̄(k, j) as P̄(k, j) = E[ē(k, j)ē(k, j)^T]. From (7) and the unbiasedness condition, it can easily be seen that

\bar{P}(k, j+1) = \left( A(j) \otimes I_M \right) \bar{P}(k, j) \left( A(j) \otimes I_M \right)^T + E\left[ \left( \Gamma(j) \otimes I_M \right) \bar{\Phi}(j) \bar{\Phi}(j)^T \left( \Gamma(j) \otimes I_M \right) \right].
(8)

As shown in Figure 3, after J consensus iterations, each node n feeds x̄_n(k, J) back to its local Kalman filter by setting x̄_n(k|k) = x̄_n(k, J) and P̄_n(k|k) = P̄_n(k, J) before starting the next tracking update at time k + 1. Recall that P̄_n(k, J) is the n th M × M main diagonal block of P̄(k, J). Algorithm 1 summarizes the steps of the proposed distributed tracking with consensus algorithm.

3 Performance analysis

A Conditions for achieving consensus

In this section, we analyze the convergence of the proposed distributed tracking with consensus algorithm and the convergence rate. Note that, the proofs of lemmas and theorems in this section are different from those in [16] due to vector state and incomplete data, which results in two stages of consensus process: obtaining complete data from incomplete data and reaching consensus on complete data. In the scenarios we consider, we assume that the information exchange rate during the consensus update process is much higher

Algorithm 1 Distributed tracking with consensus algorithm

Initialize: x(0), F, H_n, Q, R_n

while new data exist do

   Kalman filtering in tracking process:

          x̂_n(k|k-1) = F x̄_n(k-1|k-1)

          P̂_n(k|k-1) = F P̄_n(k-1|k-1) F^T + Q

          K_n(k) = P̂_n(k|k-1) H_n^T [ H_n P̂_n(k|k-1) H_n^T + R_n ]^{-1}

          x̂_n(k|k) = x̂_n(k|k-1) + K_n(k) [ y_n(k) - H_n x̂_n(k|k-1) ]

          P̂_n(k|k) = [ I - K_n(k) H_n ] P̂_n(k|k-1)

   update the initial state of consensus process:

          x̄_n(k, 0) = x̂_n(k|k)

          P̄(k, 0) = P̂_1(k|k) ⊕ P̂_2(k|k) ⊕ ··· ⊕ P̂_N(k|k)

         j ← 0

   while j ≤ J - 1 do

       x̄_n(k, j+1) = x̄_n(k, j) + γ_n(j) Σ_{l=1}^N 𝒜_{n,l}(j) [ z_{n,l}(k, j) - z_{n,n}(k, j) ]

      j ← j + 1

   end while

    x̄_n(k|k) = x̄_n(k, J)

    P̄_n(k|k) = P̄_n(k, J)

   k ← k + 1

end while

compared with the data sampling rate for the tracking updates. Hence, we can assume that J ≫ 1,^d guaranteeing enough time for information to be exchanged over the network so that consensus can be reached if the weights {γ(j)} are chosen properly. As mentioned above, for a fixed k and J ≫ 1, the consensus update process after the k th tracking update can be considered as a consensus-in-estimation problem. Thus, to simplify notation, in the following we omit the tracking time step index k in X̄(k, j).

We start by defining the consensus subspace C as

\mathcal{C} = \left\{ X \in \mathbb{R}^{NM} \mid X = \mathbf{1}_N \otimes a, \; a \in \mathbb{R}^{M} \right\}.

If the consensus algorithm (6) converges to the consensus subspace C, each node estimate x̄_n(j) converges to the same value x̄_n(j) = a for 1 ≤ n ≤ N, a ∈ ℝ^M, and consensus is reached over the network. It is well known from the stochastic approximation literature [36] that, in order to ensure asymptotic convergence to the consensus subspace, the weight coefficient γ(j) must satisfy the following persistence condition:

\gamma(j) > 0, \qquad \sum_{j=0}^{\infty} \gamma(j) = \infty, \qquad \sum_{j=0}^{\infty} \gamma(j)^2 < \infty .
(9)
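For example, the decreasing gain γ(j) = c/(j + 1) with c > 0 satisfies (9); the choice γ(j) = 1/(j + 1), i.e., c = 1, is the one used later in the switching-topology example of Section 3-B:

\gamma(j) = \frac{c}{j+1}, \; c > 0
\quad \Longrightarrow \quad
\sum_{j=0}^{\infty} \gamma(j) = c \sum_{j=0}^{\infty} \frac{1}{j+1} = \infty,
\qquad
\sum_{j=0}^{\infty} \gamma(j)^2 = c^2 \sum_{j=0}^{\infty} \frac{1}{(j+1)^2} = \frac{c^2 \pi^2}{6} < \infty .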

We recall the following result on distance properties in ℝ^{NM}:

Lemma 1: Suppose that X ∈ ℝ^{NM} and consider the orthogonal decomposition X = X_C + X_{C⊥}. Then, the Euclidean distance ρ(X, C) = ||X_{C⊥}||.

In the following, we prove that the consensus algorithm given in (6) converges almost surely (a.s.). This is achieved in two steps: First, Lemma 3 proves that the state vector sequence {X̄(j)}_{j≥0} converges a.s. to the consensus subspace C. Theorem 1 then completes the proof by showing that the sequence of component-wise averages {X̄_avg(j)}_{j≥0} converges a.s. to a finite random variable Θ, where X̄_avg(j) = (1/N)(1^T ⊗ I_M)X̄(j). The proof of Theorem 1 will require a basic result on the convergence of Markov processes from [36], which is restated as Lemma 2 in our context. Before stating the lemma, however, we need to introduce the notation of [36].

Let {X̄(j)}_{j≥0} be a Markov process in ℝ^{NM}. Define the generating operator L corresponding to {X̄(j)}_{j≥0} as

L V(j, \bar{X}) = E\left[ V\left(j+1, \bar{X}(j+1)\right) \mid \bar{X}(j) = \bar{X} \right] - V(j, \bar{X}),

for functions V(j, X̄), j ≥ 0, X̄ ∈ ℝ^{NM}, provided the conditional expectation exists. If D_L is the domain of L, then we say that V(j, X̄) ∈ D_L in a domain C if LV(j, X̄) is finite for all (j, X̄) ∈ C.

For G ⊆ ℝ^{NM}, the ε-neighborhood of G and its complement are defined as

U_{\varepsilon}(G) = \left\{ X \;\middle|\; \inf_{Y \in G} \rho(X, Y) < \varepsilon \right\}, \qquad V_{\varepsilon}(G) = \mathbb{R}^{NM} \setminus U_{\varepsilon}(G).
(10)

With these notations, we may now state the desired lemma on the convergence of Markov processes:

Lemma 2 (Convergence of Markov Processes): Let {X̄(j)}_{j≥0} be a Markov process with generating operator L. Let there exist a non-negative function V(j, X̄) ∈ D_L in the domain G ⊆ ℝ^{NM} for j ≥ 0 and X̄ ∈ ℝ^{NM}. Assume that

\inf_{j \ge 0, \, \bar{X} \in V_{\varepsilon}(G)} V(j, \bar{X}) > 0 \;\; \forall \varepsilon > 0, \qquad V(j, \bar{X}) = 0 \;\; \forall \bar{X} \in G, \qquad \lim_{\bar{X} \to G} \sup_{j \ge 0} V(j, \bar{X}) = 0, \qquad \text{and} \qquad
L V(j, \bar{X}) \le g(j) \left( 1 + V(j, \bar{X}) \right) - \gamma(j) \varphi(j, \bar{X}),

where φ(j, X̄), X̄ ∈ ℝ^{NM}, is a non-negative function such that

\inf_{j, \, \bar{X} \in V_{\varepsilon}(G)} \varphi(j, \bar{X}) > 0 \;\; \forall \varepsilon > 0; \qquad \gamma(j) > 0, \;\; \sum_{j \ge 0} \gamma(j) = \infty; \qquad g(j) > 0, \;\; \sum_{j \ge 0} g(j) < \infty .

Then, the Markov process {X̄(j)}_{j≥0} with an arbitrary initial distribution converges almost surely (a.s.) to G as j → ∞:

\mathbb{P}\left[ \lim_{j \to \infty} \rho\left( \bar{X}(j), G \right) = 0 \right] = 1 .

Proof The proof is a vector generalization of that in [16] and is omitted.

Lemma 2 guarantees a.s. convergence of a general Markov process with an arbitrary initial distribution under the assumption of the existence of a Lyapunov function V(j, X̄). In fact, the state vector sequence {X̄(j)}_{j≥0} given in (6) is a Markov process, since ℙ[X̄(j) | X̄(j-1), ..., X̄(0)] = ℙ[X̄(j) | X̄(j-1)]. In the next lemma, we prove that the state estimate sequence {X̄(j)}_{j≥0} given in (6) converges a.s. to the consensus subspace C by showing that the consensus algorithm over an undirected effective network graph satisfies the Lyapunov function assumptions of Lemma 2.

Lemma 3 (a.s. convergence of the proposed algorithm to the consensus subspace): Consider the consensus algorithm in (6) with initial state X̄(0) ∈ ℝ^{NM}. The weight coefficients satisfy the persistence condition in (9). Assume that the undirected connectivity graph Laplacian L(j) is independent of the communication noise φ_{n,l}(j) for 1 ≤ n, l ≤ N. If L(j) = L̄ + L̃(j) with mean L̄ = E[L(j)] is such that λ_2(L̄) > 0 and p(l, n) > 0 for {l, n} ∈ E(j), then

\mathbb{P}\left[ \lim_{j \to \infty} \rho\left( \bar{X}(j), \mathcal{C} \right) = 0 \right] = 1 .

Proof See Appendix A.

Lemma 3 shows that the state estimate sequence {X̄(j)}_{j≥0} given in (6) converges a.s. to the consensus subspace C. The key to the proof is to show that the directed effective network graph becomes an undirected graph after all nodes have local estimates and that the consensus algorithm over this undirected effective network graph satisfies the conditions required in Lemma 2. In the following theorem, we state our main result and complete the convergence proof for the proposed distributed tracking with consensus algorithm by showing that the sequence of component-wise averages {X̄_avg(j)}_{j≥0} converges a.s. to a finite random variable Θ, where X̄_avg(j) = (1/N)(1^T ⊗ I_M)X̄(j).

Theorem 1 (a.s. convergence to a finite random vector): Consider the consensus algorithm in (6) with initial state X̄(0) ∈ ℝ^{NM}. The weight coefficients satisfy the persistence condition in (9). Assume that the time-varying connectivity graph Laplacian L(j) is independent of the communication noise φ_{n,l}(j) for 1 ≤ n, l ≤ N. If L(j) = L̄ + L̃(j) with mean L̄ = E[L(j)] is such that λ_2(L̄) > 0, and if p(l, n) > 0 for {l, n} ∈ E(j), then there exists an almost surely finite real random vector Θ such that

\mathbb{P}\left[ \lim_{j \to \infty} \bar{X}(j) = \mathbf{1}_N \otimes \Theta \right] = 1 .

Proof Since the mean connectivity graph L̄ is connected with non-zero link probability, for j large enough each node will receive information from the other nodes and generate its updated local estimate. For a fixed k, let J_k = inf{j ≥ 0 | (S_k^j)^c = ∅}. Then, Γ(j) = γ(j)I_N for j ≥ J_k and (6) becomes

\bar{X}(j+1) = \bar{X}(j) - \gamma(j) \left[ \left( L(j) \otimes I_M \right) \bar{X}(j) + \bar{\Phi}(j) \right], \quad j \ge J_k .
(11)

Define the average of X̄(j) as X̄_avg(j) = (1/N)(1^T ⊗ I_M)X̄(j). Multiplying both sides of (11) by (1/N)(1^T ⊗ I_M) and using the fact that 1^T L(j) = 0_N^T, for (S_k^j)^c = ∅ we have

\bar{X}_{\mathrm{avg}}(j+1) = \bar{X}_{\mathrm{avg}}(j) - \varepsilon(j) = \bar{X}_{\mathrm{avg}}(J_k) - \sum_{l=J_k}^{j} \varepsilon(l),

where ε(j) = (γ(j)/N)(1^T ⊗ I_M)Φ̄(j). Assuming that the receiver noise is zero-mean and independent over time, we obtain

E\left[ \| \varepsilon(j) \|^2 \right] = \frac{\gamma^2(j)}{N^2} E\left[ \bar{\Phi}(j)^T \left( \mathbf{1}^T \otimes I_M \right)^T \left( \mathbf{1}^T \otimes I_M \right) \bar{\Phi}(j) \right] = \frac{\gamma^2(j)}{N^2} E\left[ \sum_{1 \le n \le N} \phi_n(j)^T \phi_n(j) \right],

where φ_n(j) = -Σ_{l ∈ Ω_n(j)} φ_{n,l}(j) denotes the total incoming noise at node n from the nodes l ∈ Ω_n(j), and the last step follows from the independence of φ_l(j) and φ_n(j). By assuming that E[φ_{l,n}(j)φ_{l,n}(j)^T] = σ²I_M for 1 ≤ l, n ≤ N, we obtain

E\left[ \| \varepsilon(j) \|^2 \right] \le \frac{\gamma^2(j)}{N^2} M N (N-1) \sigma^2 = \gamma^2(j) \frac{M(N-1)}{N} \sigma^2 .

From the independence of X̄(j) and Φ̄(j) and the independence of the noise over time, we then have that

E\left[ \| \bar{X}_{\mathrm{avg}}(j+1) \|^2 \right] \le E\left[ \bar{X}_{\mathrm{avg}}(J_k)^T \bar{X}_{\mathrm{avg}}(J_k) \right] + \sum_{l=J_k}^{j} \gamma^2(l) \frac{M(N-1)}{N} \sigma^2 .

Denote X̄_avg(j) = [X̄_avg,1(j), ..., X̄_avg,M(j)]^T. It can easily be seen that

E\left[ \left( \bar{X}_{\mathrm{avg},m}(j+1) \right)^2 \right] \le E\left[ \left( \bar{X}_{\mathrm{avg},m}(J_k) \right)^2 \right] + \sum_{l=J_k}^{j} \gamma^2(l) \frac{(N-1)}{N} \sigma^2 .

Hence, the sequence {X̄_avg,m(j)} is an L₂-bounded martingale and thus converges a.s. and in L₂ to a finite random scalar θ. Define X̄_m(j) = [e_m^T x̄_1(j), ..., e_m^T x̄_N(j)]^T. From the conclusion of Lemma 3, we have that

\mathbb{P}\left[ \lim_{j \to \infty} \left\| \bar{X}(j) - \mathbf{1}_N \otimes \bar{X}_{\mathrm{avg}}(j) \right\| = 0 \right] = 1, which implies that

\mathbb{P}\left[ \lim_{j \to \infty} \left\| \bar{X}_m(j) - \bar{X}_{\mathrm{avg},m}(j) \mathbf{1}_N \right\| = 0 \right] = 1. Then, we obtain that

\mathbb{P}\left[ \lim_{j \to \infty} \bar{X}_m(j) = \theta \mathbf{1}_N \right] = 1, and the theorem follows.

Theorem 1 shows that the proposed distributed tracking with consensus algorithm reaches consensus almost surely and that the consensus estimate lim_{j→∞} x̄_n(j) is a finite random vector Θ. Since the consensus algorithm in (6) falls in the framework of stochastic approximation, we may also analyze its convergence rate based on the ODE (ordinary differential equation) method [37]. The next theorem characterizes an upper bound on the convergence rate of the proposed distributed tracking with consensus algorithm.

Theorem 2 (convergence rate): Consider the consensus algorithm in (6) with initial state X̄(0) ∈ ℝ^{NM}. The weight coefficients satisfy the persistence condition in (9) and γ(j) ≤ 2/(λ_2(L̄) + λ_N(L̄)). Assume that the time-varying connectivity graph Laplacian L(j) is independent of the communication noise φ_{n,l}(j) for 1 ≤ n, l ≤ N. For j ≥ J_k, the effective network graph Laplacian is ℒ(j) = L̄ + L̃(j) with mean L̄ = E[L(j)]. If the connectivity graph Laplacian L(j) with mean L̄ = E[L(j)] is such that λ_2(L̄) > 0, and if p(l, n) > 0 for {l, n} ∈ E(j), then the convergence rate^e of the proposed consensus algorithm is bounded by -λ_2(L̄)(1/(J - J_k))Σ_{j=J_k}^{J} γ(j).

Proof For a fixed k, let J_k = inf{j ≥ 0 | (S_k^j)^c = ∅}. From the asymptotic unbiasedness of Θ, we have lim_{j→∞} E[X̄(j)] = 1_N ⊗ r, where r = X̄_avg(J_k). For j ≥ J_k, define Ξ(j) = I_{NM} - γ(j)(L̄ ⊗ I_M), where L̄ = E[L(j)]. Using the fact that L(j) and X̄(j) are independent and E[Φ̄(j)] = 0_{NM}, from (6) we have that

E\left[ \bar{X}(j+1) \right] = \Xi(j) E\left[ \bar{X}(j) \right] = \prod_{l=J_k}^{j} \Xi(l) \, E\left[ \bar{X}(J_k) \right], \quad j \ge J_k .
(12)

From the persistence condition γ(j) > 0, Σ_{j≥0} γ(j) = ∞ and Σ_{j≥0} γ²(j) < ∞ [16], it follows that γ(j) → 0. From the mixed-product property of the Kronecker product, (A ⊗ B)(C ⊗ D) = AC ⊗ BD, and the fact that (I_N - γ(j)L̄)1_N = 1_N [32], we have

\mathbf{1}_N \otimes r = \Xi(j) \left( \mathbf{1}_N \otimes r \right).
(13)

From (12) and (13), it can be shown that

\left\| E\left[ \bar{X}(j) \right] - \mathbf{1}_N \otimes r \right\| \le \prod_{l=J_k}^{j-1} \bar{\rho}\left( I - \gamma(l) \bar{L} \right) \left\| E\left[ \bar{X}(J_k) \right] - \mathbf{1}_N \otimes r \right\| = \prod_{l=J_k}^{j-1} \left( 1 - \gamma(l) \lambda_2(\bar{L}) \right) \left\| E\left[ \bar{X}(J_k) \right] - \mathbf{1}_N \otimes r \right\|,

where the last step follows from Lemma 8 of [15] and ρ̄(·) denotes the spectral radius of a matrix. From the assumption on the weight coefficient γ(j), we have 0 ≤ γ(l)λ_2(L̄) ≤ 1. Since 1 - α ≤ e^{-α} for 0 ≤ α ≤ 1, we then have that

\left\| E\left[ \bar{X}(j) \right] - \mathbf{1}_N \otimes r \right\| \le e^{-\lambda_2(\bar{L}) \sum_{l=J_k}^{j-1} \gamma(l)} \left\| E\left[ \bar{X}(J_k) \right] - \mathbf{1}_N \otimes r \right\| .
(14)

Therefore, as j → J, the convergence rate is bounded by -λ_2(L̄)(1/(J - J_k))Σ_{l=J_k}^{J} γ(l), which depends on the algebraic connectivity λ_2(L̄) and the weights γ(j) for J_k ≤ j ≤ J.

Theorem 2 shows that the convergence rate of the proposed algorithm depends on the topology through the algebraic connectivity λ_2(L̄) of the effective network graph G̃(j) and through the weights γ(j), for j ≥ J_k. Since I_{S_k^j} = I and ℒ(j) = L(j) for j ≥ J_k, we have L̄ = E[ℒ(j)] = E[L(j)]. In (14), λ_2(L̄) is the algebraic connectivity of the mean Laplacian corresponding to the time-varying network graphs. For a static network, this reduces to the algebraic connectivity of the static Laplacian L.

The consensus algorithm in (6) is iterative, and its energy consumption is proportional to the time necessary to achieve consensus and inversely proportional to the transmit power. From [38, 39], for energy-constrained sensor networks, there exists a tradeoff between the convergence time, which depends on the network connectivity, and the transmit power each node needs to establish links with the desired reliability. Therefore, we can minimize the energy consumption of the consensus process by optimizing the transmit power, the network topology, and the weights γ(j).

B Steady-state analysis for noiseless graphs

In this section, we analyze the steady-state performance of the proposed distributed tracking with consensus algorithm. When the filter reaches steady state, the error covariance matrix is time invariant and the corresponding filter gain is constant. Therefore, finding the steady state of the proposed algorithm helps in understanding its asymptotic behavior, analyzing the error covariance, and designing the filter. From (8), it can be seen that the propagation of communication noise implies the non-existence of an upper bound on the covariance matrix. Therefore, the covariance matrix in the Kalman filter may not converge and the filter may not reach steady state. However, the time-varying graph assumption does not affect the existence of a steady state: for J → ∞, consensus is reached over the network and the outputs of the consensus update x̄_n(k, J) and P̄(k, J) depend only on the inputs x̄_n(k, 0) and P̄(k, 0) in the complete data case with noiseless time-varying graphs (in the incomplete data case with noiseless time-varying graphs, this property still holds for some special types of graphs). Hence, the combined system of distributed tracking with consensus can be transformed into a Kalman filter with time-invariant parameters, and steady state can still be reached [1]. In the following, assuming noiseless time-varying graphs, we start with the steady-state analysis for the case of complete data and then extend the results to the case of incomplete data.

1) Complete data with noiseless time-varying graphs

Here, we assume complete data, a scalar target state x ∈ ℝ (for simplicity), and noiseless time-varying graphs, where the connectivity graph Laplacian L(j) with mean L̄ = E[L(j)] is such that λ_2(L̄) > 0, and p(l, n) > 0 for {l, n} ∈ E(j). Note that, since a closed-form equation for P̂_n(k+1|k) cannot easily be obtained when the target state x ∈ ℝ^M for M > 1, the following derivation does not apply to a vector state.

From the result of Theorem 1 for a scalar target state, it can be shown that lim_{J→∞} X̄(k, J) = X̄_avg(k, 0)1_N, where X̄_avg(k, j) = (1/N)1^T X̄(k, j). From the definition of X̄(k, j) and x̄_n(k, 0) = x̂_n(k|k), we have for 1 ≤ n ≤ N

\lim_{J \to \infty} \bar{x}_n(k, J) = \frac{1}{N} \sum_{n=1}^{N} \hat{x}_n(k|k) .
(15)

With the assumptions above, the covariance matrix recursion (8) in the (j + 1)th consensus iteration after the k th tracking update simplifies to P̄(k, j+1) = A(j)P̄(k, j)A(j)^T. In the complete data case, ℒ(j) = L(j). Since 1^T L(j) = 0, from (7) we have 1^T A(j) = 1^T. Then, we obtain

\mathbf{1}^T \bar{P}(k, j+1) \mathbf{1} = \mathbf{1}^T \bar{P}(k, j) \mathbf{1} .
(16)

By applying the result of Theorem 1, we have lim_{J→∞} P̄(k, J) = (X̄_avg(k, 0) - x(k))² 1 1^T. Since all the elements of lim_{J→∞} P̄(k, J) are equal, from (16) it follows that

\lim_{J \to \infty} \bar{P}(k, J) = \frac{\mathbf{1}^T \bar{P}(k, 0) \mathbf{1}}{N^2} \mathbf{1} \mathbf{1}^T = \frac{\sum_{n=1}^{N} \hat{P}_n(k|k)}{N^2} \mathbf{1} \mathbf{1}^T .
(17)

Since P̄_n(k, J) is the n th M × M main diagonal block of P̄(k, J), we have the covariance matrix for node n (1 ≤ n ≤ N) as below:

\lim_{J \to \infty} \bar{P}_n(k, J) = \frac{\sum_{n=1}^{N} \hat{P}_n(k|k)}{N^2} .
(18)

From (15) and (18), we have x̄_n(k, J) = x̄_l(k, J) and P̄_n(k, J) = P̄_l(k, J) for J → ∞ and 1 ≤ n, l ≤ N. Then, each node n sets x̄_n(k|k) = x̄_n(k, J) and P̄_n(k|k) = P̄_n(k, J). From (4), we have x̂_n(k+1|k) = x̂_l(k+1|k) and P̂_n(k+1|k) = P̂_l(k+1|k) for 1 ≤ n, l ≤ N, and it follows that for 1 ≤ n ≤ N

\hat{x}_n(k+1|k) = \frac{F}{N} \sum_{q=1}^{N} \hat{x}_q(k|k-1) + \frac{F}{N} \sum_{q=1}^{N} K_q(k) \left( y_q(k) - H_q \hat{x}_q(k|k-1) \right), \qquad
\hat{P}_n(k+1|k) = Q + \frac{1}{N^2} \sum_{q=1}^{N} F \left( I - K_q(k) H_q \right) \hat{P}_q(k|k-1) F^T .
(19)

Let x ^ n ( k + 1 | k ) = x ^ ( k + 1 | k ) and P ^ n ( k + 1 | k ) = P ^ ( k + 1 | k ) . Then, the combined system of distributed tracking with consensus can be transformed into a single Kalman filter as follows:

\hat{x}(k+1|k) = F \hat{x}(k|k-1) + \frac{F}{N} \sum_{n=1}^{N} K_n(k) \left( y_n(k) - H_n \hat{x}(k|k-1) \right), \qquad
K_n(k) = \hat{P}(k|k-1) H_n^T \left[ H_n \hat{P}(k|k-1) H_n^T + R_n \right]^{-1}, \qquad
\hat{P}(k+1|k) = Q + \frac{1}{N^2} \sum_{n=1}^{N} \left[ F \hat{P}(k|k-1) F^T - F K_n(k) \left( H_n \hat{P}(k|k-1) H_n^T + R_n \right) K_n(k)^T F^T \right].
(20)

Theorem 3 Consider the system dynamics in (1) and (2) and the Kalman filter in (20). Assume that the connectivity graph Laplacian L(j) with mean L̄ = E[L(j)] is such that λ_2(L̄) > 0, and p(l, n) > 0 for {l, n} ∈ E(j). If the pair (F, H_n) is observable for 1 ≤ n ≤ N, then the prediction covariance matrix P̂(k|k-1) converges to a constant matrix

\lim_{k \to \infty} \hat{P}(k|k-1) = P,

where P is the unique positive-definite solution of the discrete algebraic Riccati equation (DARE)

P = Q + \frac{1}{N^2} \sum_{n=1}^{N} \left[ F P F^T - F P H_n^T \left( H_n P H_n^T + R_n \right)^{-1} H_n P F^T \right].
(21)

Proof See the proof of Theorem 4. By setting m = N and β_n(k) = 1 for 1 ≤ n ≤ N, the Kalman filter in (24) reduces to the one in (20). Theorem 3 can thus be considered a special case of Theorem 4 and can be proved in a similar manner.

As a consequence of Theorem 3, the local Kalman filter gain converges to

\lim_{k \to \infty} K_n(k) = P H_n^T \left[ H_n P H_n^T + R_n \right]^{-1} .

From (21), it can be seen that lim_{N→∞} P = Q, i.e., as the size N of the sensor network increases, the steady-state covariance P, which in this case is a scalar, decreases. This implies that if the network size is large enough, the tracking is asymptotically noiseless and follows the target exactly. This result obviously still holds for distributed local Kalman filtering with centralized fusion. However, the distributed tracking with consensus achieves the same performance even if the graph is time-varying, and it also improves robustness and scalability thanks to the consensus exchanges. For the assumed scalar case, for example, if H_n = H and R_n = R for 1 ≤ n ≤ N, then we have K = HP/(H²P + R) and P = (-B + √(B² + 4H²QR))/(2H²), where B = (1 - F²/N)R - H²Q. This implies that for the same sensing model, each node has the same Kalman gain K and prediction covariance P in the steady state.
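The closed-form steady state for this homogeneous scalar case can be checked numerically by iterating the recursion behind (21); the following sketch (illustrative only, assuming the first-simulation values F = 1, Q = 1, H = 1, R = 0.25, N = 20) compares the fixed-point iterate with the quoted formula:

import numpy as np

F, Q, H, R, N = 1.0, 1.0, 1.0, 0.25, 20

# Fixed-point iteration of the prediction-covariance recursion behind (21)
P = Q
for _ in range(200):
    P = Q + (1.0 / N) * (F * P * F - (F * P * H) ** 2 / (H * P * H + R))

# Closed-form steady state quoted in the text
B = (1.0 - F ** 2 / N) * R - H ** 2 * Q
P_closed = (-B + np.sqrt(B ** 2 + 4 * H ** 2 * Q * R)) / (2 * H ** 2)
K_closed = H * P_closed / (H ** 2 * P_closed + R)
print(P, P_closed, K_closed)   # the iterate and the closed form should agree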

2) Incomplete data with noiseless time-varying graphs

Next, we assume incomplete data, a scalar target state x ∈ ℝ, and noiseless time-varying graphs, where the connectivity graph Laplacian L(j) with mean L̄ = E[L(j)] is such that λ_2(L̄) > 0, and p(l, n) > 0 for {l, n} ∈ E(j). Furthermore, we assume that only m nodes can observe the target and, without loss of generality, that the indices of those m nodes are ordered as 1, 2, ..., m, where m is constant and 1 ≤ m ≤ N. This implies that the active node set S_k^0 = {1, 2, ..., m}, which does not require further assumptions on the connectivity graph for consensus, since the graph is connected on average and information can still propagate over the network even if only a fixed number of nodes have observations.

With the assumption of incomplete data and noiseless time-varying graphs, 1^T ℒ(j) = 0 for J_k ≤ j < J. Then, (17) becomes

\lim_{J \to \infty} \bar{P}(k, J) = \frac{\mathbf{1}^T \bar{P}(k, J_k) \mathbf{1}}{N^2} \mathbf{1}\mathbf{1}^T = \frac{\mathbf{1}^T A_0^{J_k-1} \bar{P}(k, 0) \left( A_0^{J_k-1} \right)^T \mathbf{1}}{N^2} \mathbf{1}\mathbf{1}^T = \frac{\sum_{n=1}^{m} \hat{P}_n(k|k) \beta_n^2(k)}{N^2} \mathbf{1}\mathbf{1}^T ,
(22)

where A_0^{J_k-1} = A(J_k - 1) ··· A(0) and β_n(k) = Σ_{l=1}^N [A_0^{J_k-1}]_{ln} is the n th column sum of A_0^{J_k-1}, which depends on time k. The last step of (22) follows from the fact that P̂_n(k|k) = ϵ for m < n ≤ N and some ϵ > 0. Then, as in the previous subsection, we have for 1 ≤ n ≤ N

\lim_{J \to \infty} \bar{P}_n(k, J) = \frac{\sum_{n=1}^{m} \hat{P}_n(k|k) \beta_n^2(k)}{N^2}, \qquad \lim_{J \to \infty} \bar{x}_n(k, J) = \frac{1}{N} \sum_{n=1}^{m} \hat{x}_n(k|k) \beta_n(k) .
(23)

From (23), for J → ∞, we have x̄_n(k, J) = x̄_l(k, J) and P̄_n(k, J) = P̄_l(k, J) for 1 ≤ n, l ≤ N. By setting x̄_n(k|k) = x̄_n(k, J) and P̄_n(k|k) = P̄_n(k, J), from (4) we can obtain recursive update equations for P̂_n(k+1|k) and x̂_n(k+1|k). Furthermore, we also have x̂_n(k+1|k) = x̂_l(k+1|k) and P̂_n(k+1|k) = P̂_l(k+1|k) for 1 ≤ n, l ≤ N. Let x̂_n(k+1|k) = x̂(k+1|k) and P̂_n(k+1|k) = P̂(k+1|k). Then, the combined system of distributed tracking with consensus can be transformed into a single Kalman filter for node n (1 ≤ n ≤ m) as below:

\hat{x}(k+1|k) = \frac{F}{N} \sum_{n=1}^{m} \hat{x}(k|k-1) \beta_n(k) + \frac{F}{N} \sum_{n=1}^{m} K_n(k) \left( y_n(k) - H_n \hat{x}(k|k-1) \right) \beta_n(k), \qquad
K_n(k) = \hat{P}(k|k-1) H_n^T \left[ H_n \hat{P}(k|k-1) H_n^T + R_n \right]^{-1}, \qquad
\hat{P}(k+1|k) = Q + \frac{1}{N^2} \sum_{n=1}^{m} \left[ F \hat{P}(k|k-1) F^T - F K_n(k) \left( H_n \hat{P}(k|k-1) H_n^T + R_n \right) K_n(k)^T F^T \right] \beta_n^2(k),
(24)

where (20) is a special case of (24) with m = N and β_n(k) = 1 for 1 ≤ n ≤ N.

Theorem 4 Consider the system dynamics in (1) and (2) and the Kalman filter in (24). Assume that m nodes can observe the target and that the indices of those m nodes are fixed and ordered as 1, ..., m. The connectivity graph Laplacian L(j) with mean L̄ = E[L(j)] is such that λ_2(L̄) > 0, and p(l, n) > 0 for {l, n} ∈ E(j). The connectivity graph has switching topologies and is periodic, so that β_n(k) = β_n is time invariant. If the pair (F, H_n) is observable for 1 ≤ n ≤ m, then the prediction covariance matrix P̂(k|k-1) converges to a constant matrix

\lim_{k \to \infty} \hat{P}(k|k-1) = P,

where P is the unique positive-definite solution of the discrete algebraic Riccati equation (DARE)

P = Q + \frac{1}{N^2} \sum_{n=1}^{m} \left[ F P F^T - F P H_n^T \left( H_n P H_n^T + R_n \right)^{-1} H_n P F^T \right] \beta_n^2 .
(25)

Proof See Appendix B.

Theorem 4 asserts that if the connectivity graph topology is switching and periodic, the proposed algorithm can still reach steady state, and the steady-state covariance matrix can be obtained by solving (25). The graph topology conditions assumed in Theorem 4 are strong. However, they may still be applicable in certain situations, such as the satellite surveillance network in [29], since the existence of a communication link depends on the distance between nodes, and the trajectories of satellites are pre-determined and periodic whenever the ratios of the orbit periods are rational. As an example, consider the network model in Figure 4. The connectivity graph in Figure 4 is switching and periodic with period equal to 4, and it can be seen that the graph is connected on average. Let N = 6, m = 4, S_k^0 = {1, 2, 3, 4}, and γ(j) = 1/(j + 1) for 0 ≤ j < J. After iteration J_k = 1, all nodes have updated local estimates to be shared. In this case, A_0^{J_k-1} becomes

Figure 4. A time-varying graph with switching topologies.

A_0^{J_k-1} = \begin{bmatrix} -1 & 1 & 0 & 1 & 0 & 0 \\ 1 & -2 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & -1 & 0 & 0 \\ \tfrac{1}{3} & 0 & \tfrac{1}{3} & \tfrac{1}{3} & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.

It can be seen that A_0^{J_k-1} is time invariant because of the fixed m and the periodic graph topology. Thus, β_n(k) = β_n is also time invariant and (1/N)Σ_{n=1}^m β_n = 1, which follows from the condition for unbiasedness of the consensus estimate X̄(k, j). As we will see from the simulation results in Section 4, the filter indeed reaches steady state in this case; the error covariance matrix then becomes time invariant and the corresponding filter gain is constant.

4 Numerical examples

In this section, we evaluate the performance of the proposed distributed tracking with consensus algorithm and compare it with the centralized Kalman filter and distributed local Kalman filtering with centralized fusion. The performance of the centralized Kalman filter is well understood [40] and provides a benchmark for distributed local Kalman filtering with centralized fusion. In distributed local Kalman filtering with centralized fusion, all nodes send their filtered estimates to a fusion center, which then generates a fused estimate x̂_fusion(k) = (1/|S_k^0|) Σ_{n ∈ S_k^0} x̂_n(k|k).

In the first simulation, we compare the performance of the proposed algorithm with the distributed local Kalman filtering with centralized fusion and the centralized Kalman filter over a random graph with noisy communication links and incomplete data. We consider a random connectivity graph G(N, p) with N = 20 and link probability p = 0.5. The other parameters of the simulation setup are: F = 1, Q = 1, x(0) = 0, P_0 = 0, R_n = 0.25, H_n = 1, σ_{l,n}² = σ² = 0.1, S_k^0 = {n | 1 ≤ n ≤ 10}, and J = 30.

Figure 5a shows the node estimates of the three algorithms on a time-varying graph with noisy communication links. As we can see, the node estimates of all three algorithms follow the target's trajectory. In Figure 5a, the curve with cross markers denotes the first node's estimate obtained with the distributed tracking with consensus algorithm, the dashed curve denotes the distributed local Kalman filtering with centralized fusion, the curve with circle markers denotes the centralized Kalman filter, and the solid curve denotes the target's trajectory. Figure 5b compares the resulting mean squared error (MSE) of the three algorithms, where the MSE of the distributed tracking with consensus is defined as the average MSE over all nodes, (1/N) Σ_{n=1}^N (x̄_n(k, J) - x(k))^T (x̄_n(k, J) - x(k)). In Figure 5b, it can be seen that the MSE of the proposed distributed tracking with consensus algorithm is close to that of the distributed local Kalman filtering with centralized fusion. As expected, both are higher than the MSE of the centralized Kalman filter, which acts as a benchmark. The results in Figure 5 show that the performance of the proposed distributed tracking with consensus algorithm is close to that of the distributed local Kalman filtering with centralized fusion on a time-varying random graph with noisy communication and incomplete data. Additional communication bandwidth, which depends on the graph topology G and the number of iterations J, is required for the proposed algorithm due to the information exchange among nodes. However, it avoids the bandwidth bottleneck at the fusion center of the centralized fusion approach and has a high level of fault tolerance and reliability. Also, because of its advantages of fully distributed implementation, robustness, and scalability, it may be preferable in practical applications.

Figure 5. Comparison of the proposed distributed tracking with consensus algorithm with distributed local Kalman filtering with centralized fusion and the centralized Kalman filter. (a) Node estimates; (b) mean squared error.

In the second simulation, we consider the two-dimensional tracking problem treated in [22]. The connectivity graph is again assumed to be a random graph G(N, p) with N = 50 and link probability p = 0.5. The probability of each node having an observation at a given time instant is p_s = 0.9. The other parameters of the simulation setup are as follows: F = I_2 + εF_0 + (ε²/2)F_0² + (ε³/6)F_0³ with F_0 = [0, -2; 2, 0] and ε = 0.015, Q = (εc_w²)²I_2 with c_w = 5, x(0) = [15, -10]^T, H_n = [1, 0] for odd n and H_n = [0, 1] for even n, R_n = c_v²n for n = 1, ..., N with c_v = 30, σ_{l,n}² = σ² = 1, and J = 10. Note that the target moves on a noisy circular trajectory. The target is not fully observable by an individual node, but it is collectively observable by all nodes.
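For reference, the state-transition and process-noise matrices of this setup can be formed as in the sketch below (a direct transcription of the stated parameters into Python/NumPy; variable names are illustrative):

import numpy as np

eps = 0.015
F0 = np.array([[0.0, -2.0],
               [2.0,  0.0]])
# Truncated matrix exponential: F = I + eps*F0 + (eps^2/2) F0^2 + (eps^3/6) F0^3
F = (np.eye(2) + eps * F0 + (eps ** 2 / 2) * F0 @ F0
     + (eps ** 3 / 6) * F0 @ F0 @ F0)
Q = (eps * 5 ** 2) ** 2 * np.eye(2)   # Q = (eps * c_w^2)^2 I_2 with c_w = 5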

Figure 6a shows the node estimates (trajectories) of the two algorithms over a time-varying graph with incomplete data. In Figure 6a, the curves with markers denote all the node estimates obtained with the distributed tracking with consensus algorithm, while the dashed curve denotes the distributed local Kalman filtering with centralized fusion and the solid curve denotes the target's trajectory. As we can see, both algorithms overcome the impact of partial observations at each node, resulting in improved overall observation quality, and the node estimates obtained with the distributed tracking with consensus algorithm are noisy due to the communication noise. Note that the estimates are close to the trajectory of the target but with a small gap, because the observation noise covariance is rather large at each node. Figure 6b compares the resulting MSE of these algorithms. It can be seen that the mean squared error of the proposed algorithm is slightly higher than that of the distributed Kalman filtering with centralized fusion.

Figure 6. Comparison of the proposed distributed tracking with consensus algorithm and the distributed Kalman filter with centralized fusion in a two-dimensional tracking problem. (a) Trajectory; (b) mean squared error.

Next, we study the steady-state behavior in the case of time-varying graphs with complete data and noiseless communication. We consider a random connectivity graph G(N, p) with N = 6 and link probability p = 0.5. The other parameters of the simulation setup are as follows: F = 1, Q = 1, x(0) = 0, P_0 = 0.5, R_n = 0.25, σ_{l,n}² = σ² = 0, J = 30, and H_n = 1. Figure 7a shows the node consensus estimates x̄_n(k, J) over a random graph with noiseless communication links and complete data. It can be seen that all node estimates x̄_n(k, J) converge to the same value and follow the target state, as asserted by Theorem 1. Figure 7b,c shows the node estimates x̄_n(k, j) in the consensus update after the twenty-first tracking update and the variance of all the node estimates, respectively. Here, the variance of all the node estimates is defined as var(k, j) = E[(x̄_n(k, j) - μ(k, j))^T(x̄_n(k, j) - μ(k, j))], where μ(k, j) = (1/N)Σ_{n=1}^N x̄_n(k, j). From Figure 7b, it can be seen that the node estimates converge to the average, which is also confirmed in Figure 7c, where the variance var(k, j) decreases as the consensus iteration number increases and becomes static (around 10^{-17}) after consensus is reached. Figure 8 shows the node estimate variance P̂_n(k|k-1) and the Kalman gain K_n(k) of the filter in (20). It can be seen that as the Kalman filter reaches steady state, both the node estimate variance and the Kalman gain converge, as asserted by Theorem 3.

Figure 7. Performance of the distributed tracking with consensus algorithm for the complete data and noiseless communication case. (a) Node consensus estimates x̄_n(k, J) versus tracking time step; (b) node estimates x̄_n(k, j) versus consensus iteration number; (c) variance of node estimates var(k, j) versus consensus iteration number.

Figure 8. Steady-state performance of the distributed tracking with consensus algorithm for the complete data and noiseless communication case. (a) Prediction covariance matrix P̂_n(k|k-1); (b) Kalman gain K_n(k).

Next, we study the steady-state behavior on a graph with switching topologies, incomplete data, and noiseless communication. The assumed parameters of the simulation setup are as follows: F = 1, Q = 1, x(0) = 0, P_0 = 0.5, R_n = 0.25, σ_{l,n}² = σ² = 0, N = 6, J = 40, S_k^0 = {1, 3, 4, 6}, H_n = 0.5 for n = 1, 3 and H_n = 1 for n = 4, 6. The connectivity graph Laplacian is

$$L(j) = \begin{cases} L_1, & j = 4m \\ L_2, & j = 4m + 1 \\ L_3, & j = 4m + 2 \\ L_4, & j = 4m + 3 \end{cases} \qquad \text{for } m = 0, 1, 2, \ldots,$$

which is shown in Figure 4. As we can see, the graph is connected on average and $p(l, n) > 0$ for $\{l, n\} \in E(j)$, satisfying the conditions on the connectivity graph Laplacian required in Theorem 1. A small sketch of how such a periodic schedule and the connected-on-average condition can be checked follows below.
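The actual $L_1, \ldots, L_4$ used in the simulation are specified by Figure 4 and are not reproduced here; the sketch below builds four hypothetical single-edge-pair Laplacians only to illustrate the period-4 schedule $L(j)$ and the "connected on average" test (second-smallest eigenvalue of the mean Laplacian strictly positive).

```python
import numpy as np

N = 6

def edge_laplacian(edges, n=N):
    """Laplacian of a graph on n nodes containing only the given undirected edges."""
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, a] += 1; L[b, b] += 1
        L[a, b] -= 1; L[b, a] -= 1
    return L

# Hypothetical per-slot topologies (the paper's L1..L4 come from its Figure 4)
L1 = edge_laplacian([(0, 1), (2, 3)])
L2 = edge_laplacian([(1, 2), (4, 5)])
L3 = edge_laplacian([(3, 4), (0, 5)])
L4 = edge_laplacian([(0, 2), (3, 5)])
schedule = [L1, L2, L3, L4]

def L_of_j(j):
    """Periodic switching: L(j) cycles with period 4 over j = 4m, 4m+1, 4m+2, 4m+3."""
    return schedule[j % 4]

L_bar = sum(L_of_j(j) for j in range(4)) / 4.0   # mean Laplacian over one period
lam2 = np.sort(np.linalg.eigvalsh(L_bar))[1]
print(lam2 > 0)                                  # True => connected on average
```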

Figure 9 shows the prediction covariance matrix $\hat{P}_n(k|k-1)$ and the Kalman gain $K_n(k)$ of the filter in (20). It can be seen that as the Kalman filter reaches the steady state, both the prediction covariance matrix and the Kalman gain converge, as asserted by Theorem 4. Note that the limiting Kalman gain differs from node to node in Figure 9 because the observation matrix $H_n$ differs across nodes.
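To make this node-dependence concrete, the short sketch below iterates a standard scalar Kalman prediction-covariance recursion with the parameters listed above ($F = 1$, $Q = 1$, $R_n = 0.25$, $P_0 = 0.5$, $H_n \in \{0.5, 1\}$). It is not the full distributed filter of (20); it only illustrates why nodes with different $H_n$ settle at different steady-state gains and covariances.

```python
import numpy as np

F, Q, R, P0 = 1.0, 1.0, 0.25, 0.5

def steady_state(H, steps=50):
    """Iterate the scalar Kalman prediction covariance P(k+1|k) and gain K(k)."""
    P = P0
    for _ in range(steps):
        K = F * P * H / (H * P * H + R)   # Kalman gain
        P = (F - K * H) * P * F + Q       # prediction covariance update
    return P, K

for H in (0.5, 1.0):
    P_inf, K_inf = steady_state(H)
    print(f"H = {H}: P = {P_inf:.4f}, K = {K_inf:.4f}")   # different limits per H
```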

Figure 9

Steady-state performance of the distributed tracking with consensus algorithm for the incomplete data and noiseless communication case. (a) Prediction covariance matrix $\hat{P}_n(k|k-1)$; (b) Kalman gain $K_n(k)$.

5 Conclusions

In this paper, we considered the problem of distributed tracking with consensus on a time-varying graph with incomplete data and noisy communication links. We developed a framework consisting of tracking and consensus updates to handle a time-varying network topology and incomplete data. We discussed the conditions for achieving consensus, quantified the convergence rate, and analyzed the steady-state performance where applicable. Our simulation results showed that the proposed distributed tracking with consensus algorithm improves the estimation quality at each node and that its performance is close to that of the distributed local Kalman filtering with centralized fusion. The proposed algorithm offers the additional advantages of a fully distributed implementation, robustness, and scalability, which are desirable in practical applications.

Appendix A

Proof of Lemma 3

Proof. Since $\lambda_2(\bar{L}) > 0$ and $p(l, n) > 0$ for $\{l, n\} \in E(j)$, the undirected time-varying connectivity graph $G(j)$ is connected on average with non-zero link probabilities. For $j$ large enough, each node will have received information from every other node and generated its own updated local estimate. For a fixed $k$, let $J_k = \inf\{j \mid (S_k^j)^c = \emptyset,\ j \ge 0\}$. Then the effective network graph coincides with the connectivity graph, $\tilde{G}(j) = G(j)$, so the effective graph Laplacian equals $L(j)$ and $\Gamma(j) = \gamma(j) I_N$ for $j \ge J_k$.

Since $\mathbb{P}[\bar{X}(j) \mid \bar{X}(j-1), \ldots, \bar{X}(0)] = \mathbb{P}[\bar{X}(j) \mid \bar{X}(j-1)]$, the process $\{\bar{X}(j)\}_{j \ge 0}$ is Markov. Define $V(j, \bar{X}) = \bar{X}^T(\bar{L} \otimes I_M)\bar{X}$. Since the graph is assumed undirected and connected on average, $\bar{L}$ is positive semidefinite, so the potential function $V(j, \bar{X})$ is non-negative. Since every $\bar{X} \in \mathcal{C}$ is an eigenvector of $\bar{L} \otimes I_M$ with zero eigenvalue, $V(j, \bar{X}) \equiv 0$ for all $\bar{X} \in \mathcal{C}$, and $\lim_{\bar{X} \to \mathcal{C}} \sup_{j \ge 0} V(j, \bar{X}) = 0$. From the Courant-Fischer theorem [31, 41], for $Z \in \mathbb{R}^{NM}$ with $Z \perp \mathcal{C}$, we have

$$Z^T(\bar{L} \otimes I_M)Z \ge \lambda_2(\bar{L} \otimes I_M)\, Z^T Z.$$
(26)

From Lemma 1 and the definition of the complement of the $\epsilon$-neighborhood of a set in (10), we have $\bar{X} \in \mathcal{V}_\epsilon(\mathcal{C}) \Rightarrow \|\bar{X}_{\mathcal{C}^\perp}\| \ge \epsilon$. Then, for $\bar{X} \in \mathcal{V}_\epsilon(\mathcal{C})$, from (26) and the properties of Kronecker products and eigenvalues, we have

$$V(j, \bar{X}) = \bar{X}^T(\bar{L} \otimes I_M)\bar{X} = \bar{X}_{\mathcal{C}}^T(\bar{L} \otimes I_M)\bar{X}_{\mathcal{C}} + \bar{X}_{\mathcal{C}^\perp}^T(\bar{L} \otimes I_M)\bar{X}_{\mathcal{C}^\perp} \ge \lambda_2(\bar{L} \otimes I_M)\|\bar{X}_{\mathcal{C}^\perp}\|^2 = \lambda_2(\bar{L})\|\bar{X}_{\mathcal{C}^\perp}\|^2 \ge \lambda_2(\bar{L})\,\epsilon^2.$$
(27)

Since $\lambda_2(\bar{L}) > 0$, we get $\inf_{j \ge 0,\ \bar{X} \in \mathcal{V}_\epsilon(\mathcal{C})} V(j, \bar{X}) \ge \lambda_2(\bar{L})\,\epsilon^2 > 0$. Consider the generating operator $\mathcal{L}$ and (11). Using the fact that the effective graph Laplacian equals $L(j)$ for $j \ge J_k$, we obtain

$$\begin{aligned} \mathcal{L}V(j, \bar{X}) &= \mathbb{E}\big[\bar{X}(j+1)^T(\bar{L} \otimes I_M)\bar{X}(j+1) \mid \bar{X}(j) = \bar{X}\big] - \bar{X}^T(\bar{L} \otimes I_M)\bar{X} \\ &= \mathbb{E}\big[\bar{X} - \gamma(j)(L(j) \otimes I_M)\bar{X} - \gamma(j)\bar{\Phi}(j)\big]^T(\bar{L} \otimes I_M)\big[\bar{X} - \gamma(j)(L(j) \otimes I_M)\bar{X} - \gamma(j)\bar{\Phi}(j)\big] - \bar{X}^T(\bar{L} \otimes I_M)\bar{X} \quad \text{for } j \ge J_k. \end{aligned}$$

From (6), we have $\mathbb{E}[\|\bar{\Phi}(j)\|^2] \le \eta$. Using the independence of $L(j)$ and $\bar{\Phi}(j)$ from $\bar{X}(j)$, together with the bound $\bar{X}^T(\bar{L} \otimes I_M)\bar{X} \le \lambda_N(\bar{L})\|\bar{X}_{\mathcal{C}^\perp}\|^2$ [32], after some work we obtain

$$\begin{aligned} \mathcal{L}V(j, \bar{X}) &= -2\gamma(j)\,\bar{X}^T(\bar{L} \otimes I_M)^2\bar{X} + \gamma^2(j)\,\bar{X}^T(\bar{L} \otimes I_M)^3\bar{X} \\ &\quad + \mathbb{E}\Big[\gamma^2(j)\big((\tilde{L}(j) \otimes I_M)\bar{X}\big)^T(\bar{L} \otimes I_M)\big(\tilde{L}(j) \otimes I_M\big)\bar{X}\Big] + \mathbb{E}\Big[\gamma^2(j)\,\bar{\Phi}(j)^T(\bar{L} \otimes I_M)\bar{\Phi}(j)\Big] \\ &\le -2\gamma(j)\,\bar{X}^T(\bar{L} \otimes I_M)^2\bar{X} + \gamma^2(j)\Big(\lambda_N^3(\bar{L})\|\bar{X}_{\mathcal{C}^\perp}\|^2 + \lambda_N(\bar{L})\,\mathbb{E}\big[\|(\tilde{L}(j) \otimes I_M)\bar{X}\|^2\big] + \lambda_N(\bar{L})\,\mathbb{E}\big[\|\bar{\Phi}(j)\|^2\big]\Big) \\ &\le -2\gamma(j)\,\bar{X}^T(\bar{L} \otimes I_M)^2\bar{X} + \gamma^2(j)\Big(\lambda_N^3(\bar{L})\|\bar{X}_{\mathcal{C}^\perp}\|^2 + 4N^2\lambda_N(\bar{L})\|\bar{X}_{\mathcal{C}^\perp}\|^2 + \lambda_N(\bar{L})\,\eta\Big). \end{aligned}$$

The last step follows from the fact that all the eigenvalues of $\tilde{L}(j)$ are less than $2N$ in absolute value, by the Gershgorin circle theorem. Using the fact that $\bar{X}^T(\bar{L} \otimes I_M)\bar{X} \ge \lambda_2(\bar{L})\|\bar{X}_{\mathcal{C}^\perp}\|^2$ from (27), we have

$$\begin{aligned} \mathcal{L}V(j, \bar{X}) &\le -2\gamma(j)\,\bar{X}^T(\bar{L} \otimes I_M)^2\bar{X} + \gamma^2(j)\bigg(\lambda_N(\bar{L})\,\eta + \Big(\frac{\lambda_N^3(\bar{L})}{\lambda_2(\bar{L})} + \frac{4N^2\lambda_N(\bar{L})}{\lambda_2(\bar{L})}\Big)\bar{X}^T(\bar{L} \otimes I_M)\bar{X}\bigg) \\ &\le -2\gamma(j)\,\varphi(j, \bar{X}) + g(j)\big(1 + V(j, \bar{X})\big) \quad \text{for } j \ge J_k, \end{aligned}$$

where $\varphi(j, \bar{X}) = 2\bar{X}^T(\bar{L} \otimes I_M)^2\bar{X}$ and $g(j) = \gamma^2(j)\max\Big\{\lambda_N(\bar{L})\,\eta,\ \frac{\lambda_N^3(\bar{L})}{\lambda_2(\bar{L})} + \frac{4N^2\lambda_N(\bar{L})}{\lambda_2(\bar{L})}\Big\}$. The lemma then follows by applying Lemma 2.
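As a quick numerical sanity check of the two spectral facts used in the proof above, namely the Courant-Fischer bound (26) for vectors orthogonal to the consensus subspace and the Gershgorin-type bound that graph-Laplacian eigenvalues stay below $2N$ in absolute value, the sketch below evaluates both on an arbitrary random graph Laplacian with $M = 1$. It is illustrative only; the random graph and test vector are not taken from the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6

# Random symmetric Laplacian of a graph on N nodes
A = (rng.random((N, N)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T
L_bar = np.diag(A.sum(axis=1)) - A

lam = np.sort(np.linalg.eigvalsh(L_bar))
lam2 = lam[1]

# Courant-Fischer check: for Z orthogonal to the consensus vector 1, Z' L Z >= lam2 ||Z||^2
Z = rng.normal(size=N)
Z = Z - Z.mean()                                   # project out the consensus component
print(Z @ L_bar @ Z >= lam2 * (Z @ Z) - 1e-12)     # True

# Gershgorin-style check: every Laplacian eigenvalue is below 2N in absolute value
print(np.all(np.abs(lam) < 2 * N))                 # True
```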

Appendix B

Proof of Theorem 4

Proof Step 1: Bound on the error covariance

From (1), it can easily be shown that the controllability matrix has full rank and the system is controllable. Since $(F, H_n)$ is detectable, there exists $K_n$ such that $(F - K_nH_n)$ is stable. Consider the suboptimal filter

$$\hat{x}_n(k+1|k) = F\hat{x}_n(k|k-1) + K_n\big(y_n(k) - H_n\hat{x}_n(k|k-1)\big).$$

Since consensus is reached in the consensus update step, $\hat{x}_n(k|k-1) = \hat{x}_l(k|k-1) = \hat{x}(k|k-1)$ for $1 \le n, l \le N$. Then,

$$\hat{x}(k+1|k) = \frac{1}{N}\bigg(F\sum_{n=1}^{m}\beta_n - \sum_{n=1}^{m}K_nH_n\beta_n\bigg)\hat{x}(k|k-1) + \frac{1}{N}\sum_{n=1}^{m}K_ny_n(k)\beta_n.$$

It is easily verified that

$$\tilde{x}(k+1|k) = x(k+1) - \hat{x}(k+1|k) = \bigg(F - \frac{1}{N}\sum_{n=1}^{m}K_nH_n\beta_n\bigg)\tilde{x}(k|k-1) - \frac{1}{N}\sum_{n=1}^{m}K_n\upsilon_n(k)\beta_n + w(k),$$

where the last step follows from the fact that the estimate is unbiased and $\frac{1}{N}\sum_{n=1}^{m}\beta_n = 1$. Since $(F - K_nH_n)$ is stable, $F - \frac{1}{N}\sum_{n=1}^{m}K_nH_n\beta_n$ is also stable. It follows that the covariance matrix $\Pi(k) = \mathrm{Cov}[\tilde{x}(k|k-1)]$ is bounded, where $\mathrm{Cov}(x)$ denotes the covariance matrix of $x$. However, the filter above is suboptimal, so $P(k|k-1) \le \Pi(k)$.

Step 2: Monotonicity of the error covariance

Recall the mapping $f: \hat{P}_n(k|k-1) \mapsto \hat{P}_n(k+1|k)$ given by $\hat{P}_n(k+1|k) = \min_{K_n} g(\hat{P}_n(k|k-1), K_n)$, where

$$g(\hat{P}_n, K_n) = (F - K_nH_n)\hat{P}_n(F - K_nH_n)^T + K_nR_nK_n^T + Q.$$

Thus, if $\hat{P}_n(k|k-1) \le \hat{P}_n'(k|k-1)$, then

$$\hat{P}_n(k+1|k) = \min_{K_n} g(\hat{P}_n(k|k-1), K_n) \le g(\hat{P}_n(k|k-1), K_n'^*) \le g(\hat{P}_n'(k|k-1), K_n'^*) = \min_{K_n} g(\hat{P}_n'(k|k-1), K_n) = \hat{P}_n'(k+1|k),$$

where $K_n'^*$ denotes the minimizing gain for $\hat{P}_n'(k|k-1)$ and the second inequality uses the fact that $g(\cdot, K_n)$ is monotone non-decreasing in its first argument.

Therefore, the mapping $f$ from $\hat{P}_n(k|k-1)$ to $\hat{P}_n(k+1|k)$ is monotonic. Because $\hat{P}(k+1|k) = \frac{1}{N^2}\sum_{n=1}^{m}\hat{P}_n(k+1|k)\beta_n^2$, the mapping $\hat{f}: \hat{P}(k|k-1) \mapsto \hat{P}(k+1|k)$ is also monotonic.
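The monotonicity of $f$ can also be checked numerically. The sketch below evaluates $f(P) = \min_K g(P, K)$ for the $g(\hat{P}_n, K_n)$ defined above, specialized to scalars, using the standard Kalman-gain minimizer $K^* = FPH/(HPH + R)$; the particular numeric values are illustrative assumptions.

```python
F, H, R, Q = 1.0, 1.0, 0.25, 1.0

def g(P, K):
    """One-step covariance for an arbitrary gain K (scalar case)."""
    return (F - K * H) * P * (F - K * H) + K * R * K + Q

def f(P):
    """f(P) = min_K g(P, K), attained at the Kalman gain K* = F P H / (H P H + R)."""
    K_star = F * P * H / (H * P * H + R)
    return g(P, K_star)

P_small, P_large = 0.3, 0.9                 # P_small <= P_large
print(f(P_small) <= f(P_large))             # True: f is monotone non-decreasing
```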

Step 3: Use of zero initial covariance

Suppose $\hat{P}(0|-1) = 0$. Then $\hat{P}(1|0) \ge \hat{P}(0|-1) = 0$, and from Step 2 it follows that $\hat{P}(k+1|k) \ge \hat{P}(k|k-1)$ for all $k \ge 0$. Since $\hat{P}(k|k-1)$ is bounded by Step 1, $\hat{P}(k|k-1) \to P$ for some $P \ge 0$. Clearly, $P$ must be a stationary point of the covariance update equation and hence solves the DARE.

Step 4: Asymptotic stability of the filter

With $\bar{K}_n$ denoting the stationary gain corresponding to $P$, the DARE is

$$P = \bigg(F - \frac{1}{N}\sum_{n=1}^{m}\bar{K}_nH_n\beta_n\bigg)P\bigg(F - \frac{1}{N}\sum_{n=1}^{m}\bar{K}_nH_n\beta_n\bigg)^T + \frac{1}{N^2}\sum_{n=1}^{m}\bar{K}_nR_n\bar{K}_n^T\beta_n^2 + GG^T,$$

where $GG^T = Q$. Let $v$ be a left eigenvector of $F - \frac{1}{N}\sum_{n=1}^{m}\bar{K}_nH_n\beta_n$ with eigenvalue $\lambda$. Then

$$vPv^T = |\lambda|^2\, vPv^T + \frac{1}{N^2}\sum_{n=1}^{m}v\bar{K}_nR_n\bar{K}_n^Tv^T\beta_n^2 + vGG^Tv^T.$$
(28)

Since $R_n$ and $Q$ are positive semidefinite, (28) implies that $|\lambda| \le 1$. It remains to show that $|\lambda| = 1$ is impossible. If $|\lambda| = 1$, then from (28) and the definition of $v$ we have

$$v\bigg(F - \frac{1}{N}\sum_{n=1}^{m}\bar{K}_nH_n\beta_n\bigg) = \lambda v, \qquad v\bar{K}_n = 0, \qquad \text{and} \qquad vG = 0,$$

which gives $v[\lambda I - F,\ G] = 0$. This contradicts the assumption that $(F, G)$ is stabilizable.

Step 5: Nonzero initial covariances

Suppose we use the stationary suboptimal filter with $K_n \equiv \bar{K}_n$ to obtain the estimate $\hat{x}(k|k-1)$. We show that its error covariance converges to $P$.

Defining $\tilde{x}(k|k-1) \triangleq x(k) - \hat{x}(k|k-1)$, we obtain

$$\tilde{x}(k+1|k) = \bigg(F - \frac{1}{N}\sum_{n=1}^{m}\bar{K}_nH_n\beta_n\bigg)\tilde{x}(k|k-1) - \frac{1}{N}\sum_{n=1}^{m}\bar{K}_nv_n(k)\beta_n + w(k).$$

Since $F - \frac{1}{N}\sum_{n=1}^{m}\bar{K}_nH_n\beta_n$ is stable (all eigenvalues satisfy $|\lambda| < 1$), it follows from the above results on stationary behavior that $\Pi(k) \triangleq \mathrm{Cov}[\tilde{x}(k|k-1)] \to \tilde{P} \ge 0$, where $\tilde{P}$ is the unique non-negative solution of the Lyapunov equation:

$$\tilde{P} = \bigg(F - \frac{1}{N}\sum_{n=1}^{m}\bar{K}_nH_n\beta_n\bigg)\tilde{P}\bigg(F - \frac{1}{N}\sum_{n=1}^{m}\bar{K}_nH_n\beta_n\bigg)^T + \frac{1}{N^2}\sum_{n=1}^{m}\bar{K}_nR_n\bar{K}_n^T\beta_n^2 + Q.$$

Substituting $\bar{K}_n$, this is exactly the DARE satisfied by $P$; hence $\tilde{P} = P$. Now, $\hat{x}(k|k-1)$ is suboptimal, so $P(k|k-1) \le \Pi(k) \to \tilde{P}$. On the other hand, by the monotonicity of the mapping $\hat{f}: \hat{P}(k|k-1) \mapsto \hat{P}(k+1|k)$, it follows that $P(k|k-1) \ge P^0(k|k-1) \to P$, where $P^0(k|k-1)$ is the covariance obtained with $P(0|-1) = 0$. Hence, $P(k|k-1) \to P$.
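Step 5's claim, that the error covariance of the stationary-gain filter converges to the Lyapunov solution regardless of the initial covariance, can be illustrated with a small sketch. It specializes the Lyapunov equation above to a scalar system with a single node and $\beta = 1$, so it reads $\tilde{P} = A\tilde{P}A + \bar{K}R\bar{K} + Q$ with $A = F - \bar{K}H$; the numeric values, including the fixed gain, are illustrative assumptions.

```python
F, H, Q, R = 1.0, 1.0, 1.0, 0.25
K_bar = 0.8284                       # a fixed (stationary) gain; any stabilizing value works
A = F - K_bar * H                    # closed-loop coefficient, |A| < 1 here

# Scalar Lyapunov fixed point: P_tilde = A P_tilde A + K_bar R K_bar + Q
P_tilde = (K_bar * R * K_bar + Q) / (1 - A * A)

# Iterate the covariance recursion from two different initial conditions
for Pi0 in (0.0, 10.0):
    Pi = Pi0
    for _ in range(200):
        Pi = A * Pi * A + K_bar * R * K_bar + Q
    print(abs(Pi - P_tilde) < 1e-9)  # True: both converge to the Lyapunov solution
```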

Endnotes

a. The assumption here is that at the beginning of the consensus update process, the filtered estimates at different nodes are statistically uncorrelated.

b. Note that, for $n \in (S_k^j)^c$ with $\sum_{l=1}^{N} A_{n,l}(j) = 0$, node $n$ does not receive information from any node that has a local tracking estimate; then $\bar{x}_n(k, j+1) = \bar{x}_n(k, j)$.

c. Note that similar results on the unbiasedness of the consensus estimate were obtained in [29].

d. For practical consideration, due to the energy constraints of sensor networks, the time period $J$ for the consensus process should not be so long that the nodes can no longer efficiently obtain new information from the source [38]. Simulation results in Section 4 show how the algorithm performs in this case.

e. Note that the convergence rate calculated here is for the period $J_k \le j \le J$, where $J \ge 1$ is the number of consensus iterations. From the persistence condition (9), $\lim_{j\to\infty}\gamma(j) = 0$. Then $\gamma(j)$ is very close to zero and the convergence speed can be assumed negligible for $j \ge J$ and $J$ large enough [26, 42].

References

1. Anderson BDO, Moore JB: Optimal Filtering. Prentice-Hall, Englewood Cliffs; 1979.

2. Reid DB: An algorithm for tracking multiple targets. IEEE Trans Autom Control 1979, 24(6):843-854. 10.1109/TAC.1979.1102177

3. Bar-Shalom Y, Fortmann TE: Tracking and Data Association. Academic Press, Boston; 1988.

4. Bar-Shalom Y, Li XR: Multitarget-Multisensor Tracking: Principles and Techniques. YBS Publishing, Storrs; 1995.

5. Fogel M, Burkhart N, Ren H, Schiff J, Meng M, Goldberg K: Automated tracking of pallets in warehouses: beacon layout and asymmetric ultrasound observation models. In Third IEEE Conference on Automation Science and Engineering (CASE). Scottsdale, AZ; 2007.

6. Wilson J, Bhargava V, Redfern A, Wright P: A wireless sensor network and incident command system for urban firefighting. In Proceedings of the 4th Annual International Conference on Mobile and Ubiquitous Systems: Networking & Services (MobiQuitous). Philadelphia, PA; 2007.

7. Jadbabaie A, Lin J, Morse AS: Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Trans Autom Control 2003, 48(6):988-1001. 10.1109/TAC.2003.812781

8. Ren W, Beard RW, Atkins E: A survey of consensus problems in multi-agent coordination. In Proceedings of American Control Conference. Portland, OR; 2005:1859-1864.

9. Olfati-Saber R, Murray RM: Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans Autom Control 2004, 49(9):1520-1533. 10.1109/TAC.2004.834113

10. Kingston DB, Beard RW: Discrete-time average-consensus under switching network topologies. In Proceedings of American Control Conference. Minneapolis, MN; 2006:3551-3556.

11. Xiao L, Boyd S: Fast linear iterations for distributed averaging. Syst Control Lett 2004, 53:65-78. 10.1016/j.sysconle.2004.02.022

12. Hatano Y, Mesbahi M: Agreement over random networks. IEEE Trans Autom Control 2005, 50(11):1867-1872.

13. Kashyap A, Basar T, Srikant R: Quantized consensus. Automatica 2007, 43(7):1192-1203. 10.1016/j.automatica.2007.01.002

14. Fagnani F, Zampieri S: Average consensus with packet drop communication. SIAM J Control Optim 2009, 48(1):102-133. 10.1137/060676866

15. Kar S, Moura JMF: Sensor networks with random links: topology design for distributed consensus. IEEE Trans Signal Process 2008, 56(7):3315-3326.

16. Kar S, Moura JMF: Distributed consensus algorithms in sensor networks: link failures and channel noise. IEEE Trans Signal Process 2009, 57(1):355-369.

17. Huang M, Manton JH: Coordination and consensus of networked agents with noisy measurement: stochastic algorithms and asymptotic behavior. SIAM J Control Optim 2009, 48(1):134-161. 10.1137/06067359X

18. Li T: Asymptotically unbiased average consensus under measurement noises and fixed topologies. In Proceedings of the 17th IFAC World Congress. Seoul, Korea; 2008:2867-2873.

19. Li T, Zhang JF: Mean square average consensus under measurement noises and fixed topologies: necessary and sufficient conditions. Automatica 2009, 45(8):1929-1936.

20. Li T, Zhang JF: Consensus conditions of multi-agent systems with time-varying topologies and stochastic communication noises. IEEE Trans Autom Control 2010, 55(9):2043-2057.

21. Olfati-Saber R: Distributed Kalman filter with embedded consensus filters. In Proceedings of the 44th IEEE Conference on Decision and Control. Seville, Spain; 2005:8179-8184.

22. Olfati-Saber R: Distributed Kalman filtering for sensor networks. In Proceedings of the 46th IEEE Conference on Decision and Control. New Orleans, LA; 2007:5492-5498.

23. Alriksson P, Rantzer A: Distributed Kalman filtering using weighted averaging. In Proceedings of the 17th Symposium on Mathematical Theory of Networks and Systems. Kyoto, Japan; 2006.

24. Khan U, Moura JMF: Distributing the Kalman filter for large-scale systems. IEEE Trans Signal Process 2008, 56(10):4919-4935.

25. Hong Y, Hu J, Gao L: Tracking control for multi-agent consensus with an active leader and variable topology. Automatica 2006, 42(7):1177-1182. 10.1016/j.automatica.2006.02.013

26. Mosquera C, Lopez-Valcarce R, Jayaweera SK: Stepsize sequence design for distributed average consensus. IEEE Signal Process Lett 2010, 17(2):169-172.

27. Jayaweera SK, Ruan Y, Erwin RS: Distributed tracking with consensus on noisy time-varying graphs with incomplete data. In 10th International Conference on Signal Processing (ICSP 2010). Beijing, China; October 2010.

28. Ruan Y, Jayaweera SK, Mosquera C: Performance analysis of distributed tracking with consensus on noisy time-varying graphs. In 5th Advanced Satellite Multimedia System Conference and the 11th Signal Processing for Space Communications Workshop. Sardinia, Italy; 2010.

29. Jayaweera SK: Distributed space-object tracking and scheduling with a satellite-assisted collaborative space surveillance network (SSN). Final project report, AFRL Summer Faculty Fellowship Program; 2009.

30. Cao Y, Ren W, Li Y: Distributed discrete-time coordinated tracking with a time-varying reference state and limited communication. Automatica 2009, 45(5):1299-1305.

31. Chung FRK: Spectral Graph Theory. American Mathematical Society, Providence; 1997.

32. Laub AJ: Matrix Analysis for Scientists and Engineers. Society for Industrial and Applied Mathematics (SIAM); 2004.

33. Tinati M, Rezaii T: Multi-target tracking in wireless sensor networks using distributed joint probabilistic data association and average consensus filter. In Proceedings of the 2009 International Conference on Advanced Computer Control. Singapore; 2009.

34. Boric-Lubecke O, Lin J, Park B, Li C, Massagram W, Lubecke V, Host-Madsen A: Battlefield triage life signs detection techniques. In Proceedings of the SPIE Defense and Security Symposium, Volume 6947. Orlando, FL; 2008:69470J.1-69470J.10.

35. Soto C, Song B, Chowdhury A: Distributed multi-target tracking in a self-configuring camera network. In Proceedings of IEEE Computer Vision and Pattern Recognition. Miami, FL; 2009:1486-1493.

36. Nevel'son M, Has'minskii R: Stochastic Approximation and Recursive Estimation. American Mathematical Society, Providence; 1973.

37. Ghasemi N, Dey S, Baras J: Stochastic average consensus filter for distributed HMM filtering: almost sure convergence. Institute for Systems Research Technical Reports; 2010.

38. Barbarossa S, Scutari G, Swami A: Achieving consensus in self-organizing wireless sensor networks: the impact of network topology on energy consumption. In Proceedings of IEEE ICASSP. Honolulu, HI; 2007.

39. Sardellitti S, Barbarossa S, Swami A: Average consensus with minimal energy consumption: optimal topology and power consumption. In 18th European Signal Processing Conference. Aalborg, Denmark; 2010.

40. Rao BSY, Durrant-Whyte H, Sheen JA: A fully decentralized multi-sensor system for tracking and surveillance. Int J Robot Res 1993, 12(1):20-44. 10.1177/027836499301200102

41. Mohar B, Alavi Y, Chartrand G, Oellermann OR, Schwenk AJ: The Laplacian spectrum of graphs. Graph Theory Comb Appl 1991, 2:871-898.

42. Kushner H, Yin G: Stochastic Approximation and Recursive Algorithms and Applications. 2nd edition. Springer, New York; 2003.


Acknowledgements

This research was supported in part by the Space Vehicles Directorate of the Air Force Research Laboratory (AFRL) and in part by the National Science foundation (NSF) under the grant CCF-0830545.

Author information


Corresponding author

Correspondence to Yongxiang Ruan.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Ruan, Y., Jayaweera, S.K. & Erwin, R.S. Distributed tracking with consensus on noisy time-varying graphs with incomplete data. EURASIP J. Adv. Signal Process. 2011, 110 (2011). https://doi.org/10.1186/1687-6180-2011-110

