Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering
EURASIP Journal on Advances in Signal Processing volume 2014, Article number: 19 (2014)
Abstract
We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDifPF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDifPF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDifPF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDifEKF). Furthermore, the novel ReDifPF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping, with an internode communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDifPF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDifPF is better suited for real-time applications since it does not require iterative internode communication between measurement arrivals.
1 Introduction
In several engineering applications, e.g., target tracking or fault detection, multiple agents [1] that are physically dispersed over remote nodes on a network cooperate to execute a global task, e.g., estimating a hidden signal or parameter, without relying on a global data fusion center. Each network node is normally equipped with one or more sensors that generate local measurements and can process those measurements independently of the rest of the network. At the same time, however, the network nodes are also able to communicate with each other in order to build, in a collaborative fashion, a joint estimate of the hidden signals or parameters of interest that depends both on local and remote measurements. Ideally, that joint estimate should be equal to, or at least approximate, the optimal global estimate that would be generated by a centralized processor with access to all network measurements.
Most of the previous literature on distributed signal processing over networks is based on linear estimation methods. Specifically, distributed versions of the Kalman filter were proposed, e.g., in [2–4] to track unknown state vectors in linear, Gaussian state-space models. In situations, however, where the state dynamic model or the sensor observation models are nonlinear, the posterior distribution of the states conditioned on the network measurements becomes non-Gaussian (even with Gaussian sensor noise) and, therefore, the linear minimum mean square error (LMMSE) estimate of the states provided, e.g., by an extended Kalman filter (EKF) may differ from the true minimum mean square error (MMSE) estimate given by the expected value of the state vector conditioned on the measurements. In this paper, we focus specifically on an application where multiple passive received-signal-strength (RSS) sensors jointly track a moving emitter assuming, at each network node, nonlinear observation models with possibly unknown static parameters.
1.1 Distributed particle filtering
In nonlinear scenarios, an alternative to approximate the true MMSE estimate is to use a sequential Monte Carlo method such as a particle filter [5, 6]. Several distributed particle filters have been proposed recently (see a comprehensive review in [7]) to handle nonlinear distributed estimation tasks. An important constraint in the design of a distributed estimation algorithm is, however, that most networks of practical interest are only partially connected, i.e., each node can only directly access neighboring nodes in its immediate vicinity according to the network topology. In particular, assuming conditional independence of the different sensor measurements given the state vector, a distributed particle filter (PF) normally requires the computation of a product of likelihood functions that depend on local data only [8]. To compute that product over the network in a fully distributed fashion and with local neighborhood internode communication only, previous references suggest using iterative average consensus [8], iterative Markov chain Monte Carlo move steps [9], or selective gossip algorithms [10]. Alternatively, we proposed in [11] to compute the likelihood product exactly in a finite number of iterations using either iterative minimum consensus [12] or flooding techniques [13]. However, both the consensus- and flooding-based solutions are very costly in terms of bandwidth requirements, as they require multiple rounds of iterative internode communication between two consecutive sensor measurements. Previous works, e.g., [8, 14, 15], propose approximations aimed at reducing the communication cost, but, in all aforementioned schemes, processing and sensing at different time scales are still required.
1.2 Diffusion particle filtering
An alternative to circumvent the high communication cost of consensus algorithms is to use diffusion algorithms [16] which, contrary to the former, do not require multiple iterative internode communications between consecutive measurements. Diffusion algorithms are, however, suboptimal in the sense that they do not simulate at each time step the behavior of the optimal global estimator but, rather, at best approximate the optimal global solution asymptotically over time.
In the distributed linear estimation literature, most diffusion schemes are based on convex combinations of Kalman filters, see, e.g., [3]. Kar et al. proposed in [2] a different approach based on random information dissemination. In a previous conference paper [17], we introduced the random exchange diffusion particle filter (ReDifPF), which generalizes and extends the methodology in [2] to a PF framework by using random information dissemination to build, at each network node, different Monte Carlo representations of the posterior distribution of the states conditioned on random sets of measurements coming from the entire network. Reference [17] assumed, however, that the parameters of the sensor observation model were perfectly known. In this paper, we extend the algorithm to a scenario with unknown parameters and derive in detail a Rao-Blackwellized [18] version of the ReDifPF. In the specific application under consideration in the paper, the unknown parameters are the sensor variances, but most of the methodology in the derivation of the RB ReDifPF is general and could be easily adapted to other signal models and applications provided that, in a fully Bayesian framework, the dynamic posterior probability distribution of the unknown parameters conditioned on the observations and on the simulated particles is a conjugate prior [19] for the likelihood function of the measurements.
An abbreviated description of the RB ReDifPF may also be found in the short paper [20]. This paper consolidates and extends both [17] and [20], including detailed derivations and additional simulation results and comparisons. We also detail approximate versions of the RB ReDifPF where we use Gaussian mixture models (GMM) [21] and moment-matching techniques inspired by [22] to reduce communication requirements.
1.3 Paper outline
The paper is divided into six sections and three appendices. Section 1 is the introduction. Section 2 describes the state and sensor models. Section 3 describes the centralized PF and also briefly reviews the equivalent broadcast, consensus, and flooding implementations introduced in [11]. Section 4 derives the ReDifPF algorithm, considering scenarios with both known and unknown parameters. In the unknown parameter case, we derive in detail the Rao-Blackwellized version of the ReDifPF and introduce approximate versions thereof that enable significant reductions in communication cost. The performance of the proposed algorithms is evaluated with simulated data in a realistic scenario with 25 sensors in Section 5. We compare the ReDifPF algorithm in the unknown parameter scenario to the optimal centralized PF and its equivalent consensus implementations. In the known parameter case, we also compare the proposed ReDifPF tracker to the Markov chain Monte Carlo distributed particle filter (MCDPF) in [9], to a linearized random exchange distributed EKF, which is a variation of the algorithm proposed in [2], and to a distributed bootstrap particle filter based on selective gossip as proposed in [23]. Finally, we present our conclusions in Section 6.
Appendices 1 and 2 show the proofs of some key results in the paper, and Appendix 3 describes the ReDifEKF algorithm used for comparison purposes in Section 5.
2 Problem setup
For simplicity of notation, we use lowercase letters in this paper to denote both random variables/vectors and real-valued samples of random variables/vectors, with the proper interpretation implicit in context.
Without loss of generality, we assume that the emitter trajectory is described by the white noise acceleration model [24]
$$\mathbf{x}_n = \mathbf{F}\,\mathbf{x}_{n-1} + \mathbf{u}_n, \qquad (1)$$
where $\mathbf{x}_n \triangleq [x_n\;\dot{x}_n\;y_n\;\dot{y}_n]^T$ is the hidden state vector at time step n consisting of the positions and velocities of the target's centroid respectively in dimensions x and y; $\mathbf{F}$ is the state transition matrix; and $\{\mathbf{u}_n\}$ is a sequence of independent, identically distributed (i.i.d.) zero-mean Gaussian vectors with covariance matrix $\mathbf{Q}$. Matrices $\mathbf{F}$ and $\mathbf{Q}$, parameterized by the sampling period T and the acceleration noise $\sigma_{\text{accel}}^2$, are detailed in [11, 24].
2.1 Observation model
Let $\mathcal{N}(m, \sigma^2)$ denote the Gaussian probability distribution with mean m and variance $\sigma^2$ and denote by $\mathcal{IG}(a, b)$ the inverse-gamma probability distribution with parameters a and b. The measurements $z_{r,0:n} = \{z_{r,0}, \dots, z_{r,n}\}$ in decibels relative to one milliwatt (dBm) at the r-th node of a network of R RSS sensors are modeled as
$$z_{r,n} = g_r(\mathbf{x}_n) + \sigma_r v_{r,n}, \qquad (2)$$
where $v_{r,n} \sim \mathcal{N}(0, 1)$, $\sigma_r^2 \sim \mathcal{IG}(\alpha, \beta)$, $\forall r \in \mathcal{R} \triangleq \{1, \dots, R\}$, and $\{\mathbf{x}_0, \{\mathbf{u}_n\}, \{v_{r,n}\}, \{\sigma_r^2\}\}$ are mutually independent for all n ≥ 0 and for all $r \in \mathcal{R}$. The nonlinear function $g_r(\cdot)$ in (2) is in turn given by [25]
$$g_r(\mathbf{x}_n) = P_0 - 10\,\zeta_r \log_{10}\!\left(\frac{\|\mathbf{H}\mathbf{x}_n - \mathbf{x}_r\|}{d_0}\right), \qquad (3)$$
where $\mathbf{x}_r$ represents the r-th sensor position, $\|\cdot\|$ is the Euclidean norm, $(P_0, d_0, \zeta_r)$ are known model parameters (see [25] for details), and $\mathbf{H}$ is a 2 × 4 projection matrix such that H(1,1) = H(2,3) = 1 and H(i,j) = 0 otherwise. We also denote by $N_r$ the set of nodes in the neighborhood of node r. The real-valued constants {α, β} are the model's hyperparameters.
Note that in (2), we take a fully Bayesian approach and model the unknown sensor noise variances $\{\sigma_r^2\}$, $r \in \mathcal{R}$, as random variables that are mutually independent for r ≠ s and identically distributed a priori with an inverse-gamma distribution.
2.2 Problem statement and goals
Let $z_{1:R,0:n}$ denote the set $\{z_{r,t}\}$ for all network nodes r = 1, …, R and all time instants t = 0, …, n. Given $z_{1:R,0:n}$, we want to compute the MMSE estimate
$$\hat{\mathbf{x}}_n = E\{\mathbf{x}_n \mid z_{1:R,0:n}\} \qquad (4)$$
at each instant n ≥ 0, where $E\{\mathbf{x}_n \mid z_{1:R,0:n}\}$ denotes the conditional expectation of $\mathbf{x}_n$ given $z_{1:R,0:n}$.
In the sequel, we first describe in Section 3 a recursive, centralized PF algorithm that approximates the desired global MMSE estimate in (4) at each time step n in a scenario with unknown sensor variances $\{\sigma_r^2\}$. Next, we review in Section 3.1 two fully distributed algorithms that operate on a partially connected network and allow exact in-network computation of the state estimate in (4) without a global data fusion center and with internode communication limited to a node's immediate neighborhood according to the network topology. The network connectivity is described by a graph $\mathcal{G} = (\mathcal{R}, \mathcal{E})$, where $\mathcal{R} = \{1, \dots, R\}$ is the set of nodes, and the graph has an edge $(u, v) \in \mathcal{E}$, $(u, v) \in \mathcal{R} \times \mathcal{R}$, if and only if nodes u and v can communicate directly with each other. The particular network graph used in the simulation scenarios in this paper is described in detail in Section 5.
Finally, we introduce in Section 4 a novel diffusion-based algorithm, which is also fully distributed and relies on local internode communication only, specified as before by the network graph, but, rather than yielding an identical estimate (4) at each node, obtains at each node r a suboptimal estimate
$$\hat{\mathbf{x}}_{r,n} = E\{\mathbf{x}_n \mid \mathcal{Z}_{r,0:n}\}, \qquad (5)$$
where $\mathcal{Z}_{r,0:n}$ is a random subset of $z_{1:R,0:n}$, which is different at each node r and includes measurements coming from random locations in the entire network, as opposed to measurements coming only from node r and its neighborhood. Compared to the exact distributed implementations of the optimal global estimate in Section 3.1, the diffusion solution in Section 4, although suboptimal, is designed to have a much lower internode communication cost and, therefore, is better suited for real-time applications.
3 Centralized particle filter
In a centralized architecture, all nodes in the network transmit their local measurements to a data fusion center, which then runs a particle filter that approximates the MMSE estimate of the unknown state vector at each time instant n as
$$E\{\mathbf{x}_n \mid z_{1:R,0:n}\} \approx \sum_{q=1}^{Q} w_n^{(q)}\, \mathbf{x}_n^{(q)}, \qquad (6)$$
where $\{\mathbf{x}_n^{(q)}\}$, $q \in \mathcal{Q} \triangleq \{1, \dots, Q\}$, with the corresponding importance weights $\{w_n^{(q)}\}$, is a properly weighted Monte Carlo set [5, 6] that represents the posterior probability density function (PDF) $p(\mathbf{x}_n \mid z_{1:R,0:n})$ in the sense that the sum on the right-hand side of (6) converges, according to some statistical criterion, to the expectation on the left-hand side when Q → ∞. The random samples $\{\mathbf{x}_n^{(q)}\}$, also called particles, are sequentially generated according to a proposal probability distribution specified by a so-called importance PDF $\pi(\mathbf{x}_n \mid \mathbf{x}_{0:n-1}^{(q)}, z_{1:R,0:n})$. If the blind importance function [5]
$$\pi(\mathbf{x}_n \mid \mathbf{x}_{0:n-1}^{(q)}, z_{1:R,0:n}) = p(\mathbf{x}_n \mid \mathbf{x}_{n-1}^{(q)})$$
is used, then it turns out that the proper importance weights must be updated according to the recursion [6]
$$w_n^{(q)} \propto w_{n-1}^{(q)}\, p(z_{1:R,n} \mid \mathbf{x}_n^{(q)}), \qquad (7)$$
where ∝ denotes 'proportional to', $z_{1:R,n}$ is an alternative notation for the set $\{z_{r,n}\}$, $r \in \mathcal{R}$, and the proportionality constant on the right-hand side of (7) is chosen such that $\sum_{q=1}^{Q} w_n^{(q)} = 1$. From the mutual independence assumptions in the model in Section 2, it follows that
and
From (8) and (9), it can be shown then that (see the proof in[11])
Substituting now (10) into (7), the centralized weight update rule reduces to
$$w_n^{(q)} \propto w_{n-1}^{(q)} \prod_{r=1}^{R} \lambda_{r,n}^{(q)}(\mathbf{x}_n^{(q)}). \qquad (11)$$
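As a concrete illustration, one step of this centralized bootstrap filter, blind proposal followed by the product-of-likelihoods weight update in (7) and (11), can be sketched as below. The generic per-sensor function `g` and the toy linear-Gaussian usage are illustrative stand-ins for the specific RSS model in (1) to (3), not a transcription of it.

```python
import numpy as np

rng = np.random.default_rng(0)

def centralized_pf_step(particles, weights, F, Q_cov, z, g, sigma2):
    """One bootstrap-PF step: blind proposal p(x_n | x_{n-1}) followed by
    the update w_n ∝ w_{n-1} * prod_r N(z_r; g_r(x_n), sigma_r^2)."""
    Qn, dim = particles.shape
    # Blind proposal: x_n^{(q)} ~ N(F x_{n-1}^{(q)}, Q)
    particles = particles @ F.T + rng.multivariate_normal(np.zeros(dim), Q_cov, size=Qn)
    # Accumulate the product of local log-likelihoods over all R sensors
    log_w = np.log(weights)
    for r in range(len(z)):
        resid = z[r] - g(r, particles)              # g(r, .) stands in for g_r(.)
        log_w += -0.5 * resid**2 / sigma2[r] - 0.5 * np.log(2 * np.pi * sigma2[r])
    log_w -= log_w.max()                            # avoid underflow before exponentiating
    w = np.exp(log_w)
    return particles, w / w.sum()
```

Working in the log domain before normalizing is a standard precaution here: the product over R per-sensor likelihoods would otherwise underflow for moderately large networks.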
3.1 Equivalent distributed implementation of the centralized particle filter
Note that each factor $\lambda_{r,n}^{(q)}(\mathbf{x}_n^{(q)})$ in the product on the right-hand side of (11) depends only on local observations. In a fully connected network, assuming that all nodes $r \in \mathcal{R}$ start out at instant n − 1 with the same particles $\{\mathbf{x}_{n-1}^{(q)}\}$, they can all synchronously draw [26] new particles $\{\mathbf{x}_n^{(q)}\}$ according to $p(\mathbf{x}_n \mid \mathbf{x}_{n-1}^{(q)})$, locally compute their own local likelihood functions $\lambda_{r,n}^{(q)}(\mathbf{x}_n^{(q)})$, and then broadcast them to the entire network until all nodes have all the remote likelihood functions and can compute the product on the right-hand side of (11). Synchronous multinomial resampling according to the global weights, followed by regularization, may then be applied (see [11]) to mitigate particle degeneracy and impoverishment [5, 6]. The algorithm described in this paragraph is referred to as the decentralized particle filter (DcPF) in [11] and [27].
As mentioned in Section 1, however, real-world networks are only partially connected, and fully distributed computations of the product in (11) are needed. One possibility is to approximate the product using iterative average consensus [28] as proposed, e.g., in [8] and [29]. Alternatively, we introduced in [11] a fully distributed computation of the global weights in (11) using either iterative minimum consensus [12] or flooding [13]. Both algorithms assume only local communication between nodes in immediate neighborhoods and, to achieve an exact computation of the global weights, require only a finite number of iterative message exchanges between nodes in the time interval between two consecutive sensor measurements.
Let D denote the diameter of the network graph, i.e., the maximum number of hops between any two nodes, and, as before, denote by R the number of nodes in the network. By running R × D consecutive minimum consensus iterations [12] for each particle q, it is possible (see details in [11]) to build an identical ordered list of likelihood functions $\{\lambda_{r,n}^{(q)}(\mathbf{x}_n^{(q)})\}$, $r \in \mathcal{R}$, at all nodes. Each node can then locally compute the product of the likelihoods as in (10) and obtain identical, optimal global importance weights $\{w_n^{(q)}\}$. We refer to that (communication-intensive) minimum-consensus-based distributed tracking algorithm as CbPFa.
A more efficient way, however, to compute the exact optimal global weights at each node is to flood [13] the local node likelihoods over the network. Flooding protocols allow one to (iteratively) broadcast values over a network relying on local neighborhood internode communication only. Given a partially connected sensor network, one can simultaneously flood the R distinct likelihoods over the network as follows. First of all, each node r maintains an ordered list of distinct likelihoods. A likelihood in turn is flagged to indicate that it has not yet been sent to node r's neighbors. Initially, node r stores its local likelihood, flagged, in its list. At a given iteration, node r sends its lowest flagged likelihood to all neighbors and then unflags it. Conversely, it receives remote likelihoods from nodes $s \in N_r$. If a received remote likelihood is not yet included in node r's list, it is inserted, flagged, into the list. This procedure is guaranteed to terminate in a finite number of iterations, namely as soon as each node has R distinct values in its ordered list of likelihoods. We refer to the flooding-based iterative tracker in this paper as the CbPFb algorithm.
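The flooding rule above can be sketched as follows. For readability, each node's local likelihood is represented simply by its node index, and `neighbors` encodes a hypothetical topology; a real implementation would flood the Q-dimensional likelihood vectors instead.

```python
def flood(neighbors):
    """Flood each node's local value over a partially connected network.
    `neighbors` maps node -> list of neighbor nodes. Each node keeps an
    ordered list of distinct values plus a set of flagged (not yet sent)
    values. Returns the final lists and the number of iterations needed."""
    R = len(neighbors)
    lists = {r: [r] for r in neighbors}        # local value = node id (placeholder)
    flagged = {r: {r} for r in neighbors}
    iters = 0
    while any(len(lists[r]) < R for r in neighbors):
        iters += 1
        outbox = {}
        for r in neighbors:                    # pick lowest flagged value, unflag it
            if flagged[r]:
                v = min(flagged[r])
                flagged[r].discard(v)
                outbox[r] = v
        for r, v in outbox.items():            # synchronous delivery to all neighbors
            for s in neighbors[r]:
                if v not in lists[s]:
                    lists[s].append(v)
                    lists[s].sort()            # keep the list ordered
                    flagged[s].add(v)
    return lists, iters
```

On the three-node line network used in Figure 1 (`{1: [2], 2: [1, 3], 3: [2]}`), every node ends up with the full ordered list after a handful of iterations, consistent with the finite-termination claim above.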
Figure 1 illustrates how the proposed flooding protocol iteratively creates at each node r an ordered list comprising all likelihoods across the network in a toy example with three nodes, where node 1 is connected to node 2, node 2 is connected to nodes 1 and 3, and node 3 is connected to node 2 only. A star symbol indicates which likelihoods are flagged in the ordered list maintained by each node r at a given iteration j.
Although optimal in the sense of reproducing the centralized solution, the minimum consensus and flooding algorithms in [11] are still communication-intensive due to the requirement of iterative internode communication between sensor measurement arrivals. In the next sections, we describe an alternative fully distributed diffusion-based solution that drops this requirement and is the main topic of this paper.
4 Random exchange diffusion particle filter
In this section, we derive an alternative distributed PF based on random information dissemination that extends the methodology in [2] to a Monte Carlo framework. We also present a Rao-Blackwellized version of the proposed distributed PF in a scenario with unknown sensor parameters.
Let $\mathcal{Z}_{s,0:n-1}$ denote the set of all network measurements assimilated by node s up to instant n − 1. Next, let $\{\mathbf{x}_{s,0:n-1}^{(q)}\}$ with associated weights $\{w_{s,n-1}^{(q)}\}$, $q \in \mathcal{Q}$, be a properly weighted set that represents the posterior PDF $p(\mathbf{x}_{0:n-1} \mid \mathcal{Z}_{s,0:n-1})$ at node s. Assume now that at instant n − 1, node s sends its particles and weights to a neighboring node r that can assimilate at instant n the measurements $\mathcal{Z}_{r,n} = \{z_{i,n}\}$, $i \in \{r\} \cup N_r$. At instant n, the new particle set at node r,
$$\mathbf{x}_{r,0:n}^{(q)} = (\mathbf{x}_{s,0:n-1}^{(q)}, \mathbf{x}_{r,n}^{(q)}), \quad \mathbf{x}_{r,n}^{(q)} \sim p(\mathbf{x}_n \mid \mathbf{x}_{s,n-1}^{(q)}), \qquad (12)$$
with updated weights $w_{r,n}^{(q)}$ such that
$$w_{r,n}^{(q)} \propto w_{s,n-1}^{(q)} \prod_{i \in \{r\} \cup N_r} p(z_{i,n} \mid \mathbf{x}_{r,n}^{(q)}), \qquad (13)$$
is now a properly weighted set to represent the updated posterior $p(\mathbf{x}_{0:n} \mid \mathcal{Z}_{r,n}, \mathcal{Z}_{s,0:n-1})$, where $\{\mathcal{Z}_{r,n}, \mathcal{Z}_{s,0:n-1}\}$ is redefined as $\mathcal{Z}_{r,0:n}$. Resampling from the particle weights followed by regularization may be added to combat particle degeneracy and restore particle diversity, i.e., for $q \in \mathcal{Q}$ (see also [11]):
1. Draw $l^{(q)}$ from $\{1, 2, \dots, Q\}$ with $\Pr(\{l^{(q)} = l\}) = w_{r,n}^{(l)}$, where Pr(A) denotes the probability of an event A.
2. Make $\bar{\mathbf{x}}_{r,n}^{(q)} = \mathbf{x}_{r,n}^{(l^{(q)})} + h D_n \mathbf{x}^{\ast}$, where $\mathbf{x}^{\ast} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, $D_n D_n^T$ is equal to the empirical covariance of the weighted particles $\{(\mathbf{x}_{r,n}^{(q)}, w_{r,n}^{(q)})\}$, and h > 0 is an empirically adjusted parameter.
3. Reset the particle weights $w_{r,n}^{(q)}$ to 1/Q and make $\mathbf{x}_{r,n}^{(q)} = \bar{\mathbf{x}}_{r,n}^{(q)}$.
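The resampling-with-regularization steps above can be sketched as below; the bandwidth parameter h and the jitter added before the Cholesky factorization are illustrative choices, not values prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def resample_regularize(particles, weights, h=0.2):
    """Multinomial resampling by weight, followed by the regularization move
    x_bar = x^{(l)} + h * D_n * eps with eps ~ N(0, I), where D_n D_n^T is
    the empirical covariance of the weighted particles. Weights reset to 1/Q."""
    Qn, dim = particles.shape
    # Step 1: draw l^{(q)} with Pr(l^{(q)} = l) = w^{(l)}
    idx = rng.choice(Qn, size=Qn, p=weights)
    # Step 2: regularization move, D_n from a Cholesky factor of the covariance
    mean = weights @ particles
    centered = particles - mean
    cov = (weights[:, None] * centered).T @ centered
    D = np.linalg.cholesky(cov + 1e-9 * np.eye(dim))   # small jitter for stability
    moved = particles[idx] + h * rng.standard_normal((Qn, dim)) @ D.T
    # Step 3: reset weights to 1/Q
    return moved, np.full(Qn, 1.0 / Qn)
```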
Random exchange protocol In order to build, at each instant n and at each node r, different Monte Carlo representations of the posterior distribution conditioned on different sets of observations $\mathcal{Z}_{r,0:n}$ coming from random locations in the entire network, it suffices to implement a protocol where each node r, starting from instant zero, exchanges its particles and weights with a randomly chosen neighboring node s, propagates the received particles using the blind importance function as in (12), and then updates their weights as in (13).
Figure 2 illustrates the evolution of the marginal posterior at each node of a linear network containing three nodes running the random exchange protocol over four time instants. Initially, each node r ∈ {1, 2, 3} has a posterior at instant zero conditioned on the measurements $\mathcal{Z}_{r,0} = \{z_{i,0}\}$, $i \in \{r\} \cup N_r$, in its vicinity only. At each time instant n ∈ {1, 2, 3}, network nodes perform the sequence of random exchanges as indicated in the rightmost column of Figure 2 and then update the received posterior by assimilating measurements in their respective neighborhoods.
Note that in the linear network topology shown in Figure 2, node 2 always performs two random exchanges at each time instant n. Generally speaking, however, at a given instant n, a node r exchanges its parameters at least once with a randomly chosen neighbor s and, in the worst case, performs d(r) random exchanges between two measurement arrivals with nodes in its vicinity, where d(r) is the degree of node r, i.e., its number of neighbor nodes.
Unlike randomized gossip algorithms [30], this procedure diffuses information by randomly propagating posterior statistics across the network. More specifically, as the initial posterior statistics provided by a given node $r_0$ at time 0 follow a path $\mathcal{P} \triangleq \{r_0, r_1, \dots, r_n\}$ along the network, they assimilate the available measurements $\mathcal{Z}_{r,n}$ in the neighborhood of each visited node $r \in \mathcal{P}$. Since, as illustrated in Figure 2, the initial posteriors at each node follow different paths, the posterior available at node $r_n$ at time n will be different from those in the remaining nodes. Thus, network nodes will provide different estimates conditioned on distinct sets of measurements.
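As a toy illustration of this diffusion mechanism, the sketch below abstracts each node's posterior statistics as the set of (origin node, time) labels of the measurements it has assimilated; the fixed swap order within one time instant is a simplifying assumption, not the paper's exact scheduling.

```python
import random

def random_exchange(neighbors, n_steps, seed=0):
    """Simulate the random exchange protocol: at each instant, every node in
    turn swaps its posterior statistics with one randomly chosen neighbor;
    afterwards each node assimilates the measurements of its own closed
    neighborhood. Statistics are abstracted as sets of (origin, time) labels."""
    rng = random.Random(seed)
    # At n = 0, each node holds the measurements of its closed neighborhood only
    post = {r: {(i, 0) for i in [r] + neighbors[r]} for r in neighbors}
    for n in range(1, n_steps + 1):
        for r in sorted(neighbors):          # simplified sequence of pairwise swaps
            s = rng.choice(neighbors[r])
            post[r], post[s] = post[s], post[r]
        for r in neighbors:                  # assimilate local measurements at time n
            post[r] = post[r] | {(i, n) for i in [r] + neighbors[r]}
    return post
```

On the three-node line network of Figure 2, after a few instants the label sets held at each node contain measurements originating from across the whole network while still differing from node to node, matching the discussion above.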
4.1 ReDifPF with known sensor variances
If the parameters of the sensor observation model at each node r are deterministic and perfectly known, then
$$p(z_{i,n} \mid \mathbf{x}_n, \sigma_i^2) = p(z_{i,n} \mid \mathbf{x}_n) = \mathcal{N}\!\left(z_{i,n} \mid g_i(\mathbf{x}_n), \sigma_i^2\right). \qquad (14)$$
At instant n, then, upon receiving $(w_{s,n-1}^{(q)}, \mathbf{x}_{s,n-1}^{(q)})$, $q \in \mathcal{Q}$, from node s, the particle filter at node r samples as before
$$\mathbf{x}_{r,n}^{(q)} \sim p(\mathbf{x}_n \mid \mathbf{x}_{s,n-1}^{(q)})$$
and updates its weights as
$$w_{r,n}^{(q)} \propto w_{s,n-1}^{(q)} \prod_{i \in \{r\} \cup N_r} p(z_{i,n} \mid \mathbf{x}_{r,n}^{(q)}), \qquad (15)$$
where $z_i \mid \mathbf{x}_{r,n}^{(q)} \sim \mathcal{N}(g_i(\mathbf{x}_{r,n}^{(q)}), \sigma_i^2)$.
Internode transmission requirements From the previous discussion, it follows that, in the scenario with known variances, at each instant n it suffices for each node s to transmit to the chosen neighbor r the set of particles $\{\mathbf{x}_{s,n-1}^{(q)}\}$ (4Q real numbers for a four-dimensional state space) and the respective set of importance weights $\{w_{s,n-1}^{(q)}\}$ (Q real numbers). In addition, node s also sends its scalar observation $z_{s,n}$ and the known observation model parameters $(\zeta_s, \mathbf{x}_s, \sigma_s^2)$ (see (3)) to all nodes i in the neighborhood of s.
4.2 Rao-Blackwellized ReDifPF with unknown sensor variances
Let $\mathit{IG}(\sigma^2 \mid \alpha, \beta)$ denote the PDF of a continuous random variable $\sigma^2$ with an inverse-gamma distribution specified by the parameters α and β, i.e. [19],
$$\mathit{IG}(\sigma^2 \mid \alpha, \beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\,(\sigma^2)^{-(\alpha+1)} \exp\!\left(-\frac{\beta}{\sigma^2}\right), \quad \sigma^2 > 0, \qquad (16)$$
and zero otherwise. In (16), $\Gamma(\cdot)$ denotes the gamma function
$$\Gamma(\alpha) = \int_0^{\infty} t^{\alpha - 1} e^{-t}\, dt.$$
Similarly, let also $N(\mathbf{x} \mid \mathbf{m}, \mathbf{\Sigma})$ denote the PDF of a Gaussian random vector taking values in $\Re^L$ and with mean $\mathbf{m}$ and positive definite covariance matrix $\mathbf{\Sigma}$, i.e.,
$$N(\mathbf{x} \mid \mathbf{m}, \mathbf{\Sigma}) = (2\pi)^{-L/2}\, |\mathbf{\Sigma}|^{-1/2} \exp\!\left(-\frac{1}{2}(\mathbf{x} - \mathbf{m})^T \mathbf{\Sigma}^{-1} (\mathbf{x} - \mathbf{m})\right),$$
where $|\mathbf{\Sigma}|$ denotes the determinant of the matrix $\mathbf{\Sigma}$ and the superscript T denotes the transpose of a vector.
In the scenario with unknown sensor variances, it can be shown (see Appendix 2) that if at instant n − 1
$$p(\sigma_{1:R}^2 \mid \mathbf{x}_{s,0:n-1}^{(q)}, \mathcal{Z}_{s,0:n-1}) = \prod_{i=1}^{R} \mathit{IG}\!\left(\sigma_i^2 \mid \alpha_{s,i,n-1}, \beta_{s,i,n-1}^{(q)}\right), \qquad (17)$$
then
$$w_{r,n}^{(q)} \propto w_{s,n-1}^{(q)} \prod_{i \in \{r\} \cup N_r} \bar{\lambda}_{i,n}^{(q)}(\mathbf{x}_{r,n}^{(q)}), \qquad (18)$$
where $i \in \{r\} \cup N_r$, and each factor $\bar{\lambda}_{i,n}^{(q)}(\mathbf{x}_{r,n}^{(q)})$ in the product on the right-hand side of (18) is computed by solving the integral
$$\bar{\lambda}_{i,n}^{(q)}(\mathbf{x}_{r,n}^{(q)}) = \int_0^{\infty} \mathcal{N}\!\left(z_{i,n} \mid g_i(\mathbf{x}_{r,n}^{(q)}), \sigma_i^2\right) \mathit{IG}\!\left(\sigma_i^2 \mid \alpha_{s,i,n-1}, \beta_{s,i,n-1}^{(q)}\right) d\sigma_i^2 = \frac{\Gamma(\alpha_{r,i,n})}{\Gamma(\alpha_{s,i,n-1})} \frac{\left(\beta_{s,i,n-1}^{(q)}\right)^{\alpha_{s,i,n-1}}}{\sqrt{2\pi}\,\left(\beta_{r,i,n}^{(q)}\right)^{\alpha_{r,i,n}}}, \qquad (19)$$
where $\Gamma(\cdot)$, as before, denotes the gamma function and
$$\alpha_{r,i,n} = \alpha_{s,i,n-1} + \frac{1}{2}, \qquad (20)$$
$$\beta_{r,i,n}^{(q)} = \beta_{s,i,n-1}^{(q)} + \frac{1}{2}\left(z_{i,n} - g_i(\mathbf{x}_{r,n}^{(q)})\right)^2, \qquad (21)$$
with $g_i(\cdot)$ calculated as in (3). Furthermore, at node r and instant n, the updated parameter posterior PDF is
$$p(\sigma_{1:R}^2 \mid \mathbf{x}_{r,0:n}^{(q)}, \mathcal{Z}_{r,0:n}) = \prod_{i=1}^{R} \mathit{IG}\!\left(\sigma_i^2 \mid \alpha_{r,i,n}, \beta_{r,i,n}^{(q)}\right), \qquad (22)$$
where $\alpha_{r,i,n}$ and $\beta_{r,i,n}^{(q)}$ are updated as in (20) and (21) if $i \in \{r\} \cup N_r$ or, otherwise, are kept equal respectively to $\alpha_{s,i,n-1}$ and $\beta_{s,i,n-1}^{(q)}$. If regularization is used to combat particle degeneracy, the posterior parameters $\{\beta_{s,i,n-1}^{(q)}\}$ must also be resampled according to the new weights $w_{r,n}^{(q)}$ updated as in (13), and a new set $\{\beta_{r,i,n}^{(q)}\}$ must be recalculated for $i \in \{r\} \cup N_r$ using (21) with the resampled $\{\beta_{s,i,n-1}^{(q)}\}$ and the new moved particles $\{\mathbf{x}_{r,n}^{(q)}\}$. We follow, however, a different suboptimal strategy described in Section 4.3, which also allows a significant reduction in internode communication cost.
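The conjugate structure that the Rao-Blackwellized update exploits, a Gaussian likelihood with an inverse-gamma prior on the variance, admits a short sketch. The function names are illustrative; this shows the standard Normal/inverse-gamma marginalization, which is the structure the derivation above relies on, rather than a transcription of the paper's equations.

```python
import math

def ig_update(alpha, beta, z, g):
    """Conjugate update of an inverse-gamma variance posterior after observing
    z ~ N(g, sigma^2): alpha' = alpha + 1/2, beta' = beta + (z - g)^2 / 2."""
    return alpha + 0.5, beta + 0.5 * (z - g) ** 2

def marginal_loglik(alpha, beta, z, g):
    """log of the integral N(z | g, s2) IG(s2 | alpha, beta) ds2, i.e. the
    likelihood factor with the unknown variance integrated out
    (a Student-t-like closed form)."""
    a_new, b_new = ig_update(alpha, beta, z, g)
    return (math.lgamma(a_new) - math.lgamma(alpha)
            + alpha * math.log(beta) - a_new * math.log(b_new)
            - 0.5 * math.log(2 * math.pi))
```

Because the update only touches the two scalars (α, β) per sensor and particle, the unknown variances never need to be sampled, which is exactly what makes the Rao-Blackwellization attractive here.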
Internode transmission requirements In the unknown variance scenario, based on the previous discussion, at each instant n, a node s has to transmit to its (randomly chosen) neighboring node r its particle set $\{\mathbf{x}_{s,n-1}^{(q)}\}$ (4Q real numbers) plus the respective importance weights $\{w_{s,n-1}^{(q)}\}$ (Q real numbers) and the set of hyperparameters $\{(\alpha_{s,i,n-1}, \beta_{s,i,n-1}^{(q)})\}$, $i \in \mathcal{R}$, $q \in \mathcal{Q}$ (another R × (Q + 1) real numbers), which specify the posterior PDF $p(\sigma_{1:R}^2 \mid \mathbf{x}_{s,0:n-1}^{(q)}, \mathcal{Z}_{s,0:n-1})$. In addition, as before, node s also sends its scalar observation $z_{s,n}$ and the observation model parameters $(\zeta_s, \mathbf{x}_s)$ to all nodes i in the neighborhood of s.
4.3 Approximate RB ReDifPF
Although the exact ReDifPF algorithms in Sections 4.1 and 4.2 converge asymptotically to the state estimate in (5) as the number of particles Q goes to infinity, their internode communication cost is still relatively high. To reduce the communication burden, we propose two suboptimal approximations which are described in detail in the sequel.
GMM approximation of the marginal posterior of the states To circumvent the inconvenience of having to transmit, in either the known or the unknown sensor parameter scenario, Q particles and respective weights per node at each time step, we follow the lead in [21] and build a GMM representation of the marginal posterior $p(\mathbf{x}_{n-1} \mid \mathcal{Z}_{s,0:n-1})$ of the form
$$p(\mathbf{x}_{n-1} \mid \mathcal{Z}_{s,0:n-1}) \approx \sum_{k=1}^{K} \eta_{s,n-1}^{(k)}\, N\!\left(\mathbf{x}_{n-1} \mid \boldsymbol{\mu}_{s,n-1}^{(k)}, \mathbf{\Sigma}_{s,n-1}^{(k)}\right), \qquad (23)$$
where $\mathcal{K} = \{1, \dots, K\}$ and the parameters $\eta_{s,n-1}^{(k)}$, $\boldsymbol{\mu}_{s,n-1}^{(k)}$, and $\mathbf{\Sigma}_{s,n-1}^{(k)}$ are obtained from the weighted particle set $\{\mathbf{x}_{s,n-1}^{(q)}, w_{s,n-1}^{(q)}\}$, $q \in \mathcal{Q}$, at node s using the Expectation-Maximization (EM) [31] algorithm. Node s now transmits to node r only the parameters that specify the GMM model, i.e., 15K real numbers for a four-dimensional state vector, as opposed to 5Q real numbers, where typically Q ≫ K (in the simulations in Section 5, for example, K is either 1 or 2, whereas Q is 500). Node r then locally resamples Q new particles $\mathbf{x}_{s,n-1}^{(q)}$ according to the received GMM PDF and resets its importance weights $w_{s,n-1}^{(q)}$ to 1/Q. Since resampling from the GMM approximation is used, we omit the regularization step mentioned in Section 4.
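For K = 1 the GMM compression reduces to weighted moment matching, which makes the 15K-number accounting above easy to see: 1 mixture weight, 4 mean entries, and the 10 unique entries of the symmetric 4 × 4 covariance. The helper names below are hypothetical; for K > 1 an EM fit would replace the moment matching.

```python
import numpy as np

def gmm1_compress(particles, weights):
    """Compress a weighted particle set into a single-Gaussian (K = 1) model.
    Returns the 15 real numbers a node would transmit for a 4-d state:
    [mixture weight, 4 mean entries, 10 upper-triangular covariance entries]."""
    mean = weights @ particles
    c = particles - mean
    cov = (weights[:, None] * c).T @ c
    iu = np.triu_indices(4)
    return np.concatenate(([1.0], mean, cov[iu]))    # 1 + 4 + 10 = 15 numbers

def gmm1_resample(packed, Q, rng):
    """Receiver side: rebuild the Gaussian from the packed parameters and
    draw Q fresh, equally weighted particles from it."""
    mean = packed[1:5]
    cov = np.zeros((4, 4))
    cov[np.triu_indices(4)] = packed[5:]
    cov = cov + np.triu(cov, 1).T                    # restore symmetry
    particles = rng.multivariate_normal(mean, cov, size=Q)
    return particles, np.full(Q, 1.0 / Q)
```

Since the receiver resamples fresh particles from a continuous density, no separate regularization move is needed, consistent with the remark above.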
Approximation of the posterior distribution of the sensor variances In the particular situation where the sensor variances are unknown, in theory we should also locally resample the previous particle trajectories $\mathbf{x}_{s,0:n-2}^{(q)}$ jointly with $\mathbf{x}_{s,n-1}^{(q)}$ from some parametric approximation to $p(\mathbf{x}_{0:n-1} \mid \mathcal{Z}_{s,0:n-1})$ and then recompute retroactively the posterior PDFs $p(\sigma_i^2 \mid \mathbf{x}_{s,0:n-1}^{(q)}, \mathcal{Z}_{s,0:n-1})$, i = 1, …, R, for the resampled particle paths. To eliminate that curse of dimensionality, it is desirable to introduce a parametric approximation to $p(\sigma_i^2 \mid \mathbf{x}_{s,0:n-1}^{(q)}, \mathcal{Z}_{s,0:n-1})$ that removes the dependence of that function on the particle label q and the simulated sequence $\mathbf{x}_{s,0:n-1}^{(q)}$.
Specifically, we follow the lead in [11, 22, 32] and, for each $i \in \mathcal{R}$, approximate the marginal posteriors $p(\sigma_i^2 \mid \mathbf{x}_{0:n-1}^{(q)}, \mathcal{Z}_{s,0:n-1})$ for all particle labels q and all possible sequences $\mathbf{x}_{0:n-1}^{(q)}$ by a new inverse-gamma PDF with parameters $\tilde{\alpha}_{s,i,n-1}$ and $\tilde{\beta}_{s,i,n-1}$, independent of q and chosen such that the approximated PDF $\mathit{IG}(\sigma_i^2 \mid \tilde{\alpha}_{s,i,n-1}, \tilde{\beta}_{s,i,n-1})$ matches the first and second moments of
$$E_{\mathbf{x}_{0:n-1}}\!\left\{p(\sigma_i^2 \mid \mathbf{x}_{0:n-1}, \mathcal{Z}_{s,0:n-1})\right\} = \int p(\sigma_i^2 \mid \mathbf{x}_{0:n-1}, \mathcal{Z}_{s,0:n-1})\, p(\mathbf{x}_{0:n-1} \mid \mathcal{Z}_{s,0:n-1})\, d\mathbf{x}_{0:n-1}, \qquad (24)$$
where the term on the left-hand side of (24) is the average (or expected value) of $p(\sigma_i^2 \mid \mathbf{x}_{0:n-1}, \mathcal{Z}_{s,0:n-1})$ over all possible realizations of $\mathbf{x}_{0:n-1}$ conditioned on the observations $\mathcal{Z}_{s,0:n-1}$. Assuming now that $\{(w_{s,n-1}^{(q)}, \mathbf{x}_{s,0:n-1}^{(q)})\}$, $q \in \mathcal{Q}$, is a properly weighted set available at node s at instant n − 1 to represent $p(\mathbf{x}_{0:n-1} \mid \mathcal{Z}_{s,0:n-1})$, we make the Monte Carlo approximation
$$E_{\mathbf{x}_{0:n-1}}\!\left\{p(\sigma_i^2 \mid \mathbf{x}_{0:n-1}, \mathcal{Z}_{s,0:n-1})\right\} \approx \sum_{q=1}^{Q} w_{s,n-1}^{(q)}\, p(\sigma_i^2 \mid \mathbf{x}_{s,0:n-1}^{(q)}, \mathcal{Z}_{s,0:n-1}). \qquad (25)$$
On the other hand, from the assumption that $p(\sigma_{1:R}^2 \mid \mathbf{x}_{s,0:n-1}^{(q)}, \mathcal{Z}_{s,0:n-1})$ is a separable function factored as in (17), it follows that
$$p(\sigma_i^2 \mid \mathbf{x}_{s,0:n-1}^{(q)}, \mathcal{Z}_{s,0:n-1}) = \mathit{IG}\!\left(\sigma_i^2 \mid \alpha_{s,i,n-1}, \beta_{s,i,n-1}^{(q)}\right)$$
and, therefore,
$$E_{\mathbf{x}_{0:n-1}}\!\left\{p(\sigma_i^2 \mid \mathbf{x}_{0:n-1}, \mathcal{Z}_{s,0:n-1})\right\} \approx \sum_{q=1}^{Q} w_{s,n-1}^{(q)}\, \mathit{IG}\!\left(\sigma_i^2 \mid \alpha_{s,i,n-1}, \beta_{s,i,n-1}^{(q)}\right). \qquad (26)$$
In the sequel, recall that if $\sigma^2 \sim \mathcal{IG}(\alpha, \beta)$, then the respective mean and variance of $\sigma^2$ are given by [19]
$$E\{\sigma^2\} = \frac{\beta}{\alpha - 1}, \quad \alpha > 1, \qquad \mathrm{Var}\{\sigma^2\} = \frac{\beta^2}{(\alpha - 1)^2 (\alpha - 2)}, \quad \alpha > 2.$$
Therefore, the parameters $\tilde{\alpha}_{s,i,n-1}$ and $\tilde{\beta}_{s,i,n-1}$ such that $\mathit{IG}(\sigma_i^2 \mid \tilde{\alpha}_{s,i,n-1}, \tilde{\beta}_{s,i,n-1})$ matches the mean and variance associated with the PDF on the right-hand side of (26) are found, following the procedure in [11, 22, 32], by making
$$\tilde{\alpha}_{s,i,n-1} = \frac{m_{s,i,n-1}^2}{v_{s,i,n-1}} + 2, \qquad (27)$$
$$\tilde{\beta}_{s,i,n-1} = m_{s,i,n-1}\left(\tilde{\alpha}_{s,i,n-1} - 1\right), \qquad (28)$$
where $m_{s,i,n-1}$ and $v_{s,i,n-1}$ denote respectively the mean and the variance of the mixture PDF on the right-hand side of (26).
Replacing now $p(\sigma_i^2 \mid \mathbf{x}_{s,0:n-1}^{(q)}, \mathcal{Z}_{s,0:n-1})$ in (19) with
for all q ∈ {1,…,Q} and all possible sequences $\mathbf{x}_{s,0:n-1}^{(q)}$, we get, at node r at instant n, new factors $\tilde{\lambda}_{i,n}(\cdot)$ such that
where
for all $q \in \mathcal{Q}$ and all i ∈ {r} ∪ N_r. Otherwise, if i ∉ {r} ∪ N_r
again for all $q \in \mathcal{Q}$. The modified importance weight update rule at node r at instant n now becomes
Internode communication cost By combining the GMM approximation and the moment-matching approximation described above, node s now transmits to its (randomly chosen) neighbor r only the GMM model parameters (15K real numbers, as previously explained) plus the 2R hyperparameters $(\tilde{\alpha}_{s,i,n-1}, \tilde{\beta}_{s,i,n-1})$, $i \in \mathcal{R}$, as opposed to the R × (Q + 1) hyperparameters required by the exact RB ReDifPF algorithm.
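The inverse gamma moment matching behind the hyperparameters $\tilde{\alpha}_{s,i,n-1}$ and $\tilde{\beta}_{s,i,n-1}$ can be sketched in a few lines. This is our illustration, not the authors' code; it simply inverts the mean/variance formulas quoted from [19] (valid for α > 2):

```python
def ig_moments(alpha, beta):
    # Mean and variance of an inverse gamma IG(alpha, beta), alpha > 2 [19]
    mean = beta / (alpha - 1.0)
    var = beta ** 2 / ((alpha - 1.0) ** 2 * (alpha - 2.0))
    return mean, var

def ig_match(mean, var):
    # Moment matching: recover (alpha, beta) of the inverse gamma PDF
    # whose first two moments equal (mean, var)
    alpha = mean ** 2 / var + 2.0
    beta = mean * (alpha - 1.0)
    return alpha, beta

# Round trip: matching the moments of IG(5, 12) recovers (5, 12)
m, v = ig_moments(5.0, 12.0)
a, b = ig_match(m, v)
```

In the algorithm, the matched mean and variance would be those of the weighted mixture of the q-indexed inverse gamma posteriors, which the single matched PDF then replaces.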
Summary of the approximate RB ReDifPF Algorithm 1 summarizes the approximate RB ReDifPF tracker at node r at instant n. In Algorithm 1, the symbol Θ_{r,n} denotes the set $\{(\eta_{r,n}^{(k)}, \boldsymbol{\mu}_{r,n}^{(k)}, \boldsymbol{\Sigma}_{r,n}^{(k)}), (\tilde{\alpha}_{r,i,n}, \tilde{\beta}_{r,i,n})\}$ for $i \in \mathcal{R}$ and $k \in \mathcal{K}$.
Algorithm 1 Approximate Rao-Blackwellized random exchange diffusion particle filter
4.4 Differences between ReDifPF and the Markov chain distributed particle filter
An alternative approach to distributed particle filtering is the MCDPF algorithm introduced in [9]. Like other prior work in the distributed PF literature, MCDPF assumes conditional independence of the sensor observations given the target state. It should therefore be compared to the proposed ReDifPF in the known sensor parameters scenario of Section 4.1, rather than to the more general Rao-Blackwellized version of ReDifPF proposed for unknown sensor parameters in Section 4.2.
The main idea in MCDPF is to move each particle and its associated weight multiple times between nodes in the time interval between instants n and n + 1, according to a Markov chain whose transition probabilities are defined by the normalized adjacency matrix of the graph that describes the network topology. Each time a given particle $\mathbf{x}_n^{\star}$ visits a network node r, its weight is multiplied by the pseudolikelihood $p(z_{r,n} \mid \mathbf{x}_n^{\star})^{1/(J\,\phi(r))}$, where $\phi(r)$ is the long-term stationary probability of the Markov chain state being equal to r, r = 1,…,R, and J is the total number of Markov chain move steps between consecutive sensor measurements, which is set by the user. Since the number of visits to node r divided by J converges to $\phi(r)$ [9] as J → ∞, it follows that, if J is large enough for particle $\mathbf{x}_n^{\star}$ not only to visit all network nodes but also to visit each node multiple times, then the aggregate update factor for its weight at the end of the random walk approaches
which, under the assumption of conditional independence of the sensor measurements given the target state, is the exact update factor for the optimal global weight associated with particle $\mathbf{x}_n^{\star}$. For a finite number of move steps, especially a small one, MCDPF is no longer optimal, so the choice of the parameter J involves a trade-off between internode communication cost and state estimation error.
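The visit-frequency argument above can be checked numerically. The sketch below (our illustration on a hypothetical 4-node ring, not the authors' code) runs the random walk and accumulates, for each node r, the per-visit exponent contributions 1/(J φ(r)); for large J each accumulated exponent approaches 1, so the aggregate weight factor approaches the product of all local likelihoods:

```python
import random

def stationary(adj):
    # Stationary distribution of a random walk on an undirected graph:
    # phi(r) = deg(r) / sum of all degrees
    deg = [len(nbrs) for nbrs in adj]
    total = sum(deg)
    return [d / total for d in deg]

def mcdpf_exponents(adj, J, start=0, seed=1):
    # J Markov chain move steps; each visit to node r adds 1/(J*phi(r))
    # to the exponent applied to that node's local likelihood
    rng = random.Random(seed)
    phi = stationary(adj)
    expo = [0.0] * len(adj)
    r = start
    for _ in range(J):
        r = rng.choice(adj[r])
        expo[r] += 1.0 / (J * phi[r])
    return expo

adj = [[1, 3], [0, 2], [1, 3], [0, 2]]   # toy 4-node ring topology
expo = mcdpf_exponents(adj, J=200000)    # each entry is close to 1
```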
Contrary to MCDPF, the proposed ReDifPF does not attempt to compute the exact optimal global posterior PDF $p(\mathbf{x}_{0:n} \mid z_{1:R,0:n})$ at every node r = 1,…,R at each instant n. Instead, as explained in the previous sections, ReDifPF builds at each node r and at each instant n a Monte Carlo representation of the posterior $p(\mathbf{x}_{0:n} \mid \mathcal{Z}_{r,0:n})$, where $\mathcal{Z}_{r,0:n}$ is a random subset of $z_{1:R,0:n}$ that changes from node to node. That Monte Carlo representation is built so that, between instants n and n + 1, each node makes only one request to exchange particles/weights (or equivalent parametric approximations of posterior distributions) with a randomly chosen neighbor. This eliminates the need for multiple iterative internode communication rounds between consecutive sensor measurements and results in a communication cost much lower than that of the MCDPF algorithm for a similar mean square state estimation error (see the numerical results in Section 5.2).
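A minimal sketch of one ReDifPF communication round may help fix ideas. This is our simplified illustration, not the authors' code: the exchanged posterior representations, in reality weighted particle sets or their GMM summaries, are abstracted here as opaque labels:

```python
import random

def random_exchange_round(state, adj, rng):
    # One communication round: each node, in turn, swaps its local
    # posterior summary with one randomly chosen neighbor. A single
    # exchange per node per measurement epoch; no iterative consensus
    # loop between measurement arrivals.
    for s in range(len(adj)):
        r = rng.choice(adj[s])
        state[s], state[r] = state[r], state[s]
    return state

adj = [[1, 2], [0, 2], [0, 1]]       # toy fully connected 3-node network
state = ["A", "B", "C"]              # stand-ins for local posteriors
state = random_exchange_round(state, adj, random.Random(0))
```

After a round, the local posteriors are permuted across the network and then re-updated locally with neighborhood measurements, which is how information diffuses over time without any node holding the full global posterior.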
Finally, we also note that, compared to the non-iterative ReDifPF, MCDPF is computationally more intensive, since each node r has to compute the local likelihoods $p(z_{r,n} \mid \mathbf{x}_n^{(q)})$ for all its particles $\mathbf{x}_n^{(q)}$ multiple (namely, J) times between instants n and n + 1. We illustrate that point as well in the numerical simulations of Section 5.2.
5 Simulation results
We assessed the performance of the proposed algorithms using 100 Monte Carlo runs with simulated data in three distinct scenarios, assuming both unknown and known sensor variances. In all scenarios, we used R = 25 RSS sensors with parameters P_0 = 1 dBm, d_0 = 1 m, ζ_r = 3, $\forall r \in \mathcal{R}$, and $\sigma_r^2$ independently sampled at each node from an inverse gamma distribution with mean 16. The nodes were deployed on a jittered grid within a square of size 100 m × 100 m. In the fully distributed algorithms, each node communicates with other nodes within a range of 40 m. All particle filters used Q = 500 particles.
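The RSS measurement equation itself is given earlier in the paper; for reference, under the standard log-distance path-loss model (cf. Patwari et al. [25]) with the simulation parameters above, a measurement can be generated as in the following sketch (our code; function and argument names are ours):

```python
import math
import random

def rss_measurement(emitter, sensor, P0=1.0, d0=1.0, zeta=3.0,
                    sigma2=16.0, rng=random):
    # Log-distance path-loss RSS model:
    #   z = P0 - 10*zeta*log10(d/d0) + v,   v ~ N(0, sigma2)
    d = math.hypot(emitter[0] - sensor[0], emitter[1] - sensor[1])
    noise = rng.gauss(0.0, math.sqrt(sigma2))
    return P0 - 10.0 * zeta * math.log10(d / d0) + noise

# At d = 10 m with zeta = 3, the noiseless RSS is 1 - 30 = -29 dBm
z = rss_measurement((10.0, 0.0), (0.0, 0.0), sigma2=0.0)
```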
Figure 3 shows the sensor positions and two distinct realizations of the emitter trajectory generated for T = 1 s and $\mathbf{x}_0 = [\,25\ \mathrm{m}\ \ 0.5\ \mathrm{m/s}\ \ 35\ \mathrm{m}\ \ 0.5\ \mathrm{m/s}\,]^T$, considering respectively σ_accel = 0.05 m/s² and σ_accel = 0.2 m/s². It also depicts the available network connections. The diameter of the sensor network is D = 5 hops, and the minimum number of neighbors of any node is 3.
5.1 Scenario I: ReDifPF vs. CbPF
In the first scenario, we assumed unknown sensor variances and evaluated the performance of the Rao-Blackwellized ReDifPF and two consensus-based PF trackers using respectively iterative minimum consensus (CbPFa) and flooding (CbPFb) (see also [11]). These algorithms were compared to the equivalent broadcast implementation of the optimal centralized PF tracker, referred to as DcPF in [11], [27], and Section 3.1 of this paper. We assumed Gaussian priors with mean $[x_0\ \ y_0]^{\mathrm{T}}$ and covariance matrix diag(20², 20²) for the emitter's position in Cartesian coordinates, and mean $[\sqrt{\dot{x}_0^2 + \dot{y}_0^2}\ \ \arctan(\dot{y}_0/\dot{x}_0)]^{\mathrm{T}}$ and covariance matrix diag(0.3², (5π/180)²) for the emitter's velocity in polar coordinates, where x_0 = 25 m, y_0 = 35 m, and $\dot{x}_0 = \dot{y}_0 = 0.5$ m/s. In the initialization step, realizations of the initial emitter velocity are sampled from the aforementioned Gaussian prior and then converted from polar to Cartesian coordinates.
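The polar-to-Cartesian initialization just described can be sketched as follows (our illustration; function names are ours):

```python
import math
import random

def sample_initial_velocity(speed_mean, heading_mean, speed_std=0.3,
                            heading_std=5.0 * math.pi / 180.0, rng=random):
    # Draw (speed, heading) from the Gaussian prior in polar coordinates,
    # then convert the sample to Cartesian components (vx, vy)
    speed = rng.gauss(speed_mean, speed_std)
    heading = rng.gauss(heading_mean, heading_std)
    return speed * math.cos(heading), speed * math.sin(heading)

# Prior mean from Scenario I: xdot0 = ydot0 = 0.5 m/s
speed0 = math.hypot(0.5, 0.5)      # sqrt(xdot0^2 + ydot0^2)
heading0 = math.atan2(0.5, 0.5)    # arctan(ydot0/xdot0) = pi/4
vx, vy = sample_initial_velocity(speed0, heading0, rng=random.Random(7))
```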
Figure 4 shows the evolution of the root mean square (RMS) error norm (averaged over all network nodes and Monte Carlo runs) of the emitter position estimates for the RB ReDifPF and the CbPFa and CbPFb algorithms, superimposed on the benchmark RMS error curve for the optimal DcPF algorithm. Figure 4 also shows the average RMS error norm for the noncooperative (isolated node) trackers and for a local cooperation scheme. In the former, each node runs a regularized PF tracker (see [11]) that assimilates local measurements only; in the latter, a node r incorporates all measurements $\mathcal{Z}_{r,n}$ in its vicinity in the same way as in the ReDifPF tracker, but does not exchange its updated posterior with its neighbors. The bars in Figure 4 represent the standard deviation of the error norm across all nodes in the network. There are no bars for the DcPF and CbPF algorithms since they provide the same state estimate at all nodes. The RMS error norm at time step 0 for all algorithms was computed after the measurements $z_{1:R,0}$ were assimilated. We implemented the RB ReDifPF in this scenario with the parametric approximations of Section 4.3 using only one Gaussian mode to represent $p(\mathbf{x}_{n-1} \mid \mathcal{Z}_{s,0:n-1})$.
As expected, CbPFa and CbPFb match the performance of the DcPF tracker, since both algorithms reproduce the optimal centralized PF tracker exactly, albeit with different communication and computational costs. On the other hand, as shown in Figure 4, the RB ReDifPF tracker shows a performance degradation compared to DcPF. This result is theoretically expected since, in the RB ReDifPF algorithm, the posterior at each node assimilates only a subset of the measurements $z_{1:R,n}$ available in the whole network at each time step n. However, ReDifPF improves the error performance compared to the local cooperation scheme by better diffusing information across the network. We also note from Figure 4 that the standard deviation of the state estimate across the different network nodes is much lower with the ReDifPF algorithm than with the local cooperation scheme. Finally, as shown in Figure 4, isolated nodes were unable to properly track the emitter in the evaluated scenario.
Finally, Figure 5 compares the performance of the ReDifPF and the consensus-based algorithms for σ_accel ∈ {0.05, 0.1, 0.2} m/s².
As expected, the RMS error performance deteriorates as σ_accel increases. However, the ratio between the RMS error of the suboptimal ReDifPF tracker and that of the benchmark optimal DcPF/CbPFb algorithms remains approximately constant (close to a factor of two) over the simulation period for all three values of σ_accel.
Communication and computation cost Considering a four-byte and a one-byte network representation for real and Boolean values respectively, the total number of bytes transmitted and received by all nodes over the network was recorded while running each tracker in Figure 4. Table 1 summarizes the communication cost of each algorithm in the first scenario (unknown sensor variances) in terms of average transmission (TX) and reception (RX) rates per node, and also quantifies the processing cost of each algorithm in terms of the average duty cycle per node, measured on an Intel Core i5 machine with 4 GB of RAM. The duty cycle of a given node is defined as the ratio between the total node processing time and the simulation period of 100 s. All values in Table 1 are averaged over the Monte Carlo simulations.
As shown in Table 1, the RB ReDifPF tracker with the parametric approximations of Section 4.3 using only one Gaussian mode has a TX-rate communication cost approximately one order of magnitude lower than the flooding-based CbPFb's communication requirements. Compared to the iterative minimum consensus solution (CbPFa), the average communication cost is reduced by two orders of magnitude.
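The order of magnitude of these savings also follows directly from the per-exchange payload counts given in Section 4.3. A quick arithmetic check with the simulation values R = 25, Q = 500, and a single Gaussian mode (K = 1); only parameter counts are compared here, not byte-level rates:

```python
def approx_payload(K, R):
    # Approximate RB ReDifPF per exchange: 15*K GMM reals + 2*R hyperparameters
    return 15 * K + 2 * R

def exact_payload(R, Q):
    # Exact RB ReDifPF per exchange: R*(Q+1) hyperparameters
    return R * (Q + 1)

R, Q, K = 25, 500, 1
ratio = exact_payload(R, Q) / approx_payload(K, R)   # 12525 / 65, about 193x
```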
5.2 Scenario II: ReDifPF vs. ReDifEKF
In the second scenario, the sensor variances are perfectly known, and the ReDifPF tracker is compared both to the optimal centralized PF and to a linearized random exchange extended Kalman filter (ReDifEKF), which is summarized in Appendix 3. In the simulations, we assumed a noninformative prior for the emitter's initial position that is uniform over the entire surveillance space. The actual initial position of the emitter was, however, sampled from a Gaussian distribution centered at (5 m, 5 m) with a standard deviation of 3 m in both dimensions. Figure 6 shows a normalized contour map of the posterior PDF $p(x_0, y_0 \mid z_{1:R,0})$ at instant 0 as a function of x_0 and y_0 assuming the aforementioned noninformative prior. As seen in Figure 6, the initial posterior distribution of the target's position is non-Gaussian.
Figure 7 shows the evolution of the RMS error norm assuming known sensor variances for the ReDifPF algorithm of Section 4.1 with a two-mode GMM parametric approximation and for the ReDifEKF algorithm of Appendix 3. We also show the RMS curve of the optimal centralized PF tracker as a benchmark. The plots in Figure 7 show that, especially in the initial time steps, when the posterior distribution of the states is strongly non-Gaussian as suggested by Figure 6, the fully distributed ReDifPF outperforms its linearized counterpart, the ReDifEKF. As the emitter moves away from the near field of the initially dominant sensor, the performance of the ReDifEKF slowly improves and approaches that of the ReDifPF, albeit still with a slight degradation towards the end of the simulation.
Communication and computation cost Table 2 summarizes the communication and processing cost per node for each algorithm in the second scenario.
As expected, the DcPF algorithm assuming known sensor variances has the same communication requirements as in the scenario with unknown variances, since DcPF locally computes the likelihood functions and then broadcasts them to the entire network. However, as shown in Table 2, DcPF has a slightly lower processing cost when the sensor variances are known. The ReDifPF tracker, on the other hand, outperformed the ReDifEKF tracker in terms of position RMS error at the expense of greater communication and computational costs. However, as indicated in Table 2, the communication requirements of the ReDifPF and ReDifEKF trackers remain of the same order of magnitude.
5.3 Scenario III: ReDifPF vs. MCDPF/selective gossip
In the third scenario, the ReDifPF tracker is compared to two iterative algorithms from the literature, the MCDPF and selective gossip algorithms from [9] and [23] respectively, assuming perfectly known sensor variances as in the second scenario and the same Gaussian priors for the emitter's initial position and velocity as in the first scenario.
Figure 8 shows the evolution of the RMS error norm assuming known sensor variances for the ReDifPF algorithm of Section 4.1 with a single-mode GMM parametric approximation and for the MCDPF algorithm of [9] with J ∈ {10, 30, 50, 100} iterations.
Figure 9 shows the evolution of the RMS error norm for the ReDifPF algorithm of Section 4.1 with a single-mode GMM parametric approximation and for the selective gossip algorithm of [23] using J ∈ {1,000; 2,000; 4,000} iterations. More specifically, we first run J average gossip iterations considering, for each randomly selected pair of nodes at each iteration, only the particles in the top 10% bracket in terms of log-likelihood; subsequently, we run J standard max gossip iterations on the averaged log-likelihoods of the selected particles, as proposed in [23], to ensure that all nodes have exactly the same weight update factors. Note that, since only one pair of nodes is active at each average gossip iteration and only 10% of the particles are transmitted between the active nodes, the selective gossip algorithm has a lower internode communication cost than MCDPF even when a much larger number of iterations is used between consecutive sensor measurements.
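The selective average gossip stage can be sketched as follows (our simplified illustration of the idea in [23], not the authors' code: each active pair averages only the entries currently ranked in the top bracket, and the subsequent max gossip stage is omitted):

```python
import random

def selective_gossip(vectors, adj, J, top_frac=0.1, rng=random):
    # vectors[n][q]: log-likelihood of particle q held at node n.
    # At each iteration one random pair of neighbors averages only the
    # entries that either node currently ranks in its top bracket.
    Q = len(vectors[0])
    k = max(1, int(top_frac * Q))
    for _ in range(J):
        s = rng.randrange(len(adj))
        r = rng.choice(adj[s])
        top = set(sorted(range(Q), key=lambda q: vectors[s][q], reverse=True)[:k])
        top |= set(sorted(range(Q), key=lambda q: vectors[r][q], reverse=True)[:k])
        for q in top:
            avg = 0.5 * (vectors[s][q] + vectors[r][q])
            vectors[s][q] = vectors[r][q] = avg
    return vectors

rng = random.Random(0)
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]   # toy 4-node ring
vectors = [[float(n * 10 + q) for q in range(20)] for n in range(4)]
col_sums = [sum(v[q] for v in vectors) for q in range(20)]  # invariant
vectors = selective_gossip(vectors, adj, J=100, rng=rng)
```

Pairwise averaging preserves the network-wide sum of each touched entry, so the averaged significant entries converge toward their network averages.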
Communication and computation cost Table 3 summarizes the communication and processing cost per node for each algorithm in the third scenario.
The MCDPF and selective gossip algorithms attain an RMS error performance similar to that of the ReDifPF algorithm for J = 30 and J = 4,000 iterations respectively, at the expense of a communication cost approximately two orders of magnitude larger than that of the ReDifPF tracker. Moreover, for a comparable RMS error, the measured ReDifPF duty cycle is approximately five and seven times lower than the duty cycles of the MCDPF and selective gossip algorithms respectively. Note, however, that the selective gossip tracker converges to the same estimate at all nodes, and the per-node estimates provided by the MCDPF tracker have a lower standard deviation than those provided by the ReDifPF algorithm.
We also note from Table 3 that, with J = 100 Markov chain move steps between sensor measurements, the MCDPF RMS error approaches the error curve of the optimal flooding-based CbPFb tracker, with an internode communication cost that is, however, roughly four times greater than that of the CbPFb algorithm.
6 Conclusions
We introduced in this paper a Rao-Blackwellized version of the random exchange diffusion particle filter that enables fully distributed tracking of hidden state vectors in cooperative sensor networks with unknown sensor parameters. Although the general structure of the algorithm extends to arbitrary signal models, we specified it here for an application in which a moving emitter is tracked using multiple RSS sensors with unknown noise variances. The ReDifPF tracker, originally introduced in a simpler version in [17], is based on random information dissemination and is well suited for real-time applications since, unlike consensus-based approaches, it does not require iterative internode communication between measurement arrivals.
The new RaoBlackwellized version of the ReDifPF was compared to an exact broadcast implementation of the optimal centralized PF solution, referred to as the DcPF algorithm, and to two equivalent, fully distributed PFs using respectively iterative minimum consensus (CbPFa) and flooding (CbPFb). As expected, due to its suboptimality, the ReDifPF tracker showed a degradation in RMS error performance compared to both DcPF and the equivalent consensus implementations in our simulations, but required much lower communication bandwidth with savings of one order of magnitude compared to DcPF and CbPFb in terms of transmission rate, and two orders of magnitude compared to CbPFa. The communication cost savings in the RB ReDifPF algorithm were possible due to suitable parametric approximations introduced in Section 4.3.
The RB ReDifPF algorithm RMS error performance was also compared in the unknown variance scenario to a local cooperation scheme in which each node assimilates all available measurements in its neighborhood but does not exchange its posterior statistics with other nodes. By diffusing information over the network, the RB ReDifPF tracker showed better error performance than the local cooperation scheme that uses local information only. Additionally, the standard deviation of the error norm considering all nodes in the network was much lower for RB ReDifPF than in the local cooperation scheme, suggesting possible weak consensus.
Next, in a second scenario with perfectly known variances, we also compared a non-RB ReDifPF tracker to its distributed linear filtering counterpart, the ReDifEKF described in Appendix 3. Due to the non-Gaussianity of the posterior distribution of the states, the distributed PF solution outperformed the distributed EKF solution, albeit, as expected, at greater computational and communication costs.
Finally, in a third scenario, also with perfectly known variances, we compared the non-RB ReDifPF tracker to two alternative distributed particle filters based respectively on iterative Markov chain move steps between sensor measurements, as proposed in [9], and on iterative selective average gossiping, as proposed in [23]. In our simulations, the novel ReDifPF matched the RMS error performance of both the Markov chain and selective gossip filters with an internode communication cost approximately two orders of magnitude lower and a duty cycle reduced by a factor of 5 compared to MCDPF and a factor of 7 compared to the selective gossip scheme.
As future work, we plan to extend the ReDifPF algorithm to perform joint detection and tracking, considering scenarios with probability of detection less than 1 and probability of false alarm greater than 0, as in [33]. We also plan to analyze the diffusion properties of ReDifPF by investigating the long-term statistical properties of the sequence of visited nodes {r_n}, n > 0, defined by the random exchange protocol starting from a random node r_0.
Appendix 1
In this appendix, we use an importance sampling methodology (see [5, 6]) to show that the augmented particle set $\mathbf{x}_{r,0:n}^{(q)} = (\mathbf{x}_{s,0:n-1}^{(q)}, \mathbf{x}_{r,n}^{(q)})$, q = 1,…,Q, with weights $\{w_{r,n}^{(q)}\}$ obtained according to (12) and (13) in Section 4, is a properly weighted set to represent the posterior PDF $p(\mathbf{x}_{0:n} \mid \mathcal{Z}_{r,n}, \mathcal{Z}_{s,0:n-1})$, in the sense that, for any measurable function h(·),
Specifically, let $\{\mathbf{x}_{s,0:n-1}^{(q)}\}$ with associated weights $\{w_{s,n-1}^{(q)}\}$, $q \in \mathcal{Q}$, be a properly weighted set that represents the posterior PDF $p(\mathbf{x}_{0:n-1} \mid \mathcal{Z}_{s,0:n-1})$ at node s. Assuming that the particle set $\{\mathbf{x}_{s,0:n-1}^{(q)}\}$ was sampled according to some proposal importance function $\pi(\mathbf{x}_{0:n-1} \mid \mathcal{Z}_{s,0:n-1})$, the proper weights $\{w_{s,n-1}^{(q)}\}$ may be written as [5, 6]
where
Assume next that node s sends its particle set and weights to a neighboring node r, which at instant n can access the measurements $\mathcal{Z}_{r,n} = \{z_{r,n}\} \cup \{z_{i,n}\}_{i \in \mathbf{N}_r}$. For any measurable function h(·), we note that
Sampling now at node r new particles $\mathbf{x}_{r,n}^{(q)} \sim p(\mathbf{x}_n \mid \mathbf{x}_{s,n-1}^{(q)})$ and building the augmented particle trajectories $\mathbf{x}_{r,0:n}^{(q)} = (\mathbf{x}_{s,0:n-1}^{(q)}, \mathbf{x}_{r,n}^{(q)}) \sim p(\mathbf{x}_n \mid \mathbf{x}_{n-1})\, \pi(\mathbf{x}_{0:n-1} \mid \mathcal{Z}_{s,0:n-1})$, the integral on the right-hand side of (40) can be approximated as
where
and
Substituting (43) into (42) and recalling from the model assumptions that $p(\mathbf{x}_n \mid \mathbf{x}_{0:n-1}, \mathcal{Z}_{s,0:n-1}) = p(\mathbf{x}_n \mid \mathbf{x}_{n-1})$, we get the recursion
where, as before,
Appendix 2
Let, as before,
where σ² > 0 and m ∈ ℝ. After some algebraic manipulation, it can be shown (see [19] and also [6, 11]) that
where
Similarly, using the same algebraic procedure, it follows that (see [6, 19])
where $\bar{\alpha}$ and $\bar{\beta}$ are given respectively by (48) and (49).
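Equations (48) and (49) are not reproduced in this excerpt; for a scalar Gaussian observation $z \sim \mathcal{N}(m, \sigma^2)$ with an $\mathcal{IG}(\alpha, \beta)$ prior on σ², they take the standard conjugate form (cf. Gelman et al. [19]), which we sketch below under that assumption:

```python
def ig_conjugate_update(alpha, beta, z, m):
    # Conjugate update of an inverse gamma prior IG(alpha, beta) on the
    # noise variance after observing z ~ N(m, sigma^2):
    #   alpha_bar = alpha + 1/2,  beta_bar = beta + (z - m)^2 / 2
    return alpha + 0.5, beta + 0.5 * (z - m) ** 2

# Example: prior IG(2, 3), observation z = 5 with predicted mean m = 1
a_bar, b_bar = ig_conjugate_update(2.0, 3.0, 5.0, 1.0)
```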
Assume now that, at node s at instant n − 1, the joint posterior PDF $p(\sigma_{1:R}^2 \mid \mathbf{x}_{s,0:n-1}^{(q)}, \mathcal{Z}_{s,0:n-1})$ is factored as
In the sequel, assume that node s transmits to a neighboring node r its weighted particle set $\{(w_{s,n-1}^{(q)}, \mathbf{x}_{s,n-1}^{(q)})\}$ and the corresponding parameters $\{\alpha_{s,i,n-1}, \beta_{s,i,n-1}^{(q)}\}$, q = 1,…,Q, i = 1,…,R. At instant n, as explained in Section 4, node r samples a new set of particles
and updates its weights as
where $\tilde{\mathbf{N}}_r$ denotes {r} ∪ N_r and, as in Section 4, $\mathcal{Z}_{r,n}$ denotes the set {z_{i,n}} for all $i \in \tilde{\mathbf{N}}_r$. In (53), we used the facts that
and
which, in turn, is assumed to be factored as in (51). On the other hand, using (50), it follows that for each i ∈ {r} ∪ N_{ r },
where, from (48) and (49), α_{r,i,n} and${\beta}_{r,i,n}^{(q)}$ are given by (20) and (21) in Section 4.2.
Similarly, node r at instant n updates the posterior PDF of the unknown variances as
where
is a normalization constant that does not depend on $\sigma_{1:R}^2$ and, using (47), (48), and (49), for all i ∈ {r} ∪ N_r,
as in (20) and (21) in Section 4.2. Otherwise, if i ∉ {r} ∪ N_{ r }, then
Appendix 3
In a scenario with perfectly known sensor model parameters, assume that, at instant n − 1, node s has a linear estimate $\hat{\mathbf{x}}_{s,n-1\mid n-1}$ of the hidden state $\mathbf{x}_{n-1}$ based on the observations $\mathcal{Z}_{s,0:n-1}$, which were assimilated by node s from instant zero up to instant n − 1.
In the sequel, as proposed in [2], assume that node s and a randomly chosen node r in the neighborhood of s exchange their respective estimates $\hat{\mathbf{x}}_{s,n-1\mid n-1}$ and $\hat{\mathbf{x}}_{r,n-1\mid n-1}$, together with the associated conditional covariance matrices $\mathbf{P}_{s,n-1\mid n-1}$ and $\mathbf{P}_{r,n-1\mid n-1}$.
At instant n, we then obtain a new linear estimate $\hat{\mathbf{x}}_{r,n\mid n}$ at node r, with associated conditional covariance matrix $\mathbf{P}_{r,n\mid n}$, by propagating $\hat{\mathbf{x}}_{s,n-1\mid n-1}$ and $\mathbf{P}_{s,n-1\mid n-1}$ through the usual extended Kalman filter recursions, but now assimilating only the local measurements {z_{i,n}}, i ∈ {r} ∪ N_r, also denoted $\mathcal{Z}_{r,n}$. Under that approach, $\hat{\mathbf{x}}_{r,n\mid n}$ is an approximate linear minimum mean square error estimate (see [6]) of the hidden state $\mathbf{x}_n$ at instant n given the new set of observations $\mathcal{Z}_{r,0:n} = \{\mathcal{Z}_{r,n}, \mathcal{Z}_{s,0:n-1}\}$.
Specifically, for a more general state-space model of the form
with $E\{\mathbf{u}_n \mathbf{u}_n^T\} = \mathbf{Q}_n$ and $E\{\mathbf{v}_{r,n} \mathbf{v}_{r,n}^T\} = \mathbf{R}_{r,n}$, the prediction step of the extended Kalman filter at node r at instant n, after the parameter exchange, is given by
On the other hand, making
the update step equations of the distributed EKF become
Note that, in the update step of the random exchange distributed EKF, node r must have access to the measurements {z_{i,n}} of its immediate neighbors and must also know their respective sensor covariance matrices {R_{i,n}} and the analytic expressions of the neighboring gradients $\{\partial \mathbf{h}_i(\cdot)/\partial \mathbf{x}\}$, which are then all evaluated locally at node r at the predicted estimate $\hat{\mathbf{x}}_{r,n\mid n-1}$. Alternatively, node r may transmit $\hat{\mathbf{x}}_{r,n\mid n-1}$ to its neighbors, which then evaluate their respective gradients and transmit back the matrices {H_{i,n}} and {R_{i,n}} to node r.
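The update step just described can be sketched as follows (our illustration, assuming the stacked neighborhood quantities have already been assembled; variable names are ours):

```python
import numpy as np

def redif_ekf_update(x_pred, P_pred, z, h, H, R):
    # EKF measurement update at node r after the random exchange: z stacks
    # the measurements of node r and its immediate neighbors, h the stacked
    # predicted measurements h_i(x_pred), H the stacked Jacobians, and R
    # the block-diagonal stacked measurement noise covariance.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_upd = x_pred + K @ (z - h)
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_upd, P_upd

# Toy check: one scalar measurement of the first of two state components
x_upd, P_upd = redif_ekf_update(
    np.zeros(2), np.eye(2),
    z=np.array([2.0]), h=np.array([0.0]),
    H=np.array([[1.0, 0.0]]), R=np.array([[1.0]]))
```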
References
 1.
Djurić PM, Beaudeau J, Bugallo MF: Noncentralized target tracking with mobile agents. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Prague: IEEE; 2011:5928-5931.
 2.
Kar S, Moura JMF: Gossip and distributed Kalman filtering: weak consensus under weak detectability. IEEE Trans. Signal Process 2011, 59(4):1766-1784.
 3.
Cattivelli FS, Sayed AH: Diffusion strategies for distributed Kalman filtering and smoothing. IEEE Trans. Automatic Control 2010, 55(9):2069-2084.
 4.
Ribeiro A, Giannakis GB, Roumeliotis SI: SOI-KF: Distributed Kalman filtering with low-cost communications using the sign of innovations. IEEE Trans. Signal Process 2006, 54(12):4782-4795.
 5.
Doucet A, Godsill S, Andrieu C: On sequential Monte Carlo sampling methods for Bayesian filtering. Stat. Comput 2000, 10(3):197-208. 10.1023/A:1008935410038
 6.
Bruno MGS: Sequential Monte Carlo methods for nonlinear discretetime filtering. Synth. Lect. Signal Process 2013, 6(1):199. 10.2200/S00471ED1V01Y201303SPR011
 7.
Hlinka O, Hlawatsch F, Djurić PM: Distributed particle filtering in agent networks: a survey, classification, and comparison. IEEE Signal Process. Mag 2013, 30(1):61-81.
 8.
Hlinka O, Sluciak O, Hlawatsch F, Djurić PM, Rupp M: Likelihood consensus and its application to distributed particle filtering. IEEE Trans. Signal Process 2012, 60(8):4334-4349.
 9.
Lee SH, West M: Markov chain distributed particle filters (MCDPF). In Proceedings of the 48th IEEE International Conference on Decision and Control. Shanghai: IEEE; 2009:5496-5501.
 10.
Ustebay D, Coates M, Rabbat M: Distributed auxiliary particle filters using selective gossip. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Prague: IEEE; 2011:3296-3299.
 11.
Dias SS, Bruno MGS: Cooperative target tracking using decentralized particle filtering and RSS sensors. IEEE Trans. Signal Process 2013, 61(14):3632-3646.
 12.
Yadav V, Salapaka MV: Distributed protocol for determining when averaging consensus is reached. In 45th Annual Allerton Conference. Allerton House - UIUC; 2007:715-720.
 13.
Tsoumakos D, Roussopoulos N: A comparison of peer-to-peer search methods. In Proceedings of WebDB. San Diego: Citeseer; 2003:61-66.
 14.
Farahmand S, Roumeliotis SI, Giannakis GB: Particle filter adaptation for distributed sensors via set membership. In Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Dallas: IEEE; 2010:3374-3377.
 15.
Mohammadi A, Asif A: Consensus-based distributed unscented particle filter. In Proceedings of the 2011 IEEE Statistical Signal Processing Workshop (SSP). Nice: IEEE; 2011:237-240.
 16.
Sayed AH, Tu SY, Chen J, Zhao X, Towfic ZJ: Diffusion strategies for adaptation and learning over networks: an examination of distributed strategies and network behavior. IEEE Signal Process. Mag 2013, 30(3):155-171.
 17.
Dias SS, Bruno MGS: Distributed emitter tracking using random exchange diffusion particle filters. In Proceedings of the 16th International Conference on Information Fusion. Istanbul: IEEE; 2013.
 18.
Casella G, Robert CP: Rao-Blackwellisation of sampling schemes. Biometrika 1996, 83(1):81-94. 10.1093/biomet/83.1.81
 19.
Gelman A, Carlin JB, Stern HS, Rubin DB: Bayesian Data Analysis, Texts in Statistical Science. Florida: Chapman & Hall/CRC; 2003.
 20.
Dias SS, Bruno MGS: A RaoBlackwellized random exchange diffusion particle filter for distributed emitter tracking. In IEEE International Workshop on Computational Advances in MultiSensor Adaptive Processing (CAMSAP). St. Martin: IEEE; 2013.
 21.
Sheng X, Hu YH, Ramanathan P: Distributed particle filter with GMM approximation for multiple targets localization and tracking in wireless sensor network. In Proceedings of the 4th International Symposium on Information Processing in Sensor Networks, IPSN '05. Los Angeles: IEEE Press; 2005:181-188.
 22.
Bordin CJ, Bruno MGS: A particle filtering algorithm for cooperative blind equalization using VB parametric approximations. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Dallas: IEEE; 2010:3834-3837.
 23.
Üstebay D, Castro R, Rabbat M: Selective gossip. In Proceedings of the 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP). Aruba: IEEE; 2009:61-64.
 24.
Bar-Shalom Y, Li XR: Multitarget-Multisensor Tracking: Principles and Techniques. Storrs, CT: University of Connecticut; 1995.
 25.
Patwari N, Hero AO III, Perkins M, Correal NS, O'Dea RJ: Relative location estimation in wireless sensor networks. IEEE Trans. Signal Process 2003, 51(8):2137-2148. 10.1109/TSP.2003.814469
 26.
Coates M: Distributed particle filters for sensor networks. In Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks. New York: ACM; 2004:99-107.
 27.
Bordin CJ, Bruno MGS: Cooperative blind equalization of frequency-selective channels in sensor networks using decentralized particle filtering. In Proceedings of the 42nd Asilomar Conference on Signals, Systems and Computers. Pacific Grove: IEEE; 2008:1198-1201.
 28.
Xiao L, Boyd S: Fast linear iterations for distributed averaging. Syst. & Control Lett 2004, 53(1):65-78. 10.1016/j.sysconle.2004.02.022
 29.
Bordin CJ, Bruno MGS: Consensus-based distributed particle filtering algorithms for cooperative blind equalization in receiver networks. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Prague: IEEE; 2011:3968-3971.
30. Boyd S, Ghosh A, Prabhakar B, Shah D: Randomized gossip algorithms. IEEE Trans. Inf. Theory 2006, 52(6):2508-2530.
31. Dempster AP, Laird NM, Rubin DB: Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. 1977, 39(1):1-38.
32. Dias SS, Bruno MGS: Cooperative particle filtering for emitter tracking with unknown noise variance. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Kyoto: IEEE; 2012:2629-2632.
33. Bruno MGS, Araújo RV, Pavlov AG: Sequential Monte Carlo methods for joint detection and tracking of multi-aspect targets in infrared radar images. EURASIP J. Adv. Signal Process. 2008. doi:10.1155/2008/217373
Acknowledgements
The authors would like to thank Professor José M. F. Moura for fruitful discussions at ICASSP 2012 that motivated this work. The authors would also like to acknowledge Dr. Claudio Bordin Jr. for helpful discussions on the topic of internode communication cost in network particle filtering.
Additional information
Competing interests
The authors declare that they have no competing interests.
Marcelo G S Bruno and Stiven S Dias contributed equally to this work.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Bruno, M.G.S., Dias, S.S. Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering. EURASIP J. Adv. Signal Process. 2014, 19 (2014). https://doi.org/10.1186/1687-6180-2014-19
Keywords
 Distributed particle filters
 RSS emitter tracking
 Diffusion
 Wireless sensor networks