
  • Research
  • Open Access

Exploiting sensor mobility and covariance sparsity for distributed tracking of multiple sparse targets

EURASIP Journal on Advances in Signal Processing 2016, 2016:53

https://doi.org/10.1186/s13634-016-0354-y

  • Received: 2 November 2015
  • Accepted: 22 April 2016
  • Published:

Abstract

The problem of distributed tracking of multiple targets is tackled by exploiting sensor mobility and the presence of sparsity in the sensor data covariance matrix. Sparse matrix decomposition relying on norm-one/two regularization is integrated with a kinematic framework to identify informative sensors, associate them with the targets, and enable them to closely follow the moving targets. Coordinate descent techniques are employed to determine the target-informative sensors in a distributed way, while the modified barrier method is employed to minimize proper error covariance matrices acquired by extended Kalman filtering. Different from existing approaches, which force all sensors to move, here local updating recursive rules are obtained only for the target-informative sensors, which can update their locations and closely follow the corresponding targets while staying connected. Simulations demonstrate that the proposed scheme outperforms alternative tracking schemes while accurately tracking multiple targets by imposing mobility only on the target-informative sensors.

Keywords

  • Mobile sensor networks
  • Distributed processing
  • Sparse decomposition
  • Multi-target tracking

1 Introduction

In recent years, potential applications of sensor networks (SN) have expanded due to the low cost of the sensing units, their ability to cover large areas, and the robustness distributed processing offers. One characteristic exploited more and more in sensor networks is sensor mobility and the design of kinematic rules that control sensor movement. Sensor mobility adds extra flexibility to a sensor network, making it capable of covering larger areas, as well as more energy efficient and robust [1]. Mobile sensors have been extensively utilized in target tracking applications to enable sensors to closely follow the moving target(s) and provide accurate target location estimates [2–4]. The aforementioned approaches require all sensors to remain active [2–4], which may lead to excessive resource consumption despite the targets’ locality and the fact that in practice a small portion of sensors may possess useful information about the present targets. The aim here is to design an adaptive scheme that exploits mobility and covariance sparsity to associate targets with sensors and then properly determine kinematic strategies only for the informative sensors, which will closely follow the field targets.

In the absence of sensor mobility, there has been a plethora of approaches for tracking multiple targets while associating targets with sensor measurements. Existing works [5–8] associate measurements acquired at static sensors with targets across time and rely heavily on probability models. A distributed Kalman filtering scheme is proposed in [9] relying on information diffusion strategies. In [9], only neighboring sensors collaborate, though all sensors in the network are utilized to track a single source while sensors have fixed locations. A different approach is followed in [10], where consensus-averaging is employed across the whole sensor network and all the sensors are forced to be active irrespective of the quality of their measurements. In [11], a related single-target distributed tracking approach is proposed, in which extended Kalman filtering is employed for tracking. A probability model is assumed to determine informative sensors, which may lead to instability due to its dependence on the tracking estimates. Differently, in this paper, distributed tracking of multiple targets will be considered, while sensor mobility will be exploited and combined with a sensor-to-target association scheme for selecting target-informative sensors without relying on model parameters and state estimators that may be inaccurate and result in divergence. It should be pointed out that the distributed characterization here refers to the fact that (i) only neighboring sensors need to communicate with each other and collaborate for multi-target tracking, while (ii) processing will take place in a few head sensors and will not involve all sensors in the network but only those sensors that bear information about the moving targets.

Single-target tracking using mobile sensor networks has been studied for a variety of different scenarios [12–14]. Most of these approaches control the movement of all sensors by minimizing the estimation error covariance [4, 13], while the approach in [12] manages sensor mobility based on a Bayesian estimation model and restricts sensors to moving only on a grid of locations. A path planning strategy for a setting involving a fixed-location target and a single moving sensor is designed in [15] by maximizing the determinant of the Fisher information matrix corresponding to the configuration. In [16], an approach is proposed for controlling the trajectories of multiple UAVs by minimizing the localization uncertainty for a fixed-location target setting where the target is emitting a radio signal. The work in [17] rigorously presents how sensor mobility can increase spatial resolution when tracking a target with mobile sensors.

When tracking multiple targets with mobile sensors, the approach in [3] proposed an active sensing model where the target-sensor association is based on a nearest neighbor rule, which heavily relies on the accuracy of the state estimator, while a central processing center is required. The scheme in [18] tackled the problem of moving sensors using a flock control law where all sensors are utilized, while the targets are some of the moving sensors whose positions are known. The approach in [19] utilizes clustering and neural networks to move sensors under the assumption that target locations are available. The scheme in [20] designs a Kalman filtering approach with gradient descent-based kinematic rules under the assumption that it is known which targets every sensor observes, bypassing in that way the essential sensor-to-target association step. These schemes involve movement of all sensors at every time instant, leading to resource-demanding algorithms that do not exploit the spatial locality of the field targets.

Measurements corresponding to sensors which are close to the same target tend to be statistically correlated. Given that targets are spatially localized and affect small portions of the sensor network, an approximately sparse sensor data covariance matrix emerges. Sparsity (the presence of many zero entries in a vector or matrix) has been exploited in a wide range of applications including sparse regression and statistical inference, e.g., see [21, 22]. The problem of associating targets to sensors, as well as determining the sensors with target-informative measurements, is formulated here as the task of decomposing a matrix into sparse factors. The sparse matrix factorization techniques in [23, 24] are integrated here with proper sensor kinematic strategies and tracking techniques to exploit sensor mobility. Note that in [23, 24], a stationary (immobile) sensor network is considered where sensors have fixed locations. Tracking in [23, 24] is performed by immobile sensors, whereas here tracking is generalized to a mobile network with the more challenging task of designing sensor kinematic strategies, integrated with multi-target tracking, that improve tracking accuracy while preserving local sensor network connectivity. Norm-one and norm-two regularization mechanisms are employed to formulate a pertinent minimization framework that recovers sparse covariance factors, while estimating the number of targets in the field. Coordinate descent techniques [25, 26] are employed to derive local updating recursions that allow sensors to associate with targets.

Different from the aforementioned tracking schemes using sensor mobility, here, only the target-informative sensors will be enabled to move at every time instant and closely track the moving targets. Thus, only target-affected portions of the sensor network will be used for tracking the moving targets, potentially resulting in lower resource consumption and a prolonged network lifetime. Kinematic rules will be designed by minimizing proper error covariance matrices obtained by extended Kalman filtering recursions [27] used to track each of the targets. The minimization will be performed under connectivity constraints that ensure the moving sensors stay connected and are able to communicate. The modified barrier method ([26], pg. 423) is employed to solve a pertinent constrained minimization problem and obtain distributed kinematic rules that the mobile sensors can apply locally without the need of a central controller. In contrast to existing approaches, the novel framework identifies and controls the movement only of target-informative sensors, allowing for accurate tracking.

The novelties of the proposed framework with respect to existing work involve the following: (i) utilization of covariance sparsity and properly designed kinematic rules to perform dynamic spatio-temporal sensor-to-target association and tracking using mobile sensors, not present in [11, 23, 24]; (ii) different from existing approaches [12–14], it enables only the sensors acquiring informative measurements about the targets to move, instead of moving all sensors to track a single target; (iii) different from [11], it does not rely on probabilistic models or state estimates to associate sensors with targets; (iv) it takes local sensor network connectivity into account to ensure that the tracking process is carried out continuously; and (v) different from [15, 16], which consider fixed-location target(s), here kinematic strategies are developed for tracking multiple moving targets. The proposed approach achieves this task only by using the sensor data and sparsity-imposing mechanisms.

The paper is structured as follows. The problem formulation and setting are given in Section 2. The sensor-to-target association scheme, which provides the target-informative sensors, is delineated in Section 3.1 and is combined with extended Kalman filtering techniques in Section 3.2. Novel kinematic rules are developed in Section 3.3 by minimizing a pertinent error covariance matrix under connectivity constraints and employing the modified barrier method to derive local kinematic rules that enable the target-informative sensors to update their locations and closely follow the moving targets. The proposed algorithmic framework is detailed in Section 4.1, while the communication and computational costs are discussed in Section 4.2. Extensive numerical tests are carried out in Section 5 to corroborate the advantages of the novel method over existing alternatives.

2 Problem setting

An ad hoc sensor network comprising p mobile sensors is considered here. The sensors monitor a field where an unknown and possibly time-varying number of moving targets is present. Each sensor communicates only with its neighboring sensors, i.e., the sensors within its communication range with which it can exchange information via a single hop of communication. The single-hop neighborhood of sensor j is denoted as \(\mathcal {N}_{j}(t)\), where t denotes the time index.

In general, all targets are assumed to be moving in a K-dimensional space. Then, every target, say the ρth, is characterized by a 2K×1 state vector which contains both its position coordinates and velocity information for each coordinate. The position coordinates for the ρth target at time t are stacked in vector \(\mathbf {p}_{\rho }(t)=\left [p_{\rho,x_{1}}(t),\ldots,p_{\rho,x_{K}}(t)\right ]^{T}\), while the velocity per coordinate is in vector \(\mathbf {v}_{\rho }(t)=\left [v_{\rho,x_{1}}(t),\ldots,v_{\rho,x_{K}}(t)\right ]^{T}\). So at time t, the state vector can be written as \(\mathbf {s}_{\rho }(t)=\left [\mathbf {p}_{\rho }^{T}(t), \mathbf {v}_{\rho }^{T}(t)\right ]^{T}\), and it evolves according to a near-constant-velocity model, see e.g., [28]:
$$ \mathbf{s}_{\rho}(t+1)=\mathbf{F}\mathbf{s}_{\rho}(t) + \mathbf{u}_{\rho}(t), $$
(1)
where F is the 2K×2K state transition matrix and u ρ (t) the zero-mean Gaussian state noise with variance Σ u . Matrices F and Σ u are given as follows:
$$ \begin{aligned} \mathbf{F}&=\left[\begin{array}{cc} \mathbf{I}_{K}&\Delta T\cdot\mathbf{I}_{K}\\ \mathbf{0}_{K}&\mathbf{I}_{K}\\ \end{array} \right],\\\mathbf{\Sigma}_{u}&={\sigma_{u}^{2}}\left[\begin{array}{cc} (\Delta T)^{3}/3\cdot \mathbf{I}_{K}& (\Delta T)^{2}/2\cdot\mathbf{I}_{K}\\ (\Delta T)^{2}/2\cdot\mathbf{I}_{K}&\Delta T\cdot\mathbf{I}_{K}\\ \end{array} \right], \end{aligned} $$
(2)

where \({\sigma _{u}^{2}}\) is the noise variance and I K denotes the identity matrix of size K×K, while Δ T denotes the sampling period.
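As a concrete illustration, the block structure of F and Σ_u in (2) can be assembled for arbitrary K. The following sketch (the values of ΔT and σ_u² are illustrative, not taken from the paper) simulates one step of the near-constant-velocity model (1):

```python
import numpy as np

# Build F and Sigma_u from (1)-(2) for K spatial dimensions (assumed values).
def cv_model(K=2, dT=1.0, sigma_u2=0.01):
    I = np.eye(K)
    Z = np.zeros((K, K))
    # F maps [position; velocity] -> [position + dT*velocity; velocity]
    F = np.block([[I, dT * I],
                  [Z, I]])
    Sigma_u = sigma_u2 * np.block([[dT**3 / 3 * I, dT**2 / 2 * I],
                                   [dT**2 / 2 * I, dT * I]])
    return F, Sigma_u

rng = np.random.default_rng(0)
F, Sigma_u = cv_model(K=2, dT=0.5)
s = np.array([0.0, 0.0, 1.0, -1.0])       # state [p_x, p_y, v_x, v_y]
# One step of (1): s(t+1) = F s(t) + u(t), u ~ N(0, Sigma_u)
s_next = F @ s + rng.multivariate_normal(np.zeros(4), Sigma_u)
```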

Sensor j senses the moving targets by acquiring at time t a scalar measurement that depends on the target locations according to the following nonlinear model:
$$ x_{j}(t)={\sum}_{\rho=1}^{R}{{a_{\rho}(t)}{d_{j,\rho}^{-2}(t)}}+w_{j}(t),\;j=1,\ldots,p, $$
(3)
where a_ρ(t) denotes the intensity of a signal emitted from the ρth target and \(d_{j,\rho }(t)=\|\mathbf {p}_{j}(t)-\mathbf {p}_{\rho }(t)\|\) is the distance between sensor j and the ρth target at time t. The total number of targets that move in the field through the lifespan of the SN is denoted as R, while w_j(t) denotes zero-mean white sensing noise with variance \({\sigma _{w}^{2}}\). The following assumptions are introduced in the considered setting:
  • A1: In the measurement model in (3), it is assumed that the targets act as transmitters and each sensor will receive one reflection of the signal emitted from the targets. Signals emitted from the targets propagate via free space, explaining the \(d_{j,\rho }^{-2}(t)\) attenuation coefficients, and are superimposed as shown in (3), see e.g., [29].

  • A2: The signal amplitudes a ρ (t) are considered to be uncorrelated across the different targets.

  • A3: Among the summands \(a_{\rho }(t)d_{j,\rho }^{-2}(t)\) in (3), only one has a large amplitude when sensor j is close to the ρth target, whereas others are negligible due to the square-law attenuation \(d_{j,\rho }^{-2}(t)\) caused by the free space propagation.

Note that assumption A3 corresponds to a setting where at most one target is present within the sensing range of a sensor. This is a more relaxed version of the common assumption that each sensor contains the measurement of a single specific target [6–8]. The signal amplitudes a_ρ(t) will be nonzero for the interval in which the corresponding target is active and moving, and are kept at zero when the target is inactive or has disappeared.

The signals a_ρ(t) emitted from the targets could correspond to communication radio signals that the targets may be transmitting; e.g., targets could correspond to cell phone users moving in an area, or unmanned aerial/ground vehicles or military vehicles that move within the monitored area and need to be tracked, see e.g., [16]. The deployed sensors listen for these signals to track the moving entities. The targets could correspond to independent entities; thus, it is expected that the information bits they transmit are uncorrelated, giving rise to uncorrelated transmission signals [30]. Thus, the communication radio signals that the targets may be emitting are utilized to perform tracking and move the sensors appropriately. Applications include localization and tracking of mobile users in wireless networks, as well as tracking of vehicles in tactical environments [16].

Stacking all the sensor measurements in (3) into a p×1 vector, it follows that:
$$ \mathbf{x}_{t}=\mathbf{D}_{t}\mathbf{a}_{t}+\mathbf{w}_{t},\text{where }\mathbf{a}_{t}:=\left[a_{1}(t) \; a_{2}(t)\ldots a_{R}(t)\right]^{T}, $$
(4)
where D t is a p×R matrix with entries \(\mathbf {D}_{t}(j,\rho)=d_{j,\rho }^{-2}(t)\) with j=1,…,p and ρ=1,…,R. The noise w t has covariance \(\mathbf {\Sigma }_{w}={\sigma _{w}^{2}}\mathbf {I}_{p}\). Given that the entries of a t are uncorrelated, it follows readily that the data covariance matrix is
$$ \mathbf{\Sigma}_{x,t}=\mathbf{D}_{t}\mathbf{\Sigma}_{a}\mathbf{D}_{t}^{T}+{\sigma_{w}^{2}}\mathbf{I}_{p}= \bar{\mathbf{D}}_{t}\bar{\mathbf{D}}_{t}^{T}+{\sigma_{w}^{2}}\mathbf{I}_{p}, $$
(5)

where Σ_a is the diagonal covariance matrix of a_t, while \(\bar {\mathbf {D}}_{t}:=\mathbf {D}_{t}\mathbf {\Sigma }_{a}^{1/2}\). Among the R entries of a_t, there will be r(t) nonzero entries corresponding to the active targets moving in the sensed field at time t. In the setting here, once a target becomes inactive (i.e., a_ρ(t)=0), it remains inactive for the rest of time.
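The measurement model (3)–(5) can be sketched numerically as follows; the sensor/target positions, amplitudes, and noise level below are hypothetical values chosen only for illustration:

```python
import numpy as np

# Illustrative setting: p=6 sensors, R=2 targets on a plane (assumed values).
rng = np.random.default_rng(1)
sensor_pos = rng.uniform(0, 10, size=(6, 2))      # p_j(t)
target_pos = np.array([[2.0, 3.0], [8.0, 7.0]])   # p_rho(t)
a = np.array([5.0, 4.0])                          # amplitudes a_rho(t)
sigma_w = 0.1

# D_t(j, rho) = d_{j,rho}^{-2}(t), the square-law attenuation of (3)
dist = np.linalg.norm(sensor_pos[:, None, :] - target_pos[None, :, :], axis=2)
D = dist ** (-2)
x = D @ a + sigma_w * rng.standard_normal(6)      # x_t = D_t a_t + w_t as in (4)

# Ensemble covariance structure (5): Sigma_x = D Sigma_a D^T + sigma_w^2 I
Sigma_a = np.diag([1.0, 1.0])
Sigma_x = D @ Sigma_a @ D.T + sigma_w**2 * np.eye(6)
```

Note that the low-rank term D Σ_a Dᵀ has rank at most R, which is what makes the covariance approximately sparse when targets are spatially localized.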

The goal is to enable the mobile sensors to track an unknown number of targets present in the monitored field. Novel target association and sensor mobility strategies will be combined with tracking techniques to enable sensors to accurately track the different target trajectories. Proper kinematic strategies will be developed to allow only a small percentage of target-informative sensors to move, different from existing approaches [12, 14] where all sensors move at every time instant, which may be more resource demanding. Judiciously selecting and moving sensors will enable target tracking even when the targets move outside the area originally monitored by the sensors.

3 Distributed association, tracking, and sensor kinematic strategies

3.1 Target-informative sensor selection

Due to the presence of multiple targets in the monitored field, the first goal is to determine the sets of sensors, namely \({\mathcal {S}}_{\rho,t}\), that acquire information-bearing measurements about the ρth target. From the observation model in (4), note that the strong-amplitude entries of the ρth column of D_t, namely \(\{\mathbf {D}_{t,:\rho }\}_{\rho =1}^{R}\), can reveal the sensors within subset \({\mathcal {S}}_{\rho,t}\). Specifically, recall that \(\mathbf {D}_{t}(j,\rho)=d_{j,\rho }^{-2}(t)\); thus, when sensor j and target ρ are close in distance, the corresponding entry is expected to have large amplitude, while the further away they get from each other, the closer to zero the entry gets. The matrix D_t can therefore be assumed approximately sparse, and the strong-amplitude entries (away from zero) in D_{t,:ρ} can be used to determine the informative sensor members of \({\mathcal {S}}_{\rho,t}\) at time instant t. Thus, determining \({\mathcal {S}}_{\rho,t}\) boils down to recovering the support of the columns of D_t.

To recover the sparse matrix \(\bar {\mathbf {D}}_{t}\) in (4), the data covariance matrix will be decomposed into sparse factors. Since the targets and sensors may be moving while the number of targets is changing, the sparse sensor data covariance Σ_{x,t} is also time-varying. In practice, the ensemble covariance is not available and needs to be estimated. To this end, exponential weighting, as used in recursive least-squares for processing non-stationary signals (see e.g., [31], Ch. 9), is employed to estimate the time-varying covariance matrix as follows:
$$ \hat{\mathbf{{\Sigma}}}_{x,t}=\frac{1-\omega}{1-\omega^{t+1}}\textstyle\sum_{{\tau}=0}^{t}{\omega}^{t-{\tau}}(\mathbf{x}_{\tau}-\bar{\mathbf{x}}_{t}) (\mathbf{x}_{\tau}-\bar{\mathbf{x}}_{t})^{T}, $$
(6)
where ω∈(0,1) denotes a forgetting factor and
$$ \bar{\mathbf{x}}_{t}=\frac{1-\omega}{1-\omega^{t+1}}\textstyle\sum_{{\tau}=0}^{t}{\omega}^{t-{\tau}}\mathbf{x}_{\tau}, $$
(7)

corresponds to a real-time estimate of the ensemble mean at time instant t. Note that ω in Eqs. (6) and (7) puts more emphasis on recent data while gradually forgetting past data, which is exactly what an up-to-date estimator needs to do for the time-varying setting considered here. The scaling (1−ω)(1−ω^{t+1})^{−1} in (6) and (7) ensures that the two estimates \(\hat {\mathbf {\Sigma }}_{{x,t}}\) and \(\bar {\mathbf {x}}_{t}\) of the ensemble quantities Σ_{x,t} and \(\mathbb {E}[\mathbf {x}_{t}]\) are unbiased.
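A direct (batch) implementation of the exponentially weighted estimators (6) and (7) can be sketched as follows; in practice these sums would be computed recursively as new measurements arrive:

```python
import numpy as np

# Exponentially weighted sample mean (7) and covariance (6);
# omega in (0,1) is the forgetting factor.
def ew_mean_cov(X, omega):
    """X: (t+1) x p array stacking measurements x_0, ..., x_t as rows."""
    T, p = X.shape
    w = omega ** np.arange(T - 1, -1, -1)      # weights omega^{t-tau}
    scale = (1 - omega) / (1 - omega ** T)     # unbiasing normalization
    xbar = scale * (w[:, None] * X).sum(axis=0)
    C = X - xbar                               # centered measurements
    Sigma = scale * (C.T * w) @ C              # weighted outer products
    return xbar, Sigma
```

Because scale times the sum of the weights equals one, the estimator reproduces a constant signal exactly, which is the unbiasedness property noted above.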

To account for the nearly sparse structure of \(\bar {\mathbf {D}}_{t}\), the unknown number of targets, and the single-hop connectivity of the sensor nodes, the following formulation relying on norm-one/norm-two regularization is utilized:
$$ \begin{aligned} \left(\hat{\mathbf{M}}_{t},\{\hat{\sigma}_{j}\}_{j=1}^{m}\right)&:=\arg\min_{\mathbf{M}_{t},\{\sigma_{j}\}_{j=1}^{m}} \|\mathbf{E}\odot\left(\hat{\boldsymbol{\Sigma}}_{x,t}-\mathbf{M}_{t}\mathbf{M}_{t}^{T}\right.\\ &\quad\left.-\text{diag}\left(\sigma_{1,t}^{2},\ldots,\sigma_{p,t}^{2}\right)\right)\|_{F}^{2}\\ &\quad+\sum_{\ell=1}^{L}\left(\lambda_{\ell}\|\mathbf{M}_{t,:\ell}\|_{1}+\phi\|\mathbf{M}_{t,:\ell}\|_{2}\right), \end{aligned} $$
(8)

where ⊙ denotes the Hadamard operator (entry-wise matrix product), \({\sigma _{j}^{2}}\) is the noise variance estimate at sensor j, L is an upper bound on the number of active sensed targets r(t) (L ≥ r(t)), and \(\mathbf {M}_{t}\in \mathbb {R}^{p\times L}\) contains L columns that estimate the sparse columns of \(\bar {\mathbf {D}}_{t}\). \(\mathbf {M}_{t,:\ell }\) denotes the ℓth column of M_t. The formulation was first proposed in [23, 24] to perform target-sensor association in a network of stationary sensors that do not have moving capabilities. Here, this formulation will be utilized to determine the different sets of informative sensors observing different targets, before being integrated with kinematic control rules.

The Hadamard operator, along with the adjacency matrix E in (8), allows only the single-hop covariance entries to be used, since they can be calculated by direct communication of the corresponding neighboring sensors. The first term in (8) accounts for the structure in (5). Sparsity is induced in the columns of M_t using the norm-one term in (8) (see e.g., [22]), where λ_ℓ denotes the nonnegative sparsity-controlling coefficient used to adjust the number of zeros in \(\hat {\mathbf {M}}_{t,:\ell }\). The coefficient ϕ≥0 in the last term of (8) promotes group sparsity among rows [32] and is introduced to adjust the number of nonzero columns of \(\hat {\mathbf {M}}_{t}\) needed to approximate \(\hat {\boldsymbol {\Sigma }}_{{x,t}}\). This zeroes out unnecessary columns in \(\hat {\mathbf {M}}_{t}\) when the number of active targets in the field is smaller than R. The number of nonzero columns in M_t indirectly estimates the number of targets at time instant t, namely \(\hat {r}(t)\).
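For concreteness, the objective in (8) can be evaluated as follows. This is a minimal sketch in which the adjacency mask E, the penalty weights, and all dimensions are illustrative assumptions:

```python
import numpy as np

# Evaluate the regularized cost in (8) for given factors M and noise
# variances sigma2; E masks covariance entries that single-hop neighbors
# cannot compute (E(j,i)=1 only if i is in N_j or i=j).
def cost(M, sigma2, Sigma_hat, E, lam, phi):
    resid = E * (Sigma_hat - M @ M.T - np.diag(sigma2))   # Hadamard product
    fit = np.linalg.norm(resid, 'fro') ** 2               # first term of (8)
    l1 = sum(lam[l] * np.abs(M[:, l]).sum() for l in range(M.shape[1]))
    l2 = phi * sum(np.linalg.norm(M[:, l]) for l in range(M.shape[1]))
    return fit + l1 + l2
```

The norm-one term shrinks individual entries of each column, while the (unsquared) norm-two term can zero out entire columns at once, which is how the number of active targets is estimated.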

The cost in (8) is minimized by an iterative minimization scheme based on coordinate descent [25, 26], where sensor j is responsible for updating the jth row of M_t, namely M_{t,j:}. Specifically, the cost is minimized with respect to one entry of M_t or \(\text {diag}({\sigma _{1}^{2}},\ldots,{\sigma _{p}^{2}})\), while keeping the rest fixed to their most up-to-date values. Sensor j updates the entries \(\{\mathbf {M}_{t}(j,\ell)\}_{\ell =1}^{L}\) and the variance \(\sigma _{j,t}^{2}\). During one coordinate cycle, all the entries of M_t and \(\text {diag}(\sigma _{1,t}^{2},\ldots,\sigma _{{p,t}}^{2})\) will be updated.

The updates for entries \(\hat {\mathbf {M}}_{t}^{k}(j,\ell)\) will be formed by differentiating (8) with respect to M_t(j,ℓ) and setting the derivative equal to zero, while fixing the rest of the entries of M_t and σ_{j,t} to their most recent updates \(\hat {\mathbf {M}}_{t}^{k-1}\) and \(\{\hat{\sigma}_{j,t,k-1}^{2}\}\) evaluated at cycle k−1. It turns out (details in [23, 24]) that during coordinate cycle k, the update \(\hat {\mathbf {M}}_{t}^{k}(j,\ell)\) can be obtained as the value that attains the minimum cost in (8) (while fixing the rest of the variables) among the candidate values: (i) z=0; (ii) the real positive roots of the third-degree polynomial
$$\begin{array}{*{20}l} z^{3}&+\left[\sum_{i\in{\mathcal{N}}_{j}}\left[\hat{\mathbf{M}}_{t}^{k-1}(i,\ell)\right]^{2} -\psi_{t,\Sigma}^{k}(j,j,\ell)+\frac{\phi}{2}\right]z \\ &-\left[\sum_{i\in{\mathcal{N}}_{j}}\psi_{t,\Sigma}^{k}(j,i,\ell)\hat{\mathbf{M}}_{t}^{k-1}(i,\ell)\right]+\frac{\lambda_{\ell}}{4}=0 \end{array} $$
(9)
and (iii) the real negative roots of the third-degree polynomial
$$\begin{array}{*{20}l} z^{3}&+\left[\sum_{i\in{\mathcal{N}}_{j}}\left[\hat{\mathbf{M}}_{t}^{k-1}(i,\ell)\right]^{2} -\psi_{t,\Sigma}^{k}(j,j,\ell)+\frac{\phi}{2}\right]z \\ &-\left[\sum_{i\in{\mathcal{N}}_{j}}\psi_{t,\Sigma}^{k}(j,i,\ell)\hat{\mathbf{M}}_{t}^{k-1}(i,\ell)\right]-\frac{\lambda_{\ell}}{4}=0 \end{array} $$
(10)
where
$$\begin{array}{*{20}l} \psi_{t,\Sigma}^{k}(j,i,\ell)&:=\hat{\boldsymbol{\Sigma}}_{x,t}(j,i)-\delta_{j,i}\hat{\sigma}_{j,t,k-1}^{2} \\ &\quad-\sum_{\ell'=1,\ell'\neq \ell}^{L}\hat{\mathbf{M}}_{t}^{k-1}\left(j,\ell'\right)\hat{\mathbf{M}}_{t}^{k-1}\left(i,\ell'\right), \end{array} $$
(11)

while δ j,i denotes the Kronecker delta function, i.e., δ j,i =1 if j=i, and δ j,i =0 if ji. The roots of the two aforementioned polynomials can be calculated using companion matrices, see e.g., [33].

Furthermore, during cycle k at time instant t, the noise variance estimates across sensors can be updated as
$$ \hat{\sigma}_{j,t,k}^{2}=\hat{\boldsymbol{\Sigma}}_{x,t}(j,j)-\hat{\mathbf{M}}_{t,j:}^{k}\left(\hat{\mathbf{M}}_{t,j:}^{k}\right)^{T},\;\;j=1,\ldots,p. $$
(12)

Sensor j needs to communicate only with its single-hop neighbors in \(\mathcal {N}_{j}\) in order to evaluate the coefficients of the polynomials in (9) and (10) and to update the noise variance estimates in (12). It can be shown that as k→∞, the updates \(\hat {\mathbf {M}}_{t}^{k}\) converge at least to a stationary point of (8). Further, the sparsity-controlling coefficients \(\{\lambda _{\ell }\}_{\ell =1}^{L}\) can be set using the strategy proposed in ([23], Sec. V.A). Once the sparse columns \(\{\hat {\mathbf {M}}_{t,:\ell }\}\) are estimated, their support (the indices of relatively strong-amplitude entries) is used to determine which sensors sense a specific target at time t.

3.2 Tracking via extended Kalman filtering

The target-informative sensor subsets \({\mathcal {S}}_{\rho _{\ell },t}\) for \(\ell =1,\ldots,\hat {r}(t)\) are determined as in Section 3.1, where \(\hat {r}(t)\) is an estimate of the number of targets at time instant t obtained from the number of nonzero columns of \(\hat {\mathbf {M}}_{t}:=\hat {\mathbf {M}}_{t}^{\bar {K}}\) after applying \(\bar {K}\) coordinate cycles. Extended Kalman filtering is employed to process the nonlinear observations and track each target's location using the observations of the corresponding set \({\mathcal {S}}_{\rho _{\ell },t}\). For simplicity in exposition, the specifics of extended Kalman filtering (EKF) are delineated here for K=2 dimensions, but they can be readily generalized to more dimensions. The target state estimator and corresponding error covariance matrix, obtained by the extended Kalman filter using the observations in \({\mathcal {S}}_{\rho _{\ell },t}\) for target ρ_ℓ, are denoted as \(\hat {\mathbf {s}}_{\rho _{\ell }}(t|t)\) and \(\mathbf {M}_{\rho _{\ell }}(t|t)\), respectively. The prediction step, see e.g., [27], involves the following updating recursions for the state estimator and covariance at time instant t
$$ {\small{\begin{aligned} {}\hat{\mathbf{s}}_{\rho_{\ell}}(t+1|t)=\mathbf{F}\hat{\mathbf{s}}_{\rho_{\ell}}(t|t),\;\;\;\hat{\mathbf{M}}_{\rho_{\ell}}(t+1|t)=\mathbf{F}\hat{\mathbf{M}}_{\rho_{\ell}}(t|t)\mathbf{F}^{T}+\boldsymbol{\Sigma}_{u}. \end{aligned}}} $$
(13)
The measurements of the sensors within set \({\mathcal {S}}_{\rho _{\ell },t}\) will then be used to carry out the correction step of the extended Kalman filter which involves the following update recursions:
$$\begin{array}{*{20}l} \hat{\mathbf{s}}_{\rho_{\ell}}(t+1|t+1)&=\hat{\mathbf{s}}_{\rho_{\ell}}(t+1|t)+\mathbf{K}_{\rho_{\ell}}(t+1)\\ &\quad\cdot\left[\mathbf{x}_{\mathcal{S}_{\rho_{\ell},t}}(t+1)-a_{\rho_{\ell}}(t)\hat{\mathbf{D}}_{\mathcal{S}_{\rho_{\ell},t}}\right] \end{array} $$
(14)
$$\begin{array}{*{20}l} \mathbf{M}^{-1}_{\rho_{\ell}}(t+1|t+1)&=\mathbf{M}^{-1}_{\rho_{\ell}}(t+1|t)+\mathbf{D}^{T}_{\nabla,\rho_{\ell}}(t+1|t)\\ &\quad{\cdot\sigma^{-2}_{w}}\mathbf{I}_{|\mathcal{S}_{\rho_{\ell},t}|}\cdot\mathbf{D}_{\nabla,\rho_{\ell}}(t+1|t), \end{array} $$
(15)
for \(\ell =1,\ldots,\hat {r}(t)\), where the matrix \(\mathbf {K}_{\rho _{\ell }}(t+1)\) is the Kalman gain given as
$$ \mathbf{K}_{\rho_{\ell}}(t+1)=\mathbf{M}_{\rho_{\ell}}(t+1|t+1)\cdot\mathbf{D}_{\nabla,\rho_{\ell}}^{T}(t+1|t){\cdot\sigma^{-2}_{w}}\mathbf{I}_{|\mathcal{S}_{\rho_{\ell},t}|}, $$
(16)
where \(\hat {\mathbf {D}}_{\mathcal {S}_{\rho _{\ell },t}}\) is a \(|\mathcal {S}_{\rho _{\ell },t}|\times 1\) vector whose entries are given by \(\left \{||\mathbf {p}_{j}(t)-\hat {\mathbf {p}}_{\rho _{\ell }}(t+1|t)||^{-2}\right \}_{j\in \mathcal {S}_{\rho _{\ell },t}}\), in which \(\hat {\mathbf {p}}_{\rho _{\ell }}(t+1|t)\) is the ρ_ℓth target position extracted from the state prediction \(\hat {\mathbf {s}}_{\rho _{\ell }}(t+1|t)\). Further, \(\mathbf {D}_{\nabla,\rho _{\ell }}(t+1|t)\) is the \(|\mathcal {S}_{\rho _{\ell },t}|\times 4\) matrix whose rows are the gradients of D_t(j,ρ_ℓ) with respect to the state vector \(\mathbf {s}_{\rho _{\ell }}\), evaluated at \(\hat {\mathbf {s}}_{\rho _{\ell }}(t+1|t)\) for \(j\in \mathcal {S}_{\rho _{\ell },t}\), i.e.,
$${} \begin{aligned} &\nabla_{\mathbf{s}_{\rho}} \mathbf{D}_{t}(j,\rho_{\ell}) {\left|{\vphantom{\mathbf{s}_{\rho_{\ell_{1}}}}}\right.}_{\mathbf{s}_{\rho_{\ell}}=\hat{\mathbf{s}}_{\rho_{\ell}}(t+1|t)} \\ &=\frac{2\cdot\left[\mathbf{p}_{j,x}(t)-\hat{\mathbf{p}}_{\rho_{\ell},x}(t+1|t), \mathbf{p}_{j,y}(t)-\hat{\mathbf{p}}_{\rho_{\ell},y}(t+1|t), 0, 0\right]^{T}}{\left[ (\mathbf{p}_{j,x}(t)-\hat{\mathbf{p}}_{\rho_{\ell},x}(t+1|t))^{2}+(\mathbf{p}_{j,y}(t) -\hat{\mathbf{p}}_{\rho_{\ell},y}(t+1|t))^{2}\right]^{2}}. \end{aligned} $$
(17)

Within each informative subset of sensors \(\mathcal {S}_{\rho _{\ell },t}\), the sensor closest in distance to the predicted position of the ρ_ℓth target, namely \(\hat {\mathbf {s}}_{\rho _{\ell }}(t+1|t)\), is set as the head sensor of the subset; it gathers the measurements of all other sensors in \(\mathcal {S}_{\rho _{\ell },t}\) and performs the EKF tracking recursions.
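The prediction-correction cycle of Section 3.2 can be sketched for a single target as follows. This is an assumed reading of (13)–(16), not the paper's code: the correction is written in information form with σ_w^{-2} noise weighting, and the amplitude a_ρ is folded into the linearization:

```python
import numpy as np

# One EKF cycle for one target under measurement model (3) (assumed reading).
def ekf_step(s_post, M_post, F, Sigma_u, x_meas, a_rho, sensor_pos, sigma_w2):
    # Prediction step, cf. (13)
    s_pred = F @ s_post
    M_pred = F @ M_post @ F.T + Sigma_u
    # Linearize h_j(s) = a_rho * ||p_j - p_rho||^{-2} at the prediction, cf. (17)
    diff = sensor_pos - s_pred[:2]             # rows: p_j - p_hat
    u = (diff ** 2).sum(axis=1)                # squared sensor-target distances
    H = np.zeros((len(sensor_pos), 4))
    H[:, :2] = a_rho * 2.0 * diff / u[:, None] ** 2
    # Information-form covariance correction, cf. (15)
    M_new = np.linalg.inv(np.linalg.inv(M_pred) + H.T @ H / sigma_w2)
    K = M_new @ H.T / sigma_w2                 # Kalman gain, cf. (16)
    innov = x_meas - a_rho / u                 # x - a_rho * d^{-2}, cf. (14)
    return s_pred + K @ innov, M_new
```

When the measurements match the predicted position exactly, the innovation vanishes and the state estimate equals the prediction, as expected of a Kalman correction.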

3.3 Sensor kinematics

The focus in this section is to derive kinematic rules for the target-informative sensors, selected according to the scheme in Section 3.1, such that they closely follow the moving targets and give accurate position estimates. The benefit of having a few sensors moving is that targets can be tracked even when they move away from the original field monitored by the sensors. Sensors that closely follow the moving targets can provide more reliable measurements about the targets than static sensors. Note that only informative sensors close to the targets will be responsible for carrying out the tracking procedure, leading to resource savings. Toward this end, the informative sensors in each subset \({\mathcal {S}}_{\rho _{\ell }}\) will move to locations that minimize the trace of the error covariance associated with the estimator \(\hat {\mathbf {s}}_{\rho _{\ell }}(t|t)\). This ensures that the informative sensors associated with each target move to locations that provide measurements resulting in good tracking accuracy. The idea of minimizing a scalar function of the predicted error covariance was also applied when moving all sensors in a network to track a single target [2, 3, 12]. Here, kinematic strategies are derived in the presence of multiple targets, while only a judiciously selected small portion of target-informative sensors moves instead of all sensors.

Of the two terms in the covariance matrix in Eq. (15), only the second is affected by the sensors' locations. Using (17), the latter term can be written as:
$$ {{\begin{aligned} {} \sum_{j\in\mathcal{S}_{\rho_{\ell},t}}\frac{4}{\left[(\mathbf{p}_{j,x}(t+1)-\hat{\mathbf{p}}_{\rho_{\ell},x}(t+1|t))^{2}+(\mathbf{p}_{j,y}(t+1)-\hat{\mathbf{p}}_{\rho_{\ell},y}(t+1|t))^{2}\right]^{3}}. \end{aligned}}} $$
(18)
Clearly, (18) depends on the positions at time instant t+1 of the sensors associated with target \(\rho_{\ell}\), namely the sensors in subset \(\mathcal {S}_{\rho _{\ell },t}\). Letting \(\mathbf{p}_{j}(t+1):=[\mathbf{p}_{j,x}(t+1)\ \mathbf{p}_{j,y}(t+1)]^{T}\), notice that the trace cost in (18) is separable with respect to the position of each sensor j within subset \(\mathcal {S}_{\rho _{\ell },t}\). Thus, the position of sensor \(j\in \mathcal {S}_{\rho _{\ell },t}\) at time instant t+1 is determined by minimizing the corresponding summand in (18), i.e., the updated location of sensor \(j\in \mathcal {S}_{\rho _{\ell },t}\) can be found as
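As a sanity check, each summand in (18) is the squared Euclidean norm of the nonzero part of the gradient in (17). A minimal numerical sketch with hypothetical sensor and target coordinates:

```python
import numpy as np

# Hypothetical sensor position and predicted target position (assumed values).
p_sensor = np.array([3.0, 1.0])
p_target = np.array([1.0, 2.0])

diff = p_sensor - p_target
d2 = np.sum(diff ** 2)           # squared sensor-target distance

grad = 2.0 * diff / d2 ** 2      # nonzero entries of the gradient in (17)
summand = 4.0 / d2 ** 3          # corresponding summand in (18)

# ||grad||^2 = 4*d2 / d2^4 = 4 / d2^3, i.e., exactly the summand in (18).
assert np.isclose(np.dot(grad, grad), summand)
```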
$${} {{\begin{aligned} \mathbf{p}_{j}(t+1)&=\!\arg\text{min}_{\mathbf{p}_{j,x},\mathbf{p}_{j,y}} \frac{4}{\left(\left[\mathbf{p}_{j,x}-\hat{\mathbf{p}}_{\rho_{\ell},x}(t+1|t)\right]^{2}\!+\left[\mathbf{p}_{j,y}-\hat{\mathbf{p}}_{\rho_{\ell},y}(t+1|t)\right]^{2}\right)^{3}}\\ &\textrm{s. to} \left[\mathbf{p}_{j,x}-\hat{\mathbf{p}}_{\rho_{\ell},x}(t+1|t)\right]^{2}+\left[\mathbf{p}_{j,y}-\hat{\mathbf{p}}_{\rho_{\ell},y}(t+1|t)\right]^{2}<R^{2} \end{aligned}}} $$
(19)
Note that the inequality constraint in (19) ensures that the new location of each moving sensor \(j\in {\mathcal {S}}_{\rho _{\ell }}\) will be within distance R from the latest target location estimate \(\hat {\mathbf {p}}_{\rho _{\ell }}(t+1|t)\). This inequality further ensures that all sensors in \({\mathcal {S}}_{\rho _{\ell }}\) will move to new locations which are “close” to the target. Applying the triangle inequality to the new locations of two sensors j and j′ within \({\mathcal {S}}_{\rho _{\ell }}\) and using the constraint in (19), it turns out that the new locations satisfy
$$ \|\mathbf{p}_{j}(t+1)-\mathbf{p}_{j'}(t+1)\|_{2}\leq 2R, $$
(20)

which ensures that each subset \({\mathcal {S}}_{\rho _{\ell }}\) of moving sensors will stay connected, as long as the communication range of the sensing units is at least 2R. Thus, R can be set such that the moving sensors stay connected. Connectivity is necessary to elect a head sensor for each moving subset of sensors, which will acquire the measurements of all other sensors and perform clustering. Details of the algorithm are given in Section 4.1. Note that existing approaches do not entail mechanisms, such as the one introduced here, to ensure that the sensors stay connected.

Next, the modified barrier method (MBM) is utilized ([26], p. 423) to allow every sensor \(j\in {\mathcal {S}}_{\rho _{\ell }}\) to solve (19) and determine its next location. To this end, let \(f(\mathbf{p}_{j})\) denote the cost in (19) and \(g(\mathbf{p}_{j})\) the left-hand side of the inequality constraint in (19). MBM involves iteratively solving the following unconstrained minimization problem (where κ denotes the iteration index within time instant t+1):
$$\begin{array}{*{20}l} \mathbf{p}_{j}^{\kappa}(t+1)\in \arg\min_{\mathbf{p}_{j,x},\mathbf{p}_{j,y}} \left\{f(\mathbf{p}_{j})+\frac{\mu^{\kappa}}{c^{\kappa}}\phi\left[c^{\kappa}\cdot g(\mathbf{p}_{j})\right]\right\}, \end{array} $$
(21)
where the Lagrange multiplier-like scalar μ κ is updated as
$$\begin{array}{*{20}l} \mu^{\kappa+1}=\mu^{\kappa}\cdot\nabla\phi\left[c^{\kappa}\cdot g(\mathbf{p}_{j}^{\kappa}(t+1))\right], \end{array} $$
(22)
while the barrier function ϕ(τ) is chosen as the logarithmic function
$$\begin{array}{*{20}l} \phi(\tau) = -\ln(1-\tau) \end{array} $$
(23)
and c κ is a penalty parameter associated with the inequality constraint in (19) that is updated according to the recursion
$$\begin{array}{*{20}l} c^{\kappa}=\frac{\gamma^{\kappa}}{\mu^{\kappa}} \end{array} $$
(24)

where \(\{\gamma^{\kappa}\}\) is a positive, monotonically increasing scalar sequence [26].

For the logarithmic barrier function in (23), the updating recursion of the multipliers in (22) takes the form
$$\begin{array}{*{20}l} \mu^{\kappa+1}=\frac{\mu^{\kappa}}{1-c^{\kappa} g(\mathbf{p}_{j}^{\kappa}(t+1))}. \end{array} $$
(25)
Further, letting \(F(\mathbf {p}_{j}):=f(\mathbf {p}_{j})+\frac {\mu ^{\kappa }}{c^{\kappa }}\phi \left [c^{\kappa }\cdot g(\mathbf {p}_{j})\right ]\), the coordinates of the new sensor location p j (t+1) are updated during iteration κ according to the following gradient descent recursions
$$\begin{array}{*{20}l} &\mathbf{p}_{j,x}^{\kappa+1} (t+1)= \mathbf{p}_{j,x}^{\kappa}(t+1) -\Gamma\cdot\frac{d F(\mathbf{p}_{j})}{d\mathbf{p}_{j,x}}{\left|{\vphantom{\mathbf{s}_{\rho_{\ell_{1}}}}}\right.}_{\mathbf{p}_{j,x}=\mathbf{p}_{j,x}^{\kappa}(t+1)} \end{array} $$
(26)
$$\begin{array}{*{20}l} &\mathbf{p}_{j,y}^{\kappa+1}(t+1) = \mathbf{p}_{j,y}^{\kappa} (t+1)- \Gamma\cdot\frac{d F(\mathbf{p}_{j})}{d\mathbf{p}_{j,y}}{\left|{\vphantom{\mathbf{s}_{\rho_{\ell_{1}}}}}\right.}_{\mathbf{p}_{j,y}=\mathbf{p}_{j,y}^{\kappa}(t+1)} \end{array} $$
(27)
where Γ is the step size of the gradient descent method, while the derivatives in (26) and (27) are given as
$${} {{\begin{aligned} &\frac{d F(\mathbf{p}_{j})}{d\mathbf{p}_{j,x}}{\left|{\vphantom{\mathbf{s}_{\rho_{\ell_{1}}}}}\right.}_{\mathbf{p}_{j,x}=\mathbf{p}_{j,x}^{\kappa}(t+1)} \\ &= \frac{24\cdot \left[-\mathbf{p}_{j,x}^{\kappa}(t+1)+\hat{\mathbf{p}}_{\rho_{\ell},x}(t+1|t)\right]}{\left(\left[\mathbf{p}_{j,x}^{\kappa}(t+1)-\hat{\mathbf{p}}_{\rho_{\ell},x}(t+1|t)\right]^{2}+\left[\mathbf{p}_{j,y}^{\kappa}(t+1)-\hat{\mathbf{p}}_{\rho_{\ell},y}(t+1|t)\right]^{2}\right)^{4}} \\&\quad+ \frac{\mu^{\kappa}}{1-c^{\kappa}g(\mathbf{p}_{j}^{\kappa}(t+1))}\cdot 2\cdot\left(\mathbf{p}_{j,x}^{\kappa}(t+1)-\hat{\mathbf{p}}_{\rho_{\ell},x}(t+1|t)\right),\\ &\frac{d F(\mathbf{p}_{j})}{d\mathbf{p}_{j,y}}{\left|{\vphantom{\mathbf{s}_{\rho_{\ell_{1}}}}}\right.}_{\mathbf{p}_{j,y}=\mathbf{p}_{j,y}^{\kappa}(t+1)} \\ & =\frac{24\cdot \left[-\mathbf{p}_{j,y}^{\kappa}(t+1)+\hat{\mathbf{p}}_{\rho_{\ell},y}(t+1|t)\right]}{\left(\left[\mathbf{p}_{j,x}^{\kappa}(t+1)-\hat{\mathbf{p}}_{\rho_{\ell},x}(t+1|t)\right]^{2}+\left[\mathbf{p}_{j,y}^{\kappa}(t+1)-\hat{\mathbf{p}}_{\rho_{\ell},y}(t+1|t)\right]^{2}\right)^{4}} \\ &\quad + \frac{\mu^{\kappa}}{1-c^{\kappa}g\left(\mathbf{p}_{j}^{\kappa}(t+1)\right)}\cdot 2\cdot\left(\mathbf{p}_{j,y}^{\kappa}(t+1)-\hat{\mathbf{p}}_{\rho_{\ell},y}(t+1|t)\right). \end{aligned}}} $$
(28)

During time instant t+1, each sensor j within the subset \(\mathcal {S}_{\rho _{\ell },t}\) keeps updating its location until the cost function in (19) is reduced by less than a predefined threshold ε between two consecutive updating steps κ, κ+1. The location \(\mathbf{p}_{j}(t+1)\) is set to the last update \(\mathbf {p}_{j}^{K'}(t+1)\) obtained after K′ MBM iterations during time instant t+1. The following steps are carried out to determine each sensor's new location:
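The MBM loop (21)–(25) with the gradients in (28) can be sketched numerically for a single sensor. This is a minimal illustration rather than the authors' implementation: the shifted constraint \(g(\mathbf{p})=\|\mathbf{p}-\hat{\mathbf{p}}\|_{2}^{2}-R^{2}\) (so the constraint reads g<0), the normalized descent direction with backtracking used in place of the fixed step Γ, and all parameter values are assumptions made for numerical stability.

```python
import numpy as np

def cost_f(p, target):
    """Summand of (18): 4 / d^6, with d the sensor-to-target distance."""
    d2 = np.sum((p - target) ** 2)
    return 4.0 / d2 ** 3

def barrier_F(p, target, R, mu, c):
    """F(p) = f(p) + (mu/c) * phi(c*g(p)) with phi(tau) = -ln(1 - tau)."""
    g = np.sum((p - target) ** 2) - R ** 2   # shifted constraint: g(p) < 0
    if c * g >= 1.0:                          # outside the barrier's domain
        return np.inf
    return cost_f(p, target) - (mu / c) * np.log(1.0 - c * g)

def grad_F(p, target, R, mu, c):
    """Gradient of F, mirroring the derivatives in (28)."""
    diff = p - target
    d2 = np.sum(diff ** 2)
    g = d2 - R ** 2
    grad_f = -24.0 * diff / d2 ** 4           # gradient of 4 / d^6
    grad_bar = mu / (1.0 - c * g) * 2.0 * diff
    return grad_f + grad_bar

def mbm_sensor_update(p0, target, R, mu=1.0, gamma=5.0, outer=30, inner=60):
    """Outer loop: penalty/multiplier updates (24)-(25); inner loop: descent.
    p0 must start strictly inside the radius-R disk around the target estimate."""
    p = np.asarray(p0, dtype=float)
    for _ in range(outer):
        c = gamma / mu                        # penalty parameter, Eq. (24)
        for _ in range(inner):
            grad = grad_F(p, target, R, mu, c)
            norm = np.linalg.norm(grad)
            if norm < 1e-10:
                break
            direction = -grad / norm          # normalized descent direction
            alpha, F0 = 0.25 * R, barrier_F(p, target, R, mu, c)
            while barrier_F(p + alpha * direction, target, R, mu, c) >= F0:
                alpha *= 0.5                  # backtrack until F decreases
                if alpha < 1e-12:
                    break
            p = p + alpha * direction
        g = np.sum((p - target) ** 2) - R ** 2
        mu = mu / (1.0 - c * g)               # multiplier update, Eq. (25)
        gamma *= 1.5                          # increasing sequence {gamma^kappa}
    return p

# The sensor drifts outward toward the boundary of the radius-R disk around
# the predicted target position, where 4/d^6 is smallest subject to d < R.
p_new = mbm_sensor_update([0.3, 0.2], np.array([0.0, 0.0]), R=1.0)
```

Because the cost 4/d⁶ decreases with distance while the barrier blocks motion past the constraint, the iterates settle near distance R from the target estimate, consistent with the text's goal of keeping the moving sensor close to but not on top of the target.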

S1) The head sensor in each subset \({\mathcal {S}}_{\rho _{\ell },t}\) sends the predicted position estimate of target \(\rho_{\ell}\), namely \(\hat {\mathbf {p}}_{\rho _{\ell }}(t+1|t)\), to all sensors in \({\mathcal {S}}_{\rho _{\ell },t}\).

S2) Each sensor \(j\in \mathcal {S}_{\rho _{\ell },t}\) determines its new location using the MBM scheme. Sensors in \(\mathcal {S}_{\rho _{\ell },t}\) check their distances to other neighboring sensors and, if their future locations are too close, adjust their coordinates to avoid collisions when moving. Similarly, each moving sensor checks the distance between its updated location and the position estimate of target \(\rho_{\ell}\) and, if too close, adjusts its location such that a minimum distance is kept from the target. Specifically, if sensor j has an updated location \(\mathbf {p}^{K'}_{j}(t+1)\) which is too close to the already updated location of sensor j′, namely \(\mathbf {p}_{j'}(t+1)\), i.e., \(\|\mathbf {p}^{K'}_{j}(t+1)-\mathbf {p}_{j'}(t+1)\|_{2}\leq R_{\text {min},a}\), where \(R_{\text{min},a}\) is the minimum allowed separation between two sensors, then \(\mathbf {p}^{K'}_{j}(t+1)\) is updated as follows:
$$ \mathbf{p}_{j}(t+1)=\mathbf{p}_{j'}(t+1)+R_{\text{min},a}\frac{\mathbf{p}^{K'}_{j}(t+1)-\mathbf{p}_{j'}(t+1)}{\|\mathbf{p}^{K'}_{j}(t+1)-\mathbf{p}_{j'}(t+1)\|_{2}}. $$
(29)
Similarly, if sensor j has an updated location \(\mathbf {p}^{K'}_{j}(t+1)\) which is too close to the \(\rho_{\ell}\) target location estimate, i.e., \(\|\mathbf {p}^{K'}_{j}(t+1)-\hat {\mathbf {p}}_{\rho _{\ell }}(t+1|t)\|_{2}\leq R_{\text {min},b}\), where \(R_{\text{min},b}\) is the smallest distance at which a sensor can be placed from a target, then location \(\mathbf {p}^{K'}_{j}(t+1)\) is updated as follows:
$$ {\small{\begin{aligned} {}\mathbf{p}_{j}(t+1)\,=\,\hat{\mathbf{p}}_{\rho_{\ell}}(t+1|t)+R_{\text{min},b}\frac{\mathbf{p}^{K'}_{j}(t+1)-\hat{\mathbf{p}}_{\rho_{\ell}}(t+1|t)}{\|\mathbf{p}^{K'}_{j}(t+1)-\hat{\mathbf{p}}_{\rho_{\ell}}(t+1|t)\|_{2}}. \end{aligned}}} $$
(30)

The collision-avoidance position modification in (30) was proposed in [34] to prevent collision of unmanned aerial vehicles with a stationary target. The position updates in (29) and (30) ensure that the updated location is at distance \(R_{\text{min},a}\) from another moving sensor and \(R_{\text{min},b}\) from the target, respectively, satisfying the minimum distances required to prevent collisions.
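The adjustments (29) and (30) are the same radial projection onto a circle of minimum radius around an anchor point (a neighboring sensor or the target estimate, respectively). A minimal sketch, assuming the updated location and the anchor do not coincide:

```python
import numpy as np

def enforce_min_distance(p_new, anchor, r_min):
    """Project p_new radially so it sits at least r_min away from anchor,
    as in (29) (anchor = neighboring sensor) and (30) (anchor = target estimate)."""
    diff = p_new - anchor
    dist = np.linalg.norm(diff)
    if dist >= r_min:
        return p_new                           # already far enough; no change
    return anchor + r_min * diff / dist        # move out along the same direction

# Example: a sensor ends up 0.1 m from a neighbor; push it out to 0.5 m.
adjusted = enforce_min_distance(np.array([0.1, 0.0]), np.zeros(2), 0.5)
```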

The actual movement can be achieved using, for example, robotic sensors; see, e.g., [35, 36]. Each sensor \(j\in \mathcal {S}_{\rho _{\ell },t}\) updates its location \(\mathbf{p}_{j}(t+1)\) one at a time, in a coordinate-descent fashion, while the remaining sensors in \(\mathcal {S}_{\rho _{\ell },t}\) remain stationary, waiting for their turn to update their locations.

4 Algorithmic summary

4.1 Implementation

At the start-up stage, Q measurements are acquired rapidly enough that the initial number of targets r(0) can be assumed stationary. Utilizing the Q acquired data, the subsets of target-informative sensors \(\{{\mathcal {S}}_{\rho _{\ell },0}\}\) are initialized, where \(\ell =1,\ldots,\hat {r}(0)\) and \(\hat {r}(0)\) is the estimate of the number r(0) of sensed targets at time t=0 (the number of nonzero columns in the sparse matrix \(\hat {\mathcal {M}}_{0}\)). One sensor within each \({\mathcal {S}}_{\rho _{\ell },0}\) is randomly selected as the head sensor, which collects the measurements \(x_{j}(0)\) and positions \(\mathbf{p}_{j}(0)\) from all the other sensors \(j\in {\mathcal {S}}_{\rho _{\ell },0}\). Each head sensor \(C_{\rho _{\ell },0}\) averages the positions of the informative sensors in subset \({\mathcal {S}}_{\rho _{\ell },0}\) to obtain the initial position estimate of the corresponding target \(\rho_{\ell}\). The latter target location estimate, along with the informative measurements \(x_{j}(0)\) for \(j\in {\mathcal {S}}_{\rho _{\ell },0}\), is utilized to initialize the extended Kalman filtering recursions carrying out the target tracking in Section 3.2.

At time instant t, every head sensor \(C_{\rho _{\ell },t}\) has available the state estimate for active target \(\rho_{\ell}\), namely \(\hat {\mathbf {s}}_{\rho _{\ell }}(t|t)\), obtained from the EKF in Section 3.2. The target's estimated position \(\hat {\mathbf {p}}_{\rho _{\ell }}(t|t)\) is then used to select a group of “candidate informative” sensors, denoted as \({\mathcal {J}}_{\rho _{\ell },t}\), for target \(\rho_{\ell}\) at time instant t. This set is formed by having the head sensor transmit the estimated state \(\hat {\mathbf {s}}_{\rho _{\ell }}(t|t)\) to its single-hop neighboring sensors, which then transmit the same information to their own neighbors. Every sensor j that receives \(\hat {\mathbf {s}}_{\rho _{\ell }}(t|t)\) from a neighboring sensor subsequently forwards this estimate only to those sensors \(j'\in {\mathcal {N}}_{j}\) whose present position is within radius \(R_{s}\) of the estimated target location, i.e., \(\|\mathbf {p}_{j'}(t)-\hat {\mathbf {p}}_{\rho _{\ell }}(t|t)\|_{2}\leq R_{s}\). The parameter \(R_{s}\) can be set sufficiently large for all \(\rho_{\ell}\)-target-informative sensors to be incorporated in subset \({\mathcal {J}}_{\rho _{\ell },t}\). The sensor subset \({\mathcal {J}}_{\rho _{\ell },t}\) is connected by construction.
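The candidate-set construction above is essentially a breadth-first flood over the communication graph, gated by the distance test against the predicted target position. A minimal sketch with a hypothetical four-sensor chain topology:

```python
from collections import deque
import numpy as np

def candidate_set(head, neighbors, positions, target_est, Rs):
    """Flood the state estimate outward from the head sensor; a sensor joins
    the candidate set (and keeps forwarding) only if it lies within radius Rs
    of the estimated target location."""
    J = {head}
    frontier = deque([head])
    while frontier:
        j = frontier.popleft()
        for jp in neighbors[j]:
            if jp not in J and np.linalg.norm(positions[jp] - target_est) <= Rs:
                J.add(jp)
                frontier.append(jp)
    return J

# Hypothetical chain 0-1-2-3; sensor 3 is too far from the target estimate.
positions = {k: np.array(v, float) for k, v in
             {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (5, 0)}.items()}
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
J = candidate_set(0, neighbors, positions, np.zeros(2), Rs=3.0)
```

Because membership propagates only through connected, in-range neighbors, the returned set is connected by construction, matching the property stated in the text.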

Since not all sensors within the candidate subsets \({\mathcal {J}}_{\rho _{\ell },t}\) may be informative, the scheme in Section 3.1 is employed among the sensors in \({\mathcal {J}}_{\rho _{\ell },t}\) to determine the target-informative sensor subset \({\mathcal {S}}_{{\rho _{\ell }},{t+1}}\subseteq {\mathcal {J}}_{\rho _{\ell },t}\) for each active target. Rather than running the target-sensor association scheme in Section 3.1 over the whole sensor network, it is performed independently in the different sensor subsets \({\mathcal {J}}_{\rho _{\ell },t}\) associated with each target.

Once the subsets \({\mathcal {S}}_{{\rho _{\ell }},{t+1}}\) are found, the head sensor in each of these subsets is chosen to be the sensor closest to the estimated position of the corresponding target \(\rho_{\ell}\), i.e.,
$$C_{\rho_{\ell},{t+1}}=\arg\min_{j\in{\mathcal{S}}_{\rho_{\ell},{t+1}}}\|\mathbf{p}_{j}(t)-\hat{\mathbf{p}}_{\rho_{\ell}}(t|t)\|_{2}. $$
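This head election is a simple argmin over sensor-to-estimate distances; a minimal sketch with illustrative sensor IDs and positions:

```python
import numpy as np

def elect_head(subset_positions, target_est):
    """Return the sensor ID in the subset closest to the estimated target position."""
    return min(subset_positions,
               key=lambda j: np.linalg.norm(subset_positions[j] - target_est))

# Illustrative subset of three informative sensors (IDs and coordinates assumed).
subset = {7: np.array([2.0, 2.0]), 12: np.array([0.5, 0.4]), 3: np.array([1.0, 3.0])}
head = elect_head(subset, np.array([0.0, 0.0]))   # sensor 12 is closest
```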

The head sensor \(C_{\rho _{\ell },{t+1}}\) gathers the sensor measurements x j (t+1) from the informative sensors \(j\in {\mathcal {S}}_{\rho _{\ell },{t+1}}\) to carry out the extended Kalman filtering recursions at time instant t+1 as outlined in Section 3.2. Then, steps S1 and S2 in Section 3.3 are employed to allow all sensors in \({\mathcal {S}}_{\rho _{\ell },{t+1}}\) to determine and move to their new positions p j (t+1). Note that connectivity of the sensors in \({\mathcal {S}}_{\rho _{\ell },{t+1}}\) is preserved as explained in Section 3.3. The head sensor \(C_{\rho _{\ell },{t+1}}\) broadcasts the latest state estimate \(\hat {\mathbf {s}}_{\rho _{\ell }}(t+1|t+1)\) to its single-hop neighbors and repeats the process described earlier to update the candidate informative sensor subsets \({\mathcal {J}}_{\rho _{\ell },t+1}\).

It is worth mentioning that the kinematic rules implemented in Section 3.3 at each sensor are fully distributed since each sensor requires knowledge only of its location and the estimated target position obtained from the head sensor in \({\mathcal {S}}_{{\rho _{\ell }},{t+1}}\). Connectivity of the candidate informative subsets \({\mathcal {J}}_{\rho _{\ell },t+1}\) is ensured by construction irrespective of the sensor movement. This way the sensor-to-target association scheme in Section 3.1 can still be applied in \({\mathcal {J}}_{\rho _{\ell },t+1}\) and determine the informative sensors.

The target-informative sensor selection scheme in Section 3.1 may also need to be reapplied across the whole sensor network, since moving targets may disappear and no longer be sensed, while new targets may appear in different regions of the sensor network. The following conditions are checked to detect such events: (i) if any of the sensor subsets \({\mathcal {S}}_{\rho _{\ell },{t+1}}\) becomes empty, some of the previously sensed targets are no longer present; (ii) if, at t+1, the energy of a sensor not previously selected exceeds a certain threshold, most likely a new target has entered the sensed field. These two conditions signify that the target configuration has changed and that the sensor selection scheme in Section 3.1 needs to be reapplied across the sensor network to update the target-informative subsets. The novel tracking scheme is summarized as Algorithm 1.

4.2 Communication and computational expenses

The communication cost of the proposed algorithm is studied next. Inter-sensor communication takes place during (i) the sensor-to-target association scheme in Section 3.1; (ii) the EKF tracking steps in Section 3.2; and (iii) the application of the kinematic strategy in Section 3.3 to move the informative sensors. In detail, at time instant t, sensor j has to receive \(|{\mathcal {N}}_{j}|\) scalar measurements from its neighbors, namely \(\{x_{j'}(t+1)\}_{j'\in {\mathcal {N}}_{j}}\), to update \(\hat {\boldsymbol {\Sigma }}_{x,t+1}(j,j')\). Furthermore, to implement the association scheme in Section 3.1, each sensor j receives the updates \(\{\hat {\mathbf {M}}_{t+1}^{k-1}(j',\ell)\}_{\ell =1}^{L}\) from its neighborhood \({\mathcal {N}}_{j}\), corresponding to \(L|{\mathcal {N}}_{j}|\) scalars in total, to form its local updates \(\{\hat {\mathbf {M}}_{t+1}^{k}(j,\ell)\}_{\ell =1}^{L}\). Thus, sensor j receives \((L+1)|{\mathcal {N}}_{j}|\) scalars in total. Similarly, sensor j has to transmit \(x_{j}(t+1)\) and \(\{\hat {\mathbf {M}}_{t+1}^{k-1}(j,\ell)\}_{\ell =1}^{L}\), a total of L+1 scalars, to its neighbors per iteration k.

After the target-informative sensors are determined, each head sensor has to carry out the estimation process for the corresponding target's states. Thus, the head sensor \(C_{\rho _{\ell },t}\) collects the measurements \(x_{j}(t)\) from the sensors within \({\mathcal {S}}_{\rho _{\ell },t+1}\), which involves \(|{\mathcal {S}}_{\rho _{\ell },t+1}|\) scalar exchanges. Further, all sensors in the subset \({\mathcal {S}}_{\rho _{\ell },t+1}\) receive four scalars corresponding to the current state estimate. Once the state estimation process (Section 3.2) is carried out by the head sensors, sensor communication also occurs among the informative sensors when adjusting their new locations to avoid collisions with closely located sensors (Section 3.3). Specifically, sensor j receives \(2 |\mathcal {N}_{j}|\) scalars from its neighbors, corresponding to their two location coordinates, while it sends out its own location. It is worth mentioning that the communication complexity for each sensor is linear in its neighborhood size \(|\mathcal {N}_{j}|\) and in the upper bound L on the number of present targets. This linear cost advocates that the proposed framework is a communication-affordable distributed approach.
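The per-sensor counts above can be tallied as follows. This is simple bookkeeping of the scalar exchanges listed in the text, assuming a single association iteration k; the function name and argument layout are illustrative:

```python
def scalars_received(n_neighbors, L, is_head=False, subset_size=0):
    """Scalars a sensor receives in one time instant, per the counts above:
    association (L+1)|N_j| (one iteration k assumed), EKF (the head collects
    |S| measurements, other sensors receive the 4-D state estimate), and
    2|N_j| neighbor coordinates for the collision checks."""
    association = (L + 1) * n_neighbors
    ekf = subset_size if is_head else 4
    kinematics = 2 * n_neighbors
    return association + ekf + kinematics

# A non-head sensor with 5 neighbors and at most L = 3 targets:
total = scalars_received(n_neighbors=5, L=3)   # (3+1)*5 + 4 + 2*5 = 34
```

The tally grows linearly in both the neighborhood size and L, illustrating the linear communication cost claimed above.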

When applying the scheme in Section 3.1, during each coordinate cycle k and time instant t, each sensor j has to form the coefficients in (10) and (9) with a computational complexity of the order \({\mathcal {O}}(| {\mathcal {N}}_{j}|)\), while determining the roots of the two third-order polynomials in (10) and (9) involves computing the eigenvalues of two 3×3 companion matrices, whose complexity is fixed and independent of any algorithmic parameters. The EKF in Section 3.2 can be carried out with a complexity of the order \({\mathcal {O}}(K^{2}+|{\mathcal {S}}_{\rho }|)\), where K=4 here and \(|{\mathcal {S}}_{\rho }|\) corresponds to the size of the target-informative subsets. The kinematic rules implemented at the target-informative sensors in Section 3.3 have a computational complexity of \({\mathcal {O}}(K)\).

5 Simulations

The tracking performance of the novel scheme is tested in a network with p=80 sensors deployed randomly in the region [0,15]×[0,15] m². The tracking root mean-square error (RMSE) is studied and compared with the RMSE attained by the tracking schemes in [2, 12]. The comparison is done using one target, since the aforementioned existing approaches can handle only one target. Target ρ=1 is initialized at location [1.5,11.5] and moves with velocities of 0.15 and 0.1 m/s along the x-axis and y-axis, respectively. The tracking process is carried out for a total of 30 s, with the state noise and observation noise variances set to \({\sigma _{u}^{2}}=0.08\) and \({\sigma _{w}^{2}}=0.08\) (corresponding to a sensing SNR of 11 dB). Figure 1 depicts, in logarithmic scale for better display, the tracking RMSE of (i) the novel approach proposed here; (ii) the tracking scheme in [12]; and (iii) the tracking approach in [2]. Note that for all three tracking schemes, the initial position of the target is found by applying the sparse matrix decomposition scheme in Section 3.1, ensuring the same initial error for all three tracking approaches. As corroborated by Fig. 1, our tracking scheme exhibits the lowest tracking RMSE. The approach in [12] attains the worst performance since the sensors can only move on a grid, which reduces accuracy. The scheme in [2] performs worse than our approach since it lacks an informative sensor selection scheme, which forces all sensors to move and participate in the tracking process; this may reduce accuracy when noisy sensors are utilized.
Fig. 1
Fig. 1

Tracking RMSE versus time for a single-target setting
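For reference, the tracking RMSE reported in these figures can be computed from true and estimated position trajectories as follows. This is a generic sketch; the array shapes and the averaging over Monte Carlo runs are assumptions, not a description of the authors' exact evaluation code:

```python
import numpy as np

def tracking_rmse(true_pos, est_pos):
    """RMSE over Monte Carlo runs at each time instant.
    true_pos, est_pos: arrays of shape (runs, T, 2) holding x-y positions."""
    sq_err = np.sum((true_pos - est_pos) ** 2, axis=-1)  # squared distance per run/time
    return np.sqrt(np.mean(sq_err, axis=0))              # average over runs, per time

# Toy check: a constant 0.3 m offset along x gives an RMSE of 0.3 at every instant.
truth = np.zeros((10, 5, 2))
est = truth + np.array([0.3, 0.0])
rmse = tracking_rmse(truth, est)
```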

Figure 2 depicts the distance (in meters) between two moving sensors and the corresponding moving target during the 30-s tracking period. The distance decreases with time, which implies that the proposed approach allows the informative sensors to closely follow the target.
Fig. 2
Fig. 2

Distance between target and moving sensors versus time

Next, the performance of our novel tracking scheme is tested in a setting where the number of targets changes. Specifically, targets ρ=1,2 start moving at positions [1.5,11.5] and [5,7] and follow the dynamics in (1), with velocities of [0.15,0.1] and [0.4,0.13] m/s along the x-axis and y-axis, respectively. Targets ρ=1,2 move in the field during the time interval [1,30] s and then are not sensed anymore. In the interval [30,32] s, no targets are present in the field. Then, targets ρ=3,4 start at positions [6.1,4.8] and [9.0,4.0] and move according to the same state model as the first two during the time interval [32,50] s, but with velocities [0.03,0.35] and [0.25,−0.25] m/s. Again, no targets are present during the interval [50,52] s. Then, three targets ρ=5,6,7 start moving at positions [15,1.1], [13,13.5], and [12,6], according to (1), during the time interval [52,70] s, with velocities [−0.1,0] m/s for target ρ=5 and [0.12,−0.03] m/s for both ρ=6,7 along the x-axis and y-axis, respectively. Figure 3 depicts the original positions of the sensors, represented by blue circles.
Fig. 3
Fig. 3

Original sensor network topology (before applying kinematic rules to the sensors)

Figures 4, 5 and 6 show snapshots of the configuration of the targets and the moving sensors at different time instants. Details on the curves and coloring in these figures are given in the captions below them. From Fig. 4, it is clear that even though the first two targets move out of the original [0,15]×[0,15] region, both are still tracked well as some of the sensors follow them closely. Note that only the informative sensors, on average around 10 % of the total number of sensors, move according to the proposed kinematic rules in Section 3.3, while the majority of the sensors, which are not close to the moving targets, maintain their original positions. Similar conclusions can be drawn from Figs. 5 and 6, where different sets of targets move in the field. It should be emphasized that it is not known a priori that the target configuration changes at t=30 s and t=50 s. As discussed in Section 4, such changes can be detected by having all sensors self-check the energy level of their measurements, while the head sensors monitor whether the informative sensor subsets are empty.
Fig. 4
Fig. 4

Snapshot of the trajectories of targets and moving sensors at time instant t=30 s. Red solid curves correspond to the true target trajectories; blue dashed curves represent the estimated track. Blue stars indicate the starting locations of the targets, while the red crosses denote the initial locations of some moving sensors. The black dashed curves show the trajectories of some of the moving sensors during the tracking process

Fig. 5
Fig. 5

Snapshot of the trajectories of targets and moving sensors at time instant t=50 s. Red solid curves correspond to the true target trajectories; blue dashed curves represent the estimated track. Blue stars indicate the starting locations of the targets, while the red crosses denote the initial locations of some moving sensors. The black dashed curves show the trajectories of some of the moving sensors during the tracking process

Fig. 6
Fig. 6

Snapshot of the trajectories of targets and moving sensors at time instant t=70 s. Red solid curves correspond to the true target trajectories; blue dashed lines represent the estimated track. Blue stars indicate the starting locations of the targets, while the red crosses denote the initial locations of some moving sensors. The black dashed curves show the trajectories of some of the moving sensors during the tracking process

The tracking RMSE for the above setting is plotted in Fig. 7 in logarithmic scale. The proposed tracking framework exploiting sensor mobility is compared with a tracking scheme where the sensors are stationary. In the immobile sensor network, when the targets move away from the sensed field, the sensors acquire less and less reliable measurements, leading to the dramatic increase of the tracking RMSE (blue dashed curves). In contrast, the proposed framework enables sensors to closely follow the targets and achieve a much lower tracking RMSE. So even though the targets move out of the original sensed field, there is always a group of sensors staying adjacent to them that can be selected again to provide accurate measurements. Note that in Fig. 7, there are three discontinued curves, which correspond to the errors of tracking the three different groups of targets that appear and cease to exist in the monitored field at different time periods. This leads to the RMSE discontinuities at times t=31 s and t=51 s, since no targets are moving during those intervals and there is no need for tracking.
Fig. 7
Fig. 7

Tracking RMSE versus time for tracking the multiple objects in the setting depicted in Figs. 4, 5, and 6. Two different cases are studied where sensor mobility is utilized (red curves) and when sensors are immobile (blue curves)

Next, a tracking setting is considered with two targets, where one of the targets splits into two at a certain time. Similarly to the previous tracking scenario, two targets ρ=1,2 initialized at positions [1.6,11.5] and [5.4,7] (indicated by the blue stars) start moving according to the dynamics in (1), with velocities of \(\mathbf{v}_{1}=[v_{1,x},v_{1,y}]=[0.15,0.1]\) m/s and \(\mathbf{v}_{2}=[v_{2,x},v_{2,y}]=[0.4,0.13]\) m/s, respectively. At t=30 s, the second target stops moving while the first one splits into two targets. Target ρ=3 continues to move according to the dynamics of target ρ=1, while target ρ=4 moves with velocities v_x=0.4 m/s and v_y=−0.5 m/s along the x-axis and y-axis. The two new targets move during the time interval [31,42] s. The splitting point is indicated by the green star in Fig. 8. Figure 8 shows the trajectories of the targets and some moving sensors; details on the coloring and curve types can be found in the caption of Fig. 8. The target trajectories in Fig. 8 are depicted by blue dashed lines for the time interval [1,30] s and by blue crossed lines after t=30 s. When the sensors do not move, the violet estimated trajectories indicate that the split of the targets cannot be followed, while target ρ=1 cannot be tracked after a while since it moves away from the immobile sensors. When the kinematic rules in Section 3.3 are employed, informative sensors closely follow the targets, as depicted by the black dashed sensor trajectories. Note that the corresponding estimated red trajectories accurately follow the multiple targets present in the field. As before, the tracking RMSE (in logarithmic scale) is compared between the case where sensors cannot move and the case where the approach in Section 3.3 is applied. As Fig. 9 shows, our active tracking scheme outperforms the utilization of stationary sensors in terms of tracking accuracy. Notice in Fig. 9 that, after the target splits, tracking using stationary sensors performs much better than before the split. The reason is that in the second tracking phase (t=[31,42] s) more sensors are originally located close to the trajectories of the targets than in the first tracking phase (t=[1,30] s), which makes the tracking error much smaller than during the first 30 s, although the performance with stationary sensors is still worse than with our sensor mobility-based tracking scheme.
Fig. 8
Fig. 8

Tracking multiple objects; target trajectories and mobile sensor trajectories. Blue dashed curves and blue crossed curves correspond to the true target trajectories; red dashed curves represent the estimated track using our novel framework. Blue stars indicate the starting locations of the targets, while the red crosses represent the initial locations of some moving sensors. The black dashed curves show the trajectories of some of the moving sensors during the tracking process using the kinematic rules in Section 3.3. Violet colored curves correspond to estimated trajectories using immobile sensors, while the green star points to the position where the splitting of the targets takes place

Fig. 9
Fig. 9

Tracking RMSE versus time in the setting in Fig. 8 when sensor mobility is utilized (blue curves) and when sensors are immobile (red curves)

6 Conclusions

A novel framework combining sparse matrix factorization with proper kinematic rules enables multiple mobile sensors to track multiple targets. A norm-one/norm-two regularized matrix decomposition formulation is utilized to perform sensor-to-target association and select the target-informative sensors. Optimal kinematic rules are obtained by minimizing the covariances of parallel extended Kalman filters that track multiple targets using only target-informative sensors. The modified barrier method is utilized to obtain the sensors’ location updates while ensuring that the moving sensors remain connected. Numerical tests in multi-sensor networks corroborate that our novel scheme outperforms related approaches and accurately tracks multiple targets utilizing only a small percentage of moving sensors that closely follow the targets.

Declarations

Acknowledgements

Work in this paper is supported by NSF grant CCF 1218079, Air Force grant FA9550-15-1-0103, Simons Foundation Award 279870, and UTA.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
(2)
Department of Math, University of Tennessee at Knoxville, Knoxville, TN, USA

References

  1. R Silva, J Silva, F Boavida, Mobility in wireless sensor networks—survey and proposal. Elsevier Comput. Commun. 52, 1–20 (2014).
  2. P Yang, R Freeman, K Lynch, in Proc. of the 2007 IEEE Intl. Conf. on Robotics and Automation. Distributed Cooperative Active Sensing Using Consensus Filters (Rome, Italy, 2007), pp. 405–410.
  3. T Mukai, M Ishikawa, An active sensing method using estimated errors for multisensor fusion systems. IEEE Trans. Ind. Electron. 43(3), 380–386 (1996).
  4. T Chung, V Gupta, J Burdick, R Murray, in Proc. of the 43rd IEEE Conference on Decision and Control, vol. 2. On a Decentralized Active Sensing Strategy Using Mobile Sensor Platforms in a Network (Nassau, The Bahamas, 2004), pp. 1914–1919.
  5. A Gorji, M Menhaj, Multiple target tracking for mobile robots using the JPDAF algorithm. IEEE Int. Conf. Tools Artif. Intell. 1, 137–145 (2007).
  6. S Oh, S Russell, S Sastry, Markov chain Monte Carlo data association for multi-target tracking. IEEE Trans. Autom. Control 54, 481–497 (2009).
  7. J Vermaak, S Godsill, P Perez, Monte Carlo filtering for multi target tracking and data association. IEEE Trans. Aerosp. Electron. Syst. 41(1), 309–332 (2005).
  8. C Hue, J-P Le Cadre, P Perez, Tracking multiple objects with particle filtering. IEEE Trans. Aerosp. Electron. Syst. 38(3), 1457–1469 (2002).
  9. FS Cattivelli, AH Sayed, Diffusion strategies for distributed Kalman filtering and smoothing. IEEE Trans. Autom. Control 55(9), 2069–2084 (2010).
  10. NF Sandell, R Olfati-Saber, in Proc. of IEEE Conf. on Decision and Control. Distributed Data Association for Multi-target Tracking in Sensor Networks (IEEE, Cancun, Mexico, 2008), pp. 1085–1090.
  11. J Lin, W Xiao, FL Lewis, L Xie, Energy-efficient distributed adaptive multisensor scheduling for target tracking in wireless sensor networks. IEEE Trans. Instrum. Meas. 58(6), 1886–1896 (2009).
  12. Y Zou, K Chakrabarty, Distributed mobility management for target tracking in mobile sensor networks. IEEE Trans. Mob. Comput. 6(8), 872–887 (2007).
  13. K Zhou, S Roumeliotis, Multirobot active target tracking with combinations of relative observations. IEEE Trans. Robot. 27(4), 678–695 (2011).
  14. K Lynch, I Schwartz, P Yang, R Freeman, Decentralized environmental modeling by mobile sensor networks. IEEE Trans. Robot. 24(3), 710–724 (2008).
  15. Y Oshman, P Davidson, Optimization of observer trajectories for bearings-only target localization. IEEE Trans. Aerosp. Electron. Syst. 35(3), 892–902 (1999).
  16. K Doğançay, UAV path planning for passive emitter localization. IEEE Trans. Aerosp. Electron. Syst. 48(2), 1150–1166 (2012).
  17. G Keung, B Li, Q Zhang, H Yang, in Proc. of the 2011 Global Telecommunications Conference. The Target Tracking in Mobile Sensor Networks (Houston, TX, 2011), pp. 1–5.
  18. H La, W Sheng, Dynamic target tracking and observing in a mobile sensor network. Elsevier Robot. Auton. Syst. 60(7), 996–1009 (2012).
  19. A Elmogy, F Karray, Cooperative multi-target tracking using multi-sensor network. Int. J. Smart Sens. Intell. Syst. 1(3), 716–734 (2008).
  20. Y Fu, L Yang, Sensor mobility control for multitarget tracking in mobile sensor networks. Int. J. Distrib. Sens. Netw. 2014(3), 1–15 (2014).
  21. R Tibshirani, Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 58(1), 267–288 (1996).
  22. H Zou, T Hastie, R Tibshirani, Sparse principal component analysis. J. Comput. Graph. Stat. 15(2), 265–286 (2006).
  23. I Schizas, Distributed informative-sensor identification via sparsity-aware matrix decomposition. IEEE Trans. Sig. Proc. 61(18), 4610–4624 (2013).
  24. G Ren, V Maroulas, I Schizas, Distributed spatio-temporal association and tracking of multiple targets using multiple sensors. IEEE Trans. Aerosp. Electron. Syst. 51(4), 2570–2589 (2015).
  25. P Tseng, Convergence of a block coordinate descent method for nondifferentiable minimization. J. Opt. Theory Appl. 109(3), 475–494 (2001).
  26. D Bertsekas, Nonlinear Programming, 2nd edn. (Athena Scientific, Nashua, 2003).
  27. S Kay, Fundamentals of Statistical Signal Processing: Estimation Theory (Prentice Hall, New Jersey, 1993).
  28. Y Bar-Shalom, X Li, T Kirubarajan, Estimation with Applications to Tracking and Navigation (Wiley, New York, 2001).
  29. A Goldsmith, Wireless Communications (Cambridge University Press, New York, 2005).
  30. JG Proakis, M Salehi, Digital Communications, 5th edn. (McGraw-Hill, New York, 2008).
  31. SS Haykin, Adaptive Filter Theory (Pearson Education India, India, 2008).
  32. M Yuan, Y Lin, Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B Stat. Methodol. 68(1), 49–67 (2006).
  33. R Horn, C Johnson, Matrix Analysis (Cambridge Univ. Press, Cambridge, 1985).
  34. K Dogancay, Online optimization of receiver trajectories for scan-based emitter localization. IEEE Trans. Aerosp. Electron. Syst. 43(3), 1117–1125 (2007).
  35. J Borenstein, L Feng, H Everett, Navigating Mobile Robots: Systems and Techniques (AK Peters, Ltd., Wellesley, 1996).
  36. B Barshan, H Durrant-Whyte, Inertial navigation systems for mobile robots. IEEE Trans. Robot. Autom. 11(3), 328–342 (1995).

Copyright

© Ren et al. 2016
