Functional quantizer design for source localization in sensor networks
EURASIP Journal on Advances in Signal Processing volume 2013, Article number: 151 (2013)
Abstract
In this paper, we address the problem of quantizer design optimized for a source localization application in acoustic sensor networks where physically separated sensors make measurements of acoustic signal energy, quantize them, and transmit the quantized data to a fusion node, which then produces an estimate of the source location. We propose an iterative regular quantizer design algorithm that minimizes the localization error. To construct regular quantization partitions, we suggest the average distance error as a metric in the functional quantization since the distance is monotonic in each sensor reading. Furthermore, to guarantee minimization of the localization error, we propose a new technique to update the codewords and prove that the localization error can be reduced at each iteration while the average distance error remains nonincreasing by applying our update technique. Our experiments show that our proposed algorithm yields significantly improved performance as compared with traditional quantizer designs.
1 Introduction
In sensor networks, a large number of low-cost sensors, each equipped with a processor, a low-power communication transceiver, and one or more sensing capabilities, are deployed in a sensor field. Each sensor operates on a limited amount of battery energy, which is consumed mostly by wireless communication between sensors. Network lifetime is a crucial concern for sensor networks, and the basic strategy for prolonging it is to decrease the communication cost at the expense of additional computation in the sensors [1]. This motivates the use of data compression for various tasks such as detection, classification, localization, and tracking, where collected and processed data are exchanged between sensors.
Quantization of measurements arises in practical estimation systems where estimation algorithms operate on quantized data. Thus, efficient quantization is needed in order to achieve improvements in rate-distortion performance. For example, the authors in [2] considered a source localization system where each sensor measures the signal energy, quantizes it, and sends the quantized sensor reading to a fusion node where the localization is performed. In this framework, the maximum likelihood (ML) estimation problem based on quantized data was addressed and the Cramér-Rao bound (CRB) was derived for comparison, assuming that each sensor used identical (uniform) quantizers. In [3], heuristic quantization schemes were proposed in order to assign quantizers to each sensor without taking into account the sensor locations. However, it should be noted that if the sensor locations are known during the quantizer design process, significant performance gain can be achieved with respect to simple uniform quantization at all sensors. This raises the problem of quantizer design optimized for distributed estimation systems, where the goal is to design independently operating quantizers that minimize a global metric such as the estimation error (a function of all sensor readings).
To this end, a cooperative design-separate encoding approach was suggested for a decentralized hypothesis testing system [4, 5], where a distributional distance was used as the criterion for quantizer design in order to yield a manageable design procedure. For a distributed detection system, the authors in [6] proposed a heuristic procedure for quantizer design that minimizes an upper bound on the probability of error. Lam and Reibman [7] constructed optimal quantization partitions in distributed estimation systems, where the necessary conditions for the partitions were presented. In the high-resolution regime, asymptotic quantizer designs for distributed estimation were derived in [8, 9] by using the limiting density of quantizer partitions. In [10], the necessary conditions for optimal quantization rules and linear estimation fusion rules were derived and shown to be searchable simultaneously by an iterative algorithm. An iterative quantizer design algorithm was presented in the Lloyd algorithm framework and evaluated for nonideal channels in [11]. It was shown that the resulting distributed scalar quantizers should be nonregular, implying that the same codeword is assigned to several disjoint intervals in order to reduce the distortion.
Although these quantizer design algorithms were developed for sensor networks in which all sensors send their quantized data directly to a fusion node without communicating with each other (no routing), the design problem can also be addressed for general network topologies. In [12], the authors incorporated network topology into compression system design and presented a design algorithm for locally optimal vector quantizers that achieves improvements in rate-distortion performance and system functionality. In [13], an algorithm for optimal rate allocation to sensors was presented, given a multi-hop routing tree from sensors to a fusion node, in order to minimize the amount of transmission energy. In addition, since selecting a proper subset of sensors and optimizing the routing structure can lead to important power savings, an iterative algorithm was proposed in [14] for joint optimization of sensor selection and routing for distributed estimation.
In this work, we consider a source localization system in acoustic sensor networks (one type of distributed estimation system) in which distributed sensors measure acoustic source signals, quantize them, and send them to a fusion node, which estimates the source location based on the quantized sensor readings. We seek to design independently operating regular quantizers that minimize the localization error. We also take the cooperative design-separate encoding strategy and propose an iterative quantizer design algorithm similar to the cyclic generalized Lloyd algorithm. The challenge here is that since the Lloyd algorithm was devised for quantizer design with a local metric (e.g., the reconstruction error of local sensor readings) as the cost function, simply replacing it by a global metric may cause problems. More specifically, the quantizer update at each step, consisting of the Voronoi region construction and the subsequent computation of codewords based on the Voronoi partitions (the two main tasks in the typical Lloyd design), would not generally produce regular quantization partitions and cannot guarantee that the global metric will not increase. To tackle these problems, the authors in [15–17] adopted a simple distance rule to construct regular quantization partitions and proposed a weighted sum of the two metrics as the cost function (i.e., local + λ × global, λ ≥ 0), along with a search for proper weights. The use of the weighted metric is motivated by the observation that there always exists a certain λ such that the cost function remains nonincreasing under the Lloyd iterations (e.g., λ = 0 always leads to a nonincreasing cost function, although there typically exist multiple nonzero values of λ with the same property).
It should be noticed that regular quantization partitions can also be constructed by functional quantization of a monotonic estimator (e.g., a linear minimum mean square error (MMSE) estimator), since each quantization partition for the estimator transforms to a regular one for the quantizer at each sensor due to its monotonicity in each sensor reading [9]. In this work, we suggest a new metric to be minimized in order to obtain regular quantization partitions in the functional quantization framework. Specifically, we propose the average distance error as the metric, since functional quantization of the distance allows us to generate regular quantization partitions, owing to the monotonicity of the distance with respect to each sensor reading, even when a nonlinear MMSE estimator of the source location is employed. Notice that we focus on the design of regular independent scalar quantizers that minimize the global metric. In [18], it was shown that nonregular independent quantizers can be systematically designed by applying the distributed encoding algorithm to regular quantizers; that is, substantial performance gain could be further achieved after the design of regular quantizers by merging their nonadjacent quantization bins in a distributed manner.
Obviously, minimizing the average distance error does not necessarily lead to minimization of the localization error. We therefore develop a new technique for computing the codewords and prove that, with the proposed technique, the localization error can be reduced at each iteration while the average distance error remains nonincreasing. We demonstrate through extensive experiments that our iterative design algorithm achieves significant performance improvement over typical design techniques such as uniform quantizers and Lloyd quantizers. We also evaluate the proposed algorithm by comparing it with the previous work [17], which recently proposed a novel quantizer design technique optimized for source localization in acoustic sensor networks, the application considered in this work. The benefit of the proposed algorithm is illustrated by analysis and experiments showing similar localization performance at much lower complexity^{a}. The main contributions of this paper are twofold: first, we define a monotonic cost function to be minimized for regular quantizer design, present an iterative quantizer design algorithm based on functional quantization, and propose a codeword update technique that guarantees minimization of the localization error. We believe that our approach is applicable to general cases where sensors measure information that is a function of distance. Second, although the proposed algorithm has no obvious advantage over the previous work in [17] in terms of localization performance, the nature of our algorithm yields a substantial reduction in computational complexity. The complexity analysis shows that the benefit of the proposed algorithm becomes more significant as the rate and/or the number of sensors increases.
In this paper, we assume that each sensor can estimate noise-corrupted acoustic signal energy from actual measurements (e.g., time series measurements). We also assume that there is only one-way communication from the sensors to the fusion node, i.e., there is no feedback channel, the sensors do not communicate with each other (no relay between sensors), and these communication links are reliable.
This paper is organized as follows. The problem formulation of the quantizer design is given in Section 2. A brief description of functional quantization is provided in Section 2.1. An iterative quantizer design algorithm is explained in detail in Section 3 and summarized in Section 3.1. The complexity analysis of the proposed algorithm is discussed in Section 3.2. Simulation results are given in Section 4, and the conclusions are found in Section 5.
2 Problem formulation
Consider a sensor field S ⊂ R^{2} in which M sensors are located at known spatial locations, denoted by x_{i}, i = 1,…,M. The sensors measure an acoustic signal energy emitted from a source located at an unknown location x ∈ S, assumed to be static during the localization process. In collecting acoustic energy readings, it is assumed that each sensor adopts the energy decay sensor model proposed in [19], so the signal energy measured at sensor i over a given time interval k can be expressed as follows:

$${z}_{i}(\mathbf{x},k)={g}_{i}\frac{a}{\parallel \mathbf{x}-{\mathbf{x}}_{i}{\parallel}^{\alpha}}+{w}_{i}(k),\quad i=1,\dots,M, \qquad (1)$$
where z_{i} is the acoustic energy reading at sensor i, and the model parameters consist of the gain factor g_{i} of the i-th sensor, an energy decay factor α, which is approximately equal to 2, and the source signal energy a measured 1 m from the source, which is assumed to be uniformly distributed over the range [a_{min}, a_{max}]. It is also assumed that the measurement noise term w_{i}(k) can be approximated by a normal distribution $N(0,\sigma_i^2)$. Note that the energy decay model was verified by the field experiment in [19] and was also used in [20–22].
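As a quick illustration of the energy decay model, the following sketch simulates a single sensor reading (the function and parameter names are ours, and we assume the standard form $z_i = g_i\, a / \|\mathbf{x}-\mathbf{x}_i\|^{\alpha} + w_i$ of the model in [19]):

```python
import numpy as np

def sensor_energy(x_src, x_sensor, a=50.0, g=1.0, alpha=2.0, sigma=0.0, rng=None):
    """One acoustic energy reading under the decay model of [19]:
    z = g * a / ||x - x_i||^alpha + w,  with w ~ N(0, sigma^2)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    d = np.linalg.norm(np.asarray(x_src, float) - np.asarray(x_sensor, float))
    return g * a / d ** alpha + sigma * rng.standard_normal()

# Noiseless reading at a sensor 5 m from a source with a = 50, alpha = 2:
z = sensor_energy([0.0, 0.0], [3.0, 4.0])  # 50 / 5**2 = 2.0
```

Note how quickly the reading decays with distance for α = 2, which is why the dynamic range of each quantizer matters in the design below.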
In this paper, we consider source localization based on quantized sensor readings, where the i-th sensor uses an R_{i}-bit quantizer with a dynamic range [z_{i,min}, z_{i,max}], assumed to be selected based on desirable properties of its sensing range (see [17] for the details). We denote by α_{i}(·) the encoder at sensor i, which generates a quantization index $Q_i \in I_i = \{1,\dots,L_i\}$, where $L_i = 2^{R_i}$. In what follows, Q_{i} will also be used to denote the quantization bin to which the measurement z_{i} belongs. Each sensor captures its measurement z_{i}(x,k) at time interval k, quantizes it, and sends it to a fusion node, where all sensor readings are used to obtain an estimate $\widehat{\mathbf{x}}$ of the source location^{b}. Note that in some cases one measurement per sensor is used for localization; in other cases, when multiple measurements (i.e., z_{i}(x,k) for several k's) can be made at each sensor, a sufficient statistic for localization can be computed from the multiple measurements before being quantized and transmitted.
2.1 Functional quantization of monotonic estimators
Suppose that we are given the estimator $\widehat{\mathbf{x}}=g(z_1,\dots,z_M)\equiv g(\mathbf{z}_1^M)$, monotonic in each of the sensor readings (e.g., a linear MMSE estimator), where $(z_1,\dots,z_M)$ is abbreviated as $\mathbf{z}_1^M$ for notational simplicity. In this case, functional quantization can be applied to design quantizers that minimize the criterion $E\|\widehat{\mathbf{x}}-\widehat{\mathbf{x}}^{Q}\|^2 = E\|g(\mathbf{z}_1^M)-\hat{g}(\widehat{\mathbf{z}}_1^M)\|^2$, where $\widehat{\mathbf{x}}^{Q}=\hat{g}(\cdot)$ is the estimator employed at the fusion node that operates on the quantized sensor readings $\widehat{z}_i, i=1,\dots,M$, and $\widehat{z}_i$ is the reconstruction value transmitted from sensor i when $z_i \in Q_i$. Without loss of optimality, we can find $z_1^*,\dots,z_M^*$, $z_i^* \in Q_i$, such that $g(z_1^*,\dots,z_M^*)=\hat{g}(\widehat{\mathbf{z}}_1^M)$ by the intermediate value theorem. Notice that functional quantization focuses on minimization of $E\|\widehat{\mathbf{x}}-\widehat{\mathbf{x}}^{Q}\|^2$ rather than $E\|\mathbf{x}-\widehat{\mathbf{x}}^{Q}\|^2$. Clearly, as M becomes large, $E\|\widehat{\mathbf{x}}-\widehat{\mathbf{x}}^{Q}\|^2 \approx E\|\mathbf{x}-\widehat{\mathbf{x}}^{Q}\|^2$.
We first consider the functional quantizer design at sensor i in the Lloyd design framework. We are initially given the reconstruction values (or codewords) $\hat{g}_i^{\,j}$ corresponding to the j-th functional quantization partition ${V_g}_i^j$ of the range of the estimator g(·). The Voronoi region construction and the codeword computation, the two main tasks in the algorithm, are conducted as follows:
As in the standard Lloyd algorithm, ${V_g}_i^j$ clearly forms regular partitions, and these tasks are repeated with $\hat{g}_i^j=\hat{g}_i^{*j}$ for the next iteration until a certain stopping criterion is satisfied.
Now, we can easily obtain the Voronoi regions for quantizer design at sensor i:
We can also compute the codeword $\widehat{z}_i^j$ from $\hat{g}_i^j=\hat{g}(\widehat{z}_i^j)$. It should be noticed that $V_i^j$, j = 1,…,L_{i}, are regular partitions, since ${V_g}_i^j$ transforms to a regular partition for z_{i} due to the monotonicity of $g(\mathbf{z}_1^M)$ with respect to z_{i}.
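Computing the codeword from $\hat{g}_i^j=\hat{g}(\widehat{z}_i^j)$ amounts to inverting a monotone scalar map, which can be done numerically by bisection. A minimal sketch (illustrative function names, assuming the map is continuous and strictly monotone on the search interval):

```python
def invert_monotone(f, y, lo, hi, tol=1e-10):
    """Solve f(z) = y for z on [lo, hi] by bisection; f must be continuous
    and strictly monotone (increasing or decreasing) on the interval."""
    increasing = f(lo) < f(hi)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        # Keep the half-interval that still brackets the solution.
        if (f(mid) < y) == increasing:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# A monotone decreasing map, like energy versus distance: z -> 50 / z.
z_codeword = invert_monotone(lambda z: 50.0 / z, y=10.0, lo=1.0, hi=100.0)
```

The same routine covers both monotone increasing and decreasing estimators, which is all the regularity argument above requires.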
3 Functional quantizer design algorithm
First, we consider the average distance error at sensor i to be minimized for quantizer design as follows:
where $Q_i^j$ is the j-th quantization partition at sensor i, $\widehat{r}_i=\|\widehat{\mathbf{x}}-\mathbf{x}_i\|=\|g(\mathbf{z}_1^M)-\mathbf{x}_i\|$ is the distance between the source and the i-th sensor estimated using unquantized sensor readings, and $\widehat{r}_i^{Q}=\|\widehat{\mathbf{x}}^{Q}-\mathbf{x}_i\|=\|\hat{g}(\widehat{\mathbf{z}}_1^M)-\mathbf{x}_i\|$ is the estimated distance when quantized sensor readings are involved. Note that $g(\mathbf{z}_1^M)$ can be any good estimator^{c}. In order to incorporate the metric $E_{\mathbf{x}} J_i$ into the design process, we find $\widehat{r}_i^j \in \{\widehat{r}_i \mid z_i(\mathbf{x}) \in Q_i^j\}$ such that $E_{\mathbf{x}}[|\widehat{r}_i-\widehat{r}_i^{Q}|^2 \mid z_i(\mathbf{x})\in Q_i^j] \approx E_{\mathbf{x}}[|\widehat{r}_i-\widehat{r}_i^j|^2 \mid z_i(\mathbf{x})\in Q_i^j]$, $j=1,\dots,L_i$. The approximation is reasonable since $\widehat{r}_i^{Q} \in \{\widehat{r}_i \mid z_i(\mathbf{x}) \in Q_i^j\}$ and, at high rate, we can easily choose $\widehat{r}_i^j$ close to $\widehat{r}_i^{Q}$ as the partition $Q_i^j$ becomes small. Clearly, this allows us to avoid calculating $\widehat{\mathbf{x}}^{Q}=\hat{g}(\widehat{\mathbf{z}}_1^M)$ in the iterative loop (for the details, see steps 2 to 6 of the algorithm in Section 3.1), enabling fast operation. The metric can be minimized by taking the centroid of $\widehat{r}_i$ over each partition $Q_i^j$ as $\widehat{r}_i^j$. Formally,
We can expect to reduce the localization error by minimizing $E_{\mathbf{x}} J_i$, i = 1,…,M, at each sensor since, as the accuracy of the range information improves, the fusion node will be able to estimate source locations with higher precision.
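The centroid codewords above can be estimated from a training set as per-bin conditional means of the estimated distances. A small sketch (our own names; the toy map $\widehat{r}=1/\sqrt{z}$ merely stands in for a monotone decreasing distance/energy relation):

```python
import numpy as np

def distance_centroids(z, r_hat, edges):
    """Codewords minimizing E[(r_hat - r^j)^2 | z in Q^j]: the conditional
    mean (centroid) of r_hat over each quantization bin of z.
    `edges` holds the interior edges of a regular partition of z."""
    bins = np.digitize(z, edges)                 # bin index of each sample
    L = len(edges) + 1
    return np.array([r_hat[bins == j].mean() for j in range(L)])

# Toy training set: r_hat = 1/sqrt(z) is monotone decreasing in z.
z = np.array([0.5, 1.0, 2.0, 4.0])
codes = distance_centroids(z, 1.0 / np.sqrt(z), edges=[1.5])
```

Because the distances are precomputed once from the training set, no MMSE estimation is needed inside this computation.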
Now, we describe this consideration in the functional quantization framework, where the distance error $J_i=|\widehat{r}_i-\widehat{r}_i^{Q}|^2$ is minimized for quantizer design:
Note that this metric has an important property that is essential for regular quantizer design; that is, the distance $\widehat{r}_i$ is monotonically decreasing in the sensor reading $z_i$. As explained in (2) and (3), this monotonicity allows us to always construct the regular quantization partition $V_i=\{V_i^j, j=1,\dots,L_i\}$ for $z_i$:
where ${V_r}_i^j$ is the j-th functional quantization partition for the distance $\widehat{r}_i$ and $V_i^j$ is the corresponding region for $z_i$, consisting of the i-th sensor readings that would minimize the metric if assigned to the j-th quantization bin. Thus, as in the standard Lloyd design algorithm, the construction of the quantization partitions $V_i^j$ in (6) and the codeword computation in (5) clearly reduce the average distance error $E(J_i)$ at each iteration, leading to convergence. However, simply minimizing this metric does not guarantee minimization of the localization error. In this work, we prove that the localization error is also reduced at each iteration by applying a new technique for the codeword computation of $\widehat{r}_i^j$, which is developed in what follows.
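Because $\widehat{r}_i$ is monotone decreasing in $z_i$, bin edges chosen in the distance domain map to interval (regular) bin edges for $z_i$, with their order reversed. A minimal sketch under the noiseless decay model (an assumption made here purely for illustration):

```python
def z_partition_from_r(r_edges, z_of_r):
    """Map functional bin edges for the distance r_hat back to bin edges
    for the sensor reading z. Since r_hat is monotone DECREASING in z,
    the inverse map reverses the edge order; sorting restores a regular
    (interval) partition of z."""
    return sorted(z_of_r(r) for r in r_edges)

# Under the noiseless decay model z = a / r**alpha (a = 50, alpha = 2),
# the inverse map is z(r) = a / r**2:
a, alpha = 50.0, 2.0
z_edges = z_partition_from_r([2.0, 3.0, 4.0], lambda r: a / r ** alpha)
# z_edges is an increasing list of interval edges for z
```

Each distance interval thus corresponds to exactly one interval of sensor readings, which is precisely the regularity property used above.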
Lemma 1. The localization error (LE) $E_{\mathbf{x}}\|\widehat{\mathbf{x}}-\widehat{\mathbf{x}}^{Q}\|^2$ is minimized by using the codewords ${\hat{r}_{\text{LE}}}_i^j$ given by
where θ(x) is the angle between $\widehat{\mathbf{x}}$ and ${\widehat{\mathbf{x}}}^{Q}$.
Proof.
□
Here, the second term in (8) is irrelevant to the quantization process and is denoted by the constant C. Obviously, ${\hat{r}_{\text{LE}}}_i^j$ can be computed by taking the centroid over $Q_i^j$, which is given by $E_{\mathbf{x}}[\widehat{r}_i \cos\theta(\mathbf{x}) \mid z_i(\mathbf{x})\in Q_i^j]$.
Note that if one attempts to design quantizers that minimize the LE by using $\{{\hat{r}_{\text{LE}}}_i^j\}$ for the codeword computation, the design process fails to converge. The challenge here is that we should be able to update the codeword at the next iteration, denoted by $\widehat{r}_i^{*j}$, such that the localization error is reduced while the average distance error remains nonincreasing at each iteration. First, we easily show from (5) and (7) that there exists the relation
Next, we prove that the excess LE and the excess average distance error incurred by using $\widehat{r}_i^j$ are given by the squared distances from the respective optimal values, i.e., $|{\hat{r}_{\text{LE}}}_i^j-\widehat{r}_i^j|^2$ and $|{\hat{r}_R}_i^j-\widehat{r}_i^j|^2$, respectively.
Lemma 2. Let $\text{LE}(\widehat{r}_i^j)$ be the localization error computed by using $\widehat{r}_i^j$. Then, $\Delta\text{LE}(\widehat{r}_i^{\,j}) \equiv \text{LE}(\widehat{r}_i^{\,j}) - \text{LE}({\hat{r}_{\text{LE}}}_i^j)$ is given by $\sum_{j=1}^{L_i}|{\hat{r}_{\text{LE}}}_i^j-\widehat{r}_i^j|^2$. Similarly, let $\text{DE}(\widehat{r}_i^j)$ be the average distance error computed by using $\widehat{r}_i^j$. Then, $\Delta\text{DE}(\widehat{r}_i^j) \equiv \text{DE}(\widehat{r}_i^j) - \text{DE}({\hat{r}_R}_i^j)$ is given by $\sum_{j=1}^{L_i}|{\hat{r}_R}_i^j-\widehat{r}_i^j|^2$.
Proof. Let $\widehat{r}_i^j={\hat{r}_{\text{LE}}}_i^j+\Delta r_i^j$. Then we have
where (12) follows from (9) and the second term in (13) equals zero from (7). For the case of the average distance error, similar manipulation can be easily applied to derive the corresponding relation. □
Now, we are in a position to prove the theorem that states how to update the codewords $\widehat{r}_i^{*j}$.
Theorem 3. Let $\widehat{r}_i^j$ be the codewords at the current iteration and $\widehat{r}_i^{*j}$ those at the next iteration. Suppose that the average distance error is minimized in the generalized Lloyd framework, where the Voronoi region is constructed using (6) and $\widehat{r}_i^{*j}$ corresponding to the j-th region is computed as follows:
Then, the quantizers updated from ${\widehat{r}}_{i}^{\ast j}$ will not increase the localization error while the average distance error remains nonincreasing at each iteration.
Proof. If $\widehat{r}_i^j$ lies between ${\hat{r}_{\text{LE}}}_i^j$ and ${\hat{r}_R}_i^j$, it is kept unchanged by setting $\widehat{r}_i^{*j}=\widehat{r}_i^j$, implying that neither the metric nor the LE will increase. If $\widehat{r}_i^j \le {\hat{r}_{\text{LE}}}_i^j$, then $\widehat{r}_i^{*j}={\hat{r}_{\text{LE}}}_i^j$ will reduce the localization error by Lemma 1. It is easy to see from (11) that $\widehat{r}_i^{*j}$ is then closer to ${\hat{r}_R}_i^j$ than $\widehat{r}_i^j$; thus, the distance error also decreases by Lemma 2. Similarly, if $\widehat{r}_i^j \ge {\hat{r}_R}_i^j$, then $\widehat{r}_i^{*j}={\hat{r}_R}_i^j$ reduces both the metric and the LE. Therefore, we conclude that computing $\widehat{r}_i^{*j}$ as in the theorem will not increase the LE at each iteration while guaranteeing the convergence of the design algorithm. □
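In code, the update of Theorem 3 is a clamp of the current codeword to the interval spanned by the two centroids (a sketch with hypothetical names; `r_le` and `r_dist` stand for ${\hat{r}_{\text{LE}}}_i^j$ and ${\hat{r}_R}_i^j$):

```python
def update_codeword(r_cur, r_le, r_dist):
    """Theorem 3's update: keep the current codeword if it already lies
    between the LE-optimal centroid `r_le` and the distance-error centroid
    `r_dist`; otherwise move it to the nearer of the two. Equivalently, a
    clamp to [min(r_le, r_dist), max(r_le, r_dist)], so neither the
    localization error nor the average distance error increases."""
    lo, hi = min(r_le, r_dist), max(r_le, r_dist)
    return min(max(r_cur, lo), hi)

# Three cases: below the interval, inside it, and above it.
below, inside, above = (update_codeword(r, 2.0, 3.0) for r in (1.0, 2.5, 4.0))
```

Writing the update as a clamp makes the nonincreasing property of both error measures immediate: the codeword only ever moves toward both centroids, never past either.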
3.1 Proposed design algorithm
Given the number of quantization levels $L_i=2^{R_i}$ at sensor i, the algorithm summarized below is iteratively conducted over all sensors i = 1,…,M until no change occurs in α_{i}, i = 1,…,M.
Step 1 : Initialize the encoder $\alpha_i(\cdot)=\{Q_i^j, j=1,\dots,L_i\}$ and the corresponding reconstruction values $\{\widehat{z}_i^j, j=1,\dots,L_i\}$. Set the threshold ε and the iteration index n = 0. Compute the metric $D_n = E_{\mathbf{x}} J_i = E_{\mathbf{x}}[|\widehat{r}_i-\widehat{r}_i^{Q}|^2] \approx \sum_{j=1}^{L_i} E_{\mathbf{x}}[|\widehat{r}_i-\widehat{r}_i^j|^2 \mid z_i(\mathbf{x}) \in Q_i^j]$ ^{d}.
Step 2 : Construct the partition V _{ i } using (6). In this step, the metric is minimized by the optimal regular quantization partition construction.
Step 3 : Update the encoder α_{i} by simply letting $Q_i^j=V_i^j$, j = 1,…,L_{i}.
Step 4 : Compute $\widehat{r}_i^{*j}$, j = 1,…,L_{i}, using (5) and (7), following the technique in Theorem 3.
Step 5 : n = n + 1; compute the metric D_{n} with $\widehat{r}_i^j=\widehat{r}_i^{*j}$.
Step 6 : If $\frac{D_{n-1}-D_n}{D_n}<\epsilon$, stop; otherwise, go to step 2.
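Steps 1 to 6 can be sketched for a single sensor over a training set of estimated distances. This is an illustrative simplification with our own names: codewords are updated by plain per-bin centroids, i.e., step 4's clamped update toward the LE-optimal centroid is omitted here:

```python
import numpy as np

def design_flsq_sensor(r_hat, L, eps=1e-4, max_iter=200):
    """Sketch of the iterative loop of Section 3.1 for one sensor, run on
    a training set of estimated distances `r_hat` (monotone decreasing in
    the reading z, so the induced z-partition is regular). Simplified:
    codewords are plain per-bin centroids rather than the clamped update
    of Theorem 3. Returns the codewords and the metric history."""
    # Step 1: initialize codewords from L quantiles of r_hat.
    codes = np.quantile(r_hat, (np.arange(L) + 0.5) / L)
    history = []
    for _ in range(max_iter):
        # Steps 2-3: nearest-codeword partition in the distance domain.
        bins = np.argmin((r_hat[:, None] - codes[None, :]) ** 2, axis=1)
        # Step 4 (simplified): recompute each codeword as a bin centroid.
        codes = np.array([r_hat[bins == j].mean() if np.any(bins == j)
                          else codes[j] for j in range(L)])
        # Steps 5-6: stop on a small relative decrease of the metric.
        D = np.mean((r_hat - codes[bins]) ** 2)
        history.append(D)
        if len(history) > 1 and (history[-2] - D) / max(D, 1e-12) < eps:
            break
    return codes, history

# Demo: synthetic distances uniform on [1, 10] m, 8 quantization levels.
rng = np.random.default_rng(0)
codes, history = design_flsq_sensor(rng.uniform(1.0, 10.0, 2000), L=8)
```

With the clamped update of Theorem 3 substituted into step 4, the same loop would additionally keep the localization error nonincreasing.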
Note that the quantizer design is performed offline using a training set that is generated based on (1) and the source distribution p(x); thus, the quantizer training phase makes use of information about all sensors, but when the resulting quantizers are actually used, each sensor quantizes the information available to it independently. A discussion of the robustness of our quantizer to mismatches of the sensor model parameters is also left for Section 4.
3.2 Analysis of computational complexity
In this section, we discuss the computational complexity of the proposed algorithm by comparing it with the previous work in [17]. Clearly, once $\widehat{r}_i=\|\widehat{\mathbf{x}}-\mathbf{x}_i\|=\|g(\mathbf{z}_1^M)-\mathbf{x}_i\|, \forall \mathbf{x} \in S$, is obtained in step 1, there is no need to perform within the iterative loop (steps 2 to 6) the MMSE estimation, which is the most computationally expensive operation, thereby allowing a much faster design process. In contrast, the previous work seeks to minimize the localization error directly by finding, at each iteration, the weight λ that guarantees convergence of the cost function and a nonincreasing localization error. This approach may require a relatively small number of iterations to design the quantizers but suffers from increased design complexity due to the repeated MMSE estimation.
In this analysis, we express the computational complexity in terms of the time taken to design the quantizers, which is determined by the product of the average number of iterations needed to produce the resulting quantizers and the average execution time per iteration. Notice that as long as there is no substantial difference in the average number of iterations, the execution time will be the decisive factor in this comparison. Furthermore, the execution time per iteration is dominated by the MMSE estimation, which becomes more computationally demanding as the rate R_{i} and/or the number of sensors M increases. Thus, the analysis demonstrates a significant advantage over the previous work in terms of computational complexity: for example, our experiments show that the proposed algorithm performs about 10 times faster than the previous work for the case of M = 5, R_{i} = 3. For detailed numerical results, refer to Section 4.1.
4 Simulation results
In this section, we denote by functional localization-specific quantizer (FLSQ) the quantizer designed using the algorithm proposed in Section 3.1, and we assume that each sensor uses the same dynamic range for all quantizers (uniform quantizer, Lloyd quantizer, the localization-specific quantizer (LSQ) proposed in [17], and FLSQ). We design FLSQs with the equally distance-divided quantizer (EDQ) initialization^{e} introduced in [15, 17], using a training set generated from a uniform distribution of source locations and the model parameters a = 50, α = 2, g_{i} = 1, and $\sigma_i^2=\sigma^2=0$. We also design Lloyd quantizers from the same training set by using different initializations for comparison: the Lloyd quantizers designed using a uniform quantizer and EDQ as initialization are denoted by Lloyd Q and Lloyd Q_{EDQ}, respectively. In our experiments, we consider a sensor network where M (= 3, 4, 5) sensors are deployed in a 10 × 10 m^{2} sensor field. Extensive simulation is conducted to compare the effectiveness of the different design algorithms and to investigate the sensitivity to parameter perturbation, variation of the noise level, and unknown source signal energy. Finally, a larger sensor network is also considered for testing our design, since typical sensor networks involve many sensor nodes in large sensor fields. For evaluation, the localization error $E(\|\mathbf{x}-\widehat{\mathbf{x}}^{Q}\|^2)$ is computed using MMSE estimation except where otherwise stated.
4.1 Effectiveness of design algorithms
In this experiment, 100 different sensor configurations are generated for M = 3, 4, 5. For each configuration, uniform quantizers, Lloyd Q, Lloyd Q_{EDQ}, LSQ, and FLSQ are designed for R_{i} = 2, 3, 4 and evaluated by generating a test set of 1,000 source locations from p(x), the model parameters, and the source signal energy that were assumed during quantizer design. $E(\|\mathbf{x}-\widehat{\mathbf{x}}^{Q}\|^2)$ is computed for each configuration and averaged over the 100 configurations. The localization results for the various designs are illustrated in Figure 1. Note that LSQ is also designed using EDQ initialization. As expected, our proposed design provides greatly improved performance over traditional quantizers because FLSQ makes full use of the correlation among the sensor readings and the sensor location information, while the other quantizers, except for LSQ, are designed without taking this useful information into account. It can also be noted that FLSQ and LSQ provide similar localization performance although they take completely different design approaches. As mentioned in Section 3.2, our design algorithm runs much faster than LSQ due to its simplified design process. In the experiments, the proposed algorithm requires almost twice the average number of iterations of the previous work [17], but the execution time per iteration is about 20 times (or over 40 times) faster for the case of M = 5 and R_{i} = 3 (or M = 5 and R_{i} = 4). Thus, the proposed algorithm operates about 10 times (or over 20 times) faster than the previous work, and this advantage becomes more pronounced with increased rate and/or number of sensors. In addition, Lloyd Q_{EDQ} performs poorly even with the good EDQ initialization, implying that typical standard designs are not suitable for our localization system, regardless of initialization.
4.2 Sensitivity analysis of design algorithms
We first investigate how the performance of the proposed design algorithm is affected by model parameter perturbation. Although the quantizers are designed under the assumption that the source energy is known (a = 50) and the sensor readings are noiseless (σ = 0), we further examine the design algorithms (uniform Q, Lloyd Q, and FLSQ) to understand how sensitive the localization results are to the presence of measurement noise and an unknown source energy a. In the experiments where the source energy is unknown, the dynamic range for quantizer design is extended to accommodate the sensor readings generated from a source energy a randomly drawn from [a_{min}, a_{max}] = [0, 100] (see Section 2).
4.2.1 Sensitivity of FLSQ to parameter perturbation
For each of 100 different five-sensor configurations, we design FLSQ with R_{i} = 3 and modify one of the parameters with respect to what was assumed during quantizer training, generating a test set of 1,000 source locations with a = 50 under both a uniform distribution and a normal distribution of source locations. In this setup, FLSQ is tested under various types of mismatch conditions. It is assumed that the true parameters can be estimated at the fusion node and used for localization. The simulation results are tabulated in Table 1. FLSQ shows robust performance in the mismatch situations where the parameters used in quantizer design differ from those characterizing the simulation conditions.
4.2.2 Sensitivity of design algorithms to noise level and unknown source energy
For each of 100 different five-sensor configurations, Lloyd quantizers are designed with multiple initializations for comparison with FLSQs designed for R_{i} = 3. We first investigate the sensitivity to the noise level by generating a test set of 1,000 source locations for each configuration with a = 50 and a signal-to-noise ratio (SNR) in the range from 20 to 80 dB, obtained by varying σ.^{f} Note that the localization error $E(\parallel \mathbf{x}-{\widehat{\mathbf{x}}}^{Q}{\parallel}^{2})$ is obtained by using the maximum a posteriori (MAP)-based algorithm proposed in [22] for faster computation and is averaged over the 100 configurations. Figure 2 demonstrates that FLSQ performs better in all cases.
We also examine how the design algorithms are affected by an unknown source energy. For each configuration, a test set of 1,000 source locations is generated with σ = 0.05 and an SNR ranging from 20 to 70 dB, obtained by varying the source energy. Figure 3 demonstrates that our proposed algorithm performs well relative to Lloyd quantizers regardless of their initializations, some of which are considered efficient initializations for our localization system.
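Both sweeps above fix one quantity and vary the other through the SNR definition of endnote f, SNR = 10 log₁₀(a²/σ²). Two small helpers (the function names are ours) make the swept quantity explicit:

```python
import math

def sigma_for_snr(a, snr_db):
    # SNR(dB) = 10*log10(a^2 / sigma^2)  =>  sigma = a * 10^(-SNR/20)
    # used when the source energy a is held fixed and the noise level is swept
    return a * 10.0 ** (-snr_db / 20.0)

def energy_for_snr(sigma, snr_db):
    # the dual sweep: hold sigma fixed and solve for the source energy a
    return sigma * 10.0 ** (snr_db / 20.0)

print(sigma_for_snr(50.0, 60.0))    # noise std dev giving 60 dB at a = 50
print(energy_for_snr(0.05, 60.0))   # source energy giving 60 dB at sigma = 0.05
```

Consistent with endnote f, a = 50 together with σ = 0.05 corresponds to 60 dB.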
4.3 Performance analysis in a larger sensor network: comparison with traditional quantizers
In this experiment, 50 different sensor configurations in a larger sensor field, 20 × 20 m^{2}, are generated for M = 12, 16, 20. For each sensor configuration, FLSQs are designed with a given rate of R_{i} = 3 and compared with the standard designs in Figure 4. As expected, FLSQ outperforms the typical quantizer designs in large sensor networks. Comparison with the experiment in the 10 × 10 m^{2} sensor network (M = 3, 4, 5) also shows that better performance is achieved in larger sensor networks even at the same sensor density. Note that the sensor density for M = 20 in the 20 × 20 m^{2} field is $\frac{20}{20\times 20}=0.05$, equal to that for M = 5 in the 10 × 10 m^{2} field. This is because localization performance degrades around the edges of the sensor field; in a larger field, a relatively smaller fraction of source locations lies near the edge, as compared with a smaller field of the same sensor density.
5 Conclusions
In this paper, we have proposed an iterative functional quantizer design algorithm for source localization in sensor networks. Exploiting the monotonicity of the distance in each sensor reading, we have suggested the average distance error as a metric for functional quantization, used to construct regular quantization partitions in which the metric is iteratively reduced within the generalized Lloyd algorithm framework. Since the goal is to design independent regular quantizers that minimize the localization error, we have proved that the localization error can also be reduced at each iteration by updating the codewords with the proposed technique. Our proposed algorithm was shown to perform quite well in comparison with typical standard designs, to operate quickly owing to its simple structure, and to be robust to mismatches of the sensor model parameters. In the future, we will extend this algorithm to nonregular quantization partitions to achieve further improvement.
Endnotes
^{a} In [17], high computational complexity is inevitable because a search for λ, including the MMSE estimation, is iteratively conducted at each step.
^{b} In this paper, we assume that M sensors are activated prior to the localization process. However, selecting the best set of sensors for localization accuracy would be important in order to improve system performance within a limited energy budget [23–25].
^{c} In this work, the nonlinear MMSE estimator is used for quantizer design.
^{d} ${\widehat{r}}_{i}^{j}$ is initially given by $\sqrt[\alpha]{{g}_{i}\,a/{\widehat{z}}_{i}^{j}}$ from (1) whenever ${\alpha}_{i}({z}_{i}(\mathbf{x}))={Q}_{i}^{j}$ (i.e., ${\widehat{z}}_{i}={\widehat{z}}_{i}^{j}$), and $\cos\theta(\mathbf{x}),\ \forall \mathbf{x} \in S$, is computed once in this step for ${\widehat{r}}_{\text{LE},i}^{\,j}$.
^{e} The EDQ can be designed simply by uniformly dividing the dynamic range of the distance. The EDQ design is verified through simulations in [15, 17], which show that EDQ serves as an efficient initialization for quantizer design because of its good localization performance.
^{f} Note that the SNR, computed as $10{\log}_{10}\frac{{a}^{2}}{{\sigma}^{2}}$, is measured at 1 m from the source; for a practical vehicle target, it is often much higher than 40 dB. A typical value of the variance of the measurement noise is σ^{2} = 0.05^{2}, which corresponds to an SNR of 60 dB for a = 50 [19, 21].
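The initialization in endnote d inverts the energy decay model of (1): given a decoded energy codeword, the implied source-to-sensor distance is $\widehat{r} = \sqrt[\alpha]{g\,a/\widehat{z}}$. A minimal round-trip sketch, with the gain g, source energy a, and decay exponent α set to illustrative values:

```python
import math

def energy_at(r, a=50.0, g=1.0, alpha=2.0):
    # forward model (1): signal energy measured at distance r from the source
    return g * a / r ** alpha

def distance_from_energy(z_hat, a=50.0, g=1.0, alpha=2.0):
    # invert z = g * a / r^alpha  =>  r = (g * a / z)^(1/alpha)
    return (g * a / z_hat) ** (1.0 / alpha)

# round trip: inverting a reading recovers the distance it was generated at
r = 3.0
print(distance_from_energy(energy_at(r)))
```

In the paper this inversion is applied per codeword, i.e., whenever sensor i's encoder outputs $Q_i^j$ the decoded energy ${\widehat{z}}_{i}^{j}$ is mapped to a distance estimate for that cell.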
Abbreviations
ML: Maximum likelihood
CRB: Cramér-Rao bound
MMSE: Minimum mean square error
LE: Localization error
EDQ: Equally distance-divided quantizer
FLSQ: Functional localization-specific quantizer
LSQ: Localization-specific quantizer
MAP: Maximum a posteriori
SNR: Signal-to-noise ratio
References
1. Zhao F, Shin J, Reich J: Information-driven dynamic sensor collaboration for target tracking. IEEE Signal Process. Mag. 2002, 19(2):61-72. 10.1109/79.985685
2. Niu R, Varshney PK: Target location estimation in wireless sensor networks using binary data. In The 38th Annual Conference on Information Sciences and Systems. Princeton; 17-19 Mar 2004.
3. Niu R, Varshney PK: Target location estimation in sensor networks with quantized data. IEEE Trans. Signal Process. 2006, 54(12):4519-4528.
4. Longo M, Lookabaugh TD, Gray RM: Quantization for decentralized hypothesis testing under communication constraints. IEEE Trans. Inf. Theory 1990, 36(2):241-255. 10.1109/18.52470
5. Flynn TJ, Gray RM: Encoding of correlated observations. IEEE Trans. Inf. Theory 1987, 33(6):773-787. 10.1109/TIT.1987.1057384
6. Hashlamoun WA, Varshney PK: Near-optimum quantization for signal detection. IEEE Trans. Commun. 1996, 44(3):294-297. 10.1109/26.486322
7. Lam W, Reibman A: Design of quantizers for decentralized estimation systems. IEEE Trans. Commun. 1993, 41(11):1602-1605. 10.1109/26.241739
8. Marano S, Matta V, Willett P: Asymptotic design of quantizers for decentralized MMSE estimation. IEEE Trans. Signal Process. 2007, 55(11):5485-5496.
9. Misra V, Goyal VK, Varshney LR: High-resolution functional quantization. In Data Compression Conference. Snowbird; 25-27 Mar 2008:113-122.
10. Shen X, Zhu Y, You Z: An efficient sensor quantization algorithm for decentralized estimation fusion. Automatica 2011, 47:1053-1059. 10.1016/j.automatica.2011.01.082
11. Wernersson N, Karlsson J, Skoglund M: Distributed quantization over noisy channels. IEEE Trans. Commun. 2009, 57:1693-1700.
12. Fleming M, Zhao Q, Effros M: Network vector quantization. IEEE Trans. Inf. Theory 2004, 50(8):1584-1604. 10.1109/TIT.2004.831832
13. Huang Y, Hua Y: Multi-hop progressive decentralized estimation in wireless sensor networks. IEEE Signal Process. Lett. 2007, 14(12):1004-1007.
14. Shah S, Beferull-Lozano B: In-network iterative distributed estimation for power-constrained wireless sensor networks. In IEEE International Conference on Distributed Computing in Sensor Systems. Hangzhou; 16-18 May 2012:239-246.
15. Kim YH, Ortega A: Quantizer design for source localization in sensor networks. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Philadelphia; 18-23 Mar 2005.
16. Kim YH, Ortega A: Quantizer design and distributed encoding algorithm for source localization in sensor networks. In IEEE International Symposium on Information Processing in Sensor Networks (IPSN). Los Angeles; 25-27 Apr 2005.
17. Kim YH, Ortega A: Quantizer design for energy-based source localization in sensor networks. IEEE Trans. Signal Process. 2011, 59(11):5577-5588.
18. Kim YH, Ortega A: Distributed encoding algorithms for source localization in sensor networks. EURASIP J. Adv. Signal Process. 2010, 2010:781720. 10.1155/2010/781720
19. Li D, Hu YH: Energy-based collaborative source localization using acoustic microsensor array. EURASIP J. Appl. Signal Process. 2003, 2003:321-337. 10.1155/S1110865703212075
20. Hero AO, Blatt D: Sensor network source localization via projection onto convex sets (POCS). In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Philadelphia; 18-23 Mar 2005.
21. Liu J, Reich J, Zhao F: Collaborative in-network processing for target tracking. EURASIP J. Appl. Signal Process. 2003, 2003:378-391. 10.1155/S111086570321204X
22. Kim YH, Ortega A: Maximum a posteriori (MAP)-based algorithm for distributed source localization using quantized acoustic sensor readings. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Toulouse; 14-19 May 2006.
23. Isler V, Bajcsy R: The sensor selection problem for bounded uncertainty sensing models. In IEEE International Symposium on Information Processing in Sensor Networks (IPSN). Los Angeles; 25-27 Apr 2005.
24. Wang H, Yao K, Pottie G, Estrin D: Entropy-based sensor selection heuristic for target localization. In IEEE International Symposium on Information Processing in Sensor Networks (IPSN). Berkeley; 26-27 Apr 2004.
25. Kim YH: Quantization-aware sensor selection for source localization in sensor networks. Int. J. KIMICS 2011, 9(2):155-160.
Acknowledgements
This study was supported by a research fund from Chosun University, 2012.
Competing interests
The author declares that he has no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Kim, Y.H. Functional quantizer design for source localization in sensor networks. EURASIP J. Adv. Signal Process. 2013, 151 (2013). https://doi.org/10.1186/168761802013151
Keywords
 Sensor Network
 Localization Error
 Minimum Mean Square Error
 Quantizer Design
 Sensor Reading