
Functional quantizer design for source localization in sensor networks

Abstract

In this paper, we address the problem of quantizer design optimized for source localization in acoustic sensor networks, where physically separated sensors measure acoustic signal energy, quantize it, and transmit the quantized data to a fusion node, which then produces an estimate of the source location. We propose an iterative regular quantizer design algorithm that minimizes the localization error. To construct regular quantization partitions, we suggest the average distance error as the metric for functional quantization, since the distance is monotonic in each sensor reading. Furthermore, to guarantee minimization of the localization error, we propose a new technique for updating the codewords and prove that, with this update technique, the localization error is reduced at each iteration while the average distance error remains nonincreasing. Our experiments show that the proposed algorithm yields significantly improved performance compared with traditional quantizer designs.

1 Introduction

In sensor networks, a large number of low-cost sensors, each equipped with a processor, a low-power communication transceiver, and one or more sensing capabilities, are deployed in a sensor field. Each sensor operates on a limited amount of battery energy, which is consumed mostly by wireless communication between sensors. The network lifetime is therefore a crucial concern, and the basic strategy for prolonging it is to decrease the communication cost at the expense of additional computation in the sensors [1]. This motivates the use of data compression for various tasks such as detection, classification, localization, and tracking, where collected and processed data are exchanged between sensors.

Quantization of measurements arises in practical estimation systems where estimation algorithms operate on quantized data; efficient quantization is thus needed to achieve good rate-distortion performance. For example, the authors in [2] considered a source localization system in which each sensor measures the signal energy, quantizes it, and sends the quantized sensor reading to a fusion node where the localization is performed. In this framework, the maximum likelihood (ML) estimation problem based on quantized data was addressed and the Cramér-Rao bound (CRB) was derived for comparison, assuming that each sensor used identical (uniform) quantizers. In [3], heuristic quantization schemes were proposed to assign quantizers to each sensor without taking the sensor locations into account. However, if the sensor locations are known during the quantizer design process, significant performance gains can be achieved with respect to simple uniform quantization at all sensors. This raises the problem of quantizer design optimized for distributed estimation systems, where the goal is to design independently operating quantizers that minimize a global metric such as the estimation error (a function of all sensor readings).

To this end, a cooperative design-separate encoding approach was suggested for a decentralized hypothesis testing system [4, 5], where a distributional distance was used as the quantizer design criterion in order to yield a manageable design procedure. For a distributed detection system, the authors in [6] proposed a heuristic quantizer design procedure that minimizes an upper bound on the probability of error. Lam and Reibman [7] constructed optimal quantization partitions in distributed estimation systems and presented the necessary conditions for those partitions. In the high-resolution regime, asymptotic quantizer designs for distributed estimation were derived in [8, 9] by using the limiting density of quantizer partitions. In [10], the necessary conditions for optimal quantization rules and linear estimation fusion rules were derived, and it was shown that both can be searched simultaneously by an iterative algorithm. An iterative quantizer design algorithm in the Lloyd algorithm framework was presented and evaluated for nonideal channels in [11]; it was shown that the resulting distributed scalar quantizers should be nonregular, meaning that the same codeword is assigned to several disjoint intervals in order to reduce the distortion.

Although the above quantizer design algorithms were developed for sensor networks in which all sensors send their quantized data directly to a fusion node without communicating with each other (no routing), the design problem can also be addressed for general network topologies. In [12], the authors incorporated network topology into compression system design and presented a design algorithm for locally optimal vector quantizers that achieves improvements in rate-distortion performance and system functionality. In [13], an algorithm for optimal rate allocation to sensors was presented, given a multihop routing tree from sensors to a fusion node, in order to minimize the transmission energy. In addition, since selecting a proper subset of sensors and optimizing the routing structure can lead to important power savings, an iterative algorithm was proposed in [14] for joint optimization of sensor selection and routing for distributed estimation.

In this work, we consider a source localization system in acoustic sensor networks (one instance of a distributed estimation system) where distributed sensors measure acoustic source signals, quantize them, and send them to a fusion node, which estimates the source location from the quantized sensor readings. We seek to design independently operating regular quantizers that minimize the localization error. We also take the cooperative design-separate encoding strategy and propose an iterative quantizer design algorithm similar to the cyclic generalized Lloyd algorithm. The challenge is that the Lloyd algorithm was devised for quantizer design with a local metric (e.g., the reconstruction error of local sensor readings) as the cost function, and simply replacing it by a global metric causes problems. More specifically, the quantizer update at each step, consisting of the Voronoi region construction and the subsequent computation of codewords based on the Voronoi partitions (the two main tasks in the typical Lloyd design), would not generally produce regular quantization partitions and cannot guarantee that the global metric will not increase. To tackle these problems, the authors in [15–17] adopted a simple distance rule to construct regular quantization partitions and proposed a weighted sum of the two metrics as a cost function (i.e., local + λ × global, λ ≥ 0), along with a search for proper weights. The use of the weighted metric is motivated by the observation that there always exists a λ such that the cost function remains nonincreasing under the Lloyd iterations (e.g., λ = 0 always leads to a nonincreasing cost function, although there typically exist multiple nonzero values of λ with the same property).

It should be noted that regular quantization partitions can also be constructed by functional quantization of a monotonic estimator (e.g., a linear minimum mean square error (MMSE) estimator), since each quantization partition for the estimator transforms into a regular one for the quantizer at each sensor owing to its monotonicity in each sensor reading [9]. In this work, we suggest a new metric to be minimized in order to obtain regular quantization partitions in the functional quantization framework. Specifically, we propose the average distance error as the metric, since functional quantization of the distance allows us to generate regular quantization partitions due to the monotonicity of the distance with respect to each sensor reading, even when the nonlinear MMSE estimator of the source location is employed. Notice that we focus on the design of regular independent scalar quantizers that minimize the global metric. In [18], it was shown that nonregular independent quantizers can be systematically designed by applying a distributed encoding algorithm to regular quantizers; that is, substantial performance gains can be achieved after the design of regular quantizers by merging their nonadjacent quantization bins in a distributed manner.

Obviously, minimizing the average distance error does not necessarily lead to minimization of the localization error. We therefore develop a new technique for the computation of the codewords and prove that, with this technique, the localization error is reduced at each iteration while the average distance error remains nonincreasing. We demonstrate through extensive experiments that our iterative design algorithm achieves significant performance improvement over typical design techniques such as uniform quantizers and Lloyd quantizers. We also evaluate the proposed algorithm by comparing it with the previous work [17], which recently proposed a novel quantizer design technique optimized for source localization in acoustic sensor networks, the application considered in this work. The benefit of the proposed algorithm, illustrated by analysis and experiments, is that it provides similar localization performance with much lower complexity^a. The main contributions of this paper are twofold. First, we define a monotonic cost function to be minimized for regular quantizer design, present an iterative quantizer design algorithm based on functional quantization, and propose a codeword update technique that guarantees minimization of the localization error. We believe that our approach is applicable to general cases in which sensors measure information that is a function of distance. Second, although the proposed algorithm has no obvious advantage over the previous work in [17] in terms of localization performance, the nature of our algorithm yields a substantial reduction in computational complexity; the complexity analysis shows that this benefit becomes more significant as the rate and/or the number of sensors increases.

In this paper, we assume that each sensor can estimate noise-corrupted acoustic signal energy using actual measurements (e.g., time series measurements). We also assume that there is only one-way communication from sensors to the fusion node, i.e., there is no feedback channel, the sensors do not communicate with each other (no relay between sensors), and these various communication links are reliable.

This paper is organized as follows. The problem formulation of the quantizer design is given in Section 2. A brief description of functional quantization is provided in Section 2.1. An iterative quantizer design algorithm is explained in detail in Section 3 and summarized in Section 3.1. The complexity analysis of the proposed algorithm is discussed in Section 3.2. Simulation results are given in Section 4, and the conclusions are found in Section 5.

2 Problem formulation

Consider a sensor field $S \subset \mathbb{R}^2$ in which $M$ sensors are located at known spatial locations, denoted by $x_i$, $i = 1, \ldots, M$. The sensors measure an acoustic signal energy emitted from a source located at an unknown location $x \in S$, assumed to be static during the localization process. In collecting acoustic energy readings, it is assumed that each sensor adopts the energy decay sensor model proposed in [19], and the signal energy measured at sensor $i$ over a given time interval $k$ can be expressed as follows:

z_i(x, k) = g_i \frac{a}{\|x - x_i\|^{\alpha}} + w_i(k),
(1)

where $z_i$ is the acoustic energy reading at sensor $i$; the model parameters consist of the gain factor $g_i$ of the $i$th sensor, an energy decay factor $\alpha$, which is approximately equal to 2, and the source signal energy $a$ measured 1 m from the source, which is assumed to be uniformly distributed over the range $[a_{\min}, a_{\max}]$. It is also assumed that the measurement noise term $w_i(k)$ can be approximated by a normal distribution $\mathcal{N}(0, \sigma_i^2)$. Note that the energy decay model was verified by the field experiment in [19] and was also used in [20–22].
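As an illustration of the measurement model (1), the following Python sketch draws one noisy energy reading for a single sensor; the default parameter values mirror those used in the simulations of Section 4, and the function name is ours.

```python
import numpy as np

def energy_reading(x, x_i, a=50.0, g_i=1.0, alpha=2.0, sigma_i=0.05, rng=None):
    """One acoustic energy reading z_i(x, k) per the decay model (1):
    g_i * a / ||x - x_i||^alpha plus Gaussian noise w_i ~ N(0, sigma_i^2)."""
    rng = rng or np.random.default_rng()
    dist = np.linalg.norm(np.asarray(x, float) - np.asarray(x_i, float))
    return g_i * a / dist**alpha + rng.normal(0.0, sigma_i)

# Example: a source at (3, 7) observed by a sensor at (1, 1) in a 10 x 10 m field
z = energy_reading([3.0, 7.0], [1.0, 1.0])
```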

In this paper, we consider source localization based on quantized sensor readings, where the $i$th sensor uses an $R_i$-bit quantizer with a dynamic range $[z_{i,\min}, z_{i,\max}]$, assumed to be selected based on desirable properties of the respective sensing ranges (see [17] for details). We denote by $\alpha_i(\cdot)$ the encoder at sensor $i$, which generates a quantization index $Q_i \in I_i = \{1, \ldots, 2^{R_i} = L_i\}$. In what follows, $Q_i$ will also be used to denote the quantization bin to which measurement $z_i$ belongs. Each sensor captures its measurement $z_i(x, k)$ at time interval $k$, quantizes it, and sends it to a fusion node, where all sensor readings are used to obtain an estimate $\hat{x}$ of the source location^b. Note that in some cases one measurement per sensor is used for localization; in other cases, when multiple measurements (i.e., $z_i(x, k)$ for several $k$'s) can be made at each sensor, a sufficient statistic for localization can be computed from the multiple measurements before being quantized and transmitted.
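The interface assumed throughout is an independent regular scalar encoder at each sensor and a table-lookup decoder at the fusion node. The sketch below shows only that interface; the thresholds and codewords themselves come from the design algorithm of Section 3, and the function names are ours.

```python
import numpy as np

def encode(z_i, thresholds):
    """Encoder alpha_i: map a reading z_i to a bin index in {0, ..., L_i - 1}.
    For a regular quantizer the bins are intervals, so the sorted interior
    boundaries (length L_i - 1) over [z_min, z_max] fully define the encoder."""
    return int(np.searchsorted(thresholds, z_i))

def decode(index, codewords):
    """Fusion-node lookup: reconstruction value for the received bin index."""
    return codewords[index]
```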

2.1 Functional quantization of monotonic estimators

Suppose that we are given an estimator $\hat{x} = g(z_1, \ldots, z_M) \equiv g(z_1^M)$ that is monotonic in each of the sensor readings (e.g., the linear MMSE estimator), where $(z_1, \ldots, z_M)$ is abbreviated as $z_1^M$ for notational simplicity. In this case, functional quantization can be applied to design quantizers minimizing the criterion $E\|\hat{x} - \hat{x}_Q\|^2 = E\|g(z_1^M) - \hat{g}(\hat{z}_1^M)\|^2$, where $\hat{x}_Q = \hat{g}(\cdot)$ is the estimator employed at the fusion node, operating on the quantized sensor readings $\hat{z}_i$, $i = 1, \ldots, M$, and $\hat{z}_i$ is the reconstruction value transmitted from sensor $i$ when $z_i \in Q_i$. Without loss of optimality, by the intermediate value theorem we can find $z_1^*, \ldots, z_M^*$, $z_i^* \in Q_i$, such that $g(z_1^{*M}) = \hat{g}(\hat{z}_1^M)$. Notice that functional quantization focuses on minimization of $E\|\hat{x} - \hat{x}_Q\|^2$ rather than $E\|x - \hat{x}_Q\|^2$; clearly, as $M$ becomes large, $E\|\hat{x} - \hat{x}_Q\|^2 \rightarrow E\|x - \hat{x}_Q\|^2$.

We first consider the functional quantizer design at sensor $i$ in the Lloyd design framework. We are initially given the reconstruction values (or codewords) $\hat{g}_i^j$ corresponding to the $j$th functional quantization partition $V_{g_i}^j$ of the range of the estimator $g(\cdot)$. The Voronoi region construction and the codeword computation, the two main tasks in the algorithm, are conducted as follows:

V_{g_i}^j = \left\{ g(z_1^M) : \|g(z_1^M) - \hat{g}_i^j\|^2 \le \|g(z_1^M) - \hat{g}_i^k\|^2,\ \forall k \ne j \right\}
(2)
\hat{g}_i^{j\prime} = \arg\min_{\hat{g}_i^j} E\left[ \|g(z_1^M) - \hat{g}_i^j\|^2 \,\middle|\, g(z_1^M) \in V_{g_i}^j \right], \quad j = 1, \ldots, L_i.

As in the standard Lloyd algorithm, $V_{g_i}^j$ clearly forms regular partitions, and these tasks are repeated with $\hat{g}_i^j = \hat{g}_i^{j\prime}$ at the next iteration until a certain criterion is satisfied.
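A minimal sample-based sketch of the two steps in (2): nearest-codeword assignment of training values of $g(z_1^M)$ followed by a centroid update. Sample means stand in for the expectations, and all names are ours.

```python
import numpy as np

def lloyd_functional(g_vals, L_i, n_iter=50):
    """Lloyd iterations on estimator outputs g(z_1^M): Voronoi assignment to the
    nearest codeword, then move each codeword to the mean of its partition."""
    g_vals = np.asarray(g_vals, float)
    codewords = np.linspace(g_vals.min(), g_vals.max(), L_i)  # simple initialization
    for _ in range(n_iter):
        assign = np.argmin(np.abs(g_vals[:, None] - codewords[None, :]), axis=1)
        for j in range(L_i):
            if np.any(assign == j):
                codewords[j] = g_vals[assign == j].mean()     # centroid update
    return codewords
```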

Now, we can easily obtain the Voronoi regions for quantizer design at sensor i:

V_i^j = \left\{ z_i(x) : g(z_1^M) \in V_{g_i}^j \right\}, \quad j = 1, \ldots, L_i.
(3)

We can also compute the codeword $\hat{z}_i^j$ from $\hat{g}_i^j = \hat{g}(\hat{z}_i^j)$. It should be noticed that the $V_i^j$, $j = 1, \ldots, L_i$, are regular partitions, since $V_{g_i}^j$ transforms into a regular partition for $z_i$ owing to the monotonicity of $g(z_1^M)$ with respect to $z_i$.
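Since $g$ is monotone in $z_i$, each boundary between adjacent functional partitions maps to a single threshold on the sensor reading, which is what (3) exploits. Below is a sketch of this inversion on a grid, under the stated monotonicity assumption (names ours).

```python
import numpy as np

def thresholds_from_functional_partition(g_of_zi, z_grid, g_boundaries):
    """Map boundaries of the functional partitions V_g to thresholds on z_i (eq. (3)).
    g_of_zi evaluates g as a function of z_i alone (other readings held fixed) and
    is assumed monotone on z_grid, so each boundary has a unique crossing."""
    g_on_grid = g_of_zi(z_grid)
    z_thresholds = [z_grid[int(np.argmin(np.abs(g_on_grid - b)))] for b in g_boundaries]
    return np.sort(np.array(z_thresholds))
```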

3 Functional quantizer design algorithm

First, we consider the average distance error at sensor i to be minimized for quantizer design as follows:

E_x[J_i] = E_x\left[ |\hat{r}_i - \hat{r}_i^Q|^2 \right] = \sum_{j=1}^{L_i} E_x\left[ |\hat{r}_i - \hat{r}_i^Q|^2 \,\middle|\, z_i(x) \in Q_i^j \right],
(4)

where $Q_i^j$ is the $j$th quantization partition at sensor $i$, $\hat{r}_i = \|\hat{x} - x_i\| = \|g(z_1^M) - x_i\|$ is the distance between the source and the $i$th sensor estimated by using unquantized sensor readings, and $\hat{r}_i^Q = \|\hat{x}_Q - x_i\| = \|\hat{g}(\hat{z}_1^M) - x_i\|$ is the estimated distance when quantized sensor readings are involved. Note that $g(z_1^M)$ can be any good estimator^c. In order to incorporate the metric $E_x[J_i]$ into the design process, we find $\hat{r}_i^j \in \{\hat{r}_i \mid z_i(x) \in Q_i^j\}$ such that $E_x[|\hat{r}_i - \hat{r}_i^Q|^2 \mid z_i(x) \in Q_i^j] \approx E_x[|\hat{r}_i - \hat{r}_i^j|^2 \mid z_i(x) \in Q_i^j]$, $j = 1, \ldots, L_i$. This approximation is reasonable since $\hat{r}_i^Q \in \{\hat{r}_i \mid z_i(x) \in Q_i^j\}$ and we can choose $\hat{r}_i^j$ close to $\hat{r}_i^Q$ at high rates, as the partition $Q_i^j$ becomes small. Clearly, this allows us to avoid calculating $\hat{x}_Q = \hat{g}(\hat{z}_1^M)$ inside the iterative loop (for details, see steps 2 to 6 of the algorithm in Section 3.1), enabling fast operation. The metric can be minimized by taking the centroid of $\hat{r}_i$ over each partition $Q_i^j$ as $\hat{r}_i^j$. Formally,

\sum_{j=1}^{L_i} E_x\left[ |\hat{r}_i - \hat{r}_i^j|^2 \,\middle|\, z_i(x) \in Q_i^j \right] \ \ge\ \sum_{j=1}^{L_i} E_x\left[ |\hat{r}_i - \hat{r}_{R,i}^j|^2 \,\middle|\, z_i(x) \in Q_i^j \right], \quad \text{where } \hat{r}_{R,i}^j = E_x\left[ \hat{r}_i \,\middle|\, z_i \in Q_i^j \right].
(5)

We can possibly reduce the localization error by minimizing $E_x[J_i]$, $i = 1, \ldots, M$, at each sensor since, as the accuracy of the range information improves, the fusion node will be able to estimate source locations with higher precision.
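On a training set, the centroid codewords $\hat{r}_{R,i}^j$ in (5) are simply per-bin means of the unquantized distance estimates; a minimal sketch (names ours):

```python
import numpy as np

def distance_centroids(r_hat, bin_index, L_i):
    """Per-bin centroid E_x[r_hat | z_i in Q_i^j] of the unquantized distance
    estimates, where bin_index[n] is the bin of sensor i's reading for sample n."""
    r_hat, bin_index = np.asarray(r_hat, float), np.asarray(bin_index)
    return np.array([r_hat[bin_index == j].mean() if np.any(bin_index == j) else np.nan
                     for j in range(L_i)])
```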

Now, we describe this consideration in the functional quantization framework, where the distance error $J_i = |\hat{r}_i - \hat{r}_i^Q|^2$ is minimized for quantizer design:

\begin{aligned}
J_i &= |\hat{r}_i(z_1^M) - \hat{r}_i^Q(\hat{z}_1^M)|^2 \\
&= |\hat{r}_i(z_1^M) - \hat{r}_i(z_1^{*M})|^2 && \text{by the intermediate value theorem} \\
&= |\hat{r}_i(z_i) - \hat{r}_i(z_i^*)|^2 && \text{with quantizers at other sensors fixed} \\
&= |\hat{r}_i(z_i) - \hat{r}_i^j|^2 && \text{where we let } \hat{r}_i^j = \hat{r}_i(z_i^*),\ \alpha_i(z_i^*) = Q_i^j,\ j = 1, \ldots, L_i.
\end{aligned}

Note that the metric has an important property which is essential for regular quantizer design: the distance $\hat{r}_i$ is monotonically decreasing in the sensor reading $z_i$. As explained in (2) and (3), this monotonicity allows us to always construct the regular quantization partition $V_i = \{V_i^j, j = 1, \ldots, L_i\}$ for $z_i$:

\begin{aligned}
V_{r_i}^j &= \left\{ \hat{r}_i(z_i) : |\hat{r}_i(z_i) - \hat{r}_i^j|^2 \le |\hat{r}_i(z_i) - \hat{r}_i^k|^2,\ \forall k \ne j,\ k = 1, \ldots, L_i \right\} \\
V_i^j &= \left\{ z_i(x) : \hat{r}_i(z_i) \in V_{r_i}^j \right\}, \quad j = 1, \ldots, L_i,
\end{aligned}
(6)

where $V_{r_i}^j$ is the $j$th functional quantization partition for the distance $\hat{r}_i$ and $V_i^j$ is the corresponding region for $z_i$, consisting of the $i$th sensor readings that would minimize the metric if assigned to the $j$th quantization bin. Thus, as in the standard Lloyd design algorithm, the construction of the quantization partitions $V_i^j$ in (6) and the codeword computation in (5) clearly reduce the average distance error $E(J_i)$ at each iteration, leading to convergence. However, simply minimizing this metric would not guarantee minimization of the localization error. In this work, we prove that the localization error is also reduced at each iteration by applying a new technique for the codeword computation $\hat{r}_i^j$, developed in what follows.

Lemma 1. The localization error (LE) $E_x\|\hat{x} - \hat{x}_Q\|^2$ is minimized by using the codewords $\hat{r}_{LE,i}^j$ given by

\hat{r}_{LE,i}^j = E_x\left[ \hat{r}_i \cos\theta(x) \,\middle|\, z_i(x) \in Q_i^j \right], \quad j = 1, \ldots, L_i,
(7)

where $\theta(x)$ is the angle between the vectors $\hat{x} - x_i$ and $\hat{x}_Q - x_i$.

Proof.

\begin{aligned}
E_x\|\hat{x} - \hat{x}_Q\|^2 &= E_x\left\| (\hat{x} - x_i) - (\hat{x}_Q - x_i) \right\|^2 \\
&= E_x\left[ \left( \hat{r}_i \cos\theta(x) - \hat{r}_i^Q \right)^2 \right] + E_x\left[ \hat{r}_i^2 - \hat{r}_i^2 \cos^2\theta(x) \right]
\end{aligned}
(8)
= \sum_{j=1}^{L_i} E_x\left[ \left( \hat{r}_i \cos\theta(x) - \hat{r}_i^Q \right)^2 \,\middle|\, z_i(x) \in Q_i^j \right] + C
\approx \sum_{j=1}^{L_i} E_x\left[ \left( \hat{r}_i \cos\theta(x) - \hat{r}_i^j \right)^2 \,\middle|\, z_i(x) \in Q_i^j \right] + C
(9)
\ge \sum_{j=1}^{L_i} E_x\left[ \left( \hat{r}_i \cos\theta(x) - \hat{r}_{LE,i}^j \right)^2 \,\middle|\, z_i(x) \in Q_i^j \right] + C.
(10)

Here, the second term in (8) is irrelevant to the quantization process and is denoted by the constant $C$. Obviously, $\hat{r}_{LE,i}^j$ can be computed by taking the centroid over $Q_i^j$, which is given by $E_x[\hat{r}_i \cos\theta(x) \mid z_i(x) \in Q_i^j]$. □

Note that if one attempts to design quantizers that minimize the LE by using $\{\hat{r}_{LE,i}^j\}$ directly for the codeword computation, the design process would fail to converge. The challenge is therefore to update the codewords at the next iteration, denoted by $\hat{r}_i^{j\prime}$, such that the localization error is reduced while the average distance error remains nonincreasing at each iteration. First, it follows easily from (5) and (7) that

\hat{r}_{LE,i}^j \le \hat{r}_{R,i}^j, \quad j = 1, \ldots, L_i.
(11)

Next, we prove that the excess LE and the excess average distance error incurred by using $\hat{r}_i^j$ are given by the sums of the squared distances from the respective optimal codewords, $|\hat{r}_{LE,i}^j - \hat{r}_i^j|^2$ and $|\hat{r}_{R,i}^j - \hat{r}_i^j|^2$.

Lemma 2. Let $LE(\hat{r}_i^j)$ be the localization error computed by using $\hat{r}_i^j$. Then $\Delta LE(\hat{r}_i^j) \equiv LE(\hat{r}_i^j) - LE(\hat{r}_{LE,i}^j)$ is given by $\sum_{j=1}^{L_i} |\hat{r}_{LE,i}^j - \hat{r}_i^j|^2$. Similarly, let $DE(\hat{r}_i^j)$ be the average distance error computed by using $\hat{r}_i^j$. Then $\Delta DE(\hat{r}_i^j) \equiv DE(\hat{r}_i^j) - DE(\hat{r}_{R,i}^j)$ is given by $\sum_{j=1}^{L_i} |\hat{r}_{R,i}^j - \hat{r}_i^j|^2$.

Proof. Let $\hat{r}_i^j = \hat{r}_{LE,i}^j + \Delta r_i^j$. Then we have

LE(\hat{r}_i^j) = \sum_{j=1}^{L_i} E_x\left[ |\hat{r}_i \cos\theta(x) - \hat{r}_i^j|^2 \,\middle|\, z_i(x) \in Q_i^j \right] + C
(12)
\begin{aligned}
&= \sum_{j=1}^{L_i} E_x\left[ |\hat{r}_i \cos\theta(x) - \hat{r}_{LE,i}^j - \Delta r_i^j|^2 \,\middle|\, z_i(x) \in Q_i^j \right] + C \\
&= \sum_{j=1}^{L_i} E_x\left[ |\hat{r}_i \cos\theta(x) - \hat{r}_{LE,i}^j|^2 \,\middle|\, z_i(x) \in Q_i^j \right] - 2 \sum_{j=1}^{L_i} \Delta r_i^j \, E_x\left[ \left( \hat{r}_i \cos\theta(x) - \hat{r}_{LE,i}^j \right) \,\middle|\, z_i(x) \in Q_i^j \right] \\
&\qquad + \sum_{j=1}^{L_i} E_x\left[ (\Delta r_i^j)^2 \,\middle|\, z_i(x) \in Q_i^j \right] + C
\end{aligned}
(13)
\begin{aligned}
&= \sum_{j=1}^{L_i} E_x\left[ |\hat{r}_i \cos\theta(x) - \hat{r}_{LE,i}^j|^2 \,\middle|\, z_i(x) \in Q_i^j \right] + \sum_{j=1}^{L_i} (\Delta r_i^j)^2 + C \\
&= LE(\hat{r}_{LE,i}^j) + \Delta LE(\hat{r}_i^j),
\end{aligned}

where (12) follows from (9), and the second term in (13) equals zero by (7). For the average distance error, the corresponding relation is derived by a similar manipulation. □

Now, we are in a position to prove the theorem that states how to update the codewords $\hat{r}_i^{j\prime}$.

Theorem 3. Let $\hat{r}_i^j$ be the codewords at the current iteration and $\hat{r}_i^{j\prime}$ the ones at the next iteration. Suppose that the average distance error is minimized in the generalized Lloyd framework, where the Voronoi regions are constructed by using (6) and the $\hat{r}_i^{j\prime}$ corresponding to the $j$th region is computed as follows:

if $\hat{r}_{LE,i}^j \le \hat{r}_i^j \le \hat{r}_{R,i}^j$, then $\hat{r}_i^{j\prime} = \hat{r}_i^j$;
else if $\hat{r}_i^j \le \hat{r}_{LE,i}^j$, then $\hat{r}_i^{j\prime} = \hat{r}_{LE,i}^j$;
else if $\hat{r}_i^j \ge \hat{r}_{R,i}^j$, then $\hat{r}_i^{j\prime} = \hat{r}_{R,i}^j$.

Then, the quantizers updated from $\hat{r}_i^{j\prime}$ will not increase the localization error, while the average distance error remains nonincreasing at each iteration.

Proof. If $\hat{r}_i^j$ lies between $\hat{r}_{LE,i}^j$ and $\hat{r}_{R,i}^j$, it is kept unchanged by setting $\hat{r}_i^{j\prime} = \hat{r}_i^j$, implying that neither the metric nor the LE will increase. If $\hat{r}_i^j \le \hat{r}_{LE,i}^j$, then $\hat{r}_i^{j\prime} = \hat{r}_{LE,i}^j$ reduces the localization error by Lemma 1; moreover, it is easy to see from (11) that $\hat{r}_i^{j\prime}$ is closer to $\hat{r}_{R,i}^j$ than $\hat{r}_i^j$ is, so the distance error also decreases by Lemma 2. Similarly, if $\hat{r}_i^j \ge \hat{r}_{R,i}^j$, then $\hat{r}_i^{j\prime} = \hat{r}_{R,i}^j$ reduces both the metric and the LE. Therefore, we conclude that the technique of computing $\hat{r}_i^{j\prime}$ in the theorem does not increase the LE at each iteration while guaranteeing the convergence of the design algorithm. □
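Because $\hat{r}_{LE,i}^j \le \hat{r}_{R,i}^j$ by (11), the update of Theorem 3 amounts to clamping the current codeword to the interval $[\hat{r}_{LE,i}^j, \hat{r}_{R,i}^j]$; a one-function sketch (names ours):

```python
import numpy as np

def update_codewords(r_cur, r_le, r_r):
    """Theorem 3 update, elementwise over j: keep the current codeword if it lies
    in [r_le, r_r]; otherwise move it to the nearer endpoint."""
    return np.clip(r_cur, r_le, r_r)
```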

3.1 Proposed design algorithm

Given the number of quantization levels $L_i = 2^{R_i}$ at sensor $i$, the algorithm summarized below is conducted iteratively over all sensors $i = 1, \ldots, M$ until no change in $\alpha_i$, $i = 1, \ldots, M$, occurs; a code sketch of one sweep for a single sensor is given after the step list.

Step 1: Initialize the encoder $\alpha_i(\cdot) = \{Q_i^j, j = 1, \ldots, L_i\}$ and the corresponding reconstruction values $\{\hat{z}_i^j, j = 1, \ldots, L_i\}$. Set the threshold $\epsilon$ and the iteration index $n = 0$. Compute the metric $D_n = E_x[J_i] = E_x[|\hat{r}_i - \hat{r}_i^Q|^2] \approx \sum_{j=1}^{L_i} E_x[|\hat{r}_i - \hat{r}_i^j|^2 \mid z_i(x) \in Q_i^j]$^d.

Step 2: Construct the partition $V_i$ using (6). In this step, the metric is minimized by the optimal regular quantization partition construction.

Step 3: Update the encoder $\alpha_i$ by simply letting $Q_i^j = V_i^j$, $j = 1, \ldots, L_i$.

Step 4: Compute $\hat{r}_i^{j\prime}$, $j = 1, \ldots, L_i$, using (5) and (7), following the update technique in Theorem 3.

Step 5: Set $n = n + 1$ and compute the metric $D_n$ with $\hat{r}_i^j = \hat{r}_i^{j\prime}$.

Step 6: If $(D_{n-1} - D_n)/D_n < \epsilon$, stop; otherwise, go to step 2.
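A compact sketch of steps 1 to 6 for a single sensor $i$, operating on a training set: the distance estimates $\hat{r}_i$ come from the unquantized estimator computed once up front (step 1), $\cos\theta(x)$ is assumed precomputed as in endnote d, sample means replace the expectations, and all function and variable names are ours.

```python
import numpy as np

def design_flsq_sensor(z_i, r_hat, cos_theta, L_i, eps=1e-4, max_iter=100):
    """Functional quantizer design for one sensor (steps 1-6 of Section 3.1).

    z_i       : training readings of sensor i
    r_hat     : unquantized distance estimates ||x_hat - x_i|| per training sample
    cos_theta : cos(theta(x)) per training sample (precomputed once, endnote d)
    Returns (thresholds on z_i, distance codewords per bin).
    """
    z_i, r_hat, cos_theta = (np.asarray(v, float) for v in (z_i, r_hat, cos_theta))
    # Step 1: initialize the distance codewords, e.g., uniformly (EDQ-like)
    r = np.linspace(r_hat.min(), r_hat.max(), L_i)
    D_prev = np.inf
    for _ in range(max_iter):
        # Steps 2-3: regular partition of z_i via nearest distance codeword (eq. (6));
        # r_hat is monotone in z_i, so the induced bins are intervals of z_i
        assign = np.argmin(np.abs(r_hat[:, None] - r[None, :]), axis=1)
        # Step 4: per-bin centroids (eqs. (5) and (7)), then the Theorem 3 clamp
        for j in range(L_i):
            m = assign == j
            if not np.any(m):
                continue
            r_R = r_hat[m].mean()                      # eq. (5)
            r_LE = (r_hat[m] * cos_theta[m]).mean()    # eq. (7); r_LE <= r_R by (11)
            r[j] = np.clip(r[j], min(r_LE, r_R), max(r_LE, r_R))
        # Steps 5-6: average distance error and relative-improvement stopping test
        D = np.mean((r_hat - r[assign]) ** 2)
        if np.isfinite(D_prev) and (D_prev - D) / max(D, 1e-12) < eps:
            break
        D_prev = D
    # Express the final partition as thresholds on z_i (bin changes along sorted z_i)
    order = np.argsort(z_i)
    a_sorted = assign[order]
    edges = [z_i[order][k] for k in range(1, len(z_i)) if a_sorted[k] != a_sorted[k - 1]]
    return np.array(edges), r
```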

Note that the quantizer design is performed off-line using a training set that is generated based on (1) and the source distribution p(x); thus, the quantizer training phase makes use of information about all sensors, but when the resulting quantizers are actually used, each sensor quantizes the information available to it independently. A discussion of the robustness of our quantizer to mismatches of the sensor model parameters is also left for Section 4.

3.2 Analysis of computational complexity

In this section, we discuss the computational complexity of the proposed algorithm by comparing it with the previous work in [17]. Clearly, once $\hat{r}_i = \|\hat{x} - x_i\| = \|g(z_1^M) - x_i\|$, $x \in S$, is obtained in step 1, there is no need to perform the MMSE estimation, the most computationally expensive operation, inside the iterative loop (steps 2 to 6), thereby facilitating a much faster design process. In contrast, the previous work seeks to minimize the localization error directly by finding, at each iteration, the weight λ that guarantees convergence of the cost function and a nonincreasing localization error. This approach may require relatively few iterations to design the quantizers but suffers from increased design complexity due to the repeated MMSE estimation.

In this analysis, we express the computational complexity in terms of the time taken to design the quantizers, which is determined by the product of the average number of iterations needed to produce the resulting quantizers and the average execution time per iteration. Notice that as long as there is no substantial difference in the average number of iterations, the execution time per iteration is the decisive factor in this comparison. Furthermore, the execution time per iteration is dominated by the MMSE estimation, which becomes more computationally demanding as the rate $R_i$ and/or the number of sensors $M$ increases. Thus, the analysis demonstrates a significant advantage over the previous work in terms of computational complexity: for example, our experiments show that the proposed algorithm runs about 10 times faster than the previous work for the case of $M = 5$, $R_i = 3$. For detailed numerical results, refer to Section 4.1.

4 Simulation results

In this section, we denote by functional localization-specific quantizer (FLSQ) the quantizer designed using the algorithm proposed in Section 3.1 and assume that each sensor uses the same dynamic range for all quantizers (uniform quantizer, Lloyd quantizer, the localization-specific quantizer (LSQ) proposed in [17], and FLSQ). We design FLSQs with the equally distance-divided quantizer (EDQ) initialization^e introduced in [15, 17], using a training set generated from a uniform distribution of source locations and the model parameters $a = 50$, $\alpha = 2$, $g_i = 1$, and $\sigma_i^2 = \sigma^2 = 0$. We also design Lloyd quantizers from the same training set using different initializations for comparison: the Lloyd quantizers initialized with a uniform quantizer and with EDQ are denoted by Lloyd Q and Lloyd Q_EDQ, respectively. In our experiments, we consider a sensor network where M (= 3, 4, 5) sensors are deployed in a 10 × 10 m² sensor field. Extensive simulation is conducted to compare the effectiveness of the different design algorithms and to investigate the sensitivity to parameter perturbation, variation of noise level, and unknown source signal energy. Finally, a larger sensor network is also considered, since typical sensor networks involve many sensor nodes in large sensor fields. For evaluation, the localization error $E(\|x - \hat{x}_Q\|^2)$ is computed using MMSE estimation, except where otherwise stated.

4.1 Effectiveness of design algorithms

In this experiment, 100 different sensor configurations are generated for M = 3, 4, 5. For each configuration, uniform quantizers, Lloyd Q, Lloyd Q_EDQ, LSQ, and FLSQ are designed for $R_i$ = 2, 3, 4 and evaluated on a test set of 1,000 source locations generated from $p(x)$, the model parameters, and the source signal energy assumed during quantizer design. $E(\|x - \hat{x}_Q\|^2)$ is computed for each configuration and averaged over the 100 configurations. The localization results for the various designs are illustrated in Figure 1. Note that LSQ is also designed with EDQ initialization. As expected, our proposed design provides greatly improved performance over traditional quantizers because FLSQ makes full use of the correlation among sensor readings and the sensor location information, whereas the other quantizers except LSQ are designed without taking this useful information into account. It can also be noted that FLSQ and LSQ provide similar localization performance although they take completely different design approaches. As mentioned in Section 3.2, our design algorithm runs much faster than LSQ due to its simplified design process. In the experiments, the proposed algorithm requires almost twice as many iterations on average as the previous work [17], but the execution time per iteration is about 20 times (or over 40 times) shorter for the case of M = 5 and $R_i$ = 3 (or M = 5 and $R_i$ = 4). Thus, the proposed algorithm operates about 10 times (or over 20 times) faster than the previous work, and this advantage becomes more pronounced with increased rate and/or number of sensors. In addition, Lloyd Q_EDQ performs poorly even with the good EDQ initialization, implying that typical standard designs are not suitable for our localization system regardless of initialization.

Figure 1. Comparison of FLSQ with different design algorithms. The average localization error is plotted vs. the number of bits, $R_i$, assigned to each sensor with M = 5 (left) and vs. the number of sensors, M, with $R_i$ = 3 bits (right).

4.2 Sensitivity analysis of design algorithms

We first investigate how the performance of the proposed design algorithm is affected by model parameter perturbation. Although the quantizers are designed under the assumption that the source energy is known (a = 50) and the sensor readings are noiseless (σ = 0), we further examine the design algorithms (uniform Q, Lloyd Q, and FLSQ) to understand how sensitive the localization results are to the presence of measurement noise and to unknown source energy a. In the experiments where the source energy is unknown, the dynamic range for quantizer design is extended to accommodate sensor readings generated from a source energy a randomly drawn from $[a_{\min}, a_{\max}] = [0, 100]$ (see Section 2).

4.2.1 Sensitivity of FLSQ to parameter perturbation

For each of 100 different five-sensor configurations, we design FLSQ with $R_i$ = 3 and modify one of the parameters with respect to what was assumed during quantizer training in order to generate a test set of 1,000 source locations with a = 50, under both a uniform distribution and a normal distribution of source locations. In this setup, FLSQ is tested under various types of mismatch conditions. It is assumed that the true parameters can be estimated at the fusion node and used for localization. The simulation results are tabulated in Table 1. FLSQ shows robust performance in mismatch situations where the parameters used in quantizer design differ from those characterizing the simulation conditions.

Table 1 Sensitivity to parameter perturbation

4.2.2 Sensitivity of design algorithms to noise level and unknown source energy

For each of 100 different five-sensor configurations, Lloyd quantizers are designed with multiple initializations for comparison with FLSQs designed for $R_i$ = 3. We first investigate the sensitivity to noise level by generating, for each configuration, a test set of 1,000 source locations with a = 50 and signal-to-noise ratio (SNR) in the range from 20 to 80 dB, obtained by varying σ^f. Note that the localization error $E(\|x - \hat{x}_Q\|^2)$ is obtained by using the maximum a posteriori (MAP)-based algorithm proposed in [22] for faster computation and is averaged over 100 configurations. Figure 2 demonstrates that FLSQ performs better in all cases.

Figure 2. Sensitivity to noise level. The average localization error is plotted vs. SNR (dB) with M = 5, $R_i$ = 3, and a = 50. The SNR ranges from 20 to 80 dB, obtained by varying σ.

We also examine how the design algorithms are affected by unknown source energy. For each configuration, a test set of 1,000 source locations is generated with σ = 0.05 and SNR ranging from 20 to 70 dB, obtained by varying the source energy. Figure 3 demonstrates that our proposed algorithm performs well compared with the Lloyd quantizers regardless of their initializations, some of which are considered efficient initializations for our localization system.

Figure 3. Sensitivity to unknown signal energy. The average localization error is plotted vs. SNR (dB) with M = 5, $R_i$ = 3, and σ = 0.05. An SNR of 20 to 70 dB is obtained by varying the source energy.

4.3 Performance analysis in a larger sensor network: comparison with traditional quantizers

In this experiment, 50 different sensor configurations in a larger sensor field, 20 × 20 m², are generated for M = 12, 16, 20. For each sensor configuration, FLSQs are designed with a rate of $R_i$ = 3 and compared with the standard designs in Figure 4. As expected, FLSQ outperforms the typical quantizer designs in large sensor networks. Comparison with the experiment in the 10 × 10 m² sensor network (M = 3, 4, 5) also shows that better performance is achieved in larger sensor networks even at the same sensor density; note that the sensor density for M = 20 in the 20 × 20 m² field is 20/(20 × 20) = 0.05 sensors/m², equal to that for M = 5 in the 10 × 10 m² field. This is because localization performance degrades around the edges of the sensor field, and in a larger field a relatively smaller fraction of source locations lies near the edge, compared to a smaller field with the same sensor density.

Figure 4. Evaluation in a larger sensor network. Average localization error (m) vs. number of sensors (M = 12, 16, 20) in a larger sensor field, 20 × 20 m². FLSQ is designed with $R_i$ = 3 and compared with typical designs.

5 Conclusions

In this paper, we have proposed an iterative functional quantizer design algorithm for source localization in sensor networks. Exploiting the monotonicity of the distance in each sensor reading, we have suggested the average distance error as the metric for functional quantization, so that regular quantization partitions can be constructed and the metric is iteratively reduced in the generalized Lloyd algorithm framework. Since the goal is to design independent regular quantizers that minimize the localization error, we have proved that the localization error is also reduced at each iteration by updating the codewords with the proposed technique. Our proposed algorithm was shown to perform well in comparison with typical standard designs, to operate fast thanks to its simple structure, and to be robust to mismatches of the sensor model parameters. In the future, we will work on an extension of this algorithm that admits nonregular quantization partitions to achieve further improvement.

Endnotes

^a In [17], a high computational complexity is inevitable because a search for λ, including the MMSE estimation, is conducted iteratively at each step.

^b In this paper, we assume that the M sensors are activated prior to the localization process. However, selecting the best set of sensors for localization accuracy would be important in order to improve the system performance under a limited energy budget [23–25].

^c In this work, the nonlinear MMSE estimator is used for quantizer design.

^d $\hat{r}_i^j$ is initially given by $\left( g_i a / \hat{z}_i^j \right)^{1/\alpha}$ from (1) whenever $\alpha_i(z_i(x)) = Q_i^j$ (i.e., $\hat{z}_i = \hat{z}_i^j$), and $\cos\theta(x)$, $x \in S$, is computed once in this step for $\hat{r}_{LE,i}^j$.

^e The EDQ is designed simply by dividing the dynamic range of the distance uniformly. The EDQ design is verified through simulations in [15, 17], showing that EDQ can be used as an efficient initialization for quantizer design because of its good localization performance.

^f Note that the SNR, computed as $10\log_{10}(a^2/\sigma^2)$, is measured at 1 m from the source, and for practical vehicle targets it is often much higher than 40 dB. A typical value of the measurement noise variance $\sigma^2$ is $0.05^2$, which with $a = 50$ gives $10\log_{10}(50^2/0.05^2) = 60$ dB [19, 21].

Abbreviations

ML: Maximum likelihood
CRB: Cramér-Rao bound
MMSE: Minimum mean square error
LE: Localization error
EDQ: Equally distance-divided quantizer
FLSQ: Functional localization-specific quantizer
LSQ: Localization-specific quantizer
MAP: Maximum a posteriori
SNR: Signal-to-noise ratio

References

1. Zhao F, Shin J, Reich J: Information-driven dynamic sensor collaboration for target tracking. IEEE Signal Process. Mag. 2002, 19(2):61-72. doi:10.1109/79.985685

2. Niu R, Varshney PK: Target location estimation in wireless sensor networks using binary data. In The 38th Annual Conference on Information Sciences and Systems. Princeton; 17-19 Mar 2004.

3. Niu R, Varshney PK: Target location estimation in sensor networks with quantized data. IEEE Trans. Signal Process. 2006, 54(12):4519-4528.

4. Longo M, Lookabaugh TD, Gray RM: Quantization for decentralized hypothesis testing under communication constraints. IEEE Trans. Inf. Theory 1990, 36(2):241-255. doi:10.1109/18.52470

5. Flynn TJ, Gray RM: Encoding of correlated observations. IEEE Trans. Inf. Theory 1987, 33(6):773-787. doi:10.1109/TIT.1987.1057384

6. Hashlamoun WA, Varshney PK: Near-optimum quantization for signal detection. IEEE Trans. Commun. 1996, 44(3):294-297. doi:10.1109/26.486322

7. Lam W, Reibman A: Design of quantizers for decentralized estimation systems. IEEE Trans. Commun. 1993, 41(11):1602-1605. doi:10.1109/26.241739

8. Marano S, Matta V, Willett P: Asymptotic design of quantizers for decentralized MMSE estimation. IEEE Trans. Signal Process. 2007, 55(11):5485-5496.

9. Misra V, Goyal VK, Varshney LR: High-resolution functional quantization. In Data Compression Conference. Snowbird; 25-27 Mar 2008:113-122.

10. Shen X, Zhu Y, You Z: An efficient sensor quantization algorithm for decentralized estimation fusion. Automatica 2011, 47:1053-1059. doi:10.1016/j.automatica.2011.01.082

11. Wernersson N, Karlsson J, Skoglund M: Distributed quantization over noisy channels. IEEE Trans. Commun. 2009, 57:1693-1700.

12. Fleming M, Zhao Q, Effros M: Network vector quantization. IEEE Trans. Inf. Theory 2004, 50(8):1584-1604. doi:10.1109/TIT.2004.831832

13. Huang Y, Hua Y: Multihop progressive decentralized estimation in wireless sensor networks. IEEE Signal Process. Lett. 2007, 14(12):1004-1007.

14. Shah S, Beferull-Lozano B: In-network iterative distributed estimation for power-constrained wireless sensor networks. In IEEE International Conference on Distributed Computing in Sensor Systems. Hangzhou; 16-18 May 2012:239-246.

15. Kim YH, Ortega A: Quantizer design for source localization in sensor networks. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Philadelphia; 18-23 Mar 2005.

16. Kim YH, Ortega A: Quantizer design and distributed encoding algorithm for source localization in sensor networks. In IEEE International Symposium on Information Processing in Sensor Networks (IPSN). Los Angeles; 25-27 Apr 2005.

17. Kim YH, Ortega A: Quantizer design for energy-based source localization in sensor networks. IEEE Trans. Signal Process. 2011, 59(11):5577-5588.

18. Kim YH, Ortega A: Distributed encoding algorithms for source localization in sensor networks. EURASIP J. Adv. Signal Process. 2010, 2010:781720. doi:10.1155/2010/781720

19. Li D, Hu YH: Energy-based collaborative source localization using acoustic microsensor array. EURASIP J. Appl. Signal Process. 2003, 2003:321-337. doi:10.1155/S1110865703212075

20. Hero AO, Blatt D: Sensor network source localization via projection onto convex sets (POCS). In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Philadelphia; 18-23 Mar 2005.

21. Liu J, Reich J, Zhao F: Collaborative in-network processing for target tracking. EURASIP J. Appl. Signal Process. 2003, 2003:378-391. doi:10.1155/S111086570321204X

22. Kim YH, Ortega A: Maximum a posteriori (MAP)-based algorithm for distributed source localization using quantized acoustic sensor readings. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Toulouse; 14-19 May 2006.

23. Isler V, Bajcsy R: The sensor selection problem for bounded uncertainty sensing models. In IEEE International Symposium on Information Processing in Sensor Networks (IPSN). Los Angeles; 25-27 Apr 2005.

24. Wang H, Yao K, Pottie G, Estrin D: Entropy-based sensor selection heuristic for target localization. In IEEE International Symposium on Information Processing in Sensor Networks (IPSN). Berkeley; 26-27 Apr 2004.

25. Kim YH: Quantization-aware sensor selection for source localization in sensor networks. Int. J. KIMICS 2011, 9(2):155-160.


Acknowledgements

This study was supported by a research fund from Chosun University, 2012.

Author information

Correspondence to Yoon Hak Kim.

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Kim, Y.H. Functional quantizer design for source localization in sensor networks. EURASIP J. Adv. Signal Process. 2013, 151 (2013). https://doi.org/10.1186/1687-6180-2013-151