 Research
 Open Access
Conditional downsampling for energy-efficient communications in wireless sensor networks
 Joan Enric Barceló-Lladó^{1},
 Antoni Morell^{1} and
 Gonzalo Seco-Granados^{1}
https://doi.org/10.1186/1687-6180-2013-101
© Barceló-Lladó et al.; licensee Springer. 2013
 Received: 14 January 2013
 Accepted: 30 April 2013
 Published: 10 May 2013
Abstract
This paper deals with the power limitations in a wireless sensor network scenario. Concretely, we propose to use a conditional downsampling encoder (CDE) at the sensing nodes as an energy-efficient solution for the communication problem. It exploits the knowledge of the signal structure, which is assumed to be time-correlated, in order to decrease the sampling rate and hence reduce the number of transmissions within the network. We analytically assess the performance of the CDE in terms of quadratic distortion, from which we derive closed-form expressions when it is combined with one of two decoders: the step decoder and the predictive decoder. Moreover, we propose two methodologies to design the CDE in order to guarantee a given coding rate. We also compare the CDE, both analytically and experimentally, with other classical decimator techniques, namely the deterministic downsampling encoder and the probabilistic downsampling encoder. Numerical simulations validate our analytical results. Moreover, we compare the obtained quadratic distortion and draw conclusions about the capabilities of the studied encoding-decoding schemes.
Keywords
 Mean Square Error
 Wireless Sensor Network
 Fusion Center
 Observation Vector
 Rate Distortion
1 Introduction
1.1 Motivation and previous work
Wireless sensor network (WSN) design is currently one of the most challenging topics in the communications field. In particular, WSNs are severely energy-constrained because they consist of many small, cheap, and power-limited nodes, whose batteries cannot be recharged in most cases. Hence, the application of energy-efficient algorithms turns out to be crucial.
Following this motivation, many energy-efficient strategies can be found that mitigate the energy costs and hence increase the lifetime of the WSN. Without aiming to be exhaustive, we point out some examples:

Energy-aware routing for cooperative WSNs and ad hoc networks [1, 2]. These techniques seek the optimum path that minimizes the total spent energy in multi-hop WSNs.

Signal processing techniques for minimum-power distributed transmission schemes [3, 4]. Using distributed beamforming techniques, the nodes can decrease the transmitted power at the same time that they increase the total throughput of the network.

Data-aware techniques to reduce energy by efficient information processing [5, 6]. By means of signal processing techniques, the network exploits the inherent structure and properties of the measured signal in order to sample the data and therefore reduce the associated energy costs.
Our study falls in the third category and may be complementary to the other approaches. Actually, we propose to encode the sensed data by removing redundancy in the time domain. Many transmission schemes use non-causal transmissions such as block coding. In these cases, the source collects a number of contiguous time samples in order to compress them by removing part of (or all) the redundancy among them. Within this group of encoder-decoder pairs, a large number of different techniques can be found. Although these schemes are very appropriate for high-rate transmissions and/or delay-tolerant communications, such non-causal transmission schemes may not be applicable in some scenarios because block transmissions are not always allowed due to delay constraints and/or low symbol rates of the source.
For delay-sensitive applications such as real-time monitoring in WSNs, where the reconstruction of the signal must take place at the same time instant as the corresponding input measurement, causal source codes are more convenient. A source code is said to be causal if the n th decoded sample depends on the input signal only through its first n components or, in other words, depends on the past and present inputs but not on future ones. Quantizers, delta modulators, differential pulse code modulators, and adaptive versions of these are all causal in the above sense. The basic properties of causal source codes were introduced in 1982 in [7], and related works have extended them since. The work in [8] extends the general results of [7] to the case where side information, i.e., extra information that is correlated with the source, is available at the encoder and the decoder. In addition, a causal source code is called a zero-delay or sequential code if both the encoder and the decoder are causal (note that the definition of a causal source code imposes causality only at the decoder) [9, 10].
In the literature, there are several zero-delay coding systems. One of the most common is the well-known differential pulse code modulation (DPCM). In a nutshell, the current sample to be coded is predicted from previously coded samples. This prediction is used as a reference and is compared with the current sample; hence, the output of the encoder is the prediction error. The inverse operation takes place at the decoder side. According to [11], DPCM was first introduced in a US patent by C. Cutler in 1952. Since then, many results have appeared. In particular, the autoregressive (AR) model has received special attention in the study of zero-delay coding schemes. Some of the early works on AR models date back to the 1960s. The works in [12, 13] analyze the quadratic rate distortion of DPCM (the reader can find an extended description of rate distortion in Chapter 13 of [14]). The work in [15] extends these results assuming a Gaussian distribution of the prediction error. Other works proposed algorithms for non-uniform quantizers optimized in order to minimize the rate distortion [16].
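As background, the DPCM loop just described can be sketched in a few lines. This is a hedged illustration, not the exact scheme of [12, 13]: the first-order predictor coefficient `rho` and uniform quantizer step `step` are illustrative choices. The key structural point is that the encoder predicts from the *reconstructed* samples, so encoder and decoder stay synchronized and the per-sample reconstruction error is bounded by half the quantizer step.

```python
import numpy as np

def dpcm(x, rho, step):
    """First-order DPCM sketch: quantize the prediction error with a
    uniform quantizer; the encoder mirrors the decoder by predicting
    from the reconstructed samples, keeping both sides synchronized."""
    recon = np.empty_like(x)
    codes = []
    prev = 0.0
    for n in range(len(x)):
        pred = rho * prev                  # linear prediction
        e = x[n] - pred                    # prediction error
        q = int(np.round(e / step))        # quantizer index (what is sent)
        codes.append(q)
        recon[n] = pred + q * step         # decoder-side reconstruction
        prev = recon[n]
    return codes, recon

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 500)
codes, recon = dpcm(x, rho=0.9, step=0.1)
# per-sample reconstruction error is at most step/2
print(np.max(np.abs(x - recon)) <= 0.05 + 1e-9)
```

Note that, unlike the downsampling encoders studied in this paper, this loop emits one code per input sample, i.e., it does not change the sample rate.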
Later works, such as the one in [17], particularize the results obtained for the DPCM to the case of low bit rates, where the system performance degrades. The classic DPCM encoder is then modified in order to achieve better performance in terms of rate distortion in the low-bit-rate regime.
Recent works in this field have tried to unify the theoretical limits of the DPCM (and other zero-delay schemes) for AR models with other information-theoretic concepts. The authors in [11] provide analytical results on the duality between the rate distortion of an AR process and the capacity of inter-symbol interference channels. By contrast, other works such as [9] follow an information-theoretic approach that tightens the upper and lower bounds on the rate distortion of generic zero-delay schemes using the mutual information as a measure of the achievable rate.
1.2 Our contribution
Our proposed work also follows the sequential transmission approach exposed above. Concretely, the approach of this paper is similar to that of [6], where the authors seek the optimal sampling in a WSN scenario with correlated sources. However, we present the problem from a more realistic energy-efficient perspective. According to the results in the literature about energy consumption in sensor networks [18], the main source of energy expenditure in a sensor is the power dedicated to keeping the sensor awake. Concretely, most energy is consumed by the elements of the front-end [19]. Therefore, our goal is to reduce the total number of transmissions in order to keep the sensors in sleep mode as long as possible.
Note that for the complete characterization of the performance of real communication systems, several metrics should be evaluated, e.g., the robustness against noise in terms of the signal-to-noise ratio, the quantization error as a function of the codification scheme, or the bit error rate related to a selected modulation. However, in this paper, we only focus on the study of the downsampling distortion (see Section 2.2) as a figure of merit of the quadratic reconstruction error introduced by a downsampling technique at the fusion center. The study of other performance metrics, although interesting, is beyond the scope of this paper.
In particular, we study downsampling techniques in which the samples of an input signal are either blocked or transmitted following a given criterion. For that purpose, we propose a downsampling encoding scheme called the conditional downsampling encoder (CDE). A CDE benefits from the existing time correlation in the measured signal in order to sequentially build the decimation pattern. Typically, the readings in WSNs are space-time-correlated, and hence, strategies in the two domains can potentially improve the accuracy of the signal recovered at the receiver side. However, note that exploiting not only the time correlation but also the space correlation at the sensing nodes would require intensive inter-node communication. Since this approach would penalize the network in terms of signaling, complexity, and energy consumption, we have discarded it. Basically, the CDE predicts the current sample using a linear estimator and takes this prediction as a reference. The transmission is then blocked if the prediction error does not exceed a given threshold; otherwise, the sample is transmitted. Clearly, a key step of the CDE design is to determine the threshold that ensures a sample rate reduction by a factor γ. Therefore, two different threshold designs are proposed in this paper.
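A minimal sketch of this blocking rule, assuming an AR(1) input and a fixed threshold Δ (the function and variable names are ours, not the paper's; for an AR(1) source, the linear prediction t steps after the last transmission reduces to ρ^t times the last transmitted sample):

```python
import numpy as np

def cde_encode(x, rho, delta):
    """Conditional downsampling encoder (sketch): predict the current
    sample from the last transmitted value and transmit only when the
    magnitude of the prediction error reaches the threshold delta.
    Returns the transmission support g(n) (True = transmit)."""
    g = np.zeros(len(x), dtype=bool)
    g[0] = True                       # always send the first sample
    last, t = x[0], 0
    for n in range(1, len(x)):
        t += 1
        pred = (rho ** t) * last      # predictor shared with the decoder
        if abs(x[n] - pred) >= delta:
            g[n] = True               # prediction too poor: transmit
            last, t = x[n], 0
    return g

x = np.array([0.0, 1.0, -1.0, 2.0])
print(cde_encode(x, rho=0.9, delta=100.0).sum())  # huge threshold: only 1 sent
```

A larger Δ blocks more samples (lower coding rate γ); designing Δ to hit a target γ is exactly the problem addressed in Section 4.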
Clearly, the CDE presents some similarities with the DPCM in the sense that both schemes use (linear) prediction as a reference in order to encode the input signal. However, they present important differences as well, which can be summarized as follows:

A DPCM produces an output sample for each input sample. In other words, it does not change the sample rate. On the contrary, the CDE (and also the deterministic downsampling encoder (DDE) and the probabilistic downsampling encoder (PDE)) reduces the sample rate. This behavior is very convenient in energy-constrained scenarios, such as WSNs, since the total number of transmissions is reduced by a factor γ, increasing the energy efficiency of the network.

While a DPCM works at the symbol level, the CDE works at the sample level. Thus, the downsampling encoder-decoder schemes studied in this paper are not exclusive of the DPCM or other zero-delay coding techniques. Actually, they can be used on top of them when the signal is transmitted.
In addition, we compare the performance loss of the CDE with different encoding-decoding pairs when the number of samples is reduced by a factor γ. In particular, we study the following two downsampling criteria: (1) a DDE and (2) a PDE.
A DDE works as a decimator, i.e., it reduces the number of samples following a deterministic pattern. Hence, the DDE selects only one in γ ^{−1} samples, where γ ^{−1} is typically a natural number.
A PDE slightly differs from a common decimator since it reduces the number of samples following a probabilistic pattern, i.e., one sample will be transmitted with probability γ and otherwise blocked with probability 1−γ. This method eliminates the restriction of γ ^{−1} to be a natural number. However, we analytically show that a DDE outperforms a PDE in terms of quadratic distortion.
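The two transmission support patterns just described can be sketched as follows (a minimal illustration; the names `dde_support` and `pde_support` are ours, and `gamma_inv` and `gamma` correspond to the decimation parameters defined above):

```python
import numpy as np

def dde_support(n_samples, gamma_inv):
    """Deterministic downsampling: transmit one out of every
    gamma_inv samples (gamma_inv must be a natural number)."""
    g = np.zeros(n_samples, dtype=bool)
    g[::gamma_inv] = True
    return g

def pde_support(n_samples, gamma, seed=0):
    """Probabilistic downsampling: transmit each sample independently
    with probability gamma (no restriction on 1/gamma)."""
    rng = np.random.default_rng(seed)
    return rng.random(n_samples) < gamma

g_dde = dde_support(10_000, gamma_inv=4)  # gamma = 1/4, exact
g_pde = pde_support(10_000, gamma=1 / 4)  # gamma = 1/4, on average
print(g_dde.mean(), round(g_pde.mean(), 2))
```

The DDE hits the rate γ exactly, while the PDE only matches it in expectation; this is the flexibility/accuracy trade-off discussed above.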
On the other hand, the decoder at the fusion center recovers the original sampling rate by upsampling the signal. We study two possible decoders: (1) a step decoder (SD) and (2) a predictive decoder (PD). An SD reconstructs the missing samples by replicating the last decoded sample; this does not require any side information. On the contrary, the PD reconstructs the missing samples by linear prediction (as in the CDE case). We analytically show the improvements in terms of quadratic distortion when the samples are predicted rather than simply replicated.
Hence, we give analytical expressions for the quadratic distortion of the following downsampling encoding-decoding pairs: DDE-SD, DDE-PD, PDE-SD, and PDE-PD. Furthermore, we also provide accurate approximations for the quadratic distortion of CDE-SD and CDE-PD. Numerical simulations support our proposed analytical expressions.
1.3 Organization of the paper
The rest of the paper is organized as follows: In Section 2, we introduce the assumptions and the scenario considered throughout the paper. Section 3 presents the proposed CDE as well as the other encodingdecoding schemes under study. The analytical expressions of the downsampling distortion for the proposed CDE are detailed in Section 4. Also, two different design strategies are presented in this section. The analytical expressions of the downsampling distortion for other encodingdecoding schemes are detailed in Section 5. Simulation results are shown in Section 6. Conclusions and suggestions for future research are drawn in Section 7.
2 System model and assumptions
Let us consider a WSN configured in a star topology that monitors a given physical scalar magnitude such as temperature or humidity. The network is composed of two types of nodes: (1) a set of S sensing nodes that wirelessly transmit their measurements to (2) one fusion center that manages, gathers, and processes the measurements from the sensing nodes.
2.1 Assumptions on the signal model
The autoregression coefficient is denoted by ρ∈[0,1] and assumed to be constant during the transmission. The random process z(n) is a sequence of Gaussiandistributed and independent random variables with zero mean and variance ${\sigma}_{z}^{2}$.
Without loss of generality, we also assume that the variance of the measurement x _{ s }(n), i.e., ${\sigma}_{x}^{2}$, is equal to 1. Therefore, the variance of the noise follows directly as ${\sigma}_{z}^{2}=1-{\rho}^{2}$.
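Under these assumptions, a unit-variance AR1 sequence can be generated as follows (a sketch assuming the standard AR1 recursion x(n) = ρ x(n−1) + z(n) with z(n) ~ N(0, 1−ρ²), matching the parameters of this section):

```python
import numpy as np

def generate_ar1(n_samples, rho, seed=0):
    """Generate a unit-variance AR(1) process x(n) = rho*x(n-1) + z(n),
    with z(n) ~ N(0, 1 - rho**2) so that var(x) = 1."""
    rng = np.random.default_rng(seed)
    sigma_z = np.sqrt(1.0 - rho ** 2)
    x = np.empty(n_samples)
    x[0] = rng.normal(0.0, 1.0)   # draw from the stationary distribution
    for n in range(1, n_samples):
        x[n] = rho * x[n - 1] + rng.normal(0.0, sigma_z)
    return x

x = generate_ar1(50_000, rho=0.9)
print(round(x.var(), 2))   # close to 1.0 by construction
```

The lag-1 sample autocorrelation of the generated sequence should also be close to ρ, which gives a quick sanity check of the model parameters.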
2.2 Assumptions on the system model
Note that for simplicity, we have replaced the notation x _{ s }(n) by x(n). Furthermore, we require that the signal x(n) is transmitted in a zero-delay manner from the source to the destination. Throughout this paper, we understand by zero-delay transmission that, for each sample at time n, the receiver has a reconstruction of the signal x(n). Furthermore, at time instant n, we are no longer interested in x(n−1), so delay-tolerant strategies (such as block encoding schemes) are not feasible. Following this constraint, we look for encoders that allow us to reduce the sample rate sample-by-sample in real time.
Hence, we consider a nonlinear encoder with a coding rate γ at the sensing nodes. In our particular case, the encoder selects which samples from x(n) are going to be transmitted with a rate of γ, and the rest will be discarded. The selected samples are represented by y(n); therefore, note that y(n) is only defined for those time slots in which the encoder decides to transmit.
Moreover, we consider nonlinear decoders in order to recover an approximation of x(n), i.e., $\stackrel{~}{x}\left(n\right)$, from y(n) at the fusion center. Roughly speaking, the decoder will construct $\stackrel{~}{x}\left(n\right)$ copying the samples of y(n) when the transmission exists and predicting the rest otherwise.
Definition 1
3 Downsampling transmission schemes
3.1 Different encoding alternatives
We compare our proposed CDE with two selected downsampling encoders among many other possibilities. These are (1) the DDE and (2) the PDE. They have been chosen since they are simple and because many other strategies can be derived from them.
In order to describe the selected encoders, we first need to introduce the following definition:
Definition 2
The transmission support function of an encoder e, named g _{ e }(n), is an indicator function which takes the value 1 when the transmission exists and 0 otherwise.
3.1.1 Deterministic downsampling encoder
Note that for uniform downsampling, the DDE is only defined for compression rates γ of the form ${\gamma}^{-1}\in \mathbb{N}$.
3.1.2 Probabilistic downsampling encoder
It is straightforward to see that in order to guarantee a compression rate of γ, the value of the transmission probability p should be p=γ.
3.1.3 Conditional downsampling encoder
Although this scheme is quite simple, it has two main complications: (1) the LWF predictor assumes knowledge of the correlation parameters R and r, or at least good estimates of them, and (2) the threshold Δ should be designed in such a way that it ensures a coding rate of γ. The first problem adds some complexity to the system but can be efficiently solved using existing correlation estimators [22]. The second one is addressed later in Section 4.
3.2 Different decoding alternatives
As for the encoding strategies, we select two decoders from among many possible solutions. The first one is probably the simplest and does not require any knowledge of the correlation parameters, while the second one exploits the signal correlation in order to achieve higher prediction accuracy.
3.2.1 Step decoder
This approach is very typical when the source senses a given time-correlated phenomenon. Since the phenomenon is assumed to be slowly changing, the magnitude is held until an update is received.
3.2.2 Predictive decoder
4 Downsampling distortion of the conditional downsampling encoder
4.1 Signal prediction using incomplete observation vectors
Let the observation vector $\stackrel{~}{\mathbf{x}}\left(n\right)\in {\mathbb{R}}^{N}$, where $\stackrel{~}{\mathbf{x}}\left(n\right)={\left[\stackrel{~}{x}(n-1)\phantom{\rule{0.3em}{0ex}}\stackrel{~}{x}(n-2)\cdots \phantom{\rule{1em}{0ex}}\stackrel{~}{x}(n-N)\right]}^{T}$, be an incomplete version of x(n). The vector $\stackrel{~}{\mathbf{x}}\left(n\right)$ is constructed using the N last decoded samples. This is because the decoder does not necessarily know all the values of x(n) and only knows the decoded ones. Hence, some values of $\stackrel{~}{\mathbf{x}}\left(n\right)$ are replicas of x(n), and the rest are predicted values $\widehat{x}\left(n\right)$.
Definition 3
Theorem 1
Proof
□
Corollary 1
For a given ρ, the MSE is only a function of the position of the last true measurement in the observation vector for an AR1 process. Furthermore, it is not dependent on the dimension N of ${\stackrel{~}{\mathbf{x}}}_{t}\left(n\right)$.
Proof
□
Hence, the probability that the last true sample of the vector $\stackrel{~}{\mathbf{x}}\left(n\right)$ is in position t depends directly on the downsampling criterion used at the encoder. Therefore, in order to compute the downsampling distortion for the CDE, we need to compute the probability of occurrence of the event t or, equivalently, the probability that the observation vector $\stackrel{~}{\mathbf{x}}$ is actually ${\stackrel{~}{\mathbf{x}}}_{t}$. Next, we model the CDE problem using a Markov chain (MC).
4.2 The Markov chain solution for the incomplete observation vector case
Definition 4
and each row represents a probability distribution, so ${\sum}_{j}{\left[\mathbf{T}\right]}_{i,j}=1$.
Definition 5
where p=[P _{0} P _{1} … P _{ T−1}]^{ T } contains the probabilities of being in each state t=0,1,…,T−1 in the stationary regime of the MC process.
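The stationary probabilities P _{ t } can be obtained numerically from the transition matrix by solving p^{T} T = p^{T} together with the normalization Σ_t P_t = 1. A sketch with a toy three-state chain (the matrix below is illustrative only, not the paper's T; state 0 plays the role of "just transmitted"):

```python
import numpy as np

def stationary_distribution(T):
    """Stationary distribution of a finite Markov chain with
    row-stochastic transition matrix T: solve p^T T = p^T, sum(p) = 1,
    as an (overdetermined but consistent) least-squares system."""
    n = T.shape[0]
    A = np.vstack([T.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Toy chain mimicking a downsampler that returns to state 0 on transmission.
T = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.0, 0.7],
              [1.0, 0.0, 0.0]])
p = stationary_distribution(T)
print(np.allclose(p @ T, p))   # p is indeed stationary
```

In the downsampling context, P_0 is the stationary probability of a transmission, which is why it reappears as the quantity matched to γ in the threshold designs of Section 4.5.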
4.3 The Markov chain model for the CDE
In this section, we analytically evaluate the performance of the proposed CDE with both PD and SD decoders in terms of the downsampling distortion.
It is easy to observe that there are infinitely many solutions for the transition probabilities p _{ i,j }. Thus, we address the design and the corresponding performance in the following sections.
4.4 Approximations for the downsampling distortion of the CDEPD and CDESD
Following the scheme in (7), our aim is to design the threshold value Δ in order to guarantee that the source only transmits a fraction γ of the total samples. For the general case, we may have different values of Δ according to each state t of the MC. Therefore, we define the threshold Δ _{ t } as the threshold value applied in state t.
The condition in (7) modifies the probability density function (pdf) of the error.
Definition 6
Lemma 1
Proof
□
Definition 7
4.4.1 The pair CDEPD
The knowledge of some prior information about the signal can notably reduce the MSE at the decoder compared to other classical methods. This is because only the samples with lower MSE are predicted, i.e., the ones that satisfy $|x(n)-{\mathbf{w}}^{T}{\stackrel{~}{\mathbf{x}}}_{t}(n)|<{\Delta}_{t}$, since they introduce less noise power at the decoder.
Lemma 2
Proof
□
However, this is still an open problem because the values of P _{ t } are not determined yet. We study this issue in Section 4.5.
4.4.2 The pair CDESD
If $\widehat{x}\left(n\right)$ is constructed from a linear prediction using the LWF, the MSE in prediction is directly ${\sigma}_{z}^{2}=1-{\rho}^{2}$. However, using other strategies, the error increases, as we have seen in (18). In particular, the pair CDE-SD constructs $\widehat{x}\left(n\right)$ as the last transmitted sample, i.e., $\widehat{x}\left(n\right)=x(n-t)$. This prediction scheme introduces an error not only due to z(n) but also due to x(n).
Lemma 3
Proof
□
As for the case of the CDEPD pair, this is still an open problem, and it is studied afterwards in Section 4.5.
4.5 Design of the CDESD and the CDEPD
From the design point of view, our aim is to obtain a set of Δ _{ t }’s that assures a coding rate γ at the CDE. However, there are infinitely many solutions, as we pointed out in (24). That is why we propose two possible approaches to the design of Δ _{ t }:

Fixed Δ _{ t }, i.e., Δ _{ t }=Δ for all t.

Variable Δ _{ t } in order to maintain constant transition probabilities, i.e., p _{ t−1,t }=p for all t.
4.5.1 Fixed Δ _{ t } design
This is probably the simplest approach to design the CDE, since the encoder does not have to change the value of Δ _{ t } according to the current state: Δ _{ t }=Δ for all t.
where f _{ t }(x) is the pdf of the error at state t.
4.5.2 Variable Δ _{ t } design
This approach allows for a slightly easier computation of the values of Δ _{ t }. The main difference with the previous design scheme is that we can use the result in the following lemma:
Lemma 4
Proof
□
where ${\text{MSE}}_{0}^{\text{CDE-{SD,PD}}}=0$; hence, ${\text{MSE}}_{0}^{\text{CDE}}=1-{\rho}^{2}$ (as in (60)).
To graphically validate our design framework, we have proposed the following experiment:
Experiment 1
We have simulated the CDE-SD and the CDE-PD for γ=[1/8 1/4 1/2] and for ρ∈[0,1]. The signal has been generated following the AR1 process, with 5,000 samples for each value of ρ. We have computed the probability of transmission P _{0} obtained using our threshold design framework.
5 Downsampling distortion of other typical strategies
In order to measure the performance of the CDE, we also evaluate the performance of different encoder-decoder pairs in terms of the downsampling distortion. These are DDE-SD, DDE-PD, PDE-SD, and PDE-PD.
5.1 The pair DDE-SD
5.2 The pair DDE-PD
5.3 The pair PDE-SD
The PDE can also be modeled following the infinite MC in Figure 2. Hence, the transition matrix T _{PDE} has the same structure as T _{CDE} in (22), and the expressions (23) and (24) are valid as well. However, the rest is different.
 1.
It is the easiest solution to be implemented in practice. The source decides either to transmit or not regardless of what the current state t is.
 2.
It reduces the problem to a closedform solution.
5.4 The pair PDE-PD
6 Performance evaluation
In this section, we evaluate and compare the performance of the different encoder-decoder pairs as a function of the downsampling distortion. Moreover, we introduce an experimental evaluation in order to confirm the validity of our theoretical results. For that, we have generated a signal x(n) as a sequence of 5,000 samples using the AR1 model in (2) for different values of the autoregressive parameter ρ∈[0,1] with resolution 0.01. The results are computed for γ=[1/8, 1/4, 1/2].
6.1 The pair DDE-SD and the pair DDE-PD
Also, we compare the difference in performance according to the decoder used. The PD takes the signal correlation information into account in the decoding process, and hence, the total performance improves notably for low values of ρ. On the contrary, if ρ→1, both decoders perform similarly since x(n)−ρ ^{ t } x(n−t)≈x(n)−x(n−t).
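This behavior can be checked numerically. For a unit-variance AR1 process, the SD error x(n)−x(n−t) has mean square 2(1−ρ^t), while the PD error x(n)−ρ^t x(n−t) has mean square 1−ρ^{2t}; these are standard AR1 identities assumed here, and both coincide as ρ→1. A quick Monte Carlo sketch (parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, t, N = 0.8, 3, 200_000

# Unit-variance AR(1) samples t steps apart:
# x(n) = rho**t * x(n-t) + w,  with var(w) = 1 - rho**(2t).
x_past = rng.normal(0.0, 1.0, N)
x_now = rho ** t * x_past + rng.normal(0.0, np.sqrt(1 - rho ** (2 * t)), N)

mse_sd = np.mean((x_now - x_past) ** 2)            # step decoder: replicate
mse_pd = np.mean((x_now - rho ** t * x_past) ** 2) # predictive decoder
print(round(mse_sd, 2), round(mse_pd, 2))
# theory: SD -> 2*(1 - rho**t) = 0.976,  PD -> 1 - rho**(2*t) ~= 0.738
```

Since 1−ρ^{2t} = (1−ρ^t)(1+ρ^t) ≤ 2(1−ρ^t) for ρ∈[0,1], the PD never does worse than the SD, with the gap closing as ρ→1.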
In Figure 6, we can also graphically evaluate the impact of γ. In our scenario, the signal x(n) is downsampled by the DDE by factors of {8, 4, 2} following a uniform pattern. It is easy to see that the larger the γ, the lower the distortion. However, there exists a trade-off between the downsampling distortion and the compression rate.
6.2 The pair PDE-SD and the pair PDE-PD
6.3 The pair CDE-SD and the pair CDE-PD
Another conclusion is that the downsampling distortion is notably higher for the fixed design. This is because its transition probabilities p _{ t−1,t } increase with t, which makes it easier to reach higher states t of the MC (i.e., higher MSE_{ t }’s) with higher probability. On the contrary, the variable design concentrates the states at lower values of t.
From a practical point of view, the CDE is simpler if it follows a fixed design since the encoder only needs to know the value of Δ and also it does not need to track the current state t. However, from a computational point of view, the variable approach is simpler since it can be computed analytically, instead of numerically.
6.4 Comparison of the downsampling distortion
It can be observed that the performances of the DDE and the PDE are similar. However, the deterministic encoder works slightly better since it only uses the lowest γ ^{−1} states of the finite MC, while the PDE uses higher states that are related to higher errors. However, the main disadvantage of the DDE compared to the PDE is its lack of flexibility, since the uniform solution is only valid for natural values of γ ^{−1}. Furthermore, the PDE with uniform transition probabilities does not need to track the current state t of the process, and hence, it is simpler.
The big jump in performance is observed for the CDE. This encoder eliminates the transmission of the samples carrying the most redundant information. Thus, only the most ‘unpredictable’ samples are transmitted.
7 Conclusions
In this paper, we have evaluated the performance of different encoding-decoding strategies in order to reduce the number of transmitted samples and hence to decrease the power spent in transmission. We have presented them as an energy-efficient solution for the wireless sensor network communication problem. In particular, we define the downsampling distortion function in order to evaluate the performance, in terms of the trade-off between compression rate and distortion at the fusion center, of the combination of three downsampling encoders, which are the DDE, the PDE, and the CDE, with two decoders: the SD and the PD.
We have obtained closed-form expressions for the pairs DDE-SD, DDE-PD, PDE-SD, and PDE-PD and accurate approximations for CDE-SD and CDE-PD. Moreover, we have proposed two strategies to design the threshold of the condition in the CDE, i.e., the fixed threshold design and the variable threshold design.
The simulation results validate our theoretical results. Furthermore, we have compared the performance of the different pairs and shown the impact of taking the signal model into account in the encoding-decoding process. Hence, the pair CDE-PD (with variable threshold design) outperforms by far the rest of the studied strategies. However, extending the CDE analysis to higher-order AR models or even to other time-correlated signal models remains an open problem.
Endnotes
^{a} Notation. Boldface uppercase letters denote matrices, boldface lowercase letters denote column vectors, and italics denote scalars. (·)^{ T },(·)^{∗},(·)^{ H } denote transpose, complex conjugate, and conjugate transpose (Hermitian), respectively. [X]_{ i,j } and [x]_{ i } are the (i,j)th element of matrix X and the i th element of vector x, respectively. [X]_{ i } denotes the i th column of X. |·| is the absolute value. ∥a∥ represents the Euclidean norm of a. Let $\widehat{a}$ refer to the estimated value of variable a. $\mathbb{E}[\cdot]$ is the statistical expectation. Function erf(·) represents the error function.
^{b} The conditional variance of a continuous random variable X given the condition Y=y is defined as $\text{var}\left(X\mid Y=y\right)=\mathbb{E}[{X}^{2}\mid Y=y]=\underset{-\infty}{\overset{\infty}{\int}}{x}^{2}f\left(x\mid Y=y\right)\mathit{\text{dx}}$, where f(x∣Y=y) is the conditional pdf of X given Y=y.
^{c} It comes from the definition of the cumulative distribution function of a Gaussian variable such that ${\int}_{-\infty}^{a}f\left(x\right)\mathit{\text{dx}}=\frac{1}{2}\left(1+\text{erf}\left(\frac{a}{\sqrt{2{\sigma}_{a}^{2}}}\right)\right)$.
Declarations
Acknowledgements
This work is supported by the Spanish Government under project TEC2011-28219 and the Catalan Government under grant 2009 SGR 298.
Authors’ Affiliations
References
 Toh CK: Maximum battery life routing to support ubiquitous mobile computing in wireless ad hoc networks. IEEE Commun. Mag. 2001, 39(6):138-147. 10.1109/35.925682
 Younis O, Fahmy S: HEED: a hybrid, energy-efficient, distributed clustering approach for ad hoc sensor networks. IEEE Trans. Mobile Comput. 2004, 3(4):366-379. 10.1109/TMC.2004.41
 Mudumbai R, Brown D, Madhow U, Poor H: Distributed transmit beamforming: challenges and recent progress. IEEE Commun. Mag. 2009, 47(2):102-110.
 Zarifi K, Zaidi S, Affes S, Ghrayeb A: A distributed amplify-and-forward beamforming technique in wireless sensor networks. IEEE Trans. Signal Process. 2011, 59(8):3657-3674.
 Pradhan S, Kusuma J, Ramchandran K: Distributed compression in a dense microsensor network. IEEE Signal Process. Mag. 2002, 19(2):51-60. 10.1109/79.985684
 Sun N, Wu J: Optimum sampling in spatial-temporally correlated wireless sensor networks. EURASIP J. Wireless Commun. Netw. 2013, 2013:5. 10.1186/1687-1499-2013-5
 Neuhoff D, Gilbert R: Causal source codes. IEEE Trans. Inf. Theory 1982, 28(5):701-713. 10.1109/TIT.1982.1056552
 Weissman T, Merhav N: On causal source codes with side information. IEEE Trans. Inf. Theory 2005, 51(11):4003-4013. 10.1109/TIT.2005.856978
 Derpich M: Improved upper bounds to the causal quadratic rate-distortion function for Gaussian stationary sources. IEEE Trans. Inf. Theory 2012, 58(5):3131-3152.
 Viswanathan H, Berger T: Sequential coding of correlated sources. IEEE Trans. Inf. Theory 2000, 46:236-246. 10.1109/18.817521
 Zamir R, Kochman Y, Erez U: Achieving the Gaussian rate-distortion function by prediction. IEEE Trans. Inf. Theory 2008, 54(7):3354-3364.
 O'Neal JB: Delta modulation quantizing noise: analytical and computer simulation results for Gaussian and television input signals. Bell Syst. Tech. J. 1966, 45:117-141.
 Protonotarios EN: Slope overload noise in differential pulse code modulation systems. Bell Syst. Tech. J. 1967, 46:2119-2161.
 Cover TM, Thomas JA: Elements of Information Theory. New York: Wiley; 1991.
 O'Neal JB: Signal-to-quantizing-noise ratio for differential PCM. IEEE Trans. Commun. Technol. 1971, 19:568-570. 10.1109/TCOM.1971.1090668
 Farvardin N, Modestino J: Rate-distortion performance of DPCM schemes for autoregressive sources. IEEE Trans. Inf. Theory 1985, 31(3):402-418. 10.1109/TIT.1985.1057040
 Guleryuz O, Orchard M: On the DPCM compression of Gaussian autoregressive sequences. IEEE Trans. Inf. Theory 2001, 47(3):945-956. 10.1109/18.915650
 Rugin R, Conti A, Mazzini G: Experimental investigation of the energy consumption for wireless sensor network with centralized data collection scheme. In Proceedings of the 15th International Conference on Software, Telecommunications and Computer Networks (SoftCOM 2007), Split-Dubrovnik, 27-29 Sept 2007:1-5.
 Wang Q: Traffic analysis, modeling and their applications in energy-constrained wireless sensor networks: on network optimization and anomaly detection. Mid Sweden University, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-10690. Accessed 15 July 2012.
 Hashimoto T, Arimoto S: On the rate-distortion function for the nonstationary Gaussian autoregressive process. IEEE Trans. Inf. Theory 1980, 26(4):478-480.
 Haykin S: Adaptive Filter Theory. Upper Saddle River: Prentice Hall; 2001.
 Barcelo-Llado J, Morell A, Seco-Granados G: Enhanced correlation estimators for distributed source coding in large wireless sensor networks. IEEE Sensors J. 2012, 12(9):2799-2806.
 Rosenbaum S: Moments of a truncated bivariate normal distribution. J. R. Stat. Soc. Ser. B (Methodological) 1961, 23(2):405-408.
 Manjunath BG, Wilhelm S: Moments calculation for the double truncated multivariate normal density. Social Science Research Network, 2009. http://dx.doi.org/10.2139/ssrn.1472153. Accessed 20 Aug 2012.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.