
Cross-layer design for decentralized detection in WSNs

Abstract

A wireless sensor network (WSN) deployed for detection applications has the distinguishing feature that the sensors cooperate to perform the detection task. Therefore, the decoupled and maximum throughput design approaches typically used to design communication networks do not lead to the desired optimal detection performance. Recent work on decentralized detection has addressed the design of media access control (MAC) and routing protocols for detection applications by considering independently the quality of information (QoI), channel state information (CSI), and residual energy information (REI) for each sensor. However, little attention has been given to integrating the three quality measures (QoI, CSI, and REI) in the system design. In this work, we present a cross-layer approach to design a QoI, CSI, and REI-aware transmission control policy (XCP) that coordinates communication between local sensors and the fusion center, in order to maximize the detection performance. We formulate and solve a constrained non-linear optimization problem to find the optimal XCP design variables, for both ALOHA and time-division multiple access (TDMA) sensor networks. We show the detection performance gain compared to the typical decoupled and maximum throughput design approaches, without utilizing additional network resources. We compare the ALOHA and TDMA MAC schemes and show the conditions under which each transmission scheme outperforms the other.

1 Introduction

The deployment of wireless sensor networks (WSNs) in decentralized detection applications is motivated by the availability of low-cost sensors with computational capabilities, combined with advances in communication network technologies. In decentralized detection (DD), multiple sensors collaborate to distinguish between two or more hypotheses, and the classical problem is to find the local sensor detection strategies (quantization rules) to minimize a system-wide cost function using different network topologies and channel models [1, 2]. Despite the fact that this classical problem is insightful, current research on detection using modern WSNs has shifted the focus away from this classical quantization problem for two main reasons: (1) performance loss due to quantization decays rapidly with the number of information bits in the packet payload [3, 4], and (2) the payload of a packet could be considered large enough to represent local sensor information with adequate accuracy, as additional bits in the payload are unlikely to affect power or delay, given the relatively large packet overhead [5, 6] (e.g., IEEE 802.15.4 standard has a minimum of 9 bytes for the medium access control (MAC) overhead [7]). On the other hand, the deployment of WSNs in detection applications brings new challenges to the field. In addition to the design of signal processing algorithms at the application layer that has been previously addressed [8], protocols for other communication layers have to be optimized to maximize the detection performance.

The layered approach commonly adopted to design wireless networks may not be appropriate for detection applications. Although the layered approach provides simplicity in the design due to the decoupling of system layers, it neither provides the optimal resource allocation nor exploits the application domain knowledge. As an example, throughput is a common performance metric used to design media access control protocols. In DD applications, maximizing the throughput is not the prime objective; rather, the prime objective is to maximize the quality of the received information, which yields the best detection performance. Accordingly, a cross-layer design approach is desired for efficient implementation of WSNs in decentralized detection applications.

In this paper, we pursue a cross-layer approach to design a WSN deployed for detection applications. We integrate the physical, MAC, and detection application layers in one unified model that captures different sensor quality measures. In making our modeling choices, we are motivated by the desire to develop a system model that captures the basic features of practical sensor networks while being amenable to analysis. Specifically, we make the following design assumptions:

  1. Digital transmission. Although uncoded analog transmission is optimal in a sensor network under certain conditions (see, e.g., [9]), digital transmission is still the choice for cost-effective, commercial off-the-shelf deployments of sensor network applications.

  2. Slotted ALOHA/time-division multiple access (TDMA) MAC. The traditional assumption of a dedicated orthogonal channel between each sensor node and the fusion center may not be feasible in practice. On the other hand, random access techniques and TDMA are frequently used MAC protocols. Therefore, tuning of the protocol parameters to optimize the detection performance can be done in practice without a need to redesign the system.

  3. Single hop networks. We focus on the case where sensor nodes cannot communicate with each other to form a multihop network to the fusion center, e.g., radio-frequency identification (RFID) sensors communicating to an RFID reader. Preliminary results for multihop tree networks are presented in [10, 11].

The rest of the paper is organized as follows: Section 2 summarizes the related work. Section 3 presents the detection problem formulation for WSNs. Section 4 explains the derivation of the system model. Section 5 presents the solution of the optimization problem. Section 6 presents the performance comparison between the proposed design approach and the classical design approaches. Section 7 presents a performance comparison between the TDMA and ALOHA networks. The work is concluded in Section 8.

2 Related work

Early work on cross-layer design has focused on the design of channel-aware decentralized detection schemes [12]. More recent work on channel-aware design considers sequential detection schemes [13]. The cross-layer design approach has recently been explored for the design of MAC and routing protocols for detection applications. Cooperative MAC, where individual sensor transmissions are superimposed in a way that allows the fusion center to extract the relevant detection information, is considered in [14]. This approach leads to significant gains in performance when compared to conventional architectures allocating different orthogonal channels for each sensor. However, technical issues such as symbol and phase synchronization have to be taken into account for practical implementations [5, 15]. Data-centric MAC, where existing protocols are tuned/modified for optimal performance, represents a viable alternative to cooperative MAC and, therefore, has gained considerable attention recently. Decision fusion over slotted ALOHA MAC employing a collision resolution algorithm is studied in [16], where the objective is to analyze the performance, rather than to design the MAC layer, in order to optimize the detection performance. A more thorough investigation of the design of MAC transmission policies to minimize the error probability has been considered in [17], where the system model includes the MAC and detection application layers, excluding the physical channel model, and assuming a stochastic MAC policy. Although a stochastic transmission policy results in performance gains compared to deterministic policies, the extension of this framework to include the physical channel model is challenging mathematically.

The integration of the channel model and the MAC layer in the context of distributed estimation has been considered in [18], where analog transmission of sensor data is assumed. The cross-layer approach is also considered in [6], where an integrated model for the physical channel and the queuing behavior for sensors is developed. The design problem is to choose the code rate and the number of sensors to minimize the error probability for a frequency-division multiple access (FDMA) system, where orthogonal channels are used between sensors and the fusion center.

Routing for decentralized detection has been considered separately from the MAC design problem. Energy-efficient routing for signal detection in WSNs is considered in [19], where the objective is to find the optimal route for local data from a target location to the fusion center. Cooperative routing for distributed detection in large sensor networks is studied in [20] using a link metric that characterizes the detection error exponent. For a survey on the interplay between signal processing and networking in sensor networks, see [21] and the references therein.

We summarize the contributions of this paper, as compared to existing literature, in the following main points:

  1. Integrated model for the detection system. We integrate the physical layer, MAC layer, and the detection application layer in one unified system model.

  2. Integration of different sensor quality measures. The model captures three sensor quality measures, namely, quality of information (QoI), channel state information (CSI), and residual energy information (REI). In addition, delay for detection and network lifetime are considered as additional design constraints.

  3. Design of a complete transmission control policy. We design an optimal transmission control policy that includes not only the transmission probabilities but also the communication rate and the energy allocation for each sensor. The authors are not aware of prior work on cross-layer design for detection applications that develops an integrated model capturing different communication layers and several sensor quality measures while simultaneously considering different transmission design parameters as well as the delay for detection.

  4. Non-asymptotic analysis. We assume a finite number of sensor nodes and do not resort to asymptotic analysis as commonly adopted in detection studies. Therefore, the analysis results are applicable to both small-scale and large-scale sensor networks.

  5. Enhanced detection performance. Without additional resources, the proposed design approach outperforms the classical decoupled and maximum throughput design approaches.

  6. Slotted ALOHA-TDMA comparative analysis. We show the conditions under which each transmission scheme outperforms the other. These conditions represent a guideline for the designer to choose between the two protocols based on the available system resources and design constraints.

The work presented in this paper is an extension of our previous work in [22], where only ALOHA networks were considered, and the energy allocation scheme was fixed. In addition, a more detailed simulation experiment and a full comparison between ALOHA and TDMA sensor networks are presented.

3 Problem formulation

Figure 1 illustrates the detection system architecture, where a set of N wireless sensors, S = {S_1, S_2, …, S_N}, and a fusion center (FC) collaborate to detect the phenomenon of interest in a geographic area divided into a number of resolution cells. We can summarize our design problem as “Given the CSI, REI, and QoI for each sensor, how can we find the optimal transmission control policy (XCP) for each sensor that maximizes the system detection performance?”

Figure 1. System model for detection in one-hop sensor networks.

Initially, the fusion center broadcasts a message containing the location of the resolution cell to be surveyed, soliciting information from different sensors. Each sensor responds with the following information: (1) channel state between the sensor and the fusion center, which could be estimated using channel measurement techniques [23, 24]; (2) signal-to-noise ratio for the reflected probing signal used by the sensor to illuminate the target, which could be estimated from sensor location (estimated using different localization methods [25]), resolution cell location, and channel measurement techniques; and (3) the energy reserve, representing the REI, which could be estimated by the sensor from the battery charging state. We call this communication process the handshaking cycle, which starts with the broadcast message from the fusion center and ends when the fusion center receives the information from all sensors. This handshaking cycle is repeated periodically to cope with changes in the environment that impact the sensor quality measures. Therefore, the overhead of the handshaking cycle is proportional to the environment dynamics, i.e., how fast the environment changes to affect the quality measures for sensors. We ignore the handshaking overhead in the development of the system model in Section 4. In Section 4.6, we provide an upper bound on the environment dynamics such that the handshaking overhead could be safely ignored without affecting the model accuracy.

After the handshaking cycle, the fusion center calculates the optimal transmission control policy for each sensor based on the quality measures received. The values of the XCP variables are then sent back to the sensors that have reliable quality measures to contribute to the detection task. The resulting values of the XCP variables are stored in a lookup table in the sensor memory (for each resolution cell), which remain valid for the given location as long as the quality measures for each sensor have not changed from the last run of the optimization algorithm. The detection process then proceeds as follows: the fusion center broadcasts a message to initiate a detection cycle for a specific resolution cell. The local sensors selected by the optimization algorithm will sample the environment by collecting a number of observations x i , form a data packet, and communicate their messages directly to the fusion center over the MAC channel. Finally, the fusion center makes a final decision after a fixed amount of time representing the maximum allowed delay for detection.

4 System model

The detection scheme described above suggests a layered approach to system modeling, as depicted in Figure 2. The physical layer represents the wireless channel model and defines system parameters such as the communication bit rate as well as the energy consumed in communicating sensor information to the fusion center. The media access control layer represents either the slotted ALOHA or the TDMA protocol model and defines the protocol-specific parameters such as the transmission probability. Finally, the application layer represents the sensing and energy models and defines the model of the observations obtained by local sensors, as well as the WSN energy constraints. In what follows, we derive the model for each layer of the system.

Figure 2. A layered approach to detection system modeling.

4.1 Wireless channel model

We focus on the case where the sensor nodes and the fusion center have minimal movement and the environment changes slowly. Accordingly, only the slow-fading component of the wireless channel is considered. Figure 3 shows the fading channel model, where w(t) is an AWGN with PSD N_0/2, and m(d_c) is the mean path attenuation for a sensor node at a distance d_c from the fusion center. Using the Hata path-loss model, the total decibel power loss is given by [26]

P_L = \underbrace{20\log\left(4\pi d_0/\lambda_p\right) + 10\rho_c\log\left(d_c/d_0\right)}_{\mu_c} + X_{\sigma_c} \quad \text{dB}
(1)

Figure 3. Block diagram for the wireless communication channel.

where d_0 is a reference distance corresponding to a point located in the far field of the transmit antenna, λ_p is the wavelength of the propagating signal, ρ_c is the path loss exponent, and X_{σ_c} ∼ N(0, σ_c^2).

The wireless channel represents an unreliable bit pipe for the data link layer, with instantaneous Shannon capacity C = W log_2(1 + P_r/(N_0 W)) bps, where W is the channel bandwidth and P_r is the signal power received by the fusion center. Using Shannon’s coding theorem and given the state-of-the-art coding schemes that approach the Shannon capacity, we can approximately assume that the fusion center can perform error-free decoding for any transmission with bit rate R < C, i.e., the channel is considered ‘ON’ when R < C and ‘OFF’ otherwise, giving rise to the two-state channel model akin to [6]. Using (1), the probability of the channel being ON during sensor i transmission could be expressed as

\lambda_{c_i} = \Phi\left[\frac{1}{\sigma_{c_i}}\left(10\log\frac{P_{t_i}}{N_0 W\left(2^{R_i/W}-1\right)} - \mu_{c_i}\right)\right],
(2)

where Pt is the average transmitted signal power, and Φ(.) is the cumulative distribution function for the standard normal PDF. We note that the CSI relevant to our model is represented by the statistics σc, μc, and N0. These statistics are required to be estimated by each sensor, and no instantaneous channel state information is required for the XCP design. Since we assume fixed nodes and a slowly varying channel, the estimation process could be executed less frequently to save sensor node resources. This is particularly important in wireless sensor networks since the estimation of the channel state is both time and power consuming.

It should be highlighted that the large-scale fading model presented here allows us to obtain the closed form solution in (2). More complex fading models, e.g., small-scale fading, can be integrated similarly, but they may allow only numerical solutions.
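
To make the two-state channel abstraction concrete, the following minimal sketch evaluates the channel-ON probability in (2) for a single sensor. The helper name channel_on_prob and all numerical values are illustrative assumptions, not parameters from Table 1.

```python
import numpy as np
from scipy.stats import norm

def channel_on_prob(P_t, R, W, N0, mu_c, sigma_c):
    """Probability that the slow-fading channel supports rate R, per (2).

    P_t: transmit power (W), R: bit rate (bps), W: bandwidth (Hz),
    N0: noise PSD (W/Hz), mu_c/sigma_c: mean and std of the dB path loss.
    """
    p_required = N0 * W * (2.0 ** (R / W) - 1.0)     # received power needed for R < C
    margin_db = 10.0 * np.log10(P_t / p_required)    # dB margin over that requirement
    return norm.cdf((margin_db - mu_c) / sigma_c)    # Phi[(1/sigma_c)(margin - mu_c)]

# Hypothetical example values (not from the paper).
print(channel_on_prob(P_t=0.5, R=1e3, W=10e3, N0=1e-12, mu_c=50.0, sigma_c=6.0))
```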

4.2 Media access control protocol model

The detection cycle initiated by the fusion center is illustrated in Figure 4. We assume a slotted multiaccess communication scheme with L communication slots per detection cycle, where each packet requires one time slot for transmission, all time slots have the same length τ/L, τ is the delay for detection, and all transmitters are synchronized. Each local sensor i collects a number of observations n_i and forms an information packet for transmission over the wireless channel, with communication rate:

R_i = \frac{b L n_i}{\tau},
(3)
Figure 4. Detection cycle for the ALOHA and TDMA sensor networks.

where b is the number of encoding bits for each observation. The sensor i then attempts to transmit to the fusion center according to the MAC scheme:

  1. Slotted ALOHA. The sensor attempts transmission in every slot during the detection cycle, with probability q_i, regardless of the state of its last transmission.

  2. TDMA. The sensor transmits to the fusion center only in its dedicated time slot, assigned using the fixed-assignment TDMA scheme.

Unequal priority TDMA, where a single sensor may be assigned more than one time slot, could also be used and may lead to a better detection performance. However, the resulting optimization problem is an integer programming problem that is generally hard to solve in real time. Therefore, only equal-priority TDMA is considered in this work. Without loss of generality, we assume that L = mN, where m is a positive integer, i.e., at each detection cycle, all sensors transmit the same number of times. This assumption facilitates the comparison with the slotted ALOHA scheme. The FC makes its decision at the end of the detection cycle. The process repeats for every detection request initiated by the fusion center. To simplify the analysis, the MAC protocol does not consider the acknowledgement slots or any protocol specifics required for synchronization or rate negotiation. We also ignore the packet overhead, which is a reasonable approximation for practical WSN protocols with large packet payloads.

Now, we calculate the overall probability of a successful packet transmission:

  1. Slotted ALOHA. At any given time slot, the probability of a successful packet transmission by sensor i is given by q_i \prod_{j\neq i}(1 - q_j). Further, this packet will be successfully received by the fusion center if the state of the physical channel between the sensor and the fusion center is ON during this time slot.

  2. TDMA. Since collisions are eliminated, the probability of successful packet transmission depends solely on the physical channel condition, as given by (2).

The total probability of a successful packet transmission by sensor i is then given by

\lambda_i = \begin{cases} q_i \prod_{j\neq i}(1-q_j)\,\lambda_{c_i} & \text{ALOHA} \\ \lambda_{c_i} & \text{TDMA} \end{cases}
(4)

4.3 Energy model

To formulate the energy model for each sensor, we first define the sensor network lifetime. Different definitions exist in the literature [27, 28], and the choice of a specific definition is usually governed by the application and the tractability of the resulting problem formulation. A general definition for network lifetime is presented in [29]. The definition holds regardless of the underlying network model, including network architecture and protocol, channel fading characteristics, and energy consumption model. The lifetime is given by

L = \frac{E_0 - E_w}{f_r E_r} = \frac{\sum_{i=1}^{N}\left(e_i^0 - e_i^w\right)}{f_r \sum_{i=1}^{N} e_i^r},
(5)

where E_0 = \sum_{i=1}^{N} e_i^0 is the total initial energy in all sensors at the time of deployment, E_w = \sum_{i=1}^{N} e_i^w is the total wasted energy remaining in sensor nodes when the network dies, f_r is the average sensor reporting rate defined here as the number of detection cycles per unit time, and E_r = \sum_{i=1}^{N} e_i^r is the expected reporting energy consumed by all sensors in one detection cycle.

Our objective is to allocate the reporting energy e i r for each sensor in such a way that maximizes the detection performance. In what follows, we derive the energy constraints for the sensor network.

4.3.1 Total energy constraint

In practice, it is desired to have a minimum network lifetime, where sensors can perform the assigned task, i.e., L ≥ l_t. Using (5), we get:

E_r \le \frac{E_0 - E_w}{f_r l_t} = \frac{\sum_{i=1}^{N}\left(e_i^0 - e_i^w\right)}{f_r l_t} = \varepsilon_t.
(6)

The total energy constraint is thus expressed as

\sum_{i=1}^{N} e_i^r \le \varepsilon_t.
(7)

4.3.2 Individual energy constraints

Since the network lifetime tends towards the least lifetime of the sensor nodes, it is desired to keep a minimum lifetime for each sensor. This prolongs the network lifetime and avoids depleting the energy reserve of high-quality sensors, which would otherwise cause a quick expiry of the sensor network. In addition, depleting sensor energy may result in loss of coverage for the area under surveillance. Therefore, we impose the constraint L_i ≥ l, i = 1, 2, …, N, on all sensor nodes. Accordingly, we get:

e_i^r \le \frac{e_i^0 - e_i^w}{f_r\, l} = \varepsilon_i,
(8)

where we note that l < l_t, i.e., ε_t < \sum_{i=1}^{N} ε_i. Otherwise, each sensor trivially allocates its maximum energy ε_i. Obviously, the reporting energy is ≥ 0; hence, the individual energy constraint is summarized as

0 \le e_i^r \le \varepsilon_i, \quad i = 1, 2, \ldots, N.
(9)

The constraint (9) essentially limits the energy expended by each sensor in each detection cycle to avoid fast depletion of the sensor energy. Finally, we need to relate the transmission power P_t in (2) to the reporting energy e^r in each detection cycle. We note that P_{t_i} = e_i^r/T, where T is the total time the sensor is transmitting during a detection cycle.

  1. ALOHA. We note that the expected number of transmissions by sensor i during a detection cycle is Lq_i. Therefore, T = (τ/L)Lq_i = τq_i, and we get P_{t_i} = e_i^r/(τq_i).

  2. TDMA. Since we assume L = mN, each sensor transmits m times, so T = (τ/L)m = τ/N, and we get P_{t_i} = Ne_i^r/τ.

The total probability of successful packet transmission is then expressed as

\lambda_i = \begin{cases} q_i \prod_{j\neq i}(1-q_j)\,\Phi[\rho_i] & \text{ALOHA} \\ \Phi[\rho_i] & \text{TDMA}, \end{cases}
(10)
\rho_i = \begin{cases} a_i + \dfrac{10}{\sigma_{c_i}}\log\dfrac{e_i^r}{q_i\left(2^{R_i/W}-1\right)} & \text{ALOHA} \\[2ex] a_i + \dfrac{10}{\sigma_{c_i}}\log\dfrac{N e_i^r}{2^{R_i/W}-1} & \text{TDMA}, \end{cases}
(11)
a_i = -\frac{1}{\sigma_{c_i}}\left(10\log(N_0) + \mu_{c_i}\right).
(12)

We note that in the above discussion, we neglected the energy consumed by each sensor to report its quality measures to the fusion center. This energy component could be included in the analysis by subtracting it from the initial sensor energy. However, for slowly varying environments, where the sensor characteristics need to be updated less frequently, this energy component could be neglected compared to the periodic sensor reporting energy.
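
As an illustration of how (10) to (12) combine the CSI and the energy allocation, the sketch below evaluates ρ_i and λ_i for a vector of sensors under both MAC schemes. It follows (10) to (12) as printed above; the helper name success_probs and all numerical values are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm

def success_probs(e_r, q, R, mu_c, sigma_c, W, N0, scheme="ALOHA"):
    """Per-sensor success probabilities lambda_i, following (10)-(12) as printed.

    e_r, q, R, mu_c, sigma_c are length-N arrays (or lists); W and N0 are scalars.
    Illustrative sketch only; the ALOHA branch assumes every q_i < 1.
    """
    e_r, q, R, mu_c, sigma_c = map(lambda v: np.asarray(v, float),
                                   (e_r, q, R, mu_c, sigma_c))
    N = e_r.size
    a = -(10.0 * np.log10(N0) + mu_c) / sigma_c             # a_i as in (12)
    if scheme == "ALOHA":
        rho = a + (10.0 / sigma_c) * np.log10(e_r / (q * (2.0 ** (R / W) - 1.0)))
        win = q * np.prod(1.0 - q) / (1.0 - q)              # q_i * prod_{j!=i}(1 - q_j)
        return win * norm.cdf(rho)
    rho = a + (10.0 / sigma_c) * np.log10(N * e_r / (2.0 ** (R / W) - 1.0))
    return norm.cdf(rho)                                    # TDMA: channel only

# Hypothetical three-sensor example (values not taken from the paper).
print(success_probs(e_r=[0.2, 0.3, 0.1], q=[0.2, 0.3, 0.1], R=[800.0, 900.0, 700.0],
                    mu_c=[50.0, 52.0, 48.0], sigma_c=[6.0, 6.0, 6.0],
                    W=10e3, N0=1e-12, scheme="ALOHA"))
```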

4.4 Sensing model

We focus our work on detection using signal amplitude measurements. Therefore, the observation at sensor i, located at distance d i from an object at a specific resolution cell, could be expressed as

x_i = \frac{\varepsilon}{d_i^{\eta/2}} + w_i,
(13)

where ε is the amplitude of the emitted signal at the object, η is a known attenuation coefficient, typically between 2 and 4, and w_i is an additive white Gaussian noise with zero mean and variance σ_{s_i}^2. We note that the above model considers passive sensing [25]. In the active sensing case, the observation model is given by x_i = ζε_tr/(2d_i)^{η/2} + w_i, where ζ is a known reflection coefficient at the object, ε_tr is the amplitude of the signal transmitted by the active sensor (illuminating signal), and 2d_i is the round trip distance traveled by the signal [19]. We note that the two observation models differ only in the scaling factor ζ/2^{η/2}. Therefore, without loss of generality, we assume the active sensing model in the following discussion. If passive sensing is assumed, then the detection problem will be slightly different than the problem presented here, since the amplitude of the source signal, ε, is unknown and has to be estimated from sensor observations.

The detection problem could be defined as the following binary hypothesis testing problem, for each time slot k:

\begin{aligned} H_0 &: x_i[j,k] = w_i[j,k], & j = 1, 2, \ldots, n_i \\ H_1 &: x_i[j,k] = \mu_i + w_i[j,k], & j = 1, 2, \ldots, n_i, \end{aligned}
(14)

where μ_i = ζε_tr/(2d_i)^{η/2}, and n_i is the number of independent and identically distributed (IID) observations obtained by sensor i at each time slot. We note that noise samples are independent across sensors, i.e., the observations at local sensors are independent across time and space, but not necessarily identically distributed since some sensors may be closer to the resolution cell, and noise variances are assumed unequal. In the following, we designate the vector of sensor observations at time slot k by x_i[k] = [x_i[1,k] x_i[2,k] … x_i[n_i,k]]. We note that x_i[k] has the multivariate Gaussian distribution N(0, C) under hypothesis H_0 and N(μ, C) under hypothesis H_1, where μ = [μ_1 μ_2 … μ_N] and C = σ_{s_i}^2 I.

Proposition 1.

The optimal test statistic at the fusion center is given by

V = \sum_{k=1}^{n_s}\sum_{i=1}^{N}\sum_{j=1}^{n_i}\frac{\mu_i}{\sigma_{s_i}^2}\,r_i[k]\,x_i[j,k],
(15)

where n_s = L for ALOHA and m for TDMA, and r_i[k] is a Bernoulli random process representing the success (r_i = 1) or failure (r_i = 0) of receiving a packet from sensor i in communication slot k. The sample space and probability measure of r_i are defined as Ω_{r_i} = {0, 1} and P[r_i = 1] = λ_i, respectively, where λ_i is given by (4).

Proof.

At the fusion center, the log likelihood ratio (LLR) is the sum of the individual observations received at each time slot. Therefore, the test could be expressed as

\sum_{k=1}^{n_s}\sum_{i=1}^{N} r_i[k]\, l\left(x_i[k]\right) \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \ln\gamma,
(16)

where l(x_i[k]) is given by (17). The LLR test then reduces to (18), where z_i = Σ_{k=1}^{n_s} r_i[k] indicates the number of times sensor i successfully transmitted to the fusion center in n_s time slots. The random vector z = [z_1 z_2 … z_N z_e], where z_e = n_s - Σ_{i=1}^{N} z_i, is multinomially distributed with probability vector p = [λ_1 λ_2 … λ_N λ_e], where the probability of collision or idle slot λ_e = 1 - Σ_{i=1}^{N} λ_i, and the sample space represents all possible combinations of z_i such that z_e + Σ_{i=1}^{N} z_i = n_s.

l\left(x_i[k]\right) = \ln\frac{p_{x_i}\left(x_i[k]; H_1\right)}{p_{x_i}\left(x_i[k]; H_0\right)} = \ln\frac{\exp\left(-\tfrac{1}{2}\left(x_i[k]-\mu_i\mathbf{1}\right)^T C^{-1}\left(x_i[k]-\mu_i\mathbf{1}\right)\right)}{\exp\left(-\tfrac{1}{2}x_i^T[k]\,C^{-1}x_i[k]\right)} = \frac{\mu_i}{\sigma_{s_i}^2}\sum_{j=1}^{n_i} x_i[j,k] - \frac{1}{2}\,n_i\,\frac{\mu_i^2}{\sigma_{s_i}^2}
(17)
V = \sum_{k=1}^{n_s}\sum_{i=1}^{N}\sum_{j=1}^{n_i}\frac{\mu_i}{\sigma_{s_i}^2}\,r_i[k]\,x_i[j,k] \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \frac{1}{2}\sum_{k=1}^{n_s}\sum_{i=1}^{N} n_i\,r_i[k]\,\frac{\mu_i^2}{\sigma_{s_i}^2} + \ln\gamma = \frac{1}{2}\sum_{i=1}^{N} z_i\,n_i\,\frac{\mu_i^2}{\sigma_{s_i}^2} + \ln\gamma = \gamma'.
(18)
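
The following sketch mirrors the fusion rule in (15) and (18): the fusion center accumulates the weighted sums of the successfully received packets and compares the result against the modified threshold γ'. The data layout (a per-slot dictionary of received packets) and all numbers are hypothetical.

```python
import numpy as np

def fusion_test(received, mu, sigma_s2, gamma):
    """Evaluate the LLR-based test of (15)/(18) at the fusion center (sketch).

    received: dict mapping slot k -> {sensor index i: observation vector x_i[k]}
              containing only the packets that were successfully received.
    mu, sigma_s2: per-sensor means mu_i and noise variances sigma_{s_i}^2.
    gamma: original LLR threshold; the modified threshold gamma' is built below.
    Returns (decide_H1, V, gamma_prime).
    """
    V = 0.0
    gamma_prime = np.log(gamma)
    for slot in received.values():
        for i, x in slot.items():
            V += (mu[i] / sigma_s2[i]) * np.sum(x)                    # sum term of (15)
            gamma_prime += 0.5 * len(x) * mu[i] ** 2 / sigma_s2[i]    # builds gamma' of (18)
    return V > gamma_prime, V, gamma_prime

# Hypothetical example: 2 sensors, 2 slots, sensor 1 lost in slot 0.
rng = np.random.default_rng(0)
mu, sigma_s2 = [0.5, 0.8], [1.0, 1.5]
received = {0: {0: rng.normal(mu[0], 1.0, 4)},
            1: {0: rng.normal(mu[0], 1.0, 4), 1: rng.normal(mu[1], np.sqrt(1.5), 4)}}
print(fusion_test(received, mu, sigma_s2, gamma=1.0))
```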

4.5 Measurement of detection performance

One of the widely used performance measures for detection applications is the receiver operating characteristics (ROC) curve [30]. The curve relates the probability of detection PD to the probability of false alarm PFA for different threshold values γ of the detector. For example, for the centralized shift-in-mean Gaussian detection problem, where all observations are available at the fusion center, the ROC curve is expressed as

P_D = Q\left[ Q^{-1}\left(P_{FA}\right) - \frac{\mu_1 - \mu_0}{\sigma} \right],
(19)

where Q[.] = 1 - Φ[.] is the complementary cumulative distribution function for the standard normal PDF, μ_1 - μ_0 is the shift-in-mean value, and σ^2 is the measurement variance. For our detection problem, the expressions for P_D and P_FA could be derived as follows:

P_D = P\left[V > \gamma'; H_1\right] = \sum_{z\in Z} P\left[V > \gamma' \mid z; H_1\right] p[z] = \sum_{z\in Z}\int_{\gamma'}^{\infty} p\left(v \mid z; H_1\right)\,p(z)\,dv.

We note from (15) that v|z is a Gaussian random variable with μ_{v|z;H_0} = 0 and μ_{v|z;H_1} = σ_{v|z}^2 = Σ_{i=1}^{N} z_i n_i μ_i^2/σ_{s_i}^2. Accordingly,

P_D = \sum_{z\in Z} Q\left[\frac{\gamma' - \sum_{i=1}^{N} z_i n_i \mu_i^2/\sigma_{s_i}^2}{\sqrt{\sum_{i=1}^{N} z_i n_i \mu_i^2/\sigma_{s_i}^2}}\right] \frac{L!}{z_1!\,z_2!\cdots z_N!\,z_e!}\;\lambda_1^{z_1}\lambda_2^{z_2}\cdots\lambda_N^{z_N}\lambda_e^{z_e}.
(20)

From (18), we get:

P_D = \sum_{z\in Z} Q\left[\frac{\ln\gamma - \frac{1}{2}\sum_{i=1}^{N} z_i n_i \mu_i^2/\sigma_{s_i}^2}{\sqrt{\sum_{i=1}^{N} z_i n_i \mu_i^2/\sigma_{s_i}^2}}\right] \frac{L!}{z_1!\,z_2!\cdots z_N!\,z_e!}\;\lambda_1^{z_1}\lambda_2^{z_2}\cdots\lambda_N^{z_N}\lambda_e^{z_e}.
(21)

Similarly, the probability of false alarm is given by

P_{FA} = \sum_{z\in Z} Q\left[\frac{\ln\gamma + \frac{1}{2}\sum_{i=1}^{N} z_i n_i \mu_i^2/\sigma_{s_i}^2}{\sqrt{\sum_{i=1}^{N} z_i n_i \mu_i^2/\sigma_{s_i}^2}}\right] \frac{L!}{z_1!\,z_2!\cdots z_N!\,z_e!}\;\lambda_1^{z_1}\lambda_2^{z_2}\cdots\lambda_N^{z_N}\lambda_e^{z_e}.
(22)

Equations 21 and 22 represent the ROC curve, which is a function of the detector threshold (γ), channel drop probabilities (λ), number of successful transmissions for each sensor (z), number of transmission slots (L), and measurement signal-to-noise ratios (μ / σ s ). This ROC curve cannot be expressed by one equation by eliminating the detector threshold, as in (19), due to the complexity of the equations. Furthermore, optimization with respect to the expressions in (21) and (22) is computationally prohibitive. Therefore, we adopt the deflection coefficient, a closely related performance measure that leads to a computationally less intensive problem, defined as [30]

D^2 = \frac{\left(E[V; H_1] - E[V; H_0]\right)^2}{\operatorname{var}[V; H_0]}.
(23)

The deflection coefficient is a measure of the separation between the two probability density functions under the two hypotheses. Under Gaussian assumptions, it is known that maximizing the deflection coefficient leads to the maximization of the detection performance in terms of the ROC curve [31]. In fact, it can be shown that for the centralized shift-in-mean Gaussian detection problem, the ROC curve in (19) could be expressed as [30]

P_D = Q\left[ Q^{-1}\left(P_{FA}\right) - \sqrt{D^2} \right].
(24)

Under non-Gaussian assumptions, there is no general result that enhancement of the deflection coefficient will lead to a better performance in terms of the ROC curve. However, it is likely that more separation between the two density functions will lead to a better detection performance.

Proposition 2.

The deflection coefficient for the detector in (15) is given by

D^2 = n_s \sum_{i=1}^{N} n_i \lambda_i \underbrace{\mu_i^2/\sigma_{s_i}^2}_{c_i},
(25)

where λ i is given by (4).

Proof.

We consider the ALOHA network case (n_s = L). For the TDMA network, the proof is identical, except that L is replaced by m. To calculate the deflection coefficient for the detector in (15), we use the fact that both r_i[k] and x_i[j, k] are strict-sense stationary random processes (being IID) and independent of each other. Therefore,

E[V; H_0] = L \sum_{i=1}^{N} n_i\, E[r_i]\, E[x_i]\, \mu_i/\sigma_{s_i}^2 = 0
(26)
E[V; H_1] = L \sum_{i=1}^{N} n_i \lambda_i\, \mu_i^2/\sigma_{s_i}^2
(27)

var[V; H_0] is given by (28), and noting that E[r_{i_1} r_{i_2}] = 0 for i_1 ≠ i_2 and that E[r_i^2] = λ_i, we get (29).

\operatorname{var}[V; H_0] = L\,\operatorname{var}\left[\sum_{i=1}^{N}\sum_{j=1}^{n_i}\frac{\mu_i}{\sigma_{s_i}^2}\,r_i\,x_i[j]\right] = L\,E\left[\left(\sum_{i=1}^{N}\sum_{j=1}^{n_i}\frac{\mu_i}{\sigma_{s_i}^2}\,r_i\,x_i[j]\right)^2\right] - L\left(E\left[\sum_{i=1}^{N}\sum_{j=1}^{n_i}\frac{\mu_i}{\sigma_{s_i}^2}\,r_i\,x_i[j]\right]\right)^2
= L\sum_{i_1=1}^{N}\sum_{i_2=1}^{N}\sum_{j_1=1}^{n_{i_1}}\sum_{j_2=1}^{n_{i_2}} E\left[\frac{\mu_{i_1}\mu_{i_2}}{\sigma_{s_{i_1}}^2\sigma_{s_{i_2}}^2}\,r_{i_1} r_{i_2}\,x_{i_1}[j_1]\,x_{i_2}[j_2]\right] - L\left(\sum_{i=1}^{N}\sum_{j=1}^{n_i}\frac{\mu_i}{\sigma_{s_i}^2}\,E\left[r_i\,x_i[j]\right]\right)^2
(28)
\operatorname{var}[V; H_0] = L\sum_{i=1}^{N}\sum_{j_1=1}^{n_i}\sum_{j_2=1}^{n_i}\left(\frac{\mu_i}{\sigma_{s_i}^2}\right)^2\lambda_i\,E\left[x_i[j_1]\,x_i[j_2]\right] = L\sum_{i=1}^{N}\sum_{j=1}^{n_i}\left(\frac{\mu_i}{\sigma_{s_i}^2}\right)^2\lambda_i\,E\left[x_i^2[j]\right] + L\sum_{i=1}^{N}\sum_{j_1=1}^{n_i}\sum_{\substack{j_2=1\\ j_2\neq j_1}}^{n_i}\left(\frac{\mu_i}{\sigma_{s_i}^2}\right)^2\lambda_i\,E\left[x_i[j_1]\right]E\left[x_i[j_2]\right] = L\sum_{i=1}^{N} n_i\lambda_i\,\frac{\mu_i^2}{\sigma_{s_i}^2}.
(29)

From (26), (27), and (29), we get (25).

We note that the quantity D_i = n_i μ_i^2/σ_{s_i}^2 represents the signal-to-noise ratio at sensor i, and we adopt it as a measure of the sensor QoI. From (25), we note that the overall deflection coefficient at the fusion center is simply a weighted sum of the individual deflection coefficients for each sensor, where the weights are the probabilities of successful packet transmission for each sensor, and the deflection coefficient in case of a collision is set to 0.

Combining (3), (11), and (25) we obtain the objective function:

D^2 = \begin{cases} \dfrac{\tau}{b}\displaystyle\sum_{i=1}^{N} c_i R_i\, q_i \prod_{j\neq i}(1-q_j)\,\Phi[\rho_i] & \text{ALOHA} \\[2ex] \dfrac{\tau}{bN}\displaystyle\sum_{i=1}^{N} c_i R_i\,\Phi[\rho_i] & \text{TDMA}. \end{cases}
(30)

We note that c_i = ε^2/(σ_{s_i}^2 d_i^η); therefore, the signal amplitude at the object to be detected appears as a scaling factor only in the objective function. This means that the signal amplitude does not affect the optimal operating point for the system. However, the amplitude does affect the detection performance, as intuitively expected. We further note that the objective function does not depend directly on L and n_i. Rather, from the optimal communication rates and (3), L and n_i could be arbitrarily chosen such that L n_i = τ R_i/b for any non-zero communication rate, i.e., R_i > 0, n_i ≥ 1, and consequently L ≤ τ R_i/b.
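
For reference, a compact sketch of the objective in (30) is given below; the argument names are assumptions, and ρ_i is expected to be precomputed from (11).

```python
import numpy as np
from scipy.stats import norm

def deflection(c, R, q, rho, tau, b, scheme="ALOHA"):
    """Deflection coefficient D^2 of (30); q and the ALOHA branch are ignored for TDMA."""
    c, R, q, rho = map(lambda v: np.asarray(v, float), (c, R, q, rho))
    if scheme == "ALOHA":
        win = q * np.prod(1.0 - q) / (1.0 - q)   # q_i * prod_{j!=i}(1 - q_j)
        return (tau / b) * np.sum(c * R * win * norm.cdf(rho))
    return (tau / (b * c.size)) * np.sum(c * R * norm.cdf(rho))

# Hypothetical values: three sensors, rho_i assumed precomputed from (11).
print(deflection(c=[2.0, 1.5, 1.0], R=[800.0, 900.0, 700.0], q=[0.2, 0.3, 0.1],
                 rho=[3.0, 2.0, 1.0], tau=25.0, b=16))
```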

Table 1 lists the essential model parameters and their description. The third column classifies each parameter according to its method of calculation as given from the application knowledge, estimated online, calculated, or as a design parameter. The ‘Notes’ column highlights the parameters that are a measure of the REI, CSI, or QoI for each sensor. The last column classifies each parameter according to its relevant layer in the system model.

Table 1 Model parameters for the wireless sensor network

4.6 Handshaking overhead

While developing the system model, we ignored the overhead of the handshaking protocol used by the fusion center to collect the quality measures of each sensor, as outlined in Section 3. In this section, we derive the conditions that have to be satisfied to ensure that the developed model accuracy is not affected by ignoring the handshaking overhead. We measure the protocol overhead by (1) the total delay time taken by the fusion center to collect the quality measures from all sensors in one handshaking cycle (τh) and (2) the total energy spent in the handshaking cycle (eh). We designate the communication rate and transmission power used by all sensors during the handshaking process by Rh and Ph, respectively. We further designate the rate at which the environment changes by fh (handshaking cycles per day). Our objective is to derive an upper bound on fh for a given network that enables us to ignore the handshaking overhead without affecting the model accuracy. The delay condition could be expressed as

\tau_h f_h < \alpha\left(\tau f_r\right),
(31)

where α represents the allowed percentage of resources to be consumed in the handshaking process, such that the handshaking overhead could be ignored. To calculate τh, we assume IEEE 754 half-precision binary floating-point format (2 bytes) to represent the quality measures of each sensor [32]. According to Table 1, we have five values representing the sensor quality measures: CSI (N0, μc, σc), QoI (μ / σ s ), and REI. Therefore, each sensor requires 10 bytes of payload. Assuming 9 bytes of overhead, the total handshaking delay is given by

\tau_h = \frac{19 \times 8 \times N}{R_h} = \frac{152\,N}{R_h}.
(32)

Combining (31) and (32):

f_h < \frac{\alpha\,\tau f_r\,R_h}{152\,N}.
(33)

The energy condition could be expressed as

e_h f_h\, l < \alpha\, e_0,
(34)

where e0 is the initial energy in each sensor battery, which is assumed the same for all sensors. The energy spent in the handshaking process by each sensor could be expressed as

e_h = P_h\,\frac{19 \times 8}{R_h} = \frac{152\,P_h}{R_h}.
(35)

Combining (34) and (35),

f_h < \frac{\alpha\, e_0}{152\, l}\,\frac{R_h}{P_h}.
(36)

Equations 33 and 36 represent the two conditions that need to be satisfied to ensure that the derived model accuracy will not be affected by ignoring the handshaking overhead. These two conditions could be verified for any values of the design parameters Rh and Ph. However, since we ignored the channel drop probability during the handshaking process in the analysis, one more constraint is required to guarantee minimum probability of successful transmission, λ, and hence reliable communication during the handshaking cycle. Since all sensors need to transmit during the handshaking process, we assume that TDMA is the protocol used during handshaking. Accordingly, we can use (2) to get

P_h = N_0 W\, 10^{\,0.1\left(\mu_c + \sigma_c\Phi^{-1}[\lambda]\right)}\left(2^{R_h/W} - 1\right)
(37)

and substituting in (36)

f_h < \frac{\alpha\, e_0\, 10^{-0.1\left(\mu_c + \sigma_c\Phi^{-1}[\lambda]\right)}}{152\, N_0 W\, l}\;\frac{R_h}{2^{R_h/W} - 1}.
(38)

We note that fh needs to satisfy (33) and (38) simultaneously. Since the right-hand side of (33) is a monotonically increasing function of Rh and the right-hand side of (38) is a monotonically decreasing function of Rh, the upper bound on fh is at the intersection of the two functions. Hence,

R_h = W\log_2\left(1 + \frac{e_0\, N\, 10^{-0.1\left(\mu_c + \sigma_c\Phi^{-1}[\lambda]\right)}}{N_0 W\, l\,\tau f_r}\right)
(39)
P_h = \frac{e_0\, N}{l\,\tau f_r},
(40)

and finally, the upper bound on fh is given by

f_h < \frac{\alpha\,\tau f_r\, W}{152\, N}\,\log_2\left(1 + \frac{e_0\, N\, 10^{-0.1\left(\mu_c + \sigma_c\Phi^{-1}[\lambda]\right)}}{N_0 W\, l\,\tau f_r}\right).
(41)

Example 1.

We evaluate the upper bound for the example network simulated in Section 6, where the parameters are shown in Table 1. We use σ_c = 6 dB and an average value of μ_c = 50 dB for all sensors. Further, we need λ = 0.9 to ensure reliable communication during the handshaking cycle and a percentage of resources consumed in the handshaking α = 0.05. For delay for detection τ = 25 s and network lifetime l = 250 days, we get:

Design parameters: R_h = 965 bps, P_h = 0.56 W
Upper bound: f_h < 22.69 cycles/day
Resources consumed: τ_h f_h = 242 s/day, e_h f_h = 1.94 J/day.
(42)

Figure 5 plots the operating point for the two conditions (33) and (38). Accordingly, for the given sensor network, the analysis and the developed model could be used with sufficient accuracy as long as the environment dynamics do not require more than 22 cycles/day to update the fusion center. If the environment dynamics are much faster, then the handshaking overhead has to be included in the model development.
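
The bound in (39) to (41) is straightforward to evaluate numerically. The sketch below does so for parameter values loosely mirroring Example 1; since e_0, N_0, W, and f_r are not listed in the text, the values used here are placeholders, and the printed numbers will therefore differ from (42).

```python
import numpy as np
from scipy.stats import norm

def handshake_bound(alpha, tau, f_r, W, N, e0, N0, l, mu_c, sigma_c, lam):
    """Upper bound on the handshaking rate f_h per (39)-(41) (sketch).

    f_r is in detection cycles per second, l is the lifetime target in seconds,
    and the bound is converted to cycles/day for readability.
    """
    shadow = 10.0 ** (-0.1 * (mu_c + sigma_c * norm.ppf(lam)))
    R_h = W * np.log2(1.0 + e0 * N * shadow / (N0 * W * l * tau * f_r))   # (39)
    f_h_max = alpha * tau * f_r * R_h / (152.0 * N)                       # (41)
    return R_h, f_h_max * 86400.0

# Placeholder values (e0, N0, W, f_r are assumptions, not Table 1 entries).
R_h, f_h = handshake_bound(alpha=0.05, tau=25.0, f_r=200.0 / 86400.0, W=10e3,
                           N=70, e0=1e4, N0=1e-12, l=250 * 86400.0,
                           mu_c=50.0, sigma_c=6.0, lam=0.9)
print(R_h, f_h)
```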

Figure 5. Handshaking overhead constraints for the example sensor network.

5 Transmission control policy design for optimal detection

In Section 4, we developed an integrated model for the detection system, obtained an expression for the detection performance measure (the deflection coefficient), and defined our design constraints. Now, we have the complete optimization problem that we need to solve to obtain the optimal system design variables (sensor communication rate R i , reporting energy e i r , and retransmission probability for the ALOHA case q i ). We start by summarizing the optimization problem:

\max \quad \begin{cases} \dfrac{\tau}{b}\displaystyle\sum_{i=1}^{N} c_i R_i\, q_i\prod_{j\neq i}(1-q_j)\,\Phi[\rho_i] & \text{ALOHA} \\[2ex] \dfrac{\tau}{bN}\displaystyle\sum_{i=1}^{N} c_i R_i\,\Phi[\rho_i] & \text{TDMA} \end{cases}
(43)
\text{subject to} \quad 0 \le q_i \le 1 \quad (\text{ALOHA})
(44)
0 \le R_i
(45)
0 \le e_i^r \le \varepsilon_i, \quad i = 1, 2, \ldots, N
(46)
\sum_{i=1}^{N} e_i^r \le \varepsilon_t.
(47)

In the following discussion, we consider the ALOHA optimization problem only, since the TDMA optimization problem has the same structure, with one of the decision variables omitted (q). We denote the decision variables by

x = \begin{bmatrix} q_1 \cdots q_N & R_1 \cdots R_N & e_1^r \cdots e_N^r \end{bmatrix},
(48)

where x ∈ R^{3N}, and we denote the objective function by J(x). The optimization problem could be rewritten in the form:

\min_{x}\; -J(x) \quad \text{subject to} \quad Ax \succeq b,
(49)
A = \begin{bmatrix} I & -I & 0 & 0 & 0 & 0 \\ 0 & 0 & I & 0 & 0 & 0 \\ 0 & 0 & 0 & I & -I & -\mathbf{1} \end{bmatrix}^T,
(50)
b = -\begin{bmatrix} 0 & 1 & 0 & 0 & \varepsilon & \varepsilon_t \end{bmatrix}^T.
(51)

I is the identity matrix, 0 (1) is the vector/matrix of all zeros (ones), with appropriate dimensions, and ε = [ε_1 ε_2 … ε_N]. We note that our objective function is not convex. Instead of following the classical approach of simplifying the system model to obtain a convex function, which may ignore important system dependencies and may lead to a less accurate model, our approach is to analyze the optimization problem to obtain a possible set of candidate points that may speed up the convergence process for existing numerical optimization algorithms, and then to resort to simulation experiments for performance evaluation.

Although the objective function is not convex, we note that the inequality constraints are linear. Therefore, the Karush-Kuhn-Tucker (KKT) conditions represent a necessary condition for a local maximizer of the objective function [33]. We first form the Lagrangian:

\mathcal{L}(x, \nu) = -J(x) - \nu^T(Ax - b),
(52)

where ν is the vector of Lagrange multipliers, defined as

\nu^T = \begin{bmatrix} \nu_{q_1}^0 \cdots \nu_{q_N}^0 & \nu_{q_1}^1 \cdots \nu_{q_N}^1 & \nu_{R_1} \cdots \nu_{R_N} & \nu_{e_1}^0 \cdots \nu_{e_N}^0 & \nu_{e_1} \cdots \nu_{e_N} & \nu_{e_T} \end{bmatrix}
(53)

ν_{q_i}^0 and ν_{q_i}^1 are the Lagrange multipliers for the constraints in (44), ν_{R_i} is the Lagrange multiplier for the constraint in (45), ν_{e_i}^0 and ν_{e_i} are the Lagrange multipliers for the constraints in (46), and ν_{e_T} is the Lagrange multiplier for the constraint in (47). We denote the primal and dual optimal points by x* and ν*, respectively. The KKT conditions are thus given by

-\nabla J(x^*) - A^T\nu^* = 0 \quad \text{(Stationarity)}
(54)
\nu^{*T}(Ax^* - b) = 0 \quad \text{(Complementary slackness)}
(55)
(Ax^* - b) \succeq 0 \quad \text{(Primal feasibility)}
(56)
\nu^* \succeq 0 \quad \text{(Dual feasibility)}
(57)
-Z^T\,\nabla^2 J(x^*)\,Z \succeq 0,
(58)

where Z is a null space matrix for the matrix of active constraints at x*, and ⪰ denotes componentwise inequality for vectors and positive semidefiniteness for matrices. Further, the KKT conditions are sufficient for a strict local maximizer if the following condition holds:

-Z_+^T\,\nabla^2 J(x^*)\,Z_+ \succ 0
(59)

where Z_+ is a null space matrix for the matrix of non-degenerate active constraints at x*, i.e., constraints with Lagrange multipliers ≠ 0. The stationarity condition could be expressed as

\frac{\tau}{b}\left[ c_i R_i \prod_{j\neq i}(1-q_j)\left(\Phi(\rho_i) - \frac{10}{\sqrt{2\pi}\,\sigma_{c_i}\ln 10}\,\exp\left(-\rho_i^2/2\right)\right) - \sum_{j\neq i} c_j R_j q_j \prod_{k\neq i,j}(1-q_k)\,\Phi(\rho_j)\right] + \nu_{q_i}^0 - \nu_{q_i}^1 = 0
(60)
\frac{\tau}{b}\, c_i q_i \prod_{j\neq i}(1-q_j)\left[\Phi(\rho_i) - \frac{10\ln 2}{\sqrt{2\pi}\,W\sigma_{c_i}\ln 10}\,\frac{R_i}{1 - 2^{-R_i/W}}\,\exp\left(-\rho_i^2/2\right)\right] + \nu_{R_i} = 0
(61)
\frac{\tau}{b}\, c_i R_i q_i \prod_{j\neq i}(1-q_j)\,\frac{10}{\sqrt{2\pi}\,\sigma_{c_i}\ln 10}\,\frac{\exp\left(-\rho_i^2/2\right)}{e_i^r} + \nu_{e_i}^0 - \nu_{e_i}^1 - \nu_{e_T} = 0.
(62)

The given KKT conditions cannot be solved analytically. However, the optimization problem could be solved efficiently using a variety of constrained optimization algorithms, e.g., the interior point method. The following two theorems provide a possible set of candidate points for a local maximizer, hence speeding up the convergence process and providing a set of initial points for the optimization algorithm.

Theorem 1.

A candidate point for a local maximizer of the objective function in (43) is when one sensor transmits with probability one (q_i = 1) and maximum energy (e_i^r = ε_i), while all other sensors remain silent. The optimal communication rate is given by

R_i^* = \arg\max_{R_i}\; R_i\,\Phi\left[a_i + \frac{10}{\sigma_{c_i}}\log\frac{\min\left(\varepsilon_i, \varepsilon_t\right)}{2^{R_i/W} - 1}\right].
(63)

Proof.

When q_i = 1, the objective function reduces to D^2 = (τ/b) c_i R_i Φ[ρ_i] ∏_{j≠i}(1 - q_j). Any value of q_j ≠ 0 will cause the objective function value to decrease. Physically, q_j ≠ 0 corresponds to a guaranteed collision, i.e., loss of information. Therefore, q_j = 0. Since R_j and e_j^r do not affect the objective function, we arbitrarily set R_j = 0. e_j^r should be set to 0 to save the energy budget for the non-contributing sensor. The solution q_j = R_j = e_j^r = 0 for j ≠ i could be shown to satisfy the KKT conditions by direct substitution. The objective function monotonically increases with e_i^r. Therefore, e_i^r should be set to its maximum value, i.e., e_i^r = min(ε_i, ε_t). Finally, the optimal R_i is set to maximize the objective function and, hence, given by (63).

We conclude that we have a set of N candidate points, (q i  = 1, q j  = 0, j ≠ i), for a local maximum, which could be checked easily in N time steps, in addition to the computations required to find the optimal communication rate, which could be implemented efficiently for a single-variable function.
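
The single-variable problem in (63) can be solved with any bounded scalar search. The sketch below uses a bounded minimizer from scipy; the parameter values and the assumed search bracket R_hi are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def theorem1_rate(a_i, sigma_c, W, e_max, R_hi=1e5):
    """Solve the single-variable problem (63) for the lone transmitting sensor.

    e_max = min(eps_i, eps_t) is the energy the sensor is allowed to spend;
    R_hi is an assumed upper limit on the rate search.
    """
    def neg_obj(R):
        rho = a_i + (10.0 / sigma_c) * np.log10(e_max / (2.0 ** (R / W) - 1.0))
        return -R * norm.cdf(rho)
    res = minimize_scalar(neg_obj, bounds=(1e-3, R_hi), method="bounded")
    return res.x, -res.fun

# Hypothetical example values (not from the paper).
print(theorem1_rate(a_i=2.0, sigma_c=6.0, W=10e3, e_max=0.3))
```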

Theorem 2.

A candidate point for a local maximizer of the objective function in (43) is when a subset of the sensors, defined by the index set S_ε, transmit with their maximum energy, while all other sensors remain silent. Optimal design variables for the active sensors are at x, where ∇J(x) = 0. The unallocated energy is equal to ε_t - Σ_{i∈S_ε} ε_i.

Proof.

For the active sensors, 0 < q_i < 1. The total energy constraint is inactive, i.e., ν_{e_T} = 0. If e_i^r < ε_i, then from the complementary slackness condition ν_{e_i}^0 = ν_{e_i}^1 = 0. From the stationarity condition, we get ∂J/∂e_i^r = 0, but ∂J/∂e_i^r = 0 if and only if e_i^r = 0, a contradiction. Therefore, the only option left is e_i^r = ε_i. In this case, the constraint e_i^r ≤ ε_i is active; hence, ν_{e_i}^0 = 0. From the stationarity condition, we get ∂J/∂e_i^r |_{e_i^r = ε_i} = ν_{e_i}^1, which satisfies the dual feasibility condition, since the left-hand side is ≥ 0. Therefore, this point is a candidate for a local maximizer.

We conclude that all active sensors in this case should transmit with maximum energy. Since all other constraints are inactive, all Lagrange multipliers are equal to 0, and therefore, from the stationarity condition, the optimal values for q and R are equal to the stationary point x, where ∇J(x) = 0.

Theorem 2 results in few candidate points if the individual energy constraint for each sensor is a large fraction of the total energy constraint, such that only few sensors consume the total energy budget. Otherwise, the number of candidate points will be prohibitively large. The solution of the optimization problem is summarized in Algorithm 1, where Theorem 1 is used, and the called optimization algorithm is any optimization method of choice, e.g., interior point method.

Algorithm 1 Optimization Problem Solution
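
Since Algorithm 1 appears only as a figure in the original, the following is a hedged sketch of a solver loop consistent with the text: the N Theorem 1 candidates are screened first, and a general-purpose constrained solver (SLSQP here, standing in for the interior point method named in the text) refines the solution from a set of starting points. The function and argument names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def solve_xcp(neg_obj, N, eps_i, eps_t, candidates, x0_list):
    """Sketch of an Algorithm-1-style solver loop (SLSQP as a stand-in solver).

    neg_obj(x) = -J(x) with x = [q_1..q_N, R_1..R_N, e_1..e_N].
    candidates: Theorem-1 points (one sensor active) screened first.
    x0_list: additional starting points for the numerical solver.
    """
    # Box constraints: 0 <= q <= 1, R >= 0, 0 <= e_r <= eps_i.
    bounds = [(0.0, 1.0)] * N + [(0.0, None)] * N + [(0.0, e) for e in eps_i]
    # Total energy constraint: sum(e_r) <= eps_t.
    cons = [{"type": "ineq", "fun": lambda x: eps_t - np.sum(x[2 * N:])}]

    best_x, best_val = None, np.inf
    for x in candidates:                       # screen the N Theorem-1 candidates
        val = neg_obj(np.asarray(x, float))
        if val < best_val:
            best_x, best_val = np.asarray(x, float), val
    for x0 in x0_list:                         # refine with a constrained solver
        res = minimize(neg_obj, x0, method="SLSQP", bounds=bounds, constraints=cons)
        if res.success and res.fun < best_val:
            best_x, best_val = res.x, res.fun
    return best_x, -best_val                   # maximizer of J and the value J(x*)
```

In practice, the Theorem 1 and Theorem 2 candidate points can also be supplied as entries of x0_list so that the numerical search starts near promising regions of the feasible set.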

6 Performance comparison

We compare our design approach with two other approaches commonly used to design the transmission control policy for practical sensor networks. We do not attempt to compare with specific designs treated in the literature that are optimized for detection applications with very specific hardware configurations not typically used in practical WSNs. We call our approach cross-layer design (CLD) hereafter, since it integrates the physical, MAC, and application layers. In both approaches, we assume an equal energy allocation scheme, where the energy is divided equally across sensor nodes. This allocation scheme is typically used when sensor quality measures are not integrated in the design, and therefore, all sensors are treated equally. Sensor energy is thus given by e_i^r = ε_t/N. This allocation scheme is feasible if ε_t/N ≤ ε_i. Otherwise, ε_i is allocated to each sensor, i.e., e_i^r = min(ε_t/N, ε_i). For the special case when all sensors have the same initial and wasted energies, i.e., e_0 - e_w is the same, we have from (6) and (8)

\varepsilon_t = \frac{N\left(e_0 - e_w\right)}{f_r\, l_t} = \frac{l}{l_t}\, N\,\varepsilon_i,
(64)

where ε i is the same for all sensors. Since l < lt, we have εt / N < ε i , and therefore, the equal energy allocation is feasible in this case. This case is the one considered in the numerical example in Section 6.4. Finally, we present an upper bound on the system performance to better assess how well the CLD performs compared to the best possible performance, achievable only in theory.

6.1 Maximum throughput design

The throughput for a given WSN is calculated as T = Σ_{i=1}^{N} R_i λ_i. For the given sensor network,

T = \begin{cases} \displaystyle\sum_{i=1}^{N} R_i\, q_i\prod_{j\neq i}(1-q_j)\,\Phi[\rho_i] & \text{ALOHA} \\[2ex] \displaystyle\sum_{i=1}^{N} R_i\,\Phi[\rho_i] & \text{TDMA} \end{cases}
(65)

In maximum throughput designs, the design variables R i and q i are chosen to maximize the throughput in (65). The maximum throughput design thus does not consider the QoI for each sensor. This is clearly shown by comparing (65) to (30), where we note that the maximum throughput design is equivalent to the CLD if all sensors have the same quality of information.

6.2 Decoupled design

In the conventional slotted ALOHA, the MAC sublayer is designed to minimize the probability of collision, without regard to the QoI or CSI of each node. Minimum probability of collision occurs at q i  = 1 / N. For both ALOHA and TDMA, the physical layer is designed to guarantee a minimum probability of successful packet transmission, λ, i.e.,

\Phi\left[a_i + \frac{10}{\sigma_{c_i}}\log\frac{N e_i^r}{2^{R_i/W} - 1}\right] = \lambda.
(66)

Accordingly, R i is given by

R_i = W\log_2\left(1 + 10^{\,0.1\sigma_{c_i}\left(a_i - \Phi^{-1}[\lambda]\right) + \log\left(N e^r\right)}\right),
(67)

and using (30), the deflection coefficient is given by

D^2 = \begin{cases} \dfrac{\tau\lambda}{bN}\left(1 - \dfrac{1}{N}\right)^{N-1}\displaystyle\sum_{i=1}^{N} c_i R_i & \text{ALOHA} \\[2ex] \dfrac{\tau\lambda}{bN}\displaystyle\sum_{i=1}^{N} c_i R_i & \text{TDMA}. \end{cases}
(68)

In practice, λ is pre-determined from the application. However, to make a fair comparison, we use the value of λ that maximizes the deflection coefficient in (68), i.e., λ = arg max_λ D^2, 0 ≤ λ ≤ 1.
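
For completeness, a small sketch of the decoupled-design rate assignment in (67) follows; a_i is assumed to be precomputed from (12), and the numeric values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def decoupled_rate(a_i, sigma_c, W, N, e_r, lam):
    """Communication rate of the decoupled design, per (67) (sketch)."""
    exponent = 0.1 * sigma_c * (a_i - norm.ppf(lam)) + np.log10(N * e_r)
    return W * np.log2(1.0 + 10.0 ** exponent)

# Hypothetical values (not from the paper).
print(decoupled_rate(a_i=11.7, sigma_c=6.0, W=10e3, N=70, e_r=0.05, lam=0.9))
```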

6.3 Performance upper bound

For the given problem setup and for given energy and delay constraints, the upper bound on the performance is attained when there are no channel drops or contentions between sensors, i.e., all observations generated locally at each sensor are received successfully at the fusion center. Mathematically, this case is equivalent to a realization of the physical channel where the channel state is ON for all transmissions. In this case, the detection problem reduces to the classical centralized shift-in-mean Gaussian detection problem, where the ROC curve is given by (24).
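
The centralized upper bound in (24) is easy to tabulate once D^2 is known; the short sketch below does so, using the square root of D^2 as the shift, consistent with (19). The D^2 value used is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def upper_bound_roc(d2, p_fa):
    """Centralized upper-bound ROC of (24): P_D = Q[Q^{-1}(P_FA) - sqrt(D^2)]."""
    p_fa = np.asarray(p_fa, dtype=float)
    return norm.sf(norm.isf(p_fa) - np.sqrt(d2))   # Q = sf, Q^{-1} = isf

# Hypothetical D^2 value; sweep P_FA on a log grid.
p_fa = np.logspace(-4, 0, 50)
print(upper_bound_roc(6.0, p_fa)[:5])
```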

6.4 Simulation results

In this section, we evaluate the proposed cross-layer design approach for the system in Figure 1, as compared to the classical approaches summarized in Section 6, via a numerical example. We consider a network with 70 sensors (N = 70) deployed for detection, with parameter values as shown in Table 1. To avoid manual entry of parameter values for the 70 sensors, the mean path loss, path loss variance, and the signal-to-noise ratio for each sensor are generated using uniform random number generators. The evaluation is performed both numerically and through Monte Carlo Simulation (MCS) experiments.

6.4.1 Numerical evaluation

We use Algorithm 1 to calculate the optimal solution for the CLD in (30) and the maximum throughput design in (65), where the interior point method is used as the core optimization algorithm. The interior point method is also used to find the optimal probability of successful packet transmission for the decoupled design in (68).

6.4.2 Simulation study

We use the optimal solution for the design variables obtained from the numerical evaluation to set up an MCS for the wireless network, as follows:

  • Hypothesis. MCS is performed for both H_0 and H_1 to evaluate the deflection coefficient, ROC curve, and probability of error.

  • Sensors. Observations are generated locally at each sensor for each communication slot. For ALOHA channels, each sensor attempts transmission randomly according to its retransmission probability. For TDMA channels, each sensor transmits in its allocated slots only.

  • Communication channel. The channel state for each sensor is simulated for each detection cycle.

  • Fusion center. The fusion center performs the likelihood ratio test on the observations received. Equivalently, the fusion center calculates the test statistic in (15) and compares it to a threshold value.

  • Performance evaluation. The deflection coefficient is evaluated statistically according to (23). The ROC curve is evaluated by running MCS for different threshold γ values.

We run the MCS experiment 5,000 times for each pair of delay-for-detection and network-lifetime values to obtain accurate results.
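
To illustrate the simulation steps listed above, the following is a compact Monte Carlo sketch of an ALOHA detection cycle and of the statistical estimate of the deflection coefficient in (23). It is not the authors' simulation code, and every parameter value is a placeholder.

```python
import numpy as np

def simulate_cycle(rng, L, n, q, lam_c, mu, sigma_s2, h1):
    """Simulate one ALOHA detection cycle and return the test statistic V (sketch).

    L: slots per cycle, n: observations per packet, q: transmit probabilities,
    lam_c: channel-ON probabilities, h1: True to draw data under H1.
    """
    N = len(q)
    V = 0.0
    for _ in range(L):
        tx = rng.random(N) < q                         # who attempts this slot
        if tx.sum() != 1:                              # idle slot or collision
            continue
        i = int(np.argmax(tx))
        if rng.random() > lam_c[i]:                    # channel OFF: packet lost
            continue
        mean = mu[i] if h1 else 0.0
        x = rng.normal(mean, np.sqrt(sigma_s2[i]), n[i])
        V += (mu[i] / sigma_s2[i]) * x.sum()           # accumulate (15)
    return V

# Hypothetical three-sensor setup; estimate the deflection coefficient via (23).
rng = np.random.default_rng(1)
q, lam_c = [0.3, 0.2, 0.25], [0.9, 0.8, 0.85]
mu, sigma_s2, n = [0.6, 0.5, 0.7], [1.0, 1.2, 0.9], [4, 4, 4]
v0 = [simulate_cycle(rng, 20, n, q, lam_c, mu, sigma_s2, False) for _ in range(5000)]
v1 = [simulate_cycle(rng, 20, n, q, lam_c, mu, sigma_s2, True) for _ in range(5000)]
print((np.mean(v1) - np.mean(v0)) ** 2 / np.var(v0))
```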

6.4.3 Deflection coefficient

Figure 6, left graph, shows the performance surface for the slotted ALOHA sensor network for the proposed CLD approach, for different delay and network lifetime values. For a fixed network lifetime, the deflection coefficient increases with the delay for detection, as more observations are expected at the fusion center. For a fixed delay for detection, the deflection coefficient decreases with network lifetime. This is mainly because the energy budget allocated for each detection cycle decreases to prolong the network lifetime. Decreasing the energy budget reduces the probability of successful packet transmission, hence causing fewer observations to arrive at the fusion center. The TDMA sensor network exhibits similar behavior, as illustrated in Figure 6, right graph. The drop in the deflection coefficient at around 60-s delay and 100-day network lifetime is due to local convergence of the optimization algorithm. This point represents a local maximum, and a point with a larger deflection coefficient could be obtained by varying the initial point of the optimization algorithm.

Figure 6. Wireless sensor network performance for the ALOHA (left graph) and TDMA (right graph) sensor networks.

We resort to two-dimensional plots to compare the different design approaches. Figure 7, top left graph, shows the deflection coefficient versus the delay for detection for the three design approaches for the ALOHA network, where the network lifetime is set to 250 days. The decoupled design approach has the worst performance, even when choosing the optimal value for the probability of successful transmission λ. This is mainly because the parameters at each layer are specified independently, without regard to the application. The maximum throughput design has a better performance since it seeks to maximize the quantity of the information at the fusion center, by integrating the physical and MAC layers. However, since increasing the quantity of the information is not equivalent to increasing the information quality, as sensors have different QoI, the maximum throughput design is outperformed by the proposed CLD approach. The performance of the proposed design represents an upper bound on the maximum throughput performance. This upper bound is achieved if all sensors have the same QoI. The MCS results are superimposed on the numerical curves. The simulated results coincide with the numerical results (apart from MCS accuracy), hence verifying the correctness of the analysis.

Figure 7. Deflection coefficient for the ALOHA (top graphs) and TDMA (bottom graphs) wireless sensor networks.

Figure 7, top right graph, shows the deflection coefficient as it varies with the network lifetime, where delay for detection is set to 50 s. The results are similar to the delay for detection study, where the proposed CLD approach outperforms the maximum throughput and decoupled design approaches. Equivalently, for the same deflection coefficient, the network lifetime with the CLD is longer. The MCS results are superimposed on the numerically obtained curves, verifying the correctness of the analysis. The TDMA sensor network exhibits a similar behavior as illustrated in Figure 7, bottom left and right graphs. The MCS results are shown for the CLD design only to avoid cluttering the figures.

6.4.4 ROC curves

For the probability of error and the ROC curves, we resort to MCS experiments to obtain the performance curves. Figure 8, top left graph, shows the probability of error versus the delay for detection for the ALOHA sensor network, where the network lifetime is equal to 250 days. Since the probability of error improves monotonically with the deflection coefficient, we obtain the same relative performance, i.e., the proposed CLD approach outperforms the other two approaches, while the maximum throughput approach outperforms the decoupled design. The same results are obtained for the probability of error as it varies with the network lifetime, for a fixed delay for detection, which is shown in Figure 8, top right graph. The TDMA sensor network exhibits similar behavior, as illustrated in Figure 8, bottom graphs. The difference between the decoupled design and the maximum throughput design is not noticeable in Figure 8, bottom left graph, due to MCS accuracy.

Figure 8. Probability of error for the ALOHA (top graphs) and TDMA (bottom graphs) wireless sensor networks. MCS curves smoothed out for better presentation.

Figure 9 shows the simulated ROC curves for τ = 50 s and lifetime = 250 days and for different values of the threshold γ ∈ [0, ∞). The figure shows the performance enhancement obtained using the CLD approach. For the same probability of false alarm, the proposed cross-layer approach results in a higher probability of detection than the other approaches. Therefore, by integrating different system layers and quality measures in the design process, we obtain a performance enhancement that would not be possible without increasing the delay and/or shortening the network lifetime. Figure 9 also shows the upper bound on the performance, given by (24), where D^2 is calculated from MCS assuming no channel drops or contentions. The upper bound is achievable only for ideal channels, and the CLD approaches this bound more closely as the channel quality improves.

Figure 9. ROC curve for the ALOHA (left graph) and TDMA (right graph) wireless sensor networks.

In practice, a family of these ROC curves is provided for different values of the delay for detection and the network lifetime. The operating point is located on the specific ROC curve matching the constraints, and the corresponding values of the detector threshold and the WSN design variables are set accordingly.
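
As an illustration of this design step (hypothetical data structure and names, not an interface from the paper), the pre-computed family can be stored as a lookup keyed by (delay, lifetime), from which the threshold achieving a target false-alarm rate is read off:

```python
import numpy as np

# Hypothetical pre-computed ROC family: (tau [s], lifetime [days]) -> threshold
# grid with the corresponding empirical P_FA and P_D (e.g., produced by the
# empirical_roc sketch above). The arrays here are placeholders.
gammas = np.linspace(-3, 6, 200)
roc_family = {
    (50, 250): {"gamma": gammas,
                "p_fa": np.linspace(1, 0, 200),        # decreasing in gamma
                "p_d": np.linspace(1, 0, 200) ** 0.3},
}

def pick_operating_point(tau, lifetime, target_p_fa):
    """Select the ROC curve matching the delay/lifetime constraints and return
    the detector threshold (and expected P_D) for the requested P_FA."""
    curve = roc_family[(tau, lifetime)]
    idx = int(np.argmin(np.abs(curve["p_fa"] - target_p_fa)))
    return curve["gamma"][idx], curve["p_d"][idx]

gamma, p_d = pick_operating_point(50, 250, target_p_fa=0.05)
print(f"gamma ~ {gamma:.2f}, expected P_D ~ {p_d:.2f}")
```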

7 Slotted ALOHA-TDMA comparison

One important question is whether the TDMA scheme is superior to slotted ALOHA for our detection application. On one hand, eliminating collisions saves energy and guarantees the transmission of sensor data. On the other hand, the TDMA scheme treats all sensors equally, assigning each sensor a time slot while ignoring its quality measures. In this section, we compare the performance of the slotted ALOHA and TDMA sensor networks, based on the numerical example in Section 6. Figure 10, top left graph, shows the deflection coefficient for different delay constraints. We note that the ALOHA network outperforms TDMA if the delay is below a threshold value τth; if the delay is increased further, TDMA outperforms the ALOHA network. For τ < τth, the ALOHA network outperforms TDMA because of its selectivity property, whereby sensors with relatively low quality measures are excluded from the detection task. This selectivity is lacking in the TDMA network, where all sensors are treated equally and scheduled to transmit their observations regardless of their quality measures. For τ > τth, TDMA outperforms ALOHA mainly because the average energy per detection cycle (average power) decreases with increasing delay, so each sensor in the ALOHA network must reduce its transmission attempts to conserve the energy that would otherwise be wasted in collisions. TDMA, in contrast, does not suffer from collisions; even with very little energy per detection cycle, sensors can still deliver their information to the fusion center, and the detection performance is therefore higher. In general, the delay threshold τth increases with the reporting energy per detection cycle of each sensor, er. For the given example, τth ≈ 120 s. Since detection applications are delay sensitive, the ALOHA network would typically be the design choice. However, for energy-scarce applications with very low energy per sensor, TDMA may be a viable alternative.
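
The threshold delay can be located numerically as the crossing point of the two performance curves. The sketch below uses hypothetical closed-form curves for D²_ALOHA(τ) and D²_TDMA(τ) purely to illustrate the procedure; in the paper these values come from solving the two optimization problems, and the resulting τth ≈ 120 s differs from the toy value printed here.

```python
import numpy as np

# Hypothetical performance curves D^2(tau), standing in for the numerically
# optimized deflection coefficients of the two MAC schemes.
tau = np.linspace(10, 300, 600)               # delay for detection [s]
d2_aloha = 8.0 * (1 - np.exp(-tau / 40))      # saturates: collisions limit the gain
d2_tdma = 0.05 * tau                          # keeps improving as more slots fit

# tau_th is the first delay at which TDMA overtakes ALOHA.
crossing = int(np.argmax(d2_tdma >= d2_aloha))
print(f"tau_th ~ {tau[crossing]:.0f} s (for these toy curves)")
```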

Figure 10. ALOHA-TDMA comparison.

Figure 10, top right graph, shows the deflection coefficient for different lifetime values. Similarly, TDMA outperforms ALOHA for lifetime values greater than a threshold lifetime Lth, and the threshold lifetime increases as the delay for detection decreases. For the given numerical example, Lth ≈ 285 days. Since the performance degrades with increasing network lifetime, the deflection coefficient at the threshold lifetime may fall below the minimum design value, in which case TDMA is not a feasible design option. For example, in Figure 10, top right graph, the minimum detection performance is specified as D² = 6, so ALOHA is the design option; at the threshold lifetime, D² ≈ 5.2, which is below the minimum requirement, so TDMA cannot be used under these design requirements. However, for energy-scarce applications the threshold lifetime becomes smaller, so TDMA may be the only viable design option to extend the network lifetime, at the expense of degraded detection performance.

Figure 10, bottom left graph, summarizes the performance comparison in the two-dimensional delay-lifetime space. The curve represents the boundary between the ALOHA and TDMA regions: for any (delay, lifetime) pair in the ALOHA region, the ALOHA sensor network has superior performance, and similarly for the TDMA region. The figure can be augmented with contour lines of the deflection coefficient for both ALOHA and TDMA to show the value of the performance measure; using these values, the designer can check whether the selected operating point satisfies the minimum performance requirement. Figure 10, bottom right graph, shows the performance regions with the contour lines for the ALOHA region.
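
Such a region map can be generated by evaluating both designs on a (delay, lifetime) grid and marking where each dominates. The surfaces below are hypothetical placeholders for the optimized D² values and are only meant to illustrate how the boundary, contours, and feasibility check would be computed.

```python
import numpy as np

# Hypothetical deflection-coefficient surfaces over the (delay, lifetime) grid,
# standing in for the numerically optimized D^2 of each MAC scheme.
tau = np.linspace(10, 300, 60)       # delay for detection [s]
life = np.linspace(50, 500, 60)      # network lifetime [days]
T, L = np.meshgrid(tau, life)
d2_aloha = 8.0 * (1 - np.exp(-T / 40)) * 300 / (L + 200)
d2_tdma = 0.05 * T * 300 / (L + 200)

# Boolean map of the ALOHA region; its boundary is the ALOHA/TDMA curve, and
# contours of d2_aloha restricted to that region give the performance values.
aloha_region = d2_aloha > d2_tdma
d2_min = 6.0                         # minimum acceptable detection performance
feasible = aloha_region & (d2_aloha >= d2_min)
print(f"ALOHA preferred on {100 * aloha_region.mean():.0f}% of the grid, "
      f"feasible (D^2 >= {d2_min}) on {100 * feasible.mean():.0f}%")
```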

8 Conclusion

In this paper, we pursued a cross-layer, model-based approach to design single-hop ALOHA and TDMA WSNs deployed for detection applications. We developed an integrated model for the detection system that includes the communication network, sensing, and energy models, and we considered the QoI, CSI, and REI quality measures in the design process. We designed a complete transmission control policy that includes the transmission probabilities, communication rate, and energy allocation for each sensor. We showed a significant performance increase over the decoupled and maximum throughput design approaches with an equal energy allocation scheme, for both ALOHA and TDMA networks.

The TDMA sensor network is easier to design than the ALOHA network, since one of the design variables (the retransmission probability) is omitted. However, we showed in this paper that the ALOHA network outperforms TDMA for small to moderate delays, whereas for large delays TDMA outperforms the ALOHA network unless the network lifetime is reduced. The designer chooses the best option based on the delay and lifetime constraints, in addition to the minimum allowed performance measure.

The cross-layer design approach yields a no-cost performance increase, in the sense that performance improves under the same delay and lifetime constraints. However, the cross-layer design has its own pitfalls. First, a mathematical model that captures the inter-relationships between the different layers has to be developed. This model is, in general, complex, and it may be necessary to iterate the design process several times, refining the assumptions to obtain a tractable model. Second, the resulting optimization problem has to be solvable in real time with existing optimization algorithms. This is not always possible, as the complexity of the optimization problem is closely coupled to the complexity of the model. Finally, the optimality of the design depends on the availability of global information in real time, an assumption that may not hold in practice. Despite these pitfalls, the complexity of the cross-layer design is justified when the performance must be optimized under limited system resources that cannot be replenished (e.g., a remote WSN in a battlefield). The decoupled approach, on the other hand, may be justified for systems with sufficient resources, where the performance loss can be compensated by additional resource allocation.

Several extensions could be made to the work presented in this paper. Multi-hop sensor networks could be addressed instead of single-hop networks. Small-scale fading could be incorporated into the system model, yielding a more general model applicable to a wider variety of sensor network applications. Finally, other channel access schemes could be considered, e.g., FDMA, CDMA, and SDMA.


Acknowledgements

This work is supported in part by the National Science Foundation (CNS-1238959, CNS-1035655).

Author information

Corresponding author

Correspondence to Ashraf Tantawy.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Tantawy, A., Koutsoukos, X. & Biswas, G. Cross-layer design for decentralized detection in WSNs. EURASIP J. Adv. Signal Process. 2014, 43 (2014). https://doi.org/10.1186/1687-6180-2014-43


Keywords