Research Article · Open Access

# Distributed Encoding Algorithm for Source Localization in Sensor Networks

YoonHak Kim^{1} and Antonio Ortega^{2}

**2010**:781720

https://doi.org/10.1155/2010/781720

© Y. H. Kim and A. Ortega. 2010

**Received:** 12 May 2010 · **Accepted:** 21 September 2010 · **Published:** 26 September 2010

## Abstract

We consider sensor-based distributed source localization applications, where sensors transmit quantized data to a fusion node, which then produces an estimate of the source location. For this application, the goal is to minimize the amount of information that the sensor nodes have to exchange in order to attain a certain source localization accuracy. We propose a distributed encoding algorithm that is applied after quantization and achieves significant rate savings by merging quantization bins. The bin-merging technique exploits the fact that certain combinations of quantization bins at each node cannot occur because the corresponding spatial regions have an empty intersection. We apply the algorithm to a system where an acoustic amplitude sensor model is employed at each node for source localization. Our experiments demonstrate significant rate savings (e.g., over 30% with 5 nodes and 4 bits per node) when our novel bin-merging algorithms are used.

## Keywords

- Encoding Algorithm
- Rate Savings
- Sensor Reading
- Sensor Field
- Empty Intersection

## 1. Introduction

In sensor networks, multiple correlated sensor readings are available from many sensors that can sense, compute, and communicate. Often these sensors are battery-powered and operate under strict limitations on wireless communication bandwidth. This motivates the use of data compression in the context of various tasks such as detection, classification, localization, and tracking, which require data exchange between sensors. The basic strategy for reducing overall energy usage in the sensor network is then to decrease the communication cost at the expense of additional computation in the sensors [1].

One important sensor collaboration task with broad applications is source localization. The goal is to estimate the location of a source within a sensor field, where a set of distributed sensors measures acoustic or seismic signals emitted by a source and manipulates the measurements to produce meaningful information such as signal energy, direction-of-arrival (DOA), and time difference-of-arrival (TDOA) [2, 3].

Localization based on acoustic signal energy measured at individual acoustic amplitude sensors is proposed in [4], where each sensor transmits unquantized acoustic energy readings to a fusion node, which then computes an estimate of the location of the source of these acoustic signals. Localization can also be performed using DOA sensors (sensor arrays) [5]. Sensor arrays generally provide better localization accuracy than amplitude sensors, especially in the far field, but they are computationally more expensive. TDOA can be estimated using various correlation operations, and a least squares (LS) formulation can be used to estimate the source location [6]. Good localization accuracy for the TDOA method can be achieved if there is accurate synchronization among sensors, which tends to be costly in wireless sensor networks [3].

None of these approaches explicitly takes into account the effect of sensor reading quantization. Since practical systems will require quantization of sensor readings before transmission, estimation algorithms will be run on quantized sensor readings. Thus, it is desirable to minimize the rate needed to transmit the sensor readings to the fusion node. Note that there exists some degree of redundancy between the quantized sensor readings, since each sensor collects information (e.g., signal energy or direction) about the same source location. Clearly, this redundancy can be reduced by adopting distributed quantizers designed to maximize localization accuracy by exploiting the correlation between the sensor readings (see [7, 8]).

We consider a sensor network in which each node obtains a local sensor reading (denoted $z_i$ in Figure 1), such as signal energy or DOA, from actual measurements (e.g., time-series measurements or spatial measurements). We also assume that there is only one-way communication from the nodes to the fusion node; that is, there is no feedback channel, the nodes do not communicate with each other (no relaying between nodes), and these communication links are reliable.

Combinations of quantization bins actually *transmitted from the nodes* will tend to produce nonempty intersections (the shaded regions in Figure 2), while numerous other combinations, collected at random, lead to empty intersections, implying that such combinations are very unlikely to be transmitted from the nodes. In this work, we focus on developing tools that allow us to exploit this observation in order to eliminate the redundancy. More specifically, we consider a novel way of reducing the effective number of quantization bins consumed by all the nodes involved while preserving localization performance. Suppose that one of the nodes reduces the number of bins that are being used. This will cause a corresponding increase in uncertainty. However, the fusion node, which receives a combination of bins from all the nodes, should be able to compensate for this increase by using the data from the other nodes as side information.

We propose a novel distributed encoding algorithm that allows us to achieve significant rate savings [8, 10]. With our method, we merge (nonadjacent) quantization bins in a given node whenever we determine that the ambiguity created by this merging can be resolved at the fusion node once information from the other nodes is taken into account. In [11], the authors focused on encoding correlated measurements by merging adjacent quantization bins at each node, achieving rate savings at the expense of distortion. Notice that they search for bins to merge that exhibit redundancy from an encoding perspective, while we find bins whose merging removes redundancy from a *localization perspective*. In addition, while their approach requires computing the distortion for each pair of candidate bins in order to find bins to merge, we develop simple techniques that choose the bins to be merged in a systematic way.

It is noted that our algorithm is an example of binning, as found in Slepian-Wolf and Wyner-Ziv techniques [11, 12]. In our approach, however, we achieve rate savings purely through binning and provide several methods to select candidate bins for merging. We apply our distributed encoding algorithm to a system where the acoustic amplitude sensor model proposed in [4] is considered. Our experiments show significant rate savings (e.g., over 30% with 5 nodes and 4 bits per node) when our novel bin-merging algorithms are used.

This paper is organized as follows. Terminology and definitions are given in Section 2, and the motivation is explained in Section 3. In Section 4, we consider quantization schemes that can be used with the encoding at each node. An iterative encoding algorithm is proposed in Section 5. For noisy situations, we consider a modified encoding algorithm in Section 6 and describe the decoding process and how to handle decoding errors in Section 7. In Section 8, we apply our encoding algorithm to a source localization system where an acoustic amplitude sensor model is employed. Simulation results are given in Section 9, and conclusions are drawn in Section 10.

## 2. Terminology and Definitions

The sensor reading at node $i$ can be modeled as

$$z_i = f_i(\mathbf{x}, \mathbf{P}_i) + w_i, \quad i = 1, \ldots, M, \qquad (1)$$

where $f_i$ denotes the sensor model employed at node $i$ and the measurement noise $w_i$ can be approximated using a normal distribution, $N(0, \sigma_i^2)$. (The sensor models for acoustic amplitude sensors and DOA sensors can be expressed in this form [4, 13].) $\mathbf{P}_i$ is the parameter vector for the sensor model (an example of $\mathbf{P}_i$ for the acoustic amplitude sensor case is given in Section 8). It is assumed that each node measures its sensor reading $z_i$ during each time interval, quantizes it, and sends it to a fusion node, where all sensor readings are used to obtain an estimate of the source location $\mathbf{x}$.

At node $i$, we use an $R_i$-bit quantizer with a dynamic range $[z_{i,\min}, z_{i,\max}]$. We assume that the quantization range can be selected for each node based on desirable properties of their respective sensing ranges [14]. Denote by $q_i(\cdot)$ the quantizer with $L_i = 2^{R_i}$ quantization levels at node $i$, which generates a quantization index $Q_i$. In what follows, $Q_i$ will also be used to denote the quantization bin to which the measurement $z_i$ belongs.

This formulation is general and captures many scenarios of practical interest. For example, $z_i$ could be the energy captured by an acoustic amplitude sensor (this will be the case study presented in Section 8), but it could also be a DOA measurement. (In the DOA case, each measurement at a given node location will be provided by an array of collocated sensors.) Each scenario will obviously lead to a different sensor model $f_i$. We assume that the fusion node needs measurements from *all* $M$ nodes in order to estimate the source location.

For example, assuming that each node measures noiseless sensor readings (i.e., $w_i = 0$), we can construct the set of admissible $M$-tuples by collecting only the combinations of quantization indices that lead to nonempty intersections. (The combinations corresponding to the shaded regions in Figure 2 belong to this set.) How to construct the corresponding set in a noisy situation is explained in Section 6.

This set provides all possible combinations of $(M-1)$-tuples that can be transmitted from the other nodes when a given bin at node $i$ was actually transmitted. In other words, the fusion node will be able to identify which bin actually occurred at node $i$ by exploiting this set as side information whenever there is uncertainty induced by merging bins at node $i$.

Since $(M-1)$ quantized measurements out of each $M$-tuple are used in the actual process of encoding, it is useful to construct, for each node $i$, the set of $(M-1)$-tuples obtained from the admissible $M$-tuples by keeping only the quantization bins at positions other than position $i$. That is, if an $M$-tuple is admissible, then the corresponding $(M-1)$-tuple always belongs to this set. Clearly, there is a one-to-one correspondence between the elements of the two sets.
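To make the construction concrete, the following sketch enumerates the admissible index combinations for a hypothetical two-node, one-dimensional sensor field with an inverse-square energy model and uniform 2-bit quantizers; the node positions, dynamic range, and sensor model are illustrative assumptions, not the paper's configuration.

```python
import itertools

# Hypothetical setup: two nodes on a 1D field, inverse-square energy decay.
NODES = [0.0, 10.0]     # node positions (assumed)
BITS = 2                # R_i = 2 bits -> 4 bins per node

def reading(node_x, src_x):
    """Noiseless energy reading; distance clipped to avoid blow-up near the node."""
    return 1.0 / max(abs(src_x - node_x), 0.5) ** 2

def quantize(z, lo=0.0, hi=4.0, bits=BITS):
    """Uniform quantizer over the dynamic range [lo, hi]."""
    step = (hi - lo) / (1 << bits)
    z = min(max(z, lo), hi - 1e-9)
    return int((z - lo) / step)

def admissible_set(grid=200):
    """M-tuples of bin indices that occur for some source location."""
    s = set()
    for k in range(grid + 1):
        x = 10.0 * k / grid
        s.add(tuple(quantize(reading(n, x)) for n in NODES))
    return s

S = admissible_set()
ALL = set(itertools.product(range(1 << BITS), repeat=len(NODES)))
# Many index combinations never occur; this is the redundancy to exploit.
assert S < ALL
```

Here only 7 of the 16 index pairs ever occur, since a source cannot be close to both nodes at once; the remaining pairs correspond to empty spatial intersections.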

## 3. Motivation: Identifiability

In this section, we assume a noiseless setting; that is, only combinations of quantization indices belonging to the admissible set can occur, and combinations outside it never occur. These sets can be easily obtained when there is no measurement noise (i.e., $\sigma_i = 0$) and no parameter mismatches. As discussed in the introduction, there will be numerous $M$-tuples that a standard scalar quantizer can represent but that are not admissible. Therefore, simple scalar quantization at each node would be inefficient, because it would allow us to represent any of the $M$-tuples. What we would like to determine now is a method such that independent quantization can still be performed at each node while, at the same time, we reduce the redundancy inherent in allowing all combinations to be chosen. Note that, in general, determining that a specific quantizer assignment is not admissible requires having access to the whole vector of indices, which obviously is not possible if quantization has to be performed independently at each node.

In our design, we will look for quantization bins in a given node that can be *merged* without affecting localization. As will be discussed next, this is because the ambiguity created by the merger can be resolved once information obtained from the other nodes is taken into account. Note that this is the basic principle behind distributed source coding techniques: binning at the encoder, which can be disambiguated once side information is made available at the decoder [11, 12, 15] (in this case, quantized values from other nodes).

Since the encoder at node $i$ merges two bins into a single bin, it sends the corresponding merged index to the fusion node whenever the sensor reading belongs to either of the two bins. The decoder will try to determine which of the two merged bins actually occurred at node $i$. To do so, the decoder will use the information provided by the other nodes, that is, their quantization indices. Consider one particular source position for which node $i$ produces one of the two bins and the remaining nodes produce a combination of $M-1$ quantization indices. Then there would be no ambiguity at the decoder, even if the two bins were merged, as long as that $(M-1)$-tuple can accompany only one of them. This follows because the decoder would then be able to determine that only one bin is consistent with the received $(M-1)$-tuple. With the notation adopted earlier, this leads to the following definition:

Definition 1.

Two bins of node $i$ are identifiable, and therefore can be merged, if and only if the sets of $(M-1)$-tuples that can accompany them (as constructed in Section 2) are disjoint.
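Definition 1 can be phrased directly as a set operation. The sketch below assumes the admissible set is given as a set of index tuples; the toy set is hypothetical.

```python
def companions(S, i, b):
    """(M-1)-tuples that accompany bin b at node i in admissible set S."""
    return {t[:i] + t[i + 1:] for t in S if t[i] == b}

def identifiable(S, i, j, k):
    """Definition 1: bins j and k at node i can be merged iff the
    (M-1)-tuples seen alongside them never coincide."""
    return not (companions(S, i, j) & companions(S, i, k))

# Toy admissible set for 2 nodes (hypothetical):
S = {(0, 2), (1, 1), (2, 0), (3, 0)}
assert identifiable(S, 0, 0, 2)      # companions {(2,)} vs {(0,)}: disjoint
assert not identifiable(S, 0, 2, 3)  # both accompanied by (0,): ambiguous
```

If bins 2 and 3 at node 0 were merged, a received index of node 1 equal to 0 could not resolve which one occurred, so that pair must be left unmerged.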

## 4. Quantization Schemes

As mentioned in the previous section, there will be redundancy in the $M$-tuples after quantization, which can be eliminated by our merging technique. However, we can also attempt to reduce this redundancy during quantizer design, before the encoding of the bins is performed. Thus, it is worth considering the effect of the choice of quantization scheme on system performance when the merging technique is employed. In this section, we consider the following three schemes.

*(i) Uniform quantizers*

Since they do not utilize any statistics about the sensor readings for quantizer design, there will be no reduction in redundancy by the quantization scheme. Thus only the merging technique plays a role in improving the system performance.

*(ii) Lloyd quantizers*

Using the statistics of the sensor reading available at node $i$, the $i$th quantizer is designed using the generalized Lloyd algorithm [16], with a cost function that is minimized in an iterative fashion. Since each node considers only the information available to it during quantizer design, there will still exist much redundancy after quantization, which the merging technique can attempt to reduce.

*(iii) Localization specific quantizers (LSQs) proposed in* [7]

While designing a quantizer at node $i$, we can take into account the effect of the quantized sensor readings at the other nodes on the quantizer design by introducing the localization error into a new cost function, which is minimized in an iterative manner. (The new cost function to be minimized is expressed as a Lagrangian functional. The topic of quantizer design in a distributed setting goes beyond the scope of this work; see [7, 8] for detailed information.) Since the correlation between sensor readings is exploited during quantizer design, LSQ combined with our merging technique will show the best performance of all.

We will discuss the effect of quantization and encoding on the system performance based on experiments for an acoustic amplitude sensor system in Section 9.1.
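For scheme (ii), the generalized Lloyd iteration for a scalar reading alternates between a nearest-codeword partition and a centroid update. The following is a generic Lloyd-Max sketch on synthetic samples, not the paper's exact cost function or data.

```python
import random

def lloyd_max(samples, levels, iters=50):
    """1D Lloyd-Max: alternate nearest-codeword partition and centroid update."""
    codebook = sorted(random.sample(samples, levels))
    for _ in range(iters):
        # Partition step: assign each sample to its nearest codeword.
        cells = [[] for _ in range(levels)]
        for z in samples:
            i = min(range(levels), key=lambda c: (z - codebook[c]) ** 2)
            cells[i].append(z)
        # Centroid step: move each codeword to its cell mean.
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return sorted(codebook)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]
cb = lloyd_max(data, 4)
assert len(cb) == 4 and cb == sorted(cb)
```

For a roughly Gaussian reading distribution, the four codewords spread symmetrically around the mean, unlike the uniform quantizer of scheme (i).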

## 5. Proposed Encoding Algorithm

In general, there will be multiple pairs of identifiable quantization bins that can be merged. Often, all candidate identifiable pairs cannot be merged simultaneously; that is, after a pair has been merged, other candidate pairs may become nonidentifiable. In what follows, we propose algorithms to determine in a sequential manner which pairs should be merged.

The metric in (7) is a weighted sum of the bin probability and the number of combinations of $(M-1)$-tuples that include the bin. If the bin probability is large, the corresponding bin would be a good candidate for merging under criterion (1), whereas a small number of combinations indicates a good choice under criterion (2). In our proposed procedure, for a suitable value of the weighting factor, we seek to merge first those identifiable bins having the largest total weighted metric. This is repeated iteratively until no identifiable bins are left. The weighting factor can be chosen heuristically so as to minimize the total rate; for example, several different values could be evaluated in (7) to first determine an applicable range, which is then searched for a proper value. Clearly, the best choice depends on the application.

The proposed *global merging algorithm* is summarized as follows.

Step 1.

Mark every bin at every node as a candidate for merging, indicating that none of the bins has been merged yet.

Step 2.

Find, over all the nonmerged bins of all nodes, the bin with the largest metric.

Step 3.

Among the bins at the same node that are identifiable with the bin found in Step 2, find the one with the largest metric and go to Step 4. If there are no bins identifiable with it, mark the bin as no longer involved in the merging process. If no bins remain involved, stop; otherwise, go to Step 2.

Step 4.

Merge the two bins found in Steps 2 and 3 into a single bin, update the metric, and go to Step 2.
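The four steps above can be sketched as a greedy loop. This simplified variant uses the number of accompanying $(M-1)$-tuples as its merging priority (a stand-in for the weighted metric in (7)) and re-evaluates identifiability against the already merged indices of the other nodes; all names are illustrative.

```python
def merge_tables(S, num_bins):
    """Greedy bin merging (simplified Steps 1-4): per node, map each
    original bin to a merged group; two groups are merged only if their
    accompanying (M-1)-tuples, seen through the current tables of the
    other nodes, are disjoint (identifiability, Definition 1)."""
    M = len(num_bins)
    table = [{b: b for b in range(n)} for n in num_bins]

    def companions(i, g):
        """(M-1)-tuples (as merged indices) accompanying group g at node i."""
        return {tuple(table[m][t[m]] for m in range(M) if m != i)
                for t in S if table[i][t[i]] == g}

    merged = True
    while merged:                      # repeat until no pair can be merged
        merged = False
        for i in range(M):
            groups = sorted(set(table[i].values()),
                            key=lambda g: len(companions(i, g)))
            for a in range(len(groups)):
                for b in range(a + 1, len(groups)):
                    if not companions(i, groups[a]) & companions(i, groups[b]):
                        for bin_, g in table[i].items():
                            if g == groups[b]:
                                table[i][bin_] = groups[a]
                        merged = True
                        break
                if merged:
                    break
            if merged:
                break
    return table

# Toy admissible set for 2 nodes with 4 bins each (hypothetical).
S = {(0, 2), (1, 1), (2, 0), (3, 0)}
T = merge_tables(S, (4, 4))
assert len(set(T[0].values())) < 4    # node 0 now uses fewer indices
```

Each merge strictly reduces the total number of groups, so the loop terminates; the resulting tables are exactly the per-node merging tables used in actual operation.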

In the proposed algorithm, the search for the maximum of the metric is done for the bins of all nodes involved. However, different approaches can be considered for the search. These are explained as follows.

Method 1 (*Complete sequential merging*).

In this method, we process one node at a time in a specified order. For each node, we merge the maximum number of bins possible before proceeding to the next node. Merging decisions are not modified once made. Since we exhaust all possible mergers in each node, after scanning all the nodes no more additional mergers are possible.

Method 2 (*Partial sequential merging*).

In this method, we again process one node at a time in a specified order. For each node, among all possible bin mergers, the best one according to a criterion is chosen (the criterion could be entropy-based; e.g., (7) is used in this paper), and after the chosen bins are merged we proceed to the next node. This process is continued until no additional mergers are possible in any node. This may require multiple passes through the set of nodes.

These two methods can be easily implemented with minor modifications to our proposed algorithm. Notice that the final result of the encoding algorithm is a set of merging tables, each of which records which bins are merged at each node in actual operation. That is, each node merges its quantization bins using the merging table stored at the node and sends the merged bin index to the fusion node, which then tries to determine which bin actually occurred via the decoding process, using the merging tables and the set of admissible combinations.

### 5.1. Incremental Merging

The complexity of the above procedures is a function of the total number of quantization bins, and thus of the number of nodes involved. These approaches could potentially be complex for large sensor fields. We now show that incremental merging is possible; that is, we can start by performing the merging based on a subset of the sensor nodes, and it can be guaranteed that merging decisions that were valid when only those nodes were considered remain valid even when all $M$ nodes are taken into account. To see this, suppose that two bins at a node are identifiable when only $K < M$ nodes are considered, so that, by Definition 1, their accompanying sets of $(K-1)$-tuples are disjoint. Since every $(M-1)$-tuple is constructed by concatenating $M-K$ additional indices to the corresponding $(K-1)$-tuple, two $(M-1)$-tuples can coincide only if their $(K-1)$-tuple prefixes coincide. By this property, the accompanying sets of $(M-1)$-tuples are also disjoint, implying that the two bins are still identifiable when we consider $M$ nodes. Thus, we can start the merging process with just two nodes and continue further merging by adding one node (or a few) at a time, without changing previously merged bins. When many nodes are involved, this leads to significant savings in computational complexity. In addition, if some of the nodes are located far away from the nodes being added (i.e., the dynamic ranges of their quantizers do not overlap with those of the nodes being added), they can be skipped for further merging without loss of merging performance.

## 6. Extension of Identifiability: ε-Identifiability

Under real operating conditions there exist measurement noise and/or parameter mismatches, so it is computationally impractical to construct a set of index combinations satisfying the assumption under which the merging algorithm was derived in Section 3. Instead, we construct a set $S_\varepsilon$ that captures all but the least likely combinations, and we propose an extended version of *identifiability* that allows us to still apply the merging technique in noisy situations. With this consideration, Definition 1 can be extended as follows.

Definition 2.

Two bins of node $i$ are $\varepsilon$-identifiable, and therefore can be merged, if and only if their accompanying sets of $(M-1)$-tuples, constructed from $S_\varepsilon$ in the same way as in Section 2, are disjoint. Obviously, to maximize the rate gain achievable by the merging technique, we need to make $S_\varepsilon$ as small as possible for a given error tolerance. Ideally, we could build the set by collecting only the $M$-tuples with high probability, although this would require huge computational complexity, especially when many nodes are involved at high rates. In this work, we suggest the following procedure for constructing $S_\varepsilon$ with reduced complexity.

Step 1.

Compute, for each node, an interval that captures the sensor reading with high probability. Since the measurement noise in (1) is Gaussian, we can construct an interval that is symmetric with respect to the noiseless sensor model output, with a half-width proportional to the noise standard deviation, so that the reading falls inside it with the desired probability. Notice that the half-width is determined by the target probability alone; it is not a function of the source location.

Step 2.

From the intervals computed in Step 1, we generate the possible $M$-tuples of quantization bins that overlap them, and denote by $S_\varepsilon$ the set containing such tuples. It is noted that the process of generating $M$-tuples from intervals is deterministic, given the quantizers: simple programming allows us to check, for each candidate tuple, whether every bin in it overlaps the corresponding interval.

Step 3.

As the intervals shrink, $S_\varepsilon$ is asymptotically reduced to the set constructed in the noiseless case. It should be mentioned that this procedure provides a tool that enables us to change the size of $S_\varepsilon$ simply by adjusting the interval width. Obviously, computation of the probability of every $M$-tuple is unnecessary.
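The procedure of Steps 1-3 can be sketched as follows, reusing a hypothetical two-node inverse-square model and uniform 2-bit quantizers (all parameter values are illustrative): for each candidate source location, every bin overlapping the per-node confidence interval is admitted.

```python
import itertools

SIGMA, DELTA = 0.05, 3.0    # noise std and interval half-width in sigmas (assumed)
NODES = [0.0, 10.0]
BITS = 2
LO, HI = 0.0, 4.0
STEP = (HI - LO) / (1 << BITS)

def reading(node_x, src_x):
    """Noiseless reading from a hypothetical inverse-square model."""
    return 1.0 / max(abs(src_x - node_x), 0.5) ** 2

def bins_hit(z):
    """Quantizer bins overlapping [z - DELTA*SIGMA, z + DELTA*SIGMA]."""
    a = min(max(z - DELTA * SIGMA, LO), HI - 1e-9)
    b = min(max(z + DELTA * SIGMA, LO), HI - 1e-9)
    return range(int((a - LO) / STEP), int((b - LO) / STEP) + 1)

def s_eps(grid=200):
    """Step 2: tuples whose bins all overlap the per-node intervals."""
    s = set()
    for k in range(grid + 1):
        x = 10.0 * k / grid
        s.update(itertools.product(*(bins_hit(reading(n, x)) for n in NODES)))
    return s

S_eps = s_eps()
# The noiseless admissible set is always contained in the inflated set.
S0 = {tuple(int((min(max(reading(n, 10.0 * k / 200), LO), HI - 1e-9) - LO) / STEP)
            for n in NODES) for k in range(201)}
assert S0 <= S_eps
```

Shrinking `DELTA` shrinks `S_eps` toward the noiseless set, trading decoding-error probability against rate savings, as discussed above.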

Notice that all the merged bins are $\varepsilon$-identifiable (or identifiable) at the fusion node as long as the $M$-tuple to be encoded belongs to $S_\varepsilon$ (or to the noiseless set). In other words, decoding errors are generated when combinations outside $S_\varepsilon$ occur, so there is a tradeoff between rate savings and decoding errors: if we make $S_\varepsilon$ as small as possible, we can achieve good rate savings at the expense of a larger decoding error probability, which could lead to degradation of localization performance. Handling of decoding errors will be discussed in Section 7.

## 7. Decoding of Merged Bins and Handling Decoding Errors

Suppose that we have a set of $K$ candidate $M$-tuples, obtained by decomposing the received merged indices via the merging tables. Clearly, the true $M$-tuple transmitted before encoding is among them (see Figure 4). Notice that if the true $M$-tuple belongs to $S_\varepsilon$, then all merged bins are identifiable at the fusion node; that is, after decomposition, there is only one decomposed $M$-tuple belonging to $S_\varepsilon$. (As the decomposition is processed, all the decomposed $M$-tuples except the true one will be discarded, since they do not belong to $S_\varepsilon$.) In this case we declare decoding successful. Otherwise, we declare a decoding error and apply the decoding rules, explained in the following subsections, to handle those errors. Since a decoding error occurs only when the true $M$-tuple falls outside $S_\varepsilon$, the decoding error probability will be bounded by the probability of that event.

It is observed that, since the decomposed $M$-tuples are produced from the true tuple via the merging tables, the spurious candidates are very likely to have low probability. In other words, since the encoding process merges two quantization bins only when the $M$-tuples containing either of them are very unlikely to occur at the same time, the decomposed $M$-tuples other than the true one tend to take very low probability.

### 7.1. Decoding Rule 1: Simple Maximum Rule

The rule selects, among the decomposed $M$-tuples, the one with the maximum probability; this decoded $M$-tuple is then forwarded to the localization routine.

### 7.2. Decoding Rule 2: Weighted Decoding Rule

Each decomposed $M$-tuple is scored by its probability multiplied by a weight, where the weight equals 1 for tuples belonging to $S_\varepsilon$ and is otherwise set to a small number (as in our experiments). Note that, for an appropriate choice of the weight, the weighted decoding rule reduces to the simple maximum rule in (8).
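The two rules can be sketched together: expand each received merged index through the merging tables, then score the candidate $M$-tuples. The probability model, tables, and weight value below are hypothetical.

```python
import itertools

def decode(received, tables, prob, s_eps, w=1e-3):
    """Weighted decoding rule (sketch): expand each received merged index
    into its candidate original bins, then pick the most probable M-tuple,
    down-weighting candidates outside the admissible set s_eps by w.
    With w = 0, only admissible candidates survive, matching the simple
    maximum rule whenever a single admissible candidate remains."""
    candidates = [[b for b, g in tables[i].items() if g == received[i]]
                  for i in range(len(received))]
    score = lambda t: prob.get(t, 0.0) * (1.0 if t in s_eps else w)
    return max(itertools.product(*candidates), key=score)

# Hypothetical tables: node 0 merged bins 0 and 1 into index 0.
tables = [{0: 0, 1: 0, 2: 2, 3: 3}, {b: b for b in range(4)}]
prob = {(0, 2): 0.4, (1, 1): 0.3, (2, 0): 0.2, (3, 0): 0.1}
s_eps = set(prob)
assert decode((0, 2), tables, prob, s_eps) == (0, 2)
assert decode((0, 1), tables, prob, s_eps) == (1, 1)
```

In both test cases only one decomposed tuple lies in `s_eps`, so decoding resolves the merged index unambiguously; the weight `w` matters only when several candidates survive.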

## 8. Application to Acoustic Amplitude Sensor Case

Here the parameter vector in (1) consists of the gain factor $g_i$ of the $i$th node, an energy decay factor $\alpha$, which is approximately equal to 2 in free space, and the source signal energy $a$. The measurement noise term can be approximated using a normal distribution, $N(0, \sigma_i^2)$. In (11), it is assumed that the signal energy $a$ is uniformly distributed over a known range.

Thus the $i$th sensor reading is expressed as the sum of the sensor model output and the measurement noise.
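One common form of this energy-decay model, assumed here with illustrative parameter values, can be sketched as:

```python
import math
import random

def acoustic_reading(src, node, gain=1.0, alpha=2.0, energy=1.0, sigma=0.05):
    """Acoustic amplitude model (assumed form): z = gain * energy / d^alpha + w,
    with w ~ N(0, sigma^2) and alpha ~ 2 for free-space energy decay."""
    d = math.dist(src, node)
    return gain * energy / d ** alpha + random.gauss(0.0, sigma)

random.seed(1)
z = acoustic_reading((0.0, 0.0), (3.0, 4.0))   # d = 5, noiseless mean = 0.04
assert abs(z - 0.04) < 0.5                     # reading stays near the mean
```

Readings fall off rapidly with distance, which is why only nodes near the source produce high-index bins, the geometric fact the merging technique exploits.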

In the noiseless case, the nodes involved in localization of any given source deterministically generate the same $M$-tuple, so the set of admissible combinations can be computed deterministically and coincides with the set of $M$-tuples that actually occur. Thus, we can apply our merging technique to this case and achieve significant rate savings without any degradation of localization accuracy (no decoding errors).

However, measurement noise and/or unknown signal energy complicate the problem by allowing random realizations of the $M$-tuples generated by the nodes for any given source location. For this case, we construct $S_\varepsilon$ by following the procedure in Section 6 and apply the decoding rules explained in Section 7 to handle decoding errors.

## 9. Experimental Results

The distributed encoding algorithm described in Section 5 is applied to a system where each node employs the acoustic amplitude sensor model given by (11) for source localization. The experimental results are provided in terms of average localization error (clearly, the localization error is affected by the estimator employed at the fusion node; estimation algorithms go beyond the scope of this work, see [9] for detailed information) and rate savings (%), computed as $100 \times (R_Q - R_M)/R_Q$, where $R_Q$ is the rate consumed by the nodes when only independent entropy coding (Huffman coding) is used after quantization and $R_M$ is the rate when the merging technique is applied to the quantized data before entropy coding. We assume that each node uses the LSQ described in Section 4 (for further details, refer to [7]), except where otherwise stated.
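As a sanity check, the rate-savings computation can be sketched with per-node entropies standing in for actual Huffman code rates; the joint distribution and merge table below are hypothetical.

```python
import math
from collections import Counter

def per_node_entropy_sum(prob, tables=None):
    """Sum over nodes of the entropy (bits) of that node's index,
    optionally after mapping bins through per-node merge tables."""
    M = len(next(iter(prob)))
    total = 0.0
    for i in range(M):
        marg = Counter()
        for t, p in prob.items():
            idx = tables[i][t[i]] if tables else t[i]
            marg[idx] += p
        total += -sum(p * math.log2(p) for p in marg.values() if p > 0)
    return total

prob = {(0, 2): 0.4, (1, 1): 0.3, (2, 0): 0.2, (3, 0): 0.1}
tables = [{0: 0, 1: 0, 2: 2, 3: 3}, {b: b for b in range(4)}]  # node-0 bins 0,1 merged
R_q = per_node_entropy_sum(prob)
R_m = per_node_entropy_sum(prob, tables)
savings = 100 * (R_q - R_m) / R_q
assert 0 < savings < 100
```

Merging two bins concentrates probability mass on one index, lowering the marginal entropy of that node and hence the total rate; here the toy example yields savings around 20%.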

### 9.1. Distributed Encoding Algorithm: Noiseless Case

### 9.2. Encoding with ε-Identifiability and Decoding Rules: Noisy Case

### 9.3. Performance Comparison

We address the question of how our technique compares with the best achievable performance for this source localization scenario. As a bound on achievable performance we consider a system where (i) each node quantizes its measurement independently and (ii) the quantization indices generated by all nodes for a given source location are jointly coded (in our case, we use the joint entropy of the vector of measurements as the rate estimate).

Note that this is not a realistic bound, because joint coding cannot be achieved unless the nodes are able to communicate before encoding. In order to approximate the behavior of the joint entropy coder via DSC techniques, one would have to transmit multiple sensor readings of the source energy from each node as the source moves around the sensor field. Some of the nodes could send measurements that are directly encoded, while others could transmit a syndrome produced by an error correcting code based on the quantized measurements. Then, as the fusion node receives all the information from the various nodes, it would be able to exploit the correlation between the measurements and approximate the joint entropy. This method would not be desirable, however, because the information in each node depends on the location of the source, and thus to obtain a reliable estimate of the measurement at all nodes one would have to have measurements at a sufficient number of positions of the source. Thus, instantaneous localization of the source would not be possible. The key point here, then, is that the randomness between measurements across nodes is driven by the location of the source, which is precisely what we wish to estimate.

## 10. Conclusion and Future Work

Using the distributed property of the quantized sensor readings, we proposed a novel encoding algorithm that achieves significant rate savings by merging quantization bins. We also developed decoding rules to deal with the decoding errors that can be caused by measurement noise and/or parameter mismatches. In our experiments, we showed that a system equipped with the distributed encoders achieves significant data compression as compared with standard systems.

So far, we have considered encoding algorithms for fixed quantizers. However, since there exists a dependency between quantization and the encoding of quantized data that can be exploited for better performance, a joint design of quantizers and encoders is worth considering.

## Declarations

### Acknowledgments

The authors would like to thank the anonymous reviewers for their careful reading of the paper and useful suggestions which led to significant improvements in the paper. This research has been funded in part by the Pratt & Whitney Institute for Collaborative Engineering (PWICE) at USC, and in part by NASA under the Advanced Information Systems Technology (AIST) program. The work was presented in part in IEEE International Symposium on Information Processing in Sensor Networks (IPSN), April 2005.

## References

1. Zhao F, Shin J, Reich J: Information-driven dynamic sensor collaboration. *IEEE Signal Processing Magazine* 2002, 19(2):61-72. doi:10.1109/79.985685
2. Chen JC, Yao K, Hudson RE: Source localization and beamforming. *IEEE Signal Processing Magazine* 2002, 19(2):30-39. doi:10.1109/79.985676
3. Li D, Wong KD, Hu YH, Sayeed AM: Detection, classification, and tracking of targets. *IEEE Signal Processing Magazine* 2002, 19(2):17-29. doi:10.1109/79.985674
4. Li D, Hu YH: Energy-based collaborative source localization using acoustic microsensor array. *EURASIP Journal on Applied Signal Processing* 2003, 2003(4):321-337. doi:10.1155/S1110865703212075
5. Chen JC, Yao K, Hudson RE: Acoustic source localization and beamforming: theory and practice. *EURASIP Journal on Applied Signal Processing* 2003, 2003(4):359-370. doi:10.1155/S1110865703212038
6. Chen JC, Yip L, Elson J, Wang H, Maniezzo D, Hudson RE, Yao K, Estrin D: Coherent acoustic array processing and localization on wireless sensor networks. *Proceedings of the IEEE* 2003, 91(8):1154-1161. doi:10.1109/JPROC.2003.814924
7. Kim YH, Ortega A: Quantizer design for source localization in sensor networks. *Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05)*, March 2005, 857-860.
8. Kim YH: *Distributed Algorithms for Source Localization Using Quantized Sensor Readings*. Ph.D. dissertation, USC; December 2007.
9. Kim YH, Ortega A: Maximum a posteriori (MAP)-based algorithm for distributed source localization using quantized acoustic sensor readings. *Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '06)*, May 2006, 1053-1056.
10. Kim YH, Ortega A: Quantizer design and distributed encoding algorithm for source localization in sensor networks. *Proceedings of the 4th International Symposium on Information Processing in Sensor Networks (IPSN '05)*, April 2005, 231-238.
11. Flynn TJ, Gray RM: Encoding of correlated observations. *IEEE Transactions on Information Theory* 1988, 33(6):773-787.
12. Ishwar P, Puri R, Ramchandran K, Pradhan SS: On rate-constrained distributed estimation in unreliable sensor networks. *IEEE Journal on Selected Areas in Communications* 2005, 23(4):765-774.
13. Liu J, Reich J, Zhao F: Collaborative in-network processing for target tracking. *EURASIP Journal on Applied Signal Processing* 2003, 2003(4):378-391. doi:10.1155/S111086570321204X
14. Yang H, Sikdar B: A protocol for tracking mobile targets using sensor networks. *Proceedings of the IEEE Workshop on Sensor Network Protocols and Applications (SNPA '03)*, May 2003, Anchorage, Alaska, USA, 71-81.
15. Cover TM, Thomas JA: *Elements of Information Theory*. Wiley-Interscience, New York, NY, USA; 1991.
16. Sayood K: *Introduction to Data Compression*. 2nd edition. Morgan Kaufmann Publishers, San Francisco, Calif, USA; 2000.
17. Hero AO III, Blatt D: Sensor network source localization via projection onto convex sets (POCS). *Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05)*, March 2005, 689-692.
18. Rappaport TS: *Wireless Communications: Principles and Practice*. Prentice-Hall, Upper Saddle River, NJ, USA; 1996.

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.