Blind distributed estimation algorithms for adaptive networks
© Bin Saeed et al.; licensee Springer. 2014
Received: 16 July 2014
Accepted: 19 July 2014
Published: 1 September 2014
Recently, a considerable amount of work has been done to develop algorithms that exploit the distributed structure of an ad hoc wireless sensor network to estimate a parameter of interest. However, all these algorithms assume that the input regressor data is available to the sensors, which is not always the case. When it is not, blind estimation of the required parameter is needed. This work formulates two newly developed blind block-recursive algorithms, based on singular value decomposition (SVD) and Cholesky factorization, respectively. These adaptive algorithms are then used for blind estimation in a wireless sensor network through diffusion of data among cooperating sensors. Simulation results show that performance improves greatly over the case where the sensors do not cooperate.
Keywords: Blind estimation; Diffusion; Adaptive networks
Several algorithms have been devised in the literature for distributed estimation [1–5]. The work in  introduces a distributed estimation approach using the recursive least squares algorithm. Other algorithms involving the least-mean-square (LMS) approach have also been suggested [2–5].
However, all these algorithms assume that the input regressor data, u_{k,i}, is available at the sensors. If this information is not available, the problem becomes one of blind estimation. Blind algorithms have been a topic of interest ever since Sato devised a blind algorithm  in the context of equalization . Since then, several algorithms have been derived for blind estimation [8–15]. The work in  summarizes the second-order-statistics-based approaches to blind identification. These include multichannel as well as single-channel blind estimation methods, such as the works in  and . The work in  is one of the most cited blind estimation techniques for a single-input-single-output (SISO) model. It is shown in , however, that the technique of  can be improved upon using only two blocks of data. A key idea from  is then used in  to devise an algorithm that does indeed improve on the algorithm of , although its computational complexity is very demanding. A generalized algorithm is devised in , improving upon both algorithms developed in [12, 13]. In , a Cholesky factorization-based least squares solution is suggested that simplifies the work of [11, 13, 14]. Although the performance of the algorithm developed in  is not as good as that of the earlier algorithms, it provides an excellent trade-off between performance and computational complexity. Thus, in systems where complexity must be kept low and some loss in performance can be tolerated, this algorithm is a good substitute for the algorithms developed in [12, 13].
As mentioned above, when the input regressor data is not available in the WSN environment, blind estimation techniques become mandatory. Since blind estimation techniques have not yet been developed for this setting, blind block-recursive least squares algorithms have to be devised, inspired by the works in  and , and then implemented in a distributed WSN environment using the diffusion approach suggested in .
The following notation is used. Boldface letters denote vectors and matrices, and normal font denotes scalar quantities. Capital letters are used for matrices and small letters for vectors. The notation (·)^T stands for transposition of vectors and matrices, and the expectation operation is denoted by E[·]. Any other mathematical operators will be defined as and when introduced.
The paper is organized as follows: Section 2 defines the problem statement. Section 3 gives a brief overview of the blind estimation algorithms considered in this work. Section 4 proposes the newly developed recursive forms of the two algorithms, as well as their diffusion counterparts, for use in wireless sensor networks. Section 5 studies the computational complexity of all the algorithms. The simulation results are discussed in detail in Section 6. Finally, the paper is concluded in Section 7.
2 Problem statement
The sensed data at node k and time i is modeled as

d_k(i) = u_{k,i} w^o + v_k(i),

where u_{k,i} is a (1×M) input regressor vector, v_k(i) is a spatially uncorrelated zero-mean additive white Gaussian noise with variance σ²_{v,k}, and i denotes the time index. The input data is assumed to be Gaussian. The aim of this work is to estimate the unknown vector w^o using the sensed data d_k(i), without knowledge of the input regressor vector. The estimate of the unknown vector is denoted by an (M×1) vector w_{k,i}. It is assumed that each node k cooperates only with its neighbors, so that at every time instant i, node k has access to the updates w_{l,i} from its neighboring nodes l, in addition to its own estimate w_{k,i}. The adapt-then-combine (ATC) diffusion scheme  first updates the local estimate at each node using the adaptive algorithm and then fuses together the estimates from the neighboring nodes. This scheme is used in this work for the development of our distributed algorithm. Note that, even though this work assumes a fixed topology, it can be extended to a dynamic one.
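The ATC diffusion step can be sketched as follows (a minimal illustration assuming uniform combination weights over each neighborhood; the function names are hypothetical and the per-node adaptation step is left abstract):

```python
import numpy as np

def atc_diffusion_step(w, neighbors, adapt):
    """One adapt-then-combine (ATC) diffusion iteration.

    w         : (K, M) array, current estimate at each of K nodes
    neighbors : list of index lists; neighbors[k] contains k and its neighbors
    adapt     : callable(k, w_k) -> intermediate local update psi_k
    """
    K, _ = w.shape
    # Adaptation step: each node updates its own estimate locally.
    psi = np.array([adapt(k, w[k]) for k in range(K)])
    # Combination step: fuse the intermediate estimates of the neighborhood
    # (uniform weights here, purely for illustration).
    return np.array([psi[neighbors[k]].mean(axis=0) for k in range(K)])
```

With the identity adaptation step, one ATC iteration on a fully connected network simply averages the node estimates, which illustrates how the combination step pulls the nodes toward a common estimate.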
3 Blind estimation algorithm
In this work, the input regressor data u_{k,i} is assumed to be unavailable to the sensors, and the unknown vector w^o is estimated using only the sensed values d_k(i). Since the data considered here is Gaussian, a method using only second-order statistics is sufficient for such an estimation problem, as these capture all the required data statistics. Even for non-Gaussian data, such an approach provides a suboptimal yet sufficiently accurate estimate. The work in  uses the second-order statistics in an intelligent manner to create a null space with respect to the unknown vector w^o. At the receiver end, this null space is then exploited to estimate the unknown vector. The authors in  further simplify the algorithm of  by proposing a new algorithm that reduces complexity, but at the cost of some performance degradation. These two algorithms are considered in this work, as one provides excellent results whereas the other provides a computationally tractable solution.
3.1 Singular value decomposition-based blind algorithm
The final parameter estimate is given by the unique solution (up to a constant factor) of Equation 9. It is important to note here that due to the presence of noise, the final estimate is not accurate.
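A solution that is unique only up to a constant factor is conventionally obtained via the SVD: the minimizer of ||Qw|| subject to ||w|| = 1 is the right singular vector of the smallest singular value. The sketch below shows this generic null-space solve (it omits the specific Hankel-matrix construction of the cited work; the function name is hypothetical):

```python
import numpy as np

def null_space_solve(Q):
    """Solve Q w ≈ 0 for a unit-norm w (unique up to a constant factor).

    The minimizer of ||Q w|| with ||w|| = 1 is the right singular vector
    associated with the smallest singular value of Q.
    """
    _, _, Vt = np.linalg.svd(Q)
    return Vt[-1]  # last row of V^T corresponds to the smallest singular value
```

Because the solution is defined only up to scale (and sign), estimates from such a solver are typically compared to the true vector after normalization.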
3.2 Cholesky factorization-based blind algorithm
However, this information, particularly the information about the input regressor data, is not always known and cannot be easily estimated either. Therefore, the correlation matrix of the unknown parameter vector has to be approximated by the correlation matrix of the received/sensed data. Now the algorithm in  uses the Cholesky factor of this correlation matrix to provide a least squares estimate of the unknown parameter vector.
The work in  also gives a method for estimating the noise variance that is adequate on the whole but may not provide correct estimates at low SNR. As a result, subtracting the estimated variance from the autocorrelation matrix may not yield a positive-definite matrix, in which case the use of Cholesky factorization is no longer justified. However, neglecting the noise variance estimate altogether may lead to a poor estimate of the parameter vector. Despite this shortcoming, the main advantage of this method remains its very low computational complexity. Whereas the method of  requires the singular value decomposition of the autocorrelation matrix, followed by the building of Hankel matrices using the null eigenvectors and then the solution of an over-determined set of linear equations, the method of  simply evaluates the Cholesky factor (upper triangular matrix) of the autocorrelation matrix and uses it to find the required estimate directly. Computational complexity is thus greatly reduced, but at the cost of some performance degradation.
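The compensation-and-factorization step just described can be sketched as follows (a minimal illustration with a hypothetical function name, not the exact estimator of the cited work; the fallback simply drops the noise compensation when positive definiteness is lost, which is one possible way to handle the low-SNR case discussed above):

```python
import numpy as np

def cholesky_factor_denoised(R, noise_var):
    """Cholesky factor of the noise-compensated autocorrelation matrix.

    R         : (M, M) sample autocorrelation of the sensed data
    noise_var : estimated additive-noise variance

    Returns the upper-triangular Cholesky factor. If subtracting the
    noise variance breaks positive definiteness (possible at low SNR),
    fall back to factorizing the uncompensated matrix.
    """
    R_comp = R - noise_var * np.eye(R.shape[0])
    try:
        L = np.linalg.cholesky(R_comp)   # lower-triangular factor of R_comp
    except np.linalg.LinAlgError:
        L = np.linalg.cholesky(R)        # fallback: ignore the noise estimate
    return L.T                           # upper-triangular factor
```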
Both of the above-mentioned methods require several blocks of data to be stored before estimation can be performed. Although the least squares approximation gives a good estimate, a sensor network requires an algorithm that can be deployed in a distributed manner, which is possible only with recursive algorithms. Therefore, the first step would be to make both algorithms in  and  recursive in order to utilize them in a WSN setup.
4 Proposed recursive blind estimation algorithms
In the following, the previously mentioned blind estimation algorithms are made recursive and applied over a wireless sensor network.
4.1 Blind block recursive SVD algorithm
It can be seen from (23) that the recursive algorithm is not computationally less complex. However, it requires less memory than the original algorithm of , and its result improves as the number of processed data blocks increases. Its performance almost matches that of the algorithm of .
4.2 Blind block recursive Cholesky algorithm
In this section, we show how the algorithm of  can be converted into a blind block recursive solution.
where λ is a variable forgetting factor.
4.3 Diffusion blind block recursive algorithms
In a wireless sensor network, a distributed algorithm is required through which nodes can interact with each other to improve both their individual estimates and the overall performance of the network. Such a setting calls for a recursive algorithm, which is one major reason for deriving the recursive blind algorithms above. Each node individually updates its estimate and then collaborates with its neighboring nodes to improve that estimate. A comparison of different distributed schemes has shown that the adapt-then-combine (ATC) diffusion strategy provides the best performance . Therefore, we implement our distributed algorithms using the ATC scheme.
5 Complexity of the recursive algorithms
In order to fully understand the variation in performance of these two algorithms, it is necessary to look at their computational complexity as it will allow us to estimate the loss in performance that would result from a reduction in computational load. We first analyze the complexity of the original algorithms and then deal with that of their recursive versions.
5.1 Blind SVD algorithm
5.2 Blind Cholesky algorithm
5.3 Blind block recursive SVD algorithm
5.4 Blind block recursive Cholesky algorithm
5.5 Comparison of all algorithms
Table 1: Computations for original least squares algorithms under different settings (M = 4; (N, K) = (10, 8), (10, 10), (20, 8), (20, 10), and (20, 20)).
Table 2: Computations for recursive algorithms under different settings (M = 4; K = 8, 10, and 20).
Table 1 lists the number of computations for the original algorithms, showing that the Cholesky-based method requires fewer computations than the SVD-based method, so the trade-off between performance and complexity is justified. If the number of blocks is small, the Cholesky-based method may even perform better than the SVD-based method, as shown in . Here it is assumed that the exact length of the unknown vector is known. In general, only an upper bound on this length is known and is used instead of the exact value, resulting in an increase in computations. The exact-length assumption is made for both algorithms here to keep their comparison fair.
Table 2 lists the computations per iteration for the recursive versions of the two algorithms. RS and RC give the number of computations for the recursive SVD algorithm and the recursive Cholesky algorithm, respectively. RCNV lists the number of computations when the noise variance is estimated only once in the recursive Cholesky algorithm. This shows how the complexity of the algorithm can be reduced by an order of magnitude by adopting an extra implicit assumption: that the noise is wide-sense stationary, so its variance is constant from one iteration to the next. Although performance suffers slightly, the reduction in computational complexity more than compensates for this loss.
It is important to note here that even though the SVD and Cholesky factorization operations are run at every iteration, a significant gain is achieved in the calculation of the autocorrelation function. While a batch processing algorithm requires a total of P^2N^2 multiplications, where (P × N) is the size of the data block matrix, the recursive algorithms require only P^2N multiplications. The number of multiplications is thus reduced by a factor of N, which becomes significant when the number of blocks N is large.
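The source of the saving can be seen in a sketch of the rank-one autocorrelation update (an illustrative form of the standard recursive sample-average update; the function name is hypothetical and this is not the paper's exact recursion):

```python
import numpy as np

def update_autocorrelation(R, x, n):
    """Rank-one recursive update of the (P x P) sample autocorrelation.

    R : autocorrelation estimate after n-1 data blocks
    x : new length-P data block
    n : index of the new block (1-based)

    Each call costs O(P^2) multiplications, so processing N blocks costs
    O(P^2 N) in total, versus on the order of P^2 N^2 if the estimate
    were recomputed from all stored blocks at every step.
    """
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    return ((n - 1) * R + x @ x.T) / n
```

A forgetting factor, as in Section 4, would simply weight the old term by λ to emphasize recent blocks; the plain form above reproduces the batch sample average exactly.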
6 Simulations and results
6.1 Performance of the SVD and Cholesky algorithms
6.2 Further simulation-based analysis of the effect of forgetting factor
6.3 Performance of the two algorithms using an optimal forgetting factor
6.4 Effect of block size
6.5 Effect of network size
6.6 Effect of node malfunction
This work develops blind block-recursive least squares algorithms based on Cholesky factorization and singular value decomposition (SVD). The algorithms are then used to estimate an unknown vector of interest in a wireless sensor network using cooperation between neighboring sensor nodes. Incorporating the algorithms into the sensor network creates new diffusion-based algorithms, which are shown to perform much better than their non-diffusion-based counterparts. The two developed algorithms, named the diffusion blind block-recursive Cholesky (DRC) and diffusion blind block-recursive SVD (DRS) algorithms, have been tested with both a variable and a fixed forgetting factor. Extensive simulations comparing the two algorithms under different scenarios revealed that the DRS algorithm performs much better than the DRC algorithm, but at the cost of higher computational complexity. Of the two, the DRC algorithm performs better with a variable forgetting factor, whereas the DRS algorithm gives better results with a fixed forgetting factor. In the case of DRS, the value of the forgetting factor does not greatly affect the overall performance, apart from slight variations in convergence speed and steady-state performance. The size of the data block was also seen to affect the performance of the two algorithms: convergence slows down as the block size, and hence the amount of data to be processed, increases. A larger block size, however, does not necessarily improve performance; in general, a small block size performs better. It is therefore essential to estimate a tight upper bound on the length of the unknown vector so that the data block size is not unnecessarily large. It was further noticed that an increase in the network size improves performance, although the improvement gradually diminishes as the network grows.
Moreover, it was shown that switching off some of the nodes with the largest neighborhoods slightly degrades the performance of the algorithm. Finally, at low SNR, the Cholesky-based algorithm suffers severe degradation, whereas the SVD-based one experiences only a slight degradation.
The authors acknowledge the support provided by the Deanship of Scientific Research (DSR) at KFUPM under Research Grants RG1216, RG1112, SB101024, and FT100012.
1. Sayed AH, Lopes CG: Distributed recursive least-squares strategies over adaptive networks. In Proceedings of the 40th Asilomar Conference on Signals, Systems and Computers. Monterey, CA; 2006:233-237.
2. Lopes CG, Sayed AH: Incremental adaptive strategies over distributed networks. IEEE Trans. Signal Process. 2007, 55:4064-4077.
3. Lopes CG, Sayed AH: Diffusion least-mean squares over adaptive networks: formulation and performance analysis. IEEE Trans. Signal Process. 2008, 56(7):3122-3136.
4. Schizas ID, Mateos G, Giannakis GB: Distributed LMS for consensus-based in-network adaptive processing. IEEE Trans. Signal Process. 2009, 57(6):2365-2382.
5. Bin Saeed MO, Zerguine A, Zummo SA: A variable step-size strategy for distributed estimation over distributed networks. EURASIP J. Adv. Signal Process. 2013, 2013:135. doi:10.1186/1687-6180-2013-135
6. Sato Y: A method of self-recovering equalization for multilevel amplitude-modulation. IEEE Trans. Commun. 1975, COM-23(6):679-682.
7. Proakis J: Digital Communications. McGraw-Hill, New York; 2000.
8. Tong L, Perreau S: Multichannel blind identification: from subspace to maximum likelihood methods. Proc. IEEE 1998, 86(10):1951-1968. doi:10.1109/5.720247
9. Xu G, Liu H, Tong L, Kailath T: A least-squares approach to blind channel identification. IEEE Trans. Signal Process. 1995, 43(12):2982-2993. doi:10.1109/78.476442
10. Abed-Meraim K, Qiu W, Hua Y: Blind system identification. IEEE Trans. Signal Process. 1997, 45(3):770-773. doi:10.1109/78.558501
11. Scaglione A, Giannakis GB, Barbarossa S: Redundant filterbank precoders and equalizers part II: blind channel estimation, synchronization, and direct equalization. IEEE Trans. Signal Process. 1999, 47(7):2007-2022. doi:10.1109/78.771048
12. Manton JH, Neumann WD: Totally blind channel identification by exploiting guard intervals. Syst. Control Lett. 2003, 48(2):113-119. doi:10.1016/S0167-6911(02)00278-5
13. Pham DH, Manton JH: A subspace algorithm for guard interval based channel identification and source recovery requiring just two received blocks. In Proceedings of the IEEE ICASSP '03. Hong Kong; 2003:317-320.
14. Su B, Vaidyanathan PP: A generalized algorithm for blind channel identification with linear redundant precoders. EURASIP J. Adv. Signal Process. 2007, 2007:1-13. Article ID 25672
15. Choi J, Lim C-C: A Cholesky factorization based approach for blind FIR channel identification. IEEE Trans. Signal Process. 2008, 56(4):1730-1735.
16. Cattivelli F, Sayed AH: Diffusion LMS strategies for distributed estimation. IEEE Trans. Signal Process. 2010, 58(3):1035-1048.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.