The VSSLMS algorithms show marked improvement over the LMS algorithm at low computational complexity [20–25]. Therefore, this variation is incorporated into the distributed algorithm so that it inherits the improved performance of the VSSLMS algorithm. Different variants have their own advantages and disadvantages; a complex step-size adaptation algorithm would not be suitable because of the physical limitations of a sensor node. As shown in [23], the algorithm proposed in [20] gives the best performance while having low complexity, and it is therefore well suited for this application. A further comparison of the performance of these variants in the present scenario confirms our choice of the VSSLMS algorithm.
The proposed algorithm incorporates the VSSLMS algorithm into the diffusion scheme given by (4). With a VSSLMS algorithm, the step-size also becomes a variable in the system of equations defining the proposed distributed algorithm. The VSSDLMS algorithm is then governed by the following:
$$\begin{aligned}
\psi_k(i+1) &= w_k(i) + \mu_k(i)\, u_k^T(i)\left[d_k(i) - u_k(i)\, w_k(i)\right] \\
w_k(i+1) &= \sum_{l \in \mathcal{N}_k} c_{lk}\, \psi_l(i+1) \\
\mu_k(i+1) &= f\left[\mu_k(i)\right],
\end{aligned} \qquad (5)$$
where $f[\mu_k(i)]$ is the step-size adaptation function, defined here using the VSSLMS adaptation of [20], whose update equation is given by
$$\mu_k(i+1) = \alpha\, \mu_k(i) + \gamma\, e_k^2(i), \qquad (6)$$

where $e_k(i) = d_k(i) - u_k(i)\, w_k(i)$, $0 < \alpha < 1$, and $\gamma > 0$.
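To make the per-node recursion concrete, here is a minimal Python sketch of one VSSDLMS iteration, split into its adaptation and combination halves. The function names, the default values of alpha and gamma, and the real-valued row-vector regressors are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def vsslms_adapt(w_k, mu_k, u_k, d_k, alpha=0.97, gamma=1e-3):
    """Adaptation half of one VSSDLMS iteration at node k, per (5)-(6).
    Returns the intermediate estimate psi_k(i+1) and the new step-size."""
    e_k = d_k - u_k @ w_k                    # e_k(i) = d_k(i) - u_k(i) w_k(i)
    psi_k = w_k + mu_k * e_k * u_k           # LMS-type incremental update
    mu_next = alpha * mu_k + gamma * e_k**2  # step-size recursion (6)
    return psi_k, mu_next

def diffuse(psi_all, C):
    """Combination half: w_k(i+1) = sum_l c_lk psi_l(i+1), with {C}_lk = c_lk.
    psi_all has shape (N, M); returns the combined estimates, shape (N, M)."""
    return C.T @ psi_all
```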
Since nodes exchange data amongst themselves, the current update at each node is affected by the weighted average of its neighbors' previous estimates. Therefore, to account for this inter-node dependence, it is suitable to study the performance of the network as a whole. Hence, some new variables need to be introduced, and the local variables are transformed into global ones as follows:

$$\begin{aligned}
w(i) &= \mathrm{col}\{w_1(i), \ldots, w_N(i)\}, & U(i) &= \mathrm{diag}\{u_1(i), \ldots, u_N(i)\}, \\
d(i) &= \mathrm{col}\{d_1(i), \ldots, d_N(i)\}, & v(i) &= \mathrm{col}\{v_1(i), \ldots, v_N(i)\}, \\
D(i) &= \mathrm{diag}\{\mu_1(i)\, I_M, \ldots, \mu_N(i)\, I_M\}. &&
\end{aligned}$$
From these new variables, a new set of equations representing the entire network is formed, starting with the relation between the measurements:

$$d(i) = U(i)\, w^{(o)} + v(i), \qquad (7)$$

where $w^{(o)} = Q\, w^o$ and $Q = \mathrm{col}\{I_M, I_M, \ldots, I_M\}$ is an $MN \times M$ matrix. Similarly, the update equations can be remodeled to represent the entire network:
$$\begin{aligned}
w(i+1) &= G\left[w(i) + D(i)\, U^T(i)\left(d(i) - U(i)\, w(i)\right)\right] \\
D(i+1) &= \alpha\, D(i) + \gamma\, E(i),
\end{aligned} \qquad (8)$$
where $G = C \otimes I_M$; $C$ is an $N \times N$ weighting matrix with $\{C\}_{lk} = c_{lk}$; $\otimes$ is the Kronecker product; $D(i)$ is the diagonal step-size matrix; and the error energy matrix, $E(i)$, is given by
$$E(i) = \mathrm{diag}\left\{e_1^2(i)\, I_M, \ldots, e_N^2(i)\, I_M\right\}. \qquad (9)$$
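For illustration, the global recursion (8)-(9) can be sketched directly in terms of the stacked quantities. The helper below builds $G$, $U(i)$, and $D(i)$ explicitly for clarity (a real implementation would exploit their block structure); the orientation chosen for $C$ is an assumption, since combination-weight conventions differ across papers.

```python
import numpy as np

def global_vssdlms_update(w, mu, u_rows, d, C, M, alpha=0.97, gamma=1e-3):
    """One iteration of the global recursion (8)-(9); an illustrative sketch.

    w      : stacked network estimate w(i), shape (M*N,)
    mu     : per-node step-sizes mu_k(i), shape (N,)
    u_rows : per-node regressors u_k(i), shape (N, M)
    d      : stacked measurements d(i), shape (N,)
    C      : N x N combination matrix with {C}_lk = c_lk
    """
    N = d.size
    # Oriented so node k combines with weights c_lk; conventions differ.
    G = np.kron(C.T, np.eye(M))
    U = np.zeros((N, M * N))                  # block-diagonal global regressor
    for k in range(N):
        U[k, k*M:(k+1)*M] = u_rows[k]
    D = np.kron(np.diag(mu), np.eye(M))       # D(i) = diag{mu_k(i) I_M}
    e = d - U @ w                             # stacked error vector
    w_next = G @ (w + D @ (U.T @ e))          # first relation of (8)
    mu_next = alpha * mu + gamma * e**2       # diag of D(i+1) in (8), cf. (9)
    return w_next, mu_next
```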
Considering the above set of equations, the mean and mean-square analyses and the steady-state behavior of the VSSDLMS algorithm are carried out as shown next. The mean analysis considers the stability of the algorithm and derives a bound for the step-size which would guarantee convergence. The mean-square analysis also derives transient and steady-state expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The MSD is defined as the error in the estimate of the unknown vector. The weight-error vector for node k is given by
$$\tilde{w}_k(i) = w^o - w_k(i); \qquad (10)$$
then the MSD can be simply defined as
$$\mathrm{MSD}_k(i) = E\left[\|\tilde{w}_k(i)\|^2\right]. \qquad (11)$$
Similarly, the EMSE is derived from the error equation as follows:

$$e_k(i) = d_k(i) - u_k(i)\, w_k(i) = u_k(i)\, \tilde{w}_k(i) + v_k(i),$$

which can be solved further to get the following expression for the EMSE:

$$\mathrm{EMSE}_k(i) = E\left[\|\tilde{w}_k(i)\|^2_{R_k}\right], \qquad (12)$$
where $R_k$ is the autocorrelation matrix of the regressor for node $k$.
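A minimal sketch of how (10)-(12) are typically estimated in simulation; node_errors is a hypothetical helper, and the expectations are approximated by averaging its outputs over independent runs.

```python
import numpy as np

def node_errors(w_hat_k, w_o, R_k):
    """Instantaneous MSD (11) and EMSE (12) quantities for node k (sketch);
    the expectations are then estimated by averaging over many runs."""
    w_tilde = w_o - w_hat_k          # weight-error vector (10)
    msd = w_tilde @ w_tilde          # ||w_tilde||^2
    emse = w_tilde @ R_k @ w_tilde   # weighted norm ||w_tilde||^2_{R_k}
    return msd, emse
```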
3.1 Mean analysis
To begin with, let us introduce the global weight-error vector, defined in [6, 26] as

$$\tilde{w}(i) = w^{(o)} - w(i). \qquad (13)$$

Incorporating the global weight-error vector into (8) gives
$$\tilde{w}(i+1) = G\left[I_{MN} - D(i)\, U^T(i)\, U(i)\right]\tilde{w}(i) - G\, D(i)\, U^T(i)\, v(i). \qquad (14)$$
Here we use the assumption that the step-size matrix $D(i)$ is independent of the regressor matrix $U(i)$ [20]. Accordingly, for small values of $\gamma$ in (6), the following relation holds asymptotically:

$$E\left[D(i)\, U^T(i)\, U(i)\right] \approx E[D(i)]\, R_U, \qquad (15)$$
where $R_U = E\left[U^T(i)\, U(i)\right]$ is the autocorrelation matrix of $U(i)$. Taking the expectation of both sides of (14) then gives
$$E[\tilde{w}(i+1)] = G\left(I_{MN} - E[D(i)]\, R_U\right) E[\tilde{w}(i)], \qquad (16)$$
where the expectation of the second term of the right-hand side of (14) is 0 since the measurement noise is spatially uncorrelated with the regressor and zero-mean, as explained earlier.
From (16), we see that for stability in the mean we must have $|\lambda_{\max}(GB)| < 1$, where $B = I_{MN} - E[D(i)]\, R_U$. Since $G$ is constructed from $C$ and we know that $\|GB\|_2 \le \|G\|_2 \cdot \|B\|_2$, we can safely infer that
$$|\lambda_{\max}(GB)| \le \|GB\|_2 \le \|G\|_2 \cdot \|B\|_2. \qquad (17)$$
Since there is already the condition that $\|C\|_2 = 1$, and since for noncooperative schemes we have $G = I_{MN}$, we can safely conclude that
$$\|GB\|_2 \le \|B\|_2. \qquad (18)$$
So we can see that the cooperation mode only enhances the stability of the system (for further details, refer to [6, 7]). Since stability also depends on the step-size, the algorithm will be stable in the mean if
$$\left|\lambda_{\max}\left(I_{MN} - E[D(i)]\, R_U\right)\right| < 1, \qquad (19)$$
which holds true if the mean of the step-size is governed by
$$0 < E[\mu_k(i)] < \frac{2}{\lambda_{\max}\left(R_{u,k}\right)}, \qquad 1 \le k \le N, \qquad (20)$$
where $\lambda_{\max}(R_{u,k})$ is the maximum eigenvalue of the autocorrelation matrix $R_{u,k}$. This scenario differs from the fixed step-size case: here the system is stable only when the mean of the step-size remains within the limits defined by (20).
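As a quick illustration of (20), the bound on the mean step-size can be computed directly from the regressor statistics; the helper name and the white-regressor example are assumptions.

```python
import numpy as np

def mean_step_size_bound(R_uk):
    """Upper limit on E[mu_k(i)] from the mean-stability condition (20)."""
    return 2.0 / np.linalg.eigvalsh(R_uk).max()

# Example: unit-variance white regressors give lambda_max = 1, so the
# mean step-size must remain below 2 for stability in the mean.
print(mean_step_size_bound(np.eye(4)))   # -> 2.0
```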
3.2 Mean-square analysis
In this section, the mean-square analysis of the VSSDLMS algorithm is carried out. Here, a weighted norm is used instead of the regular norm. The motivation is that, although the MSD does not require a weighted norm, the evaluation of the EMSE does. To accommodate both measures, a general analysis is conducted using a weighted norm; for the MSD, where no weighting is required, the weighting matrix is simply replaced by the identity matrix [26].
We take the weighted norm of (14) and then apply the expectation operator to both of its sides. This yields the following:
$$E\left[\|\tilde{w}(i+1)\|^2_{\Sigma}\right] = E\left[\|\tilde{w}(i)\|^2_{\Sigma'}\right] + E\left[v^T(i)\, U(i)\, D(i)\, G^T \Sigma\, G\, D(i)\, U^T(i)\, v(i)\right], \qquad (21)$$
where the cross terms vanish because the noise is zero-mean and independent of the regressors, and

$$\Sigma' = \left(I_{MN} - D(i)\, U^T(i)\, U(i)\right)^T G^T \Sigma\, G\left(I_{MN} - D(i)\, U^T(i)\, U(i)\right). \qquad (23)$$
Using the data independence assumption [26] and applying the expectation operator gives

$$E[\Sigma'] = G^T \Sigma\, G - R_U\, E[D(i)]\, G^T \Sigma\, G - G^T \Sigma\, G\, E[D(i)]\, R_U + E\left[U^T(i)\, U(i)\, D(i)\, G^T \Sigma\, G\, D(i)\, U^T(i)\, U(i)\right]. \qquad (24)$$
For ease of notation, we use $\Sigma'$ to denote $E[\Sigma']$ for the remainder of the analysis.
3.2.1 Mean-square analysis for Gaussian data
The evaluation of the expectations in (24) is quite tedious for non-Gaussian data. Therefore, the data is assumed here to be Gaussian in order to evaluate (24). The autocorrelation matrix can be decomposed as $R_U = T \Lambda T^T$, where $\Lambda$ is a diagonal matrix containing the eigenvalues for the entire network and $T$ is a matrix containing the corresponding eigenvectors. Using this eigenvalue decomposition, we define the transformed quantities

$$\bar{w}(i) = T^T \tilde{w}(i), \quad \bar{U}(i) = U(i)\, T, \quad \bar{G} = T^T G\, T, \quad \bar{\Sigma} = T^T \Sigma\, T,$$

where the input regressors are considered independent of each other at each node. The step-size matrix $D(i)$ is block-diagonal, so it is invariant under this transformation since $T^T T = I$ (that is, $T^T D(i)\, T = D(i)$). Using these relations, (21) and (24) can be rewritten, respectively, as
$$E\left[\|\bar{w}(i+1)\|^2_{\bar{\Sigma}}\right] = E\left[\|\bar{w}(i)\|^2_{\bar{\Sigma}'}\right] + E\left[v^T(i)\, \bar{U}(i)\, D(i)\, \bar{G}^T \bar{\Sigma}\, \bar{G}\, D(i)\, \bar{U}^T(i)\, v(i)\right] \qquad (25)$$

and

$$\bar{\Sigma}' = \bar{G}^T \bar{\Sigma}\, \bar{G} - \Lambda\, E[D(i)]\, \bar{G}^T \bar{\Sigma}\, \bar{G} - \bar{G}^T \bar{\Sigma}\, \bar{G}\, E[D(i)]\, \Lambda + E\left[\bar{U}^T(i)\, \bar{U}(i)\, D(i)\, \bar{G}^T \bar{\Sigma}\, \bar{G}\, D(i)\, \bar{U}^T(i)\, \bar{U}(i)\right], \qquad (26)$$

where $\bar{\Sigma}' = T^T \Sigma'\, T$.
Also, using the bvec operator [27], we write $\bar{\sigma} = \mathrm{bvec}\{\bar{\Sigma}\}$, where the bvec operator divides a matrix into smaller blocks and then applies the vec operator to each of the smaller blocks. Now, let $R_v = \Lambda_v \odot I_M$ denote the block-diagonal noise covariance matrix for the entire network, where $\odot$ denotes the block Kronecker product [27] and $\Lambda_v$ is the diagonal noise variance matrix of the network. Hence, the second term of the right-hand side of (25) is
$$E\left[v^T(i)\, \bar{U}(i)\, D(i)\, \bar{G}^T \bar{\Sigma}\, \bar{G}\, D(i)\, \bar{U}^T(i)\, v(i)\right] = b^T(i)\left(\bar{G}^T \odot \bar{G}^T\right)\bar{\sigma}, \qquad (27)$$

where $b(i) = \mathrm{bvec}\left\{R_v\, E\left[D^2(i)\right]\Lambda\right\}$.
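Since the bvec operator and its inverse are used repeatedly below, the following sketch gives one possible implementation; the block-scanning order is a convention choice [27], so this is an assumed, illustrative version rather than a canonical one.

```python
import numpy as np

def bvec(A, M):
    """bvec operator [27] (sketch): partition A into M x M blocks and apply
    the column-wise vec to each block, scanning block columns first."""
    N = A.shape[0] // M
    parts = []
    for j in range(N):                         # block column index
        for i in range(N):                     # block row index
            blk = A[i*M:(i+1)*M, j*M:(j+1)*M]
            parts.append(blk.flatten(order="F"))   # vec of the block
    return np.concatenate(parts)

def unbvec(a, M, N):
    """Inverse of bvec: rebuild the (M*N) x (M*N) matrix from its bvec."""
    A = np.zeros((M*N, M*N))
    for idx in range(N * N):
        j, i = divmod(idx, N)                  # same scanning order as bvec
        A[i*M:(i+1)*M, j*M:(j+1)*M] = a[idx*M*M:(idx+1)*M*M].reshape(M, M, order="F")
    return A
```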
The fourth-order moment in (26) remains to be evaluated. Using the step-size independence assumption and the $\odot$ operator, we get

$$\mathrm{bvec}\left\{E\left[\bar{U}^T(i)\, \bar{U}(i)\, D(i)\, \bar{G}^T \bar{\Sigma}\, \bar{G}\, D(i)\, \bar{U}^T(i)\, \bar{U}(i)\right]\right\} = E[D(i) \odot D(i)]\; A\; \mathrm{bvec}\left\{\bar{G}^T \bar{\Sigma}\, \bar{G}\right\}, \qquad (28)$$
where, from [6],

$$A = \mathrm{diag}\left\{A_1, A_2, \ldots, A_N\right\}, \qquad (29)$$
and each matrix $A_k$ is given by

$$A_k = \lambda_k\, \lambda_k^T + 2\, \Lambda_k^2, \qquad (30)$$

where $\Lambda_k$ is the diagonal eigenvalue matrix and $\lambda_k$ the eigenvalue vector for node $k$.
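Under the Gaussian assumption, the form of $A_k$ above can be checked numerically. The sketch below assumes $A_k = \lambda_k \lambda_k^T + 2\Lambda_k^2$ for real-valued Gaussian regressors and verifies the underlying fourth-moment identity by Monte-Carlo; all numerical values are hypothetical.

```python
import numpy as np

def A_k(lam_k):
    """Assumed form of A_k in (30) for real Gaussian regressors: for a
    diagonal weighting with vector sigma, E[(u^T Sigma u) u u^T] has
    diagonal A_k @ sigma, with A_k = lam lam^T + 2 Lam^2."""
    return np.outer(lam_k, lam_k) + 2.0 * np.diag(lam_k**2)

# Monte-Carlo sanity check of the fourth-moment identity (illustrative):
rng = np.random.default_rng(0)
lam = np.array([1.0, 0.5])                   # eigenvalues of R_{u,k}
sigma = np.array([0.3, 0.7])                 # diagonal weighting
u = rng.normal(size=(200_000, 2)) * np.sqrt(lam)
empirical = np.mean((u**2 @ sigma)[:, None] * u**2, axis=0)
print(empirical, A_k(lam) @ sigma)           # the two should roughly agree
```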
The matrix $E[D(i) \odot D(i)]$ can be written as

$$E[D(i) \odot D(i)] = \mathrm{diag}\left\{E\left[\mu_l(i)\, \mu_k(i)\right] I_{M^2}\right\}_{l,k=1}^{N}. \qquad (31)$$
Now, applying the bvec operator to the weighting matrix through $\bar{\sigma} = \mathrm{bvec}\{\bar{\Sigma}\}$, from which the original matrix is recovered via $\bar{\Sigma} = \mathrm{bvec}^{-1}\{\bar{\sigma}\}$, we get

$$\mathrm{bvec}\{\bar{\Sigma}'\} = F(i)\, \bar{\sigma}, \qquad (32)$$

where

$$F(i) = \left[I - \left(I_{MN} \odot \Lambda\, E[D(i)]\right) - \left(\Lambda\, E[D(i)] \odot I_{MN}\right) + E[D(i) \odot D(i)]\, A\right]\left(\bar{G}^T \odot \bar{G}^T\right). \qquad (33)$$

Then (21) takes on the following form:

$$E\left[\|\bar{w}(i+1)\|^2_{\bar{\sigma}}\right] = E\left[\|\bar{w}(i)\|^2_{F(i)\, \bar{\sigma}}\right] + b^T(i)\left(\bar{G}^T \odot \bar{G}^T\right)\bar{\sigma}, \qquad (34)$$

where a weighted-norm subscript given as a vector denotes the weighting matrix recovered from that vector,
which characterizes the transient behavior of the network. Although (34) does not explicitly show the effect of the variable step-size (VSS) algorithm on the network's performance, this effect is in fact subsumed in the weighting matrix $F(i)$, which varies at each iteration, unlike in the fixed step-size LMS algorithm, where the analysis shows that the weighting matrix remains fixed for all iterations. Moreover, (33) clearly shows the effect of the VSS algorithm on performance through the presence of the diagonal step-size matrix $D(i)$.
3.2.2 Learning behavior of the proposed algorithm
In this section, the learning behavior (which shows how the algorithm evolves with time) of the VSSDLMS algorithm is evaluated. Starting with $w(0) = \mathbf{0}$ and $D(0) = \mu_0 I_{MN}$, we have for iteration $(i+1)$
(35)
$$E[\mu_k(i+1)] = \alpha\, E[\mu_k(i)] + \gamma\, E\left[e_k^2(i)\right] \qquad (36)$$
$$E\left[\mu_k^2(i+1)\right] = \alpha^2\, E\left[\mu_k^2(i)\right] + 2\alpha\gamma\, E[\mu_k(i)]\, E\left[e_k^2(i)\right] + \gamma^2\, E\left[e_k^4(i)\right] \qquad (37)$$
(38)
(39)
then incorporating the above relations in (34) gives
(40)
Now, after subtracting the results of iteration i from those of iteration (i + 1) and simplifying them, we get
(41)
where
(42)
(43)
which can be defined iteratively as
(44)
(45)
In order to evaluate the MSD and EMSE, we need to define the corresponding weighting matrix for each of them. Taking $\bar{\sigma} = \mathrm{bvec}\{I_{MN}\}$ for the MSD, we get
(46)
Similarly, taking $\bar{\sigma} = \mathrm{bvec}\{\Lambda\}$, the EMSE behavior is governed by
(47)
The relations in (46) and (47) govern the transient behavior of the MSD and EMSE of the proposed algorithm. They show how the weighting matrix's effect on the transient behavior changes from one iteration to the next, since the weighting matrix itself varies at each iteration. This is not the case for the fixed step-size DLMS in [6], where the weighting matrix remains constant over all iterations. Since the weighting matrix depends on the step-size matrix, which becomes very small asymptotically, both the norm and the influence of the weighting matrix also become asymptotically small. From the above relations, it is seen that both the MSD and the EMSE become very small at steady-state, because the weighting matrix itself becomes small at steady-state and these relations then depend only on the product of the weighting matrices at each iteration.
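The transient expressions above can be cross-checked against Monte-Carlo simulation. The following sketch runs the VSSDLMS recursion (5)-(6) over a network and records empirical network-average MSD and EMSE learning curves; the white Gaussian regressors and all parameter defaults are assumptions for illustration.

```python
import numpy as np

def simulate_vssdlms(C, w_o, n_iter=2000, n_runs=50, alpha=0.97,
                     gamma=2e-3, mu0=0.05, noise_var=1e-2, seed=0):
    """Empirical learning curves (network-average MSD and EMSE) for the
    VSSDLMS recursion (5)-(6); a sketch with white Gaussian regressors.
    Setting alpha=1 and gamma=0 freezes mu at mu0, which recovers the
    fixed step-size DLMS algorithm for comparison."""
    rng = np.random.default_rng(seed)
    N, M = C.shape[0], w_o.size
    msd, emse = np.zeros(n_iter), np.zeros(n_iter)
    for _ in range(n_runs):
        w = np.zeros((N, M))                       # estimates w_k(i)
        mu = np.full(N, mu0)                       # step-sizes mu_k(i)
        for i in range(n_iter):
            u = rng.normal(size=(N, M))            # regressors u_k(i)
            v = np.sqrt(noise_var) * rng.normal(size=N)
            d = u @ w_o + v                        # measurements d_k(i)
            e = d - np.sum(u * w, axis=1)          # e_k(i) = d_k - u_k w_k
            psi = w + (mu * e)[:, None] * u        # adapt step of (5)
            w = C.T @ psi                          # combine: sum_l c_lk psi_l
            mu = alpha * mu + gamma * e**2         # step-size update (6)
            wt = w_o - w                           # weight-error vectors (10)
            msd[i] += np.mean(np.sum(wt**2, axis=1))
            emse[i] += np.mean((e - v)**2)         # a-priori error energy
    return msd / n_runs, emse / n_runs
```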
3.3 Steady-state analysis
From the second relation in (8), it is seen that the step-size for each node is independent of the data received from other nodes. Even though the connectivity matrix, $G$, does not permit the weighting matrix, $F(i)$, to be evaluated separately for each node, this is not the case for the determination of the step-size at any node. Here, we define the misadjustment as the ratio of the EMSE to the minimum mean square error; the misadjustment is used in determining the steady-state performance of the algorithm [11]. Therefore, following the approach of [20], we first find the misadjustment, given by
(48)
Then solving (36) and (37) along with (48) leads to the steady-state values for the step-size and its square for each node
$$E[\mu_k(\infty)] = \frac{\gamma\, \xi_k(\infty)}{1 - \alpha} \qquad (49)$$
$$E\left[\mu_k^2(\infty)\right] = \frac{2\alpha\gamma\, E[\mu_k(\infty)]\, \xi_k(\infty) + 3\gamma^2\, \xi_k^2(\infty)}{1 - \alpha^2}, \qquad (50)$$

where $\xi_k(\infty) = \xi_{\min,k}\left(1 + \mathcal{M}_k\right)$ is the steady-state MSE of node $k$ and $\mathcal{M}_k$ its misadjustment from (48).
Incorporating these two steady-state relations in (33) yields the steady-state weighting matrix as
$$F_{ss} = \left[I - \left(I_{MN} \odot \Lambda\, D_{ss}\right) - \left(\Lambda\, D_{ss} \odot I_{MN}\right) + E[D(i) \odot D(i)]_{ss}\, A\right]\left(\bar{G}^T \odot \bar{G}^T\right), \qquad (51)$$
where $D_{ss} = \mathrm{diag}\{\mu_{ss,k}\, I_M\}$.
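Numerically, the coupled steady-state relations can be approximated per node by a simple fixed-point iteration. The sketch below uses the steady-state form of (6), $E[\mu_{ss}] = \gamma\, \xi(\infty)/(1-\alpha)$, together with a classical LMS misadjustment approximation standing in for (48); both are stated assumptions rather than the paper's exact expressions.

```python
def steady_state_mu(alpha, gamma, sigma_v2, tr_R, n_fp=100):
    """Fixed-point sketch for the steady-state mean step-size of node k.
    Assumes mu_ss = gamma * xi_ss / (1 - alpha), with the steady-state MSE
    xi_ss = sigma_v2 * (1 + Mis) and the classical LMS misadjustment
    approximation Mis = mu * tr(R) / (2 - mu * tr(R)) standing in for (48)."""
    mu = gamma * sigma_v2 / (1.0 - alpha)        # initial guess (Mis = 0)
    for _ in range(n_fp):
        mis = mu * tr_R / (2.0 - mu * tr_R)      # misadjustment estimate
        mu = gamma * sigma_v2 * (1.0 + mis) / (1.0 - alpha)
    return mu

# Example with hypothetical values: alpha=0.97, gamma=2e-3,
# noise variance 1e-2, tr(R)=4 (white regressors, M=4).
print(steady_state_mu(0.97, 2e-3, 1e-2, 4.0))
```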
Thus, the steady-state mean-square behavior is given by
$$E\left[\|\bar{w}(\infty)\|^2_{\left(I - F_{ss}\right)\bar{\sigma}}\right] = b_{ss}^T\left(\bar{G}^T \odot \bar{G}^T\right)\bar{\sigma}, \qquad (52)$$
where $b_{ss} = \mathrm{bvec}\left\{R_v\, D_{ss}^2\, \Lambda\right\}$. Now, solving (52), we get
$$E\left[\|\bar{w}(\infty)\|^2_{\bar{\sigma}}\right] = b_{ss}^T\left(\bar{G}^T \odot \bar{G}^T\right)\left(I - F_{ss}\right)^{-1}\bar{\sigma}. \qquad (53)$$
This equation gives the steady-state performance measure for the entire network. In order to solve for the steady-state values of the MSD and EMSE, we take $\bar{\sigma} = \mathrm{bvec}\{I_{MN}\}$ and $\bar{\sigma} = \mathrm{bvec}\{\Lambda\}$, respectively, as in (46) and (47). This gives the steady-state values of the MSD and EMSE as follows:
$$\mathrm{MSD}_{ss} = b_{ss}^T\left(\bar{G}^T \odot \bar{G}^T\right)\left(I - F_{ss}\right)^{-1}\mathrm{bvec}\{I_{MN}\} \qquad (54)$$
$$\mathrm{EMSE}_{ss} = b_{ss}^T\left(\bar{G}^T \odot \bar{G}^T\right)\left(I - F_{ss}\right)^{-1}\mathrm{bvec}\{\Lambda\} \qquad (55)$$
Both steady-state relationships depend on the steady-state weighting matrix, which becomes very small at steady-state, as explained before. As a result, the steady-state error measures of the proposed algorithm are very small compared to those of the fixed step-size DLMS algorithm.
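As an illustrative check of this last claim, the simulator sketched in Section 3.2.2 can be run twice, once with the VSS update active and once with the step-size frozen ($\alpha = 1$, $\gamma = 0$), and the tails of the MSD learning curves compared; the network and parameter values below are hypothetical.

```python
import numpy as np

# Build a random sparse, column-normalized combination matrix (hypothetical).
N, M = 10, 4
rng = np.random.default_rng(1)
links = (rng.random((N, N)) < 0.3).astype(float)
C = links + links.T + N * np.eye(N)
C /= C.sum(axis=0)                      # columns sum to one: sum_l c_lk = 1
w_o = rng.normal(size=M) / np.sqrt(M)   # unknown vector to estimate

# Reuse simulate_vssdlms from the sketch in Section 3.2.2.
msd_vss, _ = simulate_vssdlms(C, w_o)                        # variable step-size
msd_fix, _ = simulate_vssdlms(C, w_o, alpha=1.0, gamma=0.0)  # fixed step-size
print("steady-state MSD, VSSDLMS:", msd_vss[-200:].mean())
print("steady-state MSD, DLMS   :", msd_fix[-200:].mean())
```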