Table 1 The original SPMLS algorithm

From: Sequential convex combinations of multiple adaptive lattice filters in cognitive radio channel identification

Stage inputs and initialization  
\( \bar {b}^{1}_{m}(n)=b^{m}_{\ell -1}(n), \bar {f}^{1}_{m}(n)=f^{m}_{\ell -1}(n), \bar {e}^{1}_{m}(n)=e^{m}_{\ell -1}(n)\) (T.1.1)
\(\gamma ^{f}_{\ell -1,1}(n)=\gamma _{\ell -1}(n-1), \gamma ^{b}_{\ell -1,1}(n)=\gamma _{\ell -1}(n) \) (T.1.2)
\( r^{b}_{\ell -1,k}(-1)=r^{f}_{\ell -1,k}(-1)=\delta, (k=1,\ldots,M) \) (T.1.3)
\(\bar {\kappa }^{b}_{kj}(-1)=\bar {\kappa }^{f}_{kj}(-1)=\Delta ^{e}_{k\upsilon }(-1)=\Delta ^{f}_{k\upsilon }(-1)=\Delta ^{b}_{k\upsilon }(-1)=0.0\) (T.1.4)
(k=1,…,M),(j=k+1,…,M),(υ=1,…,M)  
For k=1,…,M  
Computations at SOPs  
\( \hat {b}_{\ell -1}^{k}(n)=\bar {b}^{k}_{k}(n), \hat {f}_{\ell -1}^{k}(n)=\bar {f}^{k}_{k}(n) \) (T.1.5)
\( r^{b}_{\ell -1,k}(n) = \lambda \ r^{b}_{\ell -1,k}(n-1) + \gamma ^{b}_{\ell -1,k}(n) \ \mid \hat {b}_{\ell -1}^{k}(n) \mid ^{2} \) (T.1.6)
\( \gamma ^{b}_{\ell -1,k+1}(n)=\gamma ^{b}_{\ell -1,k}(n) - \mid \gamma ^{b}_{\ell -1,k}(n) \mid ^{2} \mid \hat {b}_{\ell -1}^{k}(n)\mid ^{2}/r^{b}_{\ell -1,k}(n) \) (T.1.7)
\( r^{f}_{\ell -1,k}(n) = \lambda \ r^{f}_{\ell -1,k}(n-1) + \gamma ^{f}_{\ell -1,k}(n) \ \mid \hat {f}_{\ell -1}^{k}(n) \mid ^{2} \) (T.1.8)
\( \gamma ^{f}_{\ell -1,k+1}(n)=\gamma ^{f}_{\ell -1,k}(n) - \mid \gamma ^{f}_{\ell -1,k}(n)\mid ^{2} \mid \hat {f}_{\ell -1}^{k}(n) \mid ^{2}/r^{f}_{\ell -1,k}(n) \) (T.1.9)
For j=k+1,…,M  
\( \bar {b}^{k+1}_{j}(n)=\bar {b}^{k}_{j}(n) - \bar {\kappa }^{b^{*}}_{kj}(n-1) \ \hat {b}_{\ell -1}^{k}(n) \) (T.1.10)
\( \bar {\kappa }^{b}_{kj}(n)= \bar {\kappa }^{b}_{kj}(n-1) + \gamma ^{b}_{\ell -1,k}(n) \ \bar {b}^{k+1^{\ast }}_{j}(n) \hat {b}^{k}_{\ell -1}(n)/r^{b}_{\ell -1,k}(n) \) (T.1.11)
\( \bar {f}^{k+1}_{j}(n)=\bar {f}^{k}_{j}(n) - \bar {\kappa }^{f^{*}}_{kj}(n-1) \ \hat {f}_{\ell -1}^{k}(n) \) (T.1.12)
\( \bar {\kappa }^{f}_{kj}(n)= \bar {\kappa }^{f}_{kj}(n-1) + \gamma ^{f}_{\ell -1,k}(n) \ \bar {f}^{k+1^{\ast }}_{j}(n) \hat {f}^{k}_{\ell -1}(n)/r^{f}_{\ell -1,k}(n) \) (T.1.13)
End  
For υ=1,…,M  
Joint process estimation (ROP)  
\( e_{\upsilon }^{k+1}(n)=e_{\upsilon }^{k}(n) - \Delta ^{{e}^{*}}_{k\upsilon }(n-1) \ \hat {b}_{\ell -1}^{k}(n) \) (T.1.14)
\( \Delta ^{e}_{k\upsilon }(n)= \Delta ^{e}_{k\upsilon }(n-1) + \gamma ^{b}_{\ell -1,k}(n) \ e^{k+1^{\ast }}_{\upsilon }(n) \hat {b}^{k}_{\ell -1}(n)/r^{b}_{\ell -1,k}(n) \) (T.1.15)
Forward error prediction (ROP)  
\( f^{k+1}_{\upsilon }(n)=f^{k}_{\upsilon }(n) - \Delta ^{f^{*}}_{k\upsilon }(n-1) \ \hat {b}_{\ell -1}^{k}(n-1) \) (T.1.16)
\( \Delta ^{f}_{k\upsilon }(n)= \Delta ^{f}_{k\upsilon }(n-1) + \gamma ^{b}_{\ell -1,k}(n-1) \ f^{k+1^{\ast }}_{\upsilon }(n) \hat {b}^{k}_{\ell -1}(n-1)/r^{b}_{\ell -1,k}(n-1) \) (T.1.17)
Backward error prediction (ROP)  
\( b^{k+1}_{\upsilon }(n)=b^{k}_{\upsilon }(n-1) - \Delta ^{b^{*}}_{k\upsilon }(n-1) \ \hat {f}_{\ell -1}^{k}(n) \) (T.1.18)
\( \Delta ^{b}_{k\upsilon }(n)= \Delta ^{b}_{k\upsilon }(n-1) + \gamma ^{f}_{\ell -1,k}(n) \ b^{k+1^{\ast }}_{\upsilon }(n) \hat {f}^{k}_{\ell -1}(n)/r^{f}_{\ell -1,k}(n) \) (T.1.19)
End  
End  
Stage outputs  
\( b^{m}_{\ell }(n)=b^{M+1}_{m}(n), \ f^{m}_{\ell }(n)=f^{M+1}_{m}(n), \) (T.1.20)
\( e^{m}_{\ell }(n)=e^{M+1}_{m}(n), \ \gamma _{\ell }(n)=\gamma ^{b}_{\ell -1,M+1}(n)\) (T.1.21)
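To make the recursions in Table 1 concrete, the following Python sketch runs one SPMLS stage per sample. It is an illustrative, hypothetical implementation, not the authors' code: names such as `SPMLSStage`, `rb`, `kb`, and `update` are invented here, complex-valued errors and a real forgetting factor λ are assumed, and the ROP recursions for e, f, and b are assumed to be seeded by the stage-(ℓ−1) estimation, forward, and backward errors of (T.1.1).

```python
import numpy as np

class SPMLSStage:
    """Per-sample recursions of one SPMLS stage, sketched from Table 1 (illustrative only)."""

    def __init__(self, M, lam=0.99, delta=1e-2):
        self.M, self.lam = M, lam
        # (T.1.3)-(T.1.4): error powers start at delta, all coefficients at zero
        self.rb = np.full(M, delta)                  # r^b_{l-1,k}
        self.rf = np.full(M, delta)                  # r^f_{l-1,k}
        self.kb = np.zeros((M, M), complex)          # bar kappa^b_{kj}
        self.kf = np.zeros((M, M), complex)          # bar kappa^f_{kj}
        self.de = np.zeros((M, M), complex)          # Delta^e_{k,v}
        self.df = np.zeros((M, M), complex)          # Delta^f_{k,v}
        self.db = np.zeros((M, M), complex)          # Delta^b_{k,v}
        # one-sample-delayed quantities required by (T.1.16)-(T.1.18)
        self.bhat_old = np.zeros(M, complex)         # hat b^k_{l-1}(n-1)
        self.gb_old = np.ones(M)                     # gamma^b_{l-1,k}(n-1)
        self.rb_old = np.full(M, delta)              # r^b_{l-1,k}(n-1)
        self.b_old = np.zeros((M + 1, M), complex)   # b^k_v(n-1)

    def update(self, b_in, f_in, e_in, gamma_n, gamma_nm1):
        """One sample: stage-(l-1) error vectors (length M) plus gamma_{l-1}(n)
        and gamma_{l-1}(n-1); returns the stage outputs of (T.1.20)-(T.1.21)."""
        M, lam = self.M, self.lam
        # (T.1.1)-(T.1.2): stage inputs and conversion-factor initialisation
        bbar = np.asarray(b_in, complex).copy()
        fbar = np.asarray(f_in, complex).copy()
        e = np.asarray(e_in, complex).copy()         # e^1_v(n)
        f = np.asarray(f_in, complex).copy()         # f^1_v(n), assumed seeded by input forward errors
        b = np.zeros((M + 1, M), complex)
        b[0] = np.asarray(b_in, complex)             # b^1_v(n), assumed seeded by input backward errors
        gb, gf = gamma_n, gamma_nm1
        bhat_now = np.zeros(M, complex)
        gb_now = np.zeros(M)
        rb_now = np.zeros(M)

        for k in range(M):
            # (T.1.5)-(T.1.9): scalar orthogonalisation at SOP k
            bhat, fhat = bbar[k], fbar[k]
            self.rb[k] = lam * self.rb[k] + gb * abs(bhat) ** 2       # (T.1.6)
            gb_next = gb - gb ** 2 * abs(bhat) ** 2 / self.rb[k]      # (T.1.7)
            self.rf[k] = lam * self.rf[k] + gf * abs(fhat) ** 2       # (T.1.8)
            gf_next = gf - gf ** 2 * abs(fhat) ** 2 / self.rf[k]      # (T.1.9)

            for j in range(k + 1, M):                 # (T.1.10)-(T.1.13)
                bbar[j] -= np.conj(self.kb[k, j]) * bhat
                self.kb[k, j] += gb * np.conj(bbar[j]) * bhat / self.rb[k]
                fbar[j] -= np.conj(self.kf[k, j]) * fhat
                self.kf[k, j] += gf * np.conj(fbar[j]) * fhat / self.rf[k]

            for v in range(M):
                # (T.1.14)-(T.1.15): joint-process estimation
                e[v] -= np.conj(self.de[k, v]) * bhat
                self.de[k, v] += gb * np.conj(e[v]) * bhat / self.rb[k]
                # (T.1.16)-(T.1.17): forward prediction, driven by delayed SOP quantities
                f[v] -= np.conj(self.df[k, v]) * self.bhat_old[k]
                self.df[k, v] += (self.gb_old[k] * np.conj(f[v])
                                  * self.bhat_old[k] / self.rb_old[k])
                # (T.1.18)-(T.1.19): backward prediction from b^k_v(n-1)
                b[k + 1, v] = self.b_old[k, v] - np.conj(self.db[k, v]) * fhat
                self.db[k, v] += gf * np.conj(b[k + 1, v]) * fhat / self.rf[k]

            bhat_now[k], gb_now[k], rb_now[k] = bhat, gb, self.rb[k]
            gb, gf = gb_next, gf_next

        # keep this step's SOP quantities for the delayed recursions at the next sample
        self.bhat_old, self.gb_old, self.rb_old, self.b_old = bhat_now, gb_now, rb_now, b
        # (T.1.20)-(T.1.21): b^m_l(n), f^m_l(n), e^m_l(n), gamma_l(n)
        return b[M], f, e, gb
```

In a cascade, each stage would be called once per sample with the error vectors produced by stage ℓ−1 and the conversion factors γ<sub>ℓ−1</sub>(n) and γ<sub>ℓ−1</sub>(n−1); the returned arrays and γ<sub>ℓ</sub>(n) then feed stage ℓ+1, exactly as (T.1.20)–(T.1.21) indicate.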