
Adaptive multichannel sequential lattice prediction filtering method for ARMA spectrum estimation in subbands

Abstract

A multichannel characterization for autoregressive moving average (ARMA) spectrum estimation in subbands is considered in this article. The fullband ARMA spectrum estimation can be realized in two-channels as a special form of this characterization. A complete orthogonalization of input multichannel data is accomplished using a modified form of sequential processing multichannel lattice stages. Matrix operations are avoided, only scalar operations are used, and a multichannel ARMA prediction filter with a highly modular and suitable structure for VLSI implementations is achieved. Lattice reflection coefficients for autoregressive (AR) and moving average (MA) parts are simultaneously computed. These coefficients are then converted to process parameters using a newly developed Levinson–Durbin type multichannel conversion algorithm. Hence, a novel method for spectrum estimation in subbands as well as in fullband is developed. The computational complexity is given in terms of model order parameters, and comparisons with the complexities of nonparametric methods are provided. In addition, the performance is visually and statistically compared against those of the nonparametric methods under both stationary and nonstationary conditions.

1 Introduction

While parametric or model-based methods are used extensively for high-resolution spectrum estimation, these methods perform poorly when the SNR and the spacing between frequencies are small. In many cases, the input noise is assumed to be white; if this is not the case, colored noise can be accommodated, provided that its statistics are known. However, such statistics may not be known in many cases, and instead the noise may incorrectly be assumed white. Such shortcomings can be overcome by applying subband decomposition methods in spectrum estimation.

It was shown by Rao and Pearlman [1] that the well-known AR modeling is a promising method for spectrum estimation in subbands: pth-order prediction from subbands is superior to pth-order prediction in the fullband when p is finite, and subband decomposition of a source whitens the composite subband spectrum. The equivalence of linear prediction and AR spectrum estimation was then exploited to show that AR spectrum estimation from subbands offers a gain over fullband AR spectrum estimation. Unfortunately, new problems, such as spectral overlapping and an increase in the variance of the estimated parameters, appear. The first disadvantage was addressed in a conference paper by Bonacci et al. [2], where nonreal-time procedures were proposed to perform subband spectral estimation without discontinuities or aliasing at subband borders. However, this procedure is appropriate only for a uniform filter bank, whereas methods applicable to any kind of filter bank are desired. In another conference paper, Bonacci et al. [3] proposed to tackle the second drawback with a Subband Multichannel Autoregressive Spectral Estimation method, which was also intended for an off-line implementation.

Another popular model, the autoregressive moving average (ARMA) model, which includes the AR and MA models as special cases, has the input–output relationship given by

y(n) = \sum_{\ell=1}^{p} a_\ell^1\, y(n-\ell) + \sum_{j=0}^{q} a_j^2\, x(n-j)
(1)

for an ARMA(p,q) process. Here, x(n) is zero-mean white noise with variance σ_x^2, and a_ℓ^1 and a_j^2, respectively, represent the ℓth and jth coefficients of the AR and MA parts. Such processes arise in various applications such as modeling radar signals [4, 5] or speech signals [6, 7], where spectral zeros as well as poles are often present due to the physical mechanism generating the data. In addition, purely autoregressive processes are often transformed into ARMA(p,p) processes by the addition of measurement noise, and, in particular, sinusoids in noise are known to obey the degenerate ARMA equation [8, 9]. Even though an ARMA process can be represented by a unique AR model of generally infinite order, the ARMA modeling approach often leads to more efficient implementations. A hierarchical ARMA modeling method for classifying high-resolution radar signals at multiple scales was presented in [10], where it was shown that a radar signal at a different scale obeys an ARMA process if it is an ARMA process at the observed scale.
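The input–output relation in Eq. (1) can be exercised numerically. The following is a minimal Python sketch that generates an ARMA(2,1) realization from white driving noise; the coefficient values and orders are illustrative choices, not from the article.

```python
import numpy as np

# Sketch: generate N samples of an ARMA(2,1) process following Eq. (1),
#   y(n) = -sum_l a_l^1 y(n-l) + sum_j a_j^2 x(n-j),
# written here with the AR polynomial A(z) = 1 - 1.5 z^-1 + 0.9 z^-2
# (stable: pole radius sqrt(0.9)) and MA polynomial B(z) = 1 + 0.5 z^-1.
rng = np.random.default_rng(0)
N = 4096
a = [1.0, -1.5, 0.9]        # AR polynomial coefficients (illustrative)
b = [1.0, 0.5]              # MA polynomial coefficients (illustrative)
x = rng.standard_normal(N)  # zero-mean white driving noise, sigma_x = 1

y = np.zeros(N)
for n in range(N):
    ar = sum(-a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
    ma = sum(b[j] * x[n - j] for j in range(len(b)) if n - j >= 0)
    y[n] = ar + ma
```

Because the AR polynomial has its roots inside the unit circle, the generated sequence remains bounded; such synthetic realizations are the usual test input for the estimators discussed below.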

ARMA model-based applications such as the classification of high-resolution radar signatures using multi-scale features and lattice speech analysis/synthesis were reported in [11, 12], respectively. As a consequence of the degenerate ARMA modeling of sinusoids in noise, adaptive multiple frequency tracking, previously considered in [13–15], has gained momentum recently [16], and is of great interest in communications [17], biomedical engineering [18], speech processing [19], and power systems [20, 21]. Another recent consequence of the degenerate ARMA modeling of sinusoids in noise is spectrum sensing for cognitive radios [22, 23], where the primary task is to dynamically explore the radio spectrum for the existence of signals (sinusoids) so as to determine portions of the frequency band that may be used for radio transmission. In view of these developments, we believe that subband spectrum estimation methods based on ARMA modeling, with possible extensions to fullband spectrum estimation, can provide good alternatives in radar and speech classification, adaptive multiple frequency tracking, and spectrum sensing for cognitive radio applications.

In this article, we propose a novel method that relies on estimation of the driving noise in subbands. Even though methods based on estimation of the driving noise were previously proposed for the fullband [24], the important difference of our method is that we first transform the subband ARMA filtering problem into a multichannel AR filtering problem by embedding the subband ARMA processes into multichannel AR processes, and then achieve a complete modified Gram-Schmidt orthogonalization of the multichannel input signal using a modified version of the sequential processing multichannel lattice stages (SPMLSs) [25]. A number of alternatives for adaptive multichannel processing were proposed after the introduction of SPMLSs in [25]. Two such alternatives are the modular lattice architectures proposed by Lev-ari [26], and Glentis and Kalouptsidis [27]. While the architecture in [26] is suitable for equal channel orders and involves more computations than SPMLSs, neither of these architectures is preferable for sequential processing. Another alternative is the QR decomposition-based lattice approach in [28], which is also restricted to equal channel orders and was later extended to unequal channel orders by Yang [29]. Newer versions of multichannel QR algorithms based on orthogonal Givens rotations, for equal as well as unequal channel orders, were later presented by Rontogiannis and Theodoridis [30]. Recently, an array-based QR multichannel lattice filter that extends the correspondence between recursive least-squares update equations and Kalman filter equations to the multichannel lattice case was presented by Gomes and Barrosso [31]. In addition, transversal-type algorithms such as [32, 33] were proposed due to their lower complexity and direct relation to the channel coefficients. However, these algorithms generally require the implementation of stabilization techniques, and their structure is less regular.
The principle of modular decomposition appears to be the implicit basis of all these adaptive multichannel processing techniques, and it provides for scalar-only operations. In QR decomposition approaches, the Q matrix is implicitly formed and then used to compute the R matrix, whereas in the Gram-Schmidt approach, the inverse of R is implicitly formed and then used to compute the Q matrix. As a consequence of this fact, Regalia and Bellanger [34] showed that there exists a duality between QR and lattice methods, and the possibility of combining elements of both approaches to obtain new hybrid algorithms. With respect to developing such hybrid algorithms, Ling [35] showed that an orthogonal Givens rotation-based algorithm algebraically coincides with the recursive-modified Gram-Schmidt-based lattice algorithm in [36].

In accordance with this perspective on multichannel signal processing, and since SPMLSs already offer modularity, order recursiveness, regularity, simplicity, sequentiality, and equal as well as unequal channel processing capabilities, we modify them to improve their numerical performance by using the error-feedback formula of the recursive-modified Gram-Schmidt algorithm [35, 36] in the processing cells. The complete orthogonalization of the multichannel input data and the sequential nature of the modified SPMLSs thus make it possible to feed back the delayed forward prediction error signals to represent the unknown input noise signals of the original ARMA processes. Although we introduced the complete orthogonalization concept previously in linear and nonlinear adaptive decision feedback equalization frameworks [37, 38], its application to the adaptive spectrum estimation problem in subbands as well as in the fullband results in novel implementations, in particular the development of a new Levinson–Durbin type conversion algorithm for the modified SPMLSs that computes ARMA process parameters from the lattice reflection coefficients. To the best of the authors' knowledge, this particular multichannel lattice prediction filter structure for ARMA spectrum estimation in subbands or in the fullband and the new Levinson–Durbin type multichannel conversion algorithm do not exist in the literature.

A two-subband ARMA spectrum estimation problem is considered in this article for ease of explanation and due to space limitations in developing the method. However, it is straightforward to apply the method to any number of subbands, and to AR spectrum estimation in subbands. The method is appropriate for uniform and nonuniform filter bank realizations, while aliasing problems due to spectral overlapping in adjacent channels are also addressed. A highly modular, regular, time- and order-recursive, recursive least squares (RLS) ARMA parameter estimator with inherently good numerical properties, suitable for VLSI and recent programmable system-on-chip implementations [39], is designed, and the AR and MA parameters are found simultaneously. With these properties, the method is applicable to both off-line and on-line implementations; in particular, it is possible to monitor the forward prediction error signal, start the parameter estimation with a fullband AR(p), ARMA(p,q), or ARMA(p,p) process, and, if the performance requirements are not met, switch to subband ARMA(p_k,q_k) or ARMA(p_k,p_k) realizations. Consequently, the method dynamically extends the lattice parametrization of the fullband spectrum into subbands, and thereby arises as a useful and practical method for radar signal analysis/classification, speech analysis/synthesis, adaptive multiple frequency tracking, and cognitive radio spectrum sensing tasks.

An adaptive FIR filtering approach to spectral estimation, referred to as amplitude and phase estimation of a sinusoid (APES) and applied to radar target recognition, was proposed by Li and Stoica [40], and the adaptive FIR filtering approach to the Capon method was also discussed by Stoica and Moses [41]. Moreover, the APES method was extended to array processing by Yardibi et al. [42] and named the iterative adaptive approach for amplitude and phase estimation (IAA-APES). An FIR filtering reinterpretation of Thomson's multitaper method [43, 44], with applications to spectrum sensing for cognitive radio, was also presented by Farhang-Boroujeny [45]. Recently, computationally efficient versions of the adaptive Capon and APES methods, and of the IAA method, were proposed in [46, 47], respectively. In this article, we compare the complexity and performance of our method with those of the Periodogram, multitaper, Capon, APES, and IAA methods, and show that our method is competitive in terms of both complexity and performance.
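For orientation, the simplest of these nonparametric baselines, the periodogram, can be sketched in a few lines of Python; the two-sinusoid test signal, its frequencies, and the noise level below are illustrative choices, not the article's experimental setup.

```python
import numpy as np

# Sketch of the periodogram baseline: two sinusoids in white noise,
# spectrum estimated as the magnitude-squared FFT divided by N.
rng = np.random.default_rng(1)
N = 512
n = np.arange(N)
y = (np.sin(2 * np.pi * 0.10 * n)          # sinusoid at f = 0.10
     + np.sin(2 * np.pi * 0.13 * n)        # sinusoid at f = 0.13
     + 0.5 * rng.standard_normal(N))       # white observation noise

psd = np.abs(np.fft.rfft(y)) ** 2 / N      # periodogram estimate
freqs = np.fft.rfftfreq(N)                 # normalized frequencies in [0, 0.5]
peak = freqs[np.argmax(psd)]               # strongest spectral line
```

The located peak falls near one of the two sinusoid frequencies; the resolution and variance limitations of this estimator are what motivate the parametric alternatives developed in this article.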

The remainder of this article is organized as follows. In Section 2, we present the development of the new multichannel ARMA lattice prediction filter using the modified SPMLSs. In Section 3, we develop the new Levinson–Durbin type multichannel conversion algorithm for the modified SPMLSs, and relate the lattice parameters to the process parameters. The spectrum estimation expression in two subbands is given in Section 4. The computational complexity is treated in Section 5. Section 6 is concerned with the experimental results. Finally, Section 7 discusses the results and concludes the article. The following notation is used in this article. (·)^* represents the complex conjugate of (·). (·)^T and (·)^H stand for the transpose and the Hermitian transpose of (·), respectively. The variables m, i, and n are global while all other variables are local. The variable m represents the stage number, while n and i are the time indexes related to the data and the coefficients, respectively, until we equate them in Section 3 to have a single time index.

2 Adaptive multichannel ARMA lattice prediction filtering

2.1 Multichannel prediction problem

An illustration of the adaptive multichannel ARMA prediction filtering in subbands for the two-subband case is presented in Figure 1. Therein, y(n) represents the fullband input signal while y_1(n) and y_2(n) stand for the subband input signals. In adaptive multichannel ARMA prediction filtering, the objective is to find an exponentially windowed LS solution for the AR and MA coefficients of the kth forward prediction filter that minimizes each of the two cost functions

J^k(i) = \sum_{n=0}^{i} \lambda^{i-n} \left| f_{p_k}^k(n) \right|^2
(2)
Figure 1. A block diagram of the adaptive multichannel ARMA prediction filtering in subbands.

at each time instant i, for k = 1,2. The forward prediction error f_{p_k}^k(n) in this expression is defined as

f_{p_k}^k(n) = d^k(n) - \hat d_i^k(n)
(3)

and the kth forward prediction filter output, d̂_i^k(n), an estimate of the kth desired signal d^k(n) = y_k(n), is given by

\hat d_i^k(n) = \sum_{j=1}^{p_k} \tilde a_{1,j}^k(i)\, y_k(n-j) + \sum_{l=0}^{q_k} \tilde a_{2,l}^k(i)\, \hat u_k(n-l).
(4)

Herein, p_k and q_k denote the orders of the (p_k,q_k) prediction error filter associated with the kth subband, and û_k(n) is the estimate of the kth ARMA process input signal. The estimated kth ARMA process input signal is obtained by delaying and feeding back the p_kth-order forward prediction error, û_k(n) = f_{p_k}^k(n−1). Hence, the input vector to the kth ARMA filter at time instant n, ỹ_k(n), and the corresponding coefficient vector at time instant i, ã^k(i), are defined as

\tilde y_k(n) = \left[ y_k(n-1), \ldots, y_k(n-p_k), \hat u_k(n), \hat u_k(n-1), \ldots, \hat u_k(n-q_k) \right]^T
(5)

and

\tilde a^{kT}(i) = \left[ \tilde a_{1,1}^k(i), \ldots, \tilde a_{1,p_k}^k(i), \tilde a_{2,0}^k(i), \tilde a_{2,1}^k(i), \ldots, \tilde a_{2,q_k}^k(i) \right],
(6)

respectively. Herein, ã_{1,j}^k(i) and ã_{2,j}^k(i), respectively, represent the jth coefficients of the AR and MA parts of the forward prediction filter for the kth subband at time instant i. It is assumed, without loss of generality, that p_k ≥ q_k. The p_k = q_k case corresponds to the prediction filter for an ARMA(p_k,p_k) process, while for p_k > q_k the prediction filter is for a general ARMA(p_k,q_k) process. Note that an ARMA backward prediction can be performed for the desired signal d^k(n) = y_k(n−p_k); the prediction filter in that case would use the reversed and conjugated forward prediction filter coefficients, which are collected in the backward prediction error coefficient vector

\tilde c^{kT}(i) = \left[ \tilde c_{1,p_k}^k(i), \ldots, \tilde c_{1,1}^k(i), \tilde c_{2,q_k}^k(i), \ldots, \tilde c_{2,1}^k(i), \tilde c_{2,0}^k(i) \right]
(7)

where c̃_{1,j}^k(i) and c̃_{2,j}^k(i) are, respectively, the jth coefficients of the AR and MA parts of the backward prediction filter for the kth subband at time instant i.
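The defining feature of the forward predictor of Eqs. (3) and (4) is that the unknown MA input is replaced by the delayed forward prediction error, û_k(n) = f_{p_k}^k(n−1). The following is a minimal sketch of this feedback loop for one subband; the coefficients are held fixed here, whereas the article adapts them recursively, and the function name is our own.

```python
import numpy as np

def arma_forward_predict(y, a1, a2):
    """Sketch of one subband's forward prediction filter, Eq. (4):
    the MA input estimate is the delayed forward prediction error,
    u_hat(n) = f(n-1), fed back as in the article.  a1 (AR part,
    length p) and a2 (MA part, length q+1) are fixed coefficients
    here, not the recursively adapted ones of the article."""
    p, q1 = len(a1), len(a2)
    N = len(y)
    f = np.zeros(N)   # forward prediction errors f(n)
    u = np.zeros(N)   # estimated driving-noise samples u_hat(n)
    for n in range(N):
        u[n] = f[n - 1] if n > 0 else 0.0   # delayed-error feedback
        d_hat = 0.0
        for j in range(1, p + 1):           # AR part of Eq. (4)
            if n - j >= 0:
                d_hat += a1[j - 1] * y[n - j]
        for l in range(q1):                 # MA part of Eq. (4)
            if n - l >= 0:
                d_hat += a2[l] * u[n - l]
        f[n] = y[n] - d_hat                 # Eq. (3)
    return f, u
```

For example, with y(n) constant, a single AR coefficient of 1.0, and a zero MA part, the predictor reproduces the input exactly after the first sample, so the forward error vanishes for n ≥ 1.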

Consequently, the main concern of the exponentially weighted LS problem under consideration is to find, at each time instant i, the kth optimal coefficient vector ã^k(i) that minimizes the cost function

J^k(i) = \sum_{n=0}^{i} \lambda^{i-n} \left| d^k(n) - \tilde a^{kH}(i)\, \tilde y_k(n) \right|^2.
(8)

The k th optimal coefficient vector related to the k th subband filter

\tilde a_{\mathrm{opt}}^k(i) = R_k^{-1}(i)\, P_k(i)
(9)

is found by differentiating J^k(i) with respect to ã^k(i), setting the derivative to zero, and solving for ã^k(i), where

R_k(i) = \sum_{n=0}^{i} \lambda^{i-n}\, \tilde y_k(n)\, \tilde y_k^H(n)
(10)

and

P_k(i) = \sum_{n=0}^{i} \lambda^{i-n}\, \tilde y_k(n)\, d^{k*}(n).
(11)
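The direct solution of Eqs. (9)–(11) can be sketched in a few lines; this brute-force form is precisely what the lattice filter of Section 2.2 is designed to avoid, since it requires forming and inverting R_k(i) explicitly. The function name and interface are our own.

```python
import numpy as np

def ew_ls_solution(Y, d, lam=0.99):
    """Sketch of the exponentially weighted LS solution of Eqs. (9)-(11):
      R(i) = sum_n lam^(i-n) y(n) y(n)^H,
      P(i) = sum_n lam^(i-n) y(n) d*(n),
      a_opt = R^{-1} P.
    Y holds the regressor vectors y_tilde(n) as rows; lam is the
    exponential forgetting factor lambda."""
    i = len(d) - 1
    M = Y.shape[1]
    R = np.zeros((M, M), dtype=complex)
    P = np.zeros(M, dtype=complex)
    for n in range(i + 1):
        w = lam ** (i - n)
        R += w * np.outer(Y[n], Y[n].conj())   # Eq. (10)
        P += w * Y[n] * np.conj(d[n])          # Eq. (11)
    return np.linalg.solve(R, P)               # Eq. (9), without explicit inverse

# Usage: if the desired signal is exactly a scaled regressor, the LS
# solution recovers the scale factor.
Y = np.arange(1.0, 6.0).reshape(-1, 1)
d = 2.0 * Y[:, 0]
a = ew_ls_solution(Y, d, lam=1.0)
```

Even this toy version makes the cost visible: each time step pays O(M^2) for the update and O(M^3) for the solve, which motivates the scalar-only, order-recursive lattice solution developed next.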

2.2 Sequential lattice orthogonalization

In order to find a modular, regular, and simple solution to the two-subband ARMA prediction problem, we would like to use a single multichannel lattice filter, as depicted in Figure 2, instead of using two separate transversal filters and solving two separate optimization problems as in Figure 1. We would also like to avoid direct evaluations as in (9), and to achieve good numerical properties. As the number of channels differs across the sections of the proposed multichannel lattice filter, owing to the sequential processing nature of SPMLSs, we carry out the exponentially weighted LS optimization by considering each of these sections separately. We therefore assume that the filter is comprised of three cascaded filters, namely two-channel, three-channel, and four-channel lattice sections, and we use a different index for each section while using m to indicate a stage in the whole filter. We also assume p_1 = p_2 for ease of explanation, without loss of generality.

Figure 2. A block diagram of the adaptive multichannel ARMA lattice prediction filtering in subbands.

In order to sequentially solve the exponentially weighted LS optimization problem under consideration, we first organize the elements of the input signal vectors y_1(n) = [y_1(n),…,y_1(n−ℓ)]^T and y_2(n) = [y_2(n),…,y_2(n−ℓ)]^T according to the natural ordering of SPMLSs as

\bar y_{\ell+1}(n) = \left[ y_1(n), y_2(n), y_1(n-1), y_2(n-1), \ldots, y_1(n-\ell), y_2(n-\ell) \right]^T
(12)

and input it to the two-channel stages, for which the stage number m has a range of values given by 0 < m ≤ (p_1 − q_1). Accordingly, we redefine Equations (10) and (11) using this new data vector as follows

R_\ell(i) = \sum_{n=0}^{i} \lambda^{i-n}\, \bar y_{\ell+1}(n)\, \bar y_{\ell+1}^H(n)
(13)

and

P_{\ell,k}(i) = \sum_{n=0}^{i} \lambda^{i-n}\, \bar y_{\ell+1}(n)\, d^{k*}(n)
(14)

where k = 1,2. The orthogonalization of data using SPMLSs corresponds to the transformation of (13) and (14) into

D_{\ell+1}^f(i) = \sum_{n=0}^{i} \lambda^{i-n}\, \Omega_\ell^f(i)\, \bar y_{\ell+1}(n)\, \bar y_{\ell+1}^H(n)\, \Omega_\ell^{fH}(i)
(15)

and

Z_{\ell+1,k}^f(i) = \sum_{n=0}^{i} \lambda^{i-n}\, \Omega_\ell^f(i-1)\, \bar y_{\ell+1}(n-1)\, d^{k*}(n),
(16)

respectively. Here, Ω_ℓ^f(i) is the 2ℓ × 2ℓ lower triangular transformation matrix for forward prediction, and it is sequentially realized stage-by-stage using the 2 × 2 lower triangular transformation matrices

L_\ell^f(i) = \begin{bmatrix} 1 & 0 \\ \hat\kappa_\ell^f(i-1) & 1 \end{bmatrix}
(17)

whose diagonal elements are all equal to unity at time instant i, where κ̂_ℓ^f(i) is the reflection coefficient computed at the single circular cell in the triangular-shaped self-orthogonalization processor of the ℓth two-channel SPMLS. Then, the forward lattice predictor coefficients are computed using

\Theta_{\ell,k}^f(i) = \left[ D_{\ell+1}^f(i-1) \right]^{-1} Z_{\ell+1,k}^f(i)
(18)

where Θ_{ℓ,k}^f(i) represents the kth row of the 2 × 2ℓ lattice forward prediction reflection coefficient matrix Θ_ℓ^f(i), which is also sequentially implemented stage-by-stage by means of the 2 × 2 forward prediction reflection coefficient matrices

\Delta_\ell^f(i) = \begin{bmatrix} \bar\kappa_{\ell,1,1}^f(i) & \bar\kappa_{\ell,1,2}^f(i) \\ \bar\kappa_{\ell,2,1}^f(i) & \bar\kappa_{\ell,2,2}^f(i) \end{bmatrix}
(19)

in which κ̄_{ℓ,k,j}^f(i) is the jth reflection coefficient related to the forward prediction of the kth channel signal, computed at the (k,j)th single circular cell of the square-shaped reference-orthogonalization processor related to forward prediction at the ℓth two-channel SPMLS. Note that the matrix inversion operation in Equation (9) is transformed into simple scalar inversion operations in (18) due to the diagonal nature of D_{ℓ+1}^f(i). The backward prediction counterpart of this optimization problem is solved similarly using the 2 × 2 lower triangular transformation matrices L_ℓ^b(i) and the 2 × 2 lattice backward prediction reflection coefficient matrices Δ_ℓ^b(i).
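The effect of the lower triangular matrix L of Eq. (17) is to decorrelate the second channel's error from the first with a single reflection coefficient. A batch analogue of this self-orthogonalization step (the article's cells compute κ̂ recursively with the error-feedback modified Gram-Schmidt formula; the closed-form LS value is used here only for illustration) is:

```python
import numpy as np

# Sketch of the self-orthogonalization performed by L in Eq. (17):
# the second channel's backward error b2 is decorrelated from b1 by
# the reflection coefficient kappa, i.e. b2_hat = kappa*b1 + b2, the
# second row [kappa, 1] of the lower triangular matrix L.
rng = np.random.default_rng(2)
b1 = rng.standard_normal(256)
b2 = 0.8 * b1 + 0.1 * rng.standard_normal(256)   # correlated channels

kappa = -np.dot(b1, b2) / np.dot(b1, b1)  # LS choice that removes the b1 component
b2_hat = kappa * b1 + b2                  # transformed (self-orthogonalized) channel
residual = np.dot(b1, b2_hat)             # inner product after decorrelation
```

After the transformation the two channels are orthogonal (the residual inner product is zero up to rounding), which is what allows the reference-orthogonalization cells of Eq. (19) to operate on each channel with scalar-only arithmetic.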

After the processing of the input signals by the two-channel lattice stages, the delayed and fed back forward prediction error û_1(n) = f_{p_1}^1(n−1) is incorporated at the (p_1 − q_1 + 1)th stage as the third channel. Accordingly, we expand the optimization problem by organizing the elements of the input data vectors y_1(n) = [y_1(n),…,y_1(n−α)]^T, y_2(n) = [y_2(n),…,y_2(n−α)]^T, and û_1(n) = [û_1(n),…,û_1(n−α)]^T as follows:

\bar y_{\alpha+1}(n) = \left[ y_1(n), y_2(n), \hat u_1(n), \ldots, y_1(n-\alpha), y_2(n-\alpha), \hat u_1(n-\alpha) \right]^T,
(20)

and input it to the three-channel lattice section, where the stage number m takes values in the range (p_1 − q_1) < m ≤ (p_2 − q_2). Subsequently, we solve the optimization problem in (18) once again with the new input vector, in which case Ω_α^f(i) and Θ_α^f(i) are the 3α × 3α lower triangular transformation matrix and the 3 × 3α forward lattice prediction coefficient matrix, respectively. Ω_α^f(i) is computed sequentially by means of the 3 × 3 lower triangular transformation matrices L_α^f(i), and Θ_α^f(i) is similarly realized stage-by-stage making use of the 3 × 3 forward prediction coefficient matrices Δ_α^f(i) at time instant i. Note that, since the delayed and fed back signal is considered to constitute a new channel in the multichannel sequential lattice filtering, we have three desired signals at this point, d^k(n) with k = 1,2,3, one of which did not exist in the optimization problem stated in Section 2.1; this new desired signal, d^3(n), is related to the MA part of the first subband ARMA model.

Finally, the optimization problem is expanded one more time with the inclusion of the second delayed and fed back forward prediction error û_2(n) = f_{p_2}^2(n−1), and this time the elements of the input data vectors y_1(n) = [y_1(n),…,y_1(n−ν)]^T, y_2(n) = [y_2(n),…,y_2(n−ν)]^T, û_1(n) = [û_1(n),…,û_1(n−ν)]^T, and û_2(n) = [û_2(n),…,û_2(n−ν)]^T are organized as

\bar y_{\nu+1}(n) = \left[ y_1(n), y_2(n), \hat u_1(n), \hat u_2(n), \ldots, y_1(n-\nu), y_2(n-\nu), \hat u_1(n-\nu), \hat u_2(n-\nu) \right]^T
(21)

where the stage number m is in the range (p_2 − q_2) < m ≤ p_2 due to the four-channel processing. Similar to the two-channel and three-channel cases, we solve the optimization problem in (18) using the new data vector in Equation (21), in which case Ω_ν^f(i) and Θ_ν^f(i) are the 4ν × 4ν lower triangular transformation matrix and the 4 × 4ν forward lattice prediction coefficient matrix at time instant i, respectively. As in the previous cases, these matrices are computed stage-by-stage by the use of the 4 × 4 lower triangular transformation matrices L_ν^f(i) and the 4 × 4 forward prediction coefficient matrices Δ_ν^f(i) at time instant i, respectively. As the second delayed and fed back signal is also considered as a new channel in the multichannel sequential lattice filtering, hereafter we have four desired signals, d^k(n) with k = 1,2,3,4; this fourth desired signal, d^4(n), is associated with the MA part of the second subband ARMA model.

2.3 Matrix visualization

In order to further explain the sequential lattice orthogonalization, we consider an (8,5) and an (8,2) ARMA lattice prediction filter for the first and second subbands, respectively, and organize the elements of the input data vectors y_1(n) = [y_1(n),…,y_1(n−8)]^T, y_2(n) = [y_2(n),…,y_2(n−8)]^T, û_1(n) = [û_1(n), û_1(n−1),…,û_1(n−5)]^T, and û_2(n) = [û_2(n), û_2(n−1),…,û_2(n−2)]^T as the rows of a matrix,

\begin{bmatrix}
y_1(n) & y_1(n-1) & y_1(n-2) & y_1(n-3) & y_1(n-4) & y_1(n-5) & y_1(n-6) & y_1(n-7) & y_1(n-8) \\
y_2(n) & y_2(n-1) & y_2(n-2) & y_2(n-3) & y_2(n-4) & y_2(n-5) & y_2(n-6) & y_2(n-7) & y_2(n-8) \\
\hat u_1(n) & \hat u_1(n-1) & \hat u_1(n-2) & \hat u_1(n-3) & \hat u_1(n-4) & \hat u_1(n-5) & & & \\
\hat u_2(n) & \hat u_2(n-1) & \hat u_2(n-2) & & & & & &
\end{bmatrix}
(22)

by taking into consideration the different numbers of parameters in the feedforward and feedback channels and the shifting properties of the input data. This matrix helps us to visualize the orthogonalization process, and thus to draw a diagram of the four-channel prediction filter structure under consideration, as in Figure 3. Note that the elements of the first and second rows are related to the input signals of the first and second subband channels of the ARMA filter under consideration, while the third and fourth rows are associated with the fed back and delayed signals. The lattice orthogonalization begins with the elements of the first two rows, using two-channel sequential lattice processing stages, until the first fed back and delayed channel is incorporated as a new channel. The orthogonalization then continues with three-channel lattice stages until the fourth channel, which is the second fed back and delayed channel, is taken into the process; the orthogonalization of the input data is finalized with four-channel stages when the mean squared prediction error performance requirements are met. Thereby, the kth desired signal, d^k(n), is sequentially predicted using the self-orthogonalized and delayed backward prediction error signals as follows:

\hat d_i^k(n) = \sum_{m=1}^{p_1-q_1} \sum_{j=1}^{2} \bar\kappa_{m,k,j}^f(i-1)\, \hat b_{m-1}^j(n-1) + \sum_{m=p_1-q_1+1}^{p_2-q_2} \sum_{j=1}^{3} \bar\kappa_{m,k,j}^f(i-1)\, \hat b_{m-1}^j(n-1) + \sum_{m=p_2-q_2+1}^{p_2} \sum_{j=1}^{4} \bar\kappa_{m,k,j}^f(i-1)\, \hat b_{m-1}^j(n-1).
(23)
Figure 3. A diagram of the four-channel ARMA lattice filter structure for two-subband spectrum estimation.

Here, the first and second summations represent the prediction accomplished by the two-channel and three-channel sections, respectively, while the third summation corresponds to the four-channel prediction section. In each section, κ̄_{m,k,j}^f(i) represents the jth forward prediction reflection coefficient at the mth stage related to the kth channel, as defined in the previous subsection, and b̂_{m−1}^j(n) represents the jth element of the self-orthogonalized backward prediction error signal vector b̂_{m−1}(n) at the input of the mth stage. The self-orthogonalized backward prediction error vector b̂_{m−1}(n) is produced by the lower triangular transformation of the input backward prediction error vector b_{m−1}(n) using L_m^f(n); this operation is accomplished at the triangular-shaped self-orthogonalization processor (related to forward prediction) of the mth SPMLS. Note that the sizes of the vectors b̂_{m−1}(n) and b_{m−1}(n) and of the matrix L_m^f(n) in the different sections of the proposed lattice filter are as follows: 2 × 1 and 2 × 2 in the two-channel section, 3 × 1 and 3 × 3 in the three-channel section, and 4 × 1 and 4 × 4 in the four-channel section, respectively.

We would also like to point out that a lattice filter for fullband ARMA spectrum estimation is a special form of the two-subband implementation, and therefore it can similarly be realized using sequential processing one-channel and two-channel lattice stages, as illustrated in Figure 4 for an ARMA(10,2) implementation.
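Once the process parameters have been recovered (fullband or per subband), the spectrum itself follows from the standard ARMA spectral density S(ω) = σ² |B(e^{jω})|² / |A(e^{jω})|². The sketch below evaluates this formula on a frequency grid via the FFT; it is the textbook expression, not the article's specific two-subband spectrum estimation expression of Section 4, and the function name is our own.

```python
import numpy as np

def arma_spectrum(a, b, sigma2=1.0, nfft=1024):
    """Sketch: evaluate the standard ARMA spectral density
       S(w) = sigma2 * |B(e^{jw})|^2 / |A(e^{jw})|^2
    on nfft/2 + 1 frequencies in [0, pi].  a = [1, a_1, ..., a_p] and
    b = [b_0, b_1, ..., b_q] are the estimated AR and MA polynomials."""
    A = np.fft.rfft(a, nfft)   # A(z) sampled on the upper unit circle
    B = np.fft.rfft(b, nfft)   # B(z) sampled on the upper unit circle
    return sigma2 * np.abs(B) ** 2 / np.abs(A) ** 2

# Usage: a single AR pole near z = 1 concentrates power at low frequency.
s_flat = arma_spectrum([1.0], [1.0])        # white noise: flat spectrum
s_lp = arma_spectrum([1.0, -0.9], [1.0])    # low-pass AR(1) spectrum
```

In the subband setting of this article, the same evaluation would be applied per subband with the ARMA(p_k,q_k) parameters delivered by the conversion algorithm of Section 3.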

3 Conversion of lattice coefficients to process parameters

Since the mathematical link between the process parameters and the reflection coefficients of a lattice prediction filter is provided by the Levinson–Durbin algorithm [48, 49], we develop a new Levinson–Durbin type conversion algorithm specifically for SPMLSs in order to convert the lattice reflection coefficients to the subband ARMA process parameters. Due to the sequential nature of the proposed lattice structure, we carry out the development of the new Levinson–Durbin type multichannel conversion algorithm by considering each of the filter sections separately, and we therefore assume that the filter is comprised of three cascaded filters, as in Section 2.2.
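To fix ideas before the multichannel derivation, the classical single-channel Levinson–Durbin order update combines the previous forward coefficient vector with its reversed (backward) counterpart, scaled by the new reflection coefficient. The sketch below is only this scalar analogue of the order updates developed in this section, not the article's multichannel algorithm, and the function name is our own.

```python
import numpy as np

def reflections_to_ar(kappas):
    """Sketch of the classical (single-channel) Levinson-Durbin
    conversion from lattice reflection coefficients to AR parameters:
      a_m = [a_{m-1}, 0] + kappa_m * [0, reversed(a_{m-1})],
    the scalar analogue of the multichannel order updates derived in
    this section (real-valued case, so no conjugation is needed)."""
    a = np.array([1.0])
    for k in kappas:
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
    return a

# Usage: one reflection coefficient gives a first-order predictor,
# two give the second-order polynomial [1, k1 + k2*k1, k2].
a1 = reflections_to_ar([0.5])
a2 = reflections_to_ar([0.5, 0.25])
```

The multichannel version below replaces the scalars by reflection coefficient matrices and adds the shuffling matrices that account for the SPMLS sample ordering.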

Figure 4. A diagram of the two-channel ARMA lattice filter structure for fullband spectrum estimation.

We first consider the conversion algorithm for the two-channel section of the lattice prediction filter, and we organize the input signal samples entering the two-channel lattices as

\bar y_{\ell+1}(n) = \begin{bmatrix} \mathbf y_1(n) \\ y_1(n-\ell) \\ \mathbf y_2(n) \\ y_2(n-\ell) \end{bmatrix} = \begin{bmatrix} y_1(n) \\ \mathbf y_1(n-1) \\ y_2(n) \\ \mathbf y_2(n-1) \end{bmatrix}
(24)

where we define the data vectors as \mathbf y_1(n) = [y_1(n),…,y_1(n−ℓ+1)]^T and \mathbf y_2(n) = [y_2(n),…,y_2(n−ℓ+1)]^T, and 0 < m ≤ (p_1 − q_1). The corresponding forward and backward prediction error coefficient matrices of the ℓth-order transversal filter for the kth channel are defined as

\hat a_\ell^{kT}(i) = \left[ \hat a_0^k(i), \hat a_1^k(i), \hat a_2^k(i), \ldots, \hat a_{2\ell-2}^k(i), \hat a_{2\ell-1}^k(i), \hat a_{2\ell}^k(i) \right]
(25)

and

\hat c_\ell^{kT}(i) = \left[ \hat c_{2\ell}^k(i), \hat c_{2\ell-1}^k(i), \hat c_{2\ell-2}^k(i), \ldots, \hat c_2^k(i), \hat c_1^k(i), \hat c_0^k(i) \right]
(26)

where k = 1,2 due to the two-channel lattice processing, and â_0^k(i) = ĉ_0^k(i) = 1.0. Since the signal time-shifting and ordering properties of SPMLSs, when expressed in matrix form as in Equation (12), differ from the organization of the input signal samples in matrix form as in Equation (24), we use (2ℓ+1) × (2ℓ+1) shuffling matrices, J_{ℓ+1}^1 for the first channel and J_{ℓ+1}^2 for the second channel, to reorder the elements of the coefficient matrices â_ℓ^{1H}(i), â_ℓ^{2H}(i) and ĉ_ℓ^{1H}(i), ĉ_ℓ^{2H}(i) according to the sample ordering of SPMLSs. Therefore, the forward and backward prediction errors at the end of the observation interval n = i at the output of the general ℓth-order filters with transversal structure can be stated as

\begin{bmatrix} f_\ell^1(n) \\ f_\ell^2(n) \end{bmatrix} = \begin{bmatrix} J_{\ell+1}^1 \hat a_\ell^{1H}(n) & \mathbf 0 \\ \mathbf 0 & J_{\ell+1}^2 \hat a_\ell^{2H}(n) \end{bmatrix} \bar y_{\ell+1}(n)
(27)
\begin{bmatrix} b_\ell^1(n) \\ b_\ell^2(n) \end{bmatrix} = \begin{bmatrix} J_{\ell+1}^1 \hat c_\ell^{1H}(n) & \mathbf 0 \\ \mathbf 0 & J_{\ell+1}^2 \hat c_\ell^{2H}(n) \end{bmatrix} \bar y_{\ell+1}(n)
(28)

where \mathbf 0 is a 1 × (ℓ+1) zero matrix. Then, we can express the (ℓ−1)th-order prediction errors as

\begin{bmatrix} f_{\ell-1}^1(n) \\ f_{\ell-1}^2(n) \end{bmatrix} = \begin{bmatrix} J_{\ell+1}^1 \begin{bmatrix} \hat a_{\ell-1}^{1H}(n) & \mathbf 0 \end{bmatrix} & \mathbf 0 \\ \mathbf 0 & J_{\ell+1}^2 \begin{bmatrix} \hat a_{\ell-1}^{2H}(n) & \mathbf 0 \end{bmatrix} \end{bmatrix} \bar y_{\ell+1}(n)
(29)
\begin{bmatrix} b_{\ell-1}^1(n-1) \\ b_{\ell-1}^2(n-1) \end{bmatrix} = \begin{bmatrix} J_{\ell+1}^1 \begin{bmatrix} \mathbf 0 & \hat c_{\ell-1}^{1H}(n-1) \end{bmatrix} & \mathbf 0 \\ \mathbf 0 & J_{\ell+1}^2 \begin{bmatrix} \mathbf 0 & \hat c_{\ell-1}^{2H}(n-1) \end{bmatrix} \end{bmatrix} \bar y_{\ell+1}(n)
(30)

Note that the size of each coefficient matrix increases by two when the order of the prediction filter increases from ℓ−1 to ℓ, and \mathbf 0 is a 1 × (ℓ+1) zero matrix as before. Subsequently, we define the ℓth-order prediction errors in terms of the lattice parameters and the (ℓ−1)th-order prediction errors as follows

\begin{bmatrix} f_\ell^1(n) \\ f_\ell^2(n) \end{bmatrix} = \begin{bmatrix} f_{\ell-1}^1(n) \\ f_{\ell-1}^2(n) \end{bmatrix} + \begin{bmatrix} \bar\kappa_{\ell,1,1}^f(n-1) & \bar\kappa_{\ell,1,2}^f(n-1) \\ \bar\kappa_{\ell,2,1}^f(n-1) & \bar\kappa_{\ell,2,2}^f(n-1) \end{bmatrix} \begin{bmatrix} 1 & 0 \\ \hat\kappa_\ell^f(n-2) & 1 \end{bmatrix} \begin{bmatrix} b_{\ell-1}^1(n-1) \\ b_{\ell-1}^2(n-1) \end{bmatrix}
(31)
\begin{bmatrix} b_\ell^1(n) \\ b_\ell^2(n) \end{bmatrix} = \begin{bmatrix} b_{\ell-1}^1(n-1) \\ b_{\ell-1}^2(n-1) \end{bmatrix} + \begin{bmatrix} \bar\kappa_{\ell,1,1}^b(n-1) & \bar\kappa_{\ell,1,2}^b(n-1) \\ \bar\kappa_{\ell,2,1}^b(n-1) & \bar\kappa_{\ell,2,2}^b(n-1) \end{bmatrix} \begin{bmatrix} 1 & 0 \\ \hat\kappa_\ell^b(n-1) & 1 \end{bmatrix} \begin{bmatrix} f_{\ell-1}^1(n) \\ f_{\ell-1}^2(n) \end{bmatrix}
(32)

where the lower triangular and square coefficient matrices are generated in the triangular-shaped self-orthogonalization and square-shaped reference-orthogonalization processors of a two-channel SPMLS, as defined in Equations (17) and (19). Accordingly, we multiply these lower triangular and square coefficient matrices, and make the following definitions

\Gamma_\ell^f(n) = \begin{bmatrix} \Gamma_{\ell,1,1}^f(n) & \Gamma_{\ell,1,2}^f(n) \\ \Gamma_{\ell,2,1}^f(n) & \Gamma_{\ell,2,2}^f(n) \end{bmatrix} = \begin{bmatrix} \bar\kappa_{\ell,1,1}^f(n) + \bar\kappa_{\ell,1,2}^f(n)\,\hat\kappa_\ell^f(n-1) & \bar\kappa_{\ell,1,2}^f(n) \\ \bar\kappa_{\ell,2,1}^f(n) + \bar\kappa_{\ell,2,2}^f(n)\,\hat\kappa_\ell^f(n-1) & \bar\kappa_{\ell,2,2}^f(n) \end{bmatrix}
(33)
\Gamma_\ell^b(n) = \begin{bmatrix} \Gamma_{\ell,1,1}^b(n) & \Gamma_{\ell,1,2}^b(n) \\ \Gamma_{\ell,2,1}^b(n) & \Gamma_{\ell,2,2}^b(n) \end{bmatrix} = \begin{bmatrix} \bar\kappa_{\ell,1,1}^b(n) + \bar\kappa_{\ell,1,2}^b(n)\,\hat\kappa_\ell^b(n) & \bar\kappa_{\ell,1,2}^b(n) \\ \bar\kappa_{\ell,2,1}^b(n) + \bar\kappa_{\ell,2,2}^b(n)\,\hat\kappa_\ell^b(n) & \bar\kappa_{\ell,2,2}^b(n) \end{bmatrix}
(34)

in order to obtain compact versions of Equations (31) and (32) as follows

$$\begin{bmatrix} f_{\ell}^{1}(n) \\ f_{\ell}^{2}(n) \end{bmatrix} = \begin{bmatrix} f_{\ell-1}^{1}(n) \\ f_{\ell-1}^{2}(n) \end{bmatrix} + \boldsymbol{\Gamma}_{\ell}^{f}(n-1) \begin{bmatrix} b_{\ell-1}^{1}(n-1) \\ b_{\ell-1}^{2}(n-1) \end{bmatrix}$$

(35)

$$\begin{bmatrix} b_{\ell}^{1}(n) \\ b_{\ell}^{2}(n) \end{bmatrix} = \begin{bmatrix} b_{\ell-1}^{1}(n-1) \\ b_{\ell-1}^{2}(n-1) \end{bmatrix} + \boldsymbol{\Gamma}_{\ell}^{b}(n-1) \begin{bmatrix} f_{\ell-1}^{1}(n) \\ f_{\ell-1}^{2}(n) \end{bmatrix}.$$

(36)
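To make the algebra of Equations (33) through (36) concrete, the following minimal sketch folds the unit lower-triangular self-orthogonalization factor into $\Gamma$ and then performs one lattice order update of the error vectors. All coefficient values are hypothetical.

```python
import numpy as np

# Hypothetical two-channel reflection coefficients for one lattice stage.
kappa_bar_f = np.array([[0.25, -0.10],
                        [0.05,  0.30]])   # square reference-orthogonalization part
kappa_hat_f = -0.4                        # scalar self-orthogonalization part

# Eq. (33): Gamma^f = kappa_bar^f times the unit lower-triangular factor.
Gamma_f = kappa_bar_f @ np.array([[1.0, 0.0],
                                  [kappa_hat_f, 1.0]])

# Eqs. (35)-(36): one order update of the forward/backward error vectors.
Gamma_b = np.array([[0.10, 0.00],
                    [0.02, 0.25]])        # assumed Gamma^b for the same stage
f_prev = np.array([0.8, -0.3])            # (l-1)th-order forward errors at time n
b_prev = np.array([0.5,  0.2])            # (l-1)th-order backward errors at n-1
f_cur = f_prev + Gamma_f @ b_prev
b_cur = b_prev + Gamma_b @ f_prev
```

Entry by entry, `Gamma_f[0, 0]` equals `kappa_bar_f[0, 0] + kappa_bar_f[0, 1] * kappa_hat_f`, matching the first entry in Equation (33).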

Then, the $\ell$th-order prediction error matrices in Equations (27) and (28) and the $(\ell-1)$th-order prediction error matrices in Equations (29) and (30) are substituted into the $\ell$th-order prediction error expressions in (35) and (36) so as to obtain the following pairs of order updates

$$\hat{\mathbf{a}}_{\ell}^{1}(n) = \begin{bmatrix} \hat{\mathbf{a}}_{\ell-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\ell,1,1}^{f}(n-1)\,\mathbf{J}_{\ell+1}^{1} \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\ell-1}^{1}(n-1) \end{bmatrix} + \Gamma_{\ell,1,2}^{f}(n-1)\,\mathbf{J}_{\ell+1}^{2} \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\ell-1}^{2}(n-1) \end{bmatrix}$$

(37)

$$\hat{\mathbf{a}}_{\ell}^{2}(n) = \begin{bmatrix} \hat{\mathbf{a}}_{\ell-1}^{2}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\ell,2,1}^{f}(n-1)\,\mathbf{J}_{\ell+1}^{1} \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\ell-1}^{1}(n-1) \end{bmatrix} + \Gamma_{\ell,2,2}^{f}(n-1)\,\mathbf{J}_{\ell+1}^{2} \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\ell-1}^{2}(n-1) \end{bmatrix}$$

(38)

$$\hat{\mathbf{c}}_{\ell}^{1}(n) = \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\ell-1}^{1}(n-1) \end{bmatrix} + \Gamma_{\ell,1,1}^{b}(n-1)\,\mathbf{J}_{\ell+1}^{1} \begin{bmatrix} \hat{\mathbf{a}}_{\ell-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\ell,1,2}^{b}(n-1)\,\mathbf{J}_{\ell+1}^{2} \begin{bmatrix} \hat{\mathbf{a}}_{\ell-1}^{2}(n) \\ \mathbf{0} \end{bmatrix}$$

(39)

$$\hat{\mathbf{c}}_{\ell}^{2}(n) = \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\ell-1}^{2}(n-1) \end{bmatrix} + \Gamma_{\ell,2,1}^{b}(n-1)\,\mathbf{J}_{\ell+1}^{1} \begin{bmatrix} \hat{\mathbf{a}}_{\ell-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\ell,2,2}^{b}(n-1)\,\mathbf{J}_{\ell+1}^{2} \begin{bmatrix} \hat{\mathbf{a}}_{\ell-1}^{2}(n) \\ \mathbf{0} \end{bmatrix}$$

(40)

and since the size of each coefficient matrix increases by two, $\mathbf{0}$ is a $2 \times 1$ zero matrix. The three-channel section starts with the incorporation of the third channel ($\hat{u}_1(n)$) as the new channel at the $(p_1 - q_1 + 1)$th stage. In order to develop the Levinson–Durbin algorithm for this section, we assume that the three-channel section is a separate filter, and we therefore consider the input signal samples to the three-channel section as follows

$$\bar{\mathbf{y}}_{\alpha+1}(n) = \begin{bmatrix} y_1(n) \\ \vdots \\ y_1(n-\alpha) \\ y_2(n) \\ \vdots \\ y_2(n-\alpha) \\ \hat{u}_1(n) \\ \vdots \\ \hat{u}_1(n-\alpha) \end{bmatrix} = \begin{bmatrix} y_1(n) \\ \mathbf{y}_1(n-1) \\ y_2(n) \\ \mathbf{y}_2(n-1) \\ \hat{u}_1(n) \\ \hat{\mathbf{u}}_1(n-1) \end{bmatrix}$$

(41)

where $\mathbf{y}_1(n) = [y_1(n),\ldots,y_1(n-\alpha+1)]^T$, $\mathbf{y}_2(n) = [y_2(n),\ldots,y_2(n-\alpha+1)]^T$, and $\hat{\mathbf{u}}_1(n) = [\hat{u}_1(n),\ldots,\hat{u}_1(n-\alpha+1)]^T$. Correspondingly, the forward and backward prediction error coefficient matrices for the $\alpha$th-order transversal filtering are defined as

$$\hat{\mathbf{a}}^{kT}(n) = \left[ \hat{a}_0^k(n), \hat{a}_1^k(n), \hat{a}_2^k(n), \hat{a}_3^k(n), \ldots, \hat{a}_{3\alpha-2}^k(n), \hat{a}_{3\alpha-1}^k(n), \hat{a}_{3\alpha}^k(n) \right]$$

(42)

and

$$\hat{\mathbf{c}}^{kT}(n) = \left[ \hat{c}_{3\alpha}^k(n), \hat{c}_{3\alpha-1}^k(n), \hat{c}_{3\alpha-2}^k(n), \ldots, \hat{c}_3^k(n), \hat{c}_2^k(n), \hat{c}_1^k(n), \hat{c}_0^k(n) \right]$$

(43)

where k = 1, 2, 3 due to three-channel processing. Then, the prediction filtering continues with three-channel lattice stages for $(p_1 - q_1) < m \le (p_2 - q_2)$. The Levinson–Durbin recursions for the three-channel section can be developed similarly to the two-channel section by establishing the mathematical link between the transversal and lattice filter coefficients. Since the organization of signal samples in Equation (41) differs from the ordering of the signal samples entering the three-channel SPMLSs in (20), we use $(3\alpha+1) \times (3\alpha+1)$ shuffling matrices, $\mathbf{J}_{\alpha+1}^1$ for the first channel, $\mathbf{J}_{\alpha+1}^2$ for the second channel, and $\mathbf{J}_{\alpha+1}^3$ for the third channel, to reorder the elements of the coefficient matrices $\hat{\mathbf{a}}_\alpha^{1H}(n)$, $\hat{\mathbf{a}}_\alpha^{2H}(n)$, $\hat{\mathbf{a}}_\alpha^{3H}(n)$ and $\hat{\mathbf{c}}_\alpha^{1H}(n)$, $\hat{\mathbf{c}}_\alpha^{2H}(n)$, $\hat{\mathbf{c}}_\alpha^{3H}(n)$ according to the sample ordering of the SPMLSs. Similar to Equations (27) and (28) in the two-channel case, the forward and backward prediction errors in the three-channel case for the output of the general $\alpha$th-order filter with transversal structure are expressed as

$$\begin{bmatrix} f_\alpha^1(n) \\ f_\alpha^2(n) \\ f_\alpha^3(n) \end{bmatrix} = \begin{bmatrix} \mathbf{J}_{\alpha+1}^1 \hat{\mathbf{a}}_\alpha^{1H}(n) & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{J}_{\alpha+1}^2 \hat{\mathbf{a}}_\alpha^{2H}(n) & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{J}_{\alpha+1}^3 \hat{\mathbf{a}}_\alpha^{3H}(n) \end{bmatrix} \bar{\mathbf{y}}_{\alpha+1}(n)$$

(44)

$$\begin{bmatrix} b_\alpha^1(n) \\ b_\alpha^2(n) \\ b_\alpha^3(n) \end{bmatrix} = \begin{bmatrix} \mathbf{J}_{\alpha+1}^1 \hat{\mathbf{c}}_\alpha^{1H}(n) & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{J}_{\alpha+1}^2 \hat{\mathbf{c}}_\alpha^{2H}(n) & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{J}_{\alpha+1}^3 \hat{\mathbf{c}}_\alpha^{3H}(n) \end{bmatrix} \bar{\mathbf{y}}_{\alpha+1}(n)$$

(45)

where $\mathbf{0}$ is a $1 \times (\alpha + 1)$ zero matrix in this case. We can then express the $(\alpha-1)$th-order prediction errors as

$$\begin{bmatrix} f_{\alpha-1}^1(n) \\ f_{\alpha-1}^2(n) \\ f_{\alpha-1}^3(n) \end{bmatrix} = \begin{bmatrix} \mathbf{J}_{\alpha+1}^1\left[\hat{\mathbf{a}}_{\alpha-1}^{1H}(n)\;\;\mathbf{0}\;\;\mathbf{0}\;\;\mathbf{0}\right] & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{J}_{\alpha+1}^2\left[\hat{\mathbf{a}}_{\alpha-1}^{2H}(n)\;\;\mathbf{0}\;\;\mathbf{0}\;\;\mathbf{0}\right] & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{J}_{\alpha+1}^3\left[\hat{\mathbf{a}}_{\alpha-1}^{3H}(n)\;\;\mathbf{0}\;\;\mathbf{0}\;\;\mathbf{0}\right] \end{bmatrix} \bar{\mathbf{y}}_{\alpha+1}(n)$$

(46)

$$\begin{bmatrix} b_{\alpha-1}^1(n-1) \\ b_{\alpha-1}^2(n-1) \\ b_{\alpha-1}^3(n-1) \end{bmatrix} = \begin{bmatrix} \mathbf{J}_{\alpha+1}^1\left[\mathbf{0}\;\;\mathbf{0}\;\;\mathbf{0}\;\;\hat{\mathbf{c}}_{\alpha-1}^{1H}(n-1)\right] & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{J}_{\alpha+1}^2\left[\mathbf{0}\;\;\mathbf{0}\;\;\mathbf{0}\;\;\hat{\mathbf{c}}_{\alpha-1}^{2H}(n-1)\right] & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{J}_{\alpha+1}^3\left[\mathbf{0}\;\;\mathbf{0}\;\;\mathbf{0}\;\;\hat{\mathbf{c}}_{\alpha-1}^{3H}(n-1)\right] \end{bmatrix} \bar{\mathbf{y}}_{\alpha+1}(n).$$

(47)

Note that the size of each coefficient matrix in the three-channel case increases by three when the order of the prediction filter increases from $\alpha - 1$ to $\alpha$. Similar to Equations (35) and (36) in the two-channel case, the lattice prediction errors for the $\alpha$th three-channel stage can be expressed in compact form with the following equations

$$\begin{bmatrix} f_\alpha^1(n) \\ f_\alpha^2(n) \\ f_\alpha^3(n) \end{bmatrix} = \begin{bmatrix} f_{\alpha-1}^1(n) \\ f_{\alpha-1}^2(n) \\ f_{\alpha-1}^3(n) \end{bmatrix} + \boldsymbol{\Gamma}_\alpha^f(n-1) \begin{bmatrix} b_{\alpha-1}^1(n-1) \\ b_{\alpha-1}^2(n-1) \\ b_{\alpha-1}^3(n-1) \end{bmatrix}$$

(48)

$$\begin{bmatrix} b_\alpha^1(n) \\ b_\alpha^2(n) \\ b_\alpha^3(n) \end{bmatrix} = \begin{bmatrix} b_{\alpha-1}^1(n-1) \\ b_{\alpha-1}^2(n-1) \\ b_{\alpha-1}^3(n-1) \end{bmatrix} + \boldsymbol{\Gamma}_\alpha^b(n-1) \begin{bmatrix} f_{\alpha-1}^1(n) \\ f_{\alpha-1}^2(n) \\ f_{\alpha-1}^3(n) \end{bmatrix}$$

(49)

where

$$\boldsymbol{\Gamma}_\alpha^f(n) = \begin{bmatrix} \Gamma_{\alpha,1,1}^f(n) & \Gamma_{\alpha,1,2}^f(n) & \Gamma_{\alpha,1,3}^f(n) \\ \Gamma_{\alpha,2,1}^f(n) & \Gamma_{\alpha,2,2}^f(n) & \Gamma_{\alpha,2,3}^f(n) \\ \Gamma_{\alpha,3,1}^f(n) & \Gamma_{\alpha,3,2}^f(n) & \Gamma_{\alpha,3,3}^f(n) \end{bmatrix} = \begin{bmatrix} \bar{\kappa}_{\alpha,1,1}^f(n) & \bar{\kappa}_{\alpha,1,2}^f(n) & \bar{\kappa}_{\alpha,1,3}^f(n) \\ \bar{\kappa}_{\alpha,2,1}^f(n) & \bar{\kappa}_{\alpha,2,2}^f(n) & \bar{\kappa}_{\alpha,2,3}^f(n) \\ \bar{\kappa}_{\alpha,3,1}^f(n) & \bar{\kappa}_{\alpha,3,2}^f(n) & \bar{\kappa}_{\alpha,3,3}^f(n) \end{bmatrix} \times \begin{bmatrix} 1 & 0 & 0 \\ \hat{\kappa}_{\alpha,2,1}^f(n-1) & 1 & 0 \\ \hat{\kappa}_{\alpha,3,1}^f(n-1) & \hat{\kappa}_{\alpha,3,2}^f(n-1) & 1 \end{bmatrix}$$

and

$$\boldsymbol{\Gamma}_\alpha^b(n) = \begin{bmatrix} \Gamma_{\alpha,1,1}^b(n) & \Gamma_{\alpha,1,2}^b(n) & \Gamma_{\alpha,1,3}^b(n) \\ \Gamma_{\alpha,2,1}^b(n) & \Gamma_{\alpha,2,2}^b(n) & \Gamma_{\alpha,2,3}^b(n) \\ \Gamma_{\alpha,3,1}^b(n) & \Gamma_{\alpha,3,2}^b(n) & \Gamma_{\alpha,3,3}^b(n) \end{bmatrix} = \begin{bmatrix} \bar{\kappa}_{\alpha,1,1}^b(n) & \bar{\kappa}_{\alpha,1,2}^b(n) & \bar{\kappa}_{\alpha,1,3}^b(n) \\ \bar{\kappa}_{\alpha,2,1}^b(n) & \bar{\kappa}_{\alpha,2,2}^b(n) & \bar{\kappa}_{\alpha,2,3}^b(n) \\ \bar{\kappa}_{\alpha,3,1}^b(n) & \bar{\kappa}_{\alpha,3,2}^b(n) & \bar{\kappa}_{\alpha,3,3}^b(n) \end{bmatrix} \times \begin{bmatrix} 1 & 0 & 0 \\ \hat{\kappa}_{\alpha,2,1}^b(n) & 1 & 0 \\ \hat{\kappa}_{\alpha,3,1}^b(n) & \hat{\kappa}_{\alpha,3,2}^b(n) & 1 \end{bmatrix}.$$
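The same factorization holds for any number of channels: $\Gamma$ is the square reference-orthogonalization matrix times a unit lower-triangular matrix of self-orthogonalization coefficients. A small sketch with hypothetical three-channel values (the function and variable names are ours):

```python
import numpy as np

def gamma_from_kappas(kappa_bar, kappa_hat):
    """Fold the unit lower-triangular self-orthogonalization factor into the
    square reference-orthogonalization matrix, as in the 3 x 3 product above.
    kappa_bar: k x k matrix; kappa_hat: k x k matrix whose strictly lower
    part holds the khat coefficients (other entries are ignored)."""
    k = kappa_bar.shape[0]
    L = np.eye(k) + np.tril(kappa_hat, -1)   # unit lower-triangular factor
    return kappa_bar @ L

# Hypothetical three-channel values.
kbar = np.diag([0.3, 0.2, 0.1])
khat = np.array([[0.0, 0.0, 0.0],
                 [0.5, 0.0, 0.0],
                 [0.4, 0.6, 0.0]])
Gamma = gamma_from_kappas(kbar, khat)
```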

The α th-order prediction error matrices in Equations (44) and (45), and the (α−1)th-order prediction error matrices in Equations (46) and (47) are subsequently substituted in the α th-order prediction error expressions in (48) and (49) so that the following pairs of order updates are produced

$$\hat{\mathbf{a}}_\alpha^1(n) = \begin{bmatrix} \hat{\mathbf{a}}_{\alpha-1}^1(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\alpha,1,1}^f(n-1)\,\mathbf{J}_{\alpha+1}^1 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\alpha-1}^1(n-1) \end{bmatrix} + \cdots + \Gamma_{\alpha,1,3}^f(n-1)\,\mathbf{J}_{\alpha+1}^3 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\alpha-1}^3(n-1) \end{bmatrix}$$

(50)

$$\hat{\mathbf{a}}_\alpha^2(n) = \begin{bmatrix} \hat{\mathbf{a}}_{\alpha-1}^2(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\alpha,2,1}^f(n-1)\,\mathbf{J}_{\alpha+1}^1 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\alpha-1}^1(n-1) \end{bmatrix} + \cdots + \Gamma_{\alpha,2,3}^f(n-1)\,\mathbf{J}_{\alpha+1}^3 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\alpha-1}^3(n-1) \end{bmatrix}$$

(51)

$$\hat{\mathbf{a}}_\alpha^3(n) = \begin{bmatrix} \hat{\mathbf{a}}_{\alpha-1}^3(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\alpha,3,1}^f(n-1)\,\mathbf{J}_{\alpha+1}^1 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\alpha-1}^1(n-1) \end{bmatrix} + \cdots + \Gamma_{\alpha,3,3}^f(n-1)\,\mathbf{J}_{\alpha+1}^3 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\alpha-1}^3(n-1) \end{bmatrix}$$

(52)

$$\hat{\mathbf{c}}_\alpha^1(n) = \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\alpha-1}^1(n-1) \end{bmatrix} + \Gamma_{\alpha,1,1}^b(n-1)\,\mathbf{J}_{\alpha+1}^1 \begin{bmatrix} \hat{\mathbf{a}}_{\alpha-1}^1(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\alpha,1,3}^b(n-1)\,\mathbf{J}_{\alpha+1}^3 \begin{bmatrix} \hat{\mathbf{a}}_{\alpha-1}^3(n) \\ \mathbf{0} \end{bmatrix}$$

(53)

$$\hat{\mathbf{c}}_\alpha^2(n) = \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\alpha-1}^2(n-1) \end{bmatrix} + \Gamma_{\alpha,2,1}^b(n-1)\,\mathbf{J}_{\alpha+1}^1 \begin{bmatrix} \hat{\mathbf{a}}_{\alpha-1}^1(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\alpha,2,3}^b(n-1)\,\mathbf{J}_{\alpha+1}^3 \begin{bmatrix} \hat{\mathbf{a}}_{\alpha-1}^3(n) \\ \mathbf{0} \end{bmatrix}$$

(54)

$$\hat{\mathbf{c}}_\alpha^3(n) = \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\alpha-1}^3(n-1) \end{bmatrix} + \Gamma_{\alpha,3,1}^b(n-1)\,\mathbf{J}_{\alpha+1}^1 \begin{bmatrix} \hat{\mathbf{a}}_{\alpha-1}^1(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\alpha,3,3}^b(n-1)\,\mathbf{J}_{\alpha+1}^3 \begin{bmatrix} \hat{\mathbf{a}}_{\alpha-1}^3(n) \\ \mathbf{0} \end{bmatrix}$$

(55)

where $\mathbf{0}$ is a $3 \times 1$ zero matrix. Finally, the fourth channel ($\hat{u}_2(n)$), which represents the fed-back and delayed signal related to the second subband, is taken into the orthogonalization process at the $(p_2 - q_2 + 1)$th stage, and the prediction filtering continues with four-channel lattice stages through $(p_2 - q_2) < m \le p_2$. In order to develop the Levinson–Durbin recursions for this section, we define the forward and backward prediction error coefficient matrices for the $\nu$th-order transversal filtering as

$$\hat{\mathbf{a}}^{kT}(n) = \left[ \hat{a}_0^k(n), \hat{a}_1^k(n), \hat{a}_2^k(n), \hat{a}_3^k(n), \hat{a}_4^k(n), \ldots, \hat{a}_{4\nu-3}^k(n), \hat{a}_{4\nu-2}^k(n), \hat{a}_{4\nu-1}^k(n), \hat{a}_{4\nu}^k(n) \right]$$

(56)

and

$$\hat{\mathbf{c}}^{kT}(n) = \left[ \hat{c}_{4\nu}^k(n), \hat{c}_{4\nu-1}^k(n), \hat{c}_{4\nu-2}^k(n), \hat{c}_{4\nu-3}^k(n), \ldots, \hat{c}_4^k(n), \hat{c}_3^k(n), \hat{c}_2^k(n), \hat{c}_1^k(n), \hat{c}_0^k(n) \right]$$

(57)

where k = 1, 2, 3, 4 due to four-channel lattice processing, and, as before, the elements of the input vectors $\mathbf{y}_1(n) = [y_1(n),\ldots,y_1(n-\nu+1)]^T$, $\mathbf{y}_2(n) = [y_2(n),\ldots,y_2(n-\nu+1)]^T$, $\hat{\mathbf{u}}_1(n) = [\hat{u}_1(n),\ldots,\hat{u}_1(n-\nu+1)]^T$, and $\hat{\mathbf{u}}_2(n) = [\hat{u}_2(n),\ldots,\hat{u}_2(n-\nu+1)]^T$ are organized as follows:

$$\bar{\mathbf{y}}_{\nu+1}(n) = \begin{bmatrix} y_1(n) \\ \vdots \\ y_1(n-\nu) \\ y_2(n) \\ \vdots \\ y_2(n-\nu) \\ \hat{u}_1(n) \\ \vdots \\ \hat{u}_1(n-\nu) \\ \hat{u}_2(n) \\ \vdots \\ \hat{u}_2(n-\nu) \end{bmatrix} = \begin{bmatrix} y_1(n) \\ \mathbf{y}_1(n-1) \\ y_2(n) \\ \mathbf{y}_2(n-1) \\ \hat{u}_1(n) \\ \hat{\mathbf{u}}_1(n-1) \\ \hat{u}_2(n) \\ \hat{\mathbf{u}}_2(n-1) \end{bmatrix}.$$

(58)

Similar to the previous two steps, the signal sample ordering in Equation (58) differs from the ordering in Equation (21); hence, we use $(4\nu+1) \times (4\nu+1)$ shuffling matrices, $\mathbf{J}_{\nu+1}^1$ for the first channel, $\mathbf{J}_{\nu+1}^2$ for the second channel, $\mathbf{J}_{\nu+1}^3$ for the third channel, and $\mathbf{J}_{\nu+1}^4$ for the fourth channel, to reorder the elements of the coefficient matrices $\hat{\mathbf{a}}_\nu^{1H}(n)$, $\hat{\mathbf{a}}_\nu^{2H}(n)$, $\hat{\mathbf{a}}_\nu^{3H}(n)$, $\hat{\mathbf{a}}_\nu^{4H}(n)$ and $\hat{\mathbf{c}}_\nu^{1H}(n)$, $\hat{\mathbf{c}}_\nu^{2H}(n)$, $\hat{\mathbf{c}}_\nu^{3H}(n)$, $\hat{\mathbf{c}}_\nu^{4H}(n)$ according to the sample ordering of the SPMLSs. The development of the Levinson–Durbin recursions for this section then unfolds as in the two- and three-channel sections. First, the $\nu$th and $(\nu-1)$th-order forward and backward prediction errors are stated as the output of a transversal filter. Second, the prediction order update equations for the $(\nu-1)$th and $\nu$th orders are expressed for a four-channel lattice section. Finally, the $\nu$th and $(\nu-1)$th-order forward and backward transversal filter prediction error expressions are substituted into the lattice prediction order update equations such that the following pairs of order updates are obtained

$$\hat{\mathbf{a}}_\nu^1(n) = \begin{bmatrix} \hat{\mathbf{a}}_{\nu-1}^1(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\nu,1,1}^f(n-1)\,\mathbf{J}_{\nu+1}^1 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\nu-1}^1(n-1) \end{bmatrix} + \cdots + \Gamma_{\nu,1,4}^f(n-1)\,\mathbf{J}_{\nu+1}^4 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\nu-1}^4(n-1) \end{bmatrix}$$

(59)

$$\hat{\mathbf{a}}_\nu^2(n) = \begin{bmatrix} \hat{\mathbf{a}}_{\nu-1}^2(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\nu,2,1}^f(n-1)\,\mathbf{J}_{\nu+1}^1 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\nu-1}^1(n-1) \end{bmatrix} + \cdots + \Gamma_{\nu,2,4}^f(n-1)\,\mathbf{J}_{\nu+1}^4 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\nu-1}^4(n-1) \end{bmatrix}$$

(60)

$$\hat{\mathbf{a}}_\nu^3(n) = \begin{bmatrix} \hat{\mathbf{a}}_{\nu-1}^3(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\nu,3,1}^f(n-1)\,\mathbf{J}_{\nu+1}^1 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\nu-1}^1(n-1) \end{bmatrix} + \cdots + \Gamma_{\nu,3,4}^f(n-1)\,\mathbf{J}_{\nu+1}^4 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\nu-1}^4(n-1) \end{bmatrix}$$

(61)

$$\hat{\mathbf{a}}_\nu^4(n) = \begin{bmatrix} \hat{\mathbf{a}}_{\nu-1}^4(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\nu,4,1}^f(n-1)\,\mathbf{J}_{\nu+1}^1 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\nu-1}^1(n-1) \end{bmatrix} + \cdots + \Gamma_{\nu,4,4}^f(n-1)\,\mathbf{J}_{\nu+1}^4 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\nu-1}^4(n-1) \end{bmatrix}$$

(62)

$$\hat{\mathbf{c}}_\nu^1(n) = \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\nu-1}^1(n-1) \end{bmatrix} + \Gamma_{\nu,1,1}^b(n-1)\,\mathbf{J}_{\nu+1}^1 \begin{bmatrix} \hat{\mathbf{a}}_{\nu-1}^1(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\nu,1,4}^b(n-1)\,\mathbf{J}_{\nu+1}^4 \begin{bmatrix} \hat{\mathbf{a}}_{\nu-1}^4(n) \\ \mathbf{0} \end{bmatrix}$$

(63)

$$\hat{\mathbf{c}}_\nu^2(n) = \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\nu-1}^2(n-1) \end{bmatrix} + \Gamma_{\nu,2,1}^b(n-1)\,\mathbf{J}_{\nu+1}^1 \begin{bmatrix} \hat{\mathbf{a}}_{\nu-1}^1(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\nu,2,4}^b(n-1)\,\mathbf{J}_{\nu+1}^4 \begin{bmatrix} \hat{\mathbf{a}}_{\nu-1}^4(n) \\ \mathbf{0} \end{bmatrix}$$

(64)

$$\hat{\mathbf{c}}_\nu^3(n) = \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\nu-1}^3(n-1) \end{bmatrix} + \Gamma_{\nu,3,1}^b(n-1)\,\mathbf{J}_{\nu+1}^1 \begin{bmatrix} \hat{\mathbf{a}}_{\nu-1}^1(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\nu,3,4}^b(n-1)\,\mathbf{J}_{\nu+1}^4 \begin{bmatrix} \hat{\mathbf{a}}_{\nu-1}^4(n) \\ \mathbf{0} \end{bmatrix}$$

(65)

$$\hat{\mathbf{c}}_\nu^4(n) = \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{\nu-1}^4(n-1) \end{bmatrix} + \Gamma_{\nu,4,1}^b(n-1)\,\mathbf{J}_{\nu+1}^1 \begin{bmatrix} \hat{\mathbf{a}}_{\nu-1}^1(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\nu,4,4}^b(n-1)\,\mathbf{J}_{\nu+1}^4 \begin{bmatrix} \hat{\mathbf{a}}_{\nu-1}^4(n) \\ \mathbf{0} \end{bmatrix}.$$

(66)
(66)

Note that $\mathbf{0}$ is a $4 \times 1$ zero matrix, and that the conversion of lattice parameters to process parameters started with two channels but ended with four channels due to sequential processing. The new Levinson–Durbin type conversion algorithm for fullband ARMA spectrum estimation can be developed similarly as a special case of the subband implementation. The lattice prediction filter for fullband ARMA spectrum estimation, which consists of one- and two-channel sections, is shown in Figure 4. The corresponding conversion algorithm can also be realized in two sections, as summarized in Subsection 3.1.
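The order updates of the two-, three-, and four-channel sections (Equations (37) through (40), (50) through (55), and (59) through (66)) all share one pattern: zeros are appended to the forward coefficient vectors, zeros are prepended to the backward ones, and the two sets are cross-coupled through the $\Gamma$ entries and the shuffling matrices. A generic sketch for k channels follows; the function and variable names are ours, and the demo uses identity shuffling matrices and zero $\Gamma$ values so that only the zero-padding is visible.

```python
import numpy as np

def order_update(a_list, c_list, Gf, Gb, J_list):
    """One Levinson-Durbin type order update for a k-channel lattice section.
    a_list, c_list: length-k lists of order-(m-1) coefficient vectors
    Gf, Gb: k x k Gamma matrices; J_list: k shuffling (permutation) matrices."""
    k = len(a_list)
    z = np.zeros(k)
    a_pad = [np.r_[a, z] for a in a_list]   # forward: zeros appended
    c_pad = [np.r_[z, c] for c in c_list]   # backward: zeros prepended
    a_new = [a_pad[i] + sum(Gf[i, j] * (J_list[j] @ c_pad[j]) for j in range(k))
             for i in range(k)]
    c_new = [c_pad[i] + sum(Gb[i, j] * (J_list[j] @ a_pad[j]) for j in range(k))
             for i in range(k)]
    return a_new, c_new

# Demo: three channels, identity shuffling, zero Gammas (pure zero-padding).
k = 3
J = [np.eye(7)] * k
a0 = [np.array([1.0, 0.2, -0.1, 0.05]) for _ in range(k)]
c0 = [np.array([0.05, -0.1, 0.2, 1.0]) for _ in range(k)]
a1, c1 = order_update(a0, c0, np.zeros((k, k)), np.zeros((k, k)), J)
```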

3.1 New Levinson-Durbin type conversion algorithm for two-channel ARMA lattice prediction

Initialization:

$$\hat{a}_0^1(n) = 1.0, \quad \hat{c}_0^1(n) = 1.0, \quad \hat{a}_{p-q}^2(n) = 1.0, \quad \hat{c}_0^2(n) = 1.0.$$
(67)

One-channel Section ($0 < m \le (p - q)$):

$$\hat{\mathbf{a}}_m^1(n) = \begin{bmatrix} \hat{\mathbf{a}}_{m-1}^1(n) \\ 0 \end{bmatrix} + \Gamma_{m,1}^f(n-1) \begin{bmatrix} 0 \\ \hat{\mathbf{c}}_{m-1}^1(n-1) \end{bmatrix}$$

(68)

$$\hat{\mathbf{c}}_m^1(n) = \begin{bmatrix} 0 \\ \hat{\mathbf{c}}_{m-1}^1(n-1) \end{bmatrix} + \Gamma_{m,1}^b(n-1) \begin{bmatrix} \hat{\mathbf{a}}_{m-1}^1(n) \\ 0 \end{bmatrix}.$$
(69)
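For the one-channel section, the update reduces to a classic Levinson–Durbin style recursion. A numerical sketch of Equations (68) and (69), with hypothetical coefficient and $\Gamma$ values:

```python
import numpy as np

a_prev = np.array([1.0, -0.5])   # a_{m-1}^1(n), hypothetical values
c_prev = np.array([-0.5, 1.0])   # c_{m-1}^1(n-1)
Gf, Gb = 0.3, 0.3                # Gamma_{m,1}^f(n-1), Gamma_{m,1}^b(n-1)

# Eqs. (68)-(69): pad with one zero, cross-couple through the scalar Gammas.
a_new = np.r_[a_prev, 0.0] + Gf * np.r_[0.0, c_prev]
c_new = np.r_[0.0, c_prev] + Gb * np.r_[a_prev, 0.0]
```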

Two-channel Section ($(p - q) < m \le p$):

$$\hat{\mathbf{a}}_m^1(n) = \begin{bmatrix} \hat{\mathbf{a}}_{m-1}^1(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{m,1,1}^f(n-1)\,\mathbf{J}_{m+1}^1 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{m-1}^1(n-1) \end{bmatrix} + \Gamma_{m,1,2}^f(n-1)\,\mathbf{J}_{m+1}^2 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{m-1}^2(n-1) \end{bmatrix}$$

(70)

$$\hat{\mathbf{a}}_m^2(n) = \begin{bmatrix} \hat{\mathbf{a}}_{m-1}^2(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{m,2,1}^f(n-1)\,\mathbf{J}_{m+1}^1 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{m-1}^1(n-1) \end{bmatrix} + \Gamma_{m,2,2}^f(n-1)\,\mathbf{J}_{m+1}^2 \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{m-1}^2(n-1) \end{bmatrix}$$

(71)

$$\hat{\mathbf{c}}_m^1(n) = \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{m-1}^1(n-1) \end{bmatrix} + \Gamma_{m,1,1}^b(n-1)\,\mathbf{J}_{m+1}^1 \begin{bmatrix} \hat{\mathbf{a}}_{m-1}^1(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{m,1,2}^b(n-1)\,\mathbf{J}_{m+1}^2 \begin{bmatrix} \hat{\mathbf{a}}_{m-1}^2(n) \\ \mathbf{0} \end{bmatrix}$$

(72)

$$\hat{\mathbf{c}}_m^2(n) = \begin{bmatrix} \mathbf{0} \\ \hat{\mathbf{c}}_{m-1}^2(n-1) \end{bmatrix} + \Gamma_{m,2,1}^b(n-1)\,\mathbf{J}_{m+1}^1$$