# Adaptive multichannel sequential lattice prediction filtering method for ARMA spectrum estimation in subbands

- Mehmet Tahir Ozden

**2013**:9

https://doi.org/10.1186/1687-6180-2013-9

© Ozden; licensee Springer. 2013

**Received: **18 October 2012

**Accepted: **4 January 2013

**Published: **28 January 2013

## Abstract

### Abstract

A multichannel characterization for autoregressive moving average (ARMA) spectrum estimation in subbands is considered in this article. The fullband ARMA spectrum estimation can be realized in two-channels as a special form of this characterization. A complete orthogonalization of input multichannel data is accomplished using a modified form of sequential processing multichannel lattice stages. Matrix operations are avoided, only scalar operations are used, and a multichannel ARMA prediction filter with a highly modular and suitable structure for VLSI implementations is achieved. Lattice reflection coefficients for autoregressive (AR) and moving average (MA) parts are simultaneously computed. These coefficients are then converted to process parameters using a newly developed Levinson–Durbin type multichannel conversion algorithm. Hence, a novel method for spectrum estimation in subbands as well as in fullband is developed. The computational complexity is given in terms of model order parameters, and comparisons with the complexities of nonparametric methods are provided. In addition, the performance is visually and statistically compared against those of the nonparametric methods under both stationary and nonstationary conditions.

### Keywords

Parametric modeling, Subband spectrum estimation and sensing, Frequency estimation and tracking, Radar and speech analysis

## 1 Introduction

While parametric or model-based methods are used extensively for high-resolution spectrum estimation, these methods perform poorly when the SNR and the spacing between frequencies are small. In many cases, the input noise is assumed to be white; if this is not the case, the methods can be adapted to colored noise, provided that its statistics are known. However, such statistics may not be known in many cases, and instead the noise may incorrectly be assumed white. Such shortcomings can be overcome by applying subband decomposition methods in spectrum estimation.

It was shown by Rao and Pearlman[1] that the well-known AR modeling is a promising method for spectrum estimation in subbands: they proved that *p*th-order prediction from subbands is superior to *p*th-order prediction in the fullband when *p* is finite, and that subband decomposition of a source results in a whitening of the composite subband spectrum. The equivalence of linear prediction and AR spectrum estimation was then exploited to show that AR spectrum estimation from subbands offers a gain over fullband AR spectrum estimation. Unfortunately, new problems such as spectral overlapping and an increase in the variance of the estimated parameters appear. The first disadvantage was addressed in a conference paper by Bonacci et al.[2], where nonreal-time procedures were proposed to perform subband spectral estimation without discontinuities or aliasing at subband borders. However, this procedure is appropriate for a uniform filter bank, even though methods applicable to any kind of filter bank are desired. In another conference paper, Bonacci et al.[3] proposed to tackle the second drawback with a subband multichannel autoregressive spectral estimation method, which was also intended for an off-line implementation.

Another popular model, the autoregressive moving average (ARMA) model, which includes the AR and MA models as special cases, has the input–output relationship given by

$$y\left(n\right)=-\sum_{\ell =1}^{p}{\widehat{a}}_{\ell}^{1}y\left(n-\ell \right)+x\left(n\right)+\sum_{j=1}^{q}{\widehat{a}}_{j}^{2}x\left(n-j\right)$$

for an ARMA(*p*,*q*) process. Here, *x*(*n*) is zero-mean white noise with variance ${\sigma}_{x}^{2}$, and ${\widehat{a}}_{\ell}^{1}$ and ${\widehat{a}}_{j}^{2}$, respectively, represent the *ℓ*th and *j*th coefficients related to the AR and MA parts. Such processes arise in various applications such as modeling radar signals[4, 5] or speech signals[6, 7], where spectral zeros as well as poles are often present due to the physical mechanism generating the data. In addition, processes that are purely autoregressive are often transformed into ARMA(*p*,*p*) processes by the addition of measurement noise, and sinusoids in noise in particular are known to obey the degenerate ARMA equation[8, 9]. Even though an ARMA process can be represented by a unique AR model of generally infinite order, the ARMA modeling approach often leads to more efficient implementations. A hierarchical ARMA modeling method for classifying high-resolution radar signals at multiple scales was presented in[10], where it was shown that the radar signal at a different scale obeys an ARMA process if it is an ARMA process at the observed scale.
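As a concrete illustration, the ARMA(*p*,*q*) recursion can be simulated directly. The sketch below assumes the sign convention *y*(*n*) = −Σ*a*_ℓ *y*(*n*−*ℓ*) + *x*(*n*) + Σ*b*_j *x*(*n*−*j*); the coefficient values are arbitrary examples, not taken from the article.

```python
import numpy as np

def simulate_arma(a, b, n_samples, sigma=1.0, seed=0):
    """Generate an ARMA(p, q) process
        y(n) = -sum_l a[l] y(n-1-l) + x(n) + sum_j b[j] x(n-1-j),
    driven by zero-mean white noise x(n) with variance sigma**2.
    The arrays a and b play the roles of the AR and MA coefficient
    sets in the text (example values only)."""
    rng = np.random.default_rng(seed)
    p, q = len(a), len(b)
    x = sigma * rng.standard_normal(n_samples)
    y = np.zeros(n_samples)
    for n in range(n_samples):
        ar = sum(a[l] * y[n - 1 - l] for l in range(min(p, n)))
        ma = sum(b[j] * x[n - 1 - j] for j in range(min(q, n)))
        y[n] = -ar + x[n] + ma
    return y

# ARMA(2,1) example: stable AR poles, one spectral zero
y = simulate_arma(a=[-1.5, 0.7], b=[0.5], n_samples=2048)
```

The AR part shapes the spectral peaks (poles) while the MA part contributes the zeros, which is why purely autoregressive fits of such data need a much higher order.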

ARMA model-based applications such as the classification of high-resolution radar signatures using multi-scale features, and lattice speech analysis/synthesis, were reported in[11, 12], respectively. As a consequence of the degenerate ARMA modeling of sinusoids in noise, adaptive multiple frequency tracking, previously considered in[13–15], has gained momentum recently[16], and is of great interest in communications[17], biomedical engineering[18], speech processing[19], and power systems[20, 21]. Another recent consequence of the degenerate ARMA modeling of sinusoids in noise is related to spectrum sensing for cognitive radios[22, 23], where the primary task is to dynamically explore the radio spectrum for the existence of signals (sinusoids) so as to determine portions of the frequency band that may be used for radio transmission. In view of these developments, we think that methods of subband spectrum estimation based on ARMA modeling, with possible extensions to fullband spectrum estimation, can provide good alternatives in radar and speech classification, adaptive multiple frequency tracking, and spectrum sensing for cognitive radio applications.

In this article, we propose a novel method that relies on estimation of the driving noise in subbands. Even though methods based on estimation of the driving noise were previously proposed for the fullband[24], the important difference of our method is that we first transform the subband ARMA filtering problem into a multichannel AR filtering problem by embedding the subband ARMA processes into multichannel AR processes, and then achieve a complete modified Gram-Schmidt orthogonalization of the input multichannel signal using a modified version of the sequential processing multichannel lattice stages (SPMLSs)[25]. A number of alternatives for adaptive multichannel processing were proposed after the introduction of SPMLSs in[25]. Two such alternatives are the modular lattice architectures proposed by Lev-Ari[26], and Glentis and Kalouptsidis[27]. While the architecture in[26] is suitable for equal channel orders and involves more computations than SPMLSs, neither of these architectures is preferable for sequential processing. Another alternative is the *QR* decomposition-based lattice approach in[28], which is also for equal channel orders, and was later extended to unequal channel orders by Yang[29]. Newer versions of multichannel *QR* algorithms based on orthogonal Givens rotations for equal as well as unequal channel orders were later presented by Rontogiannis and Theodoridis[30]. Recently, an array-based *QR* multichannel lattice filter that extends the correspondence between recursive least-squares update equations and Kalman filter equations to the multichannel lattice case was presented by Gomes and Barroso[31]. In addition, transversal-type algorithms such as[32, 33] were proposed due to their lower complexity and direct relation to channel coefficients. However, these algorithms generally require the implementation of stabilization techniques, and their structure is less regular.
The principle of modular decomposition appears to be the implicit basis of all these adaptive multichannel processing techniques, and provides for the scalar-only operations. In *QR* decomposition approaches, the *Q* matrix is implicitly formed and then used to compute the *R* matrix, whereas in the Gram-Schmidt approach, the inverse of *R* is implicitly formed and then used to compute the *Q* matrix. As a consequence of this fact, Regalia and Bellanger[34] showed that there exists a duality between *QR* and lattice methods, and the possibility of combining elements of both approaches to obtain new hybrid algorithms. With respect to developing such hybrid algorithms, Ling[35] showed that an orthogonal Givens rotation-based algorithm algebraically coincides with the recursive-modified Gram-Schmidt-based lattice algorithm in[36].

In accordance with this perspective in multichannel signal processing, as SPMLSs already have modularity, order recursiveness, regularity, simplicity, sequentiality, and equal as well as unequal channel processing capabilities, we modify them in order to improve their numerical performance by using the error-feedback formula of the recursive-modified Gram-Schmidt algorithm[35, 36] in the processing cells. Thus, the complete orthogonalization of the multichannel input data and the sequential nature of the modified SPMLSs make it possible to feed back the delayed forward prediction error signals to represent the unknown input noise signals of the original ARMA processes. Although we introduced the complete orthogonalization concept previously in linear and nonlinear adaptive decision feedback equalization frameworks in[37, 38], its application to the adaptive spectrum estimation problem in subbands as well as in the fullband results in novel implementations, in particular the development of a new Levinson–Durbin type conversion algorithm for the modified SPMLSs in order to compute ARMA process parameters from lattice reflection coefficients. To the best of the author's knowledge, this particular multichannel lattice prediction filter structure for ARMA spectrum estimation in subbands or in the fullband and the new Levinson–Durbin type multichannel conversion algorithm do not exist in the literature.

A two-subband ARMA spectrum estimation problem is considered in this article for ease of explanation and because of space limitations in developing the method. However, it is straightforward to apply the method to any number of subbands, and to AR spectrum estimation in subbands. The method is appropriate for uniform and nonuniform filter bank realizations, and aliasing problems due to spectral overlapping in adjacent channels are also addressed. A highly modular, regular, time- and order-recursive, recursive least squares (RLS) ARMA parameter estimator with inherently good numerical properties, suitable for VLSI and recent programmable system-on-chip implementations[39], is designed, and AR and MA parameters are found simultaneously. With these properties, the method is applicable to both off-line and on-line implementations; in particular, it is possible to monitor the forward prediction error signal, start the parameter estimation with a fullband AR(*p*), ARMA(*p*,*q*), or ARMA(*p*,*p*) process, and, if performance requirements are not met, switch to subband ARMA(*p*_{k},*q*_{k}) or ARMA(*p*_{k},*p*_{k}) realizations. Consequently, it dynamically extends the lattice parametrization of the fullband spectrum into subbands, and thereby arises as a useful and practical method for radar signal analysis/classification, speech analysis/synthesis, adaptive multiple frequency tracking, and cognitive radio spectrum sensing tasks.
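For readers who want to experiment with the subband setting, a two-channel analysis filter bank of the kind the method presupposes can be sketched as below. The halfband windowed-sinc filters here are illustrative placeholders, not the filter bank used in the article; any uniform or nonuniform analysis bank could be substituted.

```python
import numpy as np

def two_channel_analysis(y, num_taps=31):
    """Split a fullband signal y(n) into two subband signals y1(n), y2(n)
    by lowpass/highpass halfband filtering followed by decimation by 2.
    The windowed-sinc prototype below is a simple stand-in filter bank."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    # ideal halfband lowpass (cutoff pi/2) shaped by a Hamming window
    h_low = 0.5 * np.sinc(n / 2) * np.hamming(num_taps)
    # modulate by (-1)^n to obtain the complementary highpass filter
    h_high = h_low * np.cos(np.pi * np.arange(num_taps))
    y1 = np.convolve(y, h_low)[::2]    # low subband, decimated
    y2 = np.convolve(y, h_high)[::2]   # high subband, decimated
    return y1, y2
```

Each decimated output would then feed one channel of the multichannel ARMA prediction filter described in Section 2.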

An adaptive FIR filtering approach to spectral estimation, referred to as amplitude and phase estimation of a sinusoid (APES) and having applications to radar target recognition, was proposed by Li and Stoica[40], and the adaptive FIR filtering approach to the Capon method was also discussed by Stoica and Moses[41]. Moreover, the APES method has been extended to array processing by Yardibi et al.[42], and named the iterative adaptive approach for amplitude and phase estimation (IAA-APES). An FIR filtering reinterpretation of Thomson's multitaper method[43, 44], with applications to spectrum sensing for cognitive radio, was also presented by Farhang-Boroujeny[45]. Recently, computationally efficient versions of the adaptive Capon and APES methods, and of the IAA method, have been proposed in[46, 47], respectively. In this article, we compare the complexity and performance of our method with those of the Periodogram, multitaper, Capon, APES, and IAA methods, and show that our method is competitive in terms of complexity and performance.

The remainder of this article is organized as follows. In Section 2, we present the development of the new multichannel ARMA lattice prediction filter using the modified SPMLSs. In Section 3, we develop the new Levinson–Durbin type multichannel conversion algorithm for the modified SPMLSs, and relate lattice parameters to process parameters. The spectrum estimation expression in two subbands is given in Section 4. Computational complexity is treated in Section 5. Section 6 is concerned with the experimental results. Finally, Section 7 discusses the results and concludes the article. The following notation is used in this article. (∙)^{∗} represents the complex conjugate of (∙). (∙)^{T} and (∙)^{H} stand for the transpose and the Hermitian transpose of (∙), respectively. The variables *m*, *i*, and *n* are global while all other variables are local. The variable *m* represents the stage number while *n* and *i* are the time indexes related to data and coefficients, respectively, until we equate them in Section 3 to have a single time index.

## 2 Adaptive multichannel ARMA lattice prediction filtering

### 2.1 Multichannel prediction problem

An illustration of the adaptive multichannel ARMA prediction filtering in subbands for the two-subband case is presented in Figure 1. Therein, *y*(*n*) represents the input fullband signal while *y*_{1}(*n*) and *y*_{2}(*n*) stand for the input subband signals. In adaptive multichannel ARMA prediction filtering, the objective is to find an exponentially windowed LS solution for the AR and MA coefficients of the *k*th forward prediction filter that minimizes each of the two cost functions

$${J}^{k}\left(i\right)=\sum_{n=1}^{i}{\lambda}^{i-n}{\left|{f}_{{p}_{k}}^{k}\left(n\right)\right|}^{2}$$

at each time instant *i*, for *k* = 1,2, where 0 < *λ* ≤ 1 is the exponential weighting (forgetting) factor. The forward prediction error ${f}_{{p}_{k}}^{k}\left(n\right)$ in this expression is defined as

$${f}_{{p}_{k}}^{k}\left(n\right)={d}^{k}\left(n\right)-{\widehat{d}}_{i}^{k}\left(n\right)$$

and the *k*th forward prediction filter output, ${\widehat{d}}_{i}^{k}\left(n\right)$, which is an estimate of the *k*th desired signal, *d*^{k}(*n*) = *y*_{k}(*n*), is given by

Herein, *p*_{k} and *q*_{k} denote the orders of the (*p*_{k},*q*_{k}) prediction error filter associated with the *k*th subband, and ${\widehat{u}}_{k}\left(n\right)$ is the estimate of the *k*th ARMA process input signal. The estimated *k*th ARMA process input signal, ${\widehat{u}}_{k}\left(n\right)$, is obtained by delaying and feeding back the *p*_{k}th-order forward prediction error, ${\widehat{u}}_{k}\left(n\right)={f}_{{p}_{k}}^{k}\left(n-1\right)$. Hence, the input vector to the *k*th ARMA filter at time instant *n*, ${\stackrel{~}{\mathbf{y}}}_{k}\left(n\right)$, and the corresponding coefficient vector, ${\stackrel{~}{\mathbf{a}}}^{k}\left(i\right)$, at time instant *i*, are defined as

and

respectively. Herein, ${\stackrel{~}{a}}_{1,j}^{k}\left(i\right)$ and ${\stackrel{~}{a}}_{2,j}^{k}\left(i\right)$, respectively, represent the *j*th coefficient related to the AR and MA parts of the forward prediction filter for the *k*th subband at time instant *i*. It is assumed, without loss of generality, that *p*_{k} ≥ *q*_{k}. The *p*_{k} = *q*_{k} case corresponds to the prediction filter for an ARMA(*p*_{k},*p*_{k}) process, while the *p*_{k} > *q*_{k} prediction filter is for a general ARMA(*p*_{k},*q*_{k}) process. Note that an ARMA backward prediction can be performed for the desired signal, *d*^{k}(*n*) = *y*_{k}(*n* − *p*_{k}), and the prediction filter in that case would use the reversed and conjugated forward prediction filter coefficients, which are defined in the backward prediction error coefficient vector as

where${\stackrel{~}{c}}_{1,j}^{k}\left(i\right)$ and${\stackrel{~}{c}}_{2,j}^{k}\left(i\right)$ are, respectively, defined as the *j* th coefficient related to the AR and MA parts of the backward prediction filter for the *k* th subband at time instant *i*.
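The driving-noise estimation idea above, replacing the unknown ARMA process input with the delayed forward prediction error ${\widehat{u}}_{k}\left(n\right)={f}_{{p}_{k}}^{k}\left(n-1\right)$, can be illustrated with a deliberately simplified one-subband predictor. The article's filter is an exact RLS lattice; the sketch below substitutes a plain NLMS transversal update purely for brevity, so it is a structural analogue, not the proposed algorithm.

```python
import numpy as np

def arma_predict_nlms(y, p, q, mu=0.5, eps=1e-6):
    """Transversal adaptive ARMA(p, q) one-step predictor.
    The unknown process input u(n) is replaced by the delayed forward
    prediction error f(n-1), as in the text; NLMS replaces the RLS
    lattice of the article for the sake of a short illustration."""
    w = np.zeros(p + q)        # [AR coefficients | MA coefficients]
    u = np.zeros(q)            # history of estimated process inputs
    f = np.zeros(len(y))       # forward prediction errors
    for n in range(p, len(y)):
        reg = np.concatenate([y[n - p:n][::-1], u])  # regressor vector
        f[n] = y[n] - w @ reg                        # prediction error
        w += mu * f[n] * reg / (eps + reg @ reg)     # NLMS update
        if q > 0:              # feed back the delayed prediction error
            u = np.concatenate([[f[n]], u[:-1]])
    return w, f
```

On a well-modeled process the error sequence `f` approaches the (unobservable) white driving noise, which is exactly why feeding it back is a sensible surrogate for *u*(*n*).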

Consequently, the main concern of the exponentially weighted LS problem under consideration is to find, at each time *i*, the *k* th optimal coefficient vector,${\stackrel{~}{\mathbf{a}}}^{k}\left(i\right)$ that would minimize the cost function

The *k* th optimal coefficient vector related to the *k* th subband filter

is found by differentiating *J*^{k}(*i*) with respect to ${\stackrel{~}{\mathbf{a}}}^{k}\left(i\right)$, setting the derivative to zero, and solving for ${\stackrel{~}{\mathbf{a}}}^{k}\left(i\right)$, where

and

### 2.2 Sequential lattice orthogonalization

We sequentially solve the exponentially weighted *LS* optimization problem by taking into consideration each of the sections of the prediction filter separately; we therefore assume that the filter is comprised of three cascaded filters, which are the two-channel, three-channel, and four-channel lattice sections, and we use a different order index for each section while using *m* to indicate a stage in the whole filter. We also assume *p*_{1} = *p*_{2} for ease of explanation, without loss of generality.

In order to sequentially solve the exponentially weighted *LS* optimization problem under consideration, we first organize the elements of the input signal vectors **y**_{1}(*n*) = [*y*_{1}(*n*),…,*y*_{1}(*n* − *ℓ*)]^{T} and **y**_{2}(*n*) = [*y*_{2}(*n*),…,*y*_{2}(*n* − *ℓ*)]^{T} according to the natural ordering of SPMLSs as

and input them to the two-channel stages, for which the stage number (*m*) has a range of values given by 0 < *m* ≤ (*p*_{1} − *q*_{1}). Accordingly, we redefine Equations (10) and (11) using this new data vector as follows

and

where *k* = 1,2. The orthogonalization of data using SPMLSs corresponds to the transformation of (13) and (14) into

where ${\mathit{\Omega}}_{\ell}^{f}\left(i\right)$ is the 2*ℓ* × 2*ℓ* lower triangular transformation matrix for forward prediction, and is sequentially realized stage-by-stage using the 2 × 2 lower triangular transformation matrices ${\mathbf{L}}_{\ell}^{f}\left(i\right)$ at time instant *i*; ${\widehat{\kappa}}_{\ell}^{f}\left(i\right)$ is the reflection coefficient computed at the single circular cell in the triangular-shaped self-orthogonalization processor of the *ℓ*th two-channel SPMLS. Then, the forward lattice predictor coefficients are computed using the *k*th row of the 2 × 2*ℓ* lattice forward prediction reflection coefficient matrix ${\mathbf{\Theta}}_{\ell}^{f}\left(i\right)$, which is also sequentially implemented stage-by-stage by means of the 2 × 2 forward prediction reflection coefficient matrices

in which ${\bar{\kappa}}_{\ell ,k,j}^{f}\left(i\right)$ is the *j*th reflection coefficient related to the forward prediction of the *k*th channel signal, computed at the (*k*,*j*)th single circular cell of the square-shaped reference-orthogonalization processor related to forward prediction at the *ℓ*th two-channel SPMLS. Note that the matrix inversion operation in Equation (9) is transformed into a simple scalar inversion operation in (18) due to the diagonal nature of ${\mathbf{D}}_{\ell +1}^{f}\left(i\right)$. The backward prediction counterpart of this optimization problem is similarly solved using 2 × 2 lower triangular transformation matrices, ${\mathbf{L}}_{\ell}^{b}\left(i\right)$, and 2 × 2 lattice backward prediction reflection coefficient matrices, ${\mathit{\Delta}}_{\ell}^{b}\left(i\right)$.
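The role of the lower triangular transformations can be illustrated with a block (batch) modified Gram-Schmidt pass over a set of channel signals. The article's processors are time-recursive with error feedback, so the following is only a structural analogue of what each self-orthogonalization processor accomplishes.

```python
import numpy as np

def mgs_orthogonalize(channels):
    """Block modified Gram-Schmidt self-orthogonalization of channel
    signals (rows of the input).  Returns the orthogonalized signals
    and the implicitly formed lower triangular transformation L such
    that X_out = L @ X_in, mirroring the role of the matrices L in the
    text (batch form; the article's version is time-recursive)."""
    X = np.array(channels, dtype=float)   # copy; rows are channels
    k = X.shape[0]
    L = np.eye(k)
    for i in range(k):
        for j in range(i + 1, k):
            g = (X[j] @ X[i]) / (X[i] @ X[i])  # projection coefficient
            X[j] = X[j] - g * X[i]             # deflate channel j
            L[j] -= g * L[i]                   # accumulate the transform
    return X, L
```

Because each deflation is a unit lower triangular row operation, the accumulated transform stays lower triangular with unit diagonal, which is what keeps the stage-by-stage realization scalar-only.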

The delayed and fed back forward prediction error signal related to the first subband, ${\widehat{u}}_{1}\left(n\right)$, is taken into the orthogonalization process at the (*p*_{1} − *q*_{1} + 1)th stage, as the third channel. Accordingly, we expand the optimization problem by organizing the elements of the input data vectors **y**_{1}(*n*) = [*y*_{1}(*n*),…,*y*_{1}(*n* − *α*)]^{T}, **y**_{2}(*n*) = [*y*_{2}(*n*),…,*y*_{2}(*n* − *α*)]^{T}, and ${\widehat{\mathbf{u}}}_{1}\left(n\right)={\left[{\widehat{u}}_{1}\left(n\right),\dots ,{\widehat{u}}_{1}\left(n-\alpha \right)\right]}^{T}$ as follows:

and input them to the three-channel lattice section, where the stage number (*m*) takes values in the range (*p*_{1} − *q*_{1}) < *m* ≤ (*p*_{2} − *q*_{2}). Subsequently, we solve the optimization problem in (18) once again with the new input vector, in which case ${\mathit{\Omega}}_{\alpha}^{f}\left(i\right)$ and ${\mathbf{\Theta}}_{\alpha}^{f}\left(i\right)$ are the 3*α* × 3*α* lower triangular transformation and the 3 × 3*α* forward lattice prediction coefficient matrices, respectively. ${\mathit{\Omega}}_{\alpha}^{f}\left(i\right)$ is computed sequentially by means of 3 × 3 lower triangular transformation matrices, ${\mathbf{L}}_{\alpha}^{f}\left(i\right)$, and ${\mathbf{\Theta}}_{\alpha}^{f}\left(i\right)$ is similarly realized stage-by-stage making use of 3 × 3 forward prediction coefficient matrices, ${\mathit{\Delta}}_{\alpha}^{f}\left(i\right)$, at time instant *i*. Note that, since the delayed and fed-back signal is considered to constitute a new channel in the multichannel sequential lattice filtering, we have three desired signals at this point, *d*^{k}(*n*), where *k* = 1,2,3, one of which did not exist in the optimization problem stated in Section 2.1; this new desired signal, *d*^{3}(*n*), is related to the MA part of the first subband ARMA modeling.

Finally, the elements of the input data vectors **y**_{1}(*n*) = [*y*_{1}(*n*),…,*y*_{1}(*n* − *ν*)]^{T}, **y**_{2}(*n*) = [*y*_{2}(*n*),…,*y*_{2}(*n* − *ν*)]^{T}, ${\widehat{\mathbf{u}}}_{1}\left(n\right)={\left[{\widehat{u}}_{1}\left(n\right),\dots ,{\widehat{u}}_{1}\left(n-\nu \right)\right]}^{T}$, and ${\widehat{\mathbf{u}}}_{2}\left(n\right)={\left[{\widehat{u}}_{2}\left(n\right),\dots ,{\widehat{u}}_{2}\left(n-\nu \right)\right]}^{T}$ are organized as

where the stage number (*m*) is in the range (*p*_{2} − *q*_{2}) < *m* ≤ *p*_{2} due to four-channel processing. Similar to the two-channel and three-channel cases, we solve the optimization problem in (18) using the new data vector in Equation (21), in which case ${\mathit{\Omega}}_{\nu}^{f}\left(i\right)$ and ${\mathbf{\Theta}}_{\nu}^{f}\left(i\right)$ are the 4*ν* × 4*ν* lower triangular transformation and 4 × 4*ν* forward lattice prediction coefficient matrices at time instant *i*, respectively. As in the previous cases, these matrices are computed stage-by-stage by the use of 4 × 4 lower triangular transformation matrices, ${\mathbf{L}}_{\nu}^{f}\left(i\right)$, and 4 × 4 forward prediction coefficient matrices, ${\mathit{\Delta}}_{\nu}^{f}\left(i\right)$, at time instant *i*. As the second delayed and fed-back signal is also considered as a new channel in the multichannel sequential lattice filtering, hereafter we have four desired signals, *d*^{k}(*n*), where *k* = 1,2,3,4, and this fourth desired signal, *d*^{4}(*n*), is associated with the MA part of the second subband ARMA modeling.

### 2.3 Matrix visualization

For the purpose of matrix visualization, we organize the elements of the input vectors **y**_{1}(*n*) = [*y*_{1}(*n*),…,*y*_{1}(*n* − 8)]^{T}, **y**_{2}(*n*) = [*y*_{2}(*n*),…,*y*_{2}(*n* − 8)]^{T}, ${\widehat{\mathbf{u}}}_{1}\left(n\right)={\left[{\widehat{u}}_{1}\left(n\right),{\widehat{u}}_{1}\left(n-1\right),\dots ,{\widehat{u}}_{1}\left(n-5\right)\right]}^{T}$, and ${\widehat{\mathbf{u}}}_{2}\left(n\right)={\left[{\widehat{u}}_{2}\left(n\right),{\widehat{u}}_{2}\left(n-1\right),\dots ,{\widehat{u}}_{2}\left(n-2\right)\right]}^{T}$ as columns of a matrix,

Therein, the *k*th desired signal, *d*^{k}(*n*), is sequentially predicted using self-orthogonalized and delayed backward prediction error signals as follows:

Here, the first and second summations represent the prediction accomplished by the two-channel and three-channel sections, respectively, and the fourth summation is connected with the four-channel prediction section. In each section, ${\bar{\kappa}}_{m,k,j}^{f}\left(i\right)$ represents the *j*th forward prediction reflection coefficient at the *m*th stage related to the *k*th channel, as defined in the previous subsection, and ${\widehat{b}}_{m-1}^{j}\left(n\right)$ represents the *j*th element of the self-orthogonalized backward prediction error signal vector, ${\widehat{\mathbf{b}}}_{m-1}\left(n\right)$, at the input of the *m*th stage. The self-orthogonalized backward prediction error vector, ${\widehat{\mathbf{b}}}_{m-1}\left(n\right)$, is produced by the lower triangular transformation of the input backward prediction error vector, **b**_{m−1}(*n*), using ${\mathbf{L}}_{m}^{f}\left(n\right)$, and this operation is accomplished at the triangular-shaped self-orthogonalization processor (related to forward prediction) of the *m*th SPMLS. Note that the sizes of the vectors ${\widehat{\mathbf{b}}}_{m-1}\left(n\right)$ and **b**_{m−1}(*n*), and of the matrix ${\mathbf{L}}_{m}^{f}\left(n\right)$, in the different sections of the proposed lattice filter are as follows: 2 × 1 and 2 × 2 in the two-channel section, 3 × 1 and 3 × 3 in the three-channel section, and 4 × 1 and 4 × 4 in the four-channel section, respectively.

We would also like to point out that a lattice filter for fullband ARMA spectrum estimation is a special form of the two-subband implementation, and therefore it can similarly be realized using sequential processing one-channel and two-channel lattice stages, as illustrated in Figure 4 for an ARMA(10,2) implementation.

## 3 Conversion of lattice coefficients to process parameters

In order to develop the Levinson–Durbin type conversion algorithm for the two-channel section, we consider the input signal samples **y**_{1}(*n*) = [*y*_{1}(*n*),…,*y*_{1}(*n* − *ℓ* + 1)]^{T} and **y**_{2}(*n*) = [*y*_{2}(*n*),…,*y*_{2}(*n* − *ℓ* + 1)]^{T}, with 0 < *m* ≤ (*p*_{1} − *q*_{1}). The corresponding forward and backward prediction error coefficient matrices for the *ℓ*th-order transversal filter for the *k*th channel are defined as

where *k* = 1,2 due to two-channel lattice processing, and ${\widehat{a}}_{0}^{k}\left(i\right)={\widehat{c}}_{0}^{k}\left(i\right)=1.0$. Since the signal time shifting and ordering properties of SPMLSs when expressed in matrix form as in Equation (12) are different from the organization of input signal samples in matrix form as in Equation (24), we use (2*ℓ* + 1) × (2*ℓ* + 1) shuffling matrices, ${\mathbf{J}}_{\ell +1}^{1}$ for the first channel and ${\mathbf{J}}_{\ell +1}^{2}$ for the second channel, to reorder the elements of the coefficient matrices, ${\widehat{\mathbf{a}}}_{\ell}^{1H}\left(i\right),{\widehat{\mathbf{a}}}_{\ell}^{2H}\left(i\right)$ and ${\widehat{\mathbf{c}}}_{\ell}^{1H}\left(i\right),{\widehat{\mathbf{c}}}_{\ell}^{2H}\left(i\right)$, according to the sample ordering of SPMLSs. Therefore, the forward and backward prediction errors at the end of the observation interval *n* = *i* at the output of the general *ℓ*th-order filters with transversal structure can be stated as

where **0** is a 1 × (*ℓ* + 1) zero matrix. Then, we can express the (*ℓ*−1)th-order prediction errors as

Note that the size of each coefficient matrix increases by two when the order of the prediction filter increases from *ℓ* − 1 to *ℓ*, and **0** is a 1 × (*ℓ* + 1) zero matrix as before. Subsequently, we define the *ℓ*th-order prediction errors in terms of lattice parameters and the (*ℓ*−1)th-order prediction errors as follows

The *ℓ*th-order prediction error matrices in Equations (27) and (28), and the (*ℓ*−1)th-order prediction error matrices in Equations (29) and (30), are substituted in the *ℓ*th-order prediction error expressions in (35) and (36) so as to obtain the following pairs of order updates

where **0** is a 2 × 1 zero matrix. The three-channel section starts with the incorporation of the third channel (${\widehat{u}}_{1}\left(n\right)$) as the new channel at the (*p*_{1} − *q*_{1} + 1)th stage. In order to develop the Levinson–Durbin algorithm for this section, we assume that the three-channel section is a separate filter, thereby considering the input signal samples to the three-channel section as follows

where **y**_{1}(*n*) = [*y*_{1}(*n*),…,*y*_{1}(*n* − *α* + 1)]^{T}, **y**_{2}(*n*) = [*y*_{2}(*n*),…,*y*_{2}(*n* − *α* + 1)]^{T}, and ${\widehat{\mathbf{u}}}_{1}\left(n\right)={\left[{\widehat{u}}_{1}\left(n\right),\dots ,{\widehat{u}}_{1}\left(n-\alpha +1\right)\right]}^{T}$. Correspondingly, the forward and backward prediction error coefficient matrices for the *α*th-order transversal filtering are defined as

where *k* = 1,2,3 due to three-channel processing. Then, the prediction filtering continues with three-channel lattice stages for (*p*_{1} − *q*_{1}) < *m* ≤ (*p*_{2} − *q*_{2}). The Levinson–Durbin recursions for the three-channel section can be developed similarly to the two-channel section by establishing the mathematical link between transversal and lattice filter coefficients. Since the organization of signal samples in Equation (41) is different from the ordering of signal samples entering the three-channel SPMLSs in (20), we use (3*α* + 1) × (3*α* + 1) shuffling matrices, ${\mathbf{J}}_{\alpha +1}^{1}$ for the first channel, ${\mathbf{J}}_{\alpha +1}^{2}$ for the second channel, and ${\mathbf{J}}_{\alpha +1}^{3}$ for the third channel, to reorder the elements of the coefficient matrices, ${\widehat{\mathbf{a}}}_{\alpha}^{1H}\left(n\right),{\widehat{\mathbf{a}}}_{\alpha}^{2H}\left(n\right),{\widehat{\mathbf{a}}}_{\alpha}^{3H}\left(n\right)$ and ${\widehat{\mathbf{c}}}_{\alpha}^{1H}\left(n\right),{\widehat{\mathbf{c}}}_{\alpha}^{2H}\left(n\right),{\widehat{\mathbf{c}}}_{\alpha}^{3H}\left(n\right)$, according to the sample ordering of SPMLSs. Similar to Equations (27) and (28) in the two-channel case, the forward and backward prediction errors in the three-channel case at the output of the general *α*th-order filter with transversal structure are expressed as

where **0** is a 1 × (*α* + 1) zero matrix in this case. We can then express the (*α*−1)th prediction errors as

Note that the size of each coefficient matrix in the three-channel case increases by three when the order of the prediction filter increases from *α* − 1 to *α*. Similar to Equations (35) and (36) in the two-channel case, the lattice prediction errors for the *α*th three-channel stage can be expressed in compact form with the following equations

The *α*th-order prediction error matrices in Equations (44) and (45), and the (*α*−1)th-order prediction error matrices in Equations (46) and (47), are subsequently substituted in the *α*th-order prediction error expressions in (48) and (49) so that the following pairs of order updates are produced

where **0** is a 3 × 1 zero matrix. Finally, the fourth channel (${\widehat{u}}_{2}\left(n\right)$), which represents the fed back and delayed signal related to the second subband, is taken into the orthogonalization process at the (*p*_{2} − *q*_{2} + 1)th stage, and the prediction filtering continues with four-channel lattice stages through (*p*_{2} − *q*_{2}) < *m* ≤ *p*_{2}. In order to develop the Levinson–Durbin recursions for this section, we define the forward and backward prediction error coefficient matrices for the *ν*th-order transversal filtering as

where *k* = 1,2,3,4 due to four-channel lattice processing, and we also visualize, as before, the following organization of the elements of the input vectors **y**_{1}(*n*) = [*y*_{1}(*n*),…,*y*_{1}(*n* − *ν* + 1)]^{T}, **y**_{2}(*n*) = [*y*_{2}(*n*),…,*y*_{2}(*n* − *ν* + 1)]^{T}, ${\widehat{\mathbf{u}}}_{1}\left(n\right)={\left[{\widehat{u}}_{1}\left(n\right),\dots ,{\widehat{u}}_{1}\left(n-\nu +1\right)\right]}^{T}$, and ${\widehat{\mathbf{u}}}_{2}\left(n\right)={\left[{\widehat{u}}_{2}\left(n\right),\dots ,{\widehat{u}}_{2}\left(n-\nu +1\right)\right]}^{T}$:

As before, we use (4*ν* + 1) × (4*ν* + 1) shuffling matrices, ${\mathbf{J}}_{\nu +1}^{1}$ for the first channel, ${\mathbf{J}}_{\nu +1}^{2}$ for the second channel, ${\mathbf{J}}_{\nu +1}^{3}$ for the third channel, and ${\mathbf{J}}_{\nu +1}^{4}$ for the fourth channel, to reorder the elements of the coefficient matrices ${\widehat{\mathbf{a}}}_{\nu}^{1H}\left(n\right),{\widehat{\mathbf{a}}}_{\nu}^{2H}\left(n\right),{\widehat{\mathbf{a}}}_{\nu}^{3H}\left(n\right),{\widehat{\mathbf{a}}}_{\nu}^{4H}\left(n\right)$ and ${\widehat{\mathbf{c}}}_{\nu}^{1H}\left(n\right),{\widehat{\mathbf{c}}}_{\nu}^{2H}\left(n\right),{\widehat{\mathbf{c}}}_{\nu}^{3H}\left(n\right),{\widehat{\mathbf{c}}}_{\nu}^{4H}\left(n\right)$, according to the sample ordering of SPMLSs. Then, the development of the Levinson–Durbin recursions for this section unfolds as in the two- and three-channel sections. First, the *ν*th and the (*ν*−1)th-order forward and backward prediction errors are stated as the output of a transversal filter. Second, the prediction order update equations for the (*ν*−1)th and the *ν*th orders are expressed for a four-channel lattice section, and finally the *ν*th and the (*ν*−1)th-order forward and backward transversal filter prediction error expressions are substituted in the lattice prediction order update equations such that the following pairs of order updates are obtained

Note that **0** is a 4 × 1 zero matrix, and that the conversion of lattice parameters to process parameters started with two channels but ended with four channels due to sequential processing. The new Levinson–Durbin type conversion algorithm for fullband ARMA spectrum estimation can be similarly developed as a special case of the subband implementation. The lattice prediction filter for fullband ARMA spectrum estimation, which consists of one- and two-channel sections, is shown in Figure 4. The corresponding conversion algorithm can also be realized in two sections, as summarized in Subsection New Levinson-Durbin Type Conversion Algorithm for Two-Channel ARMA Lattice Prediction.

### 3.1 New Levinson-Durbin type conversion algorithm for two-channel ARMA lattice prediction

For the one-channel section (1 ≤ *m* ≤ (*p* − *q*)):

For the two-channel section ((*p* − *q*) < *m* ≤ *p*):

## 4 Spectrum estimation from subbands

where ${\widehat{\sigma}}_{{x}_{k}}^{2}$ represents the prediction error variance for the *k*th subband; the coefficients ${\bar{a}}_{1}^{1},\dots ,{\bar{a}}_{{p}_{1}}^{1}$ and ${\bar{a}}_{1}^{3},\dots ,{\bar{a}}_{{q}_{1}}^{3}$ are related to the AR and MA parts of the first subband ARMA spectrum, while the coefficients ${\bar{a}}_{1}^{2},\dots ,{\bar{a}}_{{p}_{2}}^{2}$ and ${\bar{a}}_{1}^{4},\dots ,{\bar{a}}_{{q}_{2}}^{4}$ are associated with the AR and MA parts of the second subband ARMA spectrum. Specifically, we determine the coefficients related to the first and second subbands in Equation (74) from the elements of the coefficient vectors in Equations (25), (42), and (56), using the coefficient selection rule given in Subsection Coefficient Selection Rule for Process Parameters in Four-Channel ARMA Lattice Prediction. Note that we omit the extra coefficients ${\widehat{a}}_{0}^{1}$ and ${\widehat{a}}_{0}^{2}$ in Equation (42), and ${\widehat{a}}_{0}^{1}$, ${\widehat{a}}_{0}^{2}$, and ${\widehat{a}}_{0}^{3}$ in Equation (56), as they appeared due to the separate-filter assumption for the sections of the ARMA lattice prediction filter. We also present the coefficient selection rule for the two-channel fullband case in Subsection Coefficient Selection Rule for Process Parameters in Two-Channel ARMA Lattice Prediction.
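Once the AR and MA coefficients and the prediction error variance of a subband have been selected, the corresponding spectrum can be evaluated on a frequency grid. The sketch below assumes the standard rational ARMA spectral form with monic AR and MA polynomials; the function names and grid size are illustrative, not taken from the article.

```python
import numpy as np

def _poly_on_circle(coeffs, w):
    """Evaluate sum_k coeffs[k] * e^{-j w k} for each frequency in w."""
    k = np.arange(len(coeffs))
    return np.exp(-1j * np.outer(w, k)) @ np.asarray(coeffs, dtype=complex)

def arma_spectrum(sigma2, ar, ma, n_freq=512):
    """ARMA spectrum S(w) = sigma2 |C(e^jw)|^2 / |A(e^jw)|^2 on [0, 2*pi)."""
    w = np.linspace(0.0, 2 * np.pi, n_freq, endpoint=False)
    A = _poly_on_circle([1.0] + list(ar), w)   # monic AR polynomial
    C = _poly_on_circle([1.0] + list(ma), w)   # monic MA polynomial
    return w, sigma2 * (np.abs(C) ** 2) / (np.abs(A) ** 2)
```

For an AR(*p*) subband, `ma` is simply left empty; the white-noise case (both lists empty) returns a flat spectrum equal to the prediction error variance.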

### 4.1 Coefficient selection rule for process parameters in four-channel ARMA lattice prediction

For *ℓ* ≤ (*p*_{1} − *q*_{1}):

For *ℓ* ≤ (*p*_{1} − *q*_{1}) − (*p*_{2} − *q*_{2}):

For *ℓ* ≤ *q*_{2}:

### 4.2 Coefficient selection rule for process parameters in two-channel ARMA lattice prediction

For *ℓ* ≤ (*p* − *q*):

For *ℓ* ≤ *q*:

A sinusoid frequency *w*_{0} in fullband is mapped into the frequency *w*_{ M } in subbands, where *M* is the number of subbands. On the other hand, knowing the sinusoid frequency *w*_{ M } at subbands, the frequency *w*_{0} can be obtained by

where *K* is the integer part of $\frac{M{w}_{0}}{2\pi}$.
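The mapping between fullband and subband frequencies can be sketched as follows, assuming the decimation-by-*M* aliasing relation implied by the definition of *K* above (i.e., *w*_{ M } = *M* *w*_{0} − 2*π* *K*); the function names are illustrative.

```python
import math

def fullband_to_subband(w0, M):
    """Map a fullband frequency w0 (rad/sample) to its subband alias w_M.

    Assumes w_M = M*w0 - 2*pi*K with K the integer part of M*w0/(2*pi),
    as stated in the text. Returns (w_M, K).
    """
    K = int(M * w0 // (2 * math.pi))
    return M * w0 - 2 * math.pi * K, K

def subband_to_fullband(wM, K, M):
    """Recover w0 from the subband frequency w_M, given the band index K."""
    return (wM + 2 * math.pi * K) / M
```

The round trip is exact: mapping a fullband frequency into its subband alias and back with the same *K* recovers the original frequency.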

## 5 Computational complexity

The computational complexity of the proposed lattice prediction filter is 10*p* + 16*q*, using the number of operations required for one-channel and two-channel sequential processing lattice stages [25], where "one operation is considered as one multiplication (division) and one addition". The Levinson–Durbin recursion requires (*p* − *q*)(*p* − *q* + 1) operations for the one-channel lattice sections and 4*q*(*q* + 1) operations for the two-channel lattice sections to compute the ARMA process parameters. Therefore, the number of operations required becomes *p*^{2} + 11*p* − 2*p* *q* + 5*q*^{2} + 19*q*, and this expression can be extended to the total number of required operations for an *M*-subband, multichannel implementation as
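As a consistency check, the closed-form count can be verified to equal the sum of the lattice filtering cost and the two Levinson–Durbin conversion costs. This is a sketch: the filtering term 10*p* + 16*q* is the value consistent with the quoted total, since the sentence stating it is partly damaged in this copy of the text.

```python
def lattice_ops(p, q):
    """ARMA lattice prediction filter cost (one op = one multiplication/
    division plus one addition); inferred as 10p + 16q from the totals."""
    return 10 * p + 16 * q

def levinson_durbin_ops(p, q):
    """Conversion cost: one-channel sections plus two-channel sections."""
    return (p - q) * (p - q + 1) + 4 * q * (q + 1)

def total_ops(p, q):
    """Closed-form total quoted in the text."""
    return p**2 + 11 * p - 2 * p * q + 5 * q**2 + 19 * q
```

Expanding `lattice_ops(p, q) + levinson_durbin_ops(p, q)` algebraically gives exactly `total_ops(p, q)` for all orders.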

Accordingly, we compare the total number of operations for the proposed method with those of adaptive transversal filtering and of the nonparametric methods, namely, the Periodogram, multitaper, Capon, APES, and IAA methods.

The computational complexity of a fast RLS transversal ARMA filter can also be expressed in the order of *p*_{ k } and *q*_{ k } [50]. When the fast Fourier transform (FFT) is utilized in implementing the Periodogram method, the required number of operations, which is the total number of real additions (subtractions) and multiplications (divisions), is *C*_{FFT}(*N*) = 4*N* log_{2}*N*, where *N* is the number of signal samples and is a power of 2 [51].

The computational complexity of the multitaper method is then approximately given by *C*_{ MT } ≈ *NW* · *C*_{FFT}(*N*), where *NW* and 2*W* are defined as the time-bandwidth product and the resolution bandwidth, respectively [41, 43]. The complexities of brute-force computations of the adaptive Capon and APES spectral estimators are given in [46] as ${C}_{\text{CAPON}}({N}_{f},K)\approx {N}_{f}^{3}+{N}_{f}^{2}K$ and ${C}_{\text{APES}}({N}_{f},{L}_{w},K)\approx {N}_{f}^{3}+{N}_{f}^{2}{L}_{w}^{2}+({N}_{f}^{2}+{L}_{w}^{2}+{N}_{f}{L}_{w})K$, respectively, where *K* represents the size of the uniformly spaced grid of frequencies, *N*_{ f } is the filter length, and *L*_{ w } is the sliding window size. It is also shown in [46] that these complexities can be reduced to *C*_{CAPON}(*K*) ≈ 12*K* and *C*_{APES}(*K*) ≈ 42*K* if computationally efficient versions of the adaptive Capon and APES spectral estimators, which are classified as FRLS-III type, are utilized. Similarly, the complexity of the brute-force version of the IAA spectral estimator is provided in [47] as ${C}_{\text{IAA}}={m}_{c}[2K{N}_{o}^{2}+K{N}_{o}+{N}_{o}^{3}]$, where *m*_{ c } is the number of IAA iterations necessary to allow for convergence, and *K* and *N*_{ o } are the frequency grid size and the number of observed data samples, respectively. The complexity of the computationally efficient version of the IAA method, which is named fast segmented IAA-II (FSIAA-II), is given in [47] as ${C}_{\text{FSIAA-II}}={m}_{c}[{N}_{s}^{2}+(5+7{L}_{s}){C}_{\text{FFT}}(2{N}_{s})+({L}_{s}+2){C}_{\text{FFT}}(K)]$, where *C*_{FFT}(2*N*_{ s }) and *C*_{FFT}(*K*) denote the cost of performing FFTs of lengths 2*N*_{ s } and *K*, respectively, *N*_{ s } is the nonoverlapping segment length (*N*_{ s } = *N*_{ o }/*L*_{ s }), *L*_{ s } is the number of segments, and *K* is the frequency grid size.
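The nonparametric complexity expressions above can be collected into small helpers for numerical comparison; this is a sketch, with the parameter values chosen to mirror those the article uses later (*N* = 128, *K* = 4096, *m*_{ c } = 10).

```python
import math

def c_fft(N):
    """Periodogram via FFT: 4*N*log2(N) real operations (N a power of 2)."""
    return 4 * N * math.log2(N)

def c_multitaper(N, NW):
    """Multitaper cost, approximately NW times the FFT cost."""
    return NW * c_fft(N)

def c_capon_fast(K):
    """FRLS-III type adaptive Capon estimator: about 12K operations."""
    return 12 * K

def c_apes_fast(K):
    """FRLS-III type adaptive APES estimator: about 42K operations."""
    return 42 * K

def c_iaa_brute(mc, K, No):
    """Brute-force IAA: mc * (2*K*No^2 + K*No + No^3) operations."""
    return mc * (2 * K * No**2 + K * No + No**3)
```

For example, `c_fft(128)` gives 3584 operations, while `c_iaa_brute(10, 4096, 128)` is on the order of 10^9, which is why the fast segmented variant matters in practice.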

In these comparisons, the filter order *p* was allowed to change up to 256 for the ARMA(*p*,*p*) and AR(*p*) filters.

Since a transversal implementation does not require a Levinson–Durbin type conversion algorithm, the fast RLS transversal ARMA filtering method in subbands is computationally advantageous compared to the proposed lattice method. The computational complexity of the proposed lattice method for ARMA(*p*,*p*) spectrum estimation is lower than that of the Periodogram method (*N* = 128) as long as the filter order (*p*) is smaller than 27 in fullband, 19 in two-subbands, and 13 in four-subbands. Similarly, the complexity of AR(*p*) lattice spectrum estimation is lower than that of the Periodogram method (*N* = 128) as long as the filter order (*p*) is smaller than 52 in fullband, 36 in two-subbands, and 23 in four-subbands. If longer data lengths are preferred, the low-complexity threshold value of the filter order for the ARMA(*p*,*p*) and AR(*p*) implementations moves to higher values, as can be observed in Figures 5, 6, and 7. We would also like to point out that it is possible to generate a family of complexity curves for each case by assuming different configurations for the subband prediction filters.
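The fullband ARMA(*p*,*p*) threshold of 27 can be reproduced directly from the closed-form operation count of Section 5. This is a sketch: the subband and AR(*p*) thresholds additionally depend on configuration details (e.g., filter-bank overhead) not modeled here.

```python
def arma_lattice_total_ops(p, q):
    """Closed-form total from Section 5: p^2 + 11p - 2pq + 5q^2 + 19q."""
    return p**2 + 11 * p - 2 * p * q + 5 * q**2 + 19 * q

def crossover_order(budget, q_of_p):
    """Smallest order p whose lattice cost exceeds the operation budget."""
    p = 1
    while arma_lattice_total_ops(p, q_of_p(p)) <= budget:
        p += 1
    return p

# Periodogram budget: C_FFT(128) = 4 * 128 * log2(128) = 3584 operations.
periodogram_budget = 4 * 128 * 7
p_star = crossover_order(periodogram_budget, lambda p: p)   # ARMA(p, p)
```

With `q = p`, the count collapses to 4*p*² + 30*p*, and `p_star` comes out as 27, matching the fullband figure stated above.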

We compare the computational complexities of the nonparametric methods with that of the proposed lattice ARMA(*p*,*p*) method for a data length of *N* = 128 in Figures 8, 9, and 10. When computing the computational complexity of the multitaper method, we assumed that the time-bandwidth product was *NW* = 2. Four nonoverlapping segments (*N*_{ s } = 32, *L*_{ s } = 4) and ten iterations for convergence (*m*_{ c } = 10) were considered for FSIAA-II. In addition, the frequency grid size for the FRLS-III type Capon and APES methods, and for the FSIAA-II method, was *K* = 4096.

Accordingly, under the assumed conditions, the computational complexity of the proposed lattice method for ARMA(*p*,*p*) spectrum estimation is lower than that of the multitaper method as long as the filter order (*p*) is smaller than 38 in fullband, 23 in two-subbands, and 19 in four-subbands. Comparing the proposed lattice ARMA(*p*,*p*) method with the Capon method, its complexity is lower than that of the Capon method as long as the filter order (*p*) is smaller than 108 in fullband, 74 in two-subbands, and 55 in four-subbands. When a similar comparison is carried out for the APES method, the computational complexity of the proposed ARMA(*p*,*p*) method is lower than that of the APES method as long as the filter order (*p*) is less than 204 in fullband, 142 in two-subbands, and 100 in four-subbands. Finally, the IAA method's complexity is larger than that of the proposed ARMA(*p*,*p*) method as long as the filter order (*p*) is less than 3400 in fullband, 2450 in two-subbands, and 1750 in four-subbands.
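The fullband Capon and APES thresholds can be checked against the fast-version budgets 12*K* and 42*K* in the same way; this is a sketch under the closed-form count of Section 5, ignoring any subband filter-bank overhead.

```python
def arma_lattice_total_ops(p, q):
    """Closed-form ARMA lattice + conversion cost from Section 5."""
    return p**2 + 11 * p - 2 * p * q + 5 * q**2 + 19 * q

def crossover_order(budget):
    """Smallest ARMA(p, p) order whose cost exceeds the operation budget."""
    p = 1
    while arma_lattice_total_ops(p, p) <= budget:
        p += 1
    return p

K = 4096
p_capon = crossover_order(12 * K)   # fast (FRLS-III) Capon budget
p_apes = crossover_order(42 * K)    # fast (FRLS-III) APES budget
```

Both values reproduce the stated fullband thresholds: 108 for Capon and 204 for APES.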

## 6 Experimental results

We focused on ARMA(*p*,*p*) spectral estimation in the simulation experiments due to its relevance in subband implementations. The objectives of the simulation experiments are to demonstrate, visually and statistically, that the proposed method has the frequency-spacing improvement, whitening, and SNR improvement properties, and to compare its performance with the performances of the nonparametric methods, viz., the Periodogram, multitaper, Capon, APES, and IAA methods. To this end, we present and compare the ARMA(*p*,*p*) lattice subband spectrum estimation results with the ARMA(*p*,*p*) lattice fullband results, and then compare the lattice four-subband results with the nonparametric results.

The forgetting factor was *λ* = 0.995 in stationary cases, while a smaller value, *λ* = 0.98, was used in nonstationary cases so as to better track the time-varying signal. We repeated the simulation experiments one hundred times; the results of these simulations were then ensemble-averaged. In the subband decomposition of input signals, we used the Kaiser window based approach in [52] for designing cosine-modulated filter banks. In the two- and four-subband decompositions, the filter lengths are 30 and 60, respectively, and their frequency responses are presented in Figures 11 and 12.
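A cosine-modulated analysis bank of this kind can be sketched as follows. The Kaiser-windowed sinc lowpass prototype below is a generic stand-in for the design procedure of [52], and the modulation expression is the standard one for cosine-modulated filter banks; the cutoff, window parameter, and function name are illustrative assumptions.

```python
import numpy as np

def cmfb_analysis(M, L, beta=8.0):
    """Sketch of a cosine-modulated analysis filter bank (M bands, length L).

    A Kaiser-windowed sinc lowpass prototype with cutoff pi/(2M) rad/sample
    is modulated into M bands centered at (k + 1/2) * pi / M.
    """
    n = np.arange(L)
    t = n - (L - 1) / 2.0
    fc = 1.0 / (4.0 * M)                       # cycles/sample, i.e. pi/(2M) rad
    h = 2.0 * fc * np.sinc(2.0 * fc * t) * np.kaiser(L, beta)   # prototype
    # Standard cosine modulation with alternating pi/4 phase offsets.
    banks = np.array([
        2.0 * h * np.cos((np.pi / M) * (k + 0.5) * t + (-1) ** k * np.pi / 4)
        for k in range(M)
    ])
    return banks  # shape (M, L)
```

For the two-subband configuration above, `cmfb_analysis(2, 30)` yields two length-30 analysis filters; the four-subband case uses `cmfb_analysis(4, 60)`.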

We used a data length of *N* = 128, and the data was zero-padded to 32 times the data length in stationary signal simulations involving the proposed lattice, multitaper, and Periodogram methods. In nonstationary cases, no zero-padding was utilized with any of the methods. In the comparisons with the Capon and APES methods, we used the batch processing versions of these methods in [41], and we then implemented the adaptive brute-force versions of these methods in [46] for the nonstationary signal experiments. In stationary signal cases involving the IAA method, we utilized the batch processing brute-force version in [47], and subsequently, in nonstationary signal cases, we made use of the adaptive brute-force version in [53]. The filter lengths for the Capon and APES methods were *N*_{ f } = 63, and we used data observation lengths of *N*_{ o } = *N*/2 and *N*_{ o } = *N* for the IAA method in the visual results and statistical analysis subsections, respectively. The frequency grid sizes were chosen as *K* = 4096 for the Capon, APES, and IAA methods. The sliding window size for the adaptive version of the APES method was *L*_{ w } = 50. The number of IAA iterations for convergence was *m*_{ c } = 10. We used a time-bandwidth product of *NW* = 2 for the multitaper method.

In order to determine the prediction filter order in the simulations, we mainly relied on our knowledge of the input process order and started with this order. Since our criterion of optimization is the minimization of the forward prediction error, we increased the order of the prediction filter and monitored the output forward prediction error. If the decrease in output prediction error was negligible as the filter order increased, we stopped increasing the filter order. Since we would not have *a priori* knowledge of the process order in a practical situation, a model order estimator such as ARMAsel [54] can be used for this purpose. As ARMAsel itself functions based on prediction error evaluations, we might not even need further prediction error evaluations. In addition to these considerations, we kept the total complexity the same in all configurations in order to provide a fair performance comparison: the order of the fullband predictor filter was 48, while the orders of the predictor filters in the two- and four-subband implementations were 24 and 12, respectively, in all simulations.
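The order-selection procedure described above can be sketched as a simple loop that grows the filter order until the forward prediction error stops improving. The callable `prediction_error` and the tolerance are illustrative assumptions; in practice the error would come from running the lattice filter at each candidate order.

```python
def select_order(prediction_error, p_start, p_max, tol=0.01):
    """Grow the filter order from p_start and stop once the relative
    improvement in forward prediction error becomes negligible.

    `prediction_error(p)` is a user-supplied callable (hypothetical here)
    returning the output forward prediction error variance at order p.
    """
    best_p = p_start
    err = prediction_error(best_p)
    for p in range(p_start + 1, p_max + 1):
        new_err = prediction_error(p)
        # Stop when the relative decrease falls below the tolerance.
        if err - new_err < tol * err:
            break
        best_p, err = p, new_err
    return best_p
```

With a synthetic error curve that flattens out at some order, the loop returns the order where the improvement first becomes negligible.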

where *u*(*n*) is white Gaussian complex noise with uncorrelated real and imaginary parts, each with variance ${\sigma}_{u}^{2}$ and zero mean, such that the SNR for the *k*th cisoid is defined as