
Adaptive multichannel sequential lattice prediction filtering method for ARMA spectrum estimation in subbands

Abstract

A multichannel characterization for autoregressive moving average (ARMA) spectrum estimation in subbands is considered in this article. The fullband ARMA spectrum estimation can be realized in two-channels as a special form of this characterization. A complete orthogonalization of input multichannel data is accomplished using a modified form of sequential processing multichannel lattice stages. Matrix operations are avoided, only scalar operations are used, and a multichannel ARMA prediction filter with a highly modular and suitable structure for VLSI implementations is achieved. Lattice reflection coefficients for autoregressive (AR) and moving average (MA) parts are simultaneously computed. These coefficients are then converted to process parameters using a newly developed Levinson–Durbin type multichannel conversion algorithm. Hence, a novel method for spectrum estimation in subbands as well as in fullband is developed. The computational complexity is given in terms of model order parameters, and comparisons with the complexities of nonparametric methods are provided. In addition, the performance is visually and statistically compared against those of the nonparametric methods under both stationary and nonstationary conditions.

1 Introduction

While parametric or model-based methods are used extensively for high-resolution spectrum estimation, these methods perform poorly when the SNR and the spacing between frequencies are small. In many cases, input noise is assumed to be white; if this is not the case, colored noise can be accommodated, provided that its statistics are known. However, such statistics may not be known in many cases, and instead, the noise may incorrectly be assumed white. Such shortcomings can be overcome by applying subband decomposition methods in spectrum estimation.

It was shown by Rao and Pearlman[1] that the well-known AR modeling is a promising method for spectrum estimation in subbands; they proved that pth-order prediction from subbands is superior to pth-order prediction in the fullband when p is finite, and that subband decomposition of a source results in a whitening of the composite subband spectrum. The equivalence of linear prediction and AR spectrum estimation was then exploited to show that AR spectrum estimation from subbands offers a gain over fullband AR spectrum estimation. Unfortunately, new problems such as spectral overlapping and the increase in the variance of estimated parameters appear. The first disadvantage was addressed in a conference paper by Bonacci et al.[2], where nonreal-time procedures were proposed to perform subband spectral estimation without discontinuities or aliasing at subband borders. However, this procedure is appropriate for a uniform filter bank, even though methods applicable to any kind of filter bank are desired. In another conference paper, Bonacci et al.[3] proposed to tackle the second drawback by a Subband Multichannel Autoregressive Spectral Estimation method, which was also intended for an off-line implementation.

Another popular model, autoregressive moving average (ARMA) model, which includes AR and MA methods as its special cases, has the input–output relationship given by

y(n) = \sum_{\ell=1}^{p} a_\ell^1\, y(n-\ell) + \sum_{j=0}^{q} a_j^2\, x(n-j)
(1)

for an ARMA(p,q) process. Here, x(n) is zero-mean white noise with variance σ_x², and a_ℓ^1 and a_j^2 represent the ℓth and jth coefficients of the AR and MA parts, respectively. Such processes arise in various applications such as modeling radar signals[4, 5] or speech signals[6, 7], where spectral zeros as well as poles are often present due to the physical mechanism generating the data. In addition, processes that are purely autoregressive are often transformed into ARMA(p,p) processes by the addition of measurement noise, and sinusoids in noise in particular are known to obey the degenerate ARMA equation[8, 9]. Even though an ARMA process can be represented by a unique AR model of generally infinite order, the ARMA modeling approach often leads to more efficient implementations. A hierarchical ARMA modeling method for classifying high-resolution radar signals at multiple scales was presented in[10], where it was shown that a radar signal at a different scale obeys an ARMA process if it is an ARMA process at the observed scale.
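As a concrete illustration of the input–output relation in (1), the following minimal Python sketch generates an ARMA(p,q) sample path from zero-mean white noise. The coefficient values are hypothetical and chosen only for the example; the function name is not from the article.

```python
import numpy as np

def generate_arma(a1, a2, n_samples, sigma_x=1.0, seed=0):
    """Generate an ARMA(p, q) sample path driven by zero-mean white noise x(n).

    a1: AR coefficients [a_1, ..., a_p]      (hypothetical example values)
    a2: MA coefficients [a_0, ..., a_q]
    Implements y(n) = sum_l a1[l] y(n-l) + sum_j a2[j] x(n-j), as in Eq. (1).
    """
    rng = np.random.default_rng(seed)
    p, q = len(a1), len(a2) - 1
    x = sigma_x * rng.standard_normal(n_samples)   # white driving noise
    y = np.zeros(n_samples)
    for n in range(n_samples):
        ar = sum(a1[l] * y[n - 1 - l] for l in range(p) if n - 1 - l >= 0)
        ma = sum(a2[j] * x[n - j] for j in range(q + 1) if n - j >= 0)
        y[n] = ar + ma
    return y

y = generate_arma([0.5, -0.2], [1.0, 0.4], 1024)
```

The loop form mirrors the difference equation directly; a production implementation would typically use a vectorized filtering routine instead.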

ARMA model-based applications such as the classification of high-resolution radar signatures using multi-scale features, and lattice speech analysis/synthesis were reported in[11, 12], respectively. As a consequence of degenerate ARMA modeling of sinusoids in noise, adaptive multiple frequency tracking, previously considered in[13–15], has gained momentum recently[16], and is of great interest in communications[17], biomedical engineering[18], speech processing[19], and power systems[20, 21]. Another recent consequence of degenerate ARMA modeling of sinusoids in noise is related to spectrum sensing for cognitive radios[22, 23], where the primary task is to dynamically explore the radio spectrum for the existence of signals (sinusoids) so as to determine portions of the frequency band that may be used for radio transmission. In view of these developments, we think that methods of subband spectrum estimation based on ARMA modeling, with possible extensions to fullband spectrum estimation, can provide good alternatives in radar and speech classification, adaptive multiple frequency tracking, and spectrum sensing for cognitive radio applications.

In this article, we propose a novel method that relies on estimation of the driving noise in subbands. Even though methods based on estimation of the driving noise were previously proposed for the fullband[24], the important difference of our method is that we first transform the subband ARMA filtering problem into a multichannel AR filtering problem by embedding subband ARMA processes into multichannel AR processes, and then we achieve a complete modified Gram-Schmidt orthogonalization of the input multichannel signal using a modified version of the sequential processing multichannel lattice stages (SPMLSs)[25]. A number of alternatives for adaptive multichannel processing were proposed after the introduction of SPMLSs in[25]. Two such alternatives are the modular lattice architectures proposed by Lev-Ari[26], and Glentis and Kalouptsidis[27]. While the architecture in[26] is suitable for equal channel orders and involves more computations than SPMLSs, neither of these architectures is preferable for sequential processing. Another alternative is the QR decomposition-based lattice approach in[28], which is also for equal channel orders, and was later extended to unequal channel orders by Yang[29]. Newer versions of multichannel QR algorithms based on orthogonal Givens rotation for equal as well as unequal channel orders were later presented by Rontogiannis and Theodoridis[30]. Recently, an array-based QR multichannel lattice filter that extends the correspondence between recursive least-squares update equations and Kalman filter equations to the multichannel lattice case was presented by Gomes and Barroso[31]. In addition, transversal-type algorithms such as[32, 33] were proposed due to their lower complexity and direct relation to channel coefficients. However, these algorithms generally require the implementation of stabilization techniques, and their structure is less regular.
The principle of modular decomposition appears to be the implicit basis in all these adaptive multichannel processing techniques, and provides for scalar-only operations. In QR decomposition approaches, the Q matrix is implicitly formed and then used to compute the R matrix, whereas in the Gram-Schmidt approach, the inverse of R is implicitly formed and then used to compute the Q matrix. As a consequence of this fact, Regalia and Bellanger[34] showed that there exists a duality between QR and lattice methods, and the possibility of combining elements of both approaches to obtain new hybrid algorithms. With respect to developing these hybrid algorithms, Ling[35] showed that an orthogonal Givens rotation-based algorithm algebraically coincides with the recursive-modified Gram-Schmidt-based lattice algorithm in[36].

In accordance with this perspective in multichannel signal processing, as SPMLSs already have modularity, order recursiveness, regularity, simplicity, sequentiality, and equal as well as unequal channel processing capabilities, we modify them in order to improve their numerical performance by using the error-feedback formula of the recursive-modified Gram-Schmidt algorithm[35, 36] in the processing cells. Thus, the complete orthogonalization of multichannel input data and the sequential nature of the modified SPMLSs make it possible to feed back the delayed forward prediction error signals to represent the unknown input noise signals of the original ARMA processes. Although we introduced the complete orthogonalization concept previously in linear and nonlinear adaptive decision feedback equalization frameworks in[37, 38], its application to the adaptive spectrum estimation problem in subbands as well as in the fullband results in novel implementations, and in particular leads to the development of a new Levinson–Durbin type conversion algorithm for the modified SPMLSs to compute ARMA process parameters from lattice reflection coefficients. To the best of the authors’ knowledge, this particular multichannel lattice prediction filter structure for ARMA spectrum estimation in subbands or in fullband and the new Levinson–Durbin type multichannel conversion algorithm do not exist in the literature.

A two-subband ARMA spectrum estimation problem is considered in this article for ease of explanation and due to space limitations in developing the method. However, it is straightforward to apply the method to any number of subbands, and to AR spectrum estimation in subbands. The method is appropriate for uniform and nonuniform filter bank realizations, while aliasing problems due to spectral overlapping in adjacent channels are also addressed. A highly modular, regular, time and order recursive, recursive least squares (RLS) ARMA parameter estimator with inherently good numerical properties, suitable for VLSI and recent programmable system-on-chip implementations[39], is designed, and AR and MA parameters are found simultaneously. With these properties, the method is applicable for both off-line and on-line implementations; in particular, it is possible to monitor the forward prediction error signal, start the parameter estimation with a fullband AR(p), ARMA(p,q), or ARMA(p,p) process, and, if performance requirements are not met, switch to subband ARMA(p_k,q_k) or ARMA(p_k,p_k) realizations. Consequently, it dynamically extends the lattice parametrization of the fullband spectrum into subbands, and thereby arises as a useful and practical method for radar signal analysis/classification, speech analysis/synthesis, adaptive multiple frequency tracking, and cognitive radio spectrum sensing tasks.

An adaptive FIR filtering approach to spectral estimation, which is referred to as amplitude and phase estimation of a sinusoid (APES) and has applications to radar target recognition, was proposed by Li and Stoica[40], and the adaptive FIR filtering approach to the Capon method was also discussed by Stoica and Moses[41]. Moreover, the APES method has been extended to array processing by Yardibi et al.[42], and named the iterative adaptive approach for amplitude and phase estimation (IAA-APES). An FIR filtering reinterpretation of Thomson's multitaper method[43, 44] with applications to spectrum sensing for cognitive radio was also presented by Farhang-Boroujeny[45]. Recently, computationally efficient versions of the adaptive Capon and APES methods, and of the IAA method, have been proposed in[46, 47], respectively. In this article, we compare the complexity and performance of our method with those of the Periodogram, multitaper, Capon, APES, and IAA methods, and show that our method is competitive in terms of complexity and performance.

The remainder of this article is organized as follows. In Section 2, we present the development of the new multichannel ARMA lattice prediction filter using the modified SPMLSs. In Section 3, we develop the new Levinson–Durbin type multichannel conversion algorithm for the modified SPMLSs, and relate lattice parameters to process parameters. The spectrum estimation expression in two subbands is given in Section 4. Computational complexity is treated in Section 5. Section 6 is concerned with the experimental results. Finally, Section 7 discusses the results and concludes the article. The following notation is used in this article. (∙)* represents the complex conjugate of (∙). (∙)^T and (∙)^H stand for the transpose and the Hermitian transpose of (∙), respectively. The variables m, i, and n are global while all other variables are local. The variable m represents the stage number while n and i are the time indexes related to data and coefficients, respectively, until we equate them in Section 3 to have a single time index.

2 Adaptive multichannel ARMA lattice prediction filtering

2.1 Multichannel prediction problem

An illustration of adaptive multichannel ARMA prediction filtering in subbands for the two-subband case is presented in Figure 1. Therein, y(n) represents the input fullband signal while y_1(n) and y_2(n) stand for the input subband signals. In adaptive multichannel ARMA prediction filtering, the objective is to find an exponentially windowed LS solution for the AR and MA coefficients of the kth forward prediction filter that minimizes each of the two cost functions

J^k(i) = \sum_{n=0}^{i} \lambda^{i-n} \left| f_{p_k}^k(n) \right|^2
(2)
Figure 1. A block diagram of the adaptive multichannel ARMA prediction filtering in subbands.

at each time instant i, for k = 1,2. The forward prediction error f_{p_k}^k(n) in this expression is defined as

f_{p_k}^k(n) = d^k(n) - \hat{d}_i^k(n)
(3)

and the kth forward prediction filter output, d̂_i^k(n), which is an estimate of the kth desired signal d^k(n) = y_k(n), is given by

\hat{d}_i^k(n) = \sum_{j=1}^{p_k} \tilde{a}_{1,j}^k(i)\, y_k(n-j) + \sum_{l=0}^{q_k} \tilde{a}_{2,l}^k(i)\, \hat{u}_k(n-l).
(4)

Herein, p_k and q_k denote the orders of the (p_k,q_k) prediction error filter associated with the kth subband, and û_k(n) is the estimate of the kth ARMA process input signal. The estimated kth ARMA process input signal, û_k(n), is obtained by delaying and feeding back the p_kth-order forward prediction error, û_k(n) = f_{p_k}^k(n−1). Hence, the input vector to the kth ARMA filter at time instant n, ỹ_k(n), and the corresponding coefficient vector ã^k(i) at time instant i are defined as
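The mechanism of Equations (3)–(4), with the unknown process input replaced by the delayed forward prediction error, can be sketched for a single channel as follows. The coefficients are held fixed here for clarity, whereas the adaptive filter re-estimates them at every time instant; the function and variable names are illustrative, not from the article.

```python
import numpy as np

def arma_prediction_error(y, a1, a2):
    """Single-channel sketch of the ARMA forward predictor of Eqs. (3)-(4):
    the unknown ARMA input u(n) is replaced by the delayed forward
    prediction error, u_hat(n) = f(n-1), which is fed back at every step.
    a1: AR coefficients [a_1..a_p], a2: MA coefficients [a_0..a_q].
    """
    p, q = len(a1), len(a2) - 1
    n_samples = len(y)
    u_hat = np.zeros(n_samples + 1)    # u_hat[n] holds f[n-1]; u_hat[0] = 0
    f = np.zeros(n_samples)
    for n in range(n_samples):
        d_hat = sum(a1[j] * y[n - 1 - j] for j in range(p) if n - 1 - j >= 0)
        d_hat += sum(a2[l] * u_hat[n - l] for l in range(q + 1) if n - l >= 0)
        f[n] = y[n] - d_hat                # Eq. (3)
        u_hat[n + 1] = f[n]                # feed back the delayed error
    return f

f = arma_prediction_error(np.sin(0.3 * np.arange(200)), [0.9], [1.0, 0.3])
```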

\tilde{y}_k(n) = \left[ y_k(n-1), \ldots, y_k(n-p_k),\ \hat{u}_k(n), \hat{u}_k(n-1), \ldots, \hat{u}_k(n-q_k) \right]^T
(5)

and

\tilde{a}^{kT}(i) = \left[ \tilde{a}_{1,1}^k(i), \ldots, \tilde{a}_{1,p_k}^k(i),\ \tilde{a}_{2,0}^k(i), \tilde{a}_{2,1}^k(i), \ldots, \tilde{a}_{2,q_k}^k(i) \right],
(6)

respectively. Herein, ã_{1,j}^k(i) and ã_{2,j}^k(i), respectively, represent the jth coefficients related to the AR and MA parts of the forward prediction filter for the kth subband at time instant i. It is assumed, without loss of generality, that p_k ≥ q_k. The p_k = q_k case corresponds to the prediction filter for an ARMA(p_k,p_k) process, while the p_k > q_k case corresponds to the prediction filter for a general ARMA(p_k,q_k) process. Note that an ARMA backward prediction can be performed for the desired signal d^k(n) = y_k(n−p_k), and the prediction filter in that case would use the reversed and conjugated forward prediction filter coefficients, which are defined in the backward prediction error coefficient vector as

\tilde{c}^{kT}(i) = \left[ \tilde{c}_{1,p_k}^k(i), \ldots, \tilde{c}_{1,1}^k(i),\ \tilde{c}_{2,q_k}^k(i), \ldots, \tilde{c}_{2,1}^k(i), \tilde{c}_{2,0}^k(i) \right]
(7)

where c̃_{1,j}^k(i) and c̃_{2,j}^k(i) are, respectively, the jth coefficients related to the AR and MA parts of the backward prediction filter for the kth subband at time instant i.

Consequently, the main concern of the exponentially weighted LS problem under consideration is to find, at each time i, the kth optimal coefficient vector, ã^k(i), that minimizes the cost function

J^k(i) = \sum_{n=0}^{i} \lambda^{i-n} \left| d^k(n) - \tilde{a}^{kH}(i)\, \tilde{y}_k(n) \right|^2 .
(8)

The k th optimal coefficient vector related to the k th subband filter

\tilde{a}_{\text{opt}}^k(i) = R_k^{-1}(i)\, P_k(i)
(9)

is found by differentiating J^k(i) with respect to ã^k(i), setting the derivative to zero, and solving for ã^k(i), where

R_k(i) = \sum_{n=0}^{i} \lambda^{i-n}\, \tilde{y}_k(n)\, \tilde{y}_k^H(n)
(10)

and

P_k(i) = \sum_{n=0}^{i} \lambda^{i-n}\, \tilde{y}_k(n)\, d^{k*}(n).
(11)
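The direct evaluation of (9)–(11) can be sketched in a few lines of Python; the function and variable names are hypothetical. This brute-force form, with its explicit matrix inversion, is exactly what the lattice solution of the next subsection is designed to avoid:

```python
import numpy as np

def ew_least_squares(Y, d, lam=0.99):
    """Directly solve the exponentially weighted LS problem of Eqs. (8)-(11):
    a_opt(i) = R^{-1}(i) P(i).  Column n of Y is the input vector y~(n) and
    d[n] is the corresponding desired sample.
    """
    dim, num = Y.shape
    i = num - 1                                     # final time instant
    R = np.zeros((dim, dim), dtype=complex)
    P = np.zeros(dim, dtype=complex)
    for n in range(num):
        w = lam ** (i - n)                          # exponential window
        R += w * np.outer(Y[:, n], Y[:, n].conj())  # Eq. (10)
        P += w * Y[:, n] * np.conj(d[n])            # Eq. (11)
    return np.linalg.solve(R, P)                    # Eq. (9)
```

When the desired signal is an exact linear combination of the inputs, the solver recovers the underlying coefficient vector regardless of the window λ.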

2.2 Sequential lattice orthogonalization

In order to find a modular, regular, and simple solution to the two-subband ARMA prediction problem, we would like to use a single multichannel lattice filter as depicted in Figure 2, instead of using two separate transversal filters and solving two separate optimization problems as in Figure 1. We would also like to avoid direct evaluations as in (9), and to achieve good numerical properties. As the number of channels at different sections of the proposed multichannel lattice filter varies due to the sequential processing nature of SPMLSs, we carry out the exponentially weighted LS optimization by considering each of these sections separately; we therefore assume that the filter is comprised of three cascaded filters, namely two-channel, three-channel, and four-channel lattice sections, and we use a different index for each section while using m to indicate a stage in the whole filter. We also assume p_1 = p_2 for ease of explanation, without loss of generality.

Figure 2. A block diagram of the adaptive multichannel ARMA lattice prediction filtering in subbands.

In order to sequentially solve the exponentially weighted LS optimization problem under consideration, we first organize the elements of the input signal vectors y_1(n) = [y_1(n),…,y_1(n−ℓ)]^T and y_2(n) = [y_2(n),…,y_2(n−ℓ)]^T according to the natural ordering of SPMLSs as

\bar{y}_{\ell+1}(n) = \begin{bmatrix} y_1(n) & y_2(n) \\ y_1(n-1) & y_2(n-1) \\ \vdots & \vdots \\ y_1(n-\ell) & y_2(n-\ell) \end{bmatrix}
(12)

and input to two-channel stages for which the stage number (m) has a range of values given by 0 < m ≤ (p 1 − q 1). Accordingly, we redefine Equations (10) and (11) using this new data vector as follows

R_\ell(i) = \sum_{n=0}^{i} \lambda^{i-n}\, \bar{y}_{\ell+1}(n)\, \bar{y}_{\ell+1}^H(n)
(13)

and

P_{\ell,k}(i) = \sum_{n=0}^{i} \lambda^{i-n}\, \bar{y}_{\ell+1}(n)\, d^{k*}(n)
(14)

where k = 1,2. The orthogonalization of data using SPMLSs corresponds to the transformation of (13) and (14) into

D_{\ell+1}^f(i) = \sum_{n=0}^{i} \lambda^{i-n}\, \Omega_\ell^f(i)\, \bar{y}_{\ell+1}(n)\, \bar{y}_{\ell+1}^H(n)\, \Omega_\ell^{fH}(i)
(15)

and

Z_{\ell+1,k}^f(i) = \sum_{n=0}^{i} \lambda^{i-n}\, \Omega_\ell^f(i-1)\, \bar{y}_{\ell+1}(n-1)\, d^{k*}(n),
(16)

respectively. Here, Ω_ℓ^f(i) is the 2ℓ × 2ℓ lower triangular transformation matrix for forward prediction, and it is sequentially realized stage-by-stage using the 2 × 2 lower triangular transformation matrices

L_\ell^f(i) = \begin{bmatrix} 1 & 0 \\ \hat{\kappa}_\ell^f(i-1) & 1 \end{bmatrix}
(17)

whose diagonal elements are all equal to unity at time instant i, where κ̂_ℓ^f(i) is the reflection coefficient computed at the single circular cell in the triangular-shaped self-orthogonalization processor of the ℓth two-channel SPMLS. Then, the forward lattice predictor coefficients are computed using

\Theta_{\ell,k}^f(i) = \left[ D_{\ell+1}^f(i-1) \right]^{-1} Z_{\ell+1,k}^f(i)
(18)

where Θ_{ℓ,k}^f(i) represents the kth row of the 2 × 2ℓ lattice forward prediction reflection coefficient matrix Θ_ℓ^f(i), which is also sequentially implemented stage-by-stage by means of the 2 × 2 forward prediction reflection coefficient matrices

\Delta_\ell^f(i) = \begin{bmatrix} \bar{\kappa}_{\ell,1,1}^f(i) & \bar{\kappa}_{\ell,1,2}^f(i) \\ \bar{\kappa}_{\ell,2,1}^f(i) & \bar{\kappa}_{\ell,2,2}^f(i) \end{bmatrix}
(19)

in which κ̄_{ℓ,k,j}^f(i) is the jth reflection coefficient related to the forward prediction of the kth channel signal, and it is computed at the (k,j)th single circular cell of the square-shaped reference-orthogonalization processor related to forward prediction at the ℓth two-channel SPMLS. Note that the matrix inversion operation in Equation (9) is transformed into a simple scalar inversion operation in (18) due to the diagonal nature of D_{ℓ+1}^f(i). The backward prediction counterpart of this optimization problem is similarly solved using 2 × 2 lower triangular transformation matrices L_ℓ^b(i), and 2 × 2 lattice backward prediction reflection coefficient matrices, Δ_ℓ^b(i).
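The role of the lower triangular transformation can be illustrated numerically: any unit-lower-triangular Ω with Ω R Ω^H = D diagonal reduces the inversion in (18) to scalar divisions. The sketch below, a simplified stand-in for the stage-by-stage lattice construction, obtains such a transform in one shot from an LDL^T factorization; real-valued, positive definite R is assumed, and the function name is illustrative.

```python
import numpy as np

def diagonalizing_transform(R):
    """Unit-lower-triangular Omega with Omega R Omega^T = D (diagonal),
    computed via an LDL^T factorization of R.  The lattice filter builds
    an equivalent transform stage by stage, which is why inverting D in
    Eq. (18) needs only scalar divisions.
    """
    n = R.shape[0]
    L = np.eye(n)
    D = np.zeros(n)
    for j in range(n):                                  # standard LDL^T
        D[j] = R[j, j] - sum(L[j, k] ** 2 * D[k] for k in range(j))
        for i in range(j + 1, n):
            L[i, j] = (R[i, j]
                       - sum(L[i, k] * L[j, k] * D[k] for k in range(j))) / D[j]
    Omega = np.linalg.inv(L)        # unit lower triangular by construction
    return Omega, np.diag(D)
```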

After the processing of input signals by two-channel lattice stages, the delayed and fed back forward prediction error û_1(n) = f_{p_1}^1(n−1) is incorporated at the (p_1 − q_1 + 1)th stage as the third channel. Accordingly, we expand the optimization problem by organizing the elements of the input data vectors y_1(n) = [y_1(n),…,y_1(n−α)]^T, y_2(n) = [y_2(n),…,y_2(n−α)]^T, and û_1(n) = [û_1(n),…,û_1(n−α)]^T as follows:

\bar{y}_{\alpha+1}(n) = \begin{bmatrix} y_1(n) & y_2(n) & \hat{u}_1(n) \\ \vdots & \vdots & \vdots \\ y_1(n-\alpha) & y_2(n-\alpha) & \hat{u}_1(n-\alpha) \end{bmatrix},
(20)

and input to the three-channel lattice section, where the stage number (m) takes values in the range given by (p_1 − q_1) < m ≤ (p_2 − q_2). Subsequently, we solve the optimization problem in (18) once again with the new input vector, in which case Ω_α^f(i) and Θ_α^f(i) are the 3α × 3α lower triangular transformation and the 3 × 3α forward lattice prediction coefficient matrices, respectively. Ω_α^f(i) is computed sequentially by means of 3 × 3 lower triangular transformation matrices, L_α^f(i), and Θ_α^f(i) is similarly realized stage-by-stage making use of 3 × 3 forward prediction coefficient matrices, Δ_α^f(i), at time instant i. Note that, since the delayed and fed back signal is considered to constitute a new channel in the multichannel sequential lattice filtering, we have three desired signals at this point, d^k(n), k = 1,2,3, one of which did not exist in the optimization problem stated in Section 2.1; this new desired signal, d^3(n), is related to the MA part of the first subband ARMA modeling.
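The "natural ordering" of SPMLS input data, as in (12) and its three-channel extension (20), simply interleaves the channels delay by delay, newest samples first. A toy sketch, with the assumed indexing convention that element k of each channel list holds the kth delayed sample:

```python
def spmls_interleave(*channels):
    """Interleave channel sample streams into the natural ordering of
    SPMLSs: all channels at delay 0, then all channels at delay 1, etc.
    channels[c][k] is taken to hold the k-th delayed sample of channel c
    (an indexing convention assumed for this sketch).
    """
    out = []
    for samples_at_same_delay in zip(*channels):
        out.extend(samples_at_same_delay)
    return out
```

For two channels this reproduces the ordering of (12); appending the fed-back channel û_1 as a third argument reproduces the ordering of (20).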

Finally, the optimization problem is expanded one more time with the inclusion of the second delayed and fed back forward prediction error û_2(n) = f_{p_2}^2(n−1), and this time the elements of the input data vectors y_1(n) = [y_1(n),…,y_1(n−ν)]^T, y_2(n) = [y_2(n),…,y_2(n−ν)]^T, û_1(n) = [û_1(n),…,û_1(n−ν)]^T, and û_2(n) = [û_2(n),…,û_2(n−ν)]^T are organized as

\bar{y}_{\nu+1}(n) = \begin{bmatrix} y_1(n) & y_2(n) & \hat{u}_1(n) & \hat{u}_2(n) \\ \vdots & \vdots & \vdots & \vdots \\ y_1(n-\nu) & y_2(n-\nu) & \hat{u}_1(n-\nu) & \hat{u}_2(n-\nu) \end{bmatrix}
(21)

where the stage number (m) is in the range given by (p 2 − q 2) < m ≤ p 2 due to four-channel processing. Similar to two-channel and three-channel cases, we solve the optimization problem in (18) using the new data vector in Equation (21), in which case Ω ν f (i) and Θ ν f (i) are 4ν × 4ν lower triangular transformation, and 4 × 4ν forward lattice prediction coefficient matrices at the time instant i, respectively. Similar to previous cases, these matrices are computed stage-by-stage by the use of 4 × 4 lower triangular transformation matrices, L ν f (i), and 4 × 4 forward prediction coefficient matrices, Δ ν f (i), at time instant i, respectively. As the second delayed and fed back signal is also considered as a new channel in the multichannel sequential lattice filtering, hereafter we have four desired signals, d k(n), where k = 1,2,3,4, and this fourth desired signal, d 4(n), is associated with the MA part of the second subband ARMA modeling.

2.3 Matrix visualization

In order to further explain the sequential lattice orthogonalization, we consider an ARMA(8,5) and an ARMA(8,2) lattice prediction filter for the first and second subbands, respectively, and organize the elements of the input data vectors y_1(n) = [y_1(n),…,y_1(n−8)]^T, y_2(n) = [y_2(n),…,y_2(n−8)]^T, û_1(n) = [û_1(n),û_1(n−1),…,û_1(n−5)]^T, and û_2(n) = [û_2(n),û_2(n−1),…,û_2(n−2)]^T as rows of a matrix,

\begin{bmatrix}
y_1(n) & y_1(n-1) & y_1(n-2) & y_1(n-3) & y_1(n-4) & y_1(n-5) & y_1(n-6) & y_1(n-7) & y_1(n-8) \\
y_2(n) & y_2(n-1) & y_2(n-2) & y_2(n-3) & y_2(n-4) & y_2(n-5) & y_2(n-6) & y_2(n-7) & y_2(n-8) \\
\hat{u}_1(n) & \hat{u}_1(n-1) & \hat{u}_1(n-2) & \hat{u}_1(n-3) & \hat{u}_1(n-4) & \hat{u}_1(n-5) & & & \\
\hat{u}_2(n) & \hat{u}_2(n-1) & \hat{u}_2(n-2) & & & & & &
\end{bmatrix}
(22)

by taking into consideration different number of parameters in the feedforward and feedback channels and shifting properties of input data. This matrix helps us to visualize the orthogonalization process, and thus to draw a diagram of the four-channel prediction filter structure under consideration as in Figure3. Note that the elements of the first and second rows are related to the input signals of the first and the second subband channels of the ARMA filter under consideration, while the third and fourth rows are associated with the fed back and delayed signals. Lattice orthogonalization begins with the elements of the first two rows using two-channel sequential lattice processing stages until the first fed back and delayed channel is incorporated as the new channel. Then, the orthogonalization continues with three-channel lattice stages until the fourth channel, which is the second fed back and delayed channel, is taken into the process, and so the orthogonalization of input data finalizes with four-channel stages when the mean squared prediction error performance requirements are met, and thereby the k th desired signal, d k(n), is sequentially predicted using self-orthogonalized and delayed backward prediction error signals as follows:

\hat{d}_i^k(n) = \sum_{m=1}^{p_1-q_1} \sum_{j=1}^{2} \bar{\kappa}_{m,k,j}^f(i-1)\, \hat{b}_{m-1}^j(n-1) + \sum_{m=p_1-q_1+1}^{p_2-q_2} \sum_{j=1}^{3} \bar{\kappa}_{m,k,j}^f(i-1)\, \hat{b}_{m-1}^j(n-1) + \sum_{m=p_2-q_2+1}^{p_2} \sum_{j=1}^{4} \bar{\kappa}_{m,k,j}^f(i-1)\, \hat{b}_{m-1}^j(n-1).
(23)
Figure 3. A diagram of the four-channel ARMA lattice filter structure for two-subband spectrum estimation.

Here, the first and second summations represent the prediction accomplished by the two-channel and three-channel sections, respectively, and the third summation is connected with the four-channel prediction section. In each section, κ̄_{m,k,j}^f(i) represents the jth forward prediction reflection coefficient at the mth stage related to the kth channel as defined in the previous subsection, and b̂_{m−1}^j(n) represents the jth element of the self-orthogonalized backward prediction error signal vector, b̂_{m−1}(n), at the input of the mth stage. The self-orthogonalized backward prediction error vector, b̂_{m−1}(n), is produced by the lower triangular transformation of the input backward prediction error vector, b_{m−1}(n), using L_m^f(n), and this operation is accomplished at the triangular-shaped self-orthogonalization processor (related to forward prediction) of the mth SPMLS. Note that the sizes of the vectors b̂_{m−1}(n) and b_{m−1}(n), and of the matrix L_m^f(n), at different sections of the proposed lattice filter are as follows: 2 × 1 and 2 × 2 in the two-channel section, 3 × 1 and 3 × 3 in the three-channel section, and 4 × 1 and 4 × 4 in the four-channel section, respectively.
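Per stage, the sectioned prediction sum (23) is just an inner product between the stage's forward reflection coefficients and the self-orthogonalized, delayed backward errors at its input. A minimal sketch with hypothetical numbers, where the row length (2, 3, or 4) follows the channel count of the corresponding section:

```python
def predict_from_sections(kappa_rows, b_rows):
    """Sectioned prediction sum of Eq. (23): each stage m contributes the
    inner product of its forward reflection coefficients kappa_rows[m]
    with the self-orthogonalized, delayed backward errors b_rows[m].
    All names and values here are illustrative.
    """
    d_hat = 0.0
    for k_row, b_vec in zip(kappa_rows, b_rows):
        d_hat += sum(k * b for k, b in zip(k_row, b_vec))
    return d_hat
```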

We would also like to point out that a lattice filter for fullband ARMA spectrum estimation is a special form of the two-subband implementation, and therefore it can similarly be realized using sequential processing one-channel and two-channel lattice stages as illustrated in Figure 4 for an ARMA(10,2) implementation.

3 Conversion of lattice coefficients to process parameters

Since the mathematical link between process parameters and reflection coefficients of a lattice prediction filter is provided by the Levinson–Durbin algorithm[48, 49], we develop a new Levinson–Durbin type conversion algorithm specifically for SPMLSs in order to convert lattice reflection coefficients to subband ARMA process parameters. Due to the sequential nature of the proposed lattice structure, we carry out the development of the new Levinson–Durbin type multichannel conversion algorithm by taking into consideration each of these sections separately, and therefore we assume that the filter is comprised of three cascaded filters as in Section 2.2.
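For orientation, the single-channel form of this conversion is the classical Levinson–Durbin "step-up" recursion. A minimal Python sketch is given below; it assumes real coefficients and one common sign convention (conventions differ across texts), and it is only the scalar analogue of, not a substitute for, the multichannel algorithm developed in this section.

```python
import numpy as np

def reflection_to_ar(kappas):
    """Classical single-channel Levinson-Durbin step-up recursion:
    given reflection coefficients k_1..k_p, build the AR predictor
    coefficients a_1..a_p via
        a_m(j) = a_{m-1}(j) + k_m * a_{m-1}(m-j),   a_m(m) = k_m.
    """
    a = np.zeros(0)
    for k in kappas:
        # reverse the previous coefficient vector, scale by k, append k
        a = np.concatenate([a + k * a[::-1], [k]])
    return a
```

For example, two reflection coefficients k_1, k_2 yield a_1 = k_1(1 + k_2) and a_2 = k_2.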

Figure 4. A diagram of the two-channel ARMA lattice filter structure for fullband spectrum estimation.

We first consider the conversion algorithm for the two-channel section of the lattice prediction filter, and we organize the input signal samples to the two-channel lattices as

\bar{y}_{\ell+1}(n) = \begin{bmatrix} \mathbf{y}_1(n) \\ y_1(n-\ell) \\ \mathbf{y}_2(n) \\ y_2(n-\ell) \end{bmatrix} = \begin{bmatrix} y_1(n) \\ \mathbf{y}_1(n-1) \\ y_2(n) \\ \mathbf{y}_2(n-1) \end{bmatrix}
(24)

where we define the data vectors as y_1(n) = [y_1(n),…,y_1(n−ℓ+1)]^T and y_2(n) = [y_2(n),…,y_2(n−ℓ+1)]^T, and 0 < m ≤ (p_1 − q_1). The corresponding forward and backward prediction error coefficient matrices of the ℓth-order transversal filter for the kth channel are defined as

\hat{a}_\ell^{kT}(i) = \left[ \hat{a}_0^k(i), \hat{a}_1^k(i), \hat{a}_2^k(i), \ldots, \hat{a}_{2\ell-2}^k(i), \hat{a}_{2\ell-1}^k(i), \hat{a}_{2\ell}^k(i) \right]
(25)

and

\hat{c}_\ell^{kT}(i) = \left[ \hat{c}_{2\ell}^k(i), \hat{c}_{2\ell-1}^k(i), \hat{c}_{2\ell-2}^k(i), \ldots, \hat{c}_2^k(i), \hat{c}_1^k(i), \hat{c}_0^k(i) \right]
(26)

where k = 1,2 due to two-channel lattice processing, and â_0^k(i) = ĉ_0^k(i) = 1.0. Since the signal time shifting and ordering properties of SPMLSs, when expressed in matrix form as in Equation (12), differ from the organization of input signal samples in matrix form as in Equation (24), we use (2ℓ+1) × (2ℓ+1) shuffling matrices, J_{ℓ+1}^1 for the first channel and J_{ℓ+1}^2 for the second channel, to reorder the elements of the coefficient matrices â_ℓ^{1H}(i), â_ℓ^{2H}(i) and ĉ_ℓ^{1H}(i), ĉ_ℓ^{2H}(i) according to the sample ordering of SPMLSs. Therefore, the forward and backward prediction errors at the end of the observation interval n = i at the output of the general ℓth-order filters with transversal structure can be stated as

\begin{bmatrix} f_\ell^1(n) \\ f_\ell^2(n) \end{bmatrix} = \begin{bmatrix} J_{\ell+1}^1\, \hat{a}_\ell^{1H}(n) & \mathbf{0} \\ \mathbf{0} & J_{\ell+1}^2\, \hat{a}_\ell^{2H}(n) \end{bmatrix} \bar{y}_{\ell+1}(n)
(27)
\begin{bmatrix} b_\ell^1(n) \\ b_\ell^2(n) \end{bmatrix} = \begin{bmatrix} J_{\ell+1}^1\, \hat{c}_\ell^{1H}(n) & \mathbf{0} \\ \mathbf{0} & J_{\ell+1}^2\, \hat{c}_\ell^{2H}(n) \end{bmatrix} \bar{y}_{\ell+1}(n)
(28)

where \mathbf{0} is a 1 × (ℓ+1) zero matrix. Then, we can express the (ℓ−1)th-order prediction errors as

\begin{bmatrix} f_{\ell-1}^1(n) \\ f_{\ell-1}^2(n) \end{bmatrix} = \begin{bmatrix} J_{\ell+1}^1 \left[ \hat{a}_{\ell-1}^{1H}(n) \;\; \mathbf{0} \;\; \mathbf{0} \right] & \mathbf{0} \\ \mathbf{0} & J_{\ell+1}^2 \left[ \hat{a}_{\ell-1}^{2H}(n) \;\; \mathbf{0} \;\; \mathbf{0} \right] \end{bmatrix} \bar{y}_{\ell+1}(n)
(29)
\begin{bmatrix} b_{\ell-1}^1(n-1) \\ b_{\ell-1}^2(n-1) \end{bmatrix} = \begin{bmatrix} J_{\ell+1}^1 \left[ \mathbf{0} \;\; \mathbf{0} \;\; \hat{c}_{\ell-1}^{1H}(n-1) \right] & \mathbf{0} \\ \mathbf{0} & J_{\ell+1}^2 \left[ \mathbf{0} \;\; \mathbf{0} \;\; \hat{c}_{\ell-1}^{2H}(n-1) \right] \end{bmatrix} \bar{y}_{\ell+1}(n)
(30)

Note that the size of each coefficient matrix increases by two when the order of the prediction filter increases from ℓ−1 to ℓ, and \mathbf{0} is a 1 × (ℓ+1) zero matrix as before. Subsequently, we define the ℓth-order prediction errors in terms of the lattice parameters and the (ℓ−1)th-order prediction errors as follows

\begin{bmatrix} f_\ell^1(n) \\ f_\ell^2(n) \end{bmatrix} = \begin{bmatrix} f_{\ell-1}^1(n) \\ f_{\ell-1}^2(n) \end{bmatrix} + \begin{bmatrix} \bar{\kappa}_{\ell,1,1}^f(n-1) & \bar{\kappa}_{\ell,1,2}^f(n-1) \\ \bar{\kappa}_{\ell,2,1}^f(n-1) & \bar{\kappa}_{\ell,2,2}^f(n-1) \end{bmatrix} \begin{bmatrix} 1 & 0 \\ \hat{\kappa}_\ell^f(n-2) & 1 \end{bmatrix} \begin{bmatrix} b_{\ell-1}^1(n-1) \\ b_{\ell-1}^2(n-1) \end{bmatrix}
(31)
\begin{bmatrix} b_\ell^1(n) \\ b_\ell^2(n) \end{bmatrix} = \begin{bmatrix} b_{\ell-1}^1(n-1) \\ b_{\ell-1}^2(n-1) \end{bmatrix} + \begin{bmatrix} \bar{\kappa}_{\ell,1,1}^b(n-1) & \bar{\kappa}_{\ell,1,2}^b(n-1) \\ \bar{\kappa}_{\ell,2,1}^b(n-1) & \bar{\kappa}_{\ell,2,2}^b(n-1) \end{bmatrix} \begin{bmatrix} 1 & 0 \\ \hat{\kappa}_\ell^b(n-1) & 1 \end{bmatrix} \begin{bmatrix} f_{\ell-1}^1(n) \\ f_{\ell-1}^2(n) \end{bmatrix}
(32)

where the lower triangular and square coefficient matrices are generated in the triangular-shaped self-orthogonalization and square-shaped reference-orthogonalization processors of a two-channel SPMLS, as defined in Equations (17) and (19). Accordingly, we multiply these lower triangular and square coefficient matrices, and make the following definitions

\Gamma_\ell^f(n) = \begin{bmatrix} \Gamma_{\ell,1,1}^f(n) & \Gamma_{\ell,1,2}^f(n) \\ \Gamma_{\ell,2,1}^f(n) & \Gamma_{\ell,2,2}^f(n) \end{bmatrix} = \begin{bmatrix} \bar{\kappa}_{\ell,1,1}^f(n) + \bar{\kappa}_{\ell,1,2}^f(n)\,\hat{\kappa}_\ell^f(n-1) & \bar{\kappa}_{\ell,1,2}^f(n) \\ \bar{\kappa}_{\ell,2,1}^f(n) + \bar{\kappa}_{\ell,2,2}^f(n)\,\hat{\kappa}_\ell^f(n-1) & \bar{\kappa}_{\ell,2,2}^f(n) \end{bmatrix}
(33)
\Gamma_\ell^b(n) = \begin{bmatrix} \Gamma_{\ell,1,1}^b(n) & \Gamma_{\ell,1,2}^b(n) \\ \Gamma_{\ell,2,1}^b(n) & \Gamma_{\ell,2,2}^b(n) \end{bmatrix} = \begin{bmatrix} \bar{\kappa}_{\ell,1,1}^b(n) + \bar{\kappa}_{\ell,1,2}^b(n)\,\hat{\kappa}_\ell^b(n) & \bar{\kappa}_{\ell,1,2}^b(n) \\ \bar{\kappa}_{\ell,2,1}^b(n) + \bar{\kappa}_{\ell,2,2}^b(n)\,\hat{\kappa}_\ell^b(n) & \bar{\kappa}_{\ell,2,2}^b(n) \end{bmatrix}
(34)

in order to obtain compact versions of Equations (31) and (32) as follows

\begin{bmatrix} f_\ell^1(n) \\ f_\ell^2(n) \end{bmatrix} = \begin{bmatrix} f_{\ell-1}^1(n) \\ f_{\ell-1}^2(n) \end{bmatrix} + \Gamma_\ell^f(n-1) \begin{bmatrix} b_{\ell-1}^1(n-1) \\ b_{\ell-1}^2(n-1) \end{bmatrix}
(35)
\begin{bmatrix} b_\ell^1(n) \\ b_\ell^2(n) \end{bmatrix} = \begin{bmatrix} b_{\ell-1}^1(n-1) \\ b_{\ell-1}^2(n-1) \end{bmatrix} + \Gamma_\ell^b(n-1) \begin{bmatrix} f_{\ell-1}^1(n) \\ f_{\ell-1}^2(n) \end{bmatrix}.
(36)

Then, the $\ell$th-order prediction error matrices in Equations (27) and (28), and the $(\ell-1)$th-order prediction error matrices in Equations (29) and (30), are substituted into the $\ell$th-order prediction error expressions in (35) and (36) so as to obtain the following pairs of order updates

$$\hat{a}_{\ell}^{1}(n) = \begin{bmatrix} \hat{a}_{\ell-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\ell,1,1}^{f}(n-1)\, J_{\ell+1}^{1} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\ell-1}^{1}(n-1) \end{bmatrix} + \Gamma_{\ell,1,2}^{f}(n-1)\, J_{\ell+1}^{2} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\ell-1}^{2}(n-1) \end{bmatrix}$$
(37)
$$\hat{a}_{\ell}^{2}(n) = \begin{bmatrix} \hat{a}_{\ell-1}^{2}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\ell,2,1}^{f}(n-1)\, J_{\ell+1}^{1} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\ell-1}^{1}(n-1) \end{bmatrix} + \Gamma_{\ell,2,2}^{f}(n-1)\, J_{\ell+1}^{2} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\ell-1}^{2}(n-1) \end{bmatrix}$$
(38)
$$\hat{c}_{\ell}^{1}(n) = \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\ell-1}^{1}(n-1) \end{bmatrix} + \Gamma_{\ell,1,1}^{b}(n-1)\, J_{\ell+1}^{1} \begin{bmatrix} \hat{a}_{\ell-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\ell,1,2}^{b}(n-1)\, J_{\ell+1}^{2} \begin{bmatrix} \hat{a}_{\ell-1}^{2}(n) \\ \mathbf{0} \end{bmatrix}$$
(39)
$$\hat{c}_{\ell}^{2}(n) = \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\ell-1}^{2}(n-1) \end{bmatrix} + \Gamma_{\ell,2,1}^{b}(n-1)\, J_{\ell+1}^{1} \begin{bmatrix} \hat{a}_{\ell-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\ell,2,2}^{b}(n-1)\, J_{\ell+1}^{2} \begin{bmatrix} \hat{a}_{\ell-1}^{2}(n) \\ \mathbf{0} \end{bmatrix}$$
(40)

and since the size of each coefficient matrix increases by two, $\mathbf{0}$ is a $2 \times 1$ zero matrix. The three-channel section starts with the incorporation of the third channel, $\hat{u}_1(n)$, as the new channel at the $(p_1 - q_1 + 1)$th stage. In order to develop the Levinson–Durbin algorithm for this section, we assume that the three-channel section is a separate filter, and we therefore consider the input signal samples to the three-channel section as follows

$$\bar{y}_{\alpha+1}(n) = \left[\, y_{1}(n) \,\cdots\, y_{1}(n-\alpha) \;\; y_{2}(n) \,\cdots\, y_{2}(n-\alpha) \;\; \hat{u}_{1}(n) \,\cdots\, \hat{u}_{1}(n-\alpha) \,\right]^{T} = \begin{bmatrix} y_{1}(n) \\ \mathbf{y}_{1}(n-1) \\ y_{2}(n) \\ \mathbf{y}_{2}(n-1) \\ \hat{u}_{1}(n) \\ \hat{\mathbf{u}}_{1}(n-1) \end{bmatrix}$$
(41)

where $\mathbf{y}_{1}(n) = [y_{1}(n),\ldots,y_{1}(n-\alpha+1)]^{T}$, $\mathbf{y}_{2}(n) = [y_{2}(n),\ldots,y_{2}(n-\alpha+1)]^{T}$, and $\hat{\mathbf{u}}_{1}(n) = [\hat{u}_{1}(n),\ldots,\hat{u}_{1}(n-\alpha+1)]^{T}$. Correspondingly, the forward and backward prediction error coefficient matrices for the $\alpha$th-order transversal filtering are defined as

$$\hat{a}^{kT}(n) = \left[\, \hat{a}_{0}^{k}(n),\, \hat{a}_{1}^{k}(n),\, \hat{a}_{2}^{k}(n),\, \hat{a}_{3}^{k}(n),\, \ldots,\, \hat{a}_{3\alpha-2}^{k}(n),\, \hat{a}_{3\alpha-1}^{k}(n),\, \hat{a}_{3\alpha}^{k}(n) \,\right]$$
(42)

and

$$\hat{c}^{kT}(n) = \left[\, \hat{c}_{3\alpha}^{k}(n),\, \hat{c}_{3\alpha-1}^{k}(n),\, \hat{c}_{3\alpha-2}^{k}(n),\, \ldots,\, \hat{c}_{3}^{k}(n),\, \hat{c}_{2}^{k}(n),\, \hat{c}_{1}^{k}(n),\, \hat{c}_{0}^{k}(n) \,\right]$$
(43)

where $k = 1,2,3$ due to three-channel processing. Then, the prediction filtering continues with three-channel lattice stages for $(p_1 - q_1) < m \le (p_2 - q_2)$. The Levinson–Durbin recursions for the three-channel section can be developed similarly to the two-channel section by establishing the mathematical link between the transversal and lattice filter coefficients. Since the organization of signal samples in Equation (41) differs from the ordering of signal samples entering the three-channel SPMLSs in (20), we use $(3\alpha+1) \times (3\alpha+1)$ shuffling matrices, $J_{\alpha+1}^{1}$ for the first channel, $J_{\alpha+1}^{2}$ for the second channel, and $J_{\alpha+1}^{3}$ for the third channel, to reorder the elements of the coefficient matrices $\hat{a}_{\alpha}^{1H}(n)$, $\hat{a}_{\alpha}^{2H}(n)$, $\hat{a}_{\alpha}^{3H}(n)$ and $\hat{c}_{\alpha}^{1H}(n)$, $\hat{c}_{\alpha}^{2H}(n)$, $\hat{c}_{\alpha}^{3H}(n)$ according to the sample ordering of the SPMLSs. Similarly to Equations (27) and (28) in the two-channel case, the forward and backward prediction errors in the three-channel case for the output of the general $\alpha$th-order filter with transversal structure are expressed as

$$\begin{bmatrix} f_{\alpha}^{1}(n) \\ f_{\alpha}^{2}(n) \\ f_{\alpha}^{3}(n) \end{bmatrix} = \begin{bmatrix} J_{\alpha+1}^{1}\hat{a}_{\alpha}^{1H}(n) & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & J_{\alpha+1}^{2}\hat{a}_{\alpha}^{2H}(n) & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & J_{\alpha+1}^{3}\hat{a}_{\alpha}^{3H}(n) \end{bmatrix} \bar{y}_{\alpha+1}(n)$$
(44)
$$\begin{bmatrix} b_{\alpha}^{1}(n) \\ b_{\alpha}^{2}(n) \\ b_{\alpha}^{3}(n) \end{bmatrix} = \begin{bmatrix} J_{\alpha+1}^{1}\hat{c}_{\alpha}^{1H}(n) & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & J_{\alpha+1}^{2}\hat{c}_{\alpha}^{2H}(n) & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & J_{\alpha+1}^{3}\hat{c}_{\alpha}^{3H}(n) \end{bmatrix} \bar{y}_{\alpha+1}(n)$$
(45)

where $\mathbf{0}$ is a $1 \times (\alpha+1)$ zero matrix in this case. We can then express the $(\alpha-1)$th-order prediction errors as

$$\begin{bmatrix} f_{\alpha-1}^{1}(n) \\ f_{\alpha-1}^{2}(n) \\ f_{\alpha-1}^{3}(n) \end{bmatrix} = \begin{bmatrix} J_{\alpha+1}^{1}\left[\hat{a}_{\alpha-1}^{1H}(n)\;\;0\;\;0\;\;0\right] & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & J_{\alpha+1}^{2}\left[\hat{a}_{\alpha-1}^{2H}(n)\;\;0\;\;0\;\;0\right] & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & J_{\alpha+1}^{3}\left[\hat{a}_{\alpha-1}^{3H}(n)\;\;0\;\;0\;\;0\right] \end{bmatrix} \bar{y}_{\alpha+1}(n)$$
(46)
$$\begin{bmatrix} b_{\alpha-1}^{1}(n-1) \\ b_{\alpha-1}^{2}(n-1) \\ b_{\alpha-1}^{3}(n-1) \end{bmatrix} = \begin{bmatrix} J_{\alpha+1}^{1}\left[0\;\;0\;\;0\;\;\hat{c}_{\alpha-1}^{1H}(n-1)\right] & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & J_{\alpha+1}^{2}\left[0\;\;0\;\;0\;\;\hat{c}_{\alpha-1}^{2H}(n-1)\right] & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & J_{\alpha+1}^{3}\left[0\;\;0\;\;0\;\;\hat{c}_{\alpha-1}^{3H}(n-1)\right] \end{bmatrix} \bar{y}_{\alpha+1}(n).$$
(47)

Note that the size of each coefficient matrix in the three-channel case increases by three when the order of the prediction filter increases from $\alpha-1$ to $\alpha$. Similarly to Equations (35) and (36) in the two-channel case, the lattice prediction errors for the $\alpha$th three-channel stage can be expressed in compact form with the following equations

$$\begin{bmatrix} f_{\alpha}^{1}(n) \\ f_{\alpha}^{2}(n) \\ f_{\alpha}^{3}(n) \end{bmatrix} = \begin{bmatrix} f_{\alpha-1}^{1}(n) \\ f_{\alpha-1}^{2}(n) \\ f_{\alpha-1}^{3}(n) \end{bmatrix} + \Gamma_{\alpha}^{f}(n-1) \begin{bmatrix} b_{\alpha-1}^{1}(n-1) \\ b_{\alpha-1}^{2}(n-1) \\ b_{\alpha-1}^{3}(n-1) \end{bmatrix}$$
(48)
$$\begin{bmatrix} b_{\alpha}^{1}(n) \\ b_{\alpha}^{2}(n) \\ b_{\alpha}^{3}(n) \end{bmatrix} = \begin{bmatrix} b_{\alpha-1}^{1}(n-1) \\ b_{\alpha-1}^{2}(n-1) \\ b_{\alpha-1}^{3}(n-1) \end{bmatrix} + \Gamma_{\alpha}^{b}(n-1) \begin{bmatrix} f_{\alpha-1}^{1}(n) \\ f_{\alpha-1}^{2}(n) \\ f_{\alpha-1}^{3}(n) \end{bmatrix}$$
(49)

where

$$\Gamma_{\alpha}^{f}(n) = \begin{bmatrix} \Gamma_{\alpha,1,1}^{f}(n) & \Gamma_{\alpha,1,2}^{f}(n) & \Gamma_{\alpha,1,3}^{f}(n) \\ \Gamma_{\alpha,2,1}^{f}(n) & \Gamma_{\alpha,2,2}^{f}(n) & \Gamma_{\alpha,2,3}^{f}(n) \\ \Gamma_{\alpha,3,1}^{f}(n) & \Gamma_{\alpha,3,2}^{f}(n) & \Gamma_{\alpha,3,3}^{f}(n) \end{bmatrix} = \begin{bmatrix} \bar{\kappa}_{\alpha,1,1}^{f}(n) & \bar{\kappa}_{\alpha,1,2}^{f}(n) & \bar{\kappa}_{\alpha,1,3}^{f}(n) \\ \bar{\kappa}_{\alpha,2,1}^{f}(n) & \bar{\kappa}_{\alpha,2,2}^{f}(n) & \bar{\kappa}_{\alpha,2,3}^{f}(n) \\ \bar{\kappa}_{\alpha,3,1}^{f}(n) & \bar{\kappa}_{\alpha,3,2}^{f}(n) & \bar{\kappa}_{\alpha,3,3}^{f}(n) \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ \hat{\kappa}_{\alpha,2,1}^{f}(n-1) & 1 & 0 \\ \hat{\kappa}_{\alpha,3,1}^{f}(n-1) & \hat{\kappa}_{\alpha,3,2}^{f}(n-1) & 1 \end{bmatrix}$$

and

$$\Gamma_{\alpha}^{b}(n) = \begin{bmatrix} \Gamma_{\alpha,1,1}^{b}(n) & \Gamma_{\alpha,1,2}^{b}(n) & \Gamma_{\alpha,1,3}^{b}(n) \\ \Gamma_{\alpha,2,1}^{b}(n) & \Gamma_{\alpha,2,2}^{b}(n) & \Gamma_{\alpha,2,3}^{b}(n) \\ \Gamma_{\alpha,3,1}^{b}(n) & \Gamma_{\alpha,3,2}^{b}(n) & \Gamma_{\alpha,3,3}^{b}(n) \end{bmatrix} = \begin{bmatrix} \bar{\kappa}_{\alpha,1,1}^{b}(n) & \bar{\kappa}_{\alpha,1,2}^{b}(n) & \bar{\kappa}_{\alpha,1,3}^{b}(n) \\ \bar{\kappa}_{\alpha,2,1}^{b}(n) & \bar{\kappa}_{\alpha,2,2}^{b}(n) & \bar{\kappa}_{\alpha,2,3}^{b}(n) \\ \bar{\kappa}_{\alpha,3,1}^{b}(n) & \bar{\kappa}_{\alpha,3,2}^{b}(n) & \bar{\kappa}_{\alpha,3,3}^{b}(n) \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ \hat{\kappa}_{\alpha,2,1}^{b}(n) & 1 & 0 \\ \hat{\kappa}_{\alpha,3,1}^{b}(n) & \hat{\kappa}_{\alpha,3,2}^{b}(n) & 1 \end{bmatrix}.$$

The α th-order prediction error matrices in Equations (44) and (45), and the (α−1)th-order prediction error matrices in Equations (46) and (47) are subsequently substituted in the α th-order prediction error expressions in (48) and (49) so that the following pairs of order updates are produced

$$\hat{a}_{\alpha}^{1}(n) = \begin{bmatrix} \hat{a}_{\alpha-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\alpha,1,1}^{f}(n-1)\, J_{\alpha+1}^{1} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\alpha-1}^{1}(n-1) \end{bmatrix} + \cdots + \Gamma_{\alpha,1,3}^{f}(n-1)\, J_{\alpha+1}^{3} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\alpha-1}^{3}(n-1) \end{bmatrix}$$
(50)
$$\hat{a}_{\alpha}^{2}(n) = \begin{bmatrix} \hat{a}_{\alpha-1}^{2}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\alpha,2,1}^{f}(n-1)\, J_{\alpha+1}^{1} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\alpha-1}^{1}(n-1) \end{bmatrix} + \cdots + \Gamma_{\alpha,2,3}^{f}(n-1)\, J_{\alpha+1}^{3} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\alpha-1}^{3}(n-1) \end{bmatrix}$$
(51)
$$\hat{a}_{\alpha}^{3}(n) = \begin{bmatrix} \hat{a}_{\alpha-1}^{3}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\alpha,3,1}^{f}(n-1)\, J_{\alpha+1}^{1} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\alpha-1}^{1}(n-1) \end{bmatrix} + \cdots + \Gamma_{\alpha,3,3}^{f}(n-1)\, J_{\alpha+1}^{3} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\alpha-1}^{3}(n-1) \end{bmatrix}$$
(52)
$$\hat{c}_{\alpha}^{1}(n) = \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\alpha-1}^{1}(n-1) \end{bmatrix} + \Gamma_{\alpha,1,1}^{b}(n-1)\, J_{\alpha+1}^{1} \begin{bmatrix} \hat{a}_{\alpha-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\alpha,1,3}^{b}(n-1)\, J_{\alpha+1}^{3} \begin{bmatrix} \hat{a}_{\alpha-1}^{3}(n) \\ \mathbf{0} \end{bmatrix}$$
(53)
$$\hat{c}_{\alpha}^{2}(n) = \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\alpha-1}^{2}(n-1) \end{bmatrix} + \Gamma_{\alpha,2,1}^{b}(n-1)\, J_{\alpha+1}^{1} \begin{bmatrix} \hat{a}_{\alpha-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\alpha,2,3}^{b}(n-1)\, J_{\alpha+1}^{3} \begin{bmatrix} \hat{a}_{\alpha-1}^{3}(n) \\ \mathbf{0} \end{bmatrix}$$
(54)
$$\hat{c}_{\alpha}^{3}(n) = \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\alpha-1}^{3}(n-1) \end{bmatrix} + \Gamma_{\alpha,3,1}^{b}(n-1)\, J_{\alpha+1}^{1} \begin{bmatrix} \hat{a}_{\alpha-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\alpha,3,3}^{b}(n-1)\, J_{\alpha+1}^{3} \begin{bmatrix} \hat{a}_{\alpha-1}^{3}(n) \\ \mathbf{0} \end{bmatrix}$$
(55)

where $\mathbf{0}$ is a $3 \times 1$ zero matrix. Finally, the fourth channel, $\hat{u}_2(n)$, which represents the fed-back and delayed signal related to the second subband, is taken into the orthogonalization process at the $(p_2 - q_2 + 1)$th stage, and the prediction filtering continues with four-channel lattice stages for $(p_2 - q_2) < m \le p_2$. In order to develop the Levinson–Durbin recursions for this section, we define the forward and backward prediction error coefficient matrices for the $\nu$th-order transversal filtering as

$$\hat{a}^{kT}(n) = \left[\, \hat{a}_{0}^{k}(n),\, \hat{a}_{1}^{k}(n),\, \hat{a}_{2}^{k}(n),\, \hat{a}_{3}^{k}(n),\, \hat{a}_{4}^{k}(n),\, \ldots,\, \hat{a}_{4\nu-3}^{k}(n),\, \hat{a}_{4\nu-2}^{k}(n),\, \hat{a}_{4\nu-1}^{k}(n),\, \hat{a}_{4\nu}^{k}(n) \,\right]$$
(56)

and

$$\hat{c}^{kT}(n) = \left[\, \hat{c}_{4\nu}^{k}(n),\, \hat{c}_{4\nu-1}^{k}(n),\, \hat{c}_{4\nu-2}^{k}(n),\, \hat{c}_{4\nu-3}^{k}(n),\, \ldots,\, \hat{c}_{4}^{k}(n),\, \hat{c}_{3}^{k}(n),\, \hat{c}_{2}^{k}(n),\, \hat{c}_{1}^{k}(n),\, \hat{c}_{0}^{k}(n) \,\right]$$
(57)

where $k = 1,2,3,4$ due to four-channel lattice processing, and we also visualize, as before, the following organization of the elements of the input vectors $\mathbf{y}_{1}(n) = [y_{1}(n),\ldots,y_{1}(n-\nu+1)]^{T}$, $\mathbf{y}_{2}(n) = [y_{2}(n),\ldots,y_{2}(n-\nu+1)]^{T}$, $\hat{\mathbf{u}}_{1}(n) = [\hat{u}_{1}(n),\ldots,\hat{u}_{1}(n-\nu+1)]^{T}$, and $\hat{\mathbf{u}}_{2}(n) = [\hat{u}_{2}(n),\ldots,\hat{u}_{2}(n-\nu+1)]^{T}$:

$$\bar{y}_{\nu+1}(n) = \left[\, y_{1}(n) \,\cdots\, y_{1}(n-\nu) \;\; y_{2}(n) \,\cdots\, y_{2}(n-\nu) \;\; \hat{u}_{1}(n) \,\cdots\, \hat{u}_{1}(n-\nu) \;\; \hat{u}_{2}(n) \,\cdots\, \hat{u}_{2}(n-\nu) \,\right]^{T} = \begin{bmatrix} y_{1}(n) \\ \mathbf{y}_{1}(n-1) \\ y_{2}(n) \\ \mathbf{y}_{2}(n-1) \\ \hat{u}_{1}(n) \\ \hat{\mathbf{u}}_{1}(n-1) \\ \hat{u}_{2}(n) \\ \hat{\mathbf{u}}_{2}(n-1) \end{bmatrix}.$$
(58)

Similar to the previous two steps, the signal sample ordering in Equation (58) differs from the ordering in Equation (21); hence we use $(4\nu+1) \times (4\nu+1)$ shuffling matrices, $J_{\nu+1}^{1}$ for the first channel, $J_{\nu+1}^{2}$ for the second channel, $J_{\nu+1}^{3}$ for the third channel, and $J_{\nu+1}^{4}$ for the fourth channel, to reorder the elements of the coefficient matrices $\hat{a}_{\nu}^{1H}(n)$, $\hat{a}_{\nu}^{2H}(n)$, $\hat{a}_{\nu}^{3H}(n)$, $\hat{a}_{\nu}^{4H}(n)$ and $\hat{c}_{\nu}^{1H}(n)$, $\hat{c}_{\nu}^{2H}(n)$, $\hat{c}_{\nu}^{3H}(n)$, $\hat{c}_{\nu}^{4H}(n)$ according to the sample ordering of the SPMLSs. Then, the development of the Levinson–Durbin recursions for this section unfolds as in the two- and three-channel sections. First, the $\nu$th and $(\nu-1)$th-order forward and backward prediction errors are stated as the output of a transversal filter. Second, the prediction order update equations for the $(\nu-1)$th and $\nu$th orders are expressed for a four-channel lattice section. Finally, the $\nu$th and $(\nu-1)$th-order forward and backward transversal filter prediction error expressions are substituted into the lattice prediction order update equations, such that the following pairs of order updates are obtained

$$\hat{a}_{\nu}^{1}(n) = \begin{bmatrix} \hat{a}_{\nu-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\nu,1,1}^{f}(n-1)\, J_{\nu+1}^{1} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\nu-1}^{1}(n-1) \end{bmatrix} + \cdots + \Gamma_{\nu,1,4}^{f}(n-1)\, J_{\nu+1}^{4} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\nu-1}^{4}(n-1) \end{bmatrix}$$
(59)
$$\hat{a}_{\nu}^{2}(n) = \begin{bmatrix} \hat{a}_{\nu-1}^{2}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\nu,2,1}^{f}(n-1)\, J_{\nu+1}^{1} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\nu-1}^{1}(n-1) \end{bmatrix} + \cdots + \Gamma_{\nu,2,4}^{f}(n-1)\, J_{\nu+1}^{4} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\nu-1}^{4}(n-1) \end{bmatrix}$$
(60)
$$\hat{a}_{\nu}^{3}(n) = \begin{bmatrix} \hat{a}_{\nu-1}^{3}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\nu,3,1}^{f}(n-1)\, J_{\nu+1}^{1} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\nu-1}^{1}(n-1) \end{bmatrix} + \cdots + \Gamma_{\nu,3,4}^{f}(n-1)\, J_{\nu+1}^{4} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\nu-1}^{4}(n-1) \end{bmatrix}$$
(61)
$$\hat{a}_{\nu}^{4}(n) = \begin{bmatrix} \hat{a}_{\nu-1}^{4}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{\nu,4,1}^{f}(n-1)\, J_{\nu+1}^{1} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\nu-1}^{1}(n-1) \end{bmatrix} + \cdots + \Gamma_{\nu,4,4}^{f}(n-1)\, J_{\nu+1}^{4} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\nu-1}^{4}(n-1) \end{bmatrix}$$
(62)
$$\hat{c}_{\nu}^{1}(n) = \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\nu-1}^{1}(n-1) \end{bmatrix} + \Gamma_{\nu,1,1}^{b}(n-1)\, J_{\nu+1}^{1} \begin{bmatrix} \hat{a}_{\nu-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\nu,1,4}^{b}(n-1)\, J_{\nu+1}^{4} \begin{bmatrix} \hat{a}_{\nu-1}^{4}(n) \\ \mathbf{0} \end{bmatrix}$$
(63)
$$\hat{c}_{\nu}^{2}(n) = \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\nu-1}^{2}(n-1) \end{bmatrix} + \Gamma_{\nu,2,1}^{b}(n-1)\, J_{\nu+1}^{1} \begin{bmatrix} \hat{a}_{\nu-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\nu,2,4}^{b}(n-1)\, J_{\nu+1}^{4} \begin{bmatrix} \hat{a}_{\nu-1}^{4}(n) \\ \mathbf{0} \end{bmatrix}$$
(64)
$$\hat{c}_{\nu}^{3}(n) = \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\nu-1}^{3}(n-1) \end{bmatrix} + \Gamma_{\nu,3,1}^{b}(n-1)\, J_{\nu+1}^{1} \begin{bmatrix} \hat{a}_{\nu-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\nu,3,4}^{b}(n-1)\, J_{\nu+1}^{4} \begin{bmatrix} \hat{a}_{\nu-1}^{4}(n) \\ \mathbf{0} \end{bmatrix}$$
(65)
$$\hat{c}_{\nu}^{4}(n) = \begin{bmatrix} \mathbf{0} \\ \hat{c}_{\nu-1}^{4}(n-1) \end{bmatrix} + \Gamma_{\nu,4,1}^{b}(n-1)\, J_{\nu+1}^{1} \begin{bmatrix} \hat{a}_{\nu-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \cdots + \Gamma_{\nu,4,4}^{b}(n-1)\, J_{\nu+1}^{4} \begin{bmatrix} \hat{a}_{\nu-1}^{4}(n) \\ \mathbf{0} \end{bmatrix}.$$
(66)

Note that $\mathbf{0}$ is a $4 \times 1$ zero matrix, and that the conversion of lattice parameters to process parameters started with two channels but ended with four channels due to sequential processing. The new Levinson–Durbin type conversion algorithm for fullband ARMA spectrum estimation can be developed similarly as a special case of the subband implementation. The lattice prediction filter for fullband ARMA spectrum estimation, which consists of one- and two-channel sections, is shown in Figure 4. The corresponding conversion algorithm can also be realized in two sections, as summarized in Subsection 3.1.

3.1 New Levinson-Durbin type conversion algorithm for two-channel ARMA lattice prediction

Initialization :

$$\hat{a}_{0}^{1}(n) = 1.0, \quad \hat{c}_{0}^{1}(n) = 1.0, \quad \hat{a}_{p-q}^{2}(n) = 1.0, \quad \hat{c}_{0}^{2}(n) = 1.0.$$
(67)

One-channel Section (0 < m ≤ (p − q)) :

$$\hat{a}_{m}^{1}(n) = \begin{bmatrix} \hat{a}_{m-1}^{1}(n) \\ 0 \end{bmatrix} + \Gamma_{m,1}^{f}(n-1) \begin{bmatrix} 0 \\ \hat{c}_{m-1}^{1}(n-1) \end{bmatrix}$$
(68)
$$\hat{c}_{m}^{1}(n) = \begin{bmatrix} 0 \\ \hat{c}_{m-1}^{1}(n-1) \end{bmatrix} + \Gamma_{m,1}^{b}(n-1) \begin{bmatrix} \hat{a}_{m-1}^{1}(n) \\ 0 \end{bmatrix}.$$
(69)

Two-channel Section ((p − q) < m ≤ p) :

$$\hat{a}_{m}^{1}(n) = \begin{bmatrix} \hat{a}_{m-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{m,1,1}^{f}(n-1)\, J_{m+1}^{1} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{m-1}^{1}(n-1) \end{bmatrix} + \Gamma_{m,1,2}^{f}(n-1)\, J_{m+1}^{2} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{m-1}^{2}(n-1) \end{bmatrix}$$
(70)
$$\hat{a}_{m}^{2}(n) = \begin{bmatrix} \hat{a}_{m-1}^{2}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{m,2,1}^{f}(n-1)\, J_{m+1}^{1} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{m-1}^{1}(n-1) \end{bmatrix} + \Gamma_{m,2,2}^{f}(n-1)\, J_{m+1}^{2} \begin{bmatrix} \mathbf{0} \\ \hat{c}_{m-1}^{2}(n-1) \end{bmatrix}$$
(71)
$$\hat{c}_{m}^{1}(n) = \begin{bmatrix} \mathbf{0} \\ \hat{c}_{m-1}^{1}(n-1) \end{bmatrix} + \Gamma_{m,1,1}^{b}(n-1)\, J_{m+1}^{1} \begin{bmatrix} \hat{a}_{m-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{m,1,2}^{b}(n-1)\, J_{m+1}^{2} \begin{bmatrix} \hat{a}_{m-1}^{2}(n) \\ \mathbf{0} \end{bmatrix}$$
(72)
$$\hat{c}_{m}^{2}(n) = \begin{bmatrix} \mathbf{0} \\ \hat{c}_{m-1}^{2}(n-1) \end{bmatrix} + \Gamma_{m,2,1}^{b}(n-1)\, J_{m+1}^{1} \begin{bmatrix} \hat{a}_{m-1}^{1}(n) \\ \mathbf{0} \end{bmatrix} + \Gamma_{m,2,2}^{b}(n-1)\, J_{m+1}^{2} \begin{bmatrix} \hat{a}_{m-1}^{2}(n) \\ \mathbf{0} \end{bmatrix}.$$
(73)
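As a structural illustration of the recursions in this subsection, the one-channel order updates in Equations (68) and (69) can be sketched with placeholder reflection coefficients (in the actual algorithm these come from the SPMLS stages); the sketch shows how each coefficient vector grows by one entry per order while the coefficients fixed to unity in the initialization (67) are preserved:

```python
import numpy as np

def order_update(a_prev, c_prev, gamma_f, gamma_b):
    """One-channel Levinson-Durbin type order update, Eqs. (68)-(69):
    a_m = [a_{m-1}; 0] + Gamma_f * [0; c_{m-1}]
    c_m = [0; c_{m-1}] + Gamma_b * [a_{m-1}; 0]
    """
    a_ext = np.append(a_prev, 0.0)      # pad the forward vector at the bottom
    c_ext = np.insert(c_prev, 0, 0.0)   # pad the backward vector at the top
    return a_ext + gamma_f * c_ext, c_ext + gamma_b * a_ext

# Initialization, Eq. (67): zeroth-order coefficients are unity.
a, c = np.array([1.0]), np.array([1.0])
for m in range(1, 4):
    # placeholder reflection values; real ones come from the lattice stages
    a, c = order_update(a, c, gamma_f=-0.3 / m, gamma_b=-0.25 / m)
```

Because the padded positions multiply zeros, the leading forward coefficient and the trailing backward coefficient remain exactly 1.0 at every order, mirroring the transversal coefficient conventions of Equations (25), (42), and (56).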

4 Spectrum estimation from subbands

Having computed the process parameters from the lattice coefficients, we can now establish the link between multichannel prediction and spectrum estimation. The estimated spectrum in subbands can thus be expressed in terms of the subband prediction filter parameters as

$$S_{y}(w_{k}) = \left| \frac{1 + \bar{a}_{1}^{3} e^{-j w_{k}} + \cdots + \bar{a}_{q_{1}}^{3} e^{-j q_{1} w_{k}}}{1 + \bar{a}_{1}^{1} e^{-j w_{k}} + \cdots + \bar{a}_{p_{1}}^{1} e^{-j p_{1} w_{k}}} \right|^{2} \hat{\sigma}_{x_{1}}^{2} + \left| \frac{1 + \bar{a}_{1}^{4} e^{-j w_{k}} + \cdots + \bar{a}_{q_{2}}^{4} e^{-j q_{2} w_{k}}}{1 + \bar{a}_{1}^{2} e^{-j w_{k}} + \cdots + \bar{a}_{p_{2}}^{2} e^{-j p_{2} w_{k}}} \right|^{2} \hat{\sigma}_{x_{2}}^{2}$$
(74)

where $\hat{\sigma}_{x_k}^{2}$ represents the prediction error variance for the $k$th subband; the coefficients $\bar{a}_{1}^{1},\ldots,\bar{a}_{p_1}^{1}$ and $\bar{a}_{1}^{3},\ldots,\bar{a}_{q_1}^{3}$ are related to the AR and MA parts of the first subband ARMA spectrum, while the coefficients $\bar{a}_{1}^{2},\ldots,\bar{a}_{p_2}^{2}$ and $\bar{a}_{1}^{4},\ldots,\bar{a}_{q_2}^{4}$ are associated with the AR and MA parts of the second subband ARMA spectrum. Specifically, we determine the coefficients related to the first and second subbands in Equation (74) from the elements of the coefficient vectors in Equations (25), (42), and (56) using the coefficient selection rule given in Subsection 4.1. Note that we omit the extra coefficients $\hat{a}_{0}^{1}$ and $\hat{a}_{0}^{2}$ in Equation (42), and $\hat{a}_{0}^{1}$, $\hat{a}_{0}^{2}$, and $\hat{a}_{0}^{3}$ in Equation (56), as they appeared due to the separate filter assumption for the sections of the ARMA lattice prediction filter. We also present the coefficient selection rule for the two-channel fullband case in Subsection 4.2.
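As an illustration only, Equation (74) can be evaluated pointwise once the process parameters have been selected; the function names and argument conventions below are ours, not the paper's (`ar` and `ma` hold the denominator and numerator coefficients of one subband term):

```python
import numpy as np

def arma_spectrum(w, ar, ma, sigma2):
    # One subband term of Eq. (74): |MA(e^{-jw}) / AR(e^{-jw})|^2 * sigma2
    z = np.exp(-1j * w)
    num = 1 + sum(b * z ** (m + 1) for m, b in enumerate(ma))
    den = 1 + sum(a * z ** (m + 1) for m, a in enumerate(ar))
    return float(np.abs(num / den) ** 2) * sigma2

def subband_spectrum(w, ar1, ma1, s1, ar2, ma2, s2):
    # Eq. (74): sum of the two subband ARMA spectra at frequency w
    return arma_spectrum(w, ar1, ma1, s1) + arma_spectrum(w, ar2, ma2, s2)
```

With empty coefficient lists the expression reduces to the driving-noise variances, which is a convenient sanity check.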

4.1 Coefficient selection rule for process parameters in four-channel ARMA lattice prediction

Two-channel section ($1 \le \ell \le p_1 - q_1$):

$$\bar{a}_{\ell}^{1}(n) = \hat{a}_{2\ell-1}^{1}(n)$$
(75)
$$\bar{a}_{\ell}^{2}(n) = \hat{a}_{2\ell}^{2}(n)$$
(76)

Three-channel section ($1 \le \ell \le (p_2 - q_2) - (p_1 - q_1)$):

$$\bar{a}_{\ell + p_1 - q_1}^{1}(n) = \hat{a}_{3\ell-2}^{1}(n)$$
(77)
$$\bar{a}_{\ell + p_1 - q_1}^{2}(n) = \hat{a}_{3\ell-1}^{2}(n)$$
(78)
$$\bar{a}_{\ell + p_1 - q_1}^{3}(n) = \hat{a}_{3\ell}^{3}(n)$$
(79)

Four-channel section ($1 \le \ell \le q_2$):

$$\bar{a}_{\ell + p_2 - q_2}^{1}(n) = \hat{a}_{4\ell-3}^{1}(n)$$
(80)
$$\bar{a}_{\ell + p_2 - q_2}^{2}(n) = \hat{a}_{4\ell-2}^{2}(n)$$
(81)
$$\bar{a}_{\ell + p_2 - q_2}^{3}(n) = \hat{a}_{4\ell-1}^{3}(n)$$
(82)
$$\bar{a}_{\ell + p_2 - q_2}^{4}(n) = \hat{a}_{4\ell}^{4}(n)$$
(83)

4.2 Coefficient selection rule for process parameters in two-channel ARMA lattice prediction

One-channel section ($1 \le \ell \le p - q$):

$$\bar{a}_{\ell}^{1}(n) = \hat{a}_{\ell}^{1}(n)$$
(84)

Two-channel section ($1 \le \ell \le q$):

$$\bar{a}_{\ell + p - q}^{1}(n) = \hat{a}_{2\ell-1}^{1}(n)$$
(85)
$$\bar{a}_{\ell + p - q}^{2}(n) = \hat{a}_{2\ell}^{2}(n)$$
(86)

The connection between subband and fullband frequencies can be explained using an example from [15]. Accordingly, a sinusoid of frequency $w_0$ in the fullband is mapped into the frequency $w_M$ in the subbands with

$$w_{M} = M \cdot w_{0} \ \operatorname{mod}\ (2\pi)$$
(87)

where $M$ is the number of subbands. Conversely, knowing the sinusoid frequency $w_M$ in the subbands, the frequency $w_0$ can be obtained by

$$w_{0} = \frac{2\pi}{M} K + \frac{w_{M}}{M}$$
(88)

where $K$ is the integer part of $M w_{0} / (2\pi)$.
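A minimal numerical check of the frequency mappings in Equations (87) and (88):

```python
import math

def fullband_to_subband(w0, M):
    # Eq. (87): w_M = M * w0 mod 2*pi
    return (M * w0) % (2 * math.pi)

def subband_to_fullband(wM, M, K):
    # Eq. (88): w0 = (2*pi/M)*K + wM/M, K the integer part of M*w0/(2*pi)
    return (2 * math.pi / M) * K + wM / M

M = 4
w0 = 2 * math.pi * 0.3             # fullband frequency (normalized f = 0.3)
wM = fullband_to_subband(w0, M)    # M*w0 = 1.2 * 2*pi, so w_M = 0.2 * 2*pi
K = int((M * w0) // (2 * math.pi)) # K = 1 here
```

Substituting (87) into (88) recovers $w_0$ exactly, since $w_M = M w_0 - 2\pi K$ by definition of the modulo operation.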

5 Computational complexity

The number of operations required for the two-channel ARMA lattice prediction filter for fullband spectrum estimation is calculated as $10p + 16q$, using the number of operations required for one-channel and two-channel sequential processing lattice stages [25], where one operation is defined as one multiplication (or division) plus one addition. The Levinson–Durbin recursion requires $(p-q)(p-q+1)$ operations for the one-channel lattice sections and $4q(q+1)$ operations for the two-channel lattice sections to compute the ARMA process parameters. Therefore, the total number of required operations becomes $p^2 + 11p - 2pq + 5q^2 + 19q$, and this expression can be extended to the total number of operations for an $M$-subband, multichannel implementation as

$$\text{Total complexity} = \sum_{k=1}^{M} \left( p_{k}^{2} + 11 p_{k} - 2 p_{k} q_{k} + 5 q_{k}^{2} + 19 q_{k} \right).$$
(89)
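Equation (89) is straightforward to evaluate; as an illustration, the following compares the fullband ARMA(48,48) configuration with four subbands of ARMA(12,12), the orders used in the simulation section:

```python
def lattice_arma_ops(p, q):
    # Per-subband operation count: p^2 + 11p - 2pq + 5q^2 + 19q
    return p**2 + 11*p - 2*p*q + 5*q**2 + 19*q

def total_complexity(orders):
    # Eq. (89): sum over the M subband filters, orders = [(p_1, q_1), ...]
    return sum(lattice_arma_ops(p, q) for p, q in orders)

fullband = total_complexity([(48, 48)])      # one ARMA(48,48) filter
four_sub = total_complexity([(12, 12)] * 4)  # four ARMA(12,12) filters
```

For $p = q$ the per-subband count reduces to $4p^2 + 30p$, which makes the quadratic saving from splitting the order across subbands explicit.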

Accordingly, we compare the total number of operations for the proposed method with those of adaptive transversal filtering and of the nonparametric methods, namely the Periodogram, multitaper, Capon, APES, and IAA methods.

The computational complexity of a fast RLS transversal ARMA filter can also be expressed in terms of $p_k$ and $q_k$ [50]. When the fast Fourier transform (FFT) is utilized in implementing the Periodogram method, the required number of operations, i.e., the total number of real additions (subtractions) and multiplications (divisions), is $C_{\mathrm{FFT}}(N) = 4N \log_2 N$, where $N$ is the number of signal samples and is a power of 2 [51].

The computational complexity of the multitaper method is approximately $C_{\mathrm{MT}} \approx NW \cdot C_{\mathrm{FFT}}(N)$, where $NW$ and $2W$ are defined as the time-bandwidth product and the resolution bandwidth, respectively [41, 43]. The complexities of brute-force computations of the adaptive Capon and APES spectral estimators are given in [46] as $C_{\mathrm{CAPON}}(N_f, K) \sim N_f^3 + N_f^2 K$ and $C_{\mathrm{APES}}(N_f, L_w, K) \sim N_f^3 + N_f^2 L_w^2 + K(N_f^2 + L_w^2 + N_f L_w)$, respectively, where $K$ represents the size of the uniformly spaced grid of frequencies, $N_f$ is the filter length, and $L_w$ is the sliding window size. It is also shown in [46] that these complexities can be reduced to $C_{\mathrm{CAPON}}(K) \approx 12K$ and $C_{\mathrm{APES}}(K) \approx 42K$ if the computationally efficient versions of the adaptive Capon and APES spectral estimators, classified as FRLS-III type, are utilized. Similarly, the complexity of the brute-force version of the IAA spectral estimator is provided in [47] as $C_{\mathrm{IAA}} = m_c \left[ 2K N_o^2 + K N_o + N_o^3 \right]$, where $m_c$ is the number of IAA iterations necessary for convergence, and $K$ and $N_o$ are the frequency grid size and the number of observed data samples. The computationally efficient version of the IAA method, named fast segmented IAA-II (FSIAA-II), has complexity $C_{\mathrm{FSIAA\text{-}II}} = m_c \left[ N_s^2 + (5 + 7L_s)\, C_{\mathrm{FFT}}(2N_s) + (L_s + 2)\, C_{\mathrm{FFT}}(K) \right]$ [47], where $C_{\mathrm{FFT}}(2N_s)$ and $C_{\mathrm{FFT}}(K)$ denote the costs of FFTs of lengths $2N_s$ and $K$, respectively, $N_s$ is the nonoverlapping segment length ($N_s = N_o / L_s$), $L_s$ is the number of segments, and $K$ is the frequency grid size.
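For the comparisons that follow, the quoted complexity formulas can be collected into small helpers (a direct transcription of the expressions above; only the computationally efficient Capon and APES counts are included):

```python
import math

def C_fft(N):
    # Periodogram via radix-2 FFT: 4*N*log2(N) real operations [51]
    return 4 * N * math.log2(N)

def C_multitaper(N, NW=2):
    # C_MT ~ NW * C_FFT(N) [41, 43]
    return NW * C_fft(N)

def C_capon_fast(K):
    return 12 * K  # FRLS-III type fast Capon [46]

def C_apes_fast(K):
    return 42 * K  # FRLS-III type fast APES [46]

def C_fsiaa2(mc, Ns, Ls, K):
    # Fast segmented IAA-II [47]
    return mc * (Ns**2 + (5 + 7 * Ls) * C_fft(2 * Ns) + (Ls + 2) * C_fft(K))
```

With the settings used later ($N = 128$, $K = 4096$, $N_s = 32$, $L_s = 4$, $m_c = 10$), these helpers reproduce the operating points plotted in Figures 8, 9, and 10.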

In order to provide a comparative analysis, we plotted the complexities of the ARMA lattice and transversal filters in the fullband, two-subband, and four-subband cases against the complexity of the Periodogram method in Figures 5, 6, and 7, respectively. When generating a specific complexity curve for the proposed and transversal filtering methods, we assumed that the same spectrum estimation method and configuration is implemented in all subbands. Accordingly, we considered four different data lengths for the Periodogram method while allowing the filter order p to change up to 256 for the ARMA(p,p) and AR(p) filters.

Figure 5

Computational complexity curves for comparative analysis of fullband lattice and transversal prediction filtering versus the Periodogram methods.

Figure 6

Computational complexity curves for comparative analysis of two-subband lattice and transversal prediction filtering versus the Periodogram methods.

Figure 7

Computational complexity curves for comparative analysis of four-subband lattice and transversal prediction filtering versus the Periodogram methods.

Since a transversal implementation does not require a Levinson–Durbin type conversion algorithm, the fast RLS transversal ARMA filtering method in subbands is computationally advantageous compared to the proposed lattice method. The computational complexity of the proposed lattice method for ARMA(p,p) spectrum estimation is lower than that of the Periodogram method (N = 128) as long as the filter order p is smaller than 27 in the fullband, 19 in two subbands, and 13 in four subbands. Similarly, the complexity of AR(p) lattice spectrum estimation is lower than that of the Periodogram method (N = 128) as long as the filter order p is smaller than 52 in the fullband, 36 in two subbands, and 23 in four subbands. If longer data lengths are preferred, the low-complexity threshold value of the filter order for the ARMA(p,p) and AR(p) implementations moves to higher values, as can be observed in Figures 5, 6, and 7. We would also like to point out that it is possible to generate a family of complexity curves for each case by assuming different configurations for the subband prediction filters.

We compare the computational complexities of the Capon, APES, IAA, and multitaper methods with that of the proposed lattice ARMA(p,p) method for a data length of N = 128 in Figures 8, 9, and 10. When computing the complexity of the multitaper method, we assumed a time-bandwidth product of NW = 2. Four nonoverlapping segments (N_s = 32, L_s = 4) and ten iterations for convergence (m_c = 10) were considered for FSIAA-II. In addition, the frequency grid size for the FRLS-III type Capon and APES methods, and for the FSIAA-II method, was K = 4096.

Figure 8

Computational complexity curves for comparative analysis of fullband lattice prediction filtering versus the multitaper, Capon, APES, and IAA methods.

Figure 9

Computational complexity curves for comparative analysis of two-subband lattice prediction filtering versus the multitaper, Capon, APES, and IAA methods.

Figure 10

Computational complexity curves for comparative analysis of four-subband lattice prediction filtering versus the multitaper, Capon, APES, and IAA methods.

Accordingly, under the assumed conditions, the computational complexity of the proposed lattice method for ARMA(p,p) spectrum estimation is lower than that of the multitaper method as long as the filter order p is smaller than 38 in the fullband, 23 in two subbands, and 19 in four subbands. Compared with the Capon method, the complexity of the proposed lattice ARMA(p,p) method is lower as long as the filter order p is smaller than 108 in the fullband, 74 in two subbands, and 55 in four subbands. When a similar comparison is carried out for the APES method, the computational complexity of the proposed ARMA(p,p) method is lower as long as the filter order p is less than 204 in the fullband, 142 in two subbands, and 100 in four subbands. When the IAA method is considered, its complexity is larger than that of the proposed ARMA(p,p) method as long as the filter order p is less than 3400 in the fullband, 2450 in two subbands, and 1750 in four subbands.

6 Experimental results

We focused on ARMA(p,p) spectral estimation in the simulation experiments due to its relevance in subband implementations. The objectives of the simulation experiments are to visually and statistically demonstrate that the proposed method has the frequency spacing improvement, whitening, and SNR improvement properties, and to compare its performance with those of the nonparametric methods, viz., the Periodogram, multitaper, Capon, APES, and IAA methods. To this end, we present and compare the ARMA(p,p) lattice subband spectrum estimation results with the ARMA(p,p) lattice fullband results, and then compare the lattice four-subband results with the nonparametric results.

The forgetting factor was λ = 0.995 in stationary cases, while a smaller value, λ = 0.98, was used in nonstationary cases so as to better track the time-varying signal. We repeated the simulation experiments one hundred times, and the results were ensemble-averaged. For the subband decomposition of the input signals, we used the Kaiser window based approach of [52] for designing cosine modulated filter banks. In the two- and four-subband decompositions, the filter lengths are 30 and 60, respectively, and their frequency responses are presented in Figures 11 and 12.

Figure 11

Frequency responses of two-subband decomposition filters.

Figure 12

Frequency responses of four-subband decomposition filters.

We used a data length of N = 128, and the data was zero-padded to 32 times the data length in stationary signal simulations involving the proposed lattice method, the multitaper method, and the Periodogram method. In nonstationary cases, no zero-padding was utilized with any of the methods. In the comparisons with the Capon and APES methods, we used the batch processing versions of these methods in [41] for stationary signal experiments, and the adaptive brute-force versions in [46] for nonstationary signal experiments. In stationary signal cases involving the IAA method, we utilized the batch processing brute-force version in [47], and in nonstationary signal cases, we made use of the adaptive brute-force version in [53]. The filter lengths for the Capon and APES methods were N_f = 63, and we used data observation lengths of N_o = N/2 and N_o = N for the IAA method in the visual results and statistical analysis subsections, respectively. The frequency grid sizes were chosen as K = 4096 for the Capon, APES, and IAA methods. The sliding window size for the adaptive version of the APES method was L_w = 50. The number of IAA iterations for convergence was m_c = 10. We used a time-bandwidth product of NW = 2 for the multitaper method.

In order to determine the prediction filter order in the simulations, we mainly relied on our knowledge of the input process order, and started with this order. Since our optimization criterion is the minimization of the forward prediction error, we increased the order of the prediction filter and monitored the output forward prediction error; when the decrease in the output prediction error became negligible as the filter order increased, we stopped increasing the order. Since a priori knowledge of the process order is not available in a practical situation, a model order estimator such as ARMAsel [54] can be used for this purpose. As ARMAsel itself functions based on prediction error evaluations, further prediction error evaluations might not even be needed. In addition to these considerations, we kept the total complexity the same in all configurations in order to provide a fair performance comparison: the order of the fullband predictor filter was 48, while the orders of the predictor filters in the two- and four-subband implementations were 24 and 12, respectively, in all simulations.
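The order determination procedure just described can be sketched as a simple stopping rule; `prediction_error(p)` is a hypothetical hook that would run the lattice filter at order p and return the output forward prediction error power, and the 1% tolerance is illustrative:

```python
def select_order(prediction_error, p_start, p_max, tol=0.01):
    """Increase the filter order until the relative decrease in the output
    forward prediction error power falls below `tol`.

    `prediction_error(p)` is a hypothetical hook standing in for a run of
    the lattice prediction filter at order p.
    """
    prev = prediction_error(p_start)
    for p in range(p_start + 1, p_max + 1):
        cur = prediction_error(p)
        if (prev - cur) / prev < tol:
            return p - 1  # the last order that still gave a clear improvement
        prev = cur
    return p_max
```

A model order estimator such as ARMAsel would replace this manual loop in a blind setting, as noted in the text.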

In the simulations where the input signal is stationary, we considered two closely spaced cisoids embedded in both white and colored noise. The time series for the cisoids in white noise is given by

$$y(n) = A_{1} e^{j(2\pi f_{1} n + \phi_{1})} + A_{2} e^{j(2\pi f_{2} n + \phi_{2})} + u(n), \quad 1 \le n \le N,$$
(90)

where $u(n)$ is white Gaussian complex noise with uncorrelated real and imaginary parts, each with variance $\sigma_u^2$ and zero mean, such that the SNR for the $k$th cisoid is defined as

$$\mathrm{SNR}_{k} = 10 \log_{10} \frac{A_{k}^{2}}{2 \sigma_{u}^{2}}.$$
(91)
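As a sketch of the stationary test signal of Equations (90) and (91), with the noise variance solved from the target per-cisoid SNR (the amplitudes, seed, and zero phases are our choices, not prescribed by the text):

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed
N, f1, f2 = 128, 0.05, 0.0656
A1 = A2 = 1.0
snr_db = -3.0                   # target per-cisoid SNR

# Invert Eq. (91): SNR_k = 10*log10(A_k^2 / (2*sigma_u^2))
sigma_u2 = A1**2 / (2 * 10**(snr_db / 10))

n = np.arange(1, N + 1)
u = np.sqrt(sigma_u2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = A1 * np.exp(1j * 2 * np.pi * f1 * n) + A2 * np.exp(1j * 2 * np.pi * f2 * n) + u  # Eq. (90)
```

Note that the real and imaginary noise parts each carry variance $\sigma_u^2$, so the total noise power in the denominator of (91) is $2\sigma_u^2$.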

The time series for two cisoids embedded in colored noise is similarly generated by passing white Gaussian complex noise $u(n)$, with zero mean and unit variance, through the second-order AR coloring filter given by

$$v(n) = 0.1\, v(n-1) - 0.3\, v(n-2) + u(n), \quad 1 \le n \le N,$$
(92)

and the local SNR in this case for the k th cisoid is defined as

$$\mathrm{SNR}_{k} = 10 \log_{10} \frac{A_{k}^{2}}{V(e^{j w_{k}})}$$
(93)

where $V(e^{j w_k})$ denotes the spectral density function of the AR process at the frequency of the $k$th cisoid. While the cisoidal frequencies were $f_1 = 0.05$ and $f_2 = 0.0656$ in the white noise case, they were $f_1 = 0.2106$ and $f_2 = 0.2262$ in the colored noise case. The initial phases $\phi_1$ and $\phi_2$ were zero in both cases.
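A small sketch of the coloring filter and the local SNR of Equation (93), assuming the recursion $v(n) = 0.1\,v(n-1) - 0.3\,v(n-2) + u(n)$ and unit driving-noise variance as in the text (the sign of the second coefficient is our reading of Equation (92), whose operator was lost in extraction):

```python
import numpy as np

A1_COEF, A2_COEF = 0.1, -0.3  # AR(2) coefficients of Eq. (92)

def ar2_colored_noise(u):
    # v(n) = 0.1*v(n-1) - 0.3*v(n-2) + u(n), zero initial conditions
    v = np.zeros(len(u), dtype=complex)
    for n in range(len(u)):
        v[n] = u[n]
        if n >= 1:
            v[n] += A1_COEF * v[n - 1]
        if n >= 2:
            v[n] += A2_COEF * v[n - 2]
    return v

def ar2_psd(f, sigma2=1.0):
    # Spectral density V(e^{jw}) of the AR(2) process at normalized frequency f
    z = np.exp(-2j * np.pi * f)
    return sigma2 / abs(1 - A1_COEF * z - A2_COEF * z**2) ** 2

def local_snr_db(A, f):
    # Eq. (93): SNR_k = 10*log10(A_k^2 / V(e^{j w_k}))
    return 10 * np.log10(A**2 / ar2_psd(f))
```

Evaluating `ar2_psd` at the cisoid frequencies gives the denominator of (93) directly, without simulating the noise.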

In the nonstationary input signal experiments, we again considered two closely spaced cisoids embedded in white noise, but in this case the frequency of each individual cisoid in (90) was varied. In the visual results subsection, the variation is deterministic, according to the rule

$$f_{k}(n) = \hat{f}_{k} - \gamma_{k}\,(n-1), \quad k = 1, 2, \quad n = 1, \ldots, N$$
(94)

where $\gamma_1 = \gamma_2 = 0.0001$, $\hat{f}_1 = 0.12$, and $\hat{f}_2 = 0.1044$, so that the difference between the frequencies of the cisoids is kept constant (the same as in the stationary signal case) during the frequency sweep interval. In the statistical analysis subsection, the variation follows the random walk model of [9],

$$f_{k}(n) = f_{k}(n-1) + \gamma_{k}\, \varphi_{k}(n), \quad k = 1, 2, \quad n = 2, \ldots, N$$
(95)

where $\gamma_1 = \gamma_2 = 0.001$, and $\varphi_k(n)$ is zero-mean white Gaussian noise of unit variance, independent of $\varphi_j(n)$, $j \ne k$, and independent of the phases $\phi_k$ and the measurement noise $u(n)$ for both cisoids ($k = 1, 2$). The initial frequencies of the cisoids in (95) were $f_1(1) = 0.05$ and $f_2(1) = 0.0656$, and the initial phases $\phi_k$ in both nonstationary signal cases were randomly chosen in the interval $[0, 2\pi]$. The parameters $\gamma_k$ and the initial frequencies in (94) and (95) were chosen so as to keep the frequency variation inside the bandwidths of the first two- and four-subband decomposition filters during the frequency variation interval.
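The two frequency trajectories of Equations (94) and (95) can be generated as follows (the random seed is an arbitrary choice):

```python
import numpy as np

N = 128

# Deterministic sweep, Eq. (94): f_k(n) = f_hat_k - gamma_k * (n - 1)
gamma_det = 0.0001
f_hat = np.array([0.12, 0.1044])
n = np.arange(1, N + 1)
f_det = f_hat[:, None] - gamma_det * (n - 1)  # shape (2, N); spacing stays 0.0156

# Random walk, Eq. (95): f_k(n) = f_k(n-1) + gamma_k * phi_k(n)
rng = np.random.default_rng(1)                # arbitrary seed
gamma_rw = 0.001
f_rw = np.empty((2, N))
f_rw[:, 0] = [0.05, 0.0656]                   # initial frequencies, as in the text
for i in range(1, N):
    f_rw[:, i] = f_rw[:, i - 1] + gamma_rw * rng.standard_normal(2)
```

The deterministic sweep keeps the 0.0156 spacing of the stationary case at every instant, which is the property the text relies on when comparing resolution across experiments.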

6.1 Visual results

6.1.1 Stationary signal case

We first consider the frequency spacing improvement simulations, in which the input time series were generated using Equation (90) with no noise. The individual SNRs are infinite, so that the only improvement can be due to frequency spacing. In Figure 13, we show that the two-subband lattice implementation improves the frequency spacing between closely spaced cisoids, and that this improvement is furthered by the four-subband lattice when compared to the fullband lattice implementation. We compare the performance of the proposed method implemented in four subbands with those of the Periodogram and multitaper methods in Figure 14, and with those of the Capon, APES, and IAA methods in Figure 15.

Figure 13

Frequency spacing improvement performance of the proposed ARMA lattice method.

Figure 14

Comparison of frequency spacing improvement performance of the proposed method in four-subbands with the Periodogram and multitaper methods.

Figure 15

Comparison of frequency spacing improvement performance of the proposed method in four-subbands with the Capon, APES, and IAA methods.

In the SNR improvement simulations, we again used the input time series generated by (90), now with relatively low SNRs, SNR1 = SNR2 = −3 dB, so that the SNR improvement can be better displayed. Figure 16 illustrates that the two-subband lattice implementation is able to resolve the cisoids better than the fullband lattice implementation, and that the four-subband implementation resolves them better still, due to the SNR improvement effect of subband filtering. We show the performance of the proposed method in four subbands together with the Periodogram and multitaper methods in Figure 17, and then with the Capon, APES, and IAA methods in Figure 18.

Figure 16

SNR improvement performance of the proposed ARMA lattice method.

Figure 17

Comparison of SNR improvement performance of the proposed method in four-subbands with the Periodogram and multitaper methods.

Figure 18

Comparison of SNR improvement performance of the proposed method in four-subbands with the Capon, APES, and IAA methods.

The next property to be considered is the whitening property of subband filtering. In this experiment, we once again generated the input time series using Equation (90); however, the white noise u(n) was replaced with the colored noise v(n) of Equation (92). The local SNRs for the cisoids are SNR1 = SNR2 = 0 dB in this case. In Figure 19, we compare the estimated spectra based on fullband, two-subband, and four-subband ARMA lattice filtering. Evidently, the cisoids are better resolved in the two- and four-subband spectra than in the fullband spectrum, and they are more clearly resolvable in four subbands than in two subbands. Figure 20 compares the proposed method in four subbands with the Periodogram and multitaper methods, and Figure 21 compares it with the Capon, APES, and IAA methods.

Figure 19

Whitening effect performance of the proposed ARMA lattice method.

Figure 20

Comparison of whitening effect performance of the proposed method in four-subbands with the Periodogram and multitaper methods.

Figure 21

Comparison of whitening effect performance of the proposed method in four-subbands with the Capon, APES, and IAA methods.

In Figures 14, 17, and 20, we see that the nulls between the cisoids are deeper in the Periodogram and multitaper spectra than in those of the four-subband lattice method; the proposed lattice method in four subbands results in smoother spectra with narrow mainlobes and low sidelobes, while the Periodogram and multitaper spectra have broader mainlobes and higher sidelobes. We then compare the lattice four-subband spectra with those of the Capon, APES, and IAA methods in Figures 15, 18, and 21. It can be seen in these figures that the Capon spectra have the highest null between the cisoids among all of the methods, but they are smooth, with relatively broader mainlobes and higher sidelobes than the lattice four-subband spectra. The APES spectra have deeper nulls between the cisoids than the lattice four-subband spectra, and they are also smooth, with broader mainlobes and higher sidelobes than the lattice four-subband spectra. The IAA spectra have the lowest nulls, but their mainlobes are broader and their sidelobes are higher in the SNR improvement and whitening experiments compared with those of the proposed method using four subbands.

6.1.2 Nonstationary signal case

We again consider the input time series generated using Equation (90), but with the cisoidal frequencies varied deterministically according to Equation (94); SNR1 = SNR2 = 15 dB was assumed. In Figure 22, we plot one of the frequencies estimated by the proposed lattice method in fullband, two subbands, and four subbands against the true frequency trajectory as a function of the number of iterations. The lattice four-subband estimates are closer to the true frequency values and also converge earlier than the lattice two-subband and fullband estimates. Figures 23, 24, and 25 compare the lattice four-subband frequency estimates with the Capon, APES, and IAA estimates, respectively. Note that the Capon method converges earlier than the lattice four-subband method, and the convergence behavior of the APES method is similar to that of the proposed method using four subbands; however, its values at the end of the convergence period are farther from the true frequency values than those of the lattice method. Figure 25 shows that the IAA method converges earlier than the proposed method using four subbands, after a rather fluctuating convergence transient.
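The flavor of such an experiment can be sketched with a toy estimator (a simple exponentially weighted lag-1 phase estimator, not the proposed lattice method; the drift rate, noise level, and forgetting factor below are arbitrary choices for illustration):

```python
import cmath
import math
import random

def track_frequency(x, lam=0.9):
    """Track the instantaneous normalized frequency of a cisoid from the
    phase of an exponentially weighted lag-1 sample autocorrelation."""
    r, est = 0.0 + 0.0j, []
    for n in range(1, len(x)):
        r = lam * r + x[n] * x[n - 1].conjugate()
        est.append(cmath.phase(r) / (2 * math.pi))
    return est

rng = random.Random(0)
N = 400
# Cisoid whose normalized frequency drifts linearly from 0.10 to 0.20
freqs = [0.10 + 0.10 * n / N for n in range(N)]
phase, x = 0.0, []
for n in range(N):
    phase += 2 * math.pi * freqs[n]
    noise = complex(rng.gauss(0, 0.05), rng.gauss(0, 0.05))
    x.append(cmath.exp(1j * phase) + noise)

est = track_frequency(x)
# After the initial transient, the estimate follows the true trajectory
# with a small lag set by the forgetting factor
```

A smaller forgetting factor tracks faster drifts at the price of a noisier estimate, which mirrors the bandwidth/variance trade-off discussed for the notch-filter trackers of [8, 9].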

Figure 22

Estimation of time-varying frequencies of two cisoids embedded in white noise with the proposed ARMA lattice method.

Figure 23

Comparison of the proposed method in four-subbands with the Capon method by means of estimated time-varying frequencies of two cisoids embedded in white noise.

Figure 24

Comparison of the proposed method in four-subbands with the APES method by means of estimated time-varying frequencies of two cisoids embedded in white noise.

Figure 25

Comparison of the proposed method in four-subbands with the IAA method by means of estimated time-varying frequencies of two cisoids embedded in white noise.

6.2 Statistical analysis

We also divide the statistical analysis into stationary and nonstationary signal cases. In the stationary signal case, we investigate the parameter estimation performance of the proposed method through variance and mean bias simulations for SNRs ranging from −20 to 20 dB in 1 dB increments. The number of cisoids is assumed known in these simulations, and the variance performance of the proposed method is compared against the Cramér–Rao bound (CRB) in [55]. In the nonstationary signal case, we report the parameter estimation performance in terms of variance and mean bias versus the number of iterations, again compared against the CRB as a function of the number of iterations.
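For reference, the CRB on the frequency of a single cisoid in white noise has the closed form below (the Rife–Boorstyn expression, consistent with the treatment in [55]); the two-cisoid bound used in the paper's comparisons is more involved, so this is only the single-cisoid special case:

```python
import math

def crb_frequency(snr_db, N):
    """Cramer-Rao bound on the variance of the normalized-frequency estimate
    of a single cisoid A*exp(j(2*pi*f*n + phi)) in complex white noise:

        var(f_hat) >= 6 / ((2*pi)^2 * eta * N * (N^2 - 1)),  eta = A^2/sigma^2
    """
    eta = 10.0 ** (snr_db / 10.0)
    return 6.0 / ((2 * math.pi) ** 2 * eta * N * (N ** 2 - 1))

# Sweep the same SNR range as the simulations (-20 to 20 dB, 1 dB steps)
bounds = [crb_frequency(snr, 128) for snr in range(-20, 21)]
```

The bound falls by a factor of ten per 10 dB of SNR and roughly as N^3 with the data length, which is why the fullband curves (longer records per band) saturate later than the subband curves.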

6.2.1 Stationary signal case

In the first experiment, the input time series were generated by Equation (90) for two closely spaced cisoids in white noise. We present the variance curves of the proposed lattice method in Figure 26, and then compare the lattice four-subband variance curve with the variance curves of the Periodogram and multitaper methods in Figure 27, and of the Capon, APES, and IAA methods in Figure 28. It can be observed in Figure 26 that the improvement from subband filtering is more pronounced in the low-SNR region (−20 dB ≤ SNR ≤ −10 dB), and the threshold SNR, below which the estimation accuracy degrades rapidly, is about −9 dB. In the higher-SNR region (−10 dB < SNR < 10 dB), the subband estimation variance curves cross the CRB while saturating earlier (at approximately SNR = 7 dB and SNR = 9 dB in two and four subbands, respectively) than the fullband variance curve, due to the shorter data lengths in subbands and the quantization effects induced by the sampling of the frequency variable. The saturation point can be moved to higher SNRs, and the estimation accuracy accordingly improved, by using longer data records and more zero-padding at the cost of more computation time. Comparing the lattice four-subband variance curve with those of the Periodogram and multitaper methods in Figure 27, the lattice four-subband variance values are very close to those of the Periodogram and multitaper methods across the entire SNR range. In Figure 28, the variance curve of the proposed method using four subbands is plotted against those of the Capon, APES, and IAA methods; it is higher than those of the Capon, APES, and IAA in the medium-SNR region (−10 dB < SNR < 5 dB), and close to them over the rest of the SNR range.
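The quantization effect of sampling the frequency variable, and its reduction by zero-padding, can be seen in a minimal peak-picking experiment (a hypothetical single noiseless cisoid; the frequency value and padding factor are illustrative):

```python
import cmath
import math

def dft_peak_freq(x, nfft):
    """Normalized frequency of the periodogram peak on an nfft-point grid
    (naive DFT; zero-padding only refines the search grid, it adds no data)."""
    best_k, best_p = 0, -1.0
    for k in range(nfft):
        s = sum(x[n] * cmath.exp(-2j * math.pi * k * n / nfft)
                for n in range(len(x)))
        if abs(s) > best_p:
            best_k, best_p = k, abs(s)
    return best_k / nfft

N, f0 = 64, 0.1234
x = [cmath.exp(2j * math.pi * f0 * n) for n in range(N)]
coarse = dft_peak_freq(x, N)      # grid spacing 1/64  ~ 0.0156
fine = dft_peak_freq(x, 8 * N)    # grid spacing 1/512 ~ 0.0020
```

The estimate on the unpadded grid is quantized to the nearest bin (0.125 here), while the zero-padded grid cuts the quantization error by the padding factor; this is the mechanism behind the variance saturation floor described above.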

Figure 26

Variance versus SNR plots for estimating the frequencies of two cisoids embedded in white noise with the proposed ARMA lattice method.

Figure 27

Comparison of the proposed method with the Periodogram and multitaper methods by means of variance versus SNR plots for estimating the frequencies of two cisoids embedded in white noise.

Figure 28

Comparison of the proposed method with the Capon, APES, and IAA methods by means of variance versus SNR plots for estimating the frequencies of two cisoids embedded in white noise.

In Figure 29, we present mean bias curves for the fullband as well as subband implementations for two cisoids in white noise. Similar to the variance curves, in the low-SNR region the mean bias values of the proposed method in fullband are higher than those in two subbands, which in turn are higher than those in four subbands; in the higher-SNR region (SNR > −9 dB), the fullband, two-subband, and four-subband implementations display similar mean bias behaviors. In Figure 30, it can be seen that the Periodogram and multitaper methods have higher mean bias values than the lattice four-subband method over the full SNR range. In Figure 31, the lattice method in four subbands performs better than the Capon, APES, and IAA methods in the region −20 dB ≤ SNR ≤ 5 dB, while over the rest of the SNR range it remains better than the Capon and APES methods but slightly worse than the IAA method.

Figure 29

Mean bias versus SNR plots for estimating the frequencies of two cisoids embedded in white noise with the proposed ARMA lattice method.

Figure 30

Comparison of the proposed method with the Periodogram and multitaper methods by means of mean bias versus SNR plots for estimating the frequencies of two cisoids embedded in white noise.

Figure 31

Comparison of the proposed method with the Capon, APES, and IAA methods by means of mean bias versus SNR plots for estimating the frequencies of two cisoids embedded in white noise.

Figure 32 shows the variance curves of the proposed lattice method when the input time series were generated by Equation (90) with the white noise u(n) replaced by the colored noise v(n) in Equation (92). It can be observed in Figure 32 that the threshold local SNR moves to approximately SNR = 2 dB in fullband, SNR = −7 dB in two subbands, and SNR = −9 dB in four subbands. Again, the subband filtering effects are more pronounced in the low-SNR region, while they moderately improve the estimation performance in the higher-SNR region. Note that only the four-subband variance curve crosses the CRB in the colored additive noise simulation. We compare the variance results of the proposed lattice method using four subbands with those of the Periodogram and multitaper methods in Figure 33, and the Capon, APES, and IAA methods in Figure 34. In both figures, the proposed lattice method in four subbands performs much better than the nonparametric methods over the complete SNR range, owing to the whitening property of the subband implementations.

Figure 32

Variance versus SNR plots for estimating the frequencies of two cisoids embedded in colored noise with the proposed ARMA lattice method.

Figure 33

Comparison of the proposed method with the Periodogram and multitaper methods by means of variance versus SNR plots for estimating the frequencies of two cisoids embedded in colored noise.

Figure 34

Comparison of the proposed method with the Capon, APES, and IAA methods by means of variance versus SNR plots for estimating the frequencies of two cisoids embedded in colored noise.

In Figure 35, we present the mean bias curves, which demonstrate the relatively better performance of the subband implementations, especially in the low-SNR region. In Figures 36 and 37, we compare the mean bias curves of the nonparametric methods with that of the proposed method using four subbands. In both figures, there is a performance difference in favor of the four-subband lattice implementation in the region −20 dB ≤ SNR ≤ −5 dB; over the rest of the SNR range, however, the proposed lattice method in four subbands performs worse than the Periodogram, multitaper, Capon, APES, and IAA methods.

Figure 35

Mean bias versus SNR plots for estimating the frequencies of two cisoids embedded in colored noise with the proposed ARMA lattice method.

Figure 36

Comparison of the proposed method with the Periodogram and multitaper methods by means of mean bias versus SNR plots for estimating the frequencies of two cisoids embedded in colored noise.

Figure 37

Comparison of the proposed method with the Capon, APES, and IAA methods by means of mean bias versus SNR plots for estimating the frequencies of two cisoids embedded in colored noise.

6.2.2 Nonstationary signal case

We present the variance and mean bias of frequency estimation versus number of iterations for the proposed lattice method when estimating the randomly time-varying frequencies of two cisoids embedded in white noise (SNR1 = SNR2 = −15 dB) in Figures 38 and 39, respectively. The time series in this experiment were generated using Equation (90), with the frequencies replaced by the randomly time-varying frequencies in Equation (95). The plots in Figures 38 and 39 are for one of the frequencies, to improve readability, and they show that the tracking performance improves as the number of subbands increases. We think that the number of subbands in the proposed method plays a role similar to that of the pole contraction factor, ρ, in [8, 9], and frequency tracking with the proposed lattice method in the fullband, two-subband, and four-subband cases can be likened to increasing the pole contraction factor in notch filtering. An approximate correspondence between the number of subbands and the pole contraction factor can be established through the normalized bandwidth of complex notches, given as BW = (1 − ρ) in [8, 9]. Accordingly, a typical pole contraction factor of ρ = 0.95, and therefore a normalized notch bandwidth of BW = 0.05, corresponds to using ten subbands in our method. In Figures 40 and 41, we compare the variance and mean bias versus number of iterations curves of the proposed lattice method in four subbands with those of the adaptive nonparametric methods (Capon, APES, and IAA), and demonstrate that the variance and mean bias performances of the proposed lattice four-subband method are better than those of the adaptive Capon, APES, and IAA methods; the better performance of the proposed method in such a low-SNR condition is attributed mainly to the SNR improvement effect of subband filtering.
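This correspondence can be sketched numerically. The mapping below assumes the normalized band [0, 0.5] is divided into equal-width subbands, so that the number of subbands is M ≈ 0.5/BW; the constant 0.5 is our reading of the ten-subband example for ρ = 0.95, not a formula from [8, 9]:

```python
def equivalent_subbands(rho):
    """Map a notch-filter pole contraction factor rho to an approximately
    equivalent number of equal-width subbands via the normalized notch
    bandwidth BW = 1 - rho (assuming the band [0, 0.5] is split evenly)."""
    bw = 1.0 - rho
    return round(0.5 / bw)

# rho = 0.95 -> BW = 0.05 -> ten subbands, matching the example in the text
```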

Figure 38

Variance versus number of iterations plots for tracking the frequencies of two cisoids embedded in white noise with the proposed ARMA lattice method.

Figure 39

Mean bias versus number of iterations plots for tracking the frequencies of two cisoids embedded in white noise with the proposed ARMA lattice method.

Figure 40

Comparison of the proposed method with the Capon, APES, and IAA methods by means of variance versus number of iterations plots for tracking the frequencies of two cisoids embedded in white noise.

Figure 41

Comparison of the proposed method with the Capon, APES, and IAA methods by means of mean bias versus number of iterations plots for tracking the frequencies of two cisoids embedded in white noise.

7 Conclusions

A novel lattice method for adaptive ARMA spectrum estimation in subbands has been presented. The proposed method differs from previous methods in that it first transforms the subband ARMA spectrum estimation problem into an equivalent multichannel prediction filtering problem, and then completely orthogonalizes the multichannel input signal using SPMLSs. To convert lattice reflection coefficients to process parameters, the method includes a new Levinson–Durbin type multichannel conversion algorithm specially developed for SPMLSs. Fullband spectrum estimation is a special form of the proposed subband spectrum estimation method, which also gives rise to a novel two-channel prediction filter implementation.
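The paper's multichannel conversion algorithm is developed specifically for SPMLSs; for orientation, its familiar single-channel counterpart, the classic Levinson–Durbin step-up recursion converting reflection coefficients into AR polynomial coefficients, can be sketched as follows (scalar case only, with the convention A(z) = 1 + a1 z^-1 + ... + ap z^-p):

```python
def reflection_to_ar(k):
    """Step-up recursion: convert reflection coefficients k[0..p-1] into AR
    polynomial coefficients a[1..p] (the leading 1 of A(z) is implicit).
    At order m:  a_m(i) = a_{m-1}(i) + k_m * a_{m-1}(m - i),  a_m(m) = k_m."""
    a = []
    for m, km in enumerate(k, start=1):
        prev = a[:]
        a = [prev[i] + km * prev[m - 2 - i] for i in range(m - 1)] + [km]
    return a

# Example: k = [0.5, 0.2] gives a = [0.5*(1 + 0.2), 0.2] = [0.6, 0.2]
coeffs = reflection_to_ar([0.5, 0.2])
```

If all |k_m| < 1, the resulting AR polynomial is minimum phase, which is one reason lattice parameterizations are numerically attractive.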

The advantages of the method are that it can dynamically parameterize all subbands at once using a single filter structure that is modular, regular, and order-recursive, and therefore suitable for VLSI implementations such as programmable DSP chips and dedicated system-on-chip solutions; its inherently good numerical properties, owing to the avoidance of matrix inversions and the use of scalar-only operations; and its applicability to uniform as well as nonuniform subband filtering, and to both on-line and off-line implementations.

A detailed computational analysis has been provided by comparing the complexity of the proposed method with those of adaptive fast RLS transversal prediction filtering in subbands and the Periodogram method for different model orders and data lengths. The complexity has also been compared against those of the multitaper, Capon, APES, and IAA methods for one data length (N = 128): the proposed method is computationally advantageous compared to the multitaper and Capon methods for a range of lattice prediction filter orders, more advantageous than the APES method for a wide range of orders, and less complex than the IAA method for a very wide range of orders.

The simulation results demonstrate that the proposed subband spectrum estimation method exhibits the SNR improvement, frequency spacing improvement, and whitening advantages of a typical subband implementation. In the stationary signal case, the variance and mean bias versus SNR performance when estimating the frequencies of two cisoids in additive white and colored noise has been presented, and compared against the CRB for white noise as well as against the performances of the Periodogram, multitaper, Capon, APES, and IAA methods. In the nonstationary signal cases, the performance has also been investigated and compared against the Capon, APES, and IAA methods under deterministically and randomly time-varying signal conditions.

The performance results show that the proposed method improves on the poor performance of parametric methods under low-SNR, small frequency spacing, and colored noise conditions. It can also perform competitively with the nonparametric methods in a computationally efficient manner. Consequently, the proposed method with an increasing number of subbands holds good potential for cognitive radio spectrum sensing, radar and speech recognition tasks, and frequency estimation and tracking applications.

References

  1. Rao S, Pearlman WA: Analysis of linear prediction, coding, and spectral estimation from subbands. IEEE Trans. Inf. Theory 1996, 42(4):1160-1178.

  2. Bonacci D, Mailhes C, Djuric PM: Improving frequency resolution for correlation-based spectral estimation methods using subband decomposition. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2003, 329-332.

  3. Bonacci D, Mailhes C, Tourneret JY: Subband decomposition using multichannel AR spectral estimation. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2005, 409-412.

  4. Haykin S: Radar signal processing. IEEE ASSP Mag. 1985, 2(2):2-18.

  5. Carriere R, Moses RL: Autoregressive moving average modeling of radar target signatures. Proceedings of the IEEE National Radar Conference, 20-21 April 1988, 225-229.

  6. Lim I, Lee BG: Lossless pole-zero modeling of speech signals. IEEE Trans. Speech Audio Process. 1993, 1(3):269-276.

  7. Lim I, Lee BG: Lossy pole-zero modeling of speech signals. IEEE Trans. Speech Audio Process. 1996, 4(2):81-88.

  8. Nehorai A: A minimal parameter adaptive notch filter with constrained poles and zeros. IEEE Trans. Acoust. Speech Signal Process. 1985, 33(4):983-996.

  9. Händel P, Nehorai A: Tracking analysis of an adaptive notch filter with constrained poles and zeros. IEEE Trans. Signal Process. 1994, 42(2):281-291.

  10. Eom KB, Chellappa R: Noncooperative target classification using hierarchical modeling of high-range resolution radar signatures. IEEE Trans. Signal Process. 1997, 45(9):2318-2327.

  11. Yong-Xiang L, Xiang L, Zhao-Wen Z: Modeling of multirate signal in radar target recognition. Proceedings of the IEEE International Conference on Neural Networks and Signal Processing, 14-17 Dec 2003, 1604-1606.

  12. Kwan HK, Wang M: ARMA lattice model for speech analysis and synthesis. Proceedings of the IEEE International Conference on Neural Networks and Signal Processing, 14-17 Dec 2003, 912-915.

  13. Tichavsky P, Händel P: Two algorithms for adaptive retrieval of slowly time-varying multiple cisoids in noise. IEEE Trans. Signal Process. 1995, 43(5):1116-1127.

  14. Regalia PA: An improved lattice-based adaptive IIR notch filter. IEEE Trans. Signal Process. 1991, 39(9):2124-2128.

  15. Petraglia MR, Mitra SK, Szczupak J: Adaptive sinusoid detection using IIR notch filters and multirate techniques. IEEE Trans. Circuits Syst. II: Analog Digital Signal Process. 1994, 41(11):709-717.

  16. Prudat Y, Vesin J-M: Multi-signal extension of adaptive frequency tracking algorithms. Signal Process. 2009, 89:963-973.

  17. Liu H-Y, Yen RY: Effective adaptive iteration algorithm for frequency tracking and channel estimation in OFDM systems. IEEE Trans. Veh. Technol. 2010, 59(4):2093-2097.

  18. Sandberg F, Stridh M, Sörnmo L: Frequency tracking of atrial fibrillation using hidden Markov models. IEEE Trans. Biomed. Eng. 2008, 55(2):502-511.

  19. Evangelopoulos G, Maragos P: Multiband modulation energy tracking for noisy speech detection. IEEE Trans. Audio Speech Lang. Process. 2006, 14(6):2024-2038.

  20. Piskorowski J: Suppressing harmonic powerline interference using multiple-notch filtering methods with improved transient behavior. Measurement 2012. http://dx.doi.org/10.1016/j.measurement.2012.03.004

  21. Tan L, Jiang J: Novel adaptive IIR filter for frequency estimation and tracking. IEEE Signal Process. Mag. 2009, 26(6):186-189.

  22. Ma J, Li GY, Juang BH: Signal processing in cognitive radio. Proc. IEEE 2009, 97(5):805-823.

  23. Axell E, Leus G, Larsson EG, Poor HV: Spectrum sensing for cognitive radio. IEEE Signal Process. Mag. 2012, 29(3):101-116.

  24. Marple SL: Digital Spectral Analysis with Applications. Englewood Cliffs, NJ: Prentice Hall; 1987.

  25. Ling F, Proakis JG: A generalized multichannel least squares lattice algorithm based on sequential processing stages. IEEE Trans. Acoust. Speech Signal Process. 1984, ASSP-32(2):381-389.

  26. Lev-Ari H: Modular architectures for adaptive multichannel lattice algorithms. IEEE Trans. Acoust. Speech Signal Process. 1987, ASSP-35(4):543-552.

  27. Glentis G-O, Kalouptsidis N: A highly modular adaptive lattice algorithm for multichannel least squares filtering. Signal Process. 1995, 46:47-55.

  28. Lewis PS: QR-based algorithms for multichannel adaptive least squares lattice filters. IEEE Trans. Acoust. Speech Signal Process. 1990, ASSP-38(3):421-431.

  29. Yang B: A QR multichannel least squares lattice algorithm for adaptive nonlinear filtering. Int. J. Electron. Commun. (AEÜ) 1995, 49(4):171-182.

  30. Rontogiannis AA, Theodoridis S: IEEE Trans. Signal Process. 1998, 46(11):2862-2876.

  31. Gomes J, Barroso VAN: Array-based QR-RLS multichannel lattice filtering. IEEE Trans. Signal Process. 2008, 56(8):3510-3522.

  32. Glentis GA, Kalouptsidis N: Fast adaptive algorithms for multichannel filtering and system identification. IEEE Trans. Signal Process. 1992, 40(10):2433-2457.

  33. Slock DTM, Chisci L, Lev-Ari H, Kailath T: Modular and numerically stable fast transversal filters for multichannel and multiexperiment RLS. IEEE Trans. Signal Process. 1992, 40(4):784-802.

  34. Regalia PA, Bellanger MG: On the duality between fast QR methods and lattice methods in least squares adaptive filtering. IEEE Trans. Signal Process. 1991, 39(4):879-891.

  35. Ling F: Givens rotation based least squares lattice and related algorithms. IEEE Trans. Signal Process. 1991, 39(7):1541-1551.

  36. Ling F: A recursive modified Gram-Schmidt algorithm for least-squares estimation. IEEE Trans. Acoust. Speech Signal Process. 1986, ASSP-34(4):829-835.

  37. Ozden MT, Kayran AH: Decision feedback equalisation with complete lattice orthogonalisation. Electron. Lett. 2001, 37(14):923-926.

  38. Ozden MT, Kayran AH: Adaptive multichannel decision feedback equalization for Volterra type nonlinear communication channels. Int. J. Electron. Commun. (AEÜ) 2008, 62(6):430-437.

  39. Woods R, McAllister J, Lightbody G, Yi Y: FPGA-Based Implementation of Signal Processing Systems. New York: Wiley; 2008.

  40. Li J, Stoica P: An adaptive filtering approach to spectral estimation and SAR imaging. IEEE Trans. Signal Process. 1996, 44(6):1469-1483.

  41. Stoica P, Moses R: Spectral Analysis of Signals. Englewood Cliffs, NJ: Prentice Hall; 2005.

  42. Yardibi T, Li J, Stoica P, Xue M, Baggeroer AB: Source localization and sensing: a nonparametric iterative adaptive approach based on weighted least squares. IEEE Trans. Aerosp. Electron. Syst. 2010, 46(1):425-442.

  43. Thomson DJ: Spectrum estimation and harmonic analysis. Proc. IEEE 1982, 70:1055-1096.

  44. Haykin S, Thomson DJ, Reed JH: Spectrum sensing for cognitive radio. Proc. IEEE 2009, 97(5):849-877.

  45. Farhang-Boroujeny B: Filter bank spectrum sensing for cognitive radios. IEEE Trans. Signal Process. 2008, 56(5):1801-1811.

  46. Glentis GO: Efficient algorithms for adaptive Capon and APES spectral estimation. IEEE Trans. Signal Process. 2010, 58(1):84-96.

  47. Glentis GO, Jakobsson A: Efficient implementation of iterative adaptive approach spectral techniques. IEEE Trans. Signal Process. 2011, 59(9):4154-4167.

  48. Haykin S: Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice Hall; 2004.

  49. Strobach P: New forms of Levinson and Schur algorithms. IEEE Signal Process. Mag. 1991, 8(1):12-36.

  50. Ardalan SH, Faber LJ: A fast ARMA transversal RLS filter algorithm. IEEE Trans. Acoust. Speech Signal Process. 1988, ASSP-36(3):349-358.

  51. Cochran WT, Cooley JW, Favin DL, Helms HD, Kaenel RA, Lang WW, Maling GC, Nelson DE, Rader CM, Welch PD: What is the fast Fourier transform? Proc. IEEE 1967, 55(10):1664-1674.

  52. Lin Y-P, Vaidyanathan PP: A Kaiser window approach for the design of prototype filters of cosine modulated filterbanks. IEEE Signal Process. Lett. 1998, 5(6):132-134.

  53. Glentis GO, Jakobsson A: Time-recursive IAA spectral estimation. IEEE Signal Process. Lett. 2011, 18(2):111-114.

  54. Broersen PMT, de Waele S: Time series analysis in a frequency subband. IEEE Trans. Instrum. Meas. 2003, 52(4):1054-1060.

  55. Kay SM: Modern Spectral Estimation: Theory and Application. Englewood Cliffs, NJ: Prentice Hall; 1987.


Acknowledgements

The author is grateful to the Editor Prof. Cassio G. Lopes and anonymous reviewers for their useful comments.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Mehmet Tahir Ozden.

Additional information

Competing interests

The author declares that he has no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Ozden, M.T. Adaptive multichannel sequential lattice prediction filtering method for ARMA spectrum estimation in subbands. EURASIP J. Adv. Signal Process. 2013, 9 (2013). https://doi.org/10.1186/1687-6180-2013-9
