Open Access

Lattice implementation of adaptive channel shortening in multicarrier transmission over IIR channels

  • Emna Ben Salem1,
  • Sofiane Cherif1,
  • Hichem Besbes1 and
  • Roberto López-Valcarce2
EURASIP Journal on Advances in Signal Processing 2013, 2013:174

Received: 18 January 2013

Accepted: 30 August 2013

Published: 19 November 2013


Time-domain equalization is crucial for reducing channel dispersion and canceling interference in multicarrier systems. The equalizer is a finite impulse response (FIR) filter designed so that the delay spread of the combined channel-plus-equalizer impulse response is no longer than the cyclic prefix. In this paper, a specific framework for the long FIR channel-shortening problem is studied. Approximating the channel by a stable pole-zero model, we show that the poles of the channel transfer function introduce interference. Hence, to cancel the critical poles, we propose a lattice-structure implementation of the channel shortener, which places its zeros very close to the critical channel poles and cancels them out. For low-complexity implementation, we adopt adaptive algorithms to design the lattice channel shorteners. This paper analyzes the lattice-structure performance of two blind adaptive channel shorteners: the sum-squared autocorrelation minimization and the multicarrier equalization by restoration of redundancy algorithms. The performance of the proposed implementation is given in terms of bit rate, and the simulation results are studied in the context of an asymmetric digital subscriber line system.


Keywords: Impulse response; Finite impulse response; Cyclic prefix; Recursive least squares; Mean square deviation

1 Introduction

Multicarrier (MC) modulation has various advantages that make it useful for a wide variety of digital communication systems [1]. It has been chosen as the physical layer standard for a diversity of systems such as digital transmission over telephone lines, applications in broadcasting, and wireless networks [26]. The most important advantage of the MC system is its robustness against interference. In fact, cyclic prefix (CP) insertion between MC symbols provides higher immunity against channel delay spread. Therefore, as long as the channel dispersion is not longer than the CP, system performance does not degrade. However, a highly time-dispersive channel leads to a significant reduction of the transmission data rate, since the received signal is corrupted by both inter-carrier and inter-symbol interference. To avoid such performance degradation, a channel-shortening filter, commonly referred to as a time-domain equalizer (TEQ), is introduced at the receiver front end. Generally, the TEQ is a transversal filter whose main purpose is to keep the delay spread of the combined channel-plus-equalizer impulse response (IR) no longer than the CP length [7]. Yet, being FIR filters, all of the proposed TEQs are designed to shorten both FIR and infinite IR (IIR) channels, and thus address the channel-shortening problem without taking the transmission channel model into account.

Moreover, the transmission characteristics of the channel directly determine the performance of communication systems. Therefore, several channel models have been proposed to accurately simulate the effect of transmitting the MC signal through the channel. In particular, it has been shown that in twisted pair lines, the channel is well modeled by a recursive filter with a slowly decaying IR [8, 9]. This means that the channel transfer function presents poles close to the unit circle (UC). Exploiting this result, we develop in this paper a specific framework for shortening slowly decaying recursive channels. Indeed, approximating the channel by a stable pole-zero model, we show that the channel poles introduce an interference term in each received MC symbol. Thus, in order to shorten the recursive channel, an effective TEQ should place its zeros on the critical poles to cancel them out. It is worth noting that inaccurate zero placement results in limited channel-shortening performance, which motivates the use of the lattice structure, designed by optimizing appropriate channel-shortening criteria, to cancel the channel poles.

Furthermore, most supervised channel-shortening criteria in the literature, such as MSSNR [10], MGSNR [11], min-ISI [12], MBR [13] and SINR-max [14], are formulated in terms of the combined channel-plus-equalizer IR, and their optimum TEQs are obtained as solutions of linear algebraic equations. However, when the lattice structure is used, all of these criteria become non-linear in the lattice TEQ reflection coefficients, whereas criteria based on the lattice filter output signal remain linear in the reflection coefficients. Thus, for linear algebraic resolution and low-complexity lattice implementation, we adopt the latter kind of criteria, namely the blind multicarrier equalization by restoration of redundancy (MERRY) and sum-squared autocorrelation minimization (SAM) algorithms [15, 16].

Accordingly, this paper presents the lattice-structure performance of the adaptive MERRY and SAM algorithms. The performance of the proposed implementation is given in terms of bit rate, and the simulation results are studied in the context of digital subscriber line (DSL) systems. Section 2 discusses the problem statement of long FIR channel shortening and motivates the use of the lattice structure to implement the TEQ, and Section 3 presents the lattice structure of the channel equalizer. Section 4 then develops the steepest descent implementation of the lattice-based SAM algorithm, whereas Section 5 gives the recursive least squares (RLS) implementation of the lattice-based MERRY algorithm. Section 6 presents comparative simulation results, and conclusions are given in Section 7.

Matrices and vectors are denoted with upper- and lower-case boldface letters (e.g., $\mathbf{M}$ and $\mathbf{v}$); the superscripts $*$, $T$, $H$, and $-1$ denote the conjugate, the transpose, the Hermitian (conjugate transpose), and the inverse of a matrix, respectively; $\mathbf{0}_{N \times L}$ and $\mathbf{I}_N$ denote the $N \times L$ null matrix and the $N \times N$ identity matrix. For any vector $\mathbf{v}$, $\|\mathbf{v}\|$ denotes the Euclidean norm, and $\mathbf{M} = \mathrm{diag}(\mathbf{v})$ is the diagonal matrix whose diagonal elements equal the elements of $\mathbf{v}$. The notation $s_{k,n}$ denotes the $n$th MC sample transmitted or received during the $k$th MC symbol period.

2 Channel interference analysis for MC transmission

The goal of this section is to show that if the MC transmission channel is modeled by a slowly decaying recursive filter, then its partial equalization can be achieved only by canceling the transfer-function poles that cause the interference. To this end, the equalizer filter must place its zeros at the critical poles so as to cancel them out.

2.1 MC system model

MC modulation divides the transmission bandwidth into N parallel subchannels by means of the inverse fast Fourier transform (IFFT). A CP is appended to each symbol to ensure subchannel orthogonality after propagation through the time-dispersive channel. Demodulation of the received signal is performed by an FFT operation. The simplified baseband equivalent MC system model is shown in Figure 1. At the transmitter and after modulation, the data sequence is converted into N parallel subsequences, $z_{k,n}$, where $n \in \{1,2,\ldots,N\}$ refers to the subcarrier number and $k$ is the discrete time index. The block $\mathbf{z}_k \triangleq [z_{k,1}, \ldots, z_{k,n}, \ldots, z_{k,N}]^T$ is used to modulate the different subcarriers by means of an IFFT; the resulting block vector is denoted by $\mathbf{x}_k \triangleq [x_{k,1}, \ldots, x_{k,n}, \ldots, x_{k,N}]^T$ and is expressed as $\mathbf{x}_k = \mathbf{F}\mathbf{z}_k$, where $\mathbf{F}$ represents the unitary symmetric IFFT matrix, $\{\mathbf{F}\}_{n_1,n_2} \triangleq \frac{1}{\sqrt{N}} e^{j 2\pi n_1 n_2 / N}$, $n_1, n_2 \in \{0,1,\ldots,N-1\}$, and $j^2 = -1$. To ensure that the subcarriers remain orthogonal after propagation through the channel, the last P samples (corresponding to the CP) of $\mathbf{x}_k$ are copied and added to the beginning of this block to form the $k$th MC block, $\tilde{\mathbf{x}}_k$, of length $M = N + P$, where $\tilde{\mathbf{x}}_k \triangleq [\tilde{x}_{kM+1}, \ldots, \tilde{x}_{(k+1)M}]^T = [x_{k,N-P+1}, \ldots, x_{k,N}, x_{k,1}, \ldots, x_{k,N}]^T$.
Figure 1

MC system model. N, MC symbol size; k, MC symbol index; n and i, sample index before and after inserting the cyclic prefix.

Let $\tilde{x}_i$, where $i \triangleq kM + m$ and $m \in \{1,\ldots,M\}$, be the source sequence to be transmitted after the P/S operation. The process $\tilde{x}_i$ is modeled as zero-mean, wide-sense stationary, white, and of unit variance (this property holds provided that sampling is done at twice the signal bandwidth). The noise $\tilde{v}_i$ is a zero-mean white Gaussian process with variance $\sigma_{\tilde{v}}^2$, uncorrelated with the transmitted signal.
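As a rough illustration (not from the paper), the CP construction described above can be sketched in a few lines of Python; `ifft_block` and `add_cyclic_prefix` are hypothetical helper names, and a direct O(N²) unitary IDFT stands in for the IFFT for clarity:

```python
import cmath

def ifft_block(z):
    """Unitary IDFT of one MC symbol (direct O(N^2) sum, for clarity only)."""
    N = len(z)
    return [sum(z[m] * cmath.exp(2j * cmath.pi * m * n / N) for m in range(N)) / N ** 0.5
            for n in range(N)]

def add_cyclic_prefix(x, P):
    """Copy the last P time-domain samples to the front of the block."""
    return x[-P:] + x

# toy example: N = 8 subcarriers, CP of P = 3 samples
N, P = 8, 3
z = [1.0 + 0j] * N                 # one frequency-domain MC symbol
x = ifft_block(z)
x_tilde = add_cyclic_prefix(x, P)
assert len(x_tilde) == N + P       # block length M = N + P
assert x_tilde[:P] == x[-P:]       # the prefix is a copy of the block tail
```

Because the prefix duplicates the block tail, linear convolution with a channel shorter than the CP looks circular over each block, which is exactly the orthogonality property the text invokes.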

2.2 Channel pole-zero model

In order to analyze the channel-shortening problem, we propose to approximate the transmission channel with a lower-order pole-zero model. When the channel is characterized by a long tail, it can be approximated by a reduced-parameter model. The problem of approximating a long FIR filter by a pole-zero filter with a small total number of coefficients has been investigated by many researchers [17, 18]; some of them specifically developed a stable pole-zero model of the DSL loop as a way to reduce the implementation complexity of DSL channel equalization [9]. Thus, the transfer function of the long FIR channel can be written as follows:
$$H(z) = \frac{\sum_{q=0}^{L_a} a_q z^{-q}}{1 + \sum_{p=1}^{L_b} b_p z^{-p}},$$
where L a  and L b  are the channel numerator and denominator orders, respectively. Moreover, the direct form channel model can be characterized in the time domain by the following difference equation:
$$\tilde{r}_i = \sum_{q=0}^{L_a} a_q \tilde{x}_{i-q} - \sum_{p=1}^{L_b} b_p \tilde{r}_{i-p} + \tilde{v}_i,$$

with r ~ i as the received signal.
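To make the difference equation concrete, here is a minimal direct-form simulation of it (an illustrative sketch, not the paper's code; `iir_channel` is a hypothetical name). The single-pole example shows the slowly decaying tail characteristic of a pole near the unit circle:

```python
def iir_channel(x, a, b, noise=None):
    """Direct-form channel: r_i = sum_q a[q] x_{i-q} - sum_p b[p-1] r_{i-p} (+ noise)."""
    r = []
    for i in range(len(x)):
        acc = sum(a[q] * x[i - q] for q in range(len(a)) if i - q >= 0)
        acc -= sum(b[p - 1] * r[i - p] for p in range(1, len(b) + 1) if i - p >= 0)
        if noise is not None:
            acc += noise[i]
        r.append(acc)
    return r

# single-pole example: H(z) = 1 / (1 - 0.9 z^-1), a pole close to the unit circle
impulse = [1.0] + [0.0] * 9
h = iir_channel(impulse, a=[1.0], b=[-0.9])
# h[i] = 0.9**i: the impulse response decays slowly, far beyond a short CP
```

A pole of radius 0.9 still contributes about 35% of its initial energy after ten samples, which is why such channels need shortening before FFT demodulation.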

2.3 Channel interference analysis

In the following, we detail the necessity of channel shortening in the case of long FIR MC channels. We will show that the presence of at least one pole in the channel transfer function introduces an interference term in the received MC block. To simplify the analysis, we make the following assumptions: the channel has one pole, i.e., $L_b = 1$; the numerator order does not exceed the CP, i.e., $L_a \le P$; the equalizer filter is transparent, i.e., $\tilde{y}_i = \tilde{r}_i$; and no channel delay is considered, i.e., $\delta = 0$.

At the receiver, the channel output, $\tilde{y}_i$, is converted into M parallel substreams and the cyclic prefix is removed from the received block in order to obtain the $k$th MC symbol vector $\mathbf{y}_k = [\tilde{y}_{kM+P+1}, \tilde{y}_{kM+P+2}, \ldots, \tilde{y}_{(k+1)M}]^T$. According to relation (2), the received MC block can be written in matrix-vector form as follows:
$$\mathbf{y}_k = \mathbf{A}\mathbf{x}_k - \mathbf{B}\mathbf{y}_k^{(1)} + \tilde{\mathbf{v}}_k,$$
where $\mathbf{A}$ is an $N \times N$ circulant matrix whose first column and first row are $[a_0, \ldots, a_{L_a}, 0, \ldots, 0]^T$ and $[a_0, 0, \ldots, 0, a_{L_a}, \ldots, a_1]$, respectively; $\tilde{\mathbf{v}}_k \triangleq [\tilde{v}_{kM+P+1}, \ldots, \tilde{v}_{(k+1)M}]^T$ is the noise vector; $\mathbf{y}_k^{(1)} \triangleq [\tilde{y}_{kM+P}, \ldots, \tilde{y}_{(k+1)M-1}]^T$; and $\mathbf{B} \triangleq b_1 \mathbf{I}_N$. To detail the signal components received during the $k$th symbol period, we denote by $\mathbf{y}_k^{(j)}$ the vector $[\tilde{y}_{kM+P+1-j}, \ldots, \tilde{y}_{(k+1)M-j}]^T$ and by $\tilde{\mathbf{v}}_k^{(j)}$ the vector $[\tilde{v}_{kM+P+1-j}, \ldots, \tilde{v}_{(k+1)M-j}]^T$; the superscript is omitted for $j = 0$, i.e., $\mathbf{y}_k^{(0)} = \mathbf{y}_k$ and $\tilde{\mathbf{v}}_k^{(0)} = \tilde{\mathbf{v}}_k$. It can be shown that, for $j \le P - L_a$ and $l \le P$, the vectors $\mathbf{y}_k^{(j)}$ and $\mathbf{x}_k^{(l)}$ are recursively computed according to the following relations:
$$\mathbf{y}_k^{(j)} = \mathbf{A}\mathbf{x}_k^{(j)} - \mathbf{B}\mathbf{y}_k^{(j+1)} + \tilde{\mathbf{v}}_k^{(j)},$$
$$\mathbf{x}_k^{(l+1)} = \tilde{\mathbf{J}}\mathbf{x}_k^{(l)}.$$
Here, $\tilde{\mathbf{J}}$ is a shifting matrix, $\tilde{\mathbf{J}} \triangleq [\breve{\mathbf{J}}, \mathbf{e}]$, where $\breve{\mathbf{J}} \triangleq [\mathbf{0}_{(N-1)\times 1}, \mathbf{I}_{N-1}]^T$ and $\mathbf{e} \triangleq [1, \mathbf{0}_{1\times(N-1)}]^T$. Then, by exploiting the recursive relations (4) and (5), we can write, for $L = P - L_a$:
$$\mathbf{y}_k = \sum_{j=0}^{L} (-1)^j \mathbf{B}^j \mathbf{A}\tilde{\mathbf{J}}^j \mathbf{x}_k + (-1)^{L+1}\mathbf{B}^{L+1}\mathbf{y}_k^{(L+1)} + \sum_{j=0}^{L} (-1)^j \mathbf{B}^j \tilde{\mathbf{v}}_k^{(j)}.$$
Let $\mathbf{D} \triangleq \sum_{j=0}^{L} (-1)^j \mathbf{B}^j \mathbf{A}\tilde{\mathbf{J}}^j$; it can easily be shown that $\mathbf{D}$ is a circulant matrix. Further, an FFT operation demodulates the received MC symbol $\mathbf{y}_k$, and the resulting block, denoted by $\mathbf{s}_k$, is given by the following:
$$\mathbf{s}_k = \underbrace{\mathbf{F}^H\mathbf{D}\mathbf{F}\mathbf{z}_k}_{\text{desired symbol}} + \underbrace{(-1)^{L+1}\mathbf{F}^H\mathbf{B}^{L+1}\mathbf{y}_k^{(L+1)}}_{\text{interference term}} + \underbrace{\sum_{j=0}^{L}(-1)^j\mathbf{F}^H\mathbf{B}^j\tilde{\mathbf{v}}_k^{(j)}}_{\text{noise term}}.$$
As a result, the received MC symbol consists of three major components: the desired signal component, obtained through the diagonal matrix $\mathbf{F}^H\mathbf{D}\mathbf{F}$; the interference component, expressed by the term $(-1)^{L+1}\mathbf{F}^H\mathbf{B}^{L+1}\mathbf{y}_k^{(L+1)}$; and the noise component $\sum_{j=0}^{L}(-1)^j\mathbf{F}^H\mathbf{B}^j\tilde{\mathbf{v}}_k^{(j)}$. Notably, the interference component is analyzed by expressing $\mathbf{y}_k^{(L+1)}$ as follows:
$$\mathbf{y}_k^{(L+1)} = \tilde{\mathbf{A}}\tilde{\mathbf{J}}^{L+1}\mathbf{x}_k - \mathbf{B}\mathbf{y}_k^{(L+2)} + \tilde{\mathbf{v}}_k^{(L+1)},$$

where $\tilde{\mathbf{A}} \triangleq [\mathbf{a}, \mathbf{A}^T(:,2:N)]^T$ and $\mathbf{a} \triangleq [a_0, 0, \ldots, 0, a_{L_a-1}, \ldots, a_1]^T$. We observe that the matrix $\mathbf{B}^{L+1}\tilde{\mathbf{A}}\tilde{\mathbf{J}}^{L+1}$ cannot be circulant because $\tilde{\mathbf{A}}$ is non-circulant. Hence, it cannot be diagonalized by the Fourier transform, so this term produces an undesired interference signal in the received MC symbol $\mathbf{s}_k$.

Finally, we note that, independently of the number of channel zeros, the channel poles introduce an interference term whose power depends on their locations inside the UC. Therefore, with the aim of canceling interference, a TEQ is inserted at the receiver front end to equalize the poles and produce a combined channel-plus-equalizer IR no longer than the CP length. Next, we discuss the lattice structure derived to implement the TEQ for shortening the recursive channel model.

3 Lattice structure of channel equalizer

There is no doubt that the equalizer coefficients depend mainly on the characteristics of the channel, which are in turn determined from measurements obtained by transmitting signals through the physical medium. Such filters, with adjustable parameters, are usually called adaptive, especially when they incorporate algorithms that allow the filter coefficients to track changes in the signal statistics. Adaptive techniques can also reduce complexity.

In their direct transversal form, all of the proposed adaptive channel shorteners are designed for both FIR and IIR channel shortening. Therefore, in this paper, we propose a specific framework for channel shortening in which the channel model is a slowly decaying IIR filter. In order to ensure convergence of the algorithms to the TEQ zeros that cancel the poles causing the tail, we suggest working with the lattice structure for the adaptive TEQ implementation. We consider the adaptive blind MERRY and SAM algorithms.

As it will be illustrated in the following sections, the choice of the TEQ filter structure has a profound effect on the operation of the channel-shortening algorithms. In fact, we will show that when the adaptive TEQs are implemented using the lattice structure, the TEQ coefficients converge very close to the optimal values.

Therefore, we distinguish the two most popular filter structures: the transversal and the lattice structures. The transversal structure is the most common structure used in implementing channel shorteners; its function is to adjust a set of $L_w + 1$ filter coefficients $w_j$, $j = 0, 1, \ldots, L_w$ (tap weights) so that the output $\tilde{y}_i$ is as close as possible to a desired signal. The filter output is calculated as a linear combination of the input sequence.

The lattice TEQ filter is modular in structure: it consists of a number of individual stages, each having the appearance of a lattice, as shown in Figure 2. The transfer function of the lattice filter is determined by the reflection coefficients $q_p$, for $p = 1, 2, \ldots, L_w$. The stage outputs are obtained as follows:
$$u_1(i) = v_1(i+1) = \tilde{r}_i,$$
Figure 2

Lattice implementation of the TEQ.

$$v_p(i+1) = q_{p-1}\,u_{p-1}(i) + v_{p-1}(i),$$
$$u_p(i) = q_{p-1}\,v_{p-1}(i) + u_{p-1}(i),$$
and the output TEQ is given by the following:
$$\tilde{y}_i = \tilde{r}_i + \sum_{p=1}^{L_w} q_p v_p(i) = \tilde{r}_i + \mathbf{q}^T\mathbf{v}_i,$$

where $\mathbf{q} = [q_1, q_2, \ldots, q_{L_w}]^T$ and $\mathbf{v}_i = [v_1(i), v_2(i), \ldots, v_{L_w}(i)]^T$.
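The stage recursions above translate directly into code. The following real-valued sketch (illustrative; `lattice_filter` is not a name from the paper) keeps the delayed states v_p(i) between samples:

```python
def lattice_filter(r, q):
    """Monic lattice TEQ driven by reflection coefficients q[0..Lw-1] = q_1..q_Lw."""
    Lw = len(q)
    v = [0.0] * (Lw + 1)          # v[p] holds v_p(i) for p = 1..Lw (v[0] unused)
    y = []
    for r_i in r:
        # output y_i = r_i + sum_p q_p v_p(i) uses the delayed states
        y.append(r_i + sum(q[p - 1] * v[p] for p in range(1, Lw + 1)))
        # propagate the stages: u_1(i) = v_1(i+1) = r_i, then the stage recursions
        u = r_i
        v_next = [0.0] * (Lw + 1)
        v_next[1] = r_i
        for p in range(2, Lw + 1):
            v_next[p] = q[p - 2] * u + v[p - 1]
            u = q[p - 2] * v[p - 1] + u
        v = v_next
    return y

# impulse response for q = [0.5, 0.25] matches the monic taps [1, q1(1 + q2), q2]
y = lattice_filter([1.0, 0.0, 0.0, 0.0], [0.5, 0.25])
# y == [1.0, 0.625, 0.25, 0.0]
```

Feeding in a unit impulse and reading off the output is a quick way to check that a two-stage lattice realizes a monic three-tap FIR filter, consistent with the monic constraint noted below Figure 2.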

The mapping from transversal to lattice form can be achieved via the Levinson-Durbin recursions [19]. Note that the lattice structure effectively imposes a monic constraint on the TEQ, i.e., the first tap is always 1. Hence, the equivalent filter length is $L_w + 1$.
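For illustration, the lattice-to-transversal mapping can be sketched with the standard step-up recursion (real-valued; `lattice_to_transversal` is a hypothetical name), which grows the monic tap vector one reflection coefficient at a time:

```python
def lattice_to_transversal(q):
    """Step-up recursion: reflection coefficients q_1..q_Lw -> monic taps [1, w_1, ...]."""
    w = [1.0]
    for k in q:                       # append one lattice stage per coefficient
        w_prev = w + [0.0]
        n = len(w_prev)
        # new polynomial = old polynomial + k * (reversed old polynomial)
        w = [w_prev[j] + k * w_prev[n - 1 - j] for j in range(n)]
    return w

# two stages with q = [0.5, 0.25] give the monic filter [1, q1(1 + q2), q2]
assert lattice_to_transversal([0.5, 0.25]) == [1.0, 0.625, 0.25]
```

Running the recursion in reverse (the step-down recursion) recovers the reflection coefficients, and checking that every |q_p| < 1 along the way is the usual minimum phase test the paper relies on.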

Hence, having adopted the lattice structure, we optimize the TEQ coefficients by implementing adaptive algorithms, namely the blind MERRY and SAM algorithms.

4 Lattice implementation of the adaptive SAM algorithm

A short autocorrelation of the effective channel is a property that is degraded by a long channel IR. SAM is a blind adaptive channel-shortening algorithm that attempts to restore this short autocorrelation property.

4.1 SAM cost function properties

The SAM algorithm performs a gradient descent of a specific cost function, defined as the sum of the squares of the autocorrelation coefficients at all lags greater than the desired channel memory. This cost function can be stated as follows:
$$J_{\text{SAM}}(\mathbf{w}) \triangleq \sum_{l=P+1}^{L_c} |R_{cc}(l)|^2,$$
where $R_{cc}(l) = \sum_{i=0}^{L_c} c_i c_{i-l}$ is the autocorrelation of the overall IR $c_i$ (with $c_i = \sum_{l=0}^{L_w} w_l h_{i-l}$). Note that if $c_i = 0$ for $i \ge P + 1$, then $R_{cc}(l) = 0$ for $l \ge P + 1$, so that $J_{\text{SAM}}$ will be zero. In other words, a short channel implies a short autocorrelation. However, the converse is not true: as an example, consider the following IR:
$$c_i = \begin{cases} \alpha, & i = 0, \\ \alpha^2 - 1, & i = 1, \\ \alpha^{i-1}(\alpha^2 - 1), & 2 \le i \le L_c, \end{cases}$$

with $|\alpha| < 1$. For sufficiently large $L_c$, one has $R_{cc}(l) \approx \delta(l)$, whereas the IR cannot be said to be 'short' (just take $|\alpha|$ close to 1). This is due to the fact that $c_i$ in Equation 14 resembles the IR of an all-pass system. Nevertheless, shortening $R_{cc}(l)$ remains a useful means of attempting channel shortening in practical scenarios [16].

Another important observation regarding the SAM cost (13) is its invariance to flipping any of the zeros of the overall transfer function $C(z) = \sum_{i=0}^{L_c} c_i z^{-i}$ with respect to the unit circle, since this operation leaves $R_{cc}(l)$ unchanged. Since any zero of the TEQ transfer function $W(z) = \sum_{i=0}^{L_w} w_i z^{-i}$ is a zero of $C(z)$, flipping the TEQ zeros leaves the cost $J_{\text{SAM}}$ unaltered. As a result, for any point on the cost surface, there are $2^{L_w}$ points giving identical cost elsewhere; in particular, any minimum (local or global) is repeated $2^{L_w}$ times. These minima may or may not yield good performance in terms of shortening the effective channel IR. For example, consider a channel with IR $h_i$ given by Equation 14; then the two-tap TEQs $\mathbf{w}_1 = [1, -\alpha]^T$ and $\mathbf{w}_2 = [-\alpha, 1]^T$ result in identical short autocorrelations. However, while $\mathbf{w}_1$ yields good performance in terms of overall IR shortening, $\mathbf{w}_2$ does not.

To ensure convergence of the SAM algorithm to the good TEQ zeros, those canceling the poles that cause the tail, we suggest imposing a minimum phase constraint on the TEQ. This can be effectively implemented using the lattice structure, since a lattice filter is minimum phase if and only if all of its reflection coefficients have magnitude less than unity. We propose to check this condition as the algorithm progresses.

4.2 Lattice SAM algorithm

In [16], it has been shown that the SAM cost function is well approximated by the sum-squared autocorrelation of the TEQ output sequence:
$$J_{\text{SAM}} \simeq \sum_{l=P+1}^{L_c} |R_{\tilde{y}\tilde{y}}(l)|^2.$$
Given the lattice TEQ output signal y ~ i , the purpose of the SAM algorithm is to update q as a way to minimize Equation 15. This can be done by means of the following well-known gradient descent:
$$\mathbf{q}(i+1) = \mathbf{q}(i) - \mu\,\nabla_{\mathbf{q}} J_{\text{SAM}},$$
where $\nabla_{\mathbf{q}} J_{\text{SAM}}$ is the gradient of $J_{\text{SAM}}$ evaluated at $\mathbf{q} = \mathbf{q}(i)$ and $\mu$ is the step size. However, in order to implement Equation 16, we define an instantaneous cost function by replacing the expectation operation with a moving average over a window of length $\nu$:
$$J_{\text{SAM}} \simeq \sum_{l=P+1}^{L_c} \left(\frac{\xi(i,l)}{\nu}\right)^2, \quad \text{with} \quad \xi(i,l) \triangleq \sum_{j \in I_i} \tilde{y}_j \tilde{y}_{j-l},$$
where $I_i \triangleq \{i, i-1, \ldots, i-\nu+1\}$. The partial derivative of $J_{\text{SAM}}$ with respect to the lattice-TEQ coefficient $q_p$ is as follows:
$$\frac{\partial J_{\text{SAM}}}{\partial q_p} = \frac{1}{\nu^2} \sum_{l=P+1}^{L_c} \left[\xi^*(i,l)\frac{\partial \xi(i,l)}{\partial q_p} + \xi(i,l)\frac{\partial \xi^*(i,l)}{\partial q_p}\right].$$
Now, one has
$$\frac{\partial \xi(i,l)}{\partial q_p} \approx v_p(i)\,\tilde{y}_{i-l},$$
$$\frac{\partial \xi^*(i,l)}{\partial q_p} \approx v_p(i-l)\,\tilde{y}_i.$$
In Equations 19 and 20, we have neglected the dependence of the signals $v_p(i)$ on the reflection coefficients (see Equation 12). Moreover, in the summation over $j$, we have kept only one term in order to keep the computational complexity at bay. Let us define the vectors
$$\tilde{\mathbf{y}}_i \triangleq [\tilde{y}_{i-P-1}, \tilde{y}_{i-P-2}, \ldots, \tilde{y}_{i-L_c}]^T,$$
$$\boldsymbol{\Xi}_i \triangleq [\xi(i,P+1), \xi(i,P+2), \ldots, \xi(i,L_c)]^T,$$
and the matrix
$$\tilde{\mathbf{V}}_i \triangleq [\mathbf{v}_{i-P-1}, \mathbf{v}_{i-P-2}, \ldots, \mathbf{v}_{i-L_c}]^T.$$
After absorbing the factor $1/\nu^2$ into the step size $\mu$, the proposed update rule can then be written as follows:
$$\mathbf{q}(i+1) = \mathbf{q}(i) - \mu\left(\mathbf{v}_i\,\boldsymbol{\Xi}_i^T\tilde{\mathbf{y}}_i + \tilde{y}_i\,\tilde{\mathbf{V}}_i^T\boldsymbol{\Xi}_i\right).$$

Note that the lattice version of SAM (Equation 24) requires on the order of $L_w(L_c - P)$ multiplications and additions per update, which is comparable to the cost of the original transversal implementation.
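As a small illustration (not from the paper; `sam_cost` is a hypothetical name), the instantaneous cost of Equation 17 can be evaluated directly from a window of TEQ output samples:

```python
def sam_cost(y, P, Lc, nu):
    """Instantaneous SAM cost at the last sample of y: sum over lags
    l = P+1..Lc of (xi(i, l) / nu)^2, with xi a length-nu moving average."""
    i = len(y) - 1
    cost = 0.0
    for l in range(P + 1, Lc + 1):
        xi = sum(y[j] * y[j - l] for j in range(i - nu + 1, i + 1) if j - l >= 0)
        cost += (xi / nu) ** 2
    return cost

# an effective channel with memory <= P leaves all lags beyond P uncorrelated
P, Lc, nu = 2, 6, 4
y_short = [1.0, 0.5, 0.25] + [0.0] * 8
assert sam_cost(y_short, P, Lc, nu) == 0.0
```

A long, correlated output (e.g. a constant sequence) gives a strictly positive cost, which is what the gradient descent of Equation 24 drives down.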

4.3 Normalized lattice SAM algorithm

To improve the convergence rate of the lattice SAM algorithm, a variable step size can be used in the update rule as follows:
$$q_p(i+1) = q_p(i) - \mu_i\,\gamma_p(i), \quad 1 \le p \le L_w,$$

where $\gamma_p(i) = v_p(i)\,\boldsymbol{\Xi}_i^T\tilde{\mathbf{y}}_i + \tilde{y}_i\,[\tilde{\mathbf{V}}_i^T]_{(p,:)}\,\boldsymbol{\Xi}_i$.

In order to determine the optimal value of $\mu_i$, we propose to minimize, at each $i$, the a posteriori cost function $J_{\text{SAM}}^{\text{pos}}$, which depends on the updated TEQ output signal $\tilde{y}_i^{\text{pos}}$. Let us define the a posteriori output signal as follows:
$$\tilde{y}_i^{\text{pos}} = \tilde{r}_i + \sum_{p=1}^{L_w} q_p(i+1)\,v_p^{\text{pos}}(i).$$
To determine $v_p^{\text{pos}}(i)$, we exploit the recursive relations in Equation 9 at the a posteriori time. Neglecting terms proportional to $\mu^m$ for $m \ge 2$, it can be verified that $v_p^{\text{pos}}(i)$ and $u_p^{\text{pos}}(i)$ are recursively computed as follows:
$$v_p^{\text{pos}}(i) = v_p(i) - \mu_i\,t_p(i),$$
$$u_p^{\text{pos}}(i) = u_p(i) - \mu_i\,s_p(i),$$
where t p (i) and s p (i) are determined according to the following:
$$\begin{aligned} t_{p+1}(i) &= t_p(i-1) + q_p(i)\,s_p(i-1) + \gamma_p(i)\,u_p(i-1), \\ s_{p+1}(i) &= q_p(i)\,t_p(i) + \gamma_p(i)\,v_p(i) + s_p(i). \end{aligned}$$
Neglecting again terms in $\mu^m$ for $m \ge 2$ and substituting Equations 27 and 28 into Equation 26, the a posteriori TEQ output can be written as follows:
$$\tilde{y}_i^{\text{pos}} = \tilde{y}_i - \mu_i \sum_{p=1}^{L_w}\left[q_p(i)\,t_p(i) + v_p(i)\,\gamma_p(i)\right].$$
We may define the a posteriori cost function at the i th iteration as follows:
$$J_{\text{SAM}}^{\text{pos}} \triangleq \sum_{l=P+1}^{L_c}\left(\sum_{j=i-\nu+1}^{i-1}\tilde{y}_j\tilde{y}_{j-l} + \tilde{y}_i^{\text{pos}}\tilde{y}_{i-l}\right)^2.$$
Let $\kappa(i) \triangleq \sum_{p=1}^{L_w}\left[q_p(i)\,t_p(i) + v_p(i)\,\gamma_p(i)\right]$. Substituting the expression of $\tilde{y}_i^{\text{pos}}$ from Equation 30 into Equation 31, it follows that
$$J_{\text{SAM}}^{\text{pos}} = J_{\text{SAM}} + \mu_i^2\,\kappa^2(i)\sum_{l=P+1}^{L_c}|\tilde{y}_{i-l}|^2 - 2\mu_i\,\boldsymbol{\Xi}_i^H\tilde{\mathbf{y}}_i\,\kappa(i).$$
The value of $\mu_i$ minimizing $J_{\text{SAM}}^{\text{pos}}$ is found to be
$$\mu_i^{\text{opt}} = \frac{\boldsymbol{\Xi}_i^H\tilde{\mathbf{y}}_i\,\kappa(i)}{\kappa^2(i)\sum_{l=P+1}^{L_c}|\tilde{y}_{i-l}|^2}.$$
The updating rule for the lattice version of SAM is then given by the following:
$$q_p(i+1) = q_p(i) - \alpha\,\mu_i^{\text{opt}}\,\gamma_p(i), \quad 1 \le p \le L_w,$$

where 0 < α ≤ 2 is a fixed stability factor.

5 Lattice implementation of the adaptive MERRY algorithm

Inspired by the behavior of the lattice implementation of the SAM cost function, we examine throughout this section the benefit that can be accomplished by implementing MERRY with the same structure. Therefore, we develop an adaptive algorithm relying on the lattice RLS MERRY implementation.

In the following, we reformulate the minimization of the lattice MERRY cost function as a least squares problem in order to adaptively implement the RLS algorithm. To that end, we define a weighted least squares cost function as follows:
$$J_{\text{MERRY}} \triangleq \sum_{l \in I_k} \lambda^{k-l}\left|\tilde{y}_{lM+P+\delta} - \tilde{y}_{lM+N+P+\delta}\right|^2,$$
where $I_k \triangleq \{0, 1, \ldots, k\}$ and $0 < \lambda \le 1$ is an exponential weighting factor which effectively limits the number of symbols over which the cost function is minimized. The RLS algorithm is derived by first rewriting the LS cost function in terms of the lattice coefficients:
$$J_{\text{MERRY}} = \sum_{l \in I_k} \lambda^{k-l}\left|\tilde{r}_d(l) + \mathbf{q}^T\mathbf{v}_d(l)\right|^2,$$
where $\mathbf{v}_d(l) \triangleq [v_{d_1}(l), \ldots, v_{d_p}(l), \ldots, v_{d_{L_w}}(l)]^T$ with $v_{d_p}(l) \triangleq v_p(lM+P+\delta) - v_p(lM+N+P+\delta)$, and $\tilde{r}_d(l) \triangleq \tilde{r}_{lM+P+\delta} - \tilde{r}_{lM+N+P+\delta}$. Taking the gradient, evaluated at the $(k+1)$th MC symbol for $\mathbf{q} = \mathbf{q}(k)$, we have
$$\nabla_{\mathbf{q}} J_{\text{MERRY}} = \mathbf{q}^T(k)\sum_{l \in I_k}\lambda^{k-l}\mathbf{v}_d(l)\mathbf{v}_d^H(l) + \sum_{l \in I_k}\lambda^{k-l}\tilde{r}_d(l)\mathbf{v}_d^H(l).$$
Setting the cost function gradient to zero results in the following:
$$\mathbf{q}^T(k)\sum_{l \in I_k}\lambda^{k-l}\mathbf{v}_d(l)\mathbf{v}_d^H(l) = -\sum_{l \in I_k}\lambda^{k-l}\tilde{r}_d(l)\mathbf{v}_d^H(l).$$
Defining the correlation matrix $\mathbf{V}_d(k) \triangleq \sum_{l \in I_k}\lambda^{k-l}\mathbf{v}_d(l)\mathbf{v}_d^H(l)$ and the cross-correlation vector $\mathbf{r}_v(k) \triangleq \sum_{l \in I_k}\lambda^{k-l}\tilde{r}_d(l)\mathbf{v}_d(l)$, the LS solution at time instant $k$ is then stated as follows:
$$\mathbf{q}^T(k) = -\mathbf{r}_v^H(k)\,\mathbf{V}_d^{-1}(k).$$
We start the derivation of the recursive algorithm by expressing the deterministic correlations matrix, V d (k), and the deterministic cross-correlation vector, r v (k), in their recursive forms:
$$\mathbf{V}_d(k) = \lambda\,\mathbf{V}_d(k-1) + \mathbf{v}_d(k)\mathbf{v}_d^H(k), \qquad \mathbf{r}_v(k) = \lambda\,\mathbf{r}_v(k-1) + \tilde{r}_d(k)\mathbf{v}_d(k).$$
In order to generate the coefficient vector of Equation 38, we are interested in the inverse of the deterministic autocorrelation matrix $\mathbf{V}_d(k)$. For that task, the matrix inversion lemma comes in handy. Denoting by $\mathbf{P}(k)$ the inverse of the matrix $\mathbf{V}_d(k)$, we obtain
$$\mathbf{P}(k) = \lambda^{-1}\mathbf{P}(k-1) - \frac{\lambda^{-2}\,\mathbf{P}(k-1)\mathbf{v}_d(k)\mathbf{v}_d^H(k)\mathbf{P}(k-1)}{1 + \lambda^{-1}\mathbf{v}_d^H(k)\mathbf{P}(k-1)\mathbf{v}_d(k)}.$$
With these equations, the RLS algorithm becomes easy to follow and to implement without the need for matrix inversion. The forgetting factor $\lambda$ weights the contribution of previous samples, making the filter more or less sensitive to recent samples. The matrix $\mathbf{P}(k)$ is typically initialized as a scaled identity matrix $\rho\,\mathbf{I}_{L_w}$, where $\rho$ is a large positive constant. The derivation results in a single recursion for the lattice coefficient vector minimizing the cost function:
$$\mathbf{q}(k) = \left[\mathbf{I}_{L_w} - \tilde{\mathbf{P}}^T(k-1)\right]\mathbf{q}(k-1) - \mathbf{P}^T(k)\,\mathbf{v}_d(k)\,\tilde{r}_d(k),$$

where $\tilde{\mathbf{P}}(k-1) = \dfrac{\mathbf{v}_d(k)\mathbf{v}_d^H(k)\mathbf{P}(k-1)}{\lambda + \mathbf{v}_d^H(k)\mathbf{P}(k-1)\mathbf{v}_d(k)}$. The computational cost of this algorithm is approximately $2L_w^2$ multiply-accumulate operations.
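One update of this recursion can be sketched in plain Python (real-valued, list-based matrices for a small $L_w$; `rls_merry_step` is an illustrative name). For real data, Equation 41 is algebraically equivalent to the usual gain-times-error form used below:

```python
def rls_merry_step(q, P, v_d, r_d, lam):
    """One RLS-MERRY update: propagate P(k) = V_d(k)^(-1) with the matrix
    inversion lemma, then correct the reflection-coefficient vector q."""
    Lw = len(q)
    Pv = [sum(P[m][n] * v_d[n] for n in range(Lw)) for m in range(Lw)]   # P(k-1) v_d(k)
    denom = lam + sum(v_d[m] * Pv[m] for m in range(Lw))                 # lambda + v^T P v
    # rank-one downdate of the inverse correlation matrix (matrix inversion lemma)
    P_new = [[(P[m][n] - Pv[m] * Pv[n] / denom) / lam for n in range(Lw)]
             for m in range(Lw)]
    e = r_d + sum(q[m] * v_d[m] for m in range(Lw))       # a priori error r_d + q^T v_d
    gain = [Pv[m] / denom for m in range(Lw)]             # Kalman gain P(k) v_d(k)
    q_new = [q[m] - gain[m] * e for m in range(Lw)]
    return q_new, P_new
```

For example, starting from q = 0 and P = 100·I, one step with v_d = [1, 0], r_d = 1 and λ = 1 moves q_1 to -100/101, i.e., almost the full LS correction, as expected when P is initialized with a large ρ.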

6 Simulation results

This section examines the lattice-structure performance of the blind MERRY and SAM algorithms. The performance of the proposed implementations is given in terms of the convergence properties of the algorithms and simulation results in the DSL environment.

6.1 SAM convergence properties

In the following, we propose to compare the cost surface behavior of the lattice to the transversal TEQs adjusted by the SAM criterion. Let us consider an example of the two-pole channel model where the transfer function is stated as follows:
$$H(z) = \frac{1 - 0.5z^{-1}}{\left(1 - 0.8e^{j\pi/2}z^{-1}\right)\left(1 - 0.8e^{-j\pi/2}z^{-1}\right)}.$$
The CP is composed of two samples, the lattice shortener has three taps ($L_w = 2$), and no noise is considered. Three-dimensional and contour plots of the SAM cost function are shown in Figures 3 and 4, in terms of spherical coordinates for the direct transversal implementation and of the TEQ reflection coefficients $q_1$ and $q_2$ for the lattice implementation. We observe that, in the working range $|q_p| < 1$, the lattice SAM cost in this example is convex and has a unique minimum, while for the transversal structure there are four minima having equivalent values of the SAM cost.
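As a sanity check (illustrative code, not from the paper; `impulse_response` is a hypothetical name), the impulse response of this example channel follows from its difference equation; the pole pair at 0.8e^{±jπ/2} gives the denominator 1 + 0.64 z^{-2}:

```python
def impulse_response(a, b, n):
    """First n samples of H(z) = A(z) / B(z) with B(z) = 1 + sum_p b[p-1] z^-p."""
    h = []
    for i in range(n):
        acc = a[i] if i < len(a) else 0.0
        acc -= sum(b[p - 1] * h[i - p] for p in range(1, len(b) + 1) if i - p >= 0)
        h.append(acc)
    return h

# numerator 1 - 0.5 z^-1, denominator 1 + 0.64 z^-2
h = impulse_response(a=[1.0, -0.5], b=[0.0, 0.64], n=12)
# |h[i]| decays only like 0.8**i, so the tail extends far beyond a two-sample CP
```

The oscillating, slowly decaying tail is what a lattice TEQ with zeros at ±0.8j would remove.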
Figure 3

Transversal SAM cost function versus spherical coordinates θ and ϕ . (a) Three-dimensional plot and (b) contour plot.

Figure 4

Lattice SAM cost function versus q 1 and q 2 . (a) Three-dimensional plot and (b) contour plot.

6.2 Convergence properties of the RLS-MERRY algorithms

Furthermore, we compare the convergence properties of the proposed lattice RLS-MERRY algorithm to those of its transversal counterpart. The transversal RLS-MERRY algorithm is detailed in depth in [20]. Let us consider the same example channel transfer function given in the previous paragraph. To examine the convergence of the RLS-MERRY algorithms to the optimal solution, the mean square deviation (MSD) performance metric is used. The MSD is taken with respect to the optimal coefficients corresponding to the minimum of the MERRY cost function:
$$\text{MSD} \triangleq \frac{\|\mathbf{w}(k) - \mathbf{w}^\star\|^2}{\|\mathbf{w}^\star\|^2}.$$
The optimal TEQ, $\mathbf{w}^\star$, is obtained by averaging the filter coefficients, determined after algorithm convergence, over 100 independent runs. In the case of the lattice structure, the RLS-MERRY reflection coefficients are computed using the update rule (Equation 41), and the mapping from lattice to transversal form is then made through the Levinson-Durbin recursions. Figure 5 shows the MSD variation of the lattice RLS-MERRY algorithm compared with the transversal RLS-MERRY TEQ. The MSD curves are obtained by averaging 100 independent runs. For both RLS-MERRY algorithms, the forgetting factor is $\lambda = 0.99$ and the scaling constant is $\rho = 100$.
Figure 5

Lattice RLS-MERRY MSD versus MC symbol block number.

Based on the MSD plot in Figure 5, we observe that, for the same convergence rate, the steady-state value of the MSD produced by the lattice implementation is much smaller than that of the transversal implementation. To confirm this result, Figure 6 shows the TEQ zero locations for the transversal (a) and lattice (b) structures. We remark that when the lattice structure is implemented, the TEQ zeros are very close to the channel poles.
Figure 6

Pole-zero plot of MERRY TEQ after convergence. (a) Transversal TEQ and (b) Lattice TEQ.

6.3 DSL environment simulation

Throughout this section, we compare the performance of the different studied TEQ implementations, specifically in the considered DSL environment where the transmission channel is modeled as an IIR channel. Indeed, it is shown that, in twisted pair lines, the channel is well modeled by an IIR filter with a slowly decaying IR [8]. This means that the channel transfer function presents poles very close to the UC.

To shorten the DSL channel, an effective TEQ will place zeros on the critical poles to cancel them out, which motivates the use of the lattice structure to implement a minimum phase TEQ and to ensure convergence of the multimodal SAM algorithm to good stationary points. The lattice structure is likewise adopted to implement the blind MERRY algorithm.

To compare the performance of different implementations of adaptive equalizers, simulation results are presented for a set of standard ADSL channels. The transfer functions for the different channels include the model of the copper wire itself as well as the digital and analog transmit and receive filters. Hence, we consider carrier serving area (CSA) test loops combined with a plain old telephone service splitter. Actually, the parameters were chosen to match the downstream ADSL standard system: the cyclic prefix is 32 samples, and the FFT size is 512. The external noise consists of -140 dBm/Hz additive white Gaussian noise as given in [21], and no crosstalk is considered. In the following simulations, we will focus only on equalization issues without optimizing the power allocation of the different subcarriers. We suppose that 16-QAM signaling is used on all of the subchannels, and the sampling frequency is 2.208 MHz.

To analyze the performance of the lattice implementations of the SAM and RLS-MERRY algorithms, we use the achievable bit rate as performance metric. The achievable bit rate of the multicarrier channel can be written as the sum of the capacities of the AWGN subchannels
R = \sum_{n \in V} \log_2 \left( 1 + \frac{\mathrm{SNR}_n}{\Gamma} \right),

where V is the index set of used subchannels and Γ is the SNR gap to Shannon capacity, assumed constant over all subchannels. The SNR gap is a function of the target bit error probability [22]; we take Γ = 9.8 dB. The subchannel SNR n , as defined in [13], incorporates both interference and noise distortion. The bit rate was measured and averaged over 100 realizations of each channel-shortening procedure.
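The bit-rate metric above can be sketched numerically. The conversion from bits per DMT symbol to bits per second by the factor f_s/(N + ν) is a standard assumption, and the flat 40 dB subchannel SNRs and tone range are purely illustrative, not values from the paper:

```python
import numpy as np

def achievable_bit_rate(snr_per_subchannel, used, gap_db=9.8,
                        fs=2.208e6, nfft=512, cp=32):
    """Achievable bit rate (bps): bits per DMT symbol summed over the
    used subchannels, divided by the symbol duration (N + CP samples)."""
    gap = 10.0 ** (gap_db / 10.0)          # SNR gap, linear scale
    bits_per_symbol = sum(np.log2(1.0 + snr_per_subchannel[n] / gap)
                          for n in used)
    return bits_per_symbol * fs / (nfft + cp)

# Hypothetical flat 40 dB subchannel SNRs on 200 used tones:
snr = {n: 1.0e4 for n in range(38, 238)}
rate = achievable_bit_rate(snr, used=snr.keys())
print(rate / 1e6)  # achievable rate in Mbps
```

In the simulations reported below, SNR_n is of course not flat; it is measured per subchannel after channel shortening, so the metric directly reflects the residual interference left by each TEQ.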

For the implementation of the normalized version of the lattice SAM algorithm, we consider a moving-average window of length ν = 100. Figure 7 shows the normalized lattice SAM achievable bit rate versus the fixed step size α for different equalizer lengths L ~ w = L w + 1 . For the other CSA loops, the optimal values of the TEQ order and fixed step size, together with the achievable bit rate, are summarized in Table 1. Figure 8 shows the bit rate evolution over time (sample index), comparing the normalized lattice and the original transversal implementations of the SAM algorithm. From this figure, we can see that the normalized lattice SAM equalizer outperforms the transversal SAM algorithm (which is also updated with a moving-average implementation).
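SAM minimizes the sum-squared autocorrelation of the equalizer output at lags beyond the cyclic prefix length [16]. A minimal sketch of that cost, evaluated on synthetic signals (the proxy signals, seed, and lag range are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def sam_cost(y, cp_len, max_lag):
    """SAM cost: sum of squared output autocorrelations at lags
    beyond the cyclic prefix length (estimated by time averaging)."""
    n = len(y)
    cost = 0.0
    for lag in range(cp_len + 1, max_lag + 1):
        r = np.dot(y[lag:], np.conj(y[:n - lag])) / (n - lag)
        cost += np.abs(r) ** 2
    return cost

# A well-shortened output has little correlation beyond the CP:
rng = np.random.default_rng(0)
white = rng.standard_normal(4096)                    # "shortened" output proxy
long_ir = np.convolve(white, 0.9 ** np.arange(100))  # slowly decaying channel
print(sam_cost(white, cp_len=32, max_lag=64) <
      sam_cost(long_ir, cp_len=32, max_lag=64))
```

The adaptive algorithm replaces these batch averages with a moving-average estimate of the autocorrelations (window ν above) and takes stochastic-gradient steps on the lattice reflection coefficients.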
Figure 7

Normalized lattice SAM achievable bit rate as function of fixed step size ( α ) and equalizer length ( L ~ w ). Simulations for an upstream ADSL in the CSA loop 1.

Table 1

Normalized lattice SAM achievable bit rate for the eight CSA loops, as a function of the optimal values of the equalizer order, L ~ w , and fixed step size, α. Columns: CSA loop number, L ~ w , α opt, R (Mbps). (Table entries lost in extraction.)

Figure 8

SAM bit rate versus iteration number (MC sample index, i ), CSA loop 1.

Notably, the RLS-MERRY channel-shortening algorithm depends on the synchronization parameter δ. Figure 9 shows the RLS-lattice MERRY achievable bit rate as a function of the delay parameter, δ, and the TEQ order, L ~ w , for CSA loop 1. For the other CSA loops, the optimal values of the delay parameter and equalizer length are given in Table 2, where the achievable bit rates of the RLS-transversal MERRY and RLS-lattice MERRY algorithms are also reported. Figure 10 displays the bit rate evolution over the iteration number (MC symbol number), comparing the RLS-lattice and the original RLS-transversal implementations of MERRY. We can see that the lattice structure yields a noticeable improvement in achievable bit rate. Recall that the unit-first-tap constraint is imposed by the lattice structure for all studied algorithms. Moreover, the lattice implementation of the RLS-MERRY equalizer outperforms the lattice SAM TEQ for all eight CSA loops.
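MERRY restores the redundancy of the cyclic prefix: after shortening, the received sample at offset δ inside the prefix should equal the sample one FFT size later, so the algorithm minimizes the mean-squared difference between those two samples [15]. A minimal sketch of that cost (the block count, seed, and 3-tap channel are illustrative assumptions):

```python
import numpy as np

def merry_cost(y, nfft, cp_len, delta):
    """MERRY cost: mean-squared difference between the sample at offset
    delta in each symbol and the sample nfft positions later (the pair
    that the cyclic prefix makes identical when the channel is short)."""
    sym_len = nfft + cp_len
    n_sym = (len(y) - nfft - delta) // sym_len
    idx = delta + np.arange(n_sym) * sym_len
    e = y[idx] - y[idx + nfft]
    return np.mean(np.abs(e) ** 2)

# Build CP'd blocks; with a noiseless channel shorter than the CP, the
# prefix sample and the sample N later coincide, so the cost is zero.
rng = np.random.default_rng(1)
nfft, cp = 512, 32
blocks = [np.concatenate([b[-cp:], b])
          for b in rng.standard_normal((20, nfft))]
tx = np.concatenate(blocks)
rx = np.convolve(tx, [1.0, 0.5, 0.25])[:len(tx)]  # 3-tap channel < CP
print(merry_cost(rx, nfft, cp, delta=8))
```

When the effective channel is longer than the CP, inter-block leakage makes the two samples differ, so driving this cost down with an RLS update on the TEQ coefficients shortens the channel.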
Figure 9

RLS-lattice MERRY achievable bit rate as a function of the delay parameter ( δ ) and the TEQ order ( L ~ w ). Simulations for an upstream ADSL in the CSA loop 1.

Table 2

Achievable bit rate as a function of the optimal values of the equalizer order, L ~ w , and delay parameter, δ, for the eight CSA loops shortened with the transversal RLS-MERRY and lattice RLS-MERRY algorithms. Columns (per algorithm): L ~ w , δ opt, R (Mbps). (Table entries lost in extraction.)

Figure 10

RLS-MERRY bit rate evolution versus iteration number (MC block number, k ). Simulations for an upstream ADSL in the CSA loop 1.

7 Conclusions

A specific framework of MC transmission channels has been studied in this paper. We have considered a slowly decaying recursive channel model. On the basis of the pole-zero model, we have shown that the channel poles introduce an interference term in each received MC symbol; thus, to shorten this kind of channel, an effective TEQ must place its zeros on the critical poles to cancel them out. However, inaccurate zero placement may limit the channel-shortening performance, which motivates the use of the lattice TEQ structure, whose zeros can be placed very close to the channel poles.

Furthermore, to avoid explicit linear-algebra solutions and to keep complexity low, we have adopted adaptive algorithms to update the TEQ, which is implemented with the lattice structure. Hence, lattice implementations of the SAM and MERRY adaptive algorithms have been proposed. With computational complexity similar to that of the original transversal versions, the convergence of the lattice algorithms has been observed. In addition, we have shown that, compared to the direct transversal implementation, the lattice TEQ algorithms achieve higher performance in terms of achievable bit rate.



The work of R. Lopez-Valcarce is supported by the Spanish Government, ERDF funds (TEC2010-21245-C02-02/TCM DYNACS, CONSOLIDER-INGENIO 2010 CSD2008-00010 COMONSENS), and the Galician Regional Government (CN 2012/260 AtlantTIC).

Authors’ Affiliations

1. Cosim Laboratory, High School of Communications of Tunis (SUPCOM), Ariana, Tunisia
2. Department of Signal Theory and Communications, University of Vigo, Vigo, Spain


  1. Bingham JA: Multicarrier modulation for data transmission: an idea whose time has come. IEEE Commun. Mag. 1990, 28: 5-14.
  2. Starr T, Cioffi JM, Silverman PJ: Understanding Digital Subscriber Line Technology. Prentice Hall, Englewood Cliffs; 1999.
  3. IEEE: IEEE 802.11 wireless local area networks: the working group WLAN standards. 2010. Accessed 10 July 2012.
  4. ETSI Normalization Committee: High performance radio local area networks (HIPERLAN) type 2. 1995. Accessed 1 Sept 2012.
  5. ETSI Normalization Committee: Radio broadcasting systems; digital audio broadcasting (DAB) to mobile, portable and fixed receivers. 2006. Accessed 1 Sept 2012.
  6. ETSI Normalization Committee: Digital video broadcasting (DVB): framing structure, channel coding and modulation for digital terrestrial television. 2004. Accessed 1 Sept 2012.
  7. Martin RK, Vanbleu K, Ding M, Ysebaert G, Milosevic M, Evans BL, Moonen M, Johnson CR Jr: Unification and evaluation of equalization structures and design algorithms for discrete multitone modulation systems. IEEE Trans. Signal Process. 2005, 53: 3880-3894.
  8. Chen WY: DSL Simulation Techniques and Standard Development for Digital Subscriber Line Systems. Macmillan Technical Publishing, Indianapolis; 1998.
  9. Al-Dhahir N, Sayed AH, Cioffi JM: Stable pole-zero modeling of long FIR filters with application to the MMSE-DFE. IEEE J. Selected Areas Comm. 1997, 45: 508-513.
  10. Melsa PJW, Younce RC, Rohrs CE: Impulse response shortening for discrete multitone transceivers. IEEE Trans. Commun. 1996, 44: 1662-1672.
  11. Al-Dhahir N, Cioffi JM: Bandwidth-optimized reduced-complexity equalized multicarrier transceiver. IEEE J. Selected Areas Comm. 1997, 45: 948-956.
  12. Arslan G, Evans BL, Kiaei S: Optimum channel shortening for discrete multitone transceivers. Int. Conf. Acoustics, Speech, Signal Process. 2000, 5: 2965-2968.
  13. Arslan G, Evans BL, Kiaei S: Equalization for discrete multitone receivers to maximize bit rate. IEEE Trans. Signal Process. 2001, 49: 3123-3135.
  14. Ben Salem E, Cherif S, Besbes H, López-Valcarce R: Fractionally spaced channel shortening by subchannels SINR maximization in multicarrier system. Proceedings of 6th International Symposium on Image and Signal Processing and Analysis (ISPA 2009). IEEE, Piscataway; 2009: 53-58.
  15. Martin RK, Balakrishnan J, Sethares WA, Johnson CR: A blind adaptive TEQ for multicarrier systems. IEEE Signal Process. Lett. 2002, 9: 341-343.
  16. Balakrishnan J, Martin RK, Johnson CR Jr: Blind, adaptive channel shortening by sum-squared autocorrelation minimization (SAM). IEEE Trans. Signal Process. 2003, 51: 3086-3093.
  17. Betser A, Zeheb E: Reduced order IIR approximation to FIR digital filters. IEEE Trans. Signal Process. 1991, 39: 2540-2544.
  18. Crespo P, Honig M: Pole-zero decision feedback equalization with a rapidly converging adaptive IIR algorithm. IEEE J. Selected Areas Comm. 1991, 6: 817-829.
  19. Bellanger M: Adaptive Digital Filters and Signal Processing. Marcel Dekker Inc., New York; 1987.
  20. Martin RK: Fast-converging blind adaptive channel-shortening and frequency-domain equalization. IEEE Trans. Signal Process. 2007, 55: 102-110.
  21. Sistanizadeh K: Loss characteristics of the proposed canonical ADSL loops with 100-ohm termination at 70, 90, and 120 F. ANSI T1E1.4 Committee Contribution 161, Washington, DC; 1991.
  22. Forney GD, Eyuboglu MV: Combined equalization and coding using precoding. IEEE Commun. Mag. 1991, 12: 25-34.


© Ben Salem et al.; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.