
Optimal MSE solution for a decision feedback equalizer

Abstract

Due to the inherent feedback in a decision feedback equalizer (DFE), the minimum mean square error (MMSE) or Wiener solution is not known exactly. The main difficulty in such an analysis is the propagation of decision errors, which occur because of the feedback. Thus, in the literature, these errors are neglected while designing and/or analyzing DFEs. A closed-form expression is then obtained for the Wiener solution, and we refer to this as the ideal DFE (IDFE). DFEs have also been designed using an iterative and computationally efficient alternative, the least mean square (LMS) algorithm. However, again due to the feedback involved, no complete analysis of an LMS-DFE is available so far. In this paper we theoretically analyze a DFE taking into account the decision errors. We study its performance at steady state. We then study an LMS-DFE and show the proximity of the LMS-DFE attractors to the optimal DFE Wiener filter (obtained after considering the decision errors) at high signal to noise ratios (SNR). Further, via simulations we demonstrate that, even at moderate SNRs, an LMS-DFE is close to the MSE-optimal DFE. Finally, we compare the LMS-DFE attractors with the IDFE via simulations and show that an LMS equalizer outperforms the IDFE. In fact, the performance improvement is very significant even at high SNRs (up to 33%), where an IDFE is believed to be closer to the optimal one. Towards the end, we briefly discuss the tracking properties of the LMS-DFE.

Introduction

A channel equalizer is an important component of a communication system, used to mitigate the inter-symbol interference (ISI) introduced by the channel. The equalizer depends upon the channel characteristics. A variety of equalizers have been proposed and utilized in communication systems [1–3]. Usually simple linear equalizers (LE) suffice (see, e.g., [1–3]), but a channel with deep spectral nulls requires a more complex, nonlinear equalizer like a decision feedback equalizer (DFE).

An LE is a linear filter used to mitigate ISI, while a Wiener filter (WF) equalizer is an optimal filter that minimizes the mean square error (MSE) between the input symbols and the decoded symbols (decoded after the equalizer). A closed-form expression for the WF LE is available ([4, 5], etc.). This closed-form expression involves a matrix inverse, which can be computationally intensive if the filter has a large dimension. Alternatively, the least mean square linear equalizer (LMS-LE), a computationally efficient iterative algorithm, is used extensively (see [4–6]) to obtain the WF equalizer. It can also track time variations in the WF, if required, as in the case of wireless channels. For a fixed channel, its convergence to the WF has been studied in [6, 7] (see also the references therein). Its performance on a wireless (time-varying) channel has been studied theoretically in [8, 9] (see also [4, 5, 10] and the references therein, where the performance has been studied via simulations, approximations and upper bounds on the probability of error).

Decision feedback equalizers are nonlinear equalizers (a pair of linear filters, one in the forward path and another in the feedback path) that can provide significantly better performance than LEs [3, 11, 12], especially for ‘bad’ channels. A DFE feeds back the previous decisions of the transmitted symbols to nullify the ISI due to them (which can now be done without amplifying the thermal noise) and makes a better decision about the current symbol. Although these equalizers have also been used for quite some time, due to the feedback their behavior is much more complex than that of LEs, and hence their performance is not well understood. The existence of a hard decoder inside the feedback loop, due to its nonlinearity, makes the study all the more difficult. A DFE mainly exploits the finite alphabet structure of the hard decoder output [2, 13], and hence the hard decoder cannot be ignored (i.e., its performance is better than that of a system with a soft decoder).

Since the statistics of the previous decisions in a DFE are not known, there is no known technique that provides a minimum MSE (MMSE) DFE (we will call it the DFE-WF in the rest of the article), even for a fixed channel [2, 3, 14]. Thus an MMSE DFE is commonly designed by assuming perfect decisions (see, e.g., [2, 15]). For convenience, for the rest of the article, we will call such a DFE an ideal DFE (IDFE). In this article the IDFE is also computed using perfect channel estimates. The IDFE often outperforms the linear WF significantly [3, 11, 12]. But it is generally believed that the DFE-WF, the true MSE-optimal DFE (designed considering the decision errors), can outperform even this.

Another way to obtain an optimal filter is to replace the feedback filter at the receiver by a precoder at the transmitter [3, 14]. This way one can indeed obtain the optimal filter but this requires the knowledge of the channel at the transmitter. For wireless channels, which are time varying, this is often not an attractive solution [2, 3].

Some research has been done to deal with the decision errors in a DFE. Sternad et al. [16] approximated the errors in the decisions by an additive white Gaussian noise (AWGN) independent of the input sequence and obtained a DFE WF. But, as stated in that article, this approximation is not realistic. Erdogan et al. [13] obtain an H∞ optimal DFE taking into account the decision errors; however, no comparison to the DFE-WF was provided.

The IDFE also involves a matrix inverse, for which LMS is again used as a computationally efficient alternative in practical communication systems. However, the convergence of an LMS-DFE is not well understood even for a fixed channel, again due to the complexity introduced by the feedback. The trajectory of the LMS-DFE algorithm on a fixed channel, with a soft decoder in the feedback loop, has been approximated by an ODE in [17]. But this ODE does not approximate the LMS-DFE with a hard decoder. Benveniste et al. [6] have shown the ODE approximation of an LMS-DFE with a hard decoder, but the ODE obtained by them is not explicit enough. Furthermore, they do not relate the attractors of this ODE to the DFE-WF.

Our conjecture is that LMS can actually converge to the true DFE WF (obtained considering the decision errors), and one of the main goals of this article is to prove the same. In this article, we study an LMS-DFE on a fixed channel using an ODE approximation. Towards this, we first obtain the stationary performance of the system with a DFE and prove the existence of the DFE-WF (the minimum MSE solution) for every channel state (whenever the domain of optimization is compact). We then show that the DFE-WF and an LMS-DFE attractor are close to each other at high signal to noise ratios (SNRs). We show via simulations that the same is true for nominal values of SNR.

Further, we demonstrate via simulations that the LMS-DFE can outperform the commonly used IDFE at all practical SNRs. An interesting observation is that the improvement is significant even at high SNRs, where an IDFE does not suffer from error propagation and is believed to be close to the true DFE-WF.

The article is organized as follows. Our system model, notation and assumptions are discussed in Section “The model and notation”. In Section “The issues and our approach” we discuss our approach. Section “Analysis of LMS-DFE and DFE-WF” obtains an ODE approximation and then analyzes the attractors of the LMS-DFE. Section “Numerical examples” provides some examples. Section “Tracking analysis” briefly discusses the tracking behavior, while Section “Conclusions” concludes the article. Appendices 1 to 5 provide proofs of our theorems.

The model and notation

We consider a communication system with a DFE (see Figure 1). Inputs $\{s_k\}$ enter a finite impulse response channel $\{z_l\}_{l=0}^{L-1}$ and are corrupted by an additive zero mean white Gaussian noise $\{n_k\}$ with variance $\sigma^2$. The channel output $u_k$ at time $k$ is given by

$$u_k = \sum_{l=0}^{L-1} s_{k-l}\, z_l + n_k.$$
Figure 1. Block diagram of a wireless channel followed by a DFE.

The channel output passes through a DFE given by a linear forward filter $\theta_f$ and a linear feedback filter $\theta_b$. In addition, there is a hard decoder $Q(\cdot)$. The output of the decoder at time $k$ is

$$\hat{s}_k = Q\!\left(\sum_{l=0}^{N_f-1} \theta_f^l\, u_{k-l} + \sum_{l=1}^{N_b} \theta_b^l\, \hat{s}_{k-l}\right).$$
(1)
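To make the model concrete, here is a minimal NumPy sketch (ours, not from the original article) that simulates the channel and the decoder recursion (1) for BPSK inputs; the function name run_dfe and all parameter names are illustrative choices of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_dfe(s, z, theta_f, theta_b, sigma):
    """Simulate u_k = sum_l s_{k-l} z_l + n_k followed by the DFE recursion (1)
    with the BPSK hard decoder Q(x) = 1{x >= 0} - 1{x < 0}."""
    u = np.convolve(s, z)[:len(s)] + rng.normal(0.0, sigma, size=len(s))
    Nf, Nb = len(theta_f), len(theta_b)
    s_hat = np.zeros(len(s))
    for k in range(len(s)):
        # forward filter: sum_{l=0}^{Nf-1} theta_f^l u_{k-l}
        fwd = sum(theta_f[l] * u[k - l] for l in range(min(Nf, k + 1)))
        # feedback filter over past decisions: sum_{l=1}^{Nb} theta_b^l s_hat_{k-l}
        fb = sum(theta_b[l - 1] * s_hat[k - l] for l in range(1, min(Nb, k) + 1))
        s_hat[k] = 1.0 if fwd + fb >= 0 else -1.0   # hard decoder Q(.)
    return u, s_hat
```

For instance, s = rng.choice([-1.0, 1.0], size=100_000) with z = np.array([0.41, 0.82, 0.41]) (the channel of Table 2) reproduces the setting studied later.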

We provide below the assumptions made and the notations used in this article. Most of these assumptions can be generalized as discussed at the end of this section.

  • Sequences $\{s_k\}$ and $\{n_k\}$ are independent and identically distributed (i.i.d.) sequences and are independent of each other. The inputs $\{s_k\}$ are uniformly distributed over $\{+1,-1\}$ (BPSK modulation).

  • $f_N(y)$ is the $N$-dimensional standard i.i.d. Gaussian density, where $N$ is the dimension of the vector $y$, i.e., $f_N(y) = (2\pi)^{-N/2}\exp\left(-\frac{\|y\|^2}{2}\right)$. Whenever not mentioned, integrability is with respect to $f_N(y)\,dy$.

  • The equalizer forward and feedback filters are given by $\{\theta_f^l\}_{l=0}^{N_f-1}$ and $\{\theta_b^l\}_{l=1}^{N_b}$, respectively. Also, let $N_L \triangleq N_f + L - 1$.

  • We assume that the symbols are modulated using BPSK and so the hard decoder in (1) equals $Q(x) := 1_{\{x \ge 0\}} - 1_{\{x < 0\}}$.

  • For any vector $x$, $x_l$ represents its $l$th component and $x_l^k$, $l \le k$, represents the vector $[x_k\ x_{k-1}\ \cdots\ x_l]^T$.

    The following vector notations are used:

$$S_k \triangleq s_{k-N_L+1}^k,\quad N_k \triangleq n_{k-N_f+1}^k,\quad U_k \triangleq u_{k-N_f+1}^k,\quad \hat{S}_k \triangleq \hat{s}_{k-N_b+1}^k,\quad X_k \triangleq \left[U_k^T\ \hat{S}_{k-1}^T\right]^T,$$
$$G_k \triangleq \left[S_k^T\ X_k^T\right]^T,\quad \theta_f \triangleq \left[\theta_f^0\ \cdots\ \theta_f^{N_f-1}\right]^T,\quad \theta_b \triangleq \left[\theta_b^1\ \cdots\ \theta_b^{N_b}\right]^T,\quad J_k \triangleq \left[S_k^T\ \hat{S}_{k-1}^T\ N_k^T\right]^T,$$
$$\Theta \triangleq \left[\theta_f^T\ \theta_b^T\right]^T,\quad Z \triangleq \left[z_0,\ z_1,\ \ldots,\ z_{L-1}\right].$$

In the above, $S_k$, $U_k$, $N_k$ and $\hat{S}_{k-1}$, respectively, represent the vectors of input symbols, channel outputs, noise samples and decoder decisions that influence the equalizer output at time $k$. Vector $X_k$ forms the input to the equalizer at time $k$, while $G_k$, $J_k$ are two alternate representations of the system state at time $k$. Vector $Z$ is the vector form of the channel, while $\theta_f$, $\theta_b$ are those of the equalizer feed-forward and feedback filters.

  • $\Theta_k$ represents the time-varying equalizer at time $k$.

  • Let $\mathcal{S} := \{+1,-1\}$. Under the above assumptions, $\{G_k\}$ and $\{J_k\}$ are Markov chains for a fixed channel–equalizer pair $(Z,\Theta)$. These two Markov chains take values in $\mathcal{S}^{N_L}\times\mathcal{S}^{N_b}\times\mathbb{R}^{N_f}$, where $\mathbb{R}$ is the set of real numbers. The current and previous states of both these Markov chains are represented by the ordered pairs $(i,y)$ and $(j,y')$, respectively. Here $i, j$ take values in the discrete part of the state space, $\mathcal{S}^{N_L}\times\mathcal{S}^{N_b}$, while $y, y'$ take values in $\mathbb{R}^{N_f}$.

  • $\Psi = \{\psi_l\}_{l=0}^{N_L-1}$ represents the convolution of the channel $\{z_l\}$ and the forward filter $\theta_f$.

  • The input to the hard decoder for a given state of the Markov chain is represented by

$$e_\Theta(i,y) := \sum_{l=0}^{N_L-1}\psi_l\, s_{k-l} + \sum_{l=0}^{N_f-1}\theta_f^l\, n_{k-l} + \sum_{l=1}^{N_b}\theta_b^l\, \hat{s}_{k-l}.$$

    Note that $\hat{s}_{k-1} = Q\left(e_\Theta(j,y')\right)$.

  • $B(\Theta,\delta)$ and $\bar{B}(\Theta,\delta)$ are the open and closed balls, respectively, with center $\Theta$ and radius $\delta$.

  • We assume that the equalizer output without noise satisfies $e_\Theta(i,0) \neq 0$ for all values of $i$ at an LMS attractor. Without this assumption, the LMS algorithm makes more errors than correct decisions.

Thus, the channel outputs $\{u_k\}$ pass through a DFE $\Theta$ with a hard decoder. The performance of this system depends upon the DFE filters $\Theta$. We are interested in a filter $\Theta$ that minimizes the commonly used criterion, the mean of the squared error between the input symbol $s_k$ and the corresponding equalizer output $\Theta^t X_k$ (MSE):

$$\text{MSE} = E\left[\left(s_k - \Theta^t X_k\right)^2\right].$$
(2)

The LMS algorithm,

$$\Theta_{k+1} = \Theta_k - \mu_k H_{\Theta_k}(G_k); \qquad H_\Theta(G) \triangleq X\left(X^t\Theta - s\right),$$
(3)

a computationally efficient iterative algorithm, is expected to provide the MMSE solution. However, with a feedback structure inserted, the convergence behavior of LMS is not properly understood. In fact, it is not even clear whether the minimum mean square problem is well posed, nor whether an MMSE solution exists. Even prior to these questions, one first needs to define the expectation in (2) appropriately. One is often interested in optimizing a stationary performance, i.e., the expectation in (2) is with respect to the stationary distribution of the system. However, the stationary distribution depends upon the parameter Θ, and the existence of the stationary distribution for any given Θ is not known. We take up these issues one by one; our final goal is to show that the iterative algorithm (3) indeed converges close to the MMSE solution.
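As a concrete illustration of (3), the following sketch (ours, continuing the NumPy conventions of the earlier sketch) runs the LMS-DFE in training mode: the true symbols s_k drive the update, while the hard decisions drive the feedback.

```python
def lms_dfe(s, u, Nf, Nb, mu=0.005):
    """LMS-DFE update (3): Theta_{k+1} = Theta_k - mu * X_k (X_k^t Theta_k - s_k),
    with X_k = [u_k ... u_{k-Nf+1}, s_hat_{k-1} ... s_hat_{k-Nb}]^t."""
    theta = np.zeros(Nf + Nb)                # Theta = [theta_f^t theta_b^t]^t
    s_hat = np.zeros(len(s))
    for k in range(max(Nf - 1, Nb), len(s)):
        X = np.concatenate((u[k - Nf + 1:k + 1][::-1],   # U_k, newest sample first
                            s_hat[k - Nb:k][::-1]))      # past decisions S_hat_{k-1}
        y = theta @ X                                    # soft equalizer output
        s_hat[k] = 1.0 if y >= 0 else -1.0               # hard decision fed back
        theta -= mu * X * (y - s[k])                     # stochastic-gradient step
    return theta
```

A constant step size mu is used here for simplicity; the convergence results below are stated for decreasing step sizes μ_k.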

One can easily extend the theory of this article to any finite alphabet (complex) input source with any arbitrary distribution and to a complex channel. However, we stick to BPSK modulation and a real channel to keep the explanations simple. Also, the theory to follow considers an optimal equalizer for delay 0; the entire theory goes through for any arbitrary delay. Indeed, in Section “Numerical examples”, an example with an optimal equalizer for delay 1 is presented. This is once again done to simplify the explanations.

The issues and our approach

A DFE-WF on a fixed channel (if it exists) is given by

$$\Theta^* = \arg\min_\Theta E\left[\left(\Theta^t X_k - s_k\right)^2\right],$$
(4)

where the expectation on the right hand side is defined under stationarity for a given $\Theta$. The vector $X_k = \left[U_k^T\ \hat{S}_{k-1}^T\right]^T$ includes the previous decisions $\hat{S}_{k-1}$, and hence its stationary distribution depends upon the parameter $\Theta$. Thus this is a complex case of optimization in which the stationary distribution defining the average cost also depends upon the parameter to be optimized. There is no known technique to compute the WF $\Theta^*$ of (4), even for a fixed channel.

In practical systems, a DFE WF is commonly designed assuming perfect decisions (i.e., $\hat{S}_k = s_{k-N_b+1}^k$), which we have called the IDFE. It is easy to see that the IDFE for a fixed channel is given by

$$\Theta_{\text{IDFE}} = E\left[X_k X_k^t\right]^{-1} E\left[X_k s_k\right], \quad \text{where } X_k^t := \left[U_k^t\ \left(s_{k-N_b}^{k-1}\right)^t\right].$$

This computation may be expensive because of the matrix inversion, and LMS (3) is actually used as an alternative [4, 5]. Our claim is that, in the case of a DFE, apart from being computationally efficient, the LMS algorithm also outperforms the IDFE, $\Theta_{\text{IDFE}}$. This is because, as we will see shortly, the LMS attractors are close to the DFE-WF while the IDFE is away from the DFE-WF. We achieve this goal by showing that the LMS-DFE attractors are close to the DFE-WF at high SNRs (later, in Section “Numerical examples”, we show that this covers the practically used SNRs). Further, LMS can also be used to track channel variations. We first study an LMS-DFE on a fixed channel and later briefly discuss its tracking behavior.
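For comparison, a minimal sketch (ours) of the IDFE closed form, with the expectations replaced by sample moments computed under the perfect-decision assumption:

```python
def idfe(s, u, Nf, Nb):
    """Closed-form IDFE: Theta_IDFE = E[X_k X_k^t]^{-1} E[X_k s_k], where the
    feedback part of X_k uses the *true* past symbols (perfect decisions)."""
    R = np.zeros((Nf + Nb, Nf + Nb))
    p = np.zeros(Nf + Nb)
    start = max(Nf - 1, Nb)
    for k in range(start, len(s)):
        X = np.concatenate((u[k - Nf + 1:k + 1][::-1],   # U_k
                            s[k - Nb:k][::-1]))          # perfect past decisions
        R += np.outer(X, X)
        p += X * s[k]
    return np.linalg.solve(R, p)   # the matrix inversion mentioned above
```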

Another issue related to (4) is that the expectation on the right hand side should be taken under stationarity. However, the existence of a stationary distribution of $\{X_k\}$ for a given $\Theta$ is not known a priori. Thus, first, in Theorem 2, we show the existence of a unique stationary distribution (and stationary density w.r.t. $f_N(y)\,dy$) for $\{X_k\}$ for any $\Theta$.

As is usually done in adaptive algorithm analysis, we study the LMS-DFE using an ODE approximating it. Using the stationary distribution of $\{X_k\}$, we convert the ODE in [6] to the following more tractable ODE:

$$\dot{\Theta}(t) = -\frac{1}{2}E_\Theta\left[\nabla_\Theta\left(\Theta^t X - s\right)^2\right] = -E_\Theta\left[X\left(\Theta^t X - s\right)\right].$$
(5)

The attractors of the LMS-DFE will be the zeros of the RHS of the above ODE, while the DFE-WF will be a zero of the gradient (if it exists) of the cost in the RHS of (4). Under certain conditions (with $\nabla$ representing the gradient),

$$\nabla_\Theta E_\Theta\left[\left(\Theta^t X - s\right)^2\right] = E_\Theta\left[\nabla_\Theta\left(\Theta^t X - s\right)^2\right] + E\left[\left(\Theta^t X - s\right)^2\,\nabla_\Theta\Pi_\Theta\right],$$
(6)

where $\Pi_\Theta$ is the stationary density of the Markov chain $\{J_k\}$, w.r.t. the Lebesgue measure, when the DFE $\Theta$ is used. One can expect the LMS-DFE attractors to be close to the DFE-WF if the second term in the RHS of (6) is close to zero. However, we could not even establish the differentiability of $\Pi_\Theta$. Nevertheless, we achieve the required differentiability (Theorem 3) by considering a hard decoder that is a slightly perturbed version of the original hard decoder. We also show that the DFE-WF and an LMS-DFE attractor of this perturbed decoder converge to those of the original hard decoder as the level of perturbation tends to zero (Theorem 4). We then analyze this perturbed decoder and show that the LMS-DFE attractors of this decoder are close to its DFE-WFs at high SNR (Theorem 5). This suggests that, at high SNR, an LMS attractor for the original decoder is close to its DFE-WF.

Analysis of LMS-DFE and DFE-WF

We provide a step by step analysis of LMS-DFE and its connection to DFE-WF in this section, while addressing the issues raised in Section “The issues and our approach” one after the other.

Previous ODE approximation result

We start with an ODE approximation for the LMS-DFE, which will be used in the subsequent sections for performance analysis. The LMS-DFE with a hard decoder has been approximated by an ODE in [6]; we start our LMS-DFE analysis with this ODE and reproduce the ODE approximation result of [6] here in our notation. Towards this goal, as a first step, we write down the ODE approximating LMS-DFE (3): let $\Theta(t,a)$ denote the solution of the ODE

$$\dot{\Theta}(t) = h(\Theta) \quad\text{with}\quad h(\Theta) := -\lim_{n\to\infty} P_\Theta^n H_\Theta\left(j,y'\right),$$
(7)

with initial condition $\Theta(0) = a$, where $P_\Theta^n$ is the $n$-step transition function of the Markov chain $J_k$ with DFE $\Theta$, and $P_\Theta^n H_\Theta(j,y')$ is the expectation of the function $H_\Theta(G)$ (defined in (3)) using the conditional measure $P_\Theta^n(\cdot\mid j,y')$ (note $G_k$ is a fixed function of $J_k$). The limit in (7) is independent of the initial condition $(j,y')$ ([6], p. 252).

It is easy to see that the LMS algorithm satisfies all the required hypotheses of ([6], Theorem 13, p. 278) and hence one can approximate its trajectory on any finite time scale by the solution of the ODE (7). The precise result is:

Theorem 1

For any initial condition $\Theta_0$ and finite time $T$, with $t(r) := \sum_{k=0}^r \mu_k$,

$$\sup_{r:\ t(r)\le T}\left|\Theta_r - \Theta\left(t(r),\Theta_0\right)\right| \xrightarrow{p} 0 \quad\text{as}\quad \sum_k \mu_k^{1+\delta} \to 0\ \text{for some } \delta < 0.5,$$

whenever $\mu_k \le 1$ for all $k$ and if $\liminf_k \mu_{k+r}/\mu_k > 0$ for every integer $r$. ▀

Stationary distribution and a simplified ODE

We will show below that the RHS of the ODE (7) is the same as that of the ODE (5), and hence we can replace the ODE (7) with the more tractable ODE (5). As a first step, we prove that the Markov chain $\{J_k\}$ has a stationary distribution for any given channel–DFE pair $(Z,\Theta)$. In the following, at many places we do not include the channel value $Z$ in the notation, as this article mainly works with fixed-channel behavior. However, the proofs are applicable for any pair $(Z,\Theta)$, and the notation includes $Z$ when required to be specific.

Theorem 2

The following results hold:

(i) For every fixed $(Z,\Theta)$, the Markov chain $\{J_k\}$ has a unique stationary distribution $\pi_{Z,\Theta}$.

(ii) Starting from any initial condition $(i,y)$, the $n$-step transition measure $P_\Theta^n(\cdot\mid i,y)$ of the Markov chain converges geometrically to the stationary distribution $\pi_\Theta$ in total variation norm.

(iii) The continuous part of the stationary distribution has a density $\Pi_\Theta$ that is continuous with respect to $(Z,\Theta)$ in $L^1$ norm.

(iv) The MSE under stationarity is continuous in $(Z,\Theta)$.

Proof: Please see Appendix 1. ▀ 

For each $\Theta$, $\{J_k\}$ is a Markov chain taking values in $\mathcal{S}^{N_L}\times\mathcal{S}^{N_b}\times\mathbb{R}^{N_f}$. Its transition function is

$$P_\Theta^1\left(i,\ y\in B \mid j,\ y'\right) = \tilde{\delta}(i,j)\,\bar{\delta}\left(y,y'\right)P\left(i_1\right)P\left(y_1\in B_{y'}\right)P_\Theta\left(i_{N_L+1}\mid j,y'\right),$$
(8)

where $\bar{\delta}(y,y')$ equals 1 when the vector formed from all but the first component of the vector $y$ equals the vector formed from all but the last component of the vector $y'$, and zero otherwise, and $\tilde{\delta}(i,j) = \bar{\delta}\left(i_1^{N_L}, j_1^{N_L}\right)\bar{\delta}\left(i_{N_L+1}^{N_b+N_L}, j_{N_L+1}^{N_b+N_L}\right)$ (note that the first component, $i_1^{N_L}$, represents the sample value of $S_k$, while the second one, $i_{N_L+1}^{N_b+N_L}$, represents the sample value of $\hat{S}_{k-1}$); $B_{y'}$ denotes the section of $B$ corresponding to the components determined by $y'$. The only component of the transition function (8) that depends upon $\Theta$ is $P_\Theta\left(i_{N_L+1} = 1\mid j,y'\right) = 1_{\left\{e_\Theta(j,y') > 0\right\}}$.

By the i.i.d. nature of the input $s_k$ and noise $n_k$, one can choose $n_0$ large enough such that the continuous part of the $n$-step transition function $P_\Theta^n\left(i,\ y\in B\mid j,y'\right)$ is absolutely continuous with respect to $f_N(y)\,dy$ for all $n \ge n_0$. Further, $n_0$ is chosen larger than $N_L$ to ensure that $s_k$ and $S_{k-n_0}$ are independent. Fix one such $n$. The corresponding density (Radon–Nikodym derivative) is

$$p_\Theta^n\left(i,y\mid j,y'\right) = \sum_l \int_v P\left(S_{k+1}^{k+n}\right)\prod_{q=1}^{n} P_\Theta\left(\hat{s}_{k+n-q}\mid x(q)\right) f_N(v)\,dv,$$
(9)

where

$$l = \left(s_{k+1}^{k+n-N_L},\ \hat{s}_{k}^{k+n-1-N_b}\right),\quad v := n_{k+1}^{k+n-N_f}/\sigma,\quad x(q) := \left(s_{k+n-q-N_L+1}^{k+n-q},\ \hat{s}_{k+n-q-N_b}^{k+n-q-1},\ n_{k+n-q-N_f+1}^{k+n-q}\right).$$

From (9) it is easy to see that the density of the $n$-step transition function satisfies $p_\Theta^n\left(i,y\mid j,y'\right) \le 1$ for all values of $i, y, j, y'$ and $n \ge n_0$. Also, by Theorem 2.ii, $p_\Theta^n\left(\cdot\mid j,y'\right) \to \Pi_\Theta$ in $L^1$ norm as $n \to \infty$, for every value of $(j,y')$. Further, the function $H_\Theta(\cdot)$ (given in (3)) can be bounded uniformly by $\left|H_\Theta(G_k)\right| \le C_1\left|X_k\right|^2 + C_2\left|X_k\right|$ for all $\Theta$ in a small neighborhood, for some appropriate constants $C_1$, $C_2$. This bound is square integrable and depends only on $\{J_k\}$. Hence Lemma 3 of Appendix 5 is applicable and we have

$$\lim_{n\to\infty} P_\Theta^n H_\Theta\left(j,y'\right) = \pi_\Theta H_\Theta(G), \quad\text{which gives}\quad h(\Theta) = -\frac{1}{2}E_\Theta\left[\nabla_\Theta\left(\Theta^t X - s\right)^2\right].$$

Thus ODE (7) simplifies to ODE (5).

By Theorem 2, the MSE is a continuous function of $\Theta$, and so by confining our search in (4) to a compact region we obtain the existence of the WF, the DFE-WF. Next we consider the LMS attractors, which are now the attractors of ODE (5). The ODE attractors are the zeros of the RHS of (5), while the DFE-WF is a zero of the gradient (if it exists) of the MSE (the cost in the RHS of (4)). As discussed in Section “The issues and our approach”, these two can be related as in (6); to compare the two zeros, one needs to study $\nabla_\Theta\Pi_\Theta$, the gradient of the stationary density. That is, to connect an LMS-DFE attractor with the DFE-WF, one needs to consider the differentiability of the stationary density.

Differentiability of the stationary density

One can see from Equation (9) that it is difficult to comment on the differentiability of the $n$-step transition density itself; it is even more difficult to discuss the differentiability of the stationary density. To proceed further with the analysis, we perturb the hard decoder $Q$ so that the $n$-step transition density and the stationary density become differentiable. Next we show that the LMS attractors and the DFE-WF of this perturbed decoder converge to those of the original decoder as the level of perturbation tends to zero. Finally, we study the DFE with these perturbed decoders in Section “LMS attractors versus WF at high SNRs”.

We alter the decoder function $Q(x)$ (of Equation (1)) to

$$Q_{\varepsilon_0}(x) = \begin{cases} 1, & \text{with prob. } 1, & \text{if } x > \varepsilon_0, \\ -1, & \text{with prob. } 1, & \text{if } x < -\varepsilon_0, \\ 1, & \text{with prob. } \frac{1}{2}\left[\cos\left(\frac{(x-\varepsilon_0)\pi}{2\varepsilon_0}\right) + 1\right], & \text{if } |x| \le \varepsilon_0, \end{cases}$$
(10)

where $\varepsilon_0$ is a small constant. Also, in (10), when $|x| \le \varepsilon_0$, $Q_{\varepsilon_0}(x)$ is taken as $-1$ when it is not $1$. Observe that the perturbed decoder is also a hard decoder. With the perturbed decoder $Q_{\varepsilon_0}(x)$, the $\Theta$-dependent component of the transition function is

$$P_\Theta^{(\varepsilon_0)}\left(i_{N_L+1} = 1\mid j,y'\right) = 1_{\left\{\left|e_\Theta(j,y')\right|\le\varepsilon_0\right\}}\,\frac{1}{2}\left[\cos\left(\frac{\left(e_\Theta(j,y') - \varepsilon_0\right)\pi}{2\varepsilon_0}\right) + 1\right] + 1_{\left\{e_\Theta(j,y') > \varepsilon_0\right\}}.$$

The partial derivative $\frac{\partial P_\Theta^{(\varepsilon_0)}\left(i_{N_L+1}=1\mid j,y'\right)}{\partial\Theta}$ exists everywhere and equals

$$-1_{\left\{\left|e_\Theta(j,y')\right|\le\varepsilon_0\right\}}\,\frac{\pi}{4\varepsilon_0}\,\sin\left(\frac{\left(e_\Theta(j,y') - \varepsilon_0\right)\pi}{2\varepsilon_0}\right)\frac{\partial e_\Theta(j,y')}{\partial\Theta}.$$
(11)
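A small sketch (ours) of the randomized decoder $Q_{\varepsilon_0}$ of (10), reusing the rng from the earlier sketches; it makes explicit that the decoder is deterministic outside $[-\varepsilon_0,\varepsilon_0]$ and randomized, with a smooth probability profile, inside:

```python
def q_perturbed(x, eps0):
    """Perturbed hard decoder (10): outputs +/-1 deterministically outside
    [-eps0, eps0]; inside, +1 with prob 0.5*(cos((x - eps0)*pi/(2*eps0)) + 1)."""
    if x > eps0:
        return 1.0
    if x < -eps0:
        return -1.0
    p_plus = 0.5 * (np.cos((x - eps0) * np.pi / (2.0 * eps0)) + 1.0)
    return 1.0 if rng.random() < p_plus else -1.0
```

At x = eps0 the probability of +1 is 1 and at x = -eps0 it is 0, so the randomized branch joins the deterministic branches continuously, which is what makes the transition density differentiable.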

By the uniform upper bound on the derivative (11) and by the bounded convergence theorem, one can see that the $n$-step transition density (9) (with $n \ge n_0$) becomes differentiable (details are in Appendix 2, Lemma 2) and equals (using the notations of Equation (9))

$$\frac{\partial p_\Theta^{(\varepsilon_0),n}}{\partial\Theta}\left(i,y\mid j,y'\right) = \sum_l \int_v \sum_{m=1}^{n}\ \prod_{q=1,\,q\ne m}^{n} P_\Theta^{(\varepsilon_0)}\left(\hat{s}_{k+n-q}\mid x(q)\right)\frac{\partial P_\Theta^{(\varepsilon_0)}\left(\hat{s}_{k+n-m}\mid x(m)\right)}{\partial\Theta}\,P\left(S_{k+1}^{k+n}\right)f_N(v)\,dv.$$
(12)

For these perturbed decoders, we show that the stationary density (with respect to $f_N(y)\,dy$) also becomes differentiable. Furthermore, using an Implicit function theorem, we get a bound on the norm of this gradient.

Theorem 3

For every $\varepsilon_0 > 0$ and every $\Theta_0$, the Markov chain $\{J_k\}$ has a unique stationary distribution $\pi_\Theta^{(\varepsilon_0)}$. Its continuous part has a density $\Pi_\Theta^{(\varepsilon_0)}$ that is continuously differentiable with respect to $\Theta$ in $L^2$ norm. Further, for every $\delta > 0$ and $\sigma_0^2 > 0$ there exists a constant $C < \infty$ such that for all $\Theta \in B(\Theta_0,\delta)$, $\sigma^2 \le \sigma_0^2$,

$$\left\|\nabla_\Theta\Pi_\Theta^{(\varepsilon_0)}\right\|_2 \le C\left[\sum_i P\left(\left|S_k^t\Psi + \theta_b^t\hat{S}_{k-1} + \theta_f^t N_k\right| \le \varepsilon_0\right) + \sigma^2\right].$$
(13)
(13)

Proof: The proof is provided in Appendix 2. ▀ 

We conclude this section by showing that the DFE-WFs and the LMS-DFE attractors of the perturbed decoder converge to those of the original decoder. In the following, let $\Theta_n^*$ and $\Theta_n^{LMS}$ denote the DFE-WF and an LMS-DFE attractor (whose existence at high SNRs with small $\varepsilon_0$ is established at the end of Appendix 4 and hence is assumed in the proof of the following theorem) for perturbation $\varepsilon_0^n$.

Theorem 4

For any $\sigma^2$, for any sequence $\varepsilon_0^n \to 0$, there exists a subsequence $\varepsilon_0^{n_k} \to 0$, a DFE-WF $\Theta^*$ of the original decoder and an LMS-DFE attractor $\Theta^{LMS}$ of the original decoder, such that

$$\Theta_{n_k}^* \to \Theta^* \quad\text{and}\quad \Theta_{n_k}^{LMS} \to \Theta^{LMS}.$$

Proof: Please see Appendix 3. ▀ 

Thus we can always take the perturbation $\varepsilon_0$ in (10) small enough that the LMS attractors and the DFE-WFs for the perturbed decoder are close to the corresponding equalizers for the original decoder. Henceforth, we analyze these perturbed decoders to draw important conclusions.

LMS attractors versus WF at high SNRs

In this section we would like to understand the connection between an LMS attractor and a DFE-WF for a perturbed decoder. Since the former is a zero of the RHS of Equation (5) and the latter is a zero of the gradient of the MSE (the cost in the RHS of (4)), we study the connection between the two.

Fix an $\varepsilon_0 > 0$. With the error defined by $\text{err}_\Theta(J_k) := \left(s_k - e_\Theta(J_k)\right)$ (note that $i$, defined in the notation of Section “The model and notation”, represents $(S_k, \hat{S}_{k-1})$, the discrete part of the Markov chain $\{J_k\}$),

$$\begin{aligned} \nabla_\Theta E_{J_k(\Theta)}\left[\text{err}_\Theta(J_k)^2\right] &\stackrel{a}{=} \sum_i \nabla_\Theta E_{f_N}\left[\text{err}_\Theta(J_k)^2\,\Pi_\Theta^{(\varepsilon_0)}(J_k)\right] \stackrel{b}{=} \sum_i E_{f_N}\left[\nabla_\Theta\left(\text{err}_\Theta(J_k)^2\,\Pi_\Theta^{(\varepsilon_0)}(J_k)\right)\right] \\ &= \sum_i E_{f_N}\left[\nabla_\Theta\left(\text{err}_\Theta(J_k)^2\right)\Pi_\Theta^{(\varepsilon_0)}(J_k)\right] + \sum_i E_{f_N}\left[\text{err}_\Theta(J_k)^2\,\nabla_\Theta\Pi_\Theta^{(\varepsilon_0)}(J_k)\right] \\ &= E_{J_k(\Theta)}\left[\nabla_\Theta\left(\text{err}_\Theta(J_k)^2\right)\right] + \sum_i E_{f_N}\left[\text{err}_\Theta(J_k)^2\,\nabla_\Theta\Pi_\Theta^{(\varepsilon_0)}(J_k)\right]. \end{aligned}$$
(14)

Here equality $a$ follows by the existence of the stationary density $\Pi_\Theta^{(\varepsilon_0)}$ with respect to the Gaussian measure $f_N(y)\,dy$. Equality $b$ is given by Lemma 4 of Appendix 5. The above equality (14) is true for any $\varepsilon_0 > 0$ and for any $\sigma^2$. We will show below that the DFE-WF will be close to the limiting LMS-DFE if the second term on the right hand side of (14) is small.

We have assumed that $S_k^t\Psi + \theta_b^t\hat{S}_{k-1} \ne 0$ at an LMS attractor. By continuity, $S_k^t\Psi + \theta_b^t\hat{S}_{k-1} \ne 0$ for all $\Theta$ in a small neighborhood of the LMS attractor. We can further choose an $\varepsilon_1$ small enough such that

$$0 \notin \left[S_k^t\Psi + \theta_b^t\hat{S}_{k-1} - \varepsilon_1,\ S_k^t\Psi + \theta_b^t\hat{S}_{k-1} + \varepsilon_1\right],$$

for all $(S_k, \hat{S}_{k-1})$ and for all $\Theta$ in a small neighborhood of an LMS attractor. Choose $\varepsilon_0 \le \varepsilon_1$. By Chebyshev’s inequality, if $0 \notin [c - \varepsilon_0,\ c + \varepsilon_0]$ (for some $c$) and if $n$ is a Gaussian random variable with mean zero and variance $\sigma^2$, then

$$P\left(\left|c + n\right| \le \varepsilon_0\right) \le P\left(\left|n\right| \ge \min\left\{\left|c - \varepsilon_0\right|,\ \left|c + \varepsilon_0\right|\right\}\right) \to 0 \ \text{ as } \sigma^2 \to 0.$$

Thus, from the upper bound (13) of Theorem 3, for any fixed $\varepsilon_0 \le \varepsilon_1$,

$$\left\|\nabla_\Theta\Pi_\Theta^{(\varepsilon_0)}\right\| \to 0 \quad\text{as } \sigma^2 \to 0.$$

Thus, by the Cauchy–Schwarz inequality, as $\sigma^2 \to 0$ (note $\text{err}_\Theta$ has all moments),

$$\left|\nabla_\Theta E_\Theta\left[\text{err}_\Theta(J_k)^2\right] - E_\Theta\left[\nabla_\Theta\left(\text{err}_\Theta(J_k)^2\right)\right]\right| = \left|\sum_i E_{f_N}\left[\text{err}_\Theta(J_k)^2\,\nabla_\Theta\Pi_\Theta^{(\varepsilon_0)}(J_k)\right]\right| \le \sum_i E_{f_N}\left[\text{err}_\Theta(J_k)^4\right]^{1/2}\left\|\nabla_\Theta\Pi_\Theta^{(\varepsilon_0)}\right\| \to 0.$$
(15)

Next we show that (15) implies that the LMS-DFE attractors will be close to the DFE-WFs. In general, two functions $f_1$, $f_2$ can be close to each other at every point while their zeros are far apart; i.e., if $x_1$ is a zero of $f_1$, then $f_2(x_1)$ will be close to zero but the zero of $f_2$ closest to $x_1$ may still be far away. It is useful to rule out this possibility in our scenario. We do so using the following theorem. Define

$$s\left(\Theta,\sigma^2\right) := E_{J_k(\Theta)}\left[\nabla_\Theta\left(\text{err}_\Theta(J_k)^2\right)\right] \quad\text{and}\quad w\left(\Theta,\sigma^2,\eta\right) := s\left(\Theta,\sigma^2\right) + \eta.$$

Theorem 5

There exists an $\varepsilon_2$ with $0 < \varepsilon_2 \le \varepsilon_1$ such that for any $\varepsilon_0 \le \varepsilon_2$, there exists a continuous function $q: B(0,\delta) \subset \mathbb{R}\times\mathbb{R} \to \mathbb{R}^{N_f+N_b}$, with

$$w\left(q\left(\sigma^2,\eta\right),\sigma^2,\eta\right) = 0.$$

Proof: Please see Appendix 4. ▀ 

Using the above theorem, we obtain the proximity of LMS attractors and the WFs in the following.

For any fixed $\varepsilon_0 \le \varepsilon_2$, $\left\|\nabla_\Theta\Pi_\Theta^{(\varepsilon_0)}\right\|$ near an LMS attractor tends to zero as $\sigma^2 \to 0$. Thus, by (15), there exists a small enough $\sigma_0^2$ such that for all $\sigma^2 \le \sigma_0^2$,

$$\left|\left(\sigma^2,\eta_w\right)\right| \le \delta, \quad\text{where}\quad \eta_w := \nabla_\Theta E_\Theta\left[\text{err}_\Theta(J_k)^2\right] - E_\Theta\left[\nabla_\Theta\left(\text{err}_\Theta(J_k)^2\right)\right].$$

Note that $q\left(\sigma^2,0\right)$ is a zero of $s\left(\Theta,\sigma^2\right)$ (note $w\left(q\left(\sigma^2,0\right),\sigma^2,0\right) = s\left(q\left(\sigma^2,0\right),\sigma^2\right)$) and hence is an LMS attractor at $\sigma^2$. Similarly, from (14), $q\left(\sigma^2,\eta_w\right)$ is a zero of the gradient of the MSE cost and hence is a DFE-WF. Thus, for all $\sigma^2 \le \sigma_0^2$, the LMS attractors $q\left(\sigma^2,0\right)$ will, by the continuity arguments of Theorem 5, be close to the WFs $q\left(\sigma^2,\eta_w\right)$.

It is clear from the above theorem that at high SNRs, for very small $\varepsilon_0$ (close to the practical decoder), the LMS attractor is close to the DFE-WF. Since the IDFE, $\Theta_{\text{IDFE}}$, is designed with an improper assumption (perfect decisions), there is a good chance of these filters being inefficient in comparison to the LMS attractors. We will see this in the examples provided in Section “Numerical examples”.

We conclude this section by pointing out another useful consequence of Theorem 5: it also establishes the existence of the LMS attractors at high SNRs for perturbed decoders with a small perturbation level $\varepsilon_0$. The remark at the end of Appendix 4 establishes this point.

One of the uses of the above ODE approximation is that one can approximately obtain the performance (e.g., bit error rate, MSE) of the LMS-DFE at any time by using the trajectory of this ODE. Of course, obtaining the bit error rate (BER) theoretically is still a problem, because the BER of a system with a fixed known channel and a fixed DFE is still not available. But our ODE approximation is still useful because one can obtain the performance (transient as well as stationary) of the LMS-DFE with only one simulation, which would not be possible otherwise. This is because, by Theorem 1, the ODE solution approximates the LMS-DFE trajectory in probability.
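As an illustration of this use (ours, built on the run_dfe sketch from Section “The model and notation”), one can Euler-integrate the ODE (5), estimating the stationary expectation at each step by running the DFE with Θ frozen; performance recorded along the trajectory then approximates the transient LMS-DFE performance:

```python
def ode_trajectory(z, Nf, Nb, sigma, T=20.0, dt=0.1, K=50_000):
    """Euler integration of ODE (5): Theta'(t) = -E_Theta[X (Theta^t X - s)],
    with the stationary expectation estimated by simulation at frozen Theta."""
    theta = np.zeros(Nf + Nb)
    start = max(Nf - 1, Nb)
    for _ in range(int(T / dt)):
        s = rng.choice([-1.0, 1.0], size=K)
        u, s_hat = run_dfe(s, z, theta[:Nf], theta[Nf:], sigma)
        grad = np.zeros_like(theta)
        for k in range(start, K):
            X = np.concatenate((u[k - Nf + 1:k + 1][::-1], s_hat[k - Nb:k][::-1]))
            grad += X * (theta @ X - s[k])
        theta -= dt * grad / (K - start)   # Euler step along h(Theta)
    return theta
```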

Numerical examples

In this section we reinforce the theory developed so far using some examples. We take a few example channels from previous studies and show the proximity of the DFE-WF and the LMS attractor for practical values of SNR. We also show that in many cases the IDFE performs much worse than the DFE-WF, while an LMS attractor performs close to the DFE-WF. The BER and the MSE are used to compare the various equalizers. For every sample of the channel, we have used Monte-Carlo simulations to estimate the corresponding BER and MSE using one million samples of data.
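A sketch (ours) of the Monte-Carlo estimation step, built on the run_dfe function from the earlier sketch; the paper uses one million samples per channel realization, a smaller default is shown here:

```python
def estimate_ber_mse(theta, z, Nf, Nb, sigma, K=100_000):
    """Monte-Carlo estimate of BER and MSE for a fixed DFE Theta on a fixed channel."""
    s = rng.choice([-1.0, 1.0], size=K)
    u, s_hat = run_dfe(s, z, theta[:Nf], theta[Nf:], sigma)
    start = max(Nf - 1, Nb)
    ber = np.mean(s_hat[start:] != s[start:])        # delay-0 decisions
    sq = 0.0
    for k in range(start, K):
        X = np.concatenate((u[k - Nf + 1:k + 1][::-1], s_hat[k - Nb:k][::-1]))
        sq += (s[k] - theta @ X) ** 2                # squared error of (2)
    return ber, sq / (K - start)
```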

The DFE-WF, $\Theta^*$, for every sample of the channel is obtained by running a gradient-descent type algorithm on the cost function (4) itself, where the gradient at each point is approximated by the finite difference $\left(E_{\Theta+\Delta}\left[\left(X(\Theta+\Delta)^t(\Theta+\Delta) - s\right)^2\right] - E_\Theta\left[\left(X(\Theta)^t\Theta - s\right)^2\right]\right)/\left|\Delta\right|$. Here the expectation $E_\Theta\left[\left(X(\Theta)^t\Theta - s\right)^2\right]$ is estimated by the sample-path average

$$\frac{1}{N}\sum_{i=1}^N \left(X_i^t(\Theta)\,\Theta - s_i\right)^2$$

using a large number of samples $N$. The vector sequence $\left\{X_i(\Theta)\right\}_{i=1}^N$ is obtained by running the DFE with fixed coefficients $\Theta$ (and on a channel that is fixed at its current sample). Thus $\Theta^*$ is estimated as the limit of the steepest descent algorithm:

$$\Theta_{k+1} = \Theta_k + \frac{\mu_k}{N\Delta_k}\sum_{i=1}^N\left[\left(X_{k,i}^t(\Theta_k)\,\Theta_k - s_{k,i}\right)^2 - \left(X_{k,i}^t(\Theta_k+\Delta_k)\left(\Theta_k+\Delta_k\right) - s_{k,i}\right)^2\right].$$

Here $s_{k,i}$ are i.i.d. with the distribution of the inputs $s_k$. The sequences $\{\Delta_k\}$ and $\{\mu_k\}$ are chosen to decrease to zero appropriately. In our simulations we used $\mu_k = 0.07\,k^{-0.6}$, $\Delta_k = 5\mu_k$ and $N = 4\times10^5$.
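A per-coordinate reading of this finite-difference descent, as a sketch (ours; the article perturbs Θ as a whole, so this is one plausible variant), using estimate_ber_mse from the sketch above:

```python
def dfe_wf(z, Nf, Nb, sigma, iters=100, N=100_000):
    """Finite-difference steepest descent on the MSE cost (4), with the step
    sizes mu_k = 0.07 k^{-0.6} and Delta_k = 5 mu_k quoted in the text."""
    theta = np.zeros(Nf + Nb)
    for k in range(1, iters + 1):
        mu = 0.07 / k ** 0.6
        delta = 5.0 * mu
        base = estimate_ber_mse(theta, z, Nf, Nb, sigma, K=N)[1]
        grad = np.zeros_like(theta)
        for i in range(Nf + Nb):
            e = np.zeros(Nf + Nb)
            e[i] = delta
            # one-sided finite-difference approximation of dMSE/dTheta_i
            grad[i] = (estimate_ber_mse(theta + e, z, Nf, Nb, sigma, K=N)[1] - base) / delta
        theta -= mu * grad
    return theta
```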

Least mean square attractors are obtained as the time limit of the LMS algorithm (3), with settings similar to those used for the DFE-WF estimation.

We consider two examples, in Tables 1 and 2. In Table 1, we have used an interesting example (the significant part of the raised cosine channel of ([1], p. 199)) to show that the LMS attractors are close to the WFs at practical SNRs. Its coefficients are provided in the table, along with the BER. We further show the Euclidean distance between each equalizer and the corresponding DFE-WF in the first sub-columns. One can see that the distance between the LMS-DFE and the DFE-WF is small, while that between the IDFE and the DFE-WF is large. One can also see an improvement of up to 18% in BER for the LMS-DFE in comparison to the IDFE. In fact, this improvement is larger at high SNRs (where the IDFE is assumed to suffer less from error propagation). Further, the BER of the DFE-WF is close to that of the LMS attractor.

Table 1 Comparison of DFEs for raised cosine channel with N f = 5, N b = 10 and channel fixed at [0.45 0.59 0.43 0.11 −0.22 −0.32 −0.27 0 0.11 0.11]
Table 2 Comparison of DFEs with N f = N b = 2, and channel fixed at [0.41 0.82 0.41]

We have developed the theory for an equalizer with delay zero. One can easily extend these results to an equalizer with any arbitrary delay. In fact, the channel in Table 2 is one such example: here the equalizer with delay 1 is the best one. The channel of Table 2 is very widely used (see [1], p. 165 and [4], p. 414). We can see once again a huge improvement (up to 30%) in BER for the LMS-DFE with respect to $\Theta_{\text{IDFE}}$. We also see that the LMS attractors are close to the DFE-WF, $\Theta^*$, for all practical SNRs.

In this section, we directly compare the time limits of the LMS algorithm (3) with those of the true DFE-WF iteration mentioned at the beginning of this section. These two limits are further compared with the IDFE closed-form expression. That the LMS trajectory approximates the solution of the ODE (5) is established theoretically (Theorem 1) in this article. In [9, 18], etc., we have demonstrated the same via numerical simulations for time-varying channels. In Figures two, three and four of [9], it is shown numerically that the LMS trajectory approximates the appropriate ODE solution when the underlying channel is a time-varying AR(2) process.

Tracking analysis

LMS, being an iterative algorithm, can track channel variations if the update coefficients $\mu_k$ in (3) converge to a non-zero value. In [9, 18], we study the tracking behavior of an LMS-DFE while it is operating on a wireless channel characterized by an AR(2) process. We demonstrate that an LMS-DFE can also track the time-varying DFE-WF, whose variations result from the variations in a wireless channel. We also show that the LMS-DFE can outperform the IDFE on a time-varying channel.

Conclusions

Obtaining the MSE-optimal filter for a DFE is a long-standing problem. Precoding provides one practical solution but may not be feasible for wireless channels. The difficulty in the design and/or analysis is that the statistics of the past decisions (with feedback) are not known so far. To circumvent this, one commonly uses the optimal WF obtained assuming perfect past decisions. LMS, a computationally efficient alternative, is an iterative algorithm designed to converge to the WF. However, once again because of the feedback involved, a complete analysis of an LMS-DFE is not available.

We show via ODE analysis that LMS itself can provide/track the optimal WF. This article concentrates on fixed-channel behavior and proves that the attractors of the LMS are close to those of the optimal DFE at high SNRs. The proofs become nontrivial partly because of the non-differentiability of the hard decoder. We circumvent this problem by studying another hard decoder, which is a slightly perturbed version of the original one. We first show that the LMS attractors and the DFE-WFs of the perturbed decoder converge to those of the original decoder, and then show that the two themselves are close to each other at high SNRs. Next, we show by examples that the SNRs need not be very high; in fact, practically used SNRs (up to 1.5 dB) can be sufficient. We also show that the BER (probability of error) of the commonly used WF, designed assuming perfect past decisions (also using perfect channel estimates), can be up to 33% higher than that of the optimal WF even at high SNRs (where the former is believed to be closer to the latter).

In [18], we show that the LMS-DFE converges and then moves close to the instantaneous DFE-WF after the initial transience, while tracking a DFE-WF of a wireless channel modeled by an AR(2) process. We also show in [18] that the performance measures BER and MSE of the LMS-DFE are close to those of the DFE-WF after the transient period, while those of an IDFE are substantially inferior to those of the DFE-WF and the LMS-DFE.

Thus we conclude: (1) in the case of a DFE, the LMS algorithm (originally designed for computational efficiency) converges to and/or tracks a filter close to the Wiener solution; (2) the closed-form expression for the DFE WF (obtained after approximating the decision errors to zero) is far away from the Wiener solution, and its performance can be significantly inferior.

Appendices

Appendix 1

Proof of Theorem 2: Using the results of [19], we prove the existence and continuity of the stationary distribution of the Markov chain $\{J_k\}$. For any $(Z_0,\Theta_0)$ and for any $\varepsilon > 0$, $\varepsilon_0 > 0$, define

$$M_1 \triangleq \min_{S_k,\ \hat{S}_{k-1},\ \Theta\in\bar{B}(\Theta_0,\varepsilon),\ Z\in\bar{B}(Z_0,\varepsilon_0)}\left(\Psi^t S_k + \theta_b^t\hat{S}_{k-1}\right).$$
(16)

Continuity of the map considered in (16) and compactness of the closed balls $\bar{B}(\Theta_0,\varepsilon)$, $\bar{B}(Z_0,\varepsilon_0)$ ensure $\left|M_1\right| < \infty$.

The map $(\Theta, N_k) \mapsto \theta_f^t N_k$ is continuous, and hence the inverse image of the open set $\{x > -M_1\}$ under this map is open. Thus it is possible to find an open set $C$ and a $\delta \le \varepsilon$ such that

$$\left\{(N_k,\Theta):\ \theta_f^t N_k > -M_1\right\} \supset C\times\bar{B}(\Theta_0,\delta).$$
(17)

Thus, whenever $\Theta\in\bar{B}(\Theta_0,\delta)$ and $Z\in\bar{B}(Z_0,\varepsilon_0)$, the decoder (1) outputs 1 (irrespective of the inputs/past decisions) when the noise vector is in $C$. Hence,

$$P\left(\hat{S}_k = [1\ldots1]\right) = P\left(\cap_{l=k-N_b+1}^{k}\{\hat{s}_l = 1\}\right) \ge P\left(\cap_{l=k-N_b+1}^{k}\{N_l\in C\}\right) \ge P\left(N_{k-N_b-N_f+1}^{k}\in C_1\times C_2\right),$$

where the sets $C_1\subset\mathbb{R}^{N_b}$, $C_2\subset\mathbb{R}^{N_f}$ are selected such that their respective Lebesgue measures are non-zero and $\left\{N_{k-N_b-N_f+1}^{k}\in C_1\times C_2\right\} \subset \cap_{l=k-N_b+1}^{k}\{N_l\in C\}$.

Define $G := [1\ldots1]\times[1\ldots1]\times\mathbb{R}^{N_f}$. For any $n_0 > \max\{N_b+N_f+1,\ N_L\}$, for any initial condition $J_{k-n_0}$, for any measurable set $B_N$, and for any $\Theta\in\bar{B}(\Theta_0,\delta)$, $Z\in\bar{B}(Z_0,\varepsilon_0)$,

$$\begin{aligned} P\left(J_k\in\{[1\ldots1]\times[1\ldots1]\times B_N\}\mid J_{k-n_0}\right) &= P\left(S_k = [1\ldots1],\ \hat{S}_{k-1} = [1\ldots1],\ N_k\in B_N \mid J_{k-n_0}\right) \\ &\ge P\left(S_k = [1\ldots1],\ N_{k-N_b-N_f+1}^{k}\in C_1\times C_2,\ N_k\in B_N\right) \\ &\ge \alpha\,P\left(N_k\in B_N\cap C_2\right), \end{aligned}$$

where $\alpha := P\left(S_k = [1\ldots1]\right)P\left(N_{k-N_b-N_f+1}^{k-N_f}\in C_1\right)$. Thus, for any $\Theta\in\bar{B}(\Theta_0,\delta)$, $Z\in\bar{B}(Z_0,\varepsilon_0)$ and for any initial condition $J_{k-n_0}$, the $n_0$-step conditional measure is majorized:

$$P_{Z,\Theta}\left(J_k\in E\mid J_{k-n_0}\right) \ge \nu_{n_0}\left(E\cap G\right),$$

where the measure $\nu_{n_0}(\cdot)$ is defined by $\nu_{n_0}\left([1\ldots1]\times[1\ldots1]\times B_E\right) := \alpha\,P\left(N_k\in B_E\cap C_2\right)$. Thus the entire state space $\mathcal{S}^{N_L}\times\mathcal{S}^{N_b}\times\mathbb{R}^{N_f}$ is $\nu_{n_0}$-small (hence also a petite set) for all the Markov chains $\{J_k\}$ parameterized by $\Theta\in\bar{B}(\Theta_0,\delta)$ and $Z\in\bar{B}(Z_0,\varepsilon_0)$. Then, using ([19], Proposition 9.1.7, p. 206 and Theorem 10.0.1, p. 230), one obtains the existence and uniqueness of the stationary distribution $\pi_{Z,\Theta}$ for each $(Z,\Theta)$.

Define $\rho = 1 - \nu_{n_0}(G)$. Then, by ([19], Theorem 16.2.4, p. 392), for all $\Theta\in\bar{B}(\Theta_0,\delta)$, $Z\in\bar{B}(Z_0,\varepsilon_0)$ and for all initial conditions $(j,y')$ we get

$$\left\|P_{Z,\Theta}^n\left(\cdot\mid j,y'\right) - \pi_{Z,\Theta}\right\| \le \rho^{\lfloor n/n_0\rfloor},$$

where $\|\cdot\|$ represents the total variation norm. This, along with the continuity of the transition function, establishes the continuity of the stationary distribution $\pi_{Z,\Theta}$ under the total variation norm at $(Z_0,\Theta_0)$. This is because, for any $\Theta\in\bar{B}(\Theta_0,\delta)$ and $Z\in\bar{B}(Z_0,\varepsilon_0)$,

$$\begin{aligned} \lim_{(Z,\Theta)\to(Z_0,\Theta_0)}\left\|\pi_{Z,\Theta} - \pi_{Z_0,\Theta_0}\right\| &\le \lim_{(Z,\Theta)\to(Z_0,\Theta_0)}\left[\left\|\pi_{Z,\Theta} - P_{Z,\Theta}^n\left(\cdot\mid j,y'\right)\right\| + \left\|\pi_{Z_0,\Theta_0} - P_{Z_0,\Theta_0}^n\left(\cdot\mid j,y'\right)\right\| + \left\|P_{Z_0,\Theta_0}^n\left(\cdot\mid j,y'\right) - P_{Z,\Theta}^n\left(\cdot\mid j,y'\right)\right\|\right] \\ &\le 2\rho^{\lfloor n/n_0\rfloor} + \lim_{(Z,\Theta)\to(Z_0,\Theta_0)}\left\|P_{Z_0,\Theta_0}^n\left(\cdot\mid j,y'\right) - P_{Z,\Theta}^n\left(\cdot\mid j,y'\right)\right\| = 2\rho^{\lfloor n/n_0\rfloor}, \end{aligned}$$

for all $n \ge 1$, where the last equality follows by continuity of the transition function with respect to $(Z,\Theta)$. Letting $n \to \infty$,

$$\lim_{(Z,\Theta)\to(Z_0,\Theta_0)}\left\|\pi_{Z,\Theta} - \pi_{Z_0,\Theta_0}\right\| = 0.$$

The stationary distribution $\pi_{Z,\Theta}$ has discrete and continuous components. The continuous component of $\pi_{Z,\Theta}$ is absolutely continuous with respect to the measure $f_N(y)\,dy$ for every $(Z,\Theta)$. Hence the stationary density $\Pi_{Z,\Theta}$ for $\{J_k\}$ exists. Continuity in total variation norm of the stationary distribution implies the continuity of the stationary densities in $L^1$ norm ([20], Theorem 8.2, p. 110). It is also easy to see that the stationary density satisfies $\Pi_{Z,\Theta}(i,y) \le 1$ for all $(i,y)$.

Now, fixing the channel at some value $Z$, the MSE (the cost in the RHS of Equation (4)) can be rewritten as $E_\Theta\left[\left(\Theta^t X - s\right)^2\right] = \sum_{S,\hat{S}} E_{f_N}\left[\left(\Theta^t X - s\right)^2\,\Pi_\Theta\right]$. Lemma 3 in Appendix 5 now gives the continuity of the MSE with respect to $\Theta$ for any fixed $Z$.

One can show the same conclusions for the Markov chain $\{G_k\}$, as $G_k = \Gamma(J_k)$ for some fixed one-one, onto, continuous function $\Gamma$, whenever the channel and equalizer values are fixed. ▀

Appendix 2

Proof of Theorem 3: The existence and continuity of the stationary density $\Pi_\Theta^{(\varepsilon_0)}$ for every $\varepsilon_0$ is achieved in a similar way as in the proof of Theorem 2. The only difference is that $\varepsilon_0$ must be added to $-M_1$ in the definition of the set (17). We drop the superscript $\varepsilon_0$ to simplify the notation in the rest of this proof.

We use the Implicit function theorem to prove differentiability. For that, we consider the Banach spaces:

  • $X = \mathbb{R}^{N_f+N_b}$ with the Euclidean norm.

  • $Y = \left\{g:\ \mathcal{S}^{N_L+N_b}\times\mathbb{R}^{N_f}\to\mathbb{R};\ \|g\| < \infty\right\}$ with the $L^2$ norm $\|\cdot\|$ defined by

$$\|g\| := \left[\frac{1}{|\mathcal{S}|}\sum_i \int_y g(i,y)^2\,f_N(y)\,dy\right]^{1/2},$$

    where $|\mathcal{S}|$ represents the cardinality of the set $\mathcal{S}^{N_L+N_b}$.

Fix $n_0 > \max\{N_f+N_b,\ N_L\}$. We consider the following continuous map $f: X\times Y\to Y$,

$$f(\Theta,\Pi) = g(\Theta,\Pi) - \Pi + \left[\sum_j\int_{y'}\Pi(j,y')\,f_N(y')\,dy' - 1\right],$$

where

$$g(\Theta,\Pi)(i,y) := \sum_j\int_{y'} p_\Theta^{n_0}\left(i,y\mid j,y'\right)\Pi(j,y')\,f_N(y')\,dy'.$$

Observe that $(\Theta,\Pi_\Theta)$ is a zero of $f$.

By Lemmas 1 and 2 below, the function $f$ is differentiable with respect to $\Pi$ and $\Theta$, respectively, and further the derivative $\frac{\partial f}{\partial\Pi}$ is a homeomorphism. Also, $\left(\frac{\partial f}{\partial\Pi}\right)^{-1}$ and $\frac{\partial f}{\partial\Theta}$ are locally upper bounded by the RHS of (18) and (24), respectively.

Using similar logic one can easily show that both the partial derivatives of $f$ are continuous in $(\Theta,\Pi)$. Hence, by the Implicit function theorem on Banach spaces ([21], Theorem 3.1.10 and Corollary 3.1.11, p. 115), the map $\Theta\mapsto\Pi_\Theta$ is continuously differentiable and the derivative is given by

$$\nabla_\Theta\Pi_\Theta = -\left[\frac{\partial f}{\partial\Pi}\left(\Theta,\Pi_\Theta\right)\right]^{-1}\frac{\partial f}{\partial\Theta}\left(\Theta,\Pi_\Theta\right).$$

The upper bound (13) is obtained by bounding the above gradient using the upper bounds (18) and (24). ▀

Lemma 1.

$f$ is differentiable with respect to $\Pi$ and the derivative is a homeomorphism. Also, for any $\delta > 0$, $\sigma_0^2 > 0$ there exists a constant $C_0 < \infty$ such that

$$\left\|\left[\frac{\partial f}{\partial\Pi}\left(\Theta,\Pi_\Theta\right)\right]^{-1}\right\| \le C_0$$
(18)

for all $\Theta\in B(\Theta_0,\delta)$, $\sigma^2 \le \sigma_0^2$.

Proof: The function $f$ is affine linear in the second variable $\Pi\in Y$. Thus,

$$\frac{\partial f}{\partial\Pi}\left(\Theta,\hat{\Pi}\right)(\Pi) = g(\Theta,\Pi) - \Pi + \sum_j\int_{y'}\Pi(j,y')\,f_N(y')\,dy'.$$
(19)

We will show below, through contradiction, that this map is one-one. It is easy to see that $g(\Theta,\Pi) - \Pi$ is in the set

$$H := \left\{\Pi:\ \sum_j\int_{y'}\Pi(j,y')\,f_N(y')\,dy' = 0\right\} \subset Y.$$

The operator $\Gamma$ that maps $\Pi\mapsto\sum_j\int_{y'}\Pi(j,y')\,f_N(y')\,dy'$, i.e.,

$$\Gamma: Y\to Y,\qquad \Pi\mapsto\Gamma(\Pi)\quad\text{with}\quad \Gamma(\Pi)(i,y) := \sum_j\int_{y'}\Pi(j,y')\,f_N(y')\,dy'\ \text{ for all } (i,y),$$

has a one-dimensional range which lies inside $H^c$. We can show that the partial derivative (19) is one-one if we show that there is no common non-zero vector in the null spaces of both the operators. Say there exists a vector $\Pi\ne 0$ in the null space of both the operators. Let

$$D := \left\{(i,y):\ \Pi(i,y)\ge 0\right\},\quad \alpha := \sum_j\int_{\{y':\,(j,y')\in D\}}\Pi(j,y')\,f_N(y')\,dy',\quad \left|\Pi\right|_1 := \sum_j\int_{y'}\left|\Pi(j,y')\right|f_N(y')\,dy'.$$

As $\Pi$ is in the null space of the operator $\Gamma$,

$$\sum_j\int_{\{y':\,(j,y')\in D^c\}}\Pi(j,y')\,f_N(y')\,dy' = -\alpha.$$

Hence $\left|\Pi\right|_1 = 2\alpha$. Also, because $g(\Theta,\Pi) = \Pi$,

$$g(\Theta,\Pi)(i,y)\ge 0\ \text{ for all } (i,y)\in D \quad\text{and}\quad g(\Theta,\Pi)(i,y) < 0\ \text{ for all } (i,y)\in D^c.$$

Then,

$$\begin{aligned} \left|g(\Theta,\Pi)\right|_1 &= \sum_i\int_{y:(i,y)\in D} g(\Theta,\Pi)(i,y)\,f_N(y)\,dy - \sum_i\int_{y:(i,y)\in D^c} g(\Theta,\Pi)(i,y)\,f_N(y)\,dy \\ &= \sum_i\int_{y:(i,y)\in D}\left[\sum_j\int_{y'} p_\Theta^{n_0}\left(i,y\mid j,y'\right)\Pi(j,y')\,f_N(y')\,dy'\right]f_N(y)\,dy - \sum_i\int_{y:(i,y)\in D^c}\left[\sum_j\int_{y'} p_\Theta^{n_0}\left(i,y\mid j,y'\right)\Pi(j,y')\,f_N(y')\,dy'\right]f_N(y)\,dy \\ &\stackrel{\text{Fubini}}{=} \sum_j\int_{y'}\left[P_\Theta^{n_0}\left((i,y)\in D\mid j,y'\right) - P_\Theta^{n_0}\left((i,y)\in D^c\mid j,y'\right)\right]\Pi(j,y')\,f_N(y')\,dy' \\ &= \sum_j\int_{\{y':(j,y')\in D\}}\left[1 - 2P_\Theta^{n_0}\left((i,y)\in D^c\mid j,y'\right)\right]\Pi(j,y')\,f_N(y')\,dy' + \sum_j\int_{\{y':(j,y')\in D^c\}}\left[1 - 2P_\Theta^{n_0}\left((i,y)\in D\mid j,y'\right)\right]\left|\Pi(j,y')\right|f_N(y')\,dy' \\ &\le \sum_j\int_{\{y':(j,y')\in D\}}\left[1 - 2\nu_{n_0}\left(D^c\right)\right]\Pi(j,y')\,f_N(y')\,dy' + \sum_j\int_{\{y':(j,y')\in D^c\}}\left[1 - 2\nu_{n_0}(D)\right]\left|\Pi(j,y')\right|f_N(y')\,dy' \\ &= \frac{\left|\Pi\right|_1}{2}\left[2 - 2\nu_{n_0}\left(D^c\right) - 2\nu_{n_0}(D)\right] = \left|\Pi\right|_1\left(1 - \nu_{n_0}(Y)\right) < \left|\Pi\right|_1. \end{aligned}$$

This provides a contradiction: since $0 < \nu_{n_0}(Y) < 1$, we get $\left|\Pi\right|_1 = \left|g(\Theta,\Pi)\right|_1 < \left|\Pi\right|_1$. This proves that the partial derivative (19) is one-one. The last inequality above is obtained using the majorizing measure $\nu_{n_0}(\cdot)$ defined in the proof of continuity of the stationary distribution.

The map $g(\Theta,\cdot)$ is a compact integral operator ([22], Example 2, p. 277). The last map of the partial derivative has one-dimensional range and hence is compact. Therefore, the partial derivative equals $T - I$, where $T$ is a compact operator. Then, by Riesz–Schauder theory ([22], Theorem 1, p. 283), the fact that $\frac{\partial f}{\partial\Pi}$ is one-one implies that it is onto and, further, that the inverse is bounded. Hence $\frac{\partial f}{\partial\Pi}$ is a linear homeomorphism.

Furthermore, the mapping $\left(\sigma^2,\Theta\right)\mapsto\left[\frac{\partial f}{\partial\Pi}\left(\Theta,\Pi_\Theta\right)\right]^{-1}$ is continuous. This continuity follows from the joint continuity of the $n_0$-step transition density $p_\Theta^{n_0}\left(i,y\mid j,y'\right)$ with respect to $(\sigma^2,\Theta)$, then by the bounded convergence theorem (as $p_\Theta^{n_0}\left(i,y\mid j,y'\right)\le 1$ is uniformly bounded), and finally by the continuity of the map $x\mapsto x^{-1}$ ([23], p. 135). Hence the lemma follows for some $C_0 < \infty$, $\delta > 0$, $\sigma_0^2 > 0$. ▀

Lemma 2.

$f$ is differentiable with respect to $\Theta$. The partial derivative $\frac{\partial f}{\partial\Theta}\left(\Theta,\Pi_\Theta\right)$ is upper bounded by the bound (24).

Proof: We reintroduce the notation that will be used here (the notation of Equation (9)):

  • $i = \left(s_{k+n_0-N_L+1}^{k+n_0},\ \hat{s}_{k+n_0-N_b}^{k+n_0-1}\right)$, $y = n_{k+n_0-N_f+1}^{k+n_0}$ represent the current state of the Markov chain at time $k+n_0$.

  • $j = \left(s_{k-N_L+1}^{k},\ \hat{s}_{k-N_b}^{k-1}\right)$, $y' = n_{k-N_f+1}^{k}$ represent the initial condition for the $n_0$-step transition function, which is the state of the Markov chain at time $k$.

  • $l = \left(s_{k+1}^{k+n_0-N_L},\ \hat{s}_{k}^{k+n_0-1-N_b}\right)$, $v = n_{k+1}^{k+n_0-N_f}$ represent the intermediate input, decision and noise vectors.

  • $x(q) := \left(s_{k+n_0-q-N_L+1}^{k+n_0-q},\ \hat{s}_{k+n_0-q-N_b}^{k+n_0-q-1},\ n_{k+n_0-q-N_f+1}^{k+n_0-q}\right)$ represents the intermediate state of the Markov chain at time $k+n_0-q$.

To begin with, we will show component-wise differentiability of the function $f$, i.e., differentiability of $f(\Theta,\Pi)(i,y)$ for every $(i,y)$. We will show the differentiability of the $n_0$-step transition density $p_\Theta^{n_0}\left(i,y\mid j,y'\right)$ along with that. Positive and finite constants (like $c$, $c''$, etc.) are introduced in the derivations as and when required. While obtaining upper bounds we have taken advantage of the finite alphabet nature of the set $\mathcal{S}$. By simple computations, one can see that the density with respect to the Gaussian measure is

$$p_\Theta^{n_0}\left(i,y\mid j,y'\right) = \sum_l\int_v \prod_{q=1}^{n_0} P_\Theta\left(\hat{s}_{k+n_0-q}\mid x(q)\right)P\left(S_{k+1}^{k+n_0}\right)f_N(v)\,dv.$$
(20)

Hence,

$$f(\Theta,\Pi)(i,y) = \sum_{l,j}\int_{v,y'}\prod_{q=1}^{n_0} P_\Theta\left(\hat{s}_{k+n_0-q}\mid x(q)\right)P\left(S_{k+1}^{k+n_0}\right)\Pi(j,y')\,f_N(v)\,f_N(y')\,dv\,dy' - \Pi(i,y) + \left[\sum_j\int_{y'}\Pi(j,y')\,f_N(y')\,dy' - 1\right].$$

The only component of the above functions depending upon $\Theta$ is $P_\Theta\left(\hat{s}_{k+n_0-q}\mid x(q)\right)$. By (11),

$$\left|\frac{\partial P_\Theta}{\partial\Theta}\left(\hat{s}_{k+n_0-q}\mid x(q)\right)\right| \le c\left|N_{k+n_0-q-N_f+1}^{k+n_0-q}\right|,$$

uniformly in $(i,y)$, for every $\Theta$, $(j,y')$ and every $q$. Thus, for any $\Theta_h$ in a small neighborhood of 0 and for any $i, y, j, y'$, $\Theta$ and $q$ (by the mean value theorem ([24], Theorem X.4.5, p. 312)),

$$\left|P_\Theta\left(\hat{s}_{k+n_0-q}\mid x(q)\right) - P_{\Theta+\Theta_h}\left(\hat{s}_{k+n_0-q}\mid x(q)\right) - \Theta_h^t\frac{\partial P_\Theta}{\partial\Theta}\left(\hat{s}_{k+n_0-q}\mid x(q)\right)\right| \le \left|P_\Theta\left(\hat{s}_{k+n_0-q}\mid x(q)\right) - P_{\Theta+\Theta_h}\left(\hat{s}_{k+n_0-q}\mid x(q)\right)\right| + \left|\Theta_h^t\frac{\partial P_\Theta}{\partial\Theta}\left(\hat{s}_{k+n_0-q}\mid x(q)\right)\right| \le 2c\left|\Theta_h\right|\left|N_{k+n_0-q-N_f+1}^{k+n_0-q}\right|.$$
(21)

For obtaining the above upper bound, the mean value theorem is used as explained below for a two-dimensional function; this easily generalizes to any $n$-dimensional function. Say $f$ is any function of two variables. One can write $f(a+h,b+k) - f(a,b)$ as the sum of the terms $f(a+h,b+k) - f(a+h,b) - f(a,b+k) + f(a,b)$, $f(a+h,b) - f(a,b)$ and $f(a,b+k) - f(a,b)$. The first term is bounded by the mean value theorem for two variables ([23], Theorem 9.40, p. 235), while the remaining two terms can be bounded using the mean value theorem for one variable ([23], Theorem 5.10, p. 108).

Finally, by the dominated convergence theorem, we will obtain the existence of the following partial derivatives in the paragraphs that follow:

$$\frac{\partial p_\Theta^{n_0}}{\partial\Theta}\left(i,y\mid j,y'\right) = \sum_l\int_v \frac{\partial\prod_{q=1}^{n_0} P_\Theta\left(\hat{s}_{k+n_0-q}\mid x(q)\right)}{\partial\Theta}\,P\left(S_{k+1}^{k+n_0}\right)f_N(v)\,dv,$$
(22)

$$\frac{\partial f}{\partial\Theta}\left(\Theta,\Pi\right)(i,y) = \sum_j\int_{y'}\frac{\partial p_\Theta^{n_0}}{\partial\Theta}\left(i,y\mid j,y'\right)\Pi(j,y')\,f_N(y')\,dy'.$$
(23)

For obtaining the partial derivative (23), we will need to study the following set of functions, one for each value of $(i,j,l,r)$:

$$\int_{v,y'}\left[P_\Theta\left(\hat{s}_{k+n_0-r}\mid x(r)\right) - P_{\Theta+\Theta_h}\left(\hat{s}_{k+n_0-r}\mid x(r)\right) - \Theta_h^t\frac{\partial P_\Theta}{\partial\Theta}\left(\hat{s}_{k+n_0-r}\mid x(r)\right)\right]\prod_{q=1,\,q\ne r}^{n_0} P_\Theta\left(\hat{s}_{k+n_0-q}\mid x(q)\right)\,\Pi(j,y')\,f_N(v)\,f_N(y')\,dv\,dy'.$$

One can easily see from (21) that the function inside each of the above integrals is dominated by some constant multiple of the integrable function

$$\left|N_{k+n_0-r-N_f+1}^{k+n_0-r}\right|\left|\Pi(j,y')\right|,$$

and that the above bound is integrable by the Cauchy–Schwarz inequality. So, by the dominated convergence theorem, the limit $\lim_{\Theta_h\to 0}$ (which arises while defining the partial derivative) can be taken inside the integral for every $(i,j,l,r)$, and this establishes the existence of the component-wise partial derivative $\frac{\partial f}{\partial\Theta}\left(\Theta,\Pi\right)(i,y)$. Also, this component-wise partial derivative is uniformly upper bounded:

$$\left|\frac{\partial f}{\partial\Theta}\left(\Theta,\Pi\right)(i,y)\right| \le \frac{c''}{\varepsilon_0}\left(|y|+1\right)\quad\text{for all } (i,y)\text{ and all }\Theta,$$

where the constant $c''$ depends on $\|\Pi\|$.

One can now prove the existence of the overall partial derivative $\frac{\partial f}{\partial\Theta}$ at every $(\Theta,\Pi)$ using the above upper bound and the dominated convergence theorem (in $L^2$ norm). Consider the limit

$$\lim_{\Theta_h\to 0}\sum_i\int_y\left|\frac{1}{\left|\Theta_h\right|}\left[f\left(\Theta,\Pi_\Theta\right)(i,y) - f\left(\Theta+\Theta_h,\Pi_\Theta\right)(i,y) - \Theta_h^t\frac{\partial f}{\partial\Theta}\left(\Theta,\Pi_\Theta\right)(i,y)\right]\right|^2 f_N(y)\,dy = \sum_i\int_y\lim_{\Theta_h\to 0}\left|\frac{1}{\left|\Theta_h\right|}\left[f\left(\Theta,\Pi_\Theta\right)(i,y) - f\left(\Theta+\Theta_h,\Pi_\Theta\right)(i,y) - \Theta_h^t\frac{\partial f}{\partial\Theta}\left(\Theta,\Pi_\Theta\right)(i,y)\right]\right|^2 f_N(y)\,dy = 0.$$

The first equality follows because the function inside the integral tends to zero at every point and is upper bounded by the following integrable function:

$$\frac{c'}{\varepsilon_0}\left(|y|+1\right)1_{\left\{\left|\frac{\partial f}{\partial\Theta}(\Theta,\Pi)(i,y)\right|\le 1\right\}} + \left[\frac{c''}{\varepsilon_0}\left(|y|+1\right)\right]^2 1_{\left\{\left|\frac{\partial f}{\partial\Theta}(\Theta,\Pi)(i,y)\right| > 1\right\}}.$$

We will now upper bound this partial derivative for all $(\Theta,\Pi_\Theta)$. First observe that, because $\Pi_\Theta(i,y)\le 1$ for all $(i,y)$, from (11),

$$\left|\frac{\partial f}{\partial\Theta}\left(\Theta,\Pi_\Theta\right)(i,y)\right| = \left|\sum_{l,j}\int_{y',v}\frac{\partial\prod_{q=1}^{n_0} P_\Theta\left(\hat{s}_{k+n_0-q}\mid x(q)\right)}{\partial\Theta}\,P\left(S_{k+1}^{k+n_0}\right)\Pi_\Theta(j,y')\,f_N(v)\,f_N(y')\,dv\,dy'\right| \le c_1\sum_{r=1}^{n_0}\sum_{l,j}\int_{v,y'} 1_{\left\{\left|e_\Theta(x(r))\right|\le\varepsilon_0\right\}}\,f_N(v)\,f_N(y')\,dv\,dy' + c_2|y| + c_3 E\left(\left|N_k\right|\right),$$

for some appropriate constants $c_1, c_2, c_3$. Then, using $\left(\sum_{k=1}^n a_k\right)^2 \le n\sum_{k=1}^n a_k^2$ and $|x|^2 \le |x|$ (when $|x|\le 1$), we get

$$\left\|\frac{\partial f}{\partial\Theta}\left(\Theta,\Pi_\Theta\right)\right\|^2 = \frac{1}{|\mathcal{S}|}\sum_i\int_y\left|\frac{\partial f}{\partial\Theta}\left(\Theta,\Pi_\Theta\right)(i,y)\right|^2 f_N(y)\,dy \le c_1'\sum_{r=1}^{n_0}\sum_{l,j,i}\int_{v,y',y} 1_{\left\{\left|e_\Theta(x(r))\right|\le\varepsilon_0\right\}}\,f_N(v)\,f_N(y')\,f_N(y)\,dv\,dy'\,dy + c_2'\sum_i\int_y |y|^2 f_N(y)\,dy + c_3' E\left(\left|N_k\right|\right)^2 = c_1''\sum_l P\left(\left|b_l + \theta_f^t N_k\right|\le\varepsilon_0\right) + c_2''\sigma^2,$$
(24)

where the constants $b_l$ take the values $S_k^t\Psi + \theta_b^t\hat{S}_{k-1}$. ▀

Appendix 3

Proof of Theorem 4: Let $f_1(\Theta,\varepsilon_0) := E_{J_k(\Theta)}^{(\varepsilon_0)}\left[\text{err}_\Theta(J_k)^2\right]$ and $f_2(\Theta,\varepsilon_0) := \left|E_{J_k(\Theta)}^{(\varepsilon_0)}\left[\nabla_\Theta\left(\text{err}_\Theta(J_k)^2\right)\right]\right|$. Note that, for any fixed $\varepsilon_0$, the LMS attractors are the zeros, i.e., the minima, of $f_2(\cdot,\varepsilon_0)$, while the DFE-WFs are the minima of the MSE cost $f_1(\cdot,\varepsilon_0)$. Also note that $\varepsilon_0 = 0$ corresponds to the original decoder.

Let $\{\varepsilon_0^n\}$ be any sequence converging to 0 and let $\Omega = \{\varepsilon_0^n\}$. Take a compact set $C$ large enough that the WF is inside it (as $|\Theta|$ increases to infinity, eventually the MSE starts increasing and tends to infinity). One can follow steps as in Theorem 2 and show that the stationary density $\Pi_{\Theta_n}^{\varepsilon_0^n}$ converges to $\Pi_\Theta^0$ as $(\varepsilon_0^n,\Theta_n)\to(0,\Theta)$. Similarly, one can also show that both functions $f_1, f_2$ are jointly continuous in $(\Theta,\varepsilon_0)\in C\times\Omega$.

The domain of the parameter $\Theta$ for every $\varepsilon_0$, say $D(\varepsilon_0)$, is the same compact set $C$, and hence the correspondence $\varepsilon_0\mapsto D(\varepsilon_0)$ is compact and continuous [25]. Then, by the maximum theorem ([25], p. 235), $D_1^n := \arg\min_{\Theta\in C} f_1(\Theta,\varepsilon_0^n)$ and $D_2^n := \arg\min_{\Theta\in C} f_2(\Theta,\varepsilon_0^n)$ are compact-valued upper semi-continuous correspondences on $\Omega$. Thus, by ([25], Proposition 9.8, p. 231), there exists a subsequence of LMS attractors $\Theta_{n_k}^{LMS}$ converging to an LMS attractor of the original decoder, $\Theta_0^{LMS}$. Once again, by the same proposition, there exists a further subsequence such that the DFE-WFs $\Theta_{n_{k_l}}^*$ converge to a DFE-WF of the original decoder, $\Theta_0^*$. Thus there exists a sequence (after renaming) $\varepsilon_0^n\to 0$ such that $\Theta_n^{LMS}\to\Theta_0^{LMS}$ and $\Theta_n^*\to\Theta_0^*$. ▀

Appendix 4

Proof of Theorem 5: We have assumed in Section “The model and notation” that $S_k^t\Psi + \theta_b^t\hat{S}_{k-1}\ne 0$ for all values of $(S_k,\hat{S}_{k-1})$ at an LMS attractor. By continuity, this implies the same (in fact, the sign of the term $S_k^t\Psi + \theta_b^t\hat{S}_{k-1}$, for each $(S_k,\hat{S}_{k-1})$, remains the same) in a small neighborhood of the LMS attractor. Thus, when $\sigma^2 = 0$ (the noiseless case), for the original decoder at an LMS attractor (call it $\Theta_0^*$), we have

$$\frac{\partial P_\Theta}{\partial\Theta}\left(i,y\mid j,y'\right) = 0\quad\text{for all } (i,y),\ (j,y').$$

Thus, following the steps in the proof of Theorem 3 (given in Appendix 2), one can show that the gradient of the stationary density, $\nabla_\Theta\Pi_\Theta$, exists and equals zero at $\Theta_0^*$. Hence at $\Theta_0^*$ (the LMS attractor of the original decoder at $\sigma^2 = 0$),

$$\nabla_\Theta E_{J_k(\Theta)}\left[\text{err}_\Theta(J_k)^2\right] = E_{J_k(\Theta)}\left[\nabla_\Theta\left(\text{err}_\Theta(J_k)^2\right)\right].$$

Therefore, in this case, the DFE-WF coincides with the LMS-DFE attractor $\Theta_0^*$.

Choose $\varepsilon_2 > 0$ such that $\varepsilon_2 < \left|S_k^t\Psi^* + \hat{S}_{k-1}^t\theta_b^*\right|$ for all values of $S_k$ and $\hat{S}_{k-1}$, where $\Psi^*$ and $\theta_b^*$ correspond to $\Theta_0^*$. The DFE-WF $\Theta^*(\varepsilon_0)$ and the LMS-DFE attractor $\Theta^{LMS}(\varepsilon_0)$ coincide and equal $\Theta_0^*$ for a noiseless system having a perturbed decoder with $\varepsilon_0\le\varepsilon_2$. This happens because, when there is no noise, the perturbed decoder coincides with the original decoder for $\varepsilon_0\le\varepsilon_2$.

Fix $\varepsilon_0\le\varepsilon_2$. Then $w\left(\Theta^*(\varepsilon_0),0,0\right) = 0$ and the partial derivative

$$\frac{\partial w}{\partial\Theta}\left(\Theta^*(\varepsilon_0),0,0\right) = \frac{\partial s}{\partial\Theta} = \sum_i E_{f_N}\left[\nabla_\Theta\left(\nabla_\Theta\left(\text{err}_\Theta(J_k)^2\right)\Pi_\Theta(J_k)\right)\right] = E_\Theta\left[\nabla_\Theta\nabla_\Theta\left(\text{err}_\Theta(J_k)^2\right)\right] = 2R_{xx}\left(\Theta^*(\varepsilon_0)\right),$$

where $R_{xx}\left(\Theta^*(\varepsilon_0)\right)$ is the autocorrelation matrix of the vector $X_k(\Theta)$, under stationarity, at $\Theta^*(\varepsilon_0)$. As $\Theta^*(\varepsilon_0)$ is a WF, the above partial derivative is invertible (all the eigenvalues of the derivative of the ODE's RHS should be negative for the equilibrium point to be an attractor).

Continuity of the above partial derivative with respect to $(\sigma^2,\eta,\Theta)$ can be seen as before. Applying the Implicit function theorem at $\left(\Theta^*(\varepsilon_0),0,0\right)$, one gets a $\delta > 0$ and a continuous function $q\left(\sigma^2,\eta\right)$ such that $q(0,0) = \Theta^*(\varepsilon_0)$ and $w\left(q\left(\sigma^2,\eta\right),\sigma^2,\eta\right) = 0$ for all $(\sigma^2,\eta)$ with $\left|(\sigma^2,\eta)\right|\le\delta$. ▀

Remark on existence of LMS attractors

The above theorem also provides the following useful conclusion. For all $\sigma^2\le\delta$, the zeros of $w\left(\cdot,\sigma^2,0\right)$ exist and equal $q\left(\sigma^2,0\right)$. These zeros are continuous in $\sigma^2$. One can see that these zeros will indeed be LMS attractors, as invertibility of the derivative of the function $f(\cdot)$ at $\sigma^2 = 0$ guarantees its invertibility in a small neighborhood of $\sigma^2 = 0$.

Appendix 5

In this appendix we state and prove the lemmas used in this article.

Lemma 3.

Let $\Pi_{\Theta_n}\to\Pi_\Theta$ in $L^1$. Let $\left|f(\Theta_n,x)\right|\le g_1(x)$ and $\left|f(\Theta_n,x)\right|^2\le g_2(x)$ for all $n$, where $g_1, g_2$ are integrable functions (with respect to the measure $\mu$). Also let $f$ be continuous and $\Pi_{\Theta_n}(x)\le C < \infty$ for all $x$. Then, as $n\to\infty$,

$$\int f\left(\Theta_n,x\right)\Pi_{\Theta_n}(x)\,\mu(dx) \to \int f\left(\Theta,x\right)\Pi_\Theta(x)\,\mu(dx).$$

Proof: We have

$$\begin{aligned} \left|\int f(\Theta_n,x)\,\Pi_{\Theta_n}(x)\,\mu(dx) - \int f(\Theta,x)\,\Pi_\Theta(x)\,\mu(dx)\right| &\le \int\left|f(\Theta_n,x)\,\Pi_{\Theta_n}(x) - f(\Theta,x)\,\Pi_\Theta(x)\right|\mu(dx) \\ &\le \int\left|f(\Theta_n,x)\right|\left|\Pi_{\Theta_n}(x) - \Pi_\Theta(x)\right|\mu(dx) + \int\left|f(\Theta_n,x) - f(\Theta,x)\right|\Pi_\Theta(x)\,\mu(dx) \\ &\le \left[\int\left|f(\Theta_n,x)\right|^2\mu(dx)\right]^{1/2}\left[\int\left|\Pi_{\Theta_n}(x) - \Pi_\Theta(x)\right|^2\mu(dx)\right]^{1/2} + \int\left|f(\Theta_n,x) - f(\Theta,x)\right|\Pi_\Theta(x)\,\mu(dx). \end{aligned}$$

The first term on the right converges to zero because

$$\int\left|\Pi_{\Theta_n}(x) - \Pi_\Theta(x)\right|^2\mu(dx) \le 4C^2\int\left|\Pi_{\Theta_n}(x) - \Pi_\Theta(x)\right|\mu(dx).$$

The second term converges to zero by continuity of the function $f(\cdot,\cdot)$ in $\Theta$ and by the bounded convergence theorem. ▀

Lemma 4.

Let $\Pi_\Theta$ represent the Radon–Nikodym derivative of the measure $\pi_\Theta$ with respect to the common measure $\mu$ for all $\Theta\in\mathbb{R}^m$, for some $m$. Assume $\Pi_\Theta\le 1$ everywhere for all $\Theta$. If $\nabla_\Theta\Pi_\Theta$ exists in $L^2$ norm, then

$$\nabla_\Theta E_\Theta\left[g(\Theta)\right] = E_\mu\left[\nabla_\Theta\left(g(\Theta)\,\Pi_\Theta\right)\right],$$

where $g(\Theta,\cdot)$ is square integrable, continuously differentiable in $\Theta$, and bounded by a square-integrable function uniformly in a neighborhood of $\Theta$.

Proof: Since

$$\begin{aligned} \frac{1}{\left|\Theta_h\right|}\int\left|g(\Theta+\Theta_h)\,\Pi_{\Theta+\Theta_h} - g(\Theta)\,\Pi_\Theta - \Theta_h^t\left[g(\Theta)\,\nabla_\Theta\Pi_\Theta + \nabla_\Theta g(\Theta)\,\Pi_\Theta\right]\right|\mu(dw) &\le \frac{1}{\left|\Theta_h\right|}\int\left|g(\Theta)\right|\left|\Pi_{\Theta+\Theta_h} - \Pi_\Theta - \Theta_h^t\nabla_\Theta\Pi_\Theta\right|\mu(dw) \\ &\quad + \frac{1}{\left|\Theta_h\right|}\int\Pi_{\Theta+\Theta_h}\left|g(\Theta+\Theta_h) - g(\Theta) - \Theta_h^t\nabla_\Theta g(\Theta)\right|\mu(dw) \\ &\quad + \frac{1}{\left|\Theta_h\right|}\int\left|\Pi_{\Theta+\Theta_h} - \Pi_\Theta\right|\left|\Theta_h^t\nabla_\Theta g(\Theta)\right|\mu(dw), \end{aligned}$$
(25)

we will have the result if we show that each of the terms on the right tends to zero as $\left|\Theta_h\right|\to 0$. By the Cauchy–Schwarz inequality,

$$\lim_{\Theta_h\to 0}\frac{1}{\left|\Theta_h\right|}\int\left|g(\Theta)\right|\left|\Pi_{\Theta+\Theta_h} - \Pi_\Theta - \Theta_h^t\nabla_\Theta\Pi_\Theta\right|\mu(dw) \le \left\|g\right\|_2\,\lim_{\Theta_h\to 0}\frac{1}{\left|\Theta_h\right|}\left[\int\left|\Pi_{\Theta+\Theta_h} - \Pi_\Theta - \Theta_h^t\nabla_\Theta\Pi_\Theta\right|^2\mu(dw)\right]^{1/2}.$$

The right side tends to zero because the gradient $\nabla_\Theta\Pi_\Theta$ exists in $L^2$ norm.

The second term on the right hand side of (25) tends to zero by the bounded convergence theorem and the mean value theorem (as in Appendix 2), because $\Pi_\Theta\le 1$ everywhere for all $\Theta$ and $g$ is uniformly bounded in a neighborhood of $\Theta$ by an integrable function.

The third term of (25) tends to zero by the Cauchy–Schwarz inequality, the continuity of the stationary density $\Pi$ in $L^2$ norm, and the uniform boundedness of $\nabla_\Theta g$ in a neighborhood of $\Theta$ by a square-integrable function. ▀

References

  1. Giannakis GB, Hua Y, Stoica P, Tong L: Signal Processing Advances in Wireless and Mobile Communications, Trends in Channel Estimation and Equalization, vol. 1. Upper Saddle River, NJ: Prentice Hall; 2000.

  2. Lee EA, Messerschmitt DG: Digital Communications. Kluwer Academic Publishers; 1994.

  3. Proakis JG: Digital Communications. New York: McGraw-Hill; 2000.

  4. Haykin S: Adaptive Filter Theory. Prentice-Hall International; 1996.

  5. Sayed AH: Fundamentals of Adaptive Filtering. New York: Wiley; 2003.

  6. Benveniste A, Metivier M, Priouret P: Adaptive Algorithms and Stochastic Approximations. Springer; 1990.

  7. Macchi O, Eweda E: Convergence analysis of self-adaptive equalizers. IEEE Trans. Inf. Theory 1984, 30(2):161-176.

  8. Kavitha V, Sharma V: Analysis of an LMS linear equalizer for fading channels in decision directed mode. In Proc. 13th European Wireless Conference, Paris, France, April 2007.

  9. Kavitha V, Sharma V: Tracking performance of an LMS-linear equalizer for fading channels. In Proc. 44th Annual Allerton Conference on Communication, Control and Computing, USA, September 2006.

  10. Kommininakis C, Fragouli C, Sayed A, Wesel R: Multiple-input multiple-output fading channel tracking and equalization using Kalman estimation. IEEE Trans. Signal Process. 2002, 50(5):1064-1076.

  11. Belfiore CA, Park Jr. JH: Decision feedback equalization. Proc. IEEE 1979, 67(8):1143-1156.

  12. Taylor DP, Vitetta GM, Hart BD, Mammela A: Wireless channel equalization. Eur. Trans. Telecommun. 1998, 9:117-143.

  13. Erdogan AT, Hassibi B, Kailath T: MIMO decision feedback equalization from an H∞ perspective. IEEE Trans. Signal Process. 2004, 52(3):734-745.

  14. Cioffi JM, Dudevoir CP, Eyuboglu MV, Forney Jr. GD: MMSE decision-feedback equalizers and coding—Part I: General results. IEEE Trans. Commun. 1995, 43:2582-2594.

  15. Tidestav C, Ahlen A, Sternad M: Realizable MIMO decision feedback equalizers: structure and design. IEEE Trans. Signal Process. 2001, 49(1):121-133.

  16. Sternad M, Ahlen A, Lindskog E: Robust decision feedback equalizers. In Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., vol. 3, Minneapolis, MN, April 1993, pp. 555-558.

  17. Kushner HJ, Yin G: Stochastic Approximation Algorithms and Applications. Springer; 1997.

  18. Kavitha V, Sharma V: Tracking Analysis of an LMS Decision Feedback Equalizer for a Wireless Channel. Technical report no. TR-PME-2006-19, DRDO-IISc program on mathematical engineering, ECE Dept., IISc, Bangalore, October 2006. (Short version available in Proc. EuroWireless 2007.) http://www.pal.ece.iisc.ernet.in/PAM/tech

  19. Meyn SP, Tweedie RL: Markov Chains and Stochastic Stability. Communications and Control Engineering Series. New York: Springer; 1993.

  20. Thorisson H: Coupling, Stationarity, and Regeneration. Probability and its Applications. New York: Springer; 2000.

  21. Berger MS: Nonlinearity and Functional Analysis. New York: Academic Press; 1977.

  22. Yoshida K: Functional Analysis. Heidelberg: Springer; 1995.

  23. Rudin W: Functional Analysis. New York: McGraw-Hill; 1973.

  24. Bhatia R: Matrix Analysis. New York: Springer; 1997.

  25. Sundaram RK: A First Course in Optimization Theory. Cambridge: Cambridge University Press; 1996.


Acknowledgements

This work was partially supported by DRDO-IISc program on Mathematical Engineering. Parts of this article were presented in Allerton 2006. This work was mainly done when the first author was a PhD student at IISc, Bangalore.

Author information

Correspondence to Veeraruna Kavitha.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Kavitha, V., Sharma, V. Optimal MSE solution for a decision feedback equalizer. EURASIP J. Adv. Signal Process. 2012, 172 (2012). https://doi.org/10.1186/1687-6180-2012-172
