In this section, we derive a fixed-interval smoother by propagating a mixture of Gaussians per ISI state forward and backward through the ISI trellis. In this way, the ISI state and the channel state are estimated jointly. Finally, the desired a posteriori probabilities of the bits are obtained by a simple marginalization step.
3.1. Forward Filtering
A recursive expression of the forward filtered density of the ISI state and the channel vector, given the observations received so far, is obtained by noting that it satisfies the recursion (15), where the discrete summation extends over the predecessor states for which a valid transition in the ISI trellis exists. In general, the multiplications and the integration in (15) cannot be carried out in closed form; therefore, we introduce the Gaussian mixture parameterization (16) at each time instant.
In (16), each discrete ISI state is associated with a mixture of Gaussians whose number of components is a design parameter of choice.
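Although the display equation itself is not reproduced here, a per-state parameterization of this kind generically takes the form sketched below in LaTeX; the symbols (s_n for the ISI state, h_n for the channel vector, M for the number of components, and w, mu, P for the component parameters) are assumed notation for illustration and need not coincide with the paper's own.

% Generic form of a per-state Gaussian mixture parameterization (assumed notation):
% each discrete ISI state s carries M weighted Gaussian components over the channel vector h_n.
p\bigl(s_n = s,\ \mathbf{h}_n \mid \mathbf{y}_{1:n}\bigr)
  \;\approx\; \sum_{i=1}^{M} w_n^{(i)}(s)\,
  \mathcal{N}\!\bigl(\mathbf{h}_n;\ \boldsymbol{\mu}_n^{(i)}(s),\ \mathbf{P}_n^{(i)}(s)\bigr)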
Theorem 1.
A closed-form expression of the updated forward filtered density is obtained as in (17), where the means and covariance matrices associated with each valid state transition follow from Kalman-type prediction and correction recursions, and the weights are updated in proportion to the likelihood of the new observation under the corresponding predicted component.
Proof.
Injecting (16) into (15), one obtains an expression in which the integral is readily recognized as the well-known prediction step of Kalman filtering [14], while the subsequent multiplication by the observation likelihood is the corresponding correction step. Therefore, the forward filtered density can be written as (17).
Figure 3 illustrates how the updated Gaussian mixture of a given state is computed on the ISI trellis. The components of the Gaussian mixtures attached to its predecessor states undergo a Kalman prediction and correction given the hypothesized data symbol on the corresponding valid trellis branch. The updated Gaussian mixture is then obtained as a weighted sum of the mixtures produced on the merging branches.
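As an illustration of Theorem 1, the following Python sketch implements the per-branch prediction and correction of a single mixture component. It assumes a first-order Gauss-Markov (AR(1)) channel model h_n = a*h_{n-1} + w_n with process-noise covariance Q and a real-valued scalar observation y_n = x_branch^T h_n + v_n with noise variance sigma2_v; these model quantities, the default values, and the function name are illustrative assumptions rather than the paper's exact system model.

import numpy as np

def forward_branch_update(mu, P, w, x_branch, y, a=0.999, Q=None, sigma2_v=1.0):
    """One mixture component passing through one trellis branch: Kalman prediction
    under an assumed AR(1) channel model h_n = a*h_{n-1} + w_n, followed by a Kalman
    correction with the scalar observation y_n = x_branch^T h_n + v_n.
    Returns the updated (mean, covariance, unnormalized weight)."""
    d = mu.shape[0]
    Q = np.eye(d) * 1e-4 if Q is None else Q

    # Prediction step (time update of the channel statistics).
    mu_pred = a * mu
    P_pred = (a ** 2) * P + Q

    # Correction step with the hypothesized symbol vector on this branch.
    s = float(x_branch @ P_pred @ x_branch) + sigma2_v   # innovation variance (scalar)
    k = (P_pred @ x_branch) / s                          # Kalman gain
    innov = y - float(x_branch @ mu_pred)                # innovation
    mu_new = mu_pred + k * innov
    P_new = P_pred - np.outer(k, x_branch @ P_pred)

    # Weight update: prior weight times the predictive likelihood of y on this branch.
    like = np.exp(-0.5 * innov ** 2 / s) / np.sqrt(2.0 * np.pi * s)
    return mu_new, P_new, w * like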
3.2. Complexity Reduction Algorithm (CRA)
A problem with (17) is that each discrete state is now associated with a mixture containing more Gaussian components than prescribed by the parameterization (16), so the number of terms in the mixture grows with time. In order to keep the computational complexity constant at each time instant, the exact expression given by (17) must be approximated so that, as in (16), a fixed number of Gaussians, each with its own weight, mean, and covariance, is again associated with each discrete state. We do this by applying the CRA proposed in [15]. Consider two mixture components, each a weighted multivariate Gaussian with its own weight, mean, and covariance. In [15], a practical measure of similarity between the two densities is defined in terms of the Kullback-Leibler distance. Pairs of similar Gaussians, that is, pairs with the smallest value of this measure, are repeatedly merged using the merging approximation of [15], until the prescribed number of Gaussians remains.
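The merging rule and the similarity measure of [15] are not reproduced above; the Python sketch below shows one common realization, in which a pair of components is replaced by a single Gaussian preserving their total weight and first two moments, and the Kullback-Leibler distance between two Gaussians is provided as a building block for such a similarity measure. The function names and the choice of moment matching are assumptions for illustration, not necessarily the exact rule of [15].

import numpy as np

def merge_pair(w1, mu1, P1, w2, mu2, P2):
    """Moment-matched merge of two weighted Gaussian components: the merged
    component preserves the total weight, the mean, and the covariance of the pair."""
    w = w1 + w2
    a1, a2 = w1 / w, w2 / w
    mu = a1 * mu1 + a2 * mu2
    d1, d2 = mu1 - mu, mu2 - mu
    P = a1 * (P1 + np.outer(d1, d1)) + a2 * (P2 + np.outer(d2, d2))
    return w, mu, P

def kl_gauss(mu1, P1, mu2, P2):
    """Kullback-Leibler distance D(N1 || N2) between two Gaussians, a possible
    ingredient of a similarity measure between mixture components."""
    d = mu1.shape[0]
    P2_inv = np.linalg.inv(P2)
    diff = mu2 - mu1
    return 0.5 * (np.trace(P2_inv @ P1) + diff @ P2_inv @ diff - d
                  + np.log(np.linalg.det(P2) / np.linalg.det(P1)))

In practice, the pair with the smallest similarity measure is merged, and the operation is repeated until only the prescribed number of components per state remains.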
3.3. Backward Filtering
Consider now the complete block of available observations over the fixed interval. A time-reversed version of the forward filter of Section 3.1 can then be derived: we seek a recursive expression of the likelihood of the future observations given the current ISI state and channel vector, propagated backward in time. This yields the recursion (25), where the discrete summation extends over the successor states for which a valid transition in the ISI trellis exists.
Theorem 2.
Assume that a Gaussian mixture parameterization of the form (26) is available at the next time instant. A closed-form expression of the backward likelihood at the current instant is then obtained as (27), where the means and covariance matrices associated with each valid state transition are obtained from time-reversed Kalman-type recursions (a correction followed by a backward prediction), and the weights are updated accordingly.
The proof is obtained by injecting (26) into (25) and using the same arguments as in the proof of Theorem 1.
Figure 4 illustrates how the backward Gaussian mixture of a given state is computed on the ISI trellis. The components of the Gaussian mixtures attached to its successor states undergo a Kalman correction and a backward prediction given the hypothesized data symbol on the corresponding valid trellis branch. The backward Gaussian mixture is then obtained as a weighted sum of the mixtures produced on the merging branches.
Again, we apply the CRA of Section 3.2 to (27), so that the backward likelihood admits the desired Gaussian mixture form of (26).
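Mirroring the forward sketch, the Python fragment below illustrates one backward-filter step for a single component on one branch, under the same assumed AR(1) channel model and real-valued scalar observation: a Kalman correction with the next observation, followed by a backward prediction that re-expresses the corrected Gaussian as a function of the current channel vector. All model parameters, defaults, and names are again illustrative assumptions.

import numpy as np

def backward_branch_update(mu, P, w, x_branch, y_next, a=0.999, Q=None, sigma2_v=1.0):
    """One backward-filter component passing through one trellis branch: Kalman
    correction with the observation at the next instant, then a backward prediction
    through the assumed AR(1) channel model h_{n+1} = a*h_n + w_n (a nonzero).
    Returns the (mean, covariance, unnormalized weight) of a Gaussian in h_n."""
    d = mu.shape[0]
    Q = np.eye(d) * 1e-4 if Q is None else Q

    # Correction step with the hypothesized symbol vector on this branch.
    s = float(x_branch @ P @ x_branch) + sigma2_v
    k = (P @ x_branch) / s
    innov = y_next - float(x_branch @ mu)
    mu_c = mu + k * innov
    P_c = P - np.outer(k, x_branch @ P)
    like = np.exp(-0.5 * innov ** 2 / s) / np.sqrt(2.0 * np.pi * s)

    # Backward prediction: integrate over h_{n+1} and re-express the result as a
    # Gaussian in h_n, using the change of variable h_{n+1} = a*h_n + w_n.
    S = P_c + Q
    mu_b = mu_c / a
    P_b = S / (a ** 2)
    scale = abs(a) ** (-d)   # constant factor from the change of variable
    return mu_b, P_b, w * like * scale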
3.4. Smoothing
A two-filter smoothing formula, stated as (31), is obtained by combining the forward filtered density with the backward likelihood.
Theorem 3.
Using the Gaussian mixture approximations for the forward and the backward filter introduced in Sections 3.1 and 3.3, respectively, a closed-form expression of the smoothed density is obtained as (32), where the covariances and means associated with each valid state transition result from combining the corresponding forward and backward components, and the weights involve a scalar coefficient whose expression depends on the dimension of the continuous-valued state variable.
Proof.
In (31), the forward term has already been calculated as (21), and the integral, which also appears in (25), has already been computed during the backward pass. Therefore, (31) can be rewritten as a product of forward and backward Gaussian mixture terms, and after straightforward algebraic manipulations on the product of two Gaussian densities, the desired result (32) is obtained.
Figure 5 illustrates how the smoothed Gaussian mixture is computed on the ISI trellis. The components of the backward Gaussian mixture undergo a Kalman correction and a backward prediction given the hypothesized data symbol on the valid trellis branch. The resulting mixture is multiplied by the Gaussian mixture computed in the forward pass and by the scalar coefficient of Theorem 3, so as to obtain the smoothed mixture associated with the transition.
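The product of one forward and one backward Gaussian component can be carried out explicitly; the Python sketch below shows the standard identity, in which the smoothed covariance and mean follow from the information-form combination of the two components and the scalar coefficient carries the dimension-dependent factor mentioned in Theorem 3. Notation and function name are illustrative.

import numpy as np

def combine_forward_backward(mu_f, P_f, w_f, mu_b, P_b, w_b):
    """Product of a forward and a backward Gaussian component:
    N(h; mu_f, P_f) * N(h; mu_b, P_b) = c * N(h; mu_s, P_s), where the scalar c
    is a Gaussian evaluated at the difference of the means and contains the
    (2*pi)^(-d/2) factor tied to the state dimension d."""
    d = mu_f.shape[0]
    S = P_f + P_b
    S_inv = np.linalg.inv(S)
    diff = mu_f - mu_b

    # Smoothed covariance and mean (information-form combination).
    P_s = P_f @ S_inv @ P_b                      # equals (P_f^-1 + P_b^-1)^-1
    mu_s = P_b @ S_inv @ mu_f + P_f @ S_inv @ mu_b

    # Scalar coefficient c = N(mu_f; mu_b, P_f + P_b).
    c = np.exp(-0.5 * diff @ S_inv @ diff) / np.sqrt(((2 * np.pi) ** d) * np.linalg.det(S))
    return mu_s, P_s, w_f * w_b * c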
Since we are interested in soft-output equalization, we must compute smoothed bit-by-bit marginal probabilities. For each possible value of an information bit, consider the set of state transitions whose hypothesized bit takes that value. Taking (32) at the instant of interest and marginalizing out the channel vector, we obtain the a posteriori probability of each bit value by summing the smoothed weights over the corresponding set of transitions. The hard decision is then the bit value with the largest a posteriori probability.
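This marginalization step can be illustrated as follows; the data structures (a map from trellis transitions to their total smoothed component weights and a map from transitions to the information bit they carry) and the log-likelihood-ratio soft output are assumed conventions for the sketch, not quantities defined in the paper.

import numpy as np

def bit_app_and_decision(trans_weights, bit_of_transition):
    """Smoothed bit APP by marginalization: integrating each Gaussian component
    over the channel vector leaves only its weight, so the APP of a bit value is
    the normalized sum of the smoothed weights of all transitions carrying that
    value. The hard decision is the argmax."""
    p = np.zeros(2)
    for t, w in trans_weights.items():       # t indexes a transition (s, s_next)
        p[bit_of_transition[t]] += w
    p /= p.sum()
    hard_bit = int(np.argmax(p))
    llr = np.log(p[1] / p[0])                # soft output as a log-likelihood ratio
    return p, llr, hard_bit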
Similarly, the a posteriori pdf of the channel vector is obtained as a Gaussian mixture by marginalizing out all possible ISI state transitions. Under the minimum mean square error (MMSE) criterion, the forward filtered estimate of the channel vector at a given instant is obtained by marginalizing out the ISI state variable, that is, by taking the mean of the forward Gaussian mixture. Similarly, the MMSE smoothed estimate of the channel vector is obtained by marginalizing out all possible ISI state transitions, that is, by taking the mean of the smoothed Gaussian mixture.
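The MMSE estimate is simply the mean of the corresponding Gaussian mixture; the short sketch below assumes the components are collected in a list of (weight, mean, covariance) triples, gathered over states for the forward estimate or over transitions for the smoothed estimate.

import numpy as np

def mmse_channel_estimate(components):
    """MMSE channel estimate from a Gaussian mixture posterior: the posterior mean,
    i.e. the weight-normalized sum of the component means. The same code applies to
    the forward filtered and to the smoothed mixture."""
    w_tot = sum(w for w, _mu, _P in components)
    h_hat = sum(w * mu for w, mu, _P in components) / w_tot
    return h_hat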
3.5. Complexity Evaluation
It is well known that the complexity of one recursion of the Kalman filter is cubic in the dimension of the continuous-valued state estimate [16]. However, in our forward and backward filters, the cost of one Kalman recursion is substantially lower, owing to the block-diagonal structure of the involved matrices and to the fact that the matrix inversion reduces to a division by a scalar. The overall complexity per information bit of the forward and backward filters then follows by counting one such recursion per mixture component and per valid trellis transition. The smoothing pass is more expensive per information bit, because it requires full matrix inversions.