  • Research
  • Open Access

Adaptive reconfigurable V-BLAST type equalizer for cognitive MIMO-OFDM radios

EURASIP Journal on Advances in Signal Processing 2015, 2015:8

https://doi.org/10.1186/s13634-015-0199-9

  • Received: 18 July 2014
  • Accepted: 18 January 2015
  • Published:

Abstract

An adaptive channel shortening equalizer design for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) radio receivers is considered in this presentation. The proposed receiver has desirable features for cognitive and software defined radio implementations. It consists of two sections: MIMO decision feedback equalizer (MIMO-DFE) and adaptive multiple Viterbi detection. In the MIMO-DFE section, a complete modified Gram-Schmidt orthogonalization of the multichannel input data is accomplished using sequential processing multichannel Givens lattice stages, so that a Vertical Bell Laboratories Layered Space Time (V-BLAST) type MIMO-DFE is realized at the front-end section of the channel shortening equalizer. Matrix operations, a major bottleneck for receiver operations, are accordingly avoided, and only scalar operations are used. A highly modular and regular radio receiver architecture is achieved, with a structure suitable for digital signal processing (DSP) chip and field programmable gate array (FPGA) implementations, which are important for software defined radio realizations. The MIMO-DFE section of the proposed receiver can also be reconfigured for spectrum sensing and positioning functions, which are important tasks for cognitive radio applications. In the adaptive multiple Viterbi detection section, a systolic array implementation for each channel is performed so that a receiver architecture with high computational concurrency is attained. The total computational complexity is given in terms of equalizer and desired response filter lengths, alphabet size, and number of antennas. The performance of the proposed receiver is presented for the two-channel case by means of mean squared error (MSE) and probability of error evaluations, which are conducted for time-invariant and time-variant channel conditions, orthogonal and nonorthogonal transmissions, and two different modulation schemes.

Keywords

  • V-BLAST
  • MIMO-DFE
  • MIMO-OFDM
  • Cognitive radio
  • Software defined radio

1 Introduction

The fundamental problem in the design of future wireless communication systems is to reliably and efficiently transmit and receive information signals over imperfect channels at substantially high data rates. One successful approach, adopted in several wireless standards such as digital audio broadcasting (DAB), digital video broadcasting (DVB-T), local area networking (LAN), and metropolitan area networking (MAN), is orthogonal frequency division multiplexing (OFDM), in which the entire bandwidth is divided into several narrow subbands so that the frequency response over each individual subband is relatively flat, and each subband channel occupies only a small fraction of the original bandwidth. Nevertheless, OFDM-based wireless communication systems can support peak rates of 54 Mbps at most, which is not enough to cover the services providers offer nowadays. Throughputs far beyond 54 Mbps can be achieved when the multiple input multiple output (MIMO) system approach is applied, especially in a rich scattering environment [1,2]. Hence, the combination of OFDM and MIMO technologies [3,4] constitutes the basis for next-generation wireless communication systems such as IEEE 802.11n for wireless local area networks (WLAN), IEEE 802.16e for MAN, and the evolution of higher-generation cellular systems.

The performance achieved through MIMO technology also entails a considerable increase in signal processing complexity at the receiver, and there exists a major challenge in designing low-complexity receivers for multichannel wireless systems. A significant development for MIMO communications is the proposition of the V-BLAST architecture [5], and two important areas of research on V-BLAST receivers are the reduction of complexity, i.e., the avoidance of matrix inversions, and the extension to broadband implementation. Consequently, recent research such as [6-8] has focused on reducing the complexity of V-BLAST receiver architectures for frequency selective channels by developing efficient matrix inversion operations.

An important problem in realizing OFDM system designs, however, is the appendage of a cyclic prefix (CP), with a length at least equal to the channel length, to each block of N IFFT coefficients; this may not be adequate when the length of the CP, ξ, is large relative to the data length, N, since the channel throughput is reduced by a factor N/(N+ξ). In addition, information about the channel length may not even be available in some cases. Accordingly, it is desired to design systems that guarantee a certain amount of throughput, i.e., N/(N+ξ)≈1, in all possible channel conditions. An elegant solution to this problem is to implement a time domain equalizer to shorten the channel memory and hence reduce the CP overhead [9].
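The throughput penalty described above is simple to quantify. The following sketch (plain Python, with hypothetical values of N and ξ) computes the factor N/(N+ξ) and illustrates how shortening the effective channel memory, and hence the CP, recovers throughput:

```python
def cp_throughput(n_data: int, xi: int) -> float:
    """Fraction of each transmitted block carrying payload when a
    cyclic prefix of length xi is appended to n_data IFFT coefficients."""
    return n_data / (n_data + xi)

# Hypothetical sizes: a 64-point IFFT with a long CP vs. a short CP
# made possible by channel shortening equalization.
long_cp = cp_throughput(64, 16)   # 0.8: 20% of airtime lost to the CP
short_cp = cp_throughput(64, 4)   # higher after shortening the channel
```
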

A recent development in the design of next-generation wireless communication systems is the cognitive radio, built on a software radio, which functions as an intelligent system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli in order to establish reliable communication through efficient utilization of the radio spectrum [10]. The concept of software radio, on the other hand, relies on the development of DSP technology that is flexible, reconfigurable, and reprogrammable by software to adapt to an environment where there are multiple services, standards, and frequency bands [11]. Correspondingly, the infrastructure in a software radio system is generally required to use reconfigurable VLSI hardware components such as DSP chip sets [12], FPGAs [13], embedded processors [14], and even general purpose processors [15].

A typical cognitive radio cycle includes spectrum sensing, analysis, reasoning, and adaptation to new operating parameter steps [16]. The cognitive radio can detect the availability of a portion of the frequency band through the spectrum sensing and analysis steps [17]. During the reasoning step, it determines the optimum operating parameters, so that no harmful interference to other users of the spectrum is generated by its transmission. In the adaptation step, the radio switches to transmission and reception mode using its reconfigurability and reprogrammability and tunes its operating parameters according to its best response strategy. Another emerging requirement for cognitive radios is location and environment awareness, which involves modeling the capabilities of human beings and bats for the realization of advanced and autonomous location and environment awareness features [18]. Adaptive positioning, determining the coordinates of a cognitive radio in space, is a step towards the realization of accurate location awareness in cognitive radios. The author has recently proposed a spectrum estimation method in [19] and a range estimation method in [20] that are suitable for the spectrum sensing and positioning functions of cognitive radios, respectively. In this paper, we focus on the reception mode of operation of cognitive MIMO-OFDM radios and propose a new minimum MSE channel shortening equalizer design, which consists of an adaptive front-end MIMO-DFE and multiple Viterbi detection sections.

The optimum solution for the channel shortening equalization problem can be found using one of two constraints: (1) the unit tap constraint and (2) the unit energy constraint; the performances under these two criteria were compared for single input single output (SISO) and MIMO channel shortening equalization in [21,22], respectively. Since the findings in these papers show that unit energy constrained channel shortening equalization results in better performance, we have used the unit energy constraint for the MIMO channel shortening optimization problem under consideration in this paper. Accordingly, the contributions of the paper can be stated as follows: (1) the proposed equalizer has a front-end MIMO-DFE as opposed to the MIMO feed forward equalizer (MIMO-FFE) in [22]; (2) a modified version of sequential processing multichannel lattice stages (SPMLSs) [23] is utilized in the design of the front-end MIMO-DFE, and a complete modified Gram-Schmidt orthogonalization of the multichannel input data is attained, which avoids matrix inversions, enables scalar-only operations, and contributes to the flexibility, reconfigurability, and reprogrammability of the receiver; (3) the proposed equalizer can be viewed as a V-BLAST receiver for frequency selective channels; (4) spectrum sensing or range estimation can be accomplished at no cost by simply reconfiguring the front-end MIMO-DFE as a multichannel spectral analysis or positioning filter as shown in [19,20], respectively; and (5) a detailed computational complexity and performance analysis is presented. The first contribution is important from the perspective of interference removal and, by means of that, error performance, whereas the second one is considered the key since matrix inversion is a major bottleneck that increases the computational complexity of embedded receiver architectures [24].
The third one is relevant since the receiver operations of a V-BLAST system can be considered as performing Gram-Schmidt orthogonalization [25], whereby inter-symbol interference (ISI) as well as inter-channel interference (ICI) effects are suppressed. The fourth contribution is crucial from the point of view of the cognitive radio operation cycle, in that the filter structure of the MIMO-DFE can be reused for the spectrum sensing and range estimation functions of a cognitive radio. Finally, a comparative computational complexity and performance analysis of the proposed equalizer with respect to benchmark equalizers such as the MIMO-DFE, MIMO-FFE, and multichannel Viterbi equalizer (VE) has been provided, which, to the best of the author's knowledge, does not exist in the literature.

Various MIMO-DFEs for MIMO ISI channels have been proposed in the literature [26-28] after the introduction of the finite length MIMO-DFE in [29], and it has been delineated by Ginis and Cioffi in [30] that the DFE is the basic principle behind the BLAST detection algorithm. Very recently, a QR decomposition-based MIMO-DFE has been presented by Wang et al. in [31]. In QR decomposition approaches, the Q matrix is implicitly formed and then used to compute the R matrix, whereas in the Gram-Schmidt approach, the inverse of R is implicitly formed and then used to compute the Q matrix. As a consequence of this fact, Regalia and Bellanger showed in [32] that there exists a duality between QR and lattice methods and the possibility of combining elements of both approaches to obtain new hybrid algorithms. With respect to developing these hybrid algorithms, Ling showed in [33] that an orthogonal Givens rotation-based algorithm algebraically coincides with the recursive modified Gram-Schmidt-based lattice algorithm in [34]. In accordance with this perspective, we modify the SPMLS by applying the Givens rotation-based lattice algorithms of [33] to the structure of the SPMLS, so that a sequential processing multichannel Givens lattice stage (SPMGLS) is obtained. SPMLSs are known for their modular, order-recursive, and regular structure, which we previously exploited in the decision feedback equalization of nonlinear communication channels in [35]. Additionally, good numerical properties are obtained by the use of Givens rotation-based lattice algorithms.

Subsequent to the Givens lattice realization of the front-end MIMO-DFE, we perform a systolic array implementation of multiple adaptive Viterbi detectors [36], whereby a highly concurrent receiver structure is obtained. A two-channel (2×2) problem is considered in this presentation for ease of explanation and due to space limitations in developing the method. However, it is straightforward to apply the method to any number of channels, i.e., to massive MIMO implementations for next-generation wireless systems [37], at the expense of increased complexity. Even though complete orthogonality, and thereby the suppression of ISI and ICI, is accomplished in the minimum mean square error sense for any number of transmit and receive antennas, the performance achieved in terms of MSE as well as probability of error will depend on how ill-conditioned the channel is.

The organization of this paper is as follows. In Section 2, the adaptive multichannel channel shortening equalization optimization problem is introduced. In Section 3, we describe the adaptive multiple Viterbi detection section of the proposed equalizer. The computational complexity computations are treated in Section 4. In Section 5, we present the experimental results, and finally, Section 6 is concerned with the discussion of results and conclusions. (∙)^∗ represents the complex conjugate of (∙); (∙)^T and (∙)^H stand for the transpose and the Hermitian transpose of (∙), respectively. The variables m, i, and n are global, while all other variables are local. The variable m represents the stage number, while i and n are the time indexes related to data and coefficients, respectively, until we equate them in Section 3 to have a single time index.

2 Optimization problem statement

We consider the discrete-time baseband equivalent 2×2 channel shortening equalization problem depicted in Figure 1, where the number of transmitters \((M_{T})\) and receivers \((M_{R})\) is assumed equal, so that the number of antennas is \(M=M_{T}=M_{R}=2\). Note that the kth baseband channel in Figure 1 models the effects of serial-to-parallel (S/P) and parallel-to-serial (P/S) conversions, the addition and removal of the CP, the IDFT and DFT operations, and the physical baseband channel itself, as delineated in Figure 2. Accordingly, the input signal to the kth receiver can be expressed as the sum of the transmitted signals corrupted by ISI, ICI, and noise:
$${} y_{k}(n)= \sum\limits_{\ell=1}^{M} \sum\limits_{j=0}^{\mu_{{\ell,k}}} x_{{\ell}}(j) h_{{\ell,k}}(n-j) + u_{k}(n), \ \ k=1,2, $$
(1)
Figure 1

A block diagram of the baseband 2×2 channel shortening equalization problem.

Figure 2

Block diagram of the kth baseband channel model.

in which \(h_{\ell,k}(n)\) is the impulse response of the channel between the \(\ell\)th transmitter and the kth receiver, whereas \(x_{\ell}(n)\) is the transmitted sequence, and \(\mu_{\ell,k}\) is the memory of channel \(h_{\ell,k}(n)\). Accordingly, \(h_{\ell,k}(n)\) for \(\ell \neq k\) constitute ICI, and the elements of \(h_{\ell,k}(n)\) for \(\ell = k\) and \(\mu_{\ell,k} \neq 0\) amount to ISI. \(u_{k}(n)\) denotes the kth channel noise.
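As an illustration, the received signal model of Equation (1) can be sketched in a few lines of NumPy; the 2×2 channel taps, BPSK symbol streams, and noise level below are hypothetical stand-ins for illustration, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_sym = 2, 100

# Hypothetical 2x2 frequency-selective channel: h[l][k] is the impulse
# response between transmitter l and receiver k (memory mu = len - 1).
h = [[rng.standard_normal(3) for _ in range(M)] for _ in range(M)]
x = [rng.choice([-1.0, 1.0], n_sym) for _ in range(M)]  # BPSK streams
noise_std = 0.1

# Equation (1): each receiver sees the sum of all convolved streams
# (ISI from the l == k taps beyond the first, ICI from l != k) plus noise.
y = []
for k in range(M):
    yk = sum(np.convolve(x[l], h[l][k])[:n_sym] for l in range(M))
    y.append(yk + noise_std * rng.standard_normal(n_sym))
```
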

In the adaptive two-channel channel shortening equalizer design problem, the objective is to find an exponentially windowed, least squares (LS) solution for the coefficients of the kth adaptive DFE and the corresponding kth adaptive desired impulse response (ADIR) filter that minimizes the kth cost function:
$$\begin{array}{@{}rcl@{}} J_{k}(n)=\sum\limits_{i=0}^{n} \beta^{n-i} \left| {e^{k}_{n}}(i) \right|^{2} \end{array} $$
(2)
at each time instant n, where k=1,2 and β is the exponential weighting factor. Herein, the error signal, \({e^{k}_{n}}(i)\), is given by:
$$ {e^{k}_{n}}(i)={d_{n}^{k}}(i) - \hat{d}_{n}^{k}(i). $$
(3)
The kth desired signal, \({d_{n}^{k}}(i)\), at the output of the kth ADIR is expressed as:
$$ {d_{n}^{k}}(i) = \sum\limits_{n=0}^{{N^{d}_{k}}-1} w_{k}(n) \bar{x}_{k}(i-n), $$
(4)
whereas the estimate of kth desired signal, \(\hat {d}_{n}^{k}(i)\), in training mode of operation, is equal to the output signal of the kth DFE:
$${} \hat{d}_{n}^{k}(i)={z_{n}^{k}}(i)=\sum\limits_{n=0}^{{N^{f}_{k}}-1} {p_{k}^{f}}(n) \text{} y_{k}(i-n) + \sum\limits_{n=0}^{{N^{b}_{k}}-1} {p_{k}^{b}}(n) \bar{x}_{k}(i-n), $$
(5)

and it is identical to the delayed output signal of the kth DFE:

$${} \begin{aligned} \hat{d}_{n}^{k}(i)={z_{n}^{k}}\left(i-\overline{D}_{k}\right)&=\sum\limits_{n=0}^{{N^{f}_{k}}-1} {p_{k}^{f}}(n) y_{k}\left(i-\overline{D}_{k}-n\right) \\ &\quad+ \sum\limits_{n=0}^{{N^{b}_{k}}-1} {p_{k}^{b}}(n) \bar{x}_{k}\left(i-\overline{D}_{k}-n\right) \end{aligned} $$
(6)

in decision directed mode. Note that \( {N^{d}_{k}} \) in (4) and \( {N^{f}_{k}} \) and \( {N^{b}_{k}} \) in Equations (5) and (6) are the length of the kth ADIR filter and the lengths of the feed forward and feedback sections of the kth DFE, respectively, whereas \(\left ({N^{d}_{k}}-1\right) \), \(\left ({N^{f}_{k}}-1\right)\), and \(\left ({N^{b}_{k}}-1\right)\) are the corresponding filter and equalizer memories. Accordingly, \(\overline {D}_{k}\) is the delay experienced by the output signal of the kth DFE, \({z^{k}_{n}}(i)\), during the Viterbi processing. The input signal \(\bar {x}_{k}(i)\) to the feedback section and to the kth ADIR filter in Equations (4), (5), and (6) equals the delayed kth channel input signal in training mode (\(\bar {x}_{k}(i)=x_{k}(i-D_{k})\)), while it is equal to the kth detected signal in decision directed mode of operation of the receiver (\( \bar {x}_{k}(i)=\hat {x}_{k}(i)\)). Herein, \(D_{k}\) represents the delay experienced by the kth input signal, \(x_{k}(i)\), through the corresponding channel during the training mode of operation.
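A minimal sketch of the DFE output computation in Equation (5), with hypothetical tap values and deliberately short filter lengths for illustration:

```python
import numpy as np

def dfe_output(p_f, p_b, y_recent, xbar_recent):
    """Equation (5): the kth DFE output is the feed-forward taps p_f
    applied to the N_f most recent received samples plus the feedback
    taps p_b applied to the N_b most recent fed-back symbols
    (training symbols or past decisions)."""
    return np.dot(p_f, y_recent) + np.dot(p_b, xbar_recent)

# Hypothetical short filters: N_f = 2 feed-forward taps, N_b = 1
# feedback tap, applied to toy samples.
z = dfe_output([1.0, 0.5], [0.25], [2.0, 4.0], [4.0])  # 2 + 2 + 1 = 5.0
```
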

Subsequently, we define the input vector to the kth DFE, y k (i), at time instant i, as:
$${} \begin{aligned} \mathbf{y}_{k}(i)\!&=\!\left[y_{k}(i),y_{k}(i-\!1), \ldots, y_{k}\!\left(\!i-{N^{f}_{k}}\!+1\!\right),\bar{x}_{k}(i), \bar{x}_{k}(i-\!1),\right.\\ &\qquad\left. \ldots, \bar{x}_{k}\left(i-{N^{b}_{k}}+1\right)\right]^{T}, \\[-15pt] \end{aligned} $$
(7)
and the corresponding coefficient vector, p k (n), at time instant n, as:
$${} \begin{aligned} \mathbf{p}_{k}(n)\!&=\!\left[\!{p_{k}^{f}}(n), {p_{k}^{f}}\!(n\,-\,1),\! \ldots, {p_{k}^{f}}\!\!\left(\!n\,-\,{N^{f}_{k}}\!\,+\,1\!\right)\!, {p_{k}^{b}}(n), {p_{k}^{b}}(n\,-\,1),\right.\\ &\qquad\left.\ldots, {p_{k}^{b}}\left(n\,-\,{N^{b}_{k}}\,+\,1\right)\!\right]^{T}\!. \\[-15pt] \end{aligned} $$
(8)
The input vector to the kth ADIR filter at time instant i and its coefficient vector at the time instant n are also defined as:
$$ \bar{\mathbf{x}}_{k}(i)=\left[\bar{x}_{k}(i),\bar{x}_{k}(i-1),\ldots,\bar{x}_{k}\left(i-{N^{d}_{k}}+1\right)\right]^{T} $$
(9)
and
$$ \mathbf{w}_{k}(n)=\left[w_{k}(n),w_{k}(n-1),\ldots, w_{k}\left(n-{N^{d}_{k}}+1\right)\right]^{T}. $$
(10)

Note that we assume, without loss of generality, that \( {N^{b}_{k}} \leq {N^{f}_{k}}\) for the kth DFE.

The main concern of the exponentially weighted LS problem under consideration is thus to find, at each time n, the kth optimal coefficient vectors, p k (n) and w k (n), that would minimize the cost function:
$$ J_{k}(n)=\sum\limits_{i=0}^{n} \beta^{n-i} \left| \mathbf{w}^{H}_{k}(n) \bar{\mathbf{x}}_{k}(i) - \mathbf{p}^{H}_{k}(n) \text{} \mathbf{y}_{k}(i) \right|^{2}, $$
(11)
which can be expressed in matrix form as follows:
$$ \begin{aligned}{} J_{k}(n)&=\mathbf{w}^{H}_{k}(n) \ \mathbf{R}_{\bar{x}_{k} \bar{x}_{k}}(n) \ \mathbf{w}_{k}(n) - \mathbf{w}^{H}_{k}(n) \ \mathbf{R}_{\bar{x}_{k} y_{k}}(n) \ \mathbf{p}_{k}(n) \\ &\quad - \mathbf{p}^{H}_{k}(n) \ \mathbf{R}_{y_{k} \bar{x}_{k}}(n) \ \mathbf{w}_{k}(n) + \mathbf{p}^{H}_{k}(n) \ \mathbf{R}_{{y}_{k}{y}_{k}}(n) \ \mathbf{p}_{k}(n). \end{aligned} $$
(12)
Herein, \( \mathbf {R}_{y_{k} y_{k}}(n)\) is the \(\left ({N^{f}_{k}} + {N^{b}_{k}}\right) \times \left ({N^{f}_{k}} + {N^{b}_{k}}\right)\) correlation matrix of y k (i), which is given by:
$$ \mathbf{R}_{y_{k} y_{k}}(n)=\sum\limits_{i=0}^{n} \beta^{n-i} \mathbf{y}_{k}(i) \mathbf{y}^{H}_{k}(i), $$
(13)
\(\mathbf {R}_{\bar {x}_{k} y_{k}}(n)\) is the \( {N^{d}_{k}} \times \left ({N^{f}_{k}} + {N^{b}_{k}}\right)\) cross-correlation matrix of \(\bar {\mathbf {x}}_{k}(n)\) and y k (n), which can be expressed as:
$$ \mathbf{R}_{\bar{x}_{k} y_{k}}(n)=\sum\limits_{i=0}^{n} \beta^{n-i} \bar{\mathbf{x}}_{k}(i) \mathbf{y}^{H}_{k}(i), $$
(14)
and \( \mathbf {R}_{\bar {x}_{k} \bar {x}_{k}}(n)\) is the \({N^{d}_{k}} \times {N^{d}_{k}}\) autocorrelation matrix of the kth ADIR filter input data vector \(\bar {\mathbf {x}}_{k}(i)\), and is found by:
$$ \mathbf{R}_{\bar{x}_{k} \bar{x}_{k}}(n)=\sum\limits_{i=0}^{n} \beta^{n-i} \bar{\mathbf{x}}_{k}(i) \bar{\mathbf{x}}^{H}_{k}(i). $$
(15)
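Each of the exponentially windowed sums in Equations (13) to (15) can be maintained recursively as a rank-one update, R(n) = βR(n−1) + v(n)v(n)^H, which is how adaptive implementations typically realize them. A minimal NumPy sketch (the recursion itself is standard, not a construct specific to this paper):

```python
import numpy as np

def update_corr(R_prev, v, beta):
    """One step of the exponentially windowed sums in Equations
    (13)-(15): R(n) = beta * R(n-1) + v(n) v(n)^H, a rank-one update
    that avoids recomputing the whole sum at every time instant."""
    v = np.asarray(v, dtype=complex).reshape(-1, 1)
    return beta * R_prev + v @ v.conj().T
```
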
Note that \( \mathbf {R}_{\bar {x}_{k} y_{k}}(n) \triangleq \mathbf {R}^{H}_{y_{k} \bar {x}_{k}}(n).\) Subsequently, the kth optimal coefficient vector for the equalizer is determined by differentiating J k (n) with respect to p k (n), setting the derivative to zero, and solving for p k (n):
$$ \mathbf{p}^{opt}_{k}(n)=\mathbf{R}^{-1}_{y_{k} y_{k}}(n) \mathbf{R}_{y_{k} \bar{x}_{k}}(n) \mathbf{w}_{k}(n). $$
(16)
In order to find a solution for the optimal coefficient vector of the kth ADIR filter, we substitute Equation (16) back into the cost function in Equation (12) and attain the following quadratic form in w k (n):
$${} J_{k}(n)\!=\mathbf{w}^{H}_{k}(n) \! \left[ \! \mathbf{R}_{\bar{x}_{k}\bar{x}_{k}}(n) \,-\, \mathbf{R}_{\bar{x}_{k} y_{k}}(n) \mathbf{R}^{-1}_{y_{k} y_{k}}\!(n) \mathbf{R}_{y_{k} \bar{x}_{k}}(n) \! \right] \! \mathbf{w}_{k}(n). $$
(17)
Then, the expression enclosed by square brackets in Equation (17), a Hermitian \( {N^{d}_{k}} \times {N^{d}_{k}}\) matrix, is defined as:
$$ \mathbf{\widetilde{R}}_{\bar{x}_{k}y_{k}}(n)=\mathbf{R}_{\bar{x}_{k}\bar{x}_{k}}(n) - \mathbf{R}_{\bar{x}_{k} y_{k}}(n) \mathbf{R}^{-1}_{y_{k} y_{k}}(n) \mathbf{R}_{y_{k} \bar{x}_{k}}(n), $$
(18)
so that the cost function in Equation (17) can be restated as:
$$ J_{k}(n)=\mathbf{w}^{H}_{k}(n) \widetilde{\mathbf{R}}_{\bar{x}_{k}y_{k}}(n) \mathbf{w}_{k}(n). $$
(19)
In minimizing the expression in Equation (19), a unit energy constraint, \( \mathbf {w}^{H}_{k}(n)\mathbf {w}_{k}(n)=1 \), is applied to the kth ADIR filter coefficients to avoid the trivial null equalizer solution [21], and the following Lagrangian expression is formed:
$${} L(\mathbf{w}_{k}(n),\lambda)\!=\mathbf{w}^{H}_{k}(n) \widetilde{\mathbf{R}}_{\bar{x}_{k}y_{k}}(n) \mathbf{w}_{k}(n) + \lambda \! \left(\mathbf{w}^{H}_{k}(n) \mathbf{w}_{k}(n) - 1\right). $$
(20)
After taking the derivative of the expression in (20) and equating to zero, we get:
$$ \mathbf{\widetilde{R}}_{\bar{x}_{k}y_{k}}(n) \ \mathbf{w}^{\text{opt}}_{k}(n)= \lambda \ \mathbf{w}^{\text{opt}}_{k}(n), $$
(21)
which shows that the optimal kth ADIR coefficient vector \(\mathbf {w}^{\text {opt}}_{k}(n)\) and λ are a unit-norm eigenvector and an eigenvalue of the matrix \(\widetilde {\mathbf {R}}_{\bar {x}_{k}y_{k}}(n)\), respectively. If the expression on the right-hand side of Equation (21) is substituted for \( \widetilde {\mathbf {R}}_{\bar {x}_{k}y_{k}}(n)\mathbf {w}_{k}(n)\) in Equation (19) and \( \mathbf {w}^{H}_{k}(n)\) in Equation (19) is replaced with \(\mathbf {w}^{H\text {opt}}_{k}(n)\), then the minimum cost can be stated as follows:
$$\begin{array}{@{}rcl@{}} J_{k}(n)&=&\lambda \ \mathbf{w}^{H\text{opt}}_{k}(n) \mathbf{w}^{\text{opt}}_{k}(n) \\ &=& \lambda_{\text{min}} \end{array} $$
(22)

which demonstrates that the cost function is minimized by choosing \(\mathbf {w}^{\text {opt}}_{k}(n)\) to be the eigenvector of the matrix \(\widetilde {\mathbf {R}}_{\bar {x}_{k}y_{k}}\) associated with its minimum eigenvalue, represented by λ min. Consequently, the optimal coefficient vectors for the kth equalizer and the kth ADIR filter, \(\mathbf {p}^{\text {opt}}_{k}(n)\) and \(\mathbf {w}^{\text {opt}}_{k}(n)\), are given by Equations (16) and (21), respectively.
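For reference, the block (non-adaptive) solution of Equations (16) to (22) can be sketched directly in NumPy: build the weighted correlation matrices, form the matrix of Equation (18), take the eigenvector of its smallest eigenvalue for w, and back-substitute into Equation (16) for p. This is only a numerical check of the derivation under hypothetical data; the receiver proposed in the paper deliberately avoids these explicit matrix inverses:

```python
import numpy as np

def shortening_solution(Y, X, beta):
    """Block solution for one channel k. Y and X hold the stacked DFE
    and ADIR input vectors (one column per time instant). Returns
    (w_opt, p_opt) per Equations (21) and (16)."""
    n = Y.shape[1]
    wts = beta ** np.arange(n - 1, -1, -1)          # exponential window
    Ryy = (Y * wts) @ Y.conj().T                    # Eq. (13)
    Rxy = (X * wts) @ Y.conj().T                    # Eq. (14)
    Rxx = (X * wts) @ X.conj().T                    # Eq. (15)
    # Eq. (18): R_tilde = Rxx - Rxy Ryy^{-1} Ryx
    Rt = Rxx - Rxy @ np.linalg.solve(Ryy, Rxy.conj().T)
    evals, evecs = np.linalg.eigh(Rt)
    w = evecs[:, 0]                                 # unit-norm min-eigenvector
    p = np.linalg.solve(Ryy, Rxy.conj().T @ w)      # Eq. (16)
    return w, p
```
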

2.1 V-BLAST type MIMO-DFE

We would like to use a V-BLAST type design approach for the front-end filter of the proposed equalizer, and we also require a single, multichannel, and compact equalizer structure, so that two separate equalizers and direct evaluations as in (16) are avoided, and the same filter can be reconfigured as a spectral analysis or positioning filter. These objectives can be accomplished by exploiting the equivalence of V-BLAST and modified Gram-Schmidt orthogonalization operations, and therefore by completely orthogonalizing the two-channel input data of the DFE using SPMGLSs, which bring scalar-only operations and good numerical properties, as well as modularity, regularity, order recursiveness, and reconfigurability, to the solution of the equalization problem under consideration. Hereupon, we present the modifications we propose to make to SPMLSs so as to obtain SPMGLSs, and then the design of the front-end multichannel DFE using SPMGLSs.

2.2 SPMGLS

A SPMGLS has a block structure as shown in Figure 3, and the input signal vectors to a SPMGLS are defined as follows: the input forward prediction error vector:
$${} \mathbf{f}_{\ell-1}(i)=\left[\,f^{0}_{\ell-1}(i),f^{1}_{\ell-1}(i), \ldots, \ldots, f^{p-1}_{\ell-1}(i),f^{p}_{\ell-1}(i)\right]^{T}, $$
(23)
Figure 3

A block diagram of SPMGLS.

the backward prediction error vector:
$${} \mathbf{b}_{\ell-1}(i)=\left[b^{0}_{\ell-1}(i),b^{1}_{\ell-1}(i), \ldots, \ldots, b^{p-1}_{\ell-1}(i),b^{p}_{\ell-1}(i)\right]^{T}, $$
(24)
and the estimation error vector:
$${} \mathbf{e}_{\ell-1}(i)=\left[e^{0}_{\ell-1}(i),e^{1}_{\ell-1}(i),\ldots, \ldots, e^{p-1}_{\ell-1}(i), e^{p}_{\ell-1}(i)\right]^{T}. $$
(25)
The elements of input forward and backward prediction error vectors in Equations (23) and (24) are orthogonalized by using self-orthogonalization processors (SOPs), which are triangular-shaped processors in Figure 3. The outputs of SOPs are the orthogonalized forward prediction error vector:
$${} \hat{\mathbf{f}}_{\ell-1}(i)=\left[\,\hat{f}^{0}_{\ell-1}(i),\hat{f}^{1}_{\ell-1}(i),\ldots,\ldots,\hat{f}^{p-1}_{\ell-1}(i),\hat{f}^{p}_{\ell-1}(i)\right]^{T} $$
(26)
and the orthogonalized backward prediction error vector:
$${} \hat{\mathbf{b}}_{\ell-1}(i)=\left[\hat{b}^{0}_{\ell-1}(i), \hat{b}^{1}_{\ell-1}(i),\ldots,\ldots, \hat{b}^{p-1}_{\ell-1}(i),\hat{b}^{p}_{\ell-1}(i)\right]^{T}. $$
(27)

The elements of \(\hat {\mathbf {f}}_{\ell -1}(i)\) are fed into a forward prediction reference-orthogonalization processor (ROP) in order to predict the elements of \(\mathbf{b}_{\ell-1}(i-1)\) and to produce the stage output backward prediction error vector \(\mathbf{b}_{\ell}(i)\). The elements of \(\hat {\mathbf {b}}_{\ell -1}(i)\) are fed into a ROP to perform p-channel joint process estimation and to produce the stage output estimation error vector \(\mathbf{e}_{\ell}(i)\). Subsequently, the elements of \(\hat {\mathbf {b}}_{\ell -1}(i)\) are delayed and also fed into another ROP to obtain the stage output forward prediction error vector \(\mathbf{f}_{\ell}(i)\).

There are two types of processing cells, single and double circular processors, in a SPMGLS as in the original SPMLS in [23]. Nevertheless, we change the processing equations implemented in these processing cells with the equations of the square root version of the Givens algorithm in [33]. The interconnections and signals propagating through these processing cells are shown in Figure 4. The processing cells symbolized with a double circle, which are also called boundary (angle computer) cells, perform the following equations:
Figure 4

Processing cells in a SPMGLS.

$$ d(i)=\sqrt{\beta d^{2}(i-1) + |x_{ref}(i)|^{2}} $$
(28)
$$ c(i)=\frac{\sqrt{\beta} d(i-1)}{d(i)} $$
(29)
$$ s(i)=x^{*}_{ref}(i)/d(i). $$
(30)
From Equations (28),(29), and (30), it can be shown that the parameters c(i) and s(i) satisfy the equation:
$$ \left[ \begin{array}{ll} c(i) & s(i) \\ -s^{*}(i) & c(i) \end{array} \right] \left[ \begin{array}{l} \sqrt{\beta} \text{} d(i-1)\\ x_{ref}(i) \end{array} \right]= \left[ \begin{array}{c} d(i) \\ 0 \end{array} \right]. $$
(31)
The connection between the input, γ in(i), and output, γ out(i), likelihood variables is defined as:
$$ \gamma_{\text{out}}(i)=c(i) \gamma_{\text{in}}(i). $$
(32)
On the other hand, the processing cells symbolized with a single circle, which are called internal (rotator) cells, perform the following equations:
$$ x_{\text{out}}(i) = c(i) x_{\text{in}}(i) - s^{*}(i) \sqrt{\beta} \text{} \kappa(i-1) $$
(33)
and
$$ \kappa(i)=c(i) \sqrt{\beta} \kappa(i-1) + s(i) x_{\text{in}}(i). $$
(34)
Accordingly, Equations (31), (33), and (34) can be combined into:
$${} \left[\!\! \begin{array}{ll} c(i) & s(i) \\ -s^{*}(i) & c(i) \end{array}\!\! \right] \left[\!\! \begin{array}{ll} \sqrt{\beta} d(i-1) & \sqrt{\beta} \kappa(i-1) \\ x_{\text{ref}}(i) & x_{\text{in}}(i) \end{array}\!\! \right]= \left[\!\! \begin{array}{ll} d(i) & \kappa(i)\\ 0 & x_{\text{out}}(i) \end{array}\!\! \right], $$
(35)
where
$$ {\mathbf Q}(i)=\left[ \begin{array}{ll} c(i) & s(i) \\ -s^{*}(i) & c(i) \end{array} \right] $$
(36)

with |Q(i)|=1, so that it performs a Givens plane rotation in the complex plane, and thereby the stability of the Givens algorithm is guaranteed.
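The boundary and internal cell equations (28) to (34) can be prototyped directly. The sketch below (plain Python, with hypothetical input values) also makes it easy to check that the computed (c, s) pair annihilates x_ref as required by Equation (31):

```python
import math

def boundary_cell(d_prev, x_ref, beta):
    """Equations (28)-(30): update the local norm d and emit the
    Givens rotation parameters (c, s)."""
    d = math.sqrt(beta * d_prev**2 + abs(x_ref)**2)
    c = math.sqrt(beta) * d_prev / d
    s = complex(x_ref).conjugate() / d
    return d, c, s

def internal_cell(c, s, kappa_prev, x_in, beta):
    """Equations (33)-(34): rotate the stored coefficient kappa
    against the incoming sample and emit the rotated output."""
    x_out = c * x_in - s.conjugate() * math.sqrt(beta) * kappa_prev
    kappa = c * math.sqrt(beta) * kappa_prev + s * x_in
    return kappa, x_out

# Check of Eq. (31) with hypothetical values: the rotation applied to
# [sqrt(beta) d(i-1); x_ref] zeroes the second component.
d, c, s = boundary_cell(1.0, 0.5 + 0.5j, 0.98)
residual = -s.conjugate() * math.sqrt(0.98) * 1.0 + c * (0.5 + 0.5j)
```
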

2.3 Sequential Givens lattice orthogonalization

The V-BLAST processing is made possible by utilizing SPMGLSs in the design of the MIMO-DFE, so that the number of channels at different sections of the proposed multichannel lattice DFE differs due to the sequential processing nature of SPMGLSs. Therefore, we carry out the exponentially weighted LS optimization problem by considering each of these sections separately and assume that the proposed equalizer is comprised of three cascaded equalizers, namely two-channel, three-channel, and four-channel lattice sections; we use a different index for each section while using m to indicate a stage in the whole equalizer. Henceforth, we focus on the case \({N^{f}_{1}}={N^{f}_{2}}\), as it brings out the essential ideas without unduly complicating the development or losing generality.

In order to sequentially solve the exponentially weighted LS optimization problem under consideration, we first organize the elements of the input signal vectors \(\mathbf{y}_{1}(i)=\left[y_{1}(i),\ldots,y_{1}(i-\ell+1)\right]^{T}\) and \(\mathbf{y}_{2}(i)=\left[y_{2}(i),\ldots,y_{2}(i-\ell+1)\right]^{T}\), according to the natural ordering of SPMGLSs, as:
$$ \begin{array}{ll} & \bar{\mathbf{y}}_{\ell}(i)=\left[ \begin{array}{c} y_{1}(i) \\ y_{2}(i) \\ y_{1}(i-1) \\ y_{2}(i-1) \\ \vdots \\ y_{1}(i-\ell+1)\\ y_{2}(i-\ell+1) \end{array} \right] \\ \end{array} $$
(37)

and input to two-channel stages for which the stage number m has a range of values given by \( 0 < m \leq \left ({N^{f}_{1}}-{N^{b}_{1}}\right)\).

Accordingly, we redefine Equations (13) and (14) using this new data vector as follows:
$$ \mathbf{\Re}_{\bar{y}_{\ell} \bar{y}_{\ell}}(n)=\sum\limits_{i=0}^{n} \beta^{n-i} \bar{\mathbf{y}_{\ell}}(i) \bar{\mathbf{y}_{\ell}}^{H}(i), $$
(38)
and
$$ \mathbf{\Re}_{\bar{x}_{k} \bar{y}_{\ell}}(n)=\sum\limits_{i=0}^{n} \beta^{n-i} \bar{\mathbf{x}}_{k}(i) \bar{\mathbf{y}^{H}_{\ell}}(i), $$
(39)
where k=1,2. The orthogonalization of input data using SPMGLSs corresponds to the transformation of (38) and (39) into:
$$ \mathbf{\Xi}_{\ell}(n) = \sum\limits_{i=0}^{n} \beta^{n-i} \mathbf{\Omega}_{\ell}(n) \bar{\mathbf{y}}_{\ell}(i) \bar{\mathbf{y}}_{\ell}^{H}(i) \mathbf{\Omega}^{H}_{\ell}(n) $$
(40)
and
$$ \mathbf{\Gamma}_{\ell,k}(n) = \sum\limits_{i=0}^{n} \beta^{n-i} \mathbf{\Omega}_{\ell}(n) \text{} \bar{\mathbf{y}}_{\ell}(i) \bar{\mathbf{x}}^{H}_{k}(i) $$
(41)
respectively. Here, \(\mathbf{\Omega}_{\ell}(n)\) is the \(2\ell \times 2\ell\) lower triangular transformation matrix and is realized stage-by-stage using 2×2 lower triangular transformation matrices:
$$ \mathbf{L}_{\ell}(n)=\left[ \begin{array}{ll} 1 & 0 \\ \hat{\kappa}_{\ell}(n-1) & 1 \end{array} \right] $$
(42)
whose diagonal elements are all equal to unity at time instant n, and \(\hat {\kappa }_{\ell }(n)\) is the reflection coefficient computed at the single circular cell in the triangular-shaped self-orthogonalization processor of the \(\ell\)th two-channel SPMGLS. Then, the lattice joint process estimation coefficients are computed by means of:
$$ \mathbf{\Theta}_{\ell_{k}}(n)={\mathbf \Xi}^{-1}_{\ell}(n) {\mathbf{\Gamma}}_{\ell,k}(n), $$
(43)
where \(\mathbf {\Theta }_{\ell _{k}}(n)\) represents the kth row of the \(2 \times 2\ell\) lattice joint process estimation reflection coefficient matrix \(\mathbf{\Theta}_{\ell}(n)\), which is also sequentially computed stage-by-stage using 2×2 joint process estimation coefficient matrices:
$$\begin{array}{@{}rcl@{}} \begin{array}{c} \mathbf{\Delta}_{\ell}(n) = \end{array} \left[ \begin{array}{cc} \bar{\kappa}_{\ell,_{1,1}}(n) & \bar{\kappa}_{\ell,_{1,2}}(n) \\ \bar{\kappa}_{\ell,_{2,1}}(n) & \bar{\kappa}_{\ell,_{2,2}}(n) \end{array} \right], \end{array} $$
(44)

in which \(\bar {\mathbf {\kappa }}_{\ell,_{k,j}}(n)\) is the jth reflection coefficient related to the estimation of the kth desired signal, and it is computed at the (k,j)th single circular cell of the square-shaped reference-orthogonalization processor related to joint process estimation at the \(\ell\)th two-channel SPMGLS. Note that the matrix inversion operation in Equation (16) is transformed into a simple scalar inversion operation in (43) due to the diagonal nature of \(\mathbf{\Xi}_{\ell}(n)\).
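The scalar-only nature of this orthogonalization can be illustrated with a toy example: a single reflection coefficient decorrelates the second channel from the first, which is what one cell of the lower triangular transform in Equation (42) accomplishes. The sketch below is a plain exponentially weighted Gram-Schmidt step under our own naming, not the paper's lattice implementation:

```python
def reflection_coefficient(u, v, beta=0.99):
    """Exponentially weighted projection coefficient of channel v onto
    channel u (a scalar stand-in for kappa-hat in Eq. (42))."""
    num = sum(beta ** (len(u) - 1 - i) * v[i] * u[i] for i in range(len(u)))
    den = sum(beta ** (len(u) - 1 - i) * u[i] * u[i] for i in range(len(u)))
    return -num / den

def orthogonalize_pair(u, v, beta=0.99):
    """Apply the lower-triangular transform [[1, 0], [kappa, 1]]:
    the first channel passes through, the second is decorrelated from it."""
    kappa = reflection_coefficient(u, v, beta)
    v_orth = [v[i] + kappa * u[i] for i in range(len(v))]
    return u, v_orth, kappa

u = [1.0, 2.0, -1.0, 0.5]
v = [0.5, 1.0, 1.0, 2.0]
_, v_orth, kappa = orthogonalize_pair(u, v, beta=1.0)
# With beta = 1 the weighted cross-correlation of u and v_orth vanishes,
# so the 2x2 correlation matrix becomes diagonal: only scalar divisions
# remain in Eq. (43).
cross = sum(a * b for a, b in zip(u, v_orth))
print(abs(cross) < 1e-12)  # → True
```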

After the processing of the input signals by the two-channel lattice stages, the first estimation error signal, \(\bar {x}_{1}(i) =e_{1}(i)\), which corresponds to the detected and fed back signal of the first channel, is incorporated at the \(\left ({N^{f}_{1}}-{N^{b}_{1}} + 1\right)\)th stage as the third channel. Accordingly, we expand the optimization problem by organizing the elements of the input data vectors \(\mathbf{y}_{1}(i)=\left[y_{1}(i),\ldots,y_{1}(i-\alpha+1)\right]^{T}\), \(\mathbf{y}_{2}(i)=\left[y_{2}(i),\ldots,y_{2}(i-\alpha+1)\right]^{T}\), and \(\bar{\mathbf{x}}_{1}(i)=\left[\bar{x}_{1}(i-1),\ldots,\bar{x}_{1}(i-\alpha+1)\right]^{T}\) as follows:
$$ \begin{array}{ll} & \bar{\mathbf{y}}_{\alpha}(i)=\left[ \begin{array}{c} y_{1}(i) \\ y_{2}(i) \\ \bar{x}_{1}(i-1) \\ ---- \\ y_{1}(i-\alpha+1)\\ y_{2}(i-\alpha+1) \\ \bar{x}_{1}(i-\alpha+1) \end{array} \right], \\ \end{array} $$
(45)

and input to the three-channel lattice section, where the stage number m takes values in the range \(\left ({N^{f}_{1}}-{N^{b}_{1}}\right) < m \leq \left ({N^{f}_{2}}-{N^{b}_{2}}\right)\). Subsequently, we solve the optimization problem in (43) once again with the new input vector, in which case \(\mathbf{\Omega}_{\alpha}(n)\) and \(\mathbf{\Theta}_{\alpha}(n)\) are the \(3\alpha \times 3\alpha\) lower triangular transformation and the \(3 \times 3\alpha\) lattice joint process estimation coefficient matrices, respectively. \(\mathbf{\Omega}_{\alpha}(n)\) is computed sequentially by means of 3×3 lower triangular transformation matrices, \(\mathbf{L}_{\alpha}(n)\), and \(\mathbf{\Theta}_{\alpha}(n)\) is similarly realized stage-by-stage making use of 3×3 joint process estimation coefficient matrices, \(\mathbf{\Delta}_{\alpha}(n)\), at time instant n.

Finally, the optimization problem is expanded one more time with the inclusion of the second estimation error signal, \(\bar {x}_{2}(i) =e_{2}(i)\), which is related to the detected and fed back signal of the second channel. This time, the elements of the input data vectors \(\mathbf{y}_{1}(i)=\left[y_{1}(i),\ldots,y_{1}(i-\vartheta+1)\right]^{T}\), \(\mathbf{y}_{2}(i)=\left[y_{2}(i),\ldots,y_{2}(i-\vartheta+1)\right]^{T}\), \(\bar {\mathbf {x}}_{1}(i)=\left [\bar {x}_{1}(i-1),\ldots, \bar {x}_{1}(i-\vartheta +1)\right ]^{T}\), and \(\bar {\mathbf {x}}_{2}(i)=\left [\bar {x}_{2}(i-1),\ldots,\bar {x}_{2}(i-\vartheta +1)\right ]^{T}\) are organized as:
$$ \begin{array}{ll} & \bar{\mathbf{y}}_{\vartheta}(i)=\left[ \begin{array}{c} y_{1}(i) \\ y_{2}(i) \\ \bar{x}_{1}(i-1) \\ \bar{x}_{2}(i-1) \\ ---- \\ y_{1}(i-\vartheta+1)\\ y_{2}(i-\vartheta+1) \\ \bar{x}_{1}(i-\vartheta+1) \\ \bar{x}_{2}(i-\vartheta+1) \end{array} \right], \\ \end{array} $$
(46)

where the stage number m is in the range \(\left ({N^{f}_{2}}-{N^{b}_{2}}\right) < m \leq {N^{f}_{2}}\) due to four-channel processing. Similar to the two-channel and three-channel cases, we solve the optimization problem in (43) using the new data vector in Equation (46), in which case \(\mathbf{\Omega}_{\vartheta}(n)\) and \(\mathbf{\Theta}_{\vartheta}(n)\) are the \(4\vartheta \times 4\vartheta\) lower triangular transformation and \(4 \times 4\vartheta\) lattice joint process estimation coefficient matrices at time instant n, respectively. As in the previous cases, these matrices are computed stage-by-stage using 4×4 lower triangular transformation matrices, \(\mathbf{L}_{\vartheta}(n)\), and 4×4 joint process estimation coefficient matrices, \(\mathbf{\Delta}_{\vartheta}(n)\), at time instant n.
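The bookkeeping of how many channels each lattice stage processes follows directly from the three stage ranges above and can be summarized in a few lines (a sketch with hypothetical names, following the ranges given in the text):

```python
def channels_at_stage(m, Nf1, Nb1, Nf2, Nb2):
    """Number of lattice channels processed at stage m: two received
    signals only, then the first fed-back decision joins, then the
    second (a bookkeeping sketch, not part of the algorithm itself)."""
    if m <= Nf1 - Nb1:
        return 2
    if m <= Nf2 - Nb2:
        return 3
    return 4

# Example with (Nf1, Nb1) = (13, 7) and (Nf2, Nb2) = (13, 2),
# the configuration used later in the matrix visualization.
counts = [channels_at_stage(m, 13, 7, 13, 2) for m in range(1, 14)]
print(counts)  # → [2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4]
```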

2.4 Computation of error order updates

Due to the sequential nature of the proposed lattice structure, we carry out the multichannel error order update task by considering the two-channel, three-channel, and four-channel sections separately; we therefore assume that the filter is comprised of three cascaded filters, as described in the previous subsection. The prediction and joint process estimation errors at the end of the observation interval n=i at the output of the \(\ell\)th order two-channel equalizer section, where \( 0 < m \leq \left ({N^{f}_{1}}-{N^{b}_{1}}\right)\), can be stated in terms of lattice parameters and the \((\ell-1)\)th order prediction errors as follows:
$${} \begin{aligned} \left[ \begin{array}{c} f_{\ell}^{1}(n) \\ f_{\ell}^{2}(n) \end{array} \right] =\left[ \begin{array}{c} f_{\ell-1}^{1}(n) \\ f_{\ell-1}^{2}(n) \end{array} \right] &+ \left[ \begin{array}{cc} \bar{\kappa}_{\ell,_{1,1}}^{f \ast }(n-1) & \bar{\kappa}_{\ell,_{1,2}}^{f \ast}(n-1) \\ \bar{\kappa}_{\ell,_{2,1}}^{f \ast }(n-1) & \bar{\kappa}_{\ell,_{2,2}}^{f \ast}(n-1) \end{array} \right] \\ &\times \left[ \begin{array}{cc} 1 & 0 \\ \hat{\kappa}_{\ell}^{f \ast}(n-2)& 1 \end{array} \right] \! \left[ \begin{array}{c} b_{\ell-1}^{1}(n-1) \\ b_{\ell-1}^{2}(n-1) \end{array} \right] \end{aligned} $$
(47)
and
$${} \begin{aligned} \left[ \begin{array}{c} b_{\ell}^{1}(n) \\ b_{\ell}^{2}(n) \end{array} \right] =\left[ \begin{array}{c} b_{\ell-1}^{1}(n-1) \\ b_{\ell-1}^{2}(n-1) \end{array} \right] &+ \left[\! \begin{array}{cc} \bar{\kappa}_{\ell,_{1,1}}^{b \ast}(n-1) & \bar{\kappa}_{\ell,_{1,2}}^{b \ast}(n-1) \\ \bar{\kappa}_{\ell,_{2,1}}^{b \ast}(n-1) & \bar{\kappa}_{\ell,_{2,2}}^{b \ast}(n-1) \end{array}\!\! \right] \\ &\times \left[ \begin{array}{cc} 1 & 0 \\ \hat{\kappa}_{\ell}^{b \ast}(n-1) & 1 \end{array} \right] \left[ \begin{array}{c} f_{\ell-1}^{1}(n) \\ f_{\ell-1}^{2}(n) \end{array} \right] \end{aligned} $$
(48)
where the lower triangular and square coefficient matrices are generated in triangular-shaped self-orthogonalization and square-shaped reference-orthogonalization processors in a two-channel SPMGLS as previously defined in Equations (42) and (44). The joint process estimation error updates are accordingly given as:
$${} \begin{aligned} \left[ \begin{array}{c} e_{\ell}^{1}(n) \\ e_{\ell}^{2}(n) \end{array} \right] =\left[ \begin{array}{c} e_{\ell-1}^{1}(n) \\ e_{\ell-1}^{2}(n) \end{array} \right] + \left[ \begin{array}{cc} \bar{\kappa}_{\ell,_{1,1}}^{e \ast }(n-1) & \bar{\kappa}_{\ell,_{1,2}}^{e \ast}(n-1) \\ \bar{\kappa}_{\ell,_{2,1}}^{e \ast }(n-1) & \bar{\kappa}_{\ell,_{2,2}}^{e \ast }(n-1) \end{array} \right] \\ \times \left[ \begin{array}{cc} 1 & 0 \\ \hat{\kappa}_{\ell}^{e \ast }(n-1)& 1 \end{array} \right] \left[ \begin{array}{c} b_{\ell-1}^{1}(n) \\ b_{\ell-1}^{2}(n) \end{array} \right]. \end{aligned} $$
(49)
We then multiply the lower triangular and square coefficient matrices in Equations (47), (48), and (49), and make the following definitions:
$${} \begin{aligned} \boldsymbol{\Gamma}^{f}_{\ell}(n) &= \left[ \begin{array}{cc} \Gamma^{f}_{\ell,_{1,1}}(n) & \Gamma^{f}_{\ell,_{1,2}}(n) \\ \Gamma^{f}_{\ell,_{2,1}}(n) & \Gamma^{f}_{\ell,_{2,2}}(n) \end{array} \right]\\ &=\left[ \begin{array}{cc} \bar{\kappa}_{\ell,_{1,1}}^{f}(n) + \bar{\kappa}_{\ell,_{1,2}}^{f}(n)\hat{\kappa}_{\ell}^{f}(n-1) & \bar{\kappa}_{\ell,_{1,2}}^{f}(n) \\ \bar{\kappa}_{\ell,_{2,1}}^{f}(n) + \bar{\kappa}_{\ell,_{2,2}}^{f}(n)\hat{\kappa}_{\ell}^{f}(n-1) & \bar{\kappa}_{\ell,_{2,2}}^{f}(n) \end{array} \right], \end{aligned} $$
(50)
$$ \begin{aligned} \boldsymbol{\Gamma}^{b}_{\ell}(n) &= \left[ \begin{array}{cc} \Gamma^{b}_{\ell,_{1,1}}(n) & \Gamma^{b}_{\ell,_{1,2}}(n) \\ \Gamma^{b}_{\ell,_{2,1}}(n) & \Gamma^{b}_{\ell,_{2,2}}(n) \end{array} \right]\\ &=\left[ \begin{array}{cc} \bar{\kappa}_{\ell,_{1,1}}^{b}(n) + \bar{\kappa}_{\ell,_{1,2}}^{b}(n)\hat{\kappa}_{\ell}^{b}(n) & \bar{\kappa}_{\ell,_{1,2}}^{b}(n) \\ \bar{\kappa}_{\ell,_{2,1}}^{b}(n) + \bar{\kappa}_{\ell,_{2,2}}^{b}(n)\hat{\kappa}_{\ell}^{b}(n) & \bar{\kappa}_{\ell,_{2,2}}^{b}(n) \end{array} \right], \end{aligned} $$
(51)
and
$$ \begin{aligned} \boldsymbol{\Gamma}^{e}_{\ell}(n) &= \left[ \begin{array}{cc} \Gamma^{e}_{\ell,_{1,1}}(n) & \Gamma^{e}_{\ell,_{1,2}}(n) \\ \Gamma^{e}_{\ell,_{2,1}}(n) & \Gamma^{e}_{\ell,_{2,2}}(n) \end{array} \right]\\ &=\left[ \begin{array}{cc} \bar{\kappa}_{\ell,_{1,1}}^{e}(n) + \bar{\kappa}_{\ell,_{1,2}}^{e}(n)\hat{\kappa}_{\ell}^{e}(n) & \bar{\kappa}_{\ell,_{1,2}}^{e}(n) \\ \bar{\kappa}_{\ell,_{2,1}}^{e}(n) + \bar{\kappa}_{\ell,_{2,2}}^{e}(n)\hat{\kappa}_{\ell}^{e}(n) & \bar{\kappa}_{\ell,_{2,2}}^{e}(n) \end{array} \right], \end{aligned} $$
(52)
in order to obtain compact versions of the Equations (47), (48), and (49) as follows:
$$ \left[ \begin{array}{c} f_{\ell}^{1}(n) \\ f_{\ell}^{2}(n) \end{array} \right] =\left[ \begin{array}{c} f_{\ell-1}^{1}(n) \\ f_{\ell-1}^{2}(n) \end{array} \right] + \boldsymbol{\Gamma}^{f \ast}_{\ell}(n-1)\left[ \begin{array}{c} b_{\ell-1}^{1}(n-1) \\ b_{\ell-1}^{2}(n-1) \end{array} \right], $$
(53)
$$ \left[ \begin{array}{c} b_{\ell}^{1}(n) \\ b_{\ell}^{2}(n) \end{array} \right] =\left[ \begin{array}{c} b_{\ell-1}^{1}(n-1) \\ b_{\ell-1}^{2}(n-1) \end{array} \right] + \boldsymbol{\Gamma}^{b \ast}_{\ell}(n-1)\left[ \begin{array}{c} f_{\ell-1}^{1}(n) \\ f_{\ell-1}^{2}(n) \end{array} \right], $$
(54)
and
$$ \left[ \begin{array}{c} e_{\ell}^{1}(n) \\ e_{\ell}^{2}(n) \end{array} \right] =\left[ \begin{array}{c} e_{\ell-1}^{1}(n) \\ e_{\ell-1}^{2}(n) \end{array} \right] + \boldsymbol{\Gamma}^{e \ast}_{\ell}(n-1)\left[ \begin{array}{c} b_{\ell-1}^{1}(n) \\ b_{\ell-1}^{2}(n) \end{array} \right]. $$
(55)
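The compact order updates (53) to (55) amount to one matrix-vector product and one vector addition per error type at each stage. The real-valued toy sketch below uses our own naming, with the conjugates of the coefficient matrices omitted since the example is real-valued:

```python
def mat_vec(M, v):
    """Matrix-vector product on plain lists (2x2 here)."""
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

def order_update(f_prev, b_prev_delayed, b_prev, e_prev, Gf, Gb, Ge):
    """One lattice stage of Eqs. (53)-(55): order updates of the forward,
    backward, and joint-process estimation errors.  The Gamma matrices
    are assumed already formed as in Eqs. (50)-(52)."""
    f = [f_prev[k] + x for k, x in enumerate(mat_vec(Gf, b_prev_delayed))]
    b = [b_prev_delayed[k] + x for k, x in enumerate(mat_vec(Gb, f_prev))]
    e = [e_prev[k] + x for k, x in enumerate(mat_vec(Ge, b_prev))]
    return f, b, e

# Toy numbers for a two-channel stage
f, b, e = order_update(
    f_prev=[1.0, 2.0], b_prev_delayed=[0.5, -0.5], b_prev=[0.2, 0.4],
    e_prev=[0.1, 0.1],
    Gf=[[0.1, 0.0], [0.0, 0.1]], Gb=[[0.2, 0.0], [0.0, 0.2]],
    Ge=[[0.5, 0.0], [0.0, 0.5]])
print(f, b, e)
```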
The development of the prediction and joint process estimation error order updates from the (α−1)th order to the αth order for the three-channel section, where the stage number m takes values in the range \(\left ({N^{f}_{1}}-{N^{b}_{1}}\right) < m \leq \left ({N^{f}_{2}}-{N^{b}_{2}}\right)\), is carried out in a fashion similar to the two-channel section, and the updates can be expressed in compact form with the following equations:
$${} \left[ \begin{array}{c} f_{\alpha}^{1}(n) \\ f_{\alpha}^{2}(n) \\ f_{\alpha}^{3}(n) \end{array} \right] =\left[ \begin{array}{c} f_{\alpha-1}^{1}(n) \\ f_{\alpha-1}^{2}(n) \\ f_{\alpha-1}^{3}(n) \end{array} \right] + \boldsymbol{\Gamma}^{f \ast}_{\alpha}(n-1)\left[ \begin{array}{c} b_{\alpha-1}^{1}(n-1) \\ b_{\alpha-1}^{2}(n-1) \\ b_{\alpha-1}^{3}(n-1) \end{array} \right], $$
(56)
$${} \left[ \begin{array}{c} b_{\alpha}^{1}(n) \\ b_{\alpha}^{2}(n) \\ b_{\alpha}^{3}(n) \end{array} \right] =\left[ \begin{array}{c} b_{\alpha-1}^{1}(n-1) \\ b_{\alpha-1}^{2}(n-1) \\ b_{\alpha-1}^{3}(n-1) \end{array} \right] + \boldsymbol{\Gamma}^{b \ast}_{\alpha}(n-1)\left[ \begin{array}{c} f_{\alpha-1}^{1}(n) \\ f_{\alpha-1}^{2}(n) \\ f_{\alpha-1}^{3}(n) \\ \end{array} \right], $$
(57)
and
$$ \left[ \begin{array}{c} e_{\alpha}^{1}(n) \\ e_{\alpha}^{2}(n) \\ e_{\alpha}^{3}(n) \end{array} \right] =\left[ \begin{array}{c} e_{\alpha-1}^{1}(n) \\ e_{\alpha-1}^{2}(n) \\ e_{\alpha-1}^{3}(n) \end{array} \right] + \boldsymbol{\Gamma}^{e \ast}_{\alpha}(n-1)\left[ \begin{array}{c} b_{\alpha-1}^{1}(n) \\ b_{\alpha-1}^{2}(n) \\ b_{\alpha-1}^{3}(n) \\ \end{array} \right], $$
(58)
where
$$\begin{aligned} \boldsymbol{\Gamma}^{f}_{\alpha}(n) &= \left[ \begin{array}{ccc} \bar{\kappa}_{\alpha,_{1,1}}^{f}(n)& \bar{\kappa}_{\alpha,_{1,2}}^{f}(n) & \bar{\kappa}_{\alpha,_{1,3}}^{f}(n)\\ \bar{\kappa}_{\alpha,_{2,1}}^{f}(n)& \bar{\kappa}_{\alpha,_{2,2}}^{f}(n) & \bar{\kappa}_{\alpha,_{2,3}}^{f}(n)\\ \bar{\kappa}_{\alpha,_{3,1}}^{f}(n)& \bar{\kappa}_{\alpha,_{3,2}}^{f}(n) & \bar{\kappa}_{\alpha,_{3,3}}^{f}(n) \end{array} \right]\\ &\quad\times \left[ \begin{array}{ccc} 1 & 0 & 0 \\ \hat{\kappa}_{\alpha,_{2,1}}^{f}(n-1) & 1 & 0 \\ \hat{\kappa}_{\alpha,_{3,1}}^{f}(n-1) & \hat{\kappa}_{\alpha,_{3,2}}^{f}(n-1) & 1 \end{array} \right], \end{aligned} $$
$${} {\fontsize{8.7pt}{9.3pt}\selectfont{\begin{aligned} \mathbf{\Gamma}^{b}_{\alpha}(n) = \left[ \begin{array}{ccc} \bar{\kappa}_{\alpha,_{1,1}}^{b}(n)& \bar{\kappa}_{\alpha,_{1,2}}^{b}(n) & \bar{\kappa}_{\alpha,_{1,3}}^{b}(n)\\ \bar{\kappa}_{\alpha,_{2,1}}^{b}(n)& \bar{\kappa}_{\alpha,_{2,2}}^{b}(n) & \bar{\kappa}_{\alpha,_{2,3}}^{b}(n)\\ \bar{\kappa}_{\alpha,_{3,1}}^{b}(n)& \bar{\kappa}_{\alpha,_{3,2}}^{b}(n) & \bar{\kappa}_{\alpha,_{3,3}}^{b}(n) \end{array} \right]\left[ \begin{array}{ccc} 1 & 0 & 0 \\ \hat{\kappa}_{\alpha,_{2,1}}^{b}(n) & 1 & 0 \\ \hat{\kappa}_{\alpha,_{3,1}}^{b}(n) & \hat{\kappa}_{\alpha,_{3,2}}^{b}(n) & 1 \end{array} \right], \end{aligned}}} $$
and
$${} \small{ \begin{array}{l} \begin{array}{l} \boldsymbol{\Gamma}^{e}_{\alpha}(n) \,=\,\! \end{array} \left[\! \begin{array}{ccc} \bar{\kappa}_{\alpha,_{1,1}}^{e}(n)& \bar{\kappa}_{\alpha,_{1,2}}^{e}(n) & \bar{\kappa}_{\alpha,_{1,3}}^{e}(n)\\ \bar{\kappa}_{\alpha,_{2,1}}^{e}(n)& \bar{\kappa}_{\alpha,_{2,2}}^{e}(n) & \bar{\kappa}_{\alpha,_{2,3}}^{e}(n)\\ \bar{\kappa}_{\alpha,_{3,1}}^{e}(n)& \bar{\kappa}_{\alpha,_{3,2}}^{e}(n) & \bar{\kappa}_{\alpha,_{3,3}}^{e}(n) \end{array}\! \right]\!\! \left[\! \begin{array}{ccc} 1 & 0 & 0 \\ \hat{\kappa}_{\alpha,_{2,1}}^{e}(n) & 1 & 0 \\ \hat{\kappa}_{\alpha,_{3,1}}^{e}(n) & \hat{\kappa}_{\alpha,_{3,2}}^{e}(n) & 1 \end{array}\! \right]. \end{array} } $$

The prediction and joint process estimation error order update equations from the (𝜗−1)th order to the 𝜗th order for the four-channel section, where the stage number m is in the range \(\left ({N^{f}_{2}}-{N^{b}_{2}}\right) < m \leq {N^{f}_{2}}\), can be derived by following a procedure similar to the two- and three-channel sections; the 4×1 error order update vectors are subsequently obtained with 4×4 lower triangular and square coefficient matrices.

2.5 Matrix visualization

In order to visualize the cascading and functioning of the two-channel, three-channel, and four-channel sections as a single equalizer, we provide a matrix representation of the sequential Givens lattice orthogonalization by considering \(\left ({N^{f}_{1}},{N^{b}_{1}}\right)=(13,7)\) and \(\left ({N^{f}_{2}},{N^{b}_{2}}\right)=(13,2)\) DFEs for the first and second channels, and organizing the elements of the input data vectors \(\mathbf{y}_{1}(n)=\left[y_{1}(n),y_{1}(n-1),\ldots,y_{1}(n-11),y_{1}(n-12)\right]^{T}\), \(\mathbf{y}_{2}(n)=\left[y_{2}(n),y_{2}(n-1),\ldots,y_{2}(n-11),y_{2}(n-12)\right]^{T}\), \(\mathbf {x}_{1}(n)=\left [\bar {x}_{1}(n-1),\ldots, \bar {x}_{1}(n-6) \right ]^{T} \), and \(\mathbf {x}_{2}(n)=\left [\bar {x}_{2}(n-1)\right ]^{T}\) as columns of a matrix:
$${} {\fontsize{7.8pt}{9.9pt}\selectfont{\left[\! \begin{array}{lllllllll} y_{1}(n) & y_{1}(n-1) & \ldots & y_{1}(n-6) & y_{1}(n-7) & \ldots & y_{1}(n-11) & y_{1}(n-12) \\ y_{2}(n) & y_{2}(n-1) & \ldots & y_{2}(n-6) & y_{2}(n-7) & \ldots & y_{2}(n-11) & y_{2}(n-12) \\ & & & & \bar{x}_{1}(n-1)& \ldots & \bar{x}_{1}(n-5) & \bar{x}_{1}(n-6) \\ & & & & & & & \bar{x}_{2}(n-1) \end{array}\! \right]}} $$
(59)
by taking into consideration the different numbers of parameters in the feed forward and feedback channels and the shifting properties of the input data. This matrix helps us to visualize the orthogonalization process and thus to draw a diagram of the four-channel DFE structure under consideration, as in Figure 5. Note that the elements of the first and second rows are related to the input signals of the first and second channels of the DFE, while the third and fourth rows are associated with the detected and fed back signals. Lattice orthogonalization begins with the elements of the first two rows using two-channel sequential lattice processing stages until the first fed back channel is incorporated as a new channel at a transitional stage, the \(\left ({N^{f}_{1}} - {N^{b}_{1}} + 1\right)\)th stage. The orthogonalization then continues with three-channel lattice stages until the fourth channel, which is related to the detected and fed back signal of the second channel, is taken into the process at another transitional stage, the \(\left ({N^{f}_{2}} - {N^{b}_{2}} + 1\right)\)th stage. The orthogonalization of the input data is finalized with four-channel stages when the mean squared estimation error performance requirements are met, and thereby the kth desired signal, \(d_{k}(n)\), is sequentially estimated using self-orthogonalized backward prediction error signals as follows:
$$\begin{array}{@{}rcl@{}}{} \hat{d}^{k}_{n}(i) &= &\sum\limits_{m=1}^{\left({N^{f}_{1}}-{N^{b}_{1}}\right)}\, \sum\limits_{j=1}^{2}\bar{\kappa}_{m,k,j}^{\ast}(n-1) \hat{b}^{j}_{m-1}(i) \\ && + \sum\limits_{m=\left({N^{f}_{1}}-{N^{b}_{1}}+1\right)}^{\left({N^{f}_{2}}-{N^{b}_{2}}\right)} \, \sum_{j=1}^{3} \bar{\kappa}_{m,k,j}^{\ast}(n-1) \hat{b}^{j}_{m-1}(i) \\ &&+\sum\limits_{m=\left({N^{f}_{2}}-{N^{b}_{2}}+1\right)}^{{N^{f}_{2}}} \, \sum\limits_{j=1}^{4}\bar{\kappa}_{m,k,j}^{\ast}(n-1) \hat{b}^{j}_{m-1}(i). \end{array} $$
(60)
Figure 5

Diagram of four-channel DFE using SPMGLSs.

Here, the first and second summations represent the estimation accomplished by the two-channel and three-channel sections, respectively, and the third summation is connected with the four-channel estimation section. In each section, \(\bar {\kappa }_{m,k,j}(n)\) represents the jth estimation reflection coefficient at the mth stage related to the kth channel, as defined in the previous subsection, and \(\hat {b}^{j}_{m-1}(i) \) represents the jth element of the self-orthogonalized backward prediction error signal vector, \(\hat {\mathbf {b}}_{m-1}(i)\), at the input of the mth stage. The self-orthogonalized backward prediction error vector, \(\hat {\mathbf {b}}_{m-1}(i) \), is produced by the lower triangular transformation of the input backward prediction error vector, \(\mathbf{b}_{m-1}(i)\), using \(\mathbf{L}_{m}(n)\); this operation is accomplished at the triangular-shaped self-orthogonalization processor (related to forward prediction) of the mth SPMGLS. Note that the sizes of the vectors \(\hat {\mathbf {b}}_{m-1}(i) \) and \(\mathbf{b}_{m-1}(i)\) and of the matrix \(\mathbf{L}_{m}(n)\) in the different sections of the proposed lattice equalizer are as follows: 2×1 and 2×2 in the two-channel section, 3×1 and 3×3 in the three-channel section, and 4×1 and 4×4 in the four-channel section, respectively.
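The sectioned summation in Equation (60) can be sketched as follows; `kappa` and `b_hat` are hypothetical containers for the reflection coefficients and the self-orthogonalized backward errors, and the example is real-valued, so the conjugates in Equation (60) are dropped:

```python
def estimate_desired(kappa, b_hat, k, Nf1, Nb1, Nf2, Nb2):
    """Sketch of Eq. (60): the k-th desired signal is accumulated over
    the two-, three-, and four-channel sections, using only as many
    backward error components as each stage carries."""
    d = 0.0
    for m in range(1, Nf2 + 1):
        if m <= Nf1 - Nb1:
            J = 2          # two-channel section
        elif m <= Nf2 - Nb2:
            J = 3          # three-channel section
        else:
            J = 4          # four-channel section
        for j in range(J):
            d += kappa[m - 1][k][j] * b_hat[m - 1][j]
    return d

# Tiny example: (Nf1, Nb1) = (3, 2), (Nf2, Nb2) = (3, 1) gives one stage
# per section; all coefficients 1, backward errors (1, 2, 3, 4).
kappa = [[[1.0] * 4] for _ in range(3)]
b_hat = [[1.0, 2.0, 3.0, 4.0] for _ in range(3)]
print(estimate_desired(kappa, b_hat, 0, 3, 2, 3, 1))  # → 19.0
```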

3 Adaptive multiple systolic Viterbi detection

In order to achieve an all-systolic equalizer architecture, we propose to use the systolic array processor approach in [36] for the design and implementation of Viterbi detection, so that a high degree of computational concurrency is obtained by operating simultaneously and in synchronization with the rest of the equalizer circuitry. The most computationally intensive operation in the Viterbi detection of the sent data is the comparator metric, and a systolic computation of this metric is accomplished by multiply and accumulate operations:
$${} \text{STMO}^{\,j}_{k} \,=\, \text{OSMI}^{j}_{k} + {z_{n}^{k}}(i) +\!\! \sum\limits_{n=0}^{3{N^{d}_{k}}-3} \!\!\!c_{k}(n) x_{k}(i-n),\, j\,=\,1,2,\ldots,\!\upsilon, $$
(61)
for υ branches leading from states at time instant i−1 to each state at time instant i, as illustrated in Figure 6. Herein, υ stands for the alphabet size, \(\text {OSMI}^{j}_{k}\) and \(\text {STMO}^{j}_{k}\) represent the old survivor state metric input (OSMI) and the state metric output (STMO) for the jth branch leading to the state constituted by the vector \(\left [\!x_{k}(i\,-\,1) x_{k}(i\,-\,2) \! \ldots x_{k}(i\,-\,{N^{d}_{k}}\,+\,1)\!\right ]\) related to the kth channel of the equalizer, and \({z_{n}^{k}}(i)\) is the kth equalizer output as given in (5). Note that the elements of the coefficient vector \(\mathbf{c}_{k}(n)\):
$${} \mathbf{c}_{k}(n)=\left[c_{k}(n),c_{k}(n-1),\ldots, c_{k}\left(n-2{N^{d}_{k}}+2\right)\right]^{T} $$
(62)
Figure 6

υ branches from states at time instant i −1 to each state at time instant i .

are computed as:
$$ c_{k}(n)=\sum\limits^{{N^{d}_{k}}-1}_{i=0} w_{k}(i) w_{k}(n-i) $$
(63)

where \(\mathbf{w}_{k}\) represents the coefficient vector for the kth ADIR filter as defined in (10). Note that a processing element (PE) of the systolic array in Figure 6 is symbolized with a circled PE, and the memory of the computation cycle is designated as \(L=3{N^{d}_{k}}-3\) for ease of illustration.
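The two ingredients of this computation, the target-response coefficients of Equation (63) and the branch metric of Equation (61), can be sketched as below (the helper names are ours; whether the survivor keeps the minimum or the maximum candidate metric depends on the sign convention of the comparator metric):

```python
def target_response_autoconv(w):
    """Eq. (63): coefficients c(n) as the convolution of the ADIR
    coefficient vector with itself (length 2*Nd - 1)."""
    L = len(w)
    return [sum(w[i] * w[n - i] for i in range(L) if 0 <= n - i < L)
            for n in range(2 * L - 1)]

def branch_metric(osmi, z, c, x_hist):
    """Eq. (61): candidate state metric for one branch; x_hist holds the
    hypothesized symbols x(i), x(i-1), ... along that branch."""
    return osmi + z + sum(cn * xn for cn, xn in zip(c, x_hist))

w = [1.0, 0.5]                  # toy ADIR filter, Nd = 2
c = target_response_autoconv(w)
print(c)                        # → [1.0, 1.0, 0.25]
# Add-compare-select: among the upsilon candidate branch metrics leading
# to a state, only the survivor is kept.
print(branch_metric(0.0, 1.0, c, [1, -1, 1]))  # → 1.25
```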

4 Computational complexity

The computational complexity can be calculated by considering the two main sections of the proposed channel shortening equalizer. The first section implements the MIMO-DFE, while the second is related to the Viterbi processing. The number of operations required for the MIMO-DFE follows from the number of operations per stage and the number of stages. Using the complexity calculations in [23] and [33], the number of operations for a single SPMGLS with two, three, and four channels is computed as 84, 171, and 288, respectively. There are \(\left ({N^{f}_{1}}-{N^{b}_{1}}\right)\) stages in the two-channel section, so the number of required operations for this section sums up to \(84 \ \left ({N^{f}_{1}}-{N^{b}_{1}}\right)\). Similarly, the numbers of operations for the three-channel and four-channel sections are calculated as \(171 \left ({N^{b}_{1}}\,-\,{N^{b}_{2}}\right)\) and \(288\, {N^{b}_{2}}\), respectively, assuming \({N^{f}_{1}}={N^{f}_{2}}\) as in Figure 5. Therefore, the total number of operations for the MIMO-DFE in Figure 5 becomes \(84{N^{f}_{1}} \!+ 87{N^{b}_{1}} \!+ 117{N^{b}_{2}} \).

The complexity calculation for the systolic Viterbi section can be accomplished as follows. The total number of processing elements in the systolic array that implements Viterbi processing per channel is given as \( \upsilon \times {N^{d}_{k}} \) in [36]. Each element in this array performs one addition and one multiplication, which are counted together as one operation. Then, the total number of operations for the systolic implementation of M Viterbi detectors is \( M \times \upsilon \times {N^{d}_{k}} \). Accordingly, the total computational complexity of the proposed equalizer, taking into account both the MIMO-DFE and multiple Viterbi detector sections, becomes \(84{N^{f}_{1}} + 87{N^{b}_{1}} + 117{N^{b}_{2}} + M \upsilon {N^{d}_{k}} \).
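The collapse of the stage-wise sum into the closed form above can be checked numerically (a sketch with our own function names; the closed form assumes \({N^{f}_{1}}={N^{f}_{2}}\)):

```python
def dfe_ops(Nf1, Nb1, Nb2):
    """Stage-wise MIMO-DFE count: 84 ops per two-channel stage, 171 per
    three-channel stage, 288 per four-channel stage (Nf1 = Nf2 assumed)."""
    return 84 * (Nf1 - Nb1) + 171 * (Nb1 - Nb2) + 288 * Nb2

def total_ops(Nf1, Nb1, Nb2, M, upsilon, Nd):
    """Adds M systolic Viterbi detectors of upsilon*Nd PEs, one op each."""
    return dfe_ops(Nf1, Nb1, Nb2) + M * upsilon * Nd

# The stage-wise sum collapses to the closed form quoted in the text.
Nf1, Nb1, Nb2 = 13, 7, 2
print(dfe_ops(Nf1, Nb1, Nb2) == 84 * Nf1 + 87 * Nb1 + 117 * Nb2)  # → True
```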

We compare the computational complexity of the proposed channel shortening equalizer using a front-end feed forward equalizer (CSFFE) and a front-end decision feedback equalizer (CSDFE) with that of the VE in Figures 7, 8, and 9. When generating the complexity curves, we assumed that \(N^{f}={N^{f}_{1}}={N^{f}_{2}}\) and \(N^{d}={N^{d}_{k}}=3\) for both CSFFE and CSDFE, and additionally \(N^{b}={N^{b}_{1}}={N^{b}_{2}} = 0.25 \times N^{f}\) for CSDFE and \(N^{b}=1\) for CSFFE. The computational complexity of the VE is calculated by taking into account its most computationally demanding operation, i.e., the comparative metric calculation, and is given as M×(4μ+3)×υ μ−1 [38], where each channel from transmitter to receiver is assumed to have a memory of μ. In Figure 7, we plot the complexity curves for M=2, υ=2, and μ=4,8,16. The proposed method is not computationally advantageous when the channel memory (μ=4) is two times the ADIR filter memory (N d −1=2). However, the proposed method becomes advantageous, regardless of the feed forward filter length (N f ), when the channel memory is four times the ADIR filter memory, and far more beneficial when the channel memory is eight times the ADIR filter memory. Figure 8 shows that the computational advantage of the proposed method over the VE becomes more attractive when the alphabet size is increased from υ=2 to υ=4, even for the channel memory values μ=4,8; and Figure 9 demonstrates that this advantage becomes less pronounced when the number of antennas increases from M=2 to 4.
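The crossover behavior described above follows directly from the two formulas; the sketch below (our function names) reproduces it for the settings of Figure 7:

```python
def ve_ops(M, mu, upsilon):
    """VE complexity M*(4*mu + 3)*upsilon**(mu - 1), as quoted from [38]."""
    return M * (4 * mu + 3) * upsilon ** (mu - 1)

def csdfe_ops(Nf, M, upsilon, Nd):
    """Proposed CSDFE with Nb1 = Nb2 = Nf/4, the setting of the curves."""
    Nb = Nf // 4
    return 84 * Nf + 87 * Nb + 117 * Nb + M * upsilon * Nd

# Settings of Figure 7: M = 2, upsilon = 2, Nd = 3 (so Nd - 1 = 2).
cs = csdfe_ops(12, 2, 2, 3)
print(cs, ve_ops(2, 4, 2), ve_ops(2, 8, 2), ve_ops(2, 16, 2))
# VE is cheaper at mu = 4, the proposed equalizer wins at mu = 8,
# and far more so at mu = 16.
```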
Figure 7

Comparative computational complexity of the proposed equalizer vs. VE when M =2, υ =2, and N d −1=2.

Figure 8

Comparative computational complexity of the proposed equalizer vs. VE when M =2, υ =4, and N d −1=2.

Figure 9

Comparative computational complexity of the proposed equalizer vs. VE when M =4, υ =2, and N d −1=2.

The computational complexity vs. number of antennas analysis has also been carried out due to the recent interest in massive MIMO for next-generation wireless systems [37]. We would like to point out that the computational complexities of CSDFE and CSFFE are larger than those of DFE and FFE, respectively, by an amount of M×υ×N d , assuming \(N^{d}={N^{d}_{k}}, \forall k\). In this analysis, we assumed that \(N^{f}={N^{f}_{k}}=12, \forall k\), with \(N^{b}={N^{b}_{k}} = 0.25 \times N^{f}\) for CSDFE and DFE, and \(N^{b}={N^{b}_{k}}=1\) for CSFFE and FFE. In Figure 10, the computational complexity vs. number of antennas curves for CSDFE, CSFFE, DFE, and FFE are presented for υ=256 and N d −1=8, with M increasing from 0 to 150. For smaller values of υ and N d −1, the difference between the computational complexities of CSDFE and DFE, or of CSFFE and FFE, is minor, and their curves cannot be distinguished; this implies that the performance increase obtained by implementing CSDFE instead of DFE, or CSFFE instead of FFE, is achieved at an almost negligible computational cost, as will become clearer in the next section. In Figure 11, we compare the computational complexity of the proposed method with that of the VE for channel memory values μ=4,8,16, alphabet size υ=2, ADIR memory N d −1=2, and M increasing from 0 to 150. Finally, we repeat the same comparison in Figure 12 for υ=4 in order to demonstrate the effect of a larger alphabet size. Figure 11 shows that the proposed method is computationally advantageous compared to the VE when the channel memory is larger than eight (μ>8), and Figure 12 shows that this advantage improves when the alphabet size is increased from υ=2 to υ=4, the proposed method becoming less complex than the VE even when μ=8.
Figure 10

Comparative computational complexities of CSDFE, DFE, CSFFE, and FFE when υ =256 and N d −1=8.

Figure 11

Comparative computational complexity of the proposed equalizer vs. VE when υ =2 and N d −1=2.

Figure 12

Comparative computational complexity of the proposed equalizer vs. VE when υ =4 and N d −1=2.

5 Experimental results

The performance of the proposed receiver was investigated by means of MSE and probability of error simulations. In these evaluations, we considered linear time-invariant channels with spectral nulls as well as time-varying channels. The following channel impulse response matrices were defined for use in the simulations that demonstrate the performance of the proposed equalizer:

$$ \begin{aligned} \mathbf{h}_{1}(n)&= \left[ \begin{array}{cc} h_{a}(n) & \rho\, h_{a}(n-1) \\ \rho\, h_{a}(n-1) & h_{a}(n) \end{array} \right] \\ \mathbf{h}_{2}(n)&= \left[ \begin{array}{cc} \delta(n) & \rho\, \delta(n-1) \\ \rho\, \delta(n-1) & \delta(n) \end{array} \right] \end{aligned} $$
$$ \begin{array}{cc} \mathbf{h}_{3}(n)= \left[ \begin{array}{cc} h_{b}(n) & \rho\, h_{b}(n-1) \\ \rho\, h_{b}(n-1) & h_{b}(n) \end{array} \right] \end{array} $$
(64)
where δ(n) represents the Dirac delta function, and the channel impulse response \(h_{a}(n)\) is defined as \(h_{a}(n)=\sum _{j=0}^{4} a_{j} \delta (n-j)\), which has spectral nulls and a large eigenvalue spread (χ=1317.65). Herein, the channel coefficients are taken from [39] and are given by:
$${} \mathbf{a}=\left[0.227045 \, 0.460091 \, 0.688136 \, 0.460091 \, 0.227045\right] $$
(65)

with \(\sum _{j=0}^{4}{a_{j}^{2}}=1\). Note that the eigenvalue spread (χ) was determined using the method described in [40], assuming a feed forward equalizer with a memory of \(N^{f}-1=18\) and a noise variance of 0.001. The time-varying channel impulse response is defined as \(h_{b}(n)=\sum _{j=0}^{4} b_{j}(n)\delta (n-j)\), where \(b_{j}(n)\) represents the jth time-variant attenuation factor, generated independently using the improved version of Jakes' Rayleigh fading model in [41], assuming a data rate of 100 kbytes/s and Doppler shifts of \(f_{D}=50\) Hz and \(f_{D}=10\) Hz; \(b_{j}(n)\) is also normalized such that \(\sum _{j=0}^{4}{b_{j}^{2}}(n)=1\) for all n. By choosing the same channel for both the direct and indirect channels, we demonstrate the performance under a distortion situation so severe that both ISI and ICI are significant, and therefore the use of a DFE is justified. We could have generated a longer channel with the same spectral characteristics (or eigenvalue spread), but in that case we would not be able to benchmark the performance against the VE, as the simulation of the VE becomes computationally cumbersome for longer channels. Note that ρ represents the gain factor for the effect of ICI and takes values between 0 and 1. In this presentation, we consider two values of the gain factor, ρ=0 and ρ=1, which correspond to completely orthogonal and nonorthogonal transmissions, respectively [42].
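The unit-energy property of the taps in Equation (65) is easy to verify, and passing data through \(h_{a}(n)\) is an ordinary FIR convolution (a minimal sketch, not the simulation code of the paper):

```python
# Channel taps from Eq. (65), taken from [39]; expanding the
# Dirac-delta sum makes these the FIR tap values of h_a(n).
a = [0.227045, 0.460091, 0.688136, 0.460091, 0.227045]

energy = sum(x * x for x in a)
print(abs(energy - 1.0) < 1e-5)   # unit-energy check → True

def apply_channel(x, h):
    """Linear convolution of an input sequence with the channel taps
    (a minimal stand-in for passing data through h_a)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y
```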

In the simulations for the performance evaluation of the proposed method (CSDFE/CSFFE) with respect to FFE, DFE, and VE, and also in the simulations comparing the proposed method with different ADIR filter memories, the input signal x(n) applied to the channel was made of uniformly distributed bipolar \(\pm 1/\sqrt {2}\) random numbers because of the relative simplicity they provide in simulations. Note that uniformly distributed bipolar random numbers represent the BPSK modulation supported by the IEEE 802.11n WLAN standard [43]. In order to account for modulations that are in both the IEEE 802.11n WLAN and IEEE 802.16e MAN standards and to demonstrate the effect of higher-order modulation on the probability of error performance, we also performed CSFFE/CSDFE simulations using an input signal made of uniformly distributed random numbers taking values from (1/2+i/2,−1/2+i/2,−1/2−i/2,1/2−i/2), which represents QPSK modulation (\(i=\sqrt {-1}\)), and compared against the results of the proposed method for BPSK modulation in both the time-invariant and time-variant channel cases. Moreover, we assume complete knowledge of the time-invariant channel and knowledge of the time-variant channel memory in the VE and adaptive Viterbi equalizer (AVE) performance evaluations, respectively. The proposed method, on the other hand, needs no information about the channel; however, if channel information is already available and an assessment of the channel severity can be carried out, the result of this assessment can be used to determine the ADIR filter memories of the proposed equalizer so as to improve the performance.

The channel noise signal was additive white Gaussian noise (AWGN) with zero mean, uncorrelated with the input signal. The received signal-to-noise ratio (SNR) per channel of the receiver is defined as:
$$ \text{SNR}_{k}\overset{\bigtriangleup}{=}\frac{1}{2\sigma_{n_{k}}^{2}} $$
(66)

where \(\sigma _{n_{k}}^{2}\) is the variance of the AWGN for the kth channel. Accordingly, the SNRs for all channels of the receiver are equal, and the system SNR is defined as SNR=SNR k for k=1,2. The exponential weighting factors were 0.99 and 1.0 for the front-end equalizer and the ADIR filter, respectively, when the channel was time-invariant. In the time-varying channel case, they were set to 0.975 and 1 in order to better track the signal. The probability of error evaluations were conducted using \(4\times 10^{5}\) samples, so that \(2\times 10^{5}\) samples per channel were used, and the simulations were carried out in the training mode of receiver operation. The delays (D k ) for the desired signals in this mode of operation were assumed equal (D=D k ) for k=1,2, and D was chosen so as to minimize the MSE; that is, \(D=(N^{f}-1+\ell-1)/2\) when only a front-end FFE is utilized, \(D=(N^{f}-1+N^{b}-1+\ell-1)/2\) when only a front-end DFE is implemented, \(D=(N^{f}-1+\ell-1)/2-N^{d}+1\) for CSFFE, and \(D=(N^{f}-1+N^{b}-1+\ell-1)/2-N^{d}+1\) for CSDFE, where \(\ell-1\) denotes the channel memory. Note that the memories of the feed forward and feedback sections of the DFEs, the ADIR filters, and the channel impulse responses for k=1,2 were assumed equal, i.e., \(N^{f}-1={N_{k}^{f}}-1\), \( N^{b}-1={N_{k}^{b}}-1\), \( N^{d}-1={N_{k}^{d}}-1\), and \(\ell-1=\ell_{k}-1\). The noise variance per channel during the MSE simulations was 0.001.
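The delay choices listed above can be collected into one helper (a sketch under the stated equal-memory assumptions; the function name is ours). With \(N^{f}-1=18\), \(N^{b}-1=4\), \(N^{d}-1=2\), and a channel memory of \(\ell-1=4\), as in the simulations:

```python
def optimal_delay(Nf, ell, Nd=None, Nb=0):
    """MSE-minimizing decision delay used in training mode, under the
    equal-memory assumptions of the text (ell - 1 is the channel
    memory).  Nb > 0 selects the DFE variants; Nd selects the
    channel-shortening variants."""
    D = (Nf - 1 + (Nb - 1 if Nb else 0) + ell - 1) // 2
    if Nd is not None:
        D -= Nd - 1
    return D

print(optimal_delay(19, 5),              # FFE
      optimal_delay(19, 5, Nb=5),        # DFE
      optimal_delay(19, 5, Nd=3),        # CSFFE
      optimal_delay(19, 5, Nd=3, Nb=5))  # CSDFE → 11 13 9 11
```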

5.1 Time-invariant channel

The objective of the simulations with the channel matrix h_1(n) is to display the performance of the proposed CSDFE with respect to the CSFFE, FFE, DFE, and VE. Exploiting the modularity and regularity properties of the SPMGLSs, we started the simulations with the FFE; after observing its performance, we altered the equalizer to the CSFFE, then added new SPMGLSs to obtain the DFE, and subsequently the CSDFE. The memory of the FFE (N^f−1) was 18, the memory of the feedback channels (N^b−1) for the DFE was 4, and the memory of the ADIR filter (N^d−1) was 2.

In Figure 13, we present the MSE performance of the proposed equalizer when orthogonal transmission is used, i.e., ρ=0, and compare it to those of the CSFFE, FFE, and DFE, together with the performance for the channel matrix h_2(n). In Figure 14, we provide the corresponding probability of error performance for orthogonal transmission and compare the performance of the CSDFE with those of the CSFFE, FFE, DFE, and VE, and with the performance for the channel matrix h_2(n). It can be seen in these figures that the CSDFE outperforms the CSFFE, FFE, and DFE, and that it has the closest probability of error performance to that of the VE. Note that neither the MSE nor the probability of error performance for the channel matrix h_2(n) with ρ=0 changes with the use of different equalizers (CSDFE, CSFFE, DFE, or FFE), since this channel matrix with ρ=0 does not include ISI and ICI components. Figures 15 and 16, on the other hand, display the MSE and probability of error comparisons when ρ=1, and also provide a comparison with respect to the performance of the DFE when the channel matrix h_2(n) is used, since the channel matrix h_2(n) with ρ=1 does not have ISI components.
Figure 13

MSE performance of the proposed equalizer for the channel matrix h_1(n) when ρ=0.

Figure 14

Probability of error performance of the proposed equalizer for the channel matrix h_1(n) when ρ=0.

Figure 15

MSE performance of the proposed equalizer for the channel matrix h_1(n) when ρ=1.

Figure 16

Probability of error performance of the proposed equalizer for the channel matrix h_1(n) when ρ=1.

The effect of ICI on the performance of the CSDFE and CSFFE can be seen by comparing the MSE values in Figures 13 and 15 and the probability of error values in Figures 14 and 16. It can be deduced from these comparisons that the performance improvement achieved by combining the CSDFE with orthogonal transmission is far greater than that obtained by using the CSFFE with orthogonal transmission. Furthermore, Figures 17 and 18 show the performance improvement that can be achieved by using an ADIR filter memory (N^d−1) of 3 instead of 2 for both the CSFFE and CSDFE in the ρ=0 and ρ=1 cases, respectively.
Figure 17

Probability of error comparison for different ADIR filter memories when h_1(n) is used and ρ=0.

Figure 18

Probability of error comparison for different ADIR filter memories when h_1(n) is used and ρ=1.

In order to investigate the effect of the channel memory for a given ADIR filter memory on the performance of the proposed method, we generated channels with longer impulse responses than that of h_a(n), while making sure that these channels had exactly the same spectral characteristics, i.e., eigenvalue spread, as h_a(n). We then produced the corresponding channel matrices using the same channel impulse responses for the direct and indirect paths as in h_1(n), repeated the aforementioned experiments, and found that the performance did not differ from the results displayed in Figures 13, 14, 15, 16, 17, and 18. Consequently, it can be said that the eigenvalue spread of the channel, not its memory as the channel shortener name implies, determines the performance of the proposed equalizer.
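The eigenvalue spread referred to here can be estimated from a channel impulse response as the ratio λ_max/λ_min of the autocorrelation matrix of the channel output. The sketch below is our own illustration for a real-valued single channel, assuming a white unit-variance channel input:

```python
import numpy as np

def eigenvalue_spread(h, N):
    """Ratio lambda_max/lambda_min of the N x N autocorrelation matrix of
    the output of channel h driven by white, unit-variance input."""
    h = np.asarray(h, dtype=float)
    # For a white input, the output autocorrelation equals the
    # deterministic autocorrelation of the channel taps.
    r = np.correlate(h, h, mode='full')
    mid = len(h) - 1                     # index of zero lag
    col = np.zeros(N)
    n = min(N, len(h))
    col[:n] = r[mid:mid + n]             # lags 0 .. N-1 (zero beyond memory)
    R = np.array([[col[abs(i - j)] for j in range(N)] for i in range(N)])
    w = np.linalg.eigvalsh(R)
    return w.max() / w.min()

# An ideal (memoryless) channel has spread 1; adding memory with
# spectral shaping, e.g. h = [1, 0.5], raises the spread to 7/3.
```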

Subsequently, we examined the effect of using a higher-order modulation scheme on the probability of error performance of the proposed method; Figures 19 and 20 present the performance degradation caused by switching the modulation from BPSK to QPSK in the orthogonal and nonorthogonal transmission cases, respectively.
Figure 19

Probability of error comparison for different modulation schemes when h_1(n) is used and ρ=0.

Figure 20

Probability of error comparison for different modulation schemes when h_1(n) is used and ρ=1.

5.2 Time-variant channel

Our objective in the simulations using the channel matrix h_3(n) is to present the performance of the proposed equalizer under two different time-varying channel conditions. Figure 21 presents the MSE performance results when f_D=50 Hz and provides comparisons between the CSFFE and CSDFE for ρ=1 and ρ=0. In Figure 22, we show the corresponding probability of error performances. Note that the probability of error values of the CSFFE for ρ=1 and ρ=0 saturate at approximately SNR=25 dB to 2×10−2 and 1.5×10−2, respectively. The probability of error values of the CSDFE, on the other hand, saturate at approximately SNR=27 dB to 10−3 and 6.3×10−4, which are closer to the probability of error values of the AVE for ρ=0, which converge to approximately 23.4×10−5 at SNR=23 dB.
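A time-variant Rayleigh fading tap with a prescribed maximum Doppler frequency f_D can be produced with a sum-of-sinusoids simulator in the spirit of the Zheng-Xiao model [41]. The following is a sketch under our own parameterization (M sinusoids, unit average tap power), not the paper's simulator:

```python
import numpy as np

def rayleigh_tap(fD, fs, n, M=16, seed=0):
    """Sum-of-sinusoids Rayleigh fading tap, cf. the model of [41].
    fD: maximum Doppler (Hz), fs: sampling rate (Hz),
    n: number of samples, M: number of sinusoids."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / fs
    theta = rng.uniform(-np.pi, np.pi)        # random rotation of arrival angles
    phi = rng.uniform(-np.pi, np.pi, M)       # random phases, in-phase branch
    psi = rng.uniform(-np.pi, np.pi, M)       # random phases, quadrature branch
    m = np.arange(1, M + 1)
    alpha = (2 * np.pi * m - np.pi + theta) / (4 * M)   # arrival angles
    w = 2 * np.pi * fD * np.cos(alpha)                   # per-path Doppler shifts
    c = np.cos(np.outer(t, w) + phi).sum(axis=1)
    s = np.sin(np.outer(t, w) + psi).sum(axis=1)
    return (c + 1j * s) / np.sqrt(M)          # normalized to unit average power

# A lower fD (e.g. 10 Hz instead of 50 Hz) yields a more slowly varying
# tap, which the adaptive equalizer can track more closely.
```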
Figure 21

MSE performance of the proposed equalizer for the channel matrix h_3(n) when f_D=50 Hz.

Figure 22

Probability of error performance of the proposed equalizer for the channel matrix h_3(n) when f_D=50 Hz.

The second experiment under time-variant channel conditions was carried out using a lower Doppler frequency so that the effect of the Doppler frequency on the equalizer performance could be demonstrated. When we compare the MSE performance results in Figure 23 with those of Figure 21, which correspond to f_D=50 Hz, we see that the MSE values of the CSFFE for ρ=1 and ρ=0 when f_D=50 Hz are 1.33 and 1.22 times higher, respectively, than those of the CSFFE when f_D=10 Hz. The same comparison for the CSDFE yields that the MSE values when f_D=50 Hz are 2 and 2.85 times higher for ρ=0 and ρ=1, respectively, than when f_D=10 Hz. Similar evaluations can be made for the probability of error performances, and it can be seen in Figure 24 that the probability of error curves of the CSFFE for the ρ=0 and ρ=1 cases saturate to lower values than those of the CSFFE in Figure 22. Moreover, whereas the probability of error curves of the CSDFE for ρ=0 and ρ=1 in Figure 22 reach the probability of error value of 10−2 at SNRs of 14 and 17 dB, respectively, they converge to the same probability of error value at approximately 3.5 dB lower SNRs, i.e., at 10.5 and 13.5 dB, in Figure 24.
Figure 23

MSE performance of the proposed equalizer for the channel matrix h_3(n) when f_D=10 Hz.

Figure 24

Probability of error performance of the proposed equalizer for the channel matrix h_3(n) when f_D=10 Hz.

We performed two more probability of error simulations for the time-variant channel case using f_D=50 Hz. The first was related to the performance improvement that can be gained by using an ADIR filter memory (N^d−1) of 3 instead of 2; Figures 25 and 26 depict the achieved performance improvement for both the CSFFE and CSDFE in the ρ=0 and ρ=1 cases, respectively. The second simulation concerned the effect of using QPSK modulation instead of BPSK; Figures 27 and 28 therefore present the corresponding probability of error comparisons in the ρ=0 and ρ=1 cases, respectively.
Figure 25

Probability of error comparison for different ADIR filter memories when f_D=50 Hz and ρ=0.

Figure 26

Probability of error comparison for different ADIR filter memories when f_D=50 Hz and ρ=1.

Figure 27

Probability of error comparison for different modulation schemes when f_D=50 Hz and ρ=0.

Figure 28

Probability of error comparison for different modulation schemes when f_D=50 Hz and ρ=1.

It can be seen in these figures that the performance improvement attainable through orthogonal transmission under time-variant channel conditions is not as significant as under time-invariant channel conditions.

6 Conclusions

A V-BLAST type channel shortening equalizer design for cognitive MIMO-OFDM radios has been presented. The V-BLAST property for frequency-selective channels is realized by completely orthogonalizing the input data using SPMGLSs, so that a systolic MIMO-DFE is accomplished at the front-end of the proposed channel shortening equalizer. Accordingly, ISI and ICI effects are suppressed owing to the completely orthogonalizing nature of the receiver structure. A systolic array implementation of multiple adaptive Viterbi detectors is also utilized in order to realize a channel shortening equalizer with a high degree of computational concurrency. Matrix inversions, which are significant bottlenecks in receiver design, are avoided, and only scalar operations are used. A highly modular, regular, order-recursive, and simple receiver architecture, which is suitable for DSP chip- and FPGA-based signal processing implementations of MIMO-OFDM wireless communication systems, is obtained.

Spectrum sensing and positioning functions, important tasks for cognitive radios, can be accomplished at no cost by simply reconfiguring the front-end MIMO-DFE filter as spectral analysis and positioning filters, respectively. These properties make the proposed equalizer a good candidate for software defined cognitive radio receiver realizations of MIMO-OFDM systems.

The computational complexity of the proposed equalizer has been provided by separately taking into account the MIMO-DFE and multiple adaptive Viterbi detector sections. Then, the total complexity was compared to the complexity of VE for different channel and ADIR filter memories and different alphabet and antenna sizes.

The performance has been presented in terms of MSE and probability of error analyses for orthogonal and nonorthogonal transmissions under time-invariant and time-variant channel conditions using two different modulation schemes, and it has been demonstrated that desirable performance results can be attained, particularly under time-invariant channel conditions, when orthogonal transmission and BPSK modulation are used together with the CSDFE implementation. It has also been shown that the performance of the CSDFE lies between those of the Viterbi equalizer and the DFE under time-invariant as well as time-variant channel conditions.

It has been revealed that the channel shortening equalizer is in fact a reduced-complexity Viterbi equalizer, with its ADIR filter memory functioning as a trade-off parameter between performance and complexity for a given channel matrix. Another important property of the proposed equalizer is that it does not need channel information.

Declarations

Acknowledgements

The author is grateful to the Editor Prof. Costas Berberitis and anonymous reviewers for their useful comments.

Authors’ Affiliations

(1)
Piri Reis University, Tuzla, Istanbul, 34940, Turkey

References

  1. GJ Foschini, MJ Gans, On limits of wireless communications in a fading environment when using multiple antennas. Wireless Pers. Commun. 6(3), 311–335 (1998). doi:10.1023/A:1008889222784
  2. GG Raleigh, JM Cioffi, Spatio-temporal coding for wireless communication. IEEE Trans. Commun. 46(3), 357–366 (1998). doi:10.1109/26.662641
  3. GL Stüber, JR Barry, SW McLaughlin, Y Li, MA Ingram, TG Pratt, Broadband MIMO-OFDM wireless communications. Proc. IEEE. 92(2), 271–294 (2004). doi:10.1109/JPROC.2003.821912
  4. H Bölcskei, Advances in smart antennas - MIMO-OFDM wireless systems: basics, perspectives, and challenges. IEEE Wireless Commun. 13(4), 31–37 (2006). doi:10.1109/MWC.2006.1678163
  5. PW Wolniansky, GJ Foschini, GD Golden, RA Valenzuela, in Proceedings of URSI Int. Symp. Signals, Systems and Electronics. V-BLAST: an architecture for realizing very high data rates over the rich-scattering wireless channel (IEEE, Pisa, 1998), pp. 295–300. doi:10.1109/ISSSE.1998.738086
  6. Q Liu, L Yang, in Proceedings of the 2004 IEEE Joint Conf. 10th Asia-Pacific Conf. Commun., and 5th Int. Symp. Multi-Dimensional Mobile Commun., 1. A simplified method for V-BLAST detection in MIMO OFDM communication (IEEE, 2004), pp. 30–33. doi:10.1109/APCC.2004.1391645
  7. W Bei, Z Qi, in Proceedings of the IEEE 4th Int. Conf. Wireless Commun. Netw. and Mobile Comput. (WiCOM’08). A low complex V-BLAST detection algorithm for MIMO-OFDM system (IEEE, Dalian, 2008), pp. 1–4. doi:10.1109/WiCom.2008.386
  8. L Ma, K Dickson, J McAllister, J McCanny, QR decomposition-based matrix inversion for high performance embedded MIMO receivers. IEEE Trans. Signal Process. 59(4), 1858–1867 (2011). doi:10.1109/TSP.2011.2105485
  9. RK Martin, CR Johnson, Adaptive equalization: transitioning from single-carrier to multicarrier systems. IEEE Signal Process. Mag. 22(6), 108–122 (2005). doi:10.1109/MSP.2005.1550193
  10. S Haykin, Cognitive radio: brain-empowered wireless communications. IEEE J. Sel. Areas Commun. 23(2), 201–220 (2005). doi:10.1109/JSAC.2004.839380
  11. M Dillinger, K Madani, N Alonistioti, Software Defined Radio: Architectures, Systems and Functions (John Wiley and Sons, England, 2003)
  12. O Anjum, T Ahonen, F Garzia, J Nurmi, C Brunelli, H Berg, State of the art baseband DSP platforms for software defined radio: a survey. EURASIP J. Wirel. Commun. Netw. 2011, 5 (2011). doi:10.1186/1687-1499-2011-5
  13. K He, L Crockett, R Stewart, Dynamic reconfiguration technologies based on FPGA in software defined radio system. J. Signal Process. Syst. 69(1), 75–85 (2012). doi:10.1007/s11265-011-0646-2
  14. J Im, M Cho, Y Jung, Y Jung, J Kim, A low-power and low-complexity baseband processor for MIMO-OFDM WLAN systems. J. Signal Process. Syst. 68(1), 19–30 (2012). doi:10.1007/s11265-010-0570-x
  15. AP Vinod, E M-K Lai, A Omondi, Special issue on signal processing for software defined radio handsets. J. Signal Process. Syst. 62(2), 113–115 (2011). doi:10.1007/s11265-009-0428-2
  16. B Wang, KJ Ray Liu, Advances in cognitive radio networks: a survey. IEEE J. Sel. Topics Signal Process. 5(1), 5–23 (2011). doi:10.1109/JSTSP.2010.2093210
  17. E Axell, G Leus, EG Larsson, HV Poor, Spectrum sensing for cognitive radio: state-of-the-art and recent advances. IEEE Signal Process. Mag. 29(3), 101–116 (2012). doi:10.1109/MSP.2012.2183771
  18. H Celebi, H Arslan, Enabling location and environment awareness in cognitive radios. Computer Commun. 31, 1114–1125 (2008). doi:10.1016/j.comcom.2008.01.006
  19. MT Ozden, Adaptive multichannel sequential lattice prediction filtering method for ARMA spectrum estimation in subbands. EURASIP J. Adv. Signal Process. 2013, 9 (2013). doi:10.1186/1687-6180-2013-9
  20. MT Ozden, in Proceedings of the 2014 IEEE/ION Position, Location, and Navigation Symposium (PLANS 2014). Adaptive multichannel sequential lattice prediction filtering method for range estimation in cognitive radios, (2014), pp. 426–433. doi:10.1109/PLANS.2014.6851400
  21. N Al-Dhahir, JM Cioffi, Efficiently computed reduced-parameter input-aided MMSE equalizers for ML detection: a unified approach. IEEE Trans. Inf. Theory. 42(3), 903–915 (1996). doi:10.1109/18.490553
  22. N Al-Dhahir, FIR channel-shortening equalizers for MIMO ISI channels. IEEE Trans. Commun. 49(2), 213–218 (2001). doi:10.1109/26.905867
  23. F Ling, JG Proakis, A generalized multichannel least squares lattice algorithm based on sequential processing stages. IEEE Trans. Acoust. Speech Signal Process. 32(2), 381–389 (1984). doi:10.1109/TASSP.1984.1164325
  24. J Ma, GY Li, BH Juang, Signal processing in cognitive radio. Proc. IEEE. 97(5), 805–823 (2009). doi:10.1109/JPROC.2009.2015707
  25. GJ Foschini, Layered space-time architecture for wireless communications in a fading environment when using multielement antennas. Bell Labs Tech. J. 1(2), 41–59 (1996). doi:10.1002/bltj.2015
  26. A Lozano, C Papadias, Layered space-time receivers for frequency-selective wireless channels. IEEE Trans. Commun. 50(1), 65–73 (2002). doi:10.1109/26.975751
  27. Y Jiang, J Li, WW Hager, Uniform channel decomposition for MIMO communications. IEEE Trans. Signal Process. 53(11), 4283–4294 (2005). doi:10.1109/TSP.2005.857052
  28. R Merched, NR Yousef, Fast techniques for computing finite-length MIMO MMSE decision feedback equalizers. IEEE Trans. Signal Process. 54(2), 701–711 (2006). doi:10.1109/TSP.2005.861900
  29. N Al-Dhahir, AH Sayed, The finite length multi-input multi-output MMSE-DFE. IEEE Trans. Signal Process. 48(10), 2921–2936 (2000). doi:10.1109/78.869048
  30. G Ginis, JM Cioffi, On the relation between V-BLAST and the GDFE. IEEE Commun. Lett. 5(9), 364–366 (2001). doi:10.1109/4234.951378
  31. J Wang, Y Jiang, GE Sobelman, Iterative computation of FIR MIMO MMSE-DFE with flexible complexity-performance tradeoff. IEEE Trans. Signal Process. 61(9), 2394–2404 (2013). doi:10.1109/TSP.2013.2245325
  32. PA Regalia, MG Bellanger, On the duality between fast QR methods and lattice methods in least squares adaptive filtering. IEEE Trans. Signal Process. 39(4), 879–891 (1991). doi:10.1109/78.80910
  33. F Ling, Givens rotation based least squares lattice and related algorithms. IEEE Trans. Signal Process. 39(7), 1541–1551 (1991). doi:10.1109/78.134393
  34. F Ling, D Manolakis, JG Proakis, A recursive modified Gram-Schmidt algorithm for least-squares estimation. IEEE Trans. Acoust. Speech Signal Process. 34(4), 829–835 (1986). doi:10.1109/TASSP.1986.1164877
  35. MT Ozden, AH Kayran, Adaptive multichannel decision feedback equalization for Volterra type nonlinear communication channels. AEU - Int. J. Electron. Commun. 62(6), 430–437 (2008). doi:10.1016/j.aeue.2007.06.005
  36. JD Provence, SC Gupta, Systolic arrays for Viterbi processing in communication systems with a time-dispersive channel. IEEE Trans. Commun. 36(10), 1148–1156 (1988). doi:10.1109/26.7532
  37. E Larsson, O Edfors, F Tufvesson, TL Marzetta, Massive MIMO for next generation wireless systems. IEEE Commun. Mag. 52(2), 186–195 (2014). doi:10.1109/MCOM.2014.6736761
  38. HC Myburgh, JC Olivier, Low complexity MLSE equalization in highly dispersive Rayleigh fading channels. EURASIP J. Adv. Signal Process. 2010(1), 874874 (2010). doi:10.1155/2010/874874
  39. JG Proakis, Digital Communications (McGraw-Hill, New York, 2000)
  40. S Haykin, Adaptive Filter Theory, 4th edn (Prentice-Hall, New Jersey, 2001)
  41. Y Rosa Zheng, C Xiao, Simulation models with correct statistical properties for Rayleigh fading channels. IEEE Trans. Commun. 51(6), 920–928 (2003). doi:10.1109/TCOMM.2003.813259
  42. W Zhang, X-G Xia, KB Letaief, Space-time/frequency coding for MIMO-OFDM in next generation broadband wireless systems. IEEE Wireless Commun. 14(3), 32–43 (2007). doi:10.1109/MWC.2007.386610
  43. AF Molisch, Wireless Communications, 2nd edn (John Wiley and Sons, UK, 2011)

Copyright

© Ozden; licensee Springer. 2015

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
