A low complexity Hopfield neural network turbo equalizer
EURASIP Journal on Advances in Signal Processing volume 2013, Article number: 15 (2013)
Abstract
In this article, it is proposed that a Hopfield neural network (HNN) can be used to jointly equalize and decode information transmitted over a highly dispersive Rayleigh fading multipath channel. It is shown that a HNN MLSE equalizer and a HNN MLSE decoder can be merged in order to realize a low complexity joint equalizer and decoder, or turbo equalizer, without additional computational complexity due to the decoder. The computational complexity of the Hopfield neural network turbo equalizer (HNNTE) is almost quadratic in the coded data block length and approximately independent of the channel memory length, which makes it an attractive choice for systems with extremely long memory. Results show that the performance of the proposed HNNTE closely matches that of a conventional turbo equalizer in systems with short channel memory, and achieves near-matched-filter performance in systems with extremely large memory.
Introduction
Turbo equalization has its roots in turbo coding, first proposed in [1] for the iterative decoding of concatenated convolutional codes. In [2, 3], the idea of turbo decoding was applied, with great success, to systems transmitting convolutional coded information through multipath channels in order to improve the bit-error rate (BER) performance. Due to the computational complexity of their constituent maximum a posteriori (MAP) equalizer and MAP decoder, the computational complexity of these turbo equalizers is exponentially related to the channel impulse response (CIR) length as well as the encoder constraint length, limiting their effective use in systems where the channel memory and/or the encoder constraint length is large; the MAP equalizer is the main culprit due to long channel delay spreads.
To mitigate the high computational complexity of the MAP equalizer, several authors have proposed suboptimal equalizers, with complexity linear in the channel memory length, to replace the optimal MAP equalizer in the turbo equalizer structure. In [4, 5], it was shown how a minimum mean squared error (MMSE) equalizer can be used in a turbo equalizer by modifying it to make use of prior information provided in the form of extrinsic information. Various authors have also proposed the use of decision feedback equalizers (DFE) while using extrinsic information as prior information to improve the BER performance after each iteration [6–10]. Also, in [11, 12] it was proposed that a soft interference canceler (SIC) be modified to make use of soft information in order to serve as a low complexity equalizer in a turbo equalizer, and in [13] the way in which a SIC incorporates soft information was modified to improve performance. These equalizers inherently suffer from noise enhancement (MMSE) and error propagation (DFE and SIC), which limits their performance, and hence the overall performance of the turbo equalizers in which they are used. Because none of them produce exact MAP estimates of the transmitted coded information, the performance of a turbo equalizer built around them will ultimately be worse than one using an optimal MAP equalizer, due to the performance loss incurred at the output of these suboptimal equalizers. This tradeoff always exists: what one gains in complexity, one loses in performance.
In this article, we propose to combat the performance loss due to suboptimal (or non-MAP) equalizer output by combining the equalizer and the decoder into one equalizer/decoder structure, so that all information is processed as a whole rather than passed between the equalizer and the decoder. This vision has successfully been implemented and demonstrated by the authors in [14] using a dynamic Bayesian network (DBN) as its basis. In this paper, however, we show that using the Hopfield neural network (HNN) [15] as the underlying structure also works well, and has a number of advantages as discussed in [16].
In [16], the authors proposed a maximum likelihood sequence estimation (MLSE) equalizer which is able to equalize M-ary quadrature amplitude modulation (M-QAM) signals in systems with extremely long memory. The complexity of the equalizer proposed in [16] is quadratic in the data block length and approximately independent of the channel memory length. Its superior computational complexity is due to the high parallelism of its underlying neural network structure: the HNN enables fast parallel processing of information between neurons, producing ML sequence estimates at the output. It was shown in [16] that the performance of the HNN MLSE equalizer closely matches that of the Viterbi MLSE equalizer in short channels, and near-optimally recombines the energy spread across the channel in order to achieve near-matched-filter performance when the channel is extremely long.
The HNN has also been shown by several authors to be able to decode balanced check codes [17, 18]. These codes, together with methods for encoding and decoding, were first proposed in [19], but it was later shown in [17, 18] that single codeword decoding can also be performed using the HNN. To date, balanced codes are the only class of codes that can be decoded with the HNN. The ability of the HNN to detect binary patterns allows it to determine the ML codeword from a predefined set of codewords. In this paper it is shown that the HNN ML decoder can be extended to allow for the ML estimation of a sequence of balanced check codes; it is therefore extendable to an MLSE decoder.
In this article, a novel turbo equalizer is developed by combining the HNN MLSE equalizer developed in [16] and a HNN MLSE decoder (used to decode balanced codes, and only balanced codes), resulting in the Hopfield neural network turbo equalizer (HNNTE), which can be used as a replacement for a conventional turbo equalizer (CTE), made up of an equalizer/decoder pair, in systems with extremely long memory, where the coded symbols are interleaved before transmission through the multipath channel. The HNNTE is able to equalize and decode (balanced codes) in systems with extremely long memory, since its computational complexity is nearly independent of the channel memory length. Like the HNN MLSE equalizer, its superior complexity characteristics are due to the high parallelism of its underlying neural network structure.
This article is structured as follows. Section 2 presents a brief discussion on Turbo Equalization. Section 3 discusses the HNN in general, while the HNN MLSE equalizer and the HNN MLSE decoder are discussed in Section 4, followed by a discussion on the fusion of the two in order to realize the HNNTE. In Section 5, the results of a computational complexity analysis of the HNNTE and a CTE are presented, followed by a memory requirements analysis in Section 6. Simulation results are presented in Section 7 and conclusions are drawn in Section 8.
Turbo equalization
Turbo equalizers are used in multipath communication systems that make use of encoders, usually convolutional encoders, to encode the source symbol sequence s of length N _{ u } (using some generator matrix G) at a rate R _{ c } to produce coded information symbols c of length N _{ c } = N _{ u } / R _{ c }, after which the coded symbols c are interleaved with a random interleaver before modulation and transmission. The interleaved coded symbols ć are transmitted through a multipath channel with a CIR length of L, causing intersymbol interference among adjacent transmitted symbols at the receiver. At the receiver the received intersymbol interference (ISI) corrupted coded symbols are matched filtered and used as input to the turbo equalizer. The received symbol sequence is given by
\mathbf{r}=\mathbf{H}\acute{\mathbf{c}}+\mathbf{n}
where n is a vector containing complex Gaussian noise samples and ć contains the interleaved coded symbols given by
\acute{\mathbf{c}}=\mathbf{J}\mathbf{c}
where J is an N _{ c } × N _{ c } interleaver matrix, and H is the N _{ c } × N _{ c } channel matrix
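This transmission model can be sketched numerically. The banded form of H (CIR taps on its lower diagonals) is inferred from the later description of the channel matrix; the CIR values, block length, and noise level below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

N_c, L = 8, 3
h = np.array([1.0, 0.5, 0.2])            # hypothetical CIR of length L

# Banded channel matrix: h_k sits on the k-th lower diagonal
H = sum(np.diag(np.full(N_c - k, h[k]), -k) for k in range(L))

# Random interleaver expressed as an N_c x N_c permutation matrix J
J = np.eye(N_c)[rng.permutation(N_c)]

c = rng.choice([-1.0, 1.0], size=N_c)    # coded BPSK symbols
n = 0.1 * rng.standard_normal(N_c)       # Gaussian noise samples

r = H @ (J @ c) + n                      # received sequence r = H c' + n
```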
The turbo equalizer uses two maximum a posteriori (MAP) algorithms, one to equalize the ISI-corrupted received symbols and one to decode the equalized coded symbols, which iteratively exchange information. With each iteration of the system, extrinsic information is exchanged between the two MAP algorithms in order to improve the ability of each algorithm to produce correct estimates. This principle was first applied to turbo coding, where both MAP algorithms were MAP decoders [3], but has since been applied to iterative equalization and decoding (today known as turbo equalization) to improve the BER performance of the coded multipath communication system [2–5].
Figure 1 shows the structure of the turbo equalizer. The MAP equalizer takes as input the ISI-corrupted received symbols r and the extrinsic information {L}_{e}^{D}\left(\widehat{\mathbf{s}}\right) (where \widehat{\mathbf{s}} denotes the interleaved coded symbol estimates) and produces a sequence of posterior transmitted symbol log-likelihood ratio (LLR) estimates {L}^{E}\left(\widehat{\mathbf{s}}\right) (note that {L}_{e}^{D}\left(\widehat{\mathbf{s}}\right) is zero during the first iteration). Extrinsic information {L}_{e}^{E}\left(\widehat{\mathbf{s}}\right) is determined by
{L}_{e}^{E}\left(\widehat{\mathbf{s}}\right)={L}^{E}\left(\widehat{\mathbf{s}}\right)-{L}_{e}^{D}\left(\widehat{\mathbf{s}}\right)
which is deinterleaved to produce {L}_{e}^{E}\left({\widehat{\mathbf{s}}}^{\prime}\right), which is used as input to the MAP decoder to produce a sequence of posterior coded symbol LLR estimates {L}^{D}\left({\widehat{\mathbf{s}}}^{\prime}\right). {L}^{D}\left({\widehat{\mathbf{s}}}^{\prime}\right) is used together with {L}_{e}^{E}\left({\widehat{\mathbf{s}}}^{\prime}\right) to determine the extrinsic information
{L}_{e}^{D}\left({\widehat{\mathbf{s}}}^{\prime}\right)={L}^{D}\left({\widehat{\mathbf{s}}}^{\prime}\right)-{L}_{e}^{E}\left({\widehat{\mathbf{s}}}^{\prime}\right)
{L}_{e}^{D}\left({\widehat{\mathbf{s}}}^{\prime}\right) is interleaved to produce {L}_{e}^{D}\left(\widehat{\mathbf{s}}\right), which is used together with the received symbols r in the MAP equalizer, with {L}_{e}^{D}\left(\widehat{\mathbf{s}}\right) serving to provide prior information on the received symbols. The equalizer again produces posterior information {L}^{E}\left(\widehat{\mathbf{s}}\right) of the interleaved coded symbols. This process continues until the outputs of the decoder settle, or until a predefined stop criterion is met [3]. After termination, the output L\left(\widehat{\mathbf{u}}\right) of the decoder gives an estimate of the source symbols.
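The extrinsic-information bookkeeping in this loop amounts to subtracting the prior LLRs from the posterior LLRs and (de)interleaving the result. A minimal sketch, with made-up LLR values and a hypothetical 4-symbol interleaver permutation:

```python
import numpy as np

perm = np.array([2, 0, 3, 1])                 # hypothetical interleaver permutation

L_post  = np.array([2.1, -0.4, 3.3, -1.8])    # posterior LLRs from the equalizer
L_prior = np.array([0.5, -0.1, 0.9, -0.6])    # prior (extrinsic) LLRs fed into it

L_ext = L_post - L_prior                      # extrinsic information

# Deinterleave before handing the extrinsic LLRs to the decoder:
# interleaving maps x -> x[perm], so deinterleaving inverts that permutation
L_ext_deint = np.empty_like(L_ext)
L_ext_deint[perm] = L_ext
```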
The proposed HNNTE is modeled on one HNN structure, implying that there is no exchange of extrinsic information between its constituent parts. Rather, all information is intrinsically processed in an iterative fashion.
The Hopfield neural network
The HNN was first proposed in [15], where it was shown that the HNN can be used to solve combinatorial optimization problems as well as pattern recognition problems. In [15] Tank and Hopfield derived an energy function and showed how the HNN can be used to minimize this energy function, thus producing near-ML sequence estimates at the output of the neurons. To enable the HNN to solve an optimization problem, the cost function of that problem is mapped to the HNN energy function, whereafter the HNN iteratively minimizes its energy function and performs near-MLSE. Similarly, to enable the HNN to solve a binary pattern recognition problem, the autocorrelation matrix of the set of patterns is used as the weights between the HNN neurons, while the noisy pattern to be recognized is used as the input to the HNN. The HNN then iteratively performs pattern recognition in order to produce the near-ML pattern at its output.
Energy function
The Hopfield energy function can be written as [16]
E=-\frac{1}{2}{\mathbf{s}}^{T}\mathbf{X}\mathbf{s}-{\mathbf{I}}^{T}\mathbf{s}
where I is a column vector with N elements and X is an N × N matrix. Assuming that s, I, and X contain complex values, these variables can be written as [16]
\mathbf{s}={\mathbf{s}}_{i}+j{\mathbf{s}}_{q},\phantom{\rule{1em}{0ex}}\mathbf{I}={\mathbf{I}}_{i}+j{\mathbf{I}}_{q},\phantom{\rule{1em}{0ex}}\mathbf{X}={\mathbf{X}}_{i}+j{\mathbf{X}}_{q}
where s and I are column vectors of length N, and X is an N × N matrix, where subscripts i and q are used to denote the respective in-phase and quadrature components. X is the cross-correlation matrix of the complex received symbols such that
implying that it is Hermitian. Therefore {\mathbf{X}}_{i}^{T}={\mathbf{X}}_{i} is symmetric and {\mathbf{X}}_{q}^{T}=-{\mathbf{X}}_{q} is skew-symmetric [16]. By using these symmetry properties of X _{ i } and X _{ q }, (6) can be expanded and rewritten as
which in turn can be rewritten as [16]
It is clear that (9) is in the form of (6), where the variables in (6) are substituted as follows:
\mathbf{s}\to \left[\begin{array}{c}{\mathbf{s}}_{i}\\ {\mathbf{s}}_{q}\end{array}\right],\phantom{\rule{1em}{0ex}}\mathbf{I}\to \left[\begin{array}{c}{\mathbf{I}}_{i}\\ {\mathbf{I}}_{q}\end{array}\right],\phantom{\rule{1em}{0ex}}\mathbf{X}\to \left[\begin{array}{cc}{\mathbf{X}}_{i}& {\mathbf{X}}_{q}^{T}\\ {\mathbf{X}}_{q}& {\mathbf{X}}_{i}\end{array}\right]
Equation (9) is used to derive the HNN MLSE equalizer, decoder, and eventually the HNNTE.
Iterative system
The HNN minimizes the energy function (6) with the following iterative system:
where u = {u _{1}, u _{2}, …, u _{ N }}^{T} is the internal state of the HNN, s = {s _{1}, s _{2}, …, s _{ N }}^{T} is the vector of estimated symbols, g(.) is the decision function associated with each neuron and i indicates the iteration number. β(.) is a function used for optimization as in [14].
The estimated symbol vector \left[{\mathbf{s}}_{i}^{T}{\mathbf{s}}_{q}^{T}\right] is updated with each iteration. \left[{\mathbf{I}}_{i}^{T}{\mathbf{I}}_{q}^{T}\right] contains the best blind estimate for s, and is therefore used as input to the network, while \left[\begin{array}{cc}{\mathbf{X}}_{i}& {\mathbf{X}}_{q}^{T}\\ {\mathbf{X}}_{q}& {\mathbf{X}}_{i}\end{array}\right]contains the crosscorrelation information of the received symbols. The system produces the MLSE estimates in s after Z iterations.
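The iterative minimization can be sketched as follows. The choice of tanh as the decision function g(.) and a linearly ramped gain standing in for β(.) are our assumptions; the article's exact update schedule is not reproduced here.

```python
import numpy as np

def hnn_iterate(X, I, Z=50):
    """Iteratively minimize E = -1/2 s^T X s - I^T s (sketch of the HNN update)."""
    s = np.zeros(len(I))
    for z in range(Z):
        beta = (z + 1) / Z            # ramped gain, a stand-in for beta(.)
        u = I + X @ s                 # internal neuron state update
        s = np.tanh(beta * u)         # decision function g(.)
    return np.sign(s)                 # hard symbol estimates after Z iterations

# With no cross-correlation (X = 0) the estimate is just the sign of the input
s_hat = hnn_iterate(np.zeros((2, 2)), np.array([3.0, -2.0]))
```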
The Hopfield neural network turbo equalizer
In this section, the derivation of the HNNTE is discussed, by first deriving its constituent parts—the HNN MLSE equalizer and the HNN MLSE decoder—and then showing how the HNNTE is finally realized by combining the two.
HNN MLSE equalizer
The HNN MLSE equalizer was developed by the authors in [16], where it was applied to a single-carrier M-QAM system with extremely long memory, with CIR lengths as long as L = 250, although this is not a limit. The ability of the HNN MLSE equalizer to equalize signals in systems with highly dispersive channels is due to the fact that its complexity grows quadratically with an increase in transmitted data block size, while being approximately independent of the channel memory length. In the following, the HNN MLSE equalizer developed in [16] is presented without repeating its derivation.
It was shown in [16] that the correlation matrices X _{ i } and X _{ q } in (10), for a single carrier system transmitting a data block of length N through a multipath channel of length L, with the data block initiated and terminated by L − 1 known tail symbols of value 1 for BPSK modulation and \frac{1}{\sqrt{2}}+j\frac{1}{\sqrt{2}} for M-QAM modulation, can be determined by
and
where α = {α _{1}, α _{2}, …, α _{ L − 1}} and γ = {γ _{1}, γ _{2}, …, γ _{ L − 1}} are, respectively, determined by
and
where k = 1, 2, 3, …, L − 1 and i and q denote the in-phase and quadrature components of the CIR coefficients.
Upon inspection it is easy to see from (12) through (15) that X _{ i } and X _{ q } can be determined using the respective in-phase and quadrature components of the N × N channel matrix, with the in-phase and quadrature components of the CIR, {\mathbf{h}}^{\left(i\right)}={\{{h}_{0}^{\left(i\right)},{h}_{1}^{\left(i\right)},\dots ,{h}_{L-1}^{\left(i\right)}\}}^{T} and {\mathbf{h}}^{\left(q\right)}={\{{h}_{0}^{\left(q\right)},{h}_{1}^{\left(q\right)},\dots ,{h}_{L-1}^{\left(q\right)}\}}^{T}, on the diagonals such that
and
Using H ^{(i)} and H ^{(q)} the correlation matrices in (12) and (13) can be determined by
which is simply
Also
which is
X _{ i } and X _{ q } are then used to construct the combined correlation matrix in (10).
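Numerically, this construction amounts to splitting the Hermitian product H^H H into its real (symmetric) and imaginary (skew-symmetric) parts, which is one way to read (18) through (21); the banded H and CIR values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 6, 2
h = rng.standard_normal(L) + 1j * rng.standard_normal(L)   # hypothetical CIR

# Channel matrix with the CIR coefficients on its lower diagonals
H = sum(np.diag(np.full(N - k, h[k]), -k) for k in range(L))
Hi, Hq = H.real, H.imag

# In-phase and quadrature parts of the cross-correlation X = H^H H
Xi = Hi.T @ Hi + Hq.T @ Hq           # symmetric part
Xq = Hi.T @ Hq - Hq.T @ Hi           # skew-symmetric part
```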
It was also shown in [16] that the input vectors I _{ i } and I _{ q } in (10) are determined by
and
where \rho =1/\sqrt{2} for M-QAM modulation, ρ = 1 in I _{ i } and ρ = 0 in I _{ q } for BPSK modulation, and Λ = {λ _{1}, λ _{2}, …, λ _{ N }} is determined by
and Ω = {ω _{1}, ω _{2}, …, ω _{ N }} is determined by
where k = 1, 2, 3, …, N with i and q again denoting the inphase and quadrature components of the respective elements. The combined input vector in (10) is therefore constructed as
Note that Λ and Ω can easily be determined by
and
where r ^{(i)} and r ^{(q)} are the respective in-phase and quadrature components of the received symbols r = {r _{1}, r _{2}, …, r _{ N + L − 1}}^{T}.
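Λ and Ω can be read as the real and imaginary parts of the matched-filter output H^H r; the sketch below uses a tall (N + L − 1) × N convolution matrix, whose exact dimensions are our assumption, and a noiseless received sequence.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 6, 3
h = rng.standard_normal(L) + 1j * rng.standard_normal(L)   # hypothetical CIR

# Tall convolution matrix: column t carries the CIR starting at row t
H = np.zeros((N + L - 1, N), dtype=complex)
for k in range(L):
    H[np.arange(N) + k, np.arange(N)] = h[k]

s = rng.choice([-1.0, 1.0], size=N) + 0j
r = H @ s                               # noiseless received symbols

z = H.conj().T @ r                      # matched-filter output H^H r
Lam, Om = z.real, z.imag                # Lambda and Omega
I_vec = np.concatenate([Lam, Om])       # combined input vector [I_i^T I_q^T]^T
```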
By deriving the cross-correlation matrix X and the input vector I in (10), the model in (9) is complete, and the iterative system in (11) can be used to equalize M-QAM symbols transmitted through a channel with large CIR lengths. The HNN MLSE equalizer was evaluated in [16] for BPSK and 16-QAM, with performance reaching the matched-filter bound in extremely long channels.
HNN MLSE decoder
The HNN has been shown to be able to decode balanced codes [17, 18]. A binary word of length m is said to be balanced if it contains exactly m / 2 ones and m / 2 zeros [19]. In addition, balanced codes have the property that no codeword is contained in another word, which simply means that positions of ones in one codeword will never be a subset of the positions of ones in another codeword [19].
The encoding process described in [19] flips the first k bits of the uncoded word in order to ensure the resulting codeword is “balanced,” whereafter the position k is appended to the balanced codeword before transmission. This encoding process is not followed here. Instead, the set of m = 2^{n} balanced codewords is determined beforehand, after which encoding is performed by mapping a set of n bits to a balanced codeword of 2^{n} binary phase-shift keying (BPSK) symbols, or by mapping a set of 2n bits to a balanced codeword of 2^{n} quaternary quadrature amplitude modulation (4QAM) symbols.
The HNN decoder developed here uses the set of predetermined codewords to determine the connection weights describing the level of connection between the neurons. It has previously been shown how a HNN can be used to decode one balanced code at a time, but the HNN MLSE decoder we derive here is able to simultaneously decode any number of concatenated codewords in order to provide the ML transmitted sequence of codewords. After HNN MLSE decoding, the ML BPSK or 4QAM codewords of length 2^{n} are demapped to n bits (or 2n bits for 4QAM), which completes the decoding process.
Codeword selection
The authors have found that Walsh-Hadamard codes, widely used in code division multiple access (CDMA) systems [20], are desirable codes for this application, due to their orthogonality and their nearly balanced structure. Walsh-Hadamard codes are linear codes that map n bits to 2^{n} codewords, where each set of codewords has a Hamming distance of 2^{n-1} and a Hamming weight of 2^{n-1}.
Walsh-Hadamard codes are not “balanced” as described above: the first codeword is always all-ones, while subsets of some codewords are contained in others, violating both requirements for balance. Instead of using the complete set of Walsh-Hadamard codes to map n bits to 2^{n} codewords, a subset of codes in the Walsh-Hadamard matrix is selected, duplicated and modified so as to construct a new set of 2^{n} codewords of length 2^{n}. Consider the set of length 2^{n} = 8 Walsh-Hadamard codes
To construct a set of balanced codewords from H _{8}, a subset of 2^{n-1} codewords is selected, which is used as the first 2^{n-1} codewords in the new set of codewords. The second set of 2^{n-1} codewords is constructed as follows:

1. Reverse the order in which the first 2^{n-1} codewords appear in the new set.
2. Flip the bits of the reversed set of 2^{n-1} codewords.
Assuming the subset selected from H _{8} above is the set H _{8,4:7} (implying that codewords in rows 4 through 7 are selected), the resulting set of 2^{n} balanced codewords is
It is clear that C _{8} is balanced in the sense that the rows (codewords) as well as the columns are balanced. It has been found that the HNN decoder performs better if the rows as well as the columns are balanced. The Hamming weight of the codewords in C _{8} is still 2^{n-1} = 2^{2}, while the minimum Hamming distance increases to slightly more than 2^{n-1} = 2^{2}.
By following the steps described above, any set of Walsh-Hadamard codes of length 2^{n} can be used to create a new set of 2^{n} balanced codes of length m = 2^{n}.
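The two-step construction above can be sketched as follows; selecting the last 2^{n-1} Hadamard rows mirrors the H _{8,4:7} example, though the exact row indexing convention is our assumption.

```python
import numpy as np

def balanced_codes(n):
    """Build 2^n balanced codewords of length 2^n from Walsh-Hadamard rows."""
    # Sylvester construction, then map +1/-1 to 1/0 (first row all-ones)
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    W = (H + 1) // 2

    first = W[2 ** (n - 1):]             # selected subset of 2^{n-1} codewords
    second = 1 - first[::-1]             # step 1: reverse order; step 2: flip bits
    return np.vstack([first, second])

C8 = balanced_codes(3)                   # 8 codewords of length 8
```

Both the rows and the columns of the resulting set are balanced, matching the property noted in the text.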
Encoding
Encoding is performed by mapping a group of n bits to 2^{n} BPSK symbols, or a group of 2n bits to 2^{n} 4QAM symbols. Before encoding, the set of codewords {\mathbf{C}}_{{2}^{n}} derived from the set of Walsh-Hadamard codes {\mathbf{H}}_{{2}^{n}} is made bipolar by converting the 0’s to −1.
BPSK encoding
When BPSK modulation is used, n bits are mapped to 2^{n} BPSK symbols. The n bits are used to determine an index k in the range 1– 2^{n}, which is then used to select a codeword from the set of codewords in {\mathbf{C}}_{{2}^{n}} such that the selected codeword \mathbf{c}={\mathbf{C}}_{{2}^{n}}\left(k\right). Table 1 shows the number of uncoded bits, codeword length, uncoded bit to coded symbol rate R _{ s } and the uncoded bit to coded bit rate R _{ c } (code rate) for different n.
4QAM encoding
When 4QAM modulation is used, 2n bits are mapped to 2^{n} 4QAM symbols. The first and second groups of n bits (out of 2n bits) are used to determine two indices, k ^{(i)} and k ^{(q)}, in the range 1– 2^{n}, one for the in-phase part and the other for the quadrature part of the codeword. The first index k ^{(i)} selects a codeword from {\mathbf{C}}_{{2}^{n}}^{\left(i\right)}, where {\mathbf{C}}_{{2}^{n}}^{\left(i\right)} is derived as before, and the second index k ^{(q)} selects a codeword from {\mathbf{C}}_{{2}^{n}}^{\left(q\right)}, which can be equal to {\mathbf{C}}_{{2}^{n}}^{\left(i\right)} or can be uniquely determined as explained earlier. The 4QAM “codeword” is then calculated as \mathbf{c}={\mathbf{C}}_{{2}^{n}}^{\left(i\right)}\left({k}^{\left(i\right)}\right)+j{\mathbf{C}}_{{2}^{n}}^{\left(q\right)}\left({k}^{\left(q\right)}\right), much like coded modulation, where groups of bits are mapped to signal constellation points to improve spectral efficiency [20]. Table 2 shows the number of uncoded bits, codeword length, the uncoded bit to coded symbol rate R _{ s } and code rate R _{ c } for different 2n. Even though the code rate remains the same as with BPSK modulation, the throughput doubles as expected.
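Both encoders reduce to indexing a bipolar codeword table with the bit groups. A sketch using a small hypothetical codeword set with n = 2 (the set itself is ours, not from the article), with zero-based indexing in place of the 1–2^{n} range:

```python
import numpy as np

# Hypothetical bipolar codeword set: 2^n = 4 balanced words of length 4 (n = 2)
C = np.array([[ 1,  1, -1, -1],
              [ 1, -1,  1, -1],
              [-1,  1, -1,  1],
              [-1, -1,  1,  1]])
n = 2

def encode_bpsk(bits):
    """Map each group of n bits to the codeword C[k] (BPSK)."""
    sym = []
    for i in range(0, len(bits), n):
        k = int("".join(map(str, bits[i:i + n])), 2)     # bit group -> index
        sym.append(C[k])
    return np.concatenate(sym)

def encode_4qam(bits):
    """Map each group of 2n bits to C[k_i] + j*C[k_q] (4QAM)."""
    sym = []
    for i in range(0, len(bits), 2 * n):
        ki = int("".join(map(str, bits[i:i + n])), 2)        # in-phase index
        kq = int("".join(map(str, bits[i + n:i + 2 * n])), 2)  # quadrature index
        sym.append(C[ki] + 1j * C[kq])
    return np.concatenate(sym)
```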
Decoder
The HNN is known to be able to recognize input patterns from a set of stored patterns [15, 21]. In the context of the HNN decoder, the patterns are the balanced codewords, and the HNN is able to determine the ML codeword from a set of codewords. This has been demonstrated before but only for one codeword at a time [17]. Therefore, if a received data block contains P codewords, the HNN will have to be applied P times in order to determine P ML codewords. However, the HNN MLSE decoder developed here is able to determine the most likely sequence of codewords using a single HNN. The HNN MLSE decoder is therefore applied once to a received data block containing any number of codewords.
After the HNN MLSE decoder has determined the sequence of most likely transmitted codewords, the codewords are demapped by calculating the Euclidean distance between each ML codeword and each codeword in {\mathbf{C}}_{{2}^{n}} for BPSK modulation, or each codeword in {\mathbf{C}}_{{2}^{n}}^{\left(i\right)}+j{\mathbf{C}}_{{2}^{n}}^{\left(q\right)} for 4QAM modulation. The index of the codeword with the lowest Euclidean distance is converted back to bits, which completes the decoding phase.
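The Euclidean-distance demapping step can be sketched as a nearest-codeword search, again using a small hypothetical BPSK codeword set:

```python
import numpy as np

# Hypothetical bipolar codeword set (rows are codewords)
C = np.array([[ 1.,  1., -1., -1.],
              [ 1., -1.,  1., -1.],
              [-1.,  1., -1.,  1.],
              [-1., -1.,  1.,  1.]])

def demap_index(word, C):
    """Index of the codeword in C closest (in Euclidean distance) to `word`."""
    return int(np.argmin(np.linalg.norm(C - word, axis=1)))

k = demap_index(np.array([0.9, -1.1, 0.8, -0.7]), C)   # noisy copy of C[1]
```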
The derivation of the HNN MLSE decoder entails the calculation of the crosscorrelation matrices X _{ i } and X _{ q }, and the input vectors I _{ i } and I _{ q } in (10). The HNN MLSE decoder is first derived for the decoding of a single codeword, after which it will be extended to enable the decoding of any number of codewords simultaneously. Derivations are performed for 4QAM only, since the BPSK HNN MLSE decoder is a simplification of its 4QAM counterpart.
Single codeword decoding
To enable the HNN to store a set of codewords, the average correlation between all patterns must be stored in the weights between the neurons. According to Hebb’s rule of autoassociative memory [22], the connection weight matrix, or correlation matrix, is calculated by taking the cross-correlation of the patterns to be stored. Since we are working with complex symbols, two weight matrices must be calculated. The cross-correlation matrices in (9) are calculated as
and
where \mathbf{C}={\mathbf{C}}_{{2}^{n}}^{\left(i\right)}+j{\mathbf{C}}_{{2}^{n}}^{\left(q\right)}, and {\mathbf{C}}_{{2}^{n}}^{\left(i\right)} and {\mathbf{C}}_{{2}^{n}}^{\left(q\right)} are the matrices containing the generated codewords as before, respectively used for the in-phase and quadrature components of the codeword. Note the similarities between the correlation matrices in (32) and (33) and those in (18) and (20). Also, the two input vectors are simply the real and imaginary components of the noise-corrupted received codeword, such that
and
where c is of length 2^{n} and n is a vector containing complex samples from the distribution \mathcal{N}(\mu ,{\sigma}^{2}), where μ = 0 and σ is the noise standard deviation. After the ML codeword is detected, each detected codeword (of length 2^{n}) can be mapped back to n bits for BPSK modulation or 2n bits for 4QAM modulation.
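The Hebbian correlation matrices above can be sketched as the real and imaginary parts of the product C^H C; this particular matrix form, with codewords as rows and the quadrature set equal to the in-phase set, is our assumption rather than the article's exact expression.

```python
import numpy as np

# Bipolar codeword matrices (rows = stored patterns); Cq = Ci is allowed
Ci = np.array([[ 1.,  1., -1., -1.],
               [ 1., -1.,  1., -1.],
               [-1.,  1., -1.,  1.],
               [-1., -1.,  1.,  1.]])
Cq = Ci.copy()

C = Ci + 1j * Cq                       # complex codeword matrix

# Hebbian cross-correlation of the stored patterns: X = C^H C
X = C.conj().T @ C
Xi, Xq = X.real, X.imag                # symmetric / skew-symmetric parts
```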
Multiple codeword decoding
It was shown above how the HNN can be used to decode single codewords, but the HNN decoder can be extended in order to detect ML transmitted sequences of codewords. This step is crucial in our quest to merge the HNN decoder with the HNN MLSE equalizer, since the HNN MLSE equalizer detects ML sequences of transmitted symbols. If the transmitted information is encoded, these sequences contain multiple codewords, and hence the HNN decoder must be extended to detect not only single codewords, but codeword sequences.
This extension is easily achieved by using the HNN parameters already derived in (32) through (35). Consider a system transmitting a sequence of P balanced codewords of length 2^{n}, where n is the length of the uncoded bitwords. The new correlation matrix is constructed by copying X in (10) along the diagonal according to the number of transmitted codewords P, such that
where \mathbf{X}=\left[\begin{array}{ll}{\mathbf{X}}_{i}& {\mathbf{X}}_{q}^{T}\\ {\mathbf{X}}_{q}& {\mathbf{X}}_{i}\end{array}\right] is repeated on the diagonal P times and ∅ implies that the rest of X ^{(P)} is empty, containing only 0’s.
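This block-diagonal replication can be expressed compactly as a Kronecker product with the identity matrix (the small 2 × 2 block below is a stand-in for the single-codeword correlation matrix):

```python
import numpy as np

X = np.array([[2., 1.],
              [1., 2.]])              # stand-in single-codeword correlation block
P = 3                                 # number of transmitted codewords

XP = np.kron(np.eye(P), X)            # X repeated P times on the diagonal
```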
The input vector I in (10), consisting of I _{ i } and I _{ q }, is likewise extended according to the number of transmitted codewords P such that
where
and
where c _{ p } is the p th codeword of length 2^{n}, where p = 1, 2, …, P, and n is of length 2^{n} P and contains complex samples from the distribution \mathcal{N}(\mu ,{\sigma}^{2}), where μ = 0 and σ is the noise standard deviation.
The extended crosscorrelation matrix and input vector in (36) and (37) can now be used to estimate the ML sequence of transmitted codewords, after which each detected codeword (of length 2^{n}) can be mapped back to n bits for BPSK modulation and 2n bits for 4QAM modulation.
HNN turbo equalizer
The HNNTE is an amalgamation of the HNN MLSE equalizer and the HNN MLSE decoder, which were discussed in the previous sections. In this section it is explained how the HNN MLSE equalizer and the HNN MLSE decoder are combined in order to perform iterative joint equalization and decoding (turbo equalization) using a single HNN structure. The HNNTE is able to jointly equalize and decode BPSK and 4QAM coded modulated signals in systems with highly dispersive multipath channels, with extremely low computational complexity compared to traditional turbo equalizers which employ a MAP equalizer/decoder pair.
System model
Since we already have complete models for the HNN MLSE equalizer and decoder, the combination of the two is fairly straightforward. In order to distinguish between equalizer and decoder parameters a number of redefinitions are in order. For the HNN MLSE equalizer the correlation matrix and input vector relating to (10), as derived in (22) and (27), are now X _{ E } and I _{ E }, respectively, and will henceforth be referred to as “equalizer correlation matrix” and “equalizer input vector”. Similarly the HNN MLSE decoder correlation matrix and input vector relating to (10), as derived in (36) and (37), are now X _{ D } and I _{ D }, respectively, and will henceforth be referred to as “decoder correlation matrix” and “decoder input vector”.
When a coded data block of length N _{ c } is transmitted through a multipath channel, X _{ E } and X _{ D } are determined according to (22) and (36), where both matrices are of size N _{ c } × N _{ c }. Since the functions of the equalizer and the decoder have to be merged, it makes sense to combine X _{ E } and X _{ D } so as to enable the equalizer to perform decoding, or the decoder to perform equalization. This combination is performed by first normalizing X _{ D } with respect to X _{ E }, to account for the varying energy between received data blocks in a multipath fading channel, such that
Next the new correlation matrix is determined as
The rationale behind the addition of the equalizer correlation matrix and the normalized decoder correlation matrix is that the connection weights in the decoder correlation matrix should bias those of the equalizer correlation matrix. Since X _{TE} contains X _{ E } offset by {\mathbf{X}}_{D}^{\left(\text{norm}\right)}, joint equalization and decoding is made possible.
The new input vector also needs to be calculated. I _{ D } contains the noise-corrupted coded symbols, while I _{ E } contains not only received coded symbol information, but also the ISI information. Note that when there is no multipath or fading (L = 1 and h _{0} = 1), I _{ E } reduces to I _{ D }. The new input vector used in the HNNTE is therefore simply
With the new correlation matrix X _{TE} and input vector I _{TE}, the HNNTE model is complete, and the iterative system in (11) can be used to jointly equalize and decode (turbo equalize) the transmitted coded information.
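The combination step can be sketched as follows. The Frobenius-norm scaling used for the normalization is our assumption, since the exact normalization expression is not reproduced here; the matrices are random stand-ins for X _{ E } and X _{ D }.

```python
import numpy as np

rng = np.random.default_rng(3)
Nc = 8
A = rng.standard_normal((Nc, Nc)); X_E = A + A.T     # stand-in equalizer matrix
B = rng.standard_normal((Nc, Nc)); X_D = B + B.T     # stand-in decoder matrix

# Scale X_D to the energy level of X_E (assumed normalization), then let the
# decoder weights bias the equalizer weights
X_D_norm = X_D * (np.linalg.norm(X_E) / np.linalg.norm(X_D))
X_TE = X_E + X_D_norm
```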
Transformation
Upon reception, the received symbol vector has to be deinterleaved to restore the one-to-one relationship between each element in r and c with respect to the first coefficient h _{0} of the CIR h = {h _{0}, h _{1}, …, h _{ L−1}}^{T}. Deinterleaving r transforms the transmission model in (1). Substituting (2) into (1) and applying the deinterleaver, which is simply the Hermitian transpose of the interleaver matrix J, gives
which is equivalent to transmitting the coded symbol sequence c = G ^{T} s through a channel
\mathbf{Q}={\mathbf{J}}^{T}\mathbf{H}\mathbf{J}
Therefore (43) can be written as
Consequently the new channel matrix Q, rather than the conventional channel matrix H in (3), is used in the calculation of the equalizer correlation matrix X _{ E } derived in (22). Due to the above transformation, Q does not contain the CIR h on its diagonals as H does. Rather, each column of Q (of length N _{ c }) contains a unique random combination of all CIR coefficients (where the remaining N _{ c } − L elements in the column are equal to 0), dictated by the randomization effect of the random interleaver. This randomization results from first multiplying the channel H with the interleaving matrix J and then deinterleaving by multiplying the result with J ^{T} (see (44)). Deinterleaving places the first CIR coefficient (h _{0}) on the diagonal of Q, restoring the one-to-one relationship between each element in r and each corresponding coded transmitted symbol in c.
To illustrate this concept, consider the three-dimensional representations of HJ and Q in Figures 2a,b, 3a,b, 4a,b, and 5a,b, for a hypothetical system transmitting coded information through multipath channels with CIR lengths of L = 1, L = 5, L = 10, and L = 20, respectively, with a block length N_c = 80. Figure 2a,b shows HJ and Q for a channel of length L = 1, where Figure 2a is clearly interleaved. It is also clear that the new channel Q in Figure 2b is deinterleaved, since the first coefficient h_0 of the CIR has been restored to the diagonal of Q. Figures 3a, 4a, and 5a show the interleaved channels for L = 5, L = 10, and L = 20, while Figures 3b, 4b, and 5b show the corresponding new channels Q, again with the first CIR coefficient h_0 restored to the diagonal. Even though h_0 is restored to the diagonal of Q, the remaining CIR coefficients h_1, h_2, …, h_{L−1} are scattered throughout Q: each column in Q contains a unique random arrangement of all the CIR coefficients (with h_0 on the diagonal), where the remaining N_c − L elements in each column are equal to 0.
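The effect of the transformation can also be checked numerically. The sketch below builds a hypothetical banded channel matrix H, a random interleaver J as a permutation matrix, and the transformed channel Q = J^T H J, then verifies that h_0 lands on the diagonal of Q while each column holds at most L CIR coefficients. The specific matrix construction and the real-valued CIR are illustrative assumptions consistent with the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

Nc, L = 80, 5
h = rng.standard_normal(L)                 # hypothetical real-valued CIR

# Lower-triangular banded convolution matrix H with h0 on the main diagonal
H = np.zeros((Nc, Nc))
for i in range(Nc):
    for l in range(L):
        if i - l >= 0:
            H[i, i - l] = h[l]

# Random interleaver as a permutation matrix J
perm = rng.permutation(Nc)
J = np.eye(Nc)[perm]

# Interleave the channel, then deinterleave with J^T: Q = J^T (H J)
Q = J.T @ H @ J

# h0 is restored to the diagonal of Q ...
assert np.allclose(np.diag(Q), h[0])
# ... and each column of Q still holds at most L nonzero CIR coefficients
assert all(np.count_nonzero(Q[:, j]) <= L for j in range(Nc))
```

Note that because J is a permutation matrix, the diagonal entries of J^T H J are simply diagonal entries of H, which is why h_0 survives on the diagonal regardless of the interleaver.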
Computational complexity analysis
The computational complexity of the HNNTE is compared to that of the CTE by calculating the number of computations performed for each received data block, for a fixed set of system parameters. The number of computations is normalized by the coded data block length so as to factor out the effect of the length of the transmitted data block, which allows the computational complexity to be presented in terms of the number of computations required per received coded symbol. The complexity of the HNNTE is quadratically related to the coded data block length, so a change in N_c will still affect the normalized computational complexity.
The computational complexity of the HNNTE was calculated as
where N_c is the coded data block length, L is the CIR length, M is the modulation constellation alphabet size (2 for BPSK and 4 for 4QAM), Z_HNNTE is the number of iterations, and k is the codeword length, chosen as k = 8 for a code rate of R_c = 3/8. The first term in (46) is associated with the calculation of X_i in (19) and X_q in (21). The second term is associated with the calculation of Λ in (28) and Ω in (29). The third term is for the iterative calculation of the ML coded symbols in (11), while the second-to-last term in (46) is for the trivial ML detection of codewords after joint iterative MLSE equalization and decoding. The last term is due to the transformation in (43) through (45). Note that in the first and last terms of (46) the exponent is 2.376: it has been shown in [23] that the complexity of multiplying two N × N matrices can be reduced from O(N^3) to O(N^{2.376}). However, because cubic-complexity matrix multiplication is still preferred in practical applications due to ease of implementation, (46) serves as a lower bound on the HNNTE computational complexity.
Therefore, the computational complexity of the HNNTE is approximately quadratic at best, or more realistically cubic, in the coded data block length (N_c), quadratic in the modulation constellation alphabet size (M), quadratic in the codeword length (k), and approximately independent of the channel memory length (L).
The complexity of the CTE was determined as
where Z_CTE is the number of iterations and Q is the number of equalizer states, given by 2^{L−1} for BPSK modulation and 4^{L−1} for 4QAM. The first term in (47) is associated with the equalizer, while the second term is associated with MAP decoding. The computational complexity of the CTE is therefore linear in the coded data block length (N_c), exponential in the channel memory length (L), and quadratic in the codeword length (k).
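Since the full expressions (46) and (47) are not reproduced here, the sketch below compares only the dominant terms described in the text: roughly N_c^{2.376}-type matrix products plus Z iterations of an N_c^2 update per block for the HNNTE, against Z_CTE trellis passes over M^{L−1} states per symbol for the CTE. The constant factors are illustrative assumptions; the point is the flat-versus-exponential growth in L.

```python
def hnnte_per_symbol(Nc, Z=25, exponent=2.376):
    """Dominant per-symbol cost of the HNNTE: two Nc x Nc matrix products
    plus Z iterations of an Nc^2 matrix-vector update, normalized by Nc.
    (Dominant terms only; constants are illustrative.)"""
    return (2 * Nc**exponent + Z * Nc**2) / Nc

def cte_per_symbol(Nc, L, M=2, Z=5):
    """Dominant per-symbol cost of the CTE: Z passes over a trellis with
    M**(L-1) states per received symbol. (Dominant term only.)"""
    return Z * M**(L - 1)

# The HNNTE per-symbol cost is flat in L, while the CTE cost is exponential
# in L, so for a fixed Nc there is always a channel length beyond which the
# HNNTE is cheaper.
Nc = 1280
crossover = next(L for L in range(1, 64)
                 if cte_per_symbol(Nc, L) > hnnte_per_symbol(Nc))
```

With these assumed constants the BPSK crossover for N_c = 1280 lands in the mid-teens of L, consistent with the qualitative picture in Figures 6 through 8.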
Figure 6 shows the normalized computational complexity of the HNNTE and the CTE for coded data block lengths of N_c = 80, N_c = 160, N_c = 320, N_c = 640, N_c = 1280, and N_c = 2560, where Z_HNNTE = 25 and Z_CTE = 5, for BPSK and 4QAM modulation when O(N^{2.376}) matrix multiplication complexity is considered. Figure 7 shows the same information as Figure 6, but with O(N^3) matrix multiplication complexity. It is clear that the computational complexity of the HNNTE increases with an increase in coded data block length, but for realistic data block lengths the complexity of the HNNTE is superior to that of the CTE for channels with long memory. The HNNTE is computationally less complex for BPSK modulation than for 4QAM, but only slightly so. The complexity of the CTE, on the other hand, grows exponentially with an increase in modulation order. From Figure 6 it is clear that the complexity of the HNNTE is almost quadratically related to the coded data block length and approximately independent of the channel memory length, which becomes more evident as L is increased. The normalized computational complexity of the HNNTE and the CTE (for O(N^{2.376}) and O(N^3) matrix multiplication complexity) for N_c = 1280 using BPSK and 4QAM in extremely long channels is shown in Figure 8, where the complexity of the CTE is incomparably higher than that of the HNNTE, for both BPSK and 4QAM modulation.
Memory requirements analysis
The memory requirements of the HNNTE and the CTE are closely related to their respective computational complexities, due to the structures employed by these algorithms. Table 3 describes the memory requirements of the HNNTE for each received data block. The total memory requirement of the HNNTE is 2N_c^2 + 6N_c + (N_c + L − 1) + 2(N_c + L − 1)^2 variables, where each variable is of type float, which uses 32 bits. The memory requirements of the CTE per data block are shown in Table 4. The total memory requirement of the CTE is N_c M^{L−1} + 4N_c + L. Figure 9 shows the memory requirement of the HNNTE and the CTE in bytes (32 bits = 4 bytes) for coded data block sizes of N_c = 160, N_c = 640, and N_c = 2560 and CIR lengths increasing from L = 1 to L = 25. From Figure 9 it is clear that the memory requirement of the HNNTE remains approximately constant over all channel lengths and modulation alphabet sizes, with less than 1 MB of memory required for N_c = 160, 6.6 MB for N_c = 640, and 100 MB for N_c = 2560. The memory requirements of the CTE, however, grow exponentially with the channel memory length, since the size of the trellis structure used in the MAP equalizer grows according to the same measure. The break-even point between the BPSK CTE and the HNNTE (for both BPSK and 4QAM) is L = 10.40 for N_c = 160, L = 12.35 for N_c = 640, and L = 14.30 for N_c = 2560, beyond which the HNNTE requires less memory than the CTE. Similarly, the break-even point between the 4QAM CTE and the HNNTE is L = 5.68 for N_c = 160, L = 6.66 for N_c = 640, and L = 7.66 for N_c = 2560. The memory requirements of the HNNTE are therefore more favorable when higher-order modulation alphabets are employed.
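The break-even points can be reproduced from the two memory totals. The sketch below steps over integer CIR lengths, so it returns the first integer L past each fractional break-even point quoted in the text (e.g. L = 11 past the quoted 10.40 for BPSK at N_c = 160).

```python
def hnnte_mem_floats(Nc, L):
    """Total HNNTE memory in floats: 2Nc^2 + 6Nc + (Nc + L - 1) + 2(Nc + L - 1)^2."""
    return 2 * Nc**2 + 6 * Nc + (Nc + L - 1) + 2 * (Nc + L - 1)**2

def cte_mem_floats(Nc, L, M):
    """Total CTE memory in floats: Nc * M**(L-1) + 4Nc + L, where M**(L-1)
    is the number of trellis states in the MAP equalizer."""
    return Nc * M**(L - 1) + 4 * Nc + L

def break_even(Nc, M):
    """Smallest integer CIR length at which the CTE needs more memory than
    the HNNTE (32-bit floats cancel out of the comparison)."""
    return next(L for L in range(1, 64)
                if cte_mem_floats(Nc, L, M) > hnnte_mem_floats(Nc, L))
```

For example, `break_even(160, 2)` and `break_even(160, 4)` bracket the quoted 10.40 and 5.68, confirming that the HNNTE becomes the more memory-efficient choice sooner for 4QAM than for BPSK.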
Simulation results
The proposed HNNTE was evaluated in a mobile fading environment for BPSK and 4QAM modulation at a code rate of R_c = n/k = 3/8. To simulate the fading effect of mobile channels, the Rayleigh fading simulator proposed in [24] was used to generate uncorrelated fading vectors. When imperfect channel state information (CSI) was assumed, least squares channel estimation was performed using various numbers of training symbols in the transmitted data block. When perfect CSI was assumed, on the other hand, the CIR coefficients were “estimated” by taking the mean of the uncorrelated fading vectors. Simulations were performed for short and long channels at various mobile speeds, as well as to compare the performance of the HNNTE and a CTE in short mobile fading channels for BPSK modulation. For all simulations the uncoded data block length was N_u = 480 and the coded data block length was N_c = 1280. In all simulations the frequency was hopped four times during each data block in order to further reduce the BER. For the CTE the number of iterations was Z = 5, while for the HNNTE, instead of a fixed number of iterations, the function Z(E_b/N_0) = 2(5^{(E_b/N_0)/5}) (which, rounded down, produces Z = {2, 4, 10, 22, 50} for E_b/N_0 = {0, 2.5, 5, 7.5, 10} dB) was used to determine the number of iterations for a given E_b/N_0.
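The iteration schedule can be computed directly from the stated function; rounding the result down to an integer iteration count is an assumption made here.

```python
import math

def hnnte_iterations(ebn0_db):
    """Number of HNN iterations as a function of Eb/N0 in dB:
    Z(Eb/N0) = 2 * 5**((Eb/N0)/5), rounded down to an integer
    (the rounding convention is an assumption)."""
    return math.floor(2 * 5 ** (ebn0_db / 5))

# Schedule over the simulated Eb/N0 grid
schedule = [hnnte_iterations(x) for x in (0, 2.5, 5, 7.5, 10)]
```

The schedule spends few iterations at low SNR, where extra iterations bring little BER improvement, and many at high SNR, where they pay off.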
Figure 10 shows the performance of the HNNTE and the CTE for channel lengths of L = 4, L = 6, and L = 8 at a fixed mobile speed of 20 km/h, assuming perfect CSI. The performance of the HNNTE is slightly better than that of the CTE at high SNR levels.
Figure 11 shows the performance of the HNNTE and the CTE for a channel of length L = 6 at mobile speeds of 3 km/h, 50 km/h, 80 km/h, 140 km/h, and 200 km/h, assuming perfect CSI. It is clear that the HNNTE outperforms the CTE at mobile speeds greater than 20 km/h, with the performance advantage growing as the mobile speed increases. The HNNTE appears to be less affected by increasing mobile speed, which suggests that it is able to perform well in fast-fading mobile environments.
Figure 12 shows the performance of the HNNTE and the CTE for a channel of length L = 6 at a mobile speed of 20 km/h, assuming imperfect CSI. To estimate the channel, training sequences of length 4L, 6L, 8L, and 10L were used. From Figure 12 it is clear that the HNNTE is superior to the CTE at high SNR levels when perfect CSI is not available; the HNNTE appears to be less sensitive to channel estimation errors.
It is clear from Figures 10, 11, and 12 that the performance of the HNNTE is superior to that of a CTE in short channels at varying mobile speeds, for both perfect and imperfect CSI. The HNNTE outperforms the CTE in short channels, but at higher computational cost: Figure 6 shows that the HNNTE is more computationally complex than the CTE for short channels (L < 10) when the coded data block length is relatively small (N_c < 1280). For long channels, however, the complexity of the HNNTE is vastly superior to that of the CTE. It might be argued that the HNNTE performs better than the CTE simply because more iterations are used, but that is not the case: it is stated in [3] that the performance of the CTE cannot be improved significantly beyond Z = 3 iterations in Rayleigh fading channels. The performance gain of the HNNTE over the CTE is therefore probably due to the fact that the HNNTE processes all the available information internally as a whole, without having to exchange information between a separate equalizer and decoder, as is the case in a CTE.
Figure 13 shows the performance of the HNNTE for channels of length L = 10, L = 20, L = 50, and L = 100 at a fixed mobile speed of 20 km/h for BPSK and 4QAM modulation, assuming perfect CSI. The performance for BPSK modulation is better than that for 4QAM, because Gray coding cannot be applied in the encoding process described in Section 4.2.2. This performance loss is therefore expected.
Figure 14 shows the performance of the HNNTE for a channel of length L = 50 at mobile speeds of 20 km/h, 80 km/h, 140 km/h, and 200 km/h for BPSK and 4QAM modulation, assuming perfect CSI. It is clear that an increase in mobile speed leads to a performance degradation, although not as much as expected. Again BPSK modulation performs better than 4QAM modulation.
Figure 15 shows the performance of the HNNTE for a channel of length L = 50 at a mobile speed of 20 km/h for BPSK and 4QAM modulation, assuming imperfect CSI. To estimate the channel, training sequences of length 4L, 6L, 8L, and 10L were used. As expected, a performance loss is incurred with a decrease in the number of training symbols. Again BPSK modulation outperforms 4QAM modulation.
Figure 16 shows the performance of the HNNTE for a channel of length L = 25 at a mobile speed of 20 km/h for BPSK and 4QAM modulation, assuming perfect CSI, for different numbers of iterations, chosen as Z = 5, Z = 10, Z = 20, and Z = 50. The BER performance improves with an increase in the number of iterations. Since the performance degradation due to a decrease in the number of iterations is small at low signal levels, an iteration schedule dependent on the signal level was adopted; as stated before, the function Z(E_b/N_0) = 2(5^{(E_b/N_0)/5}) determines the number of iterations.
Figure 17 shows the performance of the HNNTE for a channel of length L = 50 at a mobile speed of 20 km/h for BPSK and 4QAM modulation, assuming perfect CSI, for different code rates: R_c = 1/2 (2/4), R_c = 3/8, R_c = 1/4 (4/16), and R_c = 5/32. From Figure 17 it is clear that the performance of the HNNTE improves with a decrease in the code rate, with 4QAM modulation performing worse than BPSK modulation.
From Figures 13, 14, 15, 16, and 17 it is clear that the HNNTE is able to jointly equalize and decode BPSK and 4QAM modulated signals transmitted through extremely long mobile fading channels. While the data rate using 4QAM modulation is twice that of BPSK modulation, the performance for 4QAM modulation is worse, due to the fact that Gray coding cannot be applied during coded modulation.
Conclusion
In this article, a low complexity turbo equalizer was developed that is able to jointly equalize and decode BPSK and 4QAM coded-modulated signals in systems transmitting interleaved information through multipath fading channels. It uses the Hopfield neural network as its framework and was hence fittingly named the Hopfield neural network turbo equalizer, or HNNTE. The HNNTE is able to turbo equalize coded-modulated BPSK and 4QAM signals in short as well as long multipath channels, slightly outperforming the CTE in short channels, although at higher computational cost. In long channels, however, the computational complexity of the HNNTE is vastly superior to that of the CTE. The computational complexity of the HNNTE is almost quadratically related to the coded data block length, while being approximately independent of the CIR length, which enables it to turbo equalize signals in systems with multiple hundreds of multipath elements. It was also demonstrated that the HNNTE is less susceptible than the CTE to channel estimation errors, and that it outperforms the CTE in fast fading channels. The performance of the HNNTE for BPSK modulation is better than for 4QAM modulation, since Gray coding cannot be employed due to the coded modulation explained in this article, while the complexity for 4QAM is slightly higher.
References
Berrou C, Glavieux A, Thitimajshima P: Near Shannon limit error-correcting coding and decoding: Turbo-Codes. Int. Conf. Commun. 1993, 1064-1070.
Douillard C, Jezequel M, Berrou C, Picart A, Didier P, Glavieux A: Iterative correction of intersymbol interference: turbo-equalization. Europ. Trans. Telecommun. 1995, 6: 507-511. 10.1002/ett.4460060506
Bauch G, Khorram H, Hagenauer J: Iterative equalization and decoding in mobile communication systems. Proceedings of European Personal Mobile Communications Conference (EPMCC) 1997, 307-312.
Koetter R, Tuchler M, Singer AC: Turbo equalization. IEEE Signal Process. Mag. 2004, 21(1):67-80. 10.1109/MSP.2004.1267050
Koetter R, Tuchler M, Singer AC: Turbo equalization: principles and new results. IEEE Trans. Commun. 2002, 50(5):754-767. 10.1109/TCOMM.2002.1006557
Lopes RR, Barry JR: The soft feedback equalizer for turbo equalization of highly dispersive channels. IEEE Trans. Commun. 2006, 54(5):783-788.
Duel-Hallen A, Heegard C: Delayed decision-feedback sequence estimation. IEEE Trans. Commun. 1989, 37(5):428-436. 10.1109/26.24594
Eyuboglu MV, Qureshi SU: Reduced-state sequence estimation with set partitioning and decision feedback. IEEE Trans. Commun. 1988, 36(1):13-20. 10.1109/26.2724
Wu J, Leong S, Lee K, Xiao C, Olivier JC: Improved BDFE using a priori information for turbo equalization. IEEE Trans. Wirel. Commun. 2008, 7(1):233-240.
Lou H, Xiao C: Soft-decision feedback turbo equalization for multilevel modulations. IEEE Trans. Signal Process. 2011, 59(1):186-195.
Fijalkow I, Pirez D, Roumy A, Ronger S, Vila P: Improved interference cancellation for turbo-equalization. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 2000, 416-419.
Wang X, Poor HV: Iterative (turbo) soft interference cancellation and decoding for coded CDMA. IEEE Trans. Commun. 1999, 47(7):1046-1061. 10.1109/26.774855
Ampeliotis D, Berberidis K: Low complexity turbo equalization for high data rate. EURASIP J. Commun. Network. 2006, 2006(ID 25686):1-12.
Myburgh HC, Olivier JC: Reduced complexity turbo equalization using a dynamic Bayesian network. EURASIP J. Adv. Signal Process. 2012. (Submitted for Publication)
Hopfield JJ, Tank DW: Neural computations of decisions in optimization problems. Biol. Cybern. 1985, 52: 1-25. 10.1007/BF00336930
Myburgh HC, Olivier JC: Low complexity MLSE equalization in highly dispersive Rayleigh fading channels. EURASIP J. Adv. Signal Process. 2010, 2010(ID 874874). http://asp.eurasipjournals.com/content/2010/1/874874
Wiberg N: A class of Hopfield decodable codes. Proceedings of the IEEE-SP Workshop on Neural Networks for Signal Processing 1993, 88-97.
Wang Q, Bhargava VK: An error correcting neural network. IEEE Pacific Rim Conference on Communications, Computers and Signal Processing 1989, 530-533.
Knuth D: Efficient balanced codes. IEEE Trans. Inf. Theory 1986, IT-32(1):51-53.
Proakis JG: Digital Communications. New York: McGraw-Hill, International Edition; 2001.
Hopfield JJ: Artificial neural networks. IEEE Circ. Dev. Mag. 1988, 4(5):3-10.
Hebb DO: The Organization of Behavior. New York: Wiley; 1949.
Coppersmith D, Winograd S: Matrix multiplication via arithmetic progressions. J. Symbolic Comput. 1990, 9(3):251-280. 10.1016/S0747-7171(08)80013-2
Zheng YR, Xiao C: Improved models for the generation of multiple uncorrelated Rayleigh fading waveforms. IEEE Commun. Lett. 2002, 6: 256-258.
Competing interests
Both authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Myburgh, H.C., Olivier, J.C. A low complexity Hopfield neural network turbo equalizer. EURASIP J. Adv. Signal Process. 2013, 15 (2013). https://doi.org/10.1186/1687-6180-2013-15
Keywords
 Turbo equalizer
 Hopfield neural network
 Rayleigh fading
 Low complexity