EURASIP Journal on Applied Signal Processing 2002:12, 1448–1459 © 2002 Hindawi Publishing Corporation

On the Blind Estimation of Baud-Rate Equalizer Performance

This paper proposes a new method for carrying out joint blind equalization and blind estimation of the bit error rate (BER) at the output of baud-rate FIR equalizers. A simple test for assessing decision errors at the output of the decision device is derived. A comparative study of several BER estimation methods is presented in terms of convergence rate and tracking capability for both static and dynamic channels. Simulations not only validate the theoretical results but also point out the effectiveness of the new proposition in terms of low computational burden and accurate BER estimation. Finally, an application of the new proposition to the detection and correction of misconvergence due to local minima is also presented.


INTRODUCTION
The universal mobile telecommunication system (UMTS) norm [1] is the major current trend in mobile communications. This norm aims at establishing a global mobile system wherein any terminal may communicate with any other terminal. The terminals may be located anywhere on Earth and may be mobile or fixed. At the same time, the deployment of UMTS requires the interconnection of several different local telecommunication systems in order to provide the link between two terminals. Of course, satellite communications play an important role in UMTS since they provide cost-effective international links [2,3].
Three major characteristics of the UMTS are discussed in the following [4].
(C1) Demand for higher transmission rates is continually growing [1].
(C2) The communication channel is a time-variant system that is difficult to characterize. In fact, temporal variations may be predicted only with limited accuracy, and the models are quite dependent on both the spatial and time scales [1]. In consequence, synchronization between the two communicating terminals is quite problematic due to severe fades, which requires special techniques for assuring the system performance, for example, adaptive transmission [5].
(C3) The management of a global communication system such as UMTS is quite complex, since it may be divided into several local subsystems. Recent work [6] pointed out that, for assuring competitive quality, reliability, and availability, UMTS wireless fault management should employ an overlay system that continuously evaluates the signal quality at the level of the local subsystems. For instance, [6] proposes a monitoring system based on the estimation of the bit error rate (BER).

ADAPTIVE EQUALIZATION AND BUSSGANG ALGORITHMS
This work focuses on blind equalization [7,8]. In this case, the equalizer update is carried out by means of an algorithm which does not require the use of an exact copy of the transmitted signal. The following reasons motivate this choice.
(R1) Blind techniques may enhance transmission rates since they do not require a training period for the equalizer.
(R2) Blind techniques avoid the need for accurate synchronization between transmitter and receiver, which is a stringent requirement associated with the training period of supervised equalization.
By respectively comparing (C1) and (C2) to (R1) and (R2), we may conclude that blind techniques agree quite well with the growing transmission rate demands and avoid the problematic synchronization associated with UMTS. Among the several blind techniques, this work focuses in particular on Bussgang algorithms. This methodology consists of a recursive optimization procedure, derived from the stochastic minimization of some cost function, which is in turn defined according to some statistical criterion. Bussgang algorithms present several interesting features, such as simple implementation, low computational burden, and well-established theoretical results.
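As an illustration of the Bussgang methodology, the following minimal sketch shows one stochastic-gradient update of the Sato algorithm used later in the simulations. The function name, the sign convention of the update, and the use of NumPy are our assumptions, not taken from the paper.

```python
import numpy as np

def sato_step(c, u, mu, gamma):
    """One stochastic-gradient (Bussgang) update of the Sato algorithm.
    c: equalizer taps; u: the most recent L input samples;
    gamma = E[x^2] / E[|x|] for the transmitted M-PAM constellation.
    No pilot signal is needed: the error is built from the output itself."""
    y = np.dot(c, u)                # equalizer output
    e = y - gamma * np.sign(y)      # Sato (blind) error
    return c - mu * e * u, y        # gradient-descent tap update
```

The update is driven only by statistics of the equalizer output, which is the defining property of the Bussgang family.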
However, Bussgang algorithms do present drawbacks which are mainly connected to the Bussgang cost functions. In fact, it has been demonstrated in [8] that for practical purposes, at least one of the local minima of all Bussgang cost functions may be associated with a poor steady-state equalization or even no equalization at all. This means that, broadly speaking, Bussgang blind techniques cannot assure all the time that equalization will take place. In consequence, the performance of Bussgang equalizers is quite dependent on the initial values assigned to the algorithm parameters.
Of course, if Bussgang equalizers are to be used in a UMTS, then it is of paramount importance to develop methods to assess the equalizer performance, for example, the estimation of the BER at the output of the blind equalizer. Such a procedure is motivated by two major reasons. Firstly, it enables one to monitor, detect, and provide solutions for local minima problems associated with the problematic learning of Bussgang equalizers. Secondly, in view of UMTS characteristic (C3), the BER at the output of a Bussgang equalizer, which is associated with a system terminal, could be considered as a kind of signal quality measure at the level of a local subsystem. As the equalizer performs joint blind equalization and blind BER estimation, it is then possible to minimize the complexity of the "Performance Manager" proposed in [6], so that signal quality monitoring is partially carried out on a local basis.

Consider in Figure 1 the classical mathematical model used for the analysis of adaptive equalizers, where {h(n)}, {c(n)}, and {v(n)} denote, respectively, the impulse responses of the channel, the linear equalizer, and the global system (channel plus linear equalizer). Besides,

v(n) = h(n) * c(n),    (1)

where the operator "*" denotes discrete convolution. Suppose that
(H1) the communication system model is baseband;
(H2) the signal-to-noise ratio (SNR) is high, so that the additive noise may be neglected;
(H3) the information signal x(n) is zero-mean, iid, and M-PAM (where M is the number of modulation levels);
(H4) the communication system is linear and stable.
Notice that, although hypotheses (H1), (H2), (H3), and (H4) are restrictive, they have been extensively used in the past [9,10] in order to analyse adaptive equalization. Besides, (H1), (H2), and (H4) are currently used [7,8,11] in order to derive important results in the field of local minima analysis. It may be demonstrated that the output of the linear equalizer is given by

y(n) = v(d) x(n − d) + dist(n),    (2)

dist(n) = Σ_{j=0, j≠d}^{N+L−2} v(j) x(n − j),    (3)

where d is the equalization delay, dist(n) is the distortion or intersymbol interference, N is the channel model length, L is the equalizer length, and v(j) is the jth coefficient of the global system. The main goal of the linear equalizer is to recover the information signal, so that at the output of the decision device the "open-eye" condition is verified [11]:

|dist(n)| < Q/2,    (4)

e(n) = 0 if (4) holds, e(n) = 1 otherwise,    (5)

where Q is the distance between two adjacent levels of the M-ary PAM signal and e(n) is the equalization error or decision error. Notice that, when (4) holds, the intersymbol interference dist(n) may be different from zero, but the output of the decision device is equal to the transmitted signal; in consequence, no decision error has occurred (e(n) = 0). Conversely, if (4) does not hold, then a decision error has taken place (e(n) = 1). In the literature, (4) and (5) are rarely investigated. Most articles emphasize the following condition [12], which states that equalization is perfect when the intersymbol interference (3) is zero:

v(d) = 1,    (6)

v(j) = 0 for j ≠ d.    (7)

Equations (6) and (7) are known in the literature as the "zero-forcing (ZF) condition."
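The open-eye condition (4)-(5), taken in its worst-case form (i.e., holding for any M-PAM excitation), can be sketched numerically as follows. The normalization v(d) = 1 and all function names are our assumptions.

```python
import numpy as np

def global_system(h, c):
    """Global system v(n) = h(n) * c(n) (channel convolved with equalizer)."""
    return np.convolve(h, c)

def open_eye_holds(v, d, M, Q=2.0):
    """Worst-case open-eye test: the decision device recovers x(n - d)
    whenever the maximum possible intersymbol interference stays below
    half the distance Q between adjacent M-PAM levels.
    Assumes the cursor tap v(d) is normalized to 1."""
    x_max = (M - 1) * Q / 2.0                 # largest M-PAM symbol amplitude
    isi = np.sum(np.abs(v)) - np.abs(v[d])    # sum over j != d of |v(j)|
    return x_max * isi < Q / 2.0
```

When this worst-case test passes, (4) holds for every possible symbol sequence, so e(n) = 0 regardless of the data.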

PREVIOUS WORK
There are few works in the literature devoted to the analysis and development of estimators of the BER at the output of blind equalizers. In [13], the authors propose a binary hypothesis test in order to detect errors due to an incorrect decision of the equalizer. Although this technique is quite effective and general, since it may be applied to both FIR and DFE equalizers, it presents a high computational complexity and does not work on-line. In [14], the authors estimate the BER at the equalizer output by means of a neural network which computes the probability of wrong decisions. Although this method may be applied to nonlinear channels, the authors did not discuss the transient performance of the BER estimates, which may be influenced by local minima problems connected with neural network learning.
In the most recent work of the author [15], a simple recursive method is developed in order to estimate the BER at the output of an adaptive equalizer. Such method is based on the blind identification of the channel by means of a high-order statistics (HOS) method followed by a simple check procedure, which recognizes whether equalization errors have taken place or not. The concept of "equalization error" is considered similar to "decision error," that is, when the output of the decision device of the equalizer is different from the transmitted signal. This simple check procedure has been derived based on the "open-eye condition" [8], which may be considered as an alternative theoretical framework with respect to the classical "zero-forcing" condition. The main idea behind this check is based on the following reasoning: by estimating the channel model, we may use it along with the equalizer coefficients in order to establish whether the eye is open or not. If the eye is open, then there is no decision error. Otherwise, an estimator of the BER is updated.
Although simulation results in [15] point out that the new technique provides a BER estimator of simple implementation as well as a low computational burden (with respect to the previous methods reported in [13,14]), the new method does present drawbacks. Since estimation of cumulants is a time-consuming task, subject to error-propagation effects, the convergence of the technique proposed in [15] is slow and the absolute computational burden is still very high. These drawbacks point out that the technique may not be suitable for coping with the time variations of the mobile channel.

BASIC RESULTS FOR ANALYSING THE OPEN-EYE CONDITION AND REVIEW OF THE PREVIOUS METHOD
Since the distortion (3) is a function of the global system coefficients v( j) and since each coefficient v( j) is bounded due to the stable character of {v(n)}, we may define an upper bound for the distortion dist(n) which will be represented by the symbol Sup{dist(n)}. (The operator Sup{(·)} represents the maximum value of (·)). Suppose that (H5) the bound Sup{dist(n)} is derived by assuming the worst case, that is, the distortion is maximum; (H6) the bound is defined by taking into account just the effects of the transmitted signal x(n) so that Sup{dist(n)} is a function of {v(n)}.
Hypotheses (H5) and (H6) define a particular bound on the distortion; they were proposed in [11,16] in order to analyse the open-eye condition (4) and (5). In both articles, the authors study the worst case of maximum distortion, which enables one to cope with the general situation wherein the distortion may take any value lower than Sup{dist(n)}. This reasoning justifies the use of (H5) and (H6).
In [15], the author demonstrated the following theorem.
Based on Theorem 1, the author proposed in [15] the following simple recursive procedure in order to estimate the BER at the output of an adaptive equalizer (n denotes the iteration number). Notice that such method may be applied on-line whereas the existing estimators of the literature are not well suited for the recursive operation of the adaptive equalizer.
Procedure 1 (see [15]). Given d, L, M, and N, for n = 1 to the total number of iterations:
step 1: equalize the input signal and calculate the recovered signal x̂(n);
step 2: estimate the channel model {ĥ(n)};
step 3: estimate the global system v̂(n) = ĥ(n) * c(n);    (10)
step 4: check the open-eye condition by means of (8) and (9);
step 5: calculate ê(n);
step 6: update the following estimator of the BER (expressed in percentage):

BER(n) = (100/n) Σ_{i=1}^{n} ê(i).    (11)

Step 2 is performed by means of the blind identification of the channel model, which is based on HOS theory, for example, the "C(Q, k)" algorithm [17]. Therefore, the performance of the technique depends on the accuracy of the estimated coefficients v̂(j), which result from the convolution between the filter coefficients c(i) and the estimated channel coefficients ĥ(i). Since the convolution involves additions and multiplications, the accuracy of v̂(j) is degraded by error propagation. This explains some limitations of Procedure 1 [15].
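A minimal sketch of one iteration of Procedure 1 follows. Here `estimate_channel` is a placeholder for any HOS blind identifier such as the C(Q, k) algorithm, the decision-error test is taken in its worst-case open-eye form, and the running-average form of the BER update is our reading of step 6; all names are assumptions.

```python
import numpy as np

def procedure1_step(u_win, c, d, M, Q, estimate_channel, state):
    """One iteration of Procedure 1 (sketch).
    u_win: most recent L input samples; c: equalizer taps;
    state = {'errors': 0, 'n': 0} accumulates the BER estimate (percent)."""
    x_hat = np.dot(c, u_win)                      # step 1: equalize
    h_hat = estimate_channel(u_win)               # step 2: blind channel estimate
    v_hat = np.convolve(h_hat, c)                 # step 3: global system (10)
    x_max = (M - 1) * Q / 2.0                     # steps 4-5: open-eye test
    isi = np.sum(np.abs(v_hat)) - np.abs(v_hat[d])
    e_hat = 0 if x_max * isi < Q / 2.0 else 1
    state['n'] += 1                               # step 6: update BER estimate
    state['errors'] += e_hat
    return 100.0 * state['errors'] / state['n']
```

Note that the convolution in step 3 is exactly where cumulant-estimation errors propagate into v̂(j), which is the weakness discussed above.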

NEW THEORETICAL RESULTS FOR THE OPEN-EYE CONDITION
The following theorem, which is demonstrated in the appendix, establishes further relations between the equalizer coefficients and the open-eye condition.
where B is a real constant depending on the channel model. Notice that the hypothesis 0 < d < L − 1 is common practice in the equalization literature [7,8]. In the following, we discuss the validation of Theorem 2 in the context of an application.

Procedure 2 follows the same overall structure as Procedure 1, but the open-eye test is applied directly to the equalizer coefficients:
step 4: check the open-eye condition by means of (12) and (13) and calculate ê(n);
step 5: update the following estimator of the BER (expressed in percentage):

BER(n) = (100/n) Σ_{i=1}^{n} ê(i).    (15)

Notice that Procedure 2 avoids the convolution associated with step 3 of Procedure 1, thus decreasing the error-propagation effects due to nonideal cumulant estimation. In consequence, Procedure 2 presents a lower computational burden and may lead to more accurate BER estimation with respect to Procedure 1. The blind estimation of the channel could be carried out by any HOS method, for example, the "C(Q, k)" algorithm [17]; this technique is chosen due to its low computational burden. The estimation of the delay d should consider the truncated Z-transform of the inverse system C(z). However, experimental results pointed out that taking d to be half of the equalizer length, according to the center-spike initialization method [8], leads to reasonable results for any linear channel model.
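Since the tests (12) and (13) are not reproduced in this excerpt, the following sketch of Procedure 2 uses a stand-in coefficient-domain condition, |c(d)| > B · max_{q≠d} |c(q)|, in the spirit of Theorem 2. The stand-in test, the function names, and the running-average form of (15) are our assumptions.

```python
import numpy as np

def procedure2_step(c, d, B, state):
    """One iteration of Procedure 2 (sketch): the open-eye decision is
    taken directly from the equalizer taps, with no convolution against
    an estimated channel. B is the channel-dependent constant of Theorem 2.
    state = {'errors': 0, 'n': 0} accumulates the BER estimate (percent)."""
    others = np.abs(np.concatenate([c[:d], c[d + 1:]]))
    eye_open = np.abs(c[d]) > B * others.max()   # stand-in for tests (12)-(13)
    state['n'] += 1
    state['errors'] += 0 if eye_open else 1
    return 100.0 * state['errors'] / state['n']  # BER estimate (15), in percent
```

The absence of the convolution ĥ * c is precisely what reduces both the computational burden and the error propagation relative to Procedure 1.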

A NEW PROPOSITION FOR CARRYING OUT JOINT BLIND EQUALIZATION AND BLIND BER ESTIMATION
It should be stressed that the performance of Procedure 2 depends both on the HOS channel estimator and on the estimation of the equalization delay d. However, these issues will not be addressed in this paper. In consequence, it must be kept in mind that the performances of both Procedures 1 and 2 are subject to problems associated with channel model under/overestimation [18,19], as well as with the estimation of the delay d.

General description and performance criteria
A comparison among Procedure 1, Procedure 2, the neural network described in [14], and the classical BER estimator was carried out. These four estimators are, respectively, labelled P1, P2, NN, and SP. The last one (the classical estimator) is defined as the average of decision errors obtained by comparing the pilot signal x(n) to the recovered signal x̂(n) according to (5). A linear filter of L = 33 taps has been used as the equalizer in all simulations, along with the Sato algorithm [7], whereas the radial-basis function network of method NN has 13 centers in all situations.
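The classical estimator SP, i.e., the average of decision errors against the pilot signal, can be sketched in a few lines (the function name is ours):

```python
import numpy as np

def sp_ber(x_pilot, x_hat):
    """Classical (supervised) BER estimator SP: the average of decision
    errors obtained by comparing the pilot symbols with the decided
    symbols at the equalizer output, expressed in percent."""
    errors = np.asarray(x_pilot) != np.asarray(x_hat)
    return 100.0 * np.mean(errors)
```

Because SP counts the decision errors exactly, it serves as the reference ("optimal") curve against which P1, P2, and NN are judged.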
The transmitted signal x(n) is M-PAM. All results presented are averaged over the three situations M = 2, 4, and 8. The channel models are presented in Table 1. Model H1 has been used in [20] for assessing the viability of video signal transmission through satellite according to the European DVB (digital video broadcasting) norm, whereas model H2 is nonminimum phase and is a standard model for the simulation of neural network blind equalizers [21]. H3 has been used in [14] for testing the neural network which implements method NN. Models H4 and H5 emulate a mobile channel [22,23] which switches periodically between models H1/H2 and H3 every 1000 iterations.
For all results reported in this paper, the step sizes of all blind equalization algorithms have been set in order to achieve the same steady-state BER as fast as possible. Extensive simulation over several channel models, modulation types, and SNRs pointed out that equalization takes place when the steady-state BER is lower than 5%. Besides, all results are averages over 60 Monte Carlo runs.

Figure 2 presents the convergence of the BER estimators and defines the three criteria used for assessing the performance of the methods. Notice that the classical estimator SP is supposed to establish the "optimal" performance of all techniques, since it provides the exact calculation of the decision errors. The first criterion is the convergence time (T), which is the number of iterations required for the estimator to attain its steady-state value. The second criterion is the difference (D) between the steady-state value of the estimated BER and the steady-state amplitude of the classical BER estimator (method SP). The third criterion is the quadratic error (QE), which evaluates the average quadratic difference between the classical BER estimator and the other BER estimators over all iterations of the convergence procedure. In Figure 2, QE is calculated by averaging the quadratic difference between all values of plot 1 and the respective values of plot 2, considering all iterations. Hence, QE characterizes the tracking capability of a BER estimation method: a low value of QE means that the method closely follows the optimal SP estimator throughout the iterations. Notice that the convergence time (T) and QE characterize the transient performance of BER estimators, whereas the difference (D) characterizes the steady-state performance.
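One possible numerical reading of the three criteria is sketched below, assuming the steady-state value is taken over the last 10% of iterations and a fixed tolerance band defines convergence for T; both thresholds are our choices, not taken from the paper.

```python
import numpy as np

def compare_estimators(ber_sp, ber_est, tol=0.5):
    """Compute the three criteria of Figure 2 from two BER trajectories
    (percent per iteration). T: first iteration after which the estimate
    stays within `tol` of its steady-state value; D: steady-state
    difference to the SP reference; QE: mean squared difference over all
    iterations. Steady state = mean of the last 10% of samples."""
    ber_sp = np.asarray(ber_sp, float)
    ber_est = np.asarray(ber_est, float)
    tail = max(1, len(ber_est) // 10)
    ss_est = ber_est[-tail:].mean()
    ss_sp = ber_sp[-tail:].mean()
    inside = np.abs(ber_est - ss_est) <= tol
    # convergence time: one past the last iteration outside the band
    T = int(np.max(np.where(~inside)[0]) + 1) if not inside.all() else 0
    D = abs(ss_est - ss_sp)
    QE = float(np.mean((ber_est - ber_sp) ** 2))
    return T, D, QE
```

A small T and QE indicate good transient behaviour and tracking; a small D indicates steady-state accuracy with respect to SP.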

Static channels
Tables 2, 3, and 4 present the results, wherein the step size of all methods was kept constant during the adaptation. Careful analysis of Tables 2, 3, and 4 points out that (a) concerning convergence (criterion T), P1 presents a slow convergence, whereas NN is disturbed by noise; (b) concerning tracking capability (criterion QE), the neural network NN and P2 closely follow the optimal estimator for high SNR values, while for low SNR scenarios all methods fail, although the performance of P2 seems to be robust with respect to SNR; (c) concerning the accuracy of the estimators (criterion D), procedure P2 is the most accurate one in all situations. In brief, among the three estimators P1, P2, and NN, the neural network and the method proposed in this paper present approximately the same performance, and they are able to track the optimal estimator SP. In the low SNR scenario of 10 dB, procedure P2 presents the best performance.

Dynamic channels

Figure 3 depicts the performance of the several estimators in the case of the dynamic channel H4 for SNR = 30 dB. Procedure P2 and the neural network are able to track the optimal estimator closely, whereas procedure P1 cannot manage this channel. In this subsection, all algorithms were accelerated by means of adaptive step sizes: in the beginning of the adaptation, the amplitude of the step size is set to a high value, which is progressively decreased as learning takes place. The rate of step-size reduction is controlled by different laws, for example, exponential or linear decrease [7]. Tables 5 and 6 summarize the results; the conclusions are the same as for the last subsection.

Table 7 presents the computational burden of the several methods in terms of real additions and real multiplications per iteration, which also includes the filtering operations associated with equalization (filtering of the incoming signal u(n) and coefficient update).
Special operations such as the hyperbolic tangent of the neural network, as well as the selection process of step 3 of Procedure 2, are not considered. Clearly, procedure P2 presents a reasonable computational requirement with respect to both the optimal SP and the neural network. Table 8 presents the computational burden in terms of the average number of variables in memory for each technique. This quantity is associated with the microprocessor data memory and represents the total number of scalar quantities which must be held in memory in order to perform one iteration. Tables 7 and 8 were estimated for signal processors working in serial mode (no parallel processing).

Presentation of the problem and management policy
The years 2000, 2001, 2002, and 2003 are characterized by maximum solar activity [24], which impairs satellite communications through several effects. Among these, this paper focuses on the well-known single event upset (SEU) [25,26], which may change bit values located anywhere in the memory unit on board the satellite. In [27], the author investigated the impact of SEU on Bussgang equalizers. Simulations pointed out that this technique may not enable a reasonable signal reconstruction quality, or even no equalization at all, if the microprocessor routine does not consider the influence of SEU. This subject is developed further in the following.

Now, consider that the microprocessor implementing the Bussgang equalizer undergoes a SEU due to a solar flare event, such that the value of one bit located somewhere in the memory unit is changed. This SEU will influence the filter adaptation, and its effect may be modelled as a random slight deviation imposed on the algorithm variables [26]. In [27], these effects were analysed in detail, and it was pointed out that the worst impact on the system performance takes place when the SEU modifies the coefficient update by a random slight deviation of amplitude ∆C(n):

C(n + 1) = C(n) − µ e(n) U(n) + ∆C(n),    (16)

where C(n) denotes the vector containing the filter coefficients at iteration n, µ denotes the step size, e(n) denotes the Bussgang error, and U(n) denotes the vector containing the signal at the input of the equalizer at iteration n.
It should be stressed that (16) is a mathematical representation of the physical process leading to the change of bits in some registers (or in some part of the RAM) of the microprocessor employed in the satellite receiver. Notice that the SEU may change a different number of memory bits each time; such an event is called a "severe upset" according to [25,26]. Notice also that the vector C(n) has length L, where 32 < L < 264 for practical purposes [7]. In consequence, the worst impact of a SEU on the Bussgang algorithm may lead to a bit impairment so that

32 < B < 264,    (17)

where B is the number of filter coefficients influenced at the same time by the SEU associated with ∆C(n).

Figure 4 depicts two BER plots of a Bussgang equalizer. Plot 1 is associated with the standard transient behaviour, where the BER departs from a high value and gradually decreases to a very low magnitude in the steady state. Plot 2, in turn, illustrates the effect of a SEU, as discussed in [27]. If the SEU takes place at time n = S, then the BER suddenly jumps, increasing with a high derivative in a very short period of time. Then, as the algorithm runs, the BER decreases again until the steady state is reached at time S1 (Figure 4). The period T = S1 − S is called the "recovery time", corresponding to the time the equalizer needs to overcome the SEU effect.
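For simulation purposes, the perturbation ∆C(n) of (16) can be emulated as follows. The uniform random selection of the B affected taps and the ± sign model are our assumptions and are not claimed in [26,27].

```python
import numpy as np

def apply_seu(c, num_hit, delta_amp, rng):
    """Emulate a single-event upset on the coefficient vector C(n):
    a random slight deviation of amplitude `delta_amp` is added to
    `num_hit` randomly chosen taps (a sketch of the Delta-C(n) term
    in (16)). The original vector is left untouched."""
    c = c.copy()
    idx = rng.choice(len(c), size=num_hit, replace=False)
    c[idx] += delta_amp * rng.choice([-1.0, 1.0], size=num_hit)
    return c
```

Sweeping `num_hit` over the range of B reproduces the kind of experiments summarized in Tables 10 and 11.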
The unusual behaviour of the BER in Figure 4 may be explained as follows. The random slight deviation of amplitude ∆C(n) may drive the blind algorithm to a different situation, which is equivalent to initializing the algorithm with a "nonoptimal" coefficient vector C(n = 0). If the algorithm departs from this "nonoptimal" initial condition, then the chances of achieving a low-amplitude BER are quite high.

Table 9: General procedure for the management of SEU effects on Bussgang blind equalizers.
For each iteration n of the adaptive algorithm:
# estimate BER(n);
# estimate the BER derivative: D = BER(n) − BER(n − 1);
# if BER(n) > 50% and D > 50%, then C(n + 1) = [0 0 0 · · · 1 · · · 0 0 0]^T.

Clearly, based on the previous discussions, a simple way to detect the SEU effect is to monitor the BER amplitude and check its time derivative. If this derivative is higher than a fixed bound, a countermeasure may be taken to overcome the SEU effect: forcing the blind algorithm to restart adaptation from the optimal center-spike initialization [8]. Table 9 summarizes these guidelines.
In Table 9, the bound of 50% for D was established based on the experimental results presented in [27]. Notice that this strategy enables one to cope with SEU by means of an auxiliary routine running on the microprocessor, without any kind of special microelectronic hardening technique.
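The auxiliary routine of Table 9 can be sketched as follows; the function name and the placement of the spike at an arbitrary index `d_spike` are our assumptions.

```python
import numpy as np

def manage_seu(ber_curr, ber_prev, c, d_spike):
    """One pass of the Table-9 management policy: if the estimated BER and
    its one-step derivative both exceed 50%, restart adaptation from the
    center-spike vector (all zeros except a 1 at position `d_spike`)."""
    derivative = ber_curr - ber_prev
    if ber_curr > 50.0 and derivative > 50.0:
        c = np.zeros_like(c)
        c[d_spike] = 1.0          # center-spike re-initialization [8]
    return c
```

The routine only needs the blind BER estimate and one stored past value, so its overhead on top of Procedure 2 is negligible.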

Results and discussion
All simulations regarding the strategy of Table 9 were carried out in a similar way as described in Section 8.1; however, just models H1 and H2 were considered. Tables 10 and 11 present the results obtained for several situations. For each situation (one channel model and one SNR), the following procedure was employed. First, the optimal convergence of the algorithm was established, supposing the center-spike initialization [8]. Once the algorithm achieves the steady state, the vector C(n) is impaired so that a quantity of B filter coefficients undergoes a SEU. Then, the recovery time T is evaluated, as well as the steady-state BER (SS-BER).
The procedure discussed in the last paragraph was repeated at least 100 times for each situation; each repetition corresponds to a different choice of the group of B filter coefficients which undergo the SEU effect. All results of Tables 10 and 11 were calculated by taking the average among these 100 runs.
The results of Tables 10 and 11 may be summarized as follows.
(a) Recovery times and SS-BER associated with channel 2 are always higher than their respective counterparts associated with channel 1.
(b) The recovery time and the SS-BER increase as B increases.
(c) For B < 10, T and SS-BER increase as the SNR decreases. Conversely, for B > 10, T and SS-BER increase as the SNR increases.
The last conclusion means that, when an extreme SEU effect takes place, additive noise helps the system to quickly overcome the SEU impact. This is, to some extent, a surprising result. It should be noticed from Table 11 that, for B > 10, the application of the new BER estimator to channel H2 does not always lead the equalizer to a reasonable performance, since the SS-BER must be equal to or less than 5% in order to characterize perfect equalization. This drawback of the SEU management proposition points out some limitations, suggesting that the center-spike initialization procedure [8] is not always the best strategy for all channel models.

CONCLUSIONS
In this paper, a new theoretical relationship concerning the open-eye condition was derived. The analysis was applied to the evaluation of blind equalizer performance, leading to a simple procedure which presents an interesting tradeoff between computational requirements, tracking capability, and accurate BER estimation for both static and dynamic channels. Simulations validate the theoretical analysis and point out that the proposed BER estimator could be used for practical purposes (e.g., fault management in the UMTS), performing local subsystem performance monitoring. Notice that the new method could also be used at the level of subchannels in the context of multiuser communications. A comparative simulation study of several BER estimators has been carried out, pointing out that the neural network approach is not robust and that the estimation of the BER in low SNR scenarios remains an interesting challenge. In fact, it seems quite difficult to track the optimal estimator when the SNR is under 10 dB. Finally, the theoretical results were also successfully applied to the detection and management of misconvergence associated with single event upsets in satellite communications. Current work addresses the extension of the theory to complex modulations, multiuser communications, and low SNR situations. The influence of over/underestimation of the channel model order, as well as the issue of equalization delay estimation, are also currently under study.

DEMONSTRATION OF THEOREM 2
Due to space limitations, only a sketch of the demonstration is presented. The main background for the following analysis may be found in references [28,29]. The demonstration is divided into three steps. In the first one, an expression for the maximum absolute value of the coefficients v(n) (where n = 0, 1, 2, . . . , N + L − 2 and n ≠ d) is derived, whereas in the second step, an expression for the minimum absolute value of the coefficient v(d) is derived. In the third step, the previous results are used to demonstrate (12). The demonstration of (13) is not provided since it follows a similar procedure.
Step (1): Bounds for the coefficients v(n), n ≠ d

Applying the triangle inequality to the modulus of (A.1), we obtain (A.2). Bounds for S1 and S2 are now established. Suppose that 0 < d < L − 1, which is common practice in the equalization literature [7]. Beginning with (A.3) and taking into account that n = 0, 1, . . . , N + L − 2 with n ≠ d, we conclude that |h(n − d)| may take one of the values listed in (A.4). Labelling Sup_m{|h(m)|}, m = 1, 2, . . . , N − 1, as the maximum absolute value of the set of coefficients {|h(1)|, |h(2)|, . . . , |h(N − 1)|}, a maximal bound for S1 may be defined. Turning to S2, notice that the main goal of the theorem is to analyse the mathematical relationship between the coefficient c(d) and the other coefficients c(k), where k = 0, 1, . . . , L − 1 and k ≠ d. Label the maximum absolute value of these coefficients as Sup_q{|c(q)|}, where q = 0, 1, . . . , L − 1 and q ≠ d. Since Sup_q{|c(q)|} is a constant for any q and k, a bound for the sum S2 follows. Since n = 0, 1, 2, . . . , N + L − 2 with n ≠ d, and supposing that 0 < d < N − 1, we obtain (A.10). Notice that the choice of the maximum absolute value is not unique; the works [26,27] study an inequality closely related to expression (A.10). One possible maximum is obtained by considering the equality sign in (A.10), which is validated by the information-theoretic calculations in [27]. This maximum is the one chosen here.

Step (2): Bound for the coefficient v(d)

In practice [7], it is common to use an equalizer length higher than the channel length:

1 ≤ N < L, L ≥ 2 ⟹ (N + L − 2) ≥ 1.    (A.14)

Combining (A.13) and (A.14), the integer b in (A.11) obeys inequality (A.15). Again, we may argue that this is not the unique choice for the minimum value; this issue is discussed in detail in references [28,29], which provide the theoretical basis for the choice in (A.22).

Step (3): Final demonstration
It is supposed that the following condition holds by hypothesis: