On the complexity-performance trade-off in soft-decision decoding for unequal error protection block codes

Unequal error protection (UEP) codes provide selective levels of protection for different blocks of the information message. This study evaluates the effectiveness of two sub-optimum soft-decision decoding algorithms, namely generalized Chase-2 and weighted erasure decoding, for each protection class of UEP block codes. The performance of both algorithms is compared to that of the maximum-likelihood algorithm in order to quantify the performance loss incurred in each protection class by the less complex algorithms, and their complexities are evaluated according to the number of arithmetic operations performed at each decoding step. Finally, numerical results and examples are provided which establish a trade-off between performance and complexity for each protection class. The results of this study can be used to select appropriate UEP coding and decoding schemes in applications that demand low energy consumption.


Introduction
One of the main challenges in the design of battery-supplied wireless devices is the minimization of their energy consumption [1][2][3][4]. It is known that forward error correction (FEC) decoders are responsible for a large part of the energy consumption of such devices [5,6]. Since maximum-likelihood (ML) decoding is often infeasible due to its exponential complexity, it is of interest to investigate sub-optimum decoding techniques in search of less complex alternatives.
Concerning block codes, a class of sub-optimum algorithms that deserves attention comprises reliability-based soft-decision decoding techniques [7]. In this category, the Chase-2 and weighted erasure decoding (WED) algorithms are recognized for their ease of implementation and reduced complexity when compared to the ML algorithm. The performance of the Chase-2 decoding algorithms applied to Bose-Chaudhuri-Hocquenghem codes is analyzed in [8].

*Correspondence: dcunha@cin.ufpe.br. 2 Department of Computing Systems, Centro de Informática, Federal University of Pernambuco (CIn/UFPE), 50740-560, Recife-PE, Brazil. Full list of author information is available at the end of the article.
In a number of wireless protocols, the importance of different bits in the information sequence often varies, and certain blocks of this sequence need a higher protection level than others. This property is called unequal error protection (UEP) and can be obtained either by hierarchical modulation techniques [9,10] or by FEC schemes [11,12]. Such UEP methods have been applied to wireless and mobile computing applications [13][14][15], as well as to several video and image coding standards, such as set partitioning in hierarchical trees [16], ITU-T H.264 [17] and its extensions [18], and joint photographic experts group 2000 (JPEG 2000) [19]. Concerning UEP coding, the analysis of sub-optimum decoding algorithms applied to UEP block codes has not been considered in the literature.
In this study, the effectiveness of sub-optimum soft-decision decoding algorithms (the generalized Chase-2 algorithm [20] and the WED algorithm [21]) for each protection class of UEP block codes is evaluated using binary transmission over an additive white Gaussian noise (AWGN) channel. The performance of both algorithms is compared to that of the ML algorithm in order to evaluate the performance loss of each protection class under the less complex algorithms. We also analyze the arithmetic complexity of each algorithm when decoding a received sequence. In addition, the trade-off between performance and complexity of the algorithms is analyzed for each protection class. Based on this analysis, we discuss the choice of the decoder parameters with the best complexity-performance trade-off, such as the number of test patterns, the error-correcting capability of the binary decoder, and the number of quantization levels.
The remainder of this article is structured as follows: In Section 2, concepts related to UEP coding are described. The soft-decision decoding algorithms are defined in Section 3, while the analysis of their decoding complexity in terms of mathematical operations is presented in Section 4. Section 5 presents simulation results. A trade-off between performance and complexity for both decoding algorithms is established in this section. Finally, conclusions are drawn in Section 6.

UEP block codes
Consider a binary linear code C_j(n, k, d) in which n is the codeword length, k is the dimension of the code, and d is the minimum Hamming distance of C_j. The generator matrix of C_j is denoted by G_j. Let w(uG_j) be the Hamming weight of the codeword x = uG_j associated with the information vector u. The separation vector of C_j, s_j = [s_j(1), s_j(2), ..., s_j(k)], measures the UEP provided by the code C_j for ML decoding [22]. The ith position of s_j is given by [22]

s_j(i) = min{ w(uG_j) : u ∈ GF(2)^k, u_i = 1 }, i = 1, ..., k,

where GF(2) is the binary Galois field. The smallest element of s_j is the minimum Hamming distance of C_j. A code C_j is said to have equal error protection capability if all elements of s_j are equal; otherwise, C_j has the UEP property. The error-correcting capability of the code C_j is denoted by t*_j.

To illustrate these concepts, consider the linear block codes C_1(16, 5, 5) and C_2(25, 8, 5) with generator matrices G_1 and G_2 constructed using the method proposed in [23]. Their separation vectors are s_1 = [8, 8, 5, 5, 5] and s_2 = [12, 12, 5, 5, 5, 5, 5, 5], respectively. Thus, both codes are UEP codes with two distinct protection classes, denoted by cp_1 (higher protection class) and cp_2 (lower protection class).
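To make the separation vector concrete, the following brute-force sketch enumerates all 2^k information vectors u and records, for each position i, the minimum weight w(uG) over the vectors with u_i = 1. The toy generator matrix below is hypothetical and is not the G_1 or G_2 of this article; it is only small enough to enumerate by hand.

```python
from itertools import product

def separation_vector(G):
    """Brute-force separation vector of a binary linear code with generator
    matrix G: s[i] = min Hamming weight of codewords uG with u_i = 1."""
    k, n = len(G), len(G[0])
    s = [n + 1] * k
    for u in product([0, 1], repeat=k):
        # Encode u over GF(2): x = uG
        x = [0] * n
        for i, ui in enumerate(u):
            if ui:
                x = [a ^ b for a, b in zip(x, G[i])]
        w = sum(x)
        for i in range(k):
            if u[i]:
                s[i] = min(s[i], w)
    return s

# Hypothetical toy (6, 2) code: the first information bit is better protected
G = [[1, 1, 1, 1, 0, 0],
     [0, 0, 0, 1, 1, 1]]
print(separation_vector(G))  # → [4, 3]: two distinct protection classes
```

Since the elements of the result differ, this toy code has the UEP property in the sense defined above.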

Soft-decision decoding algorithms
Two decoding algorithms that deal with the least reliable positions of the received sequence, namely the generalized Chase-2 (GC-2) [20] and the WED [21] algorithms, are described in this section.

Generalized Chase-2 decoding algorithm
The GC-2 algorithm uses the sequence of real values observed at the output of the matched filter, r = [r_0, r_1, ..., r_{n-1}], and the binary sequence y obtained by hard quantization of r. For the AWGN channel, the real values of the sequence r yield the reliabilities α_i = |r_i|. Thus, the higher the value of α_i, the lower the probability that the corresponding symbol has been strongly affected by the noise.
Let p be the number of least reliable positions of the sequence r, i.e., the positions with the smallest values of α_i. The value of p determines the set of test patterns S_b = {b_i}, i = 0, ..., 2^p - 1, with cardinality |S_b| = 2^p. First, the GC-2 algorithm applies a binary decoder (with error-correcting capability t) to find an error pattern z associated with the sequence y_i = y ⊕ b_i, in which ⊕ denotes modulo-2 addition. If an error pattern z is obtained by the binary decoder,^a it is added to the test pattern b_i, resulting in the pattern z_i = z ⊕ b_i. After that, the analog weight W_α of the pattern z_i is computed as

W_α(z_i) = Σ_j α_j,

where the sum runs over the positions j in which z_i has a 1. If z is not found by binary decoding, the next test pattern b_i is selected. The objective of the GC-2 algorithm is to find the pattern z_i with minimum analog weight W_α and use it to estimate the transmitted codeword x as x = y ⊕ z_i. If no pattern z_i is selected (for all test patterns), then x = y. A detailed description of the GC-2 algorithm is found in [20,24], and a summary of its steps is presented in Table 1.

Table 1 Description of the steps of the GC-2 algorithm
Step Description
1 Obtain the sequence y by hard quantization of r, determine the p least reliable positions, and set i = 0
2 Select the test pattern b_i and compute the sequence y_i = y ⊕ b_i
3 Compute the syndrome of y_i and search the corresponding error pattern z
4 If z is found, then get the pattern z_i = z ⊕ b_i, compute its analog weight W_α, store the pattern z_i that has the minimum analog weight, and go to Step 5. Otherwise, go to Step 5
5 If there are still test patterns to generate, do i = i + 1, select b_i, and go to Step 2. Otherwise, go to Step 6
6 If a pattern z_i was stored, obtain the estimate x = y ⊕ z_i. Otherwise, x = y

Example 1. Consider the Hamming code C(7, 4, 3), whose error-correcting capability is equal to one. Assume that the codeword x = [1, 0, 0, 1, 0, 1, 1] is BPSK modulated and transmitted over the AWGN channel. According to the first step of the GC-2 algorithm, the sequence y = [1, 1, 0, 1, 1, 1, 1] is obtained by hard quantization of the received sequence r. Let us assume that p = 2, so the two least reliable positions are the second and the fifth ones. Thus, considering all the combinations of 0's and 1's in these two least reliable positions, we have four test patterns b_i in the set S_b. To obtain the sequence y_0, the test pattern b_0 = [0, 0, 0, 0, 0, 0, 0] is selected, resulting in y_0 = y ⊕ b_0 = [1, 1, 0, 1, 1, 1, 1]. After computing the syndrome associated with this y_0, we get the error pattern z = [0, 0, 1, 0, 0, 0, 0]. Since the error pattern z exists, the sequence z_0 = z ⊕ b_0 = [0, 0, 1, 0, 0, 0, 0] is obtained and its analog weight is W_α(z_0) = 0.8. Repeating this procedure with the other test patterns from S_b, the algorithm stores z_i = z_2 = [0, 1, 0, 0, 1, 0, 0] as the sequence with the minimum analog weight (W_α(z_2) = 0.15). Finally, the estimate x = y ⊕ z_2 = [1, 0, 0, 1, 0, 1, 1] is obtained, recovering the correct codeword.
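The steps of Table 1 can be sketched as below. This is an illustrative implementation under stated assumptions, not the exact setup of Example 1: it uses BPSK mapping 0 → +1, a hypothetical systematic Hamming(7, 4) generator matrix, a bounded-distance binary decoder realized by exhaustive codebook search (practical only for small codes), and its own received sequence.

```python
from itertools import product

def bounded_distance_error(y, codebook, t):
    """Return an error pattern z with y XOR z in the codebook and
    weight(z) <= t, or None if no codeword lies within distance t."""
    for x in codebook:
        z = [a ^ c for a, c in zip(y, x)]
        if sum(z) <= t:
            return z
    return None

def gc2_decode(r, codebook, t, p):
    """Sketch of the GC-2 decoder of Table 1 (BPSK mapping 0 -> +1)."""
    n = len(r)
    y = [1 if ri < 0 else 0 for ri in r]                 # Step 1: hard decisions
    alpha = [abs(ri) for ri in r]                        # reliabilities
    lrp = sorted(range(n), key=lambda i: alpha[i])[:p]   # p least reliable positions
    best, best_w = None, float("inf")
    for bits in product([0, 1], repeat=p):               # Steps 2-5: all 2**p test patterns
        b = [0] * n
        for pos, bit in zip(lrp, bits):
            b[pos] = bit
        yi = [a ^ c for a, c in zip(y, b)]
        z = bounded_distance_error(yi, codebook, t)      # Step 3: binary decoding
        if z is None:                                    # decoding failed: next pattern
            continue
        zi = [a ^ c for a, c in zip(z, b)]
        w = sum(a for a, zb in zip(alpha, zi) if zb)     # Step 4: analog weight W_alpha
        if w < best_w:
            best, best_w = zi, w
    return y if best is None else [a ^ c for a, c in zip(y, best)]  # Step 6

# Hypothetical systematic Hamming(7, 4) generator matrix (illustration only)
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
codebook = []
for u in product([0, 1], repeat=4):
    x = [0] * 7
    for i, ui in enumerate(u):
        if ui:
            x = [a ^ b for a, b in zip(x, G[i])]
    codebook.append(x)

# Codeword [1,0,1,1,0,1,0] received with two weak errors (positions 0 and 3)
r = [0.2, 0.9, -1.1, 0.3, 1.0, -0.8, 0.7]
print(gc2_decode(r, codebook, t=1, p=2))  # → [1, 0, 1, 1, 0, 1, 0]
```

Note that the two hard-decision errors exceed the binary decoder's capability t = 1, yet the test patterns over the two least reliable positions allow the decoder to recover the transmitted codeword, which is exactly the mechanism of Example 1.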

WED algorithm
The WED algorithm is based on the quantization of the sequence r into Q = 2^m regions that are uniformly spaced by the quantization step δ. Figure 1 illustrates the quantization regions (denoted by R_j^D, 0 ≤ j ≤ Q - 1) for Q = 8 (m = 3). The optimal value of δ, denoted by δ_op, that minimizes the bit error probability can be obtained algebraically [25] or through computer simulations.
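The binary tree mapping used later in the complexity analysis (Section 4) performs the quantization of a component r_i with exactly m threshold comparisons. The sketch below assumes, for illustration, that the region boundaries are consecutive multiples of δ centered on zero; the exact boundary layout of Figure 1 may differ.

```python
def quantize(ri, m, delta):
    """Map a real sample ri to one of Q = 2**m uniformly spaced regions
    via binary-tree search: exactly m threshold comparisons per sample.
    Assumed layout: regions j and j+1 meet at (j + 1 - Q/2) * delta."""
    Q = 1 << m
    lo, hi = 0, Q - 1
    for _ in range(m):
        mid = (lo + hi) // 2
        if ri >= (mid + 1 - Q // 2) * delta:  # one comparison per tree level
            lo = mid + 1
        else:
            hi = mid
    return lo

# For Q = 8 (m = 3) and delta = 1.0, samples fall into regions 0..7
print([quantize(x, 3, 1.0) for x in (-10.0, -0.5, 0.5, 10.0)])  # → [0, 3, 4, 7]
```

A linear scan over the Q - 1 boundaries would need up to Q - 1 comparisons, so the tree search is what makes the per-component cost m = log_2 Q in Table 4.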
Given r and Q, two sequences (v and q) are obtained, from which a binary matrix A is constructed. Next, a matrix A′ having the same dimensions as A is obtained from the binary decoding of the rows of A. The syndrome of each row of A is computed in order to find its associated error pattern. If an error pattern is found, it is added to the row of A to generate the corresponding row of A′. Otherwise, the row of A′ is equal to the row of A.
We also define the vector f, where each component f_ℓ is the number of positions in which the ℓth row of A′ differs from the ℓth row of A. Using f, the reliability R_ℓ of the ℓth row of A′ is computed as in [21]. In the WED algorithm proposed in [21], the error-correcting capability of the binary decoder is t = t* = ⌊(d - 1)/2⌋. To allow the use of an arbitrary value of t, we propose a new reliability R_ℓ in which t* is replaced by the actual error-correcting capability t of the binary decoder. It is assumed that R_ℓ = 0 if the binary decoder cannot find the error pattern associated with the syndrome of the ℓth row of A. This convention reduces the reliability of sequences in which a high number of errors has made binary decoding impossible, and favors the candidate sequences with fewer errors.
Let S_0^i denote the set of indices of the rows of A′ containing the bit 0 in the ith column, and S_1^i the corresponding set for the bit 1. The ith bit is decoded as 0 if the sum of the reliabilities R_ℓ over ℓ ∈ S_0^i exceeds the sum over ℓ ∈ S_1^i, and as 1 in the opposite case. In the event of a tie, the ith bit is obtained by hard-decision decoding of the component r_i.
A detailed description of the WED algorithm is found in [21], and a summary of its steps is presented in Table 2. For a better understanding of this algorithm, we consider in the following example the same code, transmitted codeword, and received sequence of Example 1.

Table 2 Description of the steps of the WED algorithm
Step Description
1 Quantize each component r_i of r into one of the Q regions, obtaining the sequences v and q
2 Form the binary matrix A from v and q
3 Compute the syndrome of each row of A and apply binary decoding to obtain the matrix A′
4 Compute the vector f and the reliability R_ℓ of each row of A′
5 Decode each bit i by comparing the summed reliabilities of the sets S_0^i and S_1^i, resolving ties by the hard decision on r_i
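The final weighted-vote step of the WED algorithm can be sketched as follows. This is a minimal illustration of the decision rule only: the matrix A′ and the reliabilities R_ℓ are assumed to be already available from the earlier steps, and the toy values below are hypothetical.

```python
def wed_bit_decisions(A_prime, R, r):
    """Decide each bit by comparing the summed reliabilities of the rows
    of A' voting for 0 and for 1; a tie falls back to the hard decision
    on r_i (BPSK mapping 0 -> +1 assumed)."""
    n = len(r)
    decoded = []
    for i in range(n):
        w0 = sum(R[l] for l, row in enumerate(A_prime) if row[i] == 0)
        w1 = sum(R[l] for l, row in enumerate(A_prime) if row[i] == 1)
        if w0 > w1:
            decoded.append(0)
        elif w1 > w0:
            decoded.append(1)
        else:
            decoded.append(1 if r[i] < 0 else 0)  # tie: hard decision on r_i
    return decoded

# Toy decoded rows, row reliabilities, and received samples (hypothetical)
A_prime = [[0, 1, 0], [0, 1, 1], [1, 1, 0]]
R = [0.5, 0.3, 0.1]
r = [0.4, -0.2, 0.9]
print(wed_bit_decisions(A_prime, R, r))  # → [0, 1, 0]
```

In the toy run, the third bit of the low-reliability second row is outvoted by the two rows that agree on 0, which is the intended effect of weighting rows by R_ℓ.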

Arithmetic complexity of the GC-2 and WED algorithms
The complexity of both algorithms considered in this article is evaluated according to the number of arithmetic operations performed at each decoding step.^b Let N_s, N_g, N_m, and N_c denote the number of additions, modulo-2 additions, multiplications, and comparisons, respectively. Table 3 indicates the number of operations performed at each step of the GC-2 algorithm, as described in Table 1, for each decoded sequence. In Step 3, the multiplications and modulo-2 additions correspond to the syndrome computation. Also in Step 3, we assume that there are no arithmetic operations associated with the search for an error pattern z (a lookup table may be used for this purpose). Operations related to Steps 1, 5, and 6 are omitted because they either are not performed for each test pattern or do not require arithmetic operations; thus, they represent a very small fraction of the total operations.
It is noteworthy that the operations in Step 4 depend on the result obtained in Step 3, i.e., on the success of the binary decoder in the search for an error pattern z associated with the sequence y_i. Thus, it is necessary to estimate the average number of operations performed in Step 4. To this end, we define the relative frequency of computing W_α as f_A = N_W/2^p, in which N_W is the number of times that the analog weight W_α is computed in the main loop of the algorithm. This value is evaluated via computer simulations in the next section (see also [26]).
In the case of the WED algorithm, once Q and δ are fixed, the implementation follows the steps described in Table 2. The computation of the sequence v depends only on Q and does not need to be executed for each received sequence. Therefore, these operations are not considered in Table 4, which lists the number of operations required to implement the WED algorithm for each decoded sequence. In Step 1, the binary tree mapping [27] is considered, in which, for Q regions, m comparisons are needed to quantize a component r_i. Step 2 is omitted because it does not require arithmetic operations. Finally, in Step 5, depending on the sequence being decoded, either (m - 1) or (m - 2) additions may be necessary to perform the comparison between the summed reliabilities.

Table 3 Number of mathematical operations performed in the GC-2 algorithm for each decoded sequence
Step N_s N_g N_m N_c
2 - 2^p n - -
3 - 2^p n(n - k) 2^p n(n - k) -
4 f_A 2^p n f_A 2^p n - f_A 2^p

Table 4 Number of mathematical operations performed in the WED algorithm for each decoded sequence
Step Operations
1 nm comparisons (m comparisons per component r_i, using the binary tree mapping)
3 Multiplications and modulo-2 additions for the syndrome computation of each row of A
4 Additions for the computation of the vector f and the reliabilities R_ℓ
5 n(m - 1) additions in the worst case (considering all n positions), plus the final comparisons

Numerical results
The performance of three decoding algorithms (ML, GC-2, and WED) is evaluated via computer simulations for the two UEP codes defined in Section 2, using binary transmission over the AWGN channel. Various configurations of the GC-2 and WED algorithms are considered by changing their parameters (t and p for GC-2; t and Q for WED), in order to compare their performance to that of the ML algorithm for each protection class. Using these results together with the operation counts in Tables 3 and 4, a trade-off between performance and complexity for both decoding algorithms is also established. In the following sections, the GC-2 and WED algorithms are denoted by GC-2(t, p) and WED(t, Q), respectively.

GC-2 decoding algorithm
Figure 2 shows the curves of bit error probability (P_b) versus signal-to-noise ratio (SNR) E_b/N_0, in which E_b is the energy per information bit and N_0 is the noise power spectral density, for the GC-2(2, 2) and GC-2(3, 4) algorithms applied to both classes of the UEP code C_1. For this code, the maximum value of the error-correcting capability of the binary decoder (t) is assumed equal to 3, and the maximum value of p is such that the cardinality of S_b is always lower than that of the search set of the ML algorithm (2^k codewords).
For the GC-2(2, 2) algorithm, we observe that there is virtually no performance difference between the classes cp 1 and cp 2 . In addition, considering P b = 10 −4 , the SNR difference compared to the ML algorithm is approximately 2 and 1.1 dB for the classes cp 1 and cp 2 , respectively. For the GC-2(3, 4) algorithm, the SNR difference to the ML algorithm is 0.1 dB (cp 1 ) and 0.03 dB (cp 2 ).
To assess the complexity of the GC-2 algorithm, it is necessary to evaluate f_A, as mentioned in Section 4. Figure 3 illustrates the values of f_A as a function of E_b/N_0 for the GC-2(2, 2), GC-2(2, 7), GC-2(3, 2), GC-2(3, 7), GC-2(6, 2), and GC-2(6, 7) algorithms applied to the UEP code C_2. For p = 2 and considering t = 2, t = 3, and t = 6, f_A reaches its maximum value (W_α is computed for all test patterns b_i) at E_b/N_0 = 9.5, 8, and 4 dB, respectively. This reduction in the required SNR occurs because an error pattern z becomes more likely to be found as the error-correcting capability of the binary decoder increases. We observe that for t = 2 and t = 3 (p = 7), there are test patterns b_i that do not produce computations of W_α (f_A < 1), even in regions of high SNR (E_b/N_0 > 7.5 dB). For example, considering the GC-2(2, 7) algorithm and E_b/N_0 > 7.5 dB, it is very probable that the bit inversions resulting from the addition of the test patterns b_i cause errors in the sequence y. As the binary decoder used in this algorithm is able to correct only 2 errors, 31 computations of W_α (N_W = 31) occur, corresponding to the test patterns of low weight, resulting in f_A = 31/128 ≈ 0.242.
Finally, the compromise between performance and complexity of the GC-2 algorithms is analyzed in terms of the SNR difference with respect to the ML algorithm for the class cp_i (at P_b = 10^-4), denoted Δ_i (dB), and the number of arithmetic operations executed by the algorithm, defined as the 4-tuple MO = [N_s; N_g; N_m; N_c]. For the estimation of MO, it is necessary to determine, in Step 4 of Table 3, the value of f_A used to weight the number of operations. Given the GC-2 algorithm and the protection class cp_i, i = 1, 2, the SNR value corresponding to P_b = 10^-4 is determined; with this SNR, the corresponding value of f_A can be identified (see Figure 3). Tables 5 and 6 summarize the complexity-performance trade-off for various configurations of the GC-2 algorithm applied to the UEP codes C_1 and C_2, respectively. At each intersection of a row (t) with a column (p), the values of Δ_i (dB) and MO required to achieve P_b = 10^-4 are shown for each protection class. For both codes, these results indicate that increasing p (for a fixed t) provides better performance but increases the complexity, since each operation count in Table 3 grows exponentially with p. On the other hand, increasing t (for a fixed p) also results in improved performance, but with a smaller increase in complexity. For example, for the class cp_1 of the code C_1, the entries of Table 5 show that increasing the error-correcting capability of the binary decoder is more advantageous than increasing the number of test patterns of the GC-2 algorithm. It is also possible to observe (analyzing Tables 5 and 6) that, in most cases, Δ_2 is smaller than Δ_1, indicating that the performance achieved by the class cp_2 is closer to the ML performance than that of the class cp_1.

WED algorithm
The WED(t, Q) algorithm uses the reliability R_ℓ proposed in Section 3. The numbers of quantization regions considered are Q = 4, 16, and 1024. Table 7 shows the optimal value of the quantization step, δ_op (in the sense of minimizing P_b), for the WED(2, Q) algorithm (class cp_1) applied to the codes C_1 and C_2. In general, increasing the number of quantization regions decreases the value of δ_op, further reducing the spacing between adjacent regions. In addition, we can observe that the higher the value of Q, the lower the variation of δ_op over the range of SNR considered. This behavior indicates that, as Q increases, the optimal value of the quantization step becomes approximately constant.

Table 5 Results of performance and arithmetic complexity of the GC-2(t, p) algorithms applied to the UEP code C_1(16, 5, 5)

Figure 4 shows the curves of P_b versus E_b/N_0 for the WED(2, 4) and WED(3, 16) algorithms applied to both classes of the UEP code C_1. Similarly to what was observed for the GC-2(2, 2) algorithm, there is no performance difference between the two classes for t = 2, while for t = 3, the SNR difference to the ML algorithm is 0.9 dB (cp_1) and 1.2 dB (cp_2). Table 8 summarizes the complexity-performance trade-off for various configurations of the WED algorithm applied to the UEP codes C_1 and C_2. At each intersection of a row (t and cp_i) with a column (Q and a code C_j), the value of Δ_i (dB) for P_b = 10^-4 is given. For the code C_1, the error-correcting capability of the WED algorithm is t = 2, 3, and 4, while for the code C_2, t = 2, 3, 4, 5, and 6, as was considered for the GC-2 algorithm. Thus, considering the complexity-performance trade-off, it is more advantageous to increase t, as in the GC-2(t, p) algorithm, than to increase the number of quantization regions Q.
Finally, we compare both soft-decision decoding algorithms for a specific protection class, namely the higher protection one (cp_1). To do this, we define the binary decoding ratio

γ = 2^p / log_2 Q.

The parameters p and Q are associated with the numbers of binary decodings that the GC-2(t, p) and WED(t, Q) algorithms, respectively, execute: when decoding a received sequence, the GC-2(t, p) algorithm performs 2^p binary decodings, while the WED(t, Q) algorithm performs log_2 Q of them. Thus, to make a fair comparison of the algorithms, we choose configurations where γ ≅ 1. In this case, the WED algorithm can offer a performance closer to the ML curve (for the higher protection class), but at the price of increased complexity. For γ = 1 and the code C_2, this can be seen by comparing the GC-2(5, 2) and WED(5, 16) algorithms (see Tables 6 and 8). It can also be observed in Table 8 that the performance of the WED algorithm degrades when t is high. The authors conjecture that this behavior is due to some limitation of the adopted reliability R_ℓ.
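The ratio γ follows directly from the decoding counts stated above (2^p binary decodings for GC-2, log_2 Q for WED); a trivial sketch:

```python
from math import log2

def binary_decoding_ratio(p, Q):
    """gamma = 2**p / log2(Q): ratio between the numbers of binary
    decodings performed per received sequence by GC-2(t, p) and WED(t, Q)."""
    return (2 ** p) / log2(Q)

# GC-2(t, 2) performs 4 binary decodings; WED(t, 16) also performs 4,
# so the two configurations are directly comparable
print(binary_decoding_ratio(2, 16))  # → 1.0
```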

Conclusions
In this study, the effectiveness of two sub-optimum soft-decision decoding algorithms (the GC-2(t, p) and WED(t, Q) algorithms) was investigated for each protection class of UEP block codes using binary transmission over an AWGN channel. The performance of both algorithms was compared to that of the ML algorithm. The behavior of the GC-2 algorithm in estimating the analog weight (Step 4) was investigated as its parameters (t and p) were varied, while the WED algorithm was examined with a newly proposed reliability as its parameters (t and Q) were varied. To estimate the complexity of each algorithm, the number of arithmetic operations per decoded sequence was computed. An analysis of the trade-off between performance and complexity of the algorithms was performed for each protection class under various configuration options. These analyses led us to conclude that, when choosing the parameters of the algorithms, increasing the error-correcting capability of the binary decoder (t) was more advantageous in both cases. In addition, by choosing the values of p and Q such that γ is close to one (for a fixed value of t), it was verified that the GC-2 algorithm is less complex, while the WED algorithm can offer (depending on the code adopted) a performance closer to the ML one.
Endnotes
a The error pattern z is the one associated with the syndrome of the sequence y_i.
b An assessment of the complexity of decoding algorithms should take into consideration additional factors besides the arithmetic operations (such as memory reads and writes). Since these factors are architecture dependent, we omit their contribution in this article.

Competing interests
The authors declare that they have no competing interests.

Author details
1 Third Center for Integrated Air Defense and Air Traffic Control (CINDACTA III), 51250-020, Recife-PE, Brazil. 2 Department of Computing Systems, Centro de Informática, Federal University of Pernambuco (CIn/UFPE), 50740-560, Recife-PE, Brazil.