
On the complexity-performance trade-off in soft-decision decoding for unequal error protection block codes

Abstract

Unequal error protection (UEP) codes provide a selective level of protection for different blocks of the information message. The effectiveness of two sub-optimum soft-decision decoding algorithms, namely generalized Chase-2 and weighted erasure decoding, is evaluated in this study for each protection class of UEP block codes. The performance of both algorithms is compared to that of the maximum likelihood algorithm in order to quantify the performance loss incurred by each protection class under the less complex algorithms, and their complexities are evaluated according to the number of arithmetic operations performed at each decoding step. Finally, numerical results and examples are provided which establish a trade-off between performance and complexity for each protection class. The results of this study can be used to select appropriate UEP coding and decoding schemes in applications that demand low energy consumption.

1 Introduction

One of the main challenges in the design of battery-supplied wireless devices is the minimization of their energy consumption [1–4]. It is known that forward error correction (FEC) decoders are responsible for a large part of the energy consumption of such devices [5, 6]. Since maximum likelihood (ML) decoding is often infeasible due to its exponential complexity, it is of interest to investigate sub-optimum decoding techniques in search of less complex alternatives.

Concerning block codes, a class of sub-optimum algorithms that deserves attention comprises reliability-based soft-decision decoding techniques [7]. In this category, the Chase-2 and weighted erasure decoding (WED) algorithms are recognized for their ease of implementation and reduced complexity when compared to the ML algorithm. The performance of the Chase-2 decoding algorithm applied to Bose–Chaudhuri–Hocquenghem codes is analyzed in [8].

In a number of wireless protocols, the importance of different bits in the information sequence often varies and certain blocks of this sequence need a higher protection level than other blocks. This property is called unequal error protection (UEP) and can be obtained either by hierarchical modulation techniques [9, 10] or by FEC schemes [11, 12]. Such UEP methods have been applied to wireless and mobile computing applications [13–15], as well as to several video and image coding standards such as set partitioning in hierarchical trees [16], ITU-T H.264 [17] and its extensions [18], and Joint Photographic Experts Group 2000 (JPEG 2000) [19]. Concerning UEP coding, the analysis of sub-optimum decoding algorithms applied to UEP block codes has not been considered in the literature.

In this study, the effectiveness of sub-optimum soft-decision decoding algorithms (the generalized Chase-2 algorithm [20] and the WED algorithm [21]) for each protection class of UEP block codes is evaluated using binary transmission over an additive white Gaussian noise (AWGN) channel. The performance of both algorithms is compared to that of the ML algorithm in order to evaluate the performance loss of each protection class under the less complex algorithms. We also analyze the arithmetic complexity of each algorithm when decoding a received sequence. In addition, the trade-off between performance and complexity of the algorithms is analyzed for each protection class. Based on this analysis, we discuss the choice of the decoder parameters with the best complexity-performance trade-off, such as the number of test patterns, the error-correcting capability of the binary decoder, and the number of quantization levels.

The remainder of this article is structured as follows: In Section 2, concepts related to UEP coding are described. The soft-decision decoding algorithms are defined in Section 3, while the analysis of their decoding complexity in terms of mathematical operations is presented in Section 4. Section 5 presents simulation results. A trade-off between performance and complexity for both decoding algorithms is established in this section. Finally, conclusions are drawn in Section 6.

2 UEP block codes

Consider a binary linear code C_j(n,k,d), in which n is the codeword length, k is the dimension of the code, and d is the minimum Hamming distance of C_j. The generator matrix of C_j is denoted by G_j. Let w(uG_j) denote the Hamming weight of the codeword x = uG_j associated with the information vector u. The separation vector of C_j, s_j = [s_j^0, …, s_j^i, …, s_j^(k−1)], measures the UEP provided by the code C_j under ML decoding [22]. The i-th position of s_j is given by [22]

s_j^i = min{ w(uG_j) : u ∈ GF(2)^k, u_i ≠ 0 },  0 ≤ i ≤ k − 1,
(1)

where GF(2) is the binary Galois field. The smallest element of s_j is the minimum Hamming distance of C_j. A code C_j is said to have equal error protection capability if all elements of s_j are equal; otherwise, C_j has the UEP property. The error-correcting capability of the code C_j is denoted by t_j.

To illustrate these concepts, consider the linear block codes C_1(16,5,5) and C_2(25,8,5) with generator matrices G_1 and G_2 (calculated using the method proposed in [23]) given by

G_1 =
[ 0 0 0 1 1 1 1 1 1 0 0 0 0 1 0 1 ]
[ 0 0 0 1 1 1 0 0 0 1 1 1 0 0 1 1 ]
[ 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 ]
[ 0 1 0 0 1 0 0 1 0 0 1 0 1 0 0 0 ]
[ 0 0 1 0 0 1 0 0 1 0 0 1 1 0 0 0 ]
(2)
G_2 =
[ 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 0 ]
[ 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 ]
[ 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 ]
[ 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 ]
[ 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 1 ]
[ 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 1 ]
[ 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 1 ]
(3)

Their separation vectors are s_1 = [8,8,5,5,5] and s_2 = [12,12,5,5,5,5,5,5], respectively. Thus, both codes are UEP codes with two distinct protection classes, denoted by cp1 (higher protection class) and cp2 (lower protection class).
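For short codes, (1) can be evaluated by an exhaustive search over all 2^k information vectors. The sketch below (an illustration, not part of the original study) verifies s_1 for the matrix G_1 above, assuming the row order of G_1 as printed:

```python
import itertools

# Generator matrix G_1 of the UEP code C_1(16,5,5), rows as listed in (2).
G1 = [
    [0,0,0,1,1,1,1,1,1,0,0,0,0,1,0,1],
    [0,0,0,1,1,1,0,0,0,1,1,1,0,0,1,1],
    [1,0,0,1,0,0,1,0,0,1,0,0,1,0,0,0],
    [0,1,0,0,1,0,0,1,0,0,1,0,1,0,0,0],
    [0,0,1,0,0,1,0,0,1,0,0,1,1,0,0,0],
]

def encode(u, G):
    """Codeword x = uG over GF(2)."""
    n = len(G[0])
    return [sum(u[i] * G[i][j] for i in range(len(G))) % 2 for j in range(n)]

def separation_vector(G):
    """s^i = min Hamming weight of uG over all u with u_i != 0, as in (1)."""
    k = len(G)
    s = [None] * k
    for u in itertools.product([0, 1], repeat=k):
        if not any(u):
            continue
        w = sum(encode(u, G))
        for i in range(k):
            if u[i] and (s[i] is None or w < s[i]):
                s[i] = w
    return s

print(separation_vector(G1))  # → [8, 8, 5, 5, 5]
```

The search is exponential in k, which mirrors why ML decoding itself is infeasible for long codes; here it serves only to check the stated separation vector.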

3 Soft-decision decoding algorithms

Two decoding algorithms that deal with the least reliable positions of the received sequence, namely the generalized Chase-2 (GC-2) [20] and the WED [21] algorithms, are described in this section.

3.1 Generalized Chase-2 decoding algorithm

The GC-2 algorithm uses the sequence of real values observed at the output of the matched filter, r = [r_0, r_1, …, r_(n−1)], and the binary sequence y obtained by hard quantization of r. For the AWGN channel, the real values of the sequence r correspond to the reliabilities α_i = |r_i|. Thus, the higher the value of α_i, the lower the probability that the corresponding symbol has been strongly affected by the noise.

Let p be the number of least reliable positions of the sequence r, i.e., the positions that have the smallest values of α_i. The value of p determines the set of test patterns S_b = {b_i}, i = 0, …, 2^p − 1, with cardinality |S_b| = 2^p. At first, the GC-2 algorithm applies a binary decoder (with error-correcting capability t) to find an error pattern z associated with the sequence y^i = y ⊕ b_i, in which ⊕ represents modulo-2 addition. If an error pattern z is obtained by the binary decoder,^a it is added to the test pattern b_i, resulting in the pattern z^i = z ⊕ b_i. After that, the analog weight W_α of the pattern z^i is computed according to

W_α(z^i) = Σ_{k=0}^{n−1} α_k z_k^i.
(4)

If z is not found by binary decoding, the next test pattern b_i is selected. The objective of the GC-2 algorithm is to find the pattern z^i with minimum analog weight W_α and to estimate the transmitted codeword x as x̂ = y ⊕ z^i. If no pattern z^i is selected (i.e., binary decoding fails for all test patterns), then x̂ = y.

A detailed description of the GC-2 algorithm is found in [20, 24] and a summary of its steps is presented in Table 1.

Table 1 Description of the steps of the GC-2 algorithm

For a better understanding of the GC-2 algorithm, we consider the following example.

Example 1

Consider the Hamming code C(7,4,3) whose error-correcting capability is equal to one. Assume that the codeword x = [1,0,0,1,0,1,1] is BPSK modulated and is transmitted over the AWGN channel. Suppose that the received sequence is r = [1.5,0.05,−0.8,2.2,0.1,1.2,0.3].

According to the first step of the GC-2 algorithm, the sequence y = [1,1,0,1,1,1,1] is obtained by hard quantization of r. Let us assume that p = 2, so the two least reliable positions are the second and the fifth ones. Thus, considering all combinations of 0's and 1's in these two least reliable positions, we have four test patterns b_i according to the set

S_b = {b_0, b_1, b_2, b_3} = {[0000000], [0100000], [0000100], [0100100]}.

To obtain the sequence y^0, the test pattern b_0 is selected, resulting in y^0 = y ⊕ b_0 = [1,1,0,1,1,1,1]. After computing the syndrome associated with this y^0, we get the error pattern z = [0,0,1,0,0,0,0]. Since the error pattern z exists, the sequence z^0 = z ⊕ b_0 = [0,0,1,0,0,0,0] is obtained and its analog weight W_α(z^0) is 0.8, according to (4). Repeating this procedure with the other test patterns from S_b, the algorithm stores z^i = z^2 = [0,1,0,0,1,0,0] as the sequence with the minimum analog weight (W_α(z^2) = 0.15). Finally, the estimate x̂ = y ⊕ z^2 = [1,0,0,1,0,1,1] is obtained, characterizing the correct codeword.
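The steps of Example 1 can be reproduced with a short script. The sketch below is illustrative only: it assumes the cyclic Hamming (7,4) code generated by g(x) = 1 + x + x^3 (the paper only states C(7,4,3); this particular code does contain the codeword x above) and replaces syndrome decoding by an exhaustive table lookup, which the paper itself mentions as an option.

```python
from itertools import product

# Codebook of the cyclic Hamming(7,4) code with g(x) = 1 + x + x^3
# (bit i of a codeword is the coefficient of x^i). Assumed for illustration.
G = [1, 1, 0, 1]  # coefficients of g(x)
def encode(u):
    c = [0] * 7
    for i, ui in enumerate(u):          # u(x) * g(x) over GF(2)
        for j, gj in enumerate(G):
            c[i + j] ^= ui & gj
    return c
CODEBOOK = [encode(u) for u in product([0, 1], repeat=4)]

def binary_decode(word, t=1):
    """Return the error pattern z if some codeword lies within distance t."""
    for c in CODEBOOK:
        z = [a ^ b for a, b in zip(word, c)]
        if sum(z) <= t:
            return z
    return None  # decoding failure

def gc2(r, p=2, t=1):
    y = [1 if ri > 0 else 0 for ri in r]              # hard decisions
    alpha = [abs(ri) for ri in r]                     # reliabilities
    lrp = sorted(range(len(r)), key=lambda i: alpha[i])[:p]
    best, best_w = None, float("inf")
    for bits in product([0, 1], repeat=p):            # all 2^p test patterns
        b = [0] * len(r)
        for pos, bit in zip(lrp, bits):
            b[pos] = bit
        z = binary_decode([yi ^ bi for yi, bi in zip(y, b)], t)
        if z is None:
            continue                                  # failure: next pattern
        zi = [za ^ ba for za, ba in zip(z, b)]
        w = sum(a * zk for a, zk in zip(alpha, zi))   # analog weight (4)
        if w < best_w:
            best, best_w = zi, w
    if best is None:
        return y, None                                # no pattern selected
    return [yi ^ zi for yi, zi in zip(y, best)], best_w

x_hat, w = gc2([1.5, 0.05, -0.8, 2.2, 0.1, 1.2, 0.3])
print(x_hat, w)  # x_hat = [1, 0, 0, 1, 0, 1, 1], w ≈ 0.15 (as in Example 1)
```

Note that several test patterns attain the same minimum weight 0.15 here; any of them yields the same estimate x̂.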

3.2 WED Algorithm

The WED algorithm is based on the quantization of the sequence r into Q = 2^m regions that are uniformly spaced by the quantization step δ. Figure 1 illustrates the quantization regions (denoted by R_D_j, 0 ≤ j ≤ Q − 1) for Q = 8 (m = 3). The optimal value of δ, denoted by δ_op, that minimizes the bit error probability can be obtained algebraically [25] or through computer simulations.

Figure 1. Distribution of the quantization regions for Q = 8.

Given r and Q, two sequences (v and q) are obtained. First, consider the sequence v = [v_0, …, v_ℓ, …, v_(m−1)], where each component v_ℓ is given by

v_ℓ = 2^(m−1−ℓ) / (Q − 1),  0 ≤ ℓ ≤ m − 1.
(5)

The Q-ary sequence q = [q_0, q_1, …, q_i, …, q_(n−1)], q_i ∈ {0, …, Q−1}, is defined such that q_i = j if r_i ∈ R_D_j. Then, a matrix A of dimensions m × n is determined such that the i-th column of A is the binary representation of q_i.

Next, a matrix A′ with the same dimensions as A is obtained from the binary decoding of the rows of A. The syndrome of each row of A is computed in order to find its associated error pattern. If an error pattern is found, it is added to the row of A to generate the corresponding row of A′. Otherwise, the row of A′ is equal to the row of A.

We also define the vector f = [f_0, …, f_ℓ, …, f_(m−1)], where each component f_ℓ is the number of positions in which the ℓ-th row of A′ differs from the ℓ-th row of A. Using f, the reliability R_ℓ of the ℓ-th row of A′ is computed as [21]

R_ℓ = max{ 0, d − 2f_ℓ }.
(6)

In the WED algorithm proposed in [21], the error-correcting capability of the binary decoder is t = (d − 1)/2. To allow the use of an arbitrary value of t, we propose a new reliability R_ℓ given by

R_ℓ = max{ 0, 2t + 1 − 2f_ℓ }.
(7)

It is assumed that R_ℓ = 0 if the binary decoder cannot find the error pattern associated with the syndrome of the ℓ-th row of A. This assumption is intended to reduce the reliability of sequences in which the large number of errors has made binary decoding impossible; in this way, candidate sequences with fewer errors are favored.

Let S_0^i correspond to the set of indices ℓ of the rows of A′ containing the bit 0 in the i-th column, and S_1^i to the set of indices for the presence of the bit 1 in the i-th column. The i-th bit is decoded as 0 if

Σ_{ℓ ∈ S_0^i} R_ℓ v_ℓ > Σ_{ℓ ∈ S_1^i} R_ℓ v_ℓ

or as 1 if

Σ_{ℓ ∈ S_0^i} R_ℓ v_ℓ < Σ_{ℓ ∈ S_1^i} R_ℓ v_ℓ.

If the two sums are equal, the i-th bit is obtained by hard-decision decoding of the component r_i.

A detailed description of the WED algorithm is found in [21], and a summary of its steps is presented in Table 2. For a better understanding of this algorithm, we consider in the following example the same code, transmitted codeword, and received sequence of Example 1.

Table 2 Description of the steps of the WED algorithm

Example 2

Assume the mapping of r into four quantization regions (Q = 4) with δ = 0.2. According to the first step of the WED algorithm, the sequences q = [3,2,0,3,2,3,3] and v = [0.666,0.333] are obtained. Given q, the matrix A is obtained as

A =
[ 1 1 0 1 1 1 1 ]
[ 1 0 0 1 0 1 1 ].
(8)

Next, applying a binary decoding (with t = 1) to each row of A, we obtain the matrix A’ as

A′ =
[ 1 1 1 1 1 1 1 ]
[ 1 0 0 1 0 1 1 ].
(9)

From A and A′, we obtain f = [f_0, f_1] = [1,0]. Assuming t = 1, the reliabilities of the rows of A′ are R_0 = 1 and R_1 = 3. For the first column of A′, we have S_0^0 = {} and S_1^0 = {0,1}, resulting in x̂_0 = 1. For the second column of A′, S_0^1 = {1} and S_1^1 = {0}. Since R_1 v_1 > R_0 v_0 (1 > 0.666), we have x̂_1 = 0. Continuing with the decoding, the estimate x̂ = [1,0,0,1,0,1,1] is obtained. This is the correct codeword, as was also obtained with the GC-2 algorithm.
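Example 2 can likewise be scripted. The sketch below is illustrative: it reuses a table-lookup decoder for the cyclic Hamming (7,4) code with g(x) = 1 + x + x^3 (the same assumption as before), and infers the uniform quantizer boundaries from the q sequence given in the example (regions of width δ centered around zero).

```python
from itertools import product
from math import floor

# Table-lookup binary decoder for the cyclic Hamming(7,4) code with
# g(x) = 1 + x + x^3 (assumed; the paper only states C(7,4,3)).
def _encode(u):
    c = [0] * 7
    for i, ui in enumerate(u):
        for j, gj in enumerate([1, 1, 0, 1]):
            c[i + j] ^= ui & gj
    return c
CODEBOOK = [_encode(u) for u in product([0, 1], repeat=4)]

def binary_decode(word, t):
    for c in CODEBOOK:
        if sum(a ^ b for a, b in zip(word, c)) <= t:
            return c
    return None  # decoding failure

def wed(r, Q, delta, t):
    m = Q.bit_length() - 1                               # Q = 2^m
    n = len(r)
    # Step 1: uniform quantization into regions R_D0 .. R_D(Q-1)
    q = [min(Q - 1, max(0, floor(ri / delta) + Q // 2)) for ri in r]
    v = [2 ** (m - 1 - l) / (Q - 1) for l in range(m)]   # eq. (5)
    # Matrix A: i-th column is the m-bit binary representation of q_i (MSB first)
    A = [[(qi >> (m - 1 - l)) & 1 for qi in q] for l in range(m)]
    rows, R = [], []
    for row in A:
        c = binary_decode(row, t)
        if c is None:
            rows.append(row)
            R.append(0)                                  # failure: zero reliability
        else:
            f = sum(a ^ b for a, b in zip(row, c))
            rows.append(c)
            R.append(max(0, 2 * t + 1 - 2 * f))          # eq. (7)
    # Weighted vote per bit position
    x_hat = []
    for i in range(n):
        s0 = sum(R[l] * v[l] for l in range(m) if rows[l][i] == 0)
        s1 = sum(R[l] * v[l] for l in range(m) if rows[l][i] == 1)
        if s0 > s1:
            x_hat.append(0)
        elif s1 > s0:
            x_hat.append(1)
        else:
            x_hat.append(1 if r[i] > 0 else 0)           # tie: hard decision on r_i
    return x_hat

print(wed([1.5, 0.05, -0.8, 2.2, 0.1, 1.2, 0.3], Q=4, delta=0.2, t=1))
```

With these inputs, the intermediate quantities match the example (q = [3,2,0,3,2,3,3], f = [1,0], R = [1,3]) and the correct codeword is recovered.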

4 Arithmetic complexity of the GC-2 and WED algorithms

The complexity of both algorithms considered in this article is evaluated according to the number of arithmetic operations performed at each decoding step.^b Let N_s, N_g, N_m, and N_c denote the numbers of additions, modulo-2 additions, multiplications, and comparisons, respectively.

Table 3 indicates the number of operations performed at each step of the GC-2 algorithm, as described in Table 1, for each decoded sequence. In Step 3, multiplications and modulo-2 additions correspond to the syndrome computation. Also in Step 3, we assume that there are no arithmetic operations associated with the search for an error pattern z (a lookup table may be used for this purpose). Operations related to Steps 1, 5, and 6 are omitted because they are either not performed for each test pattern or do not require arithmetic operations; thus, they represent a very small percentage of the total operations.

Table 3 Number of mathematical operations performed in the GC-2 algorithm for each decoded sequence

It is noteworthy that the operations in Step 4 depend on the result obtained in Step 3, i.e., on the success of the binary decoder in the search for an error pattern z associated with the sequence y^i. Thus, it is necessary to estimate the average number of operations performed in Step 4. For this, we define the relative frequency of computing W_α as f_A = N_W / 2^p, in which N_W is the number of times that the analog weight W_α is computed in the main loop of the algorithm. This value is evaluated via computer simulations in the next section (see also [26]).

In the case of the WED algorithm, once Q and δ are fixed, the implementation follows the steps described in Table 2. The computation of the sequence v depends only on Q and does not need to be executed for each received sequence. Therefore, these operations are not considered in Table 4, which lists the number of operations required to implement the WED algorithm for each decoded sequence. Step 1 uses the binary tree mapping [27], in which, for Q regions, m comparisons are needed to quantize a component r_i. Step 2 is omitted because it does not require arithmetic operations.

Table 4 Number of mathematical operations performed in the WED algorithm for each decoded sequence

Finally, in Step 5, depending on the sequence being decoded, either (m − 1) or (m − 2) additions may be necessary to perform the comparison between Σ_{ℓ ∈ S_0^i} R_ℓ v_ℓ and Σ_{ℓ ∈ S_1^i} R_ℓ v_ℓ. Considering the worst case for all n positions, this totals n(m − 1) additions per decoded sequence.

5 Numerical results

The performance of three decoding algorithms (ML, GC-2, and WED) is evaluated via computer simulations for the two UEP codes defined in Section 2 using binary transmission over the AWGN channel. Various configurations of the GC-2 and WED algorithms are considered by changing their parameters (t and p for GC-2; t and Q for WED), in order to compare their performance to that of the ML algorithm for each protection class. Using these results together with the operations in Tables 3 and 4, a trade-off between performance and complexity for both decoding algorithms is also established. In the following sections, the GC-2 and the WED algorithms will be denoted by GC-2 (t,p) and WED (t,Q), respectively.

5.1 GC-2 decoding algorithm

Figure 2 shows the curves of the bit error probability (P_b) versus signal-to-noise ratio (SNR) E_b/N_0, in which E_b is the energy per information bit and N_0 is the power spectral density of the noise, for the GC-2(2,2) and GC-2(3,4) algorithms and both classes of the UEP code C_1. For this code, the maximum value of the error-correcting capability of the binary decoder (t) is assumed equal to 3, and the maximum value of p is such that the cardinality of S_b is always lower than that of the search set of the ML algorithm (|S_b| < 2^k).

Figure 2. Performance of the ML and GC-2(t,p) decoding algorithms for the UEP code C_1(16,5,5), considering binary antipodal modulation and the AWGN channel.

For the GC-2(2,2) algorithm, we observe that there is virtually no performance difference between the classes cp1 and cp2. In addition, considering P_b = 10^−4, the SNR difference compared to the ML algorithm is approximately 2 and 1.1 dB for the classes cp1 and cp2, respectively. For the GC-2(3,4) algorithm, the SNR difference to the ML algorithm is 0.1 dB (cp1) and 0.03 dB (cp2).

To assess the complexity of the GC-2 algorithm, it is necessary to evaluate f_A, as mentioned in Section 4. Figure 3 illustrates the values of f_A as a function of E_b/N_0 for the GC-2(2,2), GC-2(2,7), GC-2(3,2), GC-2(3,7), GC-2(6,2), and GC-2(6,7) algorithms applied to the UEP code C_2. For p = 2 and considering t = 2, t = 3, and t = 6, f_A reaches its maximum value (i.e., W_α is computed for all test patterns b_i) when E_b/N_0 = 9.5, 8, and 4 dB, respectively. The reduction of the required SNR occurs due to the increased possibility of an error pattern z being found, a consequence of the increase of the error-correcting capability of the binary decoder. We observe that for t = 2 and t = 3 (p = 7), there are test patterns b_i which do not produce calculations of W_α (f_A < 1), even in regions of high SNR (E_b/N_0 > 7.5 dB). For example, considering the GC-2(2,7) algorithm and E_b/N_0 > 7.5 dB, it is very probable that the bit inversions resulting from the addition of the test patterns b_i cause errors in the sequence y. As the binary decoder used in this algorithm is able to correct only 2 errors, 31 estimates of W_α occur (N_W = 31), corresponding to the test patterns of lowest weight, resulting in f_A = 31/128 ≈ 0.242.

Figure 3. Relative frequency of the computation of the analog weight W_α in the GC-2(t,p) algorithm applied to the code C_2(25,8,5), as a function of E_b/N_0. Parameters of the GC-2 algorithm: black squares: t = 2, p = 2; red squares: t = 2, p = 7; green circles: t = 3, p = 2; blue circles: t = 3, p = 7; cyan triangles: t = 6, p = 2; magenta triangles: t = 6, p = 7.

Finally, we analyze the compromise between performance and complexity of the GC-2 algorithms in terms of the SNR difference with respect to the ML algorithm for the class cp_i at P_b = 10^−4, denoted Δ_i (dB), and the number of arithmetic operations executed by the algorithm, defined as the 4-tuple MO = [N_s; N_g; N_m; N_c]. For the estimation of MO, it is necessary to determine the value of f_A used to weight the number of operations in Step 4 of Table 3. Given the GC-2 algorithm and the protection class cp_i, i = 1, 2, the SNR value corresponding to P_b = 10^−4 is determined. With this SNR, we can identify the corresponding value of f_A (see Figure 3).

Tables 5 and 6 summarize the complexity-performance trade-off for various configurations of the GC-2 algorithm applied to the UEP codes C_1 and C_2, respectively. For each intersection of a row (t) with a column (p), the values of Δ_i (dB) and MO required to achieve P_b = 10^−4 are shown for each protection class. For both codes, these results indicate that an increase in p (for a fixed t) provides better performance but also higher complexity, since each operation count shown in Table 3 grows exponentially with p. On the other hand, an increase in t (for a fixed p) also results in improved performance, but with a smaller increase in complexity. For example, considering the class cp1 of the code C_1 and the GC-2(2,2) algorithm, we have Δ_1 = 2.0 dB and MO = [N_s; N_g; N_m; N_c] ≈ [58.3; 790; 770; 3.89]. Moreover, the GC-2(3,2) algorithm provides Δ_1 = 0.9 dB and MO ≈ [59.5; 790; 770; 3.96], while the GC-2(2,4) algorithm yields Δ_1 = 0.8 dB and MO ≈ [159.6; 3,100; 3,000; 10.6], which represents a significant complexity increase over the previous two cases, while the value of Δ_1 is approximately the same as that obtained by the GC-2(3,2) algorithm. This analysis leads us to conclude that increasing the error-correcting capability of the binary decoder is more advantageous than increasing the number of test patterns of the GC-2 algorithm. It is also possible to observe (analyzing Tables 5 and 6) that, in most cases, Δ_2 is smaller than Δ_1, indicating that the performance achieved by the class cp2 is closer to the ML one than that obtained by the class cp1.

Table 5 Results of performance and arithmetic complexity of the GC-2 ( t , p ) algorithms applied to the UEP code C 1 (16,5,5)
Table 6 Results of performance and arithmetic complexity of the GC-2 ( t , p ) algorithms applied to the UEP code C 2 (25,8,5)

5.2 WED algorithm

The WED(t,Q) algorithm uses the reliability defined in (7). The numbers of quantization regions considered are Q = 4, 16, and 1024. Table 7 lists the optimal value of the quantization step, δ_op (in the sense of minimizing P_b), for the WED(2,Q) algorithm (class cp1) applied to the codes C_1 and C_2. In general, increasing the number of quantization regions causes a decrease in the value of δ_op, further reducing the spacing between adjacent regions. In addition, we can observe that the higher the value of Q, the lower the variation of δ_op over the range of SNR considered. This behavior indicates that as Q increases, the optimal value of the quantization step becomes approximately constant.

Table 7 Values of optimal quantization step δ op of the class cp 1 for the WED (2, Q ) algorithm and different values of E b / N 0

Figure 4 shows the curves of P_b versus E_b/N_0 for the WED(2,4) and WED(3,16) algorithms and both classes of the UEP code C_1. Similarly to what was observed for the GC-2(2,2) algorithm, there is no performance difference between the two classes for t = 2, while for t = 3, the SNR difference to the ML algorithm is 0.9 dB (cp1) and 1.2 dB (cp2).

Figure 4. Performance of the ML and WED(t,Q) decoding algorithms for the UEP code C_1(16,5,5), considering binary antipodal modulation and the AWGN channel.

Table 8 summarizes the complexity-performance trade-off for various configurations of the WED algorithm applied to the UEP codes C_1 and C_2. For each intersection of a row (t and cp_i) with a column (Q and a code C_j), the value of Δ_i (dB) at P_b = 10^−4 is given. For code C_1, the error-correcting capability of the WED algorithm is t = 2, 3, and 4, while for code C_2, t = 2, 3, 4, 5, and 6, as considered for the GC-2 algorithm. Considering the complexity-performance trade-off, it is thus more advantageous to increase t, as in the GC-2(t,p) algorithm, than to increase the number of quantization regions Q.

Table 8 Results of performance ( Δ i in dB) and arithmetic complexity (MO) of the WED ( t , Q ) algorithms applied to the UEP codes C 1 (16,5,5) and C 2 (25,8,5)

Finally, we compare both soft-decision decoding algorithms for a specific protection class, such as the higher protection one (cp1). To do this, we define a binary decoding ratio, denoted by γ, as

γ = 2^p / log_2 Q.
(10)

The parameters p and Q are associated with the numbers of binary decodings that the GC-2(t,p) and WED(t,Q) algorithms, respectively, execute. When decoding a received sequence, the GC-2(t,p) algorithm performs 2^p binary decodings, while the WED(t,Q) algorithm performs log_2 Q of them. Thus, to make a fair comparison of the algorithms, we choose configurations where γ ≈ 1. In this case, the WED algorithm can offer a performance closer to the ML curve (for the higher protection class), but at the price of increased complexity. For γ = 1 and code C_2, we can see this by comparing the GC-2(5,2) and WED(5,16) algorithms (see Tables 6 and 8). For the GC-2(5,2) algorithm, Δ_1 = 1.3 dB and MO ≈ [95.7; 1,800; 1,800; 3.99], while for the WED(5,16) one, Δ_1 = 1.0 dB and MO ≈ [179; 1,832; 1,808; 229]. Another example is obtained if the GC-2(4,3) and WED(4,1024) algorithms are compared (γ = 0.8 and code C_2). For the GC-2(4,3) algorithm, Δ_1 = 1.7 dB and MO ≈ [183; 3,655; 3,591; 7.63], while for the WED one, Δ_1 = 1.4 dB and MO ≈ [485; 4,580; 4,520; 535]. It should also be observed in Table 8 that the performance of the WED algorithm degrades when t is high. The authors conjecture that this behavior is due to some limitation of the reliability measure adopted in (7).
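The ratio in (10) is straightforward to evaluate for the configurations compared above; the small illustrative sketch below reproduces the values γ = 1 (GC-2(5,2) versus WED(5,16)) and γ = 0.8 (GC-2(4,3) versus WED(4,1024)):

```python
from math import log2

def gamma(p, Q):
    """Binary decoding ratio (10): GC-2(t,p) performs 2^p binary decodings,
    WED(t,Q) performs log2(Q) of them."""
    return 2 ** p / log2(Q)

print(gamma(2, 16))    # 1.0 -> GC-2(t,2) and WED(t,16) do the same number of decodings
print(gamma(3, 1024))  # 0.8
```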

6 Conclusions

In this study, the effectiveness of two sub-optimum soft-decision decoding algorithms (the GC-2(t,p) and WED(t,Q) algorithms) was investigated for each protection class of UEP block codes using binary transmission over an AWGN channel. The performance of both algorithms was compared to that of the ML algorithm. The behavior of the GC-2 algorithm in estimating the analog weight (Step 4) was investigated as a function of its parameters (t and p), while the WED algorithm was examined with a newly proposed reliability as a function of its parameters (t and Q). To estimate the complexity of each algorithm, the number of arithmetic operations per decoded sequence was computed. An analysis of the trade-off between performance and complexity of the algorithms was performed for each protection class, assuming various configuration options. These analyses led us to conclude that, when choosing the parameters of the algorithms, increasing the error-correcting capability of the binary decoder (t) is more advantageous in both cases. In addition, choosing the values of p and Q such that γ is close to one (for a fixed value of t), it was verified that the GC-2 algorithm is less complex, while the WED algorithm can offer (depending on the code adopted) a performance closer to the ML one.

Endnotes

a. Error pattern z associated with the syndrome of the sequence y^i.

b. The complexity of decoding algorithms should take into consideration additional factors besides the arithmetic operations (such as memory reads and writes). Since these factors are architecture dependent, we omit their contribution in this article.

References

1. Lin T-H, Kaiser WJ, Pottie GJ: Integrated low-power communication system design for wireless sensor networks. IEEE Commun. Mag. 2004, 42(12):142-150.

2. Niewiadomska-Szynkiewicz E, Kwasniewski P, Windyga I: Comparative study of wireless sensor networks energy-efficient topologies and power save protocols. J. Telecommun. Inf. Technol. 2009, 3:68-75.

3. Gómez-Vilardebó J, Pérez-Neira AI, Nájar M: Energy efficient communications over the AWGN relay channel. IEEE Trans. Wirel. Commun. 2010, 9(1):32-37.

4. Zhu Y, Wu W, Pan J, Tang Y: An energy-efficient data gathering algorithm to prolong lifetime of wireless sensor networks. Comput. Commun. 2010, 33(5):639-647.

5. Howard SL, Schlegel C, Iniewski K: Error control coding in low-power wireless sensor networks: when is ECC energy-efficient? EURASIP J. Wirel. Commun. Netw. 2006, 2:1-14.

6. Kienle F, Wehn N, Meyr H: On complexity, energy- and implementation-efficiency of channel decoders. IEEE Trans. Commun. 2011, 59(12):3301-3310.

7. Fossorier M, Lin S, Snyders J: Reliability-based syndrome decoding of linear block codes. IEEE Trans. Inf. Theory 1998, 44:388-398.

8. Singh J, Pesch D: Application of energy efficient soft-decision error control in wireless sensor networks. Telecommun. Syst. (Springer Netherlands) 2011, pp. 1-11. http://dx.doi.org/10.1007/s11235-011-9588-z

9. Chang YC, Lee SW, Komiya R: A low complexity hierarchical QAM symbol bits allocation algorithm for unequal error protection of wireless video transmission. IEEE Trans. Consum. Electron. 2009, 55(3):1089-1097.

10. Nguyen HX, Nguyen HH, Le-Ngoc T: Signal transmission with unequal error protection in wireless relay networks. IEEE Trans. Veh. Technol. 2010, 59(5):2166-2178.

11. Pimentel C, Souza RD, Uchôa-Filho BF, Pellenz ME: Generalized punctured convolutional codes with unequal error protection. EURASIP J. Adv. Signal Process. 2008, 2008:Art. ID 280831, 1-6.

12. Borade S, Nakiboglu B, Zheng L: Unequal error protection: an information-theoretic perspective. IEEE Trans. Inf. Theory 2009, 55(12):5511-5539.

13. Zhang S, Lau VKN: A novel unequal error protection (UEP) scheme using D-STTD for multicast service. IEEE Trans. Wirel. Commun. 2009, 8(2):978-984.

14. Arslan SS, Cosman PC, Milstein LB: Coded hierarchical modulation for wireless progressive image transmission. IEEE Trans. Veh. Technol. 2011, 60(9):4299-4313.

15. Kang K, Jeon WJ: Differentiated protection to video layers to improve perceived quality. IEEE Trans. Mob. Comput. 2012, 11(2):292-304.

16. Thomos N, Boulgouris NV, Strintzis MG: Wireless image transmission using turbo codes and optimal unequal error protection. IEEE Trans. Image Process. 2005, 14(11):1890-1901.

17. Qu Q, Modestino JW: An adaptive motion-based unequal error protection approach for real-time video transport over wireless IP networks. IEEE Trans. Multimed. 2006, 8(5):1033-1044.

18. Ha H, Yim C: Layer-weighted unequal error protection for scalable video coding extension of H.264/AVC. IEEE Trans. Consum. Electron. 2008, 54(2):736-744.

19. Zhang W, Shao X, Torki M, HajShirMohammadi A, Bajic IV: Unequal error protection codes for JPEG2000 images using short block length turbo codes. IEEE Commun. Lett. 2011, 15(6):659-661.

20. Tendolkar NN, Hartmann CRP: Generalization of Chase algorithms for soft decision decoding of binary linear codes. IEEE Trans. Inf. Theory 1984, IT-30(5):714-721.

21. Weldon EJ Jr: Decoding binary block codes on Q-ary output channels. IEEE Trans. Inf. Theory 1971, 17(6):713-718.

22. Masnick B, Wolf J: On linear unequal error protection codes. IEEE Trans. Inf. Theory 1967, IT-13(4):600-607.

23. van Gils WJ: On linear unequal error protection codes. EUT Report 82-WSK-02, Department of Mathematics and Computing Science, Eindhoven University of Technology, 1982.

24. Chase D: A class of algorithms for decoding block codes with channel measurement information. IEEE Trans. Inf. Theory 1972, IT-18(1):170-182.

25. Chen WHJ, Fossorier MPC, Lin S: Optimum quantizer design for the weighted erasure decoding algorithm. In Proceedings of the IEEE International Conference on Communications (ICC), Vancouver, Canada, June 1999, pp. 838-842.

26. Albuquerque RC, Cunha DC, Pimentel C: An evaluation of the generalized Chase-2 algorithm applied to unequal error protection block codes. In Proceedings of the IEEE 3rd Latin-American Conference on Communications (LATINCOM), Belém-PA, Brazil, October 2011, pp. 1-6.

27. Tenenbaum AM, Langsam Y, Augenstein MJ: Data Structures Using C. Prentice Hall; 1989.


Acknowledgements

This study was supported in part by the State of Pernambuco Research Foundation (FACEPE) under Grant APQ-1060-3.04/10 and the Brazilian Council for Scientific and Technological Development (CNPq) under Grant 302535/2010-1.

Author information

Correspondence to Daniel C Cunha.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

de Albuquerque, R.C., Cunha, D.C. & Pimentel, C. On the complexity-performance trade-off in soft-decision decoding for unequal error protection block codes. EURASIP J. Adv. Signal Process. 2013, 28 (2013). https://doi.org/10.1186/1687-6180-2013-28
