A new computational decoding complexity measure of convolutional codes

Abstract

This paper presents a computational complexity measure of convolutional codes well suited to software implementations of the Viterbi algorithm (VA) operating with hard decision. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of each arithmetic operation is determined in terms of the machine cycles required for its execution on a typical digital signal processor widely used in low-power telecommunications applications.

1 Introduction

Convolutional codes are widely adopted due to their capacity to increase the reliability of digital communication systems with manageable encoding/decoding complexity [1]. A convolutional code can be represented by a regular (or conventional) trellis, which allows an efficient implementation of the maximum-likelihood decoding algorithm, known as the Viterbi algorithm (VA). In [2], the authors analyze different receiver implementations for the wireless network IEEE 802.11 standard [3], showing that the VA accounts for 35% of the overall power consumption. This consumption is strongly related to the decoding complexity, which in turn is highly dependent on the trellis representing the code. Therefore, the search for less complex trellis alternatives is essential for some applications, especially those with severe power limitations.

A trellis consists of repeated copies of what is called a trellis module [4–6]. McEliece and Lin [4] defined a decoding complexity measure of a trellis module as the total number of edge symbols in the module (normalized by the number of information bits), called the trellis complexity of the module M and denoted by TC(M). In [4], a method to construct the so-called ‘minimal’ trellis module is provided. This module, which has an irregular structure presenting a time-varying number of states, minimizes various trellis complexity measures. Good convolutional codes with low-complexity trellis representations are tabulated in [5–13], which indicates a great interest in this subject. These works establish a performance-complexity trade-off for convolutional codes.

The VA operating with hard decision over a trellis module M performs two arithmetic operations: integer sums and comparisons [4, 13–15]. These operations are also used as a complexity measure for other decoding algorithms, such as those employed by turbo codes [16]. The number of sums per information bit is equal to TC(M); thus, this complexity measure represents the additive complexity of the trellis module M. On the other hand, the number of comparisons at a specific state of M is equal to the total number of edges reaching it minus one [14]. The total number of comparisons in M represents the merge complexity. Both the trellis and merge complexities govern the complexity of the VA operating over a trellis module M. Therefore, considering only one of these complexities is not sufficient to determine the effort required by the decoding operations.

In this work, we propose a complexity measure, called the computational complexity of M and denoted by TCC(M), that more adequately reflects the computational effort of decoding a convolutional code using a software implementation of the VA. This measure is defined by considering the total number of sums and comparisons in M as well as the respective computational cost (complexity) of implementing these arithmetic operations on a given digital signal processor. More specifically, these costs are measured in terms of the machine cycles consumed by the execution of each operation.

For illustration purposes, we provide in Section 4 an example comparing the conventional and minimal trellis modules under the new complexity measure for a specific architecture. We will see through other examples that two different convolutional codes having the same complexity TC(M), as defined in [4], may compare differently under TCC(M). Therefore, interesting codes may have been overlooked in previous code searches. To remedy this, as another contribution of this work, a code search is conducted and the best convolutional codes (in terms of distance spectrum) with respect to TCC(M) are tabulated. We present a refined list of codes with increasing values of TCC(M) of the minimal trellis for codes of rates 2/4, 3/5, 4/7, and 5/7.

The remainder of this paper is organized as follows. In Section 2, we determine the number of arithmetic operations performed by the VA and define the computational complexity TCC(M). Section 3 presents the results of the code search. In Section 4, we determine the computational cost of each arithmetic operation; comparisons between TC(M) and TCC(M) are given for codes of different rates, based on two trellis representations: the conventional and minimal trellis modules. Finally, in Section 5, we present the conclusions of this work.

2 Trellis module complexity

Consider a convolutional code C(n,k,ν), where ν, k, and n are the overall constraint length, the number of input bits, and the number of output bits, respectively. In general, a trellis module M for a convolutional code C(n,k,ν) consists of $n'$ trellis sections, $2^{\nu_t}$ states at depth $t$, $2^{\nu_t + b_t}$ edges connecting the states at depth $t$ to those at depth $t+1$, and $l_t$ bits labeling each edge from depth $t$ to depth $t+1$, for $0 \le t \le n'-1$ [5]. The decoding operation at each trellis section using the VA has three components: the Hamming distance calculation (HDC), the add-compare-select (ACS), and the RAM traceback. Next, we analyze the arithmetic operations required by the HDC and the ACS over the trellis module M using the VA operating with hard decision. The RAM traceback component does not require arithmetic operations and is therefore not considered in this work; in this stage, decoding is accomplished by tracing the maximum-likelihood path backwards through the trellis ([1], Chapter 12).

We next develop a complexity metric in terms of arithmetic operations: summations (S), bit comparisons ($C_b$), and integer comparisons ($C_i$). We use the notation

$$ s\,(S) + c_1\,(C_b) + c_2\,(C_i) $$

to denote $s$ summations, $c_1$ bit comparisons, and $c_2$ integer comparisons.

The HDC consists of calculating the Hamming distance between the received sequence and the coded sequence labeling each edge of section $t$ of the trellis module M. As each edge is labeled by $l_t$ bits, the same number of bit comparison operations is required, and the results of the bit comparisons are accumulated using $l_t - 1$ sum operations. The total number of edges in this section is $2^{\nu_t + b_t}$; therefore, $l_t\,2^{\nu_t + b_t}$ bit comparison operations and $(l_t - 1)\,2^{\nu_t + b_t}$ sum operations are required. We conclude that the total number of operations required by the HDC at section $t$, denoted by $T_t^{\mathrm{HDC}}$, is given by

$$ T_t^{\mathrm{HDC}} = (l_t - 1)\,2^{\nu_t + b_t}\,(S) + l_t\,2^{\nu_t + b_t}\,(C_b). $$
(1)

The ACS performs the metric update of each state of the trellis section. First, each edge metric and the corresponding initial-state metric are added together, requiring $2^{\nu_t + b_t}$ sum operations. In the next step, the accumulated metrics of all edges converging to each state at depth $t+1$ are compared, and the lowest one is selected. There are $2^{\nu_{t+1}}$ states at depth $t+1$ and $2^{\nu_t + b_t}$ edges between depths $t$ and $t+1$. Therefore, $2^{\nu_t + b_t}/2^{\nu_{t+1}}$ edges per state are compared, requiring $(2^{\nu_t + b_t}/2^{\nu_{t+1}}) - 1$ comparison operations. Considering now all the states at depth $t+1$, a total of $2^{\nu_t + b_t} - 2^{\nu_{t+1}}$ integer comparison operations are required [12, 13]. We conclude that the total number of operations required by the ACS at section $t$, denoted by $T_t^{\mathrm{ACS}}$, is then given by

$$ T_t^{\mathrm{ACS}} = 2^{\nu_t + b_t}\,(S) + \left(2^{\nu_t + b_t} - 2^{\nu_{t+1}}\right)(C_i). $$
(2)

From (1) and (2), the total number of operations per information bit performed by the VA over a trellis module M is

$$ T(M) = \frac{1}{k}\sum_{t=0}^{n'-1}\left(T_t^{\mathrm{HDC}} + T_t^{\mathrm{ACS}}\right) = \frac{1}{k}\sum_{t=0}^{n'-1}\left[ l_t\,2^{\nu_t + b_t}\,(C_b + S) + \left(2^{\nu_t + b_t} - 2^{\nu_{t+1}}\right)(C_i) \right], $$
(3)

where $\nu_{n'} = \nu_0$. The trellis complexity per information bit, TC(M), of a trellis module M, according to [4], is given by

$$ TC(M) = \frac{1}{k}\sum_{t=0}^{n'-1} l_t\,2^{\nu_t + b_t} $$
(4)

and the merge complexity per information bit, MC(M), over a trellis module M is [12, 13]

$$ MC(M) = \frac{1}{k}\sum_{t=0}^{n'-1}\left(2^{\nu_t + b_t} - 2^{\nu_{t+1}}\right). $$
(5)

We rewrite (3) using (4) and (5) as follows:

$$ T(M) = TC(M)\,(C_b + S) + MC(M)\,(C_i). $$
(6)

For the conventional trellis module $M_{\mathrm{conv}}$, which has $n' = 1$ section with $l_0 = n$, $\nu_0 = \nu_1 = \nu$, and $b_0 = k$, we obtain

$$ TC(M_{\mathrm{conv}}) = \frac{n}{k}\,2^{k + \nu} $$
(7)
$$ MC(M_{\mathrm{conv}}) = \frac{2^{\nu}\left(2^{k} - 1\right)}{k}. $$
(8)

The minimal trellis module has an irregular structure with $n$ sections, which may have different numbers of states, and each edge is labeled with just one bit [4]. For this minimal trellis module $M_{\mathrm{min}}$, where $n' = n$, $l_t = 1$, $\nu_t = \tilde{\nu}_t$ and $b_t = \tilde{b}_t$ for all $t$, and $\nu_n = \nu_0$, we obtain

$$ TC(M_{\mathrm{min}}) = \frac{1}{k}\sum_{t=0}^{n-1} 2^{\tilde{\nu}_t + \tilde{b}_t} $$
(9)
$$ MC(M_{\mathrm{min}}) = \frac{1}{k}\sum_{t=0}^{n-1}\left(2^{\tilde{\nu}_t + \tilde{b}_t} - 2^{\tilde{\nu}_{t+1}}\right). $$
(10)

Example 1

Consider the convolutional code C1(7,3,3) generated by the generator matrix

$$ G_1(D) = \begin{bmatrix} 1+D & 1+D & 1 & 1 & 0 & 1 & 1 \\ D & 0 & 1+D & 1+D & 1 & 1 & 0 \\ D & D & 0 & D & 1+D & 1+D & 1+D \end{bmatrix}. $$
(11)

The trellis and merge complexities of the conventional trellis module for C1(7,3,3) are TC(Mconv)=149.33 and MC(Mconv)=18.66. Therefore, we obtain from (6)

$$ T(M_{\mathrm{conv}}) = 149.33\,(S + C_b) + 18.66\,(C_i). $$

The single-section conventional trellis module $M_{\mathrm{conv}}$ has eight states with eight edges leaving each state, each edge labeled by 7 bits. The minimal trellis module for C1(7,3,3) is shown in Figure 1. Defining the state and edge complexity profiles of $M_{\mathrm{min}}$ as $\tilde{\nu} = (\tilde{\nu}_0, \ldots, \tilde{\nu}_{n-1})$ and $\tilde{b} = (\tilde{b}_0, \ldots, \tilde{b}_{n-1})$, respectively, we obtain $\tilde{\nu} = (3,4,3,4,3,4,4)$ and $\tilde{b} = (1,0,1,0,1,0,0)$ for C1(7,3,3), resulting in TC(Mmin)=37.33 and MC(Mmin)=8. Similarly, we obtain from (6)

$$ T(M_{\mathrm{min}}) = 37.33\,(S + C_b) + 8\,(C_i). $$

In this example, the number of operations required by the minimal trellis module relative to the conventional trellis module is 25% for S and $C_b$ and approximately 42.9% for $C_i$.
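The following sketch (ours; the function names tc_min and mc_min are assumptions) evaluates (9) and (10) directly from the state and edge complexity profiles and reproduces the values TC(Mmin)=37.33 and MC(Mmin)=8 obtained above for C1(7,3,3).

```c
#include <stdio.h>

/* Eq. (9): TC(Mmin) = (1/k) * sum_t 2^(nu~_t + b~_t) */
static double tc_min(const int *nu, const int *b, int n, int k) {
    double sum = 0.0;
    for (int t = 0; t < n; t++)
        sum += (double)(1u << (nu[t] + b[t]));
    return sum / k;
}

/* Eq. (10): MC(Mmin) = (1/k) * sum_t (2^(nu~_t + b~_t) - 2^(nu~_{t+1})),
 * with the wrap-around convention nu~_n = nu~_0. */
static double mc_min(const int *nu, const int *b, int n, int k) {
    double sum = 0.0;
    for (int t = 0; t < n; t++)
        sum += (double)(1u << (nu[t] + b[t])) - (double)(1u << nu[(t + 1) % n]);
    return sum / k;
}

int main(void) {
    /* Profiles of C1(7,3,3) from Example 1 */
    int nu[] = {3, 4, 3, 4, 3, 4, 4};
    int b[]  = {1, 0, 1, 0, 1, 0, 0};
    /* Expected output: TC = 37.33, MC = 8.00 */
    printf("TC = %.2f, MC = %.2f\n", tc_min(nu, b, 7, 3), mc_min(nu, b, 7, 3));
    return 0;
}
```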

Figure 1. Minimal trellis for the C1(7,3,3) convolutional code. Solid edges represent ‘0’ codeword bits, while dashed lines represent ‘1’ codeword bits.

Once the number of operations performed by the VA over a trellis module is determined, we must obtain the individual cost of the arithmetic operations S, Cb, and Ci for a more appropriate complexity comparison.

2.1 Computational complexity of the VA

Based on (6), we define in this subsection the computational complexity of a trellis module M, denoted by TCC(M). For this purpose, let $\Phi_S$, $\Phi_{C_b}$, and $\Phi_{C_i}$ be the individual computational costs of implementing the arithmetic operations S, $C_b$, and $C_i$, respectively, on a particular architecture. These costs can be measured in terms of the machine cycles consumed by each operation, the power consumed by each operation, or other cost measures. Thus,

$$ TCC(M) = TC(M)\left(\Phi_{C_b} + \Phi_S\right) + MC(M)\,\Phi_{C_i}. $$
(12)

We observe that TCC(M) depends on two complexity measures of the module M: the trellis complexity, which represents the additive complexity, and the merge complexity. The relative importance of each complexity depends on a particular implementation, as will be discussed later. Note, however, that the complexity measure defined in (12) is general, in the sense that it is valid for any trellis module and relative operation costs.
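A minimal sketch of (12) as a function (ours; the function name tcc and the placeholder cost values in main are assumptions, since the actual per-operation costs are only derived in Section 4):

```c
#include <stdio.h>

/* Eq. (12): TCC(M) = TC(M) * (phi_cb + phi_s) + MC(M) * phi_ci.
 * The costs may be machine cycles, energy per operation, or any
 * other per-operation cost measure. */
static double tcc(double tc, double mc,
                  double phi_s, double phi_cb, double phi_ci) {
    return tc * (phi_cb + phi_s) + mc * phi_ci;
}

int main(void) {
    /* Placeholder unit costs, for illustration only; the costs for the
     * TMS320C55xx are determined in Section 4 (Table 8). */
    printf("TCC = %.2f\n", tcc(37.33, 8.0, 1.0, 1.0, 1.0));
    return 0;
}
```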

In the next section, we conduct a new code search in which TC(Mmin) and MC(Mmin) are taken as complexity measures. This refined search allows us to find new codes spanning a wide range of error performance versus decoding complexity trade-offs.

3 Code search

Code search results for good convolutional codes of various rates can be found in the literature. In general, the objective of these searches is to determine the best spectra for a list of fixed values of the trellis complexity, as performed in [6]. The search proposed in this paper considers both the trellis and merge complexities of the minimal trellis in order to obtain a list of good codes (in terms of distance spectrum) with more refined computational complexity values. We apply the search procedure defined in [5] for codes of rates 2/4, 3/5, 4/7, and 5/7. The main idea is to propose a number of templates for the generator matrix G(D) in trellis-oriented form with fixed TC(Mmin) and MC(Mmin) (the detailed procedure is provided in [5]). This sets the stage for having ensembles of codes with fixed decoding complexity through which an exhaustive search can be conducted. It should be mentioned that, since we do not consider all possible templates in our code search, it is possible that for some particular cases a code with given TC(Mmin) and MC(Mmin) better than the ones tabulated herein may be found elsewhere in the literature. Credit to these references is provided when applicable.

The results of the search are summarized in Tables 1, 2, 3, and 4. For each TC(Mmin) and MC(Mmin) considered, we list the free distance $d_f$, the first six terms of the code weight spectrum $N$, and the generator matrix G(D) of the best code found. The generator matrices are given in octal form with the highest power of D in the most significant bit of the representation, i.e., $1 + D + D^2 = 1 + 2 + 4 = 7$. The italicized entries in Tables 1, 2, 3, and 4 indicate codes presenting the same TC(Mmin) but different MC(Mmin). For instance, for rate 2/4 and TC(Mmin)=192 in Table 1, we obtain MC(Mmin)=48 and MC(Mmin)=64 with $d_f$=8 and $d_f$=9, respectively. New codes with a variety of trellis complexities are shown in these tables. The existing codes (with possibly different G(D)) are also indicated. We call the reader’s attention to the fact that other codes with TC(Mmin), MC(Mmin), or weight spectrum different from those in Tables 1, 2, 3, and 4 are documented in the literature (see, for example, [8] and [10]), and they could be used to extend the performance-complexity trade-off analysis performed in this work.
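As an illustration of this octal convention (our sketch, not from the paper), the snippet below stores the coefficient of $D^i$ in bit $i$ of an integer mask, so that the highest power of D ends up in the most significant set bit, and prints the mask in octal:

```c
#include <stdio.h>

int main(void) {
    /* 1 + D + D^2: coefficient of D^i in bit i, giving the mask
     * 0b111 = 1 + 2 + 4, which prints as 7 in octal. */
    unsigned poly = (1u << 0) | (1u << 1) | (1u << 2);
    printf("%o\n", poly);  /* prints 7 */
    return 0;
}
```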

Table 1 Good convolutional codes of rate 2/4 for various TC(Mmin) and MC(Mmin) values
Table 2 Good convolutional codes of rate 3/5 for various TC(Mmin) and MC(Mmin) values
Table 3 Good convolutional codes of rate 4/7 for various TC(Mmin) and MC(Mmin) values
Table 4 Good convolutional codes of rate 5/7 for various TC(Mmin) and MC(Mmin) values

It should be remarked that the values of TCC shown in Tables 1, 2, 3, and 4 are calculated from (12) with the costs $\Phi_S$, $\Phi_{C_b}$, and $\Phi_{C_i}$ determined in the next section (refer to (13)), using the C programming language running on a TMS320C55xx fixed-point digital signal processor (DSP) from Texas Instruments, Inc. (Dallas, TX, USA) [17]. The cost of each operation, based on the number of machine cycles consumed by its execution, is substituted into (12) in order to obtain the computational complexity for this architecture. This allows us to compare the complexity of several trellis modules.

4 A case study

In this section, we describe the implementation of the operations S, $C_b$, and $C_i$ in order to obtain the respective numbers of machine cycles, based on simulations of the TMS320C55xx DSP from Texas Instruments. This device belongs to a well-known family of 16-bit fixed-point DSPs suited for telecommunication applications that require low power consumption, low system cost, and high performance [17]. More details about this processor can be found in [18, 19]. We work with the integrated development environment (IDE) Code Composer Studio (CCStudio) version 4.1.1.00014 [19]. The simulations are conducted with the C55xx Rev2.x CPU Accurate Simulator. Once the number of machine cycles of each operation is obtained, we use (12) to obtain the computational complexity measure of a trellis module for this particular architecture.

4.1 DSP implementation of the VA operations

Tables 5, 6, and 7 show the implementation details of the operations S, Cb, and Ci, respectively. In each of these tables, the operation in C language, the corresponding C55x assembly language code generated by the compiler, a short description of the code, and the resulting number of machine cycles are given in the first, second, third, and fourth columns, respectively.

Table 5 Implementation of the S operation
Table 6 Implementation of the $C_b$ operation
Table 7 Implementation of the $C_i$ operation

4.1.1 Sum operation (S)

Table 5 shows the implementation details of the operation S. As we can observe from the third column, two storage operations and an S operation, each taking one machine cycle, are performed. Therefore, the operation S is performed with three machine cycles.
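Since Table 5 is not reproduced here, the fragment below is only a plausible C-level form of the S operation consistent with the description above (two data moves plus one addition); the variable and function names are our assumptions.

```c
/* Sum operation (S): add an edge metric to the accumulated path metric.
 * According to Table 5, the compiled C55x code performs two storage
 * operations and one addition, for a total of three machine cycles. */
int add_metric(int path_metric, int edge_metric) {
    return path_metric + edge_metric;
}
```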

4.1.2 Bit comparison operation ($C_b$)

The bit comparison operation Cb is implemented with a bitwise logical XOR instruction, assuming that each bit of the received word has been previously stored in an integer type variable. Table 6 shows the details of the implementation of this operation. Similarly, three machine cycles are necessary to implement the operation Cb.
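A plausible C-level form of the bit comparison consistent with the description above (a bitwise XOR of two bits previously stored in integer variables); the names are our assumptions.

```c
/* Bit comparison (Cb): XOR of a received bit and an edge-label bit,
 * each stored in an int. The result (0 or 1) is this bit's contribution
 * to the Hamming distance; three machine cycles according to Table 6. */
int bit_compare(int rx_bit, int label_bit) {
    return rx_bit ^ label_bit;
}
```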

4.1.3 Integer comparison operation ($C_i$)

The operation $C_i$ is implemented with an if-else statement, which includes the storage of the lowest accumulated edge metric. Table 7 shows how this operation is implemented. The if statement is used to compare two accumulated edge metrics, and the lowest one is stored in the integer variable minor. In the third column, AR1 and AR2 are accumulator registers loaded with the accumulated metric values, represented here by the variables B and A, respectively. Next, the metrics are compared; if B<A, status bit TC1 is set and the program flow branches to the label @L1, where the value of B is stored in minor. Following this path, the code consumes 10 (=1+1+1+6+1) machine cycles. Otherwise, if A≤B, the value of A is stored in minor and the program flow branches to the label @L2, where the next instruction to be executed is located. The architecture of this processor cannot transfer the value stored in AR2 directly to memory; instead, it copies the AR2 value to AR1 and then to memory. Following this path, the code consumes 16 (=1+1+1+5+1+1+6) machine cycles. We consider the average value consumed by the operation, i.e., 13 machine cycles.
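A C-level sketch of the integer comparison consistent with the description above (the variable name minor is taken from the text; the rest is our assumption):

```c
/* Integer comparison (Ci): select the smaller of two accumulated edge
 * metrics with an if-else statement. On the C55x, the B < A path costs
 * 10 cycles and the A <= B path costs 16 cycles, which gives the
 * average of 13 machine cycles adopted in the text. */
int select_survivor(int A, int B) {
    int minor;
    if (B < A)
        minor = B;
    else
        minor = A;
    return minor;
}
```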

In summary, the computational cost of the VA operations is shown in Table 8.

Table 8 Computational cost of the operations of VA

4.2 Computational complexity

By substituting the results in Table 8 into (12), a computational complexity of a trellis module M, for this particular architecture, is

$$ TCC(M) = 6\,TC(M) + 13\,MC(M). $$
(13)

We observe that the weight of the merge complexity (13 cycles) is approximately twice the weight of the trellis complexity (6 cycles). Hereafter, (13), a particular case of (12), will be referred to as the computational complexity measure, even though the complexity analysis performed in this section is valid only for the particular DSP in [17].

For the code C1(7,3,3) of Example 1, we obtain TCC(Mconv)=1138.5 and TCC(Mmin)=328. The computational complexity of the minimal trellis is 28.8% of that of the conventional trellis, while its trellis and merge complexities are 25% and 42.9% of those of the conventional trellis, respectively. In the remainder of this paper, we no longer consider the conventional trellis. In the next examples, we analyze the impact of the trellis, merge, and computational complexities on codes of the same rate.

Example 2.

Consider the convolutional codes C2(4,2,3) with profiles $\tilde{\nu} = (3,2,3,4)$ and $\tilde{b} = (0,1,1,0)$, and C3(4,2,3) with profiles $\tilde{\nu} = (3,3,3,3)$ and $\tilde{b} = (1,0,1,0)$. The generator matrices G2(D) and G3(D) are, respectively, given by

$$ G_2(D) = \begin{bmatrix} D + D^2 & 1+D & D & 0 \\ D & 0 & 1+D & 1+D \end{bmatrix} $$

and

$$ G_3(D) = \begin{bmatrix} D^2 & D & 1+D & 1+D \\ 1+D & 1+D & D & 1 \end{bmatrix}. $$

The trellis, merge, and computational complexities of the minimal trellis module for C2 are, respectively, TC(Mmin)=24, MC(Mmin)=6, and TCC(Mmin)=222. For C3, these values are TC(Mmin)=24, MC(Mmin)=8, and TCC(Mmin)=248. Although both codes have the same trellis complexity, they do not have the same merge complexity; as a consequence, their computational complexities differ. Code C3 has a computational complexity approximately 11.7% higher than that of C2. This fact indicates the importance of adopting the computational complexity when comparing the complexity of convolutional codes.
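A short self-contained check (ours) that reproduces the TCC values quoted above for C2 and C3, using (12) with the per-operation costs from Table 8 ($\Phi_S = 3$, $\Phi_{C_b} = 3$, $\Phi_{C_i} = 13$ machine cycles):

```c
#include <stdio.h>

int main(void) {
    /* Per-operation costs from Table 8 (machine cycles) */
    const double phi_s = 3.0, phi_cb = 3.0, phi_ci = 13.0;
    /* Eq. (12) applied to C2 and C3 of Example 2 */
    double tcc_c2 = 24.0 * (phi_cb + phi_s) + 6.0 * phi_ci;  /* expected 222 */
    double tcc_c3 = 24.0 * (phi_cb + phi_s) + 8.0 * phi_ci;  /* expected 248 */
    printf("TCC(C2) = %.0f, TCC(C3) = %.0f\n", tcc_c2, tcc_c3);
    return 0;
}
```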

Example 3.

Consider the convolutional codes C4(4,3,4) with profiles $\tilde{\nu} = (4,4,4,4)$ and $\tilde{b} = (0,1,1,1)$, and C5(4,3,4) with profiles $\tilde{\nu} = (4,5,4,4)$ and $\tilde{b} = (1,0,1,1)$, and generator matrices G4(D) and G5(D), respectively, given by

$$ G_4(D) = \begin{bmatrix} D & D & D & 1+D \\ D & 1+D & 1 & 1 \\ D & D + D^2 & 1 + D + D^2 & 0 \end{bmatrix} $$

and

$$ G_5(D) = \begin{bmatrix} 1 & 1 & 1 & 1 \\ D & D^2 & D + D^2 & 1 \\ D^2 & D + D^2 & 1+D & 0 \end{bmatrix}. $$

Code C4 presents TC(Mmin)=37.33, MC(Mmin)=16, and TCC(Mmin)=432, while code C5 presents TC(Mmin)=42.67, MC(Mmin)=16, and TCC(Mmin)=464. Both codes have the same merge complexity, but not the same trellis and computational complexities. In this case, the computational complexity of C5 is approximately 7.4% higher than that of C4.

These examples clearly show that considering only the trellis complexity, as in [5, 10], or only the merge complexity is not sufficient for a realistic evaluation of the decoding complexity of a trellis module. This is the reason we propose the use of the computational complexity TCC(M).

As a final comment, note from the codes listed in Tables 1, 2, 3, and 4 that TC(M) is typically p times MC(M), where 2.4 ≤ p ≤ 4. This is a property of the codes and is independent of the specific DSP. On the other hand, for the TMS320C55xx, the per-operation cost associated with MC(M) is more than twice that associated with TC(M), i.e., $\Phi_{C_i} = 2.17\,(\Phi_{C_b} + \Phi_S)$. Hence, MC(M) has a great impact on TCC(M) for this particular processor. It is possible, however, that the relative costs of TC(M) and MC(M) are quite different on another DSP. As a consequence, given two different codes, the less complex code for one processor may not be the less complex one for another processor. In other words, evaluating the computational complexity proposed in this paper is an essential step in determining the best choice of codes for a given DSP.

In the following, we provide simulation results of the bit error rate (BER) over the AWGN channel for some of the codes that appear in Tables 1, 2, 3, and 4. In particular, we consider two code rates, 2/4 and 3/5, and plot the BER versus $E_b/N_0$ for two pairs of codes of each rate. The pairs of codes are chosen so that the effect of a slight improvement in the distance spectrum becomes apparent in terms of error performance.

For the case of rate 2/4, as shown in Figure 2, we consider the two codes with TC(Mmin)=96 and the two codes with TC(Mmin)=192 listed in Table 1. One of the codes with TC(Mmin)=96 has MC(Mmin)=24, while the other has MC(Mmin)=32. Such a change in MC(Mmin), and thus in the overall complexity, is sufficient to slightly improve the distance spectrum (for the same free distance). As shown in Figure 2, this makes the more complex code perform around 0.2 dB better in terms of the required $E_b/N_0$ at a BER of $10^{-5}$. For the case of TC(Mmin)=192, one of the codes has MC(Mmin)=48 while the other has MC(Mmin)=64. In this case, the increase in complexity is sufficient to increase the free distance of the second code with respect to the first, also resulting in an advantage of around 0.2 dB in terms of the required $E_b/N_0$ at a BER of $10^{-5}$.

Figure 2. BER versus $E_b/N_0$ for rate-2/4 codes with the same TC(Mmin) and different MC(Mmin), as listed in Table 1.

Results for a higher rate, 3/5, are presented in Figure 3. We investigate the BER of the codes with TC(Mmin)=26.66 and TC(Mmin)=74.66 listed in Table 2. For the case of TC(Mmin)=26.66, the values of MC(Mmin) are 8.00 and 9.33, while for TC(Mmin)=74.66, the values of MC(Mmin) are 21.33 and 26.66. These changes in MC(Mmin) for the same TC(Mmin) are sufficient to give a slight improvement in the distance spectra of the more complex codes. As can be seen from Figure 3, such an improvement in the distance spectrum yields a performance advantage of around 0.3 dB in terms of the required $E_b/N_0$ at a BER of $10^{-5}$.

Figure 3. BER versus $E_b/N_0$ for rate-3/5 codes with the same TC(Mmin) and different MC(Mmin), as listed in Table 2.

5 Conclusions

In this paper, we have presented a computational decoding complexity measure for convolutional codes decoded by a software implementation of the VA with hard decision. More precisely, this measure is related to the number of machine cycles consumed by the decoding operations. A case study was conducted by determining the number of arithmetic operations and the corresponding computational costs of execution based on a typical DSP used for low-power telecommunications applications. A more general analysis considering other processor architectures is left for future work.

We calculated the trellis, merge, and computational complexities of codes of various rates. Among codes of the same rate, those with the same trellis complexity can have different computational complexities. Therefore, the computational complexity proposed in this work is a more adequate measure of the computational effort. The code search conducted in this work provides a good refinement of the computational complexity values of the tabulated codes.

References

1. Lin S, Costello DJ: Error Control Coding. Prentice Hall, Upper Saddle River; 2004.
2. Bougard B, Pollin S, Lenoir G, Eberle W, Van der Perre L, Catthoor F, Dehaene W: Energy-scalability enhancement of wireless local area network transceivers. In Proceedings of the IEEE Workshop on Signal Processing Advances in Wireless Communications, Leuven, Belgium, 11–14 July 2004; 449-453.
3. IEEE: IEEE Standard 802.11, Wireless LAN Medium Access Control (MAC) and Physical (PHY) Layer Specifications: High Speed Physical Layer in the 5 GHz Band. IEEE, Piscataway; 1999.
4. McEliece RJ, Lin W: The trellis complexity of convolutional codes. IEEE Trans. Inform. Theory 1996, 42(6):1855-1864. doi:10.1109/18.556680
5. Uchôa-Filho BF, Souza RD, Pimentel C, Jar M: Convolutional codes under a minimal trellis complexity measure. IEEE Trans. Commun. 2009, 57(1):1-5.
6. Tang HH, Lin MC: On (n, n-1) convolutional codes with low trellis complexity. IEEE Trans. Commun. 2002, 50(1):37-47. doi:10.1109/26.975742
7. Bocharova IE, Kudryashov BD: Rational rate punctured convolutional codes for soft-decision Viterbi decoding. IEEE Trans. Inform. Theory 1997, 43(4):1305-1313. doi:10.1109/18.605600
8. Rosnes E, Ytrehus Ø: Maximum length convolutional codes under a trellis complexity constraint. J. Complexity 2004, 20:372-408. doi:10.1016/j.jco.2003.08.018
9. Uchôa-Filho BF, Souza RD, Pimentel C, Lin M-C: Generalized punctured convolutional codes. IEEE Commun. Lett. 2005, 9(12):1070-1072. doi:10.1109/LCOMM.2005.1576591
10. Katsiotis A, Rizomiliotis P, Kalouptsidis N: New constructions of high-performance low-complexity convolutional codes. IEEE Trans. Commun. 2010, 58(7):1950-1961.
11. Hug F, Bocharova I, Johannesson R, Kudryashov BD: Searching for high-rate convolutional codes via binary syndrome trellises. In Proceedings of the International Symposium on Information Theory, Seoul, Korea, 28 June–3 July 2009; 1358-1362.
12. Benchimol I, Pimentel C, Souza RD: Sectionalization of the minimal trellis module for convolutional codes. In Proceedings of the 35th International Conference on Telecommunications and Signal Processing (TSP), Prague, Czech Republic, 3–4 July 2012; 227-232.
13. Katsiotis A, Rizomiliotis P, Kalouptsidis N: Flexible convolutional codes: variable rate and complexity. IEEE Trans. Commun. 2012, 60(3):608-613.
14. Vardy A: Trellis structure of codes. In Handbook of Coding Theory. Edited by Pless V, Huffman W. Elsevier, The Netherlands; 1998:1989-2117.
15. McEliece RJ: On the BCJR trellis for linear block codes. IEEE Trans. Inform. Theory 1996, 42(4):1072-1092. doi:10.1109/18.508834
16. Moritz GL, Souza RD, Pimentel C, Pellenz ME, Uchôa-Filho BF, Benchimol I: Turbo decoding using the sectionalized minimal trellis of the constituent code: performance-complexity trade-off. IEEE Trans. Commun. 2013, 61(9):3600-3610.
17. Texas Instruments Inc.: TMS320C55x Technical Overview. Texas Instruments, Inc., Dallas; 2000.
18. Kuo SM, Lee BH, Tian W: Real-Time Digital Signal Processing: Implementations and Applications. Wiley, New York; 2006.
19. Texas Instruments Inc.: TMS320C55x DSP CPU Reference Guide. Texas Instruments, Inc., Dallas; 2004.


Acknowledgements

This work was supported in part by FAPEAM, FACEPE, and CNPq (Brazil).

Author information

Correspondence to Cecilio Pimentel.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Benchimol, I.B., Pimentel, C., Souza, R.D. et al. A new computational decoding complexity measure of convolutional codes. EURASIP J. Adv. Signal Process. 2014, 173 (2014). https://doi.org/10.1186/1687-6180-2014-173
