
# A new computational decoding complexity measure of convolutional codes

*EURASIP Journal on Advances in Signal Processing*
**volume 2014**, Article number: 173 (2014)

## Abstract

This paper presents a computational complexity measure of convolutional codes well suited to software implementations of the Viterbi algorithm (VA) operating with hard decision. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of implementing each arithmetic operation is determined in terms of the machine cycles taken by its execution on a typical digital signal processor widely used in low-power telecommunications applications.

## 1 Introduction

Convolutional codes are widely adopted due to their capacity to increase the reliability of digital communication systems with manageable encoding/decoding complexity [1]. A convolutional code can be represented by a regular (or conventional) trellis which allows an efficient implementation of the maximum-likelihood decoding algorithm, known as the Viterbi algorithm (VA). In [2], the authors analyze different receiver implementations for the wireless network IEEE 802.11 standard [3], showing that the VA accounts for 35% of the overall power consumption. This consumption is strongly related to the decoding complexity, which in turn is known to be highly dependent on the trellis representing the code. Therefore, the search for less complex trellis alternatives is essential for some applications, especially those with severe power limitations.

A trellis consists of repeated copies of what is called a trellis module [4–6]. McEliece and Lin [4] defined a decoding complexity measure of a trellis module as the total number of edge symbols in the module (normalized by the number of information bits), called the trellis complexity of the module *M*, denoted by TC(*M*). In [4], a method to construct the so-called ‘minimal’ trellis module is provided. This module, which has an irregular structure presenting a time-varying number of states, minimizes various trellis complexity measures. Good convolutional codes with low-complexity trellis representation are tabulated in [5–13], which indicates a great interest in this subject. These works establish a performance-complexity tradeoff for convolutional codes.

The VA operating with hard decision over a trellis module *M* performs two arithmetic operations: integer sums and comparisons [4, 13–15]. These operations are also considered as a complexity measure of other decoding algorithms, as those used by turbo codes [16]. The number of sums per information bit is equal to TC(*M*). Thus, this complexity measure represents the additive complexity of the trellis module *M*. On the other hand, the number of comparisons at a specific state of *M* is equal to the total number of edges reaching it minus one [14]. The total number of comparisons in *M* represents the merge complexity. Both trellis and merge complexities govern the complexity measures of the VA operating over a trellis module *M*. Therefore, considering only one of these complexities is not sufficient to determine the effort required by the decoding operations.

In this work, we propose a complexity measure, called the computational complexity of *M*, denoted by TCC(*M*), that more adequately reflects the computational effort of decoding a convolutional code using a software implementation of the VA. This measure is defined by considering the total number of sums and comparisons in *M* as well as the respective computational cost (complexity) of implementing these arithmetic operations on a given digital signal processor. More specifically, these costs are measured in terms of the machine cycles consumed by the execution of each operation.

For illustration purposes, we provide in Section 4 one example where we compare the conventional and the minimal trellis modules under the new trellis complexity for a specific architecture. We will see through other examples that two different convolutional codes having the same complexity TC(*M*) defined in [4] may compare differently under TCC(*M*). Therefore, interesting codes may have been overlooked in previous code searches. To remedy this problem, as another contribution of this work, a code search is conducted and the best convolutional codes (in terms of distance spectrum) with respect to TCC(*M*) are tabulated. We present a refined list of codes, with increasing values of TCC(*M*) of the minimal trellis for codes of rates 2/4, 3/5, 4/7, and 5/7.

The remainder of this paper is organized as follows. In Section 2, we determine the number of arithmetic operations performed by the VA and define the computational complexity TCC(*M*). Section 3 presents the results of the code search. In Section 4, we determine the computational cost of each arithmetic operation. Comparisons between TC(*M*) and TCC(*M*) are given for codes of different rates and based on two trellis representations: the conventional and minimal trellis modules. Finally, in Section 5, we present the conclusions of this work.

## 2 Trellis module complexity

Consider a convolutional code *C*(*n*,*k*,*ν*), where *ν*, *k*, and *n* are the overall constraint length, the number of input bits, and the number of output bits, respectively. In general, a trellis module *M* for a convolutional code *C*(*n*,*k*,*ν*) consists of *n*^{′} trellis sections, {2}^{{\nu}_{t}} states at depth *t*, {2}^{{\nu}_{t}+{b}_{t}} edges connecting the states from depth *t* to depth *t*+1, and *l*_{t} bits labeling each edge from depth *t* to depth *t*+1, for 0≤*t*≤*n*^{′}-1 [5]. The decoding operation at each trellis section using the VA has three components: the *Hamming* distance calculation (HDC), the add-compare-select (ACS), and the RAM *traceback*. Next, we analyze the arithmetic operations required by HDC and ACS over the trellis module *M* using the VA operating with hard decision. The RAM *traceback* component does not require arithmetic operations; hence, it is not considered in this work. In this stage, the decoding is accomplished by tracing the maximum-likelihood path backwards through the trellis ([1], Chapter 12).

We next develop a complexity metric in terms of arithmetic operations: summations (*S*), bit comparisons (*C*_{b}), and integer comparisons (*C*_{i}). We write the number of operations of a given complexity measure as

s\,S + {c}_{1}\,{C}_{\mathrm{b}} + {c}_{2}\,{C}_{\mathrm{i}}

to denote *s* summations, *c*_{1} bit comparisons, and *c*_{2} integer comparisons.

The HDC consists of calculating the *Hamming* distance between the received sequence and the coded sequence at each edge of a section *t* of the trellis module *M*. As each edge is labeled by *l*_{t} bits, the same number of bit comparison operations is required. The results of the bit comparisons are added with *l*_{t}-1 sum operations. The total number of edges in this section is {2}^{{\nu}_{t}+{b}_{t}}; therefore, {l}_{t}\,{2}^{{\nu}_{t}+{b}_{t}} bit comparison operations and ({l}_{t}-1)\,{2}^{{\nu}_{t}+{b}_{t}} sum operations are required. From the above, we conclude that the total number of operations required by the HDC at section *t*, denoted by {T}_{t}^{\text{HDC}}, is given by

{T}_{t}^{\text{HDC}} = ({l}_{t}-1)\,{2}^{{\nu}_{t}+{b}_{t}}\,S + {l}_{t}\,{2}^{{\nu}_{t}+{b}_{t}}\,{C}_{\mathrm{b}}. \quad (1)

The ACS performs the metric update of each state of the trellis section. First, each edge metric and the corresponding initial state metric are added together; therefore, {2}^{{\nu}_{t}+{b}_{t}} sum operations are required. In the next step, all the accumulated edge metrics of the edges that converge to each state at section *t*+1 are compared, and the lowest one is selected. There are {2}^{{\nu}_{t+1}} states at section *t*+1 and {2}^{{\nu}_{t}+{b}_{t}} edges between sections *t* and *t*+1. Therefore, {2}^{{\nu}_{t}+{b}_{t}}/{2}^{{\nu}_{t+1}} edges per state are compared, requiring ({2}^{{\nu}_{t}+{b}_{t}}/{2}^{{\nu}_{t+1}})-1 comparison operations per state. Considering now all the states at section *t*+1, a total of {2}^{{\nu}_{t}+{b}_{t}}-{2}^{{\nu}_{t+1}} integer comparison operations are required [12, 13]. We conclude that the total number of operations required by the ACS at section *t*, denoted by {T}_{t}^{\text{ACS}}, is then given by

{T}_{t}^{\text{ACS}} = {2}^{{\nu}_{t}+{b}_{t}}\,S + \left({2}^{{\nu}_{t}+{b}_{t}}-{2}^{{\nu}_{t+1}}\right){C}_{\mathrm{i}}. \quad (2)
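As an illustration, the per-section operation counts in (1) and (2) can be tallied with a short script. This is a sketch in Python; the function name and interface are ours, not from the paper:

```python
def section_ops(nu_t, b_t, nu_next, l_t):
    """Operation counts of one trellis section under hard-decision VA.

    nu_t, b_t : exponents of section t (2^nu_t states, 2^(nu_t+b_t) edges)
    nu_next   : state exponent of section t+1
    l_t       : number of label bits per edge
    Returns (sums, bit_comparisons, integer_comparisons).
    """
    edges = 2 ** (nu_t + b_t)
    # HDC: l_t bit comparisons and (l_t - 1) sums per edge, as in (1)
    bit_cmp = l_t * edges
    sums = (l_t - 1) * edges
    # ACS: one sum per edge, then edges - 2^nu_next integer
    # comparisons in total, as in (2)
    sums += edges
    int_cmp = edges - 2 ** nu_next
    return sums, bit_cmp, int_cmp

# Conventional module of C1(7,3,3): one section, nu=3, b=k=3, l=n=7
print(section_ops(3, 3, 3, 7))   # (448, 448, 56)
```

Dividing by *k*=3 recovers the per-information-bit figures of Example 1: 448/3≈149.33 sums and bit comparisons, and 56/3≈18.67 integer comparisons.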

From (1) and (2), the total number of operations per information bit performed by the VA over a trellis module *M* is

T(M) = \frac{1}{k}\sum_{t=0}^{{n}^{\prime}-1}\left[{l}_{t}\,{2}^{{\nu}_{t}+{b}_{t}}\left(S+{C}_{\mathrm{b}}\right)+\left({2}^{{\nu}_{t}+{b}_{t}}-{2}^{{\nu}_{t+1}}\right){C}_{\mathrm{i}}\right], \quad (3)

where {\nu}_{{n}^{\prime}}={\nu}_{0}. The trellis complexity per information bit, TC(*M*), of a trellis module *M*, according to [4], is given by

\text{TC}(M) = \frac{1}{k}\sum_{t=0}^{{n}^{\prime}-1}{l}_{t}\,{2}^{{\nu}_{t}+{b}_{t}}, \quad (4)

and the merge complexity per information bit, MC(*M*), of a trellis module *M* is [12, 13]

\text{MC}(M) = \frac{1}{k}\sum_{t=0}^{{n}^{\prime}-1}\left({2}^{{\nu}_{t}+{b}_{t}}-{2}^{{\nu}_{t+1}}\right). \quad (5)

We rewrite (3) using (4) and (5) as follows:

T(M) = \text{TC}(M)\left(S+{C}_{\mathrm{b}}\right) + \text{MC}(M)\,{C}_{\mathrm{i}}. \quad (6)

For the conventional trellis module, *M*_{conv}, where *l*_{t}=*n*, *n*^{′}=1, *ν*_{0}=*ν*_{1}=*ν*, and *b*_{0}=*k*, we obtain

T({M}_{\text{conv}}) = \frac{n}{k}\,{2}^{\nu+k}\left(S+{C}_{\mathrm{b}}\right) + \frac{{2}^{\nu+k}-{2}^{\nu}}{k}\,{C}_{\mathrm{i}}. \quad (7)

The minimal trellis module consists of an irregular structure with *n* sections which can present different numbers of states. Each edge is labeled with just one bit [4]. For this minimal trellis module, *M*_{min}, where *n*^{′}=*n*, *l*_{t}=1, {\nu}_{t}={\stackrel{~}{\nu}}_{t}\forall t, {b}_{t}={\stackrel{~}{b}}_{t}\forall t, and *ν*_{n}=*ν*_{0}, we obtain

T({M}_{\text{min}}) = \frac{1}{k}\sum_{t=0}^{n-1}{2}^{{\stackrel{~}{\nu}}_{t}+{\stackrel{~}{b}}_{t}}\left(S+{C}_{\mathrm{b}}\right) + \frac{1}{k}\sum_{t=0}^{n-1}\left({2}^{{\stackrel{~}{\nu}}_{t}+{\stackrel{~}{b}}_{t}}-{2}^{{\stackrel{~}{\nu}}_{t+1}}\right){C}_{\mathrm{i}}. \quad (8)
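For concreteness, (4) and (5) can be evaluated directly from the profiles. The following Python sketch (our own helper, using exact rational arithmetic; not part of the paper) reproduces the complexities of Example 1:

```python
from fractions import Fraction

def tc_mc(k, nu, b, l):
    """TC(M) and MC(M) per information bit, per (4) and (5), for a
    module with n' sections described by the profiles nu, b, l.
    The state profile is cyclic: nu[n'] is taken as nu[0]."""
    n_sec = len(b)
    tc = Fraction(0)
    mc = Fraction(0)
    for t in range(n_sec):
        edges = 2 ** (nu[t] + b[t])
        tc += l[t] * edges
        mc += edges - 2 ** nu[(t + 1) % n_sec]
    return tc / k, mc / k

# C1(7,3,3), conventional module: one section, nu=3, b=k=3, l=n=7
print(tc_mc(3, [3], [3], [7]))       # TC = 448/3 ~ 149.33, MC = 56/3 ~ 18.67
# C1(7,3,3), minimal module: state/edge profiles of Example 1
nu = [3, 4, 3, 4, 3, 4, 4]
b = [1, 0, 1, 0, 1, 0, 0]
print(tc_mc(3, nu, b, [1] * 7))      # TC = 112/3 ~ 37.33, MC = 8
```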

### Example 1

Consider the convolutional code *C*_{1}(7,3,3) generated by the generator matrix

The trellis and merge complexities of the conventional trellis module for *C*_{1}(7,3,3) are TC(*M*_{conv})=149.33 and MC(*M*_{conv})=18.66. Therefore, we obtain from (6)

T({M}_{\text{conv}}) = 149.33\left(S+{C}_{\mathrm{b}}\right) + 18.66\,{C}_{\mathrm{i}}. \quad (10)

The single-section conventional trellis module *M*_{conv} has eight states with eight edges leaving each state, each edge labeled by 7 bits. The minimal trellis module for *C*_{1}(7,3,3) is shown in Figure 1. Defining the state and edge complexity profiles of *M*_{min} as \stackrel{~}{\mathit{\nu}}=\left({\stackrel{~}{\nu}}_{0},\dots ,{\stackrel{~}{\nu}}_{n-1}\right) and \stackrel{~}{\mathbf{b}}=\left({\stackrel{~}{b}}_{0},\dots ,{\stackrel{~}{b}}_{n-1}\right), respectively, we obtain \stackrel{~}{\mathit{\nu}}=(3,4,3,4,3,4,4) and \stackrel{~}{\mathbf{b}}=(1,0,1,0,1,0,0) for *C*_{1}(7,3,3), resulting in TC(*M*_{min})=37.33 and MC(*M*_{min})=8. Similarly, we obtain from (6)

T({M}_{\text{min}}) = 37.33\left(S+{C}_{\mathrm{b}}\right) + 8\,{C}_{\mathrm{i}}. \quad (11)

In this example, the number of operations required by the minimal trellis module relative to the conventional trellis is 25% for *S* and *C*_{b} and 42.9% for *C*_{i}.

Once the number of operations performed by the VA over a trellis module is determined, we must obtain the individual cost of the arithmetic operations *S*, *C*_{b}, and *C*_{i} for a more appropriate complexity comparison.

### 2.1 Computational complexity of VA

Based on (6), we define in this subsection the computational complexity of a trellis module *M*, denoted by TCC(*M*). For this purpose, let {\Phi}_{S}, {\Phi}_{{C}_{\mathrm{b}}}, and {\Phi}_{{C}_{\mathrm{i}}} be the individual computational costs of implementing the arithmetic operations *S*, *C*_{b}, and *C*_{i}, respectively, on a particular architecture. These costs can be measured in terms of the machine cycles consumed by each operation, the power consumed by each operation, or many other cost measures. Thus,

\text{TCC}(M) = \text{TC}(M)\left({\Phi}_{S}+{\Phi}_{{C}_{\mathrm{b}}}\right) + \text{MC}(M)\,{\Phi}_{{C}_{\mathrm{i}}}. \quad (12)

We observe that TCC(*M*) depends on two complexity measures of the module *M*: the trellis complexity, which represents the additive complexity, and the merge complexity. The relative importance of each complexity depends on a particular implementation, as will be discussed later. Note, however, that the complexity measure defined in (12) is general, in the sense that it is valid for any trellis module and relative operation costs.
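Definition (12) is a direct linear combination of the two module complexities, which can be transcribed as follows (a Python sketch with argument names of our own choosing):

```python
from fractions import Fraction

def tcc(tc, mc, cost_sum, cost_bitcmp, cost_intcmp):
    """TCC(M) per (12): each of the TC(M) additive steps costs one sum
    plus one bit comparison; each of the MC(M) merges costs one
    integer comparison."""
    return tc * (cost_sum + cost_bitcmp) + mc * cost_intcmp

# With the TMS320C55xx machine-cycle costs obtained in Section 4
# (3, 3, and 13 cycles), the minimal module of C1(7,3,3) gives:
print(tcc(Fraction(112, 3), 8, 3, 3, 13))   # 328
```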

In the next section, we conduct a new code search where TC(*M*_{min}) and MC(*M*_{min}) are taken as complexity measures. This refined search allows us to find new codes that achieve a wide range of error performance-decoding complexity trade-off.

## 3 Code search

Code search results for good convolutional codes of various rates can be found in the literature. In general, the objective of these code searches is to determine the best spectra for a list of fixed values of the trellis complexity, as performed in [6]. The search proposed in this paper considers both the trellis and merge complexities of the minimal trellis in order to obtain a list of good codes (in terms of distance spectrum) with more refined computational complexity values. We apply the search procedure defined in [5] for codes of rates 2/4, 3/5, 4/7, and 5/7. The main idea is to propose a number of templates for the generator matrix *G*(*D*) in trellis-oriented form with fixed TC(*M*_{min}) and MC(*M*_{min}) (the detailed procedure is provided in [5]). This sets the stage for having ensembles of codes with fixed decoding complexity through which an exhaustive search can be conducted. It should be mentioned that since we do not consider all possible templates in our code search, it is possible that for some particular examples, a code with given TC(*M*_{min}) and MC(*M*_{min}) better than the ones we have tabulated herein may be found elsewhere in the literature. Credit to these references is provided when applicable.

The results of the search are summarized in Tables 1, 2, 3, and 4. For each TC(*M*_{min}) and MC(*M*_{min}) considered, we list the free distance *d*_{f}, the first six terms of the code weight spectrum *N*, and the generator matrix *G*(*D*) of the best code found. The generator matrices are in octal form with the highest power in *D* in the most significant bit of the representation, i.e., 1+*D*+*D*^{2}=1+2+4=7. The italicized entries in Tables 1, 2, 3, and 4 indicate codes presenting the same TC(*M*_{min}) but different MC(*M*_{min}). For instance, for rate 2/4 and TC(*M*_{min})=192 in Table 1, we obtain MC(*M*_{min})=48 and MC(*M*_{min})=64 with *d*_{f}=8 and *d*_{f}=9, respectively. New codes with a variety of trellis complexities are shown in these tables. The existing codes (with possibly different *G*(*D*)) are also indicated. We call the reader’s attention to the fact that other codes with TC(*M*_{min}), MC(*M*_{min}), or weight spectrum different from those in Tables 1, 2, 3, and 4 are documented in the literature (see for example [8] and [10]), and they could be used to extend the performance-complexity trade-off analysis performed in this work.

It should be remarked that the values of TCC shown in Tables 1, 2, 3, and 4 are calculated from (12) with the costs {\Phi}_{S}, {\Phi}_{{C}_{\mathrm{b}}}, and {\Phi}_{{C}_{\mathrm{i}}} determined in the next section (refer to (13)) using the C programming language running on a TMS320C55xx fixed-point digital signal processor (DSP) family from Texas Instruments, Inc. (Dallas, TX, USA) [17]. The cost of each operation, based on the number of machine cycles consumed by its execution, is substituted into (12) in order to obtain a computational complexity for this architecture. This allows us to compare the complexity of several trellis modules.

## 4 A case study

In this section, we describe the implementation of the operations *S*, *C*_{b}, and *C*_{i} to obtain the respective number of machine cycles based on simulations of the TMS320C55xx DSP from Texas Instruments. This device belongs to a family of well-known 16-bit fixed-point low-power consumption DSPs suited for telecommunication applications that require low power, low system cost, and high performance [17]. More details about this processor can be found in [18, 19]. We work with the integrated development environment (IDE) Code Composer Studio (CCStudio) version 4.1.1.00014 [19]. The simulations are conducted with the *C55xx Rev2.x CPU Accurate Simulator*. Once the number of machine cycles of each operation is obtained, we utilize (12) to have the computational complexity measure for a trellis module for this particular architecture.

### 4.1 DSP implementation of the VA operations

Tables 5, 6, and 7 show the implementation details of the operations *S*, *C*_{b}, and *C*_{i}, respectively. In each of these tables, the operation in C language, the corresponding C55x assembly language code generated by the compiler, a short description of the code, and the resulting number of machine cycles are given in the first, second, third, and fourth columns, respectively.

#### 4.1.1 Sum operation (S)

Table 5 shows the implementation details of the operation *S*. As we can observe from the third column, two storage operations and an S operation, each taking one machine cycle, are performed. Therefore, the operation *S* is performed with three machine cycles.

#### 4.1.2 Bit comparison operation (*C*_{b})

The bit comparison operation *C*_{b} is implemented with a bitwise logical XOR instruction, assuming that each bit of the received word has been previously stored in an integer type variable. Table 6 shows the details of the implementation of this operation. Similarly, three machine cycles are necessary to implement the operation *C*_{b}.
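The XOR-based bit comparison described above can be sketched as follows. This is an illustrative Python fragment (our own naming), mirroring the DSP implementation in which each received bit is stored in an integer variable:

```python
def hamming_distance(received_bits, edge_bits):
    """Bit-by-bit Hamming distance of the HDC: one XOR (C_b) per label
    bit, with the results accumulated by sum operations (S).
    Both arguments are sequences of 0/1 integers."""
    return sum(r ^ e for r, e in zip(received_bits, edge_bits))

print(hamming_distance([1, 0, 1, 1], [1, 1, 0, 1]))   # 2
```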

#### 4.1.3 Integer comparison operation (*C*_{i})

The operation *C*_{i} is implemented with an *if-else* statement, which includes the storage of the lowest accumulated edge metric. Table 7 shows how this operation is implemented. The *if* statement is used to compare two accumulated edge metrics, and the lowest one is stored in the integer-type variable *minor*. In the third column, AR1 and AR2 are accumulator registers loaded with the accumulated metric values, represented here by the variables B and A, respectively. Next, the metrics are compared; if *B*<*A*, then status bit TC1 is set, and the program flow branches to the label specified by @L1, where the value of B is stored in *minor*. Following this path, the code consumes 10 (=1+1+1+6+1) machine cycles. Otherwise, if *A*≤*B*, then the value of A is stored in *minor*, and the program flow branches to the label specified by @L2, where the next instruction to be executed is located. The architecture of this processor cannot transfer the value stored in AR2 directly to memory; instead, it copies the AR2 value to AR1 and then to memory. Following this path, the code consumes 16 (=1+1+1+5+1+1+6) machine cycles. We consider the average value consumed by the operation, i.e., 13 machine cycles.
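The *if-else* compare-select and the averaging of the two path costs can be sketched as follows (an illustrative Python fragment; the variable names mirror the description above):

```python
def compare_select(a, b):
    """Keep the lower of two accumulated edge metrics, as the if-else
    statement of Table 7 does (the result is stored in `minor`)."""
    # On the C55x, the B < A path costs 10 cycles and the A <= B path
    # costs 16 cycles; the text charges the operation their average.
    minor = b if b < a else a
    return minor

print(compare_select(7, 5))   # 5
print((10 + 16) / 2)          # 13.0 machine cycles on average
```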

In summary, the computational cost of the VA operations is shown in Table 8.

### 4.2 Computational complexity

By substituting the results in Table 8 into (12), the computational complexity of a trellis module *M*, for this particular architecture, is

\text{TCC}(M) = 6\,\text{TC}(M) + 13\,\text{MC}(M). \quad (13)

We observe that the weight of the latter is approximately two times the weight of the former. Hereafter, (13), a particular case of (12), will be referred to as the computational complexity measure even though the complexity analysis performed in this section is valid for the particular DSP in [17].

For the code *C*_{1}(7,3,3) of Example 1, we obtain TCC(*M*_{conv})=1138.5 and TCC(*M*_{min})=328. The computational complexity of the minimal trellis is 28.8% of that of the conventional trellis. The trellis and merge complexities of the minimal trellis are 25% and 42.9% of those of the conventional trellis, respectively. In the remainder of this paper, we no longer consider the conventional trellis. In the next examples, we analyze the impact of the trellis, merge, and computational complexities on codes of the same rate.

### Example 2

Consider the convolutional codes *C*_{2}(4,2,3) with profiles \stackrel{~}{\mathit{\nu}}=(3,2,3,4) and \stackrel{~}{\mathbf{b}}=(0,1,1,0), and *C*_{3}(4,2,3) with profiles \stackrel{~}{\mathit{\nu}}=(3,3,3,3) and \stackrel{~}{\mathbf{b}}=(1,0,1,0). The generator matrices *G*_{2}(*D*) and *G*_{3}(*D*) are, respectively, given by

and

The trellis, merge, and computational complexities of the minimal trellis module for *C*_{2} are, respectively, TC(*M*_{min})=24, MC(*M*_{min})=6, and TCC(*M*_{min})=222. For *C*_{3}, these values are TC(*M*_{min})=24, MC(*M*_{min})=8, and TCC(*M*_{min})=248. Although both codes have the same trellis complexity, they do not have the same merge complexity; as a consequence, their computational complexities differ. Code *C*_{3} has a computational complexity approximately 11.7% higher than that of *C*_{2}. This fact indicates the importance of adopting the computational complexity to compare the complexity of convolutional codes.

### Example 3

Consider the convolutional codes *C*_{4}(4,3,4) with profiles \stackrel{~}{\mathit{\nu}}=(4,4,4,4) and \stackrel{~}{\mathbf{b}}=(0,1,1,1), and *C*_{5}(4,3,4) with profiles \stackrel{~}{\mathit{\nu}}=(4,5,4,4) and \stackrel{~}{\mathbf{b}}=(1,0,1,1), and generator matrices *G*_{4}(*D*) and *G*_{5}(*D*), respectively, given by

and

Code *C*_{4} presents TC(*M*_{min})=37.33, MC(*M*_{min})=16, and TCC(*M*_{min})=432, while code *C*_{5} presents TC(*M*_{min})=42.67, MC(*M*_{min})=16, and TCC(*M*_{min})=464. Both codes have the same merge complexity, but this is not true for the trellis and computational complexities. In this case, the computational complexity of *C*_{5} is 7.5% higher than that of *C*_{4}.
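The figures in Examples 2 and 3 can be checked mechanically. The sketch below (our own helper, not from the paper; exact rational arithmetic) computes TC and MC from the profiles via (4) and (5), with one label bit per edge as in the minimal trellis, and TCC via (13):

```python
from fractions import Fraction

def tc_mc_tcc(k, nu, b):
    """TC(M_min), MC(M_min) per (4)-(5) with l_t = 1, and
    TCC(M_min) = 6 TC + 13 MC per (13), from the profiles."""
    n = len(nu)
    tc = sum(Fraction(2 ** (nu[t] + b[t]), k) for t in range(n))
    mc = sum(Fraction(2 ** (nu[t] + b[t]) - 2 ** nu[(t + 1) % n], k)
             for t in range(n))
    return tc, mc, 6 * tc + 13 * mc

print(tc_mc_tcc(2, [3, 2, 3, 4], [0, 1, 1, 0]))  # C2: TC=24, MC=6, TCC=222
print(tc_mc_tcc(2, [3, 3, 3, 3], [1, 0, 1, 0]))  # C3: TC=24, MC=8, TCC=248
print(tc_mc_tcc(3, [4, 4, 4, 4], [0, 1, 1, 1]))  # C4: TC=112/3, MC=16, TCC=432
print(tc_mc_tcc(3, [4, 5, 4, 4], [1, 0, 1, 1]))  # C5: TC=128/3, MC=16, TCC=464
```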

These examples clearly show that considering only the trellis complexity, as in [5, 10], or only the merge complexity is not sufficient for a realistic evaluation of the decoding complexity of a trellis module. This is the reason we propose the use of the computational complexity TCC(*M*).

As a final comment, note from the codes listed in Tables 1, 2, 3, and 4 that TC(*M*) is typically *p* times MC(*M*), where 2.4≤*p*≤4. This is a code property, and it is independent of the specific DSP. On the other hand, for the TMS320C55xx, each merge operation costs more than twice as much as each additive step, i.e., {\Phi}_{{C}_{\mathrm{i}}}=2.17\left({\Phi}_{{C}_{\mathrm{b}}}+{\Phi}_{S}\right). So, MC(*M*) has a great impact on TCC(*M*) for this particular processor. It is possible, however, that the relative costs of TC(*M*) and MC(*M*) are totally different on another DSP. As a consequence, given two different codes, the less complex code for one processor may not be the less complex one for another. In other words, evaluating the computational complexity proposed in this paper is an essential step in determining the best choice of codes for a given DSP.

In the following, we provide simulation results of the bit error rate (BER) over the AWGN channel for some of the codes that appear in Tables 1, 2, 3, and 4. In particular, we consider two code rates, 2/4 and 3/5, and plot the BER versus *E*_{b}/*N*_{0} for two pairs of codes for each rate. The pairs of codes are chosen so that the effect of a slight improvement in the distance spectra may become apparent in terms of error performance.

For the case of rate 2/4, as shown in Figure 2, we consider the two codes with TC(*M*_{min})=96 and the two codes with TC(*M*_{min})=192 listed in Table 1. One of the codes with TC(*M*_{min})=96 has MC(*M*_{min})=24, while the other has MC(*M*_{min})=32. Such a change in MC(*M*_{min}), and thus in the overall complexity, is sufficient to slightly improve the distance spectrum (for the same free distance). As shown in Figure 2, this is sufficient to make the more complex code perform around 0.2 dB better in terms of required *E*_{b}/*N*_{0} at a BER of 10^{-5}. For the case of TC(*M*_{min})=192, one of the codes has MC(*M*_{min})=48 while the other has MC(*M*_{min})=64. In this case, the increase in complexity is sufficient to increase the free distance of the second code with respect to the first, also resulting in an advantage of around 0.2 dB in terms of required *E*_{b}/*N*_{0} at a BER of 10^{-5}.

Results for a higher rate, 3/5, are presented in Figure 3. We investigate the BER of the codes with TC(*M*_{min})=26.66 and TC(*M*_{min})=74.66 listed in Table 2. For the case of TC(*M*_{min})=26.66, the values of MC(*M*_{min}) are 8.00 and 9.33, while for TC(*M*_{min})=74.66, they are 21.33 and 26.66. These changes in MC(*M*_{min}) for the same TC(*M*_{min}) are sufficient to slightly improve the distance spectra of the more complex codes. As can be seen from Figure 3, such improvement in distance spectra yields a performance advantage of around 0.3 dB in terms of required *E*_{b}/*N*_{0} at a BER of 10^{-5}.

## 5 Conclusions

In this paper, we have presented a computational decoding complexity measure of convolutional codes to be decoded by a software implementation of the VA with hard decision. More precisely, this measure is related to the number of machine cycles consumed by the decoding operation. A case study was conducted by determining the number of arithmetic operations and the corresponding computational costs of execution based on a typical DSP used for low-power telecommunications applications. A more general analysis covering other processor architectures is left for future work.

We calculated the trellis, merge, and computational complexities of codes of various rates. Among codes of the same rate, those which present the same trellis complexity can present different computational complexities. Therefore, the computational complexity proposed in this work is a more adequate measure of computational effort. A good computational complexity refinement is obtained from the code search conducted in this work.

## References

1. Lin S, Costello DJ: *Error Control Coding*. Prentice Hall, Upper Saddle River; 2004.
2. Bougard B, Pollin S, Lenoir G, Eberle W, Van der Perre L, Catthoor F, Dehaene W: Energy-scalability enhancement of wireless local area network transceivers. In *Proceedings of the IEEE Workshop on Signal Processing Advances in Wireless Communications*. Leuven, Belgium, 11–14 July; 2004:449-453.
3. IEEE: *IEEE Standard 802.11, Wireless LAN Medium Access Control (MAC) and Physical (PHY) Layer Specifications: High Speed Physical Layer in the 5 GHz band*. IEEE, Piscataway; 1999.
4. McEliece RJ, Lin W: The trellis complexity of convolutional codes. *IEEE Trans. Inform. Theory* 1996, 42(6):1855-1864. 10.1109/18.556680
5. Uchôa-Filho BF, Souza RD, Pimentel C, Jar M: Convolutional codes under a minimal trellis complexity measure. *IEEE Trans. Commun.* 2009, 57(1):1-5.
6. Tang HH, Lin MC: On (n,n-1) convolutional codes with low trellis complexity. *IEEE Trans. Commun.* 2002, 50(1):37-47. 10.1109/26.975742
7. Bocharova IE, Kudryashov BD: Rational rate punctured convolutional codes for soft-decision Viterbi decoding. *IEEE Trans. Inform. Theory* 1997, 43(4):1305-1313. 10.1109/18.605600
8. Rosnes E, Ytrehus Ø: Maximum length convolutional codes under a trellis complexity constraint. *J. Complexity* 2004, 20:372-408. 10.1016/j.jco.2003.08.018
9. Uchôa-Filho BF, Souza RD, Pimentel C, Lin M-C: Generalized punctured convolutional codes. *IEEE Commun. Lett.* 2005, 9(12):1070-1072. 10.1109/LCOMM.2005.1576591
10. Katsiotis A, Rizomiliotis P, Kalouptsidis N: New constructions of high-performance low-complexity convolutional codes. *IEEE Trans. Commun.* 2010, 58(7):1950-1961.
11. Hug F, Bocharova I, Johannesson R, Kudryashov BD: Searching for high-rate convolutional codes via binary syndrome trellises. In *Proceedings of the International Symposium on Information Theory*. Seoul, Korea, 28 June–3 July; 2009:1358-1362.
12. Benchimol I, Pimentel C, Souza RD: Sectionalization of the minimal trellis module for convolutional codes. In *Proceedings of the 35th International Conference on Telecommunications and Signal Processing (TSP)*. Prague, Czech Republic, 3–4 July; 2012:227-232.
13. Katsiotis A, Rizomiliotis P, Kalouptsidis N: Flexible convolutional codes: variable rate and complexity. *IEEE Trans. Commun.* 2012, 60(3):608-613.
14. Vardy A: Trellis structure of codes. In *Handbook of Coding Theory*. Edited by: Pless V, Huffman W. Elsevier, The Netherlands; 1998:1989-2117.
15. McEliece RJ: On the BCJR trellis for linear block codes. *IEEE Trans. Inform. Theory* 1996, 42(4):1072-1092. 10.1109/18.508834
16. Moritz GL, Souza RD, Pimentel C, Pellenz ME, Uchôa-Filho BF, Benchimol I: Turbo decoding using the sectionalized minimal trellis of the constituent code: performance-complexity trade-off. *IEEE Trans. Commun.* 2013, 61(9):3600-3610.
17. Texas Instruments Inc: *TMS320C55x Technical Overview*. Texas Instruments, Inc., Dallas; 2000.
18. Kuo SM, Lee BH, Tian W: *Real-Time Digital Signal Processing: Implementations and Applications*. Wiley, New York; 2006.
19. Texas Instruments Inc: *TMS320C55x DSP CPU Reference Guide*. Texas Instruments, Inc., Dallas; 2004.

## Acknowledgements

This work was supported in part by FAPEAM, FACEPE, and CNPq (Brazil).

## Additional information

### Competing interests

The authors declare that they have no competing interests.


## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Benchimol, I.B., Pimentel, C., Souza, R.D. *et al.* A new computational decoding complexity measure of convolutional codes.
*EURASIP J. Adv. Signal Process.* **2014**, 173 (2014). https://doi.org/10.1186/1687-6180-2014-173
