A Systematic Approach to Modified BCJR MAP Algorithms for Convolutional Codes

Since Berrou, Glavieux and Thitimajshima published their landmark paper in 1993, different modified BCJR MAP algorithms have appeared in the literature. The existence of a relatively large number of similar but different modified BCJR MAP algorithms, derived using the Markov chain properties of convolutional codes, naturally leads to the following questions. What is the relationship among the different modified BCJR MAP algorithms? What are their relative performance, computational complexities, and memory requirements? In this paper, we answer these questions. We derive systematically four major modified BCJR MAP algorithms from the BCJR MAP algorithm using simple mathematical transformations. The connections between the original and the four modified BCJR MAP algorithms are established. A detailed analysis of the different modified BCJR MAP algorithms shows that they have identical computational complexities and memory requirements. Computer simulations demonstrate that the four modified BCJR MAP algorithms all have identical performance to the BCJR MAP algorithm.


INTRODUCTION
In 1993, Berrou et al. [1] introduced new types of codes, called turbo codes, which have demonstrated performance close to the theoretical limit predicted by information theory [2]. In the iterative decoding strategy for turbo codes, a soft-input soft-output (SISO) MAP algorithm is used to perform the decoding operation for the two constituent recursive systematic convolutional (RSC) codes. The SISO MAP algorithm presented in [1], which is called the BGT MAP algorithm in [3], is a modified version of the BCJR MAP algorithm proposed in [4]. The BGT MAP algorithm formally appears very complicated. Later, Pietrobon and Barbulescu derived a simpler modified BCJR MAP algorithm [5], which is called the PB MAP algorithm [3]. However, the PB MAP algorithm is not a direct simplification of the BGT MAP algorithm, even though they share similar structures. In [3], the BGT MAP algorithm is directly simplified to obtain a new modified BCJR MAP algorithm that keeps the structure of the BGT MAP algorithm but uses simpler recursive procedures. This new modified BCJR MAP algorithm is called the SBGT MAP algorithm in [3]. The main difference between the SBGT and BGT MAP algorithms lies in the fact that for the BGT MAP algorithm, the forward and backward recursions (cf. [1, equations (21) and (22)]) are formulated in such a way that redundant divisions are involved, whereas in the SBGT MAP algorithm, these redundant computations are removed.
In [3], it is also shown that the symmetry of the trellis diagram of an RSC code can be utilized (albeit implicitly) to derive another modified BCJR MAP algorithm which possesses a structure that is dual to that of the SBGT MAP algorithm and has the same signal processing and memory requirements. This new modified BCJR MAP algorithm is called the dual SBGT MAP algorithm in [3]; it will be called the DSBGT MAP algorithm in this paper.
The BCJR and the modified BCJR MAP algorithms are all derived from first principles by utilizing the Markov chain properties of convolutional codes. Some of the modified BCJR MAP algorithms, as well as the BCJR itself, have actually been implemented in hardware. From both theoretical and practical perspectives, it is of great interest and importance to acquire an understanding of the exact relationship among the different modified BCJR MAP algorithms and their relative advantages.
In this paper, we first derive the BCJR MAP algorithm from first principles for a rate 1/n recursive systematic convolutional code, where n ≥ 2 is an integer. We then systematically derive the aforementioned modified BCJR MAP algorithms and a dual version of the PB MAP algorithm from the BCJR MAP algorithm using simple mathematical transformations. By doing this, we succeed in establishing simple connections among these algorithms. In particular, we show that the modified BCJR MAP algorithm of Pietrobon and Barbulescu can be directly derived from the SBGT MAP algorithm via two simple permutations.
A detailed analysis of the BCJR and the four modified BCJR MAP algorithms formulated in this paper shows that they all have identical computational complexities and memory requirements when implemented appropriately. Systematic computer simulations demonstrate that the four modified BCJR MAP algorithms all have identical performance to the BCJR MAP algorithm.
This paper is organized as follows. In Section 2, the now classical BCJR MAP algorithm is revisited and the notation and terminology used in this paper are introduced. In Section 3, it is shown how the SBGT MAP algorithm can be derived from the BCJR MAP algorithm. In Section 4, a dual version of the SBGT MAP algorithm (the dual SBGT MAP algorithm or the DSBGT MAP algorithm) is derived from the BCJR MAP algorithm. In Section 5, it is shown how the PB MAP algorithm of Pietrobon and Barbulescu can be directly derived from the SBGT MAP algorithm by performing simple permutations on the nodes of the trellis diagram of an RSC code. In Section 6, by performing similar permutations, a new modified BCJR MAP algorithm, called the DPB MAP algorithm in this paper, is derived from the DSBGT MAP algorithm. The DPB MAP algorithm can be considered a dual version of the modified BCJR MAP algorithm of Pietrobon and Barbulescu presented in Section 5. In Section 7, a detailed comparative analysis of computational complexities and memory requirements is carried out, where the BCJR and the four modified BCJR MAP algorithms are shown to have the same computational complexities and memory requirements. In Section 8, computer simulations are discussed, which were performed for the rate 1/2 and rate 1/3 turbo codes defined in the CDMA2000 standard using the BCJR, SBGT, DSBGT, PB, and DPB MAP algorithms. As expected, under identical simulation conditions, the BCJR and the four modified BCJR MAP algorithms formulated here all have identical BER (bit error rate) and FER (frame error rate) performance. Finally, Section 9 concludes this paper.

THE BCJR MAP ALGORITHM REVISITED
To characterize the precise relationship between the original BCJR MAP algorithm and the modified BCJR MAP algorithms, we will present a detailed derivation of the original BCJR MAP algorithm in this section and, in doing so, set up the notation and terminology of this paper. Our derivations show that a proper initialization of the β sequence in the BCJR MAP algorithm in fact does not require any a priori assumptions on the final state of the recursive systematic convolutional code. In other words, no information on the final encoder state is required in the derivation of the original BCJR MAP algorithm. This statement also holds true for the modified BCJR MAP algorithms. Note that in [4], it is assumed that the final encoder state is the all-zero state.
Let n ≥ 2, v ≥ 1, τ ≥ 1 be positive integers and consider a rate 1/n, constraint length v + 1, binary recursive systematic convolutional (RSC) code. Given an input data bit i and an encoder state m, the rate 1/n RSC encoder makes a state transition from state m to a unique new state S and produces an n-bit codeword X. The new encoder state S will be denoted by S_f^i(m), i = 0, 1. The n bits of the codeword X consist of the systematic data bit i and n − 1 parity check bits. These n − 1 parity check bits will be denoted, respectively, by Y_1(i, m), Y_2(i, m), ..., Y_{n−1}(i, m). On the other hand, there is a unique encoder state T from which the encoder makes a state transition to the state m for an input bit i. The encoder state T will be denoted by S_b^i(m), i = 0, 1. Each of the four mappings S_b^0, S_b^1, S_f^0, and S_f^1 is a one-to-one correspondence from the set M = {0, 1, ..., 2^v − 1} onto itself. Since there are v ≥ 1 memory cells in the RSC encoder, there are M = 2^v encoder states, represented by the nonnegative integers 0, 1, ..., 2^v − 1.

Assume the encoder starts at the all-zero state S_0 = 0 and encodes a sequence of information data bits d_1, d_2, d_3, ..., d_τ. At time t, the input into the encoder is d_t, which induces the encoder state transition from S_{t−1} to S_t and generates an n-bit codeword (vector) X_t. The codewords X_t are BPSK modulated and transmitted through an AWGN channel. The matched filter at the receiver yields a sequence of noisy sample vectors Y_t = 2X_t − 1 + N_t, t = 1, 2, 3, ..., τ, where 1 is the n-dimensional vector with all its components equal to 1, X_t is an n-bit codeword consisting of zeros and ones, and N_t is an n-dimensional random vector with i.i.d. zero-mean Gaussian noise components with variance σ^2 > 0. We write Y_t = (r_t^(1), r_t^(2), ..., r_t^(n)), where r_t^(1) is the matched filter output sample generated by the systematic data bit d_t and r_t^(2), ..., r_t^(n) are the matched filter output samples generated by the n − 1 parity check bits Y_1(d_t, S_{t−1}), ..., Y_{n−1}(d_t, S_{t−1}), respectively.

Let Λ(d_t) = ln(Pr{d_t = 1 | Y_1^τ} / Pr{d_t = 0 | Y_1^τ}) and L_a(d_t) = ln(Pr{d_t = 1} / Pr{d_t = 0}). Λ(d_t) and L_a(d_t) are called, respectively, the a posteriori probability (APP) sequence and the a priori information sequence of the input data sequence d_t. In the first half iteration of the turbo decoder, L_a(d_t) = 0, since the input data sequence d_t is assumed i.i.d. The BCJR MAP algorithm centres around the computation of the joint probabilities λ_t(m) = Pr{S_t = m, Y_1^τ} and σ_t(m′, m) = Pr{S_{t−1} = m′, S_t = m, Y_1^τ}. To compute λ_t(m) and σ_t(m′, m), let us define the probability sequences α_t(m) = Pr{S_t = m, Y_1^t}, β_t(m) = Pr{Y_{t+1}^τ | S_t = m}, and γ_t(m′, m) = Pr{S_t = m, Y_t | S_{t−1} = m′}. At this stage, it is important to emphasize that β_τ(m) and α_0(m) are not yet defined. In other words, the boundary conditions or initial values for the backward and forward recursions are undetermined. The boundary values (initial conditions) will be determined shortly from the inherent logical consistency among the computed probabilities. Now assume that 1 ≤ t ≤ τ − 1. We have λ_t(m) = α_t(m) β_t(m). Here we used the equality Pr{Y_{t+1}^τ | S_t = m, Y_1^t} = Pr{Y_{t+1}^τ | S_t = m}, which follows from the Markov chain property that if S_t is known, events after time t do not depend on Y_1^t. Similar facts are used in a number of places in this paper. The reader is referred to [6] for more detailed discussions on Markov chains. Now let t = τ. We have λ_τ(m) = α_τ(m) = α_τ(m) β_τ(m), where, for the first time, we have defined β_τ(m) = 1 for all m. Note that β_τ(m) was not defined in (5).
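The state maps just introduced can be made concrete with a small numerical sketch. The specific generator taps below (feedback 1 + D + D^2, feedforward 1 + D^2, so v = 2) are illustrative assumptions, not taken from the paper, which keeps the code generic; the point is only that each S_f^i is a one-to-one correspondence whose inverse gives S_b^i.

```python
# Sketch: forward/backward state maps for a small rate-1/2 RSC encoder.
# The generator choice (feedback 1 + D + D^2, feedforward 1 + D^2) is an
# illustrative assumption; any RSC code yields the same bijection property.

V = 2          # memory cells, so M = 2**V encoder states
M = 2 ** V

def rsc_step(m, d):
    """One encoder step from state m with input bit d.

    The state is m = (s1 << 1) | s2, where s1 is the most recent register
    content.  Returns (next_state, parity_bit)."""
    s1, s2 = (m >> 1) & 1, m & 1
    a = d ^ s1 ^ s2          # recursive (feedback) bit, taps 1 + D + D^2
    y = a ^ s2               # parity bit, feedforward taps 1 + D^2
    return ((a << 1) | s1), y

# Forward maps S_f^i: state -> next state for input bit i.
S_f = [{m: rsc_step(m, i)[0] for m in range(M)} for i in (0, 1)]

# Each S_f^i is one-to-one from {0, ..., M-1} onto itself, so the
# backward maps S_b^i are simply the inverses.
S_b = [{nxt: m for m, nxt in S_f[i].items()} for i in (0, 1)]
```

One can check that `S_f[0]` and `S_f[1]` each enumerate every state exactly once, and that `S_b[i][S_f[i][m]] == m` for all states, mirroring the one-to-one correspondence stated above.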
It can be shown that σ_t(m′, m) can be expressed in terms of the α, β, and γ sequences. In fact, if 2 ≤ t ≤ τ − 1, we have σ_t(m′, m) = α_{t−1}(m′) γ_t(m′, m) β_t(m), and if t = τ, we obtain the same expression with β_τ(m) = 1. Here we used the Markov chain property and the definition that β_τ(m) = 1.
It remains to check the case t = 1. If t = 1, we have σ_1(m′, m) = α_0(m′) γ_1(m′, m) β_1(m), where we have defined α_0(m′) = Pr{S_0 = m′}. Since it is assumed that the recursive systematic convolutional (RSC) code always starts from the all-zero state S_0 = 0, we have α_0(0) = 1 and α_0(m) = 0 for m ≠ 0. To proceed further, we digress here to introduce some notation. A directed branch on the trellis diagram of a recursive systematic convolutional (RSC) code is completely characterized by the node it emanates from and the node it reaches. In other words, a directed branch on the trellis diagram of an RSC code is identified by an ordered pair of nonnegative integers (m′, m), where 0 ≤ m′, m ≤ 2^v − 1. We remark here that not every ordered pair of integers (m′, m) identifies a directed branch. Let B_{t,0} (resp., B_{t,1}) denote the set of all the directed branches (m′, m) on the trellis diagram of an RSC code for which the tth input bit d_t is 0 (resp., 1).
With the above definitions, we are now in a position to present the forward and backward recursions for the α and β sequences and the formula for computing the APP sequence Λ(d_t).
We can further simplify and reformulate the BCJR MAP algorithm for a binary rate 1/n recursive systematic convolutional code. Substituting (18) and (19) into (12) and (13), we obtain (20). Here we used the fact that, for any given state m and input bit j, there is exactly one predecessor state m′ = S_b^j(m) with (m′, m) ∈ B_{t,j}. By Proposition A.1 in the appendix, we have, for j = 0, 1, the identity (21). By Proposition A.2 in the appendix, we also have, for j = 0, 1, the identities (22), where μ_t > 0 is a positive constant independent of j and m and L_c = 2/σ^2 is called the channel reliability coefficient. Using (21) and (22), the identity (20) can be rewritten as (23), where δ_t = μ_t/(1 + exp(L_a(d_t))) and the branch metric Γ_t(j, m) is defined in (24) for j = 0, 1 and 0 ≤ m ≤ 2^v − 1. Similarly, from (14), (15), (18), (19), and using Propositions A.1 and A.2 in the appendix, a corresponding expression (25) can be derived. Using mathematical induction, it can be shown that the multiplicative constants δ_t, δ_{t+1} can be set to 1 without changing the APP sequence Λ(d_t) (cf. Proposition A.3 in the appendix), and the BCJR MAP algorithm can finally be formulated as follows. Let the α sequence be computed by the forward recursion

α_t(m) = Γ_t(0, S_b^0(m)) α_{t−1}(S_b^0(m)) + Γ_t(1, S_b^1(m)) α_{t−1}(S_b^1(m)),   1 ≤ t ≤ τ,   (26)

and let the β sequence be computed by the backward recursion

β_t(m) = Γ_{t+1}(0, m) β_{t+1}(S_f^0(m)) + Γ_{t+1}(1, m) β_{t+1}(S_f^1(m)),   1 ≤ t ≤ τ − 1,   (27)

with β_τ(m) = 1. The APP sequence Λ(d_t) is then computed by

Λ(d_t) = L_a(d_t) + L_c r_t^(1) + Λ_e(d_t),   (28)

where Λ_e(d_t), the extrinsic information for data bit d_t, is defined by (29). The BCJR MAP algorithm can be reformulated systematically in a number of different ways, resulting in the so-called modified BCJR MAP algorithms. They are discussed in the following sections.
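The formulation above can be sketched end to end in the linear domain. The toy 2-state accumulator code below (rate 1/2, parity a_t = d_t XOR a_{t−1}) and the choice L_a(d_t) = 0 are assumptions made to keep the example self-contained; the α/β recursions and the APP ratio follow the structure described above, with β_τ(m) = 1 and no assumption on the final state.

```python
import math

# A minimal linear-domain BCJR sketch on a 2-state accumulator RSC code
# (systematic bit d_t, parity a_t = d_t XOR a_{t-1}).  The tiny code and
# the parameters are illustrative assumptions, not the paper's setup.

def bcjr_llrs(rx, sigma2=0.5):
    """rx: list of (r_sys, r_par) matched-filter samples.
    Returns the APP sequence Lambda(d_t) as log-likelihood ratios."""
    Lc = 2.0 / sigma2                        # channel reliability coefficient
    tau, M = len(rx), 2
    s_f = lambda j, m: j ^ m                 # forward state map S_f^j
    s_b = lambda j, m: j ^ m                 # backward map (self-inverse here)

    def gamma(t, j, m):
        # branch metric for input j leaving state m at time t (L_a = 0)
        r1, r2 = rx[t - 1]
        x1, x2 = 2 * j - 1, 2 * (j ^ m) - 1  # BPSK-mapped code bits
        return math.exp(0.5 * Lc * (r1 * x1 + r2 * x2))

    # forward recursion; alpha_0 = (1, 0) since the encoder starts in state 0
    alpha = [[1.0, 0.0]] + [[0.0] * M for _ in range(tau)]
    for t in range(1, tau + 1):
        for m in range(M):
            alpha[t][m] = sum(gamma(t, j, s_b(j, m)) * alpha[t - 1][s_b(j, m)]
                              for j in (0, 1))
        s = sum(alpha[t]); alpha[t] = [a / s for a in alpha[t]]

    # backward recursion; beta_tau(m) = 1 (no final-state assumption)
    beta = [[0.0] * M for _ in range(tau)] + [[1.0, 1.0]]
    for t in range(tau - 1, 0, -1):
        for m in range(M):
            beta[t][m] = sum(gamma(t + 1, j, m) * beta[t + 1][s_f(j, m)]
                             for j in (0, 1))
        s = sum(beta[t]); beta[t] = [b / s for b in beta[t]]

    # APP ratio over input-1 branches versus input-0 branches
    llrs = []
    for t in range(1, tau + 1):
        num = sum(alpha[t - 1][s_b(1, m)] * gamma(t, 1, s_b(1, m)) * beta[t][m]
                  for m in range(M))
        den = sum(alpha[t - 1][s_b(0, m)] * gamma(t, 0, s_b(0, m)) * beta[t][m]
                  for m in range(M))
        llrs.append(math.log(num / den))
    return llrs
```

For a quick check, one can encode a short bit sequence with the accumulator, map each bit to a ±1 sample, and verify that the signs of the returned LLRs recover the bits.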

THE SBGT MAP ALGORITHM
In this section, we derive the SBGT MAP algorithm from the BCJR MAP algorithm. For i = 0, 1 and 1 ≤ t ≤ τ, let α^i_t(m) be defined by (30). Equation (17) can then be rewritten as (31). Moreover, α^i_t(m) admits the probabilistic interpretation (32). It is shown below that α^i_t(m) can be computed by the forward recursions (34), and β_t(m) can be computed by (14), (15), and (18), which are repeated here as (35) for easy reference. In fact, from (12) and (13), it follows that (36) holds for 1 ≤ t ≤ τ. Substituting (36) into (30), we obtain, for 2 ≤ t ≤ τ, the recursion (37). Here we used (18) and the fact that for any m′ with (m′, m) ∈ B_{t,i}, γ_{1−i}(Y_t, m′, m) = 0 (cf. Proposition A.4 in the appendix). This proves the forward recursions (34) for 2 ≤ t ≤ τ. Using (30) and the fact that α_0(0) = 1 and α_0(m) = 0, m ≠ 0, it can be verified directly that the forward recursion (37) holds also for t = 1 if α^i_0(m) are defined by (38). Using essentially the same argument as the one used in the proof of Proposition A.3 in the appendix, it can be shown that the values of α^i_0(m) can be reinitialized as α^j_0(0) = 1, α^j_0(m′) = 0, j = 0, 1, m′ ≠ 0. This proves the forward recursions (34) for α^i_t(m). Equations (34), (35), and (31) constitute a simplified version of the modified BCJR MAP algorithm developed by Berrou et al. in the classical paper [1]. We remark here that the main difference between the version presented here and the version in [1] is that the redundant divisions in [1, equations (20), (21)] are now removed. As mentioned in the introduction, for brevity, the modified BCJR MAP algorithm of [1] is called the BGT MAP algorithm and its simplified version presented in this section is called the SBGT MAP algorithm (or simply the SBGT algorithm).
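The bookkeeping behind the SBGT split can be checked numerically. The reading below — α^i_t(m) as the contribution to α_t(m) from branches carrying input bit i, so that α_t(m) = α^0_t(m) + α^1_t(m) — is an assumption consistent with the derivation above; the 2-state trellis, the random positive branch metrics, and the particular split of α_0 are purely illustrative.

```python
import random

# Sketch: plain BCJR forward recursion versus an SBGT-style recursion
# that keeps the per-input-bit split alpha^i_t(m).  Trellis and metrics
# are illustrative assumptions, not the paper's code.

M, tau = 2, 6
s_b = lambda j, m: j ^ m     # predecessor state on the toy accumulator trellis
random.seed(1)
# g[t][j][mp] plays the role of the branch metric for input j leaving state mp.
g = [[[random.random() for mp in range(M)] for j in (0, 1)] for _ in range(tau)]

# Plain BCJR forward recursion for alpha_t(m).
alpha = [[1.0, 0.0]]
for t in range(tau):
    alpha.append([sum(g[t][j][s_b(j, m)] * alpha[t][s_b(j, m)] for j in (0, 1))
                  for m in range(M)])

# SBGT-style recursion on the split sequences ai[t][i][m] = alpha^i_t(m).
# Putting all of alpha_0 into the i = 0 component is an arbitrary choice
# for this demonstration (the text notes the initialization can be chosen
# freely without affecting the final APP sequence).
ai = [[[1.0, 0.0], [0.0, 0.0]]]
for t in range(tau):
    ai.append([[g[t][i][s_b(i, m)] *
                (ai[t][0][s_b(i, m)] + ai[t][1][s_b(i, m)])
                for m in range(M)] for i in (0, 1)])
```

By induction, summing the two split components at every time step reproduces the plain α sequence exactly, which is the consistency the SBGT derivation relies on.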
Using (19), (A.2), (A.4), and applying a mathematical induction argument similar to the one used in the proof of Proposition A.3 in the appendix, the SBGT MAP algorithm can be further simplified and reformulated. Details are omitted here due to space limitations, and the reader is referred to [3] for similar simplifications. In summary, the APP sequence Λ(d_t) is computed by (31), where α^i_t(m) are computed by the forward recursions (39) and β_t(m) are computed by (27), which is repeated here as (40) for easy reference and comparison. Note that the branch metric Γ_t(j, m) is defined in (24).

THE DUAL SBGT (DSBGT) MAP ALGORITHM
This section derives from the BCJR MAP algorithm a dual version of the SBGT MAP algorithm. For i = 0, 1 and 1 ≤ t ≤ τ, let β^i_t(m) be defined by (41). Using this notation, (17) can be rewritten as (42). Moreover, β^i_t(m) admits the probabilistic interpretation (43). The sequence α_t(m) is computed recursively by (12), (13), and (18), which are repeated here as (45) for easy reference and comparisons. The sequence β^i_t(m) is computed recursively by the backward recursions (46), as will be shown next. In fact, from (14) and (15), it follows that (47) holds for 1 ≤ t ≤ τ − 1. Substituting (47) into (41) and using (18), we obtain, for 1 ≤ t ≤ τ − 1, the backward recursions (46). Here we used the fact that for (m, m′) ∈ B_{t,i}, γ_{1−i}(Y_t, m, m′) = 0 (cf. Proposition A.4 in the appendix). This proves the backward recursions (46) for 1 ≤ t ≤ τ − 1. Using (41) and the fact that β_τ(m) = 1 for all m, it can be verified directly that the backward recursions (46) hold also for t = τ if β^i_{τ+1}(m) are suitably defined. As in the derivation of the SBGT MAP algorithm, using a mathematical induction argument similar to the one used in the proof of Proposition A.3 in the appendix, it can be shown that the values of β^j_{τ+1}(m′) can be reinitialized as β^j_{τ+1}(m′) = 1, j = 0, 1, m′ = 0, 1, ..., 2^v − 1, without having any impact on the final computation of Λ(d_t). This completes the proof of the backward recursive relations (46) for the β^i_t(m) sequence. Equations (45), (46), and (42) constitute an MAP algorithm that is dual in structure to the SBGT MAP algorithm. It is thus called the dual SBGT MAP algorithm in [3]. In this paper, the dual SBGT MAP algorithm will be called the DSBGT MAP algorithm (or simply the DSBGT algorithm).
Using (19), (A.2), (A.4), and applying a mathematical induction argument similar to the one used in the proof of Proposition A.3 in the appendix, the DSBGT MAP algorithm can be further simplified and reformulated (details are omitted). The APP sequence Λ(d_t) is computed by (42), where α_t(m) are computed by (26), which is repeated here as (49) for easy reference and comparisons, and β^i_t(m) are computed by the backward recursions (50).

THE PB MAP ALGORITHM DERIVED FROM THE SBGT MAP ALGORITHM
In this section, we show that the modified BCJR MAP algorithm of Pietrobon and Barbulescu can be derived from the SBGT MAP algorithm via simple permutations.
In fact, since the two mappings S_f^1 and S_f^0 are one-to-one correspondences from the set {0, 1, 2, ..., 2^v − 1} onto itself, from (31) it follows that the APP sequence Λ(d_t) can be rewritten as (51). Define a^i_t(m) and b^i_t(m) by (52). Then the APP sequence Λ(d_t) can be computed by (53). It can be verified that the identities (54) hold. The two equations of (54) show that a^i_t(m) and b^i_t(m) are exactly the same as the α^i_t(m) and β^i_t(m) sequences defined in [5].
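The reindexing step rests on nothing more than the invariance of a sum under a bijection of the index set, which a few lines can illustrate (the permutation and the value table below are arbitrary):

```python
# If pi is a one-to-one correspondence of the state set onto itself, then
# summing f(pi(m)) over all states equals summing f(m).  This is the fact
# that lets the APP sums be reindexed by S_f^i in the PB derivation.

M = 8
pi = [3, 0, 6, 1, 7, 2, 5, 4]   # an arbitrary permutation of {0, ..., 7}
f = [0.5, 1.25, 2.0, 0.125, 3.5, 0.75, 1.0, 2.25]   # arbitrary values

lhs = sum(f[pi[m]] for m in range(M))
rhs = sum(f[m] for m in range(M))
assert lhs == rhs               # reindexing leaves the sum unchanged
```

The same invariance applied with π = S_f^1 to the numerator sum and π = S_f^0 to the denominator sum is what turns (31) into (51).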
We can immediately derive the forward and backward recursions for a i t (m) and b i t (m) from the recursions (34) and (35).

EURASIP Journal on Applied Signal Processing
In fact, from the third equation of (34), it follows that the forward recursion (55) holds for 2 ≤ t ≤ τ, where (56) defines the quantities appearing in (55). From the first and second equations of (34), it follows that, for i = 0, 1, the corresponding initial values are given by (57). The backward recursions for b^i_t(m) are similarly derived. In fact, from the second equation of (35), it follows that the backward recursion (58) holds for 1 ≤ t ≤ τ − 1, and from the first equation of (35), it follows that (59) holds. Equations (53), (55), (56), (57), (58), and (59) constitute the modified BCJR MAP algorithm of Pietrobon and Barbulescu developed in [5]. As mentioned in the introduction, for brevity, this algorithm is also called the PB MAP algorithm (or simply the PB algorithm). Using (19), (A.2), (A.4), and applying a mathematical induction argument similar to the one used in the proof of Proposition A.3 in the appendix, the PB MAP algorithm can be further simplified and reformulated as follows. The APP sequence Λ(d_t) is computed by (53), where a^i_t(m) are computed by the forward recursions (60) and b^i_t(m) are computed by the backward recursions (61).

THE DUAL PB (DPB) MAP ALGORITHM
The dual SBGT (DSBGT) MAP algorithm presented in Section 4 can be reformulated via permutations to obtain a dual version of the PB MAP algorithm. In fact, since the two mappings S_b^1 and S_b^0 are one-to-one correspondences from the set {0, 1, 2, ..., 2^v − 1} onto itself, from (42) it follows that the APP sequence Λ(d_t) can be rewritten as (62). Define g^i_t(m) and h^i_t(m) by (63). Then the APP sequence Λ(d_t) can be computed by (64). The two sequences g^i_t(m) and h^i_t(m) admit the probabilistic interpretations (65). From the third equation of (45), it follows that the forward recursion (66) holds for 2 ≤ t ≤ τ, and from the first and second equations of (45), we obtain the initial values (67). Similarly, from the second equation of (46), it follows that the backward recursion (68) holds for 1 ≤ t ≤ τ, and from the first equation of (46), it follows that (69) holds. Equations (64), (66), (67), (68), and (69) constitute a dual version of the modified BCJR MAP algorithm of Pietrobon and Barbulescu. For brevity, it is called the dual PB (DPB) MAP algorithm (or simply the DPB algorithm). The duality that exists between the PB MAP algorithm and the DPB MAP algorithm derives from the fact that the DPB MAP algorithm is obtained by permuting nodes on the trellis diagram of the systematic convolutional code from the DSBGT MAP algorithm, while the PB MAP algorithm is obtained in a similar way from the SBGT MAP algorithm. Using (19), (A.2), (A.4), and applying a mathematical induction argument similar to the one used in the proof of Proposition A.3 in the appendix, the DPB MAP algorithm can be further simplified and reformulated as follows. The APP sequence Λ(d_t) is computed by (64), where g^i_t(m) are computed by the forward recursions (70) and h^i_t(m) are computed by the backward recursions (71). Note that Γ_t(j, m) is defined in (24).

Complexity comparisons in the linear domain
Dualities between the SBGT and DSBGT and between the PB and DPB MAP algorithms immediately imply that the SBGT and DSBGT MAP algorithms have identical computational complexities and memory requirements, and so do the PB and DPB MAP algorithms. We next show that the SBGT and PB MAP algorithms also have identical computational complexities and memory requirements. In fact, for i = 0, 1, from the second equation of (61), we obtain the identity (73). The identity (73) shows that, for any given t, each of the two sequences b^1_t(m) and b^0_t(m) determines the other. Thus in the PB MAP algorithm, only one of the two sequences b^1_t(m), b^0_t(m), 1 ≤ t ≤ τ, needs to be computed in the backward recursion. Comparing (39), (40), (60), and (61), we see that the SBGT and PB MAP algorithms have identical computational complexities and memory requirements. It follows that the SBGT, DSBGT, PB, and DPB MAP algorithms all have identical computational complexities and memory requirements. To compare the BCJR and the modified BCJR MAP algorithms, it suffices to analyze the BCJR and the DSBGT. We will show next that the BCJR and DSBGT MAP algorithms also have identical computational complexities and memory requirements.
First, we note that the branch metrics Γ_t(i, m) are used in both the forward and backward recursions for the BCJR and DSBGT MAP algorithms. To minimize the computational load, the branch metrics Γ_t(i, m) are stored and reused (see [7]). Let us first compute the number of arithmetic operations required to decode a single bit for the BCJR. The branch metric Γ_t(j, m) is computed directly from its definition (24). For each decoded bit, there are a total of B = min{2^{v+1}, 2^n} different branch metrics to be calculated, each requiring n − 1 additions and a single exponentiation. Note that the scaling operation by L_c is performed prior to turbo decoding and is therefore ignored here. To compute a single α_t(m) in the forward recursion (26), two multiplications and one addition are needed (assuming that the B branch metrics are already computed and stored). The branch metrics are reused in the backward recursion (27), hence only two multiplications and one addition are required to compute a single β_t(m). Finally, Λ(d_t) is computed according to its definition (28). Turning to the DSBGT MAP algorithm, we note that common terms can be used to reduce the number of additions by half in the computation of β^0_t(m) and β^1_t(m). An examination of (42) and the recursions (49) and (50) then shows that to decode a single bit, there are B exponentiations, 1 + (n − 1)B + 2M + 2(M − 1) = (n − 1)B + 4M − 1 additions, and 2M + 2M + 2M + 1 = 6M + 1 multiplications. Memory is required for M values of α_t(m) and B values of branch metrics Γ_t(j, m), or a total of B + M units.
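The per-decoded-bit totals just derived can be collected into a small helper (the function name is ours; the formulas are taken directly from the counts above, for the linear domain with branch metrics stored and reused):

```python
# Operation and memory counts per decoded bit, following the totals in
# the text: B exponentiations, (n-1)B + 4M - 1 additions, 6M + 1
# multiplications, and B + M memory units.

def bcjr_costs(v, n):
    """Counts for a rate-1/n, constraint length v + 1 code."""
    M = 2 ** v                       # number of encoder states
    B = min(2 ** (v + 1), 2 ** n)    # number of distinct branch metrics
    return {
        "exponentiations": B,
        "additions": (n - 1) * B + 4 * M - 1,
        "multiplications": 6 * M + 1,
        "memory_units": B + M,       # M alphas plus B stored branch metrics
    }
```

For instance, for an illustrative v = 3, n = 2 constituent code this gives B = 4 and M = 8, hence 4 exponentiations, 35 additions, 49 multiplications, and 12 memory units per decoded bit.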
The preceding calculations show that indeed the BCJR and DSBGT MAP algorithms have identical computational complexities and memory requirements, and therefore the BCJR and the four modified BCJR MAP algorithms all have identical computational complexities and memory requirements.

Complexity comparisons in the log domain
In the log domain, the exponentiation operations of the linear domain disappear, multiplications are converted into additions, and the additions in the recursions are converted into the so-called E operation defined in [7, equation (21)], based on the formula ln(e^x + e^y) = max(x, y) + ln(1 + e^{−|x−y|}), where the function ln(1 + e^{−|x−y|}) is implemented with a lookup table. Following the same analysis as in the previous subsection, it can be shown that the BCJR and the four modified BCJR MAP algorithms also have identical computational complexities and memory requirements in the log domain.
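A sketch of the E operation: the exact max* form and a lookup-table approximation of the correction term. The table range (|x − y| < 8) and quantization step (0.25) are illustrative choices, not values taken from [7].

```python
import math

# The log-domain E operation (max*): exact form and a table-lookup
# approximation of the correction term ln(1 + e^{-|x-y|}).

def max_star_exact(x, y):
    """ln(e^x + e^y) = max(x, y) + ln(1 + e^{-|x - y|})."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

# Precomputed correction table over |x - y| in [0, 8) with step 0.25
# (range and step are assumptions for this sketch).
STEP = 0.25
TABLE = [math.log1p(math.exp(-STEP * k)) for k in range(32)]

def max_star_lut(x, y):
    d = abs(x - y)
    corr = TABLE[int(d / STEP)] if d < 8.0 else 0.0
    return max(x, y) + corr
```

Since the correction term never exceeds ln 2 and decays quickly, a coarse table like this one already tracks the exact E operation closely, which is why the log-domain recursions keep the complexity comparison unchanged.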

Initialization of the backward recursion
In this paper, the BCJR and the four modified BCJR MAP algorithms are formulated for a truncated or nonterminated binary convolutional code. If the binary convolutional code is terminated so that the final encoder state is the zero state, then it can be shown that the β_t(m) sequence for the BCJR MAP algorithm can be initialized by setting β_τ(0) = 1 and β_τ(m) = 0 for m ≠ 0. To see why this is the case, consider the backward recursion (15). Since the code is terminated at the zero state, for t = τ − 1, from (18) and (19) we can see that the terms γ_{t+1}(m, m′) = γ_τ(m, m′) in the recursion are all zero except for γ_{t+1}(m, 0), which is the only term that may be nonzero. This implies that the β_t(m) sequence can be initialized by setting β_τ(0) = 1 and resetting β_τ(m) = 0 for m ≠ 0. A similar argument applies to the SBGT, DSBGT, PB, and DPB MAP algorithms as well. For a terminated binary convolutional code, we have the following initialization strategies. For the backward recursion (40) in the SBGT MAP algorithm, the sequence β_t(m) is initialized by setting β_τ(0) = 1 and β_τ(m) = 0 for m ≠ 0. For the backward recursions (50) in the DSBGT MAP algorithm, the two sequences β^0_t(m) and β^1_t(m) are initialized in an analogous way.
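The two initialization strategies discussed above can be summarized in a few lines (the helper name is ours):

```python
# Boundary values beta_tau(m) for the backward recursion: all ones for a
# truncated code (no assumption on the final state), and an indicator on
# the zero state for a code terminated at the zero state.

def init_beta(M, terminated):
    """Return beta_tau(m) for m = 0, ..., M - 1."""
    if terminated:
        return [1.0] + [0.0] * (M - 1)   # beta_tau(0) = 1, beta_tau(m) = 0 otherwise
    return [1.0] * M                     # beta_tau(m) = 1 for all m
```

In the truncated case, the all-ones boundary simply drops out of the APP ratio, which is why no knowledge of the final encoder state is needed.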

SIMULATIONS
The BCJR and the four modified BCJR MAP algorithms formulated in this paper are all mathematically equivalent and should produce identical results in the linear domain. To verify this, the rate 1/2 and rate 1/3 turbo codes defined in the CDMA2000 standard were tested over the AWGN channel with the interleaver size selected to be 1146. At least 500 bit errors were accumulated for each selected value of E_b/N_0. Under the same simulation conditions (same random number generators starting at the same seeds), it turns out that indeed the BCJR and the four modified BCJR MAP algorithms all have identical BER (bit error rate) and FER (frame error rate) performance. More specifically, they generate exactly the same number of bit errors and exactly the same number of frame errors under identical simulation conditions (cf. Figures 2 and 3).
The BCJR and the four modified BCJR MAP algorithms are expected to have identical performance in the log domain since they have identical performance in the linear domain.

CONCLUSIONS
In this paper, four different modified BCJR MAP algorithms have been systematically derived from the BCJR MAP algorithm via mathematical transformations. The simple connections among these algorithms are thus established. It is shown that the BCJR and the four modified BCJR MAP algorithms have identical computational complexities and memory requirements. Computer simulations confirmed that the BCJR and the four modified BCJR MAP algorithms all have identical performance in an AWGN channel.
The BCJR and the modified BCJR MAP algorithms presented in this paper are formulated for a rate 1/n convolutional code. It can be shown that these algorithms can all be extended to a general rate k/n recursive systematic convolutional code. These extensions will be treated elsewhere.
The two sums in the computation of Λ(d_t) can be evaluated at the same time, with the terms in (78) computed only once. It follows that to compute Λ(d_t), 2M + 1 multiplications and 2(M − 1) additions are required (the single division is considered equivalent to a multiplication, the single natural logarithm operation is ignored, and M = 2^v).