An FPGA Implementation of (3, 6)-Regular Low-Density Parity-Check Code Decoder

Because of their excellent error-correcting performance, low-density parity-check (LDPC) codes have recently attracted much attention. In this paper, we are interested in practical LDPC code decoder hardware implementations. The direct fully parallel decoder implementation usually incurs too high hardware complexity for many real applications, thus partly parallel decoder design approaches that can achieve appropriate trade-offs between hardware complexity and decoding throughput are highly desirable. Applying a joint code and decoder design methodology, we develop a high-speed (3, k)-regular LDPC code partly parallel decoder architecture, based on which we implement a 9216-bit, rate-1/2 (3, 6)-regular LDPC code decoder on a Xilinx FPGA device. This partly parallel decoder supports a maximum symbol throughput of 54 Mbps and achieves BER 10^-6 at 2 dB over the AWGN channel while performing a maximum of 18 decoding iterations.


INTRODUCTION
In the past few years, the recently rediscovered low-density parity-check (LDPC) codes [1,2,3] have received a lot of attention and have been widely considered as next-generation error-correcting codes for telecommunication and magnetic storage. Defined as the null space of a very sparse M × N parity-check matrix H, an LDPC code is typically represented by a bipartite graph, usually called a Tanner graph, in which one set of N variable nodes corresponds to the set of codeword bits, another set of M check nodes corresponds to the set of parity-check constraints, and each edge corresponds to a nonzero entry in the parity-check matrix H. (A bipartite graph is one in which the nodes can be partitioned into two sets, X and Y, so that the only edges of the graph are between the nodes in X and the nodes in Y.) An LDPC code is known as a (j, k)-regular LDPC code if each variable node has degree j and each check node has degree k, or equivalently, if each column and each row of its parity-check matrix have j and k nonzero entries, respectively. The code rate of a (j, k)-regular LDPC code is 1 − j/k provided that the parity-check matrix has full rank. The construction of LDPC codes is typically random. LDPC codes can be effectively decoded by the iterative belief-propagation (BP) algorithm [3] that, as illustrated in Figure 1, directly matches the Tanner graph: decoding messages are iteratively computed on each variable node and check node and exchanged through the edges between the neighboring nodes.
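As a toy illustration of these definitions (our own example, not a code from the paper), the (j, k)-regularity and the design rate 1 − j/k can be checked directly from a parity-check matrix:

```python
# Illustrative sketch: verify (j, k)-regularity and the design rate 1 - j/k
# for a small parity-check matrix H, stored as a list of rows.
H = [
    [1, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 1, 1],
]  # 3 x 6 example: every column has j = 2 ones, every row has k = 4 ones

row_weights = [sum(row) for row in H]                       # check node degrees
col_weights = [sum(row[n] for row in H) for n in range(6)]  # variable node degrees
j, k = col_weights[0], row_weights[0]
assert all(w == j for w in col_weights) and all(w == k for w in row_weights)
print(f"({j}, {k})-regular, design rate = {1 - j / k}")  # -> (2, 4)-regular, design rate = 0.5
```

Note that 1 − j/k is only the design rate; the actual rate is higher whenever the parity-check matrix is rank deficient.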
Recently, tremendous efforts have been devoted to analyzing and improving the error-correcting capability of LDPC codes; see [4,5,6,7,8,9,10,11] and so forth. Besides their powerful error-correcting capability, another important reason why LDPC codes attract so much attention is that the iterative BP decoding algorithm is inherently fully parallel, so a great potential decoding speed can be expected.
The high-speed decoder hardware implementation is obviously one of the most crucial issues determining the extent of LDPC applications in the real world. The most natural solution for the decoder architecture design is to directly instantiate the BP decoding algorithm in hardware: each variable node and check node is physically assigned its own processor and all the processors are connected through an interconnection network reflecting the Tanner graph connectivity. By completely exploiting the parallelism of the BP decoding algorithm, such a fully parallel decoder can achieve very high decoding speed; for example, a 1024-bit, rate-1/2 LDPC code fully parallel decoder with a maximum symbol throughput of 1 Gbps has been physically implemented using ASIC technology [12]. The main disadvantage of such a fully parallel design is that the hardware complexity grows prohibitively with the code length, which is typically very large (at least several thousand bits); for example, for 1-K code length, the ASIC decoder implementation [12] consumes 1.7M gates. Moreover, as pointed out in [12], the routing overhead for implementing the entire interconnection network becomes quite formidable due to the large code length and the randomness of the Tanner graph. Thus, high-speed partly parallel decoder design approaches that achieve appropriate trade-offs between hardware complexity and decoding throughput are highly desirable. For any given LDPC code, due to the randomness of its Tanner graph, it is nearly impossible to directly develop a high-speed partly parallel decoder architecture. To circumvent this difficulty, Boutillon et al. [13] proposed a decoder-first code design methodology: instead of trying to conceive a high-speed partly parallel decoder for a given random LDPC code, use an available high-speed partly parallel decoder to define a constrained random LDPC code.
We may consider it as an application of the well-known "Think in the reverse direction" methodology. Inspired by the decoder-first code design methodology, we proposed a joint code and decoder design methodology in [14] for (3, k)-regular LDPC code partly parallel decoder design. By jointly conceiving the code construction and partly parallel decoder architecture design, we presented a (3, k)-regular LDPC code partly parallel decoder structure in [14], which not only defines very good (3, k)-regular LDPC codes but also could potentially achieve high-speed partly parallel decoding.
In this paper, applying the joint code and decoder design methodology, we develop an elaborate (3, k)-regular LDPC code high-speed partly parallel decoder architecture, based on which we implement a 9216-bit, rate-1/2 (3, 6)-regular LDPC code decoder using a Xilinx Virtex FPGA (Field Programmable Gate Array) device. In this work, we significantly modify the original decoder structure [14] to improve the decoding throughput and simplify the control logic design. To achieve good error-correcting capability, the LDPC code decoder architecture has to possess randomness to some extent, which makes FPGA implementations more challenging since an FPGA has fixed and regular hardware resources. We propose a novel scheme to realize the random connectivity by concatenating two routing networks, in which all the random hardwire routings are localized and the overall routing complexity is significantly reduced. Exploiting the good minimum distance property of LDPC codes, this decoder employs the parity check as an early decoding stopping criterion to achieve adaptive decoding for energy reduction. With a maximum of 18 decoding iterations, this FPGA partly parallel decoder supports a maximum symbol throughput of 54 Mbps and achieves BER (bit error rate) 10^-6 at 2 dB over the AWGN channel.
This paper begins with a brief description of the LDPC code decoding algorithm in Section 2. In Section 3, we briefly describe the joint code and decoder design methodology for (3, k)-regular LDPC code partly parallel decoder design. In Section 4, we present the detailed high-speed partly parallel decoder architecture design. Finally, an FPGA implementation of a (3, 6)-regular LDPC code partly parallel decoder is discussed in Section 5.

DECODING ALGORITHM
Since the direct implementation of the BP algorithm incurs too high hardware complexity due to the large number of multiplications, we introduce some logarithmic quantities to convert these complicated multiplications into additions, which leads to the Log-BP algorithm [2,15].
Before the description of the Log-BP decoding algorithm, we introduce some definitions as follows. Let H denote the M × N sparse parity-check matrix of the LDPC code and H_{i,j} denote the entry of H at position (i, j). We define the set of bits n that participate in parity-check m as N(m) = {n : H_{m,n} = 1}, and the set of parity-checks m in which bit n participates as M(n) = {m : H_{m,n} = 1}. We denote the set N(m) with bit n excluded by N(m)\n, and the set M(n) with parity-check m excluded by M(n)\m.
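These sets can be sketched directly from the definitions (a hypothetical helper of our own, not part of the decoder):

```python
# Hypothetical helper mirroring the definitions above: N(m) is the set of bit
# positions participating in check m, M(n) the set of checks containing bit n.
def participation_sets(H):
    N = {m: {n for n, h in enumerate(row) if h} for m, row in enumerate(H)}
    M = {n: {m for m, row in enumerate(H) if row[n]}
         for n in range(len(H[0]))}
    return N, M

H = [[1, 1, 0, 1],
     [0, 1, 1, 1]]
N, M = participation_sets(H)
print(sorted(N[0] - {1}))   # N(0) \ 1 -> [0, 3]
print(sorted(M[1] - {0}))   # M(1) \ 0 -> [1]
```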
In each iteration, for every (m, n) with H_{m,n} = 1, the check node update computes

  β_{m,n} = ( ∏_{n'∈N(m)\n} sign(α_{m,n'}) ) · f( Σ_{n'∈N(m)\n} f(α_{m,n'}) ),

with f(x) = log((1 + e^{−|x|})/(1 − e^{−|x|})), and the variable node update computes α_{m,n} = γ_{m,n}, where γ_{m,n} = γ_n + Σ_{m'∈M(n)\m} β_{m',n}. For each n, update the pseudo-posterior log-likelihood ratio (LLR) λ_n as

  λ_n = γ_n + Σ_{m∈M(n)} β_{m,n}.

We call α_{m,n} and β_{m,n} in the above algorithm extrinsic messages, where α_{m,n} is delivered from the variable node to the check node and β_{m,n} is delivered from the check node to the variable node.
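The updates above can be sketched in floating point as follows (our own unquantized model; the hardware of Section 4 uses 5-bit messages and a different schedule):

```python
import math

def f(x):
    # f(x) = log((1 + e^-|x|) / (1 - e^-|x|)); note f(f(x)) = x for x > 0
    e = math.exp(-abs(x))
    return math.log((1 + e) / (1 - e))

def check_update(alpha):
    # alpha: {n: alpha_{m,n}}, the incoming messages of one check node m
    beta = {}
    for n in alpha:
        others = [alpha[n2] for n2 in alpha if n2 != n]
        sign = -1 if sum(1 for a in others if a < 0) % 2 else 1
        beta[n] = sign * f(sum(f(a) for a in others))
    return beta

def variable_update(gamma_n, beta):
    # beta: {m: beta_{m,n}}, the incoming messages of one variable node n
    alpha = {m: gamma_n + sum(b for m2, b in beta.items() if m2 != m)
             for m in beta}
    llr = gamma_n + sum(beta.values())      # pseudo-posterior LLR lambda_n
    return alpha, llr

beta = check_update({0: 1.5, 1: -2.0})      # degree-2 check: messages swap
alpha, llr = variable_update(0.5, {0: 1.0, 1: -0.5})
```

The hard decision for bit n is then taken from the sign of λ_n (the sign-to-bit mapping depends on the channel LLR convention).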
Each decoding iteration can be performed in fully parallel fashion by physically mapping each check node to one individual check node processing unit (CNU) and each variable node to one individual variable node processing unit (VNU).
Moreover, by delivering the hard decision x̂_n from each VNU to its neighboring CNUs, the parity check H · x̂ can easily be performed by all the CNUs, so the decoding can terminate as soon as all parity checks are satisfied. Thanks to the good minimum distance property of LDPC codes, such an adaptive decoding scheme can effectively reduce the average energy consumption of the decoder without performance degradation.
In partly parallel decoding, the operations of a certain number of check nodes or variable nodes are time-multiplexed, or folded [16], onto a single CNU or VNU. For an LDPC code with M check nodes and N variable nodes, if its partly parallel decoder contains M_p CNUs and N_p VNUs, we denote M/M_p as the CNU folding factor and N/N_p as the VNU folding factor.

JOINT CODE AND DECODER DESIGN
In this section, we briefly describe the joint (3, k)-regular LDPC code and decoder design methodology [14]. It is well known that the BP (or Log-BP) decoding algorithm works well if the underlying Tanner graph is 4-cycle free and does not contain too many short cycles. Thus the motivation of this joint design approach is to construct an LDPC code that not only fits a high-speed partly parallel decoder but also has an average cycle length as large as possible in its 4-cycle-free Tanner graph. This joint design process is outlined as follows and the corresponding schematic flow diagram is shown in Figure 2.
(1) Explicitly construct two matrices H_1 and H_2 in such a way that H = [H_1^T, H_2^T]^T defines a (2, k)-regular LDPC code C_2 whose Tanner graph has a girth of 12.
(2) Develop a partly parallel decoder that is configured by a set of constrained random parameters and defines a (3, k)-regular LDPC code ensemble, in which each code is a subcode of C_2 and has the parity-check matrix H = [H_1^T, H_2^T, H_3^T]^T.

As shown in Figure 3, both H_1 and H_2 are L·k by L·k^2 submatrices. Each block matrix I_{x,y} in H_1 is an L × L identity matrix and each block matrix P_{x,y} in H_2 is obtained by a cyclic shift of an L × L identity matrix. Let T denote the right cyclic shift operator, where T^u(Q) represents right cyclic shifting of matrix Q by u columns; then P_{x,y} = T^{((x−1)·y) mod L}(I). Notice that in both H_1 and H_2, each row contains k 1's and each column contains a single 1. Thus, the matrix H = [H_1^T, H_2^T]^T defines a (2, k)-regular LDPC code C_2 with L·k^2 variable nodes and 2L·k check nodes. Let G denote the Tanner graph of C_2; we have the following theorem regarding the girth of G: under the condition on L given in [14], the girth of G is 12 and there is at least one 12-cycle passing through each check node.
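The explicit construction of H_1 and H_2 can be sketched at a toy size (our own model; the block layout assumes, as implied by the shuffle networks π_1 and π_2 in Section 4, that H_1 ties the blocks with the same x-index together and H_2 ties the blocks with the same y-index together):

```python
# Toy-size sketch (not the implemented 9216-bit code): H1 tiles L x L identity
# blocks I_{x,y}; each block P_{x,y} in H2 is the identity right-cyclic-shifted
# by ((x - 1) * y) mod L, the value later loaded into the AG^(2) counters.
L, k = 5, 3
Z = [[0] * L for _ in range(L)]

def identity(n):
    return [[int(c == r) for c in range(n)] for r in range(n)]

def right_shift(M, u):                  # T^u(M): right cyclic shift by u columns
    return [row[-u:] + row[:-u] if u else row[:] for row in M]

def assemble(bands):
    # bands: list of {(x, y): L x L block}; block column (x, y) sits at
    # block index (x - 1) * k + (y - 1), matching the VG_{x,y} ordering
    H = []
    for blocks in bands:
        band = [[] for _ in range(L)]
        for x in range(1, k + 1):
            for y in range(1, k + 1):
                B = blocks.get((x, y), Z)
                for r in range(L):
                    band[r] += B[r]
        H += band
    return H

# H1: band x holds I_{x,y} over the variable groups VG_{x,1..k}
H1 = assemble([{(x, y): identity(L) for y in range(1, k + 1)}
               for x in range(1, k + 1)])
# H2: band y holds P_{x,y} = T^{((x-1)y) mod L}(I) over VG_{1..k, y}
H2 = assemble([{(x, y): right_shift(identity(L), ((x - 1) * y) % L)
                for x in range(1, k + 1)}
               for y in range(1, k + 1)])

H = H1 + H2                             # row stacking of [H1^T, H2^T]^T
assert all(sum(row) == k for row in H)                           # row weight k
assert all(sum(r[n] for r in H) == 2 for n in range(L * k * k))  # column weight 2
```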

Partly parallel decoder
Based on the specific structure of H, a principal (3, k)-regular LDPC code partly parallel decoder structure was presented in [14]. This decoder is configured by a set of constrained random parameters and defines a (3, k)-regular LDPC code ensemble. Each code in this ensemble is essentially constructed by inserting extra L·k check nodes into the high-girth (2, k)-regular LDPC code C_2 under the constraints specified by the decoder. Therefore, it is reasonable to expect that the codes in this ensemble are unlikely to contain many short cycles, and we may easily select a good code from it. For real applications, we can select a good code from this code ensemble as follows: first find several codes in the ensemble with relatively high average cycle lengths, then select the one leading to the best results in computer simulations.
The principal partly parallel decoder structure presented in [14] has the following properties.
(i) It contains k^2 memory banks, each of which consists of several RAMs to store all the decoding messages associated with L variable nodes. (ii) Each memory bank associates with one address generator that is configured by one element in a constrained random integer set. (iii) It contains a configurable random-like one-dimensional shuffle network with the routing complexity scaled by k^2. (iv) It contains k^2 VNUs and k CNUs so that the VNU and CNU folding factors are L·k^2/k^2 = L and 3L·k/k = 3L, respectively. (v) Each iteration completes in 3L clock cycles, in which only CNUs work in the first 2L clock cycles and both CNUs and VNUs work in the last L clock cycles.
Over all possible configurations of these constrained random parameters, this decoder defines a (3, k)-regular LDPC code ensemble in which each code has the parity-check matrix H = [H_1^T, H_2^T, H_3^T]^T, where the submatrix H_3 is jointly specified by these parameters.

PARTLY PARALLEL DECODER ARCHITECTURE
In this paper, applying the joint code and decoder design methodology, we develop a high-speed (3, k)-regular LDPC code partly parallel decoder architecture, based on which a 9216-bit, rate-1/2 (3, 6)-regular LDPC code partly parallel decoder has been implemented using a Xilinx Virtex FPGA device. Compared with the structure presented in [14], this partly parallel decoder architecture has the following distinct characteristics.
(i) It employs a novel concatenated configurable random two-dimensional shuffle network implementation scheme to realize the random-like connectivity with low routing overhead, which is especially desirable for FPGA implementations. (ii) To improve the decoding throughput, both the VNU and CNU folding factors are L, instead of L and 3L as in the structure presented in [14]. (iii) To simplify the control logic design and reduce the memory bandwidth requirement, this decoder completes each decoding iteration in 2L clock cycles, in which CNUs and VNUs work during the first and second L clock cycles, respectively.
Following the joint design methodology, this decoder should define a (3, k)-regular LDPC code ensemble in which each code has L·k^2 variable nodes and 3L·k check nodes and, as illustrated in Figure 4, the parity-check matrix of each code has the form H = [H_1^T, H_2^T, H_3^T]^T, where H_1 and H_2 have the explicit structures shown in Figure 3 and the random-like H_3 is specified by certain configuration parameters of the decoder. To facilitate the description of the decoder architecture, we introduce some definitions as follows: we denote the submatrix consisting of the L consecutive columns in H that go through the block matrix I_{x,y} as H^(x,y), in which, from left to right, each column is labeled as h^(x,y)_i with i increasing from 1 to L, as shown in Figure 4. We label the variable node corresponding to column h^(x,y)_i as v^(x,y)_i; the L variable nodes v^(x,y)_i for i = 1, ..., L constitute a variable node group VG_{x,y}. Finally, we arrange the L·k check nodes corresponding to all the L·k rows of submatrix H_i into check node group CG_i.

Figure 5 shows the principal structure of this partly parallel decoder. It mainly contains k^2 PE blocks PE_{x,y} for 1 ≤ x, y ≤ k, three bidirectional shuffle networks π_1, π_2, and π_3, and 3·k CNUs. Each PE_{x,y} contains one memory bank RAMs_{x,y} that stores all the decoding messages, including the intrinsic and extrinsic messages and hard decisions, associated with all the L variable nodes in the variable node group VG_{x,y}, and contains one VNU to perform the variable node computations for these L variable nodes. Each bidirectional shuffle network π_i realizes the extrinsic message exchange between all the L·k^2 variable nodes and the L·k check nodes in CG_i. The k CNUs CNU_{i,j}, for j = 1, ..., k, perform the check node computations for all the L·k check nodes in CG_i. This decoder completes each decoding iteration in 2L clock cycles; during the first and second L clock cycles, it works in check node processing mode and variable node processing mode, respectively.
In the check node processing mode, the decoder not only performs the computations of all the check nodes but also completes the extrinsic message exchange between neighboring nodes. In variable node processing mode, the decoder only performs the computations of all the variable nodes.
The intrinsic and extrinsic messages are all quantized to five bits, and the iterative decoding datapaths of this partly parallel decoder are illustrated in Figure 6, in which the datapaths in check node processing and variable node processing are represented by solid lines and dash-dot lines, respectively. As shown in Figure 6, each PE block PE_{x,y} contains five RAM blocks: EXT RAM_i for i = 1, 2, 3, INT RAM, and DEC RAM. Each EXT RAM_i has L memory locations, and the location with address d − 1 (1 ≤ d ≤ L) contains the extrinsic messages exchanged between the variable node v^(x,y)_d in VG_{x,y} and its neighboring check node in CG_i. The INT RAM and DEC RAM store the intrinsic message and hard decision associated with node v^(x,y)_d at the memory location with address d − 1 (1 ≤ d ≤ L). As we will see later, such a message storage strategy greatly simplifies the control logic for generating the memory access addresses.
For simplicity, Figure 6 does not show the datapath from INT RAM to the EXT RAM_i's for extrinsic message initialization, which can easily be realized in L clock cycles before the decoder enters the iterative decoding process.

Check node processing
During the check node processing, the decoder performs the computations of all the check nodes and realizes the extrinsic message exchange between all the neighboring nodes. At the beginning of check node processing, in each PE_{x,y} the memory location with address d − 1 in EXT RAM_i contains a 6-bit hybrid datum that consists of the 1-bit hard decision and the 5-bit variable-to-check extrinsic message associated with the variable node v^(x,y)_d in VG_{x,y}. In each clock cycle, this decoder performs read-shuffle-modify-unshuffle-write operations to convert one variable-to-check extrinsic message in each EXT RAM_i to its check-to-variable counterpart. As illustrated in Figure 6, we may outline the datapath loop in check node processing as follows: (1) read: one 6-bit hybrid datum h^(i)_{x,y} is read from each EXT RAM_i in each PE_{x,y}; (2) shuffle: each hybrid datum h^(i)_{x,y} goes through the shuffle network π_i and arrives at a CNU_{i,j}; (3) modify: each CNU_{i,j} performs the parity check on the six input hard-decision bits and generates the six output 5-bit check-to-variable extrinsic messages β^(i)_{x,y} based on the six input 5-bit variable-to-check extrinsic messages; (4) unshuffle: send each check-to-variable extrinsic message β^(i)_{x,y} back to its PE block via the same path as its variable-to-check counterpart; (5) write: write each β^(i)_{x,y} to the same memory location in EXT RAM_i as its variable-to-check counterpart.
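The 6-bit hybrid datum can be modeled at bit level as follows (the exact bit positions are our assumption; the paper only fixes the field widths):

```python
# Bit-level model of the 6-bit hybrid datum: a hard-decision bit on top of a
# 5-bit sign-magnitude message (1 sign bit + 4 magnitude bits). The bit
# layout here is illustrative, not taken from the paper.
def pack(hard, sign, mag):
    assert hard in (0, 1) and sign in (0, 1) and 0 <= mag <= 15
    return (hard << 5) | (sign << 4) | mag

def unpack(h):
    return (h >> 5) & 1, (h >> 4) & 1, h & 0xF

h = pack(1, 0, 9)
assert h < 64                  # fits in 6 bits
assert unpack(h) == (1, 0, 9)
```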
All the CNUs deliver the parity-check results to a central control block that, at the end of check node processing, determines whether all the parity-check equations specified by the parity-check matrix have been satisfied; if so, the decoding of the current code frame terminates.
To achieve higher decoding throughput, we implement the read-shuffle-modify-unshuffle-write loop operation with five-stage pipelining as shown in Figure 7, where the CNU is one-stage pipelined. To make this pipelining scheme feasible, we realize each bidirectional I/O connection in the three shuffle networks by two distinct sets of wires with opposite directions, which means that the hybrid data from PE blocks to CNUs and the check-to-variable extrinsic messages from CNUs to PE blocks are carried on distinct sets of wires. Compared with sharing one set of wires in time-multiplexed fashion, this approach has higher wire routing overhead but obviates the logic gate overhead due to the realization of time-multiplexing and, more importantly, makes it feasible to directly pipeline the datapath loop for higher decoding throughput.
In this decoder, one address generator AG^(i)_{x,y} associates with each EXT RAM_i in each PE_{x,y}. In the check node processing, AG^(i)_{x,y} generates the address for reading hybrid data and, due to the five-stage pipelining of the datapath loop, the address for writing back the check-to-variable message is obtained by delaying the read address by five clock cycles. It is clear that the connectivity among all the variable nodes and check nodes, or the entire parity-check matrix, realized by this decoder is jointly specified by all the address generators and the three shuffle networks. Moreover, for i = 1, 2, 3, the connectivity among all the variable nodes and the check nodes in CG_i is completely determined by the AG^(i)_{x,y} and π_i. Following the joint design methodology, we implement all the address generators and the three shuffle networks as follows.

Implementations of AG^(1)_{x,y} and π_1

The bidirectional shuffle network π_1 and the AG^(1)_{x,y} realize the connectivity among all the variable nodes and all the check nodes in CG_1 as specified by the fixed submatrix H_1. Recall from Figure 4 that the extrinsic messages associated with node v^(x,y)_d are always stored at address d − 1. Exploiting the explicit structure of H_1, we easily obtain the implementation schemes for AG^(1)_{x,y} and π_1 as follows: (i) each AG^(1)_{x,y} is realized as a ⌈log_2 L⌉-bit binary counter that is cleared to zero at the beginning of check node processing; (ii) the bidirectional shuffle network π_1 connects the k PE_{x,y} with the same x-index to the same CNU.

Implementations of AG^(2)_{x,y} and π_2

The bidirectional shuffle network π_2 and the AG^(2)_{x,y} realize the connectivity among all the variable nodes and all the check nodes in CG_2 as specified by the fixed submatrix H_2. Similarly, exploiting the extrinsic message storage strategy and the explicit structure of H_2, we implement AG^(2)_{x,y} and π_2 as follows: (i) each AG^(2)_{x,y} is realized as a ⌈log_2 L⌉-bit binary counter that only counts up to the value L − 1 and is loaded with the value ((x − 1)·y) mod L at the beginning of check node processing; (ii) the bidirectional shuffle network π_2 connects the k PE_{x,y} with the same y-index to the same CNU.
Notice that the counter load value for each AG^(2)_{x,y} directly comes from the construction of each block matrix P_{x,y} in H_2 as described in Section 3.

Implementations of AG^(3)_{x,y} and π_3

The bidirectional shuffle network π_3 and the AG^(3)_{x,y} jointly define the connectivity among all the variable nodes and all the check nodes in CG_3, which is represented by H_3 as illustrated in Figure 4. In the above, we showed that by exploiting the specific structures of H_1 and H_2 and the extrinsic message storage strategy, we can directly obtain the implementations of each AG^(i)_{x,y} and π_i for i = 1, 2. However, the implementations of AG^(3)_{x,y} and π_3 are not easy because of the following requirements on H_3: (1) the Tanner graph corresponding to the parity-check matrix H = [H_1^T, H_2^T, H_3^T]^T should be 4-cycle free; (2) to make H random to some extent, H_3 should be random-like.
As proposed in [14], to simplify the design process, we separately conceive AG^(3)_{x,y} and π_3 in such a way that their implementations accomplish the first and second requirements above, respectively.

Implementations of AG^(3)_{x,y}

We implement each AG^(3)_{x,y} as a ⌈log_2 L⌉-bit binary counter that counts up to the value L − 1 and is initialized with a constant value t_{x,y} at the beginning of check node processing. Each t_{x,y} is selected at random under the following two constraints: (1) given x, t_{x,y1} ≠ t_{x,y2} for all y_1 ≠ y_2 ∈ {1, ..., k}; (2) given y, t_{x1,y} − t_{x2,y} ≢ ((x_1 − x_2)·y) (mod L) for all x_1 ≠ x_2 ∈ {1, ..., k}.
It can be proved that the above two constraints on t_{x,y} are sufficient to make the entire parity-check matrix H always correspond to a 4-cycle-free Tanner graph, no matter how we implement π_3.
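One way to obtain such a set of offsets is simple rejection sampling (our own sketch; [14] does not prescribe a selection procedure, and we read both constraints as strict inequalities, which is what avoiding 4-cycles against H_1 and H_2 requires):

```python
import random

# Sketch: draw the AG^(3)_{x,y} counter offsets t_{x,y} uniformly and redraw
# until both constraints hold. Constraint 1: distinct offsets within a row x;
# constraint 2: column-wise offset differences avoid the H2 shift pattern.
def draw_offsets(L, k, rng):
    while True:
        t = {(x, y): rng.randrange(L)
             for x in range(1, k + 1) for y in range(1, k + 1)}
        ok_row = all(t[x, y1] != t[x, y2]
                     for x in range(1, k + 1)
                     for y1 in range(1, k + 1)
                     for y2 in range(y1 + 1, k + 1))
        ok_col = all((t[x1, y] - t[x2, y]) % L != ((x1 - x2) * y) % L
                     for y in range(1, k + 1)
                     for x1 in range(1, k + 1)
                     for x2 in range(1, k + 1) if x1 != x2)
        if ok_row and ok_col:
            return t

t = draw_offsets(L=16, k=3, rng=random.Random(0))
```

In practice (Section 5), many such sets are drawn and the one giving the best average cycle length and simulated performance is kept.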

Implementation of π 3
Since each AG^(3)_{x,y} is realized as a counter, the pattern of the shuffle network π_3 cannot be fixed; otherwise the shuffle pattern of π_3 would be regularly repeated in H_3, which means that H_3 would always contain very regular connectivity patterns no matter how random-like the pattern of π_3 itself is. Thus we should make π_3 configurable to some extent. In this paper, we propose the following concatenated configurable random shuffle network implementation scheme for π_3.

Figure 8 shows the forward path (from PE_{x,y} to CNU_{3,j}) of the bidirectional shuffle network π_3. In each clock cycle, it realizes the data shuffle from a_{x,y} to c_{x,y} by two concatenated stages: intrarow shuffle and intracolumn shuffle. First, the a_{x,y} data block, where each a_{x,y} comes from PE_{x,y}, passes an intrarow shuffle network array in which each shuffle network Ψ^(r)_x shuffles the k input data a_{x,y} to b_{x,y} for 1 ≤ y ≤ k. Each Ψ^(r)_x is configured by a 1-bit control signal s^(r)_x leading to the fixed random permutation R_x if s^(r)_x = 1, or to the identity permutation (Id) otherwise. The reason why we use the Id pattern instead of another random shuffle pattern is to minimize the routing overhead, and our simulations suggest that there is no gain in error-correcting performance from using another random shuffle pattern instead of the Id pattern. The k-bit configuration word s^(r) changes every clock cycle and all the L k-bit control words are stored in ROM R. Next, the b_{x,y} data block goes through an intracolumn shuffle network array in which each Ψ^(c)_y shuffles the k b_{x,y} to c_{x,y} for 1 ≤ x ≤ k. Similarly, each Ψ^(c)_y is configured by a 1-bit control signal s^(c)_y leading to the fixed random permutation C_y if s^(c)_y = 1, or to Id otherwise. The k-bit configuration word s^(c) changes every clock cycle and all the L k-bit control words are stored in ROM C. As the output of the forward path, the k c_{x,y} with the same x-index are delivered to the same CNU_{3,j}.
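The forward path of π_3 can be modeled behaviorally as follows (the permutations R_x, C_y and control bits here are illustrative, not the implemented ones):

```python
# Behavioral model of the two-stage pi_3 forward path: an intrarow stage
# (per-row permutation R[x] or identity, selected by s_r[x]) followed by an
# intracolumn stage (C[y] or identity, selected by s_c[y]).
def shuffle_pi3(a, R, C, s_r, s_c):
    k = len(a)
    # Stage I: intrarow shuffle, row x permuted by R[x] when s_r[x] == 1
    b = [[row[p] for p in R[x]] if s_r[x] else row[:]
         for x, row in enumerate(a)]
    # Stage II: intracolumn shuffle, column y permuted by C[y] when s_c[y] == 1
    c = [[None] * k for _ in range(k)]
    for y in range(k):
        col = [b[x][y] for x in range(k)]
        if s_c[y]:
            col = [col[p] for p in C[y]]
        for x in range(k):
            c[x][y] = col[x]
    return c

a = [["a00", "a01"], ["a10", "a11"]]
R = [[1, 0], [1, 0]]          # fixed "random" row permutations R_x (toy k = 2)
C = [[1, 0], [1, 0]]          # fixed "random" column permutations C_y
out = shuffle_pi3(a, R, C, s_r=[1, 0], s_c=[0, 1])
```

Since every stage is a permutation, the unshuffle path simply applies the inverse permutations in reverse order under the same (delayed) control bits.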
To realize the bidirectional shuffle, we only need to implement each configurable shuffle network Ψ^(r)_x and Ψ^(c)_y as bidirectional, so that π_3 can unshuffle the k^2 data backward from CNU_{3,j} to PE_{x,y} along the same route as the forward path, on distinct sets of wires. Notice that, due to the pipelining of the datapath loop, the backward path control signals are obtained by delaying the forward path control signals by three clock cycles.
To make the connectivity realized by π_3 random-like and changing each clock cycle, we only need to randomly generate the control words s^(r) and s^(c) for each clock cycle and the fixed shuffle patterns of each R_x and C_y. Since most modern FPGA devices have multiple metal layers, the implementations of the two shuffle arrays can be overlapped from the bird's-eye view. Therefore, the above concatenated implementation scheme confines all the routing wires to a small area (one row or one column), which significantly reduces the possibility of routing congestion and reduces the routing overhead.

Variable node processing
Compared with the above check node processing, the operations performed in the variable node processing are quite simple since the decoder only needs to carry out all the variable node computations. Notice that at the beginning of variable node processing, the three 5-bit check-to-variable extrinsic messages associated with each variable node are stored at the same address in the three EXT RAM_i's; in each clock cycle, the decoder converts them into variable-to-check extrinsic messages and hard decisions. As shown in Figure 6, we may outline the datapath loop in variable node processing as follows: (1) read: in each PE_{x,y}, three 5-bit check-to-variable extrinsic messages β^(i)_{x,y} and one 5-bit intrinsic message γ_{x,y} associated with the same variable node are read from the three EXT RAM_i and INT RAM at the same address; (2) modify: based on the input check-to-variable extrinsic messages and intrinsic message, each VNU generates the 1-bit hard decision x̂_{x,y} and three 6-bit hybrid data h^(i)_{x,y}; (3) write: each h^(i)_{x,y} is written back to the same memory location as its check-to-variable counterpart and x̂_{x,y} is written to DEC RAM.

Figure 9: Three-stage pipelining of the variable node processing datapath.
The forward path from memory to VNU and backward path from VNU to memory are implemented by distinct sets of wires and the entire read-modify-write datapath loop is pipelined by three-stage pipelining as illustrated in Figure 9.
Since all the extrinsic and intrinsic messages associated with the same variable node are stored at the same address in different RAM blocks, we can use a single binary counter to generate all the read addresses. Due to the pipelining of the datapath, the write address is obtained by delaying the read address by three clock cycles.

CNU and VNU architectures
Each CNU carries out the operations of one check node, including the parity check and the computation of check-to-variable extrinsic messages. Figure 10 shows the CNU architecture for a check node with degree 6. Each input x^(i) is a 6-bit hybrid datum consisting of a 1-bit hard decision and a 5-bit variable-to-check extrinsic message. The parity check is performed by XORing all six 1-bit hard decisions. Each 5-bit variable-to-check extrinsic message is represented in sign-magnitude format with a sign bit and four magnitude bits. The architecture for computing the check-to-variable extrinsic messages is directly obtained from (3). The function f(x) = log((1 + e^{−|x|})/(1 − e^{−|x|})) is realized by a LUT (lookup table) that is implemented as a combinational logic block in the FPGA. Each output 5-bit check-to-variable extrinsic message y^(i) is also represented in sign-magnitude format.
Each VNU generates the hard decision and all the variable-to-check extrinsic messages associated with one variable node. Figure 11 shows the VNU architecture for a variable node with degree 3. With the input 5-bit intrinsic message z and three 5-bit check-to-variable extrinsic messages y^(i) associated with the same variable node, the VNU generates three 5-bit variable-to-check extrinsic messages and the 1-bit hard decision according to (4) and (5), respectively. To enable each CNU to receive the hard decisions to perform the parity check as described above, the hard decision is combined with each 5-bit variable-to-check extrinsic message to form a 6-bit hybrid datum x^(i) as shown in Figure 11. Since each input check-to-variable extrinsic message y^(i) is represented in sign-magnitude format, we need to convert it to two's-complement format before performing the additions. Before going through the LUT that realizes f(x) = log((1 + e^{−|x|})/(1 − e^{−|x|})), each datum is converted back to sign-magnitude format.
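The sign-magnitude conversions used inside the VNU can be sketched as follows (the bit encodings are our assumption; Python integers stand in for the two's-complement adder operands):

```python
# Sketch of the sign-magnitude <-> two's-complement conversions the VNU needs
# for 5-bit messages (1 sign bit + 4 magnitude bits); encodings are assumed.
def sm_to_int(v):                     # 5-bit sign-magnitude -> signed integer
    mag = v & 0xF
    return -mag if v & 0x10 else mag

def int_to_sm(x):                     # saturate to the 4-bit magnitude range
    mag = min(abs(x), 15)
    return (0x10 | mag) if x < 0 else mag

assert sm_to_int(int_to_sm(-7)) == -7
assert int_to_sm(23) == 15            # magnitude saturation
assert sm_to_int(0x10) == 0           # "negative zero" maps to 0
```

Saturating the sum back into four magnitude bits is one reasonable way to keep the message width fixed; the paper does not detail the overflow handling.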

Data Input/Output
This partly parallel decoder works simultaneously on three consecutive code frames in two-stage pipelining mode: while one frame is being iteratively decoded, the next frame is loaded into the decoder and the hard decisions of the previous frame are read out from the decoder. Thus each INT RAM contains two RAM blocks to store the intrinsic messages of both the current and next frames. Similarly, each DEC RAM contains two RAM blocks to store the hard decisions of both the current and previous frames. The design scheme for intrinsic message input and hard decision output depends heavily on the floor planning of the k^2 PE blocks. To minimize the routing overhead, we develop a square-shaped floor plan for the PE blocks as illustrated in Figure 12; the corresponding data input/output scheme is described in the following.
(1) Intrinsic data input. The intrinsic messages of the next frame are loaded, one symbol per clock cycle. As shown in Figure 12, the memory location of each input intrinsic datum is determined by the input load address, which has a width of (⌈log_2 L⌉ + ⌈log_2 k^2⌉) bits, in which ⌈log_2 k^2⌉ bits specify which PE block (or which INT RAM) is being accessed and the other ⌈log_2 L⌉ bits locate the memory location in the selected INT RAM. As shown in Figure 12, the primary intrinsic data and load address inputs directly connect to the k PE blocks PE_{1,y} for 1 ≤ y ≤ k, and from each PE_{x,y} the intrinsic data and load address are delivered to the adjacent PE block PE_{x+1,y} in pipelined fashion. (2) Decoded data output. The decoded data (or hard decisions) of the previous frame are read out in pipelined fashion. As shown in Figure 12, the primary ⌈log_2 L⌉-bit read address input directly connects to the k PE blocks PE_{x,1} for 1 ≤ x ≤ k, and from each PE_{x,y} the read address is delivered to the adjacent block PE_{x,y+1} in pipelined fashion. Based on its input read address, each PE block outputs one hard-decision bit per clock cycle. Therefore, as illustrated in Figure 12, the width of the pipelined decoded data bus increases by 1 after going through each PE block, and at the rightmost side we obtain k k-bit decoded outputs that are combined together as the k^2-bit primary decoded data output.
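For the implemented parameters of Section 5 (L = 256, k = 6), the load address width works out as follows (a simple arithmetic check):

```python
import math

# Arithmetic check of the intrinsic load-address width for the implemented
# decoder parameters L = 256, k = 6.
L, k = 256, 6
pe_bits = math.ceil(math.log2(k * k))   # ceil(log2 36) = 6: selects the PE block
addr_bits = math.ceil(math.log2(L))     # log2 256 = 8: location inside INT RAM
print(pe_bits + addr_bits)              # -> 14
```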

FPGA IMPLEMENTATION
Applying the above decoder architecture, we implemented a (3, 6)-regular LDPC code partly parallel decoder for L = 256 using the Xilinx Virtex-E XCV2600E device with the FG1156 package. The corresponding LDPC code length is N = L · k² = 256 · 6² = 9216 and the code rate is 1/2. We obtained the constrained random parameter set for implementing π 3 and each AG (3) x,y as follows: we first generated a large number of parameter sets, from which we found the few sets leading to relatively high Tanner graph average cycle length; we then selected the set with the best performance based on computer simulations.
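The two-stage selection procedure can be sketched as follows. This is a hedged outline of the search strategy only: `avg_cycle_len` and `simulate_ber` stand for the authors' cycle-length analysis and BER simulation, and the shortlist size is a hypothetical parameter:

```python
def pick_parameter_set(candidates, avg_cycle_len, simulate_ber, top_n=5):
    """Two-stage constrained random search sketch.

    Stage 1: rank the randomly generated parameter sets by Tanner-graph
    average cycle length (larger is better) and keep a short list.
    Stage 2: among the shortlist, pick the set with the lowest simulated
    BER. avg_cycle_len and simulate_ber are user-supplied callables.
    """
    ranked = sorted(candidates, key=avg_cycle_len, reverse=True)
    shortlist = ranked[:top_n]
    return min(shortlist, key=simulate_ber)
```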
The target XCV2600E FPGA device contains 184 large on-chip block RAMs, each a fully synchronous dual-port 4K-bit RAM. In this decoder implementation, we configure each dual-port 4K-bit RAM as two independent single-port 256 × 8-bit RAM blocks, so that each EXT RAM i can be realized by one single-port 256 × 8-bit RAM block. Since each INT RAM contains two RAM blocks for storing the intrinsic messages of both the current and next code frames, we use two single-port 256 × 8-bit RAM blocks to implement one INT RAM. Due to its relatively small memory size requirement, the DEC RAM is realized by distributed RAM, which provides shallow RAM structures implemented in CLBs. Since this decoder contains k² = 36 PE blocks, each incorporating one INT RAM and three EXT RAM i's, we utilize 180 single-port 256 × 8-bit RAM blocks (or 90 dual-port 4K-bit RAM blocks) in total. We manually constrained the placement of each PE block according to the floor-planning scheme shown in Figure 12. Notice that this placement scheme exactly matches the structure of the configurable shuffle network π 3 described in Section 4.1.3, so the routing overhead for implementing π 3 is also minimized in this FPGA implementation. From the architecture description in Section 4, we know that, during each clock cycle of the iterative decoding, this decoder needs to perform both a read and a write operation on each single-port RAM block EXT RAM i. Therefore, if the primary clock frequency is W, we must generate a 2 × W clock signal as the RAM control signal to achieve the read-and-write operation within one clock cycle. This 2 × W clock signal is generated using the delay-locked loop (DLL) on the XCV2600E.
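The block-RAM budget above can be double-checked with a few lines of arithmetic. This sketch simply recomputes the counts stated in the text; the variable names are ours:

```python
# Block-RAM budget for the XCV2600E mapping described above.
K2 = 36           # number of PE blocks (k^2 with k = 6)
INT_BANKS = 2     # double-buffered intrinsic messages: current + next frame
EXT_PER_PE = 3    # one EXT RAM i per variable-node degree j = 3

single_port_blocks = K2 * (INT_BANKS + EXT_PER_PE)  # 256 x 8-bit blocks used
dual_port_4kbit = single_port_blocks // 2           # two single-port per dual-port BRAM

assert single_port_blocks == 180
assert dual_port_4kbit == 90
assert dual_port_4kbit <= 184  # fits within the XCV2600E's 184 block RAMs
```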
To facilitate the entire implementation process, we extensively utilized highly optimized Xilinx IP cores to instantiate many function blocks, including all the RAM blocks, all the counters for address generation, and the ROMs used to store the control signals for the shuffle network π 3. Moreover, all the adders in the CNUs and VNUs are implemented as ripple-carry adders, which are well suited to Xilinx FPGA implementations thanks to the dedicated on-chip fast arithmetic carry chains.
This decoder was described in VHDL, and Synopsys FPGA Express was used to synthesize the VHDL implementation. We used the Xilinx Development System tool suite to place and route the synthesized implementation for the target XCV2600E device with speed grade −7. Table 1 shows the hardware resource utilization statistics. Notice that 74% of the utilized slices, or 8691 slices, were used for implementing all the CNUs and VNUs. Figure 13 shows the placed and routed design, in which the placement of all the PE blocks is constrained based on the on-chip block RAM locations.
Based on the results reported by the Xilinx static timing analysis tool, the maximum decoder clock frequency is 56 MHz. If this decoder performs s decoding iterations per code frame, the total number of clock cycles for decoding one frame is 2s · L + L, where the extra L clock cycles are due to the initialization process, and the maximum symbol decoding throughput is 56 · k² · L/(2s · L + L) = 56 · 36/(2s + 1) Mbps. Setting s = 18, we obtain a maximum symbol decoding throughput of 54 Mbps. Figure 14 shows the corresponding performance over the AWGN channel with s = 18, including the BER, FER (frame error rate), and the average number of iterations.
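The throughput formula is easy to verify numerically; note that L cancels out of the ratio. A minimal sketch (function name is ours):

```python
def symbol_throughput_mbps(f_mhz=56, k2=36, s=18):
    """Maximum symbol decoding throughput.

    f * k^2 * L / (2*s*L + L) symbols per second: the frame of k^2 * L
    symbols is decoded in 2*s*L iteration cycles plus L initialization
    cycles. L cancels, leaving f * k^2 / (2*s + 1).
    """
    return f_mhz * k2 / (2 * s + 1)

# At s = 18 iterations this evaluates to about 54.5, i.e. the paper's
# quoted maximum of 54 Mbps.
```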

CONCLUSION
Due to the unique characteristics of LDPC codes, we believe that jointly conceiving the code construction and the partly parallel decoder design is key to practical high-speed LDPC coding system implementations. In this paper, applying a joint design methodology, we developed a high-speed (3, k)-regular LDPC code partly parallel decoder architecture and implemented a 9216-bit, rate-1/2 (3, 6)-regular LDPC code decoder on the Xilinx XCV2600E FPGA device. The detailed decoder architecture and floor-planning scheme have been presented, and a concatenated configurable random shuffle network implementation is proposed to minimize the routing overhead of the random-like shuffle network realization. With a maximum of 18 decoding iterations, this decoder achieves up to 54 Mbps symbol decoding throughput and BER 10⁻⁶ at 2 dB over the AWGN channel. Moreover, exploiting the good minimum distance property of LDPC codes, this decoder performs a parity check after each iteration as an early stopping criterion to effectively reduce the average energy consumption.