
A concatenation scheme of LDPC codes and source codes for flash memories

Abstract

Recently, low-density parity-check (LDPC) codes have been applied in flash memories to correct errors. However, as verified in this article, their performance degrades rapidly as the number of stuck cells increases. This paper therefore presents a concatenation reliability scheme of LDPC codes and source codes, which aims to improve the performance of LDPC codes for flash memories with stuck cells. In this scheme, the locations of stuck cells are recorded by source codes in the write process, so that in the read process these cells are given erasures rather than wrong log-likelihood ratios. LDPC codes then correct both these erasures and the soft errors caused by cell-to-cell interference. Analyses of the channel capacity and of the compression rates of source codes with side information show that the memory cost of the proposed scheme is moderately low. Simulation results verify that the proposed scheme outperforms the traditional scheme with only LDPC codes.

Introduction

In flash memories, as programming and erasure (P/E) cycles increase, some cells may become stuck at a voltage level unrelated to the data being written, resulting in stuck cells, also called defective memory cells [1, 2]. At present, a class of hard-decision error correction codes (ECCs), Bose-Chaudhuri-Hocquenghem (BCH) codes, is employed to correct both the stuck errors caused by stuck cells and the soft errors induced by cell-to-cell interference [3, 4]. Moreover, if the number of stuck cells in one page exceeds a certain limit, the whole page is discarded and masked.

As scaling becomes more and more aggressive, reliability is becoming one of the most important challenges in multi-level cell (MLC) flash memories, now and in the future. Therefore, a class of more powerful soft-decision ECCs, low-density parity-check (LDPC) codes [5-7], has been introduced to improve the reliability of flash memories [3, 4, 8]. Unlike for hard-decision decoders, stuck errors are very harmful to soft-decision decoders, since their log-likelihood ratios (LLRs) can have large magnitudes, so that soft-decision decoders may mistakenly consider them correct with very high reliability. Our simulation results show that even a small number of these stuck errors may trap soft-decision message-passing decoders of LDPC codes, causing severe performance degradation such as worse waterfall thresholds and flatter slopes of bit error rate (BER) curves.

In this article, we present a concatenation reliability scheme of LDPC codes and source codes to alleviate this performance degradation. The key idea is to erase the wrong LLRs of stuck cells before message-passing decoding, according to the locations provided by source codes. The scheme consists of two parts, the write part and the read part. While writing data into the flash memory, the controller also detects stuck cells. The locations of these stuck cells are compressed by source codes with side information to efficiently reduce the memory cost. In the read part, the compressed location information is recovered and the wrong LLRs at these locations are erased, i.e., set to zero. Then, LDPC decoders correct both these erasures and the soft errors.

We also compare the proposed concatenation scheme with the traditional one in terms of error performance and memory cost. On the one hand, the proposed scheme significantly outperforms the traditional one, since it removes the wrong LLRs seen by the LDPC decoders. On the other hand, capacity and compression-rate analyses indicate that it requires only a moderately small percentage of additional memory. Simulation results show that it achieves significantly better error performance than the traditional one, even if the traditional one uses LDPC codes of lower rate. Because the proposed scheme can tolerate more stuck errors, it can also prolong the life of flash memories, in addition to improving reliability.

The rest of the article consists of four sections. In the "Background" section, we introduce some background on channel models of flash memories and on LDPC codes. The proposed concatenation scheme is presented in the "Concatenation scheme of LDPC codes and source codes" section. Analyses and simulation results are given in the "Analyses and simulation" section. The article is concluded in the last section.

Background

Channel models for different types of errors

In flash memories, each cell uses $2^s$ voltage levels $L_1, L_2, \ldots, L_{2^s}$ to represent $s$ stored bits. If $s = 1$, they are single-level cell flash memories; if $s > 1$, they are $2^s$-level MLC flash memories. The writing process, i.e., voltage level programming, is realized by incremental step pulse programming with an incremental program step voltage $\Delta V_{pp}$ [4]. Under ideal conditions, the distributions of all voltage levels except the first one are uniform with width $\Delta V_{pp}$, while the distribution of the first level is a wide Gaussian-like distribution. Since these distributions do not overlap, no errors occur. However, because of the inevitable cell-to-cell interference in real systems, all the voltage distributions become Gaussian-like and may overlap (Figure 1). Thus, soft errors may occur. Cell-to-cell interference [3, 4] can be modeled as an additive white Gaussian noise (AWGN) channel with pulse-amplitude modulation (PAM). In other words, the voltage level $L_i$ has the output

$$R_j = L_i + N_i$$
(1)

for the $j$th flash cell through the AWGN channel, where $N_i$ is additive Gaussian noise with zero mean and variance $\sigma_i^2$.
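To make the model concrete, here is a minimal Python sketch of the AWGN sub-channel of Eq. (1). It is not the authors' code; the level values in `LEVELS` and the per-level noise standard deviations are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed PAM levels L1..L4 for a 2-bit MLC cell (illustrative spacing).
LEVELS = np.array([0.0, 1.0, 2.0, 3.0])

def awgn_subchannel(level_idx, sigmas):
    """Eq. (1): R_j = L_i + N_i, with level-dependent Gaussian noise N_i."""
    noise = rng.normal(0.0, sigmas[level_idx], size=level_idx.shape)
    return LEVELS[level_idx] + noise

# Example: eight cells programmed to random levels; the first level gets a
# wider distribution, as in Figure 1.
cells = rng.integers(0, 4, size=8)
sigmas = np.array([0.20, 0.10, 0.10, 0.14])
reads = awgn_subchannel(cells, sigmas)
```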

Figure 1. Voltage distributions with cell-to-cell interferences. Each level counts 22k cells.

In addition, since P/E cycles gradually degrade the flash memory medium, some cells may become permanently stuck at one voltage level [1, 2], causing so-called stuck errors or stuck-at faults. In this article, we mainly consider cells stuck at the highest level, which is a common type of stuck error after many P/E cycles. Suppose that there are $u\%$ stuck cells in the flash memory. Such errors can be described by a stuck channel with transition probability $p = 0.01u$, as shown in Figure 2: each level $L_i$ has the same probability $p$ of being stuck at the highest level $L_{2^s}$, denoted $p\{L_i\}$.

Figure 2. Model of stuck channel in flash memories.
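A matching sketch of the stuck sub-channel of Figure 2, reusing the 4-level setup assumed above: every cell is independently stuck at the highest level with transition probability `p`, regardless of the written data.

```python
def stuck_subchannel(level_idx, p, rng):
    """Each cell is stuck at the highest level L_{2^s} with probability p."""
    stuck = rng.random(level_idx.shape) < p
    out = level_idx.copy()
    out[stuck] = len(LEVELS) - 1      # index of the highest level
    # Return the mask too, so a write controller can record stuck locations.
    return out, stuck

levels_out, stuck_mask = stuck_subchannel(cells, p=0.01, rng=rng)
```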

LDPC codes

A binary regular $(n, k)$ LDPC code [6] is defined as the null space of an $m \times n$ sparse parity-check matrix $\mathbf{H}$, $m \ge n - k$, over GF(2) with the following structural properties: (1) each row has constant weight $\rho$; (2) each column has constant weight $\gamma$; and (3) no two rows (or two columns) have more than one position in which they both have 1-components. $\mathbf{H}$ is said to be $(\gamma, \rho)$-regular, and the code given by the null space of $\mathbf{H}$ is called a $(\gamma, \rho)$-regular LDPC code. Property (3) is referred to as the row-column (RC) constraint. Most LDPC codes investigated for flash memories are regular, due to the requirements of high code rate and low error floor.

Consider a binary message vector $\mathbf{m}$ of length $k$ and an $(n, k)$ binary LDPC code. Its corresponding codeword $\mathbf{c}$ of length $n$ is $\mathbf{m} \cdot \mathbf{G}$ and satisfies $\mathbf{c} \cdot \mathbf{H}^T = \mathbf{0}$, where $\mathbf{G}$ is the generator matrix of the LDPC code.
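As a small illustration of these relations (a toy (7, 4) code in systematic form, not an actual sparse LDPC matrix), the following checks $\mathbf{c} = \mathbf{m} \cdot \mathbf{G}$ and $\mathbf{c} \cdot \mathbf{H}^T = \mathbf{0}$ over GF(2):

```python
import numpy as np

# Toy (7,4) code with G = [I | P] and H = [P^T | I].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

m = np.array([1, 0, 1, 1])
c = m @ G % 2
assert np.all(c @ H.T % 2 == 0)   # the codeword satisfies every check
```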

Soft-decision decoding algorithms for LDPC codes are mostly iterative in nature and devised based on message passing. The best-known ones [6] are the sum-product algorithm and its simplified version, the min-sum (MS) algorithm [9].
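The heart of the MS algorithm is its check-node update, sketched below under the usual LLR convention (normalization and offset variants are omitted): each outgoing message takes the sign product and the minimum magnitude of the other incoming messages.

```python
import numpy as np

def ms_check_update(msgs):
    """Min-sum check-node update for the LLR messages entering one check."""
    out = np.empty_like(msgs)
    for k in range(len(msgs)):
        others = np.delete(msgs, k)
        out[k] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

# Example: messages [2.5, -0.7, 1.2] produce [-0.7, 1.2, -0.7].
print(ms_check_update(np.array([2.5, -0.7, 1.2])))
```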

Concatenation scheme of LDPC codes and source codes

Channel model for both soft errors and stuck errors

Two types of errors in flash memories were introduced in the "Channel models for different types of errors" section: soft errors caused by cell-to-cell interference, described by an AWGN channel, and stuck errors caused by stuck cells, described by a stuck channel. In existing research, the two types of errors are considered separately for correction [3, 4, 10]. However, both exist in real systems at the same time, especially in the middle and late stages of memory life. Thus, we consider a channel model consisting of two sub-channels: the AWGN sub-channel for soft errors and the stuck sub-channel for stuck errors. Furthermore, the model can be time-varying according to P/E cycles and is thus more realistic. As a result, a reliability scheme based on the proposed channel model is more efficient in applications and makes it possible to extend the life of memory cells.

The block diagram of the channel model is shown in Figure 3. Consider $2^s$-level flash memories and a binary data vector $\mathbf{c}$ of length $n = s\bar{n}$ as the input. By Gray mapping, it is stored as a voltage level vector $\mathbf{V}$ of length $\bar{n}$. After the first, stuck sub-channel, the vector $\mathbf{V}$ becomes

$$\mathbf{S} = p\{\mathbf{V}\},$$
(2)

where $S_j = p\{V_j\}$ for the $j$th cell, $j = 0, 1, \ldots, \bar{n} - 1$. If the $j$th cell is not stuck, its voltage level $S_j = L_i$ passes through the second, AWGN sub-channel and changes to $R_j = L_i + N_i$; otherwise, $R_j = S_j = L_{2^s}$.

Figure 3. Channel model composed of AWGN sub-channel and stuck sub-channel.
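Putting the two sub-channels together, as in Figure 3, gives the sketch below. It reuses the helpers from the previous sketches; the 2-bit Gray map is an assumption consistent with the simulation section, and stuck cells bypass the AWGN noise since their voltage does not change.

```python
# Assumed 2-bit Gray map from bit pairs to level indices (cf. the simulation
# section: L1..L4 represent '00', '01', '11', '10').
GRAY = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}

def flash_channel(bits, p, sigmas, rng):
    pairs = bits.reshape(-1, 2)
    levels = np.array([GRAY[tuple(b)] for b in pairs])
    s_levels, stuck = stuck_subchannel(levels, p, rng)       # first sub-channel
    r = LEVELS[s_levels].astype(float)
    r[~stuck] += rng.normal(0.0, sigmas[s_levels[~stuck]])   # AWGN for normal cells
    return r, stuck
```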

The proposed concatenation scheme

Until now, only ECCs, e.g., BCH codes or LDPC codes, have been used to correct errors in practice, since the AWGN sub-channel dominates the performance in the early stage (i.e., few P/E cycles) of flash memories. However, as the number of P/E cycles increases, the stuck sub-channel brings significant effects, especially harmful to probabilistic decoders such as the message-passing decoders of LDPC codes, and will finally kill the memories.

If we only consider the reading process, as in ECC-only schemes, the controller receives only voltage levels $\mathbf{R}$, which may be corrupted by stuck errors. From these voltage levels, LLRs are calculated based on the AWGN sub-channel with PAM. Since some of these LLRs have wrong magnitudes and signs, even the maximum-likelihood codeword based on these LLRs may not be correct. In the "Analyses and simulation" section, we show that a small number of stuck cells causes significant performance degradation.

However, if we also take the writing process into account, the decoder becomes able to distinguish normal cells from stuck cells using information provided by the controller.

As described in [4], MLC flash memories are written by a program-and-verify scheme. If some cells are stuck, their voltages remain unchanged, so they can be detected by a well-designed controller. As shown in Figure 4, we propose that, in the writing process, the controller detects stuck cells and records their locations in a vector $\mathbf{l}$ of length $\bar{n}$, where $l_j = 1$ if the $j$th cell is stuck; otherwise $l_j = 0$. To reduce the memory cost, the location vector $\mathbf{l}$ is compressed into a vector $\mathbf{z}$. In the next section, we present the compression approach and its memory cost.

Figure 4. The write part of the concatenation reliability scheme of LDPC codes and source codes.
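A write-part sketch in the spirit of Figure 4. Stuck-cell detection is assumed to come from the program-and-verify loop (here we simply take the mask from the channel model above), and a toy index coder stands in for the Slepian-Wolf source coder analyzed in the next section.

```python
def write_side_record(stuck_mask):
    """Build the location vector l and a toy compressed representation z."""
    l = stuck_mask.astype(np.uint8)
    # Toy stand-in for source coding: store the indices of the 1s.
    z = np.flatnonzero(l).astype(np.uint32)
    return l, z

l_vec, z_vec = write_side_record(stuck_mask)
```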

In the reading process, as shown in Figure 5, the controller receives the voltage levels $\mathbf{R}$ from the current page and the compressed vector $\mathbf{z}$ from the compressed location page. The vector $\mathbf{r}$ of LLRs [11] is calculated from $\mathbf{R}$ based on the AWGN sub-channel with PAM. After the location information $\mathbf{l}$ is recovered from the compressed vector $\mathbf{z}$ and the side information $\mathbf{v}$ [12] (see "Memory cost of source codes" section), the LLRs at the stuck locations are erased, i.e., $r_{sj+t} = 0$ for every $j$ with $l_j = 1$, where $t = 0, 1, \ldots, s-1$. Then, the LLRs are sent to LDPC decoders to output the decoded data.

Figure 5. The read part of the concatenation reliability scheme of LDPC codes and source codes.
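A read-part sketch matching Figure 5: per-bit LLRs are computed from the Gaussian level likelihoods, then the $s$ LLRs of every recorded stuck cell are zeroed ($r_{sj+t} = 0$). The Gray map and noise model are the assumptions introduced above.

```python
LABELS = sorted(GRAY, key=GRAY.get)   # bit labels ordered by level index

def bit_llrs(r, sigmas):
    """LLR of each of the two bits per cell from the AWGN level likelihoods."""
    lik = np.exp(-(r[:, None] - LEVELS[None, :])**2 / (2 * sigmas[None, :]**2))
    llrs = np.empty((len(r), 2))
    for t in range(2):
        mask0 = np.array([lbl[t] == 0 for lbl in LABELS])
        llrs[:, t] = np.log(lik[:, mask0].sum(axis=1) / lik[:, ~mask0].sum(axis=1))
    return llrs

def erase_stuck(llrs, l_vec):
    llrs[l_vec == 1, :] = 0.0   # erasures instead of wrong LLRs
    return llrs
```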

Analyses and simulation

In this section, we first analyze the capacity of the flash channel models. Then, based on the analyses of channel capacity and the compression rates of source codes, the memory cost of source codes and the overall code rate of the proposed scheme are presented. Last, simulation results verify that the proposed scheme achieves better performance than the traditional one, even if the traditional one deploys lower-rate LDPC codes.

Capacity analysis

It is interesting to know the capacity $C$ of the proposed channel model. First, it is clear that the channel consisting of the two sub-channels forms a Markov chain. Thus, its capacity is upper bounded by the capacities of the AWGN sub-channel $C_a$ and of the stuck sub-channel $C_s$ [13],

$$C \le \min\{C_a, C_s\}.$$
(3)

Consider 4-level flash memories, i.e., $2^s = 4$. The capacity of the stuck sub-channel $C_s$ is

$$C_s = (1-p)\log_2\left(p^{-\frac{p}{1-p}} + 3(1-p)\right) + p\log_2\left(p + 3(1-p)\,p^{\frac{1}{1-p}}\right).$$
(4)

The derivation of Cs is given in the Appendix.
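A quick numerical evaluation of Eq. (4), as reconstructed above (a sketch, not the authors' code):

```python
import math

def stuck_capacity(p):
    """Capacity of the 4-level stuck sub-channel, Eq. (4)."""
    q = 1.0 - p
    return (q * math.log2(p**(-p / q) + 3 * q)
            + p * math.log2(p + 3 * q * p**(1 / q)))

print(stuck_capacity(0.01))   # about 1.94 bits/cell, near the 2-bit maximum
```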

With the proposed scheme, stuck errors are converted into erasures. Thus, the channel can be viewed as the concatenation of the AWGN sub-channel and a $2^s$-ary erasure sub-channel (Figure 6). It is well known that the capacity of the erasure sub-channel $C_e$ with erasure probability $p$ is $s(1-p)$ [13]. If $s = 2$, then the capacity of the erasure sub-channel is

$$C_e = 2(1-p).$$
(5)
Figure 6. Channel model with source codes with side information.

In the next section, the capacities of the stuck sub-channel (4) and the erasure sub-channel (5) are used to derive the memory cost of source codes.

Memory cost of source codes

In this article, we propose to deploy lossless source codes to compress the location vector $\mathbf{l}$ into the compressed vector $\mathbf{z}$. Moreover, as shown in Figure 5, side information [12, 14] from the voltage levels is used to further reduce the memory cost of compression.

It is well known in information theory that the compression rates of lossless source codes can efficiently approach the entropy of the original data [15]. For the stuck location vector, the entropy is

$$H(\mathbf{l}) = \bar{n}\,H(l_j) = \bar{n}\left(-p\log_2 p - (1-p)\log_2(1-p)\right).$$
(6)

In addition, according to the Slepian-Wolf theorem [12], with the help of side information $\mathbf{v}$, the theoretical limit of lossless compression of the vector $\mathbf{l}$ improves to the conditional entropy $H(\mathbf{l}|\mathbf{v})$, where $\mathbf{l}$ and $\mathbf{v}$ are correlated with $\Pr(l_j \ne v_j) < 0.5$. Since stuck cells stay at the highest level, as shown in Figure 2, the voltage level vector $\mathbf{R}$ from the data page provides side information that improves the efficiency of the source codes. In other words, if $\hat{R}_j \ne L_{2^s}$, the $j$th cell is not stuck, i.e., $l_j = 0$, where $\hat{R}_j$ is the hard decision of $R_j$ [11]. Let us describe the side information by the vector $\mathbf{v} = [v_0, v_1, \ldots, v_{\bar{n}-1}]$, where $v_j = 0$ if $\hat{R}_j \ne L_{2^s}$, and $v_j = 1$ otherwise. Then, a cell can be stuck, i.e., $l_j = 1$, only if $v_j = 1$. Suppose that the data bits are uniformly distributed over '0' and '1'. Recall that the transition probability is $p$. The probability of a stuck error is $p(2^s-1)/2^s$, since there is a probability $1/2^s$ that a stuck cell is coincidentally written with $L_{2^s}$, in which case no error occurs. Then, the probability that $v_j = 1$ is $p + (1-p)/2^s = \frac{1+p(2^s-1)}{2^s}$, and the conditional probabilities are $\Pr(l_j = 0|v_j = 0) = 1$, $\Pr(l_j = 1|v_j = 0) = 0$, $\Pr(l_j = 0|v_j = 1) = \frac{1}{1+p(2^s-1)}$, and $\Pr(l_j = 1|v_j = 1) = \frac{p(2^s-1)}{1+p(2^s-1)}$. Thus, the conditional entropy $H(\mathbf{l}|\mathbf{v})$ is

$$H(\mathbf{l}|\mathbf{v}) = \bar{n}\,H(l_j|v_j) = \bar{n}\,\frac{1+p(2^s-1)}{2^s}\left[\Pr(l_j{=}1|v_j{=}1)\log_2\frac{1}{\Pr(l_j{=}1|v_j{=}1)} + \Pr(l_j{=}0|v_j{=}1)\log_2\frac{1}{\Pr(l_j{=}0|v_j{=}1)}\right] = \frac{\bar{n}\left(1+p(2^s-1)\right)}{2^s}\left[\frac{p(2^s-1)}{1+p(2^s-1)}\log_2\frac{1+p(2^s-1)}{p(2^s-1)} + \frac{1}{1+p(2^s-1)}\log_2\left(1+p(2^s-1)\right)\right].$$
(7)

Thus, the compression rate Rsc of source codes is about

$$R_{sc} \approx \frac{H(\mathbf{l}|\mathbf{v})}{s\bar{n}}.$$
(8)

Suppose that the code rate of the LDPC code is $R_{cc}$; then the overall code rate of the concatenation scheme is

$$R_{or} = \frac{R_{cc}}{1 + R_{sc}}.$$
(9)
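The following sketch evaluates Eqs. (7)-(9) numerically; the (16000, 15000) code rate used for $R_{cc}$ anticipates the simulation section.

```python
import math

def compression_rate(p, s=2):
    """R_sc of Eq. (8): H(l|v) per stored data bit, from Eq. (7)."""
    m = 2**s - 1
    pv1 = (1 + p * m) / 2**s            # Pr(v_j = 1)
    pl1 = p * m / (1 + p * m)           # Pr(l_j = 1 | v_j = 1)
    h = -pl1 * math.log2(pl1) - (1 - pl1) * math.log2(1 - pl1)
    return pv1 * h / s

R_cc = 15000 / 16000
for p in (0.005, 0.01):
    R_sc = compression_rate(p)
    print(p, round(R_sc, 4), round(R_cc / (1 + R_sc), 4))
# -> R_sc of about 0.0141 and 0.0245; overall rates of about 0.9245 and 0.9151
```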

The stuck location information changes much more slowly, since stuck cells appear only after many P/E cycles on the data pages. In other words, P/E cycles happen much less often on the compressed location pages than on the data pages. Thus, we may assume that the compressed location pages contain few errors before the flash memories die. A high-rate, short ECC is strong enough to guarantee their reliability and has a small impact on the overall code rate. For brevity, we do not address the error correction of compressed location pages in this article.

It is clear that the proposed scheme takes about $H(\mathbf{l}|\mathbf{v})$ bits to record the stuck locations. On the other hand, the stuck sub-channel is replaced by the erasure sub-channel, which has a larger channel capacity. Recall the capacity of the stuck sub-channel $C_s$ (4) and the capacity of the erasure sub-channel $C_e$ (5). We define the memory cost of source codes as

$$C_{mc} = 1 - \frac{C_e/(1+R_{sc})}{C_s} = 1 - \frac{C_e}{C_s(1+R_{sc})}.$$
(10)

In Figure 7, we show that the memory cost is moderately low and grows almost linearly with the transition probability $p$ when $p$ is less than 0.02. It is also worth mentioning that the combination of the erasure sub-channel and the AWGN sub-channel is inherently better for soft-decision decoders than the combination of the stuck sub-channel and the AWGN sub-channel, since the harmful wrong LLRs have been removed with the help of source codes. As we show in the next section, the proposed scheme outperforms the traditional ECC-only scheme.

Figure 7. Memory cost of the concatenation scheme.
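The trend of Figure 7 can be reproduced with the pieces already defined (a sketch using `stuck_capacity` and `compression_rate` from above):

```python
def memory_cost(p, s=2):
    """C_mc of Eq. (10), with C_e = s(1 - p) from Eq. (5)."""
    ce = s * (1 - p)
    return 1 - ce / (stuck_capacity(p) * (1 + compression_rate(p, s)))

for p in (0.005, 0.01, 0.02):
    print(p, round(memory_cost(p), 4))   # stays below 1% and grows ~linearly
```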

Simulation results

In this section, we will show the performance degradation of LDPC codes caused by stuck cells. Then, the proposed scheme is compared with the traditional scheme with only one ECC.

In our simulation, we consider 2-bit (4-level) flash memories, i.e., $s = 2$. Gray mapping is used to map 2 data bits to a voltage level $L_i$, where $L_1$, $L_2$, $L_3$, and $L_4$ represent '00', '01', '11', and '10', respectively. Modeling cell-to-cell interference as the AWGN sub-channel, the noise means are all zero and the variances $\sigma_i^2$ are $4\sigma^2$, $\sigma^2$, $\sigma^2$, and $2\sigma^2$, where $\sigma^2$ is measured by

$$V_{\sigma^2} \triangleq 10\log_{10}\frac{(L_4 - L_1)^2}{\sigma^2}.$$

Clearly, the larger $V_{\sigma^2}$, the less the cell-to-cell interference. A similar definition can be found in [16]. Moreover, two percentages of stuck cells, 0.5% and 1%, are investigated. From (8), the compression rates of source codes for 0.5% and 1% are about 0.0141 and 0.0245, respectively.
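For completeness, a sketch converting the quality measure $V_{\sigma^2}$ (in dB) back to the noise scales used above; the span $L_4 - L_1 = 3$ matches the illustrative level spacing assumed in the earlier sketches.

```python
import math
import numpy as np

def sigma_from_db(v_db, span=3.0):
    """Invert V_sigma2 = 10*log10((L4 - L1)^2 / sigma^2) for sigma."""
    return math.sqrt(span**2 / 10**(v_db / 10))

def level_sigmas(v_db):
    # Per-level standard deviations: variances 4*sigma^2, sigma^2, sigma^2, 2*sigma^2.
    return np.sqrt(np.array([4.0, 1.0, 1.0, 2.0])) * sigma_from_db(v_db)

print(level_sigmas(20.0))
```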

In the proposed scheme, a (16000, 15000) LDPC code with constant column weight 4 and code rate 15/16 is used. The source code costs about 0.0141 × 16000 ≈ 226 bits to record the location vector $\mathbf{l}$ if $p = 0.005$ and 0.0245 × 16000 ≈ 392 bits if $p = 0.01$. Thus, the overall rates (9) are 0.924 if $p = 0.005$ and 0.915 if $p = 0.01$. To be fair, we add the same overhead to the traditional scheme, so that a (16226, 15000) LDPC code with constant column weight 4 and a (16392, 15000) LDPC code with constant column weight 4 are used in the traditional scheme for $p = 0.005$ and $p = 0.01$, respectively. All the LDPC codes are decoded by the MS algorithm [9] with 50 iterations.

The simulation results are shown in Figure 8. First, consider the severe performance degradation caused by a small number of stuck errors in the traditional scheme: with 0.5% stuck cells, the degradation relative to no stuck cells is about 1.8 dB at BER $10^{-6}$, and the slope of the BER curve visibly decreases. If the percentage of stuck cells increases to 1%, the degradation is 2.5 dB at BER $10^{-3}$, and the BER curve becomes flat. Second, we compare the traditional scheme with the concatenation scheme. With 0.5% stuck cells, the concatenation scheme gains about 1.3 dB at BER $10^{-6}$ over the traditional scheme with the (16226, 15000) LDPC code. Moreover, the proposed scheme outperforms the traditional scheme with the (16500, 15000) LDPC code, whose code rate 0.909 is lower than 0.924, by 0.4 dB at BER $10^{-6}$. If the percentage increases to 1%, the performance gain is much larger, about 1.7 dB at BER $10^{-3}$. It is worth mentioning that although the concatenation scheme suffers from stuck cells, the slopes of its BER curves are as steep as those of LDPC codes without stuck errors.

Figure 8. BER of different schemes with different percentages of stuck cells.

Conclusion

Cell-to-cell interference and stuck cells are the two most important reliability issues in flash memories. In this article, we present a concatenation reliability scheme of LDPC codes and source codes with side information to enhance reliability. The proposed scheme erases stuck cells according to their recorded locations, so that the performance of soft-decision decoding algorithms for LDPC codes is significantly improved. Simulation results indicate that the proposed scheme greatly outperforms the traditional scheme and shows no reduction in the slopes of its BER curves. Moreover, the memory cost of recording the stuck locations is moderately low and grows almost linearly with the transition probability $p$.

Last, we would like to give some remarks on the concatenation scheme:

  1. Due to the asymmetric property of the stuck sub-channel (Figure 2), it is not optimal to set the LLRs of erased cells to zero. Thus, we also tried setting the LLR to $\alpha \log 2$ instead of zero in the simulation, where $\alpha$ is a scaling factor. However, no obvious coding gain was observed: the number of stuck cells is relatively small, so a small bias on them has no obvious impact on LDPC decoders. Therefore, this article sets the LLRs to zero for simplicity.

  2. In this article, side information derived from the asymmetric property (Figure 2) is used to reduce the memory cost of source codes. Other types of side information are also applicable to improving the source codes' efficiency, as long as they are correlated with the stuck location vector $\mathbf{l}$ with $\Pr(l_j \ne v_j) < 0.5$ [14].

Appendix

Capacity of 4-level stuck sub-channel

The channel capacity of the stuck sub-channel is defined as follows,

$$C_s = \max_{p(V_i)} I(\mathbf{V};\mathbf{S}).$$

By constructing a new function with the Lagrange multiplier λ,

$$F = I(\mathbf{V};\mathbf{S}) - \lambda\sum_{V} p(V_i),$$

we obtain the system of equations

$$\frac{\partial F}{\partial p(V_i)} = \frac{\partial}{\partial p(V_i)}\left[I(\mathbf{V};\mathbf{S}) - \lambda\sum_{V} p(V_i)\right] = 0, \qquad \sum_{V} p(V_i) = 1.$$

As is known, the mutual information of the channel is

$$I(\mathbf{V};\mathbf{S}) = \sum_{i=1}^{2^s}\sum_{j=1}^{2^s} p(V_i)\,p(S_j|V_i)\log_2\frac{p(S_j|V_i)}{p(S_j)},$$
(11)

where the $p(V_i)$'s represent the probability distribution of the input, while the probability of the output is given by

$$p(S_j) = \sum_{i=1}^{2^s} p(V_i)\,p(S_j|V_i).$$

The equations are transformed as follows,

$$\sum_{j=1}^{2^s} p(S_j|V_i)\log_2\frac{p(S_j|V_i)}{p(S_j)} = \log_2 e + \lambda, \qquad \sum_{V} p(V_i) = 1,$$

where $e$ is Euler's number. For the 4-level stuck sub-channel,

$$\begin{aligned}
p(S_1|V_1)\log_2\frac{p(S_1|V_1)}{p(S_1)} + p(S_4|V_1)\log_2\frac{p(S_4|V_1)}{p(S_4)} &= \log_2 e + \lambda,\\
p(S_2|V_2)\log_2\frac{p(S_2|V_2)}{p(S_2)} + p(S_4|V_2)\log_2\frac{p(S_4|V_2)}{p(S_4)} &= \log_2 e + \lambda,\\
p(S_3|V_3)\log_2\frac{p(S_3|V_3)}{p(S_3)} + p(S_4|V_3)\log_2\frac{p(S_4|V_3)}{p(S_4)} &= \log_2 e + \lambda,\\
p(S_4|V_4)\log_2\frac{p(S_4|V_4)}{p(S_4)} &= \log_2 e + \lambda,\\
p(V_1) + p(V_2) + p(V_3) + p(V_4) &= 1,
\end{aligned}$$

where $\log_2 e + \lambda$ equals the channel capacity. Since $p(S_4|V_1) = p(S_4|V_2) = p(S_4|V_3) = p$, $p(S_1|V_1) = p(S_2|V_2) = p(S_3|V_3) = 1 - p = q$, and $p(S_4|V_4) = 1$,

$$\begin{aligned}
q\log_2\frac{1}{p(V_1)} + p\log_2\frac{p}{p + q\,p(V_4)} &= \log_2 e + \lambda, &(12)\\
q\log_2\frac{1}{p(V_2)} + p\log_2\frac{p}{p + q\,p(V_4)} &= \log_2 e + \lambda, &(13)\\
q\log_2\frac{1}{p(V_3)} + p\log_2\frac{p}{p + q\,p(V_4)} &= \log_2 e + \lambda, &(14)\\
\log_2\frac{1}{p + q\,p(V_4)} &= \log_2 e + \lambda, &(15)\\
p(V_1) + p(V_2) + p(V_3) + p(V_4) &= 1. &(16)
\end{aligned}$$

From Equations (12) – (16), we obtain

$$-q\log_2 p(V_1) + p\log_2 p - p\log_2\left(p + q\,p(V_4)\right) = -\log_2\left(p + q\,p(V_4)\right).$$

Therefore,

$$p(V_1) = p(V_2) = p(V_3) = \frac{1}{p^{-\frac{p}{q}} + 3q} = \frac{1 - p(V_4)}{3}.$$
(17)

If the resulting $p(V_4) \ge 0$, (17) is the optimal input distribution. Thus,

$$C_s = I(\mathbf{V};\mathbf{S})\Big|_{p(V_1),\,p(V_2),\,p(V_3),\,p(V_4)} = q\log_2\left(p^{-\frac{p}{q}} + 3q\right) + p\log_2\left(p + 3q\,p^{\frac{1}{q}}\right).$$

However, if (17) yields $p(V_4) < 0$, we set $p(V_4) = 0$, solve Equations (12)-(16) again, and obtain

$$p(V_1) = p(V_2) = p(V_3) = \frac{1}{3}.$$

In this case,

$$C_s = q\log_2 3.$$
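As a cross-check of this derivation, the closed form and the optimal input (17) can be compared against a brute-force maximization of $I(\mathbf{V};\mathbf{S})$ over symmetric input distributions (a sketch; both evaluate to about 1.94 bits for $p = 0.01$):

```python
import math
import numpy as np

def mutual_info(a, p):
    """I(V;S) with p(V1)=p(V2)=p(V3)=a and p(V4)=1-3a."""
    q = 1 - p
    pS4 = 1 - 3 * q * a                      # p(S4) = p + q*p(V4)
    i = 3 * a * (q * math.log2(1 / a) + p * math.log2(p / pS4))
    return i + (1 - 3 * a) * math.log2(1 / pS4)

p = 0.01
grid = np.linspace(1e-4, 1 / 3 - 1e-4, 20000)
best = max(mutual_info(a, p) for a in grid)
a_opt = 1 / (p**(-p / (1 - p)) + 3 * (1 - p))   # Eq. (17)
print(best, mutual_info(a_opt, p))
```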

References

  1. Pavan P, Bez R, Olivo P, Zanoni E: Flash memory cells—an overview. Proc. IEEE 1997, 85:1248-1271. doi:10.1109/5.622505

  2. NAND Flash Design and Use Considerations Introduction, Micron TN-29-17 [http://download.micron.com/pdf/technotes/nand/tn2917.pdf]

  3. Wang J, Dong G, Zhang T, Wesel R: Mutual-information optimized quantization for LDPC decoding of accurately modeled flash data. 2012. arXiv:1202.1325v1 [cs.IT]

  4. Dong G, Xie N, Zhang T: On the use of soft-decision error-correction codes in NAND flash memory. IEEE Trans. Circuits Syst. I 2010, 58(2):429-439.

  5. Gallager R: Low density parity check codes. IRE Trans. Inf. Theory 1962, 8:21-28. doi:10.1109/TIT.1962.1057683

  6. Ryan WE, Lin S: Channel Codes: Classical and Modern. Cambridge University Press; 2009.

  7. Franceschini M, Ferrari G, Raheli R: Does the performance of LDPC codes depend on the channel? IEEE Trans. Commun 2006, 54(12):2129-2132.

  8. Zhang F, Pfister H, Jiang A: LDPC codes for rank modulation in flash memories. In: IEEE International Symposium on Information Theory, June 2010, pp. 859-863.

  9. Fossorier M, Mihaljevic M, Imai H: Reduced complexity iterative decoding of low-density parity check codes based on belief propagation. IEEE Trans. Commun 1999, 47(5):673-680.

  10. Kuznetsov AV, Kasami T, Yamamura S: An error correcting scheme for defective memory. IEEE Trans. Inf. Theory 1978, 24(6):712-718. doi:10.1109/TIT.1978.1055963

  11. Proakis J: Digital Communications. McGraw-Hill; 1989.

  12. Slepian D, Wolf JK: Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19:471-480. doi:10.1109/TIT.1973.1055037

  13. McEliece RJ: The Theory of Information and Coding: A Mathematical Framework for Communication. Addison-Wesley; 1977.

  14. Liveris A, Xiong Z, Georghiades C: Compression of binary sources with side information at the decoder using LDPC codes. IEEE Commun. Lett 2002, 6(10):440-442.

  15. Cover TM, Thomas JA: Elements of Information Theory. Wiley; 1991.

  16. Zhang X, Zhu J, Wu Y: Efficient one-pass Chase soft-decision BCH decoder for multi-level cell NAND flash memory. August 7-10, 2011.


Acknowledgements

The authors would like to thank all the anonymous reviewers for their valuable comments. This study was supported by the National Natural Science Foundation of China (61071070).

Author information


Correspondence to Zulin Wang.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Huang, Q., Pan, S., Zhang, M. et al. A concatenation scheme of LDPC codes and source codes for flash memories. EURASIP J. Adv. Signal Process. 2012, 208 (2012). https://doi.org/10.1186/1687-6180-2012-208
