Joint source/channel iterative arithmetic decoding with JPEG 2000 image transmission application
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 114 (2012)
Abstract
Motivated by recent results in Joint Source/Channel coding and decoding, we consider the decoding problem of Arithmetic Codes (AC). In fact, in this article we provide different approaches which allow one to unify the arithmetic decoding and error correction tasks. A novel length-constrained arithmetic decoding algorithm based on Maximum A Posteriori sequence estimation is proposed. The latter is based on soft-input decoding using a priori knowledge of the source-symbol sequence and compressed bitstream lengths. Performance in the case of transmission over an Additive White Gaussian Noise channel is evaluated in terms of Packet Error Rate. Simulation results show that the proposed decoding algorithm leads to significant performance gains while exhibiting very low complexity. The proposed soft-input arithmetic decoder can also generate additional information regarding the reliability of the compressed bitstream components. We consider the serial concatenation of the AC with a Recursive Systematic Convolutional Code, and perform iterative decoding. We show that, compared to tandem and to trellis-based Soft-Input Soft-Output decoding schemes, the proposed decoder exhibits the best performance/complexity trade-off. Finally, the practical relevance of the presented iterative decoding system is validated under an image transmission scheme based on the JPEG 2000 standard, and excellent results in terms of decoded image quality are obtained.
1 Introduction
Joint Source/Channel (JSC) coding and decoding has become an area of strong interest because the separation between source and channel coding has turned out to be unjustified in practical systems, owing to limited block lengths and the residual redundancy that remains in the data bits after source encoding.
On the other hand, the hostile nature of the communication channel requires some form of error protection. The limitation on bandwidth necessitates the use of efficient, entropy-approaching source codes, generally variable-length codes (Huffman or arithmetic codes, ACs). However, variable-length compressed bitstreams are susceptible to error propagation, and the need for error protection increases. In this scope, JSC decoding for variable-length coding is receiving increasing attention.
Arithmetic Coding [1, 2] is currently deployed in a growing number of compression standards such as JPEG 2000 [3] for still pictures and H.264 [4] for video sequences. Arithmetic coding yields higher compression performance than other lossless compression methods since it can allocate fractional numbers of bits to input symbols. However, the arithmetic decoder has poor resynchronization properties, which motivated the development of JSC techniques based on ACs.
The first contributions, classified as resilient entropy coding techniques, tend to prevent error propagation by using proper synchronization markers or by considering framing techniques [5]. A scheme reserving a space for an extra symbol (called the forbidden symbol) that is not in the source alphabet, and hence never transmitted, was proposed in [6]. The technique has been shown to provide excellent error detection while introducing redundancy in the compressed bitstream; however, this extra rate is small considering the error detection capability. The forbidden symbol technique was then integrated in different decoding schemes to provide error correction. Associated with an automatic repeat request (ARQ) protocol, the technique was used for error correction in [7, 8].
More recent studies tend to apply soft-input sequence estimation algorithms for arithmetic decoding instead of hard-input classical decoding. The proposed schemes consider finite-state representations of the arithmetic decoding machine in order to apply well-known channel decoding algorithms such as Viterbi and List Viterbi [9–15]. In [9], an AC that embeds channel coding is presented to enforce a minimum Hamming distance between encoded sequences, and a Maximum A Posteriori (MAP) estimator is proposed for arithmetic decoding. In [10, 11], sequential decoding schemes were applied on binary trees and a path pruning technique was used based on forbidden symbol error detection. Sayir [12] used an arithmetic encoder that adds redundancy to the compressed bitstream by introducing gaps in the coding space, and performed sequential decoding. In [13], the authors used Bayesian networks to model the quasi-arithmetic encoder and considered adding redundancy by introducing synchronization markers. A new three-dimensional bit-synchronized trellis representing the finite-precision AC was recently proposed in [14]. A similar trellis was used in [15], where the authors computed bounds on the error probability obtained with an AC using the forbidden symbol technique.
Recent contributions proposed to exploit the efficiency of turbo decoding by integrating the arithmetic decoder in an iterative decoding process [13, 16–18]. These contributions used finite-state machine representations of the AC to apply Soft-Input Soft-Output (SISO) decoding algorithms. Iterative decoding is then performed by concatenating the AC with a Recursive Systematic Convolutional Code (RSCC). It is worth pointing out that the techniques introduced in [13, 16] were applied to JPEG 2000 compressed image transmission. We notice that all the proposed techniques rely on specific trellis constructions, possibly with pruning, which results in various efficiencies in terms of error correction performance. However, in the proposed trellises the number of states increases with the source symbol sequence length L and the source alphabet size U. Thus, the decoding complexity becomes intractable for large values of L and U. Recently, some contributions considered JSC decoding methods including a Low-Density Parity-Check (LDPC) code for channel coding with application to image transmission. In [19], the authors considered rate-compatible LDPC codes to apply unequal error protection to the compressed JPEG 2000 bitstream. In [20], extra information provided by the error-resilience mode of the JPEG 2000 encoder was delivered to the LDPC decoder and iterative decoding was performed.
This article is devoted to a different, low-complexity algorithm for soft-input decoding of ACs in the case of transmission over a noisy channel. The main objective is the development of a SISO arithmetic decoder that is able to improve the error correction performance with reasonable complexity and efficient compression behavior. First, we propose a new low-complexity arithmetic decoder that assumes the decoder knows the source symbol sequence length L and the compressed bitstream size l. The decoding task is then based on the search for the MAP sequence among length-valid sequences (bitstreams of length l decoding exactly L symbols). The proposed algorithm is inspired by the Chase algorithm [21] and is called the Chase-like arithmetic decoder. The second contribution of this study is a new scheme for SISO arithmetic decoding. The latter is obtained through a slight modification of the Chase-like arithmetic decoder and generates an additional bit-reliability measure. Results corresponding to iterative decoding in the case of serial concatenation of an AC with an RSCC are presented and compared to tandem decoding and to a trellis-based iterative decoding scheme [17, 18]. The last major contribution of the article is the implementation of the proposed SISO arithmetic decoder within the JPEG 2000 decoder and the analysis of the improvements obtained by iterative JSC decoding. In fact, the proposed SISO arithmetic decoder is applied to the JPEG 2000 entropy coding stage, which uses an adaptive binary AC (the MQ coder).
The article is organized as follows. Section 2 briefly introduces the principles of arithmetic coding. In Section 3, the system model and the MAP sequence decoding metric are reported. The Chase-like arithmetic decoder is also detailed in this section and its performance is compared with a recent solution using a trellis representation of the AC with Viterbi-like decoding [15]. Section 4 addresses a new scheme for low-complexity SISO arithmetic decoding. Numerical results corresponding to iterative JSC decoding are discussed and compared to tandem decoding and to the trellis-based arithmetic decoding presented in [17, 18]. In Section 5, the application of the proposed iterative decoding approach to a JPEG 2000 image communication system is described. Finally, Section 6 draws our conclusions and offers directions for future work.
2 Overview of arithmetic coding
The objective of the arithmetic encoder is to map a sequence of symbols s = (s_1, s_2, ..., s_L) onto a binary string b that represents the probability of the input sequence [1]. The encoder performs this mapping based on the available source model. In the following, we will consider the case of a binary memoryless source fully described by the probabilities p_0 = Pr(s_k = 0) and p_1 = Pr(s_k = 1). Arithmetic encoding is performed by recursively computing the probability interval I(s) = [Low, High) corresponding to the input string. At initialization, I(s) is set to I_0 = [0, 1). Then, for every recursion k, 0 ≤ k < L, the update of I_k is done as follows:

$$I_k = \left[\, \mathrm{Low}_{k-1} + (\mathrm{High}_{k-1} - \mathrm{Low}_{k-1})\, K_{s_k},\;\; \mathrm{Low}_{k-1} + (\mathrm{High}_{k-1} - \mathrm{Low}_{k-1})\, (K_{s_k} + p_{s_k}) \,\right)$$

where $K_{s_k}$ is the cumulative probability of symbol $s_k$, given by $K_{s_k} = \sum_{i=0}^{s_k - 1} p_i$. A coding example is presented in Figure 1 for a binary source alphabet with probabilities p_0 = 0.7 and p_1 = 0.3; it depicts the interval partitioning corresponding to the encoding of the sequence s = 010.
Once all L symbols have been processed, the output b corresponds to the shortest binary string contained in I(s). Decoding follows the dual process.
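To make the recursion concrete, the following sketch (ours, not the article's code) narrows the real-valued interval for the Figure 1 example; the helper name `arithmetic_encode_interval` is a hypothetical choice.

```python
# Illustrative sketch of the interval-narrowing recursion of Section 2,
# using exact real arithmetic (no renormalization yet).

def arithmetic_encode_interval(symbols, probs):
    """Recursively compute I(s) = [low, high) for the given symbol sequence."""
    low, high = 0.0, 1.0
    for s in symbols:
        width = high - low
        K = sum(probs[:s])                 # cumulative probability K_s
        high = low + width * (K + probs[s])
        low = low + width * K
    return low, high

# Encoding s = 010 with p0 = 0.7, p1 = 0.3 (the Figure 1 example)
low, high = arithmetic_encode_interval([0, 1, 0], [0.7, 0.3])
# mathematically, I(s) = [0.49, 0.637)
```

Any binary string whose dyadic interval lies inside the final I(s) identifies the sequence, which is why the shortest such string is emitted.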
For long source sequences, such an algorithm needs an infinite-precision machine (the interval [Low, High) gets smaller and smaller while encoding). In [2], the authors proposed a new implementation that made arithmetic coding feasible in practice. They used integer representations of probabilities together with scaling techniques. The initial interval [0, 1) is substituted by [0, W), where W = 2^p, with p ≥ 2 the bit size of the initial interval. The scaling is performed by doubling the size of the interval I(s) = [Low, High) when one of the following conditions holds:

E1: 0 ≤ High < W/2: we double Low and High, output 0 followed by U_3 ones, and reset U_3 to 0.

E2: W/2 ≤ Low < W: we subtract W/2 from Low and High, double them, and output 1 followed by U_3 zeros, then reset U_3 to 0.

E3: W/4 ≤ Low < W/2 ≤ High < 3W/4: we subtract W/4 from Low and High, double them, and increase U_3 by 1 (no output).

Note that U_3 counts the E3 scalings performed since the last output bit and is initialized to 0.
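The three scaling rules can be sketched as follows, assuming a p-bit integer interval [Low, High) within [0, W); the function name and the half-open-interval convention are our assumptions, not the reference implementation of [2].

```python
# Sketch of the E1/E2/E3 renormalization rules for an integer interval
# [low, high) in [0, W), W = 2**p. `u3` counts deferred E3 scalings.

def renormalize(low, high, u3, W, out):
    """Apply E1/E2/E3 until no scaling condition holds; emit bits into `out`."""
    while True:
        if high < W // 2:                          # E1: interval in lower half
            out.append(0); out.extend([1] * u3); u3 = 0
        elif low >= W // 2:                        # E2: interval in upper half
            low -= W // 2; high -= W // 2
            out.append(1); out.extend([0] * u3); u3 = 0
        elif low >= W // 4 and high < 3 * W // 4:  # E3: straddles the midpoint
            low -= W // 4; high -= W // 4
            u3 += 1                                # defer the output bit
        else:
            return low, high, u3
        low *= 2                                   # double the interval
        high *= 2
```

For example, with W = 16 the interval [2, 6) triggers one E1 step (bit 0 is output) and becomes [4, 12), at which point no rule applies.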
The described AC is based on the binary source statistics, and it is essential that, for an encoded symbol index i, the encoder and the decoder use the same probabilities p_0 and p_1. Static arithmetic coding supposes that the source statistics are transmitted to the decoder without error, which requires additional bits and consequently entails a compression loss. This drawback is avoided with adaptive arithmetic coding, where p_0 and p_1 are initialized to 0.5 and then updated at every symbol encoding step. Such a scheme induces no noticeable compression loss when long source symbol sequences are used [2]. In the following, we address a new soft-input decoding scheme which can be applied to both adaptive and static ACs.
To manage AC sensitivity to errors, the authors of [6] proposed to use an extra symbol μ with probability ε > 0 to detect transmission errors. This symbol is introduced in the source alphabet but never transmitted. The forbidden symbol technique implies a reduction of the coding space by a factor of (1 − ε) and thus reduces compression efficiency. The amount of added rate redundancy is R_ac = −log_2(1 − ε) bits/symbol [7]. In the presence of a transmission error, due to the low resynchronization probability, the decoder will reveal a forbidden symbol after a delay that is inversely proportional to ε.
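The rate penalty of the forbidden symbol can be evaluated directly from the formula above; the helper name and the sample ε values are illustrative.

```python
import math

# Redundancy (bits/symbol) introduced by reserving probability eps for the
# forbidden symbol: R_ac = -log2(1 - eps) [7].
def forbidden_symbol_redundancy(eps):
    return -math.log2(1 - eps)

# e.g. eps = 0.2 (a value used later in the article) costs
# about 0.32 bits/symbol, while eps = 0 adds no redundancy.
```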
3 Low-complexity soft-input arithmetic decoding method
3.1 Soft-input arithmetic decoding
The considered system consists of a finite-alphabet source, a classical arithmetic encoder, an Additive White Gaussian Noise (AWGN) channel and an arithmetic decoder. The source generates packets of L symbols s = (s_1, ..., s_L). Each packet is then compressed using the arithmetic encoder and the resulting binary stream b = (b_1, ..., b_l) is transmitted over an AWGN channel. The latter delivers the sequence r = (r_1, ..., r_l). A Binary Phase Shift Keying (BPSK) modulation is considered, so r_j can be written as r_j = h_j + n_j, j = 1, ..., l, where h_j is the BPSK-modulated value of b_j, given by $h_j = \sqrt{E_b}\,(1 - 2 b_j)$, E_b is the energy per information bit, and n_j is a Gaussian noise sample with zero mean and variance σ^2.
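The transmission model above can be sketched in a few lines; the function names and the relation σ² = N_0/2 (real-valued BPSK) are our assumptions for the sketch.

```python
import math
import random

# Sketch of the channel model: BPSK mapping h_j = sqrt(Eb)*(1 - 2*b_j)
# plus zero-mean Gaussian noise of variance sigma^2.

def bpsk_awgn(bits, Eb, sigma, rng=random):
    """Map bits to +/- sqrt(Eb) and add AWGN samples."""
    return [math.sqrt(Eb) * (1 - 2 * b) + rng.gauss(0.0, sigma) for b in bits]

def sigma_from_ebn0_db(Eb, ebn0_db):
    """Noise standard deviation for a given Eb/N0 in dB, with sigma^2 = N0/2."""
    N0 = Eb / (10 ** (ebn0_db / 10))
    return math.sqrt(N0 / 2)
```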
At the receiver, the arithmetic decoder is based on the MAP criterion, which corresponds to the search for the best sequence $\hat{\mathbf{s}}$ satisfying:

$$\hat{\mathbf{s}} = \arg\max_{\mathbf{s}^k} P(\mathbf{s}^k \mid \mathbf{r}) \qquad (3)$$

Therefore, the problem is equivalent to the maximization of the following metric

$$M^k = \frac{P(\mathbf{r} \mid \mathbf{b}^k)\, P(\mathbf{s}^k)}{P(\mathbf{r})} \qquad (4)$$

where $\mathbf{b}^k = (b_1^k, \ldots, b_{l_k}^k)$ is the bitstream resulting from the arithmetic encoding of the source symbol sequence $\mathbf{s}^k = (s_1^k, \ldots, s_L^k)$.
The metric (4) includes the channel transition probability P(r | b^k), the a priori source probability P(s^k), and the term P(r). The latter can be ignored when comparing candidate sequences since it is constant for all candidates. In the case of an AWGN channel, the channel transition probability can be written

$$P(\mathbf{r} \mid \mathbf{b}^k) = \prod_{j=1}^{l} p(r_j \mid b_j^k) \qquad (5)$$

where

$$p(r_j \mid b_j^k) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{d_E(r_j, h_j^k)^2}{2\sigma^2} \right) \qquad (6)$$

and the function d_E(x, y) denotes the Euclidean distance between x and y.
Taking the logarithm of (4), using equations (5) and (6), and dropping the terms that are constant over all candidates, the problem can be reformulated as maximizing the following metric

$$M^k = \ln P(\mathbf{s}^k) - \frac{1}{2\sigma^2} \sum_{j=1}^{l} d_E(r_j, h_j^k)^2 \qquad (7)$$
The exhaustive decoding approach calculates the metric M^k for all possible pairs (s^k, b^k) in order to select the best binary stream. In this study, we assume that the source symbol sequence length L and the compressed bitstream length l are known at the decoder side. This information is used to detect invalid binary sequences in the search space. Therefore, the search space is limited to streams of l bits that yield exactly L symbols by arithmetic decoding. It is obvious that evaluating the MAP metric (7) for all possible candidates is infeasible. On the one hand, for practical values of l, it is impossible to store all combinations of l bits in order to select the length-valid sequences. On the other hand, the evaluation of all valid sequences would require a huge decoding delay because of the large cardinality of the search space.
Thus, it is necessary to use a suboptimal decoding algorithm that reduces the search space size. In the following, we describe a suboptimal soft-input arithmetic decoder inspired by the Chase algorithm [21], which was initially proposed for soft-input decoding of linear block codes.
3.2 The proposed decoding algorithm
The proposed decoding algorithm aims to find the compressed bitstream that is length-valid and whose corresponding decoded sequence has the best metric M^k. We use a Chase II-type algorithm [21] in order to achieve low-complexity, soft-input, suboptimal arithmetic decoding. In fact, this algorithm reduces complexity by restricting the search space to only the Q most probable sequences.
We recall that the arithmetic decoding machine is based on a recursive procedure which terminates when all l bits are processed or when L symbols are obtained in the decoded sequence. However, the proposed decoder has to use the information about L to detect erroneous sequences and improve decoding performance. To address this requirement, a proper AC termination strategy is implemented: the arithmetic encoder terminates each input sequence with an End-of-Block (EoB) symbol. The same rule is enforced at the decoder, and only sequences that decode exactly L symbols and whose EoB symbol is determined by the last two bits are considered correct. This supplementary error detection tool improves the arithmetic decoder performance since it reduces the size of the search space and, consequently, increases the Hamming distance between candidates.
A classical decoding scheme consists in applying arithmetic decoding to the binary sequence y = (y_1, ..., y_l) obtained by hard decisions on the received values r. The soft-input decoder uses the extra information given by r, called reliability. The reliability of a component y_j is defined using the Log Likelihood Ratio (LLR) of the decision y_j, which, for an AWGN channel, can be written

$$\Lambda(y_j) = \ln \frac{P(b_j = 0 \mid r_j)}{P(b_j = 1 \mid r_j)} = \frac{2\sqrt{E_b}}{\sigma^2}\, r_j \qquad (8)$$

For a stationary channel, the LLR can be normalized and the reliability of y_j is given by |r_j|. The proposed soft-input arithmetic decoding is as follows:

1. Evaluate the hard decision vector y = (y_1, ..., y_l) and the corresponding reliability vector Λ(y) = (Λ(y_1), ..., Λ(y_l)).

2. Determine the positions of the q least reliable binary elements of y based on Λ(y).

3. Form the test patterns t^i, 0 < i ≤ Q, where Q = 2^q: all l-element binary vectors of weight at most q covering all possible bit combinations in the least reliable positions (and zeros elsewhere).

4. Form the test sequences z^i, 0 < i ≤ Q, with $z_j^i = y_j \oplus t_j^i$ for 0 < j ≤ l, where ⊕ is the XOR operation.

5. Decode all test sequences z^i using classical arithmetic decoding. If a sequence z^i decodes exactly L symbols and its EoB symbol is correct, compute its metric using (7) and append it to the subset ψ of competing valid sequences.

6. Finally, the decoded bitstream corresponds to the sequence having the best metric M^k in the subset ψ.
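The steps above can be sketched as follows; `hard_decode` and `map_metric` are hypothetical stand-ins for the classical arithmetic decoder (returning None unless the sequence is length-valid with a correct EoB) and the MAP metric of Equation (7).

```python
from itertools import product

# Sketch of the Chase-like soft-input arithmetic decoder (steps 1-6 above).

def chase_decode(r, L, q, hard_decode, map_metric):
    y = [1 if rj < 0 else 0 for rj in r]            # step 1: hard decisions
    reliab = [abs(rj) for rj in r]                  # reliability |r_j|
    # step 2: positions of the q least reliable bits
    weak = sorted(range(len(r)), key=lambda j: reliab[j])[:q]
    best, best_metric = None, float("-inf")
    # steps 3-5: try all 2^q bit patterns on the weak positions
    for flips in product((0, 1), repeat=q):
        z = y[:]
        for pos, f in zip(weak, flips):
            z[pos] ^= f
        s = hard_decode(z, L)                       # None if not length-valid
        if s is not None:
            m = map_metric(s, z, r)
            if m > best_metric:                     # step 6: keep best metric
                best, best_metric = z, m
    return best if best is not None else y          # fall back to hard decision
```

The fallback to the hard-decision vector y when no test sequence is length-valid anticipates the rule used for the SISO variant in Section 4.2.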
3.3 Chase-like arithmetic decoding performance
The performance of the proposed soft-input arithmetic decoder is evaluated in terms of Packet Error Rate (PER) for different values of the test pattern weight q = 1, 2, 4, and 8 bits. A memoryless quaternary source is considered with packets of L = 128 symbols drawn according to the probabilities presented in Table 1. Each packet is encoded by a static arithmetic encoder. In all the experiments, the residual redundancy at the output of the arithmetic encoder is accounted for as a coding rate R_ac. We consider the case where the source statistics are assumed to be transmitted to the decoder without error. The mean length achieved by the arithmetic encoder is l̄ = 1.93 bits per symbol, resulting in a coding rate R_ac = 0.9. As a reference, we evaluated the performance of the classical arithmetic decoder. The PER obtained for the described schemes as a function of the signal-to-noise ratio E_b/N_0 is depicted in Figure 2.
Figure 2 shows that the proposed soft-input arithmetic decoder improves the performance compared to the classical arithmetic decoder. In fact, with one additional AC decoding test sequence (q = 1 bit), we achieve a gain of 1.2 dB at a PER of 10^{−3}. Increasing the value of the test pattern weight q yields a performance improvement reaching 2 dB at PER = 10^{−2}.
We also notice that for low to medium values of the signal-to-noise ratio, increasing q enables a remarkable performance improvement. For high values of E_b/N_0, a low value of q is sufficient to achieve the maximum performance; in this case, increasing q increases the complexity without improving the performance. In the following, we will use the value q = 4 bits since it represents a good trade-off between performance and complexity. For a complexity of 16 AC hard decoding operations, q = 4 bits yields an improvement of 1.6 dB at PER = 10^{−3}.
3.4 Comparison with trellis-based arithmetic decoding
In this section, we compare the proposed soft-input arithmetic decoder with a trellis-based arithmetic decoding scheme using the forbidden symbol technique with probability ε.
We consider a binary source delivering sequences of L = 512 symbols according to the probabilities p_0 = 0.2 and p_1 = 0.8. The entropy of this source is H = 0.72 bits per symbol. The mean length achieved by a static arithmetic encoder is l̄ = 0.87 bits per symbol, inducing a coding rate R_ac = 0.93. The same coding rate is obtained for the system based on Chase-like decoding (R_ac = 0.93). However, the forbidden symbol probability ε introduces additional redundancy in the compressed bitstream, so the R_ac value varies according to ε. The R_ac values corresponding to ε = 0.0 and 0.2 are, respectively, 0.96 and 0.67. The trellis-based decoding uses the soft-input Viterbi algorithm based on the Maximum Likelihood (ML) criterion. The trellis-based reference scheme is presented in [15], where the finite-precision static AC is described by a bit-clock trellis. In Figure 3, the PER of the described schemes is reported versus the signal-to-noise ratio E_b/N_0. The value q = 4 bits is considered for Chase-like decoding.
The simulation results show that for a very noisy channel (E_b/N_0 < 7 dB) the trellis-based arithmetic decoding with ε = 0.2 is more efficient than the Chase-like arithmetic decoding. However, for medium to high E_b/N_0 values, the Chase-like soft-input arithmetic decoder presents the best performance. In fact, it exhibits a considerable gain of about 1.1 dB over the best configuration of the trellis-based Viterbi decoding with ε = 0.2.
This gain is essentially due to the additional information used by the proposed Chase-like algorithm to perform arithmetic decoding. In fact, in the trellis proposed by Ben-Jamaa et al. [15], all binary paths of length l are valid candidates; the constraint on L is not considered. In contrast, all possible candidates for Chase-like decoding are length-valid sequences, which results in a greater Hamming distance between competitors and, consequently, better performance.
On the other hand, the Chase-like arithmetic decoding complexity is constant for increasing source sequence length L and alphabet cardinality U. The decoding complexity can be approximated by 2^q classical arithmetic decoding operations and depends only on q. The number of states representing the arithmetic encoding machine of [15] increases for larger values of L and U. Furthermore, the trellis construction needs the transmission of the source statistics as side information. Consequently, trellis-based decoding is very hard to apply with an adaptive AC (the trellis changes with the symbol probabilities). The proposed soft-input arithmetic decoder is very simple and can easily be extended to adaptive context-based ACs.
4 Iterative decoding method for the serially concatenated ACs
In the previous section, we showed that the proposed soft-input arithmetic decoder benefits from the bits' reliability at the output of the channel, which reduces the PER with respect to classical arithmetic decoding. In this section, the decoder is modified in order to provide additional information concerning the reliability of the compressed bitstream components. It is then applied in an iterative decoding scheme composed of an AC and an RSCC. The system performance is evaluated in terms of PER.
4.1 Concatenated AC and RSCC transmission system description
As shown in Figure 4, in the considered system the source generates sequences $\mathbf{s}^h = (s_1^h, \ldots, s_L^h)$ of L symbols. An arithmetic encoder encodes the source symbol sequences. The obtained bitstreams $\mathbf{b}^h = (b_1^h, \ldots, b_{l_h}^h)$ are assembled to form the binary sequence b = (b_1, ..., b_K), which is then scrambled by a random interleaver π. The interleaved sequence is encoded by an RSCC. The obtained sequence is transmitted over an AWGN channel by means of BPSK modulation.
At the receiver, we apply iterative decoding based on information exchange between a low-complexity SISO Chase-like arithmetic decoder, detailed below, and the RSCC decoder using the optimal MAP algorithm. The performance of the iterative decoding scheme involving the Chase-like algorithm is evaluated and compared to tandem decoding results. As mentioned, major JSC iterative decoding contributions consider trellis-based algorithms for SISO arithmetic decoding. To evaluate the efficiency of our decoder with respect to such schemes, a comparison to an iterative decoding scheme with a trellis-based arithmetic decoder is proposed. The reference decoder was presented in [17, 18], where the authors used a two-dimensional bit-clock trellis to model the arithmetic encoding machine and a modified SOVA [22] algorithm to generate soft bit-reliability estimates.
4.2 Low-complexity SISO arithmetic decoding
As mentioned in the previous section, the proposed soft-input arithmetic decoder does not use a finite-state machine to model the AC. Thus, the BCJR [23] and SOVA [22] algorithms are not applicable. The main idea is not to generate exact estimates of the bits' a posteriori LLRs but to define a bit-reliability factor.
The reliability of a component $\hat{b}_j$ of the decoded sequence $\hat{\mathbf{b}}$ is defined using the a posteriori LLR of the transmitted bit according to

$$\Lambda(\hat{b}_j) = \ln \frac{P(b_j = 0 \mid \mathbf{r})}{P(b_j = 1 \mid \mathbf{r})}$$
Often, contributions dealing with channel decoding assume an AWGN channel and equal a priori probabilities P(b_j = 1) = P(b_j = 0), and use the expression given in Equation (8). However, in the case of iterative channel and arithmetic decoding, not all components of the decoded sequence have the same reliability, and consequently the a priori term can be refined.
We have seen in the previous section that the proposed Chase-like arithmetic decoder computes the J (J ≤ Q) most likely length-valid compressed bitstreams $\hat{\mathbf{b}}^i$. The decoded sequence $\hat{\mathbf{b}}$ corresponds to the best sequence, among the J candidates, according to the metric given in Equation (7). It is clear that the positions j where all candidate sequences carry the same bit ($\hat{b}_j^{i_1} = \hat{b}_j^{i_2}$ for all $i_1 \neq i_2$) are the most reliable positions. In the following, we consider such bits reliably decoded and assign to them a constant extrinsic information $w_j = \beta \cdot (1 - 2 \hat{b}_j)$. We note that a similar reliability definition was proposed in [24] for SISO decoding of linear block codes, and that the value of β is determined experimentally. All the other bits are considered unreliable, and we assign to them an extrinsic information equal to zero. It is worth noticing that in some cases, especially with a relatively noisy channel, the Chase-like arithmetic decoder is not able to find a length-valid codeword among the Q test sequences. In this case, we apply the decoding rule $\hat{\mathbf{b}} = \mathbf{y}$ and assign to all its components an extrinsic information equal to 0.
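The bit-reliability rule above can be sketched directly; the function name is ours, and `candidates` stands for the J length-valid bitstreams surviving the Chase-like search.

```python
# Sketch of the extrinsic-information rule: bits on which all J length-valid
# candidates agree receive w_j = beta*(1 - 2*b_j); all other bits receive 0.
# When no length-valid codeword was found, every bit gets 0 (decoding rule
# b_hat = y in the text).

def extrinsic_info(candidates, best, beta):
    if not candidates:                          # no length-valid codeword found
        return [0.0] * len(best)
    w = []
    for j, bj in enumerate(best):
        if all(c[j] == bj for c in candidates):
            w.append(beta * (1 - 2 * bj))       # unanimous bit: reliable
        else:
            w.append(0.0)                       # disagreement: no extrinsic info
    return w
```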
4.3 Iterative arithmetic decoding performance
We consider a memoryless binary source with p_0 = 0.2 and p_1 = 0.8 (H = 0.72 bits per symbol) delivering packets of L = 512 symbols. Each packet is encoded by a static arithmetic encoder. The bitstreams delivered by the static arithmetic encoding of P = 4 packets are concatenated, then encoded by an 8-state RSCC with rate R = 1/2. In Figure 5, the PER of the described scheme is reported versus the signal-to-noise ratio E_b/N_0. We also depict the simulation results obtained with an earlier proposed iterative decoder based on a trellis representation of the AC, which is SISO decoded using the SOVA and List-SOVA algorithms. The trellis decoder may use the forbidden symbol technique with probability ε to insert controlled redundancy in the compressed bitstream, which improves the arithmetic decoding performance. The curves in Figure 5 correspond to the results obtained at the fifth iteration. The tandem scheme involves the soft-input Viterbi algorithm for RSC decoding and the classical hard-input arithmetic decoder.
Simulation results show that significant gains are obtained with respect to the tandem scheme for the different decoding algorithms considered. However, the gain varies with the channel signal-to-noise ratio E_b/N_0. For low E_b/N_0 values, the Chase-like arithmetic decoder is less efficient than the trellis-based decoder with ε = 0.2. However, for medium to high E_b/N_0 values, the Chase-like algorithm performs better than the best configuration of trellis-based decoding using the modified SOVA algorithm. At PER = 10^{−3}, an improvement of 0.3 dB in favor of the List-SOVA algorithm is obtained compared to Chase-like decoding.
Note that the main advantage of Chase-like decoding is that its complexity remains constant for increasing source sequence length L and alphabet cardinality U. In fact, as mentioned in the previous section, the decoding complexity is determined by the number of hard decoding operations, given by 2^q. In the evaluated scheme we have q = 4 bits, so 16 classical arithmetic decoding operations are required, which results in a reasonable complexity. On the other hand, the Chase-like algorithm is very simple and can easily be implemented for different types of ACs, such as the contextual adaptive AC. In contrast, the trellis-based arithmetic decoder is very hard to implement with such ACs.
In the next section, we will validate the advantages of Chase-like arithmetic decoding in the context of an image transmission system using the JPEG 2000 standard, which employs a contextual adaptive AC.
5 Application to JPEG 2000 coded images transmitted over a noisy channel
The objective in this section is to improve the performance of an image transmission system using the JPEG 2000 standard for compression and a Convolutional Code as a channel code.
The considered coding scheme uses the JPEG 2000 encoder, which compresses the source image at D_{s} bits per pixel (bpp). In JPEG 2000 coding, the 9/7 filters are used for the wavelet transform and the number of resolution levels is equal to five. The wavelet domain is divided into rectangular regions called code-blocks. The JPEG 2000 encoder defines multiple quality layers that allow image reconstruction at different rates (scalability). In the following experiment, we consider a single quality layer for simplicity; the scheme can be generalized to multiple quality layers. The values D_{s} = 0.4 bpp and D_{s} = 1 bpp are considered. The bitstream generated by the JPEG 2000 encoder is composed of headers describing the coding parameters, followed by a sequence of packets containing the encoded data. We assume that the data contained in the headers are transmitted without error. JPEG 2000 uses a context-based binary adaptive AC called the MQ-coder. The code-blocks are independently encoded by the AC [3].
The experimental setup is as follows. The test image Lena 512 × 512, initially coded at 8 bpp, is considered. By analogy with the system proposed in the previous section, the bitstreams resulting from the compression of P = 4 code-blocks form the message b. The latter is scrambled, then coded by an 8-state RSCC with rate $R = \frac{1}{2}$. Finally, the coded image is transmitted over an AWGN channel using BPSK modulation. In our simulations, we used an open-source implementation of JPEG 2000 called OpenJPEG (J2000 library). More details about the implementation are available at http://www.openjpeg.org.
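An 8-state rate-1/2 RSCC of the kind used here can be sketched as below. The generator polynomials (feedback 1 + D² + D³, feedforward 1 + D + D³) are an illustrative memory-3 choice, not necessarily the exact polynomials of the article's code.

```python
def rsc_encode(bits, g_fb=(1, 0, 1, 1), g_fw=(1, 1, 0, 1)):
    """Rate-1/2 recursive systematic convolutional encoder.

    g_fb, g_fw hold the feedback and feedforward polynomial coefficients
    of (1, D, ..., D^m); memory m = 3 gives an 8-state code.
    """
    m = len(g_fb) - 1
    reg = [0] * m                      # shift register, reg[0] is newest
    out = []
    for u in bits:
        a = u                          # feedback sum driving the register
        for i in range(m):
            a ^= g_fb[i + 1] & reg[i]
        p = g_fw[0] & a                # parity (feedforward) sum
        for i in range(m):
            p ^= g_fw[i + 1] & reg[i]
        out += [u, p]                  # systematic bit, then parity bit
        reg = [a] + reg[:-1]
    return out

coded = rsc_encode([1, 0, 1, 1])       # 8 output bits for 4 input bits
```

The systematic structure (every input bit appears unchanged in the output) is what lets the iterative receiver exchange bit-level extrinsic information between the arithmetic and channel decoders.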
At the receiver side, we apply the iterative decoding described in the previous section, based on information transfer between the JPEG 2000 decoder and the channel decoder. The JPEG 2000 decoder uses the proposed SISO arithmetic decoder described in Section 4.2. The test pattern weight q is fixed to 4 bits. The value of the extrinsic information β delivered by the SISO arithmetic decoder is experimentally optimized: we used β = 2.5 for $\frac{E_b}{N_0} = 4\ \text{dB}$ and β = 4.0 for $\frac{E_b}{N_0} \ge 4.5\ \text{dB}$. The BCJR algorithm [23] is applied for RSCC decoding with soft inputs and outputs.
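The role of β can be sketched as a Pyndiah-style scaling of the arithmetic decoder's hard output into extrinsic values fed back to the channel decoder. The function names and the sign convention (positive LLR means bit 0) are our own illustrative assumptions.

```python
import numpy as np

def ac_extrinsic(decoded_bits, beta):
    """Extrinsic values from the hard output of the Chase-like
    arithmetic decoder: +beta for bit 0, -beta for bit 1
    (positive-LLR-means-zero convention)."""
    return beta * (1.0 - 2.0 * np.asarray(decoded_bits, dtype=float))

def next_iteration_input(channel_llr, decoded_bits, beta):
    # a priori information handed to the BCJR channel decoder
    # at the next iteration
    return np.asarray(channel_llr, dtype=float) + ac_extrinsic(decoded_bits, beta)
```

A larger β expresses more confidence in the arithmetic decoder's decisions, which is why its optimal value grows with $\frac{E_b}{N_0}$ as reported above.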
Performance is measured in terms of average PSNR over 500 independent image transmissions. As a reference, we consider a system with the same parameters and classical JPEG 2000 decoding. The simulation results are reported in Figures 6 and 7 for the respective bit rates D_{s} = 0.4 bpp and D_{s} = 1 bpp at the output of the JPEG 2000 encoder.
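For reference, the PSNR figure of merit used in these plots is the standard definition for 8-bit images (peak value 255):

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    err = np.asarray(reference, dtype=float) - np.asarray(reconstruction, dtype=float)
    mse = np.mean(err ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

The averages reported below are taken over the 500 independent channel realizations, not over a single decoded image.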
It can be seen that remarkable gains are obtained in terms of average PSNR. In fact, for D_{s} = 0.4 bpp and at $\frac{E_b}{N_0} = 4.5\ \text{dB}$, the proposed algorithm exhibits a PSNR gain of 5 dB, and the iterative decoding yields a significant gain of 6.2 dB over the tandem scheme. On the other hand, for the same bit rate (D_{s} = 0.4 bpp), the studied system reaches an average PSNR of approximately 35 dB at $\frac{E_b}{N_0} = 5.0\ \text{dB}$, whereas the tandem scheme achieves this value at $\frac{E_b}{N_0} = 6.5\ \text{dB}$. Therefore, the iterative decoding provides a gain of 1.5 dB in terms of $\frac{E_b}{N_0}$.
For D_{s} = 1 bpp and at $\frac{E_b}{N_0} = 4.5\ \text{dB}$, an average PSNR gain of 3.8 dB is obtained at the first iteration with respect to the tandem scheme. This gain increases with the iterations, reaching 8.5 dB over the tandem scheme. Moreover, from $\frac{E_b}{N_0} = 5.0\ \text{dB}$ onward, the average PSNR obtained at the third iteration is approximately 39 dB, a value reached by the tandem scheme only at $\frac{E_b}{N_0} = 6.5\ \text{dB}$. Therefore, the iterative decoding again enables a gain of 1.5 dB in terms of $\frac{E_b}{N_0}$.
In Figure 8, we give examples of reconstructed images to illustrate the improvement in visual quality obtained by the proposed iterative decoder. The figure shows the image quality improving over the iterations, which is reflected in increasing PSNR values.
6 Conclusion
In this article, we have proposed novel low-complexity decoding algorithms for ACs based on the Chase algorithm. The schemes were tested for transmission over an AWGN channel with BPSK signaling and showed several significant advantages. First, we showed that the soft-input arithmetic decoder achieves good error correction performance, has low complexity, and can easily be extended to adaptive ACs, unlike trellis-based arithmetic decoders. Second, we showed that the Chase-like algorithm can be slightly modified to generate additional information regarding the reliability of the decoded bits. This improvement enables iterative decoding in the case of the serial concatenation of an AC with a channel code. As a second experiment, the concatenation of an AC with an RSCC was considered and iterative decoding results were investigated. The scheme is a JSC decoding approach in which the AC embeds compression and error correction in a single stage. Simulation results show significant performance improvement compared to tandem decoding and to our previous iterative decoding scheme using a trellis-based SISO arithmetic decoder. Moreover, the presented iterative system has been profitably exploited in the case of JPEG 2000 image transmission. In fact, improvements in terms of average PSNR and visual quality were observed compared to standard JPEG 2000 decoding.
References
 1.
Rissanen JJ, Langdon GG: Arithmetic coding. IBM J Res Dev 1979, 23(2):149-162.
 2.
Witten IH, Neal RM, Cleary JG: Arithmetic coding for data compression. Comm ACM 1987, 30(6):520-540. 10.1145/214762.214771
 3.
Taubman DS, Marcellin MW: JPEG 2000: image compression fundamentals, standards and practice. Kluwer Academic Publishers; 2002.
 4.
Richardson IEG: H.264 and MPEG-4 video compression: video coding for next generation multimedia. John Wiley and Sons Ltd; 2003.
 5.
Redmill DW, Kingsbury NG: The EREC: an error-resilient technique for coding variable-length blocks of data. IEEE Trans Image Processing 1996, 5(4):565-574. 10.1109/83.491333
 6.
Boyd C, Cleary JG, Irvine SA, Rinsma-Melchert I, Witten IH: Integrating error detection into arithmetic coding. IEEE Trans Communications 1997, 45(1):1-3.
 7.
Chou J, Ramchandran K: Arithmetic coding-based continuous error detection for efficient ARQ-based image transmission. IEEE J Sel Areas Commun 2000, 18(6):861-867. 10.1109/49.848240
 8.
Anand R, Ramchandran K, Kozintsev IV: Continuous error detection (CED) for reliable communication. IEEE Trans Communications 2001, 49(9):1540-1549. 10.1109/26.950341
 9.
Elmasry G, Shi Y: MAP symbol decoding of arithmetic coding with embedded channel coding. Proceedings of the Wireless Communications and Networking Conf: New Orleans, USA 1999, 2:988-992.
 10.
Pettijohn BD, Hoffman W, Sayood K: Joint source/channel coding using arithmetic codes. IEEE Trans Communications 2001, 49:826-836. 10.1109/26.923806
 11.
Grangetto M, Cosman P, Olmo G: Joint source/channel coding and MAP decoding of arithmetic coding. IEEE Trans Communications 2005, 35:1007-1016.
 12.
Sayir J: Arithmetic coding for noisy channels. Proceedings of the IEEE Information Theory Workshop: Kruger National Park, South Africa 1999, 69-71.
 13.
Guionnet T, Guillemot C: Soft decoding and synchronization of arithmetic codes: application to image transmission over noisy channels. IEEE Trans Image Processing 2003, 12:1599-1609. 10.1109/TIP.2003.819307
 14.
Dongsheng B, Hoffman W, Sayood K: State machine interpretation of arithmetic codes for joint source and channel coding. Proceedings of the IEEE Data Compression Conference: Snowbird, Utah, USA 2006, 143-152.
 15.
Ben-Jamaa S, Weidmann C, Kieffer M: Analytical tools for optimizing the error correction performance of arithmetic codes. IEEE Trans Communications 2008, 56(9):1458-1468.
 16.
Grangetto M, Scanavino B, Olmo G, Benedetto S: Iterative decoding of serially concatenated arithmetic and channel codes with JPEG 2000 applications. IEEE Trans Image Processing 2007, 16(6):1557-1567.
 17.
Zribi A, Zaibi S, Pyndiah R, Bouallegue A: Low-complexity joint source/channel turbo decoding of arithmetic codes with image transmission application. Proceedings of the IEEE Data Compression Conference: Snowbird, Utah, USA 2009, 472.
 18.
Zribi A, Zaibi S, Pyndiah R, Bouallegue A: Low-complexity joint source/channel turbo decoding of arithmetic codes. Proceedings of the IEEE Intl Symp on Turbo Codes and Related Topics: Lausanne, Switzerland 2009, 385-389.
 19.
Pan X, Cuhadar A, Banihashemi AH: Combined source and channel coding with JPEG2000 and rate-compatible low-density parity-check codes. IEEE Trans Signal Processing 2006, 54(3):1160-1164.
 20.
Pu L, Wu Z, Bilgin A, Marcellin MW, Vasic B: LDPC-based iterative joint source-channel decoding for JPEG2000. IEEE Trans Image Process 2007, 16(2):577-581.
 21.
Chase D: A class of algorithms for decoding block codes with channel measurement information. IEEE Trans Information Theory 1972, 18:170-182. 10.1109/TIT.1972.1054746
 22.
Hagenauer J, Hoeher P: A Viterbi algorithm with soft-decision outputs and its applications. Proceedings of the IEEE Globecom: Dallas, TX, USA 1989, 11-17.
 23.
Bahl L, Cocke J, Jelinek F, Raviv J: Optimal decoding of linear codes for minimizing symbol error rate. IEEE Trans Information Theory 1974, 20(2):284-287.
 24.
Pyndiah R: Near-optimum decoding of product codes: block turbo codes. IEEE Trans Communications 1998, 46(8):1003-1010. 10.1109/26.705396
Competing interests
The authors declare that they have no competing interests.
Cite this article
Zaibi, S., Zribi, A., Pyndiah, R. et al. Joint source/channel iterative arithmetic decoding with JPEG 2000 image transmission application. EURASIP J. Adv. Signal Process. 2012, 114 (2012). https://doi.org/10.1186/1687-6180-2012-114
Keywords
 Packet Error Rate
 Additive White Gaussian Noise Channel
 Extrinsic Information
 Arithmetic Code
 Iterative Decoding