- Research
- Open Access

# A family of chaotic pure analog coding schemes based on baker’s map function

- Yang Liu^{1} (Email author)
- Jing Li^{1}
- Xuanxuan Lu^{1}
- Chau Yuen^{2}
- Jun Wu^{3}

*EURASIP Journal on Advances in Signal Processing* **2015**:58

https://doi.org/10.1186/s13634-015-0243-9

© Liu et al. 2015

**Received:** 16 January 2015 **Accepted:** 22 June 2015 **Published:** 11 July 2015

## Abstract

This paper considers a family of pure analog coding schemes constructed from dynamic systems which are governed by chaotic functions—baker’s map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker’s and single-input baker’s analog codes provide balanced protection against fold errors (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to the conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.

## Keywords

- Analog error correction code
- Chaotic dynamic system
- Mean square error (MSE)
- Maximum likelihood (ML) decoding
- Minimum mean square error (MMSE) decoding

## 1 Introduction

Currently pervasive communication systems in practice are almost all digital. Shannon’s source-channel separation theorem has long convinced people that information can be transmitted without loss of optimality by a two-step procedure: compression and encoding. This fundamental result has laid the foundation for the typical structure of modern digital communication systems—the tandem structure of source coding followed by channel coding. Although digital communication systems have been well developed over the last decades, they have inherent drawbacks. First, to transmit continuous-alphabet sources, signals are quantized, which introduces a permanent loss of information. Second, precisely representing real-valued signals via digits usually expands the bandwidth, and the subsequent channel coding procedure makes transmission still more bandwidth demanding. Third, digital error correction codes are highly signal-to-noise-ratio (SNR) dependent. Take turbo and low-density parity-check (LDPC) codes as examples: when the receiving SNR is below some threshold value, the decoding performance is usually very poor, whereas once the SNR exceeds this threshold, their bit error ratio (BER) drops drastically within a narrow SNR range (the waterfall region). This ungraceful evolution of performance can cause problems in applications. A typical scenario is the broadcasting system, where the SNR of different receivers can vary over a large range. At the same time, digital error correction codes are not energy efficient, since additional transmission power improves performance only slightly once the receiving SNR is modestly above the threshold. Last but not least, digital error correction codes with satisfying performance usually require a long block length, which introduces high latency for decoding and processing at the receiver.

In addition to the classical source-channel separate digital system, the analog transmission system can serve as an alternative solution to data transmission. The analog system has advantages over its pure digital peers—it does not introduce granularity noise and its performance evolves gracefully with SNR. Most of the analog transmission systems presented in the literature are joint source-channel coding (JSCC) systems, where compression and encoding are performed in one step and signals are in pure analog or hybrid-digital-analog (HDA) form. The study of analog communication dates back to the papers [1–4]. Reference [4] shows that direct transmission of a Gaussian source over an additive white Gaussian noise (AWGN) channel with no bandwidth expansion or compression is optimal. For the bandwidth expansion case, [5] obtains the result that the fastest decay of the mean square error (MSE) cannot be better than the inverse square of the SNR, although until now no practical schemes have been found to achieve this decay rate. In [6, 7], the optimal linear analog codes are treated. The design of practical nonlinear analog coding schemes has always been an open issue, and some interesting paradigms have been found. Fuldseth [8] and Chung [9] discuss numerical-based analog signal encoding schemes. Vaishampayan and Costa [10] propose a class of analog dynamic systems constructed by first-order derivative equations, which generate algebraic analog codes on the torus or sphere. Cai and Modestino [11], Hekland et al. [12], and Floor and Ramstad [13] study the design of the Shannon-Kotel’nikov curve. The minimum mean square error decoding schemes for the Shannon-Kotel’nikov analog codes and their modified version combined with hybrid digital signals are discussed in [14, 15].

Among the family of analog coding schemes, one special class is constructed through chaotic dynamic systems. In dynamic systems, the signal sequence is generated by iteratively invoking some predefined mapping function. To be specific, the next signal (state) is obtained by applying a mapping to the current signal (state), and the whole signal (state) sequence is initialized by the input signal. For a chaotic dynamic system, the function governing the signal generation (state transition) is chosen as a chaotic function. Chaotic functions are characterized by their fast divergence, better known as the remarkable *butterfly effect*. This property means that even a very tiny difference in initial inputs will soon result in significantly different signal sequences. From the signal space expansion viewpoint, this indicates that a pair of points with small distance in the source space will have a large distance in the code space. So chaotic dynamic systems can potentially endow signals with error resistance. The seminal work [16] proposes an analog system based on the tent map dynamic system, and its performance is extensively discussed in [17]. As shown by the analysis in [18, 19], the drawback of the tent map code is that its performance converges to the Cramer-Rao lower bound (CRLB) only at very high SNR. Rosenhouse and Weiss [20] propose an improved scheme by protecting the itinerary of the tent map codes with digital error correction codes. However, this hybrid-digital-analog scheme still suffers from the drawbacks rooted in digital error correction codes.

In this paper, we focus on a new pure analog chaotic dynamic encoding scheme, which is constructed via a two-dimensional chaotic function—baker’s map. This structure is closely related to and more complicated than the one reported in [16]. The specific contributions of this paper include the following: we develop various decoding methods for the baker’s coding system and analyze its MSE performance. Based on that, we proceed to propose two improved coding structures and extend various decoding methods to these new structures. These proposed improvements effectively balance the protection for all source signals and have more satisfying MSE performance compared to the tent map code. We also compare our proposed analog coding scheme with the classical source-channel separate digital coding scheme, where turbo code is applied. By using equal power and bandwidth, our proposed coding scheme outperforms the digital turbo scheme over a wide SNR range.

This paper is organized as follows: in Section 2, the original baker’s dynamic system is discussed, including its encoding structure, decoding methods, and performance analysis. Two modified chaotic systems based on the baker’s system are discussed in Sections 3 and 4, including their encoding and decoding schemes. In Section 5, numerical results are presented and discussed. Section 6 concludes the paper.

In this paper, we assume that the source signals are mutually independent and uniformly distributed on the interval [−1,1], an assumption also adopted in the previous works [16] and [17]. Under this assumption, the MMSE decoding method has a closed-form solution and we can compare performance with previous works. However, it should be pointed out that the maximum likelihood (ML) decoding method does not require this condition and is applicable to signals with arbitrary distributions. We assume that the transmission channel is AWGN; the decoding methods obtained can be easily extended to block fading channels.

## 2 The baker’s map analog coding scheme

In this section, we introduce the analog encoding scheme based on the baker’s map function. The baker’s map function, *F* : [0,1]^{2}↦[0,1]^{2}, is a piecewise-linear chaotic function given as follows:

$$F(x,y)=\begin{cases}\left(2x,\;\tfrac{y}{2}\right), & 0\le x<\tfrac{1}{2},\\ \left(2-2x,\;1-\tfrac{y}{2}\right), & \tfrac{1}{2}\le x\le 1.\end{cases}$$

After shifting the domain to [−1,1]^{2}, the first coordinate is governed by the symmetric tent map *G*(*x*)=1−2|*x*|. The tent map is not invertible by itself, but once the sign *s* of *x* is given, its value can be determined. The “inverse” symmetric tent map function with the sign *s* of *x* given is \(G_{s}^{-1}(y)=s\frac {1-y}{2}\). Comparing the baker’s map and the symmetric tent map functions, the baker’s map can be alternatively defined via the symmetric tent map function as follows:

$$x_{k+1}=G(x_{k}),\qquad y_{k+1}=G_{s_{k}}^{-1}(y_{k}),\qquad s_{k}=\mathsf{sign}(x_{k}).$$

Starting from an initial input (*x*_{0},*y*_{0})∈ [−1,1]^{2}, a chaotic signal sequence is generated by repeatedly invoking baker’s mapping, i.e.,

$$(x_{k+1},y_{k+1})=F(x_{k},y_{k}),\qquad k=0,1,\cdots,N-2,$$

where *N* is the bandwidth expansion. This sequence can be viewed as a rate-1/*N* analog code with *x*_{0} and *y*_{0} as continuous information “bits”. In the following, we use *x*=[*x*_{0},*x*_{1},⋯,*x*_{N−1}]^{T} and *y*=[*y*_{0},*y*_{1},⋯,*y*_{N−1}]^{T} to denote the codewords of the two input signals, respectively.

An important concept for the baker’s dynamic system is the *itinerary*, which is defined as \({s}=[s_{0}, s_{1},\cdots \!, s_{N-2}]\triangleq [\mathsf {sign}(x_{0}), \mathsf {sign}(x_{1}), \cdots \!, \mathsf {sign}(x_{N- 2})]\). In fact, if the itinerary of the code sequence is given, the *x*_{k}’s and *y*_{k}’s can all be expressed as affine functions of *x*_{0} and *y*_{0}. Specifically, *x*_{k} and *y*_{k} can be represented via (*x*_{0},*y*_{0}) in the following form:

$$x_{k}=a_{k,{s}}x_{0}+b_{k,{s}},\qquad y_{k}=c_{k,{s}}y_{0}+d_{k,{s}},$$

where the coefficients depend on the itinerary *s*. For a specific *s*, they can be obtained in the following recursive way:

$$a_{k+1,{s}}=-2s_{k}a_{k,{s}},\quad b_{k+1,{s}}=1-2s_{k}b_{k,{s}},\quad c_{k+1,{s}}=-\frac{s_{k}}{2}c_{k,{s}},\quad d_{k+1,{s}}=\frac{s_{k}\left(1-d_{k,{s}}\right)}{2},$$

with \(a_{0,{s}}=c_{0,{s}}=1\) and \(b_{0,{s}}=d_{0,{s}}=0\).

The set of 2^{N−1} itineraries one-to-one maps onto a partition^{1} of the feasible space of *x*_{0}, i.e., the segment [−1,+1]. The itinerary **s** is a function of the input *x*_{0}. For any specific itinerary **s**, the admissible values of *x*_{0} fall in a segment of length 1/2^{N−2}, which is called a *cell* and denoted as \(C_{\mathbf {s}}\triangleq [e_{l,{s}}, e_{u,{s}}]\). The two endpoints *e*_{l,s} and *e*_{u,s} of the cell associated with *s* are determined by the sign constraints \(s_{k}\left(a_{k,{s}}x_{0}+b_{k,{s}}\right)\ge 0\), *k*=0,⋯,*N*−2. For example, when *N*=2, the itinerary has 1 bit, i.e., **s**∈{+1,−1}, and the two corresponding cells are respectively the left and right halves of the segment [−1,+1]. This concept is extended to length *N*=*n*+1 in the right of Fig. 1a, where *G*^{(n)}(*x*) denotes the *n*-fold composition of *G*(·). In fact, once the itinerary **s**_{j} is given, the endpoints \(\phantom {\dot {i}\!}e_{l,\mathbf {s}_{j}}\) and \(\phantom {\dot {i}\!}e_{u,\mathbf {s}_{j}}\) of the cell and the affine parameters \(\left \{a_{k,\mathbf {s}_{j}},b_{k,\mathbf {s}_{j}},c_{k,\mathbf {s}_{j}},d_{k,\mathbf {s}_{j}}\right \}\) can all be determined as functions of **s**_{j}, as shown in Fig. 1b.
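The affine representation can be checked numerically. The sketch below is our reconstruction: the recursion follows directly from *G* and \(G_{s}^{-1}\), so each step simply substitutes the current affine form into the next map.

```python
import numpy as np

def affine_coeffs(s):
    """Coefficients with x_k = a[k]*x0 + b[k] and y_k = c[k]*y0 + d[k]
    for itinerary s = [s_0, ..., s_{N-2}] (recursion reconstructed from
    G(x) = 1 - 2|x| and G_s^{-1}(y) = s(1 - y)/2)."""
    a, b, c, d = [1.0], [0.0], [1.0], [0.0]
    for sk in s:
        a.append(-2.0 * sk * a[-1])          # x_{k+1} = 1 - 2 s_k x_k
        b.append(1.0 - 2.0 * sk * b[-1])
        c.append(-0.5 * sk * c[-1])          # y_{k+1} = s_k (1 - y_k)/2
        d.append(0.5 * sk * (1.0 - d[-1]))
    return tuple(np.array(v) for v in (a, b, c, d))
```

Note that |*a*_{k}|=2^{k} while |*c*_{k}|=2^{−k}: the *x*-branch stretches and the *y*-branch compresses, a fact that reappears in the CRLB analysis of Section 2.4.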

Next, we discuss decoding schemes for the above baker’s dynamic encoding system.

### 2.1 Maximum likelihood decoding

Suppose the codewords are transmitted over an AWGN channel, so the received signals are \(r_{x,n}=x_{n}+w_{x,n}\) and \(r_{y,n}=y_{n}+w_{y,n}\) for *n*=0,1,⋯,*N*−1, where the noise samples are i.i.d. Gaussian with variance *σ*^{2}. We denote *r*_{x}=[*r*_{x,0},*r*_{x,1},⋯,*r*_{x,N−1}]^{T} and *r*_{y}=[*r*_{y,0},*r*_{y,1},⋯,*r*_{y,N−1}]^{T}. The likelihood function of the observation sequences *r*_{x}, *r*_{y} with given source (*x*_{0},*y*_{0}) is

$$p({r}_{x},{r}_{y}|x_{0},y_{0})=\left(\frac{1}{2\pi \sigma ^{2}}\right)^{N}\exp \left(-\frac{\|{r}_{x}-{x}\|^{2}+\|{r}_{y}-{y}\|^{2}}{2\sigma ^{2}}\right),$$

where the *x*_{k}, *y*_{k} are functions of *x*_{0} and *y*_{0}.

Based on connections between itineraries and cells discussed in (5)–(8), the original maximum likelihood (ML) estimation problem in (11) can be further transformed into

$$\min _{{s}}\ \min _{x_{0}\in C_{{s}},\,y_{0}\in [-1,1]}\ \|{r}_{x}-{a}_{{s}}x_{0}-{b}_{{s}}\|^{2}+\|{r}_{y}-{c}_{{s}}y_{0}-{d}_{{s}}\|^{2}.$$

For each itinerary *s*, the inner minimization problem in Eq. (12) is convex and quadratic. Without considering the constraints, its optimal solution \((x_{0,{s}}^{*}, y_{0,{s}}^{*})\) is given in a closed form

$$x_{0,{s}}^{*}=\frac{{a}_{{s}}^{T}({r}_{x}-{b}_{{s}})}{{a}_{{s}}^{T}{a}_{{s}}},\qquad y_{0,{s}}^{*}=\frac{{c}_{{s}}^{T}({r}_{y}-{d}_{{s}})}{{c}_{{s}}^{T}{c}_{{s}}},$$

where *a*_{s}=[*a*_{0,s},⋯,*a*_{N−1,s}]^{T}, *b*_{s}=[*b*_{0,s},⋯,*b*_{N−1,s}]^{T}, *c*_{s}=[*c*_{0,s},⋯,*c*_{N−1,s}]^{T}, and *d*_{s}=[*d*_{0,s},⋯,*d*_{N−1,s}]^{T}. Taking into account that the feasible (*x*_{0},*y*_{0}) associated with *s* should lie within the admissible range, a limiting (clipping) procedure must be performed to obtain the solution of the inner minimization with a specific *s*: the candidate *x*_{0} is clipped to the cell \(C_{{s}}\) and the candidate *y*_{0} to [−1,+1]. Finally, the ML estimate of (*x*_{0},*y*_{0}) is obtained by selecting, over all itineraries *s*, the clipped candidate pair that minimizes the objective in (12).

The ML decoding scheme does not require a priori knowledge of the source’s distribution and is therefore applicable to sources with arbitrary distributions.
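The whole ML procedure — enumerate itineraries, solve each least-squares subproblem in closed form, clip to the admissible region, keep the best candidate — can be sketched as follows (our illustration; the cell endpoints are obtained here from the sign constraints \(s_{k}(a_{k}x_{0}+b_{k})\ge 0\) rather than from a precomputed table):

```python
import itertools
import numpy as np

def coeffs(s):
    # Affine coefficients: x_k = a[k]*x0 + b[k], y_k = c[k]*y0 + d[k].
    a, b, c, d = [1.0], [0.0], [1.0], [0.0]
    for sk in s:
        a.append(-2.0 * sk * a[-1])
        b.append(1.0 - 2.0 * sk * b[-1])
        c.append(-0.5 * sk * c[-1])
        d.append(0.5 * sk * (1.0 - d[-1]))
    return tuple(np.array(v) for v in (a, b, c, d))

def ml_decode(rx, ry):
    """Brute-force ML decoding: for each of the 2^(N-1) itineraries solve
    the least-squares subproblem in closed form, clip (x0, y0) to the
    admissible region, and keep the candidate with the smallest residual."""
    N = len(rx)
    best, best_cost = (0.0, 0.0), np.inf
    for s in itertools.product((1.0, -1.0), repeat=N - 1):
        a, b, c, d = coeffs(s)
        el, eu = -1.0, 1.0                # cell C_s from s_k(a_k x0 + b_k) >= 0
        for k in range(N - 1):
            bound = -b[k] / a[k]
            if s[k] * a[k] > 0:
                el = max(el, bound)
            else:
                eu = min(eu, bound)
        x0 = float(np.clip(np.dot(a, rx - b) / np.dot(a, a), el, eu))
        y0 = float(np.clip(np.dot(c, ry - d) / np.dot(c, c), -1.0, 1.0))
        cost = np.sum((rx - a * x0 - b) ** 2) + np.sum((ry - c * y0 - d) ** 2)
        if cost < best_cost:
            best, best_cost = (x0, y0), cost
    return best
```

The complexity grows as 2^{N−1}, which is acceptable for the small bandwidth expansions (*N*=3,5) considered later.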

### 2.2 Minimum mean square error decoding

The general MMSE estimator is the conditional mean \(\hat {x}^{\mathrm {MMSE}}(y)=\mathsf {E}[X|Y=y]\), where *X* is the random parameter to be determined and *y* is a specific realization of the noisy observation *Y*. It is worth noting that this general solution usually does not yield a closed form for concrete problems. Fortunately, under the uniform distribution assumption on the source signals, a closed-form MMSE estimator for the baker’s map can be obtained.

Specifically, let *Q*(·) denote the well-known Gaussian Q-function, defined as \(Q(t)=\frac {1}{\sqrt {2\pi }}\int _{t}^{\infty }e^{-u^{2}/2}\,du\). Under the uniform source assumption, the MMSE estimates of *x*_{0} and *y*_{0} are given in closed form as ratios of sums of *Q*-function terms evaluated at the cell endpoints, with one term per itinerary.

The detailed proof of the above result is rather involved and relegated to the Appendix.
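Since the closed form is lengthy, a brute-force numerical stand-in is useful for checking it: with a uniform prior, the MMSE estimate is just the likelihood-weighted average of (*x*_{0},*y*_{0}) over the source square. The sketch below is our illustration; the grid size `M` is a hypothetical accuracy knob, not a quantity from the paper.

```python
import numpy as np

def encode(x0, y0, N):
    """Baker's map encoder: x follows the tent map, y the sign-resolved inverse."""
    xs, ys = [x0], [y0]
    for _ in range(N - 1):
        s = 1.0 if xs[-1] >= 0 else -1.0
        xs.append(1.0 - 2.0 * abs(xs[-1]))
        ys.append(s * (1.0 - ys[-1]) / 2.0)
    return np.array(xs), np.array(ys)

def mmse_decode_grid(rx, ry, sigma, M=101):
    """E[(x0, y0) | r] for a uniform prior on [-1,1]^2, evaluated by
    numerical integration on an M x M grid; a stand-in for the paper's
    closed-form Q-function expression."""
    g = np.linspace(-1.0, 1.0, M)
    num_x = num_y = den = 0.0
    for x0 in g:
        for y0 in g:
            xs, ys = encode(x0, y0, len(rx))
            w = np.exp(-(np.sum((rx - xs) ** 2) + np.sum((ry - ys) ** 2))
                       / (2.0 * sigma ** 2))
            num_x += x0 * w
            num_y += y0 * w
            den += w
    return num_x / den, num_y / den
```

At high SNR, the posterior concentrates on a single cell and this average approaches the ML solution, which is why the two decoders coincide in that regime.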

### 2.3 Mixed ML-MMSE decoding scheme

The MMSE estimator involves highly nonlinear numerical evaluations, like the Q-function, which are computation demanding and costly for implementation. Next, we introduce a mixed ML-MMSE estimator for the baker’s analog code.

For a specific itinerary *s*, by packing the codewords *x* and *y* into one vector *v* and using (5), we can rewrite the baker’s dynamic system as follows:

$${v}=\begin{bmatrix}{x}\\ {y}\end{bmatrix}=\begin{bmatrix}{a}_{{s}} & {0}\\ {0} & {c}_{{s}}\end{bmatrix}\begin{bmatrix}x_{0}\\ y_{0}\end{bmatrix}+\begin{bmatrix}{b}_{{s}}\\ {d}_{{s}}\end{bmatrix},$$

where the parameters *a*_{s}, *b*_{s}, *c*_{s}, and *d*_{s} are defined in (6).

From the ML decoding, the detected itinerary \(\hat {{s}}^{\text {ML}}\) can be obtained. By substituting *s* in (21) with the ML detection \(\hat {{s}}^{\text {ML}}\) and packing the received signals *r*_{x} and *r*_{y} into one vector \({r}=\left [{r}_{x}^{T}, {r}_{y}^{T}\right ]^{T}\), (9) can be expressed in a compact form

$${r}={G}_{\hat {{s}}^{\text {ML}}}{u}+\begin{bmatrix}{b}_{\hat {{s}}^{\text {ML}}}\\ {d}_{\hat {{s}}^{\text {ML}}}\end{bmatrix}+{w},\qquad {u}\triangleq [x_{0},y_{0}]^{T},\qquad {G}_{{s}}\triangleq \begin{bmatrix}{a}_{{s}} & {0}\\ {0} & {c}_{{s}}\end{bmatrix}.$$

Thus, the baker’s map code is equivalent to a (2*N*,2) linear analog code with encoder *G*_{s}.

Estimating **u** in the above equation becomes the standard minimum MSE receiving problem, whose solution is the well-known Wiener filter, given as [22]

A slicing operation then follows the above Wiener filtering to ensure the final estimates \(\hat {x}_{0}^{\mathrm {ML-MMSE}}\) and \(\hat {y}_{0}^{\mathrm {ML-MMSE}}\) lie in \(\left [e_{l,\hat {\mathbf {s}}^{\text {ML}}}, e_{u,\hat {\mathbf {s}}^{\text {ML}}}\right ]\) and [−1,+1], respectively.

For the mixed ML-MMSE method, ML decoding is performed to obtain \(\hat {{s}}^{\text {ML}}\). Then, the Wiener filtering and limiting procedure follows. The mixed ML-MMSE decoding method requires a priori knowledge of the source and involves only linear computation operations.
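Given the detected itinerary, the Wiener filtering step reduces to plain linear algebra. A sketch, assuming uniform sources on [−1,1] (so Cov[(*x*_{0},*y*_{0})]=(1/3)*I*) and the affine coefficients reconstructed from *G* and \(G_{s}^{-1}\):

```python
import numpy as np

def ml_mmse_step(rx, ry, s, sigma):
    """Given the ML-detected itinerary s, apply the Wiener (LMMSE) filter
    to the equivalent (2N,2) linear analog code. Assumes uniform sources,
    so Cov[(x0, y0)] = (1/3) I."""
    a, b, c, d = [1.0], [0.0], [1.0], [0.0]
    for sk in s:                      # affine coefficients for itinerary s
        a.append(-2.0 * sk * a[-1])
        b.append(1.0 - 2.0 * sk * b[-1])
        c.append(-0.5 * sk * c[-1])
        d.append(0.5 * sk * (1.0 - d[-1]))
    a, b, c, d = (np.array(v) for v in (a, b, c, d))
    N = len(rx)
    G = np.zeros((2 * N, 2))
    G[:N, 0], G[N:, 1] = a, c         # r = G u + offset + noise
    r = np.concatenate([rx - b, ry - d])
    Cu = np.eye(2) / 3.0
    W = Cu @ G.T @ np.linalg.inv(G @ Cu @ G.T + sigma ** 2 * np.eye(2 * N))
    x0, y0 = W @ r                    # the slicing/clipping step follows
    return float(x0), float(y0)
```

Only a matrix-vector product is needed per codeword once `W` is precomputed for each itinerary, which is the complexity advantage over the Q-function-based MMSE decoder.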

### 2.4 Performance analysis

Here *E*_{u} denotes the average power of each source signal, and *N*_{0} denotes the unilateral power spectral density, i.e., *N*_{0}=2*σ*^{2}. The ML, MMSE, and ML-MMSE decoding algorithms have identical MSE performance at high SNR; in the low SNR range, the MMSE decoding method has the best performance.

In the following, we analyze the MSE performance of the baker’s dynamic coding system by considering the Cramer-Rao lower bound. The CRLB is a lower bound for unbiased estimators [23]. It should be pointed out that the ML decoding methods discussed above are biased estimators due to the slicing operations. However, when the SNR is large, the decoding error is sufficiently small that the slicing rarely affects the decoding result. So the CRLB can precisely predict the decoding error when the SNR is modestly large and is a useful tool for understanding the system’s performance. This will also be verified by the numerical results below.

The CRLB for *x*_{0} is given as [23]

$$\text {CRLB}_{x_{0}}=\left [-\mathsf {E}_{x_{0}}\!\left (\frac {\partial ^{2}\ln p({r}_{x},{r}_{y}|x_{0},y_{0})}{\partial x_{0}^{2}}\right)\right ]^{-1}=\frac {\sigma ^{2}}{\sum _{k=0}^{N-1}a_{k,{s}}^{2}}=\frac {3\sigma ^{2}}{4^{N}-1},$$

where *p*(**r**_{x},**r**_{y}|*x*_{0},*y*_{0}) is defined in (10), and \(\mathsf {E}_{x_{0}}(\cdot)\) denotes the expectation with respect to *x*_{0}. The recursive relations in (6) and the fact \({s_{k}^{2}}=1\) are used to obtain (26), since they give \(a_{k,{s}}^{2}=4^{k}\). Similarly, the CRLB for *y*_{0} is obtained as

$$\text {CRLB}_{y_{0}}=\frac {\sigma ^{2}}{\sum _{k=0}^{N-1}c_{k,{s}}^{2}}=\frac {3\sigma ^{2}\,4^{N-1}}{4^{N}-1}.$$

When *N* is modestly large, \(\text {CRLB}_{x_{0}}\approx 3\sigma ^{2}/4^{N}\): each increment in *N* decreases the decoding distortion of *x*_{0} by 3/4, i.e., to a quarter of its previous value. Comparatively, an increment in *N* only slightly improves the estimation of *y*_{0}, whose CRLB stays nearly constant at 3*σ*^{2}/4. The CRLB reveals that the two sources are under unequal protection and that there is insufficient coding gain on *y*_{0}. Recall that the *x*-sequence of the codeword is obtained by continuously stretching and shifting the signal; intuitively, the signal is locally magnified. In comparison, the *y*-sequence is obtained by compressing the signal. That is why the terms 2^{N} and 2^{−N} appear in the denominators of the CRLBs for *x*_{0} and *y*_{0}, respectively. This insight is verified by Fig. 3, where the separate MSE decoding performances of *x*_{0} and *y*_{0} are plotted with their CRLBs as benchmarks. Although *x*_{0} enjoys an obvious coding gain, *y*_{0} is poorly protected and its distortion dominates the overall decoding performance.

From the CRLB analysis, we realize that the bottleneck of the baker’s analog code lies in the weak protection of *y*_{0}. Thus, to improve the baker’s map code, effective protection should also be given to *y*_{0}.
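The unequal protection is easy to quantify numerically. Under the affine model with |*a*_{k}|=2^{k} and |*c*_{k}|=2^{−k} (our reconstruction of the coefficients), the Fisher informations are simple geometric sums:

```python
import numpy as np

def crlbs(N, sigma):
    """CRLB_x0 = sigma^2 / sum(4^k) and CRLB_y0 = sigma^2 / sum(4^-k),
    k = 0..N-1, i.e. 3 sigma^2/(4^N - 1) versus roughly 3 sigma^2/4."""
    a2 = 4.0 ** np.arange(N)           # a_k^2 = 4^k  (stretching branch)
    c2 = 4.0 ** (-np.arange(N))        # c_k^2 = 4^-k (compressing branch)
    return sigma ** 2 / a2.sum(), sigma ** 2 / c2.sum()
```

For *N*=5 the bound on *x*_{0} is roughly 3*σ*^{2}/4^{5} while the bound on *y*_{0} has already saturated near 3*σ*^{2}/4 — exactly the imbalance the mirrored and single-input variants below are designed to remove.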

## 3 Improvement I—mirrored baker’s analog code

As analyzed in the last section, the unsatisfying performance of the original baker’s map lies in the poor protection of *y*_{0}. To enhance the protection of *y*_{0}, a natural idea is to perform a second original baker’s map encoding with the roles of *x*_{0} and *y*_{0} switched, so that both *x*_{0} and *y*_{0} obtain balanced and effective protection. This idea leads to the improvement scheme discussed in this section—the mirrored baker’s dynamic coding system. The mirrored baker’s structure comprises two branches: the first branch is the original baker’s encoder, and the second branch exchanges the roles of *x*_{0} and *y*_{0} to perform the original baker’s encoding a second time. For a given *N*, the mirrored baker’s system forms a (4*N*,2) analog code.

In the first branch, *x*_{0} is tent map encoded, and so is *y*_{0} in the second branch. The codewords associated with *x*_{0} and *y*_{0} of the two branches are denoted as {*x*_{1},*y*_{1}} and {*x*_{2},*y*_{2}}, respectively, with their corresponding noisy observations as {*r*_{1,x},*r*_{1,y}} and {*r*_{2,x},*r*_{2,y}}, respectively. The encoding procedure is expressed as

where *x*_{1,0}=*x*_{2,0}=*x*_{0} and *y*_{1,0}=*y*_{2,0}=*y*_{0}. The observations are represented as

The decoding exploits the itineraries *s*_{1} and *s*_{2} from the first and second branches, respectively, which together compose the entire itinerary of the mirrored baker’s system. As previously discussed, **s**_{1} indicates a partition of the feasible domain of *x*_{0}; so does **s**_{2} for *y*_{0}. The entire feasible domain of the source pair (*x*_{0},*y*_{0}), which is a 2×2 square centered at the origin of the plane, is uniformly divided into 2^{(2N−2)} cells, each cell being a tiny square with edge length 2^{−(N−2)}. Assuming that the source (*x*_{0},*y*_{0}) is known to lie in some specific cell, the itineraries *s*_{1} and *s*_{2} can be determined and the codewords can be expressed as affine functions:

for *k*=0,1,⋯,*N*−2. The parameters \(\phantom {\dot {i}\!}\{a_{1,k,{s}_{1}},b_{1,k,{s}_{1}}, c_{1,k,{s}_{1}}, d_{1,k,{s}_{1}}\}\) and \(\phantom {\dot {i}\!}\{a_{2,k,{s}_{2}},b_{2,k,{s}_{2}},c_{2,k,{s}_{2}}, d_{2,k,{s}_{2}}\}\) are for the first and the second branches, respectively, and can be determined recursively for *k*=0,⋯,*N*−2 as follows:

We denote \(\phantom {\dot {i}\!}{a}_{j,{s}_{j}}=[a_{j,0,{s}_{j}},a_{j,1,{s}_{j}},\cdots,a_{j,N-1,{s}_{j}} ]^{T}\), *j*=1,2 and define \(\phantom {\dot {i}\!}{b}_{j,{s}_{j}},{c}_{j,{s}_{j}}\) and \(\phantom {\dot {i}\!}{d}_{j,{s}_{j}}\) in the same way for *j*=1,2.

For a specific itinerary pair {*s*_{1},*s*_{2}}, we denote the projections of its admissible cell onto the *x*_{0} and *y*_{0} feasible domains by \(\phantom {\dot {i}\!}C_{\mathbf {s}_{1}}\) and \(\phantom {\dot {i}\!}C_{\mathbf {s}_{2}}\), respectively, i.e.,

Next, we discuss decoding methods for the mirrored baker’s dynamic system. These decoding methods are obtained by straightforwardly extending the results for the original baker’s system. In the following, main results are provided with details omitted.

### 3.1 ML decoding

The ML decoding problem again takes a two-layer form: an outer search over the itinerary pair and an inner constrained least-squares problem (Eq. (34)). For each itinerary pair {*s*_{1},*s*_{2}}, the optimal solution of the inner minimization is given as

The ML estimate is given by selecting, among the different itinerary pairs {*s*_{1},*s*_{2}}, the pair \(\left (x_{0,{s}_{1},{s}_{2}}^{\text {inner}}, y_{0,{s}_{1},{s}_{2}}^{\text {inner}}\right)\) which minimizes the outer objective in (34).

### 3.2 MMSE decoding

Unlike in the original baker’s system, *y*_{0} now also contributes to the itinerary, so the integration over *y*_{0} should be decomposed into parts over the different \(\phantom {\dot {i}\!}C_{{s}_{2}}\)’s. The MMSE estimate of *x*_{0} can be given as

Similarly, the MMSE estimate of *y*_{0} for the mirrored baker’s map code is given as follows:

### 3.3 ML-MMSE decoding

Once the itinerary pair {*s*_{1},*s*_{2}} is given, the codewords of the mirrored baker’s map system can be represented as affine functions of the original source (*x*_{0},*y*_{0}). The corresponding coefficients can be determined recursively by using Eqs. (31) and (32). Thus, the mirrored baker’s dynamic system can be rewritten as follows:

The Wiener filtering then gives the linear MMSE estimate of {*x*_{0},*y*_{0}} as follows:

Then, a limiting procedure is performed to obtain admissible decoding results.

## 4 Improvement II—single-input (1-D) baker’s analog code

Inspired by the performance analysis in Section 2.4, to enhance the original baker’s map performance, effective protection must be applied equally to all sources. Besides the mirrored structure proposed in the last section, here we propose an alternative strategy: feeding the *y*-sequence with the input *x*_{0}, which forms a single-input (1-D) baker’s analog code. By feeding both inputs of the original baker’s map with the one source *x*_{0}, the problem of poor protection of *y*_{0} vanishes and the protection of *x*_{0} is enhanced. In other words, the protection of all sources is equal and strengthened. Furthermore, another inconspicuous yet profound motivation for this 1-D scheme is that it performs a hidden repetition coding of the itinerary, which is explained in full detail as follows.

In the original baker’s system, the *y*-sequence does not help to protect the itinerary, since each of its signals is uncorrelated with *x*_{0}. Recall that the codeword of the *y*-sequence of the baker’s system is generated by the inverse tent map function using the sign sequence from the *x*-sequence. By feeding the *y*-sequence with *x*_{0}, we have \(y_{1}=G_{\mathsf {sign}(x_{0})}^{-1}(x_{0})\); equivalently, *x*_{0}=*G*(*y*_{1}). So *y*_{1} can actually be regarded as the state immediately before *x*_{0} in the tent dynamic system, which we denote as *x*_{−1}. In the same manner, we can regard *y*_{i} as the immediate previous state of *y*_{i−1} in a tent map dynamic sequence for *i*=2,⋯,*N*−1. Thus, by rewriting the *y*-sequence signals as \(\{y_{N-1}, y_{N-2}, \cdots, y_{0}\}\triangleq \{x_{-(N-1)},x_{-(N-2)},\cdots, x_{0}\}\) and concatenating them with the *x*-sequence signals, we actually obtain a long tent map analog code (except that there are two copies of *x*_{0} here). Moreover, this equivalent tent map sequence has a special pattern: the first half of the itinerary is reversely identical to the second half. In other words, the 1-D baker’s analog code actually constructs a hidden *repetition code* for the itinerary sequence. Both the *x*- and *y*-sequences now become analog “parity bits” of the itinerary. This interesting alternative view of the 1-D baker’s dynamic system is illustrated in Fig. 4.

Next, sticking to the notation introduced above for the baker’s system, we present the decoding results for this one-dimensional baker’s analog code.

### 4.1 ML decoding scheme

For each itinerary *s*, the optimal solution to the inner minimization, \(x_{0,{s}}^{*}\), is obtained by

The ML estimate is obtained by going over all possible itineraries and selecting the \(x_{0,{s}}^{\text {inner}}\) which minimizes the likelihood function.

### 4.2 MMSE decoding scheme

### 4.3 ML-MMSE decoding scheme

Following the mixed ML-MMSE procedure of Section 2.3, ML itinerary detection followed by Wiener filtering gives the linear MMSE estimate of *x*_{0} as

## 5 Simulation results and discussions

In the figures, *E*_{u} represents the average power of each source signal and *N*_{0} denotes the unilateral power spectral density. In our experiments, the source signals are independent and uniformly distributed over [−1,+1]. For each coding system, codes with *N*=3 and *N*=5 are tested. The associated CRLBs (determined explicitly in Eq. (48)) and the uncoded performance are plotted as benchmarks. The numerical results verify the validity of the decoding algorithms developed in the previous sections and show that both the mirrored and the single-input structures improve the MSE performance of the original baker’s coding system.

Generally, the distortion of analog transmission systems can be decomposed into two parts [2]: anomalous distortion and weak distortion. Weak distortion, stemming from the channel noise, can become very small and close to zero as long as the channel noise is sufficiently small. As analyzed in [13], to reduce the estimation distortion, the transmitted signal must be stretched as much as possible, which can intuitively be seen as “amplifying” the signal. However, due to the transmission power constraint, transmitted signals have to be bounded, so the stretching cannot be arbitrarily extensive without folding; the stretched signal will thus have multiple folds. The ML decoder projects the received signal onto the valid codeword with minimum Euclidean distance, and projection onto an erroneous fold results in an anomalous distortion, which introduces a rather notable estimation error. In practical code design, the weak distortion and the anomalous distortion are two competing aspects—lengthening the codeword curve relieves the weak distortion but inevitably introduces more folds and a narrower space between folds, and hence a higher chance of anomalous distortion; likewise, shortening the codeword curve reduces the chance of anomalous distortion but increases the weak distortion. The key is to strike the best balance between these competing factors.

Specifically, the weak error can be accurately characterized by the CRLB, and the anomalous error can be roughly indicated by the BER.

The CRLB for *x*_{0} and *y*_{0} of the mirrored baker’s system, which is also the CRLB of the single-input baker’s code, is given in the following, together with the CRLB of the tent map code of the same code rate (bandwidth expansion 2*N*). Comparing the two bounds shows that under equal bandwidth expansion (or code rate), the tent map system will always have a lower weak distortion.

We next examine the itinerary protection of the codes with *N*=5, each of which has an itinerary length of 4. The BER of each itinerary bit for the different systems is illustrated in the sub-figures of Fig. 8. It should be noted that in Fig. 8, the tent map code has a code rate of 1/5 while the mirrored baker’s and single-input baker’s systems have a code rate of 1/10. The BER performance for the first four itinerary bits of the tent map system with rate 1/10 is even worse than that of the tent map code with rate 1/5.

As Fig. 8 shows, the mirrored baker’s map code and the single-input baker’s map code have an obvious advantage in itinerary BER performance. The mirrored structure exhibits equal protection for the different itinerary bits, and its BER decays with a steeper slope than that of the tent map code. Comparatively, the single-input baker’s system presents unequal protection of the different itinerary bits: the BER of itinerary bits with smaller indices decays much faster than that of bits with larger indices. Since errors in itinerary bits with smaller indices cause more serious distortion, the single-input baker’s system performs a clever unequal protection of the itinerary bits, adapted to their significance. This also explains the single-input baker’s map code’s performance advantage over the mirrored baker’s map code in the medium SNR range.

From the above comparison, it can be seen that although the improved baker’s analog codes have larger weak distortion than the tent map code, their anomalous distortion has been effectively suppressed. The modified baker’s map codes achieve a better balance between the protection against two kinds of distortion and consequently outperform the tent map code in a wide SNR range.

We also test a Gaussian source truncated to [−1,+1], with the standard deviation chosen such that 3*σ*=1, so that 99.7 *%* of the probability mass falls in the region [−1,+1]. The signal value is set to +1 if it exceeds +1 and to −1 if it drops below −1. We performed mirrored baker’s coding on this truncated Gaussian source, and the results are shown in Fig. 9. It should be noted that in the figure, the OPTA bound is calculated with the true Gaussian source (the only source that is analytically tractable). Since the simulated coding schemes use a truncated Gaussian source, we see a small discrepancy, and the baker’s code actually appears to slightly outperform the OPTA in the low SNR region. At the same time, we also plot the series of Shannon-Kotel’nikov spirals with parameters optimized for different channel SNRs (Fig. 9 in [12]). It should be noted that the MSE performance of the mirrored baker’s code and the Shannon-Kotel’nikov spirals in Fig. 9 is obtained with the ML method, which can be further improved by the MMSE method according to [14] and our previous discussion.

The advantage of the parameterized Shannon-Kotel’nikov spiral curve approach is that by optimizing the parameters with respect to the source distribution and the channel condition, the performance of the code can be made within some 5 dB from the OPTA [12]. The cost, however, is that one must know the exact source distribution and the accurate SNR information. As shown in Fig. 9, each curve represents a Shannon-Kotel’nikov spiral with its parameter optimized towards one specific channel SNR. Every time the channel condition changes (i.e., a different SNR), the parameter(s) must be adapted or the code will suffer from a quick performance deterioration due to channel mismatch.

The proposed baker’s analog codes require neither knowledge of the source distribution nor of the channel SNR in order to perform encoding and ML decoding. Instead of designing a sequence of codes, one optimized for each channel SNR as in [12], our approach uses a single code over a wide SNR range. Figure 9 shows that our proposal’s SDR (in dB) has the same high-SNR slope, i.e., diversity, as the optimized Shannon-Kotel’nikov spirals. The improved baker’s analog codes universally outperform the Shannon-Kotel’nikov spirals optimized for low channel SNR and have an obvious advantage over all Shannon-Kotel’nikov spirals in the low SNR range. Additionally, the ML decoding algorithm of our proposed chaotic analog codes has a simple closed-form expression, which is absent for the spiral codes.

We compare the following five schemes under equal power and bandwidth:

- 1.
Analog: a (6,2) analog code is obtained from the mirrored baker’s code with *N*=2 by puncturing the systematic signals (*y*_{0},*x*_{0}) of the second branch. Assuming that codewords are transmitted in in-phase and quadrature form (which can be regarded as *∞*-QAM modulation), the system has a bandwidth expansion of 3/2. - 2.
Digital-EEP: 8-bit quantization, a (3072,2048,2/3) turbo code, and 256-QAM are used. The system bandwidth expansion is 3/2.

- 3.
Digital-UEP1: 8-bit quantization is performed. The four least significant bits (LSB) are left uncoded. The four most significant bits (MSB) are encoded by (4096,2048,1/2) turbo code. Both the coded and uncoded bits are 256-QAM modulated. System bandwidth is 3/2.

- 4.
Digital-UEP2: 8-bit quantization is performed. The two LBS are uncoded. The six MSB are encoded by (3410,2046,3/5) turbo code. All bits are 256-QAM modulated. System bandwidth is 3/2.

- 5.
Digital-UEP3: 8-bit quantization is performed. The four LSB are uncoded. The four MSB are encoded by (2560,2048,4/5) turbo code. The coded and uncoded bits go through 64-QAM modulation. System bandwidth is 3/2.
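The 3/2 bandwidth-expansion figures quoted for the digital schemes can be double-checked with simple arithmetic: each scheme maps a batch of source samples to (coded + uncoded) bits and then to QAM symbols. The per-batch sample counts below (256, 512, 341) are inferred from the 2048/2046 information bits and the 8-bit quantization; they are not stated explicitly in the text.

```python
def bandwidth_expansion(samples, coded_bits, uncoded_bits, bits_per_symbol):
    """Channel symbols per source sample for a quantize-and-code scheme."""
    symbols = (coded_bits + uncoded_bits) / bits_per_symbol
    return symbols / samples

# Digital-EEP:  256 samples * 8 bits = 2048 info bits -> 3072 coded bits, 256-QAM (8 b/sym)
print(bandwidth_expansion(256, 3072, 0, 8))        # 1.5
# Digital-UEP1: 512 samples, 4 MSBs coded (2048 -> 4096), 4 LSBs uncoded, 256-QAM
print(bandwidth_expansion(512, 4096, 4 * 512, 8))  # 1.5
# Digital-UEP2: 341 samples, 6 MSBs coded (2046 -> 3410), 2 LSBs uncoded, 256-QAM
print(bandwidth_expansion(341, 3410, 2 * 341, 8))  # 1.5
# Digital-UEP3: 512 samples, 4 MSBs coded (2048 -> 2560), 4 LSBs uncoded, 64-QAM (6 b/sym)
print(bandwidth_expansion(512, 2560, 4 * 512, 6))  # 1.5
```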

*E*_{ u }/*N*_{ o } range, which is due to the fact that the digital error correction codes’ performance improves drastically within a very narrow SNR range (the so-called waterfall region). The digital codes’ resilience to noise, although powerful, is ultimately limited by the quantization noise. Comparatively, analog coding schemes have a very graceful performance evolution, and their distortion can be made arbitrarily small if the channel is sufficiently good.

## 6 Conclusions

This paper introduces a family of pure analog chaotic dynamic encoding schemes based on the baker’s map function. We first discuss the coding scheme built on the original baker’s map function, including its encoding and decoding procedures. Mean square error analysis indicates that the intrinsically unbalanced protection of its inputs results in unsatisfying performance. Based on this, two improved encoding schemes are proposed: the mirrored baker’s and the single-input baker’s systems. These two schemes provide sufficient protection to all encoded analog sources. The various decoding methods for the original baker’s coding system are extended to the modified systems. Compared to the classical tent map analog code, the improved baker’s map encoding schemes achieve a better balance between the anomalous and weak distortions and perform advantageously over a wide practical SNR range. Moreover, our improved encoding schemes exhibit competitive or even better performance than the classical analog joint source-channel coding scheme, especially in the low SNR range, while maintaining much lower decoding complexity. We also compare the analog system with conventional digital systems that use turbo codes to transmit analog source signals. The digital systems suffer from granularity noise due to quantization, large decoding latency, and the threshold effect. Comparatively, the analog coding scheme degrades gracefully and outperforms the digital ones over a wide SNR region.

## 7 Appendix

In this appendix, we provide detailed proof of the closed-form solution of the MMSE decoder for the original baker’s map code in (19).

The MMSE estimate of *x*_{0} can be given as a conditional expectation, where *x*_{0} and *y*_{0} are independently and uniformly distributed over the range [−1,+1]. To proceed with this derivation, we introduce some intermediate parameters.

The density *f*(*r*_{ x },*r*_{ y }) still needs to be determined; it can be calculated in terms of *I*_{2}(*s*) and *I*_{4}(*s*), which are defined in (58) and (62), respectively. We further introduce additional notations, where *Q*(·) is the well-known Gaussian Q-function.

The quantities *I*_{1}(*s*), *I*_{2}(*s*), *I*_{3}(*s*), and *I*_{4}(*s*) defined previously can then be expressed by use of the notations in (17), which yields the closed-form MMSE estimates of *x*_{0} and *y*_{0}. This completes the proof.
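For numerical evaluation, the Gaussian Q-function appearing in the decoder can be computed from the complementary error function via the standard identity Q(x) = ½ erfc(x/√2). A minimal sketch:

```python
import math

def gaussian_q(x):
    """Gaussian Q-function: Q(x) = P(Z > x) for Z ~ N(0, 1).

    Uses the standard identity Q(x) = 0.5 * erfc(x / sqrt(2)),
    which avoids the catastrophic cancellation of 1 - Phi(x)
    for large x.
    """
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(gaussian_q(0.0))  # 0.5
```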

## 8 Endnote

^{1} Here we use the term partition loosely, since every two adjacent cells overlap at their common endpoints. This does not harm the decoding procedure.

## Declarations

### Acknowledgements

This work is supported by the National Science Foundation under Grant Nos. 0928092, 1133027, and 1343372.

## References

- CE Shannon, Communication in the presence of noise. Proceedings of the IRE. 37(1), 10–21 (1949).
- VA Kotel’nikov, *The Theory of Optimum Noise Immunity* (McGraw-Hill, New York, NY, USA, 1959).
- JM Wozencraft, *Principles of Communication Engineering* (John Wiley & Sons, Hoboken, New Jersey, USA, 1965).
- TJ Goblick, Theoretical limitations on the transmission of data from analog sources. IEEE Trans. Inform. Theory. 11, 558–566 (1965).
- J Ziv, The behavior of analog communication systems. IEEE Trans. Inform. Theory. 16, 587–594 (1970).
- KH Lee, DP Petersen, Optimal linear coding for vector channels. IEEE Trans. Commun. 24, 1283–1290 (1976).
- Y Liu, J Li, K Xie, in *46th Annual Conference on Information Sciences and Systems (CISS)*. Analysis of linear channel codes with continuous code space (Princeton, USA, 2012).
- A Fuldseth, Robust subband video compression for noisy channels with multilevel signaling. Dissertation, Norwegian University of Science and Technology (1997).
- S-Y Chung, On the construction of some capacity-approaching coding schemes. Dissertation, Massachusetts Institute of Technology (2000).
- V Vaishampayan, SIR Costa, Curves on a sphere, shift-map dynamics, and error control for continuous alphabet sources. IEEE Trans. Inform. Theory. 47, 1658–1672 (2003).
- X Cai, JW Modestino, in *40th Annual Conference on Information Sciences and Systems (CISS)*. Bandwidth expansion Shannon mapping for analog error-control coding (Princeton, USA, 2006).
- F Hekland, PA Floor, TA Ramstad, Shannon-Kotel’nikov mappings in joint source-channel coding. IEEE Trans. Commun. 57, 94–105 (2009).
- PA Floor, TA Ramstad, Shannon-Kotel’nikov mappings for analog point-to-point communications. http://arxiv.org/abs/0904.1538.
- Y Hu, J Garcia-Frias, M Lamarca, Analog joint source-channel coding using non-linear curves and MMSE decoding. IEEE Trans. Commun. 59, 3016–3026 (2011).
- G Brante, RD Souza, J Garcia-Frias, Spatial diversity using analog joint source channel coding in wireless channels. IEEE Trans. Commun. 61, 301–311 (2013).
- HC Papadopoulos, GW Wornell, Maximum likelihood estimation of a class of chaotic signals. IEEE Trans. Inform. Theory. 41, 312–317 (1995).
- B Chen, GW Wornell, Analog error-correcting codes based on chaotic dynamical systems. IEEE Trans. Commun. 46, 881–890 (1998).
- I Hen, N Merhav, On the threshold effect in the estimation of chaotic sequences. IEEE Trans. Inform. Theory. 50, 2894–2904 (2004).
- SM Kay, Asymptotic maximum likelihood estimator performance for chaotic signals in noise. IEEE Trans. Signal Process. 43, 1009–1012 (1995).
- I Rosenhouse, AJ Weiss, Combined analog and digital error-correcting codes for analog information sources. IEEE Trans. Commun. 55, 2073–2083 (2007).
- SM Kay, *Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory* (Prentice Hall, Upper Saddle River, New Jersey, USA, 1993).
- S Haykin, *Adaptive Filter Theory*, 4th edn. (Prentice Hall, Upper Saddle River, New Jersey, USA, 2002).
- HV Poor, *An Introduction to Signal Detection and Estimation*, 2nd edn. (Springer, New York, NY, USA, 1998).
- TM Cover, JA Thomas, *Elements of Information Theory* (John Wiley & Sons, Hoboken, New Jersey, USA, 1991).

## Copyright

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.