
A family of chaotic pure analog coding schemes based on baker’s map function

Abstract

This paper considers a family of pure analog coding schemes constructed from dynamic systems governed by chaotic functions—the baker’s map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, are developed for these novel encoding schemes. The proposed mirrored baker’s and single-input baker’s analog codes provide balanced protection against the fold error (large distortion) and the weak distortion and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system offers graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms its digital counterparts over a wide signal-to-noise ratio (SNR) range.

1 Introduction

Pervasive communication systems in practice today are almost exclusively digital. Shannon’s source-channel separation theorem has long convinced people that information can be transmitted without loss of optimality by a two-step procedure: compression and encoding. This fundamental result has laid the foundation for the typical structure of modern digital communication systems—the tandem structure of source coding followed by channel coding. Although digital communication systems have been well developed over the last decades, they have inherent drawbacks. First, to transmit continuous-alphabet sources, signals are quantized, which introduces a permanent loss of information. Second, to precisely represent real-valued signals via digits, the bandwidth is usually expanded, and the subsequent channel coding procedure makes transmission even more bandwidth demanding. Third, digital error correction codes are highly signal-to-noise-ratio (SNR) dependent. Take turbo and low-density parity-check (LDPC) codes as examples. When the receiving SNR is below some threshold value, the decoding performance is usually very poor; once the SNR exceeds this threshold, the bit error ratio (BER) drops drastically within a narrow SNR range (the waterfall region). This ungraceful degradation in performance can cause problems in applications. A typical scenario is the broadcasting system, where the SNR of different receivers can vary over a large range. At the same time, digital error correction codes are not energy efficient, since additional transmission power improves performance only slightly once the receiving SNR is modestly above the threshold. Last but not least, digital error correction codes with satisfying performance usually require a long block length, which introduces high latency for decoding and processing at the receiver.

In addition to the classical source-channel-separate digital system, the analog transmission system can serve as an alternative solution for data transmission. The analog system has advantages over its pure digital peers—it does not introduce granularity noise, and its performance evolves gracefully with SNR. Most of the analog transmission systems presented in the literature are joint source-channel coding (JSCC) systems, where compression and encoding are performed in one step and signals are in pure analog or hybrid digital-analog (HDA) form. The study of analog communication dates back to [1–4]. Reference [4] shows that direct transmission of a Gaussian source over an additive white Gaussian noise (AWGN) channel with no bandwidth expansion or compression is optimal. For the bandwidth expansion case, [5] shows that the mean square error (MSE) cannot decay faster than the inverse square of the SNR; however, no practical scheme achieving this decay rate has been found to date. In [6, 7], optimal linear analog codes are treated. The design of practical nonlinear analog coding schemes has always been an open issue, and some interesting paradigms have been found. Fuldseth [8] and Chung [9] discuss numerical-based analog signal encoding schemes. Vaishampayan and Costa [10] propose a class of analog dynamic systems constructed from first-order differential equations, which generate algebraic analog codes on the torus or sphere. Cai and Modestino [11], Hekland et al. [12], and Floor and Ramstad [13] study the design of the Shannon-Kotel’nikov curve. The minimum mean square error decoding schemes for the Shannon-Kotel’nikov analog codes and their modified version combined with hybrid digital signals are discussed in [14, 15].

Among the family of analog coding schemes, one special class is constructed through chaotic dynamic systems. In a dynamic system, the signal sequence is generated by iteratively invoking some predefined mapping function: the next signal (state) is obtained by applying the mapping to the current signal (state), and the whole sequence is initialized by the input signal. For a chaotic dynamic system, the function governing the signal generation (state transition) is a chaotic function. Chaotic functions are characterized by their fast divergence, better known as the remarkable butterfly effect: even a very tiny difference in initial inputs soon results in significantly different signal sequences. From the signal space expansion viewpoint, this indicates that a pair of points with small distance in the source space will have a large distance in the code space, so chaotic dynamic systems can potentially endow signals with error resistance. The seminal work [16] proposes an analog system based on the tent map dynamic system, and its performance is extensively discussed in [17]. As the analyses in [18, 19] show, the drawback of the tent map code is that its performance converges to the Cramer-Rao lower bound (CRLB) only at very high SNR. Rosenhouse and Weiss [20] propose an improvement scheme that protects the itinerary of the tent map codes with digital error correction codes. However, this hybrid digital-analog scheme still suffers from the drawbacks rooted in digital error correction codes.
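As an aside, the butterfly effect just described is easy to observe numerically. The following minimal Python sketch (with illustrative names of our own; the symmetric tent map G(x) = 1 − 2|x| of [16] is formally defined in (2) below) iterates two nearby inputs and shows the gap roughly doubling per step:

```python
import numpy as np

def tent_trajectory(x0, n):
    """Iterate the symmetric tent map G(x) = 1 - 2|x| for n steps."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(1.0 - 2.0 * abs(xs[-1]))
    return np.array(xs)

# Two inputs differing by 1e-6 diverge to an O(1) gap within ~20 iterations;
# the gap roughly doubles each step since |G'(x)| = 2 almost everywhere.
gap = np.abs(tent_trajectory(0.3, 20) - tent_trajectory(0.3 + 1e-6, 20))
print(gap[:5], gap[-1])
```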

In this paper, we focus on a new pure analog chaotic dynamic encoding scheme, which is constructed via a two-dimensional chaotic function—baker’s map. This structure is closely related to and more complicated than the one reported in [16]. The specific contributions of this paper include the following: we develop various decoding methods for the baker’s coding system and analyze its MSE performance. Based on that, we proceed to propose two improved coding structures and extend various decoding methods to these new structures. These proposed improvements effectively balance the protection for all source signals and have more satisfying MSE performance compared to the tent map code. We also compare our proposed analog coding scheme with the classical source-channel separate digital coding scheme, where turbo code is applied. By using equal power and bandwidth, our proposed coding scheme outperforms the digital turbo scheme over a wide SNR range.

This paper is organized as follows: in Section 2, the original baker’s dynamic system is discussed, including its encoding structure, decoding methods, and performance analysis. Two modified chaotic systems based on the baker’s system are discussed in Sections 3 and 4, including their encoding and decoding schemes. In Section 5, numerical results are presented and performance is discussed. Section 6 concludes the paper.

In this paper, we assume that the source signals are mutually independent and uniformly distributed on the interval [−1,1], an assumption also adopted in the previous works [16] and [17]. Under this assumption, the MMSE decoding method has a closed-form solution, and we can compare performance with previous works. It should be pointed out, however, that the maximum likelihood (ML) decoding method does not require this condition and is applicable to signals with arbitrary distributions. We assume that the transmission channel is AWGN; the decoding methods obtained can be easily extended to block fading channels.

2 The baker’s map analog coding scheme

In this section, we introduce the analog encoding scheme based on the baker’s map function. The baker’s map function, \(F: [-1,1]^{2}\rightarrow[-1,1]^{2}\), is a piecewise-linear chaotic function given as follows:

$$\begin{array}{*{20}l}{} \left[ \begin{array}{c} x \\ y \end{array} \right]= F(u,v)= \left[ \begin{array}{c} 1-2\mathsf{sign}(u)u\\ \frac{1}{2}\mathsf{sign}(u)(1-v) \end{array} \right], \ \ \ -1\leq u,v \leq 1. \end{array} $$
((1))

The above baker’s map has a close connection with the symmetric tent map function discussed in [16] and [17], which is defined as

$$\begin{array}{*{20}l} G(x)=1-2|x|,\ \ \ \ -1\leq x\leq 1. \end{array} $$
((2))

Although the symmetric tent map in (2) is non-invertible, its input can be recovered once the sign of x is given. The “inverse” symmetric tent map with the sign s of x given is \(G_{s}^{-1}(y)=s\frac {1-y}{2}\). Comparing the two functions, the baker’s map can alternatively be defined via the symmetric tent map as follows:

$$\begin{array}{*{20}l} \left[ \begin{array}{c} x \\ y \end{array} \right]= F(u,v)= \left[ \begin{array}{c} G(u) \\ G_{\mathsf{sign}(u)}^{-1}(v) \end{array} \right], \ \ \ -1 \leq u, v \leq 1. \end{array} $$
((3))

Based on the baker’s map function above, a dynamic analog encoding scheme can be performed. For a pair of independent sources \((x_{0},y_{0})\in[-1,1]^{2}\), a chaotic signal sequence is generated by repeatedly invoking the baker’s map, i.e.,

$$\begin{array}{*{20}l} \left[ \begin{array}{c} x_{n+1} \\ y_{n+1} \end{array} \right]= F(x_{n},y_{n}), \ \ \ n=0, 1, \cdots, N-2, \end{array} $$
((4))

where N is the bandwidth expansion. This sequence can be viewed as a rate-1/N analog code with \(x_{0}\) and \(y_{0}\) as continuous information “bits”. In the following, we use \({x}=[x_{0}, x_{1}, \cdots, x_{N-1}]^{T}\) and \({y}=[y_{0}, y_{1}, \cdots, y_{N-1}]^{T}\) to denote the codewords of the two input signals, respectively.
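The encoding rule (1)/(4) is compact enough to state as a minimal sketch. In the following snippet, the function and variable names are ours, and the sign convention at \(x_{n}=0\) is an assumption:

```python
import numpy as np

def bakers_encode(x0, y0, N):
    """Rate-1/N analog code: iterate the baker's map (1) from (x0, y0)."""
    x, y = np.empty(N), np.empty(N)
    x[0], y[0] = x0, y0
    for n in range(N - 1):
        s = 1.0 if x[n] >= 0 else -1.0      # sign(x_n); value at 0 assumed +1
        x[n + 1] = 1.0 - 2.0 * s * x[n]     # x_{n+1} = G(x_n) = 1 - 2|x_n|
        y[n + 1] = 0.5 * s * (1.0 - y[n])   # y_{n+1} = G_{sign(x_n)}^{-1}(y_n)
    return x, y

x, y = bakers_encode(0.3, -0.7, N=5)        # the two length-5 codewords
```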

An important concept for the baker’s dynamic encoding system is the itinerary, defined as \({s}=[s_{0}, s_{1},\cdots, s_{N-2}]\triangleq [\mathsf {sign}(x_{0}), \mathsf {sign}(x_{1}), \cdots, \mathsf {sign}(x_{N- 2})]\). In fact, if the itinerary of the code sequence is given, the \(x_{k}\)’s and \(y_{k}\)’s can all be expressed as affine functions of \(x_{0}\) and \(y_{0}\). Specifically, \(x_{k}\) and \(y_{k}\) can be represented via \((x_{0},y_{0})\) in the following form:

$$\begin{array}{*{20}l} \left\{ \begin{array}{ll} x_{k,{s}}(x_{0},y_{0})&=a_{k,{s}}x_{0}+b_{k,{s}}, \\ y_{k,{s}}(x_{0},y_{0})&=c_{k,{s}}y_{0}+d_{k,{s}}. \end{array} \right.\ \ \ \ k=0, 1, \cdots, N-1. \end{array} $$
((5))

The affine parameters in (5) are functions of itinerary s. For a specific s, they can be obtained in the following recursive way:

$$\begin{array}{*{20}l} \left\{ \begin{array}{ll} a_{k+1,{s}}&=-2s_{k}a_{k,{s}}, \\ b_{k+1,{s}}&=1-2s_{k}b_{k,{s}}, \\ c_{k+1,{s}}&=-\frac{1}{2}s_{k}c_{k,{s}}, \\ d_{k+1,{s}}&=\frac{1}{2}s_{k}(1-d_{k,{s}}), \end{array} \right.\ \ \ \ k=0, \cdots, N-2, \end{array} $$
((6))

with the starting point

$$\begin{array}{*{20}l} \left\{ \begin{array}{ll} a_{0,{s}}&=1, \\ b_{0,{s}}&=0, \\ c_{0,{s}}&=1, \\ d_{0,{s}}&=0. \end{array} \right. \end{array} $$
((7))

In fact, the collection of \(2^{N-1}\) itineraries maps one-to-one onto a partition of the feasible space of \(x_{0}\), i.e., the segment [−1,+1]. The itinerary s is a function of the input \(x_{0}\). For any specific itinerary s, the admissible values of \(x_{0}\) fall in a segment of length \(1/2^{N-2}\), which is called a cell and denoted as \(C_{\mathbf {s}}\triangleq [e_{l,{s}}, e_{u,{s}}]\). The two endpoints \(e_{l,{s}}\) and \(e_{u,{s}}\) of the cell associated with s are determined as

$$\begin{array}{*{20}l} \left\{ \begin{array}{ll} e_{l,{s}}&=\min\left\{\frac{-b_{N-1,{s}}+1}{a_{N-1,{s}}}, \frac{-b_{N-1,{s}}-1}{a_{N-1,{s}}}\right\}, \\ e_{u,{s}}&=\max\left\{\frac{-b_{N-1,{s}}+1}{a_{N-1,{s}}}, \frac{-b_{N-1,{s}}-1}{a_{N-1,{s}}}\right\}. \end{array} \right. \end{array} $$
((8))
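The recursion (6)–(7) and the endpoints (8) translate directly into code. The following minimal sketch (illustrative names) computes the affine parameters and the cell for a given itinerary; it is reused by the decoder sketches below:

```python
import numpy as np

def affine_params(s):
    """Recursion (6)-(7): affine coefficients for an itinerary s of +/-1's."""
    a, b, c, d = [1.0], [0.0], [1.0], [0.0]
    for sk in s:
        a.append(-2.0 * sk * a[-1])          # a_{k+1} = -2 s_k a_k
        b.append(1.0 - 2.0 * sk * b[-1])     # b_{k+1} = 1 - 2 s_k b_k
        c.append(-0.5 * sk * c[-1])          # c_{k+1} = -(1/2) s_k c_k
        d.append(0.5 * sk * (1.0 - d[-1]))   # d_{k+1} = (1/2) s_k (1 - d_k)
    return np.array(a), np.array(b), np.array(c), np.array(d)

def cell_endpoints(a, b):
    """Cell C_s = [e_l, e_u] from (8), using a_{N-1,s} and b_{N-1,s}."""
    p = (-b[-1] + 1.0) / a[-1]
    q = (-b[-1] - 1.0) / a[-1]
    return min(p, q), max(p, q)
```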

This concept is illustrated in Fig. 1. In the left part of Fig. 1 a, when N=2, the itinerary has 1 bit, i.e., \(s\in\{+1,-1\}\). The two corresponding cells are respectively the left and right halves of the segment [−1,+1]. This concept is extended to length N=n+1 in the right part of Fig. 1 a, where \(G^{(n)}(x)\) denotes the n-fold composition of G(·). In fact, once the itinerary \({s}_{j}\) is given, the endpoints \(e_{l,\mathbf {s}_{j}}\) and \(e_{u,\mathbf {s}_{j}}\) of the cell and the affine parameters \(\left \{a_{k,\mathbf {s}_{j}},b_{k,\mathbf {s}_{j}},c_{k,\mathbf {s}_{j}},d_{k,\mathbf {s}_{j}}\right \}\) can all be determined as functions of \({s}_{j}\), as shown in Fig. 1 b.

Fig. 1
figure 1

Partition and itinerary. a When N = 2, the itinerary s has just 1 bit, +1 or −1 (left); for general N = n + 1, the itinerary s has \(2^{n}\) patterns (right). Each specific pattern \({s}_{j}\) corresponds to one segment (cell) \(C_{\mathbf {s}_{j}}\) of the feasible region. b The feasible region \([-1,+1]^{2}\) is partitioned into \(2^{N-1}\) cells, with each cell \(C_{\mathbf {s}_{j}}\) corresponding to one specific itinerary pattern \({s}_{j}\). The parameters in the affine representation of the codewords and the endpoints of the cell can be determined once the itinerary \({s}_{j}\) is given

Next, we discuss decoding schemes for the above baker’s dynamic encoding system.

2.1 Maximum likelihood decoding

Under the AWGN channel assumption, the received signal can be represented as

$$\begin{array}{*{20}l} \left\{ \begin{array}{c} r_{x,n}=x_{n}+n_{x,n},\\ r_{y,n}=y_{n}+n_{y,n}, \end{array} \right.\ \ \ \ n=0,1,\cdots, N-1, \end{array} $$
((9))

where \(n_{x,n},\ n_{y,n}\stackrel {\text {i.i.d.}}{\sim }\mathcal {N}(0,\sigma ^{2})\), n=0,1,⋯,N−1. We denote \({r}_{x}=[r_{x,0},r_{x,1},\cdots,r_{x,N-1}]^{T}\) and \({r}_{y}=[r_{y,0},r_{y,1},\cdots,r_{y,N-1}]^{T}\). The likelihood function of the observation sequences \({r}_{x},{r}_{y}\) given the source pair \((x_{0},y_{0})\) is

$${} {\fontsize{9.4pt}{9.6pt}\selectfont{\begin{aligned} p({r}_{x},{r}_{y}|x_{0}, y_{0})=(2\pi\sigma^{2})^{-N}\exp\left\{-\frac{\|{r}_{x}-{x}\|^{2}+\|{r}_{y}-{y}\|^{2}}{2\sigma^{2}}\right\}. \end{aligned}}} $$
((10))

The maximum likelihood estimate \(\hat {x}_{0}^{\text {ML}}, \hat {y}_{0}^{\text {ML}}\) of the source pair is

$${} \begin{aligned} \left\{\hat{x}_{0}^{\text{ML}}, \hat{y}_{0}^{\text{ML}}\right\}&=\underset{-1\leq x_{0}, y_{0}\leq1}{\arg\max}p({r}_{x},{r}_{y}|x_{0}, y_{0})\\ &=\underset{-1\leq x_{0}, y_{0}\leq1}{\arg\min}\sum_{k=0}^{N-1}\left[\left(r_{x,k}-x_{k}(x_{0},y_{0})\right)^{2}\right.\\&\quad\qquad\qquad\qquad+\left. \left(r_{y,k}-y_{k}(x_{0},y_{0})\right)^{2}\right]. \end{aligned} $$
((11))

The last equality emphasizes the fact that all \(x_{k}, y_{k}\) are functions of \(x_{0}\) and \(y_{0}\).

Based on the connections between itineraries and cells discussed in (5)–(8), the original maximum likelihood (ML) estimation problem in (11) can be further transformed into

$$\begin{array}{*{20}l} \left(\hat{x}_{0}^{\text{ML}}, \hat{y}_{0}^{\text{ML}}\right)&=\underset{\mathbf{s}, x_{0}\in C_{\mathbf{s}}}{\arg\min}\sum_{k=0}^{N-1}\left\{\left[r_{x,k}-(a_{k,{s}}x_{0}+b_{k,{s}})\right]^{2}+ \left[r_{y,k}-(c_{k,{s}}y_{0}+d_{k,{s}})\right]^{2}\right\} \\ &=\underset{{s}}{\arg\min}\left\{\underset{\stackrel{e_{l,{s}}\leq x_{0}\leq e_{u,{s}}}{-1\leq y_{0}\leq1}}{\min}\sum_{k=0}^{N-1}\left\{\left[r_{x,k}-(a_{k,{s}}x_{0}+b_{k,{s}})\right]^{2}+ \left[r_{y,k}-(c_{k,{s}}y_{0}+d_{k,{s}})\right]^{2}\right\}\right\}. \end{array} $$
((12))

For any given itinerary s, the inner minimization problem in Eq. (12) is convex and quadratic. Without considering the constraints, its optimal solution \((x_{0,{s}}^{*}, y_{0,{s}}^{*})\) is given in closed form as

$$\begin{array}{*{20}l} \left\{ \begin{array}{ll} x_{0,{s}}^{*}&=\frac{{a}_{{s}}^{T}({r}_{{x}}-{b}_{{s}})}{{a}_{{s}}^{T}{a}_{{s}}}, \\ y_{0,{s}}^{*}&=\frac{{c}_{{s}}^{T}({r}_{{y}}-{d}_{{s}})}{{c}_{{s}}^{T}{c}_{{s}}}, \end{array} \right. \end{array} $$
((13))

where \({a}_{{s}}=[a_{0,{s}},\cdots,a_{N-1,{s}}]^{T}\), \({b}_{{s}}=[b_{0,{s}},\cdots,b_{N-1,{s}}]^{T}\), \({c}_{{s}}=[c_{0,{s}},\cdots,c_{N-1,{s}}]^{T}\), and \({d}_{{s}}=[d_{0,{s}},\cdots,d_{N-1,{s}}]^{T}\). Taking into account that the feasible \((x_{0},y_{0})\) associated with s should lie within the admissible range, a limiting procedure must be performed to obtain the solution to the inner minimization for a specific s, i.e.,

$$\begin{array}{*{20}l} x_{0,{s}}^{\text{inner}}=\left\{ \begin{array}{lc} e_{l,{s}}, &\ \ \ \text{if}\; x_{0,{s}}^{*}<e_{l,{s}}\\ e_{u,{s}}, &\ \ \ \text{if}\; x_{0,{s}}^{*}>e_{u,{s}}\\ x_{0,{s}}^{*}, &\ \ \ \ \text{otherwise;} \end{array} \right.\\ y_{0,{s}}^{\text{inner}}=\left\{ \begin{array}{lc} -1, &\ \ \ \text{if}\; y_{0,{s}}^{*}<-1\\ +1, &\ \ \ \text{if}\; y_{0,{s}}^{*}>+1\\ y_{0,{s}}^{*}, &\ \ \ \ \text{otherwise.} \end{array} \right. \end{array} $$
((14))

Since there is only a finite number of possible itinerary patterns, by enumerating all of them and selecting the \(\{x_{0,{s}}^{\text {inner}}, y_{0,{s}}^{\text {inner}}\}\) that minimizes the outer objective, the ML estimate of \((x_{0},y_{0})\) is obtained as

$${} \begin{aligned} \left(\hat{x}_{0}^{\text{ML}}, \hat{y}_{0}^{\text{ML}}\right)=\underset{{s}}{\arg\min}\sum_{k=0}^{N-1}\left\{\left[r_{x,k}-\left(a_{k,{s}}x_{0,{s}}^{\text{inner}}+b_{k,{s}}\right)\right]^{2}+ \left[r_{y,k}-\left(c_{k,{s}}y_{0,{s}}^{\text{inner}}+d_{k,{s}}\right)\right]^{2}\right\}. \end{aligned} $$

The ML decoding scheme does not require a priori knowledge of the source’s distribution, so it is applicable regardless of the probability distribution of the source.
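For concreteness, the whole ML procedure (12)–(14) can be sketched as follows, reusing affine_params and cell_endpoints from the earlier sketch. Enumeration over all \(2^{N-1}\) itineraries is exponential in N, which is acceptable for the short block lengths considered here:

```python
from itertools import product
import numpy as np

def ml_decode(rx, ry, N):
    """Enumerate the 2^(N-1) itineraries, solve each inner problem via the
    closed form (13), clip to the admissible region (14), and keep the
    candidate with the smallest residual (the outer argmin)."""
    best_cost, best = np.inf, (None, None)
    for s in product((+1.0, -1.0), repeat=N - 1):
        a, b, c, d = affine_params(s)            # from the earlier sketch
        el, eu = cell_endpoints(a, b)
        x0 = np.clip(a @ (rx - b) / (a @ a), el, eu)
        y0 = np.clip(c @ (ry - d) / (c @ c), -1.0, 1.0)
        cost = np.sum((rx - a * x0 - b) ** 2) + np.sum((ry - c * y0 - d) ** 2)
        if cost < best_cost:
            best_cost, best = cost, (x0, y0)
    return best
```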

2.2 Minimum mean square error decoding

The ML decoding method is not optimal in the sense of mean square error. In this subsection, we focus on the minimum mean square error (MMSE) solution for the baker’s dynamic system. The MMSE estimator is given in general form as [21]

$$\begin{array}{*{20}l} \hat{X}^{\text{MMSE}}(y)=\mathsf{E}\{X|y\}=\int xf(x|y)\mathrm{d}x, \end{array} $$
((15))

where X is the random parameter to be estimated and y is a specific realization of the noisy observation Y. It is worth noting that this general solution usually does not yield a closed-form solution for concrete problems. Fortunately, under the uniform distribution assumption on the source signal, a closed-form MMSE estimator for the baker’s map can be obtained.

To present the MMSE decoder, we introduce the following notations:

$$\begin{array}{*{20}l} A_{1}=\|{a}_{{s}}\|^{2}; \ \ \ B_{1}={a}_{{s}}^{T}({b}_{{s}}-{r}_{x}); \ \ \ C_{1}=\|{b}_{{s}}-{r}_{x}\|^{2}; \\ A_{2}=\|{c}_{{s}}\|^{2}; \ \ \ B_{2}={c}_{{s}}^{T}({d}_{{s}}-{r}_{y}); \ \ \ C_{2}=\|{d}_{{s}}-{r}_{y}\|^{2}; \end{array} $$
((16))

and

$${} {\fontsize{8.6pt}{9.6pt}\selectfont{\begin{aligned} E_{1}&=\exp\left\{\frac{{B_{1}^{2}}-A_{1}C_{1}}{2\sigma^{2}A_{1}}\right\}; \ \ D_{1}=Q\left(\frac{\sqrt{A_{1}}}{\sigma}e_{l,{s}}+\frac{B_{1}}{\sigma\sqrt{A_{1}}}\right)\\&\quad-Q\left(\frac{\sqrt{A_{1}}}{\sigma}e_{u,{s}}+\frac{B_{1}}{\sigma\sqrt{A_{1}}}\right); \\ E_{2}&=\exp\left\{\frac{{B_{2}^{2}}-A_{2}C_{2}}{2\sigma^{2}A_{2}}\right\}; \ \ D_{2}=Q\left(-\frac{\sqrt{A_{2}}}{\sigma}+\frac{B_{2}}{\sigma\sqrt{A_{2}}}\right)\\&\quad-Q\left(\frac{\sqrt{A_{2}}}{\sigma}+\frac{B_{2}}{\sigma\sqrt{A_{2}}}\right); \\ J_{1}&=\exp\!\left\{\!-\frac{1}{2\sigma^{2}}A_{1}\left(\!e_{l,{s}}\,+\,\frac{B_{1}}{A_{1}}\!\right)^{2}\!\right\}-\exp\!\left\{\!-\frac{1}{2\sigma^{2}}A_{1}\left(\!e_{u,{s}}+\frac{B_{1}}{A_{1}}\!\right)^{2}\right\}; \\ J_{2}&=\exp\!\left\{\!-\frac{1}{2\sigma^{2}}A_{2}\left(\!1+\frac{B_{2}}{A_{2}}\!\right)^{2}\!\right\}-\exp\!\left\{\!-\frac{1}{2\sigma^{2}}A_{2}\left(\!-1+\frac{B_{2}}{A_{2}}\!\right)^{2}\right\}, \end{aligned}}} $$
((17))

where Q(·) is the well-known Gaussian Q-function, defined as

$$\begin{array}{*{20}l} Q(x)\triangleq\int^{\infty}_{x}\frac{1}{\sqrt{2\pi}}e^{-\frac{t^{2}}{2}}\mathrm{d}t. \end{array} $$
((18))

The MMSE estimators of \(x_{0}\) and \(y_{0}\) are then given in closed form as follows:

$$\begin{array}{*{20}l} \hat{x}_{0}^{\text{MMSE}}&=\frac{\sum_{{s}}\sqrt{\frac{2\pi}{A_{2}}}E_{1}E_{2}D_{2}\left(\frac{\sigma}{A_{1}}J_{1}-\frac{\sqrt{2\pi}B_{1}}{A_{1}^{3/2}}D_{1}\right)}{\sum_{{s}}\frac{2\pi} {\sqrt{A_{1}A_{2}}}E_{1}E_{2}D_{1}D_{2}}, \end{array} $$
((19))
$$\begin{array}{*{20}l} \hat{y}_{0}^{\text{MMSE}}&=\frac{\sum_{{s}}\sqrt{\frac{2\pi}{A_{1}}}E_{1}E_{2}D_{1}\left(\frac{\sigma}{A_{2}}J_{2}-\frac{\sqrt{2\pi}B_{2}}{A_{2}^{3/2}}D_{2}\right)}{\sum_{{s}}\frac{2\pi}{\sqrt{A_{1}A_{2}}}E_{1}E_{2}D_{1}D_{2}}. \end{array} $$
((20))

The detailed proof of the above result is rather involved and relegated to the Appendix.
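As a complement to the derivation, the estimator (16)–(20) can be evaluated directly. The following sketch, reusing the helper functions above, is a literal transcription; note that the exp() terms can underflow at high SNR, where a log-domain implementation would be preferred:

```python
from itertools import product
from math import erfc, sqrt, pi
import numpy as np

def Qf(t):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * erfc(t / sqrt(2.0))

def mmse_decode(rx, ry, N, sigma):
    """Closed-form MMSE estimates (19)-(20) by summing over itineraries."""
    num_x = num_y = den = 0.0
    for s in product((+1.0, -1.0), repeat=N - 1):
        a, b, c, d = affine_params(s)
        el, eu = cell_endpoints(a, b)
        A1, B1, C1 = a @ a, a @ (b - rx), (b - rx) @ (b - rx)
        A2, B2, C2 = c @ c, c @ (d - ry), (d - ry) @ (d - ry)
        E1 = np.exp((B1 ** 2 - A1 * C1) / (2 * sigma ** 2 * A1))
        E2 = np.exp((B2 ** 2 - A2 * C2) / (2 * sigma ** 2 * A2))
        D1 = Qf(sqrt(A1) / sigma * el + B1 / (sigma * sqrt(A1))) \
           - Qf(sqrt(A1) / sigma * eu + B1 / (sigma * sqrt(A1)))
        D2 = Qf(-sqrt(A2) / sigma + B2 / (sigma * sqrt(A2))) \
           - Qf(+sqrt(A2) / sigma + B2 / (sigma * sqrt(A2)))
        J1 = np.exp(-A1 * (el + B1 / A1) ** 2 / (2 * sigma ** 2)) \
           - np.exp(-A1 * (eu + B1 / A1) ** 2 / (2 * sigma ** 2))
        J2 = np.exp(-A2 * (1 + B2 / A2) ** 2 / (2 * sigma ** 2)) \
           - np.exp(-A2 * (-1 + B2 / A2) ** 2 / (2 * sigma ** 2))
        num_x += sqrt(2 * pi / A2) * E1 * E2 * D2 * (
            sigma / A1 * J1 - sqrt(2 * pi) * B1 / A1 ** 1.5 * D1)
        num_y += sqrt(2 * pi / A1) * E1 * E2 * D1 * (
            sigma / A2 * J2 - sqrt(2 * pi) * B2 / A2 ** 1.5 * D2)
        den += 2 * pi / sqrt(A1 * A2) * E1 * E2 * D1 * D2
    return num_x / den, num_y / den
```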

2.3 Mixed ML-MMSE decoding scheme

The MMSE estimator involves highly nonlinear numerical evaluations, such as the Q-function, which are computationally demanding and costly to implement. Next, we introduce a mixed ML-MMSE estimator for the baker’s analog code.

As previously discussed, once the itinerary is given, the analog codewords can be written as affine functions of \((x_{0},y_{0})\). For a specific itinerary s, by packing the codewords x and y into one vector v and using (5), we can rewrite the baker’s dynamic system as follows:

$$\begin{array}{*{20}l} {v}=\left[ \begin{array}{c} {x}\\ {y} \end{array} \right]= \underbrace{\left[ \begin{array}{cc} {a}_{{s}} & {0}\\ {0} & {c}_{{s}} \end{array} \right]}_{\mathbf{G}_{\mathbf{s}}^{T}} \underbrace{\left[ \begin{array}{c} x_{0}\\ y_{0} \end{array} \right]}_{\mathbf{u}}+ \underbrace{\left[ \begin{array}{c} {b}_{{s}}\\ {d}_{{s}} \end{array} \right]}_{\mathbf{t}_{{s}}} ={G}^{T}_{{s}}{u}+{t}_{{s}}, \end{array} $$
((21))

where the vectors \({a}_{{s}}\), \({b}_{{s}}\), \({c}_{{s}}\), and \({d}_{{s}}\) collect the coefficients defined recursively in (6).

Recall that ML decoding yields a detection of the itinerary s. Substituting s in (21) with the ML detection \(\hat {{s}}^{\text {ML}}\) and packing the received signals \({r}_{x}\) and \({r}_{y}\) into one vector \({r}=\left [{r}_{x}^{T}, {r}_{y}^{T}\right ]^{T}\), (9) can be expressed in the compact form

$$\begin{array}{*{20}l} {r}^{\prime}_{\hat{{s}}^{\text{ML}}}={r}-{t}_{\hat{{s}}^{\text{ML}}}={G}^{T}_{\hat{{s}}^{\text{ML}}}{u}. \end{array} $$
((22))

Thus, the baker’s map code is equivalent to a (2N,2) linear analog code with encoding matrix \({G}_{{s}}\).

Determining the source signal u in the above equation is now a standard linear MMSE estimation problem, whose solution is the well-known Wiener filter, given as [22]

$$\begin{array}{*{20}l} \hat{{u}}_{\text{MMSE}}\left(\hat{{s}}^{\text{ML}}\right)=\left({G}_{\hat{{s}}^{\text{ML}}}{G}^{T}_{\hat{{s}}^{\text{ML}}}+3\sigma^{2}{I}\right)^{-1}{G}_{\hat{{s}}^{\text{ML}}}{r}^{\prime}_{\hat{{s}}^{\text{ML}}}. \end{array} $$
((23))

A slicing operation then follows the above Wiener filtering to ensure the final estimates \(\hat {x}_{0}^{\mathrm {ML-MMSE}}\) and \(\hat {y}_{0}^{\mathrm {ML-MMSE}}\) lie in \(\left [e_{l,\hat {\mathbf {s}}^{\text {ML}}}, e_{u,\hat {\mathbf {s}}^{\text {ML}}}\right ]\) and [−1,+1], respectively.

To summarize the mixed ML-MMSE method: ML decoding is first performed to obtain \(\hat {{s}}^{\text {ML}}\); the Wiener filtering and limiting procedure then follow. The mixed ML-MMSE decoding method requires a priori knowledge of the source but involves only linear computations.
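A minimal sketch of the mixed decoder (21)–(23) follows, reusing the helper functions above; the 3σ² term in (23) is σ² divided by the variance 1/3 of a uniform source on [−1,1]:

```python
import numpy as np

def ml_mmse_decode(rx, ry, N, sigma, s_hat):
    """Wiener filtering (23) with the ML-detected itinerary s_hat."""
    a, b, c, d = affine_params(s_hat)
    G = np.zeros((2, 2 * N))                    # encoder matrix G_s of (21)
    G[0, :N], G[1, N:] = a, c
    r_prime = np.concatenate((rx - b, ry - d))  # r' = r - t_s as in (22)
    u = np.linalg.solve(G @ G.T + 3.0 * sigma ** 2 * np.eye(2), G @ r_prime)
    el, eu = cell_endpoints(a, b)
    return np.clip(u[0], el, eu), np.clip(u[1], -1.0, 1.0)   # slicing step
```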

2.4 Performance analysis

In Fig. 2, the performance of the different decoding algorithms for baker’s analog codes of different lengths is plotted. E_u denotes the average power of each source signal, and N_0 denotes the unilateral power spectral density, i.e., N_0=2σ². The ML, MMSE, and ML-MMSE decoding algorithms have identical MSE performance at high SNR. In the low SNR range, the MMSE decoding method performs best.

Fig. 2
figure 2

MSE performance of different decoding algorithms for the baker’s dynamic system

In the following, we analyze the MSE performance of the baker’s dynamic coding system via the Cramer-Rao lower bound. The CRLB is a lower bound for unbiased estimators [23]. It should be pointed out that the ML decoding method discussed above is a biased estimator due to the slicing operations. However, when the SNR is large, the decoding error is sufficiently small that the slicing rarely affects the decoding result. The CRLB can therefore precisely predict the decoding error when the SNR is modestly large and is a useful tool for understanding the system’s performance. This is also verified by the numerical results below.

The Cramer-Rao lower bound for x 0 is given as [23]

$$\begin{array}{*{20}l}{} \text{CRLB}_{x_{0}}^{\text{baker}}&=-\mathsf{E}^{-1}_{x_{0}}\left\{\frac{\partial^{2}}{\partial {x_{0}^{2}}}{\log{p(\mathbf{r}_{x},\mathbf{r}_{y}|x_{0}, y_{0})}}\right\} \end{array} $$
((24))
$$\begin{array}{*{20}l} &=-\mathsf{E}^{-1}_{x_{0}}\left\{\!\frac{\partial^{2}}{\partial {x_{0}^{2}}}\left(\frac{-1}{2\sigma^{2}}\sum_{k=0}^{N-1}\!\left((r_{x,k}-a_{k,\mathbf{s}}x_{0}-b_{k,\mathbf{s}})^{2}\right.\right.\right. \\&\qquad\qquad+\left.\left.\left.\!\!\!(r_{y,k}\!\,-\,\!c_{k,\mathbf{s}}y_{0}-d_{k,\mathbf{s}})^{2}\right)\!\!\vphantom{\left(\frac{-1}{2\sigma^{2}}\sum_{k=0}^{N-1}\!\left((r_{x,k}-a_{k,\mathbf{s}}x_{0}-b_{k,\mathbf{s}})^{2}\right.\right.}\right)\!\!\vphantom{\left\{\!\frac{\partial^{2}}{\partial {x_{0}^{2}}}\left(\frac{-1}{2\sigma^{2}}\sum_{k=0}^{N-1}\!\left((r_{x,k}-a_{k,\mathbf{s}}x_{0}-b_{k,\mathbf{s}})^{2}\right.\right.\right.}\right\} \end{array} $$
((25))
$$\begin{array}{*{20}l} &=\frac{\sigma^{2}}{\sum_{k=0}^{N-1}a_{k,\mathbf{s}}^{2}}=\frac{3\sigma^{2}}{4^{N}-1}. \end{array} $$
((26))

where \(p({r}_{x},{r}_{y}|x_{0},y_{0})\) is defined in (10), and \(\mathsf {E}_{x_{0}}(\cdot)\) denotes the expectation with respect to \(x_{0}\). The recursive relations in (6) and the fact that \({s_{k}^{2}}=1\) are used to obtain (26). Similarly, the CRLB for \(y_{0}\) is obtained as

$${} \begin{aligned} \text{CRLB}_{y_{0}}^{\text{baker}}&=-\mathsf{E}^{-1}_{y_{0}}\left\{\frac{\partial^{2}}{\partial {y_{0}^{2}}}{\log{p(\mathbf{r}_{x},\mathbf{r}_{y}|x_{0}, y_{0})}}\right\} \\&=\frac{\sigma^{2}}{\sum_{k=0}^{N-1}c_{k,\mathbf{s}}^{2}} =\frac{3\sigma^{2}}{4\left(1-(1/4)^{N}\right)}. \end{aligned} $$
((27))
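A quick numeric check of (26) and (27), reusing affine_params from the Section 2 sketch (the sums of squared coefficients are itinerary independent since \(s_{k}^{2}=1\), so any itinerary may be used):

```python
import numpy as np

# Compare sigma^2 / sum(a_k^2) and sigma^2 / sum(c_k^2) against the
# closed forms (26) and (27); the two printed pairs should agree.
N, sigma2 = 5, 0.01
a, _, c, _ = affine_params([+1, -1, +1, -1])
print(sigma2 / np.sum(a ** 2), 3 * sigma2 / (4 ** N - 1))            # (26)
print(sigma2 / np.sum(c ** 2), 3 * sigma2 / (4 * (1 - 0.25 ** N)))   # (27)
```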

When N is modestly large, \(\text {CRLB}_{x_{0}}\approx 3\sigma ^{2}/4^{N}\): each increment in N reduces the decoding distortion of \(x_{0}\) by a factor of 4. Comparatively, increasing N only slightly improves the estimation of \(y_{0}\), whose CRLB stays nearly constant at 3σ²/4. The CRLB reveals that the two sources are under unequal protection and that there is insufficient coding gain on \(y_{0}\). Recall that the x-sequence of the codeword is obtained by continuously stretching and shifting the signal; intuitively, the signal is locally magnified. In comparison, the y-sequence is obtained by compressing the signal. That is why the factors \(4^{N}\) and \(4^{-N}\) appear in the denominators of the CRLBs for \(x_{0}\) and \(y_{0}\), respectively. This insight is verified by Fig. 3, where the separate MSE decoding performances of \(x_{0}\) and \(y_{0}\) are plotted with their CRLBs as benchmarks. While \(x_{0}\) enjoys an obvious coding gain, \(y_{0}\) is poorly protected, and its distortion dominates the overall decoding performance.

Fig. 3
figure 3

MSE performance for x 0 and y 0 of the baker’s system

From the CRLB analysis, we see that the bottleneck of the baker’s analog code lies in the weak protection of \(y_{0}\). To improve the baker’s map code, effective protection must therefore also be extended to \(y_{0}\).

3 Improvement I—mirrored baker’s analog code

As analyzed in the last section, the unsatisfying performance of the original baker’s map stems from the poor protection of \(y_{0}\). To enhance the protection of \(y_{0}\), a natural idea is to perform a second original baker’s map encoding with the roles of \(x_{0}\) and \(y_{0}\) switched, so that both \(x_{0}\) and \(y_{0}\) obtain balanced and effective protection. This idea leads to the improved scheme discussed in this section—the mirrored baker’s dynamic coding system. The mirrored baker’s structure comprises two branches: the first branch is the original baker’s encoder, and the second branch exchanges the roles of \(x_{0}\) and \(y_{0}\) and performs the original baker’s encoding a second time. For a given N, the mirrored baker’s system forms a (4N,2) analog code.

Here we adjust our notation for the new system to keep the following discussion clear. The codewords associated with the two branches are labeled with subscripts 1 and 2, respectively. In the first branch, \(x_{0}\) is tent-map encoded; in the second branch, \(y_{0}\) is. The codewords associated with \(x_{0}\) and \(y_{0}\) of the two branches are denoted \(\{{x}_{1},{y}_{1}\}\) and \(\{{x}_{2},{y}_{2}\}\), respectively, with corresponding noisy observations \(\{{r}_{1,x},{r}_{1,y}\}\) and \(\{{r}_{2,x},{r}_{2,y}\}\). The encoding procedure is expressed as

$$\begin{array}{*{20}l} &\left[ \begin{array}{c} x_{1,n+1} \\ y_{1,n+1} \end{array} \right]= F(x_{1,n},y_{1,n}),\\& \left[ \begin{array}{c} y_{2,n+1} \\ x_{2,n+1} \end{array} \right]= F(y_{2,n},x_{2,n}),\ n=0,\cdots,N-2; \end{array} $$
((28))

with x 1,0=x 2,0=x 0 and y 1,0=y 2,0=y 0. The observations are represented as

$$\begin{array}{*{20}l} \left\{ \begin{array}{c} r_{j,x,n}=x_{j,n}+n_{j,x,n},\\ r_{j,y,n}=y_{j,n}+n_{j,y,n}, \end{array} \right.,\ j=1,2; \ n=0,1,\cdots, N-1. \end{array} $$
((29))
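In code, the mirrored encoder (28) is simply two calls to the original encoder with the inputs swapped; a minimal sketch, reusing bakers_encode from Section 2:

```python
def mirrored_bakers_encode(x0, y0, N):
    """Sketch of (28): branch 1 is the original baker's encoder, branch 2
    exchanges the roles of x0 and y0, yielding a (4N, 2) analog code."""
    x1, y1 = bakers_encode(x0, y0, N)
    y2, x2 = bakers_encode(y0, x0, N)   # roles of the two inputs swapped
    return x1, y1, x2, y2
```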

The mirrored baker’s dynamic system has two itineraries, \({s}_{1}\) and \({s}_{2}\), from the first and second branches, respectively; together they compose the entire itinerary of the mirrored baker’s system. As previously discussed, \({s}_{1}\) indicates a partition of the feasible domain of \(x_{0}\), and likewise \({s}_{2}\) for \(y_{0}\). The entire feasible domain of the source pair \((x_{0},y_{0})\), a 2×2 square centered at the origin, is uniformly divided into \(2^{2N-2}\) cells, each cell being a tiny square with edge length \(2^{-(N-2)}\). Assuming that the source \((x_{0},y_{0})\) is known to lie in some specific cell, the itineraries \({s}_{1}\) and \({s}_{2}\) can be determined and the codewords can be expressed as affine functions:

$$\begin{array}{*{20}l} \left\{ \begin{array}{ll} x_{1,k,{s}_{1}}(x_{0},y_{0})&=a_{1,k,{s}_{1}}x_{0}+b_{1,k,{s}_{1}}, \\ y_{1,k,{s}_{1}}(x_{0},y_{0})&=c_{1,k,{s}_{1}}y_{0}+d_{1,k,{s}_{1}}, \end{array} \right.\\ \left\{ \begin{array}{ll} x_{2,k,{s}_{2}}(x_{0},y_{0})&=a_{2,k,{s}_{2}}x_{0}+b_{2,k,{s}_{2}}, \\ y_{2,k,{s}_{2}}(x_{0},y_{0})&=c_{2,k,{s}_{2}}y_{0}+d_{2,k,{s}_{2}}, \end{array} \right. \end{array} $$
((30))

with k=0,1,⋯,N−1. The parameters \(\{a_{1,k,{s}_{1}},b_{1,k,{s}_{1}}, c_{1,k,{s}_{1}}, d_{1,k,{s}_{1}}\}\) and \(\{a_{2,k,{s}_{2}},b_{2,k,{s}_{2}},c_{2,k,{s}_{2}}, d_{2,k,{s}_{2}}\}\) are for the first and the second branches, respectively, and can be determined recursively for k=0,⋯,N−2 as follows:

$${} {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} \left\{ \begin{array}{l} a_{1,k+1,{s}_{1}}=-2s_{1,k}a_{1,k,{s}_{1}}, \\ b_{1,k+1,{s}_{1}}=1-2s_{1,k}b_{1,k,{s}_{1}}, \\ c_{1,k+1,{s}_{1}}=-\frac{1}{2}s_{1,k}c_{1,k,{s}_{1}}, \\ d_{1,k+1,{s}_{1}}=\frac{1}{2}s_{1,k}(1-d_{1,k,{s}_{1}}), \end{array} \right.\ \left\{ \begin{array}{l} c_{2,k+1,{s}_{2}}=-2s_{2,k}c_{2,k,{s}_{2}}, \\ d_{2,k+1,{s}_{2}}=1-2s_{2,k}d_{2,k,{s}_{2}}, \\ a_{2,k+1,{s}_{2}}=-\frac{1}{2}s_{2,k}a_{2,k,{s}_{2}}, \\ b_{2,k+1,{s}_{2}}=\frac{1}{2}s_{2,k}(1-b_{2,k,{s}_{2}}), \end{array} \right. \end{aligned}}} $$
((31))

with the starting point

$$\begin{array}{*{20}l} \left\{ \begin{array}{l} a_{1,0,{s}_{1}}=a_{2,0,{s}_{2}}=1, \\ b_{1,0,{s}_{1}}=b_{2,0,{s}_{2}}=0, \\ c_{1,0,{s}_{1}}=c_{2,0,{s}_{2}}=1, \\ d_{1,0,{s}_{1}}=d_{2,0,{s}_{2}}=0. \end{array} \right. \end{array} $$
((32))

We denote \({a}_{j,{s}_{j}}=[a_{j,0,{s}_{j}},a_{j,1,{s}_{j}},\cdots,a_{j,N-1,{s}_{j}} ]^{T}\), j=1,2, and define \({b}_{j,{s}_{j}}\), \({c}_{j,{s}_{j}}\), and \({d}_{j,{s}_{j}}\) in the same way for j=1,2.

For a specific itinerary \(\{{s}_{1},{s}_{2}\}\), the indicated admissible cell has projection \(C_{\mathbf {s}_{1}}\) onto the feasible domain of \(x_{0}\) and projection \(C_{\mathbf {s}_{2}}\) onto that of \(y_{0}\), i.e.,

$$ \begin{aligned} &\qquad\qquad x_{0}\in C_{\mathbf{s}_{1}}=\left[e_{1,l,{s}_{1}}, e_{1,u,{s}_{1}}\right], \\&\qquad\qquad y_{0}\in C_{\mathbf{s}_{2}}=\left[e_{2,l,{s}_{2}}, e_{2,u,{s}_{2}}\right], \text{with}\\ &\left\{ \begin{array}{l} e_{1,l,{s}_{1}}=\min\left\{\frac{-b_{1,N-1,{s}_{1}}+1}{a_{1,N-1,{s}_{1}}}, \frac{-b_{1,N-1,{s}_{1}}-1}{a_{1,N-1,{s}_{1}}}\right\},\\ e_{1,u,{s}_{1}}=\max\left\{\frac{-b_{1,N-1,{s}_{1}}+1}{a_{1,N-1,{s}_{1}}}, \frac{-b_{1,N-1,{s}_{1}}-1}{a_{1,N-1,{s}_{1}}}\right\}, \end{array} \right.\\& \left\{ \begin{array}{l} e_{2,l,{s}_{2}}=\min\left\{\frac{-d_{2,N-1,{s}_{2}}+1}{c_{2,N-1,{s}_{2}}}, \frac{-d_{2,N-1,{s}_{2}}-1}{c_{2,N-1,{s}_{2}}}\right\},\\ e_{2,u,{s}_{2}}=\max\left\{\frac{-d_{2,N-1,{s}_{2}}+1}{c_{2,N-1,{s}_{2}}}, \frac{-d_{2,N-1,{s}_{2}}-1}{c_{2,N-1,{s}_{2}}}\right\}. \end{array} \right. \end{aligned} $$
((33))

Next, we discuss decoding methods for the mirrored baker’s dynamic system. These methods are straightforward extensions of the results for the original baker’s system; in the following, the main results are provided with details omitted.

3.1 ML decoding

In this subsection, the ML decoding of the mirrored baker’s map code is presented. The estimates \(\hat {x}_{0}^{\text {ML}}, \hat {y}_{0}^{\text {ML}}\) maximizing the likelihood function are equivalently given as

$${} \begin{aligned} \left(\hat{x}_{0}^{\text{ML}}, \hat{y}_{0}^{\text{ML}}\right)&=\underset{{s}_{1},{s}_{2}}{\arg\min}\left\{\underset{\stackrel{e_{1,l,{s}_{1}}\leq x_{0}\leq e_{1,u,{s}_{1}}}{e_{2,l,{s}_{2}}\leq y_{0}\leq e_{2,u,{s}_{2}}}}{\min}\sum_{j=1}^{2}\sum_{k=0}^{N-1}\right.\\&\left.\quad\qquad\qquad\left\{\left[r_{j,x,k}-\left(a_{j,k,{s}_{j}}x_{0}+b_{j,k,{s}_{j}}\right)\right]^{2}\right.\right.\\ &\quad\qquad\qquad+\left.\left.\left[r_{j,y,k}-\left(c_{j,k,{s}_{j}}y_{0}+d_{j,k,{s}_{j}}\right)\right]^{2}\right\}\!\!{\vphantom{\underset{\stackrel{e_{1,l,{s}_{1}}\leq x_{0}\leq e_{1,u,{s}_{1}}}{e_{2,l,{s}_{2}}\leq y_{0}\leq e_{2,u,{s}_{2}}}}{\min}}}\right\}. \end{aligned} $$
((34))

For a given pair of sequences {s 1,s 2}, the optimal solution of the inner minimization of the above equation is given as

$$\begin{array}{*{20}l} \left\{ \begin{array}{ll} x_{0,{s}_{1},{s}_{2}}^{*}&=\frac{{a}_{{s}_{1}}^{T}\left({r}_{1,x}-{b}_{{s}_{1}}\right)+{a}_{{s}_{2}}^{T}\left({r}_{2,x}-{b}_{{s}_{2}}\right)}{{a}_{{s}_{1}}^{T}{a}_{{s}_{1}}+{a}_{{s}_{2}}^{T}{a}_{{s}_{2}}}, \\ y_{0,{s}_{1}, {s}_{2}}^{*}&=\frac{{c}_{{s}_{1}}^{T}\left({r}_{1,y}-{d}_{{s}_{1}}\right)+{c}_{{s}_{2}}^{T}\left({r}_{2,y}-{d}_{{s}_{2}}\right)}{{c}_{{s}_{1}}^{T}{c}_{{s}_{1}}+{c}_{{s}_{2}}^{T}{c}_{{s}_{2}}}, \end{array} \right. \end{array} $$
((35))

followed by the hard limiter

$$\begin{array}{*{20}l} x_{0,{s}_{1},{s}_{2}}^{\text{inner}}&=\left\{ \begin{array}{l} e_{1,l,{s}_{1}},\ \text{if}\ x_{0,{s}_{1},{s}_{2}}^{*}<e_{1,l,{s}_{1}}\\ e_{1,u,{s}_{1}},\ \text{if}\ x_{0,{s}_{1},{s}_{2}}^{*}>e_{1,u,{s}_{1}}\\ x_{0,{s}_{1}, {s}_{2}}^{*},\ \text{otherwise;} \end{array} \right. \\y_{0,{s}_{1},{s}_{2}}^{\text{inner}}&=\left\{ \begin{array}{l} e_{2,l,{s}_{2}},\ \text{if}\; y_{0,{s}_{1},{s}_{2}}^{*}<e_{2,l,{s}_{2}}\\ e_{2,u,{s}_{2}},\ \text{if}\; y_{0,{s}_{1},{s}_{2}}^{*}>e_{2,u,{s}_{2}}\\ y_{0,{s}_{1},{s}_{2}}^{*},\ \text{otherwise.} \end{array} \right. \end{array} $$
((36))

The ML estimate is given by selecting, among the different itineraries \(\{{s}_{1},{s}_{2}\}\), the pair \((x_{0,{s}_{1},{s}_{2}}^{\text{inner}}, y_{0,{s}_{1},{s}_{2}}^{\text{inner}})\) that minimizes the outer objective in (34).

3.2 MMSE decoding

To introduce the MMSE decoding results for the mirrored baker’s system, we adopt the following notations:

$${} \begin{aligned} &\left\{ \begin{array}{l} \bar{A}_{1}=\|{a}_{{s}_{1}}\|^{2}+\|{a}_{{s}_{2}}\|^{2}; \\ \bar{B}_{1}={a}_{{s}_{1}}^{T}\left({b}_{{s}_{1}}-{r}_{1,x}\right)\,+\,{a}_{{s}_{2}}^{T}\left({b}_{{s}_{2}}-{r}_{2,x}\right);\\ \bar{C}_{1}=\|{b}_{{s}_{1}}-{r}_{1,x}\|^{2}+\|{b}_{{s}_{2}}-{r}_{2,x}\|^{2}; \end{array} \right.\\& \left\{ \begin{array}{l} \bar{A}_{2}=\|{c}_{{s}_{1}}\|^{2}+\|{c}_{{s}_{2}}\|^{2};\\ \bar{B}_{2}={c}_{{s}_{1}}^{T}\left({d}_{{s}_{1}}-{r}_{1,y}\right)+{c}_{{s}_{2}}^{T}\left({d}_{{s}_{2}}-{r}_{2,y}\right);\\ \bar{C}_{2}=\|{d}_{{s}_{1}}-{r}_{1,y}\|^{2}+\|{d}_{{s}_{2}}-{r}_{2,y}\|^{2}; \end{array} \right.\\ &{\kern5pt}\quad\bar{E}_{j}=\exp\left\{\frac{\bar{B}_{j}^{2}-\bar{A}_{j}\bar{C}_{j}}{2\sigma^{2}\bar{A}_{j}}\right\}; \bar{D}_{j}=Q\left(\frac{\sqrt{\bar{A}_{j}}}{\sigma}e_{j,l,{s}_{j}}+\frac{\bar{B}_{j}}{\sigma\sqrt{\bar{A}_{j}}}\right)\\ &\qquad-Q\left(\frac{\sqrt{\bar{A}_{j}}}{\sigma}e_{j,u,{s}_{j}}+\frac{\bar{B}_{j}}{\sigma\sqrt{\bar{A}_{j}}}\right);\\ &{\kern5pt}\quad\bar{J}_{j}=\exp\left\{-\frac{1}{2\sigma^{2}}\bar{A}_{j}\left(e_{j,l,{s}_{j}}+\frac{\bar{B}_{j}}{\bar{A}_{j}}\right)^{2}\right\}\\ &\qquad-\exp\left\{-\frac{1}{2\sigma^{2}}\bar{A}_{j}\left(e_{j,u,{s}_{j}}+\frac{\bar{B}_{j}}{\bar{A}_{j}}\right)^{2}\right\},j=1,2. \end{aligned} $$
((37))

The calculation of the MMSE estimate still follows similar lines as for the single baker’s system. The major difference is that, since the sign sequence of \(y_{0}\) now contributes to the itinerary, the integration over \(y_{0}\) must be decomposed into parts over the different \(C_{{s}_{2}}\)’s. The MMSE estimate of \(x_{0}\) is given as

$$\begin{array}{*{20}l} \hat{x}_{0}^{\text{MMSE}}&=E\left\{x_{0}|{r}_{1,x}, {r}_{1,y}, {r}_{2,x}, {r}_{2,y}\right\}\\&=\int_{-1}^{+1} x_{0}\,f\left(x_{0}|{r}_{1,x}, {r}_{1,y}, {r}_{2,x}, {r}_{2,y}\right)\mathrm{d}x_{0} \\ &=\sum_{\{{s}_{1}\}}\int_{C_{{s}_{1}}}x_{0}\frac{f\left({r}_{1,x},{r}_{1,y},{r}_{2,x},{r}_{2,y}|x_{0}\right)f(x_{0})} {f({r}_{1,x},{r}_{1,y},{r}_{2,x},{r}_{2,y})}\mathrm{d}x_{0} \end{array} $$
((38))
$$\begin{array}{*{20}l} &=\frac{1}{4\,f\left({r}_{1,x},{r}_{1,y},{r}_{2,x}, {r}_{2,y}\right)}\sum_{\{{s}_{1}\}}\int_{C_{{s}_{1}}} x_{0}\sum_{\{{s}_{2}\}}\int_{C_{{s}_{2}}}\\&\qquad f\left({r}_{1,x},{r}_{1,y},{r}_{2,x}, {r}_{2,y}|x_{0},y_{0}\right)\mathrm{d}y_{0}\mathrm{d}x_{0} \\ &=\frac{\underset{\{{s}_{1},{s}_{2}\}}{\sum}\sqrt{\frac{2\pi}{\bar{A}_{2}}}\bar{E}_{1}\bar{E}_{2}\bar{D}_{2}\left(\frac{\sigma}{\bar{A}_{1}}\bar{J}_{1}-\frac{\sqrt{2\pi}\bar{B}_{1}}{\bar{A}_{1}^{3/2}}\bar{D}_{1}\right)}{\underset{\{{s}_{1},{s}_{2}\}}{\sum}\frac{2\pi}{\sqrt{\bar{A}_{1}\bar{A}_{2}}}\bar{E}_{1}\bar{E}_{2}\bar{D}_{1}\bar{D}_{2}}. \end{array} $$
((39))

Similarly, the MMSE estimation of y 0 for the mirrored baker’s map code is given as follows:

$$\begin{array}{*{20}l} \hat{y}_{0}^{\text{MMSE}} &=\frac{\underset{\{{s}_{1},{s}_{2}\}}{\sum}\sqrt{\frac{2\pi}{\bar{A}_{1}}}\bar{E}_{1}\bar{E}_{2}\bar{D}_{1}\left(\frac{\sigma} {\bar{A}_{2}}\bar{J}_{2}-\frac{\sqrt{2\pi}\bar{B}_{2}}{\bar{A}_{2}^{3/2}}\bar{D}_{2}\right)}{\underset{\{{s}_{1},{s}_{2}\}}{\sum}\frac{2\pi}{\sqrt{\bar{A}_{1}\bar{A}_{2}}}\bar{E}_{1}\bar{E}_{2}\bar{D}_{1}\bar{D}_{2}}. \end{array} $$
((40))

3.3 ML-MMSE decoding

If the itinerary \(\{{s}_{1},{s}_{2}\}\) is given, the codewords of the mirrored baker’s map system can be represented as affine functions of the original source \((x_{0},y_{0})\), with coefficients determined recursively via Eqs. (31) and (32). Thus, the mirrored baker’s dynamic system can be rewritten as follows:

$$\begin{array}{*{20}l}{} {v}=\left[ \begin{array}{c} {x}_{1}\\ {y}_{1} \\ {x}_{2} \\ {y}_{2} \end{array} \right]= \left[\! \begin{array}{cc} {a}_{{s}_{1}} & {0}\\ {0} & {c}_{{s}_{1}}\\ {a}_{{s}_{2}} & {0}\\ {0} & {c}_{{s}_{2}} \end{array} \!\right] \left[ \begin{array}{c} x_{0}\\ y_{0} \end{array} \right]+ \left[\! \begin{array}{c} {b}_{{s}_{1}}\\ {d}_{{s}_{1}}\\ {b}_{{s}_{2}}\\ {d}_{{s}_{2}} \end{array}\! \right] ={G}^{T}_{{s}_{1},{s}_{2}}{u}+{t}_{{s}_{1},{s}_{2}} \end{array} $$
((41))

We first perform the ML estimation discussed in Section 3.1 to obtain the ML detection of the itinerary \(\{\hat {s}_{1}^{\text {ML}},\hat {s}_{2}^{\text {ML}}\}\). Taking this detection as the true itinerary, the linear MMSE estimator is invoked to estimate \(\{x_{0},y_{0}\}\) as follows:

$${} {\fontsize{8.6pt}{9.6pt}\selectfont{\begin{aligned} \hat{{u}}_{\text{MMSE}}\left(\hat{{s}}_{1}^{\text{ML}},\hat{{s}}_{2}^{\text{ML}}\right)=\left({G}_{\hat{{s}}_{1}^{\text{ML}},\hat{{s}}_{2}^{\text{ML}}}{G}^{T}_{\hat{{s}}_{1}^{\text{ML}},\hat{{s}}_{2}^{\text{ML}}}+3\sigma^{2}{I}\right)^{-1}{G}_{\hat{{s}}_{1}^{\text{ML}},\hat{{s}}_{2}^{\text{ML}}}\left({r}-{t}_{\hat{{s}}_{1}^{\text{ML}}, \hat{{s}}_{2}^{\text{ML}}}\right). \end{aligned}}} $$
((42))

Then, a limiting procedure is performed to obtain admissible decoding results.

4 Improvement II—single-input (1-D) baker’s analog code

Inspired by the performance analysis in Section 2.4, to enhance the original baker’s map performance, effective protection must be applied equally to all sources. Besides the mirrored structure proposed in the last section, here we propose an alternative improvement strategy: feeding the y-sequence with the input \(x_{0}\), which forms a single-input (1-D) baker’s analog code. By feeding both inputs of the original baker’s map with the single source \(x_{0}\), the problem of poor protection of \(y_{0}\) vanishes and the protection of \(x_{0}\) is enhanced. In other words, the protection of all sources is equal and strengthened. Furthermore, another inconspicuous yet profound motivation for this 1-D scheme is that it embeds a hidden repetition code of the itinerary, which is explained in full detail as follows.

As pointed out in [18] and [19], reliably determining the itinerary is a key factor affecting decoding performance. In the original baker’s analog coding system, the y-sequence does not help to protect the itinerary, since each of its signals is uncorrelated with \(x_{0}\). Recall that the y-sequence of the baker’s system is generated by the inverse tent map function using the sign sequence from the x-sequence. By feeding the y-sequence with \(x_{0}\), we have \(y_{1}=G_{\mathsf{sign}(x_{0})}^{-1}(x_{0})\), or equivalently, \(x_{0}=G(y_{1})\). So \(y_{1}\) can be regarded as the state immediately before \(x_{0}\) in the tent dynamic system, which we denote as \(x_{-1}\). In the same manner, we can regard \(y_{i}\) as the immediate previous state of \(y_{i-1}\) in a tent map dynamic sequence for i=2,⋯,N−1. Thus, by rewriting the y-sequence as \(\{y_{N-1}, y_{N-2}, \cdots, y_{0}\}\triangleq \{x_{-(N-1)},x_{-(N-2)},\cdots, x_{0}\}\) and concatenating it with the x-sequence, we actually obtain a long tent map analog code (except that there are two copies of \(x_{0}\)). Moreover, this equivalent tent map sequence has a special pattern: the first half of the itinerary is the reverse of the second half. In other words, the 1-D baker’s analog code constructs a hidden repetition code for the itinerary sequence; both the x- and y-sequences become analog “parity bits” of the itinerary. This interesting alternative view of the 1-D baker’s dynamic system is illustrated in Fig. 4.
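This backward-iteration view is easy to verify numerically; the following sketch, reusing bakers_encode from Section 2, checks that applying G to each \(y_{i}\) recovers the previous state:

```python
import numpy as np

# With both inputs set to x0, applying G to y_i recovers the previous state,
# i.e., the y-sequence is a backward tent-map run.
G = lambda t: 1.0 - 2.0 * abs(t)
x, y = bakers_encode(0.3, 0.3, N=5)
assert np.isclose(G(y[1]), x[0])        # y_1 plays the role of x_{-1}
assert all(np.isclose(G(y[i]), y[i - 1]) for i in range(2, 5))
```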

Fig. 4
figure 4

1-D baker’s dynamic encoding system

Next, keeping the notation introduced above for the baker’s system, we give the decoding results for this one-dimensional baker’s analog code.

4.1 ML decoding scheme

Similar to the previous discussion, for each given itinerary s, the solution to the inner minimization is obtained as

$${} {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} x_{0,{s}}^{\text{inner}}=\left\{ \begin{array}{l} e_{l,{s}},\ \text{if}\; x_{0,{s}}^{*}<e_{l,{s}},\\ e_{u,{s}},\ \text{if}\; x_{0,{s}}^{*}>e_{u,{s}},\\ x_{0,{s}}^{*},\ \text{otherwise,} \end{array} \right.\ \text{with}\ x_{0,{s}}^{*}=\frac{{a}_{{s}}^{T}({r}_{{x}}-{b}_{{s}})+{c}_{{s}}^{T}({r}_{{y}}-{d}_{{s}})}{{a}_{{s}}^{T}{a}_{{s}}+{c}_{{s}}^{T}{c}_{{s}}}. \end{aligned}}} $$
((43))

The ML estimate is obtained by going over all possible itineraries and selecting the \(x_{0,{s}}^{\text {inner}}\) that minimizes the likelihood objective.

4.2 MMSE decoding scheme

Defining the following parameters

$${} {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} &A=\|{a}_{{s}}\|^{2}+\|{c}_{{s}}\|^{2}; \ \ \ B={a}_{{s}}^{T}({b}_{{s}}-{r}_{x})+{c}_{{s}}^{T}({d}_{{s}}-{r}_{y}); \\& C=\|{b}_{{s}}-{r}_{x}\|^{2}+\|{d}_{{s}}-{r}_{y}\|^{2}; \\ &E=\exp\left\{\frac{B^{2}-AC}{2\sigma^{2}A}\right\}; \ \ D=Q\left(\frac{\sqrt{A}}{\sigma}e_{l,{s}}+\frac{B}{\sigma\sqrt{A}}\right)\\ &\qquad-Q\left(\frac{\sqrt{A}}{\sigma}e_{u,{s}}+\frac{B}{\sigma\sqrt{A}}\right);\\ &J=\exp\left\{-\frac{1}{2\sigma^{2}}A\left(e_{l,{s}}+\frac{B}{A}\right)^{2}\right\}-\exp\left\{-\frac{1}{2\sigma^{2}}A\left(e_{u,{s}}+\frac{B}{A}\right)^{2}\right\}\!, \end{aligned}}} $$
((44))

the MMSE estimate is given as

$$\begin{array}{*{20}l} \hat{x}_{0}^{\text{MMSE}}&=\frac{\sum_{{s}}E\left(\frac{\sigma}{A}J-\frac{\sqrt{2\pi}B}{A^{3/2}}D\right)}{\sum_{{s}}\sqrt{\frac{2\pi}{A}}ED}. \end{array} $$
((45))

4.3 ML-MMSE decoding scheme

Assume that the ML detection of the itinerary is \(\hat {\mathbf {s}}^{\text {ML}}\); the noiseless part of the received signal can then be written in an affine form of \(x_{0}\) as

$$\begin{array}{*{20}l} \left[ \begin{array}{c} {r}_{x}\\ {r}_{y} \end{array} \right]= \left[ \begin{array}{c} {a}_{\hat{{s}}^{\text{ML}}} \\ {c}_{\hat{{s}}^{\text{ML}}} \end{array} \right]x_{0} + \left[ \begin{array}{c} {b}_{\hat{{s}}^{\text{ML}}} \\ {d}_{\hat{{s}}^{\text{ML}}} \end{array} \right]. \end{array} $$
((46))

The linear MMSE estimate is obtained by performing a limiting procedure to the following value:

$$\begin{array}{*{20}l} \hat{x}_{0}^{\text{MMSE}}(\hat{{s}}^{\text{ML}})=\frac{{a}_{\hat{{s}}^{\text{ML}}}^{T}({r}_{x}-{b}_{\hat{{s}}^{\text{ML}}})+{c}_{\hat{{s}}^{\text{ML}}}^{T}({r}_{y}-{d}_{\hat{{s}}^{\text{ML}}})}{\|{a}_{\hat{{s}}^{\text{ML}}}\|^{2}+\|{c}_{\hat{{s}}^{\text{ML}}}\|^{2}+3\sigma^{2}}. \end{array} $$
((47))
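A minimal sketch of (46)–(47) follows, reusing the helper functions from Section 2; the final clipping to the admissible cell implements the limiting procedure:

```python
import numpy as np

def ml_mmse_decode_1d(rx, ry, sigma, s_hat):
    """Linear MMSE step (47) for the single-input code, given the
    ML-detected itinerary s_hat."""
    a, b, c, d = affine_params(s_hat)
    x0 = (a @ (rx - b) + c @ (ry - d)) / (a @ a + c @ c + 3.0 * sigma ** 2)
    el, eu = cell_endpoints(a, b)
    return np.clip(x0, el, eu)          # limiting to the admissible cell
```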

5 Simulation results and discussions

In this section, numerical results and discussions are presented. The MSE performance of the ML, MMSE, and ML-MMSE decoding algorithms for the mirrored baker’s and single-input baker’s systems is presented in Figs. 5 and 6, respectively, where E_u represents the average power of each source signal and N_0 denotes the unilateral power spectral density. In our experiments, the source signals are independent and uniformly distributed over [−1,+1]. For each coding system, codes with N=3 and N=5 are tested. The associated CRLBs (determined explicitly in Eq. (48)) and the uncoded performance are plotted as benchmarks. The numerical results verify the validity of the decoding algorithms developed in the previous sections and show that both the mirrored and the single-input structures improve the MSE performance of the original baker’s coding system.

Fig. 5
figure 5

MSE of different decoding algorithms for the mirrored baker’s analog code

Fig. 6
figure 6

MSE performance of different decoding algorithms for the single-input baker’s analog code

Figure 7 compares the performance of the mirrored and single-input baker’s map codes and the tent map analog codes proposed in [16, 17], where code rates of 1/6 and 1/10 are considered for each coding scheme. Although the tent map encoding scheme can be proved to have a lower CRLB, its actual performance is inferior to the improved baker’s schemes over a wide SNR range.

Fig. 7
figure 7

MSE performance of the tent map code, mirrored baker’s map code, and single-input baker’s map code (code rate 1/10)

Generally, the distortion of analog transmission systems can be decomposed into two parts [2]: anomalous distortion and weak distortion. Weak distortion, stemming from the channel noise, can become very small and close to zero as long as the channel noise is sufficiently small. As analyzed in [13], to reduce the estimation distortion, the transmitted signal must be stretched as much as possible, which can intuitively be seen as “amplifying” the signal. However, due to the transmission power constraint, transmitted signals have to be bounded, so the stretching cannot be arbitrarily extensive without folding; the stretched signal will have multiple folds. ML decoding projects the received signal onto the valid codeword with minimum Euclidean distance, and projection onto an erroneous fold results in anomalous distortion, which introduces a rather notable estimation error. In practical code design, weak distortion and anomalous distortion are two competing aspects—lengthening the codeword curve relieves the weak distortion but inevitably introduces more folds and a narrower space between folds, hence a higher chance of anomalous distortion; likewise, shortening the codeword curve reduces the chance of anomalous distortion but increases the weak distortion. The key is to strike the best balance between these competing factors.

Specifically, the weak error can be accurately characterized by the CRLB, and the anomalous error can be roughly indicated by the BER.

The CRLB for \(x_{0}\) and \(y_{0}\) of the mirrored baker’s system is given in the following; it is also the CRLB of the single-input baker’s code:

$${} {\fontsize{9.2pt}{9.6pt}\selectfont{\begin{aligned} \text{CRLB}_{x_{0}}^{\text{mirror}}&=-\mathsf{E}^{-1}_{x_{0}}\left\{\frac{\partial^{2}}{\partial {x_{0}^{2}}}{\log{p(\mathbf{r}_{1,x},\mathbf{r}_{1,y}, \mathbf{r}_{2,x},\mathbf{r}_{2,y}|x_{0}, y_{0})}}\right\} \\ &=\frac{\sigma^{2}}{\sum_{k=0}^{N-1}a_{1,k}^{2}+\sum_{k=0}^{N-1}a_{2,k}^{2}}\\ &=\frac{\sigma^{2}}{\sum_{k=0}^{N-1}2^{2k}+\sum_{k=0}^{N-1}2^{-2k}}\\ &=\frac{3\sigma^{2}}{4^{N}-4^{1-N}+3}=\text{CRLB}_{y_{0}}^{\text{mirror}}=\text{CRLB}_{x_{0}}^{\text{1-d}}. \end{aligned}}} $$
((48))

For comparison, the CRLB for the tent map code with code rate 1/(2N) is given as

$$\begin{array}{*{20}l} \text{CRLB}_{x_{0}}^{\text{tent}}=\frac{3\sigma^{2}}{4^{2N}-1}. \end{array} $$
((49))

It is not hard to verify the fact that

$$\begin{array}{*{20}l} \text{CRLB}_{x_{0}}^{\text{tent}}<\text{CRLB}_{x_{0}}^{\text{mirror}}=\text{CRLB}_{x_{0}}^{1-d}, \forall N\in\mathbb{N}^{+}. \end{array} $$
((50))

This means that under equal bandwidth expansion (or code rate), the tent map system always has a lower weak distortion.

For the tent map and baker’s map coding systems, itinerary errors cause anomalous distortion. To compare the anomalous distortion of different analog coding systems, we examine the bit error rate (BER) performance of the itinerary bits for each code. We test the tent map code, mirrored baker’s code, and single-input code with N=5, each of which has an itinerary of length 4. The BER of each itinerary bit for the different systems is illustrated in the sub-figures of Fig. 8. Note that in Fig. 8, the tent map code has a code rate of 1/5, while the mirrored baker’s and single-input baker’s systems have a code rate of 1/10; the BER performance for the first four itinerary bits of a rate-1/10 tent map system is even worse than that of the rate-1/5 tent map code.

Fig. 8
figure 8

BER of itinerary bits for the tent map code, mirrored baker’s map code, and single-input baker’s map code (N=5)

As the sub-figures of Fig. 8 show, the mirrored baker’s map code and single-input baker’s map code have an obvious advantage in itinerary BER performance. The mirrored structure exhibits equal protection for the different itinerary bits, and the BER decays with a steeper slope than that of the tent map code. Comparatively, the single-input baker’s system provides unequal protection of the itinerary bits: the BER for itinerary bits with smaller indices decays much faster than for those with larger indices. Since errors in itinerary bits with smaller indices cause more serious distortion, the single-input baker’s system performs a clever unequal protection of the itinerary bits adapted to their significance. This also explains the single-input baker’s map code’s performance advantage over the mirrored baker’s map code in the medium SNR range.

From the above comparison, it can be seen that although the improved baker’s analog codes have larger weak distortion than the tent map code, their anomalous distortion is effectively suppressed. The modified baker’s map codes achieve a better balance between the two kinds of distortion and consequently outperform the tent map code over a wide SNR range.

Next, we compare the baker’s map code with the optimum performance theoretically attainable (OPTA) and with existing analog coding schemes in the literature [12–14]. OPTA is obtained by equating the rate distortion function with the channel capacity. From [24], we know that the rate distortion function depends on the source distribution and usually does not have a closed-form expression. One of the few exceptions is the Gaussian source, whose distortion function can be obtained analytically (Theorem 13.3.2 in [24]). In the Gaussian case, OPTA can be obtained in closed form, which is part of the reason why the existing literature tends to choose Gaussian sources as the case of study, as [12–15] do. However, Gaussian sources cannot be fed directly to the family of baker’s map encoders, whose inputs are required to be bounded ([−1,+1]). Nevertheless, to make our proposal comparable with OPTA and other previous works, we perform the comparison in an approximated manner by using a truncated Gaussian source. The source signal is first generated from the Gaussian distribution \(\mathcal {N}(m,\sigma ^{2})=\mathcal {N}(0,(1/3)^{2})\). We then truncate it using a limiting range of 3σ=1, such that 99.7 % of the probability mass falls in the region [−1,+1]: the signal value is set to +1 if it exceeds +1, and to −1 if it drops below −1. We performed mirrored baker’s coding on this truncated Gaussian source, and the results are shown in Fig. 9. Note that in the figure, the OPTA bound is calculated for the true Gaussian source (the only analytically tractable case). Since the simulated coding schemes use a truncated Gaussian source, we see a small discrepancy, and the baker’s code actually appears to slightly outperform the OPTA in the low SNR region. We also plot the series of Shannon-Kotel’nikov spirals with parameters optimized for different channel SNRs (Fig. 9 in [12]). Note that the MSE performance of the mirrored baker’s code and the Shannon-Kotel’nikov spirals in Fig. 9 is obtained by the ML method, which can be improved by the MMSE method according to [14] and our previous discussion.

Fig. 9
figure 9

Approximated OPTA, SDR of the mirrored baker’s map code, and SDR of Shannon-Kotel’nikov spirals with different parameters

The advantage of the parameterized Shannon-Kotel’nikov spiral approach is that by optimizing the parameters with respect to the source distribution and the channel condition, the performance of the code can be brought within some 5 dB of the OPTA [12]. The cost, however, is that one must know the exact source distribution and accurate SNR information. As shown in Fig. 9, each curve represents a Shannon-Kotel’nikov spiral with its parameter optimized for one specific channel SNR. Every time the channel condition changes (i.e., a different SNR), the parameter(s) must be adapted, or the code will suffer a quick performance deterioration due to channel mismatch.

The proposed baker’s analog codes require knowledge of neither the source distribution nor the channel SNR to perform encoding and ML decoding. Instead of designing a sequence of codes, each optimized for one channel SNR as in [12], our approach uses a single code over a wide SNR range. Figure 9 shows that at high channel SNR, our proposal’s SDR (in dB) has the same slope, or diversity, as the optimized Shannon-Kotel’nikov spirals. The improved baker’s analog codes universally outperform the Shannon-Kotel’nikov spirals optimized for low channel SNR and hold an obvious advantage over all Shannon-Kotel’nikov spirals in the low SNR range. Additionally, the ML decoding algorithm of our proposed chaotic analog codes has a simple closed-form expression, which is absent for the spiral codes.

Last, we compare the proposed analog encoding system with conventional digital encoding systems for analog signal transmission. In our experiments, the source signals are uniformly distributed over [−1,+1]. For the digital systems, uniform quantization and turbo codes with the recursive systematic convolutional code \(\left (1,\frac {1+D+D^{2}+D^{3}}{1+D+D^{3}}\right)\) are used. The BCJR (log-MAP) algorithm with eight decoding iterations is used to decode the turbo code. Uniform puncturing is applied to adjust the code rate where applicable. Due to the different significance of the bits obtained by quantization, both equal error protection (EEP) and unequal error protection (UEP) are considered. The details of the tested systems are as follows:

  1. Analog: a (6,2) analog code is used, obtained from the mirrored baker’s code with N=2 by puncturing the systematic signals (y_0, x_0) for the second branch. Assuming the codewords are transmitted in in-phase and quadrature form (which can be regarded as QAM modulation), the system has a bandwidth expansion of 3/2.

  2. Digital-EEP: 8-bit quantization, a (3072,2048,2/3) turbo code, and 256-QAM are used. The system bandwidth expansion is 3/2.

  3. Digital-UEP1: 8-bit quantization is performed. The four least significant bits (LSB) are left uncoded; the four most significant bits (MSB) are encoded by a (4096,2048,1/2) turbo code. Both the coded and uncoded bits are 256-QAM modulated. The system bandwidth expansion is 3/2.

  4. Digital-UEP2: 8-bit quantization is performed. The two LSB are uncoded; the six MSB are encoded by a (3410,2046,3/5) turbo code. All bits are 256-QAM modulated. The system bandwidth expansion is 3/2.

  5. Digital-UEP3: 8-bit quantization is performed. The four LSB are uncoded; the four MSB are encoded by a (2560,2048,4/5) turbo code. The coded and uncoded bits go through 64-QAM modulation. The system bandwidth expansion is 3/2.
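
As a quick sanity check on the expansion ratios quoted above, the following sketch recomputes the bandwidth expansion (channel symbols per source sample) of each digital configuration from its quantization depth, code rate, and constellation size; the tuples mirror the list entries, and nothing beyond them is assumed.

```python
def bandwidth_expansion(coded_bits, rate, uncoded_bits, bits_per_symbol):
    """Channel symbols per 8-bit source sample: coded bits are expanded
    by 1/rate, uncoded bits pass through, and the total is mapped onto
    a 2**bits_per_symbol-point QAM constellation."""
    return (coded_bits / rate + uncoded_bits) / bits_per_symbol

configs = {
    "Digital-EEP":  (8, 2 / 3, 0, 8),  # all 8 bits coded, 256-QAM
    "Digital-UEP1": (4, 1 / 2, 4, 8),  # 4 MSB coded, 4 LSB uncoded, 256-QAM
    "Digital-UEP2": (6, 3 / 5, 2, 8),  # 6 MSB coded, 2 LSB uncoded, 256-QAM
    "Digital-UEP3": (4, 4 / 5, 4, 6),  # 4 MSB coded, 4 LSB uncoded, 64-QAM
}
for name, cfg in configs.items():
    print(name, bandwidth_expansion(*cfg))  # each configuration gives 1.5
```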

The performance of the proposed analog system and the four digital systems is plotted in Fig. 10. The proposed analog code holds an obvious advantage over the digital competitors across a wide range of low and medium SNR values. The digital systems enter their waterfall region at rather high SNR and then exhibit an error floor, which results from the quantization noise. In fact, due to the bandwidth limitation, quantization noise always exists in the digital transmission schemes and eventually forms an error floor that limits the overall system performance, even as the SNR grows to infinity. In Fig. 10, the digital coding schemes outperform the analog scheme only in a narrow E_u/N_0 range, because the digital error correction codes’ performance improves drastically within a very narrow SNR range (the so-called waterfall region). The digital codes’ resilience to noise, although powerful, is ultimately capped by the quantization noise. Comparatively, the analog coding schemes evolve gracefully with SNR, and their distortion can be made arbitrarily small if the channel is sufficiently good.
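
The height of this error floor can be estimated directly. For a source uniform on [−1,+1], the signal power is 1/3, and 8-bit uniform quantization has step Δ=2/2^8 and granular noise power Δ²/12, so the SDR can never exceed 2^16 ≈ 48.2 dB regardless of channel quality. A minimal sketch of this back-of-the-envelope bound:

```python
import math

def quantization_sdr_floor_db(bits):
    """SDR ceiling of uniform quantization for a source uniform on [-1, +1]:
    signal power 1/3 divided by granular noise power step**2 / 12."""
    step = 2.0 / 2 ** bits
    return 10.0 * math.log10((1.0 / 3.0) / (step ** 2 / 12.0))

print(quantization_sdr_floor_db(8))  # ~48.2 dB, i.e., SDR = 2**16
```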

Fig. 10

Analog signal transmission system vs digital system

6 Conclusions

This paper introduces a family of pure analog chaotic dynamic encoding schemes based on the baker’s map function. We first discuss the coding scheme using the original baker’s map function, including its encoding and decoding procedures. Mean square error analysis indicates that the intrinsically unbalanced protection of its inputs results in unsatisfying performance. Based on that, two improved encoding schemes are proposed: the mirrored baker’s and single-input baker’s systems. These two schemes provide sufficient protection for all encoded analog sources. The various decoding methods for the original baker’s coding system are extended to the modified systems. Compared to the classical tent map analog code, the improved baker’s map encoding schemes achieve a better balance between the anomalous and weak distortions and perform advantageously over a wide practical SNR range. Moreover, our improved encoding schemes exhibit comparable or even better performance than the classical analog joint source-channel coding scheme, especially in the low SNR range, while maintaining much lower decoding complexity. We also compare the analog system with conventional digital systems that use turbo codes to transmit analog source signals. The digital systems suffer from granularity noise due to quantization, large decoding latency, and the threshold effect. Comparatively, the analog coding scheme degrades gracefully and outperforms the digital ones over a wide SNR region.

7 Appendix

In this appendix, we provide a detailed derivation of the closed-form solution of the MMSE decoder for the original baker’s map code in (19).

Following the notation in Section 2 and starting from Eq. (15), the MMSE estimate of \(x_{0}\) is given as

$$\begin{array}{*{20}l} \hat{x}_{0}^{\text{MMSE}}&=\mathsf{E}\{x_{0}|r_{x}, r_{y}\}=\int_{-1}^{+1} x_{0}f(x_{0}|r_{x}, r_{y})\,\mathrm{d}x_{0} \end{array} $$
(51)
$$\begin{array}{*{20}l} &=\sum_{s}\int_{C_{s}}x_{0}f(x_{0}|r_{x}, r_{y})\,\mathrm{d}x_{0} \end{array} $$
(52)
$$\begin{array}{*{20}l} &=\sum_{s}\int_{C_{s}}x_{0}\frac{f(r_{x}, r_{y}|x_{0})f(x_{0})}{f(r_{x}, r_{y})}\,\mathrm{d}x_{0} \end{array} $$
(53)
$$\begin{array}{*{20}l} &=\frac{1}{2f(r_{x}, r_{y})}\sum_{s}\int_{C_{s}}x_{0}\int_{-1}^{+1}f(r_{x},r_{y}|x_{0},y_{0}) f(y_{0}|x_{0})\,\mathrm{d}y_{0}\,\mathrm{d}x_{0} \end{array} $$
(54)
$$\begin{array}{*{20}l} &=\frac{1}{4f(r_{x}, r_{y})}\sum_{s}\int_{C_{s}}x_{0}\int_{-1}^{+1}f(r_{x},r_{y}|x_{0},y_{0})\,\mathrm{d}y_{0}\,\mathrm{d}x_{0} \end{array} $$
(55)
$$\begin{aligned} &=\frac{1}{4f(r_{x}, r_{y})}\sum_{s}\int_{C_{s}}x_{0}\int_{-1}^{+1}\left[\frac{1}{\sqrt{2\pi}\sigma}\right]^{2N}\\&\quad\exp\left\{-\frac{1}{2\sigma^{2}}\sum_{k=0}^{N-1}\left\{\left[r_{x,k}-(a_{k,s_{n}}x_{0}+b_{k,s_{n}})\right]^{2}+\left[r_{y,k}-(c_{k,s_{n}}y_{0}+d_{k,s_{n}})\right]^{2}\right\}\right\}\mathrm{d}y_{0}\,\mathrm{d}x_{0}. \end{aligned} $$
(56)

In the above equations, we use the fact that \(x_{0}\) and \(y_{0}\) are independent and uniformly distributed over [−1,+1], so that \(f(x_{0})=f(y_{0}|x_{0})=1/2\), which produces the factor 1/2 in (54) and 1/4 in (55). To proceed with the derivation, we introduce some intermediate parameters as follows:

$$\begin{array}{*{20}l} A_{1}=\|a_{s}\|^{2}; \quad B_{1}=a_{s}^{T}(b_{s}-r_{x}); \quad C_{1}=\|b_{s}-r_{x}\|^{2}; \\ A_{2}=\|c_{s}\|^{2}; \quad B_{2}=c_{s}^{T}(d_{s}-r_{y}); \quad C_{2}=\|d_{s}-r_{y}\|^{2}. \end{array} $$
(57)

Thus, the calculation in (51) can be further written as

$$\begin{aligned} \hat{x}_{0}^{\text{MMSE}}&=\frac{\left(2\pi\sigma^{2}\right)^{-N}}{4f(r_{x}, r_{y})}\sum_{s}\left\{\underbrace{\int_{C_{s}}x_{0}\exp\left\{-\frac{1}{2\sigma^{2}}\left[A_{1}x_{0}^{2}+2B_{1}x_{0}+C_{1}\right]\right\}\mathrm{d}x_{0}}_{I_{1}(s)}\right.\\ &\left.\qquad\cdot\underbrace{\int_{-1}^{+1}\exp\left\{-\frac{1}{2\sigma^{2}}\left[A_{2}y_{0}^{2}+2B_{2}y_{0}+C_{2}\right]\right\}\mathrm{d}y_{0}}_{I_{2}(s)}\right\}. \end{aligned} $$
(58)

Similarly, the MMSE estimator \(\hat {y}_{0}^{\text {MMSE}}\) can be obtained starting from (15) and is determined as

$$\begin{array}{*{20}l} \hat{y}_{0}^{\text{MMSE}}&=\mathsf{E}\{y_{0}|r_{x}, r_{y}\}=\int_{-1}^{+1} y_{0}f(y_{0}|r_{x}, r_{y})\,\mathrm{d}y_{0} \end{array} $$
(59)
$$\begin{array}{*{20}l} &=\int_{-1}^{+1} y_{0}\int_{-1}^{+1} \frac{f(r_{x},r_{y}|x_{0},y_{0})f(y_{0})f(x_{0}|y_{0})}{f(r_{x}, r_{y})}\,\mathrm{d}x_{0}\,\mathrm{d}y_{0} \end{array} $$
(60)
$$\begin{array}{*{20}l} &=\frac{1}{4f(r_{x},r_{y})}\int_{-1}^{+1} y_{0}\sum_{n=0}^{2^{N-1}-1}\int_{C_{s_{n}}}f(r_{x},r_{y}|x_{0},y_{0})\,\mathrm{d}x_{0}\,\mathrm{d}y_{0} \end{array} $$
(61)
$$\begin{aligned} &=\frac{\left(2\pi\sigma^{2}\right)^{-N}}{4f(r_{x}, r_{y})}\sum_{s}\left\{\underbrace{\int_{-1}^{+1}y_{0}\exp\left\{-\frac{1}{2\sigma^{2}}\left[A_{2}y_{0}^{2}+2B_{2}y_{0}+C_{2}\right]\right\}\mathrm{d}y_{0}}_{I_{3}(s)}\right.\\ &\left.\qquad\cdot\underbrace{\int_{C_{s}}\exp\left\{-\frac{1}{2\sigma^{2}}\left[A_{1}x_{0}^{2}+2B_{1}x_{0}+C_{1}\right]\right\}\mathrm{d}x_{0}}_{I_{4}(s)}\right\}. \end{aligned} $$
(62)

Observing Eqs. (58) and (62), the term \(f(r_{x}, r_{y})\) remains to be determined; it can be calculated as

$$\begin{array}{*{20}l} f(r_{x}, r_{y})&=\int_{-1}^{+1}\int_{-1}^{+1}f(r_{x}, r_{y}|x_{0},y_{0})f(x_{0})f(y_{0})\,\mathrm{d}x_{0}\,\mathrm{d}y_{0} \end{array} $$
(63)
$$\begin{array}{*{20}l} &=\frac{\left(2\pi\sigma^{2}\right)^{-N}}{4}\sum_{s}I_{2}(s)\,I_{4}(s), \end{array} $$
(64)

where \(I_{2}(s)\) and \(I_{4}(s)\) are defined in (58) and (62), respectively. We further introduce the following notation:

$$\begin{aligned} E_{1}&=\exp\left\{\frac{B_{1}^{2}-A_{1}C_{1}}{2\sigma^{2}A_{1}}\right\}; \quad D_{1}=Q\left(\frac{\sqrt{A_{1}}}{\sigma}e_{l,s}+\frac{B_{1}}{\sigma\sqrt{A_{1}}}\right)-Q\left(\frac{\sqrt{A_{1}}}{\sigma}e_{u,s}+\frac{B_{1}}{\sigma\sqrt{A_{1}}}\right);\\ E_{2}&=\exp\left\{\frac{B_{2}^{2}-A_{2}C_{2}}{2\sigma^{2}A_{2}}\right\}; \quad D_{2}=Q\left(-\frac{\sqrt{A_{2}}}{\sigma}+\frac{B_{2}}{\sigma\sqrt{A_{2}}}\right)-Q\left(\frac{\sqrt{A_{2}}}{\sigma}+\frac{B_{2}}{\sigma\sqrt{A_{2}}}\right);\\ J_{1}&=\exp\left\{-\frac{A_{1}}{2\sigma^{2}}\left(e_{l,s}+\frac{B_{1}}{A_{1}}\right)^{2}\right\}-\exp\left\{-\frac{A_{1}}{2\sigma^{2}}\left(e_{u,s}+\frac{B_{1}}{A_{1}}\right)^{2}\right\};\\ J_{2}&=\exp\left\{-\frac{A_{2}}{2\sigma^{2}}\left(1+\frac{B_{2}}{A_{2}}\right)^{2}\right\}-\exp\left\{-\frac{A_{2}}{2\sigma^{2}}\left(-1+\frac{B_{2}}{A_{2}}\right)^{2}\right\}, \end{aligned} $$
(65)

where Q(·) is the well-known Gaussian Q function, defined as

$$\begin{array}{*{20}l} Q(x)\triangleq\int^{\infty}_{x}\frac{1}{\sqrt{2\pi}}e^{-\frac{t^{2}}{2}}\,\mathrm{d}t. \end{array} $$
(66)

After some manipulations, the integrals \(I_{1}(s)\), \(I_{2}(s)\), \(I_{3}(s)\), and \(I_{4}(s)\) defined previously can be expressed via the notation in (65) as

$$\begin{aligned} I_{1}(s)&=E_{1}\left(\frac{\sigma^{2}}{A_{1}}J_{1}-\frac{\sqrt{2\pi}B_{1}\sigma}{A_{1}^{3/2}}D_{1}\right); & I_{2}(s)&=\sqrt{\frac{2\pi}{A_{2}}}\,\sigma E_{2}D_{2};\\ I_{3}(s)&=E_{2}\left(\frac{\sigma^{2}}{A_{2}}J_{2}-\frac{\sqrt{2\pi}B_{2}\sigma}{A_{2}^{3/2}}D_{2}\right); & I_{4}(s)&=\sqrt{\frac{2\pi}{A_{1}}}\,\sigma E_{1}D_{1}. \end{aligned} $$
(67)
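
To make the manipulation explicit, we sketch the completion-of-the-square step that yields \(I_{2}(s)\); the remaining three integrals follow in the same way, with the \(x_{0}\) integrals using the cell edges \(e_{l,s}\), \(e_{u,s}\) in place of \(\mp 1\):

$$\begin{aligned} I_{2}(s)&=\int_{-1}^{+1}\exp\left\{-\frac{1}{2\sigma^{2}}\left[A_{2}y_{0}^{2}+2B_{2}y_{0}+C_{2}\right]\right\}\mathrm{d}y_{0}\\ &=\underbrace{\exp\left\{\frac{B_{2}^{2}-A_{2}C_{2}}{2\sigma^{2}A_{2}}\right\}}_{E_{2}}\int_{-1}^{+1}\exp\left\{-\frac{A_{2}}{2\sigma^{2}}\left(y_{0}+\frac{B_{2}}{A_{2}}\right)^{2}\right\}\mathrm{d}y_{0}\\ &=E_{2}\,\frac{\sigma}{\sqrt{A_{2}}}\int_{t_{-}}^{t_{+}}e^{-t^{2}/2}\,\mathrm{d}t=\sqrt{\frac{2\pi}{A_{2}}}\,\sigma E_{2}\underbrace{\left[Q(t_{-})-Q(t_{+})\right]}_{D_{2}}, \end{aligned} $$

where the substitution \(t=\sqrt{A_{2}}\,(y_{0}+B_{2}/A_{2})/\sigma\) maps the endpoints \(\mp 1\) to \(t_{\mp}=\mp\sqrt{A_{2}}/\sigma+B_{2}/(\sigma\sqrt{A_{2}})\).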

Thus, by substituting Eqs. (57), (65), and (67) into (58) and (62), we finally obtain the MMSE estimators of \(x_{0}\) and \(y_{0}\) as follows:

$$\begin{array}{*{20}l} \hat{x}_{0}^{\text{MMSE}}&=\frac{\sum_{s}\sqrt{\frac{2\pi}{A_{2}}}E_{1}E_{2}D_{2}\left(\frac{\sigma}{A_{1}}J_{1}-\frac{\sqrt{2\pi}B_{1}}{A_{1}^{3/2}}D_{1}\right)}{\sum_{s}\frac{2\pi}{\sqrt{A_{1}A_{2}}}E_{1}E_{2}D_{1}D_{2}}, \end{array} $$
(68)
$$\begin{array}{*{20}l} \hat{y}_{0}^{\text{MMSE}}&=\frac{\sum_{s}\sqrt{\frac{2\pi}{A_{1}}}E_{1}E_{2}D_{1}\left(\frac{\sigma}{A_{2}}J_{2}-\frac{\sqrt{2\pi}B_{2}}{A_{2}^{3/2}}D_{2}\right)}{\sum_{s}\frac{2\pi}{\sqrt{A_{1}A_{2}}}E_{1}E_{2}D_{1}D_{2}}. \end{array} $$
(69)

This completes the proof.
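
As an implementation aid, the closed form (68)–(69) can be evaluated directly. The following Python sketch assumes the per-sequence quantities of (57) and the cell edges \(e_{l,s}\), \(e_{u,s}\) have already been computed from the encoder parameters of Section 2 (they enter as inputs here); it then applies (65) and (68)–(69) verbatim.

```python
import numpy as np
from scipy.stats import norm  # norm.sf(x) is the Gaussian Q function Q(x)

def mmse_estimates(cells, sigma):
    """Closed-form MMSE decoder (68)-(69) for the original baker's map code.
    `cells` is a list of dicts, one per symbol sequence s, holding the
    quantities A1, B1, C1, A2, B2, C2 of (57) and the cell edges el, eu;
    these are assumed precomputed from the encoder parameters."""
    num_x = num_y = den = 0.0
    for c in cells:
        A1, B1, C1 = c["A1"], c["B1"], c["C1"]
        A2, B2, C2 = c["A2"], c["B2"], c["C2"]
        el, eu = c["el"], c["eu"]
        # E, D, and J terms of (65); the E exponents are <= 0 by Cauchy-Schwarz
        E1 = np.exp((B1 ** 2 - A1 * C1) / (2 * sigma ** 2 * A1))
        E2 = np.exp((B2 ** 2 - A2 * C2) / (2 * sigma ** 2 * A2))
        D1 = (norm.sf(np.sqrt(A1) / sigma * el + B1 / (sigma * np.sqrt(A1)))
              - norm.sf(np.sqrt(A1) / sigma * eu + B1 / (sigma * np.sqrt(A1))))
        D2 = (norm.sf(-np.sqrt(A2) / sigma + B2 / (sigma * np.sqrt(A2)))
              - norm.sf(np.sqrt(A2) / sigma + B2 / (sigma * np.sqrt(A2))))
        J1 = (np.exp(-A1 * (el + B1 / A1) ** 2 / (2 * sigma ** 2))
              - np.exp(-A1 * (eu + B1 / A1) ** 2 / (2 * sigma ** 2)))
        J2 = (np.exp(-A2 * (1 + B2 / A2) ** 2 / (2 * sigma ** 2))
              - np.exp(-A2 * (-1 + B2 / A2) ** 2 / (2 * sigma ** 2)))
        # summands of the numerators of (68), (69) and the common denominator
        num_x += (np.sqrt(2 * np.pi / A2) * E1 * E2 * D2
                  * (sigma / A1 * J1 - np.sqrt(2 * np.pi) * B1 / A1 ** 1.5 * D1))
        num_y += (np.sqrt(2 * np.pi / A1) * E1 * E2 * D1
                  * (sigma / A2 * J2 - np.sqrt(2 * np.pi) * B2 / A2 ** 1.5 * D2))
        den += 2 * np.pi / np.sqrt(A1 * A2) * E1 * E2 * D1 * D2
    return num_x / den, num_y / den
```

Since the factor \(E_{1}E_{2}\) multiplies every term of both numerators and the denominator, an implementation can cancel it (or work with its logarithm) to avoid numerical underflow at small σ.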

8 Endnote

1 Here we use the term partition loosely, since every two adjacent cells overlap at their common endpoint. This does not harm the decoding procedure.

References

  1. CE Shannon, Communication in the presence of noise. Proceedings of the IRE. 37(1), 10–21 (1949).

  2. VA Kotel’nikov, The Theory of Optimum Noise Immunity (McGraw-Hill, New York, NY, USA, 1959).

  3. JM Wozencraft, Principles of Communication Engineering (John Wiley & Sons, Hoboken, New Jersey, USA, 1965).

  4. TJ Goblick, Theoretical limitations on the transmission of data from analog sources. IEEE Trans. Inform. Theory. 11, 558–566 (1965).

  5. J Ziv, The behavior of analog communication systems. IEEE Trans. Inform. Theory. 16, 587–594 (1970).

  6. KH Lee, DP Petersen, Optimal linear coding for vector channels. IEEE Trans. Commun. 24, 1283–1290 (1976).

  7. Y Liu, J Li, K Xie, in 46th Annual Conference on Information Sciences and Systems (CISS). Analysis of linear channel codes with continuous code space (Princeton, USA, 2012).

  8. A Fuldseth, Robust subband video compression for noisy channels with multilevel signaling (Dissertation, Norwegian University of Science and Technology, 1997).

  9. S-Y Chung, On the construction of some capacity-approaching coding schemes (Dissertation, Massachusetts Institute of Technology, 2000).

  10. V Vaishampayan, SIR Costa, Curves on a sphere, shift-map dynamics, and error control for continuous alphabet sources. IEEE Trans. Inform. Theory. 49, 1658–1672 (2003).

  11. X Cai, JW Modestino, in 40th Annual Conference on Information Sciences and Systems (CISS). Bandwidth expansion Shannon mapping for analog error-control coding (Princeton, USA, 2006).

  12. F Hekland, PA Floor, TA Ramstad, Shannon-Kotel’nikov mappings in joint source-channel coding. IEEE Trans. Commun. 57, 94–105 (2009).

  13. PA Floor, TA Ramstad, Shannon-Kotel’nikov mappings for analog point-to-point communications. http://arxiv.org/abs/0904.1538.

  14. Y Hu, J Garcia-Frias, M Lamarca, Analog joint source-channel coding using non-linear curves and MMSE decoding. IEEE Trans. Commun. 59, 3016–3026 (2011).

  15. G Brante, RD Souza, J Garcia-Frias, Spatial diversity using analog joint source channel coding in wireless channels. IEEE Trans. Commun. 61, 301–311 (2013).

  16. HC Papadopoulos, GW Wornell, Maximum likelihood estimation of a class of chaotic signals. IEEE Trans. Inform. Theory. 41, 312–317 (1995).

  17. B Chen, GW Wornell, Analog error-correction codes based on chaotic dynamical systems. IEEE Trans. Commun. 46, 881–890 (1998).

  18. I Hen, N Merhav, On the threshold effect in the estimation of chaotic sequences. IEEE Trans. Inform. Theory. 50, 2894–2904 (2004).

  19. SM Kay, Asymptotic maximum likelihood estimator performance for chaotic signals in noise. IEEE Trans. Signal Process. 43, 1009–1012 (1995).

  20. I Rosenhouse, AJ Weiss, Combined analog and digital error-correcting codes for analog information sources. IEEE Trans. Commun. 55, 2073–2083 (2007).

  21. SM Kay, Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory (Prentice Hall, Upper Saddle River, New Jersey, USA, 1993).

  22. S Haykin, Adaptive Filter Theory, 4th edn. (Prentice Hall, Upper Saddle River, New Jersey, USA, 2002).

  23. HV Poor, An Introduction to Signal Detection and Estimation, 2nd edn. (Springer, New York, NY, USA, 1998).

  24. TM Cover, JA Thomas, Elements of Information Theory (John Wiley & Sons, Hoboken, New Jersey, USA, 1991).


Acknowledgements

This work is supported by the National Science Foundation under Grant Nos. 0928092, 1133027, and 1343372.

Author information


Corresponding author

Correspondence to Yang Liu.


Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Liu, Y., Li, J., Lu, X. et al. A family of chaotic pure analog coding schemes based on baker’s map function. EURASIP J. Adv. Signal Process. 2015, 58 (2015). https://doi.org/10.1186/s13634-015-0243-9
