
# Distributed transform coding via source-splitting

*EURASIP Journal on Advances in Signal Processing*
**volume 2012**, Article number: 78 (2012)

## Abstract

Transform coding (TC) is one of the best known practical methods for quantizing high-dimensional vectors. In this article, a practical approach to distributed TC of jointly Gaussian vectors is presented. This approach, referred to as *source-split distributed transform coding* (SP-DTC), can be used to easily implement two-terminal transform codes for any given rate-pair. The main idea is to apply *source-splitting* using orthogonal transforms, so that only Wyner-Ziv (WZ) quantizers are required for compression of transform coefficients. This approach, however, requires optimizing the bit allocation among dependent sets of WZ quantizers. In order to solve this problem, a low-complexity tree-search algorithm based on analytical models for transform-coefficient quantization is developed. A rate-distortion (RD) analysis of SP-DTCs for jointly Gaussian sources is presented, which indicates that these codes can significantly outperform the practical alternative of independent TC of each source whenever there is a strong correlation between the sources. For practical implementation of SP-DTCs, the idea of using *conditional entropy constrained* (CEC) quantizers followed by Slepian-Wolf coding is explored. Experimental results obtained with SP-DTC designs based on both CEC scalar quantizers and CEC trellis-coded quantizers demonstrate that actual implementations of SP-DTCs can achieve RD performance close to the analytically predicted limits.

## 1 Introduction

Many new applications such as multi-camera imaging systems rely on networks of distributed wireless sensors to acquire signals in the form of high-dimensional vectors [1]. In such situations, an encoder in each sensor quantizes a vector of observation variables (without exchanging any information with other sensors) and transmits its output to a central processor which jointly decodes all the sources. The strong statistical dependencies among the signals observed by different sensors can be exploited in the decoder to reduce the transmission bit-rate of each sensor. This problem, in general, is referred to as distributed (or multiterminal) vector quantization (VQ). The design of a distributed VQ for a large number of source variables is a difficult task. A practically simpler, yet very effective, approach to quantizing a large number of correlated variables with a bank of single-variable quantizers is transform coding (TC) [2–4]. Clearly, TC can be used for distributed VQ when separately observed vectors have both inter-vector and intra-vector statistical dependencies, a situation typical in applications such as camera networks. Most of the previous work [5–7] studies Wyner-Ziv (WZ) transform coding (WZ-TC), which is a special case of the more general multiterminal transform coding (MT-TC) [8]. In WZ-TC, a single source is quantized given that the decoder has access to side information about the source.

Information-theoretic studies of distributed transform coding (DTC) can be found in [5, 8]. In [8], the optimal linear transform for Gaussian WZ-TC under the mean square-error (MSE) criterion is shown to be the conditional Karhunen-Loève transform (CKLT), which is a natural extension of the result in [2]. This result is based on the assumption that each transform coefficient is compressed by a rate-distortion (RD) optimal WZ quantizer and hence describes the optimal performance theoretically attainable (OPTA) in Gaussian WZ-TC. However, the optimal solution to the more general MT-TC problem remains unsolved, even for the Gaussian case. In [8], an iterative descent algorithm for determining the OPTA of the Gaussian MT-TC problem is given. It is shown that, while this algorithm [referred to as the distributed KLT (DKLT)] always converges to a solution, the final solution is not necessarily the global optimum. In any case, the practical implementation of the distributed quantizers implied by the DKLT remains an open problem. In [5], WZ-TC based on high-rate scalar quantization and ideal Slepian-Wolf (SW) coding [9] is studied. In particular, it is shown that, for jointly Gaussian vectors, CKLT followed by uniform scalar quantization is asymptotically optimal, a natural extension of the result in [10] for entropy-coded quantization at high rate. More importantly, the bit-allocations and quantizer step-sizes found in [5] can be used for practical design of WZ transform codes as long as high-rate approximations hold. However, we note that, even when scalar quantizers are used, achieving good performance with this approach still requires the use of a subsequent block-based SW coding method (e.g., Turbo codes or LDPC codes). Other previous studies on WZ-TC can be found in [6, 7]. However, they rely on WZ scalar quantization of transform coefficients. Such methods are therefore most suitable for applications requiring low coding delay, as their performance is strictly inferior to that of block-based quantization.

In contrast to WZ-TC, we consider in this article the practical design of two-terminal transform codes for jointly Gaussian vectors in which arbitrary transmission rates can be assigned to each terminal. Our approach is based on the idea of *source-splitting* [11, 12] to convert the two-terminal TC problem into two WZ-TC problems. Since transform codes quantize linear projections, we perform source splitting in terms of optimal linear approximations, i.e., a linear approximation of one source is provided as decoder side-information for the other source. The proposed source-split DTC (SP-DTC) approach only requires the design of two sets of WZ quantizers sequentially, and avoids having to iteratively optimize two sets of WZ quantizers against each other as in [8]. However, this approach requires the solution of a bit allocation problem involving dependent WZ quantizers. To solve this problem for Gaussian sources, we propose an efficient tree-search algorithm, which can be used to find a good SP-DTC under different models for quantization of transform coefficients. When used with the RD-optimal WZ quantization model [8], this algorithm can potentially locate the optimal SP-DTC for Gaussian sources. In practice, with constraints imposed on tree-search complexity, the algorithm yields a near-optimal solution. We refer to the optimal solution to the Gaussian problem as the source-split DKLT (SP-DKLT). Using this algorithm, we numerically compute the rate-region achievable with a SP-DKLT code for two examples of jointly Gaussian vector sources. This study shows that, when there is sufficient inter-source correlation, optimal SP-DKLT codes can achieve substantially better performance than independent transform codes for the two sources. However, we find that the rates achievable with SP-DKLT codes are strictly inside the optimal achievable rate-region predicted by the DKLT algorithm of [8].
In order to approach the performance predicted by the optimal SP-DKLT in practice, block WZ quantization of transform coefficients is required. For implementation of block WZ quantizers, we consider the use of trellis-coded quantization (TCQ) followed by SW coding. This two-stage approach is known to achieve the RD function of Gaussian WZ coding [13]. In order to practically implement this approach, we introduce the idea of designing *conditional entropy constrained* TCQ (CEC-TCQ) based on analytically found bit-allocations. We present experimental results to demonstrate that practical implementations of SP-DTCs for Gaussian sources can closely approach the performance limits indicated by the optimal SP-DKLT. On the other hand, when the SW-coded high-rate scalar quantization model [5] is assumed for encoding transform coefficients, the tree-search algorithm proposed in this article can also be used to find asymptotically good SP-DTCs for scalar-quantization-based implementations. These codes can be readily implemented using CEC scalar quantizers (CEC-SQ), as demonstrated by experimental studies presented in this article. In our experimental study, we also investigate the design of good SP-DTCs based on the widely used discrete cosine transform (DCT).

This article is organized as follows. Section 2 presents a review of WZ-TC of Gaussian vectors and motivates the particular approach introduced in this article. Section 3 presents the idea of SP-DTC and develops the tree-search algorithm for finding the optimal transforms and the bit-allocation for SP-DKLT codes. Section 4 computes the achievable rate region of SP-DKLT codes for two example Gaussian source models, and presents experimental results obtained by designing SP-DTCs based on both KLT and DCT. Finally, some concluding remarks are given in Section 5.

*Notation*: As usual, bold letters denote vectors and matrices, upper case denotes random variables, and lower case denotes realizations. **Σ**_{X} denotes the auto-covariance matrix of the vector **X**. **Σ**_{XY} and **Σ**_{X|Y}, respectively, denote the joint covariance matrix of (**X**, **Y**) and the conditional covariance matrix of **X** given **Y**. The eigenvalues *λ*_{1}, . . ., *λ*_{M} of an *M* × *M* covariance matrix are always indexed such that *λ*_{1} ≥ *λ*_{2} ≥ . . . ≥ *λ*_{M}, and the corresponding KLT matrix has the structure $\mathbf{T}=\left({\mathbf{u}}_{1},\dots ,{\mathbf{u}}_{M}\right)$, where **u**_{m} is the eigenvector associated with *λ*_{m}.

## 2 WZ-TC of Gaussian vectors

Consider encoding of a Gaussian vector $\mathbf{X}\in {\mathbb{R}}^{{M}_{1}}$ using *B* bits per vector, given that the decoder has access to a jointly Gaussian vector $\mathbf{Y}\in {\mathbb{R}}^{{M}_{2}}$. Assume that both vectors have zero mean, and let the auto-covariance matrix of **X** be **Σ**_{X} = *E*{**XX**^{T}}. In WZ-TC, a linear transform is first applied to **X** and each component of the *transform coefficient vector* **U** = **T**^{T}**X** is separately compressed by a WZ quantizer, considering **Y** as decoder side-information, where **T** is an *M*_{1} × *M*_{1} unitary matrix. Let **Û** be the quantized value of **U**. The decoder then estimates the source vector based on **Û** and **Y**. We wish to find the optimal transform and the allocation of *B* bits among the *M*_{1} transform coefficients which minimize the quantization MSE $E\left\{{\Vert \mathbf{X}-\widehat{\mathbf{X}}\Vert}^{2}\right\}$, where $\widehat{\mathbf{X}}=E\left\{\mathbf{X}|\widehat{\mathbf{U}},\mathbf{Y}\right\}$ is the optimal estimate (at the decoder) of the source vector. The solution of this problem requires an analytical model for coefficient quantization. To this end, [8] considers the RD optimal WZ quantization (RD-WZQ) model, whose solution is appropriate for practical block quantization techniques such as TCQ; [5], on the other hand, considers the SW-coded high-rate scalar quantization (SWC-HRSQ) model.

Let the eigenvalues of the conditional covariance matrix **Σ**_{X|Y} = *E*{**XX**^{T}|**Y**} be $\mathit{\lambda}={\left({\lambda}_{1},\dots ,{\lambda}_{{M}_{1}}\right)}^{T}$. The CKLT of **X** given **Y** is defined as ***θ*** = **T**^{T}**X**, where **T** is an *M*_{1} × *M*_{1} unitary matrix such that **Σ**_{X|Y} = **TΛT**^{T}, where $\mathbf{\Lambda}=\mathsf{\text{diag}}\left({\lambda}_{1},{\lambda}_{2},\dots ,{\lambda}_{{M}_{1}}\right)$ [8]. It is easy to verify that *E*{***θθ***^{T}|**Y**} = **Λ**, i.e., the components of ***θ*** are conditionally uncorrelated, given **Y**. For convenience, define

$${d}^{*}\left(\mathit{\lambda},B,N\right)={\left(\prod _{m=1}^{N}{\lambda}_{m}\right)}^{1/N}{2}^{-2B/N},$$

where *N ≤ M*_{1} is a positive integer.
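As a quick numerical illustration of the CKLT, the sketch below diagonalizes a conditional covariance matrix with NumPy. The linear-Gaussian model **X** = **AY** + **W** and all matrices are illustrative assumptions, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian model (all matrices are illustrative assumptions):
# X = A @ Y + W, with Y ~ N(0, I) and W ~ N(0, S_w) independent of Y,
# so that Sigma_{X|Y} = S_w.
M1, M2 = 4, 3
A = rng.standard_normal((M1, M2))
S_w = np.diag([2.0, 1.0, 0.5, 0.25])
S_y = np.eye(M2)

S_x = A @ S_y @ A.T + S_w                          # Sigma_X
S_xy = A @ S_y                                     # Sigma_XY
S_x_y = S_x - S_xy @ np.linalg.inv(S_y) @ S_xy.T   # Sigma_{X|Y} (= S_w here)

# CKLT: eigendecomposition of Sigma_{X|Y} with eigenvalues sorted descending.
lam, T = np.linalg.eigh(S_x_y)
order = np.argsort(lam)[::-1]
lam, T = lam[order], T[:, order]

# The coefficients theta = T^T X are conditionally uncorrelated given Y:
# T^T Sigma_{X|Y} T = diag(lam).
assert np.allclose(T.T @ S_x_y @ T, np.diag(lam))
```

Note that `numpy.linalg.eigh` returns eigenvalues in ascending order, so they are explicitly re-sorted to match the descending convention used throughout the article.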

### 2.1 RD-WZQ model

When the RD-WZQ model is used, the quantization MSE for *U*_{m} quantized at *b*_{m} bits is given by [14]

$${d}_{m}\left({b}_{m}\right)={\lambda}_{m}{2}^{-2{b}_{m}},$$

where *λ*_{m} = var(*U*_{m}|**Y**) and var(·|·) denotes the conditional variance. The optimal solution to the WZ-TC problem under the RD-WZQ model is given by the following theorem.

**Theorem 1** *Given jointly Gaussian* **X** *and* **Y** *as defined above, and a total bit budget of B bits, if each transform coefficient U*_{m}*, m* = 1, . . ., *M*_{1}, *where* **U** = **T**^{T}**X**, *is quantized by an RD optimal WZ quantizer which uses* **Y** *as decoder side-information, then the transform* **T** *which minimizes* $E{\Vert \mathbf{X}-\widehat{\mathbf{X}}\Vert}^{2}$ *is the CKLT of* **X** *given* **Y**, *and the number of bits allocated to quantizing U*_{m} *is*

$${b}_{m}=\frac{1}{2}{\mathrm{log}}_{2}\left(\frac{{\lambda}_{m}}{{d}^{*}\left(\mathit{\lambda},B,N\right)}\right),\phantom{\rule{1em}{0ex}}m=1,\dots ,N,\phantom{\rule{2em}{0ex}}\left(2\right)$$

*and b*_{m} = 0 *for m > N, where N ≤ M*_{1} *is the largest integer for which λ*_{m} *≥ d*^{∗}(***λ***, *B, N*), *m* = 1, . . ., *N. The quantization MSE of the mth coefficient is d*^{∗}(***λ***, *B, N*), *and the overall MSE is*

$$E{\Vert \mathbf{X}-\widehat{\mathbf{X}}\Vert}^{2}=N{d}^{*}\left(\mathit{\lambda},B,N\right)+\sum _{m=N+1}^{{M}_{1}}{\lambda}_{m}.$$

**Proof 1** *Directly follows from* [[8], Section III-B].

Note that the RD-WZQ model implies infinite-dimensional VQ of each coefficient, and hence the above MSE is the OPTA for the Gaussian WZ-TC problem.
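The reverse water-filling allocation of Theorem 1 can be sketched as follows. The routine assumes the standard equal-distortion form of *d*^{∗} (the largest *N* with *λ*_{N} ≥ *d*^{∗}, each active coefficient ending at distortion *d*^{∗}); variable names are illustrative:

```python
import numpy as np

def wz_bit_allocation(lam, B):
    """Reverse water-filling over conditional eigenvalues lam (sorted
    descending): find the largest N with lam[N-1] >= d*(lam, B, N), then
    give b_m = 0.5*log2(lam[m]/d*) bits to each of the first N
    coefficients (a sketch of the standard form; not the article's code)."""
    lam = np.asarray(lam, dtype=float)
    for N in range(len(lam), 0, -1):
        d = np.prod(lam[:N]) ** (1.0 / N) * 2.0 ** (-2.0 * B / N)
        if lam[N - 1] >= d:
            b = np.zeros_like(lam)
            b[:N] = 0.5 * np.log2(lam[:N] / d)   # equal distortion d* each
            mse = N * d + lam[N:].sum()          # active + discarded coeffs
            return b, d, N, mse
    return np.zeros_like(lam), None, 0, lam.sum()

lam = [4.0, 2.0, 1.0, 0.25]
b, d, N, mse = wz_bit_allocation(lam, B=6.0)
```

For this toy eigenvalue profile, all four coefficients stay active, the allocated bits sum back to the budget *B* = 6, and every active coefficient ends up with the same distortion *d*^{∗}.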

### 2.2 SWC-HRSQ model

The asymptotically (in rate) optimal solution to the WZ-TC problem under the SWC-HRSQ model is given by the following theorem.

**Theorem 2** *Let* **X**, **Y**, *and B be as in* Theorem 1. *If each transform coefficient U*_{m}*, m* = 1, . . ., *M*_{1}, *where* **U** = **T**^{T}**X**, *is quantized by a high-rate scalar quantizer and the quantizer output is encoded by a SW code which uses* **Y** *as decoder side-information, then the transform* **T** *which asymptotically minimizes* $E{\Vert \mathbf{X}-\widehat{\mathbf{X}}\Vert}^{2}$ *is the CKLT of* **X** *given* **Y**, *and the bit allocation is given by* (2). *Furthermore, the asymptotically optimal quantizer for U*_{m}*, m* = 1, . . ., *N is a uniform quantizer with step-size* $\mathrm{\Delta}=\sqrt{\left(2\pi e\right){d}^{*}\left(\mathit{\lambda},B,N\right)}$*. The resulting quantization MSE is*

$$E{\Vert \mathbf{X}-\widehat{\mathbf{X}}\Vert}^{2}=\left(\frac{\pi e}{6}\right)N{d}^{*}\left(\mathit{\lambda},B,N\right)+\sum _{m=N+1}^{{M}_{1}}{\lambda}_{m}.$$

**Proof 2** *See Section "Proof of Theorem* 2*" in Appendix* 1.
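The step-size in Theorem 2 can be checked numerically: at high rate, a uniform quantizer with step Δ incurs MSE ≈ Δ²/12, so Δ = √(2πe·d*) gives a per-coefficient distortion of (πe/6)·d*, i.e., the familiar 1.53 dB gap above the RD-optimal level. The simulation below is an illustrative check, not one of the article's experiments:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# High-rate check of Theorem 2's step-size (illustrative, not from the
# article's experiments): quantization error of a uniform quantizer with
# step delta has variance ~ delta^2/12 = (pi*e/6) * d_star.
d_star = 1e-3                                        # target distortion level
delta = math.sqrt(2.0 * math.pi * math.e * d_star)   # step-size from Theorem 2
u = rng.standard_normal(200_000)                     # unit-variance coefficient
u_hat = delta * np.round(u / delta)                  # uniform (lattice) quantizer
mse = np.mean((u - u_hat) ** 2)

print(mse / d_star)   # close to pi*e/6 (about 1.42)
```

Since Δ is small relative to the unit source variance, overload distortion is negligible and the granular approximation Δ²/12 is accurate.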

### 2.3 Sufficiency of scalar side-information

The WZ quantizers with vector-valued decoder side-information considered in Theorem 1 are difficult to design in practice. However, the following theorem establishes that, when the CKLT is used and the RD-WZQ model applies for quantization of the coefficients, a linear transformation of the side-information vector can be used to convert the vector side-information problem into an equivalent scalar side-information problem. Furthermore, [[5], Section 6] shows that this result also applies, in an asymptotic sense, to the SWC-HRSQ model.

**Theorem 3** *Let the mean-zero vectors* $\mathbf{X}\in {\mathbb{R}}^{{M}_{1}}$ *and* $\mathbf{Y}\in {\mathbb{R}}^{{M}_{2}}$ *be jointly Gaussian, and let* **T** *be the CKLT of* **X** *given* **Y**. *Suppose that the transform coefficients U*_{m}*, m* = 1, . . ., *M*_{1}, *where* **U** = **T**^{T}**X**, *are each compressed by an RD optimal WZ quantizer relative to decoder side-information* **Y**. *Then, the MMSE estimate ũ*_{m}(**y**) *= E*{*U*_{m}|**y**} *of U*_{m} *given* **Y** = **y** *is a sufficient statistic for the decoder side-information for quantizing U*_{m}.

**Proof 3** *See Section "Proof of Theorem* 3*" in Appendix* 1.
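Because (**X**, **Y**) are jointly Gaussian, each estimate *ũ*_{m}(**y**) in Theorem 3 is a fixed linear function of **y**, so the whole side-information vector collapses to one scalar per coefficient. The sketch below computes those linear maps under an assumed toy model (the matrices are not from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the sufficient statistic of Theorem 3 (model matrices are
# illustrative assumptions): for jointly Gaussian (X, Y), the MMSE
# estimate E{U_m | y} is linear in y, so each WZ quantizer needs only
# one scalar function of the whole side-information vector y.
M1, M2 = 3, 2
A = rng.standard_normal((M1, M2))
S_y = np.eye(M2)
S_w = np.diag([1.0, 0.5, 0.25])

# CKLT of X given Y for X = A Y + W (so Sigma_{X|Y} = S_w).
lam, T = np.linalg.eigh(S_w)
order = np.argsort(lam)[::-1]
T = T[:, order]

# U = T^T X  =>  Sigma_{UY} = T^T Sigma_{XY}, and
# u_tilde_m(y) = E{U_m | y} = (Sigma_{UY} Sigma_Y^{-1} y)_m.
S_xy = A @ S_y
W_mmse = T.T @ S_xy @ np.linalg.inv(S_y)

y = rng.standard_normal(M2)
u_tilde = W_mmse @ y   # one scalar side-information value per coefficient
```

Row *m* of `W_mmse` is the linear map realizing *ũ*_{m}(**y**); the decoder applies it once and feeds the resulting scalar to the *m*th WZ quantizer's decoder.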

Wyner-Ziv transform coding is a special case of the more general MT-TC, where two or more terminals apply TC to their respective inputs and transmit the quantized outputs to a single decoder, which exploits the inter-source correlation to jointly reconstruct all the sources. In this case, the problem is to optimally allocate a given bit budget among all the terminals such that the total MSE is minimized. However, a closed-form solution to this problem appears difficult, due to the inter-dependence of the encoders in different terminals. An iterative descent algorithm is given in [8] for solving the Gaussian MT-TC problem. Given a total bit-budget, the bit-rate of the system is incremented by a small amount in each iteration, and the optimal WZ-TC for each terminal is determined by fixing the encoders of all other terminals and considering their outputs as decoder side-information. The solution that gives the MMSE is accepted, and the iterations are repeated until the total bit-budget is exhausted. While this algorithm, referred to as the DKLT algorithm, is guaranteed to converge to at least a locally optimal solution, there is no tractable way to implement the quantizers implied by the final solution, since it is not practical to optimize a set of near-optimal WZ quantizers in each iteration of this algorithm. Note also that the DKLT requires joint decoding of the two vector sources.

## 3 Source-splitting based distributed TC

In general, designing a multi-terminal VQ is more difficult than designing a WZ-VQ, due to the mutual dependence among the encoders. However, one can use WZ-VQs to realize a multi-terminal VQ by means of source-splitting [12]. It is known that in the quadratic-Gaussian case, source-splitting can be used to realize any rate-pair in the achievable rate-region by using only ideal WZ-VQs, which correspond to the corner-points of the achievable rate-region [[12], Section V-C], [15]. While the same optimality properties cannot be claimed for source-splitting by linear transforms, the aforementioned observation still provides the motivation to take a similar approach to practically realizing DTCs that can operate at arbitrary rates using only WZ quantizers.

A block diagram of the SP-DTC system is shown in Figure 1. Let the total number of bits available for encoding two jointly Gaussian vectors ${\mathbf{X}}_{1}\in {\mathbb{R}}^{{M}_{1}}$ and ${\mathbf{X}}_{2}\in {\mathbb{R}}^{{M}_{2}}$ be *B* bits. Terminal 1 performs source splitting by providing an approximation ${\mathbf{Y}}_{1}^{\prime}\in {\mathbb{R}}^{{N}_{1}^{\prime}}$ of **X**_{1} at the rate ${B}_{1}^{\prime}\left(<B\right)$ bits/vector as decoder side-information for WZ coding of terminal 2, where ${N}_{1}^{\prime}\le {M}_{1}$. In a TC framework, the goal is to provide the best (in the MMSE sense) linear approximation of **X**_{1} as the decoder side-information. Therefore, ${\mathbf{Y}}_{1}^{\prime}$ is the ${B}_{1}^{\prime}$-bit approximation of a linear projection ${\mathbf{U}}_{1}^{\prime}={\mathbf{T}}_{1}^{\prime T}{\mathbf{X}}_{1}$, where ${\mathbf{T}}_{1}^{\prime}$ is an *M*_{1} × *M*_{1} unitary matrix. Given the side-information ${\mathbf{Y}}_{1}^{\prime}$ at the decoder, terminal 2 quantizes a linear projection ${\mathbf{U}}_{2}={\mathbf{T}}_{2}^{T}{\mathbf{X}}_{2}$ of **X**_{2} using ${B}_{2}\left(<B-{B}_{1}^{\prime}\right)$ bits/vector, where **T**_{2} is an *M*_{2} × *M*_{2} unitary matrix. Let ${\mathbf{Y}}_{2}\in {\mathbb{R}}^{{N}_{2}}$ be the quantized value of **U**_{2}, where *N*_{2} ≤ *M*_{2}. Then, given the quantized linear projections of both **X**_{1} and **X**_{2} available at the decoder, terminal 1 quantizes a linear projection ${\mathbf{U}}_{1}^{\u2033}={\mathbf{T}}_{1}^{\u2033T}{\mathbf{X}}_{1}$ of **X**_{1} using ${B}_{1}^{\u2033}=B-{B}_{1}^{\prime}-{B}_{2}$ bits/vector, where ${\mathbf{T}}_{1}^{\u2033}$ is an *M*_{1} × *M*_{1} unitary matrix. Let ${\mathbf{Y}}_{1}^{\u2033}\in {\mathbb{R}}^{{N}_{1}^{\u2033}}$ be the quantized value of ${\mathbf{U}}_{1}^{\u2033}$, where ${N}_{1}^{\u2033}\le {M}_{1}$. In the receiver, each source vector is reconstructed by a WZ decoder.
The MMSE optimal reconstructions of **X**_{1} and **X**_{2} are, respectively, given by ${\widehat{\mathbf{X}}}_{1}=E\left\{{\mathbf{X}}_{1}|{\mathbf{Y}}_{1}^{\u2033},\mathbf{V}\right\}$ and ${\widehat{\mathbf{X}}}_{2}=E\left\{{\mathbf{X}}_{2}|\mathbf{V}\right\}$, where $\mathbf{V}={\left({\mathbf{Y}}_{1}^{\prime}\phantom{\rule{0.3em}{0ex}}{\mathbf{Y}}_{2}\right)}^{T}$. The total transmission rate for source **X**_{1} is thus ${B}_{1}={B}_{1}^{\prime}+{B}_{1}^{\u2033}$ bits/vector. The rates used by terminals 1 and 2, in bits/sample, are *R*_{1} = *B*_{1}/*M*_{1} and *R*_{2} = *B*_{2}/*M*_{2}, respectively.

Let ${\mathbf{U}}_{1}^{\prime}={\left({U}_{1,1}^{\prime},\dots ,{U}_{1,{M}_{1}}^{\prime}\right)}^{T}$, ${\mathbf{U}}_{1}^{\u2033}={\left({U}_{1,1}^{\u2033},\dots ,{U}_{1,{M}_{1}}^{\u2033}\right)}^{T}$, and ${\mathbf{U}}_{2}={\left({U}_{2,1},\dots ,{U}_{2,{M}_{2}}\right)}^{T}$. Also, let the bit-rates allocated to quantizing these transform coefficients be ${\mathbf{r}}_{1}^{\prime}={\left({r}_{1,1}^{\prime},\dots ,{r}_{1,{M}_{1}}^{\prime}\right)}^{T}$, ${\mathbf{r}}_{1}^{\u2033}={\left({r}_{1,1}^{\u2033},\dots ,{r}_{1,{M}_{1}}^{\u2033}\right)}^{T}$, and ${\mathbf{r}}_{2}={\left({r}_{2,1},\dots ,{r}_{2,{M}_{2}}\right)}^{T}$, respectively, and define $\mathbf{r}={\left({\mathbf{r}}_{1}^{\prime},{\mathbf{r}}_{1}^{\u2033},{\mathbf{r}}_{2}\right)}^{T}$. Given a total of *B* bits for encoding both **X**_{1} and **X**_{2}, the design of a SP-DTC involves determining the values of the transforms ${\mathbf{T}}_{1}^{\prime}$, ${\mathbf{T}}_{1}^{\u2033}$, and ${\mathbf{T}}_{2}$, and a bit allocation among the transform coefficients $\mathbf{U}={\left({\mathbf{U}}_{1}^{\prime},{\mathbf{U}}_{1}^{\u2033},{\mathbf{U}}_{2}\right)}^{T}$ such that the total MSE

$$D=E{\Vert {\mathbf{X}}_{1}-{\widehat{\mathbf{X}}}_{1}\Vert}^{2}+E{\Vert {\mathbf{X}}_{2}-{\widehat{\mathbf{X}}}_{2}\Vert}^{2}$$

is minimized. By writing the quantization MSEs of ${U}_{1,i}^{\u2033}$ and *U*_{2,j} as ${d}_{1,i}^{\u2033}\left({r}_{1,i}^{\u2033},{\mathbf{r}}_{1}^{\prime},{\mathbf{r}}_{2}\right)$ and ${d}_{2,j}\left({r}_{2,j},{\mathbf{r}}_{1}^{\prime}\right)$, respectively, *i* = 1, . . ., *M*_{1}, *j* = 1, . . ., *M*_{2}, the total MSE can be expressed as

$$D\left(\mathbf{r}\right)=\sum _{i=1}^{{M}_{1}}{d}_{1,i}^{\u2033}\left({r}_{1,i}^{\u2033},{\mathbf{r}}_{1}^{\prime},{\mathbf{r}}_{2}\right)+\sum _{j=1}^{{M}_{2}}{d}_{2,j}\left({r}_{2,j},{\mathbf{r}}_{1}^{\prime}\right).$$

The bit allocation problem can now be stated as follows:

Given a total bit-budget of *B* bits,

$$\underset{\mathbf{r}}{\mathrm{min}}\phantom{\rule{0.3em}{0ex}}D\left(\mathbf{r}\right)\phantom{\rule{2em}{0ex}}\left(8\right)$$

subject to

$$\sum _{i=1}^{2{M}_{1}+{M}_{2}}{r}_{i}\le B,\phantom{\rule{1em}{0ex}}{r}_{i}\ge 0,\phantom{\rule{0.5em}{0ex}}i=1,\dots ,2{M}_{1}+{M}_{2},$$

where $\mathbf{r}={\left({r}_{1},\dots ,{r}_{2{M}_{1}+{M}_{2}}\right)}^{T}$. The explicit solution of this minimization problem is unfortunately intractable, due to the inter-dependence of the three transform codes involved.

However, an explicit solution can be found for a variant of this problem obtained by fixing ${B}_{1}^{\prime}$, ${B}_{1}^{\u2033}$ and *B*_{2}, so that the number of bits allocated to each transform code is fixed and it is only required to optimize the bit allocation among the quantizers within each transform code. For simplicity, we refer to this problem as the *constrained bit-allocation problem*. In the following, an explicit solution to this problem is derived. Based on the result, we then present a tree-search algorithm to solve the *unconstrained* problem (8). Under both RD-WZQ and SWC-HRSQ models, the optimal transforms for Gaussian sources are CKLTs. Therefore, we refer to the solution to problem (8) as the SP-DKLT.

### 3.1 Solution to the constrained bit-allocation problem

#### 3.1.1 RD optimal quantization

Let ${B}_{1}^{\prime}$, ${B}_{1}^{\u2033}$, and *B*_{2} be fixed in Figure 1 and let the coefficient quantization be represented by the RD-WZQ model (for components of ${\mathbf{U}}_{1}^{\prime}$, this reduces to non-distributed RD optimal quantization). From *Theorem* 1, it follows that the MMSE optimal transform (in the sense of providing the best linear approximation as decoder side-information for terminal 2) ${\mathbf{T}}_{1}^{\prime}$ is the KLT of **X**_{1}. Let the eigenvalues of ${\mathbf{\Sigma}}_{{X}_{1}}$ be ${\mathit{\lambda}}_{1}^{\prime}=\left({\lambda}_{1,1}^{\prime},\dots ,{\lambda}_{1,{M}_{1}}^{\prime}\right)$. Then, using (2), the optimal bit allocation for ${U}_{1,m}^{\prime}$ is given by

$${r}_{1,m}^{\prime}=\frac{1}{2}{\mathrm{log}}_{2}\left(\frac{{\lambda}_{1,m}^{\prime}}{{d}^{*}\left({\mathit{\lambda}}_{1}^{\prime},{B}_{1}^{\prime},{N}_{1}^{\prime}\right)}\right),\phantom{\rule{1em}{0ex}}m=1,\dots ,{N}_{1}^{\prime},\phantom{\rule{2em}{0ex}}\left(9\right)$$

for some ${N}_{1}^{\prime}\le {M}_{1}$. Let the quantized value of ${U}_{1,m}^{\prime}$ be ${\widehat{U}}_{1,m}^{\prime}$. We note that $E{U}_{1,m}^{\prime 2}={\lambda}_{1,m}^{\prime}$ for *m* = 1, . . ., *M*_{1} and $E{\left({U}_{1,m}^{\prime}-{\widehat{U}}_{1,m}^{\prime}\right)}^{2}={d}^{*}\left({\mathit{\lambda}}_{1}^{\prime},{B}_{1}^{\prime},{N}_{1}^{\prime}\right)$ for $m=1,\dots ,{N}_{1}^{\prime}$. In the RD theoretic sense, the quantized value (up to a scaling factor) of the mean-zero Gaussian variable ${U}_{1,m}^{\prime}$ can be given by ${Y}_{1,m}^{\prime}={U}_{1,m}^{\prime}+{Z}_{1,m}^{\prime}$, $m=1,\dots ,{N}_{1}^{\prime}$, where ${Z}_{1,m}^{\prime}$ is a mean-zero Gaussian random variable independent of ${U}_{1,m}^{\prime}$ such that

$$E{Z}_{1,m}^{\prime 2}=\frac{{\lambda}_{1,m}^{\prime}{d}^{*}\left({\mathit{\lambda}}_{1}^{\prime},{B}_{1}^{\prime},{N}_{1}^{\prime}\right)}{{\lambda}_{1,m}^{\prime}-{d}^{*}\left({\mathit{\lambda}}_{1}^{\prime},{B}_{1}^{\prime},{N}_{1}^{\prime}\right)},\phantom{\rule{2em}{0ex}}\left(10\right)$$

see [[16], Section 10.3.2]. Therefore, we can write

$${\mathbf{Y}}_{1}^{\prime}={\left({\mathbf{K}}_{1}^{\prime \left({N}_{1}^{\prime}\right)}\right)}^{T}{\mathbf{X}}_{1}+{\mathbf{Z}}_{1}^{\prime},\phantom{\rule{2em}{0ex}}\left(11\right)$$

where ${\mathbf{K}}_{1}^{\prime \left({N}_{1}^{\prime}\right)}$ denotes the ${M}_{1}\times {N}_{1}^{\prime}$ matrix consisting of the first ${N}_{1}^{\prime}$ columns of ${\mathbf{K}}_{1}^{\prime}$. The covariance matrix of the "quantization noise" vector ${\mathbf{Z}}_{1}^{\prime}={\left({Z}_{1,1}^{\prime},\dots ,{Z}_{1,{N}_{1}^{\prime}}^{\prime}\right)}^{T}$ is given by ${\mathbf{\Sigma}}_{{Z}_{1}^{\prime}}=\mathsf{\text{diag}}\left(E{Z}_{1,1}^{\prime 2},\dots ,E{Z}_{1,{N}_{1}^{\prime}}^{\prime 2}\right)$. According to (11), ${\mathbf{Y}}_{1}^{\prime}$ and **X**_{1} are jointly Gaussian, and it follows that

Furthermore, **X**_{2} and ${\mathbf{Y}}_{1}^{\prime}$ are jointly Gaussian with the conditional covariance matrix

Next, consider TC of **X**_{2} given ${\mathbf{Y}}_{1}^{\prime}$ as decoder side-information. From *Theorem* 1, it follows that $E{\Vert {\mathbf{X}}_{2}-{\widehat{\mathbf{X}}}_{2}\Vert}^{2}$ is minimized by choosing **T**_{2} as the CKLT of **X**_{2} given ${\mathbf{Y}}_{1}^{\prime}$ and by applying RD optimal WZ quantization to each element of ${\mathbf{U}}_{2}={\mathbf{T}}_{2}^{T}{\mathbf{X}}_{2}$ given decoder side-information ${\mathbf{Y}}_{1}^{\prime}$, based on a bit allocation specified by the eigenvalues ${\mathit{\lambda}}_{2}=\left({\lambda}_{2,1},\dots ,{\lambda}_{2,{M}_{2}}\right)$ of ${\mathbf{\Sigma}}_{{X}_{2}|{Y}_{1}^{\prime}}$. The optimal bit allocation for *U*_{2,m} is given by

$${r}_{2,m}=\frac{1}{2}{\mathrm{log}}_{2}\left(\frac{{\lambda}_{2,m}}{{d}^{*}\left({\mathit{\lambda}}_{2},{B}_{2},{N}_{2}\right)}\right),\phantom{\rule{1em}{0ex}}m=1,\dots ,{N}_{2},\phantom{\rule{2em}{0ex}}\left(14\right)$$

for some *N*_{2} ≤ *M*_{2}. The resulting MSE is given by

$$E{\Vert {\mathbf{X}}_{2}-{\widehat{\mathbf{X}}}_{2}\Vert}^{2}={N}_{2}{d}^{*}\left({\mathit{\lambda}}_{2},{B}_{2},{N}_{2}\right)+\sum _{m={N}_{2}+1}^{{M}_{2}}{\lambda}_{2,m}.$$

As before, the quantized value of **U**_{2}, up to a scaling factor, can be represented by [[8], Theorem 3],

where ${\mathbf{K}}_{2}^{\left({N}_{2}\right)}$ denotes the *M*_{2} × *N*_{2} matrix consisting of the first *N*_{2} columns of **K**_{2}, and the quantization noise ${\mathbf{Z}}_{2}\in {\mathbb{R}}^{{N}_{2}}$ is a mean-zero Gaussian vector, with independent components, that is independent of **X**_{2}. The covariance matrix of **Z**_{2} is given by ${\mathbf{\Sigma}}_{{Z}_{2}}=\mathsf{\text{diag}}\left(E{Z}_{2,1}^{2},\dots ,E{Z}_{2,{N}_{2}}^{2}\right)$, where $E{Z}_{2,m}^{2}$, *m* = 1, . . ., *N*_{2}, are given by

The covariance matrix of **Y**_{2} is

Therefore **X**_{1} and $\mathbf{V}={\left({\mathbf{Y}}_{1}^{\prime}\phantom{\rule{0.3em}{0ex}}{\mathbf{Y}}_{2}\right)}^{T}$ are jointly Gaussian with the cross-covariance matrix

where

and

Finally, consider quantizing **X**_{1}, given $\mathbf{V}={\left(\begin{array}{cc}\hfill {\mathbf{Y}}_{1}^{\prime}\hfill & \hfill {\mathbf{Y}}_{2}\hfill \end{array}\right)}^{T}\in {\mathbb{R}}^{{N}_{1}^{\prime}+{N}_{2}}$ as the decoder side-information. As before, $E{\Vert {\mathbf{X}}_{1}-{\widehat{\mathbf{X}}}_{1}\Vert}^{2}$ is minimized by choosing ${\mathbf{T}}_{1}^{\u2033}$ as the CKLT of **X**_{1} given **V**, and by RD optimal WZ quantization of each element of ${\mathbf{U}}_{1}^{\u2033}={\mathbf{T}}_{1}^{\u2033T}{\mathbf{X}}_{1}$ given **V**, based on a bit allocation specified by the eigenvalues ${\mathit{\lambda}}_{1}^{\u2033}={\left({\lambda}_{1,1}^{\u2033},\dots ,{\lambda}_{1,{M}_{1}}^{\u2033}\right)}^{T}$ of ${\mathbf{\Sigma}}_{{X}_{1}|V}$. The bit rate allocated to quantizing ${U}_{1,m}^{\u2033}$ is given by

$${r}_{1,m}^{\u2033}=\frac{1}{2}{\mathrm{log}}_{2}\left(\frac{{\lambda}_{1,m}^{\u2033}}{{d}^{*}\left({\mathit{\lambda}}_{1}^{\u2033},{B}_{1}^{\u2033},{N}_{1}^{\u2033}\right)}\right),\phantom{\rule{1em}{0ex}}m=1,\dots ,{N}_{1}^{\u2033},\phantom{\rule{2em}{0ex}}\left(20\right)$$

for some ${N}_{1}^{\u2033}\le {M}_{1}$, and the resulting MSE is

$$E{\Vert {\mathbf{X}}_{1}-{\widehat{\mathbf{X}}}_{1}\Vert}^{2}={N}_{1}^{\u2033}{d}^{*}\left({\mathit{\lambda}}_{1}^{\u2033},{B}_{1}^{\u2033},{N}_{1}^{\u2033}\right)+\sum _{m={N}_{1}^{\u2033}+1}^{{M}_{1}}{\lambda}_{1,m}^{\u2033}.$$

Given a rate tuple $\left({B}_{1}^{\prime},{B}_{1}^{\u2033},{B}_{2}\right)$, the MMSE achievable with a SP-DKLT code for two jointly Gaussian vectors is given by

$$D\left({B}_{1}^{\prime},{B}_{1}^{\u2033},{B}_{2}\right)={N}_{1}^{\u2033}{d}^{*}\left({\mathit{\lambda}}_{1}^{\u2033},{B}_{1}^{\u2033},{N}_{1}^{\u2033}\right)+\sum _{m={N}_{1}^{\u2033}+1}^{{M}_{1}}{\lambda}_{1,m}^{\u2033}+{N}_{2}{d}^{*}\left({\mathit{\lambda}}_{2},{B}_{2},{N}_{2}\right)+\sum _{m={N}_{2}+1}^{{M}_{2}}{\lambda}_{2,m}.\phantom{\rule{2em}{0ex}}\left(22\right)$$
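The chain of computations in this section can be sketched numerically. The sketch below assumes the standard reverse water-filling form of *d*^{∗} and the Gaussian test-channel noise variance *λd*/(*λ* − *d*); for brevity, the final stage codes **X**_{1} against ${\mathbf{Y}}_{1}^{\prime}$ alone rather than the full **V** = (${\mathbf{Y}}_{1}^{\prime}$ **Y**_{2}), which only upper-bounds the MSE of the actual scheme. The toy covariances are illustrative assumptions:

```python
import numpy as np

def wf_distortion(lam, B):
    """Equal-distortion level d*(lam, B, N) for the largest feasible N
    (standard reverse water-filling; an assumed form of the article's d*)."""
    lam = np.asarray(lam, dtype=float)
    for N in range(len(lam), 0, -1):
        d = np.prod(lam[:N]) ** (1.0 / N) * 2.0 ** (-2.0 * B / N)
        if lam[N - 1] > d:
            return d, N
    return 0.0, 0

def sorted_eig(S):
    """Eigenvalues/vectors of a symmetric matrix, sorted descending."""
    lam, T = np.linalg.eigh(S)
    order = np.argsort(lam)[::-1]
    return lam[order], T[:, order]

def sp_dklt_mse(S1, S2, S12, B1p, B1pp, B2):
    # Stage 1: B1p bits describe Y1' = K^T X1 + Z1' (KLT of X1 plus
    # Gaussian test-channel noise of variance lam*d/(lam - d)).
    lam1, T1 = sorted_eig(S1)
    d1, N1 = wf_distortion(lam1, B1p)
    K = T1[:, :N1]
    nz = lam1[:N1] * d1 / (lam1[:N1] - d1)
    S_y1 = np.diag(lam1[:N1] + nz)
    # Stage 2: WZ-code X2 given Y1' (CKLT + water-filling on Sigma_{X2|Y1'}).
    C2 = S12.T @ K
    lam2, _ = sorted_eig(S2 - C2 @ np.linalg.inv(S_y1) @ C2.T)
    d2, N2 = wf_distortion(lam2, B2)
    mse2 = N2 * d2 + lam2[N2:].sum()
    # Stage 3 (simplified sketch): code X1 against Y1' alone, ignoring Y2;
    # this upper-bounds the MSE of coding X1 given V = (Y1', Y2).
    C1 = S1 @ K
    lam1pp, _ = sorted_eig(S1 - C1 @ np.linalg.inv(S_y1) @ C1.T)
    d1pp, N1pp = wf_distortion(lam1pp, B1pp)
    mse1 = N1pp * d1pp + lam1pp[N1pp:].sum()
    return mse1 + mse2

# Toy correlated pair: X2 = 0.8*X1 + independent noise (an assumption).
S1 = np.array([[1.0, 0.5], [0.5, 1.0]])
S12 = 0.8 * S1
S2 = 0.64 * S1 + 0.36 * np.eye(2)
mse_total = sp_dklt_mse(S1, S2, S12, B1p=2.0, B1pp=2.0, B2=2.0)
```

This is exactly the kind of closed-form evaluation the tree-search of the next subsection invokes once per candidate rate-tuple.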

#### 3.1.2 High-resolution scalar quantization and SW coding

Due to *Theorem* 2, the expressions for the bit allocations given by (9), (14), and (20) apply to the SWC-HRSQ model as well. However, the resulting MSE, and hence the quantization noise variances, are different from those of the RD-WZQ model. More specifically, since the decoder side-information in a SP-DTC depends on the quantization noise of the other terminals, the optimal transforms **T**_{2} and ${\mathbf{T}}_{1}^{\u2033}$ and the associated bit allocations obtained with the SWC-HRSQ model differ from those obtained with the RD-WZQ model. In order to make the problem tractable, we assume that the quantization noise of the SWC-HRSQ model also follows (11) and (16) (for a discussion on the validity of this assumption, see [17]). This assumption essentially allows us to compute the conditional covariance matrices ${\mathbf{\Sigma}}_{{X}_{2}|{Y}_{1}^{\prime}}$ and ${\mathbf{\Sigma}}_{{X}_{1}|V}$ as in the previous case, and then apply (9), (14), and (20) to find the optimal transforms and the bit allocations. However, due to (27), the quantization noise variance in (10) in this case is given by

$$E{Z}_{1,m}^{\prime 2}=\frac{{\lambda}_{1,m}^{\prime}\left(\frac{\pi e}{6}\right){d}^{*}\left({\mathit{\lambda}}_{1}^{\prime},{B}_{1}^{\prime},{N}_{1}^{\prime}\right)}{{\lambda}_{1,m}^{\prime}-\left(\frac{\pi e}{6}\right){d}^{*}\left({\mathit{\lambda}}_{1}^{\prime},{B}_{1}^{\prime},{N}_{1}^{\prime}\right)}.$$

A similar expression holds for the quantization noise variance in (17).

### 3.2 A tree-search solution to the unconstrained bit-allocation problem

We note that the optimal solution to the unconstrained problem defined in (8) corresponds to the MMSE solution of the constrained problem over the set of rate-tuples $\mathcal{S}=\left\{\left({B}_{1}^{\prime},{B}_{1}^{\u2033},{B}_{2}\right):{B}_{1}^{\prime}\in \left(0,B\right),{B}_{1}^{\u2033}\in \left(0,B\right),{B}_{2}\in \left(0,B\right),{B}_{1}^{\prime}+{B}_{1}^{\u2033}+{B}_{2}\le B\right\}$. This set is shown in Figure 2. One approach to locating the MMSE solution is to search over an appropriately discretized grid of points inside $\mathcal{S}$. As we will see, even though an exhaustive search on a fine grid can be prohibitively complex, a much simpler constrained tree-search algorithm exists which can locate the required solution with very high probability.

The proposed algorithm is a generalization of a class of bit-allocation algorithms in which a small fraction Δ*B* of the total bit-budget *B* is allocated to the "most deserving" quantizer among a set of quantizers in an incremental fashion, until the entire bit-budget is exhausted [[4], Section 8.4]. Unfortunately, this type of greedy search cannot guarantee that the final solution is overall optimal, and it can yield poor results in our problem, where the bit allocation must be performed among three sets of dependent quantizers. On the other hand, if the increment Δ*B* is chosen small enough, a near-optimal solution can be found by resorting to a tree-search. Even though a full tree-search is intractable, a simple algorithm referred to as the (*M, L*)-*algorithm* [18] exists for detecting the minimum-cost path in the tree with a high probability. We use this insight to formulate a tree-search algorithm for solving the unconstrained bit allocation problem, in which a set of constrained bit allocation problems is solved in each iteration.

In order to describe the proposed tree-search algorithm in detail, let Δ*B* be the incremental amount of bits to be allocated in each step of the search, where 0 < Δ*B* ≪ *B*. The algorithm is initialized by setting $\left({B}_{1}^{\prime},{B}_{1}^{\u2033},{B}_{2}\right)=\left(0,0,0\right)$, i.e., the origin in Figure 2. Now, if we are to allocate Δ*B* bits to only one of the three transform codes ${\mathbf{T}}_{1}^{\prime}$, ${\mathbf{T}}_{1}^{\u2033}$, or **T**_{2}, then there are three possible choices for the rate-tuple $\left({B}_{1}^{\prime},{B}_{1}^{\u2033},{B}_{2}\right)$, namely (Δ*B*, 0, 0), (0, Δ*B*, 0), and (0, 0, Δ*B*). For each of these choices, we can explicitly solve the constrained bit allocation problem as described in the previous section and find the MMSE solution. Each of these candidate solutions can be viewed as a node in a tree, as shown in Figure 3. The root node of the tree corresponds to a SP-DTC of rate 0, and a node in the first level, obtained in the first iteration of the algorithm, corresponds to a SP-DTC of rate Δ*B* bits per source pair (**X**_{1}, **X**_{2}). In the second iteration of the algorithm, we allocate Δ*B* more bits to each of the three candidate SP-DTCs in the first level, one SP-DTC at a time. Note that, for each SP-DTC, the Δ*B* bits can be allocated in three different ways, i.e., added to either ${B}_{1}^{\prime}$, ${B}_{1}^{\u2033}$, or *B*_{2}. This requires the solution of three constrained bit-allocation problems for each of the 3 nodes in the first level. As a result, the tree is extended to a second level of 3^{2} nodes, in which each node corresponds to a SP-DTC of 2Δ*B* bits per source-pair, as shown in Figure 3. We can repeat this procedure, allocating Δ*B* bits to each terminal node of the tree in a given iteration, until all *B* bits are exhausted.
After the final iteration, the tree consists of *L* = ⌈*B*/Δ*B*⌉ levels with 3^{L} nodes in the last level (terminal nodes). Each terminal node corresponds to a candidate SP-DTC of rate *B*, and the rate-tuples of these SP-DTCs lie on the plane ${B}_{1}^{\prime}+{B}_{1}^{\u2033}+{B}_{2}=B$ in Figure 2. The MMSE terminal node of the tree is the optimal solution to the unconstrained bit allocation problem, provided that the latter solution is on the search-grid. If Δ*B* is chosen small enough, then we can ensure that the optimal solution is nearly on the search-grid. Suppose that, in each iteration, the algorithm saves the MSE of the solution to the constrained bit-allocation problem associated with each node. In theory, the optimal solution can be found by an exhaustive tree-search, using the MSE of a node [given by (22)] as the path-cost. In order to practically implement the tree-search, we use the (*M, L*)-algorithm, in which the parameter *M* can be chosen to reduce the complexity at the expense of decreased accuracy (i.e., the probability of detecting the lowest-cost path in the tree). In the (*M, L*) algorithm [[18], p. 216] for a tree of depth *L*, one only retains the *M* best (lowest MSE) nodes in each iteration. When *M* = 1 we have a completely greedy search. On the other hand, when *M*_{n} = 3^{n} in the *n*th iteration, we have a full tree-search, whose complexity grows exponentially with the iteration number. When *M* (1 *≤ M ≤* 3^{L}) is a prescribed constant, the complexity is linear in *M*, independent of the number of iterations. In obtaining the simulation results presented in Section 4, *M* = 27 and *L* = 135 (Δ*B* = 0.2) were found to be sufficient to obtain near-optimal results. For example, even for *M* = 81 and *L* = 405, nearly the same result was obtained.
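The search above can be sketched as follows. This is a minimal illustration, assuming a hypothetical `solve_constrained` callable that returns the MMSE of the constrained bit-allocation problem for a given rate-tuple $\left({B}_{1}^{\prime},{B}_{1}^{\u2033},{B}_{2}\right)$; it stands in for the closed-form solution described in the previous section.

```python
import heapq

def ml_tree_search(total_bits, delta_b, M, solve_constrained):
    """(M, L)-style beam search over incremental bit allocations.

    `solve_constrained((B1p, B1pp, B2))` returns the MSE of the constrained
    bit-allocation solution for that rate-tuple (a hypothetical callable).
    Returns (mse, rate_tuple) of the best terminal node found.
    """
    L = int(round(total_bits / delta_b))  # depth of the search tree
    # Each node: (mse, rate_tuple); start from the zero-rate root.
    frontier = [(solve_constrained((0.0, 0.0, 0.0)), (0.0, 0.0, 0.0))]
    for _ in range(L):
        children = []
        for _, rates in frontier:
            # Allocate delta_b to one of the three transform codes at a time.
            for i in range(3):
                new = list(rates)
                new[i] += delta_b
                new = tuple(new)
                children.append((solve_constrained(new), new))
        # Retain only the M lowest-MSE nodes (M = 1 is the fully greedy search).
        frontier = heapq.nsmallest(M, children)
    return min(frontier)
```

With *M* = 1 the search reduces to the greedy incremental allocation of [[4], Section 8.4]; increasing *M* widens the beam toward a full tree-search.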

## 4 Numerical results and discussion

*Source model A*: Let the components of **X**_{1} be *M*_{1} consecutive samples of a first-order Gauss-Markov process with unit variance and correlation coefficient *|ρ|* < 1, i.e., *X*_{1,m} = *ρX*_{1,(m-1)} + *Z*_{m}, *m* = 2, . . ., *M*_{1}, where *Z*_{m}, *m* = 1, . . ., *M*_{1}, are mean-zero iid Gaussian variables such that $E{Z}_{m}^{2}=1-{\rho}^{2}$. The auto-covariance matrix ${\sum}_{{X}_{1}}$ is a Toeplitz matrix with the first row $\left(1,\rho ,{\rho}^{2},\dots ,{\rho}^{{M}_{1}-1}\right)$. Now define the components of **X**_{2} to be noisy observations of the components of **X**_{1}, i.e., *X*_{2,m} = *γX*_{1,m} + *W*_{m}, where *|γ|* < 1 and *W*_{m} is a mean-zero, iid Gaussian variable with $E{W}_{m}^{2}=1-{\gamma}^{2}$, *m* = 1, . . ., *M*_{1} (*M*_{2} = *M*_{1}). It follows that ${\sum}_{{X}_{2}}$ is a Toeplitz matrix with the first row $\left(1,{\gamma}^{2}\rho ,{\gamma}^{2}{\rho}^{2},\dots ,{\gamma}^{2}{\rho}^{{M}_{1}-1}\right)$. Furthermore, the cross-covariance matrix is ${\sum}_{{X}_{1}{X}_{2}}=\gamma {\sum}_{{X}_{1}}$. Note that **X**_{1} and **X**_{2} are not statistically similar and the components of **X**_{1} are more correlated than those of **X**_{2}.
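For concreteness, the covariance structure of *source model A* can be generated as follows (an illustrative sketch; the function name is ours). Note that *X*_{2,m} = *γX*_{1,m} + *W*_{m} yields the cross-covariance entries *γρ*^{|i-j|}.

```python
import numpy as np

def model_a_covariances(M1, rho, gamma):
    """Covariance matrices of source model A.

    X1 holds M1 samples of a unit-variance first-order Gauss-Markov process;
    X2,m = gamma*X1,m + Wm with E{Wm^2} = 1 - gamma^2.
    """
    lags = np.abs(np.subtract.outer(np.arange(M1), np.arange(M1)))
    sigma_x1 = rho ** lags            # Toeplitz, first row (1, rho, rho^2, ...)
    sigma_x2 = gamma ** 2 * sigma_x1  # off-diagonal entries scaled by gamma^2 ...
    np.fill_diagonal(sigma_x2, 1.0)   # ... while E{X2,m^2} = gamma^2 + (1 - gamma^2) = 1
    sigma_x1x2 = gamma * sigma_x1     # E{X1,i X2,j} = gamma * rho^|i-j|
    return sigma_x1, sigma_x2, sigma_x1x2
```

With *ρ* = *γ* = 0.9 (as in Figure 4), the largest element of the cross-covariance matrix is 0.9, on its diagonal.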

*Source model B*: Consider a spatial Gaussian random field in which the correlation function decays with distance *d* according to the squared exponential model [19]. We define the random vectors **X**_{1} and **X**_{2} to be observations picked up by a pair of sensor arrays placed in this random field. In this case, the auto-covariance matrix of **X**_{1} is given by ${\left[{\sum}_{{X}_{1}}\right]}_{ij}=exp\left\{-{\left(\alpha {d}_{ij}\right)}^{2}\right\}$, where *α* is a constant and *d*_{ij} is the distance between *X*_{1,i} and *X*_{1,j}. The auto-covariance matrix of **X**_{2} has a similar form. For simplicity, assume that the sensors in each array are placed on an *M × M* square grid of unit spacing (i.e., *M*_{1} = *M*_{2} = *M*^{2}), the two arrays are on parallel planes separated by a distance *r*, and the two grids are aligned so that the distance between *X*_{1,i} and *X*_{2,i} is *r* for all *i*. With this setup, the distance between *X*_{1,i} and *X*_{2,j} is $\sqrt{{d}_{ij}^{2}+{r}^{2}}$, and the cross-covariance matrix is given by ${\left[{\sum}_{{X}_{1}{X}_{2}}\right]}_{ij}=\theta \phantom{\rule{0.3em}{0ex}}exp\left\{-{\left(\alpha {d}_{ij}\right)}^{2}\right\}$, where *θ* = exp {- (*αr*)^{2}}. This sensor structure ensures that **X**_{1} and **X**_{2} are statistically similar. However, ${\sum}_{{X}_{1}{X}_{2}}$ can be chosen independently of ${\sum}_{{X}_{1}}$ and ${\sum}_{{X}_{2}}$ (by choosing the array separation *r*).
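The covariance structure of *source model B* can likewise be computed directly from the grid geometry (an illustrative sketch; the function name is ours):

```python
import numpy as np

def model_b_covariances(M, alpha, r):
    """Covariances of source model B: two aligned M x M unit-spacing sensor
    grids on parallel planes a distance r apart, with squared-exponential
    correlation exp{-(alpha*d)^2} in the underlying random field."""
    ii, jj = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    pts = np.stack([ii.ravel(), jj.ravel()], axis=1).astype(float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)  # in-plane d_ij
    sigma_x1 = np.exp(-(alpha * d) ** 2)  # [Sigma_X1]_ij = exp{-(alpha d_ij)^2}
    sigma_x2 = sigma_x1.copy()            # the two arrays are statistically similar
    theta = np.exp(-(alpha * r) ** 2)
    # distance(X1_i, X2_j) = sqrt(d_ij^2 + r^2), so the cross-covariance
    # factors as theta * exp{-(alpha d_ij)^2}.
    sigma_x1x2 = theta * sigma_x1
    return sigma_x1, sigma_x2, sigma_x1x2
```

Choosing the separation *r* sets *θ* without changing ∑_{X1} or ∑_{X2}, which is why *θ* affects only the distributed-coding rates in Figure 5.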

### 4.1 RD performance

We compute the rate-pairs (*R*_{1}, *R*_{2}) achievable with a SP-DKLT code for a given total MSE *D* by fixing *R*_{1} (or *R*_{2}) and then searching for the minimum *R*_{2} (or *R*_{1}) required to achieve the MSE *D* [given by (6)]. The rate-pairs achievable for *D* = 0.01 with SP-DKLT coding of vectors from *source model A* with *ρ* = 0.9, *γ* = 0.9 are plotted in Figure 4. These values of *ρ* and *γ* result in a source cross-covariance matrix whose largest element is 0.9. In Figure 4, the curve "SP-DKLT (**X**_{1} split)" corresponds to a system in which the input to terminal 1 (which applies source splitting) is **X**_{1}, as shown in Figure 1. The curve "SP-DKLT (**X**_{2} split)" corresponds to a system in which the input to terminal 1 is **X**_{2}. Note that the two curves are not symmetric in rates, and they coincide if *R*_{1} and *R*_{2} are inter-changed in one of the curves. This is because **X**_{1} and **X**_{2} have different auto-covariance matrices, and hence inter-changing the rates is equivalent to inter-changing the terminals. Importantly, this result indicates that even when the two sources are not statistically identical, the choice of which source to split does not affect the achievable SP-DKLT performance. Table 1 lists the best bit allocations found by the tree-search algorithm for the SP-DKLT codes shown in Figure 4. Note that, for the same (*B*_{1}, *B*_{2}), the rate-split between ${B}_{1}^{\prime}$ and ${B}_{1}^{\u2033}$ when **X**_{1} is applied to terminal 1 is not identical to that when **X**_{2} is applied to terminal 1.

Figure 4 also shows the rate region achievable if each source is independently compressed using the KLT (i.e., only intra-vector correlation is utilized), labeled IKLT, and the OPTA lower bound for distributed TC predicted by the iterative DKLT algorithm [8]. The performance of both distributed and non-distributed TC of *source model A* degrades as *ρ* decreases, since both auto- and cross-covariance matrices of **X**_{1} and **X**_{2} are functions of *ρ*. Next consider the *source model B* for which the lowest achievable rate-pairs corresponding to *D* = 0.005 are plotted in Figure 5. The source parameter *α* = 0.32 results in auto-covariance matrices whose largest off-diagonal element is 0.9. Also recall that *θ* is the largest element in the source cross-covariance matrix. Note that changing *θ* only affects the cross-covariance matrix, and hence has no effect on the best achievable rates for independent coding of the two sources. On the other hand, as *θ* increases, the rates achievable with distributed coding do improve. Since in *source model B*, the two sources are statistically similar, the curves in Figure 5 are symmetric in rates and the optimal bit allocation does not depend on which source is chosen for splitting.

The RD performance in Figures 4 and 5 indicates that SP-DKLT codes can significantly outperform IKLT codes at all rates when there is sufficient correlation between the two distributed sources. The performance of SP-DKLT coding necessarily approaches the OPTA (DKLT) bound when either *R*_{1} or *R*_{2} is sufficiently high. That is, the terminal with the higher bit rate can independently transform code its input with negligible distortion, and the other terminal can then apply WZ-TC at the minimum rate achievable with "almost unquantized" decoder side-information. However, for both source models the rate-region achievable with source-splitting is strictly inside that of the DKLT. In other words, there are some rate-pairs inside the DKLT rate-region for a given MSE *D* which cannot be achieved by a SP-DKLT code. A closely related issue is that, for a range of values of (*R*_{1}, *R*_{2}), the sum-rate *R*_{1} + *R*_{2} of SP-DKLT codes remains constant at its minimum. For example, it can be seen from Figure 4 and Table 1 that the sum-rate is about 4.125 bits when the rate of **X**_{1} is in the range 1.375 - 2.5 bits/sample. From Table 1, it can be seen that when the sum-rate is greater than its minimum value, the optimal SP-DKLT code approaches a WZ transform code, i.e., no source-splitting occurs. This situation, which also exists in Figure 5, suggests that, at the minimum sum-rate for which a given *D* can be achieved, optimal SP-DKLT codes are equivalent to time-sharing [12] between two "corner points". Figure 6 illustrates this situation for optimal SP-DKLT codes at *D* = 0.005 for *source model B* (*θ* = 0.9 in Figure 5). It should however be noted that, unlike source-splitting, time-sharing between the two terminals requires synchronization of their encoders [11].

### 4.2 Design examples

In this section, we focus on the practical design of SP-DKLT codes for a given pair of rates (*R*_{1}, *R*_{2}), based on both scalar and block quantization. The RD-WZQ model used in Section 3.1.1 implies infinite block-length WZ-VQ of each coefficient. A practically realizable approach to block WZ quantization is SWC-TCQ [20]. Experimental results obtained with LDPC codes of block length up to 10^{6} bits and TCQs with up to 8,192 states are presented in [20] for quadratic Gaussian WZ quantization, which indicate that performance very close to the theoretical limit can be achieved with SWC-TCQ. Motivated by these results, we aim to implement SP-DTCs which can approach the theoretical performance predicted in Section 3.1.1, using TCQ and SW coding for encoding transform coefficients. However, the SWC-TCQ design procedure followed in [20] is to first design a TCQ whose MSE satisfies a constraint (by choosing a sufficiently high rate) and then to estimate the output conditional entropy (which is the target rate of the SW code) of the resulting TCQ. This is sufficient for verifying the achievable rate pairs for a given MSE, which is the goal of [20]. Our problem is different in that the rate of the SW code is specified by the solution to the bit-allocation problem, and our goal is to design a TCQ which minimizes the MSE subject to a constraint on the output conditional entropy. This requires an alternative formulation of the design procedure, which we refer to as CEC-TCQ. In previous work on non-distributed quantization, entropy constrained TCQ (EC-TCQ) has been investigated in [21–24]. CEC-TCQ is a modification of the EC-TCQ of [21, 22] to accommodate block SW coding of the TCQ output relative to a decoder side-information sequence. Our formulation of CEC-TCQ follows the *superset entropy* formulation of EC-TCQ in [22].

Suppose that a sequence of source samples {*U*_{n} ∈ ℝ} has to be quantized, given that the sequence {*Y*_{n} ∈ ℝ} is available at the decoder as side-information, where *n* = 1, 2, . . . denotes the discrete-time index. Similar to an ordinary TCQ [25], a CEC-TCQ uses a size ${2}^{{R}_{\mathsf{\text{TCQ}}}+1}$ scalar codebook to quantize the input sequence *U*_{1}, *U*_{2}, . . . into a *R*_{TCQ} bits/sample output sequence *Û*_{1}, *Û*_{2}, . . . . However, the CEC-TCQ output satisfies the additional property that the conditional entropy *H*(*Û*_{n}|*Y*_{n}) = *E*{- log_{2} *P*(*Û*_{n}|*Y*_{n})} *≤ R* for some given *R*. It follows from [9] that if the CEC-TCQ output is SW-coded relative to the decoder side-information sequence {*Y*_{n}}, then *LR* bits are sufficient to (almost) losslessly transmit a sequence of *L* source samples as *L → ∞*. The optimal CEC-TCQ minimizes the MSE *E*{(*U*_{n} - *Û*_{n})^{2}}, subject to the constraint *E*{- log_{2} *P*(*Û*_{n}|*Y*_{n})} *≤ R*, or equivalently, minimizes the Lagrangian

$J=E\left\{{\left({U}_{n}-{\hat{U}}_{n}\right)}^{2}\right\}+\beta E\left\{-{\text{log}}_{2}P\left({\hat{U}}_{n}|{Y}_{n}\right)\right\},$

where *β* > 0 is the Lagrange multiplier. This implies that, given a specific sequence of input samples *u*_{1}, *u*_{2}, . . ., the CEC-TCQ encoder should use the Viterbi algorithm based on the path-cost function

$\rho \left({u}_{n},{c}_{k}\right)={\left({u}_{n}-{c}_{k}\right)}^{2}+\beta E\left\{-{\text{log}}_{2}P\left({b}_{i}|Y\right)\right\},\phantom{\rule{2em}{0ex}}\left(25\right)$

where *c*_{k} is the codeword considered at time *n* and *b*_{i} is its binary label, as defined next.

In a rate *R*_{TCQ} TCQ, each codeword *c*_{k}, $k=1,\dots ,{2}^{{R}_{\mathsf{\text{TCQ}}}+1}$, in the codebook is labeled with a *R*_{TCQ}-bit binary string [25]. Let *b*_{i}, $i=1,\dots ,{2}^{{R}_{\mathsf{\text{TCQ}}}}$, be these binary labels. Then, to compute (25), the cost *βE*{- log_{2} *P*(*b*_{i}|*Y*)} must also be stored for each binary label, where *Y* is the random variable representing the decoder side-information. For a fixed value of *β*, we can use a slight modification of the algorithm in [21] for optimizing the CEC-TCQ codebook, by replacing the codeword entropies *E*{- log_{2} *P*(*b*_{i})} by the conditional entropies *E*{- log_{2} *P*(*b*_{i}|*Y*)}, and by using a training sequence of (*U*_{n}, *Y*_{n}) pairs. In order to approximate the expectations by sample averages computed from training data, the side-information variable *Y* is discretized to $\mathit{\u0176}\in \left\{{\eta}_{1},\dots ,{\eta}_{\mathcal{Y}}\right\}$, where $\mathcal{Y}$ is a large enough positive integer. Then *E*{- log_{2} *P*(*B* = *b*_{i}|*Y*)} can be approximated by

$E\left\{-{\text{log}}_{2}P\left(B={b}_{i}|Y\right)\right\}\approx -\sum _{k=1}^{\mathcal{Y}}P\left(\mathit{\u0176}={\eta}_{k}|B={b}_{i}\right){\text{log}}_{2}P\left(B={b}_{i}|\mathit{\u0176}={\eta}_{k}\right),$

where $B\in \left\{{b}_{1},\dots ,{b}_{{2}^{{R}_{\mathsf{\text{TCQ}}}}}\right\}$ is the binary-labeled output of the TCQ. Given a TCQ codebook, the probabilities *P*(*B* = *b*_{i}|*Ŷ* = *η*_{k}) and *P*(*Ŷ* = *η*_{k}|*B* = *b*_{i}) can be estimated using the training set [it is sufficient to estimate *p*_{i,k} = *P*(*B* = *b*_{i}, *Ŷ* = *η*_{k}), $k=1,\dots ,\mathcal{Y},i=1,\dots ,N$]. To complete the design, it is necessary to search for the value of *β* for which *E*{- log_{2} *P*(*Û*_{n}|*Y*_{n})} *≈ R*, by repeating the codebook optimization for an appropriately chosen sequence of *β* values.
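The sample-average estimation of the stored label costs can be sketched as follows (a minimal illustration, assuming training data already reduced to label and side-information bin indices; the function and argument names are ours):

```python
import numpy as np

def conditional_label_costs(labels, y_disc, num_labels, num_bins, beta):
    """Estimate the stored cost beta*E{-log2 P(b_i|Y)} for each binary label
    from training data, via the sample-average approximation in the text.

    `labels` holds TCQ output label indices and `y_disc` the discretized
    side-information bin indices (two equal-length integer arrays).
    """
    # Joint histogram estimating p_{i,k} = P(B = b_i, Yhat = eta_k).
    joint = np.zeros((num_labels, num_bins))
    np.add.at(joint, (labels, y_disc), 1.0)
    joint /= joint.sum()
    p_y = joint.sum(axis=0, keepdims=True)               # P(Yhat = eta_k)
    p_b_given_y = joint / np.maximum(p_y, 1e-12)         # P(b_i | eta_k)
    p_y_given_b = joint / np.maximum(joint.sum(axis=1, keepdims=True), 1e-12)
    # E{-log2 P(b_i|Y)} ~= -sum_k P(eta_k|b_i) * log2 P(b_i|eta_k)
    with np.errstate(divide="ignore", invalid="ignore"):
        log_terms = np.where(p_b_given_y > 0, np.log2(p_b_given_y), 0.0)
    cond_ent = -(p_y_given_b * log_terms).sum(axis=1)
    return beta * cond_ent
```

These per-label costs correspond to the *βE*{- log_{2} *P*(*b*_{i}|*Y*)} terms added to the squared error in the Viterbi path cost, so the table need only be recomputed once per codebook-optimization pass.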

For block WZ-code designs, the transforms and the bit allocations are found by the RD-WZQ model (Section 3.1.1), and WZ quantizers are implemented using CEC-TCQ followed by binary SW coding. More specifically, the rate found by the bit allocation algorithm for each transform coefficient is used as the conditional entropy constraint in the design of a CEC-TCQ for that coefficient. As described in Section 2.3, the CEC-TCQ designs are based on scalar side-information obtained by a linear transform of the vector side-information at the decoder, see *Theorem* 3. All CEC-TCQ designs are based on the 8-state trellis used in JPEG2000 [[26], Figure 3.16]. For trellis encoding and decoding, a sequence length of 256 source samples has been used. For designing and testing the quantizers, sample sequences of length 5 × 10^{5} have been used. Since the main focus of this paper is the design of transforms and quantizers, we assume ideal SW coding of the binary output of each CEC-TCQ, so that our results do not depend on any particular SW coding method. In a practical implementation (e.g., [20]), near-optimal performance can be obtained by employing a sufficiently long SW code (note that the sequence length for SW coding can be chosen arbitrarily larger than the sequence length used for TCQ encoding). This type of coding is well suited for applications such as distributed image compression, where the coding is inherently block-based.

We also consider SP-DKLT code designs based on scalar quantization. In this case, the transforms and bit allocations are found by using the SWC-HRSQ model (Section 3.1.2). While it is possible to use the step-size predicted by the SWC-HRSQ model to design uniform quantizers, we found that such quantizers in practice do not satisfy the required entropy constraint at lower rates. We instead use conditional entropy constrained scalar quantizers (CEC-SQ), designed by modifying the algorithm in [27] to accommodate a conditional entropy constraint, similar to the CEC-TCQ approach above.

The reconstruction signal-to-noise ratio (RSNR) [with the MSE as given by (6)] of SP-DKLT code designs for *source model B* is shown in the rows labeled *Design* in Table 2, where SP-DKLT/CECSQ and SP-DKLT/CECTCQ refer to the scalar quantization and TCQ based designs, respectively. The rows labeled *Analytical* show the performance predicted by the SWC-HRSQ and RD-WZQ models upon which the transforms and bit-allocations are based (note however that the performance predicted by the SWC-HRSQ model is not necessarily an upper bound for CEC-SQ designs, which are not constrained to be uniform quantizers). We compare the performance of our SP-DKLT code designs with IKLT codes for which the bit-allocations are obtained by using either the entropy coded high-rate quantization model (for scalar quantizer design) [[4], Section 9.9] or the RD-optimal quantization model (for block quantizer design) [[16], Section 10.3.3] for Gaussian variables. The IKLT codes with scalar quantization have been implemented by using entropy constrained scalar quantizers (EC-SQ), while those with block quantizers have been implemented by using EC-TCQ [23], where we assume ideal entropy coding of the quantizer outputs. In Table 2, IKLT/ECSQ and IKLT/ECTCQ respectively refer to these designs.

From a practical viewpoint, the use of the DCT instead of the KLT is interesting [26]. We therefore consider the design of SP-DTCs based on the DCT, referred to as source-split distributed DCT (SP-DDCT) codes. Since the DCT is a fixed transform, we only need to optimize the bit allocations. To do this, we assume that the DCT is approximately a decorrelating transform for Gaussian vectors [4]. Then, the bit-allocations given by (9), (14), and (20) are still valid, provided that the eigenvalues in these expressions are replaced by the variances of the corresponding DCT coefficients. The rest of the design procedure is the same as that for SP-DKLT codes. The RSNR of the DCT-based designs is presented in Table 3 (again, the performance predicted by the SWC-HRSQ model is not an upper bound for the corresponding practical codes). The results show that, for this particular source model, even the scalar quantization based SP-DDCT codes outperform the TCQ-based IDCT codes.

## 5 Concluding remarks

Rate-distortion analysis and experimental results demonstrate that SP-DTC is a promising practical approach to implementing distributed VQ of high-dimensional correlated vectors. The comparisons shown in Table 2, as well as similar comparisons for *source model A* and the source model in [[8], Example 6], indicate that these codes can substantially outperform independent transform codes when there is sufficient inter-vector correlation. This approach has also been demonstrated to be effective for DCT-based systems. Therefore, the proposed approach can potentially be used in applications such as stereo image compression, where inter-camera communication is impractical. Our RD analysis however indicates that the achievable rate-region of SP-DKLT codes for jointly Gaussian sources is strictly inside that predicted by the DKLT of [8]. An interesting avenue of future work is to find implementable distributed transform codes which can achieve the rate-pairs below the "time-sharing" line in Figure 6. Another issue is the extension of the proposed approach to more than two vector sources. In principle, source-splitting can easily be applied to more than two sources. However, with more than two vector sources, the complexity of the bit-allocation will be significantly higher.

## 1 Appendix

### 1.1 Proof of Theorem 2

The optimality of the CKLT is proved in [5]. To prove the optimality of the bit allocation, consider high-rate scalar quantization of the transform coefficient *U*_{m} and ideal SW coding of the quantizer output *Û*_{m} at the rate *r*_{m} = *H*(*Û*_{m}| **Y**) bits/sample, *m* = 1, . . ., *M*_{1}, where *H*(*·|·*) denotes the conditional entropy [16]. In this case, the asymptotically optimal scalar quantizer for each coefficient is known to be uniform [5]. For high-rate uniform quantization, *H*(*Û*_{m}| **Y**) *≈ h*(*U*_{m}| **Y**) - log_{2}(Δ_{m}), where Δ_{m} is the quantizer step-size and *h*(*·|·*) is the conditional differential entropy [[16], Section 8.3]. Since the conditional variance of the transform coefficient *U*_{m}, given the side-information **Y**, is $E\left\{{U}_{m}^{2}|\mathbf{Y}\right\}={\lambda}_{m}$, and (*U*_{m}, **Y**) are jointly Gaussian, it follows that *h*(*U*_{m}| **Y**) = (1/2) log_{2} (2*πeλ*_{m}) [16], and hence ${\mathrm{\Delta}}_{m}=\sqrt{2\pi e{\lambda}_{m}}{2}^{-{r}_{m}}$. Therefore, the MSE of high-rate uniform quantization followed by ideal SW coding of *U*_{m} is

${D}_{m}=\frac{{\mathrm{\Delta}}_{m}^{2}}{12}=\frac{\pi e}{6}{\lambda}_{m}{2}^{-2{r}_{m}}.\phantom{\rule{2em}{0ex}}\left(27\right)$

Since (27) and (1) are the same within a constant factor of *πe/*6 (which is identical for all transform coefficients), it is easy to verify that the optimal bit-allocation solution under the SWC-HRSQ model is also given by (2). However, the MSE of the *m*th coefficient in this case is

and hence the overall MSE is given by (5). Furthermore,

for *m* = 1, . . ., *N*.

### 1.2 Proof of Theorem 3

For jointly Gaussian and mean-zero **X** and **Y**, there exists a matrix **A** and a mean-zero Gaussian vector **W**_{1} independent of **Y** such that **X** = **AY** + **W**_{1}, where $\mathbf{A}={\sum}_{XY}{\sum}_{Y}^{-1}$, ${\sum}_{{W}_{1}}={\sum}_{X|Y}=\mathbf{T}\mathbf{\Lambda}{\mathbf{T}}^{T}$, and **Λ** is the diagonal matrix of eigenvalues of ∑_{X|Y}. Furthermore, **U** = **T**^{T}**AY** + **W**_{2}, where **W**_{2} = **T**^{T}**W**_{1} is an uncorrelated Gaussian vector, since ${\sum}_{{W}_{2}}={\mathbf{T}}^{T}{\sum}_{{W}_{1}}\mathbf{T}=\mathbf{\Lambda}$ (note that for the CKLT, **T**^{-1} = **T**^{T}). Therefore ∑_{U|Y} = **Λ**. The MMSE estimate of **U** given **Y** is **Ũ** = *E*{**U**|**Y**} = **T**^{T}*E*{**X**| **Y**} = **T**^{T}**AY**. Thus, **U** = **Ũ** + **W**_{2} and **∑**_{U|Ũ} = **Λ**, and it follows that *U*_{i} is independent of *Ũ*_{j} if *j* ≠ *i* and var(*U*_{i}| *Ũ*_{i}) = var(*U*_{i}| **Y**), where var(*·|·*) denotes the conditional variance. Now, since *h*(*U*_{i}| **Y**) = *h*(*U*_{i}| *Ũ*_{i}), we conclude that *Ũ*_{i} is a sufficient statistic [16] for the decoder side-information **Y** in WZ quantization of *U*_{i}.

## References

1. Xiong Z, Liveris AD, Cheng S: Distributed source coding for sensor networks. *IEEE Signal Process Mag* 2004, 21(5):80-94.
2. Huang JY, Schultheiss P: Block quantization of correlated random vectors. *IEEE Trans Commun* 1963, 11(9):289-296.
3. Goyal V: Theoretical foundations of transform coding. *IEEE Signal Process Mag* 2001, 18(5):9-21.
4. Gersho A, Gray RM: *Vector Quantization and Signal Compression*. Kluwer Academic Publishers, Norwell MA, USA; 1992.
5. Rebollo-Monedero D, Rane S, Aaron A, Girod B: High-rate quantization and transform coding with side information at the decoder. *Signal Process* 2006, 86(11):3160-3179.
6. Vosoughi A, Scaglione A: Precoding and decoding paradigms for distributed vector data compression. *IEEE Trans Signal Process* 2007, 55(4):1445-1459.
7. Chen X, Tuncel E: Low-delay prediction and transform-based Wyner-Ziv coding. *IEEE Trans Signal Process* 2011, 59(2):653-666.
8. Gastpar M, Dragotti PL, Vetterli M: The distributed Karhunen-Loève transform. *IEEE Trans Inf Theory* 2006, 52:5177-5196.
9. Slepian D, Wolf JK: Noiseless coding of correlated information sources. *IEEE Trans Inf Theory* 1973, 19(4):471-480.
10. Gish H, Pierce JN: Asymptotically efficient quantizing. *IEEE Trans Inf Theory* 1968, 14:676-683.
11. Rimoldi B, Urbanke R: Asynchronous Slepian-Wolf coding via source-splitting. In *IEEE Int Symp Inform Theory (ISIT)*. Ulm, Germany; 1997:271.
12. Zamir R, Shamai S, Erez U: Nested linear/lattice codes for structured multiterminal binning. *IEEE Trans Inf Theory* 2002, 48(6):1250-1276.
13. Wagner AB, Tavildar S, Viswanath P: Rate region of the quadratic Gaussian two-encoder source-coding problem. *IEEE Trans Inf Theory* 2008, 54(5):1938-1961.
14. Wyner AD, Ziv J: The rate-distortion function for source coding with side information. *IEEE Trans Inf Theory* 1976, 22:1-10.
15. Yang Y, Stankovic V, Xiong Z, Zhao W: On multiterminal source code design. *IEEE Trans Inf Theory* 2008, 54(5):2278-2302.
16. Cover TM, Thomas JA: *Elements of Information Theory*. John Wiley, New York, USA; 1991.
17. Marco D, Neuhoff DL: The validity of the additive noise model for uniform scalar quantizers. *IEEE Trans Inf Theory* 2005, 51(5):1739-1755.
18. Berger T: *Rate Distortion Theory: A Mathematical Basis for Data Compression*. Prentice-Hall, Englewood Cliffs NJ, USA; 1971.
19. Vuran MC, Akan OB, Akyildiz IF: Spatio-temporal correlation: theory and applications for wireless networks. *Comput Netw* 2004, 45:245-259.
20. Yang Y, Cheng S, Xiong Z, Zhao W: Wyner-Ziv coding based on TCQ and LDPC codes. *IEEE Trans Commun* 2009, 57(2):376-387.
21. Fischer TR, Wang M: Entropy-constrained trellis-coded quantization. *IEEE Trans Inf Theory* 1992, 38(2):415-426.
22. Marcellin MW: On entropy-constrained trellis-coded quantization. *IEEE Trans Commun* 1994, 42:14-16.
23. Marcellin MW: Transform coding of images using trellis-coded quantization. In *Proc ICASSP*. Albuquerque NM, USA; 1990:2241-2244.
24. Farvardin N, Ran X, Lee CC: Adaptive DCT coding of images using entropy-constrained trellis-coded quantization. In *Proc ICASSP*. Minneapolis MN, USA; 1993:397-400.
25. Marcellin MW, Fischer TR: Trellis coded quantization of memoryless and Gauss-Markov sources. *IEEE Trans Commun* 1990, 38:82-93.
26. Taubman DS, Marcellin MW: *JPEG 2000: Image Compression Fundamentals, Standards and Practice*. Kluwer Academic Publishers, Norwell MA, USA; 2004.
27. Chou P, Lookabaugh T, Gray RM: Entropy-constrained vector quantization. *IEEE Trans Acoust Speech Signal Process* 1989, 37:31-42.

## Additional information

### Competing interests

The author declares that they have no competing interests.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## About this article

### Cite this article

Yahampath, P. Distributed transform coding via source-splitting.
*EURASIP J. Adv. Signal Process.* **2012, **78 (2012). https://doi.org/10.1186/1687-6180-2012-78

### Keywords

- distributed transform coding
- Wyner-Ziv quantization
- multi-terminal quantization
- Karhunen-Loève transform (KLT)
- optimal bit-allocation