*Proof of Lemma 1* Let \theta = \begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}, where *θ*_{1} and *θ*_{2} are also column vectors. Then, we have

\mathrm{A}\theta = \begin{bmatrix} \Phi_1 & 0 \\ 0 & \Phi_2 \end{bmatrix} \theta = \begin{bmatrix} \Phi_1 & 0 \\ 0 & \Phi_2 \end{bmatrix} \begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix} = \begin{bmatrix} \Phi_1 \theta_1 \\ \Phi_2 \theta_2 \end{bmatrix}

(65)

and

\|\mathrm{A}\theta\|_{l_2}^2 = \|\Phi_1\theta_1\|_{l_2}^2 + \|\Phi_2\theta_2\|_{l_2}^2

(66)

As *Φ*_{1} is an *M*_{1} × *N*_{1} Gaussian matrix whose entries are i.i.d. random variables drawn from a normal distribution with mean zero and variance \frac{1}{M_1}, and *Φ*_{2} is an *M*_{2} × *N*_{2} Gaussian matrix whose entries are i.i.d. random variables drawn from a normal distribution with mean zero and variance \frac{1}{M_2}, we establish

E\left(\|\Phi_1\theta_1\|_{l_2}^2\right) = \|\theta_1\|_{l_2}^2

(67)

and

E\left(\|\Phi_2\theta_2\|_{l_2}^2\right) = \|\theta_2\|_{l_2}^2

(68)

Hence, we have

E\left(\|\mathrm{A}\theta\|_{l_2}^2\right) = \|\theta_1\|_{l_2}^2 + \|\theta_2\|_{l_2}^2 = \|\theta\|_{l_2}^2

(69)
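
As a numerical sanity check on Eqs. (66) and (69), the following sketch (with hypothetical dimensions *M*_{1} = 64, *N*_{1} = 128, *M*_{2} = 32, *N*_{2} = 128) verifies the norm decomposition exactly and the expectation by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration
M1, N1, M2, N2 = 64, 128, 32, 128

theta1 = rng.standard_normal(N1)
theta2 = rng.standard_normal(N2)
theta = np.concatenate([theta1, theta2])

def draw_A():
    # Block-diagonal sensing matrix A = diag(Phi1, Phi2) with entries
    # drawn i.i.d. from N(0, 1/M1) and N(0, 1/M2), respectively.
    Phi1 = rng.standard_normal((M1, N1)) / np.sqrt(M1)
    Phi2 = rng.standard_normal((M2, N2)) / np.sqrt(M2)
    A = np.zeros((M1 + M2, N1 + N2))
    A[:M1, :N1] = Phi1
    A[M1:, N1:] = Phi2
    return A, Phi1, Phi2

A, Phi1, Phi2 = draw_A()

# Eq. (66): the squared l2 norm splits across the two blocks.
lhs = np.linalg.norm(A @ theta) ** 2
rhs = np.linalg.norm(Phi1 @ theta1) ** 2 + np.linalg.norm(Phi2 @ theta2) ** 2
assert abs(lhs - rhs) < 1e-9 * rhs

# Eq. (69): E(||A theta||^2) = ||theta||^2, estimated by Monte Carlo.
trials = 2000
est = np.mean([np.linalg.norm(draw_A()[0] @ theta) ** 2 for _ in range(trials)])
assert abs(est / np.linalg.norm(theta) ** 2 - 1.0) < 0.05
```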

Moreover, it is proved in [31] and [32] that

P\left(\left|\|\Phi_1\theta_1\|_{l_2}^2 - \|\theta_1\|_{l_2}^2\right| \ge \delta\|\theta_1\|_{l_2}^2\right) \le 2e^{-\frac{M_1\delta^2}{8}}

(70)

and

P\left(\left|\|\Phi_2\theta_2\|_{l_2}^2 - \|\theta_2\|_{l_2}^2\right| \ge \delta\|\theta_2\|_{l_2}^2\right) \le 2e^{-\frac{M_2\delta^2}{8}}

(71)

Therefore, we have

P\left(-\delta\|\theta_1\|_{l_2}^2 \le \|\Phi_1\theta_1\|_{l_2}^2 - \|\theta_1\|_{l_2}^2 \le \delta\|\theta_1\|_{l_2}^2\right) \ge 1 - 2e^{-\frac{M_1\delta^2}{8}}

(72)

and

P\left(-\delta\|\theta_2\|_{l_2}^2 \le \|\Phi_2\theta_2\|_{l_2}^2 - \|\theta_2\|_{l_2}^2 \le \delta\|\theta_2\|_{l_2}^2\right) \ge 1 - 2e^{-\frac{M_2\delta^2}{8}}

(73)

Then, it follows that

P\left(\left\{\left|\|\Phi_1\theta_1\|_{l_2}^2 - \|\theta_1\|_{l_2}^2\right| \le \delta\|\theta_1\|_{l_2}^2\right\} \cap \left\{\left|\|\Phi_2\theta_2\|_{l_2}^2 - \|\theta_2\|_{l_2}^2\right| \le \delta\|\theta_2\|_{l_2}^2\right\}\right) \ge 1 - 2e^{-\frac{M_1\delta^2}{8}} - 2e^{-\frac{M_2\delta^2}{8}}

(74)

We can use the union bound to show that

\begin{aligned} P\left(\left|\|\mathrm{A}\theta\|_{l_2}^2 - \|\theta\|_{l_2}^2\right| \ge \delta\|\theta\|_{l_2}^2\right) &\le P\left(\left\{\left|\|\Phi_1\theta_1\|_{l_2}^2 - \|\theta_1\|_{l_2}^2\right| \ge \delta\|\theta_1\|_{l_2}^2\right\} \cup \left\{\left|\|\Phi_2\theta_2\|_{l_2}^2 - \|\theta_2\|_{l_2}^2\right| \ge \delta\|\theta_2\|_{l_2}^2\right\}\right) \\ &\le P\left(\left|\|\Phi_1\theta_1\|_{l_2}^2 - \|\theta_1\|_{l_2}^2\right| \ge \delta\|\theta_1\|_{l_2}^2\right) + P\left(\left|\|\Phi_2\theta_2\|_{l_2}^2 - \|\theta_2\|_{l_2}^2\right| \ge \delta\|\theta_2\|_{l_2}^2\right) \\ &\le 2e^{-\frac{M_1\delta^2}{8}} + 2e^{-\frac{M_2\delta^2}{8}} \end{aligned}

(75)

For *δ* ∈ (0, 1), there certainly exists a constant *C*(*δ*) > 0 such that

e^{-\frac{M_1\delta^2}{8}} + e^{-\frac{M_2\delta^2}{8}} = e^{-MC(\delta)}

(76)

which yields that

C(\delta) = -\frac{\log\left(e^{-\frac{M_1\delta^2}{8}} + e^{-\frac{M_2\delta^2}{8}}\right)}{M}

(77)

Thus, we can conclude that

P\left(\left|\|\mathrm{A}\theta\|_{l_2}^2 - \|\theta\|_{l_2}^2\right| \ge \delta\|\theta\|_{l_2}^2\right) \le 2e^{-MC(\delta)}

(78)
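
The concentration bound of Eq. (78) can be checked numerically. The sketch below (hypothetical sizes, *δ* = 0.8) computes *C*(*δ*) as in Eq. (77) and confirms that the empirical deviation frequency stays below 2*e*^{−MC(δ)}:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes for illustration
M1, M2, N1, N2 = 64, 32, 128, 128
M = M1 + M2
delta = 0.8

theta = rng.standard_normal(N1 + N2)
norm2 = np.linalg.norm(theta) ** 2

# Empirical frequency of the deviation event in Eq. (78).
trials = 2000
deviations = 0
for _ in range(trials):
    Phi1 = rng.standard_normal((M1, N1)) / np.sqrt(M1)
    Phi2 = rng.standard_normal((M2, N2)) / np.sqrt(M2)
    n2 = (np.linalg.norm(Phi1 @ theta[:N1]) ** 2
          + np.linalg.norm(Phi2 @ theta[N1:]) ** 2)
    if abs(n2 - norm2) >= delta * norm2:
        deviations += 1

# C(delta) from Eq. (77) and the bound 2*exp(-M*C(delta)) from Eq. (78).
C = -np.log(np.exp(-M1 * delta**2 / 8) + np.exp(-M2 * delta**2 / 8)) / M
bound = 2 * np.exp(-M * C)

assert C > 0
assert deviations / trials <= bound
```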

*Proof of Corollary 1* The class *X* of interest is a finite set of objects *x*, which are voiced segments. We then denote

X = \left\{x_k : x_k \text{ is the } k^{\text{th}} \text{ frame of voiced speech signals}\right\}.

(79)

When the sensing matrix is a dense Gaussian random matrix, the projection vector of the (*i* + 1)^{th} frame of voiced speech signal *x*_{i + 1} is denoted by *y*_{i + 1}, and then

y_{i+1} = \Phi x_{i+1}.

(80)

In terms of Eq. (45) and Eq. (46), the entries of *y*_{i + 1} are i.i.d. Gaussian random variables with mean 0 and variance \frac{1}{M}\|x_{i+1}\|_{l_2}^2. The quantization vector of *y*_{i + 1} is denoted by

\widehat{y}_{i+1} = y_{i+1} + e_{i+1} = \Phi x_{i+1} + e_{i+1}

(81)

where e_{i+1} = \left[e_{i+1}(1) \; e_{i+1}(2) \; \cdots \; e_{i+1}(M)\right]^T is the quantization error vector of the (*i* + 1)^{th} frame. The quantization error vectors for all the voiced segments in *X* can be represented by a matrix \overline{e} = \left[e_1 \; e_2 \; \cdots \; e_{|X|}\right], where |*X*| denotes the cardinality of the set *X*. When *Q* = 32, according to the results in [30], *m* = 2.9. Then, for an adaptive quantizer, in light of Eq. (54), we have

E\left(\left(e_{i+1}(k)\right)^2\right) = 3.317 \times 10^{-3}\sigma_{i+1}^2 \quad (k = 1, 2, \cdots, M)

(82)

and

E\left(\|e_{i+1}\|_{l_2}^2\right) = M\,E\left(\left(e_{i+1}(k)\right)^2\right) = 3.317 \times 10^{-3} M \sigma_{i+1}^2.

(83)
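
The constant 3.317 × 10^{−3} can be made plausible by simulating a uniform quantizer on Gaussian samples. The midrise construction and the clipping rule below are assumptions for illustration, since Eq. (54) itself is not reproduced in this excerpt:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal simulation sketch of an adaptive uniform quantizer with Q = 32
# levels over [-m*sigma, m*sigma] and m = 2.9, as in the text.
Q, m, sigma = 32, 2.9, 1.0
step = 2 * m * sigma / Q  # quantization step size Delta

def quantize(y):
    # Midrise uniform quantizer; values outside the range are clipped to
    # the outermost reconstruction levels (an assumed overload rule).
    q = step * (np.floor(y / step) + 0.5)
    return np.clip(q, -m * sigma + step / 2, m * sigma - step / 2)

# Gaussian samples, matching the distribution of the projection entries.
y = sigma * rng.standard_normal(500_000)
mse = np.mean((quantize(y) - y) ** 2)

# The granular term alone is Delta^2/12 ≈ 2.738e-3 * sigma^2; with the
# overload contribution, the measured MSE is of the same order as the
# 3.317e-3 * sigma^2 of Eq. (82).
assert 2.5e-3 < mse < 4.5e-3
```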

We can find a subset of *X*, denoted by *V*, that can be represented as

V = \left\{k : \|x_k\|_{l_2}^2 = \|x_{i+1}\|_{l_2}^2, \; x_k \in X\right\}.

(84)

Let *ε* > 0 be defined by

\epsilon^2 = \sup_{j \in V} \|e_j\|_{l_2}^2.

(85)

There exists a constant *C*_{a} such that

\epsilon^2 = C_a E\left(\|e_{i+1}\|_{l_2}^2\right)

(86)

As \|e_{i+1}\|_{l_2}^2 \le \epsilon^2, we have

\|e_{i+1}\|_{l_2}^2 \le C_a E\left(\|e_{i+1}\|_{l_2}^2\right)

(87)

In this paper, we are concerned only with the impact of quantization on reconstruction. Therefore, we assume that \|\theta - \theta_K\|_{l_1} tends to zero. Since the voiced speech signal is compressible with respect to an orthonormal basis, we have

\begin{aligned} \|x_{i+1} - x_{i+1}^{*}\|_{l_2}^2 &= \|\Psi(\theta_{i+1} - \theta_{i+1}^{*})\|_{l_2}^2 \\ &= \|\theta_{i+1} - \theta_{i+1}^{*}\|_{l_2}^2 \le 3.317 \times 10^{-3} C_1^2 C_a M \sigma_{i+1}^2 \end{aligned}

(88)

where *x*_{i + 1}^{*} = *Ψθ*_{i + 1}^{*} and *θ*_{i + 1}^{*} is the solution to

\min \|\theta_{i+1}\|_{l_1} \quad \text{s.t.} \quad \|\widehat{y}_{i+1} - \Phi\Psi\theta_{i+1}\|_{l_2} \le \epsilon.

(89)

Therefore,

\begin{aligned} \text{SNR}_a &\ge 10\log_{10}\left(\frac{M\sigma_{i+1}^2}{3.317 \times 10^{-3} C_1^2 C_a M \sigma_{i+1}^2}\right) \\ &= 24.792 - 10\log_{10}\left(C_1^2 C_a\right). \end{aligned}

(90)

However, for a fixed quantizer, when \frac{\sigma_i}{\sigma_{i+1}} = 1.25, according to Eq. (56), we establish

E\left(\left(e_{i+1}(k)\right)^2\right) = 4.309 \times 10^{-3}\sigma_{i+1}^2.

(91)

Then, we have

E\left(\|e_{i+1}\|_{l_2}^2\right) = M\,E\left(\left(e_{i+1}(k)\right)^2\right) = 4.309 \times 10^{-3} M \sigma_{i+1}^2.

(92)

Let *ε*_{1} > 0 be defined by

\epsilon_1^2 = \sup_{j \in V} \|e_j\|_{l_2}^2.

(93)

There exists a constant C_{f_1} such that

\epsilon_1^2 = C_{f_1} E\left(\|e_{i+1}\|_{l_2}^2\right) = 4.309 \times 10^{-3} C_{f_1} M \sigma_{i+1}^2

(94)

Then we have

\begin{aligned} \text{SNR}_f &\ge 10\log_{10}\left(\frac{M\sigma_{i+1}^2}{4.309 \times 10^{-3} C_1^2 C_{f_1} M \sigma_{i+1}^2}\right) \\ &= 23.656 - 10\log_{10}\left(C_1^2 C_{f_1}\right) \end{aligned}

(95)

Similarly, when \frac{\sigma_i}{\sigma_{i+1}} = 0.75, we have

E\left(\left(e_{i+1}(k)\right)^2\right) = 8.305 \times 10^{-3}\sigma_{i+1}^2

(96)

Thus, we can establish that

\begin{aligned} \text{SNR}_f &\ge 10\log_{10}\left(\frac{M\sigma_{i+1}^2}{8.305 \times 10^{-3} C_1^2 C_{f_2} M \sigma_{i+1}^2}\right) \\ &= 20.8067 - 10\log_{10}\left(C_1^2 C_{f_2}\right) \end{aligned}

(97)
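
As a quick arithmetic check, the constant terms 24.792, 23.656, and 20.8067 in Eqs. (90), (95), and (97) equal −10 log_{10} of the corresponding quantization-noise coefficients:

```python
import math

# Constant terms of Eqs. (90), (95), and (97): -10*log10 of the
# corresponding quantization-noise coefficients.
assert abs(-10 * math.log10(3.317e-3) - 24.792) < 5e-3
assert abs(-10 * math.log10(4.309e-3) - 23.656) < 5e-3
assert abs(-10 * math.log10(8.305e-3) - 20.8067) < 5e-3
```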

Let C_q = \max\left(C_a, C_{f_1}, C_{f_2}\right), and then we obtain

\text{SNR}_a \ge 24.792 - 10\log_{10} C_1^2 C_q.

When \frac{\sigma_i}{\sigma_{i+1}} = 1.25, we have

\text{SNR}_f \ge 23.656 - 10\log_{10} C_1^2 C_q.

When \frac{\sigma_i}{\sigma_{i+1}} = 0.75, we have

\text{SNR}_f \ge 20.8067 - 10\log_{10} C_1^2 C_q.

*Proof of Corollary 2* When the CS matrix is the TBD matrix, we have

y_{i+1} = \mathrm{A}\theta_{i+1} = \begin{bmatrix} \Phi_1 & 0 \\ 0 & \Phi_2 \end{bmatrix} \theta_{i+1} = \begin{bmatrix} \Phi_1 & 0 \\ 0 & \Phi_2 \end{bmatrix} \begin{bmatrix} \theta_{i+1,1} \\ \theta_{i+1,2} \end{bmatrix}

where *θ*_{i + 1} is the coefficient vector of *x*_{i + 1} with respect to the DCT. In terms of Eqs. (50), (51), and (52), denote

\sigma_{i+1,1}^2 = \frac{1}{M_1}\|\theta_{i+1,1}\|_{l_2}^2

(98)

and

\sigma_{i+1,2}^2 = \frac{1}{M_2}\|\theta_{i+1,2}\|_{l_2}^2.

(99)

Moreover, according to the characteristics of the voiced segments, *σ*_{i + 1,1} ≫ *σ*_{i + 1,2}. For an adaptive quantizer, [−*mσ*_{i + 1,1}, *mσ*_{i + 1,1}] is used as the quantization range of the (*i* + 1)^{th} projection vector *y*_{i + 1}, and \Delta = \frac{2m\sigma_{i+1,1}}{Q}. In light of Eq. (54), for an adaptive quantizer, we have

E\left(\left(e_{i+1}(k)\right)^2\right) = 3.317 \times 10^{-3}\sigma_{i+1,1}^2 \quad (k = 1, 2, \cdots, M_1)

(100)

In terms of Eq. (56), we have

E\left(\left(e_{i+1}(k)\right)^2\right) \approx \frac{\Delta^2}{12} = 2.738 \times 10^{-3}\sigma_{i+1,1}^2 \quad (k = M_1 + 1, M_1 + 2, \cdots, M_1 + M_2)

(101)
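
The coefficient 2.738 × 10^{−3} in Eq. (101) follows directly from \Delta = \frac{2m\sigma_{i+1,1}}{Q} with *m* = 2.9 and *Q* = 32:

```python
# Step size of the adaptive quantizer, normalized by sigma_{i+1,1}:
# Delta = 2*m*sigma_{i+1,1}/Q with m = 2.9 and Q = 32.
m, Q = 2.9, 32
delta = 2 * m / Q
granular = delta ** 2 / 12  # the Delta^2/12 term of Eq. (101)
assert abs(granular - 2.738e-3) < 1e-5
```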

and

\begin{aligned} E\left(\|e_{i+1}\|_{l_2}^2\right) &= M_1 E\left(\left(e_{i+1}(M_1)\right)^2\right) + M_2 E\left(\left(e_{i+1}(M_1 + M_2)\right)^2\right) \\ &= 3.317 \times 10^{-3} M_1 \sigma_{i+1,1}^2 + 2.738 \times 10^{-3} M_2 \sigma_{i+1,1}^2 \end{aligned}

(102)

We can find a subset of *X*, denoted by *V*, that can be represented as

V = \left\{k : \|\theta_{k,1}\|_{l_2}^2 = \|\theta_{i+1,1}\|_{l_2}^2, \; \|\theta_{k,2}\|_{l_2}^2 = \|\theta_{i+1,2}\|_{l_2}^2, \; \theta_k \text{ is the DCT coefficient vector of } x_k, \; x_k \in X\right\}

(103)

We define \epsilon^2 = \sup_{j \in V} \|e_j\|_{l_2}^2. There exists a constant *C*_{b} such that

\epsilon^2 = C_b E\left(\|e_{i+1}\|_{l_2}^2\right)

(104)

Therefore, we establish

\|e_{i+1}\|_{l_2}^2 \le C_b E\left(\|e_{i+1}\|_{l_2}^2\right)

(105)

As stated in Corollary 1, we let \|\theta - \theta_K\|_{l_1} tend to zero. Thus, we establish

\|x_{i+1} - x_{i+1}^{*}\|_{l_2}^2 = \|\Psi(\theta_{i+1} - \theta_{i+1}^{*})\|_{l_2}^2 = \|\theta_{i+1} - \theta_{i+1}^{*}\|_{l_2}^2 \le C_1^2 C_b \left(3.317 \times 10^{-3} M_1 \sigma_{i+1,1}^2 + 2.738 \times 10^{-3} M_2 \sigma_{i+1,1}^2\right)

where *θ*_{i + 1}^{*} is the solution to

\min \|\theta_{i+1}\|_{l_1} \quad \text{s.t.} \quad \|\widehat{y}_{i+1} - \mathrm{A}\theta_{i+1}\|_{l_2} \le \epsilon

(106)

and then we have

x_{i+1}^{*} = \Psi\theta_{i+1}^{*}.

(107)

Then, we have

\begin{aligned} \text{SNR}_a &\ge 10\log_{10}\frac{M_1\sigma_{i+1,1}^2 + M_2\sigma_{i+1,2}^2}{C_1^2 C_b \left(3.317 \times 10^{-3} M_1 \sigma_{i+1,1}^2 + 2.738 \times 10^{-3} M_2 \sigma_{i+1,1}^2\right)} \\ &\ge 10\log_{10}\frac{M_1\sigma_{i+1,1}^2}{C_1^2 C_b \left(3.317 \times 10^{-3} M_1 \sigma_{i+1,1}^2 + 2.738 \times 10^{-3} M_2 \sigma_{i+1,1}^2\right)} \\ &= 10\log_{10}\frac{u_l}{C_1^2 C_b \left(3.317 \times 10^{-3} u_l + 2.738 \times 10^{-3} u_h\right)} \end{aligned}

(108)
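
The bound of Eq. (108) can be evaluated numerically. In the sketch below, the product *C*_{1}^{2}*C*_{b} is set to 1 purely for illustration, and the sample values of *u*_{l} and *u*_{h} are hypothetical:

```python
import math

def snr_a_bound(u_l, u_h, C=1.0):
    # Right-hand side of Eq. (108); C stands for the product C_1^2 * C_b,
    # set to 1 here purely for illustration.
    return 10 * math.log10(u_l / (C * (3.317e-3 * u_l + 2.738e-3 * u_h)))

# Hypothetical sample values of u_l and u_h.
b1 = snr_a_bound(100.0, 1.0)
b2 = snr_a_bound(100.0, 10.0)

assert b1 > b2  # a larger high-band term u_h lowers this bound
assert b1 < -10 * math.log10(3.317e-3)  # the u_h -> 0 limit of the bound
```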

Moreover, for a fixed quantizer, when \frac{\sigma_{i,1}}{\sigma_{i+1,1}} = 0.75, we can prove in the same way that there exists a constant C_{f_3} such that

\text{SNR}_f \ge 10\log_{10}\frac{u_l}{C_1^2 C_{f_3}\left(8.305 \times 10^{-3} u_l + 1.534 \times 10^{-3} u_h\right)}.

(109)

And when \frac{\sigma_{i,1}}{\sigma_{i+1,1}} = 1.25, we can prove in the same way that there exists a constant C_{f_4} such that

\text{SNR}_f \ge 10\log_{10}\frac{u_l}{C_1^2 C_{f_4}\left(4.39 \times 10^{-3} u_l + 4.2775 \times 10^{-3} u_h\right)}

(110)

Let C_p = \max\left(C_b, C_{f_3}, C_{f_4}\right), and then we can conclude that

\text{SNR}_a \ge 10\log_{10}\frac{u_l}{C_1^2 C_p\left(3.317 \times 10^{-3} u_l + 2.738 \times 10^{-3} u_h\right)}.

When \frac{\sigma_{i,1}}{\sigma_{i+1,1}} = 0.75, we have

\text{SNR}_f \ge 10\log_{10}\frac{u_l}{C_1^2 C_p\left(8.305 \times 10^{-3} u_l + 1.534 \times 10^{-3} u_h\right)}.

When \frac{\sigma_{i,1}}{\sigma_{i+1,1}} = 1.25, we have

\text{SNR}_f \ge 10\log_{10}\frac{u_l}{C_1^2 C_p\left(4.39 \times 10^{-3} u_l + 4.2775 \times 10^{-3} u_h\right)}.