Proof of Lemma 1 Let $\theta$ be formed from $\theta_1$ and $\theta_2$, where $\theta_1$ and $\theta_2$ are also column vectors. Then, we have
(65)
and
(66)
As $\Phi_1$ is an $M_1 \times N_1$ Gaussian matrix whose entries are i.i.d. random variables drawn according to a normal distribution with mean zero and variance $1/M_1$, and $\Phi_2$ is an $M_2 \times N_2$ Gaussian matrix whose entries are i.i.d. random variables drawn according to a normal distribution with mean zero and variance $1/M_2$, we establish
(67)
and
(68)
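As a concrete illustration of this setup, the following is a minimal sketch of drawing such sensing matrices, assuming the common compressed-sensing normalization in which the entry variances are $1/M_1$ and $1/M_2$; the sizes used are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_sensing_matrix(m, n, rng):
    """An m x n matrix with i.i.d. N(0, 1/m) entries, the usual
    normalization so that E[||Phi x||^2] = ||x||^2 for fixed x."""
    return rng.normal(loc=0.0, scale=1.0 / np.sqrt(m), size=(m, n))

Phi1 = gaussian_sensing_matrix(64, 256, rng)   # stands in for M1 x N1
Phi2 = gaussian_sensing_matrix(32, 128, rng)   # stands in for M2 x N2
```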
Hence, we have
(69)
Moreover, it is proved in [31] and [32] that
(70)
and
(71)
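Concentration results of this kind are typically stated in the following form; the constants shown are those of the standard argument and are given here only as a representative instance, assuming $\Phi$ is $M \times N$ with i.i.d. $\mathcal{N}(0, 1/M)$ entries:
\[
\Pr\left[\,\left|\,\|\Phi x\|_2^2 - \|x\|_2^2\,\right| \ge \varepsilon \|x\|_2^2\,\right] \le 2\exp\!\left(-M\left(\frac{\varepsilon^2}{4} - \frac{\varepsilon^3}{6}\right)\right), \qquad 0 < \varepsilon < 1 .
\]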
Therefore, we have
(72)
and
(73)
Then, it suffices to show that
(74)
We can use the union bound to show that
(75)
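For reference, the union bound here is the elementary inequality below, written as it would apply to the failure events of the two concentration bounds above ($A_1$ and $A_2$ are illustrative labels for those events):
\[
\Pr\left[\,A_1 \cup A_2\,\right] \le \Pr\left[\,A_1\,\right] + \Pr\left[\,A_2\,\right].
\]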
For every $\delta \in (0, 1)$, there is certainly a constant $C(\delta) > 0$ such that
(76)
which yields that
(77)
Thus, we can conclude that
(78)
Proof of Corollary 1 The class $X$ of interest is a finite set of objects $x$, which are voiced segments. Then denote
(79)
When the sensing matrix is the dense Gaussian random matrix, the projection vector of the $(i+1)$th frame of the voiced speech signal $x_{i+1}$ is denoted by $y_{i+1}$, and then
(80)
In terms of Eq. (45) and Eq. (46), the entries of $y_{i+1}$ are i.i.d. Gaussian random variables with mean 0 and a common variance. The quantization vector of $y_{i+1}$ is denoted by
(81)
where the additive term is the quantization error vector of the $(i+1)$th frame. The quantization error vectors for all the voiced segments in $X$ can be represented by a matrix, where $|X|$ denotes the cardinality of the set $X$. When $Q = 32$, according to the results in [30], $m = 2.9$. Then, for an adaptive quantizer, in light of Eq. (54), we have
(82)
and
(83)
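To make these constants concrete: for a uniform quantizer over $[-m\sigma, m\sigma]$ with $Q$ levels (an assumption about the quantizer model), $Q = 32$ and $m = 2.9$ give a step size $\Delta = 2m\sigma/Q \approx 0.181\sigma$, so each entry's quantization error is at most $\Delta/2$ for in-range samples. A minimal sketch:

```python
import numpy as np

def adaptive_uniform_quantizer(y, m=2.9, Q=32):
    """Uniformly quantize y over [-m*sigma, m*sigma] with Q levels, where
    sigma is estimated from y itself (the adaptive part of the scheme)."""
    sigma = np.std(y)
    lo, hi = -m * sigma, m * sigma
    delta = (hi - lo) / Q                    # step size: 2*m*sigma/Q
    y_in = np.clip(y, lo, hi - 1e-12)        # clip to the quantization range
    idx = np.floor((y_in - lo) / delta)      # cell index in 0 .. Q-1
    y_q = lo + (idx + 0.5) * delta           # reconstruct at cell midpoints
    return y_q, y_q - y                      # quantized vector and error e

# For samples inside the range, the entrywise error obeys |e| <= delta/2;
# with Q = 32 and m = 2.9, delta = 2*2.9*sigma/32 ≈ 0.181*sigma.
```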
We can find a subset of $X$, denoted by $V$, that can be represented as
(84)
Let $\varepsilon > 0$; then we have
(85)
There exists a constant $C_a$ such that
(86)
As, we have
(87)
In this paper, we are concerned only with the impact of quantization on reconstruction. Therefore, we assume that any other noise tends to zero. Since the voiced speech signal is compressible with respect to an orthonormal basis, we have
(88)
where $x_{i+1}^{*} = \Psi \theta_{i+1}^{*}$ and $\theta_{i+1}^{*}$ is the solution to
(89)
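A standard form for such a recovery program in quantized compressed sensing, written here as an illustrative assumption (with $y_{q,i+1}$ denoting the quantized measurement vector and $\epsilon$ an error-level parameter), is
\[
\theta_{i+1}^{*} = \arg\min_{\theta} \|\theta\|_1 \quad \text{subject to} \quad \left\| y_{q,i+1} - \Phi\Psi\theta \right\|_2 \le \epsilon .
\]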
Therefore,
(90)
However, for a fixed quantizer, when, according to Eq. (56), we establish
(91)
Then, we have
(92)
Let $\varepsilon_1 > 0$; then we have
(93)
There exists a constant such that
(94)
Then we have
(95)
Similarly, when, we have
(96)
Thus, we can establish that
(97)
Let and then we obtain
When, we have
When, we have
Proof of Corollary 2 When the CS matrix is the TBD matrix, we have
where $\theta_{i+1}$ is the coefficient vector of $x_{i+1}$ with respect to the DCT. In terms of Eqs. (50), (51), and (52), we then denote
(98)
and
(99)
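To make the DCT representation concrete, the following is a minimal sketch of computing the coefficient vector of a frame, assuming an orthonormal DCT-II convention (the frame length is a placeholder):

```python
import numpy as np
from scipy.fft import dct, idct

def dct_coefficients(x):
    """Orthonormal DCT-II coefficients theta of a frame x; the frame is
    recovered exactly by the inverse transform idct(theta, norm='ortho')."""
    return dct(x, norm='ortho')

frame = np.random.default_rng(1).standard_normal(256)  # illustrative frame
theta = dct_coefficients(frame)
assert np.allclose(idct(theta, norm='ortho'), frame)
```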
Moreover, according to the characteristic of the voiced segments, $\sigma_{i+1,1} \gg \sigma_{i+1,2}$. For an adaptive quantizer, $[-m\sigma_{i+1,1}, m\sigma_{i+1,1}]$ is used as the quantization range of the $(i+1)$th projection vector $y_{i+1}$. In light of Eq. (54), we then have
(100)
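Under the same uniform-quantizer assumption as in Corollary 1, this adaptive range yields the step size and worst-case entrywise error
\[
\Delta_{i+1} = \frac{2m\,\sigma_{i+1,1}}{Q}, \qquad \|e_{i+1}\|_{\infty} \le \frac{\Delta_{i+1}}{2} = \frac{m\,\sigma_{i+1,1}}{Q},
\]
where $e_{i+1}$ denotes the quantization error vector (the symbol is illustrative) and the samples are assumed to fall inside the quantization range.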
And in terms of Eq. (56), we have
(101)
and
(102)
We can find a subset of $X$, denoted by $V$, that can be represented as
(103)
With the analogous definition, there exists a constant $C_b$ such that
(104)
Therefore, we establish
(105)
As stated in Corollary 1, we assume that any other noise tends to zero. Thus, we establish the corresponding bound, where $\theta_{i+1}^{*}$ is the solution to
(106)
and then we have
(107)
Then, we have
(108)
Moreover, for a fixed quantizer, when, we can prove in the same way that there exists a constant such that
(109)
And when, we can prove in the same way that there exists a constant such that
(110)
Let, and then we can conclude that
When, we have
When, we have