A Generalized Algorithm for Blind Channel Identification with Linear Redundant Precoders

It is well known that redundant filter bank precoders can be used for blind identification as well as equalization of FIR channels. Several algorithms exploiting trailing zeros at the transmitter have been proposed in the literature. In this paper we propose a generalized algorithm of which the previous algorithms are special cases. By carefully choosing system parameters, we can jointly optimize the system performance and computational complexity. Both time domain and frequency domain versions of the channel identification algorithm are proposed. Simulation results show that the proposed algorithm outperforms the previous ones when the parameters are optimally chosen, especially in time-varying channel environments. A new concept of generalized signal richness for vector signals is introduced and several of its properties are studied.


INTRODUCTION
Wireless communication systems often suffer from multipath fading, which makes the channels frequency selective. Channel coefficients are often unknown to the receiver, so channel identification must be done before equalization can be performed. Among techniques for identifying unknown channel coefficients, blind methods have long been of great interest. In the literature many blind methods have been proposed based on the knowledge of second-order statistics (SOS) or higher-order statistics of the transmitted symbols [1,2]. These methods often need to accumulate a large number of received symbols before the channel coefficients can be estimated accurately. This requirement is a disadvantage when the system is working over a fast-varying channel. A deterministic blind method using redundant filter bank precoders was proposed by Scaglione et al. [3] by exploiting trailing zeros introduced at the transmitter. Figure 1 shows a typical linear redundant precoded system. Source symbols are divided into blocks of size M and linearly precoded into P-symbol blocks which are then transmitted over the channel. It is well known that when P ≥ M + L, where L is the maximum order of the FIR channel, interblock interference (IBI) can be completely eliminated in the absence of noise. When the block size M increases, the bandwidth efficiency η = M/(M + L) approaches unity asymptotically. The deterministic method proposed in [3] (which we will call the SGB method) exploits trailing zeros of length L introduced in each transmitted block and assumes the input sequence is rich, that is, the matrix composed of finitely many source blocks has full row rank.
The method in [3] requires the receiver to accumulate at least M blocks before the channel coefficients can be identified. This prevents the system from identifying channel coefficients accurately when the channel is fast-varying, especially when the block size M is large. More recently, Manton and Neumann pointed out that the channel could be identifiable with only two received blocks [4]. An algorithm that views the channel identification problem as finding the greatest common divisor (GCD) of two polynomials is proposed in [5] (which we will call the MNP method). Even though it greatly reduces the number of received blocks needed for channel identification, the algorithm has a much higher computational complexity, especially when the block size M is large.
In this paper, we propose a generalized algorithm of which the SGB algorithm proposed in [3] and the MNP algorithm in [5] are both special cases. By carefully choosing parameters, the system performance and computational complexity can be jointly optimized. The rest of the paper is organized as follows. Section 2 describes the system structure with linear precoder filter banks and reviews several existing blind algorithms. In Section 3 we present the generalized algorithm and derive the conditions on the input sequence under which the algorithm operates properly. In Section 4 we propose a frequency domain version of the generalized algorithm. The concept of generalized signal richness is introduced in Section 5 and some properties thereof are studied in detail. Simulation results and complexity analysis of both time and frequency domain approaches are presented in Section 6. In particular, simulations under time-varying channel environments are presented to demonstrate the strength of the proposed algorithm against channel variation. Finally, conclusions are made in Section 7. Some of the results in the paper have been presented at a conference [6].

Notations
Boldfaced lower-case letters represent column vectors. Boldfaced upper-case letters and calligraphic upper-case letters are reserved for matrices. Superscripts as in A^T and A^† denote the transpose and transpose-conjugate operations, respectively, of a matrix or a vector. All the vectors and matrices in this paper are complex-valued. In the figures "↑ P" represents an expander and "↓ P" a decimator [7].

Redundant filter bank precoders
Consider the multirate communication system [8] depicted in Figure 1. The source symbols s_1(n), s_2(n), ..., s_M(n) may come from M different users or from a serial-to-parallel operation on the data of a single user. For convenience we consider the blocked version s(n) as indicated. The vector s(n) is precoded by a P × M matrix R(z), where P > M. The information with redundancy is then sent over the channel H(z). We assume H(z) is an FIR channel with a maximum order L, that is, H(z) = h_0 + h_1 z^{-1} + ··· + h_L z^{-L}. The signal is corrupted by channel noise e(n). The received symbols y(n) are divided into P × 1 block vectors y(n). The M × P matrix G(z) is the channel equalizer and ŝ_1(n), ŝ_2(n), ..., ŝ_M(n) are the recovered symbol streams. Also, for simplicity we define h as the column vector [h_0 h_1 ··· h_L]^T. We set P = M + L, that is, the redundancy introduced in a block is equal to the maximum channel order.

Trailing zeros as transmitter guard interval
Suppose we choose the precoder R(z) = [R_1^T 0^T]^T, where R_1 is an M × M constant invertible matrix and the L × M zero matrix 0 represents zero padding of length L in each transmitted block, as indicated in Figure 2. For simplicity of describing the algorithms, in this section we assume the noise is absent.
[Figure 2 shows the transmitted block structure: the precoded vector u(n) = R_1 s(n) followed by a block of L zeros; the channel adds noise e(n).]

In the absence of IBI, each received block can be written as y(n) = H_M u(n), where H_M = T(h, M) is the full-banded Toeplitz channel matrix. As long as the vector h is nonzero, the matrix H_M has full column rank M. Now, we assume the signal s(n) is rich, that is, there exists an integer J such that the matrix S = [s(0) s(1) ··· s(J − 1)] has full row rank M. Since R_1 is an M × M invertible matrix, we conclude that the P × J matrix Y = [y(0) y(1) ··· y(J − 1)] has rank M. So there exist L linearly independent vectors that are left annihilators of Y. In other words, there exists a P × L matrix U_0 such that U_0^† Y = 0 and hence U_0^† H_M = 0. The channel coefficients h can then be determined by solving (5). In practice, where channel noise is present, the computation of the annihilators is replaced with the computation of the left singular vectors corresponding to the smallest L singular values of Y. In this and the following sections, the channel noise term is not shown explicitly.
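As a concrete illustration of the SGB step, the following numpy sketch (our own illustration, not the authors' code) assumes no noise and an identity precoder R_1 = I, so that u(n) = s(n), and verifies that the left annihilators of Y also annihilate H_M:

```python
import numpy as np

rng = np.random.default_rng(1)
M, L = 4, 2                      # block size and channel order
P, J = M + L, M                  # P = M + L; at least J = M blocks are needed
h = rng.standard_normal(L + 1)   # unknown FIR channel

# With R1 = I and trailing zeros, each received block is y(n) = conv(h, s(n))
S = rng.standard_normal((M, J))                                   # rich source blocks
Y = np.column_stack([np.convolve(h, S[:, j]) for j in range(J)])  # P x J

# Y has rank M; its L-dimensional left null space annihilates H_M = T(h, M)
U, sv, _ = np.linalg.svd(Y)
U0 = U[:, M:]                                      # P x L annihilators
HM = np.column_stack([np.concatenate([np.zeros(m), h, np.zeros(M - 1 - m)])
                      for m in range(M)])          # full-banded Toeplitz T(h, M)
print(np.allclose(U0.T @ HM, 0))                   # True: annihilators kill H_M
```

With noise, U0 would instead be taken from the left singular vectors associated with the L smallest singular values, as described above.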
Note that this algorithm [3] works under the assumption that S has full row rank M. Obviously J ≥ M is a necessary condition for this assumption. This means the receiver must accumulate at least M blocks (i.e., a duration of M(M + L) symbols) before channel identification can be performed. This could be a disadvantage when the system is working over a fast-varying channel.

The GCD approach
Another approach proposed in [5] requires only two received blocks for blind channel identification. Recall that the channel action on each block is described by y(n) = H_M u(n) as in (6). By multiplying [1 x x^2 ··· x^{P−1}] to both sides of (6), we obtain y(x) = h(x)u(x), where y(x), h(x), and u(x) are polynomial representations of the output vector, channel vector, and input vector, respectively. This means (6) is nothing but a polynomial multiplication. Now, suppose we have two received blocks y(1) and y(2), and let y_1(x) = h(x)u_1(x) and y_2(x) = h(x)u_2(x) represent the polynomial forms of these. Then the channel polynomial h(x) can be found as the GCD of y_1(x) and y_2(x), given that the input polynomials u_1(x) and u_2(x) are coprime to each other.

EURASIP Journal on Advances in Signal Processing
To compute the GCD of y_1(x) and y_2(x), we first construct the (2P − 1) × 2P matrix [9] Y_P = [T(y_1, P) T(y_2, P)]. One can verify that Y_P = H_{M+P−1} U, where H_{M+P−1} = T(h, M + P − 1) and U = [T(u_1, P) T(u_2, P)]. When u_1(x) and u_2(x) are coprime to each other, it can be shown that the matrix U has full row rank M + P − 1 (see Section 5). Since H_{M+P−1} also has rank M + P − 1, rank(Y_P) = M + P − 1, and hence Y_P has L left annihilators (i.e., there exists a (2P − 1) × L matrix U_0 such that U_0^† Y_P = 0). These annihilators are also annihilators of each column of the matrix H_{M+P−1}, and we can therefore, in the absence of noise, identify the channel coefficients h_0, h_1, ..., h_L up to a scalar ambiguity. In the presence of noise, the columns of U_0 would be selected as the left singular vectors associated with the smallest L singular values of Y_P.
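The rank property that this construction relies on can be checked numerically. The following numpy sketch (our own illustration, with generic random inputs standing in for coprime symbol polynomials) builds Y_P from two noiseless blocks and verifies its rank:

```python
import numpy as np

def T(v, Q):
    """Full-banded Toeplitz matrix T(v, Q): Q shifted copies of v as columns."""
    out = np.zeros((len(v) + Q - 1, Q))
    for q in range(Q):
        out[q:q + len(v), q] = v
    return out

rng = np.random.default_rng(2)
M, L = 4, 2
P = M + L
h = rng.standard_normal(L + 1)
u1, u2 = rng.standard_normal(M), rng.standard_normal(M)   # coprime with prob. 1
y1, y2 = np.convolve(h, u1), np.convolve(h, u2)           # two received blocks

YP = np.hstack([T(y1, P), T(y2, P)])                      # (2P-1) x 2P
print(np.linalg.matrix_rank(YP))                          # M + P - 1 = 9
```

The left null space of YP then has dimension (2P − 1) − (M + P − 1) = L, matching the count of annihilators above.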

Connection to the earlier literature
The MNP method described above can be viewed as a dual version of the subspace methods proposed in the earlier literature in multichannel blind identification [10,11]. In the subspace method in [11], the single source can be estimated as the GCD of the received data from two (more generally N) different antennas. The MNP method [5] swaps the roles of data blocks and multichannel coefficients.

A GENERALIZED ALGORITHM
In this section we propose a generalized algorithm of which each of the two algorithms described in the previous section is a special case. Comparing the two algorithms described above, we find that the MNP approach needs far fewer received blocks for blind identifiability. However, it has a higher computational complexity: each received block is repeated P times to build a large matrix. Using the generalized algorithm, we can choose the number of repetitions and the number of received blocks freely as long as they satisfy a certain constraint.

Algorithm description
Observe (6) again and note that it can be rewritten as

T(y(n), Q) = T(h, M + Q − 1) T(u(n), Q), (11)

where T(·, ·) is defined as in (1). Here Q can be any positive integer. Note that in the MNP method Q is chosen as P, as described in the previous section. Suppose the receiver gathers J received blocks and forms

Y_Q^(J) = [T(y(0), Q) T(y(1), Q) ··· T(y(J − 1), Q)], (12)

U_Q^(J) = [T(u(0), Q) T(u(1), Q) ··· T(u(J − 1), Q)]. (13)

Note that U_Q^(J) has size (M + Q − 1) × QJ and Y_Q^(J) has size (P + Q − 1) × QJ. For notational simplicity, from now on we will use the subscript Q as in N_Q to denote N_Q = N + Q − 1, where N denotes a positive integer. In particular, M_Q = M + Q − 1 and P_Q = P + Q − 1. Notice that they still have the relationship

Y_Q^(J) = H_{M_Q} U_Q^(J), (14)

where H_{M_Q} = T(h, M_Q). Take the singular value decomposition of Y_Q^(J) and partition the left singular vectors as [U_1 U_0], where the columns of U_1 correspond to the nonzero singular values collected in the diagonal matrix Σ. The size of Σ is M_Q × M_Q since both H_{M_Q} and U_Q^(J) have full rank M_Q. The columns of the P_Q × L matrix U_0 are left annihilators of the matrix Y_Q^(J) and also of H_{M_Q}, since U_Q^(J) has full row rank. Suppose U_0 = [u_0 u_1 ··· u_{L−1}]. Form the Hankel matrices built from the entries of each u_i^†; since u_i^† annihilates every column of the Toeplitz matrix H_{M_Q}, stacking these Hankel matrices yields a linear system whose one-dimensional null space contains h. Vector h can thus be identified up to a scalar ambiguity.
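The whole procedure can be sketched in numpy as follows. This is our own illustration of the noiseless case; the variable names and the final stacked-Hankel step follow our reading of the construction above:

```python
import numpy as np

def toeplitz_shifts(v, Q):
    """T(v, Q): Q shifted copies of vector v as columns, size (len(v)+Q-1) x Q."""
    out = np.zeros((len(v) + Q - 1, Q), dtype=complex)
    for q in range(Q):
        out[q:q + len(v), q] = v
    return out

def identify_channel(y_blocks, M, L, Q):
    """Estimate h (up to a scalar) from J noiseless received blocks of length P = M + L."""
    MQ = M + Q - 1
    # Y_Q^(J) = [T(y(0),Q) ... T(y(J-1),Q)] = T(h, MQ) U_Q^(J)
    YQ = np.hstack([toeplitz_shifts(y, Q) for y in y_blocks])
    U, _, _ = np.linalg.svd(YQ)
    U0 = U[:, MQ:MQ + L]                       # left annihilators of T(h, MQ)
    # u^H T(h, MQ) = 0 gives, for each column j: sum_k conj(u[j+k]) h[k] = 0
    rows = [U0[:, i].conj()[j:j + L + 1] for i in range(L) for j in range(MQ)]
    _, _, Vh = np.linalg.svd(np.array(rows))
    return Vh[-1].conj()                       # null vector: h up to scale

# Example: M = 4, L = 2, Q = 3, J = 2 (satisfies the constraint Q(J-1) >= M-1)
rng = np.random.default_rng(0)
M, L, Q, J = 4, 2, 3, 2
h = rng.standard_normal(L + 1) + 1j * rng.standard_normal(L + 1)
u_blocks = [rng.standard_normal(M) + 1j * rng.standard_normal(M) for _ in range(J)]
y_blocks = [np.convolve(h, u) for u in u_blocks]   # noiseless received blocks
h_hat = identify_channel(y_blocks, M, L, Q)        # matches h up to a scalar
```

Setting Q = 1 with J ≥ M reproduces the SGB case, and Q = P with J = 2 the MNP case, as discussed below.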
B. Su and P. P. Vaidyanathan

Q-repetition and shifting operation
As we saw in the previous section, the repetition and shifting operation on a vector signal is crucial in the generalized algorithm. Figure 3 gives a block diagram of this operation. For future notational convenience, the subscript Q as in v_Q(n) denotes the result of this operation on a vector signal v(n): the Q columns of T(v(n), Q) are output sequentially. By viewing (11) and applying this operation on y(n) and u(n), we obtain the relationship y_Q(n) = H_{M_Q} u_Q(n) for any positive integer Q.

Special cases of the algorithm
The blind channel identification algorithm described above uses two parameters: (a) the number of received blocks J; (b) the number of repetitions per block Q. A number of points should be noted here: (1) the algorithm works for any J and Q as long as U_Q^(J) has full row rank M_Q. This is the only constraint for choosing the parameters J and Q; (2) if we choose Q = 1 and J ≥ M, then the algorithm reduces to the SGB algorithm [3]; (3) if we choose Q = P and J = 2, it becomes the MNP algorithm [5].
So both the SGB method and the MNP method are special cases of the proposed algorithm. Since U_Q^(J) has size M_Q × QJ, a necessary condition for it to have full row rank is QJ ≥ M + Q − 1, or equivalently Q(J − 1) ≥ M − 1. (19) Also note that we cannot choose J = 1, since U_Q^(1) can never have full rank unless the block size M = 1. This is consistent with the theory that two blocks are required for blind channel identification [4]. While the inequality (19) is a necessary condition for U_Q^(J) to have full rank, it is not sufficient because full rank also depends on the values of the entries of u(n). Nevertheless, when inequality (19) is satisfied, the probability of U_Q^(J) having full rank is usually close to unity in practice, especially when a large symbol constellation is used. Thus, Q = ⌈(M − 1)/(J − 1)⌉ appears to be a selection that minimizes the computational cost given the number of received blocks J. A detailed study of the conditions for U_Q^(J) to have full rank is presented in Section 5.
When J = 2, Q can be chosen as small as M − 1 rather than P. If we take J = 3, Q = ⌈(M − 1)/2⌉ makes the matrix Y_Q^(J) roughly half as large. We can choose Q = 1 only when J ≥ M. This coincides with the SGB algorithm, which uses a richness assumption [3].
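For instance, the smallest Q satisfying the necessary condition (19) for a given J can be computed with a trivial helper (ours):

```python
import math

def min_repetitions(M, J):
    """Smallest Q satisfying the necessary condition Q(J - 1) >= M - 1 of (19)."""
    if J < 2:
        raise ValueError("blind identification needs at least J = 2 blocks")
    return math.ceil((M - 1) / (J - 1))

# For M = 8 (as in the simulations later in the paper):
print(min_repetitions(8, 2))   # 7  (= M - 1, cf. the J = 2 case above)
print(min_repetitions(8, 3))   # 4  (= ceil((M - 1)/2))
print(min_repetitions(8, 8))   # 1  (Q = 1 becomes possible once J >= M, the SGB regime)
```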

FREQUENCY DOMAIN APPROACH
In this section we slightly modify the blind identification algorithm to directly estimate the frequency responses of the channel at different frequency bins and equalize the channel in the frequency domain. We call the modified algorithm the frequency domain approach. Some of the ideas come from [12]. The receiver structure for the frequency domain approach is shown in Figure 4. To demonstrate how this system works, observe the P_Q × M_Q full-banded Toeplitz channel matrix H_{M_Q} = T(h, M_Q). Define a row vector v_ρ^T = [1 ρ^{−1} ··· ρ^{−(P_Q−1)}] with ρ a nonzero complex number. Due to the full-banded Toeplitz structure, v_ρ^T H_{M_Q} = H(ρ)[1 ρ^{−1} ··· ρ^{−(M_Q−1)}], where H(ρ) = h_0 + h_1 ρ^{−1} + ··· + h_L ρ^{−L} is the channel z-transform evaluated at z = ρ.
Let N be chosen as an integer greater than or equal to P_Q, and let ρ_1, ρ_2, ..., ρ_N be distinct nonzero complex numbers. Consider an N × P_Q matrix V_{N×P_Q} whose ith row is v_{ρ_i}^T. It is easy to verify that V_{N×P_Q} H_{M_Q} = Λ_N V_{N×M_Q}, where Λ_N = diag(H(ρ_1), H(ρ_2), ..., H(ρ_N)) is a diagonal matrix with frequency domain channel coefficients as the diagonal entries. Now, when we gather received blocks and repeat them as in (12), we get the matrix Y_Q^(J). Since we have Y_Q^(J) = H_{M_Q} U_Q^(J) in the absence of noise, by multiplying V_{N×P_Q} and Y_Q^(J), we have Z = V_{N×P_Q} Y_Q^(J) = Λ_N V_{N×M_Q} U_Q^(J). Recall that rank(Y_Q^(J)) = rank(U_Q^(J)) = M_Q. Since ρ_1, ρ_2, ..., ρ_N are all distinct, the matrix Z has the same rank as Y_Q^(J). The dimension of the left null space of the matrix Z is hence N − M_Q. By performing SVD on Z, we can find these N − M_Q left annihilators of Z, which are also annihilators of Λ_N V_{N×M_Q}. That is, there exists an N × (N − M_Q) matrix U_0 such that U_0^† Z = 0; since U_Q^(J) has full rank, this implies U_0^† Λ_N V_{N×M_Q} = 0. (28) Define the row vector h_N = [H(ρ_1) H(ρ_2) ··· H(ρ_N)]. Then, by observing the (i, j)th entry of (28), we have, for all i, j, Σ_{n=1}^{N} [U_0^†]_{in} ρ_n^{−(j−1)} H(ρ_n) = 0. (30) Form the M_Q × N matrices U_i, 1 ≤ i ≤ N − M_Q, whose (j, n)th entry is [U_0^†]_{in} ρ_n^{−(j−1)}, and stack them into a matrix U. Then from (30) we have U h_N^T = 0. The frequency domain channel coefficients h_N can thus be estimated by solving this equation. After the frequency domain channel coefficients are estimated, the received symbols can be equalized directly in the frequency domain, as in DMT systems.
Recall that we have the freedom to choose N as any integer greater than or equal to P_Q and the values of ρ_i, 1 ≤ i ≤ N, as any distinct nonzero complex numbers in the z-domain. In this paper, we use N = P_Q and choose the ρ_i as equally spaced points on the unit circle, ρ_i = e^{j2π(i−1)/P_Q}, so that the multiplication by V_{N×P_Q} can be implemented with an FFT. Note that since H(z) is an Lth order system, at most L of the values H(ρ_i) can be zero (channel nulls). By choosing N ≥ P_Q, there are at least M_Q nonzero values among H(ρ_i), i = 1, 2, ..., N. In practice we can choose to equalize the received symbols in the frequency bins associated with the largest M_Q frequency responses H(ρ_i) to enhance the system performance. This provides resistance to channel nulls.
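The identity v_ρ^T H_{M_Q} = H(ρ)[1 ρ^{−1} ···] on which the frequency domain approach rests can be checked numerically; the following numpy sketch (ours) verifies it for one point ρ on the unit circle:

```python
import numpy as np

rng = np.random.default_rng(3)
L, MQ = 2, 6
PQ = MQ + L
h = rng.standard_normal(L + 1) + 1j * rng.standard_normal(L + 1)

# Full-banded Toeplitz H_MQ = T(h, MQ), size PQ x MQ
H = np.zeros((PQ, MQ), dtype=complex)
for m in range(MQ):
    H[m:m + L + 1, m] = h

rho = np.exp(2j * np.pi / PQ)                 # a point on the unit circle
v = rho ** -np.arange(PQ)                     # v_rho^T = [1, rho^-1, ..., rho^-(PQ-1)]
H_rho = np.sum(h * rho ** -np.arange(L + 1))  # channel z-transform at z = rho
print(np.allclose(v @ H, H_rho * rho ** -np.arange(MQ)))   # True
```

Because each column of H_{M_Q} holds a shifted copy of h, the inner product with v picks up the same value H(ρ) scaled by the shift ρ^{−j}, which is exactly the diagonalization used above.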

GENERALIZED SIGNAL RICHNESS
For the generalized blind channel identification method proposed in this paper to work properly, the matrix U_Q^(J) defined in (13) must have full row rank for the given parameters J and Q. An obvious necessary condition has been presented as inequality (19) in Section 3. The sufficiency, however, depends on the content of the signal u(n). When Q = 1 and u(n) is rich, there exists J such that U_1^(J) = [u(0) u(1) ··· u(J − 1)] has full rank. When Q > 1, u(n) requires another kind of richness property so that U_Q^(J) has full rank for a finite integer J. We call this property generalized signal richness and define it as follows.
Definition 1. An M × 1 signal s(n), n ≥ 0, is said to be (1/Q)-rich if there exists a finite integer J such that the matrix [T(s(0), Q) T(s(1), Q) ··· T(s(J − 1), Q)] has full row rank M + Q − 1.
Several interesting properties of generalized signal richness will be presented in this section. The reason why we use the notation of (1/Q) will soon be clear when these properties are presented.
Lemma 1. If an M × 1 signal s(n) is (1/Q)-rich, then it is (1/(Q + 1))-rich.

Proof. See the appendix.

Lemma 1 states a basic property of generalized signal richness: the smaller the value of Q is, the "stronger" the condition of (1/Q)-richness is. For example, if an M × 1 sequence s(n) is 1-rich, or simply rich, then it is (1/Q)-rich for any positive integer Q. On the contrary, a (1/2)-rich signal s(n) is not necessarily 1-rich. We can thus define a measure of generalized signal richness for a given M × 1 sequence s(n) as follows.

Definition 2.
Given an M × 1 sequence s(n), n ≥ 0, the degree of nonrichness of s(n) is defined as Q_min = min{Q ≥ 1 : s(n) is (1/Q)-rich}. Note that the larger the degree of nonrichness Q_min is, the weaker the richness of the signal s(n) is. If s(n) is not (1/Q)-rich for any Q, then Q_min = ∞. The property of an infinite degree of nonrichness can be described in the following lemma. We use the notation p_M(x) to denote the column vector p_M(x) = [1 x x^2 ··· x^{M−1}]^T.

Lemma 2. Consider an M × 1 sequence s(n). The following statements are equivalent: (1) s(n) is not (1/Q)-rich for any Q; (2) the degree of nonrichness of s(n) is infinite; (3) either there exists a complex number α such that [1 α ··· α^{M−1}]^T is an annihilator of s(n), or [0 ··· 0 1]^T is an annihilator of s(n); (4) either the polynomials p_n(x) = p_M^T(x)s(n), n ≥ 0, share a common zero (at α), or their orders are all less than M − 1.

Proof. See the appendix.
Note that the statement in condition (3) that [0 ··· 0 1]^T is an annihilator of s(n), and the statement in condition (4) that the polynomials p_n(x) all have orders less than M − 1, can be interpreted as the special situation where the common zero α is at infinity.
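The definitions above can be tested numerically. The sketch below (our own illustration) checks (1/Q)-richness via the rank of the block matrix in Definition 1, and confirms the Lemma 2 scenario: blocks annihilated by a vector [1 α ··· α^{M−1}]^T are not (1/Q)-rich for any Q:

```python
import numpy as np

def shifted_copies(s, Q):
    """T(s, Q): Q shifted copies of s as columns, size (M+Q-1) x Q."""
    out = np.zeros((len(s) + Q - 1, Q))
    for q in range(Q):
        out[q:q + len(s), q] = s
    return out

def is_one_over_Q_rich(blocks, Q):
    """True if [T(s(0),Q) ... T(s(J-1),Q)] has full row rank M + Q - 1."""
    UQ = np.hstack([shifted_copies(s, Q) for s in blocks])
    return np.linalg.matrix_rank(UQ) == len(blocks[0]) + Q - 1

rng = np.random.default_rng(4)
M, J = 4, 8
rich = [rng.standard_normal(M) for _ in range(J)]
print(is_one_over_Q_rich(rich, 1))        # generic blocks are 1-rich: True

# Blocks annihilated by [1, a, a^2, a^3]^T have Q_min = infinity (Lemma 2)
a = 0.5
v = a ** np.arange(M)
deficient = [s - (v @ s) / (v @ v) * v for s in rich]            # force v^T s(n) = 0
print(any(is_one_over_Q_rich(deficient, Q) for Q in range(1, M)))  # False
```

The second test works because [1, a, ..., a^{M+Q−2}]^T annihilates every shifted copy of such a block, so the matrix is rank deficient for every Q.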
If an M × 1 sequence s(n) has a finite degree of nonrichness, that is, s(n) is (1/Q)-rich for some integer Q, then it can be shown that the maximum possible value of Q_min is M − 1, as described in the following lemma.

Lemma 3. If an M × 1 sequence s(n) is (1/Q)-rich for some finite Q, then it is (1/(M − 1))-rich; that is, Q_min ≤ M − 1 whenever Q_min is finite.

(1/(M − 1))-richness is thus the weakest form of generalized richness. When using the MNP method [5], this weakest form of generalized richness is crucial. If this weakest form of richness of s(n) is not achieved, then by Lemma 2 s(n) has an infinite degree of nonrichness and the polynomials p_M^T(x)s(n) have a common factor (x − α). Then, as in Section 2.3, when we take the GCD of the polynomials representing the received blocks, the receiver would be unable to determine whether the factor (x − α) belongs to the channel polynomial or is a common factor of the symbol polynomials. Therefore, if the input signal s(n) has an infinite degree of nonrichness, all methods proposed in this paper will fail for all Q.
Furthermore, the MNP method proposed in [5] uses Q = P. Using Lemma 3, we see that Q = M − 1 is sufficient if we are computing the GCD of polynomials representing received blocks and the following two conditions hold: (1) the GCD is known to have a degree less than or equal to L; (2) the degree of each symbol polynomial is less than or equal to M − 1. Using Q = P is not only computationally unnecessary but also, as we will see in the simulation results in Section 6, sometimes gives a worse performance than Q = M − 1 in the presence of noise.
The sufficiency of Q = M − 1 can also be understood from the point of view of polynomial theory. Suppose polynomials a(x) and b(x) have degrees less than or equal to P − 1 and a greatest common divisor d(x) whose degree is less than or equal to L. Write a(x) = d(x)a_1(x) and b(x) = d(x)b_1(x), where a_1(x) and b_1(x) have degrees less than or equal to M − 1 and are coprime to each other. Then there exist polynomials p(x) and q(x), whose degrees are less than or equal to M − 2, such that 1 = p(x)a_1(x) + q(x)b_1(x), and thus d(x) = p(x)a(x) + q(x)b(x).

Connection to earlier literature
An earlier proposition mathematically equivalent to Lemma 3 has been presented in the single-input-multiple-output (SIMO) blind equalization literature [10,13]. We review it here briefly.

Proposition (see [10, 13]). Suppose (1) h[n] = 0 for all n < 0; (2) h[n] = 0 for all n ≥ M; and (3) Q ≥ M − 1. Then T_Q(h) has full column rank if and only if the J polynomials formed from the components of h[n] share no common zeros.
Here h[n] refers to the impulse response of a J × 1 channel, and Q stands for the observation period at the multiple-channel receiver end. Conditions (1) and (2) imply that the channel has a finite impulse response. Condition (3) can be met by increasing the observation period Q. While this earlier proposition focuses on the coefficients of multiple channels rather than the values of transmitted symbols, it is mathematically equivalent to the statement that s(n) is (1/(M − 1))-rich if and only if the polynomials p_M^T(x)s(n) do not share common zeros. The case of Q < M − 1, however, has not been considered earlier in the literature, to the best of our knowledge.

Remarks on generalized signal richness
In this section we introduced the concept of generalized signal richness. Given an M × 1 signal s(n), n ≥ 0, the degree of nonrichness Q_min was defined. For an input signal with a degree of nonrichness Q_min, we can choose any Q ≥ Q_min and some finite J for the generalized algorithm proposed in Section 3 to work properly. The possible values of Q_min are 1, 2, ..., M − 1, and ∞. If s(n) has an infinite degree of nonrichness, the algorithm proposed in this paper will fail for all Q. The degree of nonrichness of a signal s(n) directly depends on its content. A deeper study of the degree of nonrichness will be presented elsewhere [14].

SIMULATIONS AND DISCUSSIONS
In this section, several simulation results, comparisons, and discussions will be presented. We will first test our proposed method and compare it with the existing methods [3,5] described in Section 2. Secondly, we will compare the performances of time domain versus frequency domain approaches and show that under some channel conditions the frequency domain approach outperforms the time domain approach. Finally, we will analyze and compare the computational complexity of algorithms proposed in this paper.

Simulations of time domain approaches
A Rayleigh fading channel of order L = 4 is used. The size of transmitted blocks is M = 8 and the received block size is P = M + L = 12. The normalized least squared channel estimation error, denoted as E_ch, is used as the figure of merit for channel identification and is defined as E_ch = ||ĥ − h||² / ||h||², where ĥ and h are the estimated and the true channel vectors, respectively. The simulated normalized channel estimation error is shown in Figure 5 and the corresponding BER is presented in Figure 6. When the number of blocks is J = 10, the MNP method (with the number of block repetitions Q = 12) outperforms the SGB method (Q = 1) by a considerable margin. Taking Q = 2 saves a lot of computation and yet yields a good performance, as indicated. Furthermore, in the case of J = 2, the system with Q = 8 even outperforms the original MNP method with Q = 12. This also strengthens our argument in Section 5 that choosing Q as large as P is unnecessary.
Simulations of frequency domain approaches

For the frequency domain approach, the normalized least squared channel error is defined as E_ch = ||ĥ_N − h_N||² / ||h_N||², where h_N = [H(ρ_1) H(ρ_2) ··· H(ρ_N)] and ĥ_N is the estimate of h_N. Simulation results show that the frequency domain approach outperforms the time domain approach especially when the noise level is high. While the frequency domain approach does not in general beat the time domain approach for a random channel, it has been consistently observed that the frequency domain approach performs better when the last channel coefficient h_L has a small magnitude (i.e., at least one zero of H(z) is close to the origin).
Since we have the freedom to choose the values of the coefficients ρ_i, the receiver can adjust the ρ_i dynamically according to a priori knowledge of the approximate channel zero locations. This is especially useful when the channel coefficients change slowly from block to block.

Complexity analysis
For the algorithms presented in Section 3, the SVD computation dominates the computational complexity. The number of blocks J, the number of repetitions per block Q, and the received block size P decide the size of the matrix on which the SVD is taken. The complexity of the SVD operation on an n × m matrix [15] is on the order of O(mn²) with m ≥ n. Since Y_Q^(J) has size (P + Q − 1) × QJ, the complexity is O(QJ(P + Q − 1)²). We can see that the complexity can be greatly reduced by choosing a smaller Q. Recall that the SGB method [3] uses Q = 1 and the MNP method [5] uses Q = P. We thus have the following observations: (i) the MNP method has a complexity around 4P times that of the SGB method for any J; a choice of Q between 1 and P can be seen as a compromise between system performance and complexity; (ii) when J is large, we have the freedom to choose a smaller Q, as explained in the previous section.
For the frequency domain approach presented in Section 4, an additional matrix multiplication is required. This demands extra computational complexity on the order of O(J P_Q²). However, if the values ρ_i are chosen as equally spaced points on the unit circle, an FFT algorithm can be exploited and the computational complexity is reduced to O(J P_Q log P_Q), which is negligible compared to the complexity of the SVD operations.

Simulations for time-varying channels
In this section, we demonstrate the capability of the proposed generalized blind identification algorithm in time-varying channel environments. The received symbols can be expressed as y(n) = Σ_{k=0}^{L} h(n, k)x(n − k) + e(n), where the (L + 1)-tap channel coefficients h(n, k) vary as the time index n changes. We generate the channel coefficients as follows. During a time interval T, the channel coefficients change from h_1(k) to h_2(k), where h_1(k) and h_2(k), 0 ≤ k ≤ L, represent two sets of (L + 1)-tap independent coefficients. The variation of the coefficients is done by linear interpolation such that h(n, k) = (1 − n/T)h_1(k) + (n/T)h_2(k) for 0 ≤ n ≤ T. In our simulation, we choose T = 180. The coefficients of h_1(k) and h_2(k) are given in Table 1. The size of transmitted blocks is M = 8 and the received block size is P = M + L = 12 (so the channel coefficients completely change after 15 blocks). Simulations are performed under different choices of J and Q, as indicated in Figures 8 and 9. The normalized least squared channel error is defined as before, where ĥ is the estimated channel and h is the average of the channel coefficients during the time the channel is being estimated. In Figure 8 we see that when J = 10, the time range is too large for the algorithm to estimate the time-varying channel accurately. The performance for J = 2 is much better in the high-SNR region because the channel does not vary much during the time of two blocks. However, in the low-SNR region the performance for J = 2 degrades. The case J = 4 has the best performance among all choices because the channel does not vary much during the duration of four received blocks, while more data are available for accurate estimation. This simulation result provides clues about how to choose the optimal J: if the channel variation is fast (T is smaller), we need a smaller J, while we can use a larger J when T is larger.
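As a small illustration (our own sketch; the interpolation rule is our reading of the description above), the coefficient trajectory can be generated as:

```python
import numpy as np

def interpolated_channel(h1, h2, T, n):
    """Linearly interpolate between coefficient sets h1 and h2 over T samples.
    h1, h2 are (L+1)-vectors; n is the time index, clipped to [0, T]."""
    t = min(max(n / T, 0.0), 1.0)
    return (1 - t) * h1 + t * h2

# Hypothetical coefficient sets (the actual values are in Table 1 of the paper)
h1 = np.array([1.0, 0.5, 0.2])
h2 = np.array([0.3, -0.4, 0.9])
print(interpolated_channel(h1, h2, 180, 0))     # -> h1
print(interpolated_channel(h1, h2, 180, 180))   # -> h2
```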

Remarks on choosing the optimal parameters
According to the simulations results above, we summarize here a general guideline to choose a set of optimal parameters in practice.
(1) When the channel is constant and for a fixed Q, a larger J gives a better performance (as shown in Figure 5) since more data are available for accurate estimation. (2) When the channel is time-varying, the optimal choice of J depends on the speed of channel variation. The simulation results in Figures 8 and 9 suggest that when the channel coefficients completely change in N blocks, a choice of J ≈ N/4 could be appropriate. (3) When J is given, choosing Q as the smallest integer that satisfies inequality (19) often gives a satisfactory performance. A slightly larger Q can sometimes be better (see Figure 5 for J = 10) at the expense of a slightly increased complexity. However, if Q is too large, the performance can even become worse (see Figure 5 for J = 2, Q = 12).
The guidelines above are given by observing the simulation results. An analytically optimal set of J and Q is still under investigation.

Noise handling for large J
It should be noted that when J is very large (and Q = 1), the proposed method behaves like a traditional subspace method using second-order statistics. Suppose Y^(J) = H_M U^(J) + E^(J), where E^(J) is composed of J columns of noise vectors e(n).
The autocorrelation matrix of the received blocks can be estimated as R_yy = (1/J) Y^(J) (Y^(J))^†. If the input signal and the channel noise are uncorrelated, we can write R_yy as R_yy = H_M R_uu H_M^† + R_ee, where R_uu = E[u(n)u^†(n)] and R_ee = E[e(n)e^†(n)] are the autocorrelation matrices of the input blocks and noise vectors, respectively. If R_ee is known (e.g., if the noise is white with variance N_0, then R_ee = N_0 I_P), an improved estimate of the annihilators of the matrix H_M can be obtained by taking the eigendecomposition of R_yy − R_ee, which results in better channel estimation [3]. This technique, however, does not apply when J is small.
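The noise-subtraction step can be illustrated with exact (ensemble) statistics, assuming white noise and R_uu = I (a simplification of ours; in practice R_yy is the sample average above):

```python
import numpy as np

rng = np.random.default_rng(5)
M, L, N0 = 4, 2, 0.1
P = M + L
h = rng.standard_normal(L + 1)
HM = np.column_stack([np.concatenate([np.zeros(m), h, np.zeros(M - 1 - m)])
                      for m in range(M)])        # T(h, M), size P x M

# Idealized statistics: R_uu = I and white noise R_ee = N0 * I
Ryy = HM @ HM.T + N0 * np.eye(P)
Ree = N0 * np.eye(P)
w, V = np.linalg.eigh(Ryy - Ree)                 # eigenvalues in ascending order
U0 = V[:, :L]                                    # noise-subspace eigenvectors
print(np.allclose(U0.T @ HM, 0))                 # True: they annihilate H_M
```

After the subtraction, the L smallest eigenvalues of R_yy − R_ee are (ideally) zero, and their eigenvectors play the role of the annihilators U_0 used for channel identification.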

CONCLUDING REMARKS
In this paper we proposed a generalized algorithm for blind channel identification with linear redundant precoders. The number of received blocks J ≥ 2 can be chosen freely depending on the speed of channel variation. The minimum number of repetitions Q of each received block is derived to optimize the computational complexity while retaining good performance. Simulations show that when the system parameter Q is properly chosen, the generalized algorithm outperforms the previously reported special cases, especially in time-varying channel environments.
A frequency domain version of the generalized algorithm is also presented. Simulation results show that it outperforms the time domain approach in the low-SNR region for certain types of channels, for example, channels with a zero close to the origin. Since we have the freedom to choose different frequency parameters in the frequency domain approach, choices other than equally spaced grids on the unit circle can be used to improve the system performance for different channel zero locations. An even more challenging problem would be to analytically derive the optimal frequency points for a specific type of channel.
The concept of generalized signal richness for a vector signal was introduced. Once the degree of nonrichness of the input signal is determined, we can determine the minimum number of repetitions theoretically. A complete set of necessary and sufficient conditions for signals satisfying generalized signal richness is still under investigation. The study of the effect of a linear precoder on the property of generalized signal richness could also be a challenging problem.

P. P. Vaidyanathan received the B.Tech. and M.Tech. degrees in radiophysics and electronics from the University of Calcutta, and the Ph.D. degree in electrical and computer engineering from the University of California at Santa Barbara, in 1982. Since then he has been with the Faculty of Electrical Engineering at the California Institute of Technology. He has authored many papers in the signal processing area. He has received several awards for excellence in teaching at the California Institute of Technology. In 1989, he received the IEEE ASSP Senior Paper Award. In 1990, he was the recipient of the S. K. Mitra Memorial Award from the Institute of Electronics and Telecommunications