EURASIP Journal on Applied Signal Processing 2004:13, 2034–2041
© 2004 Hindawi Publishing Corporation

Recursive Principal Components Analysis Using Eigenvector Matrix Perturbation

Principal components analysis is an important and well-studied subject in statistics and signal processing. The literature has an abundance of algorithms for solving this problem, and most of these algorithms can be grouped into one of the following three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (such as reconstruction error or output variance), and fixed-point update rules with deflation. In this paper, we take a different approach that avoids deflation and the optimization of a cost function using gradients. The proposed method updates the eigenvector and eigenvalue matrices simultaneously with every new sample such that the estimates approximately track their true values as would be calculated from the current sample estimate of the data covariance matrix. The performance of this algorithm is compared with that of traditional methods like Sanger's rule and APEX, as well as a structurally similar matrix perturbation-based method.


INTRODUCTION
Principal components analysis (PCA) is a well-known statistical technique that has been widely applied to solve important signal processing problems like feature extraction, signal estimation, detection, and speech separation [1, 2, 3, 4]. Many analytical techniques exist which can solve PCA once the entire input data is known [5]. However, most of the analytical methods require extensive matrix operations and hence are unsuited for real-time applications. Further, in many applications such as direction of arrival (DOA) tracking, adaptive subspace estimation, and so forth, signal statistics change over time, rendering block methods virtually unacceptable. In such cases, fast, adaptive, on-line solutions are desirable. The majority of the existing algorithms for PCA are based on standard gradient procedures [2, 3, 6, 7, 8, 9], which are extremely slow converging, and their performance depends heavily on the step sizes used. To alleviate this, subspace methods have been explored [10, 11, 12]. However, many of these subspace techniques are computationally intensive. The recently proposed fixed-point PCA algorithm [13] showed fast convergence with little or no change in complexity compared with gradient methods. However, this method and most of the existing methods in the literature rely on the standard deflation technique, which brings in sequential convergence of principal components and potentially reduces the overall speed of convergence. We recently explored a simultaneous principal component extraction algorithm called SIPEX [14], which reduced the gradient search to the space of orthonormal matrices by using Givens rotations. Although SIPEX resulted in fast and simultaneous convergence of all principal components, the algorithm suffered from high computational complexity due to the trigonometric function evaluations involved. A recently proposed alternative approach suggested iterating the eigenvector estimates using a first-order matrix perturbation formalism for the sample
covariance estimate with every new sample obtained in real time [15]. However, the performance (speed and accuracy) of this algorithm is hindered by the general Toeplitz structure of the perturbed covariance matrix. In this paper, we present an algorithm that undertakes a similar perturbation approach; in contrast, however, the covariance matrix is kept decomposed into its eigenvectors and eigenvalues at all times, which reduces the perturbation step to one employed on the diagonal eigenvalue matrix. This further restriction of structure, as expected, alleviates the difficulties encountered in the operation of the previous first-order perturbation algorithm, resulting in a fast-converging and accurate subspace tracking algorithm.
This paper is organized as follows. First, we present a brief definition of the PCA problem to keep the paper self-contained. Second, the proposed recursive PCA (RPCA) algorithm is motivated, derived, and extended to nonstationary and complex-valued signal situations. Next, a set of computer experiments is presented to demonstrate the convergence speed and accuracy characteristics of RPCA. Finally, we conclude the paper with remarks and observations about the algorithm.

PROBLEM DEFINITION
PCA is a well-known problem and is extensively studied in the literature, as we have pointed out in the introduction. However, for the sake of completeness, we provide a brief definition of the problem in this section. For simplicity, and without loss of generality, we consider a real-valued zero-mean, n-dimensional random vector x and its n projections y_1, ..., y_n such that y_j = w_j^T x, where the w_j's are unit-norm vectors defining the projection dimensions in the n-dimensional input space.
The first principal component direction is defined as the solution to the following constrained optimization problem, where R is the input covariance matrix:

w_1 = arg max_w w^T R w   subject to   w^T w = 1.   (1)

The subsequent principal components are defined by including additional constraints that enforce the orthogonality of the sought component to the previously discovered ones:

w_j = arg max_w w^T R w   subject to   w^T w = 1,  w^T w_l = 0,  l = 1, ..., j − 1.   (2)

The overall solution to this problem turns out to be the eigenvector matrix of the input covariance R. In particular, the principal component directions are given by the eigenvectors of R arranged according to their corresponding eigenvalues (largest to smallest) [5].
In signal processing applications, the needs are different. The input samples are usually acquired one at a time (i.e., sequentially as opposed to in batches), which necessitates sample-by-sample update rules for the covariance and its eigenvector estimates. In this setting, the analytical solution is of little use, since it is not practical to update the input covariance estimate and solve a full eigendecomposition problem for every sample. However, utilizing the recursive structure of the covariance estimate, it is possible to derive a recursive formula for the eigenvectors of the covariance as well. This is described in the next section.

RECURSIVE PCA DESCRIPTION
Suppose a sequence of n-dimensional zero-mean wide-sense stationary input vectors x_k is arriving, where k is the sample (time) index. The sample covariance estimate at time k for the input vector is

R_k = (1/k) Σ_{i=1}^{k} x_i x_i^T = (1/k)[(k − 1)R_{k−1} + x_k x_k^T].   (3)

Let R_{k−1} = Q_{k−1} Λ_{k−1} Q_{k−1}^T, where Q and Λ denote the orthonormal eigenvector and diagonal eigenvalue matrices, respectively. Also define α_k = Q_{k−1}^T x_k. Substituting these definitions in (3), we obtain the following recursive formula:

R_k = (1/k) Q_{k−1} [(k − 1)Λ_{k−1} + α_k α_k^T] Q_{k−1}^T.   (4)

Clearly, if we can determine the eigendecomposition of the bracketed matrix,

(k − 1)Λ_{k−1} + α_k α_k^T = V_k D_k V_k^T,   (5)

then by direct comparison the recursive update rules for the eigenvectors and the eigenvalues are determined to be

Q_k = Q_{k−1} V_k,   Λ_k = D_k / k.   (6)

In spite of the fact that the matrix [(k − 1)Λ_{k−1} + α_k α_k^T] has a special structure much simpler than that of a general covariance matrix, determining the eigendecomposition V_k D_k V_k^T analytically is difficult. However, especially if k is large, the problem can be solved in a simpler way using a matrix perturbation analysis approach. This will be described next.
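The algebraic step from (3) to (4) can be checked numerically. The following NumPy sketch (illustrative code, not from the paper) verifies that rotating the rank-one update into the eigenbasis leaves the covariance recursion unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 100
A = rng.standard_normal((n, n))
R_prev = A @ A.T                                # stand-in for R_{k-1}
w, Q_prev = np.linalg.eigh(R_prev)              # R_{k-1} = Q Lam Q^T
x = rng.standard_normal(n)                      # new sample x_k
alpha = Q_prev.T @ x                            # alpha_k = Q_{k-1}^T x_k

# equation (3): direct recursive covariance update
lhs = ((k - 1) * R_prev + np.outer(x, x)) / k
# equation (4): same update expressed in the eigenbasis of R_{k-1}
rhs = Q_prev @ ((k - 1) * np.diag(w) + np.outer(alpha, alpha)) @ Q_prev.T / k
print(np.allclose(lhs, rhs))                    # True
```

The identity is exact because Q_{k−1} α_k = Q_{k−1} Q_{k−1}^T x_k = x_k for an orthonormal Q_{k−1}.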

Perturbation analysis for rank-one update
If k is large, the matrix [(k − 1)Λ_{k−1} + α_k α_k^T] is strongly diagonally dominant; hence (due to the Gershgorin theorem) its eigenvalues will be close to those of the diagonal portion (k − 1)Λ_{k−1}. In addition, its eigenvector matrix will be close to identity (i.e., to the eigenvectors of the diagonal portion of the sum).
In summary, the problem reduces to finding the eigendecomposition of a matrix of the form (Λ + αα^T), that is, a rank-one update on a diagonal matrix Λ, using the following approximations: D = Λ + P_Λ and V = I + P_V, where P_Λ and P_V are small perturbation matrices. The eigenvalue perturbation matrix P_Λ is naturally diagonal. With these definitions, when V D V^T is expanded, we get

V D V^T = Λ + P_Λ + P_V Λ + Λ P_V^T + P_V P_Λ + P_Λ P_V^T + P_V Λ P_V^T + P_V P_Λ P_V^T.   (7)

Equating (7) to Λ + αα^T, and assuming that the terms P_V Λ P_V^T and P_V P_Λ P_V^T are negligible, we get

αα^T = P_Λ + P_V D + D P_V^T.   (8)

The orthonormality of V brings an additional equation that characterizes P_V. Substituting V = I + P_V in V V^T = I, and assuming that P_V P_V^T ≈ 0, we have P_V = −P_V^T. Combining the fact that the eigenvector perturbation matrix P_V is antisymmetric with the fact that P_Λ and D are diagonal, the solutions for the perturbation matrices are found from (8) as follows: the ith diagonal entry of P_Λ is α_i^2, and the (i, j)th entry of P_V is α_i α_j / (λ_j + α_j^2 − λ_i − α_i^2) if j ≠ i, and 0 if j = i.
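These closed-form perturbation solutions translate directly into a small routine. The sketch below (the function name is ours, not the paper's) computes P_Λ and P_V for a given diagonal Λ and update vector α, assuming the diagonal dominance discussed above:

```python
import numpy as np

def rank_one_perturbation(lam, alpha):
    """First-order perturbation solution for eig(diag(lam) + alpha alpha^T).

    Returns (p_lam, p_v) such that the perturbed eigenvalues are lam + p_lam
    and the eigenvector matrix is approximately I + p_v (paper's closed form).
    """
    p_lam = alpha ** 2                       # i-th diagonal entry of P_Lambda
    d = lam + p_lam                          # perturbed eigenvalues D
    # (i, j) entry of P_V is alpha_i alpha_j / (d_j - d_i) for j != i
    denom = d[np.newaxis, :] - d[:, np.newaxis]
    np.fill_diagonal(denom, 1.0)             # avoid divide-by-zero on diagonal
    p_v = np.outer(alpha, alpha) / denom
    np.fill_diagonal(p_v, 0.0)               # diagonal entries of P_V are 0
    return p_lam, p_v
```

Note that the approximation assumes the diagonal entries of D are distinct; coincident eigenvalues would make the denominators vanish.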

The recursive PCA algorithm
The RPCA algorithm is summarized in Algorithm 1. There are a few practical issues regarding the operation of the algorithm, which are addressed in this subsection.
(f) Normalize the norms of the eigenvector estimates: Q_k ← Q_k N_k^{−1/2}, where N_k is a diagonal matrix containing the squared norms of the columns of Q_k.
Algorithm 1: The recursive PCA algorithm outline.

Selecting the memory depth parameter
In a stationary situation, where we would like to weight each individual sample equally, this parameter must be set to λ_k = 1/k. In this case, the recursive update for the covariance matrix is as shown in (3). In a nonstationary environment, a first-order dynamical forgetting strategy could be employed by selecting a fixed decay rate. Setting λ_k = λ corresponds to the following recursive covariance update:

R_k = (1 − λ)R_{k−1} + λ x_k x_k^T.   (9)

Typically, in this forgetting scheme, λ ∈ (0, 1) is selected to be very small. Considering that the average memory depth of this recursion is approximately 1/λ samples, the selection of this parameter presents a trade-off between tracking capability and estimation variance.
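The claimed memory depth of roughly 1/λ samples can be seen by inspecting the geometric weights that recursion (9) assigns to past samples. A short illustrative computation:

```python
import numpy as np

# Weight that the exponential-forgetting estimator
# R_k = (1 - lam) R_{k-1} + lam x_k x_k^T assigns to the sample seen m steps ago.
lam = 0.01
m = np.arange(5000)
w = lam * (1.0 - lam) ** m            # geometric weighting of past samples
mean_age = np.sum(m * w) / np.sum(w)  # average "age" of the data in the estimate
print(mean_age)                       # close to (1 - lam) / lam, i.e., about 1/lam
```

For λ = 0.01 the average age comes out near 99 samples, consistent with the ≈ 1/λ memory-depth statement in the text.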

Initializing the eigenvectors and the eigenvalues
The natural way to initialize the eigenvector matrix Q_0 and the eigenvalue matrix Λ_0 is to use the first N_0 samples to obtain an unbiased estimate of the covariance matrix and determine its eigendecomposition (N_0 > n). The iterations in step (2) can then be applied to the subsequent samples, that is, with k = N_0 + 1, ..., N in step (2). In the stationary case (λ_k = 1/k), this means that in the first few iterations of step (2) the perturbation approximations will be least accurate (compared to the subsequent iterations). This is simply due to (k − 1)Λ_{k−1} + α_k α_k^T not being strongly diagonally dominant for small values of k. Compensating for the errors induced in the estimates at this stage might require a large number of samples later on. This problem could be avoided if, in the iteration stage (step (2)), the index k could be started from a large initial value. In order to achieve this without introducing any bias to the estimates, one needs to use a large number of samples in the initialization (i.e., choose a large N_0). In practice, however, this is undesirable. The alternative is to perform the initialization using a small number of samples (i.e., a small N_0), but to set the memory depth parameter to λ_k = 1/(k + (τ − 1)N_0). This way, when the iterations start at sample k = N_0 + 1, the algorithm acts as if the initialization were actually performed using γ = τN_0 samples; from the point of view of the algorithm, the data set looks as if each initialization sample were repeated τ times. The corresponding covariance estimator is then naturally biased toward the initialization samples. However, the relative weight of this bias term vanishes as the number of samples grows; consequently, the bias introduced to the estimate by tricking the algorithm in this manner diminishes asymptotically (as N → ∞).
In practice, we actually do not want to solve an eigendecomposition problem at all. Therefore, one could simply initialize the estimated eigenvector matrix to identity (Q_0 = I) and the eigenvalues to the sample variances of each input entry over N_0 samples (Λ_0 = diag R_{N_0}). We then start the iterations over the samples k = 1, ..., N and set the memory depth parameter to λ_k = 1/(k − 1 + γ). Effectively, this corresponds to a biased (but asymptotically unbiased as N → ∞) covariance estimate. This latter initialization strategy is utilized in all the computer experiments presented in the following sections. In the case of a forgetting covariance estimator (i.e., λ_k = λ), the initialization bias is not a problem, since its effect will diminish in accordance with the forgetting time constant anyway. Therefore, in the nonstationary case, once again, we suggest using the latter initialization strategy: Q_0 = I and Λ_0 = diag R_{N_0}. In this case, in order to guarantee the accuracy of the first-order perturbation approximation, we need to choose the forgetting factor λ such that the ratio (1 − λ)/λ is large. Typically, a forgetting factor λ < 10^−2 will yield accurate results, although if necessary values up to λ = 10^−1 could be utilized.
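The eigendecomposition-free initialization described above amounts to a few lines of code. The sketch below uses illustrative names of our own choosing and assumes the zero-mean samples are stacked as rows:

```python
import numpy as np

def rpca_init(X0, gamma):
    """Initialization suggested in the paper: identity eigenvectors and
    per-entry sample variances as eigenvalues (no eigendecomposition needed).

    X0    : (N0, n) array of the first N0 zero-mean samples
    gamma : effective number of samples the algorithm is "told" it has seen
    """
    n = X0.shape[1]
    Q0 = np.eye(n)                          # eigenvector estimate starts at I
    lam0 = np.mean(X0 ** 2, axis=0)         # diagonal of the sample covariance
    # memory depth schedule lam_k = 1 / (k - 1 + gamma) for k = 1, 2, ...
    lam_k = lambda k: 1.0 / (k - 1 + gamma)
    return Q0, lam0, lam_k
```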

Extension to complex-valued PCA
The extension of RPCA to complex-valued signals is trivial. Basically, all matrix-transpose operations need to be replaced by Hermitian (conjugate-transpose) operators. Below, we briefly discuss the derivation of the complex-valued RPCA algorithm following the steps of the real-valued version.
The sample covariance estimate for zero-mean complex data is given by R_k = (1/k)[(k − 1)R_{k−1} + x_k x_k^H], where the eigendecomposition is R_{k−1} = Q_{k−1} Λ_{k−1} Q_{k−1}^H. Note that the eigenvalues are still real-valued in this case, but the eigenvectors are complex vectors. Defining α_k = Q_{k−1}^H x_k and following the same steps as in (4) to (8), we determine that P_V = −P_V^H. Therefore, as opposed to the expressions derived in Section 3.1, here the complex conjugation (*) and magnitude |·| operations are utilized: the ith diagonal entry of P_Λ is found to be |α_i|^2, and the (i, j)th entry of P_V is α_i α_j^* / (λ_j + |α_j|^2 − λ_i − |α_i|^2) if j ≠ i, and 0 if j = i. The algorithm in Algorithm 1 is utilized as is, except for the modifications mentioned in this section.
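The complex-valued perturbation formulas differ from the real case only in the conjugations. A hedged NumPy sketch (our naming), mirroring the real-valued routine:

```python
import numpy as np

def rank_one_perturbation_complex(lam, alpha):
    """Complex analogue: first-order perturbation of diag(lam) + alpha alpha^H.

    lam is real (eigenvalues of a Hermitian matrix); alpha may be complex.
    """
    p_lam = np.abs(alpha) ** 2             # eigenvalues stay real: |alpha_i|^2
    d = lam + p_lam                        # perturbed eigenvalues D
    denom = d[np.newaxis, :] - d[:, np.newaxis]
    np.fill_diagonal(denom, 1.0)           # avoid divide-by-zero on diagonal
    # (i, j) entry: alpha_i * conj(alpha_j) / (d_j - d_i) for j != i
    p_v = np.outer(alpha, np.conj(alpha)) / denom
    np.fill_diagonal(p_v, 0.0)
    return p_lam, p_v
```

Here P_V is anti-Hermitian (P_V = −P_V^H), so I + P_V is approximately unitary, as the derivation requires.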

NUMERICAL EXPERIMENTS
The PCA problem is extensively studied in the literature, and an extensive variety of algorithms exists to solve it. Therefore, an exhaustive comparison of the proposed method with existing algorithms is not practical. Instead, a comparison with a structurally similar algorithm (which is also based on first-order matrix perturbations) will be presented [15]. We will also comment on the performance of traditional benchmark algorithms like Sanger's rule and APEX in similar setups, although no detailed numerical results will be provided for them.

Convergence speed analysis
In the first experimental setup, the goal is to investigate the convergence speed and accuracy of the RPCA algorithm. For this, n-dimensional random vectors are drawn from a normal distribution with an arbitrary covariance matrix. In particular, the theoretical covariance matrix of the data is given by AA^T, where A is an n × n real-valued matrix whose entries are drawn from a zero-mean unit-variance Gaussian distribution. This process results in a wide range of eigenspreads (as shown in Figure 1); therefore, the convergence results shown here encompass such effects.
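The data-generation procedure is easy to reproduce. This illustrative snippet (not the authors' original code) shows that the AA^T construction indeed yields a wide range of eigenspreads:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
eigenspreads = []
for _ in range(1000):
    A = rng.standard_normal((n, n))      # zero-mean, unit-variance entries
    C = A @ A.T                          # random covariance for the experiment
    w = np.linalg.eigvalsh(C)            # eigenvalues in ascending order
    eigenspreads.append(w[-1] / w[0])    # max eigenvalue / min eigenvalue
print(np.median(eigenspreads))
```

A histogram of `eigenspreads` reproduces the kind of heavy-tailed distribution shown in Figure 1: most draws are moderate, but near-singular draws of A produce very large spreads.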
Specifically, the results of the 3-dimensional case study are presented here, where the data is generated by 3-dimensional normal distributions with randomly selected covariance matrices. A total of 1000 simulations (Monte Carlo runs) are carried out for each of the three target eigenvector estimation accuracies (measured in terms of the angle, in degrees, between the estimated and actual eigenvectors): 10°, 5°, and 2°. The convergence time is measured in terms of the number of iterations it takes the algorithm to converge to the target eigenvector accuracy in all eigenvectors (not just the principal component). The histograms of convergence times (up to 10000 samples) for these three target accuracies are shown in Figure 2, where everything above 10000 is also lumped into the last bin. In these Monte Carlo runs, the initial eigenvector estimates were set to the identity matrix, and the randomly selected data covariance matrices were forced to have eigenvectors such that all the initial eigenvector estimation errors were at least 25°. The initial γ value was set to 400, and the decay time constant was selected to be 50 samples. Values in this range were found to work best in terms of final accuracy and convergence speed in extensive Monte Carlo runs. It is expected that there are some cases, especially those with high eigenspreads, that require a very large number of samples to achieve very accurate eigenvector estimates, especially for the minor components. The number of iterations required for convergence to a certain accuracy level is also expected to increase with the dimensionality of the problem. For example, in the 3-dimensional case, about 2% of the simulations failed to converge to within 10° in 10000 on-line iterations, whereas this ratio is about 17% for 5 dimensions. The failure to converge within the given number of iterations is observed for eigenspreads over 5 × 10^4.
In a similar setup, Sanger's rule achieves a mean convergence time of 8400 iterations with a standard deviation of 2600 iterations. This results in an average eigenvector direction error of about 9° with a standard deviation of 8°. APEX, on the other hand, rarely converges to within 10°. Its average eigenvector direction error is about 30° with a standard deviation of 15°.

Comparison with first-order perturbation PCA
The first-order perturbation PCA algorithm [15] is structurally similar to the RPCA algorithm presented here. The main difference is the nature of the perturbed matrix: the former works on a perturbation approximation for the complete covariance matrix, whereas the latter considers the perturbation of a diagonal matrix. We expect this structural restriction to improve overall algorithm performance. To test this hypothesis, an experimental setup similar to the one in Section 4.1 is utilized. This time, however, the data is generated as a colored time series observed through a time-delay line (making the procedure a temporal PCA case study). Gaussian white noise is colored using a two-pole filter whose poles are selected from a uniform distribution on the interval (0, 1). A set of 15 Monte Carlo simulations was run on 3-dimensional data generated according to this procedure. The two parameters of the first-order perturbation method were set to ε = 10^−3/6.5 and δ = 10^−2. The parameters of RPCA were set to γ_0 = 300 and τ = 100. The average eigenvector direction estimation convergence curves are shown in Figure 3.
Signal subspace tracking is often necessary in signal processing applications dealing with nonstationary signals. To illustrate the performance of RPCA in such cases, a piecewise stationary colored noise sequence is generated by filtering white Gaussian noise with single-pole filters with the following poles: 0.5, 0.7, 0.3, 0.9 (in order of appearance). The forgetting factor is set to a constant λ = 10^−3. The two parameters of the first-order perturbation method were again set to ε = 10^−3/6.5 and δ = 10^−2. The results of 30 Monte Carlo runs were averaged to obtain Figure 4.

Direction of arrival estimation
The use of subspace methods for DOA estimation in sensor arrays has been extensively studied (see [14] and the references therein). In Figure 5, a sample run from a computer simulation of DOA estimation according to the experimental setup described in [14] is presented to illustrate the performance of the complex-valued RPCA algorithm. To provide a benchmark (and an upper limit on convergence speed), we also performed this simulation by applying Matlab's eig function repeatedly to the sample covariance estimate. The latter typically converged to the final accuracy demonstrated here within 10-20 samples. The RPCA estimates, on the other hand, take a few hundred samples due to the transient in the γ value. The main difference in the application of RPCA is that a typical DOA algorithm converts the complex PCA problem into a structured real-valued PCA problem with double the number of dimensions, whereas the RPCA algorithm works directly with the complex-valued input vectors to solve the original complex PCA problem.

An example with 20 dimensions
The numerical examples considered previously were 3-dimensional and 12-dimensional (6 dimensions in complex variables). The latter did not require all the eigenvectors to converge, since only the 6-dimensional signal subspace was necessary to estimate the source directions; hence, the problem was actually easier than a full 12-dimensional one. To demonstrate the applicability to higher-dimensional situations, an example with 20 dimensions is presented here. PCA algorithms generally cannot cope well with higher-dimensional problems because the interplay between two competing structural properties of the eigenspace makes a compromise between them increasingly difficult. Specifically, these two characteristics are the eigenspread (max_i λ_i / min_i λ_i) and the distribution of the ratios of consecutive eigenvalues (λ_2/λ_1, ..., λ_n/λ_{n−1}) when the eigenvalues are ordered from largest to smallest (λ_1 > λ_2 > ··· > λ_n). Large eigenspreads lead to slow convergence due to the scarcity of samples representing the minor components. In small-dimensional problems, this is typically the dominant issue that controls the convergence speed of PCA algorithms. On the other hand, as the dimensionality increases, while very large eigenspreads remain undesirable for the same reason, smaller and previously acceptable eigenspread values also become undesirable because consecutive eigenvalues approach each other. This causes the discriminability of the eigenvectors corresponding to these eigenvalues to diminish as their ratio approaches unity. Therefore, the trade-off between small and large eigenspreads becomes significantly more difficult. Ideally, the ratios between consecutive eigenvalues should be identical for equal discriminability of all subspace components. Variations from this uniformity will result in faster convergence for some eigenvectors, while others will suffer from nearly spherical subspaces in which the eigendirections are indiscriminable.
In Figure 6, the convergence of the 20 estimated eigenvectors to their corresponding true values is illustrated in terms of the angle between them (in degrees) versus the number of on-line iterations. The data is generated by a 20-dimensional jointly Gaussian distribution with zero mean and a covariance matrix with eigenvalues equal to the powers (from 0 to 19) of 1.5 and randomly selected eigenvectors; this corresponds to an eigenspread of 1.5^19 ≈ 2217. This result is typical of higher-dimensional cases, where major components converge relatively fast and minor components take much longer (in terms of samples and iterations) to reach the same level of accuracy.
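As a quick sanity check on this setup (illustrative code, not from the paper), the eigenvalue configuration realizes exactly the "uniform consecutive ratio" case described above, and its eigenspread matches the quoted value:

```python
import numpy as np

lam = 1.5 ** np.arange(20)          # eigenvalues used in the 20-D example
spread = lam.max() / lam.min()      # eigenspread = 1.5**19
ratios = lam[1:] / lam[:-1]         # consecutive-eigenvalue ratios, all 1.5
print(round(spread), ratios.min(), ratios.max())
```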

CONCLUSIONS
In this paper, a novel approximate fixed-point algorithm for subspace tracking is presented. The fast tracking capability is enabled by the recursive nature of the complete eigenvector matrix updates. The proposed algorithm is feasible for real-time implementation, since the recursions are based on well-structured matrix multiplications that are consequences of the rank-one perturbation updates exploited in the derivation of the algorithm. Performance comparisons with traditional algorithms, as well as with a structurally similar perturbation-based approach, demonstrated the advantages of the recursive PCA algorithm in terms of convergence speed and accuracy.


(1) Initialize Q_0 and Λ_0.
(2) At each time instant k, do the following:
(a) Get the input sample x_k.
(b) Set the memory depth parameter λ_k.
(c) Calculate α_k = Q_{k−1}^T x_k.
(d) Find the perturbations P_V and P_Λ corresponding to (1 − λ_k)Λ_{k−1} + λ_k α_k α_k^T.
(e) Update the eigenvector and eigenvalue matrices: Q_k = Q_{k−1}(I + P_V), Λ_k = (1 − λ_k)Λ_{k−1} + P_Λ.
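Putting the outline together, one iteration of Algorithm 1 might be sketched as follows. Variable names and the exact column normalization in step (f) are our illustrative choices; the perturbation is applied to (1 − λ_k)Λ_{k−1} + λ_k α_k α_k^T as in step (d):

```python
import numpy as np

def rpca_step(Q, lam, x, lam_k):
    """One RPCA iteration (a sketch of Algorithm 1; names are illustrative).

    Q     : current eigenvector estimate (orthonormal columns)
    lam   : current eigenvalue estimates (1-D array, distinct values)
    x     : new input sample
    lam_k : memory depth parameter (1/k for stationary data, fixed for tracking)
    """
    alpha = Q.T @ x                            # step (c)
    # steps (d)-(e): perturb (1 - lam_k) Lam + lam_k alpha alpha^T
    a = np.sqrt(lam_k) * alpha                 # scaled rank-one part
    d = (1.0 - lam_k) * lam + a ** 2           # perturbed eigenvalues (new Lam)
    denom = d[np.newaxis, :] - d[:, np.newaxis]
    np.fill_diagonal(denom, 1.0)
    P_V = np.outer(a, a) / denom               # eigenvector perturbation
    np.fill_diagonal(P_V, 0.0)
    Q_new = Q @ (np.eye(len(lam)) + P_V)       # step (e)
    Q_new /= np.linalg.norm(Q_new, axis=0)     # step (f): renormalize columns
    return Q_new, d
```

Calling `rpca_step` in a loop over the incoming samples, with the initialization of Section 3.2, constitutes the full on-line algorithm.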

Figure 1 :
Figure 1: Distribution of eigenspread values for AA^T, where the 3 × 3 matrix A is generated to have Gaussian distributed random entries.

Figure 3 :
Figure 3: The average eigenvector direction estimation errors, defined as the angle between the actual and the estimated eigenvectors, versus iterations, shown for the first-order perturbation method (thin dotted lines) and for RPCA (thick solid lines).

Figure 4 :
Figure 4: The average eigenvector direction estimation errors, defined as the angle between the actual and the estimated eigenvectors, versus iterations, for the first-order perturbation method (thin dotted lines) and for RPCA (thick solid lines) in a piecewise stationary situation. The eigenstructure of the input changes abruptly every 5000 samples.

Figure 5 :
Figure 5: Direction of arrival estimation in a linear sensor array using complex-valued RPCA in a 3-source, 6-sensor case.

Figure 6 :
Figure 6: The convergence of the angle error between the estimated eigenvectors (using RPCA) and their corresponding true eigenvectors in a 20-dimensional PCA problem is shown versus on-line iterations.

Deniz Erdogmus received his B.S. degrees in electrical engineering and mathematics in 1997, and his M.S. degree in electrical engineering, with emphasis on systems and control, in 1999, all from the Middle East Technical University, Turkey. He received his Ph.D. in electrical engineering from the University of Florida, Gainesville, in 2002. Since 1999, he has been with the Computational NeuroEngineering Laboratory, University of Florida, working with Jose Principe. His current research interests include information-theoretic aspects of adaptive signal processing and machine learning, as well as their applications to problems in communications, biomedical signal processing, and controls. He is the recipient of the IEEE SPS 2003 Young Author Award, and is a Member of IEEE, Tau Beta Pi, and Eta Kappa Nu.

Yadunandana N. Rao received his B.E. degree in electronics and communication engineering in 1997 from the University of Mysore, India, and his M.S. degree in electrical and computer engineering in 2000 from the University of Florida, Gainesville, Fla. From 2000 to 2001, he worked as a design engineer at GE Medical Systems, Wis. Since 2001, he has been working toward his Ph.D. in the Computational NeuroEngineering Laboratory (CNEL) at the University of Florida, under the supervision of Jose C. Principe. His current research interests include the design of neural analog systems, principal components analysis, and generalized SVD with applications to adaptive systems for signal processing and communications.

Hemanth Peddaneni received his B.E. degree in electronics and communication engineering from Sri Venkateswara University, Tirupati, India, in 2002. He is now pursuing his Master's degree in electrical engineering at the University of Florida. His research interests include neural networks for signal processing, adaptive signal processing, wavelet methods for time series analysis, digital filter design/implementation, and digital image processing.

Anant Hegde graduated with an M.S. degree in electrical engineering from the University of Houston, Tex. During his Master's, he worked in the Bio-Signal Analysis Laboratory (BSAL), with his research mainly focusing on understanding the production mechanisms of event-related potentials such as P50, N100, and P300. Hegde is currently pursuing his Ph.D. research in the Computational NeuroEngineering Laboratory (CNEL) at the University of Florida, Gainesville. His focus is on developing signal processing techniques for detecting asymmetric dependencies in multivariate time structures. His research interests are in EEG analysis, neural networks, and communication systems.

Jose C. Principe is a Distinguished Professor of Electrical and Computer Engineering and Biomedical Engineering at the University of Florida, where he teaches advanced signal processing, machine learning, and artificial neural network (ANN) modeling. He is BellSouth Professor and the Founder and Director of the University of Florida Computational NeuroEngineering Laboratory (CNEL). His primary area of interest is the processing of time-varying signals with adaptive neural models. The CNEL has been studying signal and pattern recognition principles based on information-theoretic criteria (entropy and mutual information). Dr. Principe is an IEEE Fellow. He is a Member of the ADCOM of the IEEE Signal Processing Society, a Member of the Board of Governors of the International Neural Network Society, and Editor-in-Chief of the IEEE Transactions on Biomedical Engineering. He is a Member of the Advisory Board of the University of Florida Brain Institute. Dr. Principe has more than 90 publications in refereed journals, 10 book chapters, and 200 conference papers. He has directed 35 Ph.D. dissertations and 45 Master's theses. He recently wrote an interactive electronic book entitled Neural and Adaptive Systems: Fundamentals Through Simulation, published by John Wiley and Sons.