Robust Iris Verification Based on Local and Global Variations
© Nima Tajbakhsh et al. 2010
Received: 22 December 2009
Accepted: 25 June 2010
Published: 14 July 2010
This work addresses the increasing demand for a sensitive and user-friendly iris-based authentication system. We aim at reducing the False Rejection Rate (FRR), whose primary source is the presence of degradation factors in the iris texture. To reduce FRR, we propose a feature extraction method robust against such adverse factors. Founded on local and global variations of the texture, the method is designed particularly to cope with blurred and unfocused iris images. Global variations extract a general representation of the texture, while local yet soft variations encode texture details that are minimally reliant on image quality. The Discrete Cosine Transform and wavelet decomposition are used to capture the local and global variations. In the matching phase, a support vector machine fuses the similarity values obtained from the global and local features. The verification performance of the proposed method is examined and compared on the CASIA Ver.1 and UBIRIS databases. The method's efficiency on the degraded images of UBIRIS is corroborated by experimental results, where a significant decrease in FRR is observed in comparison with other algorithms. The experiments on CASIA show that, despite neglecting detailed texture information, our method still provides results comparable to those of recent methods.
High-level security is a complicated challenge of the contemporary era. Dealing with issues like border-crossing security, access control to restricted areas, warding off terrorist attacks, and information security is critically important in modern societies. Traditional methods like password protection or identification cards have run their course and are nowadays regarded as suboptimal. The need to eliminate the risks of such identification means has shifted researchers' attention to the unique characteristics of human biometrics. Being stable over a lifetime and noninvasive, the human iris is accepted as one of the most popular and reliable means of identification, providing high accuracy for the task of personal identification. Located between the pupil and the white sclera, the iris has a complex and stochastic structure containing randomly distributed and irregularly shaped microstructures, generating a rich and informative texture pattern.
Pioneering work on iris recognition, which forms the basis of many commercial systems, was done by Daugman. In his algorithm, 2D Gabor filters are adopted to extract orientational texture features. After filtering the image, each complex filter response is encoded into two binary bits according to the signs of its real and imaginary parts, giving four possible arrangements. The dissimilarity between a pair of codes is measured by their Hamming distance based on an exclusive-OR operation.
Judging from recently published articles, one can conclude that improving the performance of the segmentation and feature extraction modules has received the most attention. Applied in a modified form compatible with the challenges of iris segmentation, contour-based methods [21–23] have been shown to make great progress towards handling noisy and low-contrast iris images. However, a robust feature extraction technique capable of handling degraded images is still lacking. The following subsection gives a critical analysis of the most closely related works recently proposed in the literature. Further details of the historical development and current state-of-the-art methods can be found in the comprehensive survey by Bowyer et al.
Ma et al. [25, 26] propose two different approaches to capture sharp variations along the angular direction of the iris texture. The first approach is based on Gaussian-Hermite moments of the extracted intensity signals, and the second is founded on the position sequence of local sharp variation points obtained through a class of quadratic spline wavelets. The accuracy of both methods depends heavily on the extent to which the sharp variations of the texture can be captured. In the case of out-of-focus and motion-blurred iris images, obtaining the sharp variation points is not a trivial task. Monro et al. utilize the 1D Discrete Cosine Transform (DCT) and zero-crossings of adjacent patches to generate a binary code corresponding to each iris pattern. This method is founded on small overlapping 2D patches defined in an unwrapped iris image. To eliminate image artifacts and to simplify registration between iris patterns, weighted average operators are applied to each 2D patch. Although the method outperforms Daugman's and Ma's algorithms, the databases the authors use for their experiments almost exclusively contain images with eyelid and eyelash obstruction, and thus no conclusion can be drawn as to the method's robustness against the degrading effects of noise factors. Inspired by this, Poursaberi and Araabi suggest a feature extraction method based on the wavelet coefficients of subimages of decomposed iris images. Although not reliant on texture details, and thus giving a robust representation, this method cannot achieve satisfactory performance on larger iris databases, as the global information of the texture alone cannot reveal the unique characteristics of the human iris. H.-A. Park and K. R. Park propose using 1D long and short Gabor filters to extract local and global features of the iris texture. The local and global features are combined by a Support Vector Machine- (SVM-) based score-level fusion strategy.
This method has successfully been tested on two private iris databases; however, there is no information about what percentage of the captured images are affected by degradation factors, even though the method is expected to cope well with degraded images. An entropy-based coding scheme for noisy iris images is suggested by Proença and Alexandre. The rationale for choosing entropy as the basis of the generated signatures is that this index reflects the amount of information that can be extracted from a texture region: the higher the entropy, the more detail in the texture. The authors also propose a method to measure the similarity between entropy-based signatures. Although the method outperforms traditional iris recognition methods, particularly on nonideal images, it fails to capture much essential information. When entropy alone is used to code a given iris texture, some valuable information is missed: entropy only measures the dispersal of illumination intensity in the overlapped patches and does not deal with the gray-level values of pixels or the correlation between overlapped patches. Besides, the heuristic method needs to be trained, which limits the generalization of the recognition method. Vatsa et al. develop a comprehensive framework to improve the accuracy of the recognition system and to accelerate the recognition process. The authors propose an SVM-based learning approach to enhance image quality, utilize a 1D log-Gabor filter to capture global characteristics of the texture, and make use of Euler numbers to extract local topological features. To accelerate the matching process, instead of comparing an iris image against all templates in the database, a subset of the most plausible candidates is selected based on the local features, and then an SVM-based score-level fusion strategy is adopted to combine local and global features. Miyazawa et al.
suggest using the 2D Fourier transform to measure the similarity of two iris patterns, avoiding the challenges involved in feature-based recognition methods. The authors also introduce the idea of a 2D Fourier Phase Code (FPC) to eliminate the need to store the whole iris database in the system, addressing the greatest drawback of correlation-based recognition methods. However, it is not clear how the proposed approach handles blurred and out-of-focus images, even though several contributions have been made to recognize irises with texture deformation and eyelid occlusion. A new, highly flexible approach based on ordinal measures of the texture is proposed by Sun and Tan. The main idea behind ordinal measures is to uncover inherent relations between adjacent blocks of the iris patterns. To extract ordinal measures of the texture, multilobe differential filters (MLDFs) are adopted. The ordinal measures provide a high level of robustness against dust on eyeglasses, partial occlusions, and sensor noise; however, like all filter-based methods, the recognition accuracy depends on the degree to which muscular structures are visible in the texture.
Addressing the above-mentioned challenges, this paper proposes an efficient iris recognition algorithm using local and global variations of the texture. Given that degraded iris images contain smooth variations, blurred informative structures, and a high level of occlusion, we design our feature extraction strategy to capture the soft and fundamental information of the texture.
Our motivation is to handle the challenges involved in recognizing visible-light (VL) iris images, particularly those taken by portable electronic devices. We explain our motivation by discussing the advantages and disadvantages of performing the recognition task under VL illumination.
The majority of methods proposed in the literature have aimed at recognizing iris images taken under near-infrared (NIR) illumination. The reason seems to lie in the wide usage of NIR cameras in commercial iris recognition systems, a popularity originating from the fact that NIR cameras are minimally affected by unfavorable conditions. However, when it comes to securing portable electronic devices, economic concerns take on the utmost importance. Being cost-effective, low-resolution color cameras replace costly NIR imaging systems in such applications. Therefore, it is worth researching how to cope with the challenges involved in VL iris images. This research line is at an incipient stage and deserves further investigation.
Despite the high information content of color iris images and the economic advantage of VL cameras, iris images acquired under VL illumination are prone to the unfavorable effects of environmental illumination. For instance, specular reflections in the pupil and iris complicate the segmentation process and corrupt some informative regions of the texture. These facts inspired us to develop a method for extracting information from the rich iris texture taken under VL illumination in such a way that the extracted information is minimally affected by noise factors in the image.
The rest of the paper is organized as follows. Section 2 gives an overview of the preprocessing stage including iris segmentation and normalization modules. Section 3 explains the proposed feature extraction method along with the matching specifications. Section 4 presents our experimental results on the UBIRIS and CASIA ver.1 databases. Conclusions are given in Section 5.
2. Image Preprocessing
Prior to feature extraction, the iris region must be segmented from the image and mapped into a predefined format. This process can suppress the degrading effects caused by pupil dilation/contraction, camera-to-eye distance, and head tilt. In this section, we briefly describe the segmentation method and give some details about normalization and image enhancement modules.
The segmentation method is based on Daugman's integro-differential operator,
\[ \max_{(r,\,x_0,\,y_0)} \left| \, G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\,x_0,\,y_0} \frac{I(x,y)}{2\pi r}\, ds \, \right|, \]
where \(I(x,y)\) is an eye image, \(r\) is the radius to search for, \(G_\sigma(r)\) is a Gaussian smoothing function with blurring factor \(\sigma\), and \(ds\) is the contour of the circle given by \((r, x_0, y_0)\). This operator scans the input image for a circle having a maximum gradient change along a circular arc of radius \(r\) and center coordinates \((x_0, y_0)\). The segmentation process begins with finding the outer boundary located between the iris and the white sclera. Due to the high contrast, the outer boundary can be detected while \(\sigma\) is set for a coarse scale of analysis. Since the presence of the eyelids and eyelashes significantly increases the computed gradient, the arc is restricted to the area not affected by them. Hence, the areas confined to two opposing 90° cones centered on the horizontal axis are searched for the outer boundary; that is, the method is performed on the part of the texture located near the horizontal axis. Thereafter, the algorithm looks for the inner boundary with a finer blurring factor, as this border is not as strong as the outer one. In this stage, to avoid being affected by the specular reflection, the part of the arc located in a 90° cone centered on the vertical axis, which partially covers the lower part of the iris, is set aside. The operator is applied iteratively with the amount of smoothing progressively reduced in order to reach a precise localization of the inner boundary.
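The discrete core of this boundary search can be sketched as follows (a minimal numpy illustration; the function names, sampling density, and kernel size are our own choices, not the paper's):

```python
import numpy as np

def circular_mean(img, x0, y0, r, n=64):
    """Mean intensity of img sampled along the circle of radius r centered at (x0, y0)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xs = np.clip(np.rint(x0 + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.rint(y0 + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def find_boundary_radius(img, x0, y0, radii, sigma=2.0):
    """Radius maximizing the Gaussian-smoothed radial derivative of the
    circular mean -- the discrete core of the integro-differential operator."""
    means = np.array([circular_mean(img, x0, y0, r) for r in radii])
    deriv = np.gradient(means)                 # d/dr of the contour integral
    k = np.arange(-3, 4)
    g = np.exp(-k**2 / (2.0 * sigma**2))
    g /= g.sum()                               # Gaussian kernel G_sigma
    smoothed = np.convolve(deriv, g, mode="same")
    return radii[int(np.argmax(np.abs(smoothed)))]
```

A full implementation would also restrict the arc to the cones described above and iterate coarse-to-fine over the center coordinates; this sketch fixes the center and searches only over the radius.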
The normalization follows Daugman's rubber-sheet model, which maps the iris region to a dimensionless polar coordinate system:
\[ I\big(x(r,\theta),\, y(r,\theta)\big) \rightarrow I(r,\theta), \]
with
\[ x(r,\theta) = (1-r)\,x_p(\theta) + r\,x_i(\theta), \qquad y(r,\theta) = (1-r)\,y_p(\theta) + r\,y_i(\theta), \]
where \(I\), \((x, y)\), \((r, \theta)\), \((x_p, y_p)\), and \((x_i, y_i)\) are the iris region, Cartesian coordinates, corresponding polar coordinates, and coordinates of the pupil and iris boundaries along the \(\theta\) direction, respectively. We performed this method for normalization, selected 128 pixels along \(r\) and 512 pixels along \(\theta\), and obtained a 512 × 128 unwrapped strip.
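The rubber-sheet unwrapping above can be sketched as follows (this sketch assumes circular pupil and iris boundaries sharing one center and uses nearest-neighbor sampling; both are simplifications of a full implementation):

```python
import numpy as np

def unwrap_iris(img, center, pupil_r, iris_r, n_r=128, n_theta=512):
    """Daugman rubber-sheet model: sample linearly between the pupil and iris
    boundaries to produce an n_r x n_theta strip (angle along the columns)."""
    x0, y0 = center
    strip = np.zeros((n_r, n_theta))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for j, th in enumerate(thetas):
        # boundary points along this direction
        xp, yp = x0 + pupil_r * np.cos(th), y0 + pupil_r * np.sin(th)
        xi, yi = x0 + iris_r * np.cos(th), y0 + iris_r * np.sin(th)
        for i, r in enumerate(np.linspace(0.0, 1.0, n_r)):
            x = (1 - r) * xp + r * xi          # x(r, theta)
            y = (1 - r) * yp + r * yi          # y(r, theta)
            strip[i, j] = img[int(round(y)) % img.shape[0],
                              int(round(x)) % img.shape[1]]
    return strip
```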
The quality of iris images can be affected by a variety of factors. As this quality degradation significantly influences the performance of the feature extraction and matching processes, it must be handled properly. In general, the underlying factors fall into two main categories, namely noncooperative subject behavior and nonideal environmental illumination. Although the effects of such factors can be partially mitigated by a robust feature extraction strategy, they must also be alleviated in the image enhancement module, making texture features more salient.
Thus far, many approaches have been proposed to enhance the quality of iris images, of which the local ones seem more effective in dealing with texture irregularities, as they largely avoid deteriorating good-quality regions and altering the features of the iris image. On this ground, to obtain uniformly distributed illumination and better contrast, we apply local histogram-based image enhancement to the normalized NIR iris images. Since the NIR images used in our experiments are not highly occluded by eyelids and eyelashes, they are fed into the feature extraction phase with no further processing. In contrast, the VL images suffer from a high level of occlusion, which turns the upper half of the iris into an unreliable and somewhat uninformative region. Although some recently developed methods aim at identifying and isolating these local regions in an iris image, they are often time-consuming and not accurate enough, letting some occluded regions through and thus causing significant performance degradation. Hence, we discard the upper half region and feed the VL iris images, 256 pixels wide and 128 pixels high, to the feature extraction stage.
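A minimal tile-wise histogram equalization illustrates the idea of local histogram-based enhancement (the tile size and the exact equalization scheme here are our assumptions, not taken from the paper):

```python
import numpy as np

def local_hist_equalize(img, tile=32):
    """Equalize each tile independently: map every gray level in a tile to
    its within-tile CDF value, so illumination is spread locally."""
    out = np.empty_like(img, dtype=float)
    H, W = img.shape
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            block = img[y:y+tile, x:x+tile]
            vals, inv = np.unique(block, return_inverse=True)
            counts = np.bincount(inv)
            cdf = np.cumsum(counts) / block.size   # empirical CDF of the tile
            out[y:y+tile, x:x+tile] = cdf[inv].reshape(block.shape)
    return out
```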
3. Proposed Feature Extraction Method
Robustness against degradation factors is essential for reliable verification. A typical source of error in iris recognition systems is a lack of similarity between two iris patterns belonging to the same individual. This mainly stems from texture deformation, occluded regions, and degradation factors like motion blur and lack of focus. The more a method relies on texture details, the more prone it is to verification failure. Generally, existing methods dealing with NIR iris images tend to capture sharp variations of the texture and detailed information about the muscular structure, such as the position and orientation of fibers. However, no high-frequency information can be obtained from blurred and unfocused iris images, and dramatic performance degradation has been observed in experiments on such images.
In the following, we explain the proposed local and global variations in detail, including the parameters obtained from the training sets and the lengths of the final binary feature vectors. The values reported as optimal parameters are identical for NIR and VL images; however, the reported code lengths for the local and global feature vectors apply only to the VL images. These lengths depend on the image size, and since the NIR images are twice the size of the VL images in the angular direction, the corresponding values for NIR images are twice those stated for VL images.
3.1. Global Variations
On each column, a 10-pixel-wide window is placed, and the average of the intensity values inside it is computed. Repeating this process for all columns leads to a 1D signature that reflects the global intensity variation of the texture along the angular direction. The signature includes some high-frequency fluctuations, probably created by noise; another possible cause is high contrast and quality of the texture in the corresponding regions. In the best case, the high-frequency components of the signature are unreliable. Since the purpose is to robustly reveal the similarity of two iris patterns, and given that these fluctuations are susceptible to image quality, the signature is smoothed with a 20-sample moving average filter to achieve a more reliable representation. Although more reliable for comparison, the smoothed signature loses a considerable amount of information. To compensate, we adopt a method that locally, and in a redundant manner, extracts salient features of the signature: we perform the 1D DCT on overlapped segments of the signature. To that end, the signature is divided into segments 20 samples long, each sharing 10 samples with its adjacent segments. On each segment, the 1D DCT is performed and a subset of coefficients is selected. Because of the soft behavior of the smoothed signature, the essential information is roughly summarized in the first five DCT coefficients. The first coefficient of each segment is then put in a sequence; performing the same task for the other four coefficients results in five sequences of numbers that can be regarded as five 1D signals. Indeed, instead of the original signature, five informative 25-sample signals are obtained. In this way, the smoothed signature is compressed to half its original length.
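The signature-building steps above can be sketched as follows (window, filter, and segment lengths follow the text; the helper names and the plain DCT-II implementation are ours):

```python
import numpy as np

def dct2_1d(x):
    """Plain (unnormalized) DCT-II of a 1D signal, written out with numpy."""
    N = len(x)
    k = np.arange(N) + 0.5
    return np.array([np.dot(x, np.cos(np.pi * u * k / N)) for u in range(N)])

def global_signature(strip, win=10, smooth=20, seg=20, hop=10, n_coef=5):
    """Column-average signature -> moving-average smoothing -> 1D DCT on
    overlapped segments; returns one sequence per retained coefficient."""
    sig = np.array([strip[:, c:c+win].mean()
                    for c in range(strip.shape[1] - win + 1)])
    kernel = np.ones(smooth) / smooth
    sig = np.convolve(sig, kernel, mode="valid")   # smoothed signature
    starts = range(0, len(sig) - seg + 1, hop)     # 10-sample overlap
    coefs = np.array([dct2_1d(sig[s:s+seg])[:n_coef] for s in starts])
    return [coefs[:, u] for u in range(n_coef)]    # five 1D signals
```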
To encode the obtained signals, we apply two different coding strategies in accordance with the characteristics of the selected coefficients. The 1D signal generated from the first DCT coefficient contains positive values representing the average value of each segment. Therefore, a coding strategy based on the first derivative of this signal is used, substituting positive and negative derivatives with one and zero, respectively. Since the remaining four signals vary around zero, a zero-crossing detector is adopted to encode them. Finally, corresponding to each part of the iris, a binary code is generated; concatenating the obtained codes leads to a 250-bit global binary vector. Figure 3 illustrates how the binary vector pertaining to the global variations of the lower region is created.
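The two coding strategies can be sketched in a few lines (the exact zero-crossing detector is an assumption; here a sample is simply coded 1 when positive):

```python
import numpy as np

def encode_signals(signals):
    """Derivative-sign code for the first-coefficient signal, zero-crossing
    (sign) code for the remaining ones, concatenated into one bit vector."""
    bits = [(np.diff(signals[0]) > 0).astype(np.uint8)]   # derivative sign
    for s in signals[1:]:
        bits.append((s > 0).astype(np.uint8))             # zero-crossing code
    return np.concatenate(bits)
```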
3.2. Local Variations
When using wavelet decomposition, the key point is to ascertain which subband best matches the smooth behavior of the intensity signals. For this purpose, reconstructions of the intensity signals based on different subbands were visually examined. As confirmed by our experiments, the approximation coefficients of the third level of decomposition efficiently capture the low-frequency variations of the intensity signals. To encode the coefficients, the zero-crossing representation is used, and a binary vector containing 32 bits is obtained. Applying the same strategy to 24 intensity signals yields a 768-bit binary vector.
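A sketch of this third-level approximation coding follows. The paper does not restate its wavelet basis here, so Haar is used purely for illustration; subtracting the mean before the sign test is also our assumption, since raw approximation coefficients of nonnegative intensities never cross zero:

```python
import numpy as np

def haar_approx(signal, levels=3):
    """Approximation coefficients after `levels` Haar decomposition steps
    (each step averages adjacent pairs, scaled by 1/sqrt(2))."""
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        if len(a) % 2:                    # pad to even length
            a = np.append(a, a[-1])
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return a

def encode_intensity_signal(signal):
    """Zero-crossing (sign) code of the mean-removed third-level approximation."""
    a = haar_approx(signal, levels=3)
    return (a - a.mean() > 0).astype(np.uint8)
```

A 256-sample intensity signal yields 256 / 2³ = 32 approximation coefficients, matching the 32-bit code stated above.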
In the second approach, the goal is to summarize the information content of the soft variations in a few DCT coefficients. To that end, we smooth the intensity signals with a moving average filter. Each smoothed signal is then divided into nonoverlapping 10-pixel-long segments. After performing the 1D DCT on each segment, the first two DCT coefficients are selected. Concatenating the DCT coefficients obtained from consecutive segments results in two 1D signals, each containing 25 samples. To get a binary representation, zero-crossing of the signals' first derivative is applied. This algorithm produces a 1200-bit binary vector for a given iris pattern. The final 1968-bit local binary vector is produced by concatenating the vectors obtained from the above two approaches.
To compare two iris images, we use the nearest neighbor approach as the classifier and the Hamming distance as the similarity measure. To compensate for eye rotation during the acquisition process, we store eight additional local and global binary feature vectors, obtained by horizontal shifts of 3, 6, 9, and 12 pixels on either side in the normalized images. During verification, the local binary feature vector of a test iris image is compared against the nine vectors of the stored template, and the minimum distance is chosen. The same procedure is repeated for all training samples, and the minimum result is selected as the matching Hamming distance based on the local feature vector. A similar approach is applied to obtain the matching Hamming distance based on the global feature vector. To decide on the identity of the test iris image, the fusion rule explained below is adopted to obtain the final similarity from the computed matching distances.
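The shift-compensated matching can be sketched as follows (here the shifted codes are assumed to be precomputed at enrollment, as described above):

```python
import numpy as np

def hamming(a, b):
    """Normalized Hamming distance between two equal-length binary codes."""
    return np.count_nonzero(a != b) / a.size

def match_distance(test_code, template_codes):
    """Minimum distance over a stored template and its shifted variants."""
    return min(hamming(test_code, t) for t in template_codes)
```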
3.4. Fusion Strategy
The SVM provides a powerful tool for many pattern recognition problems in which the observations lie in a high-dimensional feature space. One of its main advantages is that it provides an upper bound on the generalization error based on the number of support vectors in the training set. Although traditionally used for classification, the SVM has recently been adopted as a strong score fusion method. For instance, it has successfully been applied to iris recognition (e.g., [12, 14]), giving rise to better performance than statistical fusion rules or kernel-based match score fusion methods. Besides, the SVM classifier has some advantages over Artificial Neural Networks (ANNs) and often outperforms them. In contrast to ANNs, which suffer from multiple local minima, SVM training always finds a global minimum. While ANNs are prone to overfitting, an SVM classifier provides a soft decision boundary and hence superior generalization capability. Above all, an SVM classifier is insensitive to the relative numbers of training examples in the positive and negative classes, which plays a critical role in our classification problem. Accordingly, to take advantage of both local and global features derived from the iris texture, the SVM is employed here to fuse the dissimilarity values. In the following, we briefly explain how the SVM serves as a fusion rule.
The output of the matching module, the two Hamming distances, represents a point in a 2D distance space. To compute the final matching distance, the genuine and imposter classes must be defined based on the training set. The pairs of Hamming distances computed between every two iris images of the same individual constitute the points of the genuine class; the imposter class comprises the pairs of Hamming distances expressing the dissimilarity between every two iris images of different individuals. Here, determining the fusion strategy means mapping all points in the distance space into a 1D space in which the points of the two classes gain maximum separability. For this purpose, the SVM is adopted to determine the separating boundary between the genuine and imposter classes. Using different kernels makes it possible to define linear and nonlinear boundaries and consequently a variety of linear and nonlinear fusion rules. The position and distance of a new test point relative to the decision boundary determine the sign and absolute value of the fused distance, respectively.
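For a linear kernel, the fused distance amounts to the signed distance of the point (local distance, global distance) to the learned boundary. A minimal sketch (the weights and bias below are illustrative placeholders for what a trained SVM would supply):

```python
import numpy as np

def fused_distance(d_local, d_global, w=(1.0, 1.0), b=-0.7):
    """Signed distance of (d_local, d_global) to the boundary w.x + b = 0.
    The w and b values here are illustrative only; a trained SVM provides
    them. Sign gives the side of the boundary, magnitude the confidence."""
    w = np.asarray(w, dtype=float)
    x = np.array([d_local, d_global])
    return (np.dot(w, x) + b) / np.linalg.norm(w)
```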
In this section, first, we describe the iris databases and algorithms used for evaluating the performance of the proposed feature extraction algorithm. Thereafter, the experimental results along with the details of the fusion strategy are presented.
- To evaluate the efficiency of the proposed method on iris images taken under both VL and NIR illumination (UBIRIS+CASIA).
- To examine the effectiveness of our method in dealing with nonideal VL iris images (UBIRIS).
- To clear up doubts over the usefulness of the proposed method in dealing with almost ideal NIR iris images (CASIA).
- To assess the effects of the anatomical structures of the irises belonging to European and Asian subjects (UBIRIS+CASIA).
The CASIA Ver.1 database is one of the most commonly used iris image databases for evaluation purposes, and many papers report experimental results on it. CASIA Ver.1 contains 756 iris images pertaining to 108 Asian individuals, taken in two different sessions. We choose three samples taken in the first session to form the training set, and all samples captured in the second session serve as test samples. This protocol is consistent with the widely accepted practice for testing biometric algorithms and is followed by many papers in the literature. It should be noted that we are aware that the pupil region of the captured images in this database has been edited by CASIA. However, this merely facilitates the segmentation process and does not affect the feature extraction and matching phases. Some samples of CASIA Ver.1 are depicted in Figure 5(a).
The UBIRIS database is composed of 1877 images from 241 European subjects captured in two different sessions. The images in the first session are gathered in a way that reduces the adverse effects of the degradation factors to a minimum, whereas the images captured in the second session have irregularities in reflection, contrast, natural luminosity, and focus. We use one high-quality and one low-quality iris image per subject as the training set and put the remaining images in the test set; for this purpose, we manually inspected the image quality of each individual's iris images. Figure 5(b) shows some samples of the UBIRIS database.
4.2. Methods Used for Comparison
To compare our approach with other methods, we use three feature extraction strategies suggested in [29, 37, 38]. The wavelet-based method yields results comparable with several well-known methods. The other method is a filter-based approach and can be considered a Daugman-like algorithm. The corresponding authors of both papers provided us with the source codes, permitting a fair comparison. We also use the publicly available MATLAB source code of the iris recognition algorithm, which is widely used for comparison purposes. It should be noted that during our experiments no strategy is adopted for detecting the eyelids and eyelashes; we simply discard the upper half of the iris to eliminate the eyelashes. However, as Masek's method is equipped with a template generation module and is able to cope with occluded eye images, for that method we do not discard the upper half of the iris and feed the whole normalized image to the feature extraction module.
Furthermore, a few iris images suffer from nonlinear texture deformation because of mislocalization of the iris. We deliberately leave them unmodified and let them enter the feature extraction and matching process. Although segmentation errors can significantly increase the overlap between the inter- and intraclass distributions, letting this error into the process simulates what happens in practical applications and also permits us to compare the robustness of the implemented methods and the proposed one in dealing with texture deformation.
Comparison between the error rates obtained from the proposed method and the other state-of-the-art algorithms for the UBIRIS and CASIA Ver.1 databases.
It is noteworthy that we cannot draw a direct comparison between existing methods addressing the UBIRIS database and the proposed approach, because different research teams make different assumptions when using the database. Some researchers use only a subset of the iris images, others discard highly degraded images that fail in the segmentation process, and still others use only one session of UBIRIS for evaluation. In our experiments, we combined both sessions of UBIRIS and divided the whole into test and training sets, allowing a solid evaluation of our method on a large number of iris images. Besides, implementing the mentioned methods merely from their publications would result in an unfair comparison. Nevertheless, according to our results, we believe that our method's performance is among the best for the UBIRIS database.
In this paper, we proposed a new feature extraction method based on the local and global variations of the iris texture. To combine the information obtained from the local and global variations, an SVM-based fusion strategy at the score level was employed. Experimental results on the UBIRIS database showed that the authentication performance of the proposed method is superior to that of other recent methods, implying the robustness of our approach in dealing with the degradation factors present in many UBIRIS iris images. However, the results on CASIA Ver.1 indicated that the efficiency of the proposed method declines somewhat when it encounters almost ideal NIR iris images. Although there is no significant decrease in performance compared with the other methods, it is expected that on larger NIR databases the performance would degrade further: the more NIR images there are, the more the effects of neglecting texture details are revealed.
It should be noted that our target in the presented work was to extract texture information from degraded VL iris images, and this was achieved at the expense of neglecting the high-frequency behavior of the texture. Therefore, it is more reasonable to apply the proposed method as a complementary approach, recognizing only noisy and nonideal iris images while leaving rather ideal images to traditional iris recognition methods. This will enhance the acceptability of iris-based authentication systems by preventing subjects from being repeatedly involved in the image acquisition process and by relaxing some constraints imposed on their behavior during acquisition.
The authors would like to thank the Soft Computing and Image Analysis Group of the University of Beira Interior, Portugal, for the use of the UBIRIS Iris Image Database. Portions of the research in this paper use the CASIA-Iris Version 1 collected by the Chinese Academy of Sciences' Institute of Automation (CASIA). They would like to specially thank Mr. Ahmad Poursaberi and Mr. Azad Aziz-zadeh for providing them with the source codes. The authors would also like to thank the reviewers for their constructive and helpful comments.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.