
Robust Iris Verification Based on Local and Global Variations

Abstract

This work addresses the increasing demand for a sensitive and user-friendly iris-based authentication system. We aim at reducing the False Rejection Rate (FRR). The primary source of high FRR is the presence of degradation factors in the iris texture. To reduce FRR, we propose a feature extraction method robust against such adverse factors. Founded on local and global variations of the texture, this method is designed in particular to cope with blurred and unfocused iris images. Global variations extract a general representation of the texture, while local yet soft variations encode texture details that are minimally reliant on image quality. The Discrete Cosine Transform and wavelet decomposition are used to capture the local and global variations. In the matching phase, a support vector machine fuses the similarity values obtained from the global and local features. The verification performance of the proposed method is examined and compared on the CASIA Ver.1 and UBIRIS databases. The efficiency of the method in contending with the degraded images of UBIRIS is corroborated by experimental results, where a significant decrease in FRR is observed in comparison with other algorithms. The experiments on CASIA show that, despite neglecting detailed texture information, our method still provides results comparable to those of recent methods.

1. Introduction

High-level security is a complicated predicament of the contemporary era. Dealing with issues like border-crossing security, access control for restricted areas, warding off terrorist attacks, and information security is critically essential in modern societies. Traditional methods like password protection or identification cards have run their course and are nowadays regarded as suboptimal. The need to eliminate the risks of such identification means has been shifting researchers' attention to unique characteristics of human biometrics. Being stable over the lifetime and known as a noninvasive biometric, the human iris is accepted as one of the most popular and reliable identification means, providing high accuracy for the task of personal identification. Lying between the pupil and the white sclera, the iris has a complex and stochastic structure containing randomly distributed and irregularly shaped microstructures, generating a rich and informative texture pattern.

Pioneering work on iris recognition—as the basis of many commercial systems—was done by Daugman [1]. In his algorithm, 2D Gabor filters are adopted to extract orientational texture features. After filtering the image, complex pixel values are encoded, depending on the signs of their real and imaginary parts, into one of four possible arrangements of two binary bits (i.e., 11, 10, 00, or 01). The dissimilarity between a pair of codes is measured by their Hamming distance based on an exclusive-OR operation.

After Daugman's work, many researchers have proposed new methods with performance comparable to that of Daugman's algorithm. They have mainly aimed at enhancing system accuracy, reducing the computational burden, and providing more compact codes. Despite this great progress, the user-friendliness of iris-based recognition systems is still a challenging issue and degrades significantly when iris images are affected by degradation factors such as motion blur, lack of focus, and eyelid and eyelash occlusion. In addition, there exist other issues like pupil dilation, contact lenses, and template aging which increase the False Rejection Rate (FRR), degrading the user-friendliness of the systems [2]. Therefore, recent research efforts have focused on developing approaches to increase the acceptability of iris recognition systems. Generally, the current research lines aiming at addressing the acceptability challenge can be classified into four main categories as follows.

(i) Segmenting noisy and partially occluded iris images [3–7].

(ii) Compensating for the eye rotation and deformation of the iris texture [8–11].

(iii) Developing robust feature extraction strategies to cope with degraded images [12–16].

(iv) Detecting the eyelids and eyelashes, and assessing the reliability of the generated codes [17–20].

Judging from recently published articles, one can conclude that improving the performance of the segmentation and feature extraction modules has received the most attention. Applied in a modified form compatible with the challenges involved in iris segmentation, contour-based methods [21–23] have been shown to make great progress towards handling noisy and low-contrast iris images. However, a robust feature extraction technique capable of handling degraded images is still lacking. The following subsection gives a critical analysis of the most related works recently proposed in the literature. Further details of the historical development and current state-of-the-art methods can be found in the comprehensive survey by Bowyer et al. [24].

1.1. State-of-the-Art

Ma et al. [25, 26] propose two different approaches to capture sharp variations along the angular direction of the iris texture. The first approach [25] utilizes Gaussian-Hermite moments of the extracted intensity signals, and the second [26] is founded on the position sequence of local sharp variation points obtained through a class of quadratic spline wavelets. The accuracy of both methods highly depends on the extent to which the sharp variations of the texture can be captured. For out-of-focus and motion-blurred iris images, obtaining the sharp variation points is not a trivial task.

Monro et al. [27] utilize the 1D Discrete Cosine Transform (DCT) and zero-crossings of adjacent patches to generate a binary code corresponding to each iris pattern. This method is founded on small overlapping 2D patches defined in an unwrapped iris image. To eliminate image artifacts and to simplify the registration between iris patterns, weighted average operators are applied on each 2D patch. Although the method outperforms Daugman's [1] and Ma's [26] algorithms, the databases the authors use for their experiments contain almost exclusively images whose only degradation is eyelid and eyelash obstruction, and thus no conclusion can be drawn as to the method's robustness against the other degrading noise factors.

Inspired by [28], Poursaberi and Araabi [29] suggest a feature extraction method based on the wavelet coefficients of low-frequency subimages of decomposed iris images. Although not reliant on texture details and thus giving a robust representation, this method cannot achieve a satisfactory performance on larger iris databases, as the global information of the texture cannot solely reveal the unique characteristics of the human iris.

H.-A. Park and K. R. Park [12] propose to use 1D long and short Gabor filters for extracting local and global features of the iris texture. The local and global features are combined by a Support Vector Machine- (SVM-) based score-level fusion strategy. This method has successfully been tested on two private iris databases; however, there is no information about what percentage of the captured images is affected by the degradation factors, even though the method is expected to perform well on degraded images.

An entropy-based coding to cope with noisy iris images is suggested by Proença and Alexandre [13]. The rationale for choosing entropy as the basis of the generated signatures is that this index reflects the amount of information that can be extracted from a texture region: the higher the entropy, the more details in the texture. The authors also propose a method to measure the similarity between entropy-based signatures. Although the method outperforms traditional iris recognition methods, particularly on nonideal images, it fails to capture much essential information. When entropy alone is used to code a given iris texture, some valuable information is missed: entropy can only measure the dispersal of illumination intensity in the overlapped patches and does not deal with the gray-level values of pixels or the correlation between overlapped patches. Besides, this heuristic method needs to be trained, which limits the generalization of the recognition method.

Vatsa et al. [14] develop a comprehensive framework to improve the accuracy of the recognition system and to accelerate the recognition process. The authors propose an SVM-based learning approach to enhance image quality, utilize a 1D log-Gabor filter to capture global characteristics of the texture, and make use of Euler numbers to extract local topological features. To accelerate the matching process, instead of comparing an iris image against all templates in the database, a subset of the most plausible candidates is selected based on the local features; then, an SVM-based score-level fusion strategy is adopted to combine the local and global features.

Miyazawa et al. [15] suggest using the 2D Fourier transform to measure the similarity of two iris patterns, avoiding the challenges involved in feature-based recognition methods. The authors also introduce the idea of the 2D Fourier Phase Code (FPC) to eliminate the need to store the whole iris database in the system, addressing the greatest drawback of correlation-based recognition methods. However, it is not clear how the proposed approach handles blurred and out-of-focus images, even though several contributions have been made to recognize irises with texture deformation and eyelid occlusion.

A new approach with high flexibility, based on ordinal measures of the texture, is proposed by Sun and Tan [16]. The main idea behind the ordinal measures is to uncover inherent relations between adjacent blocks of the iris patterns. To extract ordinal measures of the texture, multilobe differential filters (MLDFs) are adopted. The ordinal measures provide a high level of robustness against dust on eyeglasses, partial occlusions, and sensor noise; however, as in all filter-based methods, the recognition accuracy depends on the degree to which muscular structures are visible in the texture.

Addressing the above-mentioned challenges, this paper proposes an efficient iris recognition algorithm using local and global variations of the texture. Given that degraded iris images contain smooth variations, blurred informative structures, and a high level of occlusion, we design our feature extraction strategy to capture the soft and fundamental information of the texture.

1.2. Motivation

Our motivation is to handle the challenges involved in the recognition of visible light (VL) iris images, particularly those taken by portable electronic devices. We explain our motivation by discussing the advantages and disadvantages of performing the recognition task under VL illumination.

The majority of methods proposed in the literature have aimed at recognizing iris images taken under near infrared (NIR) illumination. The reason seems to lie in the wide usage of NIR cameras in commercial iris recognition systems. This popularity originates from the fact that NIR cameras are minimally affected by unfavorable conditions. However, when it comes to securing portable electronic devices, economic concerns take on the utmost importance. Being cost-effective, low-resolution color cameras replace costly NIR imaging systems in such applications. Therefore, it is worth researching how to cope with the challenges involved in VL iris images. This research line is at an incipient stage and deserves further investigation.

In addition to economic concerns, color iris images are capable of conveying pigment information which is practically invisible in NIR images. This mainly comes from the spectral characteristics of the eumelanin pigments distributed over the iris stroma. Studies like [30, 31] show that the iris pigments are only slightly excited at NIR wavelengths, and thus little information can be obtained in this illumination range. On the contrary, the highest excitation level of the iris pigments occurs when they are irradiated at VL wavelengths, and thus a high level of pigment information can be gained. The presence of pigment information is verified by our previous experiments [32–34], where information fusion of VL and NIR images led to a significant enhancement of recognition performance. It should be noted that the pigment effect is something beyond mere texture color: an iris image captured at a single wavelength of the VL spectrum can reveal pigment texture information while providing no color information. Figure 1 gives an intuitive illustration of pigment information. In this figure, three pairs of VL and NIR images from three different subjects are shown so that their information content can be compared. Note that, in some regions of the VL iris texture highlighted by the blue circles, one can find pigment information that is not visible in the corresponding regions of the NIR image. The greater amount of potential information in the VL iris texture is also confirmed by an analysis of power spectral density [35], which demonstrates that images taken under VL illumination contain many more details than those taken under NIR illumination.

Figure 1

Three pairs of VL and NIR iris images from three different subjects. The regions highlighted by the blue circles contain some pigment information that is not visible in the corresponding regions of the NIR images.

Despite the high information content of color iris images and the economic advantage of VL cameras, iris images acquired under VL illumination are prone to the unfavorable effects of environmental illumination. For instance, specular reflections in the pupil and iris complicate the segmentation process and corrupt some informative regions of the texture. These facts inspired us to develop a method for extracting information from the rich iris texture taken under VL illumination in a way that the extracted information is minimally affected by the noise factors in the image.

The rest of the paper is organized as follows. Section 2 gives an overview of the preprocessing stage including iris segmentation and normalization modules. Section 3 explains the proposed feature extraction method along with the matching specifications. Section 4 presents our experimental results on the UBIRIS and CASIA ver.1 databases. Conclusions are given in Section 5.

2. Image Preprocessing

Prior to feature extraction, the iris region must be segmented from the image and mapped into a predefined format. This process can suppress the degrading effects caused by pupil dilation/contraction, camera-to-eye distance, and head tilt. In this section, we briefly describe the segmentation method and give some details about normalization and image enhancement modules.

2.1. Segmentation

We implemented the integro-differential operator proposed by Daugman [1] to find both the inner and outer iris borders, given by

$$\max_{(r,\, x_0,\, y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\, x_0,\, y_0} \frac{I(x, y)}{2\pi r}\, ds \right| \quad (1)$$

where $I(x, y)$ is an eye image, $r$ is the radius to search for, $G_\sigma(r)$ is a Gaussian smoothing function with blurring factor $\sigma$, and the contour integration is taken along the circle $ds$ defined by the parameters $(r, x_0, y_0)$. This operator scans the input image for a circle having a maximum gradient change along a circular arc of radius $r$ and center coordinates $(x_0, y_0)$. The segmentation process begins with finding the outer boundary located between the iris and the white sclera. Due to its high contrast, the outer boundary can be detected while $\sigma$ is set for a coarse scale of analysis. Since the presence of the eyelids and eyelashes significantly increases the computed gradient, the arc is restricted to the area not affected by them. Hence, the areas confined to two opposing 90° cones centered on the horizontal axis are searched for the outer boundary. In other words, the operator is applied to the part of the texture located near the horizontal axis. Thereafter, the algorithm looks for the inner boundary with a finer blurring factor, as this border is not as strong as the outer one. In this stage, to avoid being affected by specular reflections, the part of the arc located in a 90° cone centered on the vertical axis, which partially covers the lower part of the iris, is set aside. The operator is applied iteratively with the amount of smoothing progressively reduced in order to reach a precise localization of the inner boundary.
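To make the search concrete, the following sketch implements the operator of (1) as a coarse grid search over candidate centers and radii. The function names, the sampling resolution, and the single-pass search are illustrative assumptions, not the authors' iterative coarse-to-fine implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def arc_mean(image, x0, y0, r, thetas):
    """Mean intensity along the circular arc of radius r centered at (x0, y0),
    sampled at the given angles (restrict thetas to skip eyelid regions)."""
    xs = np.clip(np.round(x0 + r * np.cos(thetas)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(y0 + r * np.sin(thetas)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs].mean()

def integro_differential(image, centers, radii, sigma, thetas):
    """Return the circle (x0, y0, r) maximizing |G_sigma(r) * d/dr of the
    normalized contour integral|, i.e., Equation (1)."""
    best, best_val = None, -np.inf
    for x0, y0 in centers:
        means = np.array([arc_mean(image, x0, y0, r, thetas) for r in radii])
        # Smooth the radial profile, then take the magnitude of its derivative
        response = np.abs(np.gradient(gaussian_filter1d(means, sigma)))
        i = int(np.argmax(response))
        if response[i] > best_val:
            best, best_val = (x0, y0, radii[i]), response[i]
    return best
```

For the outer boundary, `thetas` would be restricted to the two opposing 90° cones around the horizontal axis with a large `sigma`; for the inner boundary, `sigma` is reduced and the lower vertical cone excluded, mirroring the procedure described above.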

2.2. Normalization

After locating the inner and outer iris borders, to compensate for the varying size of the pupil and the capturing distance, the segmented irises are mapped into a dimensionless polar coordinate system according to Daugman's rubber sheet model. Daugman [1] suggested a normal Cartesian-to-polar transform that remaps each pixel in the iris area into a pair of polar coordinates $(r, \theta)$, where $r$ and $\theta$ lie on the intervals $[0, 1]$ and $[0, 2\pi]$, respectively. This unwrapping is formulated as follows:

$$I\big(x(r, \theta),\, y(r, \theta)\big) \rightarrow I(r, \theta) \quad (2)$$

such that

$$x(r, \theta) = (1 - r)\, x_p(\theta) + r\, x_i(\theta), \qquad y(r, \theta) = (1 - r)\, y_p(\theta) + r\, y_i(\theta), \quad (3)$$

where $I(x, y)$, $(x, y)$, $(r, \theta)$, $(x_p(\theta), y_p(\theta))$, and $(x_i(\theta), y_i(\theta))$ are the iris region, the Cartesian coordinates, the corresponding polar coordinates, and the coordinates of the pupil and iris boundaries along the $\theta$ direction, respectively. We performed this method for normalization, selected 128 pixels along $r$ and 512 pixels along $\theta$, and obtained a 512×128 unwrapped strip.
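A minimal sketch of the rubber-sheet mapping of (2) and (3) follows, assuming the pupil and iris boundaries have been fitted as circles; nearest-neighbor sampling is used for brevity, whereas an implementation may prefer bilinear interpolation. The function name is illustrative.

```python
import numpy as np

def rubber_sheet(image, pupil, iris, n_r=128, n_theta=512):
    """Daugman rubber-sheet normalization: map the iris annulus to a
    rectangle. pupil/iris are (xc, yc, radius) circles."""
    xp, yp, rp = pupil
    xi, yi, ri = iris
    r = np.linspace(0.0, 1.0, n_r)                        # radial coordinate in [0, 1]
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    # Linear interpolation between the pupil and iris boundaries, Eq. (3)
    x = (1 - rr) * (xp + rp * np.cos(tt)) + rr * (xi + ri * np.cos(tt))
    y = (1 - rr) * (yp + rp * np.sin(tt)) + rr * (yi + ri * np.sin(tt))
    xs = np.clip(np.round(x).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(y).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]        # rows: r, columns: theta -> 128 x 512 strip
```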

2.3. Enhancement

The quality of iris images can be affected by a variety of factors. As this quality degradation significantly influences the performance of the feature extraction and matching processes, it must be handled properly. In general, one can classify the underlying factors into two main categories, namely, noncooperative subject behavior and nonideal environmental illumination. Although the effects of such factors can partially be mitigated by means of a robust feature extraction strategy, they must also be alleviated in the image enhancement module, making texture features more salient.

Thus far, many approaches have been proposed to enhance the quality of iris images, of which the local ones seem more effective in dealing with texture irregularities, as they largely avoid deteriorating good-quality regions and altering the features of the iris image. On this ground, to obtain uniformly distributed illumination and better contrast, we apply a local histogram-based image enhancement to the normalized NIR iris images. Since the NIR images used in our experiments are not highly occluded by the eyelids and eyelashes, they are fed into the feature extraction phase with no further processing. On the contrary, the VL images suffer from a high level of occlusion which turns the upper half of the iris into an unreliable and somewhat uninformative region. Although some recently developed methods aim at identifying and isolating these local regions in an iris image, they are often time-consuming and not accurate enough, letting some occluded regions through and thus causing significant performance degradation. Hence, we discarded the upper half region and fed VL iris images of 256-pixel width and 128-pixel height to the feature extraction stage.
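The paper does not name a specific local histogram algorithm, so the sketch below uses scikit-image's CLAHE as a plausible stand-in for the local histogram-based enhancement, together with the cropping applied to VL strips. Which half of the angular axis corresponds to the upper iris depends on the unwrapping start angle and is an assumption here, as are the parameter values.

```python
import numpy as np
from skimage import exposure

def enhance_local(strip, kernel_size=32, clip_limit=0.02):
    """Local histogram-based contrast enhancement of a normalized strip
    (CLAHE as a stand-in; parameters are illustrative)."""
    img = strip.astype(np.float64)
    img = (img - img.min()) / max(np.ptp(img), 1e-9)   # rescale to [0, 1]
    return exposure.equalize_adapthist(img, kernel_size=kernel_size,
                                       clip_limit=clip_limit)

def crop_upper_half_vl(strip):
    """Discard the half of the angular axis covering the occlusion-prone
    upper iris, leaving a 128 x 256 VL region."""
    return strip[:, :strip.shape[1] // 2]
```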

3. Proposed Feature Extraction Method

Robustness against the degradation factors is essential for reliable verification. A typical source of error in iris recognition systems is the lack of similarity between two iris patterns pertaining to the same individual. This mainly stems from texture deformation, occluded regions, and degradation factors like motion blur and lack of focus. The more a method relies on texture details, the more it is prone to failed verification. Generally, the existing methods dealing with NIR iris images tend to capture sharp variations of the texture and detailed information of the muscular structure, like the position and orientation of fibers. However, no high frequency information can be obtained from blurred and unfocused iris images. Such dramatic performance degradation can be observed in the experiments conducted in [36].

The goal of our feature extraction strategy is to reduce the adverse effects of the degradations to a minimum by extracting texture information minimally affected by the noise factors. To do this, we utilize global variations combined with local but soft variations of the texture along the angular direction. The global variations can potentially reduce the adverse effects of local noisy regions, and the local variations make it possible to extract essential texture information from blurred and unfocused images. To take advantage of both feature sets, we adopt an SVM-based fusion rule after performing the matching module. Figure 2 depicts an algorithmic overview of the proposed method.

Figure 2

An algorithmic overview of the proposed recognition method.

In the following, we explain the proposed local and global variations in detail, including the parameters obtained from the training sets and the lengths of the final binary feature vectors. The values reported as the optimal parameters are identical for both NIR and VL images; however, the reported code lengths for the local and global feature vectors apply only to the VL images. These values depend on the size of the images, and since the NIR images are twice the size of the VL images in the angular direction, the related values for NIR images are twice as big as those stated for VL images.

3.1. Global Variations

Due to the different textural behavior in the pupillary and ciliary zones, and also to reduce the negative effects of local noisy regions, the image is divided into two distinct parts by the green dashed line, as depicted in Figure 3. The following strategy is performed on each part, and the resulting codes are concatenated to form the final global feature vector.

Figure 3

An overview of the proposed method for extracting global texture variations. The green dashed line separates the region of interest into two subregions, on each of which the global feature extraction is performed. The red cross indicates the omission of the left half of the normalized image, corresponding to the upper half of the iris, which is often occluded by the eyelids and eyelashes. Note that, in the case of NIR images, the upper half of the iris is not discarded.

On each column, a 10-pixel-wide window is placed, and the average of the intensity values in this window is computed. Repeating this process for all columns leads to a 1D signature that reflects the global intensity variation of the texture along the angular direction. The signature includes some high frequency fluctuations that are probably created as a result of noise; another probable cause is high contrast and quality of the texture in the corresponding regions. In the best case, the high frequency components of the signature are not reliable. Since the purpose is to robustly reveal the similarity of two iris patterns, and given that these fluctuations are susceptible to the image quality, the signature is smoothed to achieve a more reliable representation. To smooth the signature, a 20-sample moving average filter is applied. Although more reliable for comparison, the smoothed signatures lose a considerable amount of information. To compensate for the missing information, a solution is to adopt a method which locally, and in a redundant manner, extracts salient features of the signature. Therefore, we perform the 1D DCT on overlapped segments of the signature. To that end, the signature is divided into several segments, each 20 samples long, sharing 10 overlapping samples with each adjacent segment. On each segment, the 1D DCT is performed and a subset of coefficients is selected. Because of the soft behavior of the smoothed signature, the essential information is roughly summarized in the first five DCT coefficients. Then, the first coefficient of each segment is put in a sequence. Performing the same task for the other four coefficients results in five sequences of numbers that can be regarded as five 1D signals. Indeed, instead of the original signature, five informative 25-sample signals are obtained. In this way, the smoothed signature is compressed to half of its original length.
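The steps above can be summarized in the following sketch, which builds the smoothed signature of one subregion and its five DCT-coefficient signals. The exact placement of the 10-pixel averaging window and the boundary handling are assumptions (the paper obtains 25 segments per 256-column region, which suggests circular treatment at the boundary, while a simple linear slide is used here).

```python
import numpy as np
from scipy.fftpack import dct

def global_signals(region, win=10, smooth=20, seg=20, step=10, n_coef=5):
    """Five 1D signals of DCT coefficients for one subregion of the
    normalized iris (rows x columns, columns along the angular direction)."""
    cols = region.mean(axis=0)                            # one value per column
    signature = np.convolve(cols, np.ones(win) / win, mode="same")
    smoothed = np.convolve(signature, np.ones(smooth) / smooth, mode="same")
    starts = range(0, len(smoothed) - seg + 1, step)      # 10-sample overlap
    coefs = np.array([dct(smoothed[s:s + seg], norm="ortho")[:n_coef]
                      for s in starts])                   # (segments, 5)
    return coefs.T                                        # five signals over segments
```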

To encode the obtained signals, we apply two different coding strategies in accordance with the characteristics of the selected coefficients. The 1D signal generated from the first DCT coefficient contains positive values representing the average value of each segment. Therefore, a coding strategy based on the first derivative of this signal is performed, that is, positive and negative derivatives are substituted with one and zero, respectively. Since the remaining four generated signals vary around zero, a zero-crossing detector is adopted to encode them. Finally, corresponding to each part of the iris, a binary code containing 125 bits is generated. Concatenating the obtained codes leads to a 250-bit global binary vector. Figure 3 illustrates how the binary vector pertaining to the global variations of the lower region is created.
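A sketch of the two coding strategies follows. Using the derivative sign for the first-coefficient signal and the zero-crossing (sign) code for the remaining four is taken directly from the text, while the handling of the end samples, which makes the bit count differ slightly from the paper's 125 per subregion, is an assumption.

```python
import numpy as np

def encode_global(signals):
    """Binary code for one subregion: derivative sign of the first
    DCT-coefficient signal, zero-crossing code of the other four."""
    first = (np.diff(signals[0]) > 0).astype(np.uint8)    # 1: rising, 0: falling
    rest = [(s > 0).astype(np.uint8) for s in signals[1:]]
    return np.concatenate([first] + rest)
```

The codes of the two subregions are then concatenated into the final global vector.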

3.2. Local Variations

The proposed method to encode the local variations is founded on the idea of intensity signals suggested by Ma et al. [25, 26], and the goal is to extract soft variations robust against the degradation factors. To that end, we exploit the energy compaction property of the DCT and the multiresolution property of wavelet decomposition to capture the soft changes of the intensity signals. To generate the intensity signals, we divide the normalized iris into overlapping horizontal patches, as depicted in Figure 4. Then, each patch is averaged along the radial direction, which results in a 1D intensity signal. We use patches 10 pixels in height with five overlapping rows; thus, 24 intensity signals are obtained.
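A sketch of the intensity-signal generation under the stated geometry (10-row tracks, 5-row overlap, 128-row strip) follows; the function name is illustrative.

```python
import numpy as np

def intensity_signals(strip, height=10, overlap=5):
    """Average each overlapping horizontal track along the radial
    direction; a 128 x 256 VL strip yields 24 signals of 256 samples."""
    step = height - overlap
    starts = range(0, strip.shape[0] - height + 1, step)
    return np.stack([strip[s:s + height, :].mean(axis=0) for s in starts])
```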

Figure 4

An overview of the proposed method for extracting local texture variations. The colored rectangles separate the region of interest into 24 tracks, each 10 pixels in height. The local feature extraction is performed on each track. For visualization purposes, the height of each track is exaggerated.

When using wavelet decomposition, the key point is to ascertain which subband best matches the smooth behavior of the intensity signals. For this purpose, reconstructions of the intensity signals based on different subbands were visually examined. As confirmed by our experiments, the approximation coefficients of the third level of decomposition can efficiently display the low frequency variations of the intensity signals. To encode the coefficients, a zero-crossing representation is used, and a binary vector containing 32 bits is obtained. Applying the same strategy to all 24 intensity signals, a 768-bit binary vector is achieved.
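The wavelet branch can be sketched with PyWavelets as below. The paper does not name the wavelet family, so Daubechies-2 is an assumption, and boundary extension makes the coefficient count approximate (about 32 for a 256-sample signal, matching the stated 32 bits).

```python
import numpy as np
import pywt

def wavelet_code(signal, wavelet="db2", level=3):
    """Zero-crossing code of the level-3 approximation coefficients,
    which retain only the low frequency variations of the signal."""
    approx = pywt.wavedec(signal, wavelet, level=level)[0]
    return (approx > 0).astype(np.uint8)
```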

In the second approach, the goal is to summarize the information content of the soft variations in a few DCT coefficients. To that end, we smooth the intensity signals with a moving average filter. Then, each smoothed signal is divided into nonoverlapping 10-pixel-long segments. After performing the 1D DCT on each segment, the first two DCT coefficients are selected. Concatenating the DCT coefficients obtained from the consecutive segments results in two 1D signals, each containing 25 samples. To get a binary representation, zero-crossing of the signals' first derivative is applied. This algorithm produces a 1200-bit binary vector for a given iris pattern. The final 1968-bit local binary vector is produced by concatenating the vectors obtained from the above two approaches.
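The DCT branch and the assembly of the final local code can be sketched as follows. The moving-average length is an assumption (the paper does not state it for this step), and exact bit counts depend on boundary handling.

```python
import numpy as np
from scipy.fftpack import dct

def local_dct_code(signal, smooth=10, seg=10):
    """Smooth, cut into nonoverlapping 10-sample segments, keep the first
    two DCT coefficients per segment, and take the derivative sign of
    each resulting coefficient sequence."""
    s = np.convolve(signal, np.ones(smooth) / smooth, mode="same")
    segs = s[: len(s) // seg * seg].reshape(-1, seg)
    coefs = dct(segs, norm="ortho", axis=1)[:, :2]        # (25, 2) for 256 samples
    return np.concatenate([(np.diff(coefs[:, k]) > 0).astype(np.uint8)
                           for k in range(2)])

def local_code(signals):
    """Concatenate the wavelet and DCT parts over all 24 intensity
    signals (768 + 1200 = 1968 bits in the paper)."""
    return np.concatenate([np.concatenate([wavelet_code(s), local_dct_code(s)])
                           for s in signals])
```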

3.3. Matching

To compare two iris images, we use the nearest neighbor approach as the classifier and the Hamming distance as the similarity measure. To compensate for eye rotation during the acquisition process, we store eight additional local and global binary feature vectors. This is accomplished by horizontal shifts of 3, 6, 9, and 12 pixels on either side in the normalized images. During verification, the local binary feature vector of a test iris image is compared against the nine vectors of the stored template, and the minimum distance is chosen. The same procedure is repeated for all training samples, and the minimum result is selected as the matching Hamming distance based on the local feature vector. A similar approach is applied to obtain the matching Hamming distance based on the global feature vector. To decide on the identity of the test iris image, the fusion rule explained below is adopted to obtain the final similarity from the computed matching distances.
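A sketch of the shift-tolerant matching follows; storing precomputed codes for the shifted strips, rather than shifting at match time, mirrors the description above.

```python
import numpy as np

def min_hamming(test_code, template_codes):
    """Normalized Hamming distance between a test code and the nine stored
    variants of a template (original plus codes of the +/-3, 6, 9, and
    12 pixel shifted strips); the minimum compensates for eye rotation."""
    return min(np.count_nonzero(test_code != t) / t.size for t in template_codes)
```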

3.4. Fusion Strategy

The SVM provides a powerful tool to address many pattern recognition problems in which the observations lie in a high dimensional feature space. One of the main advantages of the SVM is that it provides an upper bound on the generalization error based on the number of support vectors in the training set. Although traditionally used for classification purposes, the SVM has recently been adopted as a strong score fusion method. For instance, it has successfully been applied to iris recognition (e.g., [12, 14]), giving rise to better performance in comparison with statistical fusion rules or kernel-based match score fusion methods. Besides, the SVM classifier has some advantages over Artificial Neural Networks (ANNs) and often outperforms them. In contrast to ANNs, which suffer from the existence of multiple local minima, SVM training always finds a global minimum. While ANNs are prone to overfitting, an SVM classifier provides a soft decision boundary and hence a superior generalization capability. Above all, an SVM classifier is insensitive to the relative numbers of training examples in the positive and negative classes, which plays a critical role in our classification problem. Accordingly, to take advantage of both local and global features derived from the iris texture, the SVM is employed here to fuse the dissimilarity values. In the following, we briefly explain how the SVM serves as a fusion rule.

The output of the matching module, the two Hamming distances, represents a point in a 2D distance space. To compute the final matching distance, the genuine and imposter classes must be defined based on the training set. The pairs of Hamming distances computed between every two iris images of the same individual constitute the points belonging to the genuine class. The imposter class comprises the pairs of Hamming distances explaining the dissimilarity between every two iris images of different individuals. Here, ascertaining the fusion strategy means mapping all the points lying in the distance space into a 1D space in which the points of the different classes gain maximum separability. For this purpose, the SVM is adopted to determine the separating boundary between the genuine and imposter classes. Using different kernels makes it possible to define linear and nonlinear boundaries and consequently a variety of linear and nonlinear fusion rules. The position and distance of a new test point relative to the decision boundary determine the sign and absolute value of the fused distance, respectively.
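As a concrete sketch, the fusion can be expressed with an off-the-shelf SVM as below. scikit-learn's SVC (libsvm, an SMO-type solver) stands in for the toolbox the authors used, the RBF kernel follows Section 4.3, and the function names and the `gamma` value are illustrative placeholders, not the paper's tuned values.

```python
import numpy as np
from sklearn.svm import SVC

def train_fusion(train_points, train_labels, gamma=1.0):
    """Fit the fusion boundary on 2D points (local HD, global HD);
    labels: 1 for genuine pairs, 0 for imposter pairs."""
    svm = SVC(kernel="rbf", gamma=gamma)
    svm.fit(np.asarray(train_points), np.asarray(train_labels))
    return svm

def fused_distance(svm, local_hd, global_hd):
    """Signed distance to the SVM boundary: the sign gives the predicted
    class and the magnitude the confidence, i.e., the fused 1D score."""
    return float(svm.decision_function([[local_hd, global_hd]])[0])
```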

4. Experiments

In this section, first, we describe the iris databases and algorithms used for evaluating the performance of the proposed feature extraction algorithm. Thereafter, the experimental results along with the details of the fusion strategy are presented.

4.1. Databases

To evaluate the performance of the proposed feature extraction method, we selected two iris databases, namely, CASIA Version 1 [39] and UBIRIS [40]. The rationale behind choosing these databases is as follows.

(i) To evaluate the efficiency of the proposed method on iris images taken under both VL and NIR illumination (UBIRIS + CASIA).

(ii) To examine the effectiveness of our method in dealing with nonideal VL iris images (UBIRIS).

(iii) To clear up doubts over the usefulness of the proposed method in dealing with almost ideal NIR iris images (CASIA).

(iv) To assess the effects of the anatomical structures of the irises belonging to European and Asian subjects (UBIRIS + CASIA).

In the following, a brief description of the databases along with conditions under which experiments are conducted is given.

(i) The CASIA Ver.1 database is one of the most commonly used iris image databases for evaluation purposes, and many papers report experimental results on it. CASIA Ver.1 contains 756 iris images pertaining to 108 Asian individuals, taken in two different sessions. We choose the three samples taken in the first session to form the training set, and all samples captured in the second session serve as test samples. This protocol is consistent with the widely accepted practice for testing biometric algorithms and is followed by many papers in the literature. It should be noted that we are aware that the pupil region of the captured images in this database has been edited by CASIA. However, this merely facilitates the segmentation process and does not affect the feature extraction and matching phases. Some samples of CASIA Ver.1 are depicted in Figure 5(a).

(ii) The UBIRIS database is composed of 1877 images from 241 European subjects captured in two different sessions. The images in the first session are gathered in a way that reduces the adverse effects of the degradation factors to a minimum, whereas the images captured in the second session have irregularities in reflection, contrast, natural luminosity, and focus. We use one high quality and one low quality iris image per subject as the training set and put the remaining images in the test set. For this purpose, we manually inspected the image quality of each individual's iris images. Figure 5(b) shows some samples of the UBIRIS database.

Figure 5

Iris samples from (a) CASIA Ver. 1 and (b) UBIRIS.

4.2. Methods Used for Comparison

To compare our approach with other methods, we use three feature extraction strategies suggested in [29, 37, 38]. The wavelet-based method [29] yields results comparable with several well-known methods. The other method [37] is a filter-based approach and can be considered a Daugman-like algorithm. The corresponding authors of both papers provided us with their source codes, permitting a fair comparison. We also use the publicly available MATLAB source code of the iris recognition algorithm in [38], which is widely used for comparison purposes. It should be noted that during our experiments, no strategy is adopted for detecting the eyelids and eyelashes; we just discard the upper half of the iris to eliminate the eyelashes. However, as Masek's method [38] is equipped with a template generation module and is able to cope with occluded eye images, for it we do not discard the upper half of the iris and feed the whole normalized image to the feature extraction module.

Furthermore, there exist a few iris images suffering from nonlinear texture deformation because of mislocalization of the iris. We deliberately do not correct these cases and let them enter the feature extraction and matching process. Although segmentation errors can significantly increase the overlap between the inter- and intraclass distributions [41], letting this error into the process simulates what happens in practical applications and also permits us to compare the robustness of the implemented methods and the proposed one in dealing with texture deformation.

4.3. Results

We use a free, publicly available toolbox [42] compatible with the MATLAB environment to implement the SVM classifier. Since a quadratic programming-based learning method is suitable only for a very limited number of training samples, the Sequential Minimal Optimization (SMO) solver [43] is utilized for the SVM classifier design in the distance space. We performed extensive experiments on both databases to determine the optimal kernel and its associated parameters. The number of support vectors, the mean squared error value, and the classification accuracy are used as our measures to determine the optimal kernel. In the end, it was ascertained that the Radial Basis Function (RBF) kernel achieves the best results for both databases. Scatter plots of the observations generated on the preconstructed training sets and their separating boundaries for UBIRIS and CASIA Ver.1, using the optimal parameters, are depicted in Figure 6. In this figure, the green circles represent the calculated distances between iris images of different individuals, the red dots stand for those of the same individuals, the black solid line indicates the SVM boundary, and the black dashed lines delineate the margin boundaries.

Figure 6

Scatter plot of the genuine and imposter classes along with the discriminating boundary for (a) the UBIRIS database and (b) the CASIA Version 1 database. The black solid curve indicates the SVM boundary, and the black dashed curves delineate the margin boundaries.

Figure 7 shows the receiver operating characteristic (ROC) plots for the UBIRIS and CASIA Ver.1 databases using the local variations, the global variations, and the SVM-based score-level fusion algorithm. As expected, the SVM-based fusion approach performs best compared with local-only and global-only verification. The FRR of the individual features is high, but the fusion algorithm reduces it, providing an FRR of 2.0% at 0.01% False Acceptance Rate (FAR) on the UBIRIS database and 2.3% on the CASIA Ver.1 database. From Figure 7(b), one can infer that the global variations on the CASIA database cannot yield enough discriminating information on their own, although they provide complementary information for the local variations. This originates from the nonuniform distribution of texture information in the NIR images. Indeed, the signature obtained from the outer area of the iris does not reveal sufficient texture details, and this decreases the discriminating power of the global variations.

Figure 7

ROC plots of the local variations, the global variations, and the SVM-based fusion rule for (a) the UBIRIS database and (b) the CASIA Version 1 database. As can be seen, applying the fusion rule achieves a significant enhancement for both databases. Note that the decimal values on the horizontal axis are not in percentage format (i.e., 0.0001 stands for 0.01%).

To present a quantitative comparison, Table 1 summarizes the resulting values of the Equal Error Rate (EER), the FRR (at FAR = 0.01%), and the Fisher Ratio (FR) for the proposed approach and the other implemented methods. In the case of UBIRIS, the proposed method gives the lowest FRR and EER and also yields the maximum separability of the inter- and intraclass distributions, whereas the other implemented methods exhibit unreliable performance with high FRR (a low level of acceptability). This implies that the proposed method extracts essential information from degraded iris images, indicating its effectiveness for less constrained image capture setups such as those of mobile electronic devices. In the case of CASIA, Poursaberi's approach [29] gives the best performance except on the EER measure, while our method yields comparable results. The reason for the lower performance of our method on NIR images originates from our design approach. Indeed, in the strategies we exploited to extract both local and global variations, the details of the iris texture are deliberately omitted in order to achieve a robust representation of texture information. This may decrease the efficiency of the proposed method on high quality iris images, and this performance degradation would manifest itself further on larger ideal NIR databases. In other words, a reliable iris texture representation is achieved at the expense of some loss of detailed information. Finally, it should be noted that we cannot compare our results with those reported in [25, 26]. This is due to the limitations on the usage rights of CASIA Ver.1 and the fact that only a subset of the images used in [25, 26] is available online.

Table 1 Comparison between the error rates obtained from the proposed method and the other state-of-the-art algorithms for the UBIRIS and CASIA Version 1 databases.

It is noteworthy that we cannot draw a direct comparison between existing methods evaluated on the UBIRIS database and the proposed approach, because different research teams make different assumptions while using the database. Some researchers use only a subset of the iris images, others discard highly degraded images that fail in the segmentation process, and still others use only one session of UBIRIS for evaluation purposes. In our experiments, we combined both sessions of UBIRIS and divided the whole into test and training sets, giving us a solid evaluation of our method on a large number of iris images. Besides, implementing the mentioned methods merely based on their publications would result in an unfair comparison. Therefore, we cannot compare the performance of the proposed approach with other state-of-the-art methods. Nevertheless, according to our results, we believe that our method's performance is one of the best reported for the UBIRIS database.

5. Conclusion

In this paper, we proposed a new feature extraction method based on the local and global variations of the iris texture. To combine the information obtained from the local and global variations, an SVM-based fusion strategy at the score level was performed. Experimental results on the UBIRIS database showed that the authentication performance of the proposed method is superior to that of other recent methods, implying the robustness of our approach in dealing with the degradation factors existing in many of the UBIRIS iris images. However, the results obtained on CASIA Version 1 indicated that the efficiency of the proposed method declines somewhat when it encounters almost ideal NIR iris images. Although, compared with the other methods, there is no significant decrease in performance, more degradation is expected on larger NIR databases. Indeed, the larger the NIR database, the more the effects of neglecting texture details are revealed.

It should be noted that in the presented work our target was to extract texture information from degraded VL iris images, and this was achieved at the expense of neglecting the high frequency behavior of the texture. Therefore, it is more reasonable to apply the proposed method as a complementary approach, recognizing noisy and nonideal iris images while leaving rather ideal images to traditional iris recognition methods. This will enhance the acceptability of iris-based authentication systems by preventing subjects from being repeatedly involved in the image acquisition process and by relaxing some constraints imposed on their behavior during acquisition.

References

1. Daugman JG: High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence 1993, 15(11):1148-1161. 10.1109/34.244676

2. Bowyer KW, Baker SE, Hentz A, Hollingsworth K, Peters T, Flynn PJ: Factors that degrade the match distribution in iris biometrics. Identity in the Information Society 2009, 2(3):327-343. 10.1007/s12394-009-0037-z

3. Proença H, Alexandre LA: Iris segmentation methodology for non-cooperative recognition. IEE Proceedings: Vision, Image and Signal Processing 2006, 153(2):199-205. 10.1049/ip-vis:20050213

4. Park KR, Park H-A, Kang BJ, Lee EC, Jeong DS: A study on iris localization and recognition on mobile phones. EURASIP Journal on Advances in Signal Processing 2008, 2008, 12 pages.

5. Proença H: Iris recognition: a method to segment visible wavelength iris images acquired on-the-move and at-a-distance. Proceedings of the 4th International Symposium on Visual Computing (ISVC '08), December 2008, Las Vegas, Nev, USA, Lecture Notes in Computer Science 5358:731-742.

6. Li P, Liu X, Xiao L, Song Q: Robust and accurate iris segmentation in very noisy iris images. Image and Vision Computing 2010, 28(2):246-253. 10.1016/j.imavis.2009.04.010

7. Puhan NB, Sudha N, Kaushalram AS: Efficient segmentation technique for noisy frontal view iris images using Fourier spectral density. Signal, Image and Video Processing. In press.

8. Thornton J, Savvides M, Vijaya Kumar BV: A Bayesian approach to deformed pattern matching of iris images. IEEE Transactions on Pattern Analysis and Machine Intelligence 2007, 29(4):596-606.

9. Takano H, Kobayashi H, Nakamura K: Rotation invariant iris recognition method adaptive to ambient lighting variation. IEICE Transactions on Information and Systems 2007, E90-D(6):955-962. 10.1093/ietisy/e90-d.6.955

10. Schuckers SAC, Schmid NA, Abhyankar A, Dorairaj V, Boyce CK, Hornak LA: On techniques for angle compensation in nonideal iris recognition. IEEE Transactions on Systems, Man, and Cybernetics B 2007, 37(5):1176-1190.

11. Huang J, You X, Yuan Y, Yang F, Lin L: Rotation invariant iris feature extraction using Gaussian Markov random fields with non-separable wavelet. Neurocomputing 2010, 73(4–6):883-894.

12. Park H-A, Park KR: Iris recognition based on score level fusion by using SVM. Pattern Recognition Letters 2007, 28(15):2019-2028. 10.1016/j.patrec.2007.05.017

13. Proença H, Alexandre LA: Iris recognition: an entropy-based coding strategy robust to noisy imaging environments. Proceedings of the 3rd International Symposium on Visual Computing (ISVC '07), November 2007, Lake Tahoe, Nev, USA, Lecture Notes in Computer Science 4841:621-632.

14. Vatsa M, Singh R, Noore A: Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing. IEEE Transactions on Systems, Man, and Cybernetics B 2008, 38(4):1021-1035.

15. Miyazawa K, Ito K, Aoki T, Kobayashi K, Nakajima H: An effective approach for iris recognition using phase-based image matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 2008, 30(10):1741-1756.

16. Sun Z, Tan T: Ordinal measures for iris recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 2009, 31(12):2211-2226.

17. Adam M, Rossant F, Amiel F, Mikovikova B, Ea T: Eyelid localization for iris identification. Radioengineering 2008, 17(4):82-85.

18. Min T-H, Park R-H: Eyelid and eyelash detection method in the normalized iris image using the parabolic Hough model and Otsu's thresholding method. Pattern Recognition Letters 2009, 30(12):1138-1143. 10.1016/j.patrec.2009.03.017

19. Liu X, Li P, Song Q: Eyelid localization in iris images captured in less constrained environment. Proceedings of the 3rd International Conference on Advances in Biometrics (ICB '09), June 2009, Alghero, Italy, Lecture Notes in Computer Science 5558:1140-1149.

20. Hollingsworth KP, Bowyer KW, Flynn PJ: The best bits in an iris code. IEEE Transactions on Pattern Analysis and Machine Intelligence 2009, 31(6):964-973.

21. Ross A, Shah S: Segmenting non-ideal irises using geodesic active contours. Proceedings of the Biometric Symposium (BCC '06), September 2006, Baltimore, Md, USA, 1-6.

22. Barzegar N, Moin MS: A new user dependent iris recognition system based on an area preserving pointwise level set segmentation approach. EURASIP Journal on Advances in Signal Processing 2009, 2009, 13 pages.

23. Daugman J: New methods in iris recognition. IEEE Transactions on Systems, Man, and Cybernetics B 2007, 37(5):1167-1175.

24. Bowyer KW, Hollingsworth K, Flynn PJ: Image understanding for iris biometrics: a survey. Computer Vision and Image Understanding 2008, 110(2):281-307. 10.1016/j.cviu.2007.08.005

25. Ma L, Tan T, Wang Y, Zhang D: Local intensity variation analysis for iris recognition. Pattern Recognition 2004, 37(6):1287-1298. 10.1016/j.patcog.2004.02.001

26. Ma L, Tan T, Wang Y, Zhang D: Efficient iris recognition by characterizing key local variations. IEEE Transactions on Image Processing 2004, 13(6):739-750. 10.1109/TIP.2004.827237

27. Monro DM, Rakshit S, Zhang D: DCT-based iris recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 2007, 29(4):586-595.

28. Lim S, Lee K, Byeon O, Kim T: Efficient iris recognition through improvement of feature vector and classifier. ETRI Journal 2001, 23(2):61-70. 10.4218/etrij.01.0101.0203

29. Poursaberi A, Araabi BN: Iris recognition for partially occluded images: methodology and sensitivity analysis. EURASIP Journal on Advances in Signal Processing 2007, 2007, 12 pages.

30. Nighswander-Rempel SP, Riesz J, Gilmore J, Meredith P: A quantum yield map for synthetic eumelanin. Journal of Chemical Physics 2005, 123(19), 6 pages.

31. Perna G, Frassanito MC, Palazzo G, Gallone A, Mallardi A, Biagi PF, Capozzi V: Fluorescence spectroscopy of synthetic melanin in solution. Journal of Luminescence 2009, 129(1):44-49. 10.1016/j.jlumin.2008.07.014

32. Tajbakhsh N, Araabi BN, Soltanian-Zadeh H: An intelligent decision combiner applied to noncooperative iris recognition. Proceedings of the 11th International Conference on Information Fusion (FUSION '08), June-July 2008, Cologne, Germany.

33. Tajbakhsh N, Araabi BN, Soltanian-Zadeh H: An intelligent decision combiner applied to noncooperative iris recognition. Proceedings of the 11th International Conference on Information Fusion (FUSION '08), June-July 2008, Cologne, Germany.

34. Hosseini MS, Araabi BN, Soltanian-Zadeh H: Pigment melanin: pattern for iris recognition. IEEE Transactions on Instrumentation and Measurement 2010, 59(4):792-804. Special issue on Biometrics.

35. Grabowski K, Sankowski W, Zubert M, Napieralska M: Illumination influence on iris identification algorithms. Proceedings of the 15th International Conference on Mixed Design of Integrated Circuits and Systems (MIXDES '08), June 2008, Poznan, Poland, 571-574.

36. Tajbakhsh N, Araabi BN, Soltanian-Zadeh H: Noisy iris verification: a modified version of local intensity variation method. Proceedings of the 3rd International Conference on Advances in Biometrics (ICB '09), June 2009, Alghero, Italy, Lecture Notes in Computer Science 5558:1150-1159.

37. Ahmadi H, Pousaberi A, Azizzadeh A, Kamarei M: An efficient iris coding based on Gauss-Laguerre wavelets. Proceedings of the 2nd IAPR/IEEE International Conference on Advances in Biometrics (ICB '07), August 2007, Seoul, South Korea, Lecture Notes in Computer Science 4642:917-926.

38. Masek L, Kovesi P: MATLAB Source Code for a Biometric Identification System Based on Iris Patterns. School of Computer Science and Software Engineering, University of Western Australia; 2003. http://people.csse.uwa.edu.au/pk/studentprojects/libor/

39. CASIA Iris Image Database, Institute of Automation, Chinese Academy of Sciences, http://www.sinobiometrics.com/

40. Proença H, Alexandre LA: UBIRIS: a noisy iris image database. Proceedings of the 13th International Conference on Image Analysis and Processing (ICIAP '05), September 2005, Cagliari, Italy, Lecture Notes in Computer Science 3617:970-977.

41. Proença H, Alexandre LA: Iris recognition: analysis of the error rates regarding the accuracy of the segmentation stage. Image and Vision Computing 2010, 28(1):202-206. 10.1016/j.imavis.2009.03.003

42. Franc V, Hlaváč V: Statistical Pattern Recognition Toolbox for MATLAB. http://cmp.felk.cvut.cz/cmp/software/stprtool/

43. Platt J: Sequential minimal optimization: a fast algorithm for training support vector machines. In Advances in Kernel Methods-Support Vector Learning. MIT Press, Cambridge, Mass, USA; 1999:185-208.


Acknowledgments

The authors would like to thank Soft Computing and Image Analysis Group from University of Beira Interior-Portugal for the use of the UBIRIS Iris Image Database. Portions of the research in this paper use the CASIA-Iris Version 1 collected by the Chinese Academy of Sciences' Institute of Automation (CASIA). They would like to specially thank Mr. Ahmad Poursaberi and Mr. Azad Aziz-zadeh for providing them with the source codes. The authors would also like to thank the reviewers for providing constructive and helpful comments.

Author information


Correspondence to Hamid Soltanian-Zadeh.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Tajbakhsh, N., Araabi, B. & Soltanian-Zadeh, H. Robust Iris Verification Based on Local and Global Variations. EURASIP J. Adv. Signal Process. 2010, 979058 (2010). https://doi.org/10.1155/2010/979058
