# A procedure to locate the eyelid position in noisy videokeratoscopic images

Tim Schäck^{1}, Michael Muma^{1}, Weaam Alkhaldi^{1} and Abdelhak M. Zoubir^{1}

EURASIP Journal on Advances in Signal Processing **2016**:136

https://doi.org/10.1186/s13634-016-0433-0

© The Author(s) 2016

**Received: **29 June 2016

**Accepted: **1 December 2016

**Published: **13 December 2016

## Abstract

In this paper, we propose a new procedure to robustly determine the eyelid position in high-speed videokeratoscopic images. This knowledge is crucial in videokeratoscopy to study the effects of the eyelids on the cornea and on the tear film dynamics. Difficulties arise due to the very low contrast of videokeratoscopic images and because of the occlusions caused by the eyelashes. The proposed procedure uses robust *M*-estimation to fit a parametric model to a set of eyelid edge candidate pixels. To detect these pixels, firstly, nonlinear image filtering operations are performed to remove the eyelashes. Secondly, we propose an image segmentation approach based on morphological operations and active contours to provide the set of candidate pixels. Subsequently, a verification procedure reduces this set to pixels that are likely to contribute to an accurate fit of the eyelid edge. We propose a complete framework, for which each stage is evaluated using real-world videokeratoscopic images. This methodology allows for automatic localization of the eyelid edges and is applicable to replace the currently used time-consuming manual labeling approach, while maintaining its accuracy.

### Keywords

Videokeratoscopy · Eyelid detection · Nonlinear filtering · Image segmentation · Robust *M*-estimation

## 1 Introduction

A keratoscope is an ophthalmological instrument that allows for non-invasive imaging of the topography of the human cornea, which is the outer surface of the eye [1]. The cornea is the largest contributor to the eye’s refractive power, and its topography is of critical importance when determining the quality of vision and corneal health. For example, astigmatism may occur if the cornea has an irregular or toric curvature. Videokeratoscopy allows for studying the dynamics of the corneal topography [2–5].

Another important application of videokeratoscopy is the analysis of tear film stability in the inter-blink interval. Ocular discomfort can be caused by dry spots which occur if the tear film is destabilized. The tear film build-up and break-up times can be estimated from videokeratoscopic images if the data acquisition rate is sufficiently high [6–9]. Videokeratoscopy is also involved in the study of the dynamic response of the corneal anterior surface to mechanical forces. These mechanical forces are exerted by the eyelids during horizontal eye movements in a downward gaze. More information on the applications of high-speed videokeratoscopy can be found in [10].

Eyelid localization in images is an active area of research, and important applications are, for example, iris recognition systems and drowsiness detection [12–15]. To the best of our knowledge, the case of videokeratoscopic images is still an open research question. In fact, even today, the very time-consuming manual selection of candidate pixels followed by a parametric fit of a parabola in the least squares sense is still the routine operation.

In addition to the difficulty of localizing the image’s region of interest, videokeratoscopy for eye research imposes strong requirements concerning the accuracy of the model of the eyelid edge. The conventional approach to fit a parabola does not always provide a sufficiently accurate approximation to the real curvature. In some images, a non-symmetrical model may be necessary to describe the entire eyelid including the parts covering the sclera. In this paper, we therefore propose and evaluate some alternative models.

*Contributions:* In this paper, a new procedure is proposed to robustly determine the eyelid position in high-speed videokeratoscopic images. The proposed method allows for automatic localization of the eyelid edges which replaces the currently used time-consuming manual labeling. We propose to use robust *M*-estimation to fit a parametric model to a set of eyelid edge candidate pixels. In this way, we account for outliers in the candidate pixels. These are present due to the very low contrast of videokeratoscopic images and because of the occlusions caused by the eyelashes. In the case of the parabola, an alternative robust fit by the Hough transform is also discussed. To detect these pixels, first, nonlinear image filtering operations are performed to remove the eyelashes. In particular, we propose a method based on the gradient direction variance and a wavelet-based method which adapts the procedure of [14] to videokeratoscopic images. Subsequently, an image segmentation approach based on morphological operations and active contours is proposed to provide the set of candidate pixels. We propose and evaluate new linear and nonlinear eyelid curvature models as alternatives to the conventionally used parabola. A real-world data performance analysis is provided to examine the error rates of the proposed models.

*Organization:* Section 2 is dedicated to the proposal and description of the robust procedure to locate the eyelid position in noisy videokeratoscopic images. Section 3 provides real-data experiments and results. Section 4 concludes the paper.

## 2 The proposed procedure for eyelid position estimation in videokeratoscopy

### 2.1 Nonlinear image filtering for eyelash removal

In this step, videokeratoscopic images are processed such that the subsequent algorithms are able to detect candidate pixels that are located on the eyelid edge. As in iris recognition systems [12, 14], an important factor that affects the quality of the eyelid position estimation is the presence of eyelashes. Additional challenges to be considered in videokeratoscopy are the blur of the image and the ring pattern of the Placido disk.

We investigate two different approaches to remove the eyelashes from videokeratoscopic images. The first is based on the gradient direction variance and the second is a wavelet-based method.

#### 2.1.1 Gradient direction variance-based method

We briefly revisit the method by Zhang et al. [18] that is based on nonlinear conditional directional filtering and describe its adaptation to videokeratoscopic images. The image gradients in the *x*- and *y*-directions are denoted as *G*_{ x } and *G*_{ y }, respectively. The local gradient direction *ϕ* and the magnitude ∇ are calculated as follows:

\[ \phi = \arctan\left(\frac{G_{y}}{G_{x}}\right), \qquad \nabla = \sqrt{G_{x}^{2}+G_{y}^{2}}. \]

Pixels whose neighborhood exhibits a high variance of the gradient direction are classified as eyelash pixels, and a directional filter of length *L* is applied to the surrounding pixels to determine a new value of the classified pixel.
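The gradient field and a local gradient-direction variance map (used here as a simple eyelash indicator) can be sketched in NumPy; the window size and the variance criterion are illustrative simplifications of the conditional directional filter of [18]:

```python
import numpy as np

def gradient_direction_and_magnitude(img):
    """Compute the local gradient direction phi and the gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))  # G_y along rows, G_x along columns
    phi = np.arctan2(gy, gx)                 # local gradient direction
    mag = np.sqrt(gx ** 2 + gy ** 2)         # gradient magnitude
    return phi, mag

def direction_variance(phi, win=5):
    """Variance of the gradient direction in a win-by-win neighborhood.
    Thin, irregular structures such as eyelashes yield a high variance,
    whereas the smooth ring pattern yields a low one."""
    pad = win // 2
    p = np.pad(phi, pad, mode="edge")
    var = np.empty_like(phi)
    for i in range(phi.shape[0]):
        for j in range(phi.shape[1]):
            var[i, j] = p[i:i + win, j:j + win].var()
    return var
```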

#### 2.1.2 Wavelet-based method

For iris recognition systems, an effective wavelet-based method for eyelash removal was introduced by Aligholizadeh et al. [14]. Wavelets can be used to decompose the eye image into components that appear at different resolutions. The key advantage of the wavelet transform, compared to the traditional Fourier transform, is its position-frequency localization property, allowing features that occur at the same position and resolution to be matched up.

Videokeratoscopic images are more challenging compared to the images considered in [14]. For this reason, applying the above method only results in a reduction and not the removal of eyelashes. Additional steps are necessary to determine eyelid edge pixels. Our proposed approach is introduced subsequently.
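As a self-contained illustration of the decomposition idea (not the exact method of [14], which uses a different wavelet and further processing), a single-level 2-D Haar transform can be written in plain NumPy; zeroing the detail subbands before reconstruction attenuates fine structures such as eyelashes:

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar decomposition into approximation (LL) and
    detail subbands (LH, HL, HH). Image sides must be even."""
    x = img.astype(float)
    # rows: pairwise average (lowpass) and difference (highpass)
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # columns: repeat on both intermediate results
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d; suppressing the detail subbands before calling
    this reconstruction attenuates fine image structures."""
    h, w = ll.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out
```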

### 2.2 Active contours method for eyelid edge pixel candidate detection

After applying nonlinear image filters to the initial videokeratoscopic image, an active contours image segmentation method is applied to detect pixels in videokeratoscopic images that lie on the eyelid edge. This method outperformed the other image segmentation approaches we studied, such as region growing [19], watershed segmentation [20], and empirical and gradient-based methods [16], which we do not report on for space considerations.

Active contours are widely used in image segmentation to delineate an object contour within an image. The general idea of Kass et al. [21], who introduced the active contour model (also called snakes), was to minimize the energy associated to the current contour as a sum of an internal and external energy. The internal energy term controls the smoothness of the contour and is minimized when the snake’s shape matches the shape of the sought object; the external energy term attracts the contour towards the object and is minimized when the snake is at the object’s boundary. An initial estimate is required which is refined by means of energy minimization.

A snake is represented as a spline of *n* points *v*_{ i } = (*x*_{ i }, *y*_{ i }), where *i* = 0,…,*n*−1. The deformation of the contour depends on the energy function

\[ E_{\text{snake}} = \int_{0}^{1} \Big( E_{\text{internal}}\big(v(s)\big) + E_{\text{image}}\big(v(s)\big) + E_{\text{con}}\big(v(s)\big) \Big)\, \mathrm{d}s, \]

with *E*_{internal} representing the internal energy of the snake, *E*_{image} denoting the image forces acting on the spline, and *E*_{con} representing the external constraint forces introduced by the user. *E*_{image} and *E*_{con} form the external energy acting on the spline.
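The energy terms above can be evaluated for a discrete contour as in the following sketch, which uses the common finite-difference form of the internal energy and the negative gradient magnitude as image energy; the weights `alpha` and `beta` are illustrative assumptions:

```python
import numpy as np

def snake_energy(pts, grad_mag, alpha=0.1, beta=0.1):
    """Discrete snake energy for a closed contour.
    pts: (n, 2) array of contour points (x_i, y_i).
    grad_mag: gradient-magnitude image; -|grad I| attracts the contour to edges."""
    d1 = np.roll(pts, -1, axis=0) - pts                                # elasticity term
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)  # bending term
    e_internal = alpha * (d1 ** 2).sum() + beta * (d2 ** 2).sum()
    xs, ys = pts[:, 0].astype(int), pts[:, 1].astype(int)
    e_image = -grad_mag[ys, xs].sum()                                  # low on strong edges
    return e_internal + e_image
```

Minimizing this energy over small perturbations of `pts` (e.g., greedily per point) refines the initial contour estimate.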

In the case of videokeratoscopy, the recurrent structure of the image allows us to incorporate higher-level prior knowledge to obtain an initial estimate. We propose to apply morphological operations to the nonlinearly filtered image (stage 1). In particular, the nonlinearly filtered image is eroded and dilated with morphological discs.

If *A* is a set in \(\mathbb {Z}^{2}\), then *a* = (*a*_{1}, *a*_{2}) is considered to be an element of *A* if *a* ∈ *A*. This corresponds to a pixel lying within a region of the image.

*Dilation* is thereby defined as

\[ A \oplus B = \left\{\, z \mid \left(B^{s}\right)_{z} \cap A \neq \emptyset \,\right\}, \]

where *A* and *B* denote sets within \(\mathbb {Z}^{2}\) and *B*^{ s } is the reflection of set *B*. The dilation of *A* by *B* is thus the set of all points covered by *B* when the center of *B* moves inside *A*.

*Erosion* is defined as

\[ A \ominus B = \left\{\, z \mid B_{z} \subseteq A \,\right\}, \]

where the translation of *B* by point *z* = (*z*_{1}, *z*_{2}), denoted as *B*_{ z }, is given by \(B_{z} = \{\, b+z \mid b \in B \,\}\). The erosion of *A* by *B* is thus the set of all positions of the center of *B* for which *B* is entirely contained in *A*. In our approach, we combine the two operations, which is referred to in mathematical morphology as *opening*: the erosion of *A* by *B*, followed by dilation of the result by *B*:

\[ A \circ B = (A \ominus B) \oplus B. \]

Opening generally smooths the contour of a set, breaks its narrow isthmuses, and eliminates thin protrusions.

We use morphological discs as structuring elements *B*, with radius *r*_{d} = 2 pixels for dilation and radius *r*_{e} = 16 pixels for erosion. If the size of the structuring element is chosen properly, the eyelashes can be effectively suppressed. From the resulting image, a global image threshold is calculated by Otsu’s method [22] to convert the image to a binary image. Figure 9 displays an example of an obtained binary image that is used as an initial estimate for active contours.
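This initialization can be sketched as follows, combining SciPy's grayscale morphology with a self-contained implementation of Otsu's method; the function names are our own, and the radii follow the values stated above:

```python
import numpy as np
from scipy import ndimage

def disc(r):
    """Binary disc-shaped structuring element of radius r."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return x ** 2 + y ** 2 <= r ** 2

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)               # class probability of the "dark" class
    mu = np.cumsum(p * centers)     # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return centers[np.argmax(sigma_b)]

def initial_contour(img, r_erode=16, r_dilate=2):
    """Erode then dilate with discs, then binarize with Otsu's threshold."""
    opened = ndimage.grey_dilation(
        ndimage.grey_erosion(img, footprint=disc(r_erode)),
        footprint=disc(r_dilate))
    return opened > otsu_threshold(opened)
```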

### 2.3 Candidate verification by using image statistics and polar coordinate fit

Before candidate pixels are fit to a parametric model, a candidate verification algorithm analyzes characteristics of the candidate pixels in order to remove pixels which are unlikely to contribute to an accurate fit of the eyelid.

The verification is based on a set of characteristics: the intensity averages, the column intensity decline, and a polar coordinate fit, which are combined to obtain a verification of candidates.

#### 2.3.1 Intensity averages

#### 2.3.2 Column intensity decline

The next characteristic is motivated by the fact that, in general, in videokeratoscopic images, the skin above an eyelid appears brighter than the eyelid edge itself, which is characterized by a dark region. Thus, an intensity decline serves as an indicator that a candidate pixel belongs to the set of eyelid edge pixels.

Before the intensity gradients are calculated, it is necessary to filter the column intensity values to reduce the effect of the ring patterns from the Placido disk. In our experiments, a median filter of length *L*_{2} = 15 is applied to suppress the ring patterns, and a moving average filter of length *L*_{1} = 25 further smooths the intensity curve.

Positive weight is given towards the overall decision if the differentiated column intensity values of the candidate pixel and that of its adjacent columns fall below zero. Thus, the candidate pixel lies inside an intensity decline.
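For a single image column, the filtering and the decline test can be sketched as follows (the combination of per-candidate scores into the weighted decision is omitted; the filter lengths follow the values above):

```python
import numpy as np
from scipy.signal import medfilt

def smooth_column(col, l_med=15, l_avg=25):
    """Median filter suppresses the Placido ring pattern; a moving average
    then smooths the remaining intensity curve."""
    med = medfilt(col.astype(float), kernel_size=l_med)
    kernel = np.ones(l_avg) / l_avg
    return np.convolve(med, kernel, mode="same")

def in_intensity_decline(col, row, l_med=15, l_avg=25):
    """True if the smoothed column intensity is falling at the given row,
    i.e., the candidate lies inside an intensity decline."""
    d = np.diff(smooth_column(col, l_med, l_avg))
    return bool(d[min(row, d.size - 1)] < 0)
```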

#### 2.3.3 Polar coordinate fit

The third characteristic to verify the candidate pixels is based on a robust parabolic fit of the candidate pixels in the polar coordinate domain. After shifting the center of the image to the center of the pupil, a polar image can be determined by a Cartesian to polar coordinate transformation. In polar coordinates, the eyelids’ shape is similar to a parabolic curve.

The algorithm to calculate the polar coordinate fit consists of four steps which we discuss in the sequel.

*Step 1: Finding the center of the pupil.* A videokeratoscopic image is characterized by the ring pattern which is projected onto the iris. The center of the rings is also the center of the pupil. This fact can be exploited by a circular Hough transform [23], which is a robust method to find circles in an image. The circular Hough transform is applied to the original videokeratoscopic images to find circles with radii *ρ* between 75 and 125 pixels and center coordinates *c*_{ x } and *c*_{ y }. The parametrized equation of the circle is given by

\[ (x - c_{x})^{2} + (y - c_{y})^{2} = \rho^{2}. \qquad (12) \]

The maximum in the Hough space is determined to find the best fitting parameter set.
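A brute-force sketch of the circular Hough transform: each candidate edge pixel votes for all circle centers at distance *ρ*, and the accumulator maximum yields the best parameter set. The small radius set and the 100 angle samples here are illustrative; the paper searches *ρ* between 75 and 125 pixels:

```python
import numpy as np

def circular_hough(edge_pts, shape, radii):
    """3-D accumulator over (rho, c_y, c_x): each edge pixel votes for all
    centers lying at distance rho from it."""
    h, w = shape
    acc = np.zeros((len(radii), h, w))
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for x, y in edge_pts:
        for k, rho in enumerate(radii):
            cx = np.round(x - rho * np.cos(thetas)).astype(int)
            cy = np.round(y - rho * np.sin(thetas)).astype(int)
            ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
            np.add.at(acc, (k, cy[ok], cx[ok]), 1)
    k, cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return cx, cy, radii[k]
```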

*Step 2: Transformation in the polar domain.* Based on the center coordinates of the pupil, a Cartesian to polar transformation can be performed. The new coordinate system consists of the variable *ρ* for the radius as in Eq. (12) and *φ* for the angle, which can be derived by

\[ \varphi = \arctan\left( \frac{y - c_{y}}{x - c_{x}} \right). \]

Due to the circular domain of the new coordinate system, the rectangular original image is cropped in the corners. For our purpose, the cropping can be neglected since only insignificant image areas are dropped. Information would be lost only if the center of the iris were very far from the center of the image, which is not usually the case for videokeratoscopic images.
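A sketch of the transformation; here the radius is restricted to the largest circle that fits inside the image, so the cropped corner regions are never sampled (grid sizes are illustrative):

```python
import numpy as np

def to_polar(img, cx, cy, n_rho=100, n_phi=360):
    """Sample the image on a polar grid centered at the pupil (c_x, c_y).
    Rows index the angle phi, columns the radius rho."""
    rho_max = min(cx, cy, img.shape[1] - 1 - cx, img.shape[0] - 1 - cy)
    rho = np.linspace(0, rho_max, n_rho)
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    xs = np.round(cx + np.outer(np.cos(phi), rho)).astype(int)
    ys = np.round(cy + np.outer(np.sin(phi), rho)).astype(int)
    return img[ys, xs]
```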

*Step 3: Robust fitting of a parabolic curve.*In this step, a robustly estimated parabolic model is fitted to the candidate pixels. See Section 2.4 for possible robust estimation methods. An example is provided in Fig. 13.

*Step 4: Outlier detection.* The last step is to compare the candidate pixels with the fitted curve of step 3. For this purpose, the smallest distances *d*_{ i } between the candidate pixels and the fitted curve are calculated. Then, the corresponding robust estimate of the standard deviation is determined by

\[ \hat{\sigma}_{\text{rob}} = \frac{\operatorname{MAD}(d_{i})}{\Phi^{-1}(0.75)}, \qquad (14) \]

where the median absolute deviation (MAD) is given as \(\operatorname{MAD}(d_{i}) = \operatorname{median}_{i}\left(\left|d_{i} - \operatorname{median}_{j}(d_{j})\right|\right)\) and *Φ*^{−1} is the inverse of the cumulative distribution function of the standard normal distribution. To detect outliers, a threshold is set to \(T_{1} = 3\cdot \hat {\sigma }_{\text {rob}}\). The 3-\(\hat {\sigma }_{\text {rob}} \) rule is justified by the fact that, for \(d_{i} \sim \mathcal {N}(\mu,\sigma ^{2})\), it is unlikely that *d*_{ i } deviates from its mean by more than 3*σ*, i.e., \(\phantom {\dot {i}\!}\text {Pr}(|d_{i}-\mu _{d_{i}}|<3\sigma)=99.73\,\%\).
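The robust scale estimate of Eq. (14) and the 3-\(\hat{\sigma}_{\text{rob}}\) rejection rule can be sketched directly; centering the distances at their median before thresholding is our choice:

```python
import numpy as np
from scipy.stats import norm

def robust_sigma(d):
    """Normalized MAD estimate of the standard deviation, Eq. (14)."""
    d = np.asarray(d, dtype=float)
    mad = np.median(np.abs(d - np.median(d)))
    return mad / norm.ppf(0.75)  # Phi^{-1}(0.75) ~= 0.6745

def reject_outliers(d):
    """Boolean mask: True for distances within the T1 = 3*sigma_rob threshold."""
    d = np.asarray(d, dtype=float)
    t1 = 3 * robust_sigma(d)
    return np.abs(d - np.median(d)) < t1
```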

#### 2.3.4 Candidate verification

The scores *c*_{ i,k } of each characteristic *i* are weighted for each candidate *k* according to their significance and compared to a threshold: a candidate is accepted if the weighted sum of its scores exceeds the threshold, and discarded otherwise.

### 2.4 Robust model fitting

In this section, we present robust approaches to fit linear and nonlinear parameterized curve models to the verified eyelid edge candidate pixels. Due to the eyelashes and the low image quality in videokeratoscopic images, we suggest using robust *M*-estimation for the unknown model parameters. We chose *M*-estimators because, even after candidate pixel verification, the Gaussian assumption may only hold approximately. An alternative robust fit via the Hough transform is also discussed exemplarily for quadratic polynomials.

#### 2.4.1 Curve models

To provide the best possible accuracy, we investigate the applicability of a wide range of curve models.

As linear-in-the-parameters models, we consider the *quadratic polynomial* (parabola)

\[ y = \theta_{0} + \theta_{1} x + \theta_{2} x^{2}, \qquad (16) \]

the *cubic polynomial*

\[ y = \theta_{0} + \theta_{1} x + \theta_{2} x^{2} + \theta_{3} x^{3}, \]

and the *fourth-order polynomial*

\[ y = \theta_{0} + \theta_{1} x + \theta_{2} x^{2} + \theta_{3} x^{3} + \theta_{4} x^{4}. \]

Higher polynomial orders are not considered since they would result in curvatures that over-fit the data. As there is no physical motivation to restrict our attention to linear models, we also consider some nonlinear models that are potentially suitable parametrizations of the eyelid edge.

*Rational functions*, which are described by a nominator polynomial function *P*(*x*) and a denominator polynomial function *Q*(*x*), take the form

\[ y = \frac{P(x)}{Q(x)}. \]

We also consider the *first-order Fourier series*

\[ y = a_{0} + a_{1}\cos(\omega x) + b_{1}\sin(\omega x) \]

with coefficients *a*_{0}, *a*_{1}, and *b*_{1}.

Also, in this case, higher orders were excluded to avoid an over-fitting of the data and to avoid modeling artefacts that would be introduced by the periodicity.

Further, we consider curve models of probability density function (pdf) type *f*(*x*), with *f*(*x*) ≥ 0. The motivation for applying pdf type functions is that shifted and rotated versions of *f*(*x*) are able to well parametrize the eyelid edge using only a few parameters. Since there is no theoretical justification, or practical investigation, that suggests a particular distribution, we consider the following candidates.

The *Weibull* pdf [24, 25]

\[ f(x) = \frac{\lambda}{\sigma}\left(\frac{x}{\sigma}\right)^{\lambda-1} \exp\left(-\left(\frac{x}{\sigma}\right)^{\lambda}\right) \]

is described by the scale parameter *σ* and its shape parameter *λ*. Here, *x* ≥ 0 and *σ*, *λ* > 0.

The *Gamma* pdf

\[ f(x) = \frac{x^{\lambda-1}}{\sigma^{\lambda}\,\Gamma(\lambda)} \exp\left(-\frac{x}{\sigma}\right), \]

with *x* ≥ 0 and *σ*, *λ* > 0.

The *Fréchet* pdf

\[ f(x) = \frac{\lambda}{\sigma}\left(\frac{x}{\sigma}\right)^{-1-\lambda} \exp\left(-\left(\frac{x}{\sigma}\right)^{-\lambda}\right), \]

where *x* ≥ 0 and *σ*, *λ* > 0.

The *Type I Dagum* pdf [26, 27]

\[ f(x) = \frac{a\,p}{x}\; \frac{\left(x/b\right)^{a p}}{\left(\left(x/b\right)^{a}+1\right)^{p+1}}, \]

where *x* ≥ 0 and *a*, *b*, *p* > 0.

The *log-logistic* pdf

\[ f(x) = \frac{(\beta/\alpha)\,(x/\alpha)^{\beta-1}}{\left(1+(x/\alpha)^{\beta}\right)^{2}}, \]

with scale parameter *α* and shape parameter *β*, where *x* ≥ 0 and *α*, *β* > 0.

The *Rice* pdf [28]

\[ f(x) = \frac{x}{\sigma^{2}} \exp\left(-\frac{x^{2}+\nu^{2}}{2\sigma^{2}}\right) I_{0}\left(\frac{x\,\nu}{\sigma^{2}}\right), \]

where *x* ≥ 0 and *ν*, *σ* ≥ 0, with *ν* being the distance between the reference point and the center of the bivariate distribution, *σ* the scale parameter, and *I*_{0}(*x*) the modified Bessel function of the first kind with order zero.

The *skew normal* pdf [29]

\[ f(x) = \frac{2}{\omega}\,\phi\left(\frac{x-\xi}{\omega}\right) \Phi\left(\alpha\,\frac{x-\xi}{\omega}\right), \]

where *α* represents the skew, *ξ* the location, and *ω* the scale parameter. *ϕ*(*x*) is the standard normal pdf

\[ \phi(x) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^{2}}{2}\right), \]

and *Φ*(*x*) denotes the cumulative distribution function given by

\[ \Phi(x) = \frac{1}{2}\left(1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right), \]

where the error function erf(*z*) is defined as

\[ \operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_{0}^{z} e^{-t^{2}}\, \mathrm{d}t. \]

For *α* = 0, the skew normal distribution reduces to the normal distribution. For an increasing absolute value of *α*, the skewness also increases. The distribution is right skewed for *α* > 0 and left skewed for *α* < 0.

Before fitting these models, the candidate pixels must be aligned and normalized to account for rotation and scaling. For this, a ground line is drawn from the lowest candidate pixel on the left to the lowest candidate pixel on the right of the image. The ground line is then rotated to the horizontal, and all candidate pixels are rotated by the same angle. The scale is normalized to one in both axes, and the fitting is performed on these transformed candidate pixels.
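This alignment can be sketched as follows; taking the leftmost and rightmost candidates as the ground-line endpoints is a simplification of the description above, and each axis is normalized to [0, 1]:

```python
import numpy as np

def align_and_normalize(pts):
    """Rotate candidate pixels so the ground line (leftmost to rightmost
    candidate) becomes horizontal, then scale both axes to [0, 1]."""
    pts = np.asarray(pts, dtype=float)
    p_l = pts[np.argmin(pts[:, 0])]
    p_r = pts[np.argmax(pts[:, 0])]
    angle = np.arctan2(p_r[1] - p_l[1], p_r[0] - p_l[0])
    c, s = np.cos(-angle), np.sin(-angle)
    rot = (pts - p_l) @ np.array([[c, -s], [s, c]]).T   # rotate by -angle
    span = rot.max(axis=0) - rot.min(axis=0)
    span[span == 0] = 1.0          # guard against degenerate axes
    return (rot - rot.min(axis=0)) / span
```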

#### 2.4.2 Robust estimation

We assume that the observations follow the linear model

\[ Y_{n} = \mathbf{X}_{n}^{\top}\boldsymbol{\theta} + V_{n}, \]

where *Y*_{ n } is a scalar random variable, **X**_{ n } is a vector of random variables, **θ** are the unknown parameters of interest, and *V*_{ n } is a random variable of the errors. The residuals *R*_{ n } can be obtained by

\[ R_{n}(\boldsymbol{\theta}) = Y_{n} - \mathbf{X}_{n}^{\top}\boldsymbol{\theta}. \]

In our notation, vectors and matrices are bold and random variables are in uppercase.

For the nonlinear models, *f*(**X**_{ n }, **θ**) is a nonlinear function and the corresponding residuals *R*_{ n } are obtained by

\[ R_{n}(\boldsymbol{\theta}) = Y_{n} - f(\mathbf{X}_{n},\boldsymbol{\theta}). \]

The *Gaussian maximum likelihood estimate* is defined by

\[ \hat{\boldsymbol{\theta}} = \underset{\boldsymbol{\theta}}{\arg\min} \sum_{n=1}^{N} \rho\left(R_{n}(\boldsymbol{\theta})\right), \]

where the loss function *ρ*(*x*) = *x*^{2} coincides with that of a least squares estimator (LSE). It is well known that this estimator is very sensitive to departures from the Gaussian data assumption. Robust statistics formalize the theory of approximate parametric models [30]. On the one hand, like classical parametric methods, robust methods are able to leverage a parametric model; on the other hand, they do not depend critically on the exact fulfilment of the model assumptions. In this sense, robust statistics are very close to engineering intuition and signal processing demands [31]. *M*-estimators robustify maximum likelihood estimation (MLE) by introducing a bounded score function \(\psi (x) = \frac {\partial \rho (x)}{\partial x}\).

A popular robust *M*-estimator is the *least absolute deviations (LAD) estimator*, for which estimates are obtained by solving

\[ \hat{\boldsymbol{\theta}} = \underset{\boldsymbol{\theta}}{\arg\min} \sum_{n=1}^{N} \rho\left(\frac{R_{n}(\boldsymbol{\theta})}{\hat{\sigma}_{\text{rob}}}\right) \qquad (43) \]

with *ρ*(*x*) = |*x*| and \(\hat {\sigma }_{\text {rob}}\) as given by (14). It belongs to the class of *M*-estimators with monotone score function.

Among the *M*-estimators with re-descending score functions, we consider *Tukey’s bisquare estimator*, which uses

\[ \rho(x) = \begin{cases} \dfrac{k^{2}}{6}\left[1-\left(1-\left(x/k\right)^{2}\right)^{3}\right], & |x| \leq k, \\ \dfrac{k^{2}}{6}, & |x| > k. \end{cases} \]

Choosing the tuning constant to be *k*=4.685 ensures 95% efficiency w.r.t. the MLE when the data exactly follows the nominal Gaussian model [32]. To obtain estimates for linear models, the minimization problem of (43) is easily solved using an iteratively reweighted least squares approach, as described in [32]. The LAD can serve as starting point for Tukey’s biweight method. For nonlinear models, we used the *trust-region* method [33], which represents an improvement over the popular Levenberg-Marquardt algorithm [34, 35].
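For a linear model, the iteratively reweighted least squares scheme can be sketched as follows; for brevity, the iteration starts from the plain least squares fit rather than the LAD, and the residual scale is re-estimated via the normalized MAD in each step:

```python
import numpy as np

def tukey_weights(r, k=4.685):
    """IRLS weights for Tukey's bisquare: w(r) = (1 - (r/k)^2)^2 for |r| < k, else 0."""
    w = np.zeros_like(r, dtype=float)
    inside = np.abs(r) < k
    w[inside] = (1.0 - (r[inside] / k) ** 2) ** 2
    return w

def irls_tukey(X, y, k=4.685, n_iter=50):
    """Iteratively reweighted least squares for y = X @ theta with Tukey's loss."""
    theta = np.linalg.lstsq(X, y, rcond=None)[0]       # plain LS starting point
    for _ in range(n_iter):
        r = y - X @ theta
        sigma = np.median(np.abs(r - np.median(r))) / 0.6745  # normalized MAD
        if sigma == 0:
            break
        sw = np.sqrt(tukey_weights(r / sigma, k))
        theta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        if np.allclose(theta_new, theta, atol=1e-10):
            theta = theta_new
            break
        theta = theta_new
    return theta
```

Because the bisquare weights vanish for residuals beyond *k* times the estimated scale, gross outliers stop influencing the fit after the first reweighting steps.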

#### 2.4.3 Hough transform for parabolic curve detection

The Hough transform [36] is widely used in digital image processing and computer vision to isolate features of a particular shape within an image. Circular or parabolic Hough transforms have been applied to accurately detect the iris or eyelid boundary [37–39], respectively.

Based on geometrical limitations, boundaries for the parameters in Eq. (16) are determined so as to span a finite size 3-D accumulator array, the Hough space. Within these boundaries, all possible parabolas are evaluated for each candidate pixel. If the corresponding parametrized parabola matches a candidate pixel, the value of a point in the Hough space is incremented.
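A brute-force sketch of this accumulator for a parabola *y* = *a* + *b·x* + *c·x*²; the coarse parameter grids and the vote tolerance are illustrative assumptions:

```python
import numpy as np

def parabolic_hough(pts, a_range, b_range, c_range, tol=1.0):
    """3-D accumulator over parabola parameters y = a + b*x + c*x^2.
    A candidate pixel votes for a parameter cell if it lies within tol
    pixels of the corresponding parabola."""
    best, best_votes = None, -1
    x, y = pts[:, 0], pts[:, 1]
    for a in a_range:
        for b in b_range:
            for c in c_range:
                votes = int(np.sum(np.abs(y - (a + b * x + c * x ** 2)) < tol))
                if votes > best_votes:
                    best, best_votes = (a, b, c), votes
    return best
```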

## 3 Real-data experiments

This section presents the evaluation metric, the experimental setup, and the results of the proposed procedure.

### 3.1 Experimental setup

### 3.2 Evaluation metric

The accuracy is measured by the root mean square (RMS) deviation between the fitted curve and the reference pixels,

\[ \text{RMS}\big(\boldsymbol{\hat{\theta}}\big) = \sqrt{\frac{1}{N^{\text{ref}}} \sum_{m=1}^{N^{\text{ref}}} \left( \hat{y}_{m}\big(\boldsymbol{\hat{\theta}}\big) - y_{m}^{\text{ref}} \right)^{2}}. \qquad (45) \]

Here, \(\boldsymbol {\hat {\theta }}\) represents the estimated parameters, *N*^{ref} is the number of reference pixels, and \(\hat {y}_{m}(\boldsymbol {\hat {\theta }})\) defines the closest curve pixel to the reference pixel \(y_{m}^{\text {ref}}\). A pixel in a videokeratoscopic image corresponds to approximately 20 *μ*m.

After evaluating Eq. (45) for all reference images, we report on the mean and standard deviation (STD) taken over all images. We also calculate the median and MAD over all images, since they are robust counterparts of the mean and standard deviation that are not unduly influenced by strongly divergent results on single images.

### 3.3 Results

Overall results of the proposed methods for all possible combinations of the stages shown in Fig. 4 have been computed; a selection, ranked by mean accuracy, is given below.

| Rank | Filtering | Model | Fitting | Mean ± STD |
|---|---|---|---|---|
| 1 | GDV | Rice | LAD | 3.54 ± 1.44 |
| 2 | GDV | Rice | Bisquare | 3.68 ± 1.49 |
| 3 | GDV | Parabola | LAD | 4.32 ± 2.07 |
| 4 | GDV | Cubic | Bisquare | 4.34 ± 1.51 |
| 5 | GDV | Cubic | LAD | 4.42 ± 1.72 |
| 6 | GDV | Parabola | Bisquare | 4.51 ± 2.64 |
| 7 | GDV | Rational | LAD | 4.64 ± 1.91 |
| 8 | GDV | Fourier | LAD | 4.69 ± 2.10 |
| 9 | GDV | Fourth-order | LAD | 4.75 ± 2.11 |
| 10 | GDV | Fourier | Bisquare | 4.80 ± 1.99 |
| 11 | GDV | Dagum | LAD | 4.84 ± 1.54 |
| 13 | GDV | Skew-normal | Bisquare | 4.86 ± 2.42 |
| 17 | GDV | Weibull | LAD | 5.11 ± 2.09 |
| 42 | GDV | Log-logistic | LAD | 5.71 ± 2.86 |
| 71 | Wavelet | Fréchet | Bisquare | 6.10 ± 3.36 |
| 78 | Wavelet | Parabola | Hough | 6.13 ± 4.42 |
| 184 | No filtering | Gamma | Bisquare | 7.17 ± 1.41 |

In terms of the median, the parabola fitted via the Hough transform ranks first, while the previously leading combination of robust *M*-estimation and the Rice function falls slightly behind, as can be seen in Table 3. This indicates that the Hough transform performs well in the majority of the cases but is largely outperformed by the *M*-estimators for a small subset of images.

Overall results of the proposed methods (in decreasing accuracy order)

| Rank | Filtering | Model | Fitting | Median ± MAD |
|---|---|---|---|---|
| 1 | GDV | Parabola | Hough | 3.50 ± 1.37 |
| 2 | GDV | Rice | LAD | 3.52 ± 1.04 |
| 3 | GDV | Parabola | Bisquare | 3.54 ± 0.91 |
| 4 | GDV | Fourth-order | LAD | 3.61 ± 1.80 |
| 5 | GDV | Rice | Bisquare | 3.62 ± 1.37 |
| 6 | GDV | Parabola | LAD | 3.66 ± 1.17 |
| 7 | GDV | Rational | Bisquare | 3.82 ± 0.61 |
| 8 | GDV | Fourier | Bisquare | 3.83 ± 1.57 |
| 9 | GDV | Fourth-order | Bisquare | 3.91 ± 2.28 |
| 10 | GDV | Skew-normal | Bisquare | 4.03 ± 2.22 |
| 11 | GDV | Cubic | LAD | 4.04 ± 1.74 |
| 32 | Wavelet | Weibull | Bisquare | 4.62 ± 2.11 |
| 40 | Wavelet | Fréchet | Bisquare | 4.86 ± 2.27 |
| 42 | GDV | Dagum | LAD | 4.90 ± 1.08 |
| 60 | GDV | Log-logistic | LAD | 5.16 ± 2.40 |
| 113 | GDV | Gamma | LAD | 5.71 ± 3.03 |

We next assess the performance of each individual stage of our proposed procedure.

Results of the two nonlinear image filtering approaches and without any filtering in RMS deviations over all images

| | GDV | Wavelet | None |
|---|---|---|---|
| Mean ± STD | 12.27 ± 13.26 | 10.31 ± 7.32 | 11.51 ± 8.60 |
| Median ± MAD | 6.60 ± 4.58 | 8.23 ± 4.80 | 8.73 ± 4.89 |

Results of the candidate verification procedure in RMS deviations over all images

| | Verification | No verification |
|---|---|---|
| Mean ± STD | 13.62 ± 15.73 | 13.67 ± 14.47 |
| Median ± MAD | 9.27 ± 5.26 | 9.35 ± 5.06 |

We next compare the Hough transform with the robust *M*-estimation approach. Although the Hough transform exhibits considerably higher computational complexity than the *M*-estimator, it is inferior in terms of mean accuracy.

Results of the robust fitting approaches in RMS deviations over all images

| | *M*-estimation | Hough transform |
|---|---|---|
| Mean ± STD | 12.19 ± 13.62 | 18.46 ± 25.39 |
| Median ± MAD | 8.81 ± 5.12 | 10.48 ± 5.75 |

Finally, we compare Tukey’s bisquare *M*-estimator to the LAD estimator. The choice between the two loss functions of the *M*-estimator is less significant, as both methods achieve similar results in terms of accuracy. However, the computational complexity of Tukey’s estimator is slightly higher than that of the LAD.

Results of the two robust estimation methods in RMS deviations over all ten images

| | Tukey’s bisquare | LAD |
|---|---|---|
| Mean ± STD | 13.43 ± 14.24 | 13.38 ± 14.50 |
| Median ± MAD | 9.30 ± 5.16 | 9.25 ± 5.11 |

Based on the presented results, we suggest that further eyelid localization research consider using *M*-estimators instead of the Hough transform, as they achieve similar accuracy while being significantly less computationally demanding. Furthermore, we recommend also considering curvature models other than the parabola. Candidate verification does not seem to be required when robust estimators are used.

## 4 Conclusions

We proposed a new procedure to robustly estimate the position of the eyelid edges in high-speed videokeratoscopic images. The proposed method applies eyelash removal before segmenting the image with an active contours approach that is initialized by a contour obtained from morphological opening operations. The positions of the eyelids are verified and, finally, parametric curve models are fitted by applying robust parameter estimators to the selected pixels. Real-data experiments showed that the Rice model and the parabola achieved the best results. Furthermore, robust regression outperforms the Hough transform as a robust fitting method in terms of processing time and is similar in terms of accuracy. The overall precision of the proposed approach is in the order of 10^{−2} mm, which allows for replacing the currently used time-consuming manual labeling.

## Declarations

### Acknowledgements

The authors would like to thank D.R. Iskander and the staff at Contact Lens and Visual Optics Laboratory (CLVOL) at the School of Optometry, Queensland University of Technology, in Brisbane, Australia, for their efforts in collecting the videokeratoscopic data and for their advice and to the anonymous reviewers for their useful comments on the proposed approach. The work of M. Muma was supported by the project HANDiCAMS which acknowledges the financial support of the Future and Emerging Technologies (FET) Programme within the Seventh Framework Programme for Research of the European Commission (HANDiCAMS), under FET-Open grant number: 323944.

### Competing interests

The authors declare that they have no competing interests.

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## References

1. AE Reynolds, *Corneal Topography: Measuring and Modifying the Cornea* (Springer, New York, 1992).
2. W Alkhaldi, DR Iskander, AM Zoubir, MJ Collins, Enhancing the standard operating range of a Placido disk videokeratoscope for corneal surface estimation. IEEE Trans. Biomed. Eng. **56**(3), 800–809 (2009).
3. W Alkhaldi, DR Iskander, AM Zoubir, Model-order selection in Zernike polynomial expansion of corneal surfaces using the efficient detection criterion. IEEE Trans. Biomed. Eng. **57**(10), 2429–2437 (2010).
4. W Alkhaldi, *Statistical Signal and Image Processing Techniques in Corneal Modeling*. PhD thesis (Technische Universität Darmstadt, Germany, 2010).
5. M Muma, AM Zoubir, Robust model order selection for corneal height data based on τ estimation, in *Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP)* (Prague, 2011), pp. 4096–4099.
6. DR Iskander, MJ Collins, B Davis, Evaluating tear film stability in the human eye with high-speed videokeratoscopy. IEEE Trans. Biomed. Eng. **52**(11), 1939–1949 (2005).
7. D Alonso-Caneiro, J Turuwhenua, DR Iskander, MJ Collins, Diagnosing dry eye with dynamic-area high-speed videokeratoscopy. J. Biomed. Opt. **16**(7), 076012 (2011).
8. DH Szczesna-Iskander, DR Iskander, Future directions in non-invasive measurements of tear film surface kinetics. Optom. Vis. Sci. **89**(5), 749–759 (2012).
9. DH Szczesna-Iskander, D Alonso-Caneiro, DR Iskander, Objective measures of pre-lens tear film dynamics versus visual responses. Optom. Vis. Sci. **93**(8), 872–880 (2016).
10. DR Iskander, MJ Collins, Applications of high-speed videokeratoscopy. Clin. Exp. Optom. **88**(4), 223–231 (2005).
11. J Nemeth, B Erdelyi, B Csakany, P Gaspar, A Soumelidis, F Kahlesz, Z Lang, High-speed videotopographic measurement of tear film build-up time. Invest. Ophthalmol. Vis. Sci. **43**(6), 1783–1790 (2002).
12. YK Jang, BJ Kang, KR Park, A study on eyelid localization considering image focus for iris recognition. Pattern Recognit. Lett. **29**(11), 1698–1704 (2008).
13. X Liu, P Li, Q Song, Eyelid localization in iris images captured in less constrained environment, in *Proc. 3rd Int. Conf. Advances in Biometrics (ICB ’09)*, Lecture Notes in Computer Science, vol. 5558 (Alghero, Italy, 2009), pp. 1140–1149.
14. MJ Aligholizadeh, SH Javadi, R Sabbaghi-Nadooshan, K Kangarloo, An effective method for eyelashes segmentation using wavelet transform, in *Int. Conf. Biometrics and Kansei Engineering* (Takamatsu, 2011), pp. 185–188.
15. F Bernard, CE Deuter, P Gemmar, H Schachinger, Eyelid contour detection and tracking for startle research related eye-blink measurements from high-speed video records. Comput. Methods Prog. Biomed. **112**(1), 22–37 (2013).
16. JF Canny, A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. **8**(6), 679–698 (1986).
17. G Sicuranza, *Nonlinear Image Processing* (Academic Press, San Diego, USA, 2000).
18. D Zhang, DM Monro, S Rakshit, Eyelash removal method for human iris recognition, in *Proc. IEEE Int. Conf. Image Processing* (Atlanta, 2006), pp. 285–288.
19. R Adams, L Bischof, Seeded region growing. IEEE Trans. Pattern Anal. Mach. Intell. **16**(6), 641–647 (1994).
20. S Beucher, F Meyer, The morphological approach to segmentation: the watershed transformation. Opt. Eng. **34**, 433 (1992).
21. M Kass, A Witkin, D Terzopoulos, Snakes: active contour models. Int. J. Comput. Vis. **1**, 321–331 (1988).
22. N Otsu, A threshold selection method from gray-level histograms. Automatica **11**, 23–27 (1975).
23. D Ballard, Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. **13**, 111–122 (1981).
24. P Rosin, E Rammler, The laws governing the fineness of powdered coal. J. Inst. Fuel **7**, 29–36 (1933).
25. W Weibull, A statistical distribution function of wide applicability. J. Appl. Mech. **18**, 293–297 (1951).
26. C Dagum, A model of income distribution and the conditions of existence of moments of finite order, in *Proc. 40th Session of the International Statistical Institute*, vol. 46 (Warsaw, 1975), pp. 196–202.
27. C Dagum, A new model of personal income distribution: specification and estimation. Economie Appliquée **30**, 413–436 (1977).
28. SO Rice, Mathematical analysis of random noise. Bell Syst. Tech. J. **24**, 146–156 (1945).
29. A Azzalini, A Dalla Valle, The multivariate skew-normal distribution. Biometrika **83**(4), 715–726 (1996).
30. PJ Huber, Robust estimation of a location parameter. Ann. Math. Statist. **35**, 73–101 (1964).
31. AM Zoubir, V Koivunen, Y Chakhchoukh, M Muma, Robust estimation in signal processing: a tutorial-style treatment of fundamental concepts. IEEE Signal Process. Mag. **29**(4), 61–80 (2012).
32. RA Maronna, DR Martin, VJ Yohai, *Robust Statistics*, Wiley Series in Probability and Statistics (Wiley, Chichester, 2006).
33. MA Branch, TF Coleman, Y Li, A subspace, interior, and conjugate gradient method for large-scale bound-constrained minimization problems. SIAM J. Sci. Comput. **21**, 1–23 (1999).
34. K Levenberg, A method for the solution of certain problems in least squares. Quart. Appl. Math. **2**, 164–168 (1944).
35. DW Marquardt, An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Indust. Appl. Math. **11**(2), 431–441 (1963).
36. PV Hough, BW Powell, A method for faster analysis of bubble chamber photographs. Nuovo Cimento Ser. 10 **18**, 1184–1191 (1960).
37. L Masek, Recognition of human iris patterns for biometric identification. Technical report (The University of Western Australia, 2003).
38. P Li, X Liu, L Xiao, Q Song, Robust and accurate iris segmentation in very noisy iris images. Image Vis. Comput. **28**(2), 246–253 (2010).
39. DS Jeong, JW Hwang, BJ Kang, KR Park, CS Won, D-K Park, J Kim, A new iris segmentation method for non-ideal iris images. Image Vis. Comput. **28**(2), 254–260 (2010).
40. M Muma, *Robust Estimation and Model Order Selection for Signal Processing*. PhD thesis (2014).