 Research
 Open Access
A procedure to locate the eyelid position in noisy videokeratoscopic images
 Tim Schäck^{1},
 Michael Muma^{1},
 Weaam Alkhaldi^{1} and
 Abdelhak M. Zoubir^{1}
https://doi.org/10.1186/s13634-016-0433-0
© The Author(s) 2016
 Received: 29 June 2016
 Accepted: 1 December 2016
 Published: 13 December 2016
Abstract
In this paper, we propose a new procedure to robustly determine the eyelid position in high-speed videokeratoscopic images. This knowledge is crucial in videokeratoscopy to study the effects of the eyelids on the cornea and on the tear film dynamics. Difficulties arise due to the very low contrast of videokeratoscopic images and because of the occlusions caused by the eyelashes. The proposed procedure uses robust M-estimation to fit a parametric model to a set of eyelid edge candidate pixels. To detect these pixels, firstly, nonlinear image filtering operations are performed to remove the eyelashes. Secondly, we propose an image segmentation approach based on morphological operations and active contours to provide the set of candidate pixels. Subsequently, a verification procedure reduces this set to pixels that are likely to contribute to an accurate fit of the eyelid edge. We propose a complete framework, for which each stage is evaluated using real-world videokeratoscopic images. This methodology allows for automatic localization of the eyelid edges and is applicable to replace the currently used time-consuming manual labeling approach, while maintaining its accuracy.
Keywords
 Videokeratoscopy
 Eyelid detection
 Nonlinear filtering
 Image segmentation
 Robust M-estimation
1 Introduction
A keratoscope is an ophthalmological instrument that allows for noninvasive imaging of the topography of the human cornea, which is the outer surface of the eye [1]. The cornea is the largest contributor to the eye’s refractive power, and its topography is of critical importance when determining the quality of vision and corneal health. For example, astigmatism may occur if the cornea has an irregular or toric curvature. Videokeratoscopy allows for studying the dynamics of the corneal topography [2–5].
Another important application of videokeratoscopy is the analysis of tear film stability in the interblink interval. Ocular discomfort can be caused by dry spots which occur if the tear film is destabilized. The tear film buildup and breakup times can be estimated from videokeratoscopic images if the data acquisition rate is sufficiently high [6–9]. Videokeratoscopy is also involved in the study of the dynamic response of the corneal anterior surface to mechanical forces. These mechanical forces are exerted by the eyelids during horizontal eye movements in a downward gaze. More information on the applications of high-speed videokeratoscopy can be found in [10].
Eyelid localization in images is an active area of research, and important applications are, for example, iris recognition systems and drowsiness detection [12–15]. To the best of our knowledge, the case of videokeratoscopic images is still an open research question. In fact, even today, the very time-consuming manual selection of candidate pixels followed by a parametric fit of a parabola in the least squares sense is still the routine operation.
In addition to the difficulty of localizing the image’s region of interest, videokeratoscopy for eye research imposes strong requirements concerning the accuracy of the model of the eyelid edge. The conventional approach to fit a parabola does not always provide a sufficiently accurate approximation to the real curvature. In some images, a nonsymmetrical model may be necessary to describe the entire eyelid including the parts covering the sclera. In this paper, we therefore propose and evaluate some alternative models.
Contributions: In this paper, a new procedure is proposed to robustly determine the eyelid position in high-speed videokeratoscopic images. The proposed method allows for automatic localization of the eyelid edges which replaces the currently used time-consuming manual labeling. We propose to use robust M-estimation to fit a parametric model to a set of eyelid edge candidate pixels. In this way, we account for outliers in the candidate pixels. These are present due to the very low contrast of videokeratoscopic images and because of the occlusions caused by the eyelashes. In the case of the parabola, an alternative robust fit by the Hough transform is also discussed. To detect these pixels, first, nonlinear image filtering operations are performed to remove the eyelashes. In particular, we propose a method based on the gradient direction variance and a wavelet-based method which adapts the procedure of [14] to videokeratoscopic images. Subsequently, an image segmentation approach based on morphological operations and active contours is proposed to provide the set of candidate pixels. We propose and evaluate new linear and nonlinear eyelid curvature models as alternatives to the conventionally used parabola. A real-world data performance analysis is provided to examine the error rates of the proposed models.
Organization: Section 2 is dedicated to the proposal and description of the robust procedure to locate the eyelid position in noisy videokeratoscopic images. Section 3 provides realdata experiments and results. Section 4 concludes the paper.
2 The proposed procedure for eyelid position estimation in videokeratoscopy
2.1 Nonlinear image filtering for eyelash removal
In this step, videokeratoscopic images are processed such that the subsequent algorithms are able to detect candidate pixels that are located on the eyelid edge. As in iris recognition systems [12, 14], an important factor that affects the quality of the eyelid position estimation is the eyelashes. Additional challenges to be considered in videokeratoscopy are the blur of the image and the ring pattern of the Placido disk.
We investigate two different approaches to remove the eyelashes from videokeratoscopic images. The first is based on the gradient direction variance, and the second is a wavelet-based method.
2.1.1 Gradient direction variance-based method
We briefly revisit the method by Zhang et al. [18] that is based on nonlinear conditional directional filtering and describe its adaptation to videokeratoscopic images.
Sobel edge filters: x-direction, image region, and y-direction (from left to right)
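The idea of this stage can be sketched in plain numpy: compute the Sobel gradients and then the local variance of the gradient direction in a sliding window. The exact conditional directional filtering rule of [18] is more involved and is not reproduced here; using the directional variance map as the basis for separating locally coherent structures such as eyelashes from incoherent regions is the part illustrated below.

```python
import numpy as np

def sobel_gradients(img):
    """Sobel x- and y-gradients via 2D cross-correlation (edge-padded)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return gx, gy

def gradient_direction_variance(img, win=5):
    """Local variance of the gradient direction in a win x win window."""
    gx, gy = sobel_gradients(img)
    theta = np.arctan2(gy, gx)
    h = win // 2
    var = np.full(img.shape, np.inf)
    for r in range(h, img.shape[0] - h):
        for c in range(h, img.shape[1] - h):
            var[r, c] = np.var(theta[r - h:r + h + 1, c - h:c + h + 1])
    return var
```

On a horizontal intensity ramp, all gradient directions coincide, so the directional variance is zero; eyelash-free, smoothly varying regions of a videokeratoscopic image behave similarly.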

2.1.2 Wavelet-based method
For iris recognition systems, an effective wavelet-based method for eyelash removal was introduced by Aligholizadeh et al. [14]. Wavelets can be used to decompose the eye image into components that appear at different resolutions. The key advantage of the wavelet transform, compared to the traditional Fourier transform, is its position-frequency localization property, allowing features that occur at the same position and resolution to be matched up.
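The decomposition step can be sketched with a one-level 2D Haar transform in plain numpy. The method of [14] uses a particular wavelet and processing of the subbands that is not reproduced here; zeroing the detail subbands below is only a generic illustration of how fine-scale structures such as eyelashes can be attenuated while the coarse image content is kept.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row-wise averages
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row-wise details
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def haar2d_inverse(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction)."""
    lo = np.zeros((ll.shape[0] * 2, ll.shape[1]))
    hi = np.zeros_like(lo)
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.zeros((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out
```

Reconstructing with the detail subbands set to zero yields a smoothed image in which thin, high-frequency structures are suppressed.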
Videokeratoscopic images are more challenging compared to the images considered in [14]. For this reason, applying the above method results only in a reduction, not a removal, of the eyelashes. Additional steps are necessary to determine eyelid edge pixels. Our proposed approach is introduced subsequently.
2.2 Active contours method for eyelid edge pixel candidate detection
After applying nonlinear image filters to the initial videokeratoscopic image, we present an active contours image segmentation method to detect pixels in videokeratoscopic images that lie on the eyelid edge. This method outperformed the other image segmentation approaches we studied, such as region growing [19], watershed segmentation [20], and empirical and gradient-based methods [16], whose results we do not report for space considerations.
Active contours are widely used in image segmentation to delineate an object contour within an image. The general idea of Kass et al. [21], who introduced the active contour model (also called snakes), was to minimize the energy associated with the current contour, expressed as the sum of an internal and an external energy. The internal energy term controls the smoothness of the contour and is minimized when the snake’s shape matches the shape of the sought object; the external energy term attracts the contour towards the object and is minimized when the snake is at the object’s boundary. An initial estimate is required, which is refined by means of energy minimization.
\(E_{\text{snake}} = \int_{0}^{1} \left[ E_{\text{internal}}(\mathbf{v}(s)) + E_{\text{image}}(\mathbf{v}(s)) + E_{\text{con}}(\mathbf{v}(s)) \right] \mathrm{d}s,\)
with E _{internal} representing the internal energy of the snake, E _{image} denoting the image forces acting on the spline, and E _{con} representing the external constraint forces introduced by the user. E _{image} and E _{con} together form the external energy acting on the spline.
In the case of videokeratoscopy, the recurrent structure of the image allows us to incorporate higher-level prior knowledge to obtain an initial estimate. We propose to apply morphological operations to the nonlinearly filtered image from stage 1. In particular, the nonlinearly filtered image is eroded and dilated with morphological discs.
Opening generally smooths the contour of a set by breaking its narrow isthmuses and by eliminating small holes in the set.
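The morphological operations used here can be sketched in plain numpy for binary images; the disc radii used by the authors are not given in this excerpt, so the radius below is only a placeholder.

```python
import numpy as np

def disc(radius):
    """Binary disc-shaped structuring element."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def erode(img, se):
    """Binary erosion: a pixel survives only if se fits inside the foreground."""
    h, w = se.shape
    ph, pw = h // 2, w // 2
    pad = np.pad(img, ((ph, ph), (pw, pw)), constant_values=False)
    out = np.zeros_like(img, dtype=bool)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.all(pad[r:r + h, c:c + w][se])
    return out

def dilate(img, se):
    """Binary dilation: a pixel is set if se hits any foreground pixel."""
    h, w = se.shape
    ph, pw = h // 2, w // 2
    pad = np.pad(img, ((ph, ph), (pw, pw)), constant_values=False)
    out = np.zeros_like(img, dtype=bool)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.any(pad[r:r + h, c:c + w][se])
    return out

def opening(img, se):
    """Opening = erosion followed by dilation; removes thin structures."""
    return dilate(erode(img, se), se)
</```

As stated above, opening removes structures thinner than the structuring element (e.g., an isolated pixel) while large regions survive with smoothed contours.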
2.3 Candidate verification by using image statistics and polar coordinate fit
Before candidate pixels are fit to a parametric model, a candidate verification algorithm analyzes characteristics of the candidate pixels in order to remove pixels which are unlikely to contribute to an accurate fit of the eyelid.
The verification is based on a set of characteristics: the intensity averages, the column intensity decline, and a polar coordinate fit, which are combined to obtain a verification of candidates.
2.3.1 Intensity averages
2.3.2 Column intensity decline
The next characteristic is motivated by the fact that, in general, for videokeratoscopic images, the pixels above an eyelid, which show skin, are brighter than the eyelid itself, which appears as a dark region. Thus, an intensity decline serves as an indicator that a candidate pixel belongs to the set of eyelid edge pixels.
Before the intensity gradients are calculated, it is necessary to filter the column intensity values to reduce the effect of the ring patterns from the Placido disk. In our experiments, a median filter of length L _{2}=15 is applied to suppress the ring patterns and a moving average filter of length L _{1}=25 further smooths the intensity curve.
Positive weight is given to the overall decision if the differentiated column intensity values of the candidate pixel and of its adjacent columns fall below zero, i.e., if the candidate pixel lies inside an intensity decline.
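The per-column test can be sketched as follows. The exact weighting of adjacent columns in the overall decision is more involved than this simplified sign check, and the edge handling of the filters is an assumption.

```python
import numpy as np

def column_decline(col, pos, l_med=15, l_avg=25):
    """Check whether the candidate row `pos` sits inside an intensity decline
    of one image column `col` (intensity profile from top to bottom)."""
    k = l_med // 2
    padded = np.pad(col.astype(float), k, mode="edge")
    # median filter of length L2 suppresses the Placido ring pattern
    med = np.array([np.median(padded[i:i + l_med]) for i in range(len(col))])
    # moving average of length L1 further smooths the intensity curve
    smooth = np.convolve(med, np.ones(l_avg) / l_avg, mode="same")
    grad = np.diff(smooth)
    i = min(max(pos, 1), len(grad) - 1)
    # negative derivative around the candidate => intensity decline
    return bool(grad[i - 1] < 0 and grad[i] < 0)
```

A monotonically darkening column yields a decline at its center; a brightening one does not.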
2.3.3 Polar coordinate fit
The third characteristic to verify the candidate pixels is based on a robust parabolic fit of the candidate pixels in the polar coordinate domain. After shifting the center of the image to the center of the pupil, a polar image can be determined by a Cartesian to polar coordinate transformation. In polar coordinates, the eyelids’ shape is similar to a parabolic curve.
The algorithm to calculate the polar coordinate fit consists of four steps which we discuss in the sequel.
The maximum in the Hough space is determined to find the best fitting parameter set.
Due to the circular nature of the new coordinate system, the corners of the rectangular original image are cropped. For our purpose, this cropping can be neglected since only insignificant image areas are dropped. Information would be lost only if the center of the iris were very far from the center of the image, which is not usually the case for videokeratoscopic images.
where the median absolute deviation (MAD) is given as MAD(d)=median_{ i }(|d _{ i }−median_{ j }(d _{ j })|) and Φ ^{−1} is the inverse of the cumulative distribution function of the standard normal distribution. To detect outliers, a threshold is set to \(T_{1} = 3\cdot \hat {\sigma }_{\text {rob}}\). The \(3\hat {\sigma }_{\text {rob}} \) rule is justified by the fact that, for \(d_{i} \sim \mathcal {N}(\mu,\sigma ^{2})\), a deviation of d _{ i } from the mean larger than 3σ is unlikely, i.e., \(\text {Pr}(|d_{i}-\mu |<3\sigma)=99.73\,\%\).
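The robust scale estimate and the resulting outlier threshold can be computed as follows; the constant 0.6745 ≈ Φ^{−1}(3/4) normalizes the MAD so that it is consistent for Gaussian data.

```python
import numpy as np

def mad_sigma(d):
    """Robust scale: normalized MAD, sigma_rob = MAD / Phi^{-1}(0.75)."""
    mad = np.median(np.abs(d - np.median(d)))
    return mad / 0.67448975

def outlier_mask(d):
    """Flag samples whose deviation exceeds the T1 = 3 * sigma_rob threshold."""
    d = np.asarray(d, dtype=float)
    t1 = 3.0 * mad_sigma(d)
    return np.abs(d - np.median(d)) > t1
```

Unlike the sample standard deviation, the normalized MAD is barely affected by a single gross outlier, so the threshold stays meaningful.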
2.3.4 Candidate verification
2.4 Robust model fitting
In this section, we present robust approaches to fit linear and nonlinear parameterized curve models to the verified eyelid edge candidate pixels. Due to the eyelashes and the low image quality of videokeratoscopic images, we suggest using robust M-estimation for the unknown model parameters. We chose M-estimators because, even after candidate pixel verification, the Gaussian assumption may only hold approximately. An alternative robust fit via the Hough transform is also discussed, using the quadratic polynomial as an example.
2.4.1 Curve models
To provide the best possible accuracy, we investigate the applicability of a wide range of curve models.
Higher polynomial orders are not considered since they would result in curvatures that overfit the data. As there is no physical motivation to restrict our attention to linear models, we also consider some nonlinear models that are potentially suitable parametrizations of the eyelid edge.
Also, in this case, higher orders were excluded to avoid an overfitting of the data and to avoid modeling artefacts that would be introduced by the periodicity.
with f(x)≥0. The motivation for applying pdf-type functions is that shifted and rotated versions of f(x) can parametrize the eyelid edge well using only a few parameters. Since there is no theoretical justification or practical investigation that suggests a particular distribution, we consider the following candidates.
is described by the scale parameter σ and its shape parameter λ. Here, x≥0 and σ,λ>0.
with x≥0 and σ,λ>0.
where x≥0 and σ,λ>0.
where x≥0 and a,b,p>0.
with scale parameter α and shape parameter β, where x≥0 and α,β>0.
where x≥0 and ν,σ≥0 with ν being the distance between the reference point and the center of the bivariate distribution. σ is the scale parameter, and I _{0}(x) is the modified Bessel function of the first kind with order zero.
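As an illustration, the Rice density with the parametrization above can be evaluated directly in numpy, whose np.i0 implements the order-zero modified Bessel function I_0; this is only a sketch of the model evaluation, not of the fitting procedure.

```python
import numpy as np

def rice_pdf(x, nu, sigma):
    """Rice probability density f(x; nu, sigma) for x >= 0,
    f(x) = x/sigma^2 * exp(-(x^2 + nu^2) / (2 sigma^2)) * I0(x*nu/sigma^2)."""
    x = np.asarray(x, dtype=float)
    s2 = sigma ** 2
    out = (x / s2) * np.exp(-(x ** 2 + nu ** 2) / (2 * s2)) * np.i0(x * nu / s2)
    return np.where(x >= 0, out, 0.0)
```

For ν=0 the Rice density reduces to the Rayleigh density, which is a quick sanity check.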
For α=0, the skew normal distribution reduces to the normal distribution. For an increasing absolute value of α, the skewness also increases. The distribution is right skewed for α>0, and for α<0, the distribution is left skewed.
Before fitting these models, the candidate pixels must be aligned and normalized to account for rotation or scaling. For this, a ground line is drawn from the lowest candidate pixel to the left to the lowest candidate pixel to the right in the image. The ground line is then rotated to a horizontal line and all candidate pixels are rotated with the same angle. The scale is normalized to one in both axes, and the fitting is performed on these transformed candidate pixels.
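The alignment step can be sketched as follows. How the candidate set is split into a left and a right part is not specified in the text, so the median-x split used here is an assumption; note that in image coordinates the "lowest" pixel is the one with the largest row index.

```python
import numpy as np

def align_candidates(pts):
    """Rotate candidate pixels (x, y), y growing downwards, so that the
    ground line between the lowest-left and lowest-right candidates becomes
    horizontal, then normalize both axes to [0, 1]."""
    pts = np.asarray(pts, dtype=float)
    mid = np.median(pts[:, 0])          # left/right split (an assumption)
    left = pts[pts[:, 0] <= mid]
    right = pts[pts[:, 0] > mid]
    p0 = left[np.argmax(left[:, 1])]    # lowest pixel = largest row index
    p1 = right[np.argmax(right[:, 1])]
    ang = np.arctan2(p1[1] - p0[1], p1[0] - p0[0])
    c, s = np.cos(ang), np.sin(ang)
    rot = (pts - p0) @ np.array([[c, s], [-s, c]]).T  # rotate by -ang
    span = np.ptp(rot, axis=0)
    span[span == 0] = 1.0
    return (rot - rot.min(axis=0)) / span
```

After alignment, the two ground-line endpoints share the same vertical coordinate and all candidates lie in the unit square, so the curve models can be fitted on a common scale.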
2.4.2 Robust estimation
In our notation, vectors and matrices are bold and random variables are in uppercase.
where the loss function ρ(x)=x ^{2} coincides with that of a least squares estimator (LSE). It is well known that this estimator is very sensitive to departures from the Gaussian data assumption. Robust statistics formalize the theory of approximate parametric models [30]. On the one hand, like classical parametric methods, robust methods are able to leverage a parametric model; on the other hand, they do not depend critically on the exact fulfilment of the model assumptions. In this sense, robust statistics are very close to engineering intuition and signal processing demands [31]. M-estimators robustify maximum likelihood estimation (MLE) by introducing a bounded score function \(\psi (x) = \frac {\partial \rho (x)}{\partial x}\).
with ρ(x)=|x| and \(\hat {\sigma }_{\text {rob}}\) as given by (14). It belongs to the class of M-estimators with a monotone score function.
Choosing the tuning constant to be k=4.685 ensures 95 % efficiency w.r.t. the MLE when the data exactly follow the nominal Gaussian model [32]. To obtain estimates for linear models, the minimization problem of (43) is easily solved using an iteratively reweighted least squares approach, as described in [32]. The LAD can serve as a starting point for Tukey’s biweight method. For nonlinear models, we used the trust-region method [33], which represents an improvement over the popular Levenberg-Marquardt algorithm [34, 35].
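The iteratively reweighted least squares (IRLS) idea can be sketched for a linear model as follows. For brevity, the sketch starts from an ordinary least squares fit rather than the LAD fit mentioned above; the weights implement Tukey's bisquare with k=4.685, and the residual scale is re-estimated in each iteration via the normalized MAD.

```python
import numpy as np

def mad_sigma(r):
    """Normalized MAD as a robust residual scale estimate."""
    return np.median(np.abs(r - np.median(r))) / 0.67448975

def tukey_weights(r, k=4.685):
    """Bisquare weights w(r) = (1 - (r/k)^2)^2 for |r| <= k, else 0."""
    u = r / k
    w = (1 - u ** 2) ** 2
    w[np.abs(u) > 1] = 0.0
    return w

def irls_bisquare(X, y, n_iter=30, k=4.685):
    """IRLS for y ~ X @ beta under Tukey's bisquare loss (a sketch;
    the LAD starting point of the paper is replaced by plain LS)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = mad_sigma(r)
        if s == 0:          # residuals of the inliers have collapsed
            break
        w = tukey_weights(r / s, k)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta
```

With 10 % gross outliers on an otherwise exact quadratic, the bisquare weights drive the outlier influence to zero and the true coefficients are recovered.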
2.4.3 Hough transform for parabolic curve detection
The Hough transform [36] is widely used in digital image processing and computer vision to isolate features of a particular shape within an image. Circular or parabolic Hough transforms have been applied to accurately detect the iris or eyelid boundary [37–39], respectively.
Based on geometrical limitations, boundaries for the parameters in Eq. (16) are determined so as to span a finite size 3D accumulator array, the Hough space. Within these boundaries, all possible parabolas are evaluated for each candidate pixel. If the corresponding parametrized parabola matches a candidate pixel, the value of a point in the Hough space is incremented.
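The voting scheme can be sketched as follows. Since Eq. (16) is not reproduced in this excerpt, the vertex parametrization y = a(x − x0)² + y0 used below is an assumption; for each candidate pixel and each (a, x0) pair, the implied y0 is quantized and the corresponding accumulator cell is incremented.

```python
import numpy as np

def hough_parabola(pts, a_vals, x0_vals, y0_vals):
    """Vote for parabolas y = a*(x - x0)**2 + y0 in a 3-D accumulator
    and return the parameter set with the maximum number of votes."""
    acc = np.zeros((len(a_vals), len(x0_vals), len(y0_vals)), dtype=int)
    y0_step = y0_vals[1] - y0_vals[0]
    for (x, y) in pts:
        for i, a in enumerate(a_vals):
            for j, x0 in enumerate(x0_vals):
                y0 = y - a * (x - x0) ** 2
                k = int(round((y0 - y0_vals[0]) / y0_step))
                if 0 <= k < len(y0_vals):
                    acc[i, j, k] += 1   # this parabola passes the pixel
    best = np.unravel_index(np.argmax(acc), acc.shape)
    return a_vals[best[0]], x0_vals[best[1]], y0_vals[best[2]]
```

Pixels sampled from one parabola all vote for the same cell, which then holds the accumulator maximum mentioned above.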
3 Realdata experiments
This section presents the evaluation metric, the experimental setup, and the results of the proposed procedure.
3.1 Experimental setup
3.2 Evaluation metric
Here, \(\boldsymbol {\hat {\theta }}\) represents the estimated parameters, N ^{ref} is the number of reference pixels, and \(\hat {y}_{m}(\boldsymbol {\hat {\theta }})\) defines the closest curve pixel to the reference pixel \(y_{m}^{\text {ref}}\). A pixel in a videokeratoscopic image corresponds to approximately 20 μm.
After evaluating Eq. (45) for all reference images, we report the mean and standard deviation (STD) taken over all images. We also calculate the median and MAD over all images, since they are robust counterparts of the mean and standard deviation that are not unduly influenced by severely divergent results on single images.
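The evaluation metric and the summary statistics can be sketched as follows; here the fitted curve is assumed to be given as a dense set of sampled pixels, so the closest-pixel search in Eq. (45) reduces to a nearest-neighbor distance.

```python
import numpy as np

def rms_deviation(ref_pts, curve_pts):
    """RMS over reference pixels of the distance to the closest curve pixel."""
    ref = np.asarray(ref_pts, dtype=float)
    cur = np.asarray(curve_pts, dtype=float)
    d2 = ((ref[:, None, :] - cur[None, :, :]) ** 2).sum(axis=2)
    return float(np.sqrt(d2.min(axis=1).mean()))

def summarize(per_image_rms):
    """Mean/STD and their robust counterparts, median/MAD, over all images."""
    r = np.asarray(per_image_rms, dtype=float)
    mad = float(np.median(np.abs(r - np.median(r))))
    return {"mean": float(r.mean()), "std": float(r.std(ddof=1)),
            "median": float(np.median(r)), "mad": mad}
```

A single severely divergent image inflates the mean and STD while leaving the median and MAD nearly unchanged, which is exactly why both pairs are reported.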
3.3 Results
Overall results of the proposed methods for all possible combinations of the stages shown in Fig. 4 (ranked by mean ± STD)
Rank  Filtering  Model  Fitting  Mean ± STD 

1  GDV  Rice  LAD  3.54 ± 1.44 
2  GDV  Rice  Bisquare  3.68 ± 1.49 
3  GDV  Parabola  LAD  4.32 ± 2.07 
4  GDV  Cubic  Bisquare  4.34 ± 1.51 
5  GDV  Cubic  LAD  4.42 ± 1.72 
6  GDV  Parabola  Bisquare  4.51 ± 2.64 
7  GDV  Rational  LAD  4.64 ± 1.91 
8  GDV  Fourier  LAD  4.69 ± 2.10 
9  GDV  Fourth-order  LAD  4.75 ± 2.11 
10  GDV  Fourier  Bisquare  4.80 ± 1.99 
11  GDV  Dagum  LAD  4.84 ± 1.54 
13  GDV  Skew-normal  Bisquare  4.86 ± 2.42 
17  GDV  Weibull  LAD  5.11 ± 2.09 
42  GDV  Log-logistic  LAD  5.71 ± 2.86 
71  Wavelet  Fréchet  Bisquare  6.10 ± 3.36 
78  Wavelet  Parabola  Hough  6.13 ± 4.42 
184  No filtering  Gamma  Bisquare  7.17 ± 1.41 
Overall results of the proposed methods (ranked by median ± MAD)
Rank  Filtering  Model  Fitting  Median ± MAD 

1  GDV  Parabola  Hough  3.50 ± 1.37 
2  GDV  Rice  LAD  3.52 ± 1.04 
3  GDV  Parabola  Bisquare  3.54 ± 0.91 
4  GDV  Fourth-order  LAD  3.61 ± 1.80 
5  GDV  Rice  Bisquare  3.62 ± 1.37 
6  GDV  Parabola  LAD  3.66 ± 1.17 
7  GDV  Rational  Bisquare  3.82 ± 0.61 
8  GDV  Fourier  Bisquare  3.83 ± 1.57 
9  GDV  Fourth-order  Bisquare  3.91 ± 2.28 
10  GDV  Skew-normal  Bisquare  4.03 ± 2.22 
11  GDV  Cubic  LAD  4.04 ± 1.74 
32  Wavelet  Weibull  Bisquare  4.62 ± 2.11 
40  Wavelet  Fréchet  Bisquare  4.86 ± 2.27 
42  GDV  Dagum  LAD  4.90 ± 1.08 
60  GDV  Log-logistic  LAD  5.16 ± 2.40 
113  GDV  Gamma  LAD  5.71 ± 3.03 
We next assess the performance of each individual stage of our proposed procedure.
Results of the two nonlinear image filtering approaches and without any filtering in RMS deviations over all images
GDV  Wavelet  None  

Mean ± STD  12.27 ± 13.26  10.31 ± 7.32  11.51 ± 8.60 
Median ± MAD  6.60 ± 4.58  8.23 ± 4.80  8.73 ± 4.89 
Results of the candidate verification procedure in RMS deviations over all images
Verification  No verification  

Mean ± STD  13.62 ± 15.73  13.67 ± 14.47 
Median ± MAD  9.27 ± 5.26  9.35 ± 5.06 
Results of the robust fitting approaches in RMS deviations over all images
Mestimator  Hough transform  

Mean ± STD  12.19 ± 13.62  18.46 ± 25.39 
Median ± MAD  8.81 ± 5.12  10.48 ± 5.75 
Results of the two robust estimation methods in RMS deviations over all ten images
Tukey’s bisquare  LAD  

Mean ± STD  13.43 ± 14.24  13.38 ± 14.50 
Median ± MAD  9.30 ± 5.16  9.25 ± 5.11 
Based on the presented results, we suggest that further eyelid localization research consider using M-estimators instead of the Hough transform, since they achieve similar accuracy while being significantly less computationally demanding. Furthermore, we recommend also considering curvature models other than the parabola. Candidate verification does not seem to be required when robust estimators are used.
4 Conclusions
We proposed a new procedure to robustly estimate the position of the eyelid edges in high-speed videokeratoscopic images. The proposed method applies eyelash removal before segmenting the image with an active contours approach that is initialized by a contour obtained from morphological opening and closing operations. The positions of the eyelids are verified and, finally, parametric curve models are fitted to the selected pixels using robust parameter estimators. Real-data experiments showed that the Rice model and the parabola achieved the best results. Furthermore, robust regression outperforms the Hough transform as a robust fitting method in terms of processing time and is similar in terms of accuracy. The overall precision of the proposed approach is on the order of 10^{−2} mm, which allows for replacing the currently used time-consuming manual labeling.
Declarations
Acknowledgements
The authors would like to thank D.R. Iskander and the staff at the Contact Lens and Visual Optics Laboratory (CLVOL) at the School of Optometry, Queensland University of Technology, Brisbane, Australia, for their efforts in collecting the videokeratoscopic data and for their advice, and the anonymous reviewers for their useful comments on the proposed approach. The work of M. Muma was supported by the project HANDiCAMS, which acknowledges the financial support of the Future and Emerging Technologies (FET) Programme within the Seventh Framework Programme for Research of the European Commission, under FET-Open grant number 323944.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
 AE Reynolds, Corneal Topography: Measuring and Modifying the Cornea (Springer, New York, 1992).
 W Alkhaldi, DR Iskander, AM Zoubir, MJ Collins, Enhancing the standard operating range of a Placido disk videokeratoscope for corneal surface estimation. IEEE Trans. Biomed. Eng. 56(3), 800–809 (2009).
 W Alkhaldi, DR Iskander, AM Zoubir, Model-order selection in Zernike polynomial expansion of corneal surfaces using the efficient detection criterion. IEEE Trans. Biomed. Eng. 57(10), 2429–2437 (2010).
 W Alkhaldi, Statistical signal and image processing techniques in corneal modeling. PhD thesis (Technische Universität Darmstadt, Germany, 2010).
 M Muma, AM Zoubir, in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE Int. Conf. on. Robust model order selection for corneal height data based on τ-estimation (Prague, 2011), pp. 4096–4099.
 DR Iskander, MJ Collins, B Davis, Evaluating tear film stability in the human eye with high-speed videokeratoscopy. IEEE Trans. Biomed. Eng. 52(11), 1939–1949 (2005).
 D Alonso-Caneiro, J Turuwhenua, DR Iskander, MJ Collins, Diagnosing dry eye with dynamic-area high-speed videokeratoscopy. J. Biomed. Opt. 16(7), 076012 (2011).
 DH Szczesna-Iskander, DR Iskander, Future directions in non-invasive measurements of tear film surface kinetics. Optom. Vis. Sci. 89(5), 749–759 (2012).
 DH Szczesna-Iskander, D Alonso-Caneiro, DR Iskander, Objective measures of pre-lens tear film dynamics versus visual responses. Optom. Vis. Sci. 93(8), 872–880 (2016).
 DR Iskander, MJ Collins, Applications of high-speed videokeratoscopy. Clin. Exp. Optom. 88(4), 223–231 (2005).
 J Nemeth, B Erdelyi, B Csakany, P Gaspar, A Soumelidis, F Kahlesz, Z Lang, High-speed videotopographic measurement of tear film buildup time. Invest. Ophthalmol. Vis. Sci. 43(6), 1783–1790 (2002).
 YK Jang, BJ Kang, KR Park, A study on eyelid localization considering image focus for iris recognition. Pattern Recognit. Lett. 29(11), 1698–1704 (2008).
 X Liu, P Li, Q Song, in Proceedings of the 3rd International Conference on Advances in Biometrics (ICB ’09), Lecture Notes in Computer Science, vol. 5558. Eyelid localization in iris images captured in less constrained environment (Alghero, Italy, 2009), pp. 1140–1149.
 MJ Aligholizadeh, SH Javadi, R Sabbaghi-Nadooshan, K Kangarloo, in International Conference on Biometrics and Kansei Engineering. An effective method for eyelashes segmentation using wavelet transform (Takamatsu, 2011), pp. 185–188.
 F Bernard, CE Deuter, P Gemmar, H Schachinger, Eyelid contour detection and tracking for startle research related eye-blink measurements from high-speed video records. Comput. Methods Prog. Biomed. 112(1), 22–37 (2013).
 JF Canny, A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986).
 G Sicuranza, Nonlinear Image Processing (Academic Press, San Diego, 2000).
 D Zhang, DM Monro, S Rakshit, in IEEE International Conference on Image Processing. Eyelash removal method for human iris recognition (Atlanta, 2006), pp. 285–288.
 R Adams, L Bischof, Seeded region growing. IEEE Trans. Pattern Anal. Mach. Intell. 16(6), 641–647 (1994).
 S Beucher, F Meyer, The morphological approach to segmentation: the watershed transformation. Opt. Eng. 34, 433 (1992).
 M Kass, A Witkin, D Terzopoulos, Snakes: active contour models. Int. J. Comput. Vis. 1, 321–331 (1988).
 N Otsu, A threshold selection method from gray-level histograms. Automatica 11, 23–27 (1975).
 D Ballard, Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. 13, 111–122 (1981).
 P Rosin, E Rammler, The laws governing the fineness of powdered coal. J. Inst. Fuel 7, 29–36 (1933).
 W Weibull, A statistical distribution function of wide applicability. J. Appl. Mech. 18, 293–297 (1951).
 C Dagum, in Proceedings of the 40th Session of the International Statistical Institute, 46. A model of income distribution and the conditions of existence of moments of finite order (Warsaw, 1975), pp. 196–202.
 C Dagum, A new model of personal income distribution: specification and estimation. Économie Appliquée 30, 413–436 (1977).
 SO Rice, Mathematical analysis of random noise. Bell Syst. Tech. J. 24, 146–156 (1945).
 A Azzalini, A Dalla Valle, The multivariate skew-normal distribution. Biometrika 83(4), 715–726 (1996).
 PJ Huber, Robust estimation of a location parameter. Ann. Math. Stat. 35, 73–101 (1964).
 AM Zoubir, V Koivunen, Y Chakhchoukh, M Muma, Robust estimation in signal processing: a tutorial-style treatment of fundamental concepts. IEEE Signal Process. Mag. 29(4), 61–80 (2012).
 RA Maronna, DR Martin, VJ Yohai, Robust Statistics. Wiley Series in Probability and Statistics (Wiley, Chichester, 2006).
 MA Branch, TF Coleman, Y Li, A subspace, interior, and conjugate gradient method for large-scale bound-constrained minimization problems. SIAM J. Sci. Comput. 21, 1–23 (1999).
 K Levenberg, A method for the solution of certain problems in least squares. Quart. Appl. Math. 2, 164–168 (1944).
 DW Marquardt, An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. 11(2), 431–441 (1963).
 PV Hough, BW Powell, A method for faster analysis of bubble chamber photographs. Nuovo Cimento Ser. 10 18, 1184–1191 (1960).
 L Masek, Recognition of human iris patterns for biometric identification. Technical report (The University of Western Australia, 2003).
 P Li, X Liu, L Xiao, Q Song, Robust and accurate iris segmentation in very noisy iris images. Image Vis. Comput. 28(2), 246–253 (2010).
 DS Jeong, JW Hwang, BJ Kang, KR Park, CS Won, DK Park, J Kim, A new iris segmentation method for non-ideal iris images. Image Vis. Comput. 28(2), 254–260 (2010).
 M Muma, Robust estimation and model order selection for signal processing. PhD thesis (2014).