An image autofocusing algorithm for industrial image measurement
Shuxin Liu^{1, 2}, Manhua Liu^{3} (corresponding author), and Zhongyuan Yang^{2}
https://doi.org/10.1186/s13634-016-0368-5
© The Author(s). 2016
Received: 22 December 2015
Accepted: 31 May 2016
Published: 13 June 2016
Abstract
Autofocusing, which automatically obtains the best image focus, plays an important role in improving the image definition for industrial image measurement applications. Image-based autofocusing is one of the most widely used methods for this task because of its fast response, convenience, and intelligence. In general, an image-based autofocusing algorithm consists of two important steps: image definition evaluation and the search strategy. In this paper, we develop an image autofocusing algorithm for industrial image measurement. First, we propose a new image definition evaluation method based on fuzzy entropy, which reduces the negative effects of noise and of variations in light intensity and lens magnification. Second, a combined search method is proposed that couples a multi-scale global search with fine-level curve fitting, which avoids the disturbance of local peaks and obtains the best image focus. The proposed algorithm has the advantages of high focusing accuracy and of high repeatability and stability under variations of lens magnification and light intensity index, which make it applicable to industrial image measurement. Experimental results and comparisons on a practical industrial image measurement system are presented to show the effectiveness and superiority of the proposed algorithm.
Keywords
Image autofocusing · Fuzzy entropy · Image definition evaluation · Global search · Curve fitting · Industrial image measurement

1 Introduction
With the rapid development and wide application of computer and image processing technologies, image-based non-contact measurement has been widely used in numerous fields, from industrial quality inspection and robotics to medicine and biology, because of its speed, convenience, and intelligence. The main task of industrial image measurement is to calculate the dimensions of workpieces based on images captured by a CCD camera, so acquiring a high-definition image is of prime importance. Autofocusing, which automatically obtains the best image focus, plays an important role in improving the image definition for industrial image measurement. Many methods have been proposed for autofocusing [1, 2]. Image-based autofocusing, which relies on evaluating image definition and searching for the best focus, is one of the most widely used because it is non-contact, convenient, and intelligent; it also avoids the product distortion caused by contact measurement with calipers or coordinate measuring machines on workpieces such as plastic boxes and cellphone films. In general, an image-based autofocusing algorithm consists of two important steps: image definition evaluation and the search strategy. First, an appropriate evaluation method is applied to calculate the definitions of the images captured at different focal positions of the workpiece. Based on the evaluation results, a search method is then applied to obtain the position of the best focus, to which the CCD lens is driven. Image-based autofocusing usually requires high focusing accuracy, i.e., the obtained position of the best focus should be as close as possible to the real value. In addition, it requires that the result be robust to noise and to variations of imaging conditions such as light intensity and lens magnification, i.e., the best focusing positions obtained over multiple runs should be consistent, with small variation.
Image-based autofocusing has been widely studied because of its non-contact operation, fast response, and convenience. Most studies aim to improve focusing accuracy, reduce computation time, and enhance robustness to noise and variations, from two important perspectives: image definition evaluation and the focus search algorithm [3, 4]. Image definition evaluation, which measures image quality (definition), is one of the key steps of image-based autofocusing, and many evaluation methods have been proposed in the literature. R. Redondo et al. compared and analyzed sixteen definition evaluation functions in terms of computation cost, accuracy, and the effects of noise and lighting through contrast tests on cell tissue images [5]. A. Akiyama et al. proposed a definition evaluation function based on the Daubechies wavelet transform, which sets four weightings according to the decomposed frequency bands and was applied in an uncooled infrared camera [6]. To address the problems of low image contrast and a flat definition evaluation curve under low light, M. Gamadia et al. adopted an image enhancement method to increase the contrast of the image and then designed a corresponding focusing evaluation function to measure the image contrast [7]. Makkapati presented an improved wavelet-based image autofocusing method for blood smears under the microscope [8]. In this method, the red blood cell images are first segmented by thresholding the green component of the image, and the wavelet-based focus measure is then evaluated on the segmented images. This method yields a smooth definition evaluation curve without undulations, which improves autofocusing accuracy. To realize autofocusing in an aerial push-broom remote-sensing camera, Lu et al. proposed an image definition evaluation function based on the line spread function (LSF) [9]. This method first computes the edge spread function (ESF) by searching for the blade edge among the edges detected in the image. Second, the pixel-level LSF is obtained by differentiating the ESF. Third, the sub-pixel-level LSF is obtained by least-squares curve fitting of the LSF. Finally, the standard deviation parameter σ of the LSF is used as the image definition evaluation measure. Although these methods perform well in their specific areas, most of them are easily affected by noise and by variations of lens magnification and light intensity index, which is an important problem for industrial image measurement.
After evaluating the image definition or quality, a search method is usually applied to find the best focusing position for image capture [10]. The most straightforward method is the global search, which obtains the optimal position by scanning through all possible focus positions in a unidirectional manner. It has a high computation cost and is applicable only to cases with a narrow focus range. Various search strategies have been proposed in the literature to speed up the search and improve accuracy [11–13]. Most of them fall into three categories: binary search (BS), Fibonacci search (FS), and rule-based search (RS) [14]. In the binary search, the image definition evaluation function is computed at two locations and their difference is taken; if the difference is negative, the next move follows the opposite direction. Given the unimodal shape of the definition evaluation, the binary search can converge to the best focus location quickly, provided the step magnitude is chosen heuristically. The hill-climbing search is a popular strategy developed from the basic binary search, which divides the search procedure into two stages: out-of-focus region searching and focused region searching [15]. The hill-climbing search has been further improved with modifications to the step-size selection, termination criteria, search window, etc. [11, 15]. The Fibonacci search is one of the well-known strategies, narrowing the search interval until its size equals a given fraction of the initial search range [16]. It was further improved by combining the Fibonacci search with curve fitting to avoid local-optimum disturbance [12]. The rule-based search is a sequential search method that adjusts the search step size according to the distance from the best focus position [13]; it avoids the back-and-forth motor motion of the Fibonacci search. In [11], Liu et al. proposed a search method that combines an image definition evaluation function with a modulation transfer function (MTF) auxiliary function to judge the search direction. Although search methods have been widely studied, obtaining the best focusing position with high accuracy and low computation cost is still a challenging problem.
In this work, we develop an image autofocusing algorithm for industrial image measurement. First, we propose a new method for image definition evaluation which uses fuzzy entropy to measure the degree of uncertainty of an image and evaluates the image definition based on a statistical analysis of the gray level differences of the image. Second, a combined search strategy is proposed that uses a multi-scale global search together with fine-level curve fitting to obtain the best image focus. The traditional global search method, which scans through all possible focus positions with a constant step size, has two major limitations: a high computation cost and applicability only to a narrow focus range. In contrast, our method applies a multi-scale global search, which can also handle images whose focus range is not narrow. A large-scale search is first performed with a large step size to find the point with the maximum definition. The search region is then narrowed to the neighborhood centered at the previously found point, and the step size is reduced for the small-scale search. This process is iterated until the difference between the points found by two successive searches is smaller than a threshold. In this way, the search is accelerated without sacrificing accuracy. The global search at multiple scales narrows the search range at the coarse level, while curve fitting is applied at the fine level to obtain the optimal position of the best focus rather than merely the sampled position with the maximum image definition. Finally, the proposed image autofocusing algorithm is tested on a practical industrial image measurement system to show its performance.
The rest of this paper is organized as follows. Section 2 will present in detail the proposed image autofocusing algorithm. In Section 3, experimental results and comparisons are presented to show the effectiveness of the proposed algorithm. Finally, the paper is concluded in Section 4.
2 The proposed image autofocusing algorithm
In this work, we have developed an image autofocusing algorithm for industrial image measurement. It consists of two main processing steps, evaluation of image definition and search of the best focus, which are described in detail in the following subsections.
2.1 Image definition evaluation based on fuzzy entropy

Step 1. Definition of fuzzy entropy

Let f(i, j) denote the normalized gray level of the image at pixel (i, j). The membership function with respect to a reference gray level k is defined as
$$ \mu_k\big(f(i,j)\big) = \frac{1}{1+\left|f(i,j)-k\right|} $$(1)
where k is a parameter belonging to [0, 1] and μ _{ k }(f(i, j)) ∈ [0.5, 1]. From Eq. (1), the value of μ _{ k }(f(i, j)) reaches its maximum of 1 when |f(i, j) − k| = 0 (i.e., f(i, j) = k), and it decreases to its minimum value when |f(i, j) − k| increases to its maximum of 1. If the image gray level f(i, j) decreases or increases away from k, the membership value μ _{ k } decreases symmetrically. Thus, the value of μ _{ k }(f(i, j)) ranges between 0.5 and 1, which satisfies the requirement that a membership value must lie in the range [0, 1]. For the evaluation of image definition, the membership function can be used to measure the membership of the gray level difference between an image pixel and its neighbors, where k is set to the gray level of the image pixel and f(i, j) is the gray level of a neighbor. The membership is large if the gray level difference is small, and vice versa.
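To make the behavior of the membership function concrete, the following Python sketch implements it; the closed form 1/(1 + |f − k|) is our assumption, chosen to match the stated properties (a maximum of 1 at f(i, j) = k, falling symmetrically toward 0.5 as the normalized difference reaches 1):

```python
import numpy as np

def membership(f, k):
    """Membership of a normalized gray level f (in [0, 1]) with respect to a
    reference level k: equals 1 when f == k and decreases symmetrically to
    0.5 as |f - k| grows to 1.  (Assumed closed form for Eq. (1).)"""
    return 1.0 / (1.0 + np.abs(f - k))
```

For instance, `membership(0.5, 0.5)` is 1 and `membership(1.0, 0.0)` is 0.5, the two extremes of the range [0.5, 1].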

Step 2. Image definition evaluation

The fuzzy entropy of a membership value μ measures its degree of uncertainty:
$$ S(\mu) = -\mu \ln \mu - (1-\mu)\ln(1-\mu) $$(2)
The image contrast evaluation at pixel (i, j) is computed by averaging the fuzzy entropies of the membership values in its w × w neighborhood W _{ i, j }:
$$ m_k(i,j) = \frac{1}{w^2}\sum_{(p,q)\in W_{i,j}} S\big(\mu_k(f(p,q))\big) $$(3)
where k = f(i, j). The window size w must be an odd number so that the image pixel (i, j) lies at the center of the neighboring window. The parameter w also has an important effect on the image contrast evaluation: with a small w, the evaluation has good sharpness but is sensitive to noise, whereas a large w performs better for denoising at the expense of sharpness. To balance this trade-off, w is set to 5 in our experiments. When the central pixel (i, j) of the w × w window is an edge point, the gray level differences are large and the image contrast value m _{ k }(i, j) is large; otherwise, m _{ k }(i, j) is small, as in a homogeneous area. Thus, the image contrast evaluation m _{ k }(i, j) increases monotonically with the gray level differences between the central pixel and its neighbors. For a captured image of size m × n, we compute the contrast values m _{ k }(i, j) of all pixels to obtain the image contrast map [m _{ k }(i, j)]_{ m × n }, and the overall image definition value is computed by summing the contrast values over the whole image:
$$ F = \sum_{i=1}^{m}\sum_{j=1}^{n} m_k(i,j) $$(4)
Compared to traditional methods, the proposed image definition measure based on fuzzy entropy has two important advantages. First, the membership function μ _{ k }(f(i, j)) is defined on local differences of the normalized gray values, which reduces the negative effect of lighting fluctuations. Second, the image contrast evaluation m _{ k }(i, j) in Eq. (3) is computed by averaging the entropies of the membership values in a local neighborhood, which reduces the effect of noise. In particular, compared to the wavelet-based methods [7, 8], which are computationally expensive because they decompose the image into different frequency bands and compute the transform, the proposed measure is more efficient and more applicable in a practical image measurement system. Unlike the LSF method [9] based on line detection, which is effective only for images containing lines and is sensitive to noise in edge-line detection, the proposed measure, obtained by averaging fuzzy entropies over a local neighborhood, can be used on various types of images and reduces the effects of environmental factors such as lens magnification, lighting conditions, and noise. Consequently, local peaks of the image definition values are effectively suppressed, which facilitates the search for the best focus. In addition, compared to other edge-based methods such as the Roberts and Tenengrad functions [20], the proposed measure handles different types of edges in the same way and avoids the difficulty of distinguishing step edges from line edges. It therefore produces a reasonable measurement of image definition.
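As an illustration, a brute-force Python sketch of the whole measure follows. The exact formulas are our assumptions where the text leaves them implicit: we use the membership μ = 1/(1 + |f − k|) on gray levels normalized to [0, 1], the Shannon-type fuzzy entropy S(μ) = −μ ln μ − (1 − μ) ln(1 − μ), and we sum the per-pixel contrast values into one definition score:

```python
import numpy as np

def fuzzy_entropy_definition(img, w=5):
    """Fuzzy-entropy image definition score (a sketch, not the paper's exact code).

    For each pixel, memberships of its w x w neighbors are computed with
    mu = 1 / (1 + |f - k|) on gray levels normalized to [0, 1]; the contrast
    value m_k(i, j) is the mean Shannon fuzzy entropy of those memberships,
    and the image score is the sum of the contrast map."""
    assert w % 2 == 1, "w must be odd so pixel (i, j) sits at the window center"
    f = img.astype(np.float64) / 255.0
    h, width = f.shape
    r = w // 2
    pad = np.pad(f, r, mode="edge")
    score = 0.0
    for i in range(h):
        for j in range(width):
            win = pad[i:i + w, j:j + w]              # w x w neighborhood
            mu = 1.0 / (1.0 + np.abs(win - f[i, j]))
            mu = np.clip(mu, 1e-12, 1.0 - 1e-12)     # keep log() finite
            s = -mu * np.log(mu) - (1.0 - mu) * np.log(1.0 - mu)
            score += s.mean()                        # m_k(i, j)
    return score
```

A sharp, high-contrast image scores higher than a defocused (flat) one, so this score can be maximized over lens positions during the focus search.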
2.2 The combined search method
To address the above problems, we propose a search method based on the combination of a global search and fine-level curve fitting. First, the global search is used to find the peak point with the maximum image definition evaluation. The computation cost is very high if the image sampling rate is high and the search step is small. To reduce the computation cost and speed up the search for real-time applications, we apply multiple global searches from coarse to fine scale by gradually decreasing the step size and narrowing the search area. A large step size is used for the coarse-scale global search for the maximum image definition, and the search area is then reduced to a smaller region around the obtained optimal point for a finer global search with a smaller step size. This process is repeated iteratively until the global optimal sampling point with the maximum image definition evaluation is obtained.

Step 1: A global search with a large step size is applied to find the position with the maximum image definition value. Let \( {P}_0^1 \) denote the start point, \( {P}_n^1 \) the end point, and \( m_1 \) the step size. The size of the search area is \( D_1 = P_n^1 - P_0^1 \). The CCD lens is driven to scan the focusing range in steps of \( m_1 \); an image is captured at each step position and its definition value is computed. The set of image definition values is denoted \( \left\{F\left({P}_0^1\right),\ F\left({P}_1^1\right),\dots, F\left({P}_n^1\right)\right\} \). The global search finds the optimal position, denoted \( {P}_m^1 \), with the maximum definition value.

Step 2: The distances from the optimal position \( {P}_m^t \) of the previous step t (t ≥ 1) to the start point \( {P}_0^t \) and to the end point \( {P}_n^t \) are computed as \( \left|P_m^t - P_0^t\right| \) and \( \left|P_m^t - P_n^t\right| \), respectively. The larger of the two is \( D_t = \max\left(\left|P_m^t - P_0^t\right|, \left|P_m^t - P_n^t\right|\right) \). The next search area is defined as \( \left[P_m^t - \frac{D_t}{2},\ P_m^t + \frac{D_t}{2}\right] \), and the new step size is \( m_{t+1} = \frac{D_t}{D_{t-1}} \times m_t \). The search then proceeds as in Step 1: the image definition value is calculated at each search step, and the optimal position with the maximum definition value is obtained as \( {P}_m^{t+1} \).

Step 3: We compute the distance between the current and previous optimal positions, i.e., \( \left|P_m^{t+1} - P_m^t\right| \). If it is not larger than a threshold \( \sigma \), i.e., \( \left|P_m^{t+1} - P_m^t\right| \le \sigma \), go to Step 4; otherwise, go to Step 2 for a new search with a decreased step size. The threshold \( \sigma \) can be set according to the accuracy and speed requirements of the practical application.

Step 4: Let P _{ m } be the final peak position with the maximum definition value obtained by the global search above. To obtain the actual optimal position of the best focus, curve fitting is applied over a small range around P _{ m }. We fit the image definition values versus the search positions to a quadratic polynomial:
$$ y = a + bx + c{x}^2 $$(5)
where x denotes the search position, y the image definition value, and a, b, c are the parameters of the quadratic polynomial to be estimated. The least squares method yields the normal equations for the curve fitting:
$$ \left[\begin{array}{ccc} n & \sum_{k=1}^n x_k & \sum_{k=1}^n x_k^2 \\ \sum_{k=1}^n x_k & \sum_{k=1}^n x_k^2 & \sum_{k=1}^n x_k^3 \\ \sum_{k=1}^n x_k^2 & \sum_{k=1}^n x_k^3 & \sum_{k=1}^n x_k^4 \end{array}\right] \left[\begin{array}{c} a \\ b \\ c \end{array}\right] = \left[\begin{array}{c} \sum_{k=1}^n y_k \\ \sum_{k=1}^n y_k x_k \\ \sum_{k=1}^n y_k x_k^2 \end{array}\right] $$(6)
Finally, the curve peak position P _{ f }, taken as the final position of the best focus, is calculated as:
$$ {P}_f = -\frac{b}{2c} $$(7)
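Under the assumption that `positions` and `values` hold the sampled focus positions and their definition scores around the coarse optimum, Eqs. (5)–(7) reduce to a few lines of Python; `numpy.polyfit` solves the same least-squares problem as the normal equations of Eq. (6):

```python
import numpy as np

def fit_peak(positions, values):
    """Fit y = a + b*x + c*x**2 by least squares and return the vertex
    position P_f = -b / (2c) of Eq. (7), the refined best-focus estimate."""
    c, b, a = np.polyfit(positions, values, 2)  # coefficients, highest degree first
    return -b / (2.0 * c)
```

For a definition curve that is exactly quadratic with its peak at x = 3, `fit_peak` recovers 3 even when no sample falls exactly on the peak.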
Our proposed search method combines the multi-scale global search and curve fitting, which not only achieves high focusing accuracy but also has a low computation cost suitable for a real-time industrial measurement system. The traditional global search finds the optimal position by scanning through all possible focus positions in a unidirectional manner, which results in a high computation cost. In contrast, our method reduces the computation cost through a coarse-to-fine search strategy with a varying step size. Compared to the hill-climbing method, which is sensitive to local-optimum disturbance, our method avoids such disturbance and finds the global point with the maximum image definition. The traditional Fibonacci method iteratively reduces the search area, centering each new area on the previously obtained point of maximum definition and roughly halving the area in each iteration; however, it requires back-and-forth motor motion and is sensitive to local-optimum disturbance. The rule-based method is a sequential search that adjusts the step size according to the distance from the best focus position; it avoids the back-and-forth motor motion of the Fibonacci search but is sensitive to variations of lens magnification and lighting conditions. Our method takes advantage of both the Fibonacci and rule-based searches through the multi-scale global search with a varying step size, which avoids local-optimum disturbance while keeping the computation cost low and the robustness to variations high. In addition, the image at the true best focus may never be captured, owing to the sampling distance and differing starting points; in that case, the existing methods above cannot detect the true best focusing position. Our method recovers it by the subsequent curve fitting, which improves the focusing accuracy. Thus, compared to the existing methods, our method avoids local-optimum disturbance and achieves high focusing accuracy without sacrificing computation cost, which makes it applicable to a practical image measurement system.
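The coarse-to-fine loop of Steps 1–3 can be sketched as follows. Here `evaluate(p)` stands in for driving the lens to position p, capturing an image, and returning its definition value; in this sketch it may be any unimodal function, and the variable names are ours, not the paper's:

```python
import numpy as np

def multiscale_search(evaluate, start, end, step, sigma):
    """Coarse-to-fine global search (a sketch of Steps 1-3).

    Scans [start, end] in steps of `step`, then repeatedly shrinks the
    window around the best point (D_t = max distance to an endpoint) and
    rescales the step by D_t / D_{t-1}, stopping once two successive
    optima differ by at most `sigma`."""
    d_prev = end - start
    p_best = None
    while True:
        positions = np.arange(start, end + step / 2.0, step)
        values = [evaluate(p) for p in positions]
        p_new = positions[int(np.argmax(values))]
        if p_best is not None and abs(p_new - p_best) <= sigma:
            return p_new                       # Step 3: converged
        p_best = p_new
        d = max(p_best - start, end - p_best)  # D_t
        start, end = p_best - d / 2.0, p_best + d / 2.0
        step *= d / d_prev                     # m_{t+1} = (D_t / D_{t-1}) m_t
        d_prev = d
```

The returned coarse optimum would then be refined by the curve fitting of Step 4.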
3 Experimental results and analysis
In this section, we present experiments to test the performance of the proposed image autofocusing algorithm. First, we introduce the test platform (including its hardware and software) of the industrial image measurement system used in the experiments. Second, the characteristics of the experimental objects, including the brake pad and the bearing, are introduced. Third, the proposed image autofocusing algorithm is tested on this platform with the experimental objects to show its effectiveness.
From Eq. (10), a smaller T indicates better autofocusing stability. RMSE expresses the difference between the measured values and the real value, SD expresses the spread among the measured values, and T expresses the maximum difference between the measured values. In the following experiments, we compute these three measures to evaluate the performance of the image autofocusing algorithms.
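The three measures can be computed directly from the repeated height measurements; in this sketch, the choice of the sample (n − 1) denominator for SD is our assumption, since the text does not specify it:

```python
import numpy as np

def stability_metrics(measured, true_value):
    """RMSE against the real value, SD among the measurements, and the
    range T = max - min; a smaller T means better autofocusing stability."""
    x = np.asarray(measured, dtype=float)
    rmse = np.sqrt(np.mean((x - true_value) ** 2))
    sd = np.std(x, ddof=1)        # sample standard deviation (assumed)
    t = x.max() - x.min()         # maximum difference between measurements
    return rmse, sd, t
```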
In a practical image measurement system, image autofocusing is easily affected by environmental factors, especially noise, lens magnification, and the light intensity index, so we also test the robustness of the proposed algorithm to variations of the latter two. In addition, we compare our algorithm with three other image autofocusing algorithms from the literature [11, 13, 19], denoted AF1, AF2, and AF3, respectively; our proposed algorithm is denoted AF4 in the following subsections. The first experiment is conducted in a relatively ideal environment with the optimal lens magnification and light intensity index. Since fluctuations of lens magnification and light intensity index often occur in practical image measurement and affect autofocusing performance, we also conduct experiments with variations of these two factors.
3.1 The test platform of an industrial image measurement system
The image-based measurement system works as follows. First, the lens magnification and light intensity index are optimally set for image capture. Second, the proposed autofocusing algorithm is applied to find the position of the best focus. Third, the lens is driven to the optimal focus position by the motor, and the best-focus images are captured for measurement. Fourth, the images are processed and the height of the workpiece is calculated. Finally, the result is recorded after the software corrects the grating ruler error and the vertical error.
3.2 Experimental objects and experimental parameters confirmation
The brake pad is a planar body with a slightly rough surface, and the bearing is a three-dimensional workpiece with a smooth surface. The purpose of the image autofocusing experiments is to calculate the heights of the workpieces. Theoretically, the measured height values are most accurate when the measured image is captured at the position of the best focus. The real heights of the brake pad and the bearing are 67.560 and 26.234 mm, respectively, while the real heights of the big and small standard cuboids are 60 and 30 mm, respectively.
3.3 Experiment with the optimal lens magnification and light intensity index
Performance comparison results of different image autofocusing algorithms under the optimal conditions for different workpieces

            Brake pad                          Bearing
            AF1      AF2      AF3      AF4     AF1      AF2      AF3      AF4
RMSE (mm)   0.0025   0.0035   0.0034   0.0011  0.0032   0.0031   0.0028   0.0013
SD (mm)     0.0026   0.0037   0.0036   0.0018  0.0029   0.0035   0.0036   0.0017
T (mm)      0.0080   0.0110   0.0110   0.0030  0.0100   0.0090   0.0090   0.0030

            Big standard cuboid                Small standard cuboid
            AF1      AF2      AF3      AF4     AF1      AF2      AF3      AF4
RMSE (mm)   0.0037   0.0032   0.0042   0.0010  0.0029   0.0028   0.0031   0.0013
SD (mm)     0.0035   0.0031   0.0040   0.0009  0.0027   0.0027   0.0029   0.0012
T (mm)      0.0100   0.0090   0.0110   0.0030  0.0090   0.0090   0.0100   0.0040
3.4 Experiment with variation of lens magnification
Performance comparison of different image autofocusing algorithms with variation of lens magnification

            Brake pad                          Bearing
            AF1      AF2      AF3      AF4     AF1      AF2      AF3      AF4
RMSE (mm)   0.0052   0.0096   0.0100   0.0025  0.0047   0.0084   0.0088   0.0021
SD (mm)     0.0054   0.0071   0.0078   0.0028  0.0051   0.0078   0.0086   0.0019
T (mm)      0.0170   0.0320   0.0300   0.0056  0.0210   0.0310   0.0190   0.0062

            Big standard cuboid                Small standard cuboid
            AF1      AF2      AF3      AF4     AF1      AF2      AF3      AF4
RMSE (mm)   0.0100   0.0110   0.0150   0.0011  0.0098   0.0112   0.0129   0.0018
SD (mm)     0.0095   0.1020   0.0143   0.0011  0.0094   0.0107   0.0123   0.0017
T (mm)      0.0290   0.0250   0.0340   0.0030  0.0270   0.0300   0.0310   0.0060
3.5 Experiment with variation of light intensity index
Performance comparison of different image autofocusing algorithms with variation of light intensity index

            Brake pad                          Bearing
            AF1      AF2      AF3      AF4     AF1      AF2      AF3      AF4
RMSE (mm)   0.0053   0.0093   0.0095   0.0024  0.0057   0.0089   0.0089   0.0028
SD (mm)     0.0044   0.0068   0.0063   0.0024  0.0051   0.0063   0.0076   0.0023
T (mm)      0.0180   0.0330   0.0370   0.0050  0.0280   0.0370   0.0290   0.0052

            Big standard cuboid                Small standard cuboid
            AF1      AF2      AF3      AF4     AF1      AF2      AF3      AF4
RMSE (mm)   0.0110   0.0110   0.0140   0.0021  0.0107   0.0121   0.0137   0.0020
SD (mm)     0.0099   0.0106   0.0132   0.0020  0.0101   0.0115   0.0131   0.0019
T (mm)      0.0290   0.0270   0.0310   0.0060  0.0280   0.0330   0.0320   0.0060
3.6 Computation complexity analysis and comparison
In this section, we analyze the computation complexity of the proposed image autofocusing algorithm and compare it with the three other algorithms in [11, 13, 19]. The total autofocusing process includes image acquisition, image processing, search of the best focus, control signal transmission, motor motion, delays, etc. Our proposed algorithm was implemented in VC++ on a computer with a 3.1 GHz Intel Core i5-4440 CPU, 4 GB RAM, and 64-bit Windows 7, as shown in Fig. 6b. All experiments perform the image autofocusing task on the brake pad workpiece. Our proposed algorithm takes 12.2 s for this task, as shown in Fig. 4. For a fair comparison, we also test the three other algorithms on the same task with the same test platform. Their computation costs are 13.8, 11.5, and 11.9 s for the algorithms of [11, 13, 19], respectively. The search of the best focus takes most of the computation time of an image autofocusing algorithm. The algorithm in [11] applies the hill-climbing search to find the best focus, which is computationally expensive. The algorithm in [13] does not need the reciprocating motion of the motor and thus speeds up the focusing process, but it requires manual adjustment of the search step size when the lens magnification and lighting change, which limits its adaptability in a practical, complex image measurement system. The algorithm in [19] has a computational cost similar to ours, but its focusing accuracy and stability are lower.
4 Conclusions
In this paper, we have proposed an image autofocusing algorithm for industrial image measurement. First, a new method is proposed to evaluate the image definition based on fuzzy entropy; it is robust to noise and to variations of lens magnification and lighting conditions. Second, we propose a search method to obtain the optimal position of the best focus by combining a multi-scale global search with fine-level curve fitting. The combined search not only achieves high best-focus accuracy but also has a low computation cost. Finally, experimental results and comparisons show that the proposed algorithm achieves higher focusing accuracy as well as better repeatability and stability under variations of lens magnification and light intensity index than the other existing algorithms, without sacrificing computation cost. These advantages make it applicable to practical industrial image measurement in complex environments.
Declarations
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant No. 61375112 and No. 61070226).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
1. M Moscaritolo, H Jampel, F Knezevich, R Zeimer, An image based autofocusing algorithm for digital fundus photography. IEEE Trans. Med. Imaging 28(11), 1703–1707 (2009)
2. X Wang, RC Sun, SY Xu, Autofocusing technique based on image processing for remote-sensing camera. Proc. SPIE 6623, 66230F (2008)
3. R Chen, PV Beek, Improving the accuracy and low-light performance of contrast-based autofocus using supervised machine learning. Pattern Recogn. Lett. 56, 30–37 (2015)
4. M Weston, P Mudge, C Davis, A Peyton, Time efficient auto-focussing algorithms for ultrasonic inspection of dual-layered media using full matrix capture. NDT&E Int. 47, 43–50 (2012)
5. R Redondo, G Bueno, JC Valdivieze et al., Autofocus evaluation for brightfield microscopy pathology. J. Biomed. Opt. 17(3), 036008 (2012)
6. A Akiyama, N Kobayashi, E Mutoh, H Kumagai, H Yamada, H Ishii, Infrared image guidance for ground vehicle based on fast wavelet image focusing and tracking. Proc. SPIE 7429, 742906 (2009)
7. M Gamadia, N Kehtarnavaz, KR Hoffman, Low-light auto-focus enhancement for digital and cell-phone camera image pipelines. IEEE Trans. Consum. Electron. 53(2), 249–257 (2007)
8. VV Makkapati, Improved wavelet-based microscope autofocusing for blood smears by using segmentation, in 5th Annual IEEE Conference on Automation Science and Engineering (2009), pp. 208–211
9. ZH Lu, YF Guo, HF Li, YF Li, Autofocus using LSF in aerial push-broom remote-sensing camera. Infrared Laser Eng. 41(7), 1808–1814 (2012)
10. CM Chen, CM Hong, HC Chuang, Efficient auto-focus algorithm utilizing discrete difference equation prediction model for digital still cameras. IEEE Trans. Consum. Electron. 52(4), 1135–1143 (2006)
11. CT Liu, ZX He, Y Zhan, HC Li, Searching algorithm of theodolite autofocusing based on compound focal judgment. EURASIP J. Wirel. Commun. Netw. (1), 111 (2014)
12. YL Xiong, SA Shafer, Depth from focusing and defocusing, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (1993), pp. 68–73
13. N Kehtarnavaz, H Oh, Development and real-time implementation of a rule-based auto-focus algorithm. Real-Time Imaging 9, 197–203 (2003)
14. Y Yao, B Abidi, N Doggaz, M Abidi, Evaluation of sharpness measures and search algorithms for the auto-focusing of high-magnification images. Proc. SPIE 6246, Visual Information Processing XV, 62460G (2006). doi:10.1117/12.664751
15. J He, R Zhou, Z Hong, Modified fast climbing search auto-focus algorithm with adaptive step size searching technique for digital camera. IEEE Trans. Consum. Electron. 49(2), 257–262 (2003)
16. EP Krotkov, Active Computer Vision by Cooperative Focus and Stereo (Springer-Verlag, New York, 1989)
17. S Alsharhan, F Karray, W Gueaieb, O Basir, Fuzzy entropy: a brief survey, in 2001 IEEE International Fuzzy Systems Conference, pp. 1135–1139
18. M Gamadia, N Kehtarnavaz, A filter-switching auto-focus framework for consumer camera imaging systems. IEEE Trans. Consum. Electron. 58(2), 228–236 (2012)
19. D Florian, H Kock, K Plankensteiner, M Glavanovics, Auto focus and image registration techniques for infrared imaging of microelectronic devices. Meas. Sci. Technol. 24, 074020 (2013)
20. J Wang, H Chen, G Zhou, T An, An improved Brenner algorithm for image definition criterion. Acta Photonica Sinica 41(7), 855–858 (2012)