
An image auto-focusing algorithm for industrial image measurement

Abstract

Auto-focusing, which automatically obtains the best image focus, plays an important role in improving image definition for industrial image measurement applications. Image-based auto-focusing is one of the widely used methods for this task because of its fast response, convenience, and intelligence. In general, an image-based auto-focusing algorithm consists of two important steps: image definition evaluation and the search strategy. In this paper, we develop an image auto-focusing algorithm for industrial image measurement. First, we propose a new image definition evaluation method based on fuzzy entropy, which can reduce the negative effects of noise and of variations in light intensity and lens magnification. Second, a combined search method is proposed that couples a multi-scale global search with fine-level curve fitting, which can avoid the disturbance of local peaks and obtain the best image focus. The proposed algorithm offers high focusing accuracy, high repeatability, and stability under variations of lens magnification and light intensity index, which makes it applicable to industrial image measurement. Experimental results and comparisons on a practical industrial image measurement system are presented to show the effectiveness and superiority of the proposed algorithm.

1 Introduction

With the rapid development and wide application of computer and image processing technologies, image-based non-contact measurement has been widely used in numerous fields from industrial quality inspection and robotics to medicine and biology, because it is fast, convenient, and intelligent. The main task of industrial image measurement is to calculate the dimensions of workpieces from images captured by a CCD camera, so acquiring a high-definition image is the most important step. Auto-focusing, which automatically obtains the best image focus, plays an important role in improving the image definition for this application. Many methods have been proposed for auto-focusing [1, 2]. Image-based auto-focusing, which relies on evaluating the image definition and searching for the best focus, is one of the widely used methods because it is non-contact, convenient, and intelligent; it also avoids the product distortion caused by contact measurement with calipers or coordinate measuring machines for workpieces such as plastic boxes and cellphone films. In general, an image-based auto-focusing algorithm consists of two important steps: image definition evaluation and the search strategy. First, an appropriate evaluation method is applied to calculate the definitions of the images captured at different focal positions of the workpiece. Based on the evaluation results, a search method is then applied to obtain the position of the best focus, to which the CCD lens is driven. Image-based auto-focusing usually requires high focusing accuracy, i.e., the obtained position of the best focus should be as close to the real value as possible. In addition, the result should be robust to noise and to variations of imaging conditions such as light intensity and lens magnification, i.e., the best focusing positions obtained over multiple runs should be consistent with only small variation.

Image-based auto-focusing has been widely studied because of its non-contact nature, fast response, and convenience. Most studies aim to improve the focusing accuracy, reduce the computation time, and enhance the robustness to noise and variations, from two perspectives: the image definition evaluation and the focus search algorithm [3, 4]. Image definition evaluation, which measures the image quality (definition), is one of the important steps in image-based auto-focusing, and many evaluation methods have been proposed in the literature. R. Redondo et al. compared and analyzed sixteen definition evaluation functions in terms of computation cost, accuracy, and the effects of noise and lighting through contrast tests on cell tissue images [5]. A. Akiyama et al. proposed a definition evaluation function based on the Daubechies wavelet transform, which applies four weightings to the decomposed frequency bands and was used in an uncooled infrared camera [6]. To address the problems of low image contrast and a flat definition evaluation curve under low light, M. Gamadia et al. adopted an image enhancement method to increase the contrast of the image and then designed a corresponding focusing evaluation function to measure it [7]. Makkapati presented an improved wavelet-based image auto-focusing method for blood smears under a microscope [8]. In this method, the red blood cell images are first segmented by thresholding the green component of the image, and the wavelet-based focus measure is evaluated on the segmented images; it produces a smooth definition evaluation curve without undulations, which improves the auto-focusing accuracy. To realize auto-focusing in an aerial push-broom remote-sensing camera, Lu et al. proposed an image definition evaluation function based on the line spread function (LSF) [9]. This method first computes the edge spread function (ESF) by searching for the blade edge among the edges detected in the image; second, the pixel-level LSF is obtained by differentiating the ESF; third, the sub-pixel-level LSF is obtained by least-squares curve fitting of the LSF; finally, the standard deviation parameter σ of the LSF is used as the image definition measure. Although these methods perform well in their specific areas, most of them are easily affected by noise and by variations of lens magnification and light intensity index, which is an important problem for industrial image measurement.

After evaluation of the image definition or quality, a search method is usually applied to find the best focusing position for image capture [10]. The most straightforward method is the global search, which obtains the optimal position by scanning through all possible focus positions in a unidirectional manner; it has a high computation cost and is only applicable to cases with a narrow focus range. Various search strategies have been proposed to speed up the search and improve the accuracy [11–13]. Most of them fall into three categories: binary search (BS), Fibonacci search (FS), and rule-based search (RS) [14]. In the binary search, the image definition evaluation function is computed at two locations and their difference is taken; if the difference is negative, the next move follows the opposite direction. Given a unimodal definition evaluation curve, the binary search converges to the best focus location quickly, provided the step magnitude is chosen heuristically. The hill-climbing search is a popular strategy developed from the basic binary search, which divides the search into two stages: out-of-focus region searching and focused region searching [15]. It has been improved with modifications to the step-size selection, termination criteria, search window, etc. [11, 15]. The Fibonacci search is a well-known strategy that narrows the search interval until its size equals a given fraction of the initial search range [16]; it was further improved by combining it with curve fitting to avoid local optimum disturbance [12]. The rule-based search is a sequential method that adjusts the search step size according to the distance from the best focus position [13]; it avoids the back-and-forth motor motion of the Fibonacci search. In [11], Liu et al. proposed a search method that combines the image definition evaluation function with a modulation transfer function (MTF) auxiliary function to judge the search direction. Although search methods have been widely studied, obtaining the best focusing position with high accuracy and low computation cost remains a challenging problem.

In this work, we develop an image auto-focusing algorithm for industrial image measurement. First, we propose a new image definition evaluation method which uses fuzzy entropy to measure the uncertainty degree of the image and evaluates the image definition from a statistical analysis of the gray level differences. Second, a combined search strategy is proposed that uses a multi-scale global search and fine-level curve fitting to obtain the best image focus. The traditional global search, which scans through all possible focus positions with a constant step size, has two major limitations: high computational cost and suitability only for a narrow focus range. In contrast, our method applies a multi-scale global search, which can also handle images without a narrow focus range. A large-scale search is performed with a large step size to find the point with maximum definition; the search region is then narrowed to the neighborhood centered at the previously found point, and the step size is reduced for the next, finer search. This process is iterated until the difference between the points found in two successive searches is smaller than a threshold. In this way, the search is accelerated without sacrificing accuracy. The multi-scale global search narrows the search range at the coarse level, while curve fitting is applied at the fine level to obtain the optimal position of the best focus rather than simply the sampled position with maximum image definition. Finally, the proposed image auto-focusing algorithm is tested on a practical industrial image measurement system to show its performance.

The rest of this paper is organized as follows. Section 2 presents the proposed image auto-focusing algorithm in detail. In Section 3, experimental results and comparisons are presented to show its effectiveness. Finally, the paper is concluded in Section 4.

2 The proposed image auto-focusing algorithm

In this work, we have developed an image auto-focusing algorithm for industrial image measurement that consists of two main processing steps: evaluation of the image definition and search of the best focus. These steps are described in detail in the following subsections.

2.1 Image definition evaluation based on fuzzy entropy

Although many methods have been proposed for evaluating image definition, most are sensitive to noise and to variations of lens magnification and lighting, and may produce local peaks in the image definition values during the auto-focusing process. To address these problems, we define the image definition evaluation function based on fuzzy entropy, which was introduced to measure the fuzziness of a fuzzy set [17]. Fuzzy entropy is the entropy of a fuzzy set, loosely representing its information of uncertainty. Fuzzy set theory was developed to mimic the capability of human reasoning and to design systems that can effectively deal with complex processes [17]. By definition, a fuzzy set contains elements with varying membership degrees. Unlike classical (crisp) sets, in which elements have full membership (i.e., membership 1), the elements of a fuzzy set are mapped to a universe of membership values by a membership function, which maps each element to a real value in the interval [0, 1]. Fuzzy set theory is therefore useful for modeling complex and imprecise systems. To model the complex imaging process, which is affected by noise and by variations of lens magnification and lighting, we use fuzzy entropy to measure the uncertainty degree of the image. The processing steps of the proposed image definition evaluation method are as follows:

  • Step 1. Definition of fuzzy entropy

Given an image of size m × n pixels with N gray levels, the gray level of each pixel (i, j) is denoted as f(i, j) and is normalized to the range [0, 1]. The purpose of normalization is to reduce the effect of the lighting condition on the definition evaluation. Different from the Shannon entropy, which measures randomness (probabilistic) uncertainty, the fuzzy entropy captures vagueness and ambiguity uncertainties; it is therefore defined on the basis of a membership function. Let the fuzzy set A be the set of image gray levels. Given the gray level f(i, j) of pixel (i, j), the membership function of the image fuzzy set A is defined as:

$$ \mu_k\left(f(i,j)\right)=\frac{1}{1+\left|f(i,j)-k\right|} $$
(1)

where k is a parameter in [0, 1] and μ k (f(i, j)) ∈ [0.5, 1]. From Eq. (1), μ k (f(i, j)) reaches its maximum of 1 when |f(i, j) − k| = 0 (i.e., f(i, j) = k), and it decreases toward its minimum of 0.5 as |f(i, j) − k| increases to its maximum of 1. If the image gray level f(i, j) decreases or increases away from k, the membership value μ k decreases symmetrically. Thus, μ k (f(i, j)) ranges between 0.5 and 1, which satisfies the requirement that a membership value must lie in [0, 1]. For the evaluation of image definition, the membership function can be used to measure the membership of the gray level difference between an image pixel and its neighbors, with k set to the gray level of the central pixel and f(i, j) being the gray levels of its neighbors. The membership is large if the gray level difference is small and vice versa.

According to information theory, entropy is a measure of information uncertainty. Fuzzy entropy is defined as a quantitative measure of the fuzzy information contained in a fuzzy set or fuzzy system [17]. In an image captured for dimensional measurement, the gray levels are affected by factors such as lens magnification, lighting, and noise, which introduce vagueness and ambiguity. In this work, fuzzy entropy is used to measure the uncertainties of the image gray levels. Given an image fuzzy set A and the membership function μ k of the image gray levels, the fuzzy entropy is computed as:

$$ E_k\left(\mu_k\left(f(i,j)\right)\right)=-\mu_k\left(f(i,j)\right)\,\log\left(\mu_k\left(f(i,j)\right)\right) $$
(2)

From Eq. (2), the fuzzy entropy reaches its minimum value when the membership function \( {\mu}_k \) attains its maximum of 1, i.e., at f(i, j) = k. The membership function and the fuzzy entropy are symmetric about their maximum value and minimum value, respectively.
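
To make Eqs. (1) and (2) concrete, the following is a minimal Python/NumPy sketch of the membership function and the per-pixel fuzzy entropy (the paper's implementation is in VC++; the function names and the choice of the natural logarithm are our assumptions):

```python
import numpy as np

def membership(f, k):
    """Eq. (1): membership of gray level f with respect to reference level k.
    With f and k normalized to [0, 1], the result lies in [0.5, 1]."""
    return 1.0 / (1.0 + np.abs(f - k))

def fuzzy_entropy(mu):
    """Eq. (2): fuzzy entropy of a membership value; zero when mu == 1 (perfect match)."""
    return -mu * np.log(mu)

# Identical gray levels give zero entropy; the largest possible difference gives the maximum.
print(fuzzy_entropy(membership(0.6, 0.6)))   # -0.0, i.e., zero entropy
print(fuzzy_entropy(membership(1.0, 0.0)))   # ~0.347 with the natural logarithm
```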

  • Step 2. Image definition evaluation

To evaluate the image definition, we first compute an image contrast map based on the fuzzy entropy defined above, because image contrast is closely related to image definition. Specifically, at each image pixel (i, j), we set the parameter k to its gray level f(i, j) and compute the memberships over its neighboring window of size w × w using Eq. (1). If the gray level differences between pixel (i, j) and its neighbors are small, the membership values are large and the fuzzy entropies are small. Thus, for an image pixel (i, j), the image contrast evaluation is computed from the fuzzy entropies on its neighboring window of w × w pixels as follows:

$$ m_k(i,j)=\frac{1}{w\times w}\sum_{m=-(w-1)/2}^{(w-1)/2}\ \sum_{n=-(w-1)/2}^{(w-1)/2}E_k\left(\mu_k\left(f(i+m,\,j+n)\right)\right) $$
(3)

where k = f(i, j). The window size w must be odd so that pixel (i, j) lies at the center of the neighboring window. The parameter w has an important effect on the contrast evaluation: a small w preserves sharpness but is sensitive to noise, while a large w suppresses noise better at the cost of sharpness. To balance this tradeoff, w is set to 5 in our experiments. When the central pixel (i, j) of the w × w window is an edge point, the gray level differences are large and the contrast value m k (i, j) is large; otherwise, m k (i, j) is small in homogeneous areas. Thus, the contrast evaluation m k (i, j) increases monotonically as the gray level differences between the central pixel and its neighbors increase. For a captured image of size m × n, we compute the contrast values of all pixels and obtain the image contrast map [m k (i, j)] m × n .

Secondly, we compute the image definition measure from the image contrast map. To reduce the computation cost, the sum of the contrast values m k (i, j) over the focusing window W is taken as the image definition measure:

$$ F=\sum_{(i,j)\in W}m_k(i,j) $$
(4)
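
As a rough illustration of how Eqs. (1)–(4) combine into a single focus measure, the sketch below computes the contrast map m k (i, j) over a w × w neighborhood and sums it over a focusing window W. The natural logarithm, the edge padding at the image border, and the helper name are our assumptions, not details given in the paper:

```python
import numpy as np

def definition_measure(img, w=5, window=None):
    """Fuzzy-entropy definition measure: Eq. (3) per pixel, Eq. (4) summed over W.

    img    : 2-D array of gray levels, normalized to [0, 1]
    w      : odd neighborhood size (the paper uses w = 5)
    window : (i0, i1, j0, j1) focusing window; defaults to the whole image
    """
    assert w % 2 == 1, "w must be odd so the pixel sits at the window center"
    img = np.asarray(img, dtype=float)
    r = w // 2
    padded = np.pad(img, r, mode="edge")          # border handling is our assumption
    contrast = np.zeros(img.shape, dtype=float)
    for m in range(-r, r + 1):
        for n in range(-r, r + 1):
            neighbor = padded[r + m : r + m + img.shape[0],
                              r + n : r + n + img.shape[1]]   # f(i+m, j+n)
            mu = 1.0 / (1.0 + np.abs(neighbor - img))         # Eq. (1), k = f(i, j)
            contrast += -mu * np.log(mu)                       # Eq. (2)
    contrast /= w * w                                          # Eq. (3)
    if window is None:
        return contrast.sum()                                  # Eq. (4), W = whole image
    i0, i1, j0, j1 = window
    return contrast[i0:i1, j0:j1].sum()
```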

To show the effectiveness of the proposed image definition evaluation, we test it on images of a brake pad workpiece, as shown in Fig. 1a. The image was captured under natural light, and the lug boss marked with the red circle is the focusing target in our experiments. Figure 1b and c shows the focusing and defocusing images of the lug boss extracted from the brake pad, respectively. In our experiments, we first fix the distance between the CCD lens and the brake pad and then adjust the focusing distance with a constant step to capture two sets of images, from low definition to high definition and then from high back to low definition. To obtain the best images, the zoom lens of the camera and the light intensity of the illumination were set to appropriate values, and the images were cropped to 320 × 320 pixels with the lug boss centered.

Fig. 1

The tested workpiece: a picture of the brake pad, b focusing image, and c defocusing image

We apply the proposed definition evaluation method to compute the definition values of these captured images. For comparison, we also use the Roberts, Laplacian, Brenner, and Tenengrad definition evaluation functions [20] to compute the definition values of the same images. Figure 2 shows the resulting curves of image definition values, from low to high and back from high to low, for the different methods. From this figure, we can see that the curve produced by our proposed method has only one peak and the peak is narrow, which facilitates the subsequent search for the best focus. Although the curves of the other methods also have no local peaks, their peaks are flat and the definition values of different images are similar, which may lead to false detection of the best focus in the subsequent search steps. Our proposed definition evaluation function satisfies the image definition evaluation demands of industrial image measurement.
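
For reference, two of the classical baselines compared in Fig. 2 can be written compactly as below; these are the standard textbook forms of the Brenner and Tenengrad measures rather than the exact formulations of [20]:

```python
import numpy as np

def brenner(img):
    """Brenner focus measure: sum of squared two-pixel horizontal differences."""
    img = np.asarray(img, dtype=float)
    d = img[:, 2:] - img[:, :-2]
    return np.sum(d * d)

def tenengrad(img, threshold=0.0):
    """Tenengrad focus measure: sum of squared Sobel gradient magnitudes above a threshold."""
    img = np.asarray(img, dtype=float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    # plain valid-region correlation, to keep the sketch dependency-free
    gx = sum(kx[a, b] * img[a:h - 2 + a, b:w - 2 + b] for a in range(3) for b in range(3))
    gy = sum(ky[a, b] * img[a:h - 2 + a, b:w - 2 + b] for a in range(3) for b in range(3))
    g2 = gx * gx + gy * gy
    return np.sum(g2[g2 > threshold])
```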

Fig. 2

Definition evaluations of the focusing and defocusing images with different methods

Compared to traditional methods, the proposed fuzzy-entropy-based image definition measure has two important advantages. First, the membership function μ k (f(i, j)) is defined on the local differences of the normalized gray values, which reduces the negative effect of lighting fluctuations. Second, the image contrast evaluation m k (i, j) in Eq. (3) is computed by averaging the entropies of the membership values over a local neighborhood, which reduces the effect of noise. Specifically, compared to the wavelet-based methods [7, 8], which are computationally expensive because of the decomposition of the image into frequency bands and the transform computation, the proposed measure is more efficient and more applicable to a practical image measurement system. Different from the LSF method [9] based on line detection, which is effective only for images containing lines and is sensitive to noise in edge detection, the proposed measure, obtained by averaging fuzzy entropies in a local neighborhood, can be used on various types of images and also reduces the effects of environmental factors such as lens magnification, lighting conditions, and noise. Thus, local peaks of the image definition values are effectively suppressed, which facilitates the search for the best focus. In addition, compared to other edge-based methods such as the Roberts and Tenengrad functions [20], the proposed measure handles different types of edges in the same way and avoids the difficulty of distinguishing step and line edges, producing a reasonable measurement of image definition.

2.2 The combined search method

After the image definition evaluation, the next important step is to apply a search strategy to find the best imaging focus. Two important problems affect the accuracy of the best focus in most existing search methods [18, 1]. The first is that the obtained optimum may be a local maximum rather than the global one, so the obtained focus is not the best one. The second is that even the position with the global maximum image definition may not be the best focusing position, because of the sampling rate and the different starting points. Figure 3 illustrates these problems. Figure 3a shows a curve of sampled image definition values during auto-focusing of the brake pad image shown in Fig. 1; there are two local maxima caused by lighting changes. The traditional hill-climbing method may settle on the position of a local maximum, marked by the red point in Fig. 3b. In addition, even if the search finds the global maximum, most existing methods directly treat that position as the best one. However, since the images are captured at a certain step, the image at the actual best focus may not be captured, owing to the sampling distance and the starting point of the search; thus, the position with the global maximum definition may not be the best focusing position. Figure 3c shows that the position of maximum image definition detected by the global search (the black point) is not the best focusing position.

Fig. 3

Comparison of the hill-climbing search method and the global search method: a the curve of image definition values under light change and its optimal peaks obtained by b the hill-climbing search method, and c the global search method

To address the above problems, we propose a search method that combines a global search with fine-level curve fitting. First, the global search is used to find the peak point with the maximum image definition evaluation. The computation cost is very high if the image sampling rate is high and the search step is small. To reduce the computation cost and accelerate the search for real-time application, we apply multiple global searches from coarse to fine scale, gradually decreasing the step size and narrowing the search area. A large step size is used for the coarse-scale global search of the maximum image definition, and the search area is then reduced to a smaller region around the obtained optimal point for a finer global search with a smaller step size. This process is repeated until the globally optimal sampling point with the maximum image definition evaluation is obtained.

Secondly, to find the true point of the best focus, curve fitting is applied over a small region around the optimum obtained by the global search, and the peak of the fitted curve is taken as the final point of the best focus. During the combined search, each finer search area is symmetric and centered at the optimal point obtained in the previous search, to ensure that it covers the correct focus. In our implementation, the optimal point from the previous search is set as the center of the next search area, and the width of the search range is the larger of the two distances from the optimal point to the starting and ending points of the previous search. In this way, the combined search method not only achieves high accuracy of the best focus but also has a low computation cost for a real-time industrial measurement system. The detailed processing steps of the proposed combined search method are as follows.

  • Step 1: A coarse global search with a large step size is applied to find the position with the maximum image definition value. Let \( {P}_0^1 \) denote the start point, \( {P}_n^1 \) the end point, and m 1 the step size. The size of the search area is computed as \( {D}_1={P}_n^1-{P}_0^1 \). The CCD lens is driven to scan the focusing range at steps of m 1; an image is captured at each step position and its definition evaluation value is computed. The set of image definition values is denoted as \( \left\{F\left({P}_0^1\right),\ F\left({P}_1^1\right),\dots, F\left({P}_n^1\right)\right\} \). The global search then finds the optimal position, denoted \( {P}_m^1 \), with the maximum definition value.

  • Step 2: The distances from the optimal position \( {P}_m^t \) to the start point \( {P}_0^t \) and to the end point \( {P}_n^t \) of the previous step t (t ≥ 1) are computed as \( \left|{P}_m^t-{P}_0^t\right| \) and \( \left|{P}_m^t-{P}_n^t\right| \), respectively. The larger of the two is \( {D}_t= \max \left(\left|{P}_m^t-{P}_0^t\right|,\left|{P}_m^t-{P}_n^t\right|\right) \). The next search area is defined as \( \left[{P}_m^t-\frac{D_t}{2},\ {P}_m^t+\frac{D_t}{2}\right] \), and the search step size is updated as \( {m}_{t+1}=\frac{D_t}{D_{t-1}}\times {m}_t \). The search then proceeds as in Step 1: the image definition value is calculated at each search step, and the position with the maximum definition value is obtained as \( {P}_m^{t+1} \).

  • Step 3: We compute the distance between the current optimal position and the previous one, i.e., \( \left|{P}_m^{t+1}-{P}_m^t\right| \). If it is not larger than a threshold \( \sigma \), i.e., \( \left|{P}_m^{t+1}-{P}_m^t\right|\le \sigma \), go to Step 4; otherwise, return to Step 2 for a new search with a decreased step size. The threshold \( \sigma \) is set according to the accuracy and speed requirements of the practical application.

  • Step 4: Let P m be the final peak position with the maximum definition value obtained by the global search above. To obtain the actual optimal position of the best focus, the curve fitting method is applied on a small range around P m. We fit the image definition values against the search positions with a quadratic polynomial equation as follows:

    $$ y=a+bx+c{x}^2 $$
    (5)

    where x denotes the search position, y denotes the image definition value, and a, b, and c are the parameters of the quadratic polynomial to be estimated. The least-squares method yields the normal equations for the curve fitting:

    $$ \left[\begin{array}{ccc} n & \sum_{k=1}^n x_k & \sum_{k=1}^n x_k^2 \\ \sum_{k=1}^n x_k & \sum_{k=1}^n x_k^2 & \sum_{k=1}^n x_k^3 \\ \sum_{k=1}^n x_k^2 & \sum_{k=1}^n x_k^3 & \sum_{k=1}^n x_k^4 \end{array}\right]\left[\begin{array}{c} a \\ b \\ c \end{array}\right]=\left[\begin{array}{c} \sum_{k=1}^n y_k \\ \sum_{k=1}^n y_k x_k \\ \sum_{k=1}^n y_k x_k^2 \end{array}\right] $$
    (6)

    Finally, the curve peak position P f , which is considered as the final position of the best focus, is calculated as:

    $$ {P}_f=-\frac{b}{2\mathrm{c}} $$
    (7)
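
The sketch below summarizes Steps 1–4 under our own simplifying assumptions: the lens and definition evaluation are abstracted into a callable measure(position) that returns F at a given lens position, a fixed number of samples per scale stands in for the explicit step-size rule, and numpy.polyfit replaces the normal equations of Eq. (6). It illustrates the combined strategy rather than reproducing the authors' motion-control code:

```python
import numpy as np

def combined_search(measure, start, end, n_steps=10, sigma=0.01):
    """Coarse-to-fine global search (Steps 1-3) followed by quadratic fitting (Step 4).

    measure : callable(position) -> image definition value F at that lens position
    start, end : initial search range [P0, Pn]
    n_steps : number of steps per scale (sets the effective step size m_t)
    sigma : stop refining once successive optima differ by no more than this
    """
    lo, hi = float(start), float(end)
    prev_best = None
    while True:
        positions = np.linspace(lo, hi, n_steps + 1)
        values = np.array([measure(p) for p in positions])
        best = positions[np.argmax(values)]            # sampled position with maximum F
        if prev_best is not None and abs(best - prev_best) <= sigma:
            break                                       # Step 3: converged
        # Step 2: recenter a smaller, symmetric range on the current optimum
        half_width = max(best - lo, hi - best) / 2.0
        lo, hi, prev_best = best - half_width, best + half_width, best

    # Step 4: fit F(x) = a + b*x + c*x^2 near the optimum and return the vertex
    step = positions[1] - positions[0]
    near = np.abs(positions - best) <= 3 * step
    c, b, _a = np.polyfit(positions[near], values[near], 2)   # highest degree first
    return -b / (2.0 * c)                                      # Eq. (7)
```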

We test the proposed search method on the image auto-focusing of the brake pad shown in Fig. 1. Before the experiments, the actual CCD focusing position was adjusted to −74.6990 mm. The proposed fuzzy-entropy definition evaluation is used to calculate the image definition values. Figure 4 shows the intermediate and final results of our proposed search method, which combines the global search and curve fitting. From this figure, we can see that the proposed search method successfully avoids the disturbance of the local peak when searching for the correct focusing position. In addition, the optimal position obtained by the global search, with gradually narrowing range and decreasing step size, moves progressively closer to the actual best focusing position. However, this position is still affected by the starting position and the step size and may not be the actual best focusing position. By adding curve fitting at the fine level, the proposed method becomes more robust to the starting position and the sampling rate and improves the auto-focusing accuracy and stability (|P − P3| > |P − P4|).

Fig. 4

The proposed search method: a the intermediate results of our proposed search method by multi-scale global search and b the final result of our proposed search method

Our proposed search method combines the multi-scale global search and curve fitting, which not only achieves high focusing accuracy but also has a low computation cost for a real-time industrial measurement system. The traditional global search finds the optimal position by scanning through all possible focus positions in a unidirectional manner, which results in a high computation cost. In contrast, our method reduces the computation cost through a coarse-to-fine search strategy with a varying step size. Compared to the hill-climbing method, which is sensitive to local optimum disturbance, our method avoids the local optimum and finds the global point with the maximum image definition. The traditional Fibonacci method iteratively reduces the search area, centering it on the previously obtained point of maximum definition and roughly halving it in each iteration; however, it requires back-and-forth motor motion and is also sensitive to local optimum disturbance. The rule-based method is a sequential search that adjusts the step size according to the distance from the best focus position; it avoids the back-and-forth motion of the Fibonacci search but is sensitive to variations of lens magnification and lighting conditions. Our method takes advantage of both the Fibonacci and rule-based searches through its multi-scale global search with a varying step size, which avoids local optimum disturbance while keeping the computation cost low and the robustness to variations high. In addition, the image at the real best focus may not be captured because of the sampling distance and the starting point of the search; in this case, the existing methods above cannot detect the real best focusing position. Our method detects it by further applying curve fitting, which improves the focusing accuracy. Thus, compared to the existing methods, our proposed method avoids local optimum disturbance and achieves high focusing accuracy without sacrificing computational cost, which makes it applicable to a practical image measurement system.

3 Experimental results and analysis

In this section, we present experiments to test the performance of the proposed image auto-focusing algorithm. First, we introduce the test platform (its hardware and software), an industrial image measurement system. Second, the characteristics of the experimental objects, including the brake pad and the bearing, are introduced. Third, the proposed image auto-focusing algorithm is tested on this platform with these objects to show its effectiveness.

In an industrial image measurement system, accuracy, repeatability, and stability are the three important performance measures used for evaluation in practical applications, and they are computed here to evaluate the performance of the image auto-focusing algorithms. Let {p 1, p 2, …, p n } denote a set of measured position values obtained from n random runs of an algorithm. First, the root-mean-square error, defined as the square root of the arithmetic mean of the squared errors, is computed to evaluate the focusing accuracy. The root-mean-square error (RMSE) is computed as follows:

$$ \mathrm{RMSE}=\sqrt{\frac{{\left({p}_1-{p}_0\right)}^2+{\left({p}_2-{p}_0\right)}^2+\dots +{\left({p}_n-{p}_0\right)}^2}{n}} $$
(8)

where p 0 denotes the real value; a smaller RMSE means better auto-focusing accuracy. Second, the standard deviation, which quantifies the amount of variation or dispersion of a set of values, is computed to evaluate the repeatability of the auto-focusing algorithm. The standard deviation (SD) is computed as follows:

$$ \mathrm{S}\mathrm{D}=\sqrt{\frac{{\left({p}_1-\mu \right)}^2+{\left({p}_2-\mu \right)}^2+\dots +{\left({p}_n-\mu \right)}^2}{n}} $$
(9)

where μ denotes the mean of {p 1, p 2, …, p n }; a smaller SD indicates better repeatability of auto-focusing. Third, the tolerance, which represents the maximum variation among the observed values, is computed to evaluate the stability of the auto-focusing algorithm. The tolerance is computed as follows:

$$ T = \max \left({p}_1,{p}_2,\dots, {p}_n\right) - \min \left({p}_1,{p}_2,\dots, {p}_n\right) $$
(10)

From Eq. (10), a smaller T indicates better stability of auto-focusing. In summary, RMSE expresses the deviation of the measured values from the real value, SD expresses the dispersion among the measured values, and T expresses the maximum difference among them. In the following experiments, these three measures are computed to evaluate the performance of the image auto-focusing algorithms.
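
The three performance measures of Eqs. (8)–(10) reduce to a few NumPy lines; the helper below (our naming) assumes the measured positions from the repeated runs are collected in a one-dimensional array:

```python
import numpy as np

def focusing_metrics(measured, real_value):
    """Accuracy (RMSE, Eq. 8), repeatability (SD, Eq. 9), and stability (T, Eq. 10)."""
    p = np.asarray(measured, dtype=float)
    rmse = np.sqrt(np.mean((p - real_value) ** 2))
    sd = np.sqrt(np.mean((p - p.mean()) ** 2))   # population form, matching Eq. (9)
    tol = p.max() - p.min()
    return rmse, sd, tol

# Usage with hypothetical values from repeated runs against a known reference position:
# rmse, sd, tol = focusing_metrics([-74.70, -74.69, -74.71], -74.6990)
```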

In a practical image measurement system, the image auto-focusing is easily affected by environmental factors, especially noise, lens magnification, and light intensity index, so we also test the robustness of the proposed algorithm to variations of lens magnification and light intensity. In addition, we compare our proposed algorithm with three image auto-focusing algorithms published in the literature [11, 13, 19], denoted as AF1, AF2, and AF3, respectively; our proposed algorithm is denoted as AF4 in the following subsections. The first experiment is conducted in a relatively ideal environment with the optimal lens magnification and light intensity index. Since fluctuations of lens magnification and light intensity index often occur in practical image measurement and affect the auto-focusing performance, we then conduct experiments with variations of lens magnification and light intensity index.

3.1 The test platform of an industrial image measurement system

Generally, an industrial image measurement system consists of both hardware and software. Figure 5 shows the diagram of the test image-based measurement system. The hardware of the test platform, shown in Fig. 6a, is composed of the illumination lighting; image capture devices such as the camera, video card, counter card, and lens; the motion controller; the electronic grating ruler; etc. In our test platform, an eight-zone LED cold light source controlled by the computer is used for illumination, with the light intensity index varying from 0 to 200 (0 being the weakest and 200 the strongest). An American TEO 700-line high-definition color camera with a continuous zoom lens is used to capture the images, with magnification ranging from 0.7 to 4.5. The drive system of the motion controller adopts a high-precision toothless light bar driven by the motor, and the guide rail is a HWIN P micron-order linearity rail. An electronic grating ruler with a resolution of 1 μm is used to obtain the x, y, and z coordinates of the measured workpiece. The software, shown in Fig. 6b and developed by our research team, implements the image processing techniques, the image auto-focusing algorithm, and the workpiece measurement algorithm.

Fig. 5

The diagram of the image-based measurement system

Fig. 6

Test system: a image test platform and b software interface

The image-based measurement system works as follows. First, the lens magnification and light intensity index are set to their optimal values for image capture. Second, the proposed auto-focusing algorithm is applied to find the position of the best focus. Third, the lens is driven to the optimal focus position by the motor, and the images at the best focus are captured for measurement. Fourth, the image is processed and the height of the workpiece is calculated. Finally, the result is recorded after the software corrects the grating-ruler and verticality errors.

3.2 Experimental objects and determination of experimental parameters

To verify the effectiveness and superiority of the proposed algorithm, two experimental workpieces with different characteristics were used: the brake pad shown in Fig. 1 and the bearing shown in Fig. 7. In addition, one big and one small standard cuboid workpiece, shown in Fig. 8, were tested with our proposed algorithm. Figure 8b and c shows the focusing and defocusing images of the big standard cuboid, respectively, while Fig. 8e and f shows the focusing and defocusing images of the small standard cuboid.

Fig. 7

The tested workpiece: a picture of the bearing, b focusing image, and c defocusing image

Fig. 8

The tested workpiece: a picture of the big standard cuboid, b focusing image and c defocusing image of the big standard cuboid, d picture of the small standard cuboid, e focusing image, and f defocusing image of the small standard cuboid

The brake pad is a planar body with a slightly rough surface, and the bearing is a three-dimensional workpiece with a smooth surface. The purpose of the image auto-focusing experiments is to calculate the heights of the workpieces. Theoretically, the measured height values are most accurate when the measured image is captured at the position of the best focus. The real heights of the brake pad and the bearing are 67.560 and 26.234 mm, respectively, while the real heights of the big and small standard cuboids are 60 and 30 mm, respectively.

3.3 Experiment with the optimal lens magnification and light intensity index

This experiment tests the effectiveness of the proposed image auto-focusing algorithm under the optimal lens magnification and light intensity index. Our proposed algorithm is tested on an image-based measurement system composed of hardware such as the camera, camera lens, and light source; the optimal lens magnification and light intensity differ between hardware setups. Before testing the auto-focusing algorithm, four groups of calibration experiments were performed on the brake pad with lens magnification/light intensity index pairs of 1/10, 2/20, 3/35, and 4/55 to determine the optimal settings for the image-based measurement system. Each group of experiments was run ten times with a window of 320 × 320 pixels, a focusing motion distance of 1 mm, and a speed of 1.5 mm/s. Figure 9 shows the experimental results; the minimum measurement error is achieved when the lens magnification and corresponding light intensity index are 3 and 35, respectively. Thus, the optimal lens magnification and light intensity index are set to 3 and 35, respectively, in this experiment.

Fig. 9

Measurement parameter definition experiment results

The image auto-focusing process usually consists of the following main steps: image acquisition, image definition evaluation, search of the best focus, motor control, and motor running and delay to drive the lens to the optimal position. Taking the brake pad as an example, Fig. 10 shows a sequence of images captured during the auto-focusing process. From this figure, we can see that the captured image has high definition after image auto-focusing.

Fig. 10

Image sequences of the workpiece: a defocusing, b far-focusing, c near-focusing, and d focusing

The proposed image auto-focusing algorithm is randomly run ten times on each experimental workpiece to obtain ten optimal positions. Figure 11 compares, for each workpiece, the ten measured size values obtained with the proposed algorithm and with the other three algorithms. From this figure, we can see that for each workpiece, the measured values obtained by the proposed algorithm (denoted by AF4) are the closest to the real value and also have the smallest deviations compared with those obtained by the other three algorithms.

Fig. 11

Comparison of the measured values obtained by different algorithms randomly run ten times with the optimal lens magnification and light intensity index: a brake pad; b bearing; c big standard cuboid; and d small standard cuboid

In addition, we calculate the three statistical performance measures for each workpiece, i.e., the root-mean-square error, the standard deviation, and the tolerance, of the different auto-focusing algorithms, as shown in Table 1. From these results, we can see that for these workpieces, our proposed image auto-focusing algorithm (denoted by AF4) not only has higher focusing accuracy (lower RMSE) but also better repeatability and stability (lower SD and T) than the other three algorithms.

Table 1 Performance comparison of different image auto-focusing algorithms under the optimal conditions for different workpieces

3.4 Experiment with variation of lens magnification

This experiment tests the image auto-focusing algorithms under variation of lens magnification. In a practical image measurement system, the lens magnification may be affected by factors such as vibration and noise, which degrades the auto-focusing performance. In our experiments, the lens magnification increases from 2.5 to 3.5 in steps of 0.1, and the light intensity index is fixed at its optimal value of 35. Each image auto-focusing algorithm is run eleven times (once per magnification value) to obtain eleven measured size values for the brake pad and the bearing, respectively. Figure 12 compares the measured values obtained by the different algorithms for the brake pad and the bearing. From this figure, we can see that the optimal focusing positions obtained by the other three algorithms fluctuate strongly with the variation of lens magnification, while our proposed algorithm (denoted by AF4) still obtains stable and accurate focusing positions.

Fig. 12

Comparison of the measured values obtained by different algorithms randomly run eleven times with variation of lens magnification and the optimal light intensity index: a brake pad; b bearing; c big standard cuboid; and d small standard cuboid

We also compare the three statistical performance measures, i.e., the root-mean-square error, the standard deviation, and the tolerance, of the different auto-focusing algorithms, as shown in Table 2. From these results, we can see that for the two workpieces, the three statistical measures of the other three algorithms increase greatly compared with those in Table 1; thus, their performance is degraded by the variation of lens magnification. In contrast, our proposed algorithm (denoted by AF4) still achieves low RMSE, SD, and T values. These results and comparisons demonstrate that our proposed image auto-focusing algorithm is more robust to the variation of lens magnification than the other three algorithms.

Table 2 Performance comparison of different image auto-focusing algorithms with variation of lens magnification

3.5 Experiment with variation of light intensity index

This experiment tests the image auto-focusing algorithms under variation of the light intensity index. In a practical image measurement system, the light intensity index may be affected by vibration and noise, which influences the image definition evaluation values and degrades the auto-focusing performance. From the above experiments, the optimal lens magnification and light intensity index are 3 and 35, respectively, for our test system. Thus, in this experiment, the light intensity index varies around its optimal value of 35, i.e., from 30 to 40 in steps of 1, while the lens magnification is fixed at its optimal value of 3. For comparison, each image auto-focusing algorithm is run eleven times to obtain eleven optimal focusing positions for the brake pad and the bearing. Figure 13 compares the measured values obtained by the different algorithms. From this figure, we can see that the measured values obtained by the other three algorithms fluctuate strongly with the variation of light intensity index, which is not acceptable in a practical image measurement system, while the values obtained by our proposed algorithm (denoted by AF4) are more stable and accurate.

Fig. 13

Comparison of the measured values obtained by different algorithms randomly run ten times with variation of light intensity index: a brake pad; b bearing; c big standard cuboid; and d small standard cuboid

Similarly, the three statistical performance measures, i.e., the root-mean-square error (RMSE), the standard deviation (SD), and the tolerance (T), are computed for the different image auto-focusing algorithms, as shown in Table 3. From these results, we can see that for the other three algorithms the three statistical measures are much larger than those obtained with the optimal light intensity index in Table 1, indicating that their performance is degraded by the variation of light intensity index. In contrast, our proposed algorithm (denoted by AF4) still achieves low RMSE, SD, and T values. These results and comparisons demonstrate that our proposed image auto-focusing algorithm is more robust to the variation of light intensity index than the other three algorithms, which is helpful for a practical image measurement system.

Table 3 Performance comparison of different image auto-focusing algorithms with variation of light intensity index

3.6 Computation complexity analysis and comparison

In this section, we analyze the computational complexity of the proposed image auto-focusing algorithm and compare it with the other three algorithms in [11, 13, 19]. The total auto-focusing process includes image acquisition, image processing, search of the best focus, control signal transmission, motor running and delay, etc. Our proposed algorithm was implemented in VC++ on a computer with a 3.1 GHz Intel Core i5-4440 CPU, 4 GB RAM, and 64-bit Windows 7, as shown in Fig. 6b. All experiments were performed for the image auto-focusing task on the brake pad workpiece. Our proposed algorithm takes 12.2 s for the auto-focusing task shown in Fig. 4. For a fair comparison, we also tested the other three algorithms on the same task on the same image measurement test platform; their computation times are 13.8, 11.5, and 11.9 s for the algorithms of [11, 13, 19], respectively. The search of the best focus accounts for most of the computation time. The algorithm in [11] uses the hill-climbing search to find the best focus, which is computationally expensive. The algorithm in [13] does not need the reciprocating motion of the motor and thus speeds up the focusing process, but it requires the search step size to be adjusted manually when the lens magnification and lighting change, which limits its adaptability in a practical and complex image measurement system. The algorithm in [19] has a computational cost similar to ours, but its focusing accuracy and stability are lower than those of our algorithm.

4 Conclusions

In this paper, we have proposed an image auto-focusing algorithm for industrial image measurement. First, a new method is proposed to model the complex imaging process and evaluate the image definition based on fuzzy entropy; it is robust to noise and to variations of lens magnification and lighting conditions. Second, we propose a search method that obtains the optimal position of the best focus by combining a multi-scale global search with fine-level curve fitting; this combined search achieves high accuracy of the best focus with a low computation cost. Finally, experimental results and comparisons show that the proposed algorithm achieves not only higher focusing accuracy but also better repeatability and stability under variations of lens magnification and light intensity index than the other existing algorithms, without sacrificing computation cost. These advantages make it applicable to practical industrial image measurement in a complex environment.

References

  1. M Moscaritolo, H Jampel, F Knezevich, R Zeimer, An image based auto-focusing algorithm for digital fundus photography. IEEE Trans. Med. Imaging 28(11), 1703–1707 (2009)


  2. X Wang, RC Sun, SY Xu, Autofocusing technique based on image processing for remote-sensing camera. Proc. of SPIE 6623, 66230F (2008)


  3. R Chen, PV Beek, Improving the accuracy and low-light performance of contrast-based autofocus using supervised machine learning. Pattern Recogn. Lett. 56, 30–37 (2015)


  4. M Weston, P Mudge, C Davis, A Peyton, Time efficient auto-focussing algorithms for ultrasonic inspection of dual-layered media using full matrix capture. NDT&E Intl. 47, 43–50 (2012)


  5. R Redondo, G Bueno, JC Valdivieze et al., Autofocus evaluation for brightfield microscopy pathology. J. Biomed. Opt. 17(3), 036008 (2012)


  6. A Akiyama, N Kobayashi, E Mutoh, H Kumagai, H Yamada, H Ishll, Infrared image guidance for ground vehicle based on fast wavelet image focusing and tracking. Proc. of SPIE 7429, 742906 (2009)


  7. M Gamadia, N Kehtarnavaz, KR Hoffman, Low-light auto-focus enhancement for digital and cell-phone camera image pipelines. IEEE Trans. Consum. Electron. 53(2), 249–257 (2007)


  8. V. V. Makkapati, in Improved wavelet-based microscope autofocusing for blood smears by using segmentation. 5th Annual IEEE Conference on Automation Science and Engineering, (2009), p. 208–211.

  9. ZH Lu, YF Guo, HF Li, YF Li, Auto-focus using LSF in aerial push-broom remote-sensing camera. Infrared Laser Eng. 41(7), 1808–1814 (2012)


  10. CM Chen, CM Hong, HC Chuang, Efficient auto-focus algorithm utilizing discrete difference equation prediction model for digital still cameras. IEEE Trans. Consum. Electron. 52(4), 1135–1143 (2006)


  11. CT Liu, ZX He, Y Zhan, HC Li, Searching algorithm of theodolite auto-focusing based on compound focal judgment. EURASIP J. Wirel. Commun. Netw. (1), 1-11 (2014)

  12. Y. L. Xiong, S. A. Shafer, in Depth from focusing and defocusing. Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (1993), p. 68–73

  13. N Kehtarnavaz, H Oh, Development and real-time implementation of a rule-based auto-focus algorithm. R. Time Imag. 9, 197–203 (2003)


  14. Y. Yao, B. Abidi, N. Doggaz, M. Abidi, Evaluation of sharpness measures and search algorithms for the auto-focusing of high magnification images. Proc. SPIE 6246, Visual Information Processing XV, 62460G (2006). doi:10.1117/12.664751

  15. J He, R Zhou, Z Hong, Modified fast climbing search auto-focus algorithm with adaptive step size searching technique for digital camera. IEEE Trans. on Consumer Electronics 49(2), 257–262 (2003)


  16. EP Krotkov, Active Computer Vision by Cooperative Focus and Stereo (Springer-Verlag, New York, 1989)


  17. S. Al-sharhan, F. Karray, W. Gueaieb, O. Basir, Fuzzy entropy: a brief survey, in 2001 IEEE International Fuzzy Systems Conference (2001), pp. 1135–1139

  18. M Gamadia, N Kehtarnavaz, A filter-switching auto-focus framework for consumer camera imaging systems. IEEE Trans. Consum. Electron. 58(2), 228–236 (2012)


  19. D Florian, H Kock, K Plankensteiner, M Glavanovics, Auto focus and image registration techniques for infrared imaging of microelectronic devices. Meas. Sci. Technol. 24, 074020 (2013)

  20. J Wang, H Chen, G Zhou, T An, An improved Brenner algorithm for image definition criterion. Acta. Photonica. Sinica 41(7), 855–858 (2012)


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 61375112 and No. 61070226).

Author information


Corresponding author

Correspondence to Manhua Liu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ information

Shu-xin Liu is a Ph.D candidate at the East China Normal University and a lecturer at the Minnan Normal University. He received his MCS degree in Computer Engineering from the East China Jiao Tong University in 2005. His current research interests include image processing, pattern recognition, and machine learning.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Liu, S., Liu, M. & Yang, Z. An image auto-focusing algorithm for industrial image measurement. EURASIP J. Adv. Signal Process. 2016, 70 (2016). https://doi.org/10.1186/s13634-016-0368-5


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s13634-016-0368-5

Keywords