
A novel segmentation method for uneven lighting image with noise injection based on non-local spatial information and intuitionistic fuzzy entropy

Abstract

Local thresholding methods for uneven lighting image segmentation suffer from two limitations: they are very sensitive to noise injection, and their performance relies largely upon the choice of the initial window size. This paper proposes a novel algorithm for segmenting uneven lighting images with strong noise injection based on non-local spatial information and intuitionistic fuzzy theory. We regard an image as a gray wave in three-dimensional space, composed of many peaks and troughs, and these peaks and troughs divide the image into many local sub-regions in different directions. Our algorithm computes the relative characteristic of each pixel located in the corresponding sub-region based on a fuzzy membership function and uses it to replace the pixel's absolute characteristic (its gray level) to reduce the influence of uneven light on image segmentation. At the same time, non-local adaptive spatial constraints of pixels are introduced to prevent noise from interfering with the search for local sub-regions and the computation of local characteristics. Moreover, edge information is taken into account to avoid false peak and trough labeling. Finally, a global method based on intuitionistic fuzzy entropy is employed on the wave transformation image to obtain the segmented result. Experiments on several test images show that the proposed method is excellent at decreasing the influence of uneven illumination and noise injection and behaves more robustly than several classical global and local thresholding methods.

1 Introduction

Image segmentation, which is the extraction of an object from the background in an image, is one of the essential techniques in image processing and computer vision [1, 2]. However, in some cases, undesired disturbances in the thresholding segmentation process may generate a false segmentation result. Uneven lighting, often introduced when an image is captured, is one of the leading disturbance sources that can affect the segmentation result. The primary causes of the disturbance of uneven illumination are (a) the scene cannot be optically isolated from the shadows of other objects, (b) the light may be unstable in some cases, and (c) the object is very large and thus creates an uneven light distribution [3].

Thresholding is a direct and effective technique for image segmentation. The thresholding techniques performed on gray-level images can be divided into two categories, namely, bilevel and multilevel thresholding. In bilevel thresholding, pixels are classified into two different brightness regions as background and object. Multi-level thresholding is applied to more complex images, which contain several classes with different gray-level ranges.

Moreover, current bilevel thresholding (binarization) techniques are usually divided into two classes: global and local thresholding. Global algorithms compute a single threshold for an image. Most global methods originated in the 1970s and can be classified into several main categories. The first category is based on the shape of the histogram, such as the valley-seeking method [4] and the histogram approximation method [5]. The second category is based on clustering, such as Otsu's method and the fuzzy clustering method. Otsu's method [6] is one of the most classical clustering methods; it segments an image by maximizing the between-class variance of the thresholded image. The fuzzy clustering method [7] is another classical global method that computes the fuzzy membership between each pixel and the mean values of the two classes and finds groups by applying cluster analysis. Entropy-based methods are the third category of global methods, such as the Shannon entropy-based method [8], the Tsallis entropy-based method [9], the Renyi entropy-based method [10, 11], and the fuzzy entropy-based method [12]. To improve robustness to noise, spatial information is taken into account, and many modified versions of the Otsu method [13], the fuzzy entropy-based method [14, 15], the fuzzy clustering method [16,17,18], and the Renyi entropy-based method [19, 20] have emerged.

Meanwhile, a local method usually computes a different threshold for the neighborhood of each pixel or for each appointed block in the image. Local thresholding algorithms are superior to global ones for segmenting uneven lighting images because they can select adaptive threshold values according to local area information [21]. Neighbor-based and block-based methods are the two major styles of local adaptive methods. The neighbor-based methods compute a threshold for each pixel based on local statistics, such as the variance of its neighborhood region. For example, Bernsen [22] selects the threshold as a function of the highest and lowest grayscale values. In Niblack's method [23], a pixel-wise threshold is calculated based on the standard deviation and the local mean of all pixels in a moving window over the gray image. Sauvola et al. [24, 25] first classify each window by content into text, picture, and background and then apply different segmentation rules to the various types of window. Kim [26] modifies Sauvola's algorithm by introducing more than one window size for the text type. Moreover, in order to improve the above approaches for the determination of the local threshold, several special features extracted in the pixel neighborhood are also taken into account, such as character stroke width [27] and gradient information [28]. In addition, Bradley and Roth [29] account for spatial variations in lighting and propose a real-time adaptive thresholding technique that is strongly robust to lighting changes by using the integral image of the input. Kim et al. [30] introduce a water flow model for document image binarization. In this model, an image surface is considered as a three-dimensional terrain composed of valleys and mountains. They find the local characteristic of the original terrain by pouring water onto the terrain and computing the filled water. Lastly, they apply a global thresholding algorithm to find the text regions. Valizadeh and Kabir [31] improve the water flow model and propose an adaptive method to segment degraded document images.

Block-based methods are another category of local thresholding algorithms, which divide the image into different sub-blocks. The sub-blocks are regarded as separate images and segmented by some principles. For example, Taxt [32] obtains the local threshold for each sub-block based on EM algorithm. Eikvil et al. [33] propose a fast text binarization method by segmenting the sub-blocks based on the Otsu method. Park et al. [34] improve Eikvil’s method by segmenting the object sub-blocks with the Otsu method and the background sub-blocks based on their mean value. Huang et al. [3] propose a method that adaptively selects the block size. Chou et al. [35] discriminate the classification of the sub-blocks based on support vector machine (SVM) and segment the different sub-block types with different strategies.

However, there are still several problems with these local thresholding methods. First, the segmentation accuracy of these window merging methods depends greatly on a reasonable selection of the initial window size. Second, partitioning the image into several sub-blocks usually leads to incoherent segmentation results between adjacent sub-blocks. Lastly, a high noise level in the image may cause the neighbors of a pixel to exhibit abnormal features, leading to unsatisfactory segmentation results.

The wave transformation model introduced by Wei et al. [36] is a promising approach for uneven lighting image segmentation. They consider an image surface as a three-dimensional terrain composed of mountains and valleys, corresponding to peaks and troughs, respectively, and partition the sub-regions with the local peaks and troughs in multiple directions. Then, a wave transformation is performed on the grayscale waves in the local sub-regions, and a matrix of multi-dimensional vectors is obtained. Lastly, the vectors are compressed to one dimension using principal component analysis (PCA), and the global Otsu method is employed to find an optimal wave threshold for segmenting the matrix. This algorithm does not require image partitioning and can yield good segmentation results for uneven lighting images. However, the method has two serious drawbacks. First, it is very sensitive to noise since it does not take spatial information into consideration in the wave transformation. Second, when the variation of light intensity in the background is too large, it may lead to misclassification of some pixels.

On the other hand, since Zadeh [37] introduced fuzzy set (FS) theory, it has been used to solve image segmentation problems for vague images. Pal and King [38] first introduced the fuzzy membership function and applied it in grayscale image processing. Since then, many image segmentation algorithms based on fuzzy theory have been widely studied and are considered efficient because they describe the fuzzy uncertainty of images well [39]. Atanassov [40] proposes a concept of higher order FSs, i.e., intuitionistic fuzzy sets (IFSs), which provides a flexible mathematical frame to address the hesitancy derived from imprecise information. He describes IFSs by two characteristic functions that express the degree of membership and the degree of non-membership, representing the degree of belongingness and non-belongingness, respectively, of elements to the IFS.

In this paper, we propose a novel local thresholding algorithm for segmenting uneven lighting images with noise injection. In particular, we introduce the idea of the wave transformation in Wei’s method and partition the image into sub-regions based on the local peaks and troughs in many straight lines extracted by rows and columns. Then, we perform the transformation of grayscale waves using fuzzy membership so that the relative characteristic (the local membership value) of each pixel substitutes its absolute characteristic (its gray level) to reduce the influence of uneven background light. Simultaneously, non-local spatial constraint and edge information obtained by the Sobel operator [41] are taken into account in order to avoid false peak and trough labeling caused by noise injection and large variation of light intensity. Lastly, we model the wave transformation image with the intuitionistic fuzzy theory and use a global intuitionistic fuzzy measure to segment the transformed image.

The rest of this paper is organized as follows. Section 2 introduces the wave transformation for images and intuitionistic fuzzy set theory. Section 3 describes our segmentation method. Section 4 presents the experimental results and comparison with several well-known segmentation algorithms. Section 5 gives the conclusions.

2 Preliminaries

2.1 Wave transformation for computing the local characteristics of an image

In this section, we give a brief introduction to the wave transformation model proposed by Wei et al. [36], which is used to reduce the influence of uneven light on the segmentation of images.

There are many images in which the background lighting is noticeably uneven. For these images, it is unreasonable to classify pixels into objects and background based only on the absolute gray levels. However, the relativity of the gray levels in local sub-regions can reflect the difference between the objects and the background. Therefore, in order to reduce the impact of uneven light on the segmented results, the grayscale wave model was proposed by Wei et al. [36] to obtain a local characteristic of a pixel to replace its original gray level. Figure 1c shows the grayscale wave model. The idea of the wave transformation is as follows. First, the image can be treated as a gray wave in three-dimensional space composed of many local sub-regions. These sub-regions are obtained by finding the local peaks and troughs in a set of grayscale wave curves, which are extracted in turn from the image in several given directions. Suppose that in a sub-region, the pixels close to the peak correspond to the object and the pixels close to the trough correspond to the background. The closeness degree of a pixel to the local peak or trough represents its relative characteristic and is used to substitute its absolute characteristic (its gray level) for segmenting the image. The closeness degree can be reflected by a membership degree and obtained by the wave transformation as follows. Given a direction x, a membership degree is assigned to a pixel according to its location in the local sub-region between two neighboring peak and trough. In particular, the membership degree of a pixel located at a peak is 1, while the membership degree of a pixel located at a trough is 0. Moreover, we can extract grayscale wave curves in several directions and perform the same wave transformation on the curves. Therefore, a pixel has n transformation values corresponding to n different directions. Lastly, these values are combined for each pixel according to a fusion rule, and a multi-direction wave transformation is obtained for the image.

Fig. 1

The illustration of the grayscale wave model in the vertical and horizontal directions

Intuitively speaking, the image can be viewed as waves of gray pixels. For an individual pixel, its local characteristic is directly determined by its location in the corresponding wave. A pixel located at the peak of a wave has a relatively high gray level, while a pixel located at the trough has a relatively low one. The location of the pixel in the local wave, namely, the wave transformation value that reflects the relative characteristic of a pixel in the local sub-region, can be used to replace its original gray level for segmenting images [36].

Definition 1. Extract a straight line \( f_d \) in a given direction d, which consists of K pixels. Let \( g(k)\in \left\{0,1,\dots, L-1\right\} \) represent the gray level of the line \( f_d \), where k = 1, 2, …, K. Suppose that there are m local peaks, located at \( P_1, P_2, \dots, P_m \), and m + 1 local troughs, located at \( T_1, T_2, \dots, T_{m+1} \), where \( T_1 < P_1 < T_2 < \cdots < P_m < T_{m+1} \). Given a threshold \( \alpha \in \left[0, L-1\right] \), if \( g\left({P}_i\right)-g\left({T}_{i-1}\right)>\alpha \) and \( g\left({P}_i\right)-g\left({T}_i\right)>\alpha \) for i = 1, 2, …, m, the transformation value (membership degree) w of the pixel Q(x_k, y_k) is [36]:

$$ w\left({x}_k,{y}_k\right)=\left\{\begin{array}{cc}u\left(H\left(k;{T}_{i-1},{P}_i\right)\right)& k\in \left[{T}_{i-1},{P}_i\right)\\ {}u\left(H\left(k;{T}_i,{P}_i\right)\right)& k\in \left[{P}_i,{T}_i\right)\end{array}\right., $$
(1)

where H(k; T_{i − 1}, P_i) = (g(k) − g(T_{i − 1}))/(g(P_i) − g(T_{i − 1})), u is a monotonically increasing function, and (x_k, y_k) represents the original coordinate of the kth pixel in the local sub-region \( {\phi}_m=\left[{T}_m,{T}_{m+1}\right] \). The transformation values of pixels in other local sub-regions are obtained in the same way. Then, we obtain the transformation wave vector \( G=\left\{{w}_{\phi_1},{w}_{\phi_2},\dots, {w}_{\phi_m}\right\} \) of the line \( f_d \). The transformation process is called one-dimensional (1D) grayscale wave transformation.
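As an illustration, Definition 1 can be sketched in Python (a minimal sketch under simplifying assumptions: the peak and trough indices are taken as given, u defaults to the identity, and the amplitude condition of Definition 1 is assumed to hold so no division by zero occurs; all names are our own):

```python
import numpy as np

def wave_transform_1d(g, troughs, peaks, u=lambda h: h):
    """1D grayscale wave transformation, a sketch of Eq. (1).

    g       : 1D array of gray levels along one extracted line
    troughs : trough indices T_1 < T_2 < ... < T_{m+1}
    peaks   : peak indices P_1 < ... < P_m with T_i < P_i < T_{i+1}
    u       : monotonically increasing function on [0, 1]
    """
    w = np.zeros(len(g), dtype=float)
    for i, p in enumerate(peaks):
        t_prev, t_next = troughs[i], troughs[i + 1]
        # Rising edge [T_i, P_i): H(k; T_i, P_i) = (g(k)-g(T_i)) / (g(P_i)-g(T_i))
        for k in range(t_prev, p):
            w[k] = u((g[k] - g[t_prev]) / (g[p] - g[t_prev]))
        # Falling edge [P_i, T_{i+1}): H computed against the next trough
        for k in range(p, t_next):
            w[k] = u((g[k] - g[t_next]) / (g[p] - g[t_next]))
        w[p] = u(1.0)   # peak maps to membership 1
    w[troughs[-1]] = u(0.0)  # last trough maps to membership 0
    return w
```

For a single symmetric bump the transform rises from 0 at the trough to 1 at the peak and back, independently of the absolute gray levels.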

Definition 2. Suppose the image has M straight lines in direction d. Let \( f_{d,i} \) represent the ith straight line in direction d, where d = d_1, d_2, …, d_n and i = 1, 2, …, M. Let \( F_{d,i}=\psi \left({f}_{d,i}\right) \) denote the 1D grayscale wave transformation of the straight line \( f_{d,i} \). w_d represents the grayscale wave vector of a pixel Q(x, y) in \( f_{d,i} \) in direction d and is described by [36]:

$$ {w}_d\left(x,y\right)={F}_{d,i}\left(x,y\right),{Q}_{\left(x,y\right)}\in {f}_{d,i}. $$
(2)

Therefore, there are n wave vectors \( {w}_{d_1},{w}_{d_2},\dots, {w}_{d_n} \) for a pixel Q(x, y). The multi-direction grayscale wave transformation Ψ(f) of the image f in all directions d_1, d_2, …, d_n is given by:

$$ \varPsi (f)=\left\{\uppsi \left({f}_{d_1}\right),\uppsi \left({f}_{d_2}\right),\cdots, \uppsi \left({f}_{d_n}\right)\right\}. $$
(3)

Next, we explain why the grayscale wave transformation can reduce the impact of uneven lighting. Suppose there are two sub-regions, ϕ_1 = [T_1, P_1] and ϕ_2 = [T_2, P_2], i.e., the yellow shaded regions in Fig. 2, with different lighting strengths in the wave curve g(k).

Fig. 2

The illustration of the 1D grayscale wave transformation. a The original grayscale wave curve g(k). b The wave transformation value G(k) of the curve g(k)

According to Eq. (1), the grayscale wave vectors of the two troughs and peaks are:

$$ {w}_{T_1}=u(0),\kern0.5em {w}_{P_1}=u(1),\kern0.5em {w}_{T_2}=u(0),\kern0.5em {w}_{P_2}=u(1). $$
(4)

Therefore, the pixels located at the two peaks have the same wave transformation values, namely, \( {w}_{p_1}={w}_{p_2} \), no matter how large their original gray levels are. Likewise, \( {w}_{t_1}={w}_{t_2} \) for the pixels at the two troughs. Suppose two pixels are located at k_1, k_2 with g(k_1) < g(k_2) in the local sub-regions ϕ_1, ϕ_2, i.e., the yellow shaded regions in Fig. 2, respectively. If H(k_1; T_1, P_1) > H(k_2; T_2, P_2), their transformation values satisfy u(H(k_1; T_1, P_1)) > u(H(k_2; T_2, P_2)) because u is a monotonically increasing function. This indicates that the wave transformation values, shown in the green shaded regions in Fig. 2, depend on the relative characteristics of the pixels rather than their absolute gray levels, thereby reducing the impact of uneven lighting [36].
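This argument can be checked numerically (illustrative gray levels of our own choosing, with u taken as the identity):

```python
# Two rising-edge intervals with the same shape but different lighting:
# region phi_1 spans gray levels 10 -> 50 (trough, peak),
# region phi_2 spans gray levels 110 -> 150.
H = lambda gk, gt, gp: (gk - gt) / (gp - gt)   # H(k; T, P) from Eq. (1)

w1 = H(30.0, 10.0, 50.0)     # a pixel halfway up the edge in phi_1
w2 = H(130.0, 110.0, 150.0)  # a pixel halfway up the edge in phi_2

# Although the raw gray levels differ by 100, the transformation values
# coincide: the transform encodes relative position within the sub-region.
print(w1, w2)  # 0.5 0.5
```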

2.2 Intuitionistic fuzzy sets and intuitionistic fuzzy entropy

In this section, we present the basic elements of intuitionistic fuzzy set theory and two fuzzy membership functions, which will be used in the wave transformation model and image segmentation.

Definition 3. A fuzzy set (FS) \( \tilde{A} \) is defined on a universe X and can be described as follows [37]:

$$ \tilde{A}=\left\{<x,{\mu}_{\tilde{A}}(x)>|x\in X\right\}, $$
(5)

where \( {\mu}_{\tilde{A}}(x)\in \left[0,1\right] \) is the membership function of \( \tilde{A}\in \mathrm{F}(X) \) and represents the degree of element x belonging to \( \tilde{A} \).

For fuzzy sets \( \tilde{A} \) and \( \tilde{B} \) and x ∈ X, the membership functions of \( \tilde{A}\cap \tilde{B} \) and \( \tilde{A}\cup \tilde{B} \) are defined as \( {\mu}_{\tilde{A}\cap \tilde{B}}(x)=\min \left({\mu}_{\tilde{A}}(x),{\mu}_{\tilde{B}}(x)\right) \) and \( {\mu}_{\tilde{A}\cup \tilde{B}}(x)=\max \left({\mu}_{\tilde{A}}(x),{\mu}_{\tilde{B}}(x)\right) \). \( {\tilde{A}}^c \) denotes the complement of \( \tilde{A} \), that is, \( {\mu}_{{\tilde{A}}^c}(x)=c\left({\mu}_{\tilde{A}}(x)\right) \) for all x ∈ X, where c is a complementary function.

Definition 4. An intuitionistic fuzzy set (IFS) A defined on a universe X is expressed by [40]:

$$ \tilde{A}=\left\{<x,{\mu}_A(x),{v}_A(x)>|x\in X\right\} $$
(6)

where μ_A(x) ∈ [0, 1] and v_A(x) ∈ [0, 1] represent, respectively, the degree of membership and the degree of non-membership of an element x belonging to A, under the condition 0 ≤ μ_A(x) + v_A(x) ≤ 1. Atanassov [40] has also introduced an intuitionistic index π_A(x) of an element x ∈ X in A for an intuitionistic fuzzy set (IFS) A in X as follows:

$$ {\pi}_A(x)=1-{\mu}_A(x)-{v}_A(x). $$
(7)

π A (x) is considered as a hesitancy degree of x to A.

Moreover, an FS \( \tilde{A} \) defined on X can also be represented using the notation of IFSs as follows: \( \tilde{A}=\left\{<x,{\mu}_A(x),1-{\mu}_A(x)>|x\in X\right\} \) with π_A(x) = 0 for all x ∈ X.

In addition, an axiomatic definition of intuitionistic fuzzy entropy measures is also introduced by Burillo and Bustince [42] to measure the fuzziness of an intuitionistic fuzzy set.

Definition 5. Intuitionistic fuzzy entropy is a function E : IFSs(X) → R^+ (R^+ = [0, +∞)) that satisfies the following conditions:

IFS1: E(A) = 0 iff A is an FS.

IFS2: E(A) = Cardinal(X) = n iff μ_A(x_i) = v_A(x_i) = 0 for all x_i ∈ X.

IFS3: E(A) ≥ E(B) if μ_A(x_i) ≤ μ_B(x_i) and v_A(x_i) ≤ v_B(x_i) for all x_i ∈ X.

IFS4: E(A) = E(A^c).

In addition, they also introduce an intuitionistic entropy measure based on the above requirements, expressed by [42]:

$$ E(A)=\sum \limits_{i=1}^n{\pi}_A\left({x}_i\right). $$
(8)
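Equations (7) and (8) can be sketched in a few lines of Python (the function name is our own):

```python
import numpy as np

def if_entropy(mu, nu):
    """Burillo-Bustince intuitionistic fuzzy entropy, Eq. (8):
    the sum over X of the hesitancy degrees pi = 1 - mu - nu, Eq. (7)."""
    mu, nu = np.asarray(mu, dtype=float), np.asarray(nu, dtype=float)
    pi = 1.0 - mu - nu          # hesitancy degree of each element
    return float(pi.sum())

# An ordinary fuzzy set (nu = 1 - mu) has zero hesitancy, hence E = 0 (IFS1):
print(if_entropy([0.25, 0.5], [0.75, 0.5]))  # 0.0
# A fully hesitant set (mu = nu = 0) attains the maximum Cardinal(X) (IFS2):
print(if_entropy([0.0, 0.0], [0.0, 0.0]))    # 2.0
```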

To obtain a good segmentation, one must select the membership function that best interprets the image. This section introduces two membership functions that will be utilized in this paper. The first one is an S-function [43]:

$$ S\left(l;a,b,c\right)=\left\{\begin{array}{cc}0,& 0\le l<a\\ {}{\left(l-a\right)}^2/\left(\left(c-a\right)\left(b-a\right)\right),& a\le l\le b\\ {}1-{\left(l-c\right)}^2/\left(\left(c-a\right)\left(c-b\right)\right),& b<l\le c\\ {}1,& l>c\end{array}\right., $$
(9)

where l is the observed variable and the parameters a, b, c determine the shape of the S-function. The second one is an exponential function defined by Chaira and Ray [44]:

$$ {\mu}_F\left(l;t\right)=\left\{\begin{array}{cc}\exp \left(-|l-{m}_{\tilde{D}}|/\left({g}_{\mathrm{max}}-{g}_{\mathrm{min}}\right)\right)& \mathrm{if}\ l\le t\\ {}\exp \left(-|l-{m}_{\tilde{B}}|/\left({g}_{\mathrm{max}}-{g}_{\mathrm{min}}\right)\right)& \mathrm{if}\ l>t\end{array}\right., $$
(10)

where \( {m}_{\tilde{D}}=\left({\sum}_{l=0}^t lh(l)\right)/\left({\sum}_{l=0}^th(l)\right) \) and \( {m}_{\tilde{B}}=\left({\sum}_{l=t+1}^{L-1} lh(l)\right)/\left({\sum}_{l=t+1}^{L-1}h(l)\right) \) are the average values of two parts \( \tilde{D} \) and \( \tilde{B} \); g max and g min are the maximum and minimum grayscale values of the image, respectively; and t is a threshold that separates the objects from the background.
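The two membership functions might be implemented as follows (a sketch; the function names and the histogram-based signature of the exponential membership are our own choices):

```python
import numpy as np

def s_function(l, a, b, c):
    """S-function membership of Eq. (9); a, b, c control the shape (a < b < c)."""
    l = np.atleast_1d(np.asarray(l, dtype=float))
    out = np.ones_like(l)                      # l > c  ->  1
    out[l < a] = 0.0                           # l < a  ->  0
    m1 = (a <= l) & (l <= b)
    out[m1] = (l[m1] - a) ** 2 / ((c - a) * (b - a))
    m2 = (b < l) & (l <= c)
    out[m2] = 1.0 - (l[m2] - c) ** 2 / ((c - a) * (c - b))
    return out

def exp_membership(hist, t, g_min, g_max):
    """Exponential membership of Eq. (10), from a gray-level histogram h(l)."""
    hist = np.asarray(hist, dtype=float)
    levels = np.arange(len(hist), dtype=float)
    # class means m_D (below threshold t) and m_B (above t)
    m_D = (levels[:t + 1] * hist[:t + 1]).sum() / hist[:t + 1].sum()
    m_B = (levels[t + 1:] * hist[t + 1:]).sum() / hist[t + 1:].sum()
    return np.where(levels <= t,
                    np.exp(-np.abs(levels - m_D) / (g_max - g_min)),
                    np.exp(-np.abs(levels - m_B) / (g_max - g_min)))
```

For example, with a = 0, b = 64, c = 128, the S-function rises smoothly from 0 to 1 and equals 0.5 at the crossover point b.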

3 The proposed method

3.1 Overview of the approach

As mentioned in Section 2.1, the wave transformation in Wei's method can reduce the adverse influence of uneven light on image segmentation by obtaining the local characteristic of each pixel. It does so by dividing the image into a number of local sub-regions and computing the local characteristic value of each pixel based on its location in its corresponding region. Specifically, the sub-regions are obtained by searching for local peaks and troughs along the grayscale levels of pixels within a straight line extracted from the image in a given direction.

However, when the image is heavily corrupted by noise, high-frequency, large-amplitude noise components strongly interfere with the search for peaks and troughs, namely, with the establishment of the local sub-regions and the calculation of the local characteristics of pixels. Therefore, noise injection is one of the greatest challenges for the wave transformation. It is known that the non-local mean-filtered image [45] of a noisy image retains more information than the median-filtered and mean-filtered images. In this paper, we propose a novel wave transformation model that introduces fuzzy membership and adds non-local spatial information and edge information, in order to reduce the influence of uneven light and noise injection on image segmentation. The structure of our algorithm is shown in Fig. 3 and consists mainly of the following steps.

  • Step 1. We first obtain the non-local mean-filtered image of a gray-level image in order to improve the noise resistance ability.

  • Step 2. Then, we apply the wave transformation to the filtered image to eliminate uneven light, namely, computing the local membership value of each pixel by a fuzzy membership function. This procedure is completed in four main steps. (1) First, we find the local sub-region that each pixel is located in. More concretely, all straight lines of the image in two given directions (namely, the horizontal and the vertical directions) are extracted. The significant local peaks and troughs in each line are searched for, and two neighboring troughs with the peak between them constitute a local sub-region. (2) Then, the local membership degrees of each pixel located in the two corresponding sub-regions (in the horizontal and vertical directions, respectively) are computed by a fuzzy membership function. (3) Moreover, the local membership degree of each pixel is further revised by combining it with its edge information, in order to avoid false peak and trough labeling caused by large variations of light intensity. (4) Lastly, the local membership degrees in the horizontal and vertical directions for each pixel are integrated with its non-local weight matrix, thus obtaining the final local membership values of all pixels and constituting the wave transformation matrix for the image.

  • Step 3. The final membership matrix is used to replace the grayscale matrix of the image; then, it is modeled and segmented with the intuitionistic fuzzy theory.

Fig. 3

The framework of our method

3.2 Wave transformation of an image using non-local spatial information and fuzzy membership

3.2.1 Non-local filter of the image

For every pixel Q(x k , y k ) in an image f, where (x k , y k ) represents the original coordinate of the kth pixel, the estimated value \( \overline{Q}\left({x}_k,{y}_k\right) \) with its spatial information is computed as [45]:

$$ \overline{Q}\left({x}_k,{y}_k\right)=\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^r}v\left(k,j\right)Q\left({x}_j,{y}_j\right), $$
(11)

where \( {V}_k^r \) represents a search window with radius r, which is centered at the pixel (x k , y k ) in the noisy image. The weight v(k, j), \( \left(j\in {V}_k^r\right) \) between two pixels (x k , y k ) and (x j , y j ) relies on their similarity and is defined by:

$$ v\left(k,j\right)=\left(\exp \left(-{\left\Vert {N}_k-{N}_j\right\Vert}_{2,a}^2/{h}^2\right)\right)/\left(Z(k)\right). $$
(12)

Here, h is the filtering degree parameter, N_k is a z × z square neighborhood centered at the pixel Q(x_k, y_k), a (a > 0) is the standard deviation of the Gaussian kernel, and \( Z(k)=\sum \limits_{j\in {V}_k^r}{e}^{-\left({\left\Vert {N}_k-{N}_j\right\Vert}_{2,a}^2\right)/{h}^2} \) is a normalizing constant [45]. The weight v(k, j) depends on the similarity between the neighborhood configurations of the pixels (x_k, y_k) and (x_j, y_j) and satisfies 0 ≤ v(k, j) ≤ 1 and \( \sum \limits_{j\in {V}_k^r}v\left(k,j\right)=1 \). For the pixel (x_k, y_k), the spatial information v(k, j) will be used for the integration of fuzzy memberships in the next section.
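Equations (11) and (12) can be sketched as follows (a simplified illustration: the Gaussian-weighted patch norm of Eq. (12) is replaced by a plain squared patch norm, boundary pixels are handled by reflection, and all names and default parameters are our own):

```python
import numpy as np

def nlm_weights(img, k, r=5, z=3, h=10.0):
    """Non-local weights v(k, j) of Eq. (12) for a single pixel k (sketch).

    img: 2D gray image; k: (row, col) of the pixel; r: search-window radius;
    z: patch side length; h: filtering-degree parameter.
    """
    zr = z // 2
    pad = np.pad(img.astype(float), zr, mode='reflect')
    patch = lambda p: pad[p[0]:p[0] + z, p[1]:p[1] + z]   # z x z patch at p
    Nk = patch(k)
    rows = range(max(k[0] - r, 0), min(k[0] + r + 1, img.shape[0]))
    cols = range(max(k[1] - r, 0), min(k[1] + r + 1, img.shape[1]))
    w = {(i, j): np.exp(-((patch((i, j)) - Nk) ** 2).sum() / h ** 2)
         for i in rows for j in cols}
    Z = sum(w.values())                       # normalizing constant Z(k)
    return {p: v / Z for p, v in w.items()}   # weights sum to 1

# Estimated value of a pixel, Eq. (11): a weighted average over the window.
img = np.arange(49.0).reshape(7, 7)
v = nlm_weights(img, (3, 3), r=2)
est = sum(wt * img[p] for p, wt in v.items())
# the window is symmetric around pixel (3, 3), so est stays at img[3, 3] = 24
```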

Theorem 1. Suppose the gray level of a pixel (x, y) in the uneven lighting image f δ is constituted by f δ (x, y) = f(x, y) + δ(x, y), where f(x, y) is the original intensity of (x, y) in the image with even light and δ(x, y) is the intensity of the uneven light in (x, y). Given that δ(x, y) remains approximately constant in the local region, the estimated value \( {\overline{Q}}_{\delta}\left(x,y\right) \) of each pixel (x, y) in the uneven lighting image f δ by the non-local filter is equal to the estimated value \( \overline{Q}\left(x,y\right) \) of (x, y) in the original image f plus the uneven light intensity of (x, y), namely, \( {\overline{Q}}_{\delta}\left(x,y\right)=\overline{Q}\left(x,y\right)+\delta \left(x,y\right) \) .

Theorem 1 indicates that, under the premise that the light intensity δ(x, y) remains approximately unchanged in a local region, the non-local filter removes the noise without changing the light intensities of the image, which prepares it for the follow-up process, i.e., the wave transformation.

3.2.2 Wave transformation with fuzzy membership theory and non-local spatial information

Divide the filtered image into local sub-regions by searching for local peaks and troughs in straight lines

After obtaining the non-local mean-filtered image, the filtered image is divided into local sub-regions by searching for local peaks and troughs in straight lines extracted from the image by rows and columns. Specifically, the straight lines in the image are searched in two directions, namely, the horizontal direction d_H and the vertical direction d_V. Let a line g(k) = f_{d,i}, selected in the horizontal direction d_H, be the original 1D gray wave curve, where k (k = 1, …, K) represents the kth pixel in the ith line. The peaks P = {P_1, P_2, …, P_m} and troughs T = {T_1, T_2, …, T_{m+1}} in the curve g(k), with \( T_1 < P_1 < T_2 < \cdots < P_m < T_{m+1} \), are found if they satisfy the following conditions:

$$ g\left({P}_i\right)-g\left({T}_{i-1}\right)>\alpha, \kern0.5em g\left({P}_i\right)-g\left({T}_i\right)>\alpha, $$
(13)

where the parameter α is a preset threshold used to control the sensitivity to waves of small amplitude caused by noise [36]. The choice of α is related to the difference between the gray levels of background pixels and object pixels. If α is larger than the smallest such difference, the objects cannot be extracted. If α is too small, the noise will be classified as objects.
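The search for significant peaks and troughs under the condition of Eq. (13) might be sketched as follows (an illustrative heuristic of our own, not the authors' exact procedure):

```python
import numpy as np

def find_peaks_troughs(g, alpha):
    """Search for significant peaks and troughs along one line (Eq. 13 sketch).

    A candidate peak is kept only if it rises more than alpha above both of
    its neighbouring candidate troughs, so small ripples caused by residual
    noise are ignored. Returns (troughs, peaks) as index lists.
    """
    g = np.asarray(g, dtype=float)
    d = np.diff(g)
    # Candidate extrema: slope sign changes, plus the two line endpoints.
    idx = [0] + [i + 1 for i in range(len(d) - 1)
                 if d[i] * d[i + 1] < 0] + [len(g) - 1]
    troughs, peaks = [], []
    for a, b, c in zip(idx, idx[1:], idx[2:]):
        if g[b] - g[a] > alpha and g[b] - g[c] > alpha:   # Eq. (13)
            peaks.append(b)
            troughs.extend([a, c])
    return sorted(set(troughs)), peaks

# The two large bumps survive; the small bump of height 2 is filtered out:
line = [0, 10, 0, 2, 0, 10, 0]
troughs, peaks = find_peaks_troughs(line, alpha=5)
```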

Compute the wave transformation value (local membership value) of each pixel in the horizontal and vertical directions using fuzzy membership

After finding all the sub-regions constituted by peaks {P_1, P_2, …, P_R} and troughs {T_1, T_2, …, T_S} in the gray wave curve g(k), the wave transformation can be applied to the pixels in g(k). Suppose there are several local sub-regions ϕ_1, ϕ_2, …, ϕ_s, … in g(k). Let a local sub-region ϕ_s = [T_s, T_{s+1}] consist of a peak P_s and two troughs T_s and T_{s+1}, where T_s < P_s < T_{s+1}. Let \( {\phi}_{s_1}=\left[{T}_s,{P}_s\right] \) be the rising edge interval and \( {\phi}_{s_2}=\left[{P}_s,{T}_{s+1}\right] \) be the falling edge interval. The local membership degree G(k) of each pixel k in the region \( {\phi}_{s_1}=\left[{T}_s,{P}_s\right] \) can be determined as follows.

Let the local sub-region \( {\phi}_{s_1} \) have L gray levels \( {f}_{\phi_{s_1}}\left(\left\{q\right\}\right)=\left\{g\left({T}_s\right),g\left({T}_s+1\right),\cdots, g\left({P}_s\right)\right\} \) and the sample space \( X=g\left({\phi}_{s_1}\right) \), where q = g(k) is the gray level of the pixel k located at (x_k, y_k), with k ∈ [T_s, P_s]. Let the sub-region \( {\phi}_{s_1} \) be composed of two parts, namely, the background \( \tilde{D} \) and the foreground (the objects) \( \tilde{B} \). In the sub-region, the pixels close to the peak correspond to the object and the pixels close to the trough correspond to the background. The closeness degree of a pixel to the local peak or trough represents its relative characteristic and can be obtained by a membership value as follows. The membership value \( {\mu}_{d_i}\left(g(k)\right)={\mu}_{d_i}(q) \) indicates the degree to which the pixel k with the gray level q = g(k) belongs to the peak P_s in the local interval \( {\phi}_{s_1}=\left[{T}_s,{P}_s\right] \). Based on the S-function in Eq. (9), \( {\mu}_{d_i}\left(g(k)\right) \) is determined by:

$$ {\mu}_{d_i}\left(g(k);g\left({T}_s\right),g\left({P}_s\right)\right)=\left\{\begin{array}{cc}2{\left(\frac{g(k)-g\left({T}_s\right)}{g\left({P}_s\right)-g\left({T}_s\right)}\right)}^2& g\left({T}_s\right)\le g(k)\le \frac{g\left({T}_s\right)+g\left({P}_s\right)}{2}\\ {}1-2{\left(\frac{g(k)-g\left({P}_s\right)}{g\left({P}_s\right)-g\left({T}_s\right)}\right)}^2& \frac{g\left({T}_s\right)+g\left({P}_s\right)}{2}<g(k)\le g\left({P}_s\right)\end{array}\right., $$
(14)

where the values g(T_s), g(P_s) determine the shape of the S-function with g(T_s) < g(P_s). The interval [g(T_s), g(P_s)] is called the fuzzy region. Likewise, the membership degree of the pixel k in the falling edge region \( {\phi}_{s_2}=\left[{P}_s,{T}_{s+1}\right] \) can be obtained in the same way. Then, the transformed value G(k) of a pixel k in the local interval ϕ_s = [T_s, T_{s+1}] in the direction d_H is as follows:

$$ {G}_{d_H}(k)=\left\{\begin{array}{cc}{\mu}_{d_H}\left(g(k);g\left({T}_s\right),g\left({P}_s\right)\right)& k\in \left[{T}_s,{P}_s\right]\\ {}{\mu}_{d_H}\left(g(k);g\left({T}_{s+1}\right),g\left({P}_s\right)\right)& k\in \left[{P}_s,{T}_{s+1}\right]\end{array}\right.. $$
(15)
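For concreteness, Eqs. (14) and (15) can be sketched in a few lines of NumPy. This is an illustrative implementation only (the function names are ours, not the paper's); it assumes the trough and peak indices T s , P s , T s+1 of a line have already been located by the peak/trough search, and that g(P s ) > g(T s ) on each sub-region.

```python
import numpy as np

def s_membership(g, g_t, g_p):
    """S-function of Eq. (14): membership of gray level g to the peak,
    rising from 0 at the trough value g_t to 1 at the peak value g_p.
    Assumes g_p > g_t."""
    x = (np.asarray(g, dtype=float) - g_t) / (g_p - g_t)  # normalized position
    mid = x <= 0.5
    mu = np.empty_like(x)
    mu[mid] = 2.0 * x[mid] ** 2                  # first branch of Eq. (14)
    mu[~mid] = 1.0 - 2.0 * (x[~mid] - 1.0) ** 2  # second branch of Eq. (14)
    return mu

def wave_transform_interval(row, t_s, p_s, t_next):
    """Eq. (15): transform one local interval [T_s, T_{s+1}] of a line,
    using the rising branch on [T_s, P_s] and the falling branch
    (trough value taken at T_{s+1}) on [P_s, T_{s+1}]."""
    g = np.asarray(row, dtype=float)
    out = np.empty(t_next - t_s + 1)
    out[: p_s - t_s + 1] = s_membership(g[t_s : p_s + 1], g[t_s], g[p_s])
    out[p_s - t_s :] = s_membership(g[p_s : t_next + 1], g[t_next], g[p_s])
    return out
```

For a symmetric gray wave such as `[10, 20, 30, 40, 30, 20, 10]` with the trough/peak/trough at indices 0, 3, and 6, the transformed values rise from 0 at the troughs to 1 at the peak.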

Theorem 2. Suppose the gray level of a pixel (x, y) in the uneven lighting image f δ is given by f δ (x, y) = f(x, y) + δ(x, y), where f(x, y) is the original intensity of (x, y) in the image under even light and δ(x, y) is the intensity of the uneven light at (x, y). Given that δ(x, y) remains approximately constant in each local sub-region, the wave transformation matrix Ψ(f) of the original image f obtained by the S-function is approximately equal to the wave transformation matrix Ψ(f δ ) of the uneven lighting image f δ .

Theorem 2 indicates that the wave transformation of an image using the fuzzy membership function (S-function) can reduce the light intensity difference between neighborhood sub-regions, thus markedly decreasing the influence of uneven light on the image segmentation.
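The invariance claimed in Theorem 2 can be checked directly on Eq. (14): if δ(x, y) ≡ δ 0 on a sub-region, every gray level in [T s , P s ] is shifted by the same constant, which cancels in the S-function.

```latex
% First branch of Eq. (14) under a constant local offset \delta_0:
\begin{aligned}
\mu_{d_i}\bigl(g(k)+\delta_0;\ g(T_s)+\delta_0,\ g(P_s)+\delta_0\bigr)
  &= 2\left(\frac{\bigl(g(k)+\delta_0\bigr)-\bigl(g(T_s)+\delta_0\bigr)}
                 {\bigl(g(P_s)+\delta_0\bigr)-\bigl(g(T_s)+\delta_0\bigr)}\right)^{\!2} \\
  &= 2\left(\frac{g(k)-g(T_s)}{g(P_s)-g(T_s)}\right)^{\!2}
   = \mu_{d_i}\bigl(g(k);\ g(T_s),\ g(P_s)\bigr).
\end{aligned}
```

The second branch cancels identically, so Ψ(f δ ) ≈ Ψ(f) whenever δ is approximately constant per sub-region.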

Revise the wave transformation values of each pixel using its edge information

However, when the premise that the intensity of the uneven light remains approximately unchanged in each local sub-region is not satisfied, the local membership obtained by the wave transformation should be modified. That is, when the variation of the light intensity in a pure background region is so large that it exceeds the threshold α in Eq. (13), a false sub-region composed of a peak and a trough is extracted, which easily leads to misclassification. This is because, within a sub-region, the pixels close to the peak correspond to the object and the pixels close to the trough correspond to the background, according to the wave transformation for computing the local characteristics of an image. Therefore, the pixels in the pure background will be split into two classes, i.e., object and background, by the segmentation of the wave transformation matrix, so that some background pixels are misclassified as object.

Taking the mouse image as an example (Fig. 4a), the uneven light is very pronounced in the background regions. When the parameter is set to α = 60, four sub-regions are extracted in the 160th row, namely, (T 1, P 1), (P 1, T 2), (T 2, P 2), and (P 2, T 3), as shown by the blue curve in Fig. 4c. The two sub-regions (T 1, P 1) and (P 2, T 3) are extracted because of the large variation of light intensity in the original wave curve. The pixels in these two sub-regions actually belong to the background; however, some of them will be misclassified as objects. Figure 4b shows the edge information of the mouse image, and the red curve in Fig. 4c is the edge information in the 160th row. A closer look at the two curves in Fig. 4c reveals an obvious difference between the pure background sub-regions (T 1, P 1) and (P 2, T 3) and the mixed sub-regions (P 1, T 2) and (T 2, P 2). There is no edge information in the pure background sub-region (T 1, P 1), since the gray levels of its pixels vary gradually with the intensity of the uneven light. In contrast, there is edge information in the region (P 1, T 2), where some gray levels vary sharply because its pixels belong to two different classes. Therefore, in this paper, we take the edge information into consideration to further revise the wave transformation values.

Fig. 4
figure 4

a The mouse image. b The edge information obtained by the Sobel operator. c The gray wave curve (the blue curve) and the edge information (the red curve) of the 160th row for the mouse image

To preserve the more global (larger) edges and ignore locally fluctuating (smaller) edges, we use two improved 5 × 5 Sobel operators [46],

$$ {S}_{d_H}=\left[\begin{array}{ccccc}2& 3& 0& -3& -2\\ {}3& 4& 0& -4& -3\\ {}6& 6& 0& -6& -6\\ {}3& 4& 0& -4& -3\\ {}2& 3& 0& -3& -2\end{array}\right],\kern1em {S}_{d_V}=\left[\begin{array}{ccccc}2& 3& 6& 3& 2\\ {}3& 4& 6& 4& 3\\ {}0& 0& 0& 0& 0\\ {}-3& -4& -6& -4& -3\\ {}-2& -3& -6& -3& -2\end{array}\right], $$

to convolve the image and take the maximum response of the horizontal and vertical edges. Then, we can obtain

$$ {E}_{d_H}\left({x}_k,{y}_k\right)=\sum \limits_{m=0}^4\sum \limits_{n=0}^4I\left({x}_k+m-2,{y}_k+n-2\right)\times {S}_{d_H}\left(m,n\right), $$
(16)
$$ {E}_{d_V}\left({x}_k,{y}_k\right)=\sum \limits_{m=0}^4\sum \limits_{n=0}^4I\left({x}_k+m-2,{y}_k+n-2\right)\times {S}_{d_V}\left(m,n\right), $$
(17)
$$ E\left({x}_k,{y}_k\right)=\max \left({E}_{d_H}\left({x}_k,{y}_k\right),{E}_{d_V}\left({x}_k,{y}_k\right)\right), $$
(18)
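As a sketch, the edge map of Eqs. (16)–(18) can be computed by correlating the image with the two kernels and keeping the larger response. The function below is illustrative only; edge-replicated padding at the image borders is our assumption, since the text does not specify border handling.

```python
import numpy as np

# Improved 5x5 Sobel kernels from the text; the vertical kernel is the
# transpose of the horizontal one.
S_DH = np.array([[2, 3, 0, -3, -2],
                 [3, 4, 0, -4, -3],
                 [6, 6, 0, -6, -6],
                 [3, 4, 0, -4, -3],
                 [2, 3, 0, -3, -2]], dtype=float)
S_DV = S_DH.T

def max_sobel_response(img):
    """Eqs. (16)-(18): correlate the image with both kernels and keep the
    larger of the horizontal and vertical responses at every pixel."""
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, 2, mode="edge")  # replicate borders (our assumption)
    h, w = img.shape
    e_h = np.zeros_like(img)
    e_v = np.zeros_like(img)
    for m in range(5):
        for n in range(5):
            patch = pad[m : m + h, n : n + w]  # offset (m-2, n-2) around each pixel
            e_h += patch * S_DH[m, n]
            e_v += patch * S_DV[m, n]
    return np.maximum(e_h, e_v)  # Eq. (18)
```

Both kernels have zero row and column sums, so a constant image produces a zero response everywhere, while a vertical intensity step triggers a large horizontal response.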

where E(x k , y k ) is the gradient of the point (x k , y k ). Given a constant T, if E(x k , y k ) > T, we consider the point (x k , y k ) to be a boundary point. Then, the gray level g(k) of the pixel k in the sub-region \( {\phi}_{s_1}=\left[{T}_s,{P}_s\right] \) is transformed by judging whether there is any edge information in \( {\phi}_{s_1}=\left[{T}_s,{P}_s\right] \). Thus, the transformation value of the pixel k ∈ [T s , P s ] is modified as follows.

$$ {G}_{d_H}^{\prime }(k)=\left\{\begin{array}{cc}{G}_{d_H}(k)& \exists j\in \left[{T}_s,{P}_s\right],E\left({x}_j,{y}_j\right)\ge T\\ {}\mathrm{background}& \mathrm{otherwise}\end{array}\right.. $$
(19)

Theorem 2 holds under the premise that the intensity of the uneven light δ(x, y) remains approximately constant in each local sub-region. For a pixel k ∈ [T s , P s ], the condition ∃j ∈ [T s , P s ], E(x j , y j ) ≥ T indicates that there is edge information, i.e., the trough T s and the peak P s were found because of a drastic change of the gray level in the sub-region \( {\phi}_{s_1}=\left[{T}_s,{P}_s\right] \). That is, there are two different classes in the sub-region, and the intensity of the uneven light δ(x, y) can be regarded as approximately constant in \( {\phi}_{s_1}=\left[{T}_s,{P}_s\right] \). Then, Theorem 2 holds, and the wave transformation value can remain unchanged, namely, \( {G}_{d_H}^{\prime }(k)={G}_{d_H}(k) \). If the pixel k ∈ [T s , P s ] does not satisfy the condition ∃j ∈ [T s , P s ], E(x j , y j ) ≥ T, there is no edge information. That is, there is only one class in the sub-region, and the trough T s and the peak P s were found because of the gradual change of the uneven light intensity in \( {\phi}_{s_1}=\left[{T}_s,{P}_s\right] \). In this case, the variation of the uneven light intensity is too large to be regarded as approximately constant in \( {\phi}_{s_1}=\left[{T}_s,{P}_s\right] \), Theorem 2 does not hold, and the wave transformation value \( {G}_{d_H}^{\prime }(k) \) should be set to the intensity of the background.
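The revision rule of Eq. (19) then amounts to a per-sub-region test. The sketch below is illustrative; the background value is taken as 1, as in the worked example of Section 4.2.

```python
import numpy as np

def revise_interval(G, E, t_s, p_s, T, background=1.0):
    """Eq. (19): keep the wave transformation values of the sub-region
    [T_s, P_s] only if it contains at least one edge pixel (E >= T);
    otherwise the sub-region holds a single class, Theorem 2 fails there,
    and the whole sub-region is set to the background value."""
    G = np.asarray(G, dtype=float).copy()
    if not np.any(np.asarray(E)[t_s : p_s + 1] >= T):
        G[t_s : p_s + 1] = background
    return G
```

A sub-region with a strong edge response keeps its membership values; a pure background sub-region is flattened to the background value.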

Integrate the horizontal and vertical transformation values of each pixel with its non-local information

Since the wave transformation is applied to the image in two directions, each pixel has two transformation values, i.e., there are two transformation matrices for the image. In this section, we integrate the two matrices with non-local space information. Let \( {G}_{d_H,i}^{\prime } \) represent the one-dimensional wave transformation values of the ith line \( {f}_{d_H,i} \) in the direction d H . The horizontal wave transformation matrix \( {\varPsi}_{d_H}(f) \) of the image is composed of the wave transformations \( {G}_{d_H,1}^{\prime },{G}_{d_H,2}^{\prime },{G}_{d_H,3}^{\prime },\cdots \) of all lines in the direction d H . The vertical wave transformation matrix \( {\varPsi}_{d_V}(f) \) of the image can be obtained in the same way. To integrate the two membership matrices \( {\varPsi}_{d_H}(f) \) and \( {\varPsi}_{d_V}(f) \), we take into consideration the non-local space information of each pixel, namely, the weight matrix of pixels, as follows. Let \( {\varPsi}_{d_H}\left({x}_k,{y}_k\right) \) and \( {\varPsi}_{d_V}\left({x}_k,{y}_k\right) \) represent the membership degrees (wave values) of the pixel k located at (x k , y k ) in the d H and d V directions, and let the weight matrix of the pixel k be \( \left\{v\left(k,j\right)\right\},\left(\left({x}_j,{y}_j\right)\in {V}_k^{r^{\prime }}\right) \). The wave transformation values of the pixel k in the directions d H and d V with space information are modified by:

$$ {G}_{d_H}^{{\prime\prime}}\left({x}_k,{y}_k\right)=\frac{\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^{r^{\prime }}}v\left(k,j\right){\varPsi}_{d_H}\left({x}_j,{y}_j\right)}{\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^{r^{\prime }}}v\left(k,j\right)}, $$
(20)
$$ {G}_{d_V}^{{\prime\prime}}\left({x}_k,{y}_k\right)=\frac{\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^{r^{\prime }}}v\left(k,j\right){\varPsi}_{d_V}\left({x}_j,{y}_j\right)}{\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^{r^{\prime }}}v\left(k,j\right)}, $$
(21)

where \( {V}_k^{r^{\prime }} \) denotes a search window of radius r′ (here, we set r′ = 1) centered at the pixel (x k , y k ). The final local membership value of the pixel k located at (x k , y k ) is computed from the memberships in the two directions as follows:

$$ {\varPsi}^{\prime}\left({x}_k,{y}_k\right)=\left({G}_{d_H}^{{\prime\prime}}\left({x}_k,{y}_k\right)+{G}_{d_V}^{{\prime\prime}}\left({x}_k,{y}_k\right)\right)/2. $$
(22)
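A direct, loop-based sketch of Eqs. (20)–(22) is shown below. It is illustrative only and assumes the non-local weights v(k, j), which are defined earlier in the paper and not reproduced in this excerpt, are supplied as a callable taking the two pixel coordinates.

```python
import numpy as np

def integrate_with_weights(psi_h, psi_v, v, r=1):
    """Eqs. (20)-(22): weighted average of each directional membership
    matrix over a (2r+1)x(2r+1) search window, then the mean of the two
    directions. `v((x, y), (xj, yj))` returns the non-local weight v(k, j)."""
    h, w = psi_h.shape
    out = np.zeros((h, w))
    for x in range(h):
        for y in range(w):
            num_h = num_v = den = 0.0
            for dx in range(-r, r + 1):
                for dy in range(-r, r + 1):
                    xj, yj = x + dx, y + dy
                    if 0 <= xj < h and 0 <= yj < w:  # clip window at borders
                        wkj = v((x, y), (xj, yj))
                        num_h += wkj * psi_h[xj, yj]  # numerator of Eq. (20)
                        num_v += wkj * psi_v[xj, yj]  # numerator of Eq. (21)
                        den += wkj
            out[x, y] = (num_h / den + num_v / den) / 2.0  # Eq. (22)
    return out
```

With uniform weights, the result is simply the mean of the two locally averaged matrices, so constant inputs are reproduced exactly.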

Theorem 3. Suppose the gray level of a pixel (x, y) in the uneven lighting image f δ is given by f δ (x, y) = f(x, y) + δ(x, y), where f(x, y) is the original intensity of (x, y) in the image under even light, and δ(x, y) is the intensity of the uneven light at (x, y). Given that δ(x, y) remains approximately constant in each local sub-region, the integration of the horizontal and vertical transformation values of each pixel in the original image f with its non-local information is approximately equal to that of each pixel in the uneven lighting image f δ , namely, \( {\varPsi}_{\delta}^{\prime}\left({x}_k,{y}_k\right)\approx {\varPsi}^{\prime}\left({x}_k,{y}_k\right) \).

Theorem 3 indicates that the addition of the non-local space information does not change the uneven light intensity, and the integration of the horizontal and vertical wave transformation values with the non-local space information can reduce the influence of the uneven light on the image segmentation.

After calculating the local membership values of all pixels in the image by the abovementioned method, we obtain the final 2D wave transformation matrix. In this matrix, the characteristic of a pixel is represented by a relative value (the local membership value) related only to its sub-region, which reduces the lighting difference between neighboring sub-regions. At the same time, the non-local information is incorporated to overcome the influence of local high-frequency signals on the construction of the membership matrix. That is, although the membership degrees of pixels to the local peaks replace the gray levels as a new representation of the pixels, they are not completely separated from the original gray level and spatial information. Finally, the membership matrix Ψ′ is classified using IFS entropy to obtain the final segmented image.

3.3 Segmentation of transformed image using intuitionistic fuzzy set

We apply L-level quantization to the membership matrix Ψ′(x, y) of size M × N to obtain a new matrix I(x, y), which is also called the wave transformation image. Then, the image is modeled based on intuitionistic fuzzy sets [47]. Suppose the image I(x, y) has L gray levels G x  = {0, 1, ⋯, L − 1} and its histogram is H = {h 0, h 1, …, h L − 1}. Let the 1D sample space be X = G x  = {0, 1, ⋯, L − 1}, and let p be the probability of a gray level, i.e., p({i}) = h i , i = 0, 1, ⋯, L − 1, where i is the quantization level.

The image can be considered an array of fuzzy singletons according to Pal and King [38]. With regard to an image property, each element of the array represents the membership value of the gray level l. Then, the image can be represented as the fuzzy set \( \tilde{A}=\left\{<l,{\mu}_{\tilde{A}}(l)>|l\in \left\{0,1,\dots, L-1\right\}\right\} \). To segment images, we use Vlachos's method [47], taking the property of \( {\mu}_{\tilde{A}}(l) \) to be the distance of each level from the mean of its corresponding class. Specifically, the membership degree \( {\mu}_{\tilde{A}}(l) \) of each pixel is determined by the exponential membership function in Eq. (10). The corresponding membership and non-membership functions in Eq. (6) are given by

$$ {\mu}_A\left(l;t\right)=\lambda {\mu}_{\tilde{A}}\left(l;t\right),\kern0.5em {v}_A\left(l;t\right)={\left(1-{\mu}_{\tilde{A}}\left(l;t\right)\right)}^{\lambda }, $$
(23)

where λ ∈ [0, 1]; we set λ = 0.9 in this paper. Therefore, the image is represented in the intuitionistic fuzzy domain as A = {<l, μ A (l; t), v A (l; t)> | l ∈ {0, …, L − 1}}. Then, we use the intuitionistic fuzzy entropy in Eq. (8) defined by Burillo and Bustince [41] by means of the following expression:

$$ {E}_{\mathrm{IFS}}\left(A;t\right)=\frac{1}{M\times N}\sum \limits_{l=0}^{L-1}{h}_l\left(1-{\mu}_A\left(l;t\right)-{v}_A\left(l;t\right)\right), $$
(24)

where M × N is the image size in pixels. The underlying idea of the described approach is that the optimal threshold yields the least entropy E IFS(A; t), so that the corresponding IFS represents the image most efficiently, with the least uncertainty. That is, the minimum entropy corresponds to the optimal image segmentation. Therefore, the optimization criterion can be formulated as \( {t}_{\mathrm{opt}}=\mathrm{Arg}\underset{t}{\min}\left\{{E}_{\mathrm{IFS}}\left(A;t\right)\right\} \). The detailed steps of our method are described in the flowchart shown in Fig. 5.
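The threshold search of this section can be sketched as follows. The exponential membership function of Eq. (10) is not reproduced in this excerpt, so the block below substitutes an illustrative exponential membership based on the distance of each level from its class mean; everything else follows Eqs. (23) and (24).

```python
import numpy as np

def ifs_entropy_threshold(img, L=256, lam=0.9):
    """Search all thresholds t and keep the one minimizing the
    intuitionistic fuzzy entropy of Eq. (24). The membership below is an
    illustrative stand-in for Eq. (10): levels close to their class mean
    receive membership near 1."""
    img = np.asarray(img)
    hist = np.bincount(img.ravel(), minlength=L).astype(float)
    levels = np.arange(L, dtype=float)
    n_pix = img.size
    best_t, best_e = None, np.inf
    for t in range(1, L - 1):
        w0, w1 = hist[: t + 1].sum(), hist[t + 1 :].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty: threshold not admissible
        m0 = (levels[: t + 1] * hist[: t + 1]).sum() / w0  # background mean
        m1 = (levels[t + 1 :] * hist[t + 1 :]).sum() / w1  # object mean
        mu = np.where(levels <= t,
                      np.exp(-np.abs(levels - m0) / (L - 1)),
                      np.exp(-np.abs(levels - m1) / (L - 1)))
        mu_a = lam * mu              # membership, Eq. (23)
        nu_a = (1.0 - mu) ** lam     # non-membership, Eq. (23)
        e = (hist * (1.0 - mu_a - nu_a)).sum() / n_pix  # Eq. (24)
        if e < best_e:
            best_t, best_e = t, e
    return best_t
```

For a perfectly bimodal image, any threshold separating the two modes minimizes the entropy, so the returned value falls between the two gray levels.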

Fig. 5
figure 5

Flow chart of our method

4 Results and discussion

4.1 Performance of wave transformation using the fuzzy membership function

First, we use the uneven lighting rice image to test the process of wave transformation using the fuzzy membership function. As shown in Fig. 6a, there is a dark region in the lower part of the rice image, where the overall gray level is lower than that in the upper region of the image. Figure 6g shows the gray levels of the pixels in the 45th–50th columns, namely, the pixels in the red rectangle of Fig. 6a. If we want to account for both regions with different light intensities, it is difficult to find a single global threshold, such as T 1 or T 2 in Fig. 6g, that can extract all of the “rice” objects. As shown in Fig. 6c, without the wave transformation, some “rice” objects in the darkened regions cannot be extracted by the IFS entropy-based method with a threshold T = 160.

Fig. 6
figure 6

a The rice image. b The wave transformation image. c, d Segmented images before and after wave transformation using the IFS entropy-based method. e The original wave curve of the 45th column. f The wave transformation result of the 45th column by the fuzzy membership function. g The grayscale wave of the 45th–50th columns. h The wave transformation result of the 45th–50th columns

Then, the proposed method is applied to the test image with α = 60 (see Eq. (13)) for the search of peaks and troughs in S4. Several peaks and troughs of the 45th column of the rice image are detected in Fig. 6e, where the marked symbols indicate the peaks and the troughs, respectively. The local membership values (wave transformation values) of the 45th column obtained by the S-function are shown in Fig. 6f. It is obvious that all of the peaks and all of the troughs are respectively aligned at the same level, and the local characteristics of the other pixels are represented by their positions within the sub-regions.

Figure 6h shows the wave transformation result of the 45th–50th columns, and Fig. 6b shows the transformation image of the rice image. It is obvious that the dark region in the lower part of the rice image is brightened to approximately the same level as the upper part. Thus, a threshold of 167 can easily be found by the IFS entropy, and all of the “rice” objects are extracted by the IFS entropy-based method, as shown in Fig. 6d. Moreover, the threshold of 167 corresponds to a membership degree of \( 167\cdot \frac{1}{L}=0.6523 \), because the transformation image is obtained by applying L = 256 level quantization to the membership matrix.

4.2 Performance of the revision of the wave transformation values using edge information

In this section, we test the revision of the wave transformation values using edge information. Taking the uneven lighting mouse image used in Section 3.2.2 as an example, the two pure background sub-regions (T 1, P 1) and (P 2, T 3) in the 160th row are extracted because of the large variation of light intensity with α = 60. The wave transformation values (membership degrees) of the pixels in these regions vary from 0 to 1 according to Eq. (14) (see Fig. 7b2). Consequently, some pixels will be misclassified as objects, as shown in Fig. 7b3, according to the principle that the pixels close to the peak and those close to the trough correspond to different classes.

Fig. 7
figure 7

a1 The mouse image. a2 The gray wave curve (the blue curve) and the edge information (the red curve) of the 160th row for the mouse image. a3 The edge information obtained by the Sobel operator. b1, c1 The gray wave transformation images before and after adding the edge information by Eq. (19) in the horizontal direction. b2, c2 The gray wave curves of the 160th row before and after adding the edge information by Eq. (19) in the horizontal direction. b3, c3 The segmented images before and after adding edge information (T = 800, α = 60)

Figure 7c2 shows the gray wave transformation of the 160th row when the edge information is taken into account by Eq. (19), with T = 800. For the pixels in the pure background sub-regions without edge information, i.e., k ∈ [T 1, P 1] or k ∈ [P 2, T 3], the transformation values are set to 1, namely, the value of the background, \( {G}_{\mathrm{Row}\_{160}^{\mathrm{th}}}^{\prime }(k)=\mathrm{background}=1 \). For the pixels in the sub-regions with edge information, i.e., k ∈ [P 1, T 2] or k ∈ [T 2, P 2], the transformation values are unchanged, i.e., \( {G}_{\mathrm{Row}\_{160}^{\mathrm{th}}}^{\prime }(k)={G}_{\mathrm{Row}\_{160}^{\mathrm{th}}}(k) \). Figure 7b1, c1 shows the gray wave transformation images corresponding to Fig. 7b2, c2. It is obvious that the transformation image in Fig. 7c1 agrees with the actual requirement better than that in Fig. 7b1. Figure 7b3, c3 shows the segmented images before and after adding edge information. Apparently, when the edge information is taken into account, the proposed method obtains a better segmented result.

4.3 Performance of segmentation of uneven lighting images with noise injection

To prove the effectiveness of our method for uneven lighting images with strong noise injection, experimental tests are implemented on six uneven illumination images corrupted by Gaussian noise. The intensity values of the pixels in these images vary from 0 to 255, namely, L = 256. In the experiments, our method is compared with several classical local methods, namely, Bernsen's method [22], Niblack's method [23], and Sauvola's method [24]; with related local methods, namely, Bradley's method [29], Chou's method [35], and Valizadeh's method [31]; with Wei's method [36], which is based on an improved water flow model; and with several related global two-dimensional methods with space information, namely, the two-dimensional Otsu method (2DOtsu) [13] proposed by Liu et al., the fuzzy c-means clustering algorithm with non-local spatial information (FCM_NLS) [16] proposed by Zhao et al., and the two-dimensional weak fuzzy partition entropy-based method (2DWFPE) [14] proposed by Yu et al. All of these methods are implemented on the test images, and their results are compared with those of our method.

The parameter settings for these methods are as follows. Bernsen’s method uses a 93 × 93 neighborhood. Niblack’s method uses a 50 × 50 neighborhood with k =  − 0.2. Sauvola’s method uses a 50 × 50 neighborhood with k = 0.2. Bradley’s method uses a 30 × 30 neighborhood. Valizadeh’s method uses the parameter W = 2. Chou’s method uses a mean threshold of 128 and a variance threshold of 10 with a block size 3 × 3. Wei’s method uses the parameter α = 60. FCM_NLS uses the parameter β = 10. Specifically, for the parameters of the non-local filter in our method and FCM_NLS, we set r = 5, a = 2, and h = 15 for the fingerprint image and r = 5, a = 2, and h = 30 for the other images.

Figures 8, 9, 10, 11, 12, and 13 show the six test images, their noisy versions (corrupted by Gaussian noise), their corresponding ground truths (hand-labeled), and the binarized results. Taking the rice image as an example, the segmentation results of the local methods are presented in Fig. 8d–i. Although the six local methods of Bernsen, Niblack, Sauvola, Bradley, Chou, and Valizadeh perform well in the foreground areas (objects), they create much pepper noise in the background areas. Although Wei's method uses wave transformation to reduce the uneven light, it still produces a bad result, since it does not take the space information into account. Meanwhile, the global methods 2DOtsu and FCM_NLS, which have relatively good anti-noise performance, cannot extract some “rice” objects in the dark regions. 2DWFPE [14] is our previously proposed method, which maximizes the weak fuzzy partition entropy on the two-dimensional histogram to obtain the optimum segmentation result. This global method can improve the segmentation of noisy images, but it uses only the absolute characteristic (the gray level) of each pixel and thus cannot deal well with uneven lighting images. In contrast, the method proposed in this paper uses the relative characteristic of each pixel in its local sub-region to reduce the influence of uneven light, and non-local spatial information to avoid noise interference. Thus, it obtains the best results for uneven lighting images with noise injection, as shown in Fig. 8n.

Fig. 8
figure 8

Segmentation results on the rice image 256 × 256 corrupted by the Gaussian noise (0, 0.010). a Original image. b Noisy image. c Ground-truth. d Bernsen’s method. e Niblack’s method. f Sauvola’s method. g Bradley’s method. h Chou’s method. i Valizadeh’s method. j Wei’s method. k 2DOtsu. l FCM_NLS. m 2DWFPE. n Our method with T = 800, α = 60

Fig. 9
figure 9

Segmentation results on the mouse image 243 × 326 corrupted by the Gaussian noise (0, 0.010). a Original image. b Noisy image. c Ground-truth. d Bernsen’s method. e Niblack’s method. f Sauvola’s method. g Bradley’s method. h Chou’s method. i Valizadeh’s method. j Wei’s method. k 2DOtsu. l FCM_NLS. m 2DWFPE. n Our method with T = 800, α = 60

Fig. 10
figure 10

Segmentation results on the fingerprint image 288 × 254 corrupted by the Gaussian noise (0, 0.002). a Original image. b Noisy image. c Ground-truth. d Bernsen’s method. e Niblack’s method. f Sauvola’s method. g Bradley’s method. h Chou’s method. i Valizadeh’s method. j Wei’s method. k 2DOtsu. l FCM_NLS. m 2DWFPE. n Our method with T = 800, α = 60

Fig. 11
figure 11

Segmentation results on the block image 511 × 441 corrupted by the Gaussian noise (0, 0.015). a Original image. b Noisy image. c Ground-truth. d Bernsen’s method. e Niblack’s method. f Sauvola’s method. g Bradley’s method. h Chou’s method. i Valizadeh’s method. j Wei’s method, k 2DOtsu. l FCM_NLS. m 2DWFPE. n Our method with T = 800, α = 80

Fig. 12
figure 12

Segmentation results on the license-plate image 153 × 544 corrupted by the Gaussian noise (0, 0.015). a Original image. b Noisy image. c Ground-truth. d Bernsen’s method. e Niblack’s method. f Sauvola’s method. g Bradley’s method. h Chou’s method. i Valizadeh’s method. j Wei’s method. k 2DOtsu. l FCM_NLS. m 2DWFPE. n Our method with T = 800, α = 60

Fig. 13
figure 13

Segmentation results on the coin image 280 × 393 corrupted by the Gaussian noise (0, 0.010). a Original image. b Noisy image. c Ground-truth. d Bernsen’s method. e Niblack’s method. f Sauvola’s method. g Bradley’s method. h Chou’s method. i Valizadeh’s method. j Wei’s method. k 2DOtsu. l FCM_NLS. m 2DWFPE. n Our method with T = 400, α = 30

To evaluate the effectiveness of these segmentation methods, we use a supervised evaluation measure, i.e., the misclassification error (ME) [48]. ME can be expressed as ME = 1 − (| B o  ∩ B T | + | F o  ∩ F T |)/(| B o | + | F o |) through a comparison of a segmented image and a ground-truth image, where | ⋅ | is the cardinality of a set, F T and B T represent the foreground and background pixels of the segmented image, and F o and B o represent the foreground and background pixels of the ground-truth image. The ME values of these segmentation methods are shown in Table 1. The performance of the local thresholding methods is poor, since they are very sensitive to noise. Figures 9, 10, 11, 12, and 13 show the results for the rest of the images. The global 2DOtsu, FCM_NLS, and 2DWFPE methods still cannot correctly extract all of the objects. However, our method takes the spatial information and gradient information into account in the wave transformation, thus obtaining the best results and the lowest misclassification rates.
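The ME measure is straightforward to compute from two binary masks; a minimal sketch:

```python
import numpy as np

def misclassification_error(seg, gt):
    """ME of [48]: fraction of pixels whose foreground/background label in
    the segmented image `seg` disagrees with the ground truth `gt`
    (both boolean masks, True = foreground)."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    # |F_o ∩ F_T| + |B_o ∩ B_T|: pixels labeled consistently in both masks
    agree = np.sum(seg & gt) + np.sum(~seg & ~gt)
    return 1.0 - agree / gt.size
```

ME ranges from 0 (perfect agreement with the ground truth) to 1 (every pixel mislabeled).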

Table 1 Misclassification error (ME) obtained by the eight methods on images corrupted by additive noise

4.4 Influence of the parameters α and T

The parameters α and T are very important in our method. The value of α determines the search of the local peaks and troughs in Eq. (13) (see Section 3.2.2), namely, the search of the local sub-regions, while the value of T determines the extraction of edge information in Eq. (19) (see Section 3.2.2). To study the influence of the parameters α and T, we draw the ME curves with varying α and T on three test images, as shown in Figs. 14 and 15, and we compare the segmentation results with ME = 0.05 as the reference value. For the rice image with the Gaussian noise (0, 0.010) in Fig. 14a, the ME under each α value within [22, 120] is smaller than 0.05 and presents no apparent changes, which indicates that we can obtain relatively satisfactory segmentation results in this case. Outside this range, too small or too large an α cannot extract the objects from the background correctly. Therefore, a reasonable value of α has an important influence on the wave transformation value of each pixel, and thus on the segmentation results.

Fig. 14
figure 14

ME of our method versus T and α for the rice image. a With the Gaussian noise (0, 0.010). b With the Gaussian noise (0, 0.020)

Fig. 15
figure 15

ME of our method versus T and α for the mouse and license-plate images. a The mouse image with the Gaussian noise (0, 0.010). b The license-plate image with the Gaussian noise (0, 0.010)

Actually, the value of α depends heavily on the gray difference between the objects and the background in the dark regions. In most cases, the larger the gray-level difference between the objects and the background in the dark regions, the larger the reasonable value of α and the larger the range of values α can take, and vice versa. That is, the smaller the degree of uneven lighting, the larger the range of values α can take. Under the condition of the Gaussian noise (0, 0.010), if we want to obtain an ME value smaller than 0.05, α can take values in [22, 120] for the rice image, as shown in Fig. 14a, and in [20, 220] for the license-plate image, as shown in Fig. 15b. However, for the mouse image, α can only take values in [40, 80] to produce relatively satisfactory results, as shown in Fig. 15a. Moreover, the ME value also depends on many factors, such as the proportion of dark regions in the whole image and the intensity of the local illumination variation.

Noise injection also has an important influence on the segmentation performance, especially in dark regions with a small gray difference between the objects and the background. To reduce the influence of noise injection on the search of local peaks and troughs while still differentiating the objects from the background in the dark regions, the reasonable range of α narrows with increasing noise strength. Taking the rice image as an example, to obtain an ME value smaller than 0.05, the reasonable range of the parameter α under the Gaussian noise (0, 0.020) is [42, 125], which is narrower than the range [22, 125] under the Gaussian noise (0, 0.010), as shown in Fig. 14a, b.

The segmentation results of our method also depend on the edge detection with the parameter T in Eq. (19). For the rice image, the three curves with T equal to 600, 800, and 1000, respectively, change similarly with the parameter α. However, a reasonable value of T is crucial to the segmentation performance for the mouse image, which has relatively more serious uneven lighting. As shown in Fig. 15a, the ME value with T = 600 exceeds 0.05, since incomplete edge information leads to incorrect computation of some local memberships and consequently to bad segmentation results.

In conclusion, the parameter selection of our method is affected by multiple factors, such as the noise strength, the degree of uneven lighting, and the gray difference between the objects and the background. However, the above experimental results show that the intervals [60, 80] and [800, 1000] may be reasonable ranges for α and T, respectively. Moreover, when the light intensity of the sub-regions becomes darker, the values of the two parameters should be decreased accordingly.

5 Conclusions

In this paper, we presented a novel algorithm for the segmentation of images with uneven background lighting and strong noise injection. We first treated the image as a gray wave in three-dimensional space and extracted grayscale wave curves in the horizontal and vertical directions. Then, we applied the wave transformation to the curves using fuzzy memberships to obtain the relative characteristic of each pixel, in order to reduce the influence of the uneven background lighting. Simultaneously, the non-local spatial weight matrix and edge information were taken into account in the transformation, in order to improve its robustness to noise injection and to avoid false peak and trough labeling. Finally, we segmented the wave transformation image using intuitionistic fuzzy theory. In different experiments, our algorithm demonstrated superior performance against several well-known algorithms on uneven background lighting images.

Although the proposed algorithm for uneven lighting image segmentation has some advantages, there are still two problems requiring further study. The first critical problem is the selection of the parameter α, which is set manually based on experience in this paper. Further research on the automatic determination of α, with consideration of both the uneven lighting background and the noise injection, is therefore necessary. The second problem is the detection of edge information in a noisy image. The edge information has an important impact on the wave transformation of the pixels in an uneven lighting image. In this paper, we used two 5 × 5 Sobel operators for the edge detection. However, when a fixed global threshold T is used to extract the edge information, small noise edges are still detected in some cases. Therefore, how to effectively detect large edges in the dark regions while ignoring locally fluctuating edges caused by noise will also be part of our future research.

References

  1. SB Chaabane, M Sayadi, F Fnaiech, E Brassart, Colour image segmentation using homogeneity method and data fusion techniques. EURASIP J. Adv. Signal Process. 2010(367297), 11 (2010)

  2. S Kasaei, M Hasanzadeh, Fuzzy image segmentation using membership connectedness. EURASIP J. Adv. Signal Process. 2008(417293), 13 (2008)

  3. Q Huang, W Gao, W Cai, Thresholding technique with adaptive window selection for uneven lighting image. Pattern Recogn. Lett. 26, 801–808 (2005)

  4. C Sahasrabudhes, SD Gupta, A valley-seeking threshold selection technique. Comput. Vis. Image Underst. 56, 55–65 (1992)

  5. N Ramesh, H Yoo, IK Sethi, Thresholding based on histogram approximation. IEE Proc. Vis. Image Signal Process. 142(5), 271–279 (1995)

  6. N Otsu, A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979)

  7. JC Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms (Plenum, New York, 1981)

  8. TW Ridler, S Calvard, Picture thresholding using an iterative selection method. IEEE Trans. Syst. Man Cybern. 8, 630–632 (1978)

  9. G Johannsen, J Bille, A threshold selection method using information measures, in Proceedings of the 6th International Conference on Pattern Recognition, Munich (1982), pp. 140–143

  10. PK Sahoo, C Wilkins, J Yeager, Threshold selection using Renyi's entropy. Pattern Recogn. 30(1), 71–84 (1997)

  11. FY Nie, PF Zhang, JQ Li, DH Ding, A novel generalized entropy and its application in image thresholding. Signal Process. 134, 23–34 (2017)

  12. AKC Wong, PK Sahoo, A gray-level threshold selection method based on maximum entropy principle. IEEE Trans. Syst. Man Cybern. 19, 866–871 (1989)

  13. JZ Liu, WQ Li, The automatic threshold of gray-level pictures via two-dimensional Otsu method. Acta Automat. Sin. 19(1), 101–105 (1993) (in Chinese)

  14. HY Yu, XB Zhi, JL Fan, Image segmentation based on weak fuzzy partition entropy. Neurocomputing 168, 994–1010 (2015)

  15. JH Lan, YL Zeng, Multi-threshold image segmentation using maximum fuzzy entropy based on a new 2D histogram. Optik Int. J. Light Electron Opt. 124(18), 3756–3760 (2013)

  16. F Zhao, LC Jiao, HQ Liu, XB Gao, A novel fuzzy clustering algorithm with non local adaptive spatial constraint for image segmentation. Signal Process. 91, 988–999 (2011)

  17. F Zhao, HQ Liu, JL Fan, A multiobjective spatial fuzzy clustering algorithm for image segmentation. Appl. Soft Comput. 30, 48–57 (2015)

  18. LH Son, TM Tuan, Dental segmentation from X-ray images using semi-supervised fuzzy clustering with spatial constraints. Eng. Appl. Artif. Intell. 59, 186–195 (2017)

  19. PK Sahoo, G Arora, A thresholding method based on two-dimensional Renyi's entropy. Pattern Recogn. 37, 1149–1161 (2004)

  20. AB Ishak, Choosing parameters for Rényi and Tsallis entropies within a two-dimensional multilevel image segmentation framework. Physica A 466, 521–536 (2017)

  21. Y Liu, SN Srihari, Document image binarization based on texture features. IEEE Trans. Pattern Anal. Mach. Intell. 19, 540–544 (1997)

  22. J Bernsen, Dynamic thresholding for gray-level images, in Proceedings of the Eighth International Conference on Pattern Recognition, Paris (1986), pp. 1251–1255

  23. W Niblack, An Introduction to Digital Image Processing (Prentice Hall, Englewood Cliffs, 1986), pp. 115–116

  24. J Sauvola, M Pietikainen, Adaptive document image binarization. Pattern Recogn. 33, 225–236 (2000)

  25. J Sauvola, T Seppanen, S Haapakoski, M Pietikainen, Adaptive document binarization, in Proceedings of the ICDAR (1997), pp. 147–152

  26. IJ Kim, Multi-window binarization of camera image for document recognition, in Ninth International Workshop on Frontiers in Handwriting Recognition (2004), pp. 323–327

  27. Y Yang, H Yan, An adaptive logical method for binarization of degraded document images. Pattern Recogn. 33(5), 787–807 (2000)

  28. ØD Trier, T Taxt, Improvement of 'integrated function algorithm' for binarization of document images. Pattern Recogn. Lett. 16(3), 277–283 (1995)

  29. D Bradley, G Roth, Adaptive thresholding using the integral image. J. Graph. GPU Game Tools 12(2), 13–21 (2007)

  30. IK Kim, DW Jung, RH Park, Document image binarization based on topographic analysis using a water flow model. Pattern Recogn. 35, 265–277 (2002)

  31. M Valizadeh, E Kabir, An adaptive water flow model for binarization of degraded document images. Int. J. Doc. Anal. Recognit. 16, 165–176 (2013)

  32. T Taxt, PJ Flynn, AK Jain, Segmentation of document images. IEEE Trans. Pattern Anal. Mach. Intell. 11(12), 1322–1329 (1989)

  33. L Eikvil, T Taxt, K Moen, A fast adaptive method for binarization of document images, in Proceedings of the International Conference on Document Analysis and Recognition (1991), pp. 435–443

  34. JH Park, IH Jang, NC Kim, Skew correction of business card images in PDA, in Proceedings of the IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (2003), pp. 724–727

  35. CH Chou, WH Lin, F Chang, A binarization method with learning-built rules for document images produced by cameras. Pattern Recogn. 43, 1518–1530 (2010)

  36. W Wei, XJ Shen, QJ Qian, An adaptive thresholding algorithm based on grayscale wave transformation for industrial inspection images. Acta Automat. Sin. 37(8), 944–953 (2011) (in Chinese)

  37. LA Zadeh, Fuzzy sets. Inf. Control 8(3), 338–353 (1965)

  38. SK Pal, RA King, Image enhancement using smoothing with fuzzy sets. IEEE Trans. Syst. Man Cybern. 11(7), 494–501 (1981)

  39. S Krinidis, V Chatzis, A robust fuzzy local information C-means clustering algorithm. IEEE Trans. Image Process. 19(5), 1328–1337 (2010)

  40. KT Atanassov, Intuitionistic fuzzy sets. Fuzzy Sets Syst. 20, 87–96 (1986)

  41. P Burillo, H Bustince, Entropy on intuitionistic fuzzy sets and on interval-valued fuzzy sets. Fuzzy Sets Syst. 78, 305–316 (1996)

  42. KT Atanassov, S Stoeva, Intuitionistic fuzzy sets, in Proceedings of the Polish Symposium on Interval and Fuzzy Mathematics, Poznan (1993), pp. 23–26

  43. SK Pal, DKD Majumder, Fuzzy Mathematical Approach to Pattern Recognition (Wiley, New York, 1986)

  44. T Chaira, AK Ray, Segmentation using fuzzy divergence. Pattern Recogn. Lett. 24, 1837–1844 (2003)

  45. A Buades, B Coll, JM Morel, A non-local algorithm for image denoising, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol 2 (2005), pp. 60–65

  46. PF Jin, Improved algorithm for Sobel edge detection of image. J. Appl. Opt. 29(4), 625–628 (2008) (in Chinese)

  47. IK Vlachos, GD Sergiadis, Intuitionistic fuzzy information—applications to pattern recognition. Pattern Recogn. Lett. 28, 197–206 (2007)

  48. M Sezgin, B Sankur, Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging 13(1), 146–168 (2004)


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (Nos. 61102095, 61671377, 61571361, 61340040, 61601362), the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2012JQ8045), and the Special Research Project of the Shaanxi Department of Education (No. 2013JK1131).

Author information


Corresponding author

Correspondence to Haiyan Yu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Proof of Theorem 1

Given a pixel (x_k, y_k) in an uneven lighting image f_δ, where (x_k, y_k) represents the original coordinate of the kth pixel, the square neighborhoods N_{δ,k} and N_{δ,j} centered at the two pixels (x_k, y_k) and (x_j, y_j), respectively, are:

$$ {N}_{\delta, k}={N}_k+{\delta}_k,\kern0.5em {N}_{\delta, j}={N}_j+{\delta}_j. $$
(25)

Since the intensity of the uneven light δ remains approximately constant in a local region, we have

$$ {\delta}_k\approx {\delta}_j. $$
(26)

According to Eq. (12), the weight v(k, j) between the two pixels (x_k, y_k) and (x_j, y_j) in the original even lighting image f is:

$$ v\left(k,j\right)=\frac{\exp \left(-{\left\Vert {N}_k-{N}_j\right\Vert}_{2,a}^2/{h}^2\right)}{\sum \limits_{j\in {V}_k^r}{e}^{-\left({\left\Vert {N}_k-{N}_j\right\Vert}_{2,a}^2\right)/{h}^2}}. $$
(27)

Moreover, the weight v_δ(k, j) between the two pixels (x_k, y_k) and (x_j, y_j) in the uneven lighting image f_δ is:

$$ {\displaystyle \begin{array}{cc}{v}_{\delta}\left(k,j\right)& =\frac{\exp \left(-{\left\Vert {N}_{\delta, k}-{N}_{\delta, j}\right\Vert}_{2,a}^2/{h}^2\right)}{\sum \limits_{j\in {V}_k^r}{e}^{-\left({\left\Vert {N}_{\delta, k}-{N}_{\delta, j}\right\Vert}_{2,a}^2\right)/{h}^2}}=\frac{\exp \left(-{\left\Vert \left({N}_k+{\delta}_k\right)-\left({N}_j+{\delta}_j\right)\right\Vert}_{2,a}^2/{h}^2\right)}{\sum \limits_{j\in {V}_k^r}{e}^{-\left({\left\Vert \left({N}_k+{\delta}_k\right)-\left({N}_j+{\delta}_j\right)\right\Vert}_{2,a}^2\right)/{h}^2}}\\ {}& \approx \frac{\exp \left(-{\left\Vert {N}_k-{N}_j\right\Vert}_{2,a}^2/{h}^2\right)}{\sum \limits_{j\in {V}_k^r}{e}^{-\left({\left\Vert {N}_k-{N}_j\right\Vert}_{2,a}^2\right)/{h}^2}}=v\left(k,j\right)\end{array}}. $$
(28)

That is, the weight matrix for the pixel Q(x_k, y_k) in the uneven lighting image f_δ is approximately equal to the weight matrix for the same pixel in the original image f, namely, v_δ(k, j) ≈ v(k, j).

According to Eq. (11), the estimated value \( \overline{Q}\left({x}_k,{y}_k\right) \) of the pixel (x_k, y_k) in the original even lighting image f is computed as:

$$ \overline{Q}\left({x}_k,{y}_k\right)=\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^r}v\left(k,j\right)Q\left({x}_j,{y}_j\right). $$
(29)

Then the estimated value \( {\overline{Q}}_{\delta}\left({x}_k,{y}_k\right) \) of the pixel (x_k, y_k) in the uneven lighting image f_δ is computed by:

$$ {\displaystyle \begin{array}{c}{\overline{Q}}_{\delta}\left({x}_k,{y}_k\right)\kern0.5em =\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^r}{v}_{\delta}\left(k,j\right){Q}_{\delta}\left({x}_j,{y}_j\right)=\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^r}v\left(k,j\right){Q}_{\delta}\left({x}_j,{y}_j\right)\\ {}\begin{array}{cc}& =\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^r}v\left(k,j\right)\left(Q\left({x}_j,{y}_j\right)+\delta \left({x}_j,{y}_j\right)\right)\end{array}\\ {}\begin{array}{cc}& =\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^r}v\left(k,j\right)Q\left({x}_j,{y}_j\right)+\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^r}v\left(k,j\right)\delta \left({x}_j,{y}_j\right)\end{array}\end{array}}. $$
(30)

Since the weight matrix satisfies the condition \( \sum \limits_{j\in {V}_k^r}v\left(k,j\right)=1 \) (see Section 3.2.1) and the intensity of the uneven light remains approximately constant in the local region, i.e., δ(x_k, y_k) ≈ δ(x_j, y_j), it is obtained that:

$$ \sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^r}v\left(k,j\right)\delta \left({x}_j,{y}_j\right)\approx \delta \left({x}_k,{y}_k\right). $$
(31)

Substituting Eq. (31) into Eq. (30), it follows that:

$$ {\overline{Q}}_{\delta}\left({x}_k,{y}_k\right)\approx \overline{Q}\left({x}_k,{y}_k\right)+\delta \left({x}_k,{y}_k\right). $$
(32)

Then the theorem holds.
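Theorem 1 can be checked numerically. The sketch below computes the normalized non-local weights of Eq. (27) for a few hypothetical flattened patches and verifies that a locally constant illumination offset δ leaves the weight matrix unchanged (exactly here, since the offset is strictly constant; the theorem asserts only approximate equality). The function name, patch values, and filtering parameter h are illustrative assumptions, and the Gaussian kernel in the patch distance is omitted for simplicity.

```python
import math

def nlm_weights(patches, k, h=10.0):
    """Normalized non-local weights v(k, j), in the spirit of Eq. (27):
    exp(-||N_k - N_j||^2 / h^2), normalized to sum to 1 over the search set."""
    d2 = [sum((a - b) ** 2 for a, b in zip(patches[k], p)) for p in patches]
    w = [math.exp(-d / h ** 2) for d in d2]
    z = sum(w)
    return [wi / z for wi in w]

# Three flattened 3x3 patches from a hypothetical even-lighting image f:
# the first two are similar, the third is much brighter.
patches = [[10, 12, 11, 10, 13, 12, 11, 10, 12],
           [11, 12, 10, 10, 12, 13, 11, 11, 12],
           [40, 42, 41, 40, 43, 42, 41, 40, 42]]

# Same patches under a locally constant uneven-lighting offset delta.
delta = 25
patches_delta = [[q + delta for q in p] for p in patches]

v = nlm_weights(patches, 0)
v_delta = nlm_weights(patches_delta, 0)

# The offset cancels inside every patch difference N_k - N_j, so the
# weight matrix is unchanged: v_delta(k, j) = v(k, j).
assert all(abs(a - b) < 1e-12 for a, b in zip(v, v_delta))
```

Because the weights depend only on patch differences, any additive component that is constant across the search window drops out, which is exactly why the non-local spatial constraint is robust to a slowly varying background.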

1.2 Proof of Theorem 2

Given a wave gray curve G(k), k = 0, 1, …, K, in an arbitrary direction of the image f, let ϕ_s be the sth local sub-region of the curve, where T_s and P_s are the trough and the peak of the region, respectively, and let q = g(k) be the gray level of a pixel k ∈ [T_s, P_s].

The gray levels of the trough T_s, the peak P_s, and an arbitrary pixel k in the sub-region ϕ_s of the uneven lighting image f_δ are, respectively:

$$ {g}_{\delta}\left({T}_s\right)=g\left({T}_s\right)+\delta \left({T}_s\right),\kern0.5em {g}_{\delta}\left({P}_s\right)=g\left({P}_s\right)+\delta \left({P}_s\right),\kern0.5em {g}_{\delta }(k)=g(k)+\delta (k). $$
(33)

Since the intensity of the uneven light δ(x, y) remains approximately constant in each sub-region, we have

$$ \delta \left({T}_s\right)\approx \delta \left({P}_s\right)\approx \delta (k). $$
(34)

If g(T_s) ≤ g(k) ≤ (g(T_s) + g(P_s))/2, the membership degree (the wave transformation value) of the pixel k in the original even lighting image f, according to the S-function in Eq. (14), is

$$ \mu \left(g(k)\right)=2{\left(\frac{g(k)-g\left({T}_s\right)}{g\left({P}_s\right)-g\left({T}_s\right)}\right)}^2. $$
(35)

The membership degree (the wave transformation value) of the pixel k in the uneven lighting image f_δ, according to the S-function in Eq. (14), is

$$ {\displaystyle \begin{array}{cc}{\mu}_{\delta}\left(g(k)\right)& =2{\left(\frac{g_{\delta }(k)-{g}_{\delta}\left({T}_s\right)}{g_{\delta}\left({P}_s\right)-{g}_{\delta}\left({T}_s\right)}\right)}^2=2{\left(\frac{\left(g(k)+\delta (k)\right)-\left(g\left({T}_s\right)+\delta \left({T}_s\right)\right)}{\left(g\left({P}_s\right)+\delta \left({P}_s\right)\right)-\left(g\left({T}_s\right)+\delta \left({T}_s\right)\right)}\right)}^2\\ {}& \approx 2{\left(\frac{g(k)-g\left({T}_s\right)}{g\left({P}_s\right)-g\left({T}_s\right)}\right)}^2=\mu \left(g(k)\right)\end{array}}. $$
(36)

That is, μ_δ(g(k)) ≈ μ(g(k)) when g(T_s) ≤ g(k) ≤ (g(T_s) + g(P_s))/2.

Similarly, if (g(T_s) + g(P_s))/2 < g(k) ≤ g(P_s), the membership degree of the pixel k in the original even lighting image f, according to the S-function in Eq. (14), is

$$ \mu \left(g(k)\right)=1-2{\left(\frac{g(k)-g\left({P}_s\right)}{g\left({P}_s\right)-g\left({T}_s\right)}\right)}^2. $$
(37)

The membership degree of the pixel k in the uneven lighting image f_δ, according to the S-function in Eq. (14), is

$$ {\displaystyle \begin{array}{cc}{\mu}_{\delta}\left(g(k)\right)& =1-2{\left(\frac{g_{\delta }(k)-{g}_{\delta}\left({P}_s\right)}{g_{\delta}\left({P}_s\right)-{g}_{\delta}\left({T}_s\right)}\right)}^2=1-2{\left(\frac{\left(g(k)+\delta (k)\right)-\left(g\left({P}_s\right)+\delta \left({P}_s\right)\right)}{\left(g\left({P}_s\right)+\delta \left({P}_s\right)\right)-\left(g\left({T}_s\right)+\delta \left({T}_s\right)\right)}\right)}^2\\ {}& \approx 1-2{\left(\frac{g(k)-g\left({P}_s\right)}{g\left({P}_s\right)-g\left({T}_s\right)}\right)}^2=\mu \left(g(k)\right)\end{array}}. $$
(38)

That is, μ_δ(g(k)) ≈ μ(g(k)) when (g(T_s) + g(P_s))/2 < g(k) ≤ g(P_s).

Therefore, μ_δ(g(k)) ≈ μ(g(k)) for each pixel k in the sub-region ϕ_s = [T_s, P_s]. Then, according to Eq. (2) and Eq. (3), it is concluded that the two transformation matrices for the whole image satisfy the condition Ψ_δ(f) ≈ Ψ(f).

Then the theorem holds.
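The shift-invariance of the S-function membership proved above can be demonstrated directly. In the sketch below, the trough, peak, pixel gray level, and offset are hypothetical values; the function follows the two-branch S-function of Eqs. (35) and (37).

```python
def s_membership(g, t, p):
    """S-function membership on a sub-region [t, p], where t and p are
    the trough and peak gray levels (Eqs. (35) and (37))."""
    if g <= (t + p) / 2:
        return 2 * ((g - t) / (p - t)) ** 2
    return 1 - 2 * ((g - p) / (p - t)) ** 2

# Hypothetical trough/peak of one local sub-region and a pixel inside it.
t, p, g = 40, 120, 70
delta = 35  # locally constant uneven-lighting offset

mu = s_membership(g, t, p)
mu_delta = s_membership(g + delta, t + delta, p + delta)

# The shift cancels in every numerator and denominator of the S-function,
# so the wave transformation value is invariant: mu_delta == mu.
assert abs(mu - mu_delta) < 1e-12
```

This is the core mechanism of the wave transformation: the membership depends only on the pixel's position between its local trough and peak, not on the absolute gray level, so a slowly varying background cancels out.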

1.3 Proof of Theorem 3

Given a pixel (x_k, y_k) in an uneven lighting image f_δ, where (x_k, y_k) represents the original coordinate of the kth pixel, let v_δ(k, j) be the weight matrix for the pixel (x_k, y_k).

The wave transformation value of the pixel k in the original image f in the direction d_H, modified with spatial information, is:

$$ {G}_{d_H}^{{\prime\prime}}\left({x}_k,{y}_k\right)=\frac{\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^{r^{\prime }}}v\left(k,j\right){\varPsi}_{d_H}\left({x}_j,{y}_j\right)}{\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^{r^{\prime }}}v\left(k,j\right)}. $$
(39)

The wave transformation value of the pixel k in the uneven lighting image f_δ in the direction d_H, modified with spatial information, is:

$$ {G}_{\delta, {d}_H}^{{\prime\prime}}\left({x}_k,{y}_k\right)=\frac{\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^{r^{\prime }}}{v}_{\delta}\left(k,j\right){\varPsi}_{\delta, {d}_H}\left({x}_j,{y}_j\right)}{\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^{r^{\prime }}}{v}_{\delta}\left(k,j\right)}. $$
(40)

According to Theorem 1 and Theorem 2, we have v_δ(k, j) ≈ v(k, j) and \( {\varPsi}_{\delta, {d}_H}\left({x}_j,{y}_j\right)\approx {\varPsi}_{d_H}\left({x}_j,{y}_j\right) \). Then, it follows that

$$ {G}_{\delta, {d}_H}^{{\prime\prime}}\left({x}_k,{y}_k\right)\approx \frac{\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^{r^{\prime }}}v\left(k,j\right){\varPsi}_{d_H}\left({x}_j,{y}_j\right)}{\sum \limits_{\left({x}_j,{y}_j\right)\in {V}_k^{r^{\prime }}}v\left(k,j\right)}={G}_{d_H}^{{\prime\prime}}\left({x}_k,{y}_k\right). $$
(41)

Similarly, we can get \( {G}_{\delta, {d}_L}^{{\prime\prime}}\left({x}_k,{y}_k\right)\approx {G}_{d_L}^{{\prime\prime}}\left({x}_k,{y}_k\right) \) for the pixels in the vertical direction d_L. Then, according to Eq. (22), the integration of the two transformation values, \( {G}_{\delta, {d}_H}^{{\prime\prime}}\left({x}_k,{y}_k\right) \) and \( {G}_{\delta, {d}_L}^{{\prime\prime}}\left({x}_k,{y}_k\right) \), of each pixel (x_k, y_k) is computed by:

$$ {\varPsi}_{\delta}^{\prime}\left({x}_k,{y}_k\right)=\frac{{G}_{\delta, {d}_H}^{{\prime\prime}}\left({x}_k,{y}_k\right)+{G}_{\delta, {d}_L}^{{\prime\prime}}\left({x}_k,{y}_k\right)}{2}\approx \frac{{G}_{d_H}^{{\prime\prime}}\left({x}_k,{y}_k\right)+{G}_{d_L}^{{\prime\prime}}\left({x}_k,{y}_k\right)}{2}={\varPsi}^{\prime}\left({x}_k,{y}_k\right). $$
(42)

Therefore, \( {\varPsi}_{\delta}^{\prime}\left({x}_k,{y}_k\right)\approx {\varPsi}^{\prime}\left({x}_k,{y}_k\right) \). Then the theorem holds.
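The spatially modified transformation value of Eq. (39) is simply a weighted mean, so the stability asserted by Theorem 3 follows from the stability of its inputs. The sketch below illustrates this with hypothetical neighborhood values: the Ψ values, weights, residual errors, and function name are illustrative assumptions only.

```python
def weighted_transform(psi, v):
    """Spatially modified wave transformation value in the spirit of
    Eq. (39): the weighted mean of the neighborhood's values."""
    return sum(w * p for w, p in zip(v, psi)) / sum(v)

# Hypothetical transformation values of a pixel's neighborhood in the
# even-lighting image f, and their counterparts under uneven lighting,
# which by Theorem 2 differ only by small residual errors.
psi = [0.20, 0.25, 0.22, 0.30]
psi_delta = [p + e for p, e in zip(psi, [0.001, -0.002, 0.001, 0.0])]
v = [0.4, 0.3, 0.2, 0.1]  # weights, approximately invariant by Theorem 1

g = weighted_transform(psi, v)
g_delta = weighted_transform(psi_delta, v)

# Small perturbations of the inputs give a small perturbation of the
# weighted mean, which is the content of Eq. (41).
assert abs(g - g_delta) < 0.01
```

Since the weighted mean is a convex combination, its error is bounded by the largest input error, which is why the approximations of Theorems 1 and 2 carry through to the final transformation image.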

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Yu, H., Fan, J. A novel segmentation method for uneven lighting image with noise injection based on non-local spatial information and intuitionistic fuzzy entropy. EURASIP J. Adv. Signal Process. 2017, 74 (2017). https://doi.org/10.1186/s13634-017-0509-5
