
Multifocus image fusion scheme based on feature contrast in the lifting stationary wavelet domain

Abstract

For the fusion of multifocus images, a novel image fusion method based on multiscale products in the lifting stationary wavelet transform (LSWT) domain is proposed in this article. In order to avoid the influence of noise and select the coefficients of the fused image properly, different subbands employ different selection principles. For the low frequency subband coefficients, a new modified energy of Laplacian (EOL) is proposed and used as the focus measure to select coefficients from the clear parts of the low frequency subband images; for the high frequency subband coefficients, a novel feature contrast measurement of the multiscale products is proposed, shown to be more suitable for the fusion of multifocus images than the traditional contrast measurement, and used to select coefficients from the sharp parts of the high frequency subbands. Experimental results demonstrate that the proposed fusion approach outperforms the traditional discrete wavelet transform (DWT)-based, LSWT-based and LSWT-traditional-contrast (LSWT-Tra-Con)-based image fusion methods in terms of both visual quality and objective evaluation, even when the source images are corrupted by Gaussian noise.

1. Introduction

In applications of digital cameras, when a lens focuses on a subject at a certain distance, all subjects at that distance are sharply focused. Subjects not at the same distance are out of focus and theoretically are not sharp. It is often not possible to obtain an image that contains all relevant objects in focus. One way to overcome this problem is image fusion, in which one acquires a series of pictures with different focus settings and fuses them to produce an image with extended depth of field [1-3]. During the fusion process, all the important visual information found in the input images must be transferred into the fused image without the introduction of artifacts. In addition, the fusion algorithm should be reliable and robust to imperfections such as noise or mis-registration [4-6].

During the last decade, a number of techniques for image fusion have been proposed. A simple image fusion method consists of taking the average of the source images pixel by pixel. However, along with this simplicity come several undesired side effects, including reduced contrast. In recent years, many researchers have recognized that multiscale transforms (MST) are very useful for image fusion, and various MST-based fusion methods have been proposed [7-11]. In the MST domain, the discrete wavelet transform (DWT) has become the most popular and important multiscale decomposition method in image fusion. Compared with the Laplacian pyramid transform, the DWT has been found to have several advantages: (1) The DWT not only possesses localization but also provides directional information, while the pyramid representation fails to introduce any spatial orientation selectivity into the decomposition process [9], so the DWT can represent the underlying information of the source images more efficiently, making the fused image more accurate. (2) No blocking artifacts, which often occur in Laplacian-pyramid-fused images, can be observed in DWT-based fused images. (3) DWT-based fusion has better signal-to-noise ratios than Laplacian-based fusion [12]. (4) DWT-based fused images improve perception over pyramid-based fused images. More advantages of the DWT over the Laplacian pyramid scheme can be found in [9, 12].

However, the DWT has its own disadvantages. It needs a great deal of convolution calculation, consuming much time and occupying memory resources, which impedes its real-time application. Relative to the DWT, the lifting wavelet transform (LWT) [13] can overcome these shortcomings. Unfortunately, both the original LWT and the DWT lack shift-invariance and cause pseudo-Gibbs phenomena around singularities [14], which reduce the resultant image quality. Thus, the lifting stationary wavelet transform (LSWT) [15], a fully shift-invariant form of the LWT, is introduced and used as the MST method in this article.

Besides the LSWT discussed in the above paragraph, the nonsubsampled contourlet transform (NSCT) [16], which also possesses shift-invariance, is another important MST method in the image fusion field. In contrast to the LSWT, the NSCT is built upon nonsubsampled pyramids and nonsubsampled directional filter banks [16]. In the NSCT, the nonsubsampled pyramids are first used to achieve the multiscale decomposition, and the nonsubsampled directional filter banks are then employed to achieve the directional decomposition. The number of directions at each decomposition level can differ, which is much more flexible than the three directions of the wavelet, so the NSCT can obtain better fusion results than the LSWT. However, the NSCT is more time consuming than the LSWT because of its multiple directions and complexity, which greatly impedes its real-time application. Considering both the fusion results and the computational complexity, our proposed method uses the LSWT as the MST method.

In MST-domain image fusion algorithms, one of the most important factors for improving fusion quality is the selection of fusion rules, which influences the performance of the fusion algorithm remarkably. According to physiological and psychological research, the human visual system (HVS) is highly sensitive to the local image contrast level. To meet this requirement, Toet and van Ruyven [17] developed the local luminance contrast in their research on the contrast pyramid (CP). In the local luminance contrast, the contrast level is defined as the ratio of the high frequency component of the image to the local luminance of the background.

Based on the idea of [17], many different forms of contrast measurement have been proposed and successfully used in image fusion [18, 19]. However, in these contrast measurements, the value (or absolute value) of a single pixel of the high frequency subband in the MST domain is directly used as the strength of the high frequency component. In fact, the value (or absolute value) of a single pixel of the high frequency subband is of very limited use in determining which pixel comes from the clear part of the sub-images, so purely using a single pixel as the high frequency component in the local contrast measurement is not ideal. In addition, almost none of the MST-based image fusion algorithms consider the influence of noise. In many practical applications, additive Gaussian noise, which adds to each image pixel a value drawn from a zero-mean Gaussian distribution, can be systematically introduced into an image during acquisition. This noise may cause miscalculation of sharpness values, which, in turn, degrades the performance of image fusion. To be useful in real operation, a fusion algorithm should provide pleasing performance for clean image fusion; meanwhile, it should be reliable and robust to imperfections such as noise.

It is well known that there exist dependencies between wavelet coefficients. If a wavelet coefficient produced by a true signal is of large magnitude at a finer scale, its parents at coarser scales are likely to be large as well. However, for those coefficients caused by noise, the magnitudes will decay rapidly along the scales. So, multiplying the adjacent wavelet scales, namely multiscale products (MSP), can sharpen the important structures while weakening noise [20, 21]. Therefore, multiscale products can distinguish edge structures from noise more effectively.

To make up for the aforementioned deficiencies of the traditional MST-based image fusion methods, we present a new multifocus image fusion scheme which incorporates the merits of interscale dependencies into the image fusion field. In this method, after decomposing the original images using the LSWT, we use a new modified energy of Laplacian, which reflects the edge features of the low frequency subimage in the LSWT domain, as the focus measure to select the coefficients of the fused image; when choosing the high frequency subband coefficients, a novel local neighborhood feature contrast of the multiscale products, which effectively represents the salient features and sharp boundaries of an image, is developed and used as the measurement to select coefficients from the clear parts of the source images. The experimental results show that the proposed method performs well in the fusion of multifocus images, whether they are clean or noisy, and outperforms typical wavelet-based, LSWT-based, NSCT-based and LSWT-typical-contrast-based fusion algorithms in terms of objective criteria and visual appearance.

The article is organized as follows. Sections 2 and 3 introduce the theory of the LSWT and of multiscale products, respectively; Section 4 describes the image fusion algorithm using the LSWT and multiscale products; Section 5 compares the performance of the new algorithm with that of other conventional fusion techniques applied to sequences of multifocus test images. Finally, Section 6 concludes the article with a short summary.

2. Lifting stationary wavelet transform

2.1. Lifting wavelet transform

The lifting wavelet transform (LWT), proposed by Sweldens [22], is a wavelet construction method using the lifting scheme in the time domain. The main feature of the LWT is that it provides an entirely spatial-domain interpretation of the transform, as opposed to the traditional frequency-domain-based constructions. It abandons the Fourier transform as a design tool for wavelets, and wavelets are no longer defined as translates and dilates of one fixed function. Compared with the classical wavelet transform, the LWT requires less computation and memory, and can produce an integer-to-integer wavelet transform. It is always perfectly reconstructing no matter how the prediction and update operators are designed. Moreover, it possesses several further advantages, including the possibility of adaptive and nonlinear design, in-place calculation, and so on [13, 22, 23]. The decomposition stage of the LWT consists of three steps: split, prediction and update.

In the split step, the original signal (or approximation signal) a_l at level l is split into even samples and odd samples:

a_{l+1}(i) = a_l(2i), \quad d_{l+1}(i) = a_l(2i+1)
(1)

In the prediction step, we apply a prediction operator P to a_{l+1} to predict d_{l+1}. The resultant prediction error d_{l+1} is regarded as the detail signal of a_l:

d_{l+1}(i) = d_{l+1}(i) - \sum_{r=-M/2+1}^{M/2} p_r \, a_{l+1}(i+r)
(2)

where p_r is one of the coefficients of P and M is the length of the prediction filter.

In the update step, the even samples a_{l+1} are updated by applying an update operator U to the detail signal d_{l+1} and adding the result to a_{l+1}; the resultant a_{l+1} can be regarded as the approximation signal of a_l:

a_{l+1}(i) = a_{l+1}(i) + \sum_{j=-N/2+1}^{N/2} u_j \, d_{l+1}(i+j-1)
(3)

where u_j is a coefficient of U and N is the length of the update filter.

Let a_l be the input signal of the lifting scheme; the detail and approximation signals at lower resolution levels are obtained by iterating the above three steps on the output approximation signal a_{l+1}.

The inverse LWT is performed by reversing the prediction and update operators and changing each '+' into '-' and vice versa. The complete reconstruction of the LWT is expressed in Equations (4)-(6). Figure 1 depicts the structure of the LWT. The computational costs of the forward and inverse transforms are exactly the same. The prediction operator P and update operator U can be designed by the interpolation subdivision method introduced in [23]; choosing different P and U is equivalent to choosing different biorthogonal wavelet filters [24].

Figure 1. Structure of the LWT: analysis side (left part) and synthesis side (right part).

a_{l+1}(i) = a_{l+1}(i) - \sum_{j=-N/2+1}^{N/2} u_j \, d_{l+1}(i+j-1)
(4)
d_{l+1}(i) = d_{l+1}(i) + \sum_{r=-M/2+1}^{M/2} p_r \, a_{l+1}(i+r)
(5)
a_l(2i) = a_{l+1}(i), \quad a_l(2i+1) = d_{l+1}(i)
(6)
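
To make the three lifting steps concrete, the following is a minimal numpy sketch of one LWT analysis/synthesis level. It assumes the simple two-tap operators M = N = 2 with p = (1/2, 1/2) and u = (1/4, 1/4) (the familiar 5/3 lifting pair), an even-length input, and clamped boundary handling; these choices are illustrative assumptions, not prescriptions from the article.

```python
import numpy as np

def lwt_level(x, p=(0.5, 0.5), u=(0.25, 0.25)):
    """One LWT analysis level, Eqs. (1)-(3): split, predict, update.
    x is assumed to have even length."""
    assert x.size % 2 == 0
    a = x[0::2].astype(float).copy()      # even samples, Eq. (1)
    d = x[1::2].astype(float).copy()      # odd samples,  Eq. (1)
    a_next = np.append(a[1:], a[-1])      # a_{l+1}(i+1), edge clamped
    d -= p[0] * a + p[1] * a_next         # prediction step, Eq. (2)
    d_prev = np.append(d[0], d[:-1])      # d_{l+1}(i-1), edge clamped
    a += u[0] * d_prev + u[1] * d         # update step, Eq. (3)
    return a, d

def ilwt_level(a, d, p=(0.5, 0.5), u=(0.25, 0.25)):
    """One LWT synthesis level, Eqs. (4)-(6): undo update, undo predict, merge."""
    a, d = a.copy(), d.copy()
    d_prev = np.append(d[0], d[:-1])
    a -= u[0] * d_prev + u[1] * d         # Eq. (4)
    a_next = np.append(a[1:], a[-1])
    d += p[0] * a + p[1] * a_next         # Eq. (5)
    x = np.empty(a.size + d.size)
    x[0::2], x[1::2] = a, d               # Eq. (6)
    return x
```

Running ilwt_level(*lwt_level(x)) returns x (up to floating-point rounding), illustrating the built-in perfect reconstruction of the lifting scheme. For a 2-D image, the same level is applied separably along rows and columns to obtain the usual LL, LH, HL and HH subbands.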

2.2. Lifting stationary wavelet transform

In the LWT, shift-invariance is not ensured because of the split step, in which the lengths of the approximation and detail signals are halved. However, shift-invariance is desirable in many image applications such as image enhancement, image denoising and image fusion. In order to obtain the shift-invariant LSWT, the method of [15] is adopted in this article.

In the LSWT, the split step is discarded. Let P^l and U^l denote the prediction and update operators of the lifting stationary wavelet at level l, respectively. The initial prediction operator P^0 and initial update operator U^0 are obtained once M and N are determined, where P^0 = {p_m}, m = 0, 1, ..., M - 1, and U^0 = {u_n}, n = 0, 1, ..., N - 1. The coefficients of P^l and U^l are designed by padding P^0 and U^0 with zeros [15]. The prediction and update coefficients at level l of the LSWT are expressed as follows:

p_i^l = \left\{ p_0^0, \underbrace{0, \dots, 0}_{2^l - 1}, p_1^0, \underbrace{0, \dots, 0}_{2^l - 1}, p_2^0, \dots, p_{M-2}^0, \underbrace{0, \dots, 0}_{2^l - 1}, p_{M-1}^0 \right\}
(7)
u_j^l = \left\{ u_0^0, \underbrace{0, \dots, 0}_{2^l - 1}, u_1^0, \underbrace{0, \dots, 0}_{2^l - 1}, u_2^0, \dots, u_{N-2}^0, \underbrace{0, \dots, 0}_{2^l - 1}, u_{N-1}^0 \right\}
(8)

The decomposition of an approximation signal a_l at level l via the lifting stationary wavelet is expressed by the following equations:

d_{l+1} = a_l - P^{l+1} a_l, \quad a_{l+1} = a_l + U^{l+1} d_{l+1}
(9)

where d_{l+1} and a_{l+1} are the detail and approximation signals of a_l at level l + 1.

The reconstruction procedure of the LSWT follows directly from its forward transform and is expressed as:

a_l = \frac{1}{2} \left[ \left( a_{l+1} - U^{l+1} d_{l+1} \right) + d_{l+1} + P^{l+1} \left( a_{l+1} - U^{l+1} d_{l+1} \right) \right]
(10)

The forward and inverse transforms of the LSWT are shown in Figure 2.

Figure 2. The forward and inverse transforms of the LSWT.

Compared with the DWT, the LSWT does not downsample or upsample the highpass and lowpass coefficients during the decomposition and reconstruction of the image. So, the LSWT not only retains the desirable properties of the LWT but also possesses shift-invariance. When the LSWT is introduced into image fusion, more information for fusion can be obtained. In addition, the sizes of the different sub-images are identical, so it is easy to find the relationship among different subbands, which is beneficial for designing fusion rules [25]. Therefore, the LSWT is well suited to image fusion.
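
As a concrete illustration of Eqs. (7)-(9), the sketch below computes one undecimated LSWT level for a 1-D signal: the split step is dropped, and the same two-tap base operators as before act on neighbors 2^l samples apart, which is one common way to realize the zero-padded operators P^l and U^l. The alignment convention and the clamped boundary handling are assumptions of this sketch.

```python
import numpy as np

def lswt_level(a, level, p0=(0.5, 0.5), u0=(0.25, 0.25)):
    """One undecimated LSWT level, Eq. (9): d = a - P a, a_next = a + U d.
    Zero-padding the base taps (Eqs. 7-8) makes them act 2**level apart."""
    step = 2 ** level
    pad = np.pad(a.astype(float), step, mode='edge')
    left, right = pad[:-2 * step], pad[2 * step:]    # a(i - step), a(i + step)
    d = a - (p0[0] * left + p0[1] * right)           # detail signal d_{l+1}
    pad_d = np.pad(d, step, mode='edge')
    d_left, d_right = pad_d[:-2 * step], pad_d[2 * step:]
    a_next = a + (u0[0] * d_left + u0[1] * d_right)  # approximation a_{l+1}
    return a_next, d
```

Note that a_next and d keep the full input length; this is exactly the property that makes it easy to relate subbands across scales when designing fusion rules.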

3. Multiscale products of LSWT

In MST-based image fusion algorithms, almost all schemes design the fusion rule, namely the selection principle for the high frequency subband coefficients (abbreviated 'fre-coefs' in the figures of this article), on the wavelet coefficients directly. It is worth noting that much of the noise is also related to the high frequencies. As a result, the fused images obtained by these methods are noisier than the source images. It is well known that dependencies exist between wavelet coefficients: if a coefficient at a coarser scale has small magnitude, its descendant coefficients at finer scales are likely to be small, and vice versa. Multiplying two adjacent wavelet subbands can therefore amplify the significant features and dilute noise [21, 26].

Suppose f(x) is a one-dimensional (1-D) discrete signal. We define the multiscale products of W_l f as

P_l f(x) = \prod_{k=-k_1}^{k_2} W_{l+k} f(x)
(11)

where k_1 < l and k_2 \le L - l are nonnegative integers, L denotes the maximum decomposition level, and W_l f(x) denotes the LSWT of the signal f(x) at scale l and position x.

The support of an isolated edge increases by a factor of two across scales, and neighboring edges interfere with each other at coarse scales. So in practice it is sufficient to implement the multiplication at two adjacent scales [20]. Letting k_1 = 0 and k_2 = 1, we calculate the LSWT scale products as

P_l f(x) = W_l f(x) \, W_{l+1} f(x)
(12)

According to [7] and Equation (12), for a 2-D image f, the multiscale products at the l-th scale, d-th direction and location (x, y) are defined as

P_l^d(x, y) = W_l^d f(x, y) \, W_{l+1}^d f(x, y)
(13)

where d = 1, 2, 3 denote the horizontal, vertical and diagonal directions.
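
Since the undecimated subbands of one direction all share the source image's size, the products of Eqs. (12)-(13) reduce to element-wise multiplications of adjacent-scale arrays. A minimal sketch:

```python
def multiscale_products(subbands):
    """Multiscale products of adjacent scales, Eqs. (12)-(13).
    `subbands` is the list [W_1, ..., W_L] of same-shape high frequency
    numpy arrays for one direction d; returns [P_1, ..., P_{L-1}] with
    P_l = W_l * W_{l+1} (element-wise)."""
    return [w * w_next for w, w_next in zip(subbands[:-1], subbands[1:])]
```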

To demonstrate the merits of the multiscale products of the LSWT, Figure 3 illustrates the LSWT and the multiscale products of a noisy test image (f_1 = g_1 + n, where n is Gaussian noise with variance δ = 0.01). Though the LSWT coefficients of the original signal g_1 are immersed in noise at fine scales, they are enhanced in the scale products P_l f. The significant features of g_1 are more distinguishable in P_l f than in W_l f. So we can conclude that the multiscale products of the LSWT amplify the significant features and dilute noise.

Figure 3. The noisy test image 'Pepper' (variance δ = 0.01), the HL sub-images of the LSWT, and the multiscale products at the first three scales: (a) noisy image 'Pepper'; (b)-(e) HL1-HL4 high frequency sub-images of 'Pepper'; (f)-(h) multiscale products at the first three scales.

4. The proposed fusion algorithm

A good image fusion algorithm should preserve all the salient features of the source images and introduce as few artifacts or inconsistencies as possible. In addition, the fusion algorithm should be reliable and robust to imperfections such as noise. In this article, we develop a novel multifocus image fusion scheme that incorporates the merits of the interscale dependencies of the LSWT into the image fusion technique. Two adjacent wavelet subbands are multiplied to amplify the significant features and dilute noise. In contrast to conventional MST-based image fusion schemes, we design the fusion rule of the high frequency subbands on the multiscale products instead of the wavelet coefficients. Our proposed image fusion method is therefore fairly resistant to noise, because the multiscale products distinguish edge structures from noise effectively.

Apart from the LSWT and the multiscale products discussed in the sections above, the fusion rules, namely the selection principles for the different subband coefficients, are the other important component of our proposed fusion method. The remainder of this article is concerned with the design of novel fusion rules for the low frequency and high frequency subband coefficients. Throughout this study, it is assumed that the images have been appropriately pre-registered, so that corresponding features coincide pixel to pixel [27]. To simplify the discussion, we assume the fusion process generates a composite image F from a pair of source images denoted by A and B. The general procedure of the proposed LSWT-MSP-based fusion algorithm is illustrated in Figure 4 and implemented as follows:

Figure 4. Schematic diagram of the LSWT-MSP-based fusion algorithm.

  (1) Decompose the registered source images A and B, respectively, into one low frequency subimage and a series of high frequency subimages via the LSWT.

  (2) Select the fused coefficients for the low frequency subimage and each high frequency subimage of A and B according to the fusion rules of Sections 4.1 and 4.2.

  (3) Reconstruct the fused image F from the new fused subimage coefficients by taking the inverse LSWT (a minimal skeleton of these three steps is sketched below).
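
The following skeleton outlines the three steps; lswt2, ilswt2, fuse_low_ieol and fuse_high_msp_con are hypothetical helper names standing in for a 2-D LSWT implementation and for the fusion rules developed in Sections 4.1 and 4.2.

```python
def fuse_lswt_msp(img_a, img_b, levels=3):
    """Skeleton of the proposed pipeline (all helper names are illustrative)."""
    low_a, highs_a = lswt2(img_a, levels)        # step (1): decompose A
    low_b, highs_b = lswt2(img_b, levels)        # step (1): decompose B
    low_f = fuse_low_ieol(low_a, low_b)          # step (2): 'IEOL-max', Eq. (18)
    highs_f = fuse_high_msp_con(highs_a, highs_b,
                                low_a, low_b)    # step (2): 'MSP-con-max', Eq. (27)
    return ilswt2(low_f, highs_f)                # step (3): inverse LSWT
```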

4.1. Selection of lowpass subband coefficients

As the coefficients in the coarsest-scale subband represent the approximation component of the source image, the simplest way to produce the composite coefficients is the conventional averaging method. However, this reduces the contrast of the fused image. To improve the fused image quality, a clarity measure should be defined to determine whether a coefficient of the low frequency subband is in focus or out of focus.

For multifocus image fusion, many typical focus measures, e.g. the variance, the energy of image gradient (EOG), the spatial frequency (SF), and the energy of Laplacian (EOL) of the image, are compared in the literature [28]. They all measure the variation of pixels; pixels with greater values of these measures, when the source images are compared with each other, are considered to come from the focused parts. According to [28, 29], the EOL provides better performance than the SF and the EOG for fusing multifocus images. In this article, we use a new improved energy of Laplacian (IEOL) as the focus measure to select coefficients from the clear parts of the source images.

The complete original expression of the energy of Laplacian (EOL) of the image f is shown in Equation (14):

\mathrm{EOL} = \sum_x \sum_y \left( f_{xx} + f_{yy} \right)^2
(14)

where

f_{xx} + f_{yy} = -f(x-1, y-1) - 4f(x-1, y) - f(x-1, y+1) - 4f(x, y-1) + 20 f(x, y) - 4f(x, y+1) - f(x+1, y-1) - 4f(x+1, y) - f(x+1, y+1)
(15)

In Equation (15), f(x, y) is the gray value of the pixel at position (x, y) of image f, and f_{xx} + f_{yy} represents the image gradient obtained by the Laplacian operator [-1, -4, -1; -4, 20, -4; -1, -4, -1].

However, the second derivatives in different directions may have different signs, which can cancel each other; this phenomenon occurs frequently in textured images. In order to avoid this problem and maintain the robustness of the algorithm against adverse effects that may occur in image fusion, we use an improved EOL (IEOL) as the clarity measure to select coefficients from the clear parts of the source images.

The improved sum of Laplacian (ISL) and the improved energy of Laplacian (IEOL) of image f are computed as:

\mathrm{ISL}(x, y) = \left| 8f(x, y) - 4f(x-1, y) - 4f(x+1, y) \right| + \left| 8f(x, y) - 4f(x, y-1) - 4f(x, y+1) \right| + \left| 2f(x, y) - f(x-1, y+1) - f(x+1, y-1) \right| + \left| 2f(x, y) - f(x-1, y-1) - f(x+1, y+1) \right|
(16)
\mathrm{IEOL}(x, y) = \sum_a \sum_b W_l(a, b) \left( \mathrm{ISL}(x+a, y+b) \right)^2
(17)

where W_l is a template of relatively small size that must satisfy the normalization rule \sum_a \sum_b W_l(a, b) = 1. The low frequency subband contains low frequency information; in order to match the information in the LSWT neighborhood of the low frequency subband, the center value of the template and its neighboring values should differ little from each other [30]. In this article, the template size is 3 × 3, and, in order to emphasize the center pixel of the window, a weighted template is used, given as:

W_l(a, b) = \frac{1}{15} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 3 & 2 \\ 1 & 2 & 1 \end{bmatrix}

The IEOL can be used as the clarity measure to determine which coefficient is in focus. Suppose the source images A and B are decomposed using the LSWT, and let f_K^L(x, y) and f_F^L(x, y) denote the low frequency coefficients of the source image K (K = A, B) and of the fused image F located at (x, y) in the L-th decomposition level, respectively. IEOL_K^L(x, y) denotes the IEOL measurement of f_K^L(x, y). The proposed IEOL-based fusion rule is described as follows:

f_F^L(x, y) = \begin{cases} f_A^L(x, y) & \text{if } \mathrm{IEOL}_A^L(x, y) > \mathrm{IEOL}_B^L(x, y) \\ f_B^L(x, y) & \text{if } \mathrm{IEOL}_A^L(x, y) \le \mathrm{IEOL}_B^L(x, y) \end{cases}
(18)

That is, the coefficient with the larger IEOL measurement is selected as the coefficient of the fused image when the subbands are compared in the LSWT domain. For simplicity, we name this fusion rule the 'IEOL-max' rule in this article.
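
The 'IEOL-max' rule is straightforward to express with array operations. Below is a minimal sketch of Eqs. (16)-(18) using numpy and scipy; the reflective boundary handling is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import convolve

# Weighted 3x3 template W_l of Eq. (17), normalized to sum to 1
W_L = np.array([[1, 2, 1], [2, 3, 2], [1, 2, 1]], dtype=float) / 15.0

def isl(f):
    """Improved sum of Laplacian, Eq. (16): four absolute directional terms."""
    pad = np.pad(f.astype(float), 1, mode='reflect')
    c = pad[1:-1, 1:-1]
    up, dn = pad[:-2, 1:-1], pad[2:, 1:-1]     # f(x-1, y), f(x+1, y)
    lf, rt = pad[1:-1, :-2], pad[1:-1, 2:]     # f(x, y-1), f(x, y+1)
    ul, ur = pad[:-2, :-2], pad[:-2, 2:]       # f(x-1, y-1), f(x-1, y+1)
    dl, dr = pad[2:, :-2], pad[2:, 2:]         # f(x+1, y-1), f(x+1, y+1)
    return (np.abs(8 * c - 4 * up - 4 * dn) + np.abs(8 * c - 4 * lf - 4 * rt)
            + np.abs(2 * c - ur - dl) + np.abs(2 * c - ul - dr))

def ieol(f):
    """Improved energy of Laplacian, Eq. (17): weighted window over ISL**2."""
    return convolve(isl(f) ** 2, W_L, mode='reflect')

def fuse_low_ieol(low_a, low_b):
    """'IEOL-max' rule, Eq. (18): keep the coefficient with the larger IEOL."""
    return np.where(ieol(low_a) > ieol(low_b), low_a, low_b)
```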

4.2. Selection of bandpass subband coefficients

The coefficients in the high frequency subbands represent the detail component of the source image. In traditional multiresolution fusion algorithms, such as [9, 31, 32], multiresolution coefficients with larger absolute values are considered to correspond to sharp brightness changes or salient features in the source image, such as edges, contours, and region boundaries. Thus, for the high frequency subband coefficients, the most commonly used selection principle is the 'absolute-maximum-choosing' scheme (abbreviated 'Coef-abs-max'), which takes no account of the lowpass subband coefficients; that is, all the information in the lowpass subband is neglected.

Furthermore, in many practical applications, images are distorted by noise during the acquisition or transmission process, yet almost all traditional MST-based image fusion algorithms are designed to transfer the high frequency information from the input images to the fused image. It is worth noting that much of the image noise also resides in the high frequencies and may cause miscalculation of the sharpness values. As a result, the fused images obtained by these methods are noisier than the source images, and the performance is degraded. To make up for these deficiencies, in our proposed method, after decomposing the original images using the LSWT, we design a new image fusion rule based on the multiscale products.

As noted above, the HVS is highly sensitive to the local image contrast level. To meet this requirement, Toet and van Ruyven developed the local luminance contrast in their research on the CP [17]. It is defined as

R = \frac{L - L_B}{L_B} = \frac{\Delta L}{L_B}
(19)

where L denotes the local gray level and L_B is the local brightness of the background, which corresponds to the low frequency component. Therefore, ΔL can be taken as the high frequency component.

Based on the above idea, many different forms of contrast measurement have been proposed in the MST domain, providing better performance than the 'Coef-abs-max' scheme [18, 19, 25]. However, in those contrast measurements, the value (or absolute value) of a single pixel of the high frequency sub-image, namely a coefficient of the high frequency subband after the source image is decomposed by the MST, is used as ΔL. In fact, the value (or absolute value) of a single pixel is of very limited use in determining which pixel comes from the clear part of the sub-image, so purely using it as the high frequency component is not effective enough. We believe it is more reasonable to employ a feature of the high frequency subband, rather than the value (or absolute value) of a single pixel, as ΔL in the contrast measurement of Equation (19).

As a sharpness measure, the ISL of Equation (16) can effectively represent the salient features and sharp boundaries of an image. Pixels with larger ISL values, when the source images are compared with each other, are more likely to be in focus; that is, the ISL can successfully determine which pixel is in focus. Therefore, it is reasonable to utilize the ISL as a feature of the high frequency subband to represent ΔL in the contrast measurement.

Let ISL^{d,l}(x, y) (l = 1, 2, ..., L) denote the ISL located at (x, y) in the d-th direction (d = 1, 2, 3) and l-th scale. The feature contrast R^{d,l}(x, y) is defined as

R^{d,l}(x, y) = \frac{\mathrm{ISL}^{d,l}(x, y)}{f^l(x, y)}
(20)

where f^l(x, y) denotes the low frequency coefficient located at (x, y) in the l-th scale. In order to improve the robustness of the contrast to noise in the low frequency subband, the feature contrast is modified as

S^{d,l}(x, y) = \frac{\mathrm{ISL}^{d,l}(x, y)}{\bar{f}^l(x, y)}
(21)

where

\bar{f}^l(x, y) = \frac{1}{m \times n} \sum_m \sum_n f^l(x+m, y+n)
(22)

In Equation (22), the local area size m × n may be 3 × 3 or 5 × 5. In practice, to reduce the computational complexity and the influence of low frequency subband noise, f^l(x, y) can be substituted with the coarsest lowpass subband image f^L(x, y).

To further conform to the characteristics of the HVS, the feature contrast is improved using the 'local-based' idea; thus, a local neighborhood-based feature contrast is proposed in this article, represented as

SR^{d,l}(x, y) = \sum_a \sum_b W_h(a, b) \, S^{d,l}(x+a, y+b)
(23)

where W_h is a template of size 3 × 3. The high frequency subband contains high frequency information; in order to match the information in the LSWT neighborhood of the high frequency subband, the center value of the template and its neighboring values should differ relatively strongly from each other [30]. In this article, a weighted template based on the city-block distance is used:

W_h(a, b) = \frac{1}{16} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}

In order to make up for the inability of traditional MST-based image fusion algorithms to restrain the influence of noise, a new image fusion scheme is proposed in this article. This scheme incorporates the merits of the interscale dependencies, which amplify the significant features, dilute noise and distinguish edge structures from noise effectively, into the multifocus image fusion technique. In contrast to traditional MST-based fusion methods, we design the fusion rule of the high frequency subbands on the multiscale products instead of the wavelet coefficients. According to Equation (23), the local feature contrast of the multiscale products is defined as

\mathrm{MSR}^{d,l}(x, y) = \sum_a \sum_b W_h(a, b) \, \mathrm{MPS}^{d,l}(x+a, y+b)
(24)

where

\mathrm{MPS}^{d,l}(x, y) = \frac{\mathrm{PSL}^{d,l}(x, y)}{\bar{f}^l(x, y)}
(25)
\mathrm{PSL}^{d,l}(x, y) = \left| 8P^{d,l}f(x, y) - 4P^{d,l}f(x-1, y) - 4P^{d,l}f(x+1, y) \right| + \left| 8P^{d,l}f(x, y) - 4P^{d,l}f(x, y-1) - 4P^{d,l}f(x, y+1) \right| + \left| 2P^{d,l}f(x, y) - P^{d,l}f(x-1, y+1) - P^{d,l}f(x+1, y-1) \right| + \left| 2P^{d,l}f(x, y) - P^{d,l}f(x-1, y-1) - P^{d,l}f(x+1, y+1) \right|
(26)

where PSL^{d,l}(x, y) denotes the ISL of the multiscale products located at (x, y) in the l-th scale and d-th direction, and P^{d,l}f(x, y) and MPS^{d,l}(x, y) are the corresponding multiscale products and feature contrast.

Therefore, the proposed selection principle for the high frequency subband coefficients can be described as follows:

f_F^{d,l}(x, y) = \begin{cases} f_A^{d,l}(x, y) & \text{if } \mathrm{MSR}_A^{d,l}(x, y) > \mathrm{MSR}_B^{d,l}(x, y) \\ f_B^{d,l}(x, y) & \text{if } \mathrm{MSR}_A^{d,l}(x, y) \le \mathrm{MSR}_B^{d,l}(x, y) \end{cases}
(27)

The local feature contrast of the multiscale products can not only effectively represent the salient features and sharp boundaries of the image but also effectively avoid the influence of noise. A large value of the feature contrast means more high frequency information, so the proposed fusion scheme extracts more useful detail information from the source images and injects it into the fused image. For simplicity, we name this fusion rule 'MSP-con-max' in this article. A minimal sketch of the rule is given below.
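
Building on the isl() and multiscale_products() sketches above, the 'MSP-con-max' rule for one direction can be sketched as follows. The treatment of the coarsest scale (which has no finer product partner), the small epsilon guarding the division, and the reflective boundaries are assumptions of this sketch, not choices stated in the article; for a full 2-D fusion, the function is called once per direction d.

```python
import numpy as np
from scipy.ndimage import convolve

# Weighted 3x3 city-block template W_h of Eq. (23), normalized to sum to 1
W_H = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0

def msr(subbands, low_mean, eps=1e-12):
    """Local feature contrast of multiscale products, Eqs. (24)-(26),
    for one direction's subband list [W_1, ..., W_L] (L >= 2 assumed)."""
    prods = multiscale_products(subbands)   # P_l = W_l * W_{l+1}, Eq. (13)
    prods.append(prods[-1])                 # reuse last product at scale L (assumption)
    # Eq. (26) applies the ISL stencil to the products; Eq. (25) divides by
    # the local mean of the lowpass image; Eq. (24) smooths with W_H.
    return [convolve(isl(p) / (low_mean + eps), W_H, mode='reflect')
            for p in prods]

def fuse_high_msp_con(subs_a, subs_b, low_a, low_b):
    """'MSP-con-max' selection, Eq. (27), for one direction."""
    box = np.ones((3, 3)) / 9.0             # 3x3 local mean, Eq. (22)
    mean_a = convolve(low_a.astype(float), box, mode='reflect')
    mean_b = convolve(low_b.astype(float), box, mode='reflect')
    msr_a, msr_b = msr(subs_a, mean_a), msr(subs_b, mean_b)
    return [np.where(ma > mb, sa, sb)
            for sa, sb, ma, mb in zip(subs_a, subs_b, msr_a, msr_b)]
```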

5. Experimental results and analysis

To evaluate the performance of the proposed fusion method, several experimental results are presented in this section. Experiments are performed on four sets of 256-level images: the clean 'pepsi' pair (of size 512 × 512), the clean 'flower' pair (384 × 512), the clean 'barb' set (512 × 512) and the noisy 'pepsi' pair (512 × 512). All of them are registered perfectly and shown in Figure 5a-f,h-j.

Figure 5. Source images for multifocus image fusion. (a) and (b), (c) and (d) are the multifocus clean image pairs; (h)-(j) are the originals with blur at the right, left and middle, respectively, and (g) is the reference image for (h)-(j); (e) and (f) are the multifocus noisy image pair, each partly defocused and partly in good focus.

In order to show the advantages of the new image fusion method, we establish three steps to demonstrate that the proposed method outperforms the others. First, 'MSP-con-max' is compared with 'Coef-abs-max', the 'Traditional-contrast-max' ('Tra-con-max'), and the proposed 'Feature-contrast-max' ('Fea-con-max'), which is designed according to Equation (23), to demonstrate the performance of the 'MSP-con-max' rule. For 'Tra-con-max', the absolute value of a single pixel of the high frequency subband is used as ΔL in the contrast measurement. Second, the proposed image fusion algorithm is compared with the DWT-simple-based method (Method 1), the LSWT-simple-based method (Method 2), and the NSCT-simple-based method (Method 3), in all of which the low frequency and high frequency subband coefficients are simply merged by the 'averaging' scheme and the 'Coef-abs-max' scheme, respectively. For comparison purposes, the proposed algorithm is also compared with four further fusion algorithms (Methods 4-7). In Methods 4 and 5, the LSWT is used as the MST method, and the 'IEOL-max' rule is employed to merge the low frequency subband coefficients; for the high frequency subband coefficients, the 'Coef-abs-max' and 'Tra-con-max' rules are used in Methods 4 and 5, respectively. For Method 6, the fusion rules of [7], designed on the features of the multiscale products and a pulse coupled neural network (PCNN) [7], are respectively used to merge the low and high frequency LSWT coefficients (we name this method 'LSWT-PCNN'). The PCNN is a model based on the cat's primary visual cortex; it is characterized by the global coupling and pulse synchronization of neurons and has been proven suitable for image processing [33]. In Method 7, the NSCT is used as the MST method, and our proposed 'IEOL-max' and 'MSP-con-max' rules are, respectively, employed to fuse the low and high frequency subband coefficients (we name this method 'NSCT-MSP-Con'); the multiscale products of the NSCT are defined analogously to Equation (13).

In all of these methods, the 'db5' and 'db53' wavelets, together with a decomposition level of 3, are used in the DWT-based and LSWT-based methods (including Methods 2, 4, 5, 6 and our proposed method), respectively. Three decomposition levels are also used in the NSCT-based methods (NSCT-simple and NSCT-MSP-Con). All of these methods are used to fuse the multifocus clean images. Third, the multifocus noisy images, shown in Figure 5e,f, are fused by the above methods.

5.1. Contrast-based fusion rule in LSWT domain

In this section, we show the performance of the 'Fea-con-max' and 'MSP-con-max' fusion rules. In order to demonstrate the advantages of the new fusion rules, 'MSP-con-max' and 'Fea-con-max' are compared with 'Tra-con-max' and 'Coef-abs-max' on the high frequency subbands in the LSWT domain.

Figure 6a-d shows the high frequency sub-images of the labeled regions of Figure 5a,b,e,f in the LSWT domain. One can see that the coefficient values in the clear part are greater than those in the blurry part, even when the source image is noisy. That is why the typical 'Coef-abs-max' rule is used in MST-based fusion algorithms.

Figure 6. Comparison of the Coef-abs-max, Tra-con-max, Fea-con-max, and MSP-con-max rules. (a), (b) and (c), (d) are the high frequency subbands of the labeled parts of Figure 5a,b and Figure 5e,f; (e)-(h) are the multiscale products of Figure 6a-d, respectively; (i)-(l) are the decision maps of the Coef-abs-max, Tra-con-max, Fea-con-max, and MSP-con-max rules in fusing (a) and (b); (m)-(p) are the decision maps of the same rules in fusing (c) and (d).

Figure 6e-h shows the multiscale products of Figure 6a-d, respectively. From Figure 6g,h, we find that the multiscale products of the LSWT distinguish edge structures from noise effectively. Figure 6i-l shows the decision maps, in which coefficients selected from the image in Figure 6b are shown in white and coefficients from Figure 6a in black. Since the labeled part of Figure 6b is clearer than that of Figure 6a, the optimal decision map should be entirely white, meaning that all coefficients should be selected from Figure 6b. However, the decision maps of the 'Coef-abs-max' and 'Tra-con-max' rules, shown in Figure 6i,j, indicate that these rules do not select the coefficients from the clear part completely, even though 'Tra-con-max' performs better than 'Coef-abs-max'. Figure 6k,l indicates that the proposed feature contrast is more reasonable than the traditional contrast; it also proves that applying a feature such as the ISL in the contrast is more reasonable than using the absolute value of a single pixel.

Figure 6m-p shows the decision maps in which white indicates that a coefficient is selected from Figure 6d and black that it is selected from Figure 6c. From these figures we see that the proposed 'MSP-con-max' rule does well in the fusion of multifocus noisy images. All of this demonstrates that the proposed fusion rule can not only select the coefficients of the fused image properly but also restrain the influence of noise effectively.

The results of the objective assessment are shown in Figure 7a,b: Figure 7a shows the performance of the different fusion rules in fusing the multifocus clean images, and Figure 7b their performance in fusing the multifocus noisy images. In Figure 7a, 'From a' and 'From b' denote the numbers of pixels that come from Figure 6a and Figure 6b, respectively. The proposed method is clearly superior to the others because the number of pixels coming from Figure 6b is the largest; as a result, when the source images are noise-free, the fused image is closer to the well-focused source image under our proposed fusion rule than under the 'Coef-abs-max' or 'Tra-con-max' rules. From Figure 7b, the same conclusion can be drawn: the proposed 'MSP-con-max' fusion rule outperforms the traditional fusion rules when the source images are in a noisy environment.

Figure 7. Performance of different fusion rules: (a) when the source images are clean; (b) when the source images are in a noisy environment.

5.2. Fusion of clean multifocus images

In this section, the experiments are performed on three pairs of multifocus clean images, shown in Figure 5a-d,h-j. All experiments are implemented in Matlab 7.01 on an AMD Athlon(tm) 2.4 GHz machine with 2 GB RAM. For further comparison, besides visual observation, two objective criteria are used to compare the fusion results. The first criterion is mutual information (MI) [34], a metric defined as the sum of the mutual information between each input image and the fused image; a sketch of its computation is given below. The second criterion is the QAB/F metric [35], proposed by Xydeas and Petrovic, which considers the amount of edge information transferred from the input images to the fused image; it uses a Sobel edge detector to calculate strength and orientation information at each pixel in both the source and the fused images. For both criteria, the larger the value, the better the fusion result.
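
For reference, a standard histogram-based estimate of the MI metric of [34] looks like the following sketch; the 256-bin discretization and the base-2 logarithm are assumptions of this sketch rather than details stated in the article.

```python
import numpy as np

def mutual_info(x, y, bins=256):
    """Mutual information between two images via their joint histogram."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()                 # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)     # marginal of x
    py = pxy.sum(axis=0, keepdims=True)     # marginal of y
    nz = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(img_a, img_b, fused):
    """MI fusion metric [34]: MI(A, F) + MI(B, F)."""
    return mutual_info(img_a, fused) + mutual_info(img_b, fused)
```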

The first experiment is performed on the 'pepsi' multifocus clean images, which have been registered perfectly. Figure 8 illustrates the fusion results obtained by the eight methods above (including the proposed method). For a clearer comparison, the difference images between the fused images, obtained by Methods 1-7 and our proposed method, and the source image in Figure 5b are given in Figure 8i-p, and Figure 8q-x shows parts of the labeled regions of Figure 8i-p. For the focused regions, the difference between the source image and the fused image should be zero, so less residue in the difference image means that the fusion method transfers the information of the source images to the fused image better. Focusing on Figure 8q-s, one finds that the fused images obtained by the LSWT method (Method 2) and the NSCT method (Method 3) are clearer than the DWT result, which proves that shift-invariant methods such as the LSWT and the NSCT can overcome the pseudo-Gibbs phenomena and improve the quality of the fused image around edges. Figure 8t indicates that the proposed fusion rule for the low frequency subband is more reasonable and useful for fusing multifocus clean images than the 'averaging' scheme. From Figure 8u, we find that the 'Tra-con-max' scheme fails to extract all the useful information of the source images and transfer it to the fused image. In contrast, Figure 8v,x shows that the fused images obtained by our proposed method and by Methods 6 and 7 have better visual quality: almost all of the useful information of the source images has been transferred to the fused images, while fewer artifacts are introduced during the fusion process. All of this demonstrates that the proposed feature contrast is more reasonable and useful than the traditional contrast.

Figure 8. The 'Pepsi' multifocus image fusion results: (a)-(h) fused images using Methods 1-7 and the proposed method, respectively; (i)-(p) difference images between Figure 5b and Figure 8a-h; (q)-(x) parts of the labeled regions of Figure 8i-p.

In order to further evaluate the fusion performance, the second experiment is performed on another set of multifocus clean images, also registered perfectly and shown in Figure 5c,d. The resultant fused images are shown in Figure 9a-h. Again, for clearer comparison, the difference images between the fused images, obtained by Methods 1-7 and our proposed method, and the source image shown in Figure 5d are given in Figure 9i-p, and parts of the labeled regions of Figure 9i-p are extracted into Figure 9q-x. Figure 9q-x indicates that the proposed method and the NSCT-MSP-Con-based method extract almost all the well-focused parts of the source images and preserve the detailed information better than the other methods. Moreover, the proposed method provides performance similar to the NSCT-MSP-Con-based method, even though the NSCT is better suited to image fusion; that is because the fusion rules designed in this article are very effective and extract almost all the useful information of the source images and transfer it to the fused image, whether the fusion is performed in the LSWT or the NSCT domain.

Figure 9. The 'Flower' multifocus image fusion results: (a)-(h) fused images using Methods 1-7 and the proposed method, respectively; (i)-(p) difference images between Figure 5d and Figure 9a-h; (q)-(x) parts of the labeled regions of Figure 9i-p.

Three source images with different blur regions, shown in Figure 5h-j, are used to evaluate the fusion performance in the third experiment. To allow better comparison, the difference images between the fused images and the reference image, shown in Figure 5g, are given in Figure 10i-p, and the labeled parts of Figure 10i-p are extracted and shown in Figure 10q-x. From Figure 10q-x, the same conclusion can be drawn: the proposed method outperforms the other methods.

Figure 10. The 'Barb' image fusion results: (a)-(h) fused images using Methods 1-7 and the proposed method, respectively; (i)-(p) difference images between Figure 5g and Figure 10a-h; (q)-(x) parts of the labeled regions of Figure 10i-p.

Furthermore, the values of the objective criteria MI and QAB/F and the execution times for Figures 8a-h, 9a-h, and 10a-h are listed in Tables 1, 2, and 3, respectively. We observe that the fused images produced by the NSCT-simple-based method are slightly better than the LSWT-simple results, and both outperform the DWT approach in terms of MI and QAB/F. However, the NSCT is time consuming, which impedes its real-time application. As the modified version of the LWT, the LSWT consumes more time than the DWT, because it possesses shift-invariance and must process more image data during the fusion process.

Table 1 Performance of different fusion methods on processing Figure 5a,b
Table 2 Performance of different fusion methods on processing Figure 5c,d
Table 3 Performance of different fusion methods on processing Figure 5h-j

From Tables 1, 2, and 3, we find that the NSCT-MSP-Con-based method provides performance similar to our proposed method. However, NSCT-MSP-Con is more time consuming than our proposed method because of its multiple directions and complexity. Considering both the fusion results and the computational complexity, we utilize the LSWT as the MST method in our proposed algorithm. The LSWT-PCNN-based method is also more time consuming than the proposed method, because the PCNN neuron is very complex and needs iterative operation to obtain pleasing fusion results. Moreover, the number of parameters of each neuron that need to be adjusted is large, and the parameters affect each other greatly. In image processing with PCNN, the same values are usually assigned to the corresponding parameters of every neuron, all chosen by experiment or experience; in the visual system of the eye, however, it is impossible for all neuron parameters to take the same value, as they should be related to the situation of each neuron cell. All of these disadvantages significantly compromise the performance of LSWT-PCNN. Relative to LSWT-PCNN, our proposed method not only considers the property of the HVS of being highly sensitive to the local image contrast level, but also possesses advantages such as simple calculation and high efficiency. So our proposed method provides better performance than LSWT-PCNN.

The values in Tables 1, 2, and 3 demonstrate that the proposed image fusion algorithm significantly outperforms the other approaches (except the NSCT-MSP-Con-based method) in terms of MI and QAB/F. Moreover, since the reference (everywhere-in-focus) image for Figure 5h-j, shown in Figure 5g, is available, the different methods can also be compared using the root mean square error (RMSE), sketched below. The values of the RMSE between Figure 10a-h and Figure 5g are given in Table 3, and these RMSE results coincide very well with the MI and QAB/F results.
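
RMSE against the reference is the simplest of the metrics used here; a minimal sketch:

```python
import numpy as np

def rmse(reference, fused):
    """Root mean square error between the everywhere-in-focus reference
    image and a fused result (lower is better)."""
    diff = reference.astype(float) - fused.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))
```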

Figure 11. The noisy 'pepsi' multifocus image fusion results: (a)-(h) fused images using Methods 1-7 and our proposed method, respectively; (i)-(p) parts of the corresponding regions of (a)-(h).

5.3. Fusion of noisy multifocus images

In order to evaluate the performance of the proposed method in a noisy environment, the input multifocus images 'pepsi', shown in Figure 5e,f, have additionally been corrupted by Gaussian noise with variance δ = 0.01.
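
Such noisy inputs can be generated as in the sketch below; following the common Matlab imnoise convention, the stated variance is assumed to apply to intensities scaled to [0, 1], and the clipping back to that range is also an assumption of this sketch.

```python
import numpy as np

def add_gaussian_noise(img, var=0.01, seed=None):
    """Corrupt a [0, 1]-scaled image with zero-mean Gaussian noise."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(noisy, 0.0, 1.0)
```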

In the following experiment, since reference (everywhere-in-focus) images of the scenes under analysis are not available, the performance of the proposed method cannot be compared using RMSE-based metrics for this kind of image. Therefore, fusion performance measures that do not require the availability of an ideal image have to be employed. For comparison, besides visual observation, the MI and QAB/F criteria are used to evaluate how much information from the multifocus clean images, shown in Figure 5a,b, is contained in the fused images. However, MI and QAB/F cannot evaluate the performance of these fusion methods in terms of the input/output noise transmission. For further comparison, the improvement in peak signal-to-noise ratio (PSNR), proposed by Loza et al. [6], is adopted to measure the noise change between the fused image and the source noisy image. Let σ_{n,f}^2 denote the noise variance in the fused output; the improvement in PSNR is then formulated as:

\Delta \mathrm{PSNR} = 10 \left( \log_{10} \frac{255^2}{\sigma_{n,f}^2} - \log_{10} \frac{255^2}{\sigma_n^2} \right) = 10 \log_{10} \frac{\sigma_n^2}{\sigma_{n,f}^2}
(28)

For the ΔPSNR criterion, the larger the value, the less noise is carried from the original noisy images into the fused image, and the better the fusion result.
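
In code, the cancellation in Equation (28) leaves only the ratio of the noise variances. A minimal sketch, assuming the two variances have been estimated beforehand:

```python
import numpy as np

def delta_psnr(var_noise_source, var_noise_fused):
    """PSNR improvement, Eq. (28): positive when fusion reduced the noise."""
    return 10.0 * np.log10(var_noise_source / var_noise_fused)
```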

Figure 11a-h illustrates the fusion results obtained by the different methods above. For a clearer comparison, Figure 11i-p shows parts of the fusion results. By examining Figure 11i-l, one finds that the edge information of the fused images obtained by Methods 1-4 is immersed in noise. That is because all these fusion methods are designed to transfer the high frequency information from the input images into the fused image, and much of the image noise also resides in the high frequencies; as a result, the fused images obtained by these methods are noisier than the source images. From Figure 11m, we see that its edges are less clear than those of Figure 11n-p, because the noise of the source images causes miscalculation of the contrast values. Therefore, in the presence of noise, the performance of Methods 1-5 may not be as good as in noiseless environments. Figure 11n indicates that Method 6 can reduce the noise level to some extent, but the edge information of its fused image is less clear than in Figure 11o,p, which are fused by Method 7 and our proposed algorithm.

Furthermore, Table 4 gives the quantitative results for Figure 11. From Table 4, we observe that the different fusion methods provide different fusion performance, and the proposed scheme outperforms the other seven image fusion algorithms in terms of larger MI and QAB/F values. The values of ΔPSNR indicate that the proposed fusion rule for the high frequency subband is more reliable, robust and stable than the other fusion rules.

Table 4 Performance of different fusion methods on processing Figure 5e,f

6. Conclusion

In this article, a new multifocus image fusion algorithm based on the feature contrast of multiscale products in the LSWT domain is proposed. In the proposed algorithm, a novel feature contrast of multiscale products, which stands for the edge features of the high frequency sub-images in the LSWT domain, is developed and used as the fusion scheme for the high frequency subbands. Three pairs of clean multifocus images and one pair of noisy multifocus images are used to test the performance of the proposed image fusion method. The experimental results demonstrate that the proposed method outperforms the DWT-simple-based, LSWT-simple-based, LSWT-Traditional-Contrast-based, LSWT-PCNN-based and NSCT-simple-based methods in terms of both visual quality and objective evaluation, even when the source images are in a noisy environment. In future work, we will further investigate the fusion of noisy images, aiming to carry out denoising and fusion of noisy source images simultaneously, which is a developing trend in the image fusion field.

References

  1. Seales WB, Dutta S: Everywhere-in-focus image fusion using controllable cameras. Proc SPIE 1996, 2905:227-234.

  2. Li ST, Yang B: Multifocus image fusion using region segmentation and spatial frequency. Image Vision Comput 2008, 26(7):971-979. doi:10.1016/j.imavis.2007.10.012

  3. Wang ZB, Ma YD, Gu J: Multi-focus image fusion using PCNN. Pattern Recogn 2010, 43(6):2003-2016. doi:10.1016/j.patcog.2010.01.011

  4. Li ST, Kwok JT, Wang YN: Multifocus image fusion using artificial neural networks. Pattern Recogn Lett 2002, 23(8):985-997. doi:10.1016/S0167-8655(02)00029-6

  5. Pajares G, de la Cruz JM: A wavelet-based image fusion tutorial. Pattern Recogn 2004, 37(9):1855-1872. doi:10.1016/j.patcog.2004.03.010

  6. Loza A, Bull D, Canagarajah N, Achim A: Non-Gaussian model-based fusion of noisy images in the wavelet domain. Comput Vis Image Understand 2010, 114(1):54-65. doi:10.1016/j.cviu.2009.09.002

  7. Chai Y, Li HF, Guo MY: Multifocus image fusion scheme based on features of multiscale products and PCNN in lifting stationary wavelet domain. Opt Commun 2011, 284(5):1146-1158.

  8. Petrovic VS, Xydeas CS: Gradient-based multiresolution image fusion. IEEE Trans Image Process 2004, 13(2):228-237. doi:10.1109/TIP.2004.823821

  9. Li H, Manjunath BS, Mitra SK: Multisensor image fusion using the wavelet transform. Graph Models Image Process 1995, 57(3):235-245. doi:10.1006/gmip.1995.1022

  10. Qu XB, Yan JW, Xiao HZ: Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain. Acta Automatica Sinica 2008, 34(12):1508-1514.

  11. Chai Y, Li HF, Qu JF: Multifocus image fusion using a novel dual-channel PCNN in lifting stationary wavelet transform. Opt Commun 2010, 283(19):3591-3602. doi:10.1016/j.optcom.2010.04.100

  12. Wilson T, Rogers S, Kabrisky M: Perceptual based hyperspectral image fusion using multi-spectral analysis. Opt Eng 1995, 34(11):3154-3164. doi:10.1117/12.213617

  13. Sweldens W: The lifting scheme: a construction of second generation wavelets. SIAM J Math Anal 1998, 29(2):511-546. doi:10.1137/S0036141095289051

  14. Coifman RR, Donoho DL: Translation-invariant de-noising. In Wavelets and Statistics. Edited by: Antoniadis A, Oppenheim G. Springer-Verlag, New York; 1995:125-150.

  15. Lee CS, Lee CK, Yoo KY: New lifting based structure for undecimated wavelet transform. Electron Lett 2000, 36(22):1894-1895. doi:10.1049/el:20001294

  16. da Cunha AL, Zhou JP, Do MN: The nonsubsampled contourlet transform: theory, design and application. IEEE Trans Image Process 2006, 15(10):3089-3101.

  17. Toet A, van Ruyven LJ, Valeton JM: Merging thermal and visual images by a contrast pyramid. Opt Eng 1989, 28(7):789-792.

  18. Yang L, Guo BL, Ni W: Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform. Neurocomputing 2008, 72(1-3):203-211. doi:10.1016/j.neucom.2008.02.025

  19. Zhang Q, Guo BL: Fusion of multi-sensor images based on the nonsubsampled contourlet transform. Acta Automatica Sinica 2008, 34(2):135-141.

  20. Bao P, Zhang L: Noise reduction for magnetic resonance images via adaptive multiscale products thresholding. IEEE Trans Med Imag 2003, 22(9):1089-1099. doi:10.1109/TMI.2003.816958

  21. Xu Y, Weaver JB, Healy DM Jr, Lu J: Wavelet transform domain filters: a spatially selective noise filtration technique. IEEE Trans Image Process 1994, 3(6):747-758. doi:10.1109/83.336245

  22. Sweldens W: The lifting scheme: a custom-design construction of biorthogonal wavelets. Appl Comput Harmonic Anal 1996, 3(2):186-200. doi:10.1006/acha.1996.0015

  23. Claypoole RL, Davis GM, Sweldens W, Baraniuk R: Nonlinear wavelet transforms for image coding via lifting. IEEE Trans Image Process 2003, 12(12):1449-1459. doi:10.1109/TIP.2003.817237

  24. Stepien J, Zielinski T, Rumian R: Image denoising using scale-adaptive lifting schemes. In Proceedings of the International Conference on Image Processing, Volume 3. Vancouver, BC, Canada; 2000:288-290.

  25. Zhang Q, Guo BL: Multifocus image fusion using the nonsubsampled contourlet transform. Signal Process 2009, 89(7):1334-1346. doi:10.1016/j.sigpro.2009.01.012

  26. Sadler BM, Swami A: Analysis of multiscale products for step detection and estimation. IEEE Trans Inf Theory 1999, 45(3):1041-1051.

  27. Maintz JB, Viergever MA: A survey of medical image registration. Med Image Anal 1998, 2(1):1-36.

  28. Wei H, Jing ZL: Evaluation of focus measures in multi-focus image fusion. Pattern Recogn Lett 2007, 28(4):493-500. doi:10.1016/j.patrec.2006.09.005

  29. Wei H, Jing ZL: Multi-focus image fusion using pulse coupled neural network. Pattern Recogn Lett 2007, 28(9):1123-1132. doi:10.1016/j.patrec.2007.01.013

  30. Song YJ, Ni GQ, Gao K: Regional energy weighting image fusion algorithm by wavelet based contourlet transform. Trans Beijing Inst Technol 2008, 28(2):168-172.

  31. Li ST, Yang B: Multifocus image fusion by combining curvelet and wavelet transform. Pattern Recogn Lett 2008, 29(9):1295-1301. doi:10.1016/j.patrec.2008.02.002

  32. Li ST, Yang B, Hu JW: Performance comparison of different multi-resolution transforms for image fusion. Inf Fusion 2011, 12(2):74-84. doi:10.1016/j.inffus.2010.03.002

  33. Johnson JL, Padgett ML: PCNN models and applications. IEEE Trans Neural Netw 1999, 10(3):480-498. doi:10.1109/72.761706

  34. Qu G, Zhang D, Yan P: Information measure for performance of image fusion. Electron Lett 2002, 38(7):313-315.

  35. Petrovic V, Xydeas C: On the effects of sensor noise in pixel-level image fusion performance. In Proceedings of the Third International Conference on Image Fusion, Volume 2. Paris, France; 2000:14-19.


Acknowledgements

The authors would like to thank the associate editor and the anonymous reviewers for their careful study and valuable suggestions for an earlier version of this article. The article was jointly supported by the National Natural Science Foundation of China (No. 60974090), the Ph.D. Programs Foundation of Ministry of Education of China (No. 200806110016), and the Fundamental Research Funds for the Central Universities (No. CDJXS10172205).

Author information

Correspondence to Shanbi Wei.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Li, H., Wei, S. & Chai, Y. Multifocus image fusion scheme based on feature contrast in the lifting stationary wavelet domain. EURASIP J. Adv. Signal Process. 2012, 39 (2012). https://doi.org/10.1186/1687-6180-2012-39
