
An improved method for color image editing

Abstract

In this article, an improved approach for editing color images is presented. In the first step, a guided vector field is created from the source and target images within a selected region by means of the Poisson equation. The guided vector field is then used to generate the result image. Most of the existing techniques in the literature perform image editing without taking color information into account. However, without color information the image geometry cannot always be recovered properly, which may lead to unsatisfactory results. Unlike the existing methods, the proposed study utilizes all the information contained in each of the color channels when computing the gradient norm and performing the inpainting process. The test results show that the suggested technique generates satisfactory results in editing color images.

1. Introduction

Digital image processing operations involve either global changes, such as image correction, filtering, and colorization, or local changes confined to a selected region where the editing takes place. Commercial and artistic photomontages rely on such local changes. Along with technological progress, software packages such as Adobe© Photoshop© have been released for image editing. However, professional experience is required to use these tools efficiently, and editing photographs with them is tedious. In addition, regions altered using such tools may contain visible artifacts.

Digital image editing methods based on the Poisson equation have frequently been used in recent research [1–12]. Perez et al. [1] presented an image editing approach based on the Poisson equation with Dirichlet boundary conditions. However, with this method, color inconsistency may occur in the edited regions of images. Sun et al. [2] proposed an image matting algorithm based on the Poisson equation; however, because of its long processing time, the method is not practical. Chuan et al. [3] improved the method suggested by Perez et al. to overcome the color inconsistency problem, but experiments show that the improved method is still too complex. Leventhal and Sibley [4] suggested an alpha interpolation technique to remove brightly colored artifacts caused by mixed seamless image cloning in edited images. Jia et al. [5] presented a method, called drag-and-drop pasting, which automatically computes an optimized boundary condition by employing a new objective function. However, the method is compared only with the seamless image cloning approach proposed in [1], not with mixed seamless image cloning. Georgiev [6–8] developed a method that is invariant to relighting and handles illumination changes seamlessly, including adaptation and perceptual correctness of the results; the method also considers the surrounding texture contrast. Fattal et al. [9] presented a method to render high dynamic range images on a monitor using a Poisson equation. Shen et al. [10] suggested a method whose outputs are generated from gradient maps by employing a Poisson equation. Dizdaroğlu and İkibaş [11] introduced a color image editing method based on the Poisson equation, on which this study is built. Yang et al. [12] proposed a variational model, a distance-enhanced random walks algorithm, and a multi-resolution framework based on Poisson image editing. Although many methods related to this research area have been summarized above, most of them are complex or may cause artifacts because each color channel is processed independently, or because only the lightness channel is used.

In this article, we present a digital image editing method that utilizes all the information contained in each of the color channels based on the Poisson equation. The test results show that the proposed method generates output images in a seamless manner with no blurring artifacts.

2. Image editing

Let $\mathbf{f}: \Omega \rightarrow \mathbb{R}^{n}$ and $f: \Omega \rightarrow \mathbb{R}$ be color ($n = 3$) and grayscale images, respectively, defined on the domain $\Omega \subset \mathbb{R}^{2}$. $f_{i}: \Omega \rightarrow \mathbb{R}$ stands for image channel $i$ of $\mathbf{f}$ ($1 \le i \le n$), and $p = (x, y) \in \Omega$. The proposed method is explained in detail in the following sections.

Poisson image editing is basically a process of obtaining a new result image f using the source image g and the target image f*. In this method, the guided vector field v is first created using both images g and f*. After that, the method reconstructs the image f with Dirichlet boundary conditions so that the gradient of f is closest in the $L^{2}$-norm to v over the region Γ, while satisfying f = f* over ∂Γ [1, 4]. Therefore, the edited image region includes the features of both images g and f*, and matches the rest of the image in a seamless manner (see Figure 1).

Figure 1

Image editing diagram: The source image, target image, and guided vector field (a), and the result image generated by the Poisson equation method (b).

A basic approach to the editing process is to minimize the variation of the image f over the region Γ through the gradient norm $\|\nabla f\|$, which is also known as Tikhonov regularization [13]:

$$\min_{f:\,\Gamma\rightarrow\mathbb{R}} E(f) = \int_{\Gamma} \|\nabla f\|^{2}, \quad f|_{\partial\Gamma} = f^{*}|_{\partial\Gamma}$$
(1)

In Equation 2, the image gradient, denoted by $\nabla f$, is the first-order derivative of the grayscale image f with respect to its spatial coordinates $p = (x, y)$:

$$\nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right)^{T} = \left(f_{x}, f_{y}\right)^{T}$$
(2)

In order to represent the magnitudes of the grayscale image f and its directions of maximum variation, a vector field $\nabla f: \Gamma \rightarrow \mathbb{R}^{2}$ is created. A scalar, point-wise measurement of the image variations is given by the gradient norm $\|\nabla f\|$, which is widely used in image analysis:

$$\|\nabla f\| = \sqrt{f_{x}^{2} + f_{y}^{2}}$$
(3)

Here, $f_{x}$ and $f_{y}$ are the first-order derivatives of the image f in the x and y directions, respectively, and can be calculated using Taylor's formula (i.e., finite differences). Finding the function f that minimizes the functional E(f) directly would require complex processing. A necessary condition is given by the Euler-Lagrange equation, which must be satisfied by f at a minimum of E(f):

$$\frac{\partial E}{\partial f} = \frac{\partial F}{\partial f} - \frac{d}{dx}\frac{\partial F}{\partial f_{x}} - \frac{d}{dy}\frac{\partial F}{\partial f_{y}} = 0, \quad f|_{\partial\Gamma} = f^{*}|_{\partial\Gamma}$$
(4)

where $F = \|\nabla f\|^{2} = \left(\sqrt{f_{x}^{2} + f_{y}^{2}}\right)^{2} = f_{x}^{2} + f_{y}^{2}$.

The calculation of $\frac{d}{dx}\frac{\partial F}{\partial f_{x}}$ and $\frac{d}{dy}\frac{\partial F}{\partial f_{y}}$ according to the standard differentiation rules is as follows:

$$\frac{\partial F}{\partial f_{x}} = \frac{\partial}{\partial f_{x}}\left(f_{x}^{2} + f_{y}^{2}\right) = 2 f_{x} \;\Rightarrow\; \frac{d}{dx}\frac{\partial F}{\partial f_{x}} = \frac{d}{dx}\left(2 f_{x}\right) = 2 f_{xx} \quad \text{and} \quad \frac{d}{dy}\frac{\partial F}{\partial f_{y}} = \frac{d}{dy}\left(2 f_{y}\right) = 2 f_{yy}$$
(5)

Accordingly, $-2\left(f_{xx} + f_{yy}\right) = 0$ is obtained by substituting Equation 5 into Equation 4. The constant 2 may be dropped from the equation for simplicity.

A classic iterative method called gradient descent is used for minimizing Equation 1 through the resulting partial differential equation (PDE). As a matter of fact, the left-hand side of Equation 4 can be seen as the gradient of the functional E(f). A local minimizer $f_{\min}$ of E(f) can be found by starting from $f_{\text{initial}}$ and then following the opposite direction of the gradient:

$$f(t=0) = f_{\text{initial}}, \qquad \frac{\partial f}{\partial t} = -\left(\frac{\partial F}{\partial f} - \frac{d}{dx}\frac{\partial F}{\partial f_{x}} - \frac{d}{dy}\frac{\partial F}{\partial f_{y}}\right)$$
(6)

Solving Equation 6, the following equation is obtained:

$$\frac{\partial f}{\partial t} = f_{xx} + f_{yy} = \Delta f, \quad f|_{\partial\Gamma} = f^{*}|_{\partial\Gamma}$$
(7)

Here, Δ is the Laplace operator, and Taylor's formula is again used to compute the second-order derivatives $f_{xx}$ and $f_{yy}$. Equation 7 is called the diffusion or heat equation.

Basically, at a particular time t, Equation 7 gives the convolution of $f_{\text{initial}}$ with a normalized two-dimensional Gaussian kernel $G_{\sigma}$ of standard deviation $\sigma = \sqrt{2t}$, that is, $f_{\text{initial}} * G_{\sigma}$, which amounts to linear smoothing, where $G_{\sigma} = \frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)$ [14–16].

The regularization PDE shown in Equation 8 is compatible with the approach expressed above:

$$f(t=0) = f_{\text{initial}}, \qquad f^{(t+1)} = f^{(t)} + dt\,\frac{\partial f^{(t)}}{\partial t}$$
(8)

where $dt$ represents the adaptive time step.

As can clearly be seen above, the image is gradually blurred in an isotropic way during the PDE evolution. Here, isotropic smoothing acts as a low-pass filter suppressing high frequencies in the image f. Unfortunately, since image edges and noise are both high-frequency signals, the edges are quickly blurred, as seen in Figures 2 and 3c. Therefore, an interpolation operation based on a guided vector field can generate a better result [1].
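To make the discrete evolution of Equations 7 and 8 concrete, the following is a minimal C++ sketch of one explicit time step of the heat equation; the row-major std::vector<float> layout and the fixed time step are assumptions of the example, not part of the authors' implementation.

```cpp
#include <vector>

// One explicit Euler step of the heat equation f_t = f_xx + f_yy (Equations 7-8).
// Assumptions: grayscale image stored row-major in a std::vector<float>,
// central second differences, fixed time step dt (e.g., 0.2 for stability).
std::vector<float> heat_step(const std::vector<float>& f, int w, int h, float dt)
{
    std::vector<float> out(f); // boundary pixels are kept unchanged
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            float fxx = f[y * w + (x + 1)] - 2.0f * f[y * w + x] + f[y * w + (x - 1)];
            float fyy = f[(y + 1) * w + x] - 2.0f * f[y * w + x] + f[(y - 1) * w + x];
            out[y * w + x] = f[y * w + x] + dt * (fxx + fyy); // isotropic diffusion
        }
    }
    return out;
}
```

Iterating this step smooths noise but also blurs edges, which is exactly the behavior illustrated in Figures 2 and 3c.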

Figure 2

Heat equation applied on a grayscale noisy image: test Lena image artificially corrupted by additive Gaussian noise (σ = 20) (a) and result of Heat equation method on it after 20 iterations (b).

Figure 3

Test image used in completing an artificially degraded region: input image (a), results of direct copying from the other region of the image (b), the region smoothed by Heat equation method (c), and by seamless image cloning approach (d).

A function minimizing the difference between the gradient $\nabla f$ of the image f and the guided vector field v over the region Γ should be found. The equation given below can be used for this operation:

$$\min_{f:\,\Gamma\rightarrow\mathbb{R}} E(f) = \int_{\Gamma} \|\nabla f - \mathbf{v}\|^{2}, \quad f|_{\partial\Gamma} = f^{*}|_{\partial\Gamma}$$
(9)

The result is found by solving the above minimization through the Poisson equation with Dirichlet boundary conditions as follows:

$$\Delta f = \operatorname{div}\,\mathbf{v}, \quad f|_{\partial\Gamma} = f^{*}|_{\partial\Gamma}$$
(10)

Here, $\operatorname{div}\,\mathbf{v} = \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}$ is the divergence of $\mathbf{v} = (u, v)$. This is a fundamental equation used in digital image editing.

The guided vector field v can be obtained directly from the source image g. In this case, the interpolation operation is conducted with the following equation:

$$\mathbf{v} = \nabla g$$
(11)

Equation 12 is obtained for seamless image cloning, if Equation 10 is reorganized accordingly:

$$\Delta f = \Delta g, \quad f|_{\partial\Gamma} = f^{*}|_{\partial\Gamma}$$
(12)

Therefore, the source image is seamlessly cloned into the target image without blurring artifacts within the user-selected region, as seen in Figure 3d. If the source image is directly copied into the destination image, visually annoying effects may appear in the result, as shown in Figure 3b.
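As an illustration of how Equation 12 can be solved numerically, the sketch below runs Jacobi iterations of the discrete Poisson equation inside the user-selected region; the mask convention, data layout, and fixed iteration count are assumptions of the example rather than the authors' implementation.

```cpp
#include <vector>

// Jacobi iterations for seamless cloning (Equation 12): solve  Δf = Δg  inside the
// region Γ, with f fixed to the target f* outside Γ (Dirichlet boundary condition).
// Assumptions: single-channel row-major images; mask[p] != 0 marks Γ; for a color
// image the function is applied to each channel independently, as in [1].
void seamless_clone(std::vector<float>& f,            // initialized with the target f*
                    const std::vector<float>& g,      // source image
                    const std::vector<unsigned char>& mask,
                    int w, int h, int iterations)
{
    for (int it = 0; it < iterations; ++it) {
        std::vector<float> next(f);
        for (int y = 1; y < h - 1; ++y) {
            for (int x = 1; x < w - 1; ++x) {
                int p = y * w + x;
                if (!mask[p]) continue;
                // Discrete update: f(p) = (sum of neighbors of f - Δg(p)) / 4,
                // where Δg(p) = sum of neighbors of g - 4 g(p).
                float nf = f[p - 1] + f[p + 1] + f[p - w] + f[p + w];
                float lap_g = g[p - 1] + g[p + 1] + g[p - w] + g[p + w] - 4.0f * g[p];
                next[p] = (nf - lap_g) * 0.25f;
            }
        }
        f.swap(next);
    }
}
```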

The seamless image cloning method may produce blurring artifacts in the edited region if the user-selected boundary intersects prominent structures in the target image, as seen in Figure 4c. To solve this problem, another technique is introduced in [1]: at each point of the region Γ, the stronger variation of either the source image g or the target image f* is retained. The following vector field is used for mixed seamless image cloning, as seen in Figure 4e.

Figure 4

Test images used in seamless and mixed seamless image cloning methods: source image and selected region on it (a), target image (b), result images produced using seamless and mixed seamless image cloning methods proposed in [1](c, e), and in this study (d, f), respectively.

$$\forall p \in \Gamma, \quad \mathbf{v}(p) = \begin{cases} \nabla f^{*}(p) & \text{if } \|\nabla f^{*}(p)\| > \|\nabla g(p)\|, \\ \nabla g(p) & \text{otherwise.} \end{cases}$$
(13)
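A minimal C++ sketch of this mixed guidance field follows; it evaluates Equation 13 per pixel and then computes the divergence that replaces Δg on the right-hand side of Equation 10. The forward/backward difference scheme is an assumption of the example, not the authors' exact discretization.

```cpp
#include <vector>
#include <cmath>

// Build the mixed guidance field of Equation 13 and its divergence (Equation 10).
// Assumptions: single-channel row-major images; forward differences for the
// gradients and backward differences for the divergence.
std::vector<float> mixed_divergence(const std::vector<float>& f_star,  // target f*
                                    const std::vector<float>& g,       // source g
                                    int w, int h)
{
    std::vector<float> u(w * h, 0.0f), v(w * h, 0.0f), div(w * h, 0.0f);
    for (int y = 0; y < h - 1; ++y)
        for (int x = 0; x < w - 1; ++x) {
            int p = y * w + x;
            float fx = f_star[p + 1] - f_star[p], fy = f_star[p + w] - f_star[p];
            float gx = g[p + 1] - g[p],           gy = g[p + w] - g[p];
            // Keep the stronger of the two variations at each pixel (Equation 13).
            if (std::hypot(fx, fy) > std::hypot(gx, gy)) { u[p] = fx; v[p] = fy; }
            else                                          { u[p] = gx; v[p] = gy; }
        }
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            int p = y * w + x;
            div[p] = (u[p] - u[p - 1]) + (v[p] - v[p - w]); // div v = u_x + v_y
        }
    return div;
}
```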

Another approach for image editing is arranging local illumination changes [1]. In this method, the guided vector field can be defined as follows to arrange the illumination changes in any region of the image as shown in Figure 5a,b.

Figure 5

Test image used in arranging local lightness variation methods: input image (a), results of local lightness variation arrangement methods suggested by [1](b), and by our method (c).

$$\mathbf{v}(p) = \alpha^{\beta}\,\|\nabla f^{*}(p)\|^{-\beta}\,\nabla f^{*}(p)$$
(14)

with α equal to 0.2 times the average gradient norm of f* over Γ, and β = 0.2.
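The following is a minimal C++ sketch of the guidance field of Equation 14 under the stated parameter choices; the data layout, difference scheme, and the small epsilon guarding against division by zero are assumptions of the example.

```cpp
#include <vector>
#include <cmath>

// Guidance field of Equation 14 for local illumination adjustment:
// v = alpha^beta * ||∇f*||^(-beta) * ∇f*, with alpha = 0.2 x the average gradient
// norm over Γ and beta = 0.2. Assumptions: single-channel row-major image,
// forward differences, mask[p] != 0 marks Γ.
void illumination_field(const std::vector<float>& f_star,
                        const std::vector<unsigned char>& mask, int w, int h,
                        std::vector<float>& u, std::vector<float>& v)
{
    const float beta = 0.2f, eps = 1e-6f;
    u.assign(w * h, 0.0f); v.assign(w * h, 0.0f);
    double sum = 0.0; long count = 0;
    for (int y = 0; y < h - 1; ++y)
        for (int x = 0; x < w - 1; ++x) {
            int p = y * w + x;
            if (!mask[p]) continue;
            u[p] = f_star[p + 1] - f_star[p];       // ∂f*/∂x
            v[p] = f_star[p + w] - f_star[p];       // ∂f*/∂y
            sum += std::hypot(u[p], v[p]); ++count; // accumulate gradient norms over Γ
        }
    float alpha = 0.2f * (count ? float(sum / count) : 0.0f);
    for (int p = 0; p < w * h; ++p) {
        if (!mask[p]) continue;
        float norm = std::hypot(u[p], v[p]);
        float scale = std::pow(alpha, beta) * std::pow(norm + eps, -beta);
        u[p] *= scale; v[p] *= scale;
    }
}
```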

Also, the gradient $\nabla f^{*}$ of the target image can be passed through a sparse sieve [1]. In this way, only the most prominent features of the image are retained, which is called texture flattening. This is performed by applying Equation 15:

$$\forall p \in \Gamma, \quad \mathbf{v}(p) = M(p)\,\nabla f^{*}(p)$$
(15)

where M is a binary mask obtained using an edge detection approach as depicted in Figure 6a,d.
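Similarly, a minimal sketch of the texture-flattening guidance field of Equation 15 is given below; the mask M is assumed to come from any edge detector, for example by thresholding a gradient norm as in Figure 6b,c.

```cpp
#include <vector>

// Texture-flattening guidance field of Equation 15: the target gradient is kept
// only where the binary edge mask M is set, so only prominent edges survive.
// Assumptions: single-channel row-major image, forward differences.
void flatten_field(const std::vector<float>& f_star, const std::vector<unsigned char>& M,
                   int w, int h, std::vector<float>& u, std::vector<float>& v)
{
    u.assign(w * h, 0.0f); v.assign(w * h, 0.0f);
    for (int y = 0; y < h - 1; ++y)
        for (int x = 0; x < w - 1; ++x) {
            int p = y * w + x;
            if (!M[p]) continue;                 // sparse sieve: drop weak variations
            u[p] = f_star[p + 1] - f_star[p];    // ∂f*/∂x
            v[p] = f_star[p + w] - f_star[p];    // ∂f*/∂y
        }
}
```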

Figure 6

Test image used in texture flattening approaches: input image (a), results of texture flattening approaches: gradient norm of the image calculated by proposed method (b), binary mask calculated from the gradient norm (c), the result produced by [1](d), and the result produced by our method (e).

An efficient method presented in [6–8] utilizes not only pixel values but also texture contrasts while editing the selected region seamlessly, as illustrated in Figure 7. In this figure, we cannot perceive pixel values precisely because a variable background encloses a rectangle of constant color. Here, whereas the pixel gradient is zero, the perceived, or covariant, gradient is not. Consequently, the following equation is invariant to relighting and accounts for illumination changes [6–8], as seen in Figure 8a,g.

Figure 7

Whereas the inner geometric shape has constant pixel values, because of human perception, it seems to have variable colors [6].

Figure 8

Test image used in removing an artificially added scratch: input image (a), results of direct cloning from the illuminated area (b), dividing the result image by the source image (c), solving the PDE in the inpainting region (d), scratch removed by constrained PDE-based method in [15](e), by Poisson cloning approach in [6, 8](f), by covariant cloning method in [6, 8](g), and by our covariant inpainting method (h).

$$\frac{\Delta f}{f} - 2\,\frac{\nabla f \cdot \nabla g}{f\,g} - \frac{\Delta g}{g} + 2\,\frac{\|\nabla g\|^{2}}{g^{2}} = 0$$
(16)

This approach, the covariant Laplace equation, can be considered a Poisson equation with a modified Δg term. Actually, by simple differentiation, the above equation can be seen to be equivalent to $\Delta\!\left(\frac{f}{g}\right) = 0$. The equation is solved in three main steps for covariant inpainting, which fills image holes by propagating linear structures of the source region into the target region through a diffusion process based on the Poisson equation approach [6, 8].

These steps are as follows (a minimal sketch of this divide/solve/multiply structure is given after the list):

  (a) Dividing the result image f by the source image g.

  (b) Solving the PDE in the user-selected region.

  (c) Multiplying the result by the source image g.
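The sketch below illustrates this structure in C++. It is a minimal illustration, not the authors' implementation: images are assumed to be single-channel, row-major std::vector<float> buffers, and poisson_fill is a hypothetical helper standing for any solver that diffuses the masked region from its boundary values (for instance, an iteration similar to the Jacobi sketch shown earlier).

```cpp
#include <cstddef>
#include <vector>

// Covariant inpainting structure (steps a-c above): the ratio f/g is edited instead
// of f itself, so the result inherits the source's illumination and texture contrast.
// Assumptions: g is kept away from zero by a small epsilon; poisson_fill is a
// hypothetical Laplace/Poisson solver over the masked region.
void covariant_inpaint(std::vector<float>& f, const std::vector<float>& g,
                       const std::vector<unsigned char>& mask, int w, int h,
                       void (*poisson_fill)(std::vector<float>&,
                                            const std::vector<unsigned char>&, int, int))
{
    const float eps = 1e-6f;
    std::vector<float> ratio(f.size());
    for (std::size_t p = 0; p < f.size(); ++p)   // (a) divide the image by the source
        ratio[p] = f[p] / (g[p] + eps);
    poisson_fill(ratio, mask, w, h);             // (b) solve the PDE inside the region
    for (std::size_t p = 0; p < f.size(); ++p)   // (c) multiply back by the source
        if (mask[p]) f[p] = ratio[p] * g[p];
}
```

In the extension proposed in the next section, step (b) is carried out with the trace-based PDE of [14], so that all color channels share one smoothing geometry.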

3. The proposed method

The method studied in [1] applies the Poisson equation to each color channel of the image independently. In other words, classical scalar diffusion PDEs on each channel $f_{i}$ of the initial image $\mathbf{f}_{\text{initial}}$ are used while processing color images in many applications. However, since each image channel $f_{i}$ evolves independently with different smoothing geometries, this approach is problematic: the linear structures of the image have different directions for each vector component $f_{i}$. Even if these directions are relatively close to each other, the diffusion process will not behave in a coherent way, and there is a high risk of blending the vector components, which blurs the edges [17]. In addition, the exact geometric structure of the image may not be obtained if all the information contained in each of the color channels is not considered. Consequently, the image structure may diffuse inaccurately into the target region. Figure 9 shows the lightness channel of the test image in the CIE-Lab color space. Examining the image carefully, it can be seen that the lightness channel does not capture the image structure properly. In this case, as can clearly be seen in Figure 9b, insufficient results may be obtained when the image is edited based on the gradient norm of the lightness channel alone. Therefore, image editing should be done by considering the effects of the color channels on each other.

Figure 9

Color image representation: image in RGB color space (a) and the lightness channel L of the image in CIE-Lab color space (b).

Before explaining the proposed color image editing in detail, it is worth mentioning the essentials of inpainting color images. In recent years, many notable approaches have been developed for image inpainting, or completion [14, 15, 17–22]. Our proposed method employs a trace-based PDE method in which the local geometry of the color image is calculated from all the information contained in each of the color channels as follows:

$$\forall p \in \Gamma, \quad \mathbf{J}(p) = \sum_{i=1}^{n} \nabla f_{i}\, \nabla f_{i}^{T}, \qquad \text{where } \nabla f_{i} = \begin{pmatrix} \frac{\partial f_{i}}{\partial x} \\ \frac{\partial f_{i}}{\partial y} \end{pmatrix}$$
(17)

J is defined as follows for a color image f = (R,G,B):

$$\mathbf{J} = \begin{pmatrix} j_{11} & j_{12} \\ j_{21} & j_{22} \end{pmatrix} = \begin{pmatrix} R_{x}^{2} + G_{x}^{2} + B_{x}^{2} & R_{x} R_{y} + G_{x} G_{y} + B_{x} B_{y} \\ R_{y} R_{x} + G_{y} G_{x} + B_{y} B_{x} & R_{y}^{2} + G_{y}^{2} + B_{y}^{2} \end{pmatrix}$$
(18)

The positive eigenvalues $\lambda_{\pm}$ and the orthogonal eigenvectors $\theta_{\pm}$ of J are formulated as follows:

$$\lambda_{\pm} = \frac{j_{11} + j_{22} \pm \sqrt{\left(j_{11} - j_{22}\right)^{2} + 4 j_{12}^{2}}}{2}$$
(19)

and

$$\theta_{\pm} = \begin{pmatrix} 2 j_{12} \\ j_{22} - j_{11} \pm \sqrt{\left(j_{11} - j_{22}\right)^{2} + 4 j_{12}^{2}} \end{pmatrix}$$
(20)

A more consistent geometry is obtained if J is smoothed by a Gaussian filter: $\mathbf{J}_{\sigma} = \mathbf{J} * G_{\sigma}$. Here, $\mathbf{J}_{\sigma}$ is a good estimator of the local geometry of f at point p. Its spectral elements give both the vector-valued variations (through the eigenvalues $\lambda_{+}, \lambda_{-}$ of $\mathbf{J}_{\sigma}$) and the orientations (edges) of the local image structures (through the eigenvectors $\theta_{+}, \theta_{-}$ of $\mathbf{J}_{\sigma}$).

The gradient norm $\|\nabla \mathbf{f}\|$ of the color image f is easy to compute as follows, and it captures image structures successfully [14, 15] (see Figure 10):

Figure 10

Calculation of gradient norms: considering only the lightness channel L of the image in CIE-Lab color space (a) and considering all the information contained in each of the color channels (b).

$$\|\nabla \mathbf{f}\| = \sqrt{\lambda_{+} + \lambda_{-}} = \sqrt{\sum_{i=1}^{n} \|\nabla f_{i}\|^{2}}$$
(21)
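As an illustration of Equations 17-21, the following C++ sketch accumulates the structure tensor entries from the per-channel gradients and returns the color gradient norm. The interleaved RGB storage and forward differences are assumptions of the example, and the Gaussian smoothing of J (needed when using Equations 19, 20, and 22) is omitted for brevity.

```cpp
#include <vector>
#include <cmath>

// Color gradient norm from the structure tensor (Equations 17-21):
// ||∇f|| = sqrt(lambda+ + lambda-) = sqrt(trace(J)) = sqrt(j11 + j22).
// Assumptions: interleaved RGB floats (3 values per pixel), row-major layout.
std::vector<float> color_gradient_norm(const std::vector<float>& rgb, int w, int h)
{
    std::vector<float> norm(w * h, 0.0f);
    for (int y = 0; y < h - 1; ++y)
        for (int x = 0; x < w - 1; ++x) {
            float j11 = 0.0f, j22 = 0.0f; // j12 is not needed for the norm itself
            int p = 3 * (y * w + x);
            for (int c = 0; c < 3; ++c) {
                float dx = rgb[p + 3 + c] - rgb[p + c];      // ∂f_c/∂x
                float dy = rgb[p + 3 * w + c] - rgb[p + c];  // ∂f_c/∂y
                j11 += dx * dx;                              // Σ (f_c)_x^2
                j22 += dy * dy;                              // Σ (f_c)_y^2
            }
            // lambda+ + lambda- = trace(J) = j11 + j22  (Equations 19 and 21)
            norm[y * w + x] = std::sqrt(j11 + j22);
        }
    return norm;
}
```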

Tschumperlé and Deriche [14] suggested designing a particular field T: Γ→P(2) of diffusion tensors to specify the local smoothing behavior of the regularization process. T depends on the local geometry $\mathbf{J}_{\sigma}$ of f and is thus defined from the spectral elements $\lambda_{+}, \lambda_{-}$ and $\theta_{+}, \theta_{-}$ of $\mathbf{J}_{\sigma}$:

$$\forall p \in \Gamma, \quad \mathbf{T}(p) = s_{-}\!\left(\lambda_{+}, \lambda_{-}\right)\, \theta_{-}\theta_{-}^{T} + s_{+}\!\left(\lambda_{+}, \lambda_{-}\right)\, \theta_{+}\theta_{+}^{T}$$
(22)

The strengths of smoothing along $\theta_{+}$ and $\theta_{-}$ are set by two functions denoted $s_{\mp}: \mathbb{R}^{2} \rightarrow \mathbb{R}$, whose forms depend on the type of application. Sample functions for image denoising are proposed in [14]:

$$s_{-}\!\left(\lambda_{+}, \lambda_{-}\right) = \frac{1}{\left(1 + \lambda_{+} + \lambda_{-}\right)^{a_{1}}} \quad \text{and} \quad s_{+}\!\left(\lambda_{+}, \lambda_{-}\right) = \frac{1}{\left(1 + \lambda_{+} + \lambda_{-}\right)^{a_{2}}}, \quad \text{with } a_{1} < a_{2}$$
(23)

Here, the goal of smoothing operation can be listed as follows:

  • The pixels on image edges are smoothed along $\theta_{-}$ with a strength inversely proportional to the vector edge strength (anisotropic smoothing).

  • The pixels on homogeneous regions are smoothed along all possible directions (isotropic smoothing).

Tschumperlé and Deriche [14] present a regularization-PDE-based approach that conforms to the local smoothing geometry T, based on a trace operator:

$$\frac{\partial \mathbf{f}}{\partial t} = \left(\frac{\partial f_{i}}{\partial t}\right)_{1 \le i \le n}, \qquad \frac{\partial f_{i}}{\partial t} = \operatorname{trace}\!\left(\mathbf{T}\,\mathbf{H}_{i}\right)$$
(24)

where H i is the Hessian matrix of f i :

$$\mathbf{H}_{i} = \begin{pmatrix} \frac{\partial^{2} f_{i}}{\partial x^{2}} & \frac{\partial^{2} f_{i}}{\partial x \partial y} \\ \frac{\partial^{2} f_{i}}{\partial y \partial x} & \frac{\partial^{2} f_{i}}{\partial y^{2}} \end{pmatrix}$$
(25)

In this study, $\mathbf{H}_{i}$ is a symmetric matrix since the images are regular and $\frac{\partial^{2} f_{i}}{\partial x \partial y} = \frac{\partial^{2} f_{i}}{\partial y \partial x}$.

As Tschumperlé and Deriche [14] have shown, Equation 24 can be viewed as a local filtering approach with oriented and normalized Gaussian kernels. Here, a small convolution is locally applied around each point p with a 2D Gaussian mask G t T oriented by the tensor T(p):

$$G_{t}^{\mathbf{T}}(p) = \frac{1}{4 \pi t} \exp\!\left(-\frac{p^{T}\,\mathbf{T}^{-1}\,p}{4 t}\right)$$
(26)

As a matter of fact, a link exists between anisotropic diffusion PDE and classical filtering techniques:

$$\frac{\partial f_{i}}{\partial t} = \operatorname{trace}\!\left(\mathbf{T}\,\mathbf{H}_{i}\right) \;\Longrightarrow\; f_{i}^{(t)} \approx f_{i}^{(t=0)} * G_{t}^{\mathbf{T}}$$
(27)

The discrete evolution scheme in Equation 28 implements the regularization PDE of Equation 27 and is compatible with all the local geometric properties expressed above:

$$\mathbf{f}(t=0) = \mathbf{f}_{\text{initial}}, \qquad \mathbf{f}^{(t+1)} = \mathbf{f}^{(t)} + dt\,\frac{\partial \mathbf{f}^{(t)}}{\partial t}$$
(28)

with the diffusion tensor $\mathbf{T} = s_{-}\!\left(\lambda_{+}, \lambda_{-}\right)\theta_{-}\theta_{-}^{T}$.
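To illustrate how a single tensor field drives all channels, here is a minimal C++ sketch of one explicit step of the trace-based PDE in Equations 24, 25, and 28. It assumes row-major float buffers and that the per-pixel tensor entries t11, t12, t22 have already been computed from Equation 22; it is not the authors' CImg-based implementation.

```cpp
#include <vector>

// One explicit step of the trace-based regularization PDE (Equations 24-25, 28):
// ∂f_i/∂t = trace(T H_i) = t11 f_xx + 2 t12 f_xy + t22 f_yy, applied to every channel
// with the SAME tensor field T, so all channels follow a common smoothing geometry.
// Assumptions: f is one color channel, row-major; central differences; time step dt.
std::vector<float> trace_pde_step(const std::vector<float>& f,
                                  const std::vector<float>& t11,
                                  const std::vector<float>& t12,
                                  const std::vector<float>& t22,
                                  int w, int h, float dt)
{
    std::vector<float> out(f);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            int p = y * w + x;
            float fxx = f[p + 1] - 2.0f * f[p] + f[p - 1];
            float fyy = f[p + w] - 2.0f * f[p] + f[p - w];
            float fxy = 0.25f * (f[p + w + 1] - f[p + w - 1] - f[p - w + 1] + f[p - w - 1]);
            out[p] += dt * (t11[p] * fxx + 2.0f * t12[p] * fxy + t22[p] * fyy);
        }
    return out;
}
```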

Our method adopts the approach in [1] to seamlessly clone the source image into the target image, as shown in Equation 29.

$$\Delta f_{i} = \Delta g_{i}, \quad f_{i}|_{\partial\Gamma} = f_{i}^{*}|_{\partial\Gamma}$$
(29)

Here, in our seamless image cloning method, each image channel $f_{i}$ evolves independently with different smoothing structures, as depicted in Figure 11. However, unlike the method proposed in [1], we apply a cutting (clipping) operation to pixel values in the processed image and a linear normalization to the result image to obtain a better visual result, as seen in Figure 12e.

Figure 11

Workflow diagram for our seamless image cloning method.

Figure 12

Test images used in seamless image cloning methods: source image and selected region on it (a), target image (b), result images produced using direct cloning method (c), image cloning approach proposed in [1](d), and our seamless image cloning method (e).

Equation 13, employed in the mixed seamless cloning method, can be rewritten as below by considering all the information contained in each of the color channels:

$$\forall p \in \Gamma, \quad \mathbf{v}_{i}(p) = \begin{cases} \nabla f_{i}^{*}(p) & \text{if } \|\nabla \mathbf{f}^{*}(p)\| > \|\nabla \mathbf{g}(p)\|, \\ \nabla g_{i}(p) & \text{otherwise.} \end{cases}$$
(30)

In our method, the color image gradient norm is used to obtain the image structure more accurately as shown in Figure 10b.
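The practical difference from Equation 13 is that the source/target decision is made once per pixel from the color gradient norms and then applied to every channel. A minimal C++ sketch of this guidance-field construction, assuming interleaved RGB buffers and forward differences, is given below; it is illustrative rather than the authors' code.

```cpp
#include <vector>

// Per-pixel switch for the color-aware mixed cloning rule (Equation 30): the test
// ||∇f*|| > ||∇g|| uses the COLOR gradient norms (Equation 21), and the winning image
// supplies the gradient of all three channels, keeping the channels coherent.
void mixed_field_color(const std::vector<float>& f_star, const std::vector<float>& g,
                       int w, int h,
                       std::vector<float>& u, std::vector<float>& v) // 3 values per pixel
{
    u.assign(3 * w * h, 0.0f); v.assign(3 * w * h, 0.0f);
    for (int y = 0; y < h - 1; ++y)
        for (int x = 0; x < w - 1; ++x) {
            int p = 3 * (y * w + x);
            float nf = 0.0f, ng = 0.0f;
            for (int c = 0; c < 3; ++c) {      // squared color gradient norms of f* and g
                float fx = f_star[p + 3 + c] - f_star[p + c];
                float fy = f_star[p + 3 * w + c] - f_star[p + c];
                float gx = g[p + 3 + c] - g[p + c];
                float gy = g[p + 3 * w + c] - g[p + c];
                nf += fx * fx + fy * fy;  ng += gx * gx + gy * gy;
            }
            const std::vector<float>& src = (nf > ng) ? f_star : g; // one decision per pixel
            for (int c = 0; c < 3; ++c) {
                u[p + c] = src[p + 3 + c] - src[p + c];
                v[p + c] = src[p + 3 * w + c] - src[p + c];
            }
        }
}
```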

The methods in [6, 8, 9] were applied only to the lightness channel of the image to arrange local illumination changes and to flatten texture. Therefore, we modify the local illumination change arrangement and texture flattening methods as stated in Equations 31 and 32, respectively:

$$\mathbf{v}_{i}(p) = \alpha^{\beta}\,\|\nabla \mathbf{f}^{*}(p)\|^{-\beta}\,\nabla f_{i}^{*}(p)$$
(31)
$$\forall p \in \Gamma, \quad \mathbf{v}_{i}(p) = M(p)\,\nabla f_{i}^{*}(p)$$
(32)

where M is the binary image obtained by employing $\|\nabla \mathbf{f}^{*}(p)\|$.

The covariant inpainting method presented in [6, 8] completes the user-selected region in a seamless manner, taking into account both the pixel values and the texture contrast of the area surrounding the edited region, as seen in Figure 8f,g. However, in that method the diffusion process is not employed coherently across the color channels $f_{i}$. Therefore, we extend the method in [6, 8] to color image editing so that the image structure diffuses accurately into the target region, as depicted in Figures 13f and 14b. Namely, to solve Equation 33 in the inpainting step, we use the trace-based method [14] explained above.

Figure 13

Test image used in removing a defect: input image (a), results of direct cloning from the other area of the image (b), defect retouched by our seamless image cloning method (c), by our mixed seamless image cloning method (d), by image completion method in [20](e), and by our covariant inpainting approach (f).

Figure 14

Close-ups of test image: defect retouched by the covariant inpainting method in [6, 8](a) and by our covariant inpainting approach (b).

$$\Delta\!\left(\frac{f}{g}\right) = 0$$
(33)

In summary, to edit color images using the proposed methods, we compute the gradient norm by taking the interaction among the channels into account. However, the diffusion process in these methods, except for our covariant inpainting approach, is carried out separately for each color channel.

4. Experimental results

The proposed method is compared with the previous approaches suggested in [1, 5–8, 12, 15, 18–21], covering seamless and mixed seamless image cloning, local lightness variation arrangement, texture flattening, multi-resolution cloning with a color fidelity term, covariant inpainting, image completion, curvature-preserving PDE-based inpainting, combined PDE and texture synthesis, and variational inpainting approaches. The methods are tested on color images in the RGB color space, and some images from the referenced papers were used in these tests. The proposed editing operation is performed only on the region selected by the user.

The test results of mixed seamless image cloning using only the lightness channel and of our method are shown in Figure 15. Whereas our method is able to preserve the image structures in the result image, the lightness-channel-based method fails to preserve image edges because only the lightness channel of the images is used when computing the image gradient norms, as seen in Figure 15c,d.

Figure 15

Images used in testing mixed seamless image cloning methods: source image and selected region on it (a), target image (b), result images generated using mixed seamless cloning image method based on only the lightness channel (c), and our method (d).

The test results of the seamless image cloning method suggested in [1] and of our seamless image cloning approach are depicted in Figure 12. Both methods give almost the same results, but the result generated by our method has slightly more vibrant colors.

The test results of seamless and mixed seamless image cloning suggested in [1] and of our proposed method are given in Figure 4. Both seamless editing approaches cause blurring on some parts of the edited region of the target image, whereas the mixed seamless image cloning method does not cause any blurriness in the result. As seen in Figure 4f, there is almost no color inconsistency in the result generated by the proposed method, since all the information contained in each of the color channels is used in editing the image. This can be clearly observed in the color variation of the fume in the result image.

The test results for the seamless image cloning proposed in [5] and the mixed seamless image cloning in our study are given in Figure 16. The former method performs complex operations to find an optimized boundary condition: it uses a shortest closed-path algorithm to search for the location of the boundary, with an additional computational complexity of O(MN), where M and N stand for the number of pixels in the band around the boundary. The method also employs a blended guidance field incorporating the object's alpha matte to preserve the object's fractional boundary properly. Unlike that method, our proposed method is simple, and it is as effective as the former one. However, in Figure 16g, sea waves can be seen through the man's body because their structures are prominent against his body. This problem may be solved using a modified Poisson matting method. The additional time complexity of our method over the mixed seamless cloning method in [1], needed to compute the color image gradient norm, is O((n - 1)L), where n is the number of color channels and L is the number of pixels in the edited region.

Figure 16

Test images used in seamless and mixed seamless image cloning methods: source and target images used in [5](a), the optimized boundary computed by [5](b), source image used in our method and selected region on it (c), target image (d), result images produced using seamless image cloning methods proposed in [5](e), seamless image cloning methods based on the optimized boundary proposed in [5](f), and mixed seamless image cloning presented in this study (g).

Other test results for image cloning are given in Figure 17. The existing seamless image cloning method may diffuse the prominent global coloring of the object toward the background, as seen in Figure 17d. The method proposed in [12] takes a color fidelity term into account to avoid this problem, as observed in Figure 17e. Our seamless and mixed seamless image cloning methods generate acceptable results; however, some artifacts may appear in the neighborhood of the user-selected region. Also, the dominant structures of some grass can be seen through the squirrel's body in our mixed seamless image cloning result, as shown in Figure 17g.

Figure 17

Test images for image cloning: source image with the user-selected region (a), target image (b), results of direct cloning from another area of the image (c), the cloning method in [12] (d), the multi-resolution image cloning method in [12] (e), our seamless image cloning method (f), and our mixed seamless image cloning approach (g).

Another experiment illustrates how to conceal a balustrade. The test results generated by the proposed method are given in Figure 18e,f. Figure 18b-d shows the results of the methods proposed in [19, 15, 18], respectively. Unlike the other methods, our seamless image cloning method and the approach proposed in [18] produce visually plausible results on the brick wall and trees, as seen in Figure 18d,e. However, the curvature-preserving PDE-based method in [15] produces strong blurring effects, as depicted in Figure 18c, and our mixed seamless image cloning method fails to remove the object from the image, because the gradient of the object is more dominant than that of the user-selected patch, as seen in Figure 18f.

Figure 18

Test image for object removal: input image (a), results of the combined method in [19] (b), the constrained PDE-based method in [15] (c), the image completion method in [18] (d), our seamless image cloning method (e), and our mixed seamless image cloning method (f).

The results obtained by applying the local lightness arrangement methods are given in Figures 5 and 19. As observed in Figures 5a-c and 19a-c, the edited region is almost flat and the gradient norms in these regions are low. For this reason, both the method suggested in [1] and our proposed approach produce acceptable results in terms of visual quality.

Figure 19

Another test image employed in the local lightness variation arrangement methods: input image (a), results of the method suggested by [1] (b), and of our approach (c).

The results of texture flattening methods are given in Figure 6. It is seen that our method properly flattens the selected region on the test image.

The results for removing a scratch from a shadow region are depicted in Figure 8. Both the covariant cloning methods in [6–8] and our covariant inpainting approach not only correct the lighting but also reconstruct the texture in a seamless manner, as shown in Figure 8g,h. However, the constrained PDE-based method in [15] and the Poisson cloning approach in [6, 8] generate visually annoying effects, as seen in Figure 8e,f.

The test results for concealing a blotch on a film frame are given in Figure 20. Our covariant inpainting method generates a visually plausible result, while the image completion method in [18] produces some blocking artifacts in the result image. However, the method in [18] works fully automatically, which is not the case for our covariant inpainting approach. Actually, direct cloning and our seamless image cloning methods also give good results for this film frame, as seen in Figure 20b,c. However, our mixed seamless image cloning method fails to produce a satisfactory result because of the dominant structure of the object, so the blotch cannot be concealed, as shown in Figure 20d.

Figure 20

Test image used in concealing a blotch: input image (a), results of direct cloning from another area of the image by flipping horizontally (b), blotch removed by our seamless image cloning method (c), by our mixed seamless image cloning method (d), by the image completion method in [18](e), and by our covariant inpainting method (f).

The results of other tests on defect removal are shown in Figure 13. Our covariant inpainting method, our seamless image cloning approach, and the image completion method in [20] produce better results compared with the other methods, as seen in Figure 13c,e,f. However, direct cloning and our mixed seamless image cloning method give visually annoying effects. We are also able to compare our covariant inpainting method with the method presented in [6, 8], since the latter was applied to a grayscale version of the original image. Compared to the method in [6, 8], our covariant inpainting approach generates fewer blurring artifacts, and the linear structures of the given image are propagated better, as shown in Figure 14a,b.

Lastly, another test result is given for object removal. Figure 21 shows the results of the constrained PDE-based algorithm in [15], the variational inpainting method in [21], and our covariant inpainting approach. Unlike the other methods, our method gives a good result; however, two different patches must be selected by the user, since a single patch is not able to remove the object from the image efficiently.

Figure 21

Another test image for object removal: input image (a), results of constrained PDE-based method in [15](b), variational inpainting algorithm in [21](c), and our covariant inpainting approach (d).

The methods were implemented in Microsoft Visual C++ 2005 using the CImg Library [23]. The program was run on a PC with a 2.20 GHz Pentium processor and 2 GB of RAM. The average required time depends on the selected regions as well as on the image editing method. The processing time for our covariant inpainting method shown in Figure 13f is about 5 s for seven iterations.

5. Conclusion

In this article, a method is presented for editing color images that effectively uses all the information contained in each of the color channels. In this method, the gradient norm and the inpainting process are computed by considering the effects of the color channels on each other. This approach minimizes color inconsistency, and thus the selected region of the source image is cloned seamlessly into the target image.

As future work, automatic region selection using moment invariants may be developed so that the proposed method can generate output faster. In addition, the method will be extended to remove defects such as blotches from old motion pictures, since proper patches exist in neighboring frames to retouch the current one [22].

References

  1. Perez P, Gangnet M, Blake A: Poisson image editing. ACM Trans Graph 2003,22(3):313-318. 10.1145/882262.882269


  2. Sun J, Jia J, Tang CK, Shum HY: Poisson matting. ACM Trans Graph 2004,23(3):315-321. 10.1145/1015706.1015721


  3. Chuan Q, Shuozhong W, Xinpeng Z: Image editing without color inconsistency using modified poisson equation. International Conference on Intelligent Information Hiding and Multimedia Signal Processing 2008.


  4. Leventhal D, Sibley PG: Poisson image editing extended. International Conference on Computer Graphics and Interactive Techniques, ACM SIGGRAPH 2006.


  5. Jia J, Sun J, Tang C, Shum H: Drag-and-drop pasting. International Conference on Computer Graphics and Interactive Techniques, ACM SIGGRAPH 2006.


  6. Georgiev T: Covariant derivatives and vision. ECCV 2006, 9th European Conference on Computer Vision, Graz, Austria 2006.


  7. Georgiev T: Photoshop healing brush: a tool for seamless cloning. Workshop on Applications of Computer Vision in conjunction with ECCV 2004, Prague 2004, 1-8.


  8. Georgiev T: Image reconstruction invariant to relighting. EUROGRAPHICS 2005.


  9. Fattal R, Lischinski D, Werman M: Gradient domain high dynamic range compression. Proceedings of ACM SIGGRAPH 2002 2002, 249-256.


  10. Shen J, Jin X, Zhou C, Wang CCL: Gradient based image completion by solving the Poisson equation. Comput Graph 2007,31(1):119-126. 10.1016/j.cag.2006.10.004


  11. Dizdaroğlu B, İkibaş C: A seamless image editing technique using color information. IPCV'09: International Conf. on Image Processing, Computer Vision & Pattern Recognition, Las Vegas, USA 2009.


  12. Yang W, Zheng J, Cai J, Rahardja S, Chen CW: Natural and seamless image composition with color control. IEEE Trans Image Process 2009,18(11):2584-2592.


  13. Tikhonov AN: Regularization of incorrectly posed problems. Soviet Math Dokl 1963, 4: 1624-1627.


  14. Tschumperlé D, Deriche R: Vector-valued image regularization with PDE's: a common framework for different applications. IEEE Trans Pattern Anal Mach Intell 2005,27(4):506-517.


  15. Tschumperlé D: Fast anisotropic smoothing of multi-valued images using curvature-preserving PDE's. Int J Comput Vis 2006,68(1):65-82. 10.1007/s11263-006-5631-z


  16. Dizdaroğlu B: Solving heat flow equation for image regularization. In The 1st International Symposium on Computing in Science & Engineering (ISCSE). Aydin, Turkey; 2010:372-377.


  17. Tschumperlé D: PDE's based regularization of multivalued images and applications. PhD thesis. Université de Nice-Sophia Antipolis; 2002.


  18. Dizdaroğlu B: An image completion method using decomposition. EURASIP J Adv Signal Process 2011. Article ID 831724, 15


  19. Harald G: Combined PDE and texture synthesis approach to inpainting. Proc ECCV 2004, 2: 214-224.


  20. Barnes C, Shechtman E, Finkelstein A, Goldman DB: PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Trans Graph 2009,28(3):24:1-24:11.


  21. Bugeau A, Bertalmío M, Caselles V, Sapiro G: A comprehensive framework for image inpainting. IEEE Trans Image Process 2010,19(10):2634-2645.


  22. Dizdaroğlu B, Gangal A: A spatiotemporal algorithm for detection and restoration of defects in old color films. Lecture Notes in Computer Science 2007, 4678: 509-520. 10.1007/978-3-540-74607-2_46


  23. Tschumperlé D: The C++ template image processing library, The CImg Library.[http://cimg.sourceforge.net/]


Author information

Correspondence to Bekir Dizdaroğlu.

Additional information

6. Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Dizdaroğlu, B., İkibaş, C. An improved method for color image editing. EURASIP J. Adv. Signal Process. 2011, 98 (2011). https://doi.org/10.1186/1687-6180-2011-98
