
Optimal chroma-like channel design for passive color image splicing detection

Abstract

Image splicing is one of the most common image forgeries, and powerful image manipulation tools have made it increasingly easy to perform. Several methods have been proposed for image splicing detection, and all of them operate on certain existing color channels. However, splicing artifacts vary across color channels, so the selection of the color model is important for image splicing detection. In this article, instead of selecting an existing color model, we propose a color channel design method that finds the most discriminative channel, referred to as the optimal chroma-like channel, for a given feature extraction method. Experimental results show that both spatial and frequency features extracted from the designed channel achieve a higher detection rate than those extracted from traditional color channels.

1 Introduction

Modern information technology brings great convenience to our daily life; however, as every coin has two sides, “Seeing is not believing” [1], and tampered images are everywhere due to the fast development of powerful and sophisticated image manipulation and modification tools. As a result, people have gradually lost trust in digital images. Digital image forensics techniques have emerged over the past few years to regain some trust in digital images, and they can roughly be divided into two categories: active methods [2–4] and passive methods [5–25]. In the active approaches, certain “tags” reflecting the image content, such as a digital watermark or digital fingerprint, are embedded in the image. At the detection side, such “tags” are extracted and analyzed to authenticate the integrity of the image. However, in real applications on the Internet, it is impossible to require that all images be watermarked before uploading, which limits the application of the active approaches. On the other hand, the passive approaches aim to authenticate a digital image based on certain underlying characteristics of natural images, and thus no prior information is required. Among all image forgeries, image splicing can be considered the most fundamental and common operation. It is a process that creates a composite image by cropping and pasting regions from one or more images. In this article, we are concerned with passive (blind) digital image forensics methods for image splicing detection.

In recent years, many researchers have proposed various methods to detect image splicing. These methods can roughly be divided into three categories: statistical feature-based, lighting condition-based, and camera-based approaches. For statistical feature-based methods, the composite image may be perceptually deceiving, but the tampering operation changes the underlying statistical characteristics of the original image. Statistical feature-based methods were therefore proposed to discover composite images using cues from such statistical features [8–11]. For example, it is observed in [10] that the splicing process introduces a discontinuity into a composite signal at the spliced point, and thus the lack of smoothness can be regarded as a departure from a normal signal due to a perturbation by a bipolar signal. Based on the above assumptions, Ng et al. [10] modeled image splicing as a perturbation of the authentic counterpart with a bipolar signal. They analyzed the response of the bicoherence magnitude and phase features to detect image splicing based on the proposed model. They stated that image splicing increases the value of the bicoherence magnitude and contributes to a phase bias at ±90°. Thus, the two features can be treated as discriminative features for image splicing detection. Shi et al. [11] proposed a natural image model for image splicing detection, whose statistical features consist of moments of characteristic functions of wavelet sub-bands and first-order Markov transition probabilities of the neighboring difference block DCT array (DCT Markov). All the extracted features were combined into a vector, and this vector was treated as the distinguishing feature for splicing detection. When composing an image, it is difficult for the forger to match the lighting conditions. Therefore, inconsistencies in light directions can be used as further evidence for revealing the tampering, and lighting condition-based methods were proposed in [12–14]. Johnson and Farid [12, 13] modeled the lighting environment with a five-dimensional vector, and the model approximated the lighting with a linear combination of spherical harmonics. Inconsistencies detected in the model across the image were treated as evidence of image tampering. In some environments (especially indoor scenes), the light source may give rise to a specular highlight on the human eye. Johnson and Farid [14] showed that the direction to a light source could be estimated from the specular highlight, and inconsistencies in lighting conditions across the image were then used to infer splicing traces in a digital image. Due to the imperfections of digital cameras, some artifacts (e.g., CFA interpolation, chromatic aberration, and sensor noise) are introduced in the imaging process; camera-based methods [15–23] have modeled the artifacts introduced in image processing, and inconsistencies of these artifacts can be treated as evidence of splicing. CFA interpolation (demosaicing) is commonly used in the imaging process of single-CCD digital cameras. For manipulated images, the demosaicing regularity is destroyed. Based on this observation, Cao and Kot [23] proposed an ensemble manipulation detection framework (i.e., FusionBoost) to detect a range of manipulations on local image patches. This framework is composed of a set of lightweight manipulation detectors, each of which is trained with CFA correlation-related features to detect some specific manipulations. All the classification results from these detectors are finally combined by the FusionBoost algorithm, so that the ensemble classifier can detect various image manipulations.

The existing splicing detection methods introduced above are usually designed for gray-scale images; however, most images on the Internet are color images, and chromatic information may provide additional cues for splicing detection. Since image splicing detection can be treated as the problem of detecting a weak signal (splicing) in the background of a strong signal (image content), removing the strong signal while preserving the weak signal is of vital importance for splicing detection [24]. However, the luma channel (Y), which is used most frequently in splicing detection work, preserves much more image content than the chromatic channels (e.g., Cb and Cr) [24]. Wang et al. [24] therefore investigated the effectiveness of chromatic channels in detecting image splicing. They employed the gray-level co-occurrence matrix (GLCM) as discriminative features for classification, and the experimental results showed that features extracted from chromatic channels achieved much better performance than those extracted from the luma channel. In all the approaches mentioned above, various kinds of features were extracted from existing color channels; however, to the best of the authors' knowledge, the question of which chromatic channel is most discriminative for image splicing detection has not been well addressed yet. Based on the work in [24] and our previous work [25], Cb and Cr have proved their superiority in detecting image splicing. Since Cb and Cr, which are linear transforms of the R, G, and B channels, can be used for image splicing detection, other linear transforms of R, G, and B (i.e., chroma-like channels) may also be employed for image splicing detection. In this article, we propose a way of designing an optimal chroma-like channel which can best differentiate spliced images from natural ones. The feature extraction methods proposed in [9, 11, 24, 25] are used to test the effectiveness of the designed channel, and the experimental results show that features extracted from the designed chroma-like channel achieve a higher detection rate than those extracted from traditional color channels.

The rest of this article is organized as follows. Section 2 explains the underlying rationale of using inter-channel information in splicing detection. Details of the proposed chromatic channel design and splicing detection approaches are elaborated in Section 3. In Section 4, the framework of image splicing detection using optimal chroma-like channel is given and several existing image splicing detection methods to be employed in our experiment are introduced. Section 5 shows the experimental results and the corresponding performance analysis. Finally, conclusions are drawn in Section 6.

2 Use of inter-channel information

A color model is a specification of a coordinate system (usually 3D or 4D) and a subspace within which each color is represented by a single point. Color models are used in different applications, and the choice of color model is of great importance for feature extraction, object recognition, tracking, etc. [26]. There are many color models available for image splicing detection (e.g., RGB, HSV, YCbCr, CMY, YIQ, YUV, CIE Lab, XYZ, etc.), and an important question is how to select the best color model or channel for image splicing detection. The RGB color model and the luma channel are the most frequently used in image splicing detection. However, some edge information caused by image splicing may not be clear in a single channel; an example is given in Figure 1. It can be observed from Figure 1 that many edges caused by splicing are lost in a single channel, which illustrates that a single channel of RGB is not the best choice for image splicing detection.

Figure 1

An image and its R, G, B color channels. (a) Original image, (b) R channel, (c) G channel, (d) B channel.

The YCbCr color model has also been adopted for image splicing detection in the last few years. YCbCr in Rec. ITU-R BT.601-6 is defined as a linear transform of the R, G, and B channels, which is formulated in (1).

$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.299 & -0.587 & 0.886 \\ 0.701 & -0.587 & -0.114 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix}$$
(1)

Y is the weighted sum of the R, G, and B channels; the Cb and Cr channels are the blue-difference and red-difference chroma components, respectively. Y preserves most of the edges; however, as can be observed in (1), the G channel contributes most to the Y channel (with coefficient 0.587), while B contributes least (with coefficient 0.114). Therefore, edges in blue and red areas are likely to be neglected in the Y channel; an example is given in Figure 2. It is observed from Figure 2 that edges caused by splicing in the red and blue areas of the original image are almost invisible in the Y channel. Although the Cb and Cr channels preserve fewer edges, edges that are invisible in the Y channel appear sharp in the Cb or Cr channel. The missing edges and splicing artifacts in the Y, Cb, and Cr channels are marked by arrows from left to right in the second row of Figure 2.
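As a concrete illustration of (1), the following Python sketch computes the Y, Cb, and Cr planes of an RGB image with the full-range BT.601 coefficients above. The function name and the use of NumPy are ours for illustration and are not part of the cited standard.

```python
import numpy as np

# Rows of M give the Y, Cb, and Cr weights of Eq. (1) (full-range BT.601 form).
M = np.array([[ 0.299,  0.587,  0.114],
              [-0.299, -0.587,  0.886],
              [ 0.701, -0.587, -0.114]])
OFFSET = np.array([16.0, 128.0, 128.0])

def rgb_to_ycbcr(img_rgb):
    """Convert an H x W x 3 RGB image (values in 0..255) to its Y, Cb, Cr planes."""
    ycbcr = img_rgb.astype(np.float64) @ M.T + OFFSET
    return ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]
```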

Figure 2

Y, Cb, and Cr channels of Figure 1a and their Sobel edge images. The Y, Cb, and Cr channels are given in the first row and the corresponding Sobel edge images are given in the second row.

Based on the above analysis, it can be concluded that (1) the existing color models are application-specific, and, to the best of the authors' knowledge, there is no splicing detection-oriented color model; (2) some of the edges (or splicing artifacts) are difficult to detect in a single channel; (3) multi-channel information should be considered in the process of image splicing detection.

3 Proposed method

Since image splicing detection can be regarded as detecting a weak signal (splicing artifacts) in the background of a strong signal (image content), detecting splicing artifacts in the Y channel, which preserves most of the image content, is usually a difficult task. On the contrary, the Cb and Cr channels contain chromatic information which is less related to the image content. Therefore, detecting image splicing in chromatic channels can be regarded as detecting a weak signal in the background of another weak signal, which helps reduce the detection difficulty [24]. However, some splicing artifacts may be lost in any single chromatic channel, as illustrated in Figure 2. Considering the above issues, we aim to design a channel which removes the influence of the image content while preserving the splicing artifacts as much as possible.

In this study, the chroma-like channel $C_\theta$ is defined as follows:

$$C_\theta = (\alpha, \beta, \gamma)\,(R, G, B)^{T} + 128$$
(2)

where α and β range from −1 to 1. We assume α + β + γ = 0 to obtain chroma-like information, and a constant 128 is added, similar to the Cb and Cr channels in (1). When considering the problem of image splicing detection, the spliced edges vary with the selection of the coefficients in (2). An example is given in Figure 3: some spliced images are shown in the first row, where the telephone box, the teddy bear, and the bird are the spliced parts from left to right, and their corresponding Sobel edge images in different chroma-like channels are presented in the following rows. Chroma-like channels with coefficients (0.3, −0.5, 0.2), (0.4, 0.1, −0.5), and (0.6, 0.1, −0.7) are given in the last three rows, respectively. It is observed from Figure 3 that a chroma-like channel with properly selected (α, β, γ) can remove the background signal while emphasizing the splicing artifacts.
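A minimal Python sketch of (2) is given below; the function name and the example coefficients are illustrative only, with γ derived from the constraint α + β + γ = 0.

```python
import numpy as np

def chroma_like_channel(img_rgb, alpha, beta):
    """C_theta = alpha*R + beta*G + gamma*B + 128 with gamma = -(alpha + beta)."""
    gamma = -(alpha + beta)  # enforce the chroma-like constraint alpha + beta + gamma = 0
    r, g, b = (img_rgb[..., c].astype(np.float64) for c in range(3))
    return alpha * r + beta * g + gamma * b + 128.0

# Example: one of the coefficient triples shown in Figure 3, (0.3, -0.5, 0.2).
# c_theta = chroma_like_channel(img, 0.3, -0.5)
```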

Figure 3

Edge images of spliced images in chroma-like channels. Spliced images are shown in the first row. Edge images in chroma-like channels with different α, β, and γ values are given in the subsequent three rows.

The desired chroma-like channel should be the optimal trade-off between preserving as many splicing artifacts as possible and removing as much redundant image content as possible. A machine learning method, the support vector machine (SVM) [27], is integrated into our method to solve this problem, and image splicing detection is modeled as a two-class classification task in the optimal chroma-like channel. Let $\mathbf{x}_j(\alpha, \beta, \gamma)$, $j = 1, 2, \dots, N$, be the feature vectors of the training set extracted from the chroma-like channel with coefficients $(\alpha, \beta, \gamma)$. Since $\alpha + \beta + \gamma = 0$, $\mathbf{x}_j(\alpha, \beta, \gamma)$ can be abbreviated as $\mathbf{x}_j(\alpha, \beta)$, and it belongs to one of two classes, i.e., authentic image or spliced image. SVM aims to find a hyperplane $\mathbf{w}^{T}\mathbf{x}(\alpha, \beta) + w_0 = 0$ that best separates all the feature vectors, and $\mathbf{w}$ and $w_0$ can be derived by

$$\text{minimize}\quad \frac{1}{2}\mathbf{w}^{T}\mathbf{w} + C\sum_{j=1}^{N}\xi_j$$
(3)

which is subject to:

$$y_j\left(\mathbf{w}^{T}\mathbf{x}_j(\alpha, \beta) + w_0\right) \geq 1 - \xi_j,\qquad \xi_j \geq 0,\qquad j = 1, 2, \dots, N$$
(4)

where $\xi_j$ is a non-negative slack variable, C is a positive constant fixed when training the SVM, $y_j$ is the preset label (+1 for authentic and −1 for spliced), and N is the total number of images. Equation (3) is a quadratic optimization task with the linear constraints (4), and it can be solved by the Lagrange multiplier method. The Lagrangian function is defined as

$$L(\mathbf{w}, w_0, \boldsymbol{\xi}) = \frac{1}{2}\mathbf{w}^{T}\mathbf{w} + C\sum_{j=1}^{N}\xi_j - \sum_{j=1}^{N}\lambda_j\left[y_j\left(\mathbf{w}^{T}\mathbf{x}_j(\alpha, \beta) + w_0\right) - 1 + \xi_j\right] - \sum_{j=1}^{N}\mu_j\xi_j.$$
(5)

The Karush–Kuhn–Tucker conditions that (3) and (4) have to satisfy are

$$\frac{\partial L(\mathbf{w}, w_0, \boldsymbol{\xi})}{\partial \mathbf{w}} = 0 \;\Rightarrow\; \mathbf{w} = \sum_{j=1}^{N}\lambda_j y_j \mathbf{x}_j(\alpha, \beta)$$
(6)
$$\frac{\partial L(\mathbf{w}, w_0, \boldsymbol{\xi})}{\partial w_0} = 0 \;\Rightarrow\; \sum_{j=1}^{N}\lambda_j y_j = 0$$
(7)
$$\frac{\partial L(\mathbf{w}, w_0, \boldsymbol{\xi})}{\partial \xi_j} = 0 \;\Rightarrow\; \lambda_j + \mu_j - C = 0$$
(8)
$$\lambda_j\left[y_j\left(\mathbf{w}^{T}\mathbf{x}_j(\alpha, \beta) + w_0\right) - 1 + \xi_j\right] = 0,\qquad j = 1, 2, \dots, N$$
(9)
$$\mu_j\,\xi_j = 0,\qquad j = 1, 2, \dots, N$$
(10)
$$\lambda_j \geq 0,\qquad j = 1, 2, \dots, N$$
(11)
$$\mu_j \geq 0,\qquad j = 1, 2, \dots, N.$$
(12)

According to Wolfe Dual representation, the problem of (3) and (4) is equivalent to

$$\text{maximize}\quad L(\mathbf{w}, w_0, \boldsymbol{\xi})$$
(13)

subject to (6)–(12). Substituting (6), (7), and (8) into (13), we obtain

$$\text{maximize}\quad -\frac{1}{2}\sum_{j=1}^{N}\sum_{k=1}^{N} y_j\lambda_j\, y_k\lambda_k\, \mathbf{x}_j^{T}(\alpha, \beta)\,\mathbf{x}_k(\alpha, \beta) + \sum_{j=1}^{N}\lambda_j.$$
(14)

To improve the performance of the SVM, a Gaussian kernel, which defines a mapping into a higher-dimensional space, is introduced in our method; it is defined as

$$K\left(\mathbf{x}_j(\alpha, \beta), \mathbf{x}_k(\alpha, \beta)\right) = e^{-\frac{\left\|\mathbf{x}_j(\alpha, \beta) - \mathbf{x}_k(\alpha, \beta)\right\|^{2}}{\sigma^{2}}}$$
(15)

Finally, λ, α, and β can be derived by

$$\left(\lambda^{*}, \alpha^{*}, \beta^{*}\right) = \arg\max_{\lambda, \alpha, \beta}\left\{-\frac{1}{2}\sum_{j=1}^{N}\sum_{k=1}^{N} y_j\lambda_j\, y_k\lambda_k\, K\left(\mathbf{x}_j(\alpha, \beta), \mathbf{x}_k(\alpha, \beta)\right) + \sum_{j=1}^{N}\lambda_j\right\},$$
(16)

which is subject to

$$\sum_{j=1}^{N}\lambda_j y_j = 0,\qquad 0 \leq \lambda_j \leq C,\qquad 0 \leq \alpha^{2} \leq 1,\qquad 0 \leq \beta^{2} \leq 1.$$
(17)

Therefore, deriving the optimal chroma-like channel (i.e., the optimal $[\alpha, \beta, -\alpha - \beta]$) is equivalent to finding the largest hyperplane margin among the candidate feature spaces (features extracted from chroma-like channels), which are mapped into higher-dimensional spaces using the Gaussian kernel. Since there is no direct parameter optimization scheme for solving (16) and (17), a numerical optimization method is employed to obtain the optimal θ, defined as θ = [α, β] for notational convenience; it consists of two steps: initial parameter selection and local optimum searching.

The initial parameter selection plays an important role in the optimization: properly selected initial parameters will greatly improve the algorithm performance. Two-dimensional (α and β) grid searching is used to find the best initial value $\theta_0 = [\alpha_0, \beta_0]$, and the grid step is fixed at 0.1, that is, $\theta_{i+1} - \theta_i = 0.1$. For the i-th parameter $\theta_i$, the feature vector $\mathbf{x}_j(\theta_i)$ of the j-th image in the dataset is extracted. The feature set $S_{\theta_i} = \left[\mathbf{x}_1(\theta_i), \mathbf{x}_2(\theta_i), \dots, \mathbf{x}_N(\theta_i)\right]$ is then fed into the SVM, and the classification result $\Phi(S_{\theta_i})$, defined as

$$\Phi(S_{\theta_i}) = \sum_{j=1}^{N} y_j\left[\mathbf{w}^{T}\mathbf{x}_j(\theta_i) + w_0\right]$$
(18)

is treated as the selection criterion. $\mathbf{w}$ and $w_0$ in (18) can be derived by (16), (17), (6), and (9) with fixed $\theta_i = [\alpha_i, \beta_i]$. Note that if $\mathbf{w}^{T}\mathbf{x}_j(\theta_i) + w_0 > 0$, $\mathbf{x}_j(\theta_i)$ is classified as an authentic image whose label is +1; otherwise, $\mathbf{x}_j(\theta_i)$ is classified as a spliced one whose label is −1. That is, the detection performance is in direct proportion to the value of $\Phi(S_{\theta_i})$. When the feature extraction method and the image dataset under investigation have been fixed, the initial parameter $\theta_0$ is determined as

$$\theta_0 = \arg\max_{\theta_i} \Phi(S_{\theta_i}).$$
(19)
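The following Python sketch illustrates the grid search of (19) under stated assumptions: an RBF SVM (here scikit-learn's SVC, which wraps LIBSVM) stands in for the classifier, Φ of (18) is approximated through the SVM decision function, and extract_features is a hypothetical user-supplied callable that builds the chroma-like channel of (2) and returns its feature vector (GLCM, RLRN, etc.).

```python
import numpy as np
from sklearn.svm import SVC

def phi_score(features, labels, C=1.0, gamma='scale'):
    """Eq. (18): sum of y_j * f(x_j), with f the decision function of a trained RBF SVM."""
    clf = SVC(kernel='rbf', C=C, gamma=gamma).fit(features, labels)
    return float(np.sum(labels * clf.decision_function(features)))

def grid_search_theta0(images, labels, extract_features, step=0.1):
    """Eq. (19): coarse grid search over (alpha, beta) in [-1, 1]^2 with step 0.1.

    extract_features(image, alpha, beta) is assumed to return the feature vector
    extracted from the chroma-like channel C_theta of that image.
    """
    labels = np.asarray(labels)
    best_theta, best_phi = None, -np.inf
    for alpha in np.arange(-1.0, 1.0 + 1e-9, step):
        for beta in np.arange(-1.0, 1.0 + 1e-9, step):
            feats = np.vstack([extract_features(im, alpha, beta) for im in images])
            score = phi_score(feats, labels)
            if score > best_phi:
                best_theta, best_phi = np.array([alpha, beta]), score
    return best_theta, best_phi
```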

After initializing the parameter $\theta_0$, a gradient ascent algorithm is employed to find the local optimum $\theta^{*}$. The gradient ascent algorithm takes each step proportional to the gradient at the current point, and a local maximum is reached after several iterations. Since $S_{\theta_i}$ is a function of $\theta_i$, the (i+1)-th parameter is computed as in (20), where $\nabla\Phi(S_{\theta_i})$ defined in (21) is the gradient of Φ at $\theta_i$ and δ is the minimum searching step.

$$\theta_{i+1} = \theta_i + \delta\,\frac{\nabla\Phi(S_{\theta_i})}{\left\|\nabla\Phi(S_{\theta_i})\right\|},$$
(20)
$$\nabla\Phi(S_{\theta_i}) = \left(\frac{\partial\Phi(S_{\theta_i})}{\partial\alpha_i},\; \frac{\partial\Phi(S_{\theta_i})}{\partial\beta_i}\right).$$
(21)

A local maximum of Φ is obtained by the following steps (a code sketch follows the list):

(1) Set the initial parameter $\theta_0$ and the termination condition ε, and let i = 0.

(2) Compute $\nabla\Phi(S_{\theta_i})$, update the parameter vector $\theta_{i+1}$ by (20) and (21), and let i = i + 1.

(3) Repeat step (2) until $\left\|\nabla\Phi(S_{\theta_i})\right\| < \varepsilon$; the optimal parameter vector is then obtained as $\theta^{*} = \theta_i$.
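A minimal Python sketch of steps (1)–(3) follows. Since Φ is only available through SVM training rather than in closed form, the gradient of (21) is approximated here by central finite differences; that approximation (and the step size h) is our assumption, not part of the original description. Here phi is the score function of (18), e.g., built from phi_score and a feature extractor as in the previous sketch.

```python
import numpy as np

def gradient_ascent_theta(phi, theta0, delta=0.02, eps=0.1, h=0.01, max_iter=100):
    """Climb Phi from theta0 with normalized-gradient steps of length delta (Eq. (20)).

    phi: callable mapping theta = [alpha, beta] to the score of Eq. (18).
    The gradient of Eq. (21) is approximated by central finite differences of step h.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        grad = np.array([
            (phi(theta + h * np.eye(2)[d]) - phi(theta - h * np.eye(2)[d])) / (2 * h)
            for d in range(2)
        ])
        norm = np.linalg.norm(grad)
        if norm < eps:                       # termination condition of step (3)
            break
        theta = theta + delta * grad / norm  # update rule of Eq. (20)
    return theta
```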

4 Splicing detection using optimal chroma-like channel

In this section, we present the general framework of our proposed method for image splicing detection. In addition, the four commonly used image splicing detection features that will be employed to test the effectiveness of the proposed method are briefly introduced.

4.1 Framework of optimal chroma-like channel design for image splicing detection

The optimal chroma-like channel design can be regarded as a training process that obtains the optimal coefficient $\theta^{*}$ with a labeled image dataset, a specific feature extraction method, and a classifier; it proceeds as follows. Images in the dataset are divided into two groups, i.e., authentic images labeled as +1 and spliced images labeled as −1. These labeled images are treated as the ground truth in the training process. First, the labeled images are transformed into the chroma-like channel $C_{\theta_i}$ according to (2). Next, discriminative features are extracted from the $C_{\theta_i}$ channel, and the features of all the labeled images are grouped into a feature set $S_{\theta_i}$. After that, $S_{\theta_i}$ is fed into the SVM classifier, and the prediction result $\Phi(S_{\theta_i})$ is obtained via (18). Finally, a gradient ascent algorithm is employed to find $\theta^{*}$ for the optimal chroma-like channel. The diagram of the proposed framework is illustrated in Figure 4.

Figure 4

Framework of optimal chroma-like channel design.

4.2 Feature extraction methods

The aim of the optimal chroma-like channel is to find the most discriminative channel for a specific feature extraction method. In order to test the effectiveness of the proposed method, four widely used feature extraction methods are employed in our experimental work: GLCM [24], RLRN [25], DCT Markov [11], and 42D Moments [9, 11].

4.2.1 GLCM

Let x(u, v) be the image to be detected. Directional GLCMs, i.e., $\mathrm{GLCM}_{\tau}$ (τ = 0°, 45°, 90°, 135°), of the thresholded neighboring-difference edge images are used as discriminative features for color image splicing detection [24]. Since $\mathrm{GLCM}_{\tau}$ is usually large and sparse, a predefined threshold is applied to the edge image in order to obtain a $\mathrm{GLCM}_{\tau}$ of reasonable size. Finally, $\mathrm{GLCM}_{0°}$, $\mathrm{GLCM}_{45°}$, $\mathrm{GLCM}_{90°}$, and $\mathrm{GLCM}_{135°}$ are treated as features for classification.
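A rough Python sketch of this feature family is given below; the threshold T, the number of levels, and the direction offsets are illustrative choices rather than the exact settings of [24].

```python
import numpy as np

def glcm_features(channel, T=8):
    """Directional GLCMs of thresholded horizontal/vertical difference images (sketch)."""
    levels = 2 * T + 1
    feats = []
    for axis in (0, 1):                                       # vertical and horizontal differences
        d = np.diff(np.rint(channel).astype(np.int32), axis=axis)
        edge = np.clip(d, -T, T) + T                          # threshold and shift to 0..2T
        h, w = edge.shape
        for dr, dc in [(0, 1), (-1, 1), (-1, 0), (-1, -1)]:   # 0, 45, 90, 135 degrees
            src = edge[max(0, -dr):h - max(0, dr), max(0, -dc):w - max(0, dc)]
            dst = edge[max(0, dr):h - max(0, -dr), max(0, dc):w - max(0, -dc)]
            m = np.zeros((levels, levels))
            np.add.at(m, (src.ravel(), dst.ravel()), 1)       # co-occurrence counts
            feats.append((m / m.sum()).ravel())
    return np.concatenate(feats)
```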

4.2.2 RLRN

For a given image, the run-length matrix $p_\tau(m, n)$ is defined as the number of runs with gray level m − 1 and run length n along direction τ. The run-length run-number (RLRN) vector defined in (22) was employed in our previous work [25] to detect image splicing in the chroma channels. In the run-length matrix of an image, short runs usually dominate the total number of runs; in order to place emphasis on long runs, $p_\tau(m, n)\cdot n$ is adopted as a transformed run-length matrix. The sum distribution of $p_\tau(m, n)\cdot n$ over gray levels is finally fed into the classifier for classification.

$$\sum_{m} p_\tau(m, n)\cdot n$$
(22)
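A simplified Python sketch of the horizontal-direction RLRN vector of (22) follows; the quantization to 256 gray levels and the cap on the maximum run length are our assumptions, introduced only to keep the feature size fixed.

```python
import numpy as np

def rlrn_features(channel, levels=256, max_run=32):
    """RLRN along the horizontal direction: sum_m p(m, n) * n (sketch of Eq. (22))."""
    img = np.clip(np.rint(channel), 0, levels - 1).astype(np.int32)
    p = np.zeros((levels, max_run))                 # run-length matrix p(m, n)
    for row in img:
        start = 0
        for j in range(1, len(row) + 1):
            if j == len(row) or row[j] != row[start]:
                n = min(j - start, max_run)         # run length, capped at max_run
                p[row[start], n - 1] += 1
                start = j
    n_idx = np.arange(1, max_run + 1)
    return (p * n_idx).sum(axis=0)                  # sum over gray levels m
```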

4.2.3 DCT Markov

Shi et al. [11] modeled the directional first-order Markov transition probability matrices of the adjacent-difference block DCT arrays, which are given as follows.

$$P_{0°}(\omega_j \mid \omega_i) = \frac{\sum_{r}\sum_{s}\delta\left(D_{0°}(r, s) = \omega_i,\; D_{0°}(r+1, s) = \omega_j\right)}{\sum_{r}\sum_{s}\delta\left(D_{0°}(r, s) = \omega_i\right)}$$
(23)
$$P_{90°}(\omega_j \mid \omega_i) = \frac{\sum_{r}\sum_{s}\delta\left(D_{90°}(r, s) = \omega_i,\; D_{90°}(r, s+1) = \omega_j\right)}{\sum_{r}\sum_{s}\delta\left(D_{90°}(r, s) = \omega_i\right)}$$
(24)

where $\omega_i$ and $\omega_j$ are two neighboring states, $D_{0°}(r, s)$ and $D_{90°}(r, s)$ are the adjacent-difference block DCT arrays along the horizontal and vertical directions, respectively, $P_{0°}(\omega_j \mid \omega_i)$ (or $P_{90°}(\omega_j \mid \omega_i)$) is the transition probability from state $\omega_i$ to state $\omega_j$ along the horizontal (or vertical) direction, and δ is the delta function. All the elements in $P_{0°}$ and $P_{90°}$ are treated as discriminative features for image splicing detection.
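The sketch below outlines this feature under stated assumptions: the 8×8 block DCT coefficients are rounded in magnitude, differenced along each direction, clipped to [−T, T], and their empirical transition probabilities are collected. T = 4 is a commonly used threshold for this family of features, but the exact settings of [11] may differ.

```python
import numpy as np
from scipy.fft import dctn

def transition_matrix(D, axis, T):
    """Empirical transition probabilities along one axis of the state array D (Eqs. (23)-(24))."""
    a, b = (D[:-1, :], D[1:, :]) if axis == 0 else (D[:, :-1], D[:, 1:])
    P = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(P, (a.ravel() + T, b.ravel() + T), 1)
    return P / np.maximum(P.sum(axis=1, keepdims=True), 1)   # avoid division by zero

def dct_markov_features(channel, T=4, block=8):
    """Sketch of the DCT-Markov feature of [11]: block DCT, difference, clip, transitions."""
    h = (channel.shape[0] // block) * block
    w = (channel.shape[1] // block) * block
    x = channel[:h, :w].astype(np.float64)
    F = np.zeros((h, w))
    for i in range(0, h, block):                  # block-wise 2-D DCT
        for j in range(0, w, block):
            F[i:i+block, j:j+block] = dctn(x[i:i+block, j:j+block], norm='ortho')
    F = np.rint(np.abs(F)).astype(np.int32)       # rounded coefficient magnitudes
    feats = []
    for axis in (0, 1):                           # differences along rows and columns
        D = np.clip(np.diff(F, axis=axis), -T, T)
        feats.append(transition_matrix(D, axis, T).ravel())
    return np.concatenate(feats)
```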

4.2.4 42D Moments

Wavelet decomposition and statistical moments have proved their feasibility for image forgery detection in [8, 9, 11]. The moment features defined in (25) were proposed in [9, 11] for image splicing detection.

$$M_l = \frac{\sum_{i=1}^{K/2} x_i^{l}\left|H(x_i)\right|}{\sum_{i=1}^{K/2}\left|H(x_i)\right|},\qquad M_{u,l} = \frac{\sum_{j=1}^{K/2}\sum_{i=1}^{K/2} u_i^{l}\left|H(u_i, v_j)\right|}{\sum_{j=1}^{K/2}\sum_{i=1}^{K/2}\left|H(u_i, v_j)\right|},\qquad M_{v,l} = \frac{\sum_{j=1}^{K/2}\sum_{i=1}^{K/2} v_j^{l}\left|H(u_i, v_j)\right|}{\sum_{j=1}^{K/2}\sum_{i=1}^{K/2}\left|H(u_i, v_j)\right|},$$
(25)

where $H(x_i)$ is the 1D characteristic function of each wavelet sub-band, K is the total number of different values in a sub-band, l (l = 1, 2, 3) is the order of the moments, and $H(u_i, v_j)$ is the 2D characteristic function of the 2D histogram. For a given image, first, a one-level Haar wavelet decomposition is applied to the original image and to the corresponding prediction-error image. Then, the characteristic functions of the histograms (1D and 2D) of each sub-band are computed. Finally, the first-, second-, and third-order moments of the characteristic functions are computed via (25) for classification.
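A simplified Python sketch of the 1-D part of (25) is given below; it uses a crude horizontal difference as a stand-in for the prediction-error image of [9] and omits the 2-D characteristic-function moments, so it does not reproduce the full 42-D feature.

```python
import numpy as np
import pywt

def cf_moments(subband, bins=256, orders=(1, 2, 3)):
    """Moments of the characteristic function (DFT of the histogram) of one sub-band."""
    hist, _ = np.histogram(subband.ravel(), bins=bins)
    H = np.abs(np.fft.fft(hist))[1:bins // 2 + 1]      # positive-frequency half
    x = np.arange(1, bins // 2 + 1) / bins             # normalized frequencies x_i
    return [float(np.sum((x ** l) * H) / np.sum(H)) for l in orders]

def moment_features(channel):
    """First- to third-order CF moments of one-level Haar sub-bands (partial sketch of [9, 11])."""
    err = channel - np.roll(channel, 1, axis=1)        # crude stand-in for the prediction error
    feats = []
    for img in (channel, err):
        cA, (cH, cV, cD) = pywt.dwt2(np.asarray(img, dtype=np.float64), 'haar')
        for sb in (cA, cH, cV, cD):
            feats.extend(cf_moments(sb))
    return np.array(feats)
```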

5 Experimental results and performance analysis

The DVMM image dataset of Columbia University [28], which consists of 183 authentic and 180 spliced uncompressed TIFF images, is employed in our experiments. The image sizes range from 757×568 to 1152×768. The spliced images are created from the authentic images without any post-processing. Most of the images are indoor scenes, and the rest were taken outdoors on a cloudy day. Some images from the DVMM dataset are given in Figure 5.

Figure 5

Some image examples in DVMM dataset. Images in the first row are authentic and images in the second row are spliced.

LIBSVM [29] is used as the classifier, and the radial basis function (i.e., Gaussian kernel) is selected as the kernel. Half of the images are used as the training set and the rest for testing. The performance is measured by averaging the classification accuracies over 20 runs. Contour plots of the grid searching results of the GLCM, RLRN, DCT Markov, and 42D Moments-based features are shown in Figure 6.
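This evaluation protocol can be sketched as follows; scikit-learn's SVC (a LIBSVM wrapper) is used here for convenience, and the hyperparameters shown are illustrative rather than the exact values used in our experiments.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def average_accuracy(features, labels, runs=20, C=1.0, gamma='scale'):
    """Average test accuracy of an RBF SVM over random half/half splits."""
    accs = []
    for seed in range(runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            features, labels, test_size=0.5, stratify=labels, random_state=seed)
        clf = SVC(kernel='rbf', C=C, gamma=gamma).fit(X_tr, y_tr)
        accs.append(clf.score(X_te, y_te))
    return float(np.mean(accs))
```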

Figure 6

Contour plots of grid searching results over the DVMM image dataset. The first row shows the results of GLCM and RLRN (spatial-domain features); DCT Markov and 42D Moments (frequency-domain features) are presented in the second row.

As shown in Figure 6, the detection performance varies with the selection of the initial θ. Moreover, the detection accuracy changes gradually with different selections of θ, which demonstrates that it is reasonable to use a gradient ascent algorithm to find the local optimum. Figure 6 also shows that there exist regions of θ that achieve high detection rates for the different kinds of feature extraction methods. For the spatial-domain methods (features extracted in the spatial image domain), i.e., GLCM and RLRN, the initial parameter (−0.1, 0.3) gives the highest detection rates of 91.2 and 86.3%, respectively. For the frequency-domain methods (features extracted in the frequency domain), namely DCT Markov and 42D Moments, the initial parameter (−0.3, 0.7) gives the highest detection rates of 86.3 and 79.3%, respectively. The gradient ascent algorithm is then employed to find the local optimal parameter $\theta^{*}$ with the initial parameters established above. The parameters of the gradient ascent algorithm are fixed as ε = 0.1 and δ = 0.02. The optimal $\theta^{*}$ corresponding to the above four feature extraction methods are given in Table 1. In order to provide a more intuitive illustration of the advantage of the optimal chromatic channel, an example is given in Figure 7, where a composite image and its optimal chroma-like channels for GLCM, RLRN, DCT Markov, and 42D Moments are shown.

Figure 7

Optimal chroma-like channels for a composite image. The yellow duck is the spliced part. (a) The composite image. (b–e) The optimal chroma-like channels for GLCM, RLRN, DCT Markov, and 42D Moments, respectively.

Table 1 Optimal θ for feature extraction methods

Figure 8 illustrates the final classification results of the above four detection methods in the RGB, YCbCr, and $C_\theta$ spaces. The receiver operating characteristic (ROC) curves using the different feature extraction methods in the R, G, B, Y, Cb, Cr, and $C_\theta$ channels are compared in Figure 9. From Figures 8 and 9, it is observed that for all four feature extraction methods, the detection performance using the proposed optimal chroma-like channel outperforms that of the traditional color channels. Furthermore, the optimal chroma-like channel features even perform better than the fused features of the three channels.

Figure 8

Detection performance comparisons among GLCM, RLRN, DCT Markov, and 42D Moments in different channels.

Figure 9

ROC curves of GLCM, RLRN, DCT Markov, and 42D Moments features in different color channels.

6 Conclusions

Compared with gray-scale images, color images contain more information (inter-channel information) for image splicing detection. Recent research has shown that color model selection is quite important for many computer vision algorithms, and features extracted from various color channels have different discriminative power. For these reasons, the main objective of this study is to find an optimal color channel that achieves the highest discriminative power for image splicing detection. Similar to the idea in SVM of finding the hyperplane with the largest margin in a given feature space, our goal is to search for the most discriminative hyperplane (i.e., the one with the highest detection accuracy) among all the candidate feature spaces (features extracted from chroma-like channels). Four widely used features for image splicing detection are employed to test the effectiveness of the proposed chroma-like channel. Experimental results have verified that all four features extracted from the designed chroma-like channel achieve better class separability than those extracted from traditional color channels. In future work, we will try to reduce the computational complexity by narrowing the search area for the initial θ.

References

1. Farid H: Seeing is not believing. IEEE Spectrum 2009, 46(8):44-51.
2. Yeung MM: Digital watermarking. Commun. ACM 1998, 41(7):30-33.
3. Fridrich J: Methods for tamper detection in digital images. In Proceedings of Multimedia and Security Workshop at ACM Multimedia. Orlando, USA; 1999:19-23.
4. Rey C, Dugelay JL: A survey of watermarking algorithms for image authentication. EURASIP J. Appl. Signal Process 2002, 2002(6):613-621. 10.1155/S1110865702204047
5. Farid H: A survey of image forgery detection. IEEE Signal Process. Mag 2009, 26(2):16-25.
6. Huang H, Guo W, Zhang Y: Detection of copy-move forgery in digital images using SIFT algorithm. In Proceedings of Pacific-Asia Workshop on Computational Intelligence and Industrial Application. Wuhan, China; 2008:272-276.
7. Pan X, Lyu S: Detecting image region duplication using SIFT features. In Proceedings of Acoustics Speech and Signal Processing (ICASSP). Dallas, USA; 2010:1706-1709.
8. Farid H, Lyu S: Higher-order wavelet statistics and their application to digital forensics. In IEEE Workshop on Computer Vision and Pattern Recognition. Madison, Wisconsin, USA; 2003:94.
9. Chen W, Shi YQ, Su W: Image splicing detection using 2-D phase congruency and statistical moments of characteristic function. In Proceedings of The Society of Photo-Optical Instrumentation Engineers (SPIE). San Jose, CA; 2007:6505.
10. Ng TT, Chang S-F, Sun Q: Blind detection of photomontage using higher order statistics. In IEEE International Symposium on Circuits and Systems. Vancouver, Canada; 2004:688-691.
11. Shi YQ, Chen C, Chen W: A natural image model approach to splicing detection. In ACM Proceedings of the 9th Workshop on Multimedia and Security. Dallas, USA; 2007:51-62.
12. Johnson MK, Farid H: Exposing digital forgeries by detecting inconsistencies in lighting. In ACM Proceedings of the 7th Workshop on Multimedia and Security. New York, USA; 2005:1-10.
13. Johnson MK, Farid H: Exposing digital forgeries in complex lighting environments. IEEE Trans. Inf. Forensics Secur 2007, 2(3):450-461.
14. Johnson MK, Farid H: Exposing digital forgeries through specular highlights on the eye. In Proceedings of the 9th International Workshop on Information Hiding. Saint Malo, France; 2007:311-325.
15. Popescu AC, Farid H: Exposing digital forgeries in color filter array interpolated images. IEEE Trans. Signal Process 2005, 53:3948-3959.
16. Johnson MK, Farid H: Exposing digital forgeries through chromatic aberration. In Proceedings of ACM Multimedia and Security Workshop. Geneva, Switzerland; 2006:48-55.
17. Hsu Y-F, Chang S-F: Detecting image splicing using geometry invariants and camera characteristics consistency. In International Conference on Multimedia and Expo (ICME). Toronto, Canada; 2006:549-552.
18. Gou H, Swaminathan A, Wu M: Noise features for image tampering detection and steganalysis. In Proceedings of IEEE International Conference on Image Processing (ICIP). San Antonio, USA; 2007:97-100.
19. Fridrich J, Chen M, Goljan M: Imaging sensor noise as digital X-ray for revealing forgeries. In Proceedings of the 9th International Workshop on Information Hiding (IH). Saint Malo, France; 2007:342-358.
20. Swaminathan A, Wu M, Liu KJR: Digital image forensics via intrinsic fingerprints. IEEE Trans. Inf. Forensics Secur 2008, 3(1):101-117.
21. Dirik AE, Memon N: Image tamper detection based on demosaicing artifacts. In Proceedings of IEEE International Conference on Image Processing (ICIP). Cairo, Egypt; 2009:1497-1500.
22. Cao H, Kot AC: Accurate detection of demosaicing regularity for digital image forensics. IEEE Trans. Inf. Forensics Secur 2009, 4(4):899-910.
23. Cao H, Kot AC: Manipulation detection on image patches using FusionBoost. IEEE Trans. Inf. Forensics Secur 2012, 7(3):992-1002.
24. Wang W, Dong J, Tan T: Effective image splicing detection based on image chroma. In Proceedings of IEEE International Conference on Image Processing (ICIP). Cairo, Egypt; 2009:1257-1260.
25. Zhao X, Li J, Li S, Wang S: Detecting digital image splicing in chroma spaces. In International Workshop on Digital Watermarking, LNCS vol. 6526. Seoul, Korea; 2011:12-22.
26. Stokman H, Gevers T: Selection and fusion of color models for image feature detection. IEEE Trans. Pattern Anal. Mach. Intell 2007, 29(3):371-381.
27. Vapnik VN: Statistical Learning Theory. Wiley & Sons, New York; 1998.
28. Ng TT, Chang SF, Sun Q: A data set of authentic and spliced image blocks. Technical Report, DVMM, Columbia University, New York, USA. http://www.ee.columbia.edu/ln/dvmm/downloads/AuthSplicedDataSet/photographer
29. Fan RE, Chang KW, Hsieh CJ, Wang XR, Lin CJ: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol 2011, 2(3):1-27.


Acknowledgements

This research work was funded by the National Natural Science Foundation of China (61071152, 60702043), 973 Program (2010CB731403, 2010CB731406) of China, Shanghai Educational Development Foundation and National “Twelfth Five-Year” Plan for Science & Technology Support (2012BAH38B04).

Author information


Correspondence to Shilin Wang.


Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
