 Research
 Open Access
Performance analysis and improvement of dither modulation under the composite attacks
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 53 (2012)
Abstract
The first goal of this article is to analyze the performance of dither modulation (DM) against composite attacks consisting of valumetric scaling, additive noise, and constant change. The analyses are developed under the assumptions that the host vector and noise vector are mutually independent and that both have independently and identically distributed components. We derive general expressions for the probability density functions of several signals of interest and for the decoding error probability. Specific theoretical results are provided for the case of generalized Gaussian host and noise. Based on these analyses, the performance of DM is predicted with a high degree of accuracy for different scenarios and evaluated for different distribution models of the host and noise signals. Numerical simulations confirm the validity of the theoretical analyses. We then address improving the robustness of DM against valumetric scaling plus constant change. The normalized dither modulation (NDM) is presented, which works by constructing a gain-invariant vector with zero mean for quantization. Performance analysis shows that NDM is theoretically invariant to valumetric scaling and constant change and achieves performance similar to DM in other respects. The performance of NDM is further improved by weighting the quantization errors. Experiments on real images also show the advantage of NDM over DM subject to amplitude scaling and constant change.
1 Introduction
In the past decade, much attention has been paid to quantization-based watermarking for canceling the host signal interference. One of the most important methods proposed so far is quantization index modulation (QIM) [1]. The basic QIM algorithm includes a number of variants, e.g., dither modulation (DM), distortion-compensated dither modulation (DC-DM) (also known as the scalar Costa scheme (SCS) [2]), and spread transform dither modulation (STDM) [1]. The theoretical performance of QIM methods is a key issue and has received considerable attention.
Initially, the Gaussian channel was often used in the analyses, and the performance of QIM methods has been extensively investigated in this case. A relatively crude approximation to the decoding error probability of QIM was given in [1] for additive white Gaussian noise (AWGN) attacks. The performance of SCS under AWGN attacks was completely analyzed by Eggers et al. [2]. In [3], the performance of scalar DC-QIM against AWGN was theoretically evaluated from the detection viewpoint. Recently, a new logarithmic QIM (LQIM) was presented in [4] and its performance was analyzed in the presence of AWGN. It has been pointed out in [5] that the performance of QIM methods may be overstated under Gaussian channels. In a second phase, deeper analyses were carried out for QIM, taking into account a much wider variety of attacks. Careful performance analyses were presented by Pérez-González et al. [5] for a large class of QIM methods in the cases of uniform and Gaussian noise. Bartolini et al. [6] concentrated on analyzing the performance of STDM in the presence of two important classes of non-additive attacks: the gain attack plus noise addition, and the quantization attack. In [7], the authors proposed an improved DM scheme to resist linear-time-invariant filtering and provided a thorough analysis of it. We note that most previous analyses make use of the Gaussian host assumption or even neglect the statistical properties of the host signal.
The conventional QIM has a serious drawback, namely its weakness against valumetric scaling. Spherical codes were utilized to cope with this problem in [8]; however, watermark embedding and recovery become very complicated [9]. Oostveen et al. [10] proposed to choose an adaptive quantization step size proportional to a local average of the host signal samples. Despite its robustness against valumetric scaling, the method presents a nonzero probability of error even for null distortions and becomes more sensitive to constant change. Rational dither modulation (RDM) was developed in [9] using a gain-invariant adaptive quantization step size at both embedder and decoder. RDM achieves invariance to valumetric scaling, but is still sensitive to constant change. Li and Cox [11] applied a modified Watson's perceptual model to provide resistance to valumetric scaling for QIM watermarking. The modification to Watson's model results in a degradation in quality and a performance loss with respect to constant change.
The first objective of this article is to analyze the performance of DM against composite attacks, an analysis which is lacking in the literature. Clearly, in watermarking applications, the watermark more often undergoes multiple attacks; specifically, the combination of valumetric scaling, additive noise, and constant change will be considered. On the other hand, most previous analyses are restricted to the Gaussian noise channel, sometimes even disregarding the distribution of the host signal, a limitation we will try to overcome. The generalized Gaussian distribution (GGD) is adopted to model both the host signal and the noise signal in our analysis. Since the GGD is a parametric family of distributions, we will observe how the choice of distribution model affects the performance of DM. Next, the weakness of DM is addressed. DM itself is largely vulnerable to valumetric scaling as well as constant change, and several existing improved DM schemes achieve robustness against valumetric scaling but become more sensitive to constant change. We will present the normalized DM (NDM), which addresses both. In light of the performance analyses done for DM in this article, we show that NDM approaches the performance of DM, with the great advantage of insensitivity to valumetric scaling and constant change.
The rest of this article is organized as follows. Section 2 reviews the original DM and describes the problems to be solved. Section 3 then derives general PDF models for several relevant signals. In Section 4, the performance of DM under the composite attacks is mathematically analyzed using the derived PDFs; the decoding error probability is given in closed form and the theoretical results are confirmed by numerical simulations. In Section 5, the NDM method is presented and its performance is theoretically evaluated. Section 6 provides a useful strategy to improve the performance of NDM. In Section 7, a series of tests on real data is carried out to verify the validity of the analytical derivations and to evaluate the presented approaches. Finally, Section 8 concludes.
Notation: In the remainder of this article, we use boldface lowercase letters to denote column vectors, e.g., x, and scalar variables are denoted by italicized lowercase letters, e.g., x. The probability density function (PDF) of a random variable (r.v.) x is denoted by p_X(x), whereas if x is discrete its probability mass function (PMF) is designated by P_X(x). We write x ~ p_X(x) to indicate that a r.v. x is distributed as p_X(x). p_{X|Y}(x|y) denotes the conditional PDF of x given y. The subscripts of the distribution functions will be dropped wherever the random variables they refer to are clear. Finally, the mathematical expectation and standard deviation of a r.v. x are represented by μ_x and σ_x, respectively.
2 Review of DM and problem
We will concentrate our attention on DM in this study. The uncoded binary DM can be summarized as follows.
Let x ∈ ℝ^N be a host signal vector in which we wish to embed the watermark message m. First, the message m is represented by a vector b with NR_m binary antipodal components, i.e., b_j = ±1, j = 1, ..., NR_m, where R_m denotes the bit rate. The host signal x is then decomposed into NR_m subvectors (blocks) of length L = ⌊1/R_m⌋, denoted by x_1, ..., x_{NR_m}. In binary DM, two L-dimensional uniform quantizers Q_{−1}(·) and Q_{+1}(·) are constructed, whose centroids are given by the lattices Λ_{−1} = 2Δℤ^L + d and Λ_{+1} = 2Δℤ^L + d + Δa, with d ∈ ℝ^L a key-dependent dithering vector and a = (1, ..., 1)^T. Each message bit b_j is hidden by applying Q_{b_j}(·) to x_j, resulting in the watermarked signal y ∈ ℝ^N as

y_j = Q_{b_j}(x_j), j = 1, ..., NR_m.     (1)
The watermark detector receives a distorted, watermarked signal z and decodes a message m̂ using the minimum-distance decoder

b̂_j = arg min_{b ∈ {−1,+1}} ‖Q_b(z_j) − z_j‖, j = 1, ..., NR_m,     (2)

where ‖·‖ stands for the Euclidean (i.e., ℓ_2) norm.
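To make the embedding and decoding rules concrete, here is a minimal scalar-lattice sketch in Python (our own illustration; the function names and per-block layout are assumptions, not from the paper):

```python
import numpy as np

def dm_embed(x, bits, delta, d=0.0):
    """Embed one bit per length-L block: quantize each block onto the
    lattice 2*delta*Z + d (bit -1) or 2*delta*Z + d + delta (bit +1)."""
    y = np.empty_like(x, dtype=float)
    L = len(x) // len(bits)
    for j, b in enumerate(bits):
        blk = x[j*L:(j+1)*L]
        offset = d + (b + 1) * delta / 2.0   # d for b = -1, d + delta for b = +1
        y[j*L:(j+1)*L] = 2*delta*np.round((blk - offset)/(2*delta)) + offset
    return y

def dm_decode(z, n_bits, delta, d=0.0):
    """Minimum-distance decoding: pick the bit whose lattice is closer."""
    L = len(z) // n_bits
    bits = np.empty(n_bits, dtype=int)
    for j in range(n_bits):
        blk = z[j*L:(j+1)*L]
        dists = []
        for b in (-1, +1):
            offset = d + (b + 1) * delta / 2.0
            q = 2*delta*np.round((blk - offset)/(2*delta)) + offset
            dists.append(np.sum((q - blk)**2))
        bits[j] = -1 if dists[0] <= dists[1] else +1
    return bits
```

In the absence of any attack, decoding the watermarked vector recovers the embedded bits exactly, and the per-sample embedding distortion never exceeds Δ.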
In practical watermarking applications, the watermarked signal might undergo composite attacks. It is well known that quantization-based watermarking is vulnerable to the valumetric scaling attack. When the vector at the input of the decoder is scaled by ρ_j, i.e., z_j = ρ_j y_j, the quantization bins at the decoder are not scaled accordingly, producing a mismatch between embedder and decoder that dramatically affects performance. The original DM is also not robust to the constant change distortion, i.e., z_j = y_j + c_j a with c_j a constant value. No decoding error is made for |c_j| < Δ/2; however, the bit error rate (BER) is equal to 1 for Δ/2 < |c_j| < 3Δ/2. In this study, the two attacks are considered together with additive noise ν_j, yielding the attacked signal as

z_j = ρ_j y_j + ν_j + c_j a.     (3)
We will analyze the performance of DM in the case of (3) and present an improved DM resisting both valumetric scaling and constant change. In the subsequent analysis, x, y, z, and ν are all regarded as random vectors. We assume that both x and ν have independently and identically distributed (i.i.d.) components and that ν is independent of y. Since the mean value of the additive noise ν_j can be accounted for by the third term on the right-hand side of (3), it is reasonable to assume that μ_ν = 0.
3 PDF models
Define the extracted vector r ≜ Q_b(z) − z. Obviously, a crucial aspect of performing a rigorous analysis lies in computing the PDF of r. Let us begin with this issue.
3.1 PDF model of the watermarked signal
We use a lowercase letter to indicate any element of the vector denoted by the corresponding boldface letter. The previously used index j is dropped, since no specific values (or subvectors) are concerned. Given x ~ p_X(x), from relation (1), the PDF of the watermarked signal y conditioned on a transmitted symbol b is written as
where the variable y_k is defined as y_k = 2kΔ + (b + 1)Δ/2 + d, and δ(·) denotes the Dirac delta function.
A few observations are in order about the PDF of y. First, for different dither values d, the PDF p_Y(y|b) is different. That means each element of y obeys a different distribution when d is randomly selected during embedding. However, since P_Y(y_k + 2Δ|b) = P_Y(y_{k+1}|b), it is sufficient to consider the case d ∈ [−Δ, Δ). Further, if the PDF p_X(x) is symmetric, i.e., p_X(−x) = p_X(x), from (4) the PDF p_Y(y) satisfies p_Y(−y|b = −1) = p_Y(y|b = 1) for the case d = −Δ/2, and p_Y(−y|b) = p_Y(y|b) for the case d = 0. The former indicates that the PDFs p_Y(y|b = −1) and p_Y(y|b = 1) are mirrors of each other, and the latter indicates that the PDF p_Y(y|b) is even. These two properties of p_Y(y) are exhibited in Figure 1.
3.2 PDF model of the attacked signal
Taking Equation (3) into account and using the fact that p_{ρY}(y) = (1/ρ) p_Y(y/ρ) holds for any ρ > 0, the conditional PDF of z given the transmitted symbol b can be obtained by convolution [12]
where the convolution follows from the independence between y and ν. Observing (5), if the effect of different d on P_Y(y) is ignored (this generally holds when the embedding distortion is acceptable), the PDF p_Z(z|b) with d ≠ 0 can be approximately viewed as a translate of p_Z(z|b) with d = 0, that is, p_Z(z + ρd|b, d ≠ 0) ≈ p_Z(z|b, d = 0).
Moreover, when both x and ν are distributed symmetrically around the origin, we have the mirror property p_Z(−z + 2c|b = −1) = p_Z(z|b = 1) for the case d = −Δ/2, and the symmetric property p_Z(−z + 2c|b) = p_Z(z|b) for the case d = 0.
Figure 2a qualitatively depicts the PDFs of z for zero-mean Gaussian host data with variance 255² and zero-mean Gaussian noise. It can be seen that a bell-shaped curve is present around each discrete value of y due to the Gaussian noise, and adjacent curves even overlap for large noise strength. Meanwhile, the distance between two discrete points of y is scaled by the scaling factor ρ, and p_Z(z) is translated by the constant value c. The corresponding empirical density curves of z are plotted in Figure 2b. We see that the analytical PDF of z fits the empirical observations well.
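Because y is discrete, the convolution in (5) reduces to the mixture p_Z(z|b) = Σ_k P_Y(y_k|b) p_ν(z − ρy_k − c). The following sketch (our own check, Gaussian host and noise, d = 0, illustrative parameters) compares this analytical mixture against an empirical histogram:

```python
import numpy as np
from math import erf, sqrt, pi

delta, rho, c, sigma_v = 8.0, 1.0, 2.0, 2.0
sigma_x, b = 255.0, 1

def Phi(t):
    return 0.5*(1.0 + erf(t/sqrt(2.0)))

# PMF of y (eq. (4)): probability mass of each quantization cell of width 2*delta
ks = np.arange(-250, 251)
y_k = 2*delta*ks + (b + 1)*delta/2.0          # dither d = 0
P_Y = np.array([Phi((v + delta)/sigma_x) - Phi((v - delta)/sigma_x) for v in y_k])

def p_Z(z):
    # mixture form of (5): sum_k P_Y(y_k|b) * p_v(z - rho*y_k - c)
    return float(np.sum(P_Y*np.exp(-(z - rho*y_k - c)**2/(2*sigma_v**2))
                        / (sqrt(2*pi)*sigma_v)))

# empirical density of z = rho*y + v + c
rng = np.random.default_rng(2)
x = rng.normal(0.0, sigma_x, 200000)
off = (b + 1)*delta/2.0
y = 2*delta*np.round((x - off)/(2*delta)) + off
z = rho*y + rng.normal(0.0, sigma_v, x.size) + c
counts, edges = np.histogram(z, bins=200, range=(-100.0, 100.0))
hist = counts / (z.size*(edges[1] - edges[0]))
centers = 0.5*(edges[:-1] + edges[1:])
analytic = np.array([p_Z(t) for t in centers])
print(np.max(np.abs(hist - analytic)))        # small: the mixture matches the data
```

The maximal deviation between histogram and mixture is on the order of the Monte Carlo noise, mirroring the agreement shown in Figure 2b.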
3.3 PDF model of the extracted signal
Recalling the definition of r given previously, it is immediate to write
where p_R(r|b) is the PDF of r conditioned on the transmitted symbol b, and z_j is defined analogously to y_k. Due to (5), the above equation becomes
with μ_{jk} = z_j − ρy_k − c.
Now, let us analyze the properties of p_R(r). Ignoring the effect of d on P_Y(y), in view of (6) we derive p_R(r − ϵd|b, d ≠ 0) ≈ p_R(r|b, d = 0) with ϵ = ρ − 1. This shows that for the case d ≠ 0 the PDF p_R(r|b) can be approximately obtained by translating p_R(r|b, d = 0). Further, when ϵ is small enough for the term ϵd to be neglected, we have the property p_R(r|b, d ≠ 0) ≈ p_R(r|b, d = 0). That is, regardless of the choice of d, p_R(r) approximately remains unchanged for small ϵ. Similarly to p_Z(z), by assuming the PDFs p_X(x) and p_ν(ν) are even, we obtain the mirror property p_R(−r − 2c|b = −1) = p_R(r|b = 1) for d = −Δ/2 and the symmetric property p_R(−r − 2c|b) = p_R(r|b) for d = 0. At the same time, for any ϵ, we derive p_R(−r|b, ρ = 1 + ϵ) = p_R(r|b, ρ = 1 − ϵ) for d = 0 and p_R(−r|b = −1, ρ = 1 + ϵ) = p_R(r|b = 1, ρ = 1 − ϵ) for d = −Δ/2, where p_R(r|b, ρ) denotes the conditional PDF of r given the transmitted symbol b and the scaling factor ρ. These properties of p_R(r) are helpful in analyzing the performance of DM.
Figure 3 plots the probability density curves of r and the corresponding empirical ones for zero-mean Gaussian host data with variance 255² and zero-mean Gaussian noise. As can be seen, the distribution curve of r is either dilated or compressed by the scale factor ρ, and the PDF p_R(r) with c ≠ 0 corresponds to p_R(r) with c = 0 translated by the constant value c. The probability that the values of r are distributed around zero decreases as the attacks become stronger, which results in an increase of the BER. Comparison of Figure 3a, b reveals that the analytical PDF of r fits its empirical distribution very well.
4 Performance analysis of DM against the composite attacks
As in the previous literature, the decoding bit error probability P_e is used as the final performance measure. Assuming that the symbol b is sent, the bit error probability is
where |r| denotes the vector of absolute values of the components of r. Defining s ≜ |r|^T a, the above expression is equivalent to
To compute P_e, we need to know the PDF p_S(s) of s. The exact solution for p_S(s) may be obtained by several means. One standard procedure is to perform the multifold integral
where p_{|R_j|}(r_j|b) = p_{R_j}(r_j|b) + p_{R_j}(−r_j|b) and p_{R_j}(r_j) is the PDF of the j-th element of r. The computation of p_S(s) via (9) is feasible for small L; however, it becomes impractical as L increases. To solve this problem, it is natural to use mathematically tractable approximations. Let us assume that all components of d are equal, so that the vector r has i.i.d. components. Then, by the well-known central limit theorem (CLT), s can be approximated by a Gaussian random variable whose mean and variance are Lμ_{|r|} and Lσ²_{|r|}, respectively. Using the PDF derived in (6), μ_{|r|} and σ²_{|r|} are represented as
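The CLT step can be sanity-checked numerically. As a stand-in for the true p_R (which is attack-dependent), the sketch below uses i.i.d. uniform components on [−Δ, Δ), for which μ_{|r|} = Δ/2 and σ²_{|r|} = Δ²/12, and compares the moments of s = Σ_j |r_j| with Lμ_{|r|} and Lσ²_{|r|}:

```python
import numpy as np

rng = np.random.default_rng(3)
L, trials, delta = 64, 20000, 8.0

# stand-in for p_R: i.i.d. components uniform on [-delta, delta)
r = rng.uniform(-delta, delta, (trials, L))
s = np.abs(r).sum(axis=1)                  # s = |r|^T a

mu_abs = delta/2.0                         # E|r| for uniform on [-delta, delta)
var_abs = delta**2/12.0                    # Var|r| = E[r^2] - (E|r|)^2
print(abs(s.mean() - L*mu_abs))            # both moments match the CLT model
print(abs(s.var() - L*var_abs))
```

Both sample moments agree with the Gaussian model N(Lμ_{|r|}, Lσ²_{|r|}) up to Monte Carlo error, as the CLT predicts for large L.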
Then, the probability P_{ e } is computed as
where Φ(·) stands for the cumulative distribution function (CDF) of the standard Gaussian distribution. It should be pointed out that the CLT approximation to P_e is only valid for very large L. In practice, this condition is generally met, since large L is used to improve watermarking robustness.
We can now observe several useful properties of P_e from the previous analysis. When ϵ is small enough, by the property p_R(r|b, d ≠ 0) ≈ p_R(r|b, d = 0), it is easily seen that P_e remains approximately unchanged for different dither values. Therefore, without loss of generality, d is set to 0. Furthermore, for d = 0, if both p_X(x) and p_ν(ν) are even, using the property p_R(−r|b, ρ = 1 + ϵ) = p_R(r|b, ρ = 1 − ϵ), the same value of P_e is obtained for the cases ρ = 1 − ϵ and ρ = 1 + ϵ. As a result, this property of P_e also holds approximately for d ≠ 0.
4.1 Generalized Gaussian host and noise
Theoretically, P_e can be estimated only if the PDFs p_X(x) and p_ν(ν) are given. For the following analysis, we consider a specific case where the host signal and the attacking noise are statistically modeled by the GGD. The GGD model is used because it comprises a family of distributions and is suitable for many practical applications. The PDF of the GGD is defined as
where $\kappa =\frac{1}{\sigma}\sqrt{\Gamma(3\beta^{-1})/\Gamma(\beta^{-1})}$, and $\Gamma(u)=\int_{0}^{\infty}{t}^{u-1}{e}^{-t}\,dt$ is the Gamma function. Thus, the distribution is completely specified by the mean μ, the standard deviation σ, and the shape parameter β, and is denoted by GGD(β; μ, σ). The CDF of the GGD has the form [13]
where $\gamma({u}_{1},{u}_{2})=\int_{0}^{{u}_{2}}{t}^{{u}_{1}-1}{e}^{-t}\,dt$ is the lower incomplete gamma function, and sgn(·) denotes the sign function, i.e.,
Note that Gaussian and Laplacian distributions are just two special cases of the GGD with β = 2 and β = 1, respectively.
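For reference, the GGD density above can be evaluated numerically; the sketch below (our own helper code) implements the density and obtains the CDF by simple trapezoidal integration rather than via the incomplete gamma function:

```python
import numpy as np
from math import gamma

def ggd_pdf(x, beta, mu=0.0, sigma=1.0):
    """GGD(beta; mu, sigma): beta = 2 is Gaussian, beta = 1 is Laplacian."""
    kappa = np.sqrt(gamma(3.0/beta)/gamma(1.0/beta))/sigma
    A = beta*kappa/(2.0*gamma(1.0/beta))
    return A*np.exp(-(kappa*np.abs(np.asarray(x, dtype=float) - mu))**beta)

def ggd_cdf(x, beta, mu=0.0, sigma=1.0, span=20.0, n=200001):
    """CDF by trapezoidal integration of the density (sketch, not production)."""
    t = np.linspace(mu - span*sigma, x, n)
    f = ggd_pdf(t, beta, mu, sigma)
    h = t[1] - t[0]
    return float((f.sum() - 0.5*(f[0] + f[-1]))*h)

# beta = 2 reproduces the standard Gaussian CDF: Phi(1) = 0.8413...
print(round(ggd_cdf(1.0, beta=2.0), 4))
```

Setting β = 2 recovers Φ(·), and β = 1 the Laplacian CDF, which provides a quick consistency check of the parameterization.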
First, the PMF P_{ Y }(y) is calculated according to the distribution model of x. Given p_{ X }(x) ~ GGD(β_{ x }; μ_{ x }, σ_{ x }) and the corresponding CDF Ψ_{ x }(x), in view of (4), we immediately write
Then, the integration terms in (10) and (11) are derived for the case p_ν(ν) ~ GGD(β_ν; 0, σ_ν). In the Appendix, we obtain
and
Now, the decoding bit-error probability can be estimated by computing (10), (11), and (12) with the use of (14), (15), and (16). Since the calculation of P_Y(y) in (4) is relatively simple, the above analysis can easily be extended to other host distributions. However, the derivation of the integration terms in (10) and (11) might become very complex for noise ν with other PDFs; in that case, they are computed numerically when needed.
4.2 Simulations on artificial signals
In order to verify the obtained theoretical results, we conduct a set of experiments on artificial signals. A set of 64,000 random data samples, independently drawn from the GGD with zero mean and variance 255², is used as the host signal. A random message with equiprobable information bits is embedded using DM with L = 64, Δ = 7.79, and d = 0. The noise signal is also generated according to the zero-mean GGD. We calculate the empirical BER under the composite attacks and compare it to the theoretical values. The obtained results are summarized in Figure 4 for the case of Gaussian host and noise.
Figure 4a shows the BER of DM as a function of the scaling factor ρ while fixing the constant value c and the noise standard deviation σ_ν. As can be seen, DM is very sensitive to the scaling attack: the probability of error is unacceptably high when ρ moves beyond the range [0.98, 1.02]. The presence of noise and constant change increases the BER further, and the effect of constant change becomes relatively distinct for strong noise. The theoretical approximation of the BER agrees almost perfectly with the empirical results, particularly in the case of weak attacks. Figure 4a also demonstrates that the BER-versus-ρ curve is symmetric with respect to the point ρ = 1. Figure 4b depicts the plots of BER versus the constant value c while fixing the scaling factor ρ and the noise standard deviation σ_ν. For small ρ and σ_ν, the BER of DM starts to grow rapidly as soon as the absolute value of c approaches Δ/2. The effect of c on performance decreases as ρ and σ_ν increase. The estimated BERs are approximately equal to the empirical ones, but the estimation accuracy worsens for large c. At the same time, Figure 4b shows that the BER-versus-c curve is symmetric around c = 0. Figure 4c plots the BER of DM as a function of the noise standard deviation σ_ν while fixing the scaling factor ρ and the constant value c. Obviously, the BER increases as σ_ν becomes large. The curve of BER versus σ_ν appears translated due to the effect of the valumetric scaling and constant change distortions. As in the previous tests, the theoretical BERs fit the empirical ones very well, and the maximal difference between them is below 0.02.
Next, the sensitivity of DM to the statistical properties of the host and noise is investigated. We tested the performance of DM against valumetric scaling attacks for different host PDF shapes controlled by β_x. The results are displayed in Figure 5a. It is remarkable that the performance of DM improves significantly as β_x goes down. For small β_x, the BER plot becomes relatively flat and the BER grows slowly. This behavior can be explained as follows. For the GGD, the smaller β_x is, the more impulsive the shape and the heavier the tails; as a result, the probability that x takes large values over the range of interest decreases. Thus, the distortion (ρ − 1)y introduced by the scaling attack decreases for the same value of ρ, resulting in a lower BER. We also observe that the theoretical approximation agrees almost perfectly with the empirical results for the cases β_x = 2, 8, but does worse for β_x = 1. This is because the CLT approximation to the BER may underestimate the importance of the tails of p_X(x) with β_x = 1 and gives smaller results than the true BERs [5]. In terms of constant change and additive noise, however, the performance of DM is insensitive to the shape parameter β_x, because the two operations are independent of the watermarked signal; hence, we provide only the results for the scaling attack here. We then tested the performance of DM against additive noise with different PDF shapes controlled by β_ν. The results are exhibited in Figure 5b. We observe that the BER of DM goes down as β_ν increases for the same noise variance. Applying the same reasoning as above, we may understand that relatively serious distortions are introduced by the noise attack as β_ν decreases, and thus the performance of DM worsens.
5 Normalized DM and its performance
Robustness improvement for DM is considered in this section. A novel normalized DM (NDM) is presented, which is theoretically invariant to valumetric scaling and constant change. In addition, the performance of NDM is theoretically evaluated in terms of null distortions and noise addition.
5.1 Normalized DM
The main idea of NDM is to construct a gain-invariant vector with zero mean for quantization. There are many strategies for constructing such a vector. In this study, the vector is obtained by subtracting the (nonzero) sample mean from the host vector and then dividing by its sample standard deviation. The method is described in detail as follows.
Let ū ≜ u^T a/L and S_u² ≜ ‖u − ūa‖²/L denote the sample mean and variance of an L-dimensional vector u, respectively. Watermark embedding is performed by
for j = 1, ..., NR_m, where the factors λ_j and η_j are determined by two specific distortion considerations. For convenience, we define the normalized host vector as x′_j = (x_j − x̄_j a)/S_{x_j} and the error vector as q_{e_j} = Q_{b_j}(x′_j) − x′_j. By (17), the sample variance of y_j satisfies S²_{y_j} = λ²_j S²_{x_j}(1 + S²_{q_{e_j}} + 2q^T_{e_j}x′_j/L). An appropriate strategy is to choose λ_j such that S²_{y_j} = S²_{x_j}. Therefore, we have
Then, η_j is obtained by minimizing the distance ‖y_j − x_j‖. This leads to
At detection time, the received signal z is first normalized, as done at the embedder's side, and then the minimum-distance decoder is applied. The modified detector is represented as
It is now possible to see why NDM is insensitive to both valumetric scaling and constant change attacks: substituting z_j = ρ_j y_j + c_j a into (20), it can readily be verified that ρ_j and c_j cancel out in the expression, and consequently the decision b̂_j does not depend on ρ_j or c_j.
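This invariance argument can be checked directly in code. Below is a simplified NDM sketch (our own illustration: for clarity it takes λ_j = 1 and η_j = x̄_j instead of the exact (18) and (19); the invariance to ρ and c is unaffected, since the decoder's normalization cancels any affine map of the block):

```python
import numpy as np

def quantize(u, b, delta):
    off = (b + 1)*delta/2.0                  # dither d = 0
    return 2*delta*np.round((u - off)/(2*delta)) + off

def ndm_embed(x, b, delta):
    xp = (x - x.mean())/x.std()              # normalized host vector x'
    return x.std()*quantize(xp, b, delta) + x.mean()   # lambda = 1, eta = mean

def ndm_decode(z, delta):
    zp = (z - z.mean())/z.std()              # same normalization at the decoder
    d = {b: np.sum((quantize(zp, b, delta) - zp)**2) for b in (-1, +1)}
    return min(d, key=d.get)

rng = np.random.default_rng(4)
x = rng.normal(0.0, 255.0, 64)
y = ndm_embed(x, +1, delta=0.25)
z = 3.7*y + 120.0                            # valumetric scaling + constant change
print(ndm_decode(z, delta=0.25))             # the attack cancels out
```

Because (z − z̄)/S_z is identical for z and for ρz + ca (ρ > 0), the decoder sees exactly the same normalized vector before and after the attack, so the decoded bit is unchanged.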
5.2 Performance analysis
Having established that NDM overcomes the main weakness of conventional DM, we now evaluate the performance of NDM in terms of null distortions and noise addition. Performing the normalization operation on both sides of (17) and applying (18) and (19), we get
The above equation indicates that NDM introduces two extra operations in the absence of channel noise: valumetric scaling by λ_j and constant change by q̄_{e_j}. In other words, NDM can be regarded as DM undergoing valumetric scaling and constant change distortions. Thus, the theoretical performance of NDM under null distortions is approximately determined by (10), (11), and (12) as the noise standard deviation σ_ν approaches zero.
To evaluate the effect of λ_j and q̄_{e_j} in (21) on the performance of NDM, we introduce the document-to-watermark ratio (DWR), defined as ζ_j ≜ LS²_{x_j}/‖y_j − x_j‖² for the j-th subvector. Combining (17), (18), and (19), λ_j can be rewritten as
For small Δ, it has been shown [14] that each element of the error vector q_{e_j} independently obeys a uniform distribution over the interval [−Δ, Δ) and that q_{e_j} is statistically independent of x′_j. Applying these properties, it is easy to derive that q^T_{e_j}x′_j/L in (22) has zero mean and variance Δ²/(3L). Thus, λ_j tends to 1 − 0.5/ζ_j as L → ∞ or Δ → 0 (i.e., ζ_j → ∞). Figure 6 plots the curves of the true average error |λ_j − 1| versus ζ_j for different L, together with the limit 0.5/ζ_j. Notably, the gap between the factor λ_j and its limit becomes smaller and smaller as L and the DWR increase. Over the most interesting range of ζ_j, from 25 dB to 50 dB, the value of |λ_j − 1| is less than 0.01 for all the values of L tested. From Figure 4a, it is observed that valumetric scaling with |λ_j − 1| < 0.01 affects the performance of NDM so little that no decoding error is made.
As to the constant change by q̄_{e_j}, a sufficient condition for making no error is |q̄_{e_j}| < Δ/2. Considering the statistical properties of q_{e_j}, we can resort to the CLT to show that, for large L, the sample mean q̄_{e_j} is accurately modeled by a Gaussian PDF with zero mean and variance Δ²/(3L). Thus, the probability that |q̄_{e_j}| < Δ/2 holds can be computed as
Since the probability in (23) approaches 1 as L increases, NDM presents a zero probability of error, like the original DM, for large L in the absence of channel noise. Figure 7 shows the plots of the BER as a function of L. As shown in Figure 7, the probability of error sharply decreases to 0 as L increases, and the agreement of the theoretical results with the simulations is excellent.
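Under the CLT model q̄_{e_j} ~ N(0, Δ²/(3L)), the probability in (23) evaluates to 2Φ(√(3L)/2) − 1, which is independent of Δ. A quick Monte Carlo check of this value (our own sketch, with uniform quantization errors on [−Δ, Δ) and an illustrative L):

```python
import numpy as np
from math import erf, sqrt

L, delta, trials = 4, 8.0, 200000
analytic = erf(sqrt(3.0*L)/2.0/sqrt(2.0))    # = 2*Phi(sqrt(3L)/2) - 1
rng = np.random.default_rng(5)
q = rng.uniform(-delta, delta, (trials, L))  # i.i.d. quantization errors
mc = float(np.mean(np.abs(q.mean(axis=1)) < delta/2.0))
print(round(analytic, 3), round(mc, 3))      # both about 0.917 for L = 4
```

Even at L = 4 the CLT value is close to the exact probability; for the block lengths used in the experiments (L = 32 or 64) it is essentially 1, consistent with Figure 7.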
Next, we analyze the performance of NDM in a noisy channel. The received signal z_j has the form z_j = y_j + ν_j, where ν_j is an unknown noise source with zero sample mean that is orthogonal to y_j. Since NDM is invariant to valumetric scaling and constant change attacks, it is sufficient to consider this case. To measure the impact of the noise, we follow the popular watermark-to-noise ratio (WNR), defined as ξ_j ≜ ‖y_j − x_j‖²/‖ν_j‖² for the j-th subvector. Applying the normalization operation to z_j yields
with ${\lambda}_{j}^{\prime}={\lambda}_{j}\sqrt{\frac{{\zeta}_{j}{\xi}_{j}}{{\zeta}_{j}{\xi}_{j}+1}}$, ${\mathit{\nu}}_{j}^{\prime}=\sqrt{\frac{{\zeta}_{j}{\xi}_{j}}{{\zeta}_{j}{\xi}_{j}+1}}\frac{{\nu}_{j}}{{S}_{{x}_{j}}}$, and ${\stackrel{\u0304}{q}}_{{e}_{j}}^{\prime}={\lambda}_{j}{\stackrel{\u0304}{q}}_{{e}_{j}}$. Expression (24) illustrates that NDM undergoes composite attacks of the form considered in (3). Therefore, the previously obtained theoretical results can be used to predict the performance of NDM.
Generally speaking, the factor λ′_j in (24) is approximately equal to 1, owing to the fact that ζ_jξ_j ≫ 1 holds in practical applications. Figure 8 shows that the value of |λ′_j − 1| is rather small even for serious distortions (e.g., WNR = −10 dB). On the other hand, for large L, the effect of q̄′_{e_j} in (24) can be neglected. Based on these two considerations, the increase of the BER mainly derives from the term ν′_j in (24). As a result, we can conclude that NDM resists almost the same amount of noise as the original DM. Figure 9 illustrates the performance difference between NDM and DM under additive noise attacks. As can be seen, NDM performs slightly worse than DM when the WNR is within the range [−1 dB, 3 dB], but outperforms it once the WNR is lower than −1 dB. In principle, their performance is very close in this regard. In light of the above analysis, we conclude that NDM achieves performance approximately equal to that of DM, while remaining invariant to valumetric scaling and constant change attacks.
6 The improvement of NDM
The previous analysis shows that when λ_j ≠ 1 and q̄_{e_j} ≠ 0, these two factors have a negative impact on the performance of NDM. Their influence should therefore be decreased or eliminated so as to improve performance. Based on this idea, we present the improved NDM (IMNDM) in the sequel.
In IMNDM, the watermarked vector is generated by weighting the quantization error signal and adding it back to the host signal. The modified embedder is expressed as
where α_j denotes the weight vector, whose elements lie between 0 and 1, and α_j · q_{e_j} indicates that each component of α_j is multiplied by the corresponding component of q_{e_j}. Similarly to (18) and (19), it is derived that
and
Note that NDM is a special case of IMNDM with α_j = a. The weight vector α_j plays an important role in the performance of IMNDM. Through a careful choice of α_j, the influence of both λ_j and q̄_{e_j} in (25) can be decreased (or even eliminated), and at the same time a distortion-compensation (DC) mechanism is introduced. The latter has been proved to be an effective way to improve the performance of quantization-based watermarking [1].
By letting λ_j = 1 and $\eta_j = \bar{x}_j$, we have
Using one solution of (28) in (25) allows us to eliminate the negative impact of λ_j and η_j. One solution of (28) is easy to obtain, since one of the two equations in (28) is linear. If (28) has multiple solutions, an appropriate one should be chosen according to the resulting performance of IMNDM. Obtaining the appropriate solution for α_j and investigating its effect on watermarking performance are beyond the scope of this article and form a good direction for future research. If (28) has no solution, α_j should be chosen to minimize $|\lambda_j - 1|$ subject to the constraint $\eta_j = \bar{x}_j$. This is a constrained optimization problem and can be solved using the Lagrangian multiplier method.
Figure 9 illustrates the performance of the IMNDM described above under the additive noise attack, together with DCDM and the distortion compensated NDM (DCNDM), namely IMNDM with the same weight for each element of $q_{e_j}$; the DC value is set to 0.66 for the latter two schemes. Obviously, DCNDM presents almost the same robustness as DCDM against weak attacks, and performs a little better under very serious distortions (WNR < −4 dB). Both are noticeably outperformed by IMNDM.
7 Experimental results
In this section, a series of experiments are conducted on real images to evaluate the validity of analytical derivations and performance of the proposed method.
7.1 Theoretical verification
In the experiments, we use three standard images, shown in Figure 10. The DM method is implemented in the spatial domain so as to observe its performance without the impact of transform operations. Specifically, all pixels of an image are rearranged into a vector that serves as the host signal. A random binary message is embedded into the host vector by DM, given the quantization step Δ, the dither value d and the number of dimensions L. The watermarking algorithm is tested under the composite attacks of valumetric scaling with factor ρ, constant change with value c and additive noise ν following the distribution GGD(β_ν; 0, σ_ν). The distribution parameters of the image pixels used to compute the theoretical BERs are displayed in Table 1; they were obtained with the maximum likelihood estimator [15]. The experimental results are summarized in Figure 11 for L = 32, Δ = 8, and d = 0.
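The setup just described can be sketched in a few lines. This is a hedged illustration rather than the authors' exact code: `dm_embed`, `dm_decode` and `composite_attack` are hypothetical names, and decoding uses the standard minimum-distance rule over each block of L samples.

```python
import numpy as np

def _quant(x, delta, dither):
    return delta * np.round((x - dither) / delta) + dither

def dm_embed(x, bits, delta, d, L):
    # each bit modulates L host samples; dither d encodes bit 0, d + delta/2 bit 1
    s = np.array(x, dtype=float)
    for j, b in enumerate(bits):
        dj = d + b * delta / 2.0
        s[j*L:(j+1)*L] = _quant(s[j*L:(j+1)*L], delta, dj)
    return s

def dm_decode(y, n_bits, delta, d, L):
    # minimum-distance decoding: pick the dithered lattice closest to the block
    bits = []
    for j in range(n_bits):
        blk = y[j*L:(j+1)*L]
        dists = [np.sum((blk - _quant(blk, delta, d + b * delta / 2.0)) ** 2)
                 for b in (0, 1)]
        bits.append(int(np.argmin(dists)))
    return np.array(bits)

def composite_attack(s, rho, c, sigma, rng):
    # valumetric scaling by rho, constant change c, additive Gaussian noise
    return rho * s + c + rng.normal(0.0, sigma, size=s.shape)
```

For ρ = 1 and c = 0 with weak noise, the decoder recovers the message exactly; scaling or shifting the watermarked vector moves it off the embedding lattice, which is the vulnerability analyzed above.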
Figure 11a depicts the BER as a function of the scaling factor ρ for each image. On the Crowd image, which has the smallest shape parameter among the tested images, DM achieves the best performance; this behavior is consistent with the results in Figure 5a. The shape parameter of the Lena image is larger than that of Mandrill, yet better performance is achieved on the Lena image. This can be explained as follows: the valumetric scaling operation introduces serious distortions on the Mandrill image, which has a distinctly large mean luminance. As a result, the performance gain due to the host PDF shape is not only canceled out, but the BER also grows. The analytical curves closely fit the empirical data for the Lena and Mandrill images. By contrast, the prediction accuracy becomes slightly worse for the Crowd image, mainly because the GGD is a poor model for this image. Figure 11b illustrates the sensitivity of DM to the addition/subtraction of a constant luminance value while fixing ρ and σ_ν. In this test, DM performs similarly on all the test images; that is, the performance of DM with respect to the constant change attack is insensitive to the statistical properties of the host signal. Remarkably, the empirical performance of DM is predicted by the theoretical results with a high degree of accuracy. The plots of BER versus the standard deviation σ_ν are shown in Figure 11c for β_ν = 1 and Figure 11d for β_ν = 8 while fixing ρ and c. For this attack, an obvious performance difference is observed between the images. The effect is actually caused by the valumetric scaling operation, and thus can be removed by setting ρ = 1. Comparing Figure 11c with Figure 11d, it becomes clear that additive noise with a flat PDF is a worst-case attack for DM, which agrees with the observation in Figure 5b. In both cases the predictions are accurate, although there are small discrepancies at some points.
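For reference, the GGD parameters in Table 1 were obtained with the ML estimator of [15]; a simpler moment-matching alternative, which solves for the shape β from the ratio E|x|/√(E x²) in the spirit of Mallat's method, can be sketched as follows. The function name and bracketing interval are illustrative assumptions.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_fit_moments(x):
    # Match r = E|x| / sqrt(E x^2) = Gamma(2/b) / sqrt(Gamma(1/b) Gamma(3/b))
    # to estimate the GGD shape b; sigma is the sample standard deviation.
    x = np.asarray(x, dtype=float)
    r = np.mean(np.abs(x)) / np.sqrt(np.mean(x ** 2))
    f = lambda b: gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b)) - r
    beta = brentq(f, 0.1, 20.0)  # the ratio is monotone in b on this interval
    return beta, np.sqrt(np.mean(x ** 2))
```

For Gaussian data the estimated shape is close to β = 2; for Laplacian data, close to β = 1.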
7.2 Performance evaluation
We tested the performance of the proposed NDM in terms of imperceptibility and robustness and compared it with DM, DCDM, Oostveen's method [10] and RDM [9]. Experiments were carried out on 4000 images from the Corel database, each of size 256 × 384. The watermark embedding was performed in the spatial domain in order to observe the sensitivity of the tested schemes to constant intensity change. Specifically, we divided the target image into non-overlapping blocks of size 8 × 8 and extracted the 225 blocks with the highest local variance. Each extracted block was modulated with two random message bits, so a total of 450 bits can be embedded into one image. The DC value was set to 0.66 for DCDM. The 50th-order L2 vector norm was used as the division function in RDM.
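The block extraction step can be sketched as below; `select_blocks` is a hypothetical helper, and ties in variance are broken arbitrarily.

```python
import numpy as np

def select_blocks(img, block=8, n_select=225):
    # Divide img into non-overlapping block x block tiles and return the
    # top-left coordinates of the n_select tiles with highest variance.
    h, w = img.shape
    coords, variances = [], []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            coords.append((r, c))
            variances.append(img[r:r+block, c:c+block].var())
    order = np.argsort(variances)[::-1][:n_select]
    return [coords[i] for i in order]
```

Each selected block then carries two message bits, giving the 450 bits per 256 × 384 image mentioned above.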
In the experiment on watermarking imperceptibility, the watermark energy induced by all the tested schemes is kept the same, and the quality of the watermarked images is assessed with several objective image quality metrics. The weighted peak signal-to-noise ratio (wPSNR) and the total perceptual error (TPE) are used to measure global image quality, while the number of blocks greater than the first local perceptual error threshold (NLPE1) and the second local perceptual error threshold (NLPE2) measure local image quality. Their parameters take the default values suggested in Checkmark [16]. Table 2 reports the experimental results averaged over all the test images when the DWR is fixed at 21 dB.
As shown in Table 2, among all the tested watermarking schemes, NDM and its improved version offer the highest wPSNR values (in dB) and the smallest TPE, NLPE1, and NLPE2 values for the same watermark energy. These results indicate that NDM outperforms the other schemes in terms of imperceptibility. This is because the adaptive quantization step size in NDM is chosen to be proportional to the local variance of the host image (see (17)). The image quality produced by IMNDM degrades compared with NDM; the same situation occurs between DM and DCDM. This is attributed to the fact that a larger quantization step is used for watermark embedding with distortion compensation. Surprisingly, RDM manifests the worst performance in this regard.
In what follows, watermark robustness is evaluated with respect to some typical image processing operations. The watermarked images were produced by the tested schemes with the DWR fixed at 21 dB. All the reported BERs are averaged over the test set of images, unless otherwise indicated.
Figure 12 shows the robustness of all schemes to amplitude scaling. Clearly, except for conventional DM and DCDM, all schemes manifest strong robustness against this attack. In particular, IMNDM achieves the lowest BER over the whole range of scaling factors ρ tested. However, when ρ exceeds 1.2, the robustness of IMNDM degrades slightly, which can be attributed to increasing rounding and clipping distortions.
Figure 13 illustrates the sensitivity of all schemes to the addition/subtraction of a constant luminance value c. It can be seen that both DM and DCDM are very fragile to the constant change attack: their BER sharply increases to 1 as c approaches −10 or 10. Although Oostveen's method and RDM perform better than the original QIM schemes, they are still sensitive to this kind of attack. Our methods are evidently more robust in this regard than the others; they are almost invariant to constant change and maintain a BER of approximately 0 over the range of c tested.
The robustness to AWGN is shown in Figure 14 for each watermarking scheme. In this regard, NDM clearly outperforms Oostveen's method and RDM. Compared with DM, NDM achieves a higher BER for weak noise. This can be explained by the fact that the introduced noise causes errors in the estimation of the quantization step size for NDM. However, as the noise becomes strong, the BER of DM grows rapidly and finally exceeds that of NDM. This behavior is in accordance with the analytical results in Section 5.2. Note that IMNDM behaves like NDM but with improved performance.
The robustness of NDM against AWGN was also tested on the Lena image to verify the analytical derivations for NDM. Since the performance of NDM depends on the local variance of the host image, the empirical BER cannot be accurately predicted from the information of a single image block. Thus, for the computation of the theoretical BER, we chose three image blocks with different variances: the middle one is close to the average variance over the image blocks used for watermark embedding, and the other two are slightly larger and smaller than it, respectively. The theoretical and empirical results are depicted in Figure 15. As can be seen, the upper analytical curve fits the empirical observations relatively well in the weak noise case, and the other two curves do well for the moderate and strong noise cases, respectively. In principle, the theoretical results are effective for real images.
The sensitivity to JPEG compression is investigated in Figure 16. In this test, NDM performs a little worse than DM. IMNDM improves the robustness of NDM, but still falls behind DCDM. It is worth noting that RDM has superior performance with respect to JPEG compression, which can be explained by the nature of JPEG compression. Unlike AWGN, JPEG compression is an image-dependent processing operation whose goal is to reduce the image file size without noticeable quality degradation; the perceptually irrelevant data are thus removed from an image after compression. The image quality results reveal that RDM modifies the image data in a more perceptually noticeable way than the other schemes, and is therefore impaired less by compression. The situation is the opposite for NDM. If the perceptual quality were set to be the same for all the tested schemes, it is reasonable to believe that NDM would manifest better performance.
NDM is just a basic watermarking algorithm like DM. The above tests evaluate its baseline performance, and the implementation is deliberately coarse. To design an NDM-based watermarking scheme for practical applications, effective performance-improvement techniques should be applied, such as the choice of transform domain and the use of error-correction coding. Several image-adaptive DM algorithms are presented in [11] by exploiting the characteristics of the human visual system; the same ideas can be straightforwardly applied to improve the performance of NDM. Recently, a logarithmic QIM was developed in [4] by introducing the μ-law concept; this concept could also be applied to NDM for further performance improvement.
8 Conclusion
The contribution of this article is twofold. First, we have theoretically evaluated the performance of DM facing the combination of valumetric scaling, additive noise and constant change. The analyses were developed under the assumptions that both the host vector and the noise vector have i.i.d. components and that the two vectors are independent. We derived the general expressions of the PDFs of the watermarked signal, the attacked signal and the extracted signal. From these derived PDFs, the decoding error probability was expressed in closed form. Specific analytical results were presented for the case of generalized Gaussian host and noise. Moreover, the theoretical results can easily be extended by modeling the host and noise signals with other distributions.
According to our analyses, DM is highly vulnerable to valumetric scaling, and constant change and additive noise give rise to relatively large performance losses of DM when combined with valumetric scaling. In particular, we have seen the effect of the statistical properties of the host and noise signals on the performance of DM: the more impulsive the PDF shape of the host signal, the more robust DM is to valumetric scaling; the flatter the PDF shape of the noise source, the more sensitive DM is to additive noise. Simulations on artificial signals and real images show that the bit-error probability is accurately predicted by the given theory for a wide range of host and noise PDF shapes. These findings can ultimately guide the design of efficient DM-based watermarking algorithms.
Second, a novel watermarking method, called NDM, has been developed. In this method, a normalized host signal vector is constructed for quantization. NDM is theoretically invariant to both valumetric scaling and constant change, at the cost of a small performance loss in the absence of channel noise. The BER of NDM against additive noise can be predicted by applying the presented theoretical results for DM. Further, NDM is improved by weighting the quantization errors. Experiments on images demonstrate that the proposed method achieves better watermark imperceptibility and extremely strong robustness against valumetric scaling and constant change attacks compared with the original QIM schemes and other improved versions.
Appendix
Here, we derive the integration terms in (10) and (11) when the attacking noise obeys the distribution GGD(β_ν; 0, σ_ν). For this purpose, substituting a variable t for μ_jk, they are rewritten, respectively, as
and
Thus, evaluating the two integrals reduces to the computation of I(t_1, t_2), defined as $I(t_1, t_2) \triangleq \int_{t_1}^{t_2} u^l p_\nu(u)\,du$ with l being an integer.
Considering the case of t_{1} ≥ 0 and t_{2} ≥ 0, we have
where the first equality follows from (13) and the final equality follows from the definition of the lower incomplete gamma function.
In the case of t_{1} ≤ 0 and t_{2} ≤ 0, I(t_{1}, t_{2}) has the form
where the final equality follows from (31).
Finally, when t_1 ≤ 0 and t_2 ≥ 0, it follows that
where the final equality is due to (31) and (32). Combining the three cases, a unified form of I(t_{1}, t_{2}) is
By the formula (34) and the CDF of the GGD, (29) becomes
and (30) becomes
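As a numerical sanity check of the closed form derived above, the signed decomposition of I(t_1, t_2) into lower incomplete gamma terms can be implemented and compared against direct quadrature. The GGD parameterization assumed here is p_ν(u) = β/(2aΓ(1/β)) exp(−(|u|/a)^β) with a = σ_ν √(Γ(1/β)/Γ(3/β)); if the paper's scale convention differs, only the constant a changes.

```python
import numpy as np
from scipy.special import gamma, gammainc

def ggd_pdf(u, beta, sigma):
    a = sigma * np.sqrt(gamma(1.0 / beta) / gamma(3.0 / beta))
    return beta / (2.0 * a * gamma(1.0 / beta)) * np.exp(-(np.abs(u) / a) ** beta)

def I(t1, t2, l, beta, sigma):
    # I(t1, t2) = int_{t1}^{t2} u^l p_nu(u) du, via the lower incomplete gamma
    a = sigma * np.sqrt(gamma(1.0 / beta) / gamma(3.0 / beta))
    s = (l + 1.0) / beta
    def half(t):
        # signed value of int_0^t u^l p_nu(u) du, using the symmetry of the GGD
        g = gammainc(s, (abs(t) / a) ** beta) * gamma(s)  # lower incomplete gamma
        return np.sign(t) ** (l + 1) * a ** l * g / (2.0 * gamma(1.0 / beta))
    return half(t2) - half(t1)
```

Note that `scipy.special.gammainc` returns the regularized lower incomplete gamma, hence the multiplication by Γ(s).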
Acknowledgements
This study was supported by the National Natural Science Foundation of China (Grant No. 60803122, 61103018), by the Natural Science Foundation of Jiangsu Province (Grant No. BK2011442), by the Innovative Foundation of Yangzhou University (Grant No. 2011CXJ023), by the Opening Project of State Key Laboratory of Digital Publishing Technology, and by the Opening Project of State Key Laboratory of Software Development Environment (Grant No. SKLSDE2011KF08). The authors would like to thank the anonymous reviewers for their detailed comments that improved both the editorial and technical quality of this article substantially.
Abbreviations
AWGN: additive white Gaussian noise
BER: bit error rate
CDF: cumulative distribution function
CLT: central limit theorem
DC: distortion compensation
DCDM: distortion compensated dither modulation
DCNDM: distortion compensated NDM
DM: dither modulation
DWR: document-to-watermark ratio
GGD: generalized Gaussian distribution
i.i.d.: independently and identically distributed
IMNDM: improved NDM
LQIM: logarithmic QIM
NDM: normalized dither modulation
NLPE1: number of blocks greater than the first local perceptual error threshold
NLPE2: number of blocks greater than the second local perceptual error threshold
PDF: probability density function
PMF: probability mass function
QIM: quantization index modulation
RDM: rational dither modulation
r.v.: random variable
SCS: scalar Costa scheme
STDM: spread transform dither modulation
TPE: total perceptual error
WNR: watermark-to-noise ratio
wPSNR: weighted peak signal-to-noise ratio
References
1. Chen B, Wornell GW: Quantization index modulation: a class of provably good methods for digital watermarking and information embedding. IEEE Trans Inf Theory 2001, 47(4):1423-1443. doi:10.1109/18.923725
2. Eggers JJ, Bauml R, Tzschoppe R, Girod B: Scalar Costa scheme for information embedding. IEEE Trans Signal Process 2003, 51(4):1003-1019. doi:10.1109/TSP.2003.809366
3. Boyer JP, Duhamel P, Blanc-Talon J: Performance analysis of scalar DC-QIM for zero-bit watermarking. IEEE Trans Inf Foren Secur 2007, 2(2):283-289.
4. Kalantari NK, Ahadi SM: A logarithmic quantization index modulation for perceptually better data hiding. IEEE Trans Image Process 2010, 19(6):1504-1517.
5. Pérez-González F, Balado F, Martin JRH: Performance analysis of existing and new methods for data hiding with known-host information in additive channels. IEEE Trans Signal Process 2003, 51(4):960-980. doi:10.1109/TSP.2003.809368
6. Bartolini F, Barni M, Piva A: Performance analysis of STDM watermarking in presence of nonadditive attacks. IEEE Trans Signal Process 2004, 52(10):2965-2974. doi:10.1109/TSP.2004.833868
7. Pérez-González F, Mosquera C: Quantization-based data hiding robust to linear-time-invariant filtering. IEEE Trans Inf Foren Secur 2008, 3(2):137-152.
8. Conway JH, Sloane NJA: Sphere Packings, Lattices, and Groups. Springer, New York; 1988.
9. Pérez-González F, Mosquera C, Barni M, Abrardo A: Rational dither modulation: a high-rate data-hiding method invariant to gain attacks. IEEE Trans Signal Process 2005, 53(10):3960-3975.
10. Oostveen JC, Kalker AAC, Staring M: Adaptive quantization watermarking. In Proc of SPIE: Security, Steganography, and Watermarking of Multimedia Contents VI. Volume 5306. San Jose, CA; 2004:296-303.
11. Li Q, Cox IJ: Using perceptual models to improve fidelity and provide resistance to valumetric scaling for quantization index modulation watermarking. IEEE Trans Inf Forens Secur 2007, 2(2):127-139.
12. Papoulis A: Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York; 1991.
13. Nadarajah S: A generalized normal distribution. J Appl Stat 2005, 32(7):685-694. doi:10.1080/02664760500079464
14. Schuchman L: Dither signals and their effect on quantization noise. IEEE Trans Commun Technol 1964, CT-12:162-165.
15. Do MN, Vetterli M: Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance. IEEE Trans Image Process 2002, 11(2):146-158. doi:10.1109/83.982822
16. Voloshynovskiy S, Pereira S, Iquise V, Pun T: Attack modelling: towards a second generation watermarking benchmark. Signal Process 2001, 81(6):1177-1214.
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Zhu, X., Ding, J. Performance analysis and improvement of dither modulation under the composite attacks. EURASIP J. Adv. Signal Process. 2012, 53 (2012). https://doi.org/10.1186/1687-6180-2012-53
Keywords
 digital watermarking
 quantization index modulation
 composite attacks
 valumetric scaling
 constant change