Open Access

Efficient blind decoders for additive spread spectrum embedding based data hiding

EURASIP Journal on Advances in Signal Processing 2012, 2012:88

https://doi.org/10.1186/1687-6180-2012-88

Received: 16 July 2011

Accepted: 23 April 2012

Published: 23 April 2012

Abstract

This article investigates efficient blind watermark decoding approaches for hidden messages embedded into host images within the framework of additive spread spectrum (SS) embedding for data hiding. We study SS embedding in both the discrete cosine transform (DCT) and the discrete Fourier transform (DFT) domains. The contributions of this article are threefold. First, we show that the conventional SS scheme cannot be applied directly to the magnitudes of the DFT coefficients; we therefore present a modified SS scheme and derive the optimal maximum likelihood (ML) decoder based on the Weibull distribution. Second, we investigate improved spread spectrum (ISS) embedding, an enhancement of the traditional additive SS, propose a modified ISS scheme for information hiding in the magnitudes of the DFT coefficients, and derive the optimal ML decoders for ISS embedding. We also provide a thorough theoretical error probability analysis for the aforementioned decoders. Third, sub-optimal decoders, including the local optimum decoder (LOD), the generalized maximum likelihood (GML) decoder, and the linear minimum mean square error (LMMSE) decoder, are investigated to reduce the prior information required at the receiver side, and their theoretical decoding performances are derived. Based on the decoding performances and the prior information required for decoding, we discuss the preferred host domain and the preferred decoder for additive SS-based data hiding under different situations. Extensive simulations are conducted to illustrate the decoding performances of the presented decoders.

Keywords

digital watermarking, additive spread spectrum embedding, optimum decoding

1. Introduction

The growing use of the Internet has enabled users to easily access, share, manipulate, and distribute digital media data, and digital media has profoundly changed our daily life during the past decade. This proliferation of digital media data creates a technological revolution for the entertainment and media industries, brings new experiences to users, and introduces new Internet concepts. However, the massive production and use of digital media also pose new challenges to the copyright industries and raise critical issues of protecting the intellectual property of digital media, since current media sharing makes unauthorized copying and illegal distribution of digital media much easier.

One popular technology for digital right protection is digital watermarking [1], where a specific signal (e.g., the ownership information) is embedded into the host media content without significantly degrading the perceptual quality of the original media data. In contrast to traditional encryption techniques, watermarked media data can still be used while remaining protected, and thus watermarking can provide post-delivery protection of digital media. It is worth mentioning that, despite the popularity of watermarking techniques, effective digital right protection is extremely challenging, and currently there is no commonly accepted technical solution that is practically unbeatable when deployed in practical user settings. In any case, watermarking techniques should only be considered as one important component of an overall protection system.

Amongst the proposed schemes for watermark embedding, spread spectrum (SS) and quantization based methods [2, 3] are the two main broad categories. In SS embedding, an additive or multiplicative watermark is added to the host signal. The quantization based schemes are implemented by quantizing the host signal to the nearest lattice point. In this article, we focus on the spread spectrum embedding schemes originally proposed by Cox et al. [4]. At the receiver side, a blind detection scheme is employed, since the original image is generally not available and is thus treated as a noise source. There are two main approaches to SS embedding: additive spread spectrum watermarking and multiplicative spread spectrum (MSS) watermarking. In additive SS [5, 6], the watermark is spread uniformly over the host signal, while in MSS [7, 8], the watermark is spread according to the content of the host signal. In order to reduce the noise effect of the host signal in additive SS, Malvar and Florencio [9] proposed the improved spread spectrum (ISS), a modulation technique exploiting the side information at the encoder to reduce the effect of the host signal and improve the decoding performance [10]. Recently, the authors have proposed an embedding scheme incorporating the SS and ISS schemes which employs the correlation between the host signal and the signature code to improve the decoding performance [11].

As summarized in [12], depending on the purpose, there are two main types of watermarking schemes. In one type, the embedded watermark is used to communicate a specific hidden message (e.g., binary identification numbers used for image tracking and video distribution, or a secret hidden message represented by a binary sequence), which must be extracted with sufficient decoding accuracy. In the other type, the goal is only to verify whether a specific embedded watermark (e.g., representing copyright information) is present or not, and the embedded watermark normally does not communicate a secret message that needs to be accurately decoded. It is important to emphasize that these two problems are formulated differently and different detection (decoding) approaches are desired to serve different performance criteria. References [13–18] have explicitly pointed out this distinction in their works.

Based on the above two types of watermarking schemes, current research on watermark extraction can be categorized into two broad topics: watermark decoding [12, 13, 19] for the case of decoding the hidden message, and watermark detection [16–18, 20–22] for the case of detecting the presence of a specific watermark. Although the watermark detection and decoding problems seem similar from the hypothesis testing point of view, they actually serve different goals and thus different criteria are used. In watermark decoding, the embedded hidden message should be decoded accurately at the receiver side; therefore, the bit error rate is usually used as the performance criterion to measure the accuracy of the decoder in extracting the hidden message, and the watermark decoding problem can be formulated as minimizing the bit error rate. In watermark detection, the goal is to determine whether a specific watermark exists or not, and the detection criteria are mainly based on the Neyman-Pearson theorem (i.e., maximizing the probability of detection for a given probability of false alarm). Performance criteria such as the false alarm probability and the true detection probability are used for evaluating the watermark detector. To our knowledge, the majority of the current literature has focused on watermark detection, and many algorithms have been proposed. For instance, in [23] a watermark based on the host content is added and the detection is accomplished with the Neyman-Pearson criterion. In [24], a new perceptual masking is proposed and a correlation based detector is studied for watermark detection. In [25], a class of watermark detectors, including the generalized likelihood ratio, Bayesian, and Rao test detectors, is proposed. In this article, we focus on the topic of watermark decoding, since we are particularly interested in communicating hidden messages.
Since in practice the original host image is generally not available at the decoder side, we focus on blind watermark decoding.

The very first decoder used for watermark decoding in SS embedding is the traditional correlator proposed by Cox. This decoder extracts the embedded information using the correlation between the signature code and the received data. Utilizing the probability density function (PDF) of the host signal can help enhance the performance of watermark decoding. An optimum ML decoder for additive SS in the DCT domain was proposed by Hernandez et al. [26]. The optimal decoder for multiplicative SS in the DFT domain was investigated in [13]. Apart from the above-cited works, compared with the research on watermark detection, watermark decoding is less studied, and a thorough analytical study of watermark decoding is still required. It is worth emphasizing that, since the watermark decoding problem is formulated as a different hypothesis testing problem from the watermark detection problem (i.e., with H0 being the noise-only hypothesis in detection), a specific watermark detector does not directly translate into a specific watermark decoder. For instance, the local optimum (LO) test (which is based on the derivative of the likelihood) yields different forms for the LO detector and the local optimum decoder (LOD). Also, even though the ML criterion has been used for both watermark detection and watermark decoding, it is derived differently and has different meanings (i.e., the ML watermark decoder is a Bayesian approach that minimizes the probability of bit error under the assumption of equal prior probabilities of the bit information, and thus uses a threshold of 1; for watermark detection, the ML solution is the likelihood ratio test (LRT) detector based on the Neyman-Pearson theorem, where the LRT exploits the probability of false alarm to set the detection threshold).
We would also like to emphasize that, since different performance criteria are desired in watermark detection and watermark decoding, a specific type of efficient watermark detector does not necessarily yield an efficient watermark decoder.

The common objective of communicating hidden message using watermarking is to successfully embed and decode an imperceptible watermark which can be resistant against distortions and attacks. In order to reduce the performance degradation under certain attacks such as geometric attacks and to take advantage of the properties of certain transform domain, the message embedding can be performed in different domains such as the discrete cosine transform (DCT) domain [27], the discrete Fourier transform (DFT) domain [2831], and the discrete wavelet transform (DWT) domain [32, 33].

In this article, our main purpose is to provide a rigorous watermark decoding framework for data hiding using spread spectrum embedding in the DCT domain and the DFT magnitude domain. The literature on additive SS lacks an investigation of the optimal and sub-optimal decoders for additive SS in the DFT magnitude domain, and we fill this gap in this article. We will show that the conventional SS scheme cannot be applied directly in the DFT magnitude domain, and we therefore propose a modified SS embedding scheme. To provide further guidance on the preferred domain for information hiding using additive SS embedding, we discuss, based on the derived decoders, which domain is preferred under different circumstances. We present a theoretical framework of optimal decoders for additive SS and improved SS in the DCT and DFT magnitude domains. Embedding in the DFT domain has its own advantages, which motivates us to develop optimal watermark decoding schemes for this domain. We note that optimal decoders using ISS provide better decoding performance than the traditional additive SS. Since the optimum ML decoder requires the distribution parameters of the host image and the watermark strength information, we also investigate several sub-optimal watermark decoders to address this concern. By invoking a Taylor series expansion, the LOD is obtained by relaxing the requirement on the watermark strength. We derive the generalized maximum likelihood (GML) decoder for information hiding in the DFT magnitude domain. Further, owing to its simplicity and good performance, we employ the linear minimum mean square error (LMMSE) criterion and derive the LMMSE decoders. We provide theoretical performance analyses of the proposed ML, LOD, and LMMSE decoders, where the theoretical performance of the ML decoder serves as the performance upper bound of the watermark decoding schemes. The main contributions of this article are summarized as follows:

  • Propose modified SS and ISS embedding schemes in the DFT magnitude domain.

  • Derive the ML and GML decoders for SS and ISS in the DFT magnitude domain; derive the ML decoder for ISS embedding in the DCT domain.

  • Derive the LOD decoders for SS embedding in the DCT and DFT magnitude domains, and derive the LOD decoders for ISS embedding in the DFT magnitude domain.

  • Derive the LMMSE decoders for SS and ISS embedding in both the DCT and the DFT magnitude domains.

  • Provide the theoretical bit-error-rate performance analysis of the above decoders.

The rest of this article is organized as follows. In Section 2, the traditional additive SS and ISS embedding schemes are briefly reviewed for data hiding and communicating hidden messages. The host probability distribution functions for the DCT and DFT domains are described in Section 3. The optimal ML decoders are derived in Section 4 and the corresponding bit-error-rate analyses are presented. In Section 5, the sub-optimal decoders, including the LOD, GML, and LMMSE decoders, are presented and their theoretical performance analyses are provided. Simulation results are demonstrated in Section 6 to validate the analysis. Finally, discussions and concluding remarks are given in Section 7.

2. Additive SS and ISS embedding procedure

Suppose a host image $I \in \mathcal{M}^{m \times n}$ is to be watermarked, where $\mathcal{M}$ denotes the image alphabet, e.g., $\mathcal{M} = \{0, 1, \ldots, 255\}$ for a gray-scale image, and m and n represent the size of the image in the pixel (spatial) domain. Here, for simplicity, we assume m = n, though the results can be extended to the general case of unequal m and n. The additive SS embedding procedure for a host image is summarized as follows. First, the image I is partitioned into $\frac{m}{p} \times \frac{m}{p}$ sub-blocks of size $p \times p$. Then, each block is usually transformed to a domain which is insensitive to tampering, i.e., $T(I) \in \mathbb{R}^{p \times p}$, where $T(\cdot)$ denotes the transform function mapping the host image into the new domain, called the host domain. For the total of $q = (m/p)^2$ sub-blocks, each of them conveys one hidden information bit $b \in \{\pm 1\}$ for one-message embedding, or multiple hidden information bits for multi-message embedding. A perfect transform should remove the imperceptible part of the data and should be insensitive to operations such as translation, lowpass filtering, compression, and other standard signal processing manipulations. In the context of image processing, two popular transform domains are the DCT and the DFT, which are of interest in this article.

For each block of size $p \times p$ in the transformed domain, a subset of host coefficients of length $l \le p^2$ is selected as the carrier vector for embedding. The selected vectors $x_i \in \mathbb{R}^{l}$, $i = 1, 2, \ldots, q$, are used for information embedding. A signature code $s = [s_1, s_2, \ldots, s_l]^T$ of length l can be employed for one-bit embedding, and using multiple signature codes allows embedding multiple bits simultaneously. Usually, in the decoding problem, the signature code coefficients take the values +1 and -1.

2.1. Additive SS embedding scheme

In the additive SS, the watermark is added to the host signal $x = [x_1, x_2, \ldots, x_l]^T$. The signal model for additive SS data hiding is expressed as
$$r = x + sAb,$$
(1)
where the vectors r, x, and s have length l, A denotes the bit amplitude, and b denotes the information bit to be embedded. The distortion due to information hiding is defined as
$$D = \frac{1}{l} E\left\{ \| r - x \|^2 \right\},$$
(2)

which can easily be shown to equal $A^2$ in SS embedding. A point that should be taken into account is that the host domain may affect the procedure of adding the information. Using the DCT domain imposes no restriction on additive SS watermarking. Employing the DFT and embedding the information explicitly in the magnitudes of the DFT coefficients limits the set of coefficients that can be watermarked, since the watermarked DFT coefficients are required to be positive. We will discuss the embedding scheme in the DFT magnitude domain in more detail in Section 4, where the optimal decoders are presented.
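As a small numerical sanity check of the additive SS model (1) and the distortion definition (2), the following sketch uses a hypothetical carrier length and host statistics (none of these values come from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

l, A, b = 64, 2.0, +1                 # carrier length, bit amplitude, hidden bit
x = rng.normal(0.0, 10.0, size=l)     # hypothetical host coefficients (e.g., DCT)
s = rng.choice([-1.0, 1.0], size=l)   # +/-1 signature code

r = x + s * A * b                     # additive SS model, Eq. (1)

# Embedding distortion, Eq. (2): D = (1/l) E{||r - x||^2}
D = np.sum((r - x) ** 2) / l
```

Since $r - x = sAb$ and $(s_i b)^2 = 1$, every squared term equals $A^2$, so D equals $A^2$ exactly, as stated after (2).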

2.2. ISS embedding scheme

The traditional additive SS, where the host signal acts as a noise source, is a non-host-rejecting method which does not exploit the host signal information at the embedder. It was shown in [9] that ISS, which reduces the interference effect of the host signal, leads to significant performance improvements. In this article, we employ the ISS proposed in [9] to achieve a better performance in decoding hidden information. The signal model for ISS data hiding is defined as
$$r = sAb + u,$$
(3)
where
$$u = \left( I_l - k s s^T \right) x,$$
(4)
where $I_l$ denotes the identity matrix, and k is usually obtained by maximizing the watermark-to-data ratio or by minimizing the probability of error. The distortion for ISS embedding can be obtained as
$$D = \frac{1}{l} E\left\{ \left\| sAb - k s s^T x \right\|^2 \right\} = A^2 + k^2 s^T R_x s.$$
(5)

At the receiver side the hidden information needs to be decoded. Since the optimal decoders require the distribution of the host signal, different distributions for the DCT coefficients and the magnitude of DFT coefficients will be discussed in the next section.
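The ISS model (3)-(4) can be sketched in the same way; the seed, host statistics, and value of k below are hypothetical choices for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

l, A, b, k = 64, 2.0, -1, 0.01
x = rng.normal(0.0, 10.0, size=l)     # hypothetical host coefficients
s = rng.choice([-1.0, 1.0], size=l)   # +/-1 signature code

u = x - k * s * (s @ x)               # host-rejection term, Eq. (4): (I - k s s^T) x
r = s * A * b + u                     # ISS model, Eq. (3)

# Empirical distortion (1/l)||r - x||^2; its expectation over x is Eq. (5)
D_emp = np.sum((r - x) ** 2) / l
```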

3. Data hiding in the DCT and the DFT magnitude domains

As will be shown in the next section, the distribution of the DCT coefficients is needed to derive the optimal decoder. The authors in [26] suggested that the heavy-tailed behavior of the low- and mid-frequency DCT coefficients can be modeled by the zero-mean generalized Gaussian distribution (GGD) as
$$f_X(x) = \alpha e^{-|\beta x|^c},$$
(6)
where
$$\alpha = \frac{\beta c}{2 \Gamma(1/c)}, \qquad \beta = \frac{1}{\sigma_x} \sqrt{\frac{\Gamma(3/c)}{\Gamma(1/c)}},$$
(7)

and $\sigma_x$ denotes the standard deviation of the host signal and $\Gamma(\cdot)$ denotes the Gamma function, defined as $\Gamma(x) = \int_0^\infty t^{x-1} e^{-t}\,dt$. The exponent c is the shape parameter; a smaller value leads to a more impulsive shape and a heavier tail. The scale parameter β and the shape parameter c can be estimated from the host signal [34].
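A minimal sketch of the GGD in (6)-(7); the helper name `ggd_pdf` is ours, and the values of σ_x and c are assumed to be given (e.g., estimated as in [34]):

```python
import math

def ggd_pdf(x, sigma_x, c):
    """Zero-mean generalized Gaussian density of Eqs. (6)-(7)."""
    beta = math.sqrt(math.gamma(3.0 / c) / math.gamma(1.0 / c)) / sigma_x
    alpha = beta * c / (2.0 * math.gamma(1.0 / c))
    return alpha * math.exp(-abs(beta * x) ** c)
```

For c = 2 this reduces to the Gaussian density and for c = 1 to the Laplacian, which gives a quick consistency check of (7).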

The DFT is another popular transform domain for image analysis, where the DFT magnitudes can be used to represent the host signal. Since the magnitude of a DFT coefficient is real and positive, the Weibull distribution was suggested in [13] to model its PDF, because of its flexibility and consistency with the DFT magnitudes, as follows
$$f_X(x) = \frac{\gamma}{\eta} \left( \frac{x}{\eta} \right)^{\gamma - 1} \exp\left( -\left( \frac{x}{\eta} \right)^{\gamma} \right) u(x),$$
(8)

where $u(\cdot)$ denotes the unit step function, which returns one when its argument is positive and zero when its argument is negative. The parameters η > 0 and γ > 0 represent the scale and shape parameters of the Weibull distribution, respectively.
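The Weibull density (8) can likewise be sketched directly (the helper name is ours; η and γ are assumed known):

```python
import math

def weibull_pdf(x, eta, gamma):
    """Weibull density of Eq. (8) for DFT magnitudes; u(x) zeroes x < 0."""
    if x < 0:
        return 0.0
    return (gamma / eta) * (x / eta) ** (gamma - 1.0) * math.exp(-((x / eta) ** gamma))
```

For η = γ = 1 it reduces to the exponential density $e^{-x}$, a quick sanity check.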

4. The ML optimal decoders

The optimal decoder attempts to obtain an estimate $\hat{b}$ of b such that the probability of error $P_e = \Pr\{\hat{b} \neq b\}$ is minimized. This can be done with the maximum a posteriori (MAP) decoder, which simplifies to the maximum likelihood (ML) decoder under the assumption of equal prior probabilities of the bit information. The ML estimate $\hat{b}$ can be expressed as
$$\hat{b} = \arg\max_{b \in \{\pm 1\}} f_R(r \mid b, A, s),$$
(9)
where $f_R(r \mid b, A, s)$ represents the conditional PDF of r given b, A, and s. It is clear that the distribution of the host signal plays an important role in the ML decoder structure; thus, the distributions for the DCT and DFT magnitude domains were introduced in Section 3. The ML decoder for binary information hiding can be expressed using the likelihood ratio rule. In this case, the ML decoder decides $\hat{b} = +1$ if
$$\frac{f_R(r \mid b = +1)}{f_R(r \mid b = -1)} > 1.$$

As discussed in Section 3, the PDF of the host signal can differ depending on the transform domain. In practice, due to different desired properties, different transform domains can be used for data hiding. The derivation and performance analysis of the ML decoder for SS embedding require the distribution of the host signal in the specific domain. In the following subsections, the ML decoders for the SS and ISS embedding schemes in the DCT and DFT domains are derived. It is worth mentioning that the ML decoder for the SS scheme in the DCT domain has already been proposed in [26].

4.1. ML decoders in the DFT domain

One possible host signal for information hiding is the DFT magnitude. However, it is important to note that the SS (1) and ISS (3) embedding schemes cannot be applied directly in this domain because of the special property of the DFT magnitudes, i.e., they must always be positive. To ensure that the watermarked signal is always positive, we propose a modified SS embedding scheme in the DFT magnitude domain as follows
$$r = x + sAb + e,$$
(11)
where the insurance vector $e = [e_1, e_2, \ldots, e_l]^T$ is designed to make the $r_i$'s positive. More specifically, if $x_i + s_i A b$ is positive, the corresponding element $e_i$ is set to zero; if $x_i + s_i A b$ is negative, $e_i = -s_i A b$ is set to make $r_i$ equal to $x_i$, which is consequently positive. In summary, $e_i$ in (11) can be formulated as
$$e_i = -s_i A b \, u\!\left( -(x_i + s_i A b) \right).$$
(12)

The modified SS embedding scheme (11) and the vector e defined in (12) reveal that, for those coefficients with $e_i > 0$, the watermarked signal becomes $r_i = x_i$, meaning that the coefficient $r_i$ does not convey information directly. However, because of the structure of the optimal decoder, which will be derived shortly, such $r_i$'s can still help the decoding. We also note that, for the modified SS scheme (11), increasing the watermark amplitude A increases the number of coefficients with $e_i > 0$ and consequently decreases the number of watermarked coefficients. Overall, the goal of this modified SS embedding scheme is to make all the watermarked coefficients positive.
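Eqs. (11)-(12) can be sketched as follows; the function name and the hypothetical host magnitudes are ours, not from the article:

```python
import numpy as np

def modified_ss_embed(x, s, A, b):
    """Modified SS in the DFT magnitude domain, Eqs. (11)-(12):
    coefficients that would turn negative are left unwatermarked."""
    w = x + s * A * b
    e = np.where(w < 0, -s * A * b, 0.0)   # insurance vector, Eq. (12)
    return w + e, e

rng = np.random.default_rng(2)
x = rng.weibull(1.5, size=32) * 3.0        # hypothetical positive DFT magnitudes
s = rng.choice([-1.0, 1.0], size=32)
r, e = modified_ss_embed(x, s, 5.0, +1)    # deliberately large A to trigger e_i > 0
```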

Having proposed the modified SS embedding scheme for information hiding in the DFT magnitude domain, we can derive the optimal decoder using the distribution of the host signal. Referring to expression (8) and assuming mutually independent coefficients, the joint PDF of the host data is
$$f_X(x) = \exp\left( -\sum_{i=1}^{l} \left( \frac{x_i}{\eta_i} \right)^{\gamma_i} \right) \prod_{i=1}^{l} \frac{\gamma_i}{\eta_i^{\gamma_i}} x_i^{\gamma_i - 1} u(x_i).$$
(13)
Based on the ML decoder structure (10), the decoder decides $\hat{b} = +1$ if
$$\frac{f_R(r \mid b = +1)}{f_R(r \mid b = -1)} = \frac{\exp\left( -\sum_{i=1}^{l} \left( \frac{r_i - s_i A}{\eta_i} \right)^{\gamma_i} \right) \prod_{i=1}^{l} \frac{\gamma_i}{\eta_i^{\gamma_i}} \left( r_i - s_i A \right)^{\gamma_i - 1} u\!\left( r_i - s_i A \right)}{\exp\left( -\sum_{i=1}^{l} \left( \frac{r_i + s_i A}{\eta_i} \right)^{\gamma_i} \right) \prod_{i=1}^{l} \frac{\gamma_i}{\eta_i^{\gamma_i}} \left( r_i + s_i A \right)^{\gamma_i - 1} u\!\left( r_i + s_i A \right)} > 1.$$
(14)
Generally, the ML decoder could be expressed as
$$\hat{b} = \mathrm{sign}\{z\},$$
(15)
where, in this case, after some manipulations, the test statistic corresponding to (14) is obtained as
$$z = \sum_{i=1}^{l} \left( \frac{r_i + s_i A}{\eta_i} \right)^{\gamma_i} - \left( \frac{r_i - s_i A}{\eta_i} \right)^{\gamma_i} + (\gamma_i - 1) \ln\frac{r_i - s_i A}{r_i + s_i A} + \ln\frac{u(r_i - s_i A)}{u(r_i + s_i A)}.$$
(16)

Inspecting the test statistic of the ML decoder in the DFT magnitude domain reveals that the bit amplitude as well as the PDF parameters must be available at the receiver side. We now proceed to show that the decoding is error free in two cases. In the first case, b = +1 and there is a coefficient with $r_i + s_i A < 0$ at the decoder side; the test statistic in (16) then goes to infinity and the decoder (15) definitely decides $\hat{b} = +1$. More precisely, in this case, $\ln(u(r_i - s_i A))$ is zero while $\ln(u(r_i + s_i A))$ goes to minus infinity, and thus the test statistic in (16) goes to infinity. Similarly, in the second case, b = -1 and there is a coefficient with $r_i - s_i A < 0$ at the decoder side; the test statistic in (16) then goes to minus infinity and the decoder (15) definitely decides $\hat{b} = -1$.
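The statistic (16), including the two infinite-valued error-free cases discussed above, can be sketched as below (the function name is ours; η and γ may be scalars or per-coefficient arrays and are assumed known at the receiver):

```python
import numpy as np

def ml_dft_statistic(r, s, A, eta, gamma):
    """Test statistic of Eq. (16) for the modified SS scheme;
    the decision is b_hat = sign(z), as in Eq. (15)."""
    rm, rp = r - s * A, r + s * A
    if np.any(rp < 0):        # u(r_i + s_i A) = 0 for some i => decide b = +1
        return np.inf
    if np.any(rm < 0):        # u(r_i - s_i A) = 0 for some i => decide b = -1
        return -np.inf
    z = (rp / eta) ** gamma - (rm / eta) ** gamma + (gamma - 1.0) * np.log(rm / rp)
    return float(np.sum(z))
```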

As mentioned earlier, those coefficients which do not convey information directly can still help the decoding indirectly. To explain this better, assume that b = +1 and $x_i + s_i A < 0$; the corresponding coefficient then becomes $r_i = x_i$ at the embedding side. At the decoder side we have $r_i + s_i A < 0$, which, based on the above discussion, leads to the decision $\hat{b} = +1$. Similarly, assume that b = -1 and $x_i - s_i A < 0$; the corresponding coefficient again becomes $r_i = x_i$ at the embedding side. At the decoder side we have $r_i - s_i A < 0$, which leads to the decision $\hat{b} = -1$. In both cases, decoding is error free; therefore, even though some coefficients do not convey hidden information directly, they can still contribute to accurate decoding indirectly.

Deriving an analytic expression for the probability of error is always desirable, because it helps to analyze the behavior of the error. We first show that the test statistic used for decoding can be modeled as a Gaussian random variable. Note that the test statistic (16) is the sum of l random variables; under the assumption of independent host signal samples and known signature codes, the central limit theorem implies that the test statistic can be approximated as a normal random variable when l is large. Assuming that the signature code takes the values +1 and -1 with equal probability, one can show that the conditional PDFs of the test statistic are
$$f_Z(z \mid b = +1) = \mathcal{N}\left( m_z, \sigma_z^2 \right),$$
(17)
$$f_Z(z \mid b = -1) = \mathcal{N}\left( -m_z, \sigma_z^2 \right),$$
(18)
where $m_z$ and $\sigma_z^2$ represent the mean and the variance of the test statistic. Assuming equal prior probabilities for the information bit, i.e., Pr{b = +1} = Pr{b = -1} = 1/2, the probability of error can be expressed as
$$P_e = \frac{1}{2} \mathrm{erfc}\left( \sqrt{\frac{\mathrm{WIR}}{2}} \right) = \frac{1}{2} \mathrm{erfc}\left( \sqrt{\frac{m_z^2}{2 \sigma_z^2}} \right),$$
(19)
where $\mathrm{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt$ is the complementary error function and WIR denotes the watermark-to-interference ratio. It can be shown that the mean and variance of the test statistic (16), when b = +1, for all host signal coefficients with $x_i > 2A$, and assuming equal probabilities for the signature code, Pr{s_i = +1} = Pr{s_i = -1} = 1/2, are obtained as
$$m_z = \sum_{i} \frac{1}{2}\left( \frac{x_i + 2A}{\eta_i} \right)^{\gamma_i} + \frac{1}{2}\left( \frac{x_i - 2A}{\eta_i} \right)^{\gamma_i} + \frac{1}{2}(\gamma_i - 1) \ln\frac{x_i^2}{x_i^2 - 4A^2} - \left( \frac{x_i}{\eta_i} \right)^{\gamma_i},$$
(20)
$$\begin{aligned} \sigma_z^2 = \sum_i \Bigg[ & \frac{1}{2}\left( \frac{x_i + 2A}{\eta_i} \right)^{2\gamma_i} + \frac{1}{2}\left( \frac{x_i - 2A}{\eta_i} \right)^{2\gamma_i} + \left( \frac{x_i}{\eta_i} \right)^{2\gamma_i} + \frac{1}{2}(\gamma_i - 1)^2 \ln^2\frac{x_i}{x_i + 2A} + \frac{1}{2}(\gamma_i - 1)^2 \ln^2\frac{x_i}{x_i - 2A} \\ & - \left( \frac{x_i + 2A}{\eta_i} \right)^{\gamma_i}\left( \frac{x_i}{\eta_i} \right)^{\gamma_i} - \left( \frac{x_i - 2A}{\eta_i} \right)^{\gamma_i}\left( \frac{x_i}{\eta_i} \right)^{\gamma_i} + (\gamma_i - 1)\left( \frac{x_i + 2A}{\eta_i} \right)^{\gamma_i} \ln\frac{x_i}{x_i + 2A} \\ & + (\gamma_i - 1)\left( \frac{x_i - 2A}{\eta_i} \right)^{\gamma_i} \ln\frac{x_i}{x_i - 2A} - (\gamma_i - 1)\left( \frac{x_i}{\eta_i} \right)^{\gamma_i} \ln\frac{x_i^2}{x_i^2 - 4A^2} \\ & - \left( \frac{1}{2}\left( \frac{x_i + 2A}{\eta_i} \right)^{\gamma_i} + \frac{1}{2}\left( \frac{x_i - 2A}{\eta_i} \right)^{\gamma_i} - \left( \frac{x_i}{\eta_i} \right)^{\gamma_i} + \frac{1}{2}(\gamma_i - 1)\ln\frac{x_i^2}{x_i^2 - 4A^2} \right)^{\!2}\, \Bigg]. \end{aligned}$$
(21)

If there is a host signal coefficient with $x_i < 2A$, then, according to the earlier discussion, the probability of error equals zero. Therefore, the theoretical error probability of the modified SS scheme is expressed as (19), where the mean and variance are obtained using expressions (20) and (21).
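Given $m_z$ and $\sigma_z^2$ from (20)-(21), evaluating (19) is a one-liner; a sketch using Python's standard `math.erfc`:

```python
import math

def pe_from_wir(wir):
    """Theoretical bit error probability of Eq. (19):
    Pe = (1/2) erfc(sqrt(WIR / 2)), with WIR = m_z^2 / sigma_z^2."""
    return 0.5 * math.erfc(math.sqrt(wir / 2.0))
```

At WIR = 0 the decoder is guessing (Pe = 1/2), and Pe decreases monotonically as WIR grows.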

Having introduced information embedding using the modified SS scheme in the DFT magnitude domain, we now present the modified ISS scheme. Similar to the modified SS scheme, to avoid having negative watermarked coefficients, we propose a modified ISS embedding scheme as
$$r = sAb + u + e,$$
(22)
where the insurance vector e is determined as follows: if $u_i + s_i A b$ is positive, the corresponding $e_i$ is set to zero; if $u_i + s_i A b$ is negative, then $e_i = -s_i A b + k s_i s^T x$, which ensures that $r_i$ is positive. Therefore, $e_i$ in the modified ISS scheme can be expressed as
$$e_i = \left( -s_i A b + k s_i s^T x \right) u\!\left( -(u_i + s_i A b) \right).$$
(23)
In order to obtain the ML decoder, the conditional distribution of the received signal r should be exploited. To do so, it is straightforward to show that the PDF of the vector u is given by
$$f_U(u) = \frac{1}{|M|} \exp\left( -\sum_{i=1}^{l} \left( \frac{m_i u}{\eta_i} \right)^{\gamma_i} \right) \prod_{i=1}^{l} \frac{\gamma_i}{\eta_i^{\gamma_i}} \left( m_i u \right)^{\gamma_i - 1} u(m_i u),$$
(24)
where |M| is the determinant of M and $m_i$ is the i-th row of $M^{-1}$, with M defined as
$$M = I_l - k s s^T.$$
(25)
Exploiting ML theory, the decoder decides $\hat{b} = +1$ when the following inequality holds
$$\frac{f_R(r \mid b = +1)}{f_R(r \mid b = -1)} = \frac{\exp\left( -\sum_{i=1}^{l} \left( \frac{m_i (r - sA)}{\eta_i} \right)^{\gamma_i} \right) \prod_{i=1}^{l} \frac{\gamma_i}{\eta_i^{\gamma_i}} \left( m_i (r - sA) \right)^{\gamma_i - 1} u\!\left( m_i (r - sA) \right)}{\exp\left( -\sum_{i=1}^{l} \left( \frac{m_i (r + sA)}{\eta_i} \right)^{\gamma_i} \right) \prod_{i=1}^{l} \frac{\gamma_i}{\eta_i^{\gamma_i}} \left( m_i (r + sA) \right)^{\gamma_i - 1} u\!\left( m_i (r + sA) \right)} > 1.$$
(26)
With some manipulations on (26), we can have the following test statistic
$$z = \sum_{i=1}^{l} \left( \frac{m_i (r + sA)}{\eta_i} \right)^{\gamma_i} - \left( \frac{m_i (r - sA)}{\eta_i} \right)^{\gamma_i} + (\gamma_i - 1) \ln\frac{m_i (r - sA)}{m_i (r + sA)} + \ln\frac{u\!\left( m_i (r - sA) \right)}{u\!\left( m_i (r + sA) \right)}.$$
(27)
We now investigate the error probability behavior of this scheme. It is observed from (26) that at least one of the terms $m_i(r + sA)$ and $m_i(r - sA)$ should be positive for every watermarked coefficient, but the scheme in (22) may not fulfill this requirement. For instance, assume that b = +1 is hidden in the host signal; with the embedding scheme (22) and the ML decoder (26), the two terms $u(m_i(r - sA))$ and $u(m_i(r + sA))$ become $u(x_i + m_i e)$ and $u(x_i + m_i e + 2A m_i s)$, respectively. Although the host signal vector and the insurance vector have non-negative coefficients, since the elements of $m_i$ can be negative, it cannot be guaranteed that all coefficients $x_i + m_i e$ and $x_i + m_i e + 2A m_i s$ are positive. A similar observation holds when b = -1 is hidden in the host signal. Referring to (26), one concludes that, in the cases where both terms $m_i(r + sA)$ and $m_i(r - sA)$ are negative, the decoder makes random decisions. In order to avoid this undesirable behavior, we improve the modified ISS embedding scheme by proposing
r = s A b + u + e + q ,
(28)
where the vector $q = [q_1, q_2, \ldots, q_l]^T$ makes $m_i(r + sA)$ positive when b = -1 (or $m_i(r - sA)$ positive when b = +1). With (28), when b = -1, $y = M^{-1}(r + sA)$ becomes $y = x + M^{-1}e + M^{-1}q$, where $y = [y_1, y_2, \ldots, y_l]^T$. Let us write $x + M^{-1}e = v_p + v_n$, where $v_p = [v_{p_1}, v_{p_2}, \ldots, v_{p_l}]^T$ is the vector whose elements are non-negative, and $v_n = [v_{n_1}, v_{n_2}, \ldots, v_{n_l}]^T$ is the vector whose elements are negative. Therefore, the vector y can be written as
$$y = v_p + v_n + M^{-1} q.$$
(29)
In order to make all the elements of y positive, it is sufficient to set $v_n + M^{-1}q = 0$, which leads to
$$q = -M v_n.$$
(30)

We apply the modified ISS scheme (28) for embedding and use the decoder (26) for extracting the hidden information. In summary, the modified ISS embedding scheme (28) in the DFT magnitude domain has been proposed in order to make all the watermarked coefficients positive and to make the decoder (26) meaningful.
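Combining Eqs. (22), (23), (25), (28), and (30) gives the following embedder sketch (the function name and hypothetical host magnitudes are ours); by construction, $M^{-1}(r - sAb)$ is element-wise non-negative, which is exactly the property the decoder (26) relies on:

```python
import numpy as np

def modified_iss_embed(x, s, A, b, k):
    """Modified ISS embedding in the DFT magnitude domain,
    combining Eqs. (22), (23), (28), and (30)."""
    l = len(s)
    M = np.eye(l) - k * np.outer(s, s)       # Eq. (25)
    u = M @ x                                # Eq. (4)
    w = s * A * b + u
    e = np.where(w < 0, -s * A * b + k * s * (s @ x), 0.0)   # Eq. (23)
    v = x + np.linalg.inv(M) @ e
    v_n = np.where(v < 0, v, 0.0)            # negative part of x + M^{-1} e
    q = -M @ v_n                             # Eq. (30)
    return w + e + q                         # Eq. (28)

rng = np.random.default_rng(3)
l, A, b, k = 16, 4.0, -1, 0.02
x = rng.weibull(1.5, size=l) * 3.0           # hypothetical DFT magnitudes
s = rng.choice([-1.0, 1.0], size=l)
r = modified_iss_embed(x, s, A, b, k)
```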

The remaining point is determining the parameter k in ISS embedding, which can be done using the probability of error: k should take the value which minimizes the theoretical probability of error. It can be shown that the probability of error for the modified ISS scheme is obtained using expression (19) when $x_i > 2A\left( l \left( k^{-1} - l \right)^{-1} + 1 \right)$, based on the following mean and variance
$$m_z = \sum_{i} \frac{1}{2}\left( \frac{a_i}{\eta_i} \right)^{\gamma_i} + \frac{1}{2}\left( \frac{d_i}{\eta_i} \right)^{\gamma_i} + \frac{1}{2}(\gamma_i - 1) \ln\frac{x_i^2}{a_i d_i} - \left( \frac{x_i}{\eta_i} \right)^{\gamma_i},$$
(31)
$$\begin{aligned} \sigma_z^2 = \sum_i \Bigg[ & \frac{1}{2}\left( \frac{a_i}{\eta_i} \right)^{2\gamma_i} + \frac{1}{2}\left( \frac{d_i}{\eta_i} \right)^{2\gamma_i} + \left( \frac{x_i}{\eta_i} \right)^{2\gamma_i} + \frac{1}{2}(\gamma_i - 1)^2 \ln^2\frac{x_i}{a_i} + \frac{1}{2}(\gamma_i - 1)^2 \ln^2\frac{x_i}{d_i} \\ & - \left( \frac{a_i}{\eta_i} \right)^{\gamma_i}\left( \frac{x_i}{\eta_i} \right)^{\gamma_i} - \left( \frac{d_i}{\eta_i} \right)^{\gamma_i}\left( \frac{x_i}{\eta_i} \right)^{\gamma_i} + (\gamma_i - 1)\left( \frac{a_i}{\eta_i} \right)^{\gamma_i} \ln\frac{x_i}{a_i} + (\gamma_i - 1)\left( \frac{d_i}{\eta_i} \right)^{\gamma_i} \ln\frac{x_i}{d_i} \\ & - (\gamma_i - 1)\left( \frac{x_i}{\eta_i} \right)^{\gamma_i} \ln\frac{x_i^2}{a_i d_i} - \left( \frac{1}{2}\left( \frac{a_i}{\eta_i} \right)^{\gamma_i} + \frac{1}{2}\left( \frac{d_i}{\eta_i} \right)^{\gamma_i} - \left( \frac{x_i}{\eta_i} \right)^{\gamma_i} + \frac{1}{2}(\gamma_i - 1)\ln\frac{x_i^2}{a_i d_i} \right)^{\!2}\, \Bigg], \end{aligned}$$
(32)
where
$$a_i = x_i + 2A\left( l\mu + 1 \right), \qquad d_i = x_i - 2A\left( l\mu + 1 \right),$$
(33)
$$\mu = \left( k^{-1} - l \right)^{-1}.$$
(34)
It should be noted that, regarding (5) and the fact that the parameter A is always positive, one can conclude that the parameter k in the ISS embedding scheme should satisfy $0 \le k \le \sqrt{D / (s^T R_x s)}$. This parameter can be obtained by optimizing the WIR through the following constrained maximization
\lambda_k = \arg\max_{0 \le k \le D/(\mathbf{s}^T \mathbf{R}_x \mathbf{s})} \frac{m_z^2}{\sigma_z^2}.
(35)

4.2. ML decoders in the DCT domain

As opposed to the SS and ISS schemes in the DFT magnitude domain, which lacked an optimal decoder, the optimal decoder for the SS scheme in the DCT domain has been derived in [26]. It has been shown that the ML decoder satisfies (15) where
z = \sum_{i=1}^{l} \left( \left| \frac{r_i + s_i A}{\sigma_{x_i}} \right|^c - \left| \frac{r_i - s_i A}{\sigma_{x_i}} \right|^c \right).
(36)

We can see that the ML decoder requires knowledge of the watermark amplitude and the shape parameter, as well as the signature code. For a practical implementation, the receiver should either be given this prior information or estimate it. One way to avoid estimating the shape parameter is to use a common value for all images, hoping it describes the distribution of the DCT coefficients reasonably well [35].
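As a minimal sketch, the decision rule (15) with the statistic (36) can be implemented directly; the shape parameter c, the host coefficients, the standard deviations, and the amplitude below are illustrative values only, not the paper's experimental settings.

```python
def ml_decode_ss_dct(r, s, A, sigma, c):
    """ML decision for SS in the DCT domain: b_hat = sign(z), z as in (36)."""
    z = sum(abs((ri + si * A) / sg) ** c - abs((ri - si * A) / sg) ** c
            for ri, si, sg in zip(r, s, sigma))
    return 1 if z > 0 else -1

s     = [1, -1, 1, 1, -1, 1, -1, -1]                  # signature code
x     = [0.3, -1.2, 0.7, -0.1, 0.9, -0.4, 0.2, 1.1]   # host DCT coefficients
sigma = [1.0] * 8                                     # host standard deviations
A, c  = 0.8, 0.8
r_pos = [xi + si * A for xi, si in zip(x, s)]         # SS embedding, b = +1
r_neg = [xi - si * A for xi, si in zip(x, s)]         # SS embedding, b = -1
```

Running the decoder on `r_pos` and `r_neg` recovers the embedded bits +1 and -1, respectively.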

Now, we extend this work to obtain the ML decoder for the ISS embedding scheme in the DCT domain, to achieve better decoding performance. To do so, we first derive the PDF of the vector u defined in (4). To this end, the PDF of the generalized Gaussian is rewritten in the following vector form
f_x(\mathbf{x}) = \frac{\lambda^l}{\prod_{i=1}^{l} \sigma_{x_i}} \exp\left\{ -\left(\frac{\Gamma(3/c)}{\Gamma(1/c)}\right)^{c/2} \left[\mathbf{x}^T \mathbf{R}_x^{-1/2}\right]^{c/2} \left[\mathbf{R}_x^{-1/2} \mathbf{x}\right]^{c/2} \right\},
(37)
where
\lambda = \frac{c}{2\Gamma(1/c)} \sqrt{\frac{\Gamma(3/c)}{\Gamma(1/c)}},
(38)
and R_x = diag(σ_{x_1}², σ_{x_2}², ..., σ_{x_l}²). In addition, we define [a_1, a_2, ..., a_l]^c = [|a_1|^c, |a_2|^c, ..., |a_l|^c]. Based on the ISS signal model (3), it is shown in [36] that the PDF of u can be expressed as
f_U(\mathbf{u}) = \frac{\lambda^l \left|\mathbf{M}^{-1}\right|}{\prod_{i=1}^{l} \sigma_{x_i}} \exp\left\{ -\left(\frac{\Gamma(3/c)}{\Gamma(1/c)}\right)^{c/2} \left[\mathbf{u}^T \mathbf{M}^{-1} \mathbf{R}_x^{-1/2}\right]^{c/2} \left[\mathbf{R}_x^{-1/2} \mathbf{M}^{-1} \mathbf{u}\right]^{c/2} \right\}.
(39)
Then the likelihood ratio test for the ISS model (3) leads to the decoder deciding \hat{b} = +1 when
\frac{f_R(\mathbf{r} \mid b = +1)}{f_R(\mathbf{r} \mid b = -1)} = \frac{\exp\left\{ -\left(\frac{\Gamma(3/c)}{\Gamma(1/c)}\right)^{c/2} \left[(\mathbf{r} - \mathbf{s}A)^T \mathbf{M}^{-1} \mathbf{R}_x^{-1/2}\right]^{c/2} \left[\mathbf{R}_x^{-1/2} \mathbf{M}^{-1} (\mathbf{r} - \mathbf{s}A)\right]^{c/2} \right\}}{\exp\left\{ -\left(\frac{\Gamma(3/c)}{\Gamma(1/c)}\right)^{c/2} \left[(\mathbf{r} + \mathbf{s}A)^T \mathbf{M}^{-1} \mathbf{R}_x^{-1/2}\right]^{c/2} \left[\mathbf{R}_x^{-1/2} \mathbf{M}^{-1} (\mathbf{r} + \mathbf{s}A)\right]^{c/2} \right\}} > 1.
(40)
After some algebraic simplification, the ML decoder for the ISS embedding scheme is obtained in the form of (15) where
z = \sum_{i=1}^{l} \left( \left| \frac{m_i(\mathbf{r} + \mathbf{s}A)}{\sigma_{x_i}} \right|^c - \left| \frac{m_i(\mathbf{r} - \mathbf{s}A)}{\sigma_{x_i}} \right|^c \right).
(41)
Having proposed the optimal decoder of the ISS scheme in the DCT domain, the error probability is obtained from (19), where the mean and variance are determined as follows:
m_z = \sum_{i=1}^{l} \frac{1}{\sigma_{x_i}^{c}} \left( \frac{1}{2}\left|x_i + 2A(l\mu + 1)\right|^c + \frac{1}{2}\left|x_i - 2A(l\mu + 1)\right|^c - \left|x_i\right|^c \right),
(42)
\sigma_z^2 = \sum_{i=1}^{l} \frac{1}{4\,\sigma_{x_i}^{2c}} \left( \left|x_i + 2A(l\mu + 1)\right|^c - \left|x_i - 2A(l\mu + 1)\right|^c \right)^2.
(43)

Similar to the ISS embedding in the magnitude of the DFT domain, the parameter k could be determined using the constrained maximization (35) taking into account the mean and variance defined in (42) and (43).
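A hedged numerical sketch of the constrained search (35) for the DCT-domain ISS case: the WIR m_z²/σ_z² is evaluated from (42)-(43) on a grid of k over [0, D/(s^T R_x s)]. The host statistics, c, A, and D are illustrative, and A is held fixed here, even though in the paper the amplitude is tied to the distortion budget.

```python
def wir_iss_dct(k, x, sigma, c, A, l):
    """WIR m_z^2 / sigma_z^2 from (42)-(43) for a given ISS parameter k."""
    mu = k / (1.0 - k * l)                  # (34): mu = (1/k - l)^(-1)
    shift = 2.0 * A * (l * mu + 1.0)
    mz = sum((0.5 * abs(xi + shift) ** c + 0.5 * abs(xi - shift) ** c
              - abs(xi) ** c) / sg ** c for xi, sg in zip(x, sigma))
    var = sum((abs(xi + shift) ** c - abs(xi - shift) ** c) ** 2
              / (4.0 * sg ** (2 * c)) for xi, sg in zip(x, sigma))
    return mz * mz / var

x     = [0.3, -1.2, 0.7, -0.1, 0.9, -0.4, 0.2, 1.1]   # illustrative host stats
sigma = [1.0] * 8
l, c, A, D = 8, 0.8, 0.5, 0.8
k_max = D / sum(sg * sg for sg in sigma)    # constraint 0 <= k <= D / s^T R_x s
grid  = [k_max * i / 200.0 for i in range(201)]
k_opt = max(grid, key=lambda k: wir_iss_dct(k, x, sigma, c, A, l))
```

A simple grid search suffices here because the feasible interval is one-dimensional and bounded; any 1-D optimizer could replace it.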

5. Sub-optimal decoders

As shown in Section 4, the ML decoder requires the host distribution parameters as well as the watermark amplitude. Assuming low distortion due to the watermark, we can estimate the host signal parameters from the received signal, while estimating the watermark amplitude is not easy because of the complex structure of the embedding scheme. Therefore, to reduce the dependency on such prior information, in this section we investigate several sub-optimal decoders [37]. In addition, since the ML decoder for embedding in the DFT magnitude domain was shown to be sensitive to the watermark amplitude, we hope that sub-optimal decoders in this domain can decrease this sensitivity and lead to good performances in the presence of additional noise.

5.1. Local optimum decoder

To keep the hidden information imperceptible, the watermark amplitude should be small. This motivates us to explore the LOD idea of using a Taylor expansion of the test statistic around zero. The Taylor series of f(x) around the point x = a, truncated after the first-order term, can be expressed as
f(x) \approx f(a) + f'(a)(x - a),
(44)

where f'(a) is the first-order derivative of f(·) at the point x = a.

Having introduced this approximation, we first consider the LOD for SS embedding in the DCT domain. Taking into account that the test statistic (36) equals zero at A = 0, and taking the derivative of the test statistic to form the Taylor series around A = 0, the test statistic is approximated by
z = \sum_{i=1}^{l} \frac{s_i \left|r_i\right|^{c-1} \operatorname{sign}(r_i)}{\sigma_{x_i}^{c}}.
(45)
Thus, the LOD for SS embedding in the DCT domain is achieved by
\hat{b} = \operatorname{sign}\left( \sum_{i=1}^{l} \frac{s_i \left|r_i\right|^{c-1} \operatorname{sign}(r_i)}{\sigma_{x_i}^{c}} \right).
(46)

The decoder expression above reveals that it is independent of the watermark amplitude, and it is therefore appropriate for cases in which the decoder has no access to the amplitude.
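A minimal sketch of the decision rule (46); note that the watermark amplitude never appears in the decoder itself (it is only used below to synthesize received vectors). The host data, signature, shape parameter, and amplitude are illustrative values.

```python
def lod_decode_ss_dct(r, s, sigma, c):
    """LOD decision (46); the watermark amplitude A is not needed.
    Assumes no r_i is exactly zero (|0|^(c-1) is undefined for c < 1)."""
    z = sum(si * abs(ri) ** (c - 1.0) * (1.0 if ri > 0 else -1.0) / sg ** c
            for ri, si, sg in zip(r, s, sigma))
    return 1 if z > 0 else -1

s     = [1, -1, 1, 1, -1, 1, -1, -1]
x     = [0.3, -1.2, 0.7, -0.1, 0.9, -0.4, 0.2, 1.1]
sigma = [1.0] * 8
A, c  = 0.8, 0.8
r_pos = [xi + si * A for xi, si in zip(x, s)]   # SS embedding, b = +1
r_neg = [xi - si * A for xi, si in zip(x, s)]   # SS embedding, b = -1
```

On this synthetic data the LOD recovers both embedded bits, despite never being told the value of A.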

Having obtained the test statistic of the LOD for SS embedding, the error probability can be analyzed using (19), where it can be shown that the mean and the variance are as follows:
m_z = \sum_{i=1}^{l} \frac{1}{\sigma_{x_i}^{c}} \left( \left|x_i + A\right|^{c-1} \operatorname{sign}(x_i + A) - \left|x_i - A\right|^{c-1} \operatorname{sign}(x_i - A) \right),
(47)
\sigma_z^2 = \sum_{i=1}^{l} \frac{1}{\sigma_{x_i}^{2c}} \left( \left|x_i + A\right|^{c-1} \operatorname{sign}(x_i + A) + \left|x_i - A\right|^{c-1} \operatorname{sign}(x_i - A) \right)^2.
(48)
We follow the same procedure used for the LOD with the SS scheme to obtain its counterpart for the ISS scheme. To derive the LOD for ISS in the DCT domain, the test statistic (41) should be rewritten in a more tractable form. After some algebraic manipulation, we have the following form of the test statistic
z = \sum_{i=1}^{l} \left( \left| \frac{\mu s_i \sum_{j \neq i} s_j r_j + (1+\mu) r_i + (l\mu + 1) s_i A}{\sigma_{x_i}} \right|^c - \left| \frac{\mu s_i \sum_{j \neq i} s_j r_j + (1+\mu) r_i - (l\mu + 1) s_i A}{\sigma_{x_i}} \right|^c \right).
(49)
Taking the derivative of the above expression and using the Taylor series around A = 0, we have the LOD for ISS embedding as
\hat{b} = \operatorname{sign}\left( \sum_{i=1}^{l} \frac{(l\mu + 1)\, s_i}{\sigma_{x_i}^{c}} \left| \mu s_i \sum_{j \neq i} s_j r_j + (1+\mu) r_i \right|^{c-1} \operatorname{sign}\!\left( \mu s_i \sum_{j \neq i} s_j r_j + (1+\mu) r_i \right) \right).
(50)
Again, we observe that the above expression for information decoding with the ISS scheme does not require the watermark amplitude, and it is thus suitable in cases where the decoder has no access to it. Exploiting (19) leads to the corresponding theoretical error probability of the LOD in (50), with the following mean and variance parameters
m_z = \sum_{i=1}^{l} \frac{l\mu + 1}{\sigma_{x_i}^{c}} \left( \left|t_i^{+}\right|^{c-1} \operatorname{sign}\!\left(t_i^{+}\right) - \left|t_i^{-}\right|^{c-1} \operatorname{sign}\!\left(t_i^{-}\right) \right),
(51)
\sigma_z^2 = \sum_{i=1}^{l} \frac{l\mu + 1}{\sigma_{x_i}^{2c}} \left( \left|t_i^{+}\right|^{c-1} \operatorname{sign}\!\left(t_i^{+}\right) + \left|t_i^{-}\right|^{c-1} \operatorname{sign}\!\left(t_i^{-}\right) \right)^2,
(52)
where, for compactness,
t_i^{+} = \mu \sum_{j \neq i} s_j x_j + \mu A (l-1) - \mu k (l-1)\, \mathbf{s}^T\mathbf{x} + (1+\mu)\left(x_i + A - k\, \mathbf{s}^T\mathbf{x}\right),
t_i^{-} = -\mu \sum_{j \neq i} s_j x_j - \mu A (l-1) + \mu k (l-1)\, \mathbf{s}^T\mathbf{x} + (1+\mu)\left(x_i + A + k\, \mathbf{s}^T\mathbf{x}\right).

Since the LODs are approximations of the ML decoders, decoding performances degraded relative to the optimal ML ones are expected. The LOD has the advantage that no additional knowledge of the watermark amplitude is required at the decoder side.

A similar procedure can be followed to obtain the LOD in the DFT magnitude domain. Referring to the test statistic (16), the LOD for SS embedding in the DFT magnitude domain is
\hat{b} = \operatorname{sign}\left( \sum_i \left( \frac{\gamma_i\, s_i\, r_i^{\gamma_i - 1}}{\eta_i^{\gamma_i}} - \frac{(\gamma_i - 1)\, s_i}{r_i} \right) \right).
(53)

It is worth mentioning that, since the LOD is independent of the watermark amplitude A, decoding performance degradation is observed for the LOD compared with ML, especially for high watermark amplitudes. As discussed in Section 4, although those watermarked coefficients with x_i - s_i A < 0 or x_i + s_i A < 0 do not convey information directly, they do help accurate decoding in the ML decoder in (16). From the LOD in (53), it is clear that, with no access to the watermark amplitude information, all the received coefficients are exploited for extracting the hidden information, even though some of them do not convey any information. This is the source of the decoding performance degradation relative to ML.

In order to reduce the LOD's sensitivity to the watermark amplitude, we alternatively present the GML as a sub-optimal decoder in the DFT magnitude domain for SS embedding. The GML considers A, b, and e as unknown parameters to be estimated. Referring to the embedding scheme (11), the GML decoder is obtained as
\hat{\mathbf{y}} = \arg\max_{\mathbf{y}} f_x(\mathbf{r} - \mathbf{y}),
(54)
where f_x(·) is the Weibull distribution defined in (8) and y = sAb + e. Taking the derivative of the above expression with respect to y and setting it to zero, we have
\hat{\mathbf{y}} = \mathbf{r} - \mathbf{g},
(55)
\mathbf{g} = \left[ \eta_1\!\left(\frac{\gamma_1 - 1}{\gamma_1}\right)^{1/\gamma_1},\; \eta_2\!\left(\frac{\gamma_2 - 1}{\gamma_2}\right)^{1/\gamma_2},\; \ldots,\; \eta_l\!\left(\frac{\gamma_l - 1}{\gamma_l}\right)^{1/\gamma_l} \right]^T.
(56)

The main goal is to estimate b from ŷ. From the expression ŷ = sAb + e, it is clear that the correlator b̂ = sign(s^T ŷ) is the solution. It should be pointed out that, since the GML decoder has no access to the watermark amplitude information, its performance degrades from that of ML. However, compared with the LOD, the GML yields fewer errors for high watermark amplitudes and thus provides better decoding performance.

The GML decoder for the ISS scheme can be obtained similarly by maximizing
\hat{\mathbf{y}} = \arg\max_{\mathbf{y}} f_U(\mathbf{r} - \mathbf{y}),
(57)
where f U (.) is defined in (24) and y = s Ab + e + q. Similar to the SS case, we have
\hat{\mathbf{y}} = \mathbf{r} - \mathbf{M}^{-1}\mathbf{g}.
(58)

Therefore, the GML decoder for ISS is b̂ = sign(s^T ŷ), where ŷ is defined as in (58).
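A minimal sketch of the GML decoding chain (54)-(56) for the SS case: subtract the vector g of Weibull modes from r, then correlate with the signature. The Weibull parameters (chosen with γ_i > 1 so the mode formula applies), the host values, and the amplitude are illustrative, and the correction term e of the modified SS scheme is taken as zero for simplicity.

```python
def gml_decode_ss_dft(r, s, eta, gamma):
    """GML decoding (54)-(56): g_i = eta_i ((gamma_i - 1)/gamma_i)^(1/gamma_i)
    is the mode of the i-th Weibull PDF (valid for gamma_i > 1);
    y_hat = r - g estimates sAb + e, and b_hat = sign(s^T y_hat)."""
    g = [e * ((gm - 1.0) / gm) ** (1.0 / gm) for e, gm in zip(eta, gamma)]
    corr = sum(si * (ri - gi) for si, ri, gi in zip(s, r, g))
    return 1 if corr > 0 else -1

s     = [1, -1, 1, 1, -1, 1, -1, -1]     # balanced signature (illustrative)
eta   = [2.0] * 8                        # Weibull scale parameters
gamma = [1.5] * 8                        # Weibull shape parameters (> 1)
x     = [1.0] * 8                        # host magnitudes (illustrative)
A     = 0.5
r_pos = [xi + si * A for xi, si in zip(x, s)]   # modified SS, b = +1, e = 0 assumed
r_neg = [xi - si * A for xi, si in zip(x, s)]   # modified SS, b = -1, e = 0 assumed
```

Because the signature here is balanced, the host residual x - g cancels in the correlation and both embedded bits are recovered without knowing A.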

5.2. LMMSE decoder

In Section 4, we focused on the optimal ML decoders, which require the PDF parameters as well as the watermark strength and the signature code. Since providing the watermark strength to the decoder is not always possible, the LOD and GML decoders were proposed earlier in this section to make decoding independent of this information. However, these decoders still require the PDF parameters. This motivates the development of sub-optimal decoders that depend neither on the PDF parameters nor on the watermark strength. Here, we introduce the LMMSE decoder, which requires only the signature code as prior information at the decoder side.

In signal processing, the mean square error (MSE) is a common measure of estimation quality, and the MMSE estimator minimizes it in a Bayesian setting. More specifically, let θ be an unknown random variable to be estimated and let y be the measurement; the MMSE estimator is a function θ̂ = g(y) that minimizes the MSE E{(θ - g(y))²|y}. It is known that, under some weak regularity assumptions, the MMSE estimator is given by θ̂_MMSE = E{θ|y}. In many cases, the MMSE estimator cannot be obtained, since we may not know the distributions f(θ|y) and f(θ, y), or the conditional expectation may be difficult to compute. Therefore, in practice the LMMSE estimator, which has a linear structure and can be obtained more easily [38], is usually applied. The LMMSE decoder takes a linear combination of the received signal r_i with coefficients w_i as follows
z = w ^ T r ,
(59)
where the coefficients are to be determined, and the hidden information is extracted using (15). The weight vector w is obtained by minimizing the MSE:
\hat{\mathbf{w}} = \arg\min_{\mathbf{w}} E\left\{ \left( A b - \mathbf{w}^T \mathbf{r} \right)^2 \right\}.
(60)
Information embedding, whether in the DCT domain or in the DFT magnitude domain, can be written in the general form of (28). It can be shown that this minimization leads to the conventional [38] LMMSE solution
\hat{\mathbf{w}} = \mathbf{R}_r^{-1}\mathbf{s},
(61)

where the autocorrelation matrix R_r, defined as R_r = E{rr^T}, can be estimated at the receiver side. Therefore, from the former expression, we can see that only the signature code is required at the receiver side. Although the LMMSE decoder has the same structure for information embedding in the DCT domain and in the DFT magnitude domain, its performance differs between these host domains. As explained earlier for the LOD in the DFT magnitude domain, not all the coefficients convey information, while the autocorrelation matrix is estimated using all the coefficients of the received signal; this causes degradation in the decoding performance.
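A sketch of the LMMSE decoder (59)-(61) for the SS model r = x + sAb with a diagonal host covariance. Under this assumed model, R_r = R_x + A² s s^T, so the Sherman-Morrison identity gives R_r^{-1}s = R_x^{-1}s / (1 + A² s^T R_x^{-1}s), and since only the sign of w^T r matters, the scalar denominator is irrelevant. In practice R_r would instead be estimated from the received blocks; the values below are illustrative.

```python
def lmmse_decode_ss(r, s, rx_diag, A):
    """LMMSE decoding (59)-(61) for r = x + sAb with R_x = diag(rx_diag).
    Sherman-Morrison: (R_x + A^2 s s^T)^{-1} s is proportional to
    R_x^{-1} s, so no explicit matrix inversion is needed in this case."""
    u = [si / v for si, v in zip(s, rx_diag)]           # R_x^{-1} s
    scale = 1.0 + A * A * sum(si * ui for si, ui in zip(s, u))
    w = [ui / scale for ui in u]                        # w = R_r^{-1} s, (61)
    z = sum(wi * ri for wi, ri in zip(w, r))            # test statistic (59)
    return 1 if z > 0 else -1

s       = [1, -1, 1, 1, -1, 1, -1, -1]
x       = [0.3, -1.2, 0.7, -0.1, 0.9, -0.4, 0.2, 1.1]  # host realization
rx_diag = [1.0] * 8                                    # host variances
A       = 0.8
r_pos = [xi + si * A for xi, si in zip(x, s)]          # b = +1
r_neg = [xi - si * A for xi, si in zip(x, s)]          # b = -1
```

The design point this illustrates is that the LMMSE weights depend only on second-order statistics, not on the host PDF shape parameters.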

To obtain a closed-form expression of the error probability for the LMMSE decoder when the SS scheme is exploited for information hiding in the DCT domain, assuming that the test statistic z in (59) follows a Gaussian distribution, we can show that the probability of error takes the form of (19) with
\mathrm{WIR} = \frac{E\left\{\left(A\, \mathbf{s}^T \mathbf{R}_r^{-1} \mathbf{s}\right)^2\right\}}{E\left\{\left(\mathbf{x}^T \mathbf{R}_r^{-1} \mathbf{s}\right)^2\right\}} = \frac{A^2 \left(\mathbf{s}^T \mathbf{R}_r^{-1} \mathbf{s}\right)^2}{\mathbf{s}^T \mathbf{R}_r^{-1} \mathbf{R}_x \mathbf{R}_r^{-1} \mathbf{s}}.
(62)
In a similar way, using (3), the theoretical error probability of the LMMSE decoder for ISS embedding in the DCT domain is obtained with
\mathrm{WIR} = \frac{E\left\{\left(\mathbf{s}^T \mathbf{R}_r^{-1} \mathbf{s}\, A b\right)^2\right\}}{E\left\{\left(\mathbf{s}^T \mathbf{R}_r^{-1} (\mathbf{I}_l - k \mathbf{s}\mathbf{s}^T) \mathbf{x}\right)^2\right\}} = \frac{A^2 \left(\mathbf{s}^T \mathbf{R}_r^{-1} \mathbf{s}\right)^2}{\mathbf{s}^T \mathbf{R}_r^{-1} (\mathbf{I}_l - k \mathbf{s}\mathbf{s}^T) \mathbf{R}_x (\mathbf{I}_l - k \mathbf{s}\mathbf{s}^T) \mathbf{R}_r^{-1} \mathbf{s}},
(63)
where
\mathbf{R}_r = A^2 \mathbf{s}\mathbf{s}^T + (\mathbf{I}_l - k \mathbf{s}\mathbf{s}^T) \mathbf{R}_x (\mathbf{I}_l - k \mathbf{s}\mathbf{s}^T).
(64)
In order to achieve a simpler expression of the WIR, by employing the matrix inversion lemma and after some manipulation, we have
\mathrm{WIR} = \frac{\left(D - k^2\, \mathbf{s}^T \mathbf{R}_x \mathbf{s}\right)\left(\mathbf{s}^T \mathbf{R}_x^{-1} \mathbf{s}\right)}{\left(\mathbf{s}^T \mathbf{s}\right)\left(1 - k l\right)^2}.
(65)
Taking the derivative of the WIR in (65) with respect to the parameter k gives the optimal value of k as
k = \frac{\left(\mathbf{s}^T \mathbf{R}_x \mathbf{s} + D l^2\right) - \sqrt{\left(\mathbf{s}^T \mathbf{R}_x \mathbf{s} + D l^2\right)^2 - 4 D l^2\, \mathbf{s}^T \mathbf{R}_x \mathbf{s}}}{2 l\, \mathbf{s}^T \mathbf{R}_x \mathbf{s}}.
(66)
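The closed-form value (66) can be checked numerically against a direct grid evaluation of the k-dependent factor (D - k² s^T R_x s)/(1 - kl)² of the WIR in (65); the values chosen below for s^T R_x s, D, and l are illustrative.

```python
import math

def optimal_k(sRs, D, l):
    """Closed-form k from (66); sRs denotes s^T R_x s."""
    a = sRs + D * l * l
    return (a - math.sqrt(a * a - 4.0 * D * l * l * sRs)) / (2.0 * l * sRs)

def wir_factor(k, sRs, D, l):
    """The k-dependent factor of the WIR in (65); the remaining factors
    (s^T R_x^{-1} s and s^T s) do not depend on k."""
    return (D - k * k * sRs) / (1.0 - k * l) ** 2

sRs, D, l = 80.0, 0.5, 8      # illustrative: l = 8 chips, host variance 10 each
k_star = optimal_k(sRs, D, l)
```

With these numbers k_star evaluates to D·l/sRs, the stationary point obtained by differentiating (65), and a coarse grid over k confirms it maximizes the WIR factor.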
Since all the required information is available during the encoding of the image, k can be calculated as above. Similarly, using the embedding scheme (11), the theoretical error probability of the SS scheme in the DFT magnitude domain is obtained in the form of (19) with
\mathrm{WIR} = \frac{A^2 \left(\mathbf{s}^T \mathbf{R}_r^{-1} \mathbf{s}\right)^2}{\mathbf{s}^T \mathbf{R}_r^{-1} \mathbf{R}_x \mathbf{R}_r^{-1} \mathbf{s} + \mathbf{s}^T \mathbf{R}_r^{-1} \mathbf{R}_e \mathbf{R}_r^{-1} \mathbf{s}},
(67)
where R_e = E{ee^T}; and the theoretical error probability of the ISS scheme, using (22), takes the form of (19) with
\mathrm{WIR} = \frac{A^2 \left(\mathbf{s}^T \mathbf{R}_r^{-1} \mathbf{s}\right)^2}{\mathbf{s}^T \mathbf{R}_r^{-1} \mathbf{M} \mathbf{R}_x \mathbf{M} \mathbf{R}_r^{-1} \mathbf{s} + \mathbf{s}^T \mathbf{R}_r^{-1} \mathbf{R}_q \mathbf{R}_r^{-1} \mathbf{s}},
(68)

and R_q = E{(e + q)(e + q)^T}.

6. Experimental results

In this section, simulations on real images are conducted to illustrate the performance of the proposed watermark decoders in decoding the hidden message. A set of 512 × 512 testing images commonly used in the literature, e.g., [13], is employed for information embedding, including "Boat", "Peppers", "Baboon", "Lena", and "Barbara", to represent a wide range of image content.

For information embedding in the DCT domain, the DCT coefficients of each 8 × 8 block of the image are calculated, and all coefficients except the dc one are used as the host signal to convey the hidden information; therefore, 63 coefficients are used to convey one bit of information. For information hiding in the DFT domain, since the coefficients should remain conjugate symmetric, 31 coefficients are employed. For DCT-domain data hiding, determining an appropriate value of the shape parameter is important, though the details are beyond the scope of this article. One approach is ML estimation [39, 40]. In practice, to reduce the computational complexity, an alternative is to use a constant value regardless of the specific image under analysis. One such constant value was suggested in [35] as c = 0.8, and we use this value in our simulations to avoid additional estimation. Our results are based on 100 simulation runs using different signature codes, and since each 8 × 8 block is used for hiding one bit, the total number of embedded bits is 512²/8² = 4096 in each test image.
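As a sketch of the host-signal extraction just described: an orthonormal 2-D DCT-II of a single 8 × 8 block, keeping the 63 AC coefficients (all but the DC term) as the host vector carrying one hidden bit. The pixel block is synthetic, and a real implementation would run a fast DCT over all blocks of the image rather than this naive O(N⁴) version.

```python
import math

def dct2_8x8(block):
    """Orthonormal 2-D DCT-II of an 8 x 8 block (naive direct computation)."""
    N = 8
    def alpha(u):
        return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            acc = 0.0
            for m in range(N):
                for n in range(N):
                    acc += (block[m][n]
                            * math.cos(math.pi * (2 * m + 1) * u / (2.0 * N))
                            * math.cos(math.pi * (2 * n + 1) * v / (2.0 * N)))
            out[u][v] = alpha(u) * alpha(v) * acc
    return out

block = [[(7 * i + 3 * j) % 16 for j in range(8)] for i in range(8)]  # synthetic pixels
coeffs = dct2_8x8(block)
# host vector for one bit: the 63 AC coefficients, i.e., everything but DC
host = [coeffs[u][v] for u in range(8) for v in range(8) if (u, v) != (0, 0)]
```

With the orthonormal normalization used here, a constant block of ones yields a DC coefficient of 8 and zero AC energy, which is a convenient sanity check.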

We first verify the theoretical error probabilities when employing the ML decoders proposed in Section 4 for both traditional SS and ISS embedding. For data hiding in the DCT domain, both the simulation results and the theoretical results are shown in Figure 1, where the bit error rate (BER) is plotted as a function of the data-to-watermark ratio (DWR), defined as DWR = 10 log₁₀(σ_x²/D). It should be mentioned that the BER averaged over the five test images is reported as the final result. From Figure 1, it is clear that the theoretical and simulated BER results match closely, which verifies the theoretical analysis of the error probability. Also, comparing the performances of SS and ISS, as expected, we note that ISS clearly outperforms the traditional SS.
Figure 1

The average bit error rates versus DWR for the ML decoders, where 4,096 bits of information are embedded in the DCT domain of each of the five testing images employing the SS and ISS embedding schemes. The theoretical performances are also reported for comparison.

Similar to the DCT-domain results, we also investigate the performances of the ML decoders when the DFT magnitude domain is exploited for information hiding. The average BER performance over the five testing images and the theoretical performance derived in Section 4 are shown in Figure 2. The consistency between the simulated and theoretical results confirms the correctness of the error analysis provided in Section 4. From Figures 1 and 2, we note that, at low DWR, the theoretical results for DCT-domain data hiding are more accurate than those for the DFT magnitude domain. The reason is most likely the assumed Gaussian distribution of the test statistic, an assumption which becomes more accurate as more random variables are summed. Since the total number of coefficients available for information hiding in the DFT magnitude domain is only half that of the DCT domain and, as explained in Section 4, more DFT coefficients fail to convey information as the DWR decreases, the Gaussian assumption imposed on the test statistics (36) and (41) for DCT-domain embedding is more accurate than on the test statistics (16) and (27) for DFT-magnitude-domain embedding. From Figures 1 and 2, we also note that data hiding in the DFT magnitude domain yields better decoding performances than in the DCT domain. This observation can be explained by the special structure of the test statistic in (14), which is error free when r_i + s_i A < 0 or r_i - s_i A < 0 and thus leads to better decoding performances.
Figure 2

The average bit error rate versus DWR for the ML decoders, where 4,096 bits of information are embedded in the DFT magnitude domain of each of the five testing images employing the modified SS and ISS embedding schemes. The theoretical performances are also reported for comparison.

To gain more insight into the optimal and sub-optimal decoders derived in Sections 4 and 5, the decoding performances of the ML, LOD, and LMMSE decoders are compared in Figure 3, where the DCT-domain SS and ISS embedding schemes are studied, respectively. These figures show that the ML decoder outperforms the sub-optimal ones. The ML decoder uses the watermark amplitude information to make its decision, while the LOD and LMMSE decoders do not require this information and provide performance close to, though slightly worse than, that of ML. We also observe that the LMMSE decoder yields slightly better decoding performance than the LOD, which can be intuitively justified by the structure of the LOD. In deriving the LOD, the watermark amplitude is assumed to be small, and the LOD stays close to the ML performance only as long as this assumption holds. As seen in Figure 3, the performance gap between ML and LOD grows as the DWR decreases. This is because, with decreasing DWR, the watermark amplitude gets larger, and the LOD, derived by truncating the second- and higher-order terms of the Taylor series, becomes a coarser approximation of the ML decoder. The LMMSE imposes no constraint on the watermark amplitude, which may be one reason it is slightly better than the LOD. In addition, the LMMSE decoder does not need to estimate parameters of the host signal, and this simplicity makes it attractive for decoding. Therefore, we suggest that the LMMSE decoder is generally a good choice for extracting information hidden in the DCT domain, in the sense that it needs less prior information than the ML decoder yet yields close decoding performance.
Figure 3

The average bit error rates versus DWR for the ML, LOD, and LMMSE decoders, where 4,096 bits of information are embedded in the DCT domain of each testing image employing the SS and ISS embedding schemes.

To verify the derived theoretical decoding performance analysis in Section 5, the BER curves of the LOD and LMMSE decoders in the DCT domain are shown in Figures 4 and 5. We observe close matches between the theoretical BER performances and the performances calculated based on the simulations.
Figure 4

The average bit error rate versus DWR for the LMMSE decoders, where 4,096 bits of information are embedded in the DCT domain of each testing image employing the SS and ISS embedding schemes. The theoretical performances are also reported for comparison.

Figure 5

The average bit error rate versus DWR for the LOD decoders, where 4,096 bits of information are embedded in the DCT domain of each testing image employing the SS and ISS embedding schemes. The theoretical performances are also reported for comparison.

To compare the performances of the sub-optimal decoders for DFT-magnitude-domain embedding, Figure 6 is reported. From Figure 6, we note that, for SS embedding, the LOD does not provide BER performance comparable to the other decoders. As discussed in Section 5, although not all DFT coefficients convey hidden information, the LOD and LMMSE decoders use all the received coefficients for decoding and thus suffer degraded performance relative to ML. More specifically, at lower DWR, since fewer DFT coefficients can be used to convey the information bit, the decoding performance gap is larger, as observed in Figure 6. It is also observed from Figure 6 that the slope of the LOD's performance curve becomes smaller as the DWR decreases, and the same behavior is observed for the LMMSE decoder. However, this behavior is not observed for the GML decoder, even though GML yields worse performance than the LMMSE decoder. The fact that GML estimates the unknown parameters and uses them to extract the hidden information may explain why the slope of its performance curve does not flatten as the DWR decreases. Overall, from Figure 6, we note that the LMMSE decoder outperforms the other sub-optimal decoders.
Figure 6

The average bit error rate versus DWR for the ML, GML, LOD, and LMMSE decoders, where 4,096 bits of information are embedded in the magnitude of the DFT domain of each testing image employing the modified SS and ISS embedding schemes.

To examine the performances of the proposed decoders in the presence of additional distortions/attacks, we consider a scenario where additive Gaussian noise is added to the watermarked images, and the decoders' performances are shown in Figures 7 and 8 for the DCT-domain and DFT-magnitude-domain embedding, respectively. The DWR is fixed at 30 dB and the WNR varies from 0 to 10 dB, where WNR = 10 log₁₀(D/σ_n²) and σ_n² denotes the noise variance. From Figure 7, we note that the ISS scheme performs better than SS, and that all sub-optimal decoders yield decoding performances close to each other. An interesting observation in Figure 8 is that the LMMSE and GML decoders outperform ML. Even though, in the absence of any additional attack, ML in the DFT magnitude domain provides the best decoding performance, its performance degrades significantly in the presence of additional noise. This can be explained by the sensitivity of the ML decoder to the watermark amplitude. In the presence of additional noise, the received coefficients can be changed (e.g., the noisy coefficients could satisfy r_i + s_i A > 0 or r_i - s_i A > 0 even though they really do not convey any information) and thus are wrongly exploited for decoding, which degrades the performance of the ML decoder. On the other hand, the LMMSE and GML decoders do not depend on the watermark amplitude and can yield better performances than ML under additional noise.
Figure 7

The average bit error rate versus WNR for the ML, LOD, and LMMSE decoders, where 4,096 bits of information are embedded in the DCT domain of each testing image. The SS and ISS embedding schemes are employed, respectively, and DWR = 30 dB.

Figure 8

The average bit error rate versus WNR for the ML, GML, and LMMSE decoders, where 4,096 bits of information are embedded in the magnitude of the DFT domain of each testing image. The modified SS and ISS embedding schemes are employed, respectively, and DWR = 30 dB.

In order to illustrate that the proposed modified SS and ISS schemes in the DFT magnitude domain always lead to positive watermarked coefficients, the histogram plots of the watermarked coefficients for the proposed modified schemes are provided in Figures 9 and 10. It could be seen that, as we expected, all coefficients are positive, supporting the intuitive rationale behind using the modified embedding schemes.
Figure 9

Histogram of the watermarked coefficients for the modified SS embedding in the DFT magnitude domain when DWR = 30 dB.

Figure 10

Histogram of the watermarked coefficients for the modified ISS embedding in the DFT magnitude domain when DWR = 30 dB.

Further, to justify the benefit of the modified SS scheme in the DFT magnitude domain, we compare the decoding performance of the proposed SS-based scheme with three existing SS-based methods in the DFT magnitude domain in Figure 11. The first approach uses the conventional correlator in SS [41, 42]. The second approach is a decoder based on the Weibull distribution in SS [43], which does not take into consideration the signs of r_i + s_i A and r_i - s_i A. The third approach embeds the information into the DFT magnitude domain based on the MSS [13, 43]. It is clear that the proposed modified scheme yields superior decoding performances over the conventional ones. The modified SS scheme performs better than the MSS probably because some error-free cases exist in the proposed scheme.
Figure 11

The average bit error rate versus DWR curves for SS based schemes and decoders. Here 4,096 bits of information are embedded in the DFT magnitude domain.

Since some researchers have suggested that magnitudes of DFT coefficients may follow PDFs other than the Weibull, to check whether the Weibull distribution is a valid assumption, we estimate the PDF of the coefficients based on the Weibull distribution and report empirical results for 15 images. Figure 12 reveals a close match between the empirical and Weibull-based PDFs, supporting the Weibull assumption for the coefficients. To investigate whether the decoding performance is consistent over a larger image set, the decoding performances of the ML decoders using the modified SS and ISS schemes in the DFT magnitude domain are shown in Figure 13 based on 100 images. We can see that the decoding performances for 5 and 100 images are similar.
Figure 12

The empirical and Weibull-based PDFs of the magnitude of DFT coefficients of 15 images.

Figure 13

The average bit error rate versus DWR curves for the ML decoders, where 4,096 bits of information are embedded in the DFT magnitude domain of each of the 100 testing images employing the modified SS and ISS embedding schemes respectively.

In summary, some useful observations can be drawn from the experimental results. With the watermark amplitude information available at the receiver side, DFT-magnitude-domain data hiding can achieve better performance when the ML decoder is employed. With no access to the watermark amplitude information, information embedding in the DCT domain is preferred, and the LMMSE decoder is the decoder of choice. When considering additional noise, data hiding in the DCT domain with ISS is preferred over DFT-magnitude-domain ISS embedding; however, for SS embedding, the LMMSE decoder in the DFT magnitude domain provides performance fairly comparable to that of the LMMSE in the DCT domain.

7. Conclusion

In this article, the optimal and sub-optimal decoders for additive spread spectrum data hiding were investigated. Overall, we presented a rigorous decoding analysis framework of additive spread spectrum and ISS data hiding when the information bit is embedded into the DCT and the magnitude of the DFT domains, respectively. Generalized Gaussian distribution and Weibull distribution were used for deriving the ML decoders in the different domains. To improve the accuracy of the extracted hidden message, we employed ISS embedding and presented the optimal ML decoders. The theoretical error analyses of SS and ISS embedding in the DCT domain and in the magnitude of the DFT domain were derived. Simulation results showed that, when the watermark amplitude is available at the decoder side, data hiding in the magnitude of the DFT domain could yield better decoding performances than that of the DCT domain.

Though theoretically the ML decoder achieves the decoding performance upper bound, it requires additional prior information such as the watermark amplitude. To relax the requirements on such prior information, the LOD and LMMSE decoders were derived for practical data hiding applications in the DCT domain. The LOD decoder is independent of the watermark amplitude, though it still requires the host signal parameters. The LMMSE decoder provides a linear decoder in terms of the received signal which is independent of the watermark amplitude. The LOD and LMMSE decoders yield performances close to that of the ML decoder, with the LMMSE being slightly better than the LOD, especially at low DWR. For the proposed sub-optimal decoders, we also provided the theoretical analysis of the bit error rate decoding performances.

The sub-optimal LOD was also proposed in the DFT magnitude domain. However, the LOD does not provide performance close to that of the ML decoder, probably because it uses all the received coefficients for decoding. To address this issue, the GML decoder was proposed to provide an estimate of the watermark amplitude and the information bit. Although the GML can tackle the LOD deficiency at low DWR, its performance is much worse than that of ML in the absence of any additional attack or distortion. The LMMSE decoder in the DFT magnitude domain shows better performance than the LOD and GML decoders.

The simulation results suggest that, with no access to the watermark amplitude information at the decoder side, the sub-optimal decoders in the DCT domain are more reliable than their counterparts in the DFT magnitude domain. Among the proposed sub-optimal decoders, the LMMSE decoders are preferred overall. As expected, the ISS embedding scheme outperforms SS in both the DCT and the DFT magnitude domains, and is thus preferred. Simulations in the presence of additional noise showed that ISS embedding in the DCT domain is preferred, and that, for data hiding in the DFT magnitude domain, the GML and LMMSE decoders are preferred over the ML decoder in the presence of additional noise.

Declarations

Acknowledgements

The work was supported by a SPG grant from the Natural Sciences and Engineering Research Council of Canada (NSERC).

Authors’ Affiliations

(1)
Department of Electrical and Computer Engineering, University of British Columbia


Copyright

© Valizadeh and Wang; licensee Springer. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.