Open Access

Non-integer expansion embedding techniques for reversible image watermarking

EURASIP Journal on Advances in Signal Processing 2015, 2015:56

https://doi.org/10.1186/s13634-015-0232-z

Received: 11 October 2014

Accepted: 16 May 2015

Published: 8 July 2015

Abstract

This work aims at reducing the embedding distortion of prediction-error expansion (PE)-based reversible watermarking. In the classical PE embedding method proposed by Thodi and Rodriguez, the predicted value is rounded to an integer for integer prediction-error expansion (IPE) embedding. The rounding operation constrains a predictor's performance. In this paper, we propose a non-integer PE (NIPE) embedding approach, which can process non-integer prediction errors for embedding data into an audio or image file by expanding only the integer element of a prediction error while keeping its fractional element unchanged. The advantage of the NIPE technique is that it can bring a predictor into full play by estimating a sample/pixel in a noncausal way in a single pass, since there is no rounding operation. A new noncausal image prediction method that estimates a pixel from its four immediate neighbors in a single pass is included in the proposed scheme. The proposed noncausal image predictor provides better performance than Sachnev et al.'s noncausal double-set prediction method (where prediction in two passes introduces a distortion problem, because half of the pixels are predicted from watermarked pixels). In comparison with several existing state-of-the-art works, experimental results show that the NIPE technique with the new noncausal prediction strategy reduces the embedding distortion for the same embedding payload.

Keywords

Reversible watermarking · Non-integer prediction error · Expansion embedding · Noncausal prediction

1 Introduction

Reversible watermarking techniques embed data in a host signal (for example, an audio clip or image) and allow the original signal to be exactly recovered. This is very useful in some applications, especially in the medical, military, and legal domains. There are two important requirements for reversible watermarking techniques: 1) a large embedding capacity and 2) a low watermark distortion. The two requirements conflict with each other, since embedding more data into a work causes bigger distortion. A desirable technique should embed the same capacity with lower distortion, or vice versa. Over the past ten years, reversible watermarking has been an active research domain. In the literature, several types of reversible watermarking algorithms have been proposed:
  • Type I algorithms use modulo-arithmetic-based additive spread spectrum techniques [1, 2], some of which provide robustness but often cause salt-and-pepper artifacts due to the many wrapped-around pixel intensities. In this direction, a different approach proposed by Vleeschouwer et al. [3] reduces the artifacts by using circular interpolation of the bijective transform of the image histogram. Finally, good results are reported using integer-to-integer wavelet-based reversible watermarking in [4].

  • The most direct approach to reversible watermarking is to losslessly compress a set of selected features of an image and substitute the selected features with their compressed versions plus the watermark data [5–8]; these are referred to as type II algorithms in the literature. To ensure watermark imperceptibility, the selected features usually lie in the least-significant-bit area, as in the generalized LSB (g-LSB) embedding algorithm [8], which is an extension of the work in [5]. A theoretical analysis of reversible watermarking has been presented by Kalker and Willems in [7]. Compared with type I algorithms, type II ones provide a higher payload.

  • The third type of reversible watermarking algorithms can be classified as difference expansion (DE) embedding methods, whose common feature is to use difference operators to create features with a small magnitude, and then expand these features to create vacancies for embedding the watermark and the auxiliary information. The DE embedding technique was originally developed by Tian [9] and improved in [10–13]. The capacity of Tian's method is close to 0.5 bpp in a single pass, using two adjacent pixels as a group. By extending DE to a generalized integer transform, the auxiliary information can be reduced with groups of three or four pixels [10]. In [11], a sorting step in the wavelet domain was introduced to expand the smaller pixel differences. The DE scheme was introduced for 2-D vector maps in [12], and the location map was reduced in [13].

  • A fruitful research direction, proposed by Thodi and Rodriguez [14], is the prediction-error expansion (PE) embedding technique. Compared with the DE-based methods, one advantage of the PE technique is that it significantly increases the number of feature elements that can be expanded for data embedding. The other advantage is that a predictor generates feature elements that are often smaller in magnitude than those generated by a difference operator. By embedding into each pixel, PE embedding techniques provide a maximal capacity of up to 1 bpp in a single pass. So far, several versions of PE reversible watermarking algorithms have been proposed, in [14–19] and elsewhere, to improve the performance of PE schemes by focusing on: the reduction of the size of the auxiliary information (with the use of sorting and histogram shifting techniques [15]); the reduction of the prediction error (by using multiple predictors [16], prediction by flooring the average value of the four immediate pixels [15], and adaptive prediction [17]); and the reduction of the embedding distortion (by pixel selection [17] and the low-distortion transform obtained by splitting the difference between the current pixel and its prediction context [18, 19]). These existing PE-based schemes share a common property: the predicted value, or a variant of it, is rounded to an integer value for expansion embedding.

Reversible watermarking algorithms based on expansion embedding have also been proposed for digital audio files [20–22]. Other novel reversible watermarking approaches are worth mentioning, too. Embedding data into histogram bins was proposed by Leest et al. in [23]. An additive embedding strategy combining the histogram shifting technique [24] with bilinear interpolation prediction was proposed by Chen et al. in [25, 26].

This paper proposes a non-integer PE (NIPE) embedding strategy, which can process non-integer prediction errors for data embedding by expanding only the integer element of a prediction error while keeping the fractional element unchanged. Furthermore, we propose a new noncausal image prediction method for the NIPE technique. In comparison with the classical PE technique [14], the NIPE strategy achieves lower embedding distortion for non-integer prediction values by using a better predictor. The new predictor provides better performance than existing causal predictors (such as the difference predictor, the MED predictor, and GAP), because there is no rounding operation in the prediction phase. Compared with Sachnev et al.'s noncausal double-set prediction method [15], the proposed prediction method further reduces the prediction errors by predicting data in a single pass (which avoids the distortion problem caused by half of the pixels in the second set being predicted from the watermarked pixels of the first set). Experimental results for some standard test images show that the proposed NIPE method with the noncausal image predictor can embed more payload for the same watermark distortion.

The outline of this paper is as follows. In the next section, the proposed NIPE embedding technique is introduced. This is followed by a description of a new prediction strategy and an image predictor. We then address the proposed reversible watermarking scheme and compare the proposed prediction method with existing typical predictors. Furthermore, the watermarking scheme’s performance is tested against other typical reversible image watermarking works. Finally, we draw the conclusions.

2 Prediction-error expansion embedding

PE embedding is a technique that expands a prediction error to create a vacant position, generally at the least significant bit (LSB), and inserts a bit into that position. The PE-based scheme was originally developed by Thodi and Rodriguez [14] and later extended by other researchers (such as [16–19]). In these methods, causal prediction with only past pixels is often applied, so that the predicted value, or a variant of it, can be rounded to an integer value for integer prediction-error expansion (IPE) embedding.

In this section, we propose a NIPE embedding technique, which really brings a predictor into full play to reduce the prediction errors in comparison with the IPE embedding technique. In the NIPE-based method, the predicted value is no longer rounded to an integer. This is beneficial for using a more efficient predictor in the NIPE 1. The basic principles of the IPE method [14] and the proposed NIPE method are described as follows.

2.1 Classical IPE embedding

In the IPE embedding technique [14], the prediction error is the difference between a pixel intensity y and its estimate \(\hat {y}\) (which is rounded to an integer value if it is not one already), denoted by \(e=y-\hat {y}\). After embedding a bit w, the watermarked prediction error is
$$ e_{w}=2\times e + w, $$
(1)
and the marked pixel intensity is
$$ y_{w}=\hat{y}+e_{w}=\hat{y}+2\times e + w. $$
(2)

Since \(y_{w}\) is an integer, \(\hat {y}\) and e are required to be integer values for expansion embedding. This is why we refer to Thodi and Rodriguez's method as the IPE embedding scheme in this paper. It is worth noting that the requirement that \(\hat {y}\) be an integer imposes an undesirable restriction: the prediction context may contain only causal pixels, so that the decoder can obtain the same predicted value \(\hat {y}\).

The hidden bit, w, is extracted from the LSB of \(e_{w}\), and the original pixel intensity y is recovered by
$$ w = \text{mod}(e_{w},2),\text{ } y=\hat{y}+ \lfloor \frac{e_{w}}{2} \rfloor, $$
(3)

where \(\text{mod}(e_{w},2)\) is the remainder on division of \(e_{w}\) by 2 and \(\lfloor \frac {e_{w}}{2}\rfloor \) represents the greatest integer less than or equal to \(\frac {e_{w}}{2}\).
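As a concrete sketch, the IPE embed/extract cycle of Eqs. (1)–(3) can be written in a few lines of Python (the function names are ours; Python's floor division `//` realizes the ⌊·⌋ of Eq. (3), and `%` the mod):

```python
def ipe_embed(y: int, y_hat: int, w: int) -> int:
    """Embed bit w into pixel y, given an integer prediction y_hat (Eqs. 1-2)."""
    e = y - y_hat            # integer prediction error
    e_w = 2 * e + w          # expand the error and insert w in the LSB
    return y_hat + e_w       # marked pixel y_w

def ipe_extract(y_w: int, y_hat: int):
    """Recover the bit w and the original pixel y from y_w (Eq. 3)."""
    e_w = y_w - y_hat
    w = e_w % 2              # the LSB carries the hidden bit
    y = y_hat + e_w // 2     # floor division undoes the expansion
    return w, y
```

Note that Python's `%` and `//` use floor semantics, so negative prediction errors are handled exactly as in Eq. (3).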

2.2 NIPE embedding technique

Section 2.1 shows that the IPE embedding approach [14] suffers from an undesirable requirement: the prediction value must be rounded to an integer for expansion embedding. This restrains a predictor's performance, since the prediction context is restricted to causal (past) pixels in order to generate the same predicted value in the decoder. In this section, we propose a new expansion embedding approach. One of its advantages is the ability to deal with non-integer prediction values for data embedding. The other, more important, advantage is that the current pixel can be estimated in a single pass in a noncausal predictive way to improve prediction performance 2.

From the expression \(y_{w}=\hat {y}+2\times e + w\) in Eq. (2), we observe that recovering the marked pixel \(y_{w}\) does not require \(\hat {y}\) itself to be an integer, but only the sum \(\hat {y}+2\times e\). In this direction, the basic idea of the proposed approach is to allow \(\hat {y}\) to take a non-integer value while making sure that the combination of \(\hat {y}\) and e takes an integer value for hiding a given bit w.

For a pixel of intensity y, the prediction error e is a non-integer value when its estimate \(\hat {y}\) takes a non-integer value. In this case, split the non-integer error e into two parts: the integer part \(\ell =\text{fix}(e)\) and the fractional part \(\delta =e-\ell \). Here, fix(.) is a function that strips off the fractional part of its argument and returns the integer part. The function does not perform any form of rounding or scaling, e.g., fix(−4.4)=−4 and fix(5.4)=5. The basic idea of the NIPE embedding technique is to expand only the integer part of a prediction error for data embedding while keeping the fractional part unchanged. The details are described as follows.

In the encoder, the watermarked prediction error is computed by
$$ e_{w}=\left\{ \begin{array}{l} 2\times \ell +\delta +w = e+ \ell + w, \text{if } e \geq 0\\ 2\times \ell +\delta -w = e+ \ell - w, \text{otherwise }, \end{array}\right. $$
(4)

where w is a binary bit, taking either 0 or 1. Equation (4) makes sure that the fractional element of \(e_{w}\) equals that of e. This is beneficial for recovering the watermark bit and the original pixel intensity in the decoder.

The resulting watermarked pixel is
$$ {}y_{w}=\hat{y}+e_{w}=\left\{ \begin{array}{l} \hat{y}+ e + \ell + w=y+ \ell + w, \text{if } e \geq 0\\ \hat{y}+ e + \ell - w=y+ \ell - w, \text{otherwise}. \end{array}\right. $$
(5)

Equation (5) shows that even though \(\hat {y}\) and e take non-integer values, the watermarked pixel \(y_{w}\) is an integer.

In the decoder, the hidden bit w is extracted from e w and the original pixel y is restored by
$$ w=\text{mod}(\ell_{w},2), \quad y=\hat{y}+ \text{fix}\left(\frac{\ell_{w}}{2}\right) + \delta_{w}, $$
(6)

where \(\ell _{w}\) is the integer element of \(e_{w}\) and \(\delta _{w}=e_{w}-\ell _{w}\).
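A minimal Python sketch of the NIPE cycle of Eqs. (4)–(6) follows. The function names are ours; `math.trunc` plays the role of fix(·), and the final `round` only snaps away floating-point residue, which is valid because \(y_{w}\) (respectively y) is an integer by construction:

```python
import math

def fix(x: float) -> int:
    """Truncate toward zero: fix(-4.4) = -4, fix(5.4) = 5."""
    return math.trunc(x)

def nipe_embed(y: int, y_hat: float, w: int) -> int:
    """Embed bit w by expanding only the integer part of e = y - y_hat (Eqs. 4-5)."""
    e = y - y_hat
    l = fix(e)                                # integer element of the prediction error
    e_w = e + l + w if e >= 0 else e + l - w  # Eq. (4)
    return round(y_hat + e_w)                 # y_w is an integer; round() removes float noise

def nipe_decode(y_w: int, y_hat: float):
    """Recover the bit w and the original pixel y (Eq. 6)."""
    e_w = y_w - y_hat
    l_w = fix(e_w)                 # integer element of e_w
    delta_w = e_w - l_w            # fractional element, untouched by embedding
    w = l_w % 2
    y = round(y_hat + fix(l_w / 2) + delta_w)
    return w, y
```

The two worked cases below round-trip under this sketch: for \(\hat{y}=101.4\) the marked pixel is 98, for \(\hat{y}=97.4\) it is 103, and both decode back to y=100 with w=1.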

Take two simple examples to illustrate the proposed NIPE technique:
  1. Case 1: Let y=100, w=1, and e=100−101.4=−1.4 when \(\hat {y}=101.4\). In the encoder, \(e_{w}=e+\ell -w=-1.4-1-1=-3.4\). In the decoder, \(\ell _{w}=\text {fix}(e_{w})=-3\), \(\delta _{w}=e_{w}-\ell _{w}=-0.4\), \(w=\text {mod}(\ell _{w},2)=1\), and \(y=\hat {y}+ \text {fix}(\frac {\ell _{w}}{2}) + \delta _{w}= 101.4+ \text {fix}(\frac {-3}{2})-0.4=100\).

  2. Case 2: Let y=100, w=1, and e=100−97.4=2.6 when \(\hat {y}=97.4\). In the encoder, \(e_{w}=e+\ell +w=2.6+2+1=5.6\). In the decoder, \(\ell _{w}=\text {fix}(e_{w})=5\), \(\delta _{w}=e_{w}-\ell _{w}=0.6\), \(w=\text {mod}(\ell _{w},2)=1\), and \(y=\hat {y}+ \text {fix}(\frac {\ell _{w}}{2}) + \delta _{w}= 97.4+ \text {fix}(\frac {5}{2})+0.6=100\).

We can see from these two cases that the NIPE scheme can effectively deal with the non-integer prediction values for reversible watermarking.

Equations (4), (5), and (6) form the proposed NIPE embedding strategy, which can avoid the rounding operation in the IPE.

2.3 Distortion analysis of NIPE and IPE embedding

Let y be a pixel and \(\hat {y}\) an estimate of y computed on a neighborhood of y. Further, let w be the binary bit to be embedded, and let \(y_{w}=y+p_{w}\), where \(p_{w}\) is the watermark distortion on y. For an image with the same predictor, the embedding distortion from the proposed NIPE scheme is lower than that from the classical IPE scheme [14]. This can be explained as follows:
  1. Case 1: \(\hat {y}\) is an integer. From (2) in the IPE, \(p'_{w}= y-\hat {y} + w\). From (5) in the NIPE, \(p''_{w}= \text {fix}(y-\hat {y}) + w= y-\hat {y} + w\). In this case, \(p'_{w}=p''_{w}\), indicating that the NIPE has the same embedding distortion as the IPE. For example, if y=100, w=1, and \(\hat {y}=102\): with NIPE, \(p''_{w}=\text{fix}(100-102)+1=-1\); with IPE, \(p'_{w}=100-102+1=-1\).

  2. Case 2: \(\hat {y}\) is non-integer and \(y-\hat {y}>0\). In this case, \(p'_{w}= y-\lfloor \hat {y}\rfloor + w\) and \(p''_{w}=\text{fix}(y-\hat {y}) + w= y-\text{fix}(\hat {y}) -1 + w\). Since the estimate \(\hat {y}\) of a pixel is always positive, we have \(p''_{w}= y-\text{fix}(\hat {y})-1 + w=y-\lfloor \hat {y}\rfloor -1 + w\), that is, \(p'_{w}=p''_{w}+1\), meaning that the IPE scheme introduces more distortion in this case. For example, if y=100, w=1, and \(\hat {y}=98.4\): with NIPE, \(p''_{w}=\text{fix}(100-98.4)+1=2\); with IPE, \(p'_{w}=100-98+1=3\).

  3. Case 3: \(\hat {y}\) is non-integer and \(y-\hat {y} < 0\). In this case, \(p'_{w}= y-\lfloor \hat {y}\rfloor + w\). Referring to (5), \(p''_{w}= \text {fix}(y-\hat {y}) -w= y-\text{fix}(\hat {y})- w = y-\lfloor \hat {y}\rfloor -w\). From \(y-\lfloor \hat {y}\rfloor =p'_{w}- w\), we have \(p''_{w}=p'_{w}-2w\), indicating that the distortion from the NIPE is higher when w=1. When w=0, the NIPE and IPE have the same embedding distortion. For example, if y=100 and \(\hat {y}=101.4\): with NIPE, \(p''_{w}=\text{fix}(100-101.4)-w=-1-w\); with IPE, \(p'_{w}=100-101+w=-1+w\). If w is 1, then \(p''_{w}=-2\) and \(p'_{w}=0\); if w is 0, then \(p''_{w}=p'_{w}=-1\).

In statistics, the prediction error (\(e=y-\hat {y}\)) is positive or negative with equal probability, and the watermark bit (w) takes 1 or 0 with equal probability. As a result, the NIPE has the same expected embedding distortion as the classical IPE scheme (e.g., the same PSNR value).

3 Pixel prediction

Image prediction is an important step in lossless compression coding applications [27–30]. For an image, each pixel is estimated from its neighborhood to generate a prediction error. Usually, the mean value and variance of the prediction errors are smaller than those of the original pixels, which is beneficial for coding efficiency. There are two main prediction approaches: 1) causal prediction [27, 30] and 2) noncausal prediction [28, 29]. The main difference between them is that the prediction context of the former is restricted to past pixels, while the latter uses both past and future pixels. With causal prediction, the predicted value can be rounded to an integer for enhancing coding efficiency, since the prediction context of a pixel is already known, as in the median edge direction (MED) predictor of the JPEG-LS standard [30]. Noncausal prediction is beneficial for reducing the prediction errors in image vector quantization coding [28, 29] and speech compression [31].

3.1 Image predictors for reversible watermarking

Image prediction is also an important step in expansion embedding-based reversible watermarking [9, 10, 14, 15, 18, 19]. Usually, a better predictor reduces the prediction error, improving the payload capacity for the same embedding distortion. Figure 1 plots four typical prediction operators, each estimating the current pixel y from different neighboring pixels: (a) the difference predictor used in [9, 10], (b) the MED predictor used in [14, 17–19], (c) the gradient-adjusted predictor (GAP) used in [18, 19], and (d) prediction with four immediate pixels used in [15, 26]. As shown in Fig. 1, data prediction in the difference predictor, MED predictor, and GAP is restricted to causal pixels while scanning an image in raster order. The predictor plotted in Fig. 1d is a noncausal predictive method using the four immediate pixels of the current pixel as the context, including two past pixels and two future pixels.
Fig. 1

Prediction context of a pixel. a Difference prediction with one causal pixel. b MED predictor with three causal pixels. c GAP predictor with seven causal pixels. d Noncausal prediction using bilinear interpolation with two past pixels and two future pixels

In the DE-based reversible watermarking [9], the difference of two adjacent pixels is expanded to create space for embedding the data and the auxiliary information. The auxiliary information can be reduced by extending DE to groups of three or four neighboring pixels [10]. In the PE-based reversible watermarking [14, 17–19], the difference is the prediction error between a pixel and its estimate. Since the predictor can use several causal pixels (such as MED, GAP, and others designed for lossless data compression [30, 32]) as the context, the prediction error is usually smaller in magnitude than the difference of two adjacent pixels. In [14, 17, 18], the MED predictor was used to measure the payload capacity and the embedding distortion. Also in [18], the GAP and simplified GAP (SGAP) were applied to show how to reduce the embedding distortion by marking the current pixel and its context. Whether with the MED predictor, GAP, or SGAP, the data prediction is restricted to causal pixels for IPE embedding. For example, the MED predictor combines three past pixels \((x_{t},x_{\textit{tr}},x_{r})\) as the context of the current pixel y, as illustrated in Fig. 1 b. The output of the MED predictor is
$$ \hat{y}=\left\{ \begin{array}{l} \text{max}(x_{t},x_{r}),\; \text{if } x_{tr}\leq \text{min}(x_{t},x_{r})\\ \text{min}(x_{t},x_{r}), \;\,\text{if } x_{tr}\geq \text{max}(x_{t},x_{r})\\ x_{t}+x_{r}-x_{tr}, \text{otherwise}. \end{array}\right. $$
(7)

The same predictor was also considered as the median of three simple linear predictors, \(x_{t}\), \(x_{r}\), and \(x_{t}+x_{r}-x_{\textit{tr}}\) [33]; the mean of these three predictors is \(\hat {y}=\frac {2x_{t}+2x_{r}-x_{\textit {tr}}}{3}\).
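For reference, Eq. (7) translates directly into code. A minimal sketch (function name ours), with \(x_{t}\) and \(x_{r}\) the two adjacent causal neighbors and \(x_{tr}\) their shared diagonal neighbor:

```python
def med_predict(x_t: int, x_r: int, x_tr: int) -> int:
    """MED predictor of Eq. (7): picks the max/min of the two adjacent
    neighbors at a detected edge, otherwise falls back to the planar
    predictor x_t + x_r - x_tr."""
    if x_tr <= min(x_t, x_r):
        return max(x_t, x_r)
    if x_tr >= max(x_t, x_r):
        return min(x_t, x_r)
    return x_t + x_r - x_tr
```

For example, with neighbors 10 and 20, a diagonal value of 5 (below both) predicts 20, a diagonal value of 25 (above both) predicts 10, and an intermediate value of 15 predicts 10+20−15=15.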

Compared with causal prediction, noncausal prediction approaches can improve prediction precision by using more neighboring samples. The prediction method described in [15] is quite different: it combines the IPE scheme with noncausal bilinear-interpolation prediction by dividing the image into two sets (like a chessboard: a black set and a white set). In the first pass, each pixel in the black set is predicted from its four immediate pixels in the white set to generate an integer difference for IPE-based expansion embedding. Then, the white set is predicted using the marked version of the black set for the second embedding pass. A similar idea was used for additive PE embedding [26], which divides an image into four sets and embeds the data into the sets one by one using multi-pass embedding. The problem with the noncausal predictors of [15, 26] is that part of the pixels are predicted from watermarked pixels instead of the original ones. This introduces unnecessary distortion, since the distribution of prediction errors estimated from marked pixels has a bigger variance than that estimated from the original pixels.

In the following section, we present a noncausal predictive model for the NIPE embedding scheme. Compared with the noncausal prediction methods in [15, 26], the predictor proposed in this paper predicts all the pixels before watermarking. This avoids the distortion caused by predicting part of the pixels from modified pixels.

3.2 Proposed noncausal prediction model

In this section, a new method whose data prediction is not restricted to causal pixels is designed for the proposed NIPE technique. This noncausal predictive method predicts data in a single pass.

Assume there is a time-discrete signal Y of length N, \(Y=\{y_{1},y_{2},\ldots,y_{N}\}\) with \(y_{i}\in \{0,1,\ldots,2^{m}-1\}\), where m indicates the number of bits used to represent a point (a sample or a pixel). The predicted signal is denoted by \(\hat {y}\), and the residual signal is \(e=y-\hat {y}\). Here, the predicted value is a linear combination of past and future pixels/samples:
$$ \hat{y_{i}}= \sum^{p}_{t=1}a_{i-t}y_{i-t} + \sum^{p}_{t=1}a_{i+t}y_{i+t}, p<i<N-p+1, $$
(8)
where \(\sum ^{p}_{t=1}a_{i-t}y_{i-t}\) is the linear combination of p past pixels/samples, \(\sum ^{p}_{t=1}a_{i+t}y_{i+t}\) that of p future pixels/samples. The prediction error is computed as
$$ d_{i}= y_{i}-\hat{y_{i}}= y_{i}- \sum^{p}_{t=1}a_{i-t}y_{i-t} - \sum^{p}_{t=1}a_{i+t}y_{i+t}. $$
(9)
Since there is no rounding operation on the predicted value \(\hat {y_{i}}\), Eq. (9) can be rewritten as
$$ y_{i+p}=\frac{y_{i}- d_{i}- \sum^{p}_{t=1}a_{i-t}y_{i-t} - \sum^{p-1}_{t=1}a_{i+t}y_{i+t}}{a_{i+p}}. $$
(10)

Equation (10) shows that the first 2p pixels/samples \(\{y_{1},y_{2},\ldots,y_{2p}\}\) need to be saved as part of the residual signal for the recovery of the original signal. From the data series \(\{y_{1},y_{2},\ldots,y_{2p}\}\) and \(d_{p+1}\), the pixel/sample \(y_{2p+1}\) can be recovered. Then, \(y_{2p+2},y_{2p+3},\ldots,y_{N}\) can be restored in sequential order.

The resulting signal above can be denoted by \(D=\{y_{1},y_{2},\ldots,y_{2p},d_{p+1},d_{p+2},\ldots,d_{N-p}\}\). The data \(\{y_{1},y_{2},\ldots,y_{2p}\}\) can be further predicted to generate the differences \(\{e_{1},e_{2},\ldots,e_{2p}\}\) with the predictor described in Section 3.3. Denote \(e_{i+p}=d_{i}, p<i<N-p+1\). As a result, the original signal Y is predicted as \(E=\{e_{1},e_{2},\ldots,e_{2p},e_{2p+1},e_{2p+2},\ldots,e_{N}\}\).

The above model shows that the noncausal prediction model can be performed in a single pass since there is no rounding operation on the predicted value \(\hat {y_{i}}\) in Eq. (8). For a two-dimensional image, it can be mapped into the one-dimensional form by using a scanning operation (e.g., in a raster scan or zigzag scan order).

3.3 Predictor with p = 1

The section above has addressed the basic principle of the proposed prediction model. When p=1, the prediction model can be simplified as follows. Beginning from the second pixel, the pixel \(y_{i}\) is predicted by averaging its two closest neighbors \((y_{i-1},y_{i+1})\):
$$ \hat{y_{i}}= \frac{y_{i-1}+y_{i+1}}{2}, 1<i<N. $$
(11)
The difference is computed as
$$ d_{i}= y_{i}-\hat{y_{i}}= y_{i}-\frac{y_{i-1}+y_{i+1}}{2}, 1<i<N. $$
(12)
From Eq. (12), the original pixel y i+1 is recovered by
$$ y_{i+1}=2 y_{i} - y_{i-1} - 2d_{i}, 1<i<N, $$
(13)

Equation (13) indicates that once the first two pixels \(y_{1}\) and \(y_{2}\) are saved, the third pixel \(y_{3}\) can be recovered by referring to the prediction error \(d_{2}\) in Eq. (13); then \(y_{4}\), \(y_{5}\), and the other pixels are recovered in sequential order. Let \(e_{1}=y_{1}\), \(e_{2}=y_{2}-y_{1}\), and \(e_{i+1}=d_{i}\). Overall, the output of the predictor is denoted as \(E=\{e_{1},e_{2},e_{3},\ldots,e_{N}\}\). The predictor in the case of p=1 has been proven effective for digital audio in our earlier work [34].
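The p=1 predictor of Eqs. (11)–(13) and its exact inverse can be sketched as follows (function names ours; because the residuals are halves, the floating-point arithmetic is exact and the round trip is lossless):

```python
def predict_p1(y):
    """Forward prediction with p = 1: keep y[0], store the difference
    y[1] - y[0], then the noncausal residuals
    d_i = y_i - (y_{i-1} + y_{i+1}) / 2 of Eq. (12)."""
    e = [y[0], y[1] - y[0]]
    for i in range(1, len(y) - 1):
        e.append(y[i] - (y[i - 1] + y[i + 1]) / 2)
    return e

def recover_p1(e):
    """Invert predict_p1 sequentially via
    y_{i+1} = 2*y_i - y_{i-1} - 2*d_i (Eq. 13)."""
    y = [e[0], e[0] + e[1]]
    for i in range(2, len(e)):
        y.append(2 * y[-1] - y[-2] - 2 * e[i])
    return y
```

For example, `predict_p1([5, 7, 6, 9])` yields `[5, 2, 1.5, -2]`, and `recover_p1` restores the original sequence.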

3.4 Proposed noncausal prediction for image

In this section, we design a new image prediction method by referring to the proposed prediction model. Although this prediction method looks like the one described in [15], it is different and has higher prediction accuracy. In [15], half of the pixels are predicted from modified pixels instead of the original pixels. In the proposed method, all pixels are predicted from the original pixels. This explains why the proposed NIPE watermarking scheme has better performance. The details of the proposed image predictor are as follows:

1) For a given 2-D image I in size R×C, use the following projection into the 1-D form:
$$ \left\{ \begin{array}{l} y_{(i-1)\cdot C +j}=I(i,j) \\ N = R\times C, \end{array}\right. $$
(14)

where I(i,j) is the pixel intensity in the ith row and the jth column, satisfying 1≤iR and 1≤jC. Denote the resulting 1-D signal by Y={y 1,y 2,,y N }.

2) After the projection in step 1, predict Y by referring to the proposed noncausal prediction model. Let p=C, that is to say, the estimate of the current pixel y k is a linear combination of C past and C future neighboring pixels, formulated as
$$ \hat{y_{k}}= \sum^{C}_{t=1}a_{k-t}y_{k-t} + \sum^{C}_{t=1}a_{k+t}y_{k+t}, C+1 \leq k \leq N-C. $$
(15)
Consider the strong correlation between the current pixel \(y_{k}\) and its four immediate pixels: top \((y_{k-C})\), left \((y_{k-1})\), right \((y_{k+1})\), and bottom \((y_{k+C})\). The current pixel \(y_{k}\) is predicted by
$$ \hat{y}_{k}=\left\{ \begin{array}{c} \frac{y_{k-C}+y_{k+C}+y_{k+1}}{3},\; if\; \text{mod}\,(k,C)=1 \\ \frac{y_{k-C}+y_{k-1}+y_{k+C}}{3},\; if\; \text{mod}\,(k,C)=0 \\ {} \frac{y_{k-C}+y_{k-1}+y_{k+C}+y_{k+1}}{4}, \text{others}, \end{array}\right. $$
(16)
where the condition mod(k,C)=1 means that a pixel in the first column is predicted with the average of its three neighbors to the top, right, and bottom, as shown in Fig. 2 a. For a pixel in the last column (satisfying mod(k,C)=0, see Fig. 2 c), the prediction context comprises the three immediate neighbors to the top, left, and bottom. The other pixels are estimated by averaging the four immediate pixels of the current pixel, as illustrated in Fig. 2 b. Step 2 predicts the pixels from the second row to the second-last row. As analyzed in Section 3.2, the first 2×C pixels (in the first and second rows) are saved for the inverse prediction. Let \(e_{k} =y_{k-C}-\hat {y}_{k-C}, 2C+1\leq k \leq N\). The resulting signal from step 2 is denoted by \(D=\{y_{1},y_{2},\ldots,y_{2C},e_{2C+1},e_{2C+2},\ldots,e_{N}\}\).
Fig. 2

Context of pixel y k not in the first and last rows. a y k in the first column. b y k in the other columns. c y k in the last column

3) In order to further reduce the prediction errors, the first 2C elements in D (the pixels in the first two rows) are predicted before data embedding. This is done by raster scanning the pixels in the first two rows and predicting the 2×C pixels using the noncausal predictor with p=1, described in Section 3.3. The first 2C pixels are predicted and kept as \(E_{1}=\{e_{1},e_{2},\ldots,e_{2C}\}\).

4) From steps 2 and 3, we have \(E=\{e_{1},e_{2},\ldots,e_{2C},e_{2C+1},e_{2C+2},\ldots,e_{N}\}\), which is the output of the proposed image predictor.

5) The original image can be restored from E by performing the inverse prediction operations. Referring to step 3 and Section 3.3, the first 2C pixels are recovered by
$$ \left\{ \begin{array}{l} y_{1}=e_{1} \\ y_{2}= y_{1}+e_{2} \\ y_{i}=2y_{i-1} - y_{i-2} - 2e_{i}, 3 \leq i \leq 2C. \end{array}\right. $$
(17)
Once the first 2C pixels are recovered, the other pixels can be recovered by referring to Eq. 16 in step 2 with the following expression:
$$ {} y_{k+C}=\left\{ \begin{array}{l} 3y_{k}-y_{k-C}-y_{k+1}-3e_{k+C},\; if\; \text{mod}\,(k,C)=1 \\ 3y_{k}-y_{k-C}-y_{k-1}-3e_{k+C},\; if\; \text{mod}\,(k,C)=0 \\ 4y_{k}-y_{k-C}-y_{k+1}-y_{k-1}-4e_{k+C}, \text{others}, \end{array}\right. $$
(18)

where C<kNC. Finally, the original image is restored from the residual signal E.
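Steps 1–5 above can be sketched end to end as follows (a simplified illustration, function names ours; 0-indexed, whereas the text is 1-indexed). Recovered pixels are rounded at each step of the inverse, which is valid because pixel intensities are integers and only removes floating-point residue from the divisions by 3 and 4:

```python
def predict_image(I):
    """Forward prediction of Section 3.4 on a list of R rows x C columns.
    Returns the residual list E of length R*C."""
    R, C = len(I), len(I[0])
    y = [I[i][j] for i in range(R) for j in range(C)]   # raster scan, Eq. (14)
    N = R * C
    E = [0.0] * N
    # Step 3: first two rows via the p = 1 predictor of Section 3.3
    E[0], E[1] = y[0], y[1] - y[0]
    for i in range(1, 2 * C - 1):
        E[i + 1] = y[i] - (y[i - 1] + y[i + 1]) / 2
    # Step 2: rows 2..R-1 via Eq. (16); the residual of pixel m lands in slot m + C
    for m in range(C, N - C):
        if m % C == 0:                                  # first column
            y_hat = (y[m - C] + y[m + C] + y[m + 1]) / 3
        elif m % C == C - 1:                            # last column
            y_hat = (y[m - C] + y[m - 1] + y[m + C]) / 3
        else:                                           # interior: four neighbors
            y_hat = (y[m - C] + y[m - 1] + y[m + C] + y[m + 1]) / 4
        E[m + C] = y[m] - y_hat
    return E

def recover_image(E, R, C):
    """Invert predict_image: Eq. (17) for the first two rows, then Eq. (18)
    row by row in sequential order."""
    N = R * C
    y = [0.0] * N
    y[0], y[1] = E[0], E[0] + E[1]
    for i in range(2, 2 * C):
        y[i] = 2 * y[i - 1] - y[i - 2] - 2 * E[i]
    for m in range(C, N - C):
        if m % C == 0:
            y[m + C] = round(3 * y[m] - y[m - C] - y[m + 1] - 3 * E[m + C])
        elif m % C == C - 1:
            y[m + C] = round(3 * y[m] - y[m - C] - y[m - 1] - 3 * E[m + C])
        else:
            y[m + C] = round(4 * y[m] - y[m - C] - y[m - 1] - y[m + 1] - 4 * E[m + C])
    return [[round(y[i * C + j]) for j in range(C)] for i in range(R)]
```

A small 4×3 test image round-trips exactly through `predict_image` followed by `recover_image`.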

3.5 Comments

From the analysis above, we can see that the proposed image predictor predicts each pixel from its four immediate, unmodified neighbors. Compared with existing typical predictors, it further improves performance due to the following facts:
  1. The data is predicted in a noncausal way, so that most pixels can be predicted from their four immediate neighbors. This improves prediction performance, since some predictors (such as the MED predictor) use only causal pixels for data prediction.

  2. All the pixels are predicted in a single pass with original pixels as context. This avoids the distortion problem caused by predicting part of the pixels from modified pixels, as with the predictor in [15].

  3. In the literature, the IPE watermarking scheme of [15] has a satisfactory performance. The NIPE technique with the predictor above can further reduce the embedding distortion for the same payload by avoiding the prediction of part of the pixels from modified pixels.

4 Proposed watermarking scheme

The proposed watermarking scheme is a combination of existing techniques (histogram shifting in [14, 24]) and new techniques (NIPE embedding and noncausal image prediction).

4.1 Prediction expansion with histogram shifting

The histogram shifting method, introduced in [14, 24], is an efficient reversible watermarking technique for enhancing the fidelity of the marked signal and avoiding the overlapping problems caused by expansion embedding. The combination of histogram shifting and IPE has been previously addressed in [14, 15]. Here, we present how to combine the NIPE method with the histogram shifting technique. We adopt a positive threshold T to control the embedding distortion. Specifically, only those prediction errors whose integer parts lie in [−T,T] are selected for NIPE embedding (the expandable set \(S_{1}\)); the prediction errors outside this range are shifted (the shiftable set \(S_{2}\)) to avoid overlapping problems. The reversible watermarking rules are formulated as follows.
$$ e_{wi}=\left\{ \begin{array}{ll} 2\times \ell_{i} + \delta_{i} + w_{i}& \text{if}\; \ell_{i}\in\; [0, T] \\ 2\times \ell_{i} + \delta_{i} - w_{i}& \text{if}\; \ell_{i}\in\; [-T, 0) \\ e_{i} + T +1 & \text{if} \;\ell_{i}>T \\ e_{i} - T-1 & \text{if}\; \ell_{i}<-T, \end{array}\right. $$
(19)

where \(\ell _{i}\) is the integer part of the ith prediction error \(e_{i}\), satisfying \(e_{i}=\ell _{i}+\delta _{i}\). The marked prediction error after the bit \(w_{i}\) is inserted is denoted by \(e_{wi}\).

The decoder recovers the original prediction error e i and the bit w i from \(e_{w_{i}}\) by:
$${} e_{i}=\left\{ \begin{array}{ll} \text{fix}\left(\frac{\ell_{wi}}{2}\right) + \delta_{wi} &\text{if}\; \ell_{wi}\in \left[-2T-1, 2T+1\right] \\ e_{wi} - T - 1 &\text{if}\; \ell_{wi}> 2T+1 \\ e_{wi} + T + 1 &\text{if}\; \ell_{wi}<-2T-1 \end{array}\right. $$
(20)
and
$$ w_{i}= \text{mod}(\ell_{wi}, 2), \quad \text{if}\; \ell_{wi}\in [-2T-1, 2T+1], $$
(21)

where \(\ell_{wi}=\text{fix}(e_{wi})\) is the integer element of \(e_{wi}\) and \(\delta_{wi}=e_{wi}-\ell_{wi}\). It is worth noting that \(\delta_{wi}=\delta_{i}\) since the encoder only expands the integer part while keeping the fractional element unchanged. Finally, the original pixels are recovered from the original prediction errors by performing the inverse prediction operation in Section 3.4.
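As a concrete illustration, the embedding rule (19) and the recovery rules (20) and (21) can be sketched for a single prediction error as follows. This is a minimal Python sketch, not the authors' code: `fix()` truncates toward zero as in Matlab, and the function names are ours.

```python
import math

def fix(x):
    # truncate toward zero, like Matlab's fix()
    return math.trunc(x)

def embed(e, w, T):
    """Eq. (19): embed bit w into the non-integer prediction error e.
    Only the integer part l = fix(e) is expanded; the fractional
    part d = e - l is kept unchanged."""
    l = fix(e)
    d = e - l
    if 0 <= l <= T:
        return 2 * l + d + w      # expansion, non-negative bin
    elif -T <= l < 0:
        return 2 * l + d - w      # expansion, negative bin
    elif l > T:
        return e + T + 1          # shift right
    else:                         # l < -T
        return e - T - 1          # shift left

def extract(e_w, T):
    """Eqs. (20)-(21): recover (e, w) from the marked error e_w.
    Returns w = None for shifted errors, which carry no bit."""
    l_w = fix(e_w)
    d_w = e_w - l_w
    if -2 * T - 1 <= l_w <= 2 * T + 1:
        return fix(l_w / 2) + d_w, l_w % 2
    elif l_w > 2 * T + 1:
        return e_w - T - 1, None
    else:
        return e_w + T + 1, None
```

For example, with T = 3 the error 2.3 carrying bit 1 becomes 5.3, and the decoder recovers exactly (2.3, 1) because the fractional part 0.3 passes through unchanged.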

The ratio between the sets S 1 and S 2 can be controlled by changing the embedding threshold T. The larger the threshold T, the higher the embedding payload and the greater the embedding distortion.

4.2 Capacity analysis

The marked images may suffer from overflow and underflow problems due to the expansion embedding and histogram shifting operations. To this end, an embedding testing step is first performed to pick out those pixels with overflow or underflow problems. The testing process has been addressed in detail in [14, 15]. For an image with m-bit representation, a watermarked pixel whose intensity is not in the range [0, 2^m − 1] is labeled as a bad pixel. The positions of the bad pixels are saved and embedded with the payload to indicate the expandable locations.
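The overflow/underflow test described above can be sketched as follows. This is an illustrative Python fragment under our own assumptions (a 2-D list of watermarked intensities); the paper's actual testing procedure follows [14, 15].

```python
def find_bad_pixels(marked, m=8):
    """Flag positions whose watermarked intensity falls outside
    [0, 2**m - 1] (overflow or underflow) for an m-bit image.
    Returns the (row, col) positions, which must be recorded and
    embedded alongside the payload."""
    lo, hi = 0, 2 ** m - 1
    bad = []
    for r, row in enumerate(marked):
        for c, v in enumerate(row):
            if v < lo or v > hi:
                bad.append((r, c))
    return bad
```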

Usually, the size of images is smaller than 5000×5000. After the mapping in (14), the length is 5000×5000=25,000,000<2^25. That is to say, 25 bits of information are required for indicating a bad pixel. In addition, 7 bits of information are required for sending the embedding threshold parameter T to the decoder. Without the consideration of recursive embedding, the maximal embedding rate of the proposed reversible watermarking method can be estimated by:
$$ C = \frac{N_{1}-25\times N_{p}-25-7}{N}, $$
(22)

where C is the maximal embedding rate (bounded to 1), N_1 the length of the expandable set S_1, N_p the number of bad pixels, and N the number of cover image pixels.
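The capacity estimate (22) can be checked with a short Python sketch. The 512×512 image size, expandable-set size, and bad-pixel count below are hypothetical values chosen for illustration only.

```python
import math

def max_embedding_rate(N1, Np, N):
    """Eq. (22): maximal embedding rate, bounded to 1.
    N1: size of the expandable set S1, Np: number of bad pixels,
    N: number of cover-image pixels. Each bad-pixel position costs
    25 bits, plus 25 bits for P and 7 bits for T."""
    return min(1.0, (N1 - 25 * Np - 25 - 7) / N)

# a pixel index of a 5000x5000 image fits in 25 bits
bits_per_position = math.ceil(math.log2(5000 * 5000))

# hypothetical 512x512 image, 200,000 expandable errors, 3 bad pixels
rate = max_embedding_rate(200_000, 3, 512 * 512)
```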

5 Encoder and decoder

The proposed reversible watermarking scheme, as illustrated in Fig. 3, can be used for image or audio files. In this paper, we take some standard images for experimental testing. In the embedding, the maximal capacity (P max) of an image is first computed by using the proposed reversible watermarking strategy, P max<=N. When an actual payload size P(P<=P max) is given, the embedding threshold T can be computed. For recovering the cover image, the information of P and T is needed to be sent to the decoder in a way that the LSB values of the first 32 prediction errors are kept (as part of the payload) and then replaced by the parameters P (25 bits) and T (7 bits).
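The 32-bit parameter header (P in 25 bits, T in 7 bits) that replaces the LSBs of the first 32 prediction errors can be serialized as below. This is a minimal sketch of one possible bit layout; the paper does not specify the exact ordering, so the P-then-T convention here is our assumption.

```python
def pack_header(P, T):
    """Serialize P (25 bits) and T (7 bits) into the 32 header bits
    that replace the LSBs of the first 32 prediction errors.
    Bit ordering (P first, then T) is an assumed convention."""
    assert 0 <= P < 2 ** 25 and 0 <= T < 2 ** 7
    bits = format(P, "025b") + format(T, "07b")
    return [int(b) for b in bits]

def unpack_header(bits):
    """Recover (P, T) from the 32 extracted header bits."""
    assert len(bits) == 32
    word = int("".join(map(str, bits)), 2)
    return word >> 7, word & 0x7F
```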
Fig. 3

Proposed reversible watermarking scheme. a Watermark embedding. b Watermark extraction and lossless recovery

Referring to Sections 2 and 3, the embedding process of the proposed scheme is described as follows:
  1.

    Referring to Section 3.4, predict the cover image Y to get the prediction errors E;

     
  2.

    Find the positions of the bad pixels by using an embedding testing operation. The embedding testing step is similar to that in [14, 15]. Each bad pixel consumes 25 bits of payload to indicate its position;

     
  3.

    Referring to Section 2.2, embed the data (including P, T, and the positions of the bad pixels) into E to generate E w ;

     
  4.

    Reconstruct the marked image Y w from E w by using the inverse prediction operation as described in Section 3.4, step 5).

     

In the decoder, the same prediction operations are performed on Y w to get E w . Then, the information of P and T is extracted from the LSB values of the first 32 prediction errors. Furthermore, the hidden data and the original prediction errors E are extracted from E w . Finally, the original image Y is completely recovered from E by using the inverse prediction operations.

6 Experimental testing and analysis

In this section, we adopt 24 gray-level versions of Kodak test images (http://r0k.us/graphics/kodak/index.html) and four standard benchmark images (lena, baboon, fishingboat, and f16 in Fig. 4) as the data set. Firstly, the performance of the noncausal predictor proposed in Section 3.4 is tested by comparison with several other typical predictors. This is followed by a performance comparison of the proposed watermarking scheme against three existing state-of-the-art works [14, 15, 18]. All the algorithms were implemented in Matlab, and the experiments were performed by embedding and decoding randomly generated binary bitstreams on the image data set for reversible watermarking.
Fig. 4

Four standard benchmark images. a Lena. b Baboon. c Fishingboat. d F16

6.1 Comparison of typical predictors

The shape of the histogram of the prediction errors is often used to measure the performance of the embedding scheme. In general, the distribution of the prediction errors obeys a Laplacian distribution, whose shape is determined by the absolute mean and the variance. If the mean is close to zero, the variance essentially determines the shape of the histogram. The smaller the variance, the better the performance that can be achieved by the reversible watermarking scheme.
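These statistics can be computed as sketched below. The four-neighbor mean predictor here is only an illustrative noncausal stand-in (the paper's actual predictor is defined in Section 3.4), and the toy image is hypothetical.

```python
import statistics

def four_neighbor_errors(img):
    """Prediction errors for interior pixels using the mean of the
    four immediate neighbors (illustrative noncausal predictor,
    not the exact one of Section 3.4). Errors are non-integer."""
    errs = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            pred = (img[r-1][c] + img[r+1][c]
                    + img[r][c-1] + img[r][c+1]) / 4
            errs.append(img[r][c] - pred)
    return errs

def abs_mean_and_std(errs):
    """Absolute mean and (population) standard deviation of the
    prediction errors, the two shape statistics used in the text."""
    mu = sum(abs(e) for e in errs) / len(errs)
    sigma = statistics.pstdev(errs)
    return mu, sigma
```

On a smooth gradient such as [[1,2,3],[4,5,6],[7,8,9]], the interior pixel is predicted exactly, so both statistics are zero; real images yield small but nonzero values.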

In the literature, the predictor proposed in [15] provides a satisfactory performance by dividing an image into two sets (like a chess board divided into black and white sets) to achieve noncausal prediction in a way that a pixel can be predicted with its four immediate pixels. This prediction method suffers from a distortion problem: half of the pixels are predicted with modified pixels instead of the original ones. As shown in Section 3.4, the proposed prediction method can predict a pixel with its four immediate pixels, and all the pixels are predicted entirely with original pixels as context. As a result, the proposed prediction method has better prediction accuracy. In order to better evaluate the effect of the rounding and watermarking operations, we have computed the absolute mean values and the standard deviations of all the 28 test images predicted with the proposed predictor in this paper (denoted by μ and σ), the proposed predictor with an integer rounding operation (μ_r and σ_r), and the prediction method in [15] (μ_rw and σ_rw).

Figure 5 shows the absolute mean values of all the 28 test images predicted with the proposed method in this paper and the differences among these three predictors. We can see from Fig. 5 a that the mean values are close to zero. In Fig. 5 b, the differences (μ_r − μ) plotted with the “asterisk” line are often positive, indicating the effect of the rounding operation on the predicted values. The “circle” line plots the differences (μ_rw − μ), which are also positive, indicating the effect of prediction with the modified pixels on the mean values. Figure 6 shows the corresponding standard deviations (σ) and their differences, denoted by σ_r − σ and σ_rw − σ, respectively. We can see from the “asterisk” line in Fig. 6 b that the difference values are always positive, indicating the effect of the rounding operation and of prediction with modified pixels on the standard deviations. Figures 5 and 6 show that the prediction errors of the proposed predictor have smaller mean values and standard deviations for an image.
Fig. 5

The absolute mean values of the prediction errors and the differences of different prediction methods. a Proposed prediction method. b Difference between our predictor and other two methods

Fig. 6

The standard deviations of the prediction errors and the differences of different prediction methods. a Proposed prediction method. b Difference between our predictor and other two methods

Furthermore, Table 1 lists the absolute mean values and the standard deviations of four benchmark images predicted with five different predictors (difference predictor used in [9,10], integer MED predictor used in [14,18], non-integer MED predictor [33], the prediction method in [15], and the proposed one in this paper). We can see from this table that the prediction method proposed in this paper can provide the smallest mean values and standard deviations.
Table 1

Performance comparison of five predictors

Predictors                       |  Lena            |  Baboon           |  Fishingboat     |  F16
                                 |  μ       σ       |  μ        σ       |  μ       σ       |  μ       σ
Difference predictor [9, 10]     |  4.6724  7.8454  |  18.4130  27.6633 |  7.2441  11.1876 |  5.2971  11.9235
Integer MED predictor [14]       |  4.3407  6.9049  |  13.5220  19.6804 |  6.3836  9.3957  |  3.4273  6.2054
Non-integer MED predictor [33]   |  4.3060  6.7608  |  13.3652  19.2567 |  6.5863  9.7152  |  3.6089  6.6339
Sachnev’s predictor [15]         |  3.2063  4.8738  |  10.9786  15.6010 |  5.2220  7.7943  |  2.7578  4.8683
Proposed predictor               |  3.2049  4.8565  |  10.9782  15.5957 |  5.2209  7.7847  |  2.7465  4.8515

6.2 Comparison with other recent algorithms

We implemented three typical algorithms (Thodi and Rodriguez’s IPE algorithm with histogram shifting and flag bits (P3) [14], Coltuc’s low distortion transform method with the MED predictor [18], and the IPE-based double-embedding scheme [15]) together with the proposed scheme combining the NIPE technique and the new prediction method. The scheme proposed by Thodi and Rodriguez is the classical IPE embedding strategy, which uses a MED predictor to output integer prediction errors for expansion embedding. Coltuc’s reversible watermarking using a low distortion transform adopts the same MED predictor and reduces the embedding distortion by marking not only the current pixel but also its context [18]. In the literature, the double-embedding scheme proposed by Sachnev et al. provides a satisfactory performance by dividing an image into two sets so that the pixels can be predicted in a noncausal way. Four standard benchmark images are adopted to report experimental results, as plotted in Fig. 7. Simulation results are similar for the other test images. We can see from Fig. 7 that the NIPE technique with the proposed noncausal prediction method performs better at all embedding rates. The details are described as follows.
Fig. 7

Comparison of capacity and fidelity against three typical algorithms [14,15,18]

The classical IPE embedding scheme, proposed by Thodi and Rodriguez [14], is a high-capacity reversible watermarking algorithm developed from the PE embedding technique with the MED predictor. We can see from Fig. 7 that the NIPE technique with the proposed noncausal predictor provides a higher embedding payload in comparison with the IPE scheme. The basic reason is that the proposed noncausal predictor has higher prediction precision than the MED predictor used in [14].

Coltuc [18, 19] has also developed Thodi and Rodriguez’s work and has presented results for the MED predictor. The basic idea of the approach is to embed the entire expanded difference not only into the current pixel but also into its context; the minimization of the squared errors is then considered to reduce the embedding distortion. When the parameter α is 0.25 (referring to Eq. (4) of [18]), we can see from Fig. 7 that Coltuc’s embedding method has lower embedding distortion than the IPE for the same embedding rate. In [19], the embedding approach has been further generalized as a low distortion transform (LDT) for reversible watermarking. Compared with the LDT technique with the MED predictor in [18], the proposed NIPE scheme can provide a higher embedding payload for the same embedding distortion. The basic reason is that the proposed prediction strategy can better estimate the current pixel because data prediction is not restricted to causal pixels only, as listed in Table 1.

Another important improvement on Thodi and Rodriguez’s work has been proposed by Sachnev et al. [15] by introducing a high-precision prediction strategy and a sorting technique. Since a pixel can be estimated with its four immediate pixels, this IPE-based scheme provides satisfactory performance in the literature. Figure 7 shows that the reversible watermarking approach proposed in this paper has lower embedding distortion at all embedding rates than Sachnev et al.’s double embedding scheme. The reason is that the predictor used for NIPE predicts pixels with four original immediate pixels, whereas the one used in [15] predicts half of the pixels with modified pixels as context.

7 Conclusions

This paper presents a NIPE embedding technique for reversible watermarking. The NIPE technique remedies a major drawback of Thodi and Rodriguez’s work (called IPE in this paper), namely that the predicted values must be rounded to integer numbers for data embedding. With the NIPE technique, the rounding operation in the prediction process (which often appears in IPE-based reversible watermarking algorithms to generate integer prediction errors) can be discarded. This makes it possible to use better predictors. In order to demonstrate the advantage, we proposed a prediction model and designed an image predictor for the NIPE. The new predictor can predict pixels with four immediate pixels, and all pixels are predicted with the original pixels. With the proposed NIPE technique and predictor, the embedding distortion is smaller than that in [15] at all embedding rates. Experimental results have shown that the predictor designed in this paper provides better performance than several existing typical prediction methods. In comparison with other typical reversible watermarking algorithms, the proposed scheme (combining the NIPE technique and the new prediction method) performs better.

8 Endnotes

1 In the IPE, the prediction errors should be rounded to integer values. This rounding operation imposes a constraint on a predictor’s performance.

2 In the literature [15, 26], noncausal predictive methods have been used for the IPE by applying multi-pass prediction for multi-layer embedding. Since part of the pixels were predicted using the watermarked pixels instead of the original ones, some distortion has been introduced.

Declarations

Acknowledgements

This work was supported by NSFC (No. 61272414). The authors would like to thank the reviewers for their valuable comments.

Authors’ Affiliations

(1)
Department of Electronic Engineering, School of Information Science and Technology, Jinan University

References

  1. W Bender, D Gruhl, N Morimoto, A Lu, Techniques for data hiding. IBM Syst. J. 35(3), 313–336 (1996).View ArticleGoogle Scholar
  2. B Macq, in Proc. the European Signal Processing Conf,. Lossless multiresolution transform for image authenticating watermarking (Tampere, Finland, 2000).Google Scholar
  3. C De Vleeschouwer, JE Delaigle, B Macq, Circular interpretation of bijective transformations in lossless watermarking for media asset management. IEEE Trans. Multimedia. 5(1), 97–105 (2003).View ArticleGoogle Scholar
  4. S Lee, CD Yoo, T Kalker, Reversible image watermarking based on integer-to-integer wavelet transform. IEEE Trans. Inf. Forensics Secur. 2(3), 321–330 (2007).View ArticleGoogle Scholar
  5. J Fridrich, M Goljan, R Du, in Proc. SPIE Photonics West, Security and Watermarking of Multimedia Contents III, 3971. Invertible authentication (San Jose, 2001), pp. 197–208.Google Scholar
  6. J Fridrich, M Goljan, R Du, Lossless data embedding - new paradigm in digital watermarking. EURASIP J. Appl. Signal Process. 2, 185–196 (2002).View ArticleGoogle Scholar
  7. AACM Kalker, FMJ Willems, in Proc. 14th Int. Conf. Digital Signal Processing, 1. Capacity bounds and constructions for reversible data-hiding (Santorini, 2002), pp. 71–76.Google Scholar
  8. MU Celik, G Sharma, AM Tekalp, E Saber, Lossless generalized-LSB data embedding. IEEE Trans. Image Process. 14(2), 253–266 (2005).View ArticleGoogle Scholar
  9. J Tian, Reversible data embedding using a difference expansion. IEEE Trans. Circuits Syst. Video Technol. 13(8), 890–896 (2003).View ArticleGoogle Scholar
  10. AM Alattar, Reversible watermark using the difference expansion of a generalized integer transform. IEEE Trans. Image Process. 13(8), 1147–1156 (2004).MathSciNetView ArticleGoogle Scholar
  11. L Kamstra, H Heijmans, Reversible data embedding into images using wavelet techniques and sorting. IEEE Trans. Image Process. 14(12), 2082–2090 (2005).MathSciNetView ArticleGoogle Scholar
  12. X Wang, C Shao, X Xu, X Niu, Reversible data-hiding scheme for 2-D vector maps based on difference expansion. IEEE Trans. Inf. Forensics Secur. 2, 311–319 (2007).View ArticleGoogle Scholar
  13. Y Hu, H-K Lee, J Li, DE-based reversible data hiding with improved overflow location map. IEEE Trans. Circuits Syst. Video Technol. 19(2), 250–260 (2009).View ArticleGoogle Scholar
  14. DM Thodi, JJ Rodriguez, Expansion embedding techniques for reversible watermarking. IEEE Trans. Image Process. 16(3), 721–730 (2007).MathSciNetView ArticleGoogle Scholar
  15. V Sachnev, HJ Kim, J Nam, S Suresh, Y Shi, Reversible watermarking algorithm using sorting and prediction. IEEE Trans. Circuits Syst. Video Technol. 19(7), 989–999 (2009).View ArticleGoogle Scholar
  16. H-W Tseng, C-P Hsieh, Prediction-based reversible data hiding. Inf. Sci. 179, 2460–2469 (2009).View ArticleGoogle Scholar
  17. X Li, B Yang, T Zeng, Efficient reversible watermarking based on adaptive prediction-error expansion and pixel selection. IEEE Trans. Image Process. 20(12), 3524–3533 (2011).MathSciNetView ArticleGoogle Scholar
  18. D Coltuc, Improved embedding for prediction-based reversible watermarking. IEEE Trans. Inf. Forensics Secur. 6(3), 873–882 (2011).View ArticleGoogle Scholar
  19. D Coltuc, Low distortion transform for reversible watermarking. IEEE Trans. Image Process. 21(1), 412–417 (2012).MathSciNetView ArticleGoogle Scholar
  20. M van der Veen, F Bruekers, A van Leest, S Cavin, in Proc. SPIE Photonics West, Electronic Imaging 2003, Security and Watermarking of Multimedia Contents V, 5020. High-capacity reversible watermarking for audio (San Jose, California, 2003), pp. 1–11.Google Scholar
  21. B Bradley, AM Alattar, in Proc. SPIE Photonics West, Electronic Imaging 2005, Security and Watermarking of Multimedia Contents VII, 5681. High-capacity, invertible, data-hiding algorithm for digital audio (San Jose, California, 2005), pp. 789–800.Google Scholar
  22. D Yan, R Wang, in International Conference on Intelligent Information Hiding and Multimedia Signal Processing. Reversible data hiding for audio based on prediction error expansion (Harbin, 2008), pp. 249–252.Google Scholar
  23. A Van Leest, M Van der Veen, F Bruekers, in Proc. IEEE Conf. Image Processing, 3. Reversible image watermarking (Barcelona, 2003), pp. 731–734.Google Scholar
  24. Z Ni, Y Shi, N Ansari, S Wei, in Proc. IEEE Int. Symp. Circuits and Systems, 2. Reversible data hiding (Bangkok, 2003), pp. 912–915.Google Scholar
  25. M Chen, Z Chen, X Zeng, Z Xiong, in Proc. 11th ACM Workshop Multimedia and Security. Reversible data hiding using additive prediction-error expansion (Princeton, 2009), pp. 19–24.Google Scholar
  26. L Luo, Z Chen, M Chen, X Zeng, Z Xiong, Reversible image watermarking using interpolation technique. IEEE Trans. Inf. Forensics Security. 5(1), 187–193 (2010).View ArticleGoogle Scholar
  27. AK Jain, Fundamentals of Digital Image Processing (Prentice Hall, Englewood Cliffs, NJ, 1989).MATHGoogle Scholar
  28. A Asif, JMF Moura, Image codec by noncausal prediction, residual mean removal, and cascaded vector quantization. IEEE Trans. Circuits Syst. Video Technol. 6(1), 42–55 (1996).View ArticleGoogle Scholar
  29. N Balram, JMF Moura, Noncausal predictive image codec. IEEE Trans. Image Process. 5(8), 1229–1242 (1996).View ArticleGoogle Scholar
  30. M Weinberger, G Seroussi, G Sapiro, The LOCO-I lossless image compression algorithm: principles and standardization into JPEG-LS. IEEE Trans. Image Process. 9(8), 1309–1324 (2000).View ArticleGoogle Scholar
  31. WR Gardner, BD Rao, Non-causal linear prediction of voiced speech. IEEE Asilomar Conference on Signals, Systems and Computers, (Pacific Grove, CA, Oct. 1992).Google Scholar
  32. X Wu, N Memon, Context-based, adaptive, lossless image coding. IEEE Trans. Commun. 45(4), 437–444 (1997).View ArticleGoogle Scholar
  33. S Martucci, in Proc. IEEE Int. Symp. Circuits and Systems. Reversible compression of HDTV images using median adaptive prediction and arithmetic coding (New Orleans, 1990), pp. 1310–1313.Google Scholar
  34. S Xiang, in IH 2012, LNCS 7692. Non-integer expansion embedding for prediction-based reversible watermarking (Springer, Berkeley, 2013), pp. 224–239.Google Scholar

Copyright

© Xiang and Wang. 2015

This is an Open Access article distributed under the terms of the Creative Commons Attribution License(http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.