 Research
 Open access
Progressive sharing of multiple images with sensitivity-controlled decoding
EURASIP Journal on Advances in Signal Processing volume 2015, Article number: 11 (2015)
Abstract
Secure sharing of digital images is becoming an important issue, and many schemes for ensuring image sharing security have been proposed. However, existing approaches focus on the sharing of a single image rather than multiple images. We propose three sharing methods that progressively reveal n given secret images according to the sensitivity level of each image. Method 1 divides each secret image into n parts and then combines and hides the parts of the images to get n steganographic (stego) JPEG codes of equal importance. Method 2 is similar; however, it allocates different 'weights' to different stego JPEG codes to indicate their strength. Method 3 first applies traditional threshold sharing to the n secret images, then progressively shares k keys, and finally combines the two sharing results to get n stego JPEG codes. In the recovery phase, various parameters are compared to pre-specified low/middle/high (L/M/H) thresholds to determine, according to the respective method, whether secret images are reconstructed and at what quality. The results of the experiments conducted verify the efficacy of our methods.
1 Introduction
The Internet has become an integral part of human life and society. This public facility constantly transmits both public and private information; consequently, the protection of sensitive information transmitted through this medium has become an important issue. Blakley and Shamir [1,2] first conceptualized the idea of a (t, n) threshold secret sharing scheme, in which at least a minimum number t of n participants are required in order to recover the secret. This scheme has been extended by various researchers [3-16] and successfully applied to activities such as the protection of PDF files [12], visual cryptography [13,14], and network communication [15]. For digital media, many schemes for ensuring image sharing security have been proposed. For example, Thien and Lin [8] proposed using n shares, each t times smaller than the given secret image, to share a secret image. Wang and Shyu [4] proposed a scalable secret image sharing scheme. Lin and Tsai proposed image sharing schemes with authentication capabilities [9] or with reduced share size [10]. Further, some approaches are devoted to progressively decoding secrets [3-7].
Besides sharing, approaches using data hiding [17-19] or watermarking [20-24] have also offered other kinds of protection. In general, a hiding method embeds a secret file in a host image. In data hiding, researchers usually consider issues such as the size ratio between the secret file and the host image, and the impact of embedding on the host image. As for watermarking, one can embed a watermark in a digital image in order to authenticate or claim ownership of that image. In the design of watermarking methods, researchers usually pay more attention to resisting attacks such as the copy attack, tampering, and cropping. Nowadays, the study of watermarks covers not only software [21,24] but also hardware [22,23]. Notably, a sharing method often has a post-processing step that utilizes data hiding or some kind of authentication tool. This is because each generated share looks like noise and may attract the attention of hackers, whereas data hiding can conceal the generated shares in ordinary images. An authentication tool might also be needed in order to verify integrity. For example, ref. [12] uses the SHA-256 hash function to authenticate the file.
Our study here focuses on sharing. Among existing image sharing approaches, the secret being shared is often assumed to be a single image rather than multiple images. Repeated use of a single-image sharing method often causes the user to neglect the cross-relations between distinct images and makes the setting of recovery thresholds less suitable. For example, when sharing the photos of criminals, if we only have single-image sharing software, we might just use one set of thresholds for all photos. However, if we have multiple-image sharing software, which requires us to input the security level of each photo being processed simultaneously, then we are more likely to take a closer look at each criminal's case, distinguish the photos accordingly, and finally give a stricter threshold setting to the photos of the more serious offenders.
When multiple images are being shared, the fact that the security/sensitivity of some images might be higher than that of others has to be considered. In this paper, we consider how to share several secret images simultaneously. We propose three progressive sharing methods (methods 1, 2, and 3) that use sensitivity-controlled decoding. The sensitive images, i.e., the secret images, are divided into several image groups according to security level, with the more sensitive groups requiring more steganographic images (stegos) to uncover the images they contain. Specifically, after sharing and hiding, all stegos in method 1 are of equal weight, whereas the stegos in method 2 differ: some have more weight, and hence their secret-hiding ability is more powerful than that of the other stegos. Finally, in method 3, some stegos are so powerful that they are called guardian stegos: in this method, no information can be revealed without a minimum number of guardian stegos. Thus, in our proposed methods, the secret images in each security group are revealed progressively when a user receives enough stegos (method 1), when the sum of the received weights is sufficient (method 2), or when a sufficient number of guardian stegos is present (method 3).
2 Background and related work
2.1 Secret sharing: (t, n) sharing
Thien and Lin [8] proposed the (t, n) threshold method, which distributes a secret image among n shares. First, the secret image is divided into non-overlapping sectors of t pixels each. Then, the following polynomial is used to encode every sector:

f(x) = (a_0 + a_1 x + … + a_{t−1} x^{t−1}) mod p,   (1)

where a_0, …, a_{t−1} are the t pixel values in a sector and x is a user-specified index. Here, p is a prime number (or p is a power of 2, such as 128 or 256, if the arithmetic +, −, ×, and ÷ is in terms of Galois field operations). Finally, the share whose index is x is generated by concatenating every encoded result f(x). Notably, n unique indices {x_1, x_2, …, x_n} are selected at the beginning in order to create n shares, and each share is 1/t the size of the original secret image.
In the decoding phase, if at least t of the n shares are available, the original secret image can be reconstructed using Lagrange interpolation; with fewer than t shares, only noise is obtained.
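The scheme above can be sketched in a few lines. The following is a minimal, self-contained illustration of Thien-Lin style (t, n) sharing over the prime field GF(251); the function names and demo values are ours, not the paper's, and real implementations first truncate pixel values to [0, 250] when a prime field is used.

```python
P = 251  # largest prime below 256

def make_shares(sector, xs):
    """Encode one sector of t pixels: one share value f(x) per index x."""
    # f(x) = a_0 + a_1*x + ... + a_{t-1}*x^{t-1} mod P
    return [sum(a * pow(x, i, P) for i, a in enumerate(sector)) % P for x in xs]

def interpolate_coeffs(points):
    """Recover all t coefficients from any t (x, f(x)) pairs via Lagrange."""
    t = len(points)
    coeffs = [0] * t
    for i, (xi, yi) in enumerate(points):
        num, denom = [1], 1              # basis polynomial, lowest degree first
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [0] * (len(num) + 1)
            for k, c in enumerate(num):  # multiply num by (x - xj)
                new[k] = (new[k] - xj * c) % P
                new[k + 1] = (new[k + 1] + c) % P
            num = new
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P   # modular division by denom
        for k, c in enumerate(num):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs

sector = [45, 201, 17]            # t = 3 pixel values of one sector
xs = [1, 2, 3, 4, 5]              # n = 5 distinct share indices
shares = make_shares(sector, xs)
# any t = 3 of the n = 5 (index, share) pairs recover the sector exactly
assert interpolate_coeffs([(1, shares[0]), (3, shares[2]), (5, shares[4])]) == sector
```

With fewer than t points, the interpolated polynomial differs from f, which is why an insufficient set of shares yields only noise.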
2.2 Progressive sharing: [r_1 & r_2 & … & r_k; n] sharing
Chen and Lin [3] developed a progressive sharing method. They used k thresholds, specifically {r_1 ≤ r_2 ≤ … ≤ r_k}, with each threshold less than or equal to n. For example, for [(2&3&4); n], the three threshold values are r_1 = 2, r_2 = 3, and r_3 = 4. The image is then partitioned into multiple sectors comprising nine (= r_1 + r_2 + r_3) pixels each. To share a sector, for example the nine values {146, 167, 255, 60, 124, 165, 211, 73, 25}, first, the rearranging process illustrated in Figure 1 transforms the nine values into nine new values {230, 21, 159, 23, 83, 155, 227, 136, 207}. (Note that the binary representation of 230 is 11100110, exactly the first eight digits read from the first column in Figure 1.) The first r_1 = 2 transformed values, {230, 21}, give the first polynomial, Equation 2. The next r_2 = 3 transformed values, {159, 23, 83}, create the second polynomial, Equation 3. The final r_3 = 4 transformed values, {155, 227, 136, 207}, create the third polynomial, Equation 4:

f_1(x) = (230 + 21x) mod p,   (2)

f_2(x) = (159 + 23x + 83x^2) mod p,   (3)

f_3(x) = (155 + 227x + 136x^2 + 207x^3) mod p.   (4)
Here, as stated in Section 2.1, either let p be a prime number or let p be a power of 2, such as 128 or 256 (if we do all arithmetic in the Galois field). Now, if any two of the generated shares are available, Lagrange interpolation can be used to reconstruct the two coefficients (230 and 21) in Equation 2. By reversing the rearranging process, we get a rough sector, {128, 128, 192, 0, 64, 128, 192, 64, 0}, of the original sector. If any three of the generated shares are available, we can reconstruct the 2 + 3 = 5 coefficients of Equations 2 and 3 and get an approximate sector, {144, 160, 248, 56, 112, 160, 208, 64, 16}, of the original sector. Finally, if any four of the generated shares are available, we can reconstruct the coefficients of Equations 2 to 4 and get the original sector, {146, 167, 255, 60, 124, 165, 211, 73, 25}, without errors.
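The worked example above can be verified directly. The following sketch takes p = 257 (our assumption for illustration; the paper allows any prime or a Galois-field power of 2) and checks that 2, 3, and 4 shares suffice to recover the three polynomials in turn.

```python
P = 257
polys = [[230, 21],                      # transformed values behind Equation 2
         [159, 23, 83],                  # Equation 3
         [155, 227, 136, 207]]           # Equation 4

def evaluate(coeffs, x):
    return sum(a * pow(x, i, P) for i, a in enumerate(coeffs)) % P

def coeffs_from(points):
    """Lagrange interpolation returning all polynomial coefficients mod P."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        num, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [0] * (len(num) + 1)
            for k, c in enumerate(num):  # multiply num by (x - xj)
                new[k] = (new[k] - xj * c) % P
                new[k + 1] = (new[k + 1] + c) % P
            num = new
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P
        for k, c in enumerate(num):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs

# share with index x holds one evaluation of each of the three polynomials
xs = [1, 2, 3, 4, 5, 6]
shares = {x: [evaluate(c, x) for c in polys] for x in xs}

# any 2 shares reveal Equation 2; 3 reveal Equation 3; 4 reveal Equation 4
assert coeffs_from([(x, shares[x][0]) for x in xs[:2]]) == polys[0]
assert coeffs_from([(x, shares[x][1]) for x in xs[:3]]) == polys[1]
assert coeffs_from([(x, shares[x][2]) for x in xs[:4]]) == polys[2]
```

Reversing the bit-rearranging process of Figure 1 on the partially recovered coefficients then yields the rough, approximate, or exact sector described above.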
3 Proposed methods
We propose three methods. Method 1 is a basic progressive sharing method that divides n secret images into t groups according to sensitivity levels. In this method, for each j, the sensitivity level of the jth group must be lower than that of the (j + 1)th group. Further, the user provides several thresholds for each secret image group. For instance, if r_1 ≤ r_2 ≤ … ≤ r_k is given for a specified group, and fewer than r_1 shadows are received, nothing can be displayed. However, if r_1 shadows are available, then the user can get a low-quality version of the images in that group. The more shadows obtained, the better the quality of the recovered images. Finally, if r_k shadows are available, then the user can recover the original images of that group without any errors. Note the distinction between a 'shadow' and a 'share': in all three proposed methods, each shadow is formed of t shares (because each of the t groups contributes one share to that shadow). The construction details of the shadows are given in step 4 of the three encoding algorithms in Subsections 3.1.1, 3.2.1, and 3.3.1 below.
Method 2 assigns different weights to different 'cover' image groups. The smaller the weight value of a cover group, the smaller the number of shadows hidden in that cover group. The secret images are also partitioned into groups. For each secret image group, for example, secret group j, multiple threshold values (for instance, r_j1 ≤ r_j2 ≤ … ≤ r_jk) are specified by the user. Subsequently, during decoding, if the sum of the weights of the received cover groups is at least r_j1, then the user can recover a low-quality version of the images in secret group j. The greater the sum of the received weights, the better the quality of the recovered secret images. Finally, if the sum of the weights equals r_jk, then we can recover all original images in secret group j without errors.
Method 3 designates some of the stego images as guardian stegos. In this method, if a sufficient minimum number of these guardian stegos is received, then low-quality secret images can be reconstructed, as long as the number of received stego images is also at or above a minimum threshold value. The more guardian stegos received, the better the quality of the recovered images, as long as the number of received stego images also reaches the corresponding threshold values. Finally, if all the guardian stegos are received, then all the secret images can be reconstructed without errors, as long as the number of stego images received is also at or above certain threshold values.
3.1 Method 1: basic form (of sharing with sensitivity-controlled decoding)
3.1.1 Encoding phase
Input: n secret images {S_1, S_2, …, S_n}, n cover images (each in JPEG form), and t sets of 'type-r progressiveness thresholds', {[r_11 ≤ r_12 ≤ … ≤ r_1k], [r_21 ≤ r_22 ≤ … ≤ r_2k], …, [r_t1 ≤ r_t2 ≤ … ≤ r_tk]}.
Output: n JPEG stego codes.
Step 1: Divide {S_1, …, S_n} into t groups according to the sensitivity levels of {S_1, …, S_n}. (For each j = 1, …, t − 1, the sensitivity level of group j must be lower than that of group j + 1.)
Step 2: Rearrange the data sequence of each secret image as follows:
Step 2.1: For each non-overlapping 8 × 8 block, perform the discrete cosine transform (DCT). Then, according to the zigzag order, grab only the DCT values from the direct current (DC) term to the final non-zero value of the alternating current (AC) terms. (Notably, if quantization of the DCT coefficients has been used, then apply Huffman coding to the residual image, which is the difference image between the original image and the image decompressed from the quantized DCT coefficients.)
Step 2.2: For each DCT block, fill in zeroes so that the number of DCT values of the block is a multiple of RSUM, the local sum of the type-r progressiveness thresholds; i.e., RSUM = RSUM_j = r_j1 + r_j2 + … + r_jk if the image is in the jth group. Then, rearrange the data sequence of the DCT block in accordance with Figure 2.
Step 3: For each secret group j = 1, 2, …, t, use [(r_j1 & r_j2 & … & r_jk); n] progressive sharing to get n shares, which share the DCT data of each image in group j. (Remark: if lossless reconstruction is also wanted, then for each secret group j = 1, 2, …, t, use r_jk as the threshold value in traditional (non-progressive) sharing to generate another n shares, which share the Huffman codes (see step 2.1) of the residual images in group j. Now, for i = 1 to n, attach share i of the Huffman code to share i of the DCT data. This pairwise binding reduces n + n shares to n shares.)
Step 4: In step 3, each secret group generated n shares, namely {ith share | i = 1, 2, …, n}. Now, for i = 1, 2, …, n, concatenate (i.e., physically link together) the ith shares across all t secret groups to get the ith shadow. Note that there are t secret groups, and each shadow receives one share from each secret group; hence, each shadow is formed of t shares. For example, shadow 1 is of the form (share 1 of group 1, share 1 of group 2, …, share 1 of group t).
Step 5: Use the JPEG data hiding method [17] to hide the n shadows in the respective n JPEG codes of the n cover images.
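Two details of the algorithm above can be sketched concretely: the zero-padding of step 2.2 and the shadow assembly of step 4. The function names and the dummy data below are ours, for illustration only.

```python
def pad_block(values, rsum):
    """Step 2.2: pad so the count of retained DCT values is a multiple of RSUM_j."""
    return values + [0] * ((rsum - len(values) % rsum) % rsum)

# group thresholds (2, 3, 4) give RSUM = 2 + 3 + 4 = 9; 14 values pad to 18
assert len(pad_block(list(range(14)), 9)) == 18

# Step 4: shadow i concatenates the ith share from every secret group.
t_groups, n = 3, 6
# shares[j][i] = share i generated by secret group j (dummy one-byte shares)
shares = [[bytes([10 * j + i]) for i in range(n)] for j in range(t_groups)]
shadows = [b"".join(shares[j][i] for j in range(t_groups)) for i in range(n)]

assert len(shadows) == n
assert shadows[0] == bytes([0, 10, 20])  # one share from each of the 3 groups
```

In the real method, each share is the concatenated sharing output for a whole image group rather than a single byte, but the assembly pattern is the same.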
3.1.2 Decoding phase
If (any) r_11 of the n stego images are available, then we can extract the shadows from the r_11 stego images; these can then be used to reconstruct low-quality versions of all the secret images in group 1. If (any) r_12 of the n stego images are available, the quality of the recovered group 1 secret images will be better. Finally, if (any) r_1k of the n stego images are available, then the recovered group 1 secret images will all be lossless. Similarly, for each j = 2, …, t, if (any) r_j1, r_j2, … of the n stego images are available, we get the progressive recovery effect mentioned above for group j.
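This threshold test can be written as a small lookup. The helper name and the level convention (0 for nothing, up to k for lossless) are our own; the thresholds in the demo are those of experiment 1 in Section 4.1.

```python
def recovery_levels(num_stegos, thresholds_per_group):
    """thresholds_per_group[j] = [r_j1, ..., r_jk], non-decreasing.
    Returns, per group, how many thresholds the stego count has reached."""
    return [sum(1 for r in rs if num_stegos >= r) for rs in thresholds_per_group]

# thresholds used in experiment 1: [(2&3&4); 6], [(3&4&5); 6], [(4&5&6); 6]
thresholds = [[2, 3, 4], [3, 4, 5], [4, 5, 6]]

# with 4 stegos: group 1 lossless (level 3), group 2 medium, group 3 low
assert recovery_levels(4, thresholds) == [3, 2, 1]
```

With only one stego, every group stays at level 0, matching the rule that nothing is displayed below r_j1.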
3.2 Method 2: sensitivity-controlled decoding using weights
3.2.1 Encoding phase
Input: n secret images {S_1, S_2, …, S_n}, n cover images (each in JPEG form), t sets of 'type-r progressiveness thresholds', {[r_11 ≤ r_12 ≤ … ≤ r_1k], [r_21 ≤ r_22 ≤ … ≤ r_2k], …, [r_t1 ≤ r_t2 ≤ … ≤ r_tk]}, and T positive integers {w_1, w_2, …, w_T} called 'weights'. Note: w_1 + w_2 + … + w_T = n.
Output: n JPEG stego codes.
Steps 1 to 4: Do steps 1 to 4 in Section 3.1.1.
Step 5: Assign the n cover images to T cover groups so that each cover group has at least one cover image. Then, for each j = 1, 2, …, T, assign weight w_j to cover group j.
Step 6: Use the JPEG data hiding method [17] to hide w_1 shadows in the JPEGs of the first cover group, w_2 shadows in the JPEGs of the second cover group, and so on. Since w_1 + w_2 + … + w_T = n, the hiding of the n generated shadows is complete when the final w_T shadows are hidden in the Tth cover group.
3.2.2 Decoding phase
The decoding is carried out according to the total sum of the weights of the received cover groups. If this sum reaches r_11, then we can extract at least r_11 shadows from the received cover groups and reconstruct a low-quality version of all the images in secret group 1. If the sum of the received weights reaches r_12, then the recovered images of secret group 1 will be of better quality. Finally, if the sum of the received weights reaches r_1k, then the recovered images of secret group 1 will be lossless. Analogously, for each j = 2, …, t, if the total sum of the received weights reaches r_j1, r_j2, …, or r_jk, we get the above progressive recovery effect for the jth secret group.
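The weight-based rule can be sketched in the same way as method 1's threshold test, except that the summed weights of the received cover groups replace the stego count. The helper name is ours; the weights and thresholds in the demo are those of experiment 2 in Section 4.2.

```python
def levels_from_weights(received_groups, weights, thresholds_per_group):
    """Sum the weights of the received cover groups, then count, per secret
    group, how many of its type-r thresholds that sum has reached."""
    total = sum(weights[g] for g in received_groups)
    return total, [sum(1 for r in rs if total >= r) for rs in thresholds_per_group]

weights = {1: 1, 2: 2, 3: 3}            # cover-group weights of Section 4.2
thresholds = [[3, 4], [4, 5], [5, 6]]   # type-r thresholds of Section 4.2

# receiving cover groups 1 and 3 gives weight 4: group 1 lossless, group 2 low
assert levels_from_weights({1, 3}, weights, thresholds) == (4, [2, 1, 0])
# receiving only cover group 2 (weight 2) reveals nothing
assert levels_from_weights({2}, weights, thresholds) == (2, [0, 0, 0])
```

Because each cover group carries exactly as many shadows as its weight, a weight sum of s always yields at least s extractable shadows, so reaching a threshold by weight also reaches it by shadow count.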
3.3 Method 3: sensitivity-controlled decoding with guardian stegos
Thus far, for each secret image group, both methods 1 and 2 used 'multiple' progressiveness thresholds to control the progressive effect of that secret image group (for example, the parameters [r_11 ≤ r_12 ≤ … ≤ r_1k] are used for secret image group 1, the parameters [r_21 ≤ r_22 ≤ … ≤ r_2k] for secret image group 2, and so on). In contrast, method 3 uses only one r_j as the 'single' threshold for the jth secret image group (for each j = 1, 2, …). The progressive effect of method 3 is achieved by parameters of another type (type q, rather than type r).
3.3.1 Encoding phase
Input: n secret images {S_1, S_2, …, S_n}, n cover images (each in JPEG form), t positive integer parameters {r_1 ≤ r_2 ≤ … ≤ r_t}, k keys {Key_1, Key_2, …, Key_k}, and k positive integers [q_1 ≤ q_2 ≤ … ≤ q_k = k] called 'type-q progressiveness parameters'. (Note: the type-q parameters are for the progressive sharing of keys, which is different from the t sets of type-r thresholds of methods 1 and 2; methods 1 and 2 use no keys.)
Output: n JPEG stego codes.
Step 1: Do step 1 in Section 3.1.1.
Step 2: Rearrange the data sequence of each secret image and encrypt each value as follows:
Step 2.1: Do step 2.1 in Section 3.1.1.
Step 2.2: Partition the DCT coefficients of each block into k non-overlapping regions according to the zigzag sequence. Region 1 is the most important because it corresponds to the lowest-frequency area, followed by region 2, and so on. Then, for each i = 1, 2, …, k, use Key_i to encrypt the DCT values belonging to region i. Finally, use Key_k again to encrypt the Huffman code generated in step 2.1.
Step 3: For each j = 1, 2, …, t, use r_j as the threshold value in threshold sharing to create n shares that share the encrypted data sequence of each image of the jth secret group.
Step 4: For i = 1, 2, …, n, combine the ith shares of all the secret images in the input to get the ith shadow.
Step 5: For each i = 1, 2, …, k, use q_i as the threshold value in (q_i, k) threshold sharing to share Key_i among k key-shares. (Consequently, among these k key-shares of Key_i, any q_i key-shares can recover Key_i without errors.)
Step 6: For i = 1, 2, …, k, combine the ith key-shares of all k keys to get the ith key-shadow.
Step 7: Use the JPEG data hiding method [17] to hide the n shadows in the respective n JPEG codes of the n cover images.
Step 8: Choose k of the n cover images and use the JPEG data hiding method [17] to hide the k key-shadows in the k JPEG codes of the chosen cover images. Note: the k stego images generated in this way are called 'guardian stegos'.
Thus, for k keys, there are k guardian stegos. Further, the actual number of progressive levels is less than or equal to k (equal to k if all k progressiveness parameters [q_1 ≤ q_2 ≤ … ≤ q_k] are mutually distinct, i.e., q_1 < q_2 < … < q_k).
3.3.2 Decoding phase
If (any) r_1 of the n stego images are available, we can extract the q_1 key-shadows and the r_1 shadows from the r_1 stego images, as long as the r_1 stego images include q_1 guardian stegos. Subsequently, we can recover the encrypted version of all the secret images in secret group 1, reconstruct the key Key_1, and use Key_1 to decrypt the encrypted version of the images in secret group 1. This process reveals the low-quality version of all the secret images in secret group 1. If the r_1 stego images include q_2 guardian stegos, we can also reconstruct the key Key_2, so the recovered versions of the secret images in group 1 have improved quality. Finally, if the r_1 stego images include k guardian stegos, the recovered secret images in group 1 are all lossless. Similarly, for each j = 2, …, t, if (any) r_j of the n stego images are available, then, as long as the r_j stego images include q_1, q_2, …, or q_k guardian stegos, we get the above progressive recovery effect for the secret images in group j.
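Method 3's decoding condition combines two counts: the total number of received stegos (against r_j) and how many of them are guardians (against the q parameters). A small helper, written for illustration with our own name and level convention (0 = nothing, k = lossless), makes the rule explicit; the demo parameters are those of experiment 3 in Section 4.3.

```python
def method3_level(num_stegos, num_guardians, r_j, q):
    """q = [q_1, ..., q_k]; secret group j is revealed only if at least r_j
    stegos arrive, and the guardian count sets the quality level."""
    if num_stegos < r_j:
        return 0
    return sum(1 for qi in q if num_guardians >= qi)

q = [2, 2, 3]   # type-q parameters of experiment 3; r = (4, 5, 6)

# 4 stegos, 2 of them guardians: group 1 (r_1 = 4) is revealed at level 2,
# because Key_1 and Key_2 (thresholds q_1 = q_2 = 2) can be reconstructed
assert method3_level(4, 2, 4, q) == 2
assert method3_level(4, 2, 5, q) == 0   # group 2 needs at least 5 stegos
assert method3_level(6, 3, 6, q) == 3   # all guardians present: lossless
```

Note that without any guardian stego the level is always 0, regardless of how many ordinary stegos arrive, since no key can be reconstructed.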
4 Experimental results
We conducted experiments 1, 2, and 3 for methods 1, 2, and 3, respectively. We utilized the six 512 × 512 cover images, {Barbara, Lake, Couple, Baboon, Indian, Bridge}, shown in Figure 3 in all the experiments. We also utilized the six secret images, {House, Cameraman, Lena, Pepper, Jet, Blonde}, shown in Figure 4 in each experiment; however, because of the limitations imposed on size by the different methods, the width and height of each secret image were smaller in experiments 1 and 2, and larger in experiment 3.
We measured the quality of each stego image and each recovered image using the peak signal-to-noise ratio (PSNR), defined as

PSNR = 10 × log_10 (255^2 / MSE) dB.

Here, the mean square error (MSE) is given by

MSE = (1 / (height × width)) × Σ_i Σ_j (pixel_ij − pixel′_ij)^2

for an image with height × width pixels, where pixel_ij and pixel′_ij are, respectively, the values of the pixel at position (i, j) of the two compared images. For the readers' convenience, the structural similarity index [25] (SSIM) is also listed. Notably, the better the image quality, the closer the SSIM value is to 1.
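The two formulas can be checked with a tiny pure-Python sketch (function names are ours; practical code would use numpy over full 512 × 512 arrays).

```python
import math

def mse(a, b):
    """Mean square error between two equally sized 2-D pixel arrays."""
    h, w = len(a), len(a[0])
    return sum((a[i][j] - b[i][j]) ** 2 for i in range(h) for j in range(w)) / (h * w)

def psnr(a, b):
    """PSNR in dB for 8-bit images; identical images give infinity."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * math.log10(255 ** 2 / e)

img   = [[100, 102], [98, 101]]
noisy = [[101, 102], [97, 100]]
assert mse(img, noisy) == 0.75          # (1 + 0 + 1 + 1) / 4
assert 49 < psnr(img, noisy) < 50       # 10 * log10(255^2 / 0.75) ≈ 49.38 dB
```

This also makes concrete why lossless recovery is reported without a PSNR value: MSE = 0 drives the ratio to infinity.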
4.1 Experimental results for method 1
In this experiment, the inputs comprised the six 128 × 128 secret images and the six 512 × 512 cover images (Figure 3). We divided the six secret images into three groups according to their sensitivity levels, [House, Cameraman], [Lena, Pepper], and [Jet, Blonde], and used [(r_11 & r_12 & r_13); n] = [(2&3&4); 6] in the progressive sharing to distribute the first group's images, [House, Cameraman], among {share_1, …, share_6}. Similarly, we used [(r_21 & r_22 & r_23); n] = [(3&4&5); 6] in the progressive sharing to distribute the second group's images, [Lena, Pepper], among {share_1, …, share_6}. Finally, we used [(r_31 & r_32 & r_33); n] = [(4&5&6); 6] in the progressive sharing to distribute the third group's images, [Jet, Blonde], among {share_1, …, share_6}.
To complete the encoding, we constructed the first shadow by integrating all share_1s, the second shadow by integrating all share_2s, and so on. Finally, we used the JPEG data hiding method [17] to hide the six shadows in the respective six JPEG codes of the six cover images.
In the decoding phase, for the scenario where two of the six stego images were available, we first extracted the two shadows hidden in the two available stego images. Then, by inverse sharing and because r_11 = 2, we were able to recover the low-quality version of both secret images, [House, Cameraman], in group 1. For the scenario where three of the six stego images were available, we first extracted the three shadows hidden in the three available stego images. Then, by inverse sharing and because r_21 = 3 and r_12 = 3, we were able to recover the low-quality version of both secret images, [Lena, Pepper], in group 2, and the medium-quality version of both secret images, [House, Cameraman], in group 1, respectively.
Similarly, for the scenario where any four of the six stego images were available, because r_31 = 4, r_22 = 4, and r_13 = 4, we were able to recover the low-quality version of both secret images, [Jet, Blonde], in group 3, the medium-quality version of both secret images, [Lena, Pepper], in group 2, and the lossless version of both secret images, [House, Cameraman], in group 1, respectively. For the scenario where any five of the six stego images were available, because r_32 = 5, r_23 = 5, and r_13 = 4 < 5, we were able to recover the medium-quality version of both secret images, [Jet, Blonde], in group 3, and the lossless version of each secret image in the second group, [Lena, Pepper], and the first group, [House, Cameraman], respectively. Finally, for the scenario where all six stego images were available, because r_j3 ≤ 6 for each j = 1, 2, 3, we were able to recover all six secret images without error, irrespective of the group to which they belonged.
Table 1 shows the quality of the progressively recovered secret images during the decoding phase. Note that, after encoding, when we decompressed the six JPEG stego codes, which contained the hidden secrets, the PSNRs of the decompressed images were between 39.6 and 42 dB, as shown in Table 1. The quality of the images revealed at level 1 (i.e., the low-quality version) is between 24.95 and 27.33 dB, and the quality revealed at level 2 (i.e., the medium-quality version) is between 30.35 and 33.09 dB. The secret images were recovered without errors at level 3 of the reconstruction.
4.2 Experimental results for method 2
In this experiment, the input comprised the six 128 × 128 secret images, the six 512 × 512 cover images (Figure 3), and three weight values {1, 2, 3}. The six secret images were again divided into three groups according to their sensitivity levels: [House, Cameraman], [Lena, Pepper], and [Jet, Blonde]. Then, in the progressive sharing, [(r_11 & r_12); n] = [(3&4); 6] was used to distribute each of the first group's secret images, [House, Cameraman], among {share_1, …, share_6} so that a 'rough' recovery of any image (say, House) in this group would need r_11 = 3 shares, whereas the lossless recovery of that image would need r_12 = 4 shares. Similarly, we used [(r_21 & r_22); n] = [(4&5); 6] in the progressive sharing to distribute each of the second group's images, [Lena, Pepper], among {share_1, …, share_6}. Finally, we used [(r_31 & r_32); n] = [(5&6); 6] in the progressive sharing to distribute each of the third group's images, [Jet, Blonde], among {share_1, …, share_6}.
The first shadow was generated by integrating the share_1s of all six secret images, the second by integrating the share_2s, and so on. Let SS denote the size of a shadow. We also partitioned the six cover images into three groups, [Barbara, Lake], [Couple, Baboon], and [Indian, Bridge], and assigned them weights 1, 2, and 3, respectively. Finally, we bound all the JPEG codes of the cover images of the first cover group together as a unit. Then, we treated this unit as a cover medium and used the JPEG data hiding method [17] to hide only one shadow in this cover medium (only one because w_1 = 1). The shadow hidden here was shadow #1, with size w_1 × SS = SS. Then, we bound all the JPEG codes of the cover images of the second cover group together as a unit and used the hiding method [17] to hide two (= w_2) shadows (i.e., shadows #2 and #3) in this unit; thus, the secret data hidden in the second cover group had size w_2 × SS = 2 × SS. Finally, we bound all the JPEG codes of the cover images of the third cover group together as a unit and hid in this unit three (= w_3) shadows (i.e., shadows #4, #5, and #6). Hence, the data hidden in the third cover group had size w_3 × SS = 3 × SS.
For the scenario where we received all stego JPEG codes of the first cover group, [Barbara, Lake], we extracted the only shadow (i.e., shadow #1) hidden in that group. However, nothing could be displayed because the weight, w_1 = 1, was too small. When we did not receive the first cover group but instead received all the stego JPEG codes of the second cover group, [Couple, Baboon], we were only able to extract the two shadows (i.e., shadows #2 and #3) hidden in the second cover group. Again, nothing could be displayed because the weight w_2 = 2 was still not sufficiently large. Finally, for the scenario where we received neither the first nor the second cover group but received all the stego JPEG codes of the third cover group, [Indian, Bridge], we first extracted the three shadows (i.e., shadows #4, #5, and #6) hidden in the third cover group. Then, by inverse sharing and because threshold r_11 = 3, we were able to recover the low-quality version of each secret image in the first secret group, [House, Cameraman].
For the scenario where we received all four stego JPEG codes of both the first cover group, [Barbara, Lake], and the second cover group, [Couple, Baboon], we first extracted the w_1 = 1 shadow hidden in the first cover group and then the w_2 = 2 shadows hidden in the second cover group. Thus, we extracted w_1 + w_2 = 1 + 2 = 3 shadows, namely {shadows #1, #2, and #3}. Then, by inverse sharing and because r_11 = 3, we were able to recover the low-quality version of each secret image in the first secret group, [House, Cameraman].
Similarly, for the scenario where we received all four stego JPEG codes of both the first cover group, [Barbara, Lake], and the third cover group, [Indian, Bridge], we were able to extract w_1 + w_3 = 1 + 3 = 4 shadows, namely {shadows #1, #4, #5, and #6}. Thus, because r_21 = 4 and r_12 = 4, we were able to recover the low-quality version of each secret image in the second secret group, [Lena, Pepper], and the lossless version of each secret image in the first secret group, [House, Cameraman], respectively.
For the scenario where we received all four stego JPEG codes of both the second cover group, [Couple, Baboon], and the third cover group, [Indian, Bridge], we were able to extract w_2 + w_3 = 2 + 3 = 5 shadows, namely {shadows #2, #3, #4, #5, and #6}. Thus, because r_31 = 5, r_12 = 4 < 5, and r_22 = 5, we were able to recover the low-quality version of each secret image in the third secret group, and the lossless version of each secret image in the first and second secret groups, respectively. Finally, for the scenario where we received all six stego JPEG codes, because r_j2 ≤ 6 for each j = 1, 2, 3, we were able to recover all six secret images without errors, irrespective of the secret group to which they belonged.
Table 2 shows the quality of the progressively recovered secret images during the decoding phase. Note that, in the encoding phase, when we decompressed the six JPEG stego codes, which contained the hidden secrets, the PSNRs of the decompressed images were between 39.26 and 44.17 dB. The quality of the revealed secret images at level 1 (i.e., the low-quality version) of the reconstruction was between 28.21 and 30.71 dB. All secret images at level 2 of the reconstruction were recovered without errors.
4.3 Experimental results for method 3
Here, the input comprised the six 232 × 232 secret images, the six 512 × 512 cover images (Figure 3), three positive integer parameters {4 ≤ 5 ≤ 6} for secret image sharing (the values 4, 5, and 6 are for image groups 1, 2, and 3, respectively), three keys for encryption, and three integers {q_1 = 2, q_2 = 2, q_3 = 3} called type-q progressiveness parameters for the sharing of 'keys'.
We again divided the six secret images into three groups according to their sensitivity levels: secret group 1, lowest sensitivity, comprised [House]; secret group 2, moderate sensitivity, comprised [Cameraman, Lena]; and secret group 3, highest sensitivity, comprised [Pepper, Jet, Blonde]. Then, we encrypted each secret image using all three keys.
We then used (r _{1}, n) = (4, 6) in secret sharing to share the first secret group, i.e., to share the encrypted House, among {share_{1} … share_{6}}. Similarly, we used (r _{2}, n) = (5, 6) in secret sharing to share the second secret group (i.e., the encrypted Cameraman and the encrypted Lena) among {share_{1} … share_{6}}. Finally, we used (r _{3}, n) = (6, 6) in secret sharing to share the third group's encrypted secret images [Pepper, Jet, Blonde] among {share_{1} … share_{6}}. The first shadow was generated by integrating the share_{1}s of all six secret images, the second by integrating the share_{2}s of all six secret images, and so on.
The thresholds to share the three keys {Key _{1}, Key _{2}, Key _{3}} were, respectively, q _{1} = 2, q _{2} = 2, and q _{3} = 3. Hence, for i = 1, 2, 3, we used (q _{ i }, 3) sharing to share the numerical value Key _{ i } among k = 3 key-shares, so that any q _{ i } of the k = 3 generated key-shares (of Key _{ i }) could recover Key _{ i }. Then, for i = 1, 2, 3, we combined the ith key-shares of all three keys to get the ith key-shadow.
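The (q _{ i }, 3) key sharing above is standard polynomial (Shamir-style) threshold sharing. A minimal sketch of the idea follows; the function names and the prime modulus 257 are our illustrative choices, not details from the paper:

```python
import random

PRIME = 257  # a prime larger than any 8-bit secret value (illustrative choice)

def make_shares(secret, threshold, n):
    """Shamir (threshold, n) sharing: evaluate a random polynomial of degree
    threshold - 1 with constant term `secret` at x = 1..n."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, e, PRIME) for e, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any
    `threshold` (or more) shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

For example, sharing Key _{1} with threshold q _{1} = 2 among 3 key-shares means any two of the three shares suffice to recover it, exactly as in the recovery scenarios discussed below.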
Next, we used the JPEG data hiding method [17] to hide the six image-shadows in the respective six JPEG codes of the six cover images. Finally, we chose k = 3 of the six cover images and hid the k = 3 key-shadows in the k = 3 JPEG codes of the chosen k = 3 cover images; for example, {Barbara, Lake, Couple}. These three stego images were designated the ‘guardian stegos’.
With any two of the three guardian stegos, we were able to first extract the two key-shadows hidden in them. Then, by the inverse progressive sharing process, we were able to recover Key _{1} and Key _{2}, because their thresholds were q _{1} = 2 and q _{2} = 2, respectively. Because two of the three guardian stegos were already available, when any two of the three non-guardian stegos were also available, we had 2 + 2 = 4 shares. We first extracted the four image-shadows hidden in the four available stego images (i.e., two guardian stegos and two non-guardian stegos). Then, by inverse sharing and decryption, and because r _{1} = 4, we were able to recover the low-quality version of the secret image in the first group. Although we had only two keys (instead of three), we were still able to decrypt the first several (low-quality) DCT coefficients (see step 2 in Section 3.3.1). This is why low-quality versions of an image can be decrypted even when not all three keys are available.
Let us now consider another scenario. We assumed that two of the three guardian stegos were already available; hence, {Key _{1}, Key _{2}} were known. Consequently, because all 6 − 3 = 3 non-guardian stegos were available, we first extracted the 2 + 3 = 5 image-shadows hidden in the five available stego images. Then, by inverse sharing and decryption, the low-quality version of each secret image in the second group was recovered, because the threshold for the second image group was assumed to be r _{2} = 5. However, since the total number of stego images received was only five, the secret images in the third group still could not be recovered, because the threshold for the third image group was assumed to be r _{3} = 6.
For the scenario where all three guardian stegos were available, we first extracted the three key-shadows hidden in the three guardian stegos. Then, by inverse sharing, we were able to recover all three keys, because the largest threshold for the keys was assumed to be q _{3} = 3 when we earlier distributed the three keys among the three key-shadows. Since all three guardian stegos were now available, if any one of the 6 − 3 = 3 non-guardian stegos was also available, then, since all three keys were already extracted, we were able to recover the lossless version of the secret image in the first group, because 3 + 1 = 4 and the threshold for image group 1 was assumed to be r _{1} = 4. In the case where (any) two of the three non-guardian stegos were available, we were able to recover the lossless version of each secret image in the second group, because 3 + 2 = 5 and the threshold for image group 2 was assumed to be r _{2} = 5. Finally, for the scenario where all three non-guardian stegos were available, we were able to recover the lossless version of each secret image in the third group, because 3 + 3 = 6 and the threshold for image group 3 was assumed to be r _{3} = 6. The experimental results are listed in Table 3. Note that the thresholds for the sharing of the three keys are {q _{1} = q _{2} = 2, and q _{3} = 3}; thus, there are only two levels to control the recovery of the keys, namely, the collection of two guardian stegos versus the collection of three guardian stegos. Thus, to view secret images, the effective number of progressive levels is also only two.
Note also that, if only one of the three guardian stegos is available, then no secret image can be recovered, even if all three non-guardian stegos are available. This is because the three encryption keys are shared and hidden in the guardian stegos, and the smallest threshold q _{1} = min{q _{1}, q _{2}, q _{3}} to recover at least one key was already set to q _{1} = 2.
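The recovery logic walked through in this method-3 experiment can be condensed into a small sketch. The helper below is ours (not code from the paper); its defaults are the r and q values used above, and it reports, for each secret group, how many of the three DCT regions can be decrypted (0 means nothing is recoverable):

```python
def method3_recovery(guardians, others, r=(4, 5, 6), q=(2, 2, 3)):
    """For each secret group j, return the number of key-regions that can be
    decrypted, given `guardians` guardian stegos and `others` non-guardian
    stegos, under thresholds r = (r_1, r_2, r_3) and q = (q_1, q_2, q_3)."""
    total = guardians + others
    # number of keys recoverable from the available guardian stegos
    keys = sum(1 for qi in q if guardians >= qi)
    levels = []
    for rj in r:
        if total >= rj and keys > 0:
            levels.append(keys)   # `keys` of the k DCT regions are decryptable
        else:
            levels.append(0)      # encrypted shares alone reveal nothing
    return levels
```

For instance, with one guardian and all three non-guardian stegos, the result is [0, 0, 0], matching the observation above; with two guardians and two non-guardians, only group 1 becomes (partially) recoverable.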
5 Discussion and comparison
5.1 Summary and discussion
Our proposed method 1 is a progressive sharing sensitivity-controlled decoding method; i.e., the decoding is conducted according to the sensitivity level of each image. Images with the same sensitivity level constitute a group. Each secret image in an image group is shared among n shares, and the shares of all images are properly combined to get n shadows of equal significance; consequently, there is no need to worry about which shadow is lost or transmitted first. The n shadows are hidden in the JPEG codes of n cover images to get n stego JPEG codes. If the number of received stegos reaches the lowest threshold of an image group, then the rough version of each secret image in that group can be revealed. The more stegos received, the better the quality of the recovered secret images. In particular, when the number of received stego images reaches the highest threshold (considering all thresholds for all groups), all secret images in all groups can be recovered without errors.
Our proposed method 2 is also a progressive sharing sensitivity-controlled decoding method; however, it differs from method 1 in that ‘weights’ are used in method 2. We divide the n ‘cover’ images into several groups and equip each cover group with a weight specially assigned to that group. Then, according to the weight of each cover group, we hide some of the n secret shadows in the cover group.
Subsequently, decoding is conducted according to the total sum of the weights of the received cover groups. If the sum of the received weights reaches the lowest threshold of a secret group, then all secret images of that secret group can be recovered at low quality. The larger the sum of the received weights, the better the quality of the recovered secret images. Finally, if the sum of the received weights reaches the highest threshold of a secret group, then the recovered secret images of that secret group are lossless.
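This weighted decoding rule can be sketched in a few lines (a hypothetical helper of ours, not code from the paper): the reachable progressive level for a secret group is the highest threshold index that the summed weight of the received cover groups meets.

```python
def method2_level(received_weight, thresholds):
    """Return the highest progressive level reachable for a secret group with
    thresholds (r_j1 <= ... <= r_jk), given the summed weight of the received
    cover groups; 0 means nothing can be recovered."""
    level = 0
    for i, r in enumerate(sorted(thresholds), start=1):
        if received_weight >= r:
            level = i
    return level
```

For example, with thresholds (4, 5, 6), a received weight of 5 reaches level 2, while a received weight of 3 recovers nothing.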
Both progressive methods (methods 1 and 2) increase the shadow size when multiple thresholds are used. Therefore, in our proposed method 3, we use a different technique to progressively share multiple secret images. In method 3, if the number of received guardian stegos reaches the lowest threshold, then the rough version of each secret image in a secret group can be revealed, as long as the number of received stego images also reaches the threshold value of that secret group. The more guardian stegos received, the better the quality of the recovered secret images, as long as the number of received stego images also reaches the relevant threshold values. In particular, when the number of received guardian stegos reaches the highest threshold value, all secret images can be recovered without errors, again as long as the number of received stego images also reaches the relevant threshold values.
Compared with methods 1 and 2, method 3 has a tighter restriction in the recovery phase: nothing can be displayed without a sufficient number of guardian stegos. Therefore, methods 1 and 2 are more suitable for a public company whose owners are (public) stockholders: the more shadows (stocks) or weights that appear at the meeting, the more secret details can be unveiled. In contrast, method 3 is more suitable for a family-owned private company in which all decision-making must first get the permission of the persons in charge or, at least, the majority agreement of the committee board.
In Table 4, we list the advantages and disadvantages of the three proposed methods. Regarding stability, method 1 is the most stable, as explained below. In method 2, the recovered versions of the secret images are identical to those of method 1. However, the stego images' quality is less stable in method 2, because it is influenced by the matching between the weights {w _{ i }} and the hiding capacities of the cover groups {CG_{ i }}. When one of the weights is particularly large, the instability becomes obvious. For example, if {w _{1} = 1, w _{2} = 1, w _{3} = 4} and the three cover groups have similar hiding capacities, then distinct stego groups might have very distinct qualities. In Table 2, where {w _{1} = 1, w _{2} = 2, w _{3} = 3}, the quality of the image Bridge, which is in stego group 3, is also worse than that of {Barbara, Lake} in stego group 1. Finally, method 3 is also less stable than method 1, because some assignments of the type-r parameter values might cut the effective number of progressive levels, as will be seen in Table 5 and a paragraph near the end of Section 5.5.
Now we analyze the precision of the recovered secret images. Since all three methods can produce error-free recovery as the highest-quality recovery, we focus our comparison on the lowest-quality version, i.e., the recovery at level 1. Methods 1 and 2 give identical recovered versions of the secret images, so we only need to compare method 1 to method 3. In method 1, as analyzed in Section 5.5, when (r _{ j1} ≤ r _{ j2} ≤ … ≤ r _{ jk }) are utilized as the k progressive thresholds to share an image in secret group j, the quality of the lowest version is determined by the ratio r _{ j1}/(r _{ j1} + r _{ j2} + … + r _{ jk }). The larger the ratio, the better the precision. Therefore, the best level 1 quality occurs when k = 2 and r _{ j1}/(r _{ j1} + r _{ j2} + … + r _{ jk }) = r _{ j1}/(r _{ j1} + r _{ j2}) is almost 1/2. In this case, as analyzed in Section 5.5, about r _{ j1}/(r _{ j1} + r _{ j2}) = 50% of the rearranged DCT data are utilized to recover the level 1 version. On the other hand, for method 3, if q _{1} of the k guardian stegos are available, then the quality of the lowest version is determined by the ratio area(region 1) ÷ [area(region 1) + … + area(region k)], where {region 1, …, region k} are the k nonoverlapping regions that partition the DCT coefficients in step 2.2 of Section 3.3.1. Since we have the freedom to assign any percentage of the DCT data to region 1, this area ratio can be as low as 1% or as high as 99%. Compared to the 50% of method 1, the lowest-quality recovery of method 3 can thus be either worse or better than that of method 1. The precision comparison between the methods is therefore case-by-case and inconclusive.
5.2 Comparison with reported methods
Our methods are progressive sharing methods. Functionality comparisons between our methods and various other progressive sharing schemes are shown in Table 6. In our methods, decoding proceeds according to the sensitivity levels of the different secret groups. In Table 6, all the other schemes consider one secret image instead of multiple secret images. Furthermore, note that, in our method 2, distinct groups of ‘cover’ images are also assigned distinct weights.
The shadow size of method 1 is equal to that of method 2; method 3 has the smallest shadow size. As shown in Table 7, the shadow size is small in each of our three methods; thus, the shadow can be easily hidden in the JPEG codes of the cover images. The sizes associated with the various methods are given below. In the traditional (t, n) secret sharing method, the size of the shadow is only 1/t of the original secret data. In our proposed methods 1 and 2, when we use the [(r _{1}&r _{2}&…&r _{ k }); n] progressive sharing method to share some secret data, the size of each shadow is k/(r _{1} + r _{2} + … + r _{ k }) times the size of the original secret data, as explained below. We process r _{1} + r _{2} + … + r _{ k } values together each time. The first r _{1} values are shared by (r _{1}, n) sharing; thus, each shadow receives one value after sharing these r _{1} values. Similarly, the next r _{2} values are shared by (r _{2}, n) sharing; thus, each shadow receives one value after sharing these r _{2} values, and so on. Therefore, considering the sharing of these r _{1} + r _{2} + … + r _{ k } values, it is obvious that each shadow receives 1 + 1 + … + 1 = k values generated from their sharing. As a result, the size of each shadow is k/(r _{1} + r _{2} + … + r _{ k }) of the original size of the secret. Note that k is the number of progressiveness thresholds, {r _{1}…r _{ k }}, being used. Therefore, if the maximal threshold r _{ k } of a progressive sharing, [(r _{1}&r _{2}&…&r _{ k }); n], is equal to the single threshold t of non-progressive sharing, then both the progressive and non-progressive schemes can recover the original data without errors when t shares are received. However, the inequality

k/(r _{1} + r _{2} + … + r _{ k }) > 1/r _{ k } = 1/t
tells us that the shadow size generated by progressive sharing is larger than that generated by non-progressive sharing. This is the price of being progressive. For instance, comparing a (4, n) non-progressive share and a [(3&4); n] progressive share, both schemes can recover secrets without errors when four shadows are received. However, if only three shadows are received, then the progressive scheme can still recover a ‘rough’ version, whereas the non-progressive scheme cannot. The shadow size generated by (4, n) non-progressive sharing is S/4 = 0.25S (assuming that S is the size of the secret file), whereas the shadow size generated by [(3&4); n] progressive sharing is 2S/(3 + 4) = 0.286S. If the number of progressive levels increases, for example, from the two-level scheme [(3&4); n] to the three-level scheme [(2&3&4); n], then the shadow size also increases and becomes 3S/(2 + 3 + 4) = 0.333S.
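The shadow-size accounting above is easy to check numerically. A minimal sketch (the helper name is ours), returning the shadow size as a fraction of the secret size:

```python
from fractions import Fraction

def shadow_ratio(thresholds):
    """Shadow size as a fraction of the secret size for progressive sharing
    with thresholds (r_1 & r_2 & ... & r_k): each group of r_1 + ... + r_k
    secret values contributes k values to every shadow."""
    k = len(thresholds)
    return Fraction(k, sum(thresholds))
```

For example, `shadow_ratio((4,))` gives 1/4 = 0.25S for the (4, n) non-progressive scheme, `shadow_ratio((3, 4))` gives 2/7 ≈ 0.286S, and `shadow_ratio((2, 3, 4))` gives 1/3 ≈ 0.333S, matching the figures above.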
Table 7 compares the shadow sizes of various progressive sharing schemes. Assume that the given secret image is the 512 × 512 grayscale image Lena and that the progressiveness thresholds are [(3&4&5&6), 6] for ‘all’ schemes. In Table 7, since our proposed methods 1 and 2 are designed for multiple secrets, we let the first secret group have only one secret image (Lena), and all other secret groups be empty (have no secret image). Then, we use [(r _{11}&r _{12}&r _{13}&r _{14}), n] = [(3&4&5&6), 6] as the (local) progressiveness thresholds for the secret group containing Lena. As Table 7 shows, among all lossless methods, our scheme has the smallest shadow size. Our shadow size is even smaller than that of the lossy method proposed by Hung et al. [5]. A small shadow size is important in every sharing method: if the shadow is small, the n shadows can be transmitted quickly, storage space can be saved, and the shadows can be hidden easily in other media.
Table 8 summarizes the PSNRs of the recovered images and stego images when three progressive levels were used in each reported progressive sharing scheme. Our three methods consider multiple images, so we have a PSNR range rather than a single PSNR value. Each reported method has its own parameter settings, which are too tedious to list, so almost all experimental values in Table 8 were quoted directly from the cited papers. From Table 8, we can see that, like most other single-secret progressive methods, all three of our multiple-secret methods can achieve lossless recovery of the secret images; moreover, our impact on the host images is the smallest, because our stego images have the highest PSNR values.
5.3 Parameters' values
5.3.1 Parameters of methods 1 and 2

k (the number of progressive levels)
We suggest the use of k = 2 or k = 3. Use k = 2 if the user wishes the recovery of each secret image to have two levels (low quality vs. lossless). Use k = 3 if the user wishes the recovery of each secret image to have three levels (low quality, medium quality, and lossless). The value of k should not be large, because each 8-by-8 DCT block has only 64 coefficients, many of which are zeros. For example, using k = 8 is impractical: {r _{ j1}, r _{ j2},…, r _{ j8}} = {2, 3, 4, 5, 6, 7, 8, 9} implies that every RSUM = 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 = 44 DCT coefficients are bound together and processed as a unit in the progressive sharing process, but the 64 DCT coefficients of a block might not contain that many nonzero values. As a result, either the method cannot be implemented or shadow space is wasted in the sharing process.

r _{ j1}
For each secret image group, for example, secret group j, both methods 1 and 2 utilize k progressive threshold values (2 ≤ r _{ j1} ≤ r _{ j2} ≤ … ≤ r _{ jk } ≤ n). Here, r _{ j1} means that people cannot recover any version of the images in secret group j unless at least r _{ j1} stegos are received (in method 1), or unless the sum of the weights of the received cover groups is at least r _{ j1} (in method 2). Therefore, we suggest the use of a large value for r _{ j1} (for example, r _{ j1} = ⌈n/2⌉ or ⌈2n/3⌉) if the images in secret group j are very sensitive; otherwise, just use a small value for r _{ j1} (for example, r _{ j1} = 2 or 3). Note that r _{ j1} = ⌈n/2⌉ means at least one half of the shadows must be available before the lowest-quality version of the sensitive images can be reconstructed.

r _{ jk }
Since r _{ jk } is related to the lossless recovery of the images in secret group j, let r _{ jk } be a very large value (for example, n or n − 1) if secret group j is very sensitive. Notably, using r _{ jk } = n means that all cover images must be received before the lossless version of secret group j can be recovered. Using a large value for the parameter r _{ jk } can prevent theft of the lossless version of the images in secret group j; however, it also weakens the missing-allowable benefit of the sharing approach. Therefore, do not let r _{ jk } be as large as n or n − 1, unless the images in secret group j are very sensitive.

The weights {w _{1}, w _{2}, …, w _{ T }} and the cover groups (CG)
Method 2 uses T weights {w _{1}, w _{2}, …, w _{ T }} with w _{1} + w _{2} + … + w _{ T } = n. The n given cover images are assigned to T cover groups, and cover group j (CG_{ j }) hides w _{ j } shadows, for each j = 1, 2, …, T. Therefore, CG_{ j }, by which we also denote the number of cover images in the group, cannot be small if w _{ j } is large. To explain this, without loss of generality, assume that there are, again, n = 6 secret images and n = 6 cover images. If T = 3 and the weights of the three cover groups are w _{1} = 1, w _{2} = 1, and w _{3} = 4, respectively, then the numbers of cover images in the three cover groups can be {1, 1, 4} or {1, 2, 3} or {2, 2, 2}. Among them, the combination {2, 2, 2} is the worst, as explained below. Since {CG_{ j }}_{ j = 1, 2, 3} = {2, 2, 2}, one shadow (w _{1} = 1) is hidden in the two cover images of CG_{1}; one shadow (w _{2} = 1) is hidden in the two cover images of CG_{2}; but four shadows (w _{3} = 4) are hidden in the two cover images of CG_{3}. The impact on the two cover images of CG_{3} will be too large, because each cover image must hide (w _{3}/CG_{3}) = (4/2) = 2 shadows. Therefore, to avoid the quality of some stego images becoming too low, we suggest that CG_{ j } not be small if w _{ j } is large.
In the last paragraph, when {w _{1} = 1, w _{2} = 1, w _{3} = 4}, although the partition {CG_{ j } = w _{ j }} = {1, 1, 4} of the six cover images will not cause a big impact on the cover images, the fact that CG_{3} = 4 means that these four cover images are bound together, and the stego file size of cover group 3 is very large (four times larger than that of cover group 1). This might cause the manager of group 3 some inconvenience. As a compromise between stego quality and convenience, {CG_{ j }} = {1, 2, 3} might be the best trade-off when {w _{1} = 1, w _{2} = 1, w _{3} = 4}.
The other reason why we do not use {CG_{ j } = w _{ j }}_{ j=1, 2, 3} = {1, 1, 4} is as follows. Using CG_{ j } = w _{ j } for every j makes method 2 very similar to method 1: each cover image hides exactly one shadow. Hence, the quality of both the stegos and the recovered secret images is identical between methods 1 and 2. The only difference is that the stegos in method 1 are not bound together to form stego groups. So, from the viewpoint of recovery, if CG_{ j } = w _{ j } ∀j = 1, 2, …, T, then method 1 is more convenient than method 2. In method 1, the number of shadows available for decoding can be any number from 1 through n, because the number of received shadows equals the number of received stegos. In method 2, however, only the weight sums w _{1} = 1, w _{1} + w _{2} = 2, w _{3} = 4, w _{1} + w _{3} = 5 (= w _{2} + w _{3}), and w _{1} + w _{2} + w _{3} = 6 are possible numbers of received shadows. In this case, recovery using three shadows will never happen. Therefore, if 3 is one of the progressive threshold values (2 ≤ r _{ j1} ≤ r _{ j2} ≤ … ≤ r _{ jk } ≤ 6) for some secret group j, then the images in that group will not have k-level progressive decoding, provided that method 2 is used and {CG_{1} = 1 = w _{1}; CG_{2} = 1 = w _{2}; CG_{3} = 4 = w _{3}}.
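The gap in the achievable shadow counts can be verified by enumerating weight subsets (a hypothetical helper of ours):

```python
from itertools import combinations

def achievable_counts(weights):
    """All shadow counts obtainable by receiving some nonempty subset of the
    cover groups, when cover group j carries w_j shadows."""
    counts = set()
    for r in range(1, len(weights) + 1):
        for combo in combinations(weights, r):
            counts.add(sum(combo))
    return sorted(counts)
```

For weights [1, 1, 4], the achievable counts are [1, 2, 4, 5, 6], so the count 3 never occurs, whereas [1, 2, 3] yields every count from 1 to 6.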
5.3.2 Parameters of method 3

The threshold r _{ j } for the sharing of secret group j (j = 1, 2,…, t)
The basic requirement is r _{ j } ∈ {2, …, n}. Assign a very large value (for example, n or n − 1) to r _{ j } if secret group j is very sensitive. In general, using a larger r _{ j } value makes it harder to steal the images in secret group j; however, it also reduces the missing-allowable benefit of the sharing approach: the corruption of two stegos will make the recovery of secret group j almost impossible. Therefore, do not let r _{ j } be as large as n or n − 1, unless secret group j is very sensitive. On the other hand, use r _{ j } = 2 or r _{ j } = 3 if secret group j is of very low sensitivity.

The k progressiveness parameters [q _{1} ≤ q _{2} ≤ … ≤ q _{ k } = k] of type-q
For each j = 1, 2, …, t, if (any) r _{ j } of the n stego images are available, then, since r _{1} ≤ r _{2} ≤ … ≤ r _{ j }, we can reconstruct the ‘encrypted’ version of each secret image in {secret group 1, …, secret group j}. After that, there are k possible levels of decryption. Assume that the r _{ j } received stego images include q _{ m } guardian stegos. Then, we can recover the m keys {Key _{1}, …, Key _{ m }} and hence reconstruct m of the k regions of the DCT coefficients. Consequently, each image in secret groups {1, …, j} can be revealed with the mth-level recovery quality.
As for the ith progressiveness parameter q _{ i } in the set [q _{1} ≤ q _{2} ≤ … ≤ q _{ k } = k], just assign a large value (such as k or k − 1) to q _{ i } if we wish that the ithlevel recovery should not be done without the joint attendance of many guardian stegos.
Below we give some examples to illustrate how the values of the k progressiveness parameters [q _{1} ≤ q _{2} ≤ … ≤ q _{ k } = k] affect the decoding results. Assume that we have received r _{ j } stego images, so the ‘encrypted’ version of each secret image in secret groups 1, 2, …, j is known. Example 1: let [q _{1} = q _{2} = … = q _{ k−1} = 2, and q _{ k } = k]. If the r _{ j } received stego images include two or more guardian stegos, then we can recover almost every region of DCT coefficients; consequently, each image in secret groups {1, …, j} can be revealed with very good quality. Example 2: let [q _{1} = q _{2} = … = q _{ k−1} = k − 1, and q _{ k } = k]. If the r _{ j } received stego images include only k − 2 of the guardian stegos, then we still cannot decrypt any region of any secret image. Example 3: let [q _{1} = q _{2} = 2; and q _{ i } = i for i = 3, 4, …, k]. If the r _{ j } received stego images include two guardian stegos, then we can recover each image in secret groups {1, …, j} with level 1 quality. If three guardian stegos are included, then the recovery quality is better, i.e., of level 2. If four guardian stegos are included, then the recovery quality is of level 3, and so on. Thus, this produces a progressive effect with many levels.
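The effect of the type-q parameters on decryption can be condensed into one line. The helper below is ours; note that it counts how many keys, and hence DCT regions, become decryptable (the examples above count effective quality levels, which may be fewer when several q _{ i } coincide):

```python
import bisect

def decryptable_regions(q_params, guardians):
    """q_params is the list [q_1 <= ... <= q_k]; Key_i (and hence DCT
    region i) is decryptable iff at least q_i guardian stegos are available,
    so the answer is the number of q_i values not exceeding `guardians`."""
    return bisect.bisect_right(sorted(q_params), guardians)
```

For instance, with [q _{1} = q _{2} = 2, q _{3} = 3] as in Section 4.3, one guardian stego yields nothing, two yield two regions, and three yield all three.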

The sharing thresholds {r _{ j }} vs. the progressiveness parameters [q _{1} ≤ q _{2} ≤ … ≤ q _{ k } = k]
The influence of r _{ j } is local. For example, r _{ j } = 3 only means that at least three out of the n stegos are required in order to get the encrypted version of secret group j. In contrast, the influence of q _{ i } is global (across all secret groups), because q _{ i } = 3 means that at least three of the k guardian stegos must be available in order to decrypt region i of the DCT coefficients of all secret images in all secret groups.

k
Notably, in step 8 of the algorithm, we chose k of the n cover images to hide the respective k key-shadows to get the k guardian stegos. Hence, k < n is a natural requirement (k = n makes every stego a guardian stego, which is not quite meaningful; it is like a company in which every employee sits on the supervisory committee). On the other hand, k = 1 yields no progressive effect (because the k-level decryption reduces to one-level decryption). Thus, k ∈ {2, …, n − 1}. For example, in our experiments, since there were n = 6 images, k was chosen from {2, 3, 4, 5}. Of course, the larger the value of k, the more recovery levels there are. Notably, k also affects the image quality recovered at the lowest level, as explained below. In our algorithm, the DCT coefficients are partitioned into k regions. If the partition is uniform among regions, then a smaller value of k means that each region has more DCT coefficients; therefore, the level 1 reconstruction of the secret images will have better quality.
5.4 Steganography and security issues
The protection of the images in our system relies on several checkpoints: a) a stranger must extract the shadows that we hid in the JPEG stegos; b) the stranger must intercept a sufficient number of shadows (if he knows which stegos to intercept); c) we can send or store the stegos using distinct channels or computers, and our decoding tolerates the destruction of some channels or computers, owing to the missing-allowable property of threshold sharing; d) interceptors must intercept a sufficient number of stegos before they can attempt to obtain the sensitive images, because decoding with an insufficient number of shadows is extremely difficult, as analyzed later in this subsection; e) some of our methods use multiple keys, further increasing the difficulty for attackers.
The reasons that we use the JPEG data hiding method [17] to embed our n shadows in n JPEG codes are as follows: 1) Compared to spatial-domain stegos, JPEG code saves storage space and may also reduce the chance of attracting attackers. 2) JPEG compression disturbs the correlation between adjacent pixels of an image, so the permutation preprocessing employed in certain image sharing schemes [4,8,10] before sharing can be omitted. 3) As for the security of our JPEG codes, the size of the JPEG code (without hiding any secret), with quality factors in the range 10 to 95, is between 8,119 and 94,581 bytes for many gray-level images. The sizes of our JPEG stego codes listed in Tables 1, 2, and 3 are all in this range. Therefore, attackers will not be suspicious about the size of our JPEG stego codes. 4) The JPEG hiding method [17] has been shown to resist Chi-square [26] and StegDetect [27] attacks, reducing the chance that attackers notice the existence of our shadows in the JPEG stego codes. 5) In summary, hiding in the JPEG stego code is less noticeable.
Below we analyze the probability of obtaining a sensitive image through ‘guessing’ when the number of received shadows is less than the minimal requirement. In methods 1 and 2, for each image in secret group j, let the [(r _{ j1}&r _{ j2}&…&r _{ jk }); n] progressive sharing be utilized to get n shares, which share the DCT data of the image. Without loss of generality, let the image be 128 × 128, and [(r _{ j1}&r _{ j2}&…&r _{ jk }); n] = [(3&4); 6]. Therefore, RSUM = 3 + 4 = 7, and the image has (128 × 128)/(8 × 8) = 16 × 16 = 256 DCT blocks. In our experiments, about 21 of the 64 DCT coefficients of an image are nonzero on average. So, in the rearranged DCT data, each block has about 21 numbers. The first r _{ j1}/RSUM = 3/(3 + 4) = 3/7 of the 21 numbers are shared using (3, n) sharing, and the next r _{ j2}/RSUM = 4/(3 + 4) = 4/7 of the 21 numbers are shared using (4, n) sharing. In the decoding, if a person does not receive at least r _{ j1} shadows, for example, if he receives only r _{ j1} − 1 = 3 − 1 = 2 shadows, then for a three-coefficient polynomial like f(x) = a _{0} + a _{1} x + a _{2} x ^{2} = 109 + 23x + 83x ^{2}, although he knows f(1) and f(2), he still cannot know that the three coefficients are (109, 23, 83). The only thing he knows is a table, i.e., if a _{0} = 0, then (a _{1}, a _{2}) = …; if a _{0} = 1, then (a _{1}, a _{2}) = …; if a _{0} = 2, then (a _{1}, a _{2}) = …; and so on. Now, if each number is in the range 0 to 255, then the probability of guessing the correct (a _{0}, a _{1}, a _{2}) = (109, 23, 83) is 1/256. Since the first 9 of the 21 numbers use r _{ j1} = 3 as the threshold value, the chance that the stranger gets these 9 values from the two shadows that he owns is (1/256)^{9/3} = (1/256)^{3}. As for the next 21 − 9 = 12 numbers, since they are shared using r _{ j2} = 4 as the threshold value, the chance that the stranger gets these 12 values from r _{ j2} − 1 = 4 − 1 = 3 shadows is (1/256)^{12/4} = (1/256)^{3}.
However, for the 12 numbers shared using r _{ j2} = 4 as the threshold value, the probability that the stranger gets these 12 values from only two shadows is much less than (1/256)^{12/4} = (1/256)^{3}. For example, for a four-coefficient polynomial like g(x) = b _{0} + b _{1} x + b _{2} x ^{2} + b _{3} x ^{3} = 78 + 43x + 65x ^{2} + 114x ^{3}, although the stranger knows two values such as g(1) and g(2) from the two shadows, he still cannot know that the four coefficients are (78, 43, 65, 114). The only thing he knows is a two-dimensional table like ‘if (b _{0}, b _{1}) = (0, 0), then (b _{2}, b _{3}) = …; if (b _{0}, b _{1}) = (0, 1), then (b _{2}, b _{3}) = …;’ and so on. Consequently, if each number is in the range 0 to 255, then the probability of guessing the correct (b _{0}, b _{1}, b _{2}, b _{3}) = (78, 43, 65, 114) is (1/256)^{2}. Hence, the chance of getting these 12 values from two shadows is [(1/256)^{2}]^{12/4} = (1/256)^{6}. Together, the chance of getting the 9 + 12 = 21 values of a block from two shadows is (1/256)^{3} × (1/256)^{6} = (1/256)^{9}. For an image of w-by-h pixels, there are (w × h)/(8 × 8) blocks. So the chance of getting the rearranged DCT coefficients of the image is [(1/256)^{9}]^{(w×h)/(8×8)}. If the sensitive image is 128 × 128, then the chance is [(1/256)^{9}]^{16×16} = (1/256)^{9×256} = (256)^{−2,304} ≈ 10^{−5,548}. If the sensitive image is 512 × 512, then the chance is (1/256)^{9×4,096} = (256)^{−36,864} ≈ 10^{−88,777}.
As for method 3, for each image in secret group j, method 3 uses (r_{j}, n) sharing to create n shares which share the ‘encrypted’ DCT data of the image. Without loss of generality, still let the image be 128 × 128 and the sharing threshold be r_{j} = 3. Therefore, the image has (128 × 128)/(8 × 8) = 16 × 16 = 256 DCT blocks. Still assume that, on average, about 21 of the 64 DCT coefficients of a block are nonzero. Then, from the above analysis, in the decoding, if a person only receives r_{j} − 1 = 3 − 1 = 2 shadows, the chance that the stranger gets the 21 encrypted values of a block from the two shadows is (1/256)^{21/3} = (1/256)^{7}. For an image of w-by-h pixels, there are (w × h)/(8 × 8) blocks. So the chance of getting the encrypted DCT coefficients of the image is [(1/256)^{7}]^{(wh/64)}. If the sensitive image is 128 × 128, then the chance is [(1/256)^{7}]^{16×16} = (1/256)^{7×256} = (256)^{−1,792} ≈ 10^{−4,315}. If the sensitive image is 512 × 512, then the chance is (1/256)^{7×4,096} = (256)^{−28,672} ≈ 10^{−69,049}.
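The powers of ten quoted in the last two paragraphs follow from elementary logarithm arithmetic. The fragment below is an illustrative sketch only (the function name `guess_exponent` is ours); it computes the base-10 exponent of each guessing probability (1/256)^(bytes_per_block × #blocks).

```python
import math

LOG256 = math.log10(256)  # each unknown byte contributes a factor of 1/256

def guess_exponent(bytes_per_block, width, height):
    """Base-10 exponent of (1/256)^(bytes_per_block * number_of_8x8_blocks)."""
    blocks = (width * height) // 64  # number of 8x8 DCT blocks
    return bytes_per_block * blocks * LOG256

# Methods 1 and 2: 9 effective unknown bytes per block (3 + 6 from the analysis).
print(guess_exponent(9, 128, 128))  # ~5548.6, i.e., probability ~10^-5,548
print(guess_exponent(9, 512, 512))  # ~88777.4, i.e., ~10^-88,777
# Method 3: 7 unknown bytes per block.
print(guess_exponent(7, 128, 128))  # ~4315.6, i.e., ~10^-4,315
print(guess_exponent(7, 512, 512))  # ~69049.1, i.e., ~10^-69,049
```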
Notably, even with this very small chance, what the stranger gets using the two shadows is only the ‘encrypted’ DCT coefficients. He still needs to guess (a) the number of keys, (b) the value of each key (if the number of guardian stegos that he owns is below the required threshold value), (c) the encryption method that we used (the k distinct regions can use k distinct encryption methods), (d) the way we partitioned the nonzero coefficients into k regions, and so on.
5.5 Variation of parameters affects the recovery results of secret images
In the experiment of Section 4.1, the image House was progressively shared using (r_{11}&r_{12}&r_{13}) = (2&3&4). The current section discusses how the variation of the parameters' values affects the recovery results. Hence, let the new (r_{11}&r_{12}&r_{13}) be (3&4&5) and (4&5&6), respectively. As shown in Table 9, the reconstruction quality improves, i.e., MSE_{(4&5&6)} < MSE_{(3&4&5)} < MSE_{(2&3&4)}, in both level 1's and level 2's reconstructions. In fact, even if we replace the image House by other secret images, we still have MSE_{(4&5&6)} < MSE_{(3&4&5)} < MSE_{(2&3&4)}. Likewise, as shown in Table 10, in the two-level experiments, we also observe that MSE_{(5&6)} < MSE_{(4&5)} < MSE_{(3&4)} < MSE_{(2&3)}. This statement still holds when Lena or Pepper is replaced by other images. The reason is explained below. In our design for methods 1 and 2, the rearranged DCT data are shared. In fact, these data are partitioned into RSUM_{j} = r_{j1} + r_{j2} + … + r_{jk} parts. The first r_{j1} parts are shared using (r_{j1}; n) sharing, the next r_{j2} parts are shared using (r_{j2}; n) sharing, and so on. In the recovery on level 1, i.e., when r_{j1} of the n shadows are available, we can recover about r_{j1}/(r_{j1} + r_{j2} + … + r_{jk}) of the rearranged DCT data. In the recovery on level 2, i.e., when r_{j2} of the n shadows are available, we can recover about (r_{j1} + r_{j2})/(r_{j1} + r_{j2} + … + r_{jk}) of the rearranged DCT data, and so on. For example, in Table 9, when (r_{j1}, r_{j2}, r_{j3}) = (2, 3, 4), the recovery on level 1 can recover about r_{j1}/(r_{j1} + r_{j2} + r_{j3}) = 2/(2 + 3 + 4) = 2/9 of the rearranged DCT data. The recovery on level 2 can recover about (r_{j1} + r_{j2})/(r_{j1} + r_{j2} + r_{j3}) = (2 + 3)/(2 + 3 + 4) = 5/9 of the rearranged DCT data.
Analogously, when (r_{j1}, r_{j2}, r_{j3}) = (3, 4, 5), the recovery on level 1 can recover about r_{j1}/(r_{j1} + r_{j2} + r_{j3}) = 3/(3 + 4 + 5) = 3/12 = 25% of the rearranged DCT data. The recovery on level 2 can recover about (r_{j1} + r_{j2})/(r_{j1} + r_{j2} + r_{j3}) = (3 + 4)/(3 + 4 + 5) = 7/12 of the rearranged DCT data. Now, since

2/(2 + 3 + 4) < 3/(3 + 4 + 5) < 4/(4 + 5 + 6) and (2 + 3)/(2 + 3 + 4) < (3 + 4)/(3 + 4 + 5) < (4 + 5)/(4 + 5 + 6),

it is of no surprise that MSE_{(4&5&6)} < MSE_{(3&4&5)} < MSE_{(2&3&4)} in Table 9, in both level 1's and level 2's reconstructions. The inequality also implies that MSE_{(5&6)} < MSE_{(4&5)} < MSE_{(3&4)} < MSE_{(2&3)} in Table 10. The same analysis also tells us that the second-level reconstruction in Table 9 should be better than the first-level reconstruction in Table 10, for 2/(2 + 3) < 3/(3 + 4) < 4/(4 + 5) < 5/(5 + 6) < (2 + 3)/(2 + 3 + 4) < (3 + 4)/(3 + 4 + 5) < (4 + 5)/(4 + 5 + 6). Likewise, the first-level reconstruction in Table 10 should be better than the first-level reconstruction in Table 9, for 2/(2 + 3 + 4) < 3/(3 + 4 + 5) < 4/(4 + 5 + 6) < 2/(2 + 3) < 3/(3 + 4) < 4/(4 + 5). This phenomenon is depicted in Figure 5.
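The fraction arithmetic behind these inequalities is easy to verify mechanically. The sketch below (Python, exact rational arithmetic; the helper name `recovered_fractions` is ours) lists the cumulative fraction of rearranged DCT data recovered at each level for each threshold tuple.

```python
from fractions import Fraction

def recovered_fractions(thresholds):
    """Cumulative fraction of the rearranged DCT data recovered at each level."""
    rsum = sum(thresholds)
    acc, out = 0, []
    for r in thresholds:
        acc += r
        out.append(Fraction(acc, rsum))  # (r_j1 + ... + r_ji) / RSUM_j
    return out

# e.g., (2, 3, 4) -> 2/9, 5/9, 1; (3, 4, 5) -> 1/4, 7/12, 1; (4, 5, 6) -> 4/15, 3/5, 1
for t in [(2, 3, 4), (3, 4, 5), (4, 5, 6)]:
    print(t, [str(frac) for frac in recovered_fractions(t)])
```

At every level, the recovered fraction grows as the threshold tuple grows, which is exactly why the MSE shrinks in the same order.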
The above analysis is for methods 1 and 2 (these two methods have identical shadows, and their recovered versions of the secrets are also identical). Below, we analyze method 3. We obtained Table 5 when we repeated the experiment in Section 4.3, using k = 4 keys instead of k = 3 keys (so the number of guardian stegos also increases from 3 to 4), and using {q_{1} = 2, q_{2} = 2, q_{3} = 3, q_{4} = 4} instead of {q_{1} = 2, q_{2} = 2, q_{3} = 3}. Since we used (r, n) = (5, 6) in the secret sharing of [Cameraman, Lena], these two images cannot be viewed unless at least five stego images have been collected. On the other hand, k = 4 implies that the number of guardians is 4, and the number of non-guardians is n − k = 6 − 4 = 2. Hence, when five stegos are received, either there are three guardians (together with two non-guardians) or there are four guardians (together with one non-guardian). Since {q_{1} = 2, q_{2} = 2, q_{3} = 3, q_{4} = 4}, the recovery is either of high level (because q_{4} = 4) or of moderate level (because q_{3} = 3). There is no way to get five stegos in which two are guardians and three are not, because the whole system has only two non-guardians. Consequently, only the moderate and high levels are possible. Finally, if all six stegos are received, then the lossless version is recovered. From this example, we can see that, even though the set {q_{1} = 2, q_{2} = 2, q_{3} = 3, q_{4} = 4} has three distinct values, using (r, n) = (n − 1, n) = (5, 6) makes the number of progressive levels drop from three to two. On the other hand, for the image House, because (r, n) = (4, 6), when people receive four stegos, the image House can be fully recovered (if all four stegos are guardians), moderately recovered (if three of the stegos are guardians), or recovered with the lowest quality (if two of the four received stegos are guardians).
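The counting argument for the guardians can be checked exhaustively. The following sketch (Python; the guardian labels 0–3 are arbitrary placeholders of our own) enumerates every 5-subset of the six stegos and confirms that each one contains either three or four guardians, never two.

```python
from itertools import combinations

n, k = 6, 4                  # 6 stegos in total, of which k = 4 are guardians
guardians = set(range(k))    # label stegos 0..3 as guardians (hypothetical labels)
r = 5                        # sharing threshold of [Cameraman, Lena]

# Collect the possible guardian counts over all r-subsets of the n stegos.
counts = {len(guardians & set(subset)) for subset in combinations(range(n), r)}
print(sorted(counts))        # [3, 4]: a 5-subset always holds 3 or 4 guardians
```

Only two non-guardians exist, so removing one stego from the full set can remove at most one guardian; the enumeration makes the same point by brute force.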
Therefore, in method 3, although k is the number of type-q progressive parameters, the actual number of recovery levels might be lower than k − 1 if the threshold value r of the image is very close to n.
Next, we discuss the quality determined by the type-q parameters. Assuming that at least r stegos have been received, we can get the encrypted DCT of the image. Then, {q_{1} = 2, q_{2} = 2, q_{3} = 3, q_{4} = 4} means that regions {1, …, i} of the encrypted DCT can be decrypted if i of the received stegos are guardians (i = 2, 3, 4). Therefore, the recovery uses 2 (or 3, or 4) of the four regions of the DCT. For simplicity, assume that uniform partition was applied earlier to partition the DCT into regions. Then, on levels 1, 2, and 3, respectively, about 2/4, 3/4, and 4/4 of the DCT information is utilized to recover the image. On the other hand, for the experiment reported in Table 3 of Section 4.3, whose k = 3 type-q parameters are {q_{1} = 2, q_{2} = 2, q_{3} = 3}, the information used on each level of image reconstruction is, respectively, about 2/3 and 3/3 of the DCT information. Since 2/4 < 2/3 < 3/4, the recovery related to 2/4 should be worse than the recovery related to 2/3, whereas the recovery related to 2/3 should be worse than the recovery related to 3/4. In other words, the low-level recovery of Table 5 should be worse than the low-level recovery of Table 3, whereas the low-level recovery of Table 3 should be worse than the moderate-level recovery of Table 5. Inspecting Tables 3 and 5, we find that the results are indeed as expected.
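The per-level information fractions compared above reduce to a one-line rule: a region j is decryptable once the number of guardian stegos received reaches q_j. The sketch below (Python; `usable_fraction` is our own illustrative name, assuming the uniform partition mentioned in the text) reproduces the 2/4 < 2/3 < 3/4 comparison.

```python
def usable_fraction(q_list, guardians_received):
    """Fraction of uniformly partitioned DCT regions whose keys can be recovered
    when `guardians_received` of the received stegos are guardians."""
    usable = sum(1 for q in q_list if q <= guardians_received)
    return usable / len(q_list)

q_table5 = [2, 2, 3, 4]  # k = 4 keys (Table 5 setup)
q_table3 = [2, 2, 3]     # k = 3 keys (Table 3 setup)

# Low level of Table 5 < low level of Table 3 < moderate level of Table 5.
print(usable_fraction(q_table5, 2),   # 2/4 of the DCT regions
      usable_fraction(q_table3, 2),   # 2/3
      usable_fraction(q_table5, 3))   # 3/4
```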
6 Conclusions
In this paper, we proposed three kinds of progressive sharing methods to deal with multiple images. Method 1 is a basic progressive sharing method that decodes according to the sensitivity level of each secret image group. Method 2 is similar to method 1; however, in method 2, different weights are assigned to different cover image groups. Consequently, in the unveiling of the secret images, some groups of cover images are more useful than others. Method 3 uses a single threshold for each secret image with the same sensitivity level; however, the keys are progressively shared, making method 3 progressive. The increase in the shadow size caused by progressiveness can be neglected in method 3 because the size of the keys is much smaller than the size of the images.
We enhance conventional image sharing methods by providing the following features:

A.
Multiple secret images are divided into several groups with distinct sensitivity levels, and each secret group has its own decoding thresholds for progressiveness. Note that each stego hides some information from ‘all’ groups of secret images. In other words, the information is integrated across the groups. In method 1, all stego JPEG codes have the same importance, as explained below: using ‘any’ r _{ ji } of the n stegos, we can recover every secret image of secret group j; and the recovery quality is of the ith progressive level.

B.
The cover images can also be divided into several groups, as introduced in method 2. Each cover group in method 2 has its own weight, and cover groups with more weight are more powerful (a cover group with weight = 3 is as powerful as three cover groups with weight = 1 each). Consequently, some covers have more influence than others in the revealing of secrets.

C.
In method 3, in addition to using each secret group's own threshold to control its revelations, guardian stegos play an even more dominant role to control the revealing of secrets. Method 3 is also progressive.

D.
We provide progressive decoding for multiple images, and the stegos are in JPEG format. Progressive decoding of multiple secret images is desirable in certain applications, such as image retrieval from sensitive databases in crime investigation units, military departments, and the design team of a company.
By choosing a method that suits his/her purposes from our three proposed methods, a user can distribute his/her secret images among n stegos and obtain the progressive recovery effect that s/he desires.
References
1. A Shamir, How to share a secret. Commun. ACM 22(11), 612–613 (1979)
2. GR Blakley, Safeguarding cryptographic keys, in Proceedings of AFIPS 1979 National Computer Conference, New Jersey, USA, vol. 48, 1979, pp. 313–317
3. SK Chen, JC Lin, Fault-tolerant and progressive transmission of images. Pattern Recogn. 38(12), 2466–2471 (2005)
4. RZ Wang, SJ Shyu, Scalable secret image sharing. Signal Process. Image Commun. 22(4), 363–373 (2007)
5. KH Hung, YJ Chang, JC Lin, Progressive sharing of an image. Opt. Eng. 47(4), 047006 (2008)
6. LST Chen, JC Lin, Multi-threshold progressive image sharing with compact shadows. J. Electron. Imag. 19(1), 013003 (2010)
7. WP Fang, Friendly progressive visual secret sharing. Pattern Recogn. 41(4), 1410–1414 (2008)
8. CC Thien, JC Lin, Secret image sharing. Comput. Graph. 26(5), 765–770 (2002)
9. CC Lin, WH Tsai, Secret image sharing with steganography and authentication. J. Syst. Software 73(3), 405–414 (2004)
10. CC Lin, WH Tsai, Secret image sharing with capability of share data reduction. Opt. Eng. 42(8), 2340–2345 (2003)
11. FM Bui, D Hatzinakos, Biometric methods for secure communications in body sensor networks: resource-efficient key management and signal-level data scrambling. EURASIP J. Adv. Signal Process. 2008, 529879 (2008)
12. SS Lee, SF Hsu, JC Lin, Protection of PDF files: a sharing approach. Int. J. u- and e-Serv. Sci. Technol. 7(2), 27–40 (2014)
13. D Wang, F Yi, On converting secret sharing scheme to visual secret sharing scheme. EURASIP J. Adv. Signal Process. 2010, 782438 (2010)
14. TH Chen, KH Tsao, Threshold visual secret sharing by random grids. J. Syst. Software 84(7), 1197–1208 (2011)
15. AT Boloorchi, MH Samadzadeh, T Chen, Symmetric threshold multipath (STM): an online symmetric key management scheme. Inform. Sci. 268(1), 489–504 (2014)
16. KS Wu, A secret image sharing scheme for light images. EURASIP J. Adv. Signal Process. 2013, 49 (2013)
17. LST Chen, SJ Lin, JC Lin, Reversible JPEG-based hiding method with high hiding-ratio. Int. J. Pattern Recogn. Artif. Intell. 24(3), 433–456 (2010)
18. YY Tsai, DS Tsai, CL Liu, Reversible data hiding scheme based on neighboring pixel differences. Digit. Signal Process. 23(3), 919–927 (2013)
19. S Mukherjee, AR Mahajan, Review on algorithms and techniques of reversible data hiding. Int. J. Res. Comput. Commun. Tech. 3(3), 291–295 (2014)
20. J Lukas, J Fridrich, M Goljan, Digital camera identification from sensor pattern noise. IEEE Trans. Inform. Forensic Secur. 1(2), 205–214 (2006)
21. E Brannock, M Weeks, R Harrison, Watermarking with wavelets: simplicity leads to robustness (Paper presented at IEEE Southeastcon 2008, Huntsville, AL, 2008)
22. E Castillo, L Parrilla, A Garcia, U Meyer-Baese, G Botella, A Lloris, Automated signature insertion in combinational logic patterns for HDL IP core protection (Paper presented at the 4th Southern Conference on Programmable Logic, San Carlos de Bariloche, 2008)
23. M Meenakumari, G Athisha, Improving the protection of FPGA based sequential IP core designs using hierarchical watermarking technique. J. Theor. Appl. Inf. Technol. 63(3), 701–708 (2014)
24. I Orović, M Orlandić, S Stanković, An image watermarking based on the pdf modeling and quantization effects in the wavelet domain. Multimed. Tools Appl. 70(3), 1503–1519 (2014)
25. Z Wang, AC Bovik, HR Sheikh, EP Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)
26. Chi-square steganography test program [http://www.guillermito2.net/stegano/tools/index.html]
27. N Provos, P Honeyman, Hide and seek: an introduction to steganography. IEEE Secur. Priv. 1(3), 32–44 (2003)
Acknowledgements
The work was supported by the National Science Council of the Republic of China under grant NSC 100-2221-E-009-141-MY3. The authors also thank the reviewers for their valuable comments.
Competing interests
The authors declare that they have no competing interests.
Cite this article
Chang, S.-Y., Lee, S.-S., Yeh, T.-M. et al. Progressive sharing of multiple images with sensitivity-controlled decoding. EURASIP J. Adv. Signal Process. 2015, 11 (2015). https://doi.org/10.1186/s13634-015-0196-z